NEURAL CODE COMPLETION

Chang Liu*, Xin Wang*, Richard Shin, Joseph E. Gonzalez, Dawn Song
University of California, Berkeley

ABSTRACT

Code completion is an essential part of modern software development, yet it can be challenging for dynamically typed programming languages. In this paper we explore the use of neural network techniques to learn code completion automatically from a large corpus of dynamically typed JavaScript code. We present several neural networks that leverage not only token-level information but also structural information, and evaluate their performance on different prediction tasks. We demonstrate that our models can outperform the state-of-the-art approach, which is based on decision tree techniques, on both the next non-terminal and the next terminal prediction tasks, by 3.8 points and 0.5 points respectively. We believe that neural network techniques can play a transformative role in helping software developers manage the growing complexity of software systems, and we see this work as a first step in that direction.

1 INTRODUCTION

As the scale and complexity of modern software libraries and tools continue to grow, code completion has become an essential feature in modern integrated development environments (IDEs). By suggesting the right libraries, APIs, and even variables in real time, intelligent code completion engines can substantially accelerate software development. Furthermore, as many projects move to dynamically typed and interpreted languages, effective code completion can help to reduce costly errors by eliminating typos and identifying the right arguments from context.

However, existing approaches to intelligent code completion either rely on strong typing (e.g., Visual Studio for C++), which limits their applicability to widely used dynamically typed languages (e.g., JavaScript and Python), or are based on simple heuristics and term frequency statistics, which are often brittle and relatively error-prone. In particular, Raychev et al. (2016a) propose the state-of-the-art probabilistic model for code, which generalizes both simple \( n \)-gram models and probabilistic grammar approaches. This approach, however, examines only a limited number of elements in the source code when completing the code. Therefore, its effectiveness may not scale well to large programs.

In this paper we explore the use of deep learning techniques to address the challenges of code completion for the widely used and dynamically typed JavaScript programming language. We formulate the code completion problem as a sequential prediction task over the traversal of a parse-tree structure consisting of both non-terminal structural nodes and terminal nodes encoding program text. We then present simple, yet expressive, LSTM-based (Hochreiter & Schmidhuber, 1997) models that leverage additional side information obtained by parsing the program structure. Compared to widely used heuristic techniques, deep learning for code completion offers the opportunity to learn rich contextual models that can capture language- and even library-specific code patterns without requiring complex rules or expert intervention.

We evaluate our recurrent neural network architecture on an established benchmark dataset for JavaScript code completion. Our evaluations reveal several findings: (1) when evaluated on short programs, our RNN-based models achieve better performance on the next node prediction tasks than the prior art
(Bielik et al., 2016; Raychev et al., 2016a), which are based on decision-tree models; (2) our models' prediction accuracies on longer programs, which are included in the test set but were not evaluated by previous work, are better than their accuracies on shorter programs; and (3) in the scenario where the code completion engine suggests a list of candidates, our RNN-based models allow users to choose from a list of 5 candidates rather than typing manually for over 96% of the cases where this is possible. These promising results encourage more investigation into developing neural network approaches for the code completion problem. We believe that our work not only highlights the importance of the field of neural network-based code completion, but is also an important step toward neural network-based program synthesis.

*The first and second authors contributed equally and are listed in alphabetical order.

Figure 1: Code Completion Example in IntelliJ IDEA

Figure 2: Correct prediction of the program in Figure 1

2 RELATED WORK

Existing approaches that build probabilistic models for code can typically be categorized as n-gram models (Hindle et al., 2012; Nguyen et al., 2013; Tu et al., 2014), probabilistic grammars (Collins, 2003; Allamanis & Sutton, 2014; Allamanis et al., 2015; Maddison & Tarlow, 2014; Liang et al., 2010), and log-bilinear models (Allamanis et al., 2015). Bielik et al. (2016) generalize the PCFG approach and the n-gram approach, while Raychev et al. (2016a) further introduce decision tree approaches to generalize Bielik et al. (2016).

Raychev et al. (2014) and White et al. (2015) explore how to use recurrent neural networks (RNNs) to facilitate the code completion task. However, these works only consider running RNNs on top of a token sequence to build a probabilistic model. Although the input sequence considered in Raychev et al. (2014) is produced from an abstract object, the structural information contained in the abstract syntax tree is not directly leveraged by the RNN structure in either of these two works. In contrast, we consider extending LSTM, an RNN architecture, to leverage the structural information directly for the code prediction task.

Recently there has been increasing interest in developing neural networks for program synthesis (Ling et al., 2016; Beltagy & Quirk, 2016; Dong & Lapata, 2016; Chen et al., 2016). These works all consider synthesizing a program based on inputs in other formats, such as images or natural language descriptions.

3 CODE COMPLETION VIA BIG CODE

In this section, we first introduce the problem of code completion and its challenges. Then we explain abstract syntax trees (ASTs), which we use as the input for our problems. Lastly, we formally define the code completion problem in different settings as several prediction problems based on a partial AST.

3.1 CODE COMPLETION: AN EXAMPLE

Code completion is a feature in some integrated development environments (IDEs) to speed up programmers' coding process. Figure 1 demonstrates this feature in IntelliJ IDEA. In this example, a part of a JavaScript program has been input to the IDE. When the dot symbol (i.e., ".") is added after _webpack_require_, the IDE prompts with a list of candidates that the programmer is most likely to input next. When a candidate matches the intention, the programmer can choose it from the list rather than typing it manually. In this work, we define the code completion problem as predicting the next symbol while a program is being written.
We consider this problem as an important first step toward completing an entire program.

Traditional code completion techniques were developed by the programming language community to leverage context information for prediction. For example, when a programmer writes a Java program and inputs a variable name followed by a dot symbol, the code completion engine will analyze the class of the variable and prompt with the members of the class. In the programming language literature, such information is referred to as type information. Statically typed languages, such as C and Java, enforce type checking at compile time, so the code completion engine can take advantage of full type information to make predictions without executing the code.

In recent years, dynamically typed languages, such as Python or JavaScript, have become increasingly popular. In these languages, type checking is usually performed dynamically while executing a program. Thus, type information may be only partially available to the code completion engine while the programmer is writing the code. Despite their popularity, the dynamic typing of these languages makes code completion for them challenging. For example, in Figure 1, the next symbol to be added is p. This symbol does not appear in the previous part of the program, and thus the code completion engine in the IntelliJ IDEA IDE cannot prompt with this symbol. However, this challenge may be remedied by leveraging a large corpus of code, a.k.a. big code. In fact, _webpack_require_.p is a frequently used combination appearing in many programs on Github.com, one of the largest repositories of source code. Therefore, a code completion engine powered by big code is likely to learn this combination and to prompt p. In fact, our methods discussed in later sections can predict this case very well (Figure 2).

3.2 ABSTRACT SYNTAX TREE

Regardless of whether it is dynamically or statically typed, any programming language has an unambiguous context free grammar (CFG), which can be used to parse source code into an abstract syntax tree (AST). Further, an AST can be converted back into source code easily. Therefore we consider the input of our code completion problem to be an AST, which is a typical assumption made by most code completion engines.

An AST is a rooted tree. In an AST, each non-leaf node corresponds to a non-terminal in the CFG specifying structural information. In JavaScript, non-terminals may be ExpressionStatement, ForStatement, IfStatement, SwitchStatement, etc. Each leaf node corresponds to a terminal in the CFG encoding program text. There are infinitely many possible terminals. They can be variable names, string or numerical literals, operators, etc.

Figure 3 illustrates a part of the AST of the code snippet in Figure 1. In this tree, a node without a surrounding box (e.g., ExpressionStatement) denotes a non-terminal node. A node in an orange surrounding box (e.g., installedModules) denotes a terminal node. At the bottom of the figure, there are a non-terminal node Property and a terminal node p. They have not been observed by the editor, so we use green to indicate this fact. Note that each non-terminal has at most one terminal as its child.

In a traditional code completion engine, the AST can be further processed by a type checker so that type information is attached to each node. In this work, however, we focus on dynamically typed languages, for which type information is not always available. Therefore, we do not consider the type information provided by a compiler, and leave it for future work.
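For concreteness, the benchmark dataset we later use (Section 5.1) distributes each AST as a flat JSON array of nodes. The minimal sketch below, with a hand-written tree and illustrative node types, shows how such an array encodes non-terminals, their optional terminal children, and the parent-child structure; treat the exact field names as assumptions rather than a specification.

```python
import json

# A tiny AST in a JS150-style serialization: a JSON array of nodes, where
# each node has a "type" (the non-terminal), an optional "value" (its
# terminal child), and an optional "children" list of indices into the
# array. This hand-written example roughly corresponds to `x = 1;`.
ast = json.loads("""[
  {"type": "ExpressionStatement", "children": [1]},
  {"type": "AssignmentExpression", "children": [2, 3]},
  {"type": "Identifier", "value": "x"},
  {"type": "LiteralNumber", "value": "1"}
]""")

def walk(nodes, idx=0, depth=0):
    """Print each non-terminal and, when present, its terminal child."""
    node = nodes[idx]
    terminal = node.get("value", "EMPTY")  # EMPTY stands in for a missing terminal
    print("  " * depth + f"{node['type']} -> {terminal}")
    for child in node.get("children", []):
        walk(nodes, child, depth + 1)

walk(ast)
```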
3.3 PROBLEM SETUP

In this work, we consider the input to be a partial AST, and the code completion problem is to predict the next node given the partial AST. In the following, we first define a partial AST, and then present the code completion problems in different scenarios.

Input: a partial AST. Given a complete AST \( T \), we define a partial AST to be a subtree \( T' \) of \( T \), such that for each node \( n \) in \( T' \), its left set \( L_T(n) \) with respect to \( T \) is a subset of \( T' \), i.e., \( L_T(n) \subseteq T' \). Here, the left set \( L_T(n) \) of a node \( n \) with respect to \( T \) is defined as the set of all nodes in the in-order sequence of the depth-first search of \( T \) that are visited earlier than \( n \). Under this definition, each partial AST \( T' \) has a right-most node \( n_R \), such that all other nodes in \( T' \) form its left set \( L_T(n_R) \). The next node in the in-order depth-first search visiting sequence after \( n_R \) is also the first node not appearing in \( T' \). We call this node the next node following the partial AST. Figure 4 illustrates these concepts using the example in Figure 3. In the rest of the paper, we also refer to a partial AST as a query; a sketch of how such queries arise from a tree is given at the end of this subsection.

Next node prediction. Given a partial AST, the next node prediction problem, as suggested by its name, is to predict the next node following the partial AST. Based on the node's kind, i.e., whether it is a non-terminal node or a terminal one, we can categorize the problem into the next non-terminal prediction problem and the next terminal prediction problem. Although the next terminal prediction problem may sound more interesting, the next non-terminal prediction problem is also important, since it predicts the structure of the program. For example, when the next non-terminal is ForStatement, the next token in the source program is the keyword for, which does not have a corresponding terminal in the dataset. In this case, a model able to predict the next non-terminal can be used by the code completion engine to emit the keyword for. These two tasks are also the same problems considered by previous works employing domain-specific languages to achieve heuristic-based code completion (Raychev et al. (2016b); Bielik et al. (2016)).

Predicting the next node versus predicting the next token. A natural alternative formulation of the problem is predicting the next token given the token sequence that has been input so far. Such a formulation, however, does not take advantage of the AST information, which is very easy to acquire with a suitable parser. Predicting the next node allows taking advantage of such information to enable more intelligent code completion. In particular, predicting the next non-terminal allows completing the structure of a code block rather than a single (keyword) token. For example, when the next token is the keyword for, the corresponding next non-terminal is ForStatement, which corresponds to the following code block:

for( ___ ; ___ ; ___ ) {
  // for-loop body
}

In this case, successfully predicting the next non-terminal node allows completing not only the next key token for, but also tokens such as (, ;, ), {, and }. Such structure completion enabled by predicting the next non-terminal is more compelling in modern IDEs.
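To make the definitions of a query and its next node concrete, the following sketch enumerates (partial AST, next node) pairs from the left-to-right depth-first visiting order, reusing the illustrative node layout from the sketch in Section 3.2. It is a simplification of the formal definition above, not the exact traversal used later.

```python
# A tiny illustrative AST, in the same layout as the sketch in Section 3.2.
ast = [{"type": "ExpressionStatement", "children": [1]},
       {"type": "AssignmentExpression", "children": [2, 3]},
       {"type": "Identifier", "value": "x"},
       {"type": "LiteralNumber", "value": "1"}]

def dfs_order(nodes, idx=0):
    """Yield node indices in the left-to-right depth-first visiting order."""
    yield idx
    for child in nodes[idx].get("children", []):
        yield from dfs_order(nodes, child)

def queries(nodes):
    """Every prefix of the visiting order is a partial AST (a query); the
    node right after the prefix is its ground-truth next node."""
    order = list(dfs_order(nodes))
    for k in range(1, len(order)):
        yield order[:k], order[k]

for prefix, nxt in queries(ast):
    print(prefix, "->", nxt)
```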
Predicting the next terminal node allows completing identifiers, properties, literals, etc., which is similar to next token prediction. However, predicting the next terminal node can leverage the information of the predicted node's non-terminal parent, which indicates what is being predicted, i.e., an identifier, a property, a literal, etc. For example, when completing the expression

__webpack_require_.

the code completion engine with AST information will predict a property of __webpack_require_, while the engine without AST information only sees the two tokens __webpack_require_ and a dot "." and tries to predict the next token without any constraint. In our evaluation, we show that leveraging the information from the non-terminal parent can significantly improve performance. In this work, we focus on the next node prediction task, and leave the comparison with next token prediction for future work.

Joint prediction. A more ambitious problem than predicting only the next non-terminal or terminal by itself is to predict the next non-terminal and terminal together. We refer to this task as the joint prediction problem. We hope code completion can eventually be used to generate the entire parse tree, and joint prediction is one step further toward this goal than next node prediction. Formally, the joint prediction problem that we consider is: given a partial AST whose following node is a non-terminal one, predict both the next non-terminal and the next terminal. There may be non-terminal nodes which do not have a terminal child (e.g., an AssignmentStatement). In this case, we artificially add an EMPTY terminal as its child. Note that this treatment is the same as in Bielik et al. (2016). We count a prediction as correct only if both the next non-terminal and the next terminal are predicted correctly.

Denying prediction. There are infinitely many possible terminals, so it is impossible to predict all terminals correctly. We consider an alternative scenario in which the code completion engine should have the ability to identify when the programmer is about to input a rare terminal, and in that case deny predicting the next node(s). In our problem, we build a vocabulary of frequent terminals. All terminals not in this vocabulary are treated as an UNK terminal. When the model predicts UNK for the next terminal, it is considered to be denying prediction. Since the non-terminal vocabulary is very small, denying prediction is only considered for next terminal prediction, not for next non-terminal prediction.

4 MODELS

In this section, we present the basic models considered in this work. In particular, given a partial AST as input, we first convert the AST into its left-child right-sibling representation, and serialize it as its in-order depth-first search sequence. Thus, we consider the input for the next non-terminal prediction to be a sequence of length k, i.e., \((N_1, T_1), (N_2, T_2), ..., (N_k, T_k)\). Here, for each \(i\), \(N_i\) is a non-terminal, and \(T_i\) is the terminal child of \(N_i\). For each non-terminal node \(N_i\), we encode not only its kind, but also whether the non-terminal has at least one non-terminal child and/or a right sibling. In doing so, we can reconstruct the original AST from an input sequence. This encoding is also employed by Raychev et al. (2016a). We refer to each element in the sequence (e.g., \((N_i, T_i)\)) as a token. As mentioned above, a non-terminal without a terminal child is considered to have an EMPTY child.
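A minimal sketch of this serialization follows, reusing the illustrative node layout from Section 3.2; it folds the has-child and has-sibling bits into each token as just described. The left-child right-sibling conversion is left implicit in the visiting order, so treat this as an approximation of the exact encoding.

```python
from collections import namedtuple

# Each token pairs a non-terminal kind, annotated with has-child and
# has-right-sibling bits, with its terminal child (or EMPTY).
Token = namedtuple("Token", "kind has_child has_sibling terminal")

def serialize(nodes, idx=0, has_sibling=False):
    """Flatten an AST into the (N_i, T_i) sequence in depth-first order."""
    node = nodes[idx]
    children = node.get("children", [])
    yield Token(node["type"], bool(children), has_sibling,
                node.get("value", "EMPTY"))
    for j, child in enumerate(children):
        yield from serialize(nodes, child, j + 1 < len(children))

ast = [{"type": "ExpressionStatement", "children": [1]},
       {"type": "AssignmentExpression", "children": [2, 3]},
       {"type": "Identifier", "value": "x"},
       {"type": "LiteralNumber", "value": "1"}]
print(list(serialize(ast)))
```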
This input sequence \((N_1, T_1), (N_2, T_2), ..., (N_k, T_k)\) is the only input for all problems except next terminal prediction. For the next terminal prediction problem, besides the input sequence, we also have the parent of the terminal being predicted, which is a non-terminal, i.e., \(N_{k+1}\). Throughout the rest of the discussion, we assume that both \(N_i\) and \(T_i\) use one-hot encodings. The vocabulary sets of non-terminals and terminals are separate.

4.1 NEXT NON-TERMINAL PREDICTION

Given an input sequence, our first model predicts the next non-terminal. The architecture is illustrated in Figure 5. We refer to this model as NT2N, which stands for using the sequence of Non-terminal and Terminal pairs TO predict the next Non-terminal. We first explain each layer of NT2N, and then introduce two variants of this model.

Figure 5: Architecture (NT2N) for predicting the next non-terminal.

Embedding non-terminal and terminal. Given an input sequence, the embedding of each token is computed as
\[ E_i = AN_i + BT_i \]
where \( A \) is a \( J \times V_N \) matrix and \( B \) is a \( J \times V_T \) matrix. Here \( J \) is the size of the embedding vector, and \( V_N \) and \( V_T \) are the vocabulary sizes of non-terminals and terminals respectively.

LSTM layer. The embedded sequence is then fed into an LSTM layer to get the hidden state. In particular, an LSTM cell takes an input token \( x_i \) and a hidden state \( (h_{i-1}, c_{i-1}) \) from the previous LSTM cell as input, computes a hidden state \( (h_i, c_i) \), and outputs \( h_i \), based on the following formulas:
\[
\begin{pmatrix} q \\ f \\ o \\ g \end{pmatrix} = \begin{pmatrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{pmatrix} P_{4J,2J} \begin{pmatrix} x_i \\ h_{i-1} \end{pmatrix}
\]
\[ c_i = f \odot c_{i-1} + q \odot g \]
\[ h_i = o \odot \tanh(c_i) \]
Here, \( P_{4J,2J} \) denotes a \( 4J \times 2J \) parameter matrix (one \( J \times 2J \) block per gate), where \( J \) is the size of the hidden state, i.e., the dimension of \( h_i \), which is equal to the size of the embedding vectors. \( \sigma \) and \( \odot \) denote the sigmoid function and pointwise multiplication respectively.

Softmax layer. Let \( h_k \) be the output hidden state of the last LSTM cell. \( h_k \) is fed into a softmax classifier to predict the next non-terminal. In particular, we have
\[ \hat{N}_{k+1} = \text{softmax}(W_N h_k + b_N) \]
where \( W_N \) and \( b_N \) are a matrix of size \( V_N \times J \) and a \( V_N \)-dimensional vector respectively.

Using only non-terminal inputs. One variant of this model omits all terminal information from the input sequence. In this case, the embedding is computed as \( E_i = AN_i \). We refer to this model as N2N, which stands for using the Non-terminal sequence TO predict the next Non-terminal.

Predicting the next terminal and non-terminal together. Based on NT2N, we can predict not only the next non-terminal but also the next terminal, using
\[ \hat{T}_{k+1} = \text{softmax}(W_T h_k + b_T) \]
where \( W_T \) and \( b_T \) are a matrix of size \( V_T \times J \) and a \( V_T \)-dimensional vector respectively. In this case, the loss function has an extra term to give supervision on predicting \( \hat{T} \). We refer to this model as NT2NT, which stands for *using the sequence of Non-terminal and Terminal pairs TO predict the next Non-terminal and Terminal pair*.
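The following NumPy sketch wires the three layers together as in the equations above, with the four gate blocks stacked into one \( 4J \times 2J \) matrix. Sizes are tiny and weights random, so this is illustrative rather than the trained 1500-unit model of Section 5.2.

```python
import numpy as np

rng = np.random.default_rng(0)
V_N, V_T, J = 7, 11, 16                      # toy vocab sizes; hidden size J

A = rng.normal(scale=0.05, size=(J, V_N))    # non-terminal embedding matrix
B = rng.normal(scale=0.05, size=(J, V_T))    # terminal embedding matrix
P = rng.normal(scale=0.05, size=(4 * J, 2 * J))  # stacked LSTM gate parameters
W_N = rng.normal(scale=0.05, size=(V_N, J))  # softmax weights
b_N = np.zeros(V_N)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One LSTM cell: gates q (input), f (forget), o (output), candidate g."""
    z = P @ np.concatenate([x, h])
    q, f, o = sigmoid(z[:J]), sigmoid(z[J:2*J]), sigmoid(z[2*J:3*J])
    g = np.tanh(z[3*J:])
    c = f * c + q * g
    return o * np.tanh(c), c

def predict_next_nonterminal(pairs):
    """pairs: list of (non-terminal id, terminal id); returns P(N_{k+1})."""
    h, c = np.zeros(J), np.zeros(J)          # h_0, c_0 (trainable in the paper)
    for n_id, t_id in pairs:
        x = A[:, n_id] + B[:, t_id]          # E_i = A N_i + B T_i, one-hot inputs
        h, c = lstm_step(x, h, c)
    logits = W_N @ h + b_N
    e = np.exp(logits - logits.max())
    return e / e.sum()                       # softmax(W_N h_k + b_N)

print(predict_next_nonterminal([(0, 3), (2, 5), (4, 1)]).round(3))
```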
4.2 NEXT TERMINAL PREDICTION

In the next terminal prediction problem, the partial AST contains not only \((N_1, T_1), ..., (N_k, T_k)\) but also \(N_{k+1}\). In this case, we can employ the architecture in Figure 6 to predict \(T_{k+1}\). In particular, we first get the LSTM output \(h_k\) in the same way as in NT2N. The final prediction is based on
\[ \hat{T}_{k+1} = \text{softmax}(W_T h_k + W_{NT} N_{k+1} + b_T) \]
where \(W_{NT}\) is a matrix of size \(V_T \times V_N\), and \(W_T\) and \(b_T\) are the same as in NT2NT. We refer to this model as NTN2T, which stands for *using the Non-terminal and Terminal pair sequence and the next Non-terminal TO predict the next Terminal*.

Note that the model NT2NT can also be used for the next terminal prediction task, although it does not leverage the non-terminal information \(N_{k+1}\). We compare the two approaches later.

4.3 JOINT PREDICTION

We consider two approaches to predicting the next non-terminal and the next terminal together. The first approach is NT2NT, which is designed to predict the two kinds of nodes together. An alternative approach is to (1) use a next non-terminal approach \(X\) to predict the next non-terminal; and (2) feed the predicted non-terminal and the input sequence into NTN2T to predict the next terminal. We refer to such an approach as X+NTN2T.

4.4 DENYING PREDICTION

We say a model *denies prediction* when it predicts the next terminal to be UNK, a special terminal substituted for rare terminals. However, due to the large number of rare terminals, UNK may occur much more often than any single frequent terminal. In this case, a model that can deny prediction may tend to predict UNK, and thus may make predictions for fewer queries than it should. To mitigate this problem, we make the loss function adaptive. Specifically, training a machine learning model \(f_\theta\) optimizes the following objective:
\[ \arg\min_\theta \sum_i l(f_\theta(q_i), y_i) \]
where \(\{(q_i, y_i)\}\) is the training dataset consisting of pairs of a query \(q_i\) and its ground-truth next token \(y_i\), and \(l\) is the loss function measuring the distance between the prediction \(\hat{y}_i = f_\theta(q_i)\) and the ground truth \(y_i\). We choose \(l\) to be the standard cross-entropy loss. We introduce a weight \(\alpha_i\) for each sample \((q_i, y_i)\) in the training dataset to change the objective to the following:
\[ \arg\min_\theta \sum_i \alpha_i l(f_\theta(q_i), y_i) \]
When training a model that is not allowed to deny prediction, we set \( \alpha_i = 0 \) for \( y_i = \text{UNK} \), and \( \alpha_i = 1 \) otherwise. Doing so is equivalent to removing all queries whose ground-truth next token is UNK. When training a model that is allowed to deny prediction, we set all \( \alpha_i \) to 1. To denote this case, we append "+D" to the model name (e.g., NT2NT+D).

5 EVALUATION

5.1 DATASET

We use the JavaScript dataset[2] provided by Raychev et al. (2016b) to evaluate the different approaches. The statistics of the dataset can be found in Table 1.

<table>
<tr> <th colspan="2">Training set</th> <th colspan="2">Test set</th> <th colspan="2">Distinct kinds</th> </tr>
<tr> <th>Programs</th> <td>100,000</td> <th>Programs</th> <td>50,000</td> <th>Non-terminals</th> <td>44</td> </tr>
<tr> <th>Queries</th> <td>\(1.7 \times 10^8\)</td> <th>Queries</th> <td>\(8.3 \times 10^7\)</td> <th>Terminals</th> <td>\(3.1 \times 10^6\)</td> </tr>
</table>

Table 1: Statistics of the dataset
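As a rough sketch of how this data can be consumed, we assume here (without relying on it elsewhere) that each line of the training file holds one JSON-serialized AST in the node-array layout of Section 3.2; the file name is a placeholder, not a documented path.

```python
import json

# Hypothetical loader: one JSON-serialized AST (a list of nodes) per line.
def load_programs(path, limit=None):
    programs = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if limit is not None and i >= limit:
                break
            line = line.strip()
            if line:
                programs.append(json.loads(line))
    return programs

train = load_programs("programs_training.json", limit=100)  # placeholder path
print(len(train), "programs,", sum(len(p) for p in train), "nodes")
```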
Raychev et al. (2016a) provide an approach, building on PHOG (Bielik et al., 2016), for next token prediction. Their reported accuracy results are based on a subset of \( 5.3 \times 10^7 \) queries from the full test set: specifically, all queries in programs containing fewer than 30,000 tokens[7]. When we compare with their results, we use the same test set; otherwise, unless specified, our results are based on the full test set consisting of \( 8.3 \times 10^7 \) queries.

[2] http://www.srl.inf.ethz.ch/js150
[7] This detail was not explained in the paper. We contacted the authors to confirm it.

5.2 TRAINING DETAILS

Vocabulary. In our dataset, there are 44 different kinds of non-terminals. Adding two more bits of information to indicate whether the non-terminal has a child and/or a right sibling yields at most 176 different non-terminals. However, not all such combinations are possible: a ForStatement must have a child, for instance. In total, the vocabulary size for non-terminals is 97. For terminals, we sort all terminals in the training set by their frequencies, and choose the 50,000 most frequent terminals to build the vocabulary. We further add three special terminals: UNK for out-of-vocabulary terminals, EOF indicating the end of a program, and EMPTY for non-terminals that do not have a terminal child. Note that about 45% of the terminals in the dataset are EMPTY terminals.

Training details. We use a single-layer LSTM network with hidden unit size 1500 as our base model. To train the model, we use Adam (Kingma & Ba, 2014) with base learning rate 0.001. The learning rate is multiplied by 0.9 every 0.2 epochs. We clip the gradients' norm to 5. The batch size is \( b = 80 \). We use truncated backpropagation through time, unrolling the LSTM model \( s = 50 \) times to take an input sequence of length 50 in each batch (so each batch contains \( b \times s = 4000 \) tokens). We divide each program into segments of \( s \) consecutive tokens. The last segment of a program, which may not be full, is padded with EOF tokens.

We coalesce multiple epochs together. We organize all training data into \( b \) buckets. In each epoch, we randomly shuffle all programs in the training data to construct a queue. Whenever a bucket is empty, a program is popped from the queue and all of its segments are inserted into the empty bucket sequentially. When the queue becomes empty, i.e., the current epoch finishes, all programs are re-shuffled randomly to reconstruct the queue. Each mini-batch is formed from \( b \) segments, one popped from each bucket. After the training data has been shuffled \( e = 8 \) times, i.e., \( e \) epochs have been inserted into the buckets, we stop adding whole programs and start adding only the first segment of each program: when a bucket is empty, a program is chosen randomly, and its first segment is added to the bucket. We terminate the training process when all buckets are empty at the same time; that is, once all programs from the first 8 epochs have been trained. This process is illustrated in Figure 7.

Figure 7: Training epoch illustration
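A compact sketch of this bucketing scheme follows, with the final first-segments-only phase elided and the \( e \) epochs coalesced up front. Keeping a program's segments consecutive within one bucket is what allows the LSTM state to be carried across segment boundaries, as described next.

```python
import random

def batches(programs, b=4, s=50, e=8, pad="EOF"):
    """Yield mini-batches of b segments, one popped from each of b buckets."""
    def segments(prog):
        prog = prog + [pad] * (-len(prog) % s)   # pad the last, partial segment
        return [prog[i:i + s] for i in range(0, len(prog), s)]

    buckets = [[] for _ in range(b)]
    queue = []
    for _ in range(e):                            # coalesce e shuffled epochs
        queue += random.sample(programs, len(programs))
    while True:
        for bucket in buckets:
            while not bucket and queue:           # refill an empty bucket with
                bucket.extend(segments(queue.pop()))  # a whole program's segments
        if not all(buckets):                      # a bucket starved: stop
            return
        yield [bucket.pop(0) for bucket in buckets]

demo = [list("program-one-tokens"), list("p2"), list("third-program")]
for batch in batches(demo, b=2, s=5, e=1):
    print([''.join(seg) for seg in batch])
```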
The hidden states are initialized with \( h_0, c_0 \), which are two trainable vectors. The hidden states of the LSTM from the previous segment are fed into the next one as input if both segments belong to the same program; otherwise, the hidden states are reset to \( h_0, c_0 \). We observe that resetting the hidden states for every new program improves performance considerably. We initialize all parameters in \( h_0, c_0 \) to 0. All other parameters are initialized with values sampled uniformly at random from \([-0.05, 0.05]\).

For each model, we train 5 sets of parameters using different random initializations, and we evaluate the ensemble of the 5 models obtained by averaging their 5 softmax outputs. In our evaluation, we find that the ensemble improves the accuracy by 1 to 3 points in general.

5.3 NEXT NODE PREDICTION

In this section, we present the results of our models on next node prediction, and compare them with their counterparts in Bielik et al. (2016) and Raychev et al. (2016a), the state of the art on these tasks. We therefore use the same test set consisting of \( 5.3 \times 10^7 \) queries as in these works. In the following, we first report results on next non-terminal prediction and next terminal prediction, and then evaluate our models' performance on programs of different lengths.

Next non-terminal prediction. The results are presented in Table 2.

<table>
<tr> <th>Categories</th> <th>Previous work<br>Raychev et al. (2016a)</th> <th>N2N</th> <th>NT2N</th> <th>NT2NT</th> </tr>
<tr> <td>One model accuracy</td> <td rowspan="2">83.9%</td> <td>79.4 ± 0.2%</td> <td>84.8 ± 0.1%</td> <td>84.0 ± 0.1%</td> </tr>
<tr> <td>Ensemble accuracy</td> <td>82.3%</td> <td>87.7%</td> <td>86.2%</td> </tr>
</table>

Table 2: Next non-terminal prediction results

From the table, we can observe that both NT2N and NT2NT outperform Raychev et al. (2016a). In particular, an ensemble of 5 NT2N models improves on Raychev et al. (2016a) by 3.8 percentage points. We also report the average accuracies of the 5 single models and the variance among them. We observe that the variance is very small, i.e., 0.1%-0.2%, which indicates that the trained models' accuracies are robust to random initialization. Among the neural network approaches, NT2NT's performance is lower than NT2N's, even though the former is provided with more supervision. This suggests that, given the limited capacity of the model, it may learn to trade off non-terminal prediction performance in favor of the terminal prediction task it additionally needs to perform.
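The ensembling used throughout this section is plain averaging of the models' softmax outputs, as described in Section 5.2; a minimal sketch (the vocabulary size 97 matches the non-terminal vocabulary, the dummy probabilities do not):

```python
import numpy as np

# Average the softmax outputs of independently trained models, then take
# the top-k candidates (k = 1 for accuracy, k = 5 for the popup list).
def ensemble_top_k(prob_list, k=5):
    avg = np.mean(prob_list, axis=0)
    return np.argsort(avg)[::-1][:k]

# Five dummy "softmax outputs" over the 97 non-terminal classes.
probs = [np.random.default_rng(i).dirichlet(np.ones(97)) for i in range(5)]
print(ensemble_top_k(probs))
```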
Next terminal prediction. The results are presented in Table 3.

<table>
<tr> <th rowspan="2">Categories</th> <th>Previous work</th> <th colspan="2">Our models</th> </tr>
<tr> <th>Raychev et al. (2016a)</th> <th>NT2NT</th> <th>NTN2T</th> </tr>
<tr> <td>One model accuracy</td> <td rowspan="2">82.9%</td> <td>76.6 ± 0.1%</td> <td>81.9 ± 0.1%</td> </tr>
<tr> <td>Ensemble accuracy</td> <td>78.6%</td> <td>83.4%</td> </tr>
</table>

Table 3: Next terminal prediction results

We observe that an ensemble of 5 NTN2T models outperforms Raychev et al. (2016a) by 0.5 points. Without the ensemble, its accuracies are around 82.1%, i.e., 0.8 points less than Raychev et al. (2016a). For the 5 single models, we again observe that the variance in their accuracies is very small, i.e., 0.1%. On the other hand, we observe that NT2NT performs much worse than NTN2T, i.e., by 4.8 percentage points. This shows that leveraging the additional information about the parent non-terminal of the terminal being predicted improves performance significantly.

Prediction accuracies on programs of different lengths. We examine our models' performance over different subsets of the test set. In particular, we consider the queries in programs containing fewer than 30,000 tokens, the same subset as used in Bielik et al. (2016) and Raychev et al. (2016a), as well as the remaining queries, from programs with at least 30,000 tokens. The results are presented in Table 4.

<table>
<tr> <th rowspan="2">Top 1 accuracy</th> <th colspan="3">Non-terminal</th> <th colspan="2">Terminal</th> </tr>
<tr> <th>N2N</th> <th>NT2N</th> <th>NT2NT</th> <th>NTN2T</th> <th>NT2NT</th> </tr>
<tr> <td>Short programs (< 30,000 tokens)</td> <td>82.3%</td> <td>87.7%</td> <td>86.2%</td> <td>83.4%</td> <td>78.6%</td> </tr>
<tr> <td>Long programs (≥ 30,000 tokens)</td> <td>87.7%</td> <td>94.4%</td> <td>92.7%</td> <td>89.0%</td> <td>85.8%</td> </tr>
<tr> <td>Overall</td> <td>84.2%</td> <td>90.1%</td> <td>88.5%</td> <td>85.4%</td> <td>81.2%</td> </tr>
<tr> <th colspan="6">Top 5 accuracy</th> </tr>
<tr> <td>Short programs (< 30,000 tokens)</td> <td>97.9%</td> <td>98.9%</td> <td>98.7%</td> <td>87.9%</td> <td>86.4%</td> </tr>
<tr> <td>Long programs (≥ 30,000 tokens)</td> <td>98.8%</td> <td>99.6%</td> <td>99.4%</td> <td>91.5%</td> <td>90.5%</td> </tr>
<tr> <td>Overall</td> <td>98.2%</td> <td>99.1%</td> <td>98.9%</td> <td>89.2%</td> <td>87.8%</td> </tr>
</table>

Table 4: Next token prediction on programs of different lengths

We can observe that for both non-terminal and terminal prediction, accuracies on longer programs are higher than on shorter programs. This suggests that an LSTM-based model may become more accurate as it observes more code input by the programmer.

We also report top 5 prediction accuracy, which improves upon top 1 accuracy dramatically. This metric corresponds to the code completion scenario in which an IDE pops up a short list of (e.g., 5) candidates for users to choose from. In particular, NT2N achieves 99.1% top 5 accuracy on the non-terminal prediction task, and NTN2T achieves 89.2% top 5 accuracy on the terminal prediction task. In the test set, 7.4% of the queries have ground truth UNK, i.e., outside the top 50,000 most frequent terminals. This means that NTN2T can predict over \( 89.2\% / (100\% - 7.4\%) \approx 96.3\% \) of all tokens whose ground truth is not UNK.
In other words, if completion is restricted to the top 50,000 most frequent terminals, users can pick the intended token from the popup list, rather than typing it manually, over 96% of the time that completion is possible.

The effect of different UNK thresholds. We evaluate how the choice of the threshold used to map rare terminals to UNK affects accuracy. We randomly sample 1/10 of the training dataset and of the test dataset, and vary the UNK threshold from 10,000 to 80,000. We plot the percentage of UNK terminals in both the full test set and its sampled subset in Figure 8. We observe that the distributions of UNK terminals are almost the same in both sets. Further, when the threshold is 10,000, i.e., all terminals outside the top 10,000 most frequent ones are turned into UNK, more than 11% of the queries in the test set are UNK queries (i.e., queries with ground truth UNK). When the threshold increases to 50,000 or more, this number drops to 7%-6%, and the percentage of UNK queries varies little as the threshold moves from 50,000 to 80,000.

Figure 8: Percentage of UNK tokens in the entire test data and the sampled subset of the test data, varying the UNK threshold from 10,000 to 80,000.

Figure 9: Accuracies of different models trained over the sampled subset of training data, varying the UNK threshold from 10,000 to 80,000.

We train one NTN2T model for each threshold, and evaluate it on the sampled test set. The accuracies of the different models are plotted in Figure 9. The trend of the models' accuracies is similar to the trend of the percentage of non-UNK tokens in the test set. This is expected, since as the threshold increases the model has more chances to make correct predictions on queries that would otherwise be UNK queries. However, this is not always the case. For example, the accuracies of the models trained with thresholds 30,000 and 40,000 are almost the same, differing by only 0.02%; we make similar observations among the models trained with thresholds 60,000, 70,000, and 80,000. Recall from above that when we train 5 models with different random initializations, the variance of their accuracies is within 0.1%. Therefore, we conclude that increasing the UNK threshold from 30,000 to 40,000 or from 60,000 to 80,000 does not change the accuracy significantly. One potential explanation is that, while a larger UNK threshold gives the model more chances to predict otherwise-UNK terminals, it also makes mistakes more likely, since the model must choose the next terminal from more candidates.

5.4 JOINT PREDICTION

In this section, we evaluate different approaches to predicting the next non-terminal and terminal together, i.e., the joint prediction task. NT2NT is designed for this task. Alternative approaches predict the next non-terminal first, and then predict the next terminal based on the predicted non-terminal. We choose NTN2T as the second step to predict the next terminal, and we examine two different approaches for the first step of predicting the next non-terminal: N2N and NT2N. We therefore compare three methods in total. The top 1 accuracy results are presented in Table 5.

<table>
<tr> <th></th> <th>NT2NT</th> <th>N2N+NTN2T</th> <th>NT2N+NTN2T</th> </tr>
<tr> <th>Top 1 accuracy</th> <td>73.9%</td> <td>72.0%</td> <td>77.7%</td> </tr>
</table>

Table 5: Predicting the next non-terminal and terminal together
N2N+NTN2T is less effective than NT2N+NTN2T, as expected: in the first step of predicting the non-terminal, N2N is less effective than NT2N, as shown in Table 4. NT2NT's performance is better than N2N+NTN2T's, but worse than NT2N+NTN2T's. We observe that for all three combinations,
\[ \Pr(\hat{T}_{k+1} = T_{k+1} \land \hat{N}_{k+1} = N_{k+1}) > \Pr(\hat{T}_{k+1} = T_{k+1}) \Pr(\hat{N}_{k+1} = N_{k+1}) \]
This indicates that the events of the next non-terminal and the next terminal being predicted correctly are not independent, but rather positively correlated. This is the case even for NT2NT, which predicts the next non-terminal and the next terminal independently conditioned on the LSTM hidden states.

5.5 DENYING PREDICTION

We compare the models which do not deny prediction (i.e., NT2NT and NTN2T) with those which do (i.e., NT2NT+D and NTN2T+D). The results are presented in Table 6. For reference, 7.42% of the queries in the test set are UNK queries.

<table>
<tr> <th></th> <th>NT2NT</th> <th>NT2NT+D</th> <th>NTN2T</th> <th>NTN2T+D</th> </tr>
<tr> <td>Overall accuracy</td> <td>81.2%</td> <td>85.1%</td> <td>85.4%</td> <td>89.9%</td> </tr>
<tr> <td>Accuracy on non-UNK terminals</td> <td>87.6%</td> <td>87.5%</td> <td>92.2%</td> <td>91.8%</td> </tr>
<tr> <td>Deny prediction rate</td> <td>0%</td> <td>5.2%</td> <td>0%</td> <td>6.1%</td> </tr>
</table>

Table 6: Denying prediction results. **Overall accuracy** (top 1) is computed as the percentage of all queries (including those whose ground truth is UNK) that are predicted correctly, i.e., the prediction matches the ground truth even when the ground truth is UNK. **Accuracy on non-UNK terminals** measures the accuracy of each model on all non-UNK queries. **Deny prediction rate** is the percentage of all queries for which a model denies prediction, i.e., predicts UNK. **Prediction accuracy** is the top 1 accuracy over those queries for which a model does not deny prediction.

We can observe that the deny prediction models (i.e., the +D models) have higher overall accuracies than the corresponding original models. This is expected: the +D models are allowed to predict UNK terminals, so while NT2NT and NTN2T fail on all UNK queries, the +D models succeed on most of them. We further evaluate the accuracy on non-UNK terminals. One might expect that, since a +D model may prefer to predict UNK, a standard model should have a higher accuracy on non-UNK terminals than its deny prediction counterpart. The results show that this is indeed the case, but the margin is very small, i.e., 0.1% for NT2NT and 0.3% for NTN2T. This means that allowing a model to deny prediction does not necessarily sacrifice its ability to predict non-UNK terminals.

We are also interested in how frequently a +D model denies prediction. We observe that NTN2T+D denies prediction for only 6.1% of all queries, which is even less than the percentage of UNK queries (i.e., 7.42%). This shows that although we allow the model to deny prediction, it is conservative in exercising this ability, which partially explains why NTN2T+D's accuracy on non-UNK terminals is not much lower than NTN2T's.
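For reference, here is a minimal sketch of the \( \alpha \)-weighted objective from Section 4.4, with dummy uniform predictions; the UNK index and sizes are illustrative.

```python
import numpy as np

# Per-query weights alpha_i scale the cross-entropy loss; alpha_i = 0
# silences UNK queries for models that are not allowed to deny prediction.
UNK = 0  # assumed index of the UNK terminal in the vocabulary

def weighted_xent(probs, targets, alpha_unk):
    """probs: (n, V) softmax outputs; targets: (n,) ground-truth indices."""
    alphas = np.where(targets == UNK, alpha_unk, 1.0)
    nll = -np.log(probs[np.arange(len(targets)), targets] + 1e-12)
    return np.sum(alphas * nll)

probs = np.full((4, 5), 0.2)                 # uniform dummy predictions, V = 5
targets = np.array([0, 3, 0, 2])             # two UNK queries among four
print(weighted_xent(probs, targets, 0.0))    # standard model: UNKs ignored
print(weighted_xent(probs, targets, 1.0))    # +D model: UNKs contribute fully
```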
**Effectiveness of the value of \( \alpha \).** We are interested in how the hyperparameter \( \alpha \) in a +D model affects its accuracy. We train 11 different NTN2T+D models on the 1/10 subset of the training set used above to examine UNK thresholds, varying the UNK weight \( \alpha \) from 0.0 to 1.0. Notice that with \( \alpha = 0.0 \), the model becomes a standard NTN2T model. We plot both the overall accuracies and the accuracies on non-UNK terminals in Figure 10.

Figure 10: Overall accuracies and accuracies on non-UNK terminals, varying \( \alpha \).

We observe the same effects as above: (1) the overall accuracy for \( \alpha = 1 \) is 6% higher than for \( \alpha = 0 \); and (2) the accuracy on non-UNK terminals for \( \alpha = 1 \) is lower than for \( \alpha = 0 \), but the margin is not large (i.e., less than 1%). When we increase \( \alpha \) from 0 to 0.3, the overall accuracy increases steeply; increasing \( \alpha \) further, the overall accuracy levels off. The same holds for the accuracy on non-UNK terminals. This experiment shows that setting \( \alpha \) trades off overall accuracy against accuracy on non-UNK terminals, and the right choice of \( \alpha \) depends on the application.

5.6 RUNTIME

We evaluate our models' runtime performance. Our models are implemented in TensorFlow (Abadi et al., 2016). We evaluate them on a machine equipped with 16 Intel Xeon CPUs, 16 GB of RAM, and a single Tesla K80 GPU. All queries from the same program are processed incrementally: given two queries \( A \) and \( B \), if \( A \) has one more node than \( B \), then the LSTM outputs for \( B \) are reused for processing \( A \), so that only the additional node in \( A \) needs to be processed. Note that this is consistent with the practice where programs are written incrementally from beginning to end.

For each model, we feed one query at a time into the model. There are 3,939 queries in total, coming from randomly chosen programs. We measure the overall response latency for each query, and observe that the response time is consistent across all queries. On average, each model takes around 16 milliseconds to respond to a query on the GPU, and around 33 milliseconds on the CPU. Note that these numbers come from a proof-of-concept implementation whose code we have not optimized. Considering that a human being usually does not type a token within 30 milliseconds, we conclude that our approach is efficient enough for potential practical use. We emphasize that these numbers do not directly correspond to the runtime latency when the techniques are deployed in a code completion engine, since changes to the AST serialization may not be sequential while users are programming incrementally. This analysis, however, provides evidence of the feasibility of building our approach into a full-fledged code completion engine.

6 CONCLUSION

In this paper we introduce, motivate, and formalize the problem of automatic code completion. We describe LSTM-based approaches that capture the parsing structure readily available in the code completion task. We introduce a simple LSTM architecture to model program context, and then explore several variants of this basic architecture for different variants of the code completion problem. We evaluate our techniques on a challenging JavaScript code completion benchmark and compare against the state-of-the-art code completion approach.
We demonstrate that deep learning techniques can achieve better prediction accuracy by learning program patterns from big code. In addition, we find that our models perform better on longer programs than on shorter ones, and that when the code completion engine can pop up a list of candidates, our approach allows users to choose from the list instead of typing the token manually in over 96% of the cases where this is possible. We also evaluate our approaches' runtime performance and demonstrate that deep code completion has the potential to run in real time as users type. We believe that deep learning techniques can play a transformative role in helping software developers manage the growing complexity of software systems, and we see this work as a first step in that direction.

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their valuable comments. This material is based upon work partially supported by the National Science Foundation under Grant No. TWC-1409915, and a DARPA grant FA8750-15-2-0104. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation and DARPA.

REFERENCES

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Miltiadis Allamanis and Charles Sutton. Mining idioms from source code. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pp. 472-483. ACM, 2014.

Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles Sutton. Suggesting accurate method and class names. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, pp. 38-49. ACM, 2015.

I. Beltagy and Chris Quirk. Improved semantic parsers for if-then statements. In ACL, 2016.

Pavol Bielik, Veselin Raychev, and Martin Vechev. PHOG: Probabilistic model for code. In ICML, 2016.

Xinyun Chen, Chang Liu, Richard Shin, Dawn Song, and Mingcheng Chen. Latent attention for if-then program synthesis. In NIPS, 2016.

Michael Collins. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589-637, 2003.

Li Dong and Mirella Lapata. Language to logical form with neural attention. In ACL, 2016.

Abram Hindle, Earl T. Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. On the naturalness of software. In 2012 34th International Conference on Software Engineering (ICSE), pp. 837-847. IEEE, 2012.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980

Percy Liang, Michael I. Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian approach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 639-646, 2010.

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomás Kociský, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent predictor networks for code generation. CoRR, abs/1603.06744, 2016. URL http://arxiv.org/abs/1603.06744

Chris J. Maddison and Daniel Tarlow. Structured generative models of natural source code. In ICML, 2014.

Tung Thanh Nguyen, Anh Tuan Nguyen, Hoan Anh Nguyen, and Tien N. Nguyen.
A statistical semantic language model for source code. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, pp. 532-542. ACM, 2013.

Veselin Raychev, Martin Vechev, and Eran Yahav. Code completion with statistical language models. In ACM SIGPLAN Notices, volume 49, pp. 419-428. ACM, 2014.

Veselin Raychev, Pavol Bielik, and Martin Vechev. Probabilistic model for code with decision trees. In Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, pp. 731-747. ACM, 2016a.

Veselin Raychev, Pavol Bielik, Martin Vechev, and Andreas Krause. Learning programs from noisy data. In POPL, 2016b.

Zhaopeng Tu, Zhendong Su, and Premkumar Devanbu. On the localness of software. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pp. 269-280. ACM, 2014.

Martin White, Christopher Vendome, Mario Linares-Vásquez, and Denys Poshyvanyk. Toward deep learning software repositories. In 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories, 2015.
ABSTRACT Code completion, an essential part of modern software development, yet can be challenging for dynamically typed programming languages. In this paper we explore the use of neural network techniques to automatically learn code completion from a large corpus of dynamically typed JavaScript code. We show different neural networks that leverage not only token level information but also structural information, and evaluate their performance on different prediction tasks. We demonstrate that our models can outperform the state-of-the-art approach, which is based on decision tree techniques, on both next non-terminal and next terminal prediction tasks by 3.8 points and 0.5 points respectively. We believe that neural network techniques can play a transformative role in helping software developers manage the growing complexity of software systems, and we see this work as a first step in that direction. 1 INTRODUCTION As the scale and complexity of modern software libraries and tools continue to grow, code completion has become an essential feature in modern integrated development environments (IDEs). By suggesting the right libraries, APIs, and even variables in real-time, intelligent code completion engines can substantially accelerate software development. Furthermore, as many projects move to dynamically typed and interpreted languages, effective code completion can help to reduce costly errors by eliminating typos and identifying the right arguments from context. However, existing approaches to intelligent code completion either rely on strong typing (e.g., Visual Studio for C++), which limits their applicability to widely used dynamically typed languages (e.g., JavaScript and Python), or are based on simple heuristics and term frequency statistics which are often brittle and are relatively error-prone. In particular, Raychev et al. (2016) proposes the state-of-the-art probabilistic model for code, which generalizes both simple \( n \)-gram models and probabilistic grammar approaches. This approach, however, examines only a limited number of elements in the source code when completing the code. Therefore, the effectiveness of this approach may not scale well to large programs. In this paper we explore the use of deep learning techniques to address the challenges of code completion for the widely used and dynamically typed JavaScript programming language. We formulate the code completion problem as a sequential prediction task over the traversal of a parse-tree structure consisting of both non-terminal structural nodes and terminal nodes encoding program text. We then present simple, yet expressive, LSTM-based (Hochreiter & Schmidhuber (1997)) models that leverage additional side information obtained by parsing the program structure. Compared to widely used heuristic techniques, deep learning for code completion offers the opportunity to learn rich contextual models that can capture language and even library specific code patterns without requiring complex rules or expert intervention. We evaluate our recurrent neural network architecture on an established benchmark dataset for the JavaScript code completion. Our evaluations reveal several findings: (1) when evaluated on short programs, our RNN-based models can achieve better performance on the next node prediction tasks compared to the prior art (Bielik et al. (2016); Raychev et al. 
(2016)), which are based on decision-tree models; (2) our models’ prediction accuracies on longer programs, which is provided in the test set, but were not evaluated upon by previous work, are better than our models’ accuracies on shorter *The first and second authors contributed equally and are listed in an alphabetical order. Figure 1: Code Completion Example in IntelliJ IDEA Figure 2: Correct prediction of the program in Figure 1 programs; and (3) in the scenario that the code completion engine suggests a list of candidates, our RNN-based models allow users to choose from a list of 5 candidates rather than inputting manually for over 96% of all time when this is possible. These promising results encourage more investigation into developing neural network approaches for the code completion problem. We believe that our work not only highlights the importance of the field of neural network-based code completion, but is also an important step toward neural network-based program synthesis. 2 RELATED WORK Existing approaches that build probabilistic models for code can typically be categorized as n-gram models (Hindle et al., 2012; Nguyen et al., 2013; Tu et al., 2014), probabilistic grammars (Collins, 2003; Allamanis & Sutton, 2014; Allamanis et al., 2015; Maddison & Tarlow, 2014; Liang et al., 2010), and log-bilinear models (Allamanis et al., 2015). Bielik et al. (2016) generalizes the PCFG approach and n-gram approach, while Raychev et al. (2016a) further introduces decision tree approaches to generalize Bielik et al. (2016). Raychev et al. (2014) and White et al. (2015) explore how to use recurrent neural networks (RNNs) to facilitate the code completion task. However, these works only consider running RNNs on top of a token sequence to build a probabilistic model. Although the input sequence considered in Raychev et al. (2014) is produced from an abstract object, the structural information contained in the abstract syntax tree is not directly leveraged by the RNN structure in both of these two works. In contrast, we consider extending LSTM, a RNN structure, to leverage the structural information directly for the code prediction task. Recently there has been an increasing interest in developing neural networks for program synthesis (Ling et al., 2016; Beltagy & Quirk (2016); Dong & Lapata (2016); Chen et al., (2016)). These works all consider synthesizing a program based on inputs in other formats such as images or natural language descriptions. 3 CODE COMPLETION VIA BIG CODE In this section, we first introduce the problem of code completion and its challenges. Then we explain abstract syntax trees (AST), which we use as the input for our problems. Lastly, we formally define the code completion problem in different settings as several prediction problems based on a partial AST. 3.1 CODE COMPLETION: AN EXAMPLE Code completion is a feature in some integrated development environments (IDEs) to speed up programmers’ coding process. Figure 1 demonstrates this feature in IntelliJ IDEA. In this example, a part of a JavaScript program has been input to the IDE. When the dot symbol (i.e., “.”) is added after _webpack_require_, the IDE prompts with a list of candidates that the programmer is most likely to input next. When a candidate matches the intention, the programmer can choose it from the list rather than typing it manually. In this work, we define the code completion problem as predicting the next symbol while a program is being written. 
We consider this problem as an important first step toward completing an entire program. Traditional code completion techniques are developed by the programming language community to leverage context information for prediction. For example, when a programmer writes a Java program and inputs a variable name and then a dot symbol, the code completion engine will analyze the class of the variable and prompt the members of the class. In programming language literature, such information is referred to as type information. Statically typed languages, such as C and Java, enforces type checking at static time, so that the code completion engine can take advantage of full type information to make prediction without executing the code. In recent years, dynamically typed languages, such as Python or JavaScript, have become increasingly popular. In these languages, type checking is usually performed dynamically while executing a program. Thus, type information may be only partially available to the code completion engine while the programmer is writing the code. Despite their popularity, the dynamic typing of these languages makes code completion for them challenging. For example, in Figure 1, the next symbol to be added is p. This symbol does not appear in the previous part of the program, and thus the code completion engine in IntelliJ IDEA IDE cannot prompt with this symbol. However, this challenge may be remedied by leveraging a large corpus of code, a.k.a., big code. In fact, _webpack_require_.p is a frequently used combination appearing in many programs on Github.com, one of the largest repositories of source code. Therefore, a code completion engine powered by big code is likely to learn this combination and to prompt p. In fact, our methods discussed in later sections can predict this case very well (Figure 2). 3.2 ABSTRACT SYNTAX TREE Regardless of whether it is dynamically typed or statically typed, any programming language has an unambiguous context free grammar (CFG), which can be used to parse source code into an abstract syntax tree (AST). Further, an AST can be converted back into source code easily. Therefore we consider the input of our code completion problem as an AST, which is a typical assumption made by most code completion engines. An AST is a rooted tree. In an AST, each non-leaf node corresponds to a non-terminal in the CFG specifying structure information. In JavaScript, non-terminals may be ExpressionStatement, ForStatement, IfStatement, SwitchStatement, etc. Each leaf node corresponds to a terminal in the CFG encoding program text. There are infinite possibilities for terminals. They can be variable names, string or numerical literals, operators, etc. Figure 3 illustrates a part of the AST of the code snippet in Figure 1. In this tree, a node without a surrounding box (e.g., ExpressionStatement, etc.) denotes a non-terminal node. A node embraced by an orange surrounding box (e.g., installedModules) denotes a terminal node. At the bottom of the figure, there is a non-terminal node Property and a terminal node p. They have not been observed by the editor, so we use green to indicate this fact. Note that each non-terminal has at most one terminal as its child. In a traditional code completion engine, the AST can be further processed by a type checker so that type information will be attached to each node. In this work, however, we focus on dynamically typed languages, and type information is not always available. 
Therefore, we do not consider the type information provided by a compiler, and leave it for future work.

3.3 PROBLEM SETUP

In this work, we consider the input to be a partial AST, and the code completion problem is to predict the next node given the partial AST. In the following, we first define a partial AST, and then present the code completion problems in different scenarios.

Input: a partial AST. Given a complete AST \( T \), we define a partial AST to be a subtree \( T' \) of \( T \) such that for each node \( n \) in \( T' \), its left set \( L_T(n) \) with respect to \( T \) is a subset of \( T' \), i.e., \( L_T(n) \subseteq T' \). Here, the left set \( L_T(n) \) of a node \( n \) with respect to \( T \) is defined as the set of all nodes in the in-order depth-first search sequence of \( T \) that are visited earlier than \( n \). Under this definition, in each partial AST \( T' \) there exists a right-most node \( n_R \) such that all other nodes in \( T' \) form its left set \( L_T(n_R) \). The next node in the in-order depth-first search sequence after \( n_R \) is also the first node not appearing in \( T' \). We call this node the next node following the partial AST. Figure 4 illustrates these concepts using the example in Figure 3. In the rest of the paper, we also refer to a partial AST as a query.

Next node prediction. Given a partial AST, the next node prediction problem, as suggested by its name, is to predict the next node following the partial AST. Based on the node's kind, i.e., whether it is a non-terminal node or a terminal one, we can categorize the problem into the next non-terminal prediction problem and the next terminal prediction problem. Although the next terminal prediction problem may sound more interesting, the next non-terminal prediction problem is also important, since it predicts the structure of the program. For example, when the next non-terminal is ForStatement, the next token in the source program is the keyword for, which does not have a corresponding terminal in the dataset. In this case, a model able to predict the next non-terminal can be used by the code completion engine to emit the keyword for. These two tasks are also the same problems considered by previous works employing domain-specific languages to achieve heuristic-based code completion (Raychev et al., 2016b; Bielik et al., 2016).

Predicting the next node versus predicting the next token. A natural alternative formulation of the problem is predicting the next token given the token sequence input so far. Such a formulation, however, does not take advantage of the AST information, which is very easy to acquire with a suitable parser. Predicting the next node allows taking advantage of this information to enable more intelligent code completion. In particular, predicting the next non-terminal allows completing the structure of a code block rather than a single (keyword) token. For example, when the next token is the keyword for, the corresponding next non-terminal is ForStatement, which corresponds to the following code block:

for ( ___ ; ___ ; ___ ) {
  // for-loop body
}

In this case, successfully predicting the next non-terminal node allows completing not only the next keyword token for, but also tokens such as (, ;, ), {, and }. Such structure completion enabled by predicting the next non-terminal is especially compelling in modern IDEs.
Predicting the next terminal node allows completing identifiers, properties, literals, etc., which is similar to next token prediction. However, predicting the next terminal node can leverage the information of the predicted node's non-terminal parent, which indicates what is being predicted, i.e., an identifier, a property, a literal, etc. For example, when completing the expression __webpack_require_. , a code completion engine with AST information will predict a property of __webpack_require_, while an engine without AST information only sees the two tokens __webpack_require_ and a dot "." and tries to predict the next token without any constraint. In our evaluation, we show that leveraging the information from the non-terminal parent can significantly improve performance. In this work, we focus on the next node prediction task, and leave the comparison with next token prediction as future work.

Joint prediction. A more important problem than predicting only the next non-terminal or terminal by itself is to predict the next non-terminal and terminal together. We refer to this task of predicting both the next non-terminal and the next terminal as the joint prediction problem. We hope code completion can eventually be used to generate the entire parse tree, and joint prediction is one step further toward this goal than next node prediction. Formally, the joint prediction problem that we consider is: given a partial AST whose following node is a non-terminal one, predict both the next non-terminal and the next terminal. There may be non-terminal nodes that do not have a terminal child (e.g., AssignmentStatement). In this case, we artificially add an EMPTY terminal as their child. Note that this treatment is the same as in Bielik et al. (2016). We count a prediction as correct if both the next non-terminal and the next terminal are predicted correctly.

Denying prediction. There are infinitely many possible terminals, so it is impossible to predict all terminals correctly. We consider an alternative scenario in which the code completion engine has the ability to identify that the programmer is about to input a rare terminal, and to deny predicting the next node(s) in that case. In our problem, we build a vocabulary of frequent terminals. All terminals not in this vocabulary are treated as an UNK terminal. When it predicts UNK for the next terminal, the code completion model is considered to deny prediction. Since the non-terminal vocabulary is very small, denying prediction is only considered for next terminal prediction, not for next non-terminal prediction.

4 MODELS

In this section, we present the basic models considered in this work. In particular, given a partial AST as input, we first convert the AST into its left-child right-sibling representation, and serialize it as its in-order depth-first search sequence. Thus, we consider the input for the next non-terminal prediction to be a sequence of length \(k\), i.e., \((N_1, T_1), (N_2, T_2), ..., (N_k, T_k)\). Here, for each \(i\), \(N_i\) is a non-terminal, and \(T_i\) is the terminal child of \(N_i\). For each non-terminal node \(N_i\), we encode not only its kind, but also whether the non-terminal has at least one non-terminal child and/or one right sibling. In doing so, we can reconstruct the original AST from an input sequence. This encoding is also employed by Raychev et al. (2016a). We refer to each element in the sequence (e.g., \((N_i, T_i)\)) as a token. As mentioned above, a non-terminal without a terminal child is considered to have an EMPTY child. This input sequence \((N_1, T_1), (N_2, T_2), ..., (N_k, T_k)\) is the only input for all problems except next terminal prediction. For the next terminal prediction problem, besides the input sequence, we also have information about the parent of the terminal being predicted, which is a non-terminal, i.e., \(N_{k+1}\). Throughout the rest of the discussion, we assume that both \(N_i\) and \(T_i\) use one-hot encodings. The vocabulary sets of non-terminals and terminals are separate. A sketch of this serialization step is given below.
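To make the serialization concrete, the following is a minimal sketch (our own illustration, not the paper's code) of converting an AST into the \((N_i, T_i)\) sequence with the has-child / has-sibling flags; the `Node` class and the `EMPTY` marker are assumptions introduced for illustration.

```python
from dataclasses import dataclass, field

EMPTY = "EMPTY"  # assumed marker for non-terminals without a terminal child

@dataclass
class Node:
    kind: str                      # non-terminal kind, e.g. "ForStatement"
    terminal: str = EMPTY          # terminal child value, if any
    children: list = field(default_factory=list)  # non-terminal children

def serialize(root):
    """In-order DFS producing (annotated non-terminal, terminal) tokens.
    Each non-terminal is annotated with whether it has at least one
    non-terminal child and whether it has a right sibling, so the
    original tree can be reconstructed from the sequence."""
    tokens = []
    def visit(node, has_sibling):
        n = (node.kind, bool(node.children), has_sibling)
        tokens.append((n, node.terminal))
        for i, child in enumerate(node.children):
            visit(child, i < len(node.children) - 1)
    visit(root, False)
    return tokens
```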
4.1 NEXT NON-TERMINAL PREDICTION

Given an input sequence, our first model predicts the next non-terminal. The architecture is illustrated in Figure 5. We refer to this model as NT2N, which stands for using the sequence of Non-terminal and Terminal pairs TO predict the next Non-terminal. We first explain each layer of NT2N, and then introduce two variants of this model.

Figure 5: Architecture (NT2N) for predicting the next non-terminal

Embedding non-terminals and terminals. Given an input sequence, the embedding of each token is computed as
\[ E_i = AN_i + BT_i \]
where \( A \) is a \( J \times V_N \) matrix and \( B \) is a \( J \times V_T \) matrix. Here \( J \) is the size of the embedding vector, and \( V_N \) and \( V_T \) are the vocabulary sizes of non-terminals and terminals respectively.

LSTM layer. The embedded sequence is then fed into an LSTM layer to compute the hidden states. In particular, an LSTM cell takes an input token and a hidden state \( (h_{i-1}, c_{i-1}) \) from the previous LSTM cell as input, computes a hidden state \( (h_i, c_i) \), and outputs \( h_i \), based on the following formulas:
\[ \begin{pmatrix} q \\ f \\ o \\ g \end{pmatrix} = \begin{pmatrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{pmatrix} P \begin{pmatrix} x_i \\ h_{i-1} \end{pmatrix} \]
\[ c_i = f \odot c_{i-1} + q \odot g \]
\[ h_i = o \odot \tanh(c_i) \]
Here, \( P \) denotes a \( 4J \times 2J \) parameter matrix (each activation is applied to the corresponding \( J \)-dimensional block), where \( J \) is the size of the hidden state, i.e., the dimension of \( h_i \), which is equal to the size of the embedding vectors. \( \sigma \) and \( \odot \) denote the sigmoid function and pointwise multiplication respectively.

Softmax layer. Assume \( h_k \) is the output hidden state of the last LSTM cell. \( h_k \) is fed into a softmax classifier to predict the next non-terminal. In particular, we have
\[ \hat{N}_{k+1} = \text{softmax}(W_N h_k + b_N) \]
where \( W_N \) and \( b_N \) are a matrix of size \( V_N \times J \) and a \( V_N \)-dimensional vector respectively.

Using only non-terminal inputs. One variant of this model omits all terminal information from the input sequence. In this case, the embedding is computed as \( E_i = AN_i \). We refer to this model as N2N, which stands for using the Non-terminal sequence TO predict the next Non-terminal.

Predicting the next terminal and non-terminal together. Based on NT2N, we can predict not only the next non-terminal but also the next terminal, using
\[ \hat{T}_{k+1} = \text{softmax}(W_T h_k + b_T) \]
where \( W_T \) and \( b_T \) are a matrix of size \( V_T \times J \) and a \( V_T \)-dimensional vector respectively. In this case, the loss function has an extra term to provide supervision on predicting \( \hat{T} \). We refer to this model as NT2NT, which stands for *using the sequence of Non-terminal and Terminal pairs TO predict the next Non-terminal and Terminal pair*.
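As a concrete illustration of the architecture above, here is a minimal PyTorch-style sketch of NT2N (our own reconstruction under stated assumptions, not the authors' TensorFlow implementation; class and argument names are illustrative):

```python
import torch
import torch.nn as nn

class NT2N(nn.Module):
    """Embed each (N_i, T_i) pair by summing two learned embeddings
    (E_i = A N_i + B T_i), run an LSTM over the sequence, and predict
    the next non-terminal with a softmax layer."""
    def __init__(self, vocab_n, vocab_t, hidden=1500):
        super().__init__()
        self.embed_n = nn.Embedding(vocab_n, hidden)   # matrix A
        self.embed_t = nn.Embedding(vocab_t, hidden)   # matrix B
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out_n = nn.Linear(hidden, vocab_n)        # W_N, b_N

    def forward(self, n_ids, t_ids, state=None):
        # n_ids, t_ids: (batch, seq_len) integer token ids
        e = self.embed_n(n_ids) + self.embed_t(t_ids)  # E_i = A N_i + B T_i
        h, state = self.lstm(e, state)
        logits = self.out_n(h[:, -1])   # predict N_{k+1} from the last h_k
        return logits, state
```

N2N would simply drop the terminal embedding, and NT2NT would add a second linear head over the same hidden state to produce \(\hat{T}_{k+1}\).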
4.2 NEXT TERMINAL PREDICTION

In the next terminal prediction problem, the partial AST contains not only \((N_1, T_1), ..., (N_k, T_k)\) but also \(N_{k+1}\). In this case, we can employ the architecture in Figure 6 to predict \(T_{k+1}\). In particular, we first compute the LSTM output \(h_k\) in the same way as in NT2N. The final prediction is based on
\[ \hat{T}_{k+1} = \text{softmax}(W_T h_k + W_{NT} N_{k+1} + b_T) \]
where \(W_{NT}\) is a matrix of size \(V_T \times V_N\), and \(W_T\) and \(b_T\) are the same as in NT2NT. We refer to this model as NTN2T, which stands for *using the Non-terminal and Terminal pair sequence and the next Non-terminal TO predict the next Terminal*. Note that the model NT2NT can also be used for the next terminal prediction task, although it does not leverage the non-terminal information \(N_{k+1}\). We will compare the two approaches later.

4.3 JOINT PREDICTION

We consider two approaches to predict the next non-terminal and the next terminal together. The first approach is NT2NT, which is designed to predict the two kinds of nodes together. An alternative approach is to (1) use a next non-terminal approach \(X\) to predict the next non-terminal; and (2) feed the predicted non-terminal and the input sequence into NTN2T to predict the next terminal. We refer to such an approach as X+NTN2T.

4.4 DENYING PREDICTION

We say a model *denies prediction* when it predicts the next terminal to be UNK, a special terminal substituting for rare terminals. However, due to the large number of rare terminals, UNK may occur much more frequently than any single frequent terminal. In this case, a model that can deny prediction may tend to predict UNK, and thus may make predictions for fewer queries than it should. To mitigate this problem, we make the loss function adaptive. Specifically, training a machine learning model \(f_\theta\) optimizes the following objective:
\[ \arg\min_{\theta} \sum_i l(f_\theta(q_i), y_i) \]
where \(\{(q_i, y_i)\}\) is the training dataset consisting of pairs of a query \(q_i\) and its ground truth next token \(y_i\), and \(l\) is the loss function measuring the distance between the prediction \(\hat{y}_i = f_\theta(q_i)\) and the ground truth \(y_i\). We choose \(l\) to be the standard cross-entropy loss. We introduce a weight \(\alpha_i\) for each sample \((q_i, y_i)\) in the training dataset to change the objective as follows:
\[ \arg\min_{\theta} \sum_i \alpha_i l(f_\theta(q_i), y_i) \]
When training a model not allowed to deny prediction, we set \(\alpha_i = 0\) for \(y_i = \text{UNK}\), and \(\alpha_i = 1\) otherwise. This is equivalent to removing all queries whose ground truth next token is UNK. When training a model that is allowed to deny prediction, we set all \(\alpha_i\) to 1. To denote this case, we append "+D" to the model name (e.g., NT2NT+D). A sketch of this weighted loss is given below.
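As a concrete illustration, below is a minimal sketch of the weighted objective (our own code, not the paper's; the function name and the `unk_id` argument are assumptions for illustration):

```python
import torch
import torch.nn.functional as F

def adaptive_loss(logits, targets, unk_id, alpha_unk=1.0):
    """Weighted cross-entropy sum_i alpha_i * l(f(q_i), y_i).
    alpha_unk = 0.0: UNK-labelled queries contribute nothing, so the
    model never learns to deny prediction (equivalent to dropping them).
    alpha_unk = 1.0: a '+D' model that may predict UNK, i.e., deny."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    alpha = torch.where(targets == unk_id,
                        torch.full_like(per_sample, alpha_unk),
                        torch.ones_like(per_sample))
    return (alpha * per_sample).sum()
```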
5 EVALUATION

5.1 DATASET

We use the JavaScript dataset[2] provided by Raychev et al. (2016b) to evaluate the different approaches. The statistics of the dataset can be found in Table 1.

<table> <tr> <th colspan="2">Training set</th> <th colspan="2">Test set</th> <th colspan="2">Vocabulary</th> </tr> <tr> <th>Programs</th> <td>100,000</td> <th>Programs</th> <td>50,000</td> <th>Non-terminals</th> <td>44</td> </tr> <tr> <th>Queries</th> <td>1.7 × 10^8</td> <th>Queries</th> <td>8.3 × 10^7</td> <th>Terminals</th> <td>3.1 × 10^6</td> </tr> </table>

Table 1: Statistics of the dataset

Raychev et al. (2016a) provides an approach, called PHOG, for next token prediction. The reported accuracy results are based on a subset of \( 5.3 \times 10^7 \) queries from the full test set. Specifically, Raychev et al. (2016a) chose all queries in programs containing fewer than 30,000 tokens[7]. When we compare with their results, we use the same test set. Unless otherwise specified, our results are based on the full test set consisting of \( 8.3 \times 10^7 \) queries.

[2] http://www.srl.inf.ethz.ch/js150
[7] This detail was not explained in the paper. We contacted the authors to confirm it.

5.2 TRAINING DETAILS

Vocabulary. In our dataset, there are 44 different kinds of non-terminals. Combining two more bits of information to indicate whether the non-terminal has a child and/or a right sibling, there are at most 176 different non-terminals. However, not all such combinations are possible: a ForStatement must have a child. In total, the vocabulary size for non-terminals is 97. For terminals, we sort all terminals in the training set by their frequencies. We then choose the 50,000 most frequent terminals to build the vocabulary. We further add three special terminals: UNK for out-of-vocabulary tokens, EOF indicating the end of a program, and EMPTY for non-terminals that do not have a terminal child. Note that about 45% of the terminals in the dataset are EMPTY terminals.

Training details. We use a single-layer LSTM network with hidden unit size 1500 as our base model. To train the model, we use Adam (Kingma & Ba, 2014) with base learning rate 0.001. The learning rate is multiplied by 0.9 every 0.2 epochs. We clip the gradients' norm to 5. The batch size is \( b = 80 \). We use truncated backpropagation through time, unrolling the LSTM model \( s = 50 \) times to take an input sequence of length 50 in each batch (therefore each batch contains \( b \times s = 4000 \) tokens). We divide each program into segments consisting of \( s \) consecutive tokens. The last segment of a program, which may not be full, is padded with EOF tokens. We coalesce multiple epochs together. We organize all training data into \( b \) buckets. In each epoch, we randomly shuffle all programs in the training data to construct a queue. Whenever a bucket is empty, a program is popped from the queue and all segments of the program are inserted into the empty bucket sequentially. When the queue becomes empty, i.e., the current epoch finishes, all programs are re-shuffled randomly to reconstruct the queue. Each mini-batch is formed by \( b \) segments, i.e., one segment popped from each bucket. When the training data has been shuffled \( e = 8 \) times, i.e., \( e \) epochs have been inserted into the buckets, we stop adding whole programs and start adding only the first segment of each program: when a bucket is empty, a program is chosen randomly and its first segment is added to the bucket. We terminate the training process when all buckets are empty at the same time. That is, all programs from the first 8 epochs have been trained on. This is illustrated in Figure 7. A simplified sketch of this batching scheme is given below.

Figure 7: Training epoch illustration
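The following is a simplified sketch of the bucketed batching scheme (our own reconstruction; the end-of-training filler segments are approximated by EOF padding, and `eof` is an assumed padding id):

```python
import random
from collections import deque

def bucketed_batches(programs, b=80, s=50, e=8, eof=0):
    """Yield (batch, reset) pairs: each batch takes one length-s segment
    per bucket; reset[j] is True when bucket j starts a new program,
    signalling that its LSTM state should be re-initialized to (h_0, c_0)."""
    def segments(p):
        segs = [p[i:i + s] for i in range(0, len(p), s)]
        segs[-1] = segs[-1] + [eof] * (s - len(segs[-1]))  # pad last segment
        return segs

    queue = deque()
    for _ in range(e):  # e independently shuffled epochs, coalesced
        queue.extend(random.sample(programs, len(programs)))
    buckets = [deque() for _ in range(b)]
    while queue or any(buckets):
        batch, reset = [], []
        for bucket in buckets:
            refilled = False
            if not bucket and queue:
                bucket.extend(segments(queue.popleft()))
                refilled = True
            reset.append(refilled)
            batch.append(bucket.popleft() if bucket else [eof] * s)
        yield batch, reset
```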
The hidden states are initialized with \( (h_0, c_0) \), which are two trainable vectors. The hidden states of the LSTM from the previous segment are fed into the next one as input if both segments belong to the same program. Otherwise, the hidden states are reset to \( (h_0, c_0) \). We observe that resetting the hidden states for every new program substantially improves performance. We initialize all parameters in \( h_0, c_0 \) to 0. All other parameters are initialized with values sampled uniformly at random from \([-0.05, 0.05]\). For each model, we train 5 sets of parameters using different random initializations. We evaluate the ensemble of the 5 models by averaging their softmax outputs. In our evaluation, we find that the ensemble improves the accuracy by 1 to 3 points in general.

5.3 NEXT NODE PREDICTION

In this section, we present the results of our models on next node prediction, and compare them with their counterparts in Bielik et al. (2016), the state of the art on these tasks. We therefore use the same test set consisting of \( 5.3 \times 10^7 \) queries as in Bielik et al. (2016). In the following, we first report the results of next non-terminal prediction and of next terminal prediction, and then evaluate our models' performance on programs of different lengths.

Next non-terminal prediction. The results are presented in Table 2.

<table> <tr> <th>Categories</th> <th>Previous work<br>Raychev et al. (2016a)</th> <th>N2N</th> <th>NT2N</th> <th>NT2NT</th> </tr> <tr> <td>One model accuracy</td> <td rowspan="2">83.9%</td> <td>79.4 ± 0.2%</td> <td>84.8 ± 0.1%</td> <td>84.0 ± 0.1%</td> </tr> <tr> <td>Ensemble accuracy</td> <td>82.3%</td> <td>87.7%</td> <td>86.2%</td> </tr> </table>

Table 2: Next non-terminal prediction results

From the table, we can observe that both NT2N and NT2NT outperform Raychev et al. (2016a). In particular, an ensemble of 5 NT2N models improves upon Raychev et al. (2016a) by 3.8 percentage points. We also report the average accuracies of the 5 single models and the variance among them. We observe that the variance is very small, i.e., 0.1% to 0.2%. This indicates that the trained models' accuracies are robust to random initialization. Among the neural network approaches, NT2NT's performance is lower than NT2N's, even though the former is provided with more supervision. This shows that, given the limited capacity of the model, it may learn to trade off non-terminal prediction performance in favor of the terminal prediction task it additionally needs to perform.
<table> <tr> <th rowspan="2">Categories</th> <th>Previous work</th> <th colspan="2">Our considered models</th> </tr> <tr> <th>Raychev et al. (2016a)</th> <th>NT2NT</th> <th>NTN2T</th> </tr> <tr> <td>One model accuracy</td> <td>82.9%</td> <td>76.6 ± 0.1%</td> <td>81.9 ± 0.1%</td> </tr> <tr> <td>Ensemble accuracy</td> <td></td> <td>78.6%</td> <td>83.4%</td> </tr> </table>

Table 3: Next terminal prediction results

<table> <tr> <th rowspan="2">Top 1 accuracy</th> <th colspan="3">Non-terminal</th> <th colspan="2">Terminal</th> </tr> <tr> <th>N2N</th> <th>NT2N</th> <th>NT2NT</th> <th>NTN2T</th> <th>NT2NT</th> </tr> <tr> <td>Short programs (<30,000 tokens)</td> <td>82.3%</td> <td>87.7%</td> <td>86.2%</td> <td>83.4%</td> <td>78.6%</td> </tr> <tr> <td>Long programs (>30,000 tokens)</td> <td>87.7%</td> <td>94.4%</td> <td>92.7%</td> <td>89.0%</td> <td>85.8%</td> </tr> <tr> <td>Overall</td> <td>84.2%</td> <td>90.1%</td> <td>88.5%</td> <td>85.4%</td> <td>81.2%</td> </tr> <tr> <th colspan="6">Top 5 accuracy</th> </tr> <tr> <td>Short programs (<30,000 tokens)</td> <td>97.9%</td> <td>98.9%</td> <td>98.7%</td> <td>87.9%</td> <td>86.4%</td> </tr> <tr> <td>Long programs (>30,000 tokens)</td> <td>98.8%</td> <td>99.6%</td> <td>99.4%</td> <td>91.5%</td> <td>90.5%</td> </tr> <tr> <td>Overall</td> <td>98.2%</td> <td>99.1%</td> <td>98.9%</td> <td>89.2%</td> <td>87.8%</td> </tr> </table>

Table 4: Next token prediction on programs of different lengths

Next terminal prediction. The results are presented in Table 3. We observe that an ensemble of 5 NTN2T models outperforms Raychev et al. (2016a) by 0.5 points. Without the ensemble, its accuracies are around 82.1%, i.e., 0.8 points lower than Raychev et al. (2016a). For the 5 single models, we again observe that the variance of their accuracies is very small, i.e., 0.1%. On the other hand, we observe that NT2NT performs much worse than NTN2T, i.e., by 4.8 percentage points. This shows that leveraging the additional information about the parent non-terminal of the terminal being predicted can improve performance significantly.

Prediction accuracies on programs of different lengths. We examine our models' performance over different subsets of the test set. In particular, we consider the queries in programs containing no more than 30,000 tokens, the same subset as used in Bielik et al. (2016) and Raychev et al. (2016a). We also consider the rest of the queries, from programs with more than 30,000 tokens. The results are presented in Table 4. We can observe that for both non-terminal and terminal prediction, accuracies on longer programs are higher than on shorter programs. This suggests that an LSTM-based model may become more accurate when observing more code input by programmers.

We also report top 5 prediction accuracies. We can observe that top 5 accuracy improves upon top 1 accuracy dramatically. This metric corresponds to the code completion scenario in which an IDE pops up a short list of (e.g., 5) candidates for users to choose from. In particular, NT2N achieves 99.1% top-5 accuracy on the non-terminal prediction task, while NTN2T achieves 89.2% top-5 accuracy on the terminal prediction task. A sketch of this top-k metric is given below.
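A minimal sketch (our own illustration) of computing this top-k accuracy from a model's output logits:

```python
import torch

def top_k_accuracy(logits, targets, k=5):
    """Fraction of queries whose ground truth appears among the k most
    probable predictions (the popup-list scenario)."""
    topk = logits.topk(k, dim=-1).indices                # (batch, k)
    hits = (topk == targets.unsqueeze(-1)).any(dim=-1)   # (batch,)
    return hits.float().mean()
```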
In the test set, 7.4% of the tokens have ground truth UNK, i.e., are not among the top 50,000 most frequent tokens. This means that NTN2T correctly predicts \( 89.2\% / (100\% - 7.4\%) = 96.3\% \) of all tokens whose ground truth is not UNK. Therefore, users can choose the intended token from the popup list, instead of typing it manually, in over 96% of the cases where code completion is possible, when completion is restricted to the top 50,000 most frequent tokens in the dataset.

The effectiveness of different UNK thresholds. We evaluate how the choice of the threshold beyond which terminals are replaced by UNK affects accuracy. We randomly sample 1/10 of the training dataset and of the test dataset, and vary the UNK threshold from 10,000 to 80,000. We plot the percentage of UNK terminals in both the full test set and its sampled subset in Figure 8. We can observe that the distributions of UNK terminals are almost the same in both sets. Further, when the threshold is 10,000, i.e., all terminals outside the top 10,000 most frequent ones are turned into UNK, more than 11% of the queries in the test set are UNK queries (i.e., queries whose ground truth is UNK). When the threshold increases to 50,000 or more, this number drops to between 6% and 7%. The percentage of UNK queries does not vary much as the UNK threshold varies from 50,000 to 80,000.

Figure 8: Percentage of UNK tokens in the entire test data and the sampled subset of the test data, varying the UNK threshold from 10,000 to 80,000.

Figure 9: Accuracies of different models trained over the sampled subset of training data, varying the UNK threshold from 10,000 to 80,000.

<table> <tr> <th></th> <th>NT2NT</th> <th>N2N+NTN2T</th> <th>NT2N+NTN2T</th> </tr> <tr> <th>Top 1 accuracy</th> <td>73.9%</td> <td>72.0%</td> <td>77.7%</td> </tr> </table>

Table 5: Predicting the next non-terminal and terminal together

We train one NTN2T model for each threshold, and evaluate it using the sampled test set. The accuracies of the different models are plotted in Figure 9. The trend of the models' accuracies is similar to the trend of the percentage of non-UNK tokens in the test set. This is expected, since a larger threshold gives the model more chances to make correct predictions for queries that would otherwise be UNK queries. However, this correspondence is not exact. For example, the accuracies of the models trained with thresholds 30,000 and 40,000 are almost the same, i.e., the difference is only 0.02%. We make similar observations for the models trained with thresholds 60,000, 70,000, and 80,000. Recall from above that when we train 5 models with different random initializations, the variance of their accuracies is within 0.1%. We therefore conclude that increasing the UNK threshold from 30,000 to 40,000, or from 60,000 to 80,000, does not change the accuracy significantly. One potential explanation is that, while a larger UNK threshold gives the model more chances to predict otherwise-UNK terminals, the model is also more likely to make mistakes when it must choose the next terminal from more candidates.

5.4 JOINT PREDICTION

In this section, we evaluate different approaches to predicting the next non-terminal and terminal together, i.e., the joint prediction task. NT2NT is designed for this task. Alternative approaches predict the next non-terminal first, and then predict the next terminal based on the predicted non-terminal. We choose NTN2T as the second step to predict the next terminal, and we examine two different approaches for the first step of predicting the next non-terminal: N2N and NT2N. We therefore compare three methods in total. The top 1 accuracy results are presented in Table 5.
N2N+NTN2T is less effective than NT2N+NTN2T, as expected, since for the first-step non-terminal prediction, N2N is less effective than NT2N, as shown in Table 4. On the other hand, NT2NT's performance is better than N2N+NTN2T's, but worse than NT2N+NTN2T's. We observe that for all three combinations,
\[ \Pr(\hat{T}_{k+1} = T_{k+1} \land \hat{N}_{k+1} = N_{k+1}) > \Pr(\hat{T}_{k+1} = T_{k+1}) \Pr(\hat{N}_{k+1} = N_{k+1}) \]
These facts indicate that the events of the next non-terminal and the next terminal being predicted correctly are not independent, but rather highly correlated. This is the case even for NT2NT, which predicts the next non-terminal and the next terminal conditionally independently given the LSTM hidden state.

<table> <tr> <th></th> <th>NT2NT</th> <th>NT2NT+D</th> <th>NTN2T</th> <th>NTN2T+D</th> </tr> <tr> <td>Overall accuracy</td> <td>81.2%</td> <td>85.1%</td> <td>85.4%</td> <td>89.9%</td> </tr> <tr> <td>Accuracy on non-UNK terminals</td> <td>87.6%</td> <td>87.5%</td> <td>92.2%</td> <td>91.8%</td> </tr> <tr> <td>Deny prediction rate</td> <td>0%</td> <td>5.2%</td> <td>0%</td> <td>6.1%</td> </tr> </table>

Table 6: Deny prediction results. **Overall accuracy** is the percentage of all queries (including the ones whose ground truth is **UNK**) that are predicted correctly, i.e., the prediction matches the ground truth even when the ground truth is **UNK**. **Accuracy on non-UNK terminals** measures the accuracy of each model on all non-UNK terminals. **Deny prediction rate** is the percentage of all queries for which a model denies prediction, i.e., predicts **UNK**. **Prediction accuracy** is the top 1 accuracy over the queries for which a model does not deny prediction.

Figure 10: Overall accuracies and accuracies on non-UNK terminals, varying \( \alpha \).

5.5 DENYING PREDICTION

We compare the models that do not deny prediction (i.e., NT2NT and NTN2T) with those that do (i.e., NT2NT+D and NTN2T+D). Results are presented in Table 6. For reference, 7.42% of the queries in the test set are UNK queries. We can observe that the deny-prediction models (i.e., the +D models) have higher overall accuracies than the corresponding original models. This is expected: since deny-prediction models are allowed to predict UNK terminals, they succeed on most UNK queries, on which NT2NT and NTN2T always fail. We further evaluate the accuracy on non-UNK terminals. One may expect that, since a +D model may prefer to predict UNK, a standard model should have a higher accuracy on non-UNK terminals than its deny-prediction counterpart. The results show that this is indeed the case, but the margin is very small, i.e., 0.1% for NT2NT and 0.3% for NTN2T. This means that allowing a model to deny prediction does not necessarily sacrifice its ability to predict non-UNK terminals. We are also interested in how frequently a +D model denies prediction. We observe that NTN2T+D denies prediction for only 6.1% of all queries, which is even less than the percentage of UNK queries (i.e., 7.42%). This shows that although we allow the model to deny prediction, it is conservative in exercising this privilege. This partially explains why NTN2T+D's accuracy on non-UNK terminals is not much lower than NTN2T's.
**Effectiveness of the value of \( \alpha \).** We are interested in how the hyperparameter \( \alpha \) of a +D model affects its accuracy. We train 11 different NTN2T+D models on the 1/10 subset of the training set used above to examine the effectiveness of UNK thresholds, varying \( \alpha \) from 0.0 to 1.0. Notice that when \( \alpha = 0.0 \), the model reduces to a standard NTN2T model. We plot both the overall accuracies and the accuracies on non-UNK terminals in Figure 10. We observe the same effects as above: 1) the overall accuracy for \( \alpha = 1 \) is 6% higher than for \( \alpha = 0 \); and 2) the accuracy on non-UNK terminals for \( \alpha = 1 \) is lower than for \( \alpha = 0 \), but the margin is not large (i.e., less than 1%). When we increase \( \alpha \) from 0 to 0.3, the overall accuracy increases steeply. When we increase \( \alpha \) further, however, the overall accuracy levels off. This is also the case for the accuracy on non-UNK terminals. This experiment shows that setting \( \alpha \) trades off overall accuracy against accuracy on non-UNK terminals, and the right choice of \( \alpha \) depends on the application.

5.6 RUNTIME

We evaluate our models' runtime performance. Our models are implemented in TensorFlow (Abadi et al., 2016). We evaluate them on a machine equipped with 16 Intel Xeon CPUs, 16 GB of RAM, and a single Tesla K80 GPU. All queries from the same program are processed incrementally. That is, given two queries \( A \) and \( B \), if \( A \) has one more node than \( B \), then the LSTM outputs for \( B \) are reused when processing \( A \), so that only the additional node in \( A \) needs to be processed. Note that this is consistent with the practice of writing programs incrementally from beginning to end. For each model, we feed one query at a time into the model. There are 3,939 queries in total, coming from randomly chosen programs. We measure the overall response latency for each query. We observe that the query response time is consistent across all queries. On average, each model takes around 16 milliseconds to respond to a query on the GPU, and around 33 milliseconds on the CPU. Note that these numbers come from a proof-of-concept implementation whose code we have not optimized. Considering that a human being usually does not type a token within 30 milliseconds, we conclude that our approach is efficient enough for practical use. We emphasize that these numbers do not directly correspond to the runtime latency of a deployed code completion engine, since the changes to the AST serialization may not be sequential while users are programming incrementally. This analysis, however, provides evidence of the feasibility of building our approach into a full-fledged code completion engine.

6 CONCLUSION

In this paper we introduce, motivate, and formalize the problem of automatic code completion. We describe LSTM-based approaches that capture the parsing structure readily available in the code completion task. We introduce a simple LSTM architecture to model program context, and explore several variants of this basic architecture for different variants of the code completion problem. We evaluate our techniques on a challenging JavaScript code completion benchmark and compare against the state-of-the-art code completion approach.
We demonstrate that deep learning techniques can achieve better prediction accuracy by learning program patterns from big code. In addition, we find that our models perform better on longer programs than on shorter ones, and that when the code completion engine pops up a list of candidates, our approach allows users to choose from the list instead of typing the token in over 96% of the cases where this is possible. We also evaluate our approaches' runtime performance and demonstrate that deep code completion has the potential to run in real time as users type. We believe that deep learning techniques can play a transformative role in helping software developers manage the growing complexity of software systems, and we see this work as a first step in that direction.

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their valuable comments. This material is based upon work partially supported by the National Science Foundation under Grant No. TWC-1409915 and by DARPA under Grant FA8750-15-2-0104. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or DARPA.
reject
Reject
4.5
04fe2c31cb55fd5ab35962cd8699152710488db1
iclr
2,017
LEARNING TO SUPEROPTIMIZE PROGRAMS

Rudy Bunel1, Alban Desmaison1, M. Pawan Kumar1,2 & Philip H.S. Torr1
1Department of Engineering Science - University of Oxford
2Alan Turing Institute
Oxford, UK
{rudy,alban,pawan}@robots.ox.ac.uk, philip.torr@eng.ox.ac.uk

Pushmeet Kohli
Microsoft Research
Redmond, WA 98052, USA
pkohli@microsoft.com

ABSTRACT

Code super-optimization is the task of transforming any given program into a more efficient version while preserving its input-output behaviour. In some sense, it is similar to the paraphrase problem from natural language processing, where the intention is to change the syntax of an utterance without changing its semantics. Code optimization has been the subject of years of research that has resulted in the development of rule-based transformation strategies used by compilers. More recently, however, a class of stochastic-search-based methods has been shown to outperform these strategies. This approach involves repeatedly sampling modifications to the program from a proposal distribution, which are accepted or rejected based on whether they preserve correctness and on the improvement they achieve. These methods, however, neither learn from past behaviour nor try to leverage the semantics of the program under consideration. Motivated by this observation, we present a novel learning-based approach for code super-optimization. Intuitively, our method works by learning the proposal distribution using unbiased estimators of the gradient of the expected improvement. Experiments on benchmarks comprising automatically generated as well as existing ("Hacker's Delight") programs show that the proposed method significantly outperforms state-of-the-art approaches for code super-optimization.

1 INTRODUCTION

Considering the importance of computing to human society, it is not surprising that a very large body of research has gone into the study of the syntax and semantics of programs and programming languages. Code super-optimization is an extremely important problem in this context. Given a program or a snippet of source code, super-optimization is the task of transforming it into a version that has the same input-output behaviour but can be executed more efficiently on a target compute architecture. Super-optimization provides a natural benchmark for evaluating representations of programs. As a task, it requires decoupling the semantics of the program from its superfluous properties, the exact implementation. In some sense, it is the natural analogue of the paraphrase problem in natural language processing, where we want to change syntax without changing semantics. Decades of research have gone into the problem of code optimization, resulting in the development of sophisticated rule-based transformation strategies that are used in compilers to perform code optimization. While modern compilers implement a large set of rewrite rules and are able to achieve impressive speed-ups, they fail to offer any guarantee of optimality, thus leaving room for further improvement. An alternative approach is to search over the space of all possible programs that are equivalent to the compiler output, and select the most efficient one. If the search is carried out in a brute-force manner, we are guaranteed to achieve super-optimization. However, this approach quickly becomes computationally infeasible as the number of instructions and the length of the program grow.
In order to perform super-optimization efficiently, recent approaches have started to use a stochastic search procedure inspired by Markov Chain Monte Carlo (MCMC) sampling (Schkufza et al., 2013). Briefly, the search starts at an initial program, such as the compiler output. It iteratively suggests modifications to the program, where the probability of a modification is encoded in a proposal distribution. The modification is either accepted or rejected with a probability that depends on the improvement achieved. Under certain conditions on the proposal distribution, the above procedure can be shown, in the limit, to sample from a distribution over programs where the probability of a program is related to its quality. In other words, the more efficient a program, the more often it is encountered, thereby enabling super-optimization. Using this approach, high-quality implementations of real programs, such as the Montgomery multiplication kernel from the OpenSSL library, were discovered. These implementations outperformed the output of the gcc compiler and even expert-handwritten assembly code.

One of the main factors that governs the efficiency of the above stochastic search is the choice of the proposal distribution. Surprisingly, the state-of-the-art method, Stoke (Schkufza et al., 2013), employs a proposal distribution that is neither learnt from past behaviour nor dependent on the syntax or semantics of the program under consideration. We argue that this choice fails to fully exploit the power of stochastic search. For example, consider the case where we are interested in performing bitwise operations, as indicated by the compiler output. In this case, it is more likely that the optimal program will contain bitshifts than floating-point opcodes. Yet Stoke assigns an equal probability of use to both types of opcodes.

In order to alleviate this deficiency of Stoke, we build a reinforcement learning framework to estimate the proposal distribution for optimizing the source code under consideration. The score of the distribution is measured as the expected quality of the program obtained via stochastic search. Using training data consisting of a set of input programs, the parameters are learnt via the REINFORCE algorithm (Williams, 1992). We demonstrate the efficacy of our approach on two datasets. The first is composed of programs from "Hacker's Delight" (Warren, 2002). Due to the limited diversity of the training samples, we show that it is possible to learn a prior distribution (unconditioned on the input program) that outperforms the state of the art. The second dataset contains automatically generated programs that introduce diversity into the training samples. We show that, in this more challenging setting, we can learn a conditional distribution, given the initial program, that significantly outperforms Stoke.

2 RELATED WORK

Super-optimization. The earliest approaches to super-optimization relied on brute-force search. By sequentially enumerating all programs in order of increasing length (Granlund & Kenner, 1992; Massalin, 1987), the shortest program meeting the specification is guaranteed to be found. As expected, this approach scales poorly to longer programs or to large instruction sets. The longest reported synthesized program was 12 instructions long, on a restricted instruction set (Massalin, 1987).
Trading off completeness for efficiency, stochastic methods (Schkufza et al., 2013) reduce the number of programs to test by guiding the exploration of the space, using the observed quality of the programs encountered as hints. To increase the size of solvable instances, Phothilimthana et al. (2016) combined stochastic optimizers with smart enumerative solvers. However, the reliance of stochastic methods on a generic, problem-agnostic exploration policy makes the optimization blind to the problem at hand. We propose to tackle this limitation by learning the proposal distribution.

Neural computing. Similar work was done in the restricted case of finding efficient implementations for evaluating degree-k polynomials (Zaremba et al., 2014). Programs were generated from a grammar, using a learnt policy to prioritise exploration. This approach of guided search looks promising to us and is similar in spirit to our proposal, although it was applied to a very restricted case. Another approach to guiding the exploration of the space of programs is to make use of the gradients of a differentiable relaxation of programs. Bunel et al. (2016) attempted this by simulating program execution using recurrent neural networks. However, this provides no guarantee that the network parameters will correspond to real programs. Additionally, this method can only perform local, greedy moves, limiting the scope of possible transformations. By contrast, our proposed approach operates directly on actual programs and is capable of accepting short-term detrimental moves.

Learning to optimize. Outside of program optimization, applying learning algorithms to improve optimization procedures, either in terms of results achieved or runtime, is a well-studied subject. Doppa et al. (2014) proposed imitation-learning-based methods to deal with structured output spaces, in a "learning to search" framework. While this is similar in spirit to stochastic search, our setting differs in the crucial aspect of having a valid cost function instead of searching for one. More relevant is the recent literature on learning to optimize. Li & Malik (2016) and Andrychowicz et al. (2016) learn how to improve on first-order gradient descent algorithms using neural networks. Our work is similar, as we aim to improve an optimization process. However, as opposed to the gradient descent they learn on a continuous, unconstrained space, our base algorithm is an MCMC sampler on a discrete domain. Similarly, training a proposal distribution parameterized by a neural network was proposed by Paige & Wood (2016) to accelerate inference in graphical models. Similar approaches have been successfully employed in computer vision problems, where data-driven proposals made inference feasible (Jampani et al., 2015; Kulkarni et al., 2015; Zhu et al., 2000). Other approaches to speeding up MCMC inference include the work of Salimans et al. (2015), which combines it with variational inference.

3 LEARNING STOCHASTIC SUPER-OPTIMIZATION

3.1 STOCHASTIC SEARCH AS A PROGRAM OPTIMIZATION PROCEDURE

Stoke (Schkufza et al., 2013) performs black-box optimization of a cost function on the space of programs, represented as a series of instructions. Each instruction is composed of an opcode, specifying what to execute, and some operands, specifying the corresponding registers. Each given input program \( \mathcal{T} \) defines a cost function.
For a candidate program \( \mathcal{R} \), called a *rewrite*, the goal is to optimize the following cost function:
\[ \text{cost}(\mathcal{R}, \mathcal{T}) = \omega_c \times \text{eq}(\mathcal{R}, \mathcal{T}) + \omega_p \times \text{perf}(\mathcal{R}) \quad (1) \]
The term \( \text{eq}(\mathcal{R}, \mathcal{T}) \) measures how well the outputs of the rewrite match the outputs of the reference program. It can be obtained either exactly, by running a symbolic validator, or approximately, by running test cases. The term \( \text{perf}(\mathcal{R}) \) is a measure of the efficiency of the program. In this paper, we consider runtime to be the measure of efficiency. It can be approximated by the sum of the latencies of all the instructions in the program. Alternatively, the runtime of the program on some test cases can be used.

To find the optimum of this cost function, Stoke runs an MCMC sampler using the Metropolis (Metropolis et al., 1953) algorithm. This allows us to sample from the probability distribution induced by the cost function:
\[ p(\mathcal{R}; \mathcal{T}) = \frac{1}{Z} \exp(-\text{cost}(\mathcal{R}, \mathcal{T})). \quad (2) \]
The sampling is done by proposing random moves drawn from a proposal distribution:
\[ \mathcal{R}' \sim q(\cdot|\mathcal{R}). \quad (3) \]
The cost of the modified program is evaluated and an acceptance criterion is computed. This acceptance criterion,
\[ \alpha(\mathcal{R}, \mathcal{R}'; \mathcal{T}) = \min \left(1, \frac{p(\mathcal{R}'; \mathcal{T})}{p(\mathcal{R}; \mathcal{T})}\right), \quad (4) \]
is then used as the parameter of a Bernoulli distribution from which an accept/reject decision is sampled. If the move is accepted, the state of the optimizer is updated to \( \mathcal{R}' \); otherwise, it remains \( \mathcal{R} \). While the above procedure is only guaranteed to sample from the distribution \( p(\cdot; \mathcal{T}) \) in the limit if the proposal distribution \( q \) is symmetric (\( q(\mathcal{R}'|\mathcal{R}) = q(\mathcal{R}|\mathcal{R}') \) for all \( \mathcal{R}, \mathcal{R}' \)), it still allows us to perform efficient hill-climbing for non-symmetric proposal distributions. Moves leading to an improvement are always accepted, while detrimental moves can still be accepted in order to avoid getting stuck in local minima.

3.2 LEARNING TO SEARCH

We now describe our approach to improving stochastic search by learning the proposal distribution. We begin by defining the learning objective (Section 3.2.1), followed by a parameterization of the proposal distribution (Section 3.2.2), and finally the reinforcement learning framework used to estimate the parameters of the proposal distribution (Section 3.2.3).

3.2.1 OBJECTIVE FUNCTION

Our goal is to optimize the cost function defined in equation (1). Given a fixed computational budget of \( T \) iterations to perform program super-optimization, we want to make moves that lead us to the lowest possible cost. As different programs have different runtimes and therefore different associated costs, we need to perform normalization. As the normalized loss function, we use the ratio between the cost of the best rewrite found and the cost of the initial, unoptimized program \( \mathcal{R}_0 \). Formally, the loss for a set of rewrites \( \{\mathcal{R}_t\}_{t=0..T} \) is defined as follows:
\[ r(\{\mathcal{R}_t\}_{t=0..T}) = \frac{\min_{t=0..T} \operatorname{cost}(\mathcal{R}_t, \mathcal{T})}{\operatorname{cost}(\mathcal{R}_0, \mathcal{T})}. \quad (5) \]
A sketch combining this search loop and loss is given below.
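To make the procedure concrete, the following is a minimal sketch (our own illustration; `propose` and `cost_fn` stand in for Stoke's move proposal and cost function, not its actual API) of the Metropolis search together with the normalized loss \( r \):

```python
import math
import random

def stochastic_search(r0, cost_fn, propose, T=1000):
    """Run T Metropolis steps over programs and return the normalized
    loss r({R_t}): best cost found divided by the initial cost."""
    current = r0
    best = cost_fn(r0)
    for _ in range(T):
        candidate = propose(current)
        # Acceptance criterion min(1, p(R')/p(R)) with p(R) ~ exp(-cost(R)):
        # improvements (delta >= 0) are always accepted, detrimental moves
        # are accepted with probability exp(delta) < 1.
        delta = cost_fn(current) - cost_fn(candidate)
        if delta >= 0 or random.random() < math.exp(delta):
            current = candidate
            best = min(best, cost_fn(candidate))
    return best / cost_fn(r0)
```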
Recall that our goal is to learn the proposal distribution. Given that our optimization procedure is stochastic, we need to consider the expected cost as our loss. This expected loss is a function of the parameters \( \theta \) of our parametric proposal distribution \( q_\theta \):
\[ \mathcal{L}(\theta) = \mathbb{E}_{\{\mathcal{R}_t\} \sim q_\theta} [r(\{\mathcal{R}_t\}_{t=0..T})]. \quad (6) \]

3.2.2 PARAMETERIZATION OF THE MOVE PROPOSAL DISTRIBUTION

The proposal distribution (3) originally used in Stoke (Schkufza et al., 2013) takes the form of a hierarchical model. The type of the move is first sampled from a probability distribution. Additional samples are then drawn to specify, for example, the affected location in the program and the new operands or opcode to use. Which of these probability distributions are sampled depends on the type of move sampled first. The detailed structure of the proposal distribution can be found in Appendix B. Stoke uses uniform distributions for each of the elementary probability distributions the model samples from. This corresponds to a specific instantiation of the general stochastic search paradigm. In this work, we propose to learn those probability distributions so as to maximize the probability of reaching the best programs. The rest of the optimization scheme remains similar to that of Schkufza et al. (2013). Our chosen parameterization of \( q \) keeps the hierarchical structure of the original work of Schkufza et al. (2013), as detailed in Appendix B, and parameterizes all the elementary probability distributions (over the positions in the program, the instructions to propose, and the arguments) independently. The set \( \theta \) of parameters of \( q_\theta \) thus contains a set of parameters for each elementary probability distribution. A fixed proposal distribution is kept throughout the optimization of a given program, so the proposal distribution needs to be evaluated only once, at the beginning of the optimization, and not at every iteration of MCMC. The stochastic computation graph corresponding to a run of the Metropolis algorithm is given in Figure 1. We treat the operation of evaluating the cost of a program as a deterministic function, as we do not model the randomness of measuring performance.

3.2.3 LEARNING THE PROPOSAL DISTRIBUTION

In order to learn the proposal distribution, we use stochastic gradient descent on our loss function (6). We obtain the first-order derivatives with respect to the proposal distribution parameters using the REINFORCE estimator (Williams, 1992), also known as the likelihood-ratio estimator (Glynn, 1990) or the score-function estimator (Fu, 2006). This estimator relies on a rewriting of the gradient of the expectation. For an expectation with respect to a probability distribution \( x \sim f_\theta \), the REINFORCE estimator is:
\[ \nabla_\theta \sum_x f(x; \theta) r(x) = \sum_x r(x) \nabla_\theta f(x; \theta) = \sum_x f(x; \theta) r(x) \nabla_\theta \log(f(x; \theta)), \quad (7) \]
and provides an unbiased estimate of the gradient. A minimal sketch of this estimator is given below.
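As an illustration (our own code, not the paper's), the score-function estimator can be implemented as a surrogate loss whose gradient matches (7); the categorical distribution and stand-in return are assumptions for the demo:

```python
import torch

def reinforce_surrogate(log_probs, returns):
    """Surrogate loss whose gradient equals the score-function estimator
    E[r(x) * grad log f(x; theta)], averaged over samples. `returns` is
    detached so gradients flow only through log f(x; theta)."""
    return (returns.detach() * log_probs).mean()

# Usage sketch: samples from a categorical q_theta over moves.
logits = torch.zeros(5, requires_grad=True)       # theta
dist = torch.distributions.Categorical(logits=logits)
x = dist.sample((100,))                           # 100 sampled moves
r = x.float()                                     # stand-in return r(x)
loss = reinforce_surrogate(dist.log_prob(x), r)
loss.backward()                                   # estimate lands in logits.grad
```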
Figure 1: Stochastic computation graph of the Metropolis algorithm used for program super-optimization. Round nodes are stochastic and square ones are deterministic. Red arrows correspond to computation done in the forward pass that needs to be learned, while green arrows correspond to the backward pass. Full arrows represent deterministic computation and dashed arrows represent stochastic computation. The steps of the forward pass are: (a) Based on features of the reference program, the proposal distribution \( q \) is computed. (b) A random move is sampled from the proposal distribution. (c) The score of the proposed rewrite is experimentally measured. (d) The acceptance criterion (4) for the move is computed. (e) The move is accepted with a probability equal to the acceptance criterion. (f) The cost is observed, corresponding to the best program obtained during the search. (g) Steps (b) to (f) are repeated \( T \) times.

A helpful way to derive the gradients is to consider the execution traces of the search procedure under the formalism of stochastic computation graphs (Schulman et al., 2015). We introduce one "cost node" into the computation graph at the end of each iteration of the sampler. The associated cost corresponds to the normalized difference between the best rewrite so far and the current rewrite after this step:
\[ c_t = \min \left( 0, \frac{\operatorname{cost}(\mathcal{R}_t, \mathcal{T}) - \min_{i=0..t-1} \operatorname{cost}(\mathcal{R}_i, \mathcal{T})}{\operatorname{cost}(\mathcal{R}_0, \mathcal{T})} \right). \quad (8) \]
The sum of all the cost nodes corresponds to the sum of all the improvements made when a new lowest cost was achieved. It can be shown that, up to a constant term, this is equivalent to our objective function (5). As opposed to considering only a final cost node at the end of the \( T \) iterations, this has the advantage that moves which were not responsible for improvements are not assigned any credit. For each round of MCMC, the gradient with respect to the proposal distribution is computed using the REINFORCE estimator, which is equal to
\[ \widehat{\nabla_{\theta,i}} \mathcal{L}(\theta) = \left(\nabla_{\theta} \log q_{\theta}(\mathcal{R}_i|\mathcal{R}_{i-1})\right) \sum_{t>i} c_t. \quad (9) \]
As our proposal distribution remains fixed for the duration of a program optimization, these gradients need to be summed over all iterations to obtain the total contribution to the proposal distribution. Once this gradient is estimated, it becomes possible to run standard back-propagation with respect to the features on which the proposal distribution is based, so as to learn an appropriate feature representation. A sketch of this credit assignment is given below.
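The following is our own reconstruction of the weighting in estimator (9); `move_log_probs` stands for the differentiable \( \log q_\theta(\mathcal{R}_i|\mathcal{R}_{i-1}) \) terms recorded along a sampler trace, and `step_costs` for the \( c_t \) values:

```python
import torch

def credit_assigned_surrogate(move_log_probs, step_costs):
    """Surrogate loss implementing (9): each move's log-probability is
    weighted by the sum of the cost-node values c_t of *later* iterations,
    so a move is only credited for improvements that follow it."""
    costs = torch.tensor(step_costs)                    # c_1 .. c_T
    tail = costs.flip(0).cumsum(0).flip(0)              # sum_{t >= i} c_t
    future = torch.cat([tail[1:], tail.new_zeros(1)])   # sum_{t > i} c_t
    return (future.detach() * torch.stack(move_log_probs)).sum()
```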
4 EXPERIMENTS

4.1 SETUP

Implementation. Our system is built on top of the Stoke super-optimizer of Schkufza et al. (2013). We instrumented the implementation of the Metropolis algorithm to allow sampling from parameterized proposal distributions instead of the uniform distributions previously used. Because the proposal distribution is only evaluated once per program optimisation, the impact on the optimization throughput is low, as indicated in Table 3. Our implementation also keeps track of the traces through the stochastic graph. Using the traces generated during the optimization, we compute the estimator of our gradients, implemented using the Torch framework (Collobert et al., 2011).

Datasets. We validate the feasibility of our learning approach in two experiments. The first is based on the Hacker's Delight (Warren, 2002) corpus, a collection of twenty-five bit-manipulation programs used as a benchmark in program synthesis (Gulwani et al., 2011; Jha et al., 2010; Schkufza et al., 2013). These are short programs, all performing similar types of tasks. Some examples include identifying whether an integer is a power of two from its binary representation, counting the number of bits turned on in a register, or computing the maximum of two integers. An exhaustive description of the tasks is given in Appendix C. Our second corpus of programs is automatically generated and is more diverse.

Models. The models we learn are a set of simple elementary probabilities for the categorical distributions over the instructions and over the types of moves to perform. We learn the parameters of each separate distribution jointly, using a softmax transformation to enforce that they are proper probability distributions. For the types of move where opcodes are chosen from a specific subset, the probabilities of the individual instructions are appropriately renormalized. We learn two different types of models and compare them with the baseline of uniform proposal distributions, equivalent to Stoke. Our first model, henceforth denoted the bias, is not conditioned on any property of the program to optimize. By learning this simple proposal distribution, it is only possible to capture a bias in the dataset. This can be understood as an optimal proposal distribution that Stoke should default to. The second model is a multi-layer perceptron (MLP), conditioned on the input program to optimize. For each input program, we generate a bag-of-words representation based on the opcodes of the program. This is embedded through a three-hidden-layer MLP with ReLU activations. The proposal distributions over the instructions and over the types of moves are each the result of passing the output of this embedding through a linear transformation followed by a softmax; a sketch of this conditional model is given after the tables below. The optimization is performed by stochastic gradient descent, using the Adam (Kingma & Ba, 2015) optimizer. For each estimate of the gradient, we draw 100 samples. The values of the hyperparameters used are given in Appendix A. The number of parameters of each model is given in Table 1.

<table> <tr> <th>Model</th> <th># of parameters</th> </tr> <tr> <td>Uniform</td> <td>0</td> </tr> <tr> <td>Bias</td> <td>2912</td> </tr> <tr> <td>MLP</td> <td>1.4 × 10^6</td> </tr> </table>

Table 1: Size of the different models compared. Uniform corresponds to Stoke (Schkufza et al., 2013).

<table> <tr> <th>Model</th> <th>Training</th> <th>Test</th> </tr> <tr> <td>Uniform</td> <td>57.01%</td> <td>53.71%</td> </tr> <tr> <td>Bias</td> <td>36.45%</td> <td>31.82%</td> </tr> <tr> <td>MLP</td> <td>35.96%</td> <td>31.51%</td> </tr> </table>

Table 2: Final average relative score on the Hacker's Delight benchmark. While all models improve upon the initial proposal distribution based on uniform sampling, the model conditioning on program features reaches better performance.
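The following is a hedged sketch of the conditional MLP model described above (our own reconstruction; the original was implemented in Torch rather than PyTorch, and layer sizes and names are illustrative):

```python
import torch
import torch.nn as nn

class ProposalMLP(nn.Module):
    """A bag-of-words over the input program's opcodes is embedded by a
    three-hidden-layer ReLU MLP; two softmax heads give the proposal
    distributions over move types and over opcodes."""
    def __init__(self, n_opcodes, n_move_types, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_opcodes, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.move_head = nn.Linear(hidden, n_move_types)
        self.opcode_head = nn.Linear(hidden, n_opcodes)

    def forward(self, bow):
        # bow: (batch, n_opcodes) opcode counts of the input program
        h = self.trunk(bow)
        return (torch.softmax(self.move_head(h), dim=-1),
                torch.softmax(self.opcode_head(h), dim=-1))
```

The unconditioned bias model corresponds to learning only the two output distributions directly, without the trunk.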
4.2 EXISTING PROGRAMS

In order to have a larger corpus than the twenty-five programs initially present in "Hacker's Delight", we generate various starting points for each optimization. This is accomplished by running Stoke with a cost function where \( \omega_p = 0 \) in (1), and keeping only the correct programs. Duplicate programs are filtered out. This allows us to create a larger dataset from which to learn. Examples of these programs at different levels of optimization can be found in Appendix D. We divide this augmented Hacker's Delight dataset into two sets. All the programs corresponding to even-numbered tasks are assigned to the first set, which we use for training. The programs corresponding to odd-numbered tasks are kept for separate evaluation, so as to assess the generalisation of our learnt proposal distribution.

The optimization process is visible in Figure 2, which shows a clear decrease of the training and testing loss for both models. While plain stochastic super-optimization already discovers programs that are 40% more efficient on average, a tuned proposal distribution yields even larger improvements, bringing them up to 60%, as can be seen in Table 2. Due to the similarity between the different tasks, conditioning on the program features does not bring any significant improvement.

![Two line plots showing training and testing loss over time for Bias and Multi-layer Perceptron](page_1012_1042_484_246.png)

(a) Bias (b) Multi-layer Perceptron

Figure 2: Proposal distribution training. All models learn to improve the performance of the stochastic optimization. Because the tasks differ between the training and testing datasets, the values cannot be compared directly across datasets, as some tasks offer more opportunity for optimization. It can however be noted that improvements on the training dataset generalise to the unseen tasks.

In addition, to clearly demonstrate the practical consequences of our learning, we present in Figure 3 a superposition of score traces sampled from the optimization of a program of the test set. Figure 3a corresponds to our initialisation, a uniform distribution as used in the work of Schkufza et al. (2013). Figure 3d corresponds to our optimized version. It can be observed that, while the uniform proposal distribution successfully decreases the cost of the program, our learnt proposal distribution achieves lower scores more robustly and in fewer iterations. Even with only 100 iterations (Figure 3e), the learned model outperforms the uniform proposal distribution with 400 iterations (Figure 3c).

(a) With Uniform proposal: optimization traces (b) Scores after 200 iterations (c) Scores after 400 iterations (d) With Learned Bias: optimization traces (e) Scores after 100 iterations (f) Scores after 200 iterations

Figure 3: Distribution of the improvement achieved when optimising a training sample from the Hacker's Delight dataset. The first column represents the evolution of the score during the optimization. The other columns represent the distribution of scores after a given number of iterations. (a) to (c) correspond to the uniform proposal distribution, (d) to (f) to the learned bias.

4.3 AUTOMATICALLY GENERATED PROGRAMS

While the previous experiment shows promising results on a set of programs of interest, the limited diversity of the corpus might have made the task too simple, as evidenced by the good performance of a blind model. Indeed, despite the data augmentation, only 25 distinct tasks were present, and all variations of the same task share the same optimum. To evaluate our performance on a more challenging problem, we automatically synthesize a larger dataset of programs. Our method consists in running Stoke repeatedly with a constant cost function for a large number of iterations. This results in a pure random walk, as every proposed program has the same cost, leading to a 50% chance of acceptance. We generate 600 such programs: 300 used as a training set for the optimizer to learn over and 300 kept as a test set.
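As a sketch of this generation procedure, the loop below performs such a random walk; `propose_move` and `apply_move` are hypothetical wrappers around Stoke's transformation machinery (Appendix B), so this is illustrative only.

```python
import random

def generate_random_program(seed_program, n_iterations, propose_move, apply_move):
    """Unbiased random walk through program space under a constant cost.

    As described above, with a constant cost function every proposal is
    accepted with probability 0.5, so the chain drifts freely; the endpoint
    is kept as a new synthetic benchmark program.
    """
    program = seed_program
    for _ in range(n_iterations):
        move = propose_move(program)
        if random.random() < 0.5:  # constant cost: 50% acceptance (see text)
            program = apply_move(program, move)
    return program
```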
The performance achieved on this more complex dataset is shown in Figure 4 and Table 4.

(a) Bias (b) Multi-layer Perceptron

Figure 4: Training of the proposal distribution on the automatically generated benchmark.

<table> <tr> <th>Proposal distribution</th> <th>MCMC iterations throughput</th> </tr> <tr> <td>Uniform</td> <td>60 000 /second</td> </tr> <tr> <td>Categorical</td> <td>20 000 /second</td> </tr> </table>

Table 3: Throughput of the proposal distributions, estimated by timing MCMC for 10,000 iterations.

<table> <tr> <th>Model</th> <th>Training</th> <th>Test</th> </tr> <tr> <td>Uniform</td> <td>76.63%</td> <td>78.15%</td> </tr> <tr> <td>Bias</td> <td>61.81%</td> <td>63.56%</td> </tr> <tr> <td>MLP</td> <td><b>60.13%</b></td> <td><b>62.27%</b></td> </tr> </table>

Table 4: Final average relative score. The MLP conditioning on the features of the program performs better than the simple bias. Even the unconditioned bias performs significantly better than the uniform proposal distribution.

5 CONCLUSION

In this paper, we have formulated the problem of optimizing the performance of a stochastic super-optimizer as a machine learning problem. We demonstrated that learning the proposal distribution of an MCMC sampler is feasible and leads to faster and higher-quality improvements. Our approach is not limited to stochastic super-optimization and could be applied to other stochastic search problems. It is interesting to compare our method to the synthesis-style approaches that have recently appeared in the deep learning community (Graves et al., 2014), which aim at learning algorithms directly using differentiable representations of programs. We find that the stochastic-search-based approach yields a significant advantage compared to those types of approaches, as the resulting programs can be run independently of the neural network that was used to discover them.

Several improvements to the presented method are possible. In mature domains such as computer vision, the representations of the objects of interest have been widely studied and are successful at capturing the information of each sample. For programs, obtaining informative representations remains a challenge. Our proposed approach ignores part of the structure of the program, notably temporal structure, due to the limited amount of existing data; the synthetic data, having no such structure, would not be suitable for learning those representations. Gathering a larger dataset of frequently used programs, so as to measure more accurately the practical performance of those methods, seems the evident next step for the task of program synthesis.

REFERENCES

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In NIPS, 2016.
Rudy Bunel, Alban Desmaison, Pushmeet Kohli, Philip HS Torr, and M Pawan Kumar. Adaptive neural compilation. In NIPS, 2016.
Berkeley Churchill, Eric Schkufza, and Stefan Heule. Stoke. https://github.com/StanfordPL/stoke, 2016.
Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A Matlab-like environment for machine learning. In NIPS, 2011.
Janardhan Rao Doppa, Alan Fern, and Prasad Tadepalli. HC-Search: A learning framework for search-based structured prediction. JAIR, 2014.
Michael C. Fu. Gradient estimation. Handbooks in Operations Research and Management Science, 2006.
Peter W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 1990.
Torbjörn Granlund and Richard Kenner. Eliminating branches using a superoptimizer and the GNU C compiler. ACM SIGPLAN Notices, 1992.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. CoRR, 2014.
Sumit Gulwani, Susmit Jha, Ashish Tiwari, and Ramarathnam Venkatesan. Synthesis of loop-free programs. In PLDI, 2011.
Varun Jampani, Sebastian Nowozin, Matthew Loper, and Peter V Gehler. The informed sampler: A discriminative approach to Bayesian inference in generative computer vision models. Computer Vision and Image Understanding, 2015.
Susmit Jha, Sumit Gulwani, Sanjit A Seshia, and Ashish Tiwari. Oracle-guided component-based program synthesis. In International Conference on Software Engineering, 2010.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Tejas D Kulkarni, Pushmeet Kohli, Joshua B Tenenbaum, and Vikash Mansinghka. Picture: A probabilistic programming language for scene perception. In CVPR, 2015.
Ke Li and Jitendra Malik. Learning to optimize. CoRR, 2016.
Henry Massalin. Superoptimizer: A look at the smallest program. In ACM SIGPLAN Notices, 1987.
Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 1953.
Brooks Paige and Frank Wood. Inference networks for sequential Monte Carlo in graphical models. In ICML, 2016.
Phitchaya Mangpo Phothilimthana, Aditya Thakur, Rastislav Bodik, and Dinakar Dhurjati. Scaling up superoptimization. In ACM SIGPLAN Notices, 2016.
Tim Salimans, Diederik P Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In ICML, 2015.
Eric Schkufza, Rahul Sharma, and Alex Aiken. Stochastic superoptimization. ACM SIGPLAN Notices, 2013.
John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In NIPS, 2015.
Henry S Warren. Hacker's Delight. 2002.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.
Wojciech Zaremba, Karol Kurach, and Rob Fergus. Learning to discover efficient mathematical identities. In NIPS, 2014.
Song-Chun Zhu, Rong Zhang, and Zhuowen Tu. Integrating bottom-up/top-down for object recognition by data driven Markov chain Monte Carlo. In CVPR, 2000.

A HYPERPARAMETERS

A.1 ARCHITECTURES

The output size of 9 corresponds to the number of move types. The output size of 2903 corresponds to the number of possible instructions that Stoke can use during a rewrite; this is smaller than the 3874 opcodes that can be found in an original program.

<table> <tr> <th>Outputs</th> <td>Bias (9)<br>SoftMax</td> <td>Bias (2903)<br>SoftMax</td> </tr> </table>

Table 5: Architecture of the Bias.

<table> <tr> <th>Embedding</th> <td colspan="2">Linear (3874 → 100) + ReLU<br>Linear (100 → 300) + ReLU<br>Linear (300 → 300) + ReLU</td> </tr> <tr> <th>Outputs</th> <td>Linear (300 → 9)<br>SoftMax</td> <td>Linear (300 → 2903)<br>SoftMax</td> </tr> </table>

Table 6: Architecture of the Multi-Layer Perceptron.

A.2 TRAINING PARAMETERS

All of our models are trained using the Adam (Kingma & Ba, 2015) optimizer, with its default hyperparameters \( \beta_1 = 0.9, \beta_2 = 0.999, \epsilon = 10^{-8} \). We use minibatches of size 32. The learning rates were tuned by observing the evolution of the loss on the training datasets over the first iterations; the chosen values are given in Table 7. These learning rates are divided by the size of the minibatches.
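As a concrete reading of these settings, the snippet below shows how the effective optimizer configuration could be assembled. It is a sketch assuming PyTorch's Adam rather than the Torch7 implementation we used; the learning-rate values are those reported in Table 7 below.

```python
import torch

BATCH_SIZE = 32
BASE_LR = {("bias", "hackers_delight"): 1.0, ("bias", "synthetic"): 10.0,
           ("mlp", "hackers_delight"): 0.01, ("mlp", "synthetic"): 0.1}

def make_optimizer(model, model_kind, dataset):
    # The tuned learning rate (Table 7) is divided by the minibatch size.
    lr = BASE_LR[(model_kind, dataset)] / BATCH_SIZE
    return torch.optim.Adam(model.parameters(), lr=lr,
                            betas=(0.9, 0.999), eps=1e-8)
```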
<table> <tr> <th></th> <th>Hacker’s Delight</th> <th>Synthetic</th> </tr> <tr> <th>Bias</th> <td>1</td> <td>10</td> </tr> <tr> <th>MLP</th> <td>0.01</td> <td>0.1</td> </tr> </table>

Table 7: Values of the learning rates used.

B STRUCTURE OF THE PROPOSAL DISTRIBUTION

The sampling of a move is a hierarchy of sampling steps, most easily represented as a generative model over program transformations. Depending on the type of move sampled, different series of sampling steps are performed. For a given move, all the choices are sampled independently, so the probability of proposing the move is the product of the probabilities of each of its sampling steps. The generative model is defined in Figure 5. It is parameterized by the parameters of each elementary probability distribution it samples from; the default Stoke version uses uniform probabilities for all of these elementary distributions.

```python
def proposal(current_program):
    move_type = sample(categorical(all_move_types))
    if move_type == 1:  # Add empty instruction
        pos = sample(categorical(all_positions(current_program)))
        return (ADD_NOP, pos)
    if move_type == 2:  # Delete an instruction
        pos = sample(categorical(all_positions(current_program)))
        return (DELETE, pos)
    if move_type == 3:  # Instruction transform
        pos = sample(categorical(all_positions(current_program)))
        instr = sample(categorical(set_of_all_instructions))
        arity = nb_args(instr)
        operands = {}
        for i in range(1, arity + 1):
            # Get one of the arguments that can be used as the i-th
            # argument of the instruction `instr`.
            possible_args = possible_arguments(instr, i)
            operands[i] = sample(categorical(possible_args))
        return (TRANSFORM, pos, instr, operands)
    if move_type == 4:  # Opcode transform
        pos = sample(categorical(all_positions(current_program)))
        args = arguments_at(current_program, pos)
        # Get an instruction compatible with the arguments already
        # present in the program at line `pos`.
        instr = sample(categorical(possible_instruction(args)))
        return (OPCODE_TRANSFORM, pos, instr)
    if move_type == 5:  # Opcode width transform
        pos = sample(categorical(all_positions(current_program)))
        curr_instr = instruction_at(current_program, pos)
        # Get an instruction with the same mnemonic as `curr_instr`.
        instr = sample(categorical(same_mnemonic_instr(curr_instr)))
        return (OPCODE_TRANSFORM, pos, instr)
    if move_type == 6:  # Operand transform
        pos = sample(categorical(all_positions(current_program)))
        curr_instr = instruction_at(current_program, pos)
        arg_to_mod = sample(categorical(args(curr_instr)))
        possible_args = possible_arguments(curr_instr, arg_to_mod)
        new_operand = sample(categorical(possible_args))
        return (OPERAND_TRANSFORM, pos, arg_to_mod, new_operand)
    if move_type == 7:  # Local swap transform
        block_idx = sample(categorical(all_blocks(current_program)))
        possible_pos = pos_in_block(current_program, block_idx)
        pos_1 = sample(categorical(possible_pos))
        pos_2 = sample(categorical(possible_pos))
        return (SWAP, pos_1, pos_2)
    if move_type == 8:  # Global swap transform
        pos_1 = sample(categorical(all_positions(current_program)))
        pos_2 = sample(categorical(all_positions(current_program)))
        return (SWAP, pos_1, pos_2)
    if move_type == 9:  # Rotate transform
        pos_1 = sample(categorical(all_positions(current_program)))
        pos_2 = sample(categorical(all_positions(current_program)))
        return (ROTATE, pos_1, pos_2)
```

Figure 5: Generative model of a transformation.
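Since the probability of a proposed move is the product of the probabilities of its individual sampling steps, its log-probability, as needed by the REINFORCE estimator of equation (9), is simply a sum of the logs of the chosen categorical entries. A minimal sketch, with an illustrative trace format:

```python
import math

def move_log_prob(sampling_steps):
    """Log-probability of a proposed move under the parameterized proposal.

    `sampling_steps` records the move's sampling decisions as a list of
    (probability_vector, chosen_index) pairs: one entry for the move type,
    followed by one per elementary choice made in Figure 5.
    """
    return sum(math.log(probs[idx]) for probs, idx in sampling_steps)
```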
C HACKER'S DELIGHT TASKS

The 25 tasks of the Hacker's Delight (Warren, 2002) dataset are the following:

1. Turn off the right-most one bit
2. Test whether an unsigned integer is of the form \( 2^{n-1} \)
3. Isolate the right-most one bit
4. Form a mask that identifies the right-most one bit and trailing zeros
5. Right-propagate the right-most one bit
6. Turn on the right-most zero bit in a word
7. Isolate the right-most zero bit
8. Form a mask that identifies the trailing zeros
9. Absolute value function
10. Test if the numbers of leading zeros of two words are the same
11. Test if the number of leading zeros of a word is strictly less than that of another word
12. Test if the number of leading zeros of a word is less than or equal to that of another word
13. Sign function
14. Floor of the average of two integers without overflowing
15. Ceiling of the average of two integers without overflowing
16. Compute the maximum of two integers
17. Turn off the right-most contiguous string of one bits
18. Determine if an integer is a power of two
19. Exchange two fields of the same integer according to some input
20. Next higher unsigned number with the same number of one bits
21. Cycling through 3 values
22. Compute parity
23. Count the number of bits turned on
24. Round up to the next highest power of two
25. Compute the higher-order half of the product of x and y

Reference implementations of these programs were obtained from the examples directory of the Stoke repository (Churchill et al., 2016).

D EXAMPLES OF HACKER'S DELIGHT OPTIMISATION

The first task of the Hacker's Delight corpus consists in turning off the right-most one bit of a register. When compiling the code in Listing 6a, llvm generates the code shown in Listing 6b. A typical example of an equivalent version of the same program obtained by the data-augmentation procedure is shown in Listing 6c. Listing 6d contains the optimal version of this program. Note that such optimizations are already feasible using the Stoke system of Schkufza et al. (2013).

(a) Source:
```c
#include <stdint.h>

int32_t p01(int32_t x) {
    int32_t o1 = x - 1;
    return x & o1;
}
```

(b) Optimization starting point:
```asm
pushq %rbp
movq %rsp, %rbp
movl %edi, -0x4(%rbp)
movl -0x4(%rbp), %edi
subl $0x1, %edi
movl %edi, -0x8(%rbp)
movl -0x4(%rbp), %edi
andl -0x8(%rbp), %edi
movl %edi, %eax
popq %rbp
retq
nop
nop
nop
```

(c) Alternative equivalent program:
```asm
blsrl %edi, %esi
sets %ch
xorq %rax, %rax
sarb $0x2, %ch
rorw $0x1, %di
subb $0x3, %dil
mull %ebp
subb %ch, %dh
rcrb $0x1, %dil
cmovbel %esi, %eax
retq
```

(d) Optimal solution:
```asm
blsrl %edi, %eax
retq
```

Figure 6: The program at different stages of the optimization.
accept
Accept (Poster)
7
0ec08b71ce5890a3a5db686f3063ae9c962ba05e
iclr
2,017
Variational Recurrent Adversarial Deep Domain Adaptation

Sanjay Purushotham*, Wilka Carvalho*, Tanachat Nilanon, Yan Liu
Department of Computer Science
University of Southern California
Los Angeles, CA 90089, USA
{spurusho,wcarvalh,nilanon,yanliu.cs}@usc.edu

ABSTRACT

We study the problem of learning domain-invariant representations for time-series data while transferring the complex temporal latent dependencies between domains. Our model, termed Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), is built atop a variational recurrent neural network (VRNN) and trained adversarially to capture complex temporal relationships that are domain-invariant. To the best of our knowledge, this is the first model to capture and transfer temporal latent dependencies of multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model create domain-invariant representations, allowing it to outperform current state-of-the-art deep domain adaptation approaches.

1 INTRODUCTION

Many real-world applications require effective machine learning algorithms that can learn invariant representations across related time-series datasets, for example, precision medicine for patients of various age groups, or mobile application recommendation for users based on location. In these examples, while the domains (i.e., age group and location) may vary, there exist common predictive patterns that can aid in transferring knowledge from one domain to another. More often than not, some domains have a significantly larger number of observations than others (e.g., respiratory failure in adults vs. children). Effective domain adaptation of time-series data is therefore in great demand.

The general approach to tackling domain adaptation has been explored under many facets, which include reducing the domain discrepancy between the source and target domains (Ben-David et al. (2007)), instance re-weighting (Jiang & Zhai (2007)), subspace alignment (Fernando et al. (2013)), and deep learning (Tzeng et al. (2015); Ganin & Lempitsky (2014)). Many of these approaches work very well for non-sequential data but are not suitable for multivariate time-series data, as they do not usually capture the temporal dependencies present in the data. For sequential data, earlier work has successfully used dynamic Bayesian networks (Huang & Yates (2009)) and Recurrent Neural Networks (Socher et al. (2011)) to learn domain-invariant latent feature representations. Unfortunately, these works were either not flexible enough to model non-linear dynamics or did not explicitly capture and transfer the complex latent dependencies needed to perform domain adaptation of time-series data.

In this paper, we address this problem with a model that learns temporal latent dependencies (i.e., dependencies between the latent variables across timesteps) that can be transferred across domains whose features exhibit different distributions. We draw inspiration from the Variational Recurrent Neural Network (Chung et al. (2016)) and use variational methods to produce a latent representation that captures underlying temporal latent dependencies.
Motivated by the theory of domain adaptation (Ben-David et al. (2010)), we perform adversarial training on this representation, similarly to the Domain Adversarial Neural Network (DANN) (Ganin et al. (2016)), to make the representations invariant across domains. We call our model the Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model. As far as we know, this is the first model capable of accomplishing unsupervised domain adaptation while transferring temporal latent dependencies for complex multivariate time-series data.

![t-SNE projections for the latent representations of DNN, R-DANN, and VRADA](page_232_186_1124_273.png)

(a) DNN (b) R-DANN (c) VRADA

Figure 1: A story of temporal dependency and domain invariance. t-SNE projections for the latent representations of DNN, R-DANN, and our VRADA model. We show adaptation from Adult-AHRF to Child-AHRF data. Source data is represented with red circles and target data with blue circles. From left to right, one can see that domain adaptation results in mixing the source and target domain data distributions. We can also see how encoding more temporal dependency into the latent representation induces more domain-invariant representations. As models capture more underlying factors of variation, post-domain-adaptation representations gradually smoothen and become evenly dispersed, indicating that temporal dependency acts synergistically with domain adaptation.

Figure 1 shows an example of the domain-invariant representations learned by different deep learning models, including our VRADA model. From this figure, we can see that our model (VRADA) shows better mixing of the domain distributions than the competing models, indicating that it learns better domain-invariant representations.

In order to prove the efficacy of our model, we perform domain adaptation using real-world healthcare time-series data. We choose healthcare data for two primary reasons. (1) Currently, a standard protocol in healthcare is to build, evaluate, and deploy machine learning models for particular datasets that may perform poorly on unseen datasets with different distributions. For example, models built on patient data from particular age groups perform poorly on other age groups because the features used to train the models have different distributions across the groups (Alemayehu & Warner (2004); Lao et al. (2004); Seshamani & Gray (2004)). Knowledge learned from one group is not transferable to the other group. Domain adaptation seems like a natural solution to this problem, as knowledge needs to be transferred across domains which share features that exhibit different distributions. (2) Healthcare data has multiple attributes recorded per patient visit, and it is longitudinal and episodic in nature. Thus, healthcare data is a suitable platform on which to study a model which seeks to capture complex temporal representations and transfer this knowledge across domains.
The rest of the paper is structured as follows. In the following section, we briefly discuss the current state-of-the-art deep domain adaptation approaches. Afterwards, we present our model mathematically, detailing how it simultaneously learns to capture temporal latent dependencies and create domain-invariant representations. In Section 4, we compare and contrast the performance of the proposed approach with other approaches on two real-world healthcare datasets, and provide an analysis of our domain-invariant representations.

2 RELATED WORK

Domain adaptation is a specific instance of transfer learning in which the feature spaces are shared but their marginal distributions differ. Good surveys of the two have been done in several previous works (Pan & Yang (2009); Jiang (2008); Patel et al. (2015)). Domain adaptation has been thoroughly studied in computer vision (Saenko et al. (2010); Gong et al. (2012); Fernando et al. (2013)) and natural language processing (NLP) (Blitzer (2007); Foster et al. (2010)) applications. Recently, the deep learning paradigm has become popular in domain adaptation (Chen et al. (2012); Tzeng et al. (2015); Yang & Eisenstein; Long & Wang (2015)) due to its ability to learn rich, flexible, non-linear domain-invariant representations. Here, we briefly discuss two deep domain adaptation approaches which are closely related to our proposed model.

Figure 2: Block diagram of VRADA. Blue lines show the inference process, \( q_{\theta_e}(z_t^i | x_{\leq t}^i, z_{<t}^i) \). Brown lines show the generation process, \( p_{\theta_g}(x_t^i | z_{\leq t}^i, x_{<t}^i) \). Red lines show the recurrence process, where \( h_t \) is informed by \( h_{t-1} \), which is informed by \( z_{t-1} \) and \( x_{t-1} \). Black lines indicate classification.

The Domain Adversarial Neural Network (DANN) (Ganin et al. (2016)) is a deep domain adaptation model which uses two core components to create domain-invariant representations: a feature extractor that produces the data's latent representation, and an adversarial domain labeler that attempts to classify that data's domain, pushing the feature extractor to produce latent representations which are domain-invariant. In Louizos et al. (2015), the authors propose the Variational Fair Autoencoder, which uses the variational autoencoding architecture (Kingma & Welling (2013)) to learn latent representations from which most of the information about certain known factors of variation is purged, while still retaining as much information about the data as possible. While these deep learning approaches learn domain-invariant representations, they fail to capture and transfer the underlying complex temporal latent relationships from one domain to another, as they use convolutional or feed-forward neural networks, which we claim are not suitable for multivariate time-series data.

Other works, such as Huang & Yates (2009); Xiao & Guo (2013), have used distributed representations for domain adaptation in NLP sequence labeling tasks. However, they either induce hidden states as latent features using dynamic Bayesian networks (DBNs) or learn generalizable distributed representations of words using Recurrent Neural Networks (RNNs) (Socher et al. (2011)) to enable domain adaptation. These works either model highly non-linear dynamics, as one can with RNNs, or capture the complex latent dependencies present in sequential data, as one can with DBNs, but not both.
(2016)) was recently proposed to capture the complex relationship between the underlying hidden factors of variation and the output variables at different time-steps. The VRNN uses Variational Autoencoders (VAEs) (Kingma & Welling (2013); Goodfellow et al. (2016)) at each time-step to learn a complex relationship between the latent hidden factors across time-steps. Like the VAE, its latent variable is parametric. Combined, these properties make it well-suited for multimodal sequential data such as multivariate time-series. In the following section, we discuss our approach, Variational Recurrent Adversarial Deep Domain Adaptation (VRADA), which uses a VRNN to model and transfer complex domain-invariant temporal latent relationships for unsupervised domain adaptation of multivariate time-series.

3 VARIATIONAL RECURRENT ADVERSARIAL DEEP DOMAIN ADAPTATION

In this section, we present our Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model for the purpose of capturing and transferring temporal latent dependencies across domains via domain-invariant representations. First, we introduce the notations used in this paper and then discuss our VRADA model in detail.

3.1 NOTATIONS

Let us denote a multivariate variable-length time series with \( N \) data samples as \( \{ \mathbf{x}^i = (x_t^i)_{t=1}^{T^i} \}_{i=1}^N \), where \( x_t^i \in \mathbb{R}^D \). (Note: in our experiments, \( T^i = \tau \) for all data samples, but we maintain \( T^i \) for generality.) We denote \( \{ \mathbf{x}_S^i \}_{i=1}^n \) as source domain data and \( \{ \mathbf{x}_T^i \}_{i=n+1}^N \) as target domain data. We assume that each source domain data sample \( \mathbf{x}_S^i \) comes with \( L \) labels \( y_i \in \{0, 1\}^L \) (for example, these labels may correspond to a clinical outcome such as mortality or ICD9 diagnosis codes), while the target domain has no labeled data samples. We assign a domain label \( d_i \in \{0, 1\} \) to each data sample to indicate whether it comes from the source or target domain; \( d_i \) will be used for adversarial training.

3.2 VRADA

The block diagram of our VRADA model is shown in Figure 2. To explicitly model the dependencies between the latent random variables across time steps, the VRADA model utilizes the Variational Recurrent Neural Network (VRNN) (Chung et al. (2016)). The VRNN effectively contains a Variational Auto-Encoder (Kingma & Welling (2013)) at every time step, all of which are conditioned on previous auto-encoders via the hidden state \( h_{t-1} \) of an RNN, such as an LSTM (Hochreiter & Schmidhuber (1997)). Therefore, at each time-step \( t \), we infer a latent random variable \( z_t^i \) from \( x_t^i \) via \[ z_t^i | x_t^i \sim \mathcal{N}(\mu_{z,t}, \text{diag}(\sigma_{z,t})), \quad \text{where } [\mu_{z,t}, \sigma_{z,t}] = \varphi_\tau^{enc}(\varphi_\tau^{x}(x_t^i), h_{t-1}) \] with prior \[ z_t^i \sim \mathcal{N}(\mu_{0,t}, \text{diag}(\sigma_{0,t})), \quad \text{where } [\mu_{0,t}, \sigma_{0,t}] = \varphi_\tau^{prior}(h_{t-1}) \] where \( \mu_{*,t}, \sigma_{*,t} \) denote the parameters of the respective Gaussian distributions, and \( \varphi_\tau^* \) can be any highly flexible function such as a deep neural network.
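To make the per-time-step computation concrete, the following is a minimal PyTorch sketch of a single VRNN step: feature extraction, inference of \( z_t \), the prior, sampling via the reparameterization trick, the decoding step described next, and the recurrence. The module names and layer sizes are illustrative assumptions, and a GRU cell stands in for the LSTM for brevity; this is a sketch, not the authors' implementation.

```python
# A minimal sketch of one VRNN time-step. Names and sizes are illustrative
# assumptions, not the authors' code; a GRU cell stands in for the LSTM.
import torch
import torch.nn as nn

class VRNNCell(nn.Module):
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        self.phi_x = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())  # feature extractor for x_t
        self.phi_z = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU())  # feature extractor for z_t
        self.enc = nn.Linear(2 * h_dim, 2 * z_dim)    # [mu_z, logvar_z] from (phi_x(x_t), h_{t-1})
        self.prior = nn.Linear(h_dim, 2 * z_dim)      # [mu_0, logvar_0] from h_{t-1}
        self.dec = nn.Linear(2 * h_dim, 2 * x_dim)    # [mu_x, logvar_x] from (phi_z(z_t), h_{t-1})
        self.rnn = nn.GRUCell(2 * h_dim, h_dim)       # recurrence over (phi_x(x_t), phi_z(z_t))

    def forward(self, x_t, h_prev):
        fx = self.phi_x(x_t)
        mu_z, logvar_z = self.enc(torch.cat([fx, h_prev], dim=-1)).chunk(2, dim=-1)
        mu_0, logvar_0 = self.prior(h_prev).chunk(2, dim=-1)
        # Reparameterization trick: z_t ~ N(mu_z, diag(sigma_z))
        z_t = mu_z + torch.randn_like(mu_z) * torch.exp(0.5 * logvar_z)
        fz = self.phi_z(z_t)
        mu_x, logvar_x = self.dec(torch.cat([fz, h_prev], dim=-1)).chunk(2, dim=-1)
        h_t = self.rnn(torch.cat([fx, fz], dim=-1), h_prev)
        return z_t, (mu_z, logvar_z), (mu_0, logvar_0), (mu_x, logvar_x), h_t
```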
For each \( z_t^i \), \( x_t^i \) is generated via \[ x_t^i | z_t^i \sim \mathcal{N}(\mu_{x,t}, \text{diag}(\sigma_{x,t})), \quad \text{where } [\mu_{x,t}, \sigma_{x,t}] = \varphi_\tau^{dec}(\varphi_\tau^{z}(z_t^i), h_{t-1}) \] and the parameters are learned by optimizing the VRNN objective function: \[ \mathcal{L}_r(x^i; \theta_e, \theta_g) = E_{q_{\theta_e}(z_{\leq T}^i | x_{\leq T}^i)}\Big[\sum_{t=1}^{T} \big(-D\big(q_{\theta_e}(z_t^i | x_{\leq t}^i, z_{<t}^i) \,\|\, p(z_t^i | x_{<t}^i, z_{<t}^i)\big) + \log p_{\theta_g}(x_t^i | z_{\leq t}^i, x_{<t}^i)\big)\Big] \] where \( q_{\theta_e}(z_t^i | x_{\leq t}^i, z_{<t}^i) \) is the inference model, \( p(z_t^i | x_{<t}^i, z_{<t}^i) \) is the prior, \( p_{\theta_g}(x_t^i | z_{\leq t}^i, x_{<t}^i) \) is the generative model, \( \theta_e \) denotes the parameters of the VRNN's encoder, \( \theta_g \) the parameters of the VRNN's decoder, and \( D(\cdot \| \cdot) \) refers to the KL-divergence. (Note: \( z_{\leq T} \) refers to the set of all \( z_t \) such that \( t \leq T \); likewise for \( z_{<T} \).)

For each \( \mathbf{x}^i \), we use \( \tilde{z}^i \sim q_{\theta_e}(z_T^i | x_{\leq T}^i, z_{<T}^i) \) as our feature representation for the source domain classification task, since it captures temporal latent dependencies across the time-steps. Training the VRNN for source domain classification involves solving the following optimization problem: \[ \min_{\theta_e, \theta_g, \theta_y} \frac{1}{n} \sum_{i=1}^n \frac{1}{T^i} \mathcal{L}_r(x^i; \theta_e, \theta_g) + \frac{1}{n} \sum_{i=1}^n \mathcal{L}_y(x^i; \theta_y, \theta_e) + \lambda \mathcal{R}(\theta_e) \tag{1} \] where \( \mathcal{R}(\theta_e) \) is a regularizer for the parameters of the VRNN encoder (which is also the feature extractor of VRADA) with a tuning hyperparameter \( \lambda \).

As we are interested in achieving domain adaptation via the latent representation \( \tilde{z}^i \) (i.e., in making \( \tilde{z}^i \) domain-invariant), we can adversarially train the above objective function (equation 1) by employing the domain adaptation idea proposed in Ganin et al. (2016). Let \( G_y(\tilde{z}^i; \theta_y) \) and \( G_d(\tilde{z}^i; \theta_d) \) represent the source label classifier (to predict source labels \( y_i \)) and the domain label classifier (to predict domain labels \( d_i \)), respectively, with parameters \( \theta_y \) and \( \theta_d \), for a given input \( \tilde{z}^i \). Here, \( G_y(\cdot) \) and \( G_d(\cdot) \) can be deep neural networks. Let us denote their loss functions respectively as \[ \mathcal{L}_y(x^i; \theta_y, \theta_e) = \mathcal{L}_B(G_y(V_e(x^i; \theta_e); \theta_y), y_i); \quad \mathcal{L}_d(x^i; \theta_d, \theta_e) = \mathcal{L}_B(G_d(V_e(x^i; \theta_e); \theta_d), d_i) \] where \( \mathcal{L}_B \) is a classification loss such as the binary or categorical cross-entropy loss, and \( V_e(x^i; \theta_e) \) is the VRNN encoder that maps input \( \mathbf{x}^i \) to \( \tilde{z}^i \). Now, for adversarial training, we consider the following domain adaptation term as the regularizer of equation 1: \[ \mathcal{R}(\theta_e) = \max_{\theta_d} \left[ -\frac{1}{n} \sum_{i=1}^n \mathcal{L}_d(x^i; \theta_d, \theta_e) - \frac{1}{n'} \sum_{i=n+1}^{N} \mathcal{L}_d(x^i; \theta_d, \theta_e) \right] \tag{2} \] where \( n' \) is the number of target domain samples. As shown in Ganin et al. (2016), \( \mathcal{R} \) is the domain regularizer, and it is derived from the empirical \( \mathcal{H} \)-divergence between the source domain and target domain samples (Ben-David et al. (2010)).
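Since both the approximate posterior and the prior above are diagonal Gaussians, the KL term in \( \mathcal{L}_r \) has a closed form, and the log-likelihood term is a Gaussian density. The following is a minimal sketch of the per-time-step loss under these assumptions; the tensor and function names are illustrative, not the authors' code.

```python
# A minimal sketch of the per-time-step VRNN loss: the closed-form KL between
# two diagonal Gaussians minus the Gaussian log-likelihood of x_t under the
# decoder, i.e. the negative per-step ELBO. Names are illustrative.
import math
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ), summed over dims
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0, dim=-1)

def gaussian_log_likelihood(x, mu_x, logvar_x):
    # log N(x; mu_x, diag(exp(logvar_x))), summed over feature dims
    return -0.5 * torch.sum(
        logvar_x + (x - mu_x) ** 2 / logvar_x.exp() + math.log(2 * math.pi), dim=-1)

def vrnn_step_loss(x_t, posterior, prior, decoder_out):
    (mu_z, logvar_z), (mu_0, logvar_0), (mu_x, logvar_x) = posterior, prior, decoder_out
    # Negative per-step ELBO: KL term minus reconstruction log-likelihood;
    # summing this over t = 1..T gives the (negated) objective L_r for one sample.
    return gaussian_kl(mu_z, logvar_z, mu_0, logvar_0) \
        - gaussian_log_likelihood(x_t, mu_x, logvar_x)
```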
Combining the joint optimization problem of equations (1) and (2) leads to our VRADA model, where we minimize the source classification risk and at the same time achieve domain adaptation. Mathematically, we optimize the following complete objective function: \[ E(\theta_e, \theta_g, \theta_y, \theta_d) = \frac{1}{N} \sum_{i=1}^N \frac{1}{T^i} \mathcal{L}_r(x^i; \theta_e, \theta_g) + \frac{1}{n} \sum_{i=1}^n \mathcal{L}_y(x^i; \theta_y, \theta_e) - \lambda \Big(\frac{1}{n} \sum_{i=1}^n \mathcal{L}_d(x^i; \theta_d, \theta_e) + \frac{1}{n'} \sum_{i=n+1}^N \mathcal{L}_d(x^i; \theta_d, \theta_e)\Big) \tag{3} \] where \( \lambda \) trades off between learning domain-invariant representations and optimizing source classification accuracy. Our optimization involves minimization with respect to some parameters and maximization with respect to the others, i.e., we iteratively solve \[ (\hat{\theta}_g, \hat{\theta}_y, \hat{\theta}_e) = \arg \min_{\theta_g, \theta_y, \theta_e} E(\theta_e, \theta_g, \theta_y, \hat{\theta}_d) \] \[ \hat{\theta}_d = \arg \max_{\theta_d} E(\hat{\theta}_e, \hat{\theta}_g, \hat{\theta}_y, \theta_d) \] with the gradient updates calculated as: \[ \theta_e \leftarrow \theta_e - \eta \Big(\frac{\partial \mathcal{L}_r}{\partial \theta_e} + \frac{\partial \mathcal{L}_y}{\partial \theta_e} - \lambda \frac{\partial \mathcal{L}_d}{\partial \theta_e}\Big) \tag{4} \] \[ \theta_g \leftarrow \theta_g - \eta \frac{\partial \mathcal{L}_r}{\partial \theta_g} \tag{5} \] \[ \theta_d \leftarrow \theta_d - \eta \frac{\partial \mathcal{L}_d}{\partial \theta_d} \tag{6} \] \[ \theta_y \leftarrow \theta_y - \eta \frac{\partial \mathcal{L}_y}{\partial \theta_y} \tag{7} \] where \( \eta \) is the learning rate. We can use stochastic gradient descent (SGD) to solve equations (5–7). To solve equation (4), we can use SGD together with the gradient reversal layer (GRL) (Ganin et al. (2016)). The role of the GRL is to reverse the gradient sign while performing backpropagation; this ensures that the domain classification loss is maximized, which makes the feature representations domain-invariant. Thus, VRADA learns feature representations which are domain-invariant (due to the domain regularizer \( \mathcal{R} \)) and which capture temporal latent dependencies (due to optimizing the VRNN objective function \( \mathcal{L}_r \)). Together, these allow the VRADA's discriminative power on the source domain to transfer to the target domain.
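For illustration, here is a minimal sketch of a gradient reversal layer as used for the \( \theta_e \) update in equation (4): an identity map on the forward pass, and a negated (and \( \lambda \)-scaled) gradient on the backward pass, following Ganin et al. (2016). The class and function names here are our own, not from a specific library.

```python
# A minimal sketch of the gradient reversal layer (GRL): identity on the
# forward pass, gradient negation scaled by lambda on the backward pass.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)                    # identity in the forward direction

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None    # reversed (and scaled) gradient

def grad_reverse(z, lam=1.0):
    return GradReverse.apply(z, lam)

# Usage: domain_logits = G_d(grad_reverse(z_tilde, lam)). Minimizing the domain
# loss through this layer maximizes it with respect to the feature extractor,
# which is what makes the representation z_tilde domain-invariant.
```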
4 EXPERIMENTS

We conduct experiments on two real-world health care datasets to answer the following questions: (a) How does our VRADA model perform when compared to the state-of-the-art domain adaptation and non-adaptation approaches? (b) How different are the domain-invariant representations learned by various domain adaptation methods? (c) How do we show that the temporal latent dependencies are transferred between domains? In the remainder of this section, we describe the datasets, methods, and empirical results, and show visualizations to answer the above questions.

4.1 DATASET DESCRIPTION

We conduct experiments on two health care datasets: the MIMIC-III dataset and a Pediatric ICU (PICU) dataset from Children's Hospital Los Angeles. MIMIC-III (Johnson et al. (2016)) is a public dataset with deidentified clinical care data collected at Beth Israel Deaconess Medical Center from 2001 to 2012. It contains over 58,000 hospital admission records of 38,645 adults and 7,875 neonates.

For our experiments, we extracted the following two datasets:

• Adult-AHRF dataset: To study domain adaptation for adult patients with acute hypoxemic respiratory failure (AHRF), we extracted 20 time series features (such as base excess, blood pH value, mean air pressure, PaO2, etc.) from 5527 admission records based on Khemani et al. (2009). We grouped the patients into 4 groups/cohorts based on their age[1]: Group 2: working-age adult (20 to 45 yrs, 508 patients); Group 3: old working-age adult (46 to 65 yrs, 1888 patients); Group 4: elderly (66 to 85 yrs, 2394 patients); Group 5: old elderly (85 yrs and up, 437 patients). We treated each group as a separate domain with which we could perform domain adaptation. For each patient, we used the first 4 days after admission (with each day serving as a single time-step) as time series data for training and testing our models.

• ICD9 dataset: For this dataset we extracted 99 time series features from 19714 admission records across 4 modalities: input-events (fluids into the patient, e.g., insulin), output-events (fluids out of the patient, e.g., urine), lab-events (lab test results, e.g., blood pH values, platelet count, etc.), and prescription-events (drugs prescribed by doctors, e.g., aspirin, potassium chloride, etc.). These modalities are known to be extremely useful for monitoring ICU patients. All the time series are more than 48 hours in duration, and only the first 24 hours after admission, sampled every 2 hours, are used for training and testing our models. We use this dataset to predict the ICD9 diagnosis code categories for each patient's admission record.

Child-AHRF dataset: This is a PICU dataset which contains health records of 398 child patients with acute hypoxemic respiratory failure in the intensive care unit at Children's Hospital Los Angeles (CHLA) (Khemani et al. (2009)). Similar to Adult-AHRF, this dataset has 20 time series features collected for 4 days after ICU admission. This dataset is considered as one group (Group 1: children, age 0 to 19 yrs) and represents one domain.

4.1.1 PREDICTION AND DOMAIN ADAPTATION TASKS

Mortality Prediction: For the Adult-AHRF and Child-AHRF datasets, we are interested in predicting mortality, i.e., whether a patient dies from AHRF during their hospital stay. 20.10% of all patients in Child-AHRF and 13.84% of all patients in Adult-AHRF have a positive mortality label (i.e., the patients who die in hospital).

ICD9 Code Prediction: Each admission record in the MIMIC-III dataset has multiple ICD-9 diagnosis codes. We group all occurrences of the ICD-9 codes into 20 diagnosis groups[2]. For the ICD9 dataset, we are interested in predicting these 20 ICD-9 diagnosis categories for each admission record. We treat this as a multi-task prediction problem.

Domain Adaptation Tasks: We study the unsupervised domain adaptation task (i.e., target domain labels are unavailable during training and validation) within age groups of the Adult-AHRF dataset and the ICD9 dataset, and across the Adult- and Child-AHRF datasets. For the Adult-AHRF and ICD9 datasets, we created 12 source-target domain pairs using the age groups, pairing up each domain \( D_i \) with another domain \( D_{j \neq i} \); for example, the source-target pair 2-5 was used for adapting from group 2 (working-age adult) to group 5 (old elderly). We also created 4 source-target pairs for performing domain adaptation from the 4 adult age groups to the 1 child age group.
4.2 METHODS AND IMPLEMENTATION DETAILS

We categorize the methods used in our main experiments into the following groups:

• Non-adaptive baseline methods: Logistic Regression (LR), Adaboost with decision tree classifiers (Adaboost), and feed-forward deep neural networks (DNN)

• Deep domain adaptation methods: Domain Adversarial Neural Networks (DANN) (Ganin et al. (2016)); DANN with an RNN (LSTM) as feature extractor (R-DANN); Variational Fair Autoencoder (VFAE) (Louizos et al. (2015))

• Our method: Variational Recurrent Adversarial Deep Domain Adaptation (VRADA)[3]

[1]: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/
[2]: http://tdrdata.com/ipd/ipd_SearchForICD9CodesAndDescriptions.aspx ("Conditions Originating in the Perinatal Period" is not present in the preprocessed dataset.)
[3]: Codes will be publicly released soon

In all our experiments, we conducted unsupervised domain adaptation, where target domain labels are unavailable during training and validation. For R-DANN, we used an LSTM (Hochreiter & Schmidhuber (1997)) as the feature extractor network instead of the feed-forward neural networks used in DANN. For VFAE, DANN, and all the non-domain-adaptive approaches, we flattened the time series along the time axis and treated it as the input to the model. For fairness, the classifiers and feature extractors of VRADA and R-DANN were equivalent in depth and had the same model capacity. We also ensured that the sizes of the latent feature representations \( \tilde{z}^i \) were similar for the VRADA and DANN models. The model capacity of VFAE was chosen to be similar to VRADA. All the deep domain adaptation models, including ours, had a depth of 8 layers (including the output classifier layers). We used the Adam optimizer (Kingma & Ba (2014)) and ran all models for 500 epochs with a learning rate of \( 3\mathrm{e}{-4} \), with an early stopping criterion: training stops when the model does not experience a decrease in validation loss for 20 epochs. Source domain data was split into train/validation subsets with a 70/30 ratio, and target domain data into train/validation/test subsets with a 70/15/15 ratio. In order to compare all the methods, we report AUC scores both on the entire target domain set and on the test subset of the target domain data for each source-target pair.
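As a concrete illustration of the training protocol just described (Adam with learning rate 3e-4, up to 500 epochs, early stopping with patience 20), a minimal sketch follows; the model and data-loading interfaces are placeholders, not the authors' code.

```python
# A minimal sketch of the training loop described above. model.loss and the
# loaders are placeholders standing in for the full VRADA objective and data.
import torch

def train(model, train_loader, val_loss_fn, epochs=500, lr=3e-4, patience=20):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(epochs):
        for batch in train_loader:
            opt.zero_grad()
            loss = model.loss(batch)        # placeholder: complete objective E(.)
            loss.backward()
            opt.step()
        val = val_loss_fn(model)            # validation loss on the held-out split
        if val < best_val:
            best_val, bad_epochs = val, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:      # stop after 20 epochs without improvement
                break
```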
4.3 QUANTITATIVE RESULTS

In Table 1, we compare the performance of the non-domain-adaptation and domain adaptation models on the target domain test subset for the AHRF mortality prediction task. It is immediately clear that domain adaptation methods consistently outperform non-domain-adaptation methods. We see that VRADA generally outperforms both variants of DANN, with scores consistently ~4% higher. While the standard deviation for VRADA was about 1%, it was about 2% for R-DANN, further demonstrating our model's efficacy, as it converges to more stable local optima. Our VRADA model beats the state-of-the-art DANN (Ganin et al. (2016)) and VFAE (Louizos et al. (2015)) on all the source-target pair domain adaptation tasks for the Adult-AHRF dataset. For domain adaptation from the Adult-AHRF to the Child-AHRF dataset, we observe that VRADA mostly outperforms all the competing models. This shows that our model can perform well even for smaller target domain datasets.

Table 1: AUC comparison for the AHRF mortality prediction task. We test classification without adaptation using Logistic Regression (LR), Adaboost with decision tree classifiers, and feed-forward Deep Neural Networks (DNN); and with adaptation using Domain Adversarial Neural Networks (DANN), a DANN with an LSTM in its feature extractor (R-DANN), the Variational Fair Autoencoder (VFAE), and our Variational Recurrent Adversarial Deep Domain Adaptation model (VRADA). All results are reported on the target domain test subset.

<table> <tr> <th>Source-Target</th> <th>LR</th> <th>Adaboost</th> <th>DNN</th> <th>DANN</th> <th>VFAE</th> <th>R-DANN</th> <th>VRADA</th> </tr> <tr> <td>3 - 2</td> <td>0.555</td> <td><b>0.562</b></td> <td>0.569</td> <td>0.572</td> <td>0.615</td> <td>0.603</td> <td><b>0.654</b></td> </tr> <tr> <td>4 - 2</td> <td>0.624</td> <td>0.645</td> <td>0.569</td> <td>0.589</td> <td>0.635</td> <td>0.584</td> <td><b>0.656</b></td> </tr> <tr> <td>5 - 2</td> <td>0.527</td> <td><b>0.554</b></td> <td>0.551</td> <td>0.540</td> <td>0.588</td> <td>0.611</td> <td><b>0.616</b></td> </tr> <tr> <td>2 - 3</td> <td><b>0.627</b></td> <td>0.621</td> <td>0.550</td> <td>0.563</td> <td>0.585</td> <td>0.708</td> <td><b>0.724</b></td> </tr> <tr> <td>4 - 3</td> <td><b>0.681</b></td> <td>0.636</td> <td>0.542</td> <td>0.527</td> <td>0.722</td> <td><b>0.821</b></td> <td>0.770</td> </tr> <tr> <td>5 - 3</td> <td>0.655</td> <td><b>0.706</b></td> <td>0.503</td> <td>0.518</td> <td>0.608</td> <td>0.769</td> <td><b>0.782</b></td> </tr> <tr> <td>2 - 4</td> <td>0.585</td> <td><b>0.591</b></td> <td>0.530</td> <td>0.560</td> <td>0.582</td> <td>0.716</td> <td><b>0.777</b></td> </tr> <tr> <td>3 - 4</td> <td><b>0.652</b></td> <td>0.629</td> <td>0.531</td> <td>0.527</td> <td>0.697</td> <td><b>0.769</b></td> <td>0.764</td> </tr> <tr> <td>5 - 4</td> <td>0.689</td> <td><b>0.699</b></td> <td>0.538</td> <td>0.532</td> <td>0.614</td> <td>0.728</td> <td><b>0.738</b></td> </tr> <tr> <td>2 - 5</td> <td><b>0.565</b></td> <td>0.543</td> <td>0.549</td> <td>0.526</td> <td>0.555</td> <td>0.659</td> <td><b>0.719</b></td> </tr> <tr> <td>3 - 5</td> <td>0.576</td> <td><b>0.587</b></td> <td>0.510</td> <td>0.526</td> <td>0.533</td> <td>0.630</td> <td><b>0.721</b></td> </tr> <tr> <td>4 - 5</td> <td><b>0.682</b></td> <td>0.587</td> <td>0.575</td> <td>0.548</td> <td>0.712</td> <td>0.747</td> <td><b>0.775</b></td> </tr> <tr> <td>5 - 1</td> <td>0.502</td> <td><b>0.573</b></td> <td>0.557</td> <td>0.563</td> <td>0.618</td> <td>0.563</td> <td><b>0.639</b></td> </tr> <tr> <td>4 - 1</td> <td><b>0.565</b></td> <td>0.533</td> <td>0.572</td> <td>0.542</td> <td><b>0.668</b></td> <td>0.577</td> <td>0.636</td> </tr> <tr> <td>3 - 1</td> <td>0.500</td> <td>0.500</td> <td>0.542</td> <td>0.535</td> <td>0.570</td> <td>0.591</td> <td><b>0.631</b></td> </tr> <tr> <td>2 - 1</td> <td><b>0.520</b></td> <td>0.500</td> <td>0.534</td> <td>0.559</td> <td>0.578</td> <td>0.630</td> <td><b>0.637</b></td> </tr> </table>

Since the AHRF mortality prediction task made it clear that domain adaptation is necessary for inter-group adaptation, for the ICD9 multi-task prediction task (whose data has time-steps of length 12) we focused strictly on the domain-adaptive models (i.e., DANN, R-DANN, and VRADA). Table 2 shows the aggregated AUC scores on the entire target domain dataset and on the test subset of the target domain for the 20 tasks of the ICD9 code prediction task.
Here, we clearly see that the VRADA and R-DANN models outperform DANN (Ganin et al. (2016)) by significant margins. We also observe that VRADA outperforms R-DANN by \( 1.5 \sim 2\% \) when averaged over all the source-target domain pairs.

Table 2: AUC comparison for the ICD9 diagnosis code prediction task on the ICD9 dataset. For each model, the top row corresponds to performance on the entire target domain dataset and the bottom row to performance on the test subset (15%) of the target domain dataset.

<table> <tr> <th>Model</th> <th>Target set</th> <th>2-3</th> <th>2-4</th> <th>2-5</th> <th>3-2</th> <th>3-4</th> <th>3-5</th> <th>4-2</th> <th>4-3</th> <th>4-5</th> <th>5-2</th> <th>5-3</th> <th>5-4</th> </tr> <tr> <td rowspan="2">DANN</td> <td>entire target</td> <td>0.513</td><td>0.508</td><td>0.509</td><td>0.511</td><td>0.508</td><td>0.514</td><td>0.511</td><td>0.507</td><td>0.512</td><td>0.505</td><td>0.508</td><td>0.506</td> </tr> <tr> <td>target test</td> <td>0.509</td><td>0.513</td><td>0.531</td><td>0.527</td><td>0.515</td><td>0.531</td><td>0.515</td><td>0.521</td><td>0.521</td><td>0.518</td><td>0.514</td><td>0.519</td> </tr> <tr> <td rowspan="2">R-DANN</td> <td>entire target</td> <td>0.608</td><td>0.581</td><td>0.562</td><td>0.618</td><td>0.610</td><td>0.586</td><td>0.604</td><td>0.607</td><td>0.575</td><td>0.573</td><td>0.558</td><td>0.566</td> </tr> <tr> <td>target test</td> <td>0.605</td><td>0.579</td><td>0.570</td><td>0.628</td><td>0.609</td><td>0.589</td><td>0.614</td><td>0.616</td><td>0.586</td><td>0.573</td><td>0.563</td><td>0.564</td> </tr> <tr> <td rowspan="2">VRADA</td> <td>entire target</td> <td>0.620</td><td>0.564</td><td>0.557</td><td>0.611</td><td>0.617</td><td>0.580</td><td>0.598</td><td>0.615</td><td>0.588</td><td>0.571</td><td>0.582</td><td>0.576</td> </tr> <tr> <td>target test</td> <td>0.609</td><td>0.563</td><td>0.560</td><td>0.620</td><td>0.617</td><td>0.580</td><td>0.606</td><td>0.623</td><td>0.594</td><td>0.576</td><td>0.581</td><td>0.576</td> </tr> </table>

4.4 DISCUSSION

Figure 3 shows the temporal latent dependencies captured by our VRADA as compared to R-DANN for the 3-4 source-target pair. While both models learn temporal latent dependencies fairly well, VRADA outperforms R-DANN in two ways. First, VRADA's neurons learned stronger indications of whether features are relevant to modeling the data: if we look at the VRADA row, for both AHRF and ICD9 we see that the neural activation patterns are more consistent across time-steps than for R-DANN. Figure 4 shows the unrolled memory cell states (in the form Examples × (Time × Neurons)) for all the source and target domain data points. We see consistent activation firing patterns across all these data points for VRADA but not for R-DANN.
Together with the stronger performance on 3-4 for AHRF and 2-5 for ICD9, this potentially indicates that VRADA learns the temporal dependencies better. Second, nuanced values are consistent across time-steps for VRADA, exhibiting a gradual transition towards stronger activation with time, whereas the temporal activation pattern of R-DANN seems somewhat sporadic. While activation gradients across time are consistent for both R-DANN and VRADA, the more consistent inhibitory and excitatory neuron firing patterns indicate that VRADA transfers knowledge better.

Another indication of domain adaptation is shown in Figure 1c. Looking at the t-SNE projections of the feature representations of DNN, R-DANN, and VRADA, we can see that the addition of temporal latent dependencies might help in better mixing of the domain distributions, since we observe that the data is more evenly spread out. Figure 1c and Figure 3 together indicate that VRADA's ability to capture temporal latent dependencies and its ability to create domain-invariant representations act synergistically. For plots of activation patterns without domain adaptation, please see appendix section 6.2.3.

5 SUMMARY

Because of its diverse range of patients and its episodic and longitudinal nature, healthcare data provides a good platform to test domain adaptation techniques for temporal data. With it as our example, we showcase the Variational Recurrent Adversarial Domain Adaptation (VRADA) model's ability to learn temporal latent representations that are domain-invariant. By comparing our model's latent representations to others', we show its ability to use variational methods to capture hidden factors of variation and produce more robust domain-invariant representations. We hope this work serves as a bedrock for future work capturing and adapting temporal latent representations across domains.

ACKNOWLEDGMENTS

This material is based upon work supported by the NSF research grants IIS-1134990 and IIS-1254206, a Samsung GRO Grant, and the NSF Graduate Research Fellowship Program under Grant No. DGE-1418060. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies. We also acknowledge Thailand's Development and Promotion of Science and Technology Talents Project for financial support. We thank Dr. Robinder Khemani for sharing the Child-AHRF dataset.

Figure 3: Cell states of the memory cell for R-DANN and VRADA, showing temporal latent dependencies captured by neurons of the R-DANN and VRADA for the source domain and transferred to the target domain. Each step along the y-axis refers to the activation of a single neuron, with blue for strong inhibition and yellow for strong excitation. Each step along the x-axis refers to activation per time-step. The left shows a single example in adapting 3-4, and the right in adapting 2-5.

Figure 4: Cell states of the memory cell for R-DANN and VRADA, showing activation for all ICD9 2-5 adaptation examples. Here, we show temporal dependencies learned across (time, feature) pairs for examples in a domain. The y-axis values refer to values per data point, and the x-axis shows activation at (time, feature) pairs, with the time and feature dimensions flattened.

REFERENCES

Berhanu Alemayehu and Kenneth E Warner. The lifetime distribution of health care costs. Health Services Research, 39(3):627–642, 2004.

S Ben-David, J Blitzer, and K Crammer. Analysis of representations for domain adaptation.
Advances in Neural Information Processing Systems, pp. 137–144, 2007.

Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151–175, 2010.

John Blitzer. Domain adaptation of natural language processing systems. PhD thesis, University of Pennsylvania, 2007.

Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. Marginalized denoising autoencoders for domain adaptation. arXiv preprint arXiv:1206.4683, 2012.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. arXiv.org, May 2016.

Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2960–2967, 2013.

George Foster, Cyril Goutte, and Roland Kuhn. Discriminative instance weighting for domain adaptation in statistical machine translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 451–459. Association for Computational Linguistics, 2010.

Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495, 2014.

Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1), 2016.

Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2066–2073. IEEE, 2012.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, December 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Fei Huang and Alexander Yates. Distributional representations for handling sparsity in supervised sequence-labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1, pp. 495–503. Association for Computational Linguistics, 2009.

Jing Jiang. A literature survey on domain adaptation of statistical classifiers. URL: http://sifaka.cs.utuc.edu/jiang4/domainadaptation/survey, 2008.

Jing Jiang and ChengXiang Zhai. Instance weighting for domain adaptation in NLP. In ACL, volume 7, pp. 264–271, 2007.

AEW Johnson, TJ Pollard, L Shen, L Lehman, M Feng, M Ghassemi, B Moody, P Szolovits, LA Celi, and RG Mark. MIMIC-III, a freely accessible critical care database. Scientific Data, 2016.

Robinder G Khemani, David Conti, Todd A Alonzo, Robert D Bart III, and Christopher JL Newth. Effect of tidal volume in children with acute hypoxemic respiratory failure. Intensive Care Medicine, 35(8):1428–1437, 2009.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv.org, December 2013.

Zhiqiang Lao, Dinggang Shen, Zhong Xue, Bilge Karacali, Susan M Resnick, and Christos Davatzikos. Morphological classification of brains via high-dimensional shape transformations and machine learning methods. Neuroimage, 21(1):46–57, 2004.

Mingsheng Long and Jianmin Wang.
Learning transferable features with deep adaptation networks. CoRR, abs/1502.02791, 2015.

Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. arXiv preprint arXiv:1511.00830, 2015.

Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2009.

Vishal M Patel, Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3):53–69, 2015.

Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European Conference on Computer Vision, pp. 213–226. Springer, 2010.

Meena Seshamani and Alastair M Gray. A longitudinal study of the effects of age and time to death on hospital costs. Journal of Health Economics, 23(2):217–235, 2004.

Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 129–136, 2011.

Eric Tzeng, Judy Hoffman, Trevor Darrell, and Kate Saenko. Simultaneous deep transfer across domains and tasks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4068–4076, 2015.

Min Xiao and Yuhong Guo. Domain adaptation for sequence labeling tasks with a probabilistic language adaptation model. In ICML (1), pp. 293–301, 2013.

Yi Yang and Jacob Eisenstein. Unsupervised multi-domain adaptation with feature embeddings.

6 APPENDIX

6.1 TRAINING VARIATIONS

We tested 3 variations of training VRADA: (a) training VRADA regularly as discussed in Section 3 (denoted by I); (b) loading a pretrained VRNN encoder and optimizing strictly off the classification errors (denoted by II), i.e., \[ E(\theta_e, \theta_y, \theta_d) = \frac{1}{n} \sum_{i=1}^n \mathcal{L}_y(x^i; \theta_y, \theta_e) - \lambda \Big(\frac{1}{n} \sum_{i=1}^n \mathcal{L}_d(x^i; \theta_d, \theta_e) + \frac{1}{n'} \sum_{i=n+1}^{N} \mathcal{L}_d(x^i; \theta_d, \theta_e)\Big) \] and (c) loading a pretrained VRNN encoder and using the objective as presented in equation (3) (denoted by III). Key to note is that in method II we do not apply variational methods towards learning the shared latent representation; this was done to test whether they were helpful or harmful towards the learned latent representation used for classification. In method III, we train VRADA as normal but load a pretrained encoder. We pretrain the encoder by training the VRNN on all source and target domain samples for a desired source-target adaptation pair.
In order to choose how many samples would be used for training, we looked at which domain had more examples and chose the larger of the two. For example, if the source domain was group 2 with 508 patients and the target domain was group 5 with 437 patients, the VRNN would see 508 samples of each domain, with group 5 being sampled with replacement after all its samples had been seen. As the encoder was used for learning latent representations, we thought it worth investigating whether, if pretrained, it better captured the latent representations being used by the domain classifier for adversarial training. We thought beginning domain classification at a better initialization point might help VRADA avoid local minima. For each method, we fed one source domain sample to \( G_y \) and either a source or target domain sample to \( G_d \). (For this training and all training, sample order was randomized.) We only calculated the loss \( \mathcal{L}_r \) once for the \( G_d \) samples so as not to bias the optimization of the VRNN.

Table 3 shows the results of the AHRF mortality prediction task for the different types of VRADA training. From these experiments, we found that jointly training VRADA (i.e., method I) usually performed better than the pretrained training approaches.

Table 3: AUC comparison for the AHRF mortality prediction task for different types of VRADA training

<table> <tr> <th>Training</th> <th>2-3</th> <th>2-4</th> <th>2-5</th> <th>3-2</th> <th>3-4</th> <th>3-5</th> <th>4-2</th> <th>4-3</th> <th>4-5</th> <th>5-2</th> <th>5-3</th> <th>5-4</th> </tr> <tr> <td>I</td> <td>0.704</td> <td>0.777</td> <td>0.682</td> <td>0.540</td> <td>0.764</td> <td>0.721</td> <td>0.603</td> <td>0.727</td> <td>0.710</td> <td>0.616</td> <td>0.782</td> <td>0.738</td> </tr> <tr> <td>II</td> <td>0.724</td> <td>0.656</td> <td>0.719</td> <td>0.627</td> <td>0.748</td> <td>0.683</td> <td>0.656</td> <td>0.770</td> <td>0.755</td> <td>0.595</td> <td>0.736</td> <td>0.732</td> </tr> <tr> <td>III</td> <td>0.721</td> <td>0.688</td> <td>0.656</td> <td>0.654</td> <td>0.757</td> <td>0.691</td> <td>0.609</td> <td>0.766</td> <td>0.775</td> <td>0.602</td> <td>0.709</td> <td>0.714</td> </tr> </table>

6.2 MODEL VARIATIONS

6.2.1 ADVERSARIAL TRAINING AT EVERY TIME-STEP

A natural question is whether adversarial training at every time-step is more effective than adversarial training at the last time-step of a latent representation. If done at every time-step, the network learns to create domain-invariant representations of every prefix \( x_{\leq t} \) of the input. Do these domain-invariant representations help the network find more optimal domain-invariant representations of \( \mathbf{x} \)? We empirically tested this scenario (Table 4) and found the results to be sub-optimal compared to performing adversarial training only at the last time-step (Table 1); a sketch contrasting the two variants follows Table 4. Below are the results for the R-DANN and VRADA models with adversarial training at every time-step.

Table 4: AUC comparison for the AHRF mortality prediction task with adversarial training done at every time-step

<table> <tr> <th>Model</th> <th>2-3</th> <th>2-4</th> <th>2-5</th> <th>3-2</th> <th>3-4</th> <th>3-5</th> <th>4-2</th> <th>4-3</th> <th>4-5</th> <th>5-2</th> <th>5-3</th> <th>5-4</th> </tr> <tr> <td>R-DANN</td> <td>.651</td> <td>.599</td> <td>.598</td> <td>.557</td> <td>.679</td> <td>.534</td> <td>.563</td> <td>.768</td> <td>.588</td> <td>.528</td> <td>.696</td> <td>.669</td> </tr> <tr> <td>VRADA</td> <td>.681</td> <td>.691</td> <td>.643</td> <td>.594</td> <td>.733</td> <td>.641</td> <td>.733</td> <td>.794</td> <td>.675</td> <td>.583</td> <td>.755</td> <td>.726</td> </tr> </table>
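To make the two variants concrete, here is a minimal sketch contrasting adversarial training at every time-step with adversarial training at the last time-step only; z_seq, G_d, and the binary cross-entropy loss are illustrative placeholders, not the authors' code.

```python
# A minimal sketch of the two adversarial-training variants: applying the
# domain classifier G_d to the latent code at every time-step versus only at
# the last time-step. z_seq has shape (batch, T, z_dim); d holds float domain
# labels of the same shape as G_d's logit output.
import torch
import torch.nn.functional as F

def domain_loss_last_step(z_seq, d, G_d):
    # Adversarial loss on the final latent code z_T only.
    return F.binary_cross_entropy_with_logits(G_d(z_seq[:, -1]), d)

def domain_loss_every_step(z_seq, d, G_d):
    # Adversarial loss averaged over the latent code of every prefix x_{<=t}.
    T = z_seq.size(1)
    losses = [F.binary_cross_entropy_with_logits(G_d(z_seq[:, t]), d)
              for t in range(T)]
    return torch.stack(losses).mean()
```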
6.2.2 EFFECT OF RECONSTRUCTION LOSS

Table 5 shows the effect of the reconstruction loss on our VRADA model. We observe that reconstructing the original data (i.e., using the decoder to reconstruct the data) helps the overall performance of our VRADA model.

Table 5: AUC comparison of the VRADA model for the AHRF mortality prediction task with and without reconstruction loss

<table> <tr> <th>Model</th> <th>2-3</th> <th>2-4</th> <th>2-5</th> <th>3-2</th> <th>3-4</th> <th>3-5</th> <th>4-2</th> <th>4-3</th> <th>4-5</th> <th>5-2</th> <th>5-3</th> <th>5-4</th> </tr> <tr> <td>Without reconstruction</td> <td>.703</td> <td>.623</td> <td>.570</td> <td>.647</td> <td>.622</td> <td>.564</td> <td>.577</td> <td>.608</td> <td>.552</td> <td>.599</td> <td>.640</td> <td>.676</td> </tr> <tr> <td>With reconstruction</td> <td>.724</td> <td>.777</td> <td>.719</td> <td>.654</td> <td>.764</td> <td>.721</td> <td>.656</td> <td>.770</td> <td>.775</td> <td>.616</td> <td>.782</td> <td>.738</td> </tr> </table>

6.2.3 IMPACT OF ADVERSARIAL TRAINING

In Figures 5 and 6 we show the cell state activations for the VRADA and R-DANN without domain adaptation (i.e., no adversarial training). From these figures, we see that the dependencies between source and target domains are not transferred correctly, since we do not perform adversarial training. On the other hand, as discussed in Section 4.4, Figure 3 shows that adversarial training helps in transferring the dependencies between source and target domains efficiently.

6.3 R-DANN MODEL INFORMATION

Here we provide more details on the network architectures of the R-DANN and DANN. Please refer to Figure 7 for a diagram of the R-DANN model showing the dimensions of each layer and the connections between layers. The R-DANN and DANN were essentially identical except that, for the DANN, the first layer used a fully-connected layer instead of an RNN and took input flattened over the time dimension. Thus the input dimensions corresponded to \( f \) and \( t \times f \) for the R-DANN and DANN, respectively, where \( f \) is the number of features and \( t \) is the length of the time dimension.

Figure 5: Cell states of the memory cell for R-DANN and VRADA, showing temporal latent dependencies captured by neurons of the R-DANN and VRADA for the source domain and the target domain. Each step along the y-axis refers to the activation of a single neuron, with blue for strong inhibition and yellow for strong excitation. Each step along the x-axis refers to activation per time-step. The figure shows a single example in adapting 3-4 for the AHRF dataset.

Figure 6: Cell states of the memory cell for R-DANN and VRADA, showing temporal latent dependencies captured by neurons of the R-DANN and VRADA for the source domain and the target domain. Each step along the y-axis refers to the activation of a single neuron, with blue for strong inhibition and yellow for strong excitation. Each step along the x-axis refers to activation per time-step. The figure shows a single example in adapting 2-5 for the ICD9 dataset.

Figure 7: Block diagram of the R-DANN showing the number of neurons used in each layer and how the layers were connected. This model had a capacity of about 46,000 parameters.
ABSTRACT We study the problem of learning domain invariant representations for time series data while transferring the complex temporal latent dependencies between domains. Our model termed as Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) is built atop a variational recurrent neural network (VRNN) and trains adversarially to capture complex temporal relationships that are domain-invariant. This is (as far as we know) the first to capture and transfer temporal latent dependencies of multivariate time-series data. Through experiments on real-world multivariate healthcare time-series datasets, we empirically demonstrate that learning temporal dependencies helps our model’s ability to create domain-invariant representations, allowing our model to outperform current state-of-the-art deep domain adaptation approaches. 1 INTRODUCTION Many real-world applications require effective machine learning algorithms that can learn invariant representations across related time-series datasets. For example, precision medicine for patients of various age groups, mobile application recommendation for users based on locations, and so on. In these examples, while the domains (i.e. age group and location) may vary, there exist common predictive patterns that can aid in inferring knowledge from one domain to another. More often than not, some domains have a significantly larger number of observations than others (e.g., respiratory failure in adults vs. children). Therefore effective domain adaption of time-series data is in great demand. The general approach to tackling domain adaptation has been explored under many facets which include reducing the domain discrepancy between the source and target domains (Ben-David et al. (2007)), instance re-weighting (Jiang & Zhai (2007)), subspace alignment (Fernando et al. (2013)), and deep learning (Tzeng et al. (2015); Ganin & Lempitsky (2014)). Many of these approaches work very well for non-sequential data but are not suitable for multivariate time-series data as they do not usually capture the temporal dependencies present in the data. For sequential data, earlier work has successfully used dynamic Bayesian Networks (Huang & Yates (2009)) and Recurrent Neural Networks (Socher et al. (2011)) to learn latent feature representations which were domain-invariant. Unfortunately, these works were not flexible enough to model non-linear dynamics or did not explicitly capture and transfer the complex latent dependencies needed to perform domain adaptation of time-series data. In this paper, we address this problem with a model that learns temporal latent dependencies (i.e. dependencies between the latent variables across timesteps) that can be transferred across domains that experience different distributions in their features. We draw inspiration from the Variational Recurrent Neural Network (Chung et al. (2016)) and use variational methods to produce a latent representation that captures underlying temporal latent dependencies. Motivated by the theory of domain adaptation (Ben-David et al. (2010)), we perform adversarial training on this representation Figure 1: A Story of Temporal Dependency and Domain Invariance (a) DNN (b) R-DANN (c) VRADA ![t-SNE projections for the latent representations of DNN, R-DANN, and our VRADA model. We show adaption from Adult-AHRF to Child-AHRF data. Source data is represented with red circles and target data with blue circles. 
From left to right, one can see that domain adaptation results in mixing the source and target domain data distributions. We can also see a story of how encoding more temporal dependency into the latent representation induces more domain-invariant representations. As models capture more underlying factors of variation, post domain adaptation representations gradually smoothen and become evenly dispersed, indicating that temporal dependency acts synergistically with domain adaptation.](page_232_186_1124_273.png) t-SNE projections for the latent representations of DNN, R-DANN, and our VRADA model. We show adaption from Adult-AHRF to Child-AHRF data. Source data is represented with red circles and target data with blue circles. From left to right, one can see that domain adaptation results in mixing the source and target domain data distributions. We can also see a story of how encoding more temporal dependency into the latent representation induces more domain-invariant representations. As models capture more underlying factors of variation, post domain adaptation representations gradually smoothen and become evenly dispersed, indicating that temporal dependency acts synergistically with domain adaptation. similarly to the Domain Adversarial Neural Network (DANN) (Ganin et al. (2016)) to make the representations invariant across domains. We call our model the Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model. As far as we know, this is the first model capable of accomplishing unsupervised domain adaptation while transferring temporal latent dependencies for complex multivariate time-series data. Figure 1 shows an example of the domain invariant representations learned by different deep learning models including our VRADA model. From this figure, we can see that our model (VRADA) shows better mixing of the domain distributions than the competing models indicating that it learns better domain invariant representations. In order to prove the efficacy of our model, we perform domain adaptation using real-world healthcare time-series data. We choose healthcare data for two primary reasons. (1) Currently, a standard protocol in healthcare is to build, evaluate, and deploy machine learning models for particular datasets that may perform poorly on unseen datasets with different distributions. For example, models built around patient data from particular age groups perform poorly on other age groups because the features used to train the models have different distributions across the groups (Alemayehu & Warner (2004); Lao et al. (2004); Seshamani & Gray (2004)). Knowledge learned from one group is not transferable to the other group. Domain adaptation seems like a natural solution to this problem as knowledge needs to be transferred across domains which share features that exhibit different distributions. (2) Healthcare data has multiple attributes recorded per patient visit, and it is longitudinal and episodic in nature. Thus, healthcare data is a suitable platform on which to study a model which seeks to capture complex temporal representations and transfer this knowledge across domains. The rest of the paper is structured as follows. In the following section, we briefly discuss the current state-of-the-art deep domain adaptation approaches. Afterwards, we present our model mathematically, detailing how it simultaneously learns to capture temporal latent dependencies and create domain-invariant representations. 
In Section 4, we compare and contrast the performance of proposed approach with other approaches on two real-world health care datasets, and provide analysis on our domain-invariant representations. 2 RELATED WORK Domain adaptation is a specific instance of transfer learning in which the feature spaces are shared but their marginal distributions are different. A good survey on the two has been done in several previous works (Pan & Yang (2009); Jiang (2008); Patel et al. (2015)). Domain adaptation has been thoroughly studied in computer vision (Saenko et al. (2010); Gong et al. (2012); Fernando et al. (2013)) and natural language processing (NLP) (Blitzer (2007); Foster et al. (2010)) applications. Recently, the deep learning paradigm has become popular in domain adaptation (Chen et al. (2012); Tzeng et al. (2015); Yang & Eisenstein; Long & Wang (2015)) due to its ability to learn rich, flexible, non-linear domain-invariant representations. Here, we briefly discuss two deep domain adaptation approaches which are closely related to our proposed model. Domain Adversarial Neural Networks (DANN) Figure 2: Block diagram of VRADA. Blue lines show the inference process, \( q_{\theta_i}(z_t | x_{\leq t}, z_{<t}) \). Brown lines show the generation process, \( p_{\theta_j}(x_t | z_{\leq t}, x_{<t}) \). Red lines show the recurrence process where \( h_t \) is informed by \( h_{t-1} \), which is informed by \( z_{t-1} \) and \( x_{t-1} \). Black lines indicate classification. (Ganin et al. (2016)) is a deep domain adaptation model which uses two core components to create domain-invariant representations, a feature extractor that produces the data’s latent representation, and an adversarial domain labeler that attempts to classify that data’s domain to help the feature extractor produce latent representations which are domain-invariant. In Louizos et al. (2015), the authors propose Variational Fair AutoEncoder, which uses Variational Autoencoding architecture (Kingma & Welling (2013)) to learn latent representations where most of the information about certain known factors of variation are purged from the representation while still retaining as much information about the data as possible. While, these deep learning approaches learn domain-invariant representations, they fail to capture and transfer the underlying complex temporal latent relationships from one domain to another as they use convolutional or feed forward neural networks which we claim are not suitable for multivariate time-series data. Other works such as Huang & Yates (2009); Xiao & Guo (2013) have used distributed representations for domain adaptation in NLP sequence labeling tasks. However, they either induce hidden states as latent features using dynamic Bayesian networks (DBNs) or learn generalizable distributed representations of words using Recurrent Neural Networks (RNN) (Socher et al. (2011)) to enable domain adaptation. These works either model the highly non-linear dynamics, as one can with RNN, or capture the complex latent dependencies present in sequential data, as one can with DBNs, but not both. To overcome the challenges of DBNs and RNNs, Variational Recurrent Neural Network (VRNN) (Chung et al. (2016)) was proposed recently to capture the complex relationship between the underlying hidden factors of variation and the output variables at different time-steps. The VRNN uses Variational Autoencoders (VAEs)(Kingma & Welling (2013); Goodfellow et al. 
(2016)) at each time-step to learn a complex relationship between the latent hidden factors across time-steps. Like the VAE, its latent variable is parametric. Combined, these things make it well-suited for multimodal sequential data such as multivariate time-series. In the following section, we discuss our approach, Variational Adversarial Deep Domain Adaptation (VRADA), which uses a VRNN to model and transfer complex domain-invariant temporal latent relationships for unsupervised domain adaptation of multivariate time-series. 3 VARIATIONAL RECURRENT ADVERSARIAL DEEP DOMAIN ADAPTATION In this section, we present our Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model for the purpose of capturing and transferring temporal latent dependencies across domains via domain-invariant representations. First, we introduce the notations used in this paper and then discuss our VRADA model in detail. 3.1 NOTATIONS Let us denote a multivariate variable-length time series with \( N \) data samples as \( \{ \mathbf{x}_i^1 = (x_i^1)^{T_i^1}_{t=1} \}_{i=1}^N \), where \( x_i^t \in \mathbb{R}^D \). (Note: in our experiments, for all data samples \( T^n = \tau \), but for generality we maintain \( T^n \)). We denote \( \{ \mathbf{x}_S^1 \}_{i=1}^n \) as source domain data and \( \{ \mathbf{x}_T^1 \}_{i=n+1}^N \) as target domain data. We assume that each source domain data sample \( \mathbf{x}_S^i \) comes with \( L \) labels \( y_i \in \{0, 1\}^L \) (for example, these labels may correspond to a clinical outcome such as mortality or ICD9 diagnosis codes), while target domain has no labeled data samples. We assign a domain label \( d_i \in \{0, 1\} \) to each data sample to indicate if it comes from the source or target domain. \( d_i \) will be used for adversarial training. 3.2 VRADA The block diagram of our VRADA model is shown in Figure 2. To explicitly model the dependencies between the latent random variable across time steps, the VRADA model utilizes Variational Recurrent Neural Networks (VRNN) (Chung et al. (2016)). The VRNN effectively contains a Variational Auto-Encoders (Kingma & Welling (2013)) at every time step, all of which are conditioned on previous auto-encoders via the hidden state \( h_{t-1} \) of an RNN, such as an LSTM (Hochreiter & Schmidhuber (1997)). Therefore, for each time-step of \( x_t^i \), we infer a latent random variable \( z_t^i \) via \[ z_t^i | x_t^i \sim \mathcal{N}(\mu_{z,t}, \text{diag}(\sigma_{z,t})), \quad \text{where } [\mu_{z,t}, \sigma_{z,t}] = \varphi_\tau^{enc}(\varphi_\tau^e(x_t^i), h_{t-1}) \] with prior \[ z_t^i \sim \mathcal{N}(\mu_{0,t}, \text{diag}(\sigma_{0,t})), \quad \text{where } [\mu_{0,t}, \sigma_{0,t}] = \varphi_\tau^{prior}(h_{t-1}) \] where \( \mu_{*,t}, \sigma_{*,t} \) denote parameters of a generating distribution, and \( \varphi_\tau^* \) can be any highly flexible function such as deep neural networks. 
For each \( z_t^i \), \( x_t^i \) is generated via \[ x_t^i | z_t^i \sim \mathcal{N}(\mu_{x,t}, \text{diag}(\sigma_{x,t})), \quad \text{where } [\mu_{x,t}, \sigma_{x,t}] = \varphi_\tau^{dec}(\varphi_\tau^z(z_t^i), h_{t-1}) \] and learned by optimizing the VRNN objective function: \[ \mathcal{L}_r(x_t^i; \theta_e, \theta_g) = E_{q_{\theta_e}(z_{T:t}^i|x_{T:t}^i)}[\sum_{t=1}^T (-D(q_{\theta_e}(z_t^i|x_{<t}^i, z_{<t}^i)||p(z_t^i|x_{<t}^i, z_{<t}^i)) + \log p_{\theta_g}(x_t^i|z_{<t}^i, x_{<t}^i))] \] where \( q_{\theta_e}(z_t^i|x_{<t}^i, z_{<t}^i) \) is the inference model, \( p(z_t^i|x_{<t}^i, z_{<t}^i) \) is the prior, \( p_{\theta_g}(x_t^i|z_{<t}^i, x_{<t}^i) \) is the generative model, \( \theta_e \) is the parameters of the VRNN’s encoder, \( \theta_g \) the parameters of the VRNN’s decoder, and \( D(\cdot||\cdot) \) refers to KL-Divergence. Note: \( z_{<T} \) refers to the set of all \( z_t \) such that \( t \leq T \), likewise for \( z_{<T} \). For each \( x^i \), we use \( z^i \sim q_{\theta_e}(z_{T:t}^i|x_{T:t}^i, z_{<T}^i) \) as our feature representation for source domain classification task since it captures temporal latent dependencies across the time-steps. Training the VRNN for the source domain classification involves solving the following optimization: \[ \min_{\theta_e, \theta_g, \theta_y} \frac{1}{n} \sum_{i=1}^n \frac{1}{T_i} \mathcal{L}_r(x^i; \theta_e, \theta_g) + \frac{1}{n} \sum_{i=1}^n \mathcal{L}_y(x^i; \theta_y, \theta_e) + \lambda \mathcal{R}(\theta_e) \] where \( \mathcal{R}(\theta_e) \) is a regularizer for the parameters of VRNN encoder (which is also the feature extractor of VRADA) with a tuning hyperparameter \( \lambda \). As we are interested in achieving domain adaptation via the latent representation \( \tilde{z}^i \) (i.e. to make \( \tilde{z}^i \) domain-invariant), we can adversarially train the above objective function (equation 1) by employing the domain adaptation idea proposed in Ganin et al. (2016). Let \( G_y(\tilde{z}^i; \theta_y) \) and \( G_d(\tilde{z}^i; \theta_d) \) represent the source label classifier (to predict source labels \( y_i \)) and domain label classifier (to predict domain labels \( d_i \)) respectively with parameters \( \theta_y \) and \( \theta_d \) for a given input \( \tilde{z}^i \). Here, \( G_y(.) \) and \( G_d(.) \) can be deep neural networks. Let us denote their loss functions respectively as \[ \mathcal{L}_y(x^i; \theta_y, \theta_e) = \mathcal{L}_B(G_y(V_e(x^i; \theta_e); \theta_y), y_i); \quad \mathcal{L}_d(x^i; \theta_d, \theta_e) = \mathcal{L}_B(G_d(V_e(x^i; \theta_e); \theta_d), d_i) \] where \( \mathcal{L}_B \) is the classification loss such as a binary or categorical cross-entropy loss function and \( V_e(x^i; \theta_e) \) is the VRNN encoder that maps input \( x^i \) to \( \tilde{z}^i \). Now, for adversarial training, we consider the following domain adaptation term as the regularizer of equation 1 \[ \mathcal{R}(\theta_e) = \max_{\theta_d} \left[ -\frac{1}{n} \sum_{i=1}^n \mathcal{L}_d(x^i; \theta_d, \theta_e) - \frac{1}{n'} \sum_{i=n+1}^{n+N} \mathcal{L}_d(x^i; \theta_d, \theta_e) \right] \] where \( n' \) is the number of target domain samples. As shown in Ganin et al. (2016), \( \mathcal{R} \) is the domain regularizer and it is derived from the empirical \( \mathcal{H} \)-divergence between the source domain and target domain samples (Ben-David et al. (2010)). 
Combining the joint optimization problems of equations (1) and (2) leads to our VRADA model, where we minimize the source classification risk and at the same time achieve domain adaptation. Mathematically, we optimize the following complete objective function: \[ E(\theta_e, \theta_g, \theta_y, \theta_d) = \frac{1}{N} \sum_{i=1}^N \frac{1}{T^i} \mathcal{L}_r(x^i; \theta_e, \theta_g) + \frac{1}{n} \sum_{i=1}^n \mathcal{L}_y(x^i; \theta_y, \theta_e) - \lambda \left( \frac{1}{n} \sum_{i=1}^n \mathcal{L}_d(x^i; \theta_d, \theta_e) + \frac{1}{n'} \sum_{i=n+1}^N \mathcal{L}_d(x^i; \theta_d, \theta_e) \right) \quad (3) \] where \( \lambda \) trades off between learning domain-invariant representations and source classification accuracy. Our optimization involves minimization with respect to some parameters and maximization with respect to the others, i.e., we iteratively solve: \[ (\hat{\theta}_g, \hat{\theta}_y, \hat{\theta}_e) = \arg \min_{\theta_g, \theta_y, \theta_e} E(\theta_e, \theta_g, \theta_y, \hat{\theta}_d) \] \[ \hat{\theta}_d = \arg \max_{\theta_d} E(\hat{\theta}_e, \hat{\theta}_g, \hat{\theta}_y, \theta_d) \] with the gradient updates calculated as: \[ \theta_e \leftarrow \theta_e - \eta \left( \frac{\partial \mathcal{L}_r}{\partial \theta_e} + \frac{\partial \mathcal{L}_y}{\partial \theta_e} - \lambda \frac{\partial \mathcal{L}_d}{\partial \theta_e} \right) \quad (4) \] \[ \theta_g \leftarrow \theta_g - \eta \frac{\partial \mathcal{L}_r}{\partial \theta_g} \quad (5) \] \[ \theta_d \leftarrow \theta_d - \eta \frac{\partial \mathcal{L}_d}{\partial \theta_d} \quad (6) \] \[ \theta_y \leftarrow \theta_y - \eta \frac{\partial \mathcal{L}_y}{\partial \theta_y} \quad (7) \] where \( \eta \) is the learning rate. We can use stochastic gradient descent (SGD) to solve equations (5)-(7). To solve equation (4), we use SGD together with the gradient reversal layer (GRL) (Ganin et al. (2016)). The role of the GRL is to reverse the gradient sign while performing backpropagation. This ensures that the domain classification loss is maximized with respect to the feature extractor, which makes the feature representations domain-invariant. Thus, VRADA learns feature representations that are domain-invariant (due to the domain regularizer \( \mathcal{R} \)) and that capture temporal latent dependencies (due to optimizing the VRNN objective function \( \mathcal{L}_r \)). Together, these allow the VRADA's discriminative power on the source domain to transfer to the target domain. 4 EXPERIMENTS We conduct experiments on two real-world health care datasets to answer the following questions: (a) How does our VRADA model perform when compared to state-of-the-art domain adaptation and non-adaptation approaches? (b) How different are the domain-invariant representations learned by various domain adaptation methods? (c) How can we show that temporal latent dependencies are transferred between domains? In the remainder of this section, we describe the datasets and methods, report empirical results, and show visualizations to answer the above questions. 4.1 DATASET DESCRIPTION We conduct experiments on two health care datasets: the MIMIC-III dataset and a Pediatric ICU (PICU) dataset from Children's Hospital Los Angeles. MIMIC-III (Johnson et al. (2016)) is a public dataset with deidentified clinical care data collected at Beth Israel Deaconess Medical Center from 2001 to 2012. It contains over 58,000 hospital admission records of 38,645 adults and 7,875 neonates.
For our experiments, we extracted the following two datasets: • Adult-AHRF dataset: To study domain adaptation for adult patients with acute hypoxemic respiratory failure (AHRF), we extracted 20 time series features (such as base excess, blood pH value, mean airway pressure, PaO2, etc.) from 5527 admission records based on Khemani et al. (2009). We grouped the patients into 4 groups/cohorts based on their age[1]: Group 2: working-age adult (20 to 45 yrs, 508 patients); Group 3: old working-age adult (46 to 65 yrs, 1888 patients); Group 4: elderly (66 to 85 yrs, 2394 patients); Group 5: old elderly (85 yrs and up, 437 patients). We treated each group as a separate domain with which we could perform domain adaptation. For each patient, we used the first 4 days after admission (with each day serving as a single time-step) as the time series data for training and testing our models. • ICD9 dataset: For this dataset we extracted 99 time series features from 19714 admission records across 4 modalities, including input-events (fluids into the patient, e.g., insulin), output-events (fluids out of the patient, e.g., urine), lab-events (lab test results, e.g., blood pH values, platelet count, etc.) and prescription-events (drugs prescribed by doctors, e.g., aspirin, potassium chloride, etc.). These modalities are known to be extremely useful for monitoring ICU patients. All the time series are longer than 48 hours, and only the first 24 hours after admission, sampled at 2-hour intervals, are used for training and testing our models. We use this dataset to predict the ICD-9 diagnosis code categories for each patient's admission record. Child-AHRF dataset: This is a PICU dataset containing the health records of 398 children with acute hypoxemic respiratory failure in the intensive care unit at Children's Hospital Los Angeles (CHLA) (Khemani et al. (2009)). Similar to Adult-AHRF, this dataset has 20 time series features collected for 4 days after ICU admission. This dataset is considered as one group (Group 1: children, age 0 to 19 yrs) and represents one domain. 4.1.1 PREDICTION AND DOMAIN ADAPTATION TASKS Mortality Prediction: For the Adult-AHRF and Child-AHRF datasets, we are interested in predicting mortality, i.e., whether a patient dies from AHRF during their hospital stay. 20.10% of all patients in Child-AHRF and 13.84% of all patients in Adult-AHRF have a positive mortality label (i.e., patients who died in hospital). ICD9 Code Prediction: Each admission record in the MIMIC-III dataset has multiple ICD-9 diagnosis codes. We group all occurrences of the ICD-9 codes into 20 diagnosis groups[2]. For the ICD9 dataset, we are interested in predicting these 20 ICD-9 diagnosis categories for each admission record. We treat this as a multi-task prediction problem. Domain Adaptation Tasks: We study the unsupervised domain adaptation task (i.e., target domain labels are unavailable during training and validation) within the age groups of the Adult-AHRF dataset and the ICD9 dataset, and across the Adult- and Child-AHRF datasets. For the Adult-AHRF and ICD9 datasets, we created 12 source-target domain pairs using the age groups, pairing each domain \( D_i \) with every other domain \( D_{j \neq i} \); for example, the source-target pair 2-5 was used for adapting from group 2 (working-age adult) to group 5 (old elderly). We also created 4 source-target pairs for performing domain adaptation from the 4 adult age groups to the 1 child age group, as enumerated in the sketch below.
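As a concrete illustration of the bookkeeping (not the authors' code; group ids follow the paper's numbering), the pairs enumerate as follows:

```python
from itertools import permutations

adult_groups = [2, 3, 4, 5]                      # Adult-AHRF / ICD9 age groups
pairs = list(permutations(adult_groups, 2))      # 12 ordered source-target pairs, e.g. (2, 5)
adult_to_child = [(g, 1) for g in adult_groups]  # 4 pairs onto Child-AHRF (Group 1)
assert len(pairs) == 12 and len(adult_to_child) == 4
```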
4.2 METHODS AND IMPLEMENTATION DETAILS We categorize the methods used in our main experiments into the following groups: • Non-adaptive baseline methods: Logistic Regression (LR), Adaboost with decision tree classifiers (Adaboost), and feed-forward deep neural networks (DNN) • Deep domain adaptation methods: Domain Adversarial Neural Networks (DANN) (Ganin et al. (2016)); DANN with an RNN (LSTM) as feature extractor (R-DANN); Variational Fair Autoencoder (VFAE) (Louizos et al. (2015)) • Our method: Variational Recurrent Adversarial Deep Domain Adaptation (VRADA)[3] [1]: https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/ [2]: http://tdrdata.com/ipd/ipd_SearchForICD9CodesAndDescriptions.aspx “Conditions Originating in the Perinatal Period” is not present in the preprocessed dataset. [3]: Code will be publicly released soon. In all our experiments, we conducted unsupervised domain adaptation, where target domain labels are unavailable during training and validation. For R-DANN, we used an LSTM (Hochreiter & Schmidhuber (1997)) as the feature extractor network instead of the feed-forward neural networks used in DANN. For VFAE, DANN, and all the non-domain-adaptive approaches, we flattened the time series along the time axis and treated it as the input to the model. For fairness, the classifiers and feature extractors of the VRADA and R-DANN were equivalent in depth and both had the same model capacity. We also ensured that the sizes of the latent feature representation \( \tilde{z}^i \) were similar for the VRADA and DANN models. The model capacity of VFAE was chosen to be similar to VRADA. All the deep domain adaptation models, including ours, had a depth of 8 layers (including output classifier layers). We used the Adam optimizer (Kingma & Ba (2014)) and ran all models for 500 epochs with a learning rate of \( 3\mathrm{e}{-4} \). We used an early stopping criterion: training stops if the validation loss does not decrease for 20 epochs. Source domain data was split into train/validation subsets with a 70/30 ratio, and target domain data into train/validation/test subsets with a 70/15/15 ratio. In order to compare all the methods, we report AUC scores both on the entire target domain set and on the test subset of each target domain in a source-target pair. 4.3 QUANTITATIVE RESULTS In Table 1, we compare the performance of non-domain-adaptation and domain adaptation models on the target domain test subset for the AHRF mortality prediction task. It is immediately clear that domain adaptation methods consistently outperform non-domain-adaptation methods. We see that the VRADA generally outperforms both variants of the DANN, with scores consistently ~4% higher. While the standard deviation for the VRADA was about 1%, it was about 2% for the R-DANN, further showing our model's efficacy, as it converges to more stable local optima. Our VRADA model beats the state-of-the-art DANN (Ganin et al. (2016)) and VFAE (Louizos et al. (2015)) on all the source-target domain adaptation tasks for the Adult-AHRF dataset. For domain adaptation from the Adult-AHRF to the Child-AHRF dataset, we observe that VRADA mostly outperforms all competing models. This shows that our model can perform well even for smaller target domain datasets.
<table> <tr> <th>Source-Target</th> <th>LR</th> <th>Adaboost</th> <th>DNN</th> <th>DANN</th> <th>VFAE</th> <th>R-DANN</th> <th>VRADA</th> </tr> <tr> <td>3 - 2</td> <td>0.555</td> <td><b>0.562</b></td> <td>0.569</td> <td>0.572</td> <td>0.615</td> <td>0.603</td> <td><b>0.654</b></td> </tr> <tr> <td>4 - 2</td> <td>0.624</td> <td>0.645</td> <td>0.569</td> <td>0.589</td> <td>0.635</td> <td>0.584</td> <td><b>0.656</b></td> </tr> <tr> <td>5 - 2</td> <td>0.527</td> <td><b>0.554</b></td> <td>0.551</td> <td>0.540</td> <td>0.588</td> <td>0.611</td> <td><b>0.616</b></td> </tr> <tr> <td>2 - 3</td> <td><b>0.627</b></td> <td>0.621</td> <td>0.550</td> <td>0.563</td> <td>0.585</td> <td>0.708</td> <td><b>0.724</b></td> </tr> <tr> <td>4 - 3</td> <td><b>0.681</b></td> <td>0.636</td> <td>0.542</td> <td>0.527</td> <td>0.722</td> <td><b>0.821</b></td> <td>0.770</td> </tr> <tr> <td>5 - 3</td> <td>0.655</td> <td><b>0.706</b></td> <td>0.503</td> <td>0.518</td> <td>0.608</td> <td>0.769</td> <td><b>0.782</b></td> </tr> <tr> <td>2 - 4</td> <td>0.585</td> <td><b>0.591</b></td> <td>0.530</td> <td>0.560</td> <td>0.582</td> <td>0.716</td> <td><b>0.777</b></td> </tr> <tr> <td>3 - 4</td> <td><b>0.652</b></td> <td>0.629</td> <td>0.531</td> <td>0.527</td> <td>0.697</td> <td><b>0.769</b></td> <td>0.764</td> </tr> <tr> <td>5 - 4</td> <td>0.689</td> <td><b>0.699</b></td> <td>0.538</td> <td>0.532</td> <td>0.614</td> <td>0.728</td> <td><b>0.738</b></td> </tr> <tr> <td>2 - 5</td> <td><b>0.565</b></td> <td>0.543</td> <td>0.549</td> <td>0.526</td> <td>0.555</td> <td>0.659</td> <td><b>0.719</b></td> </tr> <tr> <td>3 - 5</td> <td>0.576</td> <td><b>0.587</b></td> <td>0.510</td> <td>0.526</td> <td>0.533</td> <td>0.630</td> <td><b>0.721</b></td> </tr> <tr> <td>4 - 5</td> <td><b>0.682</b></td> <td>0.587</td> <td>0.575</td> <td>0.548</td> <td>0.712</td> <td>0.747</td> <td><b>0.775</b></td> </tr> <tr> <td>5 - 1</td> <td>0.502</td> <td><b>0.573</b></td> <td>0.557</td> <td>0.563</td> <td>0.618</td> <td>0.563</td> <td><b>0.639</b></td> </tr> <tr> <td>4 - 1</td> <td><b>0.565</b></td> <td>0.533</td> <td>0.572</td> <td>0.542</td> <td><b>0.668</b></td> <td>0.577</td> <td>0.636</td> </tr> <tr> <td>3 - 1</td> <td>0.500</td> <td>0.500</td> <td>0.542</td> <td>0.535</td> <td>0.570</td> <td>0.591</td> <td><b>0.631</b></td> </tr> <tr> <td>2 - 1</td> <td><b>0.520</b></td> <td>0.500</td> <td>0.534</td> <td>0.559</td> <td>0.578</td> <td>0.630</td> <td><b>0.637</b></td> </tr> </table> Table 1: AUC comparison for the AHRF mortality prediction task. We test classification without adaptation using Logistic Regression (LR), Adaboost with decision tree classifiers (Adaboost), and feed-forward Deep Neural Networks (DNN); and with adaptation using Domain Adversarial Neural Networks (DANN), a DANN with an LSTM in its feature extractor (R-DANN), the Variational Fair Autoencoder (VFAE), and our Variational Recurrent Adversarial Deep Domain Adaptation model (VRADA). All results are reported on the target domain test subset. As the AHRF mortality prediction task made it clear that domain adaptation is necessary for inter-group adaptation, for the ICD9 multi-task prediction task, which involves data with 12 time-steps, we focused strictly on domain-adaptive models (i.e., DANN, R-DANN, and VRADA). Table 2 shows the aggregated AUC scores on the entire target domain dataset and on the test subset of the target domain for the 20 tasks of the ICD9 code prediction task.
Table 2: AUC comparison for the ICD9 diagnosis code prediction task on the ICD9 dataset. For each model, the 'entire' row reports performance on the entire target domain dataset and the 'test' row reports performance on the test subset (15%) of the target domain dataset; columns denote source-target pairs. <table> <tr> <th>Model</th> <th>2-3</th> <th>2-4</th> <th>2-5</th> <th>3-2</th> <th>3-4</th> <th>3-5</th> <th>4-2</th> <th>4-3</th> <th>4-5</th> <th>5-2</th> <th>5-3</th> <th>5-4</th> </tr> <tr> <td>DANN (entire)</td> <td>0.513</td><td>0.508</td><td>0.509</td><td>0.511</td><td>0.508</td><td>0.514</td><td>0.511</td><td>0.507</td><td>0.512</td><td>0.505</td><td>0.508</td><td>0.506</td> </tr> <tr> <td>DANN (test)</td> <td>0.509</td><td>0.513</td><td>0.531</td><td>0.527</td><td>0.515</td><td>0.531</td><td>0.515</td><td>0.521</td><td>0.521</td><td>0.518</td><td>0.514</td><td>0.519</td> </tr> <tr> <td>R-DANN (entire)</td> <td>0.608</td><td>0.581</td><td>0.562</td><td>0.618</td><td>0.610</td><td>0.586</td><td>0.604</td><td>0.607</td><td>0.575</td><td>0.573</td><td>0.558</td><td>0.566</td> </tr> <tr> <td>R-DANN (test)</td> <td>0.605</td><td>0.579</td><td>0.570</td><td>0.628</td><td>0.609</td><td>0.589</td><td>0.614</td><td>0.616</td><td>0.586</td><td>0.573</td><td>0.563</td><td>0.564</td> </tr> <tr> <td>VRADA (entire)</td> <td>0.620</td><td>0.564</td><td>0.557</td><td>0.611</td><td>0.617</td><td>0.580</td><td>0.598</td><td>0.615</td><td>0.588</td><td>0.571</td><td>0.582</td><td>0.576</td> </tr> <tr> <td>VRADA (test)</td> <td>0.609</td><td>0.563</td><td>0.560</td><td>0.620</td><td>0.617</td><td>0.580</td><td>0.606</td><td>0.623</td><td>0.594</td><td>0.576</td><td>0.581</td><td>0.576</td> </tr> </table> Here, we clearly see that the VRADA and R-DANN models outperform DANN (Ganin et al. (2016)) by significant margins. We also observe that VRADA outperforms R-DANN by \( 1.5 \sim 2\% \) when averaged over all the source-target domain pairs. 4.4 DISCUSSION Figure 3 shows the temporal latent dependencies captured by our VRADA as compared to the R-DANN for the 3-4 source-target pair. While both models learn temporal latent dependencies fairly well, the VRADA outperforms the R-DANN in two ways. First, the VRADA's neurons learned stronger predictions of whether features are relevant to modeling the data. If we look at the VRADA row, for both AHRF and ICD9 we see that the neural activation patterns are more consistent across time-steps than for the R-DANN. Figure 4 shows the unrolled memory cell states (in the form Examples × (Time × Neurons)) for all the source and target domain data points. We see consistent activation firing patterns across all these data points for the VRADA but not for the R-DANN.
Together with the stronger performance on 3-4 for AHRF and 2-5 for ICD9, this potentially indicates that VRADA is better at learning the temporal dependencies. Second, nuanced values are consistent across time-steps for the VRADA, exhibiting a gradual transition towards stronger activation with time, whereas the temporal activation pattern of the R-DANN appears somewhat sporadic. While activation gradients across time are consistent for both the R-DANN and the VRADA, the more consistent inhibitory and excitatory neuron firing patterns indicate that the VRADA better transfers knowledge. Another indication of domain adaptation is shown in Figure 1c. Looking at the t-SNE projections of the feature representations of DNN, R-DANN, and VRADA, we can see that the addition of temporal latent dependencies might help in better mixing the domain distributions, since we observe that the data is more evenly spread out. Figure 1c and Figure 3 together indicate that the VRADA's temporal latent dependency capturing power and its ability to create domain-invariant representations act synergistically. For plots of activation patterns without domain adaptation, please see appendix section 6.2.3. 5 SUMMARY Because of its diverse range of patients and its episodic and longitudinal nature, healthcare data provides a good platform to test domain adaptation techniques for temporal data. With it as our example, we showcase the Variational Recurrent Adversarial Deep Domain Adaptation (VRADA) model's ability to learn temporal latent representations that are domain-invariant. By comparing our model's latent representations to others', we show its ability to use variational methods to capture hidden factors of variation and produce more robust domain-invariant representations. We hope this work serves as a bedrock for future work on capturing and adapting temporal latent representations across domains. ACKNOWLEDGMENTS This material is based upon work supported by the NSF research grants IIS-1134990, IIS-1254206, a Samsung GRO Grant, and the NSF Graduate Research Fellowship Program under Grant No. DGE-1418060. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies. We also acknowledge Thailand's Development and Promotion of Science and Technology Talents Project for financial support. We thank Dr. Robinder Khemani for sharing the Child-AHRF dataset. Figure 3: Cell states of the memory cells for R-DANN and VRADA, showing temporal latent dependencies captured by neurons of the R-DANN and VRADA for the source domain and transferred to the target domain. Each step along the y-axis refers to the activation of a single neuron, with blue for strong inhibition and yellow for strong excitation; each step along the x-axis refers to the activation at a single time-step. The left shows a single example for the 3-4 adaptation and the right for the 2-5 adaptation. Figure 4: Cell states of the memory cells for R-DANN and VRADA, showing activations for all ICD9 2-5 adaptation examples. Here, we show temporal dependencies learned across (time, feature) pairs for examples in a domain. Each y-axis position corresponds to a data point, and the x-axis shows activations at (time, feature) pairs, with the time and feature dimensions flattened.
accept
Accept (Poster)
5.666667
10b4cd8cb62528021c9e44c0c67a161d5b25e958
iclr
2,017
Beyond Fine Tuning: A Modular Approach to Learning on Small Data Aryk Anderson (Eastern Washington University, Cheney, Washington; aryk.anderson@eagles.ewu.edu), Kyle Shaffer, Artem Yankov, Courtney D. Corley & Nathan O. Hodas (Pacific Northwest National Laboratory, Washington, USA; {kyle.shaffer,artem.yankov,court,nathan.hodas}@pnnl.gov) ABSTRACT In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules, and combine pre-trained modules with untrained modules to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results obtained with standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data. 1 INTRODUCTION Training generalizable models using only a small amount of data has been a significant challenge in the field of machine learning since its inception. This is especially true when using artificial neural networks, with their millions or billions of parameters. Conventional wisdom gleaned from the surge in popularity of neural network models indicates that extremely large quantities of data are required for these models to be effectively trained. Indeed, the work of Krizhevsky et al. (2012) has commonly been cited as only being possible through the development of ImageNet (Russakovsky et al. (2015)). As neural networks are explored by practitioners in more specialized domains, the volume of available labeled data also narrows. Although training methods have improved, it is still difficult to train deep learning models on small quantities of data, such as only tens or hundreds of examples. The current paradigm for solving this problem has come through the use of pre-trained neural networks. Bengio et al. (2012) were able to show that transfer of knowledge in networks could be achieved by first training a neural network on a domain for which there is a large amount of data and then retraining that network on a related but different domain via fine-tuning its weights. Though this approach demonstrated promising results on small data, these models do not retain the ability to function as previously trained. That is, these models end up fine-tuning their weights to the new learning task, forgetting many of the important features learned from the previous domain. The utility of pre-training models extends beyond training on small data. It is also used as an effective initialization technique for many complicated models (Jaderberg et al. (2015); Lakkaraju et al. (2014)). This, in addition to the continuing trend of treating specific network layer architectures as modular components to compose more advanced models (He et al. (2015); Larsson et al. (2016); Szegedy et al. (2015); Abadi et al. (2016)), informs our work as we seek to use pre-trained models as an architectural framework to build upon.
Instead of overwriting these models and fine-tuning the internal representations to a specific task, we propose composing pre-trained models as modules in a higher-order architecture where multiple, potentially distinct representations contribute to the task. With this approach, useful representations already learned are not forgotten, and new representations specific to the task are learned in other modules in the architecture. In this paper we present our neuro-modular approach to fine-tuning. We demonstrate how modules learn subtle features that pre-trained networks may have missed. We quantitatively compare traditional fine-tuning with our modular approach, showing that our approach is more accurate on small amounts of data (<100 examples per class). We also demonstrate how to improve classification in a number of experiments, including CIFAR-100, text classification, and fine-grained image classification, all with limited data. 2 RELATED WORK Transferring knowledge from a source domain to a target domain is an important challenge in machine learning research. Many shallow methods have been published, including those that learn feature-invariant representations or that approximate values without using an instance's label (Pan & Yang (2010); Sugiyama et al. (2008); Pan et al. (2011); Zhang et al. (2013); Wang & Schneider (2014); Gong et al. (2016)). More recent deep transfer learning methods enable identification of variational factors in the data and align them to disparate domain distributions (Tzeng et al. (2014); Long et al. (2015); Ganin & Lempitsky (2014); Tzeng et al. (2015)). Mesnil et al. (2012) present the Unsupervised and Transfer Learning Challenge and discuss the important advances that are needed for representation learning, and the importance of deep learning in transfer learning. Oquab et al. (2014) applied these techniques to mid-level image representations using CNNs. Specifically, they showed that image representations learned in visual recognition tasks (ImageNet) can be transferred to other visual recognition tasks (Pascal VOC) efficiently. Further study regarding the transferability of features by Yosinski et al. (2014) showed the surprising results that features from distant tasks perform better than random features, and that difficulties arise when splitting networks between co-adapted neurons. We build on these results by leveraging existing representations to transfer to target domains without overwriting the pre-trained models through standard fine-tuning approaches. Long et al. (2015) developed the Deep Adaptation Network (DAN) architecture for convolutional neural networks, which embeds the hidden representations of all task-specific layers in a reproducing kernel Hilbert space. This allows the means of different domain distributions to be matched. Another feature of their work is that it scales linearly and provides statistical guarantees on transferable features. The Net2Net approach (Chen et al. (2015)) accelerates training of larger neural networks by allowing them to grow gradually, using function-preserving transformations to transfer information between neural networks. However, it does not guarantee that existing representational power will be preserved on a different task. Gong et al. (2016) consider domain adaptation where transfer from the source to the target domain is modeled as a causal system. Under these assumptions, conditional transferable components are extracted which are invariant after location-scale transformations. Long et al.
(2016) proposed a new method that overcomes the need for conditional components by comparing joint distributions across domains. Unlike our work, all of these require explicit assumptions or modifications to the pre-trained networks to facilitate adaptation. We note that while writing this paper, the progressive network architecture of Rusu et al. (2016) was released, sharing a number of qualities with our work. Both the results we present here and the progressive networks allow neural networks to extend their knowledge without forgetting previous information. In addition, Montone et al. (2015) discuss a semi-modular approach. Montone et al. also froze the weights of the original network, although they did not focus on the small data regime, where only a few tens of examples may be available. Our modular approach detailed here focuses on leveraging small data to adapt to different domains. Our architecture also complements existing network-building strategies, such as downloading pre-trained neural networks to then be fine-tuned for domain adaptation. Figure 1: The modular approach to neural networks involves feeding data through one or more pre-existing neural networks as well as a new network, the module. The existing networks have their weights locked, so they will not be altered by the training process; only the module weights are trained. The end result adds a new representation alongside the existing one without losing any information from the original network. Figure 2: Modular networks need not simply be two models in parallel. Here, we present the stitched module approach: we insert a small neural network between each layer of the original network. This way, the modules explicitly receive information about the representations at each layer of the pre-trained network. 3 MODULAR ARCHITECTURE Generically, modular neural networks are directed graphs of pre-trained networks linked together with auxiliary, untrained networks. As depicted in Fig. 1, one trains only the new components of the network. The architecture could take the form of simply placing two networks in parallel (the two-towers approach), shown in Fig. 1. Alternatively, the architecture could interleave the modules with the layers of the pre-trained network (the stitch approach), shown in Fig. 2. This allows the network as a whole to retain the original representational power of the pre-trained network. Thus, the performance of our modular approach is bounded from below by that of the pre-trained network on the transfer task. Here, we explore some of the properties of these modular architectures, including how they learn new representations and how they perform on small amounts of data. 3.1 LEARNED FILTERS In the case of convolutional networks, we posit that adding modules to networks helps them learn new domains because the original modules contribute well-trained filters, allowing the untrained modules to learn more subtle features that may be more discriminative. Even slight regularization on the module network will encourage the network to avoid redundancy with the base network. To visualize the images that maximally stimulate each filter, we followed the approach of Zeiler & Fergus (2014). We set the objective function to be the activation of the filter we were querying and then conducted back-propagation. Instead of using the gradients to alter the weights, we used the gradients at the input layer to alter the pixels themselves. We initialized with an image of noise smoothed with a Gaussian filter of radius 1.
The gradient was normalized, so the input image \( X \) was updated according to \[ X_{t+1} = X_t + 0.01 \cdot \nabla / \|\nabla\|, \] where \( \nabla \) is the induced gradient at the input layer. This was repeated 500 times, at which point the image had largely converged (a code sketch of this visualization procedure follows the figure captions below). After training a simple neural network on MNIST with 3 convolutional layers, \( (8 \times 8 \times 8) - \text{maxpool2} - (8 \times 4 \times 4) - (8 \times 3 \times 3) - \text{Dense128} \), using ADAM (Kingma & Ba (2014)) and augmenting the images with 10% shifts and zooms, we reached an accuracy of 98.8%. We then added an even simpler module to the neural network, \( (4 \times 8 \times 8) - \text{maxpool2} - (4 \times 4 \times 4) - (4 \times 3 \times 3) - \text{Dense32} \). This module is trained on the same input as the original model, but its output features are tied together with those of the original model, as illustrated in Fig. 1. After training the module, the combined network achieves 99.2% accuracy. The models were intentionally kept small, with the original model having only 8 filters per layer and the module only 4 filters per layer. As we can see in Fig. 3, the module does not learn filters that merely duplicate the original network. As is common, the first layer learns typical edge and orientation detectors, but the module is more sensitive to high-frequency diagonal components and details around the edge of the image. In the second layer, we see that the module is sensitive to diagonal components near the boundary. And the third layer shows that the module has indeed concentrated its effort on detecting strokes near the edge of the image. As we can see from inspecting Figure 3, while the original network concentrated its efforts on the center of the images (as it should), the module was then able to focus more around the edges of the image and catch some of the mistakes made by the original network. 3.2 SMALL DATA Although the modular approach can be used to extend and improve a network on its original task, its value comes from its ability to facilitate transfer learning. If a network has been trained on thousands or even millions of examples and hand-tuned for weeks or months, one would not want to throw away this valuable representational power by retraining the network with 100 examples from an out-of-domain dataset. Instead, the modular approach keeps the original, unaltered network, in addition to learning supplementary representations specific to the distribution of the new data. This allows the modular approach to handle small data sets more robustly than naive fine-tuning. To demonstrate this, we trained a network on CIFAR-10 and applied it to CIFAR-100 with varying amounts of training data. Figure 3: After training a vanilla CNN on MNIST, images that maximally stimulate each filter are shown on the bottom rows. Images that maximally stimulate the auxiliary module network, trained on the same data to supplement the original network, are shown on the top. (a) first layer (b) second layer (c) third layer Figure 4: By explicitly preserving the original representation learned by the pre-trained net, the module is able to learn more robust features using fewer examples than naive fine-tuning. (a) Comparison of the modular approach vs. fine-tuning based on the amount of training data. (b) Comparison of validation accuracy on the Stanford Cars data (training on full data).
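As promised above, the filter visualizations amount to gradient ascent on the pixels. The following is a minimal sketch, assuming PyTorch; the `activation_of` callable, the input shape, and the pooling-based smoothing of the initial noise are illustrative stand-ins for the setup described in the text.

```python
import torch
import torch.nn.functional as F

def visualize_filter(activation_of, in_shape=(1, 1, 28, 28), steps=500, lr=0.01):
    """Maximize a chosen filter's activation by repeatedly nudging the input
    with the normalized gradient: X <- X + 0.01 * grad / |grad|."""
    x = torch.randn(in_shape)
    x = F.avg_pool2d(x, 3, stride=1, padding=1)  # crude radius-1 smoothing of the noise
    x.requires_grad_(True)
    for _ in range(steps):
        act = activation_of(x)                   # scalar activation of the queried filter
        grad, = torch.autograd.grad(act, x)
        with torch.no_grad():
            x += lr * grad / grad.norm()
    return x.detach()
```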
The CIFAR-10 network was trained until it was 88.9% accurate, using the network in He et al. (2016) with 3 residual units, for a total of 28 layers. We then compared two approaches. For the first approach, we simply fine-tuned the CIFAR-10 network on training data from the CIFAR-100 dataset, replacing the final softmax. Second, we froze the original CIFAR-10 network and added an identical copy as a module, which was trained on the same batches of data as the first approach. That is, we have: Network 1 – fine-tuning the base network, and Network 2 – freezing the base network and fine-tuning a module. This doubles the number of weights in the second network, but Networks 1 and 2 have an identical number of trainable weights, and those weights have the same starting values. More formally, we present these two approaches in equations 1 and 2 below. \[ y_{ft} = \mathrm{softmax}(NN(x; w_0 = \{C_{10}\})) \] (1) \[ y_{mod} = \mathrm{softmax}([NN(x; w^* = \{C_{10}\}), NN(x; w_0 = \{C_{10}\})]) \] (2) where \( y_{ft} \) denotes predictions made from a fine-tuned network and \( y_{mod} \) denotes predictions made from our modular architecture. \( NN \) denotes the neural network without softmax activation trained on CIFAR-10, and \( w_0 \) is the initialization of the weights, which are learned from training on CIFAR-10, i.e., \( w_0 = \{C_{10}\} \). Note that in our modular architecture the pre-trained weights are locked, as denoted by \( w^* = \{C_{10}\} \) in Equation 2, i.e., \( \nabla_w NN(w^*) \equiv 0 \). To train, we used the ADAM optimization algorithm (Kingma & Ba (2014)). We added an activity L2 regularization of \( 10^{-6} \) to the module to help break degeneracy. We used batches of 200, where each batch contained two images per class. Each batch was iterated over five times before the next batch was used; this iteration simulates multiple epochs over small data. We recorded the performance on the test set after each batch (Fig. 4a). We observe that for all amounts of training data, but particularly for small amounts of training data, the modular approach outperforms traditional fine-tuning. Of course, we chose to make the module a complete copy of the original CIFAR-10 network. This ensured we could compare with the same number of weights, same initialization, same data, etc. Further research will certainly reveal more compact module networks that outperform our example. Figure 5: Effect of training set size on fine-tuning versus modular architectures. Comparison of fine-tuning vs. the stitched module approach. TFT stands for 'Traditional Fine Tuning,' and the number of layers fine-tuned is indicated. Notice that our modular approach outperforms fine-tuning for all amounts of training data, and its benefit over fine-tuning increases as the amount of available training data decreases. 4 EXPERIMENTS 4.1 TRANSFER LEARNING FROM CIFAR-10 TO CIFAR-100 WITH STITCH NETWORKS To investigate the effectiveness of modular networks for transfer learning, we explore a second example of transfer learning from CIFAR-10 in order to model CIFAR-100. As we were able to show above, a modular network is able to outperform traditional fine-tuning because it learns additional features that may complement those captured by the pre-trained model.
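A minimal sketch of the two-towers construction of Equation 2, assuming PyTorch (the feature dimension, the fresh classification head, and the convention that both towers return pre-softmax features are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TwoTowers(nn.Module):
    """Frozen pre-trained network and trainable module, both initialized
    from the same CIFAR-10 weights, run in parallel; their features are
    concatenated and fed to a new classification head."""
    def __init__(self, base_net, module_net, feat_dim, n_classes=100):
        super().__init__()
        self.base = base_net
        for p in self.base.parameters():
            p.requires_grad = False            # w* = {C10}: locked weights
        self.module = module_net               # w_0 = {C10}: same init, trainable
        self.head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, x):
        feats = torch.cat([self.base(x), self.module(x)], dim=-1)
        return self.head(feats)                # logits; softmax applied in the loss
```

In Equation 2 the concatenated features feed the softmax directly; the extra linear head here is one simple way to map the doubled feature width onto the class logits.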
However, there is no reason why a module needs to accept input only from the input layer, nor why it needs to send its output directly to the softmax layer. Here, we describe stitch networks, where the modules are actually interwoven with the original network. We believe that in modular networks, the untrained module learns representations that capture the difference between the original distribution of data and the distribution of data under the new task. Expanding upon this idea, instead of learning the shift in distributions only at the softmax layer, as with our other modular networks, we integrate the signal from the learned modules much more tightly with the paired untrained modules by using a stitch network. The stitch network allows the model to learn to correct the distribution difference after each transformation made by the learned module, as shown in Fig. 2. The stitch network we explore comprises layer pairs between a single learned and a single unlearned module. The learned module is a five-layer convolutional neural network where the first two layers are 3x3 convolutional layers with 32 filters, followed by max pooling and two more 3x3 convolutions with 64 filters. The convolutions are followed by a fully connected layer with 512 outputs and finally a softmax for classification. This model is pre-trained on CIFAR-10, stripped of its softmax layer, has its weights locked, and is then used as the learned module of the stitch network. The untrained module is composed in a similar fashion, with four 3x3 convolutions with max pooling and a fully connected layer, each with 1/4 the number of outputs of the corresponding pre-trained layers. The outputs of each layer pair are concatenated and fed as the input to each subsequent layer of the untrained module. Both modules feed into the final softmax layer of the composite network, which then classifies over the new data set. A sketch of this is shown in Fig. 2, and a code sketch of the wiring appears at the end of this subsection. Figure 6: Network architecture used for the Stanford Cars fine-tuned model. Figure 7: Network architecture used for the Stanford Cars module model. Note that the ResNet used is identical to the one described in He et al. (2015). We train our entire composite model on a randomly selected subset of ten CIFAR-100 classes. We compare the accuracy over the validation set of the selected classes against traditional fine-tuning using only the learned module, as well as against an uninitialized version of the learned module. We were additionally interested in comparing, across all models, the effect of limiting the amount of data available for training. We repeat the experiment with the same subset of classes, varying the amount of available training data such that the networks are shown only a fraction of each class for training. Note that there are 500 available training examples per class in CIFAR-100. We find that, by using the stitch network, we are able to match or improve upon classification results (Fig. 5) obtained using traditional fine-tuning over a pre-trained model. We also outperform training from scratch, regardless of the amount of training data used. We note that we find significant gains over traditional methods as the number of available training examples drops below 200 examples per class.
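A minimal sketch of the stitch wiring, assuming PyTorch. Layer contents are left abstract, and the channel/spatial bookkeeping (each module layer must accept the concatenated widths, and pooling must keep the two towers spatially aligned) is an assumption of the sketch:

```python
import torch
import torch.nn as nn

class StitchNet(nn.Module):
    """A frozen pre-trained stack paired with a thinner trainable stack of
    the same depth; the output of each layer pair is concatenated and fed
    to the next layer of the untrained module."""
    def __init__(self, frozen_layers, module_layers):
        super().__init__()
        self.frozen = nn.ModuleList(frozen_layers)
        for p in self.frozen.parameters():
            p.requires_grad = False            # learned module stays locked
        self.module = nn.ModuleList(module_layers)

    def forward(self, x):
        f = self.frozen[0](x)                  # both towers start from the input
        m = self.module[0](x)
        for fl, ml in zip(self.frozen[1:], self.module[1:]):
            m = ml(torch.cat([f, m], dim=1))   # layer-pair output feeds the module
            f = fl(f)
        return torch.cat([f, m], dim=1)        # concatenation feeds the new softmax head
```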
4.2 STANFORD CARS DATA SET The Stanford Cars data set (Krause et al. (2013)), which features 16,185 images of 196 classes of cars, is an example of a data set for fine-grained categorization. Rather than training a classifier to distinguish between fundamentally different objects like horses and planes, as required in the Large Scale Visual Recognition Challenge (Russakovsky et al. (2015)), fine-grained categorization requires the classifier to learn subtle differences in variations of the same entity. For example, a classifier trained on the Stanford Cars data set would have to learn the distinguishing features between a 2012 BMW X6 SUV and a 2008 Isuzu Ascender SUV. In this research, two models are trained on the Stanford Cars data set. Both models utilize a transfer learning approach by leveraging the non-fully-connected output of the VGG16 model (Simonyan & Zisserman (2014)). The “fine-tuned” model passes the VGG16 features to a fully connected layer of length 512 followed by a softmax layer of length 196, as seen in Fig. 6. Gradient descent via RMSprop is used to train the dense layers. The “module” model merges the fixed VGG16 features with a ResNet (He et al. (2015)) model, whose output is then fed to two consecutive dense layers of length 256 capped by a softmax layer of length 196. The module model architecture is shown in Fig. 7. Again, RMSprop is used to train the ResNet and post-merge dense layer weights, but the VGG features are unchanged. As seen in Fig. 4b, after 50 epochs the module model appears to significantly outperform the fine-tuned model in validation accuracy. However, it should be noted that while the module model carries 19,537,990 trainable parameters, the fine-tuned model has only 12,946,116. Furthermore, no hyperparameter optimization is performed on either model. 4.3 MODULE FOR LSTM TEXT CLASSIFICATION We further investigate the effects of our modular network approach by applying this method to a different modeling problem: text classification. Similar to image data, text represents an unstructured data type that often exhibits long-term and interconnected dependencies that are difficult to model with simpler classifiers. Whereas in images neighboring pixels may represent semantically related concepts or objects, in text words may exhibit long-term semantic or syntactic dependencies that can be modeled sequentially. These characteristics make text classification particularly well-suited to recurrent neural networks such as long short-term memory (LSTM) networks, but these methods typically require a great deal of data to be trained efficiently and to avoid overfitting. To test our methodology, we evaluate a modular recurrent network against two individual recurrent neural networks on the IMDB sentiment dataset. Previous work has shown deep learning methods to be effective at sentiment classification on this dataset (Maas et al. (2011)); we add to this past work by presenting an analysis that demonstrates the effectiveness of modular networks in the case of extremely small training sets. To this end, we sample only 500 training examples from the original 25,000 available in the full training set and evaluate on the full 25,000 validation examples. We use the same 500 training examples for each model evaluated in our experiments for consistency, and report accuracy for each model on the full validation set.
We evaluate three models in our text-classification experiments: two individual recurrent networks and our modular recurrent network. The first model consists of three layers: an initial layer that projects sequences of words into an embedding space, a second LSTM layer with 32 units, and a final sigmoid layer for computing the probability of the text belonging to the positive class. Our second model is identical to the first, except that we fix the weights of the embedding layer using pre-trained GloVe word vectors[8]. In particular, we use 100-dimensional vectors computed from a 2014 version of Wikipedia. Finally, we detail our modular network, which leverages both individual recurrent neural networks described above. To construct our modular network, we take the embedding and LSTM layers from our individual networks and concatenate the outputs of both LSTM layers into a single tensor in the middle of our modular network. Additionally, we modify each of these component LSTM layers to output its hidden state at every timestep of the sequence, rather than only its final state. In this way, we seek to fully leverage the sequential dependencies learned by these layers, and this method outperforms the simpler alternative of outputting only the final state of each LSTM layer. We then feed this concatenated layer to a gated recurrent unit (GRU) layer, with a sigmoid activation function for calculating class probabilities. We experimented with an LSTM and with densely connected layers after the concatenation layer, but found the best performance with the GRU. All models were optimized with the ADAM algorithm and trained for 15 epochs. An outline of this architecture can be seen in Figure 8, and a code sketch follows at the end of this subsection. Figure 8: Diagram of the architecture of our modular recurrent text classification network. Dimensionality for the embedding layers and the number of units for all other layers are given in the boxes denoting those layers. Here, we report results for our classification experiments with the three networks described above. We see an accuracy of 61.9% for our first model, which is trained directly from the data without any pre-training. This is significantly lower than previously reported results; however, we are training on only 2% of the available data to test our method's application to small training sets. We see slightly better performance in terms of accuracy (64.9%) from our second model, initialized with GloVe vectors. This seems to indicate that despite being trained on the more formally written language of Wikipedia, these vectors can still boost performance on a task modeling text that is inherently subjective and opinion-based. Finally, we see an accuracy of 69.6% from our modular network, an increase of almost 5% accuracy over the next best performing model. Because weight initializations of recurrent networks can greatly affect model performance, we ran the classification experiments with our modular network 10 times and report the average accuracy across these runs. As can be seen in Table 1 below, our modular approach improves on the best-performing individual network, suggesting that this approach is useful in the domain of text classification and that it overcomes the poor performance shown by one of its component models. [8] http://nlp.stanford.edu/projects/glove/
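A minimal sketch of this modular text network, assuming PyTorch. Sizes follow the description above (100-d embeddings, 32-unit LSTMs); tokenization, padding, and whether the component LSTM weights are frozen are details the sketch leaves open:

```python
import torch
import torch.nn as nn

class ModularTextNet(nn.Module):
    """Two embedding+LSTM towers, one with trainable embeddings and one
    with frozen GloVe embeddings; both emit hidden states at *every*
    timestep, and the concatenated sequences feed a GRU whose final
    state yields the positive-class probability."""
    def __init__(self, vocab_size, glove_weights, emb_dim=100, hid=32):
        super().__init__()
        self.emb_a = nn.Embedding(vocab_size, emb_dim)
        self.emb_b = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        self.lstm_a = nn.LSTM(emb_dim, hid, batch_first=True)
        self.lstm_b = nn.LSTM(emb_dim, hid, batch_first=True)
        self.gru = nn.GRU(2 * hid, hid, batch_first=True)
        self.out = nn.Linear(hid, 1)

    def forward(self, tokens):
        seq_a, _ = self.lstm_a(self.emb_a(tokens))       # states at all timesteps
        seq_b, _ = self.lstm_b(self.emb_b(tokens))
        merged = torch.cat([seq_a, seq_b], dim=-1)       # (batch, time, 2*hid)
        _, h_n = self.gru(merged)
        return torch.sigmoid(self.out(h_n[-1]))          # P(positive sentiment)
```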
<table> <tr> <th>MODEL</th> <th>BM</th> <th>PM</th> <th>MM</th> </tr> <tr> <th>ACCURACY (%)</th> <td>61.9</td> <td>64.9</td> <td><b>69.6</b></td> </tr> </table> Table 1: Accuracy results for the text classification experiments, using only 500 training examples. Results are shown for the baseline model (BM), the pre-trained (GloVe) model (PM), and the modular model (MM). 5 CONCLUSIONS We have presented a neuro-modular approach to transfer learning. By mixing pre-trained neural networks (with fixed weights) with networks trained on the specific domain data, we are able to learn the shift in distributions between data sets. As we have shown, the new modules often learn features that complement the features previously learned by the pre-trained network. We have shown that our approach outperforms traditional fine-tuning, particularly when the amount of training data is small – only tens of examples per class. Further research will explore more efficient architectures and training strategies, but we have demonstrated that our approach works well for MNIST, the CIFARs, the Stanford Cars dataset, and IMDB sentiment. Thus, the modular approach will be a valuable strategy when one has a large pre-trained network available but only a small amount of training data in the transfer task. ACKNOWLEDGMENTS This work was supported by the US Government. REFERENCES Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. Yoshua Bengio et al. Deep learning of representations for unsupervised and transfer learning. ICML Unsupervised and Transfer Learning, 27:17–36, 2012. Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015. Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495, 2014. Mingming Gong, Kun Zhang, Tongliang Liu, Dacheng Tao, Clark Glymour, and Bernhard Schölkopf. Domain adaptation with conditional transferable components. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2839–2848, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016. Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017–2025, 2015. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 554–561, 2013. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012. Himabindu Lakkaraju, Richard Socher, and Chris Manning. Aspect specific sentiment analysis using hierarchical deep learning.
In NIPS Workshop on Deep Learning and Representation Learning, 2014. Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016. Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In Proceedings of The 32nd International Conference on Machine Learning, pp. 97–105, 2015. Mingsheng Long, Jianmin Wang, and Michael I. Jordan. Deep transfer learning with joint adaptation networks. CoRR, abs/1605.06636, 2016. URL http://arxiv.org/abs/1605.06636. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015. Grégoire Mesnil, Yann Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian J Goodfellow, Erick Lavoie, Xavier Muller, Guillaume Desjardins, David Warde-Farley, et al. Unsupervised and transfer learning challenge: a deep learning approach. ICML Unsupervised and Transfer Learning, 27:97–110, 2012. Guglielmo Montone, J. Kevin O'Regan, and Alexander V. Terekhov. The usefulness of past knowledge when learning a new task in deep neural networks. 2015. Maxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1717–1724, 2014. Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345–1359, 2010. Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199–210, 2011. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y. Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Masashi Sugiyama, Shinichi Nakajima, Hisashi Kashima, Paul V Buenau, and Motoaki Kawanabe. Direct importance estimation with model selection and its application to covariate shift adaptation. In Advances in neural information processing systems, pp. 1433–1440, 2008. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015. Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474, 2014. Eric Tzeng, Judy Hoffman, Trevor Darrell, and Kate Saenko. Simultaneous deep transfer across domains and tasks.
In Proceedings of the IEEE International Conference on Computer Vision, pp. 4068–4076, 2015. Xuezhi Wang and Jeff Schneider. Flexible transfer learning under support and model shift. In Advances in Neural Information Processing Systems, pp. 1898–1906, 2014. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in neural information processing systems, pp. 3320–3328, 2014. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818–833. Springer, 2014. Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In ICML (3), pp. 819–827, 2013.
ABSTRACT In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data. 1 INTRODUCTION Training generalizable models using only a small amount of data has proved a significant challenge in the field of machine learning since its inception. This is especially true when using artificial neural networks, with millions or billions of parameters. Conventional wisdom gleaned from the surge in popularity of neural network models indicates that extremely large quantities of data are required for these models to be effectively trained. Indeed the work from Krizhevsky et al. (2012) has commonly been cited as only being possible through the development of ImageNet (Russakovsky et al. (2015)). As neural networks become explored by practitioners in more specialized domains, the volume of available labeled data also narrows. Although training methods have improved, it is still difficult to train deep learning models on small quantities of data, such as only tens or hundreds of examples. The current paradigm for solving this problem has come through the use of pre-trained neural networks. Bengio et al. (2012) were able to show that transfer of knowledge in networks could be achieved by first training a neural network on a domain for which there is a large amount of data and then retraining that network on a related but different domain via fine-tuning its weights. Though this approach demonstrated promising results on small data, these models do not retain the ability to function as previously trained. That is, these models end up fine tuning their weights to the new learning task, forgetting many of the important features learned from the previous domain. The utility of pre-training models extends beyond training on small data. It is also used as an effective initialization technique for many complicated models (Jaderberg et al. (2015); Lakkaraju et al. (2014)). This, in addition to the continuing trend of treating specific network layer architectures as modular components to compose more advanced models (He et al. (2015); Larsson et al. (2016); Szegedy et al. (2013); Abadi et al. (2016)) informs our work as we seek to use pre-trained models as an architectural framework to build upon. Instead of overwriting these models and fine-tuning the internal representations to a specific task, we propose composing pre-trained models as modules in a higher order architecture where multiple, potentially distinct representations contribute to the task. With this approach, useful representations already learned are not forgotten and new representations specific to the task are learned in other modules in the architecture. In this paper we present our neuro-modular approach to fine-tuning. 
We demonstrate how modules learn subtle features that pre-trained networks may have missed. We quantitatively compare traditional fine-tuning with our modular approach, showing that our approach is more accurate on small amounts of data (<100 examples per class). We also demonstrate how to improve classification in a number of experiments, including CIFAR-100, text classification, and fine-grained image classification, all with limited data. 2 RELATED WORK Transferring knowledge from a source domain to a target domain is an important challenge in machine learning research. Many shallow methods have been published, including those that learn feature-invariant representations or that approximate values without using an instance’s label (Pan & Yang (2010); Sugiyama et al. (2008); Pan et al. (2011); Zhang et al. (2013); Wang & Schneider (2014); Gong et al. (2016)). More recent deep transfer learning methods enable identification of variational factors in the data and align them to disparate domain distributions (Tzeng et al. (2014); Long et al. (2015); Ganin & Lempitsky (2014); Tzeng et al. (2015)). Mesnil et al. (2012) present the Unsupervised and Transfer Learning Challenge and discuss the important advances that are needed for representation learning, as well as the importance of deep learning in transfer learning. Oquab et al. (2014) applied these techniques to mid-level image representations using CNNs. Specifically, they showed that image representations learned in visual recognition tasks (ImageNet) can be transferred to other visual recognition tasks (Pascal VOC) efficiently. Further study regarding the transferability of features by Yosinski et al. (2014) showed the surprising results that features from distant tasks perform better than random features, and that optimization difficulties arise when splitting networks between co-adapted neurons. We build on these results by leveraging existing representations to transfer to target domains without overwriting the pre-trained models through standard fine-tuning approaches. Long et al. (2015) developed the Deep Adaptation Network (DAN) architecture for convolutional neural networks, which embeds the hidden representations of all task-specific layers in a reproducing kernel Hilbert space. This allows the means of different domain distributions to be matched. Another feature of their work is that it scales linearly and provides statistical guarantees on transferable features. The Net2Net approach (Chen et al. (2015)) accelerates the training of larger neural networks by allowing them to grow gradually, using function-preserving transformations to transfer information between neural networks. However, it does not guarantee that existing representational power will be preserved on a different task. Gong et al. (2016) consider domain adaptation where transfer from the source to the target domain is modeled as a causal system. Under these assumptions, conditional transferable components are extracted which are invariant after location-scale transformations. Long et al. (2016) proposed a new method that overcomes the need for conditional components by comparing joint distributions across domains. Unlike our work, all of these require explicit assumptions or modifications to the pre-trained networks to facilitate adaptation. We note that while writing this paper, the progressive network architecture of Rusu et al. (2016) was released, sharing a number of qualities with our work.
Both the results we present here and the progressive networks allow neural networks to extend their knowledge without forgetting previous information. In addition, Montone et al. (2015) discuss a semi-modular approach that also freezes the weights of the original network, although it does not focus on the small-data regime, where only a few tens of examples may be available. Our modular approach detailed here focuses on leveraging small data to adapt to different domains. Our architecture also complements existing network building strategies, such as downloading pre-trained neural networks to then be fine-tuned for domain adaptation. Figure 1: The modular approach to neural networks involves feeding data through one or more pre-existing neural networks as well as a new network, the module. The existing networks have their weights locked, so they will not be altered by the training process. Only the module weights are trained. The end result augments the existing representation with a new one, without losing any information from the original network. Figure 2: Modular networks do not simply need to be two models in parallel. Here, we present the stitched module approach. We insert a small neural network between each layer of the original network. This way, the modules explicitly receive information about the representations at each layer of the pre-trained network. 3 MODULAR ARCHITECTURE Generically, modular neural networks are directed graphs of pre-trained networks linked together with auxiliary, untrained networks. As depicted in Fig. 1, only the new components of the network are trained. The architecture could take the form of simply placing two networks in parallel (the two-towers approach), shown in Fig. 1. In addition, the architecture could interleave the modules with the layers of the pre-trained network (the stitch approach), shown in Fig. 2. This allows the network as a whole to retain the original representational power of the pre-trained network. Thus, the performance of the pre-trained network provides a lower bound for our modular approach. Here, we explore some of the properties of these modular architectures, including how they learn new representations and how they perform on small amounts of data. 3.1 LEARNED FILTERS In the case of convolutional networks, we posit that adding modules to networks helps them learn new domains because the original networks contribute well-trained filters, allowing the untrained modules to learn more subtle features that may be more discriminating. Even slight regularization on the module network will encourage the network to avoid redundancy with the base network. To visualize images that maximally stimulate each filter, we followed the approach of Zeiler & Fergus (2014). We set the objective function to be the activation of the filter we were querying. We then conducted back-propagation. Instead of using the gradients to alter the weights, we used the gradients at the input layer to alter the pixels themselves. We initialized with an image of noise smoothed with a Gaussian filter of radius 1. The gradient was normalized, so the input image, \( X \), was updated according to \[ X_{t+1} = X_t + 0.01 \cdot \nabla / |\nabla|, \] where \( \nabla \) is the induced gradient at the input layer. This was repeated 500 times, at which point the image had largely converged.
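A sketch of this visualisation procedure, assuming a PyTorch model: `net`, `layer`, the filter index, and the input shape are all placeholders, and the update step is exactly the normalised-gradient ascent above.

```python
import torch
from scipy.ndimage import gaussian_filter

def visualise_filter(net, layer, filter_idx, shape=(1, 1, 28, 28), steps=500):
    acts = {}
    hook = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    # initialise with noise smoothed by a Gaussian filter of radius 1
    x = torch.from_numpy(gaussian_filter(torch.rand(*shape).numpy(), sigma=1.0))
    x = x.float().requires_grad_(True)
    for _ in range(steps):
        net(x)
        objective = acts['a'][0, filter_idx].sum()   # activation of the queried filter
        grad, = torch.autograd.grad(objective, x)
        with torch.no_grad():
            x += 0.01 * grad / grad.norm()           # X_{t+1} = X_t + 0.01 * grad / |grad|
    hook.remove()
    return x.detach()
```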
After training a simple neural network on MNIST with 3 convolutional layers, \( (8 \times 8 \times 8) - maxpool2 - (8 \times 4 \times 4) - (8 \times 3 \times 3) - Dense128 \), which was done using ADAM (Kingma & Ba (2014)) and augmenting the images with 10% shifts and zooms, we reached an accuracy of 98.8%. We then added an even simpler module to the neural network, \( (4 \times 8 \times 8) - maxpool2 - (4 \times 4 \times 4) - (4 \times 3 \times 3) - Dense32 \). This module is trained on the same input as the original model, but its output features are joined with those of the original model, as illustrated in Fig. 1. After training the module, the combined network achieves 99.2% accuracy. The models were intentionally kept small, with the original model only having 8 filters per layer, and the module only having 4 filters per layer. As we can see in Fig. 3, the module does not learn filters that merely duplicate the original network. As is common, the first layer learns typical edge and orientation detectors, but the module is more sensitive to high-frequency diagonal components and details around the edge of the image. In the second layer, we see that the module is sensitive to diagonal components near the boundary. The third layer shows that the module has indeed concentrated its effort on detecting strokes near the edge of the image. As we can see from inspecting Figure 3, while the original network concentrated its efforts on the center of the images (as it should), the module was then able to focus more around the edges of the image and catch some of the mistakes made by the original network. 3.2 SMALL DATA Although the modular approach can be used to extend and improve a network on its original task, its value comes from its ability to facilitate transfer learning. If a network has been trained on thousands or even millions of examples and hand-tuned for weeks or months, one would not want to throw away this valuable representational power by training the network with 100 examples from an out-of-domain dataset. Instead, the modular approach keeps the original, unaltered network, in addition to learning supplementary representations specific to the distribution of the new data. This allows the modular approach to handle small data sets more robustly than naive fine-tuning. To demonstrate this, we trained a network on CIFAR-10 and applied it to CIFAR-100 for varying amounts of training data. Figure 3: After training a vanilla CNN on MNIST, images that maximally stimulate each filter are shown on the bottom rows. Images that maximally stimulate the auxiliary module network, trained on the same data to supplement the original network, are shown on the top. (a) first layer (b) second layer (c) third layer Figure 4: By explicitly preserving the original representation learned by the pre-trained net, the module is able to learn more robust features using fewer examples than naive fine-tuning. (a) Comparison of modular approach vs. fine-tuning based on amount of training data (b) Comparison of validation accuracy on the Stanford Cars data, training on full data. The CIFAR-10 network was trained until it was 88.9% accurate, using the network in He et al. (2016) with 3 residual units, for a total of 28 layers. We then compared two approaches.
For the first approach, we simply fine-tuned the CIFAR-10 network by using training data from the CIFAR-100 dataset and replacing the final softmax. Second, we froze the original CIFAR-10 network and added an identical copy as a module, which would be trained on the same batches of data as the first approach. That is, we have: Network 1 – fine-tuning the base network, and Network 2 – freezing the base network and fine-tuning a module. This doubles the number of weights in the second network, but Network 1 and Network 2 have an identical number of weights to be trained, and those weights have the same starting value. More formally, we present these two approaches in equations 1 and 2 below. \[ y_{ft} = \text{softmax}(NN(x; w_0 = \{C_{10}\})) \] (1) \[ y_{mod} = \text{softmax}([NN(x; w^* = \{C_{10}\}), NN(x; w_0 = \{C_{10}\})]) \] (2) where \( y_{ft} \) denotes predictions made from a fine-tuned network and \( y_{mod} \) denotes predictions made from our modular architecture. \( NN \) denotes the neural network without softmax activation trained on CIFAR-10, and \( w_0 \) is the initialization of the weights, which are learned from training on CIFAR-10, i.e. \( w_0 = \{C_{10}\} \). Note that in our modular architecture the pre-trained weights are locked, as denoted by \( w^* = \{C_{10}\} \) in Equation 2, i.e., \( \nabla_w NN(w^*) \equiv 0 \) (both regimes are sketched in code at the end of this subsection). To train, we used the ADAM optimization algorithm (Kingma & Ba (2014)). We added an activity L2 regularization of \( 10^{-6} \) to the module to help break degeneracy. We used batches of 200, where each batch contained two images per class. Each batch was iterated over five times before the next batch was used; this iteration simulates multiple epochs over small data. We recorded the performance on the test set after each batch (Fig. 4a). We observe that for all amounts of training data, but particularly for small amounts of training data, the modular approach outperforms traditional fine-tuning. Of course, we chose to make the module a complete copy of the original CIFAR-10 network. This ensured we could compare with the same number of weights, same initialization, same data, etc. Further research will certainly reveal more compact module networks that outperform our example. Effect of training set size on fine-tuning versus modular architectures ![Bar chart comparing fine tuning vs the stitched module approach across different training set sizes and TFT layers](page_232_186_1107_410.png) Figure 5: Comparison of fine tuning vs the stitched module approach. TFT stands for 'Traditional Fine Tuning,' and the number of layers fine-tuned is indicated. Notice that our modular approach outperforms fine-tuning for all amounts of training data. The modular approach’s benefit over fine-tuning increases as the amount of available training data decreases.
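The two regimes in Eqs. (1) and (2) can be written down directly. The sketch below uses a small stand-in `base` network (the real experiments use the 28-layer residual network); the frozen copy satisfies \( \nabla_w NN(w^*) \equiv 0 \) because its parameters never receive gradients, and the activity L2 term and its weight are the ones stated above.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

base = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())  # stand-in

# Eq. (1): fine-tune the pre-trained weights directly, with a new 100-way softmax.
finetune = nn.Sequential(copy.deepcopy(base), nn.Linear(256, 100))

# Eq. (2): freeze one copy and train an identically initialised copy as the module.
frozen, module = copy.deepcopy(base), copy.deepcopy(base)
for p in frozen.parameters():
    p.requires_grad = False
head = nn.Linear(256 + 256, 100)

def modular_loss(x, y):
    a = module(x)                                    # trainable module activations
    logits = head(torch.cat([frozen(x), a], dim=1))
    # activity L2 regularisation (weight 1e-6) helps break degeneracy
    return F.cross_entropy(logits, y) + 1e-6 * a.pow(2).sum()
```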
4 EXPERIMENTS 4.1 TRANSFER LEARNING FROM CIFAR-10 TO CIFAR-100 WITH STITCH NETWORKS To investigate the effectiveness of modular networks for transfer learning, we explore a second example of transfer learning from CIFAR-10 in order to model CIFAR-100. As we were able to show above, a modular network is able to outperform traditional fine-tuning because it learns additional features that may complement those captured by the pre-trained model. However, there is no reason why a module must accept input only from the input layer, nor why it must send its output directly to the softmax layer. Here, we describe stitch networks, where the modules are actually interwoven with the original network. We believe that in modular networks, the untrained module learns representations that capture the difference from the original distribution of data to the distribution of data under the new task. Expanding upon this idea, instead of learning the shift in distributions only at the softmax layer as with our other modular networks, we integrate the signal from the learned modules much more tightly with the paired untrained modules by using a Stitch Network. Use of the Stitch Network allows the model to learn to correct the distribution difference after each transformation made by the learned module, as shown in Fig. 2. The stitch network we explore comprises layer pairs between a single learned module and a single unlearned module. The learned module is a five-layer convolutional neural network where the first two layers are 3x3 convolutional layers with 32 filters, followed by max pooling and two more 3x3 convolutions with 64 filters. The convolutions are followed by a fully connected layer with 512 outputs and finally a softmax for classification. This model is pre-trained on CIFAR-10, then stripped of the softmax layer, has its weights locked, and is then used as the learned module for the stitch network. The untrained module is composed in a similar fashion, with four 3x3 convolutions with max pooling, and a fully connected layer, each with 1/4 the number of outputs of the corresponding pre-trained layers. The outputs of each layer pair are concatenated and fed as the input to each succeeding layer of the untrained module, as sketched below. Both modules feed into the final softmax layer of the composite network, which then classifies over the new data set. A sketch of this is shown in Fig. 2. ![Network architecture used for Stanford Cars fine-tuned model.](page_184_97_1207_186.png) Figure 6: Network architecture used for Stanford Cars fine-tuned model. ![Network architecture used for Stanford Cars module model. Note, the ResNet used is identical to the one described in He et al. (2015).](page_184_377_1207_186.png) Figure 7: Network architecture used for Stanford Cars module model. Note, the ResNet used is identical to the one described in He et al. (2015). We train our entire composite model on a randomly selected subset of ten CIFAR-100 classes. We compare the accuracy over the validation set of the selected classes against traditional fine-tuning using only the learned module, as well as against an uninitialized version of the learned module. We were additionally interested in comparing across all models the effect of limiting the amount of data available for training. We repeat the experiment with the same subset of classes, varying the amount of available training data such that the networks are shown only a fraction of each class for training. Note that there are 500 available training examples per class in CIFAR-100. We find that, by using the stitch network, we are able to match or improve upon classification results (Fig. 5) obtained using traditional fine-tuning over a pre-trained model. We also outperform training from scratch, regardless of the amount of training data used. We note that we find significant gains over traditional methods as the number of available training examples drops below 200 examples per class.
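The sketch below shows one stitch stage under the description above: a frozen layer from the learned module paired with a thin trainable layer that consumes the concatenated outputs of the previous layer pair. The channel counts and the example instantiation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class StitchStage(nn.Module):
    """One layer pair: a frozen pre-trained layer and a thin trainable layer."""
    def __init__(self, frozen_layer, c_frozen, c_mod, c_mod_out):
        super().__init__()
        self.frozen = frozen_layer
        for p in self.frozen.parameters():
            p.requires_grad = False
        # the untrained layer sees the concatenated outputs of the previous pair
        self.mod = nn.Conv2d(c_frozen + c_mod, c_mod_out, 3, padding=1)

    def forward(self, f, m):
        pair = torch.cat([f, m], dim=1)
        return self.frozen(f), torch.relu(self.mod(pair))

# e.g. a frozen 32-filter conv stitched with an 8-filter (1/4-width) module layer
stage = StitchStage(nn.Conv2d(32, 32, 3, padding=1), c_frozen=32, c_mod=8, c_mod_out=8)
f, m = torch.randn(1, 32, 32, 32), torch.randn(1, 8, 32, 32)
f_next, m_next = stage(f, m)   # feed into the next stage, ending in a shared softmax
```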
4.2 STANFORD CARS DATA SET The Stanford Cars data set (Krause et al. (2013)), which features 16,185 images of 196 classes of cars, is an example of a data set for fine-grained categorization. Rather than train a classifier to distinguish between fundamentally different objects like horses and planes, as required in the Large Scale Visual Recognition Challenge (Russakovsky et al. (2015)), fine-grained categorization requires the classifier to learn subtle differences in variations of the same entity. For example, a classifier trained on the Stanford Cars data set would have to learn distinguishing features between a BMW X6 SUV from 2012 and an Isuzu Ascender SUV from 2008. In this research, two models are trained on the Stanford Cars data set. Both models utilize a transfer learning approach by leveraging the non-fully-connected output from the VGG16 model (Simonyan & Zisserman (2014)). The “fine-tuned” model passes the VGG16 features to a fully connected layer of length 512 followed by a softmax layer of length 196, as seen in Fig. 6. Gradient descent via RMSprop is used to train the dense layers. The “module” model merges the fixed VGG16 features with a ResNet (He et al. (2015)) model, whose output is then fed to two consecutive dense layers of length 256, capped by a softmax layer of length 196. The module model architecture is shown in Fig. 7, and sketched in code below. Again, RMSprop is used to train the ResNet and post-merge dense layer weights, but the VGG features are unchanged. As seen in Fig. 4b, after 50 epochs the module model appears to significantly outperform the fine-tuned model in validation accuracy. However, it should be noted that while the module model carries 19,537,990 trainable parameters, the fine-tuned model has only 12,946,116 parameters. Furthermore, no hyperparameter optimization is performed on either model.
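A sketch of the module model using torchvision stand-ins: frozen VGG16 convolutional features merged with a trainable residual branch, followed by the dense layers described above. The pooling, the 512-dimensional branch output, and other merge details are our assumptions, not the exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, resnet18

class CarsModuleModel(nn.Module):
    def __init__(self, n_classes=196):
        super().__init__()
        self.vgg_features = vgg16(weights='IMAGENET1K_V1').features
        for p in self.vgg_features.parameters():
            p.requires_grad = False              # VGG features stay unchanged
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.module = resnet18(num_classes=512)  # trainable residual branch
        self.head = nn.Sequential(nn.Linear(512 + 512, 256), nn.ReLU(),
                                  nn.Linear(256, 256), nn.ReLU(),
                                  nn.Linear(256, n_classes))

    def forward(self, x):
        with torch.no_grad():
            f = self.pool(self.vgg_features(x)).flatten(1)   # 512-d VGG features
        return self.head(torch.cat([f, self.module(x)], dim=1))

model = CarsModuleModel()
# RMSprop trains only the residual branch and the post-merge dense layers
optimiser = torch.optim.RMSprop(p for p in model.parameters() if p.requires_grad)
```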
4.3 MODULE FOR LSTM TEXT CLASSIFICATION We further investigate the effects of our modular network approach by applying this method to a different modeling problem - text classification. Similar to image data, text represents an unstructured data type that often exhibits long-term and interconnected dependencies within the data that are difficult to model with simpler classifiers. Whereas in the case of images neighboring pixels may represent semantically related concepts or objects, in text words may exhibit long-term semantic or syntactic dependencies that can be modeled sequentially. These characteristics make text classification particularly well-suited to recurrent neural networks such as long short-term memory (LSTM) networks, but these learning methods typically require a great deal of data to be learned efficiently and to avoid overfitting. To test our methodology, we evaluate a modular recurrent network against two individual recurrent neural networks on the IMDB sentiment dataset. Previous work has shown deep learning methods to be effective at sentiment classification on this dataset (Maas et al. (2011)); however, we add to this past work by presenting an analysis that demonstrates the effectiveness of modular networks in the case of extremely small training sets. To this end, we sample only 500 training examples from the original 25,000 available in the full training set, and evaluate on the full 25,000 validation examples. We use the same 500 training examples for each model evaluated in our experiments for consistency, and report accuracy for each model on the full validation set. We evaluate three models in our text-classification experiments, two of which are individual recurrent networks and the third of which is our modular recurrent network. The first model consists of three layers - an initial layer that projects sequences of words into an embedding space, a second LSTM layer with 32 units, and a final sigmoid layer for computing the probability of the text belonging to the positive class. Our second model is identical to the first, except that we fix the weights of the embedding layer using pre-trained GloVe word vectors[8]. In particular, we use 100-dimensional vectors computed from a 2014 version of Wikipedia. Finally, we detail our modular network, which leverages both individual recurrent neural networks described above. To construct our modular network, we take the embedding and LSTM layers from our individual networks, and concatenate the output of both LSTM layers into a single tensor layer in the middle of our modular network. Additionally, we modify the output of each of these component LSTM layers, forcing each to output its hidden states across all timesteps rather than only its final state. In this way, we seek to fully leverage the sequential dependencies learned by this layer, and this method outperforms the simpler alternative of outputting only the final state of each of the LSTM layers. We then feed this concatenated layer to a gated recurrent unit (GRU) layer, with a sigmoid activation function for calculation of class probabilities. We experimented with an LSTM and densely connected layers after the tensor concatenation layer, but found the best performance with the GRU. All models were optimized with the ADAM algorithm, and trained for 15 epochs. An outline of this architecture can be seen in Figure 8, and a code sketch is given below. ![Diagram of architecture for our modular recurrent text classification network. Dimensionality for embedding layers and number of units for all other layers are given in boxes denoting those layers.](page_184_120_1207_312.png) Figure 8: Diagram of architecture for our modular recurrent text classification network. Dimensionality for embedding layers and number of units for all other layers are given in boxes denoting those layers. [8] http://nlp.stanford.edu/projects/glove/ Here, we report results for our classification experiments with the three networks described above. We see an accuracy of 61.9% for our first model, which is trained directly from the data without any pre-training. This is significantly lower than previously reported results; however, we are training on only 2% of the available data to test our method’s application to small training sets. We see slightly better performance in terms of accuracy (64.9%) from our second model, initialized with GloVe vectors. This seems to indicate that, despite being trained on the more formally written language of Wikipedia, these vectors can still boost performance on a task modeling text that is inherently subjective and opinion-based. Finally, we see an accuracy of 69.6% from our modular network, an increase of almost 5% accuracy over the next best performing model. Because weight initializations of recurrent networks can greatly affect model performance, we ran the classification experiments with our modular network 10 times, and report the average accuracy across these 10 runs. As can be seen here, our modular approach improves on the best performing individual network, suggesting that this approach is useful in the domain of text classification, and that our modular approach overcomes the poor performance shown by one of its component models.
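A sketch of the modular text model outlined in Figure 8: two embedding-plus-LSTM towers, one with frozen GloVe vectors, both returning per-timestep hidden states, whose concatenation feeds a GRU whose final state is classified. The layer sizes follow the text where given; everything else (names, vocabulary size, the random stand-in for GloVe weights) is an assumption.

```python
import torch
import torch.nn as nn

class ModularTextNet(nn.Module):
    def __init__(self, vocab_size, glove_weights):    # glove_weights: (vocab_size, 100)
        super().__init__()
        self.emb_a = nn.Embedding(vocab_size, 100)    # trainable embedding tower
        self.emb_b = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        self.lstm_a = nn.LSTM(100, 32, batch_first=True)
        self.lstm_b = nn.LSTM(100, 32, batch_first=True)
        self.gru = nn.GRU(64, 32, batch_first=True)   # reads the concatenated states
        self.out = nn.Linear(32, 1)

    def forward(self, tokens):
        ha, _ = self.lstm_a(self.emb_a(tokens))       # hidden states at every timestep
        hb, _ = self.lstm_b(self.emb_b(tokens))
        _, h = self.gru(torch.cat([ha, hb], dim=-1))
        return torch.sigmoid(self.out(h[-1]))         # P(positive sentiment)

net = ModularTextNet(vocab_size=20000, glove_weights=torch.randn(20000, 100))
probs = net(torch.randint(0, 20000, (4, 50)))         # batch of 4 length-50 reviews
```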
<table> <tr> <th>MODEL</th> <th>BM</th> <th>PM</th> <th>MM</th> </tr> <tr> <th>ACCURACY (%)</th> <td>61.9</td> <td>64.9</td> <td><b>69.6</b></td> </tr> </table> Table 1: Accuracy results for text classification experiments, using only 500 training examples. Results are shown for the baseline model (BM), pre-trained (GloVe) model (PM) and modular model (MM). 5 CONCLUSIONS We have presented a neuro-modular approach to transfer learning. By mixing pre-trained neural networks (that have fixed weights) with networks to be trained on the specific domain data, we are able to learn the shift in distributions between data sets. As we have shown, the new modules often learn features that complement the features previously learned in the pre-trained network. We have shown that our approach outperforms traditional fine-tuning, particularly when the amount of training data is small – only tens of examples per class. Further research will explore more efficient architectures and training strategies, but we have demonstrated that our approach works well for MNIST, the CIFARs, the Stanford Cars dataset, and IMDB sentiment. Thus, the modular approach will be a valuable strategy when one has a large pre-trained network available but only a small amount of training data in the transfer task. ACKNOWLEDGMENTS This work was supported by the US Government.
reject
Reject
5.333333
13cb1b3f52dcef286c4a91dd6efeca87b51c3eee
iclr
2,017
IMPROVING SAMPLING FROM GENERATIVE AUTOENCODERS WITH MARKOV CHAINS Antonia Creswell, Kai Arulkumaran & Anil A. Bharath Department of Bioengineering Imperial College London London SW7 2BP, UK {ac2211,ka709,aab01}@ic.ac.uk ABSTRACT We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. Generative autoencoders are those which are trained to softly enforce a prior on the latent distribution learned by the inference model. We call the distribution to which the inference model maps observed samples the *learned latent distribution*, which may not be consistent with the prior. We formulate a Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively decoding and encoding, which allows us to sample from the learned latent distribution. Since the generative model learns to map from the *learned* latent distribution, rather than the prior, we may use MCMC to improve the quality of samples drawn from the *generative* model, especially when the learned latent distribution is far from the prior. Using MCMC sampling, we are able to reveal previously unseen differences between generative autoencoders trained either with or without a denoising criterion. 1 INTRODUCTION Unsupervised learning has benefited greatly from the introduction of deep generative models. In particular, the introduction of generative adversarial networks (GANs) (Goodfellow et al., 2014) and variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) has led to a plethora of research into learning latent variable models that are capable of generating data from complex distributions, including the space of natural images (Radford et al., 2015). Both of these models, and their extensions, operate by placing a prior distribution, \( P(Z) \), over a latent space \( Z \subseteq \mathbb{R}^b \), and learn mappings from the latent space, \( Z \), to the space of the observed data, \( X \subseteq \mathbb{R}^a \). We are interested in autoencoding generative models, models which learn not just the generative mapping \( Z \mapsto X \), but also the inferential mapping \( X \mapsto Z \). Specifically, we define *generative autoencoders* as autoencoders which softly constrain their latent distribution to match a specified prior distribution, \( P(Z) \). This is achieved by minimising a loss, \( \mathcal{L}_{\text{prior}} \), between the latent distribution and the prior. This includes VAEs (Kingma & Welling, 2014; Rezende et al., 2014), extensions of VAEs (Kingma et al., 2016), and also adversarial autoencoders (AAEs) (Makhzani et al., 2015). Whilst other autoencoders also learn an encoding function, \( e : \mathbb{R}^a \to Z \), together with a decoding function, \( d : \mathbb{R}^b \to X \), their latent space is not necessarily constrained to conform to a specified probability distribution. This is the key distinction for generative autoencoders; both \( e \) and \( d \) can still be deterministic functions (Makhzani et al., 2015). The functions \( e \) and \( d \) are defined for any input from \( \mathbb{R}^a \) and \( \mathbb{R}^b \) respectively; however, the outputs of the functions may be constrained practically by the type of functions that \( e \) and \( d \) are, such that \( e \) maps to \( Z \subseteq \mathbb{R}^b \) and \( d \) maps to \( X \subseteq \mathbb{R}^a \).
During training, however, the encoder \( e \) is only fed with training data samples, \( \mathbf{x} \in X \), and the decoder \( d \) is only fed with samples from the encoder, \( \mathbf{z} \in Z \), and so the encoder and decoder learn mappings between \( X \) and \( Z \). The process of encoding and decoding may be interpreted as sampling the conditional probabilities \( Q_\phi(Z|X) \) and \( P_\theta(X|Z) \) respectively. The conditional distributions may be sampled using the encoding and decoding functions \( e(X;\phi) \) and \( d(Z;\theta) \), where \( \phi \) and \( \theta \) are learned parameters of the encoding and decoding functions respectively. The decoder of a generative autoencoder may be used to generate new samples that are consistent with the data. There are two traditional approaches for sampling generative autoencoders: Approach 1 (Bengio et al., 2014): \[ \mathbf{x}_0 \sim P(X), \quad \mathbf{z}_0 \sim Q_\phi(Z|X = \mathbf{x}_0), \quad \mathbf{x}_1 \sim P_\theta(X|Z = \mathbf{z}_0) \] where \(P(X)\) is the data generating distribution. However, this approach is likely to generate samples similar to those in the training data, rather than generating novel samples that are consistent with the training data. Approach 2 (Kingma & Welling, 2014; Makhzani et al., 2015; Rezende et al., 2014): \[ \mathbf{z}_0 \sim P(Z), \quad \mathbf{x}_0 \sim P_\theta(X|Z = \mathbf{z}_0) \] where \(P(Z)\) is the prior distribution enforced during training and \(P_\theta(X|Z)\) is the decoder trained to map samples drawn from \(Q_\phi(Z|X)\) to samples consistent with \(P(X)\). This approach assumes that \( \int Q_\phi(Z|X)P(X)dX = P(Z) \), i.e., that the encoder maps all data samples from \(P(X)\) to a distribution that matches the prior distribution, \(P(Z)\). However, it is not always true that \( \int Q_\phi(Z|X)P(X)dX = P(Z) \). Rather, \(Q_\phi(Z|X)\) maps data samples to a distribution which we call \( \hat{P}(Z) \): \[ \int Q_\phi(Z|X)P(X)dX = \hat{P}(Z) \] where it is not necessarily true that \( \hat{P}(Z) = P(Z) \), because the prior is only softly enforced. The decoder, on the other hand, is trained to map encoded data samples (i.e. samples from \( \int Q_\phi(Z|X)P(X)dX \)) to samples from \(X\) which have the distribution \(P(X)\). If the encoder maps observed samples to latent samples with the distribution \( \hat{P}(Z) \), rather than the desired prior distribution, \(P(Z)\), then: \[ \int P_\theta(X|Z)P(Z)dZ \neq P(X) \] This suggests that samples drawn from the decoder, \(P_\theta(X|Z)\), conditioned on samples drawn from the prior, \(P(Z)\), may not be consistent with the data generating distribution, \(P(X)\). However, by conditioning on \( \hat{P}(Z) \): \[ \int P_\theta(X|Z)\hat{P}(Z)dZ = P(X) \] This suggests that to obtain more realistic generations, latent samples should be drawn via \( \mathbf{z} \sim \hat{P}(Z) \) rather than \( \mathbf{z} \sim P(Z) \), followed by \( \mathbf{x} \sim P_\theta(X|Z) \). A limited number of latent samples may be drawn from \( \hat{P}(Z) \) using the first two steps in Approach 1; however, this has the drawbacks discussed for Approach 1. We introduce an alternative method for sampling from \( \hat{P}(Z) \), which does not have the same drawbacks.
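The two traditional routes can be written as a few lines of code; `encoder` and `decoder` stand in for sampling \( Q_\phi(Z|X) \) and \( P_\theta(X|Z) \), and the function names and the Gaussian prior are illustrative assumptions.

```python
import torch

def approach_1(encoder, decoder, x0):
    # x0 ~ P(X) from the training data; generations tend to stay close to the data
    z0 = encoder(x0)              # z0 ~ Q_phi(Z|X = x0)
    return decoder(z0)            # x1 ~ P_theta(X|Z = z0)

def approach_2(decoder, b=200):
    # assumes the learned latent distribution P_hat(Z) matches the prior P(Z)
    z0 = torch.randn(1, b)        # z0 ~ P(Z) = N(0, I)
    return decoder(z0)            # may be poor wherever P_hat(Z) and P(Z) disagree
```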
Our main contribution is the formulation of a Markov chain Monte Carlo (MCMC) sampling process for generative autoencoders, which allows us to sample from \( \hat{P}(Z) \). By iteratively sampling the chain, starting from an arbitrary \( \mathbf{z}_{t=0} \in \mathbb{R}^b \), the chain converges to \( \mathbf{z}_{t \to \infty} \sim \hat{P}(Z) \), allowing us to draw latent samples from \( \hat{P}(Z) \) after several steps of MCMC sampling. From a practical perspective, this is achieved by iteratively decoding and encoding, which may be easily applied to existing generative autoencoders. Because \( \hat{P}(Z) \) is optimised to be close to \( P(Z) \), the initial sample, \( \mathbf{z}_{t=0} \), can be drawn from \( P(Z) \), improving the quality of the samples within a few iterations. When interpolating between latent encodings, there is no guarantee that \( \mathbf{z} \) stays within high-density regions of \( \hat{P}(Z) \). Previously, this has been addressed by using spherical, rather than linear, interpolation of the high-dimensional \( Z \) space (White, 2016). However, this approach attempts to keep \( \mathbf{z} \) within \( P(Z) \), rather than trying to sample from \( \hat{P}(Z) \). By instead applying several steps of MCMC sampling to the interpolated \( \mathbf{z} \) samples before sampling \( P_\theta(X|Z) \), unrealistic artifacts can be reduced (see Figure 2). Whilst most methods that aim to generate realistic samples from \( X \) rely on adjusting encodings of the observed data (White, 2016), our use of MCMC allows us to walk any latent sample to more probable regions of the learned latent distribution, resulting in more convincing generations, as sketched below. Figure 1: \( P(X) \) is the data generating distribution. We may access some samples from \( P(X) \) by drawing samples from the training data. \( Q_\phi(Z|X) \) is the conditional distribution, modeled by an encoder, which maps samples from \( \mathbb{R}^a \) to samples in \( \mathbb{R}^b \). An ideal encoder maps samples from \( P(X) \) to a known, prior distribution \( P(Z) \); in reality the encoder maps samples from \( P(X) \) to an unknown distribution \( \hat{P}(Z) \). \( P_\theta(X|Z) \) is a conditional distribution, modeled by a decoder, which maps samples from \( \mathbb{R}^b \) to \( \mathbb{R}^a \). During training the decoder learns to map samples drawn from \( \hat{P}(Z) \) to \( P(X) \) rather than samples drawn from \( P(Z) \), because the decoder only sees samples from \( \hat{P}(Z) \). Regularisation on the latent space only encourages \( \hat{P}(Z) \) to be close to \( P(Z) \). Note that if \( \mathcal{L}_{prior} \) is optimal, then \( \hat{P}(Z) \) overlaps fully with \( P(Z) \). ![Diagram showing the relationship between P(X), P(Z), Q_\phi(Z|X), and P_\theta(X|Z)](page_232_186_1107_349.png) (a) VAE (initial) (b) VAE (5 steps) (c) VAE (initial) (d) VAE (5 steps) Figure 2: **Prior work:** Spherically interpolating (White, 2016) between two faces using a VAE (a, c). In (a), the attempt to gradually generate sunglasses results in visual artifacts around the eyes. In (c), the model fails to properly capture the desired change in orientation of the face, resulting in three partial faces in the middle of the interpolation. **This work:** (b) and (d) are the result of 5 steps of MCMC sampling applied to the latent samples that were used to generate the original interpolations, (a) and (c). In (b), the discolouration around the eyes disappears, with the model settling on either generating or not generating glasses. In (d), the model moves away from multiple faces in the interpolation by producing new faces with appropriate orientations.
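A sketch of this interpolation-then-refinement recipe: spherical interpolation follows White (2016), and the refinement step is the decode-encode iteration formalised in Section 3. The encoder/decoder calls and the treatment of latents as flat 1-D vectors are assumptions.

```python
import torch

def slerp(z0, z1, t):
    # spherical interpolation between two latent vectors
    omega = torch.acos(torch.clamp(
        torch.dot(z0 / z0.norm(), z1 / z1.norm()), -1.0, 1.0))
    return (torch.sin((1 - t) * omega) * z0
            + torch.sin(t * omega) * z1) / torch.sin(omega)

def refined_interpolation(encoder, decoder, xa, xb, n=8, mcmc_steps=5):
    # assumes encoder returns flat 1-D latent vectors for distinct inputs
    za, zb = encoder(xa), encoder(xb)
    zs = [slerp(za, zb, float(t)) for t in torch.linspace(0, 1, n)]
    for _ in range(mcmc_steps):                 # walk each z toward P_hat(Z)
        zs = [encoder(decoder(z)) for z in zs]
    return [decoder(z) for z in zs]
```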
We demonstrate that the use of MCMC sampling improves generations from both VAEs and AAEs with high-dimensional \( Z \); this is important, as previous studies have shown that the dimensionality of \( Z \) should be scaled with the intrinsic latent dimensionality of the observed data. Our second contribution is the modification of the proposed transition operator for the MCMC sampling process to denoising generative autoencoders. These are generative autoencoders trained using a denoising criterion (Seung, 1997; Vincent et al., 2008). We reformulate our original MCMC sampling process to incorporate the noising and denoising processes, allowing us to use MCMC sampling on denoising generative autoencoders. We apply this sampling technique to two models. The first is the denoising VAE (DVAE) introduced by Im et al. (2015). We found that MCMC sampling revealed benefits of the denoising criterion. The second model is a denoising AAE (DAAE), constructed by applying the denoising criterion to the AAE, with no modifications to the cost function. For both the DVAE and the DAAE, the effects of the denoising criterion were not immediately obvious from the initial samples: training generative autoencoders with a denoising criterion reduced visual artefacts found both in generations and in interpolations, but this effect was only revealed when sampling the denoising models using MCMC sampling. 2 BACKGROUND One of the main tasks in machine learning is to learn explanatory factors for observed data, commonly known as inference. That is, given a data sample \( x \in X \subseteq \mathbb{R}^a \), we would like to find a corresponding latent encoding \( z \in Z \subseteq \mathbb{R}^b \). Another task is to learn the inverse, generative mapping from a given z to a corresponding x. In general, coming up with a suitable criterion for learning these mappings is difficult. Autoencoders solve both tasks efficiently by jointly learning an inferential mapping \( e(X; \phi) \) and generative mapping \( d(Z; \theta) \), using unlabelled data from X in a self-supervised fashion (Kingma & Welling, 2014). The basic objective of all autoencoders is to minimise a reconstruction cost, \( \mathcal{L}_{reconstruct} \), between the original data, X, and its reconstruction, \( d(e(x_n; \phi); \theta) \). Examples of \( \mathcal{L}_{reconstruct} \) include the squared error loss, \( \frac{1}{2} \sum_{n=1}^N \|d(e(x_n; \phi); \theta) - x_n\|^2 \), and the cross-entropy loss, \( \mathcal{H}[P(X)\|P(d(e(X; \phi); \theta))] = -\sum_{n=1}^N \left[ x_n \log(d(e(x_n; \phi); \theta)) + (1-x_n) \log(1-d(e(x_n; \phi); \theta)) \right] \). Autoencoders may be cast into a probabilistic framework, by considering samples \( x \sim P(X) \) and \( z \sim P(Z) \), and attempting to learn the conditional distributions \( Q_\phi(Z|X) \) and \( P_\theta(X|Z) \) as \( e(X; \phi) \) and \( d(Z; \theta) \) respectively, with \( \mathcal{L}_{reconstruct} \) representing the negative log-likelihood of the reconstruction given the encoding (Bengio, 2009). With any autoencoder, it is possible to create novel \( x \in X \) by passing a \( z \in Z \) through \( d(Z; \theta) \), but we have no knowledge of appropriate choices of z beyond those obtained via \( e(X; \phi) \). One solution is to constrain the latent space to which the encoding model maps observed samples. This can be achieved by an additional loss, \( \mathcal{L}_{prior} \), that penalises encodings far away from a specified prior distribution, \( P(Z) \).
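Before turning to specific choices of \( \mathcal{L}_{prior} \), the reconstruction objectives above translate directly to code. This sketch assumes a sigmoid-output decoder (pixels in [0, 1]) and, for the denoising variant, the additive Gaussian corruption used later in the paper.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(encoder, decoder, x):
    # cross-entropy between pixels in [0, 1] and their reconstructions
    return F.binary_cross_entropy(decoder(encoder(x)), x, reduction='sum')

def denoising_loss(encoder, decoder, x, sigma=0.5):
    x_tilde = x + sigma * torch.randn_like(x)   # x~ ~ C(X~|X) = N(X, sigma^2 I)
    # reconstruct the clean x from the corrupted input x~
    return F.binary_cross_entropy(decoder(encoder(x_tilde)), x, reduction='sum')
```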
We now review two types of generative autoencoders, VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and AAEs (Makhzani et al., 2015), which each take different approaches to formulating \( \mathcal{L}_{prior} \). 2.1 GENERATIVE AUTOENCODERS Consider the case where e is constructed with stochastic neurons that can produce outputs from a specified probability distribution, and \( \mathcal{L}_{prior} \) is used to constrain the distribution of outputs to \( P(Z) \). This leaves the problem of estimating the gradient of the autoencoder over the expectation \( \mathbb{E}_{Q_\phi(Z|X)} \), which would typically be addressed with a Monte Carlo method. VAEs sidestep this by constructing latent samples using a deterministic function and a source of noise, moving the source of stochasticity to an input and leaving the network itself deterministic for standard gradient calculations, a technique commonly known as the reparameterisation trick (Kingma & Welling, 2014). \( e(X; \phi) \) then consists of a deterministic function, \( e_{rep}(X; \phi) \), that outputs parameters for a probability distribution, plus a source of noise. In the case where \( P(Z) \) is a diagonal covariance Gaussian, \( e_{rep}(X; \phi) \) maps \( \mathbf{x} \) to a vector of means, \( \mu \in \mathbb{R}^b \), and a vector of standard deviations, \( \sigma \in \mathbb{R}_+^b \), with the noise \( \epsilon \sim \mathcal{N}(0, I) \). Put together, the encoder outputs samples \( \mathbf{z} = \mu + \epsilon \odot \sigma \), where \( \odot \) is the Hadamard product. VAEs attempt to make these samples from the encoder match up with \( P(Z) \) by using the KL divergence between the parameters for a probability distribution outputted by \( e_{rep}(X; \phi) \) and the parameters for the prior distribution, giving \( \mathcal{L}_{prior} = D_{KL}[Q_\phi(Z|X)\|P(Z)] \). A multivariate Gaussian has an analytical KL divergence that can be further simplified when considering the unit Gaussian, resulting in \( \mathcal{L}_{prior} = \frac{1}{2} \sum_{n=1}^N \mu^2 + \sigma^2 - \log(\sigma^2) - 1 \). Figure 3: Reconstructions of faces from a DVAE trained with additive Gaussian noise: \( C(\tilde{X}|X) = \mathcal{N}(X, 0.25I) \). The model successfully recovers much of the detail from the noise-corrupted images.
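The reparameterisation trick and the analytic KL term above take only a few lines; `e_rep` (returning the mean and the log-variance) is an assumed interface, not code from the paper.

```python
import torch

def sample_latent(e_rep, x):
    mu, logvar = e_rep(x)                        # parameters from e_rep(X; phi)
    eps = torch.randn_like(mu)                   # stochasticity moved to an input
    return mu + eps * torch.exp(0.5 * logvar)    # z = mu + eps (*) sigma

def kl_to_unit_gaussian(mu, logvar):
    # 1/2 * sum(mu^2 + sigma^2 - log(sigma^2) - 1)
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1)
```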
Another approach is to deterministically output the encodings \( \mathbf{z} \). Rather than minimising a metric between probability distributions using their parameters, we can turn this into a density ratio estimation problem, where the goal is to learn a conditional distribution, \( Q_\phi(Z|X) \), such that the distribution of the encoded data samples, \( \hat{P}(Z) = \int Q_\phi(Z|X)P(X)dX \), matches the prior distribution, \( P(Z) \). The GAN framework solves this density ratio estimation problem by transforming it into a class estimation problem using two networks (Goodfellow et al., 2014). The first network in GAN training is the discriminator network, \( D_\psi \), which is trained to maximise the log probability of samples from the “real” distribution, \( \mathbf{z} \sim P(Z) \), and minimise the log probability of samples from the “fake” distribution, \( \mathbf{z} \sim Q_\phi(Z|X) \). In our case \( e(X; \phi) \) plays the role of the second network, the generator network, \( G_\phi \), which generates the “fake” samples (we adapt the variables to better fit the conventions used in the context of autoencoders). The two networks compete in a minimax game, where \( G_\phi \) receives gradients from \( D_\psi \) such that it learns to better fool \( D_\psi \). The training objective for both networks is given by \( \mathcal{L}_{prior} = \operatorname{argmin}_\phi \operatorname{argmax}_\psi \mathbb{E}_{P(Z)}[\log(D_\psi(Z))] + \mathbb{E}_{P(X)}[\log(1 - D_\psi(G_\phi(X)))] = \operatorname{argmin}_\phi \operatorname{argmax}_\psi \mathbb{E}_{P(Z)}[\log(D_\psi(Z))] + \mathbb{E}_{Q_\phi(Z|X)P(X)}[\log(1 - D_\psi(Z))] \). This formulation can create problems during training, so instead \( G_\phi \) is trained to minimise \( -\log(D_\psi(G_\phi(X))) \), which provides the same fixed point of the dynamics of \( G_\phi \) and \( D_\psi \). The result of applying the GAN framework to the encoder of an autoencoder is the deterministic AAE (Makhzani et al., 2015). 2.2 DENOISING AUTOENCODERS From a more general viewpoint, generative autoencoders fulfill the purpose of learning useful representations of the observed data. Another widely used class of autoencoders that achieve this are denoising autoencoders (DAEs), which are motivated by the idea that learned features should be robust to “partial destruction of the input” (Vincent et al., 2008). Not only does this require encoding the inputs, but also capturing the statistical dependencies between the inputs, so that corrupted data can be recovered (see Figure 3). DAEs are presented with a corrupted version of the input, \( \tilde{\mathbf{x}} \in \tilde{X} \), but must still reconstruct the original input, \( \mathbf{x} \in X \), where the noisy inputs are created through sampling \( \tilde{\mathbf{x}} \sim C(\tilde{X}|X) \), a corruption process. The denoising criterion, \( \mathcal{L}_{denoise} \), can be applied to any type of autoencoder by replacing the straightforward reconstruction criterion, \( \mathcal{L}_{reconstruct}(X, d(e(X; \phi); \theta)) \), with the reconstruction criterion applied to noisy inputs: \( \mathcal{L}_{reconstruct}(X, d(e(\tilde{X}; \phi); \theta)) \). The encoder is now used to model samples drawn from \( Q_\phi(Z|\tilde{X}) \). As such, we can construct *denoising generative autoencoders* by training autoencoders to minimise \( \mathcal{L}_{denoise} + \mathcal{L}_{prior} \). One might expect to see differences in samples drawn from denoising generative autoencoders and their non-denoising counterparts; however, Figures 4 and 6 show that this is not the case. Im et al. (2015) address the case of DVAEs, claiming that the noise mapping requires adjusting the original VAE objective function. Our work is orthogonal to theirs, and to others which adjust the training or the model (Kingma et al., 2016), as we focus purely on sampling from generative autoencoders after training. We claim that the existing practice of drawing samples from generative autoencoders conditioned on \( \mathbf{z} \sim P(Z) \) is suboptimal, and the quality of samples can be improved by instead conditioning on \( \mathbf{z} \sim \hat{P}(Z) \) via MCMC sampling. 3 MARKOV SAMPLING We now consider the case of sampling from generative autoencoders, where \( d(Z; \theta) \) is used to draw samples from \( P_\theta(X|Z) \). In Section 1 we showed that it is important, when sampling \( P_\theta(X|Z) \), to condition on samples \( \mathbf{z} \) drawn from \( \hat{P}(Z) \), rather than \( P(Z) \) as is often done in practice.
However, we now show that for any initial \( \mathbf{z}_0 \in Z_0 = \mathbb{R}^b \), Markov sampling can be used to produce a chain of samples \( \mathbf{z}_t \) such that, as \( t \to \infty \), the samples \( \mathbf{z}_t \) are drawn from the distribution \( \hat{P}(Z) \), and may be used to draw meaningful samples from \( P_\theta(X|Z) \) conditioned on \( \mathbf{z} \sim \hat{P}(Z) \). To speed up convergence we can initialise \( \mathbf{z}_0 \) from a distribution close to \( \hat{P}(Z) \), by drawing \( \mathbf{z}_0 \sim P(Z) \). 3.1 MARKOV SAMPLING PROCESS A generative autoencoder can be sampled by the following process: \[ \mathbf{z}_0 \in Z_0 = \mathbb{R}^b, \quad \mathbf{x}_{t+1} \sim P_\theta(X|Z_t), \quad \mathbf{z}_{t+1} \sim Q_\phi(Z|X_{t+1}) \] This allows us to define a Markov chain with the transition operator \[ T(Z_{t+1}|Z_t) = \int Q_\phi(Z_{t+1}|X)P_\theta(X|Z_t)dX \tag{1} \] for \( t \geq 0 \). Drawing samples according to the transition operator \( T(Z_{t+1}|Z_t) \) produces a Markov chain. For the transition operator to be homogeneous, the parameters of the encoding and decoding functions are fixed during sampling. 3.2 CONVERGENCE PROPERTIES We now show that the stationary distribution of sampling from the Markov chain is \( \hat{P}(Z) \). **Theorem 1.** *If \( T(Z_{t+1}|Z_t) \) defines an ergodic Markov chain, \( \{Z_1, Z_2, \ldots, Z_t\} \), then the chain will converge to a stationary distribution, \( \Pi(Z) \), from any arbitrary initial distribution. The stationary distribution is \( \Pi(Z) = \hat{P}(Z) \).* The proof of Theorem 1 can be found in Rosenthal (2001). **Lemma 1.** *\( T(Z_{t+1}|Z_t) \) defines an ergodic Markov chain.* *Proof.* For a Markov chain to be ergodic it must be both irreducible (it is possible to get from any state to any other state in a finite number of steps) and aperiodic (it is possible to get from any state to any other state without having to pass through a cycle). To satisfy these requirements, it is more than sufficient to show that \( T(Z_{t+1}|Z_t) > 0 \), since every \( \mathbf{z} \in Z \) would then be reachable from every other \( \mathbf{z} \in Z \). We show that \( P_\theta(X|Z) > 0 \) and \( Q_\phi(Z|X) > 0 \), giving \( T(Z_{t+1}|Z_t) > 0 \); the proof of this is provided in Section A of the supplementary material. \( \square \) **Lemma 2.** *The stationary distribution of the chain defined by \( T(Z_{t+1}|Z_t) \) is \( \Pi(Z) = \hat{P}(Z) \).* *Proof.* For the transition operator defined in Equation (1), the asymptotic distribution to which \( T(Z_{t+1}|Z_t) \) converges is \( \hat{P}(Z) \), because \( \hat{P}(Z) \) is, by definition, the marginal over \( X \) of the joint distribution \( Q_\phi(Z|X)P(X) \), where \( Q_\phi(Z|X) \) is the conditional distribution learned under \( \mathcal{L}_{prior} \). \( \square \) Using Lemmas 1 and 2 with Theorem 1, we can say that the transition operator in Equation (1) will produce a Markov chain that converges to the stationary distribution \( \Pi(Z) = \hat{P}(Z) \). 3.3 EXTENSION TO DENOISING GENERATIVE AUTOENCODERS A denoising generative autoencoder can be sampled by the following process: \[ \mathbf{z}_0 \in Z_0 = \mathbb{R}^b, \quad \mathbf{x}_{t+1} \sim P_\theta(X|Z_t), \quad \tilde{\mathbf{x}}_{t+1} \sim C(\tilde{X}|X_{t+1}), \quad \mathbf{z}_{t+1} \sim Q_\phi(Z|\tilde{X}_{t+1}) \] This allows us to define a Markov chain with the transition operator \[ T(Z_{t+1}|Z_t) = \int\!\!\int Q_\phi(Z_{t+1}|\tilde{X})C(\tilde{X}|X)P_\theta(X|Z_t)\,dX\,d\tilde{X} \tag{2} \] for \( t \geq 0 \).
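Operationally, both chains are just iterated decode, (optionally) corrupt, encode. A minimal sketch, assuming deterministic encoder and decoder functions; `corrupt=None` gives the plain chain of Equation (1), while passing a corruption process gives the denoising chain of Equation (2).

```python
import torch

def mcmc_sample(encoder, decoder, z0, steps=5, corrupt=None):
    z = z0                                 # z0 ~ P(Z) speeds up convergence
    for _ in range(steps):
        x = decoder(z)                     # x_{t+1} ~ P_theta(X|Z_t)
        if corrupt is not None:
            x = corrupt(x)                 # x~_{t+1} ~ C(X~|X_{t+1})
        z = encoder(x)                     # z_{t+1} ~ Q_phi(Z|X_{t+1})
    return z                               # approximately distributed as P_hat(Z)

z0 = torch.randn(1, 200)                   # initial sample from a 200D unit Gaussian prior
# e.g. corrupt = lambda x: x + 0.5 * torch.randn_like(x)   # N(X, 0.25 I)
```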
The same arguments for the proof of convergence of Equation (1) can be applied to Equation (2). 3.4 RELATED WORK Our work is inspired by that of Bengio et al. (2013), in which denoising autoencoders are cast into a probabilistic framework, where \( P_\theta(X|\tilde{X}) \) is the denoising (decoder) distribution and \( C(\tilde{X}|X) \) is the corruption (encoding) distribution. \( \tilde{X} \) represents the space of corrupted samples. Bengio et al. (2013) define a transition operator of a Markov chain – using these conditional distributions – whose stationary distribution is \( P(X) \), under the assumption that \( P_\theta(X|\tilde{X}) \) perfectly denoises samples. The chain is initialised with samples from the training data, and used to generate a chain of samples from \( P(X) \). This work was generalised to include a corruption process that maps data samples to latent variables (Bengio et al., 2014), creating a new type of network called the Generative Stochastic Network (GSN). However, in GSNs (Bengio et al., 2014) the latent space is not regularised with a prior. Our work is similar to several approaches proposed by Bengio et al. (2013; 2014) and Rezende et al. (2014). Both Bengio et al. and Rezende et al. define a transition operator in terms of \( X_t \) and \( X_{t-1} \). Bengio et al. generate samples with an initial \( X_0 \) drawn from the observed data, while Rezende et al. reconstruct samples from an \( X_0 \) which is a corrupted version of a data sample. In contrast to Bengio et al. and Rezende et al., in this work we define the transition operator in terms of \( Z_{t+1} \) and \( Z_t \), initialise samples with a \( Z_0 \) that is drawn from a prior distribution we can directly sample from, and then sample \( X_1 \) conditioned on \( Z_0 \). Although the initial samples may be poor, we are likely to generate a novel \( X_1 \) on the first step of MCMC sampling, which would not be achieved using Bengio et al.’s or Rezende et al.’s approach. We are able to draw the initial \( Z_0 \) from a prior because we constrain \( \hat{P}(Z) \) to be close to a prior distribution \( P(Z) \); in Bengio et al. a latent space is either not explicitly modeled (Bengio et al., 2013) or it is not constrained (Bengio et al., 2014). Further, Rezende et al. (2014) explicitly assume that the distribution of latent samples drawn from \( Q_\phi(Z|X) \) matches the prior, \( P(Z) \). Instead, we assume that samples drawn from \( Q_\phi(Z|X) \) have a distribution \( \hat{P}(Z) \) that does not necessarily match the prior, \( P(Z) \). We propose an alternative method for sampling \( \hat{P}(Z) \) in order to improve the quality of generated image samples. Our motivation is also different to that of Rezende et al. (2014), since we use sampling to generate improved, novel data samples, while they use sampling to denoise corrupted samples.
This suggests that MCMC sampling can improve samples from trained VAEs by walking them towards denser regions in \( \hat{P}(Z) \). Generally speaking, using the reverse KL divergence during training, \( D_{KL}[P(Z)\|Q_\phi(Z|X)] \), penalises the model \( Q_\phi(Z|X) \) if \( P(Z) \) produces samples that are outside of the support of \( \hat{P}(Z) \). By minimising this KL divergence, most samples in \( P(Z) \) will likely be in \( \hat{P}(Z) \) as well. AAEs, on the other hand, are regularised using the Jensen-Shannon (JS) divergence, given by \( \frac{1}{2}D_{KL}[P(Z)\|\frac{1}{2}(P(Z)+Q_\phi(Z|X))]+\frac{1}{2}D_{KL}[Q_\phi(Z|X)\|\frac{1}{2}(P(Z)+Q_\phi(Z|X))] \). Minimising this cost function attempts to find a compromise between the aforementioned extremes. However, this still suggests that some samples from \( P(Z) \) may lie outside \( \hat{P}(Z) \), and so we expect AAEs to also benefit from MCMC sampling. 4 EXPERIMENTS 4.1 MODELS We utilise the deep convolutional GAN (DCGAN) (Radford et al., 2015) as a basis for our autoencoder models. Although the recommendations from Radford et al. (2015) are for standard GAN architectures, we adopt them as sensible defaults for an autoencoder, with our encoder mimicking the DCGAN's discriminator, and our decoder mimicking the generator. The encoder uses strided convolutions rather than max-pooling, and the decoder uses fractionally-strided convolutions rather than a fixed upsampling. Each convolutional layer is succeeded by spatial batch normalisation (Ioffe & Szegedy, 2015) and ReLU nonlinearities, except for the top of the decoder, which utilises a sigmoid function to constrain the output values between 0 and 1. We minimise the cross-entropy between the original and reconstructed images. Although this results in blurry images in regions which are ambiguous, such as hair detail, we opt not to use extra loss functions that improve the visual quality of generations (Larsen et al., 2015; Dosovitskiy & Brox, 2016; Lamb et al., 2016), to avoid confounding our results. Although the AAE is capable of approximating complex probabilistic posteriors (Makhzani et al., 2015), we construct ours to output a deterministic \( Q_\phi(Z|X) \). As such, the final layer of the encoder part of our AAEs is a convolutional layer that deterministically outputs a latent sample, \( \mathbf{z} \). The adversary is a fully-connected network with dropout and leaky ReLU nonlinearities. The \( e_{rep}(X;\phi) \) of our VAEs has an output of twice the size, which corresponds to the means, \( \mu \), and standard deviations, \( \sigma \), of a diagonal covariance Gaussian distribution. For all models our prior, \( P(Z) \), is a 200D isotropic Gaussian with zero mean and unit variance: \( \mathcal{N}(\mathbf{0},\mathbf{I}) \). 4.2 DATASETS Our primary dataset is the (aligned and cropped) CelebA dataset, which consists of 200,000 images of celebrities (Liu et al., 2015). The DCGAN (Radford et al., 2015) was the first generative neural network model to show convincing novel samples from this dataset, and it has been used ever since as a qualitative benchmark due to the amount and quality of its samples. In Figures 7 and 8 of the supplementary material, we also include results on the SVHN dataset, which consists of 100,000 images of house numbers extracted from Google Street View images (Netzer et al., 2011). 4.3 TRAINING & EVALUATION For all datasets we perform the same preprocessing: cropping the centre to create a square image, then resizing to \( 64 \times 64 \)px.
We train our generative autoencoders for 20 epochs on the training split of the datasets, using Adam (Kingma & Ba, 2014) with \( \alpha = 0.0002, \beta_1 = 0.5 \) and \( \beta_2 = 0.999 \). The denoising generative autoencoders use the additive Gaussian noise mapping \( C(\tilde{X}|X) = \mathcal{N}(X,0.25\mathbf{I}) \). All of our experiments were run using the Torch library (Collobert et al., 2011); example code is available at https://github.com/Kaixhin/Autoencoders. For evaluation, we generate novel samples from the decoder using \( \mathbf{z} \) initially sampled from \( P(Z) \); we also show spherical interpolations (White, 2016) between four images of the testing split, as depicted in Figure 2. We then perform several steps of MCMC sampling on the novel samples and interpolations. During this process, we use the training mode of batch normalisation (Ioffe & Szegedy, 2015), i.e., we normalise the inputs using minibatch rather than population statistics, as the normalisation can partially compensate for poor initial inputs (see Figure 4) that are far from the training distribution. We compare novel samples between all models below, and leave further interpolation results to Figures 5 and 6 of the supplementary material. 4.4 SAMPLES [Figure 4 panels (a)–(p): samples from a VAE (a–d), DVAE (e–h), AAE (i–l) and DAAE (m–p), shown initially and after 1, 5 and 10 steps of MCMC sampling.] Figure 4: Samples from a VAE (a-d), DVAE (e-h), AAE (i-l) and DAAE (m-p) trained on the CelebA dataset. (a), (e), (i) and (m) show initial samples conditioned on \( \mathbf{z} \sim P(Z) \), which mainly result in recognisable faces emerging from noisy backgrounds. After 1 step of MCMC sampling, the more unrealistic generations change noticeably, and continue to do so with further steps. On the other hand, realistic generations, i.e. samples from a region with high probability, do not change as much. The adversarial criterion for deterministic AAEs is difficult to optimise when the dimensionality of \( Z \) is high. We observe that during training of our AAEs and DAAEs, the empirical standard deviation of \( \mathbf{z} \sim Q_\phi(Z|X) \) is less than 1, which means that \( \hat{P}(Z) \) fails to approximate \( P(Z) \) as closely as was achieved with the VAE and DVAE. However, this means that the effect of MCMC sampling is more pronounced, with the quality of all samples noticeably improving after a few steps. As a side-effect of the suboptimal solution learned by the networks, the denoising properties of the DAAE are more noticeable with the novel samples. 5 CONCLUSION Autoencoders consist of a decoder function, \( d(Z; \theta) \), and an encoder function, \( e(X; \phi) \), where \( \phi \) and \( \theta \) are learned parameters. The functions \( e(X; \phi) \) and \( d(Z; \theta) \) may be used to draw samples from the conditional distributions \( Q_\phi(Z|X) \) and \( P_\theta(X|Z) \) (Bengio et al., 2013; 2014; Rezende et al., 2014), where \( X \) refers to the space of observed samples and \( Z \) refers to the space of latent samples. The encoder distribution, \( Q_\phi(Z|X) \), maps data samples from the data generating distribution, \( P(X) \), to a latent distribution, \( P(Z) \). The decoder distribution, \( P_\theta(X|Z) \), maps samples from \( P(Z) \) to \( P(X) \).
We are concerned with *generative autoencoders*, which we define to be a family of autoencoders where regularisation is used during training to encourage \( \hat{P}(Z) \) to be close to a known prior \( P(Z) \). Commonly it is assumed that \( \hat{P}(Z) \) and \( P(Z) \) are similar, such that samples from \( P(Z) \) may be used to sample a decoder \( P_\theta(X|Z) \); we do not make the assumption that \( \hat{P}(Z) \) and \( P(Z) \) are “sufficiently close” (Rezende et al., 2014). Instead, we derive an MCMC process whose stationary distribution is \( \hat{P}(Z) \), allowing us to directly draw samples from \( \hat{P}(Z) \). By conditioning on samples from \( \hat{P}(Z) \), samples \( x \sim P_\theta(X|Z) \) are more consistent with the training data. In our experiments, we compare samples \( x \sim P_\theta(X|Z = z_0), z_0 \sim P(Z) \) to \( x \sim P_\theta(X|Z = z_i) \) for \( i \in \{1, 5, 10\} \), where the \( z_i \)'s are obtained through MCMC sampling, to show that MCMC sampling improves initially poor samples (see Figure 4). We also show that artifacts in \( x \) samples induced by interpolations across the latent space can be corrected by MCMC sampling (see Figure 2). We further validate our work by showing that the denoising properties of denoising generative autoencoders are best revealed by the use of MCMC sampling. Our MCMC sampling process is straightforward, and can be applied easily to existing generative autoencoders. This technique is orthogonal to the use of more powerful posteriors in AAEs (Makhzani et al., 2015) and VAEs (Kingma et al., 2016), and the combination of both could result in further improvements in generative modeling. Finally, our basic MCMC process opens the door to applying a large existing body of research on sampling methods to generative autoencoders. ACKNOWLEDGEMENTS We would like to acknowledge the EPSRC for funding through a Doctoral Training studentship and the support of the EPSRC CDT in Neurotechnology. REFERENCES Yoshua Bengio. Learning deep architectures for AI. *Foundations and Trends in Machine Learning*, 2(1): 1–127, 2009. Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Generalized denoising auto-encoders as generative models. In *Advances in Neural Information Processing Systems*, pp. 899–907, 2013. Yoshua Bengio, Eric Thibodeau-Laufer, Guillaume Alain, and Jason Yosinski. Deep generative stochastic networks trainable by backprop. In *Journal of Machine Learning Research: Proceedings of the 31st International Conference on Machine Learning*, volume 32, 2014. Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A Matlab-like environment for machine learning. In *BigLearn, NIPS Workshop*, number EPFL-CONF-192376, 2011. Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. *arXiv preprint arXiv:1602.02644*, 2016. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In *Advances in Neural Information Processing Systems*, pp. 2672–2680, 2014. Daniel Jiwoong Im, Sungjin Ahn, Roland Memisevic, and Yoshua Bengio. Denoising criterion for variational auto-encoding framework. *arXiv preprint arXiv:1511.06406*, 2015. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In *Proceedings of the 32nd International Conference on Machine Learning (ICML-15)*, pp. 448–456, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 2015 International Conference on Learning Representations (ICLR-2015), arXiv preprint arXiv:1412.6980, 2014. URL https://arxiv.org/pdf/1412.6980v8.pdf Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the 2015 International Conference on Learning Representations (ICLR-2015), arXiv preprint arXiv:1312.6114, 2014. URL https://arxiv.org/abs/1312.6114 Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016. Alex Lamb, Vincent Dumoulin, and Aaron Courville. Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220, 2016. Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In Proceedings of the 33rd International Conference on Machine Learning, arXiv preprint arXiv:1512.09300, pp. 1558–1566, 2015. URL http://jmlr.org/proceedings/papers/v48/larsen16.pdf Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730–3738, 2015. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. URL https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37648.pdf Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR) 2016, arXiv preprint arXiv:1511.06434, 2015. URL https://arxiv.org/pdf/1511.06434.pdf Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, arXiv preprint arXiv:1401.4082, 2014. URL https://arxiv.org/pdf/1401.4082.pdf Jeffrey S Rosenthal. A review of asymptotic convergence for general state space Markov chains. Far East J. Theor. Stat, 5(1):37–50, 2001. H Sebastian Seung. Learning continuous attractors in recurrent networks. In NIPS Proceedings, volume 97, pp. 654–660, 1997. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103. ACM, 2008. Tom White. Sampling generative networks: Notes on a few effective techniques. arXiv preprint arXiv:1609.04468, 2016. Supplementary Material A PROOF THAT \( T(Z_{t+1}|Z_t) > 0 \) For \( P_\theta(X|Z) > 0 \) we require that all possible \( x \in X \subseteq \mathbb{R}^a \) may be generated by the network. Assuming that the model \( P_\theta(X|Z) \) is trained using a sufficient number of training samples, \( x \in X_{train} = X \), and that the model has infinite capacity to model \( X_{train} = X \), we should be able to draw any sample \( x \in X_{train} = X \) from \( P_\theta(X|Z) \). In reality \( X_{train} \subseteq X \) and it is not possible to have a model with infinite capacity.
However, \( P_\theta(X|Z) \) is modeled using a deep neural network, which we assume has sufficient capacity to capture the training data well. Further, deep neural networks are able to interpolate between samples in very high dimensional spaces (Radford et al., 2015); we therefore further assume that if we have a large number of training samples (as well as large model capacity), almost any \( x \in X \) can be drawn from \( P_\theta(X|Z) \). Note that if we wish to generate human faces, we define \( X_{all} \) to be the space of all possible faces, with distribution \( P(X_{all}) \), while \( X_{train} \) is the space of faces made up by the training data. Then, practically, even a well trained model which learns to interpolate well only captures an \( X \), with distribution \( \int P_\theta(X|Z)P(Z)dZ \), where \( X_{train} \subseteq X \subseteq X_{all} \), because \( X \) additionally contains examples of interpolated versions of \( x \sim P(X_{train}) \). For \( Q_\phi(Z|X) > 0 \) it must be possible to generate all possible \( z \in Z \subseteq \mathbb{R}^b \). \( Q_\phi(Z|X) \) is described by the function \( e(\cdot; \phi) : X \to Z \). To ensure that \( Q_\phi(Z|X) > 0 \), we want to show that the function \( e(X; \phi) \) allows us to represent all samples of \( z \in Z \). VAEs and AAEs each construct \( e(X; \phi) \) to produce \( z \in Z \) in different ways. The output of the encoder of a VAE, \( e_{VAE}(X; \phi) \), is \( z = \mu + \epsilon \odot \sigma \), where \( \epsilon \sim \mathcal{N}(0, I) \). The output of the VAE encoder is then always Gaussian, and hence there is no limitation on the \( z \)'s that \( e_{VAE}(X; \phi) \) can produce. This ensures that \( Q_\phi(Z|X) > 0 \), provided that \( \sigma \neq 0 \). The encoder of our AAE, \( e_{AAE}(X; \phi) \), is a deep neural network consisting of multiple convolutional and batch normalisation layers. The final layer of the \( e_{AAE}(X; \phi) \) is a fully connected layer without an activation function. The input to each of the \( M \) nodes in the fully connected layer is a function \( f_{i=1...M}(x) \). This means that \( z \) is given by: \( z = a_1 f_1(x) + a_2 f_2(x) + ... + a_M f_M(x) \), where \( a_{i=1...M} \) are the learned weights of the fully connected layer. We now consider three cases: Case 1: If \( a_i \) are a complete set of bases for \( Z \) then it is possible to generate any \( z \in Z \) from an \( x \in X \) with a one-to-one mapping, provided that \( f_i(x) \) is not restricted in the values that it can take. Case 2: If \( a_i \) are an overcomplete set of bases for \( Z \), then the same holds, provided that \( f_i(x) \) is not restricted in the values that it can take. Case 3: If \( a_i \) are an undercomplete set of bases for \( Z \) then it is not possible to generate all \( z \in Z \) from \( x \in X \). Instead, there is a many-to-one mapping from \( X \) to \( Z \). For \( Q_\phi(Z|X) > 0 \) our network must learn a complete or overcomplete set of bases, and \( f_i(x) \) must not be restricted in the values that it can take \( \forall i \). The network is encouraged to learn an overcomplete set of bases by learning a large number of \( a_i \)'s—specifically \( M = 8192 \) when basing our network on the DCGAN architecture (Radford et al., 2015)—more than 40 times the dimensionality of \( Z \). By using batch normalisation layers throughout the network, we ensure that values of \( f_i(x) \) are spread out, capturing a close-to-Gaussian distribution (Ioffe & Szegedy, 2015), encouraging infinite support.
We have now shown that, under certain reasonable assumptions, \( P_\theta(X|Z) > 0 \) and \( Q_\phi(Z|X) > 0 \), which means that \( T(Z_{t+1}|Z_t) > 0 \), and hence we can get from any \( z \) to any other \( z \) in only one step. Therefore the Markov chain described by the transition operator \( T(Z_{t+1}|Z_t) \) defined in Equation (1) is both irreducible and aperiodic, which are the necessary conditions for ergodicity. B CELEBA B.1 INTERPOLATIONS [Figure 5 panels (a)–(d): two DVAE interpolations, initial and after 5 steps of MCMC sampling.] Figure 5: Interpolating between two faces using (a-d) a DVAE. The top rows (a, c) for each face are the original interpolations, whilst the second rows (b, d) are the result of 5 steps of MCMC sampling applied to the latent samples that were used to generate the original interpolation. The only qualitative difference when compared to VAEs (see Figure 2) is a desaturation of the generated images. [Figure 6 panels (a)–(h): two AAE and two DAAE interpolations, initial and after 5 steps of MCMC sampling.] Figure 6: Interpolating between two faces using (a-d) an AAE and (e-h) a DAAE. The top rows (a, c, e, g) for each face are the original interpolations, whilst the second rows (b, d, f, h) are the result of 5 steps of MCMC sampling applied to the latent samples that were used to generate the original interpolation. Although the AAE performs poorly (b, d), the regularisation effect of denoising can be clearly seen with the DAAE after applying MCMC sampling (f, h). C STREET VIEW HOUSE NUMBERS C.1 SAMPLES [Figure 7 panels (a)–(p): samples from a VAE (a–d), DVAE (e–h), AAE (i–l) and DAAE (m–p), shown initially and after 1, 5 and 10 steps of MCMC sampling.] Figure 7: Samples from a VAE (a-d), DVAE (e-h), AAE (i-l) and DAAE (m-p) trained on the SVHN dataset. The samples from the models imitate the blurriness present in the dataset. Although very few numbers are visible in the initial samples, the VAE and DVAE produce recognisable numbers from most of the initial samples after a few steps of MCMC sampling. Although the AAE and DAAE fail to produce recognisable numbers, the final samples are still a clear improvement over the initial samples. C.2 INTERPOLATIONS [Figure 8 panels (a)–(h): two VAE and two DVAE interpolations between house numbers, initial and after 5 steps of MCMC sampling.] Figure 8: Interpolating between Google Street View house numbers using (a-d) a VAE and (e-h) a DVAE. The top rows (a, c, e, g) for each house number are the original interpolations, whilst the second rows (b, d, f, h) are the result of 5 steps of MCMC sampling. If the original interpolation produces symbols that do not resemble numbers, as observed in (a) and (e), the models will attempt to move the samples towards more realistic numbers (b, f). Interpolation between 1- and 2-digit numbers in an image (c, g) results in a meaningless blur in the middle of the interpolation. After a few steps of MCMC sampling the models instead produce more recognisable 1- or 2-digit numbers (d, h). We note that when the contrast is poor, denoising models in particular can struggle to recover meaningful images (h).
ABSTRACT We focus on generative autoencoders, such as variational or adversarial autoencoders, which jointly learn a generative model alongside an inference model. Generative autoencoders are those which are trained to softly enforce a prior on the latent distribution learned by the inference model. We call the distribution to which the inference model maps observed samples the *learned latent distribution*; it may not be consistent with the prior. We formulate a Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively decoding and encoding, which allows us to sample from the learned latent distribution. Since the generative model learns to map from the *learned* latent distribution, rather than the prior, we may use MCMC to improve the quality of samples drawn from the *generative* model, especially when the learned latent distribution is far from the prior. Using MCMC sampling, we are able to reveal previously unseen differences between generative autoencoders trained either with or without a denoising criterion. 1 INTRODUCTION Unsupervised learning has benefited greatly from the introduction of deep generative models. In particular, the introduction of generative adversarial networks (GANs) (Goodfellow et al., 2014) and variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) has led to a plethora of research into learning latent variable models that are capable of generating data from complex distributions, including the space of natural images (Radford et al., 2015). Both of these models, and their extensions, operate by placing a prior distribution, \( P(Z) \), over a latent space \( Z \subseteq \mathbb{R}^b \), and learn mappings from the latent space, \( Z \), to the space of the observed data, \( X \subseteq \mathbb{R}^a \). We are interested in autoencoding generative models, which learn not just the generative mapping \( Z \mapsto X \), but also the inferential mapping \( X \mapsto Z \). Specifically, we define *generative autoencoders* as autoencoders which softly constrain their latent distribution to match a specified prior distribution, \( P(Z) \). This is achieved by minimising a loss, \( \mathcal{L}_{\text{prior}} \), between the latent distribution and the prior. This includes VAEs (Kingma & Welling, 2014; Rezende et al., 2014), extensions of VAEs (Kingma et al., 2016), and also adversarial autoencoders (AAEs) (Makhzani et al., 2015). Whilst other autoencoders also learn an encoding function, \( e : \mathbb{R}^a \to Z \), together with a decoding function, \( d : \mathbb{R}^b \to X \), their latent space is not necessarily constrained to conform to a specified probability distribution. This is the key distinction for generative autoencoders; both \( e \) and \( d \) can still be deterministic functions (Makhzani et al., 2015). The functions \( e \) and \( d \) are defined for any input from \( \mathbb{R}^a \) and \( \mathbb{R}^b \) respectively; however, the outputs of the functions may be constrained practically by the type of functions that \( e \) and \( d \) are, such that \( e \) maps to \( Z \subseteq \mathbb{R}^b \) and \( d \) maps to \( X \subseteq \mathbb{R}^a \). During training, however, the encoder \( e \) is only fed training data samples, \( x \in X \), and the decoder \( d \) is only fed samples from the encoder, \( z \in Z \), and so the encoder and decoder learn mappings between \( X \) and \( Z \).
The process of encoding and decoding may be interpreted as sampling the conditional probabilities \( Q_\phi(Z|X) \) and \( P_\theta(X|Z) \) respectively. The conditional distributions may be sampled using the encoding and decoding functions \( e(X;\phi) \) and \( d(Z;\theta) \), where \( \phi \) and \( \theta \) are learned parameters of the encoding and decoding functions respectively. The decoder of a generative autoencoder may be used to generate new samples that are consistent with the data. There are two traditional approaches for sampling generative autoencoders: Approach 1 (Bengio et al., 2014): \[ \mathbf{x}_0 \sim P(X), \quad \mathbf{z}_0 \sim Q_\phi(Z|X = \mathbf{x}_0), \quad \mathbf{x}_1 \sim P_\theta(X|Z = \mathbf{z}_0) \] where \(P(X)\) is the data generating distribution. However, this approach is likely to generate samples similar to those in the training data, rather than generating novel samples that are consistent with the training data. Approach 2 (Kingma & Welling, 2014; Makhzani et al., 2015; Rezende et al., 2014): \[ \mathbf{z}_0 \sim P(Z), \quad \mathbf{x}_0 \sim P_\theta(X|Z = \mathbf{z}_0) \] where \(P(Z)\) is the prior distribution enforced during training and \(P_\theta(X|Z)\) is the decoder trained to map samples drawn from \(Q_\phi(Z|X)\) to samples consistent with \(P(X)\). This approach assumes that \( \int Q_\phi(Z|X)P(X)dX = P(Z) \), i.e. that the encoder maps all data samples from \(P(X)\) to a distribution that matches the prior distribution, \(P(Z)\). However, it is not always true that \( \int Q_\phi(Z|X)P(X)dX = P(Z) \). Rather, \(Q_\phi(Z|X)\) maps data samples to a distribution which we call \( \hat{P}(Z) \): \[ \int Q_\phi(Z|X)P(X)dX = \hat{P}(Z) \] where it is not necessarily true that \( \hat{P}(Z) = P(Z) \), because the prior is only softly enforced. The decoder, on the other hand, is trained to map encoded data samples (i.e. samples from \( \int Q_\phi(Z|X)P(X)dX \)) to samples from \(X\) which have the distribution \(P(X)\). If the encoder maps observed samples to latent samples with the distribution \( \hat{P}(Z) \), rather than the desired prior distribution, \(P(Z)\), then: \[ \int P_\theta(X|Z)P(Z)dZ \neq P(X) \] This suggests that samples drawn from the decoder, \(P_\theta(X|Z)\), conditioned on samples drawn from the prior, \(P(Z)\), may not be consistent with the data generating distribution, \(P(X)\). However, by conditioning on \( \hat{P}(Z) \): \[ \int P_\theta(X|Z)\hat{P}(Z)dZ = P(X) \] This suggests that to obtain more realistic generations, latent samples should be drawn via \( \mathbf{z} \sim \hat{P}(Z) \) rather than \( \mathbf{z} \sim P(Z) \), followed by \( \mathbf{x} \sim P_\theta(X|Z) \). A limited number of latent samples may be drawn from \( \hat{P}(Z) \) using the first two steps of Approach 1; however, this has the drawbacks discussed above. We introduce an alternative method for sampling from \( \hat{P}(Z) \) which does not have the same drawbacks. Our main contribution is the formulation of a Markov chain Monte Carlo (MCMC) sampling process for generative autoencoders, which allows us to sample from \( \hat{P}(Z) \). By iteratively sampling the chain, starting from an arbitrary \( \mathbf{z}_{t=0} \in \mathbb{R}^b \), the chain converges to \( \mathbf{z}_{t \to \infty} \sim \hat{P}(Z) \), allowing us to draw latent samples from \( \hat{P}(Z) \) after several steps of MCMC sampling.
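A minimal sketch of these two procedures, assuming hypothetical callables `encode` and `decode` that sample \( Q_\phi(Z|X) \) and \( P_\theta(X|Z) \) respectively, plus a data loader for the training set:

```python
# Sketch of the two traditional sampling procedures described above.
# `encode`, `decode` and `train_loader` are assumed to exist.
import torch

def sample_approach_1(encode, decode, train_loader):
    """x0 ~ P(X), z0 ~ Q(Z|x0), x1 ~ P(X|z0): tends to stay near training data."""
    x0 = next(iter(train_loader))  # assumes the loader yields image batches directly
    z0 = encode(x0)
    return decode(z0)

def sample_approach_2(decode, n=64, latent_dim=200):
    """z0 ~ P(Z), x0 ~ P(X|z0): implicitly assumes P^(Z) = P(Z)."""
    z0 = torch.randn(n, latent_dim)  # the prior is a 200-D unit Gaussian
    return decode(z0)
```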
From a practical perspective, this is achieved by iteratively decoding and encoding, which may be easily applied to existing generative autoencoders. Because \( \hat{P}(Z) \) is optimised to be close to \( P(Z) \), the initial sample \( \mathbf{z}_{t=0} \) can be drawn from \( P(Z) \), improving the quality of the samples within a few iterations. When interpolating between latent encodings, there is no guarantee that \( \mathbf{z} \) stays within high density regions of \( \hat{P}(Z) \). Previously, this has been addressed by using spherical, rather than linear, interpolation of the high-dimensional \( Z \) space (White, 2016). However, this approach attempts to keep \( \mathbf{z} \) within \( P(Z) \), rather than trying to sample from \( \hat{P}(Z) \). By instead applying several steps of MCMC sampling to the interpolated \( \mathbf{z} \) samples before sampling \( P_\theta(X|Z) \), unrealistic artifacts can be reduced (see Figure 2). Whilst most methods that aim to generate realistic samples from \( X \) rely on adjusting encodings of the observed data (White, 2016), our use of MCMC allows us to walk any latent sample to more probable regions of the learned latent distribution, resulting in more convincing generations. We demonstrate that the use of MCMC sampling improves generations from both VAEs and AAEs with high-dimensional \( Z \); this is important as previous studies have shown that the dimensionality of \( Z \) should be scaled with the intrinsic latent dimensionality of the observed data. Figure 1: \( P(X) \) is the data generating distribution. We may access some samples from \( P(X) \) by drawing samples from the training data. \( Q_\phi(Z|X) \) is the conditional distribution, modeled by an encoder, which maps samples from \( \mathbb{R}^a \) to samples in \( \mathbb{R}^b \). An ideal encoder maps samples from \( P(X) \) to a known, prior distribution \( P(Z) \); in reality the encoder maps samples from \( P(X) \) to an unknown distribution \( \hat{P}(Z) \). \( P_\theta(X|Z) \) is a conditional distribution, modeled by a decoder, which maps samples from \( \mathbb{R}^b \) to \( \mathbb{R}^a \). During training the decoder learns to map samples drawn from \( \hat{P}(Z) \) to \( P(X) \) rather than samples drawn from \( P(Z) \), because the decoder only sees samples from \( \hat{P}(Z) \). Regularisation on the latent space only encourages \( \hat{P}(Z) \) to be close to \( P(Z) \). Note that if \( \mathcal{L}_{prior} \) is optimal, then \( \hat{P}(Z) \) overlaps fully with \( P(Z) \). [Figure 2 panels (a)–(d): two VAE interpolations, initial and after 5 steps of MCMC sampling.] Figure 2: **Prior work:** Spherically interpolating (White, 2016) between two faces using a VAE (a, c). In (a), the attempt to gradually generate sunglasses results in visual artifacts around the eyes. In (c), the model fails to properly capture the desired change in orientation of the face, resulting in three partial faces in the middle of the interpolation. **This work:** (b) and (d) are the result of 5 steps of MCMC sampling applied to the latent samples that were used to generate the original interpolations, (a) and (c). In (b), the discolouration around the eyes disappears, with the model settling on either generating or not generating glasses. In (d), the model moves away from multiple faces in the interpolation by producing new faces with appropriate orientations.
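For reference, a sketch of the spherical interpolation of White (2016) that produces the initial latent paths which MCMC sampling then refines; this is the standard slerp formula, and the function name is ours:

```python
# Spherical interpolation ("slerp") between two latent codes, per White (2016).
import numpy as np

def slerp(z1, z2, t):
    """Interpolate between latent vectors z1 and z2 at t in [0, 1]."""
    omega = np.arccos(np.clip(
        np.dot(z1 / np.linalg.norm(z1), z2 / np.linalg.norm(z2)), -1.0, 1.0))
    if np.isclose(omega, 0.0):  # fall back to linear for (near-)parallel codes
        return (1.0 - t) * z1 + t * z2
    return (np.sin((1.0 - t) * omega) * z1 + np.sin(t * omega) * z2) / np.sin(omega)

# path = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 10)]
```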
Our second contribution is the modification of the proposed transition operator for the MCMC sampling process to denoising generative autoencoders, i.e. generative autoencoders trained using a denoising criterion (Seung, 1997; Vincent et al., 2008). We reformulate our original MCMC sampling process to incorporate the noising and denoising processes, allowing us to use MCMC sampling on denoising generative autoencoders. We apply this sampling technique to two models. The first is the denoising VAE (DVAE) introduced by Im et al. (2015). We found that MCMC sampling revealed benefits of the denoising criterion. The second model is a denoising AAE (DAAE), constructed by applying the denoising criterion to the AAE; there were no modifications to the cost function. For both the DVAE and the DAAE, the effects of the denoising criterion were not immediately obvious from the initial samples. Training generative autoencoders with a denoising criterion reduced visual artefacts found both in generations and in interpolations. The effect of the denoising criterion was revealed when sampling the denoising models using MCMC sampling. 2 BACKGROUND One of the main tasks in machine learning is to learn explanatory factors for observed data, commonly known as inference. That is, given a data sample \( x \in X \subseteq \mathbb{R}^a \), we would like to find a corresponding latent encoding \( z \in Z \subseteq \mathbb{R}^b \). Another task is to learn the inverse, generative mapping from a given \( z \) to a corresponding \( x \). In general, coming up with a suitable criterion for learning these mappings is difficult. Autoencoders solve both tasks efficiently by jointly learning an inferential mapping \( e(X; \phi) \) and a generative mapping \( d(Z; \theta) \), using unlabelled data from \( X \) in a self-supervised fashion (Kingma & Welling, 2014). The basic objective of all autoencoders is to minimise a reconstruction cost, \( \mathcal{L}_{reconstruct} \), between the original data, \( X \), and its reconstruction, \( d(e(x_n; \phi); \theta) \). Examples of \( \mathcal{L}_{reconstruct} \) include the squared error loss, \( \frac{1}{2} \sum_{n=1}^N \|d(e(x_n; \phi); \theta) - x_n\|^2 \), and the cross-entropy loss, \( \mathcal{H}[P(X)\|P(d(e(X; \phi); \theta))] = -\sum_{n=1}^N \left[ x_n \log(d(e(x_n; \phi); \theta)) + (1-x_n) \log(1-d(e(x_n; \phi); \theta)) \right] \). Autoencoders may be cast into a probabilistic framework by considering samples \( x \sim P(X) \) and \( z \sim P(Z) \), and attempting to learn the conditional distributions \( Q_\phi(Z|X) \) and \( P_\theta(X|Z) \) as \( e(X; \phi) \) and \( d(Z; \theta) \) respectively, with \( \mathcal{L}_{reconstruct} \) representing the negative log-likelihood of the reconstruction given the encoding (Bengio, 2009). With any autoencoder, it is possible to create novel \( x \in X \) by passing a \( z \in Z \) through \( d(Z; \theta) \), but we have no knowledge of appropriate choices of \( z \) beyond those obtained via \( e(X; \phi) \). One solution is to constrain the latent space to which the encoding model maps observed samples. This can be achieved by an additional loss, \( \mathcal{L}_{prior} \), that penalises encodings far away from a specified prior distribution, \( P(Z) \). We now review two types of generative autoencoders, VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and AAEs (Makhzani et al., 2015), which each take different approaches to formulating \( \mathcal{L}_{prior} \).
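The two reconstruction criteria quoted above can be written directly; a minimal sketch for a batch of images with values in [0, 1] (the function names are ours):

```python
# The squared-error and cross-entropy reconstruction losses from Section 2,
# for reconstructions r = d(e(x)) of a batch of images x in [0, 1].
import torch

def squared_error(r, x):
    return 0.5 * ((r - x) ** 2).sum()

def cross_entropy(r, x, eps=1e-7):
    r = r.clamp(eps, 1.0 - eps)  # avoid log(0)
    return -(x * torch.log(r) + (1.0 - x) * torch.log(1.0 - r)).sum()
```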
2.1 GENERATIVE AUTOENCODERS Consider the case where \( e \) is constructed with stochastic neurons that can produce outputs from a specified probability distribution, and \( \mathcal{L}_{prior} \) is used to constrain the distribution of outputs to \( P(Z) \). This leaves the problem of estimating the gradient of the autoencoder over the expectation \( \mathbb{E}_{Q_\phi(Z|X)} \), which would typically be addressed with a Monte Carlo method. VAEs sidestep this by constructing latent samples using a deterministic function and a source of noise, moving the source of stochasticity to an input and leaving the network itself deterministic for standard gradient calculations, a technique commonly known as the reparameterisation trick (Kingma & Welling, 2014). \( e(X; \phi) \) then consists of a deterministic function, \( e_{rep}(X; \phi) \), that outputs parameters for a probability distribution, plus a source of noise. In the case where \( P(Z) \) is a diagonal covariance Gaussian, \( e_{rep}(X; \phi) \) maps \( \mathbf{x} \) to a vector of means, \( \mu \in \mathbb{R}^b \), and a vector of standard deviations, \( \sigma \in \mathbb{R}_+^b \), with the noise \( \epsilon \sim \mathcal{N}(0, I) \). Put together, the encoder outputs samples \( \mathbf{z} = \mu + \epsilon \odot \sigma \), where \( \odot \) is the Hadamard product. VAEs attempt to make these samples from the encoder match up with \( P(Z) \) by using the KL divergence between the parameters for a probability distribution outputted by \( e_{rep}(X; \phi) \) and the parameters for the prior distribution, giving \( \mathcal{L}_{prior} = D_{KL}[Q_\phi(Z|X)\|P(Z)] \). A multivariate Gaussian has an analytical KL divergence that can be further simplified when considering the unit Gaussian, resulting in \( \mathcal{L}_{prior} = \frac{1}{2} \sum_{n=1}^N \left( \mu^2 + \sigma^2 - \log(\sigma^2) - 1 \right) \). Figure 3: Reconstructions of faces from a DVAE trained with additive Gaussian noise: \( C(\tilde{X}|X) = \mathcal{N}(X, 0.25\mathbf{I}) \). The model successfully recovers much of the detail from the noise-corrupted images. Another approach is to deterministically output the encodings \( \mathbf{z} \). Rather than minimising a metric between probability distributions using their parameters, we can turn this into a density ratio estimation problem where the goal is to learn a conditional distribution, \( Q_\phi(Z|X) \), such that the distribution of the encoded data samples, \( \hat{P}(Z) = \int Q_\phi(Z|X)P(X)dX \), matches the prior distribution, \( P(Z) \). The GAN framework solves this density ratio estimation problem by transforming it into a class estimation problem using two networks (Goodfellow et al., 2014). The first network in GAN training is the discriminator network, \( D_\psi \), which is trained to maximise the log probability of samples from the “real” distribution, \( \mathbf{z} \sim P(Z) \), and minimise the log probability of samples from the “fake” distribution, \( \mathbf{z} \sim Q_\phi(Z|X) \). In our case \( e(X; \phi) \) plays the role of the second network, the generator network, \( G_\phi \), which generates the “fake” samples (we adapt the variables to better fit the conventions used in the context of autoencoders). The two networks compete in a minimax game, where \( G_\phi \) receives gradients from \( D_\psi \) such that it learns to better fool \( D_\psi \).
The training objective for both networks is \( \min_\phi \max_\psi \mathbb{E}_{P(Z)}[\log(D_\psi(Z))] + \mathbb{E}_{P(X)}[\log(1 - D_\psi(G_\phi(X)))] = \min_\phi \max_\psi \mathbb{E}_{P(Z)}[\log(D_\psi(Z))] + \mathbb{E}_{Q_\phi(Z|X)P(X)}[\log(1 - D_\psi(Z))] \). This formulation can create problems during training, so instead \( G_\phi \) is trained to minimise \( -\log(D_\psi(G_\phi(X))) \), which provides the same fixed point of the dynamics of \( G_\phi \) and \( D_\psi \). The result of applying the GAN framework to the encoder of an autoencoder is the deterministic AAE (Makhzani et al., 2015). 2.2 DENOISING AUTOENCODERS From a more general viewpoint, generative autoencoders fulfill the purpose of learning useful representations of the observed data. Another widely used class of autoencoders that achieve this are denoising autoencoders (DAEs), which are motivated by the idea that learned features should be robust to “partial destruction of the input” (Vincent et al., 2008). Not only does this require encoding the inputs, but also capturing the statistical dependencies between the inputs so that corrupted data can be recovered (see Figure 3). DAEs are presented with a corrupted version of the input, \( \tilde{\mathbf{x}} \in \tilde{X} \), but must still reconstruct the original input, \( \mathbf{x} \in X \), where the noisy inputs are created through sampling \( \tilde{\mathbf{x}} \sim C(\tilde{X}|X) \), a corruption process. The denoising criterion, \( \mathcal{L}_{denoise} \), can be applied to any type of autoencoder by replacing the straightforward reconstruction criterion, \( \mathcal{L}_{reconstruct}(X, d(e(X; \phi); \theta)) \), with the reconstruction criterion applied to noisy inputs: \( \mathcal{L}_{reconstruct}(X, d(e(\tilde{X}; \phi); \theta)) \). The encoder is now used to model samples drawn from \( Q_\phi(Z|\tilde{X}) \). As such, we can construct *denoising generative autoencoders* by training autoencoders to minimise \( \mathcal{L}_{denoise} + \mathcal{L}_{prior} \). One might expect to see differences in samples drawn from denoising generative autoencoders and their non-denoising counterparts. However, Figures 4 and 6 show that this is not the case. Im et al. (2015) address the case of DVAEs, claiming that the noise mapping requires adjusting the original VAE objective function. Our work is orthogonal to theirs, and to others which adjust the training or model (Kingma et al., 2016), as we focus purely on sampling from generative autoencoders after training. We claim that the existing practice of drawing samples from generative autoencoders conditioned on \( \mathbf{z} \sim P(Z) \) is suboptimal, and that the quality of samples can be improved by instead conditioning on \( \mathbf{z} \sim \hat{P}(Z) \) via MCMC sampling. 3 MARKOV SAMPLING We now consider the case of sampling from generative autoencoders, where \( d(Z; \theta) \) is used to draw samples from \( P_\theta(X|Z) \). In Section 1 we showed that it was important, when sampling \( P_\theta(X|Z) \), to condition on \( \mathbf{z} \)'s drawn from \( \hat{P}(Z) \), rather than \( P(Z) \) as is often done in practice.
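A sketch of the two \( \mathcal{L}_{prior} \) variants described in this section: the reparameterised VAE encoder with its analytic KL term, and the non-saturating adversarial objective for the AAE encoder. Here `d_psi` is a hypothetical discriminator returning probabilities; the function names are ours.

```python
# The VAE prior loss (with the reparameterisation trick) and the
# non-saturating AAE encoder loss from Section 2.1.
import torch

def vae_latent_and_kl(mu, log_var):
    """z = mu + eps * sigma with eps ~ N(0, I), plus the analytic KL term
    0.5 * sum(mu^2 + sigma^2 - log(sigma^2) - 1) against a unit Gaussian."""
    sigma = torch.exp(0.5 * log_var)
    z = mu + torch.randn_like(sigma) * sigma
    kl = 0.5 * (mu ** 2 + sigma ** 2 - log_var - 1.0).sum()
    return z, kl

def aae_encoder_loss(d_psi, z_fake):
    """Train the encoder G_phi to minimise -log(D_psi(G_phi(x)))."""
    return -torch.log(d_psi(z_fake)).sum()
```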
However, we now show that for any initial \( \mathbf{z}_0 \in Z_0 = \mathbb{R}^b \), Markov sampling can be used to produce a chain of samples \( \mathbf{z}_t \) such that, as \( t \to \infty \), the samples \( \mathbf{z}_t \) are drawn from the distribution \( \hat{P}(Z) \), and may be used to draw meaningful samples from \( P_\theta(X|Z) \) conditioned on \( \mathbf{z} \sim \hat{P}(Z) \). To speed up convergence we can initialise \( \mathbf{z}_0 \) from a distribution close to \( \hat{P}(Z) \), by drawing \( \mathbf{z}_0 \sim P(Z) \). 3.1 MARKOV SAMPLING PROCESS A generative autoencoder can be sampled by the following process: \[ \mathbf{z}_0 \in Z_0 = \mathbb{R}^b, \quad \mathbf{x}_{t+1} \sim P_\theta(X|Z_t), \quad \mathbf{z}_{t+1} \sim Q_\phi(Z|X_{t+1}) \] This allows us to define a Markov chain with the transition operator \[ T(Z_{t+1}|Z_t) = \int Q_\phi(Z_{t+1}|X)P_\theta(X|Z_t)dX \] for \( t \geq 0 \). Drawing samples according to the transition operator \( T(Z_{t+1}|Z_t) \) produces a Markov chain. For the transition operator to be homogeneous, the parameters of the encoding and decoding functions are fixed during sampling. 3.2 CONVERGENCE PROPERTIES We now show that the stationary distribution of sampling from the Markov chain is \( \hat{P}(Z) \). **Theorem 1.** *If \( T(Z_{t+1}|Z_t) \) defines an ergodic Markov chain, \( \{Z_1, Z_2, ..., Z_t\} \), then the chain will converge to a stationary distribution, \( \Pi(Z) \), from any arbitrary initial distribution. The stationary distribution is \( \Pi(Z) = \hat{P}(Z) \).* The proof of Theorem 1 can be found in (Rosenthal, 2001). **Lemma 1.** *\( T(Z_{t+1}|Z_t) \) defines an ergodic Markov chain.* *Proof.* For a Markov chain to be ergodic it must be both irreducible (it is possible to get from any state to any other state in a finite number of steps) and aperiodic (it is possible to get from any state to any other state without having to pass through a cycle). To satisfy these requirements, it is more than sufficient to show that \( T(Z_{t+1}|Z_t) > 0 \), since every \( \mathbf{z} \in Z \) would then be reachable from every other \( \mathbf{z} \in Z \) in a single step. We show that \( P_\theta(X|Z) > 0 \) and \( Q_\phi(Z|X) > 0 \), giving \( T(Z_{t+1}|Z_t) > 0 \); the proof is given in Section A of the supplementary material. \( \square \) **Lemma 2.** *The stationary distribution of the chain defined by \( T(Z_{t+1}|Z_t) \) is \( \Pi(Z) = \hat{P}(Z) \).* *Proof.* For the transition operator defined in Equation (1), the asymptotic distribution to which \( T(Z_{t+1}|Z_t) \) converges is \( \hat{P}(Z) \), because \( \hat{P}(Z) \) is, by definition, the marginal over \( X \) of the joint distribution \( Q_\phi(Z|X)P(X) \), from which the conditional distribution \( Q_\phi(Z|X) \) is learned via \( \mathcal{L}_{prior} \). \( \square \) Using Lemmas 1 and 2 with Theorem 1, we can say that the transition operator in Equation (1) produces a Markov chain that converges to the stationary distribution \( \Pi(Z) = \hat{P}(Z) \). 3.3 EXTENSION TO DENOISING GENERATIVE AUTOENCODERS A denoising generative autoencoder can be sampled by the following process: \[ \mathbf{z}_0 \in Z_0 = \mathbb{R}^b, \quad \mathbf{x}_{t+1} \sim P_\theta(X|Z_t), \quad \tilde{\mathbf{x}}_{t+1} \sim C(\tilde{X}|X_{t+1}), \quad \mathbf{z}_{t+1} \sim Q_\phi(Z|\tilde{X}_{t+1}). \] This allows us to define a Markov chain with the transition operator \[ T(Z_{t+1}|Z_t) = \int\int Q_\phi(Z_{t+1}|\tilde{X})C(\tilde{X}|X)P_\theta(X|Z_t)\,dX\,d\tilde{X} \] for \( t \geq 0 \).
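Putting Sections 3.1 and 3.3 together, the whole sampling process reduces to iterated decoding and encoding. Below is a sketch assuming hypothetical callables `encode` and `decode` for \( Q_\phi(Z|X) \) and \( P_\theta(X|Z) \); the noise level follows the \( C(\tilde{X}|X) = \mathcal{N}(X, 0.25\mathbf{I}) \) corruption used in Section 4.3.

```python
# MCMC sampling for (denoising) generative autoencoders, per Equations (1)-(2).
import torch

def mcmc_sample(encode, decode, z0, steps=5, denoising=False, noise_std=0.5):
    """Walk z0 (e.g. drawn from the prior P(Z)) towards P^(Z), then decode."""
    z = z0
    for _ in range(steps):
        x = decode(z)                                # x_{t+1} ~ P_theta(X|Z_t)
        if denoising:
            x = x + noise_std * torch.randn_like(x)  # x~_{t+1} ~ C(X~|X_{t+1})
        z = encode(x)                                # z_{t+1} ~ Q_phi(Z|.)
    return decode(z)

# z0 = torch.randn(64, 200)                 # initialise from the 200-D prior
# samples = mcmc_sample(encode, decode, z0, steps=10)
```

Note that the encoder and decoder parameters are held fixed throughout, keeping the transition operator homogeneous.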
The same arguments for the proof of convergence of Equation (1) can be applied to Equation (2). 3.4 RELATED WORK Our work is inspired by that of Bengio et al. (2013); denoising autoencoders are cast into a probabilistic framework, where \( P_\theta(X|\tilde{X}) \) is the denoising (decoder) distribution and \( C(\tilde{X}|X) \) is the corruption (encoding) distribution. \( \tilde{X} \) represents the space of corrupted samples. Bengio et al. (2013) define a transition operator of a Markov chain, using these conditional distributions, whose stationary distribution is \( P(X) \) under the assumption that \( P_\theta(X|\tilde{X}) \) perfectly denoises samples. The chain is initialised with samples from the training data, and used to generate a chain of samples from \( P(X) \). This work was generalised to include a corruption process that mapped data samples to latent variables (Bengio et al., 2014), to create a new type of network called Generative Stochastic Networks (GSNs). However, in GSNs (Bengio et al., 2014) the latent space is not regularised with a prior. Our work is similar to several approaches proposed by Bengio et al. (2013; 2014) and Rezende et al. (2014). Both Bengio et al. and Rezende et al. define a transition operator in terms of \( X_t \) and \( X_{t-1} \). Bengio et al. generate samples with an initial \( X_0 \) drawn from the observed data, while Rezende et al. reconstruct samples from an \( X_0 \) which is a corrupted version of a data sample. In contrast to Bengio et al. and Rezende et al., in this work we define the transition operator in terms of \( Z_{t+1} \) and \( Z_t \), initialise samples with a \( Z_0 \) that is drawn from a prior distribution we can directly sample from, and then sample \( X_1 \) conditioned on \( Z_0 \). Although the initial samples may be poor, we are likely to generate a novel \( X_1 \) on the first step of MCMC sampling, which would not be achieved using Bengio et al.’s or Rezende et al.’s approach. We are able to draw the initial \( Z_0 \) from a prior because we constrain \( \hat{P}(Z) \) to be close to a prior distribution \( P(Z) \); in Bengio et al. the latent space is either not explicitly modeled (Bengio et al., 2013) or not constrained (Bengio et al., 2014). Further, Rezende et al. (2014) explicitly assume that the distribution of latent samples drawn from \( Q_\phi(Z|X) \) matches the prior, \( P(Z) \). Instead, we assume that samples drawn from \( Q_\phi(Z|X) \) have a distribution \( \hat{P}(Z) \) that does not necessarily match the prior, \( P(Z) \). We propose an alternative method for sampling \( \hat{P}(Z) \) in order to improve the quality of generated image samples. Our motivation is also different to that of Rezende et al. (2014): we use sampling to generate improved, novel data samples, while they use sampling to denoise corrupted samples. 3.5 EFFECT OF REGULARISATION METHOD The choice of \( \mathcal{L}_{prior} \) may affect how much improvement can be gained when using MCMC sampling, assuming that the optimisation process converges to a reasonable solution. We first consider the case of VAEs, which minimise \( D_{KL}[Q_\phi(Z|X)\|P(Z)] \). Minimising this KL divergence penalises the model \( \hat{P}(Z) \) if it contains samples that are outside the support of the true distribution \( P(Z) \), which might mean that \( \hat{P}(Z) \) captures only a part of \( P(Z) \). This means that when sampling \( P(Z) \), we may draw from a region that is not captured by \( \hat{P}(Z) \).
This suggests that MCMC sampling can improve samples from trained VAEs by walking them towards denser regions in \( \hat{P}(Z) \). Generally speaking, using the reverse KL divergence during training, \( D_{KL}[P(Z)\|Q_\phi(Z|X)] \), penalises the model \( Q_\phi(Z|X) \) if \( P(Z) \) produces samples that are outside of the support of \( \hat{P}(Z) \). By minimising this KL divergence, most samples in \( P(Z) \) will likely be in \( \hat{P}(Z) \) as well. AAEs, on the other hand, are regularised using the Jensen-Shannon (JS) divergence, given by \( \frac{1}{2}D_{KL}[P(Z)\|\frac{1}{2}(P(Z)+Q_\phi(Z|X))]+\frac{1}{2}D_{KL}[Q_\phi(Z|X)\|\frac{1}{2}(P(Z)+Q_\phi(Z|X))] \). Minimising this cost function attempts to find a compromise between the aforementioned extremes. However, this still suggests that some samples from \( P(Z) \) may lie outside \( \hat{P}(Z) \), and so we expect AAEs to also benefit from MCMC sampling. 4 EXPERIMENTS 4.1 MODELS We utilise the deep convolutional GAN (DCGAN) (Radford et al., 2015) as a basis for our autoencoder models. Although the recommendations from Radford et al. (2015) are for standard GAN architectures, we adopt them as sensible defaults for an autoencoder, with our encoder mimicking the DCGAN’s discriminator, and our decoder mimicking the generator. The encoder uses strided convolutions rather than max-pooling, and the decoder uses fractionally-strided convolutions rather than a fixed upsampling. Each convolutional layer is succeeded by spatial batch normalisation (Ioffe & Szegedy, 2015) and ReLU nonlinearities, except for the top of the decoder, which utilises a sigmoid function to constrain the output values between 0 and 1. We minimise the cross-entropy between the original and reconstructed images. Although this results in blurry images in regions which are ambiguous, such as hair detail, we opt not to use extra loss functions that improve the visual quality of generations (Larsen et al., 2015; Dosovitskiy & Brox, 2016; Lamb et al., 2016), to avoid confounding our results. Although the AAE is capable of approximating complex probabilistic posteriors (Makhzani et al., 2015), we construct ours to output a deterministic \( Q_\phi(Z|X) \). As such, the final layer of the encoder part of our AAEs is a convolutional layer that deterministically outputs a latent sample, \( \mathbf{z} \). The adversary is a fully-connected network with dropout and leaky ReLU nonlinearities. \( e_{rep}(X;\phi) \) of our VAEs has an output of twice the size, which corresponds to the means, \( \mu \), and standard deviations, \( \sigma \), of a diagonal covariance Gaussian distribution. For all models our prior, \( P(Z) \), is a 200D isotropic Gaussian with zero mean and unit variance: \( \mathcal{N}(\mathbf{0},\mathbf{I}) \). 4.2 DATASETS Our primary dataset is the (aligned and cropped) CelebA dataset, which consists of 200,000 images of celebrities (Liu et al., 2015). The DCGAN (Radford et al., 2015) was the first generative neural network model to show convincing novel samples from this dataset, and it has been used ever since as a qualitative benchmark due to the amount and quality of its samples. In Figures 7 and 8 of the supplementary material, we also include results on the SVHN dataset, which consists of 100,000 images of house numbers extracted from Google Street View images (Netzer et al., 2011). 4.3 TRAINING & EVALUATION For all datasets we perform the same preprocessing: cropping the centre to create a square image, then resizing to \( 64 \times 64 \)px.
We train our generative autoencoders for 20 epochs on the training split of the datasets, using Adam (Kingma & Ba, 2014) with \( \alpha = 0.0002, \beta_1 = 0.5 \) and \( \beta_2 = 0.999 \). The denoising generative autoencoders use the additive Gaussian noise mapping \( C(\tilde{X}|X) = \mathcal{N}(X,0.25\mathbf{I}) \). All of our experiments were run using the Torch library (Collobert et al., 2011); example code is available at https://github.com/Kaixhin/Autoencoders. For evaluation, we generate novel samples from the decoder using \( \mathbf{z} \) initially sampled from \( P(Z) \); we also show spherical interpolations (White, 2016) between four images of the testing split, as depicted in Figure 2. We then perform several steps of MCMC sampling on the novel samples and interpolations. During this process, we use the training mode of batch normalisation (Ioffe & Szegedy, 2015), i.e., we normalise the inputs using minibatch rather than population statistics, as the normalisation can partially compensate for poor initial inputs (see Figure 4) that are far from the training distribution. We compare novel samples between all models below, and leave further interpolation results to Figures 5 and 6 of the supplementary material. 4.4 SAMPLES [Figure 4 panels (a)–(p): samples from a VAE (a–d), DVAE (e–h), AAE (i–l) and DAAE (m–p), shown initially and after 1, 5 and 10 steps of MCMC sampling.] Figure 4: Samples from a VAE (a-d), DVAE (e-h), AAE (i-l) and DAAE (m-p) trained on the CelebA dataset. (a), (e), (i) and (m) show initial samples conditioned on \( \mathbf{z} \sim P(Z) \), which mainly result in recognisable faces emerging from noisy backgrounds. After 1 step of MCMC sampling, the more unrealistic generations change noticeably, and continue to do so with further steps. On the other hand, realistic generations, i.e. samples from a region with high probability, do not change as much. The adversarial criterion for deterministic AAEs is difficult to optimise when the dimensionality of \( Z \) is high. We observe that during training of our AAEs and DAAEs, the empirical standard deviation of \( \mathbf{z} \sim Q_\phi(Z|X) \) is less than 1, which means that \( \hat{P}(Z) \) fails to approximate \( P(Z) \) as closely as was achieved with the VAE and DVAE. However, this means that the effect of MCMC sampling is more pronounced, with the quality of all samples noticeably improving after a few steps. As a side-effect of the suboptimal solution learned by the networks, the denoising properties of the DAAE are more noticeable with the novel samples. 5 CONCLUSION Autoencoders consist of a decoder function, \( d(Z; \theta) \), and an encoder function, \( e(X; \phi) \), where \( \phi \) and \( \theta \) are learned parameters. The functions \( e(X; \phi) \) and \( d(Z; \theta) \) may be used to draw samples from the conditional distributions \( Q_\phi(Z|X) \) and \( P_\theta(X|Z) \) (Bengio et al., 2013; 2014; Rezende et al., 2014), where \( X \) refers to the space of observed samples and \( Z \) refers to the space of latent samples. The encoder distribution, \( Q_\phi(Z|X) \), maps data samples from the data generating distribution, \( P(X) \), to a latent distribution, \( P(Z) \). The decoder distribution, \( P_\theta(X|Z) \), maps samples from \( P(Z) \) to \( P(X) \).
We are concerned with *generative autoencoders*, which we define to be a family of autoencoders where regularisation is used during training to encourage \( \hat{P}(Z) \) to be close to a known prior \( P(Z) \). Commonly it is assumed that \( \hat{P}(Z) \) and \( P(Z) \) are similar, such that samples from \( P(Z) \) may be used to sample a decoder \( P_\theta(X|Z) \); we do not make the assumption that \( \hat{P}(Z) \) and \( P(Z) \) are “sufficiently close” (Rezende et al., 2014). Instead, we derive an MCMC process whose stationary distribution is \( \hat{P}(Z) \), allowing us to directly draw samples from \( \hat{P}(Z) \). By conditioning on samples from \( \hat{P}(Z) \), samples \( x \sim P_\theta(X|Z) \) are more consistent with the training data. In our experiments, we compare samples \( x \sim P_\theta(X|Z = z_0), z_0 \sim P(Z) \) to \( x \sim P_\theta(X|Z = z_i) \) for \( i \in \{1, 5, 10\} \), where the \( z_i \)'s are obtained through MCMC sampling, to show that MCMC sampling improves initially poor samples (see Figure 4). We also show that artifacts in \( x \) samples induced by interpolations across the latent space can be corrected by MCMC sampling (see Figure 2). We further validate our work by showing that the denoising properties of denoising generative autoencoders are best revealed by the use of MCMC sampling. Our MCMC sampling process is straightforward, and can be applied easily to existing generative autoencoders. This technique is orthogonal to the use of more powerful posteriors in AAEs (Makhzani et al., 2015) and VAEs (Kingma et al., 2016), and the combination of both could result in further improvements in generative modeling. Finally, our basic MCMC process opens the door to applying a large existing body of research on sampling methods to generative autoencoders. ACKNOWLEDGEMENTS We would like to acknowledge the EPSRC for funding through a Doctoral Training studentship and the support of the EPSRC CDT in Neurotechnology. Supplementary Material A PROOF THAT \( T(Z_{t+1}|Z_t) > 0 \) For \( P_\theta(X|Z) > 0 \) we require that all possible \( x \in X \subseteq \mathbb{R}^a \) may be generated by the network. Assuming that the model \( P_\theta(X|Z) \) is trained using a sufficient number of training samples, \( x \in X_{train} = X \), and that the model has infinite capacity to model \( X_{train} = X \), we should be able to draw any sample \( x \in X_{train} = X \) from \( P_\theta(X|Z) \). In reality \( X_{train} \subseteq X \) and it is not possible to have a model with infinite capacity. However, \( P_\theta(X|Z) \) is modeled using a deep neural network, which we assume has sufficient capacity to capture the training data well. Further, deep neural networks are able to interpolate between samples in very high dimensional spaces (Radford et al., 2015); we therefore further assume that if we have a large number of training samples (as well as large model capacity), almost any \( x \in X \) can be drawn from \( P_\theta(X|Z) \). Note that if we wish to generate human faces, we define \( X_{all} \) to be the space of all possible faces, with distribution \( P(X_{all}) \), while \( X_{train} \) is the space of faces made up by the training data. Then, practically, even a well trained model which learns to interpolate well only captures an \( X \), with distribution \( \int P_\theta(X|Z)P(Z)dZ \), where \( X_{train} \subseteq X \subseteq X_{all} \), because \( X \) additionally contains examples of interpolated versions of \( x \sim P(X_{train}) \).
For \( Q_\phi(Z|X) > 0 \) it must be possible to generate all possible \( z \in Z \subseteq \mathbb{R}^b \). \( Q_\phi(Z|X) \) is described by the function \( e(\cdot; \phi) : X \to Z \). To ensure that \( Q_\phi(Z|X) > 0 \), we want to show that the function \( e(X; \phi) \) allows us to represent all samples of \( z \in Z \). VAEs and AAEs each construct \( e(X; \phi) \) to produce \( z \in Z \) in different ways. The output of the encoder of a VAE, \( e_{VAE}(X; \phi) \), is \( z = \mu + \epsilon \odot \sigma \), where \( \epsilon \sim \mathcal{N}(0, I) \). The output of the VAE encoder is then always Gaussian, and hence there is no limitation on the \( z \)'s that \( e_{VAE}(X; \phi) \) can produce. This ensures that \( Q_\phi(Z|X) > 0 \), provided that \( \sigma \neq 0 \). The encoder of our AAE, \( e_{AAE}(X; \phi) \), is a deep neural network consisting of multiple convolutional and batch normalisation layers. The final layer of the \( e_{AAE}(X; \phi) \) is a fully connected layer without an activation function. The input to each of the \( M \) nodes in the fully connected layer is a function \( f_{i=1...M}(x) \). This means that \( z \) is given by: \( z = a_1 f_1(x) + a_2 f_2(x) + ... + a_M f_M(x) \), where \( a_{i=1...M} \) are the learned weights of the fully connected layer. We now consider three cases: Case 1: If \( a_i \) are a complete set of bases for \( Z \) then it is possible to generate any \( z \in Z \) from an \( x \in X \) with a one-to-one mapping, provided that \( f_i(x) \) is not restricted in the values that it can take. Case 2: If \( a_i \) are an overcomplete set of bases for \( Z \), then the same holds, provided that \( f_i(x) \) is not restricted in the values that it can take. Case 3: If \( a_i \) are an undercomplete set of bases for \( Z \) then it is not possible to generate all \( z \in Z \) from \( x \in X \). Instead, there is a many-to-one mapping from \( X \) to \( Z \). For \( Q_\phi(Z|X) > 0 \) our network must learn a complete or overcomplete set of bases, and \( f_i(x) \) must not be restricted in the values that it can take \( \forall i \). The network is encouraged to learn an overcomplete set of bases by learning a large number of \( a_i \)'s—specifically \( M = 8192 \) when basing our network on the DCGAN architecture (Radford et al., 2015)—more than 40 times the dimensionality of \( Z \). By using batch normalisation layers throughout the network, we ensure that values of \( f_i(x) \) are spread out, capturing a close-to-Gaussian distribution (Ioffe & Szegedy, 2015), encouraging infinite support. We have now shown that, under certain reasonable assumptions, \( P_\theta(X|Z) > 0 \) and \( Q_\phi(Z|X) > 0 \), which means that \( T(Z_{t+1}|Z_t) > 0 \), and hence we can get from any \( z \) to any other \( z \) in only one step. Therefore the Markov chain described by the transition operator \( T(Z_{t+1}|Z_t) \) defined in Equation (1) is both irreducible and aperiodic, which are the necessary conditions for ergodicity. B CELEBA B.1 INTERPOLATIONS [Figure 5 panels (a)–(d): two DVAE interpolations, initial and after 5 steps of MCMC sampling.] Figure 5: Interpolating between two faces using (a-d) a DVAE. The top rows (a, c) for each face are the original interpolations, whilst the second rows (b, d) are the result of 5 steps of MCMC sampling applied to the latent samples that were used to generate the original interpolation. The only qualitative difference when compared to VAEs (see Figure 2) is a desaturation of the generated images.
(a) AAE (initial) (b) AAE (5 steps) (c) AAE (initial) (d) AAE (5 steps) (e) DAAE (initial) (f) DAAE (5 steps) (g) DAAE (initial) (h) DAAE (5 steps) Figure 6: Interpolating between two faces using (a-d) an AAE and (e-h) a DAAE. The top rows (a, c, e, g) for each face are the original interpolations, whilst the second rows (b, d, f, h) are the result of 5 steps of MCMC sampling applied to the latent samples that were used to generate the original interpolation. Although the AAE performs poorly (b, d), the regularisation effect of denoising can be clearly seen with the DAAE after applying MCMC sampling (f, h). C STREET VIEW HOUSE NUMBERS C.1 SAMPLES (a) VAE (initial) (b) VAE (1 step) (c) VAE (5 steps) (d) VAE (10 steps) (e) DVAE (initial) (f) DVAE (1 step) (g) DVAE (5 steps) (h) DVAE (10 steps) (i) AAE (initial) (j) AAE (1 step) (k) AAE (5 steps) (l) AAE (10 steps) (m) DAAE (initial) (n) DAAE (1 step) (o) DAAE (5 steps) (p) DAAE (10 steps) Figure 7: Samples from a VAE (a-d), DVAE (e-h), AAE (i-l) and DAAE (m-p) trained on the SVHN dataset. The samples from the models imitate the blurriness present in the dataset. Although very few numbers are visible in the initial sample, the VAE and DVAE produce recognisable numbers from most of the initial samples after a few steps of MCMC sampling. Although the AAE and DAAE fail to produce recognisable numbers, the final samples are still a clear improvement over the initial samples. C.2 INTERPOLATIONS (a) VAE (initial) (b) VAE (5 steps) (c) VAE (initial) (d) VAE (5 steps) (e) DVAE (initial) (f) DVAE (5 steps) (g) DVAE (initial) (h) DVAE (5 steps) Figure 8: Interpolating between Google Street View house numbers using (a-d) a VAE and (e-h) a DVAE. The top rows (a, c, e, g) for each house number are the original interpolations, whilst the second rows (b, d, f, h) are the result of 5 steps of MCMC sampling. If the original interpolation produces symbols that do not resemble numbers, as observed in (a) and (e), the models will attempt to move the samples towards more realistic numbers (b, f). Interpolation between 1- and 2-digit numbers in an image (c, g) results in a meaningless blur in the middle of the interpolation. After a few steps of MCMC sampling the models instead produce more recognisable 1- or 2-digit numbers (d, h). We note that when the contrast is poor, denoising models in particular can struggle to recover meaningful images (h).
reject
Reject
3
172bc8f7c4fc6267c1ca379d59e79cebed1ea041
iclr
2,017
LEARNING TO ACT BY PREDICTING THE FUTURE Alexey Dosovitskiy Intel Labs Vladlen Koltun Intel Labs ABSTRACT We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments. 1 INTRODUCTION Machine learning problems are commonly divided into three classes: supervised, unsupervised, and reinforcement learning. In this view, supervised learning is concerned with learning input-output mappings, unsupervised learning aims to find hidden structure in data, and reinforcement learning deals with goal-directed behavior (Murphy, 2012). Reinforcement learning is compelling because it considers the natural setting of an organism acting in its environment. It is generally taken to comprise a class of problems (learning to act), the mathematical formalization of these problems (maximizing the expected discounted return), and a family of algorithmic approaches (optimizing an objective derived from the Bellman equation) (Kaelbling et al., 1996; Sutton & Barto, 2017). While reinforcement learning (RL) has achieved significant progress (Mnih et al., 2015), key challenges remain. One is sensorimotor control from raw sensory input in complex and dynamic three-dimensional environments, learned directly from experience. Another is the acquisition of general skills that can be flexibly deployed to accomplish a multitude of dynamically specified goals (Lake et al., 2016). In this work, we propose an approach to sensorimotor control that aims to assist progress towards overcoming these challenges. Our approach departs from the reward-based formalization commonly used in RL. Instead of a monolithic state and a scalar reward, we consider a stream of sensory input \( \{s_t\} \) and a stream of measurements \( \{m_t\} \). The sensory stream is typically high-dimensional and may include the raw visual, auditory, and tactile input. The measurement stream has lower dimensionality and constitutes a set of data that pertain to the agent’s current state. In a physical system, measurements can include attitude, supply levels, and structural integrity. In a three-dimensional computer game, they can include health, ammunition levels, and the number of adversaries overcome. Our guiding observation is that the interlocked temporal structure of the sensory and measurement streams provides a rich supervisory signal.
Given present sensory input, measurements, and goal, the agent can be trained to predict the effect of different actions on future measurements. Assuming that the goal can be expressed in terms of future measurements, predicting these provides all the information necessary to support action. This reduces sensorimotor control to supervised learning, while supporting learning from raw experience and without extraneous data. Supervision is provided by experience itself: by acting and observing the effects of different actions in the context of changing sensory inputs and goals. This approach has two significant benefits. First, in contrast to an occasional scalar reward assumed in traditional RL, the measurement stream provides rich and temporally dense supervision that can stabilize and accelerate training. While a sparse scalar reward may be the only feedback available in a board game (Tesauro, 1994; Silver et al., 2016), a multidimensional stream of sensations is a more appropriate model for an organism that is learning to function in an immersive environment (Adolph & Berger, 2006). The second advantage of the presented formulation is that it supports training without a fixed goal and pursuing dynamically specified goals at test time. Assuming that the goal can be expressed in terms of future measurements, the model can be trained to take the goal into account in its prediction of the future. At test time, the agent can predict future measurements given its current sensory input, measurements, and goal, and then simply select the action that best suits its present goal. We evaluate the presented approach in immersive three-dimensional simulations that require visually navigating a complex three-dimensional environment, recognizing objects, and interacting with dynamic adversaries. We use the classical first-person game Doom, which introduced immersive three-dimensional games to popular culture (Kushner, 2003). The presented approach is given only raw visual input and the statistics shown to the player in the game, such as health and ammunition levels. No human gameplay is used; the model trains on raw experience. Experimental results demonstrate that the presented approach outperforms state-of-the-art deep RL models, particularly on complex tasks. Experiments further demonstrate that models learned by the presented approach generalize across environments and goals, and that the use of vectorial measurements instead of a scalar reward is beneficial. A model trained with the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which took place in previously unseen environments. The presented approach outperformed the second best submission, which employed a substantially more complex model and additional supervision during training, by more than 50%. 2 BACKGROUND The supervised learning (SL) perspective on learning to act by interacting with the environment dates back decades. Jordan & Rumelhart (1992) analyze this approach, review early work, and argue that the choice of SL versus RL should be guided by the characteristics of the environment. Their analysis suggests that RL may be more efficient when the environment provides only a sparse scalar reward signal, whereas SL can be advantageous when temporally dense multidimensional feedback is available.
Sutton (1988) analyzed temporal-difference (TD) learning and argued that it is preferable to SL for prediction problems in which the correctness of the prediction is revealed many steps after the prediction is made. Sutton’s influential analysis assumes a sparse scalar reward. TD and policy gradient methods have since come to dominate the study of sensorimotor learning (Kober et al., 2013; Mnih et al., 2015; Sutton & Barto, 2017). While the use of SL is natural in imitation learning (LeCun et al., 2005; Ross et al., 2013) or in conjunction with model-based RL (Levine & Koltun, 2013), the formulation of sensorimotor learning from raw experience as supervised learning is rare (Levine et al., 2016). Our work suggests that when the learner is exposed to dense multidimensional sensory feedback, direct future prediction can support effective sensorimotor coordination in complex dynamic environments. Our approach has similarities to Monte Carlo methods. The convergence of such methods was analyzed early on and they were seen as theoretically advantageous, particularly when function approximators are used (Bertsekas, 1995; Sutton, 1995; Singh & Sutton, 1996). The choice of TD learning over Monte Carlo methods was argued on practical grounds, based on empirical performance on canonical examples (Sutton, 1995). While the understanding of the convergence of both types of methods has since improved (Szepesvári & Littman, 1999; Tsitsiklis, 2002; Even-Dar & Mansour, 2003), the argument for TD versus Monte Carlo is to this day empirical (Sutton & Barto, 2017). Sharp negative examples exist (Bertsekas, 2010). Our work deals with the more general setting of vectorial feedback and parameterized goals, and shows that a simple Monte-Carlo-type method performs extremely well in a compelling instantiation of this setting. Vector-valued feedback has been considered in the context of multi-objective decision-making (Gábor et al., 1998; Roijers et al., 2013). Transfer across related tasks has been analyzed by Konidaris et al. (2012). Parameterized goals have been studied in the context of continuous motor skills such as throwing darts at a target (da Silva et al., 2012; Kober et al., 2012; Deisenroth et al., 2014). A general framework for sharing value function approximators across both states and goals has been described by Schaul et al. (2015). Our work is most closely related to the framework of Schaul et al. (2015), but presents a specific formulation in which goals are defined in terms of intrinsic measurements and control is based on direct future prediction. We provide an architecture that handles realistic sensory and measurement streams and achieves state-of-the-art performance in complex and dynamic three-dimensional environments. Learning to act in simulated environments has been the focus of significant attention following the successful application of deep RL to Atari games by Mnih et al. (2015). A number of recent efforts applied related ideas to three-dimensional environments. Lillicrap et al. (2016) considered continuous and high-dimensional action spaces and learned control policies in the TORCS simulator. Mnih et al. (2016) described asynchronous variants of deep RL methods and demonstrated navigation in a three-dimensional labyrinth. Oh et al. (2016) augmented deep Q-networks with external memory and evaluated their performance on a set of tasks in Minecraft. In a recent technical report, Kulkarni et al.
(2016b) proposed end-to-end training of successor representations and demonstrated navigation in a Doom-based environment. In another recent report, Blundell et al. (2016) considered a nonparametric approach to control and conducted experiments in a three-dimensional labyrinth. Experiments reported in Section 4 demonstrate that our approach significantly outperforms state-of-the-art deep RL methods. Prediction of future states in dynamical systems was considered by Littman et al. (2001) and Singh et al. (2003). Predictive representations in the form of generalized value functions were advocated by Sutton et al. (2011). More recently, Oh et al. (2015) learned to predict future frames in Atari games. Prediction of full sensory input in realistic three-dimensional environments remains an open challenge, although significant progress is being made (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016). Our work considers prediction of future values of meaningful measurements from rich sensory input and shows that such prediction supports effective sensorimotor control. 3 MODEL Consider an agent that interacts with the environment over discrete time steps. At each time step \( t \), the agent receives an observation \( o_t \) and executes an action \( a_t \) based on this observation. We assume that the observations have the following structure: \( o_t = (s_t, m_t) \), where \( s_t \) is raw sensory input and \( m_t \) is a set of measurements. In our experiments, \( s_t \) is an image: the agent’s view of its three-dimensional environment. More generally, \( s_t \) can include input from multiple sensory modalities. The measurements \( m_t \) can indicate the attitude, supply levels, and structural integrity in a physical system, or health, ammunition, and score in a computer game. The distinction between sensory input \( s_t \) and measurements \( m_t \) is somewhat artificial: both \( s_t \) and \( m_t \) constitute sensory input in different forms. In our model, the measurement vector \( m_t \) is distinguished from other sensations in two ways. First, the measurement vector is the part of the observation that the agent will aim to predict. At present, predicting full sensory streams is beyond our capabilities (although see the work of Kalchbrenner et al. (2016) and van den Oord et al. (2016) for impressive recent progress). We therefore designate a subset of sensations as measurements that will be predicted. Second, we assume that the agent’s goals can be defined in terms of future measurements. Specifically, let \( \tau_1, \ldots, \tau_n \) be a set of temporal offsets and let \( f = (m_{t+\tau_1} - m_t, \ldots, m_{t+\tau_n} - m_t) \) be the corresponding differences of future and present measurements. We assume that any goal that the agent will pursue can be defined as maximization of a function \( u(f; g) \). Any parametric function can be used. Our experiments use goals that are expressed as linear combinations of future measurements: \[ u(f; g) = g^\top f, \] where the vector \( g \) parameterizes the goal and has the same dimensionality as \( f \). This model generalizes the standard reinforcement learning formulation: the scalar reward signal can be viewed as a measurement, and exponential decay is one possible configuration of the goal vector. To predict future measurements, we use a parameterized function approximator, denoted by \( F \): \[ \mathbf{p}_t^a = F(\mathbf{o}_t, a, \mathbf{g}; \theta). 
\] Here \( a \in \mathcal{A} \) is an action, \( \theta \) are the learned parameters of \( F \), and \( \mathbf{p}_t^a \) is the resulting prediction. The dimensionality of \( \mathbf{p}_t^a \) matches the dimensionality of \( \mathbf{f} \) and \( \mathbf{g} \). Note that the prediction is a function of the current observation, the considered action, and the goal. At test time, given learned parameters \( \theta \), the agent can choose the action that yields the best predicted outcome: \[ a_t = \arg\max_{a \in \mathcal{A}} \mathbf{g}^\top F(\mathbf{o}_t, a, \mathbf{g}; \theta). \] The goal vector used at test time need not be identical to any goal seen during training. 3.1 TRAINING The predictor \( F \) is trained on experiences collected by the agent. Starting with a random policy, the agent begins to interact with its environment. This interaction takes place over episodes that last for a fixed number of time steps or until a terminal event occurs. Consider a set of experiences collected by the agent, yielding a set \( \mathcal{D} \) of training examples: \( \mathcal{D} = \{ (\mathbf{o}_i, a_i, \mathbf{g}_i, \mathbf{f}_i) \}_{i=1}^N \). Here \( (\mathbf{o}_i, a_i, \mathbf{g}_i) \) is the input and \( \mathbf{f}_i \) is the output of example \( i \). The predictor is trained using a regression loss: \[ \mathcal{L}(\theta) = \sum_{i=1}^N \| F(\mathbf{o}_i, a_i, \mathbf{g}_i; \theta) - \mathbf{f}_i \|^2. \] A classification loss can be used for predicting categorical measurements, but this was not necessary in our experiments. As the agent collects new experiences, the training set \( \mathcal{D} \) and the predictor used by the agent change. We maintain an experience memory of the \( M \) most recent experiences out of which a mini-batch of \( N \) examples is randomly sampled for every iteration of the solver. The parameters of the predictor used by the agent are updated after every \( k \) new experiences. This setup departs from pure on-policy training and we have not observed any adverse effect of using a small experience memory. Additional details are provided in Appendix A. We have evaluated two training regimes: 1. Single goal: the goal vector is fixed throughout the training process. 2. Randomized goals: the goal vector for each episode is generated at random. In both regimes, the agent follows an \( \varepsilon \)-greedy policy: it acts greedily according to the current goal with probability \( 1 - \varepsilon \), and selects a random action with probability \( \varepsilon \). The value of \( \varepsilon \) is initially set to 1 and is decreased during training according to a fixed schedule. 3.2 ARCHITECTURE The predictor \( F \) is a deep network parameterized by \( \theta \). The network architecture we use is shown in Figure 1. The network has three input modules: a perception module \( S(s) \), a measurement module \( M(m) \) and a goal module \( G(g) \). In our experiments, \( s \) is an image and the perception module \( S \) is implemented as a convolutional network. The measurement and goal modules are fully-connected networks. The outputs of the three input modules are concatenated, forming the joint input representation used for subsequent processing: \[ \mathbf{j} = J(s, m, g) = (S(s), M(m), G(g)). \] Future measurements are predicted based on this input representation. The network emits predictions of future measurements for all actions at once. This could be done by a fully-connected module that absorbs the input representation and outputs predictions. 
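To make the \( \varepsilon \)-greedy action selection and the regression objective concrete, here is a minimal sketch; it is our own illustration rather than the authors' code, and `ToyPredictor` is a hypothetical placeholder for the full network \( F(\mathbf{o}, a, \mathbf{g}; \theta) \), which emits predictions for all actions at once.

```python
import torch
import torch.nn as nn

NUM_ACTIONS, DIM_F = 8, 6   # e.g. D1/D2: 8 actions, 1 measurement x 6 offsets

# Toy stand-in for F(o, a, g; theta); a real model would use the three
# input modules and the prediction module described in Section 3.2.
class ToyPredictor(nn.Module):
    def __init__(self, dim_obs, dim_goal):
        super().__init__()
        self.net = nn.Linear(dim_obs + dim_goal, NUM_ACTIONS * DIM_F)
    def forward(self, obs, goal):
        p = self.net(torch.cat([obs, goal], dim=-1))
        return p.view(*p.shape[:-1], NUM_ACTIONS, DIM_F)

def act(predictor, obs, goal, epsilon):
    # epsilon-greedy policy: random action with probability epsilon,
    # otherwise argmax_a of g^T F(o, a, g; theta).
    if torch.rand(()) < epsilon:
        return int(torch.randint(NUM_ACTIONS, ()))
    with torch.no_grad():
        p = predictor(obs, goal)            # (NUM_ACTIONS, DIM_F)
        return int(torch.argmax(p @ goal))

def train_step(predictor, optimizer, obs, actions, goals, targets):
    # One mini-batch update with the regression loss
    # L(theta) = sum_i || F(o_i, a_i, g_i; theta) - f_i ||^2.
    p = predictor(obs, goals)               # (N, NUM_ACTIONS, DIM_F)
    p_taken = p[torch.arange(len(actions)), actions]  # predictions for a_i
    loss = ((p_taken - targets) ** 2).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

pred = ToyPredictor(dim_obs=10, dim_goal=DIM_F)
opt = torch.optim.Adam(pred.parameters(), lr=1e-4)    # Adam, as in Appendix A
a = act(pred, torch.zeros(10), torch.ones(DIM_F), epsilon=0.1)
```

This sketch uses a plain fully-connected prediction head; the structured two-stream module that the paper actually uses is described next.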
Figure 1: Network structure. The image s, measurements m, and goal g are first processed separately by three input modules. The outputs of these modules are concatenated into a joint representation j. This joint representation is processed by two parallel streams that predict the expected measurements \( E(j) \) and the normalized action-conditional differences \( \{ \overline{A^i}(j) \} \), which are then combined to produce the final prediction for each action. However, we found that introducing additional structure into the prediction module enhances its ability to learn the fine differences between the outcomes of different actions. To this end, we build on the ideas of Wang et al. (2016) and split the prediction module into two streams: an expectation stream \( E(j) \) and an action stream \( A(j) \). The expectation stream predicts the average of the future measurements over all potential actions. The action stream concentrates on the fine differences between actions: \( A(j) = \langle A^1(j), \ldots, A^w(j) \rangle \), where \( w = |A| \) is the number of actions. We add a normalization layer at the end of the action stream that ensures that the average of the predictions of the action stream is zero for each future measurement: \[ \overline{A^i}(j) = A^i(j) - \frac{1}{w} \sum_{k=1}^w A^k(j) \] for all \( i \). The normalization layer subtracts the average over all actions from each prediction, forcing the expectation stream \( E \) to compensate by predicting these average values. The output of the expectation stream has dimensionality \( \dim(f) \), where \( f \) is the vector of future measurements. The output of the action stream has dimensionality \( w \cdot \dim(f) \). The output of the network is a prediction of future measurements for each action, composed by summing the output of the expectation stream and the normalized action-conditional output of the action stream: \[ p = \langle p^{a_1}, \ldots, p^{a_w} \rangle = \left\langle \overline{A^1}(j) + E(j), \ldots, \overline{A^w}(j) + E(j) \right\rangle . \] The output of the network has the same dimensionality as the output of the action stream. 4 EXPERIMENTS We evaluate the presented approach in immersive three-dimensional simulations based on the classical game Doom. In these simulations, the agent has a first-person view of the environment and must act based on the same visual information that is shown to human players in the game. To interface with the game engine, we use the ViZDoom platform developed by Kempka et al. (2016). One of the advantages of this platform is that it allows running the simulation at thousands of frames per second on a single CPU core, which enables training models on tens of millions of simulation steps in a single day. We compare the presented approach to state-of-the-art deep RL methods in four scenarios of increasing difficulty, study generalization across environments and goals, and evaluate the importance of different aspects of the model. 4.1 SETUP Scenarios. We use four scenarios of increasing difficulty: D1 Gathering health kits in a square room. ("Basic") D2 Gathering health kits and avoiding poison vials in a maze. ("Navigation") D3 Defending against adversaries while gathering health and ammunition in a maze. ("Battle") D4 Defending against adversaries while gathering health and ammunition in a more complicated maze. ("Battle 2") These scenarios are illustrated in Figure 2 and in the supplementary video (http://bit.ly/2f9tacZ).
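Returning to the prediction module of Section 3.2: the expectation/action split and the normalization layer admit a compact implementation. The sketch below is our reading of the equations above, with a single linear layer per stream for brevity (Tables A1 and A2 in the appendix use two fully-connected layers per stream):

```python
import torch
import torch.nn as nn

class TwoStreamHead(nn.Module):
    # p^{a_i} = E(j) + A^i(j) - (1/w) * sum_k A^k(j), as in Section 3.2.
    def __init__(self, dim_j, num_actions, dim_f):
        super().__init__()
        self.num_actions, self.dim_f = num_actions, dim_f
        self.expectation = nn.Linear(dim_j, dim_f)           # E(j)
        self.action = nn.Linear(dim_j, num_actions * dim_f)  # A(j)

    def forward(self, j):
        e = self.expectation(j)                                    # (N, dim_f)
        a = self.action(j).view(-1, self.num_actions, self.dim_f)  # (N, w, dim_f)
        a = a - a.mean(dim=1, keepdim=True)  # normalization: zero mean over actions
        return e.unsqueeze(1) + a            # (N, num_actions, dim_f)

# Dimensions from Table A1: joint input 512+128+128, 256 actions in D3/D4,
# and dim(f) = 3 measurements x 6 temporal offsets.
head = TwoStreamHead(dim_j=768, num_actions=256, dim_f=18)
print(head(torch.zeros(4, 768)).shape)   # torch.Size([4, 256, 18])
```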
The first two scenarios are provided with the ViZDoom platform. In D1, the agent is in a square room and its health is declining at a constant rate. To survive, it must move around and collect health kits, which are distributed abundantly in the room. This task is easy: as long as the agent learns to avoid walls and keep traversing the room, performance is good. In D2, the agent is in a maze and its health is again declining at a constant rate. Here it must again collect health kits that increase its health, but it must also avoid blue poison vials that decrease health. This task is harder: the agent must learn to traverse irregularly shaped passageways, and to distinguish health kits from poison vials. In both tasks, the agent has access to three binary sub-actions: move forward, turn left, and turn right. Any combination of these three can be used at any given time, resulting in 8 possible actions. The only measurement provided to the agent in these scenarios is health. The last two scenarios, D3 and D4, are more challenging and were designed by us using elements of the ViZDoom platform. Here the agent is armed and is under attack by alien monsters. The monsters spawn abundantly, move around in the environment, and shoot fireballs at the agent. Health kits and ammunition are sporadically distributed throughout the environment and can be collected by the agent. The environment is a simple maze in D3 and a more complex one in D4. In both scenarios, the agent has access to eight sub-actions: move forward, move backward, turn left, turn right, strafe left, strafe right, run, and shoot. Any combination of these sub-actions can be used, resulting in 256 possible actions. The agent is provided with three measurements: health, ammunition, and frag count (number of monsters killed). Model. The future predictor network used in our experiments was configured to be as close as possible to the DQN model of Mnih et al. (2015), to ensure a fair comparison. Additional details on the architecture are provided in Appendix A. Training and testing. The agent is trained and tested over episodes. Each episode terminates after 525 steps (equivalent to 1 minute of real time) or when the agent’s health drops to zero. Statistics reported in figures and tables summarize the final values of respective measurements at the end of episodes. We set the temporal offsets \( \tau_1, \ldots, \tau_n \) of predicted future measurements to 1, 2, 4, 8, 16, and 32 steps in all experiments. Only the predictions at the three largest offsets contribute to the objective function, with coefficients (0.5, 0.5, 1). More details are provided in Appendix A. 4.2 RESULTS Comparison to prior work. We have compared the presented approach to three deep RL methods: DQN (Mnih et al., 2015), A3C (Mnih et al., 2016), and DSR (Kulkarni et al., 2016b). DQN is a standard baseline for visuomotor control due to its impressive performance on Atari games. A3C is more recent and is commonly regarded as the state of the art in this area. DSR is described in a recent technical report and we included it because the authors also used the ViZDoom platform in experiments, albeit with a simple task. Further details on the setup of the prior approaches are provided in Appendix B. The performance of the different approaches during training is shown in Figure 3. In reporting the results of these experiments, we refer to our approach as DFP (direct future prediction). For the first two scenarios, all approaches were trained to maximize health.
For these scenarios, Figure 3 reports average health at the end of an episode over the course of training. For the last two scenarios, all approaches were trained to maximize a linear combination of the three normalized measurements (ammo, health, and frags) with coefficients (0.5, 0.5, 1). For these scenarios, Figure 3 reports average frags at the end of an episode. Each presented curve averages information from three independent training runs, and each data point is computed from \( 3 \times 50,000 \) steps of testing. DQN, A3C, and DFP were trained for 50 million steps. The training procedure for DSR is much slower and can only process roughly 1 million simulation steps per day. For this reason, we were only able to evaluate DSR on the Basic scenario and were not able to perform extensive hyperparameter tuning. We report results for this technique after 10 days of training. (This time was sufficient to significantly exceed the number of training steps reported in the experiments of Kulkarni et al. (2016b), but not sufficient to approach the number of steps afforded by the other approaches.) Table 1 reports the performance of the models after training. Each fully trained model was tested over 1 million simulation steps. The table reports average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for D3 and D4. We also report the average training speed for each approach, in millions of simulation steps per day of training. The performance of the different models is additionally illustrated in the supplementary video (http://bit.ly/2f9tacZ). <table> <tr> <th></th> <th>D1 (health)</th> <th>D2 (health)</th> <th>D3 (frags)</th> <th>D4 (frags)</th> <th>steps/day</th> </tr> <tr> <td>DQN</td> <td>89.1 ± 6.4</td> <td>25.4 ± 7.8</td> <td>1.2 ± 0.8</td> <td>0.4 ± 0.2</td> <td>7M</td> </tr> <tr> <td>A3C</td> <td><b>97.5 ± 0.1</b></td> <td>59.3 ± 2.0</td> <td>5.6 ± 0.2</td> <td>6.7 ± 2.9</td> <td>80M</td> </tr> <tr> <td>DSR</td> <td>4.6 ± 0.1</td> <td>—</td> <td>—</td> <td>—</td> <td>1M</td> </tr> <tr> <td>DFP</td> <td><b>97.7 ± 0.4</b></td> <td><b>84.1 ± 0.6</b></td> <td><b>33.5 ± 0.4</b></td> <td><b>16.5 ± 1.1</b></td> <td>70M</td> </tr> </table> Table 1: Comparison to prior work. We report average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for scenarios D3 and D4. Figure 3: Performance of different approaches during training. DQN, A3C, and DFP achieve similar performance in the Basic scenario. DFP outperforms the prior approaches in the other three scenarios, with a multiplicative gap in performance in the most complex ones (D3 and D4). In the Basic scenario, DQN, A3C, and DFP all perform well. As reported in Table 1, the performance of A3C and DFP is virtually identical at 97.5%, while DQN reaches 89%. In the more complex Navigation scenario, a significant gap opens up between DQN and A3C; this is consistent with the experiments of Mnih et al. (2016). DFP achieves the best performance in this scenario, with a 25 percentage point advantage during testing. Note that in these first two scenarios, DFP was only given a single measurement per time step (health). In the more complex Battle and Battle 2 scenarios (D3 and D4), DFP dominates the other approaches. It outperforms A3C at test time by a factor of 6 in D3 and by a factor of 2.5 in D4. Note that the advantage of DFP is particularly significant in the scenarios that provide richer measurements: three measurements per time step in D3 and D4.
The effect of multiple measurements is further evaluated in controlled experiments reported below. Generalization across environments. We now evaluate how the behaviors learned by the presented approach generalize across different environments. To this end, we have created 100 randomly textured versions of the mazes from scenarios D3 and D4. We used 90 of these for training and 10 for testing, with disjoint sets of textures in the training and testing environments. We call these scenarios D3-tx and D4-tx. Table 2 shows the performance of the approach for different combinations of training and testing regimes. For example, the entry in the D4-tx row of the D3 column shows the performance (in average number of frags at the end of an episode) of a model trained in D3 and tested in D4-tx. Not surprisingly, a model trained in the simple D3 environment does not learn sufficient invariance to surface appearance to generalize well to other environments. Training in the more complex multi-texture environment in D4 yields better generalization: the trained model performs well in D3 and exhibits non-trivial performance in D3-tx and D4-tx. Finally, exposing the model to significant variation in surface appearance in D3-tx or D4-tx during training yields very good generalization. <table> <tr> <th rowspan="2">Test</th> <th colspan="5">Train</th> </tr> <tr> <th>D3</th> <th>D4</th> <th>D3-tx</th> <th>D4-tx</th> <th>D4-tx-L</th> </tr> <tr> <td>D3</td> <td>33.6</td> <td>17.8</td> <td>29.8</td> <td>20.9</td> <td>22.0</td> </tr> <tr> <td>D4</td> <td>1.6</td> <td>17.1</td> <td>5.4</td> <td>10.8</td> <td>12.4</td> </tr> <tr> <td>D3-tx</td> <td>3.9</td> <td>8.1</td> <td>22.6</td> <td>15.6</td> <td>19.4</td> </tr> <tr> <td>D4-tx</td> <td>1.7</td> <td>5.1</td> <td>6.2</td> <td>10.2</td> <td>12.7</td> </tr> </table> Table 2: Generalization across environments. The last column of Table 2 additionally reports the performance of a higher-capacity model trained in D4-tx. This combination is referred to as D4-tx-L. As shown in the table, this model performs even better. The architecture is detailed in Appendix A. Visual Doom AI Competition. To further evaluate the presented approach, we participated in the Visual Doom AI Competition, held during September 2016. The competition evaluated sensorimotor control models that act based on raw visual input. The competition had the form of a tournament: the submitted agents play multiple games against each other, their performance measured by aggregate frags. The competition included two tracks. The Limited Deathmatch track was held in a known environment that was given to the participants in advance at training time. The Full Deathmatch track evaluated generalization to previously unseen environments and took place in multiple new environments that were not available to the participating teams at training time. We only enrolled in the Full Deathmatch track. Our model was trained using a variant of the D4-tx-L regime. Our model won, outperforming the second best submission by more than 50%. That submission, described by Lample & Chaplot (2016), constitutes a strong baseline. It is a deep recurrent Q-network that incorporates an LSTM and was trained using reward shaping and extra supervision from the game engine. Specifically, the authors took advantage of the ability provided by the ViZDoom platform to use the internal configuration of the game, including ground-truth knowledge of the presence of enemies in the field of view, during training.
The authors’ report shows that this additional supervision improved performance significantly. Our model, which is simpler, achieved even higher performance without such additional supervision. Goal-agnostic training. We now evaluate the ability of the presented approach to learn without a fixed goal at training time, and adapt to varying goals at test time. These experiments are performed in the Battle scenario. We use three training regimes: (a) fixed goal vector during training, (b) random goal vector with each value sampled uniformly from [0, 1] for every episode, and (c) random goal vector with each value sampled uniformly from [−1, 1] for every episode. More details are provided in Appendix A. Intuitively, in the second regime the agent is instructed to maximize the different measurements, but has no knowledge of their relative importance. The third regime makes no assumptions as to whether the measured quantities are desirable or not. The results are shown in Table 3. Each group of columns corresponds to a training regime and each row to a different test-time goal. Goals are given by the weights of the three measurements (ammo, health, and frags) in the objective function. The first test-time goal in Table 3 is the goal vector used in the battle scenarios in the prior experiments, the second seeks to maximize the frag count, the third is a pacifist (maximize ammo and health, minimize frags), the fourth seeks to aimlessly drain ammunition, and the fifth aims to maximize health. For each row, each group of columns reports the average value of each of the three measurements at the end of an episode. Note that health level at the end of an episode can be negative if the agent suffered major damage in the pre-terminal step. We draw two main conclusions. First, on the main task (first row), models trained without knowing the goal in advance (b, c) perform nearly as well as a dedicated model trained specifically for the eventual goal (a). Second, all models generalize to new goals but not equally well. Models trained with a variety of goals (b, c) generalize much better than a model trained with a fixed goal. <table> <tr> <th rowspan="2">test goal</th> <th colspan="3">(a) fixed goal (0.5, 0.5, 1)</th> <th colspan="3">(b) random goals [0, 1]</th> <th colspan="3">(c) random goals [−1, 1]</th> </tr> <tr> <th>ammo</th> <th>health</th> <th>frags</th> <th>ammo</th> <th>health</th> <th>frags</th> <th>ammo</th> <th>health</th> <th>frags</th> </tr> <tr> <td>(0.5, 0.5, 1)</td> <td>83.4</td> <td>97.0</td> <td>33.6</td> <td>92.3</td> <td>96.9</td> <td>31.5</td> <td>49.3</td> <td>94.3</td> <td>28.9</td> </tr> <tr> <td>(0, 0, 1)</td> <td>0.3</td> <td>−3.7</td> <td>11.5</td> <td>4.3</td> <td>30.0</td> <td>20.6</td> <td>21.8</td> <td>70.9</td> <td>24.6</td> </tr> <tr> <td>(1, 1, −1)</td> <td>28.6</td> <td>−2.0</td> <td>0.0</td> <td>22.1</td> <td>4.4</td> <td>0.2</td> <td>89.4</td> <td>83.6</td> <td>0.0</td> </tr> <tr> <td>(−1, 0, 0)</td> <td>1.0</td> <td>−8.3</td> <td>1.7</td> <td>1.9</td> <td>−7.5</td> <td>1.2</td> <td>0.9</td> <td>−8.6</td> <td>1.7</td> </tr> <tr> <td>(0, 1, 0)</td> <td>0.7</td> <td>2.7</td> <td>2.6</td> <td>9.0</td> <td>77.8</td> <td>6.6</td> <td>3.0</td> <td>69.6</td> <td>7.9</td> </tr> </table> Table 3: Generalization across goals. Each group of three columns corresponds to a training regime, each row corresponds to a test-time goal.
The results in the first row indicate that the approach performs well on the main task even without knowing the goal at training time. The results in the other rows indicate that goal-agnostic training supports generalization across goals at test time. Ablation study. We now perform an ablation study using the D3-tx scenario. Specifically, we evaluate the importance of vectorial feedback versus a scalar reward, and the effect of predicting measurements at multiple temporal offsets. The results are summarized in Table 4. The table reports the performance (in average frags at the end of an episode) of our full model (predicting three measurements at six temporal offsets) and of ablated variants that only predict frags (a scalar reward) and/or only predict at the farthest temporal offset. As the results demonstrate, predicting multiple measurements significantly improves the performance of the learned model, even when it is evaluated by only one of those measurements. Predicting measurements at multiple future times is also beneficial. This supports the intuition that a dense flow of multivariate measurements is a better training signal than a scalar reward. <table> <tr> <th></th> <th></th> <th>frags</th> </tr> <tr> <td>all measurements</td> <td>all offsets</td> <td>22.6</td> </tr> <tr> <td>all measurements</td> <td>one offset</td> <td>17.2</td> </tr> <tr> <td>frags only</td> <td>all offsets</td> <td>10.3</td> </tr> <tr> <td>frags only</td> <td>one offset</td> <td>5.0</td> </tr> </table> Table 4: Ablation study. Predicting all measurements at all temporal offsets yields the best results. 5 DISCUSSION We presented an approach to sensorimotor control in immersive environments. Our approach is simple and demonstrates that supervised learning techniques can be adapted to learning to act in complex and dynamic three-dimensional environments given raw sensory input and intrinsic measurements. The model trains on raw experience, by interacting with the environment without extraneous supervision. Natural supervision is provided by the cotemporal structure of the sensory and measurement streams. Our experiments have demonstrated that this simple approach outperforms sophisticated deep reinforcement learning formulations on challenging tasks in immersive environments. Experiments have further demonstrated that the use of multivariate measurements provides a significant advantage over conventional scalar rewards and that the trained model can effectively pursue new goals not specified during training. The presented work can be extended in multiple ways that are important for broadening the range of behaviors that can be learned. First, the presented model is purely reactive: it acts based on the current frame only, with no explicit facilities for memory and no test-time retention of internal representations. Recent work has explored memory-based models (Oh et al., 2016) and integrating such ideas with the presented approach may yield substantial advances. Second, significant progress in behavioral sophistication will likely require temporal abstraction and hierarchical organization of learned skills (Barto & Mahadevan, 2003; Kulkarni et al., 2016a). Third, the presented model was developed for discrete action spaces; applying the presented ideas to continuous actions would be interesting (Lillicrap et al., 2016). 
Finally, predicting features learned directly from rich sensory input can blur the distinction between sensory and measurement streams (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016). REFERENCES Karen E. Adolph and Sarah E. Berger. Motor development. In Handbook of Child Psychology, volume 2, pp. 161–213. Wiley, 6th edition, 2006. Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(1-2), 2003. Dimitri P. Bertsekas. A counterexample to temporal differences learning. Neural Computation, 7(2), 1995. Dimitri P. Bertsekas. Pathologies of temporal difference methods in approximate dynamic programming. In IEEE Conference on Decision and Control, 2010. Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z. Leibo, Jack Rae, Daan Wierstra, and Demis Hassabis. Model-free episodic control. arXiv:1606.04460, 2016. Bruno Castro da Silva, George Konidaris, and Andrew G. Barto. Learning parameterized skills. In ICML, 2012. Marc Peter Deisenroth, Peter Englert, Jan Peters, and Dieter Fox. Multi-task policy search for robotics. In ICRA, 2014. Eyal Even-Dar and Yishay Mansour. Learning rates for Q-learning. JMLR, 5, 2003. Chelsea Finn, Ian J. Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016. Zoltán Gábor, Zsolt Kalmár, and Csaba Szepesvári. Multi-criteria reinforcement learning. In ICML, 1998. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015. Michael I. Jordan and David E. Rumelhart. Forward models: Supervised learning with a distal teacher. Cognitive Science, 16(3), 1992. Leslie Pack Kaelbling, Michael L. Littman, and Andrew W. Moore. Reinforcement learning: A survey. JAIR, 4, 1996. Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv:1610.00527, 2016. Michal Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. In IEEE Conference on Computational Intelligence and Games, 2016. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. Jens Kober, Andreas Wilhelm, Erhan Oztop, and Jan Peters. Reinforcement learning to adjust parametrized motor primitives to new situations. Autonomous Robots, 33(4), 2012. Jens Kober, J. Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. IJRR, 32(11), 2013. George Konidaris, Ilya Scheidwasser, and Andrew G. Barto. Transfer in reinforcement learning via shared features. JMLR, 13, 2012. Tejas D. Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Joshua B. Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In NIPS, 2016a. Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J. Gershman. Deep successor reinforcement learning. arXiv:1606.02396, 2016b. David Kushner. Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture. Random House, 2003. Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. arXiv:1604.00289, 2016. Guillaume Lample and Devendra Singh Chaplot. Playing FPS games with deep reinforcement learning. arXiv:1609.05521, 2016. 
Yann LeCun, Urs Muller, Jan Ben, Eric Cosatto, and Beat Flepp. Off-road obstacle avoidance through end-to-end learning. In NIPS, 2005. Sergey Levine and Vladlen Koltun. Guided policy search. In ICML, 2013. Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. In ISER, 2016. Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In ICLR, 2016. Michael L. Littman, Richard S. Sutton, and Satinder P. Singh. Predictive representations of state. In NIPS, 2001. Michaël Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, et al. Human-level control through deep reinforcement learning. Nature, 518(7540), 2015. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, 2016. Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012. Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L. Lewis, and Satinder P. Singh. Action-conditional video prediction using deep networks in Atari games. In NIPS, 2015. Junhyuk Oh, Valliappa Chockalingam, Satinder P. Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. In ICML, 2016. Diederik M. Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. A survey of multi-objective sequential decision-making. JAIR, 48, 2013. Stéphane Ross, Narek Melik-Barkhudarov, Kumar Shaurya Shankar, Andreas Wendel, Debadeepta Dey, J. Andrew Bagnell, and Martial Hebert. Learning monocular reactive UAV control in cluttered natural environments. In ICRA, 2013. Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In ICML, 2015. David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 2016. Satinder P. Singh and Richard S. Sutton. Reinforcement learning with replacing eligibility traces. Machine Learning, 22(1-3), 1996. Satinder P. Singh, Michael L. Littman, Nicholas K. Jong, David Pardoe, and Peter Stone. Learning predictive state representations. In ICML, 2003. Richard S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3, 1988. Richard S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In NIPS, 1995. Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2nd edition, 2017. Richard S. Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M. Pilarski, Adam White, and Doina Precup. Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In AAMAS, 2011. Csaba Szepesvári and Michael L. Littman. A unified analysis of value-function-based reinforcement learning algorithms. Neural Computation, 11(8), 1999. Gerald Tesauro.
TD-gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2), 1994. John N. Tsitsiklis. On the convergence of optimistic policy iteration. JMLR, 2002. Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv:1609.03499, 2016. Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning. In ICML, 2016. A IMPLEMENTATION DETAILS A.1 NETWORK ARCHITECTURES The detailed architectures of two network variants – basic and large – are shown in Tables A1 and A2. The basic network follows the architecture of Mnih et al. (2015) as closely as possible. The large network is similar, but all layers starting from the third are wider by a factor of two. In all networks we use the leaky ReLU nonlinearity LReLU(x) = max(x, 0.2x) after each non-terminal layer. We initialize the weights as proposed by He et al. (2015). <table> <tr> <th>module</th> <th>input dimension</th> <th>channels</th> <th>kernel</th> <th>stride</th> </tr> <tr> <td rowspan="4">Perception</td> <td>84 × 84 × 1</td> <td>32</td> <td>8</td> <td>4</td> </tr> <tr> <td>21 × 21 × 32</td> <td>64</td> <td>4</td> <td>2</td> </tr> <tr> <td>10 × 10 × 64</td> <td>64</td> <td>3</td> <td>1</td> </tr> <tr> <td>10 · 10 · 64</td> <td>512</td> <td>–</td> <td>–</td> </tr> <tr> <td rowspan="3">Measurement</td> <td>3</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td>128</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td>128</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td rowspan="3">Goal</td> <td>3 · 6</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td>128</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td>128</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td rowspan="2">Expectation</td> <td>512 + 128 + 128</td> <td>512</td> <td>–</td> <td>–</td> </tr> <tr> <td>512</td> <td>3 · 6</td> <td>–</td> <td>–</td> </tr> <tr> <td rowspan="2">Action</td> <td>512 + 128 + 128</td> <td>512</td> <td>–</td> <td>–</td> </tr> <tr> <td>512</td> <td>3 · 6 · 256</td> <td>–</td> <td>–</td> </tr> </table> Table A1: The basic architecture. <table> <tr> <th>module</th> <th>input dimension</th> <th>channels</th> <th>kernel</th> <th>stride</th> </tr> <tr> <td rowspan="4">Perception</td> <td>128 × 128 × 1</td> <td>32</td> <td>8</td> <td>4</td> </tr> <tr> <td>32 × 32 × 32</td> <td>64</td> <td>4</td> <td>2</td> </tr> <tr> <td>16 × 16 × 64</td> <td>128</td> <td>3</td> <td>1</td> </tr> <tr> <td>16 · 16 · 128</td> <td>1024</td> <td>–</td> <td>–</td> </tr> <tr> <td rowspan="3">Measurement</td> <td>3</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td>128</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td>128</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td rowspan="3">Goal</td> <td>3 · 6</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td>128</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td>128</td> <td>128</td> <td>–</td> <td>–</td> </tr> <tr> <td rowspan="2">Expectation</td> <td>1024 + 128 + 128</td> <td>1024</td> <td>–</td> <td>–</td> </tr> <tr> <td>1024</td> <td>3 · 6</td> <td>–</td> <td>–</td> </tr> <tr> <td rowspan="2">Action</td> <td>1024 + 128 + 128</td> <td>1024</td> <td>–</td> <td>–</td> </tr> <tr> <td>1024</td> <td>3 · 6 · 256</td> <td>–</td> <td>–</td> </tr> </table> Table A2: The large architecture. 
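As a hedged sketch of the basic architecture, the perception module of Table A1 can be written in PyTorch as follows; the padding values are our assumption, chosen so that the intermediate spatial sizes match those listed in the table:

```python
import torch
import torch.nn as nn

# Sketch of the basic perception module from Table A1. The leaky ReLU
# nonlinearity LReLU(x) = max(x, 0.2x) follows each non-terminal layer,
# as stated in the text; the module is stateless, so it can be reused.
lrelu = nn.LeakyReLU(0.2)

perception = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=8, stride=4, padding=2),   # 84x84 -> 21x21
    lrelu,
    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 21x21 -> 10x10
    lrelu,
    nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),  # 10x10 -> 10x10
    lrelu,
    nn.Flatten(),
    nn.Linear(10 * 10 * 64, 512),                           # 10*10*64 -> 512
    lrelu,
)

x = torch.zeros(1, 1, 84, 84)   # one grayscale 84x84 frame
print(perception(x).shape)      # torch.Size([1, 512])
```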
We empirically validate the architectural choices in the D3-tx regime. We compare the full basic architecture to three variants: • No normalization: normalization at the end of the action stream is not performed. • No split: no expectation/action split, simply predict future measurements with a fully-connected network. • No input measurements: the input measurement stream is removed, and current measurements are not provided to the network. The results are reported in Table A3. All modifications of the basic architecture hurt performance, showing that the two-stream formulation is beneficial and that providing the current measurements to the network increases performance but is not crucial. <table> <tr> <th></th> <th>full</th> <th>no normalization</th> <th>no split</th> <th>no input measurements</th> </tr> <tr> <th>Score</th> <td>22.6</td> <td>21.6</td> <td>16.5</td> <td>19.4</td> </tr> </table> Table A3: Evaluation of different network architectures. A.2 OTHER DETAILS The raw sensory input to the agent is the observed image, in grayscale, without any additional pre-processing. The resolution is 84×84 pixels for the basic model and 128×128 pixels for the large one. We normalized the measurements by their standard deviations under random exploration. More precisely, we divided ammo count, health level, and frag count by 7.5, 30.0, and 1.0, respectively. We performed frame skipping during both training and testing. The agent observes the environment and selects an action every 4th frame. The selected action is repeated during the skipped frames. This accelerates training without sacrificing accuracy. In the paper, “step” always refers to steps after frame skipping (equivalent to every 4th step before frame skipping). When played by a human, Doom runs at 35 frames per second, so one step of the agent is equivalent to 114 milliseconds of real time. Therefore, frame skipping has the added benefit of bringing the reaction time of the agent closer to that of a human. We set the temporal offsets \( \tau_1, \ldots, \tau_n \) of predicted future measurements to 1, 2, 4, 8, 16, and 32 steps in all experiments. The longest temporal offset corresponds to 3.66 seconds of real time. In all experiments, only the latest three predictions (after 8, 16, and 32 steps) contributed to the objective function, with fixed coefficients (0.5, 0.5, 1.0). Therefore, in scenarios with multiple measurements available to the agent (D3 and D4), the goal vector was specified by three numbers: the relative weights of the three measurements (ammo, health, frags) in the objective function. In goal-directed training, these were fixed to (0.5, 0.5, 1.0), and in goal-agnostic training they were sampled uniformly at random from [0, 1] or [−1, 1]. We used an experience memory of \( M = 20,000 \) steps, and sampled a mini-batch of \( N = 64 \) samples after every \( k = 64 \) new experiences added. We added the experiences to the memory using 8 copies of the agent running in parallel. The networks in all experiments were trained using the Adam algorithm (Kingma & Ba, 2015) with \( \beta_1 = 0.95, \beta_2 = 0.999 \), and \( \varepsilon = 10^{-4} \). The initial learning rate is set to \( 10^{-4} \) and is gradually decreased during training. The basic networks were trained for 800,000 mini-batch iterations (or 51.2 million steps), the large one for 2,000,000 iterations. B BASELINES We compared our approach to three prior methods: DQN (Mnih et al., 2015), DSR (Kulkarni et al., 2016b), and A3C (Mnih et al., 2016).
We used the authors’ implementations of DQN (https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner) and DSR (https://github.com/Ardavans/DSR), and an independent implementation of A3C (https://github.com/muupan/async-rl). For scenarios D1 and D2 we used the change in health as reward. For D3 and D4 we used a linear combination of changes of the three normalized measurements with the same coefficients as for the presented approach: (0.5, 0.5, 1). For DQN and DSR we tested three learning rates: the default one (0.00025) and two alternatives (0.00005 and 0.00002). Other hyperparameters were left at their default values. For A3C, which trains faster, we performed a search over a set of learning rates \( \{2, 4, 8, 16, 32\} \cdot 10^{-4} \) for the first two tasks; for the last two tasks we trained 20 models with random learning rates sampled log-uniformly between \(10^{-4}\) and \(10^{-2}\) and random \( \beta \) (entropy regularization) sampled log-uniformly between \(10^{-4}\) and \(10^{-1}\). For all baselines we report the best results we were able to obtain.
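To tie together the details of Appendix A.2, here is a hedged sketch of how the regression targets and goal vectors could be assembled; the outer-product layout of the goal vector is our reading of the text, and the episode log is synthetic:

```python
import numpy as np

OFFSETS = [1, 2, 4, 8, 16, 32]              # temporal offsets tau_1..tau_n
SCALES = np.array([7.5, 30.0, 1.0])         # ammo, health, frags (Appendix A.2)
TEMPORAL_COEF = np.array([0, 0, 0, 0.5, 0.5, 1.0])  # only the last three offsets count

def make_target(measurements, t):
    # Regression target f_t: differences of normalized future and present
    # measurements at each offset; `measurements` has shape (T, 3).
    m = measurements / SCALES               # std. devs under random exploration
    return np.stack([m[t + tau] - m[t] for tau in OFFSETS]).ravel()  # (18,)

def make_goal(weights):
    # Our reading of Appendix A.2: the 18-dim goal vector combines the
    # temporal coefficients with the per-measurement weights, e.g.
    # weights = (0.5, 0.5, 1.0) for (ammo, health, frags) in goal-directed
    # training, or weights sampled uniformly in goal-agnostic training.
    return np.outer(TEMPORAL_COEF, np.asarray(weights)).ravel()      # (18,)

rng = np.random.default_rng(0)
log = rng.uniform(0, 1, size=(100, 3)) * SCALES   # fake episode measurement log
f = make_target(log, t=10)
g = make_goal(rng.uniform(-1, 1, size=3))          # goal-agnostic regime (c)
print(f.shape, g.shape, g @ f)                     # u(f; g) = g^T f
```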
ABSTRACT We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cocomporeal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments. 1 INTRODUCTION Machine learning problems are commonly divided into three classes: supervised, unsupervised, and reinforcement learning. In this view, supervised learning is concerned with learning input-output mappings, unsupervised learning aims to find hidden structure in data, and reinforcement learning deals with goal-directed behavior [Murphy, 2012]. Reinforcement learning is compelling because it considers the natural setting of an organism acting in its environment. It is generally taken to comprise a class of problems (learning to act), the mathematical formalization of these problems (maximizing the expected discounted return), and a family of algorithmic approaches (optimizing an objective derived from the Bellman equation) [Kaelbling et al., 1996; Sutton & Barto, 2017]. While reinforcement learning (RL) has achieved significant progress [Mnih et al., 2015], key challenges remain. One is sensorimotor control from raw sensory input in complex and dynamic three-dimensional environments, learned directly from experience. Another is the acquisition of general skills that can be flexibly deployed to accomplish a multitude of dynamically specified goals [Lake et al., 2016]. In this work, we propose an approach to sensorimotor control that aims to assist progress towards overcoming these challenges. Our approach departs from the reward-based formalization commonly used in RL. Instead of a monolithic state and a scalar reward, we consider a stream of sensory input \( \{s_t\} \) and a stream of measurements \( \{m_t\} \). The sensory stream is typically high-dimensional and may include the raw visual, auditory, and tactile input. The measurement stream has lower dimensionality and constitutes a set of data that pertain to the agent’s current state. In a physical system, measurements can include attitude, supply levels, and structural integrity. In a three-dimensional computer game, they can include health, ammunition levels, and the number of adversaries overcome. Our guiding observation is that the interlocked temporal structure of the sensory and measurement streams provides a rich supervisory signal. Given present sensory input, measurements, and goal, the agent can be trained to predict the effect of different actions on future measurements. 
Assuming that the goal can be expressed in terms of future measurements, predicting these provides all the information necessary to support action. This reduces sensorimotor control to supervised learning, while supporting learning from raw experience and without extraneous data. Supervision is provided by experience itself: by acting and observing the effects of different actions in the context of changing sensory inputs and goals. This approach has two significant benefits. First, in contrast to an occasional scalar reward assumed in traditional RL, the measurement stream provides rich and temporally dense supervision that can stabilize and accelerate training. While a sparse scalar reward may be the only feedback available in a board game (Tesauro, 1994; Silver et al., 2016), a multidimensional stream of sensations is a more appropriate model for an organism that is learning to function in an immersive environment (Adolph & Berger, 2006). The second advantage of the presented formulation is that it supports training without a fixed goal and pursuing dynamically specified goals at test time. Assuming that the goal can be expressed in terms of future measurements, the model can be trained to take the goal into account in its prediction of the future. At test time, the agent can predict future measurements given its current sensory input, measurements, and goal, and then simply select the action that best suits its present goal. We evaluate the presented approach in immersive three-dimensional simulations that require visually navigating a complex three-dimensional environment, recognizing objects, and interacting with dynamic adversaries. We use the classical first-person game Doom, which introduced immersive three-dimensional games to popular culture (Kushner, 2003). The presented approach is given only raw visual input and the statistics shown to the player in the game, such as health and ammunition levels. No human gameplay is used; the model trains on raw experience. Experimental results demonstrate that the presented approach outperforms state-of-the-art deep RL models, particularly on complex tasks. Experiments further demonstrate that models learned by the presented approach generalize across environments and goals, and that the use of vectorial measurements instead of a scalar reward is beneficial. A model trained with the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which took place in previously unseen environments. The presented approach outperformed the second best submission, which employed a substantially more complex model and additional supervision during training, by more than 50%. 2 BACKGROUND The supervised learning (SL) perspective on learning to act by interacting with the environment dates back decades. Jordan & Rumelhart (1992) analyze this approach, review early work, and argue that the choice of SL versus RL should be guided by the characteristics of the environment. Their analysis suggests that RL may be more efficient when the environment provides only a sparse scalar reward signal, whereas SL can be advantageous when temporally dense multidimensional feedback is available. Sutton (1988) analyzed temporal-difference (TD) learning and argued that it is preferable to SL for prediction problems in which the correctness of the prediction is revealed many steps after the prediction is made. Sutton’s influential analysis assumes a sparse scalar reward.
TD and policy gradient methods have since come to dominate the study of sensorimotor learning (Kober et al., 2013; Mnih et al., 2015; Sutton & Barto, 2017). While the use of SL is natural in imitation learning (LeCun et al., 2005; Ross et al., 2013) or in conjunction with model-based RL (Levine & Koltun, 2013), the formulation of sensorimotor learning from raw experience as supervised learning is rare (Levine et al., 2016). Our work suggests that when the learner is exposed to dense multidimensional sensory feedback, direct future prediction can support effective sensorimotor coordination in complex dynamic environments. Our approach has similarities to Monte Carlo methods. The convergence of such methods was analyzed early on and they were seen as theoretically advantageous, particularly when function approximators are used (Bertsekas, 1995; Sutton, 1995; Singh & Sutton, 1996). The choice of TD learning over Monte Carlo methods was argued on practical grounds, based on empirical performance on canonical examples (Sutton, 1995). While the understanding of the convergence of both types of methods has since improved (Szepesvári & Littman, 1999; Tsitsiklis, 2002; Even-Dar & Mansour, 2003), the argument for TD versus Monte Carlo is to this day empirical (Sutton & Barto, 2017). Sharp negative examples exist (Bertsekas, 2010). Our work deals with the more general setting of vectorial feedback and parameterized goals, and shows that a simple Monte-Carlo-type method performs extremely well in a compelling instantiation of this setting. Vector-valued feedback has been considered in the context of multi-objective decision-making (Gábor et al., 1998; Roijers et al., 2013). Transfer across related tasks has been analyzed by Konidaris et al. (2012). Parameterized goals have been studied in the context of continuous motor skills such as throwing darts at a target (da Silva et al., 2012; Kober et al., 2012; Deisenroth et al., 2014). A general framework for sharing value function approximators across both states and goals has been described by Schaul et al. (2015). Our work is most closely related to the framework of Schaul et al. (2015), but presents a specific formulation in which goals are defined in terms of intrinsic measurements and control is based on direct future prediction. We provide an architecture that handles realistic sensory and measurement streams and achieves state-of-the-art performance in complex and dynamic three-dimensional environments. Learning to act in simulated environments has been the focus of significant attention following the successful application of deep RL to Atari games by Mnih et al. (2015). A number of recent efforts applied related ideas to three-dimensional environments. Lillicrap et al. (2016) considered continuous and high-dimensional action spaces and learned control policies in the TORCS simulator. Mnih et al. (2016) described asynchronous variants of deep RL methods and demonstrated navigation in a three-dimensional labyrinth. Oh et al. (2016) augmented deep Q-networks with external memory and evaluated their performance on a set of tasks in Minecraft. In a recent technical report, Kulkarni et al. (2016b) proposed end-to-end training of successor representations and demonstrated navigation in a Doom-based environment. In another recent report, Blundell et al. (2016) considered a nonparametric approach to control and conducted experiments in a three-dimensional labyrinth.
Experiments reported in Section 4 demonstrate that our approach significantly outperforms state-of-the-art deep RL methods. Prediction of future states in dynamical systems was considered by Littman et al. (2001) and Singh et al. (2003). Predictive representations in the form of generalized value functions were advocated by Sutton et al. (2011). More recently, Oh et al. (2015) learned to predict future frames in Atari games. Prediction of full sensory input in realistic three-dimensional environments remains an open challenge, although significant progress is being made (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016). Our work considers prediction of future values of meaningful measurements from rich sensory input and shows that such prediction supports effective sensorimotor control. 3 MODEL Consider an agent that interacts with the environment over discrete time steps. At each time step \( t \), the agent receives an observation \( o_t \) and executes an action \( a_t \) based on this observation. We assume that the observations have the following structure: \( o_t = (s_t, m_t) \), where \( s_t \) is raw sensory input and \( m_t \) is a set of measurements. In our experiments, \( s_t \) is an image: the agent’s view of its three-dimensional environment. More generally, \( s_t \) can include input from multiple sensory modalities. The measurements \( m_t \) can indicate the attitude, supply levels, and structural integrity in a physical system, or health, ammunition, and score in a computer game. The distinction between sensory input \( s_t \) and measurements \( m_t \) is somewhat artificial: both \( s_t \) and \( m_t \) constitute sensory input in different forms. In our model, the measurement vector \( m_t \) is distinguished from other sensations in two ways. First, the measurement vector is the part of the observation that the agent will aim to predict. At present, predicting full sensory streams is beyond our capabilities (although see the work of Kalchbrenner et al. (2016) and van den Oord et al. (2016) for impressive recent progress). We therefore designate a subset of sensations as measurements that will be predicted. Second, we assume that the agent’s goals can be defined in terms of future measurements. Specifically, let \( \tau_1, \ldots, \tau_n \) be a set of temporal offsets and let \( f = (m_{t+\tau_1} - m_t, \ldots, m_{t+\tau_n} - m_t) \) be the corresponding differences of future and present measurements. We assume that any goal that the agent will pursue can be defined as maximization of a function \( u(f; g) \). Any parametric function can be used. Our experiments use goals that are expressed as linear combinations of future measurements: \[ u(f; g) = g^\top f, \] where the vector \( g \) parameterizes the goal and has the same dimensionality as \( f \). This model generalizes the standard reinforcement learning formulation: the scalar reward signal can be viewed as a measurement, and exponential decay is one possible configuration of the goal vector. To predict future measurements, we use a parameterized function approximator, denoted by \( F \): \[ \mathbf{p}_t^a = F(\mathbf{o}_t, a, \mathbf{g}; \theta). \] Here \( a \in \mathcal{A} \) is an action, \( \theta \) are the learned parameters of \( F \), and \( \mathbf{p}_t^a \) is the resulting prediction. The dimensionality of \( \mathbf{p}_t^a \) matches the dimensionality of \( \mathbf{f} \) and \( \mathbf{g} \). 
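As an illustration of these definitions, the following sketch constructs \( f \) from a measurement stream and evaluates \( u(f; g) = g^\top f \); the array shapes, the offset values, and the example goal vector are illustrative assumptions.

```python
import numpy as np

def future_differences(m, t, offsets):
    # f = (m_{t+tau_1} - m_t, ..., m_{t+tau_n} - m_t), flattened into one vector.
    return np.concatenate([m[t + tau] - m[t] for tau in offsets])

def utility(f, g):
    # Goal as a linear combination of future measurement differences: u(f; g) = g^T f.
    return g @ f

offsets = [1, 2, 4, 8, 16, 32]       # temporal offsets tau_1, ..., tau_n
m = np.random.randn(100, 3)          # stand-in stream of 3 measurements per step
f = future_differences(m, t=10, offsets=offsets)
g = np.zeros_like(f)
g[-3:] = [0.5, 0.5, 1.0]             # one illustrative goal: weight the three
                                     # measurements at the farthest offset
print(utility(f, g))
```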
Note that the prediction is a function of the current observation, the considered action, and the goal. At test time, given learned parameters \( \theta \), the agent can choose the action that yields the best predicted outcome: \[ a_t = \arg\max_{a \in \mathcal{A}} \mathbf{g}^\top F(\mathbf{o}_t, a, \mathbf{g}; \theta). \] The goal vector used at test time need not be identical to any goal seen during training. 3.1 TRAINING The predictor \( F \) is trained on experiences collected by the agent. Starting with a random policy, the agent begins to interact with its environment. This interaction takes place over episodes that last for a fixed number of time steps or until a terminal event occurs. Consider a set of experiences collected by the agent, yielding a set \( \mathcal{D} \) of training examples: \( \mathcal{D} = \{ (\mathbf{o}_i, a_i, \mathbf{g}_i, \mathbf{f}_i) \}_{i=1}^N \). Here \( (\mathbf{o}_i, a_i, \mathbf{g}_i) \) is the input and \( \mathbf{f}_i \) is the output of example \( i \). The predictor is trained using a regression loss: \[ \mathcal{L}(\theta) = \sum_{i=1}^N \| F(\mathbf{o}_i, a_i, \mathbf{g}_i; \theta) - \mathbf{f}_i \|^2. \] A classification loss can be used for predicting categorical measurements, but this was not necessary in our experiments. As the agent collects new experiences, the training set \( \mathcal{D} \) and the predictor used by the agent change. We maintain an experience memory of the \( M \) most recent experiences, out of which a mini-batch of \( N \) examples is randomly sampled for every iteration of the solver. The parameters of the predictor used by the agent are updated after every \( k \) new experiences. This setup departs from pure on-policy training, and we have not observed any adverse effect of using a small experience memory. Additional details are provided in Appendix A. We have evaluated two training regimes: 1. Single goal: the goal vector is fixed throughout the training process. 2. Randomized goals: the goal vector for each episode is generated at random. In both regimes, the agent follows an \( \varepsilon \)-greedy policy: it acts greedily according to the current goal with probability \( 1 - \varepsilon \), and selects a random action with probability \( \varepsilon \). The value of \( \varepsilon \) is initially set to 1 and is decreased during training according to a fixed schedule.
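The action rule and the training loss can be sketched as follows; the predictor \( F \) is passed in as a stand-in callable, and the function names and batch layout are assumptions for illustration.

```python
import numpy as np

def select_action(F, o, g, num_actions, eps, rng):
    # Epsilon-greedy: a random action with probability eps, otherwise
    # a_t = argmax_a g^T F(o, a, g).
    if rng.random() < eps:
        return int(rng.integers(num_actions))
    scores = [g @ F(o, a, g) for a in range(num_actions)]
    return int(np.argmax(scores))

def regression_loss(F, batch):
    # Sum of squared errors between predicted and realized measurement
    # differences f_i over a sampled mini-batch of experiences.
    return sum(np.sum((F(o, a, g) - f) ** 2) for (o, a, g, f) in batch)
```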
3.2 ARCHITECTURE The predictor \( F \) is a deep network parameterized by \( \theta \). The network architecture we use is shown in Figure 1.

Figure 1: Network structure. The image s, measurements m, and goal g are first processed separately by three input modules. The outputs of these modules are concatenated into a joint representation j. This joint representation is processed by two parallel streams that predict the expected measurements \( E(j) \) and the normalized action-conditional differences \( \{ \overline{A^i}(j) \} \), which are then combined to produce the final prediction for each action.

The network has three input modules: a perception module \( S(s) \), a measurement module \( M(m) \), and a goal module \( G(g) \). In our experiments, \( s \) is an image and the perception module \( S \) is implemented as a convolutional network. The measurement and goal modules are fully-connected networks. The outputs of the three input modules are concatenated, forming the joint input representation used for subsequent processing: \[ \mathbf{j} = J(s, m, g) = (S(s), M(m), G(g)). \] Future measurements are predicted based on this input representation. The network emits predictions of future measurements for all actions at once. This could be done by a fully-connected module that absorbs the input representation and outputs predictions. However, we found that introducing additional structure into the prediction module enhances its ability to learn the fine differences between the outcomes of different actions. To this end, we build on the ideas of Wang et al. (2016) and split the prediction module into two streams: an expectation stream \( E(j) \) and an action stream \( A(j) \). The expectation stream predicts the average of the future measurements over all potential actions. The action stream concentrates on the fine differences between actions: \( A(j) = \langle A^1(j), \ldots, A^w(j) \rangle \), where \( w = |\mathcal{A}| \) is the number of actions. We add a normalization layer at the end of the action stream that ensures that the average of the predictions of the action stream is zero for each future measurement: \[ \overline{A^i}(j) = A^i(j) - \frac{1}{w} \sum_{k=1}^w A^k(j) \] for all \( i \). The normalization layer subtracts the average over all actions from each prediction, forcing the expectation stream \( E \) to compensate by predicting these average values. The output of the expectation stream has dimensionality \( \dim(f) \), where \( f \) is the vector of future measurements. The output of the action stream has dimensionality \( w \cdot \dim(f) \). The output of the network is a prediction of future measurements for each action, composed by summing the output of the expectation stream and the normalized action-conditional output of the action stream: \[ p = \langle p^{a_1}, \ldots, p^{a_w} \rangle = \left\langle \overline{A^1}(j) + E(j), \ldots, \overline{A^w}(j) + E(j) \right\rangle . \] The output of the network has the same dimensionality as the output of the action stream.
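One possible realization of this two-stream module is sketched below in PyTorch; layer widths and module boundaries are illustrative assumptions, not the exact configuration used in the experiments. Subtracting the per-action mean forces the expectation stream to carry the action-independent component of the prediction.

```python
import torch
import torch.nn as nn

class TwoStreamPredictor(nn.Module):
    # Maps the joint representation j to per-action predictions
    # p^{a_i} = Abar^i(j) + E(j).
    def __init__(self, joint_dim, num_actions, meas_dim, hidden=512):
        super().__init__()
        self.num_actions, self.meas_dim = num_actions, meas_dim
        self.expectation = nn.Sequential(
            nn.Linear(joint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, meas_dim))
        self.action = nn.Sequential(
            nn.Linear(joint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions * meas_dim))

    def forward(self, j):
        e = self.expectation(j)                      # (batch, meas_dim)
        a = self.action(j).view(-1, self.num_actions, self.meas_dim)
        a = a - a.mean(dim=1, keepdim=True)          # normalization layer
        return a + e.unsqueeze(1)                    # (batch, actions, meas_dim)
```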
4 EXPERIMENTS We evaluate the presented approach in immersive three-dimensional simulations based on the classical game Doom. In these simulations, the agent has a first-person view of the environment and must act based on the same visual information that is shown to human players in the game. To interface with the game engine, we use the ViZDoom platform developed by Kempka et al. (2016). One of the advantages of this platform is that it allows running the simulation at thousands of frames per second on a single CPU core, which enables training models on tens of millions of simulation steps in a single day. We compare the presented approach to state-of-the-art deep RL methods in four scenarios of increasing difficulty, study generalization across environments and goals, and evaluate the importance of different aspects of the model. 4.1 SETUP Scenarios. We use four scenarios of increasing difficulty:

D1 Gathering health kits in a square room. ("Basic")
D2 Gathering health kits and avoiding poison vials in a maze. ("Navigation")
D3 Defending against adversaries while gathering health and ammunition in a maze. ("Battle")
D4 Defending against adversaries while gathering health and ammunition in a more complicated maze. ("Battle 2")

These scenarios are illustrated in Figure 2 and in the supplementary video (http://bit.ly/2f9tacZ). The first two scenarios are provided with the ViZDoom platform. In D1, the agent is in a square room and its health is declining at a constant rate. To survive, it must move around and collect health kits, which are distributed abundantly in the room. This task is easy: as long as the agent learns to avoid walls and keep traversing the room, performance is good. In D2, the agent is in a maze and its health is again declining at a constant rate. Here it must again collect health kits that increase its health, but it must also avoid blue poison vials that decrease health. This task is harder: the agent must learn to traverse irregularly shaped passageways, and to distinguish health kits from poison vials. In both tasks, the agent has access to three binary sub-actions: move forward, turn left, and turn right. Any combination of these three can be used at any given time, resulting in 8 possible actions. The only measurement provided to the agent in these scenarios is health. The last two scenarios, D3 and D4, are more challenging and were designed by us using elements of the ViZDoom platform. Here the agent is armed and is under attack by alien monsters. The monsters spawn abundantly, move around in the environment, and shoot fireballs at the agent. Health kits and ammunition are sporadically distributed throughout the environment and can be collected by the agent. The environment is a simple maze in D3 and a more complex one in D4. In both scenarios, the agent has access to eight sub-actions: move forward, move backward, turn left, turn right, strafe left, strafe right, run, and shoot. Any combination of these sub-actions can be used, resulting in 256 possible actions. The agent is provided with three measurements: health, ammunition, and frag count (number of monsters killed). Model. The future predictor network used in our experiments was configured to be as close as possible to the DQN model of Mnih et al. (2015), to ensure a fair comparison. Additional details on the architecture are provided in Appendix A. Training and testing. The agent is trained and tested over episodes. Each episode terminates after 525 steps (equivalent to 1 minute of real time) or when the agent’s health drops to zero. Statistics reported in figures and tables summarize the final values of respective measurements at the end of episodes. We set the temporal offsets \( \tau_1, \ldots, \tau_n \) of predicted future measurements to 1, 2, 4, 8, 16, and 32 steps in all experiments. Only the latest three offsets contribute to the objective function, with coefficients (0.5, 0.5, 1). More details are provided in Appendix A. 4.2 RESULTS Comparison to prior work. We have compared the presented approach to three deep RL methods: DQN (Mnih et al., 2015), A3C (Mnih et al., 2016), and DSR (Kulkarni et al., 2016b). DQN is a standard baseline for visuomotor control due to its impressive performance on Atari games. A3C is more recent and is commonly regarded as the state of the art in this area. DSR is described in a recent technical report and we included it because the authors also used the ViZDoom platform in experiments, albeit with a simple task. Further details on the setup of the prior approaches are provided in Appendix B. The performance of the different approaches during training is shown in Figure 3. In reporting the results of these experiments, we refer to our approach as DFP (direct future prediction). For the first two scenarios, all approaches were trained to maximize health. For these scenarios, Figure 3 reports average health at the end of an episode over the course of training.
For the last two scenarios, all approaches were trained to maximize a linear combination of the three normalized measurements (ammo, health, and frags) with coefficients (0.5, 0.5, 1). For these scenarios, Figure 3 reports average frags at the end of an episode. Each presented curve averages information from three independent training runs, and each data point is computed from \( 3 \times 50,000 \) steps of testing. DQN, A3C, and DFP were trained for 50 million steps. The training procedure for DSR is much slower and can only process roughly 1 million simulation steps per day. For this reason, we were only able to evaluate DSR on the Basic scenario and were not able to perform extensive hyperparameter tuning. We report results for this technique after 10 days of training. (This time was sufficient to significantly exceed the number of training steps reported in the experiments of Kulkarni et al. (2016b), but not sufficient to approach the number of steps afforded by the other approaches.) Table 1 reports the performance of the models after training. Each fully trained model was tested over 1 million simulation steps. The table reports average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for D3 and D4. We also report the average training speed for each approach, in millions of simulation steps per day of training. The performance of the different models is additionally illustrated in the supplementary video (http://bit.ly/2f9tacZ).

<table> <tr> <th></th> <th>D1 (health)</th> <th>D2 (health)</th> <th>D3 (frags)</th> <th>D4 (frags)</th> <th>steps/day</th> </tr> <tr> <td>DQN</td> <td>89.1 ± 6.4</td> <td>25.4 ± 7.8</td> <td>1.2 ± 0.8</td> <td>0.4 ± 0.2</td> <td>7M</td> </tr> <tr> <td>A3C</td> <td><b>97.5 ± 0.1</b></td> <td>59.3 ± 2.0</td> <td>5.6 ± 0.2</td> <td>6.7 ± 2.9</td> <td>80M</td> </tr> <tr> <td>DSR</td> <td>4.6 ± 0.1</td> <td>—</td> <td>—</td> <td>—</td> <td>1M</td> </tr> <tr> <td>DFP</td> <td><b>97.7 ± 0.4</b></td> <td><b>84.1 ± 0.6</b></td> <td><b>33.5 ± 0.4</b></td> <td><b>16.5 ± 1.1</b></td> <td>70M</td> </tr> </table>

Table 1: Comparison to prior work. We report average health at the end of an episode for scenarios D1 and D2, and average frags at the end of an episode for scenarios D3 and D4.

Figure 3: Performance of different approaches during training. DQN, A3C, and DFP achieve similar performance in the Basic scenario. DFP outperforms the prior approaches in the other three scenarios, with a multiplicative gap in performance in the most complex ones (D3 and D4).

In the Basic scenario, DQN, A3C, and DFP all perform well. As reported in Table 1, the performance of A3C and DFP is virtually identical at 97.5%, while DQN reaches 89%. In the more complex Navigation scenario, a significant gap opens up between DQN and A3C; this is consistent with the experiments of Mnih et al. (2016). DFP achieves the best performance in this scenario, with a 25 percentage point advantage during testing. Note that in these first two scenarios, DFP was only given a single measurement per time step (health). In the more complex Battle and Battle 2 scenarios (D3 and D4), DFP dominates the other approaches. It outperforms A3C at test time by a factor of 6 in D3 and by a factor of 2.5 in D4. Note that the advantage of DFP is particularly significant in the scenarios that provide richer measurements: three measurements per time step in D3 and D4. The effect of multiple measurements is further evaluated in controlled experiments reported below.
Generalization across environments. We now evaluate how the behaviors learned by the presented approach generalize across different environments. To this end, we have created 100 randomly textured versions of the mazes from scenarios D3 and D4. We used 90 of these for training and 10 for testing, with disjoint sets of textures in the training and testing environments. We call these scenarios D3-tx and D4-tx. Table 2 shows the performance of the approach for different combinations of training and testing regimes. For example, the entry in the D4-tx row of the D3 column shows the performance (in average number of frags at the end of an episode) of a model trained in D3 and tested in D4-tx. Not surprisingly, a model trained in the simple D3 environment does not learn sufficient invariance to surface appearance to generalize well to other environments. Training in the more complex multi-texture environment in D4 yields better generalization: the trained model performs well in D3 and exhibits non-trivial performance in D3-tx and D4-tx. Finally, exposing the model to significant variation in surface appearance in D3-tx or D4-tx during training yields very good generalization.

<table> <tr> <th rowspan="2">Test</th> <th colspan="5">Train</th> </tr> <tr> <th>D3</th> <th>D4</th> <th>D3-tx</th> <th>D4-tx</th> <th>D4-tx-L</th> </tr> <tr> <td>D3</td> <td>33.6</td> <td>17.8</td> <td>29.8</td> <td>20.9</td> <td>22.0</td> </tr> <tr> <td>D4</td> <td>1.6</td> <td>17.1</td> <td>5.4</td> <td>10.8</td> <td>12.4</td> </tr> <tr> <td>D3-tx</td> <td>3.9</td> <td>8.1</td> <td>22.6</td> <td>15.6</td> <td>19.4</td> </tr> <tr> <td>D4-tx</td> <td>1.7</td> <td>5.1</td> <td>6.2</td> <td>10.2</td> <td>12.7</td> </tr> </table>

Table 2: Generalization across environments.

The last column of Table 2 additionally reports the performance of a higher-capacity model trained in D4-tx. This combination is referred to as D4-tx-L. As shown in the table, this model performs even better. The architecture is detailed in Appendix A. Visual Doom AI Competition. To further evaluate the presented approach, we participated in the Visual Doom AI Competition, held during September 2016. The competition evaluated sensorimotor control models that act based on raw visual input. The competition had the form of a tournament: the submitted agents play multiple games against each other, their performance measured by aggregate frags. The competition included two tracks. The Limited Deathmatch track was held in a known environment that was given to the participants in advance at training time. The Full Deathmatch track evaluated generalization to previously unseen environments and took place in multiple new environments that were not available to the participating teams at training time. We only enrolled in the Full Deathmatch track. Our model was trained using a variant of the D4-tx-L regime and won the track, outperforming the second best submission by more than 50%. That submission, described by Lample & Chaplot (2016), constitutes a strong baseline. It is a deep recurrent Q-network that incorporates an LSTM and was trained using reward shaping and extra supervision from the game engine. Specifically, the authors took advantage of the ability provided by the ViZDoom platform to use the internal configuration of the game, including ground-truth knowledge of the presence of enemies in the field of view, during training. The authors’ report shows that this additional supervision improved performance significantly.
Our model, which is simpler, achieved even higher performance without such additional supervision. Goal-agnostic training. We now evaluate the ability of the presented approach to learn without a fixed goal at training time, and adapt to varying goals at test time. These experiments are performed in the Battle scenario. We use three training regimes: (a) fixed goal vector during training, (b) random goal vector with each value sampled uniformly from [0, 1] for every episode, and (c) random goal vector with each value sampled uniformly from [−1, 1] for every episode. More details are provided in Appendix A. Intuitively, in the second regime the agent is instructed to maximize the different measurements, but has no knowledge of their relative importance. The third regime makes no assumptions as to whether the measured quantities are desirable or not. The results are shown in Table 3. Each group of columns corresponds to a training regime and each row to a different test-time goal. Goals are given by the weights of the three measurements (ammo, health, and frags) in the objective function. The first test-time goal in Table 3 is the goal vector used in the battle scenarios in the prior experiments, the second seeks to maximize the frag count, the third is a pacifist (maximize ammo and health, minimize frags), the fourth seeks to aimlessly drain ammunition, and the fifth aims to maximize health. For each row, each group of columns reports the average value of each of the three measurements at the end of an episode. Note that health level at the end of an episode can be negative if the agent suffered major damage in the pre-terminal step. We draw two main conclusions. First, on the main task (first row), models trained without knowing the goal in advance (b, c) perform nearly as well as a dedicated model trained specifically for the eventual goal (a). Second, all models generalize to new goals but not equally well. Models trained with a variety of goals (b, c) generalize much better than a model trained with a fixed goal.

<table> <tr> <th rowspan="2">test goal</th> <th colspan="3">(a) fixed goal (0.5, 0.5, 1)</th> <th colspan="3">(b) random goals [0, 1]</th> <th colspan="3">(c) random goals [−1, 1]</th> </tr> <tr> <th>ammo</th> <th>health</th> <th>frags</th> <th>ammo</th> <th>health</th> <th>frags</th> <th>ammo</th> <th>health</th> <th>frags</th> </tr> <tr> <td>(0.5, 0.5, 1)</td> <td>83.4</td> <td>97.0</td> <td>33.6</td> <td>92.3</td> <td>96.9</td> <td>31.5</td> <td>49.3</td> <td>94.3</td> <td>28.9</td> </tr> <tr> <td>(0, 0, 1)</td> <td>0.3</td> <td>−3.7</td> <td>11.5</td> <td>4.3</td> <td>30.0</td> <td>20.6</td> <td>21.8</td> <td>70.9</td> <td>24.6</td> </tr> <tr> <td>(1, 1, −1)</td> <td>28.6</td> <td>−2.0</td> <td>0.0</td> <td>22.1</td> <td>4.4</td> <td>0.2</td> <td>89.4</td> <td>83.6</td> <td>0.0</td> </tr> <tr> <td>(−1, 0, 0)</td> <td>1.0</td> <td>−8.3</td> <td>1.7</td> <td>1.9</td> <td>−7.5</td> <td>1.2</td> <td>0.9</td> <td>−8.6</td> <td>1.7</td> </tr> <tr> <td>(0, 1, 0)</td> <td>0.7</td> <td>2.7</td> <td>2.6</td> <td>9.0</td> <td>77.8</td> <td>6.6</td> <td>3.0</td> <td>69.6</td> <td>7.9</td> </tr> </table>

Table 3: Generalization across goals. Each group of three columns corresponds to a training regime; each row corresponds to a test-time goal.
The results in the first row indicate that the approach performs well on the main task even without knowing the goal at training time. The results in the other rows indicate that goal-agnostic training supports generalization across goals at test time. Ablation study. We now perform an ablation study using the D3-tx scenario. Specifically, we evaluate the importance of vectorial feedback versus a scalar reward, and the effect of predicting measurements at multiple temporal offsets. The results are summarized in Table 4. The table reports the performance (in average frags at the end of an episode) of our full model (predicting three measurements at six temporal offsets) and of ablated variants that only predict frags (a scalar reward) and/or only predict at the farthest temporal offset. As the results demonstrate, predicting multiple measurements significantly improves the performance of the learned model, even when it is evaluated by only one of those measurements. Predicting measurements at multiple future times is also beneficial. This supports the intuition that a dense flow of multivariate measurements is a better training signal than a scalar reward.

<table> <tr> <th>predicted measurements</th> <th>temporal offsets</th> <th>frags</th> </tr> <tr> <td>all measurements</td> <td>all offsets</td> <td>22.6</td> </tr> <tr> <td>all measurements</td> <td>one offset</td> <td>17.2</td> </tr> <tr> <td>frags only</td> <td>all offsets</td> <td>10.3</td> </tr> <tr> <td>frags only</td> <td>one offset</td> <td>5.0</td> </tr> </table>

Table 4: Ablation study. Predicting all measurements at all temporal offsets yields the best results.

5 DISCUSSION We presented an approach to sensorimotor control in immersive environments. Our approach is simple and demonstrates that supervised learning techniques can be adapted to learning to act in complex and dynamic three-dimensional environments given raw sensory input and intrinsic measurements. The model trains on raw experience, by interacting with the environment without extraneous supervision. Natural supervision is provided by the cotemporal structure of the sensory and measurement streams. Our experiments have demonstrated that this simple approach outperforms sophisticated deep reinforcement learning formulations on challenging tasks in immersive environments. Experiments have further demonstrated that the use of multivariate measurements provides a significant advantage over conventional scalar rewards and that the trained model can effectively pursue new goals not specified during training. The presented work can be extended in multiple ways that are important for broadening the range of behaviors that can be learned. First, the presented model is purely reactive: it acts based on the current frame only, with no explicit facilities for memory and no test-time retention of internal representations. Recent work has explored memory-based models (Oh et al., 2016), and integrating such ideas with the presented approach may yield substantial advances. Second, significant progress in behavioral sophistication will likely require temporal abstraction and hierarchical organization of learned skills (Barto & Mahadevan, 2003; Kulkarni et al., 2016a). Third, the presented model was developed for discrete action spaces; applying the presented ideas to continuous actions would be interesting (Lillicrap et al., 2016).
Finally, predicting features learned directly from rich sensory input can blur the distinction between sensory and measurement streams (Mathieu et al., 2016; Finn et al., 2016; Kalchbrenner et al., 2016).
accept
Accept (Oral)
7.666667
2181e7db97a41562e65e57635f8ce2c288b8f627
iclr
2,017
SUBMODULAR SUM-PRODUCT NETWORKS FOR SCENE UNDERSTANDING Abram L. Friesen & Pedro Domingos Department of Computer Science and Engineering University of Washington Seattle, WA 98195, USA {afriesen,pedrod}@cs.washington.edu ABSTRACT Sum-product networks (SPNs) are an expressive class of deep probabilistic models in which inference takes time linear in their size, enabling them to be learned effectively. However, for certain challenging problems, such as scene understanding, the corresponding SPN has exponential size and is thus intractable. In this work, we introduce submodular sum-product networks (SSPNs), an extension of SPNs in which sum-node weights are defined by a submodular energy function. SSPNs combine the expressivity and depth of SPNs with the ability to efficiently compute the MAP state of a combinatorial number of labelings afforded by submodular energies. SSPNs for scene understanding can be understood as representing all possible parses of an image over arbitrary region shapes with respect to an image grammar. Despite this complexity, we develop an efficient and convergent algorithm based on graph cuts for computing the (approximate) MAP state of an SSPN, greatly increasing the expressivity of the SPN model class. Empirically, we show exponential improvements in parsing time compared to traditional inference algorithms such as \( \alpha \)-expansion and belief propagation, while returning comparable minima. 1 INTRODUCTION Sum-product networks (SPNs) [Poon & Domingos, 2011; Gens & Domingos, 2012] are a class of deep probabilistic models that consist of many layers of hidden variables and can have unbounded treewidth. Despite this depth and corresponding expressivity, exact inference in SPNs is guaranteed to take time linear in their size, allowing their structure and parameters to be learned effectively from data. However, there are still many models for which the corresponding SPN has size exponential in the number of variables and is thus intractable. For example, in scene understanding (or semantic segmentation), the goal is to label each pixel of an image with its semantic class, which requires simultaneously detecting, segmenting, and recognizing each object in the scene. Even the simplest SPN for scene understanding is intractable, as it must represent the exponentially large set of segmentations of the image into its constituent objects. Scene understanding is commonly formulated as a flat Markov (or conditional) random field (MRF) over the pixels or superpixels of an image (e.g., Shotton et al. [2006]; Gould et al. [2009]). Inference in MRFs is intractable in general; however, there exist restrictions of the MRF that enable tractable inference. For pairwise binary MRFs, if the energy of each pairwise term is submodular (alternatively, attractive or regular) [Kolmogorov & Zabih, 2004], meaning that each pair of neighboring pixels prefers to have the same label, then the exact MAP labeling of the MRF can be recovered in low-order polynomial time through the use of a graph cut algorithm (formally, a min-cut/max-flow algorithm [Ahuja et al., 1993] on a graph constructed from the MRF) [Greig et al., 1989; Boykov & Kolmogorov, 2004]. This result from the binary case has been used to develop a number of powerful approximate algorithms for the multi-label case (e.g., Komodakis et al. [2007]; Lempitsky et al.
[2010]), the most well-known of which is \( \alpha \)-expansion [Boykov et al., 2001], which efficiently returns an approximate labeling that is within a constant factor of the true optimum by solving a series of binary graph cut problems. Unfortunately, pairwise MRFs are insufficiently expressive for complex tasks such as scene understanding, as they are unable to model high-level relationships, such as constituency (part-subpart) or subcategorization (superclass-subclass), between arbitrary regions of the image, unless these can be encoded in the labels of the MRF and enforced between pairs of (super)pixels. However, this encoding requires a combinatorial number of labels, which is intractable. Instead, higher-level structure is needed to efficiently represent these relationships. In this paper, we present submodular sum-product networks (SSPNs), a novel model that combines the expressive power of sum-product networks with the tractable segmentation properties of submodular energies. An SSPN is a sum-product network in which the weight of each child of a sum node corresponds to the energy of a particular labeling of a submodular energy function. Equivalently, an SSPN over an image corresponds to an instantiation of all possible parse trees of that image with respect to a given image grammar, where the probability distribution over the segmentations of a production on a particular region is defined by a submodular random field over the pixels in that region. Importantly, SSPNs permit objects and regions to take arbitrary shapes, instead of restricting the set of possible shapes as has previously been necessary for tractable inference. By exploiting submodularity, we develop a highly-efficient approximate inference algorithm, InferSSPN, for computing the MAP state of the SSPN (equivalently, the optimal parse of the image). InferSSPN is an iterative move-making-style algorithm that provably converges to a local minimum of the energy, reduces to \( \alpha \)-expansion in the case of a trivial grammar, and has complexity \( O(|G| c(n)) \) for each iteration, where \( c(n) \) is the complexity of a single graph cut and \( |G| \) is the size of the grammar. As with other move-making algorithms, InferSSPN converges to a local minimum with respect to an exponentially-large set of neighbors, overcoming many of the main issues of local minima (Boykov et al., 2001). Empirically, we compare InferSSPN to belief propagation (BP) on a multi-level MRF and to \( \alpha \)-expansion on an equivalent flat MRF. We show that InferSSPN parses images in exponentially less time than both of these while returning energies comparable to \( \alpha \)-expansion, which is guaranteed to return energies within a constant factor of the true optimum. The literature on using higher-level information for scene understanding is vast. We briefly discuss the most relevant work on hierarchical random fields over multiple labels, image grammars for segmentation, and neural parsing methods. Hierarchical random field models (e.g., Russell et al. (2010); Lempitsky et al. (2011)) define MRFs with multiple layers of hidden variables and then perform inference, often using graph cuts to efficiently extract the MAP solution. However, these models are typically restricted to just a few layers and to pre-computed segmentations of the image, and thus do not allow arbitrary region shapes. In addition, they require a combinatorial number of labels to encode complex grammar structures.
Previous grammar-based methods for scene understanding, such as Zhu & Mumford (2006) and Zhao & Zhu (2011), have used MRFs with AND-OR graphs (Dechter & Mateescu, 2007), but needed to restrict their grammars to a very limited set of productions and region shapes in order to perform inference in reasonable time, and are thus much less expressive than SSPNs. Finally, neural parsing methods such as those in Socher et al. (2011) and Sharma et al. (2014) use recursive neural network architectures over superpixel-based features to segment an image; thus, these methods also do not allow arbitrary region shapes. Further, Socher et al. (2011) greedily combine regions to form parse trees, while Sharma et al. (2014) use randomly generated parse trees, whereas inference in SSPNs finds the (approximately) optimal parse tree. 2 SUBMODULAR SUM-PRODUCT NETWORKS In the following, we define submodular sum-product networks (SSPNs) in terms of an image grammar because this simplifies the exposition with respect to the structure of the sum-product network (SPN) and because scene understanding is the domain we use to evaluate SSPNs. However, it is not necessary to define SSPNs in this way, and our results extend to any SPN with sum-node weights defined by a random field with submodular potentials. Due to lack of space we refer readers to Gens & Domingos (2012), Poon & Domingos (2011) and Gens & Domingos (2013) for SPN details. With respect to scene understanding, an SSPN defines a generative model of an image and a hierarchy of regions within that image where each region is labeled with a production (and implicitly by the head symbol of that production), can have arbitrary shape, and is a subset of the region of each of its ancestors. An example of an SSPN for parsing a farm scene is shown in Figure 1.

Figure 1: A partial (submodular) sum-product network for parsing an image with respect to the grammar shown. There is a sum node for each nonterminal symbol with a child sum node for each production of that symbol. Each sum node for a production has a child product node for each possible segmentation of its region.

Given a starting symbol and the region containing the entire image, the generative process is to first choose a production of that symbol into its constituent symbols and then choose a segmentation of the region into a set of mutually exclusive and exhaustive subregions, with one subregion per constituent symbol. The process then recurses, choosing a production and a segmentation for each subregion given its symbol. The recursion terminates when one of the constituents is a terminal symbol, at which point the pixels corresponding to that region of the image are generated. This produces a parse tree in which each internal node is a pair containing a region and a production of the region, and the leaves are regions of pixels. For each node in a parse tree, the regions of its children are mutually exclusive and exhaustive with respect to the parent node’s region. As in a probabilistic context-free grammar (PCFG) [Jurafsky & Martin 2000], productions are chosen from a categorical distribution over the productions of the current symbol. Segmentations of a given region, however, are sampled from a (submodular) Markov random field (MRF) over the pixels in the region.
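The generative process can be sketched as a short recursion; the grammar interface and the segment routine (which would sample a segmentation from the MRF defined below) are assumed placeholders rather than an actual implementation.

```python
import random

def sample_parse(symbol, region, grammar, segment):
    # Leaf: a terminal symbol generates the pixels of its region.
    if symbol in grammar.terminals:
        return (symbol, region)
    # Choose a production of `symbol` from a categorical distribution, as in a PCFG.
    productions, probs = zip(*grammar.rules[symbol])
    v = random.choices(productions, weights=probs)[0]
    # Sample mutually exclusive and exhaustive subregions, one per
    # constituent, from the segmentation MRF (stand-in routine).
    subregions = segment(v, region)
    children = [sample_parse(y, r, grammar, segment)
                for y, r in zip(v.constituents, subregions)]
    return (v, region, children)
```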
Formally, let \( G = (N, \Sigma, R, S, w) \) be a non-recursive stochastic grammar, where \( N \) is a finite set of nonterminal symbols; \( \Sigma \) is a finite set of terminal symbols; \( R \) is a finite set of productions \( R = \{ v : X \rightarrow Y_1 Y_2 \ldots Y_k \} \) with head symbol \( X \in N \) and constituent symbols \( Y_i \in N \cup \Sigma \) for \( i = 1 \ldots k \) and \( k > 0 \); \( S \in N \) is a distinguished start symbol, meaning that it does not appear on the right-hand side of any production; and \( w \) are the weights that parameterize the probability distribution defined by \( G \). For a production \( v \in t \) in a parse tree \( t \in \mathcal{T}_G \), we denote its region as \( \mathcal{P}_v \) and its parent and children as \( \mathrm{pa}(v) \) and \( \mathrm{ch}(v) \), respectively, where \( \mathcal{T}_G \) is the set of possible parse trees under the grammar \( G \). The labeling corresponding to the segmentation of the pixels in \( \mathcal{P}_v \) for production \( v : X \rightarrow Y_1 \ldots Y_k \) is \( y^v \in \mathcal{Y}_v^{|\mathcal{P}_v|} \), where \( \mathcal{Y}_v = \{ Y_1, \ldots, Y_k \} \). The region of any production \( v \in t \) is the set of pixels in \( \mathcal{P}_{\mathrm{pa}(v)} \) whose assigned label is the head of \( v \), i.e., \( \mathcal{P}_v = \{ p \in \mathcal{P}_{\mathrm{pa}(v)} : y_p^{\mathrm{pa}(v)} = \mathrm{head}(v) \} \), except for the production of the start symbol, which has the entire image as its region. The probability of an image \( x \) is \( p_w(x) = \sum_{t \in \mathcal{T}_G} p_w(t, x) \), where the joint probability of parse tree \( t \) and the image is the product over all productions in \( t \) of the probability of choosing that production \( v \) and then segmenting its region \( \mathcal{P}_v \) according to \( y^v \): \[ p_w(t, x) = \frac{1}{Z} \exp(-E_w(t, x)) = \frac{1}{Z} \exp(-\sum_{v \in t} E_w^v(v, y^v, \mathrm{head}(v), \mathcal{P}_v, x)). \] Here, \( Z = \sum_{t \in \mathcal{T}_G} \exp(-E_w(t, x)) \) is the partition function, \( w \) are the model parameters, and \( E \) is the energy function. In the following, we will simplify notation by omitting \( \mathrm{head}(v) \), \( \mathcal{P}_v \), \( x \), \( w \), and superscript \( v \) from the energy function when they are clear from context. The energy of a production and its segmentation on the region \( \mathcal{P}_v \) are given by a pairwise Markov random field (MRF) as \( E(v, y^v) = \sum_{p \in \mathcal{P}_v} \theta_p^v(y_p^v; w) + \sum_{(p, q) \in \mathcal{E}_v} \theta_{pq}^v(y_p^v, y_q^v; w) \), where \( \theta_p^v \) and \( \theta_{pq}^v \) are the unary and pairwise costs of the segmentation MRF, \( \{ y_p^v : p \in \mathcal{P}_v \} \) is the labeling defining the segmentation of the pixels in the current region, and \( \mathcal{E}_v \) are the edges in \( \mathcal{P}_v \). Without loss of generality we assume that \( \mathcal{E}_v \) contains only one of \( (p, q) \) or \( (q, p) \), since the two terms can always be combined. Here, \( \theta_p^v \) is the per-pixel data cost and \( \theta_{pq}^v \) is the boundary term, which penalizes adjacent pixels within the same region that have different labels. We describe these terms in more detail below. In general, even computing the segmentation for a single production is intractable. 
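Although minimizing this energy is intractable in general, evaluating it for a given labeling is straightforward. A minimal sketch, with the potentials \( \theta_p^v \) and \( \theta_{pq}^v \) passed in as stand-in callables:

```python
def segmentation_energy(labels, region, edges, theta_unary, theta_pair):
    # E(v, y) = sum_p theta^v_p(y_p) + sum_{(p,q) in E_v} theta^v_pq(y_p, y_q),
    # with the potentials supplied as stand-in callables.
    unary = sum(theta_unary(p, labels[p]) for p in region)
    pairwise = sum(theta_pair(p, q, labels[p], labels[q]) for (p, q) in edges)
    return unary + pairwise
```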
In order to permit efficient inference, we require that \( \theta_{pq}^v \) satisfies the submodularity condition \( \theta_{pq}^v(Y_1, Y_1) + \theta_{pq}^v(Y_2, Y_2) \leq \theta_{pq}^v(Y_1, Y_2) + \theta_{pq}^v(Y_2, Y_1) \) for all productions \( v : X \rightarrow Y_1 Y_2 \), once the grammar has been converted to a form in which each production has only two constituents (e.g., Chomsky normal form), which is always possible and in the worst case increases the grammar size quadratically [Jurafsky & Martin 2000]. We also require for every production \( v \in R \) and for every production \( c \) that is a descendant of \( v \) in the grammar that \( \theta^v_{pq}(y^v_p, y^v_q) \geq \theta^c_{pq}(y^c_p, y^c_q) \) for all possible labelings \( (y^v_p, y^v_q, y^c_p, y^c_q) \), where \( y^v_p, y^v_q \in \mathcal{Y}_v \) and \( y^c_p, y^c_q \in \mathcal{Y}_c \). This condition ensures that segmentations for higher-level productions are submodular, no matter what occurs below them. It also encodes the reasonable assumption that higher-level abstractions are separated by stronger, shorter boundaries (relative to their size), while lower-level objects are more likely to be composed of smaller, more intricately-shaped regions. The above model defines a sum-product network containing a sum node for each possible region of each nonterminal, a product node for each segmentation of each production of each possible region of each nonterminal, and a leaf function on the pixels of the image for each possible region of the image for each terminal symbol. The children of the sum node \( s \) for nonterminal \( X_s \) with region \( \mathcal{P}_s \) are all product nodes \( r \) with a production \( v_r : X_s \to Y_1 \ldots Y_k \) and region \( \mathcal{P}_{v_r} = \mathcal{P}_s \). Each product node corresponds to a labeling \( y^{v_r} \) of \( \mathcal{P}_{v_r} \) and the edge to its parent sum node has weight \( \exp(-E(v_r, y^{v_r}, \mathcal{P}_{v_r})) \). The children of product node \( r \) are the sum or leaf nodes with matching regions that correspond to the constituent nonterminals or terminals of \( v_r \), respectively. Since the weights of the edges from a sum node to its children correspond to submodular energy functions, we call this a submodular sum-product network (SSPN). A key benefit of SSPNs in comparison to previous grammar-based approaches is that regions can have arbitrary shapes and are not restricted to a small class of shapes such as rectangles [Poon & Domingos 2011; Zhao & Zhu 2011]. This flexibility is important when parsing images, as real-world objects and abstractions can take any shape, but it comes with a combinatorial explosion of possible parses. However, by exploiting submodularity, we are able to develop an efficient inference algorithm for SSPNs, allowing us to efficiently parse images into a hierarchy of arbitrarily-shaped regions and objects, yielding a very expressive model class. This efficiency is despite the size of the underlying SSPN, which is in general far too large to explicitly instantiate. 2.1 MRF SEGMENTATION DETAILS As discussed above, the energy of each segmentation of a region for a given production is defined by a submodular MRF \( E(v, y^v) = \sum_{p \in \mathcal{P}_v} \theta^v_p(y^v_p; \mathbf{w}) + \sum_{(p,q) \in \mathcal{E}_v} \theta^v_{pq}(y^v_p, y^v_q; \mathbf{w}) \). The unary terms in \( E(v, y^v) \) differ depending on whether the label \( y^v_p \) corresponds to a terminal or nonterminal symbol.
For a terminal \( T \in \Sigma \), the unary terms are a linear function of the image features \( \theta^v_p(y^v_p = T; \mathbf{w}) = w^v_{PC} + \mathbf{w}_T^\top \phi^U_p \), where \( w^v_{PC} \) is an element of \( \mathbf{w} \) that specifies the cost of \( v \) relative to other productions and \( \phi^U_p \) is a feature vector representing the local appearance of pixel \( p \). In our experiments, \( \phi^U_p \) is the output of a deep neural network. For labels corresponding to a nonterminal \( X \in N \), the unary terms are \( \theta^v_p(y^v_p = X; \mathbf{w}) = w^v_{PC} + \theta^c_p(y^c_p) \), where \( c \) is the child production of \( v \) in the current parse tree that contains \( p \), such that \( p \in \mathcal{P}_c \). This dependence makes inference challenging, because the choice of children in the parse tree itself depends on the region that is being parsed as \( X \), which depends on the segmentation this unary is being used to compute. The pairwise terms in \( E(v, y^v) \) are a recursive version of the standard contrast-dependent pairwise boundary potential (e.g., Shotton et al. (2006)), defined for each production \( v \) and each pair of adjacent pixels \( p, q \) as \( \theta^v_{pq}(y^v_p, y^v_q; \mathbf{w}) = w^v_{BF} \exp(-\beta^{-1}\|\phi^B_p - \phi^B_q\|^2) \cdot [y^v_p \neq y^v_q] + \theta^c_{pq}(y^c_p, y^c_q; \mathbf{w}) \), where \( \beta \) is half the average image contrast between all adjacent pixels in an image, \( w^v_{BF} \) is the boundary factor that controls the relative cost of this term for each production, \( \phi^B_p \) is the pairwise per-pixel feature vector, \( c \) is the same as in the unary term above, and \( [\cdot] \) is the indicator function, which has value 1 when its argument is true and is 0 otherwise. For each pair of pixels \( (p, q) \), only one such term will ever be non-zero, because once two pixels are labeled differently at a node in the parse tree, they are placed in separate subtrees and thus never co-occur in any region below the current node. In our experiments, \( \phi^B_p \) are the intensity values for each pixel.
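The per-production boundary term (omitting the recursive child term) can be written down directly; the parameter names below are illustrative:

```python
import numpy as np

def boundary_potential(phi_p, phi_q, y_p, y_q, w_bf, beta):
    # w^v_BF * exp(-beta^{-1} * ||phi_p - phi_q||^2) * [y_p != y_q];
    # the recursive theta^c_pq child term is omitted for clarity.
    if y_p == y_q:
        return 0.0
    return w_bf * np.exp(-np.sum((phi_p - phi_q) ** 2) / beta)
```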
3 INFERENCE Scene understanding (or semantic segmentation) requires labeling each pixel of an image with its semantic class. By constructing a grammar containing a set of nonterminals in one-to-one correspondence with the semantic labels and only allowing these symbols to produce terminals, we can recover the semantic segmentation of an image from a parse tree for this grammar. In the simplest case, a grammar need contain only one additional production from the start symbol to all other nonterminals. More generally, however, the grammar encodes rich structure about the relationships between image regions at various levels of abstraction, including concepts such as composition and subcategorization.

Figure 2: The two main components of InferSSPN: (a) parsing a region \( \mathcal{P} \) as \( X \rightarrow YZ \) by fusing two parses of \( \mathcal{P} \) as \( Y \rightarrow AB \) and as \( Z \rightarrow CD \), and (b) improving the parse of \( \mathcal{P} \) as \( X \rightarrow YZ \) by (re)parsing each of its subregions, taking the union of the new \( Y \) and \( Z \) parses of \( \mathcal{P} \), and then fusing these new parses.

Identifying the relevant structure and relationships for a particular image entails finding the best parse of an image \( x \) given a grammar \( G \) (or, equivalently, performing MAP inference in the corresponding SSPN), i.e., \( t^* = \arg\max_{t \in \mathcal{T}_G} p(t|x) = \arg\min_{t \in \mathcal{T}_G} \sum_{v \in t} E(v, y^v, x) \). In PCFGs over sentences [Jurafsky & Martin 2000], the optimal parse can be recovered exactly in time \( O(n^3|G|) \) with the CYK algorithm [Hopcroft & Ullman 1979], where \( n \) is the length of the sentence and \( |G| \) is the number of productions in the grammar, by iterating over all possible split points of the sentence and using dynamic programming to avoid recomputing sub-parses. Unfortunately, for images and other 2-D data types, there are \( 2^n \) possible segmentations of the data for each binary production, rendering this approach infeasible in general. With an SSPN, however, it is possible to efficiently compute the approximate optimal parse of an image. In our algorithm, InferSSPN, this is done by iteratively constructing parses of different regions in a bottom-up fashion. 3.1 PARSE TREE CONSTRUCTION Given a production \( v : X \rightarrow Y_1Y_2 \) and two parse trees \( t_1, t_2 \) over the same region \( \mathcal{P} \) and with head symbols \( Y_1, Y_2 \), respectively, for any labeling \( y^v \in \{Y_1, Y_2\}^{|\mathcal{P}|} \) of \( \mathcal{P} \) we can construct a third parse tree \( t_X \) over region \( \mathcal{P} \) with root production \( v \), labeling \( y^v \), and subtrees \( t'_1, t'_2 \) over regions \( \mathcal{P}_1, \mathcal{P}_2 \), respectively, such that \( \mathcal{P}_i = \{p \in \mathcal{P} : y^v_p = Y_i\} \) and \( t'_i = t_i \cap \mathcal{P}_i \) for each \( i \), where the intersection of a parse tree and a region \( t \cap \mathcal{P} \) is the new parse tree resulting from intersecting \( \mathcal{P} \) with the region at each node in \( t \). Of course, the quality of the resulting parse tree, \( t_X \), depends on the particular labeling (segmentation) \( y^v \) used. Recall that a parse tree \( t \) on region \( \mathcal{P} \) has energy \( E(t, \mathcal{P}) = \sum_{v \in t} E(v, y^v, \mathcal{P}_v) \), which can be written as \( E(t, \mathcal{P}) = \sum_{p \in \mathcal{P}} \theta_p + \sum_{(p, q) \in \mathcal{E}} \theta_{pq} \), where \( \theta_p = \sum_{v \in t} \theta_p^v(y_p^v) \cdot [p \in \mathcal{P}_v] \) and \( \theta_{pq} = \sum_{v \in t} \theta_{pq}^v(y_p^v, y_q^v) \cdot [(p, q) \in \mathcal{E}_v] \). This allows us to define the fusion operation, which is a key subroutine in InferSSPN. Note that \( \delta_{ij} \) is the Kronecker delta. Definition 1. For a production \( v : X \rightarrow Y_1 Y_2 \) and two parse trees \( t_1, t_2 \) over region \( \mathcal{P} \) with head symbols \( Y_1, Y_2 \), respectively, \( t_X \) is the fusion of \( t_1 \) and \( t_2 \) constructed from the minimum energy labeling \( y^v = \arg\min_{y \in \{Y_1, Y_2\}^{|\mathcal{P}|}} E(v, t_1, t_2, y) \), where \[ E(v, t_1, t_2, y) = \sum_{p \in \mathcal{P}} \theta_p^1 \cdot \delta_{y_p Y_1} + \theta_p^2 \cdot \delta_{y_p Y_2} + \sum_{(p, q) \in \mathcal{E}} \theta_{pq}^1 \cdot \delta_{y_p Y_1} \cdot \delta_{y_q Y_1} + \theta_{pq}^2 \cdot \delta_{y_p Y_2} \cdot \delta_{y_q Y_2} + \theta_{pq}^v(Y_1, Y_2) \cdot \delta_{y_p Y_1} \cdot \delta_{y_q Y_2}. \] Figure 2a shows an example of fusing two parse trees to create a new parse tree.
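A sketch of the fusion energy as a function of a binary labeling follows; the per-tree cost containers and the (assumed symmetric) cross-boundary cost are stand-ins. Minimizing this energy with a graph cut yields the fused parse:

```python
def fusion_energy(y, region, edges, t1, t2, theta_v_cross):
    # y[p] in {0, 1}: 0 assigns pixel p to t1 (label Y1), 1 to t2 (label Y2).
    # t1.unary / t1.pairwise hold the accumulated costs of each parse tree
    # (stand-in containers); theta_v_cross is theta^v_pq(Y1, Y2), assumed
    # symmetric so both orientations of a boundary edge pay the same cost.
    E = 0.0
    for p in region:
        E += t1.unary[p] if y[p] == 0 else t2.unary[p]
    for (p, q) in edges:
        if y[p] == 0 and y[q] == 0:
            E += t1.pairwise[(p, q)]
        elif y[p] == 1 and y[q] == 1:
            E += t2.pairwise[(p, q)]
        else:
            E += theta_v_cross
    return E
```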
Although fusion requires finding the optimal labeling from an exponentially large set, the energy is submodular and can be efficiently optimized with a single graph cut. All proofs are presented in the appendix.

Proposition 1. The energy \( E(v, t_1, t_2, y) \) of the fusion of parse trees \( t_1, t_2 \) over region \( \mathcal{P} \) with head symbols \( Y_1, Y_2 \) for a production \( v : X \rightarrow Y_1Y_2 \) is submodular.

Once a parse tree has been constructed, InferSSPN improves it on subsequent iterations. The following result shows how InferSSPN can improve a parse tree while ensuring that its energy never gets worse.

Lemma 1. Given a labeling \( y^v \) which fuses parse trees \( t_1, t_2 \) into \( t \) with root production \( v \), energy \( E(t, \mathcal{P}) = E(v, t_1, t_2, y^v) \), and subtree regions \( \mathcal{P}_1 \cap \mathcal{P}_2 = \emptyset \) defined by \( y^v \), any improvement \( \Delta \) in \( E(t_1, \mathcal{P}_1) \) also improves \( E(t, \mathcal{P}) \) by at least \( \Delta \), regardless of any change in \( E(t_1, \mathcal{P} \setminus \mathcal{P}_1) \).

Finally, it will be useful to define the union \( t = t_1 \cup t_2 \) of two parse trees \( t_1, t_2 \) that have the same production at their root but are over disjoint regions \( \mathcal{P}_1 \cap \mathcal{P}_2 = \emptyset \): \( t \) is the parse tree with region \( \mathcal{P} = \mathcal{P}_1 \cup \mathcal{P}_2 \) in which all nodes that co-occur in both \( t_1 \) and \( t_2 \) (i.e., have the same path to them from the root and the same production) are merged to form a single node in \( t \). In general, \( t \) may be an inconsistent parse tree, as the same symbol may be parsed as two separate productions, in which case we define the energy of the boundary terms between the pixels parsed as these separate productions to be infinite.

3.2 InferSSPN

Pseudocode for our algorithm, InferSSPN, is presented in Algorithm 1. InferSSPN is an iterative bottom-up algorithm based on graph cuts (Kolmogorov & Zabih, 2004) that provably converges to a local minimum of the energy function. In its first iteration, InferSSPN constructs a parse tree over the full image for each production in the grammar. The parse of each terminal production is trivial to construct and simply labels each pixel as the terminal symbol. The parse for every other production \( v : X \rightarrow Y_1 Y_2 \) is constructed by choosing productions for \( Y_1 \) and \( Y_2 \) and fusing their corresponding parse trees to get a parse of the image as \( X \). Since the grammar is non-recursive, we can construct a directed acyclic graph (DAG) containing a node for each symbol and an edge from each symbol to each constituent of each production of that symbol, and then traverse this graph from the leaves (terminals) to the root (start symbol), fusing the children of each production of each symbol when we visit that symbol's node. Of course, to fuse parses of \( Y_1 \) and \( Y_2 \) into a parse of \( X \), we need to choose which production of \( Y_1 \) (and \( Y_2 \)) to fuse; this is done by simply choosing the production of \( Y_1 \) (and \( Y_2 \)) that has the lowest energy over the current region. The best parse of the image, \( \hat{t} \), then corresponds to the lowest-energy parse over all productions of the start symbol. Further iterations of InferSSPN improve \( \hat{t} \) in a flexible manner that allows any of its productions or labelings to change, while also ensuring that its energy never increases.
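The fusion at the heart of this bottom-up pass reduces to a single s-t min-cut because its energy is submodular (Proposition 1). Below is a minimal sketch using the standard construction of Kolmogorov & Zabih (2004); networkx's `minimum_cut` is used purely for readability, and a dedicated max-flow solver (e.g., Boykov-Kolmogorov) would be used in practice. The label encoding and all names are our choices.

```python
import networkx as nx

def fuse_by_graph_cut(pixels, edges, unary, pairwise):
    """Minimize the submodular fusion energy with one s-t min-cut.
    Returns y with y[p] = 0 (pixel p keeps t1 / Y1) or 1 (t2 / Y2)."""
    c0 = {p: unary[p][0] for p in pixels}  # running cost of label 0
    c1 = {p: unary[p][1] for p in pixels}  # running cost of label 1
    G = nx.DiGraph()
    for (p, q), th in pairwise.items():
        A, B, C, D = th[(0, 0)], th[(0, 1)], th[(1, 0)], th[(1, 1)]
        # Decompose: E(yp, yq) = A + (C-A)yp + (D-C)yq + (B+C-A-D)(1-yp)yq,
        # where B + C - A - D >= 0 by submodularity (Proposition 1).
        c1[p] += C - A
        c1[q] += D - C
        G.add_edge(p, q, capacity=B + C - A - D)
    for p in pixels:
        d = c1[p] - c0[p]
        if d >= 0:
            G.add_edge('s', p, capacity=d)   # cut iff y_p = 1
        else:
            G.add_edge(p, 't', capacity=-d)  # cut iff y_p = 0
    G.add_edge('s', 't', capacity=0.0)       # ensure both terminals exist
    _, (source_side, _) = nx.minimum_cut(G, 's', 't')
    return {p: 0 if p in source_side else 1 for p in pixels}
```

A two-pixel toy run, with values chosen so that the region prefers to split and \( 2\theta^v_{pq}(Y_1, Y_2) \) dominates the child pairwise terms, as the ordering condition of Section 2 requires:

```python
pixels, edges = [0, 1], [(0, 1)]
unary, pairwise = fusion_energy_terms(
    pixels, edges,
    theta1_p={0: 0.0, 1: 3.0},  # pixel 0 is cheap in t1, pixel 1 expensive
    theta2_p={0: 3.0, 1: 0.0},  # and vice versa in t2
    theta1_pq={(0, 1): 0.5}, theta2_pq={(0, 1): 0.5},
    theta_v_pq={(0, 1): 1.5})
print(fuse_by_graph_cut(pixels, edges, unary, pairwise))
# -> {0: 0, 1: 1}: the fusion pays the 1.5 boundary cost and splits the region
```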
To improve \( \hat{t} \), InferSSPN again computes parses of the full image for each production in the grammar. This time, however, when parsing a symbol \( X \), InferSSPN independently parses each region of the image that was parsed as any production of \( X \) in \( \hat{t} \) (none of these regions overlap because the grammar is non-recursive) and then parses the remainder of the image given these parses of subregions, meaning that the pixels in those subregions are instantiated in the MRF but fixed to the labels that the subregion parses specify. The parse of the image as \( X \) is then constructed as the union of these subregion parses. This procedure ensures that the energy will never increase (see Theorem 1 and Lemma 1), but also that any subtree of \( \hat{t} \) can be replaced with another subtree if doing so results in lower energy. Figure 2b shows a simple example of updating a parse of a region as \( X \rightarrow YZ \). Further, this (re)parsing of subregions can again be achieved in a single bottom-up pass through the grammar DAG, resulting in a very efficient algorithm for SSPN inference: each pixel appears in at most one subregion for any symbol, and thus only ever needs to be parsed once per production. See Algorithm 1 for more details.

3.3 ANALYSIS

As shown in Theorem 1, InferSSPN always converges to a local minimum of the energy function. Similar to other graph-cut-based algorithms, such as \( \alpha \)-expansion (Boykov et al., 2001), InferSSPN explores an exponentially large set of moves at each step, so the returned local minimum is much better than those returned by more local procedures, such as max-product belief propagation. Further, we observe convergence within a few iterations in all experiments, with the majority of the energy improvement occurring in the first iteration.

Theorem 1. Given a parse (tree) \( \hat{t} \) of \( S \) over the entire image with energy \( E(\hat{t}) \), each iteration of InferSSPN constructs a parse (tree) \( t \) of \( S \) over the entire image with energy \( E(t) \leq E(\hat{t}) \); since the minimum energy of an image parse is finite, InferSSPN therefore always converges.

As shown in Proposition 2, each iteration of InferSSPN takes time \( O(|G|c(n)) \), where \( n \) is the number of pixels in the image and \( c(n) \) is the complexity of the underlying graph cut algorithm used, which is low-order polynomial in the worst case but nearly linear-time in practice (Boykov & Kolmogorov, 2004; Boykov et al., 2001).

Proposition 2. Let \( c(n) \) be the time complexity of computing a graph cut on \( n \) pixels and \( |G| \) be the size of the grammar defining the SSPN; then each iteration of InferSSPN takes time \( O(|G|c(n)) \).

Algorithm 1 Compute the (approximate) MAP assignment of the SSPN variables (i.e., the productions and labelings) defined by an image and a grammar. This is equivalent to parsing the image.

Input: The image \( \mathbf{x} \), a non-recursive grammar \( G = (N, \Sigma, R, S, w) \), and an (optional) input parse \( \hat{t} \).
Output: A parse of the image, \( t^* \), with energy \( E(t^*, \mathbf{x}) \leq E(\hat{t}, \mathbf{x}) \).
1: function INFERSSPN(\( \mathbf{x}, G, \hat{t} \))
2:   \( T, E \leftarrow \) empty lists of parse trees and energies, respectively, both of length \( |R| + |\Sigma| \)
3:   for each terminal \( Y \in \Sigma \) do
4:     \( T[Y] \leftarrow \) the trivial parse with all pixels parsed as \( Y \)
5:     \( E[Y] \leftarrow \sum_{p \in \mathbf{x}} \mathbf{w}_Y^\top \phi_p^{U} \)
6:   while the energy of any production of the start symbol \( S \) has not converged do
7:     for each symbol \( X \in N \), in reverse topological order do  // as defined by the DAG of \( G \)
8:       for each subtree \( \hat{t}_i \) of \( \hat{t} \) rooted at a production \( u_i \) with head \( X \) do
9:         \( \mathcal{P}_i, y_i \leftarrow \) the region that \( \hat{t}_i \) is over and its labeling in \( \hat{t}_i \)  // \( \{ \mathcal{P}_i \} \) are all disjoint
10:        for each production \( v_j : X \rightarrow Y_1 Y_2 \) do  // iterate over all productions of \( X \)
11:          \( t_{ij}, e_{ij} \leftarrow \mathrm{FUSE}(\mathcal{P}_i, y_i, v_j, T) \)  // parse \( \mathcal{P}_i \) as \( v_j \) by fusing parses of \( Y_1 \) and \( Y_2 \)
12:      \( \mathcal{P}_{\overline{X}} \leftarrow \) all pixels that are not in any region \( \mathcal{P}_i \)
13:      for each production \( v_j : X \rightarrow Y_1 Y_2 \) do  // iterate over all productions of \( X \)
14:        \( y_{\text{rand}} \leftarrow \) a random labeling of \( \mathcal{P}_{\overline{X}} \)  // use random for initialization
15:        \( \hat{t}_{\overline{X}}, e_{\overline{X}} \leftarrow \mathrm{FUSE}(\mathcal{P}_{\overline{X}}, y_{\text{rand}}, v_j, T, (\cup_i t_{ij})) \)  // parse \( \mathcal{P}_{\overline{X}} \) as \( v_j \) given \( (\cup_i t_{ij}) \)
16:        update lists: \( T[v_j] \leftarrow (\cup_i t_{ij}) \cup \hat{t}_{\overline{X}} \) and \( E[v_j] \leftarrow \sum_i e_{ij} + e_{\overline{X}} \) for all \( v_j \) with head \( X \)
17:    \( \hat{t}, \hat{e} \leftarrow \) the production of \( S \) with the lowest energy in \( E \) and its energy
18:  return \( \hat{t}, \hat{e} \)

Input: A region \( \mathcal{P} \), a labeling \( y \) of \( \mathcal{P} \), a production \( v : X \rightarrow Y_1 Y_2 \), a list of parses \( T \), and an optional parse \( t_{\overline{\mathcal{P}}} \) of the pixels not in \( \mathcal{P} \), used to set the pairwise terms of edges leaving \( \mathcal{P} \).
Output: A parse tree rooted at \( v \) over region \( \mathcal{P} \) and the energy of that parse tree.

1: function FUSE(\( \mathcal{P}, y, v, T, t_{\overline{\mathcal{P}}} \))
2:   for each \( Y_i \) with \( i \in \{1, 2\} \) do
3:     \( u_i \leftarrow \) the production of \( Y_i \) in \( T \) with lowest energy over \( \{ p : y_p = Y_i \} \) given \( t_{\overline{\mathcal{P}}} \)
4:   create the submodular energy function \( E(v, y, \mathcal{P}, \mathbf{x}) \) on \( \mathcal{P} \) from \( T[u_1], T[u_2] \), and \( t_{\overline{\mathcal{P}}} \)
5:   \( y^v, e^v \leftarrow \arg\min_y E(v, y, \mathcal{P}, \mathbf{x}) \) and its energy  // label each pixel in \( \mathcal{P} \) as \( Y_1 \) or \( Y_2 \) using a graph cut
6:   \( t^v \leftarrow \) combine \( T[u_1] \) and \( T[u_2] \) according to \( y^v \) and append \( v \) as the root
7:   return \( t^v, e^v \)

Note that a straightforward application of \( \alpha \)-expansion to image parsing that uses one label for every possible parse in the grammar requires an exponential number of labels in general. InferSSPN can be extended to productions with more than two constituents by simply replacing the internal graph cut used to fuse subtrees with a multi-label algorithm such as \( \alpha \)-expansion.
InferSSPN would still converge because the energy of each subtree would still never increase. An algorithm such as QPBO (Kolmogorov & Rother, 2007) could also be used, which would allow the submodularity restriction to be relaxed. Finally, running InferSSPN on the grammar containing \( k - 1 \) binary productions that results from converting a grammar with a single production on \( k > 2 \) constituents is equivalent to running \( \alpha \)-expansion on the \( k \) constituents.

4 EXPERIMENTS

We evaluated InferSSPN by parsing images from the Stanford background dataset (SBD) using grammars with generated structure and weights inferred from the pixel labels of the images we parsed. SBD is a standard semantic segmentation dataset containing images with an average size of \( 320 \times 240 \) pixels and a total of 8 labels. The input features we used were from the DeepLab system (Chen et al., 2015; 2016) trained on the same images used for evaluation (note that we are not evaluating learning, and thus use the same features for each algorithm and evaluate on the training data in order to separate inference performance from generalization performance). We compared InferSSPN to \( \alpha \)-expansion on a flat pairwise MRF and to max-product belief propagation (BP) on a multi-level (3-D) pairwise grid MRF. Details of these models are provided in the appendix. We note that the flat encoding for \( \alpha \)-expansion results in a label for each path in the grammar, and there are exponentially many such paths in the height of the grammar. However, once \( \alpha \)-expansion converges, its energy is within a constant factor of the global minimum energy (Boykov et al., 2001), and thus it serves as a good surrogate for the true global minimum, which is intractable to compute.

We compared these algorithms by varying three different parameters: boundary strength (the strength of the pairwise terms), grammar height, and the number of productions per nonterminal. Each grammar used for testing contained a start symbol, multiple layers of nonterminals, and a final layer of nonterminals in one-to-one correspondence with the eight terminal symbols, each of which had a single production that produces a region of pixels. The start symbol had one production for each pair of symbols in the layer below it, and the last nonterminal layer (ignoring the nonterminals for the labels) had productions for each pair of labels, distributed uniformly over this last nonterminal layer.

**Boundary strength.** Increasing the boundary strength of an MRF makes inference more challenging, as individual pixel labels cannot be easily flipped without large side effects. To test this, we constructed a grammar as above with 2 layers of nonterminals (not including the start symbol), each containing 3 nonterminal symbols with 4 binary productions to the next layer. We vary \( w^v_{BF} \) for all \( v \) and plot the mean average pixel accuracy returned by each algorithm (the x-axis is log-scale) in Figure 3a. InferSSPN returns parses with almost identical accuracy (and energy) to \( \alpha \)-expansion. BP also returns comparable accuracies, but almost always returns invalid parses with infinite energy (if it converges at all) that contain multiple productions of the same object, or a production of some symbol \( Y \) even though a pixel is labeled as symbol \( X \).
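For concreteness, the sketch below generates a layered test grammar of this general shape and lists the bottom-up (reverse topological) symbol order that InferSSPN's first pass follows; exactly how productions are distributed across a layer is our assumption, not the authors' generator.

```python
import itertools
from graphlib import TopologicalSorter  # Python 3.9+ standard library

def make_test_grammar(n_layers, n_symbols, n_prods, labels):
    """Generate a layered grammar in the spirit of Section 4: a start symbol,
    n_layers layers of n_symbols nonterminals with n_prods binary productions
    each, and one label-nonterminal per terminal label. Duplicate right-hand
    sides may occur; in the paper, weights would differentiate them."""
    layers = [[f'N{i}_{j}' for j in range(n_symbols)] for i in range(n_layers)]
    label_nts = [f'L_{lab}' for lab in labels]
    productions = [('S', pair) for pair in itertools.combinations(layers[0], 2)]
    for upper, lower in zip(layers, layers[1:] + [label_nts]):
        pairs = itertools.cycle(itertools.combinations(lower, 2))
        for head in upper:
            for _ in range(n_prods):
                productions.append((head, next(pairs)))
    productions += [(nt, (lab,)) for nt, lab in zip(label_nts, labels)]
    return productions, set(labels)

productions, terminals = make_test_grammar(2, 3, 4, ['sky', 'tree', 'road'])

# Bottom-up order: each head depends on its constituents, so a topological
# sort over these dependencies yields terminals first and 'S' last.
deps = {}
for head, rhs in productions:
    deps.setdefault(head, set()).update(rhs)
print([s for s in TopologicalSorter(deps).static_order() if s not in terminals])
```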
Figure 3: The mean average pixel accuracy of the returned solution and the total running time for each of belief propagation, \( \alpha \)-expansion, and InferSSPN when varying (a) boundary strength, (b) grammar height, and (c) number of productions. Each data point is the average value over (the same) 10 images. Missing data points indicate out-of-memory errors. Figures 4, 5, and 6 in the appendix show all results for each experiment.

**Grammar height.** In general, the number of paths in the grammar is exponential in its height, so the height of the grammar controls the complexity of inference and thus the difficulty of parsing images. For this experiment, we set the boundary scale factor to 10 and constructed a grammar with four nonterminals per layer, each with three binary productions to the next layer. Figure 3b shows the effect of grammar height on total inference time (to convergence or a maximum number of iterations, whichever occurred first). As expected from Proposition 2, the time taken by InferSSPN scales linearly with the height of the grammar, which is within a constant factor of the size of the grammar when all other parameters are fixed. In contrast, inference time for both \( \alpha \)-expansion and BP scaled exponentially with the height of the grammar, because the number of labels for both increases combinatorially. Again, the energies and corresponding accuracies achieved by InferSSPN were nearly identical to those of \( \alpha \)-expansion (see Figure 5 in the appendix).

**Productions per nonterminal.** The number of paths in the grammar is also directly affected by the number of productions per symbol. For this experiment, we increased each pairwise term by a factor of 10 and constructed a grammar with 2 layers of nonterminals, each with 4 nonterminal symbols. Figure 3c shows the effect of increasing the number of productions per nonterminal, which again demonstrates that InferSSPN is far more efficient than either \( \alpha \)-expansion or BP as the complexity of the grammar increases, while still finding comparable solutions (see Figure 6 in the appendix).

5 CONCLUSION

This paper proposed submodular sum-product networks (SSPNs), a novel extension of sum-product networks that can be understood as an instantiation of an image grammar in which all possible parses of an image over arbitrary shapes are represented. Despite this complexity, we presented InferSSPN, a move-making algorithm that exploits submodularity in order to find the (approximate) MAP state of an SSPN, which is equivalent to finding the (approximate) optimal parse of an image. Analytically, we showed that InferSSPN is both very efficient (each iteration takes time linear in the size of the grammar times the complexity of one graph cut) and convergent. Empirically, we showed that InferSSPN achieves accuracies and energies comparable to those of \( \alpha \)-expansion, which is guaranteed to return optima within a constant factor of the global optimum, while taking exponentially less time to do so. We have begun work on learning the structure and parameters of SSPNs from data. This is a particularly promising avenue of research because many recent works have demonstrated that learning both the structure and parameters of sum-product networks from data is feasible and effective, despite the well-known difficulty of grammar induction.
We also plan to apply SSPNs to additional domains, such as activity recognition, social network modeling, and probabilistic knowledge bases.

ACKNOWLEDGMENTS

AF would like to thank Robert Gens and Rahul Kidambi for useful discussions and insights, and Gena Barnabee for assisting with Figure 1 and for feedback on this document. This research was partly funded by ONR grant N00014-16-1-2697 and AFRL contract FA8750-13-2-0019. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ONR, AFRL, or the United States Government.

REFERENCES

Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice Hall, 1993.

Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(9):1124–1137, 2004.

Yuri Boykov, Olga Veksler, and Ramin Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222–1239, 2001.

Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In Proceedings of the International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.7062.

Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. ArXiv e-prints, 2016.

Noam Chomsky. On certain formal properties of grammars. Information and Control, 2:137–167, 1959.

Rina Dechter and Robert Mateescu. AND/OR search spaces for graphical models. Artificial Intelligence, 171:73–106, 2007.

Robert Gens and Pedro Domingos. Discriminative learning of sum-product networks. In Advances in Neural Information Processing Systems, pp. 3239–3247, 2012.

Robert Gens and Pedro Domingos. Learning the structure of sum-product networks. In Proceedings of the 30th International Conference on Machine Learning, pp. 873–880, 2013.

Stephen Gould, Richard Fulton, and Daphne Koller. Decomposing a scene into geometric and semantically consistent regions. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1–8, 2009.

D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society, Series B (Methodological), 51(2):271–279, 1989.

John Hopcroft and Jeffrey Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading, MA, 1979.

Daniel S. Jurafsky and James H. Martin. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall, 2000.

Vladimir Kolmogorov and Carsten Rother. Minimizing nonsubmodular functions with graph cuts - a review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(7):1274–1279, 2007.

Vladimir Kolmogorov and Ramin Zabih. What energy functions can be minimized via graph cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(2):147–159, 2004.
Nikos Komodakis, Georgios Tziritas, and Nikos Paragios. Fast, approximately optimal solutions for single and dynamic MRFs. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007.

Victor Lempitsky, Carsten Rother, Stefan Roth, and Andrew Blake. Fusion moves for Markov random field optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1392–1405, 2010.

Victor Lempitsky, Andrea Vedaldi, and Andrew Zisserman. A pylon model for semantic segmentation. In Advances in Neural Information Processing Systems, pp. 1–9, 2011.

Hoifung Poon and Pedro Domingos. Sum-product networks: A new deep architecture. In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence, pp. 337–346. AUAI Press, 2011.

Chris Russell, Lubor Ladický, Pushmeet Kohli, and Philip H. S. Torr. Exact and approximate inference in associative hierarchical networks using graph cuts. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, pp. 1–8, 2010.

Abhishek Sharma, Oncel Tuzel, and Ming-Yu Liu. Recursive context propagation network for semantic scene labeling. In Advances in Neural Information Processing Systems, pp. 2447–2455, 2014.

Jamie Shotton, John Winn, Carsten Rother, and Antonio Criminisi. TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 1–15, 2006.

Richard Socher, Cliff C. Lin, Chris Manning, and Andrew Y. Ng. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning, pp. 129–136, 2011.

Yibiao Zhao and Song-Chun Zhu. Image parsing via stochastic scene grammar. In Advances in Neural Information Processing Systems, pp. 1–9, 2011.

Song-Chun Zhu and David Mumford. A stochastic grammar of images. Foundations and Trends in Computer Graphics and Vision, 2(4):259–362, 2006.

A PROOFS

Proposition 1. The energy \( E(v, t_1, t_2, y) \) of the fusion of parse trees \( t_1, t_2 \) over region \( \mathcal{P} \) with head symbols \( Y_1, Y_2 \) for a production \( v : X \to Y_1 Y_2 \) is submodular.

Proof. \( E(v, t_1, t_2, y) \) is submodular as long as \( 2 \cdot \theta_{pq}^v(Y_1, Y_2) \geq \theta_{pq}^{1} + \theta_{pq}^{2} \), which is true by construction, since \( \theta_{pq}^{v}(y_p^v, y_q^v) \geq \theta_{pq}^c(y_p^c, y_q^c) \) for \( c \) any possible descendant of \( v \) and for all labelings. \( \square \)

Lemma 1. Given a labeling \( y^v \) which fuses parse trees \( t_1, t_2 \) into \( t \) with root production \( v \), energy \( E(t, \mathcal{P}) = E(v, t_1, t_2, y^v) \), and subtree regions \( \mathcal{P}_1 \cap \mathcal{P}_2 = \emptyset \) defined by \( y^v \), any improvement \( \Delta \) in \( E(t_1, \mathcal{P}_1) \) also improves \( E(t, \mathcal{P}) \) by at least \( \Delta \), regardless of any change in \( E(t_1, \mathcal{P} \setminus \mathcal{P}_1) \).

Proof. Since the optimal fusion can be found exactly, and the energy of the current labeling \( y^v \) has improved by \( \Delta \), the optimal fusion will have improved by at least \( \Delta \). \( \square \)

Proposition 2.
Let \( c(n) \) be the time complexity of computing a graph cut on \( n \) pixels and \( |G| \) be the size of the grammar defining the SSPN; then each iteration of InferSSPN takes time \( O(|G|c(n)) \).

Proof. Let \( k \) be the number of productions per nonterminal symbol and \( N \) be the set of nonterminals. For each nonterminal, FUSE is called \( k \) times for each region and once for the remainder of the pixels. FUSE itself has complexity \( O(|\mathcal{P}| + c(|\mathcal{P}|)) = O(c(|\mathcal{P}|)) \) when called with region \( \mathcal{P} \). However, in InferSSPN each pixel is processed only once for each symbol because no regions overlap, so the worst-case complexity occurs when each symbol has only one region, and thus the total complexity of each iteration of InferSSPN is \( O(|N|k \cdot c(n)) = O(|G|c(n)) \). \( \square \)

Theorem 1. Given a parse (tree) \( \hat{t} \) of \( S \) over the entire image with energy \( E(\hat{t}) \), each iteration of InferSSPN constructs a parse (tree) \( t \) of \( S \) over the entire image with energy \( E(t) \leq E(\hat{t}) \), and since the minimum energy of an image parse is finite, InferSSPN will always converge.

Proof. We will prove by induction that for all nodes \( n_i \in \hat{t} \) with corresponding subtree \( \hat{t}_i \), region \( \mathcal{P}_i \), production \( v_i : X \to Y_1 Y_2 \), and child subtrees \( \hat{t}_1, \hat{t}_2 \), we have \( E(t_i) \leq E(\hat{t}_i) \) after one iteration, where \( t_i = T[v_i] \cap \mathcal{P}_i \). Since this holds for every production of \( S \) over the image, this proves the claim.

Base case. When \( \hat{t}_i \) is the subtree with region \( \mathcal{P}_i \) and production \( v_i : X \to Y \) containing only a single terminal child, then by definition \( t_i = T[v_i] \cap \mathcal{P}_i = \hat{t}_i \), because terminal parses do not change given the same region. Thus, \( E(t_i) = E(\hat{t}_i) \) and the claim holds.

Induction step. Let \( v_i : X \to Y_1 Y_2 \) be the production at a node of \( \hat{t} \) with subtrees \( \hat{t}_1, \hat{t}_2 \) over regions \( \mathcal{P}_1, \mathcal{P}_2 \), respectively, such that \( \mathcal{P}_1 \cup \mathcal{P}_2 = \mathcal{P}_i \) and \( \mathcal{P}_1 \cap \mathcal{P}_2 = \emptyset \), and suppose that for all productions \( u_{1j} \) with head \( Y_1 \) and all productions \( u_{2k} \) with head \( Y_2 \), with corresponding parse trees \( t_{1j} = T[u_{1j}] \cap \mathcal{P}_1 \) and \( t_{2k} = T[u_{2k}] \cap \mathcal{P}_2 \), respectively, we have \( E(t_{1j}) \leq E(\hat{t}_{1j}) \) and \( E(t_{2k}) \leq E(\hat{t}_{2k}) \). Now, when FUSE is called on region \( \mathcal{P}_i \), it will choose the subtrees \( t_{1j} \) with \( j = \arg\min_j E(t_{1j}, \mathcal{P}_1) \) and \( t_{2k} \) with \( k = \arg\min_k E(t_{2k}, \mathcal{P}_2) \) and fuse these into \( t'_i \) over \( \mathcal{P}_i \). From Lemma 1, we know that the fusion could at the very least simply reuse the labeling \( y^{v_i} \) that partitions \( \mathcal{P}_i \) into \( \mathcal{P}_1, \mathcal{P}_2 \), and in doing so return a tree \( t'_i \) with energy \( E(t'_i) \leq E(\hat{t}_i) \), because each of its subtrees over their same regions has lower (or equal) energy than those in \( \hat{t} \). Finally, since \( t'_i \) is computed independently of any other trees for region \( \mathcal{P}_i \) and then placed into \( T[v_i] \) as a union with other trees, \( t_i = T[v_i] \cap \mathcal{P}_i = t'_i \), and the claim follows.
\( \square \)

B ADDITIONAL EXPERIMENTAL RESULTS AND DETAILS

We compared InferSSPN to running \( \alpha \)-expansion on a flat pairwise MRF and to max-product belief propagation over a multi-level (3-D) pairwise grid MRF. Each label of the flat MRF corresponds to a possible path in the grammar from the start symbol to a production to one of its constituent symbols, and so on, until reaching a terminal. In general, the number of such paths is exponential in the height of the grammar. The unary terms are the sum of the unary terms along the path, and the pairwise term for a pair of labels is the pairwise term of the first production at which their constituents differ. For any two labels with paths that choose a different production of the same symbol (and have the same path from the start symbol) we assign infinite cost, to enforce the restriction that an object can only have a single production into constituents. Note that after convergence \( \alpha \)-expansion is guaranteed to be within a constant factor of the global minimum energy (Boykov et al., 2001) and thus serves as a good surrogate for the true global minimum, which is intractable to compute.

The multi-layer MRF is constructed similarly. The number of levels in the MRF is equal to the height of the DAG corresponding to the grammar used. The labels at a particular level of the MRF are all (production, constituent) pairs that can occur at that height in the grammar. The pairwise term between the same pixel in two levels is 0 when the parent label's constituent equals the head of the child label's production, and \( \infty \) otherwise. Pairwise terms within a layer are defined as in the flat MRF, with infinite cost for incompatible labels (i.e., two neighboring productions of the same symbol), unless two copies of that nonterminal could be produced at that level by the grammar.

All experiments were run on the same computer, with an Intel Core i7-5960X with 8 cores and 128GB of RAM. Each algorithm was limited to a single thread.

Figure 4: The (a) best energy, (b) total running time, and (c) resulting semantic segmentation accuracy (mean average pixel accuracy) for belief propagation, \( \alpha \)-expansion, and InferSSPN when varying boundary strength. Each data point is the average value over (the same) 10 images. Missing data points indicate that an algorithm ran out of memory (middle and right) or returned infinite energy (left).

Figure 5: The (a) best energy, (b) total running time, and (c) resulting semantic segmentation accuracy (mean average pixel accuracy) for belief propagation, \( \alpha \)-expansion, and InferSSPN when varying grammar height. Each data point is the average value over (the same) 10 images. Missing data points indicate that an algorithm ran out of memory (middle and right) or returned infinite energy (left). Low accuracies for grammar height 0 are a result of the grammar being insufficiently expressive.
Figure 6: The (a) best energy, (b) total running time, and (c) resulting semantic segmentation accuracy (mean average pixel accuracy) for belief propagation, \( \alpha \)-expansion, and InferSSPN when varying the number of productions per nonterminal. Each data point is the average value over (the same) 10 images. Missing data points indicate that an algorithm ran out of memory (middle and right) or returned infinite energy (left).
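To make the flat baseline encoding above concrete, here is a sketch, under our reading of the construction, that enumerates the path-labels of a grammar and scores the pairwise cost between two of them; the tuple representation of paths and the `boundary_cost(head, production_index)` lookup are hypothetical choices. On the layered grammars of Section 4, the number of labels returned grows roughly as \( (2k)^{\text{height}} \) for \( k \) binary productions per symbol, which is the blow-up that makes the flat encoding intractable.

```python
def enumerate_flat_labels(productions, terminals, start='S'):
    """Each flat-MRF label is a path ((symbol, production-index), ...,
    terminal) from the start symbol down to a terminal."""
    prods_of = {}
    for head, rhs in productions:
        prods_of.setdefault(head, []).append(rhs)

    def expand(symbol, prefix):
        if symbol in terminals:
            yield prefix + (symbol,)
            return
        for i, rhs in enumerate(prods_of[symbol]):
            for constituent in rhs:
                yield from expand(constituent, prefix + ((symbol, i),))

    return list(expand(start, ()))

def flat_pairwise(path_p, path_q, boundary_cost):
    """Pairwise cost between two path-labels: infinite if the paths choose
    different productions of the same symbol; otherwise the boundary term of
    the first shared production whose chosen constituents differ.
    boundary_cost(head, production_index) is a hypothetical lookup."""
    for k, (step_p, step_q) in enumerate(zip(path_p, path_q)):
        if step_p == step_q:
            continue
        same_symbol = (isinstance(step_p, tuple) and isinstance(step_q, tuple)
                       and step_p[0] == step_q[0])
        if same_symbol:
            return float('inf')       # two productions of one symbol
        head, idx = path_p[k - 1]     # the shared parent production (k >= 1)
        return boundary_cost(head, idx)
    return 0.0                        # identical paths cost nothing
```

For example, with the `make_test_grammar` sketch from Section 4, `len(enumerate_flat_labels(*make_test_grammar(h, 4, 3, labels)))` grows geometrically with the number of layers `h`.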
For a terminal \( T \in \Sigma \), the unary terms are a linear function of the image features \( \theta^v_p(y^v_p = T; \mathbf{w}) = w^v_{PC} + \mathbf{w}_T^\top \phi^U_p \), where \( w^v_{PC} \) is an element of \( \mathbf{w} \) that specifies the cost of \( v \) relative to other productions and \( \phi^U_p \) is a feature vector representing the local appearance of pixel \( p \). In our experiments, \( \phi^U_p \) is the output of a deep neural network. For labels corresponding to a nonterminal \( X \in N \), the unary terms are \( \theta^v_p(y^v_p = X; \mathbf{w}) = w^v_{PC} + \theta^c_p(y^c_p) \), where \( c \) is the child production of \( v \) in the current parse tree that contains \( p \), such that \( p \in \mathcal{P}_c \). This dependence makes inference challenging, because the choice of children in the parse tree itself depends on the region that is being parsed as \( X \), which depends on the segmentation this unary is being used to compute. The pairwise terms in \( E(v, y^v) \) are a recursive version of the standard contrast-dependent pairwise boundary potential (e.g. Shotton et al. (2006)) defined for each production \( v \) and each pair of adjacent pixels \( p, q \) as \( \theta^v_{pq}(y^v_p, y^v_q; \mathbf{w}) = w^v_{BF} \exp(-\beta^{-1}||\phi^B_p - \phi^B_q||^2) \cdot [y^v_p \neq y^v_q] + \theta^v_{pq}(y^v_p, y^v_q; \mathbf{w}) \), where \( \beta \) is half the average image contrast between all adjacent pixels in an image, \( w^v_{BF} \) is the boundary factor that controls the relative cost of this term for each production, \( \phi^B_p \) is the pairwise per-pixel feature vector, \( c \) is the same as in the unary term above, and \( [\cdot] \) is the indicator function, which has value 1 when its argument is true and is 0 otherwise. For each pair of pixels \( (p, q) \), only one such term will ever be non-zero, because once two pixels are labeled differently at a node in the parse tree, they are placed in separate subtrees and thus never co-occur in any region below the current node. In our experiments, \( \phi^B_p \) are the intensity values for each pixel. 3 INFERENCE Scene understanding (or semantic segmentation) requires labeling each pixel of an image with its semantic class. By constructing a grammar containing a set of nonterminals in one-to-one correspondence with the semantic labels and only allowing these symbols to produce terminals, we can recover the semantic segmentation of an image from a parse tree for this grammar. In the simplest case, a grammar need contain only one additional production from the start symbol to all other nonterminals. More generally, however, the grammar encodes rich structure about the relationships Figure 2: The two main components of InferSSPN: (a) Parsing a region \( \mathcal{P} \) as \( X \rightarrow YZ \) by fusing two parses of \( \mathcal{P} \) as \( Y \rightarrow AB \) and as \( Z \rightarrow CD \), and (b) Improving the parse of \( \mathcal{P} \) as \( X \rightarrow YZ \) by (re)parsing each of its subregions, taking the union of the new \( Y \) and \( Z \) parses of \( \mathcal{P} \), and then fusing these new parses. between image regions at various levels of abstraction, including concepts such as composition and subcategorization. 
Identifying the relevant structure and relationships for a particular image entails finding the best parse of an image \( x \) given a grammar \( G \) (or, equivalently, performing MAP inference in the corresponding SSPN), i.e., \( t^* = \arg\max_{t \in \mathcal{T}_G} p(t|x) = \arg\min_{t \in \mathcal{T}_G} \sum_{v \in t} E(v, y^v, x) \). In PCFGs over sentences [Jurafsky & Martin 2000], the optimal parse can be recovered exactly in time \( O(n^3|G|) \) with the CYK algorithm [Hopcroft & Ullman 1979], where \( n \) is the length of the sentence and \( |G| \) is the number of productions in the grammar, by iterating over all possible split points of the sentence and using dynamic programming to avoid recomputing sub-parses. Unfortunately, for images and other 2-D data types, there are \( 2^n \) possible segmentations of the data for each binary production, rendering this approach infeasible in general. With an SSPN, however, it is possible to efficiently compute the approximate optimal parse of an image. In our algorithm, InferSSPN, this is done by iteratively constructing parses of different regions in a bottom-up fashion. 3.1 PARSE TREE CONSTRUCTION Given a production \( v : X \rightarrow Y_1Y_2 \) and two parse trees \( t_1, t_2 \) over the same region \( \mathcal{P} \) and with head symbols \( Y_1, Y_2 \), respectively, then for any labeling \( y^v \in \{Y_1, Y_2\}^{|\mathcal{P}|} \) of \( \mathcal{P} \) we can construct a third parse tree \( t_X \) over region \( \mathcal{P} \) with root production \( v \), labeling \( y^v \), and subtrees \( t'_1, t'_2 \) over regions \( \mathcal{P}_1, \mathcal{P}_2 \), respectively, such that \( \mathcal{P}_i = \{p \in \mathcal{P} : y^v_p = Y_i\} \) and \( t'_i = t_i \cap \mathcal{P}_i \) for each \( i \), where the intersection of a parse tree and a region \( t \cap \mathcal{P} \) is the new parse tree resulting from intersecting \( \mathcal{P} \) with the region at each node in \( t \). Of course, the quality of the resulting parse tree, \( t_X \), depends on the particular labeling (segmentation) \( y^v \) used. Recall that a parse tree \( t \) on region \( \mathcal{P} \) has energy \( E(t, \mathcal{P}) = \sum_{v \in t} E(v, y^v, \mathcal{P}_v) \), which can be written as \( E(t, \mathcal{P}) = \sum_{p \in \mathcal{P}} \theta_p + \sum_{(p, q) \in \mathcal{E}_t} \theta_{pq} \), where \( \theta_p = \sum_{v \in t_p} \theta_p^v(y_p^v) \cdot |p \in \mathcal{P}_t| \) and \( \theta_{pq} = \sum_{v \in t_p} \theta_{pq}^v(y_p^v, y_q^v) \cdot |(p, q) \in \mathcal{E}_t| \). This allows us to define the fusion operation, which is a key subroutine in InferSSPN. Note that \( \delta_{ij} \) is the Kronecker delta. Definition 1. For a production \( v : X \rightarrow Y_1, Y_2 \) and two parse trees \( t_1, t_2 \) over region \( \mathcal{P} \) with head symbols \( Y_1, Y_2 \) then \( t_X \) is the fusion of \( t_1 \) and \( t_2 \) constructed from the minimum energy labeling \( y^v = \arg\min_{y \in \mathcal{Y}^{|\mathcal{P}|}} E(v, t_1, t_2, y) \), where \[ E(v, t_1, t_2, y) = \sum_{p \in \mathcal{P}} \theta_p^1 \cdot \delta_{y_p Y_1} + \theta_p^2 \cdot \delta_{y_p Y_2} + \sum_{(p, q) \in \mathcal{E}} \theta_{pq}^1 \cdot \delta_{y_p Y_1} \cdot \delta_{y_q Y_1} + \theta_{pq}^2 \cdot \delta_{y_p Y_2} \cdot \delta_{y_q Y_2} + \theta_{pq}^v(Y_1, Y_2) \cdot \delta_{y_p Y_1} \cdot \delta_{y_q Y_2}. \] Figure 2a shows an example of fusing two parse trees to create a new parse tree. 
Although fusion requires finding the optimal labeling from an exponentially large set, the energy is submodular and can be efficiently optimized with a single graph cut. All proofs are presented in the appendix. Proposition 1. The energy \( E(v, t_1, t_2, y^*) \) of the fusion of parse trees \( t_1, t_2 \) over region \( \mathcal{P} \) with head symbols \( Y_1, Y_2 \) for a production \( v : X \rightarrow Y_1Y_2 \) is submodular. Once a parse tree has been constructed, InferSSPN then improves that parse tree on subsequent iterations. The following result shows how InferSSPN can improve a parse tree while ensuring that the energy of that parse tree never gets worse. Lemma 1. Given a labeling \( y^v \) which fuses parse trees \( t_1, t_2 \) into \( t \) with root production \( v \), energy \( E(t, \mathcal{P}) = E(v, t_1, t_2, y^v) \), and subtree regions \( \mathcal{P}_1 \cap \mathcal{P}_2 = \emptyset \) defined by \( y^v \), then any improvement Δ in \( E(t_1, \mathcal{P}_1) \) also improves \( E(t, \mathcal{P}) \) by at least \( \Delta \), regardless of any change in \( E(t_1, \mathcal{P} \setminus \mathcal{P}_1) \). Finally, it will be useful to define the union \( t = t_1 \cup t_2 \) of two parse trees \( t_1, t_2 \) that have the same production at their root but are over disjoint regions \( \mathcal{P}_1 \cap \mathcal{P}_2 = \emptyset \), as the parse tree \( t \) with region \( \mathcal{P} = \mathcal{P}_1 \cup \mathcal{P}_2 \) and in which all nodes that co-occur in both \( t_1 \) and \( t_2 \) (i.e., have the same path to them from the root and have the same production) are merged to form a single node in \( t \). In general, \( t \) may be an inconsistent parse tree, as the same symbol may be parsed as two separate productions, in which case we define the energy of the boundary terms between the pixels parsed as these separate productions to be infinite. 3.2 InferSSPN Pseudocode for our algorithm, InferSSPN, is presented in Algorithm[1]. InferSSPN is an iterative bottom-up algorithm based on graph cuts [Kolmogorov & Zabih 2004] that provably converges to a local minimum of the energy function. In its first iteration, InferSSPN constructs a parse tree over the full image for each production in the grammar. The parse of each terminal production is trivial to construct and simply labels each pixel as the terminal symbol. The parse for every other production \( v : X \rightarrow Y_1 Y_2 \) is constructed by choosing productions for \( Y_1 \) and \( Y_2 \) and fusing their corresponding parse trees to get a parse of the image as \( X \). Since the grammar is non-recursive, we can construct a directed acyclic graph (DAG) containing a node for each symbol and an edge from each symbol to each constituent of each production of that symbol and then traverse this graph from the leaves (terminals) to the root (start symbol), fusing the children of each production of each symbol when we visit that symbol’s node. Of course, to fuse parses of \( Y_1 \) and \( Y_2 \) into a parse of \( X \), we need to choose which production of \( Y_1 \) (and \( Y_2 \)) to fuse; this is done by simply choosing the production of \( Y_1 \) (and \( Y_2 \)) that has the lowest energy over the current region. The best parse of the image, \( \hat{t} \), now corresponds to the lowest-energy parse of all productions of the start symbol. Further iterations of InferSSPN improve \( \hat{t} \) in a flexible manner that allows any of its productions or labelings to change, while also ensuring that its energy never increases. 
Further iterations of InferSSPN improve \( \hat{t} \) in a flexible manner that allows any of its productions or labelings to change, while also ensuring that its energy never increases. InferSSPN does this by again computing parses of the full image for each production in the grammar. This time, however, when parsing a symbol \( X \), InferSSPN independently parses each region of the image that was parsed as any production of \( X \) in \( \hat{t} \) (none of these regions overlap, because the grammar is non-recursive) and then parses the remainder of the image given these parses of subregions, meaning that the pixels in those subregions are instantiated in the MRF but fixed to the labels that the subregion parses specify. The parse of the image as \( X \) is then constructed as the union of these subregion parses. This procedure ensures that the energy never increases (see Theorem 1 and Lemma 1), but also that any subtree of \( \hat{t} \) can be replaced with another subtree if doing so results in lower energy. Figure 2 shows a simple example of updating a parse of a region as \( X \rightarrow YZ \). Further, this (re)parsing of subregions can again be achieved in a single bottom-up pass through the grammar DAG, resulting in a very efficient algorithm for SSPN inference: each pixel appears in at most one subregion for any symbol, and thus only ever needs to be parsed once per production. See Algorithm 1 for more details. 3.3 ANALYSIS As shown in Theorem 1, InferSSPN always converges to a local minimum of the energy function. Like other graph-cut-based algorithms, such as \( \alpha \)-expansion (Boykov et al., 2001), InferSSPN explores an exponentially large set of moves at each step, so the returned local minimum is typically much better than those returned by more local procedures, such as max-product belief propagation. Further, we observe convergence within a few iterations in all experiments, with the majority of the energy improvement occurring in the first iteration. Theorem 1. Given a parse (tree) \( \hat{t} \) of S over the entire image with energy \( E(\hat{t}) \), each iteration of InferSSPN constructs a parse (tree) \( t \) of S over the entire image with energy \( E(t) \leq E(\hat{t}) \); since the minimum energy of an image parse is finite, InferSSPN always converges. As shown in Proposition 2, each iteration of InferSSPN takes time \( O(|G|c(n)) \), where \( n \) is the number of pixels in the image and \( c(n) \) is the complexity of the underlying graph cut algorithm, which is low-order polynomial in the worst case but nearly linear-time in practice (Kolmogorov, 2004; Boykov et al., 2001). Proposition 2. Let \( c(n) \) be the time complexity of computing a graph cut on \( n \) pixels and \( |G| \) be the size of the grammar defining the SSPN; then each iteration of InferSSPN takes time \( O(|G|c(n)) \). Algorithm 1 Compute the (approximate) MAP assignment of the SSPN variables (i.e., the productions and labelings) defined by an image and a grammar. This is equivalent to parsing the image. Input: The image x, a non-recursive grammar \( G = (N, \Sigma, R, S, w) \), and an (optional) input parse \( \hat{t} \). Output: A parse of the image, \( t^* \), with energy \( E(t^*, \mathbf{x}) \leq E(\hat{t}, \mathbf{x}) \).
1: function InferSSPN(x, G, \( \hat{t} \))
2:   \( T, E \leftarrow \) empty lists of parse trees and energies, respectively, both of length \( |R| + |\Sigma| \)
3:   for each terminal \( Y \in \Sigma \) do
4:     \( T[Y] \leftarrow \) the trivial parse with all pixels parsed as \( Y \)
5:     \( E[Y] \leftarrow \sum_{p \in x} w_Y \phi_p^{U} \)
6:   while the energy of any production of the start symbol \( S \) has not converged do
7:     for each symbol \( X \in N \), in reverse topological order do // as defined by the DAG of \( G \)
8:       for each subtree \( \hat{t}_i \) of \( \hat{t} \) rooted at a production \( u_i \) with head \( X \) do
9:         \( \mathcal{P}_i, y_i \leftarrow \) the region that \( \hat{t}_i \) is over and its labeling in \( \hat{t}_i \) // \( \{ \mathcal{P}_i \} \) are all disjoint
10:        for each production \( v_j : X \rightarrow Y_1 Y_2 \) do // iterate over all productions of \( X \)
11:          \( t_{ij}, e_{ij} \leftarrow \mathrm{FUSE}(\mathcal{P}_i, y_i, v_j, T) \) // parse \( \mathcal{P}_i \) as \( v_j \) by fusing parses of \( Y_1 \) and \( Y_2 \)
12:       \( \mathcal{P}_{\overline{X}} \leftarrow \) all pixels that are not in any region \( \mathcal{P}_i \)
13:       for each production \( v_j : X \rightarrow Y_1 Y_2 \) do // iterate over all productions of \( X \)
14:         \( y_{\text{rand}} \leftarrow \) a random labeling of \( \mathcal{P}_{\overline{X}} \) // random initialization
15:         \( \hat{t}_{\overline{X}}, e_{\overline{X}} \leftarrow \mathrm{FUSE}(\mathcal{P}_{\overline{X}}, y_{\text{rand}}, v_j, T, (\cup_i t_{ij})) \) // parse \( \mathcal{P}_{\overline{X}} \) as \( v_j \) given \( (\cup_i t_{ij}) \)
16:       update lists: \( T[v_j] \leftarrow (\cup_i t_{ij}) \cup \hat{t}_{\overline{X}} \) and \( E[v_j] \leftarrow \sum_i e_{ij} + e_{\overline{X}} \) for all \( v_j \) with head \( X \)
17:     \( \hat{t}, \hat{e} \leftarrow \) the parse of the lowest-energy production of \( S \) in \( E \), and that energy
18:   return \( \hat{t}, \hat{e} \)

Input: A region \( \mathcal{P} \), a labeling \( y \) of \( \mathcal{P} \), a production \( v : X \rightarrow Y_1 Y_2 \), a list of parses \( T \), and an optional parse \( t_{\overline{\mathcal{P}}} \) of the pixels not in \( \mathcal{P} \), used to set the pairwise terms of edges leaving \( \mathcal{P} \).
Output: A parse tree rooted at \( v \) over region \( \mathcal{P} \) and the energy of that parse tree.
1: function Fuse(\( \mathcal{P}, y, v, T, t_{\overline{\mathcal{P}}} \))
2:   for each \( Y_i \) with \( i \in \{1, 2\} \) do
3:     \( u_i \leftarrow \) the production of \( Y_i \) in \( T \) with the lowest energy over \( \{ p : y_p = Y_i \} \), given \( t_{\overline{\mathcal{P}}} \)
4:   create the submodular energy function \( E(v, y, \mathcal{P}, x) \) on \( \mathcal{P} \) from \( T[u_1], T[u_2] \), and \( t_{\overline{\mathcal{P}}} \)
5:   \( y^v, e^v \leftarrow \arg \min_y E(v, y, \mathcal{P}, x) \) // label each pixel in \( \mathcal{P} \) as \( Y_1 \) or \( Y_2 \) using one graph cut
6:   \( t^v \leftarrow \) combine \( T[u_1] \) and \( T[u_2] \) according to \( y^v \) and append \( v \) as the root
7:   return \( t^v, e^v \)
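To make the Fuse step concrete, the sketch below fuses two candidate subtrees over a pixel grid with a single s-t min-cut, using a plain Potts boundary term. It is a simplified reading of the energy above (the production weights, boundary features, and the optional outside parse \( t_{\overline{\mathcal{P}}} \) are folded into the inputs), not the authors' implementation; networkx's min-cut is used for clarity, though a dedicated max-flow library would be far faster.

```python
# Sketch of one Fuse call: unary1[p] / unary2[p] are the energies of parsing
# pixel p under the best parses of Y1 / Y2, and potts_w is the boundary
# penalty paid wherever the Y1/Y2 labeling changes between 4-neighbours.
import networkx as nx
import numpy as np

def fuse(unary1, unary2, potts_w):
    h, w = unary1.shape
    g = nx.DiGraph()
    for i in range(h):
        for j in range(w):
            p = (i, j)
            # If p ends on the sink side (labeled Y2), the cut pays the s->p
            # capacity, so that capacity carries the cost of choosing Y2;
            # symmetrically, p->t carries the cost of choosing Y1.
            g.add_edge("s", p, capacity=float(unary2[i, j]))
            g.add_edge(p, "t", capacity=float(unary1[i, j]))
            for q in ((i + 1, j), (i, j + 1)):          # 4-neighbourhood
                if q[0] < h and q[1] < w:
                    g.add_edge(p, q, capacity=float(potts_w))  # Potts boundary term
                    g.add_edge(q, p, capacity=float(potts_w))
    energy, (source_side, _) = nx.minimum_cut(g, "s", "t")
    labels = np.ones((h, w), dtype=int)                 # 1 = Y2
    for p in source_side - {"s"}:
        labels[p] = 0                                   # 0 = Y1
    return labels, energy
```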
Note that a straightforward application of \( \alpha \)-expansion to image parsing that uses one label for every possible parse in the grammar requires an exponential number of labels in general. InferSSPN can be extended to productions with more than two constituents by simply replacing the internal graph cut used to fuse subtrees with a multi-label algorithm such as \( \alpha \)-expansion; InferSSPN would still converge, because each subtree would still never increase in energy. An algorithm such as QPBO (Kolmogorov & Rother, 2007) could also be used, which would allow the submodularity restriction to be relaxed. Finally, running InferSSPN on the grammar of \( k - 1 \) binary productions that results from converting a grammar with a single production over \( k > 2 \) constituents is equivalent to running \( \alpha \)-expansion on the \( k \) constituents. 4 EXPERIMENTS We evaluated InferSSPN by parsing images from the Stanford Background Dataset (SBD) using grammars with generated structure and weights inferred from the pixel labels of the images we parsed. SBD is a standard semantic segmentation dataset containing images with an average size of \( 320 \times 240 \) pixels and a total of 8 labels. The input features came from the DeepLab system (Chen et al., 2015; 2016) trained on the same images used for evaluation (note that we are not evaluating learning; we thus use the same features for each algorithm and evaluate on the training data in order to separate inference performance from generalization performance). We compared InferSSPN to \( \alpha \)-expansion on a flat pairwise MRF and to max-product belief propagation (BP) on a multi-level (3-D) pairwise grid MRF. Details of these models are provided in the appendix. We note that the flat encoding for \( \alpha \)-expansion requires one label for each path in the grammar, and the number of such paths is exponential in the height of the grammar. However, once \( \alpha \)-expansion converges, its energy is within a constant factor of the global minimum energy (Boykov et al., 2001), so it serves as a good surrogate for the true global minimum, which is intractable to compute. We compared these algorithms by varying three different parameters: boundary strength (the strength of the pairwise terms), grammar height, and the number of productions per nonterminal. Each grammar used for testing contained a start symbol, multiple layers of nonterminals, and a final layer of nonterminals in one-to-one correspondence with the eight terminal symbols, each of which had a single production that produces a region of pixels. The start symbol had one production for each pair of symbols in the layer below it, and the last nonterminal layer (ignoring the nonterminals for the labels) had productions for each pair of labels, distributed uniformly over that layer. **Boundary strength.** Increasing the boundary strength of an MRF makes inference more challenging, as individual pixel labels cannot be easily flipped without large side effects. To test this, we constructed a grammar as above with 2 layers of nonterminals (not including the start symbol), each containing 3 nonterminal symbols with 4 binary productions to the next layer. We vary the boundary scale factor \( w_v^{\mathrm{BF}} \) for all \( v \) and plot the mean average pixel accuracy returned by each algorithm in Figure 3a (the x-axis is log-scale). InferSSPN returns parses with almost identical accuracy (and energy) to \( \alpha \)-expansion. BP also returns comparable accuracies, but almost always returns invalid parses with infinite energy (if it converges at all): parses that contain multiple productions of the same object, or a production of some symbol Y even though a pixel is labeled as symbol X.
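As an aside, the layered test grammars described above are easy to generate procedurally. The sketch below reflects our reading of the construction; the symbol names and the dict representation are illustrative, and the exact way productions were distributed in the original experiments may differ.

```python
# Sketch of a random layered, non-recursive grammar: `n_layers` layers of
# `n_sym` nonterminals, each with `n_prod` binary productions into the layer
# below; one preterminal per pixel label with a single terminal production;
# and a start symbol with one production per pair of top-layer symbols.
import itertools
import random

def make_layered_grammar(n_layers=2, n_sym=3, n_prod=4, n_labels=8, seed=0):
    rng = random.Random(seed)
    below = [f"pre{k}" for k in range(n_labels)]
    prods = {f"pre{k}": [(f"label{k}",)] for k in range(n_labels)}
    for depth in range(n_layers, 0, -1):          # build layers bottom-up
        layer = [f"N{depth}_{i}" for i in range(n_sym)]
        for sym in layer:
            prods[sym] = [tuple(rng.sample(below, 2)) for _ in range(n_prod)]
        below = layer
    prods["S"] = list(itertools.combinations(below, 2))
    return prods

# The grammar size |G| grows linearly in n_layers, which matches the linear
# per-iteration cost of InferSSPN in Proposition 2.
```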
![Three line plots showing mean average pixel accuracy and total running time for varying boundary strength, grammar height, and number of productions.](page_186_579_1207_246.png) Figure 3: The mean average pixel accuracy of the returned solution and the total running time for each of belief propagation, \( \alpha \)-expansion, and InferSSPN when varying (a) boundary strength, (b) grammar height, and (c) number of productions. Each data point is the average value over (the same) 10 images. Missing data points indicate out-of-memory errors. Figures 4, 5, and 6 in the appendix show all results for each experiment. **Grammar height.** In general, the number of paths in the grammar is exponential in its height, so the height of the grammar controls the complexity of inference and thus the difficulty of parsing images. For this experiment, we set the boundary scale factor to 10 and constructed a grammar with four nonterminals per layer, each with three binary productions to the next layer. Figure 3b shows the effect of grammar height on total inference time (to convergence or to a maximum number of iterations, whichever occurred first). As expected from Proposition 2, the time taken by InferSSPN scales linearly with the height of the grammar, which is within a constant factor of the size of the grammar when all other parameters are fixed. In contrast, the inference time of both \( \alpha \)-expansion and BP scaled exponentially with the height of the grammar, because the number of labels for both increases combinatorially. Again, the energies and corresponding accuracies achieved by InferSSPN were nearly identical to those of \( \alpha \)-expansion (see Figure 5 in the appendix). **Productions per nonterminal.** The number of paths in the grammar is also directly affected by the number of productions per symbol. For this experiment, we increased each pairwise term by a factor of 10 and constructed a grammar with 2 layers of nonterminals, each with 4 nonterminal symbols. Figure 3c shows the effect of increasing the number of productions per nonterminal, which again demonstrates that InferSSPN is far more efficient than either \( \alpha \)-expansion or BP as the complexity of the grammar increases, while still finding comparable solutions (see Figure 6 in the appendix). 5 CONCLUSION This paper proposed submodular sum-product networks (SSPNs), a novel extension of sum-product networks that can be understood as an instantiation of an image grammar in which all possible parses of an image over arbitrary shapes are represented. Despite this complexity, we presented InferSSPN, a move-making algorithm that exploits submodularity to find the (approximate) MAP state of an SSPN, which is equivalent to finding the (approximate) optimal parse of an image. Analytically, we showed that InferSSPN is both very efficient (each iteration takes time linear in the size of the grammar times the complexity of one graph cut) and convergent. Empirically, we showed that InferSSPN achieves accuracies and energies comparable to those of \( \alpha \)-expansion, which is guaranteed to return solutions within a constant factor of the global optimum, while taking exponentially less time to do so. We have begun work on learning the structure and parameters of SSPNs from data. This is a particularly promising avenue of research, because many recent works have demonstrated that learning both the structure and parameters of sum-product networks from data is feasible and effective, despite the well-known difficulty of grammar induction.
We also plan to apply SSPNs to additional domains, such as activity recognition, social network modeling, and probabilistic knowledge bases. ACKNOWLEDGMENTS AF would like to thank Robert Gens and Rahul Kidambi for useful discussions and insights, and Gena Barnabee for assisting with Figure 1 and for feedback on this document. This research was partly funded by ONR grant N00014-16-1-2697 and AFRL contract FA8750-13-2-0019. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ONR, AFRL, or the United States Government.
reject
Reject
4.333333
2534f8950f3e7b86128b066ab17cfd2e5e5dc2b7
iclr
2,017
FUZZY PARAPHRASES IN LEARNING WORD REPRESENTATIONS WITH A LEXICON Yuanzhi Ke & Masafumi Hagiwara Department of Information and Computer Science Keio University Hiyoshi 3-14-1, Kohoku, Yokohama City, Kanagawa, Japan {enshika8811.a6, hagiwara}@keio.jp ABSTRACT A synonym of a polysemous word is usually a paraphrase of only one sense among many. When lexicons are used to improve vector-space word representations, such paraphrases are unreliable and bring noise into the vector space. Prior works use a coefficient to adjust the overall learning of the lexicon and regard all paraphrases equally. In this paper, we propose a novel approach that treats the paraphrases differently in order to alleviate the adverse effects of polysemy. We annotate each paraphrase with a degree of reliability, and the paraphrases are randomly eliminated according to these degrees while our model learns word representations. In this way, our approach drops the unreliable paraphrases while keeping the more reliable ones. The experimental results show that the proposed method improves the word vectors. Our approach addresses the polysemy problem while keeping one vector per word, which makes it easier to use than conventional methods that estimate multiple vectors for a word. Our approach also outperforms the prior works in our experiments. 1 INTRODUCTION Vector-space representations of words are reported to be useful and to improve the performance of machine learning algorithms on many natural language processing tasks, such as named entity recognition and chunking (Turian et al., 2010), text classification (Socher et al., 2012; Le & Mikolov, 2014; Kim, 2014; Joulin et al., 2016), topic extraction (Das et al., 2015; Li et al., 2016), and machine translation (Zaremba et al., 2014; Sutskever et al., 2014). Work on improving vector-space representations of words continues. Bojanowski et al. (2016) attempt to improve word vectors by involving character-level information. Other works (Yu & Dredze, 2014; Xu et al., 2014; Faruqui et al., 2015; Bollegala et al., 2016) try to estimate better word vectors by using a lexicon or ontology. The idea is simple: because a lexicon or ontology contains well-defined relations between words, we can use them to improve word vectors. However, for a polysemous word, one of its synonyms does not always mean the same thing as the original word in every context. For example, the word "point" equals "score" in "Team A got 3 points", but not in "my point of view." One method to address this issue is to estimate a vector for each word sense (Huang et al., 2012; Chen et al., 2014) or per word type (Neelakantan et al., 2014); however, using such word vectors requires additional word sense disambiguation or part-of-speech tagging. In this paper, we propose a method that improves the vector-space representations using a lexicon while alleviating the adverse effects of polysemy, keeping one vector per word. We estimate a degree of reliability for each paraphrase in the lexicon and eliminate those with lower degrees during learning. The experimental results show that the proposed method is effective and outperforms the prior works. The major contributions of our work include: • We propose a novel approach involving fuzzy sets to reduce the noise brought by polysemous words into the word vector space when a lexicon is used for learning, and a model that uses the fuzzy paraphrase sets to learn the word vector space.
![The process flow of the proposed method.](page_324_186_900_377.png) Figure 1: The process flow of the proposed method. • Although some prior works propose to solve the polysemy problem by estimating one vector per word sense or type, using such word vectors requires additional pre-processing. Our proposed method keeps one vector per word, which makes the word vectors easier to use in practice: it is necessary neither to disambiguate word senses nor to tag parts of speech before using them. We introduce our proposed method in section 2. We show the effects of different paraphrase sets, parameters, and corpus sizes, and evaluate the effectiveness of our approach by comparing it to simpler algorithms, in section 3. We compare our approach with the prior works via an evaluation experiment in section 4. We give our findings, conclusions, and outlook in section 5. 2 THE PROPOSED METHOD 2.1 FUZZY PARAPHRASES As described in section 1, whether a polysemous word's paraphrase means the same as the original word depends on the context. Hence, if we simply use all the paraphrases of a word in the lexicon to improve its word vector, without discrimination, they may bring noise into the vector space. A conventional remedy is to give each word sense its own vector; however, such vector spaces require additional word sense disambiguation in practical use. Here, we propose a method to alleviate the adverse effects of polysemous words' paraphrases without word sense disambiguation. Our idea is to annotate each paraphrase with a degree of reliability, like a member of a fuzzy set. We call such paraphrases "fuzzy paraphrases" and their degrees "memberships." 2.2 LEARNING WITH FUZZY PARAPHRASES We also propose a novel method to jointly learn a corpus with a lexicon, in order to use fuzzy paraphrases to improve the word vectors. If the meanings of two words are exactly the same, they can replace each other in a text without changing its semantic features. Hence, we can learn the lexicon by replacing words in the corpus with their lexical paraphrases. We learn the word vectors by maximizing the probability of a word given its context, and also given a generated context in which words are randomly replaced by their paraphrases. The memberships of the fuzzy paraphrases control the probability that these replacements occur, via a control function, as shown in Figure 1. For a text corpus \( T \), denote by \( w_i \) the \( i \)th word in \( T \), \( c \) the context window size, \( w_j \) a word in the context window, \( L_{w_j} \) the paraphrase set of \( w_j \) in the lexicon \( L \), \( w_k \) the \( k \)th fuzzy paraphrase in \( L_{w_j} \), and \( x_{jk} \) the membership of \( w_k \) for \( w_j \); the objective is \[ \sum_{w_i \in T} \sum_{(i-c) \leq j \leq (i+c)} \left[ \log p(w_i|w_j) + \sum_{w_k \in L_{w_j}} f(x_{jk}) \log p(w_i|w_k) \right]. \] (1) The function \( f(x_{jk}) \) of the membership \( x_{jk} \) is a specified drop-out function: it returns 0 more often for paraphrases with lower memberships, and 1 more often for the others.
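As a concrete illustration of Equation (1), the schematic below accumulates the objective over a tokenized corpus. Here `log_p` and `f` are placeholders for the model's log-probability and the drop-out gate defined in section 2.3 below, and all names are ours, not the authors' code.

```python
# Schematic of Eq. (1): for each centre word w_i and each context word w_j
# within the window, add log p(w_i | w_j), plus log p(w_i | w_k) for every
# fuzzy paraphrase w_k of w_j that the gate f accepts on this pass.
def objective(corpus, lexicon, log_p, f, c=8):
    total = 0.0
    for i, w_i in enumerate(corpus):
        for j in range(max(0, i - c), min(len(corpus), i + c + 1)):
            if j == i:
                continue
            w_j = corpus[j]
            total += log_p(w_i, w_j)
            for w_k, x_jk in lexicon.get(w_j, {}).items():
                if f(x_jk):                 # 0/1 drop-out on the membership
                    total += log_p(w_i, w_k)
    return total
```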
2.3 MEMBERSHIP ESTIMATION & CONTROL FUNCTION \( f(x) \) Looking for a control function that is easy to train, we observe that if two words are more often translated to the same word in another language, then replacing one with the other is less likely to change the meaning of the original sentence. Thus, we use a function of the bilingual similarity (denoted \( S_{jk} \)) as the membership function: \[ x_{jk} = g(S_{jk}). \] (2) There have been works on calculating the similarity of words using such bilingual information. A lexicon called the Paraphrase Database (PPDB) provides similarity scores for paraphrases on the basis of bilingual features (Ganitkevitch et al., 2013; Pavlick et al., 2015b;a). We scale the similarity score of each paraphrase \( w_k \) in PPDB2.0 to \([0, 1]\) to obtain the memberships, and draw the values of \( f(x_{jk}) \) from a Bernoulli distribution parameterized by them. Denoting by \( S_{jk} \) the similarity score of words \( w_j \) and \( w_k \) in PPDB2.0, the value of \( f(x_{jk}) \) is drawn as \[ f(x_{jk}) \sim \mathrm{Bernoulli}(x_{jk}), \] (3) \[ x_{jk} = \frac{S_{jk}}{\max_{j \in T, k \in L} S_{jk}}. \] (4) 2.4 TRAINING With the method described above, we do not need to train \( f(x_{jk}) \). The model can be trained by negative sampling (Mikolov et al., 2013b): for a word \( w_O \) and a word \( w_I \) in its context, denote by \( A_I \) the set of paraphrases of \( w_I \) accepted by \( f(x_{jk}) \); we maximize \( \log p(w_O|w_I) \) by distinguishing noise words drawn from a noise distribution \( P_n(w) \) from \( w_O \) and its accepted paraphrases in \( A_I \) by logistic regression: \[ \log p(w_O|w_I) = \log \sigma(v_{w_O}^T v_{w_I}) + \sum_{i=1}^{n} \mathbb{E}_{w_i \sim P_n(w)}[\log \sigma(-v_{w_i}^T v_{w_I})], \quad w_i \neq w_O, \; w_i \notin A_I. \] (5) Here, \( v_{w_O}^T \) and \( v_{w_i}^T \) are the transposes of \( v_{w_O} \) and \( v_{w_i} \), respectively, \( n \) is the number of negative samples used, and \( \sigma(x) = 1/(1 + e^{-x}) \) is the sigmoid function.
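A minimal sketch of Equations (2)-(4), with illustrative names (a nested dict of PPDB2.0 scores); this is not the authors' released code.

```python
# Scale PPDB2.0 scores to [0, 1] memberships by the global maximum (Eq. 4),
# then draw accept/reject decisions from Bernoulli(x_jk) (Eq. 3).
import random

def memberships(ppdb):
    """ppdb: {word: {paraphrase: PPDB2.0 score}} -> same shape, scores in [0, 1]."""
    s_max = max(s for paras in ppdb.values() for s in paras.values())
    return {w: {p: s / s_max for p, s in paras.items()}
            for w, paras in ppdb.items()}

def accepted_paraphrases(word, x, rng=random):
    """Sample the accepted paraphrase set A_I for one training step."""
    return [p for p, m in x.get(word, {}).items() if rng.random() < m]

x = memberships({"point": {"score": 3.2, "dot": 2.4, "view": 1.1}})
print(accepted_paraphrases("point", x))   # e.g. ['score', 'dot']
```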
3 MODEL EXPLORATION 3.1 CORPUS FOR EXPERIMENTS We use enwiki9\footnote{http://mattmahoney.net/dc/enwiki9.zip} mainly for tuning and model exploration. It has a balanced size (1 GB), containing 123,353,508 tokens: enough data to alleviate randomness, while not taking too much time for our model to learn. Table 1: The results of 10 repeated runs of learning and testing under each benchmark. The vector-space dimension is set to 100. Enwiki9 is used as the corpus. The maximum, minimum, and margin of error are marked in bold. <table> <tr> <th>Run</th> <th>SimLex</th> <th>WS353</th> <th>RW</th> <th>MEN</th> <th>SEM</th> <th>SYN</th> </tr> <tr> <td>1</td> <td>29.41</td> <td>62.02</td> <td>38.12</td> <td>60.00</td> <td>13.26</td> <td><b>27.77</b></td> </tr> <tr> <td>2</td> <td><b>29.57</b></td> <td>62.49</td> <td>38.26</td> <td>60.39</td> <td><b>12.70</b></td> <td>27.27</td> </tr> <tr> <td>3</td> <td>29.48</td> <td><b>61.04</b></td> <td>39.90</td> <td>59.80</td> <td>13.89</td> <td>26.94</td> </tr> <tr> <td>4</td> <td>29.52</td> <td>60.20</td> <td>39.68</td> <td>59.81</td> <td><b>14.02</b></td> <td>27.11</td> </tr> <tr> <td>5</td> <td>28.69</td> <td><b>63.45</b></td> <td>38.65</td> <td>60.16</td> <td>12.94</td> <td>26.87</td> </tr> <tr> <td>6</td> <td>29.26</td> <td>61.95</td> <td>39.13</td> <td>59.73</td> <td>13.75</td> <td>26.60</td> </tr> <tr> <td>7</td> <td>29.46</td> <td>62.90</td> <td>39.12</td> <td><b>60.45</b></td> <td>13.42</td> <td><b>25.98</b></td> </tr> <tr> <td>8</td> <td>28.51</td> <td>62.96</td> <td><b>37.93</b></td> <td><b>59.31</b></td> <td>13.58</td> <td>27.10</td> </tr> <tr> <td>9</td> <td>29.13</td> <td>62.44</td> <td><b>39.91</b></td> <td>59.75</td> <td>13.98</td> <td>26.89</td> </tr> <tr> <td>10</td> <td><b>28.59</b></td> <td>60.66</td> <td>38.67</td> <td>60.24</td> <td>13.66</td> <td>26.98</td> </tr> <tr> <th>Margin of Error</th> <th>0.98</th> <th>2.41</th> <th>1.98</th> <th>1.14</th> <th>1.32</th> <th>1.79</th> </tr> </table> We use ukWaC (Baroni et al., 2009) to compare with the prior works in section 4, but not for model exploration, because, as an enormous corpus containing 12 GB of text, it takes more than 20 hours to learn. 3.2 BENCHMARKS We used several benchmarks: Wordsim-353 (WS353) (Finkelstein et al., 2001) (353 word pairs), SimLex-999 (SimLex) (Hill et al., 2016) (999 word pairs), the Stanford Rare Word Similarity Dataset (RW) (Luong et al., 2013) (2034 word pairs), the MEN dataset (MEN) (Bruni et al., 2014) (3000 word pairs), and Mikolov's (Google's) word analogical reasoning task (Mikolov et al., 2013a). WS353, SimLex, and RW are gold standards: they provide word similarities labeled by humans, and we report Spearman's rank correlation (\( \rho \)) for them. Mikolov's word analogical reasoning task is another widely used benchmark for word vectors. It contains a semantic part (SEM) and a syntactic part (SYN). We use the basic ("add") method suggested in their paper to find the answers: to guess the word \( b' \) related to \( b \) in the way \( a' \) is related to \( a \), the word closest in cosine similarity to \( a' - a + b \) is returned as \( b' \). We find that the benchmark scores change every time we learn the corpus, even under the same settings, because the models involve random numbers. Therefore, we should consider the margin of error when interpreting the benchmarks. To measure it, we first used our proposed method to learn enwiki9 10 times under the same parameters, and then tested the vectors under each benchmark. In each test we used the same parameters: the vector dimension was set to 100 for speed, the window size to 8, and 25 negative samples were used. The results are shown in Table 1; we use them to analyze the other experimental results later.
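Before moving on, the add method used above for SEM and SYN can be stated compactly in code. A minimal sketch with illustrative names (`vecs` is a row-normalized embedding matrix, `vocab` maps words to rows, `words` maps rows back to words); this is not the authors' code.

```python
# Answer a : a' :: b : ? by returning the vocabulary word whose vector is
# closest in cosine similarity to a' - a + b (question words excluded).
import numpy as np

def analogy(vecs, vocab, words, a, a_prime, b):
    target = vecs[vocab[a_prime]] - vecs[vocab[a]] + vecs[vocab[b]]
    target /= np.linalg.norm(target)
    sims = vecs @ target                  # cosine, since rows are unit-norm
    for w in (a, a_prime, b):
        sims[vocab[w]] = -np.inf          # standard exclusion of question words
    return words[int(np.argmax(sims))]    # predicted b'
```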
3.3 DIFFERENT TYPES OF PARAPHRASES In PPDB2.0, there are six relationships between paraphrases. For words \( X \) and \( Y \), the relationships defined in PPDB2.0 are shown in Table 2. We do not consider the Exclusion and Independent relations, because they are not semantic paraphrases. The Equivalence paraphrases are the most reliable, because they are the closest ones; but we still want to know whether it is better to also take the Entailment and OtherRelated paraphrases into consideration. Table 2: Different types of relationships of paraphrases in PPDB2.0 (Pavlick et al., 2015b;a). <table> <tr> <th>Relationship Type</th> <th>Description</th> </tr> <tr> <td>Equivalence</td> <td>\( X \) is the same as \( Y \)</td> </tr> <tr> <td>Forward Entailment</td> <td>\( X \) is more specific than/is a type of \( Y \)</td> </tr> <tr> <td>Reverse Entailment</td> <td>\( X \) is more general than/encompasses \( Y \)</td> </tr> <tr> <td>Exclusion</td> <td>\( X \) is the opposite of \( Y \) / \( X \) is mutually exclusive with \( Y \)</td> </tr> <tr> <td>OtherRelated</td> <td>\( X \) is related in some other way to \( Y \)</td> </tr> <tr> <td>Independent</td> <td>\( X \) is not related to \( Y \)</td> </tr> </table> ![Bar chart comparing Spearman's ρ for different paraphrase sets](page_370_684_1092_370.png) Figure 2: The \( \rho \) for SimLex using different paraphrase sets. The corpus is enwiki9. The vector-space dimension is set to 300. The context window size is set to 8. 25 negative samples are used in learning. We learn enwiki9 with the different paraphrase sets and use SimLex to evaluate the trained vectors. Figure 2 compares the performance of the different paraphrase sets, as tested by SimLex. We can see that it is best to use the Equivalence and Entailment (forward + reverse) paraphrases together, or the Equivalence paraphrases alone. Using only the Entailment paraphrases is weak, and involving the OtherRelated paraphrases deteriorates the performance. Based on these results, we use the Equivalence and Entailment paraphrases in the experiments. 3.4 EFFECTS OF PARAMETERS We use our proposed method to learn enwiki9 under different parameter settings to evaluate the effects of the parameters. We first learn enwiki9 under each setting and then test the vectors using SimLex, WS353, RW, MEN, SEM, and SYN. We report Spearman's rank correlation \( \rho \) for SimLex, WS353, RW, and MEN, and the percentage of correct answers for SEM and SYN. Figure 3 (panels: (a) SimLex-999, (b) Wordsim-353, (c) RW, (d) MEN, (e) SEM, (f) SYN): The scores of the benchmarks using different vector-space dimensions. For WS353, SimLex, RW, and MEN, we report 100 * \( \rho \) (Spearman's rank correlation); for word analogical reasoning, the percentage of correct answers. The context window size is set to 8. The number of negative samples is set to 25. 3.4.1 EFFECTS OF VECTOR SPACE DIMENSION We compare the benchmark scores across different vector-space dimensions. Figure 3 shows how each benchmark's score changes with dimension. We find that: • Larger vectors do not bring better performance for most of the benchmarks (except SimLex), although some previous works suggest that higher dimensions bring better performance for their methods (Pennington et al., 2014; Levy & Goldberg, 2014b). • The curves of SimLex and SYN are gradual, whereas there are several abrupt changes in the others; those of WS353 and RW do not change gradually. • The best dimension is not consistent across benchmarks. Differences in the content of the benchmarks may cause this inconsistency; for example, SimLex rates related but dissimilar words lower than the other word similarity benchmarks do (Hill et al., 2016; Chiu et al., 2016). The results suggest that the best dimension for our method depends on the task.
3.4.2 EFFECTS OF CONTEXT WINDOW SIZE We compared the benchmark scores across different context window sizes; they are shown in Figure 4. Previous works argue that larger window sizes introduce more topical words, while smaller ones emphasize word function (Turney, 2012; Levy & Goldberg, 2014a; Levy et al., 2015; Hill et al., 2016; Chiu et al., 2016). Different context window sizes thus provide different balances between relatedness and similarity, and the best window size depends on what we want the vectors to capture. We also see this in our results: the relationship between window size and performance depends on how each benchmark rates its word pairs. For example, WS353 rates word pairs according to association rather than similarity (Finkelstein et al., 2001; Hill et al., 2016). As larger windows capture relatedness rather than similarity, the results show that the larger the window, the better for WS353. Figure 4 (panels: (a) SimLex-999, (b) Wordsim-353, (c) RW, (d) MEN, (e) SEM, (f) SYN): The scores of the benchmarks using different context window sizes. For WS353, SimLex, RW, and MEN, we report 100 * \( \rho \) (Spearman's rank correlation); for word analogical reasoning, the percentage of correct answers. We use 100-dimension vectors. The number of negative samples is set to 25. The MEN dataset also prefers relatedness over similarity (Bruni et al., 2014), but its annotators were given examples involving similarity\footnote{According to their homepage: http://clic.cimec.unitn.it/elia.bruni/MEN.html}. This may be why windows larger than 8 deteriorate the MEN-based benchmarks (Figure 4d). The standards by which WS353 and MEN rate word pairs are similar (Bruni et al., 2014), which leads to their similar curves (Figures 4b and 4d); their worst window sizes are also close. When the window size is set to about 2 or 3, respectively, the balance of similarity and relatedness is worst for them. Unlike the other word similarity datasets, SimLex rates synonyms high and related but dissimilar word pairs low. Therefore, the smallest window is the most suitable for SimLex, because it best captures functional similarity. The results for RW differ from the others (Figure 4c): there are many abrupt changes. The best window size is 10, but 1 is better than 2-9. The dataset contains rare words; because of their low frequencies, a broad context window may be better for drawing features for them, yet the additional words introduced by larger windows may also deteriorate the vectors of unusual words. For tasks requiring high-quality rare-word vectors, we should be careful in tuning the context window size. For Google's word analogical tasks (SEM and SYN), the questions are closely related to topic or domain; for example, there are questions about the capitals of countries, which are associated but not synonymous. Therefore a larger window is usually better. However, for SYN a window size of 9 is slightly better than 10 (Figure 4f), and for MEN 8 is best (Figure 4d). This may be because a window that is too large introduces too many words and reduces sparsity (Chiu et al., 2016). We conclude that the best context window size depends on the task, but too large a window should be avoided.
3.4.3 EFFECTS OF NEGATIVE SAMPLES We also explored the effects of the number of negative samples. The results are shown in Figure 5. Figure 5 (panels: (a) SimLex-999, (b) Wordsim-353, (c) RW, (d) MEN, (e) SEM, (f) SYN): The scores of the benchmarks using different numbers of negative samples. For WS353, SimLex, RW, and MEN, we report 100 * \( \rho \) (Spearman's rank correlation); for word analogical reasoning, the percentage of correct answers. We use 100-dimension vectors. The context window size is set to 8. In Figures 5a, 5c, and 5f we see that overfitting occurs when more than 15 negative samples are used; in Figures 5b and 5e it occurs from 25 and 20, respectively. In Figure 5d the performance does not change much once more than 30 negative samples are used. The results indicate that too many negative samples may cause overfitting. For 3 of the 6 benchmarks, 15 negative samples is best, but we should be careful in practice, because the other results suggest that the best number depends on the task. The abrupt change at around 15 in Figure 5b is interesting: WS353 is the smallest dataset among those we used, and because of its small size, the effects of randomness may cause such singularities when the vector space is not well trained. 3.5 EFFECTS OF THE CONTROL FUNCTION & THE CORPUS SIZE In this section, we evaluate the effectiveness of our fuzzy approach by comparing it to the settings in which \( f(x) \) in Equation (1) is: • \( f(x) = 1 \): the model regards all paraphrases equally; all are used without drop-out. • \( f(x) = 0 \): the model uses no paraphrases, equivalent to CBOW. This comparison is also a good way to show the effects of corpus size, by running the proposed method and the settings above on corpora of varying size; therefore we discuss them together in this section. We use text8\footnote{http://mattmahoney.net/dc/text8.zip}, a small corpus containing 100 MB of text, together with enwiki9 and ukWaC described in section 3.1. To show the differences, we report not only SimLex but also MEN and the word analogical task (SEM and SYN), the other benchmarks shown to be relatively solid in section 3.2. The vector-space dimension is set to 300, the context window size to 8, and 25 negative samples are used in learning. The results are shown in Figure 6. Figure 6: The comparison of the proposed function described in section 2.3, \( f(x) = 0 \) (equivalent to CBOW), and \( f(x) = 1 \) (no drop-out) as the control function, under corpora of varying size. The green bar (left) indicates the scores of the proposed function; the blue bar (middle), those of \( f(x) = 0 \); the pink bar (right), those of \( f(x) = 1 \). We report 100 * \( \rho \) for SimLex and MEN, and the percentage of correct answers for SEM and SYN. The vector-space dimension is set to 300. The context window size is set to 8. 25 negative samples are used in learning. We can see that: • The proposed function outperforms the others for SimLex and MEN under text8, for all the benchmarks under enwiki9, and for SimLex, SEM, and SYN under ukWaC.
• The proposed function is always better than \( f(x) = 1 \) in the experiments, no matter the benchmark or the corpus size. • For SEM, the proposed function is weaker than \( f(x) = 0 \) under text8, slightly better under enwiki9, and clearly better under ukWaC. As the proposed function outperforms under the larger corpora, the relatively low scores under text8 may be caused by randomness: the proposed function involves random numbers, which bring instability under such a tiny corpus. Another possible reason is that the control function is less useful for text8 because there are few polysemous words in such a tiny corpus. • There is no advantage to using \( f(x) = 1 \) instead of \( f(x) = 0 \) for either text8 or enwiki9. This shows that learning context words replaced by paraphrases may not be a good idea without a fuzzy approach. However, with the proposed control function the results are better, exceeding those of \( f(x) = 0 \) in most tests, which shows that the control function utilizing fuzzy paraphrases improves the performance. Therefore, we can see that the proposed control function, using fuzzy paraphrases annotated with degrees of reliability, improves the quality of the learned word vector space. 4 COMPARISON WITH THE PRIOR WORKS We compared our work to the prior works that use a lexicon to improve word vectors. However, we failed to use the public code to reproduce the works of Yu & Dredze (2014) and Bollegala et al. (2016), and we failed to find an available implementation of Xu et al. (2014). Hence, we use the same corpus and benchmarks as Bollegala et al. (2016) and compare our results with the scores of the prior works reported in their paper. The benchmarks are: • the MEN dataset (MEN); • the word analogical reasoning task (SEM and SYN). The Rubenstein-Goodenough dataset (RG) (Rubenstein & Goodenough, 1965) is also used in their work; however, we do not use it, because it fails the sanity check of Batchkarov et al. (2016): its \( \rho \) may increase when noise is added. We use ukWaC to learn the word vectors, the same as Bollegala et al. (2016), and we use the same parameters as the prior works: the vector-space dimension is set to 300, the context window size to 8, and the number of negative samples to 25. We then calculate the cosine similarity of the words and report 100 * \( \rho \) for MEN, and we use the add method described in section 3.2 and report the percentage of correct answers for the word analogical reasoning task.
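For reference, the reported word-similarity score (100 * \( \rho \)) can be computed as in the sketch below; `pairs` holds (word1, word2, human rating) triples, out-of-vocabulary pairs are skipped, and all names are illustrative rather than the authors' code.

```python
# Cosine similarity per word pair, then Spearman's rank correlation against
# the human ratings, reported as 100 * rho.
import numpy as np
from scipy.stats import spearmanr

def similarity_score(vecs, vocab, pairs):
    model, human = [], []
    for w1, w2, gold in pairs:
        if w1 in vocab and w2 in vocab:
            v1, v2 = vecs[vocab[w1]], vecs[vocab[w2]]
            model.append(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
            human.append(gold)
    return 100 * spearmanr(model, human).correlation
```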
Table 3 shows the results of the experiments. Table 3: Comparison to the prior works. The scores of the prior works under ukWaC are from Bollegala et al. (2016). The SYN scores of ours and Bollegala's are both marked as best because the margin of error is 1.79, as shown in Table 1. <table> <tr> <th>Method</th> <th>MEN</th> <th>SEM</th> <th>SYN</th> </tr> <tr> <td>Our Proposed Method</td> <td><b>76.99</b></td> <td><b>67.48</b></td> <td><b>67.89</b></td> </tr> <tr> <td>Bollegala et al. (2016)</td> <td>70.90</td> <td>61.46</td> <td><b>69.33</b></td> </tr> <tr> <td>Yu & Dredze (2014)</td> <td>50.10</td> <td>-</td> <td>29.90</td> </tr> <tr> <td>R-Net (Xu et al. 2014)</td> <td>-</td> <td>32.64</td> <td>43.46</td> </tr> <tr> <td>C-Net (Xu et al. 2014)</td> <td>-</td> <td>37.07</td> <td>40.06</td> </tr> <tr> <td>RC-Net (Xu et al. 2014)</td> <td>-</td> <td>34.36</td> <td>44.42</td> </tr> <tr> <td>Faruqui et al. (2015) (Pretrained by CBOW)</td> <td>60.50</td> <td>36.65</td> <td>52.50</td> </tr> <tr> <td>Faruqui et al. (2015) (Pretrained by Skipgram)</td> <td>65.70</td> <td>45.29</td> <td>65.65</td> </tr> </table> The margins of error of MEN and SEM are 1.14 and 1.32, as shown in Table 1, far smaller than our leads over the next-best scores; therefore our proposed method outperforms the prior works under these benchmarks. We consider our score for SYN to be as good as that of Bollegala et al. (2016), and better than the others, because its margin of error is 1.79, as shown in Table 1. 5 CONCLUSION & FUTURE WORK We proposed a fuzzy approach to control the contamination caused by polysemous words when a lexicon is used to improve vector-space word representations. We annotate each paraphrase of a word with a degree of reliability, like the members of a fuzzy set with their memberships, on the basis of their multilingual similarities to the original words. We use the fuzzy paraphrases to learn a corpus jointly with a generated text in which the original words are randomly replaced by their paraphrases; a paraphrase is less likely to be put into the generated text if its reliability is lower than the others', and vice versa. We tested the performance of the different types of paraphrases in the lexicon PPDB2.0 and found it best to use the Equivalence and Entailment types; using the OtherRelated paraphrases deteriorates the performance. We explored the effects of the parameters and found that the best parameter settings depend on the task, so the model should be tuned carefully in practical use. We evaluated the effectiveness of our approach by comparing it to the settings in which simpler functions control the replacements: \( f(x) = 1 \), which accepts all, and \( f(x) = 0 \), which rejects all. We also repeated the experiments under a tiny, a medium-sized, and a large corpus to see the effect of corpus size. Our approach achieves the best scores in 3 of 4 benchmarks under the tiny corpus, and in all benchmarks under the medium-sized and large ones. The results indicate that our approach is effective at improving the word vectors. Our proposed method also achieved the top scores compared with the prior works. Unlike the previous works that address polysemy by estimating a vector for each word sense or word type, our approach keeps one vector per word. This makes the word vectors easier to use in practice: it is necessary neither to disambiguate word senses nor to tag parts of speech before using them. The fuzzy paraphrases can also be employed in other models with some changes; we will show this in future work. The proposed idea of handling polysemy without word sense disambiguation is meaningful especially for practical use, because it saves the effort of part-of-speech tagging and word sense disambiguation. Besides, the control function may be more accurate if it considers the whole context; we will also work on this in the future. We have released the source of a demo of the proposed method online\footnote{https://github.com/huaijianjiu/Bernoulli-CBOFP}. REFERENCES Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209–226, 2009. Miroslav Batchkarov, Thomas Kober, Jeremy Reffin, Julie Weeds, and David Weir. A critique of word similarity as a method for evaluating distributional semantic models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pp.
7, 2016. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606, 2016. Danushka Bollegala, Alsuhaibani Mohammed, Takanori Maehara, and Ken-Ichi Kawarabayashi. Joint word representation learning using a corpus and a semantic lexicon. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI'16), 2016. Elia Bruni, Nam Khanh Tran, and Marco Baroni. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49(1):1–47, January 2014. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. A unified model for word sense representation and disambiguation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1025–1035, 2014. Billy Chiu, Anna Korhonen, and Sampo Pyysalo. Intrinsic evaluation of word vectors fails to predict extrinsic performance. In Proceedings of RepEval 2016, 2016. Rajarshi Das, Manzil Zaheer, and Chris Dyer. Gaussian LDA for topic models with word embeddings. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, 2015. Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL, 2015. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. Placing search in context: The concept revisited. In Proceedings of the 10th International Conference on World Wide Web, pp. 406–414. ACM, 2001. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. PPDB: The paraphrase database. In Proceedings of NAACL-HLT, pp. 758–764, Atlanta, Georgia, June 2013. Association for Computational Linguistics. Felix Hill, Roi Reichart, and Anna Korhonen. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 2016. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, pp. 873–882. Association for Computational Linguistics, 2012. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759, 2016. Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014. Quoc V. Le and Tomas Mikolov. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014), volume 14, pp. 1188–1196, 2014. Omer Levy and Yoav Goldberg. Dependency-based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), 2014a. Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pp. 2177–2185, Cambridge, MA, USA, 2014b. MIT Press. Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity with lessons learned from word embeddings.
Transactions of the Association for Computational Linguistics, 3:211–225, 2015. Shaohua Li, Tat-Seng Chua, Jun Zhu, and Chunyan Miao. Generative topic embedding: a continuous representation of documents. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Association for Computational Linguistics, 2016. Minh-Thang Luong, Richard Socher, and Christopher D. Manning. Better word representations with recursive neural networks for morphology. In CoNLL, Sofia, Bulgaria, 2013. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In ICLR Workshop, 2013a. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111–3119, 2013b. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. Efficient non-parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1059–1069, Doha, Qatar, October 2014. Association for Computational Linguistics. Ellie Pavlick, Johan Bos, Malvina Nissim, Charley Beller, Benjamin Van Durme, and Chris Callison-Burch. Adding semantics to data-driven paraphrasing. In Proceedings of ACL, Beijing, China, July 2015a. Association for Computational Linguistics. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proceedings of ACL, Beijing, China, July 2015b. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543, 2014. Herbert Rubenstein and John B. Goodenough. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633, 1965. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 1201–1211. Association for Computational Linguistics, 2012. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27 (NIPS 2014), 2014. Joseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), pp. 384–394. Association for Computational Linguistics, 2010. Peter D. Turney. Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44(1):533–585, May 2012. Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. RC-NET: A general framework for incorporating knowledge into word representations. In Proceedings of the 23rd ACM International Conference on Information and Knowledge Management, pp. 1219–1228. ACM, 2014. Mo Yu and Mark Dredze. Improving lexical embeddings with semantic knowledge.
In the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014), pp. 545–550. Association for Computational Linguistics, 2014. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. CoRR, abs/1409.2329, 2014.
ABSTRACT A synonym of a polysemous word is usually only the paraphrase of one sense among many. When lexicons are used to improve vector-space word representations, such paraphrases are unreliable and bring noise to the vector-space. The prior works use a coefficient to adjust the overall learning of the lexicons. They regard the paraphrases equally. In this paper, we propose a novel approach that regards the paraphrases diversely to alleviate the adverse effects of polysemy. We annotate each paraphrase with a degree of reliability. The paraphrases are randomly eliminated according to the degrees when our model learns word representations. In this way, our approach drops the unreliable paraphrases, keeping more reliable paraphrases at the same time. The experimental results show that the proposed method improves the word vectors. Our approach is an attempt to address the polysemy problem keeping one vector per word. It makes the approach easier to use than the conventional methods that estimate multiple vectors for a word. Our approach also outperforms the prior works in the experiments. 1 INTRODUCTION Vector-space representations of words are reported useful and improve the performance of the machine learning algorithms for many natural language processing tasks such as name entity recognition and chunking (Turian et al., 2010), text classification (Socher et al., 2012; Le & Mikolov, 2014; Kim, 2014; Joulin et al., 2016), topic extraction (Das et al., 2015; Li et al., 2016), and machine translation (Zaremba et al., 2014; Sutskever et al., 2014). People are still trying to improve the vector-space representations for words. Bojanowski et al. (2016) attempt to improve word vectors by involving character level information. Other works (Yu & Dredze, 2014; Xu et al., 2014; Faruqui et al., 2015; Bollegala et al., 2016) try to estimate better word vectors by using a lexicon or ontology. The idea is simple: because a lexicon or ontology contains well-defined relations about words, we can use them to improve word vectors. However, for a polysemous word, one of its synonym does not always mean the same thing with the original one under different contexts. For example, the word "point" equals "score" in "Team A got 3 points", but does not in "my point of view." A method to address this issue is to estimate a vector for each word sense (Huang et al., 2012; Chen et al., 2014) or per word type (Neelakantan et al., 2014). However, it requires additional word sense disambiguation or part-of-speech tagging to use such word vectors. In this paper, we propose a method to improve the vector-space representations using a lexicon and alleviate the adverse effect of polysemy, keeping one vector per word. We estimate the degree of reliability for each paraphrase in the lexicon and eliminate the ones with lower degrees in learning. The experimental results show that the proposed method is effective and outperforms the prior works. The major contributions of our work include: • We propose a novel approach involving fuzzy sets to reduce the noise brought by polysemous words in the word vector space when a lexicon is used for learning, and a model to use the fuzzy paraphrase sets to learn the word vector space. ![The process flow of the proposed method.](page_324_186_900_377.png) Figure 1: The process flow of the proposed method. • Although some prior works propose to solve the polysemy problem by estimating one vector per word sense or type, using such word vectors requires additional pre-process. 
Our proposed method keeps one vector per word. It makes the word vectors easier to use in practical terms: it is neither necessary to disambiguate the word senses nor to tag the part-of-speeches before we use the word vectors. We give an introduction of our proposed method in section 2. We show the effects of different paraphrase sets, parameters, corpus size, and evaluate the effectiveness of our approach by comparing to simpler algorithms in section 3. We compare our approach with the prior works via an evaluation experiment in section 4. We give the findings, conclusions and outlook in section 5. 2 THE PROPOSED METHOD 2.1 FUZZY PARAPHRASES As described in section 1, whether a polysemous word’s paraphrase is the same as the original depends on the context. Henceforth, if we simply use all the paraphrases of a word in the lexicon to improve the word vector without discrimination, they may sometimes bring noise to the vector-space. A conventional method for them is to give each word sense a vector. However, such vector-spaces require additional word sense disambiguation in practical use. Here, we propose a method to alleviate the adverse effects of polysemous words’ paraphrases without word sense disambiguation. Our idea is to annotate each paraphrase with a degree about its reliability, like a member of a fuzzy set. We call such paraphrases as “fuzzy paraphrases”, and their degrees as the “memberships.” 2.2 LEARNING WITH FUZZY PARAPHRASES We also propose a novel method to jointly learn corpus with a lexicon, in order to use fuzzy paraphrases to improve the word vectors. If the meanings of two words are totally the same, they can replace each other in a text without changing the semantic features. Henceforth, we can learn the lexicon by replacing the words in the corpus with its lexical paraphrases. We learn the word vectors by maximizing the probability of a word for a given context, and also for a generated context where words are replaced by their paraphrases randomly. The memberships of the fuzzy paraphrases are used here to control the probability that the replacements occur by a control function as shown in Figure 1. For a text corpus \( T \), denote \( w_i \) the ith word in \( T \), \( c \) the context window, \( w_j \) a word in the context window, \( L_{w_j} \) the paraphrase set of \( w_j \) in the lexicon \( L \), \( w_k \) the \( k \)th fuzzy paraphrase in \( L_{w_j} \), and \( x_{jk} \) the membership of \( w_k \) for \( w_j \), the objective is \[ \sum_{w_i \in T} \sum_{(i-c) \leq j \leq (i+c)} \left[ \log p(w_i|w_j) + \sum_{w_k \in L_{w_j}}^{L_{w_j}} f(x_{jk}) \log p(w_i|w_k) \right]. \] (1) The function \( f(x_{jk}) \) of the membership \( x_{jk} \) is a specified drop-out function. It returns 0 more for the paraphrases that have lower memberships, and 1 more for the others. 2.3 Membership Estimation & Control Function \( f(x) \) Looking for a control function that is easy to train, we notice that if two words are more often to be translated to the same word in another language, the replacement of them are less likely to change the meaning of the original sentence. Thus, we use a function of the bilingual similarity (denoted as \( S_{jk} \)) as the membership function: \[ x_{jk} = g(S_{jk}). \] (2) There have been works about calculating the similarity of words using such bilingual information. 
A lexicon called the paraphrase database (PPDB) provides similarity scores for paraphrases on the basis of bilingual features (Ganitkevitch et al., 2013; Pavlick et al., 2015b;a). We scale the similarity scores in PPDB2.0 to \([0, 1]\) to obtain the memberships, and draw the values of \( f(x_{jk}) \) from Bernoulli distributions parameterized by them. Denoting by \( S_{jk} \) the similarity score of words \( w_j \) and \( w_k \) in PPDB2.0, the value of \( f(x_{jk}) \) is drawn from the Bernoulli distribution: \[ f(x_{jk}) \sim \mathrm{Bernoulli}(x_{jk}), \] (3) \[ x_{jk} = \frac{S_{jk}}{\max_{j \in T, k \in L} S_{jk}}. \] (4) 2.4 TRAINING With the method described above, \( f(x_{jk}) \) does not need to be trained. The model can be trained by negative sampling (Mikolov et al., 2013b): for a word \( w_O \) and a word \( w_I \) in its context, denoting by \( A_I \) the set of paraphrases of \( w_I \) accepted by \( f(x_{jk}) \), we maximize \( \log p(w_O|w_I) \) by distinguishing noise words drawn from a noise distribution \( P_n(w) \) from \( w_O \) and its accepted paraphrases in \( A_I \) by logistic regression: \[ \log p(w_O|w_I) = \log \sigma(v_{w_O}^T v_{w_I}) + \sum_{i=1}^{n} \mathbb{E}_{w_i \sim P_n(w)} \left[ \log \sigma(-v_{w_i}^T v_{w_I}) \right], \quad w_i \neq w_O, \; w_i \notin A_I. \] (5) Here, \( v_{w_O}^T \) and \( v_{w_i}^T \) are the transposes of the vectors \( v_{w_O} \) and \( v_{w_i} \), respectively. \( n \) is the number of negative samples used. \( \sigma(x) \) is a sigmoid function, \( \sigma(x) = 1/(1 + e^{-x}) \).
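To make the replacement scheme of sections 2.2-2.4 concrete, the following is a minimal sketch of how fuzzy paraphrases could be dropped according to their memberships while generating training pairs. The names (`fuzzy_pairs`, `ppdb`, `s_max`) are illustrative, not from a released implementation; each yielded pair would be trained with negative sampling (eq. 5) by any word2vec-style trainer.

```python
import random

def membership(s_jk, s_max):
    # Eq. (4): scale the PPDB2.0 score of the pair (w_j, w_k) to [0, 1].
    return s_jk / s_max

def fuzzy_pairs(corpus, ppdb, s_max, window=8):
    """Yield (target, context) training pairs, augmented with fuzzy
    paraphrases of the context words (eqs. (1), (3), (4)).
    `ppdb` maps a word to a list of (paraphrase, score) pairs."""
    for i, w_i in enumerate(corpus):
        lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
        for j in range(lo, hi):
            if j == i:
                continue
            w_j = corpus[j]
            yield w_i, w_j                       # contributes log p(w_i | w_j)
            for w_k, s_jk in ppdb.get(w_j, []):  # fuzzy paraphrases of w_j
                # Eq. (3): f(x_jk) ~ Bernoulli(x_jk), so unreliable
                # paraphrases are dropped more often than reliable ones.
                if random.random() < membership(s_jk, s_max):
                    yield w_i, w_k               # contributes log p(w_i | w_k)
```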
3 MODEL EXPLORATION 3.1 CORPUS FOR EXPERIMENTS We use enwiki9\footnote{http://mattmahoney.net/dc/enwiki9.zip} mainly for tuning and model exploration. It has a moderate size (1 GB), containing 123,353,508 tokens. It provides enough data to alleviate randomness while not taking too much time for our model to learn. Table 1: The results of 10 repeated runs of learning and testing, evaluated on each benchmark. The vector-space dimension is set to 100. Enwiki9 is used as the corpus. The maximum, minimum, and margin of error of each column are marked bold. <table> <tr> <th>Run</th> <th>SimLex</th> <th>WS353</th> <th>RW</th> <th>MEN</th> <th>SEM</th> <th>SYN</th> </tr> <tr> <td>1</td> <td>29.41</td> <td>62.02</td> <td>38.12</td> <td>60.00</td> <td>13.26</td> <td><b>27.77</b></td> </tr> <tr> <td>2</td> <td><b>29.57</b></td> <td>62.49</td> <td>38.26</td> <td>60.39</td> <td><b>12.70</b></td> <td>27.27</td> </tr> <tr> <td>3</td> <td>29.48</td> <td>61.04</td> <td>39.90</td> <td>59.80</td> <td>13.89</td> <td>26.94</td> </tr> <tr> <td>4</td> <td>29.52</td> <td><b>60.20</b></td> <td>39.68</td> <td>59.81</td> <td><b>14.02</b></td> <td>27.11</td> </tr> <tr> <td>5</td> <td>28.69</td> <td><b>63.45</b></td> <td>38.65</td> <td>60.16</td> <td>12.94</td> <td>26.87</td> </tr> <tr> <td>6</td> <td>29.26</td> <td>61.95</td> <td>39.13</td> <td>59.73</td> <td>13.75</td> <td>26.60</td> </tr> <tr> <td>7</td> <td>29.46</td> <td>62.90</td> <td>39.12</td> <td><b>60.45</b></td> <td>13.42</td> <td><b>25.98</b></td> </tr> <tr> <td>8</td> <td><b>28.51</b></td> <td>62.96</td> <td><b>37.93</b></td> <td><b>59.31</b></td> <td>13.58</td> <td>27.10</td> </tr> <tr> <td>9</td> <td>29.13</td> <td>62.44</td> <td><b>39.91</b></td> <td>59.75</td> <td>13.98</td> <td>26.89</td> </tr> <tr> <td>10</td> <td>28.59</td> <td>60.66</td> <td>38.67</td> <td>60.24</td> <td>13.66</td> <td>26.98</td> </tr> <tr> <th>Margin of Error</th> <th>0.98</th> <th>2.41</th> <th>1.98</th> <th>1.14</th> <th>1.32</th> <th>1.79</th> </tr> </table> We use ukWaC (Baroni et al., 2009) to compare with the prior works in section 4. But we do not use it for model exploration, because it is an enormous corpus containing 12 GB of text and takes more than 20 hours to learn. 3.2 BENCHMARKS We use several benchmarks: Wordsim-353 (WS353) (Finkelstein et al., 2001) (353 word pairs), SimLex-999 (SimLex) (Hill et al., 2016) (999 word pairs), the Stanford Rare Word Similarity Dataset (RW) (Luong et al., 2013) (2034 word pairs), the MEN Dataset (MEN) (Bruni et al., 2014) (3000 word pairs), and Mikolov’s (Google’s) word analogical reasoning task (Mikolov et al., 2013a). WS353, SimLex, and RW are gold standards that provide word similarities labeled by humans. We report Spearman’s rank correlation (\( \rho \)) for them. Mikolov’s word analogical reasoning task is another widely used benchmark for word vectors. It contains a semantic part (SEM) and a syntactic part (SYN). We use the basic method suggested in their paper to find the answer: to guess the word \( b' \) related to \( b \) in the same way as \( a' \) is related to \( a \), the word closest in cosine similarity to \( a' - a + b \) is returned as \( b' \) (a sketch of this procedure is given at the end of this subsection). We find that the benchmark scores change every time we learn the corpus, even under the same settings. This is because the models involve random numbers. Therefore, we should take the margin of error into account when we use the benchmarks. To test the margin of error, we first repeated learning enwiki9 ten times with our proposed method under the same parameters. Then we tested the vectors on each benchmark to find the margin of error. In each test, we used the same parameters: the vector dimension was set to 100 for speed, the window size was set to 8, and 25 negative samples were used. The results are shown in Table 1. We use them to analyze the other experimental results later.
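The following is a minimal sketch of the "add" answering method just described, assuming a unit-normalized embedding matrix `vecs` and a `vocab` dict mapping words to row indices (both names are illustrative).

```python
import numpy as np

def analogy(a, a_prime, b, vecs, vocab):
    """Return b' such that a' is to a as b' is to b: the word whose
    vector is closest in cosine similarity to a' - a + b.
    `vecs` is a (V, d) matrix with unit-normalized rows."""
    query = vecs[vocab[a_prime]] - vecs[vocab[a]] + vecs[vocab[b]]
    query /= np.linalg.norm(query)
    sims = vecs @ query                      # cosine similarity to all words
    for w in (a, a_prime, b):                # exclude the question words
        sims[vocab[w]] = -np.inf
    inverse = {i: w for w, i in vocab.items()}
    return inverse[int(np.argmax(sims))]
```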
3.3 DIFFERENT TYPES OF PARAPHRASES In PPDB2.0, there are six relationships between paraphrases. For words \( X \) and \( Y \), the different relationships defined in PPDB2.0 are shown in Table 2. We do not consider the exclusion and independent relations because they are not semantic paraphrases. Equivalence paraphrases are the most reliable because they are the closest, but we still want to know whether it is better to also take the entailment and the other related paraphrases into consideration. Table 2: Different types of relationships of paraphrases in PPDB2.0 (Pavlick et al., 2015b;a). <table> <tr> <th>Relationship Type</th> <th>Description</th> </tr> <tr> <td>Equivalence</td> <td>\( X \) is the same as \( Y \)</td> </tr> <tr> <td>Forward Entailment</td> <td>\( X \) is more specific than/is a type of \( Y \)</td> </tr> <tr> <td>Reverse Entailment</td> <td>\( X \) is more general than/encompasses \( Y \)</td> </tr> <tr> <td>Exclusion</td> <td>\( X \) is the opposite of \( Y \) / \( X \) is mutually exclusive with \( Y \)</td> </tr> <tr> <td>OtherRelated</td> <td>\( X \) is related in some other way to \( Y \)</td> </tr> <tr> <td>Independent</td> <td>\( X \) is not related to \( Y \)</td> </tr> </table> Figure 2: The \( \rho \) for SimLex using different paraphrase sets. The corpus is enwiki9. The vector-space dimension is set to 300. The context window size is set to 8. 25 negative samples are used in learning. We learn enwiki9 with different paraphrase sets and use SimLex to evaluate the trained vectors. Figure 2 compares the performance of the different paraphrase sets, tested on SimLex. We can see that it is best to use the equivalence and entailment (forward + reverse) paraphrases together, or to use only the equivalence paraphrases. Using only the entailment paraphrases is weak, and involving the other related paraphrases deteriorates the performance. Based on these results, we use the equivalence and entailment paraphrases in the following experiments. 3.4 EFFECTS OF PARAMETERS We use our proposed method to learn enwiki9 under different parameter settings and then test the vectors using SimLex, WS353, RW, MEN, SEM and SYN. We report Spearman’s rank correlation \( \rho \) for SimLex, WS353, RW and MEN, and the percentage of correct answers for SEM and SYN (a sketch of the \( \rho \) evaluation is given at the end of section 3.4.1). Figure 3: The scores of the benchmarks using different vector-space dimensions; panels (a)-(f) show SimLex, WS353, RW, MEN, SEM and SYN, respectively. For WS353, SimLex, RW and MEN, we report 100 * \( \rho \) (Spearman’s rank correlation). For word analogical reasoning, we report the percentage of correct answers. The context window size is set to 8. The number of negative samples is set to 25. 3.4.1 EFFECTS OF VECTOR SPACE DIMENSION We compare the benchmark scores using different vector-space dimensions. Figure 3 shows the change of each benchmark’s scores under different dimensions. We find that: • Larger vectors do not bring better performance for most of the benchmarks (except SimLex), although some previous works suggest that higher dimensions bring better performance for their methods (Pennington et al., 2014; Levy & Goldberg, 2014b). • The curves of SimLex and SYN are gradual, whereas the others, notably WS353 and RW, show several abrupt changes. • The best dimension is not consistent across benchmarks. The differences in the content of the benchmarks may cause this inconsistency. For example, SimLex rates related but dissimilar words lower than the other word similarity benchmarks (Hill et al., 2016; Chiu et al., 2016). The results suggest that the best dimension for our method depends on the task.
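As a concrete illustration of the evaluation protocol, here is a minimal sketch of how 100 * \( \rho \) could be computed for a word similarity benchmark such as SimLex; `pairs`, `vecs` and `vocab` are illustrative names for the benchmark rows, the embedding matrix and the word-to-index map.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(pairs, vecs, vocab):
    """`pairs` holds (word1, word2, human_score) rows, e.g. from SimLex-999.
    Returns 100 * Spearman's rho between human scores and cosine
    similarities, skipping out-of-vocabulary pairs."""
    human, model = [], []
    for w1, w2, score in pairs:
        if w1 in vocab and w2 in vocab:
            v1, v2 = vecs[vocab[w1]], vecs[vocab[w2]]
            cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
            human.append(score)
            model.append(cos)
    rho, _ = spearmanr(human, model)
    return 100.0 * rho
```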
3.4.2 EFFECTS OF CONTEXT WINDOW SIZE We compare the benchmark scores using different context window sizes; they are shown in Figure 4. Previous works argue that larger window sizes introduce more topic words, while smaller ones emphasize word function (Turney, 2012; Levy & Goldberg, 2014a; Levy et al., 2015; Hill et al., 2016; Chiu et al., 2016). Different context window sizes thus provide different balances between relatedness and similarity, and the best window size depends on what we want the vectors to capture. We also see this in our results: the relationship between window size and performance depends on how each benchmark rates its word pairs. For example, WS353 rates word pairs according to association rather than similarity (Finkelstein et al., 2001; Hill et al., 2016). As larger windows capture relatedness rather than similarity, the results show that the larger the window, the better for WS353. Figure 4: The scores of the benchmarks using different context window sizes; panels (a)-(f) show SimLex, WS353, RW, MEN, SEM and SYN, respectively. For WS353, SimLex, RW and MEN, we report 100 * \( \rho \) (Spearman’s rank correlation). For word analogical reasoning, we report the percentage of correct answers. We use 100-dimension vectors. The number of negative samples is set to 25. The MEN dataset also prefers relatedness over similarity (Bruni et al., 2014), but its annotators were given examples involving similarity.\footnote{According to their homepage: http://clic.cimec.unitn.it/elia.bruni/MEN.html} This may be why windows larger than 8 deteriorate the MEN scores (Figure 4d). The rating standards of WS353 and MEN are similar (Bruni et al., 2014), which leads to their similar curves (Figures 4b and 4d). Their worst window sizes are also close: when the window size is set to about 2 or 3, respectively, the balance of similarity and relatedness is worst for them. Unlike the other word similarity datasets, SimLex rates synonyms high and related but dissimilar word pairs low. Therefore, the smallest window is the most suitable for SimLex, because it is best at capturing functional similarity. The results on RW differ from the others (Figure 4c), with many abrupt changes. The best window size is 10, but 1 is better than 2-9. The dataset contains rare words; because of their low frequencies, a broad context window may help to draw features for them, but the additional words introduced by larger windows may also deteriorate the vectors of unusual words. For tasks requiring high-quality rare word vectors, the context window size should be tuned carefully. For Google’s word analogical tasks (SEM and SYN), the questions are closely related to topic or domain. For example, there are questions about the capitals of countries; the words are associated but not synonymous. Therefore a larger window is usually better. However, for SYN, window size 9 is a little better than 10 (Figure 4f), and for MEN, 8 is best (Figure 4d). This may be because a window that is too large introduces too many words and reduces sparsity (Chiu et al., 2016).
We conclude that the best context window size depends on the task, but windows that are too large should be avoided. 3.4.3 EFFECTS OF NEGATIVE SAMPLES We also explored the effects of the number of negative samples. The results are shown in Figure 5. Figure 5: The scores of the benchmarks using different numbers of negative samples; panels (a)-(f) show SimLex, WS353, RW, MEN, SEM and SYN, respectively. For WS353, SimLex, RW and MEN, we report 100 * \( \rho \) (Spearman’s rank correlation). For word analogical reasoning, we report the percentage of correct answers. We use 100-dimension vectors. The context window size is set to 8. In Figures 5a, 5c and 5f we see that overfitting occurs when we use more than 15 negative samples. In Figures 5b and 5e it occurs from 25 and 20, respectively. In Figure 5d the performance does not change very much when we use more than 30 negative samples. The results indicate that too many negative samples may cause overfitting. For 3 of the 6 benchmarks, it is best to use 15 negative samples, but we should be careful in practical use because the other results suggest that the best number depends on the task. The abrupt change at around 15 in Figure 5b is interesting. WS353 is the smallest dataset among those we used; because of its small size, the effects of randomness may cause such singularities when the vector space is not well trained. 3.5 EFFECTS OF THE CONTROL FUNCTION & THE CORPUS SIZE In this section, we evaluate the effectiveness of our fuzzy approach by comparing it to settings where \( f(x) \) in Equation (1) is set to: • \( f(x) = 1 \): the model regards all paraphrases equally; they are all used without drop-out. • \( f(x) = 0 \): the model uses no paraphrases, which is equivalent to CBOW. Comparing the proposed method to the settings above on corpora of varying size is also a good way to show the effects of corpus size, so we discuss them together in this section. We use text8\footnote{http://mattmahoney.net/dc/text8.zip} together with enwiki9 and ukWaC described in section 3.1. It is a small corpus containing 100 MB of text. To show the difference, we report scores not only for SimLex but also for MEN and the word analogical task (SEM and SYN), the other benchmarks that appeared relatively stable in section 3.2. The vector-space dimension is set to 300. The context window size is set to 8. 25 negative samples are used in learning. The results are shown in Figure 6. Figure 6: The comparison of the proposed control function described in section 2.3 with \( f(x) = 0 \) (equivalent to CBOW) and \( f(x) = 1 \) (no drop-out), under corpora of varying size. The green bar (left) indicates the scores of the proposed function; the blue bar (middle) indicates the scores of \( f(x) = 0 \); the pink bar (right) indicates the scores of \( f(x) = 1 \). We report 100 * \( \rho \) for SimLex and MEN, and the percentage of correct answers for SEM and SYN. The vector-space dimension is set to 300. The context window size is set to 8. 25 negative samples are used in learning. We can see that: • The proposed function outperforms the others for SimLex and MEN under text8, for all the benchmarks under enwiki9, and for SimLex, SEM and SYN under ukWaC.
• The proposed function is always better than \( f(x) = 1 \) in the experiments, no matter what the benchmark is or how big the corpus is. • For SEM, the proposed function is weaker than \( f(x) = 0 \) under text8, slightly better under enwiki9, and clearly outperforms \( f(x) = 0 \) under ukWaC. Since the proposed function does better on the larger corpora, the relatively low scores under text8 may be caused by randomness: the proposed function involves random numbers, which bring large instability on such a tiny corpus. Another possible reason is that the control function is less useful for text8 because the tiny corpus contains few polysemous words. • There is no advantage to using \( f(x) = 1 \) instead of \( f(x) = 0 \) on either text8 or enwiki9. This shows that learning from context words replaced by paraphrases may not be a good idea without a fuzzy approach. However, with the proposed control function, the results are better and exceed those of \( f(x) = 0 \) in most tests, showing that the control function utilizing fuzzy paraphrases improves the performance. Therefore, we can see that the proposed control function, using fuzzy paraphrases annotated with degrees of reliability, improves the quality of the learned word vector space. 4 COMPARISON WITH THE PRIOR WORKS We compared our work to the prior works that use a lexicon to improve word vectors. However, we failed to use the public code to reproduce the works of Yu & Dredze (2014) and Bollegala et al. (2016), and we failed to find an available implementation of Xu et al. (2014). Hence, we use the same corpus and benchmarks as Bollegala et al. (2016) and compare our results with the scores of the prior works reported in their paper. Table 3: Comparison to the prior works. The scores of the prior works under ukWaC are from Bollegala et al. (2016). The SYN scores of ours and Bollegala’s are both marked as best because the margin of error is 1.79, as shown in Table 1. <table> <tr> <th>Method</th> <th>MEN</th> <th>SEM</th> <th>SYN</th> </tr> <tr> <td>Our Proposed Method</td> <td><b>76.99</b></td> <td><b>67.48</b></td> <td><b>67.89</b></td> </tr> <tr> <td>Bollegala et al. (2016)</td> <td>70.90</td> <td>61.46</td> <td><b>69.33</b></td> </tr> <tr> <td>Yu & Dredze (2014)</td> <td>50.10</td> <td>-</td> <td>29.90</td> </tr> <tr> <td>R-Net (Xu et al., 2014)</td> <td>-</td> <td>32.64</td> <td>43.46</td> </tr> <tr> <td>C-Net (Xu et al., 2014)</td> <td>-</td> <td>37.07</td> <td>40.06</td> </tr> <tr> <td>RC-Net (Xu et al., 2014)</td> <td>-</td> <td>34.36</td> <td>44.42</td> </tr> <tr> <td>Faruqui et al. (2015) (Pretrained by CBOW)</td> <td>60.50</td> <td>36.65</td> <td>52.50</td> </tr> <tr> <td>Faruqui et al. (2015) (Pretrained by Skipgram)</td> <td>65.70</td> <td>45.29</td> <td>65.65</td> </tr> </table> The benchmarks are: • The MEN Dataset (MEN); • Word Analogical Reasoning Task (SEM and SYN). The Rubenstein-Goodenough dataset (RG) (Rubenstein & Goodenough, 1965) is also used in their work. However, we do not use it, because it fails the sanity check of Batchkarov et al. (2016): its \( \rho \) may increase when noise is added. We use ukWaC to learn the word vectors, the same as Bollegala et al. (2016). We also use the same parameters as the prior works: the vector-space dimension is set to 300; the context window size is set to 8; the number of negative samples is set to 25. Then we calculate the cosine similarity of the words and report 100 * \( \rho \) for MEN.
For the word analogical reasoning task, we use the add method described in section 3.2 and report the percentage of correct answers. Table 3 shows the results of the experiments. The margins of error of MEN and SEM are 0.86 and 0.44, as shown in Table 1. Therefore, we see that our proposed method outperforms the prior works under these benchmarks. We consider our SYN score as good as that of Bollegala et al. (2016), and better than the others, because its margin of error is 1.79, as shown in Table 1. 5 CONCLUSION & FUTURE WORK We proposed a fuzzy approach to control the contamination caused by polysemous words when a lexicon is used to improve vector-space word representations. We annotate each paraphrase of a word with a degree of reliability, like the members of a fuzzy set with their memberships, on the basis of their bilingual similarities to the original words. We use the fuzzy paraphrases by jointly learning the corpus and a generated text in which the original words are randomly replaced by their paraphrases. A paraphrase is less likely to be put into the generated text if it has lower reliability than the others, and vice versa. We tested the performance using different types of paraphrases in the lexicon PPDB2.0 and found that it is best to use the equivalence and entailment types; using the other related paraphrases deteriorates the performance. We explored the effects of the parameters and found that the best parameter setting depends on the task, so the model should be tuned carefully in practical use. We evaluated the effectiveness of our approach by comparing it to settings where simpler functions control the replacements: \( f(x) = 1 \), which accepts all paraphrases, and \( f(x) = 0 \), which rejects all of them. We also repeated the experiments on a tiny, a medium-sized, and a large corpus to see the effect of corpus size. Our approach achieves the best results on 3 of the 4 benchmarks on the tiny corpus, and on all benchmarks on the medium-sized and large ones. The results indicate that our approach is effective at improving word vectors. Our proposed method also achieved the top scores compared with the prior works. Unlike the previous works that address polysemy by estimating a vector for each word sense or word type, our approach keeps one vector per word. This makes the word vectors easier to use in practical terms: it is not necessary to disambiguate word senses or to tag parts of speech before using the word vectors. The fuzzy paraphrases can also be employed in other models with some changes; we plan to show this in future work. The proposed way of addressing polysemy without word sense disambiguation is meaningful especially for practical use, because it saves the effort of part-of-speech tagging and word sense disambiguation. Moreover, the control function may be more accurate if it considers the whole context; we will also work on this in the future. We have opened the source of a demo of the proposed method online.4
reject
Reject
4.666667
26b04b28e8bc3b0be8985d2b2659f6854f390fcd
iclr
2,017
STEERABLE CNNs Taco S. Cohen University of Amsterdam t.s.cohen@uva.nl Max Welling University of Amsterdam Canadian Institute for Advanced Research m.welling@uva.nl ABSTRACT It has long been recognized that the invariance and equivariance properties of a representation are critically important for success in many vision tasks. In this paper we present Steerable Convolutional Neural Networks, an efficient and flexible class of equivariant convolutional networks. We show that steerable CNNs achieve state of the art results on the CIFAR image classification benchmark. The mathematical theory of steerable representations reveals a type system in which any steerable representation is a composition of elementary feature types, each one associated with a particular kind of symmetry. We show how the parameter cost of a steerable filter bank depends on the types of the input and output features, and show how to use this knowledge to construct CNNs that utilize parameters effectively. 1 INTRODUCTION Much of the recent progress in computer vision can be attributed to the availability of large labelled datasets and deep neural networks capable of absorbing large amounts of information. While many practical problems can now be solved, the requirement for big (labelled) data is a fundamentally unsatisfactory state of affairs. Human beings are able to learn new concepts with very few labels, and reproducing this ability is an important challenge for artificial intelligence research. From an applied perspective, improving the statistical efficiency of deep learning is vital because in many domains (e.g. medical image analysis), acquiring large amounts of labelled data is costly. To improve the statistical efficiency of machine learning methods, many have sought to learn invariant representations. In deep learning, however, intermediate layers should not be invariant, because the relative pose of local features must be preserved for further layers (Cohen & Welling, 2016; Hinton et al., 2011). Thus, one is led to the idea of equivariance: a network is equivariant if the representations it produces transform in a predictable way under transformations of the input. In other words, equivariant networks produce representations that are steerable. Steerability makes it possible to apply filters not just in every position (as in a standard convolution layer), but in every pose, thus allowing for increased parameter sharing. Previous work has shown that equivariant CNNs yield state of the art results on classification tasks (Cohen & Welling, 2016; Dieleman et al., 2016), even though they only enforce equivariance to small groups of transformations like rotations by multiples of 90 degrees. Learning representations that are equivariant to larger groups is likely to result in further gains, but the computational cost of current methods scales linearly with the size of the group, making this impractical. In this paper we present the general theory of steerable CNNs, which covers previous approaches but also shows how the computational cost can be decoupled from the size of the symmetry group, thus paving the way for future scaling. To better understand the structure of steerable representations, we analyze them mathematically. We show that any steerable representation is a composition of low-dimensional elementary feature types. Each elementary feature can be steered independently of the others, and captures a distinct characteristic of the input that has an invariant or “objective” meaning. 
This doctrine of “observer-independent quantities” was put forward by Weyl (1939, ch. 1.4) and is used throughout physics. It has been applied to vision and representation learning by Kanatani (1990); Cohen (2013). The mentioned type system puts constraints on the network weights and architecture. Specifically, since an equivariant filter bank is required to map given input feature types to given output feature types, the number of parameters required by such a filter bank is reduced. Furthermore, by the same logic that tells us not to add meters to seconds, steerability considerations prevent us from adding features of different types (e.g. for residual learning (He et al., 2016a)). The rest of this paper is organized as follows. The theory of steerable CNNs is introduced in Section 2. Related work is discussed in Section 3, which is followed by classification experiments (Section 4) and a discussion and conclusion in Section 5. 2 STEERABLE CNNS 2.1 FEATURE MAPS AND FIBERS Consider a 2D signal \( f : \mathbb{Z}^2 \to \mathbb{R}^K \) with \( K \) channels. The signal may be an input to the network or a feature representation computed by a CNN. Since signals can be added and multiplied by scalars, the set of signals of this signature forms a linear space \( \mathcal{F} \). Each layer of the network has its own feature space \( \mathcal{F}_l \), but we will often suppress the layer index to reduce clutter. It is customary in deep learning to describe \( f \in \mathcal{F} \) as a stack of feature maps \( f_k \) (for \( k = 1, \ldots, K \)). In this paper we also consider another decomposition of \( \mathcal{F} \) into fibers. The fiber \( F_x \) at position \( x \) in the “base space” \( \mathbb{Z}^2 \) is the \( K \)-dimensional vector space spanned by all channels at position \( x \). Thus, \( f \in \mathcal{F} \) is comprised of feature vectors \( f(x) \) that live in the fibers \( F_x \) (see Figure 1(a)). (a) The feature space \( \mathcal{F} \) is decomposed into a stack of feature maps (left) and a bundle of fibers (right). (b) An image \( f \in \mathcal{F}_0 \) is rotated by \( r \) using \( \pi_0(r) \). Figure 1: Feature maps, fibers, and the transformation law \( \pi_0 \) of \( \mathcal{F}_0 \). Given some group of transformations \( G \) that acts on points in \( \mathbb{Z}^2 \), we can transform signals \( f \in \mathcal{F}_0 \): \[ [\pi_0(g)f](x) = f(g^{-1}x) \] (1) This says that the pixel at \( g^{-1}x \) gets moved to \( x \) by the transformation \( g \in G \). We note that \( \pi_0(g) \) is a linear operator. An important property of \( \pi_0 \) is that \( \pi_0(gh) = \pi_0(g)\pi_0(h) \). Here, \( gh \) means composition of transformations in \( G \), while \( \pi_0(g)\pi_0(h) \) denotes matrix multiplication. A vector space such as \( \mathcal{F}_0 \) equipped with a set of linear operators \( \pi_0 \) satisfying this condition is known as a group representation (or just representation, for short). A lot is known about group representations (Serre, 1977), and we will make extensive use of the theory, explaining the relevant concepts as needed.
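As a quick numerical illustration of eq. (1) and the homomorphism property, here is a minimal numpy sketch (our illustration, not code from the paper) of \( \pi_0 \) restricted to 90-degree rotations, acting on a \( K \)-channel signal stored as an array of shape (K, H, W):

```python
import numpy as np

def pi0(k, f):
    # [pi0(r^k) f](x) = f(r^{-k} x): rotate every feature map by k * 90
    # degrees; the K channels (fiber coordinates) are not mixed.
    return np.rot90(f, k=k, axes=(1, 2))

f = np.random.randn(3, 8, 8)  # a 3-channel signal on an 8x8 grid

# Homomorphism property: pi0(r) pi0(r) = pi0(r^2), and r^4 = e.
assert np.allclose(pi0(1, pi0(1, f)), pi0(2, f))
assert np.allclose(pi0(4, f), f)
```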
2.2 STEERABLE REPRESENTATIONS Let \( (\mathcal{F}, \pi) \) be a feature space with a group representation and \( \Phi : \mathcal{F} \to \mathcal{F}' \) a convolutional network. The feature space \( \mathcal{F}' \) is said to be (linearly) *steerable* with respect to \( G \), if for all transformations \( g \in G \), the features \( \Phi f \) and \( \Phi \pi(g)f \) are related by a linear transformation \( \pi'(g) \) that does not depend on \( f \). So \( \pi'(g) \) allows us to “steer” the features in \( \mathcal{F}' \) without referring to the input in \( \mathcal{F} \) from which they were computed. Combining the definition of steerability (i.e. \( \Phi \pi(g) = \pi'(g)\Phi \)) with the fact that \( \pi \) is a group representation, we find that \( \pi' \) must also be a group representation: \[ \pi'(gh)\Phi f = \Phi \pi(gh)f = \Phi \pi(g)\pi(h)f = \pi'(g)\Phi \pi(h)f = \pi'(g)\pi'(h)\Phi f \] (2) That is, \( \pi'(gh) = \pi'(g)\pi'(h) \) (at least in the span of the image of \( \Phi \)). Figure 2 gives an illustration. Figure 2: Diagram showing the structural consistency that follows from equivariance of the network \( \Phi \) and the group representation structure of \( \pi_0 \). The result of following any path in this diagram depends only on the beginning and endpoint but is independent of the path itself, cf. eq. 2. For simplicity, we will restrict our attention to discrete groups of transformations. The theory for continuous groups is almost completely analogous. Our running example will be the group \( p4m \), which consists of translations, rotations by 90 degrees around any point, and reflections. We further restrict our attention to groups that are constructed\footnote{as a semi-direct product} from the group of translations \( \mathbb{Z}^2 \) and a group \( H \) of transformations that fixes the origin \( 0 \in \mathbb{Z}^2 \). For \( p4m \), we have \( H = D4 \), the 8-element group of reflections and rotations about the origin. Using this division, we can first construct a filter bank that generates \( H \)-steerable fibers, and then show that convolution with such a filter bank produces a feature space that is steerable with respect to the whole group \( G \). 2.3 EQUIVARIANT FILTER BANKS A filter bank can be described as an array of dimension \( (K', K, s, s) \), where \( K, K' \) denote the number of input / output channels and \( s \) is the kernel size. For our purposes it is useful to think of a filter bank as a linear map \( \Psi : \mathcal{F} \to \mathbb{R}^{K'} \) that takes as input a signal \( f \in \mathcal{F} \) and produces a \( K' \)-dimensional feature vector. The filter bank only looks at an \( s \times s \) patch in \( \mathcal{F} \), so the matrix representing \( \Psi \) has shape \( K' \times K \cdot s^2 \). To correlate a signal \( f \) using \( \Psi \), one would simply apply \( \Psi \) to translated copies of \( f \), producing the output signal one fiber at a time. We assume (by induction) that we have a representation \( \pi \) that allows us to steer \( \mathcal{F} \). In order to make the output of the convolution steerable, we need the filter bank \( \Psi : \mathcal{F} \to \mathbb{R}^{K'} \) to be *H*-equivariant: \[ \rho(h)\Psi = \Psi \pi(h), \quad \forall h \in H \] (3) for some representation \( \rho \) of \( H \) that acts on the output fibers (see Figure 3). Note that we only require equivariance with respect to \( H \) (which excludes translations) and not \( G \), because translations can move patterns into and out of the receptive field of a fiber, making full translation equivariance impossible.
The space of maps satisfying the equivariance constraint is denoted \( \mathrm{Hom}_H(\pi, \rho) \), because an equivariant map \( \Psi \) is a “homomorphism of group representations”, meaning it respects the structure of the representations. Equivariant maps are also sometimes called *intertwiners* (Serre, 1977). Since the equivariance constraint (eq. 3) is linear in \( \Psi \), the space \( \mathrm{Hom}_H(\pi, \rho) \) of admissible filter banks is a vector space: any linear combination of maps \( \Psi, \Psi' \in \mathrm{Hom}_H(\pi, \rho) \) is again an intertwiner. Hence, given \( \pi \) and \( \rho \), we can compute a basis for \( \mathrm{Hom}_H(\pi, \rho) \) by solving a linear system (a numerical sketch is given below). Figure 3: A filter bank \( \Psi \) that is \( H \)-equivariant. In this example, \( \rho_1 \) represents the 90-degree rotation \( r \) by a permutation matrix that cyclically shifts the 4 channels. Computation of the intertwiner basis is done offline, before training. Once we have such a basis \( \psi_1, \ldots, \psi_n \) for \( \mathrm{Hom}_H(\pi, \rho) \), we can express any equivariant filter bank \( \Psi \) as a linear combination \( \Psi = \sum_i \alpha_i \psi_i \) using parameters \( \alpha_i \). As shown in Section 2.8, this can be done efficiently even in high dimensions. 2.4 INDUCTION We have shown how to parameterize filter banks that intertwine \( \pi \) and \( \rho \), making the output fibers \( H \)-steerable by \( \rho \) if the input space \( \mathcal{F} \) is \( H \)-steerable by \( \pi \). In this section we show how \( H \)-steerability of fibers \( F_x' \) leads to \( G \)-steerability of the whole feature space \( \mathcal{F}' \). This happens through a natural and important construction known as the induced representation (Mackey, 1952; 1953; 1968; Serre, 1977; Taylor, 1986; Folland, 1995; Kaniuth & Taylor, 2013). As stated before, the correlation \( \Psi * f \) could be computed by translating \( f \) before applying \( \Psi \): \[ [\Psi * f](x) = \Psi \left[ \pi(x)^{-1} f \right], \] (4) where \( x \in \mathbb{Z}^2 \) is interpreted as a translation when given as input to \( \pi \). We can now calculate the transformation law of the output space. To do so, we apply a translation \( t \) and transformation \( r \in H \) to \( f \in \mathcal{F} \), yielding \( \pi(tr)f \), and then perform the correlation with \( \Psi \). With some algebra (Appendix A), we find: \[ [\Psi * [\pi(tr)f]](x) = \rho(r) \left[ [\Psi * f]((tr)^{-1}x) \right] \] (5) Now if we define \( \pi' \) as \[ [\pi'(tr)f](x) = \rho(r) \left[ f((tr)^{-1}x) \right] \] (6) then \( \Psi * \pi(g)f = \pi'(g)\Psi * f \) (see Fig. 4). This representation \( \pi' \) is known as the representation of \( G \) induced by the representation \( \rho \) of \( H \), and is denoted \( \pi' = \mathrm{Ind}_H^G \rho \). When parsing eq. 6, it is important to keep in mind that (as indicated by the square brackets) \( \pi' \) acts on the whole feature space \( \mathcal{F}' \) while \( \rho \) acts on individual fibers. If we compare the induced representation (eq. 6) to the representation \( \pi_0 \) defined in eq. 1, we see that the difference lies only in the presence of a factor \( \rho(r) \) applied to the fibers. This factor describes how the feature channels are mixed by the transformation. The color channels in the input space do not get mixed by geometrical transformations, so we say that \( \pi_0 \) is induced from the trivial representation \( \rho_0(h) = I \).
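To make the offline basis computation of Section 2.3 concrete, here is a minimal numpy/scipy sketch (our illustration, not code from the paper): the constraint \( \rho(h)\Psi = \Psi\pi(h) \) is vectorized with the identity vec(AXB) = (B^T ⊗ A) vec(X), and the intertwiner basis is the null space of the stacked constraint matrices. It suffices to impose the constraint for a set of generators of \( H \) (e.g. \( r \) and \( m \) for D4), since the homomorphism property extends it to the whole group.

```python
import numpy as np
from scipy.linalg import null_space

def intertwiner_basis(rho_gens, pi_gens):
    """Basis of Hom_H(pi, rho): all Psi with rho(h) Psi = Psi pi(h).
    `rho_gens` and `pi_gens` are lists of matrices representing the
    same group generators, of sizes K' x K' and n x n respectively."""
    K, n = rho_gens[0].shape[0], pi_gens[0].shape[0]
    # rho Psi - Psi pi = 0  <=>  (I kron rho - pi^T kron I) vec(Psi) = 0
    rows = [np.kron(np.eye(n), r) - np.kron(p.T, np.eye(K))
            for r, p in zip(rho_gens, pi_gens)]
    basis = null_space(np.vstack(rows))          # (K*n, dim Hom_H(pi, rho))
    # Un-vectorize each basis vector into a K x n filter bank matrix.
    return [b.reshape(K, n, order="F") for b in basis.T]
```

A trainable equivariant filter bank is then any linear combination \( \Psi = \sum_i \alpha_i \psi_i \) of the returned matrices.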
Now that we have a \( G \)-steerable feature space \( \mathcal{F}' \), we can iterate the procedure by computing a basis for the space of intertwiners between \( \pi' \) (restricted to \( H \)) and some \( \rho' \) of our choosing. 2.5 FEATURE TYPES AND CHARACTER THEORY By now, the reader may be wondering how to choose \( \rho \), or indeed what the space of representations that we can choose from looks like in the first place. We will answer these questions in this section by showing that each representation has a type (encoded as a short list of integers) that corresponds to a certain symmetry or invariance of the feature. We further show how the number of parameters of an equivariant filter bank depends on the types of the representations \( \pi \) and \( \rho \) that it intertwines. Our discussion will make use of a number of important elementary results from group representation theory which are stated but not proved. The reader wishing to go deeper may consult chapters 1 and 2 of the excellent book by Serre (1977). Recall that a group representation is a set of invertible linear maps \( \rho(g) : \mathbb{R}^K \to \mathbb{R}^K \) satisfying \( \rho(gh) = \rho(g)\rho(h) \) for all elements \( g, h \in H \). It can be shown that any representation is a direct sum (i.e. block_diag plus change of basis) of a number of “elementary” representations associated with \( G \). These building blocks are called irreducible representations (or irreps), because they can themselves not be block-diagonalized. Figure 4: The representation \( \pi_1 \) induced from the permutation representation \( \rho_1 \) shown in fig. 3. A single fiber is highlighted. It is transported to a new location, and acted on by \( \rho_1 \). <table> <tr> <th>Irrep</th> <th>e</th> <th>r</th> <th>r^2</th> <th>r^3</th> <th>m</th> <th>mr</th> <th>mr^2</th> <th>mr^3</th> </tr> <tr> <td>A1</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> </tr> <tr> <td>A2</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[-1]</td> <td>[-1]</td> <td>[-1]</td> <td>[-1]</td> </tr> <tr> <td>B1</td> <td>[1]</td> <td>[-1]</td> <td>[1]</td> <td>[-1]</td> <td>[1]</td> <td>[-1]</td> <td>[1]</td> <td>[-1]</td> </tr> <tr> <td>B2</td> <td>[1]</td> <td>[-1]</td> <td>[1]</td> <td>[-1]</td> <td>[-1]</td> <td>[1]</td> <td>[-1]</td> <td>[1]</td> </tr> <tr> <td>E</td> <td>\([1\ 0]\)\n\([0\ 1]\)</td> <td>\([0\ -1]\)\n\([1\ 0]\)</td> <td>\([-1\ 0]\)\n\([0\ -1]\)</td> <td>\([0\ 1]\)\n\([-1\ 0]\)</td> <td>\([-1\ 0]\)\n\([0\ 1]\)</td> <td>\([0\ 1]\)\n\([1\ 0]\)</td> <td>\([1\ 0]\)\n\([0\ -1]\)</td> <td>\([0\ -1]\)\n\([-1\ 0]\)</td> </tr> </table> Table 1: The irreducible representations of the roto-reflection group D4. This group is generated by 90-degree rotations r and mirror reflections m, and has 5 irreps labelled A1, A2, B1, B2, E. The table lists the representation matrix of each irrep for each element of D4, with the matrices of the composite elements determined by \( \varphi(mr^k) = \varphi(m)\varphi(r)^k \); the basis filters in \( \mathcal{F}_0 \) that accompany each irrep in the original table are images and are omitted here. In the space \( \mathcal{F}_0 \) of \( 3 \times 3 \) filters with one channel, the decomposition of \( \pi_0 \) (eq. 1) turns out to have type (3, 0, 1, 1, 2), meaning there are three copies of A1, one copy of B1, one copy of B2, and two copies of the 2D irrep E (A2 does not appear). The reader may verify that these are valid representations, and that the characters (traces) are orthogonal.
In other words, if \( \varphi_i \) are the irreducible representations of \( H \), any representation \( \rho \) of \( H \) can be written in block-diagonal form: \[ \rho(g) = A \begin{bmatrix} \varphi_{i_1}(g) & & \\ & \ddots & \\ & & \varphi_{i_n}(g) \end{bmatrix} A^{-1} \] for some basis matrix \( A \), and some \( i_k \) that index the irreps (each irrep may occur 0 or more times). Each irreducible representation corresponds to a type of symmetry, as shown in Table 1. For example, as can be seen in this table, the representations B1 and B2 represent the 90-degree rotation \( r \) as the matrix \([-1]\), so the basis filters for these representations change sign when rotated by \( r \). It should be noted that in the higher layers \( l > 0 \), elementary basis filters can look different because they depend on the representation \( \pi_l \) that is being decomposed. The fact that all representations can be decomposed into a direct sum of irreducibles implies that each representation has a basis-independent *type*: which irreducible representations appear in it, and with what multiplicity. For example, the input representation \( \pi_0 \) (Table 1) has type (3, 0, 1, 1, 2). This means that, for instance, \( \pi_0(r) \) is block-diagonalized as: \[ A^{-1} \pi_0(r) A = \text{block\_diag}([1], [1], [1], [-1], [-1], [0\ -1; 1\ 0], [0\ 1; -1\ 0]), \] where the block matrix contains (3, 0, 1, 1, 2) copies of the irreps (A1, A2, B1, B2, E), evaluated at \( r \) (see column \( r \) in Table 1). The change of basis matrix \( A \) is constructed from the basis filters of Table 1 (and the same \( A \) block-diagonalizes \( \pi_0(g) \) for all \( g \)). So the most general way in which we can choose a representation \( \rho \) is to choose multiplicities \( m_i \geq 0 \) and a basis matrix \( A \). In Section 2.7 we will find that there is an important restriction on this freedom, which alleviates the need to choose a basis. The choice of multiplicities is then the only hyperparameter, analogous to the choice of the number of channels in an ordinary CNN. Indeed, the multiplicities determine the number of channels: \( K = \sum_i m_i \dim \varphi_i \). 2.6 DETERMINING THE TYPE OF THE INDUCED REPRESENTATION By choosing the type of \( \rho \), we also determine the type of \( \pi = \mathrm{Ind}_H^G \rho \) (restricted to \( H \)), but what is it? Explicit formulas exist (Reeder (2014); Serre (1977)) but are rather complicated, so we will present a simple computational procedure that can be used to determine the type of any representation. This procedure relies on the *character* \( \chi_\rho(g) = \mathrm{Tr}(\rho(g)) \) of the representation to be decomposed. The most important fact about characters is that the characters of irreps \( \varphi_i, \varphi_j \) are orthogonal: \[ \langle \chi_{\varphi_i}, \chi_{\varphi_j} \rangle \equiv \frac{1}{|H|} \sum_{h \in H} \chi_{\varphi_i}(h) \chi_{\varphi_j}(h) = \delta_{ij}. \]
Furthermore, since the trace of a direct sum equals the sum of the traces (i.e. \( \chi_{\rho \oplus \rho'} = \chi_\rho + \chi_{\rho'} \)), and every representation \( \rho \) is a direct sum of irreps, it follows that we can obtain the multiplicity of irrep \( \varphi_i \) in \( \rho \) by computing the inner product with the \( i \)-th character: \[ \langle \chi_\rho, \chi_{\varphi_i} \rangle = \langle \chi_{\oplus_j m_j \varphi_j}, \chi_{\varphi_i} \rangle = \left\langle \sum_j m_j \chi_{\varphi_j}, \chi_{\varphi_i} \right\rangle = \sum_j m_j \langle \chi_{\varphi_j}, \chi_{\varphi_i} \rangle = m_i. \] So a simple dot product of characters is all we need to determine the type of a representation (a numerical example is given at the end of section 2.6.1). As we will see next, the type of the input and output representation of a layer determines the parameter cost of that layer. 2.6.1 THE PARAMETER COST OF EQUIVARIANT CONVOLUTION LAYERS Steerable CNNs use parameters much more efficiently than ordinary CNNs. In this section we show how the number of parameters required by an equivariant layer is determined by the feature types of the input and output space, and how the efficiency of a choice of feature types may be evaluated. In section 2.3, we found that a filter bank \( \Psi \) is equivariant if and only if it lies in the vector space called \( \mathrm{Hom}_H(\pi, \rho) \). It follows that the number of parameters for such a filter bank is equal to the dimensionality of this space, \( n = \dim \mathrm{Hom}_H(\pi, \rho) \). This number is known as the intertwining number of \( \pi \) and \( \rho \) and plays an important role in the theory of group representations. As with multiplicities, the intertwining number is easily computed using characters. It can be shown (Reeder, 2014) that the intertwining number equals: \[ \dim \mathrm{Hom}_H(\pi, \rho) = \langle \chi_\pi, \chi_\rho \rangle. \] By linearity and the orthogonality of characters, we find that \( \dim \mathrm{Hom}_H(\pi, \rho) = \sum_i m_i m_i' \), for representations \( \pi, \rho \) of type \( (m_1, \ldots, m_J) \) and \( (m_1', \ldots, m_J') \), respectively. Thus, as far as the number of parameters of a steerable convolution layer is concerned, the only choice we have to make for \( \rho \) is its type: a short list of integers \( m_i \). The efficiency of a choice of type can be assessed using a quantity we call the parameter utilization: \[ \mu = \frac{\dim \pi \cdot \dim \rho}{\dim \mathrm{Hom}_H(\pi, \rho)}. \] The numerator equals \( s^2 K \cdot K' \): the number of parameters for a non-equivariant filter bank. The denominator equals the parameter cost of an equivariant filter bank with the same filter size and number of input/output channels. Typical values of \( \mu \) in effective architectures are around \( |H| \), e.g. \( \mu = 8 \) for \( H = D4 \). Such a layer utilizes its parameters 8 times more intensively than an ordinary convolution layer.
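As a concrete check of the character machinery, the following sketch (our illustration) recovers the type (3, 0, 1, 1, 2) of \( \pi_0 \) on single-channel \( 3 \times 3 \) filters. Since \( \pi_0 \) permutes the 9 pixels, its character counts the pixels fixed by each element of D4 (assuming the reflections fix either the middle row/column or a diagonal of the grid).

```python
import numpy as np

# Characters of the 5 irreps of D4 on (e, r, r^2, r^3, m, mr, mr^2, mr^3),
# read off from Table 1 as the traces of the representation matrices.
irrep_chars = {
    "A1": [1,  1,  1,  1,  1,  1,  1,  1],
    "A2": [1,  1,  1,  1, -1, -1, -1, -1],
    "B1": [1, -1,  1, -1,  1, -1,  1, -1],
    "B2": [1, -1,  1, -1, -1,  1, -1,  1],
    "E":  [2,  0, -2,  0,  0,  0,  0,  0],
}

def multiplicities(chi):
    """Type of a representation with character `chi`: m_i = <chi, chi_i>."""
    chi = np.asarray(chi, dtype=float)
    return {name: int(round(chi @ np.asarray(c, dtype=float) / 8.0))
            for name, c in irrep_chars.items()}

# Character of pi_0 on 3x3 one-channel filters: fixed pixels per element.
chi_pi0 = [9, 1, 1, 1, 3, 3, 3, 3]
print(multiplicities(chi_pi0))
# -> {'A1': 3, 'A2': 0, 'B1': 1, 'B2': 1, 'E': 2}, i.e. type (3, 0, 1, 1, 2)
```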
2.7 EQUIVARIANT NONLINEARITIES & CAPSULES In the previous section we showed that only the basis-independent types of \( \pi \) and \( \rho \) play a role in determining the parameter cost of an equivariant filter bank. An equivalent representation \( \rho'(g) = A \rho(g) A^{-1} \) will have the same type, and hence the same parameter cost as \( \rho \). However, when it comes to nonlinearities, different bases behave differently. Just like a convolution layer (eq. 3), a layer of nonlinearities must commute with the group action. An elementwise nonlinearity \( \nu : \mathbb{R} \to \mathbb{R} \) (or more generally, a fiber-wise nonlinearity \( \nu : \mathbb{R}^K \to \mathbb{R}^{K'} \)) is admissible for an input representation \( \rho \) if there exists an output representation \( \rho' \) such that \( \nu \) applied after \( \rho \) equals \( \rho' \) applied after \( \nu \). Since commutation with nonlinearities depends on the basis, we need a more granular notion than the feature type. We define a \( \rho \)-capsule as a (typically low-dimensional) feature vector that transforms according to a representation \( \rho \) (we may also refer to \( \rho \) as the capsule). Thus, while a capsule has a type, not all representations of that type are equivalent as capsules. Given a catalogue of capsules \( \rho^i \) (for \( i = 1, \ldots, C \)) with multiplicities \( m_i \), we can construct a fiber as a stack of capsules that is steerable by a block-diagonal representation \( \rho \) with \( m_i \) copies of \( \rho^i \) on the diagonal. Like the capsules of Hinton et al. (2011), our capsules encode the pose of a pattern in the input, and consist of a number of units (dimensions) that do not get mixed with the units of other capsules by symmetries. In this sense, a stack of capsules is disentangled (Cohen & Welling, 2014). We have found a few simple types of capsules and corresponding admissible nonlinearities. It is easy to see that any nonlinearity is admissible for \( \rho \) when the latter is realized by permutation matrices: permuting a list of coordinates and then applying a nonlinearity is the same as applying the nonlinearity and then permuting. If \( \rho \) is realized by a signed permutation matrix, then \( \mathrm{CReLU}(\alpha) = (\mathrm{ReLU}(\alpha), \mathrm{ReLU}(-\alpha)) \) introduced by Shang et al. (2016), or any concatenated nonlinearity \( \nu'(\alpha) = (\nu(\alpha), \nu(-\alpha)) \), will be admissible (a numerical check is sketched below). Any scale-free concatenated nonlinearity such as CReLU is admissible for a representation realized by monomial matrices (having the same nonzero pattern as a permutation matrix). Finally, we can always make a representation of a finite group orthogonal by a suitable choice of basis, which means that we can use any nonlinearity that acts only on the length of the vector. For many groups, the irreps can be realized using signed permutation matrices, so we can use irreducible \( \varphi_i \)-capsules with concatenated nonlinearities such as CReLU. Another class of capsules, which we call quotient capsules, are naturally realized by permutation matrices, and are thus compatible with any nonlinearity. These are described in Appendix C.
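The admissibility of CReLU for signed permutation matrices can be checked numerically; in the sketch below (our illustration), the signed permutation is the irrep E at the rotation \( r \), and \( \rho' \) is the explicit \( 4 \times 4 \) permutation matrix that reshuffles the concatenated outputs.

```python
import numpy as np

def crelu(x):
    # Concatenated ReLU: keeps both the positive and the negative parts.
    return np.concatenate([np.maximum(x, 0), np.maximum(-x, 0)])

S = np.array([[0., -1.], [1., 0.]])   # irrep E at r: a signed permutation
x = np.random.randn(2)

# Output representation rho'(r): a 4x4 permutation matrix P such that
# crelu(S x) = P crelu(x), i.e. CReLU is admissible for S.
P = np.zeros((4, 4))
P[0, 3] = P[1, 0] = P[2, 1] = P[3, 2] = 1.0
assert np.allclose(crelu(S @ x), P @ crelu(x))
```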
2.8 COMPUTATIONAL EFFICIENCY Modern convolutional networks often use on the order of hundreds of channels \( K \) per layer (Zagoruyko & Komodakis, 2016). When using \( 3 \times 3 \) filters, a filter bank can have on the order of \( 9K^2 \approx 10^6 \) dimensions. The number of parameters for an equivariant filter bank is about \( \mu \approx 10 \) times smaller, but a basis for the space of equivariant filter banks would still be about \( 10^6 \times 10^5 \), which is too large to be practical. Fortunately, the block-diagonal structure of \( \pi \) and \( \rho \) induces a block structure in \( \Psi \). Suppose \( \pi = \mathrm{block\_diag}(\pi^1, \ldots, \pi^P) \) and \( \rho = \mathrm{block\_diag}(\rho^1, \ldots, \rho^Q) \). Then an intertwiner is a matrix of shape \( K' \times K s^2 \), where \( K' = \sum_i \dim \rho^i \) and \( K s^2 = \sum_i \dim \pi^i \). This matrix has the following block structure: \[ \Psi = \begin{bmatrix} h_{11} \in \mathrm{Hom}_H(\rho^1, \pi^1) & \cdots & h_{1P} \in \mathrm{Hom}_H(\rho^1, \pi^P) \\ \vdots & \ddots & \vdots \\ h_{Q1} \in \mathrm{Hom}_H(\rho^Q, \pi^1) & \cdots & h_{QP} \in \mathrm{Hom}_H(\rho^Q, \pi^P) \end{bmatrix} \] Each block \( h_{ij} \) corresponds to an input-output pair of capsules, and can be parameterized by a linear combination of basis matrices \( \psi_k^{ij} \in \mathrm{Hom}_H(\rho^i, \pi^j) \). In practice, we typically use many copies of the same capsule (say \( n_i \) copies of \( \rho^i \) and \( m_j \) copies of \( \pi^j \)). Therefore, many of the blocks \( h_{ij} \) can be constructed using the same intertwiner basis. If we order equivalent capsules to be adjacent, the intertwiner consists of “blocks of blocks”. Each superblock \( H_{ij} \) has shape \( n_i \dim \rho^i \times m_j \dim \pi^j \), and consists of subblocks of shape \( \dim \rho^i \times \dim \pi^j \). The computation graph for an equivariant convolution layer is constructed as follows. Given a catalogue of capsules \( \rho^i \) and corresponding post-activation capsules \( \mathrm{Act}_{\nu'} \rho^i \), we compute the induced representations \( \pi^i = \mathrm{Ind}_H^G \mathrm{Act}_{\nu'} \rho^i \) and the bases for \( \mathrm{Hom}_H(\rho^i, \pi^j) \) in an offline step. The bases are stored as matrices \( \psi^{ij} \) of shape \( \dim \rho^i \cdot \dim \pi^j \times \dim \mathrm{Hom}_H(\rho^i, \pi^j) \). Then, given a list of input / output multiplicities \( n_i, m_j \) for the capsules, a parameter matrix \( \Theta^{ij} \) of shape \( \dim \mathrm{Hom}_H(\rho^i, \pi^j) \times n_i m_j \) is instantiated. The superblocks \( H_{ij} \) are obtained by a matrix multiplication \( \psi^{ij} \Theta^{ij} \) plus reshaping to shape \( \dim \rho^i \cdot \dim \pi^j \times n_i m_j \). Once all superblocks are filled in, the matrix \( \Psi \) is reshaped from \( K' \times K s^2 \) to \( K' \times K \times s \times s \) and convolved with the input (a sketch of this assembly is given at the end of section 2.9). 2.9 USING STEERABLE CNNs IN PRACTICE A full understanding of the theory of steerable CNNs requires some knowledge of group representation theory, but using steerable CNN technology is not much harder than using ordinary CNNs. Instead of choosing a number of channels for a given layer, one chooses a list of multiplicities \( m_i \) for each capsule in a library of capsules provided by the developer. To preserve equivariance, the activation function applied to a capsule must be chosen from a list of admissible nonlinearities for that capsule (which sometimes includes all nonlinearities). Finally, one must respect the type system and only add identical capsules (e.g. in ResNets). These constraints can all be checked automatically.
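The following sketch (our illustration; the exact memory layout of the stored bases is an assumption, since the paper does not specify it) assembles one superblock \( H_{ij} \) from a precomputed basis \( \psi^{ij} \) and a parameter matrix \( \Theta^{ij} \), arranging the \( n_i \times m_j \) subblocks so that copies of the same capsule are adjacent:

```python
import numpy as np

def superblock(psi_ij, theta_ij, d_rho, d_pi, n_i, m_j):
    """One superblock H_ij of the equivariant filter bank (Section 2.8).
    psi_ij:   (d_rho * d_pi, dim Hom) intertwiner basis, each column the
              row-major flattening of a (d_rho, d_pi) basis matrix.
    theta_ij: (dim Hom, n_i * m_j) learned parameters.
    Returns H_ij of shape (n_i * d_rho, m_j * d_pi)."""
    blocks = psi_ij @ theta_ij                       # (d_rho*d_pi, n_i*m_j)
    blocks = blocks.reshape(d_rho, d_pi, n_i, m_j)   # one subblock per copy pair
    return blocks.transpose(2, 0, 3, 1).reshape(n_i * d_rho, m_j * d_pi)
```

Once every superblock is filled in, the full matrix \( \Psi \) is reshaped to \( K' \times K \times s \times s \) and used as an ordinary convolution kernel.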
3 RELATED WORK Steerable filters were first studied for applications in signal processing and low-level vision (Freeman & Adelson, 1991; Greenspan et al., 1994; Simoncelli & Freeman, 1995). More or less explicit connections between steerability and group representation theory have been observed by Lenz (1989); Koenderink & Van Doorn (1990); Teo (1998); Krajsek & Mester (2007). As we have tried to demonstrate in this paper, representation theory is indeed the natural mathematical framework in which to study steerability. In machine learning, equivariant kernels were studied by Reisert (2008); Skibbe (2013). In the context of neural networks, various authors have studied equivariant representations. Capsules were introduced in Hinton et al. (2011), and significantly improved by Tieleman (2014). A theoretical account of equivariant representation learning in the brain is given by Anselmi et al. (2014). Group equivariant scattering networks were defined and studied by Mallat (2012) for compact groups, and by Sifre & Mallat (2013); Oyallon & Mallat (2015) for the roto-translation group. Jacobsen et al. (2016) describe a network that uses a fixed set of (possibly steerable) basis filters with learned weights. Lenc & Vedaldi (2015) showed empirically that convolutional networks tend to learn equivariant representations, which suggests that equivariance could be a good inductive bias. Invariant and equivariant CNNs have been studied by Gens & Domingos (2014); Kanazawa et al. (2014); Dieleman et al. (2015; 2016); Cohen & Welling (2016); Marcos et al. (2016). All of these models, as well as scattering networks, implicitly use the regular representation: feature maps are (often implicitly) conceived of as functions on \( G \), and the action of \( G \) on the space of functions on \( G \) is known as the regular representation (Serre (1977), Appendix B). Our work is the first to consider other kinds of equivariance in the context of CNNs. The idea of adding a type system to neural networks has been explored by Olah (2015); Balduzzi & Ghifary (2016). We have shown that a type system emerges naturally from the decomposition of a linear representation of a mathematical structure (a group, in our case) associated with the representation learned by a neural network. 4 EXPERIMENTS We implemented steerable CNNs in Chainer (Tokui et al., 2015) and performed experiments on the CIFAR10 dataset (Krizhevsky, 2009) to determine if steerability is a useful inductive bias, and to determine the relative merits of the various types of capsules. In order to run experiments faster, and to see how steerable CNNs perform in the small-data regime, we used only 2000 training samples for our initial experiments. As a baseline, we used the competitive wide residual network (ResNet) architecture (He et al., 2016a;b; Zagoruyko & Komodakis, 2016). We tuned the capacity of this network for the reduced dataset size and settled on a 20 layer architecture (three residual blocks per stage, with two layers each, for three stages with feature maps of size \( 32 \times 32, 16 \times 16 \) and \( 8 \times 8 \), various widths). We compared the baseline architecture to various kinds of steerable CNN, obtained by replacing the convolution layers by steerable convolution layers. To make sure that differences in performance were not simply due to underfitting or overfitting, we tuned the width (number of channels, \( K \)) using a validation set. The rest of the training procedure is identical to Cohen & Welling (2016), and is fixed for all of our experiments. We first tested steerable CNNs that consist entirely of a single kind of capsule. We found that architectures with only one type do not perform very well (roughly 30-40% error, vs. 30% for plain ResNets trained on 2k samples from CIFAR10), except for those that use the regular representation capsule (Appendix C), which outperforms standard CNNs (26.75% error). This is not too surprising, because many capsules are quite restrictive in the spatial patterns they can express.
The strong performance of regular capsules is consistent with the results of Cohen & Welling (2016), and can be explained by the fact that the regular representation contains all other (irreducible and quotient) representations as subrepresentations, and can therefore learn arbitrary spatial patterns. We then created networks that use a mix of the more successful kinds of capsules. After a few preliminary experiments, we settled on a residual network that uses one mix of capsules for the input and output layer of a residual block, and another for the intermediate layer. The first representation consists of quotient capsules: regular, qm, qmr2, qmr3 (see Appendix C) followed by ReLUs. The second consists of irreducible capsules: A1, A2, B1, B2, E(2x) followed by CReLU. <table> <tr> <th>Net</th> <th>Depth</th> <th>Width</th> <th>#Params</th> <th>#Labels</th> <th>Dataset</th> <th>Test error</th> </tr> <tr> <td>Ladder</td> <td>10</td> <td>96</td> <td>4k</td> <td>4k</td> <td>C10ss</td> <td>20.4</td> </tr> <tr> <td>steer</td> <td>14</td> <td>(280, 112)</td> <td>4.4M</td> <td>4k</td> <td>C10</td> <td>23.66</td> </tr> <tr> <td>steer</td> <td>20</td> <td>(160, 64)</td> <td>2.2M</td> <td>4k</td> <td>C10</td> <td>24.56</td> </tr> <tr> <td>steer</td> <td>14</td> <td>(280, 112)</td> <td>4.4M</td> <td>4k</td> <td>C10+</td> <td>16.44</td> </tr> <tr> <td>steer</td> <td>20</td> <td>(160, 64)</td> <td>2.2M</td> <td>4k</td> <td>C10+</td> <td>16.42</td> </tr> <tr> <td>ResNet</td> <td>1001</td> <td>16</td> <td>10.2M</td> <td>50k</td> <td>C10+</td> <td>4.62</td> </tr> <tr> <td>Wide</td> <td>28</td> <td>160</td> <td>36.5M</td> <td>50k</td> <td>C10+</td> <td>4.17</td> </tr> <tr> <td>Dense</td> <td>100</td> <td>2400</td> <td>27.2M</td> <td>50k</td> <td>C10+</td> <td>3.74</td> </tr> <tr> <td>steer</td> <td>26</td> <td>(280, 112)</td> <td>9.1M</td> <td>50k</td> <td>C10+</td> <td>3.74</td> </tr> <tr> <td>steer</td> <td>20</td> <td>(440, 176)</td> <td>16.7M</td> <td>50k</td> <td>C10+</td> <td>3.95</td> </tr> <tr> <td>steer</td> <td>14</td> <td>(400, 160)</td> <td>9.1M</td> <td>50k</td> <td>C10+</td> <td><b>3.65</b></td> </tr> <tr> <td>ResNet</td> <td>1001</td> <td>16</td> <td>10.2M</td> <td>50k</td> <td>C100+</td> <td>22.71</td> </tr> <tr> <td>Wide</td> <td>28</td> <td>160</td> <td>36.5M</td> <td>50k</td> <td>C100+</td> <td>20.50</td> </tr> <tr> <td>Dense</td> <td>100</td> <td>2400</td> <td>27.2M</td> <td>50k</td> <td>C100+</td> <td>19.25</td> </tr> <tr> <td>steer</td> <td>20</td> <td>(280, 112)</td> <td>6.9M</td> <td>50k</td> <td>C100+</td> <td>19.84</td> </tr> <tr> <td>steer</td> <td>14</td> <td>(400, 160)</td> <td>9.1M</td> <td>50k</td> <td>C100+</td> <td><b>18.82</b></td> </tr> </table> Table 2: Comparison of results of steerable CNNs vs. previous state of the art methods. A plus (+) indicates modest data augmentation (shifts and flips). Width for steerable CNNs is reported as a pair of numbers, one for the input / output layer of a ResNet block, and one for the intermediate layer. On CIFAR10 with 2k labels, this architecture works better than standard ResNets and regular capsules, at 24.48% error. When tested on CIFAR10 with 4k labels (Table 2), the method comes close to the state of the art among semi-supervised methods, which use additional unlabelled data (Rasmus et al., 2015), and outperforms transfer learning approaches such as DCGAN, which achieves 26.2% error (Radford et al., 2015).
When tested on the full CIFAR10 and CIFAR100 datasets, the steerable CNN substantially outperforms the ResNet (He et al., 2016b) baseline and achieves state of the art results (improving over wide and dense nets (Zagoruyko & Komodakis, 2016; Huang et al., 2016)). 5 CONCLUSION & FUTURE WORK We have presented a theoretical framework for understanding steerable representations in convolutional networks, and have shown that steerability is a useful inductive bias that can improve model accuracy, particularly when little data is available. Our experiments show that a simple steerable architecture achieves state of the art results on CIFAR10 and CIFAR100, outperforming recent architectures such as wide and dense residual networks. The mathematical connection between representation learning and representation theory that we have established improves our understanding of the inner workings of (equivariant) convolutional networks, revealing the humble CNN as an elegant geometrical computation engine. We expect that this new tool (representation theory), developed over more than a century by mathematicians and physicists, will greatly benefit future investigations in this area. For concreteness, we have used the group of flips and rotations by multiples of 90 degrees as a running example throughout this paper. This group already has some nontrivial characteristics (such as non-commutativity), but it is still small and discrete. The theory of steerable CNNs, however, readily extends to the continuous setting. Evaluating steerable CNNs for large, continuous and high-dimensional groups is an important piece of future work. Another direction for future work is learning the feature types, which may be easier in the continuous setting because (for non-compact groups) the irreps live in a continuous space where optimization may be possible. Beyond classification, steerable CNNs are likely to be useful in geometrical tasks such as action recognition, pose and motion estimation, and continuous control tasks. ACKNOWLEDGMENTS We kindly thank Kenta Oono, Shuang Wu, Thomas Kipf and the anonymous reviewers for their feedback and suggestions. This research was supported by Facebook, Google and NWO (grant number NAI.14.108). REFERENCES F. Anselmi, J. Z. Leibo, L. Rosasco, J. Mutch, A. Tacchetti, and T. Poggio. Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning? Technical Report 001, MIT Center for Brains, Minds and Machines, 2014. D. Balduzzi and M. Ghifary. Strongly-Typed Recurrent Neural Networks. Proceedings of the 33rd International Conference on Machine Learning, 33, 2016. T. Cohen. Learning Transformation Groups and their Invariants, 2013. T. Cohen and M. Welling. Learning the Irreducible Representations of Commutative Lie Groups. In Proceedings of the 31st International Conference on Machine Learning (ICML), volume 31, pp. 1755–1763, 2014. T. S. Cohen and M. Welling. Group equivariant convolutional networks. In Proceedings of The 33rd International Conference on Machine Learning (ICML), volume 48, pp. 2990–2999, 2016. S. Dieleman, K. W. Willett, and J. Dambre. Rotation-invariant convolutional neural networks for galaxy morphology prediction. Monthly Notices of the Royal Astronomical Society, 450(2), 2015. S. Dieleman, J. De Fauw, and K. Kavukcuoglu. Exploiting Cyclic Symmetry in Convolutional Neural Networks. In International Conference on Machine Learning (ICML), 2016. G. B. Folland.
A Course in Abstract Harmonic Analysis. CRC Press, 1995. W. T. Freeman and E. H. Adelson. The design and use of steerable filters. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 13(9):891–906, 1991. R. Gens and P. Domingos. Deep Symmetry Networks. In Advances in Neural Information Processing Systems (NIPS), 2014. H. Greenspan, S. Belongie, R. Goodman, and P. Perona. Overcomplete Steerable Pyramid Filters and Rotation Invariance. Proceedings of the Computer Vision and Pattern Recognition (CVPR), 1994. K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016a. K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in Deep Residual Networks. In European Conference on Computer Vision (ECCV), 2016b. G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. ICANN-11: International Conference on Artificial Neural Networks, Helsinki, 2011. G. Huang, Z. Liu, and K. Q. Weinberger. Densely Connected Convolutional Networks. 2016. URL http://arxiv.org/abs/1608.06993. J.-H. Jacobsen, J. van Gemert, Z. Lou, and A. W. Smeulders. Structured Receptive Fields in CNNs. In Computer Vision and Pattern Recognition (CVPR), 2016. K. Kanatani. Group-Theoretical Methods in Image Understanding. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1990. ISBN 9783642852152. A. Kanazawa, A. Sharma, and D. Jacobs. Locally Scale-invariant Convolutional Neural Network. Deep Learning and Representation Learning Workshop: NIPS, pp. 1–11, 2014. E. Kaniuth and K. F. Taylor. Induced Representations of Locally Compact Groups. Cambridge University Press, 2013. ISBN 9780521762267. J. J. Koenderink and A. J. Van Doorn. Receptive field families. Biological Cybernetics, 63(4):291–297, 1990. ISSN 03401200. doi: 10.1007/BF00203452. K. Krajsek and R. Mester. A Unified Theory for Steerable and Quadrature Filters. Communications in Computer and Information Science, 4 CCIS:201–214, 2007. ISSN 18650929. doi: 10.1007/978-3-540-75274-5_13. A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto, 2009. K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2015. R. Lenz. Group-theoretical model of feature extraction. Journal of the Optical Society of America A (Optics and Image Science), 6(6):827–834, 1989. G. W. Mackey. Induced Representations of Locally Compact Groups I. Annals of Mathematics, 55(1):101–139, 1952. G. W. Mackey. Induced Representations of Locally Compact Groups II. The Frobenius Reciprocity Theorem. Annals of Mathematics, 58(2):193–221, 1953. G. W. Mackey. Induced Representations of Groups and Quantum Mechanics. W.A. Benjamin Inc., New York-Amsterdam, 1968. S. Mallat. Group Invariant Scattering. Communications on Pure and Applied Mathematics, 65(10):1331–1398, 2012. D. Marcos, M. Volpi, and D. Tuia. Learning rotation invariant convolutional filters for texture classification. 2016. URL http://arxiv.org/abs/1604.06720. C. Olah. Neural Networks, Types, and Functional Programming, 2015. URL https://colah.github.io/posts/2015-09-NN-Types-FP/. E. Oyallon and S. Mallat. Deep Roto-Translation Scattering for Object Classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2865–2873, 2015. A. Radford, L. Metz, and S. Chintala.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv:1511.06434, 2015. URL http://arxiv.org/abs/1511.06434. A. Rasmus, H. Valpola, M. Honkala, M. Berglund, and T. Raiko. Semi-supervised learning with Ladder Networks. In Neural Information Processing Systems (NIPS), 2015. M. Reeder. Notes on representations of finite groups, 2014. URL https://www2.bc.edu/~reederma/RepThy.pdf. M. Reisert. Group Integration Techniques in Pattern Analysis: A Kernel View. PhD thesis, Albert-Ludwigs-University, 2008. J.-P. Serre. Linear Representations of Finite Groups. Springer, 1977. W. Shang, K. Sohn, D. Almeida, and H. Lee. Understanding and Improving Convolutional Neural Networks via Concatenated Rectified Linear Units. In International Conference on Machine Learning (ICML), volume 48, 2016. L. Sifre and S. Mallat. Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination. IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2013. E. Simoncelli and W. Freeman. The steerable pyramid: a flexible architecture for multi-scale derivative computation. Proceedings of the International Conference on Image Processing, 3:444–447, 1995. doi: 10.1109/ICIP.1995.537667. H. Skibbe. Spherical Tensor Algebra for Biomedical Image Analysis. PhD thesis, Albert-Ludwigs-Universität Freiburg im Breisgau, 2013. M. E. Taylor. Noncommutative Harmonic Analysis. American Mathematical Society, 1986. ISBN 0821815237. P. C.-S. Teo. Theory and Applications of Steerable Functions. PhD thesis, Stanford University, 1998. T. Tieleman. Optimizing Neural Networks that Generate Images. PhD thesis, University of Toronto, 2014. S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: a Next-Generation Open Source Framework for Deep Learning. Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS), pp. 1–6, 2015. H. Weyl. The classical groups: their invariants and representations. Princeton University Press, 1939. S. Zagoruyko and N. Komodakis. Wide Residual Networks. arXiv:1605.07146, 2016. APPENDIX A: INDUCTION In this section we will show that a stack of feature maps produced by convolution with an \( H \)-equivariant filter bank transforms according to the induced representation. That is, we will derive eq. 5, repeated here for convenience: \[ [\Psi \star [\pi_l(tr)f]](x) = \rho_{l+1}(r)\left([\Psi \star f]\left((tr)^{-1}x\right)\right) \] (14) In the main text, we mentioned that \( x \in \mathbb{Z}^2 \) can be interpreted as a point or as a translation. Here we make this difference explicit, by writing \( x \in \mathbb{Z}^2 \) for a point and \( \bar{x} \in G \) for a translation. (The operation \( x \mapsto \bar{x} \) defines a section of the projection map \( G \to \mathbb{Z}^2 \) that forgets the non-translational part of the transformation (Kaniuth & Taylor, 2013)). With this notation, the convolution is defined as: \[ [\Psi \star f](x) = \Psi \pi(\bar{x}^{-1})f \] (15) Although the induced representation can be described in a more general setting, we will use an explicit matrix representation of \( G \) to make it easier to check our computations. A general element of \( G \) is written as: \[ g = tr = \begin{bmatrix} I & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \] (16) Where \( R \) is the matrix representation of \( r \) (e.g.
a \( 2 \times 2 \) rotation / reflection matrix), and \( T \) is a translation vector. The section we use is: \[ \bar{x} = \begin{bmatrix} I & x \\ 0 & 1 \end{bmatrix} \] (17) Finally, we will distinguish the action of \( G \) on itself, written \( gh \) for \( g, h \in G \) (implemented as matrix-matrix multiplication) and its action on \( \mathbb{Z}^2 \), written \( g \cdot x \) for \( g \in G \) and \( x \in \mathbb{Z}^2 \) (implemented as matrix-vector multiplication by adding a homogeneous coordinate to \( x \)). To keep notation uncluttered, we will write \( \pi = \pi_l \) and \( \rho = \rho_{l+1} \). In full detail, the derivation of the transformation law for the feature space induced by \( \rho \) proceeds as follows: \[ \begin{align*} [\Psi \star [\pi_l(tr)f]](x) &= \Psi \pi(\bar{x}^{-1})\pi_l(tr)f \\ &= \Psi \pi(\bar{x}^{-1}tr)f \\ &= \Psi \pi(rr^{-1}\bar{x}^{-1}tr)f \\ &= \Psi \pi(r)\pi(r^{-1}\bar{x}^{-1}tr)f \\ &= \rho(r)\Psi \pi(r^{-1}\bar{x}^{-1}tr)f \\ &= \rho(r)\Psi \pi((r^{-1}t^{-1}\bar{x}r)^{-1})f \\ &= \rho(r)\Psi \pi\left( \overline{(tr)^{-1} \cdot x}^{\,-1} \right) f \\ &= \rho(r)[\Psi \star f]((tr)^{-1} \cdot x) \end{align*} \] (18) The last line is the result shown in the paper. The justification of each step is: 1. Definition of \( \star \) 2. \( \pi \) is a homomorphism / group representation 3. \( rr^{-1} \) is the identity, so we can always multiply by it 4. \( \pi \) is a homomorphism / group representation 5. \( \Psi \in \mathrm{Hom}_H(\pi, \rho) \) is equivariant to \( r \in H \) 6. Invert twice 7. \( \overline{(tr)^{-1} \cdot x} = r^{-1} t^{-1} \bar{x} r \), which can be checked by multiplying the matrices / vectors 8. Definition of \( \star \) The derivation above is somewhat involved and messy, so the reader may prefer to think geometrically (using the figures in the paper) instead of algebraically. This complexity is an artifact of the lack of abstraction in our presentation. The induced representation is really a very natural object to consider (abstractly, it is the “adjoint functor” to the restriction functor). A more abstract treatment of the induced representation can be found in Serre (1977); Mackey (1952); Reeder (2014). A treatment that is close to our own, but more general, is the “alternate description” found on page 49 of Kaniuth & Taylor (2013). APPENDIX B: RELATION TO GROUP EQUIVARIANT CNNs In this section we show that the recently introduced Group Equivariant Convolutional Networks (G-CNNs, Cohen & Welling (2016)) are a special kind of steerable CNN. Specifically, a G-CNN is a steerable CNN with regular capsules. In a G-CNN, the feature maps (except those of the input) are thought of as functions \( f : G \to \mathbb{R}^K \) instead of functions on the plane \( f : \mathbb{Z}^2 \to \mathbb{R}^K \), as we do here. There, it is shown that the feature maps transform according to \[ [\pi(g)f](h) = f(g^{-1}h). \] This defines a linear representation of \( G \) known as the regular representation. It is easy to see that the regular representation is naturally realized by permutation matrices. Furthermore, it is known that the regular representation of \( G \) is induced by the regular representation of \( H \). The latter is defined in Appendix C, and is what we refer to as “regular capsules” in the paper. APPENDIX C: REGULAR AND QUOTIENT FEATURES Let \( H \) be a finite group. A subgroup of \( H \) is a subset that is also itself a group (i.e. closed under composition and inverses). The (left) cosets of a subgroup \( K \) in \( H \) are the sets \( hK = \{hk \mid k \in K\} \).
The cosets are disjoint and jointly cover the whole group \( H \) (i.e. they partition \( H \)). The set of all cosets of \( K \) in \( H \) is denoted \( H/K \), and is also called the quotient of \( H \) by \( K \). The coset space carries a natural left action by \( H \). Let \( a, b \in H \); then \( a \cdot bK = (ab)K \). This action translates into an action on the space of functions on \( H/K \). Let \( Q \) denote the space of functions \( f : H/K \to \mathbb{R} \). Then we have the following representation of \( H \): \[ [\rho(a)f](bK) = f(a^{-1} \cdot bK). \] The function \( f \) attaches a value to every coset. The \( H \)-action permutes these values, because it permutes the cosets. Hence, \( \rho \) can be realized by permutation matrices. For small groups the explicit computations can easily be done by hand, while for large groups this task can be automated. In this way, we get one permutation representation for each subgroup \( K \) of \( H \). In particular, for the subgroup \( K = \{e\} \) (the trivial subgroup containing only the identity \( e \)), we have \( H/K \cong H \). The representation in the space of functions on \( H \) is known as the “regular representation”. Using such regular representations in a steerable CNN is equivalent to using the group convolutions introduced in Cohen & Welling (2016), so steerable CNNs are a strict generalization of G-CNNs. At the other extreme, taking \( K = H \) gives the quotient \( H/K \cong \{e\} \), the trivial group, and hence the trivial representation A1. For the roto-reflection group \( H = D4 \), we have the following subgroups and associated quotient features: <table> <tr> <th>Subgroup \( K \)</th> <th>quotient feature name</th> <th>dimensionality</th> </tr> <tr> <td>\{e\}</td> <td>regular</td> <td>8</td> </tr> <tr> <td>\{e, m\}</td> <td>qm</td> <td>4</td> </tr> <tr> <td>\{e, mr\}</td> <td>qmr</td> <td>4</td> </tr> <tr> <td>\{e, mr^2\}</td> <td>qmr2</td> <td>4</td> </tr> <tr> <td>\{e, mr^3\}</td> <td>qmr3</td> <td>4</td> </tr> <tr> <td>\{e, r^2\}</td> <td>r2</td> <td>4</td> </tr> <tr> <td>\{e, r, r^2, r^3\}</td> <td>r</td> <td>2</td> </tr> <tr> <td>\{e, r^2, m, mr^2\}</td> <td>r2m</td> <td>2</td> </tr> <tr> <td>\{e, r^2, mr, mr^3\}</td> <td>r2mr</td> <td>2</td> </tr> <tr> <td>\( H \)</td> <td>A1</td> <td>1</td> </tr> </table>
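To illustrate how this automation might look, here is a minimal Python/numpy sketch (our own illustration, not code from the paper; helper names such as quotient_rep are ours). It realizes D4 as 2x2 integer matrices, enumerates the left cosets of a subgroup K, and builds the permutation matrices of the quotient representation:

```python
import numpy as np
from itertools import product

# D4 realized as 2x2 integer matrices: rotations r^k and reflections m r^k.
R = np.array([[0, -1], [1, 0]])   # 90-degree rotation r
M = np.array([[-1, 0], [0, 1]])   # mirror reflection m

def element(flip, k):
    """The group element m^flip r^k."""
    A = np.linalg.matrix_power(R, k)
    return M @ A if flip else A

H = [element(f, k) for f, k in product((0, 1), range(4))]
key = lambda A: tuple(np.asarray(A, dtype=int).ravel())

def left_cosets(K):
    """Partition H into left cosets hK."""
    parts, seen = [], set()
    for h in H:
        coset = frozenset(key(h @ k) for k in K)
        if coset not in seen:
            seen.add(coset)
            parts.append(coset)
    return parts

def quotient_rep(K):
    """rho(a): permutation matrix of the action a . bK = (ab)K on H/K."""
    parts = left_cosets(K)
    def rho(a):
        P = np.zeros((len(parts), len(parts)))
        for i, c in enumerate(parts):
            rep = np.array(next(iter(c))).reshape(2, 2)  # coset representative
            j = next(j for j, p in enumerate(parts) if key(a @ rep) in p)
            P[j, i] = 1.0
        return P
    return rho

# The 'qm' feature: K = {e, m} gives a 4-dimensional permutation capsule.
rho_qm = quotient_rep([np.eye(2, dtype=int), M])
print(rho_qm(R))  # a 4x4 permutation matrix, as in the table above
assert np.allclose(rho_qm(R) @ rho_qm(R), rho_qm(R @ R))  # homomorphism check
```

For \( K = \{e\} \) the same sketch produces the 8-dimensional regular capsule, reproducing the dimensionalities listed in the table.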
ABSTRACT It has long been recognized that the invariance and equivariance properties of a representation are critically important for success in many vision tasks. In this paper we present Steerable Convolutional Neural Networks, an efficient and flexible class of equivariant convolutional networks. We show that steerable CNNs achieve state of the art results on the CIFAR image classification benchmark. The mathematical theory of steerable representations reveals a type system in which any steerable representation is a composition of elementary feature types, each one associated with a particular kind of symmetry. We show how the parameter cost of a steerable filter bank depends on the types of the input and output features, and show how to use this knowledge to construct CNNs that utilize parameters effectively. 1 INTRODUCTION Much of the recent progress in computer vision can be attributed to the availability of large labelled datasets and deep neural networks capable of absorbing large amounts of information. While many practical problems can now be solved, the requirement for big (labelled) data is a fundamentally unsatisfactory state of affairs. Human beings are able to learn new concepts with very few labels, and reproducing this ability is an important challenge for artificial intelligence research. From an applied perspective, improving the statistical efficiency of deep learning is vital because in many domains (e.g. medical image analysis), acquiring large amounts of labelled data is costly. To improve the statistical efficiency of machine learning methods, many have sought to learn invariant representations. In deep learning, however, intermediate layers should not be invariant, because the relative pose of local features must be preserved for further layers (Cohen & Welling, 2016; Hinton et al., 2011). Thus, one is led to the idea of equivariance: a network is equivariant if the representations it produces transform in a predictable way under transformations of the input. In other words, equivariant networks produce representations that are steerable. Steerability makes it possible to apply filters not just in every position (as in a standard convolution layer), but in every pose, thus allowing for increased parameter sharing. Previous work has shown that equivariant CNNs yield state of the art results on classification tasks (Cohen & Welling, 2016; Dieleman et al., 2016), even though they only enforce equivariance to small groups of transformations like rotations by multiples of 90 degrees. Learning representations that are equivariant to larger groups is likely to result in further gains, but the computational cost of current methods scales linearly with the size of the group, making this impractical. In this paper we present the general theory of steerable CNNs, which covers previous approaches but also shows how the computational cost can be decoupled from the size of the symmetry group, thus paving the way for future scaling. To better understand the structure of steerable representations, we analyze them mathematically. We show that any steerable representation is a composition of low-dimensional elementary feature types. Each elementary feature can be steered independently of the others, and captures a distinct characteristic of the input that has an invariant or “objective” meaning. This doctrine of “observer-independent quantities” was put forward by Weyl (1939, ch. 1.4) and is used throughout physics.
It has been applied to vision and representation learning by Kanatani (1990); Cohen (2013). The aforementioned type system puts constraints on the network weights and architecture. Specifically, since an equivariant filter bank is required to map given input feature types to given output feature types, the number of parameters required by such a filter bank is reduced. Furthermore, by the same logic that tells us not to add meters to seconds, steerability considerations prevent us from adding features of different types (e.g. for residual learning (He et al., 2016a)). The rest of this paper is organized as follows. The theory of steerable CNNs is introduced in Section 2. Related work is discussed in Section 3, which is followed by classification experiments (Section 4) and a discussion and conclusion in Section 5. 2 STEERABLE CNNS 2.1 FEATURE MAPS AND FIBERS Consider a 2D signal \( f : \mathbb{Z}^2 \to \mathbb{R}^K \) with \( K \) channels. The signal may be an input to the network or a feature representation computed by a CNN. Since signals can be added and multiplied by scalars, the set of signals of this signature forms a linear space \( \mathcal{F} \). Each layer of the network has its own feature space \( \mathcal{F}_l \), but we will often suppress the layer index to reduce clutter. It is customary in deep learning to describe \( f \in \mathcal{F} \) as a stack of feature maps \( f_k \) (for \( k = 1, \ldots, K \)). In this paper we also consider another decomposition of \( \mathcal{F} \) into fibers. The fiber \( F_x \) at position \( x \) in the “base space” \( \mathbb{Z}^2 \) is the \( K \)-dimensional vector space spanned by all channels at position \( x \). Thus, \( f \in \mathcal{F} \) is comprised of feature vectors \( f(x) \) that live in the fibers \( F_x \) (see Figure 1(a)). Figure 1: Feature maps, fibers, and the transformation law \( \pi_0 \) of \( \mathcal{F}_0 \). (a) The feature space \( \mathcal{F} \) is decomposed into a stack of feature maps (left) and a bundle of fibers (right). (b) An image \( f \in \mathcal{F}_0 \) is rotated by \( r \) using \( \pi_0(r) \). Given some group of transformations \( G \) that acts on points in \( \mathbb{Z}^2 \), we can transform signals \( f \in \mathcal{F}_0 \): \[ [\pi_0(g)f](x) = f(g^{-1}x) \] (1) This says that the pixel at \( g^{-1}x \) gets moved to \( x \) by the transformation \( g \in G \). We note that \( \pi_0(g) \) is a linear operator. An important property of \( \pi_0 \) is that \( \pi_0(gh) = \pi_0(g)\pi_0(h) \). Here, \( gh \) means composition of transformations in \( G \), while \( \pi_0(g)\pi_0(h) \) denotes matrix multiplication. A vector space such as \( \mathcal{F}_0 \) equipped with a set of linear operators \( \pi_0 \) satisfying this condition is known as a group representation (or just representation, for short). A lot is known about group representations (Serre, 1977), and we will make extensive use of the theory, explaining the relevant concepts as needed. 2.2 STEERABLE REPRESENTATIONS Let \( (\mathcal{F}, \pi) \) be a feature space with a group representation and \( \Phi : \mathcal{F} \to \mathcal{F}' \) a convolutional network. The feature space \( \mathcal{F}' \) is said to be (linearly) *steerable* with respect to \( G \), if for all transformations \( g \in G \), the features \( \Phi f \) and \( \Phi \pi(g)f \) are related by a linear transformation \( \pi'(g) \) that does not depend on \( f \).
So \( \pi'(g) \) allows us to “steer” the features in \( \mathcal{F}' \) without referring to the input in \( \mathcal{F} \) from which they were computed. Combining the definition of steerability (i.e. \( \Phi \pi(g) = \pi'(g)\Phi \)) with the fact that \( \pi \) is a group representation, we find that \( \pi' \) must also be a group representation: \[ \pi'(gh)\Phi f = \Phi \pi(gh)f = \Phi \pi(g)\pi(h)f = \pi'(g)\Phi \pi(h)f = \pi'(g)\pi'(h)\Phi f \] (2) Figure 2: Diagram showing the structural consistency that follows from equivariance of the network \( \Phi \) and the group representation structure of \( \pi_0 \). The result of following any path in this diagram depends only on the beginning and endpoint but is independent of the path itself, c.f. eq. 2. That is, \( \pi'(gh) = \pi'(g)\pi'(h) \) (at least in the span of the image of \( \Phi \)). Figure 2 gives an illustration. For simplicity, we will restrict our attention to discrete groups of transformations. The theory for continuous groups is almost completely analogous. Our running example will be the group \( p4m \) which consists of translations, rotations by 90 degrees around any point, and reflections. We further restrict our attention to groups that are constructed (as a semi-direct product) from the group of translations \( \mathbb{Z}^2 \) and a group \( H \) of transformations that fixes the origin \( 0 \in \mathbb{Z}^2 \). For \( p4m \), we have \( H = D4 \), the 8-element group of reflections and rotations about the origin. Using this division, we can first construct a filter bank that generates \( H \)-steerable fibers, and then show that convolution with such a filter bank produces a feature space that is steerable with respect to the whole group \( G \). 2.3 EQUIVARIANT FILTER BANKS A filter bank can be described as an array of dimension \( (K', K, s, s) \), where \( K, K' \) denote the number of input / output channels and \( s \) is the kernel size. For our purposes it is useful to think of a filter bank as a linear map \( \Psi : \mathcal{F} \to \mathbb{R}^{K'} \) that takes as input a signal \( f \in \mathcal{F} \) and produces a \( K' \)-dimensional feature vector. The filter bank only looks at an \( s \times s \) patch in \( \mathcal{F} \), so the matrix representing \( \Psi \) has shape \( K' \times K \cdot s^2 \). To correlate a signal \( f \) using \( \Psi \), one would simply apply \( \Psi \) to translated copies of \( f \), producing the output signal one fiber at a time. We assume (by induction) that we have a representation \( \pi \) that allows us to steer \( \mathcal{F} \). In order to make the output of the convolution steerable, we need the filter bank \( \Psi : \mathcal{F} \to \mathbb{R}^{K'} \) to be *H*-equivariant: \[ \rho(h)\Psi = \Psi \pi(h), \quad \forall h \in H \] (3) for some representation \( \rho \) of \( H \) that acts on the output fibers (see Figure 3). Note that we only require equivariance with respect to \( H \) (which excludes translations) and not \( G \), because translations can move patterns into and out of the receptive field of a fiber, making full translation equivariance impossible. The space of maps satisfying the equivariance constraint is denoted \( \mathrm{Hom}_H(\pi, \rho) \), because an equivariant map \( \Psi \) is a “homomorphism of group representations”, meaning it respects the structure of the representations. Equivariant maps are also sometimes called *intertwiners* (Serre, 1977). Since the equivariance constraint (eq.
3) is linear in \( \Psi \), the space \( \mathrm{Hom}_H(\pi, \rho) \) of admissible filter banks is a vector space: any linear combination of maps \( \Psi, \Psi' \in \mathrm{Hom}_H(\pi, \rho) \) is again an intertwiner. Hence, given \( \pi \) and \( \rho \), we can compute a basis for \( \mathrm{Hom}_H(\pi, \rho) \) by solving a linear system. Figure 3: A filter bank \( \Psi \) that is \( H \)-equivariant. In this example, \( \rho_1 \) represents the 90-degree rotation \( r \) by a permutation matrix that cyclically shifts the 4 channels. Computation of the intertwiner basis is done offline, before training. Once we have such a basis \( \psi_1, \ldots, \psi_n \) for \( \mathrm{Hom}_H(\pi, \rho) \), we can express any equivariant filter bank \( \Psi \) as a linear combination \( \Psi = \sum_i \alpha_i \psi_i \) using parameters \( \alpha_i \). As shown in Section 2.8, this can be done efficiently even in high dimensions. 2.4 INDUCTION We have shown how to parameterize filter banks that intertwine \( \pi \) and \( \rho \), making the output fibers \( H \)-steerable by \( \rho \) if the input space \( \mathcal{F} \) is \( H \)-steerable by \( \pi \). In this section we show how \( H \)-steerability of fibers \( F_x' \) leads to \( G \)-steerability of the whole feature space \( \mathcal{F}' \). This happens through a natural and important construction known as the induced representation (Mackey, 1952; 1953; 1968; Serre, 1977; Taylor, 1986; Folland, 1995; Kaniuth & Taylor, 2013). As stated before, the correlation \( \Psi * f \) could be computed by translating \( f \) before applying \( \Psi \): \[ [\Psi * f](x) = \Psi \left[ \pi(x)^{-1} f \right]. \] (4) Where \( x \in \mathbb{Z}^2 \) is interpreted as a translation when given as input to \( \pi \). We can now calculate the transformation law of the output space. To do so, we apply a translation \( t \) and transformation \( r \in H \) to \( f \in \mathcal{F} \), yielding \( \pi(tr)f \), and then perform the correlation with \( \Psi \). With some algebra (Appendix A), we find: \[ [\Psi * [\pi(tr)f]](x) = \rho(r) \left[ [\Psi * f]((tr)^{-1}x) \right] \] (5) Now if we define \( \pi' \) as \[ [\pi'(tr)f](x) = \rho(r) \left[ f((tr)^{-1}x) \right] \] (6) then \( \Psi * [\pi(g)f] = \pi'(g)[\Psi * f] \) (see Fig. 4). This representation \( \pi' \) is known as the representation of \( G \) induced by the representation \( \rho \) of \( H \), and is denoted \( \pi' = \mathrm{Ind}_H^G \rho \). When parsing eq. 6, it is important to keep in mind that (as indicated by the square brackets) \( \pi' \) acts on the whole feature space \( \mathcal{F}' \) while \( \rho \) acts on individual fibers. If we compare the induced representation (eq. 6) to the representation \( \pi_0 \) defined in eq. 1, we see that the difference lies only in the presence of a factor \( \rho(r) \) applied to the fibers. This factor describes how the feature channels are mixed by the transformation. The color channels in the input space do not get mixed by geometrical transformations, so we say that \( \pi_0 \) is induced from the trivial representation \( \rho_0(h) = I \). Now that we have a \( G \)-steerable feature space \( \mathcal{F}' \), we can iterate the procedure by computing a basis for the space of intertwiners between \( \pi' \) (restricted to \( H \)) and some \( \rho' \) of our choosing.
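To make the offline basis computation concrete, here is a minimal numpy sketch (our own illustration, not the paper's implementation; the function name intertwiner_basis is ours). It imposes the constraint of eq. 3 for a set of generators of \( H \) and extracts the null space with an SVD:

```python
import numpy as np

def intertwiner_basis(pi_gens, rho_gens, tol=1e-10):
    """Orthonormal basis of Hom_H(pi, rho) = {Psi : rho(h) Psi = Psi pi(h)}.

    pi_gens, rho_gens: representation matrices for a set of generators of H
    (equivariance for generators implies equivariance for all of H).
    Returns an array of shape (dim Hom_H(pi, rho), K', K).
    """
    K, Kp = pi_gens[0].shape[0], rho_gens[0].shape[0]
    # Row-major vectorization: vec(rho(h) Psi - Psi pi(h)) =
    # (rho(h) kron I_K - I_K' kron pi(h)^T) vec(Psi).
    M = np.concatenate([np.kron(r, np.eye(K)) - np.kron(np.eye(Kp), p.T)
                        for p, r in zip(pi_gens, rho_gens)], axis=0)
    _, s, Vt = np.linalg.svd(M)
    null = Vt[np.count_nonzero(s > tol):]  # rows spanning the null space
    return null.reshape(-1, Kp, K)

# Example: pi = rho = the 2D irrep E of D4, with generators r and m.
r = np.array([[0., -1.], [1., 0.]])
m = np.array([[-1., 0.], [0., 1.]])
basis = intertwiner_basis([r, m], [r, m])
print(basis.shape)  # (1, 2, 2): Schur's lemma, E is irreducible

# Any equivariant filter bank is a linear combination of basis elements,
# so the learned parameters enter only through the coefficients alpha:
alpha = np.random.randn(basis.shape[0])
Psi = np.tensordot(alpha, basis, axes=1)
```

The constraint is linear in \( \Psi \), so the admissible filter banks form exactly the null space computed above; the final two lines show how a parameterized filter bank \( \Psi = \sum_i \alpha_i \psi_i \) would be assembled from it.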
2.5 FEATURE TYPES AND CHARACTER THEORY By now, the reader may be wondering how to choose \( \rho \), or indeed what the space of representations that we can choose from looks like in the first place. We will answer these questions in this section by showing that each representation has a type (encoded as a short list of integers) that corresponds to a certain symmetry or invariance of the feature. We further show how the number of parameters of an equivariant filter bank depends on the types of the representations \( \pi \) and \( \rho \) that it intertwines. Our discussion will make use of a number of important elementary results from group representation theory which are stated but not proved. The reader wishing to go deeper may consult chapters 1 and 2 of the excellent book by Serre (1977). Recall that a group representation is a set of invertible linear maps \( \rho(g) : \mathbb{R}^K \to \mathbb{R}^K \) satisfying \( \rho(gh) = \rho(g)\rho(h) \) for all elements \( g, h \in H \). It can be shown that any representation is a direct sum (i.e. block_diag plus change of basis) of a number of “elementary” representations associated with \( H \). These building blocks are called irreducible representations (or irreps), because they can themselves not be block-diagonalized. Figure 4: The representation \( \pi_1 \) induced from the permutation representation \( \rho_1 \) shown in fig. 3. A single fiber is highlighted. It is transported to a new location, and acted on by \( \rho_1 \). <table> <tr> <th>Irrep</th> <th>e</th> <th>r</th> <th>r^2</th> <th>r^3</th> <th>m</th> <th>mr</th> <th>mr^2</th> <th>mr^3</th> </tr> <tr> <td>A1</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> </tr> <tr> <td>A2</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[1]</td> <td>[-1]</td> <td>[-1]</td> <td>[-1]</td> <td>[-1]</td> </tr> <tr> <td>B1</td> <td>[1]</td> <td>[-1]</td> <td>[1]</td> <td>[-1]</td> <td>[1]</td> <td>[-1]</td> <td>[1]</td> <td>[-1]</td> </tr> <tr> <td>B2</td> <td>[1]</td> <td>[-1]</td> <td>[1]</td> <td>[-1]</td> <td>[-1]</td> <td>[1]</td> <td>[-1]</td> <td>[1]</td> </tr> <tr> <td>E</td> <td>[1 0; 0 1]</td> <td>[0 -1; 1 0]</td> <td>[-1 0; 0 -1]</td> <td>[0 1; -1 0]</td> <td>[-1 0; 0 1]</td> <td>[0 1; 1 0]</td> <td>[1 0; 0 -1]</td> <td>[0 -1; -1 0]</td> </tr> </table> Table 1: The irreducible representations of the roto-reflection group D4. This group is generated by 90-degree rotations r and mirror reflections m, and has 5 irreps labelled A1, A2, B1, B2, E. The table shows the representation matrices of each irrep for each element of D4 (the associated \( 3 \times 3 \) basis filters in \( \mathcal{F}_0 \) are not reproduced here). The decomposition of \( \pi_0 \) (eq. 1) in the space \( \mathcal{F}_0 \) of \( 3 \times 3 \) filters with one channel turns out to have type (3, 0, 1, 1, 2), meaning there are three copies of A1, one copy of B1, one copy of B2, and two copies of the 2D irrep E (A2 does not appear). The reader may verify that these are valid representations, and that the characters (traces) are orthogonal.
In other words, if \( \varphi_i \) are the irreducible representations of \( H \), any representation \( \rho \) of \( H \) can be written in block-diagonal form: \[ \rho(g) = A \begin{bmatrix} \varphi_{i_1}(g) & & \\ & \ddots & \\ & & \varphi_{i_n}(g) \end{bmatrix} A^{-1} \] for some basis matrix \( A \), and some \( i_k \) that index the irreps (each irrep may occur 0 or more times). Each irreducible representation corresponds to a type of symmetry, as shown in Table 1. For example, as can be seen in this table, the representations B1 and B2 represent the 90-degree rotation r as the matrix [-1], so the basis filters for these representations change sign when rotated by r. It should be noted that in the higher layers \( l > 0 \), elementary basis filters can look different because they depend on the representation \( \pi_l \) that is being decomposed. The fact that all representations can be decomposed into a direct sum of irreducibles implies that each representation has a basis-independent *type*: which irreducible representations appear in it, and with what multiplicity. For example, the input representation \( \pi_0 \) (Table 1) has type (3, 0, 1, 1, 2). This means that, for instance, \( \pi_0(r) \) is block-diagonalized as: \[ A^{-1} \pi_0(r) A = \mathrm{block\_diag}([1], [1], [1], [-1], [-1], [0\ -1;\ 1\ 0], [0\ -1;\ 1\ 0]). \] Where the block matrix contains (3, 0, 1, 1, 2) copies of the irreps (A1, A2, B1, B2, E), evaluated at r (see column r in Table 1). The change of basis matrix \( A \) is constructed from the corresponding basis filters (and the same \( A \) block-diagonalizes \( \pi_0(g) \) for all \( g \)). So the most general way in which we can choose a representation \( \rho \) is to choose multiplicities \( m_i \geq 0 \) and a basis matrix \( A \). In Section 2.7 we will find that there is an important restriction on this freedom, which alleviates the need to choose a basis. The choice of multiplicities is then the only hyperparameter, analogous to the choice of the number of channels in an ordinary CNN. Indeed, the multiplicities determine the number of channels: \( K = \sum_i m_i \dim \varphi_i \). 2.6 DETERMINING THE TYPE OF THE INDUCED REPRESENTATION By choosing the type of \( \rho \), we also determine the type of \( \pi = \mathrm{Ind}_H^G \rho \) (restricted to \( H \)), but what is it? Explicit formulas exist (Reeder (2014); Serre (1977)) but are rather complicated, so we will present a simple computational procedure that can be used to determine the type of any representation. This procedure relies on the *character* \( \chi_\rho(g) = \mathrm{Tr}(\rho(g)) \) of the representation to be decomposed. The most important fact about characters is that the characters of irreps \( \varphi_i, \varphi_j \) are orthogonal: \[ \langle \chi_{\varphi_i}, \chi_{\varphi_j} \rangle \equiv \frac{1}{|H|} \sum_{h \in H} \chi_{\varphi_i}(h) \chi_{\varphi_j}(h) = \delta_{ij}. \] Furthermore, since the trace of a direct sum equals the sum of the traces (i.e. \( \chi_{\rho \oplus \rho'} = \chi_\rho + \chi_{\rho'} \)), and every representation \( \rho \) is a direct sum of irreps, it follows that we can obtain the multiplicity of irrep \( \varphi_i \) in \( \rho \) by computing the inner product with the \( i \)-th character: \[ \langle \chi_\rho, \chi_{\varphi_i} \rangle = \langle \chi_{\oplus_j m_j \varphi_j}, \chi_{\varphi_i} \rangle = \left\langle \sum_j m_j \chi_{\varphi_j}, \chi_{\varphi_i} \right\rangle = \sum_j m_j \langle \chi_{\varphi_j}, \chi_{\varphi_i} \rangle = m_i. \]
So a simple dot product of characters is all we need to determine the type of a representation. As we will see next, the type of the input and output representation of a layer determines the parameter cost of that layer. 2.6.1 THE PARAMETER COST OF EQUIVARIANT CONVOLUTION LAYERS Steerable CNNs use parameters much more efficiently than ordinary CNNs. In this section we show how the number of parameters required by an equivariant layer is determined by the feature types of the input and output space, and how the efficiency of a choice of feature types may be evaluated. In section 2.3, we found that a filter bank \( \Psi \) is equivariant if and only if it lies in the vector space called \( \mathrm{Hom}_H(\pi, \rho) \). It follows that the number of parameters for such a filter bank is equal to the dimensionality of this space, \( n = \dim \mathrm{Hom}_H(\pi, \rho) \). This number is known as the intertwining number of \( \pi \) and \( \rho \) and plays an important role in the theory of group representations. As with multiplicities, the intertwining number is easily computed using characters. It can be shown (Reeder, 2014) that the intertwining number equals: \[ \dim \mathrm{Hom}_H(\pi, \rho) = \langle \chi_\pi, \chi_\rho \rangle. \] By linearity and the orthogonality of characters, we find that \( \dim \mathrm{Hom}_H(\pi, \rho) = \sum_i m_i m_i' \), for representations \( \pi, \rho \) of type \( (m_1, \ldots, m_J) \) and \( (m_1', \ldots, m_J') \), respectively. Thus, as far as the number of parameters of a steerable convolution layer is concerned, the only choice we have to make for \( \rho \) is its type – a short list of integers \( m_i \). The efficiency of a choice of type can be assessed using a quantity we call the parameter utilization: \[ \mu = \frac{\dim \pi \cdot \dim \rho}{\dim \mathrm{Hom}_H(\pi, \rho)}. \] The numerator equals \( s^2 K \cdot K' \): the number of parameters for a non-equivariant filter bank. The denominator equals the parameter cost of an equivariant filter bank with the same filter size and number of input/output channels. Typical values of \( \mu \) in effective architectures are around \( |H| \), e.g. \( \mu = 8 \) for \( H = D4 \). Such a layer utilizes its parameters 8 times more intensively than an ordinary convolution layer. 2.7 EQUIVARIANT NONLINEARITIES & CAPSULES In the previous section we showed that only the basis-independent types of \( \pi \) and \( \rho \) play a role in determining the parameter cost of an equivariant filter bank. An equivalent representation \( \rho'(g) = A \rho(g) A^{-1} \) will have the same type, and hence the same parameter cost as \( \rho \). However, when it comes to nonlinearities, different bases behave differently. Just like a convolution layer (eq. 3), a layer of nonlinearities must commute with the group action. An elementwise nonlinearity \( \nu : \mathbb{R} \to \mathbb{R} \) (or more generally, a fiber-wise nonlinearity \( \nu : \mathbb{R}^K \to \mathbb{R}^{K'} \)) is admissible for an input representation \( \rho \) if there exists an output representation \( \rho' \) such that \( \nu \) applied after \( \rho \) equals \( \rho' \) applied after \( \nu \). Since commutation with nonlinearities depends on the basis, we need a more granular notion than the feature type. We define a \( \rho \)-capsule as a (typically low-dimensional) feature vector that transforms according to a representation \( \rho \) (we may also refer to \( \rho \) as the capsule). The character computations of this and the previous subsection are illustrated in the short sketch below.
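The following is a minimal numpy sketch (ours, for illustration only; the helper names are not from the paper). The irrep characters are read off from Table 1, and the character of \( \pi_0 \) on 3 x 3 single-channel filters counts the pixels fixed by each transformation:

```python
import numpy as np

# Characters of the D4 irreps from Table 1, listed over the group elements
# in the order (e, r, r^2, r^3, m, mr, mr^2, mr^3).
IRREP_CHARS = {
    'A1': np.array([1, 1, 1, 1, 1, 1, 1, 1]),
    'A2': np.array([1, 1, 1, 1, -1, -1, -1, -1]),
    'B1': np.array([1, -1, 1, -1, 1, -1, 1, -1]),
    'B2': np.array([1, -1, 1, -1, -1, 1, -1, 1]),
    'E':  np.array([2, 0, -2, 0, 0, 0, 0, 0]),
}

def rep_type(chi):
    """Multiplicity of each irrep in a representation with character chi."""
    n = len(chi)
    return {name: int(round(np.dot(chi, c) / n))
            for name, c in IRREP_CHARS.items()}

def intertwining_number(chi_pi, chi_rho):
    """dim Hom_H(pi, rho) = <chi_pi, chi_rho> for the (real) D4 irreps."""
    return int(round(np.dot(chi_pi, chi_rho) / len(chi_pi)))

# pi_0 permutes the 9 pixels of a 3x3 filter, so its character counts the
# fixed pixels: all 9 for e, only the center for rotations, and a full
# row, column or diagonal (3 pixels) for each reflection.
chi_pi0 = np.array([9, 1, 1, 1, 3, 3, 3, 3])

print(rep_type(chi_pi0))                      # type (3, 0, 1, 1, 2)
print(intertwining_number(chi_pi0, chi_pi0))  # 15 = 3*3 + 1*1 + 1*1 + 2*2
```

The printed type matches the decomposition of \( \pi_0 \) stated in Table 1, and the intertwining number reproduces the formula \( \sum_i m_i m_i' \).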
Thus, while a capsule has a type, not all representations of that type are equivalent as capsules. Given a catalogue of capsules \( \rho^i \) (for \( i = 1, \ldots, C \)) with multiplicities \( m_i \), we can construct a fiber as a stack of capsules that is steerable by a block-diagonal representation \( \rho \) with \( m_i \) copies of \( \rho^i \) on the diagonal. Like the capsules of Hinton et al. (2011), our capsules encode the pose of a pattern in the input, and consist of a number of units (dimensions) that do not get mixed with the units of other capsules by symmetries. In this sense, a stack of capsules is disentangled (Cohen & Welling, 2014). We have found a few simple types of capsules and corresponding admissible nonlinearities. It is easy to see that any nonlinearity is admissible for \( \rho \) when the latter is realized by permutation matrices: permuting a list of coordinates and then applying a nonlinearity is the same as applying the nonlinearity and then permuting. If \( \rho \) is realized by a signed permutation matrix, then CReLU\((\alpha) = (\mathrm{ReLU}(\alpha), \mathrm{ReLU}(-\alpha))\) introduced by Shang et al. (2016), or any concatenated nonlinearity \( \nu'(\alpha) = (\nu(\alpha), \nu(-\alpha)) \), will be admissible. Any scale-free concatenated nonlinearity such as CReLU is admissible for a representation realized by monomial matrices (having the same nonzero pattern as a permutation matrix). Finally, we can always make a representation of a finite group orthogonal by a suitable choice of basis, which means that we can use any nonlinearity that acts only on the length of the vector. For many groups, the irreps can be realized using signed permutation matrices, so we can use irreducible \( \varphi_i \)-capsules with concatenated nonlinearities such as CReLU. Another class of capsules, which we call quotient capsules, are naturally realized by permutation matrices, and are thus compatible with any nonlinearity. These are described in Appendix C. 2.8 COMPUTATIONAL EFFICIENCY Modern convolutional networks often use on the order of hundreds of channels \( K \) per layer (Zagoruyko & Komodakis, 2016). When using \( 3 \times 3 \) filters, a filter bank can have on the order of \( 9K^2 \approx 10^6 \) dimensions. The number of parameters for an equivariant filter bank is about \( \mu \approx 10 \) times smaller, but a basis for the space of equivariant filter banks would still be about \( 10^6 \times 10^5 \), which is too large to be practical. Fortunately, the block-diagonal structure of \( \pi \) and \( \rho \) induces a block structure in \( \Psi \). Suppose \( \pi = \mathrm{block\_diag}(\pi^1, \ldots, \pi^P) \) and \( \rho = \mathrm{block\_diag}(\rho^1, \ldots, \rho^Q) \). Then an intertwiner is a matrix of shape \( K' \times K s^2 \), where \( K' = \sum_i \dim \rho^i \) and \( K s^2 = \sum_i \dim \pi^i \). This matrix has the following block structure: \[ \Psi = \begin{bmatrix} h_{11} \in \mathrm{Hom}_H(\rho^1, \pi^1) & \cdots & h_{1P} \in \mathrm{Hom}_H(\rho^1, \pi^P) \\ \vdots & \ddots & \vdots \\ h_{Q1} \in \mathrm{Hom}_H(\rho^Q, \pi^1) & \cdots & h_{QP} \in \mathrm{Hom}_H(\rho^Q, \pi^P) \end{bmatrix} \] Each block \( h_{ij} \) corresponds to an input-output pair of capsules, and can be parameterized by a linear combination of basis matrices \( \psi_k^{ij} \in \mathrm{Hom}_H(\rho^i, \pi^j) \). In practice, we typically use many copies of the same capsule (say \( n_i \) copies of \( \rho^i \) and \( m_j \) copies of \( \pi^j \)).
Therefore, many of the blocks \( h_{ij} \) can be constructed using the same intertwiner basis. If we order equivalent capsules to be adjacent, the intertwiner consists of “blocks of blocks”. Each superblock \( H_{ij} \) has shape \( n_i \dim \rho^i \times m_j \dim \pi^j \), and consists of subblocks of shape \( \dim \rho^i \times \dim \pi^j \). The computation graph for an equivariant convolution layer is constructed as follows. Given a catalogue of capsules \( \rho^i \) and corresponding post-activation capsules \( \mathrm{Act}_{\nu'} \rho^i \), we compute the induced representations \( \pi^i = \mathrm{Ind}_H^G \mathrm{Act}_{\nu'} \rho^i \) and the bases for \( \mathrm{Hom}_H(\rho^i, \pi^j) \) in an offline step. The bases are stored as matrices \( \psi^{ij} \) of shape \( \dim \rho^i \cdot \dim \pi^j \times \dim \mathrm{Hom}_H(\rho^i, \pi^j) \). Then, given a list of input / output multiplicities \( n_i, m_j \) for the capsules, a parameter matrix \( \Theta^{ij} \) of shape \( \dim \mathrm{Hom}_H(\rho^i, \pi^j) \times n_i m_j \) is instantiated. The superblocks \( H_{ij} \) are obtained by a matrix multiplication \( \psi^{ij} \Theta^{ij} \), followed by a reshape to shape \( n_i \dim \rho^i \times m_j \dim \pi^j \). Once all superblocks are filled in, the matrix \( \Psi \) is reshaped from \( K' \times K s^2 \) to \( K' \times K \times s \times s \) and convolved with the input. 2.9 USING STEERABLE CNNs IN PRACTICE A full understanding of the theory of steerable CNNs requires some knowledge of group representation theory, but using steerable CNN technology is not much harder than using ordinary CNNs. Instead of choosing a number of channels for a given layer, one chooses a list of multiplicities \( m_i \) for each capsule in a library of capsules provided by the developer. To preserve equivariance, the activation function applied to a capsule must be chosen from a list of admissible nonlinearities for that capsule (which sometimes includes all nonlinearities). Finally, one must respect the type system and only add identical capsules (e.g. in ResNets). These constraints can all be checked automatically. 3 RELATED WORK Steerable filters were first studied for applications in signal processing and low-level vision (Freeman & Adelson, 1991; Greenspan et al., 1994; Simoncelli & Freeman, 1995). More or less explicit connections between steerability and group representation theory have been observed by Lenz (1989); Koenderink & Van Doorn (1990); Teo (1998); Krajsek & Mester (2007). As we have tried to demonstrate in this paper, representation theory is indeed the natural mathematical framework in which to study steerability. In machine learning, equivariant kernels were studied by Reisert (2008); Skibbe (2013). In the context of neural networks, various authors have studied equivariant representations. Capsules were introduced in Hinton et al. (2011), and significantly improved by Tieleman (2014). A theoretical account of equivariant representation learning in the brain is given by Anselmi et al. (2014). Group equivariant scattering networks were defined and studied by Mallat (2012) for compact groups, and by Sifre & Mallat (2013); Oyallon & Mallat (2015) for the roto-translation group. Jacobsen et al. (2016) describe a network that uses a fixed set of (possibly steerable) basis filters with learned weights. Lenc & Vedaldi (2015) showed empirically that convolutional networks tend to learn equivariant representations, which suggests that equivariance could be a good inductive bias.
Invariant and equivariant CNNs have been studied by Gens & Domingos (2014); Kanazawa et al. (2014); Dieleman et al. (2015; 2016); Cohen & Welling (2016); Marcos et al. (2016). All of these models, as well as scattering networks, implicitly use the regular representation: feature maps are (often implicitly) conceived of as functions on \( G \), and the action of \( G \) on the space of functions on \( G \) is known as the regular representation (Serre (1977), Appendix B). Our work is the first to consider other kinds of equivariance in the context of CNNs. The idea of adding a type system to neural networks has been explored by Olah (2015); Balduzzi & Ghifary (2016). We have shown that a type system emerges naturally from the decomposition of a linear representation of a mathematical structure (a group, in our case) associated with the representation learned by a neural network. 4 EXPERIMENTS We implemented steerable CNNs in Chainer (Tokui et al., 2015) and performed experiments on the CIFAR10 dataset (Krizhevsky, 2009) to determine if steerability is a useful inductive bias, and to determine the relative merits of the various types of capsules. In order to run experiments faster, and to see how steerable CNNs perform in the small-data regime, we used only 2000 training samples for our initial experiments. As a baseline, we used the competitive wide residual networks (ResNets) architecture (He et al., 2016a;b; Zagoruyko & Komodakis, 2016). We tuned the capacity of this network for the reduced dataset size and settled on a 20 layer architecture (three residual blocks per stage, with two layers each, for three stages with feature maps of size \( 32 \times 32, 16 \times 16 \) and \( 8 \times 8 \), various widths). We compared the baseline architecture to various kinds of steerable CNN, obtained by replacing the convolution layers by steerable convolution layers. To make sure that differences in performance were not simply due to underfitting or overfitting, we tuned the width (number of channels, \( K \)) using a validation set. The rest of the training procedure is identical to Cohen & Welling (2016), and is fixed for all of our experiments. We first tested steerable CNNs that consist entirely of a single kind of capsule. We found that architectures with only one type do not perform very well (roughly 30-40% error, vs. 30% for plain ResNets trained on 2k samples from CIFAR10), except for those that use the regular representation capsule (Appendix C), which outperforms standard CNNs (26.75% error). This is not too surprising, because many capsules are quite restrictive in the spatial patterns they can express. The strong performance of regular capsules is consistent with the results of Cohen & Welling (2016), and can be explained by the fact that the regular representation contains all other (irreducible and quotient) representations as subrepresentations, and can therefore learn arbitrary spatial patterns. We then created networks that use a mix of the more successful kinds of capsules. After a few preliminary experiments, we settled on a residual network that uses one mix of capsules for the input and output layer of a residual block, and another for the intermediate layer. 
The first representation consists of quotient capsules: regular, qm, qmr2, qmr3 (see Appendix C) followed by ReLUs. The second consists of irreducible capsules: A1, A2, B1, B2, E(2x) followed by CReLU. <table> <tr> <th>Net</th> <th>Depth</th> <th>Width</th> <th>#Params</th> <th>#Labels</th> <th>Dataset</th> <th>Test error</th> </tr> <tr> <td>Ladder</td> <td>10</td> <td>96</td> <td>4k</td> <td>4k</td> <td>C10ss</td> <td>20.4</td> </tr> <tr> <td>steer</td> <td>14</td> <td>(280, 112)</td> <td>4.4M</td> <td>4k</td> <td>C10</td> <td>23.66</td> </tr> <tr> <td>steer</td> <td>20</td> <td>(160, 64)</td> <td>2.2M</td> <td>4k</td> <td>C10</td> <td>24.56</td> </tr> <tr> <td>steer</td> <td>14</td> <td>(280, 112)</td> <td>4.4M</td> <td>4k</td> <td>C10+</td> <td>16.44</td> </tr> <tr> <td>steer</td> <td>20</td> <td>(160, 64)</td> <td>2.2M</td> <td>4k</td> <td>C10+</td> <td>16.42</td> </tr> <tr> <td>ResNet</td> <td>1001</td> <td>16</td> <td>10.2M</td> <td>50k</td> <td>C10+</td> <td>4.62</td> </tr> <tr> <td>Wide</td> <td>28</td> <td>160</td> <td>36.5M</td> <td>50k</td> <td>C10+</td> <td>4.17</td> </tr> <tr> <td>Dense</td> <td>100</td> <td>2400</td> <td>27.2M</td> <td>50k</td> <td>C10+</td> <td>3.74</td> </tr> <tr> <td>steer</td> <td>26</td> <td>(280, 112)</td> <td>9.1M</td> <td>50k</td> <td>C10+</td> <td>3.74</td> </tr> <tr> <td>steer</td> <td>20</td> <td>(440, 176)</td> <td>16.7M</td> <td>50k</td> <td>C10+</td> <td>3.95</td> </tr> <tr> <td>steer</td> <td>14</td> <td>(400, 160)</td> <td>9.1M</td> <td>50k</td> <td>C10+</td> <td><b>3.65</b></td> </tr> <tr> <td>ResNet</td> <td>1001</td> <td>16</td> <td>10.2M</td> <td>50k</td> <td>C100+</td> <td>22.71</td> </tr> <tr> <td>Wide</td> <td>28</td> <td>160</td> <td>36.5M</td> <td>50k</td> <td>C100+</td> <td>20.50</td> </tr> <tr> <td>Dense</td> <td>100</td> <td>2400</td> <td>27.2M</td> <td>50k</td> <td>C100+</td> <td>19.25</td> </tr> <tr> <td>steer</td> <td>20</td> <td>(280, 112)</td> <td>6.9M</td> <td>50k</td> <td>C100+</td> <td>19.84</td> </tr> <tr> <td>steer</td> <td>14</td> <td>(400, 160)</td> <td>9.1M</td> <td>50k</td> <td>C100+</td> <td><b>18.82</b></td> </tr> </table> Table 2: Comparison of results of steerable CNNs vs. previous state of the art methods. A plus (+) indicates modest data augmentation (shifts and flips). Width for steerable CNNs is reported as a pair of numbers, one for the input / output layer of a ResNet block, and one for the intermediate layer. On CIFAR10 with 2k labels, this architecture works better than standard ResNets and regular capsules, at 24.48% error. When tested on CIFAR10 with 4k labels (Table 2), the method comes close to the state of the art among semi-supervised methods, which use additional unlabelled data (Rasmus et al., 2015), and outperforms transfer learning approaches such as DCGAN, which achieves 26.2% error (Radford et al., 2015). When tested on the full CIFAR10 and CIFAR100 datasets, the steerable CNN substantially outperforms the ResNet (He et al., 2016b) baseline and achieves state of the art results (improving over wide and dense nets (Zagoruyko & Komodakis, 2016; Huang et al., 2016)). 5 CONCLUSION & FUTURE WORK We have presented a theoretical framework for understanding steerable representations in convolutional networks, and have shown that steerability is a useful inductive bias that can improve model accuracy, particularly when little data is available.
Our experiments show that a simple steerable architecture achieves state-of-the-art results on CIFAR10 and CIFAR100, outperforming recent architectures such as wide and dense residual networks. The mathematical connection between representation learning and representation theory that we have established improves our understanding of the inner workings of (equivariant) convolutional networks, revealing the humble CNN as an elegant geometrical computation engine. We expect that this new tool (representation theory), developed over more than a century by mathematicians and physicists, will greatly benefit future investigations in this area. For concreteness, we have used the group of flips and rotations by multiples of 90 degrees as a running example throughout this paper. This group already has some nontrivial characteristics (such as non-commutativity), but it is still small and discrete. The theory of steerable CNNs, however, readily extends to the continuous setting. Evaluating steerable CNNs for large, continuous and high-dimensional groups is an important piece of future work. Another direction for future work is learning the feature types, which may be easier in the continuous setting because (for non-compact groups) the irreps live in a continuous space where optimization may be possible. Beyond classification, steerable CNNs are likely to be useful in geometrical tasks such as action recognition, pose and motion estimation, and continuous control tasks. ACKNOWLEDGMENTS We kindly thank Kenta Oono, Shuang Wu, Thomas Kipf and the anonymous reviewers for their feedback and suggestions. This research was supported by Facebook, Google and NWO (grant number NAI.14.108). APPENDIX A: INDUCTION In this section we will show that a stack of feature maps produced by convolution with an \( H \)-equivariant filter bank transforms according to the induced representation. That is, we will derive eq. 5, repeated here for convenience: \[ [\Psi \star [\pi_l(tr)f]](x) = \rho_{l+1}(r)\left([\Psi \star f]\left((tr)^{-1}x\right)\right) \] (14) In the main text, we mentioned that \( x \in \mathbb{Z}^2 \) can be interpreted as a point or as a translation. Here we make this difference explicit, by writing \( x \in \mathbb{Z}^2 \) for a point and \( \bar{x} \in G \) for a translation. (The map \( x \mapsto \bar{x} \) defines a section of the projection map \( G \to \mathbb{Z}^2 \) that forgets the non-translational part of the transformation (Kaniuth & Taylor, 2013).) With this notation, the convolution is defined as: \[ [\Psi \star f](x) = \Psi \pi(\bar{x}^{-1})f \] (15) Although the induced representation can be described in a more general setting, we will use an explicit matrix representation of \( G \) to make it easier to check our computations. A general element of \( G \) is written as: \[ g = tr = \begin{bmatrix} I & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \] (16) where \( R \) is the matrix representation of \( r \) (e.g. a \( 2 \times 2 \) rotation / reflection matrix), and \( T \) is a translation vector. 
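As a sanity check of this matrix representation, the following minimal NumPy sketch (our own illustration, not code from the paper; the specific rotation and translation values are arbitrary) verifies that the homogeneous-coordinate form of eq. 16 implements composition in \( G \) by matrix-matrix multiplication and the action on \( \mathbb{Z}^2 \) by matrix-vector multiplication:

```python
import numpy as np

# R is the matrix of a 90-degree rotation r; T is the translation vector of t.
R = np.array([[0, -1],
              [1,  0]])
T = np.array([2, 3])

def hom(rot, trans):
    """Homogeneous 3x3 matrix [[rot, trans], [0, 1]] as in eq. 16."""
    g = np.eye(3, dtype=int)
    g[:2, :2] = rot
    g[:2, 2] = trans
    return g

t = hom(np.eye(2, dtype=int), T)
r = hom(R, np.zeros(2, dtype=int))

g = t @ r                         # composition in G is matrix multiplication
assert (g == hom(R, T)).all()     # eq. 16: tr = [[R, T], [0, 1]]
assert (r @ t != t @ r).any()     # G is non-commutative

# The action of G on Z^2 is matrix-vector multiplication in homogeneous coords.
x = np.array([1, 0, 1])           # the point (1, 0) with homogeneous coordinate
print((g @ x)[:2])                # g . x = R x + T = (2, 4)
```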
The section we use is: \[ \bar{x} = \begin{bmatrix} I & x \\ 0 & 1 \end{bmatrix} \] (17) Finally, we will distinguish the action of \( G \) on itself, written \( gh \) for \( g, h \in G \) (implemented as matrix-matrix multiplication) and its action on \( \mathbb{Z}^2 \), written \( g \cdot x \) for \( g \in G \) and \( x \in \mathbb{Z}^2 \) (implemented as matrix-vector multiplication by adding a homogeneous coordinate to \( x \)). To keep notation uncluttered, we will write \( \pi = \pi_l \) and \( \rho = \rho_{l+1} \). In full detail, the derivation of the transformation law for the feature space induced by \( \rho \) proceeds as follows: \[ \begin{align*} [\Psi \star [\pi(tr)f]](x) &= \Psi \pi(\bar{x}^{-1})\pi(tr)f \\ &= \Psi \pi(\bar{x}^{-1}tr)f \\ &= \Psi \pi(rr^{-1}\bar{x}^{-1}tr)f \\ &= \Psi \pi(r)\pi(r^{-1}\bar{x}^{-1}tr)f \\ &= \rho(r)\Psi \pi(r^{-1}\bar{x}^{-1}tr)f \\ &= \rho(r)\Psi \pi((r^{-1}t^{-1}\bar{x}r)^{-1})f \\ &= \rho(r)\Psi \pi\left( \overline{(tr)^{-1} \cdot x}^{\,-1} \right) f \\ &= \rho(r)[\Psi \star f]((tr)^{-1} \cdot x) \end{align*} \] (18) The last line is the result shown in the paper. The justification of each step is: 1. Definition of \( \star \) 2. \( \pi \) is a homomorphism / group representation 3. \( rr^{-1} \) is the identity, so we can always multiply by it 4. \( \pi \) is a homomorphism / group representation 5. \( \Psi \in \mathrm{Hom}_H(\pi, \rho) \) is equivariant to \( r \in H \). 6. Invert twice. 7. \( \overline{(tr)^{-1} \cdot x} = r^{-1} t^{-1} \bar{x} r \), which can be checked by multiplying the matrices: \[ r^{-1} t^{-1} \bar{x} r = \begin{bmatrix} R^{-1} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} I & -T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} I & x \\ 0 & 1 \end{bmatrix} \begin{bmatrix} R & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} I & R^{-1}(x - T) \\ 0 & 1 \end{bmatrix} = \overline{(tr)^{-1} \cdot x} \] 8. Definition of \( \star \) The derivation above is somewhat involved and messy, so the reader may prefer to think geometrically (using the figures in the paper) instead of algebraically. This complexity is an artifact of the lack of abstraction in our presentation. The induced representation is really a very natural object to consider (abstractly, it is the “adjoint functor” to the restriction functor). A more abstract treatment of the induced representation can be found in Serre (1977); Mackey (1952); Reeder (2014). A treatment that is close to our own, but more general, is the “alternate description” found on page 49 of Kaniuth & Taylor (2013). APPENDIX B: RELATION TO GROUP EQUIVARIANT CNNs In this section we show that the recently introduced Group Equivariant Convolutional Networks (G-CNNs, Cohen & Welling (2016)) are a special kind of steerable CNN. Specifically, a G-CNN is a steerable CNN with regular capsules. In a G-CNN, the feature maps (except those of the input) are thought of as functions \( f : G \to \mathbb{R}^K \) instead of functions on the plane \( f : \mathbb{Z}^2 \to \mathbb{R}^K \), as we do here. It is shown that the feature maps transform according to \[ [\pi(g)f](h) = f(g^{-1}h). \] This defines a linear representation of \( G \) known as the regular representation. It is easy to see that the regular representation is naturally realized by permutation matrices. Furthermore, it is known that the regular representation of \( G \) is induced by the regular representation of \( H \). The latter is defined in Appendix C, and is what we refer to as “regular capsules” in the paper. APPENDIX C: REGULAR AND QUOTIENT FEATURES Let \( H \) be a finite group. A subgroup of \( H \) is a subset that is also itself a group (i.e. closed under composition and inverses). The (left) cosets of a subgroup \( K \) in \( H \) are the sets \( hK = \{hk \mid k \in K\} \). The cosets are disjoint and jointly cover the whole group \( H \) (i.e. they partition \( H \)). 
The set of all cosets of \( K \) in \( H \) is denoted \( H/K \), and is also called the quotient of \( H \) by \( K \). The coset space carries a natural left action by \( H \). Let \( a, b \in H \); then \( a \cdot bK = (ab)K \). This action translates into an action on the space of functions on \( H/K \). Let \( Q \) denote the space of functions \( f : H/K \to \mathbb{R} \). Then we have the following representation of \( H \): \[ [\rho(a)f](bK) = f(a^{-1} \cdot bK). \] The function \( f \) attaches a value to every coset. The \( H \)-action permutes these values, because it permutes the cosets. Hence, \( \rho \) can be realized by permutation matrices. For small groups the explicit computations can easily be done by hand, while for large groups this task can be automated. In this way, we get one permutation representation for each subgroup \( K \) of \( H \). In particular, for the trivial subgroup \( K = \{e\} \) (containing only the identity \( e \)), we have \( H/K \cong H \). The representation in the space of functions on \( H \) is known as the “regular representation”. Using such regular representations in a steerable CNN is equivalent to using the group convolutions introduced in Cohen & Welling (2016), so steerable CNNs are a strict generalization of G-CNNs. At the other extreme, taking \( K = H \) gives the quotient \( H/K \cong \{e\} \), the trivial group, whose associated representation is the trivial representation \( A1 \). For the roto-reflection group \( H = D4 \), we have the following subgroups and associated quotient features: <table> <tr> <th>Subgroup \( K \)</th> <th>quotient feature name</th> <th>dimensionality</th> </tr> <tr> <td>\{e\}</td> <td>regular</td> <td>8</td> </tr> <tr> <td>\{e, m\}</td> <td>qm</td> <td>4</td> </tr> <tr> <td>\{e, mr\}</td> <td>qmr</td> <td>4</td> </tr> <tr> <td>\{e, mr^2\}</td> <td>qmr2</td> <td>4</td> </tr> <tr> <td>\{e, mr^3\}</td> <td>qmr3</td> <td>4</td> </tr> <tr> <td>\{e, r^2\}</td> <td>r2</td> <td>4</td> </tr> <tr> <td>\{e, r, r^2, r^3\}</td> <td>r</td> <td>2</td> </tr> <tr> <td>\{e, r^2, m, mr^2\}</td> <td>r2m</td> <td>2</td> </tr> <tr> <td>\{e, r^2, mr, mr^3\}</td> <td>r2mr</td> <td>2</td> </tr> <tr> <td>\( H \)</td> <td>A1</td> <td>1</td> </tr> </table>
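The following short sketch (our own illustration, not code from the paper) makes this construction concrete for \( H = D4 \): it enumerates the group as \( 2 \times 2 \) integer matrices, computes the cosets of a chosen subgroup \( K \), and builds the permutation matrices realizing the action of \( H \) on \( H/K \). The dimensionalities it reports match the table above (8 for the regular feature, 4 for qm).

```python
import numpy as np

# D4 as 2x2 integer matrices: r is a 90-degree rotation, m is a mirror.
r = np.array([[0, -1], [1, 0]])
m = np.array([[-1, 0], [0, 1]])
H = [np.linalg.matrix_power(r, i) for i in range(4)]
H += [g @ m for g in H]                      # the four reflections
elems = {g.tobytes(): g for g in H}          # byte-key -> matrix

def coset(h, K):
    """The left coset hK, represented as a frozenset of element keys."""
    return frozenset((h @ k).tobytes() for k in K)

def quotient_rep(a, K):
    """Permutation matrix realizing a . hK = (ah)K on the coset space H/K."""
    cs = sorted({coset(h, K) for h in H}, key=min)   # canonical coset order
    index = {c: i for i, c in enumerate(cs)}
    P = np.zeros((len(cs), len(cs)), dtype=int)
    for c in cs:
        rep = elems[next(iter(c))]                   # any representative of c
        P[index[coset(a @ rep, K)], index[c]] = 1
    return P

e = np.eye(2, dtype=int)
K_regular = [e]            # K = {e}: the regular feature, dimension 8
K_qm = [e, m]              # K = {e, m}: the qm feature, dimension 4
print(quotient_rep(r, K_regular).shape)              # (8, 8)
print(quotient_rep(r, K_qm).shape)                   # (4, 4)
# rho is a homomorphism: rho(a) rho(b) = rho(ab).
a, b = H[1], H[5]
assert (quotient_rep(a, K_qm) @ quotient_rep(b, K_qm)
        == quotient_rep(a @ b, K_qm)).all()
```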
accept
Accept (Poster)
7
2a7f35322805dc14c7d3da7f2803284dfe72c92d
iclr
2,017
HOLStep: A Machine Learning Dataset for Higher-Order Logic Theorem Proving Cezary Kaliszyk University of Innsbruck cezary.kaliszyk@uibk.ac.at François Chollet, Christian Szegedy Google Research {fchollet,szegedy}@google.com ABSTRACT Large computer-understandable proofs consist of millions of intermediate logical steps. The vast majority of such steps originate from manually selected and manually guided heuristics applied to intermediate goals. So far, machine learning has generally not been used to filter or generate these steps. In this paper, we introduce a new dataset based on Higher-Order Logic (HOL) proofs, for the purpose of developing new machine learning-based theorem-proving strategies. We make this dataset publicly available under the BSD license. We propose various machine learning tasks that can be performed on this dataset, and discuss their significance for theorem proving. We also benchmark a set of simple baseline machine learning models suited for the tasks (including logistic regression, convolutional neural networks and recurrent neural networks). The results of our baseline models show the promise of applying machine learning to HOL theorem proving. 1 INTRODUCTION As the usability of interactive theorem proving (ITP) systems (Harrison et al., 2014) grows, their use becomes a more common way of establishing the correctness of software as well as mathematical proofs. Today, ITPs are used for software certification projects ranging from compilers (Leroy, 2009) and operating system components (Chen et al., 2016; Klein et al., 2014), to establishing the absolute correctness of large proofs in mathematics such as the Kepler conjecture (Hales et al., 2015) and the Feit-Thompson Theorem (Gonthier et al., 2013). For results of such significance to be possible, the theorem libraries of these ITPs must contain all necessary basic mathematical properties, accompanied by formal proofs. This means that the size of many ITP libraries can be measured in tens of thousands of theorems (Grabowski et al., 2010; Blanchette et al., 2015) and billions of individual proof steps. While the general direction of the proofs is specified by humans (by providing the goal to prove, specifying intermediate steps, or applying certain automated tactics), the majority of such proof steps are actually found by automated reasoning-based proof search (Kaliszyk & Urban, 2015b), with very little application of machine learning techniques so far. At the same time, fast progress has been unfolding in machine learning applied to tasks that involve logical inference, such as natural language question answering (Sukhbaatar et al., 2015), knowledge base completion (Socher et al., 2013a), automated translation (Wu et al., 2016), and premise selection in the context of theorem proving (Alemi et al., 2016). Deep learning in particular has proven to be a powerful tool for embedding semantic meaning and logical relationships into geometric spaces, specifically via models such as convolutional neural networks, recurrent neural networks, and tree-recursive neural networks. These advances strongly suggest that deep learning may have become mature enough to yield significant advances in automated theorem proving. Remarkably, it has recently become possible to build a system, AlphaGo (Silver et al., 2016), blending classical AI techniques such as Monte-Carlo tree search and modern deep learning techniques, capable of playing the game of Go at super-human levels. 
We should note that theorem proving and Go playing are conceptually related, since both consist of searching for specific nodes in trees of states with extremely large arity and relatively large depth, which involves node evaluation decisions (how valuable is this state?) and policy decisions (which node should be expanded next?). The success of AlphaGo can thus serve as encouragement on the road to building deep learning-augmented theorem provers that would blend classical techniques developed over the past few decades with the latest machine learning advances. Fast progress in specific machine learning verticals has occasionally been achieved thanks to the release of specialized datasets (often with associated competitions, e.g. the ImageNet dataset for large-scale image classification (Deng et al., 2009)) serving as an experimental testbed and public benchmark of current progress, thus focusing the efforts of the research community. We hope that releasing a theorem proving dataset suited for specific machine learning tasks can serve the same purpose in the vertical of applying machine learning to theorem proving. 1.1 CONTRIBUTION AND OVERVIEW First, we develop a dataset for machine learning based on the proof steps used in a large interactive proof (Section 2). We focus on the HOL Light (Harrison, 2009) ITP, its multivariate analysis library (Harrison, 2013), as well as the formal proof of the Kepler conjecture (Hales et al., 2010). These formalizations constitute a diverse proof dataset containing basic mathematics, analysis, trigonometry, as well as reasoning about data structures such as graphs. Furthermore, these formal proof developments have been used as benchmarks for automated reasoning techniques (Kaliszyk & Urban, 2014). The dataset consists of 2,013,046 training examples and 196,030 testing examples that originate from 11,400 proofs. Precisely half of the examples are statements that were useful in proving the current conjectures, and half are steps that have been derived either manually or as part of the automated proof search but were not necessary in the final proofs. The dataset contains only proofs of non-trivial theorems that do not focus on computation but rather on actual theorem proving. For each proof, the conjecture that is being proven as well as its dependencies (axioms) are provided, and may be exploited in machine learning tasks. Furthermore, for each statement both its human-readable (pretty-printed) form and a tokenization designed to make machine learning tasks more manageable are included. Next, in Section 3 we discuss the proof step classification tasks that can be attempted using the dataset, and we discuss the usefulness of these tasks in interactive and automated theorem proving. These tasks include unconditioned classification (without access to conjectures and dependencies) and conjecture-conditioned classification (with access to the conjecture) of proof steps as being useful or not in a proof. We outline the use of such classification capabilities for search space pruning and internal guidance, as well as for generation of intermediate steps or possible new lemma statements. Finally, in Section 4 we propose three baseline models for the proof step classification tasks, and we experimentally evaluate the models on the data in Section 5. The models considered include both a relatively simple regression model, as well as deep learning models based on convolutional and recurrent neural networks. 
1.2 RELATED WORK The use of machine learning in interactive and automated theorem proving has so far focused on three tasks: premise selection, strategy selection, and internal guidance. We briefly explain each of these. Given a large library of proven facts and a user-given conjecture, the multi-label classification problem of selecting the facts that are most likely to lead to a successful proof of the conjecture has usually been called relevance filtering or premise selection (Alama et al., 2014). This is crucial for the efficiency of modern automation techniques for ITPs (Blanchette et al., 2016), which today can usually solve 40–50% of the conjectures in theorem proving libraries. Similarly, most competitive ATPs today (Sutcliffe, 2016) implement the SInE classifier (Hoder & Voronkov, 2011). A second theorem proving task where machine learning has been of importance is strategy selection. With the development of automated theorem provers came many parameters that control their execution. In fact, modern ATPs, such as E (Schulz, 2013) and Vampire (Kovács & Voronkov, 2013), include complete strategy description languages that allow a user to specify the orderings, weighting functions, literal selection strategies, etc. Rather than optimizing the search strategy globally, one can choose the strategy based on the currently considered problem. For this, some frameworks use machine learning (Bridge et al., 2014; Kühlwein & Urban, 2015). Finally, an automated theorem prover may use machine learning for choosing the actual inference steps. This has been shown to significantly reduce the proof search in first-order tableaux by the selection of extension steps to use (Urban et al., 2011), and has also been successfully applied in monomorphic higher-order logic proving (Färber & Brown, 2016). Data/proof mining has also been applied on the level of interactive theorem proving tactics (Duncan, 2007) to extract and reuse repeating patterns. 2 DATASET EXTRACTION We focus on the HOL Light theorem prover for two reasons. First, it follows the LCF approach\footnote{The LCF approach is a software architecture for implementing theorem provers which uses a strongly typed programming language with abstract datatypes (such as OCaml in the case of HOL Light) to separate the small trusted core, called the kernel, which verifies the primitive inferences, from user code, which allows the user to arbitrarily extend the system in a safe manner. For more details see (Gordon et al., 1979).}. This means that complicated inferences are reduced to the most primitive ones, so the modifications needed for data extraction can be restricted to the primitive inferences, and it is relatively easy to extract proof steps at an arbitrarily selected level of granularity. Second, HOL Light implements higher-order logic (Church, 1940) as its foundation, which on the one hand is powerful enough to encode most of today’s formal proofs, and on the other hand allows for an easy integration of many powerful automation mechanisms (Baader & Nipkow, 1998; Paulson, 1999). When selecting the theorems to record, we choose an intermediate approach between HOL Light ProofRecording (Obua & Skalberg, 2006) and the HOL/Import one (Kaliszyk & Krauss, 2013). The theorems that are derived by the most common proof functions are extracted by patching these functions, as in the former approach, and the remaining theorems are extracted from the underlying OCaml programming language interpreter. 
In certain cases decision procedures derive theorems to be reused in subsequent invocations. We detect such values by looking at theorems used across proof blocks, and avoid extracting such reused unrelated subproofs. All kernel-level inferences are recorded together with their respective arguments in a trace file. The trace is processed offline to extract the dependencies of the facts, detect used proof boundaries, mark the used and unused steps, and mark the training and testing examples. Only proofs that have sufficiently many used and unused steps are considered useful for the dataset. The annotated proof trace is processed again by a HOL kernel, saving the actual training and testing examples originating from non-trivial reasoning steps. Training and testing examples are grouped by proof: for each proof, the conjecture (the statement that is finally proved) and the dependencies of the theorem are constant, and a list of used and unused intermediate statements is provided. This means that the conjectures used in the training and testing sets are normally disjoint. For each statement, whether it is the conjecture, a proof dependency, or an intermediate statement, both a fully parenthesised HOL Light human-like printout and a predefined tokenization are provided. The standard HOL Light printer uses parentheses and operator priorities to make its notations somewhat similar to textbook-style mathematics, while at the same time preserving the complete unambiguity of the order of applications (this is particularly visible for associative operators). The tokenization that we propose attempts to reduce the number of parentheses. To do this, we compute the maximum number of arguments that each symbol needs to be applied to, and only mark partial application. This means that fully applied functions (more than 90% of the applications) require neither application operators nor parentheses. Top-level universal quantifications are eliminated, bound variables are represented by their de Bruijn indices (the distance from the corresponding abstraction in the parse tree of the term), and free variables are renamed canonically. Since Hindley-Milner type inference (Hindley, 1969) is sufficient to reconstruct the most-general types of the expressions well enough for automated-reasoning techniques (Kaliszyk et al., 2015), we erase all type information. Table 1 presents some dataset statistics. The dataset, the description of the used format, the scripts used to generate it, and the baseline model code are available at: http://cl-informatik.uibk.ac.at/cek/holstep/ <table> <tr> <th>Dataset</th> <th>Training</th> <th>Testing</th> <th>Used</th> <th>Not Used</th> </tr> <tr> <td>Proofs</td> <td>10,000</td> <td>1,000</td> <td>10,000</td> <td>1,000</td> </tr> <tr> <td>Conjectures</td> <td>10,000</td> <td>1,000</td> <td>10,000</td> <td>1,000</td> </tr> <tr> <td>Dependencies</td> <td>10,000</td> <td>1,000</td> <td>10,000</td> <td>1,000</td> </tr> </table> <table> <tr> <th></th> <th>Train</th> <th>Test</th> <th>Positive</th> <th>Negative</th> </tr> <tr> <td>Examples</td> <td>2013046</td> <td>196030</td> <td>1104538</td> <td>1104538</td> </tr> <tr> <td>Avg. length</td> <td>503.18</td> <td>440.20</td> <td>535.52</td> <td>459.66</td> </tr> <tr> <td>Avg. tokens</td> <td>87.01</td> <td>80.62</td> <td>95.48</td> <td>77.40</td> </tr> <tr> <td>Conjectures</td> <td>9999</td> <td>1411</td> <td>-</td> <td>-</td> </tr>
<tr> <td>Avg. dependencies</td> <td>29.58</td> <td>22.82</td> <td>-</td> <td>-</td> </tr> </table> Table 1: HolStep dataset statistics 3 MACHINE LEARNING TASKS 3.1 TASKS DESCRIPTION This dataset makes possible several tasks well-suited for machine learning, most of which are highly relevant for theorem proving: • Predicting whether a statement is useful in the proof of a given conjecture; • Predicting the dependencies of a proof statement (premise selection); • Predicting whether a statement is an important one (human named); • Predicting which conjecture a particular intermediate statement originates from; • Predicting the name given to a statement; • Generating intermediate statements useful in the proof of a given conjecture; • Generating the conjecture the current proof will lead to. In what follows we focus on the first task: classifying proof step statements as being useful or not in the context of a given proof. This task may be further specialized into two different tasks: • Unconditioned classification of proof steps: determining how likely a given proof step is to be useful for the proof it occurred in, based solely on the content of the statement (i.e., by only providing the model with the step statement itself, absent any context). • Conditioned classification of proof steps: determining how likely a given proof step is to be useful for the proof it occurred in, with “conditioning” on the conjecture statement that the proof was aiming to attain (i.e., by providing the model with both the step statement and the conjecture statement). In the dataset, for every proof we provide the same number of useful and non-useful steps. As such, the proof step classification problem is a balanced two-class classification problem, where a random baseline would yield an accuracy of 0.5. 3.2 RELEVANCE TO INTERACTIVE AND AUTOMATED THEOREM PROVING In the interaction with an interactive theorem prover, the tasks that require the most human time are: the search for good intermediate steps; the search for automation techniques able to justify the individual steps; and the search through theorem proving libraries for the necessary simpler facts. These three problems directly correspond to the machine learning tasks proposed in the previous subsection. Being able to predict the usefulness of a statement will significantly improve many automation techniques. The generation of good intermediate lemmas or intermediate steps can improve the level of granularity of the proof steps. Understanding the correspondence between statements and their names can allow users to search for statements in the libraries more efficiently (Aspinall & Kaliszyk, 2016). Premise selection and filtering are already used in many theorem proving systems, and generation of succeeding steps corresponds to conjecturing and theory exploration. Figure 1: Unconditioned classification model architectures. 4 BASELINE MODELS For each task (conditioned and unconditioned classification), we propose three different deep learning architectures, meant to provide a baseline for the classification performance that can be achieved on this dataset. Our models cover a range of architecture features (from convolutional networks to recurrent networks), aiming at probing what characteristics of the data are the most helpful for usefulness classification. Our models are implemented in TensorFlow (Abadi et al., 2015) using the Keras framework (Chollet, 2015). Each model was trained on a single Nvidia K80 GPU. 
Training only takes a few hours per model, which makes running these experiments accessible to most people (they could even be run on a laptop CPU). We are releasing all of our benchmark code as open-source software (https://github.com/tensorflow/deepmath/tree/master/holstep_baselines) so as to allow others to reproduce our results and improve upon our models. 4.1 UNCONDITIONED CLASSIFICATION MODELS Our three models for this task are as follows: • Logistic regression on top of learned token embeddings. This minimal model aims to determine to what extent simple differences in token distribution between useful and non-useful statements can be used to distinguish them. It provides an absolute floor on the performance achievable on this task. • 2-layer 1D convolutional neural network (CNN) with global maxpooling for sequence reduction. This model aims to determine the importance of local patterns of tokens. • 2-layer 1D CNN with LSTM (Hochreiter & Schmidhuber, 1997) sequence reduction. This model aims to determine the importance of order in the feature sequences. See figure 1 for a layer-by-layer description of these models. 4.2 CONDITIONED CLASSIFICATION MODELS For this task, we use versions of the above models that have two siamese branches (identical branches with shared weights), with one branch processing the proof step statement being considered, and the other branch processing the conjecture. Each branch outputs an embedding; these two embeddings (step embedding and conjecture embedding) are then concatenated and classified by a fully-connected network. See figure 2 for a layer-by-layer description of these models. Figure 2: Conditioned classification model architectures. 4.3 INPUT STATEMENTS ENCODING It should be noted that all of our models start with an Embedding layer, mapping tokens or characters in the statements to dense vectors in a low-dimensional space. We consider two possible encodings for presenting the input statements (proof steps and conjectures) to the Embedding layers of our models: • Character-level encoding of the human-readable versions of the statements, where each character (out of a set of 86 unique characters) in the pretty-printed statements is mapped to a 256-dimensional dense vector. This encoding yields longer statements (training statements are 308 characters long on average). • Token-level encoding of the versions of the statements rendered with our proposed high-level tokenization scheme. This encoding yields shorter statements (training statements are 60 tokens long on average), while considerably increasing the size of the set of unique tokens (1993 total tokens in the training set). 
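As a concrete illustration of the baselines in Sections 4.1 and 4.2, the sketch below builds the unconditioned 2-layer 1D CNN and its siamese conditioned variant. It is our own reconstruction from the descriptions above, written against the modern tf.keras API rather than the 2015-era Keras the authors used; the padded sequence length, filter counts, and kernel sizes are illustrative assumptions, not the paper's exact settings.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE = 1993 + 1  # token-level vocabulary; +1 for padding (assumption)
MAX_LEN = 256          # padded sequence length (illustrative assumption)

def cnn_branch():
    """Embedding -> 2-layer 1D CNN -> global max pooling (Section 4.1).
    Filter counts and kernel sizes are illustrative guesses."""
    return models.Sequential([
        layers.Embedding(VOCAB_SIZE, 256),          # token -> dense vector
        layers.Conv1D(256, 7, activation='relu'),   # local token patterns
        layers.Conv1D(256, 7, activation='relu'),
        layers.GlobalMaxPooling1D(),                # sequence reduction
    ])

def unconditioned_cnn():
    """Unconditioned baseline: classify a proof step on its own."""
    step = layers.Input(shape=(MAX_LEN,), dtype='int32')
    out = layers.Dense(1, activation='sigmoid')(cnn_branch()(step))
    return models.Model(step, out)

def conditioned_siamese_cnn():
    """Conditioned baseline (Section 4.2): one shared branch embeds both the
    step and the conjecture; the two embeddings are concatenated and then
    classified by a fully-connected network."""
    branch = cnn_branch()                           # shared weights
    step = layers.Input(shape=(MAX_LEN,), dtype='int32')
    conj = layers.Input(shape=(MAX_LEN,), dtype='int32')
    merged = layers.concatenate([branch(step), branch(conj)])
    hidden = layers.Dense(256, activation='relu')(merged)
    out = layers.Dense(1, activation='sigmoid')(hidden)
    return models.Model([step, conj], out)

model = unconditioned_cnn()
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```

Training would then call model.fit on padded integer token sequences with binary usefulness labels, matching the balanced two-class setup described in Section 3.1.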
Table 2: HolStep proof step classification accuracy without conditioning <table> <tr> <th></th> <th>Logistic regression</th> <th>1D CNN</th> <th>1D CNN-LSTM</th> </tr> <tr> <th>Accuracy with char input</th> <td>0.71</td> <td>0.82</td> <td><b>0.83</b></td> </tr> <tr> <th>Accuracy with token input</th> <td>0.71</td> <td><b>0.83</b></td> <td>0.77</td> </tr> </table> Table 3: HolStep proof step classification accuracy with conditioning <table> <tr> <th></th> <th>Logistic regression</th> <th>Siamese 1D CNN</th> <th>Siamese 1D CNN-LSTM</th> </tr> <tr> <th>Accuracy with char input</th> <td>0.71</td> <td>0.81</td> <td><b>0.83</b></td> </tr> <tr> <th>Accuracy with token input</th> <td>0.71</td> <td>0.82</td> <td>0.77</td> </tr> </table> 5 RESULTS Experimental results are presented in Tables 2 and 3, as well as in Figures 3 to 6. 5.1 INFLUENCE OF MODEL ARCHITECTURE Our unconditioned logistic regression model yields an accuracy of 71%, both with character encoding and token encoding (Tables 2 and 3). This demonstrates that differences in token or character distributions between useful and non-useful steps alone, absent any context, are sufficient for discriminating between useful and non-useful statements to a reasonable extent. This also demonstrates that the token encoding is not fundamentally more informative than raw character-level statements. Additionally, our unconditioned 1D CNN model yields an accuracy of 82% to 83%, both with character encoding and token encoding (Tables 2 and 3). This demonstrates that patterns of characters or patterns of tokens are considerably more informative than single tokens for the purpose of usefulness classification. Finally, our unconditioned convolutional-recurrent model does not improve upon the results of the 1D CNN, which indicates that our models are not able to meaningfully leverage order in the feature sequences into which the statements are encoded. 5.2 INFLUENCE OF INPUT ENCODING For the logistic regression model and the 2-layer 1D CNN model, the choice of input encoding seems to have little impact. For the convolutional-recurrent model, the use of the high-level tokenization seems to cause a large decrease in model performance (Figures 4 and 6). This may be due to the fact that token encoding yields shorter sequences, making the use of an LSTM less relevant. 5.3 INFLUENCE OF CONDITIONING ON THE CONJECTURE None of our conditioned models appear to be able to improve upon the unconditioned models, which indicates that our architectures are not able to leverage the information provided by the conjecture. The presence of the conditioning does however impact the training profile of our models, in particular by making the 1D CNN model converge faster and overfit significantly quicker (Figures 5 and 6). 6 CONCLUSIONS Our baseline deep learning models, albeit fairly weak, are still able to predict statement usefulness with remarkably high accuracy. Such methods already help first-order automated provers (Kaliszyk & Urban, 2015a), and since the branching factor is higher in HOL, the predictions are valuable for a number of practical proving applications. This includes making tableaux-based (Paulson, 1999) and superposition-based (Hurd, 2003) internal ITP proof search significantly more efficient, in turn making formalization easier. Figure 3: Training profile of the three unconditioned baseline models with character input. Figure 4: Training profile of the three unconditioned baseline models with token input. 
Figure 5: Training profile of the three conditioned baseline models with character input. Figure 6: Training profile of the three conditioned baseline models with token input. However, our models do not appear to be able to leverage order in the input sequences, nor conditioning on the conjectures. This is due to the fact that these models are not doing any form of logical reasoning on their input statements; rather, they are doing simple pattern matching at the level of n-grams of characters or tokens. This shows the need to focus future efforts on different models that can do reasoning, or alternatively, on systems that blend explicit reasoning (e.g. graph search) with deep learning-based feature learning. A potential new direction would be to leverage the graph structure of HOL statements using e.g. Recursive Neural Tensor Networks (Socher et al., 2013a,b) or other graph-based recursive architectures. 6.1 FUTURE WORK The dataset focuses on one interactive theorem prover. It would be interesting to see whether the proposed techniques generalize, primarily across ITPs that use the same foundational logic, for example using OpenTheory (Hurd, 2011), and secondarily across fundamentally different ITPs or even ATPs. A significant part of the unused steps originates from trying to fulfill the conditions for rewriting and from calls to intuitionistic tableaux. The main focus is, however, on the human-found proofs, so the trained predictions may to an extent mimic the usefulness bias present in human proofs. As ATPs are at the moment very weak in comparison with human intuition, improving this even for the many proofs humans do not find difficult would be an important gain. Finally, two of the proposed tasks for the dataset have been premise selection and intermediate sentence generation. It would be interesting to define more ATP-based ways to evaluate the selected premises, as well as to evaluate generated sentences (Kaliszyk et al., 2015). The dataset is a relatively large one when it comes to proof step classification; however, the number of available premises makes it a medium-sized set for premise selection in comparison with those of the Mizar Mathematical Library or the seL4 development. ACKNOWLEDGEMENTS The first author was partly supported by the ERC starting grant 714034. REFERENCES Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org. Jesse Alama, Tom Heskes, Daniel Kühlwein, Evgeni Tsivtsivadze, and Josef Urban. Premise selection for mathematics by corpus analysis and kernel methods. J. Autom. Reasoning, 52(2):191–213, 2014. doi: 10.1007/s10817-013-9286-5. Alex A. Alemi, François Chollet, Geoffrey Irving, Christian Szegedy, and Josef Urban. DeepMath – Deep sequence models for premise selection. In Daniel D. Lee, Masashi Sugiyama, Ulrike V. 
Luxburg, Isabelle Guyon, and Roman Garnett (eds.), Advances in Neural Information Processing Systems (NIPS 2016), pp. 2235–2243, 2016. URL https://arxiv.org/abs/1606.04442. David Aspinall and Cezary Kaliszyk. What’s in a theorem name? In Jasmin Christian Blanchette and Stephan Merz (eds.), Interactive Theorem Proving (ITP 2016), volume 9807 of LNCS, pp. 459–465. Springer, 2016. doi: 10.1007/978-3-319-43144-4. Franz Baader and Tobias Nipkow. Term rewriting and all that. Cambridge University Press, 1998. ISBN 978-0-521-45520-6. Jasmin C. Blanchette, Cezary Kaliszyk, Lawrence C. Paulson, and Josef Urban. Hammering towards QED. J. Formalized Reasoning, 9(1):101–148, 2016. ISSN 1972-5787. doi: 10.6092/issn.1972-5787/4593. Jasmin Christian Blanchette, Maximilian P. L. Haslbeck, Daniel Matichuk, and Tobias Nipkow. Mining the Archive of Formal Proofs. In Manfred Kerber, Jacques Carette, Cezary Kaliszyk, Florian Rabe, and Volker Sorge (eds.), Intelligent Computer Mathematics (CICM 2015), volume 9150 of LNCS, pp. 3–17. Springer, 2015. James P. Bridge, Sean B. Holden, and Lawrence C. Paulson. Machine learning for first-order theorem proving - learning to select a good heuristic. J. Autom. Reasoning, 53(2):141–172, 2014. doi: 10.1007/s10817-014-9301-5. Haogang Chen, Daniel Ziegler, Tej Chajed, Adam Chlipala, M. Frans Kaashoek, and Nickolai Zeldovich. Using crash Hoare logic for certifying the FSCQ file system. In Ajay Gulati and Hakim Weatherspoon (eds.), USENIX 2016. USENIX Association, 2016. François Chollet. Keras. https://github.com/fchollet/keras, 2015. Alonzo Church. A formulation of the simple theory of types. J. Symb. Log., 5(2):56–68, 1940. doi: 10.2307/2266170. URL http://dx.doi.org/10.2307/2266170. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009. Hazel Duncan. The Use of Data-Mining for the Automatic Formation of Tactics. PhD thesis, University of Edinburgh, 2007. Michael Färber and Chad E. Brown. Internal guidance for Satallax. In Nicola Olivetti and Ashish Tiwari (eds.), International Joint Conference on Automated Reasoning (IJCAR 2016), volume 9706 of LNCS, pp. 349–361. Springer, 2016. doi: 10.1007/978-3-319-40229-1. Georges Gonthier, Andrea Asperti, Jeremy Avigad, Yves Bertot, Cyril Cohen, François Garillot, Stéphane Le Roux, Assia Mahboubi, Russell O’Connor, Sidi Ould Biha, Ioana Pasca, Laurence Rideau, Alexey Solovyev, Enrico Tassi, and Laurent Théry. A machine-checked proof of the odd order theorem. In Sandrine Blazy, Christine Paulin-Mohring, and David Pichardie (eds.), Interactive Theorem Proving (ITP 2013), volume 7998 of LNCS, pp. 163–179. Springer, 2013. Michael J. C. Gordon, Robin Milner, and Christopher P. Wadsworth. Edinburgh LCF, volume 78 of Lecture Notes in Computer Science. Springer, 1979. ISBN 3-540-09724-4. doi: 10.1007/3-540-09724-4. URL http://dx.doi.org/10.1007/3-540-09724-4. Adam Grabowski, Artur Kornilowicz, and Adam Naumowicz. Mizar in a nutshell. J. Formalized Reasoning, 3(2):153–245, 2010. doi: 10.6092/issn.1972-5787/1980. Thomas Hales, John Harrison, Sean McLaughlin, Tobias Nipkow, Steven Obua, and Roland Zumkeller. A revision of the proof of the Kepler Conjecture. Discrete & Computational Geometry, 44(1):1–34, 2010. Thomas C. 
Hales, Mark Adams, Gertrud Bauer, Dat Tat Dang, John Harrison, Truong Le Hoang, Cezary Kaliszyk, Victor Magron, Sean McLaughlin, Thang Tat Nguyen, Truong Quang Nguyen, Tobias Nipkow, Steven Obua, Joseph Pleso, Jason Rute, Alexey Solovyev, An Hoai Thi Ta, Trung Nam Tran, Diep Thi Trieu, Josef Urban, Ky Khac Vu, and Roland Zumkeller. A formal proof of the Kepler conjecture. CoRR, abs/1501.02155, 2015. John Harrison. HOL Light: An overview. In Stefan Berghofer, Tobias Nipkow, Christian Urban, and Makarius Wenzel (eds.), Theorem Proving in Higher Order Logics (TPHOLs 2009), volume 5674 of LNCS, pp. 60–66. Springer, 2009. John Harrison. The HOL Light theory of Euclidean space. J. Autom. Reasoning, 50(2):173–190, 2013. doi: 10.1007/s10817-012-9250-9. John Harrison, Josef Urban, and Freek Wiedijk. History of interactive theorem proving. In Jörg Siekmann (ed.), Handbook of the History of Logic vol. 9 (Computational Logic), pp. 135–214. Elsevier, 2014. R. Hindley. The principal type-scheme of an object in combinatory logic. Transactions of the American Mathematical Society, 146:29–60, 1969. ISSN 0002-9947. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. Kryštof Hoder and Andrei Voronkov. Sine qua non for large theory reasoning. In Nikolaj Bjørner and Viorica Sofronie-Stokkermans (eds.), CADE-23, volume 6803 of LNAI, pp. 299–314. Springer, 2011. Joe Hurd. First-order proof tactics in higher-order logic theorem provers. In Myla Archer, Ben Di Vito, and César Muñoz (eds.), Design and Application of Strategies/Tactics in Higher Order Logics (STRATA 2003), number NASA/CP-2003-212448 in NASA Technical Reports, pp. 56–68, September 2003. URL http://www.gilith.com/research/papers. Joe Hurd. The OpenTheory standard theory library. In Mihaela Gheorghiu Bobaru, Klaus Havelund, Gerard J. Holzmann, and Rajeev Joshi (eds.), NASA Formal Methods (NFM 2011), volume 6617 of LNCS, pp. 177–191. Springer, 2011. Cezary Kaliszyk and Alexander Krauss. Scalable LCF-style proof translation. In Sandrine Blazy, Christine Paulin-Mohring, and David Pichardie (eds.), Interactive Theorem Proving (ITP 2013), volume 7998 of LNCS, pp. 51–66. Springer, 2013. Cezary Kaliszyk and Josef Urban. Learning-assisted automated reasoning with Flyspeck. J. Autom. Reasoning, 53(2):173–213, 2014. doi: 10.1007/s10817-014-9303-3. Cezary Kaliszyk and Josef Urban. FEMaLeCoP: Fairly efficient machine learning connection prover. In Martin Davis, Ansgar Fehnker, Annabelle McIver, and Andrei Voronkov (eds.), 20th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2015), volume 9450 of LNCS, pp. 88–96. Springer, 2015a. doi: 10.1007/978-3-662-48899-7. Cezary Kaliszyk and Josef Urban. Learning-assisted theorem proving with millions of lemmas. J. Symbolic Computation, 69:109–128, 2015b. doi: 10.1016/j.jsc.2014.09.032. Cezary Kaliszyk, Josef Urban, and Jiří Vyskočil. Learning to parse on aligned corpora. In Christian Urban and Xingyuan Zhang (eds.), Proc. 6th Conference on Interactive Theorem Proving (ITP’15), volume 9236 of LNCS, pp. 227–233. Springer-Verlag, 2015. doi: 10.1007/978-3-319-22102-1_15. Gerwin Klein, June Andronick, Kevin Elphinstone, Toby C. Murray, Thomas Sewell, Rafal Kolanski, and Gernot Heiser. Comprehensive formal verification of an OS microkernel. ACM Trans. Comput. Syst., 32(1):2, 2014. Laura Kovács and Andrei Voronkov. First-order theorem proving and Vampire. 
In Natasha Sharygina and Helmut Veith (eds.), Computer-Aided Verification (CAV 2013), volume 8044 of LNCS, pp. 1–35. Springer, 2013. Daniel Kühlwein and Josef Urban. MaLeS: A framework for automatic tuning of automated theorem provers. J. Autom. Reasoning, 55(2):91–116, 2015. doi: 10.1007/s10817-015-9329-1. Xavier Leroy. Formal verification of a realistic compiler. Commun. ACM, 52(7):107–115, 2009. Steven Obua and Sebastian Skalberg. Importing HOL into Isabelle/HOL. In Ulrich Furbach and Natarajan Shankar (eds.), International Joint Conference on Automated Reasoning (IJCAR 2006), volume 4130 of LNCS, pp. 298–302. Springer, 2006. Lawrence C. Paulson. A generic tableau prover and its integration with Isabelle. J. Universal Computer Science, 5(3):73–87, 1999. Stephan Schulz. System description: E 1.8. In Kenneth L. McMillan, Aart Middeldorp, and Andrei Voronkov (eds.), Logic for Programming, Artificial Intelligence (LPAR 2013), volume 8312 of LNCS, pp. 735–743. Springer, 2013. David Silver, Aja Huang, Christopher J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529:484–503, 2016. URL http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings., pp. 926–934, 2013a. URL http://papers.nips.cc/paper/5028-reasoning-with-neural-tensor-networks-for-knowledge-base-completion Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631–1642, Stroudsburg, PA, October 2013b. Association for Computational Linguistics. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2431–2439, 2015. Geoff Sutcliffe. The CADE ATP system competition - CASC. AI Magazine, 37(2):99–101, 2016. Josef Urban, Jiří Vyskočil, and Petr Štěpánek. MaLeCoP: Machine learning connection prover. In Kai Brünnler and George Metcalfe (eds.), TABLEAUX 2011, volume 6793 of LNCS. Springer, 2011. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016. URL http://arxiv.org/abs/1609.08144.
ABSTRACT Large computer-understandable proofs consist of millions of intermediate logical steps. The vast majority of such steps originate from manually selected and manually guided heuristics applied to intermediate goals. So far, machine learning has generally not been used to filter or generate these steps. In this paper, we introduce a new dataset based on Higher-Order Logic (HOL) proofs, for the purpose of developing new machine learning-based theorem-proving strategies. We make this dataset publicly available under the BSD license. We propose various machine learning tasks that can be performed on this dataset, and discuss their significance for theorem proving. We also benchmark a set of simple baseline machine learning models suited for the tasks (including logistic regression, convolutional neural networks and recurrent neural networks). The results of our baseline models show the promise of applying machine learning to HOL theorem proving. 1 INTRODUCTION As the usability of interactive theorem proving (ITP) systems (Harrison et al., 2014) grows, its use becomes a more common way of establishing the correctness of software as well as mathematical proofs. Today, ITPs are used for software certification projects ranging from compilers (Leroy, 2009) and operating system components (Chen et al., 2016; Klein et al., 2014), to establishing the absolute correctness of large proofs in mathematics such as the Kepler conjecture (Hales et al., 2015) and the Feit-Thomson Theorem (Gonthier et al., 2013). For results of such significance to be possible, the theorem libraries of these ITPs must contain all necessary basic mathematical properties, accompanied with formal proofs. This means that the size of many ITP libraries can be measured in dozens of thousands of theorems (Grabowski et al., 2010; Blanchette et al., 2015) and billions of individual proof steps. While the general direction of the proofs is specified by humans (by providing the goal to prove, specifying intermediate steps, or applying certain automated tactics), the majority of such proof steps are actually found by automated reasoning-based proof search (Kaliszyk & Urban, 2015b), with very little application of machine learning techniques so far. At the same time, fast progress has been unfolding in machine learning applied to tasks that involve logical inference, such as natural language question answering (Sukhbaatar et al., 2015), knowledge base completion (Socher et al., 2013a), automated translation (Wu et al., 2016), and premise selection in the context of theorem proving (Alemi et al., 2016). Deep learning in particular has proven to be a powerful tool for embedding semantic meaning and logical relationships into geometric spaces, specifically via models such as convolutional neural networks, recurrent neural networks, and tree-recursive neural networks. These advances strongly suggest that deep learning may have become mature enough to yield significant advances in automated theorem proving. Remarkably, it has recently become possible to build a system, AlphaGo (Silver et al., 2016), blending classical AI techniques such as Monte-Carlo tree search and modern deep learning techniques, capable of playing the game of Go at super-human levels. We should note that theorem proving and Go playing are conceptually related, since both consist in searching for specific nodes in trees of states with extremely large arity and relatively large depth, which involves node evaluation decision (how valuable is this state?) 
and policy decisions (which node should be expanded next?). The success of AlphaGo can thus serve as encouragement on the road to building deep learning-augmented theorem provers that would blend classical techniques developed over the past few decades with the latest machine learning advances. Fast progress in specific machine learning verticals has occasionally been achieved thanks to the release of specialized datasets (often with associated competitions, e.g. the ImageNet dataset for large-scale image classification (Deng et al., 2009)) serving as an experimental testbed and public benchmark of current progress, thus focusing the efforts of the research community. We hope that releasing a theorem proving dataset suited for specific machine learning tasks can serve the same purpose in the vertical of applying machine learning to theorem proving. 1.1 CONTRIBUTION AND OVERVIEW First, we develop a dataset for machine learning based on the proof steps used in a large interactive proof[Section 2]. We focus on the HOL Light (Harrison, 2009) ITP, its multivariate analysis library (Harrison, 2013), as well as the formal proof of the Kepler conjecture (Hales et al., 2010). These formalizations constitute a diverse proof dataset containing basic mathematics, analysis, trigonometry, as well as reasoning about data structures such as graphs. Furthermore these formal proof developments have been used as benchmarks for automated reasoning techniques (Kaliszczk & Urban, 2014). The dataset consists of 2,013,046 training examples and 196,030 testing examples that originate from 11,400 proofs. Precisely half of the examples are statements that were useful in the currently proven conjectures and half are steps that have been derived either manually or as part of the automated proof search but were not necessary in the final proofs. The dataset contains only proofs of non-trivial theorems, that also do not focus on computation but rather on actual theorem proving. For each proof, the conjecture that is being proven as well as its dependencies (axioms) and may be exploited in machine learning tasks. Furthermore, for each statement both its human-readable (pretty-printed) statement and a tokenization designed to make machine learning tasks more manageable are included. Next, in [Section 3] we discuss the proof step classification tasks that can be attempted using the dataset, and we discuss the usefulness of these tasks in interactive and automated theorem proving. These tasks include unconditioned classification (without access to conjectures and dependencies) and conjecture-conditioned classification (with access to the conjecture) of proof steps as being useful or not in a proof. We outline the use of such classification capabilities for search space pruning and internal guidance, as well as for generation of intermediate steps or possible new lemma statements. Finally, in [Section 4] we propose three baseline models for the proof step classification tasks, and we experimentally evaluate the models on the data in [Section 5]. The models considered include both a relatively simple regression model, as well as deep learning models based on convolutional and recurrent neural networks. 1.2 RELATED WORK The use of machine learning in interactive and automated theorem proving has so far focused on three tasks: premise selection, strategy selection, and internal guidance. We shortly explain these. 
Given a large library of proven facts and a user given conjecture, the multi-label classification problem of selecting the facts that are most likely to lead to a successful proof of the conjecture has been usually called relevance filtering or premise selection (Alama et al., 2014). This is crucial for the efficiency of modern automation techniques for ITPs (Blanchette et al., 2016), which today can usually solve 40–50% of the conjectures in theorem proving libraries. Similarly most competitive ATPs today (Sutcliffe, 2016) implement the SInE classifier (Hoder & Voronkov, 2011). A second theorem proving task where machine learning has been of importance is strategy selection. With the development of automated theorem provers came many parameters that control their execution. In fact, modern ATPs, such as E (Schulz, 2013) and Vampire (Kovács & Voronkov, 2013), include complete strategy description languages that allow a user to specify the orderings, weighting functions, literal selection strategies, etc. Rather than optimizing the search strategy globally, one can choose the strategy based on the currently considered problem. For this some frameworks use machine learning (Bridge et al., 2014; Kühlwein & Urban, 2015). Finally, an automated theorem prover may use machine learning for choosing the actual inference steps. It has been shown to significantly reduce the proof search in first-order tableaux by the selection of extension steps to use (Urban et al., 2011), and has been also successfully applied in monomorphic higher-order logic proving (Färber & Brown, 2016). Data/proof mining has also been applied on the level of interactive theorem proving tactics (Duncan, 2007) to extract and reuse repeating patterns. 2 DATASET EXTRACTION We focus on the HOL Light theorem prover for two reasons. First, it follows the LCF approach\footnote{LCF approach is a software architecture for implementing theorem provers which uses a strongly typed programming language with abstract datatypes (such as OCaml in the case of HOL Light) to separate the small trusted core, called the kernel, which verifies the primitive inferences from user code which allows the user to arbitrarily extend the system in a safe manner. For more details see (Gordon et al., 1979).}. This means that complicated inferences are reduced to the most primitive ones and the data extraction related modifications can be restricted the primitive inferences and it is relatively easy to extract proof steps at an arbitrary selected level of granularity. Second, HOL Light implements higher-order logic (Church, 1940) as its foundation, which on the one hand is powerful enough to encode most of today’s formal proofs, and on the other hand allows for an easy integration of many powerful automation mechanisms (Baader & Nipkow, 1998; Paulson, 1999). When selecting the theorems to record, we choose an intermediate approach between HOL Light ProofRecording (Obua & Skalberg, 2006) and the HOL/Import one (Kaliszyk & Krauss, 2013). The theorems that are derived by most common proof functions are extracted by patching these functions like in the former approach, and the remaining theorems are extracted from the underlying OCaml programming language interpreter. In certain cases decision procedures derive theorems to be reused in subsequent invocations. We detect such values by looking at theorems used across proof blocks and avoid extracting such reused unrelated subproofs. 
All kernel-level inferences are recorded together with their respective arguments in a trace file. The trace is processed offline to extract the dependencies of the facts, detect used proof boundaries, mark the used and unused steps, and mark the training and testing examples. Only proofs that have sufficiently many used and unused steps are considered useful for the dataset. The annotated proof trace is processed again by a HOL kernel saving the actual training and testing examples originating from non-trivial reasoning steps. Training and testing examples are grouped by proof: for each proof the conjecture (statement that is finally proved), the dependencies of the theorem are constant, and a list of used and not used intermediate statements is provided. This means that the conjectures used in the training and testing sets are normally disjoint. For each statement, whether it is the conjecture, a proof dependency, or an intermediate statement, both a fully parenthesised HOL Light human-like printout is provided, as well as a predefined tokenization. The standard HOL Light printer uses parentheses and operator priorities to make its notations somewhat similar to textbook-style mathematics, while at the same time preserving the complete unambiguity of the order of applications (this is particularly visible for associative operators). The tokenization that we propose attempts to reduce the number of parentheses. To do this we compute the maximum number of arguments that each symbol needs to be applied to, and only mark partial application. This means that fully applied functions (more than 90% of the applications) do not require neither application operators nor parentheses. Top-level universal quantifications are eliminated, bound variables are represented by their de Bruijn indices (the distance from the corresponding abstraction in the parse tree of the term) and free variables are renamed canonically. Since the Hindley-Milner type inference Hindley (1969) mechanisms will be sufficient to reconstruct the most-general types of the expressions well enough for automated-reasoning techniques Kaliszyk et al. (2015) we erase all type information. Table\ref{tab:dataset} presents some dataset statistics. The dataset, the description of the used format, the scripts used to generate it and baseline models code are available: http://cl-informatik.uibk.ac.at/cek/holstep/ <table> <tr> <th>Dataset</th> <th>Training</th> <th>Testing</th> <th>Used</th> <th>Not Used</th> </tr> <tr> <td>Proofs</td> <td>10,000</td> <td>1,000</td> <td>10,000</td> <td>1,000</td> </tr> <tr> <td>Conjectures</td> <td>10,000</td> <td>1,000</td> <td>10,000</td> <td>1,000</td> </tr> <tr> <td>Dependencies</td> <td>10,000</td> <td>1,000</td> <td>10,000</td> <td>1,000</td> </tr> </table> <table> <tr> <th></th> <th>Train</th> <th>Test</th> <th>Positive</th> <th>Negative</th> </tr> <tr> <td>Examples</td> <td>2013046</td> <td>196030</td> <td>1104538</td> <td>1104538</td> </tr> <tr> <td>Avg. length</td> <td>503.18</td> <td>440.20</td> <td>535.52</td> <td>459.66</td> </tr> <tr> <td>Avg. tokens</td> <td>87.01</td> <td>80.62</td> <td>95.48</td> <td>77.40</td> </tr> <tr> <td>Conjectures</td> <td>9999</td> <td>1411</td> <td>-</td> <td>-</td> </tr> <tr> <td>Avg. 
3 MACHINE LEARNING TASKS

3.1 TASKS DESCRIPTION

This dataset makes possible several tasks well-suited for machine learning, most of which are highly relevant for theorem proving:
• Predicting whether a statement is useful in the proof of a given conjecture;
• Predicting the dependencies of a proof statement (premise selection);
• Predicting whether a statement is an important one (i.e., one that humans have named);
• Predicting which conjecture a particular intermediate statement originates from;
• Predicting the name given to a statement;
• Generating intermediate statements useful in the proof of a given conjecture;
• Generating the conjecture the current proof will lead to.

In what follows we focus on the first task: classifying proof step statements as being useful or not in the context of a given proof. This task may be further specialized into two different tasks:
• Unconditioned classification of proof steps: determining how likely a given proof step is to be useful for the proof it occurred in, based solely on the content of the statement (i.e. by only providing the model with the step statement itself, absent any context).
• Conditioned classification of proof steps: determining how likely a given proof step is to be useful for the proof it occurred in, with “conditioning” on the conjecture statement that the proof was aiming to attain (i.e. by providing the model with both the step statement and the conjecture statement).

In the dataset, for every proof we provide the same number of useful and non-useful steps. As such, the proof step classification problem is a balanced two-class classification problem, where a random baseline would yield an accuracy of 0.5.

3.2 RELEVANCE TO INTERACTIVE AND AUTOMATED THEOREM PROVING

In the interaction with an interactive theorem prover, the tasks that require the most human time are the search for good intermediate steps, the search for automation techniques able to justify the individual steps, and the search through theorem proving libraries for the necessary simpler facts. These three problems directly correspond to the machine learning tasks proposed in the previous subsection. Being able to predict the usefulness of a statement will significantly improve many automation techniques. The generation of good intermediate lemmas or intermediate steps can improve the level of granularity of the proof steps. Understanding the correspondence between statements and their names can allow users to search for statements in the libraries more efficiently (Aspinall & Kaliszyk, 2016). Premise selection and filtering are already used in many theorem proving systems, and generation of succeeding steps corresponds to conjecturing and theory exploration.

Figure 1: Unconditioned classification model architectures.

4 BASELINE MODELS

For each task (conditioned and unconditioned classification), we propose three different deep learning architectures, meant to provide a baseline for the classification performance that can be achieved on this dataset. Our models cover a range of architecture features (from convolutional networks to recurrent networks), aiming at probing which characteristics of the data are the most helpful for usefulness classification. Our models are implemented in TensorFlow (Abadi et al., 2015) using the Keras framework (Chollet, 2015). Each model was trained on a single Nvidia K80 GPU.
Training only takes a few hours per model, which makes running these experiments accessible to most people (they could even be run on a laptop CPU). We are releasing all of our benchmark code as open-source software[1] so as to allow others to reproduce our results and improve upon our models.

4.1 UNCONDITIONED CLASSIFICATION MODELS

Our three models for this task are as follows:
• Logistic regression on top of learned token embeddings. This minimal model aims to determine to what extent simple differences between the token distributions of useful and non-useful statements can be used to distinguish them. It provides an absolute floor on the performance achievable on this task.
• 2-layer 1D convolutional neural network (CNN) with global maxpooling for sequence reduction. This model aims to determine the importance of local patterns of tokens.
• 2-layer 1D CNN with LSTM (Hochreiter & Schmidhuber, 1997) sequence reduction. This model aims to determine the importance of order in the feature sequences.

See Figure 1 for a layer-by-layer description of these models.

4.2 CONDITIONED CLASSIFICATION MODELS

For this task, we use versions of the above models that have two siamese branches (identical branches with shared weights), with one branch processing the proof step statement being considered, and the other branch processing the conjecture. Each branch outputs an embedding; these two embeddings (step embedding and conjecture embedding) are then concatenated and classified by a fully-connected network. See Figure 2 for a layer-by-layer description of these models.

[1] https://github.com/tensorflow/deepmath/tree/master/holstep_baselines

Figure 2: Conditioned classification model architectures.

4.3 INPUT STATEMENTS ENCODING

It should be noted that all of our models start with an Embedding layer, mapping tokens or characters in the statements to dense vectors in a low-dimensional space. We consider two possible encodings for presenting the input statements (proof steps and conjectures) to the Embedding layers of our models:
• Character-level encoding of the human-readable versions of the statements, where each character (out of a set of 86 unique characters) in the pretty-printed statements is mapped to a 256-dimensional dense vector. This encoding yields longer statements (training statements are 308 characters long on average).
• Token-level encoding of the versions of the statements rendered with our proposed high-level tokenization scheme. This encoding yields shorter statements (training statements are 60 tokens long on average), while considerably increasing the size of the set of unique tokens (1993 unique tokens in the training set).
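To make the baseline architectures concrete, the sketch below shows a token-level unconditioned CNN model and its siamese conditioned variant in Keras. This is a minimal illustration, not the released benchmark code: the filter counts, kernel sizes, dense layer sizes, and padded sequence length are assumed values; only the embedding dimension (256) and vocabulary size (1993 tokens) come from the text above.

```python
# Hedged sketch of the 1D CNN baselines; hyperparameters are assumptions.
from keras.layers import (Input, Embedding, Conv1D, GlobalMaxPooling1D,
                          Dense, concatenate)
from keras.models import Model

VOCAB_SIZE = 1993 + 1   # unique training tokens, plus one padding id
MAX_LEN = 256           # padded statement length (assumption)

def statement_encoder():
    """Shared encoder: embedding, two 1D convolutions, global maxpooling."""
    inp = Input(shape=(MAX_LEN,))
    x = Embedding(VOCAB_SIZE, 256)(inp)        # 256-dim token vectors
    x = Conv1D(256, 7, activation='relu')(x)   # local token patterns
    x = Conv1D(256, 7, activation='relu')(x)
    x = GlobalMaxPooling1D()(x)                # sequence -> fixed embedding
    return Model(inp, x)

enc = statement_encoder()

# Unconditioned model: classify the step statement alone.
step = Input(shape=(MAX_LEN,))
out = Dense(1, activation='sigmoid')(Dense(256, activation='relu')(enc(step)))
unconditioned = Model(step, out)
unconditioned.compile('adam', 'binary_crossentropy', metrics=['accuracy'])

# Conditioned (siamese) model: the same encoder, with shared weights, is
# applied to the step and to the conjecture; the two embeddings are
# concatenated and classified by a fully-connected network.
step_in, conj_in = Input(shape=(MAX_LEN,)), Input(shape=(MAX_LEN,))
merged = concatenate([enc(step_in), enc(conj_in)])
cond = Dense(1, activation='sigmoid')(Dense(256, activation='relu')(merged))
conditioned = Model([step_in, conj_in], cond)
conditioned.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
```

The CNN-LSTM variants would replace the GlobalMaxPooling1D reduction with an LSTM layer; the character-level variants only change the vocabulary size (86 characters) and the sequence length.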
Table 2: HolStep proof step classification accuracy without conditioning

<table>
<tr> <th></th> <th>Logistic regression</th> <th>1D CNN</th> <th>1D CNN-LSTM</th> </tr>
<tr> <th>Accuracy with char input</th> <td>0.71</td> <td>0.82</td> <td><b>0.83</b></td> </tr>
<tr> <th>Accuracy with token input</th> <td>0.71</td> <td><b>0.83</b></td> <td>0.77</td> </tr>
</table>

Table 3: HolStep proof step classification accuracy with conditioning

<table>
<tr> <th></th> <th>Logistic regression</th> <th>Siamese 1D CNN</th> <th>Siamese 1D CNN-LSTM</th> </tr>
<tr> <th>Accuracy with char input</th> <td>0.71</td> <td>0.81</td> <td><b>0.83</b></td> </tr>
<tr> <th>Accuracy with token input</th> <td>0.71</td> <td>0.82</td> <td>0.77</td> </tr>
</table>

5 RESULTS

Experimental results are presented in Tables 2 and 3 as well as Figures 3 to 6.

5.1 INFLUENCE OF MODEL ARCHITECTURE

Our unconditioned logistic regression model yields an accuracy of 71%, both with character encoding and token encoding (Tables 2 and 3). This demonstrates that differences in token or character distributions between useful and non-useful steps alone, absent any context, are sufficient for discriminating between useful and non-useful statements to a reasonable extent. This also demonstrates that the token encoding is not fundamentally more informative than raw character-level statements.

Additionally, our unconditioned 1D CNN model yields an accuracy of 82% to 83%, both with character encoding and token encoding (Tables 2 and 3). This demonstrates that patterns of characters or patterns of tokens are considerably more informative than single tokens for the purpose of usefulness classification.

Finally, our unconditioned convolutional-recurrent model does not improve upon the results of the 1D CNN, which indicates that our models are not able to meaningfully leverage order in the feature sequences into which the statements are encoded.

5.2 INFLUENCE OF INPUT ENCODING

For the logistic regression model and the 2-layer 1D CNN model, the choice of input encoding seems to have little impact. For the convolutional-recurrent model, the use of the high-level tokenization seems to cause a large decrease in model performance (Figures 4 and 6). This may be due to the fact that token encoding yields shorter sequences, making the use of an LSTM less relevant.

5.3 INFLUENCE OF CONDITIONING ON THE CONJECTURE

None of our conditioned models appear to be able to improve upon the unconditioned models, which indicates that our architectures are not able to leverage the information provided by the conjecture. The presence of the conditioning does, however, impact the training profile of our models, in particular by making the 1D CNN model converge faster and overfit significantly quicker (Figures 5 and 6).

6 CONCLUSIONS

Our baseline deep learning models, albeit fairly weak, are still able to predict statement usefulness with remarkably high accuracy. Such methods already help first-order automated provers (Kaliszyk & Urban, 2015a), and as the branching factor is higher in HOL, the predictions are valuable for a number of practical proving applications. This includes making tableaux-based (Paulson, 1999) and superposition-based (Hurd, 2003) internal ITP proof search significantly more efficient, in turn making formalization easier.

Figure 3: Training profile of the three unconditioned baseline models with character input.

Figure 4: Training profile of the three unconditioned baseline models with token input.
Figure 5: Training profile of the three conditioned baseline models with character input.

Figure 6: Training profile of the three conditioned baseline models with token input.

However, our models do not appear to be able to leverage order in the input sequences, nor conditioning on the conjectures. This is due to the fact that these models are not doing any form of logical reasoning on their input statements; rather, they are doing simple pattern matching at the level of n-grams of characters or tokens. This shows the need to focus future efforts on different models that can do reasoning, or alternatively, on systems that blend explicit reasoning (e.g. graph search) with deep learning-based feature learning. A potential new direction would be to leverage the graph structure of HOL statements using e.g. Recursive Neural Tensor Networks (Socher et al., 2013a;b) or other graph-based recursive architectures.

6.1 FUTURE WORK

The dataset focuses on one interactive theorem prover. It would be interesting to see whether the proposed techniques generalize, primarily across ITPs that use the same foundational logic, for example using OpenTheory (Hurd, 2011), and secondarily across fundamentally different ITPs or even ATPs. A significant part of the unused steps originates from trying to fulfill the conditions for rewriting and from calls to intuitionistic tableaux. The main focus is, however, on the human-found proofs, so the trained predictions may to an extent mimic the usefulness bias present in human proofs. As ATPs are at the moment very weak in comparison with human intuition, improving this even for the many proofs humans do not find difficult would be an important gain.

Finally, two of the proposed tasks for the dataset have been premise selection and intermediate sentence generation. It would be interesting to define more ATP-based ways to evaluate the selected premises, as well as to evaluate generated sentences (Kaliszyk et al., 2015). The dataset is relatively large when it comes to proof step classification; however, the number of available premises makes it only medium-sized for premise selection in comparison with the Mizar Mathematical Library or the seL4 development.

ACKNOWLEDGEMENTS

The first author was partly supported by the ERC starting grant 714034.
accept
Accept (Poster)
7
30a14c2be832e3fb82e688416075cf3277b28ef0
iclr
2,017
PRUNING FILTERS FOR EFFICIENT CONVNETS

Hao Li* University of Maryland haoli@cs.umd.edu
Asim Kadav NEC Labs America asim@nec-labs.com
Igor Durdanovic NEC Labs America igord@nec-labs.com
Hanan Samet† University of Maryland hjs@cs.umd.edu
Hans Peter Graf NEC Labs America hpg@nec-labs.com

*Work done at NEC Labs. †Supported in part by the NSF under Grant IIS-13-2079.

ABSTRACT

The success of CNNs in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting the original accuracy. However, magnitude-based pruning of weights removes a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR-10 while regaining close to the original accuracy by retraining the networks.

1 INTRODUCTION

The ImageNet challenge has led to significant advancements in exploring various architectural choices in CNNs (Russakovsky et al. (2015); Krizhevsky et al. (2012); Simonyan & Zisserman (2015); Szegedy et al. (2015a); He et al. (2016)). The general trend over the past few years has been that networks have grown deeper, with an overall increase in the number of parameters and convolution operations. These high-capacity networks have significant inference costs, especially when used with embedded sensors or mobile devices where computational and power resources may be limited. For these applications, in addition to accuracy, computational efficiency and small network sizes are crucial enabling factors (Szegedy et al. (2015b)). In addition, web services that provide image search and image classification APIs, which operate on a time budget and often serve hundreds of thousands of images per second, benefit significantly from lower inference times.

There has been a significant amount of work on reducing storage and computation costs by model compression (Le Cun et al. (1989); Hassibi & Stork (1993); Srinivas & Babu (2015); Han et al. (2015); Mariet & Sra (2016)). Recently Han et al. (2015; 2016b) report impressive compression rates on AlexNet (Krizhevsky et al. (2012)) and VGGNet (Simonyan & Zisserman (2015)) by pruning weights with small magnitudes and then retraining without hurting the overall accuracy. However, pruning parameters does not necessarily reduce the computation time, since the majority of the parameters removed are from the fully connected layers where the computation cost is low; e.g., the fully connected layers of VGG-16 occupy 90% of the total parameters but contribute less than 1% of the overall floating point operations (FLOP). They also demonstrate that the convolutional layers can be compressed and accelerated (Iandola et al. (2016)), but this additionally requires sparse
BLAS libraries or even specialized hardware (Han et al. (2016a)). Modern libraries that provide speedup using sparse operations over CNNs are often limited (Szegedy et al. (2015a); Liu et al. (2015)), and maintaining sparse data structures also creates an additional storage overhead, which can be significant for low-precision weights.

Recent work on CNNs has yielded deep architectures with more efficient design (Szegedy et al. (2015a;b); He & Sun (2015); He et al. (2016)), in which the fully connected layers are replaced with average pooling layers (Lin et al. (2013); He et al. (2016)), which reduces the number of parameters significantly. The computation cost is also reduced by downsampling the image at an early stage to reduce the size of feature maps (He & Sun (2015)). Nevertheless, as the networks continue to become deeper, the computation costs of convolutional layers continue to dominate.

CNNs with large capacity usually have significant redundancy among different filters and feature channels. In this work, we focus on reducing the computation cost of well-trained CNNs by pruning filters. Compared to pruning weights across the network, filter pruning is a naturally structured way of pruning without introducing sparsity, and therefore does not require using sparse libraries or any specialized hardware. The number of pruned filters correlates directly with acceleration by reducing the number of matrix multiplications, which makes it easy to tune for a target speedup. In addition, instead of layer-wise iterative fine-tuning (retraining), we adopt a one-shot pruning and retraining strategy to save retraining time when pruning filters across multiple layers, which is critical for pruning very deep networks. Finally, we observe that even ResNets, which have significantly fewer parameters and lower inference costs than AlexNet or VGGNet, can still achieve about 30% FLOP reduction without sacrificing too much accuracy. We conduct sensitivity analysis for convolutional layers in ResNets that improves the understanding of ResNets.

2 RELATED WORK

The early work by Le Cun et al. (1989) introduces Optimal Brain Damage, which prunes weights with a theoretically justified saliency measure. Later, Hassibi & Stork (1993) propose Optimal Brain Surgeon to remove unimportant weights determined by second-order derivative information. Mariet & Sra (2016) reduce network redundancy by identifying a subset of diverse neurons that does not require retraining. However, this method only operates on the fully-connected layers and introduces sparse connections. To reduce the computation costs of the convolutional layers, past work has proposed approximating convolutional operations by representing the weight matrix as a low-rank product of two smaller matrices without changing the original number of filters (Denil et al. (2013); Jaderberg et al. (2014); Zhang et al. (2015b;a); Tai et al. (2016); Ioannou et al. (2016)). Other approaches to reduce the convolutional overheads include using FFT-based convolutions (Mathieu et al. (2013)) and fast convolution using the Winograd algorithm (Lavin & Gray (2016)). Additionally, quantization (Han et al. (2016b)) and binarization (Rastegari et al. (2016); Courbariaux & Bengio (2016)) can be used to reduce the model size and lower the computation overheads.
Our method can be used in addition to these techniques to reduce computation costs without incurring additional overheads.

Several works have studied removing redundant feature maps from a well-trained network (Anwar et al. (2015); Polyak & Wolf (2015)). Anwar et al. (2015) introduce a three-level pruning of the weights and locate the pruning candidates using particle filtering, which selects the best combination from a number of randomly generated masks. Polyak & Wolf (2015) detect the less frequently activated feature maps with sample input data for face detection applications. We choose to analyze the filter weights and prune filters with their corresponding feature maps using a simple magnitude-based measure, without examining possible combinations. We also introduce network-wide holistic approaches to prune filters for simple and complex convolutional network architectures.

Concurrently with our work, there is a growing interest in training compact CNNs with sparse constraints (Lebedev & Lempitsky (2016); Zhou et al. (2016); Wen et al. (2016)). Lebedev & Lempitsky (2016) leverage group-sparsity on the convolutional filters to achieve structured brain damage, i.e., they prune the entries of the convolution kernel in a group-wise fashion. Zhou et al. (2016) add group-sparse regularization on neurons during training to learn compact CNNs with reduced filters. Wen et al. (2016) add a structured sparsity regularizer on each layer to reduce trivial filters, channels or even layers. For filter-level pruning, all of the above works use the \( \ell_{2,1} \)-norm as a regularizer. Similar to the above work, we use the \( \ell_1 \)-norm to select unimportant filters and physically prune them. Our fine-tuning process is the same as the conventional training procedure, without introducing additional regularization. Our approach does not introduce extra layer-wise meta-parameters for the regularizer, except for the percentage of filters to be pruned, which is directly related to the desired speedup. By employing stage-wise pruning, we can set a single pruning rate for all layers in one stage.

3 PRUNING FILTERS AND FEATURE MAPS

Let \( n_i \) denote the number of input channels for the \( i \)th convolutional layer and \( h_i/w_i \) be the height/width of the input feature maps. The convolutional layer transforms the input feature maps \( x_i \in \mathbb{R}^{n_i \times h_i \times w_i} \) into the output feature maps \( x_{i+1} \in \mathbb{R}^{n_{i+1} \times h_{i+1} \times w_{i+1}} \), which are used as input feature maps for the next convolutional layer. This is achieved by applying \( n_{i+1} \) 3D filters \( F_{i,j} \in \mathbb{R}^{n_i \times k \times k} \) on the \( n_i \) input channels, in which one filter generates one feature map. Each filter is composed of \( n_i \) 2D kernels \( K \in \mathbb{R}^{k \times k} \) (e.g., \( 3 \times 3 \)). All the filters, together, constitute the kernel matrix \( F_i \in \mathbb{R}^{n_i \times n_{i+1} \times k \times k} \). The number of operations of the convolutional layer is \( n_{i+1} n_i k^2 h_{i+1} w_{i+1} \). As shown in Figure 1, when a filter \( F_{i,j} \) is pruned, its corresponding feature map \( x_{i+1,j} \) is removed, which reduces \( n_i k^2 h_{i+1} w_{i+1} \) operations. The kernels that apply on the removed feature maps from the filters of the next convolutional layer are also removed, which saves an additional \( n_{i+2} k^2 h_{i+2} w_{i+2} \) operations. Pruning \( m \) filters of layer \( i \) will reduce \( m/n_{i+1} \) of the computation cost for both layers \( i \) and \( i+1 \).
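As a quick sanity check of these counts, the following snippet works through the arithmetic for a hypothetical pair of layers; the shapes are illustrative and not taken from any network in this paper:

```python
# Worked check of the operation counts above, with hypothetical shapes.
n_i, n_i1, n_i2 = 64, 128, 128  # channels into layers i, i+1, i+2
k = 3                           # kernel size
h1 = w1 = h2 = w2 = 32          # output feature map sizes (no downsampling)

ops_layer_i  = n_i1 * n_i  * k**2 * h1 * w1   # full cost of layer i
ops_layer_i1 = n_i2 * n_i1 * k**2 * h2 * w2   # full cost of layer i+1

m = 32  # filters pruned from layer i
saved_i  = m * n_i  * k**2 * h1 * w1          # the pruned filters themselves
saved_i1 = m * n_i2 * k**2 * h2 * w2          # their kernels in layer i+1

# Both layers lose exactly the fraction m / n_{i+1} of their cost.
assert saved_i / ops_layer_i == saved_i1 / ops_layer_i1 == m / n_i1  # 0.25
```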
Figure 1: Pruning a filter results in removal of its corresponding feature map and related kernels in the next layer.

3.1 DETERMINING WHICH FILTERS TO PRUNE WITHIN A SINGLE LAYER

Our method prunes the less useful filters from a well-trained model for computational efficiency while minimizing the accuracy drop. We measure the relative importance of a filter in each layer by calculating the sum of its absolute weights \( \sum |F_{i,j}| \), i.e., its \( \ell_1 \)-norm \( \|F_{i,j}\|_1 \). Since the number of input channels, \( n_i \), is the same across filters, \( \sum |F_{i,j}| \) also represents the average magnitude of its kernel weights. This value gives an expectation of the magnitude of the output feature map. Filters with smaller kernel weights tend to produce feature maps with weak activations as compared to the other filters in that layer. Figure 2(a) illustrates the distribution of the filters’ absolute weight sums for each convolutional layer in a VGG-16 network trained on the CIFAR-10 dataset, where the distribution varies significantly across layers. We find that pruning the smallest filters works better in comparison with pruning the same number of random or largest filters (Section 4.4). Compared to other criteria for activation-based feature map pruning (Section 4.5), we find the \( \ell_1 \)-norm is a good criterion for data-free filter selection.

The procedure for pruning \( m \) filters from the \( i \)th convolutional layer is as follows (a code sketch of this selection appears below):
1. For each filter \( F_{i,j} \), calculate the sum of its absolute kernel weights \( s_j = \sum_{l=1}^{n_i} \sum |K_{lj}| \).
2. Sort the filters by \( s_j \).
3. Prune the \( m \) filters with the smallest sum values and their corresponding feature maps. The kernels in the next convolutional layer corresponding to the pruned feature maps are also removed.
4. A new kernel matrix is created for both the \( i \)th and \( (i+1) \)th layers, and the remaining kernel weights are copied to the new model.

Figure 2: (a) Sorting filters by absolute weight sum for each layer of VGG-16 on CIFAR-10. The x-axis is the filter index divided by the total number of filters. The y-axis is the filter weight sum divided by the max sum value among filters in that layer. (b) Pruning filters with the lowest absolute weight sums and their corresponding test accuracies on CIFAR-10. (c) Prune and retrain for each single layer of VGG-16 on CIFAR-10. Some layers are sensitive and it can be harder to recover accuracy after pruning them.

Relationship to pruning weights. Pruning filters with low absolute weight sums is similar to pruning low-magnitude weights (Han et al. (2015)). Magnitude-based weight pruning may prune away whole filters when all the kernel weights of a filter are lower than a given threshold. However, it requires a careful tuning of the threshold and it is difficult to predict the exact number of filters that will eventually be pruned. Furthermore, it generates sparse convolutional kernels which can be hard to accelerate given the lack of efficient sparse libraries, especially in the case of low sparsity.
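To make the selection procedure concrete, here is a minimal NumPy sketch of steps 1–4. The paper's implementation is in Torch7; this Python version, along with its array shapes and helper names, is purely illustrative:

```python
import numpy as np

def smallest_filter_indices(weights, m):
    """weights: kernel tensor of one conv layer, shape (n_out, n_in, k, k).
    Returns the indices of the m filters with the smallest l1-norms."""
    s = np.abs(weights).sum(axis=(1, 2, 3))  # s_j per filter (step 1)
    return np.argsort(s)[:m]                 # sort, take smallest (steps 2-3)

# Example: prune 32 of 128 filters with 3x3 kernels over 64 input channels.
w = np.random.randn(128, 64, 3, 3)
pruned = smallest_filter_indices(w, 32)
kept = np.setdiff1d(np.arange(w.shape[0]), pruned)
w_new = w[kept]  # new kernel matrix for layer i (step 4)
# In layer i+1, the input kernels over the pruned maps are dropped likewise:
# w_next_new = w_next[:, kept]
```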
Relationship to group-sparse regularization on filters. Recent work (Zhou et al. (2016); Wen et al. (2016)) applies group-sparse regularization (\( \sum_{j=1}^{n_{i+1}} \| F_{i,j} \|_2 \), i.e., the \( \ell_{2,1} \)-norm) on convolutional filters, which also favors zeroing out filters with small \( \ell_2 \)-norms, i.e. \( F_{i,j} = 0 \). In practice, we do not observe a noticeable difference between the \( \ell_2 \)-norm and the \( \ell_1 \)-norm for filter selection, as the important filters tend to have large values for both measures (Appendix 6.1). Zeroing out the weights of multiple filters during training has a similar effect to pruning filters with the strategy of iterative pruning and retraining introduced in Section 3.4.

3.2 DETERMINING A SINGLE LAYER’S SENSITIVITY TO PRUNING

To understand the sensitivity of each layer, we prune each layer independently and evaluate the resulting pruned network’s accuracy on the validation set. Figure 2(b) shows that layers that maintain their accuracy as filters are pruned away correspond to layers with larger slopes in Figure 2(a). On the contrary, layers with relatively flat slopes are more sensitive to pruning. We empirically determine the number of filters to prune for each layer based on their sensitivity to pruning. For deep networks such as VGG-16 or ResNets, we observe that layers in the same stage (with the same feature map size) have a similar sensitivity to pruning. To avoid introducing layer-wise meta-parameters, we use the same pruning ratio for all layers in the same stage. For layers that are sensitive to pruning, we prune a smaller percentage of their filters or completely skip pruning them.

3.3 PRUNING FILTERS ACROSS MULTIPLE LAYERS

We now discuss how to prune filters across the network. Previous work prunes the weights on a layer-by-layer basis, followed by iteratively retraining and compensating for any loss of accuracy (Han et al. (2015)). However, understanding how to prune filters of multiple layers at once can be useful: 1) for deep networks, pruning and retraining on a layer-by-layer basis can be extremely time-consuming; 2) pruning layers across the network gives a holistic view of the robustness of the network, resulting in a smaller network; 3) for complex networks, a holistic approach may be necessary. For example, for the ResNet, pruning the identity feature maps or the second layer of each residual block results in additional pruning of other layers.

To prune filters across multiple layers, we consider two strategies for layer-wise filter selection (see the code sketch below):
• Independent pruning determines which filters should be pruned at each layer independently of other layers.
• Greedy pruning accounts for the filters that have been removed in the previous layers. This strategy does not consider the kernels for the previously pruned feature maps while calculating the sum of absolute weights.

Figure 3 illustrates the difference between the two approaches in calculating the sum of absolute weights. The greedy approach, though not globally optimal, is holistic and results in pruned networks with higher accuracy, especially when many filters are pruned.
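A hedged sketch of the greedy bookkeeping, continuing the NumPy illustration from Section 3.1 (function and variable names are again hypothetical):

```python
import numpy as np

def greedy_filter_sums(w_next, pruned_prev):
    """w_next: kernels of layer i+1, shape (n_out, n_in, k, k).
    pruned_prev: indices of feature maps already pruned in layer i.
    Greedy strategy: ignore kernels over already-pruned input maps
    when computing each filter's absolute weight sum."""
    kept_inputs = np.setdiff1d(np.arange(w_next.shape[1]), pruned_prev)
    return np.abs(w_next[:, kept_inputs]).sum(axis=(1, 2, 3))

# Independent pruning would instead sum over all input channels:
# s = np.abs(w_next).sum(axis=(1, 2, 3))
```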
Figure 3: Pruning filters across consecutive layers. The independent pruning strategy calculates the filter sum (columns marked in green) without considering feature maps removed in the previous layer (shown in blue), so the kernel weights marked in yellow are still included. The greedy pruning strategy does not count kernels for the already pruned feature maps. Both approaches result in a \((n_{i+1} - 1) \times (n_{i+2} - 1)\) kernel matrix.

For simpler CNNs like VGGNet or AlexNet, we can easily prune any of the filters in any convolutional layer. However, for complex network architectures such as Residual networks (He et al. (2016)), pruning filters may not be straightforward. The architecture of ResNet imposes restrictions, and the filters need to be pruned carefully. We show the filter pruning for residual blocks with projection mapping in Figure 4. Here, the filters of the first layer in the residual block can be arbitrarily pruned, as this does not change the number of output feature maps of the block. However, the correspondence between the output feature maps of the second convolutional layer and the identity feature maps makes the second layer difficult to prune. Hence, to prune the second convolutional layer of the residual block, the corresponding projected feature maps must also be pruned. Since the identity feature maps are more important than the added residual maps, the feature maps to be pruned should be determined by the pruning results of the shortcut layer. To determine which identity feature maps are to be pruned, we use the same selection criterion based on the filters of the shortcut convolutional layers (with \(1 \times 1\) kernels). The second layer of the residual block is pruned with the same filter indices as selected by the pruning of the shortcut layer.

Figure 4: Pruning residual blocks with the projection shortcut. The filters to be pruned for the second layer of the residual block (marked as green) are determined by the pruning result of the shortcut projection. The first layer of the residual block can be pruned without restrictions.

3.4 RETRAINING PRUNED NETWORKS TO REGAIN ACCURACY

After pruning the filters, the performance degradation should be compensated for by retraining the network. There are two strategies for pruning filters across multiple layers and retraining:
1. Prune once and retrain: prune the filters of multiple layers at once and retrain until the original accuracy is restored.
2. Prune and retrain iteratively: prune filters layer by layer or filter by filter and then retrain iteratively. The model is retrained before pruning the next layer, so that the weights can adapt to the changes from the pruning process.

We find that for layers that are resilient to pruning, the prune-once-and-retrain strategy can be used to prune away significant portions of the network, and any loss in accuracy can be regained by retraining for a short period of time (less than the original training time). However, when some filters from the sensitive layers are pruned away or large portions of the network are pruned away, it may not be possible to recover the original accuracy.
Iterative pruning and retraining may yield better results, but the iterative process requires many more epochs, especially for very deep networks.

4 EXPERIMENTS

We prune two types of networks: simple CNNs (VGG-16 on CIFAR-10) and Residual networks (ResNet-56/110 on CIFAR-10 and ResNet-34 on ImageNet). Unlike AlexNet or VGG (on ImageNet), which are often used to demonstrate model compression, both VGG (on CIFAR-10) and Residual networks have fewer parameters in the fully connected layers. Hence, pruning a large percentage of parameters from these networks is challenging. We implement our filter pruning method in Torch7 (Collobert et al. (2011)). When filters are pruned, a new model with fewer filters is created, and the remaining parameters of the modified layers, as well as the unaffected layers, are copied into the new model. Furthermore, if a convolutional layer is pruned, the weights of the subsequent batch normalization layer are also removed. To get the baseline accuracies for each network, we train each model from scratch and follow the same pre-processing and hyper-parameters as ResNet (He et al. (2016)). For retraining, we use a constant learning rate of 0.001 and retrain for 40 epochs on CIFAR-10 and 20 epochs on ImageNet, which represents one-fourth of the original training epochs. Past work has reported up to \(3\times\) the original training time to retrain pruned networks (Han et al. (2015)).

<table>
<tr> <th>Model</th> <th>Error(%)</th> <th>FLOP</th> <th>Pruned %</th> <th>Parameters</th> <th>Pruned %</th> </tr>
<tr> <td>VGG-16</td> <td>6.75</td> <td>3.13 \(\times 10^8\)</td> <td></td> <td>1.5 \(\times 10^7\)</td> <td></td> </tr>
<tr> <td>VGG-16-pruned-A</td> <td><b>6.60</b></td> <td>2.06 \(\times 10^8\)</td> <td>34.2%</td> <td>5.4 \(\times 10^6\)</td> <td>64.0%</td> </tr>
<tr> <td>VGG-16-pruned-A scratch-train</td> <td>6.88</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>ResNet-56</td> <td>6.96</td> <td>1.25 \(\times 10^8\)</td> <td></td> <td>8.5 \(\times 10^5\)</td> <td></td> </tr>
<tr> <td>ResNet-56-pruned-A</td> <td>6.90</td> <td>1.12 \(\times 10^8\)</td> <td>10.4%</td> <td>7.7 \(\times 10^5\)</td> <td>9.4%</td> </tr>
<tr> <td>ResNet-56-pruned-B</td> <td><b>6.94</b></td> <td>9.09 \(\times 10^7\)</td> <td>27.6%</td> <td>7.3 \(\times 10^5\)</td> <td>13.7%</td> </tr>
<tr> <td>ResNet-56-pruned-B scratch-train</td> <td>8.69</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>ResNet-110</td> <td>6.47</td> <td>2.53 \(\times 10^8\)</td> <td></td> <td>1.72 \(\times 10^6\)</td> <td></td> </tr>
<tr> <td>ResNet-110-pruned-A</td> <td><b>6.45</b></td> <td>2.13 \(\times 10^8\)</td> <td>15.9%</td> <td>1.68 \(\times 10^6\)</td> <td>2.3%</td> </tr>
<tr> <td>ResNet-110-pruned-B</td> <td>6.70</td> <td>1.55 \(\times 10^8\)</td> <td>38.6%</td> <td>1.16 \(\times 10^6\)</td> <td>32.4%</td> </tr>
<tr> <td>ResNet-110-pruned-B scratch-train</td> <td>7.06</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>ResNet-34</td> <td>26.77</td> <td>3.64 \(\times 10^9\)</td> <td></td> <td>2.16 \(\times 10^7\)</td> <td></td> </tr>
<tr> <td>ResNet-34-pruned-A</td> <td>27.44</td> <td>3.08 \(\times 10^9\)</td> <td>15.5%</td> <td>1.99 \(\times 10^7\)</td> <td>7.6%</td> </tr>
<tr> <td>ResNet-34-pruned-B</td> <td>27.83</td> <td>2.76 \(\times 10^9\)</td> <td>24.2%</td> <td>1.93 \(\times 10^7\)</td> <td>10.8%</td> </tr>
<tr> <td>ResNet-34-pruned-C</td> <td>27.52</td> <td>3.37 \(\times 10^9\)</td> <td>7.5%</td> <td>2.01 \(\times 10^7\)</td> <td>7.2%</td> </tr>
</table>

Table 1: Overall results.
The best test/validation accuracy during the retraining process is reported. Training a pruned model from scratch performs worse than retraining a pruned model, which may indicate the difficulty of training a network with small capacity.

4.1 VGG-16 ON CIFAR-10

VGG-16 is a high-capacity network originally designed for the ImageNet dataset (Simonyan & Zisserman (2015)). Recently, Zagoruyko (2015) applies a slightly modified version of the model to CIFAR-10 and achieves state-of-the-art results. As shown in Table 2, VGG-16 on CIFAR-10 consists of 13 convolutional layers and 2 fully connected layers, in which the fully connected layers do not occupy a large portion of the parameters due to the small input size and fewer hidden units. We use the model described in Zagoruyko (2015), but add a Batch Normalization (Ioffe & Szegedy (2015)) layer after each convolutional layer and the first linear layer, without using Dropout (Srivastava et al. (2014)).

Table 2: VGG-16 on CIFAR-10 and the pruned model. The last two columns show the number of feature maps and the reduced percentage of FLOP from the pruned model.

<table>
<tr> <th>layer type</th> <th>\(w_i \times h_i\)</th> <th>#Maps</th> <th>FLOP</th> <th>#Params</th> <th>#Maps</th> <th>FLOP%</th> </tr>
<tr> <td>Conv_1</td> <td>32 \(\times\) 32</td> <td>64</td> <td>1.8E+06</td> <td>1.7E+03</td> <td>32</td> <td>50%</td> </tr>
<tr> <td>Conv_2</td> <td>32 \(\times\) 32</td> <td>64</td> <td>3.8E+07</td> <td>3.7E+04</td> <td>64</td> <td>50%</td> </tr>
<tr> <td>Conv_3</td> <td>16 \(\times\) 16</td> <td>128</td> <td>1.9E+07</td> <td>7.4E+04</td> <td>128</td> <td>0%</td> </tr>
<tr> <td>Conv_4</td> <td>16 \(\times\) 16</td> <td>128</td> <td>3.8E+07</td> <td>1.5E+05</td> <td>128</td> <td>0%</td> </tr>
<tr> <td>Conv_5</td> <td>8 \(\times\) 8</td> <td>256</td> <td>1.9E+07</td> <td>2.9E+05</td> <td>256</td> <td>0%</td> </tr>
<tr> <td>Conv_6</td> <td>8 \(\times\) 8</td> <td>256</td> <td>3.8E+07</td> <td>5.9E+05</td> <td>256</td> <td>0%</td> </tr>
<tr> <td>Conv_7</td> <td>8 \(\times\) 8</td> <td>256</td> <td>3.8E+07</td> <td>5.9E+05</td> <td>256</td> <td>0%</td> </tr>
<tr> <td>Conv_8</td> <td>4 \(\times\) 4</td> <td>512</td> <td>1.9E+07</td> <td>1.2E+06</td> <td>256</td> <td>50%</td> </tr>
<tr> <td>Conv_9</td> <td>4 \(\times\) 4</td> <td>512</td> <td>3.8E+07</td> <td>2.4E+06</td> <td>256</td> <td>75%</td> </tr>
<tr> <td>Conv_10</td> <td>4 \(\times\) 4</td> <td>512</td> <td>3.8E+07</td> <td>2.4E+06</td> <td>256</td> <td>75%</td> </tr>
<tr> <td>Conv_11</td> <td>2 \(\times\) 2</td> <td>512</td> <td>9.4E+06</td> <td>2.4E+06</td> <td>256</td> <td>75%</td> </tr>
<tr> <td>Conv_12</td> <td>2 \(\times\) 2</td> <td>512</td> <td>9.4E+06</td> <td>2.4E+06</td> <td>256</td> <td>75%</td> </tr>
<tr> <td>Conv_13</td> <td>2 \(\times\) 2</td> <td>512</td> <td>9.4E+06</td> <td>2.4E+06</td> <td>256</td> <td>75%</td> </tr>
<tr> <td>Linear</td> <td>1</td> <td>512</td> <td>2.6E+05</td> <td>2.6E+05</td> <td>512</td> <td>50%</td> </tr>
<tr> <td>Linear</td> <td>1</td> <td>10</td> <td>5.1E+03</td> <td>5.1E+03</td> <td>10</td> <td>0%</td> </tr>
<tr> <td>Total</td> <td></td> <td></td> <td>3.1E+08</td> <td>1.5E+07</td> <td></td> <td>34%</td> </tr>
</table>

Note that when the last convolutional layer is pruned, the input to the linear layer is changed and the corresponding connections are also removed. As shown in Figure 2(b), each of the convolutional layers with 512 feature maps can drop at least 60% of its filters without affecting the accuracy. Figure 2(c) shows that with retraining, almost 90% of the filters of these layers can be safely removed.
One possible explanation is that these filters operate on \(4 \times 4\) or \(2 \times 2\) feature maps, which may have no meaningful spatial connections in such small dimensions. For instance, ResNets for CIFAR-10 do not perform any convolutions on feature maps below \(8 \times 8\) dimensions. Unlike previous work (Zeiler & Fergus (2014); Han et al. (2015)), we observe that the first layer is robust to pruning compared to the next few layers. This is possible for a simple dataset like CIFAR-10, on which the model does not learn as many useful filters as on ImageNet (as shown in Figure 5). Even when 80% of the filters from the first layer are pruned, the number of remaining filters (12) is still larger than the number of raw input channels. However, when removing 80% of the filters from the second layer, the layer corresponds to a 64-to-12 mapping, which may lose significant information from the previous layers, thereby hurting the accuracy. With 50% of the filters pruned in layer 1 and layers 8 to 13, we achieve a 34% FLOP reduction at the same accuracy.

Figure 5: Visualization of filters in the first convolutional layer of VGG-16 trained on CIFAR-10. Filters are ranked by \( \ell_1 \)-norm.

4.2 ResNet-56/110 ON CIFAR-10

ResNets for CIFAR-10 have three stages of residual blocks for feature maps with sizes of \(32 \times 32\), \(16 \times 16\) and \(8 \times 8\). Each stage has the same number of residual blocks. When the number of feature maps increases, the shortcut layer provides an identity mapping with additional zero padding for the increased dimensions. Since there is no projection mapping for choosing the identity feature maps, we only consider pruning the first layer of the residual block. As shown in Figure 6, most of the layers are robust to pruning. For ResNet-110, pruning some single layers without retraining even improves the performance.

Figure 6: Sensitivity to pruning for the first layer of each residual block of ResNet-56/110.

In addition, we find that layers that are sensitive to pruning (layers 20, 38 and 54 for ResNet-56; layers 36, 38 and 74 for ResNet-110) lie at the residual blocks close to the layers where the number of feature maps changes, e.g., the first and the last residual blocks of each stage. We believe this happens because the precise residual errors are necessary for the newly added empty feature maps.

The retraining performance can be improved by skipping these sensitive layers. As shown in Table 1, ResNet-56-pruned-A improves the performance by pruning 10% of the filters while skipping the sensitive layers 16, 20, 38 and 54. In addition, we find that deeper layers are more sensitive to pruning than layers in the earlier stages of the network. Hence, we use a different pruning rate for each stage; we use \( p_i \) to denote the pruning rate for layers in the \(i\)th stage (a configuration sketch follows this subsection). ResNet-56-pruned-B skips more layers (16, 18, 20, 34, 38, 54) and prunes layers with \( p_1=60\% \), \( p_2=30\% \) and \( p_3=10\% \). For ResNet-110, the first pruned model gets a slightly better result with \( p_1=50\% \), \( p_2=40\% \) and layer 36 skipped. ResNet-110-pruned-B skips layers 36, 38 and 74 and prunes with \( p_1=50\% \), \( p_2=40\% \) and \( p_3=30\% \). When there are more than two residual blocks in each stage, the middle residual blocks may be redundant and can be easily pruned. This might explain why ResNet-110 is easier to prune than ResNet-56.
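The stage-wise rates and skip lists above can be captured in a small configuration helper. This is a hypothetical illustration of the ResNet-56 pruned-B settings reported in the text, not code from the paper:

```python
# Hypothetical per-layer pruning plan for ResNet-56 (pruned-B settings).
SKIP_LAYERS = {16, 18, 20, 34, 38, 54}    # sensitive layers, left intact
STAGE_RATE = {1: 0.60, 2: 0.30, 3: 0.10}  # p1, p2, p3

def pruning_rate(layer_idx, stage):
    """Rate for the first conv of a residual block: 0 for skipped
    sensitive layers, otherwise the stage-wise rate."""
    return 0.0 if layer_idx in SKIP_LAYERS else STAGE_RATE[stage]

print(pruning_rate(20, 2))  # 0.0 (skipped sensitive layer)
print(pruning_rate(2, 1))   # 0.6 (stage-1 rate p1)
```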
4.3 ResNet-34 ON ILSVRC2012

ResNets for ImageNet have four stages of residual blocks for feature maps with sizes of \( 56 \times 56 \), \( 28 \times 28 \), \( 14 \times 14 \) and \( 7 \times 7 \). ResNet-34 uses the projection shortcut when the feature maps are down-sampled. We first prune the first layer of each residual block. Figure 7 shows the sensitivity of the first layer of each residual block. Similar to ResNet-56/110, the first and the last residual blocks of each stage are more sensitive to pruning than the intermediate blocks (i.e., layers 2, 8, 14, 16, 26, 28, 30, 32). We skip those layers and prune the remaining layers of each stage equally. In Table 1, we compare two configurations of pruning percentages for the first three stages: (A) \( p_1=30\%, p_2=30\%, p_3=30\% \); (B) \( p_1=50\%, p_2=60\%, p_3=40\% \). Option B provides 24% FLOP reduction with about 1% loss in accuracy. As seen in the pruning results for ResNet-56/110, we can predict that ResNet-34 is relatively more difficult to prune compared to the deeper ResNets.

We also prune the identity shortcuts and the second convolutional layer of the residual blocks. As these layers have the same number of filters, they are pruned equally. As shown in Figure 7(b), these layers are more sensitive to pruning than the first layers. With retraining, ResNet-34-pruned-C prunes the third stage with \( p_3=20\% \) and results in 7.5% FLOP reduction with 0.75% loss in accuracy. Therefore, pruning the first layer of the residual block is more effective at reducing the overall FLOP than pruning the second layer. This finding also correlates with the bottleneck block design for deeper ResNets, which first reduces the dimension of the input feature maps for the residual layer and then increases the dimension to match the identity mapping.

4.4 COMPARISON WITH PRUNING RANDOM FILTERS AND LARGEST FILTERS

We compare our approach with pruning random filters and largest filters. As shown in Figure 8, pruning the smallest filters outperforms pruning random filters for most of the layers at different pruning ratios. For example, smallest-filter pruning has better accuracy than random-filter pruning for all layers at a pruning ratio of 90%. The accuracy of pruning filters with the largest \( \ell_1 \)-norms drops quickly as the pruning ratio increases, which indicates the importance of filters with larger \( \ell_1 \)-norms.

Figure 8: Comparison of three pruning methods for VGG-16 on CIFAR-10: pruning the smallest filters, pruning random filters and pruning the largest filters. In random filter pruning, the order of filters to be pruned is randomly permuted.

4.5 COMPARISON WITH ACTIVATION-BASED FEATURE MAP PRUNING

Activation-based feature map pruning removes the feature maps with weak activation patterns together with their corresponding filters and kernels (Polyak & Wolf (2015)); it needs sample data as input to determine which feature maps to prune. A feature map \( x_{i+1,j}^{n} \in \mathbb{R}^{h_{i+1} \times w_{i+1}} \) is generated by applying filter \( F_{i,j} \in \mathbb{R}^{n_i \times k \times k} \) to the feature maps of the previous layer \( x_i^n \in \mathbb{R}^{n_i \times h_i \times w_i} \), i.e., \( x_{i+1,j}^n = F_{i,j} * x_i^n \). Given \( N \) randomly selected images \( \{ x_i^n \}_{n=1}^N \) from the training set, the statistics of each feature map can be estimated with one forward pass over the \( N \) sampled images. Note that we calculate the statistics on the feature maps generated by the convolution operations, before batch normalization or the non-linear activation. We compare our \( \ell_1 \)-norm based filter pruning with feature map pruning using the following criteria:

\( \sigma_{\text{mean-mean}}(x_{i+1,j}) = \frac{1}{N} \sum_{n=1}^N \text{mean}(x_{i+1,j}^n) \), \( \sigma_{\text{mean-std}}(x_{i+1,j}) = \frac{1}{N} \sum_{n=1}^N \text{std}(x_{i+1,j}^n) \), \( \sigma_{\text{mean-}\ell_1}(x_{i+1,j}) = \frac{1}{N} \sum_{n=1}^N \| x_{i+1,j}^n \|_1 \), \( \sigma_{\text{mean-}\ell_2}(x_{i+1,j}) = \frac{1}{N} \sum_{n=1}^N \| x_{i+1,j}^n \|_2 \) and \( \sigma_{\text{var-}\ell_2}(x_{i+1,j}) = \mathrm{var}(\{\|x_{i+1,j}^n\|_2\}_{n=1}^N) \),

where mean, std and var are standard statistics (average, standard deviation and variance) of the input. Here, \( \sigma_{\text{var-}\ell_2} \) is the contribution variance of channel criterion proposed in Polyak & Wolf (2015), which is motivated by the intuition that an unimportant feature map has almost similar outputs for the whole training data and acts like an additional bias. The estimation of the criteria becomes more accurate when more sample data is used. Here we use the whole training set (\( N = 50{,}000 \) for CIFAR-10) to compute the statistics.
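As a hedged NumPy sketch of these criteria (the array layout and names are assumptions; the paper's implementation is in Torch7):

```python
import numpy as np

def activation_criteria(conv_out):
    """conv_out: pre-activation outputs of one conv layer collected over
    N sample images, shape (N, n_maps, h, w). Returns one score per
    feature map for each criterion of Section 4.5."""
    flat = conv_out.reshape(conv_out.shape[0], conv_out.shape[1], -1)
    l2 = np.sqrt((flat ** 2).sum(axis=2))    # ||x^n_{i+1,j}||_2 per sample
    return {
        "mean_mean": flat.mean(axis=2).mean(axis=0),
        "mean_std":  flat.std(axis=2).mean(axis=0),
        "mean_l1":   np.abs(flat).sum(axis=2).mean(axis=0),
        "mean_l2":   l2.mean(axis=0),
        "var_l2":    l2.var(axis=0),         # Polyak & Wolf's criterion
    }
```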
Figure 9: Comparison of activation-based feature map pruning for VGG-16 on CIFAR-10.

The performance of feature map pruning with the above criteria for each layer is shown in Figure 9. Smallest-filter pruning outperforms feature map pruning with the criteria \( \sigma_{\text{mean-mean}} \), \( \sigma_{\text{mean-}\ell_1} \), \( \sigma_{\text{mean-}\ell_2} \) and \( \sigma_{\text{var-}\ell_2} \). The \( \sigma_{\text{mean-std}} \) criterion has better or similar performance to the \( \ell_1 \)-norm up to a pruning ratio of 60%. However, its performance drops quickly after that, especially for conv_1, conv_2 and conv_3. We find the \( \ell_1 \)-norm is a good heuristic for filter selection considering that it is data-free.

5 CONCLUSIONS

Modern CNNs often have high capacity with large training and inference costs. In this paper we present a method to prune filters with relatively low weight magnitudes to produce CNNs with reduced computation costs without introducing irregular sparsity. It achieves about 30% reduction in FLOP for VGGNet (on CIFAR-10) and deep ResNets without significant loss in the original accuracy. Instead of pruning with specific layer-wise hyper-parameters and time-consuming iterative retraining, we use a one-shot pruning and retraining strategy for simplicity and ease of implementation. By performing lesion studies on very deep CNNs, we identify layers that are robust or sensitive to pruning, which can be useful for further understanding and improving the architectures.

ACKNOWLEDGMENTS

The authors would like to thank the anonymous reviewers for their valuable feedback.

REFERENCES

Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured Pruning of Deep Convolutional Neural Networks. arXiv preprint arXiv:1512.08571, 2015.

Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.

Matthieu Courbariaux and Yoshua Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.

Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In NIPS, 2013.

Song Han, Jeff Pool, John Tran, and William Dally.
Learning both Weights and Connections for Efficient Neural Network. In NIPS, 2015.

Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. In ISCA, 2016a.

Song Han, Huizi Mao, and William J Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. In ICLR, 2016b.

Babak Hassibi and David G Stork. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon. In NIPS, 1993.

Kaiming He and Jian Sun. Convolutional Neural Networks at Constrained Time Cost. In CVPR, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.

Forrest Iandola, Matthew Moskewicz, Khalid Ashraf, Song Han, William Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. arXiv preprint arXiv:1602.07360, 2016.

Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with Low-Rank Filters for Efficient Image Classification. In ICLR, 2016.

Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In ICML, 2015.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.

Andrew Lavin and Scott Gray. Fast Algorithms for Convolutional Neural Networks. In CVPR, 2016.

Yann Le Cun, John S Denker, and Sara A Solla. Optimal Brain Damage. In NIPS, 1989.

Vadim Lebedev and Victor Lempitsky. Fast ConvNets Using Group-wise Brain Damage. In CVPR, 2016.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in Network. arXiv preprint arXiv:1312.4400, 2013.

Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse Convolutional Neural Networks. In CVPR, 2015.

Zelda Mariet and Suvrit Sra. Diversity Networks. In ICLR, 2016.

Michael Mathieu, Mikael Henaff, and Yann LeCun. Fast Training of Convolutional Networks through FFTs. arXiv preprint arXiv:1312.5851, 2013.

Adam Polyak and Lior Wolf. Channel-Level Acceleration of Deep Face Representations. IEEE Access, 2015.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. In ECCV, 2016.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.

Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR, 2015.

Suraj Srinivas and R Venkatesh Babu. Data-free Parameter Pruning for Deep Neural Networks. In BMVC, 2015.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 2014.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going Deeper with Convolutions. In CVPR, 2015a.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the Inception Architecture for Computer Vision. arXiv preprint arXiv:1512.00567, 2015b.
Cheng Tai, Tong Xiao, Xiaogang Wang, and Weinan E. Convolutional neural networks with low-rank regularization. In ICLR, 2016.

Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning Structured Sparsity in Deep Neural Networks. In NIPS, 2016.

Sergey Zagoruyko. 92.45% on CIFAR-10 in Torch. http://torch.ch/blog/2015/07/30/cifar.html, 2015.

Matthew D Zeiler and Rob Fergus. Visualizing and Understanding Convolutional Networks. In ECCV, 2014.

Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating Very Deep Convolutional Networks for Classification and Detection. IEEE T-PAMI, 2015a.

Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. In CVPR, 2015b.

Hao Zhou, Jose Alvarez, and Fatih Porikli. Less Is More: Towards Compact CNNs. In ECCV, 2016.

6 APPENDIX

6.1 COMPARISON WITH \( \ell_2 \)-NORM BASED FILTER PRUNING

We compare the \( \ell_1 \)-norm with the \( \ell_2 \)-norm for filter pruning. As shown in Figure 10, the \( \ell_1 \)-norm works slightly better than the \( \ell_2 \)-norm for layer conv_2. There is no significant difference between the two norms for the other layers.

Figure 10: Comparison of \( \ell_1 \)-norm and \( \ell_2 \)-norm based filter pruning for VGG-16 on CIFAR-10.

6.2 FLOP AND WALL-CLOCK TIME

FLOP is a commonly used measure to compare the computation complexities of CNNs. It is easy to compute and can be done statically, independent of the underlying hardware and software implementations. Since we physically prune the filters by creating a smaller model and then copying the weights, there are no masks or sparsity introduced into the original dense BLAS operations. Therefore, the FLOP and wall-clock time of the pruned model are the same as those of a model created from scratch with a smaller number of filters.

We report the inference time of the original model and the pruned model on the test set of CIFAR-10 and the validation set of ILSVRC 2012, which contain 10,000 32 × 32 images and 50,000 224 × 224 images respectively. The ILSVRC 2012 dataset is used only for ResNet-34. The evaluation is conducted in Torch7 with a Titan X (Pascal) GPU and cuDNN v5.1, using a mini-batch size of 128. As shown in Table 3, the saved inference time is close to the FLOP reduction. Note that the FLOP number only considers the operations in the Conv and FC layers, while some computations, such as Batch Normalization and other overheads, are not accounted for.
<table> <tr> <th>Model</th> <th>FLOP</th> <th>Pruned %</th> <th>Time (s)</th> <th>Saved %</th> </tr> <tr> <td>VGG-16</td> <td>3.13 × 10<sup>8</sup></td> <td></td> <td>1.23</td> <td></td> </tr> <tr> <td>VGG-16-pruned-A</td> <td>2.06 × 10<sup>8</sup></td> <td>34.2%</td> <td>0.73</td> <td>40.7%</td> </tr> <tr> <td>ResNet-56</td> <td>1.25 × 10<sup>8</sup></td> <td></td> <td>1.31</td> <td></td> </tr> <tr> <td>ResNet-56-pruned-B</td> <td>9.09 × 10<sup>7</sup></td> <td>27.6%</td> <td>0.99</td> <td>24.4%</td> </tr> <tr> <td>ResNet-110</td> <td>2.53 × 10<sup>8</sup></td> <td></td> <td>2.38</td> <td></td> </tr> <tr> <td>ResNet-110-pruned-B</td> <td>1.55 × 10<sup>8</sup></td> <td>38.6%</td> <td>1.86</td> <td>21.8%</td> </tr> <tr> <td>ResNet-34</td> <td>3.64 × 10<sup>9</sup></td> <td></td> <td>36.02</td> <td></td> </tr> <tr> <td>ResNet-34-pruned-B</td> <td>2.76 × 10<sup>9</sup></td> <td>24.2%</td> <td>22.93</td> <td>28.0%</td> </tr> </table> Table 3: The reduction of FLOP and wall-clock time for inference.
ABSTRACT The success of CNNs in various applications is accompanied by a significant increase in computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights removes a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR-10 while regaining close to the original accuracy by retraining the networks. 1 INTRODUCTION The ImageNet challenge has led to significant advancements in exploring various architectural choices in CNNs (Russakovsky et al. (2015); Krizhevsky et al. (2012); Simonyan & Zisserman (2015); Szegedy et al. (2015a); He et al. (2016)). The general trend over the past few years has been that networks have grown deeper, with an overall increase in the number of parameters and convolution operations. These high-capacity networks have significant inference costs, especially when used with embedded sensors or mobile devices where computational and power resources may be limited. For these applications, in addition to accuracy, computational efficiency and small network sizes are crucial enabling factors (Szegedy et al. (2015b)). In addition, web services that provide image search and image classification APIs, which operate on a time budget and often serve hundreds of thousands of images per second, benefit significantly from lower inference times. There has been a significant amount of work on reducing the storage and computation costs by model compression (LeCun et al. (1989); Hassibi & Stork (1993); Srinivas & Babu (2015); Han et al. (2015); Mariet & Sra (2016)). Recently Han et al. (2015; 2016b) report impressive compression rates on AlexNet (Krizhevsky et al. (2012)) and VGGNet (Simonyan & Zisserman (2015)) by pruning weights with small magnitudes and then retraining without hurting the overall accuracy. However, pruning parameters does not necessarily reduce the computation time, since the majority of the parameters removed are from the fully connected layers where the computation cost is low; e.g., the fully connected layers of VGG-16 occupy 90% of the total parameters but contribute less than 1% of the overall floating point operations (FLOP). They also demonstrate that the convolutional layers can be compressed and accelerated (Iandola et al. (2016)), but these approaches additionally require sparse BLAS libraries or even specialized hardware (Han et al. (2016a)).
Modern libraries that provide speedups using sparse operations over CNNs are often limited (Szegedy et al. (2015a); Liu et al. (2015)), and maintaining sparse data structures also creates an additional storage overhead, which can be significant for low-precision weights. Recent work on CNNs has yielded deep architectures with more efficient design (Szegedy et al. (2015a;b); He & Sun (2015); He et al. (2016)), in which the fully connected layers are replaced with average pooling layers (Lin et al. (2013); He et al. (2016)), which reduces the number of parameters significantly. The computation cost is also reduced by downsampling the image at an early stage to reduce the size of the feature maps (He & Sun (2015)). Nevertheless, as the networks continue to become deeper, the computation costs of the convolutional layers continue to dominate. CNNs with large capacity usually have significant redundancy among different filters and feature channels. In this work, we focus on reducing the computation cost of well-trained CNNs by pruning filters. Compared to pruning weights across the network, filter pruning is a naturally structured way of pruning without introducing sparsity and therefore does not require sparse libraries or any specialized hardware. The number of pruned filters correlates directly with acceleration by reducing the number of matrix multiplications, which makes it easy to tune for a target speedup. In addition, instead of layer-wise iterative fine-tuning (retraining), we adopt a one-shot pruning and retraining strategy to save retraining time when pruning filters across multiple layers, which is critical for pruning very deep networks. Finally, we observe that even ResNets, which have significantly fewer parameters and lower inference costs than AlexNet or VGGNet, can still achieve about 30% FLOP reduction without sacrificing too much accuracy. We also conduct a sensitivity analysis of the convolutional layers in ResNets that improves the understanding of ResNets. 2 RELATED WORK The early work by LeCun et al. (1989) introduces Optimal Brain Damage, which prunes weights with a theoretically justified saliency measure. Later, Hassibi & Stork (1993) propose Optimal Brain Surgeon to remove unimportant weights determined by second-order derivative information. Mariet & Sra (2016) reduce the network redundancy by identifying a subset of diverse neurons that does not require retraining. However, this method only operates on the fully-connected layers and introduces sparse connections. To reduce the computation costs of the convolutional layers, past work has proposed to approximate convolutional operations by representing the weight matrix as a low-rank product of two smaller matrices without changing the original number of filters (Denil et al. (2013); Jaderberg et al. (2014); Zhang et al. (2015a;b); Tai et al. (2016); Ioannou et al. (2016)). Other approaches to reduce the convolutional overheads include using FFT-based convolutions (Mathieu et al. (2013)) and fast convolution using the Winograd algorithm (Lavin & Gray (2016)). Additionally, quantization (Han et al. (2016b)) and binarization (Rastegari et al. (2016); Courbariaux & Bengio (2016)) can be used to reduce the model size and lower the computation overheads. Our method can be used in addition to these techniques to reduce computation costs without incurring additional overheads. Several works have studied removing redundant feature maps from a well-trained network (Anwar et al. (2015); Polyak & Wolf (2015)). Anwar et al.
(2015) introduce a three-level pruning of the weights and locate the pruning candidates using particle filtering, which selects the best combination from a number of randomly generated masks. Polyak & Wolf (2015) detect the less frequently activated feature maps with sample input data for face detection applications. We choose to analyze the filter weights and prune filters with their corresponding feature maps using a simple magnitude-based measure, without examining possible combinations. We also introduce network-wide holistic approaches to prune filters for simple and complex convolutional network architectures. Concurrently with our work, there is a growing interest in training compact CNNs with sparse constraints (Lebedev & Lempitsky (2016); Zhou et al. (2016); Wen et al. (2016)). Lebedev & Lempitsky (2016) leverage group-sparsity on the convolutional filters to achieve structured brain damage, i.e., they prune the entries of the convolution kernel in a group-wise fashion. Zhou et al. (2016) add group-sparse regularization on neurons during training to learn compact CNNs with reduced filters. Wen et al. (2016) add a structured sparsity regularizer on each layer to reduce trivial filters, channels or even layers. For filter-level pruning, all of the above works use the \( \ell_{2,1} \)-norm as a regularizer. Similar to the above work, we use the \( \ell_1 \)-norm to select unimportant filters and physically prune them. Our fine-tuning process is the same as the conventional training procedure, without introducing additional regularization. Our approach does not introduce extra layer-wise meta-parameters for the regularizer, except for the percentage of filters to be pruned, which is directly related to the desired speedup. By employing stage-wise pruning, we can set a single pruning rate for all layers in one stage. 3 PRUNING FILTERS AND FEATURE MAPS Let \( n_i \) denote the number of input channels for the \( i \)th convolutional layer and \( h_i/w_i \) be the height/width of the input feature maps. The convolutional layer transforms the input feature maps \( x_i \in \mathbb{R}^{n_i \times h_i \times w_i} \) into the output feature maps \( x_{i+1} \in \mathbb{R}^{n_{i+1} \times h_{i+1} \times w_{i+1}} \), which are used as input feature maps for the next convolutional layer. This is achieved by applying \( n_{i+1} \) 3D filters \( F_{i,j} \in \mathbb{R}^{n_i \times k \times k} \) on the \( n_i \) input channels, in which one filter generates one feature map. Each filter is composed of \( n_i \) 2D kernels \( K \in \mathbb{R}^{k \times k} \) (e.g., \( 3 \times 3 \)). All the filters, together, constitute the kernel matrix \( F_i \in \mathbb{R}^{n_i \times n_{i+1} \times k \times k} \). The number of operations of the convolutional layer is \( n_{i+1} n_i k^2 h_{i+1} w_{i+1} \). As shown in Figure 1, when a filter \( F_{i,j} \) is pruned, its corresponding feature map \( x_{i+1,j} \) is removed, which saves \( n_i k^2 h_{i+1} w_{i+1} \) operations. The kernels that apply on the removed feature maps from the filters of the next convolutional layer are also removed, which saves an additional \( n_{i+2} k^2 h_{i+2} w_{i+2} \) operations. Pruning \( m \) filters of layer \( i \) will reduce \( m/n_{i+1} \) of the computation cost for both layers \( i \) and \( i+1 \).
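The \( m/n_{i+1} \) fraction above is easy to verify numerically. The following sketch (with hypothetical layer shapes) counts the operations of layers \( i \) and \( i+1 \) before and after pruning \( m \) filters from layer \( i \).

```python
# Sketch: pruning m of the n_{i+1} filters of layer i removes the fraction
# m / n_{i+1} of the operations in both layer i and layer i+1.

def layer_ops(n_in, n_out, k, h, w):
    return n_out * n_in * k * k * h * w

n_i, n_i1, n_i2 = 64, 128, 128   # channel counts n_i, n_{i+1}, n_{i+2}
k, h, w, m = 3, 16, 16, 32       # kernel size, output sizes, pruned filters

before = layer_ops(n_i, n_i1, k, h, w) + layer_ops(n_i1, n_i2, k, h, w)
after = layer_ops(n_i, n_i1 - m, k, h, w) + layer_ops(n_i1 - m, n_i2, k, h, w)
print(1 - after / before)        # 0.25 == m / n_{i+1}
```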
![Pruning a filter results in removal of its corresponding feature map and related kernels in the next layer.](page_495_678_670_180.png) Figure 1: Pruning a filter results in removal of its corresponding feature map and related kernels in the next layer. 3.1 DETERMINING WHICH FILTERS TO PRUNE WITHIN A SINGLE LAYER Our method prunes the less useful filters from a well-trained model for computational efficiency while minimizing the accuracy drop. We measure the relative importance of a filter in each layer by calculating the sum of its absolute weights \( \sum |F_{i,j}| \), i.e., its \( \ell_1 \)-norm \( \|F_{i,j}\|_1 \). Since the number of input channels, \( n_i \), is the same across filters, \( \sum |F_{i,j}| \) also represents the average magnitude of its kernel weights. This value gives an expectation of the magnitude of the output feature map. Filters with smaller kernel weights tend to produce feature maps with weak activations compared to the other filters in that layer. Figure 2(a) illustrates the distribution of filters' absolute weight sums for each convolutional layer in a VGG-16 network trained on the CIFAR-10 dataset, where the distribution varies significantly across layers. We find that pruning the smallest filters works better than pruning the same number of random or largest filters (Section 4.4). Compared to other criteria for activation-based feature map pruning (Section 4.5), we find the \( \ell_1 \)-norm is a good criterion for data-free filter selection. The procedure for pruning \( m \) filters from the \( i \)th convolutional layer is as follows: 1. For each filter \( F_{i,j} \), calculate the sum of its absolute kernel weights \( s_j = \sum_{l=1}^{n_i} \sum |K_{lj}| \). 2. Sort the filters by \( s_j \). 3. Prune the \( m \) filters with the smallest sum values and their corresponding feature maps. The kernels in the next convolutional layer corresponding to the pruned feature maps are also removed. 4. A new kernel matrix is created for both the \( i \)th and \( (i+1) \)th layers, and the remaining kernel weights are copied to the new model. Figure 2: (a) Sorting filters by absolute weights sum for each layer of VGG-16 on CIFAR-10. The x-axis is the filter index divided by the total number of filters. The y-axis is the filter weight sum divided by the max sum value among filters in that layer. (b) Pruning filters with the lowest absolute weights sum and their corresponding test accuracies on CIFAR-10. (c) Prune and retrain for each single layer of VGG-16 on CIFAR-10. Some layers are sensitive and it can be harder to recover accuracy after pruning them. Relationship to pruning weights Pruning filters with low absolute weight sums is similar to pruning low-magnitude weights (Han et al. (2015)). Magnitude-based weight pruning may prune away whole filters when all the kernel weights of a filter are lower than a given threshold. However, it requires careful tuning of the threshold, and it is difficult to predict the exact number of filters that will eventually be pruned. Furthermore, it generates sparse convolutional kernels which can be hard to accelerate given the lack of efficient sparse libraries, especially in the low-sparsity case. Relationship to group-sparse regularization on filters Recent works (Zhou et al. (2016); Wen et al. (2016)) apply group-sparse regularization (\( \sum_{j=1}^{n_{i+1}} \| \mathcal{F}_{i,j} \|_2 \), i.e., the \( \ell_{2,1} \)-norm) on convolutional filters, which also favors zeroing out filters with small \( \ell_2 \)-norms, i.e.
\( \mathcal{F}_{i,j} = 0 \). In practice, we do not observe a noticeable difference between the \( \ell_2 \)-norm and the \( \ell_1 \)-norm for filter selection, as the important filters tend to have large values under both measures (Appendix 6.1). Zeroing out the weights of multiple filters during training has a similar effect to pruning filters with the strategy of iterative pruning and retraining introduced in Section 3.4. 3.2 DETERMINING A SINGLE LAYER'S SENSITIVITY TO PRUNING To understand the sensitivity of each layer, we prune each layer independently and evaluate the resulting pruned network's accuracy on the validation set. Figure 2(b) shows that layers that maintain their accuracy as filters are pruned away correspond to layers with larger slopes in Figure 2(a). On the contrary, layers with relatively flat slopes are more sensitive to pruning. We empirically determine the number of filters to prune for each layer based on their sensitivity to pruning. For deep networks such as VGG-16 or ResNets, we observe that layers in the same stage (with the same feature map size) have a similar sensitivity to pruning. To avoid introducing layer-wise meta-parameters, we use the same pruning ratio for all layers in the same stage. For layers that are sensitive to pruning, we prune a smaller percentage of their filters or skip pruning them entirely. 3.3 PRUNING FILTERS ACROSS MULTIPLE LAYERS We now discuss how to prune filters across the network. Previous work prunes the weights on a layer-by-layer basis, followed by iteratively retraining and compensating for any loss of accuracy (Han et al. (2015)). However, understanding how to prune filters of multiple layers at once can be useful: 1) for deep networks, pruning and retraining on a layer-by-layer basis can be extremely time-consuming; 2) pruning layers across the network gives a holistic view of the robustness of the network, resulting in a smaller network; 3) for complex networks, a holistic approach may be necessary. For example, for ResNet, pruning the identity feature maps or the second layer of each residual block results in additional pruning of other layers. To prune filters across multiple layers, we consider two strategies for layer-wise filter selection: • Independent pruning determines which filters should be pruned at each layer independently of other layers. • Greedy pruning accounts for the filters that have been removed in the previous layers. This strategy does not consider the kernels for the previously pruned feature maps when calculating the sum of absolute weights. Figure 3 illustrates the difference between the two approaches in calculating the sum of absolute weights. The greedy approach, though not globally optimal, is holistic and results in pruned networks with higher accuracy, especially when many filters are pruned. ![Pruning filters across consecutive layers. The independent pruning strategy calculates the filter sum (columns marked in green) without considering feature maps removed in previous layer (shown in blue), so the kernel weights marked in yellow are still included. The greedy pruning strategy does not count kernels for the already pruned feature maps. Both approaches result in a \((n_{i+1} - 1) \times (n_{i+2} - 1)\) kernel matrix.](page_495_682_670_180.png) Figure 3: Pruning filters across consecutive layers.
The independent pruning strategy calculates the filter sum (columns marked in green) without considering the feature maps removed in the previous layer (shown in blue), so the kernel weights marked in yellow are still included. The greedy pruning strategy does not count kernels for the already pruned feature maps. Both approaches result in a \((n_{i+1} - 1) \times (n_{i+2} - 1)\) kernel matrix. For simpler CNNs like VGGNet or AlexNet, we can easily prune any of the filters in any convolutional layer. However, for complex network architectures such as Residual networks (He et al. (2016)), pruning filters may not be straightforward. The architecture of ResNet imposes restrictions, and the filters need to be pruned carefully. We show the filter pruning for residual blocks with projection mapping in Figure 4. Here, the filters of the first layer in the residual block can be arbitrarily pruned, as this does not change the number of output feature maps of the block. However, the correspondence between the output feature maps of the second convolutional layer and the identity feature maps makes the second layer difficult to prune. Hence, to prune the second convolutional layer of the residual block, the corresponding projected feature maps must also be pruned. Since the identity feature maps are more important than the added residual maps, the feature maps to be pruned should be determined by the pruning results of the shortcut layer. To determine which identity feature maps are to be pruned, we use the same selection criterion based on the filters of the shortcut convolutional layers (with \(1 \times 1\) kernels). The second layer of the residual block is pruned with the same filter indices as selected by the pruning of the shortcut layer. ![Pruning residual blocks with the projection shortcut. The filters to be pruned for the second layer of the residual block (marked as green) are determined by the pruning result of the shortcut projection. The first layer of the residual block can be pruned without restrictions.](page_495_1042_670_180.png) Figure 4: Pruning residual blocks with the projection shortcut. The filters to be pruned for the second layer of the residual block (marked as green) are determined by the pruning result of the shortcut projection. The first layer of the residual block can be pruned without restrictions. 3.4 RETRAINING PRUNED NETWORKS TO REGAIN ACCURACY After pruning the filters, the performance degradation should be compensated for by retraining the network. There are two strategies for pruning the filters across multiple layers: 1. Prune once and retrain: Prune filters of multiple layers at once and retrain until the original accuracy is restored. 2. Prune and retrain iteratively: Prune filters layer by layer or filter by filter and then retrain iteratively. The model is retrained before pruning the next layer so that the weights can adapt to the changes from the pruning process. We find that for layers that are resilient to pruning, the prune-once-and-retrain strategy can prune away significant portions of the network, and any loss in accuracy can be regained by retraining for a short period of time (less than the original training time). However, when some filters from the sensitive layers are pruned away or large portions of the network are pruned away, it may not be possible to recover the original accuracy. Iterative pruning and retraining may yield better results, but the iterative process requires many more epochs, especially for very deep networks.
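Before moving to the experiments, the filter selection of Sections 3.1–3.3 can be summarized in a short sketch. PyTorch is our assumption here (the paper's implementation is in Torch7), and the function names are illustrative rather than the authors' API.

```python
import torch

def smallest_filter_idx(weight, m):
    """Indices of the m filters with the smallest l1-norm (Section 3.1).
    weight: conv kernel tensor of shape (n_out, n_in, k, k)."""
    s = weight.abs().sum(dim=(1, 2, 3))  # s_j for each filter j
    return torch.argsort(s)[:m]          # m smallest sums first

def greedy_filter_idx(weight, m, pruned_in):
    """Greedy cross-layer variant (Section 3.3): ignore kernels that act on
    input channels already pruned in the previous layer."""
    alive = torch.ones(weight.shape[1], dtype=torch.bool)
    alive[pruned_in] = False             # drop pruned input channels
    s = weight[:, alive].abs().sum(dim=(1, 2, 3))
    return torch.argsort(s)[:m]
```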
4 EXPERIMENTS We prune two types of networks: simple CNNs (VGG-16 on CIFAR-10) and Residual networks (ResNet-56/110 on CIFAR-10 and ResNet-34 on ImageNet). Unlike AlexNet or VGG (on ImageNet), which are often used to demonstrate model compression, both VGG (on CIFAR-10) and Residual networks have fewer parameters in the fully connected layers. Hence, pruning a large percentage of parameters from these networks is challenging. We implement our filter pruning method in Torch7 (Collobert et al. (2011)). When filters are pruned, a new model with fewer filters is created and the remaining parameters of the modified layers, as well as the unaffected layers, are copied into the new model. Furthermore, if a convolutional layer is pruned, the weights of the subsequent batch normalization layer are also removed. To get the baseline accuracies for each network, we train each model from scratch and follow the same pre-processing and hyper-parameters as ResNet (He et al. (2016)). For retraining, we use a constant learning rate of 0.001 and retrain for 40 epochs on CIFAR-10 and 20 epochs on ImageNet, which represents one-fourth of the original training epochs. Past work has reported up to \(3 \times\) the original training time to retrain pruned networks (Han et al. (2015)). <table> <tr> <th>Model</th> <th>Error(%)</th> <th>FLOP</th> <th>Pruned %</th> <th>Parameters</th> <th>Pruned %</th> </tr> <tr> <td>VGG-16</td> <td>6.75</td> <td>3.13 \(\times 10^8\)</td> <td></td> <td>1.5 \(\times 10^7\)</td> <td></td> </tr> <tr> <td>VGG-16-pruned-A</td> <td><b>6.60</b></td> <td>2.06 \(\times 10^8\)</td> <td>34.2%</td> <td>5.4 \(\times 10^6\)</td> <td>64.0%</td> </tr> <tr> <td>VGG-16-pruned-A scratch-train</td> <td>6.88</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>ResNet-56</td> <td>6.96</td> <td>1.25 \(\times 10^8\)</td> <td></td> <td>8.5 \(\times 10^5\)</td> <td></td> </tr> <tr> <td>ResNet-56-pruned-A</td> <td>6.90</td> <td>1.12 \(\times 10^8\)</td> <td>10.4%</td> <td>7.7 \(\times 10^5\)</td> <td>9.4%</td> </tr> <tr> <td>ResNet-56-pruned-B</td> <td><b>6.94</b></td> <td>9.09 \(\times 10^7\)</td> <td>27.6%</td> <td>7.3 \(\times 10^5\)</td> <td>13.7%</td> </tr> <tr> <td>ResNet-56-pruned-B scratch-train</td> <td>8.69</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>ResNet-110</td> <td>6.47</td> <td>2.53 \(\times 10^8\)</td> <td></td> <td>1.72 \(\times 10^6\)</td> <td></td> </tr> <tr> <td>ResNet-110-pruned-A</td> <td><b>6.45</b></td> <td>2.13 \(\times 10^8\)</td> <td>15.9%</td> <td>1.68 \(\times 10^6\)</td> <td>2.3%</td> </tr> <tr> <td>ResNet-110-pruned-B</td> <td>6.70</td> <td>1.55 \(\times 10^8\)</td> <td>38.6%</td> <td>1.16 \(\times 10^6\)</td> <td>32.4%</td> </tr> <tr> <td>ResNet-110-pruned-B scratch-train</td> <td>7.06</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>ResNet-34</td> <td>26.77</td> <td>3.64 \(\times 10^9\)</td> <td></td> <td>2.16 \(\times 10^7\)</td> <td></td> </tr> <tr> <td>ResNet-34-pruned-A</td> <td>27.44</td> <td>3.08 \(\times 10^9\)</td> <td>15.5%</td> <td>1.99 \(\times 10^7\)</td> <td>7.6%</td> </tr> <tr> <td>ResNet-34-pruned-B</td> <td>27.83</td> <td>2.76 \(\times 10^9\)</td> <td>24.2%</td> <td>1.93 \(\times 10^7\)</td> <td>10.8%</td> </tr> <tr> <td>ResNet-34-pruned-C</td> <td>27.52</td> <td>3.37 \(\times 10^9\)</td> <td>7.5%</td> <td>2.01 \(\times 10^7\)</td> <td>7.2%</td> </tr> </table> Table 1: Overall results. The best test/validation accuracy during the retraining process is reported.
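The "create a smaller model and copy the weights" step described above could be sketched as follows, again assuming PyTorch rather than the paper's Torch7, with bias-free convolutions for brevity.

```python
import torch.nn as nn

def prune_conv_pair(conv_i, bn_i, conv_next, keep):
    """Sketch of physical pruning: rebuild conv_i with only the filters in
    `keep`, shrink its BatchNorm, and drop the matching input channels of
    conv_next. Assumes bias-free convolutions; strides/padding are copied."""
    new_i = nn.Conv2d(conv_i.in_channels, len(keep), conv_i.kernel_size,
                      conv_i.stride, conv_i.padding, bias=False)
    new_i.weight.data = conv_i.weight.data[keep].clone()

    new_bn = nn.BatchNorm2d(len(keep))
    for name in ("weight", "bias", "running_mean", "running_var"):
        getattr(new_bn, name).data = getattr(bn_i, name).data[keep].clone()

    new_next = nn.Conv2d(len(keep), conv_next.out_channels,
                         conv_next.kernel_size, conv_next.stride,
                         conv_next.padding, bias=False)
    new_next.weight.data = conv_next.weight.data[:, keep].clone()
    return new_i, new_bn, new_next
```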
Training a pruned model from scratch performs worse than retraining a pruned model, which may indicate the difficulty of training a network with a small capacity. 4.1 VGG-16 ON CIFAR-10 VGG-16 is a high-capacity network originally designed for the ImageNet dataset (Simonyan & Zisserman (2015)). Recently, Zagoruyko (2015) applied a slightly modified version of the model to CIFAR-10 and achieved state-of-the-art results. As shown in Table 2, VGG-16 on CIFAR-10 consists of 13 convolutional layers and 2 fully connected layers, in which the fully connected layers do not occupy large portions of the parameters due to the small input size and fewer hidden units. We use the model described in Zagoruyko (2015) but add a Batch Normalization (Ioffe & Szegedy (2015)) layer after each convolutional layer and the first linear layer, without using Dropout (Srivastava et al. (2014)). Table 2: VGG-16 on CIFAR-10 and the pruned model. The last two columns show the number of feature maps and the reduced percentage of FLOP of the pruned model. <table> <tr> <th>layer type</th> <th>\( w_i \times h_i \)</th> <th>#Maps</th> <th>FLOP</th> <th>#Params</th> <th>#Maps (pruned)</th> <th>FLOP reduced %</th> </tr> <tr> <td>Conv_1</td> <td>\( 32 \times 32 \)</td> <td>64</td> <td>1.8E+06</td> <td>1.7E+03</td> <td>32</td> <td>50%</td> </tr> <tr> <td>Conv_2</td> <td>\( 32 \times 32 \)</td> <td>64</td> <td>3.8E+07</td> <td>3.7E+04</td> <td>64</td> <td>50%</td> </tr> <tr> <td>Conv_3</td> <td>\( 16 \times 16 \)</td> <td>128</td> <td>1.9E+07</td> <td>7.4E+04</td> <td>128</td> <td>0%</td> </tr> <tr> <td>Conv_4</td> <td>\( 16 \times 16 \)</td> <td>128</td> <td>3.8E+07</td> <td>1.5E+05</td> <td>128</td> <td>0%</td> </tr> <tr> <td>Conv_5</td> <td>\( 8 \times 8 \)</td> <td>256</td> <td>1.9E+07</td> <td>2.9E+05</td> <td>256</td> <td>0%</td> </tr> <tr> <td>Conv_6</td> <td>\( 8 \times 8 \)</td> <td>256</td> <td>3.8E+07</td> <td>5.9E+05</td> <td>256</td> <td>0%</td> </tr> <tr> <td>Conv_7</td> <td>\( 8 \times 8 \)</td> <td>256</td> <td>3.8E+07</td> <td>5.9E+05</td> <td>256</td> <td>0%</td> </tr> <tr> <td>Conv_8</td> <td>\( 4 \times 4 \)</td> <td>512</td> <td>1.9E+07</td> <td>1.2E+06</td> <td>256</td> <td>50%</td> </tr> <tr> <td>Conv_9</td> <td>\( 4 \times 4 \)</td> <td>512</td> <td>3.8E+07</td> <td>2.4E+06</td> <td>256</td> <td>75%</td> </tr> <tr> <td>Conv_10</td> <td>\( 4 \times 4 \)</td> <td>512</td> <td>3.8E+07</td> <td>2.4E+06</td> <td>256</td> <td>75%</td> </tr> <tr> <td>Conv_11</td> <td>\( 2 \times 2 \)</td> <td>512</td> <td>9.4E+06</td> <td>2.4E+06</td> <td>256</td> <td>75%</td> </tr> <tr> <td>Conv_12</td> <td>\( 2 \times 2 \)</td> <td>512</td> <td>9.4E+06</td> <td>2.4E+06</td> <td>256</td> <td>75%</td> </tr> <tr> <td>Conv_13</td> <td>\( 2 \times 2 \)</td> <td>512</td> <td>9.4E+06</td> <td>2.4E+06</td> <td>256</td> <td>75%</td> </tr> <tr> <td>Linear</td> <td>1</td> <td>512</td> <td>2.6E+05</td> <td>2.6E+05</td> <td>512</td> <td>50%</td> </tr> <tr> <td>Linear</td> <td>1</td> <td>10</td> <td>5.1E+03</td> <td>5.1E+03</td> <td>10</td> <td>0%</td> </tr> <tr> <td>Total</td> <td></td> <td></td> <td>3.1E+08</td> <td>1.5E+07</td> <td></td> <td>34%</td> </tr> </table> Note that when the last convolutional layer is pruned, the input to the linear layer is changed and the connections are also removed. As shown in Figure 2(b), each of the convolutional layers with 512 feature maps can drop at least 60% of its filters without affecting the accuracy. Figure 2(c) shows that with retraining, almost 90% of the filters of these layers can be safely removed.
One possible explanation is that these filters operate on 4 × 4 or 2 × 2 feature maps, which may have no meaningful spatial connections in such small dimensions. For instance, ResNets for CIFAR-10 do not perform any convolutions on feature maps below 8 × 8 dimensions. Unlike previous work (Zeiler & Fergus (2014); Han et al. (2015)), we observe that the first layer is robust to pruning compared to the next few layers. This is possible for a simple dataset like CIFAR-10, on which the model does not learn as many useful filters as on ImageNet (as shown in Figure 5). Even when 80% of the filters from the first layer are pruned, the number of remaining filters (12) is still larger than the number of raw input channels. However, when removing 80% of the filters from the second layer, the layer corresponds to a 64 to 12 mapping, which may lose significant information from previous layers, thereby hurting the accuracy. With 50% of the filters pruned in layer 1 and layers 8 to 13, we achieve 34% FLOP reduction at the same accuracy. ![Visualization of filters in the first convolutional layer of VGG-16 trained on CIFAR-10. Filters are ranked by \( \ell_1 \)-norm.](page_1012_1342_484_246.png) Figure 5: Visualization of filters in the first convolutional layer of VGG-16 trained on CIFAR-10. Filters are ranked by \( \ell_1 \)-norm. 4.2 ResNet-56/110 on CIFAR-10 ResNets for CIFAR-10 have three stages of residual blocks for feature maps with sizes of 32 × 32, 16 × 16 and 8 × 8. Each stage has the same number of residual blocks. When the number of feature maps increases, the shortcut layer provides an identity mapping with additional zero padding for the increased dimensions. Since there is no projection mapping for choosing the identity feature maps, we only consider pruning the first layer of the residual block. As shown in Figure 6, most of the layers are robust to pruning. For ResNet-110, pruning some single layers without retraining even improves the performance.
Figure 6: Sensitivity to pruning for the first layer of each residual block of ResNet-56/110.
In addition, we find that layers that are sensitive to pruning (layers 20, 38 and 54 for ResNet-56; layers 36, 38 and 74 for ResNet-110) lie at the residual blocks close to the layers where the number of feature maps changes, e.g., the first and the last residual blocks of each stage. We believe this happens because the precise residual errors are necessary for the newly added empty feature maps. The retraining performance can be improved by skipping these sensitive layers. As shown in Table 1, ResNet-56-pruned-A improves the performance by pruning 10% of the filters while skipping the sensitive layers 16, 20, 38 and 54. In addition, we find that deeper layers are more sensitive to pruning than layers in the earlier stages of the network. Hence, we use a different pruning rate for each stage. We use \( p_i \) to denote the pruning rate for layers in the \( i \)th stage. ResNet-56-pruned-B skips more layers (16, 18, 20, 34, 38, 54) and prunes layers with \( p_1=60\% \), \( p_2=30\% \) and \( p_3=10\% \). For ResNet-110, the first pruned model (ResNet-110-pruned-A) gets a slightly better result with \( p_1=50\% \), \( p_2=40\% \) and layer 36 skipped. ResNet-110-pruned-B skips layers 36, 38 and 74 and prunes with \( p_1=50\% \), \( p_2=40\% \) and \( p_3=30\% \). When there are more than two residual blocks at each stage, the middle residual blocks may be redundant and can be easily pruned. This might explain why ResNet-110 is easier to prune than ResNet-56.
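The stage-wise schedules above are compact enough to write down directly; the encoding below is a hypothetical configuration sketch that only restates the rates and skip-lists reported in the text.

```python
# Hypothetical encoding of the stage-wise pruning schedules for CIFAR-10
# ResNets: one pruning rate p_i per stage plus a skip-list of sensitive
# layers (layer indices follow the paper's numbering).

resnet56_pruned_B = {"rates": {1: 0.60, 2: 0.30, 3: 0.10},
                     "skip": [16, 18, 20, 34, 38, 54]}
resnet110_pruned_B = {"rates": {1: 0.50, 2: 0.40, 3: 0.30},
                      "skip": [36, 38, 74]}

def rate_for(layer_idx, stage_of, cfg):
    # stage_of is a hypothetical map from layer index to stage (1, 2 or 3)
    return 0.0 if layer_idx in cfg["skip"] else cfg["rates"][stage_of(layer_idx)]
```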
4.3 ResNet-34 on ILSVRC2012 ResNets for ImageNet have four stages of residual blocks for feature maps with sizes of \( 56 \times 56 \), \( 28 \times 28 \), \( 14 \times 14 \) and \( 7 \times 7 \). ResNet-34 uses the projection shortcut when the feature maps are down-sampled. We first prune the first layer of each residual block; Figure 7 shows its sensitivity. Similar to ResNet-56/110, the first and the last residual blocks of each stage are more sensitive to pruning than the intermediate blocks (i.e., layers 2, 8, 14, 16, 26, 28, 30, 32). We skip those layers and prune the remaining layers at each stage equally. In Table 1 we compare two configurations of pruning percentages for the first three stages: (A) \( p_1=30\% \), \( p_2=30\% \), \( p_3=30\% \); (B) \( p_1=50\% \), \( p_2=60\% \), \( p_3=40\% \). Option B provides 24% FLOP reduction with about 1% loss in accuracy. As seen in the pruning results for ResNet-56/110, we can predict that ResNet-34 is relatively more difficult to prune than deeper ResNets. We also prune the identity shortcuts and the second convolutional layer of the residual blocks. As these layers have the same number of filters, they are pruned equally. As shown in Figure 7(b), these layers are more sensitive to pruning than the first layers. With retraining, ResNet-34-pruned-C prunes the third stage with \( p_3=20\% \) and results in 7.5% FLOP reduction with 0.75% loss in accuracy. Therefore, pruning the first layer of the residual block is more effective at reducing the overall FLOP than pruning the second layer. This finding also correlates with the bottleneck block design for deeper ResNets, which first reduces the dimension of the input feature maps for the residual layer and then increases the dimension to match the identity mapping. 4.4 COMPARISON WITH PRUNING RANDOM FILTERS AND LARGEST FILTERS We compare our approach with pruning random filters and pruning the largest filters. As shown in Figure 8, pruning the smallest filters outperforms pruning random filters for most of the layers at different pruning ratios. For example, smallest-filter pruning has better accuracy than random filter pruning for all layers at a pruning ratio of 90%. The accuracy of pruning filters with the largest \( \ell_1 \)-norms drops quickly as the pruning ratio increases, which indicates the importance of filters with larger \( \ell_1 \)-norms.
Figure 8: Comparison of three pruning methods for VGG-16 on CIFAR-10: pruning the smallest filters, pruning random filters and pruning the largest filters. In random filter pruning, the order of filters to be pruned is randomly permuted.
4.5 COMPARISON WITH ACTIVATION-BASED FEATURE MAP PRUNING The activation-based feature map pruning method removes the feature maps with weak activation patterns together with their corresponding filters and kernels (Polyak & Wolf (2015)); it needs sample data as input to determine which feature maps to prune. A feature map \( x_{i+1,j}^{n} \in \mathbb{R}^{h_{i+1} \times w_{i+1}} \) is generated by applying the filter \( F_{i,j} \in \mathbb{R}^{n_i \times k \times k} \) to the feature maps of the previous layer \( x_i^n \in \mathbb{R}^{n_i \times h_i \times w_i} \), i.e., \( x_{i+1,j}^n = F_{i,j} * x_i^n \). Given \( N \) randomly selected images \( \{ x_i^n \}_{n=1}^N \) from the training set, the statistics of each feature map can be estimated with one forward pass over the \( N \) sampled images.
Note that we calculate statistics on the feature maps generated from the convolution operations, before batch normalization or the non-linear activation. We compare our \( \ell_1 \)-norm based filter pruning with feature map pruning using the following criteria: \( \sigma_{\text{mean-mean}}(x_{i,j}) = \frac{1}{N} \sum_{n=1}^N \text{mean}(x_{i,j}^n) \), \( \sigma_{\text{mean-std}}(x_{i,j}) = \frac{1}{N} \sum_{n=1}^N \text{std}(x_{i,j}^n) \), \( \sigma_{\text{mean-}\ell_1}(x_{i,j}) = \frac{1}{N} \sum_{n=1}^N \| x_{i,j}^n \|_1 \), \( \sigma_{\text{mean-}\ell_2}(x_{i,j}) = \frac{1}{N} \sum_{n=1}^N \| x_{i,j}^n \|_2 \) and \( \sigma_{\text{var-}\ell_2}(x_{i,j}) = \mathrm{var}(\{\|x_{i,j}^n\|_2\}_{n=1}^N) \), where mean, std and var are standard statistics (average, standard deviation and variance) of the input. Here, \( \sigma_{\text{var-}\ell_2} \) is the contribution variance of channel criterion proposed in Polyak & Wolf (2015), which is motivated by the intuition that an unimportant feature map has nearly identical outputs across the whole training data and acts like an additional bias. The estimation of the criteria becomes more accurate when more sample data is used. Here we use the whole training set (\( N = 50{,}000 \) for CIFAR-10) to compute the statistics. The performance of feature map pruning with the above criteria for each layer is shown in Figure 9.
Figure 9: Comparison of activation-based feature map pruning for VGG-16 on CIFAR-10.
Smallest filter pruning outperforms feature map pruning with the criteria \( \sigma_{\text{mean-mean}} \), \( \sigma_{\text{mean-}\ell_1} \), \( \sigma_{\text{mean-}\ell_2} \) and \( \sigma_{\text{var-}\ell_2} \). The \( \sigma_{\text{mean-std}} \) criterion has better or similar performance to the \( \ell_1 \)-norm up to a pruning ratio of 60%. However, its performance drops quickly after that, especially for layers conv_1, conv_2 and conv_3. We find the \( \ell_1 \)-norm is a good heuristic for filter selection considering that it is data-free. 5 CONCLUSIONS Modern CNNs often have high capacity with large training and inference costs. In this paper we present a method to prune filters with relatively low weight magnitudes to produce CNNs with reduced computation costs without introducing irregular sparsity. It achieves about 30% reduction in FLOP for VGGNet (on CIFAR-10) and deep ResNets without significant loss in the original accuracy. Instead of pruning with specific layer-wise hyper-parameters and time-consuming iterative retraining, we use the one-shot pruning and retraining strategy for simplicity and ease of implementation. By performing lesion studies on very deep CNNs, we identify layers that are robust or sensitive to pruning, which can be useful for further understanding and improving the architectures. ACKNOWLEDGMENTS The authors would like to thank the anonymous reviewers for their valuable feedback.
accept
Accept (Poster)
6.75
35c51fdb3d71e73fdc3b19aa6636c6e2f1bd484d
iclr
2,017
EMERGENT PREDICATION STRUCTURE IN VECTOR REPRESENTATIONS OF NEURAL READERS Hai Wang* Takeshi Onishi* Kevin Gimpel David McAllester Toyota Technological Institute at Chicago 6045 S. Kenwood Ave. Chicago, Illinois 60637, USA {haiwang,tonishi,kgimpel,mcallester}@ttic.edu *Authors contributed equally. ABSTRACT Reading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of "predication structure" in the hidden state vectors of a class of neural readers including the Attentive Reader and Stanford Reader. We posit that the hidden state vectors can be viewed as (a representation of) a concatenation \([P, c]\) of a "predicate vector" \(P\) and a "constant symbol vector" \(c\), and that the hidden state represents the atomic formula \(P(c)\). This predication structure plays a conceptual role in relating "aggregation readers" such as the Attentive Reader and the Stanford Reader to "explicit reference readers" such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show that the addition of linguistic features to the input to existing neural readers significantly boosts performance, yielding the best results to date on the Who-did-What dataset.\footnote{Code will be available at: https://github.com/sohuren} 1 INTRODUCTION AND OVERVIEW Reading comprehension is a type of question answering task where the answer is to be found in a passage about particular entities and events not otherwise familiar to the reader. In particular, the entities and events should not be mentioned in structured databases of general knowledge. Reading comprehension problems are intended to measure a system's ability to extract semantic information about entities and relations directly from unstructured text. Several large-scale reading comprehension datasets have been introduced recently, in particular the CNN & DailyMail datasets (Hermann et al., 2015), the Children's Book Test (CBT) (Hill et al., 2016), and the Who-did-What dataset (Onishi et al., 2016). The large sizes of these datasets enable the application of deep learning. These are all cloze-style datasets where a question is constructed by deleting a word or phrase from an article summary (in CNN/DailyMail), from a sentence in a children's story (in CBT), or by deleting a person from the first sentence of a different news article on the same entities and events (in Who-did-What). In this paper we present empirical evidence for the emergence of predication structure in a certain class of neural readers. To understand predication structure it is helpful to review the anonymization performed in the CNN/DailyMail dataset. In this dataset, named entities are replaced by anonymous entity identifiers such as "entity37". The passage might contain "entity52 gave entity24 a rousing applause" and the question might be "X received a rousing applause from entity52". The task is to fill in \(X\) from a given multiple-choice list of candidate entity identifiers. A fixed, relatively small set of entity identifiers is used across all the problems, and the same problem is presented many times with the entity identifiers shuffled. This prevents a given entity identifier from having any semantically meaningful vector embedding.
The embeddings of the entity identifiers are presumably just pointers to semantics-free tokens. We will write entity identifiers as logical constant symbols such as \( c \) rather than strings such as "entity37". Aggregation readers, including Memory Networks (Weston et al. (2015); Sukhbaatar et al. (2015)), the Attentive Reader (Hermann et al. (2015)) and the Stanford Reader (Chen et al. (2016)), use bidirectional LSTMs or GRUs to construct a contextual embedding \( h_t \) of each position \( t \) in the passage and also an embedding \( q \) of the question. They then select an answer \( c \) using a criterion similar to \[ \argmax_c \sum_t < h_t, q > < h_t, e(c) > \tag{1} \] where \( e(c) \) is the vector embedding of the constant symbol (entity identifier) \( c \). In practice the inner product \( < h_t, q > \) is normalized over \( t \) using a softmax to yield an attention \( \alpha_t \) over \( t \), and (1) becomes \[ \argmax_c < e(c), \sum_t \alpha_t h_t > . \tag{2} \] Here \( \sum_t \alpha_t h_t \) is viewed as a vector representation of the passage. We argue that for aggregation readers, roughly defined by (2), the hidden state \( h_t \) of the passage at position (or word) \( t \) can be viewed as a vector concatenation \( h_t = [e(\Phi_t), e'(c_t)] \) where \( \Phi_t \) is a property (or statement or predicate) being stated of a particular constant symbol \( c_t \). A logician might write this as \( h_t = \Phi_t[c_t] \). Furthermore, the question can be interpreted as having the form \( \Psi[x] \) where the problem is to find a constant symbol \( c \) such that the passage implies \( \Psi[c] \). Assuming \( h_t = [e(\Phi_t), e'(c_t)] \), \( q = [e(\Psi), 0] \) and \( e(c) = [0, e'(c)] \), we can rewrite (1) as \[ \argmax_c \sum_t < e(\Phi_t), e(\Psi) > < e'(c_t), e'(c) > . \tag{3} \] The first inner product in (3) is interpreted as measuring the extent to which \( \Phi_t[x] \) implies \( \Psi[x] \) for any \( x \). The second inner product is interpreted as restricting \( t \) to positions talking about the constant symbol \( c \). Note that the posited decomposition of \( h_t \) is not explicit in (2) but instead must emerge during training. We present empirical evidence that this structure does emerge. The empirical evidence is somewhat tricky, as the direct-sum structure that divides \( h_t \) into its two parts need not be axis-aligned and therefore need not literally correspond to vector concatenation. We also consider a second class of neural readers that we call explicit reference readers. Explicit reference readers avoid (2) and instead use \[ \argmax_c \sum_{t \in R(c)} \alpha_t \tag{4} \] where \( R(c) \) is the subset of the positions where the constant symbol (entity identifier) \( c \) occurs. Note that if we identify \( \alpha_t \) with \( < e(\Phi_t), e(\Psi) > \) and assume that \( < e'(c), e'(c_t) > \) is either 0 or 1 depending on whether \( c = c_t \), then (3) and (4) agree. In explicit reference readers the hidden state \( h_t \) need not carry a pointer to \( c_t \), as the restriction on \( t \) is independent of learned representations. Explicit reference readers include the Attention-Sum Reader (Kadlec et al. (2016)), the Gated-Attention Reader (Dhingra et al. (2016)), the Attention-over-Attention Reader (Cui et al. (2016)) and others (a list can be found in Section 6). So far we have only considered anonymized datasets that require the handling of semantics-free constant symbols.
However, even for non-anonymized datasets such as Who-did-What, it is helpful to add features which indicate which positions in the passage refer to which candidate answers. This indicates, not surprisingly, that reference is important in question answering. The fact that explicit reference features are needed in aggregation readers on non-anonymized data indicates that reference is not being solved by the aggregation readers. However, as reference seems to be important for cloze-style question answering, these problems may ultimately provide training data from which reference resolution can be learned. Sections 2 and 3 review various existing datasets and models respectively. Section 4 presents the logical structure interpretation of aggregation readers in more detail and the empirical evidence supporting it. Section 5 proposes new models that enforce the direct-sum structure of the hidden state vectors. It is shown that these new models perform well on the Who-did-What dataset provided that reference annotations are added as input features. Section 5 also describes additional linguistic features that can be added to the input embeddings and shows that these improve the performance of existing models, resulting in the best single-model performance to date on the Who-did-What dataset. 2 A Brief Survey of Datasets Before presenting various models for machine comprehension, we give a general formulation of the machine comprehension task. We take an instance of the task to be a four-tuple \((q, p, a, \mathcal{A})\), where \(q\) is a question given as a sequence of words containing a special token for a "blank" to be filled in, \(p\) is a document consisting of a sequence of words, \(\mathcal{A}\) is a set of possible answers and \(a \in \mathcal{A}\) is the ground truth answer. All words are drawn from a vocabulary \(\mathcal{V}\). We assume that all possible answers are words from the vocabulary, that is \(\mathcal{A} \subseteq \mathcal{V}\), and that the ground truth answer appears in the document, that is \(a \in p\). The problem can be described as that of selecting the answer \(a \in \mathcal{A}\) that answers question \(q\) based on information from \(p\). We now briefly summarize important features of the related datasets in reading comprehension. CNN & DailyMail: Hermann et al. (2015) constructed these datasets from a large number of news articles from the CNN and Daily Mail news websites. The main article is used as the context, while the cloze-style question is formed from one short highlight sentence appearing in conjunction with the published article. To avoid the model using external world knowledge when answering the question, the named entities in the entire dataset were replaced by anonymous entity IDs which were then further shuffled for each example. This forces models to rely on the context document to answer each question. In this anonymized corpus the entity identifiers are taken to be a part of the vocabulary, and the answer set \(\mathcal{A}\) consists of the entity identifiers occurring in the passage. Who-did-What (WDW): The Who-did-What dataset (Onishi et al., 2016) contains 127,000 multiple-choice cloze questions constructed from the LDC English Gigaword newswire corpus (Graff & Cieri, 2003). In contrast with CNN and Daily Mail, it avoids using article summaries for question formation.
Instead, each problem is formed from two independent articles: one is given as the passage to be read, and a different article on the same entities and events is used to form the question. Further, Who-did-What avoids anonymization, as each choice is a person named entity. In this dataset the answer set \(\mathcal{A}\) consists of the person named entities occurring in the passage. Finally, the problems have been filtered to remove a fraction that are easily solved by simple baselines. It has two training sets. The larger training set ("relaxed") is created using less baseline filtering, while the smaller training set ("strict") uses the same filtering as the validation and test sets. Children's Book Test (CBT): Hill et al. (2016) developed the CBT dataset in a slightly different fashion from the CNN/DailyMail datasets. They take any sequence of 21 consecutive sentences from a children's book: the first 20 sentences are used as the passage, and the goal is to infer a missing word in the 21st sentence. The task complexity varies with the type of the omitted word (verb, preposition, named entity, or common noun). According to the original study on this dataset (Hill et al., 2016), n-gram and recurrent neural network language models are sufficient for predicting verbs or prepositions. However, for named entities and common nouns, current solvers are still far from human performance. Other Related Datasets. It is also worth mentioning several related datasets. The MCTest dataset (Richardson et al., 2013) consists of children's stories and questions written by crowdsourced workers. The dataset only contains 660 documents and is too small to train deep models. The bAbI dataset (Weston et al., 2016) is constructed automatically using synthetic text generation and can be perfectly answered by hand-written algorithms (Lee et al., 2016). The SQuAD dataset (Rajpurkar et al., 2016) consists of passage-question pairs where the passage is a Wikipedia article and the questions are written by crowdsourced workers. Although crowdsourcing is involved, the dataset contains over 200,000 problems. However, the answer is often a word sequence, which is difficult to handle with the reader models considered here. The LAMBADA dataset (Paperno et al., 2016) is a word prediction dataset which requires a broad discourse context, and the correct answer might not be in the context. Nonetheless, when the correct answer is in the context, neural readers can be applied effectively (Chu et al., 2016). 3 AGGREGATION READERS AND EXPLICIT REFERENCE READERS Here we classify readers into aggregation readers and explicit reference readers. Aggregation readers appeared first in the literature and include Memory Networks (Weston et al. (2015); Sukhbaatar et al. (2015)), the Attentive Reader (Hermann et al., 2015), and the Stanford Reader (Chen et al., 2016). Aggregation readers are defined by equations (8) and (10) below. Explicit reference readers include the Attention-Sum Reader (Kadlec et al., 2016), the Gated-Attention Reader (Dhingra et al., 2016), and the Attention-over-Attention Reader (Cui et al., 2016). Explicit reference readers are defined by equation (14) below. We first present the Stanford Reader as a paradigmatic aggregation reader and the Attention-Sum Reader as a paradigmatic explicit reference reader. 3.1 AGGREGATION READERS Stanford Reader. The Stanford Reader (Chen et al., 2016) computes a bi-directional LSTM representation of both the passage and the question.
\[ h = \text{biLSTM}(e(p)) \tag{5} \] \[ q = [\text{fLSTM}(e(q))_{|q|}, \text{bLSTM}(e(q))_1] \tag{6} \] In equations (5) and (6) we have that \( e(p) \) is the sequence of word embeddings \( e(w_i) \) for \( w_i \in p \), and similarly for \( e(q) \). The expression biLSTM(s) denotes the sequence of hidden state vectors resulting from running a bi-directional LSTM on the vector sequence s. We write biLSTM(s)_i for the ith vector in this sequence. Similarly fLSTM(s) and bLSTM(s) denote the sequences of vectors resulting from running a forward LSTM and a backward LSTM respectively, and [·, ·] denotes vector concatenation. The Stanford Reader, and various other readers, then compute a bilinear attention over the passage which is used to construct a single weighted vector representation of the passage. \[ \alpha_t = \text{softmax}_t \ h_t^T W_\alpha \ q \tag{7} \] \[ o = \sum_t \alpha_t h_t \tag{8} \] Finally, they compute a probability distribution \( P(a|p, q, \mathcal{A}) \) over the answers. \[ P(a|p, q, \mathcal{A}) = \underset{a \in \mathcal{A}}{\text{softmax}} \ e_o(a)^T o \tag{9} \] \[ \hat{a} = \underset{a \in \mathcal{A}}{\text{argmax}} \ e_o(a)^T o \tag{10} \] Here \( e_o(a) \) is an "output embedding" of the answer a. On the CNN dataset the Stanford Reader trains an output embedding for each of the roughly 500 entity identifiers used in the dataset. In cases where the answer might be any word in \( \mathcal{V} \), an output embedding must be trained for the entire vocabulary. The reader is trained with the log-loss \( \ln 1/P(a|p, q, \mathcal{A}) \) where a is the correct answer. At test time the reader is scored on the percentage of problems where \( \hat{a} = a \). Memory Networks. Memory Networks (Weston et al. (2015); Sukhbaatar et al. (2015)) use (8) and (10) but have more elaborate methods of constructing "memory vectors" \( h_t \) that do not involve LSTMs. They replace (9) with \[ P(w|p, q, \mathcal{A}) = P(w|p, q) = \underset{w \in \mathcal{V}}{\text{softmax}} \ e_o(w)^T o. \tag{11} \] It should be noted that (11) trains output vectors over the whole vocabulary rather than just those items occurring in the choice set \( \mathcal{A} \). This is empirically significant in non-anonymized datasets such as CBT and Who-did-What where choices at test time may never have occurred as choices in the training data. Attentive Reader. The Stanford Reader was derived from the Attentive Reader (Hermann et al., 2015). The Attentive Reader uses \( \alpha_t = \text{softmax}_t \ \text{MLP}([h_t, q]) \) instead of (7). Here MLP(x) is the output of a multi-layer perceptron (MLP) given input x. Also, the answer distribution in the Attentive Reader is defined over the full vocabulary rather than just the candidate answer set \( \mathcal{A} \). \[ P(w|p, q, \mathcal{A}) = P(w|p, q) = \underset{w \in \mathcal{V}}{\text{softmax}} \ e_o(w)^T \text{MLP}([o, q]) \tag{12} \] Equation (12) is similar to (11) in that it leads to the training of output vectors for the full vocabulary rather than just those items appearing in choice sets in the training data. As in Memory Networks, this leads to improved performance on non-anonymized datasets. 3.2 Explicit Reference Readers Attention-Sum Reader. In the Attention-Sum Reader (Kadlec et al., 2016), h and q are computed with equations (5) and (6) as in the Stanford Reader, but using GRUs rather than LSTMs.
The attention \( \alpha_t \) is computed similarly to (7) but using a simple inner product \( \alpha_t = \text{softmax}_t\ h_t^\top q \) rather than a trained bilinear form. Most significantly, however, equations (9) and (10) are replaced by the following, where \( t \in R(a,p) \) indicates that a reference to candidate answer a occurs at position t in p. \[ P(a|p,q,\mathcal{A}) = \sum_{t \in R(a,p)} \alpha_t \tag{13} \] \[ \hat{a} = \underset{a}{\operatorname{argmax}} \sum_{t \in R(a,p)} \alpha_t \tag{14} \] Here we think of \( R(a,p) \) as the set of references to a in the passage p. It is important to note that (13) is an equality and that \( P(a|p,q,\mathcal{A}) \) is not normalized to the members of \( R(a,p) \). When training with the log-loss objective, this drives the attention \( \alpha_t \) to be normalized, i.e., to have support only on the positions t with \( t \in R(a,p) \) for some a. See the heat maps in the appendix. Gated-Attention Reader. The Gated-Attention Reader (Dhingra et al., 2016) involves a K-layer biGRU-attention architecture defined by the following equations. \[ q^\ell = [\text{fGRU}(e(q))_{|q|}, \text{bGRU}(e(q))_1] \quad 1 \leq \ell \leq K \\ h^1 = \text{biGRU}(e(p)) \\ h^\ell = \text{biGRU}(h^{\ell-1} \odot q^{\ell-1}) \quad 2 \leq \ell \leq K \] Here the question embeddings \( q^\ell \) for different values of \( \ell \) are computed with different GRU model parameters, and \( h \odot q \) abbreviates the sequence \( h_1 \odot q, h_2 \odot q, \ldots, h_{|p|} \odot q \). Note that for \( K = 1 \) we have only \( q^1 \) and \( h^1 \), as in the Attention-Sum Reader. An attention is then computed over the final layer \( h^K \) with \( \alpha_t = \text{softmax}_t\ (h^K_t)^\top q^K \), as in the Attention-Sum Reader. This reader uses (13) and (14). Attention-over-Attention Reader. The Attention-over-Attention Reader (Cui et al., 2016) uses a more elaborate method to compute the attention \( \alpha_t \). We will use t to range over positions in the passage and j to range over positions in the question. The model is then defined by the following equations. \[ h = \text{biGRU}(e(p)) \qquad q = \text{biGRU}(e(q)) \] \[ \alpha_{t,j} = \text{softmax}_t\ h_t^\top q_j \qquad \beta_{t,j} = \text{softmax}_j\ h_t^\top q_j \] \[ \beta_j = \frac{1}{|p|} \sum_t \beta_{t,j} \qquad \alpha_t = \sum_j \beta_j \alpha_{t,j} \] Note that the final equation defining \( \alpha_t \) can be interpreted as applying the attention \( \beta_j \) to the attentions \( \alpha_{t,j} \). This reader uses (13) and (14). 4 Emergent Predication Structure As discussed in the introduction, the entity identifiers such as "entity37" introduced in the CNN/DailyMail dataset cannot be assigned any semantics other than their identity. We should think of them as pointers or semantics-free constant symbols. Despite this undermining of semantics, aggregation readers using (8) and (10) are able to perform well. Here we posit that this is due to an emergent predication structure in the hidden vectors \( h_t \). Intuitively we want to think of the hidden state vector \( h_t \) as a concatenation \( [e(\Phi_t), e'_o(a_t)] \) where \( \Phi_t \) carries semantic information true of \( a_t \). We think of \( h_t \) as representing \( \Phi_t[a_t] \) for a semantic statement \( \Phi_t[x] \) asserted of the constant symbol \( a_t \). We also think of the vector representation \( q \) of the question as having the form \([e(\Psi), 0]\) and the vector embedding \( e_o(a) \) as having the form \([0, e'_o(a)]\).
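Before turning to the empirical evidence, a concrete rendering of the explicit-reference selection rule (13)–(14) may be useful; this is a minimal NumPy sketch under the assumption that the contextual embeddings, question vector, and reference positions are already computed.

```python
import numpy as np

def explicit_reference_answer(h, q, refs):
    """Sketch of equations (13)-(14). h: (T, d) contextual embeddings,
    q: (d,) question vector, refs: dict mapping each candidate answer a
    to the list of positions R(a, p) where it is mentioned."""
    scores = h @ q
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                       # attention over positions
    prob = {a: alpha[np.asarray(pos, dtype=int)].sum()   # eq. (13)
            for a, pos in refs.items()}
    return max(prob, key=prob.get)             # eq. (14)
```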
4 Emergent Predication Structure

As discussed in the introduction, the entity identifiers such as “entity37” introduced in the CNN/DailyMail dataset cannot be assigned any semantics other than their identity. We should think of them as pointers or semantics-free constant symbols. Despite this undermining of semantics, aggregation readers using (8) and (10) are able to perform well. Here we posit that this is due to an emergent predication structure in the hidden vectors \( h_t \). Intuitively we want to think of the hidden state vector \( h_t \) as a concatenation \( [e(\Phi_t), e'_o(a_t)] \) where \( \Phi_t \) carries semantic information true of \( a_t \). We think of \( h_t \) as representing \( \Phi_t[a_t] \) for a semantic statement \( \Phi_t[x] \) asserted of the constant symbol \( a_t \). We also think of the vector representation \( q \) of the question as having the form \([e(\Psi), 0]\) and the vector embedding \( e_o(a) \) as having the form \([0, e'_o(a)]\).

Unfortunately, the decomposition of \( h_t \) into this predication structure need not be axis aligned. Rather than posit an axis-aligned concatenation, we posit that the hidden vector space \( H \) is a possibly non-aligned direct sum
\[ H = S \oplus E \tag{15} \]
where \( S \) is a subspace of “statement vectors” and \( E \) is an orthogonal subspace of “entity pointers”. Each hidden state vector \( h \in H \) then has a unique decomposition as \( h = \Psi + e \) for \( \Psi \in S \) and \( e \in E \). This is equivalent to saying that the hidden vector space \( H \) is some rotation of a concatenation of the vector spaces \( S \) and \( E \). We now present empirical evidence for this decomposition structure.

We first note that the predication decomposition implies that \( e_o(a)^\top h_t \) equals \( e_o(a)^\top e_o(a_t) \). This suggests the following for some fixed positive constant \( c \).
\[ e_o(a)^\top h_t = \begin{cases} c & \text{if } t \in R(a,p) \\ 0 & \text{otherwise} \end{cases} \tag{16} \]
Assuming the predication structure we have \( c = \|e_o(a)\|^2 \). We note that if different entity constants had different norms then answers would be biased toward occurrences of the constant symbols with larger norms; since the constant symbols are interchangeable, they should all have the same norm. We note that (16) gives
\[
\operatorname*{argmax}_a\ e_o(a)^\top o
= \operatorname*{argmax}_a\ e_o(a)^\top \sum_t \alpha_t h_t
= \operatorname*{argmax}_a\ \sum_t \alpha_t\, e_o(a)^\top h_t
= \operatorname*{argmax}_a \sum_{t \in R(a,p)} \alpha_t
\]
and hence (10) and (14) agree — the aggregation readers and the explicit reference readers are using essentially the same answer selection criterion.

Empirical evidence for (16) is given in the first three rows of Table 1. The first row empirically measures the “constant” \( c \) in (16) by measuring \( e_o(a)^\top h_t \) for those cases where \( t \in R(a,p) \). The second row measures the “0” in (16) by measuring \( e_o(a)^\top h_t \) in those cases where \( t \notin R(a,p) \). Additional evidence for (16) is given in Figure 1, which shows that the output vectors \( e_o(a) \) for different entity identifiers \( a \) are nearly orthogonal. Orthogonality of the output vectors is required by (16) provided that each output vector \( e_o(a) \) is in the span of the hidden state vectors \( h_{t,p} \) for which \( t \in R(a,p) \). Intuitively, the mean of all vectors \( h_{t,p} \) with \( t \in R(a,p) \) should be approximately equal to \( e_o(a) \); empirically this holds only approximately.

Equation (16) would suggest that the vector embedding of the constant symbols should have dimension at least as large as the number of distinct constants. However, in practice it is sufficient that \( e_o(a)^\top e_o(a') \) is small for \( a \neq a' \). This allows the vector embeddings of the constants to have dimension much smaller than the number of constants. We have experimented with two-sparse constant symbol embeddings, where the number of embedding vectors in dimension \( d \) is \( 2d(d-1) \) (\( d \) choose 2 coordinate pairs times the four ways of setting the signs of the two non-zero coordinates). Although we do not report results here, these designed and untrained constant embeddings worked reasonably well.
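The two-sparse construction is easy to spell out. The following sketch (ours, not the paper's released code) enumerates the \( 2d(d-1) \) two-sparse sign vectors in dimension \( d \) and inspects their pairwise inner products; most pairs are exactly orthogonal, though pairs sharing both coordinates can reach \(-2\).

```python
import itertools
import numpy as np

def two_sparse_embeddings(d):
    """All vectors in R^d with exactly two nonzero coordinates, each +/-1:
    C(d,2) coordinate pairs times 4 sign patterns = 2d(d-1) vectors."""
    vecs = []
    for i, j in itertools.combinations(range(d), 2):
        for si, sj in itertools.product((1.0, -1.0), repeat=2):
            v = np.zeros(d)
            v[i], v[j] = si, sj
            vecs.append(v)
    return np.stack(vecs)

E = two_sparse_embeddings(8)
assert len(E) == 2 * 8 * 7        # 112 constant embeddings in dimension 8
G = E @ E.T                       # pairwise inner products; diagonal = ||e(a)||^2 = 2
np.fill_diagonal(G, 0)
# Off-diagonal values lie in {-2, -1, 0, 1}, and the vast majority are 0.
print(np.unique(G), np.mean(G == 0))
```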
Table 1: Statistics supporting (16) and (17). These statistics are computed for the Stanford Reader.

<table>
<tr> <th></th> <th colspan="3">CNN Dev</th> <th colspan="3">CNN Test</th> </tr>
<tr> <th></th> <th>samples</th> <th>mean</th> <th>variance</th> <th>samples</th> <th>mean</th> <th>variance</th> </tr>
<tr> <td>\( e_o(a)^\top h_t,\ t \in R(a,p) \)</td> <td>222,001</td> <td>10.66</td> <td>2.26</td> <td>164,746</td> <td>10.70</td> <td>2.45</td> </tr>
<tr> <td>\( e_o(a)^\top h_t,\ t \notin R(a,p) \)</td> <td>93,072,682</td> <td>-0.57</td> <td>1.59</td> <td>68,451,660</td> <td>-0.58</td> <td>1.65</td> </tr>
<tr> <td>\( e_o(a)^\top h_{t \pm 1},\ t \in R(a,p) \)</td> <td>443,878</td> <td>2.32</td> <td>1.79</td> <td>329,366</td> <td>2.25</td> <td>1.84</td> </tr>
<tr> <td>Cosine(\( q, h_t \)), \( \exists a:\ t \in R(a,p) \)</td> <td>222,001</td> <td>0.22</td> <td>0.11</td> <td>164,746</td> <td>0.22</td> <td>0.12</td> </tr>
<tr> <td>Cosine(\( q, e_o(a) \)), \( \forall a \)</td> <td>103,909</td> <td>-0.03</td> <td>0.04</td> <td>78,411</td> <td>-0.03</td> <td>0.04</td> </tr>
</table>

As further support for (16), the appendix gives heat maps of \( e_o(a)^\top h_t \) for different identifiers \( a \) and heat maps of \( \alpha_t \) for different readers. As another testable prediction, we note that the posited decomposition of the hidden state vectors implies
\[ q^\top (h_t + e_o(a)) = q^\top h_t. \tag{17} \]
This equation is equivalent to \( q^\top e_o(a) = 0 \). Experimentally, however, we cannot expect \( q^\top e_o(a) \) to be exactly zero, and (17) seems to provide a more experimentally meaningful test. Empirical evidence for (17) is given in the fourth and fifth rows of Table 1. The fourth row measures the cosine of the angle between the question vector \( q \) and the hidden state \( h_t \), averaged over passage positions \( t \) at which some entity identifier occurs. The fifth row measures the cosine of the angle between \( q \) and \( e_o(a) \), averaged over the entity identifiers \( a \).

Figure 1: Plot of \( e_o(a_i)^\top e_o(a_j) \) from the Stanford Reader trained on the CNN dataset. Off-diagonal values have mean 25.6 and variance 17.2 while diagonal values have mean 169 and variance 17.3.

A question asks for a value of \( x \) such that a statement \( \Psi[x] \) is implied by the passage. For a question \( \Psi \) we might even suggest the following vectorial interpretation of entailment.
\[ \Phi[x] \text{ implies } \Psi[x] \quad \text{iff} \quad \Phi^\top \Psi \geq \|\Psi\|_1. \]
This interpretation is exactly correct if some of the dimensions of the vector space correspond to predicates, \( \Psi \) is a 0-1 vector representing a conjunction of predicates, and \( \Phi \) is also 0-1 on these dimensions, indicating whether a predicate is implied by the context. Of course in practice one expects the dimension to be smaller than the number of possible predicates.
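Given a trained reader, the statistics in Table 1 can be reproduced with a few lines; the sketch below (variable names ours) computes them for a single passage.

```python
import numpy as np

def predication_stats(h, e_out, q, ref_mask):
    """Table 1-style checks of (16) and (17) for one passage.

    h:        (|p|, H)   passage hidden states from a trained reader.
    e_out:    (|A|, H)   output embeddings e_o(a) of the candidates.
    q:        (H,)       question vector.
    ref_mask: (|A|, |p|) binary, ref_mask[a, t] = 1 iff t in R(a, p).
    """
    mask = ref_mask.astype(bool)
    scores = e_out @ h.T                      # e_o(a)^T h_t for every (a, t)
    on_ref = scores[mask].mean()              # should approximate c in (16)
    off_ref = scores[~mask].mean()            # should approximate 0 in (16)
    # (17) predicts q is nearly orthogonal to the entity subspace,
    # so cosine(q, e_o(a)) should be near 0.
    cos = e_out @ q / (np.linalg.norm(e_out, axis=1) * np.linalg.norm(q))
    return on_ref, off_ref, cos.mean()
```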
5 Pointer Annotation Readers

It is of course important to note that anonymization provides reference information — anonymization assumes that one can determine coreference so as to replace coreferent phrases with the same entity identifier. Anonymization allows the reference set \( R(a, p) \) to be read directly off the passage. Still, an aggregation reader must learn to recover this explicit reference structure, and aggregation readers can have difficulty when anonymization is not done. The Stanford Reader achieves just better than 45% on the Who-did-What dataset, while the Attention-Sum Reader reaches nearly 60%. But if we anonymize the Who-did-What dataset and then re-train the Stanford Reader, the accuracy jumps to nearly 65%.

Anonymization has two effects. First, it greatly reduces the number of output embeddings \( e_o(a) \) to be learned — we need only learn output embeddings for the relatively small number of entity identifiers needed. Second, anonymization suppresses the semantics of the reference phrases and leaves only a semantics-free entity identifier. This suppression of semantics may facilitate the separation of the hidden state vector space \( H \) into a direct sum \( S \oplus E \) with \( q \in S \) and \( e_o(a) \in E \).

We can think of anonymization as providing additional linguistic input for the reader — it explicitly marks positions of candidate answers and establishes coreference. A natural question is whether this information can be provided without anonymization, by simply adding additional coreference features to the input. Here we evaluate two architectures inspired by this question. This evaluation is done on the Who-did-What dataset, which is not anonymized. In each architecture we add features to the input that mark the occurrences of candidate answers. These models are simpler than the Stanford Reader but perform comparably. This comparable performance, shown in Table 2, further supports our analysis of logical structure in aggregation readers.

Table 2: Accuracy on the WDW dataset. All results are based on a single model. Results for neural readers other than NSE are based on replications of those systems. All models were trained on the relaxed training set, which uniformly yields better performance than the restricted training set. The first group of models are explicit reference models and the second group are aggregation models. <sup>+</sup> indicates anonymization with better reference identifiers.

<table>
<tr> <th>Who did What</th> <th>Val</th> <th>Test</th> </tr>
<tr> <td>Attention-Sum Reader (Onishi et al., 2016)</td> <td>59.8</td> <td>58.8</td> </tr>
<tr> <td>Gated-Attention Reader (Onishi et al., 2016)</td> <td>60.3</td> <td>59.6</td> </tr>
<tr> <td>NSE (Munkhdalai & Yu, 2016)</td> <td>66.5</td> <td>66.2</td> </tr>
<tr> <td>Gated Attention + Linguistic Features<sup>+</sup></td> <td>72.2</td> <td><b>72.8</b></td> </tr>
<tr> <td>Stanford Reader</td> <td>46.1</td> <td>45.8</td> </tr>
<tr> <td>Attentive Reader with Anonymization</td> <td>55.7</td> <td>55.5</td> </tr>
<tr> <td>Stanford Reader with Anonymization</td> <td>64.8</td> <td>64.5</td> </tr>
<tr> <td>One-Hot Pointer Reader</td> <td>65.1</td> <td>64.4</td> </tr>
<tr> <td>One-Hot Pointer Reader + Linguistic Features<sup>+</sup></td> <td>69.3</td> <td>68.7</td> </tr>
<tr> <td>Stanford with Anonymization + Linguistic Features<sup>+</sup></td> <td>69.7</td> <td><b>69.2</b></td> </tr>
<tr> <td>Human Performance</td> <td>-</td> <td>84</td> </tr>
</table>

<b>One-Hot Pointer Annotation:</b> The Stanford Reader involves both input embeddings of words and output embeddings of entity identifiers. In the Who-did-What dataset each problem has at most five choices in the multiple-choice answer list. This means that we need only five entity identifiers, and we can use a five-dimensional one-hot vector representation for answer identifiers.
If an answer choice occurs at position \( t \) in the passage, let \( i_t \) be the index of that choice in the choice list. If no choice occurs at \( t \), take \( i_t \) to be zero. Take \( e'(i) \) to be the zero vector if \( i = 0 \) and otherwise to be the one-hot vector for \( i \). We define pointer annotation to be the result of adding \( e'(i_t) \) as additional features to the input embedding.
\[ e(w_t) = [e(w_t), e'(i_t)] \tag{18} \]
We then define a one-hot pointer reader by designating five dimensions of the hidden state as indicators of the answer and taking the probability of choice \( i \) to be
\[ P(i|p, q) = \underset{i}{\operatorname{softmax}}\ o_i \tag{19} \]
where \( o \) is computed by (8).

<b>General Pointer Annotation:</b> In the CNN dataset there are roughly 500 entity identifiers, and a one-hot representation is not desirable. Instead we can let \( e'(i) \) be a fixed set of “pointer vectors” — vectors distributed widely on the unit sphere so that for \( i \neq j \) we have that \( e'(i)^\top e'(j) \) is small. We again use (18) but replace (19) with
\[ P(i|p, q) = \underset{i}{\operatorname{softmax}}\ [0, e'(i)]^\top o \tag{20} \]
In the general pointer reader the pointer embeddings \( e'(i) \) are held fixed and not trained.

<b>Linguistic Features.</b> Each model can be modified to include additional input features for each token in the question and passage. More specifically, we can add the following features to the word embeddings (a sketch of the combined input construction follows the list).

<ul>
<li>Binary feature: whether the current token occurs in the question.</li>
<li>Real-valued feature: the frequency of the current token in the passage.</li>
<li>Real-valued feature: the position of the token’s first occurrence in the passage as a percentage of the passage length.</li>
<li>Binary feature: whether the text surrounding the token matches the text surrounding the placeholder in the question. We only match one word to the left and one word to the right.</li>
<li>One-hot vector: part-of-speech (POS) tag. We use this feature only on the CBT dataset.</li>
<li>One-hot vector: named entity recognition (NER) tag. We use this feature only on the CBT dataset.</li>
</ul>
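To make the input construction concrete, here is a small sketch of equation (18) together with the first four features above, for one token. The helper name and feature ordering are our own, and the POS and NER one-hot vectors used on CBT would be concatenated in the same way.

```python
import numpy as np

def annotated_input(word_emb, choice_index, in_question, freq,
                    first_pos_pct, context_match, n_choices=5):
    """Sketch of eq (18) plus the linguistic features for one token.

    word_emb:      (d,) input embedding e(w_t).
    choice_index:  i_t in {0, ..., n_choices}; 0 means no candidate occurs at t.
    in_question:   binary, the token occurs in the question.
    freq:          frequency of the token in the passage.
    first_pos_pct: first occurrence position / passage length.
    context_match: binary, one word left and right match the placeholder context.
    """
    pointer = np.zeros(n_choices)             # e'(i_t): one-hot over the choices
    if choice_index > 0:
        pointer[choice_index - 1] = 1.0       # zero vector when no choice occurs
    feats = np.array([float(in_question), freq, first_pos_pct,
                      float(context_match)])
    return np.concatenate([word_emb, pointer, feats])   # augmented e(w_t)
```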
6 A Survey of Recent Results

The performance of various recent readers on CNN, DailyMail and CBTest is summarized in Table 3. For purposes of comparison we only present results for single models. Model ensembles generally perform better than single models but require more computation to train, making comparisons more difficult. More experimental details can be found in the appendix.

Table 3: Accuracy on CNN, DailyMail, CBTest NE and CBTest CN. All results are based on a single model. Results other than those involving pointer or linguistic feature annotations are taken from the original publications. Readers in the first group are explicit reference readers. Readers in the second group are aggregation readers. The final reader defies this classification.

<table>
<tr> <th rowspan="2"> </th> <th colspan="2">CNN</th> <th colspan="2">DailyMail</th> <th colspan="2">CBT NE</th> <th colspan="2">CBT CN</th> </tr>
<tr> <th>valid</th><th>test</th> <th>valid</th><th>test</th> <th>valid</th><th>test</th> <th>valid</th><th>test</th> </tr>
<tr> <td>Human (context + query)</td> <td>-</td><td>-</td> <td>-</td><td>-</td> <td>-</td><td>-</td> <td>81.6</td><td>-</td> </tr>
<tr> <td>Attention Sum (Kadlec et al., 2016)</td> <td>68.6</td><td>69.5</td> <td>75.0</td><td>73.9</td> <td>73.8</td><td>68.6</td> <td>68.8</td><td>63.4</td> </tr>
<tr> <td>Gated Attention (Dhingra et al., 2016)</td> <td>73.0</td><td>73.8</td> <td>76.7</td><td>75.7</td> <td>74.9</td><td>69.0</td> <td>69.0</td><td>63.9</td> </tr>
<tr> <td>AoA Reader (Cui et al., 2016)</td> <td>73.1</td><td>74.4</td> <td>-</td><td>-</td> <td>77.8</td><td>72.0</td> <td>72.2</td><td>69.4</td> </tr>
<tr> <td>NSE (Munkhdalai & Yu, 2016)</td> <td>-</td><td>-</td> <td>-</td><td>-</td> <td>78.2</td><td><b>73.2</b></td> <td>74.2</td><td><b>71.4</b></td> </tr>
<tr> <td>DER Network (Kobayashi et al., 2016)</td> <td>71.3</td><td>72.9</td> <td>-</td><td>-</td> <td>-</td><td>-</td> <td>-</td><td>-</td> </tr>
<tr> <td>Epi Reader (Trischler et al., 2016)</td> <td>73.4</td><td>74.0</td> <td>-</td><td>-</td> <td>75.3</td><td>69.7</td> <td>71.5</td><td>67.4</td> </tr>
<tr> <td>Iterative Reader (Sordoni et al., 2016)</td> <td>72.6</td><td>73.3</td> <td>-</td><td>-</td> <td>75.2</td><td>68.6</td> <td>72.1</td><td>69.2</td> </tr>
<tr> <td>QANN (Weissenborn, 2016)</td> <td>-</td><td>73.6</td> <td>-</td><td><b>77.2</b></td> <td>-</td><td>70.6</td> <td>-</td><td>-</td> </tr>
<tr> <td>Gated Attention with linguistic features</td> <td>74.7</td><td><b>75.4</b></td> <td>78.6</td><td><b>78.3</b></td> <td>75.7</td><td><b>72.2</b></td> <td>73.3</td><td><b>70.1</b></td> </tr>
<tr> <td>MemNets (Sukhbaatar et al., 2015)</td> <td>63.4</td><td>66.8</td> <td>-</td><td>-</td> <td>70.4</td><td>66.6</td> <td>64.2</td><td>63.0</td> </tr>
<tr> <td>Attentive Reader (Hermann et al., 2015)</td> <td>61.6</td><td>63.0</td> <td>70.5</td><td>69.0</td> <td>-</td><td>-</td> <td>-</td><td>-</td> </tr>
<tr> <td>Stanford Reader (Chen et al., 2016)</td> <td>72.5</td><td>72.7</td> <td>76.9</td><td>76.0</td> <td>-</td><td>-</td> <td>-</td><td>-</td> </tr>
<tr> <td>Stanford Reader with linguistic features</td> <td>75.7</td><td><b>76.0</b></td> <td>-</td><td>-</td> <td>-</td><td>-</td> <td>-</td><td>-</td> </tr>
<tr> <td>ReasoNet (Shen et al., 2016)</td> <td>72.9</td><td>74.7</td> <td>77.6</td><td>76.6</td> <td>-</td><td>-</td> <td>-</td><td>-</td> </tr>
</table>

All of the high-performing approaches in Table 3 were proposed very recently; bold font marks the best and second-best accuracies per column. Note that the Stanford Reader result reported here is the one obtained without relabeling, since the relabeling procedure does not follow the protocol used in Hermann et al. (2015).

7 Discussion

Explicit reference architectures rely on reference resolution — a specification of which phrases in the given passage refer to candidate answers. Our experiments indicate that all existing readers benefit greatly from this externally provided information.
Aggregation readers seem to demonstrate a stronger learning ability in that they essentially learn to mimic explicit reference readers by identifying reference annotation and using it appropriately. This is done most clearly in the pointer reader architectures. Furthermore, we have argued for, and given experimental evidence for, an interpretation of aggregation readers as learning emergent logical structure — a factoring of neural representations into a direct sum of a statement (predicate) representation and an entity (argument) representation.

At a very high level our analysis and experiments support a central role for reference resolution in reading comprehension. Automating reference resolution in neural models, and demonstrating its value on appropriate datasets, would seem to be an important area for future research.

Of course there is great interest in “learning representations”. The current state of the art in reading comprehension is such that systems still benefit from externally provided linguistic features, including externally annotated reference resolution. It would seem desirable to develop fully automated neural readers that perform as well as readers using externally provided annotations. It is of course important to avoid straw-man baselines when making any such claim.

We are hesitant to make more detailed comments on the differences between the architectural details of the readers discussed in this paper. The differences in scores between the leading readers are comparable to the differences that can be achieved by aggressive search over meta-parameters, or to the statistical fluctuations in the quality of models learned by noisy statistical training procedures. More careful experiments over a longer period of time are needed. More dramatic improvements in performance would of course provide better support for particular innovations.

ACKNOWLEDGMENTS

We thank NVIDIA Corporation for the donation of the GPUs used for this work.

REFERENCES

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the ACL, 2016.

Zewei Chu, Hai Wang, Kevin Gimpel, and David McAllester. Broad context language modeling as reading comprehension. arXiv, 2016.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv, 2016.

David Graff and Christopher Cieri. English Gigaword LDC2003T05. Philadelphia: Linguistic Data Consortium, 2003.

Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the ACL, 2016.

Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv, 2016.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), 2015.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of the 4th International Conference on Learning Representations, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation.
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 14:1532–1543, 2014.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 1:908–918, 2016.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, 2015.

Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representation with max-pooling improves machine reading. In Proceedings of the North American Chapter of the Association for Computational Linguistics and Human Language Technologies (NAACL-HLT), 2016.

Moontae Lee, Xiaodong He, Scott Wen-tau Yih, Jianfeng Gao, Li Deng, and Paul Smolensky. Reasoning in vector space: An exploratory study of question answering. In Proceedings of the 4th International Conference on Learning Representations, 2016.

Tsendsuren Munkhdalai and Hong Yu. Reasoning with memory augmented neural networks for language comprehension. arXiv, 2016.

Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Who did What: A large-scale person-centered cloze dataset. In Proceedings of EMNLP, 2016.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the International Conference on Empirical Methods in Natural Language Processing, 2016.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of ICML, pp. 1310–1318, 2013.

Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 3:4–10, 2013.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv, 2013.

Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. ReasoNet: Learning to stop reading in machine comprehension. arXiv, 2016.

Alessandro Sordoni, Philip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv, 2016.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440–2448, 2015.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. arXiv, 2016.

Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and Fuel: Frameworks for deep learning. arXiv, 2015.

Dirk Weissenborn. Separating answers from queries for neural reading comprehension. arXiv, 2016.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the 3rd International Conference on Learning Representations, 2015.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. In Proceedings of the 4th International Conference on Learning Representations, 2016.

Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: New features and speed improvements. NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2012.
8 APPENDIX

8.1 EXPERIMENT DETAILS

We implemented the neural readers using Theano (Bastien et al., 2012) and Blocks (van Merriënboer et al., 2015) and trained them on a single Nvidia Tesla K40 GPU. Negative log-likelihood was used as the training criterion. We used stochastic gradient descent (SGD) with the ADAM update rule (Kingma & Ba, 2015) and set the learning rate to 0.0005. For the Stanford Reader and the One-Hot Pointer Reader, we simply followed the Stanford Reader's settings and did not tune them per dataset. For the Gated-Attention Reader, the lookup table was randomly initialized from the uniform distribution on the interval [-0.2, 0.2] on the CBT dataset, but on CNN & DailyMail the lookup table was initialized with GloVe vectors (Pennington et al., 2014) trained on the train & validation set (we found that pre-trained word vectors do not improve the accuracy but do accelerate training). On the WDW dataset, the lookup table was initialized with pre-trained GloVe vectors. It should be noted that initializing the lookup table with the pre-trained GloVe vectors from http://nlp.stanford.edu/data/glove.6B.zip slightly boosts the accuracy compared with using GloVe vectors trained on the train & validation set.

Input-to-hidden-state weights were initialized with random orthogonal matrices (Saxe et al., 2013) and biases were initialized to zero. Hidden-to-hidden-state weights were initialized with identity matrices to help the model remember longer-range information. To compute the attention weight we use \( \alpha_t = \operatorname{softmax}_t\, h_t^\top W_\alpha q \) and initialize \( W_\alpha \) from a random uniform distribution. We also used gradient clipping (Pascanu et al., 2013) with a threshold of 10 and batches of size 32 (a sketch of this optimization setup appears below).

During training we randomly shuffled all examples within each epoch. To speed up training, we always pre-fetched 10 batches worth of examples and sorted them according to document length, as done by Kadlec et al. (2016). When training on the CNN, DailyMail and WDW (anonymized) datasets, we randomly reshuffled the entity identifiers to match the procedure proposed in Hermann et al. (2015). During training we evaluated the accuracy after each epoch and stopped training when the accuracy on the validation set started decreasing.

We tried limiting the vocabulary to the most frequent tokens but did not observe any performance improvement compared with using all distinct tokens as the vocabulary. Since part of our experiments examine word embedding assignment, we ultimately used all distinct tokens as the vocabulary. To find the optimal embedding and hidden-state dimensions, we tried several combinations; the optimal values and the corresponding training statistics for the Gated-Attention Reader are summarized in Table 4.

When anonymizing the Who-did-What dataset, we can either use simple string matching to replace the answer in the question and story with an entity identifier, or use a Named Entity Recognition (NER) tool (http://nlp.stanford.edu/software/CRF-NER.shtml) to detect named entities and then replace the answer named entities in the question and story with an entity identifier; we found the latter generally brings about a 2% improvement over simple string matching. More experimental details can be found in the code.
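For concreteness, the optimization settings just described can be rendered as follows. This is a PyTorch approximation of the Theano/Blocks setup (reading "identity initialization of hidden-to-hidden weights" as one identity block per gate), not the released code.

```python
import torch
import torch.nn as nn

def configure_training(model, lr=0.0005, clip_threshold=10.0):
    """Adam at lr 0.0005, orthogonal input-to-hidden init, identity
    hidden-to-hidden init, zero biases, gradient clipping at 10."""
    for name, p in model.named_parameters():
        if 'weight_ih' in name:               # input-to-hidden: random orthogonal
            nn.init.orthogonal_(p)
        elif 'weight_hh' in name:             # hidden-to-hidden: identity per gate block
            with torch.no_grad():
                p.zero_()
                n = p.size(1)
                for k in range(0, p.size(0), n):
                    p[k:k + n].copy_(torch.eye(n))
        elif 'bias' in name:
            nn.init.zeros_(p)
    return torch.optim.Adam(model.parameters(), lr=lr)

# Per update, after loss.backward() and before optimizer.step():
#   torch.nn.utils.clip_grad_norm_(model.parameters(), clip_threshold)
```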
<table>
<tr> <th>Dataset</th> <th>Embedding</th> <th>Hidden State</th> <th>Time Per Epoch</th> <th>Trained Epochs</th> <th>K</th> </tr>
<tr> <td>CNN</td> <td>128</td> <td>256</td> <td>18 hours</td> <td>5</td> <td>3</td> </tr>
<tr> <td>DailyMail</td> <td>128</td> <td>256</td> <td>2 days</td> <td>5</td> <td>3</td> </tr>
<tr> <td>WDW Relaxed</td> <td>200</td> <td>384</td> <td>2.5 hours</td> <td>8</td> <td>1</td> </tr>
<tr> <td>CBT NE</td> <td>384</td> <td>384</td> <td>1 hour</td> <td>8</td> <td>1</td> </tr>
<tr> <td>CBT CN</td> <td>384</td> <td>256</td> <td>1 hour</td> <td>7</td> <td>1</td> </tr>
</table>

Table 4: Training details on different datasets.

8.2 HEAT MAP OF STANFORD READER FOR DIFFERENT ANSWER CANDIDATES

We randomly choose one article from the CNN dataset and show softmax(\( e_o(a)^\top h_t \)) for \( t \in [0, |p|] \) for each answer candidate \( a \) in Figures 2 through 6. Red indicates larger probability, orange indicates smaller probability, and the remaining positions carry probability low enough to be ignored. From these figures we can see that our assumption that \( e_o(a) \) is used to pick out its occurrences is reasonable. The passage and query, shared by Figures 2 through 6, are reproduced once below.

@entity0 ( @entity1 ) six survivors of the @entity0 kosher supermarket siege in january are suing a @entity5 media outlet for what they call dangerous live broadcasting during the hostage - taking . according to @entity0 prosecutor 's spokeswoman @entity10 , the lawsuit was filed march 27 and a preliminary investigation was opened by the prosecutor 's office wednesday . the media outlet , @entity1 affiliate @entity16 , is accused of endangering the lives of the hostages , who were hiding in a cold room during the attack , by broadcasting their location live during the siege . @entity23 in a statement friday said one of its journalists " mentioned only once the presence of a woman hidden inside the @entity27 , on the basis of police sources on the ground . " " immediately , the chief editor felt that this information should not be released . it therefore has subsequently never been repeated on air or posted on - screen . @entity16 regrets that the mention of this information could cause concern to the hostages , as well as their relatives , that their lives were in danger , " the statement said . gunman @entity47 , also suspected in the slaying of a police officer , stormed the @entity27 @entity51 supermarket on january 9 , killing four people and taking others hostage . he was killed in the police operation to end the siege . a 24 - year - old supermarket employee , @entity57 - born @entity56 , was hailed as a hero afterward when it emerged that he had risked his life to hide 15 customers from @entity47 in the cold room . the hostage - taking was the culmination of three days of terror in @entity0 that began with the january 7 shooting of 12 people at the offices of @entity5 satirical magazine @entity69 . the two brothers blamed for that attack , @entity72 and @entity73 , were killed on january 9 after a violent standoff at an industrial site . the terror attacks claimed the lives of 17 people and put @entity5 on a heightened state of alert . @entity1 's @entity80 reported from @entity0 , and @entity81 wrote from @entity82 . @entity1 's @entity83 contributed to this report .

query: they hid in a cold room during the attack in @entity0 by gunman @placeholder

Figure 2: Heat map of softmax(\( e_o(a)^\top h_t \)) when \( a = \) entity0.
Figure 3: Heat map of softmax(\( e_o(a)^\top h_t \)) when \( a = \) entity1 (same passage and query as Figure 2).
Figure 4: Heat map of softmax(\( e_o(a)^\top h_t \)) when \( a = \) entity16 (same passage and query as Figure 2).
Figure 5: Heat map of softmax(\( e_o(a)^\top h_t \)) when \( a = \) entity27 (same passage and query as Figure 2).

Figure 6: Heat map of softmax(\( e_o(a)^\top h_t \)) when \( a = \) entity47 (same passage and query as Figure 2).

8.3 HEAT MAP OF DIFFERENT READERS

We randomly choose one article from the CNN dataset and show the attention map \( \alpha_t = \operatorname{softmax}_t (q^\top W_\alpha h_t) \) for different readers (in the Attention-Sum and Gated-Attention Readers, \( W_\alpha \) is the identity matrix). From Figures 7, 8 and 9 we can see that the different readers essentially put their weight on the entity identifiers. The passage and query, shared by Figures 7 through 9, are reproduced once below.

( @entity3 ) suspected @entity2 militants this week attacked civilians inside @entity5 for the first time in a month , killing at least 16 villagers , a military spokesman told @entity3 saturday . six attackers were killed by @entity5 forces , said maj. @entity10 , an operations officer with a special military unit set up to fight @entity2 . the attackers came thursday " in the hundreds ... torched @entity14 village in the @entity15 , " he said . @entity14 is a village that borders @entity17 and has been identified as a recruiting ground for @entity2 , regional gov. @entity19 said the insurgents have been attacking border villages in @entity5 in search of supplies . @entity5 troops retook cattle that was stolen by the attackers in @entity14 , @entity10 said . the last attack in @entity5 by the @entity29 - based militants was march 10 , when the assailants struck the locality of @entity32 in a failed attempt to overrun a military base .
@entity2 , whose name translates as " @entity44 education is sin , " has been waging a years - long campaign of terror aimed at instituting its extreme version of @entity42 law in @entity29 . @entity2 's tactics have intensified in recent years , from battling @entity29 government soldiers to acts disproportionately affecting civilians -- such as raids on villages , mass kidnappings , assassinations , market bombings and attacks on churches and unaffiliated mosques . much of this violence has taken place in @entity29 , but neighboring countries -- @entity5 included -- have also been hit increasingly hard . journalist @entity61 in @entity63 , @entity5 , contributed to this report .

query: @placeholder is based in @entity29 but has attacked across the border of several neighbors

Figure 7: Heat map of \( \alpha_t \) for the Stanford Reader.

Figure 8: Heat map of \( \alpha_t \) for the Gated-Attention Reader (same passage and query as Figure 7).
Figure 9: Heat map of \( \alpha_t \) for the Attention-Sum Reader (same passage and query as Figure 7).
ABSTRACT Reading comprehension is a question answering task where the answer is to be found in a given passage about entities and events not mentioned in general knowledge sources. A significant number of neural architectures for this task (neural readers) have recently been developed and evaluated on large cloze-style datasets. We present experiments supporting the emergence of “predication structure” in the hidden state vectors of a class of neural readers including the Attentive Reader and Stanford Reader. We posits that the hidden state vectors can be viewed as (a representation of) a concatenation \([P, c]\) of a “predicate vector” \(P\) and a “constant symbol vector” \(c\) and that the hidden state represents the atomic formula \(P(c)\). This predication structure plays a conceptual role in relating “aggregation readers” such as the Attentive Reader and the Stanford Reader to “explicit reference readers” such as the Attention-Sum Reader, the Gated-Attention Reader and the Attention-over-Attention Reader. In an independent contribution, we show that the addition of linguistics features to the input to existing neural readers significantly boosts performance yielding the best results to date on the Who-did-What dataset.\footnote{code will be available: https://github.com/sohuren} 1 INTRODUCTION AND OVERVIEW Reading comprehension is a type of question answering task where the answer is to be found in a passage about particular entities and events not otherwise familiar to the reader. In particular, the entities and events should not be mentioned in structured databases of general knowledge. Reading comprehension problems are intended to measure a systems ability to extract semantic information about entities and relations directly from unstructured text. Several large scale reading comprehension datasets have been introduced recently. In particular the CNN & DailyMail datasets (Hermann et al., 2015), the Children’s Book Test (CBT) (Hill et al., 2016), and the Who-did-What dataset (Onishi et al., 2016). The large sizes of these datasets enable the application of deep learning. These are all cloze-style datasets where a question is constructed by deleting a word or phrase from an article summary (in CNN/DailyMail), from a sentence in a Children’s story (in CBT), or by deleting a person from the first sentence of a different news article on the same entities and events (in Who-did-What). In this paper we present empirical evidence for the emergence of predication structure in a certain class of neural readers. To understand predication structure is it helpful to review the anonymization performed in the CNN/DailyMail dataset. In this dataset named entities are replaced by anonymous entity identifiers such as “entity37”. The passage might contain “entity52 gave entity24 a rousing applause” and the question might be “X received a rounding applause from entity52”. The task is to fill in \(X\) from a given multiple choice list of candidate entity identifiers. A fixed relatively small set of the same entity identifiers are used over all the problems and the same problem is presented many times with the entity identifiers shuffled. This prevents a given entity identifier from having any semantically meaningful vector embedding. The embeddings of the entity identifiers are *Authors contributed equally presumably just pointers to semantics-free tokens. We will write entity identifiers as logical constant symbols such as \( c \) rather than strings such as “entity37”. 
Aggregation readers, including Memory Networks (Weston et al., Sukhbaatar et al. [2015]), the Attentive Reader (Hermann et al. [2015]) and the Stanford Reader (Chen et al. [2016]), use bidirectional LSTMs or GRUs to construct a contextual embedding \( h_t \) of each position \( t \) in the passage and also an embedding \( q \) of the question. They then select and answer \( c \) using a criterion similar to \[ \argmax_c \sum_t < h_t, q > < h_t, e(c) > \] where \( e(c) \) is the vector embedding of the constant symbol (entity identifier) \( c \). In practice the inner-product \( < h_t, q > \) is normalized over \( t \) using a softmax to yield an attention \( \alpha_t \) over \( t \) and (1) becomes. \[ \argmax_c < e(c), \sum_t \alpha_t h_t > . \] Here \( \sum_t \alpha_t h_t \) is viewed as a vector representation of the passage. We argue that for aggregation readers, roughly defined by (2), the hidden state \( h_t \) of the passage at position (or word) \( t \) can be viewed as a vector concatenation \( h_t = [e(\Phi_t), e'(c_t)] \) where \( \Phi_t \) is a property (or statement or predicate) being stated of a particular constant symbol \( c_t \). A logician might write this as \( h_t = \Phi_t[c_t] \). Furthermore, the question can be interpreted as having the form \( \Psi[x] \) where the problem is to find a constant symbol \( c \) such that the passage implies \( \Psi[c] \). Assuming \( h_t = [e(\Phi_t), e'(c_t)] \) and \( q = [e(\Psi), 0] \) and \( e(c) = [0, e'(c)] \) we can rewrite (1) as \[ \argmax_c \sum_t < e(\Phi_t), e(\Psi) > < e'(c_t), e'(c) > . \] The first inner product in (3) is interpreted as measuring the extent to which \( \Phi_t[x] \) implies \( \Psi[x] \) for any \( x \). The second inner product is interpreted as restricting \( t \) to positions talking about the constant symbol \( c \). Note that the posited decomposition of \( h_t \) is not explicit in (2) but instead must emerge during training. We present empirical evidence that this structure does emerge. The empirical evidence is somewhat tricky as the direct sum structure that divides \( h_t \) into its two parts need not be axis aligned and therefore need not literally correspond to vector concatenation. We also consider a second class of neural readers that we call explicit reference readers. Explicit reference readers avoid (2) and instead use \[ \argmax_c \sum_{t \in R(c)} \alpha_t \] where \( R(c) \) is the subset of the positions where the constant symbol (entity identifier) \( c \) occurs. Note that if we identify \( \alpha_t \) with \( < e(\Phi_t), e(\Psi) > \) and assume that \( < e'(c), e'(c_t) > \) is either 0 or 1 depending on whether \( c = c_t \), then (3) and (4) agree. In explicit reference readers the hidden state \( h_t \) need not carry a pointer to \( c_t \) as the restriction on \( t \) is independent of learned representations. Explicit reference readers include the Attention Sum Reader (Kadlec et al. [2016]), the Gated Attention Reader (Dhingra et al. [2016]), the Attention-over-Attention Reader (Cui et al. [2016]) and others (a list can be found in section 6). So far we have only considered anonymized datasets that require the handling of semantics-free constant symbols. However, even for non-anonymized datasets such as Who-Did-What, it is helpful to add features which indicate which positions in the passage are referring to which candidate answers. This indicates, not surprisingly, that reference is important in question answering. 
The fact that explicit reference features are needed in aggregation readers on non-anonymized data indicates that reference is not being solved by the aggregation readers. However, as reference seems to be important for cloze-style question answering, these problems may ultimately provide training data from which reference resolution can be learned. Sections 2 and 3 review various existing datasets and models respectively. Section 4 presents the logical structure interpretation of aggregation readers in more detail and the empirical evidence supporting it. Section 5 proposes new models that enforce the direct sum structure of the hidden state vectors. It is shown that these new models perform well on the Who-did-What dataset provided that reference annotations are added as input features. Section 5 also describes additional linguistic features that can be added to the input embeddings and show that these improve the performance of existing models resulting in the best single-model performance to date on the Who-did-What dataset. 2 A Brief Survey of Datasets Before presenting various models for machine comprehension we give a general formulation of the machine comprehension task. We take an instance of the task be a four tuple \((q, p, a, \mathcal{A})\), where \(q\) is a question given as sequence of words containing a special taken for a “blank” to be filled in, \(p\) is a document consisting of a sequence of words, \(\mathcal{A}\) is a set of possible answers and \(a \in \mathcal{A}\) is the ground truth answer. All words are drawn from a vocabulary \(\mathcal{V}\). We assume that all possible answers are words from the vocabulary, that is \(\mathcal{A} \subseteq \mathcal{V}\), and that the ground truth answer appears in the document, that is \(a \in p\). The problem can be described as that of selecting the answer \(a \in \mathcal{A}\) that answers question \(q\) based on information from \(p\). We will now briefly summarize important features of the related datasets in reading comprehension. CNN & DailyMail: [Hermann et al., 2015] constructed these datasets from a large number of news articles from the CNN and Daily Mail news websites. The main article is used as the context, while the cloze style question is formed from one short highlight sentence appearing in conjunction with the published article. To avoid the model using external world knowledge when answering the question, the named entities in the entire dataset were replaced by anonymous entity IDs which were then further shuffled for each example. This forces models to rely on the context document to answer each question. In this anonymized corpus the entity identifiers are taken to be a part of the vocabulary and the answer set \(\mathcal{A}\) consists of the entity identifiers occurring in the passage. Who-did-What (WDW): The Who-did-What dataset [Onishi et al., 2016] contains 127,000 multiple choice cloze questions constructed from the LDC English Gigaword newswire corpus [David & Cieri, 2003]. In contrast with CNN and Daily Mail, it avoids using article summaries for question formation. Instead, each problem is formed from two independent articles: one is given as the passage to be read and a different article on the same entities and events is used to form the question. Further, Who-did-What avoids anonymization, as each choice is a person named entity. In this dataset the answer set \(\mathcal{A}\) consists of the person named entities occurring in the passage. 
Finally, the problems have been filtered to remove a fraction that are easily solved by simple baselines. It has two training sets. The larger training set (“relaxed”) is created using less baseline filtering, while the smaller training set (“strict”) uses the same filtering as the validation and test sets. Children’s Book Test (CBT) [Hill et al., 2016] developed the CBT dataset in a slightly different fashion to the CNN/DailyMail datasets. They take any sequence of 21 consecutive sentences from a children’s book: the first 20 sentences are used as the passage, and the goal is to infer a missing word in the 21st sentence. The task complexity varies with the type of the omitted word (verb, preposition, named entity, or common noun). According to the original study on this dataset [Hill et al., 2016], n-gram and recurrent neural network language models are sufficient for predicting verbs or prepositions. However, for named entities and common nouns, current solvers are still far from human performance. Other Related Datasets. It is also worth mentioning several related datasets. The MCTest dataset [Richardson et al., 2013] consists of children’s stories and questions written by crowdsourced workers. The dataset only contains 660 documents and is too small to train deep models. The bAbI dataset [Weston et al., 2016] is constructed automatically using synthetic text generation and can be perfectly answered by hand-written algorithms [Lee et al., 2016]. The SQuAD dataset [Rajpurkar et al., 2016] consists passage-question pairs where the passage is a wikipedia article and the questions are written by crowdsourced workers. Although crowdsourcing is involved, the dataset contains over 200,000 problems. But the answer is often a word sequence which is difficult to handle with the reader models considered here. The LAMBADA dataset [Denis et al., 2016] is a word prediction dataset which requires a broad discourse context and the correct answer might not in the context. Nonetheless, when the correct answer is in the context, neural readers can be applied effectively [Chu et al., 2016]. 3 AGGREGATION READERS AND EXPLICIT REFERENCE READERS Here we classify readers into aggregation readers and explicit reference readers. Aggregation readers appeared first in the literature and include Memory Networks (Weston et al., Sukhbaatar et al., 2015), the Attentive Reader (Hermann et al., 2015), and the Stanford Reader (Chen et al., 2016). Aggregation readers are defined by equations (8) and (10) below. Explicit reference readers include the Attention-Sum Reader (Kadlec et al., 2016), the Gated-Attention Reader (Dhingra et al., 2016), and the Attention-over-Attention Reader (Cui et al., 2016). Explicit reference readers are defined by equation (14) below. We first present the Stanford Reader as a paradigmatic aggregation Reader and the Attention-Sum Reader as a paradigmatic explicit reference reader. 3.1 AGGREGATION READERS Stanford Reader. The the Stanford Reader (Chen et al., 2016) computes a bi-directional LSTM representation of both the passage and the question. \[ h = \text{biLSTM}(e(p)) \tag{5} \] \[ q = [\text{fLSTM}(e(q))_{|q|}, \text{bLSTM}(e(q))_1] \tag{6} \] In equations (5) and (6) we have that \( e(p) \) is the sequence of word embeddings \( e(w_i) \) for \( w_i \in p \) and similarly for \( e(q) \). The expression biLSTM(s) denotes the sequence of hidden state vectors resulting from running a bi-directional LSTM on the vector sequence s. We write biLSTM(s)_i for the ith vector in this sequence. 
Similarly, \(\text{fLSTM}(s)\) and \(\text{bLSTM}(s)\) denote the sequences of vectors resulting from running a forward LSTM and a backward LSTM respectively, and \([\cdot, \cdot]\) denotes vector concatenation. The Stanford Reader, and various other readers, then compute a bilinear attention over the passage which is used to construct a single weighted vector representation of the passage.
\[ \alpha_t = \operatorname{softmax}_t\; h_t^\top W_\alpha\, q \tag{7} \]
\[ o = \sum_t \alpha_t h_t \tag{8} \]
Finally, they compute a probability distribution \( P(a|p, q, \mathcal{A}) \) over the answers.
\[ P(a|p, q, \mathcal{A}) = \operatorname{softmax}_{a \in \mathcal{A}}\; e_o(a)^\top o \tag{9} \]
\[ \hat{a} = \operatorname{argmax}_{a \in \mathcal{A}}\; e_o(a)^\top o \tag{10} \]
Here \( e_o(a) \) is an "output embedding" of the answer \(a\). On the CNN dataset the Stanford Reader trains an output embedding for each of the roughly 500 entity identifiers used in the dataset. In cases where the answer might be any word in \( \mathcal{V} \), an output embedding must be trained for the entire vocabulary. The reader is trained with the log-loss \( \ln 1/P(a|p, q, \mathcal{A}) \) where \(a\) is the correct answer. At test time the reader is scored on the percentage of problems where \( \hat{a} = a \).

Memory Networks. Memory Networks (Weston et al., 2014; Sukhbaatar et al., 2015) also use (8) and (10), but they have more elaborate methods of constructing "memory vectors" \( h_t \) that do not involve LSTMs, and they replace (9) with
\[ P(w|p, q, \mathcal{A}) = P(w|p, q) = \operatorname{softmax}_{w \in \mathcal{V}}\; e_o(w)^\top o. \tag{11} \]
It should be noted that (11) trains output vectors over the whole vocabulary rather than just those items occurring in the choice set \( \mathcal{A} \). This is empirically significant in non-anonymized datasets such as CBT and Who-did-What, where choices at test time may never have occurred as choices in the training data.

Attentive Reader. The Stanford Reader was derived from the Attentive Reader (Hermann et al., 2015). The Attentive Reader uses \( \alpha_t = \operatorname{softmax}_t\; \text{MLP}([h_t, q]) \) instead of (7), where \(\text{MLP}(x)\) is the output of a multi-layer perceptron (MLP) given input \(x\). Also, the answer distribution in the Attentive Reader is defined over the full vocabulary rather than just the candidate answer set \( \mathcal{A} \).
\[ P(w|p, q, \mathcal{A}) = P(w|p, q) = \operatorname{softmax}_{w \in \mathcal{V}}\; e_o(w)^\top \text{MLP}([o, q]) \tag{12} \]
Equation (12) is similar to (11) in that it leads to the training of output vectors for the full vocabulary rather than just those items appearing in choice sets in the training data. As in Memory Networks, this leads to improved performance on non-anonymized datasets.

3.2 EXPLICIT REFERENCE READERS

Attention-Sum Reader. In the Attention-Sum Reader [Kadlec et al., 2016], \(h\) and \(q\) are computed with equations (5) and (6) as in the Stanford Reader, but using GRUs rather than LSTMs. The attention \( \alpha_t \) is computed similarly to (7) but using a simple inner product \( \alpha_t = \operatorname{softmax}_t\; h_t^\top q \) rather than a trained bilinear form. Most significantly, however, equations (9) and (10) are replaced by the following, where \( t \in R(a,p) \) indicates that a reference to candidate answer \(a\) occurs at position \(t\) in \(p\).
\[ P(a|p,q,\mathcal{A}) = \sum_{t \in R(a,p)} \alpha_t \tag{13} \]
\[ \hat{a} = \operatorname{argmax}_{a} \sum_{t \in R(a,p)} \alpha_t \tag{14} \]
Here we think of \( R(a,p) \) as the set of references to \(a\) in the passage \(p\).
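To make the two answer-selection criteria concrete, the following is a minimal numpy sketch of equations (7)-(10) versus (13)-(14). All arrays are random stand-ins for trained quantities, and the reference sets \(R(a,p)\) are assumed given as lists of positions; this is an illustrative sketch, not any released implementation.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Toy stand-ins for trained quantities: T passage positions, hidden size d.
T, d, n_cand = 50, 8, 4
rng = np.random.default_rng(0)
h = rng.normal(size=(T, d))           # biLSTM passage states, eq. (5)
q = rng.normal(size=d)                # question vector, eq. (6)
W = rng.normal(size=(d, d))           # bilinear form W_alpha, eq. (7)
E_out = rng.normal(size=(n_cand, d))  # output embeddings e_o(a)
refs = {0: [3, 17], 1: [8], 2: [25, 40], 3: [31]}  # R(a, p), assumed given

# Aggregation (Stanford Reader): attend, aggregate, then inner product, eqs. (7)-(10).
alpha = softmax(h @ W @ q)
o = alpha @ h                              # eq. (8)
a_hat_agg = int(np.argmax(E_out @ o))      # eq. (10)

# Explicit reference (Attention-Sum): sum attention over reference positions, eqs. (13)-(14).
alpha_as = softmax(h @ q)                  # simple inner-product attention
a_hat_ref = max(refs, key=lambda a: sum(alpha_as[t] for t in refs[a]))  # eq. (14)
```

The only structural difference is the final step: the aggregation reader ranks candidates by the inner product \(e_o(a)^\top o\) with the aggregated passage vector, while the explicit reference reader sums attention mass over the given reference positions. Section 4 argues that these two criteria effectively coincide in trained aggregation readers.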
It is important to note that (13) is an equality and that \( P(a|p,q,\mathcal{A}) \) is not normalized to the members of \( R(a,p) \). When training with the log-loss objective, this drives the attention \( \alpha_t \) to be normalized, that is, to have support only on the positions \(t\) with \( t \in R(a,p) \) for some \(a\). See the heat maps in the appendix.

Gated-Attention Reader. The Gated-Attention Reader [Dhingra et al., 2016] involves a K-layer biGRU-attention architecture defined by the following equations.
\[
q^\ell = [\text{fGRU}(e(q))_{|q|}, \text{bGRU}(e(q))_1] \quad 1 \leq \ell \leq K \\
h^1 = \text{biGRU}(e(p)) \\
h^\ell = \text{biGRU}(h^{\ell-1} \odot q^{\ell-1}) \quad 2 \leq \ell \leq K
\]
Here the question embeddings \( q^\ell \) for different values of \( \ell \) are computed with different GRU model parameters, and \( h \odot q \) abbreviates the sequence \( h_1 \odot q,\, h_2 \odot q,\, \ldots,\, h_{|p|} \odot q \). Note that for \( K = 1 \) we have only \( q^1 \) and \( h^1 \), as in the Attention-Sum Reader. An attention is then computed over the final layer \( h^K \) with \( \alpha_t = \operatorname{softmax}_t\; (h^K_t)^\top q^K \), as in the Attention-Sum Reader. This reader uses (13) and (14).

Attention-over-Attention Reader. The Attention-over-Attention Reader [Cui et al., 2016] uses a more elaborate method to compute the attention \( \alpha_t \). We will use \(t\) to range over positions in the passage and \(j\) to range over positions in the question. The model is then defined by the following equations.
\[
h = \text{biGRU}(e(p)) \qquad q = \text{biGRU}(e(q))
\]
\[
\alpha_{t,j} = \operatorname{softmax}_t\; h_t^\top q_j \qquad \beta_{t,j} = \operatorname{softmax}_j\; h_t^\top q_j
\]
\[
\beta_j = \frac{1}{|p|} \sum_t \beta_{t,j} \qquad \alpha_t = \sum_j \beta_j \alpha_{t,j}
\]
Note that the final equation defining \( \alpha_t \) can be interpreted as applying the attention \( \beta_j \) to the attentions \( \alpha_{t,j} \). This reader uses (13) and (14).

4 Emergent Predication Structure

As discussed in the introduction, the entity identifiers such as "entity37" introduced in the CNN/DailyMail dataset cannot be assigned any semantics other than their identity. We should think of them as pointers or semantics-free constant symbols. Despite this undermining of semantics, aggregation readers using (8) and (10) are able to perform well. Here we posit that this is due to an emergent predication structure in the hidden vectors \( h_t \).

Intuitively, we want to think of the hidden state vector \( h_t \) as a concatenation \( [e(\Phi_t), e'_o(a_t)] \) where \( \Phi_t \) carries semantic information true of \( a_t \). We think of \( h_t \) as representing \( \Phi_t[a_t] \) for a semantic statement \( \Phi_t[x] \) asserted of the constant symbol \( a_t \). We also think of the vector representation \( q \) of the question as having the form \([e(\Psi), 0]\) and the vector embedding \( e_o(a) \) as having the form \([0, e'_o(a)]\).

Unfortunately, the decomposition of \( h_t \) into this predication structure need not be axis-aligned. Rather than posit an axis-aligned concatenation, we posit that the hidden vector space \( H \) is a possibly non-aligned direct sum
\[ H = S \oplus E \]
where \( S \) is a subspace of "statement vectors" and \( E \) is an orthogonal subspace of "entity pointers". Each hidden state vector \( h \in H \) then has a unique decomposition as \( h = \Psi + e \) for \( \Psi \in S \) and \( e \in E \).
This is equivalent to saying that the hidden vector space \( H \) is some rotation of a concatenation of the vector spaces \( S \) and \( E \). We now present empirical evidence for this decomposition structure. We first note that the predication decomposition implies that \( e_o(a)^\top h_t \) equals \( e_o(a)^\top e_o(a_t) \). This suggests the following, for some fixed positive constant \( c \).
\[
e_o(a)^\top h_t = \left\{ \begin{array}{ll} c & \text{if } t \in R(a,p) \\ 0 & \text{otherwise} \end{array} \right. \tag{16}
\]
Assuming the predication structure, we have \( c = \|e_o(a)\|^2 \). We note that if different entity constants had different norms then answers would be biased toward occurrences of the constant symbol of larger norm, but we need all constant symbols to be equivalent. We note that (16) gives
\[
\begin{aligned}
\operatorname{argmax}_a\; e_o(a)^\top o &= \operatorname{argmax}_a\; e_o(a)^\top \sum_t \alpha_t h_t \\
&= \operatorname{argmax}_a\; \sum_t \alpha_t\, e_o(a)^\top h_t = \operatorname{argmax}_a \sum_{t \in R(a,p)} \alpha_t
\end{aligned}
\]
and hence (10) and (14) agree: the aggregation readers and the explicit reference readers are using essentially the same answer selection criterion.

Empirical evidence for (16) is given in the first three rows of Table 1. The first row empirically measures the "constant" \( c \) in (16) by measuring \( e_o(a)^\top h_t \) for those cases where \( t \in R(a,p) \). The second row measures the "0" in (16) by measuring \( e_o(a)^\top h_t \) in those cases where \( t \notin R(a,p) \). Additional evidence for (16) is given in Figure 1, showing that the output vectors \( e_o(a) \) for different entity identifiers \( a \) are nearly orthogonal. Orthogonality of the output vectors is required by (16) provided that each output vector \( e_o(a) \) is in the span of the hidden state vectors \( h_{t,p} \) for which \( t \in R(a,p) \). Intuitively, the mean of all vectors \( h_{t,p} \) with \( t \in R(a,p) \) should be approximately equal to \( e_o(a) \). Of course, empirically this will only be approximately true.

Equation (16) would suggest that the vector embedding of the constant symbols should have dimension at least as large as the number of distinct constants. However, in practice it is sufficient that \( e_o(a)^\top e_o(a') \) is small for \( a \neq a' \). This allows the vector embeddings of the constants to have dimension much smaller than the number of constants. We have experimented with two-sparse constant symbol embeddings where the number of embedding vectors in dimension \( d \) is \( 2d(d-1) \) (\( d \) choose 2 times the four ways of setting the signs of the two non-zero coordinates). Although we do not report results here, these designed and untrained constant embeddings worked reasonably well.

Table 1: Statistics to support (16) and (17). These statistics are computed for the Stanford Reader.
<table>
<tr> <th></th> <th colspan="3">CNN Dev</th> <th colspan="3">CNN Test</th> </tr>
<tr> <th></th> <th>samples</th> <th>mean</th> <th>variance</th> <th>samples</th> <th>mean</th> <th>variance</th> </tr>
<tr> <td>\( e_o(a)^\top h_t,\ t \in R(a,p) \)</td> <td>222,001</td> <td>10.66</td> <td>2.26</td> <td>164,746</td> <td>10.70</td> <td>2.45</td> </tr>
<tr> <td>\( e_o(a)^\top h_t,\ t \notin R(a,p) \)</td> <td>93,072,682</td> <td>-0.57</td> <td>1.59</td> <td>68,451,660</td> <td>-0.58</td> <td>1.65</td> </tr>
<tr> <td>\( e_o(a)^\top h_{t \pm 1},\ t \in R(a,p) \)</td> <td>443,878</td> <td>2.32</td> <td>1.79</td> <td>329,366</td> <td>2.25</td> <td>1.84</td> </tr>
<tr> <td>\( \text{Cosine}(q, h_t),\ \exists a:\, t \in R(a,p) \)</td> <td>222,001</td> <td>0.22</td> <td>0.11</td> <td>164,746</td> <td>0.22</td> <td>0.12</td> </tr>
<tr> <td>\( \text{Cosine}(q, e_o(a)),\ \forall a \)</td> <td>103,909</td> <td>-0.03</td> <td>0.04</td> <td>78,411</td> <td>-0.03</td> <td>0.04</td> </tr>
</table>

As further support for (16), the appendix gives heat maps of \( e_o(a)^\top h_t \) for different identifiers \( a \), and heat maps of \( \alpha_t \) for different readers. As another testable prediction, we note that the posited decomposition of the hidden state vectors implies
\[ q^\top (h_t + e_o(a)) = q^\top h_t. \tag{17} \]
This equation is equivalent to \( q^\top e_o(a) = 0 \). Experimentally, however, we cannot expect \( q^\top e_o(a) \) to be exactly zero, and (17) seems to provide a more experimentally meaningful test. Empirical evidence for (17) is given in the fourth and fifth rows of Table 1. The fourth row measures the cosine of the angle between the question vector \( q \) and the hidden state \( h_t \), averaged over passage positions \( t \) at which some entity identifier occurs. The fifth row measures the cosine of the angle between \( q \) and \( e_o(a) \), averaged over the entity identifiers \( a \).

Figure 1: Plot of \( e_o(a_i)^\top e_o(a_j) \) from the Stanford Reader trained on the CNN dataset. Off-diagonal values have mean 25.6 and variance 17.2 while diagonal values have mean 169 and variance 17.3.

A question asks for a value of \( x \) such that a statement \( \Psi[x] \) is implied by the passage. For a question \( \Psi \) we might even suggest the following vectorial interpretation of entailment.
\[ \Phi[x] \text{ implies } \Psi[x] \quad \text{iff} \quad \Phi^\top \Psi \geq \|\Psi\|_1. \]
This interpretation is exactly correct if some of the dimensions of the vector space correspond to predicates, \( \Psi \) is a 0-1 vector representing a conjunction of predicates, and \( \Phi \) is also 0-1 on these dimensions, indicating whether a predicate is implied by the context. Of course, in practice one expects the dimension to be smaller than the number of possible predicates.
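The statistics in Table 1 are straightforward to reproduce given a trained reader's tensors. The following sketch, with hypothetical arrays h (passage hidden states), E_out (output embeddings), and reference sets, computes the means corresponding to the first two rows; under the posited structure the first should concentrate near \( c = \|e_o(a)\|^2 \) and the second near zero.

```python
import numpy as np

def predication_stats(h, E_out, refs):
    """Mean of e_o(a)^T h_t over reference vs. non-reference (a, t) pairs,
    mirroring the first two rows of Table 1. h: (T, d) hidden states,
    E_out: (A, d) output embeddings, refs[a]: the positions t in R(a, p)."""
    scores = E_out @ h.T                     # (A, T): entry [a, t] is e_o(a)^T h_t
    mask = np.zeros(scores.shape, dtype=bool)
    for a, positions in refs.items():
        mask[a, positions] = True
    return scores[mask].mean(), scores[~mask].mean()

# Toy usage with random stand-ins (a trained reader should give ~10.7 vs. ~-0.6):
h, E_out = np.random.randn(50, 8), np.random.randn(4, 8)
print(predication_stats(h, E_out, {0: [3, 17], 1: [8], 2: [25], 3: [40]}))
```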
5 Pointer Annotation Readers

It is of course important to note that anonymization provides reference information: it assumes that one can determine coreference so as to re-use coreferent phrases with the same entity identifier. Anonymization allows the reference set \( R(a, p) \) to be directly read off of the passage. Still, an aggregation reader must learn to recover this explicit reference structure. Aggregation readers can have difficulty when anonymization is not done. The Stanford Reader achieves just better than 45% on the Who-did-What dataset, while the Attention-Sum Reader can get near 60%. But if we anonymize the Who-did-What dataset and then re-train the Stanford Reader, the accuracy jumps to near 65%. Anonymization has two effects. First, it greatly reduces the number of output embeddings \( e_o(a) \) to be learned: we need only learn output embeddings for the relatively small number of entity identifiers needed. Second, anonymization suppresses the semantics of the reference phrases and leaves only a semantics-free entity identifier. This suppression of semantics may facilitate the separation of the hidden state vector space \( H \) into a direct sum \( S \oplus E \) with \( q \in S \) and \( e_o(a) \in E \).

We can think of anonymization as providing additional linguistic input for the reader: it explicitly marks positions of candidate answers and establishes coreference. A natural question is whether this information can be provided without anonymization by simply adding additional coreference features to the input. Here we evaluate two architectures inspired by this question. This evaluation is done on the Who-did-What dataset, which is not anonymized. In each architecture we add features to the input to mark the occurrences of candidate answers. These models are simpler than the Stanford Reader but perform comparably. This comparable performance, shown in Table 2, further supports our analysis of logical structure in aggregation readers.

Table 2: Accuracy on the WDW dataset. All results are based on a single model. Results for neural readers other than NSE are based on replications of those systems. All models were trained on the relaxed training set, which uniformly yields better performance than the restricted training set. The first group of models are explicit reference models and the second group are aggregation models. + indicates anonymization with better reference identifiers.

<table>
<tr> <th>Who did What</th> <th>Val</th> <th>Test</th> </tr>
<tr> <td>Attention Sum Reader (Onishi et al., 2016)</td> <td>59.8</td> <td>58.8</td> </tr>
<tr> <td>Gated Attention Reader (Onishi et al., 2016)</td> <td>60.3</td> <td>59.6</td> </tr>
<tr> <td>NSE (Munkhdalai & Yu, 2016)</td> <td>66.5</td> <td>66.2</td> </tr>
<tr> <td>Gated Attention + Linguistic Features<sup>+</sup></td> <td>72.2</td> <td><b>72.8</b></td> </tr>
<tr> <td>Stanford Reader</td> <td>46.1</td> <td>45.8</td> </tr>
<tr> <td>Attentive Reader with Anonymization</td> <td>55.7</td> <td>55.5</td> </tr>
<tr> <td>Stanford Reader with Anonymization</td> <td>64.8</td> <td>64.5</td> </tr>
<tr> <td>One-Hot Pointer Reader</td> <td>65.1</td> <td>64.4</td> </tr>
<tr> <td>One-Hot Pointer Reader + Linguistic Features<sup>+</sup></td> <td>69.3</td> <td>68.7</td> </tr>
<tr> <td>Stanford with Anonymization + Linguistic Features<sup>+</sup></td> <td>69.7</td> <td><b>69.2</b></td> </tr>
<tr> <td>Human Performance</td> <td>-</td> <td>84</td> </tr>
</table>

<b>One-Hot Pointer Annotation:</b> The Stanford Reader involves both input embeddings of words and output embeddings of entity identifiers. In the Who-did-What dataset each problem has at most five choices in the multiple-choice answer list. This means that we need only five entity identifiers, and we can use a five-dimensional one-hot vector representation for answer identifiers. If an answer choice exists at position \( t \) in the passage, let \( i_t \) be the index of that choice on the choice list. If no choice occurs at position \( t \), take \( i_t \) to be zero. Take \( e'(i) \) to be the zero vector if \( i = 0 \) and otherwise to be the one-hot vector for \( i \). We define pointer annotation to be the result of adding \( e'(i_t) \) as additional features to the input embedding.
\[ e(w_t) = [e(w_t), e'(i_t)] \tag{18} \]
We then define a one-hot pointer reader by designating five dimensions of the hidden state as indicators of the answer and taking the probability of choice \( i \) to be
\[ p(i|d, q) = \operatorname{softmax}_i\; o_i \tag{19} \]
where \( o \) is computed by (8).

<b>General Pointer Annotation:</b> In the CNN dataset there are roughly 500 entity identifiers, so a one-hot representation is not desirable. Instead we can let \( e'(i) \) be a fixed set of "pointer vectors": vectors distributed widely on the unit sphere so that for \( i \neq j \) we have that \( e'(i)^\top e'(j) \) is small. We again use (18) but replace (19) with
\[ p(i|d, q) = \operatorname{softmax}_i\; [0, e'(i)]^\top o \tag{20} \]
In the general pointer reader the pointer embeddings \( e'(i) \) are held fixed and not trained.
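A minimal sketch of the one-hot pointer annotation of equation (18) is given below. The token-level matching is a simplification of our own: in Who-did-What a choice is typically a multi-word phrase, so real occurrence marking requires phrase matching, and the embed function is a hypothetical stand-in for the trained input embeddings.

```python
import numpy as np

def pointer_annotate(tokens, choices, embed, n_choices=5):
    """Append a one-hot pointer feature e'(i_t) to each word embedding, eq. (18).
    `choices` maps a candidate-answer token to its 1-based index on the choice
    list; `embed` maps a token to its input embedding vector (hypothetical)."""
    rows = []
    for w in tokens:
        ptr = np.zeros(n_choices)        # e'(0) is the zero vector
        if w in choices:
            ptr[choices[w] - 1] = 1.0    # e'(i) is one-hot for i >= 1
        rows.append(np.concatenate([embed(w), ptr]))
    return np.stack(rows)

# e.g. pointer_annotate(passage_tokens, {"Smith": 1, "Jones": 2}, lambda w: np.zeros(8))
```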
<b>Linguistic Features.</b> Each model can be modified to include additional input features for each input token in the question and passage. More specifically, we can add the following features to the word embeddings.

• Binary feature: whether the current token occurs in the question.
• Real-valued feature: the frequency of the current token in the passage.
• Real-valued feature: the position of the token's first occurrence in the passage, as a percentage of the passage length.
• Binary feature: whether the text surrounding the token matches the text surrounding the placeholder in the question. We only match one word to the left and one word to the right.
• One-hot vector: part-of-speech (POS) tag. We use this feature only on the CBT dataset.
• One-hot vector: named entity recognition (NER) tag. We use this feature only on the CBT dataset.

6 A Survey of Recent Results

The performance of various recent readers on CNN, DailyMail and CBTest is summarized in Table 3. For purposes of comparison we only present results for single models. Model ensembles generally perform better than single models but require more computation to train, making comparisons more difficult. More experimental details can be found in the appendix.

Table 3: Accuracy on CNN, DailyMail, CBTest NE and CBTest CN. All results are based on a single model. Results other than those involving pointer or linguistic feature annotations are taken from the original publications. Readers in the first group are explicit reference readers. Readers in the second group are aggregation readers. The final reader defies this classification.
<table>
<tr> <th rowspan="2"> </th> <th colspan="2">CNN</th> <th colspan="2">DailyMail</th> <th colspan="2">CBT NE</th> <th colspan="2">CBT CN</th> </tr>
<tr> <th>valid</th><th>test</th> <th>valid</th><th>test</th> <th>valid</th><th>test</th> <th>valid</th><th>test</th> </tr>
<tr> <td>Human (context+query)</td> <td>-</td><td>-</td> <td>-</td><td>-</td> <td>-</td><td>81.6</td> <td>-</td><td>81.6</td> </tr>
<tr> <td>Attention Sum (Kadlec et al., 2016)</td> <td>68.6</td><td>69.5</td> <td>75.0</td><td>73.9</td> <td>73.8</td><td>68.6</td> <td>68.8</td><td>63.4</td> </tr>
<tr> <td>Gated Attention (Dhingra et al., 2016)</td> <td>73.0</td><td>73.8</td> <td>76.7</td><td>75.7</td> <td>74.9</td><td>69.0</td> <td>69.0</td><td>63.9</td> </tr>
<tr> <td>AoA Reader (Cui et al., 2016)</td> <td>73.1</td><td>74.4</td> <td>-</td><td>-</td> <td>77.8</td><td>72.0</td> <td>72.2</td><td>69.4</td> </tr>
<tr> <td>NSE (Munkhdalai & Yu, 2016)</td> <td>-</td><td>-</td> <td>-</td><td>-</td> <td>78.2</td><td><b>73.2</b></td> <td>74.2</td><td><b>71.4</b></td> </tr>
<tr> <td>DER Network (Kobayashi et al., 2016)</td> <td>71.3</td><td>72.9</td> <td>-</td><td>-</td> <td>-</td><td>-</td> <td>-</td><td>-</td> </tr>
<tr> <td>Epi Reader (Trischler et al., 2016)</td> <td>73.4</td><td>74.0</td> <td>-</td><td>-</td> <td>75.3</td><td>69.7</td> <td>71.5</td><td>67.4</td> </tr>
<tr> <td>Iterative Reader (Sordoni et al., 2016)</td> <td>72.6</td><td>73.3</td> <td>-</td><td>-</td> <td>75.2</td><td>68.6</td> <td>72.1</td><td>69.2</td> </tr>
<tr> <td>QANN (Weissenborn, 2016)</td> <td>-</td><td>73.6</td> <td>-</td><td><b>77.2</b></td> <td>-</td><td>70.6</td> <td>-</td><td>-</td> </tr>
<tr> <td>Gated Attention with linguistic features</td> <td>74.7</td><td><b>75.4</b></td> <td>78.6</td><td><b>78.3</b></td> <td>75.7</td><td><b>72.2</b></td> <td>73.3</td><td><b>70.1</b></td> </tr>
<tr> <td>MemNets (Sukhbaatar et al., 2015)</td> <td>63.4</td><td>66.8</td> <td>-</td><td>-</td> <td>70.4</td><td>66.6</td> <td>64.2</td><td>63.0</td> </tr>
<tr> <td>Attentive Reader (Hermann et al., 2015)</td> <td>61.6</td><td>63.0</td> <td>70.5</td><td>69.0</td> <td>-</td><td>-</td> <td>-</td><td>-</td> </tr>
<tr> <td>Stanford Reader (Chen et al., 2016)</td> <td>72.5</td><td>72.7</td> <td>76.9</td><td>76.0</td> <td>-</td><td>-</td> <td>-</td><td>-</td> </tr>
<tr> <td>Stanford Reader with linguistic features</td> <td>75.7</td><td><b>76.0</b></td> <td>-</td><td>-</td> <td>-</td><td>-</td> <td>-</td><td>-</td> </tr>
<tr> <td>ReasoNet (Shen et al., 2016)</td> <td>72.9</td><td>74.7</td> <td>77.6</td><td>76.6</td> <td>-</td><td>-</td> <td>-</td><td>-</td> </tr>
</table>

All of the high-performing approaches in Table 3 were proposed very recently. Bold font indicates the state-of-the-art accuracy (in the original typesetting, blue additionally marked the second-highest accuracy). Note that the Stanford Reader result we report here is the one without relabeling, since the relabeling procedure doesn't follow the protocol used in Hermann et al. (2015).

7 Discussion

Explicit reference architectures rely on reference resolution: a specification of which phrases in the given passage refer to candidate answers. Our experiments indicate that all existing readers benefit greatly from this externally provided information.
Aggregation readers seem to demonstrate a stronger learning ability in that they essentially learn to mimic explicit reference readers by identifying reference annotation and using it appropriately. This is done most clearly in the pointer reader architectures. Furthermore, we have argued for, and given experimental evidence for, an interpretation of aggregation readers as learning emergent logical structure: a factoring of neural representations into a direct sum of a statement (predicate) representation and an entity (argument) representation.

At a very high level, our analysis and experiments support a central role for reference resolution in reading comprehension. Automating reference resolution in neural models, and demonstrating its value on appropriate datasets, would seem to be an important area for future research.

Of course, there is great interest in "learning representations". The current state of the art in reading comprehension is such that systems still benefit from externally provided linguistic features, including externally annotated reference resolution. It would seem desirable to develop fully automated neural readers that perform as well as readers using externally provided annotations. It is of course important to avoid straw-man baselines when making any such claim.

We are hesitant to make more detailed comments on the differences between the architectural details of the readers discussed in this paper. The differences in scores between the leading readers are comparable to the differences that can be achieved by aggressive search over meta-parameters, or to the statistical fluctuations in the quality of models learned by noisy statistical training procedures. More careful experiments over a longer period of time are needed. More dramatic improvements in performance would of course provide better support for particular innovations.

ACKNOWLEDGMENTS

We thank NVIDIA Corporation for the donation of GPUs used for this work.
reject
Reject
5.666667
37f74bfb2c3bc4eb1eab904563061ac1a4832879
iclr
2,017
NEURO-SYMBOLIC PROGRAM SYNTHESIS

Emilio Parisotto1,2, Abdel-rahman Mohamed1, Rishabh Singh1, Lihong Li1, Dengyong Zhou1, Pushmeet Kohli1
1Microsoft Research, USA 2Carnegie Mellon University, USA
eparisot@andrew.cmu.edu, {asamir,risin,lihongli,denzho,pkohli}@microsoft.com

ABSTRACT

Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressive results, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). In this paper, we propose a novel technique, Neuro-Symbolic Program Synthesis, to overcome the above-mentioned problems. Once trained, our approach can automatically construct computer programs in a domain-specific language that are consistent with a set of input-output examples provided at test time. Our method is based on two novel neural modules. The first module, called the cross correlation I/O network, given a set of input-output examples, produces a continuous representation of the set of I/O examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the examples, synthesizes a program by incrementally expanding partial programs. We demonstrate the effectiveness of our approach by applying it to the rich and complex domain of regular expression based string transformations. Experiments show that the R3NN model is not only able to construct programs from new input-output examples, but it is also able to construct new programs for tasks that it had never observed before during training.

1 INTRODUCTION

The act of programming, i.e., developing a procedure to accomplish a task, is a remarkable demonstration of the reasoning abilities of the human mind. Expectedly, Program Induction is considered one of the fundamental problems in Machine Learning and Artificial Intelligence. Recent progress on deep learning has led to the proposal of a number of promising neural architectures for this problem. Many of these models are inspired by computation modules (CPU, RAM, GPU) (Graves et al., 2014; Kurach et al., 2015; Reed & de Freitas, 2015; Neelakantan et al., 2015) or by common data structures used in many algorithms, such as the stack (Joulin & Mikolov, 2015). A common thread in this line of work is to specify the atomic operations of the network in some differentiable form, allowing efficient end-to-end training of a neural controller, or to use reinforcement learning to make hard choices about which operation to perform. While these results are impressive, these approaches have a number of important limitations: (a) they are computationally expensive and hard to train, (b) a model has to be trained for each task (program) separately, and (c) it is hard to interpret or verify the correctness of the learnt mapping (as it is defined by a neural network). While some recently proposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunel et al., 2016) do learn interpretable programs, they still need to learn a separate neural network model for each individual task.
Motivated by the need for model interpretability and scalability to multiple tasks, we address the problem of Program Synthesis. Program Synthesis, the problem of automatically constructing programs that are consistent with a given specification, has long been a subject of research in Computer Science (Biermann, 1978; Summers, 1977). This interest has been reinvigorated in recent years on the back of the development of methods for learning programs in various domains, ranging from low-level bit manipulation code (Solar-Lezama et al., 2005) to data structure manipulations (Singh & Solar-Lezama, 2011) and regular expression based string transformations (Gulwani, 2011).

Most of the recently proposed methods for program synthesis operate by searching the space of programs in a Domain-Specific Language (DSL) instead of arbitrary Turing-complete languages. This hypothesis space of possible programs is huge (potentially infinite) and searching over it is a challenging problem. Several search techniques, including enumerative (Udupa et al., 2013), stochastic (Schkufza et al., 2013), constraint-based (Solar-Lezama, 2008), and version-space algebra based algorithms (Gulwani et al., 2012), have been developed to search over the space of programs in the DSL; these support different kinds of specifications (examples, partial programs, natural language, etc.) and domains. These techniques not only require significant engineering and research effort to develop carefully designed heuristics for efficient search, but also have limited applicability and can only synthesize programs of limited sizes and types.

In this paper, we present a novel technique called Neuro-Symbolic Program Synthesis (NSPS) that learns to generate a program incrementally without the need for an explicit search. Once trained, NSPS can automatically construct computer programs that are consistent with any set of input-output examples provided at test time. Our method is based on two novel neural architectures. The first module, called the cross correlation I/O network, produces a continuous representation of any given set of input-output examples. The second module, the Recursive-Reverse-Recursive Neural Network (R3NN), given the continuous representation of the input-output examples, synthesizes a program by incrementally expanding partial programs. The R3NN employs a tree-based neural architecture that sequentially constructs a parse tree by selecting which non-terminal symbol to expand using rules from a context-free grammar (i.e., the DSL).

We demonstrate the efficacy of our method by applying it to the rich and complex domain of regular-expression-based syntactic string transformations, using a DSL based on the one used by FlashFill (Gulwani, 2011; Gulwani et al., 2012), a Programming-By-Example (PBE) system in Microsoft Excel 2013. Given a few input-output examples of strings, the task is to synthesize a program built on regular expressions to perform the desired string transformation. An example task that can be expressed in this DSL is shown in Figure 1, which also shows the DSL. Our evaluation shows that NSPS is not only able to construct programs for known tasks from new input-output examples, but it is also able to construct completely new programs that it had not observed during training. Specifically, the proposed system is able to synthesize string transformation programs for 63% of tasks that it had not observed at training time, and for 94% of tasks when 100 program samples are taken from the model.
Moreover, our system is able to learn 38% of 238 real-world FlashFill benchmarks.

To summarize, the key contributions of our work are:

• A novel Neuro-Symbolic program synthesis technique to encode neural search over the space of programs defined using a Domain-Specific Language (DSL).
• The R3NN model that encodes and expands partial programs in the DSL, where each node has a global representation of the program tree.
• A novel cross-correlation based neural architecture for learning continuous representations of sets of input-output examples.
• An evaluation of the NSPS approach on the complex domain of regular expression based string transformations.

2 PROBLEM DEFINITION

In this section, we formally define the DSL-based program synthesis problem that we consider in this paper. Given a DSL \( L \), we want to automatically construct a synthesis algorithm \( \mathcal{A} \) such that, given a set of input-output examples \( \{(i_1, o_1), \cdots, (i_n, o_n)\} \), \( \mathcal{A} \) returns a program \( P \in L \) that conforms to the input-output examples, i.e.,
\[ \forall j : 1 \leq j \leq n,\; P(i_j) = o_j. \]

<table>
<tr> <th>Input \( v \)</th> <th>Output</th> </tr>
<tr> <td>1 William Henry Charles</td> <td>Charles, W.</td> </tr>
<tr> <td>2 Michael Johnson</td> <td>Johnson, M.</td> </tr>
<tr> <td>3 Barack Rogers</td> <td>Rogers, B.</td> </tr>
<tr> <td>4 Martha D. Saunders</td> <td>Saunders, M.</td> </tr>
<tr> <td>5 Peter T Gates</td> <td>Gates, P.</td> </tr>
</table>
(a)

String \( e \) := Concat(\( f_1, \cdots, f_n \))
Substring \( f \) := ConstStr(\( s \)) | SubStr(\( v, p_l, p_r \))
Position \( p \) := (\( r, k, \) Dir) | ConstPos(\( k \))
Direction Dir := Start | End
Regex \( r \) := \( s \) | \( T_1 \) | \( \cdots \) | \( T_n \)
(b)

Figure 1: (a) An example FlashFill task for transforming names to last name with initials of first name, and (b) the DSL for regular expression based string transformations.

The syntax and semantics of the DSL for string transformations are shown in Figure 1b and Figure 8 respectively. The DSL corresponds to a large subset of the FlashFill DSL (except conditionals), and allows for a richer class of substring operations than FlashFill. A DSL program takes as input a string \( v \) and returns an output string \( o \). The top-level string expression \( e \) is a concatenation of a finite list of substring expressions \( f_1, \cdots, f_n \). A substring expression \( f \) can either be a constant string \( s \) or a substring expression defined using two position logics \( p_l \) (left) and \( p_r \) (right). A position logic corresponds to a symbolic expression that evaluates to an index in the string. A position logic \( p \) can either be a constant position \( k \) or a token match expression \( (r, k, \text{Dir}) \), which denotes the Start or End of the \( k^{\text{th}} \) match of token \( r \) in the input string \( v \). A regex token can either be a constant string \( s \) or one of 8 regular expression tokens: \( p \) (ProperCase), \( C \) (CAPS), \( l \) (lowercase), \( d \) (Digits), \( \alpha \) (Alphabets), \( \alpha n \) (Alphanumeric), ^ (StartOfString), and $ (EndOfString). The semantics of the DSL programs is described in the appendix.

A DSL program for the name transformation task shown in Figure 1a that is consistent with the examples is Concat(\( f_1 \), ConstStr(", "), \( f_2 \), ConstStr(".")), where \( f_1 \equiv \text{SubStr}(v, (\text{" "}, -1, \text{End}), \text{ConstPos}(-1)) \) and \( f_2 \equiv \text{SubStr}(v, \text{ConstPos}(0), \text{ConstPos}(1)) \). The program concatenates four strings: (i) the substring between the end of the last whitespace and the end of the input string, (ii) the constant string ", ", (iii) the first character of the input string, and (iv) the constant string ".".
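To make the semantics concrete, here is a minimal Python interpretation of this example program. The resolution of negative arguments (k = -1 denoting the last token match, and ConstPos(-1) resolving to the end of the string) is a simplifying assumption on our part; the precise semantics are those of Figure 8.

```python
import re

def tok_pos(s, token, k, direction):
    """Position logic (r, k, Dir): index of the k-th match of `token` in s.
    Assumption: k is 1-indexed and k = -1 denotes the last match."""
    matches = list(re.finditer(re.escape(token), s))
    m = matches[k] if k < 0 else matches[k - 1]
    return m.start() if direction == "Start" else m.end()

def const_pos(s, k):
    # Assumption: negative k counts from the end, with -1 = end of string.
    return k if k >= 0 else len(s) + k + 1

def run_example(v):
    f1 = v[tok_pos(v, " ", -1, "End"):const_pos(v, -1)]  # last name
    f2 = v[const_pos(v, 0):const_pos(v, 1)]              # first initial
    return f1 + ", " + f2 + "."                          # Concat(f1, ", ", f2, ".")

assert run_example("William Henry Charles") == "Charles, W."
assert run_example("Michael Johnson") == "Johnson, M."
```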
3 OVERVIEW OF OUR APPROACH

We now present an overview of our approach. Given a DSL \( L \), we learn a generative model of programs in \( L \) that is conditioned on input-output examples, so as to efficiently search for consistent programs. The workflow of our system is shown in Figure 2; the system is trained end-to-end on a large training set of programs in the DSL together with their corresponding input-output examples. To generate a large training set, we uniformly sample programs from the DSL and then use a rule-based strategy to compute well-formed input strings. Given a program P (sampled from the DSL), the rule-based strategy generates input strings for P, ensuring that the pre-conditions of P are met (i.e., P doesn't throw an exception on the input strings). It collects the pre-conditions of all Substring expressions present in the sampled program P and then generates inputs conforming to them. For example, suppose the sampled program is SubStr(v, (CAPS, 2, Start), (" ", 3, Start)), which extracts the substring between the start of the 2nd capital letter and the start of the 3rd whitespace. The rule-based strategy would ensure that all generated input strings contain at least 2 capital letters and 3 whitespaces, in addition to other randomly generated characters. The corresponding output strings are obtained by running the programs on the input strings.

A DSL can be considered a context-free grammar with a start symbol \( S \) and a set of non-terminals with corresponding expansion rules. The (partial) grammar derivations or trees correspond to (partial) programs. A naïve way to perform a search over the programs in a DSL is to start from the start symbol \( S \) and then randomly choose non-terminals to expand with randomly chosen expansion rules until reaching a derivation with only terminals. We, instead, learn a generative model over partial derivations in the DSL that assigns probabilities to different non-terminals in a partial derivation and their corresponding expansions, to guide the search for complete derivations.

Figure 2: An overview of the training and test workflow of our synthesis approach.

Our generative model uses a Recursive-Reverse-Recursive Neural Network (R3NN) to encode partial trees (derivations) in \( L \), where each node in the partial tree encodes global information about every other node in the tree. The model assigns a vector representation to every symbol and every expansion rule in the grammar. Given a partial tree, the model first assigns a vector representation to each leaf node, and then performs a recursive pass going up the tree to assign a global tree representation to the root. It then performs a reverse-recursive pass starting from the root to assign a global tree representation to each node in the tree. The generative process is conditioned on a set of input-output examples to learn a program that is consistent with this set of examples.
We experiment with multiple input-output encoders, including an LSTM encoder that concatenates the hidden vectors of two deep bidirectional LSTM networks for the input and output strings in the examples, and a cross correlation encoder that computes the cross correlation between the LSTM tensor representations of the input and output strings in the examples. This vector is then used as an additional input in the R3NN model to condition the generative model.

4 TREE-STRUCTURED GENERATION MODEL

We define a program t steps into construction as a partial program tree (PPT); see Figure 3 for a visual depiction. A PPT has two types of nodes: leaf (symbol) nodes and inner non-leaf (rule) nodes. A leaf node represents a symbol, whether non-terminal or terminal. An inner non-leaf node represents a particular production rule of the DSL, where the number of children of the non-leaf node equals the arity of the RHS of the rule it represents. A PPT is called a program tree (PT) whenever all the leaves of the tree are terminal symbols. Such a tree represents a completed program under the DSL and can be executed.

We define an expansion as the valid application of a specific production rule (e.g., e → e op2 e) to a specific non-terminal leaf node within a PPT (a leaf with symbol e). We refer to the specific production rule that an expansion is derived from as the expansion type. It can be seen that if there exist two leaf nodes (\( l_1 \) and \( l_2 \)) with the same symbol, then for every expansion specific to \( l_1 \) there exists an expansion specific to \( l_2 \) with the same type.

4.1 RECURSIVE-REVERSE-RECURSIVE NEURAL NETWORK

In order to define a generation model over PPTs, we need an efficient way of assigning probabilities to every valid expansion in the current PPT. A valid expansion has two components: first, the production rule used, and second, the position of the expanded leaf node relative to every other node in the tree. To account for the first component, a separate distributed representation for each production rule is maintained. The second component is handled using an architecture where the forward propagation resembles belief propagation on trees, allowing a notion of global tree state at every node within the tree. A given expansion probability is then calculated as being proportional to the inner product between the production rule representation and the global-tree representation of the leaf-level non-terminal node. We now describe the design of this architecture in more detail.

The R3NN has the following parameters for the grammar described by a DSL (see Figure 3):

1. For every symbol \( s \in S \), an \( M \)-dimensional representation \( \phi(s) \in \mathbb{R}^M \).
2. For every production rule \( r \in R \), an \( M \)-dimensional representation \( \omega(r) \in \mathbb{R}^M \).
3. For every production rule \( r \in R \), a deep neural network \( f_r \) which takes as input a vector \( x \in \mathbb{R}^{Q \cdot M} \), with \( Q \) being the number of symbols on the RHS of the production rule \( r \), and outputs a vector \( y \in \mathbb{R}^M \). Therefore, the production-rule network \( f_r \) takes as input a concatenation of the distributed representations of each of its RHS symbols and produces a distributed representation for the LHS symbol.
4. For every production rule \( r \in R \), an additional deep neural network \( g_r \) which takes as input a vector \( x' \in \mathbb{R}^M \) and outputs a vector \( y' \in \mathbb{R}^{Q \cdot M} \). We can think of \( g_r \) as a reverse production-rule network that takes as input a vector representation of the LHS and produces a concatenation of the distributed representations of each of the rule's RHS symbols.

Figure 3: (a) The initial recursive pass of the R3NN. (b) The reverse-recursive pass of the R3NN where the input is the output of the previous recursive pass.

Let \( E \) be the set of all valid expansions in a PPT \( T \), let \( L \) be the current leaf nodes of \( T \) and \( N \) be the current non-leaf (rule) nodes of \( T \). Let \( S(l) \) be the symbol of leaf \( l \in L \) and let \( R(n) \) represent the production rule of non-leaf node \( n \in N \).

4.1.1 GLOBAL TREE INFORMATION AT THE LEAVES

To compute the probability distribution over the set \( E \), the R3NN first computes a distributed representation for each leaf node that contains global tree information. To accomplish this, for every leaf node \( l \in L \) in the tree we retrieve its distributed representation \( \phi(S(l)) \). We then do a standard recursive bottom-to-top, RHS→LHS pass on the network, going up the tree and applying \( f_{R(n)} \) for every non-leaf node \( n \in N \) to its RHS node representations (see Figure 3(a)). These networks \( f_{R(n)} \) produce a node representation which is input into the parent's rule network, and so on, until we reach the root node. Once at the root node, we effectively have a fixed-dimensionality global tree representation \( \phi(root) \) for the start symbol.

The problem is that this representation has lost any notion of tree position. To solve this problem, the R3NN next does what is effectively a reverse-recursive pass which starts at the root node with \( \phi(root) \) as input and moves towards the leaf nodes (see Figure 3(b)). More concretely, we start with the root node representation \( \phi(root) \) and use that as input into the rule network \( g_{R(root)} \), where \( R(root) \) is the production rule that is applied to the start symbol in \( T \). This produces a representation \( \phi'(c) \) for each RHS node \( c \) of \( R(root) \). If \( c \) is a non-leaf node, we iteratively apply this procedure to \( c \), i.e., process \( \phi'(c) \) using \( g_{R(c)} \) to get representations \( \phi'(cc) \) for every RHS node \( cc \) of \( R(c) \), etc. If \( c \) is a leaf node, we now have a leaf representation \( \phi'(c) \) which has an information path to \( \phi(root) \) and thus to every other leaf node in the tree. Once the reverse-recursive process is complete, we have a distributed representation \( \phi'(l) \) for every leaf node \( l \) which contains global tree information. While \( \phi(l_1) \) and \( \phi(l_2) \) could be equal for leaf nodes which have the same symbol type, \( \phi'(l_1) \) and \( \phi'(l_2) \) will not be equal even if they have the same symbol type, because they are at different positions in the tree.

4.1.2 EXPANSION PROBABILITIES

Given the global leaf representations \( \phi'(l) \), we can now straightforwardly acquire scores for each expansion \( e \in E \). For expansion \( e \), let \( e.r \) be the expansion type (the production rule \( r \in R \) that \( e \) applies) and let \( e.l \) be the leaf node \( l \) that \( e.r \) is applied to. The score of an expansion is calculated as \( z_e = \phi'(e.l) \cdot \omega(e.r) \), and the probability of expansion \( e \) is the softmax of these scores over all valid expansions: \( \pi(e) = \frac{e^{z_e}}{\sum_{e' \in E} e^{z_{e'}}} \).
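The two passes can be sketched in a few lines of numpy. Single linear maps (F, G) with tanh stand in for the per-rule networks \( f_r \) and \( g_r \), and a toy one-rule grammar replaces the DSL; the point is only that the bottom-up representations of two same-symbol leaves coincide while their top-down representations, and hence their expansion scores, differ.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4  # size of every distributed representation

# Toy one-rule grammar "e -> e op e" (Q = 3 RHS symbols). Random stand-ins for
# the symbol embeddings phi, rule networks f_r / g_r, and rule embedding omega.
phi = {"e": rng.normal(size=M), "op": rng.normal(size=M)}
F = rng.normal(size=(M, 3 * M))   # f_r: concatenated RHS reps -> LHS rep
G = rng.normal(size=(3 * M, M))   # g_r: LHS rep -> concatenated RHS reps
omega = rng.normal(size=M)        # omega(r) for the single production rule

class Node:
    def __init__(self, symbol, children=()):
        self.symbol, self.children = symbol, list(children)

def recursive_up(node):
    # Bottom-up pass: leaves get phi(S(l)); inner nodes apply f_r to children.
    if not node.children:
        node.up = phi[node.symbol]
    else:
        node.up = np.tanh(F @ np.concatenate([recursive_up(c) for c in node.children]))
    return node.up

def reverse_down(node, rep):
    # Top-down pass: distribute the global root representation back to leaves.
    node.down = rep
    if node.children:
        for c, part in zip(node.children, np.split(np.tanh(G @ rep), len(node.children))):
            reverse_down(c, part)

# PPT for "e op e": both "e" leaves are expandable non-terminals.
root = Node("e", [Node("e"), Node("op"), Node("e")])
reverse_down(root, recursive_up(root))
scores = [leaf.down @ omega for leaf in root.children if leaf.symbol == "e"]
# The two scores differ even though both leaves carry the same symbol,
# because the reverse-recursive pass makes them position-dependent.
```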
An additional improvement that was found to help was to add a bidirectional LSTM (BLSTM) to process the global leaf representations right before calculating the scores. To do this, we first order the global leaf representations sequentially from the left-most leaf node to the right-most leaf node. We then treat each leaf node as a time step for a BLSTM to process. This provides a sort of skip connection between leaf nodes, which potentially reduces the path length that information needs to travel between leaf nodes in the tree. The BLSTM hidden states are then used in the score calculation rather than the leaves themselves.

The R3NN can be seen as an extension and combination of several previous tree-based models, which were mainly developed in the context of natural language processing (Le & Zuidema, 2014; Paulus et al., 2014; Irsoy & Cardie, 2013).

5 CONDITIONING WITH INPUT/OUTPUT EXAMPLES

Now that we have defined a generation process over tree-structured programs, we need a way of conditioning this generation process on a set of input/output examples. The set of input/output examples provides a nearly complete specification for the desired output program, and so a good encoding of the examples is crucial to the success of our program generator. For the most part, this example encoding needs to be domain-specific, since different DSLs have different inputs (some may operate over integers, some over strings, etc.). Therefore, in our case, we use an encoding adapted to the input-output strings that our DSL operates over. We also investigate different ways of conditioning program search on the learnt example input-output encodings.

5.1 ENCODING INPUT/OUTPUT EXAMPLES

There are two types of information that string manipulation programs need to extract from input-output examples: 1) constant strings, such as "@domain.com" or ".", which appear in all output examples; and 2) substring indices in the input, where an index might be further defined by a regular expression. These indices determine which parts of the input are also present in the output. To simplify the DSL, we assume that there is a fixed finite universe of possible constant strings that could appear in programs. Therefore we focus on extracting the second type of information, the substring indices.

In earlier hand-engineered systems such as FlashFill, this information was extracted from the input-output strings by running the Longest Common Substring algorithm, a dynamic programming algorithm that efficiently finds matching substrings in string pairs. To extract substrings, FlashFill runs LCS on every input-output string pair in the I/O set to get a set of substring candidates. It then takes the entire set of substring candidates and simply tries every possible regex and constant index that can be used at the substring boundaries, exhaustively searching for the one which is the most "general", where generality is specified by hand-engineered heuristics. In contrast to these previous methods, instead of hand-designing a complicated algorithm to extract regex-based substrings, we develop neural network based architectures that are capable of learning to extract, and produce continuous representations of, the likely regular expressions given the I/O examples.
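As a point of reference for the learned encoders introduced next, the matching-substring extraction step described above can be sketched with Python's difflib standing in for the dynamic-programming pass; a FlashFill-style system would then generalize the returned boundary indices with regex and constant-position candidates.

```python
from difflib import SequenceMatcher

def longest_common_substring(inp, out):
    """Longest matching block between an input and an output string; its
    input-side indices are the substring-boundary candidates that systems
    like FlashFill then try to generalize with regexes and constant positions."""
    m = SequenceMatcher(None, inp, out).find_longest_match(0, len(inp), 0, len(out))
    return inp[m.a:m.a + m.size], (m.a, m.a + m.size)

print(longest_common_substring("William Henry Charles", "Charles, W."))
# -> ('Charles', (14, 21))
```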
5.1.1 BASELINE LSTM ENCODER

Our first I/O encoding network involves running two separate deep bidirectional LSTM networks for processing the input and the output string in each example pair. For each pair, it then concatenates the topmost hidden representation at every time step to produce a \( 4HT \)-dimensional feature vector per I/O pair, where \( T \) is the maximum string length for any input or output string, and \( H \) is the topmost LSTM hidden dimension. We then concatenate the encoding vectors across all I/O pairs to get a vector representation of the entire I/O set. This encoding is conceptually straightforward and has very little prior knowledge about what operations are being performed over the strings, i.e., substring, constant, etc., which might make it difficult to discover substring indices, especially the ones based on regular expressions.

5.1.2 CROSS CORRELATION ENCODER

To help the model discover input substrings that are copied to the output, we designed a novel I/O example encoder that computes the cross correlation between each input and output example representation. We use the two output tensors of the LSTM encoder (discussed above) as inputs to this encoder. For each example pair, we first slide the output feature block over the input feature block and compute the dot product between the representations at each aligned position. We then sum over all overlapping time steps; since there are \( 2(T-1) \) possible alignments in total between the input and output feature blocks, this yields a \( 2(T-1) \)-dimensional encoding for each example pair, and the features of all pairs are then concatenated. An illustration of the cross-correlation encoder is shown in Figure 9. We also designed the following variants of this encoder.

Diffused Cross Correlation Encoder: This encoder is identical to the Cross Correlation encoder except that instead of summing over overlapping time steps after the element-wise dot product, we simply concatenate the vectors corresponding to all time steps, resulting in a final representation that contains \( 2(T-1) \cdot T \) features for each example pair.

LSTM-Sum Cross Correlation Encoder: In this variant of the Cross Correlation encoder, instead of doing an element-wise dot product, we run a bidirectional LSTM over the concatenated feature blocks of each alignment. We represent each alignment by the LSTM hidden representation of the final time step, leading to a total of \( 2 \cdot H \cdot 2(T-1) \) features for each example pair.

Augmented Diffused Cross Correlation Encoder: For this encoder, the output of each character position of the Diffused Cross Correlation encoder is combined with the character embedding at this position, and then a basic LSTM encoder is run over the combined features to extract a \( 4H \)-dimensional vector for both the input and the output streams. The LSTM encoder output is then concatenated with the output of the Diffused Cross Correlation encoder, forming a \( (4H + T(T-1)) \)-dimensional feature vector for each example pair.

5.2 CONDITIONING PROGRAM SEARCH ON EXAMPLE ENCODINGS

Once the I/O example encodings have been computed, we can use them to perform conditional generation of the program tree using the R3NN model. There are a number of ways in which the PPT generation model can be conditioned using the I/O example encodings, depending on where the I/O example information is inserted in the R3NN model.
We investigated three locations at which to inject the example encodings:

1) Pre-conditioning: example encodings are concatenated to the encoding of each tree leaf and then passed to a conditioning network before the bottom-up recursive pass over the program tree. The conditioning network can be either a multi-layer feedforward network or a bidirectional LSTM network running over the tree leaves. Running an LSTM over tree leaves allows the model to learn more about the relative position of each leaf node in the tree.

2) Post-conditioning: after the reverse-recursive pass, example encodings are concatenated to the updated representation of each tree leaf and then fed to a conditioning network before computing the expansion scores.

3) Root-conditioning: after the recursive pass over the tree, the root encoding is concatenated to the example encodings and passed to a conditioning network. The updated root representation is then used to drive the reverse-recursive pass.

Empirically, pre-conditioning worked better than either root- or post-conditioning. In addition, conditioning at all three places simultaneously did not cause a significant improvement over just pre-conditioning. Therefore, for the experimental section, we report models which only use pre-conditioning.

6 EXPERIMENTS

In order to evaluate and compare variants of the previously described models, we generate a dataset randomly from the DSL. To do so, we first enumerate all possible programs under the DSL up to a specific number of instructions, which are then partitioned into training, validation and test sets. In order to have a tractable number of programs, we limited the maximum number of instructions for programs to be 13. Length-13 programs are important for this specific DSL because all larger programs can be written as compositions of sub-programs of length at most 13. The semantics of length-13 programs therefore constitute the "atoms" of this particular DSL.

In testing our model, there are two different categories of generalization. The first is input/output generalization, where we are given a new set of input/output examples as well as a program with a specific tree that we have seen during training. This represents the model's capacity to be applied to new data. The second category is program generalization, where we are given both a previously unseen program tree and unseen input/output examples. Therefore the model needs to have a sufficient understanding of the semantics of the DSL that it can construct novel combinations of operations. For all reported results, training sets correspond to the first type of generalization, since we have seen the program tree but not the input/output pairs. Test sets represent the second type of generalization, as they are trees which have not been seen before, on input/output pairs that have also not been seen before.

In this section, we compare several different variants of our model. We first evaluate the effect of each of the previously described input/output encoders. We then evaluate the R3NN model against a simple recurrent model called io2seq, which is basically an LSTM that takes as input the input/output conditioning vector and outputs a sequence of DSL symbols representing a linearized program tree. Finally, we report the results of the best model on the length-13 training and testing sets, as well as on a set of 238 benchmark functions.
6.1 SETUP AND HYPERPARAMETER SETTINGS

For training the R3NN, two choices were crucial for stabilizing training: the use of hyperbolic tangent activation functions in both the R3NN and the cross-correlation I/O encoders (other activations such as ReLU diverged more consistently in our initial experiments), and the use of minibatches of size 8. Additionally, for all results, the program tree generation is conditioned on a set of 10 input/output string pairs. We used Adam (Kingma & Ba, 2014) to optimize the networks with a learning rate of 0.001. Network weights used the default Torch initializations.

Because each sample in a batch has a potentially different tree structure, batching tree-based neural networks is difficult, and we needed to do batching sequentially. Therefore, for each mini-batch of size \( N \), we accumulated the gradients for each sample; after all \( N \) sample gradients were accumulated, we updated the parameters and reset the accumulated gradients. Due to this sequential processing, in order to train models in a reasonable time, we limited our batch sizes to between 8 and 12. Despite the computational inefficiency, batching was critical to successfully training an R3NN, as online learning often caused the network to diverge.

For each latent function and set of input/output examples that we test on, we report whether we had a success after sampling 100 functions from the model and testing all 100 to see if one of these functions is equivalent to the latent function. Here we consider two functions to be equivalent with respect to a specific input/output example set if the functions output the same strings when run on the inputs. Under this definition, two functions can have a different set of operations but still be equivalent with respect to a specific input-output set.

We restricted the maximum size of training programs to 13 because of two computational considerations. As described earlier, one difficulty is in batching tree-based neural networks of different structure, and the computational cost of batching increases with the size of the program trees. The second issue is that valid I/O strings for programs often grow with the program length, in the sense that for programs of length 40 a minimal valid I/O string will typically be much longer than a minimal valid I/O string for length-20 programs. For example, for a program such as (Concat (ConstStr "longstring") (Concat (ConstStr "longstring") (Concat (ConstStr "longstring") ...))), the valid output string would be "longstringlongstringlongstring...", which could be many hundreds of characters long. Because of limited GPU memory, the I/O encoder models can quickly run out of memory.

<table>
<tr> <th>I/O Encoding</th> <th>Train</th> <th>Test</th> </tr>
<tr> <td>LSTM</td> <td>88%</td> <td>88%</td> </tr>
<tr> <td>Cross Correlation (CC)</td> <td>67%</td> <td>65%</td> </tr>
<tr> <td>Diffused CC</td> <td>89%</td> <td>88%</td> </tr>
<tr> <td>LSTM-sum CC</td> <td>90%</td> <td>91%</td> </tr>
<tr> <td>Augmented diffused CC</td> <td>91%</td> <td>91%</td> </tr>
</table>

Table 1: The effect of different input/output encoders on accuracy. Each result used 100 samples. There is almost no generalization error in the results.

<table>
<tr> <th>Sampling</th> <th>Train</th> <th>Test</th> </tr>
<tr> <td>io2seq</td> <td>44%</td> <td>42%</td> </tr>
</table>

Table 2: Testing the I/O-vector-to-sequence model. Each result used 100 samples.
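The success criterion described above amounts to a few lines of code. Here model.sample(examples) is a hypothetical interface standing in for decoding one program from the R3NN conditioned on the I/O examples; a task counts as solved if any of the k sampled programs reproduces every output.

```python
def consistent(program, examples):
    """Functional equivalence w.r.t. an I/O set: a candidate counts as correct
    if it reproduces every output string when run on the given inputs."""
    return all(program(i) == o for i, o in examples)

def success_at_k(model, examples, k=100):
    # model.sample(examples) is a hypothetical interface: decode one program
    # from the generative model, conditioned on the I/O example encoding.
    return any(consistent(model.sample(examples), examples) for _ in range(k))
```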
6.2 EXAMPLE ENCODING

In this section, we evaluate the effect of several different input/output example encoders. To control for the effect of the tree model, all results here used an R3NN with fixed hyperparameters to generate the program tree. Table 1 shows the performance of these encoders. The summed cross-correlation encoder did not perform well, likely because the summation destroys positional information that is useful for determining specific substring indices. The LSTM-sum and the augmented diffused cross-correlation models performed best. Surprisingly, the plain LSTM encoder was capable of finding nearly 88% of all programs without any prior knowledge explicitly built into the architecture.

We use 100 samples for evaluating both the Train and Test sets. Training accuracy is sometimes slightly lower than test accuracy because there are close to 5 million training programs, of which we visit fewer than 2 million during training. To report the training results in the tables, we sample a subset of 1000 programs from the 5-million-program training set; the test sets likewise consist of 1000 programs.

6.3 io2seq

In this section, we motivate the use of the R3NN by testing whether a simpler model can also generate programs. The io2seq model is an LSTM whose initial hidden and cell states are a function of the input/output encoding vector; it then generates a linearized program tree symbol by symbol. An example linearized program tree is (S (e (f (ConstStr “@”) ConstStr) f) e) S, where each closing parenthesis is followed by the symbol of the subtree it closes; this sequence represents the program tree that returns the constant string “@”. Predicting a linearized tree with an LSTM has also been done in the context of parsing (Vinyals et al., 2015). For the io2seq model, we used the LSTM-sum cross-correlation I/O conditioning model.

The results in Table 2 show that the performance of the io2seq model at 100 samples per latent test function is far worse than that of the R3NN: around 42% versus 91%, respectively. A likely reason is that the io2seq model must make far more decisions than the R3NN, since it has to predict the parenthesis symbols that determine at which level of the tree each symbol sits. For example, the io2seq model requires on the order of 100 decisions for length-13 programs, while the R3NN requires no more than 13.

6.4 EFFECT OF SAMPLING MULTIPLE PROGRAMS

For the best R3NN model that we trained, we also evaluated the effect of the number of samples per latent function on performance; the results are shown in Table 3, and the evaluation loop is sketched below. The improvement as the sample size grows suggests that the model has a notion of what type of program satisfies a given I/O set, but is less certain about details such as which regular expression to use. By 300 samples, the model is nearing perfect accuracy on the test sets.

<table> <tr> <th>Sampling</th> <th>Train</th> <th>Test</th> </tr> <tr> <td>1-best</td> <td>60%</td> <td>63%</td> </tr> <tr> <td>1-sample</td> <td>56%</td> <td>57%</td> </tr> <tr> <td>10-sample</td> <td>81%</td> <td>79%</td> </tr> <tr> <td>50-sample</td> <td>91%</td> <td>89%</td> </tr> <tr> <td>100-sample</td> <td>94%</td> <td>94%</td> </tr> <tr> <td>300-sample</td> <td>97%</td> <td>97%</td> </tr> </table>

Table 3: The effect of sampling multiple programs on accuracy. 1-best deterministically chooses the highest-probability expansion at each step.
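The following sketch makes the success criterion behind these sampling experiments explicit. `sample_program` and `run_program` are caller-supplied stand-ins (hypothetical names) for the trained model's sampler and a DSL interpreter; they are not part of the paper's implementation.

```python
def solves_task(sample_program, run_program, io_pairs, k=100):
    """Report success if any of `k` sampled programs is I/O-equivalent
    to the latent function, i.e. reproduces every output on the given
    inputs. `sample_program() -> program` and
    `run_program(program, input_str) -> str` are hypothetical stand-ins
    for the model sampler and a DSL interpreter."""
    for _ in range(k):
        program = sample_program()
        if all(run_program(program, i) == o for i, o in io_pairs):
            return True    # an I/O-equivalent program was found
    return False           # none of the k samples matched all examples
```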
![Line graph showing model accuracy with increasing I/O examples](page_328_370_627_246.png)

Figure 4: Train and test accuracies for models trained with different numbers of input-output examples.

6.5 EFFECT OF NUMBER OF INPUT-OUTPUT EXAMPLES

We evaluate the effect of varying the number of input-output examples used to train the input-output encoders. The 1-best accuracy on train and test data for models trained for 74 epochs is shown in Figure 4. As expected, accuracy increases with the number of input-output examples, since more examples add more information to the encoder and further constrain the space of consistent programs in the DSL.

6.6 FLASHFILL BENCHMARKS

We also evaluate our learnt models on 238 real-world FlashFill benchmarks obtained from the Microsoft Excel team and online help forums. These benchmarks involve string manipulation tasks described using input-output examples. We evaluate two models with a cross-correlation encoder, one trained on 5 input-output examples per task and another trained on 10. Both models were trained on randomly sampled programs from the DSL up to size 13, with randomly generated input-output examples.

The distribution of the size of the smallest DSL program needed to solve each benchmark task is shown in Figure 5(a); it varies from 4 to 63. The figure also shows the number of benchmarks for which our model was able to learn the program from 5 input-output examples when sampling the top 2000 learnt programs. In total, the model learns programs for 91 tasks (38.2%). Since the model was trained on programs up to size 13, it is not surprising that it cannot solve tasks that need larger programs. Of the 110 FlashFill benchmarks that require programs of size up to 13, the model solves 82.7%.

The effect of sampling multiple learnt programs instead of only the top program is shown in Figure 5(b). With only 10 samples, the model can already solve about 13% of the benchmarks. We observe a steady increase in performance up to about 2000 samples, after which we see no significant improvement. Since there are more than 2 million programs of length 11 alone in the DSL, enumerative techniques with uniform search do not scale well (Alur et al., 2015).

We also evaluate the model trained with 10 input-output examples per benchmark, which learns programs for only about 29% of the FlashFill benchmarks. Since the FlashFill benchmarks contain only 5 input-output examples per task, we duplicated the I/O examples in order to run the model that takes 10 examples as input.

Figure 5: (a) The distribution of the program sizes needed to solve the FlashFill tasks and the performance of our model; (b) the effect of sampling when trying the top-k learnt programs.
<table> <tr> <th>Input \( v \)</th> <th>Output</th> </tr> <tr> <td>[CPT-00350</td> <td>[CPT-00350]</td> </tr> <tr> <td>[CPT-00340</td> <td>[CPT-00340]</td> </tr> <tr> <td>[CPT-114563</td> <td>[CPT-114563]</td> </tr> <tr> <td>[CPT-1AB02</td> <td>[CPT-1AB02]</td> </tr> <tr> <td>[CPT-00360</td> <td>[CPT-00360]</td> </tr> </table>

(a)

<table> <tr> <th>Input \( v \)</th> <th>Output</th> </tr> <tr> <td>732606129</td> <td>0x73</td> </tr> <tr> <td>430257526</td> <td>0x43</td> </tr> <tr> <td>444004480</td> <td>0x44</td> </tr> <tr> <td>371255254</td> <td>0x37</td> </tr> <tr> <td>635272676</td> <td>0x63</td> </tr> </table>

(b)

<table> <tr> <th>Input \( v \)</th> <th>Output</th> </tr> <tr> <td>John Doyle</td> <td>John D.</td> </tr> <tr> <td>Matt Walters</td> <td>Matt W.</td> </tr> <tr> <td>Jody Foster</td> <td>Jody F.</td> </tr> <tr> <td>Angela Lindsay</td> <td>Angela L.</td> </tr> <tr> <td>Maria Schulte</td> <td>Maria S.</td> </tr> </table>

(c)

Figure 6: Some example solved benchmarks: (a) cleaning up medical codes by adding the missing closing bracket, (b) generating hex numbers from the first two digits, (c) transforming names to first name and last initial.

Our models are trained on a synthetic dataset generated uniformly from the DSL. Because of the discrepancy between this training distribution (uniform) and the FlashFill benchmark distribution, the model conditioned on 10 input/output examples might not perform best on the FlashFill benchmarks, even though it performs better on the synthetic distribution on which it was trained, as shown in Figure 4.

Our model is able to solve the majority of FlashFill benchmarks that require learning programs with up to 3 Concat operations. We now describe a few of these benchmarks, also shown in Figure 6. An Excel user wanted to clean a set of medical billing records by adding a missing “]” to medical codes, as shown in Figure 6(a). Given these 5 input-output examples, our system learns the program Concat(SubStr(v, ConstPos(0), (d, -1, End)), ConstStr("]")), which concatenates the substring between the start of the input string and the end of the last match of the digit regular expression with the constant string “]”. Another task, which required the user to transform numbers into a hex-like format, is shown in Figure 6(b); here our system learns Concat(ConstStr("0x"), SubStr(v, ConstPos(0), ConstPos(2))). (A direct Python rendering of both programs is given below.) For some benchmarks with long input strings, the system is still able to learn regular expressions that extract the desired substring, e.g., it learns a program to extract “NancyF” from the string “123456789,freehafer,drew,nancy,19700101,11/1/2007,NancyF@north.com,1230102,123 1st Avenue,Seattle,wa,09999”.

Our system is currently unable to learn programs for benchmarks that require 4 or more Concat operations. Two such benchmarks are shown in Figure 7: the name-combining task in Figure 7(a) requires 6 Concat arguments, and the phone-number transformation task in Figure 7(b) requires 5. This is mainly due to scalability issues in training with larger programs. There are also a few interesting benchmarks where the R3NN model gets very close to learning the desired program. For example, for the task “Bill Gates” → “Mr. Bill Gates”, it learns a program that generates “Mr.Bill Gates” (missing the whitespace), and for the task “617-444-5454” → “(617) 444-5454”, it learns a program that generates the string “(617 444-5454” (missing the closing parenthesis).
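To make the behavior of the two learnt programs above concrete, here is a direct Python translation, a sketch of what the DSL terms compute rather than how the system executes them. The maximal-digit-run reading of the d token is our assumption.

```python
import re

def clean_medical_code(v: str) -> str:
    # Concat(SubStr(v, ConstPos(0), (d, -1, End)), ConstStr("]")):
    # the prefix of v up to the end of the last digit match, plus "]".
    last_digit_end = [m.end() for m in re.finditer(r"\d+", v)][-1]
    return v[0:last_digit_end] + "]"

def hex_prefix(v: str) -> str:
    # Concat(ConstStr("0x"), SubStr(v, ConstPos(0), ConstPos(2))):
    # "0x" followed by the first two characters of v.
    return "0x" + v[0:2]

assert clean_medical_code("[CPT-00350") == "[CPT-00350]"
assert hex_prefix("732606129") == "0x73"
```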
<table> <tr> <th>Input \( v \)</th> <th>Output</th> </tr> <tr> <td>John James Paul</td> <td>John, James, and Paul.</td> </tr> <tr> <td>Tom Mike Bill</td> <td>Tom, Mike, and Bill.</td> </tr> <tr> <td>Marie Nina John</td> <td>Marie, Nina, and John.</td> </tr> <tr> <td>Reggie Anna Adam</td> <td>Reggie, Anna, and Adam.</td> </tr> </table>

(a)

<table> <tr> <th>Input \( v \)</th> <th>Output</th> </tr> <tr> <td>(425) 221 6767</td> <td>425-221-6767</td> </tr> <tr> <td>206.225.1298</td> <td>206-225-1298</td> </tr> <tr> <td>617-224-9874</td> <td>617-224-9874</td> </tr> <tr> <td>425.118.9281</td> <td>425-118-9281</td> </tr> </table>

(b)

Figure 7: Some unsolved benchmarks: (a) combining names with different delimiters; (b) transforming phone numbers to a consistent format.

7 RELATED WORK

Recent years have seen renewed interest in the area of program induction and synthesis. In the machine learning community, a number of promising neural architectures have been proposed to perform program induction. These methods employ architectures inspired by computational modules such as Turing machines and RAM (Graves et al., 2014; Kurach et al., 2015; Reed & de Freitas, 2015; Neelakantan et al., 2015) or by common data structures such as the stacks used in many algorithms (Joulin & Mikolov, 2015). These approaches represent the atomic operations of the network in a differentiable form, which allows efficient end-to-end training of a neural controller. However, unlike our approach, which learns comprehensible complete programs, many of these approaches learn only the program behavior (i.e., they produce the desired outputs on new input data). Some recently proposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunel et al., 2016) do learn interpretable programs, but these techniques require training a separate neural network model for each individual task, which is undesirable in many synthesis settings where we would like to learn programs in real time for a large number of tasks. Liang et al. (2010) restrict the problem space with a probabilistic context-free grammar and introduce a new representation of programs based on combinatory logic, which allows sub-programs to be shared across multiple tasks; they then take a hierarchical Bayesian approach to learn frequently occurring substructures of programs. Our approach instead uses neural architectures to condition the search space of programs, and does not require the additional step of representing the program space in combinatory logic to enable sharing.

The DSL-based program synthesis approach has also seen renewed interest recently (Alur et al., 2015). It has been used for many applications, including synthesizing low-level bitvector implementations (Solar-Lezama et al., 2005), Excel macros for data manipulation (Gulwani, 2011; Gulwani et al., 2012), superoptimization by finding smaller equivalent loop bodies (Schkufza et al., 2013), protocol synthesis from scenarios (Udupa et al., 2013), synthesis of loop-free programs (Gulwani et al., 2011), and automated feedback generation for programming assignments (Singh et al., 2013). The synthesis techniques proposed in the literature generally employ various search techniques, including enumeration with pruning, symbolic constraint solving, and stochastic search, while supporting different forms of specification, including input-output examples, partial programs, program invariants, and reference implementations.
In this paper, we consider input-output example based specifications over the hypothesis space defined by a DSL of string transformations similar to that of FlashFill (without conditionals) (Gulwani, 2011). The key difference between our approach and previous techniques is that our system is trained completely end-to-end, whereas previous techniques require significant manual effort to design heuristics for efficient search. There is some work on guiding program search using learnt clues that suggest likely DSL expansions, but those clues are learnt over hand-coded textual features of the examples (Menon et al., 2013). Moreover, their DSL consists of compositions of about 100 high-level text transformation functions such as count and dedup, whereas our DSL consists of tree-structured programs over richer regular-expression-based substring constructs.

There is also a recent line of work on learning probabilistic models of code from large numbers of code repositories ("big code") (Raychev et al., 2015; Bielik et al., 2016; Hindle et al., 2016), which are then used for applications such as auto-completion of partial programs, inference of variable and method names, and program repair. These language models typically capture only the syntactic properties of code, unlike our approach, which also tries to capture the semantics in order to learn the desired program. Maddison & Tarlow (2014) address the problem of learning structured generative models of source code, but both their model and their application domain differ from ours. Piech et al. (2015) use an NPM-RNN model to embed program ASTs, where a subtree of the AST rooted at a node n is represented by a matrix obtained by combining the representations of the children of node n with the embedding matrix of node n itself (which corresponds to its functional behavior). The forward pass in our R3NN architecture from leaf nodes to the root node is similar at a high level, but we use a distributed representation for each grammar symbol, which leads to a different root representation. Moreover, the R3NN also performs a reverse-recursive pass so that every node in the tree encodes global information about the other nodes in the tree. Finally, the R3NN network is then used to incrementally build a tree to synthesize a program.

The R3NN model employed in our work is related to several tree- and graph-structured neural networks in the NLP literature (Le & Zuidema, 2014; Paulus et al., 2014; Irsoy & Cardie, 2013). The Inside-Outside Recursive Neural Network (Le & Zuidema, 2014) is the most similar to the R3NN: it generates a parse tree incrementally by using global leaf-level representations to determine which expansions of the parse tree to take next.

8 CONCLUSION

We have proposed a novel technique called Neuro-Symbolic Program Synthesis that constructs a program incrementally from given input-output examples. To do so, a new neural architecture, the Recursive-Reverse-Recursive Neural Network, is used to encode and expand a partial program tree into a full program tree. We demonstrated its effectiveness at example-based program synthesis, even for programs that were never seen during training. These promising results open up a number of interesting directions for future research. For example, we took a supervised-learning approach here, assuming the availability of target programs during training. In some scenarios, we may only have access to an oracle that returns the desired output for a given input.
In this case, reinforcement learning is a promising framework for program synthesis.

REFERENCES

Alur, Rajeev, Bodík, Rastislav, Dallal, Eric, Fisman, Dana, Garg, Pranav, Juniwal, Garvit, Kress-Gazit, Hadas, Madhusudan, P., Martin, Milo M. K., Raghothaman, Mukund, Saha, Shamwaditya, Seshia, Sanjit A., Singh, Rishabh, Solar-Lezama, Armando, Torlak, Emina, and Udupa, Abhishek. Syntax-guided synthesis. In Dependable Software Systems Engineering, pp. 1–25, 2015.

Bielik, Pavol, Raychev, Veselin, and Vechev, Martin T. PHOG: probabilistic model for code. In ICML, pp. 2933–2942, 2016.

Biermann, Alan W. The inference of regular LISP programs from examples. IEEE Transactions on Systems, Man, and Cybernetics, 8(8):585–600, 1978.

Bunel, Rudy, Desmaison, Alban, Kohli, Pushmeet, Torr, Philip H. S., and Kumar, M. Pawan. Adaptive neural compilation. CoRR, abs/1605.07969, 2016. URL http://arxiv.org/abs/1605.07969.

Gaunt, Alexander L., Brockschmidt, Marc, Singh, Rishabh, Kushman, Nate, Kohli, Pushmeet, Taylor, Jonathan, and Tarlow, Daniel. TerpreT: A probabilistic programming language for program induction. arXiv preprint arXiv:1608.04428, 2016.

Graves, Alex, Wayne, Greg, and Danihelka, Ivo. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

Gulwani, Sumit. Automating string processing in spreadsheets using input-output examples. In POPL, pp. 317–330, 2011.

Gulwani, Sumit, Jha, Susmit, Tiwari, Ashish, and Venkatesan, Ramarathnam. Synthesis of loop-free programs. In PLDI, pp. 62–73, 2011.

Gulwani, Sumit, Harris, William, and Singh, Rishabh. Spreadsheet data manipulation using examples. Communications of the ACM, August 2012.

Hindle, Abram, Barr, Earl T., Gabel, Mark, Su, Zhendong, and Devanbu, Premkumar T. On the naturalness of software. Communications of the ACM, 59(5):122–131, 2016.

Irsoy, Ozan and Cardie, Claire. Bidirectional recursive neural networks for token-level labeling with structure. In NIPS Deep Learning Workshop, 2013.

Joulin, Armand and Mikolov, Tomas. Inferring algorithmic patterns with stack-augmented recurrent nets. In NIPS, pp. 190–198, 2015.

Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. In ICLR, 2014.

Kurach, Karol, Andrychowicz, Marcin, and Sutskever, Ilya. Neural random-access machines. arXiv preprint arXiv:1511.06392, 2015.

Le, Phong and Zuidema, Willem. The inside-outside recursive neural network model for dependency parsing. In EMNLP, pp. 729–739, 2014.

Liang, Percy, Jordan, Michael I., and Klein, Dan. Learning programs: A hierarchical Bayesian approach. In ICML, pp. 639–646, 2010.

Maddison, Chris J. and Tarlow, Daniel. Structured generative models of natural source code. In ICML, pp. 649–657, 2014.

Menon, Aditya Krishna, Tamuz, Omer, Gulwani, Sumit, Lampson, Butler W., and Kalai, Adam. A machine learning framework for programming by example. In ICML, pp. 187–195, 2013.

Neelakantan, Arvind, Le, Quoc V., and Sutskever, Ilya. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.

Paulus, Romain, Socher, Richard, and Manning, Christopher D. Global belief recursive neural networks. In NIPS, pp. 2888–2896, 2014.

Piech, Chris, Huang, Jonathan, Nguyen, Andy, Phulsuksombati, Mike, Sahami, Mehran, and Guibas, Leonidas J. Learning program embeddings to propagate feedback on student code. In ICML, pp. 1093–1102, 2015.

Raychev, Veselin, Vechev, Martin T., and Krause, Andreas. Predicting program properties from "big code". In POPL, pp. 111–124, 2015.

Reed, Scott and de Freitas, Nando. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.
Riedel, Sebastian, Bosnjak, Matko, and Rocktäschel, Tim. Programming with a differentiable Forth interpreter. CoRR, abs/1605.06640, 2016. URL http://arxiv.org/abs/1605.06640.

Schkufza, Eric, Sharma, Rahul, and Aiken, Alex. Stochastic superoptimization. In ASPLOS, pp. 305–316, 2013.

Singh, Rishabh and Solar-Lezama, Armando. Synthesizing data structure manipulations from storyboards. In SIGSOFT FSE, pp. 289–299, 2011.

Singh, Rishabh, Gulwani, Sumit, and Solar-Lezama, Armando. Automated feedback generation for introductory programming assignments. In PLDI, pp. 15–26, 2013.

Solar-Lezama, Armando. Program Synthesis by Sketching. PhD thesis, EECS Dept., UC Berkeley, 2008.

Solar-Lezama, Armando, Rabbah, Rodric, Bodík, Rastislav, and Ebcioğlu, Kemal. Programming by sketching for bit-streaming programs. In PLDI, 2005.

Summers, Phillip D. A methodology for LISP program construction from examples. Journal of the ACM, 24(1):161–175, 1977.

Udupa, Abhishek, Raghavan, Arun, Deshmukh, Jyotirmoy V., Mador-Haim, Sela, Martin, Milo M. K., and Alur, Rajeev. TRANSIT: specifying protocols with concolic snippets. In PLDI, pp. 287–296, 2013.

Vinyals, Oriol, Kaiser, Lukasz, Koo, Terry, Petrov, Slav, Sutskever, Ilya, and Hinton, Geoffrey. Grammar as a foreign language. In ICLR, 2015.

A DOMAIN-SPECIFIC LANGUAGE FOR STRING TRANSFORMATIONS

The semantics of the DSL programs is shown in Figure 8.

\[
\begin{align*}
[\![\text{Concat}(f_1, \cdots, f_n)]\!]_v &= \text{Concat}([\![f_1]\!]_v, \cdots, [\![f_n]\!]_v) \\
[\![\text{ConstStr}(s)]\!]_v &= s \\
[\![\text{SubStr}(v, p_l, p_r)]\!]_v &= v[\,[\![p_l]\!]_v \,..\, [\![p_r]\!]_v\,] \\
[\![\text{ConstPos}(k)]\!]_v &= k \geq 0 \;?\; k : \text{len}(v) + k \\
[\![(r, k, \text{Start})]\!]_v &= \text{start of the } k^{\text{th}} \text{ match of } r \text{ in } v, \\
&\quad \text{counted from the beginning if } k > 0 \text{, from the end if } k < 0 \\
[\![(r, k, \text{End})]\!]_v &= \text{end of the } k^{\text{th}} \text{ match of } r \text{ in } v, \\
&\quad \text{counted from the beginning if } k > 0 \text{, from the end if } k < 0
\end{align*}
\]

Figure 8: The semantics of the DSL for string transformations.

The semantics of a Concat expression is to concatenate the results of recursively evaluating the constituent substring expressions \( f_i \). The semantics of ConstStr(s) is simply to return the constant string \( s \). The semantics of a substring expression is to first evaluate the two position logics \( p_l \) and \( p_r \) to \( p_1 \) and \( p_2 \) respectively, and then return the substring \( v[p_1..p_2] \). We write \( s[i..j] \) for the substring of string \( s \) starting at index \( i \) (inclusive) and ending at index \( j \) (exclusive), and \( \text{len}(s) \) for its length. The semantics of a ConstPos(k) expression is to return \( k \) if \( k \geq 0 \), or \( \text{len}(v) + k \) if \( k < 0 \). The semantics of the position logic \( (r, k, \text{Start}) \) is to return the start of the \( k^{\text{th}} \) match of \( r \) in \( v \), counted from the beginning (if \( k > 0 \)) or from the end (if \( k < 0 \)); the semantics of \( (r, k, \text{End}) \) is analogous, returning the end of the match.

![A diagram showing the cross correlation encoder to encode a single input-output example.](page_324_670_900_350.png)

Figure 9: The cross-correlation encoder for a single input-output example.
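As a companion to the semantics in Figure 8, the following is a sketch of an interpreter for this DSL in Python. The nested-tuple encoding of programs, the (partial) regex token table, and the maximal-match reading of the regex tokens are our own illustrative assumptions.

```python
import re

# Partial table of the DSL's regex tokens (the paper lists 8);
# these ASCII keys and patterns are illustration choices.
TOKENS = {"p": r"[A-Z][a-z]+", "C": r"[A-Z]+", "l": r"[a-z]+",
          "d": r"\d+", "a": r"[A-Za-z]+", "an": r"[A-Za-z0-9]+"}

def eval_pos(p, v):
    """Evaluate a position logic to an index into v (Figure 8)."""
    if p[0] == "ConstPos":
        k = p[1]
        return k if k >= 0 else len(v) + k
    r, k, direction = p                            # (token, k, Start|End)
    matches = list(re.finditer(TOKENS.get(r, re.escape(r)), v))
    m = matches[k - 1] if k > 0 else matches[k]    # k-th from front or back
    return m.start() if direction == "Start" else m.end()

def eval_prog(e, v):
    """Evaluate a DSL expression, encoded as nested tuples, on input v."""
    op = e[0]
    if op == "Concat":
        return "".join(eval_prog(f, v) for f in e[1:])
    if op == "ConstStr":
        return e[1]
    if op == "SubStr":                             # v is implicit here
        return v[eval_pos(e[1], v):eval_pos(e[2], v)]  # end-exclusive
    raise ValueError("unknown operator: %r" % (op,))

# The learnt programs for the tasks of Figure 6(a) and 6(b):
medical = ("Concat",
           ("SubStr", ("ConstPos", 0), ("d", -1, "End")),
           ("ConstStr", "]"))
hexlike = ("Concat",
           ("ConstStr", "0x"),
           ("SubStr", ("ConstPos", 0), ("ConstPos", 2)))
assert eval_prog(medical, "[CPT-00350") == "[CPT-00350]"
assert eval_prog(hexlike, "732606129") == "0x73"
```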
This encoding is conceptually straightforward and has very little prior knowledge about what operations are being performed over the strings, i.e., substring, constant, etc., which might make it difficult to discover substring indices, especially the ones based on regular expressions. 5.1.2 CROSS CORRELATION ENCODER To help the model discover input substrings that are copied to the output, we designed an novel I/O example encoder to compute the cross correlation between each input and output example representation. We used the two output tensors of the LSTM encoder (discussed above) as inputs to this encoder. For each example pair, we first slide the output feature block over the input feature block and compute the dot product between the respective position representation. Then, we sum over all overlapping time steps. Features of all pairs are then concatenated to form a \(2 * (T - 1)\)-dimensional vector encoding for all example pairs. There are \(2 * (T - 1)\) possible alignments in total between input and output feature blocks. An illustration of the cross-correlation encoder is shown in Figure9. We also designed the following variants of this encoder. Diffused Cross Correlation Encoder: This encoder is identical to the Cross Correlation encoder except that instead of summing over overlapping time steps after the element-wise dot product, we simply concatenate the vectors corresponding to all time steps, resulting in a final representation that contains \(2 * (T - 1) * T\) features for each example pair. LSTM-Sum Cross Correlation Encoder: In this variant of the Cross Correlation encoder, instead of doing an element-wise dot product, we run a bidirectional LSTM over the concatenated feature blocks of each alignment. We represent each alignment by the LSTM hidden representation of the final time step leading to a total of \(2 * H * 2 * (T - 1)\) features for each example pair. Augmented Diffused Cross Correlation Encoder: For this encoder, the output of each character position of the Diffused Cross Correlation encoder is combined with the character embedding at this position, then a basic LSTM encoder is run over the combined features to extract a \(4*H\)-dimensional vector for both the input and the output streams. The LSTM encoder output is then concatenated with the output of the Diffused Cross Correlation encoder forming a \((4*H+T*(T-1))\)-dimensional feature vector for each example pair. 5.2 CONDITIONING PROGRAM SEARCH ON EXAMPLE ENCODINGS Once the I/O example encodings have been computed, we can use them to perform conditional generation of the program tree using the R3NN model. There are a number of ways in which the PPT generation model can be conditioned using the I/O example encodings depending on where the I/O example information is inserted in the R3NN model. We investigated three locations to inject example encodings: 1) Pre-conditioning: where example encodings are concatenated to the encoding of each tree leaf, and then passed to a conditioning network before the bottom-up recursive pass over the program tree. The conditioning network can be either a multi-layer feedforward network, or a bidirectional LSTM network running over tree leaves. Running an LSTM over tree leaves allows the model to learn more about the relative position of each leaf node in the tree. 
2) Post-conditioning: After the reverse-recursive pass, example encodings are concatenated to the updated representation of each tree leaf and then fed to a conditioning network before computing the expansion scores. 3) Root-conditioning: After the recursive pass over the tree, the root encoding is concatenated to the example encodings and passed to a conditioning network. The updated root representation is then used to drive the reverse-recursive pass. Empirically, pre-conditioning worked better than either root- or post- conditioning. In addition, conditioning at all 3 places simultaneously did not cause a significant improvement over just pre-conditioning. Therefore, for the experimental section, we report models which only use pre-conditioning. 6 EXPERIMENTS In order to evaluate and compare variants of the previously described models, we generate a dataset randomly from the DSL. To do so, we first enumerate all possible programs under the DSL up to a specific number of instructions, which are then partitioned into training, validation and test sets. In order to have a tractable number of programs, we limited the maximum number of instructions for programs to be 13. Length 13 programs are important for this specific DSL because all larger programs can be written as compositions of sub-programs of length at most 13. The semantics of length 13 programs therefore constitute the “atoms” of this particular DSL. In testing our model, there are two different categories of generalization. The first is input/output generalization, where we are given a new set of input/output examples as well as a program with a specific tree that we have seen during training. This represents the model’s capacity to be applied on new data. The second category is program generalization, where we are given both a previously unseen program tree in addition to unseen input/output examples. Therefore the model needs to have a sufficient enough understanding of the semantics of the DSL that it can construct novel combinations of operations. For all reported results, training sets correspond to the first type of generalization since we have seen the program tree but not the input/output pairs. Test sets represent the second type of generalization, as they are trees which have not been seen before on input/output pairs that have also not been seen before. In this section, we compare several different variants of our model. We first evaluate the effect of each of the previously described input/output encoders. We then evaluate the R3NN model against a simple recurrent model called io2seq, which is basically an LSTM that takes as input the input/output conditioning vector and outputs a sequence of DSL symbols that represents a linearized program tree. Finally, we report the results of the best model on the length 13 training and testing sets, as well as on a set of 238 benchmark functions. 6.1 SETUP AND HYPERPARAMETERS SETTINGS For training the R3NN, two hyperparameters that were crucial for stabilizing training were the use of hyperbolic tangent activation functions in both R3NN (other activations such as ReLU more consistently diverged during our initial experiments) and cross-correlation I/O encoders and the use of minibatches of length 8. Additionally, for all results, the program tree generation is conditioned on a set of 10 input/output string pairs. We used ADAM [Kingma & Ba (2014)] to optimize the networks with a learning rate of 0.001. Network weights used the default torch initializations. 
Due to the difficulty of batching tree-based neural networks since each sample in a batch has a potentially different tree structure, we needed to do batching sequentially. Therefore for each mini-batch of size \( N \), we accumulated the gradients for each sample. After all N sample gradients were accumulated, we updated the parameters and reset the accumulated gradients. Due to this sequential processing, in order to train models in a reasonable time, we limited our batch sizes to between 8-12. Despite the computational inefficiency, batching was critical to successfully train an R3NN, as online learning often caused the network to diverge. For each latent function and set of input/output examples that we test on, we report whether we had a success after sampling 100 functions from the model and testing all 100 to see if one of these functions is equivalent to the latent function. Here we consider two functions to be equivalent with respect to a specific input/output example set if the functions output the same strings when run on the inputs. Under this definition, two functions can have a different set of operations but still be equivalent with respect to a specific input-output set. We restricted the maximum size of training programs to be 13 because of two computational considerations. As described earlier, one difficulty was in batching tree-based neural networks of different structure and the computational cost of batching increases with the increase in size of the program trees. The second issue is that valid I/O strings for programs often grow with the program length, in the sense that for programs of length 40 a minimal valid I/O string will typically be much longer than a minimal valid I/O string for length 20 programs. For example, for a program such as (Concat (ConstStr “longstring”) (Concat (ConstStr “longstring”) (Concat (ConstStr “longstring”) …))), the valid output string would be “longstringlongstringlongstring…” which could be many <table> <tr> <th>I/O Encoding</th> <th>Train</th> <th>Test</th> </tr> <tr> <td>LSTM</td> <td>88%</td> <td>88%</td> </tr> <tr> <td>Cross Correlation (CC)</td> <td>67%</td> <td>65%</td> </tr> <tr> <td>Diffused CC</td> <td>89%</td> <td>88%</td> </tr> <tr> <td>LSTM-sum CC</td> <td>90%</td> <td>91%</td> </tr> <tr> <td>Augmented diffused CC</td> <td>91%</td> <td>91%</td> </tr> </table> Table 1: The effect of different input/output encoders on accuracy. Each result used 100 samples. There is almost no generalization error in the results. <table> <tr> <th>Sampling</th> <th>Train</th> <th>Test</th> </tr> <tr> <td>io2seq</td> <td>44%</td> <td>42%</td> </tr> </table> Table 2: Testing the I/O-vector-to-sequence model. Each result used 100 samples. hundreds of characters long. Because of limited GPU memory, the I/O encoder models can quickly run out of memory. 6.2 EXAMPLE ENCODING In this section, we evaluate the effect of several different input/output example encoders. To control for the effect of the tree model, all results here used an R3NN with fixed hyperparameters to generate the program tree. Table 1 shows the performance of several of these input/output example encoders. We can see that the summed cross-correlation encoder did not perform well, which can be due to the fact that the sum destroys positional information that might be useful for determining specific substring indices. The LSTM-sum and the augmented diffused cross-correlation models did the best. 
Surprisingly, the LSTM encoder was capable of finding nearly 88% of all programs without any prior knowledge explicitly built into the architecture. We use 100 samples for evaluating the train and test sets. The training performance is sometimes slightly lower because there are close to 5 million training programs, but we see fewer than 2 million of them during training. To report the training results in the tables, we sample a subset of 1000 programs from the 5-million-program set; the test sets also consist of 1000 programs.

6.3 io2seq

In this section, we motivate the use of the R3NN by testing whether a simpler model can also generate programs. The io2seq model is an LSTM whose initial hidden and cell states are a function of the input/output encoding vector. The io2seq model then generates a linearized program tree symbol by symbol. An example of a linearized program tree is \( (s_e(f(ConstStr“@”)ConstStr)f)_e \), which represents the program tree that returns the constant string “@”. Predicting a linearized tree with an LSTM was also done in the context of parsing (Vinyals et al., 2015). For the io2seq model, we used the LSTM-sum cross-correlation I/O conditioning model. The results in Table 2 show that the performance of the io2seq model at 100 samples per latent test function is far worse than that of the R3NN, at around 42% versus 91%, respectively. One reason could be that the io2seq model must make far more decisions than the R3NN, since it has to predict the parenthesis symbols that determine the tree level of each symbol. For example, the io2seq model requires on the order of 100 decisions for length-13 programs, while the R3NN requires no more than 13.

6.4 EFFECT OF SAMPLING MULTIPLE PROGRAMS

For the best R3NN model that we trained, we also evaluated the effect that the number of samples per latent function has on performance. The results are shown in Table 3. The steady increase in the model’s performance with sample size suggests that the model has a notion of which kind of program satisfies a given I/O pair, but is less certain about details such as which regular expression to use. By 300 samples, the model is nearing perfect accuracy on the test sets.

<table> <tr> <th>Sampling</th> <th>Train</th> <th>Test</th> </tr> <tr> <td>1-best</td> <td>60%</td> <td>63%</td> </tr> <tr> <td>1-sample</td> <td>56%</td> <td>57%</td> </tr> <tr> <td>10-sample</td> <td>81%</td> <td>79%</td> </tr> <tr> <td>50-sample</td> <td>91%</td> <td>89%</td> </tr> <tr> <td>100-sample</td> <td>94%</td> <td>94%</td> </tr> <tr> <td>300-sample</td> <td>97%</td> <td>97%</td> </tr> </table>

Table 3: The effect of sampling multiple programs on accuracy. 1-best is deterministically choosing the expansion with the highest probability at each step.

Figure 4: Train and test accuracies for models trained with different numbers of input-output examples.

6.5 EFFECT OF NUMBER OF INPUT-OUTPUT EXAMPLES

We evaluate the effect of varying the number of input-output examples used to train the input-output encoders. The 1-best accuracy on train and test data for models trained for 74 epochs is shown in Figure 4.
As expected, the accuracy increases with the number of input-output examples, since more examples add more information to the encoder and constrain the space of consistent programs in the DSL.

6.6 FLASHFILL BENCHMARKS

We also evaluate our learnt models on 238 real-world FlashFill benchmarks obtained from the Microsoft Excel team and online help forums. These benchmarks involve string manipulation tasks described using input-output examples. We evaluate two models: one with a cross-correlation encoder trained on 5 input-output examples, and another trained on 10 input-output examples. Both models were trained on randomly sampled programs from the DSL up to size 13, with randomly generated input-output examples. The distribution of the sizes of the smallest DSL programs needed to solve the benchmark tasks is shown in Figure 5(a); these sizes vary from 4 to 63. The figure also shows the number of benchmarks for which our model was able to learn the program using 5 input-output examples, when considering the top 2000 sampled programs. In total, the model is able to learn programs for 91 tasks (38.2%). Since the model was trained on programs up to size 13, it is not surprising that it cannot solve tasks that need larger programs. There are 110 FlashFill benchmarks that require programs up to size 13, of which the model is able to solve 82.7%. The effect of sampling multiple learnt programs instead of only the top program is shown in Figure 5(b). With only 10 samples, the model can already learn about 13% of the benchmarks. We observe a steady increase in performance up to about 2000 samples, after which we do not observe any significant improvement. Since there are more than 2 million programs in the DSL of length 11 alone, enumerative techniques with uniform search do not scale well (Alur et al., 2015). We also evaluate a model that is learnt with 10 input-output examples per benchmark. This model can only learn programs for about 29% of the FlashFill benchmarks. Since the FlashFill benchmarks contain only 5 input-output examples for each task, we duplicated the I/O examples to run the model that takes 10 examples as input. Our models are trained on a synthetic training dataset that is generated uniformly from the DSL.

Figure 5: (a) The distribution of sizes of programs needed to solve FlashFill tasks and the performance of our model; (b) the effect of sampling when trying the top-k learnt programs.
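Benchmark success is decided by the sampling protocol of Section 6.1: draw candidate programs from the model and accept if any reproduces all outputs. A minimal sketch of this check follows; the `sample_program` and `run_program` callables are hypothetical stand-ins for the trained sampler and the DSL interpreter:

```python
def solves(sample_program, run_program, io_pairs, k=100):
    """Sample k candidate programs; succeed if any is I/O-equivalent.

    Two programs count as equivalent with respect to the example set
    if they produce the same outputs on the given inputs.
    """
    for _ in range(k):
        prog = sample_program(io_pairs)   # draw one program from the model
        if all(run_program(prog, inp) == out for inp, out in io_pairs):
            return True
    return False
```

Increasing k in this loop corresponds to the sampling sweep reported in Figure 5(b).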
<table> <tr> <th>Input \( v \)</th> <th>Output</th> </tr> <tr> <td>[CPT-00350</td> <td>[CPT-00350]</td> </tr> <tr> <td>[CPT-00340</td> <td>[CPT-00340]</td> </tr> <tr> <td>[CPT-114563</td> <td>[CPT-114563]</td> </tr> <tr> <td>[CPT-1AB02</td> <td>[CPT-1AB02]</td> </tr> <tr> <td>[CPT-00360</td> <td>[CPT-00360]</td> </tr> </table> (a)

<table> <tr> <th>Input \( v \)</th> <th>Output</th> </tr> <tr> <td>732606129</td> <td>0x73</td> </tr> <tr> <td>430257526</td> <td>0x43</td> </tr> <tr> <td>444004480</td> <td>0x44</td> </tr> <tr> <td>371255254</td> <td>0x37</td> </tr> <tr> <td>635272676</td> <td>0x63</td> </tr> </table> (b)

<table> <tr> <th>Input \( v \)</th> <th>Output</th> </tr> <tr> <td>John Doyle</td> <td>John D.</td> </tr> <tr> <td>Matt Walters</td> <td>Matt W.</td> </tr> <tr> <td>Jody Foster</td> <td>Jody F.</td> </tr> <tr> <td>Angela Lindsay</td> <td>Angela L.</td> </tr> <tr> <td>Maria Schulte</td> <td>Maria S.</td> </tr> </table> (c)

Figure 6: Some example solved benchmarks: (a) cleaning up medical codes by adding closing brackets, (b) generating hex numbers from the first two digits, (c) transforming names to first name and last initial.

Because of the discrepancy between the training data distribution (uniform over the DSL) and the actual task distribution, the model trained with 10 input/output examples might not perform best on the FlashFill benchmarks, even though it performs better on the synthetic data distribution on which it is trained, as shown in Figure 4. Our model is able to solve the majority of FlashFill benchmarks that require learning programs with up to 3 Concat operations. We now describe a few of these benchmarks, also shown in Figure 6. An Excel user wanted to clean a set of medical billing records by adding a missing “]” to medical codes, as shown in Figure 6(a). Given these 5 input-output examples, our system learns the following program: Concat(SubStr(v,ConstPos(0),(d,-1,End)), ConstStr("]")). The program concatenates the substring between the start of the input string and the end position of the last match of the digit regular expression with the constant string “]”. Another task, which required the user to transform some numbers into a hex format, is shown in Figure 6(b). Our system learns the following program: Concat(ConstStr("0x"),SubStr(v,ConstPos(0),ConstPos(2))). For some benchmarks with long input strings, it is still able to learn regular expressions to extract the desired substring; e.g., it learns a program to extract “NancyF” from the string “123456789,freehafer,drew,nancy,19700101,11/1/2007,NancyF@north.com,1230102,123 1st Avenue,Seattle,wa,09999”. Our system is currently not able to learn programs for benchmarks that require 4 or more Concat operations. Two such benchmarks are shown in Figure 7. The task of combining names in Figure 7(a) requires 6 Concat arguments, whereas the phone number transformation task in Figure 7(b) requires 5 Concat arguments. This is mainly because of the scalability issues in training with programs of larger size. There are also a few interesting benchmarks where the R3NN model gets very close to learning the desired program. For example, for the task “Bill Gates” → “Mr. Bill Gates”, it learns a program that generates “Mr.Bill Gates” (missing the whitespace), and for the task “617-444-5454” → “(617) 444-5454”, it learns a program that generates the string “(617 444-5454”.
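To make these learned programs concrete, the sketch below interprets a small fragment of the DSL, enough to run the medical-code program above. The constructor names follow the paper, but the combinator encoding and the position semantics (reading (d, -1, End) as "the end of the last match of the digit token d") are our own assumptions; we also thread the input v implicitly rather than passing it to SubStr explicitly:

```python
import re

def ConstStr(s):
    return lambda v: s

def ConstPos(k):                       # position from the left (or right if k < 0)
    return lambda v: k if k >= 0 else len(v) + k + 1

def RegexPos(token, occurrence, side):
    """Boundary of the occurrence-th match of token (-1 = last match);
    side selects the match's 'Start' or 'End' position."""
    def pos(v):
        matches = list(re.finditer(token, v))
        m = matches[occurrence if occurrence < 0 else occurrence - 1]
        return m.end() if side == "End" else m.start()
    return pos

def SubStr(left, right):
    return lambda v: v[left(v):right(v)]

def Concat(*parts):
    return lambda v: "".join(p(v) for p in parts)

# The program learned for the medical-code benchmark, with the digit
# token d rendered here as the regex [0-9]+:
prog = Concat(SubStr(ConstPos(0), RegexPos(r"[0-9]+", -1, "End")),
              ConstStr("]"))
print(prog("[CPT-00350"))   # -> "[CPT-00350]"

# Under the same encoding, the hex benchmark's program would read:
# Concat(ConstStr("0x"), SubStr(ConstPos(0), ConstPos(2)))
```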
<table> <tr> <th>Input \( v \)</th> <th>Output</th> </tr> <tr> <td>John James Paul</td> <td>John, James, and Paul.</td> </tr> <tr> <td>Tom Mike Bill</td> <td>Tom, Mike, and Bill.</td> </tr> <tr> <td>Marie Nina John</td> <td>Marie, Nina, and John.</td> </tr> <tr> <td>Reggie Anna Adam</td> <td>Reggie, Anna, and Adam.</td> </tr> </table> (a)

<table> <tr> <th>Input \( v \)</th> <th>Output</th> </tr> <tr> <td>(425) 221 6767</td> <td>425-221-6767</td> </tr> <tr> <td>206.225.1298</td> <td>206-225-1298</td> </tr> <tr> <td>617-224-9874</td> <td>617-224-9874</td> </tr> <tr> <td>425.118.9281</td> <td>425-118-9281</td> </tr> </table> (b)

Figure 7: Some unsolved benchmarks: (a) combining names with different delimiters; (b) transforming phone numbers to a consistent format.

7 RELATED WORK

Recent years have seen a renewed interest in the area of program induction and synthesis. In the machine learning community, a number of promising neural architectures have been proposed to perform program induction. These methods employ architectures inspired by computation models (Turing machines, RAM) (Graves et al., 2014; Kurach et al., 2015; Reed & de Freitas, 2015; Neelakantan et al., 2015) or common data structures such as stacks used in many algorithms (Joulin & Mikolov, 2015). These approaches represent the atomic operations of the network in a differentiable form, which allows for efficient end-to-end training of a neural controller. However, unlike our approach, which learns comprehensible complete programs, many of these approaches learn only the program behavior (i.e., they produce the desired outputs on new input data). Some recently proposed methods (Kurach et al., 2015; Gaunt et al., 2016; Riedel et al., 2016; Bunel et al., 2016) do learn interpretable programs, but these techniques require learning a separate neural network model for each individual task, which is undesirable in many synthesis settings where we would like to learn programs in real time for a large number of tasks. Liang et al. (2010) restrict the problem space with a probabilistic context-free grammar and introduce a new representation of programs based on combinatory logic, which allows sub-programs to be shared across multiple tasks. They then take a hierarchical Bayesian approach to learn frequently occurring substructures of programs. Our approach instead uses neural architectures to condition the search space of programs, and does not require the additional step of representing the program space in combinatory logic to enable sharing. The DSL-based program synthesis approach has also seen renewed interest recently (Alur et al., 2015). It has been used for many applications, including synthesizing low-level bitvector implementations (Solar-Lezama et al., 2005), Excel macros for data manipulation (Gulwani, 2011; Gulwani et al., 2012), superoptimization by finding smaller equivalent loop bodies (Schkufza et al., 2013), protocol synthesis from scenarios (Udupa et al., 2013), synthesis of loop-free programs (Gulwani et al., 2011), and automated feedback generation for programming assignments (Singh et al., 2013). The synthesis techniques proposed in the literature generally employ various search techniques, including enumeration with pruning, symbolic constraint solving, and stochastic search, while supporting different forms of specifications, including input-output examples, partial programs, program invariants, and reference implementations.
In this paper, we consider input-output example based specifications over the hypothesis space defined by a DSL of string transformations, similar to that of FlashFill (without conditionals) (Gulwani, 2011). The key difference between our approach and previous techniques is that our system is trained completely end-to-end, while previous techniques require significant manual effort to design heuristics for efficient search. There is some work on guiding program search using learnt clues that suggest likely DSL expansions, but the clues are learnt over hand-coded textual features of examples (Menon et al., 2013). Moreover, their DSL consists of compositions of about 100 high-level text transformation functions such as count and dedup, whereas our DSL consists of tree-structured programs over richer regular-expression-based substring constructs. There is also a recent line of work on learning probabilistic models of code from large numbers of code repositories (“big code”) (Raychev et al., 2015; Bielik et al., 2016; Hindle et al., 2016), which are then used for applications such as auto-completion of partial programs, inference of variable and method names, program repair, etc. These language models typically capture only the syntactic properties of code, unlike our approach, which also tries to capture the semantics in order to learn the desired program. The work of Maddison & Tarlow (2014) addresses the problem of learning structured generative models of source code, but both their model and application domain are different from ours. Piech et al. (2015) use an NPM-RNN model to embed program ASTs, where a subtree of the AST rooted at a node n is represented by a matrix obtained by combining the representations of the children of node n with the embedding matrix of the node n itself (which corresponds to its functional behavior). The forward pass in our R3NN architecture from leaf nodes to the root node is similar at a high level, but we use a distributed representation for each grammar symbol, which leads to a different root representation. Moreover, the R3NN also performs a reverse-recursive pass to ensure that all nodes in the tree encode global information about the other nodes in the tree. Finally, the R3NN network is used to incrementally build a tree to synthesize a program. The R3NN model employed in our work is related to several tree- and graph-structured neural networks in the NLP literature (Le & Zuidema, 2014; Paulus et al., 2014; Irsoy & Cardie, 2013). The Inside-Outside Recursive Neural Network (Le & Zuidema, 2014) in particular is most similar to the R3NN: it generates a parse tree incrementally by using global leaf-level representations to determine which expansions in the parse tree to take next.

8 CONCLUSION

We have proposed a novel technique called Neuro-Symbolic Program Synthesis that is able to construct a program incrementally based on given input-output examples. To do so, a new neural architecture called the Recursive-Reverse-Recursive Neural Network is used to encode and expand a partial program tree into a full program tree. Its effectiveness at example-based program synthesis is demonstrated, even when the program has not been seen during training. These promising results open up a number of interesting directions for future research. For example, we took a supervised-learning approach here, assuming the availability of target programs during training. In some scenarios, we may only have access to an oracle that returns the desired output given an input.
In this case, reinforcement learning is a promising framework for program synthesis.
accept
Accept (Poster)
6.666667
3fbd1c9ee38ff9fd065e3671adcaad98511cca97
iclr
2,017
TOWARDS DEEP INTERPRETABILITY (MUS-ROVER II): LEARNING HIERARCHICAL REPRESENTATIONS OF TONAL MUSIC Haizi Yu Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801, USA haiziyu7@illinois.edu Lav R. Varshney Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign Urbana, IL 61801, USA varshney@illinois.edu

ABSTRACT

Music theory studies the regularity of patterns in music to capture concepts underlying music styles and composers’ decisions. This paper continues the study of building automatic theorists (rovers) that learn and represent music concepts, yielding human-interpretable knowledge and, further, materials for educating people. Our previous work took a first step in algorithmic concept learning of tonal music, studying high-level representations (concepts) of symbolic music (scores) and extracting interpretable rules for composition. This paper further studies the representation hierarchy through the learning process, and supports adaptive 2D memory selection in the resulting language model. This leads to a deeper-level interpretability that expands from individual rules to a dynamic system of rules, making the entire rule-learning process more cognitive. The outcome is a new rover, MUS-ROVER II, trained on Bach’s chorales, which outputs customizable syllabi for learning compositional rules. We demonstrate results comparable to standard music pedagogy, while also presenting the differences and variations. In addition, we point out the rover’s potential uses in style recognition and synthesis, as well as applications beyond music.

1 INTRODUCTION

Forming hierarchical concepts from low-level observations is key to knowledge discovery. In the field of artificial neural networks, deep architectures are employed for machine learning tasks, with the awareness that hierarchical representations are important (Bengio et al., 2013). Rapid progress in deep learning has shown that mapping and representing topical domains through increasingly abstract layers of feature representation is extremely effective. Unfortunately, this layered representation is difficult to interpret or to use for teaching people. Consequently, deep learning models are widely used as algorithmic task performers (e.g. AlphaGo), but few act as theorists or pedagogues. In contrast, our goal is to achieve a deeper-level interpretability that explains not just what has been learned (the end results), but also what is being learned at every single stage (the process). On the other hand, music theory studies the underlying patterns beneath the music surface. It objectively reveals higher-level invariances that are hidden beneath the low-level variations. In practice, the development of music theory is an empirical process. Through manual inspection of large corpora of music works, theorists have summarized compositional rules and guidelines (e.g. J. J. Fux, author of Gradus ad Parnassum, the most influential book on Renaissance polyphony), and have devised multi-level analytical methods (e.g. H. Schenker, inventor of Schenkerian analysis) to emphasize the hierarchical structure of music, both of which have become the standard materials taught in today’s music theory classes. The objective and empirical nature of music theory suggests the possibility of an automatic theorist — statistical techniques that perform hierarchical concept learning — while its pedagogical purpose requires human interpretability throughout the entire learning process.
The book title Gradus ad Parnassum means “the path towards Mount Parnassus,” the home of poetry, music, and learning. This paper presents MUS-ROVER II, an extension of our prior work (Yu et al., 2016a,b), to independently retake the path towards Parnassus. The rover acts more as a pathfinder than as a generative model (e.g. an LSTM), emphasizing the path more than the destination. The teacher solves

\[
\underset{\phi \in \Phi \setminus \Phi^{(k-1)}}{\text{maximize}} \; D \left( p_{\phi,stu}^{(k-1)} \,\|\, \hat{p}_\phi \right) \quad \text{(discrete optimization)},
\]

and the student solves

\[
\underset{p_{stu}^{(k)}}{\text{maximize}} \; S_q \left( p_{stu}^{(k)} \right) \quad \text{subject to} \quad p_{stu}^{(k)} \in \Gamma_1 \cap \cdots \cap \Gamma_k \quad \text{(linear least-squares)}.
\]

Figure 1: MUS-ROVER’s self-learning loop (the kth iteration). The teacher (discriminator) takes as inputs the student’s latest style \( p_{stu}^{(k-1)} \) and the input style \( \hat{p} \), and identifies a feature \( \phi \) through which the two styles manifest the largest gap \( D(\cdot\|\cdot) \). The identified feature is then made into a rule (a constraint set \( \Gamma_k \)), which augments the ruleset \( \{ \Gamma_i \}_{i=1}^k \). The student (generator) takes as input the augmented ruleset to update its writing style into \( p_{stu}^{(k)} \), and favors creativity, i.e. more possibilities, by maximizing the Tsallis entropy \( S_q \) subject to the rule constraints. In short, the teacher extracts rules while the student applies rules; both perform their tasks by solving optimization problems.

We compare the paths taken by this improved automatic theorist to the paths taken by human theorists (say Fux), studying similarities as well as the pros and cons of each, so that the advantages of both can be combined to maximize utility in music education and research. In this paper in particular, we highlight the concept hierarchy that one would not get from our prior work, as well as enhanced syllabus personalization that one would not typically get from traditional pedagogy.

2 MUS-ROVER OVERVIEW

As the first algorithmic pathfinder in music, MUS-ROVER I introduced a “teacher ⇔ student” model to extract compositional rules for writing 4-part chorales (Yu et al., 2016a,b). The model is implemented as a self-learning loop between a generative component (student) and a discriminative component (teacher), where both entities cooperate to iterate through the rule-learning process (Figure 1). The student starts as a tabula rasa that picks pitches uniformly at random to form sonorities (a generic term for chords) and sonority progressions. The teacher compares the student’s writing style (represented by a probabilistic model) with the input style (represented by empirical statistics), identifying one feature per iteration that best reveals the gap between the two styles, and making it a rule for the student to update its probabilistic model. As a result, the student becomes less and less random by obeying more and more rules, and thus approaches the input style. Collecting from its rule-learning traces, MUS-ROVER I successfully recovered many known rules, such as “Parallel perfect octaves/fifths are rare” and “Tritones are often resolved either inwardly or outwardly”.
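The teacher's step in Figure 1 is easy to sketch in code. The toy version below picks, among candidate features not yet made into rules, the one whose student distribution deviates most from the corpus statistics (the feature names and distributions here are invented for illustration; the real system computes them from Bach's chorales):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def teacher_step(student_dists, bach_dists, learned):
    """Pick the feature phi maximizing D(p_stu_phi || p_hat_phi),
    excluding features already made into rules."""
    gaps = {phi: kl(student_dists[phi], bach_dists[phi])
            for phi in bach_dists if phi not in learned}
    phi_star = max(gaps, key=gaps.get)
    return phi_star, bach_dists[phi_star]      # the new rule (phi, p_hat_phi)

# Toy example: two candidate features over 3-valued feature spaces.
bach = {"interval_mod12": [0.7, 0.2, 0.1], "voice_order": [0.95, 0.04, 0.01]}
stu  = {"interval_mod12": [1/3, 1/3, 1/3], "voice_order": [1/3, 1/3, 1/3]}
print(teacher_step(stu, bach, learned=set()))  # picks the larger-gap feature
```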
**What is Inherited from MUS-ROVER I** MUS-ROVER II targets the same goal of learning interpretable music concepts. It inherits the self-learning loop, as well as the following design choices. (Dataset and Data Representation) We use the same dataset, which comprises 370 C scores of Bach’s 4-part chorales. We include only pitches and their durations in a piece’s raw representation, notated as a MIDI matrix whose elements are MIDI numbers for pitches. The matrix preserves the two-dimensional chorale texture, with rows corresponding to melodies and columns to harmonies. (Rule Representation) We use the same representation for high-level concepts in terms of rules (not to be confused with rules in propositional logic). A (compositional) rule is represented by a feature and its distribution, \( r = (\phi, p_\phi) \), which describes the likelihoods of feature values. It can also be transformed into a linear equality constraint (\( A_\phi p_{stu} = p_\phi \)) in the student’s optimization problem (the \( \Gamma \)’s in Figure 1). (Student’s Probabilistic Model) We still use n-gram models to represent the student’s style/belief, with words being sonority features, and keep the student’s optimization problem as it was. To reiterate the distinction from many music n-gram models, we never run n-grams in the raw feature space, but only collectively in the high-level feature spaces, to prevent overfitting. So rules are expressed as probabilistic laws that describe either (vertical) sonority features or their (horizontal) progressions.

**What is New in MUS-ROVER II** We study hierarchies on features, so rules are later presented not just as a linear list, but as hierarchical families and sub-families. In particular, we introduce conceptual hierarchy, which is pre-determined by feature maps, and infer informational hierarchy, which is post-implied from an information-theoretic perspective. We upgrade the self-learning loop to adaptively select memories in a multi-feature, multi-n-gram language model. This is realized by constructing hierarchical filters that filter out conceptual duplicates and informational implications. By further following the information scent spilled by Bayesian surprise (Varshney, 2013), the rover can effectively localize the desired features in the feature universe.

3 RELATED WORK

**Adversarial or Collaborative** MUS-ROVER’s self-learning loop between the teacher (a discriminator) and the student (a generator) shares great structural similarity with generative adversarial nets (Goodfellow et al., 2014) and their derivatives (Denton et al., 2015; Makhzani et al., 2015). However, the working mode between the discriminator and generator is different. In current GAN algorithms, the adversarial components are black boxes to each other, since both are different neural networks that are coupled only end to end. The intermediate representation learned by one model, no matter how expressive or interpretable, is not directly shared with the other. In contrast, both models in MUS-ROVER are transparent to each other (and to us): the student directly leverages the rules from the teacher to update its probabilistic model. In this sense, the learning pair in MUS-ROVER is collaborative rather than adversarial. Consequently, not only do the learned concepts have individual interpretations, but the entire learning trace is an interpretable, cognitive process. Furthermore, MUS-ROVER and GANs contrast in the goal of learning and the resulting evaluations. The rover is neither a classifier nor a density estimator, but rather a pure representation learner that outputs high-level concepts and their hierarchies.
Training this type of learner is challenging in general due to the lack of a clear objective or target (Bengio et al., 2013), which drives people to consider some end task, like classification, and use performance on that task to indirectly assess the learned representations. In MUS-ROVER, we introduce information-theoretic criteria to guide the training of the automatic theorist, and in the context of music concept learning, we directly evaluate machine-generated rules and hierarchies by comparing them to those in existing music theory.

**Interpretable Feature Learning** In the neural network community, much has been done to first recover disentangled representations, and then post hoc interpret the semantics of the learned features. This line of work includes denoising autoencoders (Vincent et al., 2008) and restricted Boltzmann machines (Hinton et al., 2006; Desjardins et al., 2012), ladder network algorithms (Rasmus et al., 2015), as well as more recent GAN models (Radford et al., 2015). In particular, InfoGAN also introduces information-theoretic criteria to augment the standard GAN cost function, and to some extent achieves interpretability for both discrete and continuous latent factors (Chen et al., 2016). However, beyond the end results, the overall learning process of these neural networks is still far from human-level concept learning (Lake et al., 2015), and thus not directly instructive to people.

**Automatic Musicians** Music theory and composition form a reciprocal pair, often realized as the complementary cycle of reduction and elaboration, i.e. walks up and down the multi-level music hierarchy (Laitz, 2016). Accordingly, various models have been introduced to automate this up/down walk, including music generation (Cope & Mayer, 1996; Biles, 1994; Simon et al., 2008), analysis (Taube, 1999), and theory evaluation (Rohrmeier & Cross, 2008). In terms of methodologies, we have rule-based systems (Cope, 1987), language models (Google Brain, 2016; Simon et al., 2008), and information-theoretic approaches (Jacoby et al., 2015; Dubnov & Assayag, 2002). However, all of these models leverage domain knowledge (e.g. human-defined chord types, functions, rules) as part of the model inputs. MUS-ROVER takes as input only the raw notations (pitches and durations), and outputs concepts that are comparable to (but also different from) our domain knowledge.

4 HIERARCHICAL RULE LEARNING

MUS-ROVER II emphasizes hierarchy induction in learning music representations, and divides the induction process into two stages. In the first stage, we impose conceptual hierarchy as pre-defined structures among candidate features before the self-learning loop. In the second stage, we infer informational hierarchy as post-implied structures through the rule-learning loops.

**Interpretable Features** A feature is a function that computes a distributed representation of the building blocks that constitute data samples. For Bach’s 4-part chorales, we model every piece (a 4-row matrix) as a sequence of sonorities (columns). Every sonority is thus a building block of its containing piece (like a word in a sentence). A feature then maps a sonority onto some feature space, summarizing an attribute. To formalize, let \( \Omega = \{R, p_1, \ldots, p_n\} \) be an alphabet that comprises a rest symbol R and n pitch symbols \( p_i \). In addition, the alphabet symbols — analogous to image pixels — are manipulable by arithmetic operations, such as plus/minus, modulo, and sort.
More precisely, every \( p_i \) is an integer-valued MIDI number (60 for middle C, granularity 1 for a semitone), and R is a special character that behaves like a Python NaN value. The four coordinates of every sonority \( p \in \Omega^4 \) denote soprano, alto, tenor, and bass, respectively. We define a feature as a surjective function \( \phi : \Omega^4 \to \phi(\Omega^4) \), and the corresponding feature space as its range. As a first, crude categorization, we say a feature (space) is raw (or lowest-level) if \( |\phi(\Omega^4)| = |\Omega^4| \), and high-level if \( |\phi(\Omega^4)| < |\Omega^4| \). For instance, \( \Omega^4 \) or any permutation of \( \Omega^4 \) is a raw feature space. MUS-ROVER II employs a more systematic way of generating the universe of interpretable features. A (sonority) feature is constructed as the composition of a window and a descriptor. A window is a function that selects parts of the input sonority: \( w_I : \Omega^4 \to \Omega^{|I|} \), where I is an index set. For instance, \( w_{\{1,4\}}(p) = (p_1, p_4) \) selects soprano and bass. A descriptor is constructed inductively from a set of basis descriptors B, consisting of atomic arithmetic operations. We currently set \( B = \{\text{order}, \text{diff}, \text{sort}, \text{mod}_{12}\} \) (Appendix A.2). We define a descriptor of length k as the composition of k bases: \( d_{(k)} = b_k \circ \cdots \circ b_1 \), for \( b_i \in B \), where \( d_{(0)} \) is the identity function. We collect the family of all possible windows, \( W = \{w_I \mid I \in 2^{\{1,2,3,4\}} \setminus \{\emptyset\}\} \), and the family of all descriptors of length at most k, \( D^{[k]} = \{d_{(k')} \mid 0 \leq k' \leq k\} \), and form the feature universe:

\[
\Phi = \{d \circ w \mid w \in W, d \in D^{[k]}\}. \tag{1}
\]

The fact that every candidate feature in \( \Phi \) is systematically generated as a composition of atomic operators ensures its interpretability, since one can literally read it out step by step from the composition.

**Feature-Induced Partition** On the one hand, a feature function has all the mathematical specification needed to name the corresponding feature and feature values. On the other hand, we only care about the partition of the input domain (\( \Omega^4 \)) induced by the feature, not the (superficial) naming of the clusters. In other words, we only identify the sonority clusters whose members are mapped to the same function value, but not the value per se. As a result, we use a partition to refer to the essence of a concept, and the inducing function as a mathematical name to interpret the concept. To formalize, a feature function \( \phi \) induces a partition of its domain

\[
\mathcal{P}_\phi = \{\phi^{-1}(\{y\}) \mid y \in \phi(\Omega^4)\}. \tag{2}
\]

Given a feature universe \( \Phi \), (2) defines an equivalence relation on \( \Phi \): \( \phi \overset{\mathcal{P}}{\sim} \phi' \) if \( \mathcal{P}_\phi = \mathcal{P}_{\phi'} \), which induces the corresponding partition family \( \mathcal{P}_\Phi \) as the resulting equivalence classes. For two partitions \( \mathcal{P}, \mathcal{Q} \in \mathcal{P}_\Phi \), we say \( \mathcal{P} \) is finer than \( \mathcal{Q} \) (or \( \mathcal{Q} \) is coarser), written \( \mathcal{P} \succeq \mathcal{Q} \), if for all \( p, p' \in \Omega^4 \): \( p, p' \) in the same cluster under \( \mathcal{P} \) implies \( p, p' \) in the same cluster under \( \mathcal{Q} \).
We say \( \mathcal{P} \) is strictly finer, written \( \mathcal{P} \succ \mathcal{Q} \), if \( \mathcal{P} \succeq \mathcal{Q} \) and \( \mathcal{Q} \not\succeq \mathcal{P} \).

**Conceptual Hierarchy** Based on the binary relation \( \succeq \), we construct the conceptual hierarchy for the partition family \( \mathcal{P}_\Phi \), and represent it as a directed acyclic graph (DAG) whose nodes are partitions. For any pair of nodes \( v, v' \), we draw \( v \rightarrow v' \) if and only if the partition referred to by v is (strictly) finer than that referred to by \( v' \). The DAG grows from a single source node, which represents the finest partition — every point in the domain is a cluster by itself — and extends via the edges to coarser and coarser partitions. In terms of features, we say a feature \( \phi' \) is at a higher level than another feature \( \phi \) if the induced partitions satisfy \( \mathcal{P}_\phi \succ \mathcal{P}_{\phi'} \). In other words, a higher-level feature induces a coarser partition that ignores lower-level details by merging clusters. One can check that the finest partition (the source node) is indeed induced by a raw feature. We give an efficient algorithm for pre-computing the conceptual hierarchy in Appendix A.3. We emphasize the necessity of this multi-step process: features \( \rightarrow \) partitions \( \rightarrow \) hierarchy (DAG), as opposed to a simple hierarchical clustering (tree). The latter loses many inter-connections due to the tree structure and its greedy manner, and, more importantly, the interpretability of the partitions.

**Informational Hierarchy** We infer informational hierarchy from a many-to-one relation, called implication, along a rule trace. More formally, let \( \{r_i\}_{i=1}^k := \{(\phi_i, p_{\phi_i})\}_{i=1}^k \) be the trace of rules (in terms of features and feature distributions) extracted by the kth iteration of the self-learning loop. We say a feature \( \phi \) is *informationally implied* from the trace \( \{ r_i \}_{i=1}^k \) with tolerance \( \gamma > 0 \), if

\[
gap \left( p_{\phi,stu}^{(k)} \,\|\, \hat{p}_\phi \right) := D \left( p_{\phi,stu}^{(k)} \,\|\, \hat{p}_\phi \right) < \gamma, \quad \text{and} \quad gap \left( p_{\phi,stu}^{(k')} \,\|\, \hat{p}_\phi \right) \geq \gamma, \; \forall k' < k,
\]

where \( D(\cdot \| \cdot) \) is the KL divergence used to characterize the gap between the student’s style (probabilistic model) and Bach’s style (input). One trivial case happens when \( \phi \) is extracted as the kth rule, i.e. \( \phi = \phi_k \): then \( gap(p_{\phi',stu}^{(k)} \| \hat{p}_{\phi'}) = 0 < \gamma \) for all \( \phi' \in \{ \phi' \mid \mathcal{P}_\phi \succeq \mathcal{P}_{\phi'} \} \), meaning that the feature \( \phi \), once learned as a rule, informationally implies itself and all its descendants in the conceptual hierarchy. However, what is more interesting is informational implication from rules outside the conceptual hierarchy, which is typically hard for humans to “eyeball”. One might question the necessity of conceptual hierarchy, since it can be implied within the informational hierarchy. The answer is yes in principle, but no in practice. The main difference is that conceptual hierarchy is pre-computed over the entire feature universe before the loop, which is global, precise, and trace-independent. On the contrary, informational hierarchy is trace-specific and loose, due to the tolerance \( \gamma \) and the precision of the optimization solver. As a result, informational hierarchy alone tends to lose the big picture, requires more post-hoc interpretation, and is unstable in practice.
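The window/descriptor construction and the refinement test above are concrete enough to sketch. The toy code below builds features by composing basis operations, derives the partition each induces, and checks the finer-than relation; the two-voice domain and reduced basis set are simplifications of the paper's \( \Omega^4 \) and B, made for brevity:

```python
from itertools import product

PITCHES = [60, 62, 64]                       # toy alphabet (rest symbol omitted)
DOMAIN = list(product(PITCHES, repeat=2))    # toy 2-voice "sonorities"

# A subset of the basis descriptors B = {order, diff, sort, mod12}.
def diff(t):   return tuple(b - a for a, b in zip(t, t[1:]))
def mod12(t):  return tuple(x % 12 for x in t)
def ident(t):  return t

def compose(*fs):                            # d = b_k o ... o b_1
    def d(t):
        for f in reversed(fs):
            t = f(t)
        return t
    return d

def partition(feature):
    """The partition of DOMAIN induced by a feature, as in (2)."""
    clusters = {}
    for p in DOMAIN:
        clusters.setdefault(feature(p), set()).add(p)
    return {frozenset(c) for c in clusters.values()}

def finer(P, Q):
    """P finer than Q: every cluster of P lies inside some cluster of Q."""
    return all(any(c <= d for d in Q) for c in P)

raw  = partition(ident)                      # the finest partition (source node)
intv = partition(compose(mod12, diff))       # interval class between the voices
print(finer(raw, intv), finer(intv, raw))    # True False: raw is strictly finer
```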
**Hierarchical Filters** Beyond their benefit of revealing inter-relational insights among distributed representations, we build *hierarchical filters* from both the conceptual and informational hierarchies, for the purpose of pruning hierarchically entangled features and speeding up feature selection. This upgrades MUS-ROVER II into a more efficient, robust, and cognitive theorist. Recalling the skeleton of the teacher’s optimization problem in Figure 1, we flesh it out as follows:

\[
\begin{aligned}
& \underset{\phi \in \Phi}{\text{maximize}} \quad gap \left( p_{\phi,stu}^{(k-1)} \,\|\, \hat{p}_\phi \right) \\
& \text{subject to} \quad H(\hat{p}_\phi) \leq \delta \tag{Regularity Condition} \\
& \quad \phi \notin C^{(k-1)} := \left\{ \phi \mid \mathcal{P}_{\phi'} \succeq \mathcal{P}_\phi \text{ for some } \phi' \in \Phi^{(k-1)} \right\} \tag{Conceptual-Hierarchy Filter} \\
& \quad \phi \notin I^{(k-1)} := \left\{ \phi \mid gap \left( p_{\phi,stu}^{(k-1)} \,\|\, \hat{p}_\phi \right) < \gamma \right\} \tag{Informational-Hierarchy Filter}
\end{aligned}
\]

In the above optimization problem (3), \( \Phi \) is the feature universe defined in (1), and \( \phi \in \Phi \) is the optimization variable whose optimal value is used to form the kth rule: \( \phi_k = \phi^*, r_k = (\phi^*, \hat{p}_{\phi^*}) \). We decouple the regularity condition from the objective function of our previous work (which was the generalized *cultural hole* function), and state it separately as the first constraint, which requires the Shannon entropy of the feature distribution to be no larger than a given threshold (Pape et al., 2015). The second constraint encodes the filter from the conceptual hierarchy, which prunes coarser partitions of the learned features \( \Phi^{(k-1)} := \{ \phi_1, \ldots, \phi_{k-1} \} \). The third constraint encodes the filter from the informational hierarchy, which prunes informationally implied features. There are two hyper-parameters, \( \delta \) and \( \gamma \), in the optimization problem (3), whose detailed use in syllabus customization is discussed in Section 6. At a high level, we often pre-select \( \gamma \) before the loop to express a user’s *satisfaction level*: a smaller \( \gamma \) signifies a meticulous user who is harder to satisfy. The threshold \( \delta \) upper bounds the *entropic difficulty* of the rules, and is adaptively adjusted through the loop: it starts from a small value (easy rules first), and auto-increases whenever the feasible set of (3) is empty (gradually raising the difficulty once the current level is mastered).

5 ADAPTIVE MEMORY SELECTION

MUS-ROVER II considers a continuous range of higher-order n-grams (variable memory), and adaptively picks the optimal n based on a balance among multiple criteria. The fact that every n-gram also operates on multiple high-level feature spaces opens opportunities for long-term memory without exhausting machine memory, while effectively avoiding overfitting.

**Two-Dimensional Memory** In light of a continuous range of n-grams, say \( n \in N = \{ 2, 3, \ldots \} \), the feature universe adds another dimension, forming a two-dimensional memory (\( N \times \Phi \)) — *length* versus *depth* — for the language model (Figure 2, left).
The length axis enumerates n-gram orders, with a longer memory corresponding to a larger n; the depth axis enumerates features, with a deeper memory corresponding to a higher-level feature. Every cell in the memory is indexed by two coordinates \((n, \phi)\), referring to the feature \(\phi\) under the \(n\)-gram, and stores the corresponding feature distribution. As a consequence, the rule extraction task involves picking the right feature under the right \(n\)-gram, which extends the search space of the optimization problem (3) from \(\Phi\) to \(N \times \Phi\). Accordingly, the constraints of (3) jointly forge a mask on top of the 2D memory (Figure 2, right).

Figure 2: MUS-ROVER II’s two-dimensional memory (left): the length axis enumerates \( n \)-gram orders; the depth axis enumerates features; and every cell is a feature distribution. Memory mask (right): 0 marks the removal of the corresponding cell from feature selection, caused by a hierarchical filter, the regularity condition, or (contradictory) duplication.

**Criteria and Balance** We propose three criteria for extracting rules from the 2D memory: confidence, regularity, and efficacy. Confidence is quantified by empirical counts: the more relevant examples one sees in Bach’s chorales, the more confident the rule. Regularity is quantified by the Shannon entropy of the rule’s feature distribution: a rule is easier to memorize if it is less entropic (Pape et al., 2015). Efficacy is quantified by the gap between the student’s probabilistic model and the rule’s feature distribution: a rule is more effective if it reveals a larger gap. There are tradeoffs among these criteria. For instance, a lower-level feature is usually more effective, since it normally reflects larger variations in the gap, but it is also unlikely to be regular, and thus harder to memorize and generalize. Likewise, a feature under a higher-order \(n\)-gram may be both regular and effective, but the number of examples matching the long-term conditionals is likely to be small, reducing confidence.

**Adaptive Selection: Follow the (Bayesian) Surprise** The teacher’s optimization problem (3) explicitly expresses the efficacy factor in the objective, and the regularity condition as the first constraint. To further incorporate confidence, we cast the rule’s feature distribution \(\hat{p}_\phi\) in a Bayesian framework rather than the purely empirical framework of our previous work. We assume the student’s belief with respect to a feature \(\phi\) follows a Dirichlet distribution whose expectation is the student’s probabilistic model. In the \(k\)th iteration of the self-learning loop, we set the student’s prior belief to the Dirichlet distribution parameterized by the student’s latest probabilistic model:

\[
\text{prior}_{\phi, stu} \sim \operatorname{Dir}\left(c \cdot p_{\phi, stu}^{(k-1)}\right),
\]

where \(c > 0\) denotes the strength of the prior. From Bach’s chorales, the teacher inspects the empirical counts \(q_\phi\) associated with the feature \(\phi\) and the relevant \(n\)-gram, and computes the student’s posterior belief if \(\phi\) were selected as the rule:

\[
\text{posterior}_{\phi, stu} \sim \operatorname{Dir}\left(q_\phi + c \cdot p_{\phi, stu}^{(k-1)}\right).
\]

The concentration parameters of the Dirichlet posterior show the balance between the empirical counts and the prior. If the total number of empirical counts is small (less confident), the posterior will be smoothed more by the prior, de-emphasizing the empirical distribution from \(q_\phi\).
If we compute \(\hat{p}_\phi \propto \left(q_\phi + c \cdot p_{\phi, stu}^{(k-1)}\right)\) in the objective of (3), then

\[
gap\left(p_{\phi, stu}^{(k-1)} \,\|\, \hat{p}_\phi\right) = D\left(\mathbb{E}\left[\text{prior}_{\phi, stu}\right] \,\|\, \mathbb{E}\left[\text{posterior}_{\phi, stu}\right]\right). \tag{4}
\]

The right side of (4) is closely related to Bayesian surprise (Varshney, 2013), which takes the form of a KL divergence from the prior to the posterior. If we remove the expectations and switch the roles of the prior and posterior, we get the exact formula for Bayesian surprise. Both functionals capture the idea of comparing the gap between the prior and the posterior. Therefore, the efficacy of concept learning is analogous to seeking (informational) surprise in the learning process. The subtlety in (4), where we exchange the prior and posterior, makes a distinction from Bayesian surprise due to the asymmetry of KL divergence. As a brief explanation, adopting (4) as the objective tends to produce rules about what Bach hated to do, while the other direction produces rules about what Bach liked to do. So we treat it as a design choice and adopt (4), given that rules are often taught as prohibitions (e.g. “parallel fifths/octaves are bad”, “never double the tendency tones”). There are more in-depth, information-theoretic discussions on this point (Huszár, 2015; Palomar & Verdú, 2008).

Table 1: Customizing a syllabus (* signifies rules that are skipped in the faster pace)

<table> <tr> <th>Rule Trace</th> <th>Faster Pace (\( \gamma = 0.5 \))</th> <th>Slower Pace (\( \gamma = 0.1 \))</th> </tr> <tr> <td>1</td> <td>order \( \circ w_{\{1,2,3,4\}} \)</td> <td>order \( \circ w_{\{1,2,3,4\}} \)</td> </tr> <tr> <td>2</td> <td>mod\(_{12}\) \( \circ w_{\{1\}} \)</td> <td>order \( \circ \) diff \( \circ \) sort \( \circ w_{\{1,2,4\}} \)*</td> </tr> <tr> <td>3</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{2,3\}} \)</td> <td>order \( \circ \) diff \( \circ \) mod\(_{12}\) \( \circ w_{\{1,2,3\}} \)*</td> </tr> <tr> <td>4</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{3,4\}} \)</td> <td>order \( \circ \) diff \( \circ \) diff \( \circ w_{\{1,2,3,4\}} \)*</td> </tr> <tr> <td>5</td> <td>diff \( \circ \) sort \( \circ w_{\{2,3\}} \)</td> <td>order \( \circ \) sort \( \circ \) mod\(_{12}\) \( \circ w_{\{2,3,4\}} \)*</td> </tr> <tr> <td>6</td> <td>mod\(_{12}\) \( \circ w_{\{3\}} \)</td> <td>order \( \circ \) sort \( \circ \) mod\(_{12}\) \( \circ w_{\{1,3,4\}} \)*</td> </tr> <tr> <td>7</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{1,2\}} \)</td> <td>order \( \circ \) sort \( \circ \) mod\(_{12}\) \( \circ w_{\{1,2,3,4\}} \)*</td> </tr> <tr> <td>8</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{2,4\}} \)</td> <td>mod\(_{12}\) \( \circ w_{\{1\}} \)</td> </tr> <tr> <td>9</td> <td>diff \( \circ w_{\{1,2\}} \)</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{2,3\}} \)</td> </tr> <tr> <td>10</td> <td>diff \( \circ \) sort \( \circ w_{\{1,3\}} \)</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{3,4\}} \)</td> </tr> </table>
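A small sketch of the confidence-aware gap just derived: empirical counts are smoothed by a Dirichlet prior centered on the student's current model, so rarely observed conditionals lean on the prior and yield small gaps. The array values below are invented for illustration:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def smoothed_gap(counts, p_stu, c=10.0):
    """KL gap between the student's model (prior mean) and the
    Dirichlet-posterior mean Dir(q_phi + c * p_stu), as in (4)."""
    counts = np.asarray(counts, float)
    p_stu = np.asarray(p_stu, float)
    post = counts + c * p_stu                  # posterior concentration
    p_hat = post / post.sum()                  # E[posterior]
    return kl(p_stu, p_hat)

p_stu = np.array([0.25, 0.25, 0.25, 0.25])
print(smoothed_gap([400, 30, 20, 10], p_stu))  # many counts: large gap
print(smoothed_gap([4, 0, 0, 0], p_stu))       # few counts: prior-dominated
```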
6 EXPERIMENTS

MUS-ROVER II’s main use case is to produce personalized syllabi that are roadmaps to learning the input style (customized paths to Mount Parnassus). By substituting for the student module, users can join the learning cycle, making hands-on compositions and getting iterative feedback from the teacher. Alternatively, for faster experimentation, users can make the student their learning puppet, personalized through its external parameters. This paper discusses the latter, puppet-based setting in detail.

**Math-to-Music Dictionary** MUS-ROVER II conceptualizes every rule feature as a partition of the raw space, and uses the inducing function as its mathematical name. To get the meanings of the features, one can simply work out the math, but some of them already have counterparts in music terminology. We include a short dictionary of those correspondences in Appendix A.1.

**Pace Control and Syllabus Customization** We present a simple yet flexible pace-control panel to the users of MUS-ROVER II, enabling personalized setup of their learning puppet. The control panel exposes four knobs: the lower bound, upper bound, and stride of the rule’s entropic difficulty (\( \delta_{min}, \delta_{max}, \delta_{stride} \)), as well as the satisfactory gap (\( \gamma \)). These four hyper-parameters together allow the user to personalize the pace and capacity of her learning experience. The entropic difficulty \( \delta \) caps the Shannon entropy of a rule’s feature distribution in (3), a surrogate for the complexity (or memorability) of the rule (Pape et al., 2015). It is discretized into a progression staircase from \( \delta_{min} \) up to \( \delta_{max} \), in increments of \( \delta_{stride} \). The resulting syllabus starts with \( \delta = \delta_{min} \), the entry-level difficulty, and ends whenever \( \delta \geq \delta_{max} \), the maximum difficulty the user can handle. Anywhere in between, the loop deactivates all rules whose difficulties are beyond the current \( \delta \), and moves on to the next difficulty level \( \delta + \delta_{stride} \) once the student’s probabilistic model is \( \gamma \)-close to the input under all currently active rule features. To showcase syllabus customization, we introduce an ambitious user who demands a faster pace and a patient user who prefers a slower one. In practice, one can jointly tune the stride parameter \( \delta_{stride} \) and the gap parameter \( \gamma \), with a faster pace corresponding to a larger \( \delta_{stride} \) (jumping directly from freshman to junior year) and a larger \( \gamma \) (an A- is good enough to move on to the next level; why insist on an A+). Here we simply fix \( \delta_{stride} \) and let \( \gamma \) control the pace. We illustrate two syllabi in Table 1, which compares the first ten (1-gram) rules in a faster (\( \gamma = 0.5 \)) syllabus and a slower one (\( \gamma = 0.1 \)). Notice that the faster syllabus gives the fundamentals that a music student will typically learn in her first-year music theory class, including rules on voice crossing, pitch class set (scale), intervals, and so on (triads and seventh chords will appear later).

Table 2: Sample 1-gram rules and their hierarchies.
<table>
<tr> <th>Interpretable rule (Spacing):</th> <th>This partition sub-family includes 21 coarser partitions, which are local orderings that are already captured by the global ordering.</th> </tr>
<tr> <td>Almost always, the soprano pitch is above the alto, the alto above the tenor, and the tenor above the bass.</td> <td>(tree diagram of the partition sub-family)</td> </tr>
<tr> <th>Interpretable rule (Scale):</th> <th>This partition sub-family does not contain any other coarser partitions.</th> </tr>
<tr> <td>The soprano voice is drawn from a diatonic scale with high probability.</td> <td>(tree diagram of the partition sub-family)</td> </tr>
<tr> <th>Interpretable rule (Interval):</th> <th>This partition sub-family contains only one coarser partition:<br>order \( \circ \) sort \( \circ \) mod\(_{12}\) \( \circ \) \( w_{\{2,3\}} \).</th> </tr>
<tr> <td>The intervals of the inner voices are mostly consonant (3, 4, 5, 7, 8, 9), but the perfect octave/unison (0) is rare due to the tight spacing between alto and tenor.</td> <td>(tree diagram of the partition sub-family)</td> </tr>
<tr> <th>Interpretable rule (Interval):</th> <th>This partition sub-family contains only one coarser partition:<br>order \( \circ \) sort \( \circ \) mod\(_{12}\) \( \circ \) \( w_{\{3,4\}} \).</th> </tr>
<tr> <td>The intervals of the lower voices are mostly consonant, with more perfect octaves emerging due to the wide spacing between tenor and bass. Also, the perfect fourth (5) is now considered a dissonance against the bass.</td> <td>(tree diagram of the partition sub-family)</td> </tr>
</table>

It effectively skips the nitty-gritty rules (marked by an asterisk) that are learned in the slower setting. Most of these skipped rules do not have direct counterparts in music theory (such as taking the diff operator twice) and are not important, although occasionally the faster syllabus will skip a rule worth mentioning (such as the second rule in the slower pace, which concerns spacing among soprano, alto, and bass). Setting an appropriate pace for a user is important: a pace that is too fast misses the whole point of knowledge discovery (jumping to the low-level details too quickly), while a pace that is too slow buries the important points among unimportant ones (and hence loses the big picture).

**Fundamentals: Hierarchical 1-gram** Similar to our teaching of music theory, MUS-ROVER II’s proposed syllabus divides into two stages: fundamentals and part writing. The former is under the 1-gram setting, involving knowledge independent of the context; the latter provides online tutoring under multi-\( n \)-grams. We begin our experiments with fundamentals, and use them to illustrate the two types of feature hierarchies. Let’s take a closer look at the two syllabi in Table 1. The specifications (left) and hierarchies (right) of the four common rules are illustrated in Table 2. The rules’ translations are given below the corresponding bar charts, all of which are consistent with our music theory. Extracted from the conceptual hierarchy, the right column lists the partition sub-family sourced at each rule, pictorially simplified as a tree by hiding the implied edges of its corresponding DAG. Every coarser partition in a sub-family is indeed a higher-level representation, but has not accumulated sufficient significance to become a rule itself. A partition will never be learned if one of its finer ancestors has been made a rule. Observe that none of these coarser partitions are typically taught in theory classes.

Figure 3: Gap trajectories for two features. The dashed black lines show two different satisfactory gaps (\( \gamma = 0.5 \) and \( 0.1 \)). The bottom charts show the informationally implied hierarchies.
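Figure 3's trajectories visualize the informational-implication test of Section 4, which is simple to state in code: a feature counts as implied at the first iteration where its gap drops below the tolerance γ. A tiny sketch with an invented trajectory:

```python
def implied_at(gap_trajectory, gamma):
    """First iteration k at which the gap falls below gamma
    (the informational-implication test of Section 4), or None."""
    for k, gap in enumerate(gap_trajectory, start=1):
        if gap < gamma:
            return k
    return None

trajectory = [0.8, 0.55, 0.3, 0.12, 0.04]   # invented gap values per iteration
print(implied_at(trajectory, gamma=0.1))    # -> 5: implied late (slower pace)
print(implied_at(trajectory, gamma=0.5))    # -> 3: implied early (faster pace)
```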
MUS-ROVER II measures the student’s progress from many different angles, in terms of features. With respect to a feature, the gap between the student and Bach is recorded at every iteration of the loop, forming a trajectory. Studying the vanishing point of the trajectory reveals the (local) informational hierarchy around the corresponding feature. Taking the second and seventh rules in the slower syllabus as examples, we plot their trajectories in Figure 3. Both illustrate a decreasing trend for the gaps in the corresponding feature spaces (fluctuations along a trajectory are largely incurred by the imperfect solver of the optimization problem). The left figure shows that the second rule is largely, but not entirely, implied by the first, pointing out the hierarchical structure between the two: the first rule may be considered the dominant ancestor of the second, a relation that is not conceptually apparent but informationally implied. In contrast, the right figure shows that the seventh rule is *not* predominantly implied by the first; it is instead informationally connected to many other rules. In either case, it is probably safe to skip both rules in light of a faster pace, since they will eventually be learned fairly effectively (with small gaps), albeit indirectly.

**Part Writing: Adaptive n-grams** Unlike fundamentals, which study sonorities independently along the *vertical* direction of the chorale texture, rules on part writing (e.g. melodic motion, chord progression) are *horizontal* and *context-dependent*. This naturally results in an online learning framework, in which rule extractions are coupled with the writing process, specific to the realization of a composition (the context). Context dependence is captured by the multi-*n*-gram language model, which further leads to the 2D memory pool of features for rule extraction (Section 5). Consider an example of online learning and adaptive memory selection, where we have the beginning of a chorale:

\[
\langle s \rangle \rightarrow (60, 55, 52, 36) \rightarrow (60, 55, 52, 36) \rightarrow (62, 59, 55, 43) \rightarrow (62, 59, 55, 43) \rightarrow (62, 59, 55, 43),
\]

and want to learn the probabilistic model for the next sonority. Instead of starting from scratch, MUS-ROVER II launches the self-learning loop with the ruleset initialized by the fundamentals (incremental learning), and considers the 2D memory \( N \times \Phi \), for \( N = \{2, 3, 4, 5\} \). The first extracted rule is featured by \( \text{order} \circ \text{sort} \circ \text{mod}_{12} \circ w_{\{3,4\}} \). This rule is chosen because its feature has a large confidence level (validated by the large number of matched examples), a small entropy after Bayesian smoothing, and a large gap against Bach’s style. Figure 4 shows the performance of this rule (in terms of confidence, regularity, and style gap) relative to the other candidate cells in the 2D memory.
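A compact sketch of this selection over the 2D memory, combining the three criteria as hard masks plus an argmax on the gap (the thresholds and cell values below are invented; the actual system solves the constrained problem (3) extended to \( N \times \Phi \)):

```python
import numpy as np

def entropy(p, eps=1e-12):
    p = np.asarray(p, float)
    return float(-np.sum(p * np.log(p + eps)))

def select_rule(cells, delta, gamma, min_count):
    """cells: {(n, phi): (counts, p_stu)}. Mask cells failing the
    confidence / regularity / satisfaction criteria; pick the max gap."""
    best, best_gap = None, -1.0
    for (n, phi), (counts, p_stu) in cells.items():
        counts = np.asarray(counts, float)
        if counts.sum() < min_count:          # confidence mask
            continue
        p_hat = counts / counts.sum()
        if entropy(p_hat) > delta:            # regularity mask
            continue
        gap = float(np.sum(p_stu * np.log((p_stu + 1e-12) / (p_hat + 1e-12))))
        if gap < gamma:                       # already satisfied (implied)
            continue
        if gap > best_gap:
            best, best_gap = (n, phi), gap
    return best, best_gap

cells = {
    (2, "interval"): ([90, 8, 2], np.full(3, 1/3)),
    (5, "interval"): ([3, 1, 0],  np.full(3, 1/3)),  # long memory, few examples
}
print(select_rule(cells, delta=1.0, gamma=0.1, min_count=10))
```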
Among the top 20 rules for this sonority, 12 are 5-grams, 5 are 4-grams, and 3 are 2-grams, showing a long and adaptive dependence on the preceding context.

Figure 4: The relative performance of the selected rule (pointed) among the pool of all cells in the 2D memory. A desired rule has higher confidence (measured by the number of examples; brighter regions in the first row), more regularity (measured by Shannon entropy; darker regions in the second row), and a larger style gap (measured by KL divergence; brighter regions in the bottom two rows).

**Visualizing Bach's Mind** With the hierarchical representations in MUS-ROVER II, we are now able to visualize Bach's music mind step by step by activating nodes in the DAG of rule features (similar to neuron activations in a brain). The hierarchical structure, as well as the additive activation process, stands in stark contrast with the linear sequence of rules extracted by our prior work (Appendix A.5). Figure 5 shows a snapshot of the rule-learning status after ten loops, while the student is writing a sonority in the middle of a piece. The visualization makes it clear how earlier independent rules are now self-organized into sub-families, and how rules from a new context overwrite those from an old context, emphasizing that music is highly context-dependent.

Figure 5: Visualization of Bach's music mind for writing chorales. The underlying DAG represents the conceptual hierarchy (note: edges always point downward). Colors differentiate rule activations from different \( n \)-gram settings. We have enlarged \( N \) to \( \{1, 2, \ldots, 10\} \) to allow even longer-term dependencies.

7 CONCLUSIONS AND DISCUSSIONS

Learning hierarchical rules as distributed representations of tonal music has played a central role in music pedagogy for centuries. While our previous work achieved the automation of rule extraction and, to a certain level, the interpretability of the rules, this paper yields a deeper interpretability that extends to a system of rules and to the overall learning process. In summary, it highlights the importance of disentangling the rule features, sorting out their interconnections, and making the concept learning process more dynamic, hierarchical, and cognitive.

MUS-ROVER is intended to complement music teaching and learning. For instance, to many music students, learning and applying rules in part writing is like learning to solve a puzzle (like Sudoku). Rules themselves are quite flexible, as opposed to hard 0-1 directives, and may sometimes be contradictory. In addition, due to the limitation of human short-term memory and the difficulty of foreseeing implications, one has to handle a small set of rules at a time in a greedy manner, make some trials, and undo a few steps when out of luck. Hence, solving this music puzzle can become a struggle (or perhaps an interesting challenge): according to personal preferences, one typically begins with a small set of important rules and, via several steps of trial and error, tries one's best to make the part writing satisfy a majority of the rules, with occasional violations of unimportant ones. A machine, on the other hand, is often good at solving and learning from such puzzles due to its algorithmic nature. For instance, MUS-ROVER's student can take all rules into consideration: it loads them all at once as constraints and finds the global optimum of the resulting optimization problem in only a few hours. The same level of proficiency might take a human student years to achieve.
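To make the last point concrete, here is a toy sketch of such a globally solved update. It assumes a maximum-entropy objective for the student (Shannon entropy here, as a simple stand-in for the system's actual objective), and the constraint data A1, b1, A2, b2 are hypothetical placeholders for extracted rules:

```python
import numpy as np
from scipy.optimize import minimize

# Toy student update: p is a distribution over 4 sonority clusters; each rule
# pins down one feature's marginal via a linear equality constraint A p = b.
A1, b1 = np.array([[1.0, 1.0, 0.0, 0.0]]), np.array([0.7])
A2, b2 = np.array([[1.0, 0.0, 1.0, 0.0]]), np.array([0.4])

def neg_entropy(p):
    # Negative Shannon entropy; minimizing it maximizes the student's "creativity".
    p = np.clip(p, 1e-12, None)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},  # p is a distribution
    {"type": "eq", "fun": lambda p: A1 @ p - b1},      # rule constraint 1
    {"type": "eq", "fun": lambda p: A2 @ p - b2},      # rule constraint 2
]
res = minimize(neg_entropy, x0=np.full(4, 0.25), method="SLSQP",
               bounds=[(0.0, 1.0)] * 4, constraints=constraints)
print(res.x)  # the least committal style that satisfies every loaded rule
```

Unlike a human solver, nothing forces the machine to work with a small active subset: all rules enter the problem simultaneously, and the entropy objective keeps the solution as non-committal as the constraints allow.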
We envision the future of MUS-ROVER as a partner to humans in both music teaching and research, which includes, but is not limited to, personalizing the learning experience of a student, as well as suggesting new methodologies to music theorists for analyzing and developing new genres. It also has practical applications: as by-products of the self-learning loop, the teacher can be made into a genre classifier, while the student can be cast into a style synthesizer. We are also eager to study the rover's partnership with humans beyond the domain of music.

ACKNOWLEDGMENTS

We thank Professor Heinrich Taube, President of Illiac Software, Inc., for providing Harmonia's MusicXML corpus of Bach's chorales (https://harmonia.illiacsoftware.com/), as well as for his helpful comments and suggestions. This work was supported by the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM Cognitive Horizons Network.

REFERENCES

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798–1828, 2013.
John Biles. GenJam: A genetic algorithm for generating jazz solos. In Proc. Int. Comput. Music Conf. (ICMC), pp. 131–131, 1994.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. arXiv:1606.03657 [cs.LG], 2016.
David Cope. An expert system for computer-assisted composition. Comput. Music J., 11(4):30–46, 1987.
David Cope and Melanie J. Mayer. Experiments in Musical Intelligence, volume 12. A-R Editions, Madison, WI, 1996.
Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In Proc. 29th Annu. Conf. Neural Inf. Process. Syst. (NIPS), pp. 1486–1494, 2015.
Guillaume Desjardins, Aaron Courville, and Yoshua Bengio. Disentangling factors of variation via generative entangling. arXiv:1210.5474 [stat.ML], 2012.
Shlomo Dubnov and Gérard Assayag. Universal prediction applied to stylistic music generation. In Gérard Assayag, Hans Georg Feichtinger, and José Francisco Rodrigues (eds.), Mathematics and Music, pp. 147–159. Springer Verlag, Berlin, 2002. doi: 10.1007/978-3-662-04927-3_9.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proc. 28th Annu. Conf. Neural Inf. Process. Syst. (NIPS), pp. 2672–2680, 2014.
Google Brain. Magenta. http://magenta.tensorflow.org/, 2016.
Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527–1554, 2006.
Lawrence Hubert and Phipps Arabie. Comparing partitions. J. Classif., 2(1):193–218, 1985.
Ferenc Huszár. How to train your generative models? And why does adversarial training work so well? http://www.inference.vc/how-to-train-your-generative-models-why-generative-adversarial-networks-work-so-well-2/, 2015.
Nori Jacoby, Naftali Tishby, and Dmitri Tymoczko. An information theoretic approach to chord categorization and functional harmony. J. New Music Res., 44(3):219–244, 2015.
Steven G. Laitz. The Complete Musician: An Integrated Approach to Tonal Theory, Analysis, and Listening. Oxford University Press, 2016.
Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv:1511.05644 [cs.LG], 2015.
Daniel P. Palomar and Sergio Verdú. Lautum information. IEEE Trans. Inf. Theory, 54(3):964–975, 2008.
Andreas D. Pape, Kenneth J. Kurtz, and Hiroki Sayama. Complexity measures and concept learning. J. Math. Psychol., 64:66–75, 2015.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434 [cs.LG], 2015.
Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Proc. 29th Annu. Conf. Neural Inf. Process. Syst. (NIPS), pp. 3546–3554, 2015.
Martin Rohrmeier and Ian Cross. Statistical properties of tonal harmony in Bach's chorales. In Proc. 10th Int. Conf. Music Percept. Cogn. (ICMPC), pp. 619–627, 2008.
Ian Simon, Dan Morris, and Sumit Basu. MySong: Automatic accompaniment generation for vocal melodies. In Proc. SIGCHI Conf. Hum. Factors Comput. Syst. (CHI 2008), pp. 725–734, 2008.
Heinrich Taube. Automatic tonal analysis: Toward the implementation of a music theory workbench. Comput. Music J., 23(4):18–32, 1999.
Lav R. Varshney. To surprise and inform. In Proc. 2013 IEEE Int. Symp. Inf. Theory, pp. 3145–3149, 2013.
Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proc. 25th Int. Conf. Mach. Learn. (ICML 2008), pp. 1096–1103, 2008.
Haizi Yu, Lav R. Varshney, Guy E. Garnett, and Ranjitha Kumar. MUS-ROVER: A self-learning system for musical compositional rules. In Proc. 4th Int. Workshop Music. Metacreation (MUME 2016), 2016a.
Haizi Yu, Lav R. Varshney, Guy E. Garnett, and Ranjitha Kumar. Learning interpretable musical compositional rules and traces. In Proc. 2016 ICML Workshop Hum. Interpret. Mach. Learn. (WHI 2016), 2016b.

A APPENDIX

A.1 MATH-TO-MUSIC DICTIONARY

<table> <tr> <th>Music Terminology</th> <th>Feature Map</th> <th>Feature Values</th> </tr> <tr> <td>pitches in the upper 3 voices</td> <td>\( w_{\{1,2,3\}} \)</td> <td>\( C4 \mapsto 60,\ C\#4/Db4 \mapsto 61 \)</td> </tr> <tr> <td>pitch class of the bass voice</td> <td>\( \mathrm{mod}_{12} \circ w_{\{4\}} \)</td> <td>\( C \mapsto 0,\ C\#/Db \mapsto 1 \)</td> </tr> <tr> <td>interval of the inner voices</td> <td>\( \mathrm{diff} \circ w_{\{2,3\}} \)</td> <td>\( P8 \mapsto 12,\ M10 \mapsto 16 \)</td> </tr> <tr> <td>interval class of the outer voices</td> <td>\( \mathrm{mod}_{12} \circ \mathrm{diff} \circ w_{\{1,4\}} \)</td> <td>\( P8 \mapsto 0,\ M10/M3 \mapsto 4 \)</td> </tr> <tr> <td>voicing/spacing</td> <td>\( \mathrm{order} \circ \mathrm{diff} \circ w_I \)</td> <td>cf. open (closed) position</td> </tr> <tr> <td>chord regardless of inversion</td> <td>\( \mathrm{sort} \circ \mathrm{mod}_{12} \circ w_I \)</td> <td>\( V^7/V^6_5/V^4_3/V^4_2 \mapsto (2, 4, 5, 7) \)</td> </tr> <tr> <td>voice doubling / tripling</td> <td>\( \mathrm{order} \circ \mathrm{sort} \circ \mathrm{mod}_{12} \circ w_I \)</td> <td>doubling \( \mapsto \) "="</td> </tr> </table>
A.2 ATOMIC ARITHMETIC OPERATORS

In MUS-ROVER II, we set \( B = \{\mathrm{order}, \mathrm{diff}, \mathrm{sort}, \mathrm{mod}_{12}\} \), where
\[
\begin{align*}
\mathrm{diff}(x) &= (x_2 - x_1, x_3 - x_2, \cdots), & \forall x \in \Omega^2 \cup \Omega^3 \cup \Omega^4; \\
\mathrm{sort}(x) &= (x_{(1)}, x_{(2)}, \cdots), & \forall x \in \Omega^2 \cup \Omega^3 \cup \Omega^4; \\
\mathrm{mod}_{12}(x) &= (\mathrm{mod}(x_1, 12), \mathrm{mod}(x_2, 12), \cdots), & \forall x \in \Omega \cup \Omega^2 \cup \Omega^3 \cup \Omega^4;
\end{align*}
\]
and \( \mathrm{order}(x) \), similar to argsort, maps \( x \in \Omega^2 \cup \Omega^3 \cup \Omega^4 \) to a string that specifies the ordering of its elements, e.g. \( \mathrm{order}((60, 55, 52, 52)) = \) "4=3<2<1". The numbers in an order string denote the indices of the input vector \( x \).

A.3 ALGORITHM FOR CONCEPTUAL HIERARCHY

Input: A family of distinct partitions, represented by a sorted list \( P = [p_1, \ldots, p_n] \): \( p_i \neq p_j \) for all \( i \neq j \), and \( |p_1| \leq \cdots \leq |p_n| \);
Output: The conceptual hierarchy as a DAG, represented by the \( n \times n \) adjacency matrix \( T \): \( T[i, j] = 1 \) if there is an edge from node \( i \) to node \( j \) in the DAG;

initialize \( T[i, j] = 0 \) for all \( i, j \);
for \( i = n : 1 \) do
    for \( j = (i + 1) : n \) do
        if \( T[i, j] == 0 \) then
            if is_coarser(\( p_i, p_j \)) then
                for \( k \in \{k \mid p_j \prec p_k\} \cup \{j\} \) do
                    \( T[i, k] = 1 \);
                end
            end
        end
    end
end
T = Transpose(T);

Algorithm 1: Computing the conceptual hierarchy
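For concreteness, here is a Python rendering of Algorithm 1: a sketch assuming each partition is given as a label map over a small finite domain, with is_coarser implemented by the brute-force pair test discussed next in Appendix A.4. The toy domain and features at the bottom are our own illustrations:

```python
import numpy as np
from itertools import combinations

def is_coarser(P, Q, domain):
    # Brute force over unordered pairs (Hubert & Arabie, 1985): P is strictly
    # coarser than Q iff P never splits a pair that Q joins, and P joins at
    # least one pair that Q splits. P and Q map each point to a cluster label.
    pairs = list(combinations(domain, 2))
    no_split = all(P[x] == P[y] for x, y in pairs if Q[x] == Q[y])
    merges = any(P[x] == P[y] and Q[x] != Q[y] for x, y in pairs)
    return no_split and merges

def conceptual_hierarchy(parts, domain):
    # parts is sorted by cluster count, |p_1| <= ... <= |p_n|; the returned
    # T[i, j] = 1 marks an edge from (finer) node i to (coarser) node j.
    n = len(parts)
    T = np.zeros((n, n), dtype=int)
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            if T[i, j] == 0 and is_coarser(parts[i], parts[j], domain):
                for k in list(np.flatnonzero(T[j])) + [j]:
                    T[i, k] = 1    # p_i is coarser than p_j and everything below it
    return T.T                     # flip so edges point finer -> coarser

# Toy usage: three features on a four-element domain of (pitch, pitch) pairs.
domain = [(60, 55), (60, 67), (72, 55), (61, 67)]
feats = [lambda p: tuple(v % 12 for v in p),   # a mod12-style map: 2 clusters
         lambda p: p[0] < p[1],                # a crude ordering: 2 clusters
         lambda p: p]                          # the raw feature: 4 clusters
parts = sorted(({x: f(x) for x in domain} for f in feats),
               key=lambda P: len(set(P.values())))
print(conceptual_hierarchy(parts, domain))     # edges: raw -> mod12, raw -> ordering
```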
A.4 HEURISTICS FOR COMPARING TWO PARTITIONS

Given two partitions \( \mathcal{P}, \mathcal{Q} \) from the partition family, the function \( \mathrm{is\_coarser}(\mathcal{P}, \mathcal{Q}) \) in Algorithm 1 returns True if \( \mathcal{P} \prec \mathcal{Q} \). A brute-force implementation of this function examines all (unordered) pairs of elements in the input domain (Hubert & Arabie, 1985), which is computationally burdensome when the input domain is large. Therefore, we bypass this brute-force routine whenever a heuristic can infer the output of is_coarser directly. We propose several such heuristics below.

Transitivity Heuristic: If \( \mathcal{P}_\phi \succ \mathcal{P}_{\phi'} \) and \( \mathcal{P}_{\phi'} \succ \mathcal{P}_{\phi''} \), then \( \mathcal{P}_\phi \succ \mathcal{P}_{\phi''} \).

Window Heuristic: Let \( \mathcal{P}_\phi \) and \( \mathcal{P}_{\phi'} \) be two partitions induced by features \( \phi \) and \( \phi' \), respectively, where \( \phi \) and \( \phi' \) are generated from the same descriptor \( d \) that preserves the order of the inputs' coordinates (e.g. diff, mod\(_{12}\)):
\[ \phi = d \circ w_I, \quad \phi' = d \circ w_{I'}. \]
We claim that \( \mathcal{P}_\phi \succ \mathcal{P}_{\phi'} \) if \( I \supset I' \) and \( |\phi(\Omega^4)| > |\phi'(\Omega^4)| \). To see why, pick any \( x, y \in \Omega^4 \) from the same cluster in \( \mathcal{P}_\phi \); then \( \phi(x) = \phi(y) \). Since \( d \) preserves the order of the inputs' coordinates and \( I' \) extracts coordinates from \( I \), we have \( \phi'(x) = \phi'(y) \), i.e. \( x, y \) are in the same cluster in \( \mathcal{P}_{\phi'} \). So, by definition, \( \mathcal{P}_\phi \succeq \mathcal{P}_{\phi'} \); and since \( |\phi(\Omega^4)| > |\phi'(\Omega^4)| \), \( \mathcal{P}_\phi \succ \mathcal{P}_{\phi'} \).

Descriptor Heuristic: Let \( \mathcal{P}_\phi \) and \( \mathcal{P}_{\phi'} \) be two partitions induced by features \( \phi \) and \( \phi' \), respectively, where \( \phi \) and \( \phi' \) are generated from the same window:
\[ \phi = d \circ w_I; \quad \phi' = d' \circ w_I. \]
We claim that \( \mathcal{P}_\phi \succ \mathcal{P}_{\phi'} \) if \( d' = b \circ d \) for some function \( b \) and \( |\phi(\Omega^4)| > |\phi'(\Omega^4)| \). To see why, pick any \( x, y \in \Omega^4 \) from the same cluster in \( \mathcal{P}_\phi \); then \( \phi(x) = \phi(y) \). Since \( d' = b \circ d \), we have \( \phi' = b \circ d \circ w_I = b \circ \phi \), and thus \( \phi'(x) = \phi'(y) \), i.e. \( x, y \) are in the same cluster in \( \mathcal{P}_{\phi'} \). So, by definition, \( \mathcal{P}_\phi \succeq \mathcal{P}_{\phi'} \); and since \( |\phi(\Omega^4)| > |\phi'(\Omega^4)| \), \( \mathcal{P}_\phi \succ \mathcal{P}_{\phi'} \).

Combined Heuristic: Combining the above heuristics, one can show that for \( \mathcal{P}_\phi \) and \( \mathcal{P}_{\phi'} \) where
\[ \phi = d \circ w_I; \quad \phi' = d' \circ w_{I'}, \]
we have \( \mathcal{P}_\phi \succ \mathcal{P}_{\phi'} \) if the following conditions are satisfied: 1) \( d, d' \) both preserve the order of the inputs' coordinates; 2) \( d' = b \circ d \) for some \( b \); 3) \( I \supset I' \); 4) \( |\phi(\Omega^4)| > |\phi'(\Omega^4)| \).

A.5 SAMPLE RULE TRACES FROM MUS-ROVER I

Table 4 is essentially the same as Table 2 in our previous publication (Yu et al., 2016a), with feature notations updated to the present paper's convention. \( \alpha \) is the pace-control parameter used in our previous system. No hierarchy was present in any of the three rule traces: for instance, the ordering features were learned as independent rules within a trace even though they are apparently correlated, e.g. the ordering of \( w_{\{1,2,3,4\}} \) (S,A,T,B) implies the ordering of \( w_{\{1,4\}} \) (S,B).

Table 4: Sample rule traces from MUS-ROVER I under three pace settings.

<table> <tr> <th></th> <th>\( \alpha = 0.1 \)</th> <th>\( \alpha = 0.5 \)</th> <th>\( \alpha = 1.0 \)</th> </tr> <tr> <td>1</td> <td>\( \mathrm{order} \circ w_{\{1,4\}} \)</td> <td>\( \mathrm{order} \circ w_{\{1,4\}} \)</td> <td>\( w_{\{1,2,3\}} \)</td> </tr> <tr> <td>2</td> <td>\( \mathrm{order} \circ w_{\{1,3\}} \)</td> <td>\( \mathrm{order} \circ w_{\{1,3\}} \)</td> <td>\( w_{\{2,3,4\}} \)</td> </tr> <tr> <td>3</td> <td>\( \mathrm{order} \circ w_{\{2,4\}} \)</td> <td>\( \mathrm{order} \circ w_{\{2,4\}} \)</td> <td>\( \mathrm{mod}_{12} \circ w_{\{1,2,3,4\}} \)</td> </tr> <tr> <td>4</td> <td>\( \mathrm{order} \circ w_{\{1,2\}} \)</td> <td>\( \mathrm{order} \circ w_{\{1,2\}} \)</td> <td>\( w_{\{1,3,4\}} \)</td> </tr> <tr> <td>5</td> <td>\( \mathrm{order} \circ w_{\{2,3\}} \)</td> <td>\( \mathrm{order} \circ w_{\{2,3\}} \)</td> <td>\( w_{\{1,2,4\}} \)</td> </tr> <tr> <td>6</td> <td>\( \mathrm{order} \circ w_{\{3,4\}} \)</td> <td>\( w_{\{1,3,4\}} \)</td> <td>\( \mathrm{diff} \circ w_{\{1,2,3,4\}} \)</td> </tr> <tr> <td>...</td> <td>...</td> <td>...</td> <td>...</td> </tr> </table>
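As a postscript to the heuristics of Appendix A.4, the following sketch numerically illustrates the Descriptor Heuristic with the toy instantiation \( d = \mathrm{diff} \), \( b = \mathrm{mod}_{12} \) (our own choice), sampling sonorities instead of enumerating all of \( \Omega^4 \):

```python
import random

# With phi = diff o w_{1,2,3,4} and phi' = mod12 o diff o w_{1,2,3,4}, we have
# phi' = b o phi for b = mod12, so the heuristic certifies P_phi > P_phi'
# from image sizes alone, skipping the quadratic pairwise comparison.
diff = lambda x: tuple(b - a for a, b in zip(x, x[1:]))
mod12 = lambda x: tuple(v % 12 for v in x)

random.seed(0)
sample = [tuple(random.randint(36, 81) for _ in range(4)) for _ in range(20000)]
image = {diff(p) for p in sample}           # image of phi on the sample
image_b = {mod12(diff(p)) for p in sample}  # image of phi' = b o phi
assert len(image) > len(image_b)            # condition 4) of the Combined Heuristic
print(len(image), len(image_b))             # phi' merges phi's clusters: coarser
```

Strictly, the heuristic's premise quantifies over all of \( \Omega^4 \); the sampled check above only illustrates the image-size gap that the heuristic exploits.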
ABSTRACT Music theory studies the regularity of patterns in music to capture concepts underlying music styles and composers’ decisions. This paper continues the study of building automatic theorists (rovers) to learn and represent music concepts that lead to human interpretable knowledge and further lead to materials for educating people. Our previous work took a first step in algorithmic concept learning of tonal music, studying high-level representations (concepts) of symbolic music (scores) and extracting interpretable rules for composition. This paper further studies the representation hierarchy through the learning process, and supports adaptive 2D memory selection in the resulting language model. This leads to a deeper-level interpretability that expands from individual rules to a dynamic system of rules, making the entire rule learning process more cognitive. The outcome is a new rover, MUS-ROVER II, trained on Bach’s chorales, which outputs customizable syllabi for learning compositional rules. We demonstrate comparable results to our music pedagogy, while also presenting the differences and variations. In addition, we point out the rover’s potential usages in style recognition and synthesis, as well as applications beyond music. 1 INTRODUCTION Forming hierarchical concepts from low-level observations is key to knowledge discovery. In the field of artificial neural networks, deep architectures are employed for machine learning tasks, with the awareness that hierarchical representations are important (Bengio et al., 2013). Rapid progress in deep learning has shown that mapping and representing topical domains through increasingly abstract layers of feature representation is extremely effective. Unfortunately, this layered representation is difficult to interpret or use for teaching people. Consequently, deep learning models are widely used as algorithmic task performers (e.g. AlphaGo), but few act as theorists or pedagogues. In contrast, our goal is to achieve a deeper-level interpretability that explains not just what has been learned (the end results), but also what is being learned at every single stage (the process). On the other hand, music theory studies underlying patterns beneath the music surface. It objectively reveals higher-level invariances that are hidden from the low-level variations. In practice, the development of music theory is an empirical process. Through manual inspection of large corpora of music works, theorists have summarized compositional rules and guidelines (e.g. J. J. Fux, author of Gradus ad Parnassum, the most influential book on Renaissance polyphony), and have devised multi-level analytical methods (e.g. H. Schenker, inventor of Schenkerian analysis) to emphasize the hierarchical structure of music, both of which have become the standard materials taught in today’s music theory classes. The objective and empirical nature of music theory suggests the possibility of an automatic theorist — statistical techniques that perform hierarchical concept learning — while its pedagogical purpose requires human interpretability throughout the entire learning process. The book title Gradus ad Parnassum, means “the path towards Mount Parnassus,” the home of poetry, music, and learning. This paper presents MUS-ROVER II, an extension of our prior work (Yu et al., 2016a,b), to independently retake the path towards Parnassus. The rover acts more as a pathfinder than a generative model (e.g. LSTM), emphasizing the path more than the destination. 
The teacher solves: maximize \( D \left( p_{\phi,stu}^{(k-1)} \mid\mid \hat{p}_\phi \right) \) subject to \( \phi \in \Phi \setminus \Phi^{(k-1)} \) (discrete optimization) music input \( \hat{p} \) teacher \( p_{stu}^{(k-1)} \) rule \( \Gamma_k \) ruleset \( \{ \Gamma_i \}_{i=1}^k \) student \( p_{stu}^{(k)} \) The student solves: maximize \( S_q \left( p_{stu}^{(k)} \right) \) subject to \( p_{stu}^{(k)} \in \Gamma_1 \) \( \ldots \) \( p_{stu}^{(k)} \in \Gamma_k \) (linear least-squares) Figure 1: MUS-ROVER’s self-learning loop (the kth iteration). The teacher (discriminator) takes as inputs the student’s latest style \( p_{stu}^{(k-1)} \) and the input style \( \hat{p} \), and identifies a feature \( \phi \) through which the two styles manifest the largest gap \( D(\cdot||\cdot) \). The identified feature is then made into a rule (a constraint set \( \Gamma_k \)), and augments the ruleset \( \{ \Gamma_i \}_{i=1}^k \). The student (generator) takes as input the augmented ruleset to update its writing style into \( p_{stu}^{(k)} \), and favors creativity, i.e. more possibilities, by maximizing the Tsallis entropy \( S_q \) subject to the rule constraints. In short, the teacher extracts rules while the student applies rules; both perform their tasks by solving optimization problems. We compare the paths taken by this improved automatic theorist to paths taken by human theorists (say Fux), studying similarities as well as pros and cons of each. So advantages from both can be jointly taken to maximize the utility in music education and research. In this paper in particular, we highlight the concept hierarchy that one would not get from our prior work, as well as enhanced syllabus personalization that one would not typically get from traditional pedagogy. 2 MUS-ROVER OVERVIEW As the first algorithmic pathfinder in music, MUS-ROVER I introduced a “teacher ⇔ student” model to extract compositional rules for writing 4-part chorales (Yu et al., 2016a,b). The model is implemented by a self-learning loop between a generative component (student) and a discriminative component (teacher), where both entities cooperate to iterate through the rule-learning process (Figure 1). The student starts as a tabula rasa that picks pitches uniformly at random to form sonorities (a generic term for chord) and sonority progressions. The teacher compares the student’s writing style (represented by a probabilistic model) with the input style (represented by empirical statistics), identifying one feature per iteration that best reveals the gap between the two styles, and making it a rule for the student to update its probabilistic model. As a result, the student becomes less and less random by obeying more and more rules, and thus, approaches the input style. Collecting from its rule-learning traces, MUS-ROVER I successfully recovered many known rules, such as “Parallel perfect octaves/fifths are rare” and “Tritons are often resolved either inwardly or outwardly”. What is Inherited from MUS-ROVER I MUS-ROVER II targets the same goal of learning interpretable music concepts. It inherits the self-learning loop, as well as the following design choices. (Dataset and Data Representation) We use the same dataset that comprises 370 C scores of Bach’s 4-part chorales. We include only pitches and their durations in a piece’s raw representation, notated as a MIDI matrix whose elements are MIDI numbers for pitches. 
The matrix preserves the two-dimensional chorale texture, with rows corresponding to melodies, and columns to harmonies. (Rule Representation) We use the same representation for high-level concepts in terms of rules, unrelated to rules in propositional logic. A (compositional) rule is represented by a feature and its distribution: \( r = (\phi, p_\phi) \), which describes likelihoods of feature values. It can also be transformed to a linear equality constraint (\( A_\phi p_{stu} = p_\phi \)) in the student’s optimization problem (\( \Gamma \)’s in Figure 1). (Student’s Probabilistic Model) We still use n-gram models to represent the student’s style/belief, with words being sonority features, and keep the student’s optimization problem as it was. To reiterate the distinctions to many music n-grams, we never run n-grams in the raw feature space, but only collectively in the high-level feature spaces to prevent overfitting. So, rules are expressed as probabilistic laws that describe either (vertical) sonority features or their (horizontal) progressions. What is New in MUS-ROVER II We study hierarchies on features, so rules are later presented not just as a linear list, but as hierarchical families and sub-families. In particular, we introduce conceptual hierarchy that is pre-determined by feature maps, and infer informational hierarchy that is post-implied from an information-theoretic perspective. We upgrade the self-learning loop to adaptively select memories in a multi-feature multi-n-gram language model. This is realized by constructing hierarchical filters to filter out conceptual duplicates and informational implications. By further following the information scent spilled by Bayesian surprise (Varshney, 2013), the rover can effectively localize the desired features in the feature universe. 3 RELATED WORK Adversarial or Collaborative MUS-ROVER’s self-learning loop between the teacher (a discriminator) and student (a generator) shares great structural similarity to generative adversarial nets (Goodfellow et al., 2014) and their derivatives (Denton et al., 2015; Makhzani et al., 2015). However, the working mode between the discriminator and generator is different. In current GAN algorithms, the adversarial components are black-boxes to each other, since both are different neural networks that are coupled only end to end. The learned intermediate representation from one model, no matter how expressive or interpretable, is not directly shared with the other. Contrarily in MUS-ROVER, both models are transparent to each other (also to us): the student directly leverages the rules from the teacher to update its probabilistic model. In this sense, the learning pair in MUS-ROVER is more collaborative rather than adversarial. Consequently, not only the learned concepts have interpretations individually, but the entire learning trace is an interpretable, cognitive process. Furthermore, MUS-ROVER and GAN contrast in the goal of learning and the resulting evaluations. The rover is neither a classifier nor a density estimator, but rather a pure representation learner that outputs high-level concepts and their hierarchies. Training this type of learner in general is challenging due to the lack of a clear objective or target (Bengio et al., 2013), which drives people to consider some end task like classification and use performance on the task to indirectly assess the learned representations. 
In MUS-ROVER, we introduce information-theoretic criteria to guide the training of the automatic theorist, and in the context of music concept learning, we directly evaluate machine generated rules and hierarchies by comparison to those in existing music theory. Interpretable Feature Learning In the neural network community, much has been done to first recover disentangled representations, and then post-hoc interpret the semantics of the learned features. This line of work includes denoising autoencoders (Vincent et al., 2008) and restricted Boltzmann machines (Hinton et al., 2006; Desjardins et al., 2012), ladder network algorithms (Rasmus et al., 2015), as well as more recent GAN models (Radford et al., 2015). In particular, InfoGAN also introduces information-theoretic criteria to augment the standard GAN cost function, and to some extent achieves interpretability for both discrete and continuous latent factors (Chen et al., 2016). However, beyond the end results, the overall learning process of these neural networks are still far away from human-level concept learning (Lake et al., 2015), so not directly instructional to people. Automatic Musicians Music theory and composition form a reciprocal pair, often realized as the complementary cycle of reduction and elaboration (Laitz, 2016) as walks up and down the multi-level music hierarchy. Accordingly, various models have been introduced to automate this up/down walk, including music generation (Cope & Mayer, 1996; Biles, 1994; Simon et al., 2008), analysis (Taube, 1999), or theory evaluation (Rohrmieier & Cross, 2008). In terms of methodologies, we have rule-based systems (Cope, 1987), language models (Google Brain, 2016; Simon et al., 2008), and information-theoretic approaches (Jacoby et al., 2015; Dubnov & Assayag, 2002). However, all of these models leverage domain knowledge (e.g. human-defined chord types, functions, rules) as part of the model inputs. MUS-ROVER takes as input only the raw notations (pitches and durations), and outputs concepts that are comparable to (but also different from) our domain knowledge. 4 HIERARCHICAL RULE LEARNING MUS-ROVER II emphasizes hierarchy induction in learning music representations, and divides the induction process into two stages. In the first stage, we impose conceptual hierarchy as pre-defined structures among candidate features before the self-learning loop. In the second stage, we infer informational hierarchy as post-implied structures through the rule learning loops. Interpretable Features A feature is a function that computes a distributed representation of the building blocks that constitute data samples. For Bach’s 4-part chorales, we model every piece (4-row matrix) as a sequence of sonorities (columns). So every sonority is the building block of its composing piece (like a word in a sentence). Then a feature maps a sonority onto some feature space, summarizing an attribute. To formalize, let \( \Omega = \{R, p_1, \ldots, p_n\} \) be an alphabet that comprises a rest symbol R, and n pitch symbols \( p_i \). In addition, the alphabet symbols — analogous to image pixels — are manipulable by arithmetic operations, such as plus/minus, modulo, and sort. More precisely, every \( p_i \) is an integer-valued MIDI number (60 for middle C, granularity 1 for semi-tone), and R is a special character which behaves like a python nan variable. The four coordinates of every sonority \( p \in \Omega^4 \) denote soprano, alto, tenor, and bass, respectively. 
We define a feature as a surjective function \( \phi : \Omega^4 \mapsto \phi(\Omega^4) \), and the corresponding feature space by its range. As a first and brutal categorization, we say a feature (space) is raw (or lowest-level) if \( |\phi(\Omega^4)| = |\Omega^4| \), and high-level if \( |\phi(\Omega^4)| < |\Omega^4| \). For instance, \( \Omega^4 \) or any permutation of \( \Omega^4 \) is a raw feature space. MUS-ROVER II employs a more systematic way of generating the universe of interpretable features. A (sonority) feature is constructed as the composition of a window and a descriptor. A window is a function that selects parts of the input sonority: \( w_I : \Omega^4 \mapsto \Omega^{|I|} \), where I is an index set. For instance, \( w_{\{1,4\}}(p) = (p_1, p_4) \) selects soprano and bass. A descriptor is constructed inductively from a set of basis descriptors B, consisting of atomic arithmetic operations. We currently set \( B = \{\text{order}, \text{diff}, \text{sort}, \text{mod}_{12}\} \) (Appendix A.2). We define a descriptor of length k as the composition of k bases: \( d_{(k)} = b_k \circ \cdots \circ b_1 \), for all \( b_i \in B \), where \( d_{(0)} \) is the identity function. We collect the family of all possible windows: \( W = \{w_I \mid I \in 2^{\{1,2,3,4\}} \setminus \{\emptyset\}\} \), and the family of all descriptors of length less than or equal to k: \( D^{[k]} = \{d_{(k')} \mid 0 \leq k' \leq k\} \), and form the feature universe: \[ \Phi = \{d \circ w \mid w \in W, d \in D^{[k]}\}. \] The fact that every candidate feature in \( \Phi \) is systematically generated as composition of atomic operators ensures its interpretability, since one can literally read it out step-by-step from the composition. Feature-Induced Partition On the one hand, a feature function has all the mathematic specifications to name the corresponding feature and feature values. On the other hand, we only care about the partition of the input domain (\( \Omega^4 \)) induced by the feature but not the (superficial) naming of the clusters. In other words, we only identity the sonority clusters whose members are mapped to the same function value, but not the value per se. As a result, we use a partition to refer to the essence of a concept, and the inducing function as a mathematical name to interpret the concept. To formalize, a feature function \( \phi \) induces a partition of its domain \[ \mathcal{P}_\phi = \{\phi^{-1}(\{y\}) \mid y \in \phi(\Omega^4)\}. \] Given a feature universe \( \Phi \), [2] defines an equivalence relation on \( \Phi \): \( \phi \overset{\mathcal{P}}{\sim} \phi' \) if \( \mathcal{P}_\phi = \mathcal{P}_{\phi'} \), which induces the corresponding partition family \( \mathcal{P}_\Phi \) as the resulting equivalence classes. For two partitions \( \mathcal{P}, \mathcal{Q} \in \mathcal{P}_\Phi \), we say \( \mathcal{P} \) is finer than \( \mathcal{Q} \) (or \( \mathcal{Q} \) is coarser), written as \( \mathcal{P} \succeq \mathcal{Q} \), if for all \( p, p' \in \Omega^4, p, p' \) are in the same cluster under \( \mathcal{P} \Rightarrow p, p' \) are in the same cluster under \( \mathcal{Q} \). We say \( \mathcal{P} \) is strictly finer, written as \( \mathcal{P} \succ \mathcal{Q} \), if \( \mathcal{P} \succeq \mathcal{Q} \) and \( \mathcal{Q} \not\succ \mathcal{P} \). 
Conceptual Hierarchy Based on the binary relation \( \succeq \), we construct the conceptual hierarchy for the partition family \( \mathcal{P}_\Phi \), and represent it as a directed acyclic graph (DAG) with nodes being partitions. For any pair of nodes \( v, v', v \rightarrow v' \) if and only if the partition referred by v is (strictly) finer than that referred by \( v' \). The DAG grows from a single source node, which represents the finest partition — every point in the domain by itself is a cluster — and extends via the edges to coarser and coarser partitions. In terms of features, we say a feature \( \phi' \) is at a higher level than another feature \( \phi \), if the induced partitions satisfy \( \mathcal{P}_\phi \succ \mathcal{P}_{\phi'} \). In other words, a higher-level feature induces a coarser partition that ignores lower-level details by merging clusters. One can check that the finest partition (the source node) is indeed induced by a raw feature. We attach an efficient algorithm for pre-computing the conceptual hierarchy in Appendix A.3. We emphasize the necessity of this multi-step process: features \( \rightarrow \) partitions \( \rightarrow \) hierarchy (DAG), as opposed to a simple hierarchical clustering (tree). The latter loses many inter-connections due to the tree structure and its greedy manner, and more importantly, the interpretability of the partitions. Informational Hierarchy We infer informational hierarchy from a many-to-one relation, called implication, along a rule trace. More formally, let \( \{r_i\}_{i=1}^k := \{(\phi_i, p_{\phi_i})\}_{i=1}^k \) be the extracted trace of rules (in terms of feature and feature distribution) by the kth iteration of the self-learning loop. We say a feature \( \phi \) is *informationally implied* from the trace \( \{ r_i \}_{i=1}^k \) with tolerance \( \gamma > 0 \), if \[ gap \left( p_{\phi,stu}^{(k)} \| \hat{p}_\phi \right) := D \left( p_{\phi,stu}^{(k)} \| \hat{p}_\phi \right) < \gamma, \quad \text{and} \quad gap \left( p_{\phi,stu}^{(k')} \| \hat{p}_\phi \right) \geq \gamma, \forall k' < k, \] where \( D(\cdot \| \cdot) \) is the KL divergence used to characterize the gap of the student’s style (probabilistic model) against Bach’s style (input). One trivial case happens when \( \phi \) is extracted as the kth rule, i.e. \( \phi = \phi_k \), then \( gap(p_{\phi',stu}^{(k)} \| \hat{p}_{\phi'}) = 0 < \gamma, \forall \phi' \in \{ \phi' | \mathcal{P}_\phi \prec \mathcal{P}_{\phi'} \} \), meaning that feature \( \phi \), once learned as a rule, informationally implies itself and all its descendants in the conceptual hierarchy. However, what is more interesting is the informational implication from other rules outside the conceptual hierarchy, which is typically hard for humans to “eyeball”. One might question the necessity of conceptual hierarchy since it can be implied in the informational hierarchy. The answer is yes in principle, but no in practice. The main difference is that conceptual hierarchy is pre-computed over the entire feature universe before the loop, which is global, precise, and trace independent. On the contrary, informational hierarchy is trace specific and loose, due to tolerance \( \gamma \) and the precision of the optimization solver. As a result, informational hierarchy alone tends to lose the big picture and require more post-hoc interpretations, and is unstable in practice. 
**Hierarchical Filters** Beyond their benefits in revealing inter-relational insights among distributed representations, we build *hierarchical filters* from both conceptual and informational hierarchies, for the purpose of pruning hierarchically entangled features and speeding up feature selection. This upgrades MUS-ROVER II into a more efficient, robust, and cognitive theorist. Recall the skeleton of the teacher’s optimization problem in Figure 1, we flesh it out as follows: \[ \begin{aligned} & \underset{\phi \in \Phi}{\text{maximize}} \quad gap \left( p_{\phi,stu}^{(k-1)} \| \hat{p}_\phi \right) \\ & \text{subject to} \quad H(\hat{p}_\phi) \leq \delta \tag{Regularity Condition} \\ & \quad \phi \notin C^{(k-1)} := \left\{ \phi \mid \mathcal{P}_\phi \not\preceq \mathcal{P}_{\phi'}, \phi' \in \Phi^{(k-1)} \right\} \tag{Conceptual-Hierarchy Filter} \\ & \quad \phi \notin I^{(k-1)} := \left\{ \phi \mid gap \left( p_{\phi,stu}^{(k-1)} \| \hat{p}_\phi \right) < \gamma \right\} \tag{Informational-Hierarchy Filter} \end{aligned} \] In the above optimization problem, \( \Phi \) is the feature universe defined in [1] and \( \phi \in \Phi \) is the optimization variable whose optimal value is used to form the kth rule: \( \phi_k = \phi^*, r_k = (\phi^*, \hat{p}_{\phi^*}) \). We decouple the regularity condition from the objective function in our previous work (which was the generalized *cultural hole* function), and state it separately as the first constraint that requires the Shannon entropy of the feature distribution to be no larger than a given threshold ([Pape et al., 2015]). The second constraint encodes the filter from conceptual hierarchy, which prunes coarser partitions of the learned features \( \Phi^{(k-1)} := \{ \phi_1, \ldots, \phi_{k-1} \} \). The third constraint encodes the filter from informational hierarchy, which prunes informationally implied features. There are two hyper-parameters \( \delta \) and \( \gamma \) in the optimization problem (3), whose detailed usage in syllabus customization will be discussed later in Sec.6. At a high level, we often pre-select \( \gamma \) before the loop to express a user’s *satisfaction level*: a smaller \( \gamma \) signifies a meticulous user who is harder to satisfy; the threshold \( \delta \) upper bounds the *entropic difficulty* of the rules, and is adaptively adjusted through the loop: it starts from a small value (easy rules first), and auto-increases whenever the feasible set of (3) is empty (gradually increases the difficulty when mastering the current level). 5 ADAPTIVE MEMORY SELECTION MUS-ROVER II considers a continuous range of higher order n-grams (variable memory), and adaptively picks the optimal n based on a balance among multiple criteria. The fact that every n-gram is also on multiple high-level feature spaces opens the opportunities for long-term memories without exhausting machine memory, while effectively avoiding overfitting. **Two-Dimensional Memory** In light of a continuous range of n-grams, say \( n \in N = \{ 2, 3, \ldots \} \), the feature universe adds another dimension, forming a two-dimensional memory (\( N \times \Phi \)) — *length* versus *depth* — for the language model (Figure 2 left). The length axis enumerates n-gram orders, with a longer memory corresponding to a larger n; the depth axis enumerates features, with a deeper Figure 2: MUS-ROVER II’s two-dimensional memory (left): the length axis enumerates \( n \)-gram orders; the depth axis enumerates features; and every cell is a feature distribution. 
Memory mask (right): 0 marks the removal of the corresponding cell from feature selection, which is caused by a hierarchical filter or the regularity condition or (contradictory) duplication. memory corresponding to a higher-level feature. Every cell in the memory is indexed by two coordinates \((n, \phi)\), referring to the feature \(\phi\) under the \(n\)-gram, and stores the corresponding feature distribution. As a consequence, the rule extraction task involves picking the right feature under the right \(n\)-gram, which extends the space of the optimization problem (3) from \(\Phi\) to \(N \times \Phi\). Accordingly, the constraints of (3) jointly forge a mask on top of the 2D memory (Figure 2 right). Criteria and Balance We propose three criteria to extract rules from the 2D memory: confidence, regularity, and efficacy. Confidence is quantified by empirical counts: the more relevant examples one sees in Bach’s chorales, the more confident. Regularity is quantified by Shannon entropy of the rule’s feature distribution: a rule is easier to memorize if it is less entropic (Pape et al., 2015). Efficacy is inversely quantified by the gap between the student’s probabilistic model and the rule’s feature distribution: a rule is more effective if it reveals a larger gap. There are tradeoffs among these criteria. For instance, a lower-level feature is usually more effective since it normally reflects larger variations in the gap, but is also unlikely to be regular, thus harder to memorize and generalize. Also a feature under a higher-order \(n\)-gram may be both regular and effective, but the number of examples that match the long-term conditionals is likely to be small, reducing confidence. Adaptive Selection: Follow the (Bayesian) Surprise The teacher’s optimization problem (3) explicitly expresses the efficacy factor in the objective, and the regularity condition as the first constraint. To further incorporate confidence, we cast the rule’s feature distribution \(\hat{p}_\phi\) in a Bayesian framework rather than a purely empirical framework as in our previous work. We assume the student’s belief with respect to a feature \(\phi\) follows a Dirichlet distribution whose expectation is the student’s probabilistic model. In the \(k\)th iteration of the self-learning loop, we set the student’s prior belief as the Dirichlet distribution parameterized by the student’s latest probabilistic model: \[ \text{prior}_{\phi, stu} \sim \operatorname{Dir}\left(c \cdot p_{\phi, stu}^{(k-1)}\right), \] where \(c > 0\) denotes the strength of the prior. From Bach’s chorales, the teacher inspects the empirical counts \(q_\phi\) associated with the feature \(\phi\) and the relevant \(n\)-gram, and computes the student’s posterior belief if \(\phi\) were selected as the rule: \[ \text{posterior}_{\phi, stu} \sim \operatorname{Dir}\left(q_\phi + c \cdot p_{\phi, stu}^{(k-1)}\right). \] The concentration parameters of the Dirichlet posterior show the balance between empirical counts and the prior. If the total number of empirical counts is small (less confident), the posterior will be smoothed more by the prior, de-emphasizing the empirical distribution from \(q_\phi\). If we compute \(\hat{p}_\phi \propto \left(q_\phi + c \cdot p_{\phi, stu}^{(k-1)}\right)\) in the objective of (3), then \[ \text{gap}\left(p_{\phi, stu}^{(k-1)} \mid \hat{p}_\phi\right) = D\left(\mathbb{E}\left[\text{prior}_{\phi, stu}\right] \parallel \mathbb{E}\left[\text{posterior}_{\phi, stu}\right]\right). 
\] The right side of (4) is closely related to Bayesian surprise (Varshney, 2013), which takes the form of KL divergence from the prior to posterior. If we remove the expectations and switch the roles between the prior and posterior, we get the exact formula for Bayesian surprise. Both functionals Table 1: Customizing a syllabus (* signifies rules that are skipped in the faster pace) <table> <tr> <th>Rule Trace</th> <th>Faster Pace (\( \gamma = 0.5 \))</th> <th>Slower Pace (\( \gamma = 0.1 \))</th> </tr> <tr> <td>1</td> <td>order \( \circ w_{\{1,2,3,4\}} \)</td> <td>order \( \circ w_{\{1,2,3,4\}} \)</td> </tr> <tr> <td>2</td> <td>mod\(_{12}\) \( \circ w_{\{1\}} \)</td> <td>order \( \circ \) diff \( \circ \) sort \( \circ w_{\{1,2,4\}} \)*</td> </tr> <tr> <td>3</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{2,3\}} \)</td> <td>order \( \circ \) diff \( \circ \) mod\(_{12}\) \( \circ w_{\{1,2,3\}} \)*</td> </tr> <tr> <td>4</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{3,4\}} \)</td> <td>order \( \circ \) diff \( \circ \) diff \( \circ w_{\{1,2,3,4\}} \)*</td> </tr> <tr> <td>5</td> <td>diff \( \circ \) sort \( \circ w_{\{2,3\}} \)</td> <td>order \( \circ \) sort \( \circ \) mod\(_{12}\) \( \circ w_{\{2,3,4\}} \)*</td> </tr> <tr> <td>6</td> <td>mod\(_{12}\) \( \circ w_{\{3\}} \)</td> <td>order \( \circ \) sort \( \circ \) mod\(_{12}\) \( \circ w_{\{1,3,4\}} \)*</td> </tr> <tr> <td>7</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{1,2\}} \)</td> <td>order \( \circ \) sort \( \circ \) mod\(_{12}\) \( \circ w_{\{1,2,3,4\}} \)*</td> </tr> <tr> <td>8</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{2,4\}} \)</td> <td>mod\(_{12}\) \( \circ w_{\{1\}} \)</td> </tr> <tr> <td>9</td> <td>diff \( \circ w_{\{1,2\}} \)</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{2,3\}} \)</td> </tr> <tr> <td>10</td> <td>diff \( \circ \) sort \( \circ w_{\{1,3\}} \)</td> <td>mod\(_{12}\) \( \circ \) diff \( \circ w_{\{3,4\}} \)</td> </tr> </table> capture the idea of comparing the gap between the prior and posterior. Therefore, the efficacy of concept learning is analogous to seeking (informational) surprise in the learning process. The subtlety in (4) where we exchange the prior and posterior, makes a distinction from Bayesian surprise due to the asymmetry of KL divergence. As a brief explanation, adopting (4) as the objective tends to produce rules about what Bach hated to do, while the other way produces what Bach liked to do. So we treat it as a design choice and adopt (4), given that rules are often taught as prohibitions (e.g. “parallel fifths/octaves are bad”, “never double the tendency tones”). There are more in-depth and information-theoretic discussions on this point (Huszár 2015; Palomar & Verdú 2008). 6 EXPERIMENTS MUS-ROVER II’s main use case is to produce personalized syllabi that are roadmaps to learning the input style (customized paths to Mount Parnassus). By substituting the student module, users can join the learning cycle, in which they make hands-on compositions and get iterative feedback from the teacher. Alternatively, for faster experimentation, users make the student their learning puppet, which is personalized by its external parameters. This paper discusses the latter case in detail. Math-to-Music Dictionary MUS-ROVER II conceptualizes every rule feature as a partition of the raw space, and uses the inducing function as its mathematical name. 
To get the meanings of the features, one can simply work out the math, but some of them already have their counterparts as music terminologies. We include a short dictionary of those correspondences in Appendix A.1. Pace Control and Syllabus Customization We present a simple yet flexible pace control panel to the users of MUS-ROVER II, enabling personalized set-up of their learning puppet. The control panel exposes four knobs: the lower bound, upper bound, and stride of the rule’s entropic difficulty (\( \delta_{min}, \delta_{max}, \delta_{stride} \)), as well as the satisfactory gap (\( \gamma \)). These four hyper-parameters together allow the user to personalize the pace and capacity of her learning experience. The entropic difficulty \( \delta \) caps the Shannon entropy of a rule’s feature distribution in (2), a surrogate for the complexity (or memorability) of the rule (Pape et al., 2015). It is discretized into a progression staircase from \( \delta_{min} \) up to \( \delta_{max} \), with incremental \( \delta_{stride} \). The resulting syllabus starts with \( \delta = \delta_{min} \), the entry level difficulty; and ends whenever \( \delta \geq \delta_{max} \), the maximum difficulty that the user can handle. Anywhere in between, the loop deactivates all rules whose difficulties are beyond current \( \delta \), and moves onto the next difficulty level \( \delta + \delta_{stride} \) if the student’s probabilistic model is \( \gamma \)-close to the input under all currently active rule features. To showcase syllabus customization, we introduce an ambitious user who demands a faster pace and a patient user who prefers a slower one. In practice, one can collectively tune the stride parameter \( \delta_{stride} \) and the gap parameter \( \gamma \), with a faster pace corresponding to a larger \( \delta_{stride} \) (let’s jump directly to the junior year from freshman) and a larger \( \gamma \) (having an A- is good enough to move onto the next level, why bother having A+). Here we simply fix \( \delta_{stride} \), and let \( \gamma \) control the pace. We illustrate two syllabi in Table 1, which compares the first ten (1-gram) rules in a faster (\( \gamma = 0.5 \)) syllabus and a slower one (\( \gamma = 0.1 \)). Notice the faster syllabus gives the fundamentals that a music student will typically learn in her first-year music theory class, including rules on voice crossing, Table 2: Sample 1-gram rules and their hierarchies. 
<html> <table> <tr> <th>Interpretable rule (Spacing):</th> <th>This partition sub-family includes 21 coarser partitions, which are local orderings that are already captured by the global ordering.</th> </tr> <tr> <td>Almost always, the soprano pitch is above the alto, alto above tenor, and tenor above bass.</td> <td><img src="page_374_158_627_180.png" alt="A tree diagram showing a partition sub-family"/></td> </tr> <tr> <th>Interpretable rule (Scale):</th> <th>This partition sub-family does not contain any other coarser partitions.</th> </tr> <tr> <td>The soprano voice is drawn from a diatonic scale with high probability.</td> <td><img src="page_374_393_627_180.png" alt="A tree diagram showing a partition sub-family"/></td> </tr> <tr> <th>Interpretable rule (Interval):</th> <th>This partition sub-family contains only one coarser partition:<br>order \( \circ \) sort \( \circ \) mod\(_{12}\) \( \circ \) \( w_{\{2,3\}} \).</th> </tr> <tr> <td>The interval of the inner voices are mostly consonant (3,4,5,7,8,9), but perfect octave/unison (0) is rare due to the tight spacing between alto and tenor.</td> <td><img src="page_374_628_627_180.png" alt="A tree diagram showing a partition sub-family"/></td> </tr> <tr> <th>Interpretable rule (Interval):</th> <th>This partition sub-family contains only one coarser partition:<br>order \( \circ \) sort \( \circ \) mod\(_{12}\) \( \circ \) \( w_{\{3,4\}} \).</th> </tr> <tr> <td>The interval of the lower voices are mostly consonant, and emerges more perfect octaves due to the wide spacing between tenor and bass. Also, perfect fourth (5) is now considered as a dissonance against the bass.</td> <td><img src="page_374_863_627_180.png" alt="A tree diagram showing a partition sub-family"/></td> </tr> </table> </html> pitch class set (scale), intervals, and so on (triads and seventh chords will appear later). It effectively skips the nitty-gritty rules (marked by an asterisk) that are learned in the slower setting. Most of these skipped rules do not have direct counterparts in music theory (such as taking the diff operator twice) and are not important, although occasionally the faster syllabus will skip some rules worth mentioning (such as the second rule in the slower pace, which talks about spacing among soprano, alto, and bass). Setting an appropriate pace for a user is important; a pace that is too fast will miss the whole point of knowledge discovery (jump to the low-level details too fast); a pace that is too slow will bury the important points among unimportant ones (hence, lose the big picture). Fundamentals: Hierarchical 1-gram Similar to our teaching of music theory, MUS-ROVER II’s proposed syllabus divides into two stages: fundamentals and part writing. The former is under the 1-gram setting, involving knowledge independent of the context; the latter provides online tutoring under multi-\( n \)-grams. We begin our experiments with fundamentals, and use them to illustrate the two types of feature hierarchies. Let’s take a closer look at the two syllabi in Table[1]. The specifications (left) and hierarchies (right) of the four common rules are illustrated in Table[2]. The rules’ translations are below the corresponding bar charts, all of which are consistent with our music theory. Extracted from the conceptual hierarchy, the right column lists the partition sub-family sourced at each rule, which is pictorially simplified as a tree by hiding implied edges from its corresponding DAG. Every coarser partition in Figure 3: Gap trajectories for two features. 
The dashed black lines show two different satisfactory gaps (\( \gamma = 0.5 \) and \( 0.1 \)). The bottom charts show the informationally implied hierarchies. a sub-family is indeed a higher-level representation, but has not accumulated sufficient significance to make itself a rule. A partition will never be learned if one of its finer ancestors has been made a rule. Observe that all of the coarser partitions are not typically taught in theory classes. MUS-ROVER II measures the student’s progress from many different angles in terms of features. With respect to a feature, the gap between the student and Bach is iteratively recorded to form a trajectory when cycling the loop. Studying the vanishing point of the trajectory reveals the (local) informational hierarchy around the corresponding feature. Taking the second and seventh rule in the slower syllabus for example, we plot their trajectories in Figure 3. Both illustrate a decreasing trend\footnote{Fluctuations on the trajectory are largely incurred by the imperfect solver of the optimization problem.} for gaps in the corresponding feature spaces. The left figure shows that the second rule is largely but not entirely implied by the first, pointing out the hierarchical structure between the two: the first rule may be considered as the dominant ancestor of the second, which is not conceptually apparent, but informationally implied. On the contrary, the right figure shows that the seventh rule is *not* predominantly implied by the first, which instead is informationally connected to many other rules. However, one could say that it is probably safe to skip both rules in light of a faster pace, since they will eventually be learned fairly effectively (with small gaps) but indirectly. **Part Writing:** **Adaptive n-grams** Unlike fundamentals which studies sonority independently along the *vertical* direction of the chorale texture, rules on part writing (e.g. melodic motion, chord progression) are *horizontal*, and *context-dependent*. This naturally results in an online learning framework, in which rule extractions are coupled in the writing process, specific to the realization of a composition (context). Context dependence is captured by the multi-*n*-gram language model, which further leads to the 2D memory pool of features for rule extraction (Sec. 5). Consider an example of online learning and adaptive memory selection, where we have the beginning of a chorale: \[ \langle s \rangle \rightarrow (60, 55, 52, 36) \rightarrow (60, 55, 52, 36) \rightarrow (62, 59, 55, 43) \rightarrow (62, 59, 55, 43) \rightarrow (62, 59, 55, 43), \] and want to learn the probabilistic model for the next sonority. Instead of starting from scratch, MUS-ROVER II launches the self-learning loop with the ruleset initialized by the fundamentals (incremental learning), and considers the 2D memory \( N \times \Phi \), for \( N = \{2, 3, 4, 5\} \). The first extracted rule is featured by \( \text{order} \circ \text{sort} \circ \text{mod}_{12} \circ w_{\{3,4\}} \). The rule is chosen because its corresponding feature has a large confidence level (validated by the large number of matched examples), a small entropy after being smoothed by Bayesian surprise, and reveals a large gap against the Bach’s style. Figure 4 shows the relative performance of this rule (in terms of confidence, regularity, and style gap) to other candidate cells in the 2D memory. 
Figure 4: The relative performance of the selected rule (indicated by the pointer) among the pool of all cells in the 2D memory. A desired rule has higher confidence (measured by the number of examples; brighter regions in the first row), more regularity (measured by Shannon entropy; darker regions in the second row), and a larger style gap (measured by KL divergence; brighter regions in the bottom two rows).

Among the top 20 rules for this sonority, 12 are 5-gram, 5 are 4-gram, and 3 are 2-gram, showing a long and adaptive dependence on the preceding context. **Visualizing Bach's Mind** With the hierarchical representations in MUS-ROVER II, we are now able to visualize Bach's music mind step by step via activating nodes in the DAG of rule features (similar to neuron activations in a brain). The hierarchical structure, as well as the additive activation process, is in stark contrast to the linear sequence of rules extracted by our prior work (Appendix A.3). Figure 5 shows a snapshot of the rule-learning status after ten loops, while the student is writing a sonority in the middle of a piece. The visualization makes it clear how earlier independent rules are now self-organized into sub-families, as well as how rules from a new context overwrite those from an old context, emphasizing that music is highly context-dependent.

Figure 5: Visualization of Bach's music mind for writing chorales. The underlying DAG represents the conceptual hierarchy (note: edges always point downwards). Colors are used to differentiate rule activations from different \( n \)-gram settings. We have enlarged \( N = \{1, 2, \ldots, 10\} \) to allow even longer-term dependencies.

7 CONCLUSIONS AND DISCUSSIONS Learning hierarchical rules as distributed representations of tonal music has played a central role in music pedagogy for centuries. While our previous work achieved the automation of rule extraction and, to a certain level, the interpretability of the rules, this paper yields deeper interpretability that extends to a system of rules and the overall learning process. In summary, it highlights the importance of disentangling the rule features, sorting out their interconnections, and making the concept learning process more dynamic, hierarchical, and cognitive. MUS-ROVER is intended to complement music teaching and learning. For instance, to many music students, learning and applying rules in part-writing is like learning to solve a puzzle (like Sudoku). Rules themselves are quite flexible, as opposed to hard 0-1 constraints, and may sometimes be contradictory. In addition, due to the limitation of human short-term memory and the difficulty of foreseeing implications, one has to handle a small set of rules at a time in a greedy manner, make some trials, and undo a few steps if they fail. Hence, solving this music puzzle could become a struggle (or, perhaps, an interesting challenge): according to personal preferences, one typically begins with a small set of important rules, and via several steps of trial and error, tries one's best to make the part-writing satisfy a majority of the rules, with occasional violations of unimportant ones. On the other hand, a machine is often good at solving and learning from puzzles due to its algorithmic nature. For instance, MUS-ROVER's student can take all rules into consideration: load them all at once as constraints and figure out the global optimum of the optimization problem in only a few hours. The same level of efficiency might take a human student years to achieve.
We envision the future of MUS-ROVER as a partner to humans in both music teaching and research, which includes, but is not limited to, personalizing the learning experience of a student and suggesting new methodologies to music theorists for analyzing and developing new genres. It also has practical applications: as by-products of the self-learning loop, the teacher can be made into a genre classifier, while the student can be cast into a style synthesizer. We are also eager to study the rover's partnership beyond the domain of music. ACKNOWLEDGMENTS We thank Professor Heinrich Taube, President of Illiac Software, Inc., for providing Harmonia's MusicXML corpus of Bach's chorales (https://harmonia.illiacsoftware.com/), as well as his helpful comments and suggestions. This work was supported by the IBM-Illinois Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM Cognitive Horizons Network.
accept
Accept (Poster)
6.666667
4425ee63112aacef773739e12c2b135979c0cac4
iclr
2,017
DROPOUT WITH EXPECTATION-LINEAR REGULARIZATION Xuezhe Ma, Yingkai Gao Language Technologies Institute Carnegie Mellon University {xuezhem, yingkaig}@cs.cmu.edu Zhiting Hu, Yaoliang Yu Machine Learning Department Carnegie Mellon University {zhitinghu, yaoliang}@cs.cmu.edu Yuntian Deng School of Engineering and Applied Sciences Harvard University dengyuntian@gmail.com Eduard Hovy Language Technologies Institute Carnegie Mellon University hovy@cmu.edu ABSTRACT Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout's training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of some latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove upper bounds on the loss in accuracy due to expectation-linearization, and describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently. 1 INTRODUCTION Deep neural networks (DNNs, e.g., LeCun et al., 2015; Schmidhuber, 2015), if trained properly, have been demonstrated to significantly improve the benchmark performances in a wide range of application domains. As neural networks go deeper and deeper, their model complexity naturally increases quickly, hence the pressing need to reduce overfitting in training DNNs. A number of techniques have emerged over the years to address this challenge, among which dropout (Hinton et al., 2012; Srivastava, 2013) has stood out for its simplicity and effectiveness. In a nutshell, dropout randomly "drops" neural units during training as a means to prevent feature co-adaptation, a sign of overfitting (Hinton et al., 2012). Simple as it appears to be, dropout has led to several record-breaking performances (Hinton et al., 2012; Ma & Hovy, 2016), and has thus spawned a lot of recent interest in analyzing and justifying dropout from the theoretical perspective, and also in further improving dropout from the algorithmic and practical perspectives. In their pioneering work, Hinton et al. (2012) and Srivastava et al. (2014) interpreted dropout as an extreme form of model combination (a.k.a. model ensemble) with extensive parameter/weight sharing, and they proposed to learn the combination through minimizing an appropriate expected loss. Interestingly, they also pointed out that for a single logistic neural unit, the output of dropout is in fact the geometric mean of the outputs of the model ensemble with shared parameters. Subsequently, many theoretical justifications of dropout have been explored, and we can only mention a few here due to space limits.
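The geometric-mean observation above is easy to check numerically. The following sketch (our own illustrative check, not code from the cited papers) enumerates all dropout masks of a single logistic unit with keep probability p = 0.5, so that all masks are equiprobable, and verifies that the weight-scaled prediction coincides with the geometric mean of the ensemble outputs after normalizing the two class probabilities to sum to one.

```python
import itertools
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, p = 8, 0.5                                   # 8 inputs, keep probability 0.5
x, w, b = rng.normal(size=n), rng.normal(size=n), 0.3

# Ensemble outputs over all 2^n dropout masks (equiprobable since p = 0.5).
outs = np.array([sigmoid(w @ (x * np.array(m)) + b)
                 for m in itertools.product([0, 1], repeat=n)])

# Normalized geometric mean of the ensemble predictions.
g1 = np.exp(np.mean(np.log(outs)))
g0 = np.exp(np.mean(np.log(1.0 - outs)))
geometric = g1 / (g1 + g0)

# Weight-scaled prediction used by standard dropout at inference time.
scaled = sigmoid(p * (w @ x) + b)

print(geometric, scaled)   # the two values agree up to floating-point error
```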
Building on the weight sharing perspective, Baldi & Sadowski (2013; 2014) analyzed the ensemble averaging property of dropout in deep non-linear logistic networks, and supported the view that dropout is equivalent to applying stochastic gradient descent on some regularized loss function. Wager et al. (2013) treated dropout as an adaptive regularizer for generalized linear models (GLMs). Helmbold & Long (2016) discussed the differences between dropout and traditional weight decay regularization. In terms of statistical learning theory, Gao & Zhou (2014) studied the Rademacher complexity of different types of dropout, showing that dropout is able to reduce the Rademacher complexity polynomially for shallow neural networks (with one or no hidden layers) and exponentially for deep neural networks. This latter work (Gao & Zhou, 2014) formally demonstrated that dropout, due to its regularizing effect, contributes to reducing the inherent model complexity, in particular the variance component in the generalization error. Seen as a model combination technique, dropout intuitively contributes to reducing the variance of the model performance. Surprisingly, dropout has also been shown to play some role in reducing the model bias. For instance, Jain et al. (2015) studied the ability of dropout training to escape local minima, hence leading to reduced model bias. Other studies (Chen et al., 2014; Helmbold & Long, 2014; Wager et al., 2014) focus on the effect of the dropout noise on models with shallow architectures. We note in passing that there is also some work (Kingma et al., 2015; Gal & Ghahramani, 2015; 2016) that tries to understand dropout from the Bayesian perspective. In this work, we first formulate dropout as a tractable approximation of a latent variable model, and give a clean view of weight sharing (\S3). Then, we focus on an inference gap in dropout that has somehow remained under-appreciated: in the inference phase, for computational tractability considerations, the model ensemble generated by dropout is approximated by a single model with scaled weights, resulting in a gap between training and inference, and rendering many previous theoretical findings inapplicable. In general, this inference gap can be very large, and no attempt (to the best of our knowledge) has been made to control it. We make three contributions in bridging this gap: Theoretically, we introduce expectation-linear dropout neural networks, through which we are able to explicitly quantify the inference gap (\S4). In particular, our theoretical results explain why the max-norm constraint on the network weights, a standard practice in training DNNs, can lead to a small inference gap and hence potentially improve performance. Algorithmically, we propose to add a sampled version of the inference gap to regularize the standard dropout training objective (*expectation-linearization*), hence allowing explicit control of the inference gap, and analyze the interaction between expectation-linearization and the model accuracy (\S5). Experimentally, through three benchmark datasets we show that our regularized dropout is not only as simple and efficient as standard dropout but also consistently leads to improved performance (\S6). 2 DROPOUT NEURAL NETWORKS In this section we set up the notation, review the dropout neural network model, and discuss the inference gap in standard dropout training that we will attempt to study in the rest of the paper.
2.1 DNNs AND NOTATION Throughout we use uppercase letters for random variables (and occasionally for matrices as well), and lowercase letters for realizations of the corresponding random variables. Let \( X \in \mathcal{X} \) be the input of the neural network, \( Y \in \mathcal{Y} \) be the desired output, and \( D = \{(x_1, y_1), \ldots, (x_N, y_N)\} \) be our training sample, where \( x_i, i = 1, \ldots, N \), (resp. \( y_i \)) are usually i.i.d. samples of \( X \) (resp. \( Y \)). Let M denote a deep neural network with \( L \) hidden layers, indexed by \( l \in \{1, \ldots, L\} \). Let \( \mathbf{h}^{(l)} \) denote the output vector from layer \( l \). As usual, \( \mathbf{h}^{(0)} = x \) is the input, and \( \mathbf{h}^{(L)} \) is the output of the neural network. Denote \( \theta = \{\theta_l : l = 1, \ldots, L\} \) as the set of parameters in the network M, where \( \theta_l \) assembles the parameters in layer \( l \). With dropout, we need to introduce a set of dropout random variables \( S = \{\Gamma^{(l)} : l = 1, \ldots, L\} \), where \( \Gamma^{(l)} \) is the dropout random variable for layer \( l \). Then the deep neural network M can be described as: \[ \mathbf{h}^{(l)} = f_l(\mathbf{h}^{(l-1)} \odot \gamma^{(l)}; \theta_l), \quad l = 1, \ldots, L, \] where \( \odot \) is the element-wise product and \( f_l \) is the transformation function of layer \( l \). For example, if layer \( l \) is a fully connected layer with weight matrix \( W \), bias vector \( b \), and sigmoid activation function \( \sigma(x) = \frac{1}{1 + \exp(-x)} \), then \( f_l(x) = \sigma(Wx + b) \). We will also use \( \mathbf{h}^{(l)}(x, s; \theta) \) to denote the output of layer \( l \) with input \( x \) and dropout value \( s \), under parameter \( \theta \). In the simplest form of dropout, which is also called standard dropout, \( \Gamma^{(l)} \) is a vector of independent Bernoulli random variables, each of which has probability \( p_l \) of being 1 and \( 1 - p_l \) of being 0. This corresponds to retaining each input unit of layer \( l \) independently with probability \( p_l \) (i.e., dropping it with probability \( 1 - p_l \)). 2.2 DROPOUT TRAINING Standard dropout neural networks can be trained using stochastic gradient descent (SGD), with a sub-network sampled by dropping neural units for each training instance in a mini-batch. The forward and backward passes for that training instance are done only on the sampled sub-network. Intuitively, dropout aims at, simultaneously and jointly, training an ensemble of exponentially many neural networks (one for each configuration of dropped units) while sharing the same weights/parameters. The goal of the stochastic training procedure of dropout can be understood as minimizing an expected loss function, after marginalizing out the dropout variables (Srivastava, 2013; Wang & Manning, 2013). In the context of maximum likelihood estimation, dropout training can be formulated as: \[ \theta^* = \arg\min_{\theta} \mathbb{E}_{S_D}[-l(D, S_D; \theta)] = \arg\min_{\theta} \mathbb{E}_{S_D}\left[ - \sum_{i=1}^N \log p(y_i|x_i, S_i; \theta) \right], \] where recall that \( D \) is the training sample, \( S_D = \{S_1, \ldots, S_N\} \) is the dropout variable (one for each training instance), and \( l(D, S_D; \theta) \) is the (conditional) log-likelihood function defined by the conditional distribution \( p(y|x, s; \theta) \) of output \( y \) given input \( x \), under parameter \( \theta \) and dropout variable \( s \).
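As a concrete companion to Eqs. (1) and (2), the following minimal numpy sketch computes one Monte Carlo term of the dropout training objective for a small tanh network with a softmax output; the architecture and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, thetas, gammas):
    """Eq. (1): h^(l) = f_l(h^(l-1) ⊙ γ^(l); θ_l), with tanh transformations."""
    h = x
    for (W, b), g in zip(thetas, gammas):
        h = np.tanh(W @ (h * g) + b)
    return h

def sampled_nll(x, y, thetas, p=0.5):
    """One Monte Carlo term of Eq. (2): -log p(y|x, s; θ) for a sampled s."""
    gammas = [rng.binomial(1, p, size=W.shape[1]) for W, b in thetas]
    logits = forward(x, thetas, gammas)
    log_softmax = logits - np.log(np.sum(np.exp(logits)))
    return -log_softmax[y]

# Toy usage: a 2-layer network mapping 10 inputs to 3 classes.
thetas = [(rng.normal(size=(16, 10)), np.zeros(16)),
          (rng.normal(size=(3, 16)), np.zeros(3))]
print(sampled_nll(rng.normal(size=10), 1, thetas))
```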
Throughout we use the notation \( \mathbb{E}_Z \) to denote the conditional expectation where all random variables except \( Z \) are conditioned on. Dropout has also been shown to work well with regularization, such as L2 weight decay (Tikhonov, 1943), Lasso (Tibshirani, 1996), KL-sparsity (Bradley & Bagnell, 2008; Hinton, 2010), and max-norm regularization (Srebro et al., 2004), among which max-norm regularization, which constrains the norm of the incoming weight matrix to be bounded by some constant, was found to be especially useful for dropout (Srivastava, 2013; Srivastava et al., 2014). 2.3 DROPOUT INFERENCE AND GAP As mentioned before, dropout effectively trains an ensemble of neural networks with weight sharing. Consequently, at test time, the outputs of all networks in the ensemble should be averaged to deliver the final prediction. This averaging over exponentially many sub-networks is, however, intractable, and standard dropout typically implements an approximation by introducing a deterministic scaling factor for each layer to replace the random dropout variable: \[ \mathbb{E}_S[\mathbf{H}^{(L)}(x, S; \theta)] \overset{?}{=} \mathbf{h}^{(L)}(x, \mathbb{E}[S]; \theta), \] where the right-hand side is the output of a single deterministic neural network whose weights are scaled to match the expected number of active hidden units on the left-hand side. Importantly, the right-hand side can be easily computed since it only involves a single deterministic network. Bulò et al. (2016) combined dropout with knowledge distillation methods (Hinton et al., 2015) to better approximate the averaging process of the left-hand side. However, the quality of the approximation in (3) is largely unknown, and to the best of our knowledge, no attempt has been made to explicitly control this inference gap. The main goal of this work is to explicitly quantify, algorithmically control, and experimentally demonstrate the inference gap in (3), in the hope of eventually improving the generalization performance of DNNs. To this end, in the next section we first present a latent variable model interpretation of dropout, which will greatly facilitate our later theoretical analysis. 3 DROPOUT AS LATENT VARIABLE MODELS With the end goal of studying the inference gap in (3) in mind, in this section we first formulate dropout neural networks as a latent variable model (LVM) in § 3.1. Then, we point out the relation between the training procedure of the LVM and that of standard dropout in § 3.2. The advantage of formulating dropout as an LVM is that we need only deal with a single model (with latent structure), instead of an ensemble of exponentially many different models (with weight sharing). This much simplified view of dropout enables us to understand and analyze the model parameter \( \theta \) in a much more straightforward and intuitive way. 3.1 AN LVM FORMULATION OF DROPOUT A latent variable model consists of two types of variables: the observed variables that represent the empirical (observed) data and the latent variables that characterize the hidden (unobserved) structure. To formulate dropout as a latent variable model, the input \( x \) and output \( y \) are regarded as observed variables, while the dropout variable \( s \), representing the sub-network structure, is hidden.
Then, upon fixing the input space \( \mathcal{X} \), the output space \( \mathcal{Y} \), and the latent space \( \mathcal{S} \) for dropout variables, the conditional probability of \( y \) given \( x \) under parameter \( \theta \) can be written as \[ p(y|x; \theta) = \int_{\mathcal{S}} p(y|x, s; \theta)p(s)d\mu(s), \] where \( p(y|x, s; \theta) \) is the conditional distribution modeled by the neural network with configuration \( s \) (the same as in Eq. (2)), \( p(s) \) is the distribution of the dropout variable \( S \) (e.g. Bernoulli), here assumed to be independent of the input \( x \), and \( \mu(s) \) is the base measure on the space \( \mathcal{S} \). 3.2 LVM DROPOUT TRAINING VS. STANDARD DROPOUT TRAINING Building on the above latent variable model formulation (4) of dropout, we are now ready to point out a simple relation between the training procedure of the LVM and that of standard dropout. Given an i.i.d. training sample \( D \), the maximum likelihood estimate for the LVM formulation of dropout in (4) is equivalent to minimizing the following negative log-likelihood function: \[ \theta^* = \arg\min_{\theta} -l(D; \theta) = \arg\min_{\theta} -\sum_{i=1}^{N} \log p(y_i|x_i; \theta), \] where \( p(y|x; \theta) \) is given in Eq. (4). Recall the dropout training objective \( \mathrm{E}_{S_D}[-l(D, S_D; \theta)] \) in Eq. (2). We have the following theorem as a simple consequence of Jensen's inequality (details in Appendix A): **Theorem 1.** *The expected loss function of standard dropout (Eq. (2)) is an upper bound of the negative log-likelihood of LVM dropout (Eq. (5)):* \[ -l(D; \theta) \leq \mathrm{E}_{S_D}[-l(D, S_D; \theta)]. \] Theorem 1, in a rigorous sense, justifies dropout training as a convenient and tractable approximation of the LVM formulation in (4). Indeed, since directly minimizing the marginalized negative log-likelihood in (5) may not be easy, a standard practice is to replace the marginalized (conditional) likelihood \( p(y|x; \theta) \) in (4) with its empirical Monte Carlo average obtained by drawing samples from the dropout variable \( S \). The dropout training objective in (2) corresponds exactly to this Monte Carlo approximation when a single sample \( S_i \) is drawn for each training instance \( (x_i, y_i) \). Importantly, we note that the above LVM formulation involves only a single network parameter \( \theta \), which largely simplifies the picture and facilitates our subsequent analysis. 4 EXPECTATION-LINEAR DROPOUT NEURAL NETWORKS Building on the latent variable model formulation in § 3, we introduce in this section the notion of expectation-linearity, which essentially measures the inference gap in (3). We then characterize a general class of neural networks that exhibit expectation-linearity, either exactly or approximately over a distribution \( p(x) \) on the input space. We start by defining expectation-linearity for the simplest single-layer neural network, and then extend the notion to general deep networks in a natural way. **Definition 1** (Expectation-linear Layer). A network layer \( \mathbf{h} = f(x \odot \gamma; \theta) \) is **expectation-linear with respect to a set \( \mathcal{X}' \subseteq \mathcal{X} \)** if for all \( x \in \mathcal{X}' \) we have \[ \left\| \mathrm{E}[f(x \odot \Gamma; \theta)] - f(x \odot \mathrm{E}[\Gamma]; \theta) \right\|_2 = 0. \] In this case we say that \( \mathcal{X}' \) is **expectation-linearizable**, and \( \theta \) is **expectation-linearizing w.r.t. \( \mathcal{X}' \)**.
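The quantity in Definition 1 is straightforward to estimate by Monte Carlo. The sketch below (a single sigmoid layer with assumed names, for illustration only) compares the ensemble average, i.e. the left-hand side of (3), against the weight-scaled output; averaging the returned norm over draws of \( x \) estimates the relaxed, on-average gap introduced next.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid_layer(x, W, b):
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def el_gap(x, W, b, p=0.5, n_samples=20000):
    """MC estimate of ||E[f(x ⊙ Γ)] - f(x ⊙ E[Γ])||_2 for Bernoulli(p) dropout."""
    masks = rng.binomial(1, p, size=(n_samples, x.shape[0]))
    ensemble = np.mean([sigmoid_layer(x * m, W, b) for m in masks], axis=0)
    scaled = sigmoid_layer(p * x, W, b)      # standard dropout's approximation
    return np.linalg.norm(ensemble - scaled)

W, b, x = rng.normal(size=(5, 8)), np.zeros(5), rng.normal(size=8)
print(el_gap(x, W, b))   # nonzero: a sigmoid layer is not expectation-linear
```

Note that the ensemble average computed here is exactly the Monte Carlo dropout prediction that reappears in the experiments of § 6.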
Obviously, the condition in (7) will guarantee no gap in the dropout inference approximation (3)—an admittedly strong condition that we will relax below. Clearly, if \( f \) is an affine function, then we can choose \( \mathcal{X}' = \mathcal{X} \) and expectation-linearity is trivial. Note that expectation-linearity depends on the network parameter \( \theta \) and the dropout distribution \( \Gamma \). Expectation-linearity, as defined in (7), is overly strong: under standard regularity conditions, essentially the transformation function \( f \) has to be affine over the set \( \mathcal{X}' \), ruling out for instance the popular sigmoid or tanh activation functions. Moreover, in practice, downstream use of DNNs are usually robust to small errors resulting from *approximate* expectation-linearity (hence the empirical success of dropout), so it makes sense to define an inexact extension. We note also that the definition in (7) is *uniform* over the set \( \mathcal{X}' \), while in a statistical setting it is perhaps more meaningful to have expectation-linearity “on average,” since inputs from lower density regions are not going to play a significant role anyway. Taking into account the aforementioned motivations, we arrive at the following inexact extension: **Definition 2** (Approximately Expectation-linear Layer). A network layer \( \mathbf{h} = f(x \odot \gamma; \theta) \) is *\( \delta \)-approximately expectation-linear with respect to* a distribution \( p(x) \) over \( \mathcal{X} \) if \[ \mathrm{E}_X \left[ \| \mathrm{E}_{\Gamma} \left[ f(X \odot \Gamma; \theta) | X \right] - f(X \odot \mathrm{E}[\Gamma]; \theta) \|_2 \right] < \delta. \] In this case we say that \( p(x) \) is *\( \delta \)-approximately expectation-linearizable*, and \( \theta \) is *\( \delta \)-approximately expectation-linearizing*. To appreciate the power of cutting some slack from exact expectation-linearity, we remark that even non-affine activation functions often have approximately linear regions. For example, the logistic function, a commonly used non-linear activation function in DNNs, is approximately linear around the origin. Naturally, we can ask whether it is sufficient for a target distribution \( p(x) \) to be well-approximated by an approximately expectation-linearizable one. We begin by providing an appropriate measurement of the quality of this approximation. **Definition 3** (Closeness, (Andreas et al., 2015)). A distribution \( p(x) \) is *\( C \)*-close to a set \( \mathcal{X}' \subseteq \mathcal{X} \) if \[ \mathrm{E} \left[ \inf_{x' \in \mathcal{X}'} \sup_{\gamma \in \mathcal{S}} \| X \odot \gamma - x' \odot \gamma \|_2 \right] \leq C, \] where recall that \( \mathcal{S} \) is the (bounded) space that the dropout variable lives in. Intuitively, \( p(x) \) is *\( C \)*-close to a set \( \mathcal{X}' \) if a random sample from \( p \) is no more than a distance \( C \) from \( \mathcal{X}' \) in expectation and under the worst “dropout perturbation”. For example, a standard normal distribution is close to an interval centering at origin (\( [-\alpha, \alpha] \)) with some constant \( C \). Our definition of closeness is similar to that in Andreas et al. (2015), who used this notion to analyze self-normalized log-linear models. 
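As a quick numeric illustration of the normal-distribution example above (our own construction, under the assumption that dropout masks are binary, so the worst perturbation keeps the coordinate), the closeness constant for a scalar standard normal and the interval \( [-\alpha, \alpha] \) reduces to \( C = \mathrm{E}[\max(|X| - \alpha, 0)] \), which the sketch below estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0
x = rng.standard_normal(1_000_000)

# E[ inf_{x' in [-α,α]} sup_{γ in {0,1}} |X·γ - x'·γ| ] = E[ max(|X| - α, 0) ]
C = np.mean(np.maximum(np.abs(x) - alpha, 0.0))
print(C)   # ≈ 0.17 for α = 1: the standard normal is C-close to [-1, 1]
```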
We are now ready to state our first major result, which quantifies the approximate expectation-linearity of a single-layer network (proof in Appendix B.1): **Theorem 2.** *Given a network layer \( \mathbf{h} = f(x \odot \gamma; \theta) \), where \( \theta \) is expectation-linearizing w.r.t. \( \mathcal{X}' \subseteq \mathcal{X} \), suppose \( p(x) \) is \( C \)-close to \( \mathcal{X}' \) and for all \( x \in \mathcal{X} \), \( \| \nabla_x f(x) \|_{op} \leq B \), where \( \| \cdot \|_{op} \) is the usual operator norm. Then \( p(x) \) is \( 2BC \)-approximately expectation-linearizable.* Roughly, Theorem 2 states that input distributions \( p(x) \) that place most of their mass on regions close to expectation-linearizable sets are approximately expectation-linearizable on a similar scale. The bounded operator norm assumption on the derivative \( \nabla f \) is satisfied in most commonly used layers. For example, for a fully connected layer with weight matrix \( W \), bias vector \( b \), and activation function \( \sigma \), \( \| \nabla f(\cdot) \|_{op} = |\sigma'(\cdot)| \cdot \| W \|_{op} \) is bounded by \( \| W \|_{op} \) times the supremum of \( |\sigma'(\cdot)| \) (1/4 when \( \sigma \) is the sigmoid and 1 when \( \sigma \) is tanh). Next, we extend the notion of approximate expectation-linearity to deep dropout neural networks. **Definition 4** (Approximately Expectation-linear Network). A deep neural network with \( L \) layers (cf. Eq. (1)) is *\( \delta \)-approximately expectation-linear with respect to* \( p(x) \) over \( \mathcal{X} \) if \[ \mathrm{E}_X \left[ \| \mathrm{E}_S \left[ \mathbf{H}^{(L)}(X, S; \theta) | X \right] - \mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta) \|_2 \right] < \delta, \] where \( \mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta) \) is the output of the deterministic neural network in standard dropout. Lastly, we relate the level of approximate expectation-linearity of a deep neural network to the level of approximate expectation-linearity of each of its layers: **Theorem 3.** *Given an \( L \)-layer neural network as in Eq. (1), suppose that each layer \( l \in \{1, \ldots, L\} \) is \( \delta \)-approximately expectation-linear w.r.t. \( p(\mathbf{h}^{(l)}) \), \( \mathrm{E}[\Gamma^{(l)}] \leq \gamma \), \( \sup_x \| \nabla f_l(x) \|_{op} \leq B \), and \( \mathrm{E}\left[ \mathrm{Var}[\mathbf{H}^{(l)}|X] \right] \leq \sigma^2 \). Then the network is \( \Delta \)-approximately expectation-linear with* \[ \Delta = (B \gamma)^{L-1} \delta + (\delta + B \gamma \sigma) \left( \frac{1 - (B \gamma)^{L-1}}{1 - B \gamma} \right). \] From Theorem 3 (proof in Appendix B.2) we observe that the level of approximate expectation-linearity of the network mainly depends on four factors: the level of approximate expectation-linearity of each layer (\( \delta \)), the expected variance of each layer (\( \sigma \)), the operator norm of the derivative of each layer's transformation function (\( B \)), and the mean of each layer's dropout variable (\( \gamma \)). In practice, \( \gamma \) is often a constant less than or equal to 1. For example, if \( \Gamma \sim \mathrm{Bernoulli}(p) \), then \( \gamma = p \). According to the theorem, the operator norm of the derivative of each layer's transformation function is an important factor in the level of approximate expectation-linearity: the smaller the operator norm, the better the approximation. Interestingly, the operator norm of a layer often depends on the norm of the layer's weights (e.g., for fully connected layers).
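The bound of Theorem 3 can be evaluated directly to see how it scales with depth; the sketch below, with illustrative constants of our own choosing, exhibits the three regimes \( B\gamma < 1 \), \( B\gamma = 1 \), and \( B\gamma > 1 \) discussed in the next paragraph.

```python
def theorem3_bound(delta, sigma, B, gamma, L):
    """Δ from Theorem 3; the Bγ = 1 case is taken as the limit of the formula."""
    q = B * gamma
    if abs(q - 1.0) < 1e-12:
        geom = L - 1               # limit of (1 - q^(L-1)) / (1 - q) as q -> 1
    else:
        geom = (1 - q ** (L - 1)) / (1 - q)
    return q ** (L - 1) * delta + (delta + B * gamma * sigma) * geom

for label, B in [("Bγ<1", 0.9), ("Bγ=1", 1.0), ("Bγ>1", 1.1)]:
    print(label, [round(theorem3_bound(0.01, 0.05, B, 1.0, L), 3)
                  for L in (2, 5, 10, 20)])
# Bγ<1 saturates to a constant, Bγ=1 grows linearly, Bγ>1 grows exponentially.
```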
Therefore, adding max-norm constraints to regularize dropout neural networks can lead to better approximate expectation-linearity hence smaller inference gap and the often improved model performance. It should also be noted that when \( B \gamma < 1 \), the approximation error \( \Delta \) tends to be a constant when the network becomes deeper. When \( B \gamma = 1 \), \( \Delta \) grows linearly with \( L \), and when \( B \gamma > 1 \), the growth of \( \Delta \) becomes exponential. Thus, it is essential to keep \( B \gamma < 1 \) to achieve good approximation, particularly for deep neural networks. 5 EXPECTATION-LINEAR REGULARIZED DROPOUT In the previous section we have managed to bound the approximate expectation-linearity, hence the inference gap in (3), of dropout neural networks. In this section, we first prove a uniform deviation bound of the sampled approximate expectation-linearity measure from its mean, which motivates adding the sampled (hence computable) expectation-linearity measure as a regularization scheme to standard dropout, with the goal of explicitly controlling the inference gap of the learned parameter, hence potentially improving the performance. Then we give the upper bounds on the loss in accuracy due to expectation-linearization, and describe classes of distributions that expectation-linearize easily. 5.1 A UNIFORM DEVIATION BOUND FOR THE SAMPLED EXPECTATION-LINEAR MEASURE We now show that an expectation-linear network can be found by expectation-linearizing the network on the training sample. To this end, we prove a uniform deviation bound between the empirical expectation-linearization measure using i.i.d. samples (Eq. (12)) and its mean (Eq. (13)). Theorem 4. Let \( \mathcal{H} = \{ \mathbf{h}^{(L)}(x, s; \theta) : \theta \in \Theta \} \) denote a space of L-layer dropout neural networks indexed with \( \theta \), where \( \mathbf{h}^{(L)} : \mathcal{X} \times S \to \mathcal{R} \) and \( \Theta \) is the space that \( \theta \) lives in. Suppose that the neural networks in \( \mathcal{H} \) satisfy the constraints: 1) \( \forall x \in \mathcal{X}, \|x\|_2 \leq \alpha \); 2) \( \forall l \in \{1, \ldots, L\}, \mathrm{E}[\Gamma^{(l)}] \leq \gamma \) and \( \| \nabla f_l \|_{op} \leq B \); 3) \( \| \mathbf{h}^{(L)} \| \leq \beta \). Denote empirical expectation-linearization measure and its mean as: \[ \hat{\Delta} = \frac{1}{n} \sum_{i=1}^n \| \mathrm{E}_{S_i} [\mathbf{H}^{(L)}(X_i, S_i; \theta)] - \mathbf{h}^{(L)}(X_i, \mathrm{E}[S_i]; \theta) \|_2, \] \[ \Delta = \mathrm{E}_X \left[ \| \mathrm{E}_{S} [\mathbf{H}^{(L)}(X, S; \theta)] - \mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta) \|_2 \right]. \] Then, with probability at least \( 1 - \nu \), we have \[ \sup_{\theta \in \Theta} | \Delta - \hat{\Delta} | < \frac{2 \alpha B^L (\gamma^{L/2} + 1)}{\sqrt{n}} + \beta \sqrt{\frac{\log(1/\nu)}{n}}. \] From Theorem 4 (proof in Appendix C.1) we observe that the deviation bound decreases exponentially with the number of layers \( L \) when the operator norm of the derivative of each layer’s transformation function (\( B \)) is less than 1 (and the contrary if \( B \geq 1 \)). Importantly, the square root dependence on the number of samples (\( n \)) is standard and cannot be improved without significantly stronger assumptions. It should be noted that Theorem 4 per se does not imply anything between expectation-linearization and the model accuracy (i.e. how well the expectation-linearized neural network actually achieves on modeling the data). 
A formal study of this relation is provided in § 5.3. In addition, we provide some experimental evidence in § 6 on how improved approximate expectation-linearity (equivalently, a smaller inference gap) leads to better empirical performance. 5.2 Expectation-Linearization as Regularization The uniform deviation bound in Theorem 4 motivates obtaining an approximately expectation-linear dropout neural network by adding the empirical measure (12) as a regularization scheme to the standard dropout training objective, as follows: \[ loss(D; \theta) = -l(D; \theta) + \lambda V(D; \theta), \] where \(-l(D; \theta)\) is the negative log-likelihood defined in Eq. (5), \(\lambda > 0\) is a regularization constant, and \(V(D; \theta)\) measures the level of approximate expectation-linearity: \[ V(D; \theta) = \frac{1}{N} \sum_{i=1}^N \| \mathbb{E}_{S_i} [\mathbf{H}^{(L)}(x_i, S_i; \theta)] - \mathbf{h}^{(L)}(x_i, \mathbb{E}[S_i]; \theta) \|_2^2. \] To solve (15), we can minimize \(loss(D; \theta)\) via stochastic gradient descent as in standard dropout, and approximate \(V(D; \theta)\) using Monte Carlo: \[ V(D; \theta) \approx \frac{1}{N} \sum_{i=1}^N \| \mathbf{h}^{(L)}(x_i, s_i; \theta) - \mathbf{h}^{(L)}(x_i, \mathbb{E}[S_i]; \theta) \|_2^2, \] where \(s_i\) is the same dropout sample used in the log-likelihood term for each training instance in a mini-batch. Thus, the only additional computational cost comes from the deterministic term \(\mathbf{h}^{(L)}(x_i, \mathbb{E}[S_i]; \theta)\). Overall, our regularized dropout (15), in its Monte Carlo approximate form, is as simple and efficient as standard dropout. 5.3 On the Accuracy of Expectation-Linearized Models So far our discussion has concentrated on the problem of finding expectation-linear neural network models, without any concern for how well they actually perform at modeling the data. In this section, we characterize the trade-off between maximizing "data likelihood" and satisfying an expectation-linearization constraint. To achieve the characterization, we measure the *likelihood gap* between the classical maximum likelihood estimator (MLE) and the MLE subject to an expectation-linearization constraint. Formally, given training data \(D = \{(x_1, y_1), \ldots, (x_n, y_n)\}\), we define \[ \hat{\theta} = \underset{\theta \in \Theta}{\arg\min} -l(D; \theta) \] \[ \hat{\theta}_\delta = \underset{\theta \in \Theta, V(D; \theta) \leq \delta}{\arg\min} -l(D; \theta) \] where \(-l(D; \theta)\) is the negative log-likelihood defined in Eq. (5), and \(V(D; \theta)\) is the level of approximate expectation-linearity in Eq. (16). We would like to control the loss of model accuracy by obtaining a bound on the *likelihood gap*, defined as: \[ \Delta_l(\hat{\theta}, \hat{\theta}_\delta) = \frac{1}{n} (l(D; \hat{\theta}) - l(D; \hat{\theta}_\delta)). \] In the following, we focus on neural networks with a *softmax* output layer for classification tasks: \[ p(y|x, s; \theta) = \mathbf{h}_y^{(L)}(x, s; \theta) = f_L(\mathbf{h}^{(L-1)}(x, s); \eta) = \frac{e^{\eta_y^T \mathbf{h}^{(L-1)}(x, s)}}{\sum_{y' \in \mathcal{Y}} e^{\eta_{y'}^T \mathbf{h}^{(L-1)}(x, s)}} \] where \(\theta = \{\theta_1, \ldots, \theta_{L-1}, \eta\}\), \(\mathcal{Y} = \{1, \ldots, k\}\) and \(\eta = \{\eta_y : y \in \mathcal{Y}\}\). We claim the following:
**Theorem 5.** *Given an L-layer neural network \( \mathbf{h}^{(L)}(x, s; \theta) \) with the softmax output layer in (21), where parameter \( \theta \in \Theta \), dropout variable \( s \in \mathcal{S} \), input \( x \in \mathcal{X} \) and target \( y \in \mathcal{Y} \), suppose that for every \( x \) and \( s \), \( p(y|x, s; \hat{\theta}) \) makes a unique best prediction; that is, for each \( x \in \mathcal{X}, s \in \mathcal{S} \), there exists a unique \( y^* \in \mathcal{Y} \) such that \( \forall y \neq y^*, \hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s) < \hat{\eta}_{y^*}^T \mathbf{h}^{(L-1)}(x, s) \). Suppose additionally that \( \forall x, s, \| \mathbf{h}^{(L-1)}(x, s; \hat{\theta}) \| \leq \beta \), and \( \forall y, p(y|x; \hat{\theta}) > 0 \). Then* \[ \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq c_1 \beta^2 \left( \| \hat{\eta} \|_2 - \frac{\delta}{4\beta} \right)^2 e^{-c_2 \delta / 4\beta} \] *where \( c_1 \) and \( c_2 \) are distribution-dependent constants.* From Theorem 5 (proof in Appendix C.2) we observe that, at one extreme, distributions close to deterministic can be expectation-linearized with little loss of likelihood. What about the other extreme, distributions "as close to the uniform distribution as possible"? With suitable assumptions about the form of \( p(y|x, s; \hat{\theta}) \) and \( p(y|x; \hat{\theta}) \), we can achieve an accuracy loss bound for distributions that are close to uniform: **Theorem 6.** *Suppose that \( \forall x, s, \| \mathbf{h}^{(L-1)}(x, s; \hat{\theta}) \| \leq \beta \). Additionally, for each \( (x_i, y_i) \in D, s \in \mathcal{S} \),* \[ \log \frac{1}{k} \leq \log p(y_i|x_i, s; \hat{\theta}) \leq \frac{1}{k} \sum_{y \in \mathcal{Y}} \log p(y|x_i, s; \hat{\theta}). \] *Then asymptotically as \( n \to \infty \):* \[ \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \left( 1 - \frac{\delta}{4\beta \| \hat{\eta} \|_2 } \right) \mathrm{E} [ \mathrm{KL} (p(\cdot|X; \theta) \| \mathrm{Unif}(\mathcal{Y})) ]. \] Theorem 6 (proof in Appendix C.3) indicates that uniform distributions are also an easy class for expectation-linearization. The next question is whether there exist any classes of conditional distributions \( p(y|x) \) for which all distributions are provably hard to expectation-linearize. This remains an open problem and might be an interesting direction for future work. 6 EXPERIMENTS In this section, we evaluate the empirical performance of the proposed regularized dropout in (15) on a variety of network architectures for the classification task on three benchmark datasets: MNIST, CIFAR-10, and CIFAR-100. We applied the same data preprocessing procedure as in Srivastava et al. (2014). To make a thorough comparison and provide experimental evidence on how expectation-linearization interacts with the predictive power of the learned model, we also perform experiments with Monte Carlo (MC) dropout, which approximately computes the final prediction (the left-hand side of (3)) via Monte Carlo sampling, both with and without the proposed regularizer. In the case of MC dropout, we average \( m = 100 \) predictions using randomly sampled configurations. In addition, the network architectures and hyper-parameters for each experimental setup are the same as those in Srivastava et al. (2014), unless explicitly stated otherwise. Following previous work, for each dataset we held out 10,000 random training images for validation to tune the hyper-parameters, including \( \lambda \) in Eq. (15).
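For reference, the objective optimized in these experiments can be assembled as in the following sketch, which builds the Monte Carlo form of Eqs. (15)–(17) on top of the toy forward function from the earlier sketch (again, assumed names rather than the authors' code); the same sampled mask serves both the likelihood term and the penalty, and the deterministic pass uses \( \mathrm{E}[S] = p \).

```python
def regularized_loss(x, y, thetas, p=0.5, lam=1.0):
    """Eq. (15) with the Monte Carlo penalty of Eq. (17), reusing forward()."""
    gammas = [rng.binomial(1, p, size=W.shape[1]) for W, b in thetas]
    h_drop = forward(x, thetas, gammas)                    # sampled sub-network
    h_det = forward(x, thetas, [np.full(W.shape[1], p) for W, b in thetas])
    log_softmax = h_drop - np.log(np.sum(np.exp(h_drop)))
    nll = -log_softmax[y]                                  # likelihood term
    penalty = np.sum((h_drop - h_det) ** 2)                # ||h(x,s) - h(x,E[S])||²
    return nll + lam * penalty
```

Gradients of both terms flow through the shared parameters, so a standard SGD step on this loss realizes the regularized dropout training described in § 5.2.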
When the hyper-parameters are fixed, we train the final models with all the training data, including the validation data. A more detailed description of the conducted experiments is provided in Appendix D. For each experiment, we report the mean test errors with corresponding standard deviations over 5 repetitions. 6.1 MNIST The MNIST dataset (LeCun et al., 1998) consists of 70,000 handwritten digit images of size 28×28, where 60,000 images are used for training and the rest for testing. The task is to classify the images into 10 digit classes. For the purpose of comparison, we train 6 neural networks with different architectures. The experimental results are shown in Table 1. 6.2 CIFAR-10 AND CIFAR-100 The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) consist of 60,000 color images of size 32 × 32, drawn from 10 and 100 categories, respectively. 50,000 images are used for training and the rest for testing.

Table 1: Comparison of classification error percentage on test data with and without using expectation-linearization on MNIST, CIFAR-10 and CIFAR-100, under different network architectures (with standard deviations for 5 repetitions). <table> <tr> <th rowspan="2">Data</th> <th rowspan="2">Architecture</th> <th colspan="2">w.o. EL</th> <th colspan="2">w. EL</th> </tr> <tr> <th>Standard</th> <th>MC</th> <th>Standard</th> <th>MC</th> </tr> <tr> <td rowspan="6">MNIST</td> <td>3 dense,1024,logistic</td> <td>1.23±0.03</td> <td>1.06±0.02</td> <td>1.07±0.02</td> <td>1.06±0.03</td> </tr> <tr> <td>3 dense,1024,relu</td> <td>1.19±0.02</td> <td>1.04±0.02</td> <td>1.03±0.02</td> <td>1.05±0.03</td> </tr> <tr> <td>3 dense,1024,relu+max-norm</td> <td>1.05±0.03</td> <td>1.02±0.02</td> <td>0.98±0.03</td> <td>1.02±0.02</td> </tr> <tr> <td>3 dense,2048,relu+max-norm</td> <td>1.07±0.02</td> <td>1.00±0.02</td> <td>0.94±0.02</td> <td>0.97±0.03</td> </tr> <tr> <td>2 dense,4096,relu+max-norm</td> <td>1.03±0.02</td> <td>0.92±0.03</td> <td>0.90±0.02</td> <td>0.93±0.02</td> </tr> <tr> <td>2 dense,8192,relu+max-norm</td> <td>0.99±0.02</td> <td>0.96±0.02</td> <td>0.87±0.02</td> <td>0.92±0.03</td> </tr> <tr> <td>CIFAR-10</td> <td>3 conv+2 dense,relu+max-norm</td> <td>12.82±0.10</td> <td>12.16±0.12</td> <td>12.20±0.14</td> <td>12.21±0.15</td> </tr> <tr> <td>CIFAR-100</td> <td>3 conv+2 dense,relu+max-norm</td> <td>37.22±0.22</td> <td>36.01±0.21</td> <td>36.25±0.12</td> <td>36.10±0.18</td> </tr> </table>

![Three line plots showing error rate and empirical expectation-linearization risk relative to λ for MNIST, CIFAR-10, and CIFAR-100 datasets.](page_180_670_1208_246.png) Figure 1: Error rate and empirical expectation-linearization risk relative to \( \lambda \).

The neural network architecture we used for these two datasets has 3 convolutional layers, followed by two fully-connected (dense) hidden layers (again, the same as in Srivastava et al. (2014)). The experimental results are also recorded in Table 1. From Table 1 we can see that on the MNIST data, dropout network training with expectation-linearization outperforms standard dropout on all 6 neural architectures. On the CIFAR data, expectation-linearization reduces the error rate from 12.82% to 12.20% on CIFAR-10, a 0.62% improvement. For CIFAR-100, the improvement is 0.97%, with the error rate reduced from 37.22% to 36.25%. From the results we see that, with or without expectation-linearization, the MC dropout networks achieve similar results.
This illustrates that achieving expectation-linearity does not significantly degrade the predictive power of the learned models. Moreover, it is interesting to see that with the regularization, on the MNIST dataset, standard dropout networks achieve even better accuracy than MC dropout. This may be because, with expectation-linearization, standard dropout inference achieves a better approximation of the final prediction than MC dropout with (only) 100 samples. On the CIFAR datasets, MC dropout networks achieve better accuracy than the ones with the regularization. But, obviously, MC dropout requires much more inference time than standard dropout (MC dropout with \( m \) samples requires about \( m \) times the inference time of standard dropout). 6.3 Effect of Regularization Constant \( \lambda \) In this section, we explore the effect of varying the hyper-parameter for the expectation-linearization rate, \( \lambda \). We train the network architectures in Table 1 with the \( \lambda \) value ranging from 0.1 to 10.0. Figure 1 shows the test errors obtained as a function of \( \lambda \) on the three datasets. In addition, the middle and right panels of Figure 1 also show the empirical expectation-linearization risk \( \hat{\Delta} \) of Eq. (12) for varying \( \lambda \) on CIFAR-10 and CIFAR-100, where \( \hat{\Delta} \) is computed using Monte Carlo with 100 independent samples. From Figure 1 we can see that when \( \lambda \) increases, better expectation-linearity is achieved (i.e., \( \hat{\Delta} \) decreases). The model accuracy, however, does not keep improving with increasing \( \lambda \), showing that in practice the trade-off between model expectation-linearity and accuracy must be considered. Table 2: Comparison of test data errors using standard dropout, Monte Carlo dropout, standard dropout with our proposed expectation-linearization, and recently proposed dropout distillation on CIFAR-10 and CIFAR-100 under AllConv (with standard deviations for 5 repetitions). <table> <tr> <th>Data</th> <th>Network</th> <th>Standard</th> <th>MC</th> <th>w. EL</th> <th>Distillation</th> </tr> <tr> <td>CIFAR-10</td> <td>AllConv</td> <td>11.18±0.11</td> <td>10.58±0.21</td> <td>10.86±0.08</td> <td>10.81±0.14</td> </tr> <tr> <td>CIFAR-100</td> <td>AllConv</td> <td>35.50±0.23</td> <td>34.43±0.25</td> <td>35.10±0.13</td> <td>35.07±0.20</td> </tr> </table> 6.4 COMPARISON WITH DROPOUT DISTILLATION To make a thorough empirical comparison with the recently proposed Dropout Distillation method (Bulò et al., 2016), we also evaluate our regularization method on the CIFAR-10 and CIFAR-100 datasets with the All Convolutional Network (Springenberg et al., 2014) (AllConv). To facilitate comparison, we adopt the originally reported hyper-parameters and the same setup for training. Table 2 compares the classification error percentages on test data under AllConv using standard dropout, Monte Carlo dropout, standard dropout with our proposed expectation-linearization, and the recently proposed dropout distillation on CIFAR-10 and CIFAR-100.¹ According to Table 2, our proposed expectation-linear regularization method achieves comparable performance to dropout distillation. 7 CONCLUSIONS In this work, we attempted to establish a theoretical basis for the understanding of dropout, motivated by controlling the gap between dropout's training and inference phases.
Through formulating dropout as a latent variable model and introducing the notion of (approximate) expectation-linearity, we have formally studied the inference gap of dropout, and introduced an empirical measure as a regularization scheme to explicitly control the gap. Experiments on three benchmark datasets demonstrate that reducing the inference gap can indeed improve the end performance. In the future, we intend to formally relate the inference gap to the generalization error of the underlying network, hence providing further justification of regularized dropout. ACKNOWLEDGEMENTS This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. REFERENCES Jacob Andreas, Maxim Rabinovich, Michael I Jordan, and Dan Klein. On the accuracy of self-normalized log-linear models. In Advances in Neural Information Processing Systems, pp. 1774–1782, 2015. Pierre Baldi and Peter Sadowski. The dropout learning algorithm. Artificial intelligence, 210:78–122, 2014. Pierre Baldi and Peter J Sadowski. Understanding dropout. In Advances in Neural Information Processing Systems, pp. 2814–2822, 2013. David M Bradley and J Andrew Bagnell. Differential sparse coding. 2008. Samuel Rota Bulò, Lorenzo Porzi, and Peter Kontschieder. Dropout distillation. In Proceedings of The 33rd International Conference on Machine Learning, pp. 99–107, 2016. ¹We obtained similar results as that reported in Table 1 of Bulò et al. (2016) on CIFAR-10 corpus, while we cannot reproduce comparable results on CIFAR-100 (around 3% worse) Ning Chen, Jun Zhu, Jianfei Chen, and Bo Zhang. Dropout training for support vector machines. In Proceedings Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014. Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Insights and applications. In Deep Learning Workshop, ICML, 2015. Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems, 2016. Wei Gao and Zhi-Hua Zhou. Dropout rademacher complexity of deep neural networks. arXiv preprint arXiv:1402.3811, 2014. David P Helmbold and Philip M Long. On the inductive bias of dropout. arXiv preprint arXiv:1412.4736, 2014. David P Helmbold and Philip M Long. Fundamental differences between dropout and weight decay in deep networks. arXiv preprint arXiv:1602.04484, 2016. Geoffrey Hinton. A practical guide to training restricted boltzmann machines. Momentum, 9(1):926, 2010. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012. Prateek Jain, Vivek Kulkarni, Abhradeep Thakurta, and Oliver Williams. To drop or not to drop: Robustness, consistency and differential privacy properties of dropout. arXiv preprint arXiv:1503.02031, 2015. Diederik P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pp. 2575–2583, 2015. Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 
Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436–444, 2015. Xuezhe Ma and Eduard Hovy. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of ACL-2016, pp. 1064–1074, Berlin, Germany, August 2016. Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015. Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. Nathan Srebro, Jason Rennie, and Tommi S Jaakkola. Maximum-margin matrix factorization. In Advances in neural information processing systems, pp. 1329–1336, 2004. Nitish Srivastava. Improving neural networks with dropout. PhD thesis, University of Toronto, 2013. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014. Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pp. 267–288, 1996. Andrey Nikolayevich Tikhonov. On the stability of inverse problems. In Dokl. Akad. Nauk SSSR, volume 39, pp. 195–198, 1943. Stefan Wager, Sida Wang, and Percy S Liang. Dropout training as adaptive regularization. In Advances in neural information processing systems, pp. 351–359, 2013. Stefan Wager, William Fithian, Sida Wang, and Percy S Liang. Altitude training: Strong bounds for single-layer dropout. In Advances in Neural Information Processing Systems, pp. 100–108, 2014. Sida Wang and Christopher Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning, pp. 118–126, 2013. APPENDIX: DROPOUT WITH EXPECTATION-LINEAR REGULARIZATION A LVM Dropout Training vs. Standard Dropout Training Proof of Theorem 1 Proof. \[ \mathrm{E}_{S_D}[l(D, S_D; \theta)] = \int_S \prod_{i=1}^N p(s_i) \left( \sum_{i=1}^N \log p(y_i|x_i, s_i; \theta) \right) d\mu(s_1) \ldots d\mu(s_N) = \sum_{i=1}^N \int_S p(s_i) \log p(y_i|x_i, s_i; \theta) d\mu(s_i) \] Because \( \log(\cdot) \) is a concave function, from Jensen’s Inequality, \[ \int_S p(s) \log p(y|x, s; \theta) d\mu(s) \leq \log \int_S p(s)p(y|x, s; \theta) d\mu(s) \] Thus \[ \mathrm{E}_{S_D}[-l(D, S_D; \theta)] \geq \sum_{i=1}^N \log \int_S p(s_i)p(y_i|x_i, s_i; \theta) d\mu(s_i) = -l(D; \theta). \] □ B EXPECTATION-LINEAR DROPOUT NEURAL NETWORKS B.1 Proof of Theorem 2 Proof. Let \( \gamma^* = \mathrm{E}[\Gamma] \), and \[ A \triangleq \{ x : \| \mathrm{E}[f(x \odot \Gamma; \theta)] - f(x \odot \gamma^*; \theta) \|_2 = 0 \} \] Let \( X^* = \underset{x \in A}{\arg\min} \sup_{\gamma \in \mathcal{S}} \| X \odot \gamma - x \odot \gamma \|_2 \), and \( X^- = X - X^* \). Then, \[ X \odot \gamma = X^* \odot \gamma + X^- \odot \gamma \] In the following, we omit the parameter \( \theta \) for convenience. Moreover, we denote \[ \mathrm{E}_\Gamma[f(X \odot \Gamma; \theta)] \triangleq \mathrm{E}[f(X \odot \Gamma; \theta)|X] \] From Taylor Series, there exit some \( X', X'' \in \mathcal{X} \) satisfy that \[ \begin{align*} f(X \odot \Gamma) &= f(X^* \odot \Gamma) + f'(X' \odot \Gamma)(X^- \odot \Gamma) \\ f(X \odot \gamma^*) &= f(X^* \odot \gamma^*) + f'(X'' \odot \gamma^*)(X^- \odot \gamma^*) \end{align*} \] where we denote \( f'(x) = (\nabla_x f(x))^T \). 
Then, \[ \begin{align*} &\mathrm{E}_\Gamma[f(X \odot \Gamma) - f(X \odot \gamma^*)] \\ =& \mathrm{E}_\Gamma[f(X^* \odot \Gamma + X^- \odot \Gamma) - f(X^* \odot \gamma^* + X^- \odot \gamma^*)] \\ =& \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*) + f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^- \odot \gamma^*)] \\ =& \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*)] + \mathrm{E}_\Gamma[f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^- \odot \gamma^*)] \end{align*} \] Since \( X^* \in A \), we have \[ \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*)] = 0. \] Then, \[ \begin{align*} &\mathrm{E}_\Gamma[f(X \odot \Gamma) - f(X \odot \gamma^*)] \\ =& \mathrm{E}_\Gamma[f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^- \odot \gamma^*)] \\ =& \mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] + \mathrm{E}_\Gamma[f'(X'' \odot \gamma^*)(X^- \odot \Gamma - X^- \odot \gamma^*)] \\ =& \mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] \end{align*} \] Then, \[ \| \mathbb{E}_\Gamma [f(X \odot \Gamma)] - f(X \odot \gamma^*) \|_2 \\ = \| \mathbb{E}_\Gamma [(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] \|_2 \] Since \( \| X^- \odot \gamma' \|_2 \leq \sup_{\gamma \in \mathcal{S}} \| X^- \odot \gamma \|_2 = \inf_{x \in A} \sup_{\gamma \in \mathcal{S}} \| X \odot \gamma - x \odot \gamma \|_2 \), and from Jensen's inequality and property of operator norm, \[ \| \mathbb{E}_\Gamma [(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] \|_2 \\ \leq \mathbb{E}_\Gamma \left[ \| f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*) \|_{op} \| X^- \odot \Gamma \|_2 \right] \\ \leq 2B \mathbb{E}_\Gamma \left[ \| X^- \odot \Gamma \|_2 \right] \\ \leq 2B \inf_{x \in A} \sup_{\gamma \in \mathcal{S}} \| X \odot \gamma - x \odot \gamma \|_2 \] Finally we have, \[ \mathbb{E}_X \left[ \| \mathbb{E}_\Gamma [(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] \|_2 \right] \\ \leq 2B \mathbb{E} \left[ \inf_{x \in A} \sup_{\gamma \in \mathcal{S}} \| X \odot \gamma - x \odot \gamma \|_2 \right] \leq 2BC \] \hfill \qed B.2 Proof of Theorem 3 Proof. Induction on the number of the layers \( L \). As before, we omit the parameter \( \theta \). Initial step: when \( L = 1 \), the statement is obviously true. Induction on \( L \): Suppose that the statement is true for neural networks with \( L \) layers. Now we prove the case \( L + 1 \). From the inductive assumption, we have, \[ \mathbb{E}_X \left[ \| \mathbb{E}_{S_L} [\mathbf{H}^{(L)}(X, S_L)] - \mathbf{h}^{(L)}(X, \mathbb{E}[S_L]) \|_2 \right] \leq \Delta_L \] where \( S_L = \{\Gamma^{(1)}, \ldots, \Gamma^{(L)}\} \) is the dropout random variables for the first \( L \) layers, and \[ \Delta_L = (B\gamma)^{L-1}\delta + (\delta + B\gamma\sigma) \left( \frac{1 - (B\gamma)^{L-1}}{1 - B\gamma} \right) \] In addition, the \( L + 1 \) layer is \( \delta \)-approximately expectation-linear, we have: \[ \mathbb{E}_{\mathbf{H}^{(L)}} \left[ \| \mathbb{E}_{\Gamma^{(L+1)}} [f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})] - f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \|_2 \right] \leq \delta \] Let \( \mathbb{E}[\Gamma^{(l)}] = \gamma^{(l)}, \forall l \in \{1, \ldots, L + 1\} \), and let \( \mathbf{H}^{(l)} \) and \( \mathbf{h}^{(l)} \) be short for \( \mathbf{H}^{(l)}(X, S_l) \) and \( \mathbf{h}^{(l)}(X, \mathbb{E}(S_l)) \), respectively, when there is no ambiguity. 
Moreover, we denote \[ \mathbb{E}_S [\mathbf{H}^{(L)}(X, S; \theta)] = \mathbb{E}_S [\mathbf{H}^{(L)}(X, S; \theta)|X] \] for convenience. Then, \[ \begin{align*} &\mathbb{E}_X \left[ \| \mathbb{E}_{S_{L+1}} [\mathbf{H}^{(L+1)}] - \mathbf{h}^{(L+1)} \|_2 \right] \\ &= \mathbb{E}_X \left[ \| \mathbb{E}_{S_L} \left[ \mathbb{E}_{\Gamma^{(L+1)}} [f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})] - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \right] \right. \\ &\quad + \mathbb{E}_{S_L} \left[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) ] - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \right] \|_2 \\ &\leq \mathbb{E}_X \left[ \| \mathbb{E}_{S_L} \left[ \mathbb{E}_{\Gamma^{(L+1)}} [f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})] - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \right] \|_2 \right] \\ &\quad + \mathbb{E}_X \left[ \| \mathbb{E}_{S_L} \left[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) ] - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \right] \|_2 \right] \end{align*} \] From Eq. 2 and Jensen's inequality, we have \[ \begin{align*} &\mathbb{E}_X \left[ \| \mathbb{E}_{S_L} \left[ \mathbb{E}_{\Gamma^{(L+1)}} [f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})] - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \right] \|_2 \right] \\ &\leq \mathbb{E}_{\mathbf{H}^{(L)}} \left[ \| \mathbb{E}_{\Gamma^{(L+1)}} [f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})] - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \|_2 \right] \leq \delta \end{align*} \] and \[ \begin{align*} & \mathbb{E}_X \left[ \| E_{S_L} \left[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \right] - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \|_2 \right] \\ = & \mathbb{E}_X \left[ \| E_{S_L} \left[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \right] - f_{L+1}(E_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) \right. \\ & \left. + f_{L+1}(E_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \|_2 \right] \\ \leq & \mathbb{E}_X \left[ \| E_{S_L} \left[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \right] - f_{L+1}(E_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) \|_2 \right] \\ & + \mathbb{E}_X \left[ \| f_{L+1}(E_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \|_2 \right] \end{align*} \] (4) Using Jensen’s inequality, property of operator norm and \( \mathbb{E}[\mathrm{Var}[\mathbf{H}^{(l)}|X]] \leq \sigma^2 \), we have \[ \begin{align*} & \mathbb{E}_X \left[ \| E_{S_L} \left[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \right] - f_{L+1}(E_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) \|_2 \right] \\ \leq & \mathbb{E}_{\mathbf{H}^{(L)}} \left[ \| f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) - f_{L+1}(E_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) \|_2 \right] \\ \leq & B \gamma \mathbb{E}_{\mathbf{H}^{(L)}} \left[ \| \mathbf{H}^{(L)} - E_{S_L}[\mathbf{H}^{(L)}] \|_2 \right] \\ \leq & B \gamma \left( \mathbb{E}_{\mathbf{H}^{(L)}} \left[ \| \mathbf{H}^{(L)} - E_{S_L}[\mathbf{H}^{(L)}] \|_2^2 \right] \right)^{\frac{1}{2}} \leq B \gamma \sigma \end{align*} \] (5) From Eq. 1 \[ \begin{align*} & \mathbb{E}_X \left[ \| f_{L+1}(E_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \|_2 \right] \\ = & B \gamma \mathbb{E}_X \left[ \| E_{S_L}[\mathbf{H}^{(L)}] - \mathbf{h}^{(L)} \|_2 \right] \leq B \gamma \Delta_L \end{align*} \] (6) Finally, to sum up with Eq. 3, Eq. 4, , Eq. 5, , Eq. 
6, we have
\[ \begin{align*} & \mathbb{E}_X \left[ \| \mathbb{E}_{S_{L+1}}[\mathbf{H}^{(L+1)}] - \mathbf{h}^{(L+1)} \|_2 \right] \\ \leq & \delta + B \gamma \sigma + B \gamma \Delta_L \\ = & (B \gamma)^L \delta + (\delta + B \gamma \sigma) \left( \frac{1 - (B \gamma)^L}{1 - B \gamma} \right) = \Delta_{L+1} \end{align*} \]
\hfill \( \Box \)

C EXPECTATION-LINEARIZATION

C.1 PROOF OF THEOREM 4: UNIFORM DEVIATION BOUND

Before proving Theorem 4, we first fix notation. Let \( X^n = \{ X_1, \ldots, X_n \} \) be a set of \( n \) samples of the input \( X \). For a function space \( \mathcal{F} : \mathcal{X} \to \mathcal{R} \), we use \( \mathrm{Rad}_n(\mathcal{F}, X^n) \) to denote the empirical Rademacher complexity of \( \mathcal{F} \),
\[ \mathrm{Rad}_n(\mathcal{F}, X^n) = \mathrm{E}_\sigma \left[ \sup_{f \in \mathcal{F}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i f(X_i) \right) \right] \]
and the Rademacher complexity is defined as
\[ \mathrm{Rad}_n(\mathcal{F}) = \mathrm{E}_{X^n} \left[ \mathrm{Rad}_n(\mathcal{F}, X^n) \right] \]
In addition, we adopt the definition of dropout Rademacher complexity from Gao & Zhou (2014):
\[ \begin{align*} \mathcal{R}_n(\mathcal{H}, X^n, S^n) &= \mathrm{E}_\sigma \left[ \sup_{h \in \mathcal{H}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i h(X_i, S_i) \right) \right] \\ \mathcal{R}_n(\mathcal{H}) &= \mathrm{E}_{X^n, S^n} \left[ \mathcal{R}_n(\mathcal{H}, X^n, S^n) \right] \end{align*} \]
where \( \mathcal{H} : \mathcal{X} \times \mathcal{S} \to \mathcal{R} \) is a function space defined on the input space \( \mathcal{X} \) and the dropout variable space \( \mathcal{S} \); \( \mathcal{R}_n(\mathcal{H}, X^n, S^n) \) and \( \mathcal{R}_n(\mathcal{H}) \) are the empirical dropout Rademacher complexity and the dropout Rademacher complexity, respectively. We further denote \( \mathcal{R}_n(\mathcal{H}, X^n) \triangleq \mathrm{E}_{S^n}\left[\mathcal{R}_n(\mathcal{H}, X^n, S^n)\right] \). Now, we define the following function spaces:
\[ \begin{align*} \mathcal{F} &= \left\{ f(x; \theta) : f(x; \theta) = \mathrm{E}_S\left[\mathbf{H}^{(L)}(x, S; \theta)\right], \theta \in \Theta \right\} \\ \mathcal{G} &= \left\{ g(x; \theta) : g(x; \theta) = \mathbf{h}^{(L)}(x, \mathrm{E}[S]; \theta), \theta \in \Theta \right\} \\ \mathcal{H} &= \left\{ h(x, s; \theta) : h(x, s; \theta) = \mathbf{h}^{(L)}(x, s; \theta), \theta \in \Theta \right\} \end{align*} \]
Then, the function space of \( v(x) = f(x) - g(x) \) is \( \mathcal{V} = \{ f(x) - g(x) : f \in \mathcal{F}, g \in \mathcal{G} \} \).
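Although these complexity quantities enter only the analysis, they can be approximated numerically, which is a useful sanity check on the definitions. The following sketch (a toy illustration, not part of the proof: the single-unit function class, the finite grid standing in for \( \Theta \), and all sample sizes are assumptions) estimates \( \mathrm{Rad}_n(\mathcal{F}, X^n) \) by sampling Rademacher vectors and maximizing over sampled parameters; the same template estimates \( \mathcal{R}_n(\mathcal{H}, X^n, S^n) \) by additionally fixing one dropout sample per data point.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_theta, n_sigma, n_mc = 50, 5, 200, 100, 64

X = rng.normal(size=(n, d))             # the sample X^n
thetas = rng.normal(size=(n_theta, d))  # finite stand-in for Theta

def f_theta(x, theta, p=0.5):
    # f(x; theta) = E_S[H(x, S; theta)] for a single sigmoid unit,
    # estimated by Monte Carlo over Bernoulli(p) dropout masks.
    masks = rng.random((n_mc, x.shape[0])) < p
    pre = (x * masks) @ theta
    return np.mean(1.0 / (1.0 + np.exp(-pre)))

# F evaluated on the sample: matrix of f(X_i; theta_j)
F = np.array([[f_theta(X[i], thetas[j]) for j in range(n_theta)]
              for i in range(n)])

# Rad_n(F, X^n) = E_sigma[ sup_theta (1/n) sum_i sigma_i f(X_i; theta) ]
sigmas = rng.choice([-1.0, 1.0], size=(n_sigma, n))
rad_hat = np.mean(np.max(sigmas @ F / n, axis=1))
print(f"estimated Rad_n(F, X^n) ~ {rad_hat:.4f}")
```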
**Lemma 7.**
\[ Rad_n(\mathcal{F}, X^n) \leq \mathcal{R}_n(\mathcal{H}, X^n) \]
*Proof.*
\[ \begin{align*} \mathcal{R}_n(\mathcal{H}, X^n) &= \mathrm{E}_{S^n}\left[\mathcal{R}_n(\mathcal{H}, X^n, S^n)\right] \\ &= \mathrm{E}_{S^n}\left[ \mathrm{E}_\sigma \left[ \sup_{h \in \mathcal{H}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i h(X_i, S_i) \right) \right] \right] \\ &= \mathrm{E}_\sigma \left[ \mathrm{E}_{S^n} \left[ \sup_{h \in \mathcal{H}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i h(X_i, S_i) \right) \right] \right] \\ &\geq \mathrm{E}_\sigma \left[ \sup_{h \in \mathcal{H}} \mathrm{E}_{S^n} \left[ \left( \frac{1}{n} \sum_{i=1}^n \sigma_i h(X_i, S_i) \right) \right] \right] \\ &= \mathrm{E}_\sigma \left[ \sup_{h \in \mathcal{H}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i \mathrm{E}_{S_i} [h(X_i, S_i)] \right) \right] \\ &= \mathrm{E}_\sigma \left[ \sup_{h \in \mathcal{H}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i \mathrm{E}_{S_i} [\mathbf{H}^{(L)}(X_i, S_i; \theta)] \right) \right] = Rad_n(\mathcal{F}, X^n) \end{align*} \]
\hfill \( \Box \)
From Lemma 7, we have \( Rad_n(\mathcal{F}) \leq \mathcal{R}_n(\mathcal{H}) \).
**Lemma 8.**
\[ \begin{align*} \mathcal{R}_n(\mathcal{H}) &\leq \frac{\alpha B^{L} \gamma^{L/2}}{\sqrt{n}} \\ Rad_n(\mathcal{G}) &\leq \frac{\alpha B^L}{\sqrt{n}} \end{align*} \]
*Proof.* See Theorem 4 in Gao & Zhou (2014). \hfill \( \Box \)
Now, we can prove Theorem 4.
**Proof of Theorem 4**
*Proof.* From the Rademacher-based uniform deviation bound, with probability \( \geq 1 - \nu \),
\[ \sup_{v \in \mathcal{V}} |\Delta - \hat{\Delta}| < 2Rad_n(\mathcal{V}) + \beta \sqrt{\frac{\log(1/\nu)}{n}} \]
Since \( \mathcal{V} = \mathcal{F} - \mathcal{G} \), we have
\[ Rad_n(\mathcal{V}) = Rad_n(\mathcal{F} - \mathcal{G}) \leq Rad_n(\mathcal{F}) + Rad_n(\mathcal{G}) \leq \frac{\alpha B^L (\gamma^{L/2} + 1)}{\sqrt{n}} \]
Then, finally, we have that with probability \( \geq 1 - \nu \),
\[ \sup_{\theta \in \Theta} |\Delta - \hat{\Delta}| < \frac{2\alpha B^L (\gamma^{L/2} + 1)}{\sqrt{n}} + \beta \sqrt{\frac{\log(1/\nu)}{n}} \]
\hfill \( \Box \)

C.2 PROOF OF THEOREM 5: NON-UNIFORM BOUND OF MODEL ACCURACY

For convenience, we denote \( \lambda = \{\theta_1, \ldots, \theta_{L-1}\} \). Then \( \theta = \{\lambda, \eta\} \), and the MLE is \( \hat{\theta} = \{\hat{\lambda}, \hat{\eta}\} \).

Lemma 9.
\[ \| \nabla f_L(\cdot; \eta)^T \|_{op} \leq 2\|\eta\|_2 \] (7)
Proof. Denote
\[ A = \nabla f_L(\cdot; \eta)^T = [p_y(\eta_y - \overline{\eta})^T] \Big|_{y=1}^k \]
where \( p_y = p(y|x, s; \theta) \) and \( \overline{\eta} = \mathrm{E}[\eta_Y] = \sum_{y=1}^k p_y \eta_y \). For each \( v \) such that \( \|v\|_2 = 1 \),
\[ \begin{align*} \|Av\|_2^2 &= \sum_{y \in \mathcal{Y}} \left( p_y (\eta_y - \overline{\eta})^T v \right)^2 \leq \sum_{y \in \mathcal{Y}} \|p_y (\eta_y - \overline{\eta})\|_2^2 \|v\|_2^2 = \sum_{y \in \mathcal{Y}} \|p_y (\eta_y - \overline{\eta})\|_2^2 \\ &\leq \sum_{y \in \mathcal{Y}} p_y \| \eta_y - \overline{\eta} \|_2^2 \leq \sum_{y \in \mathcal{Y}} 2p_y \left( \|\eta_y\|_2^2 + \sum_{y' \in \mathcal{Y}} p_{y'} \| \eta_{y'} \|_2^2 \right) \\ &= 4 \sum_{y \in \mathcal{Y}} p_y \| \eta_y \|_2^2 \leq 4\|\eta\|_2^2 \end{align*} \]
So we have \( \|A\|_{op} \leq 2\|\eta\|_2 \). \( \square \)
Lemma 10. If the parameter \( \tilde{\theta} = \{\hat{\lambda}, \eta\} \) satisfies \( \|\eta\|_2 \leq \frac{\delta}{4\beta} \), then \( V(D; \tilde{\theta}) \leq \delta \), where \( V(D; \theta) \) is defined in Eq. (16).
Proof.
Let \( S_L = \{\Gamma^{(1)}, \ldots, \Gamma^{(L)}\} \), and let \( \mathbf{H}^{(l)} \) and \( \mathbf{h}^{(l)} \) be short for \( \mathbf{H}^{(l)}(X, S_l; \tilde{\theta}) \) and \( \mathbf{h}^{(l)}(X, \mathrm{E}[S_l]; \tilde{\theta}) \), respectively. From Lemma 9, we have \( \|f_L(x; \eta) - f_L(y; \eta)\|_2 \leq 2\|\eta\|_2 \|x - y\|_2 \). Then,
\[ \begin{align*} \| \mathrm{E}_{S_L} [\mathbf{H}^{(L)}] - \mathbf{h}^{(L)} \|_2 &= \| \mathrm{E}_{S_{L-1}} [f_L(\mathbf{H}^{(L-1)}; \eta)] - f_L(\mathbf{h}^{(L-1)}; \eta) \|_2 \\ &\leq \mathrm{E}_{S_{L-1}} \left[ \| f_L(\mathbf{H}^{(L-1)}; \eta) - f_L(\mathbf{h}^{(L-1)}; \eta) \|_2 \right] \\ &\leq 2\|\eta\|_2 \, \mathrm{E}_{S_{L-1}} \left[ \|\mathbf{H}^{(L-1)} - \mathbf{h}^{(L-1)}\|_2 \right] \\ &\leq 4\beta\|\eta\|_2 \leq \delta \end{align*} \]
where the second line uses Jensen's inequality and the last line uses \( \|\mathbf{H}^{(L-1)} - \mathbf{h}^{(L-1)}\|_2 \leq 2\beta \). \( \square \)
Lemma 10 says that we can obtain a parameter \( \tilde{\theta} \) satisfying the expectation-linearization constraint by explicitly scaling down \( \hat{\eta} \) while keeping \( \hat{\lambda} \). In order to prove Theorem 5, we make the following assumptions:
• The dimension of \( \mathbf{h}^{(L-1)} \) is \( d \), i.e. \( \mathbf{h}^{(L-1)} \in \mathcal{R}^d \).
• Since \( \forall y \in \mathcal{Y}, p(y|x; \hat{\theta}) > 0 \), we assume \( p(y|x; \hat{\theta}) \geq 1/b \), where \( b \geq |\mathcal{Y}| = k \).
• As in the body text, \( p(y|x, s; \hat{\theta}) \) makes a unique best prediction; in particular, \( \hat{\eta}_{y^*}^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda}) - \hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda}) > c\|\hat{\eta}\|_2 \) for all \( y \neq y^* \).
For convenience, we write \( \eta_y^T \mathbf{h}^{(L-1)}(x, s; \lambda) = \eta^T u_y(x, s; \lambda) \), where \( u_y^T(x, s; \lambda) = (v_1^T, \ldots, v_k^T) \) and
\[ v_i = \left\{ \begin{array}{ll} \mathbf{h}^{(L-1)}(x, s; \lambda) & \text{if } i = y \\ 0 & \text{otherwise} \end{array} \right. \]
To prove Theorem 5, we first prove the following lemmas.
Lemma 11. If \( p(y|x; \hat{\theta}) \geq 1/b \), then \( \forall \alpha \in [0, 1] \), for the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \), we have
\[ p(y|x; \tilde{\theta}) \geq \frac{1}{b} \]
Proof. We define
\[ f(\alpha) \triangleq p(y|x,s;\tilde{\theta}) = \frac{e^{\alpha \hat{\eta}_y^T \mathbf{h}^{(L-1)}(x,s;\hat{\lambda})}}{\sum_{y' \in \mathcal{Y}} e^{\alpha \hat{\eta}_{y'}^T \mathbf{h}^{(L-1)}(x,s;\hat{\lambda})}} = \frac{\left(e^{\hat{\eta}_y^T \mathbf{h}^{(L-1)}(x,s;\hat{\lambda})}\right)^{\alpha}}{\sum_{y' \in \mathcal{Y}} \left(e^{\hat{\eta}_{y'}^T \mathbf{h}^{(L-1)}(x,s;\hat{\lambda})}\right)^{\alpha}} \]
Since \( \mathcal{Y} = \{1, \ldots, k\} \), for fixed \( x \in \mathcal{X}, s \in \mathcal{S} \), \( \log f(\alpha) \) is a concave function of \( \alpha \). Since \( b \geq k \), we have
\[ \log f(\alpha) \geq (1-\alpha) \log f(0) + \alpha \log f(1) \geq -\log b \]
So we have \( \forall x, s,\ p(y|x,s;\tilde{\theta}) \geq 1/b \). Then
\[ p(y|x;\tilde{\theta}) = \mathrm{E}_S \left[ p(y|x,S;\tilde{\theta}) \right] \geq \frac{1}{b} \]
\( \square \)
Lemma 12. *If \( y \) is not the majority class, i.e. \( y \neq y^* \), then for the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \),*
\[ p(y|x,s;\tilde{\theta}) \leq e^{-c \alpha \| \hat{\eta} \|_2} \]
Proof.
\[ p(y|x,s;\tilde{\theta}) = \frac{e^{\alpha \hat{\eta}^T u_y}}{\sum_{y' \in \mathcal{Y}} e^{\alpha \hat{\eta}^T u_{y'}}} \leq \frac{e^{\alpha \hat{\eta}^T u_y}}{e^{\alpha \hat{\eta}^T u_{y^*}}} \leq e^{-c \alpha \| \hat{\eta} \|_2} \]
\( \square \)
Lemma 13.
*For fixed \( x \) and \( s \), each entry of the vector \( p(y|x,s;\tilde{\theta})(u_y - \mathrm{E}_Y[u_Y]) \) under the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \) satisfies:*
\[ |p(y|x,s;\tilde{\theta})(u_y - \mathrm{E}_Y[u_Y])|_i \leq \beta (k-1) e^{-c \alpha \| \hat{\eta} \|_2} \]
Proof. Suppose first that \( y \) is the majority class of \( p(\cdot|x,s;\tilde{\theta}) \). Then,
\[ u_y - \mathrm{E}_Y[u_Y] = (v_{y'})_{y'=1}^k \]
where
\[ v_{y'} = \left\{ \begin{array}{ll} (1 - p(y|x,s;\tilde{\theta})) \mathbf{h}^{(L-1)} & \text{if } y' = y \\ -p(y'|x,s;\tilde{\theta}) \mathbf{h}^{(L-1)} & \text{otherwise} \end{array} \right. \]
From Lemma 12, we have
\[ |p(y|x,s;\tilde{\theta})(u_y - \mathrm{E}_Y[u_Y])|_i \leq |(u_y - \mathrm{E}_Y[u_Y])|_i \leq \beta (k-1) e^{-c \alpha \| \hat{\eta} \|_2} \]
Now suppose \( y \) is not the majority class of \( p(\cdot|x,s;\tilde{\theta}) \). Then,
\[ |p(y|x,s;\tilde{\theta})(u_y - \mathrm{E}_Y[u_Y])|_i \leq p(y|x,s;\tilde{\theta}) \beta \leq \beta e^{-c \alpha \| \hat{\eta} \|_2} \]
Overall, the lemma follows. \( \square \)
Lemma 14. *We denote the matrix*
\[ A \triangleq \mathrm{E}_S \left[ \frac{p(y|x,s;\tilde{\theta})}{p(y|x;\tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y])(u_y - \mathrm{E}_Y[u_Y])^T \right] - \mathrm{E}_S \left[ \frac{p(y|x,s;\tilde{\theta})}{p(y|x;\tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y]) \right] \mathrm{E}_S \left[ \frac{p(y|x,s;\tilde{\theta})}{p(y|x;\tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y]) \right]^T \]
*Then each entry of \( A \) under the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \) satisfies:*
\[ |A_{ij}| \leq 2b(k-1) \beta^2 e^{-c \alpha \| \hat{\eta} \|_2} \]
Proof. From Lemma 11, we have \( p(y|x; \tilde{\theta}) \geq 1/b \). Additionally, each entry of \( u_y - \mathrm{E}_Y[u_Y] \) is bounded by \( \beta \) in absolute value. We have for each \( i \)
\[ \left| \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y]) \right] \right|_i \leq \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} \right] \beta = \beta \]
Then, from Lemma 13,
\[ |A_{ij}| \leq 2b(k-1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2} \]
\( \square \)
Lemma 15. We denote the matrix
\[ B \triangleq \mathrm{E}_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} \left( \mathrm{E}_Y \left[ u_Y u_Y^T \right] - \mathrm{E}_Y[u_Y]\mathrm{E}_Y[u_Y]^T \right) \right] \]
Then each entry of \( B \) under the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \) satisfies:
\[ |B_{ij}| \leq 2(k-1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2} \]
Proof. It suffices to show that, for fixed \( x \) and \( s \) and each \( i, j \):
\[ |\mathrm{E}_Y \left[ u_Y u_Y^T \right] - \mathrm{E}_Y[u_Y]\mathrm{E}_Y[u_Y]^T|_{ij} \leq 2(k-1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2} \]
Since
\[ |\mathrm{E}_Y \left[ u_Y u_Y^T \right] - \mathrm{E}_Y[u_Y]\mathrm{E}_Y[u_Y]^T|_{ij} = |\mathrm{Cov}_Y[(u_Y)_i, (u_Y)_j]| \leq \beta^2 \sum_{y=1}^k \left( p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \right) \]
Suppose \( y \) is the majority class. Then from Lemma 12,
\[ p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \leq 1 - p(y|x, s; \tilde{\theta}) \leq (k-1)e^{-c\alpha \| \hat{\eta} \|_2} \]
If \( y \) is not the majority class, then
\[ p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \leq p(y|x, s; \tilde{\theta}) \leq e^{-c\alpha \| \hat{\eta} \|_2} \]
So we have
\[ \sum_{y=1}^k \left( p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \right) \leq 2(k-1)e^{-c\alpha \| \hat{\eta} \|_2} \]
The lemma follows. \( \square \)
Lemma 16.
Under the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \), the eigenvalues of the matrix
\[ \frac{1}{n} \sum_{i=1}^n (A(x_i, y_i) - B(x_i, y_i)) \] (8)
are at most \( 2dk(k-1)(b+1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2} \) in absolute value.
Proof. From Lemma 14 and Lemma 15, each entry of the matrix in (8) is at most \( 2(k-1)(b+1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2} \) in absolute value. Thus, by Gershgorin’s theorem, every eigenvalue of the matrix in (8) is at most \( 2dk(k-1)(b+1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2} \) in absolute value. \( \square \)
Now, we can prove Theorem 5 by constructing a scaled version of the MLE \( \hat{\theta} \) that satisfies the expectation-linearization constraint.
Proof of Theorem 5
Proof. Consider the likelihood evaluated at \( \tilde{\theta} = \{ \hat{\lambda}, \alpha \hat{\eta} \} \), where \( \alpha = \frac{\delta}{4\beta \| \hat{\eta} \|_2} \). If \( \alpha > 1 \), then \( \| \hat{\eta} \|_2 < \frac{\delta}{4\beta} \), so by Lemma 10 the MLE \( \hat{\theta} \) itself already satisfies the expectation-linearization constraint and the bound holds trivially. So we can assume that \( 0 \leq \alpha \leq 1 \), and we know from Lemma 10 that \( \tilde{\theta} \) satisfies \( V(D; \tilde{\theta}) \leq \delta \). Then,
\[ \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \Delta_l(\hat{\theta}, \tilde{\theta}) = \frac{1}{n}(l(D; \hat{\theta}) - l(D; \tilde{\theta})) = g(\hat{\lambda}, \hat{\eta}) - g(\hat{\lambda}, \alpha \hat{\eta}) \]
where \( g(\lambda, \eta) = \frac{1}{n} l(D; (\lambda, \eta)) \). Taking a second-order Taylor expansion in \( \eta \) (with Lagrange remainder at some \( \eta' \) on the segment between \( \alpha\hat{\eta} \) and \( \hat{\eta} \)), we have
\[ g(\hat{\lambda}, \alpha \hat{\eta}) = g(\hat{\lambda}, \hat{\eta}) + \nabla_{\eta}^T g(\hat{\lambda}, \hat{\eta})(\alpha \hat{\eta} - \hat{\eta}) + \frac{1}{2}(\alpha \hat{\eta} - \hat{\eta})^T \nabla_{\eta}^2 g(\hat{\lambda}, \eta')(\alpha \hat{\eta} - \hat{\eta}) \]
Since \( \hat{\theta} \) is the MLE, the first-order term \( \nabla_{\eta}^T g(\hat{\lambda}, \hat{\eta})(\alpha \hat{\eta} - \hat{\eta}) = 0 \). The Hessian in the second-order term is exactly the matrix in Eq. (8). Thus, from Lemma 16 we have
\[ \begin{align*} g(\hat{\lambda}, \alpha \hat{\eta}) &\geq g(\hat{\lambda}, \hat{\eta}) - \frac{1}{2}(1 - \alpha)^2 \| \hat{\eta} \|_2^2 \cdot 2dk(k-1)(b+1)\beta^2 e^{-c \alpha \| \hat{\eta} \|_2} \\ &= g(\hat{\lambda}, \hat{\eta}) - dk(k-1)(b+1)\beta^2 \left( \| \hat{\eta} \|_2 - \frac{\delta}{4\beta} \right)^2 e^{-c \delta / 4\beta} \\ &\geq g(\hat{\lambda}, \hat{\eta}) - c_1 \beta^2 \left( \| \hat{\eta} \|_2 - \frac{\delta}{4\beta} \right)^2 e^{-c_2 \delta / 4\beta} \end{align*} \]
setting \( c_1 = 2dk(k-1)(b+1) \) and \( c_2 = c \). Then the theorem follows. \( \square \)

C.3 PROOF OF THEOREM 6: UNIFORM BOUND OF MODEL ACCURACY

In the following, we denote \( \tilde{\theta} = \{ \hat{\lambda}, \alpha \hat{\eta} \} \).
Lemma 17. For each \( y \in \mathcal{Y} \), if \( p(y|x, s; \hat{\theta}) \geq 1/k \), then \( \forall \alpha \in [0, 1] \),
\[ p(y|x, s; \tilde{\theta}) \geq \frac{1}{k} \]
Proof. This lemma can be regarded as a corollary of Lemma 11. \( \square \)
Lemma 18. For fixed \( x \) and \( s \), denote \( w_y = e^{\hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})} \). Then we have
\[ p(y|x, s; \tilde{\theta}) = \frac{e^{\alpha \hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})}}{\sum_{y' \in \mathcal{Y}} e^{\alpha \hat{\eta}_{y'}^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})}} = \frac{(w_y)^{\alpha}}{\sum_{y' \in \mathcal{Y}} (w_{y'})^{\alpha}} \]
Additionally, denote \( g_s(\alpha) = \sum_{y' \in \mathcal{Y}} p(y'|x, s; \tilde{\theta}) \log w_{y'} - \log w_y \), and assume \( g_s(0) \geq 0 \).
Then we have, \( \forall \alpha \geq 0 \),
\[ g_s(\alpha) \geq 0 \]
Proof.
\[ \frac{\partial g_s(\alpha)}{\partial \alpha} = \sum_{y' \in \mathcal{Y}} \log w_{y'} \frac{\partial p(y'|x, s; \tilde{\theta})}{\partial \alpha} = \mathrm{Var}_Y [\log w_Y | X = x, S = s] \geq 0 \]
So \( g_s(\alpha) \) is non-decreasing. Since \( g_s(0) \geq 0 \), we have \( g_s(\alpha) \geq 0 \) when \( \alpha \geq 0 \). \( \square \)
From the above lemma, for each training instance \( (x_i, y_i) \in D \) and \( \forall \alpha \in [0, 1] \),
\[ \mathrm{E}_Y \left[ \log p(Y|x_i, s; \tilde{\theta}) \right] \geq \log p(y_i|x_i, s; \tilde{\theta}) \] (9)
For convenience, we define
\[ m(s, y) = \log p(y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[ \log p(Y|x, s; \tilde{\theta}) \right] \]
Lemma 19. If \( y \) satisfies the premise of Lemma 17 and \( g_s(\alpha) \geq 0 \), then
\[ \mathrm{Var}_Y[m(s, Y)] \geq m(s, y)^2 \]
Proof. First, since \( \mathrm{E}_Y[\log p(Y|x,s;\tilde{\theta})] = \mathrm{KL}\left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) + \log \frac{1}{k} \), we have
\[ m(s, y) = \log p(y|x, s; \tilde{\theta}) - \log \frac{1}{k} - \mathrm{KL}\left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) \leq 0 \]
So we have
\[ \begin{align*} (\mathrm{Var}_Y[m(s, Y)])^{1/2} &= \sqrt{\mathrm{E}_Y \left[ \left( \log p(Y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[ \log p(Y|x, s; \tilde{\theta}) \right] \right)^2 \right]} \\ &\geq \mathrm{E}_Y \left[ \left| \log p(Y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[ \log p(Y|x, s; \tilde{\theta}) \right] \right| \right] \\ &= \mathrm{E}_Y \left[ \left| \mathrm{KL}\left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) + \log \tfrac{1}{k} - \log p(Y|x, s; \tilde{\theta}) \right| \right] \\ &\geq \mathrm{KL}\left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) + \mathrm{E}_Y \left[ \log p(Y|x, s; \tilde{\theta}) - \log \tfrac{1}{k} \right] \\ &= 2\,\mathrm{KL}\left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) \end{align*} \]
using that \( \mathrm{KL}\left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) \geq 0 \) and \( \log p(y|x, s; \tilde{\theta}) \geq \log \frac{1}{k} \). Moreover,
\[ 2\,\mathrm{KL}\left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) \geq \mathrm{KL}\left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) + \log \tfrac{1}{k} - \log p(y|x, s; \tilde{\theta}) = -m(s, y) \]
Then the lemma follows. \( \square \)
From Lemma 19 and Eq. (9), for each training instance \((x_i, y_i) \in D\) and \( \forall \alpha \in [0, 1] \),
\[ \mathrm{Var}_Y[m(s, Y)] \geq m(s, y_i)^2 \] (10)
Lemma 20. For each training instance \((x_i, y_i) \in D\) and \( \forall \alpha \in [0, 1] \), we have
\[ \log p(y_i|x_i; \{\hat{\lambda}, \alpha \hat{\eta}\}) \geq (1-\alpha) \log p(y_i|x_i; \{\hat{\lambda}, 0\}) + \alpha \log p(y_i|x_i; \{\hat{\lambda}, \hat{\eta}\}) \]
Proof. We define
\[ f(\alpha) = \log p(y_i|x_i; \{\hat{\lambda}, \alpha \hat{\eta}\}) - (1-\alpha) \log p(y_i|x_i; \{\hat{\lambda}, 0\}) - \alpha \log p(y_i|x_i; \{\hat{\lambda}, \hat{\eta}\}) \]
Because \( f(0) = f(1) = 0 \), we only need to prove that \( f(\alpha) \) is concave on \([0, 1]\). We have
\[ \nabla^2 f(\alpha) = -\mathrm{E}_{S|Y=y_i} [\mathrm{Var}_Y[m(S, Y)]] + \mathrm{Var}_{S|Y=y_i}[m(S, y_i)] \]
where \( S|Y = y_i \) is distributed according to \( p(s|Y = y_i, x_i; \tilde{\theta}) = \frac{p(y_i|x_i, s; \tilde{\theta}) p(s)}{p(y_i|x_i; \tilde{\theta})} \). From Eq.
(10), we have
\[ \mathrm{E}_{S|Y=y_i} [\mathrm{Var}_Y[m(S, Y)]] \geq \mathrm{E}_{S|Y=y_i} [m(S, y_i)^2] \geq \mathrm{Var}_{S|Y=y_i}[m(S, y_i)] \]
So we have \( \nabla^2 f(\alpha) \leq 0 \), and the lemma follows. \( \square \)
Now, we can prove Theorem 6 using the same construction of an expectation-linearizing parameter as in Theorem 5.
Proof of Theorem 6
Proof. Consider the same parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \), where \( \alpha = \frac{\delta}{4\beta \|\hat{\eta}\|_2} \leq 1 \). We know that \( \tilde{\theta} \) satisfies \( V(D; \tilde{\theta}) \leq \delta \). Then,
\[ \Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \Delta_l(\hat{\theta}, \tilde{\theta}) = \frac{1}{n}(l(D; \hat{\theta}) - l(D; \tilde{\theta})) \]
From Lemma 20 we have:
\[ l(D; \tilde{\theta}) = l(D; \{ \hat{\lambda}, \alpha \hat{\eta} \}) \geq (1 - \alpha) l(D; \{ \hat{\lambda}, 0 \}) + \alpha l(D; \{ \hat{\lambda}, \hat{\eta} \}) \]
So
\[ \begin{align*} \Delta_l(\hat{\theta}, \hat{\theta}_\delta) &\leq (1 - \alpha) \frac{1}{n} \left( l(D; \hat{\theta}) - l(D; \{ \hat{\lambda}, 0 \}) \right) \\ &= (1 - \alpha) \frac{1}{n} \sum_{i=1}^n \left( \log p(y_i|x_i; \hat{\theta}) - \log \tfrac{1}{k} \right) \\ &\to (1 - \alpha)\, \mathrm{E} [ \mathrm{KL} (p(\cdot|X; \hat{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y})) ] \quad (\text{as } n \to \infty) \\ &= \left( 1 - \frac{\delta}{4 \beta \| \hat{\eta} \|_2} \right) \mathrm{E} [ \mathrm{KL} (p(\cdot|X; \hat{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y})) ] \end{align*} \]
\hfill \( \Box \)

D DETAILED DESCRIPTION OF EXPERIMENTS

D.1 NEURAL NETWORK ARCHITECTURES

MNIST For MNIST, we train 6 different fully-connected (dense) neural networks with 2 or 3 layers (see Table 1). For all architectures, we used dropout rate \( p = 0.5 \) for all hidden layers and \( p = 0.2 \) for the input layer.

CIFAR-10 and CIFAR-100 For the two CIFAR datasets, we used the same architecture as in Srivastava et al. (2014) — three convolutional layers followed by two fully-connected hidden layers. The convolutional layers have 96, 128, and 256 filters, respectively, each with a \( 5 \times 5 \) receptive field applied with a stride of 1. Each convolutional layer is followed by a max-pooling layer that pools \( 3 \times 3 \) regions at a stride of 2. The fully-connected layers have 2048 units each. All units use the rectified linear activation function. Dropout was applied to all layers with dropout rates \( p = (0.1, 0.25, 0.25, 0.5, 0.5, 0.5) \) for the layers going from the input through the convolutional layers to the fully-connected layers.

D.2 NEURAL NETWORK TRAINING

Neural network training in all the experiments is performed with mini-batch stochastic gradient descent (SGD) with momentum. We choose an initial learning rate \( \eta_0 \), and the learning rate is updated on each epoch of training as \( \eta_t = \eta_0 / (1 + \rho t) \), where \( \rho \) is the decay rate and \( t \) is the number of epochs completed. We run each experiment for 2,000 epochs and choose the parameters achieving the best performance on the validation sets. Table 3 summarizes the chosen hyper-parameters for all experiments. Most of the hyper-parameters are taken from Srivastava et al. (2014). For some experiments, however, we could not reproduce the performance reported in Srivastava et al. (2014) (one possible reason is that we used a different library for our implementation); for these experiments, we tuned the hyper-parameters on the validation sets by random search. Due to time constraints it is infeasible to do a random search across the full hyper-parameter space.
Thus, we try to use as many of the hyper-parameters reported in Srivastava et al. (2014) as possible.

D.3 EFFECT OF EXPECTATION-LINEARIZATION RATE \( \lambda \)

Table 4 gives the detailed results of the experiments on the effect of \( \lambda \). For MNIST, it lists the error rates under different \( \lambda \) values for six different network architectures. For the two CIFAR datasets, it gives the error rates under different \( \lambda \) values, along with the empirical expectation-linearization risk \( \hat{\Delta} \).

Table 3: Hyper-parameters for all experiments (for CIFAR, the two value columns correspond to CIFAR-10 and CIFAR-100, respectively).

<table>
<tr> <th>Experiment</th> <th>Hyper-parameter</th> <th colspan="2">Value</th> </tr>
<tr> <td rowspan="6">MNIST</td> <td>batch size</td> <td colspan="2">200</td> </tr>
<tr> <td>initial learning rate \( \eta_0 \)</td> <td colspan="2">0.1</td> </tr>
<tr> <td>decay rate \( \rho \)</td> <td colspan="2">0.025</td> </tr>
<tr> <td>momentum</td> <td colspan="2">0.9</td> </tr>
<tr> <td>momentum type</td> <td colspan="2">standard</td> </tr>
<tr> <td>max-norm constraint</td> <td colspan="2">3.5</td> </tr>
<tr> <td rowspan="9">CIFAR</td> <td></td> <td>CIFAR-10</td> <td>CIFAR-100</td> </tr>
<tr> <td>batch size</td> <td>10</td> <td>100</td> </tr>
<tr> <td>initial learning rate \( \eta_0 \) for conv layers</td> <td>100</td> <td>100</td> </tr>
<tr> <td>initial learning rate \( \eta_0 \) for dense layers</td> <td>0.001</td> <td>0.001</td> </tr>
<tr> <td>decay rate \( \rho \)</td> <td>0.1</td> <td>0.02</td> </tr>
<tr> <td>momentum</td> <td>0.005</td> <td>0.005</td> </tr>
<tr> <td>momentum type</td> <td>standard</td> <td>nesterov</td> </tr>
<tr> <td>max-norm constraint</td> <td>4.0</td> <td>2.0</td> </tr>
<tr> <td>L2-norm decay</td> <td>0.001</td> <td>0.001</td> </tr>
</table>

Table 4: Detailed results for experiments on the effect of \( \lambda \) (entries marked – were not run).

<table>
<tr> <th rowspan="2">Experiment</th> <th rowspan="2"></th> <th colspan="8">\( \lambda \)</th> </tr>
<tr> <th>0.0</th> <th>0.5</th> <th>1.0</th> <th>2.0</th> <th>3.0</th> <th>5.0</th> <th>7.0</th> <th>10.0</th> </tr>
<tr> <td rowspan="6">MNIST</td> <td>model 1</td> <td>1.23</td> <td>1.12</td> <td>1.12</td> <td>1.08</td> <td>1.07</td> <td>1.10</td> <td>1.25</td> <td>1.35</td> </tr>
<tr> <td>model 2</td> <td>1.19</td> <td>1.14</td> <td>1.08</td> <td>1.04</td> <td>1.03</td> <td>1.07</td> <td>1.13</td> <td>1.21</td> </tr>
<tr> <td>model 3</td> <td>1.05</td> <td>1.04</td> <td>0.98</td> <td>1.03</td> <td>1.05</td> <td>1.05</td> <td>1.10</td> <td>1.12</td> </tr>
<tr> <td>model 4</td> <td>1.07</td> <td>1.02</td> <td>0.97</td> <td>0.94</td> <td>0.96</td> <td>1.01</td> <td>1.05</td> <td>1.20</td> </tr>
<tr> <td>model 5</td> <td>1.03</td> <td>0.95</td> <td>0.95</td> <td>0.90</td> <td>0.92</td> <td>0.98</td> <td>1.03</td> <td>1.08</td> </tr>
<tr> <td>model 6</td> <td>0.99</td> <td>0.98</td> <td>0.93</td> <td>0.87</td> <td>0.96</td> <td>0.98</td> <td>1.05</td> <td>1.10</td> </tr>
<tr> <td rowspan="2">CIFAR-10</td> <td>error rate</td> <td>12.82</td> <td>12.52</td> <td>12.38</td> <td>12.20</td> <td>12.60</td> <td>12.84</td> <td>13.10</td> <td>–</td> </tr>
<tr> <td>\( \hat{\Delta} \)</td> <td>0.0139</td> <td>0.0128</td> <td>0.0104</td> <td>0.0095</td> <td>0.0089</td> <td>0.0085</td> <td>0.0077</td> <td>–</td> </tr>
<tr> <td rowspan="2">CIFAR-100</td> <td>error rate</td> <td>37.22</td> <td>36.75</td> <td>36.25</td> <td>37.01</td> <td>37.18</td> <td>37.58</td> <td>38.01</td> <td>–</td> </tr>
<tr> <td>\( \hat{\Delta} \)</td> <td>0.0881</td> <td>0.0711</td> <td>0.0590</td> <td>0.0529</td> <td>0.0500</td> <td>0.0467</td> <td>0.0411</td> <td>–</td> </tr>
</table>
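For reference, the epoch-wise learning-rate schedule of D.2 is straightforward to reproduce; a minimal sketch (using the MNIST values of \( \eta_0 \) and \( \rho \) from Table 3 purely as an illustration):

```python
def learning_rate(epoch, eta0=0.1, rho=0.025):
    # eta_t = eta_0 / (1 + rho * t), updated once per completed epoch (D.2)
    return eta0 / (1.0 + rho * epoch)

for t in [0, 10, 100, 1000]:
    print(t, round(learning_rate(t), 5))
```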
ABSTRACT

Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout’s training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. In this work, we first formulate dropout as a tractable approximation of some latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove upper bounds on the loss in accuracy due to expectation-linearization, and describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently.

1 INTRODUCTION

Deep neural networks (DNNs, e.g., LeCun et al., 2015; Schmidhuber, 2015), if trained properly, have been demonstrated to significantly improve the benchmark performances in a wide range of application domains. As neural networks go deeper and deeper, naturally, their model complexity also increases quickly, hence the pressing need to reduce overfitting in training DNNs. A number of techniques have emerged over the years to address this challenge, among which dropout (Hinton et al., 2012; Srivastava, 2013) has stood out for its simplicity and effectiveness. In a nutshell, dropout randomly “drops” neural units during training as a means to prevent feature co-adaptation—a sign of overfitting (Hinton et al., 2012). Simple as it appears to be, dropout has led to several record-breaking performances (Hinton et al., 2012; Ma & Hovy, 2016), and has thus spawned a lot of recent interest in analyzing and justifying dropout from the theoretical perspective, and also in further improving dropout from the algorithmic and practical perspective. In their pioneering work, Hinton et al. (2012) and Srivastava et al. (2014) interpreted dropout as an extreme form of model combination (a.k.a. model ensemble) with extensive parameter/weight sharing, and they proposed to learn the combination through minimizing an appropriate expected loss. Interestingly, they also pointed out that for a single logistic neural unit, the output of dropout is in fact the geometric mean of the outputs of the model ensemble with shared parameters. Subsequently, many theoretical justifications of dropout have been explored, and we can only mention a few here due to space limits. Building on the weight-sharing perspective, Baldi & Sadowski (2013; 2014) analyzed the ensemble averaging property of dropout in deep non-linear logistic networks, and supported the view that dropout is equivalent to applying stochastic gradient descent on some regularized loss function. Wager et al. (2013) treated dropout as an adaptive regularizer for generalized linear models (GLMs). Helmbold & Long (2016) discussed the differences between dropout and traditional weight decay regularization.
In terms of statistical learning theory, Gao & Zhou (2014) studied the Rademacher complexity of different types of dropout, showing that dropout is able to reduce the Rademacher complexity polynomially for shallow neural networks (with one or no hidden layers) and exponentially for deep neural networks. This latter work (Gao & Zhou, 2014) formally demonstrated that dropout, due to its regularizing effect, contributes to reducing the inherent model complexity, in particular the variance component in the generalization error. Seen as a model combination technique, it is intuitive that dropout contributes to reducing the variance of the model performance. Surprisingly, dropout has also been shown to play some role in reducing the model bias. For instance, Jain et al. (2015) studied the ability of dropout training to escape local minima, hence leading to reduced model bias. Other studies (Chen et al., 2014; Helmbold & Long, 2014; Wager et al., 2014) focus on the effect of the dropout noise on models with shallow architectures. We note in passing that there is also some work (Kingma et al., 2015; Gal & Ghahramani, 2015; 2016) trying to understand dropout from the Bayesian perspective. In this work, we first formulate dropout as a tractable approximation of a latent variable model, and give a clean view of weight sharing (\S3). Then, we focus on an inference gap in dropout that has somehow gone under-appreciated: in the inference phase, for computational tractability considerations, the model ensemble generated by dropout is approximated by a single model with scaled weights, resulting in a gap between training and inference, and rendering the many previous theoretical findings inapplicable. In general, this inference gap can be very large and no attempt (to the best of our knowledge) has been made to control it. We make three contributions in bridging this gap: Theoretically, we introduce expectation-linear dropout neural networks, through which we are able to explicitly quantify the inference gap (\S4). In particular, our theoretical results explain why the max-norm constraint on the network weights, a standard practice in training DNNs, can lead to a small inference gap and hence potentially improve performance. Algorithmically, we propose to add a sampled version of the inference gap to regularize the standard dropout training objective (*expectation-linearization*), hence allowing explicit control of the inference gap, and analyze the interaction between expectation-linearization and the model accuracy (\S5). Experimentally, through three benchmark datasets we show that our regularized dropout is not only as simple and efficient as standard dropout but also consistently leads to improved performance (\S6).

2 DROPOUT NEURAL NETWORKS

In this section we set up the notation, review the dropout neural network model, and discuss the inference gap in standard dropout training that we will attempt to study in the rest of the paper.

2.1 DNNs AND NOTATION

Throughout we use uppercase letters for random variables (and occasionally for matrices as well), and lowercase letters for realizations of the corresponding random variables. Let \( X \in \mathcal{X} \) be the input of the neural network, \( Y \in \mathcal{Y} \) be the desired output, and \( D = \{(x_1, y_1), \ldots, (x_N, y_N)\} \) be our training sample, where \( x_i, i = 1, \ldots, N \) (resp. \( y_i \)) are usually i.i.d. samples of \( X \) (resp. \( Y \)).
Let M denote a deep neural network with \( L \) hidden layers, indexed by \( l \in \{1, \ldots, L\} \). Let \( \mathbf{h}^{(l)} \) denote the output vector of layer \( l \). As usual, \( \mathbf{h}^{(0)} = x \) is the input, and \( \mathbf{h}^{(L)} \) is the output of the neural network. Denote \( \theta = \{\theta_l : l = 1, \ldots, L\} \) as the set of parameters in the network M, where \( \theta_l \) assembles the parameters of layer \( l \). With dropout, we need to introduce a set of dropout random variables \( S = \{\Gamma^{(l)} : l = 1, \ldots, L\} \), where \( \Gamma^{(l)} \) is the dropout random variable for layer \( l \). Then the deep neural network M can be described as:
\[ \mathbf{h}^{(l)} = f_l(\mathbf{h}^{(l-1)} \odot \gamma^{(l)}; \theta_l), \quad l = 1, \ldots, L, \] (1)
where \( \odot \) is the element-wise product and \( f_l \) is the transformation function of layer \( l \). For example, if layer \( l \) is a fully connected layer with weight matrix \( W \), bias vector \( b \), and sigmoid activation function \( \sigma(x) = \frac{1}{1 + \exp(-x)} \), then \( f_l(x) = \sigma(Wx + b) \). We will also use \( \mathbf{h}^{(l)}(x, s; \theta) \) to denote the output of layer \( l \) with input \( x \) and dropout value \( s \), under parameter \( \theta \). In the simplest form of dropout, which is also called standard dropout, \( \Gamma^{(l)} \) is a vector of independent Bernoulli random variables, each of which has probability \( p_l \) of being 1 and \( 1 - p_l \) of being 0. This corresponds to retaining each unit in layer \( l \) independently with probability \( p_l \), i.e., dropping it with probability \( 1 - p_l \).

2.2 DROPOUT TRAINING

Standard dropout neural networks can be trained using stochastic gradient descent (SGD), with a sub-network sampled by dropping neural units for each training instance in a mini-batch. The forward and backward passes for that training instance are done only on the sampled sub-network. Intuitively, dropout aims at, simultaneously and jointly, training an ensemble of exponentially many neural networks (one for each configuration of dropped units) while sharing the same weights/parameters. The goal of the stochastic training procedure of dropout can be understood as minimizing an expected loss function, after marginalizing out the dropout variables (Srivastava, 2013; Wang & Manning, 2013). In the context of maximum likelihood estimation, dropout training can be formulated as:
\[ \theta^* = \arg\min_{\theta} \mathbb{E}_{S_D}[-l(D, S_D; \theta)] = \arg\min_{\theta} \mathbb{E}_{S_D}\left[ - \sum_{i=1}^N \log p(y_i|x_i, S_i; \theta) \right], \] (2)
where recall that \( D \) is the training sample, \( S_D = \{S_1, \ldots, S_N\} \) is the dropout variable (one for each training instance), and \( l(D, S_D; \theta) \) is the (conditional) log-likelihood function defined by the conditional distribution \( p(y|x, s; \theta) \) of output \( y \) given input \( x \), under parameter \( \theta \) and dropout variable \( s \). Throughout we use the notation \( \mathbb{E}_Z \) to denote the conditional expectation where all random variables except \( Z \) are conditioned on.
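To fix ideas, the following is a minimal numpy sketch of the forward pass in Eq. (1) with Bernoulli dropout, sampling one sub-network per call as in the training procedure above. The layer sizes, sigmoid activations, and retain probabilities are illustrative assumptions, not a description of our experimental networks.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

sizes = [784, 1024, 1024, 10]      # h^(0) = x, two hidden layers, output
p = [0.8, 0.5, 0.5]                # retain probability p_l for the input of layer l
params = [(rng.normal(scale=0.01, size=(m, k)), np.zeros(m))
          for k, m in zip(sizes[:-1], sizes[1:])]

def forward(x, sample_dropout=True):
    # Eq. (1): h^(l) = f_l(h^(l-1) * gamma^(l); theta_l), with
    # gamma^(l) ~ Bernoulli(p_l) during training (one sub-network per call).
    h = x
    for (W, b), pl in zip(params, p):
        gamma = (rng.random(h.shape) < pl) if sample_dropout else pl  # E[Gamma] = p_l
        h = sigmoid(W @ (h * gamma) + b)
    return h

x = rng.random(784)
print(forward(x, sample_dropout=True)[:3])   # sampled sub-network (training)
print(forward(x, sample_dropout=False)[:3])  # scaled deterministic net (inference)
```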
Dropout has also been shown to work well with regularization, such as L2 weight decay (Tikhonov, 1943), Lasso (Tibshirani, 1996), KL-sparsity (Bradley & Bagnell, 2008; Hinton, 2010), and max-norm regularization (Srebro et al., 2004), among which the max-norm regularization — which constrains the norm of the incoming weight matrix to be bounded by some constant — was found to be especially useful for dropout (Srivastava, 2013; Srivastava et al., 2014).

2.3 DROPOUT INFERENCE AND GAP

As mentioned before, dropout effectively trains an ensemble of neural networks with weight sharing. Consequently, at test time, the outputs of the networks in the ensemble should be averaged to deliver the final prediction. This averaging over exponentially many sub-networks is, however, intractable, and standard dropout typically implements an approximation by introducing a deterministic scaling factor for each layer to replace the random dropout variable:
\[ \mathbb{E}_S[\mathbf{H}^{(L)}(x, S; \theta)] \approx \mathbf{h}^{(L)}(x, \mathbb{E}[S]; \theta), \] (3)
where the right-hand side is the output of a single deterministic neural network whose weights are scaled to match the expected number of active hidden units on the left-hand side. Importantly, the right-hand side can be easily computed since it only involves a single deterministic network. Bulò et al. (2016) combined dropout with knowledge distillation methods (Hinton et al., 2015) to better approximate the averaging process on the left-hand side. However, the quality of the approximation in (3) is largely unknown, and to the best of our knowledge, no attempt has been made to explicitly control this inference gap. The main goal of this work is to explicitly quantify, algorithmically control, and experimentally demonstrate the inference gap in (3), in the hope of eventually improving the generalization performance of DNNs. To this end, in the next section we first present a latent variable model interpretation of dropout, which will greatly facilitate our later theoretical analysis.

3 DROPOUT AS LATENT VARIABLE MODELS

With the end goal of studying the inference gap in (3) in mind, in this section we first formulate dropout neural networks as a latent variable model (LVM) in § 3.1. Then, we point out the relation between the training procedure of LVM and that of standard dropout in § 3.2. The advantage of formulating dropout as an LVM is that we need only deal with a single model (with latent structure), instead of an ensemble of exponentially many different models (with weight sharing). This much simplified view of dropout enables us to understand and analyze the model parameter \( \theta \) in a much more straightforward and intuitive way.

3.1 AN LVM FORMULATION OF DROPOUT

A latent variable model consists of two types of variables: the observed variables that represent the empirical (observed) data and the latent variables that characterize the hidden (unobserved) structure. To formulate dropout as a latent variable model, the input \( x \) and output \( y \) are regarded as observed variables, while the dropout variable \( s \), representing the sub-network structure, is hidden.
Then, upon fixing the input space \( \mathcal{X} \), the output space \( \mathcal{Y} \), and the latent space \( \mathcal{S} \) for dropout variables, the conditional probability of \( y \) given \( x \) under parameter \( \theta \) can be written as
\[ p(y|x; \theta) = \int_{\mathcal{S}} p(y|x, s; \theta)p(s)d\mu(s), \] (4)
where \( p(y|x, s; \theta) \) is the conditional distribution modeled by the neural network with configuration \( s \) (same as in Eq. (2)), \( p(s) \) is the distribution of the dropout variable \( S \) (e.g. Bernoulli), here assumed to be independent of the input \( x \), and \( \mu(s) \) is the base measure on the space \( \mathcal{S} \).

3.2 LVM DROPOUT TRAINING VS. STANDARD DROPOUT TRAINING

Building on the above latent variable model formulation (4) of dropout, we are now ready to point out a simple relation between the training procedure of LVM and that of standard dropout. Given an i.i.d. training sample \( D \), the maximum likelihood estimate for the LVM formulation of dropout in (4) is equivalent to minimizing the following negative log-likelihood function:
\[ \theta^* = \arg\min_{\theta} -l(D; \theta) = \arg\min_{\theta} -\sum_{i=1}^{N} \log p(y_i|x_i; \theta), \] (5)
where \( p(y|x; \theta) \) is given in Eq. (4). Recall the dropout training objective \( \mathrm{E}_{S_D}[-l(D, S_D; \theta)] \) in Eq. (2). We have the following theorem as a simple consequence of Jensen’s inequality (details in Appendix A):

**Theorem 1.** *The expected loss function of standard dropout (Eq. (2)) is an upper bound of the negative log-likelihood of LVM dropout (Eq. (5)):*
\[ -l(D; \theta) \leq \mathrm{E}_{S_D}[-l(D, S_D; \theta)]. \] (6)

Theorem 1, in a rigorous sense, justifies dropout training as a convenient and tractable approximation of the LVM formulation in (4). Indeed, since directly minimizing the marginalized negative log-likelihood in (5) may not be easy, a standard practice is to replace the marginalized (conditional) likelihood \( p(y|x; \theta) \) in (4) with its empirical Monte Carlo average through drawing samples from the dropout variable \( S \). The dropout training objective in (2) corresponds exactly to this Monte Carlo approximation when a single sample \( S_i \) is drawn for each training instance \( (x_i, y_i) \). Importantly, we note that the above LVM formulation involves only a single network parameter \( \theta \), which largely simplifies the picture and facilitates our subsequent analysis.

4 EXPECTATION-LINEAR DROPOUT NEURAL NETWORKS

Building on the latent variable model formulation in § 3, we introduce in this section the notion of expectation-linearity, which essentially measures the inference gap in (3). We then characterize a general class of neural networks that exhibit expectation-linearity, either exactly or approximately over a distribution \( p(x) \) on the input space. We start by defining expectation-linearity in the simplest single-layer neural network; then we extend the notion to general deep networks in a natural way.

**Definition 1** (Expectation-linear Layer). A network layer \( \mathbf{h} = f(x \odot \gamma; \theta) \) is **expectation-linear with respect to a set \( \mathcal{X}' \subseteq \mathcal{X} \)** if for all \( x \in \mathcal{X}' \) we have
\[ \left\| \mathrm{E}[f(x \odot \Gamma; \theta)] - f(x \odot \mathrm{E}[\Gamma]; \theta) \right\|_2 = 0. \] (7)
In this case we say that \( \mathcal{X}' \) is **expectation-linearizable**, and \( \theta \) is **expectation-linearizing w.r.t. \( \mathcal{X}' \)**.
Obviously, the condition in (7) will guarantee no gap in the dropout inference approximation (3)—an admittedly strong condition that we will relax below. Clearly, if \( f \) is an affine function, then we can choose \( \mathcal{X}' = \mathcal{X} \) and expectation-linearity is trivial. Note that expectation-linearity depends on the network parameter \( \theta \) and the dropout distribution \( \Gamma \). Expectation-linearity, as defined in (7), is overly strong: under standard regularity conditions, essentially the transformation function \( f \) has to be affine over the set \( \mathcal{X}' \), ruling out for instance the popular sigmoid or tanh activation functions. Moreover, in practice, downstream uses of DNNs are usually robust to small errors resulting from *approximate* expectation-linearity (hence the empirical success of dropout), so it makes sense to define an inexact extension. We note also that the definition in (7) is *uniform* over the set \( \mathcal{X}' \), while in a statistical setting it is perhaps more meaningful to have expectation-linearity “on average,” since inputs from lower density regions are not going to play a significant role anyway. Taking into account the aforementioned motivations, we arrive at the following inexact extension:

**Definition 2** (Approximately Expectation-linear Layer). A network layer \( \mathbf{h} = f(x \odot \gamma; \theta) \) is *\( \delta \)-approximately expectation-linear with respect to* a distribution \( p(x) \) over \( \mathcal{X} \) if
\[ \mathrm{E}_X \left[ \| \mathrm{E}_{\Gamma} \left[ f(X \odot \Gamma; \theta) | X \right] - f(X \odot \mathrm{E}[\Gamma]; \theta) \|_2 \right] < \delta. \] (8)
In this case we say that \( p(x) \) is *\( \delta \)-approximately expectation-linearizable*, and \( \theta \) is *\( \delta \)-approximately expectation-linearizing*.

To appreciate the power of cutting some slack from exact expectation-linearity, we remark that even non-affine activation functions often have approximately linear regions. For example, the logistic function, a commonly used non-linear activation function in DNNs, is approximately linear around the origin. Naturally, we can ask whether it is sufficient for a target distribution \( p(x) \) to be well-approximated by an approximately expectation-linearizable one. We begin by providing an appropriate measurement of the quality of this approximation.

**Definition 3** (Closeness, (Andreas et al., 2015)). A distribution \( p(x) \) is *\( C \)-close* to a set \( \mathcal{X}' \subseteq \mathcal{X} \) if
\[ \mathrm{E} \left[ \inf_{x' \in \mathcal{X}'} \sup_{\gamma \in \mathcal{S}} \| X \odot \gamma - x' \odot \gamma \|_2 \right] \leq C, \] (9)
where recall that \( \mathcal{S} \) is the (bounded) space that the dropout variable lives in.

Intuitively, \( p(x) \) is \( C \)-close to a set \( \mathcal{X}' \) if a random sample from \( p \) is no more than a distance \( C \) from \( \mathcal{X}' \) in expectation and under the worst “dropout perturbation”. For example, a standard normal distribution is close to an interval centered at the origin (\( [-\alpha, \alpha] \)) with some constant \( C \). Our definition of closeness is similar to that in Andreas et al. (2015), who used this notion to analyze self-normalized log-linear models.
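Definition 2 also suggests a direct numerical check: estimate \( \mathrm{E}_\Gamma[f(X \odot \Gamma; \theta)] \) by Monte Carlo and compare it with \( f(X \odot \mathrm{E}[\Gamma]; \theta) \). Below is a minimal sketch for a single sigmoid layer; the weights, input distribution, and sample counts are illustrative assumptions, and the Monte Carlo and empirical averages only approximate the expectations in the definition.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

d_in, d_out, p = 20, 10, 0.5
W = rng.normal(scale=0.3, size=(d_out, d_in))

def gap(x, n_mc=2000):
    # || E_Gamma[f(x * Gamma)] - f(x * E[Gamma]) ||_2 for f = sigmoid(W .)
    masks = rng.random((n_mc, d_in)) < p
    mc = sigmoid((x * masks) @ W.T).mean(axis=0)   # Monte Carlo ensemble average
    det = sigmoid(W @ (p * x))                     # weight-scaled deterministic pass
    return np.linalg.norm(mc - det)

# delta of Definition 2, with E_X replaced by an empirical average
xs = rng.normal(size=(200, d_in))
print("empirical delta ~", np.mean([gap(x) for x in xs]))
```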
We are now ready to state our first major result, which quantifies the approximate expectation-linearity of a single-layer network (proof in Appendix B.1):

**Theorem 2.** *Given a network layer \( \mathbf{h} = f(x \odot \gamma; \theta) \), where \( \theta \) is expectation-linearizing w.r.t. \( \mathcal{X}' \subseteq \mathcal{X} \). Suppose \( p(x) \) is \( C \)-close to \( \mathcal{X}' \) and for all \( x \in \mathcal{X} \), \( \| \nabla_x f(x) \|_{op} \leq B \), where \( \| \cdot \|_{op} \) is the usual operator norm. Then \( p(x) \) is \( 2BC \)-approximately expectation-linearizable.*

Roughly, Theorem 2 states that input distributions \( p(x) \) that place most of their mass on regions close to expectation-linearizable sets are approximately expectation-linearizable on a similar scale. The bounded operator norm assumption on the derivative \( \nabla f \) is satisfied by most commonly used layers. For example, for a fully connected layer with weight matrix \( W \), bias vector \( b \), and activation function \( \sigma \), \( \| \nabla f(\cdot) \|_{op} = |\sigma'(\cdot)| \cdot \| W \|_{op} \) is bounded by \( \| W \|_{op} \) times the supremum of \( |\sigma'(\cdot)| \) (1/4 when \( \sigma \) is the sigmoid and 1 when \( \sigma \) is tanh). Next, we extend the notion of approximate expectation-linearity to deep dropout neural networks.

**Definition 4** (Approximately Expectation-linear Network). A deep neural network with \( L \) layers (cf. Eq. (1)) is *\( \delta \)-approximately expectation-linear with respect to* \( p(x) \) over \( \mathcal{X} \) if
\[ \mathrm{E}_X \left[ \| \mathrm{E}_S \left[ \mathbf{H}^{(L)}(X, S; \theta) | X \right] - \mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta) \|_2 \right] < \delta, \] (10)
where \( \mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta) \) is the output of the deterministic neural network in standard dropout.

Lastly, we relate the level of approximate expectation-linearity of a deep neural network to the level of approximate expectation-linearity of each of its layers:

Theorem 3. Given an \( L \)-layer neural network as in Eq. (1), suppose that each layer \( l \in \{1, \ldots, L\} \) is \( \delta \)-approximately expectation-linear w.r.t. \( p(\mathbf{h}^{(l)}) \), \( \mathrm{E}[\Gamma^{(l)}] \leq \gamma \), \( \sup_x \| \nabla f_l(x) \|_{op} \leq B \), and \( \mathrm{E}\left[ \mathrm{Var}[\mathbf{H}^{(l)}|X] \right] \leq \sigma^2 \). Then the network is \( \Delta \)-approximately expectation-linear with
\[ \Delta = (B \gamma)^{L-1} \delta + (\delta + B \gamma \sigma) \left( \frac{1 - (B \gamma)^{L-1}}{1 - B \gamma} \right). \] (11)

From Theorem 3 (proof in Appendix B.2) we observe that the level of approximate expectation-linearity of the network mainly depends on four factors: the level of approximate expectation-linearity of each layer (\( \delta \)), the expected variance of each layer (\( \sigma \)), the operator norm of the derivative of each layer’s transformation function (\( B \)), and the mean of each layer’s dropout variable (\( \gamma \)). In practice, \( \gamma \) is often a constant less than or equal to 1. For example, if \( \Gamma \sim \mathrm{Bernoulli}(p) \), then \( \gamma = p \). According to the theorem, the operator norm of the derivative of each layer’s transformation function is an important factor in the level of approximate expectation-linearity: the smaller the operator norm, the better the approximation. Interestingly, the operator norm of a layer often depends on the norm of the layer’s weights (e.g. for fully connected layers).
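To get a quantitative feel for the bound, \( \Delta \) in Theorem 3 can simply be evaluated for a few settings; a small sketch follows (the values of \( B \), \( \gamma \), \( \sigma \), \( \delta \), and the depths are arbitrary illustrations, not measurements from any trained network):

```python
def Delta(L, B, gamma, sigma, delta):
    # Theorem 3: Delta = (B*gamma)^(L-1) * delta
    #            + (delta + B*gamma*sigma) * (1 - (B*gamma)^(L-1)) / (1 - B*gamma)
    a = B * gamma
    if a == 1.0:                      # limit of the geometric sum: linear growth in L
        return delta + (delta + B * gamma * sigma) * (L - 1)
    return a ** (L - 1) * delta + (delta + B * gamma * sigma) * (1 - a ** (L - 1)) / (1 - a)

for L in [2, 5, 10, 50]:
    # With B*gamma < 1 the bound saturates as the network gets deeper.
    print(L, round(Delta(L, B=1.0, gamma=0.8, sigma=0.05, delta=0.01), 4))
```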
As Theorem 3 suggests, adding max-norm constraints to regularize dropout neural networks can lead to better approximate expectation-linearity, hence a smaller inference gap and often improved model performance. It should also be noted that when \( B \gamma < 1 \), the approximation error \( \Delta \) tends to a constant as the network becomes deeper. When \( B \gamma = 1 \), \( \Delta \) grows linearly with \( L \), and when \( B \gamma > 1 \), the growth of \( \Delta \) becomes exponential. Thus, it is essential to keep \( B \gamma < 1 \) to achieve a good approximation, particularly for deep neural networks.

5 EXPECTATION-LINEAR REGULARIZED DROPOUT

In the previous section we have managed to bound the approximate expectation-linearity, hence the inference gap in (3), of dropout neural networks. In this section, we first prove a uniform deviation bound of the sampled approximate expectation-linearity measure from its mean, which motivates adding the sampled (hence computable) expectation-linearity measure as a regularization scheme to standard dropout, with the goal of explicitly controlling the inference gap of the learned parameter and hence potentially improving the performance. Then we give upper bounds on the loss in accuracy due to expectation-linearization, and describe classes of distributions that expectation-linearize easily.

5.1 A UNIFORM DEVIATION BOUND FOR THE SAMPLED EXPECTATION-LINEAR MEASURE

We now show that an expectation-linear network can be found by expectation-linearizing the network on the training sample. To this end, we prove a uniform deviation bound between the empirical expectation-linearization measure computed from i.i.d. samples (Eq. (12)) and its mean (Eq. (13)).

Theorem 4. Let \( \mathcal{H} = \{ \mathbf{h}^{(L)}(x, s; \theta) : \theta \in \Theta \} \) denote a space of \( L \)-layer dropout neural networks indexed by \( \theta \), where \( \mathbf{h}^{(L)} : \mathcal{X} \times \mathcal{S} \to \mathcal{R} \) and \( \Theta \) is the space that \( \theta \) lives in. Suppose that the neural networks in \( \mathcal{H} \) satisfy the constraints: 1) \( \forall x \in \mathcal{X}, \|x\|_2 \leq \alpha \); 2) \( \forall l \in \{1, \ldots, L\}, \mathrm{E}[\Gamma^{(l)}] \leq \gamma \) and \( \| \nabla f_l \|_{op} \leq B \); 3) \( \| \mathbf{h}^{(L)} \|_2 \leq \beta \). Denote the empirical expectation-linearization measure and its mean as:
\[ \hat{\Delta} = \frac{1}{n} \sum_{i=1}^n \| \mathrm{E}_{S_i} [\mathbf{H}^{(L)}(X_i, S_i; \theta)] - \mathbf{h}^{(L)}(X_i, \mathrm{E}[S_i]; \theta) \|_2, \] (12)
\[ \Delta = \mathrm{E}_X \left[ \| \mathrm{E}_{S} [\mathbf{H}^{(L)}(X, S; \theta)] - \mathbf{h}^{(L)}(X, \mathrm{E}[S]; \theta) \|_2 \right]. \] (13)
Then, with probability at least \( 1 - \nu \), we have
\[ \sup_{\theta \in \Theta} | \Delta - \hat{\Delta} | < \frac{2 \alpha B^L (\gamma^{L/2} + 1)}{\sqrt{n}} + \beta \sqrt{\frac{\log(1/\nu)}{n}}. \] (14)

From Theorem 4 (proof in Appendix C.1) we observe that the deviation bound decreases exponentially with the number of layers \( L \) when the operator norm bound \( B \) of each layer’s transformation function is less than 1 (and, on the contrary, grows when \( B \geq 1 \)). Importantly, the square root dependence on the number of samples (\( n \)) is standard and cannot be improved without significantly stronger assumptions. It should be noted that Theorem 4 per se does not imply anything about the relation between expectation-linearization and the model accuracy (i.e. how well the expectation-linearized neural network actually performs at modeling the data).
A formal study of this relation is provided in § 5.3. In addition, we provide some experimental evidence in § 6 on how improved approximate expectation-linearity (equivalently, a smaller inference gap) does lead to better empirical performance.

5.2 EXPECTATION-LINEARIZATION AS REGULARIZATION

The uniform deviation bound in Theorem 4 motivates the possibility of obtaining approximately expectation-linear dropout neural networks by adding the empirical measure (12) as a regularization scheme to the standard dropout training objective, as follows:
\[ loss(D; \theta) = -l(D; \theta) + \lambda V(D; \theta), \] (15)
where \(-l(D; \theta)\) is the negative log-likelihood defined in Eq. (5), \(\lambda > 0\) is a regularization constant, and \(V(D; \theta)\) measures the level of approximate expectation-linearity:
\[ V(D; \theta) = \frac{1}{N} \sum_{i=1}^N \| \mathbb{E}_{S_i} [\mathbf{H}^{(L)}(x_i, S_i; \theta)] - \mathbf{h}^{(L)}(x_i, \mathbb{E}[S_i]; \theta) \|_2^2. \] (16)
To solve (15), we can minimize \(loss(D; \theta)\) via stochastic gradient descent as in standard dropout, and approximate \(V(D; \theta)\) by Monte Carlo:
\[ V(D; \theta) \approx \frac{1}{N} \sum_{i=1}^N \| \mathbf{h}^{(L)}(x_i, s_i; \theta) - \mathbf{h}^{(L)}(x_i, \mathbb{E}[S_i]; \theta) \|_2^2, \] (17)
where \(s_i\) is the same dropout sample as in \(l(D; \theta)\) for each training instance in a mini-batch. Thus, the only additional computational cost comes from the deterministic term \(\mathbf{h}^{(L)}(x_i, \mathbb{E}[S_i]; \theta)\). Overall, our regularized dropout (15), in its Monte Carlo approximate form, is as simple and efficient as standard dropout.

5.3 ON THE ACCURACY OF EXPECTATION-LINEARIZED MODELS

So far our discussion has concentrated on the problem of finding expectation-linear neural network models, without any concern for how well they actually perform at modeling the data. In this section, we characterize the trade-off between maximizing “data likelihood” and satisfying an expectation-linearization constraint. To achieve the characterization, we measure the *likelihood gap* between the classical maximum likelihood estimator (MLE) and the MLE subject to an expectation-linearization constraint. Formally, given training data \(D = \{(x_1, y_1), \ldots, (x_n, y_n)\}\), we define
\[ \hat{\theta} = \underset{\theta \in \Theta}{\operatorname{argmin}} -l(D; \theta) \] (18)
\[ \hat{\theta}_\delta = \underset{\theta \in \Theta, V(D; \theta) \leq \delta}{\operatorname{argmin}} -l(D; \theta) \] (19)
where \(-l(D; \theta)\) is the negative log-likelihood defined in Eq. (5), and \(V(D; \theta)\) is the level of approximate expectation-linearity in Eq. (16). We would like to control the loss of model accuracy by obtaining a bound on the *likelihood gap*, defined as:
\[ \Delta_l(\hat{\theta}, \hat{\theta}_\delta) = \frac{1}{n} (l(D; \hat{\theta}) - l(D; \hat{\theta}_\delta)) \] (20)
In the following, we focus on neural networks with a *softmax* output layer for classification tasks:
\[ p(y|x, s; \theta) = \mathbf{h}_y^{(L)}(x, s; \theta) = f_L(\mathbf{h}^{(L-1)}(x, s); \eta) = \frac{e^{\eta_y^T \mathbf{h}^{(L-1)}(x, s)}}{\sum_{y' \in \mathcal{Y}} e^{\eta_{y'}^T \mathbf{h}^{(L-1)}(x, s)}} \] (21)
where \(\theta = \{\theta_1, \ldots, \theta_{L-1}, \eta\}\), \(\mathcal{Y} = \{1, \ldots, k\}\) and \(\eta = \{\eta_y : y \in \mathcal{Y}\}\). We claim:

Theorem 5.
Theorem 5. Consider an L-layer neural network \( \mathbf{h}^{(L)}(x, s; \theta) \) with the softmax output layer in (21), with parameter \( \theta \in \Theta \), dropout variable \( s \in \mathcal{S} \), input \( x \in \mathcal{X} \) and target \( y \in \mathcal{Y} \). Suppose that for every \( x \) and \( s \), \( p(y|x, s; \hat{\theta}) \) makes a unique best prediction; that is, for each \( x \in \mathcal{X}, s \in \mathcal{S} \), there exists a unique \( y^* \in \mathcal{Y} \) such that \( \forall y \neq y^*, \hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s) < \hat{\eta}_{y^*}^T \mathbf{h}^{(L-1)}(x, s) \). Suppose additionally that \( \forall x, s, \| \mathbf{h}^{(L-1)}(x, s; \hat{\theta}) \| \leq \beta \), and \( \forall y, p(y|x; \hat{\theta}) > 0 \). Then
\[
\Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq c_1 \beta^2 \left( \| \hat{\eta} \|_2 - \frac{\delta}{4\beta} \right)^2 e^{-c_2 \delta / (4\beta)}
\]
where \( c_1 \) and \( c_2 \) are distribution-dependent constants. From Theorem 5 (proof in Appendix C.2) we observe that, at one extreme, distributions close to deterministic can be expectation-linearized with little loss of likelihood. What about the other extreme, distributions as close to the uniform distribution as possible? With suitable assumptions about the form of \( p(y|x, s; \hat{\theta}) \) and \( p(y|x; \hat{\theta}) \), we can obtain an accuracy-loss bound for distributions that are close to uniform: Theorem 6. Suppose that \( \forall x, s, \| \mathbf{h}^{(L-1)}(x, s; \hat{\theta}) \| \leq \beta \). Additionally, suppose that for each \( (x_i, y_i) \in D \) and \( s \in \mathcal{S} \),
\[
\log \frac{1}{k} \leq \log p(y_i|x_i, s; \hat{\theta}) \leq \frac{1}{k} \sum_{y \in \mathcal{Y}} \log p(y|x_i, s; \hat{\theta}).
\]
Then asymptotically as \( n \to \infty \):
\[
\Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \left( 1 - \frac{\delta}{4\beta \| \hat{\eta} \|_2 } \right) \mathrm{E} [ \mathrm{KL} (p(\cdot|X; \hat{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y})) ].
\]
Theorem 6 (proof in Appendix C.3) indicates that uniform distributions are also an easy class for expectation-linearization. The next question is whether there exist classes of conditional distributions \( p(y|x) \) that are provably hard to expectation-linearize. This remains an open problem and an interesting direction for future work. 6 EXPERIMENTS In this section, we evaluate the empirical performance of the proposed regularized dropout in (15) on a variety of network architectures for classification tasks on three benchmark datasets: MNIST, CIFAR-10 and CIFAR-100. We applied the same data preprocessing procedure as in Srivastava et al. (2014). To make a thorough comparison, and to provide experimental evidence on how expectation-linearization interacts with the predictive power of the learned model, we also run experiments with Monte Carlo (MC) dropout, which approximates the final prediction (the left-hand side of (3)) via Monte Carlo sampling, both with and without the proposed regularizer. For MC dropout, we average \( m = 100 \) predictions obtained with randomly sampled dropout configurations. The network architectures and hyper-parameters for each experimental setup are the same as those in Srivastava et al. (2014), unless explicitly stated otherwise. Following previous work, for each dataset we held out 10,000 random training images for validation to tune the hyper-parameters, including \( \lambda \) in Eq. (15).
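Concretely, each training step optimizes Eq. (15) using the one-sample Monte Carlo form of Eq. (17), with the dropout sample shared between the likelihood term and the penalty. Below is a minimal sketch reusing the illustrative `DropoutMLP` interface from the earlier snippet; it is an assumption for exposition, not the paper's code.

```python
import torch
import torch.nn.functional as F

def regularized_dropout_loss(model, x, y, lam):
    """Monte Carlo form of Eq. (15): dropout NLL plus lam times the
    Eq. (17) penalty. A single sampled pass supplies both terms, so the
    dropout sample s_i is shared, as in the text."""
    p_sampled = model(x, sample_masks=True)    # h(x_i, s_i)
    p_det = model(x, sample_masks=False)       # h(x_i, E[S_i])
    nll = F.nll_loss(torch.log(p_sampled), y)  # the -l(D; theta) term
    penalty = (p_sampled - p_det).pow(2).sum(dim=1).mean()
    return nll + lam * penalty

# One SGD step (optimizer settings are placeholders):
# opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# loss = regularized_dropout_loss(model, x_batch, y_batch, lam=1.0)
# opt.zero_grad(); loss.backward(); opt.step()
```

The only overhead relative to standard dropout is the extra deterministic forward pass \( \mathbf{h}^{(L)}(x_i, \mathbb{E}[S_i]; \theta) \), matching the cost analysis in § 5.2. (In practice one would compute the log-probabilities with a log-softmax for numerical stability.)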
When the hyper-parameters are fixed, we train the final models on all the training data, including the validation data. A more detailed description of the experiments is provided in Appendix D. For each experiment, we report the mean test errors with corresponding standard deviations over 5 repetitions. 6.1 MNIST The MNIST dataset (LeCun et al., 1998) consists of 70,000 handwritten digit images of size 28×28, of which 60,000 are used for training and the rest for testing. The task is to classify the images into 10 digit classes. For comparison, we train 6 neural networks with different architectures. The experimental results are shown in Table 1. 6.2 CIFAR-10 AND CIFAR-100 The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) consist of 60,000 color images of size 32 × 32, drawn from 10 and 100 categories, respectively. 50,000 images are used for training and the rest for testing. Table 1: Comparison of classification error percentage on test data with and without using expectation-linearization on MNIST, CIFAR-10 and CIFAR-100, under different network architectures (with standard deviations over 5 repetitions). <table> <tr> <th rowspan="2">Data</th> <th rowspan="2">Architecture</th> <th colspan="2">w.o. EL</th> <th colspan="2">w. EL</th> </tr> <tr> <th>Standard</th> <th>MC</th> <th>Standard</th> <th>MC</th> </tr> <tr> <td rowspan="6">MNIST</td> <td>3 dense,1024,logistic</td> <td>1.23±0.03</td> <td>1.06±0.02</td> <td>1.07±0.02</td> <td>1.06±0.03</td> </tr> <tr> <td>3 dense,1024,relu</td> <td>1.19±0.02</td> <td>1.04±0.02</td> <td>1.03±0.02</td> <td>1.05±0.03</td> </tr> <tr> <td>3 dense,1024,relu+max-norm</td> <td>1.05±0.03</td> <td>1.02±0.02</td> <td>0.98±0.03</td> <td>1.02±0.02</td> </tr> <tr> <td>3 dense,2048,relu+max-norm</td> <td>1.07±0.02</td> <td>1.00±0.02</td> <td>0.94±0.02</td> <td>0.97±0.03</td> </tr> <tr> <td>2 dense,4096,relu+max-norm</td> <td>1.03±0.02</td> <td>0.92±0.03</td> <td>0.90±0.02</td> <td>0.93±0.02</td> </tr> <tr> <td>2 dense,8192,relu+max-norm</td> <td>0.99±0.02</td> <td>0.96±0.02</td> <td>0.87±0.02</td> <td>0.92±0.03</td> </tr> <tr> <td>CIFAR-10</td> <td>3 conv+2 dense,relu+max-norm</td> <td>12.82±0.10</td> <td>12.16±0.12</td> <td>12.20±0.14</td> <td>12.21±0.15</td> </tr> <tr> <td>CIFAR-100</td> <td>3 conv+2 dense,relu+max-norm</td> <td>37.22±0.22</td> <td>36.01±0.21</td> <td>36.25±0.12</td> <td>36.10±0.18</td> </tr> </table> Figure 1: Error rate and empirical expectation-linearization risk relative to \( \lambda \) (three line plots: MNIST, CIFAR-10 and CIFAR-100). The neural network architecture we used for these two datasets has 3 convolutional layers followed by two fully-connected (dense) hidden layers (again, the same as in Srivastava et al. (2014)). The experimental results are also recorded in Table 1. From Table 1 we can see that on the MNIST data, dropout training with expectation-linearization outperforms standard dropout on all 6 neural architectures. On the CIFAR data, expectation-linearization reduces the error rate from 12.82% to 12.20% on CIFAR-10, a 0.62% improvement. For CIFAR-100, the error rate improves by 0.97%, from 37.22% to 36.25%. From the results we see that, with or without expectation-linearization, the MC dropout networks achieve similar results.
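The two inference modes compared in Table 1 differ only in how dropout is treated at test time: standard dropout performs a single mean-mask pass (the right-hand side of (3)), while MC dropout averages \( m \) sampled passes (approximating the left-hand side). A minimal sketch on the same illustrative `DropoutMLP` interface:

```python
import torch

def predict(model, x, mode="standard", m=100):
    """Standard dropout inference (one mean-mask pass) versus MC dropout
    (average of m sampled passes), as compared in Table 1."""
    with torch.no_grad():
        if mode == "standard":
            return model(x, sample_masks=False)
        samples = [model(x, sample_masks=True) for _ in range(m)]
        return torch.stack(samples).mean(dim=0)

# Class predictions in either mode:
# predict(model, x_test, mode="mc", m=100).argmax(dim=1)
```

The factor-of-\( m \) cost of the MC mode is exactly the inference-time overhead discussed below.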
The similarity of the MC results with and without the regularizer suggests that enforcing expectation-linearity does not significantly degrade the predictive power of the learned models. Moreover, it is interesting to see that, with the regularization, standard dropout networks achieve even better accuracy than MC dropout on MNIST. This may be because, with expectation-linearization, standard dropout inference approximates the final prediction better than MC dropout does with (only) 100 samples. On the CIFAR datasets, MC dropout networks achieve better accuracy than the regularized ones; but MC dropout requires much more inference time than standard dropout (MC dropout with \( m \) samples requires about \( m \) times the inference time of standard dropout). 6.3 Effect of the Regularization Constant \( \lambda \) In this section, we explore the effect of varying the expectation-linearization rate \( \lambda \). We train the network architectures in Table 1 with \( \lambda \) ranging from 0.1 to 10.0. Figure 1 shows the test errors obtained as a function of \( \lambda \) on the three datasets. In addition, the middle and right panels of Figure 1 show the empirical expectation-linearization risk \( \hat{\Delta} \) of Eq. (12) for varying \( \lambda \) on CIFAR-10 and CIFAR-100, where \( \hat{\Delta} \) is computed by Monte Carlo sampling with 100 independent samples. From Figure 1 we can see that as \( \lambda \) increases, better expectation-linearity is achieved (i.e., \( \hat{\Delta} \) decreases). The model accuracy, however, does not keep improving with increasing \( \lambda \), showing that in practice the trade-off between model expectation-linearity and accuracy must be taken into account. Table 2: Comparison of test errors using standard dropout, Monte Carlo dropout, standard dropout with our proposed expectation-linearization, and the recently proposed dropout distillation, on CIFAR-10 and CIFAR-100 under AllConv (with standard deviations over 5 repetitions). <table> <tr> <th>Data</th> <th>Network</th> <th>Standard</th> <th>MC</th> <th>w. EL</th> <th>Distillation</th> </tr> <tr> <td>CIFAR-10</td> <td>AllConv</td> <td>11.18±0.11</td> <td>10.58±0.21</td> <td>10.86±0.08</td> <td>10.81±0.14</td> </tr> <tr> <td>CIFAR-100</td> <td>AllConv</td> <td>35.50±0.23</td> <td>34.43±0.25</td> <td>35.10±0.13</td> <td>35.07±0.20</td> </tr> </table> 6.4 COMPARISON WITH DROPOUT DISTILLATION To make a thorough empirical comparison with the recently proposed dropout distillation method (Bulò et al., 2016), we also evaluate our regularization method on the CIFAR-10 and CIFAR-100 datasets with the All Convolutional Network (Springenberg et al., 2014) (AllConv). To facilitate comparison, we adopt the originally reported hyper-parameters and the same training setup. Table 2 compares the classification error percentages on test data under AllConv using standard dropout, Monte Carlo dropout, standard dropout with our proposed expectation-linearization, and dropout distillation on CIFAR-10 and CIFAR-100 ¹. According to Table 2, our proposed expectation-linear regularization achieves performance comparable to dropout distillation. 7 CONCLUSIONS In this work, we attempted to establish a theoretical basis for the understanding of dropout, motivated by controlling the gap between dropout’s training and inference phases.
Through formulating dropout as a latent variable model and introducing the notion of (approximate) expectation-linearity, we formally studied the inference gap of dropout, and introduced an empirical measure of it as a regularization scheme to explicitly control the gap. Experiments on three benchmark datasets demonstrate that reducing the inference gap can indeed improve the end performance. In the future, we intend to formally relate the inference gap to the generalization error of the underlying network, hence providing further justification for regularized dropout. ACKNOWLEDGEMENTS This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. APPENDIX: DROPOUT WITH EXPECTATION-LINEAR REGULARIZATION A LVM Dropout Training vs. Standard Dropout Training Proof of Theorem 1 Proof.
\[
\mathrm{E}_{S_D}[l(D, S_D; \theta)] = \int_S \prod_{i=1}^N p(s_i) \left( \sum_{i=1}^N \log p(y_i|x_i, s_i; \theta) \right) d\mu(s_1) \ldots d\mu(s_N) = \sum_{i=1}^N \int_S p(s_i) \log p(y_i|x_i, s_i; \theta) d\mu(s_i)
\]
Because \( \log(\cdot) \) is a concave function, Jensen’s inequality gives
\[
\int_S p(s) \log p(y|x, s; \theta) d\mu(s) \leq \log \int_S p(s)p(y|x, s; \theta) d\mu(s).
\]
Thus
\[
\mathrm{E}_{S_D}[-l(D, S_D; \theta)] \geq -\sum_{i=1}^N \log \int_S p(s_i)p(y_i|x_i, s_i; \theta) d\mu(s_i) = -l(D; \theta).
\]
□ B EXPECTATION-LINEAR DROPOUT NEURAL NETWORKS B.1 Proof of Theorem 2 Proof. Let \( \gamma^* = \mathrm{E}[\Gamma] \), and
\[
A \triangleq \{ x : \| \mathrm{E}[f(x \odot \Gamma; \theta)] - f(x \odot \gamma^*; \theta) \|_2 = 0 \}.
\]
Let \( X^* = \underset{x \in A}{\arg\min} \sup_{\gamma \in \mathcal{S}} \| X \odot \gamma - x \odot \gamma \|_2 \), and \( X^- = X - X^* \). Then,
\[
X \odot \gamma = X^* \odot \gamma + X^- \odot \gamma.
\]
In the following, we omit the parameter \( \theta \) for convenience. Moreover, we denote
\[
\mathrm{E}_\Gamma[f(X \odot \Gamma; \theta)] \triangleq \mathrm{E}[f(X \odot \Gamma; \theta)|X].
\]
By Taylor expansion, there exist some \( X', X'' \in \mathcal{X} \) such that
\[
\begin{align*}
f(X \odot \Gamma) &= f(X^* \odot \Gamma) + f'(X' \odot \Gamma)(X^- \odot \Gamma) \\
f(X \odot \gamma^*) &= f(X^* \odot \gamma^*) + f'(X'' \odot \gamma^*)(X^- \odot \gamma^*)
\end{align*}
\]
where we denote \( f'(x) = (\nabla_x f(x))^T \). Then,
\[
\begin{align*}
&\mathrm{E}_\Gamma[f(X \odot \Gamma) - f(X \odot \gamma^*)] \\
=& \mathrm{E}_\Gamma[f(X^* \odot \Gamma + X^- \odot \Gamma) - f(X^* \odot \gamma^* + X^- \odot \gamma^*)] \\
=& \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*) + f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^- \odot \gamma^*)] \\
=& \mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*)] + \mathrm{E}_\Gamma[f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^- \odot \gamma^*)]
\end{align*}
\]
Since \( X^* \in A \), we have
\[
\mathrm{E}_\Gamma[f(X^* \odot \Gamma) - f(X^* \odot \gamma^*)] = 0.
\]
Then,
\[
\begin{align*}
&\mathrm{E}_\Gamma[f(X \odot \Gamma) - f(X \odot \gamma^*)] \\
=& \mathrm{E}_\Gamma[f'(X' \odot \Gamma)(X^- \odot \Gamma) - f'(X'' \odot \gamma^*)(X^- \odot \gamma^*)] \\
=& \mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] + \mathrm{E}_\Gamma[f'(X'' \odot \gamma^*)(X^- \odot \Gamma - X^- \odot \gamma^*)] \\
=& \mathrm{E}_\Gamma[(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)]
\end{align*}
\]
Then,
\[
\| \mathbb{E}_\Gamma [f(X \odot \Gamma)] - f(X \odot \gamma^*) \|_2 = \| \mathbb{E}_\Gamma [(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] \|_2.
\]
Since for any \( \gamma \in \mathcal{S} \), \( \| X^- \odot \gamma \|_2 \leq \sup_{\gamma' \in \mathcal{S}} \| X^- \odot \gamma' \|_2 = \inf_{x \in A} \sup_{\gamma' \in \mathcal{S}} \| X \odot \gamma' - x \odot \gamma' \|_2 \), and from Jensen's inequality and the property of the operator norm,
\[
\| \mathbb{E}_\Gamma [(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] \|_2 \leq \mathbb{E}_\Gamma \left[ \| f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*) \|_{op} \| X^- \odot \Gamma \|_2 \right] \leq 2B \mathbb{E}_\Gamma \left[ \| X^- \odot \Gamma \|_2 \right] \leq 2B \inf_{x \in A} \sup_{\gamma \in \mathcal{S}} \| X \odot \gamma - x \odot \gamma \|_2.
\]
Finally, we have
\[
\mathbb{E}_X \left[ \| \mathbb{E}_\Gamma [(f'(X' \odot \Gamma) - f'(X'' \odot \gamma^*))(X^- \odot \Gamma)] \|_2 \right] \leq 2B \mathbb{E} \left[ \inf_{x \in A} \sup_{\gamma \in \mathcal{S}} \| X \odot \gamma - x \odot \gamma \|_2 \right] \leq 2BC.
\]
\hfill \qed B.2 Proof of Theorem 3 Proof. We proceed by induction on the number of layers \( L \). As before, we omit the parameter \( \theta \). Base case: when \( L = 1 \), the statement is obviously true. Inductive step: suppose the statement is true for neural networks with \( L \) layers; we prove it for \( L + 1 \). From the inductive hypothesis, we have
\[
\mathbb{E}_X \left[ \| \mathbb{E}_{S_L} [\mathbf{H}^{(L)}(X, S_L)] - \mathbf{h}^{(L)}(X, \mathbb{E}[S_L]) \|_2 \right] \leq \Delta_L
\]
where \( S_L = \{\Gamma^{(1)}, \ldots, \Gamma^{(L)}\} \) denotes the dropout random variables for the first \( L \) layers, and
\[
\Delta_L = (B\gamma)^{L-1}\delta + (\delta + B\gamma\sigma) \left( \frac{1 - (B\gamma)^{L-1}}{1 - B\gamma} \right).
\]
In addition, since the \( (L+1) \)-th layer is \( \delta \)-approximately expectation-linear, we have:
\[
\mathbb{E}_{\mathbf{H}^{(L)}} \left[ \| \mathbb{E}_{\Gamma^{(L+1)}} [f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})] - f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \|_2 \right] \leq \delta.
\]
Let \( \mathbb{E}[\Gamma^{(l)}] = \gamma^{(l)}, \forall l \in \{1, \ldots, L + 1\} \), and let \( \mathbf{H}^{(l)} \) and \( \mathbf{h}^{(l)} \) be short for \( \mathbf{H}^{(l)}(X, S_l) \) and \( \mathbf{h}^{(l)}(X, \mathbb{E}(S_l)) \), respectively, when there is no ambiguity. Moreover, we denote
\[
\mathbb{E}_S [\mathbf{H}^{(L)}(X, S; \theta)] = \mathbb{E}_S [\mathbf{H}^{(L)}(X, S; \theta)|X]
\]
for convenience. Then,
\[
\begin{align*}
&\mathbb{E}_X \left[ \| \mathbb{E}_{S_{L+1}} [\mathbf{H}^{(L+1)}] - \mathbf{h}^{(L+1)} \|_2 \right] \\
&= \mathbb{E}_X \Big[ \big\| \mathbb{E}_{S_L} \big[ \mathbb{E}_{\Gamma^{(L+1)}} [f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})] - f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big]
\\ &\quad + \mathbb{E}_{S_L} \big[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big] - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \big\|_2 \Big] \\
&\leq \mathbb{E}_X \Big[ \big\| \mathbb{E}_{S_L} \big[ \mathbb{E}_{\Gamma^{(L+1)}} [f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})] - f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big] \big\|_2 \Big] \\
&\quad + \mathbb{E}_X \Big[ \big\| \mathbb{E}_{S_L} \big[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big] - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \big\|_2 \Big]
\end{align*}
\]
From Eq. 2 and Jensen's inequality, we have
\[
\mathbb{E}_X \Big[ \big\| \mathbb{E}_{S_L} \big[ \mathbb{E}_{\Gamma^{(L+1)}} [f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})] - f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big] \big\|_2 \Big] \leq \mathbb{E}_{\mathbf{H}^{(L)}} \Big[ \big\| \mathbb{E}_{\Gamma^{(L+1)}} [f_{L+1}(\mathbf{H}^{(L)} \odot \Gamma^{(L+1)})] - f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big\|_2 \Big] \leq \delta
\]
(3) and
\[
\begin{align*}
& \mathbb{E}_X \Big[ \big\| \mathbb{E}_{S_L} \big[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big] - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \big\|_2 \Big] \\
= & \mathbb{E}_X \Big[ \big\| \mathbb{E}_{S_L} \big[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big] - f_{L+1}(\mathbb{E}_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) + f_{L+1}(\mathbb{E}_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \big\|_2 \Big] \\
\leq & \mathbb{E}_X \Big[ \big\| \mathbb{E}_{S_L} \big[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big] - f_{L+1}(\mathbb{E}_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) \big\|_2 \Big] + \mathbb{E}_X \Big[ \big\| f_{L+1}(\mathbb{E}_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \big\|_2 \Big]
\end{align*}
\]
(4) Using Jensen’s inequality, the property of the operator norm, and \( \mathbb{E}[\mathrm{Var}[\mathbf{H}^{(l)}|X]] \leq \sigma^2 \), we have
\[
\begin{align*}
& \mathbb{E}_X \Big[ \big\| \mathbb{E}_{S_L} \big[ f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) \big] - f_{L+1}(\mathbb{E}_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) \big\|_2 \Big] \\
\leq & \mathbb{E}_{\mathbf{H}^{(L)}} \Big[ \big\| f_{L+1}(\mathbf{H}^{(L)} \odot \gamma^{(L+1)}) - f_{L+1}(\mathbb{E}_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) \big\|_2 \Big] \\
\leq & B \gamma\, \mathbb{E}_{\mathbf{H}^{(L)}} \Big[ \big\| \mathbf{H}^{(L)} - \mathbb{E}_{S_L}[\mathbf{H}^{(L)}] \big\|_2 \Big] \leq B \gamma \Big( \mathbb{E}_{\mathbf{H}^{(L)}} \Big[ \big\| \mathbf{H}^{(L)} - \mathbb{E}_{S_L}[\mathbf{H}^{(L)}] \big\|_2^2 \Big] \Big)^{\frac{1}{2}} \leq B \gamma \sigma
\end{align*}
\]
(5) From Eq. 1,
\[
\mathbb{E}_X \Big[ \big\| f_{L+1}(\mathbb{E}_{S_L}[\mathbf{H}^{(L)}] \odot \gamma^{(L+1)}) - f_{L+1}(\mathbf{h}^{(L)} \odot \gamma^{(L+1)}) \big\|_2 \Big] \leq B \gamma\, \mathbb{E}_X \Big[ \big\| \mathbb{E}_{S_L}[\mathbf{H}^{(L)}] - \mathbf{h}^{(L)} \big\|_2 \Big] \leq B \gamma \Delta_L
\]
(6) Finally, combining Eqs. 3, 4, 5 and 6, we have
\[
\begin{align*}
& \mathbb{E}_X \left[ \| \mathbb{E}_{S_{L+1}}[\mathbf{H}^{(L+1)}] - \mathbf{h}^{(L+1)} \|_2 \right] \leq \delta + B \gamma \sigma + B \gamma \Delta_L \\
= & (B \gamma)^L \delta + (\delta + B \gamma \sigma) \left( \frac{1 - (B \gamma)^L}{1 - B \gamma} \right) = \Delta_{L+1}.
\end{align*}
\]
□ C EXPECTATION-LINEARIZATION C.1 PROOF OF THEOREM 4: UNIFORM DEVIATION BOUND Before proving Theorem 4, we first set up some notation. Let \( X^n = \{ X_1, \ldots, X_n \} \) be a set of n samples of the input \( X \).
For a function space \( \mathcal{F} : \mathcal{X} \to \mathcal{R} \), we use \( \mathrm{Rad}_n(\mathcal{F}, X^n) \) to denote the empirical Rademacher complexity of \( \mathcal{F} \),
\[
\mathrm{Rad}_n(\mathcal{F}, X^n) = \mathbb{E}_\sigma \left[ \sup_{f \in \mathcal{F}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i f(X_i) \right) \right]
\]
and the Rademacher complexity is defined as
\[
\mathrm{Rad}_n(\mathcal{F}) = \mathbb{E}_{X^n} \left[ \mathrm{Rad}_n(\mathcal{F}, X^n) \right].
\]
In addition, we import the definition of dropout Rademacher complexity from Gao & Zhou (2014):
\[
\begin{align*}
\mathcal{R}_n(\mathcal{H}, X^n, S^n) &= \mathbb{E}_\sigma \left[ \sup_{h \in \mathcal{H}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i h(X_i, S_i) \right) \right] \\
\mathcal{R}_n(\mathcal{H}) &= \mathbb{E}_{X^n, S^n} \left[ \mathcal{R}_n(\mathcal{H}, X^n, S^n) \right]
\end{align*}
\]
where \( \mathcal{H} : \mathcal{X} \times \mathcal{S} \to \mathcal{R} \) is a function space defined on the input space \( \mathcal{X} \) and the dropout variable space \( \mathcal{S} \); \( \mathcal{R}_n(\mathcal{H}, X^n, S^n) \) and \( \mathcal{R}_n(\mathcal{H}) \) are the empirical dropout Rademacher complexity and the dropout Rademacher complexity, respectively. We further denote \( \mathcal{R}_n(\mathcal{H}, X^n) \triangleq \mathrm{E}_{S^n}\left[\mathcal{R}_n(\mathcal{H}, X^n, S^n)\right] \). Now, we define the following function spaces:
\[
\begin{align*}
\mathcal{F} &= \left\{ f(x; \theta) : f(x; \theta) = \mathrm{E}_S\left[\mathbf{H}^{(L)}(x, S; \theta)\right], \theta \in \Theta \right\} \\
\mathcal{G} &= \left\{ g(x; \theta) : g(x; \theta) = \mathbf{h}^{(L)}(x, \mathrm{E}[S]; \theta), \theta \in \Theta \right\} \\
\mathcal{H} &= \left\{ h(x, s; \theta) : h(x, s; \theta) = \mathbf{h}^{(L)}(x, s; \theta), \theta \in \Theta \right\}
\end{align*}
\]
Then, the function space of \( v(x) = f(x) - g(x) \) is \( \mathcal{V} = \{ f(x) - g(x) : f \in \mathcal{F}, g \in \mathcal{G} \} \). **Lemma 7.**
\[
\mathrm{Rad}_n(\mathcal{F}, X^n) \leq \mathcal{R}_n(\mathcal{H}, X^n)
\]
*Proof.*
\[
\begin{align*}
\mathcal{R}_n(\mathcal{H}, X^n) &= \mathrm{E}_{S^n}\left[\mathcal{R}_n(\mathcal{H}, X^n, S^n)\right] = \mathrm{E}_{S^n}\left[ \mathrm{E}_\sigma \left[ \sup_{h \in \mathcal{H}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i h(X_i, S_i) \right) \right] \right] \\
&= \mathrm{E}_\sigma \left[ \mathrm{E}_{S^n} \left[ \sup_{h \in \mathcal{H}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i h(X_i, S_i) \right) \right] \right] \geq \mathrm{E}_\sigma \left[ \sup_{h \in \mathcal{H}} \mathrm{E}_{S^n} \left[ \frac{1}{n} \sum_{i=1}^n \sigma_i h(X_i, S_i) \right] \right] \\
&= \mathrm{E}_\sigma \left[ \sup_{h \in \mathcal{H}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i \mathrm{E}_{S_i} [h(X_i, S_i)] \right) \right] = \mathrm{E}_\sigma \left[ \sup_{h \in \mathcal{H}} \left( \frac{1}{n} \sum_{i=1}^n \sigma_i \mathrm{E}_{S_i} [\mathbf{H}^{(L)}(X_i, S_i; \theta)] \right) \right] = \mathrm{Rad}_n(\mathcal{F}, X^n)
\end{align*}
\]
\hfill \( \Box \) From Lemma 7, we have \( \mathrm{Rad}_n(\mathcal{F}) \leq \mathcal{R}_n(\mathcal{H}) \). **Lemma 8.**
\[
\begin{align*}
\mathcal{R}_n(\mathcal{H}) &\leq \frac{\alpha B^{L/2} \gamma^{L/2}}{\sqrt{n}} \\
\mathrm{Rad}_n(\mathcal{G}) &\leq \frac{\alpha B \gamma}{\sqrt{n}}
\end{align*}
\]
*Proof.* See Theorem 4 in Gao & Zhou (2014). \hfill \( \Box \) Now, we can prove Theorem 4.
**Proof of Theorem 4** *Proof.* By the standard Rademacher-based uniform deviation bound, with probability at least \( 1 - \nu \),
\[
\sup_{v \in \mathcal{V}} |\Delta - \hat{\Delta}| < 2\mathrm{Rad}_n(\mathcal{V}) + \beta \sqrt{\frac{\log(1/\nu)}{n}}.
\]
Since \( \mathcal{V} = \mathcal{F} - \mathcal{G} \), we have
\[
\mathrm{Rad}_n(\mathcal{V}) = \mathrm{Rad}_n(\mathcal{F} - \mathcal{G}) \leq \mathrm{Rad}_n(\mathcal{F}) + \mathrm{Rad}_n(\mathcal{G}) \leq \frac{\alpha B^L (\gamma^{L/2} + 1)}{\sqrt{n}}.
\]
Then, finally, we have that with probability at least \( 1 - \nu \),
\[
\sup_{\theta \in \Theta} |\Delta - \hat{\Delta}| < \frac{2\alpha B^L (\gamma^{L/2} + 1)}{\sqrt{n}} + \beta \sqrt{\frac{\log(1/\nu)}{n}}.
\]
\hfill \( \Box \) C.2 Proof of Theorem 5: Non-Uniform Bound of Model Accuracy For convenience, we denote \( \lambda = \{\theta_1, \ldots, \theta_{L-1}\} \). Then \( \theta = \{\lambda, \eta\} \), and the MLE is \( \hat{\theta} = \{\hat{\lambda}, \hat{\eta}\} \). Lemma 9.
\[
\| \nabla f_L(\cdot; \eta)^T \|_{op} \leq 2\|\eta\|_2 \tag{7}
\]
Proof. Denote
\[
A = \nabla f_L(\cdot; \eta)^T = [p_y(\eta_y - \overline{\eta})^T] \Big|_{y=1}^k
\]
where \( p_y = p(y|x, s; \theta) \) and \( \overline{\eta} = \mathrm{E}[\eta_Y] = \sum_{y=1}^k p_y \eta_y \). For each \( v \) such that \( \|v\|_2 = 1 \),
\[
\begin{align*}
\|Av\|_2^2 &= \sum_{y \in \mathcal{Y}} \left( p_y (\eta_y - \overline{\eta})^T v \right)^2 \leq \sum_{y \in \mathcal{Y}} \|p_y (\eta_y - \overline{\eta})\|_2^2 \|v\|_2^2 = \sum_{y \in \mathcal{Y}} \|p_y (\eta_y - \overline{\eta})\|_2^2 \\
&\leq \sum_{y \in \mathcal{Y}} p_y \| \eta_y - \overline{\eta} \|_2^2 \leq \sum_{y \in \mathcal{Y}} 2p_y \left( \|\eta_y\|_2^2 + \sum_{y' \in \mathcal{Y}} p_{y'} \| \eta_{y'} \|_2^2 \right) \\
&= 4 \sum_{y \in \mathcal{Y}} p_y \| \eta_y \|_2^2 \leq 4\|\eta\|_2^2
\end{align*}
\]
So we have \( \|A\|_{op} \leq 2\|\eta\|_2 \). \( \square \) Lemma 10. If the parameter \( \tilde{\theta} = \{\hat{\lambda}, \eta\} \) satisfies \( \|\eta\|_2 \leq \frac{\delta}{4\beta} \), then \( V(D; \tilde{\theta}) \leq \delta \), where \( V(D; \theta) \) is defined in Eq. (16). Proof. Let \( S_L = \{\Gamma^{(1)}, \ldots, \Gamma^{(L)}\} \), and let \( \mathbf{H}^{(l)} \) and \( \mathbf{h}^{(l)} \) be short for \( \mathbf{H}^{(l)}(X, S_l; \tilde{\theta}) \) and \( \mathbf{h}^{(l)}(X, \mathrm{E}(S_l); \tilde{\theta}) \), respectively. From Lemma 9, we have \( \|f_L(x; \eta) - f_L(y; \eta)\|_2 \leq 2\|\eta\|_2 \|x - y\|_2 \). Then, using Jensen's inequality and the bound \( \|\mathbf{H}^{(L-1)}\|, \|\mathbf{h}^{(L-1)}\| \leq \beta \),
\[
\begin{align*}
\| \mathrm{E}_{S_L} [\mathbf{H}^{(L)}] - \mathbf{h}^{(L)} \|_2 &= \| \mathrm{E}_{S_{L-1}} [f_L(\mathbf{H}^{(L-1)}; \eta)] - f_L(\mathbf{h}^{(L-1)}; \eta) \|_2 \\
&\leq \mathrm{E}_{S_{L-1}} \left[ \| f_L(\mathbf{H}^{(L-1)}; \eta) - f_L(\mathbf{h}^{(L-1)}; \eta) \|_2 \right] \\
&\leq 2\|\eta\|_2\, \mathrm{E}_{S_{L-1}} \left[ \|\mathbf{H}^{(L-1)} - \mathbf{h}^{(L-1)}\|_2 \right] \\
&\leq 4\beta\|\eta\|_2 \leq \delta
\end{align*}
\]
\( \square \) Lemma 10 says that we can obtain a \( \theta \) satisfying the expectation-linearization constraint by explicitly scaling down \( \hat{\eta} \) while keeping \( \hat{\lambda} \). In order to prove Theorem 5, we make the following assumptions: • The dimension of \( \mathbf{h}^{(L-1)} \) is \( d \), i.e. \( \mathbf{h}^{(L-1)} \in \mathcal{R}^d \). • Since \( \forall y \in \mathcal{Y}, p(y|x; \hat{\theta}) > 0 \), we assume \( p(y|x; \hat{\theta}) \geq 1/b \), where \( b \geq |\mathcal{Y}| = k \).
• As in the body text, let \( p(y|x, s; \hat{\theta}) \) be non-uniform; in particular, let \( \hat{\eta}_{y^*}^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda}) - \hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda}) > c\|\hat{\eta}\|_2, \forall y \neq y^* \). For convenience, we write \( \eta_y^T \mathbf{h}^{(L-1)}(x, s; \lambda) = \eta^T u_y(x, s; \lambda) \), where \( u_y^T(x, s; \lambda) = (v_1^T, \ldots, v_k^T) \) and
\[
v_i = \left\{ \begin{array}{ll} \mathbf{h}^{(L-1)}(x, s; \lambda) & \text{if } i = y \\ 0 & \text{otherwise} \end{array} \right.
\]
To prove Theorem 5, we first prove the following lemmas. Lemma 11. If \( p(y|x; \hat{\theta}) \geq 1/b \), then \( \forall \alpha \in [0, 1] \), the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \) satisfies
\[
p(y|x; \tilde{\theta}) \geq \frac{1}{b}.
\]
Proof. We define
\[
f(\alpha) \triangleq p(y|x,s;\tilde{\theta}) = \frac{e^{\alpha \hat{\eta}_y^T \mathbf{h}^{(L-1)}(x,s;\hat{\lambda})}}{\sum_{y' \in \mathcal{Y}} e^{\alpha \hat{\eta}_{y'}^T \mathbf{h}^{(L-1)}(x,s;\hat{\lambda})}} = \frac{\left(e^{\hat{\eta}_y^T \mathbf{h}^{(L-1)}(x,s;\hat{\lambda})}\right)^{\alpha}}{\sum_{y' \in \mathcal{Y}} \left(e^{\hat{\eta}_{y'}^T \mathbf{h}^{(L-1)}(x,s;\hat{\lambda})}\right)^{\alpha}}.
\]
Since \( \mathcal{Y} = \{1, \ldots, k\} \), for fixed \( x \in \mathcal{X}, s \in \mathcal{S} \), \( \log f(\alpha) \) is a concave function of \( \alpha \). Since \( b \geq k \), we have
\[
\log f(\alpha) \geq (1-\alpha) \log f(0) + \alpha \log f(1) \geq -\log b.
\]
So we have \( \forall x, s,\ p(y|x,s;\tilde{\theta}) \geq 1/b \). Then
\[
p(y|x;\tilde{\theta}) = \mathrm{E}_S \left[ p(y|x,S;\tilde{\theta}) \right] \geq \frac{1}{b}.
\]
\( \square \) Lemma 12. If \( y \) is not the majority class, i.e. \( y \neq y^* \), then for the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \),
\[
p(y|x,s;\tilde{\theta}) \leq e^{-c \alpha \| \hat{\eta} \|_2}.
\]
Proof.
\[
p(y|x,s;\tilde{\theta}) = \frac{e^{\alpha \hat{\eta}^T u_y}}{\sum_{y' \in \mathcal{Y}} e^{\alpha \hat{\eta}^T u_{y'}}} \leq \frac{e^{\alpha \hat{\eta}^T u_y}}{e^{\alpha \hat{\eta}^T u_{y^*}}} \leq e^{-c \alpha \| \hat{\eta} \|_2}.
\]
\( \square \) Lemma 13. For fixed \( x \) and \( s \), each entry of the following vector satisfies, under the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \):
\[
|p(y|x,s;\tilde{\theta})(u_y - \mathrm{E}_Y[u_Y])|_i \leq \beta (k-1) e^{-c \alpha \| \hat{\eta} \|_2}.
\]
Proof. Suppose first that \( y \) is the majority class of \( p(\cdot|x,s;\tilde{\theta}) \), i.e. \( y = y^* \). Then,
\[
u_y - \mathrm{E}_Y[u_Y] = (v_{y'})_{y'=1}^k
\]
where
\[
v_{y'} = \left\{ \begin{array}{ll} (1 - p(y'|x,s;\tilde{\theta}))\, \mathbf{h}^{(L-1)} & \text{if } y' = y^* \\ -p(y'|x,s;\tilde{\theta})\, \mathbf{h}^{(L-1)} & \text{otherwise} \end{array} \right.
\]
From Lemma 12, \( 1 - p(y^*|x,s;\tilde{\theta}) \leq (k-1)e^{-c\alpha\|\hat{\eta}\|_2} \), so
\[
|p(y|x,s;\tilde{\theta})(u_y - \mathrm{E}_Y[u_Y])|_i \leq |(u_y - \mathrm{E}_Y[u_Y])|_i \leq \beta (k-1) e^{-c \alpha \| \hat{\eta} \|_2}.
\]
Now suppose that \( y \) is not the majority class of \( p(\cdot|x,s;\tilde{\theta}) \). Then,
\[
|p(y|x,s;\tilde{\theta})(u_y - \mathrm{E}_Y[u_Y])|_i \leq p(y|x,s;\tilde{\theta})\, \beta \leq \beta e^{-c \alpha \| \hat{\eta} \|_2}.
\]
Overall, the lemma follows. \( \square \) Lemma 14.
*We denote the matrix*
\[
A \triangleq \mathrm{E}_S \left[ \frac{p(y|x,s;\tilde{\theta})}{p(y|x;\tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y])(u_y - \mathrm{E}_Y[u_Y])^T \right] - \mathrm{E}_S \left[ \frac{p(y|x,s;\tilde{\theta})}{p(y|x;\tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y]) \right] \mathrm{E}_S \left[ \frac{p(y|x,s;\tilde{\theta})}{p(y|x;\tilde{\theta})} (u_y - \mathrm{E}_Y[u_Y]) \right]^T.
\]
*Then each entry of \( A \) satisfies, under the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \):*
\[
|A_{ij}| \leq 2b(k-1) \beta^2 e^{-c \alpha \| \hat{\eta} \|_2}.
\]
Proof. From Lemma 11, we have \( p(y|x; \tilde{\theta}) \geq 1/b \). Additionally, each entry of \( u_y - E_Y[u_Y] \) is bounded by \( \beta \) in absolute value. We therefore have, for each \( i \),
\[
\left| E_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} (u_y - E_Y[u_Y]) \right] \right|_i \leq E_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} \right] \beta = \beta.
\]
Then from Lemma 13,
\[
|A_{ij}| \leq 2b(k-1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2}.
\]
\( \square \) Lemma 15. We denote the matrix
\[
B \triangleq E_S \left[ \frac{p(y|x, s; \tilde{\theta})}{p(y|x; \tilde{\theta})} \left( E_Y \left[ u_Y u_Y^T \right] - E_Y[u_Y]E_Y[u_Y]^T \right) \right].
\]
Then each entry of \( B \) satisfies, under the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \):
\[
|B_{ij}| \leq 2(k-1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2}.
\]
Proof. We only need to show that for fixed \( x \) and \( s \), and for each \( i, j \):
\[
|E_Y \left[ u_Y u_Y^T \right] - E_Y[u_Y]E_Y[u_Y]^T|_{ij} \leq 2(k-1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2}.
\]
Indeed,
\[
|E_Y \left[ u_Y u_Y^T \right] - E_Y[u_Y]E_Y[u_Y]^T|_{ij} = |\mathrm{Cov}_Y[(u_Y)_i, (u_Y)_j]| \leq \beta^2 \sum_{y=1}^k \left( p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \right).
\]
Suppose \( y \) is the majority class. Then from Lemma 12,
\[
p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \leq 1 - p(y|x, s; \tilde{\theta}) \leq (k-1)e^{-c\alpha \| \hat{\eta} \|_2}.
\]
If \( y \) is not the majority class, then
\[
p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \leq p(y|x, s; \tilde{\theta}) \leq e^{-c\alpha \| \hat{\eta} \|_2}.
\]
So we have
\[
\sum_{y=1}^k \left( p(y|x, s; \tilde{\theta}) - p(y|x, s; \tilde{\theta})^2 \right) \leq 2(k-1)e^{-c\alpha \| \hat{\eta} \|_2}
\]
and the lemma follows. \( \square \) Lemma 16. Under the parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \), the largest eigenvalue of the matrix
\[
\frac{1}{n} \sum_{i=1}^n (A(x_i, y_i) - B(x_i, y_i)) \tag{8}
\]
is at most
\[
2dk(k-1)(b+1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2}.
\]
Proof. From Lemma 14 and Lemma 15, each entry of the matrix in (8) is at most \( 2(k-1)(b+1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2} \) in absolute value. Thus, by Gershgorin’s theorem, the maximum eigenvalue of the matrix in (8) is at most \( 2dk(k-1)(b+1)\beta^2 e^{-c\alpha \| \hat{\eta} \|_2} \). \( \square \) Now, we can prove Theorem 5 by constructing a scaled version of \( \hat{\theta} \) that satisfies the expectation-linearization constraint. Proof of Theorem 5 Proof. Consider the likelihood evaluated at \( \tilde{\theta} = \{ \hat{\lambda}, \alpha \hat{\eta} \} \), where \( \alpha = \frac{\delta}{4\beta \| \hat{\eta} \|_2} \). If \( \alpha > 1 \), then \( \| \hat{\eta} \|_2 < \frac{\delta}{4\beta} \), so by Lemma 10 the MLE \( \hat{\theta} \) itself already satisfies the expectation-linearization constraint. We can therefore assume that \( 0 \leq \alpha \leq 1 \), in which case \( \tilde{\theta} \) satisfies \( V(D; \tilde{\theta}) \leq \delta \).
Then,
\[
\Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \Delta_l(\hat{\theta}, \tilde{\theta}) = \frac{1}{n}(l(D; \hat{\theta}) - l(D; \tilde{\theta})) = g(\hat{\lambda}, \hat{\eta}) - g(\hat{\lambda}, \alpha \hat{\eta})
\]
where \( g(\lambda, \eta) = \frac{1}{n} l(D; (\lambda, \eta)) \). Taking a second-order Taylor expansion in \( \eta \), we have
\[
g(\hat{\lambda}, \alpha \hat{\eta}) = g(\hat{\lambda}, \hat{\eta}) + \nabla_{\eta}^T g(\hat{\lambda}, \hat{\eta})(\alpha \hat{\eta} - \hat{\eta}) + (\alpha \hat{\eta} - \hat{\eta})^T \nabla_{\eta}^2 g(\hat{\lambda}, \hat{\eta})(\alpha \hat{\eta} - \hat{\eta}).
\]
Since \( \hat{\theta} \) is the MLE, the first-order term \( \nabla_{\eta}^T g(\hat{\lambda}, \hat{\eta})(\alpha \hat{\eta} - \hat{\eta}) = 0 \). The Hessian in the second-order term is just Eq. (8), whose eigenvalues are bounded in absolute value by Lemma 16. Thus we have
\[
\begin{align*}
g(\hat{\lambda}, \alpha \hat{\eta}) &\geq g(\hat{\lambda}, \hat{\eta}) - (1 - \alpha)^2 \| \hat{\eta} \|_2^2 \cdot 2dk(k-1)(b+1)\beta^2 e^{-c \alpha \| \hat{\eta} \|_2} \\
&= g(\hat{\lambda}, \hat{\eta}) - 2dk(k-1)(b+1)\beta^2 \left( \| \hat{\eta} \|_2 - \frac{\delta}{4\beta} \right)^2 e^{-c \delta / (4\beta)} \\
&= g(\hat{\lambda}, \hat{\eta}) - c_1 \beta^2 \left( \| \hat{\eta} \|_2 - \frac{\delta}{4\beta} \right)^2 e^{-c_2 \delta / (4\beta)}
\end{align*}
\]
by setting \( c_1 = 2dk(k-1)(b+1) \) and \( c_2 = c \). The theorem then follows. □ C.3 Proof of Theorem 6: Uniform Bound of Model Accuracy In the following, we denote \( \tilde{\theta} = \{ \hat{\lambda}, \alpha \hat{\eta} \} \). Lemma 17. For each \( y \in \mathcal{Y} \), if \( p(y|x, s; \hat{\theta}) \geq 1/k \), then \( \forall \alpha \in [0, 1] \),
\[
p(y|x, s; \tilde{\theta}) \geq \frac{1}{k}.
\]
Proof. This lemma can be regarded as a corollary of Lemma 11. □ Lemma 18. For fixed \( x \) and \( s \), denote \( w_y = e^{\hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})} \). Then we have
\[
p(y|x, s; \tilde{\theta}) = \frac{e^{\alpha \hat{\eta}_y^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})}}{\sum_{y' \in \mathcal{Y}} e^{\alpha \hat{\eta}_{y'}^T \mathbf{h}^{(L-1)}(x, s; \hat{\lambda})}} = \frac{(w_y)^{\alpha}}{\sum_{y' \in \mathcal{Y}} (w_{y'})^{\alpha}}.
\]
Additionally, denote \( g_s(\alpha) = \sum_{y' \in \mathcal{Y}} p(y'|x, s; \tilde{\theta}) \log w_{y'} - \log w_y \), and assume \( g_s(0) \geq 0 \). Then \( \forall \alpha \geq 0 \),
\[
g_s(\alpha) \geq 0.
\]
Proof.
\[
\frac{\partial g_s(\alpha)}{\partial \alpha} = \sum_{y' \in \mathcal{Y}} \log w_{y'} \frac{\partial p(y'|x, s; \tilde{\theta})}{\partial \alpha} = \mathrm{Var}_Y [\log w_Y | X = x, S = s] \geq 0.
\]
So \( g_s(\alpha) \) is non-decreasing. Since \( g_s(0) \geq 0 \), we have \( g_s(\alpha) \geq 0 \) when \( \alpha \geq 0 \). □ From the above lemma, we have for each training instance \( (x_i, y_i) \in D \) and \( \forall \alpha \in [0, 1] \),
\[
\mathrm{E}_Y \left[ \log p(Y|x_i, s; \tilde{\theta}) \right] \geq \log p(y_i|x_i, s; \tilde{\theta}). \tag{9}
\]
For convenience, we define
\[
m(s, y) = \log p(y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[ \log p(Y|x, s; \tilde{\theta}) \right].
\]
Lemma 19. If \( y \) satisfies Lemma 17 and \( g_s(\alpha) \geq 0 \), then
\[
\mathrm{Var}_Y[m(s, Y)] \geq m(s, y)^2.
\]
Proof.
First, we have
\[
m(s, y) = \log p(y|x, s; \tilde{\theta}) - \log \frac{1}{k} - \mathrm{KL} \left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) \leq 0.
\]
So we have
\[
\begin{align*}
(\mathrm{Var}_Y[m(s, Y)])^{1/2} &= \sqrt{\mathrm{E}_Y \left[ \left( \log p(Y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[ \log p(Y|x, s; \tilde{\theta}) \right] \right)^2 \right]} \\
&\geq \mathrm{E}_Y \left[ \left| \log p(Y|x, s; \tilde{\theta}) - \mathrm{E}_Y \left[ \log p(Y|x, s; \tilde{\theta}) \right] \right| \right] \\
&= \mathrm{E}_Y \left[ \left| \mathrm{KL} \left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) + \log \frac{1}{k} - \log p(Y|x, s; \tilde{\theta}) \right| \right] \\
&\geq \mathrm{KL} \left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) + \mathrm{E}_Y \left[ \log p(Y|x, s; \tilde{\theta}) - \log \frac{1}{k} \right] \\
&= 2\,\mathrm{KL} \left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right).
\end{align*}
\]
Since \( \mathrm{KL} \left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) \geq 0 \) and \( \log p(y|x, s; \tilde{\theta}) \geq \log \frac{1}{k} \), we have
\[
2\,\mathrm{KL} \left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) \geq \mathrm{KL} \left( p(\cdot|x, s; \tilde{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y}) \right) + \log \frac{1}{k} - \log p(y|x, s; \tilde{\theta}) = -m(s, y).
\]
Then the lemma follows. □ From Lemma 19 and Eq. (9), we have for each training instance \((x_i, y_i) \in D\) and \( \forall \alpha \in [0, 1] \),
\[
\mathrm{Var}_Y[m(s, Y)] \geq m(s, y_i)^2. \tag{10}
\]
Lemma 20. For each training instance \((x_i, y_i) \in D\) and \( \forall \alpha \in [0, 1] \), we have
\[
\log p(y_i|x_i; \{\hat{\lambda}, \alpha \hat{\eta}\}) \geq (1-\alpha) \log p(y_i|x_i; \{\hat{\lambda}, 0\}) + \alpha \log p(y_i|x_i; \{\hat{\lambda}, \hat{\eta}\}).
\]
Proof. We define
\[
f(\alpha) = \log p(y_i|x_i; \{\hat{\lambda}, \alpha \hat{\eta}\}) - (1-\alpha) \log p(y_i|x_i; \{\hat{\lambda}, 0\}) - \alpha \log p(y_i|x_i; \{\hat{\lambda}, \hat{\eta}\}).
\]
Because \( f(0) = f(1) = 0 \), we only need to show that \( f(\alpha) \) is concave on \([0, 1]\). We have
\[
\nabla^2 f(\alpha) = -\mathrm{E}_{S|Y=y_i} [\mathrm{Var}_Y[m(S, Y)]] + \mathrm{Var}_{S|Y=y_i}[m(S, y_i)]
\]
where \( S|Y = y_i \) follows the probability distribution \( p(s|Y = y_i, x_i; \tilde{\theta}) = \frac{p(y_i|x_i, s; \tilde{\theta})\, p(s)}{p(y_i|x_i; \tilde{\theta})} \). From Eq. (10), we have
\[
\mathrm{E}_{S|Y=y_i} [\mathrm{Var}_Y[m(S, Y)]] \geq \mathrm{E}_{S|Y=y_i} [m(S, y_i)^2] \geq \mathrm{Var}_{S|Y=y_i}[m(S, y_i)].
\]
So \( \nabla^2 f(\alpha) \leq 0 \), and the lemma follows. □ Now, we can prove Theorem 6 using the same construction of an expectation-linearizing parameter as in Theorem 5. Proof of Theorem 6 Proof. Consider the same parameter \( \tilde{\theta} = \{\hat{\lambda}, \alpha \hat{\eta}\} \), where \( \alpha = \frac{\delta}{4\beta \|\hat{\eta}\|_2} \leq 1 \). We know that \( \tilde{\theta} \) satisfies \( V(D; \tilde{\theta}) \leq \delta \).
Then,
\[
\Delta_l(\hat{\theta}, \hat{\theta}_\delta) \leq \Delta_l(\hat{\theta}, \tilde{\theta}) = \frac{1}{n}(l(D; \hat{\theta}) - l(D; \tilde{\theta})).
\]
From Lemma 20 we have:
\[
l(D; \tilde{\theta}) = l(D; \{ \hat{\lambda}, \alpha \hat{\eta} \}) \geq (1 - \alpha)\, l(D; \{ \hat{\lambda}, 0 \}) + \alpha\, l(D; \{ \hat{\lambda}, \hat{\eta} \}).
\]
So
\[
\begin{align*}
\Delta_l(\hat{\theta}, \hat{\theta}_\delta) &\leq (1 - \alpha) \frac{1}{n} \left( l(D; \hat{\theta}) - l(D; \{ \hat{\lambda}, 0 \}) \right) \\
&= (1 - \alpha) \frac{1}{n} \sum_{i=1}^n \left( \log p(y_i|x_i; \hat{\theta}) - \log \frac{1}{k} \right) \\
&\asymp (1 - \alpha)\, \mathrm{E} [ \mathrm{KL} (p(\cdot|X; \hat{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y})) ] \\
&\leq \left( 1 - \frac{\delta}{4 \beta \| \hat{\eta} \|_2} \right) \mathrm{E} [ \mathrm{KL} (p(\cdot|X; \hat{\theta}) \,\|\, \mathrm{Unif}(\mathcal{Y})) ].
\end{align*}
\]
□ D Detailed Description of Experiments D.1 Neural Network Architectures MNIST For MNIST, we train 6 different fully-connected (dense) neural networks with 2 or 3 layers (see Table 1). For all architectures, we used dropout rate \( p = 0.5 \) for all hidden layers and \( p = 0.2 \) for the input layer. CIFAR-10 and CIFAR-100 For the two CIFAR datasets, we used the same architecture as in Srivastava et al. (2014): three convolutional layers followed by two fully-connected hidden layers. The convolutional layers have 96, 128 and 256 filters, respectively, each with a \( 5 \times 5 \) receptive field applied with a stride of 1. Each convolutional layer is followed by a max-pooling layer that pools \( 3 \times 3 \) regions at a stride of 2. The fully-connected layers have 2048 units each. All units use the rectified linear activation function. Dropout was applied to all layers, with dropout rates \( p = (0.1, 0.25, 0.25, 0.5, 0.5, 0.5) \) for the layers going from the input through the convolutional layers to the fully-connected layers. D.2 Neural Network Training Neural network training in all the experiments is performed with mini-batch stochastic gradient descent (SGD) with momentum. We choose an initial learning rate \( \eta_0 \), and the learning rate is updated at each epoch of training as \( \eta_t = \eta_0 / (1 + \rho t) \), where \( \rho \) is the decay rate and \( t \) is the number of epochs completed. We run each experiment for 2,000 epochs and choose the parameters achieving the best performance on the validation sets. Table 3 summarizes the chosen hyper-parameters for all experiments. Most of the hyper-parameters are taken from Srivastava et al. (2014). For some experiments, however, we could not reproduce the performance reported in Srivastava et al. (2014) (possibly because we used a different library for our implementation). For these experiments, we tuned the hyper-parameters on the validation sets by random search. Due to time constraints it is infeasible to do a random search across the full hyper-parameter space, so we reuse as many of the hyper-parameters reported in Srivastava et al. (2014) as possible. D.3 Effect of the Expectation-linearization Rate \( \lambda \) Table 4 gives the detailed results of the experiments on the effect of \( \lambda \). For MNIST, it lists the error rates under different \( \lambda \) values for the six network architectures. For the two CIFAR datasets, it gives the error rates under different \( \lambda \) values, along with the empirical expectation-linearization risk \( \hat{\Delta} \). Table 3: Hyper-parameters for all experiments.
<table> <tr> <th>Experiment</th> <th>Hyper-parameter</th> <th colspan="2">Value</th> </tr> <tr> <td rowspan="6">MNIST</td> <td>batch size</td> <td colspan="2">200</td> </tr> <tr> <td>initial learning rate \( \eta_0 \)</td> <td colspan="2">0.1</td> </tr> <tr> <td>decay rate \( \rho \)</td> <td colspan="2">0.025</td> </tr> <tr> <td>momentum</td> <td colspan="2">0.9</td> </tr> <tr> <td>momentum type</td> <td colspan="2">standard</td> </tr> <tr> <td>max-norm constraint</td> <td colspan="2">3.5</td> </tr> <tr> <td rowspan="8">CIFAR</td> <td>batch size</td> <td>10</td> <td>100</td> </tr> <tr> <td>initial learning rate \( \eta_0 \) for conv layers</td> <td>100</td> <td>100</td> </tr> <tr> <td>initial learning rate \( \eta_0 \) for dense layers</td> <td>0.001</td> <td>0.001</td> </tr> <tr> <td>decay rate \( \rho \)</td> <td>0.1</td> <td>0.02</td> </tr> <tr> <td>momentum</td> <td>0.005</td> <td>0.005</td> </tr> <tr> <td>momentum type</td> <td>standard</td> <td>nesterov</td> </tr> <tr> <td>max-norm constraint</td> <td>4.0</td> <td>2.0</td> </tr> <tr> <td>L2-norm decay</td> <td>0.001</td> <td>0.001</td> </tr> </table> Table 4: Detailed results for experiments on the effect of \( \lambda \). <table> <tr> <th colspan="2" rowspan="2">Experiment</th> <th colspan="8">\( \lambda \)</th> </tr> <tr> <th>0.0</th> <th>0.5</th> <th>1.0</th> <th>2.0</th> <th>3.0</th> <th>5.0</th> <th>7.0</th> <th>10.0</th> </tr> <tr> <td rowspan="6">MNIST</td> <td>model 1</td> <td>1.23</td> <td>1.12</td> <td>1.12</td> <td>1.08</td> <td>1.07</td> <td>1.10</td> <td>1.25</td> <td>1.35</td> </tr> <tr> <td>model 2</td> <td>1.19</td> <td>1.14</td> <td>1.08</td> <td>1.04</td> <td>1.03</td> <td>1.07</td> <td>1.13</td> <td>1.21</td> </tr> <tr> <td>model 3</td> <td>1.05</td> <td>1.04</td> <td>0.98</td> <td>1.03</td> <td>1.05</td> <td>1.05</td> <td>1.10</td> <td>1.12</td> </tr> <tr> <td>model 4</td> <td>1.07</td> <td>1.02</td> <td>0.97</td> <td>0.94</td> <td>0.96</td> <td>1.01</td> <td>1.05</td> <td>1.20</td> </tr> <tr> <td>model 5</td> <td>1.03</td> <td>0.95</td> <td>0.95</td> <td>0.90</td> <td>0.92</td> <td>0.98</td> <td>1.03</td> <td>1.08</td> </tr> <tr> <td>model 6</td> <td>0.99</td> <td>0.98</td> <td>0.93</td> <td>0.87</td> <td>0.96</td> <td>0.98</td> <td>1.05</td> <td>1.10</td> </tr> <tr> <td rowspan="2">CIFAR-10</td> <td>error rate</td> <td>12.82</td> <td>12.52</td> <td>12.38</td> <td>12.20</td> <td>12.60</td> <td>12.84</td> <td>13.10</td> <td></td> </tr> <tr> <td>\( \hat{\Delta} \)</td> <td>0.0139</td> <td>0.0128</td> <td>0.0104</td> <td>0.0095</td> <td>0.0089</td> <td>0.0085</td> <td>0.0077</td> <td></td> </tr> <tr> <td rowspan="2">CIFAR-100</td> <td>error rate</td> <td>37.22</td> <td>36.75</td> <td>36.25</td> <td>37.01</td> <td>37.18</td> <td>37.58</td> <td>38.01</td> <td></td> </tr> <tr> <td>\( \hat{\Delta} \)</td> <td>0.0881</td> <td>0.0711</td> <td>0.0590</td> <td>0.0529</td> <td>0.0500</td> <td>0.0467</td> <td>0.0411</td> <td></td> </tr> </table>
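For reference, the learning-rate schedule used throughout Appendix D.2, \( \eta_t = \eta_0 / (1 + \rho t) \), is a one-line computation; a minimal sketch using the MNIST values from Table 3 (\( \eta_0 = 0.1 \), \( \rho = 0.025 \)):

```python
def learning_rate(eta0, rho, epoch):
    """Decay schedule from Appendix D.2: eta_t = eta_0 / (1 + rho * t)."""
    return eta0 / (1.0 + rho * epoch)

# MNIST settings from Table 3: eta_0 = 0.1, decay rate rho = 0.025.
for t in (0, 100, 1000, 2000):
    print(t, learning_rate(0.1, 0.025, t))
```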
accept
Accept (Poster)
7.666667
444f38ed2f51ee622c3e507a446afbfb03bcda2a
iclr
2,017
Batch Policy Gradient Methods for Improving Neural Conversation Models Kirthivasan Kandasamy* Carnegie Mellon University, Pittsburgh, PA, USA kandasamy@cs.cmu.edu Yoram Bachrach^b DigitalGenius Ltd., London, UK yorambac@gmail.com Ryota Tomioka, Daniel Tarlow, David Carter Microsoft Research, Cambridge, UK {ryoto,dtarlow,dacart}@microsoft.com *, ^b: This work was done while KK/YB was an intern/employee at Microsoft Research, Cambridge, UK. ABSTRACT We study reinforcement learning of chatbots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chatbot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses on-policy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset. 1 INTRODUCTION Chatbots are one of the classical applications of artificial intelligence and are now ubiquitous in technology, business and everyday life. Many corporate entities are increasingly using chatbots to either replace or assist humans in customer service contexts. For example, Microsoft is actively building a chatbot to optimise and streamline its technical support service. In these scenarios, there is usually an abundance of historical data, since past conversations between customers and human customer service agents are usually recorded by organisations. An apparently straightforward solution would be to train chatbots to reproduce the responses of human agents using standard techniques such as maximum likelihood. While this seems natural, it is far from desirable, for several reasons. It has been observed that such procedures have a tendency to produce very generic responses (Sordoni et al., 2015). For instance, when we trained chatbots via maximum likelihood on a restaurant recommendations dataset, they repeatedly output responses to the effect of “How large is your group?” or “What is your budget?”. Further, they also produce responses such as “Let me look that up.” or “Give me a second.” which, although permissible for a human agent to say, are not appropriate for a chatbot. Although there are ways to increase the diversity of responses (Li et al., 2015), our focus is on encouraging the bot to meaningfully advance the conversation. One way to address this problem is to provide some form of weak supervision for responses generated by a chatbot. For example, a human labeller, such as a quality assurance agent, could score each response generated by a chatbot in a conversation with a customer. This brings us to the reinforcement learning (RL) paradigm, where these rewards (scores) are to be used to train a good chatbot. In this paper we will use the terms score, label, and reward interchangeably. Labelled data will mean conversations which have been assigned a reward of some form as explained above. Nonetheless, there are some important differences between the above scenario and the more popular approaches to RL. • Noisy and expensive rewards: Obtaining labels for each conversation can be time consuming and economically expensive.
As a result, there is a limited amount of labelled data available. Moreover, labels produced by humans are invariably noisy due to human error and subjectivity. • Off-line evaluations: Unlike conventional RL settings, such as games, where we try to find the optimal policy while interacting with the system, the rewards here are not immediately available. Previous conversations are collected, labelled by human experts, and then given to an algorithm which has to make do with the data it has. • Unlabelled data: While labelled data is limited, a large amount of unlabelled data is available. If labelled data is in short supply, reinforcement learning could be hopeless. However, if unlabelled data can be used to train a decent initial bot, say via maximum likelihood, we can use policy iteration techniques to refine this bot by making local improvements using the labelled data (Bellman, 1956). Besides chatbots, this framework also finds applications in tasks such as question answering (Ferrucci et al., 2010; Hermann et al., 2015; Sachan et al., 2016), generating image descriptions (Karpathy & Fei-Fei, 2015) and machine translation (Bahdanau et al., 2014), where a human labeller can provide weak supervision in the form of a score for a sentence generated by a bot. To contextualise the work in this paper, we make two important distinctions among policy iteration methods in reinforcement learning. The first is on-policy vs off-policy. In on-policy settings, the goal is to improve the current policy on the fly while exploring the space. On-policy methods are used in applications where it is necessary to be competitive (achieve high rewards) while simultaneously exploring the environment. In off-policy settings, the environment is explored using a behaviour policy, but the goal is to improve a different target policy. The second distinction is on-line vs batch (off-line). In on-line settings one can interact with the environment. In batch methods, which is the setting for this work, one is given past exploration data from possibly several behaviour policies, and the goal is to improve a target policy using this data. On-line methods can be either on-policy or off-policy, whereas batch methods are necessarily off-policy. In this paper, we study reinforcement learning in batch settings for improving chatbots with Seq2Seq recurrent neural network (RNN) architectures. One of the challenges when compared to on-line learning is that we do not have interactive control over the environment; we can only hope to do as well as our data permits. On the other hand, the batch setting affords us some luxuries. We can reuse existing data and use standard techniques for hyper-parameter tuning based on cross-validation. Further, in on-line policy updates, we have to be able to “guess” how an episode will play out, i.e. the actions the behaviour/target policies would take in the future and the corresponding rewards. In batch learning, however, the future actions and rewards are directly available in the data. This enables us to make more informed choices when updating our policy. RELATED WORK Recently there has been a surge of interest in deep learning approaches to reinforcement learning, many of them adopting Q-learning, e.g. (He et al., 2015; Mnih et al., 2013; Narasimhan et al., 2015). In Q-learning, the goal is to estimate the optimal action value function \( Q^* \). Then, when an agent is at a given state, it chooses the best greedy action according to \( Q^* \).
While Q-learning has been successful in several applications, it is challenging in the settings we consider, since estimating \( Q^* \) over large action and state spaces requires a vast number of samples. In this context, policy iteration methods are more promising, since we can start with an initial policy and make incremental local improvements using the data we have. This is especially true given that we can use maximum likelihood techniques to estimate a good initial bot from unlabelled data. Policy gradient methods, which fall within the paradigm of policy iteration, change the parameters of a policy along the gradient of a desired objective (Sutton et al., 1999). Recently, the natural language processing (NLP) literature has turned its attention to policy gradient methods for improving language models. Ranzato et al. (2015) present a method based on the classical REINFORCE algorithm (Williams, 1992) for improving machine translation after preliminary training with maximum likelihood objectives. Bahdanau et al. (2016) present an actor-critic method, also for machine translation. In both cases, the authors use as the reward the BLEU (bilingual evaluation understudy) score between the output and the reference translation in the training dataset. This setting, where the rewards are deterministic and cheaply computable, does not reflect the difficulties inherent to training chatbots, where labels are noisy and expensive. Li et al. (2016) develop a policy gradient method for chatbots. However, they use user-defined rewards (based on some simple rules) which, once again, are cheaply obtained and deterministic. Perhaps closest to our work is that of Williams & Zweig (2016), who use a REINFORCE-based method for chatbots. We discuss the differences between this and other methods in greater detail in Section 3. The crucial difference between all of the above efforts and ours is that they use on-policy and/or on-line updates in their methods. The remainder of this manuscript is organised as follows. In Section 2 we review Seq2Seq models and Markov decision processes (MDPs) and describe our framework for batch reinforcement learning. Section 3 presents our method BPG and compares it with prior work in the RL and NLP literature. Section 4 presents experiments on a synthetic task and a customer service dataset for restaurant recommendations. 2 PRELIMINARIES 2.1 A REVIEW OF SEQ2SEQ MODELS The goal of a Seq2Seq model in natural language processing is to produce an output sequence \( y = [a_1, a_2, \ldots, a_T] \) given an input sequence \( x \) (Cho et al., 2014; Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014). Here \( a_i \in \mathcal{A} \) where \( \mathcal{A} \) is a vocabulary of words. For example, in machine translation from French to English, \( x \) is the input sequence in French and \( y \) is its translation in English. In customer service chatbots, \( x \) is the conversation history until the customer's last query and \( y \) is the response by an agent/chatbot. In a Seq2Seq model, we use an encoder network to represent the input sequence as a Euclidean vector and then a decoder network to convert this vector into an output sequence. Typically, both the encoder and decoder networks are recurrent neural networks (RNNs) (Mikolov et al., 2010), where the recurrent unit processes each word in the input/output sequence one at a time.
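A minimal sketch of such an encoder-decoder may help fix notation; the toy single-layer module below is purely illustrative, not the architecture used in this paper's experiments:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Toy encoder-decoder: the encoder summarizes the input sequence x into
    a vector (the LSTM state), and the decoder maps (x, words so far) to a
    distribution over the vocabulary A for the next word."""
    def __init__(self, vocab_size, d_emb=64, d_hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_emb)
        self.encoder = nn.LSTM(d_emb, d_hid, batch_first=True)
        self.decoder = nn.LSTM(d_emb, d_hid, batch_first=True)
        self.proj = nn.Linear(d_hid, vocab_size)

    def next_word_dist(self, x_ids, y_prev_ids):
        """Distribution over the next word a_t given x and y_{t-1}."""
        _, state = self.encoder(self.emb(x_ids))      # encode input x
        if y_prev_ids.size(1) > 0:                    # feed the words so far
            _, state = self.decoder(self.emb(y_prev_ids), state)
        h_t = state[0][-1]                            # current hidden state
        return torch.softmax(self.proj(h_t), dim=-1)

# x_ids: LongTensor of shape (1, T_in); y_prev_ids: LongTensor (1, t-1).
```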
In this work, we will use the LSTM (long short-term memory) (Hochreiter & Schmidhuber, 1997) as our recurrent unit due to its empirical success in several applications. In its most basic form, the decoder RNN can be interpreted as assigning a probability distribution over \( \mathcal{A} \) given the current “state”. At time \( t \), the state \( s_t \) consists of the input sequence \( x \) and the words \( y_{t-1} = [a_1, \ldots, a_{t-1}] \) produced by the decoder thus far, i.e. \( s_t = (x, y_{t-1}) \). We sample the next word \( a_t \) from this probability distribution \( \pi(\cdot|s_t) \), then update our state to \( s_{t+1} = (x, y_t) \) where \( y_t = [y_{t-1}, a_t] \), and proceed in the same fashion. The vocabulary \( \mathcal{A} \) contains an end-of-statement token <EOS>. If we sample <EOS> at time \( T + 1 \), we terminate the sequence and output \( y_T \). 2.2 A REVIEW OF MARKOV DECISION PROCESSES (MDP) We present a formalism for MDPs simplified to our setting. In an MDP, an agent takes an action \( a \) in a state \( s \) and transitions to a state \( s' \). An episode refers to a sequence of transitions \( s_1 \rightarrow a_1 \rightarrow s_2 \rightarrow a_2 \rightarrow \cdots \rightarrow a_T \rightarrow s_{T+1} \) until the agent reaches a terminal state \( s_{T+1} \). At a terminal state, the agent receives a reward. Formally, an MDP is the triplet \( (\mathcal{S}, \mathcal{A}, R) \). Here, \( \mathcal{S} \) is a set of states and \( \mathcal{A} \) is a set of actions. When we take an action \( a \) at state \( s \) we transition to a new state \( s' = s'(s, a) \), which, in this work, will be deterministic. \( \mathcal{A} \) will be a finite but large discrete set and \( \mathcal{S} \) will be discrete but potentially infinite. \( R : \mathcal{S} \rightarrow \mathbb{R} \) is the expected reward function, such that when we receive a reward \( r \) at state \( s \in \mathcal{S} \), \( \mathbb{E}[r] = R(s) \). Let \( \mathcal{S}_0 \subset \mathcal{S} \) be a set of terminal states; when we transition to any \( s \in \mathcal{S}_0 \), the episode ends. In this work, we will assume that the rewards are received only at a terminal state, i.e. \( R(s) \) is nonzero only on \( \mathcal{S}_0 \). A policy \( \pi \) is a rule for selecting an action at a given state. We will be focusing on stochastic policies \( \pi : \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}_+ \) where \( \pi(a|s) \) denotes the probability that an agent will execute action \( a \) at state \( s \). We define the value function \( V^{\pi} : \mathcal{S} \rightarrow \mathbb{R} \) of policy \( \pi \), where \( V^{\pi}(s) \) is the expected reward at the end of the episode when we follow policy \( \pi \) from state \( s \). For any terminal state \( s \in \mathcal{S}_0 \), \( V^{\pi}(s) = R(s) \) regardless of \( \pi \). We will also find it useful to define the action-value function \( Q^{\pi} : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R} \), where \( Q^{\pi}(s, a) \) is the expected reward of taking action \( a \) at state \( s \) and then following policy \( \pi \). With deterministic state transitions this is simply \( Q^{\pi}(s, a) = V^{\pi}(s'(s, a)) \). It can be verified that \( V^{\pi}(s) = \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^{\pi}(s, a)] \) (Sutton & Barto, 1998). 2.3 SET UP We now frame our learning-from-labels scenario for RNN chatbots as an MDP. The treatment has similarities to some of the recent RL work in the NLP literature discussed above.
2.2 A REVIEW OF MARKOV DECISION PROCESSES (MDPs)

We present a formalism for MDPs simplified to our setting. In an MDP, an agent takes an action \( a \) in a state \( s \) and transitions to a state \( s' \). An episode refers to a sequence of transitions \( s_1 \rightarrow a_1 \rightarrow s_2 \rightarrow a_2 \rightarrow \cdots \rightarrow a_T \rightarrow s_{T+1} \) until the agent reaches a terminal state \( s_{T+1} \). At a terminal state, the agent receives a reward. Formally, an MDP is the triplet \( (\mathcal{S}, \mathcal{A}, R) \). Here, \( \mathcal{S} \) is a set of states and \( \mathcal{A} \) is a set of actions. When we take an action \( a \) at state \( s \) we transition to a new state \( s' = s'(s, a) \) which, in this work, will be deterministic. \( \mathcal{A} \) will be a finite but large discrete set and \( \mathcal{S} \) will be discrete but potentially infinite. \( R : \mathcal{S} \rightarrow \mathbb{R} \) is the expected reward function such that when we receive a reward \( r \) at state \( s \in \mathcal{S} \), \( \mathbb{E}[r] = R(s) \). Let \( \mathcal{S}_0 \subset \mathcal{S} \) be the set of terminal states. When we transition to any \( s \in \mathcal{S}_0 \), the episode ends. In this work, we will assume that rewards are received only at a terminal state, i.e. \( R(s) \) is nonzero only on \( \mathcal{S}_0 \).

A policy \( \pi \) is a rule to select an action at a given state. We will be focusing on stochastic policies \( \pi : \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}_+ \) where \( \pi(a|s) \) denotes the probability that an agent will execute action \( a \) at state \( s \). We define the value function \( V^{\pi} : \mathcal{S} \rightarrow \mathbb{R} \) of policy \( \pi \), where \( V^{\pi}(s) \) is the expected reward at the end of the episode when we follow policy \( \pi \) from state \( s \). For any terminal state \( s \in \mathcal{S}_0 \), \( V^{\pi}(s) = R(s) \) regardless of \( \pi \). We will also find it useful to define the action-value function \( Q^{\pi} : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R} \), where \( Q^{\pi}(s, a) \) is the expected reward of taking action \( a \) at state \( s \) and then following policy \( \pi \). With deterministic state transitions this is simply \( Q^{\pi}(s, a) = V^{\pi}(s'(s, a)) \). It can be verified that \( V^{\pi}(s) = \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^{\pi}(s, a)] \) (Sutton & Barto, 1998).

2.3 SET UP

We now frame our learning-from-labels scenario for RNN chatbots as an MDP. The treatment has similarities to some recent RL work in the NLP literature discussed above. Let \( x \) be the input and \( y_{t-1} = [a_1, \ldots, a_{t-1}] \) be the words output by the decoder until time \( t \). The state of our MDP at time \( t \) of the current episode will be \( s_t = (x, y_{t-1}) \). Therefore, the set of states \( \mathcal{S} \) will be all possible pairs of inputs and partial output sequences. The actions \( \mathcal{A} \) will be the vocabulary. The terminal states \( \mathcal{S}_0 \) will be \( (x, y) \) such that the last literal of \( y \) is <EOS>. The stochastic policy \( \pi \) will be a Seq2Seq RNN which produces a distribution over \( \mathcal{A} \) given state \( s_t \). When we wish to make the dependence of the policy on the RNN parameters \( \theta \) explicit, we will write \( \pi_\theta \). When we sample an action \( a_t \sim \pi(\cdot|s_t) \), we deterministically transition to state \( (x, [y_{t-1}, a_t]) \). If we sample \( a_{T+1} = \text{<EOS>} \) at time \( T + 1 \), the episode terminates and we observe a stochastic reward.

We are given a dataset of input-output-reward triples \( \{(x^{(i)}, y^{(i)}, r^{(i)})\}_{i=1}^n \) where \( y^{(i)} = (a_1^{(i)}, \ldots, a_{T_i}^{(i)}, \text{<EOS>}) \) is the sequence of output words. This data was collected from possibly multiple *behaviour policies* which output \( y^{(i)} \) for the given input \( x^{(i)} \). In the above customer service example, the behaviour policies could be chatbots, or even humans, which were used for conversations with a customer. The rewards \( r^{(i)} \) are scores assigned by a human quality assurance agent to each response of the chatbot. Our goal is to use this data to improve a given target policy \( \pi_\theta \). We will use \( q \) to denote the distribution of the data: \( q(s) \) is the distribution of the states in the dataset, \( q(a|s) \) is the conditional distribution of an action given a state, and \( q(s, a) = q(s)q(a|s) \) is the joint distribution over states and actions. \( q \) is determined by the initial distribution of the inputs \( x^{(i)} \) and the behaviour policies used to collect the training data. Our aim is to find a policy that does well with respect to \( q \). Specifically, we wish to maximise the following objective,
\[
J(\theta) = \sum_{s \in \mathcal{S}} q(s)V^{\pi_\theta}(s). \tag{1}
\]
Here, the value function \( V^{\pi_\theta} \) is not available to us and has to be estimated from the data. This objective is similar to those used in the on-line off-policy policy gradient literature, where \( q \) is replaced by the limiting distribution of the behaviour policy (Degris et al., 2012).

In the derivation of our algorithm, we will need to know \( q(a|s) \) to compute the gradient of our objective. In standard off-policy reinforcement learning settings this is given by the behaviour policy, which is readily available; if the behaviour policy is available to us here, we can use it directly. Otherwise, a simple alternative is to “learn” a behaviour policy. For example, in our experiments we used an RNN trained on the unlabelled data to obtain values for \( q(a|s) \). As long as this learned policy can capture the semantics of natural language (for example, the word apple is more likely than car when the current state is (x, I ate an)), it can be expected to do reasonably well. In the following section, we will derive a stochastic gradient descent (SGD) procedure that will approximately maximise (1).
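As a concrete illustration of how the learned behaviour policy is used, the sketch below computes the importance ratios \( \rho(s_t, a_t) = \pi_\theta(a_t|s_t)/q(a_t|s_t) \) for one episode. The `logp_target` and `logp_behaviour` callables are hypothetical stand-ins for log-probability queries against the target RNN and the maximum-likelihood-trained behaviour RNN.

```python
import numpy as np

def importance_ratios(logp_target, logp_behaviour, x, y):
    """rho(s_t, a_t) = pi_theta(a_t|s_t) / q(a_t|s_t) for one episode.

    `logp_target(s, a)` and `logp_behaviour(s, a)` return log pi_theta(a|s)
    and log q(a|s); in our experiments q(a|s) comes from an RNN trained by
    maximum likelihood on the unlabelled data.
    """
    rhos = []
    for t, a in enumerate(y):
        s = (x, y[:t])                        # state s_t = (x, y_{t-1})
        rhos.append(np.exp(logp_target(s, a) - logp_behaviour(s, a)))
    return np.array(rhos)
```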
Before we proceed, we note that it is customary in the RL literature to assume stochastic transitions between states and rewards at all time steps rather than only at the terminal step. Further, future rewards are usually discounted by a discount factor \( \gamma < 1 \). While we use the above formalism to simplify the exposition, the ideas presented here extend naturally to these more conventional settings.

3 BATCH POLICY GRADIENT

Our derivation follows the blueprint of Degris et al. (2012), who derive an off-policy on-line actor-critic algorithm. Following standard policy gradient methods, we will aim to update the policy by taking steps along the gradient of the objective \( \nabla J(\theta) \):
\[
\nabla J(\theta) = \nabla \mathbb{E}_{s \sim q} \left[ \sum_{a \in \mathcal{A}} \pi_\theta(a|s) Q^{\pi_\theta}(s, a) \right] = \mathbb{E}_{s \sim q} \left[ \sum_{a \in \mathcal{A}} \nabla \pi_\theta(a|s) Q^{\pi_\theta}(s, a) + \pi_\theta(a|s) \nabla Q^{\pi_\theta}(s, a) \right].
\]
The latter term inside the above summation is difficult to work with, so the first step is to ignore it and work with the approximate gradient \( g(\theta) = \mathbb{E}_{s \sim q}[\sum_{a \in \mathcal{A}} \nabla \pi_\theta(a|s) Q^{\pi_\theta}(s, a)] \approx \nabla J(\theta) \). Degris et al. (2012) provide theoretical justification for this approximation in off-policy settings by establishing that \( J(\theta) \leq J(\theta + \alpha g(\theta)) \) for all small enough \( \alpha \). Expanding \( g(\theta) \), we obtain:
\[
g(\theta) = \mathbb{E}_{s \sim q} \left[ \sum_{a \in \mathcal{A}} q(a|s) \frac{\pi_\theta(a|s)}{q(a|s)} \frac{\nabla \pi_\theta(a|s)}{\pi_\theta(a|s)} Q^{\pi_\theta}(s, a) \right] = \mathbb{E}_{(s, a) \sim q(\cdot, \cdot)} \left[ \rho(s, a)\psi(s, a)Q^{\pi_\theta}(s, a) \right]
\]
\[
= \mathbb{E}_{(s_t, a_t) \sim q(\cdot, \cdot)} [\rho(s_t, a_t)\psi(s_t, a_t)(Q^{\pi_\theta}(s_t, a_t) - V^{\pi_\theta}(s_t))]. \tag{2}
\]
Here \( \psi(s, a) = \frac{\nabla \pi_\theta(a|s)}{\pi_\theta(a|s)} = \nabla \log \pi_\theta(a|s) \) is the score function of the policy and \( \rho(s, a) = \pi_\theta(a|s)/q(a|s) \) is the importance sampling coefficient. In the last step, we have used the fact that \( \mathbb{E}_{(s,a) \sim q}[\rho(s, a)\psi(s, a)h(s)] = 0 \) for any function \( h : \mathcal{S} \to \mathbb{R} \) of the current state (Szepesvári, 2010). The purpose of introducing the value function \( V^{\pi_\theta} \) is to reduce the variance of the SGD updates – we want to assess how good/bad action \( a_t \) is relative to how well \( \pi_\theta \) will do at state \( s_t \) in expectation. If \( a_t \) is a good action (\( Q^{\pi_\theta}(s_t, a_t) \) is large relative to \( V^{\pi_\theta}(s_t) \)), the coefficient of the score function is positive and the update will change \( \theta \) so as to assign a higher probability to action \( a_t \) at state \( s_t \).

The \( Q^{\pi_\theta}, V^{\pi_\theta} \) functions are not available to us, so we will replace them with estimates. For \( V^{\pi_\theta}(s_t) \) we will use an estimate \( \hat{V}(s_t) \) – we will discuss choices for this shortly. The action-value function, however, is usually not estimated in RL policy gradient settings, to avoid the high sample complexity. A sensible stochastic approximation for \( Q^{\pi_\theta}(s_t, a_t) \) is the sum of future rewards from the current state (Sutton & Barto, 1998). (Note that \( Q^{\pi_\theta}(s_t, a_t) = V^{\pi_\theta}(s_{t+1}) \) for deterministic transitions. However, it is important not to interpret the term in (2) as the difference in the value function between successive states. Conditioned on the current time step, \( V^{\pi_\theta}(s_t) \) is deterministic, while \( V^{\pi_\theta}(s_{t+1}) \) is stochastic. In particular, while a crude estimate suffices for the former, the latter is critical and should reflect the rewards received during the remainder of the episode.)
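To illustrate the score function \( \psi \), the sketch below computes \( \nabla_\theta \log \pi_\theta(a|s) \) in closed form for a linear softmax policy \( \pi_\theta(a|s) \propto \exp(\theta^\top \phi(s,a)) \). This simple policy class is an assumption made purely for illustration; for the Seq2Seq policy of this paper, \( \psi \) would instead be obtained by backpropagation through the RNN.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                       # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def score_function(theta, phi_sa):
    """psi(s, a) = grad_theta log pi_theta(a|s) for a linear softmax policy.

    phi_sa is a (|A|, d) array holding the features phi(s, a) for every
    action a at the current state s. Row a of the result is psi(s, a),
    i.e. phi(s, a) minus the policy's expected feature vector.
    """
    pi = softmax(phi_sa @ theta)          # pi_theta(.|s)
    expected_phi = pi @ phi_sa            # sum_a pi_theta(a|s) phi(s, a)
    return phi_sa - expected_phi
```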
If we receive reward \( r \) at the end of the episode, we can then use \( Q^{\pi_\theta}(s_t, a_t) \approx r \) for all time steps \( t \) in the episode. However, since \( q(a_t|s_t) \) differs from \( \pi_\theta(a_t|s_t) \), we need to re-weight future rewards via importance sampling, \( r \prod_{i=t}^{T} \rho(s_i, a_i) \). This accounts for the fact that an action \( a \) given \( s \) may have been more likely under the policy \( \pi_\theta(\cdot|s) \) than under \( q(\cdot|s) \), or vice versa. Instead of directly using the re-weighted rewards, we will use the so-called \( \lambda \)-return, a convex combination of the re-weighted rewards and the value function (Sutton, 1988; 1984). In our setting, the \( \lambda \)-returns are defined recursively from the end of the episode \( t = T + 1 \) to \( t = 1 \) as follows. For \( \lambda \in (0, 1] \),
\[
\tilde{r}_{T+1}^\lambda = r, \qquad \tilde{r}_t^\lambda = (1 - \lambda)V^{\pi_\theta}(s_{t+1}) + \lambda \rho(s_t, a_t) \tilde{r}_{t+1}^\lambda \quad \text{for } t = T, \ldots, 1. \tag{3}
\]
The purpose of introducing \( \lambda \) is to reduce the variance of using the future rewards alone as an estimate for \( Q^{\pi_\theta}(s_t, a_t) \). This is primarily useful when rewards are noisy. If the rewards are deterministic, \( \lambda = 1 \), which ignores the value function, is the best choice. In noisy settings, it is recommended to use \( \lambda < 1 \) (see Sec. 3.1 of Szepesvári (2010)). In our algorithm, we replace \( \tilde{r}_t^\lambda \) with \( r_t^\lambda \), in which \( V^{\pi_\theta} \) is replaced by the estimate \( \hat{V} \). Putting it all together, and letting \( \alpha \) denote the step size, we have the following update rule for the parameters \( \theta \) of our policy:
\[
\theta \leftarrow \theta + \alpha \rho(s_t, a_t) \psi(s_t, a_t)(r_t^\lambda - \hat{V}(s_t)).
\]
In Algorithm 1, we summarise the procedure with the updates performed after an entire pass through the dataset. In practice, we perform the updates in mini-batches.
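The \( \lambda \)-returns in (3) can be computed with a single backward pass over the episode. A minimal sketch follows; `v_next[t]` stands for the estimate \( \hat{V}(s_{t+1}) \), and indexing is zero-based.

```python
import numpy as np

def lambda_returns(r, rhos, v_next, lam):
    """Backward computation of the lambda-returns in Eq. (3).

    r       : scalar reward received at the end of the episode
    rhos    : rhos[t] = rho(s_t, a_t), one value per time step t = 1..T
    v_next  : v_next[t] = V_hat(s_{t+1}), the value estimate one step ahead
    Returns an array containing r_t^lambda for every time step.
    """
    T = len(rhos)
    ret = np.empty(T)
    r_next = r                                   # r_{T+1}^lambda = r
    for t in reversed(range(T)):                 # t = T, ..., 1
        ret[t] = (1 - lam) * v_next[t] + lam * rhos[t] * r_next
        r_next = ret[t]
    return ret
```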
An Estimator for the Value Function: All that is left is to specify an estimator \( \hat{V} \) for the value function. We first acknowledge that this is a difficult problem: \( \mathcal{S} \) is quite large, and for the applications considered in this work there might not be enough data since labels are expensive. That said, the purpose of \( \hat{V} \) in (2) and (3) is to reduce the variance of our SGD updates and speed up convergence, so it is not critical that this estimate be precise – even with a crude estimator the algorithm will eventually converge. Secondly, standard methods for estimating the value function based on minimising the projected Bellman error require second derivatives, which might be intractable for highly nonlinear parametrisations of \( \hat{V} \) (Maei, 2011). For these statistical and computational reasons, we resort to simple estimators for \( V^{\pi_\theta} \) and study two options.

The first is a simple heuristic used previously in the RL literature, namely a constant estimator \( \hat{V} \) equal to the mean of all rewards in the dataset (Williams, 1992). The second uses the parametrisation \( \hat{V}(s) = \sigma(\xi^\top \phi(s)) \), where \( \sigma \) is the logistic function and \( \phi(s) \in \mathbb{R}^d \) is a Euclidean representation of the state. For \( \hat{V}(s) \) of this form, Hessian-vector products \( \nabla^2 \hat{V}(s) \cdot w \) can be computed in \( \mathcal{O}(d) \) time. To estimate this value function, we use the GTD(\( \lambda \)) estimator from Maei (2011). As \( \phi(s) \) we use the hidden state of the LSTM. The rationale is as follows. In an LSTM trained using maximum likelihood, the hidden state contains useful information about the maximum likelihood objective. If there is overlap between the maximum likelihood and reinforcement learning objectives, we can expect the hidden state to also carry useful information about the RL objective, and hence we can use it to estimate the value function, whose expectation is the RL objective. We describe our implementation of GTD(\( \lambda \)) in Appendix A and give some implementation details in Section 4.

Algorithm 1 Batch Policy Gradient (BPG)
Given: Data \( \{ (x_i, y_i, r_i) \}_{i=1}^n \), step size \( \alpha \), return coefficient \( \lambda \), initial \( \theta_0 \).
– Set \( \theta \leftarrow \theta_0 \).
– For each epoch \( k = 1, 2, \ldots \)
▶ Set \( \Delta \theta \leftarrow 0 \)
▶ For each episode \( i = 1, \ldots, n \)
• \( r_{T+1}^\lambda \leftarrow r_i \)
• \( \rho_t \leftarrow \pi_\theta(a_t^{(i)}|s_t^{(i)})/q(a_t^{(i)}|s_t^{(i)}) \) for \( t = 1, \ldots, T^{(i)} \).
• For each time step in reverse \( t = T^{(i)}, \ldots, 1 \)
(i) \( r_t^\lambda \leftarrow (1-\lambda)\widehat{V}(s_{t+1}^{(i)}) + \lambda \rho_t r_{t+1}^\lambda \)
(ii) \( \Delta \theta \leftarrow \Delta \theta + \frac{1}{T^{(i)}} \rho_t \psi(s_t^{(i)}, a_t^{(i)})(r_t^\lambda - \widehat{V}(s_t^{(i)})) \)
(iii) Compute updates for the value function estimate \( \widehat{V} \).
▶ Update the policy: \( \theta \leftarrow \theta + \alpha \Delta \theta \)
▶ Update the value function estimate \( \widehat{V} \).
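A minimal Python sketch of one epoch of Algorithm 1 follows. The `policy` and `v_hat` helpers are assumptions standing in for the Seq2Seq RNN and the value function estimate; in practice the score function \( \psi \) would come from backpropagation in the deep learning framework.

```python
import numpy as np

def bpg_epoch(episodes, policy, v_hat, alpha, lam):
    """One epoch of Algorithm 1 (BPG); helper names are assumptions.

    Each episode is a tuple (states, actions, rhos, r) with
    len(states) == T + 1 (the final state is terminal),
    rhos[t] = pi_theta(a_t|s_t) / q(a_t|s_t), and r the stochastic episode
    reward. `policy.score(s, a)` returns psi(s, a) = grad log pi_theta(a|s)
    as an array shaped like `policy.theta`; `v_hat(s)` estimates V(s).
    """
    delta = np.zeros_like(policy.theta)
    for states, actions, rhos, r in episodes:
        T = len(actions)
        r_lam = r                                   # r_{T+1}^lambda <- r_i
        for t in reversed(range(T)):                # t = T, ..., 1
            # step (i): lambda-return
            r_lam = (1 - lam) * v_hat(states[t + 1]) + lam * rhos[t] * r_lam
            # step (ii): accumulate the policy gradient contribution
            adv = r_lam - v_hat(states[t])
            delta += (rhos[t] / T) * adv * policy.score(states[t], actions[t])
            # step (iii) would accumulate GTD(lambda) updates for v_hat here
    policy.theta += alpha * delta                   # theta <- theta + alpha * dtheta
```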
Comparison with Other RL Approaches in NLP: Policy gradient methods have been studied extensively in on-policy settings, where the goal is to improve the current policy on the fly (Amari, 1998; Williams, 1992). To our knowledge, all RL approaches for Seq2Seq models have also adopted on-policy policy gradient updates (Bahdanau et al., 2016; Li et al., 2016; Ranzato et al., 2015; Williams & Zweig, 2016). However, on-policy methods break down in off-policy settings, because any update must account for the probability of the action under the target policy. For example, suppose the behaviour policy took action \( a \) at state \( s \) and received a low reward. Then we should modify the target policy \( \theta \) so as to reduce \( \pi_\theta(a|s) \). However, if the target policy already assigns low probability to \( a|s \), we should not be as aggressive when making the update. The re-weighting by \( \rho(s, a) \) via importance sampling does precisely this. A second difference is that we study batch RL. Standard on-line methods are designed for settings where we have to continually improve the target while exploring using the behaviour policy. Critical to such methods is the estimation of future rewards at the current state and of the future actions that will be taken by both the behaviour and target policies. To tackle this, previous efforts either ignore future rewards altogether (Williams, 1992), resort to heuristics to distribute a delayed reward to previous time steps (Bahdanau et al., 2016; Williams & Zweig, 2016), or make additional assumptions about the distribution of the states, such as stationarity of the Markov process (Degris et al., 2012; Maei, 2011). In batch settings, however, the \( \lambda \)-return from a given time step can be computed directly via (3), since the future actions and rewards are available in the dataset. Access to this information provides a crucial advantage over techniques designed for on-line settings.

4 EXPERIMENTS

Implementation Details: We implement our methods using Chainer (Tokui et al., 2015) and group sentences of the same length together in the same batch to make use of GPU parallelisation. Since different batches can be of different lengths, we do not normalise the gradients by the batch size, as we should take larger steps after seeing more data. However, we normalise by the length of the output sequence to allocate equal weight to all sentences. We truncate all output sequences to length 64 and use a maximum batch size of 32. We found it necessary to use a very small step size (\( 10^{-5} \)); otherwise the algorithm has a tendency to get stuck at bad parameter values. While importance re-weighting is necessary in off-policy settings, it can increase the variance of the updates, especially when \( q(a_t|s_t) \) is very small. A common technique to alleviate this problem is to clip the \( \rho(s_t, a_t) \) value (Swaminathan & Joachims, 2015). In addition to individual \( \rho(s_t, a_t) \) values, our procedure involves a product of \( \rho(s_t, a_t) \) values when computing the future rewards (3). The effect of large \( \rho \) values is a large weight \( \rho_t(r_t^\lambda - \widehat{V}(s_t)) \) for the score function in step (ii) of Algorithm 1. In our implementation, we therefore clip this weight at 5, which controls the variance of the updates and ensures that a single example does not disproportionately affect the gradient.
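In code, the clipping just described might look as follows. Clipping the magnitude symmetrically at \( \pm 5 \) is our reading of the procedure; the sign of the weight must be preserved since the advantage \( r_t^\lambda - \widehat{V}(s_t) \) can be negative.

```python
import numpy as np

def clipped_weight(rho_t, r_lam_t, v_t, clip=5.0):
    """Clipped score-function weight rho_t * (r_t^lambda - V_hat(s_t)).

    We clip the combined weight rather than individual rho values, so a
    single example cannot disproportionately affect the gradient.
    """
    return float(np.clip(rho_t * (r_lam_t - v_t), -clip, clip))
```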
RNN Design: In both experiments we use deep LSTMs with two layers for the encoder and decoder RNNs. The output of the bottom layer is fed to the top layer, and in the decoder RNN the output of the top layer is fed to a softmax layer of size \( |\mathcal{A}| \). When we implement GTD(\( \lambda \)) to estimate \( V^{\pi_\theta} \), we use the hidden state of the bottom LSTM as \( \phi(s) \). When performing our policy updates, we only change the parameters of the top LSTM and the softmax layer of the decoder RNN. If we were to change the bottom LSTM too, the state representation \( \phi(s) \) would also change as the policy changes, which violates the MDP framework. In other words, we treat the bottom layer as part of the environment in our MDP. To facilitate a fair comparison, we only modify the top LSTM and softmax layers in all methods. We illustrate this set-up in Fig. 1. We note that if one is content with the constant estimator, one can change all parameters of the RNN.

4.1 SOME SYNTHETIC EXPERIMENTS ON THE EUROPARL DATASET

To convey the main intuitions of our method, we compare it against other baselines on a synthetic task on the European parliament proceedings corpus (Koehn, 2005). We describe the experimental set-up briefly, deferring details to Appendix B.1. The input sequence to the RNN was each sentence in the dataset. Given an input, the goal was to reproduce the words in the input without repeating words in a list of forbidden words. The RL algorithm does not explicitly know either component of this objective but has to infer it from the stochastic rewards assigned to the input-output sequences in the dataset. We used a training set of 500 input-output-reward triplets for the RL methods. We initialised all methods by maximum likelihood training on 6000 input-output sequences where the output sequence was the reverse of the input sequence. The maximum likelihood objective thus captures part of the RL objective. This set-up reflects naturally occurring practical scenarios for the algorithm, where a large amount of unlabelled data can be used to bootstrap a policy if the maximum likelihood and reinforcement learning objectives are at least partially aligned.

We trained the RL algorithms for 200 epochs on the training set. At the end of each epoch, we generated outputs from the policy on a test set of 500 inputs and scored them according to our criterion. We plot the test set error against the number of epochs for various methods in Fig. 2. Fig. 2(a) compares three methods: BPG with and without maximum likelihood initialisation, and a version of BPG which does not use importance sampling. Clearly, bootstrapping an RL algorithm with ML can be advantageous, especially if data is abundantly available for ML training. Further, without importance sampling, the algorithm is not as competitive, for the reasons described in Section 3. In all three cases, we used a constant estimator for \( \hat{V} \) and \( \lambda = 0.5 \). The dashed line indicates the performance of ML training alone. BPG-NIS is similar to the algorithms of Ranzato et al. (2015) and Williams & Zweig (2016), except that their methods implicitly use \( \lambda = 1 \).

Fig. 2(b) compares four methods: BPG and its on-line version OPG, each with the constant (CONST) and GTD(\( \lambda \)) estimators for \( \hat{V} \). The on-line versions of the algorithms are a direct implementation of the method in Degris et al. (2012), which does not use the future rewards as we do. The first observation is that while GTD(\( \lambda \)) is slightly better in the early iterations, it performs roughly the same as the constant estimator in the long run. Next, BPG performs significantly better than OPG. We believe this is due to the following two reasons. First, the on-line updates assume stationarity of the MDP. When this does not hold, such as in limited-data instances like ours, the SGD updates can be very noisy. Secondly, the value function estimate plays a critical role in the on-line version. While obtaining a reliable estimate \( \hat{V} \) is reasonable in on-line settings, where we can explore indefinitely to collect a large number of samples, it is difficult when one only has a limited number of labelled samples. Finally, we compare BPG with different choices of \( \lambda \) in Fig. 2(c). As noted previously, \( \lambda < 1 \) is useful with stochastic rewards, but choosing too small a value is detrimental. The optimal \( \lambda \) value may depend on the problem.

Figure 1: Illustration of the encoder and decoder RNNs used in our experiments. In this example, the input to the encoder is \( x = (\ldots, A, B, <EOS>) \) and the output of the decoder is \( y = (U, V, W, \ldots) \). We use four different LSTMs for the bottom and top layers of the encoder and decoder networks. In our RL algorithms, we only change the top LSTM and the softmax layer of the decoder RNN, as shown in red dashed lines.

Figure 2: Results for the synthetic experiments. (a): Comparison of BPG with and without maximum likelihood (ML) initialisation and BPG without importance sampling (BPG-NIS). The dotted line indicates the performance of ML alone. (b): Comparison of BPG with its on-line counterpart OPG; we compare both methods using a constant estimator (CONST) for the value function and GTD(\( \lambda \)). (c): Comparison of BPG with different values of \( \lambda \). All curves are averaged over 10 experiments where the training set was picked randomly from a pool; the test set was the same in all 10 experiments. The error bars indicate one standard error.

4.2 RESTAURANT RECOMMENDATIONS

We use data from an on-line restaurant recommendation service. Customers log into the service and chat with a human agent asking for restaurant recommendations. The agents ask a series of questions, such as food preferences and group size, before recommending a restaurant. The goal is to train a chatbot (policy) which can replace or assist the agent. For the reasons explained in Section 1, maximum likelihood training alone will not be adequate. By obtaining reward labels for responses produced by various other bots, we hope to improve on a bot initialised using maximum likelihood.

Data Collection: We collected data for RL as follows. We trained five different RNN chatbots with different LSTM parameters via maximum likelihood on 6000 conversations from this service. The bots were trained to reproduce what the human agent said (output \( y \)) given the past conversation history (input \( x \)). While the dataset is relatively small, we can still expect our bots to do reasonably well since we work in a restricted domain. Next, we generated responses from these bots on 1216 separate conversations and had them scored by workers on Amazon Mechanical Turk (AMT). For each response by the bots in each conversation, the workers were shown the history before the particular response and asked to score (label) the response on a 0–2 scale (see Appendix B.2). We collected scores from three different workers for each response and used the mean as the reward.

Policies and RL Application: Next, we initialised two bots via maximum likelihood and then used BPG to improve them using the labels collected from AMT. For the two bots we used the following LSTM hidden state size \( H \), word embedding size \( E \) and BPG parameters. These parameters were chosen arbitrarily and differ from those of the bots used in the data collection described above.
• Bot-1: \( H = 512, E = 256. \) BPG: \( \lambda = 0.5, \) GTD(\( \lambda \)) estimator for \( \hat{V} \).
• Bot-2: \( H = 400, E = 400. \) BPG: \( \lambda = 0.5, \) constant estimator for \( \hat{V} \).

Testing: We used a separate test set of 500 conversations containing a total of more than 3500 input-output (conversation history - response) pairs. For each of Bot-1 and Bot-2 we generated responses before and after applying BPG, totalling 4 responses per input. We then had them scored by workers on AMT using the same set-up described above. The same worker labels the before-BPG and after-BPG responses from the same bot; this controls for spurious noise effects and allows us to conduct a paired test.
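A sketch of how the paired comparisons just described might be computed, using SciPy's paired t-test and Wilcoxon signed-rank test; the array names are illustrative.

```python
import numpy as np
from scipy import stats

def paired_tests(before, after):
    """Paired comparison of before/after-BPG scores from the same workers.

    `before[i]` and `after[i]` hold the scores the same AMT worker gave the
    ML-only and ML+BPG responses for input i.
    """
    before, after = np.asarray(before), np.asarray(after)
    _, p_t = stats.ttest_rel(after, before)       # paired t-test
    _, p_w = stats.wilcoxon(after, before)        # Wilcoxon signed rank test
    return {"mean_ml": before.mean(), "mean_ml_bpg": after.mean(),
            "p_t_test": p_t, "p_wilcoxon": p_w}
```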
We collected 16,808 before-and-after label pairs each for Bot-1 and Bot-2 and compare them using a paired t-test and a Wilcoxon signed rank test.

<table> <tr> <th></th> <th>Mean (ML)</th> <th>Mean (BPG+ML)</th> <th>Paired t-test</th> <th>Wilcoxon</th> </tr> <tr> <td>Bot-1</td> <td>0.8951 ± 0.0070</td> <td>0.9052 ± 0.0069</td> <td>0.10296</td> <td>0.07930</td> </tr> <tr> <td>Bot-2</td> <td>0.7009 ± 0.0066</td> <td>0.7317 ± 0.0066</td> <td>0.00007</td> <td>0.00017</td> </tr> </table>

Table 1: Results of the Mechanical Turk experiments on the restaurant dataset. The first two columns are the mean labels of all responses before and after applying BPG to the bots initialised via maximum likelihood. The last two columns are the p-values from a paired t-test and a paired Wilcoxon signed rank test. For both Bot-1 and Bot-2, we obtained 16,808 before and after responses scored by the same worker. Bot-2 is statistically significant at the 10% level on both tests, while Bot-1 is significant on the Wilcoxon test.

Results: The results are shown in Table 1. The improvements on Bot-2 are statistically significant at the 10% level on both tests, while Bot-1 is significant on the Wilcoxon test. The large p-values for Bot-1 are due to the noisy nature of AMT experiments; we believe significance can be attained by collecting more labels, which would reduce the standard error in both tests. In Appendix B.2 we present some examples of conversation histories and the responses generated by the bots before and after applying BPG, and qualitatively discuss specific kinds of issues that we were able to overcome via reinforcement learning.

5 CONCLUSION

We presented a policy gradient method for batch reinforcement learning to train chatbots. The data for this algorithm are input-output sequences generated using other chatbots/humans, together with stochastic rewards for each output in the dataset. This setting arises in many applications, such as customer service systems, where there is usually an abundance of unlabelled data but labels (rewards) are expensive to obtain and can be noisy. Our algorithm is able to efficiently use minimal labelled data to improve chatbots previously trained through maximum likelihood on unlabelled data. While our method draws its ideas from previous policy gradient work in the RL and NLP literature, there are some important distinctions that contribute to its success in the settings of interest in this work. Via importance sampling, we ensure that the probability of an action is properly accounted for in off-policy updates. By explicitly working in the batch setting, we are able to use knowledge of future actions and rewards to converge faster to the optimum. Further, we use the unlabelled data both to initialise our method and to learn a reasonable behaviour policy. Our method outperforms baselines on a series of synthetic and real experiments. The ideas presented in this work extend beyond chatbots: they can be used in applications such as question answering, generating image descriptions and machine translation, where an output sentence generated by a policy is scored by a human labeller to provide a weak supervision signal.

ACKNOWLEDGEMENTS

We would like to thank Christoph Dann for helpful conversations and Michael Armstrong for helping us with the Amazon Mechanical Turk experiments.

REFERENCES

Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086, 2016.

Richard Bellman. Dynamic programming and Lagrange multipliers. Proceedings of the National Academy of Sciences, 42(10):767–769, 1956.

Vivek S Borkar. Stochastic approximation with two time scales. Systems & Control Letters, 29(5):291–294, 1997.

Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Thomas Degris, Martha White, and Richard S Sutton. Off-policy actor-critic. arXiv preprint arXiv:1205.4839, 2012.

David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. Building Watson: An overview of the DeepQA project. AI Magazine, 31(3):59–79, 2010.

Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. Deep reinforcement learning with a natural language action space. arXiv preprint arXiv:1511.04636, 2015.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693–1701, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In EMNLP, volume 3, pp. 413, 2013.

Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3128–3137, 2015.

Philipp Koehn. Europarl: A parallel corpus for statistical machine translation. In MT Summit, volume 5, pp. 79–86, 2005.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055, 2015.

Jiwei Li, Will Monroe, Alan Ritter, and Dan Jurafsky. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016.

Hamid Reza Maei. Gradient temporal-difference learning algorithms. University of Alberta, 2011.

Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.

Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941, 2015.

Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.

Mrinmaya Sachan, Avinava Dubey, and Eric P Xing. Science question answering using instructional materials. arXiv preprint arXiv:1602.04375, 2016.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014.

Richard S Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988.

Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge, 1998.

Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057–1063, 1999.

Richard Stuart Sutton. Temporal credit assignment in reinforcement learning. University of Massachusetts, Amherst, 1984.

Adith Swaminathan and Thorsten Joachims. Counterfactual risk minimization: Learning from logged bandit feedback. In Proceedings of the 32nd International Conference on Machine Learning, pp. 814–823, 2015.

Csaba Szepesvári. Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 4(1):1–103, 2010.

Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS), 2015.

Jason D Williams and Geoffrey Zweig. End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning. arXiv preprint arXiv:1606.01269, 2016.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

APPENDIX

A IMPLEMENTATION OF GTD(\( \lambda \))

We present the details of the GTD(\( \lambda \)) algorithm (Maei, 2011) for estimating a value function in Algorithm 2. While Maei (2011) gives an on-line version, we present a batch version in which the future rewards of an episode are known. We use a parametrisation of the form \( \widehat{V}(s) = \widehat{V}_\xi(s) = \sigma(\xi^\top \phi(s)) \), where \( \xi \in \mathbb{R}^d \) is the parameter to be estimated and \( \sigma(z) = 1/(1 + e^{-z}) \) is the logistic function. The algorithm requires two step sizes, \( \alpha' \) and \( \alpha'' \) below, for the updates to \( \xi \) and the ancillary parameter \( w \), respectively. Following the recommendations in Borkar (1997), we use \( \alpha'' \ll \alpha' \). In our implementations, we used \( \alpha' = 10^{-5} \) and \( \alpha'' = 10^{-6} \). When we run BPG, we perform steps (a)-(f) of Algorithm 2 in step (iii) of Algorithm 1, and the last two update steps of Algorithm 2 in the last update step of Algorithm 1. The gradient and Hessian of \( \widehat{V}_\xi \) have the following forms:
\[
\nabla_\xi \widehat{V}_\xi(s) = \widehat{V}_\xi(s)(1 - \widehat{V}_\xi(s))\phi(s), \qquad \nabla_\xi^2 \widehat{V}_\xi(s) = \widehat{V}_\xi(s)(1 - \widehat{V}_\xi(s))(1 - 2\widehat{V}_\xi(s))\phi(s)\phi(s)^\top.
\]
The Hessian-vector product in step (d) of Algorithm 2 can be computed in \( \mathcal{O}(d) \) time via
\[
\nabla_\xi^2 \widehat{V}_\xi(s) \cdot w = \left[ \widehat{V}_\xi(s)(1 - \widehat{V}_\xi(s))(1 - 2\widehat{V}_\xi(s))(\phi(s)^\top w) \right] \phi(s).
\]
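The displays above translate directly into code. A minimal NumPy sketch:

```python
import numpy as np

def v_hat(xi, phi):
    """V_hat(s) = sigma(xi^T phi(s)) with the logistic sigma."""
    return 1.0 / (1.0 + np.exp(-(xi @ phi)))

def grad_v_hat(xi, phi):
    """grad_xi V_hat(s) = V (1 - V) phi(s)."""
    v = v_hat(xi, phi)
    return v * (1.0 - v) * phi

def hessian_vector_product(xi, phi, w):
    """(grad^2_xi V_hat(s)) w in O(d) time, as in the display above."""
    v = v_hat(xi, phi)
    return (v * (1.0 - v) * (1.0 - 2.0 * v) * (phi @ w)) * phi
```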
Algorithm 2 GTD(\( \lambda \))
Given: Data \( \{(x_i, y_i, r_i)\}_{i=1}^n \), step sizes \( \alpha', \alpha'' \), return coefficient \( \lambda \), initial \( \xi_0 \).
- Set \( \xi \leftarrow \xi_0, w \leftarrow 0 \).
- For each epoch \( k = 1, 2, \ldots \)
▶ Set \( \Delta \xi \leftarrow 0, \Delta w \leftarrow 0 \).
▶ For each episode \( i = 1, \ldots, n \)
• Set \( g_{T+1}^\lambda \leftarrow r_i,\ q_{T+1}^\lambda \leftarrow 0 \)
• \( \rho_t \leftarrow \pi_\theta(a_t^{(i)}|s_t^{(i)})/q(a_t^{(i)}|s_t^{(i)}) \) for \( t = 1, \ldots, T^{(i)} \).
• For each time step in reverse \( t = T^{(i)}, \ldots, 1 \):
(a) \( g_t^\lambda \leftarrow \rho_t \left( (1 - \lambda)\widehat{V}_\xi(s_{t+1}^{(i)}) + \lambda g_{t+1}^\lambda \right) \)
(b) \( q_t^\lambda \leftarrow \rho_t \left( (1 - \lambda)\nabla_\xi \widehat{V}_\xi(s_{t+1}^{(i)}) + \lambda q_{t+1}^\lambda \right) \)
(c) \( \delta_t \leftarrow g_t^\lambda - \widehat{V}_\xi(s_t^{(i)}) \)
(d) \( h_t \leftarrow (\delta_t - w^\top \nabla_\xi \widehat{V}_\xi(s_t^{(i)}))\, \nabla_\xi^2 \widehat{V}_\xi(s_t^{(i)}) \cdot w \)
(e) \( \Delta w \leftarrow \Delta w + \frac{1}{T^{(i)}} (\delta_t - w^\top \nabla_\xi \widehat{V}_\xi(s_t^{(i)})) \nabla_\xi \widehat{V}_\xi(s_t^{(i)}) \)
(f) \( \Delta \xi \leftarrow \Delta \xi + \frac{1}{T^{(i)}} \left( \delta_t \nabla_\xi \widehat{V}_\xi(s_t^{(i)}) - q_t^\lambda\, w^\top \nabla_\xi \widehat{V}_\xi(s_t^{(i)}) - h_t \right) \)
▶ \( w \leftarrow w + \alpha'' \Delta w \).
▶ \( \xi \leftarrow \xi + \alpha' \Delta \xi \).

B ADDENDUM TO EXPERIMENTS

B.1 DETAILS OF THE SYNTHETIC EXPERIMENT SET UP

Given an input and output sequence, we used the average of five Bernoulli rewards \( \mathrm{Bern}(r) \), where the parameter was \( r = 0.75 \times r_f + 0.25 \times r_t \). Here \( r_f \) is the fraction of the input's words that also appear in the output sequence, while \( r_t = 0.01^{p_t} \), where \( p_t \) is the fraction of forbidden words in the output sequence. As the forbidden words, we used the 50 most common words in the dataset. So if an input had 10 words, of which 2 were forbidden, an output sequence repeating 7 of the allowed words and 1 forbidden word would receive an expected score of \( 0.75 \times (8/10) + 0.25 \times 0.01^{(1/8)} = 0.7406 \).
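A short sketch of this reward computation under our reading of the set-up follows; treating word multiplicity via sets is an assumption made for simplicity.

```python
import numpy as np

def synthetic_reward(x_words, y_words, forbidden, rng=np.random):
    """Average of five Bernoulli(r) draws with r = 0.75 * r_f + 0.25 * r_t."""
    x_set = set(x_words)
    r_f = len(x_set & set(y_words)) / len(x_set)         # repeated input words
    p_t = sum(w in forbidden for w in y_words) / len(y_words)
    r = 0.75 * r_f + 0.25 * 0.01 ** p_t                  # r_t = 0.01 ** p_t
    return rng.binomial(1, r, size=5).mean()
```

On the worked example above (10 input words, 8 of them repeated, 1 forbidden word among 8 output words), this gives \( r \approx 0.7406 \), matching the text.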
The training and testing sets for reinforcement learning were obtained as follows. We trained 4 bots using maximum likelihood on 6000 input-output sequences, as indicated in Section 4.1. The LSTM hidden state size \( H \) and word embedding size \( E \) for the 4 bots were \( (H, E) = (256, 128), (128, 64), (64, 32), (32, 16) \). The vocabulary size was \( |\mathcal{A}| = 12000 \). We used these bots to generate outputs for 500 different input sequences each. This collection of input and output pairs was scored stochastically as described above to produce a pool of 2000 input-output-score triplets. From this pool we use a fixed set of 500 triplets for testing across all our experiments. From the remaining 1500 data points, we randomly select 500 for training on each execution of an algorithm. For all RL algorithms, we used an LSTM with 16-dimensional hidden states and 16-dimensional word embeddings.

B.2 ADDENDUM TO THE AMT RESTAURANT RECOMMENDATIONS EXPERIMENT

MORE DETAILS ON THE EXPERIMENTAL SET UP

We collected the initial batch of training data for RL as follows. We trained, via maximum likelihood on 6000 conversations, five RNN bots whose LSTM hidden size \( H \) and word embedding size \( E \) were \( (H, E) = (512, 512), (256, 256), (128, 128), (512, 256), (256, 64) \). The inputs \( x \) were all words from the history of the conversation, truncated at length 64, i.e. the most recent 64 words in the conversation history. The outputs were the actual responses of the agent, also truncated to length 64. As the vocabulary we use the \( |\mathcal{A}| = 4000 \) most commonly occurring words in the dataset and replace the rest with an <UNK> token. Using the bots trained this way, we generated responses on 1216 separate conversations. This data was sent to AMT workers who were asked to label the conversations on the following scale.
• 2: The response is coherent and appropriate given the history and advances the conversation forward.
• 1: The response has some minor flaws but is discernible and appropriate.
• 0: The response is either completely incoherent or inappropriate and fails to advance the conversation forward.

SOME QUALITATIVE RESULTS

In Tables 2 and 3 we present some examples. The text in black/grey shows the conversation history, the response in blue is by the bot trained via maximum likelihood (ML) alone, and the response in red is by the bot after improvement with our BPG reinforcement learning algorithm. The first two examples of Table 2 show cases where the ML bot repeated generic questions (on budget, group size, etc.) even though they had already been answered previously. After applying BPG, we are able to correct such issues, even though some grammatical errors remain. In the second, third and fourth examples, we see that the ML+BPG bot takes context into consideration when responding. For example, the customer asks for oriental/Mexican/Italian food; while the ML bot does not take this into consideration, the ML+BPG bot is able to provide relevant answers. (In the third example, however, the name of the recommended restaurant suggests that the food might be Indian and not Mexican.) In the final example of Table 2, the customer asks a direct question about smoking. The ML bot provides an irrelevant answer, whereas the ML+BPG bot directly responds to the question. In some examples, the ML bot had a tendency to produce sentences that were grammatically correct but nonsensical, sensible but grammatically incorrect, or just complete gibberish. We were able to correct such issues via RL; the first three examples of Table 3 present such cases. Occasionally the opposite happened: the last example of Table 3 is one such instance.

<table> <tr> <th>Customer:</th> <td>Hi there, I’m looking for a restaurant recommendation.</td> </tr> <tr> <th>Agent:</th> <td>How many in your party?</td> </tr> <tr> <th>Customer:</th> <td>5, including children.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">How many people will be joining you?</span></td> <th>[0]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">That is a fun size, and what is your budget joining you?</span></td> <th>[1.5]</th> </tr> </table>

...

<table> <tr> <th>Customer:</th> <td>Hey... Sorry, it’s the anniversary of my wife’s death. I always do oriental tonight.</td> </tr> <tr> <th>Agent:</th> <td>I’m sorry to hear that. I will do my best to find just what you need.</td> </tr> <tr> <th>Customer:</th> <td>Thank you very much.</td> </tr> <tr> <th>Agent:</th> <td>What kind of budget are you looking at?</td> </tr> <tr> <th>Customer:</th> <td>Doesn’t matter...
$100 sounds like a good number.</td> </tr> <tr> <th>Agent:</th> <td>Will it be just you, or are others going to join you?</td> </tr> <tr> <th>Customer:</th> <td>It’s just me tonight, but if you are off soon I wouldn’t mind a friendly ear. No pressure.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">What type of budget are you looking to stay within?</span></td> <th>[0.33]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">How about: Café Hungry <UNK> ? They serve sea American to oriental on site such?</span></td> <th>[1.33]</th> </tr> </table>

<table> <tr> <th>Customer:</th> <td>I would love to find a great Mexican restaurant!</td> </tr> <tr> <th>Agent:</th> <td>Great, and will anybody be joining you?</td> </tr> <tr> <th>Customer:</th> <td>There will be a total of 3 of us.</td> </tr> <tr> <th>...</th> <td></td> </tr> <tr> <th>Customer:</th> <td>Could you find me a place with outdoor seating?</td> </tr> <tr> <th>Agent:</th> <td>How about: The Painted Burrito. They serve Mexican food.</td> </tr> <tr> <th>Customer:</th> <td>Eh, I’ve heard that one isn’t great. Can you please suggest another?</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">How about: The Hungry .w seafood restaurant? they serve sea and American food.</span></td> <th>[0.33]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">I am happy to, one moment. How about: Diva Indian Bistro Boston bar? They serve Mexican food.</span></td> <th>[1.67]</th> </tr> </table>

<table> <tr> <th>Customer:</th> <td>Hello, I’m looking for a restaurant.</td> </tr> <tr> <th>Agent:</th> <td>I can help! Please tell me what kind of foods you most enjoy. I see you are in the Cambridge area.</td> </tr> <tr> <th>Customer:</th> <td>I would like a restaurant that serves Italian food.</td> </tr> <tr> <th>Agent:</th> <td>Are you looking for something a little less expensive or are you looking for a high end restaurant?</td> </tr> <tr> <th>Customer:</th> <td>I am within a budget, 20 to 50 dollars.</td> </tr> <tr> <th>Agent:</th> <td>I highly recommend The Redhouse Restaurant. It serves ...</td> </tr> <tr> <th>Customer:</th> <td>Is there another restaurant? the restaurant would also need to host a party size of 5 people.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">How about: Amuse? They serve French and American food.</span></td> <th>[0]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">How about: Ocean prime - Boston Seaport? They serve sea, American and Italian food.</span></td> <th>[2]</th> </tr> </table>

...

<table> <tr> <th>Agent:</th> <td>How about Cafe at Taj Boston?</td> </tr> <tr> <th>Customer:</th> <td>Oh, I just ate there last night. Anything else?</td> </tr> <tr> <th>Agent:</th> <td>Try Mela Restaurant at 578 Tremont St. Anything else I can help you with today?</td> </tr> <tr> <th>Customer:</th> <td>That sounds nice. Do they allow smoking?</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">This sure is within your budget for all.</span></td> <th>[0.5]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">No.</span></td> <th>[1.5]</th> </tr> </table>

Table 2: Some qualitative results on the AMT experiment using restaurant data. In each cell, the text in black/grey shows the history of a conversation between the customer and a human agent. The dots ... mean that part of the conversation history is hidden. The blue response is by the bot trained using maximum likelihood (ML) alone. The red response is by the bot improved with BPG after ML.
The numbers on the right of the blue/red responses are the average scores each response received from 2-3 AMT workers.

<table> <tr> <th>Agent:</th> <td>How may I help you?</td> </tr> <tr> <th>Customer:</th> <td>I’d like to find a French restaurant.</td> </tr> <tr> <th>Agent:</th> <td>Do you have a budget in mind?</td> </tr> <tr> <th>Customer:</th> <td>Between $50 and $75.</td> </tr> <tr> <th>Agent:</th> <td>How many will be in your party?</td> </tr> <tr> <th>Customer:</th> <td>Just me.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">And how will you be dining?</span></td> <th>[0]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">How will you be traveling to the restaurant?</span></td> <th>[2]</th> </tr> </table>

<table> <tr> <th>Agent:</th> <td>Hello, how may I help you?</td> </tr> <tr> <th>Customer:</th> <td>We are kinda hungry, any suggestion in mind?</td> </tr> <tr> <th>Agent:</th> <td>What type of cuisine do you like?</td> </tr> <tr> <th>Customer:</th> <td>I want them Americano food.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">Very evening. Will you be dining alone?</span></td> <th>[1]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">Excellent choice. How many people will be in your party?</span></td> <th>[2]</th> </tr> </table>

<table> <tr> <th>Customer:</th> <td>I’d like to find a good local restaurant.</td> </tr> <tr> <th>Agent:</th> <td>What type of cuisine would you like?</td> </tr> <tr> <th>Customer:</th> <td>I have a hankering for Indian curry.</td> </tr> <tr> <th>Agent:</th> <td>Will you be needing parking?</td> </tr> <tr> <th>Customer:</th> <td>We’ll be driving altogether, so yes.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">Thank you for 1. How about: T. w your curry this place should be dining?</span></td> <th>[0]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">How about: The Maharaja? They serve Indian and American food.</span></td> <th>[2]</th> </tr> </table>

<table> <tr> <th>Agent:</th> <td>How can I help you today?</td> </tr> <tr> <th>Customer:</th> <td>I would like a restaurant recommendation.</td> </tr> <tr> <th>Agent:</th> <td>Okay what type of restaurant?</td> </tr> <tr> <th>Customer:</th> <td>Chinese would be great.</td> </tr> <tr> <th>Agent:</th> <td>Who will be dining, how will you be travelling?</td> </tr> <tr> <th>Customer:</th> <td>Just me and by bus.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">Okay, thanks. Do you have a budget in mind?</span></td> <th>[2]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">Do you want a wonderful place evening, or whats your party adults, and a place.</span></td> <th>[0]</th> </tr> </table>

Table 3: Qualitative results on the AMT experiment using restaurant data, continued. See caption under Table 2 for more details.
This is especially true given that we can use maximum likelihood techniques to estimate a good initial bot using unlabelled data. Policy gradient methods, which fall within the paradigm of policy iteration, make changes to the parameters of a policy along the gradient of a desired objective (Sutton et al., 1999). Recently, the natural language processing (NLP) literature has turned its attention to policy gradient methods for improving language models. Ranzato et al. (2015) present a method based on the classical REINFORCE algorithm (Williams, 1992) for improving machine translation after preliminary training with maximum likelihood objectives. Bahdanau et al. (2016) present an actor-critic method also for machine translation. In both cases, as the reward, the authors use the BLEU (bilingual evaluation understudy) score of the output and the translation in the training dataset. This setting, where the rewards are deterministic and cheaply computable, does not reflect difficulties inherent to training chatbots where labels are noisy and expensive. Li et al. (2016) develop a policy gradient method bot for chatbots. However, they use user defined rewards (based on some simple rules) which, once again, are cheaply obtained and deterministic. Perhaps the closest to our work is that of Williams & Zweig (2016) who use a REINFORCE based method for chat bots. We discuss the differences of this and other methods in greater detail in Section 3. The crucial difference between all of the above efforts and ours is that they use on-policy and/or on-line updates in their methods. The remainder of this manuscript is organised as follows. In Section 2 we review Seq2Seq models and Markov decision processes (MDP) and describe our framework for batch reinforcement learning. Section 3 presents our method BPG and compares it with prior work in the RL and NLP literature. Section 4 presents experiments on a synthetic task and a customer service dataset for restaurant recommendations. 2 PRELIMINARIES 2.1 A REVIEW OF SEQ2SEQ MODELS The goal of a Seq2Seq model in natural language processing is to produce an output sequence \( y = [a_1, a_2, \ldots, a_T] \) given an input sequence \( x \) (Cho et al., 2014; Kalchbrenner & Blunsom, 2013; Sutskever et al., 2014). Here \( a_i \in \mathcal{A} \) where \( \mathcal{A} \) is a vocabulary of words. For example, in machine translation from French to English, \( x \) is the input sequence in French, and \( y \) is its translation in English. In customer service chatbots, \( x \) is the conversation history until the customer’s last query and \( y \) is the response by an agent/chatbot. In a Seq2Seq model, we use an encoder network to represent the input sequence as a euclidean vector and then a decoder network to convert this vector to an output sequence. Typically, both the encoder and decoder networks are recurrent neural networks (RNN) (Mikolov et al., 2010) where the recurrent unit processes each word in the input/output sequences one at a time. In this work, we will use the LSTM (long short term memory) (Hochreiter & Schmidhuber, 1997) as our recurrent unit due to its empirical success in several applications. In its most basic form, the decoder RNN can be interpreted as assigning a probability distribution over \( \mathcal{A} \) given the current “state”. At time \( t \), the state \( s_t \) is the input sequence \( x \) and the words \( y_{t-1} = [a_1, \ldots, a_{t-1}] \) produced by the decoder thus far, i.e. \( s_t = (x, y_{t-1}) \). 
We sample the next word \( a_t \) from this probability distribution \( \pi(\cdot|s_t) \), then update our state \( s_{t+1} = (x, y_t) \) where \( y_t = [y_{t-1}, a_t] \), and proceed in a similar fashion. The vocabulary \( \mathcal{A} \) contains an end-of-statement token <EOS>. If we sample <EOS> at time \( T + 1 \), we terminate the sequence and output \( y_T \). 2.2 A REVIEW OF MARKOV DECISION PROCESSES (MDP) We present a formalism for MDPs simplified to our setting. In an MDP, an agent takes an action \( a \) in a state \( s \) and transitions to a state \( s' \). An episode refers to a sequence of transitions \( s_1 \rightarrow a_1 \rightarrow s_2 \rightarrow a_2 \rightarrow \cdots \rightarrow a_T \rightarrow s_{T+1} \) until the agent reaches a terminal state \( s_{T+1} \). At a terminal state, the agent receives a reward. Formally, an MDP is the triplet \( (\mathcal{S}, \mathcal{A}, R) \). Here, \( \mathcal{S} \) is a set of states and \( \mathcal{A} \) is a set of actions. When we take an action \( a \) at state \( s \) we transition to a new state \( s' = s'(s, a) \) which, in this work, will be deterministic. \( \mathcal{A} \) will be a finite but large discrete set and \( \mathcal{S} \) will be discrete but potentially infinite. \( R : \mathcal{S} \rightarrow \mathbb{R} \) is the expected reward function such that when we receive a reward \( r \) at state \( s \in \mathcal{S} \), \( \mathbb{E}[r] = R(s) \). Let \( \mathcal{S}_0 \subset \mathcal{S} \) be a set of terminal states. When we transition to any \( s \in \mathcal{S}_0 \), the episode ends. In this work, we will assume that the rewards are received only at a terminal state, i.e. \( R(s) \) is nonzero only on \( \mathcal{S}_0 \). A policy \( \pi \) is a rule to select an action at a given state. We will be focusing on stochastic policies \( \pi : \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}_+ \) where \( \pi(a|s) \) denotes the probability that an agent will execute action \( a \) at state \( s \). We define the value function \( V^{\pi} : \mathcal{S} \rightarrow \mathbb{R} \) of policy \( \pi \), where \( V^{\pi}(s) \) is the expected reward at the end of the episode when we follow policy \( \pi \) from state \( s \). For any terminal state \( s \in \mathcal{S}_0 \), \( V^{\pi}(s) = R(s) \) regardless of \( \pi \). We will also find it useful to define the action-value function \( Q^{\pi} : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R} \), where \( Q^{\pi}(s, a) \) is the expected reward of taking action \( a \) at state \( s \) and then following policy \( \pi \). With deterministic state transitions this is simply \( Q^{\pi}(s, a) = V^{\pi}(s'(s, a)) \). It can be verified that \( V^{\pi}(s) = \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^{\pi}(s, a)] \) (Sutton & Barto, 1998). 2.3 SET UP We now frame our learning-from-labels scenario for RNN chatbots as an MDP. The treatment has similarities to some recent RL work in the NLP literature discussed above. Let \( x \) be the input and \( y_{t-1} = [a_1, \ldots, a_{t-1}] \) be the words output by the decoder until time \( t \). The state of our MDP at time \( t \) of the current episode will be \( s_t = (x, y_{t-1}) \). Therefore, the set of states \( \mathcal{S} \) will be all possible pairs of inputs and partial output sequences. The actions \( \mathcal{A} \) will be the vocabulary. The terminal states \( \mathcal{S}_0 \) will be \( (x, y) \) such that the last token of \( y \) is <EOS>.
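To make this framing concrete, the following is a minimal sketch of how a single episode unfolds when the decoder is viewed as a stochastic policy over words (made precise next). Here `policy_distribution` is a hypothetical stand-in for the decoder softmax and is not part of the paper; the length cap of 64 mirrors the truncation used in Section 4.

```python
import random

EOS = "<EOS>"

def sample_episode(x, policy_distribution, max_len=64):
    """Roll out one episode of the MDP: state s_t = (x, y_{t-1}), action a_t = next word.

    `policy_distribution(x, y)` is a hypothetical stand-in for the decoder's
    softmax over the vocabulary; it returns a dict mapping words to probabilities.
    """
    y = []
    for _ in range(max_len):
        probs = policy_distribution(x, y)               # pi(. | s_t)
        words, weights = zip(*probs.items())
        a = random.choices(words, weights=weights)[0]   # a_t ~ pi(. | s_t)
        if a == EOS:                                    # (x, [y, <EOS>]) is terminal
            break
        y.append(a)                                     # deterministic transition
    return y                                            # the output sequence y_T
```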
The stochastic policy \( \pi \) will be a Seq2Seq RNN which produces a distribution over \( \mathcal{A} \) given state \( s_t \). When we wish to make the dependence of the policy on the RNN parameters \( \theta \) explicit, we will write \( \pi_\theta \). When we sample an action \( a_t \sim \pi(\cdot|s_t) \), we deterministically transition to state \( (x, [y_{t-1}, a_t]) \). If we sample \( a_{T+1} = \text{<EOS>} \) at time \( T + 1 \), the episode terminates and we observe a stochastic reward. We are given a dataset of input-output-reward triples \( \{(x^{(i)}, y^{(i)}, r^{(i)})\}_{i=1}^n \) where \( y^{(i)} = (a_1^{(i)}, \ldots, a_{T_i}^{(i)}, \text{<EOS>}) \) is the sequence of output words. This data was collected from possibly multiple *behaviour policies* which output \( y^{(i)} \) for the given input \( x^{(i)} \). In the above customer service example, the behaviour policies could be chatbots, or even humans, which were used for conversations with a customer. The rewards \( r^{(i)} \) are scores assigned by a human quality assurance agent to each response of the chatbot. Our goal is to use this data to improve a given target policy \( \pi_\theta \). We will use \( q \) to denote the distribution of the data. \( q(s) \) is the distribution of the states in the dataset, \( q(a|s) \) is the conditional distribution of an action given a state, and \( q(s, a) = q(s)q(a|s) \) is the joint distribution over states and actions. \( q \) will be determined by the initial distribution of the inputs \( x^{(i)} \) and the behaviour policies used to collect the training data. Our aim is to find a policy that does well with respect to \( q \). Specifically, we wish to maximise the following objective, \[ J(\theta) = \sum_{s \in \mathcal{S}} q(s)V^{\pi_\theta}(s). \tag{1} \] Here, the value function \( V^{\pi_\theta} \) is not available to us but has to be estimated from the data. This is similar to objectives used in the on-line off-policy policy gradient literature, where \( q \) is replaced by the limiting distribution of the behaviour policy (Degris et al., 2012). In the derivation of our algorithm, we will need to know \( q(a|s) \) to compute the gradient of our objective. In on-line off-policy reinforcement learning settings this is given by the behaviour policy, which is readily available; if the behaviour policy is available to us here, we can use it directly. Otherwise, a simple alternative is to “learn” a behaviour policy. For example, in our experiments we used an RNN trained using the unlabelled data to obtain values for \( q(a|s) \). As long as this learned policy can capture the semantics of natural language (for example, the word apple is more likely than car when the current state is (x, I ate an)), it can be expected to do reasonably well. In the following section, we will derive a stochastic gradient descent (SGD) procedure that will approximately maximise (1) (equivalently, minimise \( -J(\theta) \)). Before we proceed, we note that it is customary in the RL literature to assume stochastic transitions between states and to use rewards at all time steps instead of only at the terminal step. Further, the future rewards are usually discounted by a discount factor \( \gamma < 1 \). While we use the above formalism to simplify the exposition, the ideas presented here extend naturally to more conventional settings. 3 Batch Policy Gradient Our derivation follows the blueprint in Degris et al. (2012), who derive an off-policy on-line actor-critic algorithm.
Following standard policy gradient methods, we will aim to update the policy by taking steps along the gradient of the objective \( \nabla J(\theta) \). \[ \nabla J(\theta) = \nabla \mathbb{E}_{s \sim q} \left[ \sum_{a \in \mathcal{A}} \pi_\theta(a|s) Q^{\pi_\theta}(s, a) \right] = \mathbb{E}_{s \sim q} \left[ \sum_{a \in \mathcal{A}} \nabla \pi_\theta(a|s) Q^{\pi_\theta}(s, a) + \pi_\theta(a|s) \nabla Q^{\pi_\theta}(s, a) \right]. \] The latter term inside the above summation is difficult to work with, so the first step is to ignore it and work with the approximate gradient \( g(\theta) = \mathbb{E}_{s \sim q}[\sum_{a \in \mathcal{A}} \nabla \pi_\theta(a|s) Q^{\pi_\theta}(s, a)] \approx \nabla J(\theta) \). Degris et al. (2012) provide theoretical justification for this approximation in off-policy settings by establishing that \( J(\theta) \leq J(\theta + \alpha g(\theta)) \) for all small enough \( \alpha \). Expanding on \( g(\theta) \), we obtain: \[ g(\theta) = \mathbb{E}_{s \sim q} \left[ \sum_{a \in \mathcal{A}} q(a|s)\, \frac{\pi_\theta(a|s)}{q(a|s)}\, \frac{\nabla \pi_\theta(a|s)}{\pi_\theta(a|s)}\, Q^{\pi_\theta}(s, a) \right] = \mathbb{E}_{(s, a) \sim q(\cdot, \cdot)} \left[ \rho(s, a)\psi(s, a)Q^{\pi_\theta}(s, a) \right] \] \[ = \mathbb{E}_{(s_t, a_t) \sim q(\cdot, \cdot)} [\rho(s_t, a_t)\psi(s_t, a_t)(Q^{\pi_\theta}(s_t, a_t) - V^{\pi_\theta}(s_t))]. \tag{2} \] Here \( \psi(s, a) = \frac{\nabla \pi_\theta(a|s)}{\pi_\theta(a|s)} = \nabla \log \pi_\theta(a|s) \) is the score function of the policy and \( \rho(s, a) = \pi_\theta(a|s)/q(a|s) \) is the importance sampling coefficient. In the last step, we have used the fact that \( \mathbb{E}_{a \sim q(\cdot|s)}[\rho(s, a)\psi(s, a)h(s)] = \sum_{a \in \mathcal{A}} \nabla \pi_\theta(a|s)\, h(s) = 0 \) for any function \( h : \mathcal{S} \to \mathbb{R} \) of the current state (Szepesvári, 2010). The purpose of introducing the value function \( V^{\pi_\theta} \) is to reduce the variance of the SGD updates – we want to assess how good/bad action \( a_t \) is relative to how well \( \pi_\theta \) will do at state \( s_t \) in expectation. If \( a_t \) is a good action (\( Q^{\pi_\theta}(s_t, a_t) \) is large relative to \( V^{\pi_\theta}(s_t) \)), the coefficient of the score function is positive and it will change \( \theta \) so as to assign a higher probability to action \( a_t \) at state \( s_t \). The \( Q^{\pi_\theta}, V^{\pi_\theta} \) functions are not available to us so we will replace them with estimates. For \( V^{\pi_\theta}(s_t) \) we will use an estimate \( \hat{V}(s_t) \) – we will discuss choices for this shortly. However, the action value function is usually not estimated in RL policy gradient settings, to avoid the high sample complexity. A sensible stochastic approximation for \( Q^{\pi_\theta}(s_t, a_t) \) is to use the sum of future rewards from the current state (Sutton & Barto, 1998)\footnote{Note \( Q^{\pi_\theta}(s_t, a_t) = V^{\pi_\theta}(s_{t+1}) \) for deterministic transitions. However, it is important not to interpret the term in (2) as the difference in the value function between successive states. Conditioned on the current time step, \( V^{\pi_\theta}(s_t) \) is deterministic, while \( V^{\pi_\theta}(s_{t+1}) \) is stochastic. In particular, while a crude estimate suffices for the former, the latter is critical and should reflect the rewards received during the remainder of the episode.}. If we receive reward \( r \) at the end of the episode, we can then use \( Q^{\pi_\theta}(s_t, a_t) \approx r \) for all time steps \( t \) in the episode.
However, since \( q(a_t|s_t) \) is different from \( \pi_\theta(a_t|s_t) \), we will need to re-weight future rewards via importance sampling, \( r \prod_{i=t}^{T} \rho(s_i, a_i) \). This is to account for the fact that an action \( a \) given \( s \) may have been more likely under the policy \( \pi_\theta(\cdot|s) \) than it was under \( q(\cdot|s) \) or vice versa. Instead of directly using the re-weighted rewards, we will use the so-called \( \lambda \)-return, which is a convex combination of the re-weighted rewards and the value function (Sutton, 1988; 1984). In our setting, they are defined recursively from the end of the episode \( t = T + 1 \) to \( t = 1 \) as follows. For \( \lambda \in (0, 1] \), \[ \tilde{r}_{T+1}^\lambda = r, \qquad \tilde{r}_t^\lambda = (1 - \lambda)V^{\pi_\theta}(s_{t+1}) + \lambda \rho(s_t, a_t) \tilde{r}_{t+1}^\lambda \quad \text{for } t = T, \ldots, 1. \tag{3} \] The purpose of introducing \( \lambda \) is to reduce the variance of using the future rewards alone as an estimate for \( Q^{\pi_\theta}(s_t, a_t) \). This is primarily useful when rewards are noisy. If the rewards are deterministic, \( \lambda = 1 \), which ignores the value function, is the best choice. In noisy settings, it is recommended to use \( \lambda < 1 \) (see Sec 3.1 of (Szepesvári, 2010)). In our algorithm, we will replace \( \tilde{r}_t^\lambda \) with \( r_t^\lambda \), where \( V^{\pi_\theta} \) is replaced with the estimate \( \hat{V} \). Putting it all together, and letting \( \alpha \) denote the step size, we have the following update rule for the parameters \( \theta \) of our policy: \[ \theta \leftarrow \theta + \alpha \rho(s_t, a_t) \psi(s_t, a_t)(r_t^\lambda - \hat{V}(s_t)). \] In Algorithm 1, we have summarised the procedure where the updates are performed after an entire pass through the dataset. In practice, we perform the updates in mini-batches. An Estimator for the Value Function: All that is left to do is to specify an estimator \( \hat{V} \) for the value function. We first need to acknowledge that this is a difficult problem: \( \mathcal{S} \) is quite large, and for typical applications of this work there might not be enough data since labels are expensive. That said, the purpose of \( \hat{V} \) in (2), (3) is to reduce the variance of our SGD updates and speed up convergence, so it is not critical that this estimate be precise – even with a poor estimator, the updates will converge eventually. Secondly, standard methods for estimating the value function based on minimising the projected Bellman error require second derivatives, which might be intractable for highly nonlinear parametrisations of \( \hat{V} \) (Maei, 2011). For these two statistical and computational reasons, we resort to simple estimators for \( V^{\pi_\theta} \). We will study two options. The first is a simple heuristic used previously in the RL literature, namely a constant estimator for \( \hat{V} \) which is equal to the mean of all rewards in the dataset (Williams, 1992). The second uses the parametrisation \( \hat{V}(s) = \sigma(\xi^\top \phi(s)) \), where \( \sigma \) is the logistic function and \( \phi(s) \in \mathbb{R}^d \) is a Euclidean representation of the state. For \( \hat{V}(s) \) of the above form, Hessian-vector products \( \nabla^2 \hat{V}(s) \cdot w \) can be computed in \( \mathcal{O}(d) \) time. To estimate this value function, we use the GTD(\( \lambda \)) estimator from Maei (2011). As \( \phi(s) \) we will be using the hidden state of the LSTM. The rationale for this is as follows.
In an LSTM trained using maximum likelihood, the hidden state contains useful information about the objective. If there is overlap between the maximum likelihood and reinforcement learning objectives, we can expect the hidden state to also carry useful information about the RL objective. Therefore, we can use the hidden state to estimate the value function, whose expectation under \( q \) is the RL objective. We have described our implementation of GTD(\( \lambda \)) in Appendix A and specified some implementation details in Section 4. Algorithm 1 Batch Policy Gradient (BPG) Given: Data \( \{ (x_i, y_i, r_i) \}_{i=1}^n \), step size \( \alpha \), return coefficient \( \lambda \), initial \( \theta_0 \). – Set \( \theta \leftarrow \theta_0 \). – For each epoch \( k = 1, 2, \ldots \) ▶ Set \( \Delta \theta \leftarrow 0 \) ▶ For each episode \( i = 1, \ldots, n \) • \( r_{T^{(i)}+1}^\lambda \leftarrow r_i \) • \( \rho_t \leftarrow \pi_\theta(a_t^{(i)}|s_t^{(i)})/q(a_t^{(i)}|s_t^{(i)}) \) for \( t = 1, \ldots, T^{(i)} \). • For each time step in reverse \( t = T^{(i)}, \ldots, 1 \) (i) \( r_t^\lambda \leftarrow (1-\lambda)\widehat{V}(s_{t+1}^{(i)}) + \lambda \rho_t r_{t+1}^\lambda \) (ii) \( \Delta \theta \leftarrow \Delta \theta + \frac{1}{T^{(i)}} \rho_t \psi(s_t^{(i)}, a_t^{(i)})(r_t^\lambda - \widehat{V}(s_t^{(i)})) \) (iii) Compute updates for the value function estimate \( \widehat{V} \). ▶ Update the policy \( \theta \leftarrow \theta + \alpha \Delta \theta \) ▶ Update the value function estimate \( \widehat{V} \). Comparison with Other RL Approaches in NLP Policy gradient methods have been studied extensively in on-policy settings, where the goal is to improve the current policy on the fly (Amari, 1998; Williams, 1992). To our knowledge, all RL approaches in Seq2Seq models have also adopted on-policy policy gradient updates (Bahdanau et al., 2016; Li et al., 2016; Ranzato et al., 2015; Williams & Zweig, 2016). However, on-policy methods break down in off-policy settings, because any update must account for the probability of the action under the target policy. For example, suppose the behaviour policy took action \( a \) at state \( s \) and received a low reward. Then we should modify the target policy \( \theta \) so as to reduce \( \pi_\theta(a|s) \). However, if the target policy is already assigning low probability to \( a|s \), then we should not be as aggressive when making the updates. The re-weighting \( \rho(s, a) \) via importance sampling does precisely this. A second difference is that we study batch RL. Standard on-line methods are designed for settings where we have to continually improve the target policy while exploring using the behaviour policy. Critical to such methods is the estimation of the future rewards at the current state and of the future actions that will be taken by both the behaviour and target policies. In order to tackle this, previous research either ignores future rewards altogether (Williams, 1992), resorts to heuristics to distribute a delayed reward to previous time steps (Bahdanau et al., 2016; Williams & Zweig, 2016), or makes additional assumptions about the distribution of the states such as stationarity of the Markov process (Degris et al., 2012; Maei, 2011). However, in batch settings, the \( \lambda \)-return from a given time step can be computed directly via (3), since the future actions and rewards are available in the dataset. Access to this information provides a crucial advantage over techniques designed for on-line settings.
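As a concrete reference, the following is a minimal sketch of the per-episode inner loop of Algorithm 1, given a pre-computed value estimate. The function and variable names are ours, and the weight clipping discussed in Section 4 is omitted for clarity.

```python
import numpy as np

def bpg_episode_weights(pi_probs, q_probs, reward, v_hat, lam):
    """Per-episode inner loop of Algorithm 1 (a sketch; names are ours).

    pi_probs[t]: pi_theta(a_t | s_t) under the target policy, t = 0, ..., T-1.
    q_probs[t]:  q(a_t | s_t) under the behaviour policy.
    reward:      stochastic reward observed at the end of the episode.
    v_hat[t]:    value estimates V_hat(s_t) for the states s_1, ..., s_{T+1},
                 so len(v_hat) == T + 1 (constant or GTD(lambda) estimator).

    Returns the scalar coefficient multiplying the score function
    psi(s_t, a_t) = grad log pi_theta(a_t | s_t) at each time step.
    """
    T = len(pi_probs)
    rho = np.asarray(pi_probs) / np.asarray(q_probs)  # importance weights
    coeffs = np.zeros(T)
    r_lam = reward                                    # r^lambda_{T+1} = r
    for t in reversed(range(T)):                      # t = T, ..., 1
        # lambda-return (3): mix the bootstrap estimate with future rewards
        r_lam = (1 - lam) * v_hat[t + 1] + lam * rho[t] * r_lam
        # step (ii): weight on psi(s_t, a_t), normalised by episode length
        coeffs[t] = rho[t] * (r_lam - v_hat[t]) / T
    return coeffs
```

The full update then accumulates each coefficient times the corresponding score function over all episodes in the epoch before taking a step of size \( \alpha \).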
4 EXPERIMENTS Implementation Details: We implement our methods using Chainer (Tokui et al., 2015), and group sentences of the same length together in the same batch to make use of GPU parallelisation. Since different batches could be of different sizes, we do not normalise the gradients by the batch size, as we should take larger steps after seeing more data. However, we normalise by the length of the output sequence to allocate equal weight to all sentences. We truncate all output sequences to length 64 and use a maximum batch size of 32. We found it necessary to use a very small step size (\( 10^{-5} \)); otherwise the algorithm has a tendency to get stuck at bad parameter values. While importance re-weighting is necessary in off-policy settings, it can increase the variance of the updates, especially when \( q(a_t|s_t) \) is very small. A common technique to alleviate this problem is to clip the \( \rho(s_t, a_t) \) value (Swaminathan & Joachims, 2015). In addition to single \( \rho(s_t, a_t) \) values, our procedure has a product of \( \rho(s_t, a_t) \) values when computing the future rewards via (3). The effect of large \( \rho \) values is a large weight \( \rho_t(r_t^\lambda - \widehat{V}(s_t)) \) for the score function in step (ii) of Algorithm 1. In our implementation, we clip this weight at 5, which controls the variance of the updates and ensures that a single example does not disproportionately affect the gradient. RNN Design: In both experiments we use deep LSTMs with two layers for the encoder and decoder RNNs. The output of the bottom layer is fed to the top layer, and in the decoder RNN, the output of the top layer is fed to a softmax layer of size \( |\mathcal{A}| \). When we implement GTD(\( \lambda \)) to estimate \( V^{\pi_\theta} \), we use the hidden state of the bottom LSTM as \( \phi(s) \). When performing our policy updates, we only change the parameters of the top LSTM and the softmax layer in our decoder RNN. If we were to change the bottom LSTM too, then the state representation \( \phi(s) \) would also change as the policy changes, which would violate the MDP framework. In other words, we treat the bottom layer as part of the environment in our MDP. To facilitate a fair comparison, we only modify the top LSTM and softmax layers in all methods. We have illustrated this set up in Fig. 1. We note that if one is content with using the constant estimator, then one can change all parameters of the RNN. 4.1 SOME SYNTHETIC EXPERIMENTS ON THE EUROPARL DATASET To convey the main intuitions of our method, we compare it against other baselines on a synthetic task on the European parliament proceedings corpus (Koehn, 2005). We describe the experimental set up briefly, deferring details to Appendix B.1. The input sequence to the RNN was each sentence in the dataset. Given an input, the goal was to reproduce the words in the input without repeating words in a list of forbidden words. The RL algorithm is not explicitly told either goal of this objective and has to infer them from the stochastic rewards assigned to input-output sequences in the dataset. We used a training set of 500 input-output-reward triplets for the RL methods. We initialised all methods by maximum likelihood training on 6000 input-output sequences, where the output sequence was the reverse of the input sequence. The maximum likelihood objective captures part of the RL objective.
This set up reflects naturally occurring practical scenarios for the algorithm, where a large amount of unlabelled data can be used to bootstrap a policy if the maximum likelihood and reinforcement learning objectives are at least partially aligned. We trained the RL algorithms for 200 epochs on the training set. At the end of each epoch, we generated outputs from the policy on a test set of 500 inputs and scored them according to our criterion. We plot the test set error against the number of epochs for various methods in Fig. 2. Figure 1: Illustration of the encoder and decoder RNNs used in our experiments. In this example, the input to the encoder is \( x = (\ldots, A, B, <EOS>) \) and the output of the decoder is \( y = (U, V, W, \ldots) \). We use four different LSTMs for the bottom and top layers of the encoder and decoder networks. In our RL algorithms, we only change the top LSTM and the softmax layer of the decoder RNN, as shown in red dashed lines. Figure 2: Results for synthetic experiments. (a): Comparison of BPG with and without maximum likelihood (ML) initialisation and BPG without importance sampling (BPG-NIS). The dotted line indicates performance of ML alone. (b): Comparison of BPG with its online counterpart OPG. We compare both methods using a constant estimator (CONST) for the value function and GTD(\( \lambda \)). (c): Comparison of BPG with different values of \( \lambda \). All curves were averaged over 10 experiments where the training set was picked randomly from a pool. The test set was the same in all 10 experiments. The error bars indicate one standard error. Fig. 2(a) compares 3 methods: BPG with and without maximum likelihood initialisation, and a version of BPG which does not use importance sampling. Clearly, bootstrapping an RL algorithm with ML can be advantageous, especially if data is abundantly available for ML training. Further, without importance sampling, the algorithm is not as competitive, for reasons described in Section 3. In all 3 cases, we used a constant estimator for \( \hat{V} \) and \( \lambda = 0.5 \). The dashed line indicates the performance of ML training alone. BPG-NIS is similar to the algorithms of Ranzato et al. (2015); Williams & Zweig (2016), except that their methods implicitly use \( \lambda = 1 \). Fig. 2(b) compares 4 methods: BPG and its on-line version OPG, each with constant (CONST) and GTD(\( \lambda \)) estimators for \( \hat{V} \). The on-line versions of the algorithms are a direct implementation of the method in Degris et al. (2012), which does not use the future rewards as we do. The first observation is that while GTD(\( \lambda \)) is slightly better in the early iterations, it performs roughly the same as using a constant estimator in the long run. Next, BPG performs significantly better than OPG. We believe this is due to the following two reasons. First, the on-line updates assume stationarity of the MDP. When this does not hold, such as in limited data instances like ours, the SGD updates can be very noisy. Secondly, the value function estimate plays a critical role in the on-line version. While obtaining a reliable estimate \( \hat{V} \) is reasonable in on-line settings, where we can explore indefinitely to collect a large number of samples, it is difficult when one only has a limited number of labelled samples. Finally, we compare BPG with different choices for \( \lambda \) in Fig. 2(c).
As noted previously, \( \lambda < 1 \) is useful with stochastic rewards, but choosing too small a value is detrimental. The optimal \( \lambda \) value may depend on the problem. 4.2 Restaurant Recommendations We use data from an on-line restaurant recommendation service. Customers log into the service and chat with a human agent asking for restaurant recommendations. The agents ask a series of questions, such as food preferences, group size, etc., before recommending a restaurant. The goal is to train a chatbot (policy) which can replace or assist the agent. For reasons explained in Section 1, maximum likelihood training alone will not be adequate. By obtaining reward labels for responses produced by various other bots, we hope to improve on a bot initialised using maximum likelihood. Data Collection: We collected data for RL as follows. We trained five different RNN chatbots with different LSTM parameters via maximum likelihood on 6000 conversations from this service. The bots were trained to reproduce what the human agent said (output \( y \)) given the past conversation history (input \( x \)). While the dataset is relatively small, we can still expect our bots to do reasonably well since we work in a restricted domain. Next, we generated responses from these bots on 1216 separate conversations and had them scored by workers on Amazon Mechanical Turk (AMT). For each response by the bots in each conversation, the workers were shown the history before the particular response and asked to score (label) each response on a scale of 0, 1, or 2. We collected scores from three different workers for each response and used the mean as the reward. Policies and RL Application: Next, we initialised 2 bots via maximum likelihood and then used BPG to improve them using the labels collected from AMT. For the 2 bots we used the following LSTM hidden state size \( H \), word embedding size \( E \) and BPG parameters. These parameters were chosen arbitrarily and are different from those of the bots used in the data collection described above. • Bot-1: \( H = 512, E = 256. \) BPG: \( \lambda = 0.5, \) GTD(\( \lambda \)) estimator for \( \hat{V} \). • Bot-2: \( H = 400, E = 400. \) BPG: \( \lambda = 0.5, \) constant estimator for \( \hat{V} \). Testing: We used a separate test set of 500 conversations which had a total of more than 3500 input-output (conversation history - response) pairs. For each of Bot-1 and Bot-2, we generated responses before and after applying BPG, totalling 4 responses per input. We then had them scored by workers on AMT using the same set up described above. The same worker labels the before-BPG and after-BPG responses from the same bot. This controls for spurious noise effects and allows us to conduct a paired test. We collected 16,808 before-and-after label pairs each for Bot-1 and Bot-2 and compare them using a paired t-test and a Wilcoxon signed rank test. <table> <tr> <th></th> <th>Mean (ML)</th> <th>Mean (BPG+ML)</th> <th>Paired t-test</th> <th>Wilcoxon</th> </tr> <tr> <td>Bot-1</td> <td>0.8951 ± 0.0070</td> <td>0.9052 ± 0.0069</td> <td>0.10296</td> <td>0.07930</td> </tr> <tr> <td>Bot-2</td> <td>0.7009 ± 0.0066</td> <td>0.7317 ± 0.0066</td> <td>0.00007</td> <td>0.00017</td> </tr> </table> Table 1: The results of the Mechanical Turk experiments using the restaurant dataset. The first two columns are the mean labels of all responses before and after applying BPG on the bots initialised via maximum likelihood.
The last two columns are the p-values from a paired t-test and a paired Wilcoxon signed rank test. For both Bot-1 and Bot-2, we obtained 16,808 before-and-after responses scored by the same worker. Bot-2 is statistically significant at the 10% level on both tests, while Bot-1 is significant on the Wilcoxon test. Results: The results are shown in Table 1. The improvements on Bot-2 are statistically significant at the 10% level on both tests, while Bot-1 is significant on the Wilcoxon test. The large p-values for Bot-1 are due to the noisy nature of AMT experiments, and we believe that we can attain significance if we collect more labels, which would reduce the standard error in both tests. In Appendix B.2 we present some examples of conversation histories and the responses generated by the bots before and after applying BPG. We qualitatively discuss specific kinds of issues that we were able to overcome via reinforcement learning. 5 CONCLUSION We presented a policy gradient method for batch reinforcement learning to train chatbots. The data for this algorithm are input-output sequences generated using other chatbots/humans and stochastic rewards for each output in the dataset. This setting arises in many applications, such as customer service systems, where there is usually an abundance of unlabelled data, but labels (rewards) are expensive to obtain and can be noisy. Our algorithm is able to efficiently use minimal labelled data to improve chatbots previously trained through maximum likelihood on unlabelled data. While our method draws its ideas from previous policy gradient work in the RL and NLP literature, there are some important distinctions that contribute to its success in the settings of interest for this work. Via importance sampling we ensure that the probability of an action is properly accounted for in off-policy updates. By explicitly working in the batch setting, we are able to use knowledge of future actions and rewards to converge faster to the optimum. Further, we use the unlabelled data to initialise our method and also to learn a reasonable behaviour policy. Our method outperforms baselines on a series of synthetic and real experiments. The ideas presented in this work extend beyond chatbots. They can be used in applications such as question answering, generating image descriptions and machine translation, where an output sentence generated by a policy is scored by a human labeller to provide a weak supervision signal. ACKNOWLEDGEMENTS We would like to thank Christoph Dann for the helpful conversations and Michael Armstrong for helping us with the Amazon Mechanical Turk experiments. APPENDIX A IMPLEMENTATION OF GTD(\( \lambda \)) We present the details of the GTD(\( \lambda \)) algorithm (Maei, 2011) to estimate a value function in Algorithm 2. While Maei (2011) gives an on-line version, we present the batch version here, where the future rewards of an episode are known. We use a parametrisation of the form \( \widehat{V}(s) = \widehat{V}_\xi(s) = \sigma(\xi^\top \phi(s)) \) where \( \xi \in \mathbb{R}^d \) is the parameter to be estimated. \( \sigma(z) = 1/(1 + e^{-z}) \) is the logistic function. The algorithm requires two step sizes \( \alpha', \alpha'' \) below for the updates to \( \xi \) and the ancillary parameter \( w \). Following the recommendations in Borkar (1997), we use \( \alpha'' \ll \alpha' \). In our implementations, we used \( \alpha' = 10^{-5} \) and \( \alpha'' = 10^{-6} \).
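For reference, a minimal sketch of this logistic parametrisation in code, using the closed-form gradient and the \( \mathcal{O}(d) \) Hessian-vector product given below (the function names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def v_hat(xi, phi_s):
    """Value estimate V_xi(s) = sigma(xi^T phi(s))."""
    return sigmoid(xi @ phi_s)

def grad_v_hat(xi, phi_s):
    """Gradient: V(1 - V) phi(s), an O(d) computation."""
    v = v_hat(xi, phi_s)
    return v * (1.0 - v) * phi_s

def hessian_vector_product(xi, phi_s, w):
    """Hessian-vector product without forming the d x d Hessian:
    [V(1 - V)(1 - 2V) (phi(s)^T w)] phi(s), again O(d)."""
    v = v_hat(xi, phi_s)
    return v * (1.0 - v) * (1.0 - 2.0 * v) * (phi_s @ w) * phi_s
```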
When we run BPG, we perform steps (a)-(f) of Algorithm 2 in step (iii) of Algorithm 1, and the last two update steps of Algorithm 2 in the last update step of Algorithm 1. The gradient and Hessian of \( \widehat{V}_\xi \) have the following forms, \[ \nabla_\xi \widehat{V}_\xi(s) = \widehat{V}_\xi(s)(1 - \widehat{V}_\xi(s))\phi(s), \qquad \nabla_\xi^2 \widehat{V}_\xi(s) = \widehat{V}_\xi(s)(1 - \widehat{V}_\xi(s))(1 - 2\widehat{V}_\xi(s))\phi(s)\phi(s)^\top. \] The Hessian-vector product in step (d) of Algorithm 2 can be computed in \( O(d) \) time via, \[ \nabla_\xi^2 \widehat{V}_\xi(s) \cdot w = \left[ \widehat{V}_\xi(s)(1 - \widehat{V}_\xi(s))(1 - 2\widehat{V}_\xi(s))(\phi(s)^\top w) \right] \phi(s). \] Algorithm 2 GTD(\( \lambda \)) Given: Data \( \{(x_i, y_i, r_i)\}_{i=1}^n \), step sizes \( \alpha', \alpha'' \), return coefficient \( \lambda \), initial \( \xi_0 \). - Set \( \xi \leftarrow \xi_0, w \leftarrow 0 \). - For each epoch \( k = 1, 2, \ldots \) ▶ Set \( \Delta \xi \leftarrow 0, \Delta w \leftarrow 0 \). ▶ For each episode \( i = 1, \ldots, n \) • Set \( g_{T+1}^\lambda \leftarrow r_i,\ q_{T+1}^\lambda \leftarrow 0 \) • \( \rho_t \leftarrow \pi_\theta(a_t^{(i)}|s_t^{(i)})/q(a_t^{(i)}|s_t^{(i)}) \) for \( t = 1, \ldots, T^{(i)} \). • For each time step in reverse \( t = T^{(i)}, \ldots, 1 \): (a) \( g_t^\lambda \leftarrow \rho_t \left( (1 - \lambda)\widehat{V}_\xi(s_{t+1}^{(i)}) + \lambda g_{t+1}^\lambda \right) \) (b) \( q_t^\lambda \leftarrow \rho_t \left( (1 - \lambda)\nabla_\xi \widehat{V}_\xi(s_{t+1}^{(i)}) + \lambda q_{t+1}^\lambda \right) \) (c) \( \delta_t \leftarrow g_t^\lambda - \widehat{V}_\xi(s_t^{(i)}) \) (d) \( h_t \leftarrow (\delta_t - w^\top \nabla_\xi \widehat{V}_\xi(s_t^{(i)})) \nabla_\xi^2 \widehat{V}_\xi(s_t^{(i)}) \cdot w \) (e) \( \Delta w \leftarrow \Delta w + \frac{1}{T^{(i)}} (\delta_t - w^\top \nabla_\xi \widehat{V}_\xi(s_t^{(i)})) \nabla_\xi \widehat{V}_\xi(s_t^{(i)}) \) (f) \( \Delta \xi \leftarrow \Delta \xi + \frac{1}{T^{(i)}} \left( \delta_t \nabla_\xi \widehat{V}_\xi(s_t^{(i)}) - q_t^\lambda w^\top \nabla_\xi \widehat{V}_\xi(s_t^{(i)}) - h_t \right) \) ▶ \( w \leftarrow w + \alpha'' \Delta w \). ▶ \( \xi \leftarrow \xi + \alpha' \Delta \xi \). B ADDENDUM TO EXPERIMENTS B.1 DETAILS OF THE SYNTHETIC EXPERIMENT SET UP Given an input and output sequence, we used the average of five Bernoulli rewards \( \mathrm{Bern}(r) \), where the parameter \( r \) was \( r = 0.75 \times r_f + 0.25 \times r_t \). Here \( r_f \) was the fraction of the input's words that also appear in the output sequence, while \( r_t = 0.01^{p_t} \), where \( p_t \) is the fraction of forbidden words in the output sequence. As the forbidden words, we used the 50 most common words in the dataset. So if an input had 10 words of which 2 were forbidden, an output sequence repeating 7 of the allowed words and 1 forbidden word would receive an expected score of \( 0.75 \times (8/10) + 0.25 \times 0.01^{(1/8)} = 0.7406 \). The training and testing sets for reinforcement learning were obtained as follows. We trained 4 bots using maximum likelihood on 6000 input-output sequences as indicated in Section 4.1. The LSTM hidden state size \( H \) and word embedding size \( E \) for the 4 bots were \( (H, E) = (256, 128), (128, 64), (64, 32), (32, 16) \). The vocabulary size was \( |\mathcal{A}| = 12000 \). We used these bots to generate outputs for 500 different input sequences each.
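For concreteness, a minimal sketch of the expected-score computation described above (the Bernoulli parameter \( r \); the function name is ours, and reading \( r_f \) off the worked example as the fraction of input words that also appear in the output is our assumption):

```python
def expected_score(input_words, output_words, forbidden):
    """Expected reward r = 0.75 * r_f + 0.25 * r_t for the synthetic task.

    r_f: fraction of the input's words that also appear in the output.
    r_t: 0.01 ** p_t, where p_t is the fraction of forbidden words in the output.
    The reward recorded in the dataset is the mean of five Bernoulli(r) draws.
    """
    output_set = set(output_words)
    r_f = sum(1 for w in input_words if w in output_set) / len(input_words)
    p_t = sum(1 for w in output_words if w in forbidden) / len(output_words)
    r_t = 0.01 ** p_t
    # For the worked example above (8 of the 10 input words repeated, 1 of the
    # 8 output words forbidden): 0.75*0.8 + 0.25*0.01**(1/8) = 0.7406.
    return 0.75 * r_f + 0.25 * r_t
```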
This collection of input and output pairs was scored stochastically as described above to produce a pool of 2000 input-output-score triplets. From this pool we use a fixed set of 500 triplets for testing across all our experiments. From the remaining 1500 data points, we randomly select 500 for training for each execution of an algorithm. For all RL algorithms, we used an LSTM with a hidden state size of 16 and 16-dimensional word embeddings. B.2 ADDENDUM TO THE AMT RESTAURANT RECOMMENDATIONS EXPERIMENT MORE DETAILS ON THE EXPERIMENTAL SET UP We collected the initial batch of training data for RL as follows: We trained, via maximum likelihood on 6000 conversations, five RNN bots whose LSTM hidden size \( H \) and word embedding size \( E \) were \( (H, E) = (512, 512), (256, 256), (128, 128), (512, 256), (256, 64) \). The inputs \( x \) were all words from the history of the conversation truncated at length 64, i.e. the most recent 64 words in the conversation history. The outputs were the actual responses of the agent, which were truncated to length 64. As the vocabulary we used the \( |\mathcal{A}| = 4000 \) most commonly occurring words in the dataset and replaced the rest with an <UNK> token. Using the bots trained this way, we generated responses on 1216 separate conversations. This data was sent to AMT workers who were asked to label the conversations on the following scale. • 2: The response is coherent and appropriate given the history and advances the conversation forward. • 1: The response has some minor flaws but is discernible and appropriate. • 0: The response is either completely incoherent or inappropriate and fails to advance the conversation forward. SOME QUALITATIVE RESULTS In Tables 2 and 3 we have presented some examples. The text in black/grey shows the conversation history, the response in blue is by the bot trained via maximum likelihood (ML) alone, and the response in red is by the bot after improvement using our BPG reinforcement learning algorithm. The first two examples of Table 2 present cases where the ML bot repeated generic questions (on budget, group size, etc.) even though they had already been answered previously. After applying BPG, we are able to correct such issues, even though there are some grammatical errors. In the second, third and fourth examples, we see that the ML+BPG bot is able to take context into consideration well when responding. For example, the customer asks for oriental/Mexican/Italian food. While the ML bot does not take this into consideration, the ML+BPG bot is able to provide relevant answers. However, in the third example, the name of the restaurant suggests that the food might be Indian and not Mexican. In the final example of Table 2, the customer asks a direct question about smoking. The ML bot provides an irrelevant answer, whereas the ML+BPG bot directly responds to the question. In some examples, the ML bot had a tendency to produce sentences that were grammatically correct but nonsensical, sensible but grammatically incorrect, or just complete gibberish. We were able to correct such issues via RL. The first three examples of Table 3 present such cases. Occasionally the opposite happened. The last example of Table 3 is one such instance.
<table> <tr> <th>Customer:</th> <td>Hi there, I’m looking for a restaurant recommendation.</td> </tr> <tr> <th>Agent:</th> <td>How many in your party?</td> </tr> <tr> <th>Customer:</th> <td>5, including children.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">How many people will be joining you?</span></td> <th>[0]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">That is a fun size, and what is your budget joining you?</span></td> <th>[1.5]</th> </tr> </table> ... <table> <tr> <th>Customer:</th> <td>Hey... Sorry, it’s the anniversary of my wife’s death. I always do oriental tonight.</td> </tr> <tr> <th>Agent:</th> <td>I’m sorry to hear that. I will do my best to find just what you need.</td> </tr> <tr> <th>Customer:</th> <td>Thank you very much.</td> </tr> <tr> <th>Agent:</th> <td>What kind of budget are you looking at?</td> </tr> <tr> <th>Customer:</th> <td>Doesn’t matter... $100 sounds like a good number.</td> </tr> <tr> <th>Agent:</th> <td>Will it be just you, or are others going to join you?</td> </tr> <tr> <th>Customer:</th> <td>It’s just me tonight, but if you are off soon I wouldn’t mind a friendly ear. No pressure.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">What type of budget are you looking to stay within?</span></td> <th>[0.33]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">How about: Café Hungry <UNK> ? They serve sea American to oriental on site such?</span></td> <th>[1.33]</th> </tr> </table> <table> <tr> <th>Customer:</th> <td>I would love to find a great Mexican restaurant!</td> </tr> <tr> <th>Agent:</th> <td>Great, and will anybody be joining you?</td> </tr> <tr> <th>Customer:</th> <td>There will be a total of 3 of us.</td> </tr> <tr> <th>...</th> <td></td> </tr> <tr> <th>Customer:</th> <td>Could you find me a place with outdoor seating?</td> </tr> <tr> <th>Agent:</th> <td>How about: The Painted Burrito. They serve Mexican food.</td> </tr> <tr> <th>Customer:</th> <td>Eh, I’ve heard that one isn’t great. Can you please suggest another?</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">How about: The Hungry .w seafood restaurant? they serve sea and American food.</span></td> <th>[0.33]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">I am happy to, one moment. How about: Diva Indian Bistro Boston bar? They serve Mexican food.</span></td> <th>[1.67]</th> </tr> </table> <table> <tr> <th>Customer:</th> <td>Hello, I’m looking for a restaurant.</td> </tr> <tr> <th>Agent:</th> <td>I can help! Please tell me what kind of foods you most enjoy. I see you are in the Cambridge area.</td> </tr> <tr> <th>Customer:</th> <td>I would like a restaurant that serves Italian food.</td> </tr> <tr> <th>Agent:</th> <td>Are you looking for something a little less expensive or are you looking for a high end restaurant?</td> </tr> <tr> <th>Customer:</th> <td>I am within a budget, 20 to 50 dollars.</td> </tr> <tr> <th>Agent:</th> <td>I highly recommend The Redhouse Restaurant. It serves ...</td> </tr> <tr> <th>Customer:</th> <td>Is there another restaurant? the restaurant would also need to host a party size of 5 people.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">How about: Amuse? They serve French and American food.</span></td> <th>[0]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">How about: Ocean prime - Boston Seaport? They serve sea, American and Italian food.</span></td> <th>[2]</th> </tr> </table> ... 
<table> <tr> <th>Agent:</th> <td>How about Cafe at Taj Boston?</td> </tr> <tr> <th>Customer:</th> <td>Oh, I just ate there last night. Anything else?</td> </tr> <tr> <th>Agent:</th> <td>Try Mela Restaurant at 578 Tremont St. Anything else I can help you with today?</td> </tr> <tr> <th>Customer:</th> <td>That sounds nice. Do they allow smoking?</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">This sure is within your budget for all.</span></td> <th>[0.5]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">No.</span></td> <th>[1.5]</th> </tr> </table> Table 2: Some qualitative results on the AMT experiment using restaurant data. In each cell, the text in black/grey shows the history of a conversation between the customer and a human agent. The dots ... mean that part of the conversation history is hidden. The blue response is by the bot trained using maximum likelihood (ML) alone. The red response is by the bot improved with BPG after ML. The numbers on the right of the blue/red responses are the average scores each response received from 2-3 AMT workers. <table> <tr> <th>Agent:</th> <td>How may I help you?</td> </tr> <tr> <th>Customer:</th> <td>I’d like to find a French restaurant.</td> </tr> <tr> <th>Agent:</th> <td>Do you have a budget in mind?</td> </tr> <tr> <th>Customer:</th> <td>Between $50 and $75.</td> </tr> <tr> <th>Agent:</th> <td>How many will be in your party?</td> </tr> <tr> <th>Customer:</th> <td>Just me.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">And how will you be dining?</span></td> <th>[0]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">How will you be traveling to the restaurant?</span></td> <th>[2]</th> </tr> </table> <table> <tr> <th>Agent:</th> <td>Hello, how may I help you?</td> </tr> <tr> <th>Customer:</th> <td>We are kinda hungry, any suggestion in mind?</td> </tr> <tr> <th>Agent:</th> <td>What type of cuisine do you like?</td> </tr> <tr> <th>Customer:</th> <td>I want them Americano food.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">Very evening. Will you be dining alone?</span></td> <th>[1]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">Excellent choice. How many people will be in your party?</span></td> <th>[2]</th> </tr> </table> <table> <tr> <th>Customer:</th> <td>I’d like to find a good local restaurant.</td> </tr> <tr> <th>Agent:</th> <td>What type of cuisine would you like?</td> </tr> <tr> <th>Customer:</th> <td>I have a hankering for Indian curry.</td> </tr> <tr> <th>Agent:</th> <td>Will you be needing parking?</td> </tr> <tr> <th>Customer:</th> <td>We’ll be driving altogether, so yes.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">Thank you for 1. How about: T. w your curry this place should be dining?</span></td> <th>[0]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">How about: The Maharaja? They serve Indian and American food.</span></td> <th>[2]</th> </tr> </table> <table> <tr> <th>Agent:</th> <td>How can I help you today?</td> </tr> <tr> <th>Customer:</th> <td>I would like a restaurant recommendation.</td> </tr> <tr> <th>Agent:</th> <td>Okay what type of restaurant?</td> </tr> <tr> <th>Customer:</th> <td>Chinese would be great.</td> </tr> <tr> <th>Agent:</th> <td>Who will be dining, how will you be travelling?</td> </tr> <tr> <th>Customer:</th> <td>Just me and by bus.</td> </tr> <tr> <th>ML:</th> <td><span style="color:blue">Okay, thanks. 
Do you have a budget in mind?</span></td> <th>[2]</th> </tr> <tr> <th>ML+BPG:</th> <td><span style="color:red">Do you want a wonderful place evening, or whats your party adults, and a place.</span></td> <th>[0]</th> </tr> </table> Table 3: Qualitative results on the AMT experiment using restaurant data, continued. See caption under Table 2 for more details.
accept
Accept (Poster)
7
45608d70b3e19b9049b1d21f014af56c2acfb610
iclr
2,017
OPTIMIZATION AS A MODEL FOR FEW-SHOT LEARNING Sachin Ravi* and Hugo Larochelle Twitter, Cambridge, USA {sachinr,hugo}@twitter.com ABSTRACT Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a classifier has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity classifiers requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network classifier in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning. 1 INTRODUCTION Deep learning has shown great success in a variety of tasks with large amounts of labeled data, such as image classification (He et al. [2015]), machine translation (Wu et al. [2016]), and speech modeling (Oord et al. [2016]). These achievements have relied on the fact that optimization of these deep, high-capacity models requires many iterative updates across many labeled examples. This type of optimization breaks down in the small data regime, where we want to learn from very few labeled examples. In this setting, rather than having one large dataset, we have a set of datasets, each with few annotated examples per class. The motivation for this task lies not only in the fact that humans, even children, can usually generalize after just one example of a given object, but also in the fact that models excelling at this task would have many useful applications. Firstly, they would help alleviate data collection, as we would not require millions of labeled examples to attain reasonable performance. Furthermore, in many fields, data exhibits the characteristic of having many different classes but few examples per class. Models that are able to generalize from few examples would be able to capture this type of data effectively. There seem to be two main reasons why gradient-based optimization fails in the face of few labeled examples. Firstly, the variants of gradient-based optimization algorithms, such as momentum (Nesterov [1983]), Adagrad (Duchi et al. [2011]), Adadelta (Zeiler [2012]), and ADAM (Kingma & Ba [2014]), weren’t designed specifically to perform well under the constraint of a set number of updates. When applied to non-convex optimization problems with a reasonable choice of hyperparameters, these algorithms don’t have very strong guarantees on the speed of convergence, beyond the promise that they will eventually converge to a good solution after what could be many millions of iterations. Secondly, for each separate dataset considered, the network would have to start from a random initialization of its parameters, which considerably hurts its ability to converge to a good solution after a few updates. Transfer learning (Caruana [1995]; Bengio et al. [2012]; Donahue et al.
[2013]) can be applied to alleviate this problem by fine-tuning a pre-trained network from another task which has more labelled data; however, it has been observed that the benefit of a pre-trained network greatly decreases as the task the network was trained on diverges from the target task (Yosinski et al. [2014]). What is needed is a systematic way to learn a beneficial common initialization that would serve as a good point to start training for the set of datasets being considered. This would provide the same benefits as transfer learning, but with the guarantee that the initialization is an optimal starting point for fine-tuning. Previous work has suggested one manner in which to acquire quick knowledge from few examples, through the idea of meta-learning (Thrun [1998]; Schmidhuber et al. [1997]). Meta-learning suggests framing the learning problem at two levels. The first is quick acquisition of knowledge within each separate task presented. This process is guided by the second, which involves slower extraction of information learned across all the tasks. We present a method here that addresses the weakness of neural networks trained with gradient-based optimization on the few-shot learning problem by framing the problem within a meta-learning setting. We propose an LSTM-based meta-learner optimizer that is trained to optimize a learner neural network classifier. The meta-learner captures both short-term knowledge within a task and long-term knowledge common among all the tasks. By using an objective that directly captures an optimization algorithm’s ability to have good generalization performance given only a set number of updates, the meta-learner model is trained to converge a learner classifier to a good solution quickly on each task. Additionally, the formulation of our meta-learner model allows it to learn a task-common initialization for the learner classifier, which captures fundamental knowledge shared among all the tasks. 2 TASK DESCRIPTION We first begin by detailing the meta-learning formulation we use. In the typical machine learning setting, we are interested in a dataset \( D \) and usually split \( D \) so that we optimize parameters \( \theta \) on a training set \( D_{train} \) and evaluate its generalization on the test set \( D_{test} \). In meta-learning, however, we are dealing with meta-sets \( \mathcal{D} \) containing multiple regular datasets, where each \( D \in \mathcal{D} \) has a split of \( D_{train} \) and \( D_{test} \). We consider the \( k \)-shot, \( N \)-class classification task, where for each dataset \( D \), the training set consists of \( k \) labelled examples for each of \( N \) classes, meaning that \( D_{train} \) consists of \( k \cdot N \) examples, and \( D_{test} \) has a set number of examples for evaluation. We note that previous work (Vinyals et al., 2016) has used the term episode to describe each dataset consisting of a training and test set. In meta-learning, we thus have different meta-sets for meta-training, meta-validation, and meta-testing (\( \mathcal{D}_{meta-train}, \mathcal{D}_{meta-validation}, \) and \( \mathcal{D}_{meta-test} \), respectively). On \( \mathcal{D}_{meta-train} \), we are interested in training a learning procedure (the meta-learner) that can take as input one of its training sets \( D_{train} \) and produce a classifier (the learner) that achieves high average classification performance on its corresponding test set \( D_{test} \).
Using \( \mathcal{D}_{meta-validation} \) we can perform hyper-parameter selection of the meta-learner and evaluate its generalization performance on \( \mathcal{D}_{meta-test} \). For this formulation to correspond to the few-shot learning setting, each training set in datasets \( D \in \mathcal{D} \) will contain few labeled examples (we consider \( k = 1 \) or \( k = 5 \)) that must be used to generalize to good performance on the corresponding test set. An example of this formulation is given in Figure 1. Figure 1: Example of meta-learning setup. The top represents the meta-training set \( \mathcal{D}_{meta-train} \), where inside each gray box is a separate dataset that consists of the training set \( D_{train} \) (left side of dashed line) and the test set \( D_{test} \) (right side of dashed line). In this illustration, we are considering the 1-shot, 5-class classification task where for each dataset, we have one example from each of 5 classes (each given a label 1-5) in the training set and 2 examples for evaluation in the test set. The meta-test set \( \mathcal{D}_{meta-test} \) is defined in the same way, but with a different set of datasets that cover classes not present in any of the datasets in \( \mathcal{D}_{meta-train} \) (similarly, we additionally have a meta-validation set that is used to determine hyper-parameters). 3 MODEL We now move to the description of our proposed model for meta-learning. 3.1 MODEL DESCRIPTION Consider a single dataset, or episode, \( D \in \mathcal{D}_{meta-train} \). Suppose we have a learner neural net classifier with parameters \( \theta \) that we want to train on \( D_{train} \). The standard optimization algorithms used to train deep neural networks are some variant of gradient descent, which uses updates of the form \[ \theta_t = \theta_{t-1} - \alpha_t \nabla_{\theta_{t-1}} \mathcal{L}_t, \tag{1} \] where \( \theta_{t-1} \) are the parameters of the learner after \( t-1 \) updates, \( \alpha_t \) is the learning rate at time \( t \), \( \mathcal{L}_t \) is the loss optimized by the learner for its \( t^{th} \) update, \( \nabla_{\theta_{t-1}} \mathcal{L}_t \) is the gradient of that loss with respect to parameters \( \theta_{t-1} \), and \( \theta_t \) denotes the updated parameters of the learner. Our key observation that we leverage here is that this update resembles the update for the cell state in an LSTM (Hochreiter & Schmidhuber [1997]) \[ c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \] if \( f_t = 1, c_{t-1} = \theta_{t-1}, i_t = \alpha_t \), and \( \tilde{c}_t = -\nabla_{\theta_{t-1}} \mathcal{L}_t \). Thus, we propose training a meta-learner LSTM to learn an update rule for training a neural network. We set the cell state of the LSTM to be the parameters of the learner, or \( c_t = \theta_t \), and the candidate cell state \( \tilde{c}_t = \nabla_{\theta_{t-1}} \mathcal{L}_t \), given how valuable information about the gradient is for optimization. We define parametric forms for \( i_t \) and \( f_t \) so that the meta-learner can determine optimal values through the course of the updates. Let us start with \( i_t \), which corresponds to the learning rate for the updates.
We let \[ i_t = \sigma \left( \mathbf{W}_I \cdot [\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t, \theta_{t-1}, i_{t-1}] + \mathbf{b}_I \right), \] meaning that the learning rate is a function of the current parameter value \( \theta_{t-1} \), the current gradient \( \nabla_{\theta_{t-1}} \mathcal{L}_t \), the current loss \( \mathcal{L}_t \), and the previous learning rate \( i_{t-1} \). With this information, the meta-learner should be able to finely control the learning rate so as to train the learner quickly while avoiding divergence. As for \( f_t \), it seems possible that the optimal choice isn’t the constant 1. Intuitively, what would justify shrinking the parameters of the learner and forgetting part of its previous value would be if the learner is currently in a bad local optimum and needs a large change to escape. This would correspond to a situation where the loss is high but the gradient is close to zero. Thus, one proposal for the forget gate is to have it be a function of that information, as well as the previous value of the forget gate: \[ f_t = \sigma \left( \mathbf{W}_F \cdot [\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t, \theta_{t-1}, f_{t-1}] + \mathbf{b}_F \right). \] Additionally, notice that we can also learn the initial value of the cell state \( c_0 \) for the LSTM, treating it as a parameter of the meta-learner. This corresponds to the initial weights of the classifier (that the meta-learner is training). Learning this initial value lets the meta-learner determine the optimal initial weights of the learner so that training begins from a beneficial starting point that allows optimization to proceed rapidly. Lastly, note that though the meta-learner’s update rule matches the cell state update of the LSTM, the meta-learner also bears similarity to the GRU (Cho et al., 2014) hidden state update, with the exception that the forget and input gates aren’t tied to sum to one. 3.2 Parameter Sharing & Preprocessing Because we want our meta-learner to produce updates for deep neural networks, which consist of tens of thousands of parameters, to prevent an explosion of meta-learner parameters we need to employ some sort of parameter sharing. Thus, as in Andrychowicz et al. (2016), we share parameters across the coordinates of the learner gradient. This means each coordinate has its own hidden and cell state values, but the LSTM parameters are the same across all coordinates. This allows us to use a compact LSTM model and additionally has the nice property that the same update rule is used for each coordinate, though one that is dependent on the respective history of each coordinate during optimization. We can easily implement parameter sharing by having the input be a batch of gradient coordinates and loss inputs \( (\nabla_{\theta_{t,i}} \mathcal{L}_t, \mathcal{L}_t) \) for each dimension \( i \). Because the different coordinates of the gradients and the losses can be of very different magnitudes, we need to be careful in normalizing the values so that the meta-learner is able to use them properly during training. Thus, we also found that the preprocessing method of Andrychowicz et al.
3.3 Training

The question now is: how do we train the LSTM meta-learner model to be effective at few-shot learning tasks? As observed in Vinyals et al. (2016), in order to perform well at this task, it is key to have training conditions match those of test time. During evaluation of the meta-learning, for each dataset (episode) \( D = (D_{train}, D_{test}) \in \mathcal{D}_{meta-test} \), a good meta-learner model will, given a series of learner gradients and losses on the training set \( D_{train} \), suggest a series of updates for the classifier that pushes it towards good performance on the test set \( D_{test} \). Thus, to match test-time conditions, when considering each dataset \( D \in \mathcal{D}_{meta-train} \), the training objective we use is the loss \( \mathcal{L}_{test} \) of the produced classifier on \( D \)'s test set \( D_{test} \). While iterating over the examples in \( D \)'s training set \( D_{train} \), at each time step \( t \) the LSTM meta-learner receives \( (\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t) \) from the learner (the classifier) and proposes the new set of parameters \( \theta_t \). The process repeats for \( T \) steps, after which the classifier and its final parameters are evaluated on the test set to produce the loss that is then used to train the meta-learner. The training algorithm is described in Algorithm 1, and the corresponding computational graph is shown in Figure 2.

3.3.1 Gradient Independence Assumption

Notice that our formulation would imply that the losses \( \mathcal{L}_t \) and gradients \( \nabla_{\theta_{t-1}} \mathcal{L}_t \) of the learner are dependent on the parameters of the meta-learner. Gradients on the meta-learner's parameters should normally take this dependency into account. However, as discussed by Andrychowicz et al. (2016), this complicates the computation of the meta-learner's gradients. Thus, following Andrychowicz et al. (2016), we make the simplifying assumption that these contributions to the gradients aren't important and can be ignored, which allows us to avoid taking second derivatives, a considerably expensive operation. We were still able to train the meta-learner effectively in spite of this simplifying assumption.

Figure 2: Computational graph for the forward pass of the meta-learner. The dashed line divides examples from the training set \( D_{train} \) and test set \( D_{test} \). Each \( (\mathbf{X}_i, \mathbf{Y}_i) \) is the \( i^{th} \) batch from the training set, whereas \( (\mathbf{X}, \mathbf{Y}) \) is all the elements from the test set. The dashed arrows indicate that we do not back-propagate through that step when training the meta-learner. We refer to the learner as \( M \), where \( M(\mathbf{X}; \theta) \) is the output of learner \( M \) using parameters \( \theta \) for inputs \( \mathbf{X} \). We also use \( \nabla_t \) as a shorthand for \( \nabla_{\theta_{t-1}} \mathcal{L}_t \).
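In an autograd framework, this assumption amounts to detaching the learner's loss and gradient from the computation graph before they are fed to the meta-learner. A hypothetical PyTorch-style helper (our illustration, not the paper's code) might look like:

```python
import torch

def learner_signals(loss: torch.Tensor, theta: torch.Tensor):
    # Gradient of the learner loss at the current (possibly non-leaf) parameters.
    (grad,) = torch.autograd.grad(loss, theta)
    # Detaching implements the independence assumption: the meta-learner's own
    # gradient will not flow back through these quantities, so no second
    # derivatives are required.
    return grad.detach(), loss.detach()
```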
3.3.2 Initialization of Meta-Learner LSTM

When training LSTMs, it is advised to initialize the LSTM with small random weights and to set the forget gate bias to a large value, so that the forget gate is initialized to be close to 1, thus enabling gradient flow (Zaremba, 2015). In addition to the forget gate bias setting, we found that we needed to initialize the input gate bias to be small so that the input gate value (and thus the learning rate) used by the meta-learner LSTM starts out being small. With this combined initialization, the meta-learner starts close to normal gradient descent with a small learning rate, which helps initial stability of training.

3.4 Batch Normalization

Batch Normalization (Ioffe & Szegedy, 2015) is a recently proposed method to stabilize and thus speed up learning of deep neural networks by reducing internal covariate shift within the learner's hidden layers. This reduction is achieved by normalizing each layer's pre-activation: subtracting the mean and dividing by the standard deviation. During training, the mean and standard deviation are estimated using the current batch being trained on, whereas during evaluation a running average of both statistics calculated on the training set is used. We need to be careful with batch normalization for the learner network in the meta-learning setting, because we do not want to collect mean and standard deviation statistics during meta-testing in a way that allows information to leak between the different datasets (episodes) being considered. One easy way to prevent this issue is to not collect statistics at all during the meta-testing phase and just use our running averages from meta-training. This, however, has a bad impact on performance, because we have changed meta-training and meta-testing conditions, causing the meta-learner to learn a method of optimization that relies on batch statistics which it now does not have at meta-testing time. In order to keep the two phases as similar as possible, we found that a better strategy was to collect statistics for each dataset \( D \in \mathcal{D}_{meta-test} \) during meta-testing, but then erase the running statistics when we consider the next dataset. Thus, during meta-training, we use batch statistics for both the training and testing set, whereas during meta-testing, we use batch statistics for the training set (and to compute our running averages) but then use the running averages during testing. This does not cause any information to leak between different datasets, but also allows the meta-learner to be trained on conditions that are matched between training and testing. Lastly, because we are doing very few training steps, we computed the running averages so that higher preference is given to the later values.
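A small sketch of this per-episode bookkeeping (our own construction, with assumed details: running statistics kept as an exponential moving average whose relatively large update rate favors later batches, reset between episodes so nothing leaks across datasets):

```python
import numpy as np

class EpisodeBNStats:
    def __init__(self, momentum=0.5):  # large momentum => later batches dominate
        self.momentum = momentum
        self.reset()

    def reset(self):
        # Called before each new dataset (episode) at meta-test time.
        self.mean, self.var = None, None

    def update(self, batch):
        m, v = batch.mean(axis=0), batch.var(axis=0)
        if self.mean is None:
            self.mean, self.var = m, v
        else:
            a = self.momentum
            self.mean = (1 - a) * self.mean + a * m
            self.var = (1 - a) * self.var + a * v
```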
1: \( \Theta_0 \leftarrow \) random initialization
2:
3: for \( d = 1, n \) do
4:     \( D_{train}, D_{test} \leftarrow \) random dataset from \( \mathcal{D}_{meta-train} \)
5:     \( \theta_0 \leftarrow c_0 \)  ▷ Initialize learner parameters
6:
7:     for \( t = 1, T \) do
8:         \( \mathbf{X}_t, \mathbf{Y}_t \leftarrow \) random batch from \( D_{train} \)
9:         \( \mathcal{L}_t \leftarrow \mathcal{L}(M(\mathbf{X}_t; \theta_{t-1}), \mathbf{Y}_t) \)  ▷ Get loss of learner on train batch
10:        \( c_t \leftarrow R((\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t); \Theta_{d-1}) \)  ▷ Get output of meta-learner using Equation 2
11:        \( \theta_t \leftarrow c_t \)  ▷ Update learner parameters
12:     end for
13:
14:     \( \mathbf{X}, \mathbf{Y} \leftarrow D_{test} \)
15:     \( \mathcal{L}_{test} \leftarrow \mathcal{L}(M(\mathbf{X}; \theta_T), \mathbf{Y}) \)  ▷ Get loss of learner on test batch
16:     Update \( \Theta_d \) using \( \nabla_{\Theta_{d-1}} \mathcal{L}_{test} \)  ▷ Update meta-learner parameters
17:
18: end for
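For concreteness, a compact PyTorch-style rendering of Algorithm 1 (a sketch under assumed interfaces: `sample_episode`, a functional learner `M(X, theta)`, a meta-learner `R` exposing a learnable initial state, and `meta_opt`, an optimizer over `R`'s parameters; none of these names come from the paper or a particular library):

```python
import torch
import torch.nn.functional as F

def train_meta_learner(sample_episode, M, R, meta_opt, n_episodes, T):
    for _ in range(n_episodes):
        train_batches, X, Y = sample_episode()  # one dataset D = (D_train, D_test)
        theta = R.initial_state()               # theta_0 <- c_0 (learnable)
        for (X_t, Y_t), _ in zip(train_batches, range(T)):
            loss = F.cross_entropy(M(X_t, theta), Y_t)
            (grad,) = torch.autograd.grad(loss, theta)
            # Detached inputs implement the gradient independence assumption;
            # theta itself stays in R's graph so the test loss trains R and c_0.
            theta = R(grad.detach(), loss.detach(), theta)
        test_loss = F.cross_entropy(M(X, theta), Y)
        meta_opt.zero_grad()
        test_loss.backward()
        meta_opt.step()
```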
4 RELATED WORK

While this work falls within the broad literature of transfer learning in general, we focus here on positioning it relative to previous work on meta-learning and few-shot learning.

4.1 META-LEARNING

Meta-learning has a long history, but has grown to prominence recently as many have advocated for it as a key to achieving human-level intelligence in the future (Lake et al., 2016). The ability to learn at two levels (learning within each task presented, while accumulating knowledge about the similarities and differences between tasks) is seen as being crucial to improving AI. Previous work has used a variety of techniques in the meta-learning setting. Schmidhuber (1992; 1993) explored using networks that learn how to modify their own weights over a number of computation steps on the input. The updating of the weights is defined in a parametric form that allows the prediction and weight-change process to be differentiable end-to-end. The work of Bengio et al. (1990; 1995) and Bengio (1993) considered learning update rules for neural networks that are biologically plausible. This property is enforced by allowing the parametric form of the update to have as input only local information at each hidden unit to determine the weight change. Different optimization methods, such as genetic programming or simulated annealing, are used to train the learning rule. In Santoro et al. (2016), a memory-augmented neural network is trained to learn how to store and retrieve memories to use for each classification task. The work of Andrychowicz et al. (2016) uses an LSTM to train a neural network; however, they are interested in learning a general optimization algorithm to train neural networks for large-scale classification, whereas we are interested in the few-shot learning problem. This work also builds upon Hochreiter et al. (2001) and Bosc, both of which used LSTMs to train multi-layer perceptrons to learn on binary classification and time-series prediction tasks. Another related method is the work of Bertinetto et al. (2016), who train a meta-learner to map a training example to the weights of a neural network that is then used to classify future examples from this class; however, unlike our method, the classifier network is directly produced rather than being fine-tuned after multiple training steps. Our work also bears similarity to Maclaurin et al. (2015), who tune the hyperparameters of gradient descent with momentum by backpropagating through the chain of gradient steps to optimize the validation performance.

4.2 FEW-SHOT LEARNING

The best performing methods for few-shot learning have been mainly metric learning methods. Deep siamese networks (Koch, 2015) train a convolutional network to embed examples so that items in the same class are close while items in different classes are far away, according to some distance metric. Matching networks (Vinyals et al., 2016) refine this idea so that training and testing conditions match, by defining a differentiable nearest-neighbor loss involving the cosine similarities of embeddings produced by a convolutional network.

5 EVALUATION

In this section, we describe the results of experiments examining the properties of our model and comparing our method's performance against different approaches. Following Vinyals et al. (2016), we consider the k-shot, N-class classification setting where a meta-learner trains on many related but small training sets of k examples for each of N classes. We first split the list of all classes in the data into disjoint sets and assign them to each meta-set of meta-training, meta-validation, and meta-testing. To generate each instance of a k-shot, N-class task dataset \( D = (D_{train}, D_{test}) \in \mathcal{D} \), we do the following: we first sample N classes from the list of classes corresponding to the meta-set we consider. We then sample k examples from each of those classes. These k examples together compose the training set \( D_{train} \). Then, a fixed number of the remaining examples is sampled to yield a test set \( D_{test} \). We generally have 15 examples per class in the test sets. When training the meta-learner, we iterate by sampling these datasets (episodes) repeatedly. For meta-validation and meta-testing, however, we produce a fixed number of these datasets to evaluate each method. We produce enough datasets to ensure that the confidence interval of the mean accuracy is small.

For the learner, we use a simple CNN containing 4 convolutional layers, each of which is a \( 3 \times 3 \) convolution with 32 filters, followed by batch normalization, a ReLU non-linearity, and lastly a \( 2 \times 2 \) max-pooling. The network then has a final linear layer followed by a softmax for the number of classes being considered. The loss function \( \mathcal{L} \) is the average negative log-probability assigned by the learner to the correct class. For the meta-learner, we use a 2-layer LSTM, where the first layer is a normal LSTM and the second layer is our modified LSTM meta-learner. The gradients and losses are preprocessed and fed into the first-layer LSTM, and the regular gradient coordinates are also used by the second-layer LSTM to implement the state update rule shown in Equation 2. At each time step, the learner's loss and gradient are computed on a batch consisting of the entire training set \( D_{train} \), because we consider training sets with only a total of 5 or 25 examples. We train our LSTM with ADAM using a learning rate of 0.001 and with gradient clipping using a value of 0.25.
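A self-contained PyTorch sketch of a learner with this architecture (our illustration; `LazyLinear` infers the flattened feature size, so the input resolution, e.g. 84×84 Mini-ImageNet crops, is an assumption on our part):

```python
import torch.nn as nn

def make_learner(num_classes: int, in_channels: int = 3) -> nn.Module:
    # Four blocks of 3x3 conv (32 filters) -> batch norm -> ReLU -> 2x2 max-pool,
    # then a linear layer producing class logits (softmax is folded into the loss).
    layers, c = [], in_channels
    for _ in range(4):
        layers += [nn.Conv2d(c, 32, kernel_size=3, padding=1),
                   nn.BatchNorm2d(32),
                   nn.ReLU(),
                   nn.MaxPool2d(2)]
        c = 32
    layers += [nn.Flatten(), nn.LazyLinear(num_classes)]
    return nn.Sequential(*layers)
```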
5.1 EXPERIMENT RESULTS

The Mini-ImageNet dataset was proposed by Vinyals et al. (2016) as a benchmark offering the challenges of the complexity of ImageNet images, without requiring the resources and infrastructure necessary to run on the full ImageNet dataset. Because the exact splits used in Vinyals et al. (2016) were not released, we create our own version of the Mini-ImageNet dataset by selecting 100 random classes from ImageNet and picking 600 examples of each class. We use 64, 16, and 20 classes for training, validation and testing, respectively. We consider 1-shot and 5-shot classification for 5 classes. We use 15 examples per class for evaluation in each test set. We compare against two baselines and a recent metric-learning technique, Matching Networks (Vinyals et al., 2016), which has achieved state-of-the-art results in few-shot learning. The results are shown in Table 1.

The first baseline we use is a nearest-neighbor baseline (Baseline-nearest-neighbor), where we first train a network to classify between all the classes jointly in the original meta-training set. At meta-test time, for each dataset \( D \), we embed all the items in the training set using our trained network and then use nearest-neighbor matching among the embedded training examples to classify each test example. The second baseline we use (Baseline-finetune) represents a coarser version of our meta-learner model. As in the first baseline, we start by training a network to classify jointly between all classes in the meta-training set. We then use the meta-validation set to search over SGD hyperparameters, where each training set is used to fine-tune the pre-trained network before evaluating on the test set. We use a fixed number of updates for fine-tuning and search over the learning rate and learning rate decay used during the course of these updates.

*Code can be found at https://github.com/twitter/meta-learning-lstm

<table> <tr> <th>Model</th> <th colspan="2">5-class</th> </tr> <tr> <th></th> <th>1-shot</th> <th>5-shot</th> </tr> <tr> <td><b>Baseline-finetune</b></td> <td>28.86 ± 0.54%</td> <td>49.79 ± 0.79%</td> </tr> <tr> <td><b>Baseline-nearest-neighbor</b></td> <td>41.08 ± 0.70%</td> <td>51.04 ± 0.65%</td> </tr> <tr> <td><b>Matching Network</b></td> <td>43.40 ± 0.78%</td> <td>51.09 ± 0.71%</td> </tr> <tr> <td><b>Matching Network FCE</b></td> <td>43.56 ± 0.84%</td> <td>55.31 ± 0.73%</td> </tr> <tr> <td><b>Meta-Learner LSTM (OURS)</b></td> <td>43.44 ± 0.77%</td> <td>60.60 ± 0.71%</td> </tr> </table>

Table 1: Average classification accuracies on Mini-ImageNet with 95% confidence intervals. Marked in bold are the best results for each scenario, as well as other results with an overlapping confidence interval.

For Matching Networks, we implemented our own version of both the basic and the fully-conditional embedding (FCE) versions. In the basic version, a convolutional network is trained to learn independent embeddings for examples in the training and test set. In the FCE version, a bidirectional LSTM is used to learn an embedding for the training set such that each training example's embedding is also a function of all the other training examples. Additionally, an attention-LSTM is used so that a test example embedding is also a function of all the embeddings of the training set. We do not consider fine-tuning the network using the train set during meta-testing to improve performance as mentioned in Vinyals et al. (2016), but do note that our meta-learner could also be fine-tuned using this data. Note that to remain consistent with Vinyals et al. (2016), our baseline and matching net convolutional networks have 4 layers, each with 64 filters. We also added dropout to each convolutional block in matching nets to prevent overfitting.
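The nearest-neighbor baseline reduces to a few lines once an embedding network is available; a NumPy sketch (our own, where `embed` is a hypothetical pre-trained embedding function mapping a batch of images to feature vectors):

```python
import numpy as np

def nearest_neighbor_predict(embed, X_train, y_train, X_test):
    E_train, E_test = embed(X_train), embed(X_test)
    # Squared Euclidean distance between every test and train embedding.
    d = ((E_test[:, None, :] - E_train[None, :, :]) ** 2).sum(axis=-1)
    return y_train[np.argmin(d, axis=1)]  # label of the closest train example
```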
For our meta-learner, we train different models for the 1-shot and 5-shot tasks, which make 12 and 5 updates, respectively. We noticed that better performance for each task was attained if the meta-learner is explicitly trained to do the set number of updates during meta-training that will be used during meta-testing. We attain results that are much better than the baselines discussed and competitive with Matching Networks. For 5-shot, we are able to do much better than Matching Networks, whereas for 1-shot, the confidence interval for our performance intersects the interval for Matching Networks. Again, we note that the numbers do not match the ones provided by Vinyals et al. (2016) simply because we created our version of the dataset and implemented our own versions of their model. It is interesting to note that the fine-tuned baseline is worse than the nearest-neighbor baseline. Because we are not regularizing the classifier, with very few updates the fine-tuning model overfits, especially in the 1-shot case. This propensity to overfit speaks to the benefit of meta-training the initialization of the classifier end-to-end as is done in the meta-learning LSTM.

5.2 VISUALIZATION OF META-LEARNER

We also visualize the optimization strategy learned by the meta-learner, in Figure 3. We can look at the \( i_t \) and \( f_t \) gate values in Equation 2 at each update step, to try to get an understanding of how the meta-learner updates the learner during training. We visualize the gate values while training on different datasets \( D_{train} \), to observe whether there are variations between training sets. We consider both 1-shot and 5-shot classification settings, where the meta-learner is making 10 and 5 updates, respectively. For the forget gate values for both tasks, the meta-learner seems to adopt a simple weight decay strategy that seems consistent across different layers. The input gate values are harder to interpret to glean the meta-learner's strategy. However, there seems to be a lot of variability between different datasets, indicating that the meta-learner isn't simply learning a fixed optimization strategy. Additionally, there seem to be differences between the two tasks, suggesting that the meta-learner has adopted different methods to deal with the different conditions of each setting.

Figure 3: Visualization of the input and forget gate values output by the meta-learner during the course of its updates. (a) Forget gate values for the 1-shot meta-learner. (b) Input gate values for the 1-shot meta-learner. (c) Forget gate values for the 5-shot meta-learner. (d) Input gate values for the 5-shot meta-learner. Layers 1-4 represent the values for a randomly selected parameter from the 4 convolutional layers, and layer 5 represents the values for a random parameter from the fully-connected layer. The different curves represent training steps on different datasets.

6 CONCLUSION

We described an LSTM-based model for meta-learning, which is inspired by the parameter updates suggested by gradient descent optimization algorithms. Our LSTM meta-learner uses its state to represent the learning updates of the parameters of a classifier. It is trained to discover both a good initialization for the learner's parameters, as well as a successful mechanism for updating the learner's parameters to a given small training set for some new classification task. Our experiments demonstrate that our approach outperforms natural baselines and is competitive with the state-of-the-art in metric learning for few-shot learning.
In this work, we focused our study on the few-shot, few-classes setting. However, it would be more valuable to train meta-learners that can perform well across a full spectrum of settings, i.e., for few or many training examples and for few or many possible classes. Our future work will thus consider moving towards this more challenging scenario.

ACKNOWLEDGMENTS

We thank Jake Snell, Kevin Swersky, and Oriol Vinyals for helpful discussions of this work.

REFERENCES

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. CoRR, abs/1606.04474, 2016. URL http://arxiv.org/abs/1606.04474.

Samy Bengio. Optimisation d'une règle d'apprentissage pour réseaux de neurones artificiels. PhD thesis, Département d'Informatique et Recherche Opérationnelle, Université de Montréal, 1993.

Samy Bengio, Yoshua Bengio, and Jocelyn Cloutier. On the search for new learning rules for ANNs. Neural Processing Letters, 2(4):26–30, 1995.

Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.

Yoshua Bengio et al. Deep learning of representations for unsupervised and transfer learning. ICML Unsupervised and Transfer Learning, 27:17–36, 2012.

Luca Bertinetto, João F. Henriques, Jack Valmadre, Philip H. S. Torr, and Andrea Vedaldi. Learning feed-forward one-shot learners. CoRR, abs/1606.05233, 2016. URL http://arxiv.org/abs/1606.05233.

Tom Bosc. Learning to learn neural networks.

Rich Caruana. Learning many related tasks at the same time with backpropagation. In Advances in Neural Information Processing Systems, pp. 657–664, 1995.

Kyunghyun Cho, Bart van Merrienboer, Çağlar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014. URL http://arxiv.org/abs/1406.1078.

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013. URL http://arxiv.org/abs/1310.1531.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, July 2011. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1953048.2021068.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In Lecture Notes in Computer Science 2130, Proceedings of the International Conference on Artificial Neural Networks (ICANN 2001), pp. 87–94. Springer, 2001.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. URL http://arxiv.org/abs/1502.03167.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Gregory Koch. Siamese neural networks for one-shot image recognition. Thesis, University of Toronto, 2015.

Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman.
Building machines that learn and think like people. CoRR, abs/1604.00289, 2016. URL http://arxiv.org/abs/1604.00289.

Dougal Maclaurin, David Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimization through reversible learning. In Proceedings of the 32nd International Conference on Machine Learning, 2015.

Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). 1983.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. One-shot learning with memory-augmented neural networks. CoRR, abs/1605.06065, 2016. URL http://arxiv.org/abs/1605.06065.

Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.

Jürgen Schmidhuber. A neural network that embeds its own meta-levels. In Neural Networks, 1993, IEEE International Conference on, pp. 407–412. IEEE, 1993.

Jürgen Schmidhuber, Jieyu Zhao, and Marco Wiering. Shifting inductive bias with success-story algorithm, adaptive levin search, and incremental self-improvement. Machine Learning, 28(1):105–130, 1997.

Sebastian Thrun. Lifelong learning algorithms. In Learning to Learn, pp. 181–209. Springer, 1998.

Oriol Vinyals, Charles Blundell, Timothy P. Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. CoRR, abs/1606.04080, 2016. URL http://arxiv.org/abs/1606.04080.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? CoRR, abs/1411.1792, 2014. URL http://arxiv.org/abs/1411.1792.

Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.

Matthew D. Zeiler. ADADELTA: An adaptive learning rate method. CoRR, abs/1212.5701, 2012. URL http://arxiv.org/abs/1212.5701.
ABSTRACT

Though deep neural networks have shown great success in the large-data domain, they generally perform poorly on few-shot learning tasks, where a classifier has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high-capacity classifiers requires many iterative steps over many examples to perform well. Here, we propose an LSTM-based meta-learner model to learn the exact optimization algorithm used to train another learner neural network classifier in the few-shot regime. The parametrization of our model allows it to learn appropriate parameter updates specifically for the scenario where a set amount of updates will be made, while also learning a general initialization of the learner (classifier) network that allows for quick convergence of training. We demonstrate that this meta-learning model is competitive with deep metric-learning techniques for few-shot learning.

1 INTRODUCTION

Deep learning has shown great success in a variety of tasks with large amounts of labeled data, in image classification (He et al., 2015), machine translation (Wu et al., 2016), and speech modeling (Oord et al., 2016). These achievements have relied on the fact that optimization of these deep, high-capacity models requires many iterative updates across many labeled examples. This type of optimization breaks down in the small-data regime, where we want to learn from very few labeled examples. In this setting, rather than having one large dataset, we have a set of datasets, each with few annotated examples per class. The motivation for this task lies not only in the fact that humans, even children, can usually generalize after just one example of a given object, but also in the fact that models excelling at this task would have many useful applications. Firstly, they would help alleviate the burden of data collection, as we would not require millions of labeled examples to attain reasonable performance. Furthermore, in many fields, data exhibits the characteristic of having many different classes but few examples per class. Models that are able to generalize from few examples would be able to capture this type of data effectively.

There seem to be two main reasons why gradient-based optimization fails in the face of few labeled examples. Firstly, the variants of gradient-based optimization algorithms, such as momentum (Nesterov, 1983), Adagrad (Duchi et al., 2011), Adadelta (Zeiler, 2012), and ADAM (Kingma & Ba, 2014), weren't designed specifically to perform well under the constraint of a set number of updates. Specifically, when applied to non-convex optimization problems with a reasonable choice of hyperparameters, these algorithms don't have very strong guarantees on the speed of convergence, beyond the promise that they will eventually converge to a good solution after what could be many millions of iterations. Secondly, for each separate dataset considered, the network would have to start from a random initialization of its parameters, which considerably hurts its ability to converge to a good solution after a few updates. Transfer learning (Caruana, 1995; Bengio et al., 2012; Donahue et al., 2013) can be applied to alleviate this problem by fine-tuning a pre-trained network from another task which has more labeled data; however, it has been observed that the benefit of a pre-trained network greatly decreases as the task the network was trained on diverges from the target task (Yosinski et al., 2014).
What is needed is a systematic way to learn a beneficial common initialization that would serve as a good point to start training for the set of datasets being considered. This would provide the same benefits as transfer learning, but with the guarantee that the initialization is an optimal starting point for fine-tuning. Previous work has suggested one manner in which to acquire quick knowledge from few examples, through the idea of meta-learning (Thrun, 1998; Schmidhuber et al., 1997). Meta-learning suggests framing the learning problem at two levels. The first is quick acquisition of knowledge within each separate task presented. This process is guided by the second, which involves slower extraction of information learned across all the tasks.

We present a method here that addresses the weakness of neural networks trained with gradient-based optimization on the few-shot learning problem by framing the problem within a meta-learning setting. We propose an LSTM-based meta-learner optimizer that is trained to optimize a learner neural network classifier. The meta-learner captures both short-term knowledge within a task and long-term knowledge common among all the tasks. By using an objective that directly captures an optimization algorithm's ability to have good generalization performance given only a set number of updates, the meta-learner model is trained to converge a learner classifier to a good solution quickly on each task. Additionally, the formulation of our meta-learner model allows it to learn a task-common initialization for the learner classifier, which captures fundamental knowledge shared among all the tasks.

2 TASK DESCRIPTION

We first begin by detailing the meta-learning formulation we use. In the typical machine learning setting, we are interested in a dataset \( D \) and usually split \( D \) so that we optimize parameters \( \theta \) on a training set \( D_{train} \) and evaluate its generalization on the test set \( D_{test} \). In meta-learning, however, we are dealing with meta-sets \( \mathcal{D} \) containing multiple regular datasets, where each \( D \in \mathcal{D} \) has a split of \( D_{train} \) and \( D_{test} \). We consider the \( k \)-shot, \( N \)-class classification task, where for each dataset \( D \), the training set consists of \( k \) labeled examples for each of \( N \) classes, meaning that \( D_{train} \) consists of \( k \cdot N \) examples, and \( D_{test} \) has a set number of examples for evaluation. We note that previous work (Vinyals et al., 2016) has used the term episode to describe each dataset consisting of a training and test set. In meta-learning, we thus have different meta-sets for meta-training, meta-validation, and meta-testing (\( \mathcal{D}_{meta-train} \), \( \mathcal{D}_{meta-validation} \), and \( \mathcal{D}_{meta-test} \), respectively). On \( \mathcal{D}_{meta-train} \), we are interested in training a learning procedure (the meta-learner) that can take as input one of its training sets \( D_{train} \) and produce a classifier (the learner) that achieves high average classification performance on its corresponding test set \( D_{test} \). Using \( \mathcal{D}_{meta-validation} \) we can perform hyper-parameter selection for the meta-learner and evaluate its generalization performance on \( \mathcal{D}_{meta-test} \).
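A small sketch of how one such \( k \)-shot, \( N \)-class dataset could be assembled (our own illustration; `class_to_examples` is a hypothetical mapping from a class label to its available examples within one meta-set split):

```python
import random

def sample_episode(class_to_examples, k, n, n_test_per_class):
    # Build one dataset D = (D_train, D_test): N classes, k train examples
    # each, plus a fixed number of held-out examples per class for D_test.
    classes = random.sample(list(class_to_examples), n)
    d_train, d_test = [], []
    for label, cls in enumerate(classes):
        ex = random.sample(class_to_examples[cls], k + n_test_per_class)
        d_train += [(x, label) for x in ex[:k]]
        d_test += [(x, label) for x in ex[k:]]
    return d_train, d_test
```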
For this formulation to correspond to the few-shot learning setting, each training set in datasets \( D \in \mathcal{D} \) will contain few labeled examples (we consider \( k = 1 \) or \( k = 5 \)), that must be used to generalize to good performance on the corresponding test set. An example of this formulation is given in Figure 1. 3 MODEL We now move to the description of our proposed model for meta-learning. 3.1 MODEL DESCRIPTION Consider a single dataset, or episode, \( D \in \mathcal{D}_{meta-train} \). Suppose we have a learner neural net classifier with parameters \( \theta \) that we want to train on \( D_{train} \). The standard optimization algorithms used to train deep neural networks are some variant of gradient descent, which uses updates of the form \[ \theta_t = \theta_{t-1} - \alpha_t \nabla_{\theta_{t-1}} \mathcal{L}_t, \] (1) Figure 1: Example of meta-learning setup. The top represents the meta-training set \( \mathcal{D}_{meta-train} \), where inside each gray box is a separate dataset that consists of the training set \( D_{train} \) (left side of dashed line) and the test set \( D_{test} \) (right side of dashed line). In this illustration, we are considering the 1-shot, 5-class classification task where for each dataset, we have one example from each of 5 classes (each given a label 1-5) in the training set and 2 examples for evaluation in the test set. The meta-test set \( \mathcal{D}_{meta-test} \) is defined in the same way, but with a different set of datasets that cover classes not present in any of the datasets in \( \mathcal{D}_{meta-train} \) (similarly, we additionally have a meta-validation set that is used to determine hyper-parameters). where \( \theta_{t-1} \) are the parameters of the learner after \( t-1 \) updates, \( \alpha_t \) is the learning rate at time \( t \), \( \mathcal{L}_t \) is the loss optimized by the learner for its \( t^{th} \) update, \( \nabla_{\theta_{t-1}} \mathcal{L}_t \) is the gradient of that loss with respect to parameters \( \theta_{t-1} \), and \( \theta_t \) is the updated parameters of the learner. Our key observation that we leverage here is that this update resembles the update for the cell state in an LSTM (Hochreiter & Schmidhuber [1997]) \[ c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \] if \( f_t = 1, c_{t-1} = \theta_{t-1}, i_t = \alpha_t \), and \( \tilde{c}_t = -\nabla_{\theta_{t-1}} \mathcal{L}_t \). Thus, we propose training a meta-learner LSTM to learn an update rule for training a neural network. We set the cell state of the LSTM to be the parameters of the learner, or \( c_t = \theta_t \), and the candidate cell state \( \tilde{c}_t = \nabla_{\theta_{t-1}} \mathcal{L}_t \), given how valuable information about the gradient is for optimization. We define parametric forms for \( i_t \) and \( f_t \) so that the meta-learner can determine optimal values through the course of the updates. Let us start with \( i_t \), which corresponds to the learning rate for the updates. We let \[ i_t = \sigma \left( \mathbf{W}_I \cdot [\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t, \theta_{t-1}, i_{t-1}] + \mathbf{b}_I \right), \] meaning that the learning rate is a function of the current parameter value \( \theta_{t-1} \), the current gradient \( \nabla_{\theta_{t-1}} \mathcal{L}_t \), the current loss \( \mathcal{L}_t \), and the previous learning rate \( i_{t-1} \). With this information, the meta-learner should be able to finely control the learning rate so as to train the learner quickly while avoiding divergence. 
As for \( f_t \), it seems possible that the optimal choice isn’t the constant 1. Intuitively, what would justify shrinking the parameters of the learner and forgetting part of its previous value would be if the learner is currently in a bad local optima and needs a large change to escape. This would correspond to a situation where the loss is high but the gradient is close to zero. Thus, one proposal for the forget gate is to have it be a function of that information, as well as the previous value of the forget gate: \[ f_t = \sigma \left( \mathbf{W}_F \cdot [\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t, \theta_{t-1}, f_{t-1}] + \mathbf{b}_F \right). \] Additionally, notice that we can also learn the initial value of the cell state \( c_0 \) for the LSTM, treating it as a parameter of the meta-learner. This corresponds to the initial weights of the classifier (that the meta-learner is training). Learning this initial value lets the meta-learner determine the optimal initial weights of the learner so that training begins from a beneficial starting point that allows optimization to proceed rapidly. Lastly, note that though the meta-learner’s update rule matches the cell state update of the LSTM, the meta-learner also bears similarity to the GRU (Cho et al., 2014) hidden state update, with the exception that the forget and input gates aren’t tied to sum to one. 3.2 Parameter Sharing & Preprocessing Because we want our meta-learner to produce updates for deep neural networks, which consist of tens of thousands of parameters, to prevent an explosion of meta-learner parameters we need to employ some sort of parameter sharing. Thus as in Andrychowicz et al. (2016), we share parameters across the coordinates of the learner gradient. This means each coordinate has its own hidden and cell state values but the LSTM parameters are the same across all coordinates. This allows us to use a compact LSTM model and additionally has the nice property that the same update rule is used for each coordinate, but one that is dependent on the respective history of each coordinate during optimization. We can easily implement parameter sharing by having the input be a batch of gradient coordinates and loss inputs \( (\nabla_{\theta_t}, \mathcal{L}_t, \mathcal{L}_t ) \) for each dimension \( i \). Because the different coordinates of the gradients and the losses can be of very different magnitudes, we need to be careful in normalizing the values so that the meta-learner is able to use them properly during training. Thus, we also found that the preprocessing method of Andrychowicz et al. (2016) worked well when applied to both the dimensions of the gradients and the losses at each time step: \[ x \rightarrow \begin{cases} \left( \frac{\log(|x|)}{p}, \operatorname{sgn}(x) \right) & \text{if } |x| \geq e^{-p} \\ (-1, e^p x) & \text{otherwise} \end{cases} \] This preprocessing adjusts the scaling of gradients and losses, while also separating the information about their magnitude and their sign (the latter being mostly useful for gradients). We found that the suggested value of \( p = 10 \) in the above formula worked well in our experiments. 3.3 Training The question now is how do we train the LSTM meta-learner model to be effective at few-shot learning tasks? As observed in Vinyals et al. (2016), in order to perform well at this task, it is key to have training conditions match those of test time. 
During evaluation of the meta-learning, for each dataset (episode), \( D = (D_{train}, D_{test}) \in \mathcal{D}_{meta-test} \), a good meta-learner model will, given a series of learner gradients and losses on the training set \( D_{train} \), suggest a series of updates for the classifier that pushes it towards good performance on the test set \( D_{test} \). Thus to match test time conditions, when considering each dataset \( D \in \mathcal{D}_{meta-train} \), the training objective we use is the loss \( \mathcal{L}_{test} \) of the produced classifier on \( D \)'s test set \( D_{test} \). While iterating over the examples in \( D \)'s training set \( D_{train} \), at each time step \( t \) the LSTM meta-learner receives \( (\nabla_{\theta_{t-1}} \mathcal{L}_t, \mathcal{L}_t ) \) from the learner (the classifier) and proposes the new set of parameters \( \theta_t \). The process repeats for \( T \) steps, after which the classifier and its final parameters are evaluated on the test set to produce the loss that is then used to train the meta-learner. The training algorithm is described in Algorithm 1 and the corresponding computational graph is shown in Figure 2. 3.3.1 Gradient Independence Assumption Notice that our formulation would imply that the losses \( \mathcal{L}_t \) and gradients \( \nabla_{\theta_{t-1}} \mathcal{L}_t \) of the learner are dependent on the parameters of the meta-learner. Gradients on the meta-learner’s parameters should normally take this dependency into account. However, as discussed by Andrychowicz et al. (2016), this complicates the computation of the meta-learner’s gradients. Thus, following Andrychowicz et al. (2016), we make the simplifying assumption that these contributions to the gradients aren’t important and can be ignored, which allows us to avoid taking second derivatives, a considerably expensive operation. We were still able to train the meta-learner effectively in spite of this simplifying assumption. Figure 2: Computational graph for the forward pass of the meta-learner. The dashed line divides examples from the training set \( D_{train} \) and test set \( D_{test} \). Each \( (\mathbf{X}_i, \mathbf{Y}_i) \) is the \( i^{th} \) batch from the training set whereas \( (\mathbf{X}, \mathbf{Y}) \) is all the elements from the test set. The dashed arrows indicate that we do not back-propagate through that step when training the meta-learner. We refer to the learner as \( M \), where \( M(\mathbf{X}; \theta) \) is the output of learner \( M \) using parameters \( \theta \) for inputs \( \mathbf{X} \). We also use \( \nabla_i \) as a shorthand for \( \nabla_{\theta_{t-1}} \mathcal{L}_t \). 3.3.2 Initialization of Meta-Learner LSTM When training LSTMs, it is advised to initialize the LSTM with small random weights and to set the forget gate bias to a large value so that the forget gate is initialized to be close to 1, thus enabling gradient flow (Zaremba [2015]). In addition to the forget gate bias setting, we found that we needed to initialize the input gate bias to be small so that the input gate value (and thus the learning rate) used by the meta-learner LSTM starts out being small. With this combined initialization, the meta-learner starts close to normal gradient descent with a small learning rate, which helps initial stability of training. 
3.4 Batch Normalization Batch Normalization (Ioffe & Szegedy [2015]) is a recently proposed method to stabilize and thus speed up learning of deep neural networks by reducing internal covariate shift within the learner’s hidden layers. This reduction is achieved by normalizing each layer’s pre-activation, by subtracting by the mean and dividing by the standard deviation. During training, the mean and standard deviation are estimated using the current batch being trained on, whereas during evaluation a running average of both statistics calculated on the training set is used. We need to be careful with batch normalization for the learner network in the meta-learning setting, because we do not want to collect mean and standard deviation statistics during meta-testing in a way that allows information to leak between different datasets (episodes), being considered. One easy way to prevent this issue is to not collect statistics at all during the meta-testing phase, but just use our running averages from meta-training. This, however, has a bad impact on performance, because we have changed meta-training and meta-testing conditions, causing the meta-learner to learn a method of optimization that relies on batch statistics which it now does not have at meta-testing time. In order to keep the two phases as similar as possible, we found that a better strategy was to collect statistics for each dataset \( D \in \mathcal{D} \) during \( \mathcal{D}_{meta-test} \), but then erase the running statistics when we consider the next dataset. Thus, during meta-training, we use batch statistics for both the training and testing set whereas during meta-testing, we use batch statistics for the training set (and to compute our running averages) but then use the running averages during testing. This does not cause any information to leak between different datasets, but also allows the meta-learner to be trained on conditions that are matched between training and testing. Lastly, because we are doing very few training steps, we computed the running averages so that higher preference is given to the later values. Algorithm 1 Train Meta-Learner Input: Meta-training set \( \mathcal{D}_{meta-train} \), Learner \( M \) with parameters \( \theta \), Meta-Learner \( R \) with parameters \( \Theta \). 
1: \( \Theta_0 \leftarrow \) random initialization 2: 3: for \( d = 1, n \) do 4: \( D_{train}, D_{test} \leftarrow \) random dataset from \( \mathcal{D}_{meta-train} \) 5: \( \theta_0 \leftarrow c_0 \) \hspace{1cm} \( \triangleright \) Intialize learner parameters 6: 7: for \( t = 1, T \) do 8: \( \mathbf{X}_t, \mathbf{Y}_t \leftarrow \) random batch from \( D_{train} \) 9: \( \mathcal{L}_t \leftarrow \mathcal{L}(M(\mathbf{X}_t; \theta_{t-1}), \mathbf{Y}_t) \) \hspace{1cm} \( \triangleright \) Get loss of learner on train batch 10: \( c_t \leftarrow R((\nabla_{\theta_{t-1}} \mathcal{L}_t); \Theta_{d-1}) \) \hspace{1cm} \( \triangleright \) Get output of meta-learner using Equation 2 11: \( \theta_t \leftarrow c_t \) \hspace{1cm} \( \triangleright \) Update learner parameters 12: end for 13: 14: \( \mathbf{X}, \mathbf{Y} \leftarrow D_{test} \) 15: \( \mathcal{L}_{test} \leftarrow \mathcal{L}(M(\mathbf{X}; \theta_T), \mathbf{Y}) \) \hspace{1cm} \( \triangleright \) Get loss of learner on test batch 16: Update \( \Theta_d \) using \( \nabla_{\Theta_{d-1}} \mathcal{L}_{test} \) \hspace{1cm} \( \triangleright \) Update meta-learner parameters 17: 18: end for 4 RELATED WORK While this work falls within the broad literature of transfer learning in general, we focus here on positioning it relative to previous work on meta-learning and few-shot learning. 4.1 META-LEARNING Meta-learning has a long history, but has grown to prominence recently as many have advocated for it as a key to achieving human-level intelligence in the future (Lake et al., 2016). The ability to learn at two levels (learning within each task presented, while accumulating knowledge about the similarities and differences between tasks) is seen as being crucial to improving AI. Previous work has used a variety of techniques in the meta-learning setting. Schmidhuber (1992, 1993) explored using networks that learn how to modify their own weights over a number of computations steps on the input. The updating of the weights is defined in a parametric form that allows the prediction and weight-change process to be differentiable end-to-end. The work of Bengio et al. (1990, 1995) and Bengio (1993) considered learning update rules for neural networks that are biologically plausible. This property is enforced by allowing the parametric form of the update to only have as input local information at each hidden unit to determine the weight change. Different optimization methods, such as genetic programming or simulated annealing, are used to train the learning rule. In Santoro et al. (2016), a memory-augmented neural network is trained to learn how to store and retrieve memories to use for each classification task. The work of Andrychowicz et al. (2016) uses an LSTM to train a neural network; however, they are interested in learning a general optimization algorithm to train neural networks for large-scale classification, whereas we are interested in the few-shot learning problem. This work also builds upon Hochreiter et al. (2001) and Bosc both of which used LSTMs to train multi-layer perceptrons to learn on binary classification and time-series prediction tasks. Another related method is the work of Bertinetto et al. (2016), who train a meta-learner to map a training example to the weights of a neural network that is then used to classify future examples from this class; however, unlike our method the classifier network is directly produced rather than being fine-tuned after multiple training steps. 
Our work also bears similarity to Maclaurin et al. (2015), who tune the hyperparameters of gradient descent with momentum by backpropagating through the chain of gradient steps to optimize the validation performance. 4.2 FEW-SHOT LEARNING The best performing methods for few-shot learning have been mainly metric learning methods. Deep siamese networks (Koch [2015]) train a convolutional network to embed examples so that items in the same class are close while items in different classes are far away, according to some distance metric. Matching networks (Vinyals et al.[2016]) refine this idea so that training and testing conditions match, by defining a differentiable nearest neighbor loss involving the cosine similarities of embeddings produced by a convolutional network. 5 EVALUATION In this section, we describe the results of experiments, examining the properties of our model and comparing our method’s performance against different approaches. Following Vinyals et al.[2016], we consider the k-shot, N-class classification setting where a meta-learner trains on many related but small training sets of k examples for each of N classes. We first split the list of all classes in the data into disjoint sets and assign them to each meta-set of meta-training, meta-validation, and meta-testing. To generate each instance of a k-shot, N-class task dataset \( D = (D_{train}, D_{test}) \in \mathcal{D} \), we do the following: we first sample N classes from the list of classes corresponding to the meta-set we consider. We then sample k examples from each of those classes. These k examples together compose the training set \( D_{train} \). Then, an additional fixed amount of the rest of the examples are sampled to yield a test set \( D_{test} \). We generally have 15 examples per class in the test sets. When training the meta-learner, we iterate by sampling these datasets (episodes) repeatedly. For meta-validation and meta-testing, however, we produce a fixed number of these datasets to evaluate each method. We produce enough datasets to ensure that the confidence interval of the mean accuracy is small. For the learner, we use a simple CNN containing 4 convolutional layers, each of which is a \( 3 \times 3 \) convolution with 32 filters, followed by batch normalization, a ReLU non-linearity, and lastly a \( 2 \times 2 \) max-pooling. The network then has a final linear layer followed by a softmax for the number of classes being considered. The loss function \( \mathcal{L} \) is the average negative log-probability assigned by the learner to the correct class. For the meta-learner, we use a 2-layer LSTM, where the first layer is a normal LSTM and the second layer is our modified LSTM meta-learner. The gradients and losses are preprocessed and fed into the first layer LSTM, and the regular gradient coordinates are also used by the second layer LSTM to implement the state update rule shown in (1). At each time step, the learner’s loss and gradient is computed on a batch consisting of the entire training set \( D_{train} \), because we consider training sets with only a total of 5 or 25 examples. We train our LSTM with ADAM using a learning rate of 0.001 and with gradient clipping using a value of 0.25. 5.1 EXPERIMENT RESULTS The Mini-ImageNet dataset was proposed by Vinyals et al.[2016] as a benchmark offering the challenges of the complexity of ImageNet images, without requiring the resources and infrastructure necessary to run on the full ImageNet dataset. 
Because the exact splits used in Vinyals et al.[2016] were not released, we create our own version of the Mini-Imagenet dataset by selecting a random 100 classes from ImageNet and picking 600 examples of each class. We use 64, 16, and 20 classes for training, validation and testing, respectively. We consider 1-shot and 5-shot classification for 5 classes. We use 15 examples per class for evaluation in each test set. We compare against two baselines and a recent metric-learning technique, Matching Networks (Vinyals et al.[2016]), which has achieved state-of-the-art results in few-shot learning. The results are shown in Table 1. The first baseline we use is a nearest-neighbor baseline (Baseline-nearest-neighbor), where we first train a network to classify between all the classes jointly in the original meta-training set. At meta-test time, for each dataset \( D \), we embed all the items in the training set using our trained network and then use nearest-neighbor matching among the embedded training examples to classify each test example. The second baseline we use (Baseline-finetune) represents a coarser version of our meta-learner model. As in the first baseline, we start by training a network to classify jointly between all classes in the meta-training set. We then use the meta-validation set to search over SGD hyperparameters, where each training set is used to fine-tune the pre-trained network before evaluating on *Code can be found at https://github.com/twitter/meta-learning-lstm <table> <tr> <th>Model</th> <th colspan="2">5-class</th> </tr> <tr> <th></th> <th>1-shot</th> <th>5-shot</th> </tr> <tr> <td><b>Baseline-finetune</b></td> <td>28.86 ± 0.54%</td> <td>49.79 ± 0.79%</td> </tr> <tr> <td><b>Baseline-nearest-neighbor</b></td> <td>41.08 ± 0.70%</td> <td>51.04 ± 0.65%</td> </tr> <tr> <td><b>Matching Network</b></td> <td>43.40 ± 0.78%</td> <td>51.09 ± 0.71%</td> </tr> <tr> <td><b>Matching Network FCE</b></td> <td>43.56 ± 0.84%</td> <td>55.31 ± 0.73%</td> </tr> <tr> <td><b>Meta-Learner LSTM (OURS)</b></td> <td>43.44 ± 0.77%</td> <td>60.60 ± 0.71%</td> </tr> </table> Table 1: Average classification accuracies on Mini-ImageNet with 95% confidence intervals. Marked in bold are the best results for each scenario, as well as other results with an overlapping confidence interval. the test set. We use a fixed number of updates for fine tuning and search over the learning rate and learning rate decay used during the course of these updates. For Matching Networks, we implemented our own version of both the basic and the fully-conditional embedding (FCE) versions. In the basic version, a convolutional network is trained to learn independent embeddings for examples in the training and test set. In the FCE version, a bidirectional-LSTM is used to learn an embedding for the training set such that each training example’s embedding is also a function of all the other training examples. Additionally, an attention-LSTM is used so that a test example embedding is also a function of all the embeddings of the training set. We do not consider fine-tuning the network using the train set during meta-testing to improve performance as mentioned in Vinyals et al. (2016), but do note that our meta-learner could also be fine-tuned using this data. Note that to remain consistent with Vinyals et al. (2016), our baseline and matching net convolutional networks have 4 layers each with 64 filters. We also added dropout to each convolutional block in matching nets to prevent overfitting. 
For our meta-learner, we train different models for the 1-shot and 5-shot tasks, that make 12 and 5 updates, respectively. We noticed that better performance for each task was attained if the meta-learner is explicitly trained to do the set number of updates during meta-training that will be used during meta-testing. We attain results that are much better than the baselines discussed and competitive with Matching Networks. For 5-shot, we are able to do much better than Matching Networks, whereas for 1-shot, the confidence interval for our performance intersects the interval for Matching Networks. Again, we note that the numbers do not match the ones provided by Vinyals et al. (2016) simply because we created our version of the dataset and implemented our own versions of their model. It is interesting to note that the fine-tuned baseline is worse than the nearest-neighbor baseline. Because we are not regularizing the classifier, with very few updates the fine-tuning model overfits, especially in the 1-shot case. This propensity to overfit speaks to the benefit of meta-training the initialization of the classifier end-to-end as is done in the meta-learning LSTM. 5.2 VISUALIZATION OF META-LEARNER We also visualize the optimization strategy learned by the meta-learner, in Figure 3. We can look at the \( i_t \) and \( f_t \) gate values in Equation 2 at each update step, to try to get an understanding of how the meta-learner updates the learner during training. We visualize the gate values while training on different datasets \( D_{train} \), to observe whether there are variations between training sets. We consider both 1-shot and 5-shot classification settings, where the meta-learner is making 10 and 5 updates, respectively. For the forget gate values for both tasks, the meta-learner seems to adopt a simple weight decay strategy that seems consistent across different layers. The input gate values are harder to interpret to glean the meta-learner’s strategy. However, there seems to a be a lot of variability between different datasets, indicating that the meta-learner isn’t simply learning a fixed optimization strategy. Additionally, there seem to be differences between the two tasks, suggesting that the meta-learner has adopted different methods to deal with the different conditions of each setting. (a) Forget gate values for 1-shot meta-learner (b) Input gate values for 1-shot meta-learner (c) Forget gate values for 5-shot meta-learner (d) Input gate values for 5-shot meta-learner Figure 3: Visualization of the input and forget values output by the meta-learner during the course of its updates. Layers 1 – 4 represent the values for a randomly selected parameter from the 4 convolutional layers and layer 5 represents the values for a random parameter from fully-connected layer. The different curves represent training steps on different datasets. 6 CONCLUSION We described an LSTM-based model for meta-learning, which is inspired from the parameter updates suggested by gradient descent optimization algorithms. Our LSTM meta-learner uses its state to represent the learning updates of the parameters of a classifier. It is trained to discover both a good initialization for the learner’s parameters, as well as a successful mechanism for updating the learner’s parameters to a given small training set for some new classification task. Our experiments demonstrate that our approach outperforms natural baselines and is competitive to the state-of-the-art in metric learning for few-shot learning. 
In this work, we focused our study on the few-shot, few-classes setting. However, it would be more valuable to train meta-learners that can perform well across a full spectrum of settings, i.e., for few or many training examples and for few or many possible classes. Our future work will thus consider moving towards this more challenging scenario. ACKNOWLEDGMENTS We thank Jake Snell, Kevin Swersky, and Oriol Vinyals for helpful discussions of this work.
accept
Accept (Oral)
7.666667
4bbef62012953d97e2fcea326738d5cf0c23cf55
iclr
2,017
LEVERAGING ASYNCHRONICITY IN GRADIENT DESCENT FOR SCALABLE DEEP LEARNING Jeff Daily, Abhinav Vishnu, Charles Siegel Pacific Northwest National Laboratory 902 Battelle Blvd Richland, WA 99352 {jeff.daily,abhinav.vishnu,charles.siegel}@pnnl.gov ABSTRACT In this paper, we present multiple approaches for improving the performance of gradient descent when utilizing multiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent, that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, theoretically improves overall speedup, and minimizes the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe – a high performance Deep Learning library – and evaluate them on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well known workloads including AlexNet and GoogLeNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates that asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous. 1 INTRODUCTION Deep Learning (DL) algorithms are a class of Machine Learning and Data Mining (MLDM) algorithms, which use an interconnection of neurons and synapses to emulate the computational structure of a mammalian brain. DL algorithms have demonstrated resounding success in many computer vision tasks and science domains such as high energy physics, computational chemistry and high performance computing use-cases. Several DL implementations such as TensorFlow, Caffe, Theano, and Torch have become available. These implementations are primarily geared towards compute nodes that may contain multi-core architectures (such as Intel Xeon/KNC/KNL) and/or many-core architectures (GPUs). DL algorithms are undergoing a tremendous revolution of their own. Widely used DL algorithms such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are computationally expensive. Their computational requirements are further worsened by: 1) very deep neural networks such as the recently proposed 1000-layer Residual Networks (ResNets), and 2) the increasing volume of data produced by simulations, experiments and handheld devices. An important solution to these problems is the design and implementation of DL algorithms that are capable of execution on distributed memory large scale cluster/cloud computing systems. A few distributed DL implementations such as CaffeOnSpark, Distributed TensorFlow, CNTK, Machine Learning Toolkit for Extreme Scale (MaTEx), and FireCaffe have become available. Implementations such as CNTK, FireCaffe and MaTEx use MPI (Gropp et al., 1996; Geist et al., 1996) – which makes them a natural fit for high-end systems. DL algorithms primarily use gradient descent – an iterative technique in which the weights of synapses are updated using the difference between the ground truth (actual value) and the predicted value (using the current state of the neural network). The larger the difference, the steeper the descent toward a minimum (a sufficiently low minimum constitutes the solution).
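As a toy one-dimensional illustration of this dynamic (an editorial sketch, not from the paper), take the squared error \( C(w) = (w - w^*)^2 \): the update \( w \leftarrow w - \lambda \, dC/dw \) takes larger steps the further the prediction is from the ground truth.

def descend(w, w_star, lam=0.1, steps=5):
    # Gradient descent on C(w) = (w - w_star)**2; the gradient 2*(w - w_star),
    # and hence the step, grows with the prediction error.
    for _ in range(steps):
        w = w - lam * 2.0 * (w - w_star)
    return w

print(descend(10.0, 0.0))  # error shrinks by (1 - 2*lam) per step: ~3.28 after 5 steps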
An important type of gradient descent is batch gradient descent – where a random subset of samples is used for iterative feed-forward (calculation of predicted values) and back-propagation (update of synaptic weights). A small batch is prone to severe perturbations in the descent, while a large batch results in slow convergence. Hence, a data scientist tends to use a moderately sized batch, which balances these two conflicting metrics. A large scale parallelization of gradient descent must maximize the equivalence to the default algorithm, such that the convergence property is maintained. Consider a scenario where a batch (\( b \)) in the original algorithm is split across multiple compute nodes (\( n \)) – an example of data parallelism. To provide equivalence to the default algorithm, the batch must be split equally into parts of size \( \frac{b}{n} \), while the required all-to-all reduction of gradients adds communication that grows as \( \Theta(\log n) \). Naturally, as \( n \) is increased and \( b \) is held constant (strong scaling), this becomes prohibitive, whereas keeping the batch size per node \( b/n \) constant (weak scaling) increases the convergence time. Several researchers have proposed methods to alleviate the communication requirements of distributed gradient descent. Parameter-server based approaches use a server to hold the latest version of the model while clients send computed gradients and request the latest model. This approach has been proposed and extended by several researchers. While theoretically this provides \( O(1) \) time-complexity since all batch updates can be computed simultaneously, this approach fails to scale beyond a few compute nodes when considering the time to convergence relative to having run the computation on a single device. Others have proven divergence from the original algorithm. Remote Direct Memory Access (RDMA) based approaches have been proposed, but they also diverge from the original algorithm. Several other implementations are primarily geared towards shared memory systems, and address the thread contention issue for gradient descent. Our objective is to design a non-parameter-server based technique, which maximizes the equivalence to the default algorithm, while leveraging high performance architectures – including computational units such as GPUs and high performance interconnects such as InfiniBand and Intel Omni-Path – by using MPI. 1.1 CONTRIBUTIONS Specifically, we make the following contributions in this paper: • We design a baseline asynchronous gradient descent, which delays the gradient updates of the entire model by one or more iterations adaptively on the basis of available overlap and user-defined input. • We propose a layer-wise gradient descent method, which overlaps weight updates of a layer with inter-node synchronization of other layers. The proposed method is exactly equivalent to the default sequential algorithm. • We implement our approaches and other baseline techniques using the Machine Learning Toolkit for Extreme Scale (MaTEx), which consists of a distributed memory implementation of Caffe using MPI (Gropp et al., 1996; Geist et al., 1996). • We evaluate our approaches and other baseline implementations on a large scale CPU-based InfiniBand cluster as well as on NVIDIA’s DGX-1 multi-GPU system. We use several well studied datasets and DNN topologies such as ImageNet (1.3M images, 250GB dataset) with the AlexNet and GoogLeNet DNNs. Our evaluation indicates the efficacy of the proposed approach.
Specifically, the best asynchronous approach is up to 1.7x faster than the synchronous approach while achieving up to 82% parallel efficiency. The rest of the paper is organized as follows: In Section 2, we present work related to our proposed research. We present the background in Section 3, followed by an in-depth solution space in Section 4. In Section 6, we present a detailed performance evaluation of asynchronous gradient descent, and we present conclusions and future directions in Section 7. 2 RELATED WORK Batch gradient descent is the most widely used algorithm for training Deep Learning models. This algorithm has been implemented several times for sequential, multi-core and many-core systems such as GPUs. The most widely used implementations are Caffe (Jia et al., 2014) (CPUs/GPUs), Warp-CTC (GPUs), Theano (Bastien et al., 2012; Bergstra et al., 2010) (CPUs/GPUs), Torch (Collobert et al., 2002) (CPUs/GPUs), CNTK (Agarwal et al., 2014) (GPUs and distributed memory using MPI) and Google TensorFlow (Abadi et al., 2015), which use the NVIDIA CUDA Deep Neural Network library (cuDNN). Caffe is one of the leading software tools for training and deploying deep learning algorithms, and it can be used to develop novel extensions to these algorithms such as the ones described below. Caffe supports execution on a single node (connected with several GPUs), and a version has been implemented that takes full advantage of Intel systems. While the research described below was performed using Caffe, the extensions can be applied to TensorFlow as well. Caffe (and other deep learning software) is also equipped with several optimizations designed to avoid significant problems in training deep networks. The vanishing gradient problem (Bianchini & Scarselli, 2014) causes deep networks to fail to learn much at all in the early layers; it was addressed in Hinton & Osindero (2006) and Bengio et al. (2007), where it was shown that a network could be trained one layer at a time with autoencoders (Hinton & Salakhutdinov, 2006) and then put together to form a single network (Vincent et al., 2010). Another optimization that helps with this problem is switching from sigmoidal neurons to rectified linear neurons. The problem of accelerating gradient descent, especially distributed across compute resources, is of interest to many researchers. Approaches generally fall into two categories based on whether or not they are equivalent to having run on a single compute device; a single compute device necessarily computes gradient updates and applies them immediately to the model. Further, the gradient updates can be classified as either synchronous or asynchronous, depending on whether the communication of the gradients can be overlapped with any computation of the gradients. For example, the DistBelief parameter server approach (Dean et al., 2012) computes gradient updates asynchronously based on an out-of-date copy of the model and applies them to the latest model. Though this is not equivalent to having run on a single device, it is able to process samples much faster. Chen et al. (2016) revisit asynchronous gradient descent and propose a few synchronous variants in order to improve time to convergence. Notably, they show that waiting for all workers to complete, aggregating the gradients, and applying the gradients to the same common model (so that each worker has a copy of the latest model) provides a good time to convergence while also leveraging multiple compute devices.
Their approach is where this paper begins; we additionally propose approaches ranging from fully synchronous to parameter server variants. 3 FUNDAMENTALS 3.1 NEURAL NETWORKS Machine Learning algorithms designed to emulate the computational structure of the brain to model data are called “Neural Networks.” The basic unit of a neural network is the neuron, and neurons are connected to one another via synapses. 3.1.1 BACKPROPAGATION Neural networks are trained through an algorithm called backpropagation. This is a means of computing gradients layer by layer to implement the gradient descent algorithm’s update rule of \[ \begin{align*} w' &= w - \lambda \nabla_w C \\ b' &= b - \lambda \nabla_b C \end{align*} \] where \( w \) are the weights, \( b \) the biases, \( \lambda \) the learning rate, and \( C \) is a cost function to be optimized, usually square error or cross-entropy. This rule is often replaced by a slightly more complex rule, such as Adaptive Gradient Descent (AdaGrad) (Duchi et al., 2011) or Momentum (Qian, 1999). To compute the gradients, let \( W^{(\ell)}, b^{(\ell)} \) be the weights and biases for each layer, \( z^{(\ell+1)} = W^{(\ell)}a^{(\ell)} + b^{(\ell)} \) and \( a^{(\ell)} = \sigma(z^{(\ell)}) \), where \( \sigma \) is the activation function. Let \( n_\ell \) represent the number of layers. Then, we use Algorithm 1.

Algorithm 1 Back Propagation
1: input: Data \( X \in \mathbb{R}^{n \times p} \) and labels \( Y \in \mathbb{R}^{n \times \ell} \)
2: for \( i \) from 1 to \( n \) do
3:  Compute all \( z^{(\ell)} \) and \( a^{(\ell)} \).
4:  \( \delta^{(n_\ell)} = -(y - a^{(n_\ell)}) \odot \sigma'(z^{(n_\ell)}) \)
5:  for \( \ell \) from \( n_\ell - 1 \) to 2 do
6:   \( \delta^{(\ell)} = (W^{(\ell)})^T \delta^{(\ell+1)} \odot \sigma'(z^{(\ell)}) \)
7:  end for
8:  \( \nabla_{W^{(\ell)}} C = \delta^{(\ell+1)} (a^{(\ell)})^T \) (for each \( \ell \))
9:  \( \nabla_{b^{(\ell)}} C = \delta^{(\ell+1)} \) (for each \( \ell \))
10: end for

Although there are several nonlinear activation functions in common use, the networks examined in this paper only include rectified linear units (ReLU), where ReLU\((x) = \max(0, x)\). 3.2 CAFFE Caffe (Jia et al., 2014) is one of the leading software packages for building and training neural networks. It provides abstractions for a wide range of topologies and for training them with many different types of optimizers. Caffe provides abstractions for operations on multi-dimensional arrays (tensors), which are essential for implementing Deep Learning algorithms. From an input tensor, an output tensor, and tensors for each hidden layer, Caffe constructs a computational graph that manages these tensors and their updates as a single object. Caffe is particularly useful for researchers, because it is heavily optimized and can be modified through an open source C++ backend. As Caffe’s runtime is implemented in C++, it can extract native performance from the computation environment it is run on. Furthermore, Caffe abstracts GPU computations, leveraging the NVIDIA CUDA Deep Neural Network library (cuDNN) for the task. We have modified this code for distributed memory computation on large scale systems using MPI to natively use network hardware for optimal performance. The base, synchronous implementation is similar to FireCaffe (Iandola et al., 2015), another distributed memory implementation of Caffe. Further modifications are described in Section 4.
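As a concrete, runnable rendering of Algorithm 1 above (an editorial NumPy sketch, not the paper’s code), for a single sample, with the paper’s ReLU activation standing in for \( \sigma \) and squared error as the cost; dictionaries are keyed from 1 so the indices match the algorithm.

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_prime(z):
    return (z > 0).astype(z.dtype)

def backprop(W, b, x, y):
    # W[l], b[l] map layer l to layer l+1, for l = 1 .. n_l - 1.
    n_l = len(W) + 1
    a, z = {1: x}, {}
    for l in range(1, n_l):                          # forward pass (line 3)
        z[l + 1] = W[l] @ a[l] + b[l]
        a[l + 1] = relu(z[l + 1])
    delta = {n_l: -(y - a[n_l]) * relu_prime(z[n_l])}  # output error (line 4)
    for l in range(n_l - 1, 1, -1):                  # backward pass (lines 5-7)
        delta[l] = (W[l].T @ delta[l + 1]) * relu_prime(z[l])
    grad_W = {l: np.outer(delta[l + 1], a[l]) for l in range(1, n_l)}  # line 8
    grad_b = {l: delta[l + 1] for l in range(1, n_l)}                  # line 9
    return grad_W, grad_b

# Example: a 3-4-2 network on one random sample.
rng = np.random.default_rng(0)
W = {1: rng.standard_normal((4, 3)), 2: rng.standard_normal((2, 4))}
b = {1: np.zeros(4), 2: np.zeros(2)}
gW, gb = backprop(W, b, rng.standard_normal(3), rng.standard_normal(2))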
There are three phases of computation within Caffe that pass over the enumerated layers of the network. First, the forward pass computes the output result given the samples from the input batch, starting at the first layer. Next, starting at the last (output) layer, based on the difference between the output result and the ground truth, the backward pass uses the backpropagation technique to compute the gradients for each layer. Lastly, one final pass is made over the network to apply the gradients to the weights and biases before starting the process over again with the next batch. 4 SOLUTION SPACE The goal of improving gradient descent is to accelerate the time to solution without sacrificing the accuracy of the model. The base case to consider is then computing and applying gradients one batch at a time on a single compute device. One way to accelerate the computation while also maintaining equivalence to the sequential case is to use data parallelism. Data parallelism is where the traditional batch is further subdivided into equally-sized mini-batches, each mini-batch is computed on a separate device, and the gradients resulting from each mini-batch are averaged together. Since each gradient update is itself an average, taking the average of the mini-gradients results in an update that is effectively the same as having computed the original batch size. This is called the effective batch size. Data parallelism is the approach we explore in this paper, attempting many ways of hiding the latency of the gradient communication that occurs between compute devices. We use MPI to communicate the gradients. Caffe provides callback methods in its C++ interface that interject user-defined functionality into key phases of the computation (see Section 3.2). Specifically, one user-defined function is executed immediately before the forward pass when the batch computation begins. The other user-defined function executes after the backward pass finishes, but before the application of the gradients to the weights and biases. Additional callback functions were added to support finer-grained control over the three phases of computation. One of the additional callbacks executes after each gradient is computed during the backward phase, once per set of learnable parameters, such as the weights or biases of a given layer. Another callback function that was added is called once per learnable parameter during the apply phase, just before the gradients are applied. Lastly, a callback function was added that turns the gradient application into a task queue, requesting additional tasks in an unspecified order until all gradients have been applied. A critical implementation detail for any of our proposed approaches is to make sure the individual network models maintained by each compute device start from the same random initial conditions for the weights and biases. Before the first batch is computed, the weights and biases from the master process are copied (broadcast) to the other processes. That way any gradients that are computed, when averaged together, are based on the same initial conditions. 4.1 SYNCHRONOUS GRADIENT DESCENT Similar to what Chen et al. (2016) propose and what is implemented in FireCaffe (Iandola et al., 2015), synchronous gradient descent averages the gradients from each mini-batch together before applying them, forming one complete batch at a time. The way this is implemented in Caffe is to use the callback function that executes when all gradients are ready to be applied. During this callback, MPI_Allreduce is used to sum the gradients, placing the same resulting sum on each compute device.
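A minimal mpi4py sketch of this synchronous step follows (an editorial illustration; the callback name mirrors Caffe’s hook mechanism but is hypothetical, not Caffe’s real API).

from mpi4py import MPI

comm = MPI.COMM_WORLD

def on_gradients_ready(grad):
    # grad is a NumPy array viewing the contiguous gradient buffer.
    # Sum it across all ranks, in place, then rescale the sum into the
    # intended average before the apply phase runs.
    comm.Allreduce(MPI.IN_PLACE, grad, op=MPI.SUM)
    grad /= comm.Get_size()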
MPI_Allreduce is blocking, meaning it returns control back to Caffe only after the sum is computed across all devices. Since the result is a sum and not the intended average, it is then scaled down based on the number of compute devices in use. It is important to note that the reduction operation can be performed in-place, meaning it can use the memory location directly holding the gradient without performing any costly memory copies; this matters especially for networks with a large number of parameters, such as AlexNet. This approach also has the important quality that the gradients are averaged after they have been used by each layer of the backpropagation, preserving the importance of any activations within the network against the mini-batch instead of against the effective batch. 4.2 LAYER-WISE GRADIENT DESCENT Chen et al. (2016) propose pipelining gradient computation and application. For example, the gradients of upper layers can be applied concurrently while computing the gradients of lower layers. This must be done carefully to maintain equivalence with the sequential base case. We make the observation that gradients can be averaged as soon as they are computed during the backward phase, instead of waiting for all gradients to be computed. However, adjacent layers will use and/or update the gradients of layers that have otherwise finished computing their gradients. This implies that the averaging of the gradients must be performed on a copy of the gradients rather than in-place. Further, the averaging of the copied gradients must finish before they can be applied. We utilize a background thread of computation in order to perform the gradient averaging concurrently with the remaining gradient computation. This provides maximal overlap of the communication latency with useful computation. There are a few options for when to apply the averaged gradients. Waiting for all communication to finish before applying all gradients is straightforward and similar to the synchronous approach described previously, though perhaps at least some of the communication latency would be overlapped. Another approach is to wait, one layer at a time, for the gradients of a particular layer to finish averaging and then apply them. It is intuitive to perform the waiting in the same order in which backpropagation was performed, from the last layer to the first layer. Lastly, since all gradient updates are independent, we can perform them in an arbitrary order. This takes advantage of the observation that not all layers have the same number of parameters; further, the gradients for the weights and the gradients for the biases can be averaged separately, and since the weight gradients are typically larger than the bias gradients, the bias gradients will complete their communication more quickly. Since the communication of the various parameters can finish somewhat arbitrarily, based on when the communication was initiated and the size of the communication, we can apply the gradients as soon as they complete their averaging. We evaluate these strategies in Section 6.
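The layer-wise scheme just described can be sketched as follows (an editorial illustration, not the paper’s code), with mpi4py’s non-blocking collectives standing in for the paper’s background communication thread.

from mpi4py import MPI

comm = MPI.COMM_WORLD

def start_layer_average(grad):
    # Average on a copy: adjacent layers may still read or update grad,
    # so the in-flight buffer must not alias it.
    buf = grad.copy()
    req = comm.Iallreduce(MPI.IN_PLACE, buf, op=MPI.SUM)
    return req, buf

def finish_layer_average(req, buf):
    req.Wait()                       # blocks only if the average is still in flight
    return buf / comm.Get_size()

# Backward phase, last layer first:  in_flight[l] = start_layer_average(grads[l])
# Apply phase, in any of the orders discussed:
#                                    grads[l] = finish_layer_average(*in_flight[l])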
4.3 ASYNCHRONOUS GRADIENT DESCENT As stated in Chen et al. (2016), parameter server implementations suffer from poor convergence since gradient updates are calculated based on out-of-date networks. Continuing with our data parallel approach, there is a lower limit to the size of the mini-batches and therefore to the number of compute devices that can be utilized. As the amount of work per compute device decreases in proportion to the shrinking mini-batches, there is less computation available to mask the latency of the gradient averaging across the devices. Initiating the averaging layer-wise as described above may not be enough to mitigate this problem. We propose delaying the application of the gradients by a fixed number of iterations, far fewer than the delay of up to the number of compute devices that a parameter server approach can incur. The gradients are delayed by using a concurrent communication thread and applying the gradient one, two, or three iterations later, thus giving the averaging enough time to complete as needed. If the gradient needs to be delayed by one iteration, this requires one communication thread and one additional buffer to hold the gradient; delaying by two iterations requires two communication threads and two additional buffers, and so on. This approach sits somewhere between a parameter server (Dean et al., 2012) and the various approaches that maintain equivalence with a sequential computation. 5 IMPLEMENTATION DETAILS The implementations evaluated in this paper focus on data parallelism and the averaging of gradients across compute devices. This is achieved using MPI and parallel I/O. 5.1 HANDLING I/O The data parallelism is achieved by distributing datasets across compute devices, partitioning them based on the number of devices utilized; each device receives a disjoint subset of the dataset, and no samples are shuffled or exchanged between compute devices outside of the gradient averaging. Caffe frequently uses a database in LMDB format for its datasets; however, this format cannot be used on remote (network) filesystems or even between processes on the same host. Caffe mitigates this issue when using more than one GPU on the same host by using a single I/O reading thread and a round-robin distribution of the samples to device-specific queues. Our implementations mitigate this issue by first converting an LMDB database into a netCDF file (Rew & Davis, 1990). netCDF files can be read and partitioned using parallel MPI-IO via the parallel netCDF library (Li et al., 2003). 5.2 DISTRIBUTED MEMORY IMPLEMENTATION USING MPI For single-node GPU computation, using one or more GPU devices in a single host, Caffe provides a means of allocating one contiguous buffer to hold the data for the weights and biases and a second buffer to hold the gradients for each. We extended this approach for CPU hosts. A single contiguous buffer allows the non-layer-wise, i.e., network-wise, gradient averages to be performed using a single MPI reduction operation. The layer-wise implementations require one MPI reduction operation per network parameter. There is a fixed cost to start a communication primitive regardless of how much data is communicated, so it is sometimes beneficial to aggregate many otherwise small communication requests into a larger one. Although Caffe provides a way of utilizing all GPUs within the host, it does not currently leverage NVIDIA’s NCCL package (NVIDIA Corporation, 2015) for optimized, high-bandwidth collective communication routines. We used the NCCL equivalent of the MPI all-reduce to sum gradients across GPU devices on the DGX-1 platform.
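Before turning to the evaluation, the delayed-gradient scheme of Section 4.3 can be sketched as below (an editorial illustration, not the paper’s code; non-blocking collectives again stand in for the paper’s dedicated communication threads): iteration t starts its average into a fresh buffer and applies the average started k iterations earlier.

from collections import deque
from mpi4py import MPI

comm = MPI.COMM_WORLD

class DelayedGradient:
    def __init__(self, k=1):
        self.k = k                      # delay in iterations (one, two, or three)
        self.pending = deque()          # (request, buffer) pairs still averaging

    def push(self, grad):
        buf = grad.copy()               # one extra buffer per iteration of delay
        req = comm.Iallreduce(MPI.IN_PLACE, buf, op=MPI.SUM)
        self.pending.append((req, buf))

    def pop_ready(self):
        if len(self.pending) <= self.k:
            return None                 # pipeline still filling; apply nothing yet
        req, buf = self.pending.popleft()
        req.Wait()                      # has had k full iterations of compute to overlap
        return buf / comm.Get_size()

# Per iteration: dg.push(grad); avg = dg.pop_ready(); if avg is not None, apply avg.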
6 EXPERIMENTAL EVALUATION In this section, we present an experimental evaluation and analysis of the heuristics described in Section 4. 6.1 HARDWARE ARCHITECTURES We evaluate using a CPU cluster as well as NVIDIA’s specialized DGX-1 multi-GPU host system. Each node of the multi-node cluster consists of a multi-core Intel Sandy Bridge CPU connected via InfiniBand. We use Intel MPI 5.1.2 for the performance evaluation. The heuristics are implemented in Caffe (Jia et al., 2014), specifically the intelcaffe branch designed to optimize performance on Intel CPUs. The DGX-1 system contains 8 Pascal GPUs connected using the high-speed NVLink interconnect. For the DGX-1 evaluations, the latest version of Berkeley’s Caffe was modified to use the NCCL communication primitives in addition to our algorithmic changes. 6.2 IMAGENET AND NETWORK ARCHITECTURES We evaluate on two distinct network architectures trained on the ImageNet dataset. ImageNet refers specifically to the ILSVRC2015 (Russakovsky et al., 2015) dataset. This dataset consists of a training set of just under 1.3 million images of various sizes (as JPEG files) divided among 1000 classes, along with a validation set consisting of 50000 images of the same type and classes. Additionally, for the competition, there is a testing set, but it is held separately and not available publicly. It is established as one of the benchmark datasets for machine learning with large datasets, and among the famous architectures that achieved record top-1 and top-5 accuracies on it are AlexNet (Krizhevsky et al., 2012) and GoogLeNet (Szegedy et al., 2015). We evaluate on AlexNet and GoogLeNet because they are now well-established models with known training regimes and loss curves. They also demonstrate two different regimes for parallelization: AlexNet has approximately 60 million parameters that need to be communicated, whereas GoogLeNet has approximately 4 million. In contrast to its smaller amount of communication, GoogLeNet requires roughly twice the amount of time to process each image as AlexNet does when communication is ignored. 6.3 EVALUATION Figure 1 compares the implemented approaches relative to a communication-less baseline, “no comm”. The effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively. For example, using 8 compute devices for GoogLeNet uses a mini-batch size of \(32/8 = 4\). The evaluations on DGX-1 were limited to 8 compute devices, whereas the CPU cluster evaluation eventually hit the strong scaling limit for data parallelism. These results show that delaying the gradient updates by one or more iterations is the most effective means of hiding the communication latency. The layer-wise approaches did not perform as well as expected. These trends were consistent across both hardware platforms. The layer-wise approaches, though promising as equivalent to a sequential computation, were not able to complete their gradient averages quickly enough. Compared to the delayed gradient approach, this is perhaps intuitive. The delayed gradient approach is able to hide the communication latency across all three complete phases of the computation, whereas the layer-wise approaches only have as long as it takes to complete the backpropagation phase. This is not enough time to complete the communication, especially as the mini-batch sizes decrease and therefore provide less work to mask the communication. In addition to looking at the time per batch above, the rates of convergence of these heuristics must be evaluated. All of the heuristics completed training AlexNet to the standard top-1 accuracy of \( \approx 54\% \) using the default AlexNet settings that come with Caffe.
However, it is worth noting that at the beginning of training they showed different loss curves, indicating a tradeoff between the number of batches per second and the accuracy at a given batch, as shown in Table 1. (a) AlexNet CPU (b) AlexNet DGX-1 (c) GoogLeNet CPU (d) GoogLeNet DGX-1 Figure 1: Evaluation of SGD and AGD approaches. Effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively. <table> <tr> <th>batch</th> <th>1000</th> <th>2000</th> <th>3000</th> <th>4000</th> <th>5000</th> </tr> <tr> <td>serial, 1 GPU</td> <td>0.0124</td> <td>0.05164</td> <td>0.10102</td> <td>0.13432</td> <td>0.16454</td> </tr> <tr> <td>SGD</td> <td>0.01116</td> <td>0.03984</td> <td>0.07594</td> <td>0.10622</td> <td>0.13052</td> </tr> <tr> <td>AGD, 1 comm</td> <td>0.0039</td> <td>0.01324</td> <td>0.02632</td> <td>0.05076</td> <td>0.07362</td> </tr> <tr> <td>AGD, 2 comm</td> <td>0.00104</td> <td>0.00356</td> <td>0.00636</td> <td>0.01282</td> <td>0.01688</td> </tr> </table> Table 1: AlexNet accuracy after every 1000 batches on DGX-1. We also evaluated whether these approaches converged, in addition to just improving the number of iterations per second. All approaches evaluated managed to converge within the expected number of iterations. Notably, AlexNet on DGX-1 reached convergence in 11 hours using the delayed gradient approach with two communication threads and the standard AlexNet network from Caffe. 7 CONCLUSIONS There is a tradeoff between maintaining equivalence to sequential methods and leveraging the vast computational resources available for gradient descent. We find that asynchronous methods can give a 1.7x speedup while not sacrificing accuracy at the end of an otherwise identical training regime. This improvement was achieved without the need for a warm start, contrary to previously published results using parameter servers. REFERENCES Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org. Amit Agarwal, Eldar Akchurin, Chris Basoglu, Guogou Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, Ryan Hoens, Xuedong Huang, Zhiheng Huang, Vladimir Ivanov, Alexey Kamenev, Philipp Kranen, Oleksii Kuchaiev, Wolfgang Manousek, Avner May, Bhaskar Mitra, Olivier Nano, Gaizka Navarro, Alexey Orlov, Marko Padmilac, Hari Parthasarathi, Baolin Peng, Alexey Reznichenko, Frank Seide, Michael L. Seltzer, Malcolm Slaney, Andreas Stolcke, Yongqiang Wang, Huaming Wang, Kaisheng Yao, Dong Yu, Yu Zhang, and Geoffrey Zweig. An introduction to computational networks and the computational network toolkit. Technical Report MSR-TR-2014-112, August 2014. URL http://research.microsoft.com/apps/pubs/default.aspx?id=226641. Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements.
Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012. Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In B. Schölkopf, J. C. Platt, and T. Hoffman (eds.), Advances in Neural Information Processing Systems 19, pp. 153–160. MIT Press, 2007. URL http://papers.nips.cc/paper/3048-greedy-layer-wise-training-of-deep-networks.pdf. James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation. Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A comparison between shallow and deep architectures. IEEE Transactions on Neural Networks and Learning Systems, 25(8):1553 – 1565, 2014. doi: 10.1109/TNNLS.2013.2293637. Jianmin Chen, Rajat Monga, Samy Bengio, and Rafal Józefowicz. Revisiting distributed synchronous SGD. CoRR, abs/1604.00981, 2016. URL http://arxiv.org/abs/1604.00981. Ronan Collobert, Samy Bengio, and Johnny Mariéthoz. Torch: A modular machine learning software library, 2002. Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc’Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. Large scale distributed deep networks. In P. Bartlett, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1232–1240. 2012. URL http://books.nips.cc/papers/files/nips25/NIPS2012_0598.pdf. John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121–2159, July 2011. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1953048.2021068. Al Geist, William Gropp, Steve Huss-Lederman, Andrew Lumsdaine, Ewing L. Lusk, William Saphir, Tony Skjellum, and Marc Snir. MPI-2: Extending the message-passing interface. In Euro-Par, Vol. 1, pp. 128–135, 1996. W. Gropp, E. Lusk, N. Doss, and A. Skjellum. A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard. Parallel Computing, 22(6):789–828, 1996. G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, July 2006. doi: 10.1126/science.1127647. URL http://www.ncbi.nlm.nih.gov/sites/entrez?db=pubmed&uid=16873662&cmd=showDetailView&indexed=google. Geoffrey E. Hinton and Simon Osindero. A fast learning algorithm for deep belief nets. Neural Computation, 18:2006, 2006. Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, and Kurt Keutzer. FireCaffe: near-linear acceleration of deep neural network training on compute clusters. arXiv preprint arXiv:1511.00175, 2015. Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097–1105. Curran Associates, Inc., 2012.
URL http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf. Jianwei Li, Wei-keng Liao, Alok Choudhary, Robert Ross, Rajeev Thakur, William Gropp, Robert Latham, Andrew Siegel, Brad Gallagher, and Michael Zingale. Parallel netCDF: A high-performance scientific I/O interface. In Supercomputing, 2003 ACM/IEEE Conference, pp. 39–39. IEEE, 2003. NVIDIA Corporation. NCCL: Optimized primitives for collective multi-GPU communication. https://github.com/NVIDIA/nccl, 2015. Ning Qian. On the momentum term in gradient descent learning algorithms. Neural Networks, 12(1):145 – 151, 1999. ISSN 0893-6080. doi: http://dx.doi.org/10.1016/S0893-6080(98)00116-6. URL http://www.sciencedirect.com/science/article/pii/S0893608098001166. Russ Rew and Glenn Davis. NetCDF: An interface for scientific data access. IEEE Computer Graphics and Applications, 10(4):76–82, 1990. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR 2015, 2015. URL http://arxiv.org/abs/1409.4842. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371–3408, December 2010. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1756006.1953039.
ABSTRACT In this paper, we present multiple approaches for improving the performance of gradient descent when utilizing mutiple compute resources. The proposed approaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present a new approach, asynchronous layer-wise gradient descent that maximizes overlap of layer-wise backpropagation (computation) with gradient synchronization (communication). This approach provides maximal theoretical equivalence to the de facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, theoretically improves overall speedup, while minimizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe – a high performance Deep Learning library – and evaluate it on both an Intel Sandy Bridge cluster connected with InfiniBand as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are performed on a set of well known workloads including AlexNet and GoogleNet on the ImageNet dataset. Our evaluation of these neural network topologies indicates asynchronous gradient descent has a speedup of up to 1.7x compared to synchronous. 1 INTRODUCTION Deep Learning (DL) algorithms are a class of Machine Learning and Data Mining (MLDM) algorithms, which use an inter-connection of neurons and synapses to emulate the computational structure of a mammalian brain. DL algorithms have demonstrated resounding success in many computer vision tasks and science domains such as high energy physics, computational chemistry and high performance computing use-cases. Several DL implementations such as TensorFlow, Caffe, Theano, and Torch have become available. These implementations are primarily geared towards compute nodes that may contain multi-core architecture (such as Intel Xeon/KNC/KNL) and or many-core architectures (GPUs). DL algorithms are under-going a tremendous revolution of their own. Widely used DL algorithms such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are computationally expensive. Their computational requirements are further worsened by: 1) Very deep neural networks such as recently proposed 1000-layer complex Residual Networks (ResNet), 2) Increasing volume of data produced by simulations, experiments and handheld devices. An important solution to these problems is the design and implementation of DL algorithms that are capable of execution on distributed memory large scale cluster/cloud computing systems. A few distributed DL implementations such as CaffeonSpark, Distributed TensorFlow, CNTK, Machine Learning Toolkit on Extreme Scale (MaTEx), and FireCaffe have become available. Implementations such as CNTK, FireCaffe and MaTEx use MPI [Gropp et al., 1996; Geist et al., 1996] – which makes them a natural fit for high-end systems. DL algorithms primarily use gradient descent – an iterative technique in which the weights of synapses are updated using the difference between the ground truth (actual value) and the predicted value (using the current state of the neural network). The larger the difference, the steeper the de- scent to a minima (a low value of minima generates the solution). An important type of gradient descent is batch gradient descent – where a random subset of samples are used for iterative feed-forward (calculation of predicted value) and back-propagation (update of synaptic weights). 
A small batch is prone to severe perturbations to the descent, while a large batch results in slow convergence. Hence, a data scientist tends to use a fairly average batch – which finds the balance between these two conflicting metrics. A large scale parallelization of gradient descent must maximize the equivalence to the default algorithm, such that the convergence property is maintained. Consider a scenario where a batch (\( b \)) in the original algorithm is split across multiple compute nodes (\( n \)) – an example of data parallelism. To provide equivalence to the default algorithm, the batch must be split equally to \( \frac{b}{n} \), although the communication which would require an all-to-all reduction would increase as \( \Theta(\log n) \). Naturally, as \( n \) is increased and \( b \) is held constant (strong scaling), this becomes prohibitive, whereas keeping the batch size per node \( b/n \) constant (weak scaling) increases the convergence time. Several researchers have proposed methods to alleviate the communication requirements of distributed gradient descent. Parameter-server based approaches use a server to hold the latest version of the model while clients send computed gradients and request the latest model. This approach has been proposed and extended by several researchers. While theoretically this provides \( O(1) \) time-complexity since all batch updates can be computed simultaneously, this approach fails to scale beyond a few compute nodes when considering the time to convergence relative to having run the computation on a single device. Others have proven divergence from the original algorithm. Remote Direct Memory Access (RDMA) based approaches have been proposed, but they also diverge from the original algorithm. Several other implementations are primarily geared towards shared memory systems, and address the thread contention issue for gradient descent. Our objective is to design a non-parameter-server based technique, which maximizes the equivalence to the default algorithm, while leveraging high performance architectures – including computational units such as GPUs and high performance interconnects such as InfiniBand, Intel Omni-path architectures by using MPI. 1.1 CONTRIBUTIONS Specifically, we make the following contributions in this paper: • We design a baseline asynchronous gradient descent, which delays the gradient updates of the entire model by one or more iterations adaptively on the basis of available overlap and user-defined input. • We propose a layer-wise gradient descent method, which overlaps weight updates of a layer with inter-node synchronization of other layers. The proposed method is exactly equivalent to the default sequential algorithm. • We implement our approaches and other baseline techniques using the Machine Learning Toolkit for Extreme Scale (MaTeX), which consists of a distributed memory implementation of Caffe using MPI (Gropp et al., 1996; Geist et al., 1996). • We evaluate our approaches and other baseline implementations on a large scale CPU-based InfiniBand cluster as well as on NVIDIA’s DGX-1 multi-GPU system. We use several well studied datasets and DNN topologies such as ImageNet (1.3M images, 250GB dataset) with AlexNet and GoogleNet DNNs. Our evaluation indicates the efficacy of the proposed approach. Specifically, the best asynchronous approach is up to 1.7x faster than the synchronous approach while achieving up to 82% parallel efficiency. 
The rest of the paper is organized as follows: In section 2, we present related work of our proposed research. We present the background in section 3, followed by an in-depth solution space in section 4. In section 6, we present a detailed performance evaluation of asynchronous gradient descent, and conclusions with future directions in section 7. 2 RELATED WORK Batch gradient descent is the most widely used algorithm for training Deep Learning models. This algorithm has been implemented several times for sequential, multi-core and many-core systems such as GPUs. The most widely used implementations are Caffe (Jia et al., 2014) (CPUs/GPUs), Warp-CTC (GPUs), Theano (Bastien et al., 2012) (Bergstra et al., 2010) (CPUs/GPUs), Torch (Collobert et al., 2002) (CPUs/GPUs), CNTK (Agarwal et al., 2014) (GPUs and Distributed Memory using MPI) and Google TensorFlow (Abadi et al., 2015) which use nVIDIA CUDA Deep Neural Network (cuDNN). Caffe is one of the leading software tools for training and deploying deep learning algorithms, and it can be used to develop novel extensions to these algorithms such as the ones described below. Caffe supports execution on a single node (connected with several GPUs) and a version has been implemented that takes full advantage of Intel systems. While the research described below was performed using Caffe, the extensions can be applied to Tensorflow as well. Caffe (and other deep learning software) is also equipped with several optimizations designed to avoid significant problems in training deep networks. The vanishing gradient problem (Bianchini & Scarselli, 2014) causes deep networks to fail to learn much at all in the early layers, and was solved in (Hinton & Osindero, 2006) and (Bengio et al., 2007) where it was shown that a network could be trained one layer at a time with autoencoders (Hinton & Salakhutdinov, 2006), and then put together to form a single network (Vincent et al., 2010). Another optimization that helps to solve this problem is switching from sigmoidal neurons to rectified linear neurons. The problem of accelerating gradient descent, especially distributed across compute resources, is of interest to many researchers. Approaches generally fall into two categories, whether or not they are equivalent to having run using a single compute device; utilizing a single compute device necessarily computes gradient updates and applies them immediately to the model. Further, the gradient updates can be classified as either synchronous or asynchronous depending on whether the communication of the gradients can be overlapped with any computation of the gradients. For example, the DistBelief parameter server approach (Dean et al., 2012) computes gradient updates asynchronously based on an out-of-date copy of the model and applies them to the latest model. Though this is not equivalent to having run on a single device, it is able to process samples much faster. Chen et al. (2016) revisit asynchronous gradient descent and propose a few synchronous variants in order to improve time to convergence. Notably, they show that waiting for all workers to complete, aggregating the gradients, and applying the gradients to the same common model (thereby each worker has a copy of the latest model) provides a good time to convergence while also leveraging multiple compute devices. Their approach is where this paper begins while additionally proposing approaches ranging from synchronous to parameter server variants. 
3 FUNDAMENTALS 3.1 NEURAL NETWORKS Machine Learning algorithms designed to emulate the computational structure of the brain to model data are called “Neural Networks.” The basic unit of a neural network is the neuron and neurons are connected to one another via synapses. 3.1.1 BACKPROPAGATION Neural networks are trained through an algorithm called backpropagation. This is a means of computing gradients layer by layer to implement the gradient descent algorithm’s update rule of \[ \begin{align*} w' &= w + \lambda \nabla_w C \\ b' &= b + \lambda \nabla_b C \end{align*} \] where \( w \) are the weights, \( b \) the biases, \( \lambda \) the learning rate, and \( C \) is a cost function to be optimized, usually square error or cross-entropy. This rule is often replaced by a slightly more complex rule, such as Adaptive Gradient Descent (AdaGrad) (Duchi et al., 2011) or Momentum (Qian, 1999). To compute the gradients, we set \( W^{(\ell)}, b^{(\ell)} \) the weights and biases for each layer, \( z^{(\ell+1)} = W^{(\ell)}a^{(\ell)} + b^{(\ell)} \) and \( a^{(\ell)} = \sigma(z^{(\ell)}) \), where \( \sigma \) is the activation function. Let \( n_\ell \) represent the number of layers. Then, we use Algorithm 1. Algorithm 1 Back Propagation 1: input: Data \( X \in \mathbb{R}^{n \times p} \) and labels \( Y \in \mathbb{R}^{n \times \ell} \) 2: for \( i \) from 1 to \( n \) do 3: Compute all \( z^{(\ell)} \) and \( a^{(\ell)} \). 4: \( \delta^{(n_\ell)} = -(y - a^{n_\ell}) \odot \sigma'(z^{(n_\ell)}) \) 5: for \( \ell \) from \( n_\ell - 1 \) to 2 do 6: \( \delta^{(\ell)} = W^{\ell T}\delta^{(\ell+1)} \odot \sigma'(z^{(\ell)}) \) 7: end for 8: \( \nabla_{W^{(\ell)}} C = \delta^{(\ell+1)}a^{(\ell)T} \) 9: \( \nabla_{b^{(\ell)}} C = \delta^{(\ell+1)} \) 10: end for Although there are several nonlinear activation functions in common use, the networks examined in this paper only include rectified linear units (ReLU) where ReLU\((x) = \max(0, x)\). 3.2 CAFFE Caffe (Jia et al., 2014) is one of the leading software packages for building and training neural networks. It provides abstractions for a wide range of topologies and for training them with many different types of optimizers. Caffe provides abstractions for operations on multi-dimensional arrays (tensors) which are essential for implementing Deep Learning algorithms. From an input tensor, an output tensor, and tensors for each hidden layer, Caffe constructs a computational graph that manages these tensors and their updates as a single object. Caffe is particularly useful for researchers, because it is heavily optimized and can be modified through an open source C++ backend. As Caffe’s runtime is implemented in C++, it can extract native performance from the computation environment it is run on. Furthermore, Caffe abstracts GPU computations, leveraging nVIDIA CUDA Deep Neural Network Library (cuDNN) for the task. We have modified this code for distributed memory computation on large scale systems using MPI to natively use network hardware for optimal performance. The base, synchronous implementation is similar to FireCaffe (Iandola et al., 2015), another distributed memory implementation of Caffe. Further modifications are described in Section 4. There are three phases of computation within Caffe that pass over the enumerated layers of the network. First, the forward pass computes the output result given the samples from the input batch, starting at the first layer. 
Next, starting at the last (output) layer, based on the difference between the output result and the ground truth, the backward pass uses the backpropagation technique to compute the gradients for each layer. Lastly, one final pass is made over the network to apply the gradients to the weights and biases before starting the process over again with the next batch. 4 SOLUTION SPACE The goal of improving gradient descent is to accelerate the time to solution without sacrificing the accuracy of the model. The base case to consider is then computing and applying gradients one batch at a time on a single compute device. One way to accelerate the computation while also maintaining equivalence to the sequential is to use data parallelism. Data parallelism is where the traditional batch is further subdivided into equally-sized mini-batches, each mini-batch is computed on separate devices, then the gradients resulting from each mini-batch is averaged together. Since each gradient update is itself an average, taking the average of the mini-gradients results in an update that is effectively the same as having computed the original batch size. This is called the effective batch size. Data parallelism is the approach we explore in this paper, attempting many ways of hiding the latency of the gradient communication that occurs between compute devices. We use MPI to communicate the gradients. Caffe provides callback methods in its C++ interface that interject user-defined functionality into key phases of the computation (see [3,2]). Specifically, one user-defined function is executed immediately before the forward pass when the batch computation begins. The other user-defined function executes after the backward pass finishes, but before the application of the gradients to the weights and biases. Additional callback functions were added to support finer-grained control over the three phases of computation. One of the additional callbacks executes after each gradient is computed during the backward phase, once per set of learnable parameters, such as the weights or biases of a given layer. Another callback function that was added is called once per learnable parameter during the apply phase, just before the gradients are applied. Lastly, a callback function was added that turns the gradient application into a task queue, requesting additional tasks in an unspecified order until all gradients have been applied. A critical implementation detail for any of our proposed approaches is to make sure the individual network models maintained by each compute device start from the same random initial conditions for the weights and biases. Before the first batch is computed, the weights and biases from the master process are copied (broadcast) to the other processes. That way any gradients that are computed, when averaged together, are based on the same initial conditions. 4.1 SYNCHRONOUS GRADIENT DESCENT Similar to what Chen et al. (2016) proposes and what is implemented in FireCaffe (Iandola et al., 2015), synchronous gradient descent averages the gradients from each mini-batch together before applying them, forming one complete batch at a time. The way this is implemented in Caffe is to use the callback function that executes when all gradients are ready to be applied. During this callback, MPI_Allreduce is used to sum the gradients, placing the same resulting sum on each compute device. This function is blocking, meaning it returns control back to Caffe only after the sum is computed across all devices. 
Since the result is a sum and not the intended average, it is then scaled down based on the number of compute devices in use. It is important to note that the reduction operation can be performed in-place, meaning it can use the memory location directly holding the gradient without performing any costly memory copies, especially for networks with a large number of parameters such as AlexNet. This approach also has the important quality that the gradients are averaged after they have been used by each layer of the backpropagation, preserving the importance of any activations within the network against the mini-batch instead of against the effective batch. 4.2 LAYER-WISE GRADIENT DESCENT Chen et al. (2016) proposes the pipelining of gradient computation and application. For example, the gradients of upper layers can be concurrently applied while computing the gradients of lower layers. This approach must be done carefully to maintain equivalence with the sequential base case. We make the observation that gradients can be averaged as soon as they are computed during the backward phase, instead of waiting for all gradients to be computed. However, adjacent layers will use and/or update the gradients of layers that have otherwise finished computing their gradients. This implies the averaging of the gradients must be performed on a copy of the gradients rather than in-place. Further, the averaging of the copied gradients must finish before they can be applied. We utilize a background thread of computation in order to perform the gradient averaging concurrent with the remaining gradient computation. This provides maximal overlap of the communication latency with useful computation. There are a few options when to apply the averaged gradients. Waiting for all communication to finish before applying all gradients is straightforward and similar to the synchronous approach described previously, though perhaps at least some of the communication latency would be overlapped. Another approach is to wait, one layer at a time, for the gradients for a particular layer to finish averaging and then apply the gradients. It is intuitive to perform the waiting in the same order in which backpropagation was performed, from the last layer to the first layer. Lastly, since all gradient updates are independent, we can perform them in an arbitrary order. This takes advantage of the observation that not all layers have the same number of parameters, and further, the gradients for the weights and the gradients for the biases can be averaged separately; the size of the weight gradients are typically larger than the bias gradients, implying that the bias gradients will complete their communication more quickly. Since the communication of the various parameters can finish somewhat arbitrarily based on when the communication was initiated and the size of the communication, we can apply the gradients as soon as they complete their averaging. We evaluate these strategies in [6]. 4.3 ASYNCHRONOUS GRADIENT DESCENT As stated in (Chen et al. [2016]), parameter server implementations suffer from poor convergence since gradient updates are calculated based on out-of-date networks. Continuing with our data parallel approach, there is a lower limit to the size of the mini-batches and therefore the number of compute devices that can be utilized. 
4.3 ASYNCHRONOUS GRADIENT DESCENT As stated in Chen et al. (2016), parameter server implementations suffer from poor convergence since gradient updates are calculated based on out-of-date networks. Continuing with our data-parallel approach, there is a lower limit to the size of the mini-batches and therefore an upper limit to the number of compute devices that can be utilized. As the amount of work per compute device decreases in proportion to the decreasing size of the mini-batches, there is less computation available to mask the latency of the gradient averaging across the devices. Initiating the averaging layer-wise as described above may not be enough to mitigate this problem. We propose delaying the application of the gradients by a fixed number of iterations, one much smaller than the delay of up to the number of compute devices that would effectively occur in a parameter server approach. The gradients are delayed by using a concurrent communication thread and applying the gradient one, two, or three iterations later, thus giving the averaging enough time to complete as needed. If the gradient needs to be delayed by one iteration, this requires one communication thread and one additional buffer to hold the gradient; delaying by two iterations requires two communication threads and two additional buffers, and so on. This approach lies somewhere between a parameter server (Dean et al. [2012]) and the various approaches that maintain equivalence with a sequential computation. A minimal sketch of the delayed scheme follows.
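A minimal Python sketch of the delayed scheme with a delay of one iteration is shown below; it assumes an MPI library initialized with full thread support (MPI_THREAD_MULTIPLE), and the class name `DelayedAverager` is our own.

```python
# Sketch of the delayed-gradient scheme (delay = 1): one communication
# thread plus one extra buffer holding the in-flight gradient.
import threading
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
nranks = comm.Get_size()

class DelayedAverager:
    def __init__(self):
        self._thread = None
        self._buffer = None  # extra buffer for the gradient being averaged

    def _reduce(self, buf):
        comm.Allreduce(MPI.IN_PLACE, buf, op=MPI.SUM)
        buf /= nranks

    def submit(self, grad):
        """Start averaging this iteration's gradient in the background and
        return the averaged gradient submitted one iteration earlier
        (None on the very first call)."""
        ready = None
        if self._thread is not None:
            self._thread.join()   # the previous average has had a full
            ready = self._buffer  # forward/backward iteration to complete
        self._buffer = grad.copy()
        self._thread = threading.Thread(target=self._reduce, args=(self._buffer,))
        self._thread.start()
        return ready
```

Delaying by two or three iterations generalizes this sketch to a small ring of buffers and communication threads, as described above.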
5 IMPLEMENTATION DETAILS The implementations evaluated in this paper focus on data parallelism and the averaging of gradients across compute devices. This is achieved using MPI and parallel I/O. 5.1 HANDLING I/O The data parallelism is achieved by distributing datasets across compute devices, partitioning them based on the number of devices utilized; each device receives a disjoint subset of the dataset, and no samples are shuffled or exchanged between compute devices outside of the gradient averaging. Caffe frequently uses a database in LMDB format for its datasets; however, this format cannot be used on remote (network) filesystems or even shared between processes on the same host. Caffe mitigates this issue when using more than one GPU on the same host by using a single I/O reading thread and a round-robin distribution of the samples to device-specific queues. Our implementations mitigate this issue by first converting an LMDB database into a netCDF file (Rew & Davis [1990]). netCDF files can be read and partitioned using parallel MPI-IO via the parallel netCDF library (Li et al. [2003]). 5.2 DISTRIBUTED MEMORY IMPLEMENTATION USING MPI For single-node GPU computation, using one or more GPU devices in a single host, Caffe provides a means of allocating one contiguous buffer to hold the data for the weights and biases and a second buffer to hold their gradients. We extended this approach to CPU hosts. A single contiguous buffer allows the non-layer-wise, i.e., network-wise, gradient averages to be performed using a single MPI reduction operation. The layer-wise implementations require one MPI reduction operation per network parameter. There is a fixed cost to start a communication primitive regardless of how much data is communicated, so it is sometimes beneficial to aggregate many small communication requests into a larger one. Although Caffe provides a way of utilizing all GPUs within a host, it does not currently leverage NVIDIA's NCCL package (NVIDIA Corporation [2015]) for optimized, high-bandwidth collective communication routines. We used the NCCL equivalent of the MPI all-reduce to sum gradients across GPU devices on the DGX-1 platform. 6 EXPERIMENTAL EVALUATION In this section, we present an experimental evaluation and analysis of the heuristics described in Section 4. 6.1 HARDWARE ARCHITECTURES We evaluate using a CPU cluster as well as NVIDIA's specialized DGX-1 multi-GPU host system. Each node of the multi-node cluster consists of a multi-core Intel Sandy Bridge CPU, and the nodes are connected via InfiniBand. We use Intel MPI 5.1.2 for performance evaluation. The heuristics are implemented in Caffe (Jia et al., 2014), specifically the intelcaffe branch designed to optimize performance on Intel CPUs. The DGX-1 system contains 8 Pascal GPUs connected using the high-speed NVLink interconnect. For the DGX-1 evaluations, the latest version of Berkeley's Caffe was modified to use the NCCL communication primitives in addition to our algorithmic changes. 6.2 IMAGENET AND NETWORK ARCHITECTURES We evaluate on two distinct network architectures trained on the ImageNet dataset. ImageNet refers specifically to the ILSVRC2015 (Russakovsky et al., 2015) dataset. This dataset consists of a training set of just under 1.3 million images of various sizes (as JPEG files) divided among 1000 classes, along with a validation set consisting of 50,000 images of the same type and classes. Additionally, for the competition, there is a testing set, but it is held separately and not publicly available. It is established as one of the benchmark datasets for machine learning with large datasets, and among the famous architectures that achieved record top-1 and top-5 accuracies on it are AlexNet (Krizhevsky et al., 2012) and GoogLeNet (Szegedy et al., 2015). We evaluate on AlexNet and GoogLeNet because they are now well-established models with known training regimes and loss curves. They also demonstrate two different regimes for parallelization: AlexNet has approximately 60 million parameters that need to be communicated, whereas GoogLeNet has approximately 4 million. In contrast to its smaller amount of communication, GoogLeNet requires roughly twice as much time to process each image as AlexNet does when communication is ignored. 6.3 EVALUATION Figure 1 compares the implemented approaches relative to a communication-less baseline, "no comm". The effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively. For example, using 8 compute devices for GoogLeNet uses a mini-batch size of \(32/8 = 4\). The evaluations on DGX-1 were limited to 8 compute devices, whereas the CPU cluster evaluation eventually hit the strong-scaling limit for data parallelism. These results show that delaying the gradient updates by one or more iterations is the most effective means of hiding the communication latency. The layer-wise approaches did not perform as well as expected. These trends were consistent across both hardware platforms. The layer-wise approaches, though promising as equivalent to a sequential computation, were not able to complete their gradient averages quickly enough. Compared to the delayed gradient approach, this is perhaps intuitive: the delayed gradient approach can hide the communication latency across all three complete phases of the computation, whereas the layer-wise approaches only have as long as it takes to complete the backpropagation phase. This is not enough time to complete the communication, especially as the mini-batch sizes decrease and therefore provide less work to mask the communication. In addition to the time per batch above, the rates of convergence of these heuristics must be evaluated. All of the heuristics completed training AlexNet to the standard top-1 accuracy of \( \approx 54\% \) using the default AlexNet settings that come with Caffe.
However, it is worth noting that the heuristics showed different loss curves at the beginning of training, indicating a tradeoff between the number of batches per second and the accuracy reached at a given batch, as shown in Table 1. Figure 1: Evaluation of SGD and AGD approaches. Effective batch sizes were 256 and 32 for AlexNet and GoogLeNet, respectively. Panels: (a) AlexNet CPU; (b) AlexNet DGX-1; (c) GoogLeNet CPU; (d) GoogLeNet DGX-1. <table> <tr> <th>batch</th> <th>1000</th> <th>2000</th> <th>3000</th> <th>4000</th> <th>5000</th> </tr> <tr> <td>serial, 1 GPU</td> <td>0.0124</td> <td>0.05164</td> <td>0.10102</td> <td>0.13432</td> <td>0.16454</td> </tr> <tr> <td>SGD</td> <td>0.01116</td> <td>0.03984</td> <td>0.07594</td> <td>0.10622</td> <td>0.13052</td> </tr> <tr> <td>AGD, 1 comm</td> <td>0.0039</td> <td>0.01324</td> <td>0.02632</td> <td>0.05076</td> <td>0.07362</td> </tr> <tr> <td>AGD, 2 comm</td> <td>0.00104</td> <td>0.00356</td> <td>0.00636</td> <td>0.01282</td> <td>0.01688</td> </tr> </table> Table 1: AlexNet accuracy after every 1000 batches on DGX-1. We also evaluated whether these approaches converge, in addition to just improving the number of iterations per second. All approaches evaluated managed to converge within the expected number of iterations. Notably, AlexNet on DGX-1 reached convergence in 11 hours using the delayed gradient approach with two communication threads and the standard AlexNet network from Caffe. 7 CONCLUSIONS There is a tradeoff between maintaining equivalence to the sequential method and leveraging the vast computational resources available for gradient descent. We find that asynchronous methods can give a 1.7x speedup without sacrificing accuracy at the end of an otherwise identical training regime. This improvement was achieved without the need for a warm start, contrary to previously published results using parameter servers.
reject
Reject
3.666667
4d11aba1899e662014fe117bdb73c0e422c9cd09
iclr
2,017
CLASSLESS ASSOCIATION USING NEURAL NETWORKS Federico Raue1,2, Sebastian Palacio2, Andreas Dengel1,2, Marcus Liwicki1 1University of Kaiserslautern, Germany 2German Research Center for Artificial Intelligence (DFKI), Germany. {federico.raue,sebastian.palacio,andreas.dengel}@dfki.de, liwicki@cs.uni-kl.de ABSTRACT The goal of this paper is to train a model based on the relation between two instances that represent the same unknown class. This scenario is inspired by the Symbol Grounding Problem and association learning in infants. We propose a novel model called Classless Association. It has two parallel Multilayer Perceptrons (MLPs) that use one network as the target of the other network, and vice versa. In addition, the presented model is trained based on an EM-approach, in which the output vectors are matched against a statistical distribution. We generate four classless datasets based on MNIST, where the input is two different instances of the same digit. In addition, the digits have a uniform distribution. Furthermore, our classless association model is evaluated against two scenarios: totally supervised and totally unsupervised. In the first scenario, our model reaches good performance in terms of accuracy under the classless constraint. In the second scenario, our model reaches better results than two clustering algorithms. 1 INTRODUCTION Infants are able to learn the binding between abstract concepts and the real world via their sensory input. For example, the abstract concept ball is bound to the visual representation of a rounded object and the auditory representation of the phonemes /b/ /a/ /l/. This scenario can be seen as the Symbol Grounding Problem (Harnad [1990]). Moreover, infants are also able to learn the association between different sensory input signals while they are still learning the binding of the abstract concepts. Several results have shown a correlation between object recognition (visual) and vocabulary acquisition (auditory) in infants (Balaban & Waxman [1997]; Asano et al. [2015]). One example of this correlation is the first words that infants learn. In that case, the words are mainly nouns, which are visible concepts, such as dad, mom, ball, dog, cat (Gershkoff-Stowe & Smith [2004]). As a result, we can define the previous scenario in terms of a machine learning task. More formally, the task is defined by learning the association between two parallel streams of data that represent the same unknown class (or abstract concept). Note that this task is different from the supervised association where the data has labels. First, the semantic concept is unknown in our scenario, whereas it is known in the supervised case. Second, both classifiers need to agree on the same coding scheme for each sample pair during training. In contrast, the coding scheme is already pre-defined before training in the supervised case. Figure 1 shows an example of the difference between a supervised association task and our scenario. Usually, classifiers require labeled data for training. However, the presented scenario needs an alternative training mechanism. One way is to train based on statistical distributions. Casey (1986) proposed to solve the OCR problem using language statistics for inferring characters from images. Later on, Knight et al. (2006) applied a similar idea to machine translation. Recently, Sutskever et al. (2015) defined the Output Distribution Matching (ODM) cost function for dual autoencoders and generative networks.
In this paper, we propose a novel model that is trained based on the association of two input samples of the same unknown class. The presented model has two parallel Multilayer Perceptrons (MLPs) with an Expectation-Maximization (EM) (Dempster et al. [1977]) training rule that matches the network output against a statistical distribution. Also, both networks agree on the same classification because one network is used as the target of the other network, and vice versa. Our model has some <table> <tr> <th>Task</th> <th>Input Samples (same)</th> <th>Abstract Concept Association (different)</th> <th>Coding Scheme for each input (different)</th> <th>Classifiers (same)</th> </tr> <tr> <td rowspan="2">Supervised Association</td> <td>3</td> <td>"three"</td> <td>Known before and after Training</td> <td>classifier 1<br>classifier 2</td> </tr> <tr> <td>3</td> <td>"three"</td> <td></td> <td></td> </tr> <tr> <td rowspan="2">Classless Association (our work)</td> <td>3</td> <td>"unknown"</td> <td>Unknown before Training<br>Known after Training</td> <td>classifier 1<br>classifier 2</td> </tr> <tr> <td>3</td> <td>"unknown"</td> <td></td> <td></td> </tr> </table> Figure 1: Difference between the supervised and classless association tasks. The classless association is more challenging than the supervised association because the model must learn to discriminate the semantic concept without labels. In addition, both classifiers need to agree on the same coding scheme for each semantic concept. In contrast, this information is already known in the supervised association scenario. similarities with the Siamese Networks proposed by Chopra et al. (2005). They introduced their model for supervised face verification, where training is based on constraints on pairs of faces. The constraints exploit the relation of two faces that may or may not be instances of the same person. However, there are some differences to our work. First, our training rule does not have pre-defined classes before training, whereas the Siamese Network requires labeled samples. Second, our model only requires instances of the same unknown class, whereas the Siamese Network requires two types of input pairs: a) instances of the same person and b) instances of two different persons. Our contributions in this paper are • We define a novel training rule based on matching the output vectors of the presented model against a statistical distribution. Note that the output vectors are used as symbolic features, similar to the Symbol Grounding Problem. Furthermore, the proposed training rule is based on an EM-approach and classifies each sample based on generated pseudo-classes (Section 2.1). • We propose a novel architecture for learning the association in the classless scenario. Moreover, the presented model uses two parallel MLPs, which are required to agree on the same class for each input sample. This association is motivated by the correlation between different sensory input signals in infant development. In more detail, one network is the target of the other network, and vice versa. Also, note that our model is gradient-based and can be extended to deeper architectures (Section 2.2). • We evaluate our classless association task against two cases: totally supervised and totally unsupervised. In this manner, we can verify the range of our results in terms of supervised and unsupervised cases, since our model is neither totally supervised nor totally unsupervised.
We compare against an MLP trained with labels as the supervised scenario (upper bound) and two clustering algorithms (K-means and Hierarchical Agglomerative) as the unsupervised scenario (lower bound). First, our model reaches better results than the clustering algorithms. Second, our model shows promising results with respect to the supervised scenario (Sections 3 and 4). 2 METHODOLOGY In this paper, we are interested in the classless association task in the following scenario: two input instances \( x^{(1)} \) and \( x^{(2)} \) belong to the same unknown class \( c \), where \( x^{(1)} \in X^{(1)} \) and \( x^{(2)} \in X^{(2)} \), with \( X^{(1)} \) and \( X^{(2)} \) two disjoint sets, and the goal is to learn that the output classifications of \( x^{(1)} \) and \( x^{(2)} \) are the same, \( c^{(1)} = c^{(2)} \), where \( c^{(1)}, c^{(2)} \in C \), the set of possible classes. With this in mind, we present a model that has two parallel Multilayer Perceptrons (MLPs) that are trained with an EM-approach that associates both networks in the following manner: one network uses the other network as a target, and vice versa. We explain how the output vectors of the network are matched to a statistical distribution in Section 2.1, and the classless association learning is presented in Section 2.2. 2.1 Statistical Constraint One of our constraints is to train an MLP without classes. As a result, we use an alternative training rule based on matching the output vectors to a statistical distribution. For simplicity, we explain our training rule using a single MLP with one hidden layer, which is defined by \[ z = network(x; \theta) \] where \( x \in \mathbb{R}^n \) is the input vector, \( \theta \) encodes the parameters of the MLP, and \( z \in \mathbb{R}^c \) is the output vector. Moreover, the output vectors (\( z_1, \ldots, z_m \)) of a mini-batch of size \( m \) are matched to a target distribution (\( \mathbb{E}[z_1, \ldots, z_m] \sim \phi \in \mathbb{R}^c \)), e.g., a uniform distribution. We have selected a uniform distribution because it is the ideal case of a balanced dataset for any classifier. However, it is possible to extend the rule to different distributions. We introduce a new parameter, a weighting vector \( \gamma \in \mathbb{R}^c \). The intuition behind it is to guide the network based on a set of generated pseudo-classes \( c \). These pseudo-classes can be seen as cluster indexes that group similar elements. With this in mind, we also propose an EM-training rule for learning the unknown class given a desired target distribution. We want to point out that the pseudo-classes are internal representations of the network that are independent of the labels. The E-step obtains the current statistical distribution given the output vectors (\( z_1, \ldots, z_m \)) and the weighting vector (\( \gamma \)). In this case, an approximation of the distribution is obtained by the following equation \[ \hat{z} = \frac{1}{M} \sum_{i=1}^M power(z_i, \gamma) \] where \( \gamma \) is the weighting vector, \( z_i \) is the output vector of the network, \( M \) is the number of elements, and the function \( power \)1 is the element-wise power operation between the output vector and the weighting vector. We have used the power function because the output vectors (\( z_1, \ldots, z_m \)) are quite similar to each other at the initial state of the network, and the power function provides an initial boost for learning to separate the input samples into different pseudo-classes in the first iterations.
Moreover, we can retrieve the pseudo-classes from the maximum value of the following equation \[ c^* = arg\ max_c\ power(z_i, \gamma) \] where \( c^* \) is the pseudo-class, which is used in the M-step for updating the MLP weights. Also, note that the pseudo-classes are not updated in an online manner. Instead, the pseudo-classes are updated after a certain number of iterations. The reason is that the network requires a number of iterations to learn the common features. The M-step updates the weighting vector \( \gamma \) given the current distribution \( \hat{z} \). Also, the MLP parameters (\( \theta \)) are updated given the current classification given by the pseudo-classes. The cost function is the squared difference between the current distribution and the desired statistical distribution, which is defined by \[ cost = (\hat{z} - \phi)^2 \] 1We write \( power(z_i, \gamma) \) instead of \( z_i^\gamma \) in order to simplify the index notation. Figure 2: The proposed training rule applied to a single MLP. The E-step generates a set of pseudo-classes \( c_1, \ldots, c_m \) for each output in the mini-batch of size \( m \), and a probability approximation \( \hat{z} \) of the output vectors in the mini-batch. The M-step updates the MLP weights given the pseudo-classes and the weighting vector \( \gamma \), given the target statistical distribution \( \phi \). where \( \hat{z} \) is the current statistical distribution of the output vectors, and \( \phi \) is a vector that represents the desired statistical distribution, e.g. a uniform distribution. Then, the weighting vector is updated via gradient descent \[ \gamma = \gamma - \alpha \nabla_{\gamma} cost \] where \( \alpha \) is the learning rate and \( \nabla_{\gamma} cost \) is the derivative w.r.t. \( \gamma \). Also, the MLP weights are updated via the generated *pseudo-classes*, which are used as targets in the backpropagation step. In summary, we propose an EM-training rule for matching the network output vectors to a desired target statistical distribution. The *E-Step* generates *pseudo-classes* and finds an approximation of the current statistical distribution of the output vectors. The *M-Step* updates the MLP parameters and the weighting vector. With this in mind, we adapt the mentioned training rule to the classless association task. Figure 2 summarizes the presented EM training rule and its components. A compact sketch of one such EM step is given below.
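To make the two steps concrete, the following is a minimal numpy sketch of one EM step for a single network, assuming softmax outputs; the analytic gradient of the cost with respect to \( \gamma \) is written out explicitly, and all names are ours rather than the paper's.

```python
# Sketch of one EM step of the statistical-constraint rule (Equations 2-5).
import numpy as np

def em_step(Z, gamma, phi, alpha=0.1):
    """Z: (M, C) softmax outputs of a mini-batch; gamma, phi: (C,) vectors.
    Returns the pseudo-classes for the mini-batch and the updated gamma."""
    P = Z ** gamma                       # element-wise power(z_i, gamma)
    z_hat = P.mean(axis=0)               # Eq. 2: current distribution estimate
    pseudo = P.argmax(axis=1)            # Eq. 3: pseudo-class per sample
    # cost = sum_c (z_hat_c - phi_c)^2, so d cost / d gamma_c equals
    # 2 * (z_hat_c - phi_c) * mean_i( z_ic^gamma_c * log(z_ic) ).
    grad = 2.0 * (z_hat - phi) * (P * np.log(Z + 1e-12)).mean(axis=0)
    return pseudo, gamma - alpha * grad  # Eq. 5: gradient step on gamma

# Toy usage: 8 samples, 4 pseudo-classes, uniform target distribution.
rng = np.random.default_rng(0)
logits = rng.standard_normal((8, 4))
Z = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
pseudo, gamma = em_step(Z, np.ones(4), np.full(4, 0.25))
# In the full model, `pseudo` from one MLP becomes the backpropagation
# targets of the partner MLP, and vice versa.
```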
2.2 CLASSLESS ASSOCIATION LEARNING Our second constraint is to classify both input samples as the same class and different from the other classes. Note that the *pseudo-class* (Equation 3) is used as an identification for each input sample and is not related to the semantic concept or labels. The presented classless association model is trained based on a statistical constraint. Formally, the input is represented by the pair \( x^{(1)} \in \mathbb{R}^{n_1} \) and \( x^{(2)} \in \mathbb{R}^{n_2} \), where \( x^{(1)} \) and \( x^{(2)} \) are two different instances of the same *unknown* label. The classless association model has two parallel Multilayer Perceptrons \( MLP^{(1)} \) and \( MLP^{(2)} \) with a training rule that follows an EM-approach (*cf.* Section 2.1). Moreover, input samples are divided into several mini-batches of size \( m \). Initially, all input samples have random *pseudo-classes* \( c_i^{(1)} \) and \( c_i^{(2)} \). The pseudo-classes have the same desired statistical distribution \( \phi \). Also, the weighting vectors \( \gamma^{(1)} \) and \( \gamma^{(2)} \) are initialized to one. Then each input element from the mini-batch is propagated forward through each MLP. Afterwards, an estimation of the statistical distribution for each MLP (\( \hat{z}^{(1)} \) and \( \hat{z}^{(2)} \)) is obtained. Furthermore, a new set of *pseudo-classes* (\( c_1^{(1)}, \ldots, c_m^{(1)} \) and \( c_1^{(2)}, \ldots, c_m^{(2)} \)) is obtained for each network. Note that this first part can be seen as the *E-step* from Section 2.1. We want to point out that the pseudo-classes are updated only after a number of iterations. The second part of our association training updates the MLP parameters and the *weighting vectors* (\( \gamma^{(1)} \) and \( \gamma^{(2)} \)). In this step, one network (\( MLP^{(1)} \)) uses the pseudo-classes (\( c_1^{(2)}, \ldots, c_m^{(2)} \)) obtained from the other network (\( MLP^{(2)} \)), and vice versa. In addition, the weighting vector is updated based on the mismatch between the output approximations (\( \hat{z}^{(1)} \) and \( \hat{z}^{(2)} \)) and the desired target distribution (\( \phi \)). Figure 3: Overview of the presented model for classless association of two input samples that represent the same unknown classes. The association relies on matching the network output to a statistical distribution. Also, it can be observed that our model uses the pseudo-classes obtained by \( MLP^{(1)} \) as targets of \( MLP^{(2)} \), and vice versa. Figure 3 shows an overview of the presented model. 3 EXPERIMENTS In this paper, we are interested in a simplified scenario inspired by the Symbol Grounding Problem and association learning between sensory input signals in infants. We evaluate our model on four *classless* datasets that are generated from MNIST (Lecun & Cortes 2010). The procedure of generating *classless datasets from labeled datasets* has already been applied in (Sutskever et al. 2015; Hsu & Kira 2015). Each dataset has two disjoint sets, *input 1* and *input 2*. The first dataset (*MNIST*) has two different instances of the same digit. The second dataset (*Rotated-90 MNIST*) has two different instances of the same digit, and all input samples in *input 2* are rotated 90 degrees. The third dataset (*Inverted MNIST*) follows a similar procedure as the second dataset, but the transformation of the elements in *input 2* is the invert function instead of a rotation. The last dataset (*Random Rotated MNIST*) is more challenging because all elements in *input 2* are randomly rotated between 0 and \( 2\pi \). All datasets have a uniform distribution over the digits, and each dataset has 21,000 samples for training and 4,000 samples each for validation and testing. A sketch of this pairing procedure is shown below.
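As an illustration, the following sketch shows how such classless pairs could be generated from a labeled MNIST-style dataset; the labels are used only to pair two different instances of the same digit and are discarded afterwards. Function and variable names are our own, not the paper's.

```python
# Sketch: building classless pairs from labeled MNIST-style data.
import numpy as np

def make_classless_pairs(images, labels, transform=None, seed=0):
    rng = np.random.default_rng(seed)
    input1, input2 = [], []
    for digit in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == digit))
        half = len(idx) // 2              # two disjoint sets per digit
        for i, j in zip(idx[:half], idx[half:]):
            x2 = images[j] if transform is None else transform(images[j])
            input1.append(images[i])
            input2.append(x2)
    order = rng.permutation(len(input1))  # labels are dropped at this point
    return np.asarray(input1)[order], np.asarray(input2)[order]

invert = lambda img: 1.0 - img            # Inverted MNIST variant
rot90 = lambda img: np.rot90(img)         # Rotated-90 MNIST variant
```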
The following parameters turned out to be optimal on the validation set. For the first three datasets, each internal MLP relies on two fully connected layers of 200 and 100 neurons, respectively. The learning rate for the MLPs starts at 1.0 and decays by half after every 1,000 iterations. We set the initial *weighting vector* to 1.0 and updated it after every 1,000 iterations as well. Moreover, the best parameters for the fourth dataset were the same for \( MLP^{(1)} \) and different for \( MLP^{(2)} \), which has two fully connected layers of 400 and 150 neurons, respectively, and whose learning rate starts at 1.2. The target distribution \( \phi \) is uniform for all datasets. The decay of the learning rate (Equation 5) for the *weighting vector* was given by \( 1/(100 + epoch)^{0.3} \), where *epoch* is the number of training iterations so far. The mini-batch size \( M \) is 5,250 sample pairs (corresponding to 25% of the training set), and the mean of the derivatives over each mini-batch is used for the back-propagation step of the MLPs. Note that this mini-batch is quite large compared with common setups. We infer from this parameter that the model requires a sample size large enough to estimate the uniform distribution and also needs to learn more slowly than traditional approaches. Our model was implemented in *Torch*. <table> <tr> <th>Epoch</th> <th>Purity MLP(1) (%)</th> <th>Purity MLP(2) (%)</th> </tr> <tr> <td>Initial State</td> <td>10.9</td> <td>10.9</td> </tr> <tr> <td>1,000</td> <td>24.8</td> <td>22.6</td> </tr> <tr> <td>3,000</td> <td>64.4</td> <td>65.8</td> </tr> <tr> <td>49,000</td> <td>95.5</td> <td>95.6</td> </tr> </table> Figure 4: Example of the presented model during classless training (image panels showing the output samples of \( MLP^{(1)} \) and \( MLP^{(2)} \) and the association matrix; the purity values are kept in the table above). In this example, there are ten pseudo-classes, represented by the rows of \( MLP^{(1)} \) and \( MLP^{(2)} \). Note that the output classifications are randomly selected (not cherry-picked). Initially, the pseudo-classes are assigned randomly to all input sample pairs, which yields a uniform distribution (first row). Then, the classless association model slowly starts learning features and grouping similar input samples. Afterwards, the output classifications of both MLPs gradually come to agree during training, and the association matrix shows the relation between the occurrences of the pseudo-classes. To determine the baseline for our classless constraint, we compared our model against two cases: totally supervised and totally unsupervised. In the supervised case, we used the same MLP parameters and training for a fair comparison. In the unsupervised scenario, we applied k-means and agglomerative clustering to each set (input 1 and input 2) independently. The clustering algorithm implementations are provided by scikit-learn (Pedregosa et al., 2011). 4 RESULTS AND DISCUSSION In this work, we have generated ten different folds for each dataset and report the average results. We introduce the Association Accuracy for measuring association, defined by the following equation \[ Association\ Accuracy = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}(c_i^{(1)} = c_i^{(2)}) \] Table 1: Association Accuracy (%) and Purity (%) results. Our model is compared with the supervised scenario (class labels are provided) and with K-means and Hierarchical Agglomerative clustering (no class information).
<table> <tr> <th>Dataset</th> <th>Model</th> <th>Association Accuracy (%)</th> <th>Purity input 1 (%)</th> <th>Purity input 2 (%)</th> </tr> <tr> <td rowspan="4">MNIST</td> <td>supervised association</td> <td>96.7 ± 0.3</td> <td>96.7 ± 0.2</td> <td>96.6 ± 0.3</td> </tr> <tr> <td><b>classless association</b></td> <td>87.4 ± 2.9</td> <td>87.1 ± 6.6</td> <td>87.0 ± 6.4</td> </tr> <tr> <td>K-means</td> <td>-</td> <td>63.9 ± 2.2</td> <td>62.5 ± 3.7</td> </tr> <tr> <td>Hierarchical Agglomerative</td> <td>-</td> <td>64.9 ± 4.7</td> <td>64.3 ± 5.5</td> </tr> <tr> <td rowspan="4">Rotated-90 MNIST</td> <td>supervised association</td> <td>93.2 ± 0.3</td> <td>96.4 ± 0.2</td> <td>96.6 ± 0.21</td> </tr> <tr> <td><b>classless association</b></td> <td>86.5 ± 2.5</td> <td>82.9 ± 4.5</td> <td>82.9 ± 4.3</td> </tr> <tr> <td>K-means</td> <td>-</td> <td>65.0 ± 2.8</td> <td>64.0 ± 3.6</td> </tr> <tr> <td>Hierarchical Agglomerative</td> <td>-</td> <td>65.4 ± 3.5</td> <td>64.1 ± 4.1</td> </tr> <tr> <td rowspan="4">Inverted MNIST</td> <td>supervised association</td> <td>93.2 ± 0.3</td> <td>96.5 ± 0.2</td> <td>96.5 ± 0.2</td> </tr> <tr> <td><b>classless association</b></td> <td>89.2 ± 2.4</td> <td>89.0 ± 6.8</td> <td>89.1 ± 6.8</td> </tr> <tr> <td>K-means</td> <td>-</td> <td>64.8 ± 2.0</td> <td>65.0 ± 2.5</td> </tr> <tr> <td>Hierarchical Agglomerative</td> <td>-</td> <td>64.8 ± 4.4</td> <td>64.4 ± 3.8</td> </tr> <tr> <td rowspan="4">Random Rotated MNIST</td> <td>supervised association</td> <td>88.0 ± 0.5</td> <td>96.5 ± 0.3</td> <td>90.9 ± 0.5</td> </tr> <tr> <td><b>classless association</b></td> <td>69.3 ± 2.2</td> <td>75.8 ± 7.3</td> <td>65.3 ± 5.0</td> </tr> <tr> <td>K-means</td> <td>-</td> <td>64.8 ± 2.6</td> <td>14.8 ± 0.4</td> </tr> <tr> <td>Hierarchical Agglomerative</td> <td>-</td> <td>65.9 ± 2.8</td> <td>15.2 ± 0.5</td> </tr> </table> where the *indicator function* is one if \( c_i^{(1)} = c_i^{(2)} \) and zero otherwise; \( c_i^{(1)} \) and \( c_i^{(2)} \) are the *pseudo-classes* from \( MLP^{(1)} \) and \( MLP^{(2)} \), respectively, and \( N \) is the number of elements. In addition, we also report the *Purity* of each set (*input 1* and *input 2*). *Purity* is defined by \[ Purity(\Omega, \mathcal{C}) = \frac{1}{N} \sum_{i=1}^k max_j |c_i \cap gt_j| \] where \( \Omega = \{gt_1, gt_2, \ldots, gt_j\} \) is the set of ground-truth labels, \( \mathcal{C} = \{c_1, c_2, \ldots, c_k\} \) is the set of *pseudo-classes* in our model or the set of cluster indexes of K-means or Hierarchical Agglomerative clustering, and \( N \) is the number of elements. Table 1 shows the *Association Accuracy* of our model versus the supervised association task and the *Purity* of our model versus the two clustering algorithms. First, the supervised association task performs better than the presented model. This was expected because our task is more complex than the supervised scenario. However, we can infer from our results that the presented model performs well given the classless constraint, relative to the supervised method. Second, our model not only learns the association between input samples but also groups similar elements under the same *pseudo-class*. A short numpy sketch of both metrics follows.
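The following is a short numpy sketch of both metrics, assuming integer pseudo-classes and ground-truth labels; the function names are ours.

```python
# Sketch of the two evaluation metrics defined above.
import numpy as np

def association_accuracy(c1, c2):
    """Fraction of sample pairs whose two pseudo-classes agree."""
    return np.mean(np.asarray(c1) == np.asarray(c2))

def purity(pseudo, ground_truth):
    """For each pseudo-class, count its most frequent ground-truth label,
    then normalize by the total number of samples."""
    pseudo, ground_truth = np.asarray(pseudo), np.asarray(ground_truth)
    total = 0
    for c in np.unique(pseudo):
        labels = ground_truth[pseudo == c]
        total += np.bincount(labels).max()  # |c_i ∩ gt_j| for the best j
    return total / len(ground_truth)

# Toy check: perfect agreement and perfectly pure clusters.
assert association_accuracy([0, 1, 2], [0, 1, 2]) == 1.0
assert purity([0, 0, 1, 1], [3, 3, 5, 5]) == 1.0
```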
We also evaluated the purity of our model and found that it reaches better results than both clustering methods on each set (*input 1* and *input 2*). Figure 4 illustrates an example of the proposed learning rule. The first two columns (\( MLP^{(1)} \) and \( MLP^{(2)} \)) are the output classifications (Equation 3), and each row represents a *pseudo-class*. We have randomly selected 15 output samples for each MLP (not cherry-picked). Initially, the *pseudo-classes* are randomly selected for each MLP. As a result, the output classification of both networks does not show any visible discriminative structure, and the initial purity is close to random choice (first row). After 1,000 epochs, the networks start learning some features in order to discriminate the input samples. Some groups of digits are grouped together after 3,000 epochs. For example, the first row of \( MLP^{(2)} \) shows several *zero* digits, whereas \( MLP^{(1)} \) has not yet agreed on the same digit for that *pseudo-class*. In contrast, both MLPs have almost agreed on the digit *one* in the fifth row. Finally, the association is learned using only the statistical distribution of the input samples, and each digit is represented by its own *pseudo-class*. Figure 5: Example of the best and worst results among all folds and datasets. It can be observed that our model is able to learn to discriminate each digit (first row). However, the presented model has a limitation: two or more digits can be assigned to the same pseudo-class (last row of \( MLP^{(1)} \) and \( MLP^{(2)} \)). Figure 5 shows the best and worst results of our model. The first row is the best result, from the MNIST dataset. Each row of \( MLP^{(1)} \) and \( MLP^{(2)} \) represents a pseudo-class, and it can be observed that all digits are grouped together correctly. In addition, the association matrix shows a distribution per digit close to the desired uniform distribution, and the purity of each input is close to the supervised scenario. In contrast, the second row is our worst result, from the Random Rotated MNIST dataset. In this example, we can observe that some digits are still recognized by their own pseudo-classes, for example, the digits one and seven (first two rows). However, there are two or more digits that are recognized by the same pseudo-class. For example, the last row shows that nine and four are merged. Our model is still able to reach better results than the unsupervised scenario.
We want to point out that our model was evaluated in an optimal case, where the input samples are uniformly distributed and the number of classes is known. In future work, we will explore the performance of our model when the number of classes and the statistical distribution are unknown. One way is to change the number of pseudo-classes; this can be seen as changing the number of clusters k in k-means. With this in mind, we are planning a more exhaustive analysis of the learning behavior with deeper architectures. Moreover, we will study how a small set of labeled classes affects the performance of our model (similar to semi-supervised learning). Furthermore, we are interested in replicating our findings in more complex scenarios, such as multimodal datasets like TVGraz (Khan et al., 2009) or Wikipedia featured articles (Rasiwasia et al., 2010). Finally, our work can be applied to more classless scenarios where data can be extracted simultaneously from different input sources. Also, transformation functions can be applied to input samples for creating the association without classes. ACKNOWLEDGMENTS We would like to thank Damian Borth, Christian Schulze, Jörn Hees, Tushar Karayil, and Philipp Blandfort for helpful discussions. REFERENCES Michiko Asano, Mutsumi Imai, Sotaro Kita, Keiichi Kitajo, Hiroyuki Okada, and Guillaume Thierry. Sound symbolism scaffolds language development in preverbal infants. Cortex, 63:196–205, 2015. M. T. Balaban and S. R. Waxman. Do words facilitate object categorization in 9-month-old infants? Journal of Experimental Child Psychology, 64(1):3–26, January 1997. ISSN 0022-0965. Richard G. Casey. Text OCR by solving a cryptogram. International Business Machines Incorporated, Thomas J. Watson Research Center, 1986. Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pp. 539–546. IEEE, 2005. A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977. Lisa Gershkoff-Stowe and Linda B. Smith. Shape and the first hundred nouns. Child Development, 75(4):1098–1114, 2004. ISSN 0009-3920. Stevan Harnad. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1):335–346, 1990. Yen-Chang Hsu and Zsolt Kira. Neural network-based clustering using pairwise constraints. arXiv preprint arXiv:1511.06321, 2015. Inayatullah Khan, Amir Saffari, and Horst Bischof. TVGraz: Multi-modal learning of object categories by combining textual and visual features. In AAPR Workshop, pp. 213–224, 2009. Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. Unsupervised analysis for decipherment problems. In Proceedings of the COLING/ACL on Main Conference Poster Sessions, pp. 499–506. Association for Computational Linguistics, 2006. Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits. 2010. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011. N. Rasiwasia, J. Costa Pereira, E. Coviello, G. Doyle, G. R. G. Lanckriet, R. Levy, and N. Vasconcelos. A new approach to cross-modal multimedia retrieval.
In ACM International Conference on Multimedia, pp. 251–260, 2010. Ilya Sutskever, Rafal Jozefowicz, Karol Gregor, Danilo Rezende, Tim Lillicrap, and Oriol Vinyals. Towards principled unsupervised learning. arXiv preprint arXiv:1511.06440, 2015. SUPPLEMENTAL MATERIAL We have included two more examples of the classless training. In addition, we have generated some demos that show the training algorithm (https://goo.gl/xsmkFD). <table> <tr> <th>Epoch</th> <th>Purity MLP(1) (%)</th> <th>Purity MLP(2) (%)</th> </tr> <tr> <td>Initial State</td> <td>10.9</td> <td>10.9</td> </tr> <tr> <td>1,000</td> <td>24.1</td> <td>21.9</td> </tr> <tr> <td>3,000</td> <td>76.7</td> <td>72.9</td> </tr> <tr> <td>300,000</td> <td>96.8</td> <td>96.5</td> </tr> </table> Figure 1: Example of the classless training using the Inverted MNIST dataset (image panels showing the output samples of \( MLP^{(1)} \) and \( MLP^{(2)} \) and the association matrix; the purity values are kept in the table above). <table> <tr> <th>Epoch</th> <th>Purity MLP(1) (%)</th> <th>Purity MLP(2) (%)</th> </tr> <tr> <td>Initial State</td> <td>10.9</td> <td>10.9</td> </tr> <tr> <td>1,000</td> <td>35.0</td> <td>27.6</td> </tr> <tr> <td>3,000</td> <td>48.3</td> <td>40.3</td> </tr> <tr> <td>300,000</td> <td>79.3</td> <td>72.5</td> </tr> </table> Figure 2: Example of the classless training using the Random Rotated MNIST dataset (image panels showing the output samples of \( MLP^{(1)} \) and \( MLP^{(2)} \) and the association matrix; the purity values are kept in the table above).
ABSTRACT The goal of this paper is to train a model based on the relation between two instances that represent the same unknown class. This scenario is inspired by the Symbol Grounding Problem and the association learning in infants. We propose a novel model called Classless Association. It has two parallel Multilayer Perceptrons (MLP) that uses one network as a target of the other network, and vice versa. In addition, the presented model is trained based on an EM-approach, in which the output vectors are matched against a statistical distribution. We generate four classless datasets based on MNIST, where the input is two different instances of the same digit. In addition, the digits have a uniform distribution. Furthermore, our classless association model is evaluated against two scenarios: totally supervised and totally unsupervised. In the first scenario, our model reaches a good performance in terms of accuracy and the classless constraint. In the second scenario, our model reaches better results against two clustering algorithms. 1 INTRODUCTION Infants are able to learn the binding between abstract concepts to the real world via their sensory input. For example, the abstract concept ball is binding to the visual representation of a rounded object and the auditory representation of the phonemes /b/ /a/ /l/. This scenario can be seen as the Symbol Grounding Problem (Harnad [1990]). Moreover, infants are also able to learn the association between different sensory input signals while they are still learning the binding of the abstract concepts. Several results have shown a correlation between object recognition (visual) and vocabulary acquisition (auditory) in infants (Balaban & Waxman[1997]; Asano et al.[2015]). One example of this correlation is the first words that infants have learned. In that case, the words are mainly nouns, which are visible concepts, such as, dad, mom, ball, dog, cat (Gershkoff-Stowe & Smith [2004]). As a result, we can define the previous scenario in terms of a machine learning tasks. More formally, the task is defined by learning the association between two parallel streams of data that represent the same unknown class (or abstract concept). Note that this task is different from the supervised association where the data has labels. First, the semantic concept is unknown in our scenario whereas it is known in the supervised case. Second, both classifiers needs to agree on the same coding scheme for each sample pair during training. In contrast, the coding-scheme is already pre-defined before training in the supervised case. Figure 1 shows an example of the difference between a supervised association task and our scenario. Usually, classifiers requires labeled data for training. However, the presented scenario needs an alternative training mechanism. One way is to train based on statistical distributions. Casey (1986) proposed to solve the OCR problem using language statistics for inferring form images to characters. Later on, Knight et al. (2006) applied a similar idea to machine translation. Recently, Sutskever et al. (2015) defined the Output Distribution Matching (ODM) cost function for dual autoencoders and generative networks. In this paper, we are proposing a novel model that is trained based on the association of two input samples of the same unknown class. 
The presented model has two parallel Multilayer Perceptrons (MLPs) with an Expectation-Maximization (EM) (Dempster et al.[1977]) training rule that matches the network output against a statistical distribution. Also, both networks agree on the same classification because one network is used as target of the other network, and vice versa. Our model has some <table> <tr> <th>Task</th> <th>Input Samples (same)</th> <th>Abstract Concept Association (different)</th> <th>Coding Scheme for each input (different)</th> <th>Classifiers (same)</th> </tr> <tr> <td rowspan="2">Supervised Association</td> <td>3</td> <td>"three"</td> <td>Known before and after Training</td> <td>classifier 1<br>classifier 2</td> </tr> <tr> <td>3</td> <td>"three"</td> <td></td> <td></td> </tr> <tr> <td rowspan="2">Classless Association (our work)</td> <td>3</td> <td>"unknown"</td> <td>Unknown before Training<br>Known after Training</td> <td>classifier 1<br>classifier 2</td> </tr> <tr> <td>3</td> <td>"unknown"</td> <td></td> <td></td> </tr> </table> Figure 1: Difference between the supervised and classless association tasks. The classless association is more challenging that the supervised association because the model requires to learn to discriminate the semantic concept without labels. In addition, both classifiers need to agree on the same coding scheme for each semantic concept. In contrast, the mentioned information is already known in the supervised association scenario. similarities with Siamese Networks proposed by Chopra et al. (2005). They introduced their model for supervised face verification where training is based on constraints of pairs of faces. The constraints exploit the relation of two faces that may or may not be instances of the same person. However, there are some differences to our work. First, our training rule does not have pre-defined classes before training, whereas the Siamese Network requires labeled samples. Second, our model only requires instances of the same unknown class, whereas the Siamese network requires two types of input pairs: a) instances of the same person and b) instances of two different persons. Our contributions in this paper are • We define a novel training rule based on matching the output vectors of the presented model and a statistical distribution. Note that the output vectors are used as symbolic features similar to the Symbol Grounding Problem. Furthermore, the proposed training rule is based on an EM-approach and classified each sample based on generated pseudo-classes (Section 2.1). • We propose a novel architecture for learning the association in the classless scenario. Moreover, the presented model uses two parallel MLPs, which require to agree on the same class for each input sample. This association is motivated by the correlation between different sensory input signals in infants development. In more detail, one network is the target of the other network, and vice versa. Also, note that our model is gradient-based and can be extended to deeper architectures (Section 2.2). • We evaluate our classless association task against two cases: totally supervised and totally unsupervised. In this manner, we can verify the range of our results in terms of supervised and unsupervised cases since our model is neither totally supervised nor totally unsupervised. We compare against a MLP trained with labels as the supervised scenario (upper bound) and two clustering algorithms (K-means and Hierarchical Agglomerative) as the unsupervised scenario (lower bound). 
First, our model reaches better results than the clustering. Second, our model shows promising results with respect to the supervised scenario (Sections 3 and 4). 2 METHODOLOGY In this paper, we are interested in the classless association task in the following scenario: two input instances \( x^{(1)} \) and \( x^{(2)} \) belong to the same unknown class \( c \), where \( x^{(1)} \in X^{(1)} \) and \( x^{(2)} \in X^{(2)} \) are two disjoint sets, and the goal is to learn the output classification of \( x^{(1)} \) and \( x^{(2)} \) is the same \( c^{(1)} = c^{(2)} \), where \( c^{(1)} \) and \( c^{(2)} \in C \) is the set of possible classes. With this in mind, we present a model that has two parallel Multilayer Perceptrons (MLPs) that are trained with an EM-approach that associates both networks in the following manner: one network uses the other network as a target, and vice versa. We explain how the output vectors of the network are matched to a statistical distribution in Section 2.1 and the classless association learning is presented in Section 2.2. 2.1 Statistical Constraint One of our constraint is to train a MLP without classes. As a result, we use an alternative training rule based on matching the output vectors and a statistical distribution. For simplicity, we explain our training rule using a single MLP with one hidden layer, which is defined by \[ z = network(x; \theta) \] where \( x \in \mathbb{R}^n \) is the input vector, \( \theta \) encodes the parameters of the MLP, and \( z \in \mathbb{R}^c \) is the output vector. Moreover, the output vectors (\( z_1, \ldots, z_m \)) of a mini-batch of size \( m \) are matched to a target distribution (\( \mathbb{E}[z_1, \ldots, z_m] \sim \phi \in \mathbb{R}^c \)), e.g., uniform distribution. We have selected a uniform distribution because it is an ideal case to have a balanced dataset for any classifier. However, it is possible to extend to different distribution. We introduce a new parameter that is a weighting vector \( \gamma \in \mathbb{R}^c \). The intuition behind it is to guide the network based on a set of generated pseudo-classes \( c \). These pseudo-classes can be seen as cluster indexes that group similar elements. With this in mind, we also propose an EM-training rule for learning the unknown class given a desired target distribution. We want to point out that the pseudo-classes are internal representations of the network that are independent of the labels. The E-step obtains the current statistical distribution given the output vectors (\( z_1, \ldots, z_m \)) and the weighting vector (\( \gamma \)). In this case, an approximation of the distribution is obtained by the following equation \[ \hat{z} = \frac{1}{M} \sum_{i=1}^M power(z_i, \gamma) \] where \( \gamma \) is the weighting vector, \( z_i \) is the output vector of the network, \( M \) is the number of elements, and the function \( power \)1 is the element-wise power operation between the output vector and the weighting vector. We have used the power function because the output vectors (\( z_1, \ldots, z_m \)) are quite similar between them at the initial state of the network, and the power function provides an initial boost for learning to separate the input samples in different pseudo-classes in the first iterations. Moreover, we can retrieve the pseudo-classes by the maximum value of the following equation \[ c^* = arg\ max_c\ power(z_i, \gamma) \] where \( c^* \) is the pseudo-class, which are used in the M-step for updating the MLP weights. 
Also, note that the pseudo-classes are not updated in an online manner. Instead, the pseudo-classes are updated after a certain number of iterations. The reason is the network requires a number of iterations to learn the common features. The M-step updates the weighting vector \( \gamma \) given the current distribution \( \hat{z} \). Also, the MLP parameters (\( \theta \)) are updated given the current classification given by the pseudo-classes. The cost function is the variance between the distribution and the desired statistical distribution, which is defined by \[ cost = (\hat{z} - \phi)^2 \] 1We decide to use power function instead of \( z_i^\gamma \) in order to simplify the index notation Figure 2: The proposed training rule applied to a single MLP. E-steps generates a set of pseudo-classes \( c_1, \ldots, c_m \) for each output in the mini-batch of size \( m \), and a probability approximation \( \hat{z} \) of the output vectors in the mini-batch. M-step updates the MLP weights given the pseudo-classes and the weighting vector \( \gamma \) giving the target statistical distribution \( \phi \). where \( \hat{z} \) is the current statistical distribution of the output vectors, and \( \phi \) is a vector that represent the desired statistical distribution, e.g. uniform distribution. Then, the weighting vector is updated via gradient descent \[ \gamma = \gamma - \alpha * \nabla_{\gamma} cost \] where \( \alpha \) is the learning rate and \( \nabla_{\gamma} cost \) is the derivative w.r.t \( \gamma \). Also, the MLP weights are updated via the generated *pseudo-classes*, which are used as targets in the backpropagation step. In summary, we propose an EM-training rule for matching the network output vectors and a desired target statistical distribution. The *E-Step* generates *pseudo-classes* and finds an approximation of the current statistical distribution of the output vectors. The *M-Step* updates the MLP parameters and the weighting vector. With this in mind, we adapt the mentioned training rule for the classless association task. Figure 2 summarizes the presented EM training rule and its components. 2.2 CLASSLESS ASSOCIATION LEARNING Our second constraint is to classify both input samples as the same class and different from the other classes. Note that the *pseudo-class* (Equation 3) is used as identification for each input sample and it is not related to the semantic concept or labels. The presented classless association model is trained based on a statistical constraint. Formally, the input is represented by the pair \( x^{(1)} \in \mathbb{R}^{n_1} \) and \( x^{(2)} \in \mathbb{R}^{n_2} \) where \( x^{(1)} \) and \( x^{(2)} \) are two different instances of the same *unknown* label. The classless association model has two parallel Multilayer Perceptron \( MLP^{(1)} \) and \( MLP^{(2)} \) with training rule that follows an EM-approach (*cf.* Section 2.1). Moreover, input samples are divided into several mini-batches of size \( m \). Initially, all input samples have random *pseudo-classes* \( c_i^{(1)} \) and \( c_i^{(2)} \). The pseudo-classes have the same desired statistical distribution \( \phi \). Also, the weighting vectors \( \gamma^{(1)} \) and \( \gamma^{(2)} \) are initialized to one. Then each input element from the mini-batch is propagated forward to each MLP. Afterwards, an estimation of the statistical distribution for each MLP (\( \hat{z}^{(1)} \) and \( \hat{z}^{(2)} \)) is obtained. 
Furthermore, a new set of *pseudo-classes* (\( c_1^{(1)}, \ldots, c_m^{(1)} \) and \( c_1^{(2)}, \ldots, c_m^{(2)} \)) is obtained for each network. Note that this first part can be seen as an *E-step* from Section 2.1. We want to point out that the pseudo-classes are updated only after a number of iterations. The second part of our association training updates the MLP parameters and the *weighting vector* (\( \gamma^{(1)} \) and \( \gamma^{(2)} \)). In this step, one network (\( MLP^{(1)} \)) uses pseudo-classes (\( c_1^{(2)}, \ldots, c_m^{(2)} \)) obtained from the other network (\( MLP^{(2)} \)), and vice versa. In addition, the weighting vector is updated Figure 3: Overview of the presented model for classless association of two input samples that represent the same unknown classes. The association relies on matching the network output and a statistical distribution. Also, it can be observed that our model uses the pseudo-classes obtained by \( MLP^{(1)} \) as targets of \( MLP^{(2)} \), and vice versa. between the output approximation (\( \hat{z}^{(1)} \) and \( \hat{z}^{(2)} \)) and the desired target distribution (\( \phi \)). Figure 3 shows an overview of the presented model. 3 EXPERIMENTS In this paper, we are interested in a simplified scenario inspired by the Symbol Grounding Problem and the association learning between sensory input signal in infants. We evaluated our model in four *classless* datasets that are generated from MNIST (Lecun & Cortes 2010). The procedure of generating *classless datasets from labeled datasets* have been already applied in (Sutskever et al. 2015) (Hsu & Kira 2015). Each dataset has two disjoint sets *input 1* and *input 2*. The first dataset (*MNIST*) has two different instances of the same digit. The second dataset (*Rotated-90 MNIST*) has two different instances of the same digit, and all input samples in *input 2* are rotated 90 degrees. The third dataset (*Inverted MNIST*) follows a similar procedures as the second dataset, but the transformation of the elements in *input 2* is the invert function instead of rotation. The last dataset (*Random Rotated MNIST*) is more challenging because all elements in *input 2* are randomly rotated between 0 and \( 2\pi \). All datasets have a uniform distribution between the digits and the dataset size is 21,000 samples for training and 4,000 samples for validation and testing. The following parameters turned out being optimal on the validation set. For the first three datasets, each internal MLP relies on two fully connected layers of 200 and 100 neurons respectively. The learning rate for the MLPs was set to start at 1.0 and was continuously decaying by half after every 1,000 iterations. We set the initial *weighting vector* to 1.0 and updated after every 1,000 iterations as well. Moreover, the best parameters for the fourth dataset were the same for \( MLP^{(1)} \) and different for \( MLP^{(2)} \), which has two fully connected layers of 400 and 150 neurons respectively and the learning rate stars at 1.2. The target distribution \( \phi \) is uniform for all datasets. The decay of the learning rate (Equation 5) for the *weighting vector* was given by \( 1/(100 + epoch)^{0.3} \), where *epoch* was the number of training iterations so far. The mini-batch size \( M \) is 5,250 sample pairs (corresponding to 25% of the training set) and the mean of the derivatives for each mini-batch is used for the back-propagation step of MLPs. Note that the mini-batch is quite big comparing with common setups. 
<table> <tr> <th></th> <th>Purity \( MLP^{(1)} \) (%)</th> <th>Purity \( MLP^{(2)} \) (%)</th> </tr> <tr> <td><b>Initial State</b></td> <td>10.9</td> <td>10.9</td> </tr> <tr> <td><b>Epoch 1,000</b></td> <td>24.8</td> <td>22.6</td> </tr> <tr> <td><b>Epoch 3,000</b></td> <td>64.4</td> <td>65.8</td> </tr> <tr> <td><b>Epoch 49,000</b></td> <td>95.5</td> <td>95.6</td> </tr> </table> Figure 4: Example of the presented model during classless training. In this example, there are ten pseudo-classes, represented by the rows of \( MLP^{(1)} \) and \( MLP^{(2)} \). Note that the output classifications shown are randomly selected (not cherry-picked). Initially, the pseudo-classes are assigned randomly to all input pair samples, which yields a uniform distribution (first row). Then, the classless association model slowly starts learning the features and grouping similar input samples. Afterwards, the output classifications of both MLPs slowly come to agree during training, and the association matrix shows the relation between the occurrences of the pseudo-classes. To determine the baseline of our classless constraint, we compared our model against two cases: fully supervised and fully unsupervised. In the supervised case, we used the same MLP parameters and training for a fair comparison. In the unsupervised scenario, we applied k-means and agglomerative clustering to each set (input 1 and input 2) independently. The clustering implementations are provided by scikit-learn (Pedregosa et al., 2011). 4 RESULTS AND DISCUSSION In this work, we have generated ten different folds for each dataset and report the average results. We introduce the *Association Accuracy* for measuring association, defined by the following equation \[ Association\ Accuracy = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}(c_i^{(1)} = c_i^{(2)}) \] Table 1: Association Accuracy (%) and Purity (%) results. Our model is compared with the supervised scenario (class labels are provided) and with K-means and Hierarchical Agglomerative clustering (no class information).
<table> <tr> <th rowspan="2">Dataset</th> <th rowspan="2">Model</th> <th rowspan="2">Association Accuracy (%)</th> <th colspan="2">Purity (%)</th> </tr> <tr> <th>input 1</th> <th>input 2</th> </tr> <tr> <td rowspan="4">MNIST</td> <td>supervised association</td> <td>96.7 ± 0.3</td> <td>96.7 ± 0.2</td> <td>96.6 ± 0.3</td> </tr> <tr> <td><b>classless association</b></td> <td>87.4 ± 2.9</td> <td>87.1 ± 6.6</td> <td>87.0 ± 6.4</td> </tr> <tr> <td>K-means</td> <td>-</td> <td>63.9 ± 2.2</td> <td>62.5 ± 3.7</td> </tr> <tr> <td>Hierarchical Agglomerative</td> <td>-</td> <td>64.9 ± 4.7</td> <td>64.3 ± 5.5</td> </tr> <tr> <td rowspan="4">Rotated-90 MNIST</td> <td>supervised association</td> <td>93.2 ± 0.3</td> <td>96.4 ± 0.2</td> <td>96.6 ± 0.21</td> </tr> <tr> <td><b>classless association</b></td> <td>86.5 ± 2.5</td> <td>82.9 ± 4.5</td> <td>82.9 ± 4.3</td> </tr> <tr> <td>K-means</td> <td>-</td> <td>65.0 ± 2.8</td> <td>64.0 ± 3.6</td> </tr> <tr> <td>Hierarchical Agglomerative</td> <td>-</td> <td>65.4 ± 3.5</td> <td>64.1 ± 4.1</td> </tr> <tr> <td rowspan="4">Inverted MNIST</td> <td>supervised association</td> <td>93.2 ± 0.3</td> <td>96.5 ± 0.2</td> <td>96.5 ± 0.2</td> </tr> <tr> <td><b>classless association</b></td> <td>89.2 ± 2.4</td> <td>89.0 ± 6.8</td> <td>89.1 ± 6.8</td> </tr> <tr> <td>K-means</td> <td>-</td> <td>64.8 ± 2.0</td> <td>65.0 ± 2.5</td> </tr> <tr> <td>Hierarchical Agglomerative</td> <td>-</td> <td>64.8 ± 4.4</td> <td>64.4 ± 3.8</td> </tr> <tr> <td rowspan="4">Random Rotated MNIST</td> <td>supervised association</td> <td>88.0 ± 0.5</td> <td>96.5 ± 0.3</td> <td>90.9 ± 0.5</td> </tr> <tr> <td><b>classless association</b></td> <td>69.3 ± 2.2</td> <td>75.8 ± 7.3</td> <td>65.3 ± 5.0</td> </tr> <tr> <td>K-means</td> <td>-</td> <td>64.8 ± 2.6</td> <td>14.8 ± 0.4</td> </tr> <tr> <td>Hierarchical Agglomerative</td> <td>-</td> <td>65.9 ± 2.8</td> <td>15.2 ± 0.5</td> </tr> </table> where the *indicator function* is one if \( c_i^{(1)} = c_i^{(2)} \) and zero otherwise; \( c_i^{(1)} \) and \( c_i^{(2)} \) are the *pseudo-classes* for \( MLP^{(1)} \) and \( MLP^{(2)} \), respectively, and \( N \) is the number of elements. In addition, we also report the *Purity* of each set (*input 1* and *input 2*). *Purity* is defined by \[ Purity(\Omega, \mathcal{C}) = \frac{1}{N} \sum_{i=1}^{k} \max_j |c_i \cap gt_j| \] where \( \Omega = \{gt_1, gt_2, \ldots, gt_j\} \) is the set of ground-truth labels and \( \mathcal{C} = \{c_1, c_2, \ldots, c_k\} \) is the set of *pseudo-classes* in our model, or the set of cluster indexes for K-means or Hierarchical Agglomerative clustering, and \( N \) is the number of elements. Table 1 shows the *Association Accuracy* of our model compared with the supervised association task, and the *Purity* of our model compared with the two clustering algorithms. First, the supervised association model performs better than the presented model. This was expected because our task is more complex than the supervised scenario. However, we can infer from our results that the presented model performs well in the classless scenario relative to the supervised method. Second, our model not only learns the association between input samples but also groups similar elements under the same *pseudo-class*. Also, we evaluated the purity of our model and found that it reaches better results than both clustering methods on each set (*input 1* and *input 2*).
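Both metrics are straightforward to compute. The following is a small numpy sketch, assuming the pseudo-classes and ground-truth labels are given as non-negative integer arrays.

```python
# A sketch of the two evaluation metrics defined above.
import numpy as np

def association_accuracy(c1, c2):
    # Fraction of sample pairs whose two pseudo-classes agree.
    return np.mean(c1 == c2)

def purity(pseudo_classes, ground_truth):
    # For each pseudo-class, count the most frequent ground-truth label,
    # then normalize by the total number of elements N.
    total = 0
    for c in np.unique(pseudo_classes):
        labels = ground_truth[pseudo_classes == c]
        total += np.bincount(labels).max()
    return total / len(ground_truth)
```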
Figure 4 illustrates an example of the proposed learning rule. The first two columns (\( MLP^{(1)} \) and \( MLP^{(2)} \)) show the output classification (Equation 3), and each row represents a *pseudo-class*. We have randomly selected 15 output samples for each MLP (not cherry-picked). Initially, the *pseudo-classes* are randomly assigned for each MLP. As a result, the output classifications of both networks do not show any visible discriminative elements, and the initial purity is close to random choice (first row). After 1,000 epochs, the networks start learning some features in order to discriminate the input samples. Some digits are grouped together after 3,000 epochs. For example, the first row of \( MLP^{(2)} \) shows several digits *zero*, whereas \( MLP^{(1)} \) has not yet agreed on the same digit for that *pseudo-class*. In contrast, both MLPs have almost agreed on digit *one* in the fifth row. Finally, the association is learned using only the statistical distribution of the input samples, and each digit is represented by a *pseudo-class*. Figure 5: Example of the best and worst results among all folds and datasets. It can be observed that our model is able to learn to discriminate each digit (first row). However, the presented model has a limitation: two or more digits may be assigned to the same pseudo-class (last row of \( MLP^{(1)} \) and \( MLP^{(2)} \)). Figure 5 shows the best and worst results of our model in two cases. The first row is the best result on the MNIST dataset. Each row of \( MLP^{(1)} \) and \( MLP^{(2)} \) represents a pseudo-class, and it can be observed that all digits are grouped together. In addition, the association matrix shows a distribution per digit close to the desired uniform distribution, and the purity of each input is close to the supervised scenario. In contrast, the second row is our worst result, from the Random Rotated MNIST dataset. In this example, we can observe that some digits are still recognized by their own pseudo-class, for example, digits one and seven (first two rows). However, there are cases where two or more digits are recognized by the same pseudo-class. For example, the last row shows that nine and four are merged. Our model is still able to reach better results than the unsupervised scenario. 5 CONCLUSION In this paper, we have shown the feasibility of training a model that has two parallel MLPs under the following scenario: pairs of input samples that represent the same unknown classes. This scenario was motivated by the Symbol Grounding Problem and association learning between sensory input signals in infant development. We proposed a gradient-based model for solving the classless association task. Our model uses an EM-training rule that matches the network output against a statistical distribution and uses one network as the target of the other network, and vice versa. Our model reaches better performance than K-means and Hierarchical Agglomerative clustering. In addition, we compared the presented model against a supervised method. We find that the presented model reaches good results relative to the supervised method, despite two extra conditions in the unsupervised association: the data are unlabeled, and both networks must agree on the same pseudo-class.
We want to point out that our model was evaluated in an optimal case where the input samples are uniformly distributed and the number of classes is known. In future work, we will explore the performance of our model when the number of classes and the statistical distribution are unknown. One way is to change the number of pseudo-classes, which can be seen as changing the number of clusters \( k \) in k-means. With this in mind, we are planning a more exhaustive analysis of the learning behavior with deeper architectures. Moreover, we will work on how a small set of labeled classes affects the performance of our model (similar to semi-supervised learning). Furthermore, we are interested in replicating our findings in more complex scenarios, such as multimodal datasets like TVGraz (Khan et al., 2009) or Wikipedia featured articles (Rasiwasia et al., 2010). Finally, our work can be applied to further classless scenarios where data can be extracted simultaneously from different input sources. Also, transformation functions can be applied to input samples to create the association without classes. ACKNOWLEDGMENTS We would like to thank Damian Borth, Christian Schulze, Jörn Hees, Tushar Karayil, and Philipp Blandfort for helpful discussions.
reject
Reject
5.333333
50b9e056975574644b1a3f8676e4609819292e48
iclr
2,017
HYPERBAND: Bandit-based Configuration Evaluation for Hyperparameter Optimization Lisha Li*, Kevin Jamieson**, Giulia DeSalvo†, Afshin Rostamizadeh‡, and Ameet Talwalkar* *UCLA, **UC Berkeley, †NYU, and ‡Google {lishal,ameet}@cs.ucla.edu, kjamieson@berkeley.edu desalvo@cims.nyu.edu, rostami@google.com ABSTRACT Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian Optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation. We present HYPERBAND, a novel algorithm for hyperparameter optimization that is simple, flexible, and theoretically sound. HYPERBAND is a principled early-stopping method that adaptively allocates a pre-defined resource, e.g., iterations, data samples or number of features, to randomly sampled configurations. We compare HYPERBAND with popular Bayesian Optimization methods on several hyperparameter optimization problems. We observe that HYPERBAND can provide more than an order of magnitude speedups over competitors on a variety of neural network and kernel-based learning problems. 1 INTRODUCTION The task of hyperparameter optimization is becoming increasingly important as modern data analysis pipelines grow in complexity. The quality of a predictive model critically depends on its hyperparameter configuration, but it is poorly understood how these hyperparameters interact with each other to affect the quality of the resulting model. Consequently, practitioners often default to either hand-tuning or automated brute-force methods like random search and grid search. In an effort to develop more efficient search methods, the problem of hyperparameter optimization has recently been dominated by Bayesian optimization methods (Snoek et al., 2012; Hutter et al., 2011; Bergstra et al., 2011) that focus on optimizing hyperparameter configuration selection. These methods aim to identify good configurations more quickly than standard baselines like random search by selecting configurations in an adaptive manner; see Figure 1(a). Existing empirical evidence suggests that these methods outperform random search (Thornton et al., 2013; Eggensperger et al., 2013; Snoek et al., 2015). However, these methods tackle a fundamentally challenging problem of simultaneously fitting and optimizing a high-dimensional, non-convex function with unknown smoothness, and possibly noisy evaluations. To overcome these difficulties, some Bayesian optimization methods resort to heuristics, at the expense of consistency guarantees, to model the objective function or speed up resource intensive subroutines. Moreover, these adaptive configuration selection methods are intrinsically sequential and thus difficult to parallelize. An orthogonal approach to hyperparameter optimization focuses on speeding up configuration evaluation; see Figure 1(b). These methods are adaptive in computation, allocating more resources to promising hyperparameter configurations while quickly eliminating poor ones. Resources can take various forms, including size of training set, number of features, or number of iterations for iterative algorithms. By adaptively allocating these resources, these methods aim to examine orders of magnitude more hyperparameter configurations than methods that uniformly train all configurations to completion, thereby quickly identifying good hyperparameters. 
While there are methods that combine Bayesian optimization with adaptive resource allocation (Swersky et al., 2013; 2014; Domhan et al., 2015), we focus on speeding up random search as it offers a simple, parallelizable, and theoretically principled launching point and is shown to outperform grid search (Bergstra & Bengio, 2012). \footnotetext{1. Consistency can be restored by allocating a fraction of resources to performing random search.} Figure 1: (a) The heatmap shows the validation error over a two dimensional search space, with red corresponding to areas with lower validation error, and putative configurations selected in a sequential manner as indicated by the numbers. (b) The plot shows the validation error as a function of the resources allocated to each configuration (i.e., each line in the plot). Configuration evaluation methods allocate more resources to promising configurations. (c) The validation loss as a function of total resources allocated for two configurations. The shaded areas bound the maximum distance from the terminal validation loss and monotonically decrease with the resource. Our novel configuration evaluation method, HYPERBAND, relies on a principled early-stopping strategy to allocate resources, allowing it to evaluate orders of magnitude more configurations than uniform allocation strategies. HYPERBAND is a general-purpose technique that makes minimal assumptions, unlike prior configuration evaluation approaches (Swersky et al., 2013; Domhan et al., 2015; Swersky et al., 2014; György & Kocsis, 2011; Agarwal et al., 2011). In this work, we describe HYPERBAND, provide intuition for the algorithm through a detailed example, and present a wide range of empirical results comparing HYPERBAND with well established competitors. We also briefly describe the theoretical underpinnings of HYPERBAND; however, a thorough theoretical treatment is beyond the scope of this paper and is deferred to Li et al. (2016). 2 RELATED WORK Bayesian optimization techniques model the conditional probability \( p(f|\lambda) \) of a configuration's performance on a metric \( f \) given a set of hyperparameters \( \lambda \). For instance, SMAC uses random forests to model \( p(f|\lambda) \) as a Gaussian distribution (Hutter et al., 2011). TPE is a non-standard Bayesian optimization algorithm based on tree-structured Parzen density estimators (Bergstra et al., 2011). A third popular method, Spearmint, uses Gaussian processes (GP) to model \( p(f|\lambda) \) and performs slice sampling over the GP's hyperparameters (Snoek et al., 2012). Adaptive configuration evaluation is not a new idea. Maron & Moore (1993) considered a setting where training time is negligible (e.g., k-nearest-neighbor classification) and evaluation on a large validation set is accelerated by evaluating on an increasing subset of the validation set, stopping early on configurations that are performing poorly. Since subsets of the validation set provide unbiased estimates of its expected performance, this is an instance of the stochastic best-arm identification problem for multi-armed bandits (see Jamieson & Nowak (2014) for a brief survey). In contrast, this paper assumes that evaluation time is negligible and the goal is to early-stop long-running training procedures by evaluating partially trained models on the validation set. Previous approaches either require strong assumptions or use heuristics to perform adaptive resource allocation.
Several works propose methods that make strong assumptions on the convergence behavior of training algorithms, providing theoretical performance guarantees under these assumptions (György & Kocsis, 2011; Agarwal et al., 2011; Swersky et al., 2013; 2014; Domhan et al., 2015; Sabharwal et al., 2016). Unfortunately, these assumptions are often hard to verify, and empirical performance can drastically suffer when they are violated. One recent work of particular interest proposes a heuristic based on sequential analysis to determine stopping times for training configurations on increasing subsets of the data (Krueger et al., 2015). However, it has a few shortcomings: (1) it is designed to speed up multi-fold cross-validation and is not significantly faster than standard holdout, (2) it is not an anytime algorithm and requires the set of configurations to be evaluated as an input, and (3) the theoretical correctness and empirical performance of this method are highly dependent on a user-defined "safety-zone."\footnote{The first two drawbacks prevent a full comparison to HYPERBAND on our selected empirical tasks; however, for completeness, we provide a comparison in Appendix A to Krueger et al. (2015) on some experimental tasks replicated from their paper.} Lastly, in an effort to avoid heuristics and strong assumptions, Sparks et al. (2015) proposed a halving style algorithm that did not require explicit convergence behavior, and Jamieson & Talwalkar (2015) analyzed a similar algorithm, providing theoretical guarantees and encouraging empirical results. Unfortunately, these halving style algorithms suffer from the \( n \) vs \( B/n \) issue which we will discuss in Section 3. Finally, Klein et al. (2016) recently introduced Fabolas, a Bayesian optimization method that combines adaptive selection and evaluation. Similar to Swersky et al. (2013; 2014), it models the conditional validation error as a Gaussian process using a kernel that captures the covariance with downsampling rate to allow for adaptive evaluation. While we intended to compare HYPERBAND with Fabolas, we encountered some technical difficulties when using the package\footnote{The package provided by Klein et al. (2016) is available at https://github.com/automl/RoBO.} and are working with the authors of Klein et al. (2016) to resolve the issues. 3 HYPERBAND ALGORITHM HYPERBAND extends the SUCCESSIVEHALVING algorithm proposed for hyperparameter optimization in Jamieson & Talwalkar (2015) and calls it as a subroutine. The idea behind SUCCESSIVEHALVING follows directly from its name: uniformly allocate a budget to a set of hyperparameter configurations, evaluate the performance of all configurations, throw out the worst half, and repeat until one configuration remains. The algorithm allocates exponentially more resources to more promising configurations. Unfortunately, SUCCESSIVEHALVING requires the number of configurations \( n \) as an input to the algorithm. Given some finite time budget \( B \) (e.g. an hour of training time to choose a hyperparameter configuration), \( B/n \) resources are allocated on average across the configurations. However, for a fixed \( B \), it is not clear a priori whether we should (a) consider many configurations (large \( n \)) with a small average training time; or (b) consider a small number of configurations (small \( n \)) with longer average training times. We use a simple example to better understand this tradeoff. Figure 1(c) shows the validation loss as a function of total resources allocated for two configurations with terminal validation losses \( \nu_1 \) and \( \nu_2 \).
The shaded areas bound the maximum deviation from the terminal validation loss and will be referred to as "envelope" functions. It is possible to differentiate between the two configurations when the envelopes diverge. Simple arithmetic shows that this happens when the width of the envelopes is less than \( \nu_2 - \nu_1 \), i.e. when the intermediate losses are guaranteed to be less than \( \frac{\nu_2 - \nu_1}{2} \) away from the terminal losses. There are two takeaways from this observation: more resources are needed to differentiate between the two configurations when either (1) the envelope functions are wider or (2) the terminal losses are closer together. However, in practice, the optimal allocation strategy is unknown because we do not have knowledge of the envelope functions nor the distribution of terminal losses. Hence, if more resources are required before configurations can differentiate themselves in terms of quality (e.g., if an iterative training method converges very slowly for a given dataset or if randomly selected hyperparameter configurations perform similarly well) then it would be reasonable to work with a small number of configurations. In contrast, if the quality of a configuration is typically revealed using minimal resources (e.g., if iterative training methods converge very quickly for a given dataset or if randomly selected hyperparameter configurations are of low-quality with high probability) then \( n \) is the bottleneck and we should choose \( n \) to be large. 3.1 HYPERBAND HYPERBAND, shown in Algorithm 1, addresses this "\( n \) versus \( B/n \)" problem by considering several possible values of \( n \) for a fixed \( B \), in essence performing a grid search over feasible values of \( n \). Associated with each value of \( n \) is a minimum resource \( r \) that is allocated to all configurations before some are discarded; a larger value of \( n \) corresponds to a smaller \( r \) and hence more aggressive early stopping. There are two components to HYPERBAND: (1) the inner loop invokes SUCCESSIVEHALVING for fixed values of \( n \) and \( r \) (lines 3-9) and (2) the outer loop, which iterates over different values of \( n \) and \( r \) (lines 1-2). We will refer to each such run of SUCCESSIVEHALVING within HYPERBAND as a "bracket." Each bracket is designed to use about \( B \) total resources and corresponds to a different tradeoff between \( n \) and \( B/n \). A single execution of HYPERBAND takes a finite number of iterations, and in practice can be repeated indefinitely. HYPERBAND requires two inputs: (1) \( R \), the maximum amount of resource that can be allocated to a single configuration, and (2) \( \eta \), an input that controls the proportion of configurations discarded in each round of SUCCESSIVEHALVING. The two inputs dictate how many different brackets are considered; specifically, \( s_{\max} + 1 \) different values for \( n \) are considered with \( s_{\max} = \lfloor \log_\eta(R) \rfloor \).
HYPERBAND begins with the most aggressive bracket \( s = s_{\max} \), which sets \( n \) to maximize exploration, subject to the constraint that at least one configuration is allocated \( R \) resources. Each subsequent bracket reduces \( n \) by a factor of approximately \( \eta \) until the final bracket, \( s = 0 \), in which every configuration is allocated \( R \) resources (this bracket simply performs classical random search). Hence, HYPERBAND performs a geometric search in the average budget per configuration to address the "\( n \) versus \( B/n \)" problem, at the cost of approximately \( s_{\max} + 1 \) times more work than running SUCCESSIVEHALVING for a fixed \( n \). By doing so, HYPERBAND is able to exploit situations in which adaptive allocation works well, while protecting itself in situations where more conservative allocations are required. <table> <tr> <th colspan="2">Algorithm 1: HYPERBAND algorithm for hyperparameter optimization.</th> </tr> <tr> <td colspan="2"> <b>input</b> : \( R, \eta \) (default \( \eta = 3 \))<br> <b>initialization</b> : \( s_{\max} = \lfloor \log_\eta(R) \rfloor, B = (s_{\max} + 1)R \)<br> <b>for</b> \( s \in \{s_{\max}, s_{\max} - 1, \ldots, 0\}\) <b>do</b><br> \( n = \lceil \frac{B}{R} \frac{\eta^s}{s+1} \rceil, \quad r = R\eta^{-s} \)<br> // begin SUCCESSIVEHALVING with (\( n, r \)) inner loop<br> \( T = \text{get\_hyperparameter\_configuration}(n) \)<br> <b>for</b> \( i \in \{0, \ldots, s\} \) <b>do</b><br> \( n_i = \lfloor n\eta^{-i} \rfloor \)<br> \( r_i = r\eta^i \)<br> \( L = \{\text{run\_then\_return\_val\_loss}(t, r_i) : t \in T\} \)<br> \( T = \text{top\_k}(T, L, \lfloor n_i/\eta \rfloor) \)<br> <b>end</b><br> <b>end</b><br> <b>return</b> Configuration with the smallest intermediate loss seen so far. </td> </tr> </table> \( R \) represents the maximum amount of resources that can be allocated to any given configuration. In most cases, there is a natural upper bound on the maximum budget per configuration that is often dictated by the resource type (e.g., training set size for dataset downsampling; limitations based on memory constraint for feature downsampling; rule of thumb regarding number of epochs when iteratively training neural networks). \( R \) is also the number of configurations evaluated in the bracket that performs the most exploration, i.e., \( s = s_{\max} \). In practice one may want \( n \leq n_{\max} \) to limit overhead associated with training many configurations on a small budget, i.e., costs associated with initialization, loading a model, and validation. In this case, set \( s_{\max} = \lfloor \log_\eta(n_{\max}) \rfloor \). The value of \( \eta \) can be viewed as a knob that can be tuned based on practical user constraints. Larger values of \( \eta \) correspond to a more aggressive elimination schedule and thus fewer rounds of elimination; specifically, each round retains a \( 1/\eta \) fraction of the configurations, for a total of \( \lfloor \log_\eta(n) \rfloor + 1 \) rounds of elimination with \( n \) configurations. If one wishes to receive a result faster at the cost of a sub-optimal asymptotic constant, one can increase \( \eta \) to reduce the budget per bracket \( B = (\lfloor \log_\eta(R) \rfloor + 1)R \). We stress that results are not very sensitive to the choice of \( \eta \). In practice we suggest taking \( \eta \) to be equal to 3 or 4. HYPERBAND requires the following methods to be defined for any given learning problem: get_hyperparameter_configuration(\( n \)) returns a set of \( n \) i.i.d. samples from some distribution defined over the hyperparameter configuration space; run_then_return_val_loss(\( t, r \)) takes a hyperparameter configuration (\( t \)) and resource allocation (\( r \)) as input and returns the validation loss after training for the allocated resources; and top_k(configs, losses, \( k \)) takes a set of configurations as well as their associated losses and returns the top \( k \) performing configurations.
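For concreteness, the following is a minimal Python sketch of Algorithm 1 expressed in terms of these methods. It is an illustrative reading of the pseudocode, not the authors' implementation; the guard that keeps at least one configuration per round is an added assumption.

```python
# A minimal sketch of Algorithm 1 (HYPERBAND), assuming the user supplies
# get_hyperparameter_configuration(n) and run_then_return_val_loss(t, r).
import math

def hyperband(get_hyperparameter_configuration, run_then_return_val_loss,
              R, eta=3):
    s_max = int(math.floor(math.log(R) / math.log(eta) + 1e-9))
    B = (s_max + 1) * R                          # budget per bracket
    best_config, best_loss = None, float("inf")
    for s in range(s_max, -1, -1):               # outer loop over brackets
        n = int(math.ceil((B / R) * eta ** s / (s + 1)))
        r = R * eta ** (-s)
        # begin SUCCESSIVEHALVING with (n, r)
        T = get_hyperparameter_configuration(n)
        for i in range(s + 1):
            n_i = int(math.floor(n * eta ** (-i)))
            r_i = r * eta ** i
            L = [run_then_return_val_loss(t, r_i) for t in T]
            for t, loss in zip(T, L):            # track smallest loss seen
                if loss < best_loss:
                    best_config, best_loss = t, loss
            k = max(int(math.floor(n_i / eta)), 1)  # top_k; keep at least one
            order = sorted(range(len(T)), key=lambda j: L[j])
            T = [T[j] for j in order[:k]]
    return best_config
```

With \( R = 81 \) and \( \eta = 3 \), this sketch runs \( s_{\max} + 1 = 5 \) brackets, from the most aggressive (\( s = 4 \)) down to plain random search (\( s = 0 \)).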
3.2 Example application with iterations as a resource: LeNet We next present a simple example to provide intuition. We work with the MNIST dataset and optimize hyperparameters for the LeNet convolutional neural network trained using mini-batch SGD. Our search space includes the learning rate, batch size, and number of kernels for the two layers of the network as hyperparameters (details are shown in Table 3 in Appendix A). We further define the number of iterations as the resource to allocate, with one unit of resource corresponding to one epoch, or a full pass over the dataset. We set \( R \) to 81 and use the default value of \( \eta = 3 \), resulting in \( s_{\max} = 4 \) and thus 5 brackets of SUCCESSIVEHALVING with different tradeoffs between \( n \) and \( B/n \). The resources allocated within each bracket are displayed in Table 1. <table> <tr> <th rowspan="2">i</th> <th colspan="2">s = 4</th> <th colspan="2">s = 3</th> <th colspan="2">s = 2</th> <th colspan="2">s = 1</th> <th colspan="2">s = 0</th> </tr> <tr> <th>n<sub>i</sub></th> <th>r<sub>i</sub></th> <th>n<sub>i</sub></th> <th>r<sub>i</sub></th> <th>n<sub>i</sub></th> <th>r<sub>i</sub></th> <th>n<sub>i</sub></th> <th>r<sub>i</sub></th> <th>n<sub>i</sub></th> <th>r<sub>i</sub></th> </tr> <tr> <td>0</td> <td>81</td> <td>1</td> <td>27</td> <td>3</td> <td>9</td> <td>9</td> <td>6</td> <td>27</td> <td>5</td> <td>81</td> </tr> <tr> <td>1</td> <td>27</td> <td>3</td> <td>9</td> <td>9</td> <td>3</td> <td>27</td> <td>2</td> <td>81</td> <td></td> <td></td> </tr> <tr> <td>2</td> <td>9</td> <td>9</td> <td>3</td> <td>27</td> <td>1</td> <td>81</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>3</td> <td>3</td> <td>27</td> <td>1</td> <td>81</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>4</td> <td>1</td> <td>81</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </table> Table 1: Values of \( n_i \) and \( r_i \) for the brackets of HYPERBAND when \( R = 81 \) and \( \eta = 3 \). Figure 2: Performance of individual brackets \( s \) and HYPERBAND. Figure 2 compares the empirical performance of the different brackets of HYPERBAND if they were used separately, as well as standard HYPERBAND (all results are averaged over 70 trials). In practice we do not know a priori which bracket \( s \in \{0, \ldots, 4\} \) will be most effective, and in this case neither the most (\( s = 4 \)) nor least aggressive (\( s = 0 \)) setting is optimal. However, note that HYPERBAND does nearly as well as the optimal bracket (\( s = 3 \)) and vastly outperforms the baseline uniform allocation (i.e. random search), which is equivalent to bracket \( s = 0 \). 3.3 Overview of theoretical results Although a detailed theoretical analysis is beyond the scope of this paper, we provide an intuitive, high-level description of the theoretical properties of HYPERBAND. Suppose there are \( n \) configurations, each with a given terminal validation error \( \nu_i \) for \( i = 1, \ldots, n \).
Without loss of generality, index the configurations by performance so that \( \nu_1 \) corresponds to the best performing configuration, \( \nu_2 \) to the second best, and so on. Now consider the task of identifying the best configuration. The optimal strategy would allocate to each configuration \( i \) the minimum resource required to distinguish it from \( \nu_1 \), i.e., enough so that the envelope functions depicted in Figure 1(c) bound the intermediate loss to be less than \( \frac{\nu_i - \nu_1}{2} \) away from the terminal value. As shown in Jamieson & Talwalkar (2015) and Li et al. (2016), the budget required by SUCCESSIVEHALVING is in fact only a small factor away from this optimal approach because it capitalizes on configurations that are easy to distinguish from \( \nu_1 \). In contrast, the naive uniform allocation strategy, which allocates \( B/n \) to each configuration, has to allocate to every configuration the resource required to distinguish \( \nu_2 \) from \( \nu_1 \). The relative size of the budget required for uniform allocation and SUCCESSIVEHALVING depends on the envelope functions bounding deviation from terminal losses as well as the distribution from which \( \nu_i \)'s are drawn. The budget required for SUCCESSIVEHALVING is smaller when the optimal \( n \) versus \( B/n \) tradeoff requires fewer resources per configuration. Hence, if the envelope functions tighten quickly as a function of resource allocated, or the average distance between terminal losses is large, then SUCCESSIVEHALVING can be substantially faster than uniform allocation. Of course we do not have knowledge of either function in practice, so we will hedge our aggressiveness with HYPERBAND. Remarkably, despite having no knowledge of the envelope functions or the distribution of \( \nu_i \)'s, HYPERBAND requires a budget that is only log factors larger than the optimal for SUCCESSIVEHALVING. See Li et al. (2016) for details. Figure 3: Average test error across 10 trials is shown in all plots. Label "SMAC_early" corresponds to SMAC with the early stopping criterion proposed in (Domhan et al., 2015) and label "bracket \( s = 4 \)" corresponds to repeating the most exploratory bracket of HYPERBAND. 4 HYPERPARAMETER OPTIMIZATION EXPERIMENTS In this section, we evaluate the empirical behavior of HYPERBAND with iterations, data subsamples, and features as resources. For all experiments, we compare HYPERBAND with three well known Bayesian optimization algorithms: SMAC, TPE, and Spearmint. Additionally, we show results for SUCCESSIVEHALVING corresponding to repeating the most exploratory bracket of HYPERBAND. Finally, for all experiments, we benchmark against standard random search and random_2×, which is a variant of random search with twice the budget of other methods. 4.1 EARLY STOPPING ITERATIVE ALGORITHMS FOR NEURAL NETWORKS We study a convolutional neural network with the same architecture as that used in Snoek et al. (2012) and Domhan et al. (2015) from cuda-convnet. The search spaces used in the two previous works differ, and we used a search space similar to that of Snoek et al. (2012) with 6 hyperparameters for stochastic gradient descent and 2 hyperparameters for the response normalization layers. In line with the two previous works, we used a batch size of 100 for all experiments. For these experiments, we also compare against a variant of SMAC named SMAC_early that uses the early termination criterion proposed in Domhan et al. (2015) for neural networks.
We view SMAC with early stopping to be a combination of adaptive configuration selection and configuration evaluation. See Appendix A for more details about the experimental setup. Datasets: We considered three image classification datasets: CIFAR-10 (Krizhevsky, 2009), rotated MNIST with background images (MRBI) (Larochelle et al., 2007), and Street View House Numbers (SVHN) (Netzer et al., 2011). CIFAR-10 and SVHN contain \(32 \times 32\) RGB images while MRBI contains \(28 \times 28\) grayscale images. The splits used for each dataset are as follows: (1) CIFAR-10 has 40k, 10k, and 10k instances; (2) MRBI has 10k, 2k, and 50k instances; and (3) SVHN has close to 600k, 6k, and 26k instances for training, validation, and test respectively. For all datasets, the only preprocessing performed on the raw images was demeaning. HYPERBAND Configuration: For these experiments, one unit of resource corresponds to 100 mini-batch iterations. For CIFAR-10 and MRBI, \(R\) is set to 300 (or 30k total iterations). For SVHN, \(R\) is set to 600 (or 60k total iterations) to accommodate the larger training set. \(\eta\) was set to 4 for all experiments, resulting in 5 SUCCESSIVEHALVING brackets for HYPERBAND. Results: Ten independent trials were performed for each searcher. For CIFAR-10, the results in Figure 3(a) show that HYPERBAND is more than an order of magnitude faster than its competitors. In Figure 6 of Appendix A, we extend the x-axis for CIFAR-10 out to 100\(R\). The results show that Bayesian optimization methods ultimately converge to similar errors as HYPERBAND. For MRBI, HYPERBAND is more than an order of magnitude faster than standard configuration selection approaches and 5× faster than SMAC with early stopping. For SVHN, while HYPERBAND finds a good configuration faster, Bayesian optimization methods are competitive and SMAC with early stopping outperforms HYPERBAND. This result demonstrates that there is merit to incorporating early stopping with configuration selection approaches. Across the three datasets, HYPERBAND and SMAC_early are the only two methods that consistently outperform random_2×. On these datasets, HYPERBAND is over 20× faster than random search while SMAC_early is \( \leq 7\times \) faster than random search within the evaluation window. In fact, the first result returned by HYPERBAND after using a budget of 5\( R \) is often competitive with results returned by other searchers after using 50\( R \). Additionally, HYPERBAND is less variable than other searchers across trials, which is highly desirable in practice (see Appendix A for plots with error bars). For computationally expensive problems in high dimensional search spaces, it may make sense to just repeat the most exploratory brackets. Similarly, if meta-data is available about a problem or it is known that the quality of a configuration is evident after allocating a small amount of resource, then one should just repeat the most exploratory bracket. Indeed, for these experiments, repeating the most exploratory bracket of HYPERBAND outperforms cycling through all the brackets. In fact, bracket \( s = 4 \) vastly outperforms all other methods on CIFAR-10 and MRBI and is nearly tied with SMAC_early for first on SVHN. Finally, CIFAR-10 is a very popular dataset and state-of-the-art models achieve much better accuracy than what is shown in Figure 3. The difference in performance is mainly attributable to higher model complexities and data manipulation (i.e.
using reflection or random cropping to artificially increase the dataset size). If we limit the comparison to published results that use the same architecture and exclude data manipulation, the best human expert result for the dataset is 18% error, and the hyperparameter optimized results are 15.0% for Snoek et al. (2012)\footnote{We were unable to reproduce this result even after receiving the optimal hyperparameters from the authors through a personal communication.} and 17.2% for Domhan et al. (2015). These results are better than our results on CIFAR-10 because they use 25% more data by including the validation set and also train for more epochs. The best model found by HYPERBAND achieved a test error of 17.0% when trained on the combined training and validation data for 300 epochs. 4.2 DATA DOWNSAMPLING KERNEL REGULARIZED LEAST SQUARES CLASSIFICATION In this experiment, we use HYPERBAND with data samples as the resource to optimize the hyperparameters of a kernel-based classification task on CIFAR-10. We use the multi-class regularized least squares classification model, which is known to have comparable performance to SVMs (Rifkin & Klautau, 2004; Agarwal et al., 2014) but can be trained significantly faster. The hyperparameters considered in the search space include the preprocessing method, regularization, kernel type, kernel length scale, and other kernel specific hyperparameters (see Appendix A for more details). HYPERBAND is run with \( \eta = 4 \) and \( R = 400 \), with each unit of resource representing 100 datapoints. Similar to previous experiments, these inputs result in a total of 5 brackets. Each hyperparameter optimization algorithm is run for ten trials on Amazon EC2 m4.2xlarge instances; for a given trial, HYPERBAND is allowed to run for two outer loops, bracket \( s = 4 \) is repeated 10 times, and all other searchers are run for 12 hours. Figure 4 shows that HYPERBAND returns a good configuration after just the first SUCCESSIVEHALVING bracket in approximately 20 minutes; other searchers fail to reach this error rate on average even after the entire 12 hours. Notably, HYPERBAND was able to evaluate over 250 configurations in this first bracket of SUCCESSIVEHALVING, while competitors were able to evaluate only three configurations in the same amount of time. Consequently, HYPERBAND is over 30× faster than Bayesian optimization methods and 70× faster than random search. Bracket \( s = 4 \) slightly outperforms HYPERBAND but the terminal performance of the two algorithms is the same. Random_2× is competitive with SMAC and TPE. 4.3 FEATURE SUBSAMPLING TO SPEED UP APPROXIMATE KERNEL CLASSIFICATION We next demonstrate the performance of HYPERBAND when using features as a resource, focusing on random feature approximations for kernel methods. Features are randomly generated using the method described in Rahimi & Recht (2007) to approximate the RBF kernel, and these random features are then used as inputs to a ridge regression classifier. We consider hyperparameters of a random feature kernel approximation classifier trained on CIFAR-10, including the preprocessing method, kernel length scale, and \( l_2 \) penalty. We impose an upper bound of 100k random features for the kernel approximation so that the data will comfortably fit into a machine with 60GB of memory. Figure 4: Average test error of the best kernel regularized least square classification model found by each searcher on CIFAR-10. The color coded dashed lines indicate when the last trial of a given searcher finished.
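As a sketch of the feature generation step used in this experiment, the following shows the random Fourier feature construction of Rahimi & Recht (2007) for approximating the RBF kernel; the length scale parameter sigma is a stand-in for the searched kernel length scale hyperparameter.

```python
# A minimal sketch of random Fourier features for the RBF kernel
# (Rahimi & Recht, 2007); the resulting features feed a ridge classifier.
import numpy as np

def random_fourier_features(X, num_features, sigma, seed=0):
    rng = np.random.RandomState(seed)
    d = X.shape[1]
    # Frequencies sampled from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=1.0 / sigma, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)
```

Under this construction, allocating \( r_i \) units of the feature resource corresponds to generating \( 100 \cdot r_i \) such features.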
Additionally, we set one unit of resource to be 100 features, for an \( R = 1000 \), which gives 5 different brackets with \( \eta = 4 \). Each searcher is run for 10 trials, with each trial lasting 12 hours on an n1-standard-16 machine from Google Cloud Compute. The results in Figure 5 show that HYPERBAND is around 6× faster than Bayesian methods and random search. HYPERBAND performs similarly to bracket \( s = 4 \). Random_2× outperforms the Bayesian optimization algorithms. Figure 5: Average test error of the best random features model found by each searcher on CIFAR-10. The test error for HYPERBAND and bracket \( s = 4 \) are calculated at every evaluation instead of at the end of a bracket. 4.4 EXPERIMENTAL DISCUSSION For a given \( R \), the most exploratory SUCCESSIVEHALVING round performed by HYPERBAND evaluates \( \eta^{\lfloor \log_\eta(R) \rfloor} \) configurations using a budget of \( (\lfloor \log_\eta(R) \rfloor + 1)R \), which gives an upper bound on the potential speedup over random search. If training time scales linearly with the resource, the maximum speedup offered by HYPERBAND compared to random search is \( \frac{\eta^{\lfloor \log_\eta(R) \rfloor}}{(\lfloor \log_\eta(R) \rfloor + 1)} \). For the values of \( \eta \) and \( R \) used in our experiments, the maximum speedup over random search is approximately 50× given linear training time. However, we observe a range of speedups from 6× to 70× faster than random search. The differences in realized speedup can be explained by two factors: (1) the scaling properties of total evaluation time as a function of the allocated resource and (2) the difficulty of finding a good configuration. If training time is superlinear as a function of the resource, then HYPERBAND can offer higher speedups. More generally, if training scales like a polynomial of degree \( p > 1 \), the maximum speedup of HYPERBAND over random search is approximately \( \frac{\eta^{p-1} - 1}{\eta^{p-1}} \eta^{\lfloor \log_\eta(R) \rfloor} \). Hence, higher speedups were observed for the kernel least squares classifier experiment discussed in Section 4.2 because the training time scaled quadratically as a function of the resource. If 10 randomly sampled configurations are sufficient to find a good hyperparameter setting, then the benefit of evaluating orders of magnitude more configurations is muted. Generally the difficulty of the problem scales with the dimension of the search space, since coverage diminishes with dimensionality. For low dimensional problems, the number of configurations evaluated by random search and Bayesian methods is exponential in the number of dimensions, so good coverage can be achieved; i.e., if \( d = 3 \) as in the feature subsampling experiment, then \( n = O(2^d) = 8 \). Hence, HYPERBAND is only 6× faster than random search on the feature subsampling experiment. For the neural network experiments, however, we hypothesize that faster speedups are observed for HYPERBAND because the dimension of the search space is higher. 5 FUTURE WORK We have introduced a novel bandit-based method for adaptive configuration evaluation with demonstrated competitive empirical performance. Future work involves exploring (i) embedding HYPERBAND into parallel and distributed computing environments; (ii) adjusting for training methods with different convergence rates; and (iii) combining HYPERBAND with non-random sampling methods. REFERENCES A. Agarwal, J. Duchi, P. L. Bartlett, and C. Levrard.
Oracle inequalities for computationally budgeted model selection. In COLT, 2011. A. Agarwal, S. Kakade, N. Karampatziakis, L. Song, and G. Valiant. Least squares revisited: Scalable approaches for multi-class prediction. In ICML, 2014. J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. JMLR, 2012. J. Bergstra et al. Algorithms for hyper-parameter optimization. In NIPS, 2011. T. Domhan, J. T. Springenberg, and F. Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In IJCAI, 2015. K. Eggensperger et al. Towards an empirical foundation for assessing bayesian optimization of hyperparameters. In NIPS Bayesian Optimization Workshop, 2013. A. György and L. Kocsis. Efficient multi-start strategies for local search algorithms. JAIR, 41, 2011. F. Hutter, H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Proc. of LION-5, 2011. K. Jamieson and R. Nowak. Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting. In Information Sciences and Systems (CISS), 2014 48th Annual Conference on, pp. 1–6. IEEE, 2014. K. Jamieson and A. Talwalkar. Non-stochastic best arm identification and hyperparameter optimization. In AISTATS, 2015. A. Klein, S. Falkner, S. Bartels, P. Hennig, and F. Hutter. Fast bayesian optimization of machine learning hyperparameters on large datasets. arXiv preprint arXiv:1605.07079, 2016. A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Department of Computer Science, University of Toronto, 2009. T. Krueger, D. Panknin, and M. Braun. Fast cross-validation via sequential testing. Journal of Machine Learning Research, 16:1103–1155, 2015. H. Larochelle et al. An empirical evaluation of deep architectures on problems with many factors of variation. In ICML, 2007. L. Li, K. Jamieson, G. DeSalvo, A. Rostamizadeh, and A. Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. arXiv:1603.06560, 2016. O. Maron and A. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In NIPS, 1993. Y. Netzer et al. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007. G. Rätsch, T. Onoda, and K.R. Müller. Soft margins for adaboost. Machine Learning, 42:287–320, 2001. R. Rifkin and A. Klautau. In defense of one-vs-all classification. JMLR, 2004. A. Sabharwal, H. Samulowitz, and G. Tesauro. Selecting near-optimal learners via incremental data allocation. In AAAI, 2016. P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house numbers digit classification. In ICPR, 2012. J. Snoek, H. Larochelle, and R. Adams. Practical bayesian optimization of machine learning algorithms. In NIPS, 2012. J. Snoek et al. Bayesian optimization using deep neural networks. In ICML, 2015. E. Sparks, A. Talwalkar, D. Haas, M. J. Franklin, M. I. Jordan, and T. Kraska. Automating model search for large scale machine learning. In Symposium on Cloud Computing, 2015. K. Swersky, J. Snoek, and R. Adams. Multi-task bayesian optimization. In NIPS, 2013. K. Swersky, J. Snoek, and R. P. Adams. Freeze-thaw bayesian optimization. arXiv:1406.3896, 2014. C. Thornton et al.
Auto-weka: Combined selection and hyperparameter optimization of classification algorithms. In KDD, 2013. A ADDITIONAL EXPERIMENTAL RESULTS In this section, we present a comparison of HYPERBAND with the CVST algorithm from Krueger et al. (2015) and provide additional details for the experiments presented in Sections 3 and 4. A.1 COMPARISON WITH CVST The CVST algorithm from Krueger et al. (2015) focuses on speeding up standard k-fold cross-validation. We did not include it as one of the competitors in Section 4 because the experiments we selected were too computationally expensive for multi-fold cross-validation and CVST is not an anytime algorithm. Nonetheless, the CVST algorithm is an interesting approach and was shown to have promising empirical performance in Krueger et al. (2015). Hence, we performed a small scale comparison between CVST and HYPERBAND modeled after their empirical studies. We replicated the classification experiments in Krueger et al. (2015) that train a support vector machine on the datasets from the IDA benchmark (Rätsch et al., 2001). All experiments were performed on Google Cloud Compute's n1-standard-1 instances. Following Krueger et al. (2015), we evaluated HYPERBAND and CVST on the same 2d grid of 610 hyperparameters and recorded the best test error and duration for 50 trials. The only modification we made to their original experimental setup was the data splits; instead of half for test and half for training, we used 1/11th for test and 10/11th for training. HYPERBAND performed holdout evaluation using 1/10th of the training data as the validation set. We set \( \eta = 3 \), and \( R \) was set for each dataset so that a minimum resource of 50 datapoints is allocated to each configuration. Table 2 shows that CVST and HYPERBAND achieve comparable test errors (the differences are well within the error bars), while HYPERBAND is significantly faster than CVST on all datasets. More granularly, while CVST on average has slightly lower mean error, HYPERBAND is within 0.2% of CVST on 5 of the 7 datasets. Additionally, for each of the 7 datasets, HYPERBAND does as well as or better than CVST in over half of the trials. <table> <tr> <th rowspan="2">Dataset</th> <th colspan="2">CVST</th> <th colspan="2">Hyperband</th> <th rowspan="2">Duration Ratio</th> </tr> <tr> <th>Test Error</th> <th>Duration</th> <th>Test Error</th> <th>Duration</th> </tr> <tr> <td>banana</td> <td>9.8%±1.6%</td> <td>12.3±5.0</td> <td>9.9%±1.5%</td> <td>1.8±0.1</td> <td>6.7±2.8</td> </tr> <tr> <td>german</td> <td>26.0%±4.5%</td> <td>2.7±1.1</td> <td>27.6%±4.8%</td> <td>0.7±0.0</td> <td>4.1±1.7</td> </tr> <tr> <td>image</td> <td>2.9%±1.1%</td> <td>3.5±1.0</td> <td>3.3%±1.4%</td> <td>1.0±0.0</td> <td>3.4±0.9</td> </tr> <tr> <td>splice</td> <td>8.6%±1.8%</td> <td>10.6±3.1</td> <td>8.7%±1.8%</td> <td>3.9±0.1</td> <td>2.7±0.8</td> </tr> <tr> <td>ringnorm</td> <td>1.4%±0.4%</td> <td>21.3±2.3</td> <td>1.5%±0.4%</td> <td>6.5±0.3</td> <td>3.3±0.4</td> </tr> <tr> <td>twonorm</td> <td>2.4%±0.5%</td> <td>27.9±10.0</td> <td>2.4%±0.5%</td> <td>6.5±0.2</td> <td>4.3±1.5</td> </tr> <tr> <td>waveform</td> <td>9.3%±1.3%</td> <td>13.7±2.7</td> <td>9.5%±1.3%</td> <td>2.9±0.2</td> <td>4.8±1.0</td> </tr> </table> Table 2: The test error and duration columns show the average value plus/minus the standard deviation across 50 trials. Duration is measured in minutes and indicates how long it took each method to evaluate the grid of 610 hyperparameters used in Krueger et al. (2015).
The ratio column shows the ratio of the duration for HYPERBAND over that for CVST with the associated standard deviation. A.2 LENET EXPERIMENT We trained the LeNet convolutional neural network on MNIST using mini-batch SGD. Code is available for the network at http://deeplearning.net/tutorial/lenet.html. The search space for the LeNet example discussed in Section 3.2 is shown in Table 3. <table> <tr> <th>Hyperparameter</th> <th>Scale</th> <th>Min</th> <th>Max</th> </tr> <tr> <td>Learning Rate</td> <td>log</td> <td>1e-3</td> <td>1e-1</td> </tr> <tr> <td>Batch size</td> <td>log</td> <td>1e1</td> <td>1e3</td> </tr> <tr> <td>Layer-2 Num Kernels (k2)</td> <td>linear</td> <td>10</td> <td>60</td> </tr> <tr> <td>Layer-1 Num Kernels (k1)</td> <td>linear</td> <td>5</td> <td>k2</td> </tr> </table> Table 3: Hyperparameter space for the LeNet application of Section 3.2. Note that the number of kernels in Layer-1 is upper bounded by the number of kernels in Layer-2. A.3 EXPERIMENTS USING ALEX KRIZHEVSKY'S CNN ARCHITECTURE For the experiments discussed in Section 4.1, the exact architecture used is the 18% model provided on cuda-convnet for CIFAR-10.\footnote{The model specification is available at http://code.google.com/p/cuda-convnet/.} <table> <tr> <th>Hyperparameter</th> <th>Scale</th> <th>Min</th> <th>Max</th> </tr> <tr> <th colspan="4">Learning Parameters</th> </tr> <tr> <td>Initial Learning Rate</td> <td>log</td> <td>5 \times 10^{-5}</td> <td>5</td> </tr> <tr> <td>Conv1 \( l_2 \) Penalty</td> <td>log</td> <td>5 \times 10^{-5}</td> <td>5</td> </tr> <tr> <td>Conv2 \( l_2 \) Penalty</td> <td>log</td> <td>5 \times 10^{-5}</td> <td>5</td> </tr> <tr> <td>Conv3 \( l_2 \) Penalty</td> <td>log</td> <td>5 \times 10^{-5}</td> <td>5</td> </tr> <tr> <td>FC4 \( l_2 \) Penalty</td> <td>log</td> <td>5 \times 10^{-3}</td> <td>500</td> </tr> <tr> <td>Learning Rate Reductions</td> <td>integer</td> <td>0</td> <td>3</td> </tr> <tr> <th colspan="4">Local Response Normalization</th> </tr> <tr> <td>Scale</td> <td>log</td> <td>5 \times 10^{-6}</td> <td>5</td> </tr> <tr> <td>Power</td> <td>linear</td> <td>0.01</td> <td>3</td> </tr> </table> Table 4: Hyperparameters and associated ranges for the three-layer convolutional network. Search Space: The search space used for the experiments is shown in Table 4. The learning rate reductions hyperparameter indicates how many times the learning rate was reduced by a factor of 10 over the maximum iteration window. For example, on CIFAR-10, which has a maximum iteration of 30,000, a learning rate reduction of 2 corresponds to reducing the learning rate every 10,000 iterations, for a total of 2 reductions over the 30,000 iteration window. All hyperparameters with the exception of the learning rate decay reduction overlap with those in Snoek et al. (2012). Two hyperparameters in Snoek et al. (2012) were excluded from our experiments: (1) the width of the response normalization layer was excluded due to limitations of the Caffe framework and (2) the number of epochs was excluded because it is incompatible with dynamic resource allocation. Datasets: CIFAR-10 and SVHN contain \( 32 \times 32 \) RGB images while MRBI contains \( 28 \times 28 \) grayscale images. For all datasets, the only preprocessing performed on the raw images was demeaning. For CIFAR-10, the training (40,000 instances) and validation (10,000 instances) sets were sampled from data batches 1-5 with balanced classes. The original test set (10,000 instances) is used for testing. For MRBI, the training (10,000 instances) and validation (2,000 instances) sets were sampled from the original training set with balanced classes.
The original test set (50,000 instances) is used for testing. Lastly, for SVHN, the train, validation, and test splits were created using the same procedure as that in Sermanet et al. (2012). Computational Considerations: The experiments took the equivalent of over 1 year of GPU hours on NVIDIA GRID K520 cards available on Amazon EC2 g2.8xlarge instances. We set a total budget constraint in terms of iterations instead of compute time to make comparisons hardware independent.\footnote{Most trials were run on Amazon EC2 g2.8xlarge instances but a few trials were run on different machines due to the large computational demand of these experiments.} Comparing progress by iterations instead of time ignores overhead costs not associated with training, like the cost of configuration selection for Bayesian methods and the model initialization and validation costs for HYPERBAND. While overhead is hardware dependent, the overhead for HYPERBAND is below 5% on EC2 g2.8xlarge machines, so comparing progress by time passed would not impact results significantly. Due to the high computational cost of these experiments, we were not able to run all searchers out to convergence. However, we did double the budget for each trial of CIFAR-10 to allow for a comparison of the searchers as they near convergence. Figure 6 shows that while Bayesian optimization methods achieve similar performance as HYPERBAND and SUCCESSIVEHALVING, it takes them much longer to achieve a comparable error rate. Figure 6: Average test error across 10 trials is shown in all plots. Error bars indicate the maximum and minimum ranges of the test error corresponding to the model with the best validation error. Comparison with Early Stopping: Adaptive allocation for hyperparameter optimization can be thought of as a form of early stopping where less promising configurations are halted before completion. Domhan et al. (2015) propose an early stopping method for neural networks and combine it with SMAC to speed up hyperparameter optimization. Their method stops training a configuration if the probability of the configuration beating the current best is below a specified threshold. This probability is estimated by extrapolating learning curves fit to the intermediate validation error losses of a configuration. If a configuration is terminated early, the predicted terminal value from the estimated learning curves is used as the validation error passed to the hyperparameter optimization algorithm. Hence, if the learning curve fit is poor, it could impact the performance of the configuration selection algorithm. While this approach is heuristic in nature, it does demonstrate promising empirical performance, so we included SMAC with early termination as a competitor. We used the conservative termination criterion with default parameters\footnote{We used the code provided at https://github.com/automl/pylearningcurvepredictor.} and recorded the validation loss every 400 iterations and evaluated the termination criterion 3 times within the training period (every 8k iterations for CIFAR-10 and MRBI and every 16k iterations for SVHN). Comparing performance by the total multiple of \( R \) used is conservative because it does not account for the time spent fitting the learning curve in order to check the termination criterion. A.4 KERNEL CLASSIFICATION EXPERIMENTS We trained the regularized least-squares classification model using a block coordinate descent solver.
Our models take less than 10 minutes to train on CIFAR-10 using an 8 core machine, while the default SVM method in Scikit-learn is single core and takes hours. Table 5 shows the hyperparameters and associated ranges considered in the kernel least squares classification experiment discussed in Section 4.2. <table> <tr> <th>Hyperparameter</th> <th>Type</th> <th>Values</th> </tr> <tr> <td>preprocessor</td> <td>Categorical</td> <td>min/max, standardize, normalize</td> </tr> <tr> <td>kernel</td> <td>Categorical</td> <td>rbf, polynomial, sigmoid</td> </tr> <tr> <td>C</td> <td>Continuous</td> <td>\( \log[10^{-3}, 10^3] \)</td> </tr> <tr> <td>gamma</td> <td>Continuous</td> <td>\( \log[10^{-5}, 10] \)</td> </tr> <tr> <td>degree</td> <td>if kernel=poly</td> <td>integer [2,5]</td> </tr> <tr> <td>coef0</td> <td>if kernel=poly,sigmoid</td> <td>uniform [-1.0, 1.0]</td> </tr> </table> Table 5: Hyperparameter space for the kernel regularized least squares classification problem discussed in Section 4.2. The cost term C is divided by the number of samples so that the tradeoff between the squared error and the \( l_2 \) penalty would remain constant as the resource increased (squared error is summed across observations and not averaged). The regularization term \( \lambda \) is equal to the inverse of the scaled cost term C. Additionally, the average test error with associated minimum and maximum ranges across 10 trials is shown in Figure 7. ![Line plot showing average test error over time for different searchers, with error bars indicating minimum and maximum test error across 10 trials.](page_384_693_768_384.png) Figure 7: Average test error of the best kernel regularized least squares classification model found by each searcher on CIFAR-10. The color-coded dashed lines indicate when the last trial of a given searcher finished. Error bars correspond to observed minimum and maximum test error across 10 trials. <table> <tr> <th>Hyperparameter</th> <th>Type</th> <th>Values</th> </tr> <tr> <td>preprocessor</td> <td>Categorical</td> <td>none, min/max, standardize, normalize</td> </tr> <tr> <td>\( \lambda \)</td> <td>Continuous</td> <td>\( \log[10^{-3}, 10^3] \)</td> </tr> <tr> <td>gamma</td> <td>Continuous</td> <td>\( \log[10^{-5}, 10] \)</td> </tr> </table> Table 6: Hyperparameter space for the random feature kernel approximation classification problem discussed in Section 4.3. Table 6 shows the hyperparameters and associated ranges considered in the random features kernel approximation classification experiment discussed in Section 4.3. The regularization term \( \lambda \) is divided by the number of features so that the tradeoff between the squared error and the \( l_2 \) penalty would remain constant as the resource increased. Additionally, the average test error with associated minimum and maximum ranges across 10 trials is shown in Figure 8. ![Average test error of the best random features model found by each searcher on CIFAR-10. The test error for HYPERBAND and bracket s = 4 is calculated in every evaluation instead of at the end of a bracket. Error bars correspond to observed minimum and maximum test error across 10 trials.](page_573_682_495_312.png) Figure 8: Average test error of the best random features model found by each searcher on CIFAR-10. The test error for HYPERBAND and bracket \( s = 4 \) is calculated in every evaluation instead of at the end of a bracket.
Error bars correspond to observed minimum and maximum test error across 10 trials.
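The spaces in Tables 5 and 6 above mix categorical choices, log-scaled continuous ranges, and conditional hyperparameters that exist only for certain kernels. As an illustration of how a single i.i.d. configuration might be drawn from the Table 5 space, here is a minimal Python sketch; the function name and dictionary layout are our own assumptions, not from any released implementation.

```python
import random

def sample_kernel_config(rng):
    """Draw one configuration from the conditional space of Table 5 (a sketch)."""
    config = {
        "preprocessor": rng.choice(["min/max", "standardize", "normalize"]),
        "kernel": rng.choice(["rbf", "polynomial", "sigmoid"]),
        # log-scaled draws: sample the base-10 exponent uniformly
        "C": 10 ** rng.uniform(-3, 3),
        "gamma": 10 ** rng.uniform(-5, 1),
    }
    # conditional hyperparameters exist only for some kernel choices
    if config["kernel"] == "polynomial":
        config["degree"] = rng.randint(2, 5)
    if config["kernel"] in ("polynomial", "sigmoid"):
        config["coef0"] = rng.uniform(-1.0, 1.0)
    return config

print(sample_kernel_config(random.Random(0)))
```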
ABSTRACT Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian Optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation. We present HYPERBAND, a novel algorithm for hyperparameter optimization that is simple, flexible, and theoretically sound. HYPERBAND is a principled early-stopping method that adaptively allocates a pre-defined resource, e.g., iterations, data samples or number of features, to randomly sampled configurations. We compare HYPERBAND with popular Bayesian Optimization methods on several hyperparameter optimization problems. We observe that HYPERBAND can provide more than an order of magnitude speedups over competitors on a variety of neural network and kernel-based learning problems. 1 INTRODUCTION The task of hyperparameter optimization is becoming increasingly important as modern data analysis pipelines grow in complexity. The quality of a predictive model critically depends on its hyperparameter configuration, but it is poorly understood how these hyperparameters interact with each other to affect the quality of the resulting model. Consequently, practitioners often default to either hand-tuning or automated brute-force methods like random search and grid search. In an effort to develop more efficient search methods, the problem of hyperparameter optimization has recently been dominated by Bayesian optimization methods (Snoek et al., 2012; Hutter et al., 2011; Bergstra et al., 2011) that focus on optimizing hyperparameter configuration selection. These methods aim to identify good configurations more quickly than standard baselines like random search by selecting configurations in an adaptive manner; see Figure 1(a). Existing empirical evidence suggests that these methods outperform random search (Thornton et al., 2013; Eggensperger et al., 2013; Snoek et al., 2015). However, these methods tackle a fundamentally challenging problem of simultaneously fitting and optimizing a high-dimensional, non-convex function with unknown smoothness, and possibly noisy evaluations. To overcome these difficulties, some Bayesian optimization methods resort to heuristics, at the expense of consistency guarantees, to model the objective function or speed up resource intensive subroutines. Moreover, these adaptive configuration selection methods are intrinsically sequential and thus difficult to parallelize. An orthogonal approach to hyperparameter optimization focuses on speeding up configuration evaluation; see Figure 1(b). These methods are adaptive in computation, allocating more resources to promising hyperparameter configurations while quickly eliminating poor ones. Resources can take various forms, including size of training set, number of features, or number of iterations for iterative algorithms. By adaptively allocating these resources, these methods aim to examine orders of magnitude more hyperparameter configurations than methods that uniformly train all configurations to completion, thereby quickly identifying good hyperparameters. While there are methods that combine Bayesian optimization with adaptive resource allocation (Swersky et al., 2013; 2014; Domhan et al., 2015), we focus on speeding up random search as it offers a simple, parallelizable, and theoretically principled launching point and is shown to outperform grid search (Bergstra & Bengio, 2012). 
1 Consistency can be restored by allocating a fraction of resources to performing random search. Figure 1: (a) The heatmap shows the validation error over a two dimensional search space, with red corresponding to areas with lower validation error, and putative configurations selected in a sequential manner as indicated by the numbers. (b) The plot shows the validation error as a function of the resources allocated to each configuration (i.e., each line in the plot). Configuration evaluation methods allocate more resources to promising configurations. (c) The validation loss as a function of total resources allocated for two configurations. The shaded areas bound the maximum distance from the terminal validation loss and monotonically decrease with the resource. Our novel configuration evaluation method, HYPERBAND, relies on a principled early-stopping strategy to allocate resources, allowing it to evaluate orders of magnitude more configurations than uniform allocation strategies. HYPERBAND is a general-purpose technique that makes minimal assumptions, unlike prior configuration evaluation approaches (Swersky et al., 2013; Domhan et al., 2015; Swersky et al., 2014; György & Kocsis, 2011; Agarwal et al., 2011). In this work, we describe HYPERBAND, provide intuition for the algorithm through a detailed example, and present a wide range of empirical results comparing HYPERBAND with well established competitors. We also briefly describe the theoretical underpinnings of HYPERBAND; however, a thorough theoretical treatment is beyond the scope of this paper and is deferred to Li et al. (2016). 2 RELATED WORK Bayesian optimization techniques model the conditional probability \( p(f|\lambda) \) of a configuration’s performance on a metric \( f \) given a set of hyperparameters \( \lambda \). For instance, SMAC uses random forests to model \( p(f|\lambda) \) as a Gaussian distribution (Hutter et al., 2011). TPE is a non-standard Bayesian optimization algorithm based on tree-structured Parzen density estimators (Bergstra et al., 2011). A third popular method, Spearmint, uses Gaussian processes (GP) to model \( p(f|\lambda) \) and performs slice sampling over the GP’s hyperparameters (Snoek et al., 2012). Adaptive configuration evaluation is not a new idea. Maron & Moore (1993) considered a setting where training time is negligible (e.g., k-nearest-neighbor classification) and evaluation on a large validation set is accelerated by evaluating on an increasing subset of the validation set, early-stopping configurations that are performing poorly. Since subsets of the validation set provide unbiased estimates of its expected performance, this is an instance of the stochastic best-arm identification problem for multi-armed bandits (see Jamieson & Nowak (2014) for a brief survey). In contrast, this paper assumes that evaluation time is negligible and the goal is to early-stop long-running training procedures by evaluating partially trained models on the validation set. Previous approaches either require strong assumptions or use heuristics to perform adaptive resource allocation. Several works propose methods that make strong assumptions on the convergence behavior of training algorithms, providing theoretical performance guarantees under these assumptions (György & Kocsis, 2011; Agarwal et al., 2011; Swersky et al., 2013; 2014; Domhan et al., 2015; Sabharwal et al., 2016). Unfortunately, these assumptions are often hard to verify, and empirical performance can drastically suffer when they are violated.
One recent work of particular interest proposes a heuristic based on sequential analysis to determine stopping times for training configurations on increasing subsets of the data (Krueger et al., 2015). However, it has a few shortcomings: (1) it is designed to speed up multi-fold cross-validation and is not significantly faster than standard holdout, (2) it is not an anytime algorithm and requires the set of configurations to be evaluated as an input, and (3) the theoretical correctness and empirical performance of this method are highly dependent on a user-defined “safety-zone.” (These first two drawbacks prevent a full comparison to HYPERBAND on our selected empirical tasks; however, for completeness, we provide a comparison in Appendix A to Krueger et al. (2015) on some experimental tasks replicated from their paper.) Lastly, in an effort to avoid heuristics and strong assumptions, Sparks et al. (2015) proposed a halving style algorithm that did not require explicit convergence behavior, and Jamieson & Talwalkar (2015) analyzed a similar algorithm, providing theoretical guarantees and encouraging empirical results. Unfortunately, these halving style algorithms suffer from the \( n \) vs \( B/n \) issue which we will discuss in Section 3. Finally, Klein et al. (2016) recently introduced Fabolas, a Bayesian optimization method that combines adaptive selection and evaluation. Similar to Swersky et al. (2013; 2014), it models the conditional validation error as a Gaussian process using a kernel that captures the covariance with downsampling rate to allow for adaptive evaluation. While we intended to compare HYPERBAND with Fabolas, we encountered some technical difficulties when using the package provided by Klein et al. (2016) (available at https://github.com/automl/RoBO) and are working with the authors to resolve the issues. 3 HYPERBAND ALGORITHM HYPERBAND extends the SUCCESSIVEHALVING algorithm proposed for hyperparameter optimization in Jamieson & Talwalkar (2015) and calls it as a subroutine. The idea behind SUCCESSIVEHALVING follows directly from its name: uniformly allocate a budget to a set of hyperparameter configurations, evaluate the performance of all configurations, throw out the worst half, and repeat until one configuration remains. The algorithm allocates exponentially more resources to more promising configurations. Unfortunately, SUCCESSIVEHALVING requires the number of configurations \( n \) as an input to the algorithm. Given some finite time budget \( B \) (e.g., an hour of training time to choose a hyperparameter configuration), \( B/n \) resources are allocated on average across the configurations. However, for a fixed \( B \), it is not clear a priori whether we should (a) consider many configurations (large \( n \)) with a small average training time; or (b) consider a small number of configurations (small \( n \)) with longer average training times. We use a simple example to better understand this tradeoff. Figure 1(c) shows the validation loss as a function of total resources allocated for two configurations with terminal validation losses \( \nu_1 \) and \( \nu_2 \). The shaded areas bound the maximum deviation from the terminal validation loss and will be referred to as “envelope” functions. It is possible to differentiate between the two configurations when the envelopes diverge. Simple arithmetic shows that this happens when the width of the envelopes is less than \( \nu_2 - \nu_1 \), i.e., when the intermediate losses are guaranteed to be less than \( \frac{\nu_2 - \nu_1}{2} \) away from the terminal losses.
There are two takeaways from this observation: more resources are needed to differentiate between the two configurations when either (1) the envelope functions are wider or (2) the terminal losses are closer together. However, in practice, the optimal allocation strategy is unknown because we do not have knowledge of the envelope functions nor the distribution of terminal losses. Hence, if more resources are required before configurations can differentiate themselves in terms of quality (e.g., if an iterative training method converges very slowly for a given dataset or if randomly selected hyperparameter configurations perform similarly well) then it would be reasonable to work with a small number of configurations. In contrast, if the quality of a configuration is typically revealed using minimal resources (e.g., if iterative training methods converge very quickly for a given dataset or if randomly selected hyperparameter configurations are of low-quality with high probability) then \( n \) is the bottleneck and we should choose \( n \) to be large. 3.1 HYPERBAND HYPERBAND, shown in Algorithm 1, addresses this “\( n \) versus \( B/n \)” problem by considering several possible values of \( n \) for a fixed \( B \), in essence performing a grid search over feasible values of \( n \). Associated with each value of \( n \) is a minimum resource \( r \) that is allocated to all configurations before some are discarded; a larger value of \( n \) corresponds to a smaller \( r \) and hence more aggressive early stopping. There are two components to HYPERBAND: (1) the inner loop invokes SUCCESSIVEHALVING for fixed values of \( n \) and \( r \) (lines 3-9), and (2) the outer loop iterates over different values of \( n \) and \( r \) (lines 1-2). We will refer to each such run of SUCCESSIVEHALVING within HYPERBAND as a “bracket.” Each bracket is designed to use about \( B \) total resources and corresponds to a different tradeoff between \( n \) and \( B/n \). A single execution of HYPERBAND takes a finite number of iterations, and in practice can be repeated indefinitely. HYPERBAND requires two inputs: (1) \( R \), the maximum amount of resource that can be allocated to a single configuration, and (2) \( \eta \), an input that controls the proportion of configurations discarded in each round of SUCCESSIVEHALVING. The two inputs dictate how many different brackets are considered; specifically, \( s_{\max} + 1 \) different values for \( n \) are considered with \( s_{\max} = \lfloor \log_\eta(R) \rfloor \). HYPERBAND begins with the most aggressive bracket \( s = s_{\max} \), which sets \( n \) to maximize exploration, subject to the constraint that at least one configuration is allocated \( R \) resources. Each subsequent bracket reduces \( n \) by a factor of approximately \( \eta \) until the final bracket, \( s = 0 \), in which every configuration is allocated \( R \) resources (this bracket simply performs classical random search).
Hence, HYPERBAND performs a geometric search in the average budget per configuration to address the “\( n \) versus \( B/n \)” problem, at the cost of approximately \( s_{\max} + 1 \) times more work than running SUCCESSIVEHALVING for a fixed \( n \). By doing so, HYPERBAND is able to exploit situations in which adaptive allocation works well, while protecting itself in situations where more conservative allocations are required. <table> <tr> <th colspan="2">Algorithm 1: HYPERBAND algorithm for hyperparameter optimization.</th> </tr> <tr> <td colspan="2"> <b>input</b> : \( R, \eta \) (default \( \eta = 3 \))<br> <b>initialization</b> : \( s_{\max} = \lfloor \log_\eta(R) \rfloor, B = (s_{\max} + 1)R \)<br> <b>for</b> \( s \in \{s_{\max}, s_{\max} - 1, \ldots, 0\}\) <b>do</b><br> \( n = \lfloor \frac{B}{R(s+1)} \rfloor \eta^{s},\quad r = R\eta^{-s} \)<br> // begin SUCCESSIVEHALVING with (\( n, r \)) inner loop<br> \( T = \text{get\_hyperparameter\_configuration}(n) \)<br> <b>for</b> \( i \in \{0, \ldots, s\} \) <b>do</b><br> \( n_i = \lfloor n\eta^{-i} \rfloor \)<br> \( r_i = r\eta^i \)<br> \( L = \{\text{run\_then\_return\_val\_loss}(t, r_i) : t \in T\} \)<br> \( T = \text{top\_k}(T, L, \lfloor n_i/\eta \rfloor) \)<br> <b>end</b><br> <b>end</b><br> <b>return</b> Configuration with the smallest intermediate loss seen so far. </td> </tr> </table> \( R \) represents the maximum amount of resources that can be allocated to any given configuration. In most cases, there is a natural upper bound on the maximum budget per configuration that is often dictated by the resource type (e.g., training set size for dataset downsampling; limitations based on memory constraint for feature downsampling; rule of thumb regarding number of epochs when iteratively training neural networks). \( R \) is also the number of configurations evaluated in the bracket that performs the most exploration, i.e., \( s = s_{\max} \). In practice one may want \( n \leq n_{\max} \) to limit overhead associated with training many configurations on a small budget, i.e., costs associated with initialization, loading a model, and validation. In this case, set \( s_{\max} = \lfloor \log_\eta(n_{\max}) \rfloor \). The value of \( \eta \) can be viewed as a knob that can be tuned based on practical user constraints. Larger values of \( \eta \) correspond to a more aggressive elimination schedule and thus fewer rounds of elimination; specifically, each round retains a \( 1/\eta \) fraction of the configurations, for a total of \( \lfloor \log_\eta(n) \rfloor + 1 \) rounds of elimination with \( n \) configurations. If one wishes to receive a result faster at the cost of a sub-optimal asymptotic constant, one can increase \( \eta \) to reduce the budget per bracket \( B = (\lfloor \log_\eta(R) \rfloor + 1)R \). We stress that results are not very sensitive to the choice of \( \eta \). In practice we suggest taking \( \eta \) to be equal to 3 or 4. HYPERBAND requires the following methods to be defined for any given learning problem: get_hyperparameter_configuration(\( n \)) returns a set of \( n \) i.i.d. samples from some distribution defined over the hyperparameter configuration space; run_then_return_val_loss(\( t, r \)) takes a hyperparameter configuration (\( t \)) and resource allocation (\( r \)) and returns the validation loss after training for the allocated resources; and top_k(configs, losses, k) takes a set of configurations as well as their associated losses and returns the top \( k \) performing configurations.
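For concreteness, the following is a minimal Python sketch of Algorithm 1 built on the three methods just described. The hook bodies here (a toy one-dimensional search space and loss) are illustrative stand-ins of our own; only the hyperband function mirrors the algorithm.

```python
import math
import random

# --- illustrative problem hooks; the real bodies are user-supplied ---
def get_hyperparameter_configuration(n, rng=random.Random(0)):
    # n i.i.d. samples from the search distribution (toy 1-D space here)
    return [{"lr": 10 ** rng.uniform(-3, -1)} for _ in range(n)]

def run_then_return_val_loss(t, r):
    # toy loss: decays with the resource r toward a config-dependent floor
    return t["lr"] + 1.0 / (1.0 + r)

def top_k(configs, losses, k):
    return [c for c, _ in sorted(zip(configs, losses), key=lambda p: p[1])[:k]]

def hyperband(R, eta=3):
    # small epsilon guards against floating-point error in the log
    s_max = int(math.floor(math.log(R, eta) + 1e-9))
    B = (s_max + 1) * R                      # budget per bracket
    best, best_loss = None, float("inf")
    for s in range(s_max, -1, -1):           # outer loop over brackets
        n = (B // (R * (s + 1))) * eta ** s
        r = R * eta ** (-s)
        T = get_hyperparameter_configuration(n)
        for i in range(s + 1):               # SUCCESSIVEHALVING inner loop
            n_i, r_i = n // eta ** i, r * eta ** i
            L = [run_then_return_val_loss(t, r_i) for t in T]
            if min(L) < best_loss:
                best_loss, best = min(L), T[L.index(min(L))]
            T = top_k(T, L, max(1, n_i // eta))
    return best, best_loss

print(hyperband(R=81, eta=3))
```

With \( R = 81 \) and the default \( \eta = 3 \), the outer loop visits exactly the \( (n_i, r_i) \) schedule shown in Table 1 of the next subsection.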
3.2 Example application with iterations as a resource: LeNet We next present a simple example to provide intuition. We work with the MNIST dataset and optimize hyperparameters for the LeNet convolutional neural network trained using mini-batch SGD. Our search space includes learning rate, batch size, and number of kernels for the two layers of the network as hyperparameters (details are shown in Table 3 in Appendix A). We further define the number of iterations as the resource to allocate, with one unit of resource corresponding to one epoch or a full pass over the dataset. We set \( R \) to 81 and use the default value of \( \eta = 3 \), resulting in \( s_{\max} = 4 \) and thus 5 brackets of SUCCESSIVEHALVING with different tradeoffs between \( n \) and \( B/n \). The resources allocated within each bracket are displayed in Table 1. <table> <tr> <th rowspan="2">i</th> <th colspan="2">s = 4</th> <th colspan="2">s = 3</th> <th colspan="2">s = 2</th> <th colspan="2">s = 1</th> <th colspan="2">s = 0</th> </tr> <tr> <th>n<sub>i</sub></th> <th>r<sub>i</sub></th> <th>n<sub>i</sub></th> <th>r<sub>i</sub></th> <th>n<sub>i</sub></th> <th>r<sub>i</sub></th> <th>n<sub>i</sub></th> <th>r<sub>i</sub></th> <th>n<sub>i</sub></th> <th>r<sub>i</sub></th> </tr> <tr> <td>0</td> <td>81</td> <td>1</td> <td>27</td> <td>3</td> <td>9</td> <td>9</td> <td>6</td> <td>27</td> <td>5</td> <td>81</td> </tr> <tr> <td>1</td> <td>27</td> <td>3</td> <td>9</td> <td>9</td> <td>3</td> <td>27</td> <td>2</td> <td>81</td> <td></td> <td></td> </tr> <tr> <td>2</td> <td>9</td> <td>9</td> <td>3</td> <td>27</td> <td>1</td> <td>81</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>3</td> <td>3</td> <td>27</td> <td>1</td> <td>81</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>4</td> <td>1</td> <td>81</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </table> Table 1: Values of \( n_i \) and \( r_i \) for the brackets of HYPERBAND when \( R = 81 \) and \( \eta = 3 \). ![Performance of individual brackets s and HYPERBAND.](page_1012_328_377_246.png) Figure 2: Performance of individual brackets \( s \) and HYPERBAND. Figure 2 compares the empirical performance of the different brackets of HYPERBAND if they were used separately, as well as standard HYPERBAND (all results are averaged over 70 trials). In practice we do not know a priori which bracket \( s \in \{0, \ldots, 4\} \) will be most effective, and in this case neither the most (\( s = 4 \)) nor least aggressive (\( s = 0 \)) setting is optimal. However, note that HYPERBAND does nearly as well as the optimal bracket (\( s = 3 \)) and vastly outperforms the baseline uniform allocation (i.e. random search), which is equivalent to bracket \( s = 0 \). 3.3 Overview of theoretical results Although a detailed theoretical analysis is beyond the scope of this paper, we provide an intuitive, high-level description of theoretical properties of HYPERBAND. Suppose there are \( n \) configurations, each with a given terminal validation error \( \nu_i \) for \( i = 1, \ldots, n \). Without loss of generality, index the configurations by performance so that \( \nu_1 \) corresponds to the best performing configuration, \( \nu_2 \) to the second best, and so on. Now consider the task of identifying the best configuration. 
The optimal strategy would allocate to each configuration \( i \) the minimum resource required to distinguish it from \( \nu_1 \), i.e., enough so that the envelope functions depicted in Figure 1(c) bound the intermediate loss to be less than \( \frac{\nu_i - \nu_1}{2} \) away from the terminal value. As shown in Jamieson & Talwalkar (2015) and Li et al. (2016), the budget required by SUCCESSIVEHALVING is in fact only a small factor away from this optimal approach because it capitalizes on configurations that are easy to distinguish from \( \nu_1 \). In contrast, the naive uniform allocation strategy, which allocates \( B/n \) to each configuration, has to allocate to every configuration the resource required to distinguish \( \nu_2 \) from \( \nu_1 \). The relative size of the budget required for uniform allocation and SUCCESSIVEHALVING depends on the envelope functions bounding deviation from terminal losses as well as the distribution from which the \( \nu_i \)'s are drawn. The budget required for SUCCESSIVEHALVING is smaller when the optimal \( n \) versus \( B/n \) tradeoff requires fewer resources per configuration. Hence, if the envelope functions tighten quickly as a function of resource allocated, or the average distances between terminal losses are large, then SUCCESSIVEHALVING can be substantially faster than uniform allocation. Of course we do not have knowledge of either function in practice, so we will hedge our aggressiveness with HYPERBAND. Remarkably, despite having no knowledge of the envelope functions or the distribution of \( \nu_i \)'s, HYPERBAND requires a budget that is only log factors larger than the optimal for SUCCESSIVEHALVING. See Li et al. (2016) for details. Figure 3: Average test error across 10 trials is shown in all plots. Label “SMAC_early” corresponds to SMAC with the early stopping criterion proposed in Domhan et al. (2015) and label “bracket s = 4” corresponds to repeating the most exploratory bracket of HYPERBAND. 4 HYPERPARAMETER OPTIMIZATION EXPERIMENTS In this section, we evaluate the empirical behavior of HYPERBAND with iterations, data subsamples, and features as resources. For all experiments, we compare HYPERBAND with three well known Bayesian optimization algorithms — SMAC, TPE, and Spearmint. Additionally, we show results for SUCCESSIVEHALVING corresponding to repeating the most exploratory bracket of HYPERBAND. Finally, for all experiments, we benchmark against standard random search and random_2×, which is a variant of random search with twice the budget of other methods. 4.1 EARLY STOPPING ITERATIVE ALGORITHMS FOR NEURAL NETWORKS We study a convolutional neural network with the same architecture as that used in Snoek et al. (2012) and Domhan et al. (2015) from cuda-convnet. The search spaces used in the two previous works differ, and we used a search space similar to that of Snoek et al. (2012) with 6 hyperparameters for stochastic gradient descent and 2 hyperparameters for the response normalization layers. In line with the two previous works, we used a batch size of 100 for all experiments. For these experiments, we also compare against a variant of SMAC named SMAC_early that uses the early termination criterion proposed in Domhan et al. (2015) for neural networks. We view SMAC with early stopping to be a combination of adaptive configuration selection and configuration evaluation. See Appendix A for more details about the experimental setup.
Datasets: We considered three image classification datasets: CIFAR-10 (Krizhevsky, 2009), rotated MNIST with background images (MRBI) (Larochelle et al., 2007), and Street View House Numbers (SVHN) (Netzer et al., 2011). CIFAR-10 and SVHN contain \(32 \times 32\) RGB images while MRBI contains \(28 \times 28\) grayscale images. The splits used for each dataset are as follows: (1) CIFAR-10 has 40k, 10k, and 10k instances; (2) MRBI has 10k, 2k, and 50k instances; and (3) SVHN has close to 600k, 6k, and 26k instances for training, validation, and test respectively. For all datasets, the only preprocessing performed on the raw images was demeaning. HYPERBAND Configuration: For these experiments, one unit of resource corresponds to 100 mini-batch iterations. For CIFAR-10 and MRBI, \(R\) is set to 300 (or 30k total iterations). For SVHN, \(R\) is set to 600 (or 60k total iterations) to accommodate the larger training set. \(\eta\) was set to 4 for all experiments, resulting in 5 SUCCESSIVEHALVING brackets for HYPERBAND. Results: Ten independent trials were performed for each searcher. For CIFAR-10, the results in Figure 3(a) show that HYPERBAND is more than an order of magnitude faster than its competitors. In Figure 6 of Appendix A, we extend the x-axis for CIFAR-10 out to 100\(R\). The results show that Bayesian optimization methods ultimately converge to similar errors as HYPERBAND. For MRBI, HYPERBAND is more than an order of magnitude faster than standard configuration selection approaches and 5× faster than SMAC with early stopping. For SVHN, while HYPERBAND finds a good configuration faster, Bayesian optimization methods are competitive and SMAC with early stopping outperforms HYPERBAND. This result demonstrates that there is merit to incorporating early stopping with configuration selection approaches. Across the three datasets, HYPERBAND and SMAC_early are the only two methods that consistently outperform random_2×. On these datasets, HYPERBAND is over 20× faster than random search while SMAC_early is \( \leq 7\times \) faster than random search within the evaluation window. In fact, the first result returned by HYPERBAND after using a budget of 5\( R \) is often competitive with results returned by other searchers after using 50\( R \). Additionally, HYPERBAND is less variable than other searchers across trials, which is highly desirable in practice (see Appendix A for plots with error bars). For computationally expensive problems in high dimensional search spaces, it may make sense to just repeat the most exploratory brackets. Similarly, if meta-data is available about a problem or it is known that the quality of a configuration is evident after allocating a small amount of resource, then one should just repeat the most exploratory bracket. Indeed, for these experiments, repeating the most exploratory bracket of HYPERBAND outperforms cycling through all the brackets. In fact, bracket \( s = 4 \) vastly outperforms all other methods on CIFAR-10 and MRBI and is nearly tied with SMAC_early for first on SVHN. Finally, CIFAR-10 is a very popular dataset and state-of-the-art models achieve much better accuracy than what is shown in Figure 3. The difference in performance is mainly attributable to higher model complexities and data manipulation (i.e., using reflection or random cropping to artificially increase the dataset size).
If we limit the comparison to published results that use the same architecture and exclude data manipulation, the best human expert result for the dataset is 18% error and the hyperparameter optimized results are 15.0% for Snoek et al. (2012) (we were unable to reproduce this result even after receiving the optimal hyperparameters from the authors through a personal communication) and 17.2% for Domhan et al. (2015). These results are better than our results on CIFAR-10 because they use 25% more data by including the validation set and also train for more epochs. The best model found by HYPERBAND achieved a test error of 17.0% when trained on the combined training and validation data for 300 epochs. 4.2 DATA DOWNSAMPLING KERNEL REGULARIZED LEAST SQUARES CLASSIFICATION In this experiment, we use HYPERBAND with data samples as the resource to optimize the hyperparameters of a kernel-based classification task on CIFAR-10. We use the multi-class regularized least squares classification model, which is known to have comparable performance to SVMs (Rifkin & Klautau, 2004; Agarwal et al., 2014) but can be trained significantly faster. The hyperparameters considered in the search space include preprocessing method, regularization, kernel type, kernel length scale, and other kernel specific hyperparameters (see Appendix A for more details). HYPERBAND is run with \( \eta = 4 \) and \( R = 400 \), with each unit of resource representing 100 datapoints. Similar to previous experiments, these inputs result in a total of 5 brackets. Each hyperparameter optimization algorithm is run for ten trials on Amazon EC2 m4.2xlarge instances; for a given trial, HYPERBAND is allowed to run for two outer loops, bracket \( s = 4 \) is repeated 10 times, and all other searchers are run for 12 hours. Figure 4 shows that HYPERBAND returns a good configuration after just the first SUCCESSIVEHALVING bracket in approximately 20 minutes; other searchers fail to reach this error rate on average even after the entire 12 hours. Notably, HYPERBAND was able to evaluate over 250 configurations in this first bracket of SUCCESSIVEHALVING, while competitors were able to evaluate only three configurations in the same amount of time. Consequently, HYPERBAND is over 30× faster than Bayesian optimization methods and 70× faster than random search. Bracket \( s = 4 \) slightly outperforms HYPERBAND but the terminal performance for the two algorithms is the same. Random_2× is competitive with SMAC and TPE. 4.3 FEATURE SUBSAMPLING TO SPEED UP APPROXIMATE KERNEL CLASSIFICATION We next demonstrate the performance of HYPERBAND when using features as a resource, focusing on random feature approximations for kernel methods. Features are randomly generated using the method described in Rahimi & Recht (2007) to approximate the RBF kernel, and these random features are then used as inputs to a ridge regression classifier. We consider hyperparameters of a random feature kernel approximation classifier trained on CIFAR-10, including preprocessing method, kernel length scale, and \( l_2 \) penalty. We impose an upper bound of 100k random features for the kernel approximation so that the data will comfortably fit into a machine with 60GB of memory. Figure 4: Average test error of the best kernel regularized least squares classification model found by each searcher on CIFAR-10. The color-coded dashed lines indicate when the last trial of a given searcher finished. Figure 5: Average test error of the best random features model found by each searcher on CIFAR-10. The test error for HYPERBAND and bracket \( s = 4 \) is calculated in every evaluation instead of at the end of a bracket.
Additionally, we set one unit of resource to be 100 features for an \( R = 1000 \), which gives 5 different brackets with \( \eta = 4 \). Each searcher is run for 10 trials, with each trial lasting 12 hours on a n1-standard-16 machine from Google Cloud Compute. The results in Figure 5 show that HYPERBAND is around 6× faster than Bayesian methods and random search. HYPERBAND performs similarly to bracket \( s = 4 \). Random_2× outperforms Bayesian optimization algorithms. 4.4 EXPERIMENTAL DISCUSSION For a given \( R \), the most exploratory SUCCESSIVEHALVING round performed by HYPERBAND evaluates \( \eta^{\lfloor \log_\eta(R) \rfloor} \) configurations using a budget of \( (\lfloor \log_\eta(R) \rfloor + 1)R \), which gives an upper bound on the potential speedup over random search. If training time scales linearly with the resource, the maximum speedup offered by HYPERBAND compared to random search is \( \frac{\eta^{\lfloor \log_\eta(R) \rfloor}}{(\lfloor \log_\eta(R) \rfloor + 1)} \). For the values of \( \eta \) and \( R \) used in our experiments, the maximum speedup over random search is approximately 50× given linear training time. However, we observe a range of speedups from 6× to 70× faster than random search. The differences in realized speedup can be explained by two factors: (1) the scaling properties of total evaluation time as a function of the allocated resource and (2) the difficulty of finding a good configuration. If training time is superlinear as a function of the resource, then HYPERBAND can offer higher speedups. More generally, if training scales like a polynomial of degree \( p > 1 \), the maximum speedup of HYPERBAND over random search is approximately \( \frac{\eta^{p-1} - 1}{\eta^{p-1}} \eta^{\lfloor \log_\eta(R) \rfloor} \). Hence, higher speedups were observed for the kernel least squares classifier experiment discussed in Section 4.2 because the training time scaled quadratically as a function of the resource. If 10 randomly sampled configurations is sufficient to find a good hyperparameter setting, then the benefit of evaluating orders of magnitude more configurations is muted. Generally the difficulty of the problem scales with the dimension of the search space since coverage diminishes with dimensionality. For low dimensional problems, the number of configurations evaluated by random search and Bayesian methods is sufficient for good coverage, since the required number is exponential in only a small number of dimensions; i.e., if \( d = 3 \) as in the feature subsampling experiment, then \( n = O(2^d) = O(8) \) configurations suffice. Hence, HYPERBAND is only 6× faster than random search on the feature subsampling experiment. For the neural network experiments, however, we hypothesize that faster speedups are observed for HYPERBAND because the dimension of the search space is higher. 5 FUTURE WORK We have introduced a novel bandit-based method for adaptive configuration evaluation with demonstrated competitive empirical performance. Future work involves exploring (i) embedding HYPERBAND into parallel and distributed computing environments; (ii) adjusting for training methods with different convergence rates; and (iii) combining HYPERBAND with non-random sampling methods.
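The maximum-speedup bound quoted in Section 4.4 above is easy to check numerically; a small sketch of ours, using the \( \eta \) and \( R \) values from the experiments:

```python
import math

def max_speedup(eta, R):
    # eta^floor(log_eta R) / (floor(log_eta R) + 1), assuming linear training time
    s_max = int(math.floor(math.log(R, eta) + 1e-9))
    return eta ** s_max / (s_max + 1)

for eta, R in [(4, 300), (4, 400), (4, 600), (4, 1000)]:
    print(f"eta={eta}, R={R}: {max_speedup(eta, R):.1f}x")
# each case gives 256/5 = 51.2x, consistent with the ~50x quoted above
```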
accept
Accept (Poster)
7.333333
552e87c2b04b45dd8b52b6c8ddddb2fa47cee876
iclr
2,017
ROTATION PLANE DOUBLY ORTHOGONAL RECURRENT NEURAL NETWORKS Zoe McCarthy, Andrew Bai, Xi Chen, & Pieter Abbeel Department of Electrical Engineering and Computer Science University of California, Berkeley Berkeley, CA 94720, USA {zmccarthy, xiaoyang.bai, c.xi, pabbeel}@berkeley.edu ABSTRACT Recurrent Neural Networks (RNNs) applied to long sequences suffer from the well known vanishing and exploding gradients problem. The recently proposed Unitary Evolution Recurrent Neural Network (uRNN) alleviates the exploding gradient problem and can learn very long dependencies, but its nonlinearities leave it still affected by the vanishing gradient problem, so learning can break down for extremely long dependencies. We propose a new RNN transition architecture where the hidden state is updated multiplicatively by a time invariant orthogonal transformation followed by an input modulated orthogonal transformation. There are no additive interactions, so our architecture exactly preserves the forward hidden state activation norm and the backwards gradient norm for all time steps, and is provably not affected by vanishing or exploding gradients. We propose using the rotation plane parameterization to represent the orthogonal matrices. We validate our model on a simplified memory copy task and see that our model can learn dependencies as long as 5,000 timesteps. 1 INTRODUCTION Deep Neural Networks have shown increasingly impressive performance on a wide variety of practical tasks. Recurrent Neural Networks (RNNs) are powerful sequence modeling tools that have found successful applications in speech recognition, natural language processing, image captioning, and many more (Sutskever et al., 2014; Bahdanau et al., 2014; Wu et al., 2016; Donahue et al., 2015; Karpathy & Li, 2015; Luong et al., 2014). One fundamental problem with RNNs is the so-called vanishing and exploding gradients problem (Hochreiter, 1991; Bengio, 1994; Hochreiter et al., 2001). These problems occur when training RNNs using gradient descent to model long sequences, where the gradient magnitude either goes towards zero or infinity, respectively, as the length of the sequence increases. Several heuristics have been proposed to alleviate these problems. The Long Short Term Memory (LSTM) in Hochreiter et al. (1997) and Gated Recurrent Units (GRU) in Cho et al. (2014) have been incredibly successful recurrent transition architectures to model complicated dependencies for sequences up to several hundred timesteps long, and are the main RNN architectures in use today. The IRNN model modifies the standard RNN transition to initialize at the identity, which increases the timestep length modeling capability (Le et al., 2015). Stabilizing the forward hidden state norm can have a positive effect on hidden state gradient norm preservation (Krueger & Memisevic, 2015; Ba et al., 2016; Cooijmans et al., 2016). A simple gradient norm clipping during hidden state backpropagation can also help to alleviate the exploding gradient problem for these architectures (Graves, 2013). Recently, there has been a surge of interest in orthogonal and unitary transition architectures. Orthogonal and unitary transitions exactly preserve forward and gradient norms passed through them. Theory developed for linear neural networks suggests that training time can be vastly shorter when the weight matrices are orthogonal, and even independent of depth (Saxe et al., 2013).
The Unitary Evolution Recurrent Neural Network utilizes a unitary recurrent transition followed by a contractive nonlinearity to exactly solve the exploding gradient problem and greatly alleviate the vanishing gradient problem (Arjovsky et al., 2015). A very recent extension expands on this work to increase the expressive power of these transitions and validates the architecture on dependencies up to 2000 timesteps (Wisdom et al., 2016). Analytic solutions to the most common long term dependency example tasks, the memory copy and addition problems, are provided with linear orthogonal recurrent neural networks in Henaff et al. (2016). A nonlinear activation function that is locally orthogonal is proposed in Chernodub & Nowicki and shown to have great potential. This particular activation scheme, however, is discontinuous (not just nondifferentiable), and could increase optimization difficulty in some cases. An open question that we try to partially address in this work is whether a recurrent transition architecture can be fully orthogonal or unitary (and thus linear) and still learn expressive models to solve practical tasks. To address this problem, we propose the rotation plane doubly orthogonal RNN, a novel recurrent transition architecture that provably preserves the forward hidden state activation norm and the backpropagated gradient norm, and thus does not suffer from exploding or vanishing gradients. Doubly orthogonal refers to the fact that the RNN transition architecture updates the hidden state multiplicatively by a time invariant orthogonal transformation followed by an input modulated orthogonal transformation. Rotation plane refers to the parameterization we use to represent the orthogonal matrices. We evaluate our approach on a simplified 1-bit version of the memory copying task, and find that our architecture can scale to 5,000 timesteps on this task, outperforming past approaches. 2 DOUBLY ORTHOGONAL RNN ARCHITECTURE In this section we describe our proposed architecture, which we call the Doubly Orthogonal Recurrent Neural Net (DORNN). We also show that the DORNN provably preserves forward norm and gradient norm. We briefly review the definition of an orthogonal or unitary matrix, since it is fundamental to the definition and properties of our transition architecture. Orthogonal matrices, or rotation matrices, are defined as matrices \( Q \in \mathbb{R}^{n \times n} \) such that \( Q^T Q = I \). We restrict our attention to the special orthogonal matrices \( SO(n) \) such that \( \det(Q) = 1 \). The set of special orthogonal matrices forms a matrix Lie group. The complex analogue of an orthogonal matrix is a unitary matrix, defined similarly as matrices \( U \in \mathbb{C}^{n \times n} \) such that \( U^H U = I \). 2.1 RECURRENCE EQUATIONS Our recurrence equation is defined below: \[ h_{t+1} = R_{xh}(x_t) R_{hh} h_t \] where \( R_{xh}(x_t) \) is an input modulated orthogonal or unitary matrix, and \( R_{hh} \) is a time invariant orthogonal or unitary matrix that is applied at every timestep. We parameterize \( R_{hh} \) by \( \theta_{hh} \) and \( R_{xh} \) by \( \phi_{xh}(x_t) \), where \( \phi \) is a function of the input \( x_t \). Figure 1 shows a graphical depiction of this transition architecture. Our transition is different from most recurrent neural network transitions since it is fully multiplicative and contains no addition terms. The input enters into the equation by modulating the rotation applied to the hidden state.
This allows for more expressivity in the hidden transition than a constant linear transition, though not as much as a nonlinear hidden state dependent transition. By having an input dependent transition, the hidden dynamics are more complicated than a constant linear transition and are nonlinear with respect to the input \( x_t \). Linear time-varying models can express very complicated policies, such as in Levine et al. (2016). Our model can be viewed as a linear orthogonal time-varying model where the variation due to time is due to different input signals applied. 2.2 FORWARD ACTIVATION NORM AND GRADIENT NORM PRESERVATION Here we prove that this recurrent transition architecture exactly preserves (up to numerical precision) the forward hidden state activation norm and the backwards gradient norm. All proofs carry forward with unitary matrices instead of orthogonal matrices when transposes are replaced with Hermitian transposes. Figure 1: DORNN transition model. \( R_{hh} \) and \( R_{xh} \) are orthogonal transformations, parameterized by \( \theta_{hh} \) and \( \phi_{xh} \), respectively. The parameters \( \phi_{xh} \) are a function of the input, \( x_t \). \[ ||h_{t+1}|| = ||R_{xh}(x_t)R_{hh}h_t|| = ||R_{combined}(x_t)h_t|| = ||h_t|| \] where \( R_{combined} = R_{xh}(x_t)R_{hh} \). The last equality follows since the orthogonal matrices form a group, so \( R_{combined} \in SO(n) \), and \( ||Qv|| = ||v|| \) for any \( Q \in SO(n) \) and any \( v \in \mathbb{R}^n \), since \( ||Qv|| = \sqrt{v^T Q^T Q v} = \sqrt{v^T I v} = \sqrt{v^T v} = ||v|| \) by the definition of \( Q \) (orthogonal matrices preserve norm). Now let \( C(h_T) \) be a scalar cost function. The vanishing gradients problem occurs if \( || \frac{\partial C}{\partial h_1} || \to 0 \) as \( T \to \infty \) and the exploding gradient problem occurs if \( || \frac{\partial C}{\partial h_1} || \to \infty \) as \( T \to \infty \). \[ \frac{\partial C}{\partial h_t} = \frac{\partial C}{\partial h_T} \frac{\partial h_T}{\partial h_t} = \frac{\partial C}{\partial h_T} \prod_{i=t}^{T-1} \frac{\partial h_{i+1}}{\partial h_i} = \frac{\partial C}{\partial h_T} \prod_{i=t}^{T-1} R_{hh}^T R_{xh}(x_i)^T \] and so \[ \left| \left| \frac{\partial C}{\partial h_t} \right| \right| = \left| \left| \frac{\partial C}{\partial h_T} \prod_{i=t}^{T-1} R_{hh}^T R_{xh}(x_i)^T \right| \right| = \left| \left| \frac{\partial C}{\partial h_T} \right| \right| \] where the last equality follows from \( (\prod_{i=t}^{T-1} R_{hh}^T R_{xh}(x_i)^T) \in SO(n) \): an orthogonal matrix's transpose is its inverse, and the inverse of a group element is in the group. So the norm of the gradient of the cost \( C \) at hidden state \( h_t \) is the same as the final norm of the gradient at hidden state \( h_T \), and the transition does not suffer from vanishing or exploding gradients. 3 ROTATION PLANE DOUBLY ORTHOGONAL RNN Within the Doubly Orthogonal RNN, there is a choice in how the orthogonal (alternatively, unitary) transformations are parameterized. This choice determines the number of parameters, how the gradient propagates from the hidden state to the input, and much more. There is a wide variety of possible DORNN architectures, since there are many different ways to parameterize orthogonal and unitary matrices, each with its own pros and cons. We provide a particular instantiation of the Doubly Orthogonal RNN by parameterizing the orthogonal matrices in terms of the composition of many plane rotations.
We call this RNN architecture the Rotation Plane Doubly Orthogonal RNN, or RP-DORNN. We note that while we focus on this architecture within the context of a recurrent neural network, the rotation plane parameterization of orthogonal matrices could be equally useful for parameterizing very deep feedforward weight matrices. 3.1 ROTATION PLANE REPRESENTATION OF ORTHOGONAL MATRICES First we show how we parameterize a single plane rotation. The full architecture is generated by a composition of a sequence of these plane rotations. It is well known in numerical linear algebra that any orthogonal matrix \( Q \in SO(n) \) can be generated as the product of \( n \) Householder reflection matrices \( H = I - 2 \frac{uu^T}{\|u\|^2_2} \) for nonzero vectors \( u \in \mathbb{R}^n \). Work in Geometric Algebra (for example, see chapter 6 of Dorst et al. (2009)) gives us further insight into this parameterization. Two subsequent Householder reflections generate a plane rotation in the plane spanned by the reflection vectors. The generated rotation is twice the angle between the two reflection vectors. We can use this to parameterize rotation planes in terms of the desired angle of rotation, by generating two reflection vectors that produce the rotation. By rotating in several planes in sequence, we can generate arbitrary orthogonal matrices. Thus we can view the rotation angle in a given plane as either a parameter to be optimized or as an input from below in the neural network, both of which we utilize in our proposed recurrent architecture. Concretely, for a plane spanned by two orthonormal vectors \( w_0 \) and \( w_1 \), we generate a rotation of angle \( \theta \) from \( w_0 \) towards \( w_1 \) in the \( w_0 \)-\( w_1 \) plane by generating the following two reflection vectors \( v_0 \) and \( v_1 \) and composing their reflections. \[ v_0 = w_0 \] \[ v_1 = \cos(\theta/2)w_0 + \sin(\theta/2)w_1 \] then \( R_\theta = (I - 2v_1v_1^T)(I - 2v_0v_0^T) \). We don’t need to divide by the magnitude of \( v_0 \) or \( v_1 \) since by construction they are unit vectors. When we apply \( R_\theta \) to a vector or batch of vectors \( B \), we don’t have to generate the matrix \( R_\theta \), since \[ R_\theta B = (I - 2v_1v_1^T)(I - 2v_0v_0^T)B = (I - 2v_1v_1^T)(B - 2v_0(v_0^T B)) = B - 2v_1(v_1^T B) - 2v_0(v_0^T B) + 4v_1(v_1^T (v_0(v_0^T B))) \] and so we never have to form the full \( R_\theta \) matrix; we only need to perform matrix-vector products, and the intermediate dimensions are much smaller than those of the dense \( R_\theta \) matrix. In the next section we treat \( w_0, w_1 \) as random constant orthonormal vectors and treat \( \theta \) as a parameter or a function of inputs. 3.2 RP-DORNN We generate \( R_{hh} \) and \( R_{xh} \) as a sequence of plane rotations in the Rotation Plane Doubly Orthogonal Recurrent Neural Network (RP-DORNN). \[ R_{hh} = \prod_{i=1}^k R_{\theta_i} \] and \[ R_{xh}(x_t) = \prod_{i=1}^l R_{\phi_i(x_t)} \] for \( l, k \leq \lfloor \frac{n}{2} \rfloor \). The \( R_{\theta_i} \) and \( R_{\phi_i} \) are plane rotations in randomized orthogonal planes (where the planes from \( R_{hh} \) are orthogonal to one another and the planes from \( R_{xh} \) are orthogonal to one another, but the planes from \( R_{hh} \) intersect with the ones from \( R_{xh} \) randomly), parameterized by the angle of rotation. For \( R_{\phi_i} \), the angle of rotation is a function of \( x_t \), and for \( R_{\theta_i} \) it is a learned parameter.
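The two-reflection construction above is straightforward to implement without ever materializing \( R_\theta \). The NumPy sketch below (our own illustrative code, not the paper's implementation) applies a sequence of plane rotations to a batch of hidden states and numerically confirms the norm preservation argued in Section 2.2.

```python
import numpy as np

def rotate_in_plane(B, w0, w1, theta):
    """Apply R_theta = (I - 2 v1 v1^T)(I - 2 v0 v0^T) to the columns of B
    without forming the dense matrix; w0 and w1 must be orthonormal."""
    v0 = w0
    v1 = np.cos(theta / 2.0) * w0 + np.sin(theta / 2.0) * w1
    B = B - 2.0 * np.outer(v0, v0 @ B)  # reflection across v0's hyperplane
    B = B - 2.0 * np.outer(v1, v1 @ B)  # reflection across v1's hyperplane
    return B

rng = np.random.default_rng(0)
n, batch, steps = 16, 4, 1000

# Random orthonormal plane-spanning pairs, as in Section 3.2: QR of a Gaussian
# matrix gives orthonormal columns; consecutive pairs span the rotation planes.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
planes = [(Q[:, 2 * j], Q[:, 2 * j + 1]) for j in range(n // 2)]

h = rng.standard_normal((n, batch))
norm0 = np.linalg.norm(h, axis=0)
for _ in range(steps):                  # many applications, as over a long sequence
    for w0, w1 in planes:
        h = rotate_in_plane(h, w0, w1, rng.uniform(0.0, 2 * np.pi))
print(np.max(np.abs(np.linalg.norm(h, axis=0) / norm0 - 1.0)))  # ~1e-13
```

The backward pass is symmetric: applying the transposed reflections in reverse order preserves the gradient norm in exactly the same way.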
In our exploratory experiments we investigated several choices of \( l \) and \( k \), but for our final experiments we used \( l = k = \lfloor \frac{n}{2} \rfloor \), so \( R_{xh} \) and \( R_{hh} \) can each affect the entire hidden space. For our experiments we generated the planes to rotate in for each of \( R_{hh} \) and \( R_{xh} \) by initializing a random Gaussian matrix and projecting to an orthogonal matrix, and taking consecutive pairs of columns as the orthonormal vectors that span a rotation plane (\( w_0 \) and \( w_1 \) for each plane in the notation of Section 3.1). There are several possible choices for the parameterization of the angle parameters. In our exploratory experiments we investigated directly generating the angle value as a parameter, but this suffers from topology wrapping: \( \theta \) and \( \theta + 2\pi m \) for any integer \( m \) generate the same transformation. We settled on generating \( \theta_i = 2\pi \sigma(\alpha_i) \) for real parameters \( \alpha_i \), where \( \sigma \) is the sigmoid function, and \( \phi_i(x_t) = \pi \sigma(W x_t + b) \) for a learned affine transform \( W x_t + b \). This only allows positive angle rotations for \( R_{xh} \), and it places negative and positive angle rotations on opposite sides of the parameter space for \( R_{hh} \) (negative \( \alpha_i \) produces positive angle rotations and positive \( \alpha_i \) produces negative angle rotations). In addition, with this parameterization, \( \sigma(\alpha_i) \) and \( \sigma(W x_t + b) \) approach 0 exponentially for very negative \( \alpha_i \) and \( W x_t + b \), which is useful since for long timescales the transformations may be applied exponentially many times. In our experiments, \( \alpha_i \) were drawn uniformly between -3 and 0, \( W \) was initialized with a Gaussian, and \( b \) was initialized as 0. The Rotation Plane representation allows a fixed transformation subspace via the unvarying planes of rotation, but with a small number of angle parameters that are actively modulated to produce vastly different transformations for different inputs. Other orthogonal matrix representations are not as amenable to modulation by a small number of parameters that can then be generated from the input. 4 BACKGROUND: ALTERNATIVE ORTHOGONAL MATRIX REPRESENTATIONS AND OPTIMIZATION 4.1 UNITARY REPRESENTATION USED IN uRNN The approach in Arjovsky et al. (2015) parameterizes its unitary matrices as a combination of unitary building blocks, such as complex reflections, complex unit diagonal scalings, permutations, and the FFT and iFFT. 4.2 OPTIMIZATION REPRESENTATION ON ORTHOGONAL MATRICES Another way to parameterize an orthogonal matrix is to start with an arbitrary orthogonal matrix and then ensure that the optimization process produces an orthogonal matrix at each time step. There are several ways of optimizing orthogonal matrices. The defining equation can be used as a regularizer on a regular matrix transition, i.e., \( \|I - Q^T Q\| \). This approach is used in some experiments in the ORNN paper, but it does not produce an exactly orthogonal matrix, and as such still suffers from exploding and vanishing gradients. Another possibility is to perform gradient descent on a matrix and reproject onto the orthogonal matrices by taking the SVD and setting the singular values to 1. The Lie algebra representation is \( Q_{t+1} = Q_t q \) with \( q = \exp(A) \) for \( A \) skew-symmetric: \( A^T + A = 0 \). The representation then optimizes over \( A \) in the skew-symmetric matrices.
This approach is used in Hyland & Rätsch (2016). The main downside of this approach is that one needs to calculate the gradient through \( \exp \), the matrix exponential, for which we are not aware of a closed-form solution; in Hyland & Rätsch (2016) the authors used finite differences to calculate the derivative of the exponential, which is incompatible with backpropagation. The recently proposed full capacity uRNN uses the Cayley transform: \( Q_{t+1} = (I + \frac{\lambda}{2}A)^{-1}(I - \frac{\lambda}{2}A)Q_t \) for \( A \) skew-symmetric (skew-Hermitian for unitary \( Q \)) to stay on the manifold of orthogonal (unitary) matrices. 5 EXPERIMENTS Our experiments seek to answer the following questions: how do the theoretically guaranteed preservation properties of the DORNN architecture contribute to practical training to learn extremely long term dependencies, and how is training time to success influenced by the length of a long term dependency? We partially address these questions on our RP-DORNN architecture on a simplified 1-bit version of the memory copy task that is commonly used to test long term dependencies in RNN architectures. The task is as follows: the input is a sequence of four-dimensional one-hot vectors (categorical variables with four categories) of length \( T + 2 \) for a given \( T > 1 \). The first input is either a 1 or 2, followed by a string of \( T \) 0's, and finally ends with a 3. All of the outputs except for the last output are arbitrary and unpenalized. The last output should be the same category as the first input (1 or 2, whichever was given). The loss is the categorical cross entropy of the last output timestep. The simplifications from the usual memory copy experiment are that the number of categories is four instead of the usual ten, that the sequence to be copied is of length one instead of the usual ten, and that the loss only takes into account the portion of the output that corresponds to the copied output (the final timestep in our case), instead of the entire output sequence. These simplifications were performed in order to increase the number of experiments that could be run by training smaller models, and to function as a minimal experiment on the effect of increasing the sequence length. The third simplification was performed in order to increase the signal to noise ratio in the loss for very long sequences, since the majority of the loss would otherwise come from the intermediate outputs for very large \( T \). The model would otherwise learn to use most of its capacity to reduce error in the irrelevant intermediate output stage instead of the relevant copied sequence stage. A useful side effect is that the entire gradient signal comes from the relevant copy output error. We successfully trained the architecture on the task with \( T = 600, 800, 1600, 2400, 3200, \) and \( 5000 \), with a minibatch size of 128. There are only two possible input and output sequences, so the mini-batch size is somewhat redundant, but the different proportion of 1 and 2 inputs at each timestep helps inject extra randomness into the training procedure. Figure 2 shows the training loss curves for each of the timesteps, compared against the baseline of outputting 1 or 2 with equal probability on the final timestep, which achieves \( \ln(2) \approx 0.693 \) loss. Figure 3 shows the accuracy of the network over training applied on the input minibatches. None of the hyperparameters were changed for scaling the timestep from 500 to 5000.
Note that for longer \( T \), training became increasingly unstable (although it was still able to succeed). A common phenomenon is that the loss would jump far above the baseline for a brief period but then return to a lower loss than before the jump. A possible explanation is that the training process becomes increasingly sensitive to small transition parameter changes as \( T \) increases, since the transition is applied \( T \) times, so after a small parameter change the final hidden state lands in a completely different region and the network has to adjust to the new regions. Training was terminated for each \( T \) after perfect accuracy had been achieved for several hundred training iterations, but the loss would continue to asymptotically approach 0 if training were to progress, as the network increases confidence in its predictions. Saxe et al. (2013) suggest that training time should be independent of sequence length for a fully linear orthogonal neural network. We observed a roughly linear training time to success with respect to \( T \). The discrepancy might be due to the nonlinearities present elsewhere in the network or to the parameterization of the orthogonal matrices.

Figure 2: Exponentially weighted moving average (with decay of 0.95) of training loss.

Figure 3: Exponentially weighted moving average (with decay of 0.95) of training accuracy.

This experiments section will be expanded as more experimental results and comparisons become available.

6 DISCUSSION

While we describe one particular orthogonal recurrent transition, the rotation plane parameterization describes a rich space of orthogonal transitions. Our input-dependent transition is expressive enough for the simplified memory copy task and can learn extremely long term dependencies, up to 5,000 time steps long. The architecture still needs to be validated on more complicated tasks.

Future work: the architecture we described here is linear, with the transition dependent on the input, but nonlinear locally orthogonal transitions are also possible, albeit discontinuous. We experimented with different locally orthogonal transitions but found them harder to train successfully (likely they are more unstable due to the discontinuity, especially amplified over hundreds or thousands of timesteps). It might be possible to find a nonlinear transition that still produces an orthogonal Jacobian and is continuous: the equations don't outright prevent it, but such a transition must satisfy the interplay of several constraints, so finding such an architecture is harder. An open question is whether it is possible to construct something like an LSTM within the space of orthogonal or unitary matrices, to get the best of both worlds. It is not clear how much expressivity in the recurrent transition is lost by having a linear input-dependent model instead of a nonlinear hidden-to-hidden transition. A possible future direction is the combination of orthogonal or unitary transitions for long term dependencies with regular RNN or LSTM units for complicated short term dependencies.
In this work we fixed the rotation planes at random initialization, but using the Cayley transform as in the full-capacity uRNN, the rotation planes could be optimized jointly; combined with our angle-based parameterization, this would give full-capacity input-dependent rotations instead of being sensitive to the randomly initialized planes.

ACKNOWLEDGMENTS

This research was funded in part by DARPA through the SIMPLEX program and the FunLOL program, and by NSF through a CAREER award. Zoe McCarthy is also supported by an NSF Graduate Fellowship. Peter Chen is also supported by a Berkeley AI Research (BAIR) lab Fellowship.

REFERENCES

Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. arXiv preprint arXiv:1511.06464, 2015.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.

Artem Chernodub and Dimitri Nowicki. Norm-preserving orthogonal permutation linear unit activation functions (OPLU). arXiv preprint, 2016.

Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pp. 103–111, 2014. URL http://arxiv.org/abs/1409.1259

Tim Cooijmans, Nicolas Ballas, César Laurent, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.

Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, pp. 1–14, 2015.

Leo Dorst, Daniel Fontijne, and Stephen Mann. Geometric Algebra for Computer Science (Revised Edition): An Object-Oriented Approach to Geometry. Morgan Kaufmann, 2009. ISBN 0123749425.

Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

Mikael Henaff, Arthur Szlam, and Yann LeCun. Orthogonal RNNs and long-memory tasks, 2016.

Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, 1991.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.

Stephanie L. Hyland and Gunnar Rätsch. Learning unitary operators with help from u(n). arXiv preprint arXiv:1607.04903, 2016.

Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, pp. 3128–3137, 2015.

David Krueger and Roland Memisevic. Regularizing RNNs by stabilizing activations. arXiv preprint arXiv:1511.08400, 2015.

Quoc V. Le, Navdeep Jaitly, and Geoffrey E. Hinton.
A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.

Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17:1–40, 2016.

Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, pp. 3104–3112, 2014.

Scott Wisdom, Thomas Powers, John R. Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. In NIPS, 2016. URL http://arxiv.org/abs/1611.00035

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideki Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
ABSTRACT

Recurrent Neural Networks (RNNs) applied to long sequences suffer from the well-known vanishing and exploding gradients problem. The recently proposed Unitary Evolution Recurrent Neural Network (uRNN) alleviates the exploding gradient problem and can learn very long dependencies, but its nonlinearities leave it still affected by the vanishing gradient problem, so learning can break down for extremely long dependencies. We propose a new RNN transition architecture where the hidden state is updated multiplicatively by a time-invariant orthogonal transformation followed by an input-modulated orthogonal transformation. There are no additive interactions, so our architecture exactly preserves the forward hidden state activation norm and the backwards gradient norm for all time steps, and is provably not affected by vanishing or exploding gradients. We propose using the rotation plane parameterization to represent the orthogonal matrices. We validate our model on a simplified memory copy task and see that it can learn dependencies as long as 5,000 timesteps.

1 INTRODUCTION

Deep neural networks have shown increasingly impressive performance on a wide variety of practical tasks. Recurrent Neural Networks (RNNs) are powerful sequence modeling tools that have found successful applications in speech recognition, natural language processing, image captioning, and many more (Sutskever et al., 2014; Bahdanau et al., 2014; Wu et al., 2016; Donahue et al., 2015; Karpathy & Li, 2015; Luong et al., 2014). One fundamental problem with RNNs is the so-called vanishing and exploding gradients problem (Hochreiter, 1991; Bengio et al., 1994; Hochreiter et al., 2001). These problems occur when training RNNs with gradient descent to model long sequences: the gradient magnitude goes towards zero or infinity, respectively, as the length of the sequence increases. Several heuristics have been proposed to alleviate these problems. The Long Short Term Memory (LSTM) of Hochreiter & Schmidhuber (1997) and the Gated Recurrent Unit (GRU) of Cho et al. (2014) have been incredibly successful recurrent transition architectures for modeling complicated dependencies over sequences up to several hundred timesteps long, and are the main RNN architectures in use today. The IRNN model modifies the standard RNN transition to initialize at the identity, which increases the length of dependencies it can model (Le et al., 2015). Stabilizing the forward hidden state norm can have a positive effect on hidden state gradient norm preservation (Krueger & Memisevic, 2015; Ba et al., 2016; Cooijmans et al., 2016). Simple gradient norm clipping during backpropagation can also help to alleviate the exploding gradient problem for these architectures (Graves, 2013). Recently, there has been a surge of interest in orthogonal and unitary transition architectures. Orthogonal and unitary transitions exactly preserve forward and gradient norms passed through them. Theory developed for linear neural networks suggests that training time can be vastly shorter when the weight matrices are orthogonal, and even independent of depth (Saxe et al., 2013). The Unitary Evolution Recurrent Neural Network (uRNN) utilizes a unitary recurrent transition followed by a contractive nonlinearity to exactly solve the exploding gradient problem and greatly alleviate the vanishing gradient problem (Arjovsky et al., 2015).
A very recent extension expands on this work to increase the expressive power of these transitions and extends the validated dependency length up to 2000 timesteps (Wisdom et al., 2016). Analytic solutions to the most common long term dependency example tasks, the memory copy and addition problems, are provided with linear orthogonal recurrent neural networks in Henaff et al. (2016). A nonlinear activation function that is locally orthogonal is proposed in Chernodub & Nowicki (2016) and shown to have great potential. This particular activation scheme, however, is discontinuous (not just nondifferentiable), and could increase optimization difficulty in some cases. An open question that we try to partially address in this work is whether a recurrent transition architecture can be fully orthogonal or unitary (and thus linear) and still learn expressive models that solve practical tasks. To address this problem, we propose the rotation plane doubly orthogonal RNN, a novel recurrent transition architecture that provably preserves the forward hidden state activation norm and the backpropagated gradient norm, and thus does not suffer from exploding or vanishing gradients. "Doubly orthogonal" refers to the fact that the transition updates the hidden state multiplicatively by a time-invariant orthogonal transformation followed by an input-modulated orthogonal transformation. "Rotation plane" refers to the parameterization we use to represent the orthogonal matrices. We evaluate our approach on a simplified 1-bit version of the memory copying task, and find that our architecture can scale to 5,000 timesteps on this task, outperforming past approaches.

2 DOUBLY ORTHOGONAL RNN ARCHITECTURE

In this section we describe our proposed architecture, which we call the Doubly Orthogonal Recurrent Neural Net (DORNN). We also show that the DORNN provably preserves the forward norm and the gradient norm. We briefly review the definition of an orthogonal or unitary matrix, since it is fundamental to the definition and properties of our transition architecture. Orthogonal matrices are defined as matrices \( Q \in \mathbb{R}^{n \times n} \) such that \( Q^T Q = I \). We restrict our attention to the special orthogonal matrices \( SO(n) \), those with \( \det(Q) = 1 \) (the rotation matrices). The set of special orthogonal matrices forms a matrix Lie group. The complex analogue of an orthogonal matrix is a unitary matrix, defined similarly as a matrix \( U \in \mathbb{C}^{n \times n} \) such that \( U^H U = I \).

2.1 RECURRENCE EQUATIONS

Our recurrence equation is defined below:
\[ h_{t+1} = R_{xh}(x_t) R_{hh} h_t \]
where \( R_{xh}(x_t) \) is an input-modulated orthogonal or unitary matrix, and \( R_{hh} \) is a time-invariant orthogonal or unitary matrix that is applied at every timestep. We parameterize \( R_{hh} \) by \( \theta_{hh} \) and \( R_{xh} \) by \( \phi_{xh}(x_t) \), where \( \phi \) is a function of the input \( x_t \). Figure 1 shows a graphical depiction of this transition architecture. Our transition differs from most recurrent neural network transitions in that it is fully multiplicative and contains no additive terms. The input enters the equation by modulating the rotation applied to the hidden state. This allows for more expressivity in the hidden transition than a constant linear transition, though not as much as a nonlinear hidden-state-dependent transition.
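To make the recurrence concrete, here is a minimal numerical sketch we add (the QR- and matrix-exponential-based constructions below are generic stand-ins for any orthogonal parameterization, not the paper's rotation plane parameterization of Section 3):

```python
import numpy as np
from scipy.linalg import expm

n = 8
rng = np.random.default_rng(0)

# Fixed orthogonal R_hh: orthogonalize a random Gaussian matrix via QR.
R_hh, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Input-modulated orthogonal R_xh(x): here built by exponentiating an
# input-dependent skew-symmetric matrix (any orthogonal map suffices for
# the norm argument below).
W = rng.standard_normal((n, n, 2))        # maps a 2-d input to an n x n matrix

def R_xh(x):
    M = W @ x                             # n x n matrix depending on x
    A = M - M.T                           # skew-symmetric => expm(A) orthogonal
    return expm(A)

h = rng.standard_normal(n)
h /= np.linalg.norm(h)                    # start with unit norm
for t in range(1000):
    x_t = rng.standard_normal(2)
    h = R_xh(x_t) @ (R_hh @ h)            # h_{t+1} = R_xh(x_t) R_hh h_t

print(np.linalg.norm(h))                  # still 1.0 up to numerical precision
```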
By having an input-dependent transition, the hidden dynamics are more complicated than a constant linear transition and are nonlinear with respect to the input \( x_t \). Linear time-varying models can express very complicated policies, as in Levine et al. (2016). Our model can be viewed as a linear orthogonal time-varying model where the variation over time is due to the different input signals applied.

2.2 FORWARD ACTIVATION NORM AND GRADIENT NORM PRESERVATION

Figure 1: DORNN transition model. \( R_{hh} \) and \( R_{xh} \) are orthogonal transformations, parameterized by \( \theta_{hh} \) and \( \phi_{xh} \), respectively. The parameters \( \phi_{xh} \) are a function of the input \( x_t \).

Here we prove that this recurrent transition architecture exactly preserves (up to numerical precision) the forward hidden state activation norm and the backwards gradient norm. All proofs carry over to unitary matrices in place of orthogonal matrices when transposes are replaced with Hermitian transposes.
\[ \|h_{t+1}\| = \|R_{xh}(x_t)R_{hh}h_t\| = \|R_{combined}(x_t)h_t\| = \|h_t\| \]
where \( R_{combined} = R_{xh}(x_t)R_{hh} \). The last equality follows since the orthogonal matrices form a group, so \( R_{combined} \in SO(n) \), and \( \|Qv\| = \|v\| \) for any \( Q \in SO(n) \) and any \( v \in \mathbb{R}^n \): \( \|Qv\| = \sqrt{v^T Q^T Q v} = \sqrt{v^T I v} = \sqrt{v^T v} = \|v\| \) by the definition of \( Q \) (orthogonal matrices preserve norm).

Now let \( C(h_T) \) be a scalar cost function. The vanishing gradient problem occurs if \( \| \frac{\partial C}{\partial h_1} \| \to 0 \) as \( T \to \infty \), and the exploding gradient problem occurs if \( \| \frac{\partial C}{\partial h_1} \| \to \infty \) as \( T \to \infty \).
\[ \frac{\partial C}{\partial h_t} = \frac{\partial C}{\partial h_T} \frac{\partial h_T}{\partial h_t} = \frac{\partial C}{\partial h_T} \prod_{i=t}^{T-1} \frac{\partial h_{i+1}}{\partial h_i} = \frac{\partial C}{\partial h_T} \prod_{i=t}^{T-1} R_{hh}^T R_{xh}(x_i)^T \]
and so
\[ \left\| \frac{\partial C}{\partial h_t} \right\| = \left\| \frac{\partial C}{\partial h_T} \prod_{i=t}^{T-1} R_{hh}^T R_{xh}(x_i)^T \right\| = \left\| \frac{\partial C}{\partial h_T} \right\| \]
where the last equality follows from \( \left(\prod_{i=t}^{T-1} R_{hh}^T R_{xh}(x_i)^T\right) \in SO(n) \): an orthogonal matrix's transpose is its inverse, and the inverse of a group element is in the group. So the norm of the gradient of the cost \( C \) at hidden state \( h_t \) is the same as the norm of the gradient at the final hidden state \( h_T \), and the transition does not suffer from vanishing or exploding gradients.

3 ROTATION PLANE DOUBLY ORTHOGONAL RNN

Within the Doubly Orthogonal RNN, there is a choice in how the orthogonal (alternatively, unitary) transformations are parameterized. This choice determines the number of parameters, how the gradient propagates from the hidden state to the input, and much more. There is a wide variety of possible DORNN architectures, since there is a wide variety of ways to parameterize orthogonal and unitary matrices, each with its own pros and cons. We provide a particular instantiation of the Doubly Orthogonal RNN by parameterizing the orthogonal matrices as compositions of many plane rotations. We call this RNN architecture the Rotation Plane Doubly Orthogonal RNN, or RP-DORNN.
We note that while we focus on this architecture within the context of a recurrent neural network, the rotation plane parameterization of orthogonal matrices could be equally useful for parameterizing very deep feedforward weight matrices.

3.1 ROTATION PLANE REPRESENTATION OF ORTHOGONAL MATRICES

First we show how we parameterize a single plane rotation. The full architecture is generated by composing a sequence of these plane rotations. It is well known in numerical linear algebra that any orthogonal matrix \( Q \in SO(n) \) can be generated as the product of \( n \) Householder reflection matrices \( H = I - 2 \frac{uu^T}{\|u\|^2_2} \) for nonzero vectors \( u \in \mathbb{R}^n \). Work in Geometric Algebra (see, for example, chapter 6 of Dorst et al. (2009)) gives further insight into this parameterization. Two successive Householder reflections generate a rotation in the plane spanned by the two reflection vectors, by twice the angle between them. We can use this to parameterize rotation planes in terms of the desired angle of rotation, by generating two reflection vectors that produce the rotation. By rotating in several planes in sequence, we can generate arbitrary orthogonal matrices. Thus we can view the rotation angle in a given plane either as a parameter to be optimized or as an input from below in the neural network; we utilize both in our proposed recurrent architecture.

Concretely, for a plane spanned by two orthonormal vectors \( w_0 \) and \( w_1 \), we generate a rotation of angle \( \theta \) from \( w_0 \) towards \( w_1 \) in the \( w_0 \)-\( w_1 \) plane by generating the following two reflection vectors \( v_0 \) and \( v_1 \) and composing their reflections:
\[ v_0 = w_0 \]
\[ v_1 = \cos(\theta/2)w_0 + \sin(\theta/2)w_1 \]
so that \( R_\theta = (I - 2v_1v_1^T)(I - 2v_0v_0^T) \). We do not need to divide by the magnitudes of \( v_0 \) and \( v_1 \), since by construction they are unit vectors. When we apply \( R_\theta \) to a vector or a batch of vectors \( B \), we never have to form the dense \( R_\theta \) matrix, since
\[ R_\theta B = (I - 2v_1v_1^T)(I - 2v_0v_0^T)B = (I - 2v_1v_1^T)(B - 2v_0(v_0^T B)) = B - 2v_1(v_1^T B) - 2v_0(v_0^T B) + 4v_1(v_1^T v_0)(v_0^T B), \]
so we only need matrix-vector products whose intermediate dimensions are much smaller than the dense \( R_\theta \) matrix. In the next section we treat \( w_0, w_1 \) as random constant orthonormal vectors and treat \( \theta \) as a parameter or a function of the inputs.

3.2 RP-DORNN

We generate \( R_{hh} \) and \( R_{xh} \) as sequences of plane rotations in the Rotation Plane Doubly Orthogonal Recurrent Neural Network (RP-DORNN):
\[ R_{hh} = \prod_{i=1}^k R_{\theta_i} \quad \text{and} \quad R_{xh}(x_t) = \prod_{i=1}^l R_{\phi_i(x_t)} \]
for \( l, k \leq \lfloor \frac{n}{2} \rfloor \). Each \( R_{\theta_i} \) and \( R_{\phi_i} \) is a plane rotation in a randomized orthogonal plane (the planes within \( R_{hh} \) are orthogonal to one another, and the planes within \( R_{xh} \) are orthogonal to one another, but the planes from \( R_{hh} \) intersect the ones from \( R_{xh} \) randomly), parameterized by the angle of rotation. For \( R_{\phi_i} \), the angle of rotation is a function of \( x_t \), and for \( R_{\theta_i} \) it is a learned parameter.
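As an illustration of the matrix-free application of \( R_\theta \) (a sketch we add in Python/NumPy; function names and the sample angle are our own choices):

```python
import numpy as np

def reflect(v, h):
    """Householder reflection (I - 2 v v^T) h for a unit vector v."""
    return h - 2.0 * v * (v @ h)

def plane_rotation(h, w0, w1, theta):
    """Rotate h by theta from w0 toward w1 (w0, w1 orthonormal), via the
    two-reflection identity R_theta = (I - 2 v1 v1^T)(I - 2 v0 v0^T)."""
    v0 = w0
    v1 = np.cos(theta / 2.0) * w0 + np.sin(theta / 2.0) * w1
    return reflect(v1, reflect(v0, h))    # rightmost reflection applied first

# Random orthonormal rotation planes from the QR of a Gaussian matrix,
# taking consecutive column pairs, as in the paper's initialization.
n, rng = 8, np.random.default_rng(0)
P, _ = np.linalg.qr(rng.standard_normal((n, n)))
h = rng.standard_normal(n)
for i in range(n // 2):                   # compose k = n/2 plane rotations
    h_new = plane_rotation(h, P[:, 2 * i], P[:, 2 * i + 1], theta=0.3)
    assert np.isclose(np.linalg.norm(h_new), np.linalg.norm(h))
    h = h_new
```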
reject
Reject
4.333333
58d870d1f26dc573de3c5f3c3e7fde857af2669d
iclr
2,017
GENERATIVE ADVERSARIAL NETWORKS AS VARIATIONAL TRAINING OF ENERGY BASED MODELS

Shuangfei Zhai, Binghamton University, Vestal, NY 13902, USA, szhai2@binghamton.edu
Yu Cheng, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA, chengyu@us.ibm.com
Rogerio Feris, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA, rsferis@us.ibm.com
Zhongfei (Mark) Zhang, Binghamton University, Vestal, NY 13902, USA, zhongfei@cs.binghamton.edu

ABSTRACT

In this paper, we study deep generative models for effective unsupervised learning. We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density \( p(x) \) is approximated by a variational distribution \( q(x) \) that is easy to sample from. The training of VGAN takes a two-step procedure: given \( p(x) \), \( q(x) \) is updated to maximize the lower bound; \( p(x) \) is then updated one step with samples drawn from \( q(x) \) to decrease the lower bound. VGAN is inspired by generative adversarial networks (GANs), where \( p(x) \) corresponds to the discriminator and \( q(x) \) corresponds to the generator, but with several notable differences; we hence name our model variational GANs (VGANs). VGAN provides a practical solution to training deep EBMs in high dimensional space, by eliminating the need for MCMC sampling. From this view, we are also able to identify causes of the difficulty of training GANs and propose viable solutions.\footnote{Experimental code is available at https://github.com/Shuangfei/vgan}

1 INTRODUCTION

Unsupervised learning is a long standing challenge of machine learning and deep learning. One major difficulty of effective unsupervised learning lies in the lack of an accurate distance metric. Euclidean distance has been widely adopted as the default metric, from shallow methods such as K-means and Gaussian mixture models to deep models such as autoencoder variants (e.g., Vincent et al. (2010)). From a probabilistic point of view, the use of Euclidean distance assumes Gaussian distributions (or mixtures thereof) in the input space, which is a strong assumption and is oftentimes inaccurate for high dimensional data (such as images). Generative adversarial networks (GANs) (Goodfellow et al., 2014) are a particularly interesting approach as they do not assume the data distribution to take any specific form, which therefore eliminates the need for a predefined distance metric on samples. GANs work as a mini-max two player game, where a generator \( G(z) \) is trained to generate samples that can fool the best discriminator \( D \). When both \( G \) and \( D \) are formulated as deep convolutional networks, it is shown that the generator can learn to generate surprisingly realistic-looking images (Radford et al., 2015). Energy-based models (EBMs) (LeCun et al., 2006) are another powerful family of unsupervised learning models. Similarly to GANs, EBMs make minimal assumptions about the data distribution, as they directly parameterize the negative log density of data, \( E(x) = -\log p(x) \), as a deterministic function of \( x \). It is clear that by properly choosing the capacity of \( E(x) \), an EBM can be trained to approximate an arbitrary density function perfectly well. In this paper, we propose VGAN, which bridges GANs and EBMs and combines the benefits of both worlds. In particular, we show that the mini-max game of GANs is approximately equivalent to minimizing a variational lower bound of the negative log likelihood (NLL) of an EBM.
To be more concrete, the energy \( E(\mathbf{x}) \) corresponds to \( -\log D(\mathbf{x}) \), and the generator \( G(\mathbf{z}) \) defines a parameterized sampler from the model distribution defined by \( p(\mathbf{x}) = \frac{e^{-E(\mathbf{x})}}{\int_{\mathbf{x}} e^{-E(\mathbf{x})} d\mathbf{x}} \). From this view, GANs provide a viable solution for the maximum likelihood estimation of EBMs, which is known to be challenging due to the difficulty of evaluating the partition function, an integral over the input space. We discuss the important design choices of the energy functions needed to make VGAN numerically stable, and propose a novel energy formulation that is bounded and explicitly multi-modal. Moreover, from the EBM point of view, we are also able to identify a reason that makes GANs unstable to train: the missing entropy term of the generator distribution, which causes the generator to collapse to a single or a few local minima of the energy landscape. As a solution, we propose to parameterize the generator as a transition distribution (that is, \( p_g(\tilde{\mathbf{x}}|\mathbf{x}) \) instead of \( p_g(\mathbf{x}) \)), in analogy to the one used in the Gibbs sampling procedure. We show that this variant corresponds to a variational version of contrastive divergence (Hinton, 2002a), and circumvents the need to directly approximate the cumbersome entropy term. In our experiments on MNIST, CIFAR10, and SVHN, we show that we are able to learn generators that generate sharp and diversified images. Moreover, the learned transition distributions are able to effectively capture the data manifold by consecutively sampling realistic-looking samples starting from testing images. Finally, as a quantitative evaluation of the learned model, we use the transition distribution for data augmentation, and show consistent gains in classification accuracy with few training labels on MNIST and SVHN.

2 GENERATIVE ADVERSARIAL NETWORKS

Generative adversarial networks (Goodfellow et al., 2014) work by solving the following mini-max game:
\[ \max_G \min_D \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[-\log D(\mathbf{x})] - \mathbb{E}_{\mathbf{z} \sim p_z(\mathbf{z})}[\log(1 - D(G(\mathbf{z})))] \tag{1} \]
where \( p_{data}(\mathbf{x}) \) is the data distribution; \( D(\mathbf{x}) \) is the discriminator, which takes a sample as input and outputs a scalar in \( [0, 1] \); \( G(\mathbf{z}) \) is the generator, which maps a sample \( \mathbf{z} \in \mathbb{R}^d \) drawn from a simple distribution \( p_z(\mathbf{z}) \) to the input space. Typically both \( D \) and \( G \) are parameterized as deep neural networks. Equation 1 suggests a training procedure consisting of two loops: in the inner loop, \( D \) is trained till convergence given \( G \); in the outer loop, \( G \) is updated one step given \( D \) (note that Goodfellow et al. (2014) propose to maximize \( \log(D(G(\mathbf{z}))) \) instead of \( -\log(1 - D(G(\mathbf{z}))) \) in the outer loop). As the two-player mini-max game reaches the Nash equilibrium, \( G \) defines an implicit distribution \( p_g(\mathbf{x}) \) that recovers the data distribution, i.e., \( p_g(\mathbf{x}) = p_{data}(\mathbf{x}) \).

3 VARIATIONAL TRAINING OF DEEP-EBMs

An EBM formulates a density function as:
\[ p(\mathbf{x}) = \frac{e^{-E(\mathbf{x})}}{\int_{\mathbf{x}} e^{-E(\mathbf{x})} d\mathbf{x}}, \tag{2} \]
where \( E(\mathbf{x}) \) is defined as the energy of input \( \mathbf{x} \). One particularly powerful case is deep energy based models (deep-EBMs) (Ngiam et al., 2011; Zhai et al., 2016),
where \( E(\mathbf{x}) \) is directly parameterized as the output of a deep deterministic neural network. An obvious way to train an EBM is to minimize the negative log likelihood (NLL):
\[ J(E) = \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[E(\mathbf{x})] + \log \int_{\mathbf{x}} e^{-E(\mathbf{x})} d\mathbf{x}. \tag{3} \]
Directly minimizing \( J(E) \) is difficult due to the integration term over the input space. As a remedy, one can rewrite Equation 3 as follows:
\[ \begin{aligned} J(E) &= \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[E(\mathbf{x})] + \log \int_{\mathbf{x}} q(\mathbf{x}) \frac{e^{-E(\mathbf{x})}}{q(\mathbf{x})} d\mathbf{x} \\ &= \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[E(\mathbf{x})] + \log \mathbb{E}_{\mathbf{x} \sim q(\mathbf{x})}\Big[\frac{e^{-E(\mathbf{x})}}{q(\mathbf{x})}\Big] \\ &\geq \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[E(\mathbf{x})] + \mathbb{E}_{\mathbf{x} \sim q(\mathbf{x})}\Big[\log \frac{e^{-E(\mathbf{x})}}{q(\mathbf{x})}\Big] \\ &= \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[E(\mathbf{x})] - \mathbb{E}_{\mathbf{x} \sim q(\mathbf{x})}[E(\mathbf{x})] + H(q), \end{aligned} \tag{4} \]
where \( q(\mathbf{x}) \) is an arbitrary distribution, which we call the *variational distribution*, and \( H(q) \) denotes its entropy. Equation 4 is a natural application of Jensen's inequality, and it gives a variational lower bound of the NLL for any \( q(\mathbf{x}) \). The lower bound is tight when \( \frac{e^{-E(\mathbf{x})}}{q(\mathbf{x})} \) is a constant independent of \( \mathbf{x} \), i.e., \( q(\mathbf{x}) \propto e^{-E(\mathbf{x})} \), which implies that \( q(\mathbf{x}) = p(\mathbf{x}) \). This suggests the following optimization procedure:
\[ \min_E \max_q \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[E(\mathbf{x})] - \mathbb{E}_{\mathbf{x} \sim q(\mathbf{x})}[E(\mathbf{x})] + H(q), \tag{5} \]
where in the inner loop, given the energy model \( E(\mathbf{x}) \), the variational lower bound is maximized w.r.t. \( q \); the energy model is then updated one step to decrease the NLL with the optimal \( q \).

4 VARIATIONAL GENERATIVE ADVERSARIAL NETWORKS

In practice, \( q(\mathbf{x}) \) can be chosen as a distribution that is easy to sample from and differentiable; the inner loop can then be achieved by simple stochastic gradient descent. It turns out that the generator used in GANs exactly satisfies these requirements, which directly connects GANs to EBMs. To see this, replace \( q(\mathbf{x}) \) with the generator distribution \( p_g(\mathbf{x}) \) (that is, the implicit distribution produced by \( \mathbf{x} = G(\mathbf{z}), \mathbf{z} \sim p_z(\mathbf{z}) \)); then Equation 5 turns into:
\[ \min_E \max_G \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[E(\mathbf{x})] - \mathbb{E}_{\mathbf{x} \sim p_g(\mathbf{x})}[E(\mathbf{x})] + H(p_g). \tag{6} \]
If we further let \( E(\mathbf{x}) = -\log D(\mathbf{x}) \), this becomes:
\[ \min_D \max_G \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[-\log D(\mathbf{x})] - \mathbb{E}_{\mathbf{z} \sim p_z(\mathbf{z})}[-\log D(G(\mathbf{z}))] + H(p_g). \tag{7} \]
One can now immediately recognize the resemblance of Equation 7 to Equation 1. Both take the form of a mini-max optimization problem, where \( D \) is trained to increase \( D(\mathbf{x}) \) for \( \mathbf{x} \sim p_{data}(\mathbf{x}) \) and decrease \( D(\mathbf{x}) \) for \( \mathbf{x} \sim p_g(\mathbf{x}) \), while \( G \) is trained to increase \( D(\mathbf{x}) \) for \( \mathbf{x} \sim p_g(\mathbf{x}) \).
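The inequality in Equation 4 amounts to \( -\mathbb{E}_q[E(\mathbf{x})] + H(q) \leq \log \int e^{-E(\mathbf{x})} d\mathbf{x} \), with equality at \( q = p \); this is easy to check numerically on a toy discrete space (a sketch we add for illustration; the size \( K = 10 \) and the Dirichlet choice of \( q \) are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10
E = rng.standard_normal(K)               # energies over a toy discrete space
q = rng.dirichlet(np.ones(K))            # an arbitrary variational distribution
log_Z = np.log(np.sum(np.exp(-E)))       # exact log partition function
bound = np.sum(q * (-E - np.log(q)))     # -E_q[E(x)] + H(q), the Jensen bound
p = np.exp(-E - log_Z)                   # the EBM density p(x)
tight = np.sum(p * (-E - np.log(p)))     # the same bound evaluated at q = p
print(bound <= log_Z, np.isclose(tight, log_Z))   # True True
```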
In other words, GANs behave similarly to variational training of an EBM, where the variational distribution \( q(\mathbf{x}) \) takes the specific form of \( p_g(\mathbf{x}) \), which is easy to sample from. In light of this connection, we call the family of models defined by Equation 6 variational generative adversarial networks (VGANs). The nice property of VGANs over traditional EBM training strategies is that they simplify the sampling procedure by defining a differentiable, deterministic mapping from a simple distribution (e.g., a uniform distribution) to the input space. Compared with GANs, VGANs differ in four aspects:

1. **The order of minimization and maximization.** GANs optimize \( D \) till convergence given \( G \), while VGANs optimize \( G \) till convergence given \( D \). The outcome of a GAN is then a generator that can fool the best discriminator possible, while the outcome of a VGAN is an energy model parameterized by \( D \), together with a variational distribution \( G \) that can sample from the exact distribution defined by \( D \). Also note that with the optimization procedure of GANs, there is no guarantee that \( D \) defines a viable EBM, as the variational lower bound can be arbitrarily low due to the swapping of the min and max loops.

2. **The parameterization of energy.** GANs use a specific parameterization of energy, \( E(\mathbf{x}) = -\log D(\mathbf{x}) \), which is lower bounded by 0. This differs from that of an RBM with binary visible and hidden units, one of the most popular EBMs: an RBM has free energy \( E(\mathbf{x}) = -\mathbf{b}_v^T \mathbf{x} - \sum_{j=1}^K \log(1 + e^{\mathbf{W}_j^T \mathbf{x} + \mathbf{b}_{h,j}}) \), which is unbounded. As the goal of training an EBM is to minimize the energy of training data, this difference is significant in practice. To see this, note that the optimum in Equation 5 is invariant w.r.t. an affine transformation of the energy; that is, if \( E^*(\mathbf{x}) \) is an optimal solution to Equation 5, then \( \tilde{E}(\mathbf{x}) = a E^*(\mathbf{x}) + b \) is also an optimal solution for \( a \in \mathbb{R}^+, b \in \mathbb{R} \). This property makes unbounded energies inappropriate for VGANs, as they often cause the scale of the energy to explode. Even worse, an energy parameterization like the RBM's has stronger gradients as the energy decreases, which essentially encourages the energy of both training samples and generated samples to grow to negative infinity.

3. **The optimal energy assignment.** A related problem to the energy parameterization is that, when optimizing \( D \), the term inside the expectation under \( p_z(\mathbf{z}) \) is \( \log(1 - D(G(\mathbf{z}))) \) for GANs, whereas VGANs use \( -\log D(G(\mathbf{z})) \). While both have the same direction of gradients w.r.t. \( D(\mathbf{x}) \) and \( D(G(\mathbf{z})) \) (increasing the former and decreasing the latter), and the optimal solution to both models is reached when \( p_{data}(\mathbf{x}) = p_g(\mathbf{x}) \), they differ in the optimal \( D \). The optimal \( D \) for a GAN is fixed at \( D(\mathbf{x}) = 0.5 \).

4. **The entropy term of the generator distribution \( H(p_g) \).** Last but not least, GANs do not include the entropy term while optimizing \( G \). In VGANs, including the entropy term guarantees that \( p_g(\mathbf{x}) \) recovers the density encoded by \( D \), and that the variational lower bound is tightened in the inner loop. Without the entropy term, \( G \) can be easily but misleadingly optimized by collapsing into one of the few local minima of the energy landscape.
In fact, this accounts for most of the failures of training GANs, as pointed out in the GAN literature (Radford et al., 2015; Salimans et al., 2016; Kim & Bengio, 2016; Zhao et al., 2016). Of course, an immediate challenge that one needs to solve is the approximation of \( H(p_g) \). This amounts to the well known problem of differentiable entropy approximation (see Hyvarinen (1999), for example). The fact that the approximation needs not only to be accurate, but also to be easily optimized w.r.t. \( G \), makes it even more intimidating and cumbersome.

5 BOUNDED MULTI-MODAL ENERGY

Our first strategy for stabilizing the generator is to design a well behaved energy such that the generator can be easily optimized. We start by noticing that an energy of the form \( -\log D(\mathbf{x}) \) is inherently uni-modal. To see this, let \( D(\mathbf{x}) = \sigma(\mathbf{w}^T \phi(\mathbf{x}) + b) \), where \( \sigma \) is the sigmoid function and \( \phi(\mathbf{x}) \) denotes a feature mapping of \( \mathbf{x} \) encoded by a deep neural network. Then, in order to maximize \( D(\mathbf{x}) \) so as to minimize the energy, all samples \( \mathbf{x} \) are encouraged to be projected so that \( \phi(\mathbf{x}) \) is proportional to the weight vector \( \mathbf{w} \). This is not a problem with proper regularization by \( H(p_g) \), maximizing which diversifies \( \phi(\mathbf{x}) \); but with no entropy term, or a poor approximation of it, the generator may collapse. Consequently, we propose a bounded multi-modal energy formulation:
\[ E(\mathbf{x}) = \sum_{j=1}^K H(\sigma(\mathbf{W}_j^T \phi(\mathbf{x}) + \mathbf{b}_j)), \tag{8} \]
where \( \mathbf{W}_j \in \mathbb{R}^d, \mathbf{b}_j \in \mathbb{R} \), \( \phi(\mathbf{x}) \) is the feature mapping, and \( H(p) \) is slightly overloaded to denote the entropy of the binomial distribution defined by \( p \), i.e., \( H(p) = -p \log p - (1-p) \log(1-p) \). This energy formulation can be viewed as an instance of the product of experts model (PoE) (Hinton, 2002b), where each set of parameters \( \mathbf{W}_j, \mathbf{b}_j \) defines an expert. The nice properties of this energy parameterization are that it is 1) bounded in \( [0, K] \), 2) equipped with strong gradients in the high energy area (\( \sigma(\cdot) \) close to 0.5) and vanishing gradients in the low energy area (\( \sigma(\cdot) \) close to 0 or 1), and 3) multi-modal by design. To see the last point, simply note that \( H(p) \) achieves its minimum at \( p = 0 \) and \( p = 1 \); thus for such a PoE energy with \( K \) experts, there exist \( 2^K \) equally likely local minima by design. With this energy formulation, it is also relatively easy to come up with a reasonable approximation of \( H(p_g) \), which we choose as:
\[ \tilde{H}(p_g) = \sum_{j=1}^K H\Big(\frac{1}{N} \sum_{i=1}^N \sigma(\mathbf{W}_j^T \phi(G(\mathbf{z}^i)))\Big), \tag{9} \]
where \( \mathbf{z}^i \) denotes the \( i \)th noise sample. Although there is no theoretical guarantee that Equation 9 recovers the true entropy \( H(p_g) \) to any extent, maximizing it serves the same purpose of encouraging the generated samples to be diverse, as it reaches its minimum if \( G(\mathbf{z}) \) collapses to one single point. Moreover, in the outer loop, while minimizing the NLL w.r.t. \( E \), we find it helpful to also maximize \( \tilde{H}(p_{data}) = \sum_{j=1}^K H(\frac{1}{N} \sum_{i=1}^N \sigma(\mathbf{W}_j^T \phi(\mathbf{x}^i))) \), where \( \mathbf{x}^i \) denotes the \( i \)th training example; this acts as a regularizer on \( E \), encouraging the average activation of each expert \( \sigma_j(\cdot) \) to stay close to 0.5.
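A minimal NumPy sketch of Equations 8 and 9 (our illustration; the feature mapping \( \phi(\mathbf{x}) \) is assumed to be given as an array, and all names are our own):

```python
import numpy as np

def binary_entropy(p, eps=1e-7):
    """Elementwise binomial entropy H(p), clipped for numerical stability."""
    p = np.clip(p, eps, 1.0 - eps)
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)

def energy(phi_x, W, b):
    """Bounded multi-modal energy, Equation 8: sum_j H(sigma(W_j^T phi(x) + b_j)).
    Bounded in [0, K], with 2^K minima by design. phi_x: (batch, d) features."""
    p = 1.0 / (1.0 + np.exp(-(phi_x @ W + b)))    # (batch, K) expert activations
    return binary_entropy(p).sum(axis=1)           # (batch,) energies

def entropy_proxy(phi_batch, W, b):
    """Entropy approximation, Equation 9: binomial entropy of the batch-averaged
    expert activations, maximized to encourage diverse generations."""
    p = 1.0 / (1.0 + np.exp(-(phi_batch @ W + b)))
    return binary_entropy(p.mean(axis=0)).sum()

# toy usage with random features (K = 100 experts, as in Section 7.1)
rng = np.random.default_rng(0)
phi = rng.standard_normal((32, 64))
W, b = rng.standard_normal((64, 100)), np.zeros(100)
print(energy(phi, W, b).shape, float(entropy_proxy(phi, W, b)))
```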
The training algorithm of VGAN with the proposed bounded multi-modal energy is summarized in Algorithm 1.

Algorithm 1 The optimization procedure of VGAN
1: for number of training iterations do
2:   for k steps do
3:     sample N noise data \( \{z^1, \ldots, z^N\} \); update G by one step of gradient ascent on \[ -\frac{1}{N} \sum_{i=1}^N E(G(z^i)) + \tilde{H}(p_g) \]
4:   end for
5:   sample N training data \( \{x^1, \ldots, x^N\} \); sample N noise data \( \{z^1, \ldots, z^N\} \);
6:   update E by one step of gradient descent on \[ \frac{1}{N} \sum_{i=1}^N E(x^i) - \frac{1}{N} \sum_{i=1}^N E(G(z^i)) - \tilde{H}(p_{data}) \]
7: end for

6 VARIATIONAL CONTRASTIVE DIVERGENCE WITH TRANSITION DISTRIBUTIONS

Although it is possible to discourage the generator from collapsing into a single output by carefully designing the energy function, as described in Section 5, there is no good way to monitor the quality of the entropy approximation other than manually inspecting the generated samples. Also, there is no guarantee that the designed approximation is accurate enough for the variational lower bound to be tight enough to provide correct gradients for updating the energy parameters. In this section, we propose an additional approach that bypasses the cumbersome entropy approximation problem. The idea is that instead of generating samples directly from \( p_z(\mathbf{z}) \), we define a transition operator \( p_g(\tilde{\mathbf{x}}|\mathbf{x}) \) conditioned on a training sample \( \mathbf{x} \). This corresponds to defining the variational distribution \( q \) in Equation 5 as \( q(\tilde{\mathbf{x}}) = \int_{\mathbf{x}} p_{data}(\mathbf{x}) p_g(\tilde{\mathbf{x}}|\mathbf{x}) d\mathbf{x} \). If we further restrict the transition distribution \( p_g(\tilde{\mathbf{x}}|\mathbf{x}) \) to be closely centered around \( \mathbf{x} \), then the entropy term \( H(q) \) can be well approximated by the data entropy \( H(p_{data}) \), which is a constant. The variational lower bound is thus increased by increasing the energy for \( \tilde{\mathbf{x}} \sim p_g(\tilde{\mathbf{x}}|\mathbf{x}) \). Of course, this parameterization limits the shape of the variational distribution, and the variational lower bound might never be tightened, especially in the early stage of training when the model distribution differs significantly from the data distribution; it nonetheless provides meaningful gradients for updating the energies. In fact, this sampling procedure is closely related to contrastive divergence (CD) (Hinton, 2002a) (one-step CD, to be exact), whereas in CD the transition distribution can be obtained easily from specific types of EBMs (e.g., RBMs). Our approach, on the other hand, uses a parameterized variational distribution to approximate the true transition distribution; we thus name it *variational contrastive divergence* (VCD). The implementation of \( p_g(\tilde{\mathbf{x}}|\mathbf{x}) \) is illustrated in Figure 1. Let \( \mathbf{h} = Encode(\mathbf{x}) \) be an encoder that maps an input \( \mathbf{x} \) to a bottleneck vector \( \mathbf{h} \in \mathbb{R}^d \), and let \( \hat{\mathbf{x}} = Decode(\mathbf{h}) \) be the output of a decoder that maps \( \mathbf{h} \) back to the input space. A sample from \( p_g(\tilde{\mathbf{x}}|\mathbf{x}) \) is then drawn as follows: 1) generate a binary mask vector \( \mathbf{m} \in \mathbb{R}^d \), each entry Bernoulli with probability 0.5; 2) generate a noise vector \( \mathbf{z} \sim p_z(\mathbf{z}), \mathbf{z} \in \mathbb{R}^d \); 3) produce a mixed vector \( \tilde{\mathbf{h}} = \mathbf{m} * \mathbf{z} + (1 - \mathbf{m}) * \mathbf{h} \); and 4) obtain the generated sample by passing \( \tilde{\mathbf{h}} \) through the same decoder, \( \tilde{\mathbf{x}} = Decode(\tilde{\mathbf{h}}) \).
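A sketch of this sampling procedure (our illustration; the encoder and decoder below are stand-in lambdas for the trained autoencoder, and the uniform noise range follows the experimental setup in Section 7.2):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
# placeholder encoder/decoder, standing in for the learned autoencoder
encode = lambda x: np.tanh(x[:, :d])          # (batch, d) bottleneck code
decode = lambda h: np.repeat(h, 2, axis=1)    # map back to input space

def transition_sample(x):
    """Draw x_tilde ~ p_g(x_tilde | x): mix the code h with noise z via a
    random binary mask m (p = 0.5), then decode the mixed code."""
    h = encode(x)
    m = rng.integers(0, 2, size=h.shape)      # binary mask vector
    z = rng.uniform(-1.0, 1.0, size=h.shape)  # noise matched to the tanh range
    h_tilde = m * z + (1 - m) * h
    return decode(h_tilde)

x = rng.standard_normal((4, 2 * d))
print(transition_sample(x).shape)             # (4, 32)
```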
The generator then tries to minimize the following objective:
\[ \rho \, \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x}), \tilde{\mathbf{x}} \sim p_g(\tilde{\mathbf{x}}|\mathbf{x})}[E(\tilde{\mathbf{x}})] + (1 - \rho) \, \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[\|\hat{\mathbf{x}} - \mathbf{x}\|_2^2]. \tag{10} \]
The generator can be considered a regularized autoencoder, where decreasing the first term of Equation 10 encourages the EBM to assign low energy to samples generated from \( \tilde{\mathbf{h}} \).

Figure 1: Illustration of VGAN with variational contrastive divergence. On the left panel, energies of real data \( \mathbf{x} \) and generated data \( \tilde{\mathbf{x}} \) are computed, with the generator shown on the right. For the generator on the right panel, each \( \mathbf{x} \sim p_{data}(\mathbf{x}) \) is passed through an encoder to obtain \( \mathbf{h} \), which is then passed through a decoder to obtain a reconstruction \( \hat{\mathbf{x}} \). \( \mathbf{h} \) is also mixed with a noise vector \( \mathbf{z} \) of the same dimensionality via a randomly generated binary mask vector \( \mathbf{m} \), following \( \tilde{\mathbf{h}} = \mathbf{m} * \mathbf{z} + (1 - \mathbf{m}) * \mathbf{h} \); \( \tilde{\mathbf{h}} \) is then passed through the same decoder to obtain the generated sample \( \tilde{\mathbf{x}} \).

The choice of the generation formula for \( \tilde{\mathbf{h}} \) is critical. Randomly replacing half of the dimensions of \( \mathbf{h} \) with random noise \( \mathbf{z} \) makes sure that \( \tilde{\mathbf{h}} \) is sufficiently different from \( \mathbf{h} \); otherwise, the autoencoder can easily denoise \( \tilde{\mathbf{h}} \) and make \( \tilde{\mathbf{x}} \) collapse back to \( \mathbf{x} \), regardless of \( \mathbf{z} \). Also, mixing noise into the bottleneck layer of an autoencoder makes the generation process easier, as it is known that with high level features the mixing rate of MCMC sampling is significantly higher than in the input space (Bengio et al., 2013). In addition, the formulation of our transition operator does not make any Gaussian (or mixture of Gaussians) distribution assumptions, despite the use of the reconstruction error. This is due to the use of a deep decoder, so the generated sample can be far from the sample conditioned on in Euclidean distance. This conjecture is also supported by our experiments (see Figure 5). The training algorithm for VCD is summarized in Algorithm 2.

Algorithm 2 The optimization procedure of VCD
1: for number of training iterations do
2:   for k steps do
3:     sample N training data \( \{x^1, \ldots, x^N\} \); sample N noise data \( \{z^1, \ldots, z^N\} \);
4:     sample N binary mask vectors \( \{m^1, \ldots, m^N\} \);
5:     update G by one step of gradient ascent on \[ -\frac{1}{N} \sum_{i=1}^{N} E(G(z^i, m^i)) + \tilde{H}(p_g) \]
6:   end for
7:   sample N training data \( \{x^1, \ldots, x^N\} \); sample N noise data \( \{z^1, \ldots, z^N\} \);
8:   sample N binary mask vectors \( \{m^1, \ldots, m^N\} \);
9:   update E by one step of gradient descent on \[ \frac{1}{N} \sum_{i=1}^{N} E(x^i) - \frac{1}{N} \sum_{i=1}^{N} E(G(z^i, m^i)) - \tilde{H}(p_{data}) \]
10: end for

Table 1: CIFAR-10 test error rates (%) of linear classifiers trained on the second-to-top discriminator layer (\( \phi(x) \)) of GAN and VGAN, with generator update steps k = 1 and k = 3.

| GAN k=1 | GAN k=3 | VGAN k=1 | VGAN k=3 |
|---------|---------|----------|----------|
| 84.7    | 86.6    | 36.5     | 32.5     |

7 EXPERIMENTS

7.1 VGAN SAMPLES

As a proof of concept, in the first set of experiments we test the efficacy of the proposed VGAN algorithm (Algorithm 1). To do this, we train a VGAN on the 50,000 training images of CIFAR-10 with a moderately sized energy (discriminator) and generator.
The energy is encoded by a convolutional neural network (CNN) with two convolutional layers, two max pooling layers, and two fully connected layers, where the last fully connected layer is used to compute the energy as defined in Equation 8 (with K=100). The generator is encoded by a deconvolutional neural network with two consecutive fully connected layers, the latter of which is reshaped and followed by two deconvolutional layers that perform upsampling convolution. Both the energy and the generator use ReLU as the nonlinearity, and only the generator is equipped with batch normalization [Ioffe & Szegedy, 2015]. (Caution should be taken when attempting to apply batch normalization to the energy (discriminator): applying batch normalization to the real-data batch and the generated batch separately effectively makes \( E(\cdot) \) a different function for real and generated data.) Both the energy and the generator are updated with Adadelta [Zeiler, 2012] using learning rate 0.1. As a direct comparison, we have also trained a GAN with the exact same architecture and training protocol, except that the top layer of the discriminator is replaced with a single sigmoid unit. We train both VGAN and GAN for 100 epochs, setting the number of generator updates per iteration (k in Algorithm 1) to either 1 or 3. Note that the original GAN paper [Goodfellow et al., 2014] proposes to update the discriminator k steps per iteration; here we do the opposite. We show the generated samples in Figure 2, where the first row corresponds to \( k = 1 \) and the second row corresponds to \( k = 3 \). For each row, on the left are 100 generations from GAN, and on the right are 100 generations from VGAN. We see that for both step numbers, VGAN is able to generate visually appealing images that are difficult to distinguish from samples from the test set. GAN, on the other hand, clearly fails to generate diverse or realistic-looking images when k=1, but works much better when k=3. This can be easily understood from the variational point of view, where a larger number of generator steps k makes the lower bound tighter, thus producing much more stable models. To further corroborate these observations, we train linear classifiers on the second-to-top fully connected layer activations (1204-dimensional) of the discriminators of both models, for \( k = 1, 3 \); the results are shown in Table 1. We see that, thanks to the bounded multi-modal energy, VGAN is able to benefit from more generator updates. GAN, on the other hand, fails to learn discriminative features, despite the appealing visual quality of its generations when k=3. This also supports our hypothesis from Section 5 that the uni-modal nature of the GAN energy discourages the discriminator from learning discriminative features at its top layer.

7.2 LEARNING WITH VCD

In the next set of experiments, we evaluate the variational contrastive divergence variant of our model. We train our models on three datasets: MNIST, CIFAR-10, and SVHN, with 50,000, 40,000, and 60,000 training images, respectively. For each dataset, we train a VGAN with variational contrastive divergence, varying the weight \( \rho \) in Equation 10 over the range {0, 0.001, 0.01, 0.1, 1}. Note that in the extreme case \( \rho = 0 \), VGAN degenerates to training an EBM with negative samples obtained from an autoencoder; in the other extreme case \( \rho = 1 \), the transition distribution \( p_g(\tilde{x}|x) \) is not constrained to be centered around \( x \), and the model roughly recovers a regular VGAN. We set the dimensionality of \( h \), \( m \), and \( z \) to 256 for MNIST and 2048 for CIFAR-10 and SVHN, and use tanh as the bottleneck nonlinearity (ReLU is used for all other layers, except for the top layer of the autoencoder, which uses sigmoid).
\( p_z(z) \) is set to be a uniform distribution on \([-1, 1]\), which matches the magnitudes of \( h \).

Figure 2: Samples from the GAN (left) and generations of VGAN (right), with the same architecture. The first row corresponds to updating the generator one step at each iteration, and the second row corresponds to updating the generator three steps at each iteration.

The training protocol is the same as that described in Section 7.1, except that we use k=1 throughout this set of experiments for computational reasons. We first study the effect of varying \( \rho \) by looking at the MNIST examples in Figure 3. The first through third rows correspond to \( \rho = 0, 0.01, 1 \), respectively; the first through third columns correspond to validation samples, reconstructions, and conditional generations, respectively. We see from the first row (which amounts to an unregularized autoencoder) that the generator fails to generate realistic-looking images. The third row is able to generate realistic images conditioned on a sample, but there is no resemblance between the generation and the sample conditioned on. The second row, on the other hand, is able both to reconstruct the input sample and to generate realistic-looking samples with the transition operator, with notable differences between input and generation. We observe similar trends on SVHN and CIFAR-10 in Figure 4, where only \( \rho = 0.01 \) is shown for space reasons. We can also simulate a Markov chain with the learned transition distribution; we visualize the results on MNIST and SVHN in Figure 5. We see that the learned transition distribution can smoothly vary the style, type, and color of the digits. Also note that the transitions are not restricted to the Euclidean neighborhood of the samples conditioned on: for example, changing colors results in a large distance in the input space, yet our transition operator has no difficulty making such moves. Finally, as a quantitative evaluation of the learned transition distribution, we attempt to use the generated conditional samples as data augmentation on MNIST and SVHN. (We are not able to obtain reasonable results on CIFAR-10, as our EBM suffers from noticeable underfitting, identified by the large reconstruction errors in Figure 4.) To be concrete, for each dataset we train two additional CNNs enhanced with batch normalization, dropout, and input Gaussian noise. We then minimize the following loss function (a code sketch of which is given below): \[ 0.5 * \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}(x^i, y^i) + 0.5 * \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{\tilde{x}^i \sim p_g(\tilde{x}|x^i)} \mathcal{L}(\tilde{x}^i, y^i). \] (11) For each dataset we train on the first 1000 training images and use the validation set to select the best model; we then report the test error of the different configurations. The results are summarized in Table 2. We see that on both datasets, with a properly chosen \( \rho \), the generator is able to provide good generations that improve learning. On the other hand, \( \rho = 0 \), which corresponds to sampling from an autoencoder, hurts performance.
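A minimal sketch of the augmented objective in Equation 11 might look as follows; `transition_sample` is a hypothetical wrapper around the learned transition distribution \( p_g(\tilde{x}|x) \) (e.g., the `vcd_transition_sample` sketched earlier with the encoder and decoder bound, returning only \( \tilde{x} \)), and the classification loss is taken to be standard cross-entropy.

```python
import torch
import torch.nn.functional as F

def augmented_loss(model, x, y, transition_sample, n_mc=1):
    """Equation 11: equal mixture of the clean classification loss and the
    loss on conditional samples x_tilde ~ p_g(x_tilde | x)."""
    clean = F.cross_entropy(model(x), y)
    aug = 0.0
    for _ in range(n_mc):              # Monte Carlo estimate of the inner expectation
        with torch.no_grad():          # no gradient through the (fixed) generator
            x_tilde = transition_sample(x)
        aug = aug + F.cross_entropy(model(x_tilde), y)
    return 0.5 * clean + 0.5 * aug / n_mc
```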
\( \rho = 1 \) breaks training completely, as the generated samples are no longer guaranteed to have the same label as the samples they are conditioned on. This shows that our transition distribution is able to generate samples that are sufficiently different from the training images to boost performance. Although these numbers are by no means state-of-the-art results, we consider them significant as a proof of concept, because our baseline models are already heavily regularized with dropout and feature noising, which can be considered data-agnostic data augmentation. Also note that there is much room for improvement, e.g., by tuning the weights of the two terms in Equation 11 and the architectures of the energy model, the generator, and the classifier.

Figure 3: Visualization of \( x \), \( \bar{x} \), and \( \tilde{x} \) for \( \rho = 0, 0.01, 1 \) on MNIST. The first through third rows correspond to \( \rho = 0, 0.01, 1 \), respectively. The first through third columns correspond to samples from the validation set \( x \), reconstructions of the samples \( \bar{x} \), and the generated samples \( \tilde{x} \).

Figure 4: Visualization of \( x \), \( \bar{x} \), and \( \tilde{x} \) for \( \rho = 0.01 \) on SVHN and CIFAR-10. The first through third columns correspond to samples from the validation set \( x \), reconstructions of the samples \( \bar{x} \), and the generated samples \( \tilde{x} \).

Figure 5: Simulating a Markov chain with \( p_g(\tilde{x}|x) \). We show 30 and 28 images from the validation set for MNIST and SVHN in the first row of each panel, respectively, followed by 9 Gibbs sampling steps. Note the smooth transition of digit types, shapes, and/or colors.

Table 2: Semisupervised learning error rates (%) when using the learned transition distribution for data augmentation.
<table>
<tr> <th>model</th> <th>MNIST-1000</th> <th>SVHN-1000</th> </tr>
<tr> <td>No augmentation</td> <td>2.2</td> <td>19</td> </tr>
<tr> <td>VCD (\( \rho = 0 \))</td> <td>2.9</td> <td>26</td> </tr>
<tr> <td>VCD (\( \rho = 0.001 \))</td> <td>2.0</td> <td>20</td> </tr>
<tr> <td>VCD (\( \rho = 0.01 \))</td> <td>1.7</td> <td>18</td> </tr>
<tr> <td>VCD (\( \rho = 0.1 \))</td> <td>1.9</td> <td>17</td> </tr>
<tr> <td>VCD (\( \rho = 1 \))</td> <td>21</td> <td>37</td> </tr>
</table>

8 RELATED WORK

There has been a recent surge of interest in improving GANs [Radford et al. (2015); Salimans et al. (2016); Zhao et al. (2016); Kim & Bengio (2016)]. Radford et al. (2015) propose a set of techniques to stabilize GANs, including using batch normalization, dropping pooling layers, reducing the learning rate, and using strided convolutions, but provide little justification for the proposed designs. Our framework, in contrast, directly addresses the two most important issues, the energy parameterization and the entropy approximation, and allows the freedom of using conventional designs such as pooling and ReLU. Salimans et al. (2016) propose several tricks to enhance stability; for example, the proposed minibatch discrimination is similar in nature to our energy design, but with much higher complexity. Kim & Bengio (2016) and Zhao et al. (2016) are the two most directly related efforts connecting GANs with EBMs.
However, our work is, to the best of our knowledge, the first to identify the variational nature of the training of EBMs and to provide practical solutions from this view at the same time. There has also been a long-standing interest in EBMs and deep generative models in the machine learning community, such as deep Boltzmann machines and deep belief networks [Salakhutdinov & Hinton; Hinton et al. (2006)]. The contribution of our framework in this respect is to propose a scalable training method that eliminates the need for MCMC sampling. Variational inference has also been well studied in the literature, but most successfully for deep graphical models such as the DBM [Salakhutdinov & Hinton] and the variational autoencoder [Kingma & Welling (2013)], where typically variational *upper bounds* on the NLL are derived, instead of the *lower bound* in our work. Minimizing a variational lower bound is obviously more delicate to work with: if the bound is not tight enough, there is no guarantee that the original NLL is actually minimized. Our variational contrastive divergence is also related to GSNs [Thibodeau-Laufer et al. (2014)], as both model a transition probability. However, GSNs adopt a transition distribution of the form \( p(\mathbf{x}|\tilde{\mathbf{x}}) \), where \( \tilde{\mathbf{x}} \) is produced by adding simple noise to training samples. This essentially restricts sampling to a Gaussian neighborhood of the training examples, an assumption our model does not make. VCD is also related to the adversarial autoencoder [Makhzani et al. (2015)], as both include an autoencoder module, but with fundamental differences: the autoencoder in our work is part of, and serves to improve, the EBM/GAN, whereas Makhzani et al. (2015) require a separate GAN in addition to the autoencoder.

9 CONCLUSION

We have proposed VGANs, a family of methodologies for training deep EBMs with an auxiliary variational distribution. We have drawn a connection between deep EBMs and GANs, and proposed practical solutions for stabilizing training. We show that our proposed bounded multi-modal energy combined with variational contrastive divergence works well for generating realistic-looking images and for recovering the data manifold by simulating a Markov chain. We have also utilized the learned transition distributions to perform data augmentation in the context of semisupervised learning, showing consistent improvements.

REFERENCES

Yoshua Bengio, Grégoire Mesnil, Yann Dauphin, and Salah Rifai. Better mixing via deep representations.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.

Geoffrey Hinton. A practical guide to training restricted boltzmann machines. Momentum, 9(1):926, 2010.

Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800, 2002a.

Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800, 2002b.

Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554, 2006.

Aapo Hyvarinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Yann LeCun, Sumit Chopra, and Raia Hadsell. A tutorial on energy-based learning. 2006.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.

Jiquan Ngiam, Zhenghao Chen, Pang W Koh, and Andrew Y Ng. Learning deep energy models. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1105–1112, 2011.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Ruslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.

Eric Thibodeau-Laufer, Guillaume Alain, and Jason Yosinski. Deep generative stochastic networks trainable by backprop. 2014.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371–3408, 2010.

Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.

Shuangfei Zhai, Yu Cheng, Weining Lu, and Zhongfei Zhang. Deep structured energy based models for anomaly detection. In Proceedings of the 33rd International Conference on Machine Learning (ICML 2016), New York City, NY, USA, June 19-24, 2016, pp. 1100–1109, 2016.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
reject
Reject
5
5a25ceca92747ee7a855195b67eaaa5f4e3a048e
iclr
2,017
LEARNING A STATIC ANALYZER: A CASE STUDY ON A TOY LANGUAGE Manzil Zaheer Carnegie Mellon University manzil.zaheer@cmu.edu Jean-Baptiste Tristan & Michael Wick & Guy L. Steele Jr. Oracle Labs jean.baptiste.tristan@oracle.com ABSTRACT Static analyzers are meta-programs that analyze programs to detect potential errors or collect information. For example, they are used as security tools to detect potential buffer overflows. They are also used by compilers to verify that a program is well-formed and to collect information for generating better code. In this paper, we address the following question: can a static analyzer be learned from data? More specifically, can we use deep learning to learn a static analyzer without the need for complicated feature engineering? We show that long short-term memory networks are able to learn a basic static analyzer for a simple toy language, whereas pre-existing approaches based on feature engineering, hidden Markov models, or basic recurrent neural networks fail on this simple problem. Finally, we show how to make such a tool usable by employing a language model to help the programmer locate the reported errors.

1 INTRODUCTION

Can programming language tools, such as static analyzers, be learned from data using deep learning? While research projects that use machine learning to design better programming language tools are burgeoning, they all rely on feature engineering [Brun & Ernst, 2004; Kolter & Maloof, 2006; Yamaguchi et al., 2012; Tripp et al., 2014; Raychev et al., 2015; Allamanis et al., 2015; Nguyen & Nguyen, 2015; Gvero & Kuncak, 2015; Long & Rinard, 2016]. Unfortunately, feature engineering for programs is difficult, and indeed the features often seem ad hoc and superficial. This raises the question of whether it is possible to approach a complicated problem such as static analysis, the automated detection of program properties, from almost raw features. In this paper, our goal is to present a very simple experiment that clearly shows not only that feature engineering can completely fail even for the simplest static analysis task, but also that deep learning with neural networks can indeed be successful. The task in which we are interested is simple: we want to ensure that program variables are defined before they are used. We design a toy language to focus on the problem; indeed, our language is so simple that if it satisfies the aforementioned property, then it is semantically valid. Since programs are sequences of tokens, we experiment with different types of sequence learning methods (Xing et al., 2010). We first try feature-based methods, in which we extract features from the sequence and then use a classifier to decide whether or not the program is semantically valid. We show that they all fail, including methods that compute a sequence embedding. Then, we try different model-based methods (Lipton, 2015): hidden Markov models (HMMs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs). Our results show that HMMs and RNNs do poorly (albeit better than random), while an LSTM is almost perfectly accurate. This finding is somewhat surprising, as static analysis is essentially a document classification problem, and LSTMs are known to perform poorly on related tasks such as sentiment analysis (Dai & Le, 2015). The obvious question about such an experiment is: why would we want to learn a static analyzer for a problem for which a perfectly fine engineered solution is known?
The answer is that we want to initiate investigation into the use of deep learning for program analysis, and our broader hopes are two-fold. First, static analyzers are very complicated and are often limited by the number of false positives and false negatives they generate. In cases where false negatives are unacceptable, a learned static analyzer may not be the right approach; but when the goal is to find a good balance between false positives and false negatives, learned static analyzers might be more flexible. Second, as we briefly show in this paper, learned static analyzers have a resilience to small errors that might lead to more robust tools. Indeed, even though our goal is to detect errors in syntactically valid programs, our tool works despite the presence of small syntactic errors, such as the omission of a semicolon. This resilience to errors is, in our opinion, a very promising aspect of learned methods for the analysis of programs. Another key problem with static analysis tools is that, to be useful, they need to help the programmer understand the cause of the error. In that respect, models based on recurrent neural networks really shine, because they can be trained to provide such information. Indeed, in our experiments, we show how to use a language model to locate the positions of erroneous variables in examples classified by the static analyzer as being wrong. This is very important for practical static analysis, since a tool that merely reports the existence of an error in a large code file is not useful. The paper is organized as follows. In Section 2, we introduce the programming language of study and the corresponding static analysis task. In Section 3, we review how we created the dataset used to learn the static analyzer and the methods that we have tried. In Section 4, we explain how we learn to report error messages that help the programmer understand how to fix an error.

2 A STATIC ANALYSIS TASK

Our goal is to study the following static analysis problem: given a program, is every variable defined before it is used? Because this problem is undecidable for a Turing-complete language, programming languages such as Java impose constraints on what counts as a correct variable initialization. For example, a variable may not in general be defined within only one branch of an if-then-else statement and used afterward, since it can be impossible to guarantee which branch will be executed in every run. In order to better understand whether learning such an analysis is feasible and which methods work, we design a toy language. As an example, in this language we can write a program that computes the 42nd Fibonacci number as follows.

v0 = 1;
v1 = 1;
v2 = 0;
while (v2 < 42) {
  v3 = v1;
  v1 = v0 + v1;
  v0 = v3;
  v2 = v2 + 1;
}
return v1;

If we were to swap the assignments v3 = v1; and v0 = v3;, then not only would the program be incorrect, but it would be semantically invalid, since in the first execution of the loop the variable v3 would not yet have been defined. In order to precisely explain what the task is, we now briefly present the syntax and semantics of our experimental programming language.

2.1 THE LANGUAGE

We present the syntax of the language in Backus-Naur form in Figure 1. The symbols delimited by ⟨⟩ are non-terminals, while the quoted symbols are terminals. The symbol ⟨program⟩ is the starting non-terminal. A program is composed of an optional statement followed by an expression.
The statement can be a list of statements, control-flow statements like conditionals or iterations, or the binding of an expression to a variable. The expressions are simple arithmetic expressions. For simplicity, the test expressions used in conditional statements are distinct from the other expressions, which is a simple syntactic way to enforce basic type safety. The integers are simple integer values of the form [0-9]+ while the identifiers are of the form v[0-9]+. The semantics of our experimental programming language is presented as a big-step operational semantics in figure 2. For simplicity, we only present a subset of the rules. It is composed of four predicates.

⟨program⟩ ::= 'return' ⟨expression⟩ ';' | ⟨statement⟩ 'return' ⟨expression⟩ ';'
⟨statements⟩ ::= ⟨statement⟩ | ⟨statement⟩ ⟨statements⟩
⟨statement⟩ ::= ⟨statements⟩
  | ⟨identifier⟩ '=' ⟨expression⟩ ';'
  | 'if' '(' ⟨test⟩ ')' '{' ⟨statement⟩ '}' 'else' '{' ⟨statement⟩ '}'
  | 'if' '(' ⟨test⟩ ')' '{' ⟨statement⟩ '}'
  | 'while' '(' ⟨test⟩ ')' '{' ⟨statement⟩ '}'
⟨test⟩ ::= ⟨expression⟩ '<' ⟨expression⟩ | ⟨expression⟩ '<=' ⟨expression⟩
⟨expression⟩ ::= ⟨multiplicative⟩ | ⟨expression⟩ '+' ⟨multiplicative⟩ | ⟨expression⟩ '-' ⟨multiplicative⟩
⟨multiplicative⟩ ::= ⟨unary⟩ | ⟨multiplicative⟩ '*' ⟨unary⟩ | ⟨multiplicative⟩ '/' ⟨unary⟩
⟨unary⟩ ::= ⟨atomic⟩ | '+' ⟨unary⟩ | '-' ⟨unary⟩
⟨atomic⟩ ::= ⟨integer⟩ | ⟨identifier⟩ | '(' ⟨expression⟩ ')'

Figure 1: Syntax of the language, presented in Backus-Naur form. The symbols delimited by ⟨⟩ are non-terminals while the quoted symbols are terminals. ⟨program⟩ is the starting non-terminal.

\[
\frac{\Gamma \vdash e_1 \Rightarrow v_1 \quad \Gamma \vdash e_2 \Rightarrow v_2}{\Gamma \vdash e_1 + e_2 \Rightarrow v_1 + v_2}\ \text{ADD}
\qquad
\frac{}{\Gamma \vdash i \Rightarrow i}\ \text{INT}
\qquad
\frac{}{\Gamma \vdash x \Rightarrow \Gamma(x)}\ \text{LOOKUP}
\]
\[
\frac{\Gamma \vdash e_1 \Rightarrow v_1 \quad \Gamma \vdash e_2 \Rightarrow v_2 \quad v_1 \leq v_2}{\Gamma \vdash e_1 \mathbin{\text{<=}} e_2 \Rightarrow T}\ \text{TEST1}
\qquad
\frac{\Gamma \vdash e_1 \Rightarrow v_1 \quad \Gamma \vdash e_2 \Rightarrow v_2 \quad v_1 > v_2}{\Gamma \vdash e_1 \mathbin{\text{<=}} e_2 \Rightarrow F}\ \text{TEST2}
\]
\[
\frac{\Gamma, s \rightarrow \Gamma'}{\Gamma, s \xrightarrow{*} \Gamma'}\ \text{CLOSURE}
\qquad
\frac{\Gamma \vdash e \Rightarrow v}{\Gamma, x = e \rightarrow (x, v) :: \Gamma}\ \text{INTRO}
\]
\[
\frac{\Gamma \vdash t \Rightarrow T \quad \Gamma, s \rightarrow \Gamma' \quad \Gamma', \text{while}(t)\, s \rightarrow \Gamma''}{\Gamma, \text{while}(t)\, s \rightarrow \Gamma''}\ \text{WHILE1}
\qquad
\frac{\Gamma \vdash t \Rightarrow F}{\Gamma, \text{while}(t)\, s \rightarrow \Gamma}\ \text{WHILE2}
\]
\[
\frac{\emptyset, s \rightarrow \Gamma \quad \Gamma \vdash e \Rightarrow v}{[s;\ \text{return}\ e] = v}\ \text{PROGRAM}
\]

Figure 2: Semantics of the language, presented as inference rules. The semantics is defined by four predicates formalizing the evaluation of expressions (\( \Gamma \vdash e \Rightarrow v \)), the single-statement step (\( \Gamma, s \rightarrow \Gamma' \)), the reflexive and transitive closure of statements (\( \Gamma, s \xrightarrow{*} \Gamma' \)), and the evaluation of the program overall (\( [p] = v \)).
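The expression rules above translate directly into a recursive evaluator. A minimal Python sketch follows; the tuple-based AST encoding and the use of a dict (rather than the paper's list of bindings) for the environment are our own illustrative assumptions:

```python
def eval_expr(env, e):
    """Evaluate an expression under environment `env` (a dict from
    variable names to values), mirroring the rules of figure 2.
    The tuple encoding ("int", n) / ("var", x) / ("add", e1, e2) is
    an illustrative assumption, not the paper's.
    """
    tag = e[0]
    if tag == "int":                     # INT: an integer evaluates to itself
        return e[1]
    if tag == "var":                     # LOOKUP: undefined if x is unbound
        return env[e[1]]                 # raises KeyError when unbound
    if tag == "add":                     # ADD
        return eval_expr(env, e[1]) + eval_expr(env, e[2])
    raise ValueError("unknown expression tag: %r" % (tag,))
```

The KeyError on an unbound variable is exactly the undefined behavior the static analyzer is meant to rule out.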
<table> <tr> <th>Method</th> <th>Accuracy</th> </tr> <tr> <td>Unigram features + logistic regression</td> <td>.5</td> </tr> <tr> <td>Unigram features + multilayer perceptron</td> <td>.5</td> </tr> <tr> <td>Bigram features + logistic regression</td> <td>.5</td> </tr> <tr> <td>Bigram features + multilayer perceptron</td> <td>.5</td> </tr> <tr> <td>Embedding features + logistic regression</td> <td>.5</td> </tr> <tr> <td>Embedding features + multilayer perceptron</td> <td>.5</td> </tr> <tr> <td>Hidden Markov model</td> <td>.57</td> </tr> <tr> <td>Recurrent neural network, classification</td> <td>.62</td> </tr> <tr> <td>Long short-term memory network, classification</td> <td>.98</td> </tr> <tr> <td>Long short-term memory network + set, classification</td> <td>.993</td> </tr> <tr> <td>Long short-term memory network + set, transduction</td> <td>.997</td> </tr> </table>

Table 1: Accuracy of the different learning algorithms on the static analysis task.

The predicate \( \Gamma \vdash e \Rightarrow v \) denotes the value \( v \) resulting from evaluating \( e \) in the environment \( \Gamma \). The environment is simply a list of bindings from variables to their values. We present five rules that define this predicate: ADD, INT, LOOKUP, TEST1, and TEST2. The most important is the LOOKUP rule, which states that the value of a variable is the value associated with it in the environment. Note that this is well-defined only if the variable actually is in the environment; otherwise the semantics is undefined. The goal of our static analyzer is to ensure that this can never happen. The predicate \( \Gamma, s \rightarrow \Gamma' \) denotes the execution of a statement that transforms the environment by adding variable bindings to it. For example, the INTRO rule shows that a variable assignment adds a variable binding to the environment. The CLOSURE rule states that a possible transition is the reflexive and transitive execution of single statements, \( \Gamma, s \xrightarrow{*} \Gamma' \). The rules WHILE1 and WHILE2 formalize the execution of a while loop. Finally, the predicate \( [p] = v \) denotes the evaluation of a complete program into a resulting value.

2.2 THE TASK

Now that we have presented the language, we can state more precisely the goal of the static analysis. A program such as "v1 = 4; return v1 + v2;", while syntactically valid, is not well-defined since variable v2 has not been defined. A static analyzer is a function that takes such a program as input and returns a Boolean value.

\[ \text{analyze} : \text{token sequence} \longrightarrow \text{Boolean} \]

Function analyze should return true if and only if every variable is defined before it is used. We chose the input to be the sequence of tokens of the program rather than the raw characters for simplicity. It is easy to define such a function directly, but our goal is to see whether we can learn it from examples. Note that unlike previous work combining static analysis and machine learning, we are not trying to improve a static analyzer using machine learning, but rather to learn the static analyzer completely from data.
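For concreteness, the engineered solution being learned is only a few lines. The sketch below is our own illustration, not the paper's code: the flat string-token encoding is an assumption, and the subtleties of conditional branches (a definition inside one branch of an if) are deliberately ignored:

```python
def analyze(tokens):
    """Return True iff every variable is defined before it is used.

    `tokens` is assumed to be a flat list of strings such as
    ["v1", "=", "v0", "+", "1", ";", ...]; this encoding is an
    illustrative assumption. Branch-sensitivity is omitted.
    """
    defined = set()
    pending = None  # variable being defined by the current statement
    for i, tok in enumerate(tokens):
        if tok.startswith("v") and tok[1:].isdigit():  # an identifier
            if pending is None and i + 1 < len(tokens) and tokens[i + 1] == "=":
                pending = tok      # left-hand side of an assignment
            elif tok not in defined:
                return False       # use before definition
        elif tok == ";" and pending is not None:
            defined.add(pending)   # the binding takes effect only after ';'
            pending = None
    return True
```

Committing the definition only at the closing ';' correctly rejects a statement such as "v1 = v1 + 3;" when v1 is not yet defined, a case that will matter again in section 3.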
3 LEARNING A STATIC ANALYZER

To learn the static analyzer, we compile a balanced set of examples in which programs are labeled with a single Boolean value indicating whether the program should be accepted or not. The dataset contains 200,000 examples, half of which are valid programs and half of which are invalid programs. The invalid programs are of two forms: half of them contain variables that have not been defined at all; the other half are programs where the order of statements has been swapped so that a variable use appears before its definition. Note that this swapping of statements results in documents that have exactly the same bag of words but different labels. Of the 200,000 examples, we use 150,000 as the training set and 50,000 as the test set, while making sure to respect a perfect balance between valid and invalid programs. To create this dataset, we built our own compiler and example generator for our language. The example generator only produces syntactically valid programs. The programs are generated using a variety of random decisions: for example, when trying to generate a statement, we must decide with what probability to choose a variable assignment versus a while loop or another type of statement. We vary the probabilities to try to avoid producing a dataset with a spurious signal, but this is a very delicate issue. We also try our classifiers on hand-written programs. We apply several different machine learning methods, including LSTMs (described below), to the problem and present results in Table 1.

N-grams and classification. We attempt to learn the static analyzer using the classic approach of feature engineering followed by classification. We try both unigram and bigram features and classify the examples using either a linear logistic regression or a non-linear multilayer perceptron. We expect this approach to fail, since n-gram features cannot capture statement ordering, and this serves as a test to make sure our dataset does not contain any spurious signal. Indeed, these methods do not perform better than random.

Sequence embedding and classification. We also attempt to use an LSTM for feature extraction. In this case, we first train an LSTM as a language model. Then, for classification, we run the language model on the example program and use the last hidden state as an embedding. This embedding is used as an input to both a logistic regression and a multilayer perceptron. This approach fails as well and does not perform better than random. We might also consider using an RNN encoder-decoder to produce the embedding, but we leave this for future work.

Sequence classification. We tried three model-based approaches to sequence classification. First, we used an HMM trained with the Baum-Welch algorithm. Second, we trained a vanilla RNN with a cross-entropy loss using stochastic gradient descent (SGD). Third, we trained an LSTM with a cross-entropy loss and SGD; more precisely, we use the variant of SGD known as RMSProp. For both neural models we used the Keras framework. These sequence classification approaches perform better than the other approaches, but the HMM and the RNN still perform poorly. Interestingly, the LSTM achieves an accuracy of 98.3%. The training of the LSTM is very robust; we did not need any complicated parameter search to obtain these results. The false positive rate (i.e., the program is correct but predicted as faulty) is 1.0%, and the false negative rate (i.e., the program is faulty but classified as correct) is 2.5%.
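As a rough indication of what this sequence classification setup looks like in Keras (the framework named above), here is a minimal sketch; the architecture sizes, preprocessing, and hyper-parameters are our own guesses, since the paper does not report them:

```python
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

# x_train / x_test: integer-encoded, padded token sequences;
# y_train / y_test: 1 for valid programs, 0 for invalid ones.
# vocab_size and all sizes below are illustrative guesses.
model = Sequential([
    Embedding(input_dim=vocab_size, output_dim=32),
    LSTM(64),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop",           # RMSProp, as in the text
              loss="binary_crossentropy",    # cross-entropy loss
              metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10)
```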
Using differentiable data structures. The key problem in detecting uninitialized variables is to remember which variables have been defined up to some program point. A solution is to employ a set data structure: if we encounter a variable definition, we add the variable to the set; if we encounter a variable use, we test whether that variable is in the set of defined variables. With this in mind, we design a differentiable set data structure to augment an RNN, to see whether the resulting network can learn (from the training data alone) a policy for how to use the set. The set is represented by a vector \( f \) that is intended to be used as a bitmap by the network: each possible value corresponds to a bit in the vector, which is set to 1 if the element is in the set and 0 otherwise. An action \( a \) on the set can be either adding an element to the set or testing whether some value is in the set. Values \( v \) are indices into the set representation.

Controller update: \[ h_t = \mathrm{RNN}([x_t, f_{t-1}], h_{t-1}) \]
Action: \[ a_t = \sigma(W_a x_t + b_a) \]
Input representation: \[ v_t = \mathrm{softmax}(W_{v2} \tanh(W_{v1} x_t + b_{v1}) + b_{v2}) \]
Set update: \[ f_t = \max\{f_{t-1},\ a_t \, v_t\} \]
Set test: \[ p_t = \langle f_t, v_t \rangle \]
Decision: \[ y_t = \sigma(W_y h_t + U_y p_t + b_y) \]

The RNN can be a simple Elman network or an LSTM. The architecture is shown in Figure 4.

(a) Classification task; once an error is detected, the rest of the outputs is meaningless:
v14 = (v14 - 23) ; # 1 1 1 0 1 0 0 0
v14 = (14 * 93) ;  # 0 0 0 0 0 0 0 0
return v14 ;       # 0 0 0

(b) Transduction task; the network gets every output right:
v14 = (v14 - 23) ; # 1 1 1 0 1 1 1 1
v14 = (14 * 93) ;  # 1 1 1 1 1 1 1 1
return v14 ;       # 1 1 1

(c) An example for which both classification and transduction work:
v14 = (14 * 93) ;  # 1 1 1 1 1 1 1 1
v14 = (v14 - 23) ; # 1 1 1 1 1 1 1 1
return v14 ;       # 1 1 1

Figure 3: A look inside the predictions of different networks that use an LSTM and a differentiable set data structure. The commented line shows the label attached to each token by the network: a 1 means a variable is properly used, while a 0 means a variable was not initialized.

Unfortunately, training differentiable data structures is sometimes difficult, requiring extensive hyper-parameter tuning and cross-validation to find a good weight initialization. Further, LSTMs are often able to learn the training data single-handedly, causing the network to learn a policy that ignores the data structure. To circumvent these problems, we annotate the training data with additional intermediate signals: specifically, we annotate each token with a binary label that is true if and only if the token is a variable use that has not been initialized. The additional labels result in a per-token classification problem, but we convert the network back into a program classifier by employing min-pooling over the per-token softmax outputs. We experiment with both per-program (sequence classification) and per-token (sequence transduction) classifiers, as described next.

Sequence classification: As in previous experiments, we train using the program as the input sequence and a single Boolean label indicating whether the program is valid or not. For the network with the differentiable set to produce one output, we apply min-pooling across all the per-token decisions. This method improves over a plain LSTM and achieves an accuracy of 99.3%. Again, we did not need any complicated parameter search to obtain these results. The false negative rate (i.e., the program is faulty but classified as correct) is 0.8%, and the false positive rate (i.e., the program is correct but predicted as faulty) is 0.6%.
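Read literally, the update equations above amount to only a few lines of array code. The following NumPy sketch of one time step is our own transcription; all parameter shapes, the parameter dictionary, and the controller interface are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def set_cell_step(x_t, f_prev, h_prev, params, rnn_step):
    """One step of the controller-plus-differentiable-set cell.

    x_t: token embedding, shape (d_x,); f_prev: soft bitmap, shape (n,);
    h_prev: controller state; rnn_step: the controller update (e.g. one
    LSTM step). Shapes in `params` are illustrative.
    """
    # Controller update: h_t = RNN([x_t, f_{t-1}], h_{t-1})
    h_t = rnn_step(np.concatenate([x_t, f_prev]), h_prev)
    # Action gate a_t in (0, 1): how strongly to insert at this step
    a_t = sigmoid(params["W_a"] @ x_t + params["b_a"])
    # Soft index v_t over the n set slots
    v_t = softmax(params["W_v2"] @ np.tanh(params["W_v1"] @ x_t + params["b_v1"])
                  + params["b_v2"])
    # Monotone set update: elementwise max keeps inserted bits at 1
    f_t = np.maximum(f_prev, a_t * v_t)
    # Soft membership test: inner product of bitmap and index
    p_t = float(f_t @ v_t)
    # Per-token decision; min-pooling y_t over the whole sequence
    # recovers a single program-level label
    y_t = sigmoid(params["w_y"] @ h_t + params["u_y"] * p_t + params["b_y"])
    return h_t, f_t, y_t
```

The elementwise max makes insertion monotone, which matches the task: once a variable is defined it stays defined.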
To understand the behavior of the network, we remove the last min-pooling layer and look at the decision made by the network for each input token. This reveals an interesting pattern: the network correctly identifies the first error location but subsequently emits incorrect outputs. In this respect it is comparable to conventional (non-ML) static analysis algorithms that give up after the first error. For example, in figure 3a, the first variable use is correctly identified as invalid, but the rest of the output is incorrect.

Sequence transduction: Finally, we run an experiment at token-level granularity. In this case, the network produces not just a single output but as many outputs as inputs (a many-to-many architecture); we refer to this approach as sequence transduction, to distinguish it from the recurrent networks that produce a single label (a many-to-one architecture). The training data also contains the label for each token in the program. This approach achieves an accuracy of 99.7%. The training of the transduction task is very robust; we did not need any complicated parameter search to obtain these results. The false negative rate is 0.4% and the false positive rate is 0.2%. Given the token-level data, it seems that the network has induced a use of the set data structure that corresponds to what a traditional algorithm would do. Aside from using the set to keep track of defined variables, it correctly handles the tricky case of a statement such as \( v1 = v1 + 3; \) by making sure that the variable \( v1 \) is introduced into the set only after the statement is finished. For example, in the example presented in figure 3b, the definition of the variable \( v14 \) uses the value of the still-undefined variable \( v14 \), and the network correctly identifies this. Unfortunately, and interestingly, the accuracy is not perfect. Even though it looks like the correct use of the set has been learned, there are a few rare cases where the network makes simple mistakes. For example, some of the errors happen on some of the simplest and shortest programs, where the network fails to insert the declared variable into the set.

Figure 4: Overview of a network utilizing the differentiable set data structure for the task of static analysis. It consists of a neural controller and a fixed-size filter.

Conclusion. In conclusion, an out-of-the-box LSTM achieves a promising accuracy on this task, and an LSTM equipped with a differentiable set data structure is almost perfectly accurate. Interestingly, none of the other approaches, including the HMM and the RNN, could deliver satisfactory results.

4 REPORTING USEFUL ERROR MESSAGES

While the above experiment demonstrates that it is possible to learn an accurate static analyzer, such an analyzer is of little practical use unless we can also help the programmer locate the potential errors. Imagine if a tool reported that there is a potential buffer overflow somewhere in your code base without any indication of where the problem is: it would not be of much use. Therefore we train a second LSTM as a language model over valid instances of our programming language. That is, we train the LSTM to predict the next character in the sequence, so that for every character in the sequence the model provides the probability of observing this specific character.
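The localization procedure described next is easy to sketch: score every token under the language model and flag improbable variable uses. In this hypothetical Python sketch, `lm.next_token_probs` and `is_variable_use` are assumed interfaces (not the paper's), and the threshold is a tuning parameter:

```python
def locate_errors(tokens, lm, threshold=0.01):
    """Flag variable uses that the language model finds improbable.

    `lm.next_token_probs(prefix)` is an assumed interface returning a
    dict from candidate next tokens to probabilities; `is_variable_use`
    is an assumed helper distinguishing uses from definitions.
    """
    flagged = []
    for i, tok in enumerate(tokens):
        if not is_variable_use(tokens, i):
            continue
        p = lm.next_token_probs(tokens[:i]).get(tok, 0.0)
        if p < threshold:
            flagged.append((i, tok, p))  # probable use-before-definition
    return flagged
```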
The idea is that we look at all the variable uses in the program, and if the probability of a variable use is below a certain threshold, we report that use as a potential source of error. We present several such examples in Figure 5. We color a variable use in blue if its probability is above the threshold, and in purple if it is below the threshold and therefore potentially the source of the error. As we can see from the examples, the method works well. The first four examples show simple cases with only two variables. Note that from the perspective of a bag-of-words classifier, examples 1 and 2 (and likewise 3 and 4) are identical; yet the LSTM language model, which takes the "word" order into account, is able to model them differently. Examples 5-11 are more complicated in that the variables are used or defined several times. In Example 9, the language model accurately reports the first use of v2 as incorrect and the second use of v2 as correct. This is a somewhat interesting example, as the incorrect use of v2 is in the definition of v2 itself. In Example 10, we can see that the language model can handle multiple incorrect variable uses; this success crucially depends on the ability of the language model to recover from the error and still accurately model the remainder of the program. Finally, Examples 12 and 13 demonstrate robustness: despite the fact that these two examples are syntactically incorrect, the language model correctly reports the semantic errors. The resilience of the learned tools to small errors is part of what makes them so promising for program analysis.

1. v1 = 37; v2 = (v1 + 20);
2. v1 = 37; v1 = (v2 + 20);
3. v2 = 37; v1 = (v2 + 20);
4. v2 = 37; v2 = (v2 + 20);
5. v2 = 37; v2 = (v2 + 20); v3 = (v2 + 40);
6. v2 = 37; v2 = (v2 + 20); v2 = (v3 + 40);
7. v2 = 37; v2 = (v2 + 20); v3 = (v1 + v2);
8. v2 = 37; v1 = (v2 + 20); v3 = (v1 + v2);
9. v1 = 37; v2 = (v2 + 20); v3 = (v1 + v2);
10. v1 = 37; v3 = (v2 + 20); v5 = (v3 + v4);
11. v1 = 37; v3 = (v2 + 20); v5 = (v3 + v2);
12. v1 = 37 v2 = (v1 + 20);
13. v1 = 37 v1 = (v2 + 20);

Figure 5: Examples of programs annotated with variable usage. The uses colored in blue are considered to have been properly defined, while the uses in purple are considered faulty. This tool is run when the classifier detects a program error, to help the programmer understand what the problem is.

5 RELATED WORK

There is a growing body of work employing machine learning to improve programming language tools. In such works, machine learning is used to complement traditional static analysis methods; further, they rely on extensive feature engineering. In Brun & Ernst (2004), dynamic analysis is used to extract features that are then used to detect latent code errors. In Kolter & Maloof (2006), n-gram features are used to detect viruses in binary code. In Yamaguchi et al. (2012), parts of the abstract syntax tree of a function are embedded into a vector space to help detect functions similar to a known faulty one. In Tripp et al. (2014), various lexical and quantitative features of a program are used to improve an information-flow analysis and reduce the number of false alarms reported by the tool. In Raychev et al. (2015), dependency networks are used with a conditional random field to de-obfuscate and type JavaScript code. In Allamanis et al. (2015), the structure of the code is used to suggest method names. In Nguyen & Nguyen (2015), n-grams are used to improve code completion tools.
In Gvero & Kuncak (2015), program syntax is used to learn a tool that can generate Java expressions from free-form queries. In Long & Rinard (2016), a feature extraction algorithm is designed to improve automatic patch generation.

6 CONCLUSION

We have shown that it is possible to learn a static analyzer from data. Even though the problem we address is particularly simple and posed on a toy language, it is interesting to note that in our experiments only LSTM networks provided a reasonable solution. We have also shown that it is possible to make the static analyzer useful by using a language model to help the programmer understand where to look in the program to find the error. Of course, this experiment is very far from any practical tool. First, dealing with more complicated programs involving memory, functions, and modularity should be vastly more complex. Also, our solution is brittle: in our language, the space of variable names is very restricted, and it might be much more difficult to deal with normal variable names, where a specific variable name might not appear at all in the training dataset. Finally, a fundamental issue is false negatives, that is, programs that are wrongly classified as being free of errors. This is a serious problem that may make such a tool risky to use. However, note that there are useful programming language tools that do produce false negatives: for instance, a tool that reports buffer overflows might not catch every error, but it is still useful if it catches some. Another possibility is to consider approaches where a result is verified by an external tool. For example, in the field of certified compilation, Tristan & Leroy (2008) have shown that it can be acceptable to use an untrusted, potentially bogus program transformation as long as each use can be formally checked. Also, as exemplified by Gulwani & Necula (2003; 2004; 2005), some static analysis algorithms trade a small amount of unsoundness for much faster computation, which can be necessary when applying programming tools to very large code bases.

REFERENCES

Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles Sutton. Suggesting accurate method and class names. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2015, pp. 38–49, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3675-8. doi: 10.1145/2786805.2786849. URL http://doi.acm.org/10.1145/2786805.2786849.
Yuriy Brun and Michael D. Ernst. Finding latent code errors via machine learning over program executions. In Proceedings of the 26th International Conference on Software Engineering, ICSE '04, pp. 480–490, Washington, DC, USA, 2004. IEEE Computer Society. ISBN 0-7695-2163-0. URL http://dl.acm.org/citation.cfm?id=998675.999452.
Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. CoRR, abs/1511.01432, 2015. URL http://arxiv.org/abs/1511.01432.
Sumit Gulwani and George C. Necula. Discovering affine equalities using random interpretation. In Proceedings of the 30th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '03, pp. 74–84, New York, NY, USA, 2003. ACM. ISBN 1-58113-628-5. doi: 10.1145/604131.604138. URL http://doi.acm.org/10.1145/604131.604138.
Sumit Gulwani and George C. Necula. Global value numbering using random interpretation. In Proceedings of the 31st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '04, pp. 342–352, New York, NY, USA, 2004. ACM. ISBN 1-58113-729-X. doi: 10.1145/964001.964030. URL http://doi.acm.org/10.1145/964001.964030.
Sumit Gulwani and George C. Necula. Precise interprocedural analysis using random interpretation. In Proceedings of the 32nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '05, pp. 324–337, New York, NY, USA, 2005. ACM. ISBN 1-58113-830-X. doi: 10.1145/1040505.1040332. URL http://doi.acm.org/10.1145/1040505.1040332.
Tihomir Gvero and Viktor Kuncak. Synthesizing Java expressions from free-form queries. In Proceedings of the 2015 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2015, pp. 416–432, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3689-5. doi: 10.1145/2814270.2814295. URL http://doi.acm.org/10.1145/2814270.2814295.
J. Zico Kolter and Marcus A. Maloof. Learning to detect and classify malicious executables in the wild. J. Mach. Learn. Res., 7:2721–2744, December 2006. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1248547.1248646.
Zachary Chase Lipton. A critical review of recurrent neural networks for sequence learning. CoRR, abs/1506.00019, 2015. URL http://arxiv.org/abs/1506.00019.
Fan Long and Martin Rinard. Automatic patch generation by learning correct code. In Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '16, pp. 298–312, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-3549-2. doi: 10.1145/2837614.2837617. URL http://doi.acm.org/10.1145/2837614.2837617.
Anh Tuan Nguyen and Tien N. Nguyen. Graph-based statistical language model for code. In Proceedings of the 37th International Conference on Software Engineering - Volume 1, ICSE '15, pp. 858–868, Piscataway, NJ, USA, 2015. IEEE Press. ISBN 978-1-4799-1934-5. URL http://dl.acm.org/citation.cfm?id=2818754.2818858.
Veselin Raychev, Martin Vechev, and Andreas Krause. Predicting program properties from "big code". In Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '15, pp. 111–124, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3300-9. doi: 10.1145/2676726.2677009. URL http://doi.acm.org/10.1145/2676726.2677009.
Omer Tripp, Salvatore Guarnieri, Marco Pistoia, and Aleksandr Aravkin. Aletheia: Improving the usability of static security analysis. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, CCS '14, pp. 762–774, New York, NY, USA, 2014. ACM. ISBN 978-1-4503-2957-6. doi: 10.1145/2660267.2660339. URL http://doi.acm.org/10.1145/2660267.2660339.
Jean-Baptiste Tristan and Xavier Leroy. Formal verification of translation validators: A case study on instruction scheduling optimizations. In Proceedings of the 35th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '08, pp. 17–27, New York, NY, USA, 2008. ACM. ISBN 978-1-59593-689-9. doi: 10.1145/1328438.1328444. URL http://doi.acm.org/10.1145/1328438.1328444.
Zhengzheng Xing, Jian Pei, and Eamonn Keogh. A brief survey on sequence classification. SIGKDD Explor. Newsl., 12(1):40–48, November 2010. ISSN 1931-0145. doi: 10.1145/1882471.1882478. URL http://doi.acm.org/10.1145/1882471.1882478.
Fabian Yamaguchi, Markus Lottmann, and Konrad Rieck. Generalized vulnerability extrapolation using abstract syntax trees. In Proceedings of the 28th Annual Computer Security Applications Conference, ACSAC '12, pp. 359–368, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1312-4. doi: 10.1145/2420950.2421003. URL http://doi.acm.org/10.1145/2420950.2421003.
reject
Reject
3.333333
5dd1197eff96ff17272300c1678e6276a944f1f6
iclr
2,017
AN EMPIRICAL ANALYSIS OF DEEP NETWORK LOSS SURFACES Daniel Jiwoong Im1,2*, Michael Tao3, & Kristin Branson2 1Janelia Research Campus, HHMI, 2AIFounded Inc. 3University of Toronto, {imd, bransonk}@janelia.hhmi.org {mtao}@dgp.toronto.edu

ABSTRACT

The training of deep neural networks is a high-dimensional optimization problem with respect to the loss function of a model. Unfortunately, these functions are high-dimensional and non-convex, and hence difficult to characterize. In this paper, we empirically investigate the geometry of the loss functions for state-of-the-art networks with multiple stochastic optimization methods. We do this through several experiments that are visualized over low-dimensional polygons in weight space, to understand how and when these stochastic optimization methods find local minima.

1 INTRODUCTION

Deep neural networks are trained by optimizing an extremely high-dimensional loss function with respect to the weights of the network's linear layers. The objective function minimized is some measure of the error of the network's predictions based on these weights compared to training data. This loss function is non-convex and has many local minima. These loss functions are usually minimized using first-order gradient descent algorithms (Robbins & Monro, 1951; Polyak, 1964) such as stochastic gradient descent (SGD) (Bottou, 1991). The success of deep learning critically depends on how well we can minimize this loss function, both in terms of the quality of the local minima found and the time to find them. Understanding the geometry of this loss function and how well optimization algorithms can find good local minima is thus of vital importance.

Several works have theoretically analyzed and characterized the geometry of deep network loss functions. However, to make these analyses tractable, they have relied on simplifications of the network structures, including that the networks are linear (Saxe et al., 2014), or assuming the path and variable independence of the neural networks (Choromanska et al., 2015). Orthogonally, the performance of various gradient descent algorithms has been theoretically characterized (Nesterov, 1983). Again, these analyses make simplifying assumptions, in particular that the loss function is strictly convex, i.e. there is only a single local minimum.

In this work, we empirically investigated the geometry of the real loss functions for state-of-the-art networks and data sets. In addition, we investigated how popular optimization algorithms interact with these real loss surfaces. To do this, we plotted low-dimensional projections of the loss function in subspaces chosen to investigate properties of the local minima selected by different algorithms. We chose these subspaces to address the following questions:

• What types of changes to the optimization procedure result in different local minima?
• Do different optimization algorithms find qualitatively different types of local minima?

2 RELATED WORK

2.1 LOSS SURFACES

There have been several attempts to understand the loss surfaces of deep neural networks. Some have studied the critical points of deep linear neural networks (Baldi, 1989; Baldi & Hornik, 1989; Baldi & Lu, 2012). Others further investigated the learning dynamics of deep linear neural networks (Saxe et al., 2014). More recently, several others have attempted to study the loss surfaces of deep non-linear neural networks (Choromanska et al., 2015; Kawaguchi, 2016; Soudry & Carmon, 2016).
One approach is to analogize the states of neurons to the magnetic dipoles used in spherical spin-glass Ising models from statistical physics (Parisi, 2016; Fyodorov & Williams, 2007; Bray & Dean, 2007). Choromanska et al. (2015) attempted to understand the loss function of neural networks through studying the random Gaussian error functions of Ising models. Recent results (Kawaguchi, 2016; Soudry & Carmon, 2016) have provided cursory evidence in agreement with the theory of Choromanska et al. (2015), in that they found that there are no "poor" local minima in neural networks, albeit still under strong assumptions. There is some potential disconnect between these theoretical results and what is found in practice, due to several strong assumptions, such as the activations of the hidden units and output being independent of the previous hidden units and input data. The work of Dauphin et al. (2014) empirically investigated properties of the critical points of neural network loss functions and demonstrated that their critical points behave similarly to the critical points of random Gaussian error functions in high-dimensional space. We will present further evidence along this trajectory.

2.2 OPTIMIZATION

In practice, the local minima of deep network loss functions are for the most part decent. This implies that we probably do not need to take many precautions to avoid bad local minima in practice. If all local minima are decent, then the task of finding a decent local minimum quickly is reduced to the task of finding any local minimum quickly. From an optimization perspective, this implies that solely focusing on designing fast methods is of key importance for training deep networks.

In the literature, the common method for measuring the performance of optimization methods is to analyze them on nice convex quadratic functions (Polyak, 1964; Broyden, 1970; Nesterov, 1983; Martens, 2010; Erdogdu & Montanari, 2015), even though the methods are applied to non-convex problems. For non-convex problems, however, if two methods converge to different local minima, their performance will be dictated by how those methods solve those two convex subproblems. It is challenging to show that one method will beat another without knowledge of the sorts of convex subproblems encountered, which is generally not known a priori. What we will explore is whether there are indeed some characteristics that can be found experimentally. If so, perhaps one could validate where these analytical results are valid, or even improve methods for training neural networks.

2.2.1 LEARNING PHASES

Figure 1: An example of a learning curve of a neural network.

One interesting empirical observation is that the incremental improvement of optimization methods often decreases rapidly, even on non-convex problems. This behavior has been described as a "transient" phase followed by a "minimization" phase (Sutskever et al., 2013), where the former finds the neighborhood of a decent local minimum and the latter finds the local minimum within that neighborhood. The existence of these phases implies that, if certain methods are better at different phases, one could create novel methods that schedule when to apply each method.

3 EXPERIMENTAL SETUP AND TOOLS

3.1 NETWORK ARCHITECTURES AND DATA SETS

We conducted experiments on three state-of-the-art neural network architectures. Network-in-Network (NIN) (Lin et al., 2014)
and the VGG network (Simonyan & Zisserman, 2015) are feed-forward convolutional networks developed for image classification, and have excellent performance on the ImageNet (Russakovsky et al., 2014) and CIFAR10 (Krizhevsky, 2009) data sets. The long short-term memory network (LSTM) (Hochreiter & Schmidhuber, 1997) is a recurrent neural network that has been successful in tasks that take variable-length sequences as input and/or produce variable-length sequences as output, such as speech recognition and image caption generation. These are large networks currently used in many machine vision and learning tasks, and the loss functions minimized by each are highly non-convex. All results using the feed-forward convolutional networks (NIN and VGG) are on the CIFAR10 image classification data set, while the LSTM was tested on the Penn Treebank next-word prediction data set.

3.2 OPTIMIZATION METHODS

We analyzed the performance of five popular gradient-descent optimization methods for these learning frameworks: stochastic gradient descent (SGD) (Robbins & Monro, 1951), stochastic gradient descent with momentum (SGDM), RMSprop (Tieleman & Hinton, 2012), Adadelta (Zeiler et al., 2011), and ADAM (Kingma & Ba, 2014). These are all first-order gradient descent algorithms that estimate the gradients based on randomly-grouped minibatches of training examples. One of the major differences between these algorithms is how they select the weight-update step-size at each iteration, with SGD and SGDM using fixed schedules, and RMSprop, Adadelta, and ADAM using adaptive, per-parameter step-sizes. Details are provided in Section A.2. In addition to these five existing optimization methods, we compare to a new gradient descent method we developed based on the family of Runge-Kutta integrators. In our experiments, we tested a second-order Runge-Kutta integrator in combination with SGD (RK2) and in combination with ADAM (ADAM&RK2). Details are provided in Section A.3.

3.3 ANALYSIS METHODS

Several of our empirical analyses are based on the technique of Goodfellow et al. (2015). They visualize the loss function by projecting it down to one carefully chosen dimension, plotting the value of the loss function at a set of samples along this dimension. The projection space is chosen based on important weight configurations; thus, they plot the value of the loss function at linear interpolations between two weight configurations. They perform two such analyses: one in which they interpolate between the initialization weights and the final learned weights, and one in which they interpolate between two sets of final weights, each learned from a different initialization. In this work, we use a similar visualization technique, but choose different low-dimensional subspaces for the projection of the loss function. These subspaces are based on the initial weights as well as the final weights learned using the different optimization algorithms and combinations of them, and are chosen to answer a variety of questions about the loss function and how the different optimization algorithms interact with it. In contrast, Goodfellow et al. only looked at SGDM. In addition, we explore the use of two-dimensional projections of the loss function, allowing us to better visualize the space between local minima. We do this via barycentric and bilinear interpolation for triplets and quartets of points, respectively (details in Section A.1).
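For concreteness, the following is a minimal sketch of the basic one-dimensional interpolation analysis described above, assuming flattened weight vectors and a hypothetical `loss_fn` callable that evaluates the training loss at a given weight vector; the two-dimensional variants are defined in Section A.1.

```python
import numpy as np

def interpolate_loss_1d(theta_a, theta_b, loss_fn, num_points=25):
    """Evaluate the loss along the line segment between two weight vectors.

    theta_a, theta_b -- flattened weight vectors, e.g. initial and final weights
    loss_fn          -- maps a flattened weight vector to the training loss
    Returns (alphas, losses) for plotting loss against the interpolation
    coefficient alpha.
    """
    alphas = np.linspace(0.0, 1.0, num_points)
    losses = [loss_fn(a * theta_a + (1.0 - a) * theta_b) for a in alphas]
    return alphas, np.array(losses)
```

Sampling alpha slightly beyond [0, 1] is what allows checking whether a critical point is a minimum along the projection, as discussed next.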
We refer to the critical points found using these variants of SGD, for which the gradient is approximately 0, as local minima. Our evidence that these are local minima, as opposed to saddle points, is similar to that presented in Goodfellow et al. (2015): if we interpolate beyond the critical point in this one-dimensional projection, the loss increases (Fig. 10).

Figure 2: Visualization of the loss surface at weights interpolated between two initial configurations and the final weight vectors learned using SGD from these initializations.

Figure 3: Visualization of the loss surface at weights interpolated between the weights learned by four different algorithms from the same initialization.

3.4 TECHNICAL DETAILS

We used the VGG and NIN implementations from https://github.com/szagoruyko/cifar.torch.git. The batch size was set to 128 and the number of epochs was set to 200. The learning rate was chosen from the discrete set {0.2, 0.1, 0.05, 0.01} for SGD and {0.002, 0.001, 0.0005, 0.0001} for the adaptive learning methods. We doubled the learning rates when we ran our augmented versions with Runge-Kutta because they require two stochastic gradient computations per update. We used batch normalization and dropout to regularize our networks. All experiments were run on a 6-core Intel(R) Xeon(R) CPU @ 2.40GHz with a TITAN X.

4 EXPERIMENTAL RESULTS

4.1 DIFFERENT OPTIMIZATION METHODS FIND DIFFERENT LOCAL MINIMA

We trained the neural networks described above using each optimization method, starting from the same initial weights and with the same minibatching. We computed the value of the loss function for weight vectors interpolated between the initial weights, the final weights for one algorithm, and the final weights for a second algorithm, for several pairings of algorithms. The results are shown in the lower triangle of Table 1. For every pair of optimization algorithms, we observe that the training loss between the final weights for different algorithms shows a sharp increase along the interpolated path. This suggests that each optimization algorithm found a different critical point, despite starting at the same initialization. We investigated the space between other triplets and quadruples of weight vectors (Figures 2 and 3), and even in these projections of the loss function, we still see that the local minima returned by different algorithms are separated by regions of high loss.

Deep networks are overparameterized. For example, if we switch all corresponding weights for a pair of nodes in our network, we will obtain effectively the same network, with both the original and permuted networks outputting the same prediction for a given input. To ensure that the weight vectors returned by the different algorithms were functionally different, we compared the outputs of the networks on each example in a validation data set:

\[ \text{dist}(\theta_1, \theta_2) = \sqrt{\frac{1}{N_{test}} \sum_{i=1}^{N_{test}} \| F(x_i, \theta_1) - F(x_i, \theta_2) \|^2 }, \]

where \( \theta_1 \) and \( \theta_2 \) are the weights learned by two different optimization algorithms, \( x_i \) is the input for a validation example, and \( F(x, \theta) \) is the output of the network for weights \( \theta \) on input \( x \).
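As a sketch, this metric can be computed directly from the two networks' outputs on the validation set; here `outputs_a` and `outputs_b` are assumed to be precomputed arrays of shape (N_test, num_outputs) holding \( F(x_i, \theta_1) \) and \( F(x_i, \theta_2) \).

```python
import numpy as np

def functional_distance(outputs_a, outputs_b):
    """Average L2 distance between two networks' outputs on a validation set,
    per the equation above: square root of the mean squared output difference."""
    diff = outputs_a - outputs_b
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
```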
[Table 1 consists of a 6x6 grid of surface plots for the methods SGD, RMSprop, Adadelta, Adam, RK2, and Adam&RK2; the plots themselves are not reproducible in text.]

Table 1: Visualization of the loss surface near and between local minima found by different optimization methods. Each box corresponds to a pair of optimization methods. In the lower triangle, we plot the projection of the loss surface at weight vectors between the initial weight and the learned weights found by the two optimization methods. Color as well as height of the surface indicate the loss function value. In the upper triangle, we plot the functional difference between the network corresponding to the learned weights for the first algorithm and networks corresponding to weights linearly interpolated between the first and second algorithm's learned weights. (Best viewed in zoom)

Figure 4: (a) Training accuracy and (b) test accuracy for each of the optimization methods. Colors correspond to different initializations.

We found that, for all pairs of algorithms, the average distance between the outputs of the networks (the equation in Section 4.1) was approximately 0.16, corresponding to a label disagreement of about 8% (upper triangle of Table 1). Given the generalization error of these networks (approximately 11%, Figure 4), the maximum disagreement we could see was 22%. Thus, these networks disagreed on a large fraction of these test examples, over a third of the maximum possible. Thus, the local minima found by different algorithms correspond to effectively different networks, not trivial reparameterizations of the same one.

Figure 5: Loss function value near local minima found by multiple restarts of each algorithm. (a) NIN - Initial to Final. (b) NIN - Final to Final. (c) VGG - Initial to Final. (d) VGG - Final to Final.
4.2 DIFFERENT OPTIMIZATION ALGORITHMS FIND DIFFERENT TYPES OF LOCAL MINIMA

Next, we investigated whether the local minima found by the different optimization algorithms had distinguishing properties. To do this, we trained the networks with each optimization algorithm using different initial parameters. We then compared differences between runs of the same algorithm with different initializations to differences between different algorithms. As shown in Figure 4(a), in terms of training accuracy, we do see some stereotypy in the optima found by different algorithms, with SGD finding local minima with the lowest training accuracy and ADAM, RMSprop, and Adadelta finding local minima with the highest training accuracy. However, this could be attributed to SGD's asymptotically slow convergence near local minima, due to the gradient diminishing near extrema. Despite this limitation, Figure 4(b) shows that the generalization accuracy of these different local minima on validation data was not significantly different between algorithms. We also did not see a relationship between the weight initialization and the validation accuracy. Thus, while these algorithms fall into different local minima, they are not different in terms of their final quality.

We visualized the loss surface around each of the local minima for the multiple runs. To do this, we plotted the value of the loss function between the initial and final weights for each algorithm (Figure 5(a,c)) for each run of the algorithm from a different initialization. In addition, we plotted the value of the loss function between the final weights for selected pairs of algorithms for each run (Figure 5(b,d)). We see that the surfaces look strikingly similar for different runs of the same algorithm, but characteristically different for different algorithms. Thus, we found evidence that the different algorithms land in qualitatively different types of local minima. In particular, we see in Figure 5(a,c) that the size of the basins around the local minima found by ADAM and ADAM&RK2 are larger than those found by SGD and RK2, i.e. the training loss is small for a wider range of \( \alpha \) values. This is a relative measure, and the magnitude of the change in the weight vector is \( \Delta \alpha \| \theta_1 - \theta_0 \| \) for a change of size \( \Delta \alpha \), where \( \theta_0 \) is the initial weight vector and \( \theta_1 \) is the result found by a given optimization algorithm. In Figure 6 we repeat this analysis, instead showing the loss as a function of the absolute distance in parameter space:

\[ \theta(\lambda) = \theta_1 + \lambda \frac{\theta_0 - \theta_1}{\| \theta_0 - \theta_1 \|} \]

We again see that the size of the basin around the local minima varies by optimization algorithm. Note that we evaluate the loss for weight vectors beyond the initial configuration, which had a loss of 2.4.

Figure 6: The absolute size of the basins around the local minima found by different optimization methods.
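A minimal sketch of this absolute-distance probe, under the same assumptions as before (flattened weight vectors and a hypothetical `loss_fn`):

```python
import numpy as np

def loss_along_ray(theta_final, theta_init, loss_fn, max_dist, num_points=25):
    """Evaluate the loss at absolute distances lambda from a local minimum
    theta_final, along the unit direction toward theta_init, following the
    parameterization theta(lambda) above. Choosing max_dist larger than
    ||theta_init - theta_final|| probes past the initial configuration."""
    direction = theta_init - theta_final
    direction = direction / np.linalg.norm(direction)
    lambdas = np.linspace(0.0, max_dist, num_points)
    losses = [loss_fn(theta_final + lam * direction) for lam in lambdas]
    return lambdas, np.array(losses)
```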
4.3 ANALYZING LEARNING AFTER THE "TRANSIENT PERIOD"

Recall that, during optimization, it has been observed that there is a short "transient" phase in which the loss decreases rapidly, and a "minimization" phase in which the loss decreases slowly (Section 2.2.1 and Figure 1). In this set of experiments, we investigated the effects of switching from one type of optimization method to another at various points during training, in particular at late stages of training, when it is thought that a local minimum has been chosen and is only being localized. We switched from one optimization method to another 25%, 50%, and 75% of the way through training. The results are plotted in Figure 7. We emphasize that we are not switching methods to improve performance, but rather to investigate the shape of the loss function in regions explored during the "minimization" phase of optimization.

We found that, regardless of how late we switch optimization algorithms, as shown in the right-most column of Figure 7, the local minima found were all different. This directly disagrees with the notion that the local minimum has effectively been chosen before the "minimization" phase, and instead suggests that which local minimum is found is still in flux this late in optimization. It appears that this switch from one local minimum to another happens almost immediately after the optimization method switches, with the training accuracy jumping to the characteristic accuracy for the given method within a few epochs (Figure 7, left column). Interestingly, we also see that the distance between the initial and current weight vectors changes drastically after switching from one optimization method to another, and that this distance is characteristic per algorithm (Figure 7, middle column). While distance increases with training epoch for any single optimization method, it actually starts to decrease when switching from ADAM to SGD.

(a) The learning rate is set to 0.001 for ADAM, and then switched to SGD with learning rate 0.01. (b) The learning rate is set to 0.1 for SGD, and then switched to ADAM with learning rate 0.0001. (c) The learning rate is set to 0.001 for ADAM, and then switched to Adadelta (no learning rate is required). (d) No learning rate is required for Adadelta, and then switched to ADAM with learning rate 0.0001.

Figure 7: Switching from one method to another at epochs 50, 100, and 150. Accuracy curves (left two columns). Distance from the initial weights to the weights at each epoch (middle). The interpolation between different convergence parameters (right). For instance, S100-A100 was trained with SGD for the first 100 epochs and switched to ADAM for the remaining epochs. (Best viewed in zoom)

[Table 2 consists of surface plots, with and without batch normalization, for various pairs of optimization methods on the VGG and NIN networks; the plots themselves are not reproducible in text.]

Table 2: Visualization of the loss surface with and without batch normalization.

(a) NIN - Initial to Final. (b) VGG - Initial to Final. (c) LSTM - Initial to Final. (d) NIN - Final to Final. (e) VGG - Final to Final. (f) LSTM - Final to Final.

Figure 8: (Without batch normalization) (Top) Loss function with parameters interpolated from initial to final. (Bottom) Loss function with parameters interpolated from final to final. Each column of plots is from a different initialization.
4.4 EFFECTS OF BATCH-NORMALIZATION

To understand how batch normalization affects the types of local minima found, we performed a set of experiments comparing loss surfaces near local minima found with and without batch normalization for each of the optimization methods. We visualized the surface near these local minima by interpolating between the initial weights and the final weights, as well as between pairs of final weights found with different algorithms. We observed clear qualitative differences between optimization with (Figure 5) and without (Figure 8) batch normalization. We see that, without batch normalization, the quality of the local minimum found by a given algorithm is much more dependent on the initialization. In addition, the surfaces between different local minima are more complex in appearance: with batch normalization we see sharp unimodal jumps in performance, but without batch normalization we obtain wide, bumpy shapes that are not necessarily unimodal.

Neural networks are typically initialized with very small parameter values (Glorot & Bengio, 2010; He et al., 2015). Instead, we trained NIN with exotic initializations, such as initial parameters drawn from \( \mathcal{N}(-10.0, 0.01) \) or \( \mathcal{N}(-1.0, 1.0) \), and observed the loss surface behavior. The details of the results are discussed in Appendix A.5.

5 CONCLUSIONS

In this work, we performed a series of empirical analyses to understand the geometry of the loss functions corresponding to deep neural networks, and how different optimization methods minimize this loss, to answer the two questions posed in the introduction.

What types of changes to the optimization procedure result in different local minima? We found that every type of change to the optimization procedure we tested resulted in a different local minimum. Different local minima were found using the different optimization algorithms from the same initialization (Section 4.1). Even switching the optimization algorithm very late in optimization, during the slow "minimization" portion of learning, resulted in a different local minimum (Section 4.3). The quality of the local minima found, in terms of training and generalization error, is similar. These different local minima were not equivalent, and made mistakes on different test examples (Section 4.1). Thus, they were not trivially different local minima, as would occur if nodes in internal layers of the network were permuted. We observed that the quality of these local minima was only consistently good when we used batch normalization for regularization. Without batch normalization, the quality of the critical points found depended on the initialization, and some solutions found were not as good as others. Our observations are in contrast to the conclusions of Goodfellow et al. (2015), i.e. that local minima are not a problem in deep learning because, in the region of the loss function explored by SGD algorithms, the loss function is well-behaved. Instead, our observations are more consistent with the explanation that the local minima found by popular SGD optimization methods are almost all good (Choromanska et al., 2015; Kawaguchi, 2016; Soudry & Carmon, 2016).

Do different optimization algorithms find qualitatively different types of local minima?
Interestingly, we found that, while the local minima found by the same optimization algorithm from different initializations were different, the shape of the loss function around these local minima was strikingly similar, and was a characteristic of the optimization algorithm. In particular, we found that the size of the basin around ADAM-based optimization was larger than that around vanilla SGD (Section 4.2). A large basin is related to a large margin, as small changes in the weight vector will not affect the training error, and perhaps could have some implications for generalization error. In our experiments, however, we did not observe better generalization error for ADAM than SGD. Questions for potential future research are why the shapes of the loss functions around different local minima found by the same algorithm are so similar, and what the practical implications of this are.

REFERENCES

Pierre Baldi. Linear learning: Landscapes and algorithms. In Advances in Neural Information Processing Systems, pp. 65–72, 1989.

Pierre Baldi and K. Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2:53–58, 1989.

Pierre Baldi and Zhiqin Lu. Complex-valued autoencoders. Neural Networks, 33:136–147, 2012.

Leon Bottou. Stochastic gradient learning in neural networks. In Proceedings of Neuro-Nîmes, 1991.

Alan J. Bray and David S. Dean. The statistics of critical points of Gaussian fields on large-dimensional spaces. Physical Review Letters, 2007.

C. G. Broyden. The convergence of a class of double-rank minimization algorithms 1. General considerations. Journal of Applied Mathematics, 6:76–90, 1970.

John C. Butcher. Coefficients for the study of Runge-Kutta integration processes. Journal of the Australian Mathematical Society, 3:185–201, 1963.

Anna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. arXiv preprint arXiv:1412.0233, 2015.

Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. arXiv preprint arXiv:1406.2572, 2014.

Murat A. Erdogdu and Andrea Montanari. Convergence rates of sub-sampled Newton methods. In Proceedings of the Neural Information Processing Systems (NIPS), 2015.

Yan V. Fyodorov and Ian Williams. Replica symmetry breaking condition exposed by random matrix calculation of landscape complexity. Journal of Statistical Physics, 129:1081–1161, 2007.

Xavier Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, 2010.

Ian J. Goodfellow, Oriol Vinyals, and Andrew M. Saxe. Qualitatively characterizing neural network optimization problems. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.

Ernst Hairer, Syvert P. Nørsett, and Gerhard Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems. Springer, 1987.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv preprint arXiv:1502.01852, 2015.

S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9:1735–1780, 1997.

Kenji Kawaguchi. Deep learning without poor local minima. arXiv preprint arXiv:1605.07110, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization.
In Proceedings of the International Conference on Learning Representations (ICLR), 2014.

Alex Krizhevsky. Learning multiple layers of features from tiny images. MSc thesis, University of Toronto, 2009.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.

James Martens. Deep learning via Hessian-free optimization. In Proceedings of the International Conference of Machine Learning (ICML), 2010.

Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). Soviet Mathematics Doklady, 27:372–376, 1983.

Giorgio Parisi. Probabilistic line searches for stochastic optimization. arXiv preprint arXiv:0706.0094, 2016.

B. T. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964.

Herbert Robbins and Sutton Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22(3):400–407, 1951.

O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. arXiv preprint arXiv:1409.0575, 2014.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations, 2014.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR), 2015.

Daniel Soudry and Yair Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.

Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of momentum and initialization in deep learning. In Proceedings of the International Conference of Machine Learning (ICML), 2013.

Grzegorz Swirszcz, Wojciech Marian Czarnecki, and Razvan Pascanu. Local minima in training of deep networks. In International Conference on Artificial Intelligence and Statistics, 2016.

Tijmen Tieleman and Geoffrey Hinton. RMSprop gradient optimization. Lecture slides, Neural Networks for Machine Learning: http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf, 2012.

Matthew D. Zeiler, Graham W. Taylor, and Rob Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In International Conference on Computer Vision, 2011.

A SUPPLEMENTARY MATERIALS

A.1 3D VISUALIZATION

Goodfellow et al. (2015) introduced the idea of visualizing a 1D subspace of the loss surface between two sets of parameters. Here, we propose to visualize the loss surface in 3D through interpolation over three or four vertices.

Linear Interpolation Given two parameters \( \theta_1 \) and \( \theta_2 \),

\[ \theta_i = \alpha \theta_1 + (1 - \alpha) \theta_2, \qquad \forall \alpha \in [0, 1]. \tag{2} \]

Bilinear Interpolation Given four parameters \( \theta_1, \theta_2, \theta_3, \) and \( \theta_4 \),

\[ \phi_i = \alpha \theta_1 + (1 - \alpha) \theta_2 \tag{3} \]
\[ \varphi_i = \alpha \theta_3 + (1 - \alpha) \theta_4 \tag{4} \]
\[ \theta_j = \beta \phi_i + (1 - \beta) \varphi_i \tag{5} \]

for all \( \alpha \in [0, 1] \) and \( \beta \in [0, 1] \).

Barycentric Interpolation Given three parameters \( \theta_0, \theta_1, \) and \( \theta_2 \), let \( d_1 = \theta_1 - \theta_0 \) and \( d_2 = \theta_2 - \theta_0 \). Then the interpolation is

\[ \phi_i = \alpha d_1 + \theta_0 \tag{6} \]
\[ \varphi_i = \alpha d_2 + \theta_0 \tag{7} \]
\[ \theta_j = \beta \phi_i + (1 - \beta) \varphi_i \tag{8} \]

for all \( \alpha \in [0, 1] \) and \( \beta \in [0, 1] \).
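The following is a minimal sketch of these two-dimensional interpolation schemes (Equations 3-8), again assuming flattened weight vectors and a hypothetical `loss_fn` that evaluates the loss at a weight vector:

```python
import numpy as np

def bilinear_surface(t1, t2, t3, t4, loss_fn, n=20):
    """Loss over the region spanned by four weight vectors (Equations 3-5):
    interpolate along the (t1, t2) and (t3, t4) edges, then between them."""
    grid = np.zeros((n, n))
    for i, a in enumerate(np.linspace(0.0, 1.0, n)):
        phi = a * t1 + (1.0 - a) * t2
        psi = a * t3 + (1.0 - a) * t4
        for j, b in enumerate(np.linspace(0.0, 1.0, n)):
            grid[i, j] = loss_fn(b * phi + (1.0 - b) * psi)
    return grid

def barycentric_surface(t0, t1, t2, loss_fn, n=20):
    """Loss over the region spanned by three weight vectors (Equations 6-8)."""
    d1, d2 = t1 - t0, t2 - t0
    grid = np.zeros((n, n))
    for i, a in enumerate(np.linspace(0.0, 1.0, n)):
        phi = a * d1 + t0
        psi = a * d2 + t0
        for j, b in enumerate(np.linspace(0.0, 1.0, n)):
            grid[i, j] = loss_fn(b * phi + (1.0 - b) * psi)
    return grid
```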
A.2 OPTIMIZATION METHODS

A.2.1 STOCHASTIC GRADIENT DESCENT

In many deep learning applications, both the number of parameters and the quantity of input data points can be quite large. This makes the full evaluation of \( U(\theta) \) prohibitively expensive. A standard technique for alleviating the computational load is to apply a stochastic approximation to the gradient (Robbins & Monro, 1951). More precisely, one approximates \( U \) by a subset of \( n \) data points, indexed by \( \{ \sigma_j \}_{j=1}^n \), at each timestep:

\[ U^n(\theta) = \frac{1}{n} \sum_{j=1}^n \ell(\theta, x_{\sigma_j}) \simeq \frac{1}{N} \sum_{i=1}^N \ell(\theta, x_i) = U(\theta) \tag{9} \]

Of course, this approximation also carries over to the gradient, which is of vital importance to optimization techniques:

\[ \nabla U^n(\theta) = \frac{1}{n} \sum_{j=1}^n \nabla \ell(\theta, x_{\sigma_j}) \simeq \nabla U(\theta) \tag{10} \]

This method is what is commonly called stochastic gradient descent, or SGD. So long as the data is distributed nicely, the approximation error of \( U^n \) should be sufficiently small that not only will SGD still behave like full-batch gradient descent, but its wall-clock time to converge should be significantly lower as well. Usually one uses the stochastic gradient rather than the true gradient, but the inherent noisiness must be kept in mind. In what follows, "gradient" will always mean the stochastic gradient.

A.2.2 MOMENTUM

In order to alleviate both noise in the input data and noise from the stochasticity used in computing quantities, one often maintains a history of previous evaluations. To require only one extra variable, one usually stores variables of the form

\[ \mathbb{E}[F]_t = \alpha F_t + \beta \mathbb{E}[F]_{t-1}, \tag{11} \]

where \( F_t \) is some value changing over time and \( \mathbb{E}[F]_t \) is the averaged quantity. An easy application of this scheme is to compute a rolling weighted average of gradients, such as \( \mathbb{E}[g]_t = (1 - \alpha) g_t + \alpha \mathbb{E}[g]_{t-1} \), but there will be other uses below.

A.2.3 PERTINENT METHODS

With the aforementioned tools, a variety of methods can be constructed. We choose to view these algorithms as implementations of explicit Euler on a variety of different vector fields, to remove the ambiguity between \( \eta \) and \( g_t \). We therefore define a method by the vector field \( \mathcal{X}_t \) to which explicit Euler is applied, with a single \( \eta \) that is never changed.

SGD with Momentum (SGDM) By simply applying momentum to \( g_t \), one obtains this stabilized stochastic version of gradient descent:

\[ \mathcal{X}_t = -\mathbb{E}[g]_t. \tag{12} \]

This is the most fundamental method used in practice and the basis for everything that follows.

Adagrad Adagrad rescales \( \mathcal{X}_t \) by summing up the squares of all previous gradients in a coefficient-wise fashion:

\[ \mathcal{X}_t = - \frac{g_t}{\sqrt{\sum_{i=1}^t g_i^2 + \epsilon}}. \tag{13} \]

Here \( \epsilon \) is simply set to some small positive value to prevent division by zero. In what follows we omit this term from denominators, although it is always needed. The concept is to accentuate variations in \( g_t \); but because the denominator is monotonically nondecreasing over time, this method is doomed to retard its own progress. The denominator can also be seen as a form of momentum in which \( \alpha \) and \( \beta \) are both set to 1.
RMSprop A simple generalization of Adagrad is to allow \( \alpha \) and \( \beta \) to differ from 1. In particular, one usually chooses \( \beta \) less than 1 and, presumably, \( \alpha = 1 - \beta \). Thus one arrives at a method in which the effect of distant history is diminished:

\[ \mathcal{X}_t = - \frac{g_t}{\sqrt{\mathbb{E}[g^2]_t}}. \tag{14} \]

Adadelta Adadelta adds another term to RMSprop in order to guarantee that the magnitude of \( \mathcal{X} \) is balanced with \( g_t \) (Zeiler et al., 2011). More precisely, it maintains

\[ \frac{\mathcal{X}_t}{\sqrt{\mathbb{E}[\mathcal{X}_t^2]}} = - \frac{g_t}{\sqrt{\mathbb{E}[g_t^2]}}, \tag{15} \]

which results in the following vector field:

\[ \mathcal{X}_t = - \frac{\sqrt{\mathbb{E}[\mathcal{X}_t^2]}}{\sqrt{\mathbb{E}[g_t^2]}} g_t, \tag{16} \]

and \( \eta \) is set to 1.

ADAM By applying momentum to both \( g_t \) and \( g_t^2 \), one arrives at what is called ADAM. This is often considered a combination of SGDM and RMSprop:

\[ \mathcal{X}_t = -c_t \frac{\mathbb{E}[g]_t}{\sqrt{\mathbb{E}[g^2]_t}}. \tag{17} \]

Here \( c_t = \frac{\sqrt{1-\beta_2^t}}{1-\beta_1^t} \) is the initialization-bias correction term, with \( \beta_1, \beta_2 \in [0, 1) \) being the \( \beta \) parameters used in the momentum for \( g \) and \( g^2 \), respectively. Initialization bias is caused by the history of the momentum variables being initialized to zero.
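To make the update rules above concrete, here is a minimal numpy sketch of two of these vector fields, framed (as in the text) as explicit Euler steps on \( \mathcal{X}_t \). The decay constants and the \( \epsilon \) guard are illustrative assumptions, not the exact values used in the experiments above.

```python
import numpy as np

EPS = 1e-8  # the small division-by-zero guard noted for Adagrad

def ema(avg, value, beta):
    """Rolling weighted average E[F]_t = (1 - beta) * F_t + beta * E[F]_{t-1} (Eq. 11)."""
    return (1.0 - beta) * value + beta * avg

class RMSpropField:
    """Vector field of Eq. 14: the gradient scaled by a decayed average of g^2."""
    def __init__(self, beta=0.9):
        self.beta, self.v = beta, 0.0
    def __call__(self, g):
        self.v = ema(self.v, g * g, self.beta)
        return -g / (np.sqrt(self.v) + EPS)

class AdamField:
    """Vector field of Eq. 17: momentum on both g and g^2, with the
    initialization-bias correction c_t = sqrt(1 - beta2^t) / (1 - beta1^t)."""
    def __init__(self, beta1=0.9, beta2=0.999):
        self.b1, self.b2, self.m, self.v, self.t = beta1, beta2, 0.0, 0.0, 0
    def __call__(self, g):
        self.t += 1
        self.m = ema(self.m, g, self.b1)
        self.v = ema(self.v, g * g, self.b2)
        c = np.sqrt(1.0 - self.b2 ** self.t) / (1.0 - self.b1 ** self.t)
        return -c * self.m / (np.sqrt(self.v) + EPS)

def euler_step(theta, field, g, eta):
    """Explicit Euler on the chosen vector field: theta <- theta + eta * X_t."""
    return theta + eta * field(g)
```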
A.3 RUNGE-KUTTA

Runge-Kutta methods (Butcher, 1963) are a broad class of numerical integrators categorized by their truncation error. Because the ordinary differential equations that Runge-Kutta methods solve generalize gradient descent, our augmentation is quite straightforward. Although our method applies to all explicit Runge-Kutta methods, we describe only second-order methods for simplicity. The general form of a second-order explicit Runge-Kutta method on a time-independent vector field is

\[ \theta_{t+1} = \theta_t + (a_1 k_1 + a_2 k_2) h \tag{18} \]
\[ k_1 = \mathcal{X}(\theta_t) \tag{19} \]
\[ k_2 = \mathcal{X}(\theta_t + q_1 h k_1) \tag{20} \]

where \( a_1, a_2, \) and \( q_1 \) are parameters that define a given Runge-Kutta method. Table 3 lists the parameters of the Runge-Kutta variants used in our experiments.

<table>
<tr> <th>Method Name</th> <th>\( a_1 \)</th> <th>\( a_2 \)</th> <th>\( q_1 \)</th> </tr>
<tr> <td>Midpoint</td> <td>0</td> <td>1</td> <td>\( \frac{1}{2} \)</td> </tr>
<tr> <td>Heun</td> <td>\( \frac{1}{2} \)</td> <td>\( \frac{1}{2} \)</td> <td>1</td> </tr>
<tr> <td>Ralston</td> <td>\( \frac{1}{4} \)</td> <td>\( \frac{3}{4} \)</td> <td>\( \frac{2}{3} \)</td> </tr>
</table>

Table 3: The coefficients of various second-order Runge-Kutta methods (Hairer et al., 1987). For second-order accuracy, the coefficients must satisfy \( a_1 + a_2 = 1 \) and \( a_2 q_1 = \frac{1}{2} \).

A.3.1 AUGMENTING OPTIMIZATION WITH RUNGE-KUTTA

For a given timestep, explicit integrators can be seen as a morphism over vector fields \( \mathcal{X} \to \tilde{\mathcal{X}}^h \). For a gradient \( g_t = \nabla_\theta U \), we can solve for a modified RK2 gradient \( \bar{g}_t \) in the following fashion:

\[ \theta_{t+1} = \theta_t + \bar{g}_t h = Advect^{rk2}_{g}(\theta, h) \tag{21} \]

rearranged with respect to \( \bar{g}_t \):

\[ \bar{g}_t = \frac{Advect^{rk2}_{g}(\theta, h) - \theta_t}{h} \tag{22} \]
\[ = \frac{\theta_t + (a_1 k_1 + a_2 k_2) h - \theta_t}{h} \tag{23} \]
\[ = a_1 k_1 + a_2 k_2. \tag{24} \]

If we simply substitute the gradient \( g_t \) with \( \bar{g}_t \), one obtains an RK2-augmented optimization technique.
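A minimal sketch of this RK2 augmentation (Equations 18-24), assuming a hypothetical `grad_fn` that returns the (stochastic) gradient \( \nabla_\theta U \); the default coefficients are Heun's from Table 3.

```python
def rk2_gradient(theta, grad_fn, h, a1=0.5, a2=0.5, q1=1.0):
    """Modified gradient g_bar of Equations 21-24: evaluate the vector field
    X = -grad U at theta (k1) and at a trial step ahead (k2), then combine."""
    k1 = -grad_fn(theta)
    k2 = -grad_fn(theta + q1 * h * k1)
    return a1 * k1 + a2 * k2

def rk2_sgd_step(theta, grad_fn, h):
    """One RK2-augmented descent step: theta <- theta + h * g_bar (Eq. 18)."""
    return theta + h * rk2_gradient(theta, grad_fn, h)
```

Note that each step requires two gradient evaluations, which is why the learning rates were doubled for the RK2 variants in Section 3.4.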
A.4 EXPERIMENTS WITH THE RUNGE-KUTTA INTEGRATOR

The results in Figure 9 illustrate that, with the exception of the Midpoint method, stochastic Runge-Kutta methods outperform SGD. "SGD x2" is stochastic gradient descent with twice the learning rate of "SGD". From the figure, we observe that the Runge-Kutta methods perform better even with half the number of gradient computations of SGD. The reason is that SGD has an accumulated truncation error of \( O(h) \), while second-order Runge-Kutta methods have an accumulated truncation error of \( O(h^2) \). Unfortunately, ADAM outperforms the ADAM+RK2 methods. We speculate that this is because ADAM's renormalization of input gradients, in conjunction with momentum, eliminates the value added by our RK-based descent directions.

A.5 EFFECTS OF BATCH-NORMALIZATION AND EXTREME INITIALIZATIONS

Neural networks are typically initialized with very small parameter values (Glorot & Bengio, 2010; He et al., 2015). Instead, we trained NIN with exotic initializations, such as initial parameters drawn from \( \mathcal{N}(-10.0, 0.01) \) or \( \mathcal{N}(-1.0, 1.0) \), and observed the loss surface behavior. The results are shown in Figure 10. We can see that NIN without BN does not train at all with any of these initializations. Swirszcz et al. (2016) attributed the bad performance of neural networks trained with such initializations to finding a bad local minimum. However, we see that the loss surface region around these initializations is a plateau rather than a bad local minimum, as shown in Figure 11b. (We used the same initializations as Swirszcz et al. (2016), but we trained different neural networks with SGD on a different dataset: we used NIN on CIFAR10, while Swirszcz et al. (2016) used a smaller neural network on MNIST.) On the other hand, NIN with BN does train, slowly over time, and finds a local minimum. This implies that BN remedies the ill-posed (plateau) region of the loss surface. Nevertheless, the local minimum it found was not as good as when the parameters were initialized with small values. However, it is not entirely clear whether this is due to the difficulty of training or due to falling into a bad local minimum.

Figure 9: Training accuracy curves. (a) NIN - SGD & RK2. (b) VGG - SGD & RK2. (c) NIN - ADAM & ADAM+RK2. (d) VGG - ADAM & ADAM+RK2.

Figure 10: Interpolation from initial points to final points up to \( \alpha = 2 \) in Equation (2).

Figure 11: NIN trained from different initializations. (a) NIN - Learning curve. (b) NIN without batch normalization. (c) NIN with batch normalization.

A.6 SWITCHING OPTIMIZATION METHODS

Figure 12: NIN - Learning curves when switching methods from SGD to ADAM and vice versa at epochs 50 and 100. Learning rate switched from SGD (ADAM) to ADAM (SGD) at (left) 0.001 (0.1) to 0.1 (0.001), (middle) 0.001 (0.1) to 0.05 (0.001), and (right) 0.001 (0.1) to 0.01 (0.001).

(a) The learning rates are set to 0.001 and 0.05 for ADAM and SGD in the beginning, and then switched to 0.05 and 0.001 for SGD and ADAM. (b) The learning rates are set to 0.001 and 0.05 for ADAM and SGD in the beginning, and then switched to 0.05 and 0.0005 for SGD and ADAM. (c) The learning rates are set to 0.001 and 0.05 for ADAM and SGD in the beginning, and then switched to 0.01 and 0.0001 for SGD and ADAM.

Figure 13: VGG - Switching methods from SGD to ADAM and from ADAM to SGD at epochs 50 and 100. Zoomed-in version (left). Distance from the initial weights to the weights at each epoch (middle). The interpolation between different convergence parameters (right). Each figure shows the results of switching methods at a different learning rate. We label the switch of methods in terms of the epoch ratio; for instance, S100-A100 was trained with SGD for the first 100 epochs and switched to ADAM for the remaining epochs.

(a) No learning rate is required for Adadelta. The learning rate is set to 0.05 for SGD in the beginning, and then switched to 0.1. (b) The learning rate is set to 0.05 for SGD. (c) The learning rate is set to 0.05 for SGD in the beginning, and then switched to 0.01.

Figure 14: VGG - Switching methods from SGD to Adadelta and from Adadelta to SGD at epochs 50 and 100. Zoomed-in version (left). Distance from the initial weights to the weights at each epoch (middle). The interpolation between different convergence parameters (right). Each figure shows the results of switching methods at a different learning rate. We label the switch of methods in terms of the epoch ratio; for instance, S50-A50 was trained with SGD for the first 50 epochs and switched to Adadelta for the remaining epochs.

(a) No learning rate is required for Adadelta. The learning rate is set to 0.05 for SGD in the beginning, and then switched to 0.1.

Figure 15: VGG - Switching methods from ADAM to Adadelta and from Adadelta to ADAM at epochs 50 and 100. Zoomed-in version (left). Distance from the initial weights to the weights at each epoch (middle). The interpolation between different convergence parameters (right). Each figure shows the results of switching methods at a different learning rate. We label the switch of methods in terms of the epoch ratio; for instance, a label of the form X50-Y50 indicates training with the first method for the first 50 epochs and switching to the second method for the remaining epochs.
ABSTRACT The training of deep neural networks is a high-dimension optimization problem with respect to the loss function of a model. Unfortunately, these functions are of high dimension and non-convex and hence difficult to characterize. In this paper, we empirically investigate the geometry of the loss functions for state-of-the-art networks with multiple stochastic optimization methods. We do this through several experiments that are visualized on polygons to understand how and when these stochastic optimization methods find local minima. 1 INTRODUCTION Deep neural networks are trained by optimizing an extremely high-dimensional loss function with respect to the weights of the network’s linear layers. The objective function minimized is some measure of the error of the network’s predictions based on these weights compared to training data. This loss function is non-convex and has many local minima. These loss functions are usually minimized using first-order gradient descent (Robbins & Monro [1951] Polyak [1964]) algorithms such as stochastic gradient descent (SGD) (Bottou [1991]). The success of deep learning critically depends on how well we can minimize this loss function, both in terms of the quality of the local minima found and the time to find them. Understanding the geometry of this loss function and how well optimization algorithms can find good local minima is thus of vital importance. Several works have theoretically analyzed and characterized the geometry of deep network loss functions. However, to make these analyses tractible, they have relied on simplifications of the network structures, including that the networks are linear (Saxe et al. [2014]), or assuming the path and variable independence of the neural networks (Choromanska et al. [2015]). Orthogonally, the performance of various gradient descent algorithms has been theoretically characterized (Nesterov [1983]). Again, these analyses make simplifying assumptions, in particular that the loss function is strictly convex, i.e. there is only a single local minimum. In this work, we empirically investigated the geometry of the real loss functions for state-of-the-art networks and data sets. In addition, we investigated how popular optimization algorithms interact with these real loss surfaces. To do this, we plotted low-dimensional projections of the loss function in subspaces chosen to investigate properties of the local minima selected by different algorithms. We chose these subspaces to address the following questions: • What types of changes to the optimization procedure result in different local minima? • Do different optimization algorithms find qualitatively different types of local minima? 2 RELATED WORK 2.1 LOSS SURFACES There have been several attempts to understand the loss surfaces of deep neural networks. Some have studied the critical points of the deep linear neural networks (Baldi [1989] Baldi & Hornik [1989; Baldi & Lu [2012]. Others further investigated the learning dynamics of the deep linear neural networks (Saxe et al., 2014). More recently, several others have attempted to study the loss surfaces of deep non-linear neural networks (Choromanska et al., 2015; Kawaguchi, 2016; Soudry & Carmon, 2016). One approach is to analogize the states of neurons as the magnetics dipoles used in spherical spin-glass Ising models from statistical physics (Paris, 2016; Fyodorov & Williams, 2007; Bray & Dean, 2007). 
Choromanska et al., (2015) attempted to understand the loss function of neural networks through studying the random Gaussian error functions of Ising models. Recent results (Kawaguchi, 2016; Soudry & Carmon, 2016) have provided cursory evidence in agreement with the theory provided by Choromanska et al., (2015) in that they found that there are no “poor” local minima in neural networks still with strong assumptions. There is some potential disconnect between these theoretical results and what is found in practice due to several strong assumptions such as the activation of the hidden units and output being independent of the previous hidden units and input data. The work of Dauphin et al., (2014) empirically investigated properties of the critical points of neural network loss functions and demonstrated that their critical points behave similarly to the critical points of random Gaussian error functions in high dimensional space. We will expose further evidence along this trajectory. 2.2 OPTIMIZATION In practice, the local minima of deep network loss functions are for the most part decent. This implies that we probably do not need to take many precautions to avoid bad local minima in practice. If all local minima are decent, then the task of finding a decent local minimum quickly is reduced to the task of finding any local minimum quickly. From an optimization perspective this implies that solely focusing on designing fast methods are of key importance for training deep networks. In the literature the common method for measuring performance of optimization methods is to analyze them on nice convex quadratic functions (Polyak, 1964; Broyden, 1970; Nesterov, 1983; Martens, 2010; Erdogdu & Montanari, 2015) even though the problems are applied to non-convex problems. For non-convex problems, however, if two methods converge to different local minima their performance will be dictated on how those methods solve those two convex subproblems. It is challenging to show that one method will beat another without knowledge of the sort of convex subproblems, which is generally not known apriori. What we will explore is whether indeed are some characteristics that can found experimentally. If so, perhaps one could validate where these analytical results are valid or even improve methods for training neural networks. 2.2.1 LEARNING PHASES ![An example of learning curve of neural network](page_1012_1342_496_312.png) Figure 1: An example of learning curve of neural network One of the interesting empirical observation is that we often observe is that the incremental improvement of optimization methods decreases rapidly even in non-convex problems. This behavior has been discussed as a “transient” phase followed by a “minimization” phase (Sutskever et al., 2013) where the former finds the neighborhood of a decent local minima and the latter finds the local minima within that neighborhood. The existence of these phases implies that if certain methods are better at different phases one could create novel methods that schedule when to apply each method. 3 EXPERIMENTAL SETUP AND TOOLS 3.1 NETWORK ARCHITECTURES AND DATA SETS We conducted experiments on three state-of-the-art neural network architectures. Network-in-Network (NIN) (Lin et al. [2014]) and the VGG (Simonyan & Zisserman [2015]) network are feed-forward convolutional networks developed for image classification, and have excellent performance on the Imagenet (Russakovsky et al. [2014]) and CIFAR10 (Krizhevsky [2009]) data sets. 
The long short-term memory network (LSTM) (Hochreiter & Schmidhuber [1997]) is a recurrent neural network that has been successful in tasks that take variable-length sequences as input and/or produce variable-length sequences as output, such as speech recognition and image caption generation. These are large networks currently used in many machine vision and learning tasks, and the loss functions minimized by each are highly non-convex. All results using the feed-forward convolutional networks (NIN and VGG) are on the CIFAR10 image classification data set, while the LSTM was tested on the Penn Treebank next-word prediction data set. 3.2 OPTIMIZATION METHODS We analyzed the performance of five popular gradient-descent optimization methods for these learning frameworks: Stochastic gradient descent (SGD) (Robbins & Monro [1951]), stochastic gradient descent with momentum (SGDM), RMSprop (Tieleman & Hinton [2012]), Adadelta (Zeiler et al. [2011]), and ADAM (Kingma & Ba [2014]). These are all first-order gradient descent algorithms that estimate the gradients based on randomly-grouped minibatches of training examples. One of the major differences between these algorithms is how they select the weight-update step-size at each iteration, with SGD and SGDM using fixed schedules, and RMSprop, Adadelta, and ADAM using adaptive, per-parameter step-sizes. Details are provided in Section A.2 In addition to these five existing optimization methods, we compare to a new gradient descent method we developed based on the family of Runge Kutta integrators. In our experiments, we tested a second-order Runge-Kutta integrator in combination with SGD (RK2) and in combination with ADAM (ADAM&RK2). Details are provided in Section A.3. 3.3 ANALYSIS METHODS Several of our empirical analyses are based on the technique of Goodfellow et al. (Goodfellow et al. [2015]). They visualize the loss function by projecting it down to one carefully chosen dimension. They plot the value of the loss function along a set of samples along this dimension. The projection space is chosen based on important weight configurations, thus they plot the value of the loss function at linear interpolations between two weight configurations. They perform two such analyses: one in which they interpolate between the initialization weights and the final learned weights, and one in which they interpolate between two sets of final weights, each learned from different initializations. In this work, we use a similar visualization technique, but choose different low-dimensional sub-spaces for the projection of the loss function. These subspaces are based on the initial weights as well as the final weights learned using the different optimization algorithms and combinations of them, and are chosen to answer a variety of questions about the loss function and how the different optimization algorithms interact with this loss function. In contrast, Goodfellow et al. only looked at SGDM. In addition, we explore the use of two-dimensional projections of the loss function, allowing us to better visualize the space between local minima. We do this via barycentric and bilinar interpolation for triplets and quartets of points respectively (details in Section A.1). We refer to the critical points found using these variants of SGD, for which the gradient is approximately 0, as local minima. 
Our evidence that these are local minima, as opposed to saddle points, is similar to that presented in Goodfellow et al. (2015): if we interpolate beyond the critical point in this one-dimensional projection, the loss increases (Fig. 10).

Figure 2: Visualization of the loss surface at weights interpolated between two initial configurations and the final weight vectors learned using SGD from these initializations.

Figure 3: Visualization of the loss surface at weights interpolated between the weights learned by four different algorithms from the same initialization.

3.4 TECHNICAL DETAILS

We used the VGG and NIN implementations from https://github.com/szagoruyko/cifar.torch.git. The batch size was set to 128 and the number of epochs was set to 200. The learning rate was chosen from the discrete set {0.2, 0.1, 0.05, 0.01} for SGD and {0.002, 0.001, 0.0005, 0.0001} for the adaptive learning methods. We doubled the learning rates when we ran our augmented versions with Runge-Kutta because they required two stochastic gradient computations per epoch. We used batch normalization and dropout to regularize our networks. All experiments were run on a 6-core Intel(R) Xeon(R) CPU @ 2.40GHz with a TITAN X.

4 EXPERIMENTAL RESULTS

4.1 DIFFERENT OPTIMIZATION METHODS FIND DIFFERENT LOCAL MINIMA

We trained the neural networks described above using each optimization method, starting from the same initial weights and with the same minibatching. We computed the value of the loss function for weight vectors interpolated between the initial weights, the final weights for one algorithm, and the final weights for a second algorithm, for several pairings of algorithms. The results are shown in the lower triangle of Table 1. For every pair of optimization algorithms, we observe that the training loss between the final weights for different algorithms shows a sharp increase along the interpolated path. This suggests that each optimization algorithm found a different critical point, despite starting at the same initialization. We investigated the space between other triplets and quadruples of weight vectors (Figures 2 and 3), and even in these projections of the loss function, we still see that the local minima returned by different algorithms are separated by regions of high loss.

Deep networks are overparameterized. For example, if we swap all corresponding weights for a pair of nodes in our network, we obtain effectively the same network, with both the original and permuted networks producing the same prediction for a given input. To ensure that the weight vectors returned by the different algorithms were functionally different, we compared the outputs of the networks on each example in a validation data set:

\[ \text{dist}(\theta_1, \theta_2) = \sqrt{\frac{1}{N_{test}} \sum_{i=1}^{N_{test}} \| F(x_i, \theta_1) - F(x_i, \theta_2) \|^2 }, \]

where \( \theta_1 \) and \( \theta_2 \) are the weights learned by two different optimization algorithms, \( x_i \) is the input for a validation example, and \( F(x, \theta) \) is the output of the network for weights \( \theta \) on input \( x \).
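As a sketch of how this functional difference, and the label disagreement reported below, can be computed from cached validation outputs (array shapes are illustrative assumptions):

```python
import numpy as np

def functional_distance(outputs_1, outputs_2):
    """dist(theta_1, theta_2) as defined above; outputs_k has shape
    (N_test, num_classes) and holds F(x_i, theta_k) for every example."""
    n_test = outputs_1.shape[0]
    return np.sqrt(np.sum((outputs_1 - outputs_2) ** 2) / n_test)

def label_disagreement(outputs_1, outputs_2):
    """Fraction of validation examples on which the two networks predict
    different labels (used for the upper triangle of Table 1)."""
    return np.mean(outputs_1.argmax(axis=1) != outputs_2.argmax(axis=1))
```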
Table 1: Visualization of the loss surface near and between local minima found by different optimization methods, shown as a 6×6 grid of surface plots indexed by SGD, RMSprop, Adadelta, Adam, RK2, and Adam&RK2. Each box corresponds to a pair of optimization methods. In the lower triangle, we plot the projection of the loss surface at weight vectors between the initial weights and the learned weights found by the two optimization methods. Color as well as height of the surface indicate the loss function value. In the upper triangle, we plot the functional difference between the network corresponding to the learned weights for the first algorithm and networks corresponding to weights linearly interpolated between the first and second algorithm's learned weights. (Best viewed in zoom)

Figure 4: (a) Training accuracy and (b) test accuracy for each of the optimization methods. Colors correspond to different initializations.

We found that, for all pairs of algorithms, the average distance between the outputs of the networks (Equation 3.1) was approximately 0.16, corresponding to a label disagreement of about 8% (upper triangle of Table 1). Given the generalization error of these networks (approximately 11%, Figure 4), the maximum disagreement we could see was 22%. Thus, these networks disagreed on a large fraction of the test examples – over a third of the maximum possible. The local minima found by different algorithms therefore correspond to effectively different networks, not trivial reparameterizations of the same one.

Figure 5: Loss function value near local minima found by multiple restarts of each algorithm. (a) NIN - Initial to Final. (b) NIN - Final to Final. (c) VGG - Initial to Final. (d) VGG - Final to Final.
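The two-dimensional projections behind Figures 2 and 3 can be produced along the following lines. This is one plausible reading of the barycentric and bilinear interpolation mentioned in Section 3.3; the exact scheme is in Section A.1, so the details here are assumptions:

```python
import numpy as np

def barycentric_grid(theta_a, theta_b, theta_c, loss_fn, steps=20):
    """Loss surface over the triangle spanned by three flattened weight
    vectors, via barycentric interpolation (as in Figures 2 and 3)."""
    losses = {}
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            w1, w2 = i / steps, j / steps
            w3 = 1.0 - w1 - w2  # barycentric weights sum to one
            losses[(w1, w2)] = loss_fn(w1 * theta_a + w2 * theta_b + w3 * theta_c)
    return losses

def bilinear_grid(theta_a, theta_b, theta_c, theta_d, loss_fn, steps=20):
    """Loss surface over the quadrilateral spanned by four weight vectors,
    via bilinear interpolation."""
    grid = np.zeros((steps + 1, steps + 1))
    for i, u in enumerate(np.linspace(0.0, 1.0, steps + 1)):
        for j, v in enumerate(np.linspace(0.0, 1.0, steps + 1)):
            theta = ((1 - u) * (1 - v) * theta_a + u * (1 - v) * theta_b
                     + (1 - u) * v * theta_c + u * v * theta_d)
            grid[i, j] = loss_fn(theta)
    return grid
```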
4.2 DIFFERENT OPTIMIZATION ALGORITHMS FIND DIFFERENT TYPES OF LOCAL MINIMA

Next, we investigated whether the local minima found by the different optimization algorithms had distinguishing properties. To do this, we trained the networks with each optimization algorithm using different initial parameters. We then compared differences between runs of the same algorithm with different initializations to differences between different algorithms. As shown in Figure 4(a), in terms of training accuracy, we do see some stereotypy in the optima found by different algorithms, with SGD finding local minima with the lowest training accuracy and ADAM, RMSprop, and Adadelta finding local minima with the highest training accuracy. However, this could be attributed to SGD's asymptotically slow convergence near local minima, due to the gradient diminishing near extrema. Despite this limitation, Figure 4(b) shows that the generalization accuracy of these different local minima on validation data was not significantly different between algorithms. We also did not see a relationship between the weight initialization and the validation accuracy. Thus, while these algorithms fall into different local minima, they are not different in terms of their final quality.

We visualized the loss surface around each of the local minima for the multiple runs. To do this, we plotted the value of the loss function between the initial and final weights for each algorithm (Figure 5(a,c)), for each run of the algorithm from a different initialization. In addition, we plotted the value of the loss function between the final weights for selected pairs of algorithms for each run (Figure 5(b,d)).

Figure 6: The absolute size of the basin around the local minima found by different optimization methods.

We see that the surfaces look strikingly similar for different runs of the same algorithm, but characteristically different for different algorithms. Thus, we found evidence that the different algorithms land in qualitatively different types of local minima. In particular, we see in Figure 5(a,c) that the size of the basins around the local minima found by ADAM and ADAM&RK2 is larger than that of those found by SGD and RK2, i.e., the training loss is small for a wider range of \( \alpha \) values. This is a relative measure: the magnitude of the change in the weight vector is \( \Delta \alpha \| \theta_1 - \theta_0 \| \) for a change of size \( \Delta \alpha \), where \( \theta_0 \) is the initial weight vector and \( \theta_1 \) is the result found by a given optimization algorithm. In Figure 6 we repeat this analysis, instead showing the loss as a function of the absolute distance in parameter space:

\[ \theta(\lambda) = \theta_1 + \lambda \frac{\theta_0 - \theta_1}{\| \theta_0 - \theta_1 \|} \]

We again see that the size of the basin around the local minima varies by optimization algorithm. Note that we evaluate the loss for weight vectors beyond the initial configuration, which had a loss of 2.4.

4.3 ANALYZING LEARNING AFTER THE "TRANSIENT PERIOD"

Recall that, during optimization, it has been observed that there is a short "transient" phase, when the loss decreases rapidly, and a "minimization" phase, in which the loss decreases slowly (Section 2.2.1 and Figure 1).
In this set of experiments, we investigated the effects of switching from one type of optimization method to another at various points during training, in particular at late stages of training, when it is thought that a local minimum has been chosen and is only being localized. We switched from one optimization method to another 25%, 50%, and 75% of the way through training. The results are plotted in Figure 7. We emphasize that we are not switching methods to improve performance, but rather to investigate the shape of the loss function in regions explored during the "minimization" phase of optimization.

We found that, regardless of how late we switch optimization algorithms, as shown in the right-most column of Figure 7, the local minima found were all different. This directly contradicts the notion that the local minimum has effectively been chosen before the "minimization" phase; instead, which local minimum is found is still in flux this late in optimization. This switch from one local minimum to another appears to happen almost immediately after the optimization method switches, with the training accuracy jumping to the characteristic accuracy for the given method within a few epochs (Figure 7, left column). Interestingly, we also see that the distance between the initial and current weight vectors changes drastically after switching from one optimization method to another, and that this distance is characteristic per algorithm (Figure 7, middle column).

Figure 7: Switching from one method to another at epochs 50, 100, and 150. Accuracy curves (left two columns). Distance between the initial weights and the weights at each epoch (middle). The interpolation between different convergence parameters (right). For instance, S100-A100 denotes training with SGD for the first 100 epochs and switching to ADAM for the remaining epochs. (a) The learning rate is set to 0.001 for ADAM, then switched to SGD with learning rate 0.01. (b) The learning rate is set to 0.1 for SGD, then switched to ADAM with learning rate 0.0001. (c) The learning rate is set to 0.001 for ADAM, then switched to Adadelta (no learning rate required). (d) No learning rate is required for Adadelta, then switched to ADAM with learning rate 0.0001. (Best viewed in zoom)

Table 2: Visualization of the loss surface with and without batch normalization, interpolating between initial configurations and the solutions found by ADAM, ADAM&RK2, RMSprop, SGDM, and Adadelta on VGG and NIN.

Figure 8: (Without batch normalization) (Top) Loss function with parameters interpolated from initial to final weights: (a) NIN, (b) VGG, (c) LSTM. (Bottom) Loss function with parameters interpolated between pairs of final weights: (d) NIN, (e) VGG, (f) LSTM. Each column of plots is from a different initialization.
While the distance increases with training epoch for any single optimization method, it actually starts to decrease when switching from ADAM to SGD.

4.4 EFFECTS OF BATCH NORMALIZATION

To understand how batch normalization affects the types of local minima found, we performed a set of experiments comparing loss surfaces near local minima found with and without batch normalization for each of the optimization methods. We visualized the surface near these local minima by interpolating between the initial weights and the final weights, as well as between pairs of final weights found with different algorithms. We observed clear qualitative differences between optimization with (Figure 5) and without (Figure 8) batch normalization. We see that, without batch normalization, the quality of the local minimum found by a given algorithm is much more dependent on the initialization. In addition, the surfaces between different local minima are more complex in appearance: with batch normalization we see sharp unimodal jumps in performance, but without batch normalization we obtain wide, bumpy shapes that are not necessarily unimodal.

Neural networks are typically initialized with very small parameter values (Glorot & Bengio, 2010; He et al., 2015). Instead, we trained NIN with exotic initializations, such as initial parameters drawn from \( \mathcal{N}(-10.0, 0.01) \) or \( \mathcal{N}(-1.0, 1.0) \), and observed the loss surface behavior. The results are discussed in detail in Appendix A.5.

5 CONCLUSIONS

In this work, we performed a series of empirical analyses to understand the geometry of the loss functions corresponding to deep neural networks, and how different optimization methods minimize this loss, in order to answer the two questions posed in the introduction.

What types of changes to the optimization procedure result in different local minima? We found that every type of change to the optimization procedure we tested resulted in a different local minimum. Different local minima were found using the different optimization algorithms from the same initialization (Section 4.1). Even switching to another optimization algorithm very late in optimization – during the slow "minimization" portion of learning – resulted in a different local minimum (Section 4.3). The quality of the local minima found, in terms of training and generalization error, is similar. These different local minima were not equivalent, and made mistakes on different test examples (Section 4.1). Thus, they were not trivially different local minima, as would occur if nodes in internal layers of the network were permuted. We observed that the quality of these local minima was only consistently good when we used batch normalization for regularization. Without batch normalization, the quality of the critical points found depended on the initialization, and some solutions found were not as good as others.

Our observations are in contrast to the conclusions of Goodfellow et al. (2015), i.e., that local minima are not a problem in deep learning because, in the region of the loss function explored by SGD algorithms, the loss function is well-behaved. Instead, our observations are more consistent with the explanation that the local minima found by popular SGD optimization methods are almost all good (Choromanska et al., 2015; Kawaguchi, 2016; Soudry & Carmon, 2016).

Do different optimization algorithms find qualitatively different types of local minima?
Interestingly, we found that, while the local minima found by the same optimization algorithm from different initializations were different, the shape of the loss function around these local minima was strikingly similar, and was a characteristic of the optimization algorithm. In particular, we found that the size of the basin around the local minima found by ADAM-based optimization was larger than that around the minima found by vanilla SGD (Section 4.2). A large basin is related to a large margin, as small changes in the weight vector will not affect the training error, and perhaps could have some implications for generalization error. In our experiments, however, we did not observe better generalization error for ADAM than for SGD. Questions for potential future research are why the shapes of the loss functions around different local minima found by the same algorithm are so similar, and what the practical implications of this are.
reject
Reject
4.5
64bff74c858d5835be14addae3b3bf6845b2e0b9
iclr
2,017
HADAMARD PRODUCT FOR LOW-RANK BILINEAR POOLING

Jin-Hwa Kim, Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Republic of Korea. jhkim@bi.snu.ac.kr

Kyoung-Woon On, School of Computer Science and Engineering, Seoul National University, Seoul 08826, Republic of Korea. kwon@bi.snu.ac.kr

Woosang Lim, School of Computing, KAIST, Daejeon 34141, Republic of Korea. quasar17@kaist.ac.kr

Jeonghee Kim & Jung-Woo Ha, NAVER LABS Corp. & NAVER Corp., Gyeonggi-do 13561, Republic of Korea. {jeonghee.kim,jungwoo.ha}@navercorp.com

Byoung-Tak Zhang, School of Computer Science and Engineering & Interdisciplinary Program in Cognitive Science, Seoul National University & Surromind Robotics, Seoul 08826, Republic of Korea. btzhang@bi.snu.ac.kr

ABSTRACT

Bilinear models provide rich representations compared with linear models. They have been applied in various visual tasks, such as object recognition, segmentation, and visual question-answering, to obtain state-of-the-art performances by taking advantage of the expanded representations. However, bilinear representations tend to be high-dimensional, limiting their applicability to computationally complex tasks. We propose low-rank bilinear pooling using Hadamard product for an efficient attention mechanism for multimodal learning. We show that our model outperforms compact bilinear pooling in visual question-answering tasks, achieving state-of-the-art results on the VQA dataset with a better parsimonious property.

1 INTRODUCTION

Bilinear models (Tenenbaum & Freeman, 2000) provide richer representations than linear models. To exploit this advantage, fully-connected layers in neural networks can be replaced with bilinear pooling. Bilinear pooling involves the outer product of two vectors (or the Kronecker product for matrices); as a result, all pairwise interactions among the given features are considered. A successful application of this technique is fine-grained visual recognition (Lin et al., 2015).

However, bilinear pooling produces a high-dimensional feature of quadratic expansion, which may constrain model structure and computational resources. For example, the outer product of two feature vectors, both of 1K dimensionality, produces a million-dimensional feature vector. Therefore, for classification problems, the choice of the number of target classes is severely constrained, because the number of parameters for a standard linear classifier is determined by the product of the size of the high-dimensional feature vector and the number of target classes.

Compact bilinear pooling (Gao et al., 2016) reduces the quadratic expansion of dimensionality by two orders of magnitude while retaining the performance of full bilinear pooling. This approximation uses a sampling-based computation, Tensor Sketch Projection (Charikar et al., 2002; Pham & Pagh, 2013), which exploits the useful property that \( \Psi(x \otimes y, h, s) = \Psi(x, h, s) * \Psi(y, h, s) \); that is, the projection of the outer product of two vectors is the convolution of the two projected vectors. Here, \( \Psi \) is the projection function, and \( h \) and \( s \) are parameters randomly sampled by the algorithm.

Nevertheless, compact bilinear pooling has two shortcomings. The first comes from the sampling approach. Compact bilinear pooling relies on a favorable property, \( E[\langle \Psi(x, h, s), \Psi(y, h, s) \rangle] = \langle x, y \rangle \), which provides a basis for using projected features instead of the original features.
Yet, calculating the exact expectation is computationally intractable, so the random parameters \( h \) and \( s \) are fixed during training and evaluation. This practical choice leads to the second shortcoming: the projected dimension of compact bilinear pooling should be large enough to minimize the bias from the fixed parameters. Practical choices are 10K and 16K for 512- and 4096-dimensional inputs, respectively (Gao et al., 2016; Fukui et al., 2016). Although these *compacted* dimensions are two orders of magnitude smaller than those of full bilinear pooling, such high-dimensional features can still be a bottleneck for computationally complex models.

We propose low-rank bilinear pooling using Hadamard product (element-wise multiplication), which is commonly available in scientific computing frameworks as a standard tensor operation. The proposed method factors the three-dimensional weight tensor for bilinear pooling into three two-dimensional weight matrices, which enforces the rank of the weight tensor to be low. As a result, the two input feature vectors are linearly projected by two weight matrices, respectively, combined by Hadamard product, and then followed by a linear projection using the third weight matrix. For example, the projected vector \( z \) is represented by \( \mathbf{W}_z^T (\mathbf{W}_x^T \mathbf{x} \circ \mathbf{W}_y^T \mathbf{y}) \), where \( \circ \) denotes Hadamard product.

We also explore adding non-linearity to the low-rank bilinear pooling using non-linear activation functions, and adding shortcut connections inspired by deep residual learning (He et al., 2016). We then show that it becomes a simple baseline model (Antol et al., 2015) or one learning block of Multimodal Residual Networks (Kim et al., 2016b) as a low-rank bilinear model, an interpretation that has not been made before.

Our contributions are as follows: First, we propose low-rank bilinear pooling as an approximation of full bilinear pooling, as a substitute for compact bilinear pooling. Second, we propose Multimodal Low-rank Bilinear Attention Networks (MLB), which have an efficient attention mechanism using low-rank bilinear pooling, for visual question-answering tasks. MLB achieves a new state-of-the-art performance and has a better parsimonious property. Finally, we conduct ablation studies to explore alternative choices, e.g. network depth, non-linear functions, and shortcut connections.

2 LOW-RANK BILINEAR MODEL

Bilinear models use a quadratic expansion of linear transformation considering every pair of features:

\[ f_i = \sum_{j=1}^N \sum_{k=1}^M w_{ijk} x_j y_k + b_i = \mathbf{x}^T \mathbf{W}_i \mathbf{y} + b_i \]

where \( \mathbf{x} \) and \( \mathbf{y} \) are input vectors, \( \mathbf{W}_i \in \mathbb{R}^{N \times M} \) is a weight matrix for the output \( f_i \), and \( b_i \) is a bias for the output \( f_i \). Notice that the number of parameters is \( L \times (N \times M + 1) \), including a bias vector \( \mathbf{b} \), where \( L \) is the number of output features.

Pirsiavash et al. (2009) suggest a low-rank bilinear method to reduce the rank of the weight matrix \( \mathbf{W}_i \), so that it has fewer parameters, for regularization. They rewrite the weight matrix as \( \mathbf{W}_i = \mathbf{U}_i \mathbf{V}_i^T \), where \( \mathbf{U}_i \in \mathbb{R}^{N \times d} \) and \( \mathbf{V}_i \in \mathbb{R}^{M \times d} \), which restricts the rank of \( \mathbf{W}_i \) to be at most \( d \leq \min(N, M) \).
Based on this idea, \( f_i \) can be rewritten as follows:

\[ f_i = \mathbf{x}^T \mathbf{W}_i \mathbf{y} + b_i = \mathbf{x}^T \mathbf{U}_i \mathbf{V}_i^T \mathbf{y} + b_i = \mathbf{1}^T (\mathbf{U}_i^T \mathbf{x} \circ \mathbf{V}_i^T \mathbf{y}) + b_i \]

where \( \mathbf{1} \in \mathbb{R}^d \) denotes a column vector of ones, and \( \circ \) denotes Hadamard product. Still, we need two third-order tensors, \( \mathbf{U} \) and \( \mathbf{V} \), for a feature vector \( \mathbf{f} \) whose elements are \( \{ f_i \} \). To reduce the order of the weight tensors by one, we replace \( \mathbf{1} \) with \( \mathbf{P} \in \mathbb{R}^{d \times c} \) and \( b_i \) with \( \mathbf{b} \in \mathbb{R}^c \), and redefine \( \mathbf{U} \in \mathbb{R}^{N \times d} \) and \( \mathbf{V} \in \mathbb{R}^{M \times d} \) to get a projected feature vector \( \mathbf{f} \in \mathbb{R}^c \). Then, we get:

\[ \mathbf{f} = \mathbf{P}^T (\mathbf{U}^T \mathbf{x} \circ \mathbf{V}^T \mathbf{y}) + \mathbf{b} \]

where \( d \) and \( c \) are hyperparameters that determine the dimension of the joint embeddings and the output dimension of the low-rank bilinear model, respectively.

3 LOW-RANK BILINEAR POOLING

The low-rank bilinear model in Equation 3 can be implemented using two linear mappings without biases for embedding the two input vectors, Hadamard product to learn joint representations in a multiplicative way, and a linear mapping with a bias to project the joint representations into an output vector of a given dimension. We use this structure as a pooling method for deep neural networks. In the following, we discuss possible variations of low-rank bilinear pooling based on this model, inspired by studies of neural networks.

3.1 FULL MODEL

In Equation 3, the linear projections \( U \) and \( V \) can have their own bias vectors. As a result, the linear models for the two input vectors \( x \) and \( y \) are integrated in an additive form, called the full model for linear regression in statistics:

\[ f = P^T((U^T x + b_x) \circ (V^T y + b_y)) + b \\ = P^T(U^T x \circ V^T y + U'^T x + V'^T y) + b'. \] (4)

Here, \( U'^T = \text{diag}(b_y) \cdot U^T \), \( V'^T = \text{diag}(b_x) \cdot V^T \), and \( b' = b + P^T(b_x \circ b_y) \).

3.2 NONLINEAR ACTIVATION

Applying non-linear activation functions may help to increase the representative capacity of the model. The first candidate is to apply non-linear activation functions right after the linear mappings for the input vectors:

\[ f = P^T(\sigma(U^T x) \circ \sigma(V^T y)) + b \] (5)

where \( \sigma \) denotes an arbitrary non-linear activation function that maps real values into a finite interval, e.g. sigmoid or tanh. If the two inputs come from different modalities, their statistics may be quite different from each other, which may result in interference, since the gradient with respect to each input directly depends on the other input in the Hadamard product. Additionally applying an activation function after the Hadamard product is not appropriate, since the activation functions would then appear twice in the gradient computation. However, applying the activation function only after the Hadamard product is an alternative choice (we explore this option in Section 5):

\[ f = P^T \sigma(U^T x \circ V^T y) + b. \] (6)

Note that the use of an activation function in low-rank bilinear pooling can be found in an implementation of the simple baseline for the VQA dataset (Antol et al., 2015), though without an interpretation as low-rank bilinear pooling. Notably, however, Wu et al.
(2016c) studied the learning behavior of multiplicative integration in RNNs with discussions and empirical evidence.

3.3 SHORTCUT CONNECTION

When we apply the two previous techniques, the full model and non-linear activation, the linear models of the two inputs are nested inside the non-linear activation functions. To avoid this situation, we add shortcut connections as explored in residual learning (He et al., 2016):

\[ f = P^T(\sigma(U^T x) \circ \sigma(V^T y)) + h_x(x) + h_y(y) + b \] (7)

where \( h_x \) and \( h_y \) are shortcut mappings. For linear projection, the shortcut mappings are linear mappings. Notice that this formulation is a generalized form of the one-block layered MRN (Kim et al., 2016b). However, shortcut connections are not used in our proposed model, as explained in Section 6.

4 MULTIMODAL LOW-RANK BILINEAR ATTENTION NETWORKS

In this section, we apply low-rank bilinear pooling to propose an efficient attention mechanism for visual question-answering tasks, based on the interpretation of the previous section. We assume that the inputs are a question embedding vector \( \mathbf{q} \) and a set of visual feature vectors \( \mathbf{F} \) over an \( S \times S \) lattice space.

4.1 LOW-RANK BILINEAR POOLING IN ATTENTION MECHANISM

The attention mechanism uses an attention probability distribution \( \alpha \) over the \( S \times S \) lattice space. Here, using low-rank bilinear pooling, \( \alpha \) is defined as

\[ \alpha = \operatorname{softmax}\left( \mathbf{P}_\alpha^T \left( \sigma(\mathbf{U}_q^T \mathbf{q} \cdot \mathbf{1}^T) \circ \sigma(\mathbf{V}_F^T \mathbf{F}^T) \right) \right) \] (8)

where \( \alpha \in \mathbb{R}^{G \times S^2} \), \( \mathbf{P}_\alpha \in \mathbb{R}^{d \times G} \), \( \sigma \) is a hyperbolic tangent function, \( \mathbf{U}_q \in \mathbb{R}^{N \times d} \), \( \mathbf{q} \in \mathbb{R}^N \), \( \mathbf{1} \in \mathbb{R}^{S^2} \), \( \mathbf{V}_F \in \mathbb{R}^{M \times d} \), and \( \mathbf{F} \in \mathbb{R}^{S^2 \times M} \). If \( G > 1 \), multiple glimpses are explicitly expressed, as in Fukui et al. (2016), conceptually similar to Jaderberg et al. (2015). The softmax function applies to each row vector of \( \alpha \). The bias terms are omitted for simplicity.

4.2 MULTIMODAL LOW-RANK BILINEAR ATTENTION NETWORKS

The attended visual feature \( \hat{\mathbf{v}} \) is a linear combination of \( \mathbf{F}_i \) with coefficients \( \alpha_{g,i} \), where each attention probability distribution \( \alpha_g \) corresponds to a glimpse \( g \). For \( G > 1 \), \( \hat{\mathbf{v}} \) is the concatenation of the resulting vectors \( \hat{\mathbf{v}}_g \):

\[ \hat{\mathbf{v}} = \Big\Vert_{g=1}^{G} \sum_{s=1}^{S^2} \alpha_{g,s} \mathbf{F}_s \] (9)

where \( \Vert \) denotes concatenation of vectors. The posterior probability distribution is the output of a softmax function, whose input is the result of another low-rank bilinear pooling of \( \mathbf{q} \) and \( \hat{\mathbf{v}} \):

\[ p(a|\mathbf{q}, \mathbf{F}; \Theta) = \operatorname{softmax}\left( \mathbf{P}_a^T \left( \sigma(\mathbf{W}_q^T \mathbf{q}) \circ \sigma(\mathbf{V}_{\hat{\mathbf{v}}}^T \hat{\mathbf{v}}) \right) \right) \] (10)

\[ \hat{a} = \arg\max_{a \in \Omega} p(a|\mathbf{q}, \mathbf{F}; \Theta) \] (11)

where \( \hat{a} \) denotes a predicted answer, \( \Omega \) is a set of candidate answers, and \( \Theta \) is an aggregation of the entire model parameters.

5 EXPERIMENTS

In this section, we conduct six experiments to select the proposed model, Multimodal Low-rank Bilinear Attention Networks (MLB). Each experiment controls all factors other than the one under consideration, to assess its effect on accuracy.
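Before turning to the experiments, the attention mechanism of Section 4 can be summarized in a short numpy sketch; biases are omitted as in Equations 8-10, and all parameter names and shapes are illustrative (the reference implementation is in Torch):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def low_rank_bilinear_attention(q, F, U_q, V_F, P_alpha, W_q, V_v, P_a):
    """Sketch of MLB's forward pass.

    q:   question embedding, shape (N,)
    F:   visual features over the lattice, shape (S*S, M)
    U_q: (N, d), V_F: (M, d), P_alpha: (d, G)    -- attention pooling
    W_q: (N, d), V_v: (G*M, d), P_a: (d, |Omega|) -- answer pooling
    """
    # Eq. 8: replicate q over the S^2 positions, embed both modalities,
    # take the Hadamard product, project, softmax over positions.
    joint = np.tanh(F @ V_F) * np.tanh(q @ U_q)   # (S*S, d), q broadcasts
    alpha = softmax((joint @ P_alpha).T, axis=1)  # (G, S*S)

    # Eq. 9: per-glimpse linear combinations of F_s, then concatenation.
    v_hat = (alpha @ F).reshape(-1)               # (G*M,)

    # Eq. 10: answer distribution via a second low-rank bilinear pooling.
    logits = (np.tanh(q @ W_q) * np.tanh(v_hat @ V_v)) @ P_a
    return softmax(logits)
```

With N = 2,400, M = 2,048, d = 1,200, G = 2, and |Ω| = 2,000, this matches the hyperparameter setting described in Appendix A.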
Based on MRN (Kim et al., 2016b), we start our assessments with an initial option of \( G = 1 \) and the shortcut connections of MRN, called Multimodal Attention Residual Networks (MARN). Notice that we use one embedding for each visual feature, for better performance, based on our preliminary experiments (not shown). We attribute this choice to the attention mechanism for visual features, which provides more capacity to learn visual features. We use the same hyper-parameters as MRN (Kim et al., 2016b), unless otherwise mentioned.

The VQA dataset (Antol et al., 2015) is used as the primary dataset and, for data augmentation, question-answering annotations of Visual Genome (Krishna et al., 2016) are used. Validation is performed on the VQA test-dev split, and model comparison is based on the results of the VQA test-standard split. For comprehensive reviews of VQA tasks, please refer to Wu et al. (2016a) and Kafle & Kanan (2016a). The details of the preprocessing, question and vision embedding, and hyperparameters used in our experiments are described in Appendix A. The source code for the experiments is available in a Github repository.\footnote{https://github.com/jnhwkim/MulLowBiVQA}

Number of Learning Blocks Kim et al. (2016b) argue that the three-block layered MRN shows the best performance among one- to four-block layered models, taking advantage of residual learning. However, we speculate that the introduction of an attention mechanism makes deep networks hard to optimize. Therefore, we explore the number of learning blocks of MARN, which has an attention mechanism using low-rank bilinear pooling.

Number of Glimpses Fukui et al. (2016) show that an attention mechanism with two glimpses was an optimal choice. In a similar way, we assess one-, two-, and four-glimpse models.

Table 1: The accuracies of our experimental model, Multimodal Attention Residual Networks (MARN), with respect to the number of learning blocks (L#), the number of glimpses (G#), the position of activation functions (tanh), answer sampling, shortcut connections, and data augmentation using the Visual Genome dataset, for the VQA test-dev split and Open-Ended task. Note that our proposed model, Multimodal Low-rank Bilinear Attention Networks (MLB), has no shortcut connections, in contrast to MARN. MODEL: model name, SIZE: number of parameters, ALL: overall accuracy in percentage, Y/N: yes/no, NUM: numbers, and ETC: others. Since Fukui et al. (2016) only report the accuracy of their ensemble model on the test-standard split, the test-dev results of their single models are included in the last sector. Some figures have different precisions, which are rounded. * indicates the selected model for each experiment.
<table>
<tr> <th>MODEL</th> <th>SIZE</th> <th>ALL</th> <th>Y/N</th> <th>NUM</th> <th>ETC</th> </tr>
<tr> <td>MRN-L3</td> <td>65.0M</td> <td>61.68</td> <td>82.28</td> <td>38.82</td> <td>49.25</td> </tr>
<tr> <td>MARN-L3</td> <td>65.5M</td> <td>62.37</td> <td>82.31</td> <td>38.06</td> <td>50.83</td> </tr>
<tr> <td>MARN-L2</td> <td>56.3M</td> <td>63.92</td> <td>82.88</td> <td>37.98</td> <td>53.59</td> </tr>
<tr> <td>* MARN-L1</td> <td>47.0M</td> <td>63.79</td> <td>82.73</td> <td>37.92</td> <td>53.46</td> </tr>
<tr> <td>MARN-L1-G1</td> <td>47.0M</td> <td>63.79</td> <td>82.73</td> <td>37.92</td> <td>53.46</td> </tr>
<tr> <td>* MARN-L1-G2</td> <td>57.7M</td> <td>64.53</td> <td>83.41</td> <td>37.82</td> <td>54.43</td> </tr>
<tr> <td>MARN-L1-G4</td> <td>78.9M</td> <td>64.61</td> <td>83.72</td> <td>37.86</td> <td>54.33</td> </tr>
<tr> <td>No Tanh</td> <td>57.7M</td> <td>63.58</td> <td>83.18</td> <td>37.23</td> <td>52.79</td> </tr>
<tr> <td>* Before-Product</td> <td>57.7M</td> <td>64.53</td> <td>83.41</td> <td>37.82</td> <td>54.43</td> </tr>
<tr> <td>After-Product</td> <td>57.7M</td> <td>64.53</td> <td>83.53</td> <td>37.06</td> <td>54.50</td> </tr>
<tr> <td>Mode Answer</td> <td>57.7M</td> <td>64.53</td> <td>83.41</td> <td>37.82</td> <td>54.43</td> </tr>
<tr> <td>* Sampled Answer</td> <td>57.7M</td> <td>64.80</td> <td>83.59</td> <td>38.38</td> <td>54.73</td> </tr>
<tr> <td>Shortcut</td> <td>57.7M</td> <td>64.80</td> <td>83.59</td> <td>38.38</td> <td>54.73</td> </tr>
<tr> <td>* No Shortcut</td> <td>51.9M</td> <td>65.08</td> <td>84.14</td> <td>38.21</td> <td>54.87</td> </tr>
<tr> <td>MLB</td> <td>51.9M</td> <td>65.08</td> <td>84.14</td> <td>38.21</td> <td>54.87</td> </tr>
<tr> <td>MLB+VG</td> <td>51.9M</td> <td>65.84</td> <td>83.87</td> <td>37.87</td> <td>56.76</td> </tr>
<tr> <td>MCB+Att [Fukui et al., 2016]</td> <td>69.2M</td> <td>64.2</td> <td>82.2</td> <td>37.7</td> <td>54.8</td> </tr>
<tr> <td>MCB+Att+GloVe [Fukui et al., 2016]</td> <td>70.5M</td> <td>64.7</td> <td>82.5</td> <td>37.6</td> <td>55.6</td> </tr>
<tr> <td>MCB+Att+Glove+VG [Fukui et al., 2016]</td> <td>70.5M</td> <td>65.4</td> <td>82.3</td> <td>37.2</td> <td>57.4</td> </tr>
</table>

Non-Linearity We assess three options for applying non-linearity in low-rank bilinear pooling: none (vanilla), before the Hadamard product as in Equation 5, and after the Hadamard product as in Equation 6.

Answer Sampling The VQA dataset (Antol et al., 2015) has ten answers from unique persons for each question, while the Visual Genome dataset (Krishna et al., 2016) has a single answer for each question. Since difficult or ambiguous questions may have divided answers, probabilistic sampling from the distribution of answers can be utilized to optimize for the multiple answers. An instance can be found in Fukui et al. (2016).\footnote{https://github.com/akirafukui/vqa-mcb/blob/5fea8/train/multi_att_2_glove/vqa_data_provider_layer.py#L130} We simplify the procedure as follows:

\[ p(a_1) = \begin{cases} |a_1| / \sum_i |a_i|, & \text{if } |a_1| \geq 3 \\ 0, & \text{otherwise} \end{cases} \] (12)

\[ p(a_0) = 1 - p(a_1) \] (13)

where \(|a_i|\) denotes the number of occurrences of the unique answer \(a_i\) in the set of multiple answers, \(a_0\) denotes the mode, i.e., the most frequent answer, and \(a_1\) denotes the second most frequent answer. We define divided answers as those where the second most frequent answer occurs at least three times, following the evaluation metric of VQA (Antol et al., 2015):

\[ \text{accuracy}(a_k) = \min(|a_k|/3, 1). \] (14)
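A minimal sketch of this sampling rule (the function name and the use of numpy are illustrative):

```python
import numpy as np

def sample_target_answer(answers, rng=np.random):
    """Sample a training target following Equations 12-13.

    answers: the ten annotator answers for one question (list of strings).
    Returns the mode answer a0 or, with probability p(a1), the second most
    frequent answer a1 when a1 occurs at least three times.
    """
    values, counts = np.unique(answers, return_counts=True)
    order = np.argsort(-counts)          # sort unique answers by frequency
    a0 = values[order[0]]
    if len(values) < 2:
        return a0
    a1, n1 = values[order[1]], counts[order[1]]
    p_a1 = n1 / counts.sum() if n1 >= 3 else 0.0   # Eq. 12
    return a1 if rng.random() < p_a1 else a0       # Eq. 13
```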
Table 2: The VQA test-standard results compared with the state-of-the-art. Notice that these models are trained on the provided VQA train and validation splits, without any data augmentation.

<table>
<tr> <th rowspan="2">MODEL</th> <th colspan="4">Open-Ended</th> <th rowspan="2">MC ALL</th> </tr>
<tr> <th>ALL</th> <th>Y/N</th> <th>NUM</th> <th>ETC</th> </tr>
<tr> <td>iBOWIMG (Zhou et al., 2015)</td> <td>55.89</td> <td>76.76</td> <td>34.98</td> <td>42.62</td> <td>61.97</td> </tr>
<tr> <td>DPPnet (Noh et al., 2016)</td> <td>57.36</td> <td>80.28</td> <td>36.92</td> <td>42.24</td> <td>62.69</td> </tr>
<tr> <td>Deeper LSTM+Normalized CNN (Antol et al., 2015)</td> <td>58.16</td> <td>80.56</td> <td>36.53</td> <td>43.73</td> <td>63.09</td> </tr>
<tr> <td>SMem (Xu & Saenko, 2016)</td> <td>58.24</td> <td>80.80</td> <td>37.53</td> <td>43.48</td> <td>-</td> </tr>
<tr> <td>Ask Your Neurons (Malinowski et al., 2016)</td> <td>58.43</td> <td>78.24</td> <td>36.27</td> <td>46.32</td> <td>-</td> </tr>
<tr> <td>SAN (Yang et al., 2016)</td> <td>58.85</td> <td>79.11</td> <td>36.41</td> <td>46.42</td> <td>-</td> </tr>
<tr> <td>D-NMN (Andreas et al., 2016)</td> <td>59.44</td> <td>80.98</td> <td>37.48</td> <td>45.81</td> <td>-</td> </tr>
<tr> <td>ACK (Wu et al., 2016b)</td> <td>59.44</td> <td>81.07</td> <td>37.12</td> <td>45.83</td> <td>-</td> </tr>
<tr> <td>FDA (Ilievski et al., 2016)</td> <td>59.54</td> <td>81.34</td> <td>35.67</td> <td>46.10</td> <td>64.18</td> </tr>
<tr> <td>HYBRID (Kafle & Kanan, 2016b)</td> <td>60.06</td> <td>80.34</td> <td>37.82</td> <td>47.56</td> <td>-</td> </tr>
<tr> <td>DMN+ (Xiong et al., 2016)</td> <td>60.36</td> <td>80.43</td> <td>36.82</td> <td>48.33</td> <td>-</td> </tr>
<tr> <td>MRN (Kim et al., 2016b)</td> <td>61.84</td> <td>82.39</td> <td>38.23</td> <td>49.41</td> <td>66.33</td> </tr>
<tr> <td>HieCoAtt (Lu et al., 2016)</td> <td>62.06</td> <td>79.95</td> <td>38.22</td> <td>51.95</td> <td>66.07</td> </tr>
<tr> <td>RAU (Noh & Han, 2016)</td> <td>63.2</td> <td>81.7</td> <td>38.2</td> <td>52.8</td> <td>67.3</td> </tr>
<tr> <td>MLB (ours)</td> <td><b>65.07</b></td> <td><b>84.02</b></td> <td><b>37.90</b></td> <td><b>54.77</b></td> <td><b>68.89</b></td> </tr>
</table>

The rate of divided answers is approximately 16.40%, and only 0.23% of questions have more than two divided answers in the VQA dataset. We assume that this eases the difficulty of convergence without severe degradation of performance.

Shortcut Connection The contribution of shortcut connections for residual learning is explored based on the observation of the competitive performance of the single-block layered model, since the usefulness of shortcut connections is linked to the network depth (He et al., 2016).

Data Augmentation Data augmentation with the Visual Genome (Krishna et al., 2016) question-answer annotations is explored. Visual Genome originally provides 1.7 million visual question-answer annotations. After aligning to VQA, the valid number of question-answer pairs for training is 837,298, over 99,280 distinct images.

6 RESULTS

The six experiments are conducted sequentially; each experiment determines the experimental variables one by one. Refer to Table 1, which has six sectors divided by mid-rules.
6.1 SIX EXPERIMENT RESULTS

Number of Learning Blocks Although MRN (Kim et al., 2016b) has a three-block layered architecture, MARN shows the best performance with a two-block layered model (63.92%). For the multiple-glimpse models in the next experiment, we choose the one-block layered model for its simplicity to extend and its competitive performance (63.79%).

Number of Glimpses Compared with the results of Fukui et al. (2016), four-glimpse MARN (64.61%) is better than other comparative models. However, as a more parsimonious choice, two-glimpse MARN (64.53%) is chosen for the later experiments. We speculate that multiple glimpses are one of the key factors behind the competitive performance of MCB (Fukui et al., 2016), based on the large margin in accuracy compared with one-glimpse MARN (63.79%).

Non-Linearity The results confirm that activation functions are useful for improving performance. Surprisingly, there is no empirical difference between the two options, before-Hadamard product and after-Hadamard product. This result may build a bridge to studies on multiplicative integration with recurrent neural networks (Wu et al., 2016c).

Answer Sampling Sampled answers (64.80%) yield better performance than mode answers (64.53%). This confirms that the distribution of answers from annotators can be used to improve performance. However, the number of multiple answers is usually limited due to the cost of data collection.

Shortcut Connection Although MRN (Kim et al., 2016b) effectively uses shortcut connections to improve model performance, the one-block layered MARN shows better performance without the shortcut connection. In other words, residual learning is not used in our proposed model, MLB. It seems that there is a trade-off between introducing an attention mechanism and residual learning. We leave a careful study of this trade-off for future work.

Data Augmentation Data augmentation using the Visual Genome (Krishna et al., 2016) question-answer annotations significantly improves the performance, by 0.76% in accuracy, for the VQA test-dev split. In particular, the accuracy on others (ETC)-type answers is notably improved by the data augmentation.

6.2 COMPARISON WITH STATE-OF-THE-ART

The comparison with other single models on VQA test-standard is shown in Table 2. The overall accuracy of our model is approximately 1.9% above the next best model (Noh & Han, 2016) on the Open-Ended task of VQA. The major improvements come from yes-or-no (Y/N) and others (ETC)-type answers. In Table 3, we also report the accuracy of our ensemble model, compared with other ensemble models on VQA test-standard, which won 1st to 5th places in the VQA Challenge 2016.\footnote{http://visualqa.org/challenge.html} We beat the previous state-of-the-art with a margin of 0.42%.

Table 3: The VQA test-standard results for ensemble models, compared with the state-of-the-art. For unpublished entries, their team names are used instead of model names. Some of their figures were updated after the challenge.
<table>
<tr> <th rowspan="2">MODEL</th> <th colspan="4">Open-Ended</th> <th>MC</th> </tr>
<tr> <th>ALL</th> <th>Y/N</th> <th>NUM</th> <th>ETC</th> <th>ALL</th> </tr>
<tr> <td>RAU (Noh & Han, 2016)</td> <td>64.12</td> <td>83.33</td> <td>38.02</td> <td>53.37</td> <td>67.34</td> </tr>
<tr> <td>MRN (Kim et al., 2016b)</td> <td>63.18</td> <td>83.16</td> <td>39.14</td> <td>51.33</td> <td>67.54</td> </tr>
<tr> <td>DLAIT (not published)</td> <td>64.83</td> <td>83.23</td> <td><b>40.80</b></td> <td>54.32</td> <td>68.30</td> </tr>
<tr> <td>Naver Labs (not published)</td> <td>64.79</td> <td>83.31</td> <td>38.70</td> <td>54.79</td> <td>69.26</td> </tr>
<tr> <td>MCB (Fukui et al., 2016)</td> <td>66.47</td> <td>83.24</td> <td>39.47</td> <td><b>58.00</b></td> <td>70.10</td> </tr>
<tr> <td>MLB (ours)</td> <td><b>66.89</b></td> <td><b>84.61</b></td> <td>39.07</td> <td>57.79</td> <td><b>70.29</b></td> </tr>
<tr> <td>Human (Antol et al., 2015)</td> <td>83.30</td> <td>95.77</td> <td>83.39</td> <td>72.67</td> <td>91.54</td> </tr>
</table>

7 RELATED WORKS

MRN (Kim et al., 2016b) proposes multimodal residual learning with the Hadamard product of low-rank bilinear pooling. However, its use of low-rank bilinear pooling is limited to the joint residual mapping function for multimodal residual learning. Higher-order Boltzmann Machines (Memisevic & Hinton, 2007; 2010) use the Hadamard product to capture the interactions of input, output, and hidden representations in the energy function. Wu et al. (2016c) propose recurrent neural networks using the Hadamard product to integrate multiplicative interactions among hidden representations in the model. For details of these related works, please refer to Appendix D.

Compact bilinear pooling and multimodal compact bilinear pooling (Gao et al., 2016; Fukui et al., 2016) are, however, worth discussing and carefully comparing with our method.

7.1 COMPACT BILINEAR POOLING

Compact bilinear pooling (Gao et al., 2016) approximates full bilinear pooling using a sampling-based computation, Tensor Sketch Projection (Charikar et al., 2002; Pham & Pagh, 2013):

\[ \Psi(x \otimes y, h, s) = \Psi(x, h, s) * \Psi(y, h, s) \] (15)

\[ = \mathrm{FFT}^{-1}(\mathrm{FFT}(\Psi(x, h, s)) \circ \mathrm{FFT}(\Psi(y, h, s))) \] (16)

where \( \otimes \) denotes outer product, \( * \) denotes convolution, \( \Psi(v, h, s)_i := \sum_{j : h_j = i} s_j \cdot v_j \), FFT denotes the Fast Fourier Transform, \( d \) denotes the output dimension, \( x, y \in \mathbb{R}^n \) are inputs, and \( h \in \mathbb{N}^n \) and \( s \in \mathbb{Z}^n \) are random variables: \( h_i \) is sampled from \( \{1, \ldots, d\} \), \( s_i \) is sampled from \( \{-1, 1\} \), and both random variables are then fixed for further usage. Even if the dimensions of \( x \) and \( y \) differ from each other, this can be used for multimodal learning (Fukui et al., 2016).

Similarly to Equation 1, compact bilinear pooling can be described as follows:

\[ f_i = \mathbf{x}^T \mathcal{W}_i \mathbf{y} \] (17)

where \( \mathcal{W}_{ijk} = s_{ijk} w_{ijk} \), \( s_{ijk} \) is sampled from \( \{-1, 1\} \), \( w_{ijk} \) is sampled from \( \{\mathbf{P}_{i1}, \mathbf{P}_{i2}, \ldots, \mathbf{P}_{id}\} \), and the compact bilinear pooling is followed by a fully connected layer \( \mathbf{P} \in \mathbb{R}^{|\Omega| \times d} \).
Then, this method can be formulated as a hashing trick (Weinberger et al., 2009; Chen et al., 2015) that shares randomly chosen bilinear weights using \( d \) parameters for an output value, in such a way that a single parameter is shared by \( NM/d \) bilinear terms in expectation, with a variance of \( NM(d-1)/d^2 \) (see Appendix B).

In comparison with our method, their method approximates the three-dimensional weight tensor in bilinear pooling with a two-dimensional matrix \( \mathbf{P} \), which is larger than the concatenation of the three two-dimensional matrices of low-rank bilinear pooling. The ratio of the number of parameters for a single output to the total number of parameters for \( |\Omega| \) outputs is \( d / (d|\Omega|) = 1/|\Omega| \) for compact bilinear pooling (Fukui et al., 2016), versus \( d(N + M + 1) / \big( d(N + M + |\Omega|) \big) = (N + M + 1)/(N + M + |\Omega|) \approx 2/3 \) for ours, since our method uses a three-way factorization. Hence, more parameters are allocated to each bilinear approximation than in compact bilinear pooling, effectively managing the overall parameters as guided by the back-propagation algorithm.

MCB (Fukui et al., 2016), which uses compact bilinear pooling for multimodal tasks, needs to set the output dimension \( d \) to 16K to reduce the bias induced by the fixed random variables \( h \) and \( s \). As a result, the majority of the model parameters (16K × 3K = 48M) are concentrated in the last fully connected layer, which makes a fan-out structure. So, the total number of parameters of MCB is highly sensitive to the number of classes: approximately 69.2M for MCB+att and 70.5M for MCB+att+GloVe. In contrast, the total number of parameters of our proposed model (MLB) is 51.9M, which is more robust to the number of classes, with \( d = 1.2\mathrm{K} \) playing a similar role in the model architecture.

8 CONCLUSIONS

We suggest a low-rank bilinear pooling method to replace compact bilinear pooling, which has a fan-out structure and needs complex computations. Low-rank bilinear pooling has a flexible structure using linear mapping and Hadamard product, and a better parsimonious property, compared with compact bilinear pooling. We achieve new state-of-the-art results on the VQA dataset using an architecture similar to that of Fukui et al. (2016), replacing compact bilinear pooling with low-rank bilinear pooling. We believe our method is applicable to other bilinear learning tasks.

ACKNOWLEDGMENTS

The authors would like to thank Patrick Emaase for helpful comments and editing. Also, we are thankful to the anonymous reviewers who provided comments to improve this paper. This work was supported by NAVER LABS Corp. & NAVER Corp. and partly by the Korea government (IITP-R0126-16-1072-SW.StarLab, KEIT-10044009-HRI.MESSI, KEIT-10060086-RISF, ADD-UD130070ID-BMRR). Part of the computing resources used in this study was generously shared by Standigm Inc.

REFERENCES

Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to Compose Neural Networks for Question Answering. arXiv preprint arXiv:1601.01705, 2016.

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. IEEE International Conference on Computer Vision, 2015.

Moses Charikar, Kevin Chen, and Martin Farach-Colton. Finding frequent items in data streams. In International Colloquium on Automata, Languages, and Programming, pp. 693–703. Springer, 2002.

Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing Neural Networks with the Hashing Trick.
In 32nd International Conference on Machine Learning, pp. 2285–2294, 2015.

Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In 2014 Conference on Empirical Methods in Natural Language Processing, pp. 1724–1734, 2014.

Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding. arXiv preprint arXiv:1606.01847, 2016.

Yarin Gal. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. arXiv preprint arXiv:1512.05287, 2015.

Yang Gao, Oscar Beijbom, Ning Zhang, and Trevor Darrell. Compact Bilinear Pooling. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural computation, 9(8):1735–1780, 1997.

Ilija Ilievski, Shuicheng Yan, and Jiashi Feng. A Focused Dynamic Attention Model for Visual Question Answering. arXiv preprint arXiv:1604.01485, 2016.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial Transformer Networks. In Advances in Neural Information Processing Systems 28, pp. 2008–2016, 2015.

Kushal Kafle and Christopher Kanan. Visual Question Answering: Datasets, Algorithms, and Future Challenges. arXiv preprint arXiv:1610.01465, 2016a.

Kushal Kafle and Christopher Kanan. Answer-Type Prediction for Visual Question Answering. IEEE Conference on Computer Vision and Pattern Recognition, pp. 4976–4984, 2016b.

Jin-Hwa Kim, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. TrimZero: A Torch Recurrent Module for Efficient Natural Language Processing. In KIIS Spring Conference, volume 26, pp. 165–166, 2016a.

Jin-Hwa Kim, Sang-Woo Lee, Dong-Hyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. Multimodal Residual Learning for Visual QA. arXiv preprint arXiv:1606.01455, 2016b.

Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. Skip-Thought Vectors. In Advances in Neural Information Processing Systems 28, pp. 3294–3302, 2015.

Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. arXiv preprint arXiv:1602.07332, 2016.

Nicholas Léonard, Sagar Waghmare, Yang Wang, and Jin-Hwa Kim. rnn: Recurrent Library for Torch. arXiv preprint arXiv:1511.07889, 2015.

Tsung-Yu Lin, Aruni RoyChowdhury, and Subhransu Maji. Bilinear CNN Models for Fine-grained Visual Recognition. In IEEE International Conference on Computer Vision, pp. 1449–1457, 2015.

Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical Question-Image Co-Attention for Visual Question Answering. arXiv preprint arXiv:1606.00061, 2016.

Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask Your Neurons: A Deep Learning Approach to Visual Question Answering. arXiv preprint arXiv:1605.02697, 2016.

Roland Memisevic and Geoffrey E Hinton. Unsupervised learning of image transformations. In IEEE Conference on Computer Vision and Pattern Recognition, 2007.
Roland Memisevic and Geoffrey E Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural computation, 22(6):1473–1492, 2010.

Hyeonwoo Noh and Bohyung Han. Training Recurrent Answering Units with Joint Loss Minimization for VQA. arXiv preprint arXiv:1606.03647, 2016.

Hyeonwoo Noh, Paul Hongseok Seo, and Bohyung Han. Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Ninh Pham and Rasmus Pagh. Fast and scalable polynomial kernels via explicit feature maps. In 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 239–247. ACM, 2013.

Hamed Pirsiavash, Deva Ramanan, and Charles C. Fowlkes. Bilinear classifiers for visual recognition. In Advances in Neural Information Processing Systems 22, pp. 1482–1490, 2009.

Joshua B Tenenbaum and William T Freeman. Separating style and content with bilinear models. Neural computation, 12(6):1247–1283, 2000.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.

Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In 26th International Conference on Machine Learning, pp. 1113–1120, 2009.

Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. Visual Question Answering: A Survey of Methods and Datasets. arXiv preprint arXiv:1607.05910, 2016a.

Qi Wu, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. Ask Me Anything: Free-form Visual Question Answering Based on Knowledge from External Sources. In IEEE Conference on Computer Vision and Pattern Recognition, 2016b.

Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On Multiplicative Integration with Recurrent Neural Networks. arXiv preprint arXiv:1606.06630, 2016c.

Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic Memory Networks for Visual and Textual Question Answering. In 33rd International Conference on Machine Learning, 2016.

Huijuan Xu and Kate Saenko. Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering. In European Conference on Computer Vision, 2016.

Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked Attention Networks for Image Question Answering. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Bolei Zhou, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Simple Baseline for Visual Question Answering. arXiv preprint arXiv:1512.02167, 2015.

Appendix A EXPERIMENT DETAILS

A.1 PREPROCESSING

We follow the preprocessing procedure of Kim et al. (2016b). Here, we note some of its details and our changes.

A.1.1 QUESTION EMBEDDING

The questions for the 2K most frequent answers are used, amounting to 90.45% of all questions. The vocabulary size of the questions is 15,031. A GRU (Cho et al., 2014) is used for question embedding. Based on earlier studies (Noh et al., 2016; Kim et al., 2016b), the word embedding matrix and the GRU are initialized with the Skip-thought Vector pre-trained model (Kiros et al., 2015). As a result, question vectors have 2,400 dimensions. For efficient computation over variable-length questions, the method of Kim et al. (2016a) is used for the GRU.
ABSTRACT

Bilinear models provide rich representations compared with linear models. They have been applied in various visual tasks, such as object recognition, segmentation, and visual question-answering, to get state-of-the-art performances by taking advantage of the expanded representations. However, bilinear representations tend to be high-dimensional, limiting their applicability to computationally complex tasks. We propose low-rank bilinear pooling using Hadamard product for an efficient attention mechanism for multimodal learning. We show that our model outperforms compact bilinear pooling in visual question-answering tasks, with state-of-the-art results on the VQA dataset and a better parsimonious property.

1 INTRODUCTION

Bilinear models (Tenenbaum & Freeman, 2000) provide richer representations than linear models. To exploit this advantage, fully-connected layers in neural networks can be replaced with bilinear pooling. Bilinear pooling involves the outer product of two vectors (or the Kronecker product for matrices); as a result, all pairwise interactions among the given features are considered. A successful application of this technique is fine-grained visual recognition (Lin et al., 2015).

However, bilinear pooling produces a high-dimensional feature of quadratic expansion, which may constrain model structure and computational resources. For example, the outer product of two feature vectors, both of 1K dimensionality, produces a million-dimensional feature vector. For classification problems, the choice of the number of target classes is therefore severely constrained, because the number of parameters of a standard linear classifier is the product of the size of the high-dimensional feature vector and the number of target classes.

Compact bilinear pooling (Gao et al., 2016) reduces the quadratic expansion of dimensionality by two orders of magnitude, retaining the performance of full bilinear pooling. This approximation uses a sampling-based computation, Tensor Sketch Projection (Charikar et al., 2002; Pham & Pagh, 2013), which exploits the useful property that \( \Psi(x \otimes y, h, s) = \Psi(x, h, s) * \Psi(y, h, s) \): the projection of the outer product of two vectors is the convolution of the two projected vectors. Here, \( \Psi \) is the projection function, and \( h \) and \( s \) are parameters randomly sampled by the algorithm.

Nevertheless, compact bilinear pooling has two shortcomings. The first comes from the sampling approach. Compact bilinear pooling relies on a favorable property, \( E[\langle \Psi(x, h, s), \Psi(y, h, s) \rangle] = \langle x, y \rangle \), which provides a basis for using projected features instead of the original features. Yet calculating the exact expectation is computationally intractable, so the random parameters \( h \) and \( s \) are fixed during training and evaluation. This practical choice leads to the second shortcoming: the projected dimension of compact bilinear pooling must be large enough to minimize the bias from the fixed parameters. Practical choices are 10K and 16K for 512- and 4096-dimensional inputs, respectively (Gao et al., 2016; Fukui et al., 2016). Though these *compacted* dimensions are two orders of magnitude smaller than those of full bilinear pooling, such high-dimensional features can still be a bottleneck for computationally complex models.
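As a rough numerical illustration of this quadratic expansion (a sketch of ours; the sizes below are round numbers, not taken from any specific model in this paper):

```python
import numpy as np

x = np.random.randn(1000)           # a 1K-dimensional feature
y = np.random.randn(1000)
z = np.outer(x, y).ravel()          # full bilinear (outer-product) feature
print(z.size)                       # 1000000 -- a million dimensions

num_classes = 2000                  # e.g., the number of candidate answers
print(num_classes * z.size)         # 2000000000 weights for a linear classifier
```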
We propose low-rank bilinear pooling using the Hadamard product (element-wise multiplication), which is commonly available in scientific computing frameworks as a tensor operation. The proposed method factors the three-dimensional weight tensor for bilinear pooling into three two-dimensional weight matrices, which enforces the weight tensor to be low-rank. As a result, the two input feature vectors are linearly projected by two weight matrices, respectively, combined by Hadamard product, and then followed by a linear projection using the third weight matrix. For example, the projected vector \( z \) is represented by \( \mathbf{W}_z^T (\mathbf{W}_x^T \mathbf{x} \circ \mathbf{W}_y^T \mathbf{y}) \), where \( \circ \) denotes Hadamard product.

We also explore adding non-linearity to low-rank bilinear pooling via non-linear activation functions, and adding shortcut connections inspired by deep residual learning (He et al., 2016). We then show that, as a low-rank bilinear model, it recovers a simple baseline model (Antol et al., 2015) or one learning block of Multimodal Residual Networks (Kim et al., 2016b), an interpretation that had not been made before.

Our contributions are as follows: First, we propose low-rank bilinear pooling to approximate full bilinear pooling, as a substitute for compact bilinear pooling. Second, we propose Multimodal Low-rank Bilinear Attention Networks (MLB), which have an efficient attention mechanism based on low-rank bilinear pooling, for visual question-answering tasks. MLB achieves a new state-of-the-art performance and has a better parsimonious property. Finally, we conduct ablation studies to explore alternative choices, e.g., network depth, non-linear functions, and shortcut connections.

2 LOW-RANK BILINEAR MODEL

Bilinear models use a quadratic expansion of linear transformation considering every pair of features:

\[ f_i = \sum_{j=1}^N \sum_{k=1}^M w_{ijk} x_j y_k + b_i = \mathbf{x}^T \mathbf{W}_i \mathbf{y} + b_i \] (1)

where \( \mathbf{x} \) and \( \mathbf{y} \) are input vectors, \( \mathbf{W}_i \in \mathbb{R}^{N \times M} \) is a weight matrix for the output \( f_i \), and \( b_i \) is a bias for the output \( f_i \). Notice that the number of parameters is \( L \times (N \times M + 1) \), including a bias vector \( \mathbf{b} \), where \( L \) is the number of output features.

Pirsiavash et al. (2009) suggest a low-rank bilinear method that reduces the rank of the weight matrix \( \mathbf{W}_i \) to have fewer parameters, for regularization. They rewrite the weight matrix as \( \mathbf{W}_i = \mathbf{U}_i \mathbf{V}_i^T \), where \( \mathbf{U}_i \in \mathbb{R}^{N \times d} \) and \( \mathbf{V}_i \in \mathbb{R}^{M \times d} \), which restricts the rank of \( \mathbf{W}_i \) to be at most \( d \leq \min(N, M) \). Based on this idea, \( f_i \) can be rewritten as follows:

\[ f_i = \mathbf{x}^T \mathbf{W}_i \mathbf{y} + b_i = \mathbf{x}^T \mathbf{U}_i \mathbf{V}_i^T \mathbf{y} + b_i = \mathbf{1}^T (\mathbf{U}_i^T \mathbf{x} \circ \mathbf{V}_i^T \mathbf{y}) + b_i \] (2)

where \( \mathbf{1} \in \mathbb{R}^d \) denotes a column vector of ones, and \( \circ \) denotes Hadamard product. Still, we need two third-order tensors, \( \mathbf{U} \) and \( \mathbf{V} \), for a feature vector \( \mathbf{f} \) whose elements are \( \{ f_i \} \).
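The identity behind Equation 2 is easy to verify numerically; the following minimal NumPy sketch (ours, not the authors' code) checks that the low-rank form agrees with the full bilinear form:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, d = 8, 6, 3
x, y = rng.standard_normal(N), rng.standard_normal(M)
U, V = rng.standard_normal((N, d)), rng.standard_normal((M, d))

full = x @ (U @ V.T) @ y                          # x^T W y with W = U V^T
low_rank = np.ones(d) @ ((U.T @ x) * (V.T @ y))   # 1^T (U^T x o V^T y)
print(np.allclose(full, low_rank))                # True
```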
To reduce the order of the weight tensors by one, we replace \( \mathbf{1} \) with \( \mathbf{P} \in \mathbb{R}^{d \times c} \) and \( b_i \) with \( \mathbf{b} \in \mathbb{R}^c \), and redefine \( \mathbf{U} \in \mathbb{R}^{N \times d} \) and \( \mathbf{V} \in \mathbb{R}^{M \times d} \) to get a projected feature vector \( \mathbf{f} \in \mathbb{R}^c \). Then, we get:

\[ \mathbf{f} = \mathbf{P}^T (\mathbf{U}^T \mathbf{x} \circ \mathbf{V}^T \mathbf{y}) + \mathbf{b} \] (3)

where \( d \) and \( c \) are hyperparameters that decide the dimension of the joint embedding and the output dimension of the low-rank bilinear model, respectively.

3 LOW-RANK BILINEAR POOLING

The low-rank bilinear model in Equation 3 can be implemented using two linear mappings without biases to embed the two input vectors, a Hadamard product to learn joint representations in a multiplicative way, and a linear mapping with a bias to project the joint representations to an output vector of a given dimension. We use this structure as a pooling method for deep neural networks. We now discuss possible variations of low-rank bilinear pooling based on this model, inspired by studies of neural networks.

3.1 FULL MODEL

In Equation 3, the linear projections \( \mathbf{U} \) and \( \mathbf{V} \) can have their own bias vectors. As a result, the linear models for the two input vectors \( \mathbf{x} \) and \( \mathbf{y} \) are integrated in an additive form, called the full model for linear regression in statistics:

\[ \mathbf{f} = \mathbf{P}^T((\mathbf{U}^T \mathbf{x} + \mathbf{b}_x) \circ (\mathbf{V}^T \mathbf{y} + \mathbf{b}_y)) + \mathbf{b} = \mathbf{P}^T(\mathbf{U}^T \mathbf{x} \circ \mathbf{V}^T \mathbf{y} + \mathbf{U}'^T \mathbf{x} + \mathbf{V}'^T \mathbf{y}) + \mathbf{b}' \] (4)

where \( \mathbf{U}'^T = \operatorname{diag}(\mathbf{b}_y) \cdot \mathbf{U}^T \), \( \mathbf{V}'^T = \operatorname{diag}(\mathbf{b}_x) \cdot \mathbf{V}^T \), and \( \mathbf{b}' = \mathbf{b} + \mathbf{P}^T(\mathbf{b}_x \circ \mathbf{b}_y) \).

3.2 NONLINEAR ACTIVATION

Applying non-linear activation functions may help increase the representational capacity of the model. The first candidate is to apply them right after the linear mappings of the input vectors:

\[ \mathbf{f} = \mathbf{P}^T(\sigma(\mathbf{U}^T \mathbf{x}) \circ \sigma(\mathbf{V}^T \mathbf{y})) + \mathbf{b} \] (5)

where \( \sigma \) denotes an arbitrary non-linear activation function that maps real values into a finite interval, e.g., sigmoid or tanh. If the two inputs come from different modalities, their statistics may be quite different from each other, which may cause interference, since in a Hadamard product the gradient with respect to each input directly depends on the other input. Additionally applying an activation function after the Hadamard product is not appropriate, since the activation functions would then appear twice in the gradient computation. However, applying the activation function only after the Hadamard product is an alternative choice (we explore this option in Section 5):

\[ \mathbf{f} = \mathbf{P}^T \sigma(\mathbf{U}^T \mathbf{x} \circ \mathbf{V}^T \mathbf{y}) + \mathbf{b}. \] (6)

Note that the use of an activation function in low-rank bilinear pooling can be found in an implementation of the simple baseline for the VQA dataset (Antol et al., 2015), though without an interpretation as low-rank bilinear pooling. Notably, Wu et al. (2016c) studied the learning behavior of multiplicative integration in RNNs with discussions and empirical evidence.

3.3 SHORTCUT CONNECTION

When we apply the two previous techniques, full model and non-linear activation, the linear models of the two inputs are nested inside the non-linear activation functions. To avoid this, we add shortcut connections as explored in residual learning (He et al., 2016):

\[ \mathbf{f} = \mathbf{P}^T(\sigma(\mathbf{U}^T \mathbf{x}) \circ \sigma(\mathbf{V}^T \mathbf{y})) + h_x(\mathbf{x}) + h_y(\mathbf{y}) + \mathbf{b} \] (7)

where \( h_x \) and \( h_y \) are shortcut mappings; for linear projection, the shortcut mappings are linear mappings. Notice that this formulation is a generalized form of the one-block layered MRN (Kim et al., 2016b). However, shortcut connections are not used in our proposed model, as explained in Section 6.
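To make the structure of Equations 3, 5, and 6 concrete, here is a minimal NumPy sketch of the three pooling variants with untrained random weights; the dimensions echo Table 4, and the function names are ours, introduced only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, d, c = 2400, 2048, 1200, 2000  # sizes echoing Table 4; illustrative only

U = rng.standard_normal((N, d)) * 0.01
V = rng.standard_normal((M, d)) * 0.01
P = rng.standard_normal((d, c)) * 0.01
b = np.zeros(c)

def pool_linear(x, y):         # Equation 3
    return P.T @ ((U.T @ x) * (V.T @ y)) + b

def pool_nonlinear(x, y):      # Equation 5, with sigma = tanh
    return P.T @ (np.tanh(U.T @ x) * np.tanh(V.T @ y)) + b

def pool_after_product(x, y):  # Equation 6
    return P.T @ np.tanh((U.T @ x) * (V.T @ y)) + b

x, y = rng.standard_normal(N), rng.standard_normal(M)
print(pool_linear(x, y).shape, pool_nonlinear(x, y).shape)  # (2000,) (2000,)
```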
4 MULTIMODAL LOW-RANK BILINEAR ATTENTION NETWORKS

In this section, we apply low-rank bilinear pooling to propose an efficient attention mechanism for visual question-answering tasks, based on the interpretation of the previous section. We assume that the inputs are a question embedding vector \( \mathbf{q} \) and a set of visual feature vectors \( \mathbf{F} \) over an \( S \times S \) lattice space.

4.1 LOW-RANK BILINEAR POOLING IN ATTENTION MECHANISM

The attention mechanism uses an attention probability distribution \( \alpha \) over the \( S \times S \) lattice space. Here, using low-rank bilinear pooling, \( \alpha \) is defined as

\[ \alpha = \operatorname{softmax}\left( \mathbf{P}_\alpha^T \left( \sigma(\mathbf{U}_q^T \mathbf{q} \cdot \mathbf{1}^T) \circ \sigma(\mathbf{V}_F^T \mathbf{F}^T) \right) \right) \] (8)

where \( \alpha \in \mathbb{R}^{G \times S^2} \), \( \mathbf{P}_\alpha \in \mathbb{R}^{d \times G} \), \( \sigma \) is a hyperbolic tangent function, \( \mathbf{U}_q \in \mathbb{R}^{N \times d} \), \( \mathbf{q} \in \mathbb{R}^N \), \( \mathbf{1} \in \mathbb{R}^{S^2} \), \( \mathbf{V}_F \in \mathbb{R}^{M \times d} \), and \( \mathbf{F} \in \mathbb{R}^{S^2 \times M} \). If \( G > 1 \), multiple glimpses are explicitly expressed as in Fukui et al. (2016), conceptually similar to Jaderberg et al. (2015), and the softmax function applies to each row vector of \( \alpha \). The bias terms are omitted for simplicity.

4.2 MULTIMODAL LOW-RANK BILINEAR ATTENTION NETWORKS

The attended visual feature \( \hat{\mathbf{v}} \) is a linear combination of \( \mathbf{F}_i \) with coefficients \( \alpha_{g,i} \). Each attention probability distribution \( \alpha_g \) is for a glimpse \( g \). For \( G > 1 \), \( \hat{\mathbf{v}} \) is the concatenation of the resulting vectors \( \hat{\mathbf{v}}_g \):

\[ \hat{\mathbf{v}} = \Big\Vert_{g=1}^{G} \sum_{s=1}^{S^2} \alpha_{g,s} \mathbf{F}_s \] (9)

where \( \Vert \) denotes concatenation of vectors. The posterior probability distribution is the output of a softmax function whose input is the result of another low-rank bilinear pooling of \( \mathbf{q} \) and \( \hat{\mathbf{v}} \):

\[ p(a|\mathbf{q}, \mathbf{F}; \Theta) = \operatorname{softmax}\left( \mathbf{P}_a^T \left( \sigma(\mathbf{W}_q^T \mathbf{q}) \circ \sigma(\mathbf{V}_{\hat{\mathbf{v}}}^T \hat{\mathbf{v}}) \right) \right) \] (10)

\[ \hat{a} = \arg\max_{a \in \Omega} p(a|\mathbf{q}, \mathbf{F}; \Theta) \] (11)

where \( \hat{a} \) denotes the predicted answer, \( \Omega \) is the set of candidate answers, and \( \Theta \) is an aggregation of all model parameters.
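The following NumPy sketch traces the shapes through Equations 8–11 with untrained random weights; it is a structural illustration of ours under the dimensions of Table 4, not the authors' Torch implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
N, M, d, S, G, n_ans = 2400, 2048, 1200, 14, 2, 2000
q = rng.standard_normal(N)                    # question embedding
F = rng.standard_normal((S * S, M))           # visual features over the lattice

U_q, V_F = rng.standard_normal((N, d)), rng.standard_normal((M, d))
P_alpha = rng.standard_normal((d, G))

# Equation 8: attention distributions, one per glimpse (row-wise softmax).
joint = np.tanh(U_q.T @ q)[:, None] * np.tanh(V_F.T @ F.T)   # (d, S^2)
alpha = softmax(P_alpha.T @ joint, axis=1)                   # (G, S^2)

# Equation 9: attended visual feature, concatenated over glimpses.
v_hat = (alpha @ F).ravel()                                  # (G*M,)

# Equations 10-11: answer distribution from a second low-rank bilinear pooling.
W_q, V_v = rng.standard_normal((N, d)), rng.standard_normal((G * M, d))
P_a = rng.standard_normal((d, n_ans))
p = softmax(P_a.T @ (np.tanh(W_q.T @ q) * np.tanh(V_v.T @ v_hat)))
print(p.shape, p.sum(), p.argmax())                          # (2000,) 1.0 ...
```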
5 EXPERIMENTS

In this section, we conduct six experiments to select the proposed model, Multimodal Low-rank Bilinear Attention Networks (MLB). Each experiment controls all other factors except one, to assess its effect on accuracy. Based on MRN (Kim et al., 2016b), we start our assessments with an initial option of \( G = 1 \) and the shortcut connections of MRN, called Multimodal Attention Residual Networks (MARN). Notice that we use one embedding for each visual feature for better performance, based on our preliminary experiments (not shown). We attribute this choice to the attention mechanism for visual features, which provides more capacity to learn visual features. We use the same hyper-parameters as MRN (Kim et al., 2016b) unless explicitly mentioned otherwise.

The VQA dataset (Antol et al., 2015) is used as the primary dataset, and, for data augmentation, question-answering annotations of Visual Genome (Krishna et al., 2016) are used. Validation is performed on the VQA test-dev split, and model comparison is based on the results of the VQA test-standard split. For comprehensive reviews of VQA tasks, please refer to Wu et al. (2016a) and Kafle & Kanan (2016a). The details about preprocessing, question and vision embedding, and the hyperparameters used in our experiments are described in Appendix A. The source code for the experiments is available in a GitHub repository.\footnote{https://github.com/jnhwkim/MulLowBiVQA}

Number of Learning Blocks Kim et al. (2016b) argue that the three-block layered MRN shows the best performance among one- to four-block layered models, taking advantage of residual learning. However, we speculate that the introduction of an attention mechanism makes deep networks harder to optimize. Therefore, we explore the number of learning blocks of MARN, which has an attention mechanism using low-rank bilinear pooling.

Number of Glimpses Fukui et al. (2016) show that an attention mechanism with two glimpses was an optimal choice. In a similar way, we assess one-, two-, and four-glimpse models.

Table 1: The accuracies of our experimental model, Multimodal Attention Residual Networks (MARN), with respect to the number of learning blocks (L#), the number of glimpses (G#), the position of activation functions (tanh), answer sampling, shortcut connections, and data augmentation using the Visual Genome dataset, for the VQA test-dev split and the Open-Ended task. Note that our proposed model, Multimodal Low-rank Bilinear Attention Networks (MLB), has no shortcut connections, in contrast to MARN. MODEL: model name, SIZE: number of parameters, ALL: overall accuracy in percentage, Y/N: yes/no, NUM: numbers, and ETC: others. Since Fukui et al. (2016) only report the accuracy of their ensemble model on test-standard, the test-dev results of their single models are included in the last sector. Some figures are reported with different precisions, due to rounding. * indicates the selected model for each experiment.
<table> <tr> <th>MODEL</th> <th>SIZE</th> <th>ALL</th> <th>Y/N</th> <th>NUM</th> <th>ETC</th> </tr> <tr> <td>MRN-L3</td> <td>65.0M</td> <td>61.68</td> <td>82.28</td> <td>38.82</td> <td>49.25</td> </tr> <tr> <td>MARN-L3</td> <td>65.5M</td> <td>62.37</td> <td>82.31</td> <td>38.06</td> <td>50.83</td> </tr> <tr> <td>MARN-L2</td> <td>56.3M</td> <td>63.92</td> <td>82.88</td> <td>37.98</td> <td>53.59</td> </tr> <tr> <td>* MARN-L1</td> <td>47.0M</td> <td>63.79</td> <td>82.73</td> <td>37.92</td> <td>53.46</td> </tr> <tr> <td>MARN-L1-G1</td> <td>47.0M</td> <td>63.79</td> <td>82.73</td> <td>37.92</td> <td>53.46</td> </tr> <tr> <td>* MARN-L1-G2</td> <td>57.7M</td> <td>64.53</td> <td>83.41</td> <td>37.82</td> <td>54.43</td> </tr> <tr> <td>MARN-L1-G4</td> <td>78.9M</td> <td>64.61</td> <td>83.72</td> <td>37.86</td> <td>54.33</td> </tr> <tr> <td>No Tanh</td> <td>57.7M</td> <td>63.58</td> <td>83.18</td> <td>37.23</td> <td>52.79</td> </tr> <tr> <td>* Before-Product</td> <td>57.7M</td> <td>64.53</td> <td>83.41</td> <td>37.82</td> <td>54.43</td> </tr> <tr> <td>After-Product</td> <td>57.7M</td> <td>64.53</td> <td>83.53</td> <td>37.06</td> <td>54.50</td> </tr> <tr> <td>Mode Answer</td> <td>57.7M</td> <td>64.53</td> <td>83.41</td> <td>37.82</td> <td>54.43</td> </tr> <tr> <td>* Sampled Answer</td> <td>57.7M</td> <td>64.80</td> <td>83.59</td> <td>38.38</td> <td>54.73</td> </tr> <tr> <td>Shortcut</td> <td>57.7M</td> <td>64.80</td> <td>83.59</td> <td>38.38</td> <td>54.73</td> </tr> <tr> <td>* No Shortcut</td> <td>51.9M</td> <td>65.08</td> <td>84.14</td> <td>38.21</td> <td>54.87</td> </tr> <tr> <td>MLB</td> <td>51.9M</td> <td>65.08</td> <td>84.14</td> <td>38.21</td> <td>54.87</td> </tr> <tr> <td>MLB+VG</td> <td>51.9M</td> <td>65.84</td> <td>83.87</td> <td>37.87</td> <td>56.76</td> </tr> <tr> <td>MCB+Att (Fukui et al., 2016)</td> <td>69.2M</td> <td>64.2</td> <td>82.2</td> <td>37.7</td> <td>54.8</td> </tr> <tr> <td>MCB+Att+GloVe (Fukui et al., 2016)</td> <td>70.5M</td> <td>64.7</td> <td>82.5</td> <td>37.6</td> <td>55.6</td> </tr> <tr> <td>MCB+Att+GloVe+VG (Fukui et al., 2016)</td> <td>70.5M</td> <td>65.4</td> <td>82.3</td> <td>37.2</td> <td>57.4</td> </tr> </table>

Non-Linearity We assess three options for applying non-linearity in low-rank bilinear pooling: none (vanilla), before the Hadamard product as in Equation 5, and after the Hadamard product as in Equation 6.

Answer Sampling The VQA dataset (Antol et al., 2015) has ten answers from unique persons for each question, while the Visual Genome dataset (Krishna et al., 2016) has a single answer for each question. Since difficult or ambiguous questions may have divided answers, probabilistic sampling from the distribution of answers can be used to optimize for the multiple answers. An instance can be found in Fukui et al. (2016).\footnote{https://github.com/akirafukui/vqa-mcb/blob/5fea8/train/multi_att_2_glove/vqa_data_provider_layer.py#L130} We simplify the procedure as follows:

\[ p(a_1) = \begin{cases} |a_1| / \Sigma_i |a_i|, & \text{if } |a_1| \geq 3 \\ 0, & \text{otherwise} \end{cases} \] (12)

\[ p(a_0) = 1 - p(a_1) \] (13)

where \(|a_i|\) denotes the number of occurrences of the unique answer \(a_i\) in the set of multiple answers, \(a_0\) denotes the mode, i.e. the most frequent answer, and \(a_1\) denotes the second most frequent answer. We define divided answers as those whose second most frequent answer occurs at least three times, following the evaluation metric of VQA (Antol et al., 2015):

\[ \text{accuracy}(a_k) = \min(|a_k|/3, 1). \] (14)
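A minimal Python sketch of this sampling rule and the evaluation metric, under our reading of Equations 12–14 (the helper names are ours, not from the authors' code):

```python
import numpy as np
from collections import Counter

def sample_target(answers, rng):
    """Sketch of Equations 12-13: sample between the mode a0 and runner-up a1."""
    ranked = Counter(answers).most_common()
    a0 = ranked[0][0]
    if len(ranked) > 1 and ranked[1][1] >= 3:      # "divided answers" case
        a1, n1 = ranked[1]
        if rng.random() < n1 / len(answers):       # p(a1) = |a1| / sum_i |a_i|
            return a1
    return a0

def vqa_accuracy(prediction, answers):
    """Equation 14: accuracy(a_k) = min(|a_k| / 3, 1)."""
    return min(answers.count(prediction) / 3.0, 1.0)

rng = np.random.default_rng(0)
answers = ["red"] * 6 + ["maroon"] * 3 + ["pink"]  # ten annotator answers
print(sample_target(answers, rng))                 # usually "red", sometimes "maroon"
print(vqa_accuracy("maroon", answers))             # 1.0
```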
Table 2: The VQA test-standard results compared with the state-of-the-art. Notice that these results are trained on the provided VQA train and validation splits, without any data augmentation.

<table> <tr> <th rowspan="2">MODEL</th> <th colspan="4">Open-Ended</th> <th rowspan="2">MC ALL</th> </tr> <tr> <th>ALL</th> <th>Y/N</th> <th>NUM</th> <th>ETC</th> </tr> <tr> <td>iBOWIMG (Zhou et al., 2015)</td> <td>55.89</td> <td>76.76</td> <td>34.98</td> <td>42.62</td> <td>61.97</td> </tr> <tr> <td>DPPnet (Noh et al., 2016)</td> <td>57.36</td> <td>80.28</td> <td>36.92</td> <td>42.24</td> <td>62.69</td> </tr> <tr> <td>Deeper LSTM+Normalized CNN (Antol et al., 2015)</td> <td>58.16</td> <td>80.56</td> <td>36.53</td> <td>43.73</td> <td>63.09</td> </tr> <tr> <td>SMem (Xu & Saenko, 2016)</td> <td>58.24</td> <td>80.80</td> <td>37.53</td> <td>43.48</td> <td>-</td> </tr> <tr> <td>Ask Your Neurons (Malinowski et al., 2016)</td> <td>58.43</td> <td>78.24</td> <td>36.27</td> <td>46.32</td> <td>-</td> </tr> <tr> <td>SAN (Yang et al., 2016)</td> <td>58.85</td> <td>79.11</td> <td>36.41</td> <td>46.42</td> <td>-</td> </tr> <tr> <td>D-NMN (Andreas et al., 2016)</td> <td>59.44</td> <td>80.98</td> <td>37.48</td> <td>45.81</td> <td>-</td> </tr> <tr> <td>ACK (Wu et al., 2016b)</td> <td>59.44</td> <td>81.07</td> <td>37.12</td> <td>45.83</td> <td>-</td> </tr> <tr> <td>FDA (Ilievski et al., 2016)</td> <td>59.54</td> <td>81.34</td> <td>35.67</td> <td>46.10</td> <td>64.18</td> </tr> <tr> <td>HYBRID (Kafle & Kanan, 2016b)</td> <td>60.06</td> <td>80.34</td> <td>37.82</td> <td>47.56</td> <td>-</td> </tr> <tr> <td>DMN+ (Xiong et al., 2016)</td> <td>60.36</td> <td>80.43</td> <td>36.82</td> <td>48.33</td> <td>-</td> </tr> <tr> <td>MRN (Kim et al., 2016b)</td> <td>61.84</td> <td>82.39</td> <td><b>38.23</b></td> <td>49.41</td> <td>66.33</td> </tr> <tr> <td>HieCoAtt (Lu et al., 2016)</td> <td>62.06</td> <td>79.95</td> <td>38.22</td> <td>51.95</td> <td>66.07</td> </tr> <tr> <td>RAU (Noh & Han, 2016)</td> <td>63.2</td> <td>81.7</td> <td>38.2</td> <td>52.8</td> <td>67.3</td> </tr> <tr> <td>MLB (ours)</td> <td><b>65.07</b></td> <td><b>84.02</b></td> <td>37.90</td> <td><b>54.77</b></td> <td><b>68.89</b></td> </tr> </table>

The rate of divided answers is approximately 16.40%, and only 0.23% of questions have more than two divided answers in the VQA dataset. We assume that this eases the difficulty of convergence without severe degradation of performance.

Shortcut Connection The contribution of shortcut connections for residual learning is explored, based on the observation of the competitive performance of the single-block layered model, since the usefulness of shortcut connections is linked to network depth (He et al., 2016).

Data Augmentation Data augmentation with Visual Genome (Krishna et al., 2016) question-answer annotations is explored. Visual Genome originally provides 1.7 million visual question-answer annotations. After aligning to VQA, the valid number of question-answer pairs for training is 837,298, over 99,280 distinct images.

6 RESULTS

The six experiments are conducted sequentially; each experiment determines the experimental variables one by one. Refer to Table 1, which has six sectors divided by mid-rules.
6.1 SIX EXPERIMENT RESULTS

Number of Learning Blocks Though MRN (Kim et al., 2016b) has a three-block layered architecture, MARN shows the best performance with the two-block layered model (63.92%). For the multiple-glimpse models in the next experiment, we choose the one-block layered model for its simplicity to extend and its competitive performance (63.79%).

Number of Glimpses Compared with the results of Fukui et al. (2016), four-glimpse MARN (64.61%) is better than the other comparative models. However, as a parsimonious choice, two-glimpse MARN (64.53%) is chosen for later experiments. We speculate that multiple glimpses are one of the key factors behind the competitive performance of MCB (Fukui et al., 2016), based on the large margin in accuracy compared with one-glimpse MARN (63.79%).

Non-Linearity The results confirm that activation functions are useful for improving performance. Surprisingly, there is no empirical difference between the two options, before-Hadamard product and after-Hadamard product. This result may build a bridge to studies on multiplicative integration with recurrent neural networks (Wu et al., 2016c).

Answer Sampling Sampled answers (64.80%) result in better performance than mode answers (64.53%). This confirms that the distribution of answers from annotators can be used to improve performance. However, the number of multiple answers is usually limited due to the cost of data collection.

Shortcut Connection Though MRN (Kim et al., 2016b) effectively uses shortcut connections to improve model performance, one-block layered MARN shows better performance without the shortcut connection. In other words, residual learning is not used in our proposed model, MLB. There seems to be a trade-off between introducing an attention mechanism and residual learning; we leave a careful study of this trade-off for future work.

Data Augmentation Data augmentation using Visual Genome (Krishna et al., 2016) question-answer annotations significantly improves the performance, by 0.76% in accuracy for the VQA test-dev split. In particular, the accuracy on others (ETC)-type answers is notably improved by the data augmentation.

6.2 COMPARISON WITH STATE-OF-THE-ART

The comparison with other single models on VQA test-standard is shown in Table 2. The overall accuracy of our model is approximately 1.9% above the next best model (Noh & Han, 2016) on the Open-Ended task of VQA. The major improvements are in yes-or-no (Y/N) and others (ETC)-type answers. In Table 3, we also report the accuracy of our ensemble model to compare with other ensemble models on VQA test-standard, which won 1st to 5th places in the VQA Challenge 2016.\footnote{http://visualqa.org/challenge.html} We beat the previous state-of-the-art with a margin of 0.42%.

Table 3: The VQA test-standard results for ensemble models compared with the state-of-the-art. For unpublished entries, their team names are used instead of model names. Some of their figures were updated after the challenge.
<table> <tr> <th rowspan="2">MODEL</th> <th colspan="4">Open-Ended</th> <th>MC</th> </tr> <tr> <th>ALL</th> <th>Y/N</th> <th>NUM</th> <th>ETC</th> <th>ALL</th> </tr> <tr> <td>RAU (Noh & Han, 2016)</td> <td>64.12</td> <td>83.33</td> <td>38.02</td> <td>53.37</td> <td>67.34</td> </tr> <tr> <td>MRN (Kim et al., 2016b)</td> <td>63.18</td> <td>83.16</td> <td>39.14</td> <td>51.33</td> <td>67.54</td> </tr> <tr> <td>DLAIT (not published)</td> <td>64.83</td> <td>83.23</td> <td><b>40.80</b></td> <td>54.32</td> <td>68.30</td> </tr> <tr> <td>Naver Labs (not published)</td> <td>64.79</td> <td>83.31</td> <td>38.70</td> <td>54.79</td> <td>69.26</td> </tr> <tr> <td>MCB (Fukui et al., 2016)</td> <td>66.47</td> <td>83.24</td> <td>39.47</td> <td><b>58.00</b></td> <td>70.10</td> </tr> <tr> <td>MLB (ours)</td> <td><b>66.89</b></td> <td><b>84.61</b></td> <td>39.07</td> <td>57.79</td> <td><b>70.29</b></td> </tr> <tr> <td>Human (Antol et al., 2015)</td> <td>83.30</td> <td>95.77</td> <td>83.39</td> <td>72.67</td> <td>91.54</td> </tr> </table>

7 RELATED WORKS

MRN (Kim et al., 2016b) proposes multimodal residual learning with the Hadamard product of low-rank bilinear pooling. However, their utilization of low-rank bilinear pooling is limited to a joint residual mapping function for multimodal residual learning. Higher-order Boltzmann Machines (Memisevic & Hinton, 2007; 2010) use the Hadamard product to capture the interactions of input, output, and hidden representations in an energy function. Wu et al. (2016c) propose recurrent neural networks using the Hadamard product to integrate multiplicative interactions among hidden representations in the model. For details of these related works, please refer to Appendix D.

Yet, compact bilinear pooling and multimodal compact bilinear pooling (Gao et al., 2016; Fukui et al., 2016) are worth discussing and carefully comparing with our method.

7.1 COMPACT BILINEAR POOLING

Compact bilinear pooling (Gao et al., 2016) approximates full bilinear pooling using a sampling-based computation, Tensor Sketch Projection (Charikar et al., 2002; Pham & Pagh, 2013):

\[ \Psi(x \otimes y, h, s) = \Psi(x, h, s) * \Psi(y, h, s) \] (15)

\[ = \mathrm{FFT}^{-1}(\mathrm{FFT}(\Psi(x, h, s)) \circ \mathrm{FFT}(\Psi(y, h, s))) \] (16)

where \( \otimes \) denotes outer product, \( * \) denotes convolution, \( \Psi(v, h, s)_i := \sum_{j: h_j = i} s_j \cdot v_j \), FFT denotes the Fast Fourier Transform, \( d \) denotes the output dimension, \( x, y, h, s \in \mathbb{R}^n \), \( x \) and \( y \) are inputs, and \( h \) and \( s \) are random variables. \( h_i \) is sampled from \( \{1, \ldots, d\} \), and \( s_i \) is sampled from \( \{-1, 1\} \); both random variables are then fixed for further usage. Even if the dimensions of \( x \) and \( y \) differ from each other, this scheme can be used for multimodal learning (Fukui et al., 2016). Similarly to Equation 1, compact bilinear pooling can be described as follows:

\[ f_i = \mathbf{x}^T \mathcal{W}_i \mathbf{y} \] (17)

where \( \mathcal{W}_{ijk} = s_{ijk} w_{ijk} \), \( s_{ijk} \) is sampled from \( \{-1, 1\} \), \( w_{ijk} \) is sampled from \( \{\mathbf{P}_{i1}, \mathbf{P}_{i2}, \ldots, \mathbf{P}_{id}\} \), and the compact bilinear pooling is followed by a fully connected layer \( \mathbf{P} \in \mathbb{R}^{|\Omega| \times d} \).
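Before relating this to the hashing trick below, the property in Equations 15–16 can be checked numerically. In this sketch (ours; 0-indexed hashes, with the pair \( (j, k) \) hashed to \( (h_j + h_k) \bmod d \), which is exactly what circular convolution of the two count sketches realizes), the two sides agree up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 128

def count_sketch(v, h, s):
    out = np.zeros(d)
    np.add.at(out, h, s * v)   # Psi(v, h, s)_i = sum_{j: h_j = i} s_j v_j
    return out

x, y = rng.standard_normal(n), rng.standard_normal(n)
h, s = rng.integers(0, d, size=n), rng.choice([-1.0, 1.0], size=n)

# Left side: count sketch of the outer product x (x) y with induced pair hashes.
ho = (h[:, None] + h[None, :]) % d        # hash of pair (j, k)
so = s[:, None] * s[None, :]              # sign of pair (j, k)
lhs = count_sketch(np.outer(x, y).ravel(), ho.ravel(), so.ravel())

# Right side: circular convolution of the two sketches via FFT (Equation 16).
rhs = np.fft.ifft(np.fft.fft(count_sketch(x, h, s)) *
                  np.fft.fft(count_sketch(y, h, s))).real
print(np.allclose(lhs, rhs))              # True
```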
Then, this method can be formulated as a hashing trick (Weinberger et al., 2009; Chen et al., 2015) that shares randomly chosen bilinear weights using \( d \) parameters per output value, in such a way that a single parameter is shared by \( NM/d \) bilinear terms in expectation, with a variance of \( NM(d-1)/d^2 \) (see Appendix B). In comparison with our method, their method approximates the three-dimensional weight tensor of bilinear pooling with a two-dimensional matrix \( \mathbf{P} \), which is larger than the concatenation of the three two-dimensional matrices used by low-rank bilinear pooling. The ratio of the number of parameters for a single output to the total number of parameters for \( |\Omega| \) outputs is \( d/(d|\Omega|) = 1/|\Omega| \) for compact bilinear pooling (Fukui et al., 2016), vs. \( d(N + M + 1)/d(N + M + |\Omega|) = (N + M + 1)/(N + M + |\Omega|) \approx 2/3 \) for ours, since our method uses a three-way factorization. Hence, more parameters are allocated to each bilinear approximation than in compact bilinear pooling, effectively managing the overall parameters as guided by the back-propagation algorithm.

MCB (Fukui et al., 2016), which uses compact bilinear pooling for multimodal tasks, needs to set the output dimension \( d \) to 16K to reduce the bias induced by the fixed random variables \( h \) and \( s \). As a result, the majority of model parameters (16K × 3K = 48M) are concentrated in the last fully connected layer, which creates a fan-out structure, so the total number of parameters of MCB is highly sensitive to the number of classes: approximately 69.2M for MCB+att and 70.5M for MCB+att+GloVe. In contrast, the total number of parameters of our proposed model (MLB) is 51.9M, which is more robust to the number of classes, with \( d = 1.2K \) playing a similar role in the model architecture.

8 CONCLUSIONS

We suggest a low-rank bilinear pooling method to replace compact bilinear pooling, which has a fan-out structure and needs complex computations. Low-rank bilinear pooling has a flexible structure using linear mapping and Hadamard product, and a better parsimonious property, compared with compact bilinear pooling. We achieve new state-of-the-art results on the VQA dataset with an architecture similar to that of Fukui et al. (2016), replacing compact bilinear pooling with low-rank bilinear pooling. We believe our method could be applicable to other bilinear learning tasks.

ACKNOWLEDGMENTS

The authors would like to thank Patrick Emaase for helpful comments and editing. Also, we are thankful to the anonymous reviewers who provided comments to improve this paper. This work was supported by NAVER LABS Corp. & NAVER Corp. and partly by the Korea government (IITP-R0126-16-1072-SW.StarLab, KEIT-10044009-HRI.MESSI, KEIT-10060086-RISF, ADD-UD130070ID-BMRR). Part of the computing resources used in this study was generously shared by Standigm Inc.

Appendix A EXPERIMENT DETAILS

A.1 PREPROCESSING

We follow the preprocessing procedure of Kim et al. (2016b). Here, we note some of its details and our changes.

A.1.1 QUESTION EMBEDDING

90.45% of questions, covering the 2K most frequent answers, are used. The vocabulary size of the questions is 15,031. A GRU (Cho et al., 2014) is used for question embedding. Based on earlier studies (Noh et al., 2016; Kim et al., 2016b), the word embedding matrix and the GRU are initialized with the Skip-thought Vector pre-trained model (Kiros et al., 2015). As a result, question vectors have 2,400 dimensions. For efficient computation over variable-length questions, TrimZero (Kim et al., 2016a) is used for the GRU.
Moreover, for regularization, Bayesian Dropout (Gal, 2015), as implemented in Léonard et al. (2015), is applied during training.

A.2 VISION EMBEDDING

ResNet-152 networks (He et al., 2016) are used for feature extraction. The dimensionality of an input image is \( 3 \times 448 \times 448 \). The output of the last convolution layer is used, which has \( 2,048 \times 14 \times 14 \) dimensions.

A.3 HYPERPARAMETERS

The hyperparameters used in the MLB of Table 2 are described in Table 4. The batch size is 100, and the number of iterations is fixed to 250K. For data-augmented models, a simplified early stopping is used, checking every 25K iterations from 250K to 350K (250K, 275K, 300K, 325K, and 350K; at most five points) to avoid exhaustive submissions to the VQA test-dev evaluation server. RMSProp (Tieleman & Hinton, 2012) is used for optimization. Though the joint embedding size \( d \) is borrowed from Kim et al. (2016b), a grid search on \( d \) confirms this choice in our model, as shown in Table 5.

Table 4: Hyperparameters used in MLB.

<table> <tr> <th>SYMBOL</th> <th>VALUE</th> <th>DESCRIPTION</th> </tr> <tr> <td>S</td> <td>14</td> <td>attention lattice size</td> </tr> <tr> <td>N</td> <td>2,400</td> <td>question embedding size</td> </tr> <tr> <td>M</td> <td>2,048</td> <td>channel size of extracted visual features</td> </tr> <tr> <td>d</td> <td>1,200</td> <td>joint embedding size</td> </tr> <tr> <td>G</td> <td>2</td> <td>number of glimpses</td> </tr> <tr> <td>|Ω|</td> <td>2,000</td> <td>number of candidate answers</td> </tr> <tr> <td>η</td> <td>3e-4</td> <td>learning rate</td> </tr> <tr> <td>λ</td> <td>0.99997592083</td> <td>learning rate decay factor at every iteration</td> </tr> <tr> <td>p</td> <td>0.5</td> <td>dropout rate</td> </tr> <tr> <td>θ</td> <td>±10</td> <td>gradient clipping threshold</td> </tr> </table>

A.4 MODEL SCHEMA

Figure 1 shows a schematic diagram of MLB, where \( \circ \) denotes Hadamard product, and \( \Sigma \) denotes a linear combination of visual feature vectors using coefficients that are the output of the softmax function. If \( G > 1 \), the softmax function is applied to each row vector of the output matrix (Equation 8), and we concatenate the resulting vectors of the \( G \) linear combinations (Equation 9).

A.5 ENSEMBLE OF SEVEN MODELS

The test-dev results for the individual models comprising our ensemble model are presented in Table 6.

Table 5: The effect of joint embedding size \( d \).

<table> <tr> <th rowspan="2">d</th> <th rowspan="2">SIZE</th> <th colspan="4">Open-Ended</th> </tr> <tr> <th>ALL</th> <th>Y/N</th> <th>NUM</th> <th>ETC</th> </tr> <tr> <td>800</td> <td>45.0M</td> <td>64.89</td> <td>84.08</td> <td>38.15</td> <td>54.55</td> </tr> <tr> <td>1000</td> <td>48.4M</td> <td>65.06</td> <td><b>84.18</b></td> <td>38.01</td> <td>54.85</td> </tr> <tr> <td>1200</td> <td>51.9M</td> <td><b>65.08</b></td> <td>84.14</td> <td><b>38.21</b></td> <td><b>54.87</b></td> </tr> <tr> <td>1400</td> <td>55.4M</td> <td>64.94</td> <td>84.13</td> <td>38.00</td> <td>54.64</td> </tr> <tr> <td>1600</td> <td>58.8M</td> <td>65.02</td> <td>84.15</td> <td>37.79</td> <td>54.85</td> </tr> </table>

![A schematic diagram of MLB.](page_370_613_808_312.png)

Figure 1: A schematic diagram of MLB. The *Replicate* module copies a question embedding vector to match the \( S^2 \) visual feature vectors. *Conv* modules indicate \( 1 \times 1 \) convolutions that transform a given channel space, which is computationally equivalent to a linear projection over channels.
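As a quick arithmetic check of ours on the decay factor \( \lambda \) in Table 4 (an observation, not a claim from the paper):

```python
eta, lam = 3e-4, 0.99997592083   # learning rate and per-iteration decay (Table 4)
for t in (0, 50_000, 100_000, 250_000):
    print(t, eta * lam ** t)
# The decay factor multiplies the rate by roughly 0.3 every 50K iterations,
# so after the fixed 250K iterations the rate has fallen to about 7e-7.
```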
Table 6: The individual models used in our ensemble model in Table 3.

<table> <tr> <th rowspan="2">MODEL</th> <th rowspan="2">GLIMPSE</th> <th colspan="4">Open-Ended</th> </tr> <tr> <th>ALL</th> <th>Y/N</th> <th>NUM</th> <th>ETC</th> </tr> <tr> <td>MLB</td> <td>2</td> <td>64.89</td> <td>84.13</td> <td>37.85</td> <td>54.57</td> </tr> <tr> <td>MLB</td> <td>2</td> <td>65.08</td> <td><b>84.14</b></td> <td>38.21</td> <td>54.87</td> </tr> <tr> <td>MLB</td> <td>4</td> <td>65.01</td> <td>84.09</td> <td>37.66</td> <td>54.88</td> </tr> <tr> <td>MLB-VG</td> <td>2</td> <td>65.76</td> <td>83.64</td> <td>37.57</td> <td>56.86</td> </tr> <tr> <td>MLB-VG</td> <td>2</td> <td>65.84</td> <td>83.87</td> <td>37.87</td> <td>56.76</td> </tr> <tr> <td>MLB-VG</td> <td>3</td> <td>66.05</td> <td>83.88</td> <td>38.13</td> <td>57.13</td> </tr> <tr> <td>MLB-VG</td> <td>4</td> <td><b>66.09</b></td> <td>83.59</td> <td><b>38.32</b></td> <td><b>57.42</b></td> </tr> <tr> <td>Ensemble</td> <td>-</td> <td>66.77</td> <td>84.54</td> <td>39.21</td> <td>57.81</td> </tr> </table>

B UNDERSTANDING OF MULTIMODAL COMPACT BILINEAR POOLING

In this section, the algorithm of multimodal compact bilinear pooling (MCB) (Gao et al., 2016; Fukui et al., 2016) is described as a kind of hashing trick (Chen et al., 2015). \( x \in \mathbb{R}^{n_x} \) and \( y \in \mathbb{R}^{n_y} \) are the given inputs, and \( \Phi(x, y) \in \mathbb{R}^d \) is the output. Random variables \( h_x \in \mathbb{N}^{n_x} \) and \( h_y \in \mathbb{N}^{n_y} \) are uniformly sampled from \( \{1, \ldots, d\} \), and \( s_x \in \mathbb{Z}^{n_x} \) and \( s_y \in \mathbb{Z}^{n_y} \) are uniformly sampled from \( \{-1, 1\} \). Then, the Count Sketch projection function \( \Psi \) (Charikar et al., 2002) projects \( x \) and \( y \) to the intermediate representations \( \Psi(x, h_x, s_x) \in \mathbb{R}^d \) and \( \Psi(y, h_y, s_y) \in \mathbb{R}^d \), defined as:

\[ \Psi(v, h, s)_i := \sum_{j : h_j = i} s_j \cdot v_j \] (18)

Notice that both \( h \) and \( s \) remain constant after initialization (Fukui et al., 2016). The probability that \( h_{xj} = i \) and \( h_{yk} = i \) for a given pair \( (j, k) \) is \( 1/d^2 \). Hence, the expected number of bilinear terms in \( \Psi(x, h_x, s_x)_i \cdot \Psi(y, h_y, s_y)_i \) is \( (n_x n_y)/d^2 \). Since the output \( \Phi(x, y) \) is the circular convolution of \( \Psi(x, h_x, s_x) \) and \( \Psi(y, h_y, s_y) \), the expected number of bilinear terms in \( \Phi(x, y)_i \) is \( (n_x n_y)/d \). Likewise, the probability that a given bilinear term is allocated to \( \Phi(x, y)_i \) is \( 1/d \). The probability distribution of the number of bilinear terms in \( \Phi(x, y)_i \) follows a multinomial distribution, whose mean is \( (n_x n_y)/d \) and whose variance is \( (n_x n_y)(d-1)/d^2 \). The linear projection after multimodal compact bilinear pooling provides weights on the bilinear terms, in such a way that a shared weight is assigned to \( \Phi(x, y)_i \), which contains \( (n_x n_y)/d \) bilinear terms in expectation, though each bilinear term can have a different sign, induced by both \( s_x \) and \( s_y \).
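The counting argument above is easy to check by simulation. The sketch below (ours) models the allocation of the term \( x_j y_k \) to coordinate \( (h_x[j] + h_y[k]) \bmod d \) of \( \Phi \), which is how circular convolution combines the two count sketches:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, d, trials = 64, 64, 16, 2000

counts = np.empty(trials)
for t in range(trials):
    h_x = rng.integers(0, d, size=n_x)   # 0-indexed stand-in for {1, ..., d}
    h_y = rng.integers(0, d, size=n_y)
    # term x_j * y_k lands in coordinate i of Phi when (h_x[j] + h_y[k]) % d == i;
    # count the terms allocated to coordinate 0.
    counts[t] = np.sum((h_x[:, None] + h_y[None, :]) % d == 0)

print(counts.mean(), n_x * n_y / d)                 # both close to 256
print(counts.var(), n_x * n_y * (d - 1) / d ** 2)   # both close to 240
```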
HashedNets (Chen et al., 2015) compress neural networks using a low-cost hashing function (Weinberger et al., 2009), which is the same function as \( \Psi(v, h, s) \): they randomly group a portion of the connections in a neural network to share a single weight. We speculate that multimodal compact bilinear pooling uses the hashing trick to reduce the number of full bilinear weights at a rate of \( d/(n_x n_y) \). However, this approximation is limited to two-way interactions, compared with the three-way factorization in our method.

C REPLACEMENT OF LOW-RANK BILINEAR POOLING

For an explicit comparison with compact bilinear pooling, we substitute compact bilinear pooling for low-rank bilinear pooling while controlling everything else; that is, the rest of the model architecture is exactly the same. Following Fukui et al. (2016), we use MCB followed by Signed Square Root, L2-Normalization, Dropout (\( p=0.1 \)), and a linear projection from 16,000 dimensions to the target dimension; Dropout (\( p=0.3 \)) is also applied to the question embedding vector. Note that the overall architecture for multimodal learning is the same for both. Experimental details are referenced from the implementation\footnote{https://github.com/akirafukui/vqa-mcb} of Fukui et al. (2016).

On the test-dev split, our version of MCB gets 61.48% overall accuracy (yes/no: 82.48%, number: 37.06%, and other: 49.07%) vs. 65.08% (ours, MLB in Table 1). Additionally, if the non-linearity used in computing the attention distributions is increased with ReLU, as in the original MCB, we get 62.11% overall accuracy (yes/no: 82.55%, number: 37.18%, and other: 50.30%), which is still below our performance.\footnote{Our version of the MCB definition can be found at https://github.com/jnhwkim/MulLowBiVQA/blob/master/netdef/MCB.lua} We do not see this as decisive evidence of the better performance of MLB, but rather as a reference (the comparison of test-dev results may also be unfair), since an optimal architecture and hyperparameters may be required for each method.

D RELATED WORKS

D.1 MULTIMODAL RESIDUAL NETWORKS

MRN (Kim et al., 2016b) is an implicit attentional model using multimodal residual learning with the Hadamard product; it does not have any explicit attention mechanism:

\[ \mathcal{F}^{(k)}(\mathbf{q}, \mathbf{v}) = \sigma(\mathbf{W}_q^{(k)} \mathbf{q}) \circ \sigma(\mathbf{W}_2^{(k)} \sigma(\mathbf{W}_1^{(k)} \mathbf{v})) \] (19)

\[ H_L(\mathbf{q}, \mathbf{v}) = \mathbf{W}_{q'} \mathbf{q} + \sum_{l=1}^L \mathbf{W}_{f}^{(l)} \mathcal{F}^{(l)}(H_{l-1}, \mathbf{v}) \] (20)

where the \( \mathbf{W} \) are parameter matrices, \( L \) is the number of learning blocks, \( H_0 = \mathbf{q} \), \( \mathbf{W}_{q'} = \Pi_{l=1}^L \mathbf{W}_{q'}^{(l)} \), and \( \mathbf{W}_{f}^{(l)} = \Pi_{m=l+1}^L \mathbf{W}_{q'}^{(m)} \). Notice that these equations can be generalized by Equation 7. However, an explicit attention mechanism allows the use of lower-level visual features than fully-connected layers and, more importantly, spatially selective learning. Recent state-of-the-art methods use a variant of an explicit attention mechanism in their models (Lu et al., 2016; Noh & Han, 2016; Fukui et al., 2016).

Note that the shortcut connections of MRN are not used in the proposed Multimodal Low-rank Bilinear (MLB) model, since they give no performance gain when multiple layers are not stacked, as in MLB. We leave the study of residual learning for MLB as future work, which may leverage the strength of bilinear models as suggested in Wu et al. (2016a).
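A minimal NumPy rendering of the joint residual function in Equation 19 (a sketch of ours; the shapes and scaling are illustrative, and Equation 20 then stacks \( L \) such blocks with linear shortcut mappings):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, d = 2400, 2048, 1200
sigma = np.tanh

def joint_residual(q, v, W_q, W_1, W_2):
    """Equation 19: F(q, v) = sigma(W_q q) o sigma(W_2 sigma(W_1 v))."""
    return sigma(W_q @ q) * sigma(W_2 @ sigma(W_1 @ v))

W_q = rng.standard_normal((d, N)) / np.sqrt(N)
W_1 = rng.standard_normal((d, M)) / np.sqrt(M)
W_2 = rng.standard_normal((d, d)) / np.sqrt(d)
F = joint_residual(rng.standard_normal(N), rng.standard_normal(M), W_q, W_1, W_2)
print(F.shape)  # (1200,)
```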
D.2 HIGHER-ORDER BOLTZMANN MACHINES

A similar model can be found in the studies of Higher-Order Boltzmann Machines (Memisevic & Hinton, 2007; 2010). They suggest a factoring method for the three-way energy function to capture correlations among input, output, and hidden representations:

\[ -E(\mathbf{y}, \mathbf{h}; \mathbf{x}) = \sum_f \left( \sum_i x_i w_{if}^x \right) \left( \sum_j y_j w_{jf}^y \right) \left( \sum_k h_k w_{kf}^h \right) + \sum_k w_k^h h_k + \sum_j w_j^y y_j = (\mathbf{x}^T \mathbf{W}^x \circ \mathbf{y}^T \mathbf{W}^y \circ \mathbf{h}^T \mathbf{W}^h) \mathbf{1} + \mathbf{h}^T \mathbf{w}^h + \mathbf{y}^T \mathbf{w}^y \] (21)

Setting aside bias terms, the \( I \times J \times K \) parameter tensor of unfactored Higher-Order Boltzmann Machines is replaced with three matrices, \( \mathbf{W}^x \in \mathbb{R}^{I \times F} \), \( \mathbf{W}^y \in \mathbb{R}^{J \times F} \), and \( \mathbf{W}^h \in \mathbb{R}^{K \times F} \).

D.3 MULTIPLICATIVE INTEGRATION WITH RECURRENT NEURAL NETWORKS

Most recurrent neural networks, including vanilla RNNs, Long Short-Term Memory networks (Hochreiter & Schmidhuber, 1997), and Gated Recurrent Units (Cho et al., 2014), share a common expression:

\[ \phi(\mathbf{W} \mathbf{x} + \mathbf{U} \mathbf{h} + \mathbf{b}) \] (22)

where \( \phi \) is a non-linear function, \( \mathbf{W} \in \mathbb{R}^{d \times n} \), \( \mathbf{x} \in \mathbb{R}^n \), \( \mathbf{U} \in \mathbb{R}^{d \times m} \), \( \mathbf{h} \in \mathbb{R}^m \), and \( \mathbf{b} \in \mathbb{R}^d \) is a bias vector. Usually, \( \mathbf{x} \) is an input state vector and \( \mathbf{h} \) is a hidden state vector in recurrent neural networks. Wu et al. (2016c) propose a new design that replaces the additive expression with a multiplicative expression using the Hadamard product:

\[ \phi(\mathbf{W} \mathbf{x} \circ \mathbf{U} \mathbf{h} + \mathbf{b}). \] (23)

Moreover, a general formulation of this multiplicative integration can be described as

\[ \phi(\alpha \circ \mathbf{W} \mathbf{x} \circ \mathbf{U} \mathbf{h} + \mathbf{W} \mathbf{x} \circ \beta_1 + \mathbf{U} \mathbf{h} \circ \beta_2 + \mathbf{b}) \] (24)

which is reminiscent of the full model in Section 3.1.
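The contrast between the additive update of Equation 22 and the multiplicative update of Equation 23 can be sketched in a few lines of NumPy (ours; note that the multiplicative update degenerates at an exactly zero state, hence the non-zero initialization):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16          # input size
d = 32          # hidden size (the output of phi feeds back as h, so m = d)
W = rng.standard_normal((d, n)) / np.sqrt(n)
U = rng.standard_normal((d, d)) / np.sqrt(d)
b = np.zeros(d)

def step_additive(x, h):        # Equation 22
    return np.tanh(W @ x + U @ h + b)

def step_multiplicative(x, h):  # Equation 23
    return np.tanh((W @ x) * (U @ h) + b)

# With h = 0 and b = 0, the multiplicative update is stuck at zero,
# since Wx o Uh vanishes identically; start from a small non-zero state.
h_add = h_mul = 0.1 * rng.standard_normal(d)
for _ in range(5):
    x = rng.standard_normal(n)
    h_add = step_additive(x, h_add)
    h_mul = step_multiplicative(x, h_mul)
print(h_add[:3], h_mul[:3])
```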
accept
Accept (Poster)
6.666667
66c8f1ab9d06d15414f51a8a6c2f617a771a122b
iclr
2,017
TRANSFORMATIONAL SPARSE CODING Dimitrios C. Gklezakos & Rajesh P. N. Rao Department of Computer Science and Center for Sensorimotor Neural Engineering University of Washington Seattle, WA 98105, USA {gklezd,rao}@cs.washington.edu ABSTRACT A fundamental problem faced by object recognition systems is that objects and their features can appear in different locations, scales and orientations. Current deep learning methods attempt to achieve invariance to local translations via pooling, discarding the locations of features in the process. Other approaches explicitly learn transformed versions of the same feature, leading to representations that quickly explode in size. Instead of discarding the rich and useful information about feature transformations to achieve invariance, we argue that models should learn object features conjointly with their transformations to achieve equivariance. We propose a new model of unsupervised learning based on sparse coding that can learn object features jointly with their affine transformations directly from images. Results based on learning from natural images indicate that our approach matches the reconstruction quality of traditional sparse coding but with significantly fewer degrees of freedom while simultaneously learning transformations from data. These results open the door to scaling up unsupervised learning to allow deep feature+transformation learning in a manner consistent with the ventral+dorsal stream architecture of the primate visual cortex. 1 INTRODUCTION A challenging problem in computer vision is the reliable recognition of objects under a wide range of transformations. Approaches such as deep learning that have achieved success in recent years usually require large amounts of labeled data, whereas the human brain has evolved to solve the problem using an almost unsupervised approach to learning object representations. During early development, the brain builds an internal representation of objects from unlabeled images that can be used in a wide range of tasks. Much of the complexity in learning efficient and general-purpose representations comes from the fact that objects can appear in different poses, at different scales, locations, orientations and lighting conditions. Models have to account for these transformed versions of objects and their features. Current successful approaches to recognition use pooling to allow limited invariance to two-dimensional translations (Ranzato et al. (2007)). At the same time, pooling discards information about the location of the detected features. This can be problematic because scaling to large numbers of objects requires modeling objects in terms of parts and their relative pose, requiring the pose information to be retained. Previous unsupervised learning techniques such as sparse coding (Olshausen & Field (1997)) can learn features similar to the ones in the visual cortex, but these models have to explicitly learn large numbers of transformed versions of the same feature and, as such, quickly succumb to combinatorial explosion, preventing hierarchical learning. Other approaches focus on computing invariant object signatures (Anselmi et al. (2013; 2016)), but are completely oblivious to pose information. Ideally, we want a model that allows object features and their relative transformations to be simultaneously learned, endowing itself with a combinatorial explanatory capacity by being able to apply learned object features with object-specific transformations across large numbers of objects. 
The goal of modeling transformations in images is two-fold: (a) to facilitate the learning of pose-invariant sparse feature representations, and (b) to allow the use of pose information of object features in object representation and recognition. We propose a new model of sparse coding called transformational sparse coding that exploits a tree structure to account for large affine transformations. We apply our model to natural images. We show that our model can extract pose information from the data while matching the reconstruction quality of traditional sparse coding with significantly fewer degrees of freedom. Our approach to unsupervised learning is consistent with the concept of “capsules” first introduced by Hinton et al. (2011), and more generally, with the dorsal-ventral (features+transformations) architecture observed in the primate visual cortex. 2 TRANSFORMATIONAL SPARSE CODING 2.1 TRANSFORMATION MODEL Sparse coding (Olshausen & Field (1997)) models each image \( I \) as a sparse combination of features: \[ I \simeq Fw \quad \text{s.t. } w \text{ is sparse} \] Sparsity is usually enforced by an appropriate penalty. A typical choice is \( S_1(w) = \|w\|_1 \). We can enhance sparse coding with affine transformations by transforming features before combining them. The vectorized input image \( I \) is then modeled as: \[ I = \sum_{k=1}^K w_k T(x_k) F_k \] where \( w_k \) and \( F_k \) denote the \( k \)-th image-specific weight and the \( k \)-th feature respectively, and \( T(x_k) \) is a feature- and image-specific transformation. In modeling image transformations we follow the approach of Rao & Ruderman (1999) and Miao & Rao (2007). We consider 2D general affine transformations. These include rigid motions such as vertical and horizontal translations and rotations, as well as scaling, parallel hyperbolic deformations along the \( X/Y \) axis and hyperbolic deformations along the diagonals. A discussion of why these are good candidates for inclusion in a model of visual perception can be found in Dodwell (1983). Figure 5 in Appendix A shows the effects of each transformation. Any subset of these transformations forms a Lie group with the corresponding number of dimensions (6 for the full set). Any transformation in this group can be expressed as the matrix exponential of a weighted combination of matrices (the group generators) that describe the behaviour of infinitesimal transformations around the identity: \[ T(x) = e^{\sum_j x_j G_j} \] For images of \( M \) pixels, \( T(x) \) is a matrix of size \( M \times M \). Note that the generator matrices and the features used are common across all images. The feature weights and transformation parameters can be inferred (and the features learned) by gradient descent on the regularized MSE objective: \[ L(w, x, F) = \frac{1}{N} \sum_{i=1}^N \left\| I_i - \sum_{k=1}^K w_{ik} T(x_{ik}) F_k \right\|_2^2 + \lambda_w S_1(w) + \lambda_F \|F\|_2^2 \] Although this model ties sparse coding with transformations elegantly, learning large transformations with it is intractable. The error surface of the loss function is highly non-convex with many shallow local minima. Figures 1(a), 1(b), and 1(c) show the surface of \( L \) as a function of horizontal and vertical translation, horizontal translation and rotation, and vertical translation and rotation parameters. The model tends to settle for small transformations around the identity. Given the number of parameters that we need to maintain, a random restart approach would be infeasible. 
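To illustrate the exponential parameterization numerically, here is a minimal sketch of applying \( T(x) = e^{\sum_j x_j G_j} \) to a vectorized patch. It is our own illustration: the finite-difference translation generator is a hypothetical stand-in (the paper computes generators with sinc interpolation, as noted in Appendix A), and the patch size is arbitrary.

```python
import numpy as np
from scipy.linalg import expm

P = 8              # patch side; a vectorized image has M = P * P pixels
M = P * P

def horizontal_translation_generator(P):
    """Central-difference approximation of the horizontal-derivative
    generator (illustrative; the paper builds generators from sinc
    interpolation instead)."""
    G = np.zeros((P * P, P * P))
    for r in range(P):
        for c in range(P):
            i = r * P + c
            G[i, r * P + min(c + 1, P - 1)] += 0.5
            G[i, r * P + max(c - 1, 0)] -= 0.5
    return G

def T(x, generators):
    """T(x) = expm(sum_j x_j G_j), acting on vectorized M-pixel images."""
    A = sum(x_j * G_j for x_j, G_j in zip(x, generators))
    return expm(A)

patch = np.random.rand(M)            # a vectorized 8 x 8 patch or feature
moved = T([2.0], [horizontal_translation_generator(P)]) @ patch
```

Here `moved` is (approximately) the patch shifted along the horizontal axis; larger parameter values give larger transformations without changing the number of stored parameters.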
2.2 TRANSFORMATION TREES We introduce Transformational Sparse Coding Trees to circumvent this problem using hierarchies of transformed features. The main idea is to gradually marginalize over an increasing range of transformations. Figure 1: Normalized reconstruction error for individual vs. batch \(8 \times 8\) natural image patches. (a),(b),(c) show the surface of the reconstruction error for horizontal and vertical translations, horizontal translations and rotation, and vertical translations and rotations for an individual data point and feature. (d),(e),(f) show the same, averaged over a batch of 2000 data points. The error is normalized between 0 and 1 for comparison. The global minimum in the range is marked in red. In the batch case, averaging makes the error surface smoother and learning easier. Each node in the tree represents a feature derived as a transformed version of its parent, with the root being the template of the feature. The leaves are equivalent to a set of sparse basis features and are combined to reconstruct the input as described above. A version of the model using a forest of trees of depth one (flat trees) is given by: \[ I \simeq \sum_{v=1}^{V} \sum_{b \in ch(v)} w_b U_b \] where \(U_b = T(x_{v \rightarrow b}) F_v\) and \(ch(v)\) denotes the children of root \(v\). The feature \(U_b\) is a leaf, derived from the root feature \(F_v\) via the fixed (across all data-points) transformation \(T(x_{v \rightarrow b})\). Deeper trees can be built accordingly (Section 3.3). A small example of a tree learned from natural image patches is shown in Figure 2. There are multiple advantages to such a hierarchical organization of sparse features. Some transformations are more common in data than others. Each path in the tree corresponds to a transformation that is common across images. Such a path can be viewed as a “transformation feature” learned from the data. Each additional node in the tree “costs” a fixed set of new parameters equal in size to the dimensions of the underlying Lie group (six in our case). At the same time the node contributes a whole new feature to the sparse code. Averaging over many data points smooths the surface of the error function and makes larger transformations more accessible to optimization. Figures 1(d), 1(e), and 1(f) show the error surface averaged over a batch of 2000 patches. For every leaf that is activated, the root template represents the identity of the feature, and the transformation associated with the path to the root represents the pose. In other words, the tree is an equivariant representation of the feature over the parameter region defined by the set of paths to the leaves, very similar to the concept of a capsule introduced by Hinton et al. (2011). In fact, every increasing subtree corresponds to a capsule of increasing size. Figure 2: Example of a tree learned from natural image patches. The leaves correspond to rigid transformations of the root. 2.3 LEARNING The reconstruction mean squared error (MSE) for a forest of flat trees is given by: \[ L_{MSE}(w, x, F) = \frac{1}{N} \sum_{i=1}^{N} \left\| I_i - \sum_{v=1}^{V} \sum_{b \in ch(v)} w_{ib} T(x_{v \rightarrow b}) F_v \right\|_2^2 \] Increasing the feature magnitudes and decreasing the weights will result in a decrease in loss. We constrain the root features to be of unit \( \ell_2 \) norm. Consider different, transformed versions of the same root template. 
For every such version there is a set of tree parameters that compensates for the intrinsic transformation of the root and results in the same leaves. To make the solution unique we directly penalize the transformation parameter magnitudes. Since scaling and parallel deformation can also change the magnitude of the filter, we penalize them more to keep features/leaves close to unit norm. The full loss function of the model is: \[ L(w, x, F) = L_{MSE}(w, x, F) + \lambda_w S_1(w) + \sum_{j=1}^{6} \lambda_j \| X_{[j]} \|_2^2 \] s.t. \[ \forall v, \|F_v\|_2 = 1 \] where \( X_{[j]} \) is the vector of the collective parameters for generator \( G_j \). Lee et al. (2007) use an alternating optimization approach to sparse coding. First the weights are inferred using the feature-sign algorithm, and then the features are learned using a Lagrange dual approach. We use the same approach for the weights. Then we optimize the transformation parameters using gradient descent. The root features can be optimized using the analytical solution and projecting to unit norm. Computing the gradient \( \frac{\partial L}{\partial x} \) requires the derivative of the matrix exponential, which is given by the following formula (Ortiz et al. (2001)): \[ \frac{\partial e^{A(t)}}{\partial t} = \int_0^1 e^{\alpha A(t)} \frac{\partial A(t)}{\partial t} e^{(1-\alpha)A(t)} d\alpha \] The formula can be interpreted as: \[ E_{\alpha \sim U(0,1)} [D(\alpha)] \] where \( D(\alpha) = e^{\alpha A(t)} \frac{\partial A(t)}{\partial t} e^{(1-\alpha)A(t)} \). For our experiments we approximated the gradient by drawing a few samples \( \{ \hat{\alpha}_s \}_{s=1}^S \) and computing \( E_{\hat{\alpha}} [D(\hat{\alpha})] \); in practice even a single sample works well, and the computation over samples is easily parallelizable. This can be regarded as a stochastic version of the approach used by Culpepper & Olshausen (2009). Some features might get initialized near shallow local optima (i.e., close to the borders or outside the receptive field). These features eventually become under-used by the model, where a feature is considered under-used when the total number of data points using it in a batch drops close to zero. We periodically check for under-used features and re-initialize their transformation parameters. For re-initialization we select another feature in the same tree at random with probability proportional to the fraction of data points that used it in that batch. We then reset the transformation parameters at random, with small variance and centered around the chosen filter’s parameters. 3 EXPERIMENTS 3.1 LEARNING REPRESENTATIONS We apply transformational sparse coding (TSC) with forests of flat trees to natural image patches. Our approach allows us to learn features resembling those of traditional sparse coding. Apart from reconstructing the input, the model also extracts transformation parameters from the data. Figure 3 shows a reconstruction example. Figure 4 shows the root features learned from \(10 \times 10\) natural image patches using a forest of size 8 with branching factor 8, equipped with the full six-dimensional group. The forest has a total of 64 features. Figure 4(a) shows the features corresponding to the roots. Figure 4(b) shows the corresponding leaves. Each row contains features derived from the same root. More examples of learned features are shown in Figures 7, 8, 9, and 10 in Appendix C. 
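Returning to the gradient computation of Section 2.3, a minimal sketch (ours) of the sampled estimator follows; it assumes \( \partial A/\partial t \) is a single generator \( G_j \), i.e., the derivative of \( A(x) = \sum_j x_j G_j \) with respect to one parameter \( x_j \).

```python
import numpy as np
from scipy.linalg import expm

def expm_gradient(A, dA, n_samples=1, rng=None):
    """Monte Carlo estimate of d expm(A(t)) / dt, with dA = dA/dt, via
    E_{alpha ~ U(0,1)} [ expm(alpha A) dA expm((1 - alpha) A) ]."""
    if rng is None:
        rng = np.random.default_rng()
    samples = rng.uniform(0.0, 1.0, size=n_samples)
    terms = (expm(a * A) @ dA @ expm((1.0 - a) * A) for a in samples)
    return sum(terms) / n_samples
```

Each call returns a matrix that the chain rule then contracts with the reconstruction-error gradient to update the corresponding \( x_j \); the default `n_samples=1` reflects the observation above that a single sample suffices in practice.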
Figure 3: Reconstruction example. The root features are transformed and combined with different weights to reconstruct (bottom right) the \(8 \times 8\) natural image patch in the top right corner. 3.2 COMPARISON WITH SPARSE CODING Even though derived features have to be explicitly constructed for inference, the degrees of freedom of our model are significantly lower than those of traditional sparse coding. Specifically: \[ df_{TSC} = (\text{# of roots}) \times (\text{# of pixels} - 1 + \text{branching factor} \times \text{group dimension}) \] whereas: \[ df_{SC} = \text{# of features} \times (\text{# of pixels} - 1) \] Note that the group dimension is equal to 3 for rigid motions and 6 for general 2D affine transformations. We compare transformational sparse coding forests of various layouts and choices for \(\lambda_w\) with traditional sparse coding on \(10 \times 10\) natural image patches. Some transformations change the feature magnitudes and therefore the sparsity pattern of the weights. To make the comparison clearer, for each choice of layout and penalty coefficient, we run sparse coding, constraining the feature magnitudes to be equal to the average feature magnitude of our model. The results are shown in Table 1. The reconstruction error of our model is close to that of sparse coding, albeit with slightly less sparse solutions, even though it has significantly fewer degrees of freedom. Our model extracts pose information in the form of group parameters. Figure 4: Learned features for 8 trees with a branching factor of 8. (a) Features corresponding to the roots. (b) Features/Leaves: each row corresponds to leaves/transformations of the same root. Table 1: Comparison of transformational sparse coding (TSC) with sparse coding (SC) for \(10 \times 10\) natural image patches. We compare the error (MSE) and the degrees of freedom (\( df \)) over 40000 data points. “Sparsity” is the average number of non-zero weights. \( \lambda_w \) is the penalty coefficient for the weights and controls the sparseness of the solution. 
<table> <tr> <th rowspan="2">\( \lambda_w \)</th> <th rowspan="2">Layout</th> <th colspan="3">TSC</th> <th colspan="5">SC</th> </tr> <tr> <th>MSE</th> <th>Sparsity</th> <th>\( df_{TSC} \)</th> <th>MSE</th> <th>Sparsity</th> <th>\( df_{SC} \)</th> <th># of features</th> <th>\( \frac{df_{SC}}{df_{TSC}} \)</th> </tr> <tr> <td>0.4</td> <td>1 × 64</td> <td>2.13</td> <td>13.3</td> <td>447</td> <td>1.71</td> <td>12.3</td> <td>6336</td> <td>64</td> <td>14.17</td> </tr> <tr> <td>0.5</td> <td>1 × 128</td> <td>2.28</td> <td>12.1</td> <td>867</td> <td>1.96</td> <td>10.3</td> <td>12672</td> <td>128</td> <td>14.62</td> </tr> <tr> <td>0.4</td> <td>8 × 8</td> <td>1.89</td> <td>13.3</td> <td>1176</td> <td>1.72</td> <td>12.5</td> <td>6336</td> <td>64</td> <td>5.38</td> </tr> <tr> <td>0.4</td> <td>4 × 16</td> <td>1.91</td> <td>13.3</td> <td>780</td> <td>1.69</td> <td>12.3</td> <td>6336</td> <td>64</td> <td>8.12</td> </tr> <tr> <td>0.5</td> <td>8 × 8</td> <td>2.36</td> <td>10.4</td> <td>1176</td> <td>2.15</td> <td>9.9</td> <td>6336</td> <td>64</td> <td>5.38</td> </tr> <tr> <td>0.5</td> <td>4 × 16</td> <td>2.38</td> <td>11</td> <td>780</td> <td>2.12</td> <td>10.0</td> <td>6336</td> <td>64</td> <td>8.12</td> </tr> <tr> <td>0.4</td> <td>16 × 16</td> <td>1.66</td> <td>14.3</td> <td>3120</td> <td>1.56</td> <td>13.2</td> <td>25344</td> <td>256</td> <td>8.12</td> </tr> <tr> <td>0.4</td> <td>8 × 32</td> <td>1.67</td> <td>14.6</td> <td>2328</td> <td>1.56</td> <td>13.2</td> <td>25344</td> <td>256</td> <td>10.88</td> </tr> </table> 3.3 DEEPER TREES We can define deeper trees by associating a set of transformation parameters with each branch. These correspond to additive contributions to the complete transformation that yields the leaf when applied to the root: \[ I_i \simeq \sum_{v=1}^V \sum_{b \in ch(v)} w_{ib} T(x_P) F_v \] where \( x_P = \sum_{e \in \text{path}(b,v)} x_e \). Optimizing deeper trees is more demanding due to the increased number of parameters. Their advantage is that they lend structure to the model. The parameters corresponding to the subtree of an internal node tend to explore the parameter subspace close to the transformation defined by that internal node. In tasks where it is disadvantageous to marginalize completely over transformations, equivariant representations corresponding to intermediate tree layers can be used. An example of such structure is shown in Figure 6 in Appendix B. 4 RELATED WORK Sohl-Dickstein et al. (2010) present a model for fitting Lie groups to video data. Their approach only works for estimating a global transformation between consecutive video frames, and they only support transformations of a single kind (i.e., only rotations). Different such single-parameter transformations have to be chained together to produce the global one. The corresponding transformation parameters also have to be inferred and stored in memory and cannot be directly converted to parameters of a single transformation. Kokiopoulou & Frossard (2009) present an approach to optimally estimating transformations between pairs of images. They support rigid motions and isotropic scaling. Culpepper & Olshausen (2009) focus on learning the group operators and transformation parameters from pairs of images, but do not learn features from data. Our model supports all six transformations and learns object parts and their individual transformations. In contrast with those approaches, our model learns object parts jointly with their transformations within the same image. 
Our model utilizes the full, six-dimensional, general affine Lie group and captures the pose of each object part in the form of a single set of six transformation parameters. Grimes & Rao (2005) propose a bilinear model that combines sparse coding with transformations. The model accounts for global transformations that apply to the entire image region. Our model accounts for individual transformations of image parts. Rao & Ballard (1998) propose a model that captures small image transformations with Lie groups using a first-order Taylor approximation. Our model estimates larger transformations of image parts using the full exponential model. Rao & Ruderman (1999) and Miao & Rao (2007) use a first-order Taylor approximation to learn the group operators and the transformation parameters for small transformations. The work closest to ours is that of Hinton et al. (2011) on capsules. A capsule learns to recognize its template (feature) over a wide range of poses. The pose is computed by a neural network (encoder). The decoder, resembling a computer graphics engine, combines the capsule templates in different poses to reconstruct the image. Each transformational sparse coding tree can be thought of as a capsule. The template corresponds to the root. The tree learns to “recognize” transformed versions of that template. Our work arrives at the concept of a capsule from a sparse coding perspective. A major difference is that our approach allows us to reuse each feature multiple times in different, transformed versions for each data point. Gens & Domingos (2014) propose a convolutional network that captures symmetries in the data by modeling symmetry groups. Experiments with rigid motions or various affine transformations show reduced sample complexity. Cohen & Welling (2016) propose a convolutional network that can handle translations, reflections and rotations of 90 degrees. Dieleman et al. (2016) propose a network that handles translations and rotations. All of the above are supervised learning models, and apart from the first, they can handle only a limited set of transformations. Our model is completely unsupervised, extends sparse coding, and can handle all transformations given by the first-order differential equation: \[ \frac{\partial I(\theta)}{\partial \theta} = AI(\theta) \] as described by Rao & Ruderman (1999). 5 CONCLUSION In this paper, we proposed a sparse-coding-based model that learns object features jointly with their transformations, from data. Naively extending sparse coding with data-point-specific transformations makes inference intractable. We introduce a new technique that circumvents this issue by using a tree structure that represents common transformations in data. We show that our approach can learn interesting features from natural image patches with performance comparable to that of traditional sparse coding. Investigating the properties of deeper trees, learning the tree structure dynamically from the data and extending our model into a hierarchy are subjects of ongoing research. REFERENCES Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso A. Poggio. Unsupervised learning of invariant representations in hierarchical architectures. CoRR, abs/1311.4158, 2013. URL http://arxiv.org/abs/1311.4158 Fabio Anselmi, Joel Z. Leibo, Lorenzo Rosasco, Jim Mutch, Andrea Tacchetti, and Tomaso Poggio. Unsupervised learning of invariant representations. Theor. Comput. Sci., 633(C):112–121, June 2016. ISSN 0304-3975. doi: 10.1016/j.tcs.2015.06.048. 
URL http://dx.doi.org/10.1016/j.tcs.2015.06.048 Taco S. Cohen and Max Welling. Group equivariant convolutional networks. CoRR, abs/1602.07576, 2016. URL http://arxiv.org/abs/1602.07576 Benjamin Culpepper and Bruno A. Olshausen. Learning transport operators for image manifolds. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta (eds.), Advances in Neural Information Processing Systems 22, pp. 423–431. Curran Associates, Inc., 2009. URL http://papers.nips.cc/paper/3791-learning-transport-operators-for-image-manifolds.pdf Sander Dieleman, Jeffrey De Fauw, and Koray Kavukcuoglu. Exploiting cyclic symmetry in convolutional neural networks. CoRR, abs/1602.02660, 2016. URL http://arxiv.org/abs/1602.02660 Peter C. Dodwell. The Lie transformation group model of visual perception. Perception & Psychophysics, 34(1):1–16, 1983. ISSN 1532-5962. doi: 10.3758/BF03205890. URL http://dx.doi.org/10.3758/BF03205890 Robert Gens and Pedro Domingos. Deep symmetry networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems, NIPS’14, pp. 2537–2545, Cambridge, MA, USA, 2014. MIT Press. URL http://dl.acm.org/citation.cfm?id=2969033.2969110 David B. Grimes and Rajesh P. N. Rao. Bilinear sparse coding for invariant vision. Neural Comput., 17(1):47–73, January 2005. ISSN 0899-7667. doi: 10.1162/0899766052530893. URL http://dx.doi.org/10.1162/0899766052530893 Geoffrey E. Hinton, Alex Krizhevsky, and Sida D. Wang. Transforming auto-encoders. In Proceedings of the 21st International Conference on Artificial Neural Networks - Volume Part I, ICANN’11, pp. 44–51, Berlin, Heidelberg, 2011. Springer-Verlag. ISBN 978-3-642-21734-0. URL http://dl.acm.org/citation.cfm?id=2029556.2029562 E. Kokiopoulou and P. Frossard. Minimum distance between pattern transformation manifolds: Algorithm and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(7):1225–1238, July 2009. ISSN 0162-8828. doi: 10.1109/TPAMI.2008.156. Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In B. Schölkopf, J. C. Platt, and T. Hoffman (eds.), Advances in Neural Information Processing Systems 19, pp. 801–808. MIT Press, 2007. URL http://papers.nips.cc/paper/2979-efficient-sparse-coding-algorithms.pdf Xu Miao and Rajesh P. N. Rao. Learning the Lie Groups of Visual Invariance. Neural Comput., 19(10):2665–2693, October 2007. ISSN 0899-7667. doi: 10.1162/neco.2007.19.10.2665. URL http://dx.doi.org/10.1162/neco.2007.19.10.2665 Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997. M. Ortiz, R. A. Radovitzky, and E. A. Repetto. The computation of the exponential and logarithmic mappings and their first and second linearizations. International Journal for Numerical Methods in Engineering, 52:1431, December 2001. doi: 10.1002/nme.263. Marc’Aurelio Ranzato, Fu-Jie Huang, Y-Lan Boureau, and Yann LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Proc. Computer Vision and Pattern Recognition Conference (CVPR’07). IEEE Press, 2007. Rajesh P. N. Rao and Dana H. Ballard. Development of localized oriented receptive fields by learning a translation-invariant code for natural images. Network: Computation in Neural Systems, 9(2):219–234, 1998. URL http://www.tandfonline.com/doi/abs/10.1088/0954-898X_9_2_005. PMID: 9861987. Rajesh P. N. Rao and Daniel L. Ruderman. 
Learning Lie groups for invariant visual perception. In Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II, pp. 810–816, Cambridge, MA, USA, 1999. MIT Press. ISBN 0-262-11245-0. URL http://dl.acm.org/citation.cfm?id=340534.340807 Jascha Sohl-Dickstein, Jimmy C. Wang, and Bruno A. Olshausen. An unsupervised algorithm for learning Lie group transformations. CoRR, abs/1001.1027, 2010. URL http://arxiv.org/abs/1001.1027 APPENDIX A GENERATOR EFFECTS Figure 5 presents the effects of each individual transformation of the six that are supported by our model. The template is a square (Figure 5(a)). Figure 5: Effects of each individual transformation on the template (a): (b) horizontal translation, (c) vertical translation, (d) rotation, (e) scaling, (f) parallel hyperbolic deformation along the \( X/Y \) axis, (g) hyperbolic deformation along the diagonals. To compute the generators, we used the sinc interpolation function. B DEEPER TREES AND STRUCTURE Figure 6 presents an example of structure learned by deeper trees. This example consists of vertical and horizontal lines. Each image patch is either blank, contains one vertical or one horizontal line, or contains both. A patch is blank with probability \( \frac{1}{9} \), contains exactly one line with probability \( \frac{2}{9} \), or two lines with probability \( \frac{2}{9} \). Each line is then generated at one of eight positions at random. Fitting two binary trees results in some continuity in the features, whereas flat trees provide no such structure. Figure 6: Features learned for the double-line example: (a) Input, (b) features learned by a forest of two flat trees of size eight, (c) features learned by two binary trees of the same size. For (c) the leaves have been reordered with subtree permutations to reveal the order. Each subtree learns features corresponding to an area of the input. C LEARNED FEATURES Figure 7: Learned features for 16 trees with branching factor 16. Each row corresponds to leaves/transformations of the same root. Figure 8: Learned features for 8 trees with branching factor 32. Each row corresponds to leaves/transformations of the same root. Figure 9: Learned features for 4 trees with branching factor 16. Each row corresponds to leaves/transformations of the same root. Figure 10: Learned features for 1 tree with branching factor 64. All features are transformations of the same root.
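As a numerical cross-check of the degrees-of-freedom expressions in Section 3.2, the short sketch below (ours; the helper names `df_tsc` and `df_sc` are hypothetical) reproduces several \( df \) entries of Table 1 for \( 10 \times 10 \) patches.

```python
def df_tsc(n_roots, branching, n_pixels=100, group_dim=6):
    # df_TSC = roots * (pixels - 1 + branching * group dimension)
    return n_roots * (n_pixels - 1 + branching * group_dim)

def df_sc(n_features, n_pixels=100):
    # df_SC = features * (pixels - 1)
    return n_features * (n_pixels - 1)

assert df_tsc(8, 8) == 1176    # matches the 8 x 8 layout rows of Table 1
assert df_tsc(4, 16) == 780    # matches the 4 x 16 layout rows
assert df_sc(64) == 6336       # SC with 64 features on 10 x 10 patches
```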
reject
Reject
4.333333
721bb52ff53fef4ed65c1d68816293c3a1c75157
iclr
2,017
COLLABORATIVE DEEP EMBEDDING VIA DUAL NETWORKS Yilei Xiong & Dahua Lin Department of Information Engineering The Chinese University of Hong Kong {xy014,dhlin}@ie.cuhk.edu.hk Haoying Niu, Jiefeng Cheng & Zhenguo Li Noah’s Ark Lab, Huawei Technologies Co. Ltd. {niu.haoying,cheng.jiefeng,li.zhenguo}@huawei.com ABSTRACT Despite the long history of research on recommender systems, current approaches still face a number of challenges in practice, e.g. the difficulties in handling new items, the high diversity of user interests, and the noisiness and sparsity of observations. Many of such difficulties stem from the lack of expressive power to capture the complex relations between items and users. This paper presents a new method to tackle this problem, called Collaborative Deep Embedding. In this method, a pair of dual networks, one for encoding items and the other for users, are jointly trained in a collaborative fashion. Particularly, both networks produce embeddings at multiple aligned levels, which, when combined together, can accurately predict the matching between items and users. Compared to existing methods, the proposed one not only provides greater expressive power to capture complex matching relations, but also generalizes better to unseen items or users. On multiple real-world datasets, this method outperforms the state of the art. 1 INTRODUCTION What do consumers really want? – this is a question to which everyone wishes to have an answer. Over the past decade, the unprecedented growth of web services and online commercial platforms such as Amazon, Netflix, and Spotify, gives rise to a vast amount of business data, which contain valuable information about the customers. However, “data don’t speak for themselves”. To accurately predict what the customers want, one needs not only the data, but also an effective means to extract useful messages therefrom. There has been extensive study on recommender systems. Existing methods roughly fall into two categories, namely content-based filtering (Pazzani & Billsus, 2007) and collaborative filtering (Mnih & Salakhutdinov, 2008; Hu et al., 2008; Yu et al., 2009). The former focuses on extracting relevant features from the content, while the latter attempts to exploit the common interest among groups of users. In recent efforts, hybrid methods (Agarwal & Chen, 2009; Van den Oord et al., 2013) that combine both aspects have also been developed. Whereas remarkable progress has been made on this topic, the state of the art remains far from satisfactory. The key challenges lie in several aspects. First, there is a large semantic gap between the true cause of a matching and what we observe from the data. For example, what usually attracts a book consumer is the implied emotion that one has to feel between the lines instead of the occurrences of certain words. It is difficult for classical techniques to extract such deep meanings from the observations. Second, the cold-start issue, namely making predictions for unseen items or users, has not been well addressed. Many collaborative filtering methods rely on the factorization of the matching matrix. Such methods implicitly assume that all the users and items are known in advance, and thus are difficult to apply in real-world applications, especially online services. The success of deep learning brings new inspiration to this task. 
In a number of areas, including image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), and natural language understanding (Socher et al., 2011), deep learning techniques have substantially pushed forward the state of the art. The power of deep networks in capturing complex variations and bridging semantic gaps has been repeatedly shown in previous studies. However, deep models were primarily used for classification or regression, e.g. translating images to sentences. How deep networks can be used to model cross-domain relations remains an open question. In this work, we aim to explore deep neural networks for learning the matching relations across two domains, with our focus placed on the matching between items and users. Specifically, we propose a new framework called Collaborative Deep Embedding, which comprises a pair of dual networks, one for encoding items and the other for users. Each network contains multiple embedding layers that are aligned with their dual counterparts of the other network. Predictions can then be made by coupling these embeddings. Note that unlike a conventional network, the dual networks are trained on two streams of data. In this paper, we devise an algorithm that can jointly train both networks using dual mini-batches. Compared to previous methods, this method not only narrows the semantic gap through a deep modeling architecture, but also provides a natural way to generalize – new items and new users can be encoded by the trained networks, just like those present in the training stage. On a number of real-world tasks, the proposed method yields significant improvement over the current state-of-the-art. It is worth stressing that whereas our focus is on the matching between items and users, Collaborative Deep Embedding is a generic methodology, which can be readily extended to model other kinds of cross-domain relations. 2 RELATED WORK Existing methods for recommendation roughly fall into two categories: content-based methods (Pazzani & Billsus, 2007) and collaborative filtering (CF) (Mnih & Salakhutdinov, 2008; Hu et al., 2008; Yu et al., 2009). Specifically, content-based methods rely primarily on feature representation of the content, in which recommendations are often made based on feature similarity (Slaney et al., 2008). There have also been attempts to incorporate additional information, such as meta-data of users, to further improve the performance (McFee et al., 2012). In contrast, collaborative filtering exploits the interaction between users and items. A common approach to CF is to derive latent factors of both users and items through matrix factorization, and measure the degree of matching by their inner products. Previous work (Ricci et al., 2011) showed that CF methods tend to have higher recommendation accuracy than content-based methods, as they directly target the recommendation task. However, practical use of CF is often limited by the cold start problem. It is difficult to recommend items without a sufficient amount of usage history. Issues like this motivated hybrid methods (Agarwal & Chen, 2009; Van den Oord et al., 2013) that combine both aspects of information, which have shown encouraging improvements. Our exploration is also along this line. Despite the progress on both families of methods, the practical performance of the state of the art still leaves a lot to be desired. This is largely due to a limited capability to capture complex variations in interaction patterns.
Recently, deep learning (Bengio, 2009) has emerged as an important technique in machine learning. In a number of successful stories (Krizhevsky et al., 2012; Hinton et al., 2012; Socher et al., 2011), deep models have demonstrated remarkable representation power in capturing complex patterns. This power has been exploited by some recent work for recommendation. Van den Oord et al. (2013) applies deep learning to music recommendation. It uses the latent item vectors learned by CF as ground truth to train a deep network for extracting content features, obtaining considerable performance gains. However, the latent vectors for known users and items are not improved. Wang & Wang (2014) proposed an extension to this method, which concatenates both the CF features and the deep features, resulting in a slight improvement. Wang & Blei (2011) showed that CF and topic modeling, when combined, can benefit each other. Inspired by this, Wang et al. (2015) proposed Collaborative Deep Learning (CDL), which incorporates CF and deep feature learning with a combined objective function. This work represents the latest advances in recommendation methods. Yet, its performance is still limited by several issues, e.g. the difficulties in balancing diversified objectives and the lack of effective methods for user encoding. An important aspect that distinguishes our work from CDL and other previous methods is that it encodes both items and users through a pair of deep networks that are jointly trained, which substantially enhances the representation power on both sides. Moreover, the objective function of our learning framework directly targets the recommendation accuracy, which also leads to better performance. 3 COLLABORATIVE DEEP EMBEDDING At the heart of a recommender system is a matching model, namely, a model that can predict whether a given item matches the interest of a given user. Generally, this can be formalized as follows. Suppose there are m users and n items, respectively indexed by i and j. Items are usually associated with inherent features, e.g. the descriptions or contents. Here, we use \( \mathbf{x}_j \) to denote the observed features of the j-th item. However, inherent information for users is generally very limited and often irrelevant. Hence, in most cases, users are primarily characterized by their history, i.e. the items they have purchased or rated. Specifically, the user history can be partly captured by a matching matrix \( \mathbf{R} \in \{0,1\}^{m \times n} \), where \( \mathbf{R}(i,j) = 1 \) indicates that the i-th user purchased the j-th item and gave a positive rating. Note that \( \mathbf{R} \) is often an incomplete reflection of the user interest – it is not uncommon that a user does not purchase or rate an item that he/she likes. 3.1 DUAL EMBEDDING To motivate our approach, we begin with a brief revisit of collaborative filtering (CF), which is widely adopted in practical recommender systems. The basic idea of CF is to derive vector representations for both users and items by factorizing the matching matrix \( \mathbf{R} \). A representative formulation in this family is the Weighted Matrix Factorization (WMF) (Hu et al., 2008), which adopts an objective function as follows: \[ \sum_i \sum_j c_{ij} (\mathbf{R}_{ij} - \mathbf{u}_i^T \mathbf{v}_j)^2 + \lambda_u \sum_i \| \mathbf{u}_i \|_2^2 + \lambda_v \sum_j \| \mathbf{v}_j \|_2^2.
\] Here, \( \mathbf{u}_i \) and \( \mathbf{v}_j \) denote the vector representations of the i-th user and the j-th item, \( c_{ij} \) the confidence coefficient of an observed entry, and \( \lambda_u, \lambda_v \) the regularization coefficients. Underlying such methods lies a common assumption, namely, all users and items must be known a priori. As a result, they will face fundamental difficulties when handling new items and new users. Encoding Networks. In this work, we aim to move beyond this limitation by exploring an alternative approach. Instead of pursuing the embeddings of a given set of items and users, our approach jointly learns a pair of encoding networks, respectively for items and users. Compared to CF, the key advantage of this approach is that it is generalizable by nature. When new items or new users arrive, their vector embeddings can be readily derived using the learned encoders. Generally, the items can be encoded based on their own inherent features, using, for example, an auto-encoder. The key question here, however, is how to encode users, which, as mentioned, have no inherent features. Again, we revisit conventional CF methods such as WMF and find that in these methods, the user representations can be expressed as: \[ \mathbf{u}_i = \arg\min_{\mathbf{u}} \sum_j c_{ij} (\mathbf{R}_{ij} - \mathbf{u}^T \mathbf{v}_j)^2 + \lambda_u \| \mathbf{u} \|^2 = (\mathbf{V} \mathbf{C}_i \mathbf{V}^T + \lambda_u \mathbf{I})^{-1} \mathbf{V} \mathbf{C}_i \mathbf{r}_i. \] Here, \( \mathbf{V} = [\mathbf{v}_1, \ldots, \mathbf{v}_n] \) is a matrix comprised of all item embeddings, one column per item; \( \mathbf{r}_i \) is the i-th row of \( \mathbf{R} \) treated as a column vector, which represents the history of the i-th user; and \( \mathbf{C}_i = \mathrm{diag}(c_{i1}, \ldots, c_{in}) \) captures the confidence weights. The analysis above reveals that \( \mathbf{u}_i \) is a linear transform of \( \mathbf{r}_i \), as \( \mathbf{u}_i = \mathbf{W}_u \mathbf{r}_i \), where the transform matrix \( \mathbf{W}_u \) depends on the item embeddings \( \mathbf{V} \). This motivates our idea of user encoding, that is, to use a deep neural network instead of the linear transform above, as \[ \mathbf{u}_i = g(\mathbf{r}_i; \mathbf{W}_u), \] where \( g \) denotes a nonlinear transform based on a deep network with parameters \( \mathbf{W}_u \). As we will show in our experiments, by drawing on the expressive power of deep neural networks, the proposed way of user encoding can substantially improve the prediction accuracy. Figure 1: This figure shows three different designs of the dual networks. Here, \( \odot \) indicates dot product and \( \oplus \) indicates summation. (a) The basic design adopts the MLP structure for each network. (b) The multi-level design integrates the dot products of embeddings at different levels to produce the prediction. (c) In the branching design, the embeddings (except those of the top level) used in the dot products are produced by transform branches. In this way, the main abstraction paths are not directly distorted. Overall Formulation. By coupling an item-network denoted by \( f(\mathbf{x}_j; \mathbf{W}_v) \) and a user-network \( g \) as introduced above, we can predict the matching of any given user-item pair based on the inner product of their embeddings, as \( \langle f(\mathbf{x}; \mathbf{W}_v), g(\mathbf{r}; \mathbf{W}_u) \rangle \).
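To make the closed-form expression above concrete, the following NumPy sketch computes a WMF user embedding; the sizes, the sparsity of the history, and the confidence scheme \( c_{ij} = 1 + \alpha \mathbf{R}_{ij} \) of Hu et al. (2008) are illustrative assumptions, not values from the paper. The deep user-network \( g \) simply replaces this linear solve with a learned nonlinear encoder.

```python
import numpy as np

# Illustrative sizes (assumptions): d-dimensional embeddings, n items.
d, n, lam_u, alpha = 50, 1000, 0.01, 40.0
rng = np.random.default_rng(0)

V = rng.normal(size=(d, n))                  # item embeddings, one column per item
r_i = (rng.random(n) < 0.01).astype(float)   # binary history vector of user i
c_i = 1.0 + alpha * r_i                      # assumed confidence weights (Hu et al., 2008)

# Closed-form WMF user embedding:
#   u_i = (V C_i V^T + lam_u I)^{-1} V C_i r_i
A = (V * c_i) @ V.T + lam_u * np.eye(d)      # V * c_i scales column j by c_i[j], i.e. V C_i
u_i = np.linalg.solve(A, (V * c_i) @ r_i)

# Collaborative Deep Embedding replaces this linear map r_i -> u_i
# with a learned nonlinear encoder: u_i = g(r_i; W_u).
```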
The inputs to these networks include \( \mathbf{x} \), the inherent feature of the given item, and \( \mathbf{r} \), the history of the given user on a set of reference items. With both encoding networks, we formulate the learning objective as follows: \[ \min_{\mathbf{W}_v, \mathbf{W}_u} \sum_i \sum_j c_{ij} \left( \mathbf{R}_{ij} - \langle f(\mathbf{x}_j; \mathbf{W}_v), g(\mathbf{r}_i; \mathbf{W}_u) \rangle \right)^2. \] Here, \( \mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_n] \) denotes the input features of all reference items. This formulation differs from previous ones in two key aspects: (1) Both users and items are encoded using deep neural networks. The learning objective above encourages the cooperation of both networks such that the coupling of both sides yields the highest accuracy. Hence, the user-network parameters \( \mathbf{W}_u \) depend on the item embeddings \( \mathbf{V} \), and likewise for the item-network. (2) The learning task is to estimate the parameters of the encoding networks. Once the encoding networks are learned, they encode users and items in a uniform way, no matter whether they are seen during training. In other words, new users and new items are no longer second-class citizens – they are encoded in exactly the same way as those in the training set. Comparison with CDL. The Collaborative Deep Learning (CDL) recently proposed by Wang et al. (2015) was another attempt to tackle the cold-start issue. This method leverages the item features by aligning the item encoder with the embeddings resulting from matrix factorization. In particular, the objective function is given as follows: \[ \sum_{ij} c_{ij} (\mathbf{R}_{ij} - \mathbf{u}_i^T \mathbf{v}_j)^2 + \lambda_v \sum_j \| \mathbf{v}_j - f_e(\tilde{\mathbf{x}}_j, \theta) \|^2 + \lambda_n \sum_j \| \mathbf{x}_j - f_r(\tilde{\mathbf{x}}_j, \theta) \|^2 + \lambda_u \sum_i \| \mathbf{u}_i \|^2 + r(\theta). \] Here, a Stacked Denoising Autoencoder (SDAE) (Vincent et al., 2010) with parameter \( \theta \) is used to encode the items, based on \( \{ \tilde{\mathbf{x}}_j \} \), noisy versions of their features. Compared to our formulation, CDL has several limitations: (1) The objective is to balance the SDAE reconstruction error and the matching accuracy, which does not necessarily lead to improved recommendation. Tuning this balance also turns out to be tricky. (2) Only items are encoded, while the representations of the users are still obtained by matrix factorization. As a result, its expressive power in capturing user interest remains limited. (3) There are inconsistencies between known items and new ones – the embedding of known items results from a tradeoff between the matching accuracy and the fidelity to SDAE features, while the embedding of new items is purely based on SDAE encoding. 3.2 NETWORK ARCHITECTURE DESIGNS Our model consists of two networks, namely the item-network \( f \) and the user-network \( g \). We went through a progressive procedure in designing their architectures, obtaining three different designs, from the basic design, through the multi-level design, to the multi-level branching design. Each new design was motivated by the observation of certain limitations in the previous version. The basic design, as shown in Figure 1(a), adopts the multilayer perceptron as the basic architecture, using tanh as the nonlinear activation function between layers\(^1\).
The top layer of the item-network produces a vector \( f(\mathbf{x}_j; \mathbf{W}_v) \) for each item, while that of the user-network produces a dual vector \( g(\mathbf{r}_i; \mathbf{W}_u) \) for each user. During training, the loss layer takes their inner products and compares them with the ground-truth \( \mathbf{R}(i, j) \). Each layer in these networks generates a vector representation. We observe that representations from different layers are complementary. Representations from lower layers tend to be closer to the inputs and preserve more information, while those from higher layers focus on deeper semantics. The representations from these levels have their respective values, as different users tend to focus on different aspects of an item. Following this intuition, we arrive at a multi-level design, as shown in Figure 1(b). In this design, dot products between dual embeddings at corresponding levels are aggregated to produce the final prediction. There is an issue with the multi-level design: the output of each intermediate layer actually plays two roles. On one hand, it is the input to the next layer for further abstraction; on the other hand, it also serves as a facet to be matched with the other side. These two roles require different properties of the representations. Particularly, for the former role, the representation needs to preserve more information for higher-level abstraction; while for the latter, those parts related to the current level of matching need to be emphasized. To address this issue, we design a multi-level branching architecture, as shown in Figure 1(c). In this design, a matching branch is introduced to transform the representation at each level to a form that is more suitable for matching. This can also be considered as learning an alternative metric to measure the degree of matching between the embeddings. As we will show in our experiments, this design can considerably improve the prediction accuracy. 4 TRAINING WITH DUAL MINI-BATCHES A distinctive aspect of our training algorithm is the use of dual mini-batches. Specifically, in each iteration, \( B_v \) items and \( B_u \) users are selected. In addition to the item features and user histories, the corresponding part of the matching matrix \( \mathbf{R} \) will also be loaded and fed to the network. Here, the two batch sizes \( B_v \) and \( B_u \) can be different, and they should be chosen according to the sparsity of the matching matrix \( \mathbf{R} \), such that each dual mini-batch can cover both positive and zero ratings. During the backward pass, the loss layer that compares the predictions with the ground-truth matchings will produce two sets of gradients, respectively for items and users. These gradients are then back-propagated along the respective networks. Note that when the multi-level designs (both with and without branching) are used, each intermediate layer will receive gradients from two sources – those from the upper layers and those from the dual network (via the dot-product layer). Hence, the training of one network would impact that of the other. The entire training procedure consists of two stages: pre-training and optimization. In the pre-training stage, we initialize the item-network with unsupervised training (Vincent et al., 2010) and the user-network randomly. The unsupervised training of the item-network allows it to capture the feature statistics. Then both networks will be jointly refined in a layer-by-layer fashion.
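Before detailing the layer-wise refinement, the following minimal PyTorch sketch illustrates the multi-level branching encoders and the weighted squared loss on one dual mini-batch; the layer dimensions, the linear matching branches, and the confidence block are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BranchingEncoder(nn.Module):
    """One side of the dual networks: an MLP trunk with tanh activations
    and a matching branch at each level except the top, following the
    multi-level branching design of Figure 1(c)."""
    def __init__(self, in_dim, dims=(200, 200, 50)):
        super().__init__()
        self.trunk = nn.ModuleList()
        self.branches = nn.ModuleList()
        prev = in_dim
        for k, d in enumerate(dims):
            self.trunk.append(nn.Linear(prev, d))
            # Top-level embedding is matched directly; lower levels pass
            # through a transform branch first.
            self.branches.append(nn.Identity() if k == len(dims) - 1
                                 else nn.Linear(d, d))
            prev = d

    def forward(self, x):
        levels = []
        h = x
        for layer, branch in zip(self.trunk, self.branches):
            h = torch.tanh(layer(h))
            levels.append(branch(h))  # level-wise matching embedding
        return levels

def dual_minibatch_loss(item_net, user_net, X_b, R_rows, R_block, C_block):
    """Weighted squared loss over a dual mini-batch of B_v items
    (features X_b) and B_u users (history rows R_rows); R_block and
    C_block hold the corresponding ratings and confidence weights."""
    f_levels = item_net(X_b)     # each entry: (B_v, d_l)
    g_levels = user_net(R_rows)  # each entry: (B_u, d_l)
    # The sum of level-wise dot products gives the predicted matching.
    pred = sum(g @ f.t() for f, g in zip(f_levels, g_levels))  # (B_u, B_v)
    return (C_block * (R_block - pred) ** 2).sum()
```

With this setup, the gradient flowing into each intermediate layer comes both from the next trunk layer and from its own dot-product term, matching the two gradient sources described above.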
Particularly, we first tune the one-level networks, taking the dot products of their outputs as the predictions. Subsequently, we stack the second layers on top and refine them in a similar way. Empirically, we found that this layer-wise refinement scheme provides better initialization. In the optimization stage, we adopt the SGD algorithm with momentum and use the dual mini-batch scheme presented above. In this stage, the training is conducted in epochs. Each epoch, through multiple iterations, traverses the whole matching matrix \( \mathbf{R} \) without repetition. The order of the mini-batches is arbitrary and is shuffled at the beginning of each epoch. Additional tricks such as dropout and batch normalization are employed to further improve the performance. \footnotetext{1The choice of tanh as the activation function is based on empirical comparison.} 5 EXPERIMENTS We tested our method on three real-world datasets with different kinds of items and matching relations: 1. CiteULike, constructed by Wang & Blei (2011), provides a list of researchers and the papers that they are interested in. Each paper comes with a text document that comprises both the title and the abstract. In total, it contains 5,551 researchers (as users) and 16,980 papers (as items) with 0.22% density. The task is to predict the papers that a researcher would like. 2. MovieLens+Posters is constructed based on the MovieLens 20M Dataset (Harper & Konstan, 2016), which provides about 20M user ratings on movies. For each movie, we collect a movie poster from TMDb and extract a visual feature therefrom using a convolutional neural network (Szegedy et al., 2016) as the item feature. Removing all those movies without posters and the users with fewer than 10 ratings, we obtain a dataset that contains 76,531 users and 14,101 items with 0.24% density. In this dataset, all ratings of 5 are considered positive matchings. 3. Ciao is organized by Tang et al. (2012) from a product review site, where each product comes with a series of reviews. The reviews for each product are concatenated to serve as the item content. We removed the items rated by fewer than 5 users and the users with fewer than 10 ratings. This results in a dataset with 4,663 users and 12,083 items with 0.25% density. All ratings of 40 or above (ratings range from 0 to 50) are regarded as positive matchings. 5.1 EVALUATION The performance of a recommender system can be assessed from different perspectives. In this paper, we follow Wang & Blei (2011) and perform the evaluation from the retrieval perspective. Specifically, a fraction of rating entries are omitted in the training phase, and the algorithms being tested are used to predict those entries. As pointed out by Wang & Blei (2011), since the ratings are implicit feedback (Hu et al., 2008) – some positive matchings are not reflected in the ratings – recall is more suitable than precision for measuring the performance. In particular, we use Recall@M averaged over all users as the performance metric. Here, for a certain user, Recall@M is defined as follows: \[ \text{Recall}@M = \frac{\text{the number of items a user likes in top } M \text{ recommendations}}{\text{the total number of items the user likes}} \] In our experiments, the value of \( M \) varies from 50 to 200. Following Wang & Blei (2011), we consider two tasks, in-matrix prediction and out-matrix prediction. Specifically, we divide all items into two disjoint parts, known and unknown, by the ratio of 9 to 1.
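The per-user Recall@M defined above can be computed directly; a small sketch (variable names are illustrative):

```python
import numpy as np

def recall_at_m(scores, liked, m):
    """Recall@M for one user: `scores` are predicted scores over all
    candidate items, `liked` is the set of indices of held-out items
    the user likes."""
    if not liked:
        return None  # user has no held-out positives
    top_m = np.argsort(scores)[::-1][:m]  # indices of the top-M recommendations
    hits = len(set(top_m.tolist()) & liked)
    return hits / len(liked)

# The reported metric averages recall_at_m over all users,
# for M in {50, 100, 200}.
```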
The in-matrix prediction task only considers known items. For this task, all rating entries are split into three disjoint sets: training, validation and testing, by the ratio 3 : 1 : 1. It is ensured that all items in the validation and testing sets have appeared in the training stage (only part of their ratings was omitted). The out-matrix prediction task is to make predictions for the items that are completely unseen in the training phase. This task tests generalization performance and the capability of handling the cold-start issue. 5.2 COMPARISON WITH OTHER METHODS We compared our method, which we refer to as DualNet, with two representative methods in previous work: (1) Weighted Matrix Factorization (WMF) (Hu et al., 2008), a representative method for collaborative filtering (CF), and (2) Collaborative Deep Learning (CDL) (Wang et al., 2015), a hybrid method that combines deep encoding of the items and CF, which represents the latest advances in recommendation techniques. On each dataset, we chose the design parameters for each method via grid search. The parameter combinations that attain the best performance on the validation set are used. For our DualNet method, we adopt a three-level branching configuration, where the embedding dimensions of each network, from bottom to top, are set to 200, 200, 50. For WMF, the latent dimension is set to 300 on CDL and 450 on other datasets. For CDL, the best performance is attained when the structure of SDAE is configured to be (2000, 1000, 300), with dropout ratio 0.1. Other design parameters of CDL are set as \( a = 1.0, b = 0.01, \lambda_u = 1, \lambda_v = 10, \lambda_n = 1000, \lambda_w = 0.0005 \). Table 1: Comparison of performance on three datasets. Performance is measured with the \( Recall@M \) metric. We report results where \( M \) is set to 50, 100, and 200. <table> <tr> <th rowspan="2"></th> <th colspan="3">CiteULike1</th> <th colspan="3">CiteULike2</th> </tr> <tr> <th>50</th> <th>100</th> <th>200</th> <th>50</th> <th>100</th> <th>200</th> </tr> <tr> <td>WMF</td> <td>22.14%</td> <td>32.58%</td> <td>43.65%</td> <td>40.45%</td> <td>50.28%</td> <td>59.95%</td> </tr> <tr> <td>CDL</td> <td>25.02%</td> <td>36.57%</td> <td>48.32%</td> <td>39.49%</td> <td>52.02%</td> <td>64.41%</td> </tr> <tr> <td>DualNet</td> <td><b>30.41%</b></td> <td><b>41.71%</b></td> <td><b>52.24%</b></td> <td><b>41.26%</b></td> <td><b>53.80%</b></td> <td><b>65.21%</b></td> </tr> <tr> <th rowspan="2"></th> <th colspan="3">MovieLens</th> <th colspan="3">Ciao</th> </tr> <tr> <th>50</th> <th>100</th> <th>200</th> <th>50</th> <th>100</th> <th>200</th> </tr> <tr> <td>WMF</td> <td>37.14%</td> <td>48.81%</td> <td>60.25%</td> <td>14.46%</td> <td>19.66%</td> <td>26.22%</td> </tr> <tr> <td>CDL</td> <td>38.11%</td> <td>49.73%</td> <td>61.00%</td> <td>17.90%</td> <td>24.55%</td> <td><b>32.53%</b></td> </tr> <tr> <td>DualNet</td> <td><b>44.95%</b></td> <td><b>59.15%</b></td> <td><b>72.56%</b></td> <td><b>17.94%</b></td> <td><b>24.58%</b></td> <td><b>32.52%</b></td> </tr> </table> Table 2: Comparison for out-matrix predictions on CiteULike <table> <tr> <th></th> <th>Recall@50</th> <th>Recall@100</th> <th>Recall@200</th> </tr> <tr> <td>CDL</td> <td>32.18%</td> <td>43.90%</td> <td>56.36%</td> </tr> <tr> <td>DualNet</td> <td><b>47.51%</b></td> <td><b>56.59%</b></td> <td><b>66.36%</b></td> </tr> </table> Note that on *CiteULike*, there are two ways to split the data.
One is the scheme in (Wang et al., 2015), and the other is the scheme in (Wang & Blei, 2011), which is the one presented in the previous section. Note that in the former scheme, a fixed number of ratings from each user are selected for training. This may result in some testing items being missing from the training set. To provide a complete comparison with prior work, we use both schemes in our experiments, which are respectively denoted as *CiteULike1* and *CiteULike2*. Table 1 compares the performance of *WMF*, *CDL*, and *DualNet* on all three datasets (four data splitting settings). From the results, we observed: (1) Our proposed *DualNet* method outperforms both *WMF* and *CDL* on all datasets. On certain datasets, the performance gains are substantial. For example, on *MovieLens*, we obtained average recalls of 44.95%, 59.15%, and 72.56% respectively when \( M = 50, 100, 200 \). Compared to what CDL achieves (38.11%, 49.73%, and 61.00%), the relative gains are around 18%. On other datasets, the gains are also considerable. (2) The performance gains vary significantly across different datasets, as they are closely related to the *relevance* of the item features. Particularly, when the item features are pertinent to the user interest, we may see remarkable improvement when those features are incorporated; otherwise, the performance gains would be relatively smaller. 5.3 DETAILED STUDY We conducted additional experiments on *CiteULike* to further study the proposed algorithm. In this study, we investigate the performance of out-matrix prediction, the impact of various modeling choices, *e.g.* multi-level branching, as well as the influence of training tactics. **Out-matrix prediction.** As mentioned, the out-matrix prediction task is to examine an algorithm’s capability of handling new items, *i.e.* those unseen in the training stage. For this task, we compared *CDL* and *DualNet* on the *CiteULike* dataset. *WMF* is not included here as it is not able to handle new items. Table 2 shows the results. It can be clearly seen that *DualNet* outperforms *CDL* by a notable margin. For example, *Recall@50* increases from 32.18% to 47.51% – the relative gain is 47.6%, a remarkable improvement. The strong generalization performance demonstrated here is, to a large extent, ascribed to our basic formulation, where the encoding networks uniformly encode both known and new items. **Multi-level branching.** We compared three different designs presented in Section 3: *basic design, multi-level design*, and *multi-level branching design*. From the results shown in Table 3, we can observe limited improvement of the multi-level design over the basic one. More significant performance Table 3: Comparison of different network architecture designs on CiteULike <table> <tr> <th></th> <th>Recall@10</th> <th>Recall@50</th> <th>Recall@100</th> </tr> <tr> <td>basic</td> <td>15.86%</td> <td>38.86%</td> <td>51.03%</td> </tr> <tr> <td>multi-level</td> <td>16.89%</td> <td>39.92%</td> <td>51.26%</td> </tr> <tr> <td>multi-level branching</td> <td>17.43%</td> <td>40.31%</td> <td>51.78%</td> </tr> </table> gains are observed when the *branching design* is introduced. This shows that the *branches* contribute substantially to the overall performance. **Noise injection.** Sometimes we noticed overfitting during training, *i.e.* the validation performance worsens while the training loss is still decreasing. To tackle this issue, we inject noise into the inputs, *i.e.* set a fraction of input entries to zero.
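A minimal sketch of this input corruption (the corruption rate is an assumed hyperparameter):

```python
import torch

def inject_noise(x, drop_rate=0.1):
    """Zero out a random fraction of input entries, in the spirit of
    denoising autoencoder training; drop_rate is an assumed value."""
    mask = (torch.rand_like(x) >= drop_rate).to(x.dtype)
    return x * mask
```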
Generally, we observed that noise injection has little effect on *Recall@M* for in-matrix predictions when \( M < 30 \). However, it can considerably increase the recall for large \( M \) values or for *out-matrix predictions*. Particularly, on *CiteULike*, it increases in-matrix *Recall@300* from 67.3% to 71.2%, and out-matrix *Recall@50* from 38.6% to 47.5%. **Unsuccessful Tactics.** Finally, we report some tactics that we tried and found not to work. (1) Replacing the weighted Euclidean loss with *logistic loss* would lead to substantial degradation of the performance (sometimes by up to 20%). Also, when using logistic loss, we observed severe overfitting. Rendle et al. (2009) proposed Bayesian Personalized Ranking (BPR), which directly targets ranking. We tested this on CiteULike with parameters tuned to obtain the optimal performance. Our experimental results showed that its performance is similar to WMF. Particularly, the Recall@50, 100, 200 for BPR are respectively 39.11%, 49.16%, 59.96%, while those for WMF are 40.45%, 50.25%, 59.95%. (2) Motivated by the observation that positive ratings are sparse, we tried a scheme that ignores a fraction of dual mini-batches that correspond to all-zero ratings, aiming to speed up training. While this reduces the time needed to run an epoch, it takes significantly more epochs to reach the same level of performance. As a result, the overall runtime is even longer. 6 CONCLUSIONS AND FUTURE WORK This paper presented a new method for predicting the interactions between users and items, called *Collaborative Deep Embedding*. This method uses *dual networks* to encode users and items respectively. The user-network and item-network are trained jointly, in a collaborative manner, based on two streams of data. We obtained considerable performance gains over the state of the art consistently on three large datasets. The proposed method also demonstrated superior generalization performance (on out-matrix predictions). This improvement, from our perspective, is ascribed to three important reasons: (1) the expressive power of deep models for capturing the rich variations in user interests, (2) the collaborative training process that encourages closely coupled embeddings, and (3) an objective function that directly targets the prediction accuracy. We consider this work a significant step that brings the power of deep models to relational modeling. However, the space of deep relational modeling remains wide open – many questions remain to be answered. In the future, we plan to investigate more sophisticated network architectures, and extend the proposed methodology to applications that involve more than two domains. REFERENCES Deepak Agarwal and Bee-Chung Chen. Regression-based latent factor models. In *Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining*, pp. 19–28. ACM, 2009. Yoshua Bengio. Learning deep architectures for AI. *Foundations and Trends in Machine Learning*, 2(1):1–127, 2009. F Maxwell Harper and Joseph A Konstan. The movielens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19, 2016. Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012.
Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In Data Mining, 2008. ICDM’08. Eighth IEEE International Conference on, pp. 263–272. IEEE, 2008. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012. Brian McFee, Luke Barrington, and Gert Lanckriet. Learning content similarity for music recommendation. Audio, Speech, and Language Processing, IEEE Transactions on, 20(8):2207–2218, 2012. Andriy Mnih and Ruslan R Salakhutdinov. Probabilistic matrix factorization. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis (eds.), Advances in Neural Information Processing Systems 20, pp. 1257–1264. Curran Associates, Inc., 2008. URL http://papers.nips.cc/paper/3208-probabilistic-matrix-factorization.pdf. Michael J Pazzani and Daniel Billsus. Content-based recommendation systems. In The adaptive web, pp. 325–341. Springer, 2007. Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence, pp. 452–461. AUAI Press, 2009. Francesco Ricci, Lior Rokach, and Bracha Shapira. Introduction to recommender systems handbook. Springer, 2011. Malcolm Slaney, Kilian Weinberger, and William White. Learning a metric for music similarity. In International Symposium on Music Information Retrieval (ISMIR), 2008. Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th international conference on machine learning (ICML-11), pp. 129–136, 2011. Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. J. Tang, H. Gao, and H. Liu. mTrust: Discerning multi-faceted trust in a connected world. In Proceedings of the fifth ACM international conference on Web search and data mining, pp. 93–102. ACM, 2012. Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music recommendation. In Advances in Neural Information Processing Systems, pp. 2643–2651, 2013. Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371–3408, 2010. Chong Wang and David M Blei. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 448–456. ACM, 2011. Hao Wang, Naiyan Wang, and Dit-Yan Yeung. Collaborative deep learning for recommender systems. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1235–1244. ACM, 2015. Xinxi Wang and Ye Wang. Improving content-based and hybrid music recommendation using deep learning. In Proceedings of the ACM International Conference on Multimedia, pp. 627–636. ACM, 2014. Kai Yu, John Lafferty, Shenghuo Zhu, and Yihong Gong. Large-scale collaborative prediction using a nonparametric random effects model. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1185–1192. ACM, 2009.
ABSTRACT Despite the long history of research on recommender systems, current approaches still face a number of challenges in practice, e.g. the difficulties in handling new items, the high diversity of user interests, and the noisiness and sparsity of observations. Many of these difficulties stem from the lack of expressive power to capture the complex relations between items and users. This paper presents a new method to tackle this problem, called Collaborative Deep Embedding. In this method, a pair of dual networks, one for encoding items and the other for users, are jointly trained in a collaborative fashion. Particularly, both networks produce embeddings at multiple aligned levels, which, when combined, can accurately predict the matching between items and users. Compared to existing methods, the proposed one not only provides greater expressive power to capture complex matching relations, but also generalizes better to unseen items or users. On multiple real-world datasets, this method outperforms the state of the art. 1 INTRODUCTION What do consumers really want? – this is a question to which everyone wishes to have an answer. Over the past decade, the unprecedented growth of web services and online commercial platforms such as Amazon, Netflix, and Spotify has given rise to a vast amount of business data, which contains valuable information about the customers. However, “data don’t speak for themselves”. To accurately predict what the customers want, one needs not only the data, but also an effective means to extract useful messages therefrom. There has been extensive study on recommender systems. Existing methods roughly fall into two categories, namely content-based filtering (Pazzani & Billsus, 2007) and collaborative filtering (Mnih & Salakhutdinov, 2008; Hu et al., 2008; Yu et al., 2009). The former focuses on extracting relevant features from the content, while the latter attempts to exploit the common interest among groups of users. In recent efforts, hybrid methods (Agarwal & Chen, 2009; Van den Oord et al., 2013) that combine both aspects have also been developed. Whereas remarkable progress has been made on this topic, the state of the art remains far from satisfactory. The key challenges lie in several aspects. First, there is a large semantic gap between the true cause of a matching and what we observe from the data. For example, what usually attracts a book consumer is the emotion implied between the lines rather than the occurrences of certain words. It is difficult for classical techniques to extract such deep meanings from the observations. Second, the cold-start issue, namely making predictions for unseen items or users, has not been well addressed. Many collaborative filtering methods rely on the factorization of the matching matrix. Such methods implicitly assume that all the users and items are known in advance, and thus are difficult to apply in real-world settings, especially online services. The success of deep learning brings new inspiration to this task. In a number of areas, including image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), and natural language understanding (Socher et al., 2011), deep learning techniques have substantially pushed forward the state of the art. The power of deep networks in capturing complex variations and bridging semantic gaps has been repeatedly shown in previous studies. However, deep models were primarily used for classification or regression, e.g.
translating images to sentences. How deep networks can be used to model cross-domain relations remains an open question. In this work, we aim to explore deep neural networks for learning the matching relations across two domains, with our focus placed on the matching between items and users. Specifically, we propose a new framework called Collaborative Deep Embedding, which comprises a pair of dual networks, one for encoding items and the other for users. Each network contains multiple embedding layers that are aligned with their dual counterparts of the other network. Predictions can then be made by coupling these embeddings. Note that unlike a conventional network, the dual networks are trained on two streams of data. In this paper, we devise an algorithm that can jointly train both networks using dual mini-batches. Compared to previous methods, this method not only narrows the semantic gap through a deep modeling architecture, but also provides a natural way to generalize – new items and new users can be encoded by the trained networks, just like those present in the training stage. On a number of real-world tasks, the proposed method yields significant improvement over the current state-of-the-art. It is worth stressing that whereas our focus is on the matching between items and users, Collaborative Deep Embedding is a generic methodology, which can be readily extended to model other kinds of cross-domain relations. 2 RELATED WORK Existing methods for recommendation roughly fall into two categories: content-based methods (Pazzani & Billsus, 2007) and collaborative filtering (CF) (Mnih & Salakhutdinov, 2008; Hu et al., 2008; Yu et al., 2009). Specifically, content-based methods rely primarily on feature representation of the content, in which recommendations are often made based on feature similarity (Slaney et al., 2008). There have also been attempts to incorporate additional information, such as meta-data of users, to further improve the performance (McFee et al., 2012). In contrast, collaborative filtering exploits the interaction between users and items. A common approach to CF is to derive latent factors of both users and items through matrix factorization, and measure the degree of matching by their inner products. Previous work (Ricci et al., 2011) showed that CF methods tend to have higher recommendation accuracy than content-based methods, as they directly target the recommendation task. However, practical use of CF is often limited by the cold start problem. It is difficult to recommend items without a sufficient amount of usage history. Issues like this motivated hybrid methods (Agarwal & Chen, 2009; Van den Oord et al., 2013) that combine both aspects of information, which have shown encouraging improvements. Our exploration is also along this line. Despite the progress on both families of methods, the practical performance of the state of the art still leaves a lot to be desired. This is largely due to a limited capability to capture complex variations in interaction patterns. Recently, deep learning (Bengio, 2009) has emerged as an important technique in machine learning. In a number of successful stories (Krizhevsky et al., 2012; Hinton et al., 2012; Socher et al., 2011), deep models have demonstrated remarkable representation power in capturing complex patterns. This power has been exploited by some recent work for recommendation. Van den Oord et al. (2013) applies deep learning to music recommendation.
It uses the latent item vectors learned by CF as ground truth to train a deep network for extracting content features, obtaining considerable performance gains. However, the latent vectors for known users and items are not improved. Wang & Wang (2014) proposed an extension to this method, which concatenates both the CF features and the deep features, resulting in a slight improvement. Wang & Blei (2011) showed that CF and topic modeling, when combined, can benefit each other. Inspired by this, Wang et al. (2015) proposed Collaborative Deep Learning (CDL), which incorporates CF and deep feature learning with a combined objective function. This work represents the latest advances in recommendation methods. Yet, its performance is still limited by several issues, e.g. the difficulties in balancing diversified objectives and the lack of effective methods for user encoding. An important aspect that distinguishes our work from CDL and other previous methods is that it encodes both items and users through a pair of deep networks that are jointly trained, which substantially enhances the representation power on both sides. Moreover, the objective function of our learning framework directly targets the recommendation accuracy, which also leads to better performance. 3 COLLABORATIVE DEEP EMBEDDING At the heart of a recommender system is a matching model, namely, a model that can predict whether a given item matches the interest of a given user. Generally, this can be formalized as follows. Suppose there are m users and n items, respectively indexed by i and j. Items are usually associated with inherent features, e.g. the descriptions or contents. Here, we use \( \mathbf{x}_j \) to denote the observed features of the j-th item. However, inherent information for users is generally very limited and often irrelevant. Hence, in most cases, users are primarily characterized by their history, i.e. the items they have purchased or rated. Specifically, the user history can be partly captured by a matching matrix \( \mathbf{R} \in \{0,1\}^{m \times n} \), where \( \mathbf{R}(i,j) = 1 \) indicates that the i-th user purchased the j-th item and gave a positive rating. Note that \( \mathbf{R} \) is often an incomplete reflection of the user interest – it is not uncommon that a user does not purchase or rate an item that he/she likes. 3.1 DUAL EMBEDDING To motivate our approach, we begin with a brief revisit of collaborative filtering (CF), which is widely adopted in practical recommender systems. The basic idea of CF is to derive vector representations for both users and items by factorizing the matching matrix \( \mathbf{R} \). A representative formulation in this family is the Weighted Matrix Factorization (WMF) (Hu et al., 2008), which adopts an objective function as follows: \[ \sum_i \sum_j c_{ij} (\mathbf{R}_{ij} - \mathbf{u}_i^T \mathbf{v}_j)^2 + \lambda_u \sum_i \| \mathbf{u}_i \|_2^2 + \lambda_v \sum_j \| \mathbf{v}_j \|_2^2. \] Here, \( \mathbf{u}_i \) and \( \mathbf{v}_j \) denote the vector representations of the i-th user and the j-th item, \( c_{ij} \) the confidence coefficient of an observed entry, and \( \lambda_u, \lambda_v \) the regularization coefficients. Underlying such methods lies a common assumption, namely, all users and items must be known a priori. As a result, they will face fundamental difficulties when handling new items and new users. Encoding Networks. In this work, we aim to move beyond this limitation by exploring an alternative approach.
Instead of pursuing the embeddings of a given set of items and users, our approach jointly learns a pair of encoding networks, respectively for items and users. Compared to CF, the key advantage of this approach is that it is generalizable by nature. When new items or new users arrive, their vector embeddings can be readily derived using the learned encoders. Generally, the items can be encoded based on their own inherent features, using, for example, an auto-encoder. The key question here, however, is how to encode users, which, as mentioned, have no inherent features. Again, we revisit conventional CF methods such as WMF and find that in these methods, the user representations can be expressed as: \[ \mathbf{u}_i = \arg\min_{\mathbf{u}} \sum_j c_{ij} (\mathbf{R}_{ij} - \mathbf{u}^T \mathbf{v}_j)^2 + \lambda_u \| \mathbf{u} \|^2 = (\mathbf{V} \mathbf{C}_i \mathbf{V}^T + \lambda_u \mathbf{I})^{-1} \mathbf{V} \mathbf{C}_i \mathbf{r}_i. \] Here, \( \mathbf{V} = [\mathbf{v}_1, \ldots, \mathbf{v}_n] \) is a matrix comprised of all item embeddings, one column per item; \( \mathbf{r}_i \) is the i-th row of \( \mathbf{R} \) treated as a column vector, which represents the history of the i-th user; and \( \mathbf{C}_i = \mathrm{diag}(c_{i1}, \ldots, c_{in}) \) captures the confidence weights. The analysis above reveals that \( \mathbf{u}_i \) is a linear transform of \( \mathbf{r}_i \), as \( \mathbf{u}_i = \mathbf{W}_u \mathbf{r}_i \), where the transform matrix \( \mathbf{W}_u \) depends on the item embeddings \( \mathbf{V} \). This motivates our idea of user encoding, that is, to use a deep neural network instead of the linear transform above, as \[ \mathbf{u}_i = g(\mathbf{r}_i; \mathbf{W}_u), \] where \( g \) denotes a nonlinear transform based on a deep network with parameters \( \mathbf{W}_u \). As we will show in our experiments, by drawing on the expressive power of deep neural networks, the proposed way of user encoding can substantially improve the prediction accuracy. Figure 1: This figure shows three different designs of the dual networks. Here, \( \odot \) indicates dot product and \( \oplus \) indicates summation. (a) The basic design adopts the MLP structure for each network. (b) The multi-level design integrates the dot products of embeddings at different levels to produce the prediction. (c) In the branching design, the embeddings (except those of the top level) used in the dot products are produced by transform branches. In this way, the main abstraction paths are not directly distorted. Overall Formulation. By coupling an item-network denoted by \( f(\mathbf{x}_j; \mathbf{W}_v) \) and a user-network \( g \) as introduced above, we can predict the matching of any given user-item pair based on the inner product of their embeddings, as \( \langle f(\mathbf{x}; \mathbf{W}_v), g(\mathbf{r}; \mathbf{W}_u) \rangle \). The inputs to these networks include \( \mathbf{x} \), the inherent feature of the given item, and \( \mathbf{r} \), the history of the given user on a set of reference items. With both encoding networks, we formulate the learning objective as follows: \[ \min_{\mathbf{W}_v, \mathbf{W}_u} \sum_i \sum_j c_{ij} \left( \mathbf{R}_{ij} - \langle f(\mathbf{x}_j; \mathbf{W}_v), g(\mathbf{r}_i; \mathbf{W}_u) \rangle \right)^2. \] Here, \( \mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_n] \) denotes the input features of all reference items. This formulation differs from previous ones in two key aspects: (1) Both users and items are encoded using deep neural networks.
The learning objective above encourages the cooperation of both networks such that the coupling of both sides yields the highest accuracy. Hence, the user-network parameters \( \mathbf{W}_u \) depend on the item embeddings \( \mathbf{V} \), and likewise for the item-network. (2) The learning task is to estimate the parameters of the encoding networks. Once the encoding networks are learned, they encode users and items in a uniform way, no matter whether they are seen during training. In other words, new users and new items are no longer second-class citizens – they are encoded in exactly the same way as those in the training set. Comparison with CDL. The Collaborative Deep Learning (CDL) recently proposed by Wang et al. (2015) was another attempt to tackle the cold-start issue. This method leverages the item features by aligning the item encoder with the embeddings resulting from matrix factorization. In particular, the objective function is given as follows: \[ \sum_{ij} c_{ij} (\mathbf{R}_{ij} - \mathbf{u}_i^T \mathbf{v}_j)^2 + \lambda_v \sum_j \| \mathbf{v}_j - f_e(\tilde{\mathbf{x}}_j, \theta) \|^2 + \lambda_n \sum_j \| \mathbf{x}_j - f_r(\tilde{\mathbf{x}}_j, \theta) \|^2 + \lambda_u \sum_i \| \mathbf{u}_i \|^2 + r(\theta). \] Here, a Stacked Denoising Autoencoder (SDAE) (Vincent et al., 2010) with parameter \( \theta \) is used to encode the items, based on \( \{ \tilde{\mathbf{x}}_j \} \), noisy versions of their features. Compared to our formulation, CDL has several limitations: (1) The objective is to balance the SDAE reconstruction error and the matching accuracy, which does not necessarily lead to improved recommendation. Tuning this balance also turns out to be tricky. (2) Only items are encoded, while the representations of the users are still obtained by matrix factorization. As a result, its expressive power in capturing user interest remains limited. (3) There are inconsistencies between known items and new ones – the embedding of known items results from a tradeoff between the matching accuracy and the fidelity to SDAE features, while the embedding of new items is purely based on SDAE encoding. 3.2 NETWORK ARCHITECTURE DESIGNS Our model consists of two networks, namely the item-network \( f \) and the user-network \( g \). We went through a progressive procedure in designing their architectures, obtaining three different designs, from the basic design, through the multi-level design, to the multi-level branching design. Each new design was motivated by the observation of certain limitations in the previous version. The basic design, as shown in Figure 1(a), adopts the multilayer perceptron as the basic architecture, using tanh as the nonlinear activation function between layers\(^1\). The top layer of the item-network produces a vector \( f(\mathbf{x}_j; \mathbf{W}_v) \) for each item, while that of the user-network produces a dual vector \( g(\mathbf{r}_i; \mathbf{W}_u) \) for each user. During training, the loss layer takes their inner products and compares them with the ground-truth \( \mathbf{R}(i, j) \). Each layer in these networks generates a vector representation. We observe that representations from different layers are complementary. Representations from lower layers tend to be closer to the inputs and preserve more information, while those from higher layers focus on deeper semantics. The representations from these levels have their respective values, as different users tend to focus on different aspects of an item.
Following this intuition, we arrive at a multi-level design, as shown in Figure 1(b). In this design, dot products between dual embeddings at corresponding levels are aggregated to produce the final prediction. There is an issue with the multi-level design: the output of each intermediate layer actually plays two roles. On one hand, it is the input to the next layer for further abstraction; on the other hand, it also serves as a facet to be matched with the other side. These two roles require different properties of the representations. Particularly, for the former role, the representation needs to preserve more information for higher-level abstraction; while for the latter, those parts related to the current level of matching need to be emphasized. To address this issue, we design a multi-level branching architecture, as shown in Figure 1(c). In this design, a matching branch is introduced to transform the representation at each level to a form that is more suitable for matching. This can also be considered as learning an alternative metric to measure the degree of matching between the embeddings. As we will show in our experiments, this design can considerably improve the prediction accuracy. 4 TRAINING WITH DUAL MINI-BATCHES A distinctive aspect of our training algorithm is the use of dual mini-batches. Specifically, in each iteration, \( B_v \) items and \( B_u \) users are selected. In addition to the item features and user histories, the corresponding part of the matching matrix \( \mathbf{R} \) will also be loaded and fed to the network. Here, the two batch sizes \( B_v \) and \( B_u \) can be different, and they should be chosen according to the sparsity of the matching matrix \( \mathbf{R} \), such that each dual mini-batch can cover both positive and zero ratings. During the backward pass, the loss layer that compares the predictions with the ground-truth matchings will produce two sets of gradients, respectively for items and users. These gradients are then back-propagated along the respective networks. Note that when the multi-level designs (both with and without branching) are used, each intermediate layer will receive gradients from two sources – those from the upper layers and those from the dual network (via the dot-product layer). Hence, the training of one network would impact that of the other. The entire training procedure consists of two stages: pre-training and optimization. In the pre-training stage, we initialize the item-network with unsupervised training (Vincent et al., 2010) and the user-network randomly. The unsupervised training of the item-network allows it to capture the feature statistics. Then both networks will be jointly refined in a layer-by-layer fashion. Particularly, we first tune the one-level networks, taking the dot products of their outputs as the predictions. Subsequently, we stack the second layers on top and refine them in a similar way. Empirically, we found that this layer-wise refinement scheme provides better initialization. In the optimization stage, we adopt the SGD algorithm with momentum and use the dual mini-batch scheme presented above. In this stage, the training is conducted in epochs. Each epoch, through multiple iterations, traverses the whole matching matrix \( \mathbf{R} \) without repetition. The order of the mini-batches is arbitrary and is shuffled at the beginning of each epoch. Additional tricks such as dropout and batch normalization are employed to further improve the performance.
\footnotetext{1The choice of tanh as the activation function is based on empirical comparison.} 5 EXPERIMENTS We tested our method on three real-world datasets with different kinds of items and matching relations: 1. CiteULike, constructed by Wang & Blei (2011), provides a list of researchers and the papers that they are interested in. Each paper comes with a text document that comprises both the title and the abstract. In total, it contains 5,551 researchers (as users) and 16,980 papers (as items) with 0.22% density. The task is to predict the papers that a researcher would like. 2. MovieLens+Posters is constructed based on the MovieLens 20M Dataset (Harper & Konstan, 2016), which provides about 20M user ratings on movies. For each movie, we collect a movie poster from TMDb and extract a visual feature therefrom using a convolutional neural network (Szegedy et al., 2016) as the item feature. Removing all those movies without posters and the users with fewer than 10 ratings, we obtain a dataset that contains 76,531 users and 14,101 items with 0.24% density. In this dataset, all ratings of 5 are considered positive matchings. 3. Ciao is organized by Tang et al. (2012) from a product review site, where each product comes with a series of reviews. The reviews for each product are concatenated to serve as the item content. We removed the items rated by fewer than 5 users and the users with fewer than 10 ratings. This results in a dataset with 4,663 users and 12,083 items with 0.25% density. All ratings of 40 or above (ratings range from 0 to 50) are regarded as positive matchings. 5.1 EVALUATION The performance of a recommender system can be assessed from different perspectives. In this paper, we follow Wang & Blei (2011) and perform the evaluation from the retrieval perspective. Specifically, a fraction of rating entries are omitted in the training phase, and the algorithms being tested are used to predict those entries. As pointed out by Wang & Blei (2011), since the ratings are implicit feedback (Hu et al., 2008) – some positive matchings are not reflected in the ratings – recall is more suitable than precision for measuring the performance. In particular, we use Recall@M averaged over all users as the performance metric. Here, for a certain user, Recall@M is defined as follows: \[ \text{Recall}@M = \frac{\text{the number of items a user likes in top } M \text{ recommendations}}{\text{the total number of items the user likes}} \] In our experiments, the value of \( M \) varies from 50 to 200. Following Wang & Blei (2011), we consider two tasks, in-matrix prediction and out-matrix prediction. Specifically, we divide all items into two disjoint parts, known and unknown, by the ratio of 9 to 1. The in-matrix prediction task only considers known items. For this task, all rating entries are split into three disjoint sets: training, validation and testing, by the ratio 3 : 1 : 1. It is ensured that all items in the validation and testing sets have appeared in the training stage (only part of their ratings was omitted). The out-matrix prediction task is to make predictions for the items that are completely unseen in the training phase. This task tests generalization performance and the capability of handling the cold-start issue.
5.2 COMPARISON WITH OTHER METHODS We compared our method, which we refer to as DualNet, with two representative methods from previous work: (1) Weighted Matrix Factorization (WMF) (Hu et al., 2008), a representative method for collaborative filtering (CF), and (2) Collaborative Deep Learning (CDL) (Wang et al., 2015), a hybrid method that combines deep encoding of the items with CF and represents the latest advances in recommendation techniques. On each dataset, we chose the design parameters of each method via grid search, using the parameter combinations that attain the best performance on the validation set. For our DualNet method, we adopt a three-level branching configuration, where the embedding dimensions of each network, from bottom to top, are set to 200, 200, and 50. For WMF, the latent dimension is set to 300 or 450, depending on the dataset. For CDL, the best performance is attained when the structure of the SDAE is configured as (2000, 1000, 300), with dropout ratio 0.1. The other design parameters of CDL are set as \( a = 1.0, b = 0.01, \lambda_u = 1, \lambda_v = 10, \lambda_n = 1000, \lambda_w = 0.0005 \). Table 1: Comparison of performance on three datasets. Performance is measured with the metric \( Recall@M \); we report results with \( M \) set to 50, 100, and 200. <table> <tr> <th rowspan="2"></th> <th colspan="3">CiteULike1</th> <th colspan="3">CiteULike2</th> </tr> <tr> <th>50</th> <th>100</th> <th>200</th> <th>50</th> <th>100</th> <th>200</th> </tr> <tr> <td>WMF</td> <td>22.14%</td> <td>32.58%</td> <td>43.65%</td> <td>40.45%</td> <td>50.28%</td> <td>59.95%</td> </tr> <tr> <td>CDL</td> <td>25.02%</td> <td>36.57%</td> <td>48.32%</td> <td>39.49%</td> <td>52.02%</td> <td>64.41%</td> </tr> <tr> <td>DualNet</td> <td><b>30.41%</b></td> <td><b>41.71%</b></td> <td><b>52.24%</b></td> <td><b>41.26%</b></td> <td><b>53.80%</b></td> <td><b>65.21%</b></td> </tr> <tr> <th rowspan="2"></th> <th colspan="3">MovieLens</th> <th colspan="3">Ciao</th> </tr> <tr> <th>50</th> <th>100</th> <th>200</th> <th>50</th> <th>100</th> <th>200</th> </tr> <tr> <td>WMF</td> <td>37.14%</td> <td>48.81%</td> <td>60.25%</td> <td>14.46%</td> <td>19.66%</td> <td>26.22%</td> </tr> <tr> <td>CDL</td> <td>38.11%</td> <td>49.73%</td> <td>61.00%</td> <td>17.90%</td> <td>24.55%</td> <td><b>32.53%</b></td> </tr> <tr> <td>DualNet</td> <td><b>44.95%</b></td> <td><b>59.15%</b></td> <td><b>72.56%</b></td> <td><b>17.94%</b></td> <td><b>24.58%</b></td> <td>32.52%</td> </tr> </table> Table 2: Comparison for out-matrix predictions on CiteULike <table> <tr> <th></th> <th>Recall@50</th> <th>Recall@100</th> <th>Recall@200</th> </tr> <tr> <td>CDL</td> <td>32.18%</td> <td>43.90%</td> <td>56.36%</td> </tr> <tr> <td>DualNet</td> <td><b>47.51%</b></td> <td><b>56.59%</b></td> <td><b>66.36%</b></td> </tr> </table> Note that on *CiteULike*, there are two ways to split the data: the scheme of (Wang et al., 2015) and the scheme of (Wang & Blei, 2011), the latter being the one presented in the previous section. In the former scheme, a fixed number of ratings from each user are selected for training, which may result in some testing items being missing from the training set. To provide a complete comparison with prior work, we use both schemes in our experiments, denoted *CiteULike1* and *CiteULike2* respectively. Table 1 compares the performance of *WMF*, *CDL*, and *DualNet* on all three datasets (four data-splitting settings).
From the results, we make two observations. (1) Our proposed *DualNet* method outperforms both *WMF* and *CDL* on all datasets. On certain datasets, the performance gains are substantial. For example, on *MovieLens*, we obtain average recalls of 44.95%, 59.15%, and 72.56% for \( M = 50, 100, 200 \) respectively. Compared to what CDL achieves (38.11%, 49.73%, and 61.00%), the relative gains are around 18%. On the other datasets, the gains are also considerable. (2) The performance gains vary significantly across datasets, as they are closely related to the *relevance* of the item features. In particular, when the item features are pertinent to the user interest, incorporating them can yield remarkable improvement; otherwise, the gains are smaller. 5.3 DETAILED STUDY We conducted additional experiments on *CiteULike* to further study the proposed algorithm. In this study, we investigate the performance of out-matrix prediction, the impact of various modeling choices, *e.g.* multi-level branching, and the influence of training tactics. **Out-matrix prediction.** As mentioned, the out-matrix prediction task examines an algorithm's capability of handling new items, *i.e.* those unseen in the training stage. For this task, we compared *CDL* and *DualNet* on the *CiteULike* dataset. *WMF* is not included here as it cannot handle new items. Table 2 shows the results: *DualNet* clearly outperforms *CDL* by a notable margin. For example, *Recall@50* increases from 32.18% to 47.51%, a relative gain of 47.6%, a very remarkable improvement. The strong generalization performance demonstrated here is, to a large extent, ascribed to our basic formulation, in which the encoding networks uniformly encode both known and new items. **Multi-level branching.** We compared the three designs presented in Section 3: the *basic design*, the *multi-level design*, and the *multi-level branching design*. From the results shown in Table 3, we observe only limited improvement of the multi-level design over the basic one; more significant performance gains appear when the *branching design* is introduced, showing that the *branches* contribute substantially to the overall performance. Table 3: Comparison of different network architecture designs on CiteULike <table> <tr> <th></th> <th>Recall@10</th> <th>Recall@50</th> <th>Recall@100</th> </tr> <tr> <td>basic</td> <td>15.86%</td> <td>38.86%</td> <td>51.03%</td> </tr> <tr> <td>multi-level</td> <td>16.89%</td> <td>39.92%</td> <td>51.26%</td> </tr> <tr> <td>multi-level branching</td> <td>17.43%</td> <td>40.31%</td> <td>51.78%</td> </tr> </table> **Noise injection.** We sometimes observed overfitting during training, *i.e.* the validation performance worsens while the training loss keeps decreasing. To tackle this issue, we inject noise into the inputs, *i.e.* we set a fraction of input entries to zero. Generally, we observed that noise injection has little effect on in-matrix *Recall@M* when \( M < 30 \); however, it considerably increases recall for large \( M \) values and for *out-matrix predictions*. In particular, on *CiteULike*, it raises in-matrix *Recall@300* from 67.3% to 71.2%, and out-matrix *Recall@50* from 38.6% to 47.5%. **Unsuccessful tactics.** Finally, we report tactics that we tried and found not to work. (1) Replacing the weighted Euclidean loss with a *logistic loss* leads to substantial performance degradation (sometimes by up to 20%).
Also, when using logistic loss, we observed severe overfitting. Rendle et al. (2009) proposed Bayesian Personalized Ranking (BPR), which directly targets ranking. We tested it on CiteULike with parameters tuned for optimal performance and found its performance to be similar to WMF: Recall@50, 100, and 200 for BPR are respectively 39.11%, 49.16%, and 59.96%, while those for WMF are 40.45%, 50.25%, and 59.95%. (2) Motivated by the observation that positive ratings are sparse, we tried a scheme that skips the dual mini-batches containing only zero ratings, with the aim of speeding up the training. While this reduces the time needed to run an epoch, it takes significantly more epochs to reach the same level of performance; as a result, the overall runtime is even longer. 6 CONCLUSIONS AND FUTURE WORK This paper presented *Collaborative Deep Embedding*, a new method for predicting the interactions between users and items. The method uses *dual networks* to encode users and items respectively; the user-network and item-network are trained jointly, in a collaborative manner, on two streams of data. We obtained consistent and considerable performance gains over the state of the art on three large datasets, and the proposed method also demonstrated superior generalization performance on out-matrix predictions. From our perspective, this improvement stems from three sources: (1) the expressive power of deep models for capturing the rich variations in user interests, (2) the collaborative training process that encourages closely coupled embeddings, and (3) an objective function that directly targets the prediction accuracy. We consider this work a significant step toward bringing the power of deep models to relational modeling. The space of deep relational modeling nonetheless remains wide open, with many questions yet to be answered. In future work, we plan to investigate more sophisticated network architectures and to extend the proposed methodology to applications involving more than two domains.
reject
Reject
4.666667
7af5e6920a223914cd84f3f79fd7d348690c4249
iclr
2,017
ITERATIVE REFINEMENT FOR MACHINE TRANSLATION Roman Novak* École polytechnique Palaiseau, France Michael Auli Facebook AI Research Menlo Park, CA David Grangier Facebook AI Research Menlo Park, CA ABSTRACT Existing machine translation decoding algorithms generate translations in a strictly monotonic fashion and never revisit previous decisions. As a result, earlier mistakes cannot be corrected at a later stage. In this paper, we present a translation scheme that starts from an initial guess and then makes iterative improvements that may revisit previous decisions. We parameterize our model as a convolutional neural network that predicts discrete substitutions to an existing translation based on an attention mechanism over both the source sentence as well as the current translation output. By making less than one modification per sentence, we improve the output of a phrase-based translation system by up to 0.4 BLEU on WMT15 German-English translation. 1 INTRODUCTION Existing decoding schemes for translation generate outputs either left-to-right, such as for phrase-based or neural translation models, or bottom-up as in syntactic models (Koehn et al., 2003; Galley et al., 2004; Bahdanau et al., 2015). All decoding algorithms for those models make decisions which cannot be revisited at a later stage, such as when the model discovers that it made an error earlier on. On the other hand, humans generate all but the simplest translations by conceiving a rough draft of the solution and then iteratively improving it until it is deemed complete. The translator may modify a clause she tackled earlier at any point and make arbitrary modifications to improve the translation. It can be argued that beam search allows recovery from mistakes, simply by providing alternative translations. However, reasonable beam sizes encode only a small number of binary decisions. A beam of size 50 contains fewer than six binary decisions, all of which frequently share the same prefix (Huang, 2008). In this paper, we present models that tackle translation similarly to humans. The model iteratively edits the target sentence until it cannot improve it further. As a preliminary study, we address the problem of finding mistakes in an existing translation via a simple classifier that predicts if a word in a translation is correct (§2). Next, we model word substitutions for an existing translation via a convolutional neural network that attends to the source when suggesting substitutions (§3). Finally, we devise a model that attends both to the source as well as to the existing translation (§4). We repeatedly apply the models to their own output by determining the best substitution for each word in the previous translation and then choosing either one or zero substitutions for each sentence. For the latter we consider various heuristics as well as a classifier-based selection method (§5). Our results demonstrate that we can improve the output of a phrase-based translation system on WMT15 German-English data by up to 0.4 BLEU (Papineni et al., 2002) by making on average only 0.6 substitutions per sentence (§6). Our approach differs from automatic post-editing since it does not require post-edited text, which is a scarce resource (Simard et al., 2007; Bojar et al., 2016). For our first model (§3) we merely require parallel text and for our second model (§4) the output of a baseline translation system. *Roman was interning at Facebook for this work.
\footnotetext{1\( 2^5 = 32 < 50 < 2^6 = 64 \)} 2 DETECTING ERRORS Before correcting errors we consider the task of detecting mistakes in the output of an existing translation system. In the following, we use lowercase boldface for vectors (e.g. \( \mathbf{x} \)), uppercase boldface for matrices (e.g. \( \mathbf{F} \)) and calligraphy for sets (e.g. \( \mathcal{X} \)). We use superscripts for indexing or slicing, e.g., \( \mathbf{x}^i, \mathbf{F}^{i,j}, \mathbf{F}^i = (\mathbf{F}^{i,1}, \ldots, \mathbf{F}^{i,|\mathcal{Y}|}) \). We further denote \( \mathbf{x} \) as the source sentence, \( \mathbf{y}_g \) as the guess translation from which we start and which was produced by a phrase-based translation system (§6.1), and \( \mathbf{y}_{\text{ref}} \) as the reference translation. Sentences are vectors of indices indicating entries in a source vocabulary \( \mathcal{X} \) or a target vocabulary \( \mathcal{Y} \). For example, \( \mathbf{x} = (x^1, \ldots, x^{|\mathbf{x}|}) \in \mathcal{X}^{|\mathbf{x}|} \) with \( \mathcal{X} = \{1, \ldots, |\mathcal{X}|\} \). We omit biases of linear layers to simplify the notation. Error detection focuses on word-level accuracy, i.e., we predict for each token in a given translation if it is present in the reference or not. This metric ignores word order; however, we hope that performance on this simple task provides us with a sense of how difficult it will be to modify translations to a positive effect. A token \( y_g^i \) in the candidate translation \( \mathbf{y}_g \) is deemed correct iff it is present in the reference translation: \( y_g^i \in \mathbf{y}_{\text{ref}} \). We build a neural network \( f \) to predict the correctness of each token in \( \mathbf{y}_g \) given the source sentence \( \mathbf{x} \): \[ f(\mathbf{x}, \mathbf{y}_g) \in [0; 1]^{|\mathbf{y}_g|}, \] where \( f(\mathbf{x}, \mathbf{y}_g)^i \) estimates \( P\left( y_g^i \in \mathbf{y}_{\text{ref}} \right) \). Architecture. We use an architecture similar to the word alignment model of Legrand et al. (2016). The source and the target sequences are embedded via a lookup table that replaces each word type with a learned vector. The resulting vector sequences are then processed by alternating convolutions and non-linearities. This results in a vector \( S(\mathbf{x})^j \) representing each position \( j \) in the source \( \mathbf{x} \) and a vector \( T(\mathbf{y}_g)^i \) representing each position \( i \) in the target \( \mathbf{y}_g \). These vectors are then compared via a dot product. Our prediction estimates the probability of a target word being correct as the largest dot product between any source word and the guess word. We apply the logistic function \( \sigma \) to this score, \[ f(\mathbf{x}, \mathbf{y}_g)^i = \sigma \left( \max_{1 \leq j \leq |\mathbf{x}|} \left[ S(\mathbf{x}) T(\mathbf{y}_g)^T \right]^{j,i} \right). \] Training. At training time we minimize the cross-entropy loss, with binary supervision: 1 for \( y_g^i \in \mathbf{y}_{\text{ref}} \), 0 otherwise. Testing. At test time we threshold the model prediction \( f(\mathbf{x}, \mathbf{y}_g)^i \) to detect mistakes. We compare the performance of our network to the following baselines: 1. Predicting that all candidate words are always correct \( f_{\text{cor}} \equiv 1 \), or always incorrect \( f_{\text{wrong}} \equiv 0 \). 2. The prior probability of a word being correct based on the training data, \( f_{\text{stat}}(y) = \mathbb{1}\left(\mathrm{P}[y \in \mathbf{y}_{\text{ref}} \mid y \in \mathbf{y}_g] > 0.5\right) \). We report word-level accuracy metrics in Table 1.
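Concretely, the scoring rule above amounts to a few lines of code; the following sketch assumes precomputed convolutional representations as NumPy arrays.

```python
import numpy as np

def detect_errors(S_x, T_yg, threshold=0.5):
    """Flag likely mistakes in the guess translation.

    S_x:  (|x|, d)   representations of source positions
    T_yg: (|y_g|, d) representations of guess positions
    """
    scores = S_x @ T_yg.T            # (|x|, |y_g|) dot products
    p_correct = 1.0 / (1.0 + np.exp(-scores.max(axis=0)))  # sigma(max_j .)
    return p_correct < threshold     # True where a token is flagged wrong
```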
While the model significantly improves over the baselines, the probability of correctly labeling a word as a mistake remains low (62.71%). The task of predicting mistakes is not easy, as previously shown in confidence estimation (Blatz et al., 2004; Ueffing & Ney, 2007). One should also bear in mind that this task cannot be solved with 100% accuracy, since a sentence can be correctly translated in multiple different ways and we only have a single reference translation. In our case, the final refinement objective might be easier than error detection, as we do not need to detect all errors: we need to identify some of the locations where a substitution could improve BLEU and, at the same time, suggest these substitutions. This is the objective of the model introduced in the next section. 3 ATTENTION-BASED MODEL We introduce a model to predict modifications to a translation which can be trained on bilingual text. In §5 we discuss strategies to iteratively apply this model to its own output in order to improve a translation. <table> <tr> <th>Metric (%)</th> <th>\( f_{\text{cor}} \)</th> <th>\( f_{\text{wrong}} \)</th> <th>\( f_{\text{stat}} \)</th> <th>f</th> </tr> <tr> <td>Accuracy</td> <td>68.0</td> <td>32.0</td> <td>71.3</td> <td><b>76.0</b></td> </tr> <tr> <td>Recall</td> <td>0.00</td> <td><b>100.00</b></td> <td>36.0</td> <td>61.3</td> </tr> <tr> <td>Precision</td> <td><b>100.0</b></td> <td>32.0</td> <td>58.4</td> <td>62.7</td> </tr> <tr> <td>F1</td> <td>0.00</td> <td>48.4</td> <td>44.5</td> <td><b>62.0</b></td> </tr> </table> Table 1: Accuracy of the error detection model \( f \) compared to baselines on the concatenation of the WMT test sets from 2008 to 2015. For precision, recall and F1 we consider a positive prediction as labeling a word as a mistake. Baseline \( f_{\text{cor}} \) labels all words as correct, \( f_{\text{wrong}} \) labels all words as incorrect, and \( f_{\text{stat}} \) labels a word from \( \mathbf{y}_g \) based on the prior probability estimated on the training data. Our model \( \mathbf{F} \) takes as input a source sentence \( \mathbf{x} \) and a target sentence \( \mathbf{y} \), and outputs a distribution over the vocabulary for each target position, \( \mathbf{F}(\mathbf{x}, \mathbf{y}) \in [0, 1]^{|\mathbf{y}| \times |\mathcal{Y}|} \). For each position \( i \) and any word \( j \in \mathcal{Y} \), \( \mathbf{F}(\mathbf{x}, \mathbf{y})^{i,j} \) estimates \( P(y^i = j \mid \mathbf{x}, \mathbf{y}^{-i}) \), the probability of word \( j \) being at position \( i \) given the source and the target context \( \mathbf{y}^{-i} = (y^1, \ldots, y^{i-1}, y^{i+1}, \ldots, y^{|\mathbf{y}|}) \) surrounding \( i \). In other words, we learn a non-causal language model (Bengio et al., 2003) which is also conditioned on the source \( \mathbf{x} \). Architecture. We rely on a convolutional model with attention. The source sentence is embedded into distributional space via a lookup table, followed by convolutions and non-linearities. The target sentence is also embedded in distributional space via a lookup table, followed by a single convolution and a succession of linear layers and non-linearities. The target convolution weights are zeroed at the center so that the model does not have access to the center word. This means that the model observes a fixed-size context of length \( 2k \) for any target position \( i \), \( \mathbf{y}^{-i|k} = (y^{i-k}, \ldots, y^{i-1}, y^{i+1}, \ldots, y^{i+k}) \), where \( 2k + 1 \) refers to the convolution kernel width.
These operations result in a vector \( \mathbf{S}^j \) representing each position \( j \) in the source sentence \( \mathbf{x} \) and a vector \( \mathbf{T}^i \) representing each target context \( \mathbf{y}^{-i|k} \). Given a target position \( i \), an attention module then takes these representations as input and outputs a weight for each source position, \[ \alpha(i, j) = \frac{\exp \left( \mathbf{S}^j \cdot \mathbf{T}^i \right)}{\sum_{j'=1}^{|\mathbf{x}|} \exp \left( \mathbf{S}^{j'} \cdot \mathbf{T}^i \right)}. \] These weights correspond to dot-product attention scores (Luong et al., 2015; Rush et al., 2015). The attention weights are used to compute a source summary specific to each target context through a weighted sum, \[ \mathbf{a}(\mathbf{y}^{-i|k}, \mathbf{x}) = \sum_{j=1}^{|\mathbf{x}|} \alpha(i, j) \; \mathbf{S}^j. \] Finally, this summary \( \mathbf{a}(\mathbf{y}^{-i|k}, \mathbf{x}) \) is concatenated with the embedding of the target context \( \mathbf{y}^{-i|k} \) obtained from the target lookup table, \[ \mathbf{L}\left( \mathbf{y}^{-i|k} \right) = \left\{ \mathbf{L}^j, j \in \mathbf{y}^{-i|k} \right\}, \] and a multilayer perceptron followed by a softmax computes \( \mathbf{F}(\mathbf{x}, \mathbf{y})^i \) from \( \mathbf{a}(\mathbf{y}^{-i|k}, \mathbf{x}) \) and \( \mathbf{L}(\mathbf{y}^{-i|k}) \). Note that we could alternatively use \( \mathbf{T}^i \) instead of \( \mathbf{L}(\mathbf{y}^{-i|k}) \), but our preliminary validation experiments showed better results with the lookup table output. Training. The model is trained to maximize the log-likelihood of the pairs \( (\mathbf{x}, \mathbf{y}_{\text{ref}}) \) from the training set. Testing. At test time the model is given \( (\mathbf{x}, \mathbf{y}_g) \), i.e., the source and the guess sentence. Similar to maximum likelihood training for left-to-right translation systems (Bahdanau et al., 2015), the model is therefore not exposed to the same type of context in training (reference contexts from \( \mathbf{y}_{\text{ref}} \)) and testing (guess contexts from \( \mathbf{y}_g \)). Discussion. Our model is similar to the attention-based translation approach of Bahdanau et al. (2015). Apart from using convolutions, the main difference is that we have access to both the left and the right target context \( \mathbf{y}^{-i|k} \), since we start from an initial guess translation. Right target words are of course good predictors of the word preceding them: an early validation experiment with the setup from §6.1 showed a perplexity of 5.4 for this model, compared to 13.9 for the same model trained with the left context only. 4 DUAL ATTENTION MODEL We introduce a dual attention architecture to also make use of the guess at training time. This contrasts with the model introduced in the previous section, where the guess is not used during training. Moreover, we are free to use the entire guess, including the center word, whereas for the reference we have to remove the center word. At training time, the dual attention model takes 3 inputs: the source, the guess, and the reference. At test time, the reference input is replaced by the guess. Specifically, the model \( F_{\text{dual}}(\mathbf{x}, \mathbf{y}_g, \mathbf{y}_{\text{ref}}) \in [0; 1]^{|\mathbf{y}_{\text{ref}}| \times |\mathcal{Y}|} \) estimates \( P(y_{\text{ref}}^i \mid \mathbf{x}, \mathbf{y}_g, \mathbf{y}_{\text{ref}}^{-i}) \) for each position \( i \) in the reference sentence. Architecture. The model builds upon the single attention model from the previous section by having two attention functions \( a \) with distinct parameters.
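Each attention function computes a summary in the same way as the single-attention module above; a minimal sketch of that computation, with shapes and names as illustrative assumptions:

```python
import numpy as np

def summary(S, T_i):
    """Attention summary for one target context.

    S:   (n, d) vectors S^j for each position of the attended sentence
    T_i: (d,)   vector T^i for the target context y^{-i|k}
    """
    logits = S @ T_i                       # dot products S^j . T^i
    alpha = np.exp(logits - logits.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ S                       # sum_j alpha(i, j) * S^j
```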
The first function \( a_{\text{source}} \) takes the source sentence \( \mathbf{x} \) and the reference context \( \mathbf{y}_{\text{ref}}^{-i|k} \) and produces the source summary for this context, \( a_{\text{source}}(\mathbf{y}^{-i|k}, \mathbf{x}) \), as in the single attention model. The second function \( a_{\text{guess}} \) takes the guess sentence \( \mathbf{y}_g \) and the reference context \( \mathbf{y}_{\text{ref}}^{-i|k} \) and produces a guess summary for this context, \( a_{\text{guess}}(\mathbf{y}^{-i|k}, \mathbf{y}_g) \). These two summaries are then concatenated with the lookup representation of the reference context \( \mathbf{L}(\mathbf{y}_{\text{ref}}^{-i|k}) \) and input to a final multilayer perceptron followed by a softmax. The reference lookup table contains the only parameters shared by the two attention functions. Training. This model is trained similarly to the single attention model, the only difference being the conditioning on the guess \( \mathbf{y}_g \). Testing. At test time, the reference is unavailable and we replace \( \mathbf{y}_{\text{ref}} \) with \( \mathbf{y}_g \), i.e., the model is given \( (\mathbf{x}, \mathbf{y}_g, \mathbf{y}_g^{-i|k}) \) to make a prediction at position \( i \). In this case, the distribution shift when going from training to testing is less drastic than in §3, and the model retains access to the whole \( \mathbf{y}_g \) via attention. Discussion. Compared to the single attention model (§3), this model reduces perplexity from 5.4 to 4.1 on our validation set. Since the dual attention model can attend to all guess words, it can copy any guess word if necessary. In our dataset, 68% of guess words are in the reference and can therefore be copied; for the remaining 32% of reference tokens the model should not copy but instead propose a substitution by itself (§6.1). During testing, the fact that the guess is input twice (\( \mathbf{x}, \mathbf{y}_g, \mathbf{y}_g^{-i|k} \)) means that the guess and the prediction context always match. This makes the model more conservative in its predictions, suggesting tokens from \( \mathbf{y}_g \) more often than the single attention model does. However, as we show in §6, this turns out to be beneficial in our setting. 5 ITERATIVE REFINEMENT The models in §3 and §4 suggest word substitutions for each position in the candidate translation \( \mathbf{y}_g \) given the current surrounding context. Applying a single substitution changes the context of the surrounding words and requires updating the model predictions. We therefore perform multiple rounds of substitution. At each round, the model computes its predictions, and our refinement strategy then selects a substitution and performs it, unless the strategy decides that it can no longer improve the target sentence. The refinement procedure should thus be able to (i) prioritize the suggested substitutions, and (ii) decide when to stop the iterative process. We determine the best edit for each position \( i \) in \( \mathbf{y}_g \) by selecting the word with the highest probability estimate: \( y_{\text{pred}}^i = \arg\max_{j \in \mathcal{Y}} F(\mathbf{x}, \mathbf{y}_g)^{i,j} \). We then compute a confidence score \( s(\mathbf{y}_g, \mathbf{y}_{\text{pred}})^i \) for this prediction, possibly considering the prediction for the current guess word at the same position. These scores are used to select the next position to edit, \( i^* = \arg\max_i s(\mathbf{y}_g, \mathbf{y}_{\text{pred}})^i \), and to stop the iterative process, i.e., when the confidence falls below a validated threshold \( t \). We also limit the number of substitutions to a maximum of \( N \).
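A minimal sketch of this refinement loop follows; the scoring function \( s \) is left abstract, sentences are index vectors, and masking identity substitutions is an illustrative choice rather than a detail specified above.

```python
import numpy as np

def refine(x, y_g, model, score, t=0.5, N=5):
    """Iteratively substitute words in the guess translation.

    model(x, y) -> (|y|, |V|) array of per-position distributions (F or F_dual)
    score(y, y_pred, probs) -> (|y|,) confidence array (the heuristic s)
    """
    y = list(y_g)
    for _ in range(N):                        # at most N substitutions
        probs = model(x, y)
        y_pred = probs.argmax(axis=1)         # best word per position
        s = score(y, y_pred, probs).astype(float)
        s[np.asarray(y) == y_pred] = -np.inf  # ignore no-op substitutions
        i = int(s.argmax())                   # i* = argmax_i s(...)^i
        if s[i] < t:                          # confidence below threshold
            break
        y[i] = int(y_pred[i])                 # apply the chosen edit
    return y
```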
We consider different heuristics for \( s \): • Score positions based on the model confidence in \( y_{pred}^i \), i.e., \( s_{conf}(\mathbf{y}_g, \mathbf{y}_{pred})^i = F(\mathbf{x}, \mathbf{y}_g)^{i, y_{pred}^i} \). • Look for high confidence in the suggested substitution \( y_{pred}^i \) and low confidence in the current word \( y_g^i \): \( s_{pr}(\mathbf{y}_g, \mathbf{y}_{pred})^i = F(\mathbf{x}, \mathbf{y}_g)^{i, y_{pred}^i} \times \left(1 - F(\mathbf{x}, \mathbf{y}_g)^{i, y_g^i}\right) \). • Train a simple binary classifier taking as input the scores of the best predicted word and of the current guess word: \( s_{cl}(\mathbf{y}_g, \mathbf{y}_{pred})^i = \mathrm{nn} \left( \log F(\mathbf{x}, \mathbf{y}_g)^{i, y_{pred}^i}, \log F(\mathbf{x}, \mathbf{y}_g)^{i, y_g^i} \right) \), where nn is a 2-layer neural network trained to predict whether a substitution leads to an increase in BLEU or not. We compare the above strategies, different score thresholds \( t \), and the maximum number of modifications allowed per sentence \( N \) in §6.2. 6 EXPERIMENTS & RESULTS We first describe our experimental setup and then discuss our results. 6.1 EXPERIMENTAL SETUP Data. We perform our experiments on the German-to-English WMT15 task (Bojar et al., 2015) and benchmark our improvements against the output of a phrase-based translation system (PBMT; Koehn et al., 2007) on this language pair. In principle, our approach may start from any initial guess translation. We chose the output of a phrase-based system because it provides a good starting point that can be computed at high speed, which allows us to quickly generate guess translations for the millions of sentences in our training set. All data was lowercased and numbers were mapped to a single special "number" token. Infrequent tokens were mapped to an "unknown" token, which resulted in dictionaries of 120K and 170K words for English and German respectively. For training we used 3.5M sentence triples (source, reference, and the guess translation output by the PBMT system). A validation set of 180K triples was used for neural network hyper-parameter selection and learning rate scheduling. Finally, two 3K subsets of the validation set were used to train the classifier discussed in §5 and to select the best model architecture (single vs. dual attention) and refinement heuristic. The initial guess translations were generated with phrase-based systems trained on the same training data as our refinement models. We decoded the training data with ten systems, each trained on 90% of the training data in order to decode the remaining 10%. This procedure avoids the bias of generating guess translations with a system that was trained on the same data. Implementation. All models were implemented in Torch (Collobert et al., 2011) and trained with stochastic gradient descent to minimize the cross-entropy loss. For the error detection model in §2 we used two temporal convolutions on top of the lookup table, each followed by a tanh non-linearity, to compute \( S(\mathbf{x}) \) and \( T(\mathbf{y}_g) \). The output dimension of each convolution was set to 256 and the receptive fields spanned 5 words, resulting in final outputs summarizing a context of 9 words. For the single attention model we set the shared context embedding dimension \( \dim \mathbf{S}^j = \dim \mathbf{T}^i = 512 \) and use a context of size \( k = 4 \) words to the left and to the right, resulting in a window of size 9 for the source and 8 for the target. The final multilayer perceptron has 2 layers with a hidden dimension of 512, see §3.
For the dual attention model we used 2-layer context embeddings (a convolution followed by a linear layer with a tanh in between), each having output dimension 512 and a context of size \( k = 4 \). The final multilayer perceptron has 2 layers with a hidden dimension of 1024, see §4. In this setup, we replaced dot-product attention with MLP attention (Bahdanau et al., 2015) as it performed better on the validation set. All weights were initialized randomly apart from the word embedding layers, which were pre-computed with Hellinger Principal Component Analysis (Lebret & Collobert, 2014) applied to the bilingual co-occurrence matrix constructed on the training set. The word embedding dimension was set to 256 for both languages and all models. <table> <tr> <th>Model</th> <th>Heuristic</th> <th>Best \( t \)</th> <th>Best \( N \)</th> <th>BLEU</th> </tr> <tr> <td colspan="4">PBMT Baseline</td> <td>30.02</td> </tr> <tr> <td rowspan="4">F</td> <td>S<sub>conf</sub></td> <td>0.8</td> <td>3</td> <td>30.21</td> </tr> <tr> <td>S<sub>pr</sub></td> <td>0.7</td> <td>3</td> <td>30.20</td> </tr> <tr> <td>S<sub>cl</sub></td> <td>0.5</td> <td>1</td> <td>30.19</td> </tr> <tr> <td>S<sub>conf</sub></td> <td>0.6</td> <td>7</td> <td>30.32</td> </tr> <tr> <td rowspan="3">F<sub>dual</sub></td> <td>S<sub>conf</sub></td> <td>0.5</td> <td>5</td> <td><b>30.35</b></td> </tr> <tr> <td>S<sub>pr</sub></td> <td>0.5</td> <td>5</td> <td><b>30.35</b></td> </tr> <tr> <td>S<sub>cl</sub></td> <td>0.4</td> <td>2</td> <td>30.33</td> </tr> </table> Table 2: Validation BLEU (selecting substitution heuristics, decision thresholds \( t \), and number of maximum allowed modifications \( N \)). BLEU is reported on 3,041 validation sentences. <table> <tr> <th>newstest</th> <th>PBMT BLEU</th> <th>Our BLEU</th> <th>\( \Delta \)</th> </tr> <tr> <td>2008</td> <td>21.29</td> <td><b>21.60</b></td> <td>0.31</td> </tr> <tr> <td>2009</td> <td>20.42</td> <td><b>20.74</b></td> <td>0.32</td> </tr> <tr> <td>2010</td> <td>22.82</td> <td><b>23.13</b></td> <td>0.31</td> </tr> <tr> <td>2011</td> <td>21.43</td> <td><b>21.65</b></td> <td>0.22</td> </tr> <tr> <td>2012</td> <td>21.78</td> <td><b>22.10</b></td> <td>0.32</td> </tr> <tr> <td>2013</td> <td>24.99</td> <td><b>25.37</b></td> <td>0.38</td> </tr> <tr> <td>2014</td> <td>22.76</td> <td><b>23.07</b></td> <td>0.31</td> </tr> <tr> <td>2015</td> <td>24.40</td> <td><b>24.80</b></td> <td>0.40</td> </tr> <tr> <td>Mean</td> <td>22.49</td> <td><b>22.81</b></td> <td>0.32</td> </tr> </table> Table 3: Test accuracy on WMT test sets after applying our refinement procedure. 6.2 RESULTS Table 2 compares BLEU of the single and dual attention models (F vs. F<sub>dual</sub>) on the validation set. It reports the performance for the best threshold \( t \in \{0, 0.1, \ldots, 1\} \) and the best maximum number of modifications per sentence \( N \in \{0, 1, \ldots, 10\} \) for the different refinement heuristics. The best-performing configuration is F<sub>dual</sub> with the product-based heuristic S<sub>pr</sub> thresholded at \( t = 0.5 \) for up to \( N = 5 \) substitutions. We report the test performance of this configuration in Table 3. Tables 4, 5 and 6 show examples of system outputs. Overall the system obtains a small but consistent improvement over all the test sets. Figure 1 (left) plots accuracy versus the number of allowed substitutions and Figure 1 (right) shows the percentage of actually modified tokens. The dual attention model (§4) outperforms single attention (§3).
Both models achieve most of the improvement by making only 1–2 substitutions per sentence. Thereafter, only very few substitutions are made, with little impact on BLEU. Figure 1 (right) shows that the models saturate quickly, indicating convergence of the refinement output to a state where the models have no more suggestions. To isolate the model contribution from the scoring heuristic, we replace the scoring heuristic with an oracle while keeping the rest of the refinement strategy the same. We consider two types of oracle: the full oracle takes the suggested substitution for each position and then selects which single position should be edited or whether to stop editing altogether; this oracle has the potential to find the largest BLEU improvement. The partial oracle does not select the position; it just takes the heuristic suggestion for the current step and decides whether to edit or to stop the process. Notice that both oracles have very limited choice, as they are only able to perform substitutions suggested by our model. Figure 1: Left: BLEU as a function of the total number of substitutions allowed per sentence. Values are reported on a small 3K validation set for the single and dual attention models using the best scoring heuristic \( s \) and threshold \( t \). Right: Percentage of modified tokens on the validation set as a function of the total number of substitutions allowed per sentence. All models modify fewer than 2.5% of tokens. Figure 2: BLEU as a function of the total number of substitutions allowed per sentence. Left: best dual-attention refinement strategy (Dual_product) versus two oracles. The full oracle (Dual_full_oracle) accepts as input \( \mathbf{y}_{pred} \) and selects a single position \( i \) for the substitution \( y_g^i := y_{pred}^i \). The partial oracle (Dual_partial_oracle) lets the model choose the position (\( i := \arg\max_{1 \leq j \leq |\mathbf{y}_g|} s(\mathbf{y}_g, \mathbf{y}_{pred})^j \)) but has the ability to prevent the substitution \( y_g^i := y_{pred}^i \) if it does not improve BLEU. Right: same for the best single attention setup. Figure 2 reports the performance of our best single and dual attention models compared to both oracles on the validation set; Figure 3 shows the corresponding number of substitutions. The full and partial oracles result in improvements of +1.7 and +1.09 BLEU over the baseline in the dual attention setting (compared to +0.35 with \( s_{pr} \)). In the single-attention setup the oracles yield higher improvements (+2.37 and +1.3) and also perform more substitutions. This supports our earlier conjecture (§4) that \( F_{dual} \) is more conservative and prone to copying words from the guess \( \mathbf{y}_g \) compared to the single attention model. While helpful in validation, the cautious nature of the dual model restricts the options of the oracle. We make several observations. First, the word-prediction models provide high-quality substitutions \( \mathbf{y}_{pred} \) that can lead to significant improvements in BLEU, despite both oracles being limited in their choice of \( \mathbf{y}_{pred} \). This is supported by the simple heuristic \( s_{conf} \) performing very close to more sophisticated strategies (Table 2).
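For illustration, one step of the full oracle can be sketched as follows, using sentence-level BLEU from NLTK as a stand-in for the corpus-level BLEU used in our evaluation; tokenized sentences are lists of strings here.

```python
from nltk.translate.bleu_score import sentence_bleu

def full_oracle_step(y_g, y_pred, y_ref):
    """Try each model-suggested substitution and keep the single edit
    that most improves BLEU, or stop if no edit helps."""
    base = sentence_bleu([y_ref], y_g)
    best_gain, best_i = 0.0, None
    for i, w in enumerate(y_pred):
        if w == y_g[i]:
            continue                         # not an actual edit
        cand = y_g[:i] + [w] + y_g[i + 1:]
        gain = sentence_bleu([y_ref], cand) - base
        if gain > best_gain:                 # only BLEU-improving edits
            best_gain, best_i = gain, i
    if best_i is None:
        return y_g, False                    # oracle stops editing
    return y_g[:best_i] + [y_pred[best_i]] + y_g[best_i + 1:], True
```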
Figure 3: Percentage of modified tokens as a function of the total number of substitutions allowed per sentence for the dual attention model (left) and the single attention model (right) compared to the full and partial oracles (cf. Figure 2). Second, it is important to have a good estimate of whether a substitution will improve BLEU or not. The full oracle, which yields +1.7 BLEU, estimates the benefit of a real-valued confidence measure, since it replaces the scoring heuristic \( s \); the partial oracle, yielding +1.09 BLEU, assesses the benefit of a binary-valued confidence measure, as it can only prevent our model from making a BLEU-damaging substitution. However, confidence estimation is a difficult task, as we found in §2. Finally, we demonstrate that a significant improvement in BLEU can be achieved through very few substitutions: the full and partial oracles modify only 1.69% and 0.99% of tokens, or 0.4 and 0.24 modifications per sentence, respectively. Of course, oracle substitution assumes access to the reference, which is not available at test time. At the same time, our oracle is more likely to generate fluent sentences than an unrestricted oracle, since it only has access to substitutions deemed likely by the model, whereas an unrestricted oracle may suggest improvements leading to unreasonable sentences. Note that our oracles only allow substitutions (no deletions or insertions), and only those that raise BLEU in a monotonic fashion, with each single refinement improving the score of the previous translation. 7 CONCLUSION AND FUTURE WORK We present a simple iterative decoding scheme for machine translation which is motivated by the inability of existing models to revisit incorrect decoding decisions made in the past. Our models improve an initial guess translation via simple word substitutions over several rounds. At each round, the model has access to the source as well as to the output of the previous round, which is an entire translation of the source. This is different from existing decoding algorithms, which make predictions based on a limited partial translation and are unable to revisit previous erroneous decoding decisions. Our results increase translation accuracy by up to 0.4 BLEU on WMT15 German-English translation while modifying only 0.6 words per sentence on average. In our experimental setup we start from the output of a phrase-based translation system, but our model is general enough to deal with arbitrary guess translations. We see several avenues for future work: experimenting with different initial guess translations, such as the output of a neural translation system or even the result of a simple dictionary-based word-by-word translation scheme; editing a number of guess translations simultaneously by expanding the dual attention mechanism to attend over multiple guesses; and going beyond word substitution by adding deletions, insertions, or even swaps of single or multi-word units. Finally, the dual-attention model in §4 may present a good starting point for neural multi-source translation (Schroeder et al., 2009). ACKNOWLEDGMENTS The authors wish to thank Marc'Aurelio Ranzato and Sumit Chopra for their advice and comments. REFERENCES Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR, May 2015.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137–1155, March 2003. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=944919.944966
John Blatz, Erin Fitzgerald, George F. Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchís, and Nicola Ueffing. Confidence estimation for machine translation. In Proc. of COLING, 2004.
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Chris Hokamp, Matthias Huck, Varvara Logacheva, and Pavel Pecina (eds.). Proceedings of the Tenth Workshop on Statistical Machine Translation. Association for Computational Linguistics, Lisbon, Portugal, September 2015. URL http://aclweb.org/anthology/W15-30
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno-Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana L. Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin M. Verspoor, and Marcos Zampieri. Findings of the 2016 conference on machine translation. In WMT, 2016.
R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011. URL http://torch.ch
Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. What's in a translation rule? In Proc. of HLT-NAACL, pp. 273–280, Boston, MA, USA, May 2004.
Liang Huang. Forest-based algorithms in natural language processing. PhD thesis, University of Pennsylvania, 2008.
Philipp Koehn, Franz Josef Och, and Daniel Marcu. Statistical phrase-based translation. In Proc. of HLT-NAACL, pp. 127–133, Edmonton, Canada, May 2003.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL, 2007.
Rémi Lebret and Ronan Collobert. Word embeddings through Hellinger PCA. In Proc. of EACL, 2014.
Joel Legrand, Michael Auli, and Ronan Collobert. Neural network-based word alignment through score aggregation. In Proc. of WMT, 2016.
Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In Proc. of EMNLP, pp. 1412–1421, 2015.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. In Proc. of ACL, pp. 311–318, 2002. doi: 10.3115/1073083.1073135.
Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for sentence summarization. In Proc. of EMNLP, 2015.
Josh Schroeder, Trevor Cohn, and Philipp Koehn. Word lattices for multi-source translation. In Proc. of EACL, 2009.
Michel Simard, Cyril Goutte, and Pierre Isabelle. Statistical phrase-based post-editing. In Proc. of NAACL, 2007.
Nicola Ueffing and Hermann Ney. Word-level confidence estimation for machine translation.
Computational Linguistics, 33:9–40, 2007. A EXAMPLES <table> <tr> <th>x</th> <td>new york city erwägt ebenfalls ein solches .</td> <th>x</th> <td>papa , ich bin 22 !</td> </tr> <tr> <th>y<sub>ref</sub></th> <td>new york city is also considering this .</td> <th>y<sub>ref</sub></th> <td>dad , i 'm 22 !</td> </tr> <tr> <th>y<sub>g</sub></th> <td>new york city is also <b>a such</b> .</td> <th>y<sub>g</sub></th> <td>papa , i am 22 .</td> </tr> <tr> <th>our</th> <td>new york city is also <b>considering this</b> .</td> <th>our</th> <td>papa , i am 22 !</td> </tr> </table> <table> <tr> <th>x</th> <td>esme nussbaum senkte ihren kopf .</td> <th>x</th> <td>grobritannien importiert 139.000 tonnen .</td> </tr> <tr> <th>y<sub>ref</sub></th> <td>esme nussbaum lowered her head .</td> <th>y<sub>ref</sub></th> <td>uk imports 139,000 tons .</td> </tr> <tr> <th>y<sub>g</sub></th> <td>esme nussbaum <b>slashed its head</b> .</td> <th>y<sub>g</sub></th> <td>britain <b>imported</b> 139,000 tonnes .</td> </tr> <tr> <th>our</th> <td>esme nussbaum <b>lowered her head</b> .</td> <th>our</th> <td>britain <b>imports</b> 139,000 tonnes .</td> </tr> </table> <table> <tr> <th>x</th> <td>alles in deutschland wird subventioniert , von der kohle über autos bis zur landwirtschaft .</td> </tr> <tr> <th>y<sub>ref</sub></th> <td>everything is subsidised in germany , from coal , to cars and farmers .</td> </tr> <tr> <th>y<sub>g</sub></th> <td><b>all</b> in germany , subsidised by the coal on cars to agriculture .</td> </tr> <tr> <th>our</th> <td><b>everything in germany is subsidised by the coal on cars to agriculture</b> .</td> </tr> </table> <table> <tr> <th>x</th> <td>drei männer , die laut aussage der behörden als fahrer arbeiteten , wurden wegen des besitzes und des beabsichtigten verkaufs von marihuana und kokain angeklagt .</td> </tr> <tr> <th>y<sub>ref</sub></th> <td>three men who authorities say worked as drivers were charged with possession of marijuana and cocaine with intent to distribute .</td> </tr> <tr> <th>y<sub>g</sub></th> <td>three men who , according to the authorities <b>have been worked</b> as a driver , because of the possession and the <b>planned</b> sale of marijuana and cocaine .</td> </tr> <tr> <th>our</th> <td>three men who , according to the authorities , <b>were working</b> as a driver , because of the possession and the <b>intended</b> sale of marijuana and cocaine .</td> </tr> </table> Table 4: Examples of good refinements performed by our system on our test sets. The model clearly improves the quality of the initial guess translations.
<table> <tr> <th>x</th> <td>er war auch kein klempner .</td> <th>x</th> <td>mit 38 aber beging er selbstmord .</td> </tr> <tr> <th>y<sub>ref</sub></th> <td>nor was he a pipe lagger .</td> <th>y<sub>ref</sub></th> <td>but at 38 , he committed suicide .</td> </tr> <tr> <th>y<sub>g</sub></th> <td>he was also a plumber .</td> <th>y<sub>g</sub></th> <td><b>with</b> 38 <b>but</b> he committed suicide .</td> </tr> <tr> <th>our</th> <td>he was not a plumber .</td> <th>our</th> <td><b>in</b> 38 , he committed suicide .</td> </tr> </table> <table> <tr> <th>x</th> <td>ich habe schon 2,5 millionen in die kampagne gesteckt .</td> </tr> <tr> <th>y<sub>ref</sub></th> <td>i have already put 2.5 million into the campaign .</td> </tr> <tr> <th>y<sub>g</sub></th> <td>i have <b>already</b> 2.5 million <b>in</b> the campaign .</td> </tr> <tr> <th>our</th> <td>i have <b>put</b> 2.5 million <b>into</b> campaign .</td> </tr> </table> <table> <tr> <th>x</th> <td>dieses jahr werden amerikaner etwa 106 millionen dollar für kürbisse ausgeben , so das us census bureau .</td> </tr> <tr> <th>y<sub>ref</sub></th> <td>this year , americans will spend around $ 106 million on pumpkins , according to the u.s. census bureau .</td> </tr> <tr> <th>y<sub>g</sub></th> <td>this year , the americans <b>are approximately</b> 106 million dollars <b>for</b> pumpkins , so the us census bureau .</td> </tr> <tr> <th>our</th> <td>this year , the americans <b>spend about</b> 106 million dollars <b>to</b> pumpkins , so the us census bureau .</td> </tr> </table> <table> <tr> <th>x</th> <td>das thema unterliegt bestimmungen , denen zufolge fluggesellschaften die sicherheit jederzeit aufrecht erhalten und passagiere die vom kabinenpersonal gegebenen sicherheitsanweisungen befolgen müssen .</td> </tr> <tr> <th>y<sub>ref</sub></th> <td>the issue is covered by regulations which require aircraft operators to ensure safety is maintained at all times and passengers to comply with the safety instructions given by crew members .</td> </tr> <tr> <th>y<sub>g</sub></th> <td>the issue is subject to rules , according to which airlines and passengers <b>to maintain the</b> security at any time by the cabin crew safety instructions given to follow .</td> </tr> <tr> <th>our</th> <td>the issue is subject to rules , according to which airlines and passengers <b>must follow their</b> security at any time by the cabin crew safety instructions given to follow .</td> </tr> </table> Table 5: Refinements of mixed quality. Our model is not able to insert new words, and so it sometimes replaces a relevant word with another relevant word. In other cases, improvements are insignificant, or good word replacements are mixed with poor ones.
<table> <tr> <th>x</th> <td>ein krieg , der weder verloren noch gewonnen wird</td> <th>x</th> <td>werden wir jemals erfahren , was ihn verursacht hat ?</td> </tr> <tr> <th>y<sub>ref</sub></th> <td>a war that is neither lost or won</td> <th>y<sub>ref</sub></th> <td>will we ever know what caused it ?</td> </tr> <tr> <th>y<sub>g</sub></th> <td>a war that is <b>still</b> to be <b>gained</b> or lost</td> <th>y<sub>g</sub></th> <td>will we ever <b>learn</b> what caused it ?</td> </tr> <tr> <th>our</th> <td>a war that is <b>neither</b> to be <b>lost</b> nor lost</td> <th>our</th> <td>will we ever <b>hear</b> what caused it ?</td> </tr> </table> <table> <tr> <th>x</th> <td>in den vereinigten staaten liegt das durchschnittsalter bei 12,5 jahren , etwas weniger als 12,75 im jahr 1970 .</td> </tr> <tr> <th>y<sub>ref</sub></th> <td>in the united states , the average age is 12.5 years , down from 12.75 in 1970 .</td> </tr> <tr> <th>y<sub>g</sub></th> <td>in the united states , the average age <b>at</b> 12.5 years ago , a little less than 12.75 in 1970 .</td> </tr> <tr> <th>our</th> <td>in the united states , the average age <b>of</b> 12.5 years ago <b>is</b> a little less than 12.75 in 1970 .</td> </tr> </table> Table 6: Examples of poor refinements, where our model either fails to improve the translation or decreases its quality.
ABSTRACT Existing machine translation decoding algorithms generate translations in a strictly monotonic fashion and never revisit previous decisions. As a result, earlier mistakes cannot be corrected at a later stage. In this paper, we present a translation scheme that starts from an initial guess and then makes iterative improvements that may revisit previous decisions. We parameterize our model as a convolutional neural network that predicts discrete substitutions to an existing translation based on an attention mechanism over both the source sentence as well as the current translation output. By making less than one modification per sentence, we improve the output of a phrase-based translation system by up to 0.4 BLEU on WMT15 German-English translation. 1 INTRODUCTION Existing decoding schemes for translation generate outputs either left-to-right, such as for phrase-based or neural translation models, or bottom-up as in syntactic models (Koehn et al., 2003; Galley et al., 2004; Bahdanau et al., 2015). All decoding algorithms for those models make decisions which cannot be revisited at a later stage, such as when the model discovers that it made an error earlier on. On the other hand, humans generate all but the simplest translations by conceiving a rough draft of the solution and then iteratively improving it until it is deemed complete. The translator may modify a clause she tackled earlier at any point and make arbitrary modifications to improve the translation. It can be argued that beam search allows to recover from mistakes, simply by providing alternative translations. However, reasonable beam sizes encode only a small number of binary decisions. A beam of size 50 contains fewer than six binary decisions, all of which frequently share the same prefix (Huang, 2008). In this paper, we present models that tackle translation similar to humans. The model iteratively edits the target sentence until it cannot improve it further. As a preliminary study, we address the problem of finding mistakes in an existing translation via a simple classifier that predicts if a word in a translation is correct (\S 2). Next, we model word substitutions for an existing translation via a convolutional neural network that attends to the source when suggesting substitutions (\S 3). Finally, we devise a model that attends both to the source as well as to the existing translation (\S 4). We repeatedly apply the models to their own output by determining the best substitution for each word in the previous translation and then choosing either one or zero substitutions for each sentence. For the latter we consider various heuristics as well as a classifier-based selection method (\S 5). Our results demonstrate that we can improve the output of a phrase-based translation system on WMT15 German-English data by up to 0.4 BLEU (Papineni et al., 2002) by making on average only 0.6 substitutions per sentence (\S 6). Our approach differs from automatic post-editing since it does not require post-edited text which is a scarce resource (\S imard et al., 2007; Bojar et al., 2016). For our first model (\S 3) we merely require parallel text and for our second model (\S 4) the output of a baseline translation system. *Roman was interning at Facebook for this work. \( 12^5 = 32 < 50 < 2^6 = 64 \) 2 DETECTING ERRORS Before correcting errors we consider the task of detecting mistakes in the output of an existing translation system. In the following, we use lowercase boldface for vectors (e.g. 
\( \mathbf{x} \)), uppercase boldface for matrices (e.g. \( \mathbf{F} \)) and calligraphy for sets (e.g. \( \mathcal{X} \)). We use superscripts for indexing or slicing, e.g., \( \mathbf{x}^i, \mathbf{F}^{i,j}, \mathbf{F}^i = (\mathbf{F}^{i,1}, \ldots, \mathbf{F}^{i,|\mathcal{F}|}) \). We further denote \( \mathbf{x} \) as the source sentence, \( \mathbf{y}_g \) as the guess translation from which we start and which was produced by a phrase-based translation system [6,1], and \( \mathbf{y}_{\text{ref}} \) as the reference translation. Sentences are vectors of indices indicating entries in a source vocabulary \( \mathcal{X} \) or a target vocabulary \( \mathcal{Y} \). For example, \( \mathbf{x} = (x^1, \ldots, x^{|\mathcal{X}|}) \in \mathcal{X}^{|\mathcal{X}|} \) with \( \mathcal{X} = \{1, \ldots, |\mathcal{X}|\} \). We omit biases of linear layers to simplify the notation. Error detection focuses on word-level accuracy, i.e., we predict for each token in a given translation if it is present in the reference or not. This metric ignores word order, however, we hope that performance on this simple task provides us with a sense of how difficult it will be to modify translations to a positive effect. A token \( y_g^i \) in the candidate translation \( \mathbf{y}_g \) is deemed correct iff it is present in the reference translation: \( y_g^i \in \mathbf{y}_{\text{ref}} \). We build a neural network \( f \) to predict correctness of each token in \( \mathbf{y}_g \) given the source sentence \( \mathbf{x} \): \[ f(\mathbf{x}, \mathbf{y}_g) \in [0; 1]^{|\mathbf{y}_g|}, \] where \( f(\mathbf{x}, \mathbf{y}_g)^i \) estimates \( P\left( y_g^i \in \mathbf{y}_{\text{ref}} \right) \). Architecture. We use an architecture similar to the word alignment model of [Legrand et al. (2016)]. The source and the target sequences are embedded via a lookup table that replace each word type with a learned vector. The resulting vector sequences are then processed by alternating convolutions and non-linearities. This results in a vector \( S(x)^i \) representing each position \( i \) in the source \( \mathbf{x} \) and a vector \( T(y_g)^j \) representing each position \( j \) in the target \( \mathbf{y}_g \). These vectors are then compared via a dot product. Our prediction estimates the probability of a target word being correct as the largest dot product between any source word and the guess word. We apply the logistic function \( \sigma \) to this score, \[ f(\mathbf{x}, \mathbf{y}_g)^i = \sigma \left( \max_{1 \leq j \leq |\mathcal{X}|} \left[ S(\mathbf{x}) T(\mathbf{y}_g)^T \right]^{j,i} \right). \] Training. At training time we minimize the cross-entropy loss, with the binary supervision 1 for \( y_g^i \in \mathbf{y}_{\text{ref}} \), 0 otherwise. Testing. At test time we threshold the model prediction \( f(\mathbf{x}, \mathbf{y}_g)^i \) to detect mistakes. We compare the performance of our network to the following baselines: 1. Predicting that all candidate words are always correct \( f_{\text{cor}} \equiv 1 \), or always incorrect \( f_{\text{wrong}} \equiv 0 \). 2. The prior probability of a word being correct based on the training data \( f_{\text{stat}}(y) = (\mathrm{P}[y \in \mathbf{y}_{\text{ref}} | y \in \mathbf{y}_g] > 0.5) \). We report word-level accuracy metrics in Table 1. While the model significantly improves over the baselines, the probability of correctly labeling a word as a mistake remains low (62.71%). 
The task of predicting mistakes is not easy, as previously shown in work on confidence estimation (Blatz et al. (2004); Ueffing & Ney (2007)). Also, one should bear in mind that this task cannot be solved with 100% accuracy since a sentence can be translated correctly in multiple different ways and we only have a single reference translation. In fact, our final refinement objective might be easier than error detection, as we do not need to detect all errors. We only need to identify some of the locations where a substitution would improve BLEU, and our strategy must also suggest those substitutions. This is the objective of the model introduced in the next section. 3 ATTENTION-BASED MODEL We introduce a model to predict modifications to a translation which can be trained on bilingual text. In §5 we discuss strategies to iteratively apply this model to its own output in order to improve a translation. <table> <tr> <th>Metric (%)</th> <th>\( f_{\text{cor}} \)</th> <th>\( f_{\text{wrong}} \)</th> <th>\( f_{\text{stat}} \)</th> <th>\( f \)</th> </tr> <tr> <td>Accuracy</td> <td>68.0</td> <td>32.0</td> <td>71.3</td> <td><b>76.0</b></td> </tr> <tr> <td>Recall</td> <td>0.00</td> <td><b>100.00</b></td> <td>36.0</td> <td>61.3</td> </tr> <tr> <td>Precision</td> <td><b>100.0</b></td> <td>32.0</td> <td>58.4</td> <td>62.7</td> </tr> <tr> <td>F1</td> <td>0.00</td> <td>48.4</td> <td>44.5</td> <td><b>62.0</b></td> </tr> </table> Table 1: Accuracy of the error detection model \( f \) compared to baselines on the concatenation of the WMT test sets from 2008 to 2015. For precision, recall and F1 we consider a positive prediction as labeling a word as a mistake. Baseline \( f_{\text{cor}} \) labels all words as correct, \( f_{\text{wrong}} \) labels all words as incorrect, and \( f_{\text{stat}} \) labels a word from \( \mathbf{y}_g \) based on the prior probability estimated on the training data. Our model \( \mathbf{F} \) takes as input a source sentence \( \mathbf{x} \) and a target sentence \( \mathbf{y} \), and outputs a distribution over the vocabulary for each target position, \( \mathbf{F}(\mathbf{x}, \mathbf{y}) \in [0, 1]^{|\mathbf{y}| \times |\mathcal{Y}|} \). For each position \( i \) and any word \( j \in \mathcal{Y} \), \( \mathbf{F}(\mathbf{x}, \mathbf{y})^{i,j} \) estimates \( P(y^i = j \mid \mathbf{x}, \mathbf{y}^{-i}) \), the probability of word \( j \) being at position \( i \) given the source and the target context \( \mathbf{y}^{-i} = (y^1, \ldots, y^{i-1}, y^{i+1}, \ldots, y^{|\mathbf{y}|}) \) surrounding \( i \). In other words, we learn a non-causal language model (Bengio et al., 2003) which is also conditioned on the source \( \mathbf{x} \). Architecture. We rely on a convolutional model with attention. The source sentence is embedded into distributional space via a lookup table, followed by convolutions and non-linearities. The target sentence is also embedded into distributional space via a lookup table, followed by a single convolution and a succession of linear layers and non-linearities. The target convolution weights are zeroed at the center so that the model does not have access to the center word. This means that the model observes a fixed-size context of length \( 2k \) for any target position \( i \), \( \mathbf{y}^{-i|k} = (y^{i-k}, \ldots, y^{i-1}, y^{i+1}, \ldots, y^{i+k}) \), where \( 2k + 1 \) refers to the convolution kernel width.
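The center-zeroing trick can be illustrated with a small numpy sketch. The kernel shapes and the naive sum-of-matrix-products convolution below are illustrative assumptions, not the paper's Torch implementation; the point is only that position \( i \)'s output is independent of \( y^i \).

```python
# Sketch of a 1-D target convolution whose center tap is zeroed (Section 3).
import numpy as np

rng = np.random.default_rng(1)
k, dim = 2, 4                      # context radius and embedding size; kernel width 2k+1
emb = rng.normal(size=(10, dim))   # embedded target sentence, one row per position
W = rng.normal(size=(2 * k + 1, dim, dim))
W[k] = 0.0                         # zero the center tap: position i never sees y^i

pad = np.zeros((k, dim))
padded = np.vstack([pad, emb, pad])
out = np.stack([
    sum(padded[i + t] @ W[t] for t in range(2 * k + 1))
    for i in range(len(emb))
])                                 # out[i] depends only on y^{i-k..i-1} and y^{i+1..i+k}
```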
The embedding and convolution operations above result in a vector \( \mathbf{S}^j \) representing each position \( j \) in the source sentence \( \mathbf{x} \) and a vector \( \mathbf{T}^i \) representing each target context \( \mathbf{y}^{-i|k} \). Given a target position \( i \), an attention module then takes these representations as input and outputs a weight for each source position \[ \alpha(i, j) = \frac{\exp \left( \mathbf{S}^j \cdot \mathbf{T}^i \right)}{\sum_{j'=1}^{|\mathbf{x}|} \exp \left( \mathbf{S}^{j'} \cdot \mathbf{T}^i \right)}. \] These weights correspond to dot-product attention scores (Luong et al., 2015; Rush et al., 2015). The attention weights allow the model to compute a source summary specific to each target context through a weighted sum, \[ \mathbf{a}(\mathbf{y}^{-i|k}, \mathbf{x}) = \sum_{j=1}^{|\mathbf{x}|} \alpha(i, j) \; \mathbf{S}^j . \] Finally, this summary \( \mathbf{a}(\mathbf{y}^{-i|k}, \mathbf{x}) \) is concatenated with the embedding of the target context \( \mathbf{y}^{-i|k} \) obtained from the target lookup table, \[ \mathbf{L}\left( \mathbf{y}^{-i|k} \right) = \left\{ \mathbf{L}^j, j \in \mathbf{y}^{-i|k} \right\}, \] and a multilayer perceptron followed by a softmax computes \( \mathbf{F}(\mathbf{x}, \mathbf{y})^i \) from \( \mathbf{a}(\mathbf{y}^{-i|k}, \mathbf{x}) \) and \( \mathbf{L}(\mathbf{y}^{-i|k}) \). Note that we could alternatively use \( \mathbf{T}^i \) instead of \( \mathbf{L}(\mathbf{y}^{-i|k}) \), but our preliminary validation experiments showed better results with the lookup table output. Training. The model is trained to maximize the (log) likelihood of the pairs \( (\mathbf{x}, \mathbf{y}_{\text{ref}}) \) from the training set. Testing. At test time the model is given \( (\mathbf{x}, \mathbf{y}_g) \), i.e., the source and the guess sentence. Similar to maximum likelihood training for left-to-right translation systems (Bahdanau et al., 2015), the model is therefore not exposed to the same type of context in training (reference contexts from \( \mathbf{y}_{\text{ref}} \)) and testing (guess contexts from \( \mathbf{y}_g \)). Discussion. Our model is similar to the attention-based translation approach of Bahdanau et al. (2015). In addition to using convolutions, the main difference is that we have access to both left and right target context \( \mathbf{y}^{-i|k} \) since we start from an initial guess translation. Words to the right are of course good predictors of the word at the current position. For instance, an early validation experiment with the setup from §6.1 showed a perplexity of 5.4 for this model, which compares to 13.9 for the same model trained with the left context only. 4 DUAL ATTENTION MODEL We introduce a dual attention architecture to also make use of the guess at training time. This contrasts with the model introduced in the previous section, where the guess is not used during training. Also, we are free to use the entire guess, including the center word, in contrast to the reference, where we have to remove the center word. At training time, the dual attention model takes 3 inputs, that is, the source, the guess and the reference. At test time, the reference input is replaced by the guess. Specifically, the model \( \mathbf{F}_{\text{dual}}(\mathbf{x}, \mathbf{y}_g, \mathbf{y}_{\text{ref}}) \in [0, 1]^{|\mathbf{y}_{\text{ref}}| \times |\mathcal{Y}|} \) estimates \( P(y_{\text{ref}}^i \mid \mathbf{x}, \mathbf{y}_g, \mathbf{y}_{\text{ref}}^{-i}) \) for each position \( i \) in the reference sentence. Architecture. The model builds upon the single attention model from the previous section by having two attention functions \( a \) with distinct parameters.
The first function \( a_{\text{source}} \) takes the source sentence \( x \) and the reference context \( y_{\text{ref}}^{-i} \) to produce the source summary for this context, \( a_{\text{source}}(y^{-i|k}, x) \), as in the single attention model. The second function \( a_{\text{guess}} \) takes the guess sentence \( y_g \) and the reference context \( y_{\text{ref}}^{-i} \) and produces a guess summary for this context, \( a_{\text{guess}}(y^{-i|k}, y_g) \). These two summaries are then concatenated with the lookup representation of the reference context \( L(y_{\text{ref}}^{-i|k}) \) and input to a final multilayer perceptron followed by a softmax. The reference lookup table contains the only parameters shared by the two attention functions. Training. This model is trained similarly to the single attention model, the only difference being the conditioning on the guess \( y_g \). Testing. At test time, the reference is unavailable and we replace \( y_{\text{ref}} \) with \( y_g \), i.e., the model is given \( (x, y_g, y_g^{-i|k}) \) to make a prediction at position \( i \). In this case, the distribution shift when going from training to testing is less drastic than in §3, and the model retains access to the whole \( y_g \) via attention. Discussion. Compared to the single attention model (§3), this model reduces perplexity from 5.4 to 4.1 on our validation set. Since the dual attention model can attend to all guess words, it can copy any guess word if necessary. In our dataset, 68% of guess words are in the reference and can therefore be copied. This also means that for the remaining 32% of reference tokens the model should not copy. Instead, the model should propose a substitution by itself (§6.1). During testing, the fact that the guess is input twice \( (x, y_g, y_g^{-i|k}) \) means that the guess and the prediction context always match. This makes the model more conservative in its predictions, suggesting tokens from \( y_g \) more often than the single attention model. However, as we show in §6, this turns out to be beneficial in our setting. 5 ITERATIVE REFINEMENT The models in §3 and §4 suggest word substitutions for each position in the candidate translation \( y_g \) given the current surrounding context. Applying a single substitution changes the context of the surrounding words and requires updating the model predictions. We therefore perform multiple rounds of substitution. At each round, the model computes its predictions, then our refinement strategy selects a substitution and performs it, unless the strategy decides that it can no longer improve the target sentence. This means that the refinement procedure should be able to (i) prioritize the suggested substitutions, and (ii) decide when to stop the iterative process. We determine the best edit for each position \( i \) in \( y_g \) by selecting the word with the highest probability estimate: \( y_{\text{pred}}^i = \arg\max_{j \in \mathcal{Y}} F(x, y_g)^{i,j} \). Then we compute a confidence score \( s(y_g, y_{\text{pred}})^i \) in this prediction, possibly considering the prediction for the current guess word at the same position. These scores are used to select the next position to edit, \( i^* = \arg\max_i s(y_g, y_{\text{pred}})^i \), and to stop the iterative process, i.e., when the confidence falls below a validated threshold \( t \). We also limit the number of substitutions to a maximum of \( N \).
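The overall procedure can be summarized in a short Python sketch. The model \( F \) and the scoring heuristic \( s \) (the heuristics themselves are defined next) are passed in as functions; skipping positions whose best word already equals the guess word is an assumption of this sketch rather than a detail stated in the text.

```python
# Sketch of the iterative refinement loop of Section 5.
import numpy as np

def refine(x, y, F, score, t=0.5, N=5):
    """Iteratively substitute words in the guess translation y.

    F(x, y) returns an array of shape (len(y), vocab) of word probabilities;
    score(probs, y, y_pred) returns one confidence value per position. Both
    are placeholders for the models and heuristics of Sections 3-5.
    """
    y = list(y)
    for _ in range(N):                            # at most N substitutions
        probs = np.asarray(F(x, y))               # recompute after every edit
        y_pred = probs.argmax(axis=1)             # best substitution per position
        s = np.asarray(score(probs, y, y_pred), dtype=float)
        s[y_pred == np.asarray(y)] = -np.inf      # ignore no-op substitutions (our assumption)
        i = int(np.argmax(s))                     # i* = argmax_i s(y_g, y_pred)^i
        if s[i] < t:                              # stop when confidence falls below t
            break
        y[i] = int(y_pred[i])
    return y
```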
We consider different heuristics for \( s \):
• Score positions based on the model confidence in \( y_{pred}^i \), i.e., \( s_{conf}(y_g, y_{pred})^i = F(x, y_g)^{i, y_{pred}^i} \).
• Look for high confidence in the suggested substitution \( y_{pred}^i \) and low confidence in the current word \( y_g^i \): \( s_{pr}(y_g, y_{pred})^i = F(x, y_g)^{i, y_{pred}^i} \times \left(1 - F(x, y_g)^{i, y_g^i}\right) \).
• Train a simple binary classifier taking as input the score of the best predicted word and the current guess word: \( s_{cl}(y_g, y_{pred})^i = \mathrm{nn} \left( \log F(x, y_g)^{i, y_{pred}^i}, \log F(x, y_g)^{i, y_g^i} \right) \), where nn is a 2-layer neural network trained to predict whether a substitution leads to an increase in BLEU or not.
We compare the above strategies, different score thresholds \( t \), and the maximum number of modifications per sentence allowed \( N \) in §6.2. 6 EXPERIMENTS & RESULTS We first describe our experimental setup and then discuss our results. 6.1 EXPERIMENTAL SETUP Data. We perform our experiments on the German-to-English WMT15 task (Bojar et al., 2015) and benchmark our improvements against the output of a phrase-based translation system (PBMT; Koehn et al. 2007) on this language pair. In principle, our approach may start from any initial guess translation. We chose the output of a phrase-based system because it provides a good starting point that can be computed at high speed. This allows us to quickly generate guess translations for the millions of sentences in our training set. All data was lowercased and numbers were mapped to a single special “number” token. Infrequent tokens were mapped to an “unknown” token, which resulted in dictionaries of 120K and 170K words for English and German respectively. For training we used 3.5M sentence triples (source, reference, and the guess translation output by the PBMT system). A validation set of 180K triples was used for neural network hyper-parameter selection and learning rate scheduling. Finally, two 3K subsets of the validation set were used to train the classifier discussed in §5 and to select the best model architecture (single vs dual attention) and refinement heuristic. The initial guess translations were generated with phrase-based systems trained on the same training data as our refinement models. We decoded the training data with ten systems, each trained on 90% of the training data in order to decode the remaining 10%. This procedure avoids the bias of generating guess translations with a system that was trained on the same data. Implementation. All models were implemented in Torch (Collobert et al., 2011) and trained with stochastic gradient descent to minimize the cross-entropy loss. For the error detection model in §2 we used two temporal convolutions on top of the lookup table, each followed by a tanh non-linearity, to compute \( S(x) \) and \( T(y_g) \). The output dimension of each convolution was set to 256 and the receptive fields spanned 5 words, resulting in final outputs summarizing a context of 9 words. For the single attention model we set the shared context embedding dimension \( \dim S^j = \dim T^i = 512 \) and use a context of size \( k = 4 \) words to the left and to the right, resulting in a window of size 9 for the source and 8 for the target. The final multilayer perceptron has 2 layers with a hidden dimension of 512, see §3.
For the dual attention model we used 2-layer context embeddings (a convolution followed by a linear layer with a tanh in between), each with output dimension 512 and context size \( k = 4 \). The <table> <tr> <th>Model</th> <th>Heuristic</th> <th>Best \( t \)</th> <th>Best \( N \)</th> <th>BLEU</th> </tr> <tr> <td colspan="4">PBMT Baseline</td> <td>30.02</td> </tr> <tr> <td rowspan="4">F</td> <td>S<sub>conf</sub></td> <td>0.8</td> <td>3</td> <td>30.21</td> </tr> <tr> <td>S<sub>pr</sub></td> <td>0.7</td> <td>3</td> <td>30.20</td> </tr> <tr> <td>S<sub>cl</sub></td> <td>0.5</td> <td>1</td> <td>30.19</td> </tr> <tr> <td>S<sub>conf</sub></td> <td>0.6</td> <td>7</td> <td>30.32</td> </tr> <tr> <td rowspan="3">F<sub>dual</sub></td> <td>S<sub>conf</sub></td> <td>0.5</td> <td>5</td> <td><b>30.35</b></td> </tr> <tr> <td>S<sub>pr</sub></td> <td>0.5</td> <td>5</td> <td><b>30.35</b></td> </tr> <tr> <td>S<sub>cl</sub></td> <td>0.4</td> <td>2</td> <td>30.33</td> </tr> </table> Table 2: Validation BLEU (selecting substitution heuristics, decision thresholds \( t \), and maximum number of allowed modifications \( N \)). BLEU is reported on 3,041 validation sentences. <table> <tr> <th>newstest</th> <th>PBMT BLEU</th> <th>Our BLEU</th> <th>\( \Delta \)</th> </tr> <tr> <td>2008</td> <td>21.29</td> <td><b>21.60</b></td> <td>0.31</td> </tr> <tr> <td>2009</td> <td>20.42</td> <td><b>20.74</b></td> <td>0.32</td> </tr> <tr> <td>2010</td> <td>22.82</td> <td><b>23.13</b></td> <td>0.31</td> </tr> <tr> <td>2011</td> <td>21.43</td> <td><b>21.65</b></td> <td>0.22</td> </tr> <tr> <td>2012</td> <td>21.78</td> <td><b>22.10</b></td> <td>0.32</td> </tr> <tr> <td>2013</td> <td>24.99</td> <td><b>25.37</b></td> <td>0.38</td> </tr> <tr> <td>2014</td> <td>22.76</td> <td><b>23.07</b></td> <td>0.31</td> </tr> <tr> <td>2015</td> <td>24.40</td> <td><b>24.80</b></td> <td>0.40</td> </tr> <tr> <td>Mean</td> <td>22.49</td> <td><b>22.81</b></td> <td>0.32</td> </tr> </table> Table 3: Test BLEU on the WMT test sets after applying our refinement procedure. final multilayer perceptron has 2 layers with a hidden dimension of 1024, see §4. In this setup, we replaced dot-product attention with MLP attention (Bahdanau et al., 2015) as it performed better on the validation set. All weights were initialized randomly apart from the word embedding layers, which were pre-computed with Hellinger Principal Component Analysis (Lebret & Collobert, 2014) applied to the bilingual co-occurrence matrix constructed on the training set. The word embedding dimension was set to 256 for both languages and all models. 6.2 RESULTS Table 2 compares BLEU of the single and dual attention models (F vs F<sub>dual</sub>) over the validation set. It reports the performance for the best threshold \( t \in \{0, 0.1, \ldots, 1\} \) and the best maximum number of modifications per sentence \( N \in \{0, 1, \ldots, 10\} \) for the different refinement heuristics. The best performing configuration is F<sub>dual</sub> with the product-based heuristic S<sub>pr</sub> thresholded at \( t = 0.5 \) for up to \( N = 5 \) substitutions. We report test performance of this configuration in Table 3. Tables 4, 5 and 6 show examples of system outputs. Overall the system obtains a small but consistent improvement over all the test sets. Figure 1 (left) plots BLEU versus the number of allowed substitutions and Figure 1 (right) shows the percentage of actually modified tokens. The dual attention model (§4) outperforms single attention (§3).
Both models achieve most of the improvement by making only 1-2 substitutions per sentence. Thereafter only very few substitutions are made, with little impact on BLEU. Figure 1 (right) shows that the models saturate quickly, indicating convergence of the refinement output to a state where the models have no more suggestions. To isolate the model contribution from the scoring heuristic, we replace the scoring heuristic with an oracle while keeping the rest of the refinement strategy the same. We consider two types of oracle: The full oracle takes the suggested substitution for each position and then selects which single position should be edited or whether to stop editing altogether. This oracle has the potential to find the largest BLEU improvement. The partial oracle does not select the position; it just takes the heuristic suggestion for the current step and decides whether to edit or stop the process. Notice that both oracles have very limited choice, as they are only able to perform substitutions suggested by our model. Figure 1: Left: BLEU as a function of the total number of substitutions allowed per sentence. Values are reported on a small 3K validation set for the single and dual attention models using the best scoring heuristic s and threshold t. Right: Percentage of modified tokens on the validation set as a function of the total number of substitutions allowed per sentence. All models modify fewer than 2.5% of tokens. Figure 2: BLEU as a function of the total number of substitutions allowed per sentence. Left: best dual-attention refinement strategy (Dual_product) versus two oracles. The full oracle (Dual_full_oracle) accepts as input \( y_{pred} \) and selects a single \( i \) to substitute \( y^i_g := y^i_{pred} \). The partial oracle (Dual_partial_oracle) lets the model choose the position as well (\( i := \arg\max_{1 \leq i \leq |y_g|} s(y_g, y_{pred})^i \)) but has the ability to prevent the substitution \( y^i_g := y^i_{pred} \) if it does not improve BLEU. Right: same for the best single attention setup. Figure 2 reports the performance of our best single and dual attention models compared to both oracles on the validation set; Figure 3 shows the corresponding number of substitutions. The full and partial oracles result in an improvement of +1.7 and +1.09 BLEU over the baseline in the dual attention setting (compared to +0.35 with \( s_{pr} \)). In the single-attention setup the oracles yield higher improvements (+2.37 and +1.3) and also perform more substitutions. This supports our earlier conjecture (§4) that \( F_{dual} \) is more conservative and prone to copying words from the guess \( y_g \) compared to the single attention model. While helpful in validation, the cautious nature of the dual model restricts the options of the oracle. We make several observations. First, word-prediction models provide high-quality substitutions \( y_{pred} \) that can lead to significant improvements in BLEU (even though both oracles are limited in their choice of \( y_{pred} \)). This is supported by the simple heuristic \( s_{conf} \) performing very close to more sophisticated strategies (Table 2).
Figure 3: Percentage of modified tokens as a function of the total number of substitutions allowed per sentence for the dual attention model (left) and the single attention model (right) compared to the full and partial oracles (cf. Figure 2). Second, it is important to have a good confidence estimate on whether a substitution will improve BLEU or not. The full oracle, which yields +1.7 BLEU, estimates the benefit of a real-valued confidence measure that replaces the scoring heuristic s. The partial oracle, yielding +1.09 BLEU, assesses the benefit of a binary-valued confidence measure. The latter oracle can only prevent our model from making a BLEU-damaging substitution. However, confidence estimation is a difficult task, as we found in §2. Finally, we demonstrate that a significant improvement in BLEU can be achieved through very few substitutions. The full and partial oracles modify only 1.69% and 0.99% of tokens, or 0.4 and 0.24 modifications per sentence, respectively. Of course, oracle substitution assumes access to the reference, which is not available at test time. At the same time, our oracle is more likely to generate fluent sentences since it only has access to substitutions deemed likely by the model, as opposed to an unrestricted oracle that is more likely to suggest improvements leading to unreasonable sentences. Note that our oracles only allow substitutions (no deletions or insertions), and only those that raise BLEU in a monotonic fashion, with each single refinement improving the score of the previous translation. 7 CONCLUSION AND FUTURE WORK We present a simple iterative decoding scheme for machine translation which is motivated by the inability of existing models to revisit incorrect decoding decisions made in the past. Our models improve an initial guess translation via simple word substitutions over several rounds. At each round, the model has access to the source as well as the output of the previous round, which is an entire translation of the source. This is different from existing decoding algorithms, which make predictions based on a limited partial translation and are unable to revisit previous erroneous decoding decisions. Our results increase translation accuracy by up to 0.4 BLEU on WMT15 German-English translation while modifying only 0.6 words per sentence on average. In our experimental setup we start with the output of a phrase-based translation system, but our model is general enough to deal with arbitrary guess translations. We see several avenues for future work. One is to experiment with different initial guess translations, such as the output of a neural translation system, or even the result of a simple dictionary-based word-by-word translation scheme. One can also envision editing a number of guess translations simultaneously by expanding the dual attention mechanism to attend over multiple guesses. So far we have only experimented with word substitution; one may add deletion, insertion, or even swaps of single or multi-word units. Finally, the dual-attention model in §4 may present a good starting point for neural multi-source translation (Schroeder et al., 2009). ACKNOWLEDGMENTS The authors wish to thank Marc’Aurelio Ranzato and Sumit Chopra for their advice and comments.
reject
Reject
5.25
7b2d3a72ca4a0a6a0a1f7d1008dc8556df395e87
iclr
2,017
IDENTITY MATTERS IN DEEP LEARNING Moritz Hardt Google Brain 1600 Amphitheatre Parkway, Mountain View, CA, 94043 m@mrtz.org Tengyu Ma Department of Computer Science Princeton University 35 Olden Street, Princeton, 08540 tengyu@cs.princeton.edu ABSTRACT An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as batch normalization, but was also key to the immense success of residual networks. In this work, we put the principle of identity parameterization on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLU activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLU activations, but no batch normalization, dropout, or max pooling. Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks. 1 INTRODUCTION Traditional convolutional neural networks for image classification, such as AlexNet (Krizhevsky et al. (2012)), are parameterized in such a way that when all trainable weights are 0, a convolutional layer represents the 0-mapping. Moreover, the weights are initialized symmetrically around 0. This standard parameterization makes it non-trivial for a convolutional layer trained with stochastic gradient methods to preserve features that were already good. Put differently, such convolutional layers cannot easily converge to the identity transformation at training time. This shortcoming was observed and partially addressed by Ioffe & Szegedy (2015) through batch normalization, i.e., layer-wise whitening of the input with a learned mean and covariance. But the idea remained somewhat implicit until residual networks (He et al. (2015); He et al. (2016)) explicitly introduced a reparameterization of the convolutional layers such that when all trainable weights are 0, the layer represents the identity function. Formally, for an input \( x \), each residual layer has the form \( x + h(x) \), rather than \( h(x) \). This simple reparameterization allows for much deeper architectures, largely avoiding the problem of vanishing (or exploding) gradients. Residual networks, and subsequent architectures that use the same parameterization, have since then consistently achieved state-of-the-art results on various computer vision benchmarks such as CIFAR10 and ImageNet. 1.1 OUR CONTRIBUTIONS In this work, we consider identity parameterizations from a theoretical perspective, while translating some of our theoretical insight back into experiments. Loosely speaking, our first result underlines how identity parameterizations make optimization easier, while our second result shows the same is true for representation. Linear residual networks.
Since general non-linear neural networks are beyond the reach of current theoretical methods in optimization, we consider the case of deep linear networks as a simplified model. A linear network represents an arbitrary linear map as a sequence of matrices \(A_\ell \cdots A_2 A_1\). The objective function is \( \mathbb{E} \| y - A_\ell \cdots A_1 x \| ^2 \), where \( y = Rx \) for some unknown linear transformation \( R \) and \( x \) is drawn from a distribution. Such linear networks have been studied actively in recent years as a stepping stone toward the general non-linear case (see Section 1.2). Even though \( A_\ell \cdots A_1 \) is just a linear map, the optimization problem over the factored variables \((A_\ell, \ldots, A_1)\) is non-convex. In analogy with residual networks, we will instead parameterize the objective function as \[ \min_{A_1, \ldots, A_\ell} \mathbb{E} \| y - (I + A_\ell) \cdots (I + A_1)x \| ^2 . \] (1.1) To give some intuition, when the depth \( \ell \) is large enough, we can hope that the target function \( R \) has a factored representation in which each matrix \( A_i \) has small norm. Any symmetric positive semidefinite matrix \( O \) can, for example, be written as a product \( O = O_\ell \cdots O_1 \), where each \( O_i = O^{1/\ell} \) is very close to the identity for large \( \ell \), so that \( A_i = O_i - I \) has small spectral norm. We first prove that an analogous claim is true for all linear transformations \( R \). Specifically, we prove that for every linear transformation \( R \), there exists a global optimizer \((A_1, \ldots, A_\ell)\) of (1.1) such that for large enough depth \( \ell \), \[ \max_{1 \leq i \leq \ell} \| A_i \| \leq O(1/\ell). \] (1.2) Here, \( \| A \| \) denotes the spectral norm of \( A \). The constant factor depends on the conditioning of \( R \). We give the formal statement in Theorem 2.1. The theorem has the interesting consequence that as the depth increases, smaller norm solutions exist and hence regularization may offset the increase in parameters. Having established the existence of small norm solutions, our main result on linear residual networks shows that the objective function (1.1) is, in fact, easy to optimize when all matrices have sufficiently small norm. More formally, letting \( A = (A_1, \ldots, A_\ell) \) and \( f(A) \) denote the objective function in (1.1), we can show that the gradient of \( f \) vanishes only when \( f(A) = 0 \), provided that \( \max_i \| A_i \| \leq O(1/\ell) \). See Theorem 2.2. This result implies that linear residual networks have no critical points other than the global optimum. In contrast, for standard linear neural networks we only know, by work of Kawaguchi (2016), that these networks have no local optima other than the global optimum; this does not rule out other critical points. In fact, setting \( A_i = 0 \) will always lead to a bad critical point in the standard parameterization. Universal finite sample expressivity. Going back to non-linear residual networks with ReLU activations, we can ask: How expressive are deep neural networks that are solely based on residual layers with ReLU activations? To answer this question, we give a very simple construction showing that such residual networks have perfect finite sample expressivity. In other words, a residual network with ReLU activations can easily express any function of a sample of size \( n \), provided that it has sufficiently more than \( n \) parameters. Note that this requirement is easily met in practice.
On CIFAR 10 (\( n = 50000 \)), for example, successful residual networks often have more than \( 10^6 \) parameters. More formally, for a data set of size \( n \) with \( r \) classes, our construction requires \( O(n \log n + r^2) \) parameters. Theorem 3.2 gives the formal statement. Each residual layer in our construction is of the form \( x + V \mathrm{ReLU}(Ux) \), where \( U \) and \( V \) are linear transformations. These layers are significantly simpler than standard residual layers, which typically have two ReLU activations as well as two instances of batch normalization. The power of all-convolutional residual networks. Directly inspired by the simplicity of our expressivity result, we experiment with a very similar architecture on the CIFAR10, CIFAR100, and ImageNet data sets. Our architecture is merely a chain of convolutional residual layers each with a single ReLU activation, but without batch normalization, dropout, or max pooling as are common in standard architectures. The last layer is a fixed random projection that is not trained. In line with our theory, the convolutional weights are initialized near 0, using Gaussian noise mainly as a symmetry breaker. The only regularizer is standard weight decay (\( \ell_2 \)-regularization) and there is no need for dropout. Despite its simplicity, our architecture reaches 6.38% top-1 classification error on the CIFAR10 benchmark (with standard data augmentation). This is competitive with the best residual network reported in He et al. (2015), which achieved 6.43%. Moreover, it improves upon the performance of the previous best all-convolutional network, 7.25%, achieved by Springenberg et al. (2014). Unlike ours, this previous all-convolutional architecture additionally required dropout and a non-standard preprocessing (ZCA) of the entire data set. Our architecture also improves significantly upon Springenberg et al. (2014) on both Cifar100 and ImageNet. 1.2 RELATED WORK Since the advent of residual networks (He et al. (2015); He et al. (2016)), most state-of-the-art networks for image classification have adopted a residual parameterization of the convolutional layers. Further impressive improvements were reported by Huang et al. (2016) with a variant of residual networks, called dense nets. Rather than adding the original input to the output of a convolutional layer, these networks preserve the original features directly by concatenation. In doing so, dense nets are also able to easily encode an identity embedding in a higher-dimensional space. It would be interesting to see if our theoretical results also apply to this variant of residual networks. There has been recent progress on understanding the optimization landscape of neural networks, though a comprehensive answer remains elusive. Experiments in Goodfellow et al. (2014) and Dauphin et al. (2014) suggest that the training objectives have a limited number of bad local minima with large function values. Work by Choromanska et al. (2015) draws an analogy between the optimization landscape of neural nets and that of the spin glass model in physics (Auffinger et al. (2013)). Soudry & Carmon (2016) showed that 2-layer neural networks have no bad differentiable local minima, but they didn’t prove that a good differentiable local minimum does exist. Baldi & Hornik (1989) and Kawaguchi (2016) show that linear neural networks have no bad local minima. 
In contrast, we show that the optimization landscape of deep linear residual networks has no bad critical point, which is a stronger and more desirable property. Our proof is also notably simpler, illustrating the power of re-parametrization for optimization. Our results also indicate that deeper networks may have more desirable optimization landscapes compared with shallower ones. 2 OPTIMIZATION LANDSCAPE OF LINEAR RESIDUAL NETWORKS Consider the problem of learning a linear transformation \( R : \mathbb{R}^d \to \mathbb{R}^d \) from noisy measurements \( y = Rx + \xi \), where \( \xi \sim \mathcal{N}(0, \mathrm{Id}_d) \) is a d-dimensional spherical Gaussian vector. Denoting by \( D \) the distribution of the input data \( x \), let \( \Sigma = \mathbb{E}_{x \sim D}[xx^\top] \) be its covariance matrix. There are, of course, many ways to solve this classical problem, but our goal is to gain insights into the optimization landscape of neural nets, and in particular, residual networks. We therefore parameterize our learned model by a sequence of weight matrices \( A_1, \ldots, A_\ell \in \mathbb{R}^{d \times d} \), \[ h_0 = x , \qquad h_j = h_{j-1} + A_j h_{j-1} , \qquad \hat{y} = h_\ell . \tag{2.1} \] Here \( h_1, \ldots, h_{\ell-1} \) are the \( \ell - 1 \) hidden layers and \( \hat{y} = h_\ell \) are the predictions of the learned model on input \( x \). More succinctly, we have \[ \hat{y} = (\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_1)x . \tag{2.2} \] It is easy to see that this model can express any linear transformation \( R \). We will use \( A \) as a shorthand for all of the weight matrices, that is, the \( \ell \times d \times d \)-dimensional tensor that contains \( A_1, \ldots, A_\ell \) as slices. Our objective function is the maximum likelihood estimator, \[ f(A, (x, y)) = \| \hat{y} - y \|^2 = \| (\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_1)x - Rx - \xi \|^2 . \] We will analyze the landscape of the population risk, defined as \[ f(A) := \mathbb{E}\left[ f(A, (x, y)) \right] . \] Recall that \( \|A_i\| \) is the spectral norm of \( A_i \). We define the norm \( \| \cdot \| \) for the tensor \( A \) as the maximum of the spectral norms of its slices, \[ \|A\| := \max_{1 \leq i \leq \ell} \|A_i\| . \] The first theorem of this section states that the objective function \( f \) has an optimal solution with small \( \| \cdot \| \)-norm, which is *inversely* proportional to the number of layers \( \ell \). Thus, when the architecture is deep, we can shoot for fairly small norm solutions. We define \( \gamma := \max\{|\log \sigma_{\max}(R)|, |\log \sigma_{\min}(R)|\} \). Here \( \sigma_{\min}(\cdot), \sigma_{\max}(\cdot) \) denote the least and largest singular values of \( R \) respectively. **Theorem 2.1.** *Suppose \( \ell \geq 3\gamma \). Then, there exists a global optimizer \( A^* \) of the population risk \( f(\cdot) \) with norm* \[ \|A^*\| \leq 2(\sqrt{\pi} + \sqrt{3\gamma})^2/\ell . \] Here \( \gamma \) should be thought of as a constant, since if \( R \) is too large (or too small), we can scale the data so that \( \sigma_{\min}(R) \leq 1 \leq \sigma_{\max}(R) \). Concretely, if \( \sigma_{\max}(R)/\sigma_{\min}(R) = \kappa \), then we can scale the outputs so that \( \sigma_{\min}(R) = 1/\sqrt{\kappa} \) and \( \sigma_{\max}(R) = \sqrt{\kappa} \). In this case, we have \( \gamma = \log \sqrt{\kappa} \), which remains a small constant even for a fairly large condition number \( \kappa \).
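As a concrete illustration of this parameterization, the following numpy sketch draws data from \( y = Rx + \xi \) and evaluates the empirical counterpart of \( f(A) \) for small-norm residual weights. All dimensions and the weight scale are arbitrary choices for illustration, not values used in the paper.

```python
# Sketch of the linear residual parameterization (2.1)-(2.2) and its empirical risk.
import numpy as np

rng = np.random.default_rng(0)
d, ell, n = 5, 10, 1000

R = rng.normal(size=(d, d))                 # unknown target transformation
X = rng.normal(size=(n, d))                 # inputs, one per row
Y = X @ R.T + rng.normal(size=(n, d))       # y = Rx + xi, xi ~ N(0, Id)

A = [0.01 * rng.normal(size=(d, d)) for _ in range(ell)]  # small-norm residual weights

def predict(A, X):
    H = X
    for Ai in A:                             # h_j = h_{j-1} + A_j h_{j-1}, applied row-wise
        H = H + H @ Ai.T
    return H

risk = np.mean(np.sum((Y - predict(A, X)) ** 2, axis=1))   # empirical version of f(A)
print(risk)
```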
We also point out that we made no attempt to optimize the constant factors here in the analysis. The proof of Theorem 2.1 is rather involved and is deferred to Section A. Given the observation of Theorem 2.1, we restrict our attention to analyzing the landscape of \( f(\cdot) \) in the set of \( A \) with \( \| \cdot \| \)-norm less than \( \tau \), \[ \mathcal{B}_\tau = \{ A \in \mathbb{R}^{\ell \times d \times d} : \|A\| \leq \tau \} . \] Here, by Theorem 2.1, the radius \( \tau \) should be thought of as being on the order of \( 1/\ell \). Our main theorem in this section claims that there is no bad critical point in the domain \( \mathcal{B}_\tau \) for any \( \tau < 1 \). Recall that a critical point has vanishing gradient. **Theorem 2.2.** *For any \( \tau < 1 \), any critical point \( A \) of the objective function \( f(\cdot) \) inside the domain \( \mathcal{B}_\tau \) must also be a global minimum.* Theorem 2.2 suggests that it is sufficient for the optimizer to converge to critical points of the population risk, since all the critical points are also global minima. Moreover, in addition to Theorem 2.2, we also have that any \( A \) inside the domain \( \mathcal{B}_\tau \) satisfies \[ \| \nabla f(A) \|_F^2 \geq 4\ell(1-\tau)^{\ell-1}\sigma_{\min}(\Sigma)^2(f(A) - C_{\mathrm{opt}}) . \tag{2.3} \] Here \( C_{\mathrm{opt}} \) is the global minimal value of \( f(\cdot) \) and \( \| \nabla f(A) \|_F \) denotes the Euclidean norm\footnote{That is, \( \|T\|_F := \sqrt{\sum_{ijk} T_{ijk}^2} \).} of the \( \ell \times d \times d \)-dimensional tensor \( \nabla f(A) \). Note that \( \sigma_{\min}(\Sigma) \) denotes the minimum singular value of \( \Sigma \). Equation (2.3) says that the gradient has fairly large norm compared to the error, which guarantees convergence of gradient descent to a global minimum (Karimi et al. (2016)) if the iterates stay inside the domain \( \mathcal{B}_\tau \); the latter is not guaranteed by Theorem 2.2 by itself. Towards proving Theorem 2.2, we start off with a simple claim that simplifies the population risk. We also use \( \| \cdot \|_F \) to denote the Frobenius norm of a matrix. **Claim 2.3.** *In the setting of this section, we have* \[ f(A) = \| ((\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_1) - R ) \Sigma^{1/2} \|_F^2 + C . \tag{2.4} \] *Here \( C \) is a constant that doesn’t depend on \( A \), and \( \Sigma^{1/2} \) denotes the square root of \( \Sigma \), that is, the unique symmetric PSD matrix \( B \) that satisfies \( B^2 = \Sigma \).* *Proof of Claim 2.3.* Let \( \mathrm{tr}(A) \) denote the trace of the matrix \( A \). Let \( E = (\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_1) - R \). Recalling the definition of \( f(A) \) and using equation (2.2), we have \[ \begin{align*} f(A) &= \mathbb{E} \left[ \| Ex - \xi \|^2 \right] & \text{(by equation (2.2))} \\ &= \mathbb{E} \left[ \| Ex \|^2 \right] + \mathbb{E} \left[ \| \xi \|^2 \right] - 2\,\mathbb{E} \left[ \langle Ex, \xi \rangle \right] \\ &= \mathbb{E} \left[ \mathrm{tr}(Ex x^\top E^\top) \right] + \mathbb{E} \left[ \| \xi \|^2 \right] & \text{(since \( \mathbb{E}[\langle Ex, \xi \rangle] = \mathbb{E}[\langle Ex, \mathbb{E}[\xi \mid x] \rangle] = 0 \))} \\ &= \mathrm{tr} \left( E\, \mathbb{E} \left[ xx^\top \right] E^\top \right) + C & \text{(where \( C = \mathbb{E}[\|\xi\|^2] \))} \\ &= \mathrm{tr}(E \Sigma E^\top) + C = \| E \Sigma^{1/2} \|_F^2 + C . & \text{(since \( \mathbb{E} \left[ xx^\top \right] = \Sigma \))} \end{align*} \] \hfill \( \Box \) Next we compute the gradients of the objective function \( f(\cdot) \) by straightforward matrix calculus.
We defer the full proof to Section A. **Lemma 2.4.** *The gradients of \( f(\cdot) \) can be written as \[ \frac{\partial f}{\partial A_i} = 2(\mathrm{Id} + A_{\ell}^\top) \cdots (\mathrm{Id} + A_{i+1}^\top) E \Sigma (\mathrm{Id} + A_{i-1}^\top) \cdots (\mathrm{Id} + A_1^\top), \tag{2.5} \] where \( E = (\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_1) - R \).* Now we are ready to prove Theorem 2.2. The key observation is that each matrix \( A_j \) has small norm and cannot cancel the identity matrix. Therefore, the gradient in equation (2.5) is a product of non-degenerate matrices, except for the error matrix \( E \). Hence, if the gradient vanishes, the only possibility is that the matrix \( E \) vanishes, which in turn implies that \( A \) is an optimal solution. *Proof of Theorem 2.2.* Using Lemma 2.4, we have \[ \left\| \frac{\partial f}{\partial A_i} \right\|_F = 2 \| (\mathrm{Id} + A_{\ell}^\top) \cdots (\mathrm{Id} + A_{i+1}^\top) E \Sigma (\mathrm{Id} + A_{i-1}^\top) \cdots (\mathrm{Id} + A_1^\top) \|_F \tag{by Lemma 2.4} \] \[ \geq 2 \prod_{j \neq i} \sigma_{\min}(\mathrm{Id} + A_j^\top) \cdot \sigma_{\min}(\Sigma) \|E\|_F \tag{by Claim C.2} \] \[ \geq 2(1-\tau)^{\ell-1} \sigma_{\min}(\Sigma) \|E\|_F . \tag{since \( \sigma_{\min}(\mathrm{Id} + A) \geq 1 - \|A\| \)} \] It follows that \[ \| \nabla f(A) \|_F^2 = \sum_{i=1}^\ell \left\| \frac{\partial f}{\partial A_i} \right\|_F^2 \geq 4\ell(1-\tau)^{\ell-1} \sigma_{\min}(\Sigma)^2 \|E\|_F^2 \] \[ \geq 4\ell(1-\tau)^{\ell-1} \sigma_{\min}(\Sigma)^2 (f(A) - C) \tag{by the definition of \( E \) and Claim 2.3} \] \[ \geq 4\ell(1-\tau)^{\ell-1} \sigma_{\min}(\Sigma)^2 (f(A) - C_{\text{opt}}) . \tag{since \( C_{\text{opt}} = \min_A f(A) \geq C \) by Claim 2.3} \] This completes the proof of equation (2.3). Finally, if \( A \) is a critical point, namely \( \nabla f(A) = 0 \), then by equation (2.3) we have that \( f(A) = C_{\text{opt}} \). That is, \( A \) is a global minimum. \( \square \) 3 REPRESENTATIONAL POWER OF RESIDUAL NETWORKS In this section we characterize the finite-sample expressivity of residual networks. We consider residual layers with a single ReLU activation and no batch normalization. The basic residual building block is a function \( \mathcal{T}_{U,V,s}(\cdot) : \mathbb{R}^k \to \mathbb{R}^k \) that is parameterized by two weight matrices \( U \in \mathbb{R}^{k \times k}, V \in \mathbb{R}^{k \times k} \) and a bias vector \( s \in \mathbb{R}^k \), \[ \mathcal{T}_{U,V,s}(h) = V\,\mathrm{ReLU}(Uh + s) . \] (3.1) A residual network is composed of a sequence of such residual blocks. In comparison with the full pre-activation architecture in He et al. (2016), we remove two batch normalization layers and one ReLU layer in each building block. We assume the data has \( r \) labels, encoded as \( r \) standard basis vectors in \( \mathbb{R}^r \), denoted by \( e_1, \ldots, e_r \). We have \( n \) training examples \( (x^{(1)}, y^{(1)}), \ldots, (x^{(n)}, y^{(n)}) \), where \( x^{(i)} \in \mathbb{R}^d \) denotes the \( i \)-th data point and \( y^{(i)} \in \{e_1, \ldots, e_r\} \) denotes the \( i \)-th label. Without loss of generality we assume the data are normalized so that \( \|x^{(i)}\| = 1 \). We also make the mild assumption that no two data points are very close to each other.
**Assumption 3.1.** *We assume that for every \( 1 \leq i < j \leq n \), we have \( \|x^{(i)} - x^{(j)}\|^2 \geq \rho \) for some absolute constant \( \rho > 0 \).* Images, for example, can always be imperceptibly perturbed in pixel space so as to satisfy this assumption for a small but constant \( \rho \). Under this mild assumption, we prove that residual networks have the power to express any possible labeling of the data as long as the number of parameters is a logarithmic factor larger than \( n \). Theorem 3.2. Suppose the training examples satisfy Assumption 3.1. Then, there exists a residual network \( N \) (specified below) with \( O(n \log n + r^2) \) parameters that perfectly expresses the training data, i.e., for all \( i \in \{1, \ldots, n\} \), the network \( N \) maps \( x^{(i)} \) to \( y^{(i)} \). It is common in practice that \( n > r^2 \), as is for example the case for the ImageNet data set, where \( n > 10^6 \) and \( r = 1000 \). We construct the following residual net using the building blocks of the form \( \mathcal{T}_{U,V,s} \) as defined in equation (3.1). The network consists of \( \ell + 1 \) hidden layers \( h_0, \ldots, h_\ell \), and the output is denoted by \( \hat{y} \in \mathbb{R}^r \). The first layer, with weight matrix \( A_0 \), maps the \( d \)-dimensional input to a \( k \)-dimensional hidden variable \( h_0 \). Then we apply \( \ell \) layers of building block \( \mathcal{T} \) with weight matrices \( A_j, B_j \in \mathbb{R}^{k \times k} \). Finally, we apply another layer to map the hidden variable \( h_\ell \) to the label \( \hat{y} \) in \( \mathbb{R}^r \). Mathematically, we have \[ h_0 = A_0 x , \] \[ h_j = h_{j-1} + \mathcal{T}_{A_j, B_j, b_j}(h_{j-1}), \quad \forall j \in \{1, \ldots, \ell\} , \] \[ \hat{y} = h_\ell + \mathcal{T}_{A_{\ell+1}, B_{\ell+1}, s_{\ell+1}}(h_\ell) . \] We note that here \( A_{\ell+1} \in \mathbb{R}^{k \times r} \) and \( B_{\ell+1} \in \mathbb{R}^{r \times r} \) so that the dimensions are compatible. We assume the number of labels \( r \) and the input dimension \( d \) are both smaller than \( n \), which is safely true in practical applications.\footnote{In computer vision, typically \( r \) is less than \( 10^3 \) and \( d \) is less than \( 10^5 \) while \( n \) is larger than \( 10^6 \).} The hyperparameter \( k \) will be chosen to be \( O(\log n) \) and the number of layers is chosen to be \( \ell = \lceil n / k \rceil \). Thus, the first layer has \( dk \) parameters, each of the middle \( \ell \) building blocks contains \( 2k^2 \) parameters, and the final building block has \( kr + r^2 \) parameters. Hence, the total number of parameters is \( O(kd + \ell k^2 + rk + r^2) = O(n \log n + r^2) \). Towards constructing the network \( N \) of the form above that fits the data, we first take a random matrix \( A_0 \in \mathbb{R}^{k \times d} \) that maps all the data points \( x^{(i)} \) to vectors \( h_0^{(i)} := A_0 x^{(i)} \). Here we will use \( h_j^{(i)} \) to denote the \( j \)-th layer of hidden variable of the \( i \)-th example. By the Johnson-Lindenstrauss theorem (Johnson & Lindenstrauss (1984), or see Wikipedia (2016)), with good probability, the resulting vectors \( h_0^{(i)} \) continue to satisfy Assumption 3.1 (with slightly different scaling and a larger constant \( \rho \)), that is, any two vectors \( h_0^{(i)} \) and \( h_0^{(j)} \) are not very correlated. Then we construct \( \ell \) middle layers that map \( h_0^{(i)} \) to \( h_\ell^{(i)} \) for every \( i \in \{1, \ldots, n\} \).
These vectors \( h_\ell^{(i)} \) will be clustered into \( r \) groups according to the labels, though they live in \( \mathbb{R}^k \) instead of \( \mathbb{R}^r \) as desired. Concretely, we design these cluster centers by picking \( r \) random unit vectors \( q_1, \ldots, q_r \) in \( \mathbb{R}^k \). We view them as the surrogate label vectors in dimension \( k \) (note that \( k \) is potentially much smaller than \( r \)). In high dimensions (technically, if \( k > 4 \log r \)), random unit vectors \( q_1, \ldots, q_r \) are pairwise nearly orthogonal, with inner products less than 0.5 (with good probability). We associate the \( i \)-th example with the target surrogate label vector \( v^{(i)} \) defined as follows, \[ \text{if } y^{(i)} = e_j, \text{ then } v^{(i)} = q_j . \] (3.2) Then we will construct the matrices \( (A_1, B_1), \ldots, (A_\ell, B_\ell) \) such that the first \( \ell \) layers of the network map vector \( h_0^{(i)} \) to the surrogate label vector \( v^{(i)} \). Mathematically, we will construct \( (A_1, B_1), \ldots, (A_\ell, B_\ell) \) such that \[ \forall i \in \{1, \ldots, n\}, \; h_\ell^{(i)} = v^{(i)} . \] (3.3) Finally we will construct the last layer \( \mathcal{T}_{A_{\ell+1}, B_{\ell+1}, b_{\ell+1}} \) so that it maps the vectors \( q_1, \ldots, q_r \in \mathbb{R}^k \) to \( e_1, \ldots, e_r \in \mathbb{R}^r \), \[ \forall j \in \{1, \ldots, r\}, \; q_j + \mathcal{T}_{A_{\ell+1}, B_{\ell+1}, b_{\ell+1}}(q_j) = e_j . \] (3.4) Putting these together, we have by definition (3.2) and equation (3.3) that for every \( i \), if the label \( y^{(i)} \) is \( e_j \), then \( h_\ell^{(i)} \) will be \( q_j \). Then by equation (3.4), we have that \( \hat{y}^{(i)} = q_j + \mathcal{T}_{A_{\ell+1}, B_{\ell+1}, b_{\ell+1}}(q_j) = e_j \). Hence we obtain that \( \hat{y}^{(i)} = y^{(i)} \). The key part of this plan is the construction of the middle \( \ell \) layers of weight matrices so that \( h_\ell^{(i)} = v^{(i)} \). We encapsulate this into the following informal lemma. The formal statement and the full proof are deferred to Section B. Lemma 3.3 (Informal version of Lemma B.2). In the setting above, for (almost) arbitrary vectors \( h_0^{(1)}, \ldots, h_0^{(n)} \) and \( v^{(1)}, \ldots, v^{(n)} \in \{q_1, \ldots, q_r\} \), there exist weight matrices \( (A_1, B_1), \ldots, (A_\ell, B_\ell) \) such that \[ \forall i \in \{1, \ldots, n\}, \quad h_\ell^{(i)} = v^{(i)} . \] We briefly sketch the proof of the lemma to provide intuition, and defer the full proof to Section B. The operation that each residual block applies to the hidden variable can be abstractly written as \[ \hat{h} = h + \mathcal{T}_{U,V,s}(h) , \tag{3.5} \] where \( h \) corresponds to the hidden variable before the block and \( \hat{h} \) corresponds to that after. We claim that for an (almost) arbitrary sequence of vectors \( h^{(1)}, \ldots, h^{(n)} \), there exists \( \mathcal{T}_{U,V,s}(\cdot) \) such that operation (3.5) transforms \( k \) of the vectors \( h^{(i)} \) to an arbitrary set of \( k \) other vectors that we can freely choose, while maintaining the values of the remaining \( n - k \) vectors. Concretely, for any subset \( S \) of size \( k \), and any desired vectors \( v^{(i)} (i \in S) \), there exist \( U, V, s \) such that \[ v^{(i)} = h^{(i)} + \mathcal{T}_{U,V,s}(h^{(i)}) \quad \forall i \in S \\ h^{(i)} = h^{(i)} + \mathcal{T}_{U,V,s}(h^{(i)}) \quad \forall i \notin S \tag{3.6} \] This claim is formalized in Lemma B.1.
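The following toy numpy instance of claim (3.6) moves a single selected vector (one of the \( k \) handled per layer) to an arbitrary target while leaving the other vectors untouched. The particular choice of \( U, V, s \), using one ReLU unit that fires only for the selected vector, is our own construction for illustration, not the one in Lemma B.1.

```python
# Toy instance of (3.6): move h^(sel) to v, keep all other vectors fixed.
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 4
H = rng.normal(size=(n, k))
H /= np.linalg.norm(H, axis=1, keepdims=True)    # unit-norm, pairwise distinct vectors

sel, v = 0, rng.normal(size=k)                   # selected index and its target vector

u = H[sel]
a = max(H[i] @ u for i in range(n) if i != sel)  # strictly below <u, h^(sel)> = 1
s_val = -(a + 1.0) / 2.0                         # bias: ReLU fires only for h^(sel)
c = 1.0 + s_val                                  # activation value (1 - a)/2 > 0

U = np.zeros((k, k)); U[0] = u
s = np.full(k, -1e9); s[0] = s_val               # silence the unused ReLU units
V = np.zeros((k, k)); V[:, 0] = (v - H[sel]) / c

T = lambda h: V @ np.maximum(U @ h + s, 0.0)     # the block V ReLU(Uh + s) from (3.1)
out = H + np.array([T(h) for h in H])
assert np.allclose(out[sel], v)                  # moved to the target
assert np.allclose(out[1:], H[1:])               # all other vectors unchanged
```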
We can use this claim repeatedly to construct \( \ell \) layers of building blocks, each of which transforms a subset of \( k \) vectors in \( \{h_0^{(1)}, \ldots, h_0^{(n)}\} \) to the corresponding vectors in \( \{v^{(1)}, \ldots, v^{(n)}\} \) and maintains the values of the others. Recall that we have \( \ell = \lceil n/k \rceil \) layers; therefore, after \( \ell \) layers, all the vectors \( h_0^{(i)} \) are transformed to \( v^{(i)} \), which completes the proof sketch. 4 POWER OF ALL-CONVOLUTIONAL RESIDUAL NETWORKS Inspired by our theory, we experimented with all-convolutional residual networks on standard image classification benchmarks. 4.1 CIFAR10 AND CIFAR100 Our architectures for CIFAR10 and CIFAR100 are identical except for the final dimension corresponding to the number of classes, 10 and 100, respectively. In Table 1, we outline our architecture. Each residual block has the form \( x + C_2(\mathrm{ReLU}(C_1 x)) \), where \( C_1, C_2 \) are convolutions of the specified dimension (kernel width, kernel height, number of input channels, number of output channels). The second convolution in each block always has stride 1, while the first may have stride 2 where indicated. In cases where the transformation is not dimensionality-preserving, the original input \( x \) is adjusted using average pooling and padding, as is standard in residual layers. We trained our models with the TensorFlow framework, using a momentum optimizer with momentum 0.9 and batch size 128. All convolutional weights are trained with weight decay 0.0001. The initial learning rate is 0.05, which drops by a factor of 10 at 30000 and at 50000 steps. The model reaches peak performance at around 50k steps, which takes about 24h on a single NVIDIA Tesla K40 GPU. Our code can be easily derived from an open source implementation\footnote{https://github.com/tensorflow/models/tree/master/resnet} by removing batch normalization and adjusting the residual components and model architecture. An important departure from that code is that we initialize a residual convolutional layer of kernel size \( k \times k \) and \( c \) output channels using a random normal initializer of standard deviation \( \sigma = 1/(k^2 c) \), rather than the \( 1/(k\sqrt{c}) \) used for standard convolutional layers. This substantially smaller weight initialization helped training, while not affecting representation. A notable difference from standard models is that the last layer is not trained, but is simply a fixed random projection. On the one hand, this slightly improved test error (perhaps due to a regularizing effect). On the other hand, it means that the only trainable weights in our model are those of the convolutions, making our architecture “all-convolutional”. Table 1: Architecture for CIFAR10/100 (55 convolutions, 13.5M parameters) <table> <tr> <th>variable dimensions</th> <th>initial stride</th> <th>description</th> </tr> <tr> <td>3 × 3 × 3 × 16</td> <td>1</td> <td>1 standard conv</td> </tr> <tr> <td>3 × 3 × 16 × 64</td> <td>1</td> <td>9 residual blocks</td> </tr> <tr> <td>3 × 3 × 64 × 128</td> <td>2</td> <td>9 residual blocks</td> </tr> <tr> <td>3 × 3 × 128 × 256</td> <td>2</td> <td>9 residual blocks</td> </tr> <tr> <td>–</td> <td>–</td> <td>8 × 8 global average pool</td> </tr> <tr> <td>256 × num_classes</td> <td>–</td> <td>random projection (not trained)</td> </tr> </table>
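A framework-free numpy sketch of the residual block and the smaller initializer is given below. We read "\( \sigma = 1/k^2c \)" as \( 1/(k^2 c) \), and the naive stride-1 'same' convolution is purely illustrative; the paper's model is a TensorFlow implementation.

```python
# Sketch of the residual block x + C2(ReLU(C1 x)) with the small initializer.
import numpy as np

rng = np.random.default_rng(0)
k, c = 3, 16                                     # kernel size and channel count

def residual_init(k, c):
    # std 1/(k^2 c) as described above, versus the larger
    # 1/(k sqrt(c)) scaling used for standard conv layers
    return rng.normal(scale=1.0 / (k * k * c), size=(k, k, c, c))

C1, C2 = residual_init(k, c), residual_init(k, c)

def conv2d(x, W):                                # naive stride-1 'same' convolution
    kh, kw, ci, co = W.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))
    H, Wd = x.shape[0], x.shape[1]
    return np.array([[np.tensordot(xp[i:i + kh, j:j + kw], W, axes=3)
                      for j in range(Wd)] for i in range(H)])

def residual_block(x):                           # near-identity at initialization
    return x + conv2d(np.maximum(conv2d(x, C1), 0.0), C2)

x = rng.normal(size=(8, 8, c))
print(residual_block(x).shape)                   # (8, 8, 16)
```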
Figure 1: Convergence plots of the best model for CIFAR10 (left) and CIFAR100 (right). One step is a gradient update with batch size 128. An interesting aspect of our model is that despite its massive size of 13.59 million trainable parameters, the model does not seem to overfit too quickly, even though the data set size is 50000. In contrast, we found it difficult to train a model of this size with batch normalization without significant overfitting on CIFAR10. Table 2 summarizes the top-1 classification error of our models compared with a non-exhaustive list of previous works, restricted to the best previous all-convolutional result by Springenberg et al. (2014), the first residual results of He et al. (2015), and state-of-the-art results on CIFAR by Huang et al. (2016). All results are with standard data augmentation. Table 2: Top-1 classification error (%) on CIFAR10, CIFAR100, and ImageNet. <table> <tr> <th>Method</th> <th>CIFAR10</th> <th>CIFAR100</th> <th>ImageNet</th> <th>remarks</th> </tr> <tr> <td>All-CNN</td> <td>7.25</td> <td>32.39</td> <td>41.2</td> <td>all-convolutional, dropout, extra data processing</td> </tr> <tr> <td>Ours</td> <td>6.38</td> <td>24.64</td> <td>35.29</td> <td>all-convolutional</td> </tr> <tr> <td>ResNet</td> <td>6.43</td> <td>25.16</td> <td>19.38</td> <td></td> </tr> <tr> <td>DenseNet</td> <td>3.74</td> <td>19.25</td> <td>N/A</td> <td></td> </tr> </table> 4.2 IMAGENET The ImageNet ILSVRC 2012 data set has 1,281,167 data points with 1000 classes. Each image is resized to 224 × 224 pixels with 3 channels. We experimented with an all-convolutional variant of the 34-layer network in He et al. (2015). The original model achieved 25.03% classification error. Our derived model has 35.7M trainable parameters. We trained the model with a momentum optimizer (with momentum 0.9) and a learning rate schedule that decays by a factor of 0.94 every two epochs, starting from the initial learning rate 0.1. Training was distributed across 6 machines updating asynchronously. Each machine was equipped with 8 GPUs (NVIDIA Tesla K40) and used batch size 256 split across the 8 GPUs, so that each GPU updated with batches of size 32. In contrast to the situation with CIFAR10 and CIFAR100, on ImageNet our all-convolutional model performed significantly worse than its original counterpart. Specifically, we experienced a significant amount of underfitting, suggesting that a larger model would likely perform better. Despite this issue, our model still reached 35.29% top-1 classification error on the test set (50000 data points), and 14.17% top-5 test error, after 700,000 steps (about one week of training). While no longer state-of-the-art, this performance is significantly better than the 40.7% reported by Krizhevsky et al. (2012) and than the best previous all-convolutional architecture by Springenberg et al. (2014). We believe it is quite likely that a better learning rate schedule and hyperparameter settings could substantially improve on the preliminary performance reported here. 5 CONCLUSION Our theory underlines the importance of identity parameterizations when training deep artificial neural networks. An outstanding open problem is to extend our optimization result to the non-linear case where each residual layer has a single ReLU activation, as in our expressivity result. We conjecture that a result analogous to Theorem 2.2 is true for the general non-linear case. Unlike with the standard parameterization, we see no fundamental obstacle for such a result.
We hope our theory and experiments together help simplify the state of deep learning by aiming to explain its success with a few fundamental principles rather than a multitude of tricks that need to be delicately combined. We believe that much of the advances in image recognition can be achieved with residual convolutional layers and ReLU activations alone. This could lead to extremely simple (albeit deep) architectures that match the state-of-the-art on all image classification benchmarks.

REFERENCES

Antonio Auffinger, Gérard Ben Arous, and Jiří Černý. Random matrices and complexity of spin glasses. Communications on Pure and Applied Mathematics, 66(2):165–201, 2013.

P. Baldi and K. Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1):53–58, January 1989. ISSN 0893-6080. doi: 10.1016/0893-6080(89)90014-2. URL http://dx.doi.org/10.1016/0893-6080(89)90014-2.

Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In AISTATS, 2015.

Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933–2941, 2014.

I. J. Goodfellow, O. Vinyals, and A. M. Saxe. Qualitatively characterizing neural network optimization problems. ArXiv e-prints, December 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV, pp. 630–645, 2016. doi: 10.1007/978-3-319-46493-0_38. URL http://dx.doi.org/10.1007/978-3-319-46493-0_38.

Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. CoRR, abs/1608.06993, 2016. URL http://arxiv.org/abs/1608.06993.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 448–456, 2015. URL http://jmlr.org/proceedings/papers/v37/ioffe15.html.

William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26:189–206, 1984.

H. Karimi, J. Nutini, and M. Schmidt. Linear convergence of gradient and proximal-gradient methods under the Polyak–Łojasiewicz condition. ArXiv e-prints, August 2016.

K. Kawaguchi. Deep learning without poor local minima. ArXiv e-prints, May 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

D. Soudry and Y. Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. ArXiv e-prints, May 2016.

J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. ArXiv e-prints, December 2014.

Eric W. Weisstein. Normal matrix. From MathWorld–A Wolfram Web Resource, 2016. URL http://mathworld.wolfram.com/NormalMatrix.html.

Wikipedia. Johnson–Lindenstrauss lemma. Wikipedia, The Free Encyclopedia, 2016.
URL https://en.wikipedia.org/w/index.php?title=Johnson%E2%80%93Lindenstrauss_lemma&oldid=743553642.

A MISSING PROOFS IN SECTION 2

In this section, we give the complete proofs of Theorem 2.1 and Lemma 2.4, which were omitted in Section 2.

A.1 PROOF OF THEOREM 2.1

It turns out the proof is significantly easier if \( R \) is assumed to be a symmetric positive semidefinite (PSD) matrix, or if we allow the variables to be complex matrices. Here we first give a proof sketch for the PSD special case; readers can skip it and jump to the full proof below. For this special case we will also prove the stronger bound \( \|A^*\| \leq 3\gamma/\ell \).

When \( R \) is PSD, it can be diagonalized by an orthonormal matrix \( U \) in the sense that \( R = UZU^\top \), where \( Z = \mathrm{diag}(z_1, \ldots, z_d) \) is a diagonal matrix with non-negative diagonal entries \( z_1, \ldots, z_d \). Let \( A_1^* = \cdots = A_\ell^* = U \mathrm{diag}(z_i^{1/\ell})U^\top - \mathrm{Id} \); then we have
\[
(\mathrm{Id} + A_\ell^*) \cdots (\mathrm{Id} + A_1^*) = (U \mathrm{diag}(z_i^{1/\ell})U^\top)^\ell = U \mathrm{diag}(z_i^{1/\ell})^\ell U^\top \quad \text{(since } U^\top U = \mathrm{Id}) \\
= UZU^\top = R .
\]
We see that the network defined by \( A^* \) reconstructs the transformation \( R \), and therefore it is a global minimum of the population risk (formally, see Claim 2.3). Next, we verify that each \( A_j^* \) has small spectral norm:
\[
\|A_j^*\| = \|\mathrm{Id} - U \mathrm{diag}(z_i^{1/\ell})U^\top\| = \|U(\mathrm{Id} - \mathrm{diag}(z_i^{1/\ell}))U^\top\| = \|\mathrm{Id} - \mathrm{diag}(z_i^{1/\ell})\| \quad \text{(since } U \text{ is orthonormal)} \\
= \max_i |z_i^{1/\ell} - 1| . \tag{A.1}
\]
Since \( \sigma_{\min}(R) \leq z_i \leq \sigma_{\max}(R) \), we have \( |\log z_i| \leq \gamma \), and hence \( \ell \geq 3\gamma \geq |\log z_i| \). It follows that
\[
|z_i^{1/\ell} - 1| = |e^{(\log z_i)/\ell} - 1| \leq 3|(\log z_i)/\ell| \leq 3\gamma/\ell . \quad \text{(since } |e^x - 1| \leq 3|x| \text{ for all } |x| \leq 1)
\]
Then, using equation (A.1) and the equation above, we have \( \|A^*\| \leq \max_j \|A_j^*\| \leq 3\gamma/\ell \), which completes the proof for the special case.

Next we give the formal full proof of Theorem 2.1.

Proof of Theorem 2.1. We assume the dimension \( d \) is an even number; the odd case has a very similar proof and is left to the reader. Let \( R = UKV^\top \) be the singular value decomposition of \( R \), where \( U, V \) are two orthonormal matrices and \( K \) is a diagonal matrix. Since \( U \) is a normal matrix (that is, \( U \) satisfies \( UU^\top = U^\top U \)), by Claim C.1 we have that \( U \) can be block-diagonalized by an orthonormal matrix \( S \) into \( U = SDS^{-1} \), where \( D = \mathrm{diag}(D_1, \ldots, D_{d/2}) \) is a real block diagonal matrix with each block \( D_i \) of size \( 2 \times 2 \). Since \( U \) is orthonormal, all of its eigenvalues lie on the unit circle (in the complex plane). Since \( D \) and \( U \) are similar to each other, \( D \) also has eigenvalues lying on the unit circle, and so does each block \( D_i \). This means that each \( D_i \) is a \( 2 \times 2 \) rotation matrix. Each rotation matrix can be written as \( T(\theta) = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} \). Suppose \( D_i = T(\theta_i) \), where \( \theta_i \in [-\pi, \pi] \). Then we have that \( D_i = T(\theta_i/q)^q \) for any positive integer \( q \) (to be chosen later). Let \( W = \mathrm{diag}(T(\theta_1/q), \ldots, T(\theta_{d/2}/q)) \).
Therefore, it follows that \( D = \mathrm{diag}(D_1, \ldots, D_{d/2}) = W^q \). Moreover, we have \( U = SDS^{-1} = (SWS^{-1})^q \). Therefore, letting \( B_1 = B_2 = \cdots = B_q = SWS^{-1} - \mathrm{Id} \), we have \( U = (\mathrm{Id} + B_q) \cdots (\mathrm{Id} + B_1) \). We verify that the spectral norms of these matrices are indeed small:
\[
\begin{align*}
\|B_j\| &= \|SWS^{-1} - \mathrm{Id}\| = \|S(W - \mathrm{Id})S^{-1}\| \\
&= \|\mathrm{Id} - W\| \tag{since \( S \) is orthonormal} \\
&= \max_{i \in [d/2]} \|T(0) - T(\theta_i/q)\| \tag{since \( W = \mathrm{diag}(T(\theta_i/q)) \) is block diagonal} \\
&= \max_i 2|\sin(\theta_i/(2q))| \leq \pi/q . \tag{since \( |\theta_i| \leq \pi \)}
\end{align*}
\]
Similarly, we can choose \( B'_1, \ldots, B'_q \) with \( \|B'_j\| \leq \pi/q \) so that \( V^\top = (\mathrm{Id} + B'_q) \cdots (\mathrm{Id} + B'_1) \).

Last, we deal with the diagonal matrix \( K \). Let \( K = \mathrm{diag}(k_i) \). We have \( \min_i k_i = \sigma_{\min}(R) \) and \( \max_i k_i = \sigma_{\max}(R) \). We can write \( K = (K')^p \), where \( K' = \mathrm{diag}(k_i^{1/p}) \) and \( p \) is a positive integer to be chosen later. When \( p \geq \gamma = \max\{\log \sigma_{\max}(R), -\log \sigma_{\min}(R)\} \), we have that
\[
\|K' - \mathrm{Id}\| \leq \max_i |e^{(\log k_i)/p} - 1| \leq 3 \max_i |(\log k_i)/p| \leq 3\gamma/p . \tag{since \( |e^x - 1| \leq 3|x| \) for \( |x| \leq 1 \)}
\]
Let \( B''_1 = \cdots = B''_p = K' - \mathrm{Id} \), so that \( K = (\mathrm{Id} + B''_p) \cdots (\mathrm{Id} + B''_1) \).

Finally, we choose \( q = \frac{\ell \sqrt{\pi}}{2(\sqrt{\pi} + \sqrt{3\gamma})} \) and \( p = \frac{\ell \sqrt{3\gamma}}{\sqrt{\pi} + \sqrt{3\gamma}} \), so that \( 2q + p = \ell \),\footnote{Here for notational convenience, \( p, q \) are not chosen to be integers; rounding them to the closest integers changes the final norm bound by at most a small constant factor.} and let \( A_\ell = B_q, \ldots, A_{p+q+1} = B_1 \), \( A_{p+q} = B''_p, \ldots, A_{q+1} = B''_1 \), and \( A_q = B'_q, \ldots, A_1 = B'_1 \). We then have
\[
R = UKV^\top = (\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_1) .
\]
Moreover, \( \|A\| \leq \max_j\{\|B_j\|, \|B'_j\|, \|B''_j\|\} \leq \pi/q + 3\gamma/p \leq 2(\sqrt{\pi} + \sqrt{3\gamma})^2/\ell \), as desired. \( \square \)

A.2 PROOF OF LEMMA 2.4

We compute the partial gradients by definition. Let \( \Delta_j \in \mathbb{R}^{d \times d} \) be an infinitesimal change to \( A_j \). Using Claim 2.3, consider the Taylor expansion of \( f(A_1, \ldots, A_j + \Delta_j, \ldots, A_\ell) \):
\[
f(A_1, \ldots, A_j + \Delta_j, \ldots, A_\ell) = \left\| ((\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_j + \Delta_j) \cdots (\mathrm{Id} + A_1) - R)\Sigma^{1/2} \right\|_F^2 \\
= \left\| ((\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_1) - R)\Sigma^{1/2} + (\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_{j+1}) \Delta_j (\mathrm{Id} + A_{j-1}) \cdots (\mathrm{Id} + A_1)\Sigma^{1/2} \right\|_F^2 \\
= \left\| ((\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_1) - R)\Sigma^{1/2} \right\|_F^2 + 2\langle ((\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_1) - R)\Sigma^{1/2}, (\mathrm{Id} + A_\ell) \cdots (\mathrm{Id} + A_{j+1}) \Delta_j (\mathrm{Id} + A_{j-1}) \cdots (\mathrm{Id} + A_1)\Sigma^{1/2} \rangle + O(\|\Delta_j\|_F^2) \\
= f(A) + 2\langle (\mathrm{Id} + A_\ell^\top) \cdots (\mathrm{Id} + A_{j+1}^\top) E\Sigma(\mathrm{Id} + A_{j-1}^\top) \cdots (\mathrm{Id} + A_1^\top), \Delta_j \rangle + O(\|\Delta_j\|_F^2) .
\]
By definition, this means that
\[
\frac{\partial f}{\partial A_j} = 2(\mathrm{Id} + A_\ell^\top) \cdots (\mathrm{Id} + A_{j+1}^\top) E\Sigma(\mathrm{Id} + A_{j-1}^\top) \cdots (\mathrm{Id} + A_1^\top) . \quad \square
\]

B MISSING PROOFS IN SECTION 3

In this section, we provide the full proof of Theorem 3.2. We start with the following lemma, which constructs a building block \( \mathcal{T} \) that transforms \( k \) chosen vectors of an arbitrary sequence of \( n \) vectors into arbitrary target vectors, while maintaining the values of the others. For better abstraction, we use \( \alpha^{(i)} \) and \( \beta^{(i)} \) to denote the input and target vectors.

Lemma B.1. Let \( S \subset [n] \) be of size \( k \). Suppose \( \alpha^{(1)}, \ldots, \alpha^{(n)} \) is a sequence of \( n \) vectors satisfying a) for every \( 1 \leq i \leq n \), we have \( 1 - \rho' \leq \|\alpha^{(i)}\|^2 \leq 1 + \rho' \), and b) if \( i \neq j \) and \( S \) contains at least one of \( i, j \), then \( \|\alpha^{(i)} - \alpha^{(j)}\|^2 \geq 6\rho' \). Let \( \beta^{(1)}, \ldots, \beta^{(n)} \) be an arbitrary sequence of vectors. Then, there exist \( U, V \in \mathbb{R}^{k \times k} \) and \( s \in \mathbb{R}^k \) such that for every \( i \in S \) we have \( \mathcal{T}_{U,V,s}(\alpha^{(i)}) = \beta^{(i)} - \alpha^{(i)} \), and moreover, for every \( i \in [n] \setminus S \) we have \( \mathcal{T}_{U,V,s}(\alpha^{(i)}) = 0 \).

We can see that the conclusion implies
\[
\beta^{(i)} = \alpha^{(i)} + \mathcal{T}_{U,V,s}(\alpha^{(i)}) \quad \forall i \in S \\
\alpha^{(i)} = \alpha^{(i)} + \mathcal{T}_{U,V,s}(\alpha^{(i)}) \quad \forall i \notin S
\]
which is a different way of writing equation (3.6).

Proof of Lemma B.1. Without loss of generality, suppose \( S = \{1, \ldots, k\} \). We construct \( U, V, s \) as follows. Let the \( i \)-th row of \( U \) be \( \alpha^{(i)} \) for \( i \in [k] \), and let \( s = -(1 - 2\rho') \cdot \mathbf{1} \), where \( \mathbf{1} \) denotes the all-ones vector. Let the \( i \)-th column of \( V \) be \( \frac{1}{\|\alpha^{(i)}\|^2 - (1 - 2\rho')} (\beta^{(i)} - \alpha^{(i)}) \) for \( i \in [k] \).

Next we verify the correctness of the construction. We first consider \( 1 \leq i \leq k \). We have that \( U\alpha^{(i)} \) is a vector whose \( i \)-th coordinate equals \( \|\alpha^{(i)}\|^2 \geq 1 - \rho' \). The \( j \)-th coordinate of \( U\alpha^{(i)} \) (for \( j \neq i \)) equals \( \langle \alpha^{(j)}, \alpha^{(i)} \rangle \), which can be upper bounded using the assumptions of the lemma:
\[
\langle \alpha^{(j)}, \alpha^{(i)} \rangle = \frac{1}{2} \left( \|\alpha^{(i)}\|^2 + \|\alpha^{(j)}\|^2 - \|\alpha^{(i)} - \alpha^{(j)}\|^2 \right) \leq 1 + \rho' - 3\rho' \leq 1 - 2\rho'. \tag{B.1}
\]
Therefore, \( U\alpha^{(i)} - (1 - 2\rho') \cdot \mathbf{1} \) contains a single positive entry (with value at least \( \|\alpha^{(i)}\|^2 - (1 - 2\rho') \geq \rho' \)), with all other entries non-positive. This means that \( \mathrm{ReLU}(U\alpha^{(i)} + s) = (\|\alpha^{(i)}\|^2 - (1 - 2\rho')) e_i \), where \( e_i \) is the \( i \)-th standard basis vector. It follows that \( V\mathrm{ReLU}(U\alpha^{(i)} + s) = (\|\alpha^{(i)}\|^2 - (1 - 2\rho')) Ve_i = \beta^{(i)} - \alpha^{(i)} \).

Finally, consider \( n \geq i > k \). Similarly to the computation in equation (B.1), \( U\alpha^{(i)} \) is a vector with all coordinates at most \( 1 - 2\rho' \). Therefore \( U\alpha^{(i)} + s \) is a vector with non-positive entries. Hence we have \( \mathrm{ReLU}(U\alpha^{(i)} + s) = 0 \), which implies \( V\mathrm{ReLU}(U\alpha^{(i)} + s) = 0 \). \( \square \)
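The construction in the proof of Lemma B.1 is simple enough to verify numerically. Below is a small numpy sanity check on a toy example of our own: we use orthonormal \( \alpha^{(i)} \)'s, which trivially satisfy conditions a) and b), and for convenience we let \( S \) have size \( m \leq k \) with rectangular \( U, V \); the lemma's square case corresponds to \( m = k \).

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m, rho = 16, 12, 4, 0.05           # dimension k, n vectors, |S| = m, rho'
# Orthonormal alphas: unit norms and zero inner products, so the lemma's
# separation conditions hold trivially.
Q, _ = np.linalg.qr(rng.normal(size=(k, k)))
alpha = Q[:n]                             # n pairwise-orthogonal unit vectors
beta = rng.normal(size=(n, k))            # arbitrary targets for indices in S

U = alpha[:m]                                               # rows: alpha^(i), i in S
s = -(1.0 - 2.0 * rho) * np.ones(m)
scale = np.sum(alpha[:m] ** 2, axis=1) - (1.0 - 2.0 * rho)  # ||alpha^(i)||^2 - (1 - 2 rho')
V = ((beta[:m] - alpha[:m]) / scale[:, None]).T             # columns of V

def T(h):
    # The building block T_{U,V,s}(h) = V ReLU(U h + s).
    return V @ np.maximum(U @ h + s, 0.0)

for i in range(n):
    out = alpha[i] + T(alpha[i])                  # one residual layer h + T(h)
    want = beta[i] if i < m else alpha[i]         # remapped iff i is in S
    assert np.allclose(out, want)
```

Running the loop confirms exactly the dichotomy in equation (3.6): vectors indexed by \( S \) are rewritten to their targets, and all other vectors pass through the layer unchanged.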
Now we are ready to state the formal version of Lemma 3.3.

Lemma B.2. Suppose a sequence of \( n \) vectors \( z^{(1)}, \ldots, z^{(n)} \) satisfies a relaxed version of Assumption 3.1: a) for every \( i \), \( 1 - \rho' \leq \|z^{(i)}\|^2 \leq 1 + \rho' \); b) for every \( i \neq j \), we have \( \|z^{(i)} - z^{(j)}\|^2 \geq \rho' \). Let \( v^{(1)}, \ldots, v^{(n)} \) be defined as above. Then there exist weight matrices \( (A_1, B_1), \ldots, (A_\ell, B_\ell) \) such that, given \( h_0^{(i)} = z^{(i)} \) for all \( i \), we have
\[
\forall i \in \{1, \ldots, n\}, \quad h_\ell^{(i)} = v^{(i)} .
\]

We will use Lemma B.1 repeatedly to construct the building blocks \( \mathcal{T}_{A_j, B_j, s_j}(\cdot) \), and thus prove Lemma B.2. Each building block \( \mathcal{T}_{A_j, B_j, s_j}(\cdot) \) takes a subset of \( k \) vectors among \( \{z^{(1)}, \ldots, z^{(n)}\} \) and converts them to the corresponding \( v^{(i)} \)'s, while keeping all other vectors unchanged. Since there are \( \ell = \lceil n/k \rceil \) layers in total, all of the \( z^{(i)} \)'s are eventually mapped to the target vectors \( v^{(i)} \).

Proof of Lemma B.2. We use Lemma B.1 repeatedly. Let \( S_1 = \{1, \ldots, k\} \). Then, using Lemma B.1 with \( \alpha^{(i)} = z^{(i)} \) and \( \beta^{(i)} = v^{(i)} \) for \( i \in [n] \), we obtain that there exist \( A_1, B_1, b_1 \) such that for \( i \leq k \) it holds that \( h_1^{(i)} = z^{(i)} + \mathcal{T}_{A_1, B_1, b_1}(z^{(i)}) = v^{(i)} \), and for \( i > k \) it holds that \( h_1^{(i)} = z^{(i)} + \mathcal{T}_{A_1, B_1, b_1}(z^{(i)}) = z^{(i)} \).

Now we construct the other layers inductively: we will construct the layers such that the hidden variable at layer \( j \) satisfies \( h_j^{(i)} = v^{(i)} \) for every \( 1 \leq i \leq jk \), and \( h_j^{(i)} = z^{(i)} \) for every \( jk < i \leq n \). Assume that we have constructed the first \( j \) layers; next we use Lemma B.1 to construct layer \( j+1 \). We argue that the choice \( \alpha^{(1)} = v^{(1)}, \ldots, \alpha^{(jk)} = v^{(jk)}, \alpha^{(jk+1)} = z^{(jk+1)}, \ldots, \alpha^{(n)} = z^{(n)} \), together with \( S = \{jk+1, \ldots, (j+1)k\} \), satisfies the assumptions of Lemma B.1. Indeed, because the \( q_i \)'s are chosen uniformly at random, with high probability we have \( \langle q_s, z^{(i)} \rangle \leq 1 - \rho' \) for every \( s \) and \( i \). Thus, since \( v^{(i)} \in \{q_1, \ldots, q_r\} \), each \( v^{(i)} \) is also nearly uncorrelated with all of the \( z^{(i)} \)'s.

Then we apply Lemma B.1 and conclude that there exist \( A_{j+1} = U, B_{j+1} = V, b_{j+1} = s \) such that \( \mathcal{T}_{A_{j+1}, B_{j+1}, b_{j+1}}(v^{(i)}) = 0 \) for \( i \leq jk \), \( \mathcal{T}_{A_{j+1}, B_{j+1}, b_{j+1}}(z^{(i)}) = v^{(i)} - z^{(i)} \) for \( jk < i \leq (j+1)k \), and \( \mathcal{T}_{A_{j+1}, B_{j+1}, b_{j+1}}(z^{(i)}) = 0 \) for \( (j+1)k < i \leq n \). These imply that
\[
h_{j+1}^{(i)} = h_j^{(i)} + \mathcal{T}_{A_{j+1}, B_{j+1}, b_{j+1}}(v^{(i)}) = v^{(i)} \quad \forall\, 1 \leq i \leq jk \\
h_{j+1}^{(i)} = h_j^{(i)} + \mathcal{T}_{A_{j+1}, B_{j+1}, b_{j+1}}(z^{(i)}) = v^{(i)} \quad \forall\, jk < i \leq (j+1)k \\
h_{j+1}^{(i)} = h_j^{(i)} + \mathcal{T}_{A_{j+1}, B_{j+1}, b_{j+1}}(z^{(i)}) = z^{(i)} \quad \forall\, (j+1)k < i \leq n
\]
Therefore we have constructed layer \( j+1 \), which meets the inductive hypothesis for layer \( j+1 \). By induction we obtain all the layers, and the last layer satisfies \( h_\ell^{(i)} = v^{(i)} \) for every example \( i \). \( \square \)

Now we are ready to prove Theorem 3.2, following the general plan sketched in Section 3.

Proof of Theorem 3.2.
We formalize the intuition discussed below Theorem 3.2. First, take \( k = c(\log n)/\rho^2 \) for a sufficiently large absolute constant \( c \) (for example, \( c = 10 \) works). By the Johnson–Lindenstrauss theorem (Johnson & Lindenstrauss (1984); see also Wikipedia (2016)), when \( A_0 \) is a random matrix with properly scaled standard normal entries, with high probability all pairwise distances between the vectors in the set \( \{0, x^{(1)}, \ldots, x^{(n)}\} \) are preserved up to a \( 1 \pm \rho/3 \) factor. That is, for every \( i \) we have \( 1-\rho/3 \leq \|A_0 x^{(i)}\|^2 \leq 1+\rho/3 \), and for every \( i \neq j \) we have \( \|A_0 x^{(i)} - A_0 x^{(j)}\|^2 \geq \rho(1-\rho/3) \geq 2\rho/3 \). Let \( z^{(i)} = A_0 x^{(i)} \) and \( \rho' = \rho/3 \). Then the \( z^{(i)} \)'s satisfy the conditions of Lemma B.2.

We pick \( r \) random unit vectors \( q_1, \ldots, q_r \) in \( \mathbb{R}^k \). Let \( v^{(1)}, \ldots, v^{(n)} \) be defined as in equation (3.2). Then by Lemma B.2, we can construct matrices \( (A_1, B_1), \ldots, (A_\ell, B_\ell) \) such that
\[
h_\ell^{(i)} = v^{(i)}. \tag{B.2}
\]
Note that \( v^{(i)} \in \{q_1, \ldots, q_r\} \), and the \( q_i \)'s are random unit vectors. Therefore, the choice \( \alpha^{(1)} = q_1, \ldots, \alpha^{(r)} = q_r \), \( \beta^{(1)} = e_1, \ldots, \beta^{(r)} = e_r \) satisfies the conditions of Lemma B.1, and using Lemma B.1 we conclude that there exist \( A_{\ell+1}, B_{\ell+1}, b_{\ell+1} \) such that
\[
e_j = q_j + \mathcal{T}_{A_{\ell+1}, B_{\ell+1}, b_{\ell+1}}(q_j), \text{ for every } j \in \{1, \ldots, r\}. \tag{B.3}
\]
By the definition of \( v^{(i)} \) in equation (3.2) and equation (B.2), we conclude that \( \hat{y}^{(i)} = h_{\ell}^{(i)} + \mathcal{T}_{A_{\ell+1}, B_{\ell+1}, b_{\ell+1}}(h_{\ell}^{(i)}) = y^{(i)} \), which completes the proof. \( \square \)

C TOOLBOX

In this section, we state two folklore linear algebra statements. The following claim should be known, but we could not find it in the literature; we provide the proof here for completeness.

Claim C.1. Let \( U \in \mathbb{R}^{d \times d} \) be a real normal matrix (that is, it satisfies \( UU^\top = U^\top U \)). Then, there exists an orthonormal matrix \( S \in \mathbb{R}^{d \times d} \) such that
\[
U = SDS^\top,
\]
where \( D \) is a real block diagonal matrix that consists of blocks of size at most \( 2 \times 2 \). Moreover, if \( d \) is even, then \( D \) can be taken to consist of blocks of size exactly \( 2 \times 2 \).

Proof. Since \( U \) is a normal matrix, it is unitarily diagonalizable (see Weisstein (2016) for background). Therefore, there exist a unitary matrix \( V \in \mathbb{C}^{d \times d} \) and a diagonal matrix \( \Lambda \in \mathbb{C}^{d \times d} \) such that \( U \) has the eigen-decomposition \( U = V\Lambda V^* \). Since \( U \) itself is a real matrix, its eigenvalues (the diagonal entries of \( \Lambda \)) come in conjugate pairs, and so do the eigenvectors (the columns of \( V \)). That is, we can group the columns of \( V \) into pairs \( (v_1, \bar{v}_1), \ldots, (v_s, \bar{v}_s), v_{s+1}, \ldots, v_t \), with corresponding eigenvalues \( \lambda_1, \bar{\lambda}_1, \ldots, \lambda_s, \bar{\lambda}_s, \lambda_{s+1}, \ldots, \lambda_t \), where \( \lambda_{s+1}, \ldots, \lambda_t \in \mathbb{R} \). Then we get that \( U = \sum_{i=1}^s 2 \Re(v_i \lambda_i v_i^*) + \sum_{i=s+1}^t v_i \lambda_i v_i^* \). Let \( Q_i = 2\Re(v_i \lambda_i v_i^*) \); then \( Q_i \) is a real matrix of rank at most 2.
Let \( S_i \in \mathbb{R}^{d \times 2} \) be an orthonormal basis of the column span of \( Q_i \); then \( Q_i \) can be written as \( Q_i = S_i D_i S_i^\top \), where \( D_i \) is a \( 2 \times 2 \) matrix. Finally, letting \( S = [S_1, \ldots, S_s, v_{s+1}, \ldots, v_t] \) and \( D = \mathrm{diag}(D_1, \ldots, D_s, \lambda_{s+1}, \ldots, \lambda_t) \), we complete the proof. \( \square \)

The following claim is used in the proof of Theorem 2.2. We provide a proof here for completeness.

Claim C.2 (folklore). For any two matrices \( A, B \in \mathbb{R}^{d \times d} \), we have
\[
\|AB\|_F \geq \sigma_{\min}(A)\|B\|_F .
\]

Proof. Since \( \sigma_{\min}(A)^2 \) is the smallest eigenvalue of \( A^\top A \), we have
\[
B^\top A^\top AB \succeq B^\top \cdot \sigma_{\min}(A)^2 \mathrm{Id} \cdot B .
\]
Therefore, it follows that
\[
\|AB\|_F^2 = \mathrm{tr}(B^\top A^\top AB) \geq \mathrm{tr}(B^\top \cdot \sigma_{\min}(A)^2 \mathrm{Id} \cdot B) \\
= \sigma_{\min}(A)^2 \mathrm{tr}(B^\top B) = \sigma_{\min}(A)^2 \|B\|_F^2 .
\]
Taking the square root of both sides completes the proof. \( \square \)
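The explicit factorization from Section A.1 can likewise be checked numerically. The following numpy sketch (our own toy example, not from the paper) builds the optimal factors \( A_j^* \) for a random PSD target \( R \) and confirms both that the product reconstructs \( R \) and that each factor obeys the bound \( \|A_j^*\| \leq 3\gamma/\ell \).

```python
import numpy as np

rng = np.random.default_rng(1)
d, l = 6, 64                                   # dimension d and depth ell >= 3 gamma
U, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthonormal U
z = rng.uniform(0.5, 2.0, size=d)              # positive spectrum of R
R = U @ np.diag(z) @ U.T                       # a random symmetric PSD target

gamma = max(abs(np.log(z.max())), abs(np.log(z.min())))
A_star = U @ np.diag(z ** (1.0 / l)) @ U.T - np.eye(d)   # A_j^* from Section A.1

prod = np.eye(d)
for _ in range(l):
    prod = (np.eye(d) + A_star) @ prod         # the product (Id + A^*)^ell
assert np.allclose(prod, R)                    # the factors reconstruct R

spec = np.linalg.norm(A_star, 2)               # spectral norm of each factor
assert spec <= 3 * gamma / l                   # the bound ||A_j^*|| <= 3 gamma / ell
print(f"spectral norm {spec:.4f} <= 3*gamma/ell = {3 * gamma / l:.4f}")
```

For this well-conditioned spectrum, each factor has spectral norm around 0.01 at depth 64, illustrating how deeper networks admit global optima ever closer to the identity.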
accept
Accept (Poster)
6.333333
7c6e44e5dda208bfa03cff6adb6ffc396ee09de1
iclr
2,017
TreNet: Hybrid Neural Networks for Learning the Local Trend in Time Series Tao Lin,* Tian Guo* & Karl Aberer School of Computer and Communication Sciences Ecole polytechnique federale de Lausanne Lausanne, Switzerland {tao.lin, tian.guo, karl.aberer}@epfl.ch ABSTRACT Local trends of time series characterize the intermediate upward and downward patterns of time series. Learning and forecasting the local trend in time series data play an important role in many real applications, ranging from investing in the stock market and resource allocation in data centers to load scheduling in smart grids. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel end-to-end hybrid neural network that predicts the local trend of time series based on local and global contextual features. TreNet leverages convolutional neural networks (CNNs) to extract salient features from the local raw data of time series. Meanwhile, considering the long-range dependencies existing in the sequence of historical local trends, TreNet uses a long short-term memory recurrent neural network (LSTM) to capture such dependency. Furthermore, for predicting the local trend, a feature fusion layer is designed in TreNet to learn a joint representation from the features captured by the CNN and the LSTM. Our proposed TreNet demonstrates its effectiveness by outperforming conventional CNN, LSTM and HMM methods as well as various kernel based baselines on real datasets. 1 INTRODUCTION Time series, i.e., sequences of data points in time order, are being generated in a wide spectrum of domains, such as daily fluctuations of the stock market, power consumption records of households, performance monitoring data of clusters in data centres, and so on. In many applications, users are interested in understanding the evolving trend in time series and forecasting the trend, since conventional prediction of specific data points could deliver very little information about the semantics and dynamics of the underlying process generating the time series. For instance, the time series in Figure 1 are from the household power consumption dataset. Figure 1(a) shows some raw data points of the time series. Though points A and B have approximately the same value, the underlying system is likely to be in two different states when it outputs A and B, because A is in an upward trend while B is in a downward trend (Wang et al., 2011; Matsubara et al., 2014). On the other hand, even when two points with similar values are both in an upward trend, e.g., points A and C, the different slopes and durations of the trends in which points A and C are located could also indicate different states of the underlying process. Particularly, in this paper we are interested in the local trend of time series, which measures the intermediate local behaviour, i.e., the upward or downward pattern of the time series, characterized by its slope and duration (Wang et al., 2011). For instance, in Figure 1(b) the linear segments over the raw data points represent the local trends extracted from a real household power consumption time series. For ease of presentation, we will use the terms trend and local trend interchangeably in the rest of the paper. Learning and forecasting local trends are quite useful in a wide range of applications. For instance, in the stock market, due to its high volatility and noisy environment, predicting stock price trends is in practice preferred over predicting absolute stock market values (Atsalakis & Valavanis, 2009).
Predicting the local trend of stock price time series empowers traders to design profitable trading strategies (Chang et al., 2012b; Atsalakis & Valavanis, 2009). In the smart energy domain, knowing the predicted local trend of power consumption time series enables energy providers to schedule power supply and maximize energy utilization (Zhao & Magoulès, 2012).
*These two authors contributed equally.
1 https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
Meanwhile, in recent years neural networks have shown dramatic power in a wide spectrum of domains, e.g., natural language processing, computer vision, speech recognition, time series analysis, etc. (Wang et al., 2016b; Sutskever et al., 2014; Yang et al., 2015; Lipton et al., 2015). For time series data, two mainstream architectures, the convolutional neural network (CNN) and the recurrent neural network (RNN), have been exploited in different time series related tasks, e.g., RNNs in time series classification (Lipton et al., 2015) and CNNs in activity recognition and snippet learning (Liu et al., 2015; Yang et al., 2015). RNNs are powerful in discovering dependencies in sequence data (Jain et al., 2014; Graves, 2012), and in particular the Long Short-Term Memory (LSTM) RNN works well on sequence data with long-term dependencies (Chung et al., 2014; Hochreiter & Schmidhuber, 1997) due to its internal memory mechanism. CNNs excel at extracting effective representations of local salience from raw time series data by enforcing local connectivity between neurons (Yang et al., 2015; Hammerla et al., 2016). ![Three subfigures showing time series plots and trend analysis](page_384_1012_1096_246.png) Figure 1: (a) Time series of household power consumption. (b) Local trends in time series. (c) Effect of local raw data on the trend forecasting. In this paper, we focus on learning and forecasting the local trends in time series via neural networks. This involves learning different aspects of the data. On one hand, the sequence of historical local trends describes the long-term contextual information of the time series and thus naturally affects the evolution of the following local trend. On the other hand, the recent raw data points of the time series (Wang et al., 2011; Batal et al., 2012), which represent the local variation and behaviour of the time series, affect the evolution of the following trend as well, and have particular predictive power for abruptly changing local trends (Wang et al., 2011). For instance, in Figure 1(c), trends 1, 2 and 3 present a continuous upward pattern. When we aim at predicting the subsequent trend at the end of the third local trend, the previous three successive upward trends outline a probable increasing trend afterwards. However, the local data around the end of the third trend, e.g., the data points in the red circle, indicate that the time series could stabilize or even decrease. The data points after the third trend indeed present a decreasing trend, indicated by the red dotted segment. In this case, the subsequent trend depends more on the local data points. Therefore, it is highly desirable to develop a systematic way to model such hidden and complementary dependencies in time series for the local trend forecasting problem. To this end, we propose an end-to-end hybrid neural network, referred to as TreNet.
In particular, it consists of an LSTM recurrent neural network to capture the long-term dependency in historical local trends, a convolutional neural network to extract local features from the local raw data of the time series, and a feature fusion layer that learns a joint representation taking advantage of the features drawn from both the CNN and the LSTM. This joint representation is used for the local trend forecasting. The experimental analysis on real datasets demonstrates that TreNet outperforms individual recurrent neural networks, convolutional neural networks and a variety of baselines in terms of local trend prediction accuracy. The rest of the paper is organized as follows. Section 2 presents related work, while Section 3 defines the problem to be solved and introduces the notation. In Section 4, we present the proposed TreNet. Section 5 demonstrates the performance of our method and the baselines on real datasets. Finally, the paper is concluded in Section 6. Refer to Section 7 and Section 8 for more experiment results and discussion. 2 RELATED WORK Traditional learning approaches over local trends of time series mainly make use of Hidden Markov Models (HMMs) (Wang et al., 2011; Matsubara et al., 2014). HMMs maintain only short-term state dependencies, i.e., the memoryless Markov property, and a predefined number of states, which requires significant task-specific knowledge. RNNs instead use high-dimensional, distributed hidden states that can take into account long-term dependencies in sequence data. Previous time series segmentation approaches (Keogh et al., 2001; Matsubara et al., 2014; Yuan, 2015) focus on achieving a meaningful segmentation and finding patterns, rather than modeling the relations between segments, and are therefore not suitable for forecasting local trends. Multi-step ahead prediction is another way to realize local trend prediction, by fitting the predicted values to estimate the local trend. However, multi-step ahead prediction is a non-trivial problem in itself (Chang et al., 2012a). In this paper, we concentrate on directly learning local trends through neural networks. RNNs have recently shown promising results in a variety of applications, especially when there exist sequential dependencies in the data (Lyu & Zhu, 2014; Chung et al., 2014; Sutskever et al., 2014). Long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997; Lyu & Zhu, 2014; Chung et al., 2014), a class of recurrent neural networks with sophisticated recurrent hidden and gated units, is particularly successful and popular due to its ability to learn hidden long-term sequential dependencies. (Lipton et al., 2015) uses LSTMs to recognize patterns in multivariate time series, especially for multi-label classification of diagnoses. (Chauhan & Vig, 2015; Malhotra et al., 2015) evaluate the ability of LSTMs to detect anomalies in ECG time series. Bidirectional LSTMs (Graves & Schmidhuber, 2005) are usually intended for speech processing rather than time series forecasting problems. Our paper focuses on using an LSTM to capture the dependency in the sequence of historical local trends; meanwhile, the hidden states of the LSTM are further used to learn joint feature representations for the local trend forecasting. CNNs are often used to learn effective representations of local salience from raw data (Vinyals et al., 2015; Donahue et al., 2015; Karpathy et al., 2014). (Hammerla et al., 2016; Yang et al., 2015; Lea et al., 2016) make use of CNNs to extract features from raw time series data for activity/action recognition.
(Liu et al., 2015) focuses on the prediction of periodical time series values by using a CNN and embedding the time series with its potential neighbors in the temporal domain. Our proposed TreNet combines the strengths of both LSTM and CNN in a novel and unified neural network architecture for local trend forecasting. Hybrid neural networks, which combine the strengths of various neural networks, are receiving increasing interest in the computer vision domain, such as image captioning (Mao et al., 2014; Vinyals et al., 2015; Donahue et al., 2015), image classification (Wang et al., 2016b), protein structure prediction (Li & Yu, 2016), action recognition (Ballas et al., 2015; Donahue et al., 2015) and so on. But efficient exploitation of such hybrid architectures has not been well studied for time series data, especially for the trend forecasting problem. (Li & Yu, 2016; Ballas et al., 2015) utilize CNNs over images in cascade with RNNs in order to capture the temporal features for classification. (Bashivan et al., 2015) transforms EEG data into a sequence of topology-preserving multi-spectral images and then trains a cascaded convolutional-recurrent network over such images for EEG classification. (Wang et al., 2016a; Mao et al., 2014) propose CNN-RNN frameworks to learn a shared representation for image captioning and classification problems. In our proposed TreNet, the LSTM and the CNN first respectively learn from the trend evolution and the local raw data of the time series, and then TreNet fuses the features captured by the LSTM and the CNN to predict the trend. 3 PROBLEM FORMULATION In this section, we provide the formal definition of the trend learning and forecasting problem studied in this paper. We define a time series as a sequence of data points \( \mathcal{X} = \{ x_1, \ldots, x_T \} \), where each data point \( x_t \) is real-valued and the subscript \( t \) represents the time instant. The corresponding local trend sequence of \( \mathcal{X} \) is a series of piecewise linear representations of \( \mathcal{X} \), denoted by \( \mathcal{T} = \{ (\ell_k, s_k) \} \). Each element of \( \mathcal{T} \), e.g., \( (\ell_k, s_k) \), describes a linear function over a certain subsequence (or segment) of \( \mathcal{X} \) and corresponds to a local trend in \( \mathcal{X} \). Such local trends in \( \mathcal{T} \) are extracted from \( \mathcal{X} \) by time series segmentation and fitting a linear function w.r.t. time \( t \) over each segment (Keogh et al., 2001; Wang et al., 2011). \( \ell_k \) and \( s_k \) respectively represent the duration and slope of trend \( k \). \( \ell_k \) is measured in terms of the time range covered by trend \( k \). Local trends in \( \mathcal{T} \) are time-ordered and non-overlapping. The durations of all the local trends in \( \mathcal{T} \) satisfy \( \sum_k \ell_k = T \). In addition, a local trend sequence ending by time \( t \) is denoted by \( \mathcal{T}(t) = \{ (\ell_k, s_k) | \sum_k \ell_k \leq t \} \). Meanwhile, as discussed in Section 1, the local raw data of the time series affects the variation of the trend as well, and thus we define the local data w.r.t. a certain time instant \( t \) as the sequence of data points in a window of size \( w \), denoted by \( \mathcal{L}(t) = \{ x_{t-w}, \ldots, x_t \} \). At a certain time \( t \), trend forecasting means predicting the duration and slope of the following trend based on the given sequence of historical trends \( \mathcal{T}(t) \) and the local data \( \mathcal{L}(t) \).
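As an illustration of these definitions, the following minimal Python sketch builds \( \mathcal{T}(t) \) and \( \mathcal{L}(t) \) from a trend list and a raw series; the function and variable names are ours, not the paper's:

```python
import numpy as np

def model_inputs(trends, series, t, w):
    """Build T(t) and L(t) at time index t (0-based).

    trends: time-ordered list of (duration, slope) pairs, i.e. T.
    series: 1-d array of raw values, i.e. X.
    Returns the local trends fully contained in [0, t] and the window
    of raw points {x_{t-w}, ..., x_t}.
    """
    hist, elapsed = [], 0
    for duration, slope in trends:
        if elapsed + duration > t:
            break  # this trend extends past t, so it is excluded from T(t)
        hist.append((duration, slope))
        elapsed += duration
    local = np.asarray(series[max(0, t - w): t + 1])
    return hist, local
```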
The predicted duration and slope at time \( t \) are denoted by \( \hat{\ell}_t \) and \( \hat{s}_t \). Our proposed TreNet can be trained to predict either \( \hat{\ell}_t \) or \( \hat{s}_t \). For simplicity, we use \( \hat{y}_t \) to represent the predicted value of TreNet throughout the paper. Therefore, given the training dataset \( \mathcal{D} = \mathcal{X} \cup \mathcal{T} \), we aim to propose a neural network based approach to learn a function \( \hat{y}_t = f(\mathcal{T}(t), \mathcal{L}(t)) \) for the trend forecasting. In this paper, we focus on univariate time series. The proposed method can be naturally generalized to multivariate time series by augmenting the input to the neural network; refer to Section 8 for more discussion. 4 HYBRID NEURAL NETWORKS FOR TREND LEARNING AND FORECASTING In this section, we first present an overview of the proposed TreNet for the trend forecasting and then detail the components of TreNet. Overview. The idea of TreNet is to combine the CNN and the LSTM to utilize their representation abilities on different aspects of the training data \( \mathcal{D} \) \( (\mathcal{D} = \mathcal{X} \cup \mathcal{T}) \) and then to learn a joint feature for the trend prediction. Technically, TreNet is designed to learn a predictive function \( \hat{y}_t = f(R(\mathcal{T}(t)), C(\mathcal{L}(t))) \). \( R(\mathcal{T}(t)) \) is derived by training the LSTM over the sequence \( \mathcal{T} \) to capture the dependency in the trend evolution, while \( C(\mathcal{L}(t)) \) corresponds to the local features extracted by the CNN from \( \mathcal{L}(t) \). The long-term and local features captured by the LSTM and the CNN, i.e., \( R(\mathcal{T}(t)) \) and \( C(\mathcal{L}(t)) \), convey complementary information pertaining to the trend variation. Therefore, the feature fusion layer is supposed to take advantage of both features to produce a fused representation for improved performance. Finally, the trend prediction is realized by the function \( f(\cdot, \cdot) \), which corresponds to the feature fusion and output layers in Figure 2. ![Illustration of the hybrid architecture of TreNet.](page_370_1042_805_246.png) Figure 2: Illustration of the hybrid architecture of TreNet. (best viewed in colour) Learning the dependency in the trend sequence. During the training phase, the duration \( \ell_k \) and slope \( s_k \) of each local trend \( k \) in the sequence \( \mathcal{T} \) are fed into the LSTM layer of TreNet. The \( j \)-th neuron in the LSTM layer maintains a memory \( c^j_k \) at step \( k \). The output \( h^j_k \), or the activation of this neuron, is then expressed as (Hochreiter & Schmidhuber, 1997; Chung et al., 2014): \[ h_k^j = o_k^j \tanh(c_k^j) \] (1) where \( o_k^j \) is an output gate calculated as: \[ o_k^j = \sigma(W_o[\ell_k s_k] + U_o h_{k-1} + V_o c_k)^j \] (2) where \([\ell_k s_k]\) is the concatenation of the duration and slope of trend \(k\), \(h_{k-1}\) and \(c_k\) are the vectorizations of the activations \(\{h_{k-1}^j\}\) and \(\{c_k^j\}\), and \(\sigma\) is the logistic sigmoid function. Then, the memory cell \(c_k^j\) is updated through partially forgetting the existing memory and adding a new memory content \(\tilde{c}_k^j\): \[ c_k^j = f_k^j c_{k-1}^j + i_k^j \tilde{c}_k^j , \quad \tilde{c}_k^j = \tanh(W_c[\ell_k s_k] + U_c h_{k-1})^j \] (3) The extent to which the existing memory is forgotten is modulated by a forget gate \(f_k^j\), and the degree to which the new memory content is added to the memory cell is modulated by an input gate \(i_k^j\).
Then, such gates are computed by \[ f_k^j = \sigma(W_f[\ell_k s_k] + U_f h_{k-1} + V_f c_{k-1})^j \] (4) \[ i_k^j = \sigma(W_i[\ell_k s_k] + U_i h_{k-1} + V_i c_{k-1})^j \] (5) At each step \(k\), the hidden activation \(h_k\) is the output to the feature fusion layer. Specifically, given a \(\mathcal{T}(t)\) containing \(n\) local trends (i.e., \(|\mathcal{T}(t)| = n\)), the output of the LSTM part is \(R(\mathcal{T}(t)) = h_n\). Learning features from the local raw data of time series. When the \(k\)-th trend in \(\mathcal{T}\) is fed to the LSTM, the corresponding local raw time series data input to the CNN part of TreNet is \(\mathcal{L}(t)\), where \(t = \sum_{i=1}^k \ell_i\). The CNN consists of \(H\) stacked layers of 1-d convolutional, activation and pooling operations. Denote by \(a^i\) the input signal of layer \(i\); thus at the first layer \(a^1 = \mathcal{L}(t)\). Each layer has a specified number of filters \(n^i\) of a specified filter size \(d^i\). Each filter on a layer sweeps through the entire input signal to extract local features as follows: \[ v_{m}^{i,j} = \phi\Big(b^{i,j} + \sum_{z=-d^i/2}^{d^i/2} W_z^{i,j} a_{m+z}^i\Big), \quad \forall m = 1, \ldots, |a^i| \] (6) where \(v_{m}^{i,j}\) is the activation of the \(j\)-th filter of layer \(i\) at position \(m\) of the input signal. Here \(\phi\) is the Leaky Rectified Linear Unit, which has been shown to perform better (Xu et al., 2015). Then max-pooling is performed over the \(v_{m}^{i,j}\) of each filter. Finally, the output of the CNN in TreNet is the concatenation of the max-pooling of each filter on the last layer \(H\), namely: \[ C(\mathcal{L}(t)) = [p^1, \ldots, p^{n^H}], \quad p^j = [\max_{1 \leq z \leq q} (\{v_{m+z}^{H,j}\})], \quad \forall j = 1, \ldots, n^H \] (7) where \(q\) is the pooling size. Feature fusion and output layers. The feature fusion layer combines the representations \(R(\mathcal{T}(t))\) and \(C(\mathcal{L}(t))\) to form a joint feature. Then, this joint feature is fed to the output layer to provide the trend prediction. In particular, we first map \(R(\mathcal{T}(t))\) and \(C(\mathcal{L}(t))\) to the same feature space and add them together to obtain the activation of the feature fusion layer (Mao et al., 2014). The output layer is a fully-connected layer following the feature fusion layer. Mathematically, the prediction of TreNet is expressed as: \[ \hat{y}_t = f(R(\mathcal{T}(t)), C(\mathcal{L}(t))) = W^o \cdot \phi(W^r \cdot R(\mathcal{T}(t)) + W^c \cdot C(\mathcal{L}(t))) + b^o \] (8) where \(\phi(\cdot)\) is the element-wise leaky ReLU activation function and \(+\) denotes element-wise addition. \(W^o\) and \(b^o\) are the weights and bias of the output layer. To train TreNet, we adopt the squared error function plus a regularization term: \[ J(\mathbf{W}, \mathbf{b} ; \mathcal{T}, \mathcal{X}) = \frac{1}{|\mathcal{T}|} \sum_{k=1}^{|\mathcal{T}|} (\hat{y}_k - y_k)^2 + \lambda \| \mathbf{W} \|_2 \] (9) where \( \mathbf{W} \) and \( \mathbf{b} \) represent the weight and bias parameters in TreNet, \( \lambda \) is a hyperparameter for the regularization term and \( y_k \) is the true value of the trend slope or duration. The cost function is differentiable and the architecture of TreNet allows the gradients from the loss function (9) to be backpropagated to both the LSTM and CNN parts. TreNet can be trained for the slope and the duration of local trends respectively, using \( \mathcal{T} \) and \( \mathcal{X} \).
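For concreteness, the following is a minimal NumPy sketch of one LSTM step (Eqs. (1)-(5)) and of the fusion/output layers (Eq. (8)); the parameter dictionary, the names Wc_fusion/Wo_out (chosen to avoid clashing with the cell weights), and the omission of gate biases mirror the equations as written above and are otherwise our own illustrative choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def lstm_step(x, h_prev, c_prev, P):
    """One step of Eqs. (1)-(5); x = [duration, slope] of trend k."""
    f = sigmoid(P["Wf"] @ x + P["Uf"] @ h_prev + P["Vf"] @ c_prev)  # Eq. (4)
    i = sigmoid(P["Wi"] @ x + P["Ui"] @ h_prev + P["Vi"] @ c_prev)  # Eq. (5)
    c_tilde = np.tanh(P["Wc"] @ x + P["Uc"] @ h_prev)               # Eq. (3)
    c = f * c_prev + i * c_tilde                                    # Eq. (3)
    o = sigmoid(P["Wo"] @ x + P["Uo"] @ h_prev + P["Vo"] @ c)       # Eq. (2)
    return o * np.tanh(c), c                                        # Eq. (1)

def trenet_output(R, C, P):
    """Feature fusion and output of Eq. (8): map both feature vectors
    to a shared space, add element-wise, apply the leaky ReLU, then
    read out linearly."""
    fused = leaky_relu(P["Wr"] @ R + P["Wc_fusion"] @ C)
    return P["Wo_out"] @ fused + P["bo"]
```

In TreNet itself these parameters are of course learned jointly, by backpropagating the loss of Eq. (9) through both the LSTM and the CNN branches.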
When performing forecasting, \( \mathcal{T}(t) \) and \( \mathcal{L}(t) \) are fed to TreNet and the predicted value \( \hat{y}_t \) can be either the slope or the duration, depending on the training target. 5 EXPERIMENTAL ANALYSIS In this section, we conduct extensive experiments to demonstrate the prediction performance of TreNet by comparing it to a variety of baselines. Due to the page limit, refer to Section 7 for more experiment results. 5.1 EXPERIMENT SETUP Datasets: We test our method and the baselines on three real time series datasets. • Daily Household Power Consumption (HousePC). This dataset[2] contains measurements of electric power consumption in one household with a one-minute sampling rate over a period of almost 4 years. Different electrical quantities and some sub-metering values are available. We use the voltage time series throughout the experiments. • Gas Sensor (GasSensor). This dataset[3] contains the recordings of chemical sensors exposed to dynamic gas mixtures at varying concentrations. The measurement was constructed by the continuous acquisition of the sensor array signals for a duration of about 12 hours without interruption. We mainly use the gas mixture time series regarding Ethylene and Methane in air. • Stock Transaction (Stock). This dataset is extracted from Yahoo Finance and contains the daily stock transaction information in the New York Stock Exchange from 1950-10 to 2016-4. All datasets are preprocessed with the segmentation method of Keogh et al. (2001) to extract local trends. Alternative time series segmentation and local trend extraction approaches could be used as well; we choose Keogh et al. (2001) here due to its high efficiency. In total, we obtain 42591, 4720 and 1316 local trends from the above datasets, respectively. For ease of interpreting the experimental results, the slope of extracted local trends is represented by the angle of the corresponding linear function and thus lies in the bounded range \([-90, 90]\) (degrees). The duration of local trends is measured by the number of data points within the local trend. Then, the obtained trend sequences and the sets of local data are split into training (80%), validation (10%) and test (10%) datasets. Baselines: We compare TreNet with the following six baselines: • CNN. This baseline method predicts the trend by only using a CNN over the set of local raw data of the time series to learn features for the forecasting. The size of the local data is set to \( w \), as defined in Section 3. • LSTM. This method uses an LSTM to learn dependencies in the trend sequence \( \mathcal{T} \) and predicts the trend only using the trained LSTM. • Support Vector Regression (SVR). A family of support vector regression based approaches with different kernels is used for the trend forecasting. We consider three
commonly used kernels (Liu et al., 2015), i.e., the Radial Basis kernel (SVRBF), the Polynomial kernel (SVPOLY) and the Sigmoid kernel (SVSIG). The trend sequence and the corresponding set of local time series data are concatenated as the input features to such SVR approaches. • Pattern-based Hidden Markov Model (pHMM). (Wang et al., 2011) proposed a pattern-based hidden Markov model (HMM), which segments the time series and models the dependencies between segments via an HMM. The derived HMM model is used to predict the state of the time series and then to estimate the trend based on that state. • Naive. This is the naive approach which takes the duration and slope of the last trend as the prediction for the next one. • ConvNet+LSTM (CLSTM). It is based on the cascade structure of ConvNet and LSTM in (Bashivan et al., 2015), which feeds the features learnt by the ConvNet over time series to an LSTM and obtains the prediction from the LSTM.
2 https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
3 https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+under+dynamic+gas+mixtures
<table> <tr> <th>Dataset</th> <th>Model</th> <th>RMSE @ Duration</th> <th>RMSE @ Slope</th> </tr> <tr> <td rowspan="9">HousePC</td> <td>CNN</td> <td>27.51</td> <td>13.56</td> </tr> <tr> <td>LSTM</td> <td>27.27</td> <td>13.27</td> </tr> <tr> <td>SVRBF</td> <td>31.81</td> <td>12.94</td> </tr> <tr> <td>SVPOLY</td> <td>31.81</td> <td>12.93</td> </tr> <tr> <td>SVSIG</td> <td>31.80</td> <td>12.93</td> </tr> <tr> <td>pHMM</td> <td>34.06</td> <td>26.00</td> </tr> <tr> <td>Naive</td> <td>39.68</td> <td>21.17</td> </tr> <tr> <td>CLSTM</td> <td>25.97</td> <td>13.77</td> </tr> <tr> <td>TreNet</td> <td>25.89</td> <td>12.89</td> </tr> <tr> <td rowspan="9">Stock</td> <td>CNN</td> <td>18.87</td> <td>12.78</td> </tr> <tr> <td>LSTM</td> <td>11.07</td> <td>8.40</td> </tr> <tr> <td>SVRBF</td> <td>11.38</td> <td>7.40</td> </tr> <tr> <td>SVPOLY</td> <td>11.40</td> <td>7.42</td> </tr> <tr> <td>SVSIG</td> <td>11.49</td> <td>7.41</td> </tr> <tr> <td>pHMM</td> <td>36.37</td> <td>8.70</td> </tr> <tr> <td>Naive</td> <td>11.36</td> <td>8.58</td> </tr> <tr> <td>CLSTM</td> <td>9.26</td> <td>7.31</td> </tr> <tr> <td>TreNet</td> <td>8.86</td> <td>6.84</td> </tr> <tr> <td rowspan="9">GasSensor</td> <td>CNN</td> <td>53.99</td> <td>11.51</td> </tr> <tr> <td>LSTM</td> <td>55.77</td> <td>11.22</td> </tr> <tr> <td>SVRBF</td> <td>62.81</td> <td>10.21</td> </tr> <tr> <td>SVPOLY</td> <td>70.91</td> <td>10.95</td> </tr> <tr> <td>SVSIG</td> <td>85.69</td> <td>11.92</td> </tr> <tr> <td>pHMM</td> <td>111.62</td> <td>13.07</td> </tr> <tr> <td>Naive</td> <td>53.76</td> <td>10.57</td> </tr> <tr> <td>CLSTM</td> <td>54.20</td> <td>14.86</td> </tr> <tr> <td>TreNet</td> <td>52.28</td> <td>9.57</td> </tr> </table> Table 1: RMSE of the prediction of local trend duration and slope on each dataset. Evaluation metric: We evaluate the predictive performance of TreNet and the baselines in terms of Root Mean Square Error (RMSE). The lower the RMSE, the more accurate the predictions. Training: The training procedure of TreNet and the baselines in our paper follows the schema below. The CNN and LSTM components in TreNet share the same network structure (e.g., number of layers, neurons in each layer) as the CNN and LSTM baselines. The CNN has two stacked convolutional layers, which have 32 filters of sizes 2 and 4, respectively. The number of memory cells in the LSTM is 600.
For the baseline CNN and LSTM, we tune the learning rate of each approach over \( \{10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\} \) (Sutskever et al., 2013) in order to achieve the lowest prediction error, and then fix the learning rate. For TreNet, in addition to the learning rate, the number of neurons in the feature fusion layer is chosen from the set {300, 600, 900, 1200} to achieve the best performance. We use dropout and L2 regularization to control the capacity of the neural networks and prevent overfitting, and set the values to 0.5 and \( 5 \times 10^{-4} \) respectively for all datasets (Mao et al., 2014). The Adam optimizer (Kingma & Ba, 2014) is chosen to learn the weights of the neural networks. Regarding the SVR based approaches, we carefully tune the parameters \( c \) (error penalty), \( d \) (degree of the kernel function), and \( \gamma \) (kernel coefficient) for the kernels. Each parameter is selected from the sets \( c \in \{10^{-5}, 10^{-4}, \ldots, 1, \ldots, 10^4, 10^5\} \), \( d \in \{1, 2, 3\} \), \( \gamma \in \{10^{-5}, 10^{-4}, \ldots, 1, \ldots, 10^5\} \), respectively. We iterate through all candidate combinations of \( c \), \( d \) and \( \gamma \), keep the parameters that yield the lowest RMSE on the validation set, and then use them to predict on the test set. The training datasets of the SVR and pHMM baselines are consistent with that of TreNet. Likewise, the CNN and LSTM baselines are fed the set of local data and the trend sequence of the same size as TreNet, respectively. In addition, since the window size of the local data is tunable, we vary it, i.e. \( w \), over the range {100, 300, 500, 700, 900}, so as to investigate how the size of local data influences the prediction performance. The results are presented in Section 5.2. The model’s performance on the validation set is evaluated after each epoch of training. Each model is trained for at least 50 epochs. Meanwhile, the training process adopts early stopping if no further improvement on the validation set shows up within 50 epochs. 5.2 EXPERIMENT RESULTS Table 1 compares the prediction performance of TreNet and the baselines. For each dataset, the window size of the local data is held constant for the approaches (i.e., CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet) that take local data as input. Then, the results of each approach are obtained by tuning the corresponding parameters as described in Section 5.1. In Table 1, we observe that TreNet consistently outperforms the baselines on the duration and slope prediction, achieving up to around 30% lower errors. This verifies that the hybrid architecture of TreNet can improve the performance by utilizing the information captured by both CNN and LSTM. Specifically, the pHMM method performs worst due to the limited representation capability of HMMs. On the slope prediction, SVR based approaches can obtain results comparable to TreNet. In the following group of experiments, we investigate the effect of the local data size (i.e., \( w \)) on the prediction. In particular, we tune the local data size for the approaches whose input features contain local data and observe the prediction errors. Such approaches include CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet. LSTM only consumes the trend sequence and thus is not included. Due to the page limit, we report the results on the HousePC dataset in Table 2 and Table 3. The results on the Stock and GasSensor datasets can be found in Section 7.
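To make the SVR tuning protocol above concrete, the following is a minimal sketch using scikit-learn; X_train, y_train, X_val, y_val, X_test and y_test are assumed to be prepared feature/target arrays, and the grids mirror the sets quoted above:

```python
from itertools import product
import numpy as np
from sklearn.svm import SVR

Cs = [10.0 ** e for e in range(-5, 6)]      # c in {1e-5, ..., 1e5}
gammas = [10.0 ** e for e in range(-5, 6)]  # gamma in {1e-5, ..., 1e5}
degrees = [1, 2, 3]                         # d, only relevant to the poly kernel

best_rmse, best_model = np.inf, None
for kernel in ("rbf", "poly", "sigmoid"):
    for c, g, d in product(Cs, gammas, degrees if kernel == "poly" else [3]):
        model = SVR(kernel=kernel, C=c, gamma=g, degree=d)
        model.fit(X_train, y_train)  # X_train etc. assumed prepared
        rmse = np.sqrt(np.mean((model.predict(X_val) - y_val) ** 2))
        if rmse < best_rmse:
            best_rmse, best_model = rmse, model

# the configuration with the lowest validation RMSE is used on the test set
test_rmse = np.sqrt(np.mean((best_model.predict(X_test) - y_test) ** 2))
```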
The Naive baseline takes no original time series data as input, and CLSTM works on the whole time series and has no notion of local data. Thus they are excluded from this set of experiments. In Table 2 we observe that, compared to the baselines, TreNet has the lowest errors on the duration prediction across different window sizes. pHMM requires sufficient data points to model the relations between segments and fails to work with a window size of 100. As the window size increases and more local data points are fed to the training process, the prediction errors of CNN and TreNet decrease or nearly stabilize. This could be because only a certain amount of local data has predictive power. The filtering and pooling mechanism enables the CNN to focus on the local data with strong predictive power, and thus feeding in more local data gives rise only to marginal improvements. A similar phenomenon is observed for the slope prediction, as shown in Table 3. For more results and discussion, please refer to Section 7. <table> <tr> <th>Window Size</th> <th>CNN</th> <th>SVRBF</th> <th>SVPOLY</th> <th>SVSIG</th> <th>pHMM</th> <th>TreNet</th> </tr> <tr> <td>100</td> <td>29.37</td> <td>31.48</td> <td>31.96</td> <td>31.88</td> <td>-</td> <td>25.93</td> </tr> <tr> <td>300</td> <td>27.33</td> <td>31.17</td> <td>31.61</td> <td>31.66</td> <td>30.03</td> <td>25.94</td> </tr> <tr> <td>500</td> <td>27.51</td> <td>31.81</td> <td>31.81</td> <td>31.80</td> <td>34.06</td> <td>25.89</td> </tr> <tr> <td>700</td> <td>27.41</td> <td>31.10</td> <td>31.09</td> <td>31.11</td> <td>27.37</td> <td>25.72</td> </tr> <tr> <td>900</td> <td>27.42</td> <td>31.28</td> <td>31.27</td> <td>31.27</td> <td>28.45</td> <td>25.62</td> </tr> </table> Table 2: RMSE of the duration predictions w.r.t. different sizes of local data in the HousePC dataset <table> <tr> <th>Window Size</th> <th>CNN</th> <th>SVRBF</th> <th>SVPOLY</th> <th>SVSIG</th> <th>pHMM</th> <th>TreNet</th> </tr> <tr> <td>100</td> <td>13.68</td> <td>12.93</td> <td>12.9352</td> <td>12.9346</td> <td>-</td> <td>13.14</td> </tr> <tr> <td>300</td> <td>13.60</td> <td>12.93</td> <td>12.9346</td> <td>12.9345</td> <td>27.75</td> <td>13.15</td> </tr> <tr> <td>500</td> <td>13.56</td> <td>12.94</td> <td>12.9342</td> <td>12.9346</td> <td>26.00</td> <td>12.89</td> </tr> <tr> <td>700</td> <td>13.52</td> <td>12.93</td> <td>12.9345</td> <td>12.9345</td> <td>35.32</td> <td>12.86</td> </tr> <tr> <td>900</td> <td>13.60</td> <td>12.94</td> <td>12.9350</td> <td>12.9346</td> <td>37.60</td> <td>12.96</td> </tr> </table> Table 3: RMSE of the slope predictions w.r.t. different sizes of local data in the HousePC dataset 6 CONCLUSION In this paper we propose TreNet, a novel hybrid neural network to learn and predict the local trend behaviour of time series. The experimental results demonstrate that such a hybrid framework can indeed utilize the complementary information extracted by CNN and LSTM to enhance the prediction performance. Moreover, the architecture is generic and extensible in that additional exogenous time series can be fed to TreNet, so as to boost performance and investigate the effect of different data sources on trend evolution. REFERENCES George S Atsalakis and Kimon P Valavanis. Forecasting stock market short-term trends using a neuro-fuzzy based methodology. Expert Systems with Applications, 36(7):10696–10707, 2009. Nicolas Ballas, Li Yao, Chris Pal, and Aaron Courville. Delving deeper into convolutional networks for learning video representations. arXiv preprint arXiv:1511.06432, 2015.
Pouya Bashivan, Irina Rish, Mohammed Yeasin, and Noel Codella. Learning representations from eeg with deep recurrent-convolutional neural networks. arXiv preprint arXiv:1511.06448, 2015. Iyad Batal, Dmitriy Fradkin, James Harrison, Fabian Moerchen, and Milos Hauskrecht. Mining recent temporal patterns for event detection in multivariate time series data. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 280–288. ACM, 2012. Li-Chiu Chang, Pin-An Chen, and Fi-John Chang. Reinforced two-step-ahead weight adjustment technique for online training of recurrent neural networks. IEEE transactions on neural networks and learning systems, 23(8):1269–1278, 2012a. Pei-Chann Chang et al. A novel model by evolving partially connected neural network for stock price trend forecasting. Expert Systems with Applications, 39(1):611–620, 2012b. Sucheta Chauhan and Lovekesh Vig. Anomaly detection in ecg time signals via deep long short-term memory networks. In Data Science and Advanced Analytics (DSAA), 2015 IEEE International Conference on, pp. 1–7. IEEE, 2015. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014. Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625–2634, 2015. A. Graves. Supervised Sequence Labelling with Recurrent Neural Networks. Studies in Computational Intelligence. Springer, 2012. Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5):602–610, 2005. Nils Y Hammerla, Shane Halloran, and Thomas Ploetz. Deep, convolutional, and recurrent models for human activity recognition using wearables. arXiv preprint arXiv:1604.08880, 2016. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735–1780, 1997. Lakhmi C Jain, Manjeevan Seera, Chee Peng Lim, and P Balasubramaniam. A review of online learning in supervised neural networks. Neural Computing and Applications, 25(3-4):491–509, 2014. Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1725–1732, 2014. Eamonn Keogh, Selina Chu, David Hart, and Michael Pazzani. An online algorithm for segmenting time series. In Data Mining, 2001. ICDM 2001, Proceedings IEEE International Conference on, pp. 289–296. IEEE, 2001. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Colin Lea, Rene Vidal, Austin Reiter, and Gregory D Hager. Temporal convolutional networks: A unified approach to action segmentation. arXiv preprint arXiv:1608.08242, 2016. Zhen Li and Yizhou Yu. Protein secondary structure prediction using cascaded convolutional and recurrent neural networks. arXiv preprint arXiv:1604.07176, 2016. Zachary C Lipton, David C Kale, Charles Elkan, and Randall Wetzel. Learning to diagnose with lstm recurrent neural networks. arXiv preprint arXiv:1511.03677, 2015.
Jiajun Liu, Kun Zhao, Brano Kusy, Ji-rong Wen, and Raja Jurdak. Temporal embedding in convolutional neural networks for robust learning of abstract snippets. arXiv preprint arXiv:1502.05113, 2015. Qi Lyu and Jun Zhu. Revisit long short-term memory: An optimization perspective. In Advances in neural information processing systems workshop on deep Learning and representation Learning, 2014. Pankaj Malhotra, Lovekesh Vig, Gautam Shroff, and Puneet Agarwal. Long short term memory networks for anomaly detection in time series. In European Symposium on Artificial Neural Networks, volume 23, 2015. Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, and Alan Yuille. Deep captioning with multimodal recurrent neural networks (m-rnn). arXiv preprint arXiv:1412.6632, 2014. Yasuko Matsubara, Yasushi Sakurai, and Christos Faloutsos. Autoplait: Automatic mining of co-evolving time sequences. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pp. 193–204. ACM, 2014. Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th international conference on machine learning (ICML-13), pp. 1139–1147, 2013. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112, 2014. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156–3164, 2015. Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. Cnn-rnn: A unified framework for multi-label image classification. arXiv preprint arXiv:1604.04573, 2016a. Linlin Wang, Zhu Cao, Yu Xia, and Gerard de Melo. Morphological segmentation with window lstm neural networks. In Thirtieth AAAI Conference on Artificial Intelligence, 2016b. Peng Wang, Haixun Wang, and Wei Wang. Finding semantics in time series. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of data, pp. 385–396. ACM, 2011. Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015. Jian Bo Yang, Minh Nhut Nguyen, Phyo Phyo San, Xiao Li Li, and Shonali Krishnaswamy. Deep convolutional neural networks on multichannel time series for human activity recognition. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), Buenos Aires, Argentina, pp. 25–31, 2015. Chao Yuan. Unsupervised machine condition monitoring using segmental hidden markov models. In Proceedings of the 24th International Conference on Artificial Intelligence, pp. 4009–4016. AAAI Press, 2015. Hai-xiang Zhao and Frédéric Magoulès. A review on the prediction of building energy consumption. Renewable and Sustainable Energy Reviews, 16(6):3586–3592, 2012. 7 APPENDIX 7.1 DATA PRE-PROCESSING In this part, we describe the data pre-processing, which extracts the local trend sequence from raw time series data for the subsequent neural network training and testing. We convert the raw time series data into a piecewise linear representation, namely consecutive segments (Keogh et al., 2001; Wang et al., 2011). Each segment corresponds to a local trend and is fitted by a linear function of the time series value w.r.t.
time, e.g., \( x_t = \beta_1 t + \beta_0 + \epsilon \) over the time range \([t_1, t_2]\) of this segment. Then, the slope and duration are derived from the coefficient \( \beta_1 \) and \([t_1, t_2]\). Technically, we adopt the bottom-up approach of (Keogh et al., 2001), since it achieves lower approximation errors compared with top-down and sliding-window methods. The process is illustrated in Figure 3. Figure 3: Illustration of local trend extraction via time series segmentation. (Best viewed in colour) Initially, we approximate time series \( \mathcal{X} \) with \( \left\lceil \frac{T}{2} \right\rceil \) line segments (\( T \) is the length of the time series). Then, we iteratively merge neighbouring segments to build longer ones. In each iteration, the pair of neighbouring segments with the minimal approximation error after merging is merged into a new segment. The merging process repeats until every possible merge would give rise to a segment with an error above a specified threshold. We use the relative mean squared error as the error metric and specify the threshold as 0.05 (a code sketch of this procedure is given below). 7.2 ADDITIONAL EXPERIMENT RESULTS ![Visualization of the trend prediction by TreNet](page_184_97_1016_246.png) (a) HousePC (b) Stock (c) GasSensor Figure 4: Visualization of the trend prediction by TreNet in the HousePC, Stock and GasSensor datasets. The blue line in each figure represents the historical trend sequence. The yellow line represents the predicted local trend. In this group of experiments, we visualize the trend prediction using a sample testing instance from each dataset in Figure 4. We can observe that in HousePC TreNet successfully predicts the changed trend, even though there are successive upward trends before it. In the Stock and GasSensor datasets, the succeeding upward and downward trends are correctly predicted as well. <table> <tr> <th>Window Size</th> <th>CNN</th> <th>SVRBF</th> <th>SVPOLY</th> <th>SVSIG</th> <th>pHMM</th> <th>TreNet</th> </tr> <tr> <td>100</td> <td>18.87</td> <td>11.38</td> <td>11.40</td> <td>11.49</td> <td>-</td> <td>8.86</td> </tr> <tr> <td>300</td> <td>18.17</td> <td>11.41</td> <td>11.44</td> <td>11.42</td> <td>39.84</td> <td>8.85</td> </tr> <tr> <td>500</td> <td>18.06</td> <td>11.39</td> <td>11.44</td> <td>11.36</td> <td>32.10</td> <td>8.51</td> </tr> <tr> <td>700</td> <td>18.10</td> <td>11.45</td> <td>11.59</td> <td>11.58</td> <td>36.37</td> <td>8.58</td> </tr> <tr> <td>900</td> <td>18.07</td> <td>11.32</td> <td>11.47</td> <td>11.59</td> <td>38.36</td> <td>8.78</td> </tr> </table> Table 4: RMSE of the duration predictions on different sizes of local data in the Stock dataset <table> <tr> <th>Window Size</th> <th>CNN</th> <th>SVRBF</th> <th>SVPOLY</th> <th>SVSIG</th> <th>pHMM</th> <th>TreNet</th> </tr> <tr> <td>100</td> <td>12.78</td> <td>7.40</td> <td>7.42</td> <td>7.41</td> <td>-</td> <td>6.84</td> </tr> <tr> <td>300</td> <td>12.24</td> <td>7.42</td> <td>7.51</td> <td>7.38</td> <td>6.67</td> <td>6.53</td> </tr> <tr> <td>500</td> <td>12.13</td> <td>7.47</td> <td>7.41</td> <td>7.42</td> <td>7.59</td> <td>6.58</td> </tr> <tr> <td>700</td> <td>12.24</td> <td>7.53</td> <td>7.58</td> <td>7.51</td> <td>9.74</td> <td>6.75</td> </tr> <tr> <td>900</td> <td>12.25</td> <td>7.61</td> <td>7.45</td> <td>7.59</td> <td>14.00</td> <td>6.73</td> </tr> </table> Table 5: RMSE of the slope predictions on different sizes of local data in the Stock dataset Then, we provide the RMSE w.r.t. the varying window size on the Stock and GasSensor datasets in Table 4, Table 5, Table 6 and Table 7.
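Before turning to those tables, here is the promised sketch of the bottom-up segmentation from Section 7.1. It is a minimal illustration, assuming that "relative mean squared error" means the mean squared residual of the least-squares line normalized by the segment variance; all function names are ours, and x is assumed to be a 1-d NumPy array:

```python
import numpy as np

def fit_cost(seg):
    """Relative MSE of the least-squares line through one segment."""
    t = np.arange(len(seg))
    b1, b0 = np.polyfit(t, seg, 1)
    return np.mean((seg - (b1 * t + b0)) ** 2) / (np.var(seg) + 1e-12)

def bottom_up_segment(x, max_error=0.05):
    """Greedy bottom-up segmentation (Keogh et al., 2001).

    Start from ceil(T/2) two-point segments and repeatedly merge the
    neighbouring pair whose merged fit error is smallest, until every
    remaining merge would exceed max_error. Costs are recomputed each
    pass for clarity; real implementations cache them."""
    bounds = [(i, min(i + 2, len(x))) for i in range(0, len(x), 2)]
    while len(bounds) > 1:
        costs = [fit_cost(x[bounds[i][0]: bounds[i + 1][1]])
                 for i in range(len(bounds) - 1)]
        i = int(np.argmin(costs))
        if costs[i] > max_error:
            break
        bounds[i] = (bounds[i][0], bounds[i + 1][1])
        del bounds[i + 1]
    return bounds

def to_trends(x, bounds):
    """Turn segments into (duration, slope-angle-in-degrees) pairs,
    matching the bounded slope range [-90, 90] used in Section 5.1."""
    trends = []
    for s, e in bounds:
        b1 = 0.0 if e - s < 2 else np.polyfit(np.arange(e - s), x[s:e], 1)[0]
        trends.append((e - s, float(np.degrees(np.arctan(b1)))))
    return trends
```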
From the results in Tables 4–7, we observe that TreNet outperforms the baselines on almost all window sizes. Meanwhile, the prediction errors often first decrease and then stabilize as the window size grows. **Window size of local data:** The observations in the above experiments w.r.t. the varying window size provide guidance for choosing the window size of the local data. Given the training dataset, we can find the maximum duration of local trends and take it as the local data size. Doing so ensures that the range of local data in each training instance covers the most recent local trend, whose raw data is believed to have strong predictive power for the subsequent trend. Additionally, we observe that setting the window size of the local data of CNN and TreNet in this way achieves prediction errors comparable to the cases with larger window sizes. <table> <tr> <th>Window Size</th> <th>CNN</th> <th>SVRBF</th> <th>SVPOLY</th> <th>SVSIG</th> <th>pHMM</th> <th>TreNet</th> </tr> <tr> <td>100</td> <td>54.23</td> <td>57.77</td> <td>65.99</td> <td>99.78</td> <td>-</td> <td>53.91</td> </tr> <tr> <td>300</td> <td>53.99</td> <td>62.81</td> <td>70.91</td> <td>85.69</td> <td>-</td> <td>52.28</td> </tr> <tr> <td>500</td> <td>53.82</td> <td>61.86</td> <td>64.33</td> <td>91.51</td> <td>111.62</td> <td>51.77</td> </tr> <tr> <td>700</td> <td>53.14</td> <td>61.20</td> <td>63.89</td> <td>78.20</td> <td>175.36</td> <td>51.15</td> </tr> <tr> <td>900</td> <td>53.19</td> <td>61.45</td> <td>63.83</td> <td>68.09</td> <td>255.73</td> <td>51.25</td> </tr> </table> Table 6: RMSE of the duration predictions on different sizes of local data in the GasSensor dataset <table> <tr> <th>Window Size</th> <th>CNN</th> <th>SVRBF</th> <th>SVPOLY</th> <th>SVSIG</th> <th>pHMM</th> <th>TreNet</th> </tr> <tr> <td>100</td> <td>11.98</td> <td>11.16</td> <td>11.19</td> <td>12.48</td> <td>-</td> <td>10.30</td> </tr> <tr> <td>300</td> <td>11.51</td> <td>10.21</td> <td>10.95</td> <td>11.92</td> <td>-</td> <td>9.57</td> </tr> <tr> <td>500</td> <td>11.75</td> <td>10.08</td> <td>10.65</td> <td>11.64</td> <td>13.07</td> <td>9.60</td> </tr> <tr> <td>700</td> <td>11.59</td> <td>9.54</td> <td>10.44</td> <td>11.72</td> <td>12.29</td> <td>9.55</td> </tr> <tr> <td>900</td> <td>12.10</td> <td>9.61</td> <td>10.37</td> <td>11.54</td> <td>12.37</td> <td>9.46</td> </tr> </table> Table 7: RMSE of the slope predictions on different sizes of local data in the GasSensor dataset 8 DISCUSSION For multivariate time series, we can augment the input of TreNet by including the trend sequences and local data of exogenous time series and then train TreNet for a certain target time series to predict its trend. Another line of research is to explore equipping TreNet with multi-task learning. This is motivated by the observation that if we decompose the trend forecasting problem into classification and regression, respectively for the slope and duration, we can utilize the correlation between slope and duration to boost the prediction performance. In addition, there could be alternative frameworks for combining the outputs of CNN and LSTM, and our work opens the door to applying hybrid neural networks for trend analysis in time series.
ABSTRACT Local trends of time series characterize the intermediate upward and downward patterns of time series. Learning and forecasting the local trend in time series data play an important role in many real applications, ranging from investing in the stock market, resource allocation in data centers and load schedule in smart grid. Inspired by the recent successes of neural networks, in this paper we propose TreNet, a novel end-to-end hybrid neural network that predicts the local trend of time series based on local and global contextual features. TreNet leverages convolutional neural networks (CNNs) to extract salient features from local raw data of time series. Meanwhile, considering long-range dependencies existing in the sequence of historical local trends, TreNet uses a long-short term memory recurrent neural network (LSTM) to capture such dependency. Furthermore, for predicting the local trend, a feature fusion layer is designed in TreNet to learn joint representation from the features captured by CNN and LSTM. Our proposed TreNet demonstrates its effectiveness by outperforming conventional CNN, LSTM, HMM method and various kernel based baselines on real datasets. 1 INTRODUCTION Time series, which is a sequence of data points in time order, is being generated in a wide spectrum of domains, such as daily fluctuation of the stock market, power consumption records of households, performance monitoring data of clusters in data centres, and so on. In many applications, users are interested in understanding the evolving trend in time series and forecasting the trend, since the conventional prediction on specific data points could deliver very little information about the semantics and dynamics of the underlying process generating the time series. For instance, time series in Figure 1 are from the household power consumption dataset. Figure 1(a) shows some raw data points of time series. Though point A and B have approximately the same value, the underlying system is likely to be in two different states when it outputs A and B, because A is in an upward trend while B is in a downward trend (Wang et al., 2011; Matsubara et al., 2014). On the other hand, even when two points with the similar value are both in the upward trend, e.g. point A and C, the different slopes and durations of the trends where point A and C locate, could also indicate different states of the underlying process. Particularly, in this paper we are interested in the local trend of time series which measures the intermediate local behaviour, i.e., upward or downward pattern of time series that characterized by the slope and duration (Wang et al., 2011). For instance, in Figure 1(b) the linear segments over raw data points of time series represent the local trends extracted from a real household power consumption time series. For the ease of presentation, we will use the term trend and local trend interchangeably in the rest of the paper. Learning and forecasting local trends are quite useful in a wide range of applications. For instance, in the stock market, due to its high volatility and noisy environment, in reality predicting stock price trends is preferred over the prediction of the stock market absolute values (Atsalakis & Valavanis, 2009). Predicting the local trend of stock price time series empowers *These two authors contributed equally. 
1 https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption traders to design profitable trading strategies (Chang et al., 2012b; Atsalakis & Valavanis, 2009). In the smart energy domain, knowing the predictive local trend of power consumption time series enables energy providers to schedule power supply and maximize energy utilization (Zhao & Magoulès, 2012). Meanwhile, in recent years neural networks have shown the dramatical power in a wide spectrum of domains, e.g., natural language processing, computer vision, speech recognition, time series analysis, etc. (Wang et al., 2016b; Sutskever et al., 2014; Yang et al., 2015; Lipton et al., 2015). For time series data, two mainstream architectures, convolutional neural network (CNN) and recurrent neural network (RNN) have been exploited in different time series related tasks, e.g., RNN in time series classification (Lipton et al., 2015) and CNN in activity recognition and snippet learning (Liu et al., 2015; Yang et al., 2015). RNN is powerful in discovering the dependency in sequence data (Jain et al., 2014; Graves, 2012) and particularly the Long Short-Term Memory (LSTM) RNN works well on sequence data with long-term dependencies (Chung et al., 2014; Hochreiter & Schmidhuber, 1997) due to the internal memory mechanism. CNN excels in exacting effective representation of local salience from raw data of time series by enforcing a local connectivity between neurons. (Yang et al., 2015; Hammerla et al., 2016). ![Three subfigures showing time series plots and trend analysis](page_384_1012_1096_246.png) Figure 1: (a) Time series of household power consumption. (b) Local trends in time series. (c) Effect of local raw data on the trend forecasting. In this paper, we focus on learning and forecasting the local trends in time series via neural networks. This involves learning different aspects of the data. On one hand, the sequence of historical local trends describes the long-term contextual information of time series and thus naturally affects the evolution of the following local trend. On the other hand, the recent raw data points of time series (Wang et al., 2011; Batal et al., 2012), which represent the local variation and behaviour of time series, affect the evolving of the following trend as well and have particular predictive power for abruptly changing local trends (Wang et al., 2011). For instance, in Figure 1(c), trend 1, 2 and 3 present a continuous upward pattern. Then when we aim at predicting the subsequent trend of time series at the end of the third local trend, the previous three successive upward trends outline a probable increasing trend afterwards. However, the local data around the end of the third trend, e.g., data points in the red circle, indicate that time series could stabilize and even decrease. The data points after the third trend indeed present a decreasing trend indicated by the red dotted segment. In this case, the subsequent trend has more dependency on the local data points. Therefore, it is highly desired to develop a systematic way to model such various hidden and complementary dependencies in time series for the local trend forecasting problem. To this end, we propose a end-to-end hybrid neural network, referred to as TreNet. 
In particular, it consists of a LSTM recurrent neural network to capture the long dependency in historical local trends, a convolutional neural network to extract local features from local raw data of time series, and a feature fusion layer to learn joint representation to take advantage of both features drawn from CNN and LSTM. Such joint representation is used for the local trend forecasting. The experimental analysis on real datasets demonstrates that TreNet outperforms individual recurrent neural network, convolutional neural network and a variety of baselines in term of local trend prediction accuracy. The rest of the paper is organized as follows. Section 2 presents related work, while Section 3 defines the problem to be solved and introduces the notations. In Section 4, we present the proposed TreNet. Section 5 demonstrates the performance of our method and baselines on real datasets. Finally, the paper is concluded in Section 6. Refer to Section 7 and Section 8 for more experiment results and discussion. 2 RELATED WORK Traditional learning approaches over local trends of time series mainly make use of Hidden Markov Models (HMMs) (Wang et al., 2011; Matsubara et al., 2014). HMMs maintain short-term state dependences, i.e., the memoryless Markov property and predefined number of states, which requires significant task specific knowledge. RNNs instead use high dimensional, distributed hidden states that could take into account long-term dependencies in sequence data. Previous time series segmentation approaches (Keogh et al., 2001; Matsubara et al., 2014; Yuan, 2015) focus on achieving a meaningful segmentation and finding patterns, rather than modeling the relation in segments and therefore are not suitable for forecasting local trends. Multi-step ahead prediction is another way to realize local trend prediction by fitting the predicted values to estimate the local trend. However, multi-step ahead prediction is a non-trivial problem itself (Chang et al., 2012a). In this paper, we concentrate on directly learning local trends through neural networks. RNNs have recently shown promising results in a variety of applications, especially when there exist sequential dependencies in data (Lyu & Zhu, 2014; Chung et al., 2014; Sutskever et al., 2014). Long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997; Lyu & Zhu, 2014; Chung et al., 2014), a class of recurrent neural networks with sophisticated recurrent hidden and gated units, are particularly successful and popular due to its ability to learn hidden long-term sequential dependencies. (Lipton et al., 2015) uses LSTMs to recognize patterns in multivariate time series, especially for multi-label classification of diagnoses. (Chauhan & Vig, 2015; Malhotra et al., 2015) evaluate the ability of LSTMs to detect anomalies in ECG time series. Bidirectional LSTM (Graves & Schmidhuber, 2005) is usually intended for speech processing rather than time series forecasting problems. Our paper focuses on using LSTM to capture the dependency in the sequence of historical local trends and meanwhile the hidden states in LSTM are further used to learn joint feature representations for the local trend forecasting. CNN is often used to learn effective representation of local salience from raw data (Vinyals et al., 2015; Donahue et al., 2015; Karpathy et al., 2014). (Hammerla et al., 2016; Yang et al., 2015; Lea et al., 2016) make use of CNNs to extract features from raw time series data for activity/action recognition. 
Liu et al. (2015) focus on the prediction of periodic time series values by using a CNN and embedding the time series with its potential neighbors in the temporal domain.

Our proposed TreNet combines the strengths of both LSTM and CNN in a novel, unified neural network architecture for local trend forecasting. Hybrid neural networks, which combine the strengths of various neural networks, are receiving increasing interest in the computer vision domain, for tasks such as image captioning (Mao et al., 2014; Vinyals et al., 2015; Donahue et al., 2015), image classification (Wang et al., 2016b), protein structure prediction (Li & Yu, 2016), and action recognition (Ballas et al., 2015; Donahue et al., 2015). But the efficient exploitation of such hybrid architectures has not been well studied for time series data, especially for the trend forecasting problem. Li & Yu (2016) and Ballas et al. (2015) utilize CNNs over images in cascade with RNNs in order to capture temporal features for classification. Bashivan et al. (2015) transform EEG data into a sequence of topology-preserving multi-spectral images and then train a cascaded convolutional-recurrent network over such images for EEG classification. Wang et al. (2016a) and Mao et al. (2014) propose CNN-RNN frameworks that learn a shared representation for image captioning and classification problems. In our proposed TreNet, the LSTM and CNN first respectively learn from the trend evolution and the local raw data of the time series, and TreNet then fuses the features captured by the LSTM and CNN to predict the trend.

3 PROBLEM FORMULATION

In this section, we provide the formal definition of the trend learning and forecasting problem studied in this paper.

We define a time series as a sequence of data points \( \mathcal{X} = \{ x_1, \ldots, x_T \} \), where each data point \( x_t \) is real-valued and the subscript \( t \) represents the time instant. The corresponding local trend sequence of \( \mathcal{X} \) is a series of piecewise linear representations of \( \mathcal{X} \), denoted by \( \mathcal{T} = \{ (\ell_k, s_k) \} \). Each element of \( \mathcal{T} \), e.g., \( (\ell_k, s_k) \), describes a linear function over a certain subsequence (or segment) of \( \mathcal{X} \) and corresponds to a local trend in \( \mathcal{X} \). Such local trends in \( \mathcal{T} \) are extracted from \( \mathcal{X} \) by time series segmentation and by fitting a linear function w.r.t. time \( t \) over each segment (Keogh et al., 2001; Wang et al., 2011). \( \ell_k \) and \( s_k \) respectively represent the duration and slope of trend \( k \); \( \ell_k \) is measured in terms of the time range covered by trend \( k \). Local trends in \( \mathcal{T} \) are time-ordered and non-overlapping, and the durations of all the local trends in \( \mathcal{T} \) satisfy \( \sum_k \ell_k = T \). In addition, a local trend sequence ending by time \( t \) is denoted by \( \mathcal{T}(t) = \{ (\ell_k, s_k) \mid \sum_k \ell_k \leq t \} \). Meanwhile, as discussed in Section 1, the local raw data of the time series affects the variation of trends as well, and thus we define the local data w.r.t. a certain time instant \( t \) as the sequence of data points in a window of size \( w \), denoted by \( \mathcal{L}(t) = \{ x_{t-w}, \ldots, x_t \} \).

At a certain time \( t \), trend forecasting means predicting the duration and slope of the following trend based on a given sequence of historical trends \( \mathcal{T}(t) \) and the local data \( \mathcal{L}(t) \). The predicted duration and slope at time \( t \) are denoted by \( \hat{\ell}_t \) and \( \hat{s}_t \). Our proposed TreNet can be trained for predicting either \( \hat{\ell}_t \) or \( \hat{s}_t \); for simplicity, we use \( \hat{y}_t \) to represent the predicted value of TreNet throughout the paper. Therefore, given the training dataset \( \mathcal{D} = \mathcal{X} \cup \mathcal{T} \), we aim to propose a neural network based approach to learn a function \( \hat{y}_t = f(\mathcal{T}(t), \mathcal{L}(t)) \) for the trend forecasting. In this paper, we focus on univariate time series. The proposed method can be naturally generalized to multivariate time series as well by augmenting the input to the neural network; refer to Section 8 for more discussion.
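As a concrete illustration of this formulation, the following Python sketch extracts a local trend sequence \( \mathcal{T} = \{(\ell_k, s_k)\} \) from a raw series by top-down splitting. It is only our own stand-in for the segmentation of Keogh et al. (2001); the slope is reported as an angle in \([-90, 90]\), and all thresholds and names are arbitrary assumptions.

```python
import numpy as np

def linear_fit(segment):
    """Least-squares line over a segment; returns (slope, max abs residual)."""
    t = np.arange(len(segment))
    coef = np.polyfit(t, segment, 1)
    resid = np.max(np.abs(segment - np.polyval(coef, t)))
    return coef[0], resid

def extract_trends(x, max_error=0.5, min_len=4):
    """Top-down segmentation: recursively split until every segment is well
    approximated by a line, then emit (duration, slope-angle) pairs.
    A simple stand-in for the segmentation of Keogh et al. (2001)."""
    stack, trends = [(0, len(x))], []
    while stack:
        lo, hi = stack.pop()
        slope, resid = linear_fit(x[lo:hi])
        if hi - lo <= 2 * min_len or resid <= max_error:
            angle = np.degrees(np.arctan(slope))   # slope s_k in [-90, 90]
            trends.append((hi - lo, angle))        # (duration l_k, slope s_k)
        else:
            mid = (lo + hi) // 2
            stack.append((mid, hi))                # right half processed last,
            stack.append((lo, mid))                # so trends stay time-ordered
    return trends

x = np.sin(np.linspace(0, 6 * np.pi, 500)) + 0.05 * np.random.randn(500)
trends = extract_trends(x, max_error=0.2)
print(len(trends), sum(d for d, _ in trends))      # durations sum to T = 500
```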
4 HYBRID NEURAL NETWORKS FOR TREND LEARNING AND FORECASTING

In this section, we first present an overview of the proposed TreNet for trend forecasting, and then detail the components of TreNet.

Overview. The idea of TreNet is to combine a CNN with an LSTM so as to exploit their representational abilities on different aspects of the training data \( \mathcal{D} \) (\( \mathcal{D} = \mathcal{X} \cup \mathcal{T} \)), and then to learn a joint feature for the trend prediction. Technically, TreNet is designed to learn a predictive function \( \hat{y}_t = f(R(\mathcal{T}(t)), C(\mathcal{L}(t))) \). \( R(\mathcal{T}(t)) \) is derived by training the LSTM over the sequence \( \mathcal{T} \) to capture the dependency in the trend evolution, while \( C(\mathcal{L}(t)) \) corresponds to local features extracted by the CNN from \( \mathcal{L}(t) \). The long-term and local features captured by the LSTM and CNN, i.e., \( R(\mathcal{T}(t)) \) and \( C(\mathcal{L}(t)) \), convey complementary information pertaining to the trend variation. The feature fusion layer is therefore intended to take advantage of both features to produce a fused representation for improved performance. Finally, the trend prediction is realized by the function \( f(\cdot, \cdot) \), which corresponds to the feature fusion and output layers in Figure 2.

![Illustration of the hybrid architecture of TreNet.](page_370_1042_805_246.png)

Figure 2: Illustration of the hybrid architecture of TreNet. (best viewed in colour)

Learning the dependency in the trend sequence. During the training phase, the duration \( \ell_k \) and slope \( s_k \) of each local trend \( k \) in the sequence \( \mathcal{T} \) are fed into the LSTM layer of TreNet. Each \( j \)-th neuron in the LSTM layer maintains a memory \( c^j_k \) at step \( k \). The output \( h^j_k \), or the activation of this neuron, is then expressed as (Hochreiter & Schmidhuber, 1997; Chung et al., 2014):

\[ h_k^j = o_k^j \tanh(c_k^j) \] (1)

where \( o_k^j \) is an output gate, calculated as:

\[ o_k^j = \sigma(W_o[\ell_k \, s_k] + U_o h_{k-1} + V_o c_k)^j \] (2)

where \( [\ell_k \, s_k] \) is the concatenation of the duration and slope of trend \( k \), \( h_{k-1} \) and \( c_k \) are the vectorizations of the activations \( \{h_{k-1}^j\} \) and \( \{c_k^j\} \), and \( \sigma \) is the logistic sigmoid function. The memory cell \( c_k^j \) is then updated by partially forgetting the existing memory and adding new memory content \( \tilde{c}_k^j \):

\[ c_k^j = f_k^j c_{k-1}^j + i_k^j \tilde{c}_k^j , \quad \tilde{c}_k^j = \tanh(W_c[\ell_k \, s_k] + U_c h_{k-1})^j \] (3)

The extent to which the existing memory is forgotten is modulated by a forget gate \( f_k^j \), and the degree to which the new memory content is added to the memory cell is modulated by an input gate \( i_k^j \).
Then, such gates are computed as

\[ f_k^j = \sigma(W_f[\ell_k \, s_k] + U_f h_{k-1} + V_f c_{k-1})^j \] (4)

\[ i_k^j = \sigma(W_i[\ell_k \, s_k] + U_i h_{k-1} + V_i c_{k-1})^j \] (5)

At each step \( k \), the hidden activation \( h_k \) is the output passed to the feature fusion layer. Specifically, given a \( \mathcal{T}(t) \) containing \( n \) local trends (i.e., \( |\mathcal{T}(t)| = n \)), the output of the LSTM part is \( R(\mathcal{T}(t)) = h_n \).

Learning features from the local raw data of time series. When the \( k \)-th trend in \( \mathcal{T} \) is fed to the LSTM, the corresponding local raw time series data input to the CNN part of TreNet is \( \mathcal{L}(t) \), where \( t = \sum_{i=1}^k \ell_i \). The CNN consists of \( H \) stacked layers of 1-d convolutional, activation and pooling operations. Denote by \( a^i \) the input signal of layer \( i \), so that at the first layer \( a^1 = \mathcal{L}(t) \). Each layer has a specified number of filters \( n^i \) of a specified filter size \( d^i \). Each filter on a layer sweeps through the entire input signal to extract local features as follows:

\[ v_{m}^{i,j} = \phi\Big(b^{i,j} + \sum_{z=m-d^i/2}^{m+d^i/2} W_z^{i,j} a_z^i\Big), \quad \forall m = 1, \ldots, |a^i| \] (6)

where \( v_{m}^{i,j} \) is the activation of the \( j \)-th filter of layer \( i \) at position \( m \) of the input signal. Here \( \phi \) is the leaky rectified linear unit, which has been shown to perform better (Xu et al., 2015). Max-pooling is then performed over the \( v_{m}^{i,j} \) of each filter. Finally, the output of the CNN in TreNet is the concatenation of the max-pooling outputs of each filter on the last layer \( H \), namely:

\[ C(\mathcal{L}(t)) = [p^1, \ldots, p^{n^H}], \quad p^j = \big[\max_{1 \leq z \leq q} (\{v_{m+z}^{H,j}\})\big], \quad \forall j = 1, \ldots, n^H \] (7)

where \( q \) is the pooling size.

Feature fusion and output layers. The feature fusion layer combines the representations \( R(\mathcal{T}(t)) \) and \( C(\mathcal{L}(t)) \) to form a joint feature, which is then fed to the output layer to produce the trend prediction. In particular, we first map \( R(\mathcal{T}(t)) \) and \( C(\mathcal{L}(t)) \) to the same feature space and add them together to obtain the activation of the feature fusion layer (Mao et al., 2014). The output layer is a fully-connected layer following the feature fusion layer. Mathematically, the prediction of TreNet is expressed as:

\[ \hat{y}_t = f(R(\mathcal{T}(t)), C(\mathcal{L}(t))) = W^o \cdot \phi(W^r \cdot R(\mathcal{T}(t)) + W^c \cdot C(\mathcal{L}(t))) + b^o \] (8)

where \( \phi(\cdot) \) is the element-wise leaky ReLU activation function and \( + \) denotes element-wise addition. \( W^o \) and \( b^o \) are the weights and bias of the output layer.

To train TreNet, we adopt a squared error function plus a regularization term:

\[ J(\mathbf{W}, \mathbf{b} ; \mathcal{T}, \mathcal{X}) = \frac{1}{|\mathcal{T}|} \sum_{k=1}^{|\mathcal{T}|} (\hat{y}_k - y_k)^2 + \lambda \| \mathbf{W} \|_2 \] (9)

where \( \mathbf{W} \) and \( \mathbf{b} \) represent the weight and bias parameters in TreNet, \( \lambda \) is a hyperparameter for the regularization term, and \( y_k \) is the true value of the trend slope or duration. The cost function is differentiable, and the architecture of TreNet allows the gradients of the loss function (9) to be backpropagated to both the LSTM and CNN parts. TreNet can be trained separately for the slope and duration of local trends using \( \mathcal{T} \) and \( \mathcal{X} \). When performing forecasting, \( \mathcal{T}(t) \) and \( \mathcal{L}(t) \) are fed to TreNet, and the predicted value \( \hat{y}_t \) is either the slope or the duration, depending on the training target. A minimal sketch of this forward pass is given below.
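The following PyTorch sketch mirrors the architecture just described: an LSTM over (duration, slope) pairs, a 1-d CNN over the local window, and an additive fusion layer as in Equation 8. It is our own illustration, not the authors' code; the hidden sizes follow the experimental setup described later in Section 5.1 (two convolutional layers with 32 filters of sizes 2 and 4, 600 LSTM cells), while the fusion width, class names, and synthetic inputs are arbitrary assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TreNetSketch(nn.Module):
    """Minimal TreNet-like model: an LSTM over the trend sequence T(t),
    a 1-d CNN over the local raw window L(t), and additive feature fusion
    followed by a fully-connected output layer (Equation 8)."""
    def __init__(self, lstm_hidden=600, fusion_dim=600):
        super().__init__()
        # R(T(t)): LSTM over (duration, slope) pairs.
        self.lstm = nn.LSTM(input_size=2, hidden_size=lstm_hidden, batch_first=True)
        # C(L(t)): two stacked 1-d convolutional layers with leaky ReLU
        # activations and pooling, ending in a global max-pool per filter.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=2), nn.LeakyReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 32, kernel_size=4), nn.LeakyReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        # Map both representations into a shared fusion space and add them.
        self.w_r = nn.Linear(lstm_hidden, fusion_dim, bias=False)
        self.w_c = nn.Linear(32, fusion_dim, bias=False)
        self.out = nn.Linear(fusion_dim, 1)  # predicts slope OR duration

    def forward(self, trends, local):
        # trends: (batch, n, 2) sequence of (duration, slope) pairs
        # local:  (batch, 1, w) raw data points in the window L(t)
        h, _ = self.lstm(trends)
        r = h[:, -1, :]                      # R(T(t)) = h_n
        c = self.cnn(local).squeeze(-1)      # C(L(t))
        fused = F.leaky_relu(self.w_r(r) + self.w_c(c))
        return self.out(fused).squeeze(-1)   # y_hat

model = TreNetSketch()
y_hat = model(torch.randn(8, 12, 2), torch.randn(8, 1, 500))
loss = ((y_hat - torch.randn(8)) ** 2).mean()  # squared error term of Eq. 9
loss.backward()  # gradients reach both the LSTM and CNN parts
```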
5 EXPERIMENTAL ANALYSIS

In this section, we conduct extensive experiments to demonstrate the prediction performance of TreNet in comparison with a variety of baselines. Due to the page limit, refer to Section 7 for more experiment results.

5.1 EXPERIMENT SETUP

Dataset: We test our method and baselines on three real time series datasets.

• Daily Household Power Consumption (HousePC). This dataset[2] contains measurements of electric power consumption in one household with a one-minute sampling rate over a period of almost 4 years. Different electrical quantities and some sub-metering values are available. We use the voltage time series throughout the experiments.

• Gas Sensor (GasSensor). This dataset[3] contains the recordings of chemical sensors exposed to dynamic gas mixtures at varying concentrations. The measurements were obtained by the continuous acquisition of the sensor array signals for a duration of about 12 hours without interruption. We mainly use the gas mixture time series regarding Ethylene and Methane in air.

• Stock Transaction (Stock). This dataset is extracted from Yahoo Finance and contains the daily stock transaction information in the New York Stock Exchange from 1950-10 to 2016-4.

All datasets are preprocessed with the method of Keogh et al. (2001) to extract local trends. Alternative time series segmentation and local trend extraction approaches could be used as well; we choose Keogh et al. (2001) here due to its high efficiency. In total, we obtain 42,591, 4,720 and 1,316 local trends respectively from the above datasets. For ease of interpretation of the experimental results, the slope of an extracted local trend is represented by the angle of the corresponding linear function, and is thus in the bounded range \([-90, 90]\) degrees. The duration of a local trend is measured by the number of data points within the trend. The obtained trend sequences and the sets of local data are then split into training (80%), validation (10%) and test (10%) datasets.

Baselines: We compare TreNet with the following six baselines:

• CNN. This baseline predicts the trend by only using a CNN over the set of local raw data of the time series to learn features for the forecasting. The size of the local data is set to \( w \), as defined in Section 3.

• LSTM. This method uses an LSTM to learn dependencies in the trend sequence \( \mathcal{T} \) and predicts the trend using only the trained LSTM.

• Support Vector Regression (SVR). A family of support vector regression based approaches with different kernels is used for the trend forecasting.
We consider three commonly used kernels (Liu et al., 2015), i.e., the Radial Basis kernel (SVRBF), the Polynomial kernel (SVPOLY), and the Sigmoid kernel (SVSIG). The trend sequence and the corresponding set of local time series data are concatenated as the input features to these SVR approaches.

• Pattern-based Hidden Markov Model (pHMM). Wang et al. (2011) proposed a pattern-based hidden Markov model (HMM), which segments the time series and models the dependency between segments via an HMM. The derived HMM model is used to predict the state of the time series and then to estimate the trend based on that state.

• Naive. This is the naive approach that takes the duration and slope of the last trend as the prediction for the next one.

• ConvNet+LSTM (CLSTM). This baseline is based on the cascade structure of a ConvNet and an LSTM in (Bashivan et al., 2015), which feeds the features learnt by the ConvNet over the time series to an LSTM and obtains the prediction from the LSTM.

2 https://archive.ics.uci.edu/ml/datasets/Individual+household+electric+power+consumption
3 https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+under+dynamic+gas+mixtures

<table>
<tr> <th>Dataset</th> <th>Model</th> <th>RMSE @ Duration</th> <th>RMSE @ Slope</th> </tr>
<tr> <td rowspan="9">HousePC</td> <td>CNN</td> <td>27.51</td> <td>13.56</td> </tr>
<tr> <td>LSTM</td> <td>27.27</td> <td>13.27</td> </tr>
<tr> <td>SVRBF</td> <td>31.81</td> <td>12.94</td> </tr>
<tr> <td>SVPOLY</td> <td>31.81</td> <td>12.93</td> </tr>
<tr> <td>SVSIG</td> <td>31.80</td> <td>12.93</td> </tr>
<tr> <td>pHMM</td> <td>34.06</td> <td>26.00</td> </tr>
<tr> <td>Naive</td> <td>39.68</td> <td>21.17</td> </tr>
<tr> <td>CLSTM</td> <td>25.97</td> <td>13.77</td> </tr>
<tr> <td>TreNet</td> <td>25.89</td> <td>12.89</td> </tr>
<tr> <td rowspan="9">Stock</td> <td>CNN</td> <td>18.87</td> <td>12.78</td> </tr>
<tr> <td>LSTM</td> <td>11.07</td> <td>8.40</td> </tr>
<tr> <td>SVRBF</td> <td>11.38</td> <td>7.40</td> </tr>
<tr> <td>SVPOLY</td> <td>11.40</td> <td>7.42</td> </tr>
<tr> <td>SVSIG</td> <td>11.49</td> <td>7.41</td> </tr>
<tr> <td>pHMM</td> <td>36.37</td> <td>8.70</td> </tr>
<tr> <td>Naive</td> <td>11.36</td> <td>8.58</td> </tr>
<tr> <td>CLSTM</td> <td>9.26</td> <td>7.31</td> </tr>
<tr> <td>TreNet</td> <td>8.86</td> <td>6.84</td> </tr>
<tr> <td rowspan="9">GasSensor</td> <td>CNN</td> <td>53.99</td> <td>11.51</td> </tr>
<tr> <td>LSTM</td> <td>55.77</td> <td>11.22</td> </tr>
<tr> <td>SVRBF</td> <td>62.81</td> <td>10.21</td> </tr>
<tr> <td>SVPOLY</td> <td>70.91</td> <td>10.95</td> </tr>
<tr> <td>SVSIG</td> <td>85.69</td> <td>11.92</td> </tr>
<tr> <td>pHMM</td> <td>111.62</td> <td>13.07</td> </tr>
<tr> <td>Naive</td> <td>53.76</td> <td>10.57</td> </tr>
<tr> <td>CLSTM</td> <td>54.20</td> <td>14.86</td> </tr>
<tr> <td>TreNet</td> <td>52.28</td> <td>9.57</td> </tr>
</table>

Table 1: RMSE of the prediction of local trend duration and slope on each dataset.

Evaluation metric: We evaluate the predictive performance of TreNet and the baselines in terms of the Root Mean Square Error (RMSE); the lower the RMSE, the more accurate the predictions.

Training: The training procedure of TreNet and the baselines follows the scheme below. The CNN and LSTM components in TreNet share the same network structure (e.g., number of layers, neurons per layer) as the CNN and LSTM baselines. The CNN has two stacked convolutional layers with 32 filters of sizes 2 and 4 respectively. The number of memory cells in the LSTM is 600.
For the baseline CNN and LSTM, we tune the learning rate for each approach over \( \{10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\} \) (Sutskever et al., 2013) so as to achieve the lowest prediction error, and then fix the learning rate. For TreNet, in addition to the learning rate, the number of neurons in the feature fusion layer is chosen from {300, 600, 900, 1200} to achieve the best performance. We use dropout and L2 regularization to control the capacity of the neural networks and prevent overfitting, setting their values to 0.5 and \( 5 \times 10^{-4} \) respectively for all datasets (Mao et al., 2014). The Adam optimizer (Kingma & Ba, 2014) is chosen to learn the weights of the neural networks.

Regarding the SVR based approaches, we carefully tune the parameters \( c \) (error penalty), \( d \) (degree of the kernel function), and \( \gamma \) (kernel coefficient). Each parameter is selected from \( c \in \{10^{-5}, 10^{-4}, \ldots, 1, \ldots, 10^4, 10^5\} \), \( d \in \{1, 2, 3\} \), and \( \gamma \in \{10^{-5}, 10^{-4}, \ldots, 1, \ldots, 10^5\} \) respectively. We iterate through the candidate values of each combination of \( c \), \( d \) and \( \gamma \), keep the parameters that yield the lowest RMSE on the validation set, and then use them to predict on the test set (a compact sketch of this tuning loop is given below). The training datasets of the SVR and pHMM baselines are consistent with that of TreNet. Likewise, the CNN and LSTM baselines are respectively fed with the set of local data and the trend sequence of the same size as for TreNet. In addition, since the window size of the local data is tunable, we vary the window size \( w \) over {100, 300, 500, 700, 900} in order to investigate how the size of the local data influences the prediction performance; the results are presented in Section 5.2. The model's performance on the validation set is evaluated after each epoch of training. Each model is trained for at least 50 epochs, and the training process adopts early stopping if no further improvement on the validation set shows up within 50 epochs.
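A compact sketch of that SVR tuning loop, using scikit-learn's SVR (shown here only for the polynomial kernel; the RBF and sigmoid variants differ only in the kernel argument). The placeholder data and grids are illustrative: in the experiments, the features concatenate the trend sequence with the local window.

```python
import itertools
import numpy as np
from sklearn.svm import SVR

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Placeholder data standing in for the concatenated trend + local-window
# features; in the experiments these come from the preprocessed datasets.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 20)), rng.normal(size=200)
X_train, y_train = X[:160], y[:160]
X_val, y_val = X[160:180], y[160:180]
X_test, y_test = X[180:], y[180:]

c_grid = [10.0 ** e for e in range(-5, 6)]
gamma_grid = [10.0 ** e for e in range(-5, 6)]
d_grid = [1, 2, 3]          # degree: only used by the polynomial kernel

best_err, best_model = float("inf"), None
for c, gamma, d in itertools.product(c_grid, gamma_grid, d_grid):
    model = SVR(kernel="poly", C=c, gamma=gamma, degree=d)   # SVPOLY variant
    model.fit(X_train, y_train)
    err = rmse(y_val, model.predict(X_val))
    if err < best_err:
        best_err, best_model = err, model

print(best_err, rmse(y_test, best_model.predict(X_test)))
```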
5.2 EXPERIMENT RESULTS

Table 1 reports the prediction performance of TreNet and the baselines. For each dataset, the window size of the local data is held constant for the approaches (i.e., CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet) that take local data as input. The results for each approach are then obtained by tuning the corresponding parameters as described in Section 5.1. In Table 1, we observe that TreNet consistently outperforms the baselines on both duration and slope prediction, achieving up to around 30% lower error. This verifies that the hybrid architecture of TreNet can improve performance by utilizing the information captured by both the CNN and the LSTM. The pHMM method performs worst, due to the limited representational capability of HMMs. On the slope prediction, the SVR based approaches obtain results comparable to TreNet.

In the following group of experiments, we investigate the effect of the local data size (i.e., \( w \)) on the prediction. In particular, we tune the local data size for the approaches whose input features contain local data, and observe the prediction errors. Such approaches include CNN, SVRBF, SVPOLY, SVSIG, pHMM and TreNet; LSTM only consumes the trend sequence and is thus not included. The baseline Naive takes no original time series data as input, and CLSTM works on the whole time series and has no notion of local data, so both are excluded from this set of experiments. Due to the page limit, we report the results on the HousePC dataset in Table 2 and Table 3; the results on the Stock and GasSensor datasets can be found in Section 7.

In Table 2 we observe that, compared to the baselines, TreNet has the lowest errors on the duration prediction across the different window sizes. pHMM requires sufficient data points to model the relations between segments, and fails to work at window size 100. As the window size increases and more local data points are fed into the training process, the prediction errors of CNN and TreNet decrease or nearly stabilize. This could be because only a certain amount of local data has predictive power: the filtering and pooling mechanisms enable the CNN to focus on the local data with strong predictive power, so feeding in more local data only gives rise to marginal improvements. A similar phenomenon is observed on the slope prediction, as shown in Table 3. For more results and discussion, please refer to Section 7.

<table>
<tr> <th>Window Size</th> <th>CNN</th> <th>SVRBF</th> <th>SVPOLY</th> <th>SVSIG</th> <th>pHMM</th> <th>TreNet</th> </tr>
<tr> <td>100</td> <td>29.37</td> <td>31.48</td> <td>31.96</td> <td>31.88</td> <td>-</td> <td>25.93</td> </tr>
<tr> <td>300</td> <td>27.33</td> <td>31.17</td> <td>31.61</td> <td>31.66</td> <td>30.03</td> <td>25.94</td> </tr>
<tr> <td>500</td> <td>27.51</td> <td>31.81</td> <td>31.81</td> <td>31.80</td> <td>34.06</td> <td>25.89</td> </tr>
<tr> <td>700</td> <td>27.41</td> <td>31.10</td> <td>31.09</td> <td>31.11</td> <td>27.37</td> <td>25.72</td> </tr>
<tr> <td>900</td> <td>27.42</td> <td>31.28</td> <td>31.27</td> <td>31.27</td> <td>28.45</td> <td>25.62</td> </tr>
</table>

Table 2: RMSE of the duration predictions w.r.t. different sizes of local data on the HousePC dataset

<table>
<tr> <th>Window Size</th> <th>CNN</th> <th>SVRBF</th> <th>SVPOLY</th> <th>SVSIG</th> <th>pHMM</th> <th>TreNet</th> </tr>
<tr> <td>100</td> <td>13.68</td> <td>12.93</td> <td>12.9352</td> <td>12.9346</td> <td>-</td> <td>13.14</td> </tr>
<tr> <td>300</td> <td>13.60</td> <td>12.93</td> <td>12.9346</td> <td>12.9345</td> <td>27.75</td> <td>13.15</td> </tr>
<tr> <td>500</td> <td>13.56</td> <td>12.94</td> <td>12.9342</td> <td>12.9346</td> <td>26.00</td> <td>12.89</td> </tr>
<tr> <td>700</td> <td>13.52</td> <td>12.93</td> <td>12.9345</td> <td>12.9345</td> <td>35.32</td> <td>12.86</td> </tr>
<tr> <td>900</td> <td>13.60</td> <td>12.94</td> <td>12.9350</td> <td>12.9346</td> <td>37.60</td> <td>12.96</td> </tr>
</table>

Table 3: RMSE of the slope predictions w.r.t. different sizes of local data on the HousePC dataset

6 CONCLUSION

In this paper we propose TreNet, a novel hybrid neural network to learn and predict the local trend behaviour of time series. The experimental results demonstrate that such a hybrid framework can indeed utilize the complementary information extracted by the CNN and LSTM to enhance prediction performance. Moreover, the architecture is generic and extensible, in that additional exogenous time series can be fed to TreNet so as to boost performance and to investigate the effect of different data sources on the trend evolution.
reject
Reject
5
7d80f6ba87e09765bbf01441201360b44f8f799b
iclr
2,017
DISCRETE VARIATIONAL AUTOENCODERS

Jason Tyler Rolfe
D-Wave Systems
Burnaby, BC V5G-4M9, Canada
jrolfe@dwavesys.com

ABSTRACT

Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.

1 INTRODUCTION

Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such as denoising and inpainting, and regularizing supervised tasks such as classification (Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Rasmus et al., 2015). Many datasets of practical interest are projections of underlying distributions over real-world objects into an observation space; the pixels of an image, for example. When the real-world objects are of discrete types subject to continuous transformations, these datasets comprise multiple disconnected smooth manifolds. For instance, natural images change smoothly with respect to the position and pose of objects, as well as scene lighting. At the same time, it is extremely difficult to directly transform the image of a person to one of a car while remaining on the manifold of natural images.

It would be natural to represent the space within each disconnected component with continuous variables, and the selection amongst these components with discrete variables. In contrast, most state-of-the-art probabilistic models use exclusively discrete variables — as do DBMs (Salakhutdinov & Hinton, 2009), NADEs (Larochelle & Murray, 2011), sigmoid belief networks (Spiegelhalter & Lauritzen, 1990; Bornschein et al., 2016), and DARNs (Gregor et al., 2014) — or exclusively continuous variables — as do VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and GANs (Goodfellow et al., 2014).1 Moreover, it would be desirable to apply the efficient variational autoencoder framework to models with discrete values, but this has proven difficult, since backpropagation through discrete variables is generally not possible (Bengio et al., 2013; Raiko et al., 2015).

We introduce a novel class of probabilistic models, comprising an undirected graphical model defined over binary latent variables, followed by multiple directed layers of continuous latent variables. This class of models captures both the discrete class of the object in an image, and its specific continuously deformable realization. Moreover, we show how these models can be trained efficiently using the variational autoencoder framework, including backpropagation through the binary latent variables. We ensure that the evidence lower bound remains tight by incorporating a hierarchical approximation to the posterior distribution of the latent variables, which can model strong correlations.
Since these models efficiently marry the variational autoencoder framework with discrete latent variables, we call them discrete variational autoencoders (discrete VAEs).

1 Spike-and-slab RBMs (Courville et al., 2011) use both discrete and continuous latent variables.

1.1 VARIATIONAL AUTOENCODERS ARE INCOMPATIBLE WITH DISCRETE DISTRIBUTIONS

Conventionally, unsupervised learning algorithms maximize the log-likelihood of an observed dataset under a probabilistic model. Even stochastic approximations to the gradient of the log-likelihood generally require samples from the posterior and prior of the model. However, sampling from undirected graphical models is generally intractable (Long & Servedio, 2010), as is sampling from the posterior of a directed graphical model conditioned on its leaf variables (Dagum & Luby, 1993). In contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound on the log-likelihood (Jordan et al., 1999), such as the evidence lower bound (ELBO, \( \mathcal{L}(x, \theta, \phi) \); Hinton & Zemel, 1994):

\[ \mathcal{L}(x, \theta, \phi) = \log p(x|\theta) - \mathrm{KL}[q(z|x, \phi)||p(z|x, \theta)], \tag{1} \]

where \( q(z|x, \phi) \) is a computationally tractable approximation to the posterior distribution \( p(z|x, \theta) \). We denote the observed random variables by \( x \), the latent random variables by \( z \), the parameters of the generative model by \( \theta \), and the parameters of the approximating posterior by \( \phi \). The variational autoencoder (VAE; Kingma & Welling, 2014; Rezende et al., 2014; Kingma et al., 2014) regroups the evidence lower bound of Equation 1 as:

\[ \mathcal{L}(x, \theta, \phi) = - \underbrace{\mathrm{KL}\left[q(z|x, \phi)||p(z|\theta)\right]}_{\text{KL term}} + \underbrace{\mathbb{E}_q\left[\log p(x|z, \theta)\right]}_{\text{autoencoding term}} . \tag{2} \]

In many cases of practical interest, such as Gaussian \( q(z|x) \) and \( p(z) \), the KL term of Equation 2 can be computed analytically. Moreover, a low-variance stochastic approximation to the gradient of the autoencoding term can be obtained using backpropagation and the reparameterization trick, so long as samples from the approximating posterior \( q(z|x) \) can be drawn using a differentiable, deterministic function \( f(x, \phi, \rho) \) of the combination of the inputs, the parameters, and a set of input- and parameter-independent random variables \( \rho \sim D \). For instance, samples can be drawn from a Gaussian distribution with mean and variance determined by the input, \( \mathcal{N}(m(x, \phi), v(x, \phi)) \), using \( f(x, \phi, \rho) = m(x, \phi) + \sqrt{v(x, \phi)} \cdot \rho \), where \( \rho \sim \mathcal{N}(0, 1) \). When such an \( f(x, \phi, \rho) \) exists,

\[ \frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x, \phi)} [\log p(x|z, \theta)] \approx \frac{1}{N} \sum_{\rho \sim D} \frac{\partial}{\partial \phi} \log p(x|f(x, \rho, \phi), \theta). \tag{3} \]

The reparameterization trick can be generalized to a large set of distributions, including nonfactorial approximating posteriors. We address this issue carefully in Appendix A, where we find that an analog of Equation 3 holds. Specifically, \( D_i \) is the uniform distribution between 0 and 1, and

\[ f(\rho) = \mathbf{F}^{-1}(\rho), \tag{4} \]

where \( \mathbf{F} \) is the conditional-marginal cumulative distribution function (CDF) defined by:

\[ F_i(\mathbf{x}) = \int_{x'_i = -\infty}^{x_i} p\left(x'_i|x_1, \ldots, x_{i-1}\right) . \tag{5} \]
However, this generalization is only possible if the inverse of the conditional-marginal CDF exists and is differentiable. A formulation comparable to Equation 3 is not possible for discrete distributions, such as restricted Boltzmann machines (RBMs) (Smolensky, 1986):

\[ p(z) = \frac{1}{Z_p} e^{-E_p(z)} = \frac{1}{Z_p} \cdot e^{z^\top W z + b^\top z}, \tag{6} \]

where \( z \in \{0, 1\}^n \), \( Z_p \) is the partition function of \( p(z) \), and the lateral connection matrix \( W \) is triangular. Any approximating posterior that only assigns nonzero probability to a discrete domain corresponds to a CDF that is piecewise-constant. That is, the range of the CDF is a proper subset of the interval [0, 1]. The domain of the inverse CDF is thus also a proper subset of [0, 1], and its derivative is not defined, as required in Equations 3 and 4.2

2 This problem remains even if we use the quantile function, \( F_p^{-1}(\rho) = \inf \left\{ z \in \mathbb{R} : \int_{z'=-\infty}^{z} p(z') \, dz' \geq \rho \right\} \), the derivative of which is either zero or infinite if \( p \) is a discrete distribution.

In the following sections, we present the discrete variational autoencoder (discrete VAE), a hierarchical probabilistic model consisting of an RBM,3 followed by multiple directed layers of continuous latent variables. This model is efficiently trainable using the variational autoencoder formalism, as in Equation 3, including backpropagation through its discrete latent variables.

3 Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected to the rest of the model. In contrast to a traditional RBM, there is no distinction between the "visible" units and the "hidden" units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome "fully hidden bipartite Boltzmann machine."

1.2 RELATED WORK

Recently, there have been many efforts to develop effective unsupervised learning techniques by building upon variational autoencoders. Importance weighted autoencoders (Burda et al., 2016), Hamiltonian variational inference (Salimans et al., 2015), normalizing flows (Rezende & Mohamed, 2015), and variational Gaussian processes (Tran et al., 2016) improve the approximation to the posterior distribution. Ladder variational autoencoders (Sønderby et al., 2016) increase the power of the architecture of both the approximating posterior and the prior. Neural adaptive importance sampling (Du et al., 2015) and reweighted wake-sleep (Bornschein & Bengio, 2015) use sophisticated approximations to the gradient of the log-likelihood that do not admit direct backpropagation. Structured variational autoencoders use conjugate priors to construct powerful approximating posterior distributions (Johnson et al., 2016).

It is easy to construct a stochastic approximation to the gradient of the ELBO that admits both discrete and continuous latent variables, and only requires computationally tractable samples. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Mnih & Rezende, 2016), which we discuss in greater detail in Appendix B.
Prior efforts by Makhzani et al. (2015) to use multimodal priors with implicit discrete variables governing the modes did not successfully align the modes of the prior with the intrinsic clusters of the dataset. Rectified Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete variables are also implicit, and their prior is factorial and thus unimodal (Salimans, 2016). Graves (2016) computes VAE-like gradient approximations for mixture models, but the component models are assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful multimodal priors on the discrete variables, and a wider set of mappings to the continuous units.

The generative model underlying the discrete variational autoencoder resembles a deep belief network (DBN; Hinton et al., 2006). A DBN comprises a sigmoid belief network, the top layer of which is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltzmann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all hidden layers below the bipartite Boltzmann machine are composed of continuous latent variables with a fully autoregressive layer-wise connection architecture. Each layer \( j \) receives connections from all previous layers \( i < j \), with connections from the bipartite Boltzmann machine mediated by a set of smoothing variables. However, these architectural differences are secondary to those in the gradient estimation technique. Whereas DBNs are traditionally trained by unrolling a succession of RBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through the evidence lower bound.

2 BACKPROPAGATING THROUGH DISCRETE LATENT VARIABLES BY ADDING CONTINUOUS LATENT VARIABLES

Figure 1: Graphical models of the smoothed approximating posterior (a) and prior (b), and the network realizing the autoencoding term of the ELBO from Equation 2 (c). Continuous latent variables \( \zeta_i \) are smoothed analogs of discrete latent variables \( z_i \), and insulate \( z \) from the observed variables \( x \) in the prior (b). This facilitates the marginalization of the discrete \( z \) in the autoencoding term of the ELBO, resulting in a network (c) in which all operations are deterministic and differentiable given independent stochastic input \( \rho \sim U[0, 1] \).

When working with an approximating posterior over discrete latent variables, we can effectively smooth the conditional-marginal CDF (defined by Equation 5 and Appendix A) by augmenting the latent representation with a set of continuous random variables. The conditional-marginal CDF over the new continuous variables is invertible and its inverse is differentiable, as required in Equations 3 and 4. We redefine the generative model so that the conditional distribution of the observed variables given the latent variables only depends on the new continuous latent space. This does not alter the fundamental form of the model, or the KL term of Equation 2; rather, it can be interpreted as adding a noisy nonlinearity, like dropout (Srivastava et al., 2014) or batch normalization with a small minibatch (Ioffe & Szegedy, 2015), to each latent variable in the approximating posterior and the prior. The conceptual motivation for this approach is discussed in Appendix C.
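As a side illustration (ours, not from the paper), the following PyTorch fragment shows concretely why Equation 3 works for a Gaussian but fails for a raw Bernoulli sample: the reparameterized Gaussian draw is differentiable in the parameters, while thresholding cuts the computation graph.

```python
import torch

phi = torch.tensor([0.3, -0.7], requires_grad=True)

# Continuous case: Gaussian reparameterization f(x, phi, rho) = m + sqrt(v) * rho.
m, log_v = phi[0], phi[1]
rho = torch.randn(())
z = m + torch.exp(0.5 * log_v) * rho       # differentiable in phi
z.backward()
print(phi.grad)                            # finite, useful gradients

# Discrete case: a hard Bernoulli sample is piecewise constant in phi,
# so its "gradient" is zero almost everywhere and undefined at the jumps.
phi.grad = None
q = torch.sigmoid(phi[0])
z_discrete = (torch.rand(()) < q).float()  # no grad_fn: the graph is cut here
print(z_discrete.requires_grad)            # False -- cannot backpropagate
```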
Specifically, as shown in Figure 1a, we augment the latent representation in the approximating posterior with continuous random variables \( \zeta \),4 conditioned on the discrete latent variables \( z \) of the RBM:

\[ q(\zeta, z|x, \phi) = r(\zeta|z) \cdot q(z|x, \phi), \qquad \text{where} \qquad r(\zeta|z) = \prod_i r(\zeta_i|z_i) . \]

4 We always use a variant of z for latent variables. This is zeta, or Greek \( \zeta \). The discrete latent variables \( z \) can conveniently be thought of as English z.

The support of \( r(\zeta|z) \) for all values of \( z \) must be connected, so the marginal distribution \( q(\zeta|x, \phi) = \sum_z r(\zeta|z) \cdot q(z|x, \phi) \) has a constant, connected support so long as \( 0 < q(z|x, \phi) < 1 \). We further require that \( r(\zeta|z) \) is continuous and differentiable except at the endpoints of its support, so the inverse conditional-marginal CDF of \( q(\zeta|x, \phi) \) is differentiable in Equations 3 and 4, as we discuss in Appendix A.

As shown in Figure 1b, we correspondingly augment the prior with \( \zeta \):

\[ p(\zeta, z|\theta) = r(\zeta|z) \cdot p(z|\theta), \]

where \( r(\zeta|z) \) is the same as for the approximating posterior. Finally, we require that the conditional distribution over the observed variables only depends on \( \zeta \):

\[ p(x|\zeta, z, \theta) = p(x|\zeta, \theta). \tag{7} \]

The smoothing distribution \( r(\zeta|z) \) transforms the model into a continuous function of the distribution over \( z \), and allows us to use Equations 2 and 3 directly to obtain low-variance stochastic approximations to the gradient. Given this expansion, we can simplify Equations 3 and 4 by dropping the dependence on \( z \) and applying Equation 16 of Appendix A, which generalizes Equation 3:

\[ \frac{\partial}{\partial \phi} \mathbb{E}_{q(\zeta, z|x, \phi)} [\log p(x|\zeta, z, \theta)] \approx \frac{1}{N} \sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial \phi} \log p \left( x | \mathbf{F}_{q(\zeta|x, \phi)}^{-1}(\rho), \theta \right). \tag{8} \]

If the approximating posterior is factorial, then each \( F_i \) is an independent CDF, without conditioning or marginalization. As we shall demonstrate in Section 2.1, \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) is a function of \( q(z = 1|x,\phi) \), where \( q(z = 1|x,\phi) \) is a deterministic probability value calculated by a parameterized function, such as a neural network.

The autoencoder implicit in Equation 8 is shown in Figure 1c. Initially, input \( x \) is passed into a deterministic feedforward network \( q(z = 1|x,\phi) \), for which the final nonlinearity is the logistic function. Its output \( q \), along with an independent random variable \( \rho \sim U[0,1] \), is passed into the deterministic function \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) to produce a sample of \( \zeta \). This \( \zeta \), along with the original input \( x \), is finally passed to \( \log p(x|\zeta,\theta) \). The expectation of this log probability with respect to \( \rho \) is the autoencoding term of the VAE formalism, as in Equation 2. Moreover, conditioned on the input and the independent \( \rho \), this autoencoder is deterministic and differentiable, so backpropagation can be used to produce a low-variance, computationally-efficient approximation to the gradient.
2.1 SPIKE-AND-EXPONENTIAL SMOOTHING TRANSFORMATION

As a concrete example consistent with sparse coding, consider the spike-and-exponential transformation from binary \( z \) to continuous \( \zeta \):

\[ r(\zeta_i|z_i = 0) = \begin{cases} \infty, & \text{if } \zeta_i = 0 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=0)}(\zeta') = 1 \]

\[ r(\zeta_i|z_i = 1) = \begin{cases} \frac{\beta e^{\beta \zeta_i}}{e^{\beta} - 1}, & \text{if } 0 \leq \zeta_i \leq 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=1)}(\zeta') = \left. \frac{e^{\beta \zeta}}{e^{\beta} - 1} \right|_0^{\zeta'} = \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1} \]

where \( F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta) \cdot d\zeta \) is the CDF of probability distribution \( p \) in the domain \([0,1]\). This transformation from \( z_i \) to \( \zeta_i \) is invertible: \( \zeta_i = 0 \Leftrightarrow z_i = 0 \), and \( \zeta_i > 0 \Leftrightarrow z_i = 1 \) almost surely.5

5 In the limit \( \beta \to \infty \), \( \zeta_i = z_i \) almost surely, and the continuous variables \( \zeta \) can effectively be removed from the model. This trick can be used after training with finite \( \beta \) to produce a model without smoothing variables \( \zeta \).

We can now find the CDF for \( q(\zeta|x,\phi) \) as a function of \( q(z = 1|x,\phi) \) in the domain \([0,1]\), marginalizing out the discrete \( z \):

\[ F_{q(\zeta|x,\phi)}(\zeta') = (1 - q(z = 1|x,\phi)) \cdot F_{r(\zeta_i|z_i=0)}(\zeta') + q(z = 1|x,\phi) \cdot F_{r(\zeta_i|z_i=1)}(\zeta') = q(z = 1|x,\phi) \cdot \left( \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1} - 1 \right) + 1. \]

To evaluate the autoencoder of Figure 1c, and through it the gradient approximation of Equation 8, we must invert the conditional-marginal CDF \( F_{q(\zeta|x,\phi)} \):

\[ F_{q(\zeta|x,\phi)}^{-1}(\rho) = \begin{cases} \frac{1}{\beta} \cdot \log \left[ \left( \frac{\rho + q - 1}{q} \right) \cdot (e^{\beta} - 1) + 1 \right], & \text{if } \rho \geq 1 - q \\ 0, & \text{otherwise} \end{cases} \tag{9} \]

where we use the substitution \( q(z = 1|x,\phi) \to q \) to simplify notation. For all values of the independent random variable \( \rho \sim U[0,1] \), the function \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) rectifies the input \( q(z = 1|x,\phi) \) if \( q \leq 1 - \rho \), in a manner analogous to a rectified linear unit (ReLU), as shown in Figure 2a. It is also quasi-sigmoidal, in that \( F^{-1} \) is increasing but concave-down if \( q > 1 - \rho \). The effect of \( \rho \) on \( F^{-1} \) is qualitatively similar to that of dropout (Srivastava et al., 2014), depicted in Figure 2b, or the noise injected by batch normalization (Ioffe & Szegedy, 2015) using small minibatches, shown in Figure 2c.

Other expansions to the continuous space are possible. In Appendix D.1, we consider the case where both \( r(\zeta_i|z_i = 0) \) and \( r(\zeta_i|z_i = 1) \) are linear functions of \( \zeta \); in Appendix D.2, we develop a spike-and-slab transformation; and in Appendix E, we explore a spike-and-Gaussian transformation where the continuous \( \zeta \) is directly dependent on the input \( x \) in addition to the discrete \( z \).
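Equation 9 is straightforward to implement as the stochastic nonlinearity of Figure 1c. The sketch below is our own PyTorch rendering, not the authors' code; the clamp realizes the spike branch (\( \zeta = 0 \) when \( \rho < 1 - q \)) while keeping the whole map differentiable in \( q \) elsewhere.

```python
import math
import torch

def spike_and_exp_inverse_cdf(q, rho, beta=3.0):
    """Inverse conditional-marginal CDF of Equation 9:
    zeta = (1/beta) * log(((rho + q - 1) / q) * (e^beta - 1) + 1) if rho >= 1 - q,
    and zeta = 0 otherwise (the spike at zero)."""
    em1 = math.expm1(beta)                  # e^beta - 1
    arg = ((rho + q - 1.0) / q) * em1 + 1.0
    return torch.log(torch.clamp(arg, min=1.0)) / beta  # clamp realizes the spike

logits = torch.randn(5, requires_grad=True)
q = torch.sigmoid(logits)                   # q(z = 1 | x, phi)
rho = torch.rand(5)                         # independent noise, rho ~ U[0, 1]
zeta = spike_and_exp_inverse_cdf(q, rho)    # zeta in [0, 1]
zeta.sum().backward()
print(logits.grad)   # nonzero where rho >= 1 - q; zero on the spike branch
```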
Figure 2: Inverse CDF of the spike-and-exponential smoothing transformation for \( \rho \in \{0.2, 0.5, 0.8\} \); \( \beta = 1 \) (dotted), \( \beta = 3 \) (solid), and \( \beta = 5 \) (dashed) (a). Rectified linear unit with dropout rate 0.5 (b). Shift (red) and scale (green) noise from batch normalization; with magnitude 0.3 (dashed), -0.3 (dotted), or 0 (solid blue); before a rectified linear unit (c). In all cases, the abscissa is the input and the ordinate is the output of the effective transfer function. The novel stochastic nonlinearity \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) from Figure 1c, of which (a) is an example, is qualitatively similar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c).

3 ACCOMMODATING EXPLAINING-AWAY WITH A HIERARCHICAL APPROXIMATING POSTERIOR

When a probabilistic model is defined in terms of a prior distribution \( p(z) \) and a conditional distribution \( p(x|z) \), the observation of \( x \) often induces strong correlations in the posterior \( p(z|x) \), due to phenomena such as explaining-away (Pearl, 1988). Moreover, we wish to use an RBM as the prior distribution (Equation 6), which itself may have strong correlations. In contrast, to maintain tractability, many variational approximations use a product of independent approximating posterior distributions (e.g., mean-field methods, but also Kingma & Welling (2014); Rezende et al. (2014)).

To accommodate strong correlations in the posterior distribution while maintaining tractability, we introduce a hierarchy into the approximating posterior \( q(z|x) \) over the discrete latent variables. Specifically, we divide the latent variables \( z \) of the RBM into disjoint groups, \( z_1, \ldots, z_k \),6 and define the approximating posterior via a directed acyclic graphical model over these groups:

\[ q(z_1, \zeta_1, \ldots, z_k, \zeta_k|x, \phi) = \prod_{1 \leq j \leq k} r(\zeta_j|z_j) \cdot q\left(z_j|\zeta_{i<j}, x, \phi\right) , \quad \text{where} \]

\[ q(z_j|\zeta_{i<j}, x, \phi) = \frac{e^{g_j(\zeta_{i<j}, x, \phi)^\top \cdot z_j}}{\prod_{z_\iota \in z_j} \left(1 + e^{g_{z_\iota}(\zeta_{i<j}, x, \phi)}\right)} , \tag{10} \]

\( z_j \in \{0, 1\}^n \), and \( g_j(\zeta_{i<j}, x, \phi) \) is a parameterized function of the inputs and preceding \( \zeta_i \), such as a neural network. The corresponding graphical model is depicted in Figure 3a, and the integration of such hierarchical approximating posteriors into the reparameterization trick is discussed in Appendix A.

6 The continuous latent variables \( \zeta \) are divided into complementary disjoint groups \( \zeta_1, \ldots, \zeta_k \).

If each group \( z_j \) contains a single variable, this dependence structure is analogous to that of a deep autoregressive network (DARN; Gregor et al., 2014), and can represent any distribution. However, the dependence of \( z_j \) on the preceding discrete variables \( z_{i<j} \) is always mediated by the continuous variables \( \zeta_{i<j} \). This hierarchical approximating posterior does not affect the form of the autoencoding term in Equation 8, except to increase the depth of the autoencoder, as shown in Figure 3b. The deterministic probability value \( q(z_j = 1|\zeta_{i<j}, x, \phi) \) of Equation 10 is parameterized, generally by a neural network, in a manner analogous to Section 2. However, the final logistic function is made explicit in Equation 10 to simplify Equation 12. For each successive layer \( j \) of the autoencoder, input \( x \) and all previous \( \zeta_{i<j} \) are passed into the network computing \( q(z_j = 1|\zeta_{i<j}, x, \phi) \).
Its output \( q_j \), along with an independent random variable \( \rho \sim U[0,1] \), is passed to the deterministic function \( \mathbf{F}_{q(\zeta_j|\zeta_{i<j}, x, \phi)}^{-1}(\rho) \) to produce a sample of \( \zeta_j \). Once all \( \zeta_j \) have been recursively computed, the full \( \zeta \), along with the original input \( x \), is finally passed to \( \log p(x|\zeta, \theta) \). The expectation of this log probability with respect to \( \rho \) is again the autoencoding term of the VAE formalism, as in Equation 2.

Figure 3: Graphical model of the hierarchical approximating posterior (a) and the network realizing the autoencoding term of the ELBO (b) from Equation 2. Discrete latent variables \( z_j \) only depend on the previous \( z_{i<j} \) through their smoothed analogs \( \zeta_{i<j} \). The autoregressive hierarchy allows the approximating posterior to capture correlations and multiple modes. Again, all operations in (b) are deterministic and differentiable given the stochastic input \( \rho \).

In Appendix F, we show that the gradients of the remaining KL term of the ELBO (Equation 2) can be estimated stochastically using:

\[ \frac{\partial}{\partial \theta} \mathrm{KL}\left[q||p\right] = \mathbb{E}_{q(z_1|x,\phi)} \left[ \cdots \mathbb{E}_{q(z_k|\zeta_{<k}, x, \phi)} \left[ \frac{\partial E_p(z, \theta)}{\partial \theta} \right] \cdots \right] - \mathbb{E}_{p(z|\theta)} \left[ \frac{\partial E_p(z, \theta)}{\partial \theta} \right] \tag{11} \]

and

\[ \frac{\partial}{\partial \phi} \mathrm{KL}\left[q||p\right] = \mathbb{E}_\rho \left[ (g(x, \zeta) - b)^\top \cdot \frac{\partial q}{\partial \phi} - z^\top \cdot W \cdot \left( \frac{1 - z}{1 - q} \odot \frac{\partial q}{\partial \phi} \right) \right] . \tag{12} \]

In particular, Equation 12 is substantially lower-variance than the naive approach to calculating \( \frac{\partial}{\partial \phi} \mathrm{KL}\left[q||p\right] \) based upon REINFORCE.

4 MODELLING CONTINUOUS DEFORMATIONS WITH A HIERARCHY OF CONTINUOUS LATENT VARIABLES

We can make both the generative model and the approximating posterior more powerful by adding additional layers of latent variables below the RBM. While these layers can be discrete, we focus on continuous variables, which have proven to be powerful in generative adversarial networks (Goodfellow et al., 2014) and traditional variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014). When positioned below and conditioned on a layer of discrete variables, continuous variables can build continuous manifolds, from which the discrete variables can choose. This complements the structure of the natural world, where a percept is determined first by a discrete selection of the types of objects present in the scene, and then by the position, pose, and other continuous attributes of these objects.

Specifically, we augment the latent representation with continuous random variables \( \mathfrak{z} \),7 and define both the approximating posterior and the prior to be layer-wise fully autoregressive directed graphical models.

7 We always use a variant of z for latent variables. This is Fraktur \( \mathfrak{z} \), or German z.
We use the same autoregressive variable order for the approximating posterior as for the prior, as in DRAW (Gregor et al., 2015), variational recurrent neural networks (Chung et al., 2015), the deep VAE of Salimans (2016), and ladder networks (Rasmus et al., 2015; Sønderby et al., 2016). We discuss the motivation for this ordering in Appendix G. The directed graphical models of the approximating posterior and the prior are defined by:

\[ q(\mathfrak{z}_0, \ldots, \mathfrak{z}_n|x, \phi) = \prod_{0 \leq m \leq n} q\left( \mathfrak{z}_m | \mathfrak{z}_{<m}, x, \phi \right) \quad \text{and} \quad p(\mathfrak{z}_0, \ldots, \mathfrak{z}_n|\theta) = \prod_{0 \leq m \leq n} p\left( \mathfrak{z}_m | \mathfrak{z}_{<m}, \theta \right). \tag{13} \]

The full set of latent variables associated with the RBM is now denoted by \( \mathfrak{z}_0 = \{ z_1, \zeta_1, \ldots, z_k, \zeta_k \} \). However, the conditional distributions in Equation 13 only depend on the continuous \( \zeta_j \). Each \( \mathfrak{z}_{m \geq 1} \) denotes a layer of continuous latent variables, and Figure 4 shows the resulting graphical model.

Figure 4: Graphical models of the approximating posterior (a) and prior (b) with a hierarchy of continuous latent variables. The shaded regions in parts (a) and (b) expand to Figures 3a and 1b respectively. The continuous latent variables \( \mathfrak{z} \) build continuous manifolds, capturing properties like position and pose, conditioned on the discrete latent variables \( z \), which can represent the discrete types of objects in the image.

The ELBO decomposes as:

\[ \mathcal{L}(x, \theta, \phi) = \mathbb{E}_{q(\mathfrak{z}|x, \phi)} [\log p(x|\mathfrak{z}, \theta)] - \sum_m \mathbb{E}_{q(\mathfrak{z}_{<m}|x, \phi)} \left[ \mathrm{KL}\left[ q(\mathfrak{z}_m|\mathfrak{z}_{<m}, x, \phi) \,||\, p(\mathfrak{z}_m|\mathfrak{z}_{<m}, \theta) \right] \right] . \tag{14} \]

If both \( q(\mathfrak{z}_m|\mathfrak{z}_{<m}, x, \phi) \) and \( p(\mathfrak{z}_m|\mathfrak{z}_{<m}, \theta) \) are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. Gradients can be passed through the \( q(\mathfrak{z}_{<m}|x, \phi) \) using the traditional reparameterization trick, described in Section 1.1.
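For reference, that closed form for diagonal Gaussians parameterized by means and log-standard deviations is a one-liner; the following sketch (our own, with arbitrary example values) computes the per-example KL term of Equation 14.

```python
import torch

def diag_gaussian_kl(mu_q, log_sigma_q, mu_p, log_sigma_p):
    """KL[ N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ],
    summed over the latent dimensions (the closed form used in Eq. 14):
    log(sigma_p/sigma_q) + (sigma_q^2 + (mu_q - mu_p)^2) / (2 sigma_p^2) - 1/2."""
    var_q = torch.exp(2.0 * log_sigma_q)
    var_p = torch.exp(2.0 * log_sigma_p)
    kl = (log_sigma_p - log_sigma_q
          + (var_q + (mu_q - mu_p) ** 2) / (2.0 * var_p)
          - 0.5)
    return kl.sum(dim=-1)

mu_q, ls_q = torch.zeros(4, 8), torch.zeros(4, 8)      # q = N(0, I)
mu_p, ls_p = torch.ones(4, 8), 0.5 * torch.ones(4, 8)  # p = N(1, e*I)
print(diag_gaussian_kl(mu_q, ls_q, mu_p, ls_p))        # per-example KL
```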
5 RESULTS

Discrete variational autoencoders comprise a smoothed RBM (Section 2) with a hierarchical approximating posterior (Section 3), followed by a hierarchy of continuous latent variables (Section 4). We parameterize all distributions with neural networks, except the smoothing distribution \( r(\zeta|z) \) discussed in Section 2. Like NVIL (Mnih & Gregor, 2014) and VAEs (Kingma & Welling, 2014; Rezende et al., 2014), we define all approximating posteriors \( q \) to be explicit functions of \( x \), with parameters \( \phi \) shared between all inputs \( x \). For distributions over discrete variables, the neural networks output the parameters of a factorial Bernoulli distribution using a logistic final layer, as in Equation 10; for the continuous \( \mathfrak{z} \), the neural networks output the mean and log-standard deviation of a diagonal-covariance Gaussian distribution using a linear final layer.

Each layer of the neural networks parameterizing the distributions over \( z \), \( \mathfrak{z} \), and \( x \) consists of a linear transformation, batch normalization (Ioffe & Szegedy, 2015) (but see Appendix H.2), and a rectified-linear point-wise nonlinearity (ReLU). We stochastically approximate the expectation with respect to the RBM prior \( p(z|\theta) \) in Equation 11 using block Gibbs sampling on persistent Markov chains, analogous to persistent contrastive divergence (Tieleman, 2008). We minimize the ELBO using ADAM (Kingma & Ba, 2015) with a decaying step size.

The hierarchical structure of Section 4 is very powerful, and overfits without strong regularization of the prior, as shown in Appendix H. In contrast, powerful approximating posteriors do not induce significant overfitting. To address this problem, we use conditional distributions over the input \( p(x|\zeta, \theta) \) without any deterministic hidden layers, except on Omniglot. Moreover, all other neural networks in the prior have only one hidden layer, the size of which is carefully controlled. On statically binarized MNIST, Omniglot, and Caltech-101, we share parameters between the layers of the hierarchy over \( z \). We present the details of the architecture in Appendix H.

We train the resulting discrete VAEs on the permutation-invariant MNIST (LeCun et al., 1998), Omniglot8 (Lake et al., 2013), and Caltech-101 Silhouettes datasets (Marlin et al., 2010). For MNIST, we use both the static binarization of Salakhutdinov & Murray (2008) and dynamic binarization. Estimates of the log-likelihood9 of these models, computed using the method of Burda et al. (2016) with \( 10^4 \) importance-weighted samples, are listed in Table 1. The reported log-likelihoods for discrete VAEs are the average of 16 runs; the standard deviations of these log-likelihoods are 0.08, 0.04, 0.05, and 0.11 for dynamically and statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, respectively. Removing the RBM reduces the test set log-likelihood by 0.09, 0.37, 0.69, and 0.66.

8 We use the partitioned, preprocessed Omniglot dataset of Burda et al. (2016), available from https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT.
9 The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partition function of the RBM. We describe our unbiased estimation method for the partition function in Appendix H.1.

<table>
<tr> <th colspan="2">MNIST (dynamic binarization)</th> <th colspan="3">MNIST (static binarization)</th> </tr>
<tr> <th></th> <th>LL</th> <th></th> <th>ELBO</th> <th>LL</th> </tr>
<tr> <td>DBN</td> <td>-84.55</td> <td>HVI</td> <td>-88.30</td> <td>-85.51</td> </tr>
<tr> <td>IWAE</td> <td>-82.90</td> <td>DRAW</td> <td>-87.40</td> <td></td> </tr>
<tr> <td>Ladder VAE</td> <td>-81.74</td> <td>NAIS NADE</td> <td></td> <td>-83.67</td> </tr>
<tr> <td>Discrete VAE</td> <td><b>-80.15</b></td> <td>Normalizing flows</td> <td>-85.10</td> <td></td> </tr>
<tr> <td></td> <td></td> <td>Variational Gaussian process</td> <td></td> <td>-81.32</td> </tr>
<tr> <td></td> <td></td> <td>Discrete VAE</td> <td><b>-84.58</b></td> <td><b>-81.01</b></td> </tr>
<tr> <th colspan="2">Omniglot</th> <th colspan="3">Caltech-101 Silhouettes</th> </tr>
<tr> <th></th> <th>LL</th> <th></th> <th colspan="2">LL</th> </tr>
<tr> <td>IWAE</td> <td>-103.38</td> <td>IWAE</td> <td colspan="2">-117.2</td> </tr>
<tr> <td>Ladder VAE</td> <td>-102.11</td> <td>RWS SBN</td> <td colspan="2">-113.3</td> </tr>
<tr> <td>RBM</td> <td>-100.46</td> <td>RBM</td> <td colspan="2">-107.8</td> </tr>
<tr> <td>DBN</td> <td>-100.45</td> <td>NAIS NADE</td> <td colspan="2">-100.0</td> </tr>
<tr> <td>Discrete VAE</td> <td><b>-97.43</b></td> <td>Discrete VAE</td> <td colspan="2"><b>-97.6</b></td> </tr>
</table>

Table 1: Test set log-likelihood of various models on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. For the discrete VAE, the reported log-likelihood is estimated with \( 10^4 \) importance-weighted samples (Burda et al., 2016). For comparison, we also report the performance of some recent state-of-the-art techniques. Full names and references are listed in Appendix I.
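The importance-weighted log-likelihood estimator of Burda et al. (2016) used for Table 1 can be sketched as follows; the sampler and joint-density interfaces are hypothetical placeholders rather than any library API, and, as footnote 9 notes, for a discrete VAE the RBM log partition function must be estimated separately. The toy example at the end checks the estimator on a model whose marginal likelihood is known.

```python
import torch

def iw_log_likelihood(x, q_sample, log_joint, k=10_000, batch=500):
    """Importance-weighted estimate of log p(x) (Burda et al., 2016):
    log p(x) ~= logsumexp_k( log p(x, z_k) - log q(z_k|x) ) - log K,
    with z_k ~ q(.|x). `q_sample(x, n) -> (z, log q(z|x))` and
    `log_joint(x, z) -> log p(x, z)` are assumed interfaces."""
    log_w = []
    for _ in range(k // batch):
        z, log_qz = q_sample(x, batch)
        log_w.append(log_joint(x, z) - log_qz)  # log importance weights
    log_w = torch.cat(log_w)                    # shape (k,)
    return torch.logsumexp(log_w, 0) - torch.log(torch.tensor(float(k)))

# Toy check: p(z) = N(0, 1), p(x|z) = N(z, 1), so p(x) = N(0, 2); q is the
# exact posterior N(x/2, 1/2), so the estimate is exact for any K.
N = torch.distributions.Normal

def q_sample(x, n):
    d = N(x / 2, 0.5 ** 0.5)
    z = d.sample((n,))
    return z, d.log_prob(z)

def log_joint(x, z):
    return N(0.0, 1.0).log_prob(z) + N(z, 1.0).log_prob(x)

x = torch.tensor(1.3)
print(iw_log_likelihood(x, q_sample, log_joint))  # ~ log-density of N(0, 2) at 1.3
```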
We further analyze the performance of discrete VAEs on dynamically binarized MNIST: the largest of the datasets, requiring the least regularization. Figure 5 shows the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the digit class over many successive rows in Figure 5 indicates that the RBM prior has well-separated modes. The RBM learns distinct, separated modes corresponding to the different digit types, except for 3/5 and 4/9, which are either nearby or overlapping; at least tens of thousands of iterations of single-temperature block Gibbs sampling are required to mix between the modes. We present corresponding figures for the other datasets, and results on simplified architectures, in Appendix J.

The large mixing time of block Gibbs sampling on the RBM suggests that training may be constrained by sample quality. Figure 6a shows that performance\(^{10}\) improves as we increase the number of iterations of block Gibbs sampling performed per minibatch on the RBM prior: \( p(z|\theta) \) in Equation 11.

\footnotetext{8We use the partitioned, preprocessed Omniglot dataset of Burda et al. (2016), available from https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT.}

\footnotetext{9The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partition function of the RBM. We describe our unbiased estimation method for the partition function in Appendix H.1.}

Figure 5: Evolution of samples from a discrete VAE trained on dynamically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. The long vertical sequences in which the digit ID remains constant demonstrate that the RBM has well-separated modes, each of which corresponds to a single (or occasionally two) digit IDs, despite being trained in a wholly unsupervised manner.

![Evolution of samples from a discrete VAE trained on dynamically binarized MNIST](page_246_183_1097_377.png)

Figure 6: Log likelihood versus the number of iterations of block Gibbs sampling per minibatch (a), the number of units in the RBM (b), and the number of layers in the approximating posterior over the RBM (c). Better sampling (a) and hierarchical approximating posteriors (c) support better performance, but the network is robust to the size of the RBM (b).

![Log likelihood versus the number of iterations of block Gibbs sampling per minibatch, number of units in the RBM, and number of layers in the approximating posterior over the RBM](page_246_670_1097_246.png)
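The block Gibbs updates referred to throughout this section are simple to state: the two layers of an RBM are conditionally independent given each other, so each can be resampled in a single vectorized step. The numpy sketch below is ours (assumed names and shapes), not the paper's code; on persistent chains, the returned state is carried over to the next minibatch, as in PCD.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gibbs_block_step(z_visible, W, b_visible, b_hidden):
    """One block Gibbs iteration on an RBM: sample the hidden layer
    given the visible layer, then resample the visible layer.
    Shapes: z_visible (n_v,), W (n_v, n_h), b_visible (n_v,),
    b_hidden (n_h,)."""
    h = (rng.random(b_hidden.shape) <
         sigmoid(z_visible @ W + b_hidden)).astype(float)
    z_new = (rng.random(b_visible.shape) <
             sigmoid(h @ W.T + b_visible)).astype(float)
    return z_new
```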
This suggests that a further improvement may be achieved by using a more effective sampling algorithm, such as parallel tempering (Swendsen & Wang, 1986).

\footnotetext{10 All models in Figure 6 use only 10 layers of continuous latent variables, for computational efficiency.}

Commensurate with the small number of intrinsic classes, a moderately sized RBM yields the best performance on MNIST. As shown in Figure 6b, the log-likelihood plateaus once the number of units in the RBM reaches at least 64. Presumably, we would need a much larger RBM to model a dataset like ImageNet, which has many classes and complicated relationships between the elements of various classes.

The benefit of the hierarchical approximating posterior over the RBM, introduced in Section 3, is apparent from Figure 6c. The reduction in performance when moving from 4 to 8 layers in the approximating posterior may be due to the fact that each additional hierarchical layer over the approximating posterior adds three layers to the encoder neural network: there are two deterministic hidden layers for each stochastic latent layer. As a result, expanding the number of RBM approximating posterior layers significantly increases the number of parameters that must be trained, and increases the risk of overfitting.

6 CONCLUSION

Datasets consisting of a discrete set of classes are naturally modeled using discrete latent variables. However, it is difficult to train probabilistic models over discrete latent variables using efficient gradient approximations based upon backpropagation, such as variational autoencoders, since it is generally not possible to backpropagate through a discrete variable (Bengio et al., 2013).

We avoid this problem by symmetrically projecting the approximating posterior and the prior into a continuous space. We then evaluate the autoencoding term of the evidence lower bound exclusively in the continuous space, marginalizing out the original discrete latent representation. At the same time, we evaluate the KL divergence between the approximating posterior and the true prior in the original discrete space; because the same smoothing transformation is applied to both distributions, the projection into the continuous space cancels out of the KL term. To increase representational power, we make the approximating posterior over the discrete latent variables hierarchical, and add a hierarchy of continuous latent variables below them. The resulting discrete variational autoencoder achieves state-of-the-art performance on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.

ACKNOWLEDGEMENTS

Zhengbing Bian, Fabian Chudak, and Arash Vahdat helped run experiments. Jack Raymond provided the library used to estimate the log partition function of RBMs. Mani Ranjbar wrote the cluster management system, and a custom GPU acceleration library used for an earlier version of the code. We thank Evgeny Andriyash, William Macready, and Aaron Courville for helpful discussions; and one of our anonymous reviewers for identifying the problem addressed in Appendix D.3.

REFERENCES

Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, pp. 3084–3092, 2013.

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Charles H. Bennett. Efficient estimation of free energy differences from Monte Carlo data. Journal of Computational Physics, 22(2):245–268, 1976.
Jörg Bornschein and Yoshua Bengio. Reweighted wake-sleep. In Proceedings of the International Conference on Learning Representations, arXiv:1406.2751, 2015.

Jörg Bornschein, Samira Shabanian, Asja Fischer, and Yoshua Bengio. Bidirectional Helmholtz machines. In Proceedings of The 33rd International Conference on Machine Learning, pp. 2511–2519, 2016.

Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pp. 10–21, 2016.

Yuri Burda, Roger B. Grosse, and Ruslan Salakhutdinov. Accurate and conservative estimates of MRF log-likelihood using reverse annealing. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.

Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. Proceedings of the International Conference on Learning Representations, arXiv:1509.00519, 2016.

Steve Cheng. Differentiation under the integral sign with weak derivatives. Technical report, Working paper, 2006.

KyungHyun Cho, Tapani Raiko, and Alexander Ilin. Enhanced gradient for training restricted Boltzmann machines. Neural Computation, 25(3):805–831, 2013.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pp. 2980–2988, 2015.

Aaron C. Courville, James S. Bergstra, and Yoshua Bengio. Unsupervised models of images by spike-and-slab RBMs. In Proceedings of the 28th International Conference on Machine Learning, pp. 1145–1152, 2011.

Paul Dagum and Michael Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard. Artificial Intelligence, 60(1):141–153, 1993.

Chao Du, Jun Zhu, and Bo Zhang. Learning deep generative models with doubly stochastic MCMC. arXiv preprint arXiv:1506.04557, 2015.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.

Alex Graves. Stochastic backpropagation through mixture density distributions. arXiv preprint arXiv:1607.05690, 2016.

Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, and Daan Wierstra. Deep autoregressive networks. In Proceedings of the 31st International Conference on Machine Learning, pp. 1242–1250, 2014.

Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1462–1471, 2015.

Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.

Geoffrey E. Hinton and Richard S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In J. D. Cowan, G. Tesauro, and J. Alspector (eds.), Advances in Neural Information Processing Systems 6, pp. 3–10. Morgan Kaufmann Publishers, Inc., 1994.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, pp. 448–456, 2015.

Matthew Johnson, David K. Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams.
Composing graphical models with neural networks for structured representations and fast inference. In Advances in Neural Information Processing Systems, pp. 2946–2954, 2016.

Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, arXiv:1412.6980, 2015.

Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589, 2014.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations, arXiv:1312.6114, 2014.

Brenden M. Lake, Ruslan R. Salakhutdinov, and Josh Tenenbaum. One-shot learning by inverting a compositional causal process. In Advances in Neural Information Processing Systems, pp. 2526–2534, 2013.

Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, 2011.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Yingzhen Li and Richard E. Turner. Variational inference with Rényi divergence. arXiv preprint arXiv:1602.02311, 2016.

Philip M. Long and Rocco Servedio. Restricted Boltzmann machines are hard to approximately evaluate or simulate. In Proceedings of the 27th International Conference on Machine Learning, pp. 703–710, 2010.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.

Benjamin M. Marlin, Kevin Swersky, Bo Chen, and Nando de Freitas. Inductive principles for restricted Boltzmann machine learning. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pp. 509–516, 2010.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In Proceedings of the 31st International Conference on Machine Learning, pp. 1791–1799, 2014.

Andriy Mnih and Danilo J. Rezende. Variational inference for Monte Carlo objectives. In Proceedings of the 33rd International Conference on Machine Learning, pp. 2188–2196, 2016.

Iain Murray and Ruslan R. Salakhutdinov. Evaluating probabilities under high-dimensional latent variable models. In Advances in Neural Information Processing Systems, pp. 1137–1144, 2009.

Radford M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71–113, 1992.

Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.

John Paisley, David M. Blei, and Michael I. Jordan. Variational Bayesian inference with stochastic search. In Proceedings of the 29th International Conference on Machine Learning, 2012.

Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.

Tapani Raiko, Harri Valpola, Markus Harva, and Juha Karhunen. Building blocks for variational Bayesian learning of latent variable models. Journal of Machine Learning Research, 8:155–201, 2007.

Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh.
Techniques for learning binary stochastic feedforward neural networks. In Proceedings of the International Conference on Learning Representations, arXiv:1406.2989, 2015.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3546–3554, 2015.

Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1530–1538, 2015.

Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of The 31st International Conference on Machine Learning, pp. 1278–1286, 2014.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, pp. 448–455, 2009.

Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In Proceedings of the 25th International Conference on Machine Learning, pp. 872–879. ACM, 2008.

Tim Salimans. A structured variational auto-encoder for learning deep hierarchies of sparse features. arXiv preprint arXiv:1602.08734, 2016.

Tim Salimans, Diederik P. Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1218–1226, 2015.

Michael R. Shirts and John D. Chodera. Statistically optimal analysis of samples from multiple equilibrium states. The Journal of Chemical Physics, 129(12), 2008.

Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart and J. L. McClelland (eds.), Parallel Distributed Processing, volume 1, chapter 6, pp. 194–281. MIT Press, Cambridge, 1986.

Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in Neural Information Processing Systems, pp. 3738–3746, 2016.

David J. Spiegelhalter and Steffen L. Lauritzen. Sequential updating of conditional probabilities on directed graphical structures. Networks, 20(5):579–605, 1990.

Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.

Robert H. Swendsen and Jian-Sheng Wang. Replica Monte Carlo simulation of spin-glasses. Physical Review Letters, 57(21):2607, 1986.

Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, pp. 1064–1071. ACM, 2008.

Dustin Tran, Rajesh Ranganath, and David M. Blei. The variational Gaussian process. Proceedings of the International Conference on Learning Representations, arXiv:1511.06499, 2016.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.

A MULTIVARIATE VAES BASED ON THE CUMULATIVE DISTRIBUTION FUNCTION

The reparameterization trick is always possible if the cumulative distribution function (CDF) of \( q(z|x, \phi) \) is invertible, and the inverse CDF is differentiable, as noted in Kingma & Welling (2014).
However, for multivariate distributions, the CDF is defined by:
\[ F(\mathbf{x}) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} p(x_1', \ldots, x_n') \, dx_1' \cdots dx_n'. \]
The multivariate CDF maps \( \mathbb{R}^n \to [0, 1] \), and is generally not invertible.\footnote{For instance, for the bivariate uniform distribution on the interval \([0, 1]^2\), the CDF is \(F(x, y) = x \cdot y\) for \(0 \leq x, y \leq 1\), so for any \(0 \leq c \leq 1\) and \(c \leq x \leq 1\), \(y = \frac{c}{x}\) yields \(F(x, y) = c\). Clearly, many different pairs \((x, y)\) yield each possible value \(c\) of \(F(x, y)\).}

In place of the multivariate CDF, consider the set of conditional-marginal CDFs defined by:\footnote{The set of marginal CDFs, used to define copulas, is invertible. However, it does not generally map the original distribution to a simple joint distribution, such as a multivariate uniform distribution, as required for variational autoencoders. In Equation 16, \(|\det \left( \frac{\partial \mathbf{F}_{q(z|x,\phi)}^{-1}(\rho)}{\partial \rho} \right)|\) does not cancel out \(q \left( \mathbf{F}_{q(z|x,\phi)}^{-1}(\rho) | x, \phi \right)\). The determinant of the inverse Jacobian is instead \([\prod_i q (z_i = F_i^{-1}(\rho))]^{-1}\), which differs from \([q \left( \mathbf{F}_{q(z|x,\phi)}^{-1}(\rho) \right)]^{-1}\) if \(q\) is not factorial. As a result, we do not recover the variational autoencoder formulation of Equation 16.}
\[ F_j(\mathbf{x}) = \int_{-\infty}^{x_j} p\left(x_j' | x_1, \ldots, x_{j-1}\right) \, dx_j'. \] (15)
That is, \(F_j(\mathbf{x})\) is the CDF of \(x_j\), conditioned on all \(x_i\) such that \(i < j\), and marginalized over all \(x_k\) such that \(j < k\). The range of each \(F_j\) is \([0, 1]\), so \(\mathbf{F}\) maps the domain of the original distribution to \(\rho \in [0, 1]^n\). To invert \(\mathbf{F}\), we need only invert each conditional-marginal CDF in turn, conditioning \(x_j = F_j^{-1}(\rho)\) on \(x_1 = F_1^{-1}(\rho), \ldots, x_{j-1} = F_{j-1}^{-1}(\rho)\). These inverses exist so long as the conditional-marginal probabilities are everywhere nonzero. It is not problematic to effectively define \(F_j^{-1}(\rho)\) based upon \(x_{i<j}\), rather than \(\rho_{i<j}\), since by induction we can uniquely determine \(x_{i<j}\) given \(\rho_{i<j}\).

Using integration-by-substitution, we can compute the gradient of the ELBO by taking the expectation of a uniform random variable \(\rho\) on \([0, 1]^n\), and using \(\mathbf{F}_{q(z|x,\phi)}^{-1}\) to transform \(\rho\) back to the element of \(z\) on which \(p(x|z, \theta)\) is conditioned. To perform integration-by-substitution, we will require the determinant of the Jacobian of \(\mathbf{F}^{-1}\). The derivative of a CDF is the probability density function at the selected point, and \(F_j\) is a simple CDF when we hold fixed the variables \(x_{i<j}\) on which it is conditioned, so using the inverse function theorem we find:
\[ \frac{\partial F_j^{-1}(\rho)}{\partial \rho_j} = \frac{1}{F_j'\left(F_j^{-1}(\rho)\right)} = \frac{1}{p\left(x_j = F_j^{-1}(\rho) \,\middle|\, x_{i<j}\right)}, \]
where \(\rho\) is a vector, and \(F_j'\) is \(\frac{\partial F_j}{\partial x_j}\). The Jacobian matrix \(\frac{\partial \mathbf{F}}{\partial \mathbf{x}}\) is triangular, since the earlier conditional-marginal CDFs \(F_j\) are independent of the value of the later \(x_k\), \(j < k\), over which they are marginalized. Moreover, the inverse conditional-marginal CDFs have the same dependence structure as \(\mathbf{F}\), so the Jacobian of \(\mathbf{F}^{-1}\) is also triangular.
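As a concrete illustration of inverting the conditional-marginal CDFs one coordinate at a time, consider a bivariate Gaussian, where each conditional CDF has a closed-form inverse. The sketch below is ours, not part of the paper's method; the function and variable names are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def inverse_conditional_marginal_cdf(rho, mu, cov):
    """Map uniform variates rho in [0,1]^2 through the inverse
    conditional-marginal CDFs of a bivariate Gaussian: first invert
    the marginal CDF of x1, then the CDF of x2 conditioned on x1.
    The triangular dependence mirrors the triangular Jacobian of F."""
    x1 = norm.ppf(rho[0], loc=mu[0], scale=np.sqrt(cov[0, 0]))
    mu2_given_1 = mu[1] + cov[1, 0] / cov[0, 0] * (x1 - mu[0])
    var2_given_1 = cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0]
    x2 = norm.ppf(rho[1], loc=mu2_given_1, scale=np.sqrt(var2_given_1))
    return np.array([x1, x2])

# Example: median uniform variates map to the mean of the Gaussian.
mu = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.5], [0.5, 2.0]])
assert np.allclose(inverse_conditional_marginal_cdf([0.5, 0.5], mu, cov), mu)
```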
The determinant of a triangular matrix is the product of the diagonal elements. Using these facts to perform a multivariate integration-by-substitution, we obtain:
\[ \mathbb{E}_{q(z|x,\phi)}[\log p(x|z,\theta)] = \int_z q(z|x,\phi) \cdot \log p(x|z,\theta) \]
\[ = \int_{\rho=\mathbf{0}}^{\mathbf{1}} q\left(\mathbf{F}_{q(z|x,\phi)}^{-1}(\rho)|x,\phi\right) \cdot \log p\left(x|\mathbf{F}_{q(z|x,\phi)}^{-1}(\rho),\theta\right) \cdot \left| \det \left( \frac{\partial \mathbf{F}_{q(z|x,\phi)}^{-1}(\rho)}{\partial \rho} \right) \right| \]
\[ = \int_{\rho=\mathbf{0}}^{\mathbf{1}} q\left(\mathbf{F}_{q(z|x,\phi)}^{-1}(\rho)|x,\phi\right) \cdot \log p\left(x|\mathbf{F}_{q(z|x,\phi)}^{-1}(\rho),\theta\right) \cdot \left( \prod_j \frac{\partial F_j^{-1}(\rho)}{\partial \rho_j} \right) \]
\[ = \int_{\rho=\mathbf{0}}^{\mathbf{1}} \frac{q\left(\mathbf{F}_{q(z|x,\phi)}^{-1}(\rho)|x,\phi\right)}{\prod_j q\left(z_j = F_j^{-1}(\rho)|z_{i<j}\right)} \cdot \log p\left(x|\mathbf{F}_{q(z|x,\phi)}^{-1}(\rho),\theta\right) \]
\[ = \int_{\rho=\mathbf{0}}^{\mathbf{1}} \log p\left(x|\mathbf{F}_{q(z|x,\phi)}^{-1}(\rho),\theta\right) \] (16)
The variable \( \rho \) has dimensionality equal to that of \( z \); \( \mathbf{0} \) is the vector of all 0s; \( \mathbf{1} \) is the vector of all 1s. The gradient with respect to \( \phi \) is then easy to approximate stochastically:
\[ \frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x,\phi)}[\log p(x|z,\theta)] \approx \frac{1}{N} \sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial \phi} \log p\left(x|\mathbf{F}_{q(z|x,\phi)}^{-1}(\rho),\theta\right) \] (17)
Note that if \( q(z|x,\phi) \) is factorial (i.e., the product of independent distributions in each dimension \( z_j \)), then the conditional-marginal CDFs \( F_j \) are just the marginal CDFs in each direction. However, even if \( q(z|x,\phi) \) is not factorial, Equation 17 still holds so long as \( \mathbf{F} \) is nevertheless defined to be the set of conditional-marginal CDFs of Equation 15.

B THE DIFFICULTY OF ESTIMATING GRADIENTS OF THE ELBO WITH REINFORCE

It is easy to construct a stochastic approximation to the gradient of the ELBO that only requires computationally tractable samples, and admits both discrete and continuous latent variables. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Bengio et al., 2013; Mnih & Rezende, 2016):
\[ \frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x,\phi)}[\log p(x|z,\theta)] = \mathbb{E}_{q(z|x,\phi)}\left[ [\log p(x|z,\theta) - B(x)] \cdot \frac{\partial}{\partial \phi} \log q(z|x,\phi) \right] \]
\[ \approx \frac{1}{N} \sum_{z \sim q(z|x,\phi)} \left( [\log p(x|z,\theta) - B(x)] \cdot \frac{\partial}{\partial \phi} \log q(z|x,\phi) \right) \] (18)
where \( B(x) \) is a (possibly input-dependent) baseline, which leaves the expectation unchanged but can reduce the variance of a stochastic estimate of it.

In REINFORCE, \( \frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x,\phi)}[\log p(x|z,\theta)] \) is effectively estimated by something akin to a finite difference approximation to the derivative. The autoencoding term is a function of the conditional log-likelihood \( \log p(x|z,\theta) \), composed with the approximating posterior \( q(z|x,\phi) \), which determines the value of \( z \) at which \( p(x|z,\theta) \) is evaluated.
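As a concrete sketch of the estimator in Equation 18, the following numpy snippet estimates the gradient for a factorial Bernoulli approximating posterior, using the mean reward across samples as a simple baseline. The names are ours, and log_p_fn stands in for an arbitrary decoder log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_gradient(logits, log_p_fn, n_samples=1000):
    """REINFORCE estimate of d/d(logits) E_{q(z)}[log p(x|z)] for a
    factorial q(z) = Bernoulli(sigmoid(logits)), with the mean
    reward across samples used as the baseline B."""
    q = 1.0 / (1.0 + np.exp(-logits))
    z = (rng.random((n_samples, logits.size)) < q).astype(float)
    rewards = np.array([log_p_fn(z_k) for z_k in z])
    baseline = rewards.mean()
    # For Bernoulli(sigmoid(logits)): d/d(logits) log q(z) = z - q.
    return ((rewards - baseline)[:, None] * (z - q)).mean(axis=0)
```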
However, the conditional log-likelihood is never differentiated directly in REINFORCE, even in the context of the chain rule. Rather, the conditional log-likelihood is evaluated at many different points \( z \sim q(z|x,\phi) \), and a weighted sum of these values is used to approximate the gradient, just like in the finite difference approximation.

The REINFORCE estimator of Equation 18 captures much less information about \( p(x|z,\theta) \) per sample than the variational autoencoder estimator of Equation 3, which actively makes use of the gradient. In particular, the change of \( p(x|z,\theta) \) in some direction \( \vec{d} \) can only affect the REINFORCE gradient estimate if a sample is taken with a component in direction \( \vec{d} \). In a \( D \)-dimensional latent space, at least \( D \) samples are required to capture the variation of \( p(x|z, \theta) \) in all directions; fewer samples span a smaller subspace. Since the latent representation commonly consists of dozens of variables, the REINFORCE gradient estimate can be much less efficient than one that makes direct use of the gradient of \( p(x|z, \theta) \). Moreover, we will show in Section 5 that, when the gradient is calculated efficiently, hundreds of latent variables can be used effectively.

C AUGMENTING DISCRETE LATENT VARIABLES WITH CONTINUOUS LATENT VARIABLES

Intuitively, variational autoencoders break the encoder\(^{13}\) distribution into “packets” of probability of infinitesimal but equal mass, within which the value of the latent variables is approximately constant. These packets correspond to a region \( r_i < \rho_i < r_i + \delta \) for all \( i \) in Equation 16, and the expectation is taken over these packets. There are more packets in regions of high probability, so high-probability values are more likely to be selected. More rigorously, \( \mathbf{F}_{q(z|x, \phi)}(\zeta) \) maps intervals of high probability to larger spans of \( 0 \leq \rho \leq 1 \), so a randomly selected \( \rho \sim U[0, 1] \) is more likely to be mapped to a high-probability point by \( \mathbf{F}_{q(z|x, \phi)}^{-1}(\rho) \).

As the parameters of the encoder are changed, the location of a packet can move, while its mass is held constant. That is, \( \zeta = \mathbf{F}_{q(z|x, \phi)}^{-1}(\rho) \) is a function of \( \phi \), whereas the probability mass associated with a region of \( \rho \)-space is constant by definition. So long as \( \mathbf{F}_{q(z|x, \phi)}^{-1} \) exists and is differentiable, a small change in \( \phi \) will correspond to a small change in the location of each packet. This allows us to use the gradient of the decoder to estimate the change in the loss function, since the gradient of the decoder captures the effect of small changes in the location of a selected packet in the latent space.

In contrast, REINFORCE (Equation 18) breaks the latent representation into segments of infinitesimal but equal volume; e.g., \( z_i \leq z_i' < z_i + \delta \) for all \( i \) (Williams, 1992; Mnih & Gregor, 2014; Bengio et al., 2013). The latent variables are also approximately constant within these segments, but the probability mass varies between them. Specifically, the probability mass of the segment \( z \leq z' < z + \delta \) is proportional to \( q(z|x, \phi) \). Once a segment is selected in the latent space, its location is independent of the encoder and decoder. In particular, the gradient of the loss function does not depend on the gradient of the decoder with respect to position in the latent space, since this position is fixed.
Only the probability mass assigned to the segment is relevant.

Although variational autoencoders can make use of the additional gradient information from the decoder, the gradient estimate is only low-variance so long as the motion of most probability packets has a similar effect on the loss. This is likely to be the case if the packets are tightly clustered (e.g., the encoder produces a Gaussian with low variance, or the spike-and-exponential distribution of Section 2.1), or if the movements of far-separated packets have a similar effect on the total loss (e.g., the decoder is roughly linear).

Nevertheless, Equation 17 of the VAE can be understood in analogy to dropout (Srivastava et al., 2014) or standout (Ba & Frey, 2013) regularization. Like dropout and standout, \( \mathbf{F}_{q(z|x, \phi)}^{-1}(\rho) \) is an element-wise stochastic nonlinearity applied to a hidden layer. Since \( \mathbf{F}_{q(z|x, \phi)}^{-1}(\rho) \) selects a point in the probability distribution, it rarely selects an improbable point. Like standout, the distribution of the hidden layer is learned. Indeed, we recover the encoder of standout if we use the spike-and-Gaussian distribution of Section E.1 and let the standard deviation \( \sigma \) go to zero.

However, variational autoencoders cannot be used directly with discrete latent representations, since changing the parameters of a discrete encoder can only move probability mass between the allowed discrete values, which are far apart. If we follow a probability packet as we change the encoder parameters, it either remains in place, or jumps a large distance. As a result, the vast majority of probability packets are unaffected by small changes to the parameters of the encoder. Even if we are lucky enough to select a packet that jumps between the discrete values of the latent representation, the gradient of the decoder cannot be used to accurately estimate the change in the loss function, since the gradient only captures the effect of very small movements of the probability packet.

\footnotetext{13 Since the approximating posterior \( q(z|x, \phi) \) maps each input to a distribution over the latent space, it is sometimes called the encoder. Correspondingly, since the conditional likelihood \( p(x|z, \theta) \) maps each configuration of the latent variables to a distribution over the input space, it is called the decoder.}

To use discrete latent representations in the variational autoencoder framework, we must first transform to a continuous latent space, within which probability packets move smoothly. That is, we must compute Equation 17 over a different distribution than the original posterior distribution. Surprisingly, we need not sacrifice the original discrete latent space, with its associated approximating posterior. Rather, we extend the encoder \( q(z|x, \phi) \) and the prior \( p(z|\theta) \) with a transformation to a continuous, auxiliary latent representation \( \zeta \), and correspondingly make the decoder a function of this new continuous representation. By extending both the encoder and the prior in the same way, we avoid affecting the remaining KL divergence in Equation 2.\footnote{We cannot simply prepend the transformation to continuous space to the decoder, rather than extending the encoder and the prior, since this would not change the space of the probability packets.} The gradient is defined everywhere if we require that each point in the original latent space map to nonzero probability over the entire auxiliary continuous space.
This ensures that, if the probability of some point in the original latent space increases from zero to a nonzero value, no probability packet needs to jump a large distance to cover the resulting new region in the auxiliary continuous space. Moreover, it ensures that the conditional-marginal CDFs are strictly increasing as a function of their main argument, and thus are invertible.

If we ignore the cases where some discrete latent variable has probability 0 or 1, we need only require that, for every pair of points in the original latent space, the associated regions of nonzero probability in the auxiliary continuous space overlap. This ensures that probability packets can move continuously as the parameters \( \phi \) of the encoder, \( q(z|x, \phi) \), change, redistributing weight amongst the associated regions of the auxiliary continuous space.

D ALTERNATIVE TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS LATENT REPRESENTATIONS

The spike-and-exponential transformation from discrete latent variables \( z \) to continuous latent variables \( \zeta \) presented in Section 2.1 is by no means the only one possible. Here, we develop a collection of alternative transformations.

D.1 MIXTURE OF RAMPS

As another concrete example, we consider a case where both \( r(\zeta_i|z_i = 0) \) and \( r(\zeta_i|z_i = 1) \) are linear functions of \( \zeta_i \):
\[ r(\zeta_i|z_i = 0) = \begin{cases} 2 \cdot (1 - \zeta_i), & \text{if } 0 \leq \zeta_i \leq 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=0)}(\zeta') = \left. 2\zeta_i - \zeta_i^2 \right|_0^{\zeta'} = 2\zeta' - \zeta'^2 \]
\[ r(\zeta_i|z_i = 1) = \begin{cases} 2 \cdot \zeta_i, & \text{if } 0 \leq \zeta_i \leq 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=1)}(\zeta') = \left. \zeta_i^2 \right|_{0}^{\zeta'} = \zeta'^2 \]
where \( F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta) \cdot d\zeta \) is the CDF of probability distribution \( p \) in the domain \([0, 1]\). The CDF for \( q(\zeta|x, \phi) \) as a function of \( q(z = 1|x, \phi) \) is:
\[ F_{q(\zeta|x,\phi)}(\zeta') = (1 - q(z = 1|x, \phi)) \cdot \left(2\zeta' - \zeta'^2\right) + q(z = 1|x, \phi) \cdot \zeta'^2 \]
\[ = 2 \cdot q(z = 1|x, \phi) \cdot \left(\zeta'^2 - \zeta'\right) + 2\zeta' - \zeta'^2. \] (19)
We can calculate \( F_{q(\zeta|x,\phi)}^{-1} \) explicitly, using the substitutions \( F_{q(\zeta|x,\phi)} \to \rho \), \( q(z=1|x,\phi) \to q \), and \( \zeta' \to \zeta \) in Equation 19 to simplify notation:
\[ \rho = 2 \cdot q \cdot (\zeta^2 - \zeta) + 2\zeta - \zeta^2 \]
\[ 0 = (2q-1) \cdot \zeta^2 + 2(1-q) \cdot \zeta - \rho \]
\[ \zeta = \frac{2(q-1) \pm \sqrt{4(1-2q+q^2)+4(2q-1)\rho}}{2(2q-1)} = \frac{(q-1) \pm \sqrt{q^2 + 2(\rho-1)q + (1-\rho)}}{2q-1} \]
if \( q \neq \frac{1}{2} \); otherwise \( \zeta = \rho \). \( F_{q(\zeta|x,\phi)}^{-1} \) has the desired range \([0, 1]\) if we choose
\[ F^{-1}(\rho) = \frac{(q-1) + \sqrt{q^2 + 2(\rho-1)q + (1-\rho)}}{2q-1} = \frac{q-1 + \sqrt{(q-1)^2 + (2q-1) \cdot \rho}}{2q-1} \] (20)
if \( q \neq \frac{1}{2} \), and \( F^{-1}(\rho) = \rho \) if \( q = \frac{1}{2} \). We plot \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) as a function of \( q \) for various values of \( \rho \) in Figure 7.

![Inverse CDF of the mixture of ramps transformation for ρ ∈ {0.2, 0.5, 0.8}](page_362_670_482_340.png)

Figure 7: Inverse CDF of the mixture of ramps transformation for \( \rho \in \{0.2, 0.5, 0.8\} \)
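As a quick numerical check of Equation 20, the sketch below (names ours) implements the inverse CDF and verifies the round trip against the forward CDF of Equation 19:

```python
import numpy as np

def ramp_inverse_cdf(rho, q):
    """Inverse CDF of the mixture-of-ramps smoothing (Equation 20)."""
    if np.isclose(q, 0.5):
        return rho
    return (q - 1.0 + np.sqrt((q - 1.0) ** 2 + (2.0 * q - 1.0) * rho)) \
        / (2.0 * q - 1.0)

def ramp_cdf(zeta, q):
    """Forward CDF from Equation 19, used here only as a check."""
    return 2.0 * q * (zeta ** 2 - zeta) + 2.0 * zeta - zeta ** 2

# Round-trip check: F(F^{-1}(rho)) == rho across a grid of rho and q.
for q in (0.2, 0.5, 0.8):
    rho = np.linspace(0.0, 1.0, 11)
    zeta = np.array([ramp_inverse_cdf(r, q) for r in rho])
    assert np.allclose(ramp_cdf(zeta, q), rho)
```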
In Equation 20, \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) is quasi-sigmoidal as a function of \( q(z=1|x,\phi) \). If \( \rho \ll 0.5 \), \( F^{-1} \) is concave-up; if \( \rho \gg 0.5 \), \( F^{-1} \) is concave-down; if \( \rho \approx 0.5 \), \( F^{-1} \) is sigmoid. In no case is \( F^{-1} \) extremely flat, so it does not kill gradients. In contrast, the sigmoid probability of \( z \) inevitably flattens.

D.2 SPIKE-AND-SLAB

We can also use the spike-and-slab transformation, which is consistent with sparse coding and has proven successful in other generative models (Courville et al., 2011):
\[ r(\zeta_i|z_i=0) = \begin{cases} \infty, & \text{if } \zeta_i = 0 \\ 0, & \text{otherwise} \end{cases} \quad F_{r(\zeta_i|z_i=0)}(\zeta') = 1 \]
\[ r(\zeta_i|z_i=1) = \begin{cases} 1, & \text{if } 0 \leq \zeta_i \leq 1 \\ 0, & \text{otherwise} \end{cases} \quad F_{r(\zeta_i|z_i=1)}(\zeta') = \left. \zeta_i \right|_{0}^{\zeta'} = \zeta' \]
where \( F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta) \cdot d\zeta \) is the cumulative distribution function (CDF) of probability distribution \( p \) in the domain \([0, 1]\). The CDF for \( q(\zeta|x,\phi) \) as a function of \( q(z=1|x,\phi) \) is:
\[ F_{q(\zeta|x,\phi)}(\zeta') = (1 - q(z=1|x,\phi)) \cdot F_{r(\zeta_i|z_i=0)}(\zeta') + q(z=1|x,\phi) \cdot F_{r(\zeta_i|z_i=1)}(\zeta') \]
\[ = q(z=1|x,\phi) \cdot (\zeta' - 1) + 1. \]
We can calculate \( F_{q(\zeta|x,\phi)}^{-1} \) explicitly, using the substitution \( q(z = 1|x, \phi) \to q \) to simplify notation:
\[ F_{q(\zeta|x,\phi)}^{-1}(\rho) = \begin{cases} \frac{\rho-1}{q} + 1, & \text{if } \rho \geq 1-q \\ 0, & \text{otherwise} \end{cases} \]
We plot \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) as a function of \( q \) for various values of \( \rho \) in Figure 8.

![Inverse CDF of the spike-and-slab transformation for ρ ∈ {0.2, 0.5, 0.8}](page_374_629_563_312.png)

Figure 8: Inverse CDF of the spike-and-slab transformation for \( \rho \in \{0.2, 0.5, 0.8\} \)
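The spike-and-slab inverse CDF is piecewise linear and trivial to implement; a minimal sketch (assuming \( q > 0 \), names ours):

```python
import numpy as np

def spike_and_slab_inverse_cdf(rho, q):
    """Inverse CDF of the spike-and-slab smoothing transformation:
    zeta = 0 with probability 1 - q, otherwise uniform on [0, 1]."""
    return np.where(rho >= 1.0 - q, (rho - 1.0) / q + 1.0, 0.0)

# With q = 0.5, half of the probability mass sits in the spike at 0.
rho = np.linspace(0.0, 1.0, 5)
print(spike_and_slab_inverse_cdf(rho, 0.5))  # [0. 0. 0. 0.5 1.]
```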
D.3 ENGINEERING EFFECTIVE SMOOTHING TRANSFORMATIONS

If the smoothing transformation is not chosen appropriately, the contribution of low-probability regions to the expected gradient of the inverse CDF may be large. Using a variant of the inverse function theorem, we find:
\[ \frac{\partial}{\partial \theta} F(F^{-1}(\rho)) = \left. \frac{\partial F}{\partial \theta} \right|_{F^{-1}(\rho)} + \left. \frac{\partial F}{\partial z} \right|_{F^{-1}(\rho)} \cdot \frac{\partial}{\partial \theta} F^{-1}(\rho) = 0 \]
\[ p(z) \cdot \frac{\partial}{\partial \theta} F^{-1}(\rho) = - \left. \frac{\partial F}{\partial \theta} \right|_z, \] (21)
where \( z = F^{-1}(\rho) \). Consider the case where \( r(\zeta_i|z_i = 0) \) and \( r(\zeta_i|z_i = 1) \) are unimodal, but have little overlap. For instance, both distributions might be Gaussian, with means that are many standard deviations apart. For values of \( \zeta_i \) between the two modes, \( F(\zeta_i) \approx q(z_i = 0|x, \phi) \), assuming without loss of generality that the mode corresponding to \( z_i = 0 \) occurs at a smaller value of \( \zeta_i \) than that corresponding to \( z_i = 1 \). As a result, \( \frac{\partial F}{\partial q} \approx 1 \) between the two modes, and \( \frac{\partial F^{-1}}{\partial q} \approx \frac{1}{r(\zeta_i)} \) even if \( r(\zeta_i) \approx 0 \). In this case, the stochastic estimates of the gradient in Equation 8, which depend upon \( \frac{\partial F^{-1}}{\partial q} \), have large variance. These high-variance gradient estimates arise because \( r(\zeta_i|z_i = 0) \) and \( r(\zeta_i|z_i = 1) \) are too well separated, and the resulting smoothing transformation is too sharp.

Such disjoint smoothing transformations are analogous to a sigmoid transfer function \( \sigma(c \cdot x) \), where \( \sigma \) is the logistic function and \( c \to \infty \). The smoothing provided by the continuous random variables \( \zeta \) is only effective if there is a region of meaningful overlap between \( r(\zeta|z = 0) \) and \( r(\zeta|z = 1) \). In particular, \( r(\zeta_i|z_i = 0) + r(\zeta_i|z_i = 1) \gg 0 \) for all \( \zeta_i \) between the modes of \( r(\zeta_i|z_i = 0) \) and \( r(\zeta_i|z_i = 1) \), so the density \( p(z) \) in Equation 21 remains moderate. In the spike-and-exponential distribution described in Section 2.1, this overlap can be ensured by fixing or bounding \( \beta \).

E TRANSFORMATIONS FROM DISCRETE TO CONTINUOUS LATENT REPRESENTATIONS THAT DEPEND UPON THE INPUT

It is not necessary to define the transformation from discrete to continuous latent variables in the approximating posterior, \( r(\zeta|z) \), to be independent of the input \( x \). In the true posterior distribution, \( p(\zeta|z, x) \approx p(\zeta|z) \) only if \( z \) already captures most of the information about \( x \) and \( p(\zeta|z, x) \) changes little as a function of \( x \), since
\[ p(\zeta|z) = \int_x p(\zeta, x|z) = \int_x p(\zeta|z, x) \cdot p(x|z). \]
This is implausible if the number of discrete latent variables is much smaller than the entropy of the input data distribution. To address this, we can define:
\[ q(\zeta, z|x, \phi) = q(z|x, \phi) \cdot q(\zeta|z, x, \phi) \]
\[ p(\zeta, z|\theta) = p(\zeta|z) \cdot p(z|\theta) \]
This leads to an evidence lower bound that resembles that of Equation 2, but adds an extra term:
\[ \mathcal{L}_{VAE}(x, \theta, \phi) = \log p(x|\theta) - \mathrm{KL}\left[ q(z, \zeta|x, \phi) || p(z, \zeta|x, \theta) \right] \]
\[ = \log p(x|\theta) - \mathrm{KL}\left[ q(\zeta|z, x, \phi) \cdot q(z|x, \phi) || p(\zeta|z, x, \theta) \cdot p(z|x, \theta) \right] \]
\[ = \sum_z \int_\zeta q(\zeta|z, x, \phi) \cdot q(z|x, \phi) \cdot \log \left[ \frac{p(x|\zeta, \theta) \cdot p(\zeta|z, \theta) \cdot p(z|\theta)}{q(\zeta|z, x, \phi) \cdot q(z|x, \phi)} \right] \]
\[ = \mathbb{E}_{q(\zeta|z, x, \phi), q(z|x, \phi)} [\log p(x|\zeta, \theta)] - \mathrm{KL}\left[ q(z|x, \phi) || p(z|\theta) \right] - \sum_z q(z|x, \phi) \cdot \mathrm{KL}\left[ q(\zeta|z, x, \phi) || p(\zeta|z) \right]. \] (22)
The extension to hierarchical approximating posteriors proceeds as in Sections 3 and 4. If both \( q(\zeta|z, x, \phi) \) and \( p(\zeta|z) \) are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal.

However, while the gradients of this KL divergence are easy to calculate when conditioned on \( z \), the gradients with respect to \( q(z|x, \phi) \) in the new term seem to force us into a REINFORCE-like approach (cf. Equation 18):
\[ \sum_z \frac{\partial q(z|x, \phi)}{\partial \phi} \cdot \mathrm{KL}\left[ q(\zeta|z, x, \phi) || p(\zeta|z) \right] = \mathbb{E}_{q(z|x, \phi)} \left[ \mathrm{KL}\left[ q(\zeta|z, x, \phi) || p(\zeta|z) \right] \cdot \frac{\partial \log q(z|x, \phi)}{\partial \phi} \right]. \] (23)
The reward signal is now \( \mathrm{KL}\left[ q(\zeta|z, x, \phi) || p(\zeta|z) \right] \) rather than \( \log p(x|\zeta, \theta) \), but the effect on the variance is the same, likely negating the advantages of the variational autoencoder in the rest of the loss function.
However, whereas REINFORCE is high-variance because it approximates the expectation by sampling, we can perform the expectation in Equation 23 analytically, without injecting any additional variance. Specifically, if \( q(z|x, \phi) \) and \( q(\zeta|z, x, \phi) \) are factorial, with \( q(\zeta_i|z_i, x, \phi) \) only dependent on \( z_i \), then \( \mathrm{KL}\left[ q(\zeta|z, x, \phi) || p(\zeta|z) \right] \) decomposes into a sum of the KL divergences over each variable, as does \( \frac{\partial \log q(z|x, \phi)}{\partial \phi} \). The expectation of all terms in the resulting product of sums is zero except those of the form \( \mathbb{E}\left[ \mathrm{KL}\left[ q_i || p_i \right] \cdot \frac{\partial \log q_i}{\partial \phi} \right] \), due to the identity explained in Equation 27. We then use the reparameterization trick to eliminate all hierarchical layers before the current one, and marginalize over each \( z_i \). As a result, we can compute the term of Equation 23 by backpropagating
\[ \mathrm{KL}\left[ q(\zeta|z = 1, x, \phi) || p(\zeta|z = 1) \right] - \mathrm{KL}\left[ q(\zeta|z = 0, x, \phi) || p(\zeta|z = 0) \right] \]
into \( q(z|x, \phi) \). This is especially simple if \( q(\zeta_i|z_i, x, \phi) = p(\zeta_i|z_i) \) when \( z_i = 0 \), since then \( \mathrm{KL}\left[ q(\zeta|z = 0, x, \phi) || p(\zeta|z = 0) \right] = 0 \).

E.1 SPIKE-AND-GAUSSIAN

We might wish \( q(\zeta_i|z_i, x, \phi) \) to be a separate Gaussian for both values of the binary \( z_i \). However, it is difficult to invert the CDF of the resulting mixture of Gaussians. It is much easier to use a mixture of a delta spike and a Gaussian, for which the CDF can be inverted piecewise:
\[ q(\zeta_i|z_i = 0, x, \phi) = \delta(\zeta_i) \qquad F_{q(\zeta_i|z_i=0,x,\phi)}(\zeta_i) = H(\zeta_i) = \begin{cases} 0, & \text{if } \zeta_i < 0 \\ 1, & \text{otherwise} \end{cases} \]
\[ q(\zeta_i|z_i = 1, x, \phi) = \mathcal{N}\left( \mu_{q,i}(x, \phi), \sigma^2_{q,i}(x, \phi) \right) \qquad F_{q(\zeta_i|z_i=1,x,\phi)}(\zeta_i) = \frac{1}{2} \left[ 1 + \operatorname{erf} \left( \frac{\zeta_i - \mu_{q,i}(x, \phi)}{\sqrt{2} \sigma_{q,i}(x, \phi)} \right) \right] \]
where \( \mu_q(x, \phi) \) and \( \sigma_q(x, \phi) \) are functions of \( x \) and \( \phi \). We use the substitutions \( q(z_i = 1|x, \phi) \to q_i \), \( \mu_{q,i}(x, \phi) \to \mu_{q,i} \), and \( \sigma_{q,i}(x, \phi) \to \sigma_{q,i} \) in the sequel to simplify notation. The prior distribution \( p \) is similarly parameterized.
We can now find the CDF for \( q(\zeta|x, \phi) \) as a function of \( q(z = 1|x, \phi) \to q_i \):
\[ F_{q(\zeta|x, \phi)}(\zeta_i) = (1 - q_i) \cdot H(\zeta_i) + \frac{q_i}{2} \cdot \left[ 1 + \operatorname{erf}\left( \frac{\zeta_i - \mu_{q,i}}{\sqrt{2}\sigma_{q,i}} \right) \right]. \]
Since \( z_i = 0 \) makes no contribution to the CDF until \( \zeta_i = 0 \), the value of \( \rho \) at which \( \zeta_i = 0 \) is
\[ \rho_i^{step} = \frac{q_i}{2} \left[ 1 + \operatorname{erf}\left( \frac{-\mu_{q,i}}{\sqrt{2}\sigma_{q,i}} \right) \right], \]
so:
\[ \zeta_i = \begin{cases} \mu_{q,i} + \sqrt{2}\sigma_{q,i} \cdot \operatorname{erf}^{-1} \left( \frac{2\rho_i}{q_i} - 1 \right), & \text{if } \rho_i < \rho_i^{step} \\ 0, & \text{if } \rho_i^{step} \leq \rho_i \leq \rho_i^{step} + (1 - q_i) \\ \mu_{q,i} + \sqrt{2}\sigma_{q,i} \cdot \operatorname{erf}^{-1} \left( \frac{2(\rho_i-1)}{q_i} + 1 \right), & \text{otherwise} \end{cases} \]
Gradients are always evaluated for fixed choices of \( \rho \), and gradients are never taken with respect to \( \rho \). As a result, expectations with respect to \( \rho \) are invariant to permutations of \( \rho \). Furthermore,
\[ \frac{2\rho_i}{q_i} - 1 = \frac{2(\rho'_i - 1)}{q_i} + 1, \]
where \( \rho'_i = \rho_i + (1 - q_i) \). We can thus shift the delta spike to the beginning of the range of \( \rho_i \), and use
\[ \zeta_i = \begin{cases} 0, & \text{if } \rho_i \leq 1 - q_i \\ \mu_{q,i} + \sqrt{2}\sigma_{q,i} \cdot \operatorname{erf}^{-1} \left( \frac{2(\rho_i-1)}{q_i} + 1 \right), & \text{otherwise} \end{cases} \]
All parameters of the multivariate Gaussians should be trainable functions of \( x \), and independent of \( q \). The new term in Equation 22 is:
\[ \sum_z q(z|x, \phi) \cdot \mathrm{KL}[q(\zeta|z, x, \phi)||p(\zeta|z)] = \sum_i \Big( q(z_i = 1|x, \phi) \cdot \mathrm{KL}[q(\zeta_i|z_i = 1, x, \phi)||p(\zeta_i|z_i = 1)] + (1 - q(z_i = 1|x, \phi)) \cdot \mathrm{KL}[q(\zeta_i|z_i = 0, x, \phi)||p(\zeta_i|z_i = 0)] \Big). \]
If \( z_i = 0 \), then \( q(\zeta_i|z_i = 0, x, \phi) = p(\zeta_i|z_i = 0, \theta) \), and \( \mathrm{KL}[q(\zeta_i|z_i = 0, x, \phi)||p(\zeta_i|z_i = 0, \theta)] = 0 \) as in Section 2. The KL divergence between two multivariate Gaussians with diagonal covariance matrices, with means \( \mu_{p,i}, \mu_{q,i} \) and covariances \( \sigma_{p,i}^2 \) and \( \sigma_{q,i}^2 \), is
\[ \mathrm{KL}[q||p] = \sum_i \left( \log \sigma_{p,i} - \log \sigma_{q,i} + \frac{\sigma_{q,i}^2 + (\mu_{q,i} - \mu_{p,i})^2}{2 \cdot \sigma_{p,i}^2} - \frac{1}{2} \right). \]
To train \( q(z_i = 1|x, \phi) \), we thus need to backpropagate \( \mathrm{KL}[q(\zeta_i|z_i = 1, x, \phi)||p(\zeta_i|z_i = 1)] \) into it. Finally,
\[ \frac{\partial \mathrm{KL}[q||p]}{\partial \mu_{q,i}} = \frac{\mu_{q,i} - \mu_{p,i}}{\sigma_{p,i}^2} \qquad \frac{\partial \mathrm{KL}[q||p]}{\partial \sigma_{q,i}} = -\frac{1}{\sigma_{q,i}} + \frac{\sigma_{q,i}}{\sigma_{p,i}^2}, \]
so
\[ \sum_z q(z|x, \phi) \cdot \frac{\partial}{\partial \mu_{q,i}} \mathrm{KL}[q||p] = q(z_i = 1|x, \phi) \cdot \frac{\mu_{q,i} - \mu_{p,i}}{\sigma_{p,i}^2} \]
\[ \sum_z q(z|x, \phi) \cdot \frac{\partial}{\partial \sigma_{q,i}} \mathrm{KL}[q||p] = q(z_i = 1|x, \phi) \cdot \left( -\frac{1}{\sigma_{q,i}} + \frac{\sigma_{q,i}}{\sigma_{p,i}^2} \right). \]
For \( p \), it is not useful to make the mean values of \( \zeta \) adjustable for each value of \( z \), since this is redundant with the parameterization of the decoder.
With fixed means, we could still parameterize the variance, but to maintain correspondence with the standard VAE, we choose the variance to be one.

F COMPUTING THE GRADIENT OF \( \mathrm{KL}\,[q(\zeta, z|x, \phi)||p(\zeta, z|\theta)] \)

The KL term of the ELBO (Equation 2) is not significantly affected by the introduction of additional continuous latent variables \( \zeta \), so long as we use the same expansion \( r(\zeta|z) \) for both the approximating posterior and the prior:
\[ \mathrm{KL}\,[q||p] = \sum_z \int_{\zeta} \left( \prod_{1 \leq j \leq k} r(\zeta_j|z_j) \cdot q(z_j|\zeta_{i<j}, x) \right) \cdot \log \left[ \frac{\prod_{1 \leq j \leq k} r(\zeta_j|z_j) \cdot q(z_j|\zeta_{i<j}, x)}{p(z) \cdot \prod_{1 \leq j \leq k} r(\zeta_j|z_j)} \right] \]
\[ = \sum_z \int_{\zeta} \left( \prod_{1 \leq j \leq k} r(\zeta_j|z_j) \cdot q(z_j|\zeta_{i<j}, x) \right) \cdot \log \left[ \frac{\prod_{1 \leq j \leq k} q(z_j|\zeta_{i<j}, x)}{p(z)} \right]. \] (24)
The gradient of Equation 24 with respect to the parameters \( \theta \) of the prior, \( p(z|\theta) \), can be estimated stochastically using samples from the approximating posterior, \( q(\zeta, z|x, \phi) \), and the true prior, \( p(z|\theta) \). When the prior is an RBM, defined by Equation 6, we find:
\[ -\frac{\partial}{\partial \theta} \mathrm{KL}\,[q||p] = -\sum_{\zeta,z} q(\zeta, z|x, \phi) \cdot \frac{\partial E_p(z, \theta)}{\partial \theta} + \sum_z p(z|\theta) \cdot \frac{\partial E_p(z, \theta)}{\partial \theta} \]
\[ = -\mathbb{E}_{q(z_1|x,\phi)} \left[ \cdots \left[ \mathbb{E}_{q(z_k|\zeta_{i<k},x,\phi)} \left[ \frac{\partial E_p(z, \theta)}{\partial \theta} \right] \right] \right] + \mathbb{E}_{p(z|\theta)} \left[ \frac{\partial E_p(z, \theta)}{\partial \theta} \right] \] (25)
The final expectation with respect to \( q(z_k|\zeta_{i<k}, x, \phi) \) can be performed analytically; all other expectations require samples from the approximating posterior. Similarly, for the prior, we must sample from the RBM, although Rao-Blackwellization can be used to marginalize half of the units.

F.1 GRADIENT OF THE ENTROPY WITH RESPECT TO \( \phi \)

In contrast, the gradient of the KL term with respect to the parameters of the approximating posterior is severely complicated by a nonfactorial approximating posterior. We break \( \mathrm{KL}\,[q||p] \) into two terms, the negative entropy \( \sum_{z,\zeta} q \log q \), and the cross-entropy \( -\sum_{z,\zeta} q \log p \), and compute their gradients separately. We can regroup the negative entropy term of the KL divergence so as to use the reparameterization trick to backpropagate through \( \prod_{i<j} q(z_j|\zeta_{i<j}, x) \):
\[ -H(q) = \sum_z \int_\zeta \left( \prod_{1 \leq j \leq k} r(\zeta_j|z_j) \cdot q(z_j|\zeta_{i<j}, x) \right) \cdot \log \left[ \prod_{1 \leq j \leq k} q(z_j|\zeta_{i<j}, x) \right] \]
\[ = \sum_z \int_\zeta \left( \prod_j r(\zeta_j|z_j) \cdot q(z_j|\zeta_{i<j}, x) \right) \cdot \left( \sum_j \log q(z_j|\zeta_{i<j}, x) \right) \]
\[ = \sum_j \sum_z \int_\zeta \left( \prod_{i \leq j} r(\zeta_i|z_i) \cdot q(z_i|\zeta_{h<i}, x) \right) \cdot \log q(z_j|\zeta_{i<j}, x) \]
\[ = \sum_j \mathbb{E}_{q(\zeta_{i<j}, z_{i<j}|x, \phi)} \left[ \sum_{z_j} q(z_j|\zeta_{i<j}, x) \cdot \log q(z_j|\zeta_{i<j}, x) \right] \]
\[ = \sum_j \mathbb{E}_{\rho_{i<j}} \left[ \sum_{z_j} q(z_j|\rho_{i<j}, x) \cdot \log q(z_j|\rho_{i<j}, x) \right] \] (26)
where indices \( i \) and \( j \) denote hierarchical groups of variables.
The probability \( q(z_j|\rho_{i<j}, x) \) is evaluated analytically, whereas all variables \( z_{i<j} \) and \( \zeta_{i<j} \) are implicitly sampled stochastically via \( \rho_{i<j} \).

We wish to take the gradient of \( -H(q) \) in Equation 26. Using the identity:
\[ \mathbb{E}_q \left[ c \cdot \frac{\partial}{\partial \phi} \log q \right] = c \cdot \sum_z q \cdot \left( \frac{\partial q}{\partial \phi} \Big/ q \right) = c \cdot \frac{\partial}{\partial \phi} \left( \sum_z q \right) = 0 \] (27)
for any constant \( c \), we can eliminate the gradient of \( \log q(z_j|\rho_{i<j}, x) \) in \( -\frac{\partial H(q)}{\partial \phi} \), and obtain:
\[ -\frac{\partial}{\partial \phi} H(q) = \sum_j \mathbb{E}_{\rho_{i<j}} \left[ \sum_{z_j} \left( \frac{\partial}{\partial \phi} q(z_j|\rho_{i<j}, x) \right) \cdot \log q(z_j|\rho_{i<j}, x) \right]. \]
Moreover, we can eliminate any log-partition function in \( \log q(z_j|\rho_{i<j}, x) \) by an argument analogous to Equation 27. By repeating this argument one more time, we can break \( \frac{\partial}{\partial \phi} q(z_j|\rho_{i<j}, x) \) into its factorial components.\footnote{The sum over \( z_i \) is marginalized out of \( \frac{\partial q_i}{\partial \phi} \cdot \prod_{j \neq i} q_j \) when multiplied by \( \log q_i \). When \( \frac{\partial q_i}{\partial \phi} \cdot \prod_{j \neq i} q_j \) is multiplied by one of the \( \log q_{j \neq i} \), the sum over \( z_i \) can be taken inside the \( \frac{\partial}{\partial \phi} \), and again \( \frac{\partial}{\partial \phi} \sum_{z_i} q_i = 0 \).} If \( z_i \in \{0, 1\} \), then using Equation 10, the gradient of the negative entropy reduces to:
\[ -\frac{\partial}{\partial \phi} H(q) = \sum_j \mathbb{E}_{\rho_{i<j}} \left[ \sum_{i \in j} \sum_{z_i} q_i(z_i) \cdot \left( z_i \cdot \frac{\partial g_i}{\partial \phi} - \sum_{z_i'} q_i(z_i') \cdot z_i' \cdot \frac{\partial g_i}{\partial \phi} \right) \cdot \left( g_i \cdot z_i \right) \right] \]
\[ = \sum_j \mathbb{E}_{\rho_{i<j}} \left[ \frac{\partial g_j^\top}{\partial \phi} \cdot \left( g_j \odot \left[ q_j(z_j = 1) - q_j^2(z_j = 1) \right] \right) \right] \]
where \( i \) and \( z_i \) correspond to single variables within the hierarchical groups denoted by \( j \). In TensorFlow, it might be simpler to write:
\[ -\frac{\partial}{\partial \phi} H(q) = \sum_j \mathbb{E}_{\rho_{i<j}} \left[ \frac{\partial q_j^\top(z_j = 1)}{\partial \phi} \cdot g_j \right]. \]
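Concretely, this last expression corresponds to a surrogate loss in which the logits \( g_j \) are held fixed with a stop-gradient. The TensorFlow sketch below is ours and omits the hierarchy and the sampling over \( \rho_{i<j} \); it only illustrates the single-group case and verifies the gradient against the closed form above.

```python
import tensorflow as tf

def neg_entropy_surrogate(g):
    """Surrogate whose gradient w.r.t. the logits g matches
    d/dphi sum_j q_j(z_j=1) * g_j, with g_j treated as constant."""
    q = tf.sigmoid(g)
    return tf.reduce_sum(q * tf.stop_gradient(g), axis=-1)

# Check against the closed form q * (1 - q) * g from the text:
g = tf.Variable([0.3, -1.2, 2.0])
with tf.GradientTape() as tape:
    loss = neg_entropy_surrogate(g)
grad = tape.gradient(loss, g)
q = tf.sigmoid(g)
tf.debugging.assert_near(grad, q * (1.0 - q) * g)
```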
F.2 GRADIENT OF THE CROSS-ENTROPY

The gradient of the cross-entropy with respect to the parameters \( \phi \) of the approximating posterior does not depend on the partition function of the prior \( Z_p \), since:
\[ -\frac{\partial}{\partial \phi} \sum_z q \log p = \sum_z \frac{\partial q}{\partial \phi} \cdot E_p + \log Z_p \cdot \sum_z \frac{\partial q}{\partial \phi} = \sum_z \frac{\partial q}{\partial \phi} \cdot E_p \]
by Equations 6 and 27, so we are left with the gradient of the average energy \( E_p \). The remaining cross-entropy term is
\[ \sum_z q \cdot E_p = -\mathbb{E}_\rho \left[ z^\top \cdot W \cdot z + b^\top \cdot z \right]. \]
We can handle the term \( b^\top \cdot z \) analytically, since \( z_i \in \{0, 1\} \), and
\[ \mathbb{E}_\rho \left[ b^\top \cdot z \right] = b^\top \cdot \mathbb{E}_\rho [q(z = 1)] . \]
The approximating posterior \( q \) is continuous, with nonzero derivative, so the reparameterization trick can be applied to backpropagate gradients:
\[ \frac{\partial}{\partial \phi} \mathbb{E}_\rho \left[ b^\top \cdot z \right] = b^\top \cdot \mathbb{E}_\rho \left[ \frac{\partial}{\partial \phi} q(z = 1) \right]. \]
In contrast, each element of the sum
\[ z^\top \cdot W \cdot z = \sum_{i,j} W_{ij} \cdot z_i \cdot z_j \]
depends upon variables that are not usually in the same hierarchical level, so in general
\[ \mathbb{E}_\rho [W_{ij} z_i z_j] \neq W_{ij} \cdot \mathbb{E}_\rho [z_i] \cdot \mathbb{E}_\rho [z_j]. \]
We might decompose this term into
\[ \mathbb{E}_\rho [W_{ij} z_i z_j] = W_{ij} \cdot \mathbb{E}_{\rho_{k \leq i}} \left[ z_i \cdot \mathbb{E}_{\rho_{k > i}} [z_j] \right], \]
where without loss of generality \( z_i \) is in an earlier hierarchical layer than \( z_j \); however, it is not clear how to take the derivative of \( z_i \), since it is a discontinuous function of \( \rho_{k \leq i} \).

F.3 NAIVE APPROACH

The naive approach would be to take the gradient of the expectation using the gradient of log-probabilities over all variables:
\[ \frac{\partial}{\partial \phi} \mathbb{E} [W_{ij} z_i z_j] = \mathbb{E}_q \left[ W_{ij} z_i z_j \cdot \frac{\partial}{\partial \phi} \log q \right] \]
\[ = \mathbb{E}_{q_{1}, q_{2|1}, \ldots} \left[ W_{ij} z_i z_j \cdot \sum_k \frac{\partial}{\partial \phi} \log q_{k|l < k} \right] \]
\[ = \mathbb{E}_{q_{1}, q_{2|1}, \ldots} \left[ W_{ij} z_i z_j \cdot \sum_k \frac{1}{q_{k|l < k}} \cdot \frac{\partial q_{k|l < k}}{\partial \phi} \right]. \] (28)
For \( \frac{\partial q_{k|l < k}}{\partial \phi} \), we can drop out terms involving only \( z_{i < k} \) and \( z_{j < k} \) that occur hierarchically before \( k \), since those terms can be pulled out of the expectation over \( q_k \), and we can apply Equation 27. However, for terms involving \( z_{i > k} \) or \( z_{j > k} \) that occur hierarchically after \( k \), the expected value of \( z_i \) or \( z_j \) depends upon the chosen value of \( z_k \).

The gradient calculation in Equation 28 is an instance of the REINFORCE algorithm (Equation 18). Moreover, the variance of the estimate is proportional to the number of terms (to the extent that the terms are independent). The number of terms contributing to each gradient \( \frac{\partial q_{k|l < k}}{\partial \phi} \) grows quadratically with the number of units in the RBM. We can introduce a baseline, as in NVIL (Mnih & Gregor, 2014):
\[ \mathbb{E}_q \left[ (W_{ij} z_i z_j - c(x)) \cdot \frac{\partial}{\partial \phi} \log q \right], \]
but this approximation is still high-variance.

F.4 DECOMPOSITION OF \( \frac{\partial}{\partial \phi} W_{ij} z_i z_j \) VIA THE CHAIN RULE

When using the spike-and-exponential, spike-and-slab, or spike-and-Gaussian distributions of Sections 2.1, D.2, and E.1, we can decompose the gradient of \( \mathbb{E}[W_{ij} z_i z_j] \) using the chain rule. Previously, we have considered \( z \) to be a function of \( \rho \) and \( \phi \). We can instead formulate \( z \) as a function of \( q(z = 1) \) and \( \rho \), where \( q(z = 1) \) is itself a function of \( \rho \) and \( \phi \).
Specifically,
\[ z_i(q_i(z_i = 1), \rho_i) = \begin{cases} 0 & \text{if } \rho_i < 1 - q_i(z_i = 1) = q_i(z_i = 0) \\ 1 & \text{otherwise.} \end{cases} \tag{29} \]
Using the chain rule, \( \frac{\partial}{\partial \phi} z_i = \sum_j \frac{\partial z_i}{\partial q_j(z_j = 1)} \cdot \frac{\partial q_j(z_j = 1)}{\partial \phi} \), where \( \frac{\partial z_i}{\partial q_j(z_j = 1)} \) holds all \( q_{k \neq j} \) fixed, even though they all depend on the common variables \( \rho \) and parameters \( \phi \). We use the chain rule to differentiate with respect to \( q(z = 1) \) since it allows us to pull part of the integral over \( \rho \) inside the derivative with respect to \( \phi \). In the sequel, we sometimes write \( q \) in place of \( q(z = 1) \) to minimize notational clutter.

Expanding the desired gradient using the reparameterization trick and the chain rule, we find:
\[ \frac{\partial}{\partial \phi} \mathbb{E}_q [W_{ij} z_i z_j] = \frac{\partial}{\partial \phi} \mathbb{E}_\rho [W_{ij} z_i z_j] = \mathbb{E}_\rho \left[ \sum_k \frac{\partial W_{ij} z_i z_j}{\partial q_k(z_k = 1)} \cdot \frac{\partial q_k(z_k = 1)}{\partial \phi} \right]. \tag{30} \]
We can change the order of integration (via the expectation) and differentiation since
\[ |W_{ij} z_i z_j| \leq |W_{ij}| < \infty \]
for all \( \rho \) and bounded \( \phi \) (Cheng, 2006). Although \( z(q, \rho) \) is a step function, and its derivative is a delta function, the integral (corresponding to the expectation with respect to \( \rho \)) of its derivative is finite. Rather than dealing with generalized functions directly, we apply the definition of the derivative, and push through the matching integral to recover a finite quantity.

For simplicity, we pull the sum over \( k \) out of the expectation in Equation 30, and consider each summand independently. From Equation 29, we see that \( z_i \) is only a function of \( q_i \), so all terms in the sum over \( k \) in Equation 30 vanish except \( k = i \) and \( k = j \). Without loss of generality, we consider the term \( k = i \); the term \( k = j \) is symmetric. Applying the definition of the gradient to one of the summands, and then analytically taking the expectation with respect to \( \rho_i \), we obtain:
\[ \mathbb{E}_\rho \left[ \frac{\partial W_{ij} \cdot z_i(q, \rho) \cdot z_j(q, \rho)}{\partial q_i(z_i = 1)} \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi} \right] = \mathbb{E}_\rho \left[ \lim_{\delta q_i(z_i = 1) \to 0} \frac{W_{ij} \cdot z_i(q + \delta q_i, \rho) \cdot z_j(q + \delta q_i, \rho) - W_{ij} \cdot z_i(q, \rho) \cdot z_j(q, \rho)}{\delta q_i} \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi} \right] \]
\[ = \mathbb{E}_{\rho_{k \neq i}} \left[ \lim_{\delta q_i(z_i = 1) \to 0} \delta q_i \cdot \frac{W_{ij} \cdot 1 \cdot z_j(q, \rho) - W_{ij} \cdot 0 \cdot z_j(q, \rho)}{\delta q_i} \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi} \Bigg|_{\rho_i = q_i(z_i = 0)} \right] = \mathbb{E}_{\rho_{k \neq i}} \left[ W_{ij} \cdot z_j(q, \rho) \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi} \Bigg|_{\rho_i = q_i(z_i = 0)} \right]. \]
The third line follows from Equation 29, since \( z_i(q + \delta q_i, \rho) \) differs from \( z_i(q, \rho) \) only in the region of \( \rho \) of size \( \delta q_i \) around \( q_i(z_i = 0) = 1 - q_i(z_i = 1) \) where \( z_i(q + \delta q_i, \rho) \neq z_i(q, \rho) \). Regardless of the choice of \( \rho \), \( z_j(q + \delta q_i, \rho) = z_j(q, \rho) \).
The third line fixes \( \rho_i \) to the transition between \( z_i = 0 \) and \( z_i = 1 \) at \( q_i(z_i = 0) \). Since \( z_i = 0 \) implies \( \zeta_i = 0 \),\footnote{We chose the conditional distribution \( r(\zeta_i|z_i = 0) \) to be a delta spike at zero.} and \( \zeta \) is a continuous function of \( \rho \), the third line implies that \( \zeta_i = 0 \). At the same time, since \( q_i \) is only a function of \( \rho_{k<i} \) from earlier in the hierarchy, the term \( \frac{\partial q_i}{\partial \phi} \) is not affected by the choice of \( \rho_i \).\footnote{In contrast, \( z_i \) is a function of \( \rho_i \).} As noted above, due to the chain rule, the perturbation \( \delta q_i \) has no effect on other \( q_j \) by definition; the gradient is evaluated with those values held constant. On the other hand, \( \frac{\partial q_i}{\partial \phi} \) is generally nonzero for all parameters governing hierarchical levels \( k < i \).

Since \( \rho_i \) is fixed such that \( \zeta_i = 0 \), all units further down the hierarchy must be sampled consistent with this restriction. A sample from \( \rho \) has \( \zeta_i = 0 \) if \( z_i = 0 \), which occurs with probability \( q_i(z_i = 0) \).\footnote{It might also be the case that \( \zeta_i = 0 \) when \( z_i = 1 \), but with our choice of \( r(\zeta|z) \), this has vanishingly small probability.} We can compute the gradient with a stochastic approximation by multiplying each sample by \( 1 - z_i \), so that terms with \( \zeta_i \neq 0 \) are ignored,\footnote{This takes advantage of the fact that \( z_i \in \{0, 1\} \).} and scaling up the gradient when \( z_i = 0 \) by \( \frac{1}{q_i(z_i = 0)} \):
\[ \frac{\partial}{\partial \phi} \mathbb{E} \left[ W_{ij} z_i z_j \right] = \mathbb{E}_{\rho} \left[ W_{ij} \cdot \frac{1 - z_i}{1 - q_i(z_i = 1)} \cdot z_j \cdot \frac{\partial q_i(z_i = 1)}{\partial \phi} \right]. \tag{31} \]
The term \( \frac{1 - z_i}{1 - q_i} \) is not necessary if \( j \) comes before \( i \) in the hierarchy. While Equation 31 appears similar to REINFORCE, it is better understood as an importance-weighted estimate of an efficient gradient calculation. Just as a ReLU only has a nonzero gradient in the linear regime, \( \frac{\partial z_i}{\partial \phi} \) effectively only has a nonzero gradient when \( z_i = 0 \), in which case \( \frac{\partial z_i}{\partial \phi} \sim \frac{\partial q_i(z_i = 1)}{\partial \phi} \). Unlike in REINFORCE, we do effectively differentiate the reward, \( W_{ij} z_i z_j \). Moreover, the number of terms contributing to each gradient \( \frac{\partial q_i(z_i = 1)}{\partial \phi} \) grows linearly with the number of units in an RBM, whereas it grows quadratically in the method of Section F.3.
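In an automatic-differentiation framework, Equation 31 can likewise be implemented as a stop-gradient surrogate, in which the importance weight is held constant and only \( q_i \) is differentiated. The sketch below is our illustration (variable names are hypothetical) and covers a single \( (i, j) \) term with \( z_i \) earlier in the hierarchy; the symmetric \( k = j \) term is analogous.

```python
import tensorflow as tf

def cross_term_surrogate(w_ij, z_i, z_j, q_i):
    """Surrogate whose phi-gradient matches one term of Equation 31.
    z_i, z_j: sampled {0, 1} values (constants under this estimator);
    q_i: q_i(z_i = 1), a differentiable function of phi."""
    weight = tf.stop_gradient(w_ij * (1.0 - z_i) / (1.0 - q_i) * z_j)
    # d(surrogate)/dphi = w_ij * (1 - z_i)/(1 - q_i) * z_j * dq_i/dphi
    return weight * q_i
```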
G MOTIVATION FOR BUILDING APPROXIMATING POSTERIOR AND PRIOR HIERARCHIES IN THE SAME ORDER

Intuition regarding the difficulty of approximating the posterior distribution over the latent variables given the data can be developed by considering sparse coding, an approach that uses a basis set of spatially localized filters (Olshausen & Field, 1996). The basis set is overcomplete, and there are generally many basis elements similar to any selected basis element. However, the sparsity prior pushes the posterior distribution to use only one amongst each set of similar basis elements. As a result, there is a large set of sparse representations of roughly equivalent quality for any single input. Each basis element individually can be replaced with a similar basis element. However, having changed one basis element, the optimal choice for the adjacent elements also changes, so that the filters mesh properly, avoiding redundancy or gaps. The true posterior is thus highly correlated, since even after conditioning on the input, the probability of a given basis element depends strongly on the selection of the adjacent basis elements.

These equivalent representations can easily be disambiguated by the successive layers of the representation. In the simplest case, the previous layer could directly specify which correlated set of basis elements to use amongst the applicable sets. We can therefore achieve greater efficiency by inferring the approximating posterior over the top-most latent layer first. Only then do we compute the conditional approximating posteriors of lower layers given a sample from the approximating posterior of the higher layers, breaking the symmetry between representations of similar quality.

H ARCHITECTURE

The stochastic approximation to the ELBO is computed via one pass down the approximating posterior (Figure 4a), sampling from each continuous latent layer \( \zeta_i \) and \( \mathfrak{z}_{m \geq 1} \) in turn; and another pass down the prior (Figure 4b), conditioned on the sample from the approximating posterior. In the pass down the prior, signals do not flow from layer to layer through the entire model. Rather, the input to each layer is determined by the approximating posterior of the previous layers, as follows from Equation 14. The gradient is computed by backpropagating the reconstruction log-likelihood, and the KL divergence between the approximating posterior and true prior at each layer, through this differentiable structure.

All hyperparameters were tuned via manual experimentation. Except in Figure 6, RBMs have 128 units (64 units per side, with full bipartite connections between the two sides), with 4 layers of hierarchy in the approximating posterior. We use 100 iterations of block Gibbs sampling, with 20 persistent chains per element of the minibatch, to sample from the prior in the stochastic approximation to Equation 11.

When using the hierarchy of continuous latent variables described in Section 4, discrete VAEs overfit if any component of the prior is overparameterized, as shown in Figure 9a. In contrast, a larger and more powerful approximating posterior generally did not reduce performance within the range examined, as in Figure 9b. In response, we manually tuned the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior. We list the selected values in Table 2. All neural networks implementing components of the approximating posterior contain two hidden layers of 2000 units.

![Log likelihood vs. hidden units per layer for prior and approximating posterior](page_352_670_889_377.png)

Figure 9: Log likelihood on statically binarized MNIST versus the number of hidden units per neural network layer, in the prior (a) and approximating posterior (b). The number of deterministic hidden layers in the networks parameterizing the prior (a) or the approximating posterior (b) is 1 (blue), 2 (red), or 3 (green). The number of deterministic hidden layers in the final network parameterizing \( p(x|\mathfrak{z}) \) is 0 (solid) or 1 (dashed). All models use only 10 layers of continuous latent variables, with no parameter sharing.
<table> <tr> <th></th> <th>Num layers</th> <th>Vars per layer</th> <th>Hids per prior layer</th> <th>Param sharing</th> </tr> <tr> <td>MNIST (dyn bin)</td> <td>18</td> <td>64</td> <td>1000</td> <td>none</td> </tr> <tr> <td>MNIST (static bin)</td> <td>20</td> <td>256</td> <td>2000</td> <td>2 groups</td> </tr> <tr> <td>Omniglot</td> <td>16</td> <td>256</td> <td>800</td> <td>2 groups</td> </tr> <tr> <td>Caltech-101 Sil</td> <td>12</td> <td>80</td> <td>100</td> <td>complete</td> </tr> </table>

Table 2: Architectural hyperparameters used for each dataset. Successive columns list the number of layers of continuous latent variables, the number of such continuous latent variables per layer, the number of deterministic hidden units per layer in the neural network defining each hierarchical layer of the prior, and the use of parameter sharing in the prior.

Smaller datasets require more regularization, and achieve optimal performance with a smaller prior. On statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, we further regularize using recurrent parameter sharing. In the simplest case, each \( p(\mathfrak{z}_m|\mathfrak{z}_{<m}, \theta) \) and \( p(x|\mathfrak{z}, \theta) \) is a function of \( \sum_{l<m} \mathfrak{z}_l \), rather than a function of the concatenation \( [\mathfrak{z}_0, \mathfrak{z}_1, \ldots, \mathfrak{z}_{m-1}] \). Moreover, all \( p(\mathfrak{z}_{m \geq 1}|\mathfrak{z}_{<m}, \theta) \) share parameters. The RBM layer \( \mathfrak{z}_0 \) is rendered compatible with this parameterization by using a trainable linear transformation of \( \zeta \), \( M \cdot \zeta \), where the number of rows in \( M \) is equal to the number of variables in each \( \mathfrak{z}_{m>0} \). We refer to this architecture as complete recurrent parameter sharing. On datasets of intermediate size, a degree of recurrent parameter sharing somewhere between full independence and complete sharing is beneficial. We define the \( n \)-group architecture by dividing the continuous latent layers \( \mathfrak{z}_{m \geq 1} \) into \( n \) equally sized groups of consecutive layers. Each such group is independently subject to recurrent parameter sharing analogous to the complete sharing architecture, and the RBM layer \( \mathfrak{z}_0 \) is independently parameterized.

We use the spike-and-exponential transformation described in Section 2.1. The exponent is a trainable parameter, but it is bounded above by a value that increases linearly with the number of training epochs. We use warm-up with strength 20 for 5 epochs, and additional warm-up of strength 2 on the RBM alone for 20 epochs (Raiko et al., 2007; Bowman et al., 2016; Sønderby et al., 2016).

When \( p(x|\mathfrak{z}) \) is linear, all nonlinear transformations are part of the prior over the latent variables. In contrast, it is also possible to define the prior distribution over the continuous latent variables to be a simple factorial distribution, and push the nonlinearity into the final decoder \( p(x|\mathfrak{z}) \), as in traditional VAEs. The former case can be reduced to something analogous to the latter case using the reparameterization trick. However, a VAE with a completely independent prior does not regularize the nonlinearity of the prior; whereas a hierarchical prior requires that the nonlinearity of the prior (via its effect on the true posterior) be well-represented by the approximating posterior. Viewed another way, a completely independent prior requires the model to consist of many independent sources of variance, so the data manifold must be fully unfolded into an isotropic ball.
A hierarchical prior allows the data manifold to remain curled within a higher-dimensional ambient space, with the approximating posterior merely tracking its distortions. A higher-dimensional ambient space makes sense when modeling multiple classes of objects. For instance, the parameters characterizing limb positions and orientations for people have no analog for houses.

H.1 ESTIMATING THE LOG PARTITION FUNCTION

We estimate the log-likelihood by subtracting an estimate of the log partition function of the RBM (\( \log Z_p \) from Equation 6) from an importance-weighted computation analogous to that of Burda et al. (2016). For this purpose, we estimate the log partition function using bridge sampling, a variant of Bennett’s acceptance ratio method (Bennett, 1976; Shirts & Chodera, 2008), which produces unbiased estimates of the partition function. Interpolating distributions were of the form \( p(x)^{\beta} \), and sampled with a parallel tempering routine (Swendsen & Wang, 1986). The set of smoothing parameters \( \beta \) in \([0, 1]\) was chosen to approximately equalize replica exchange rates at 0.5. This standard criterion simultaneously keeps mixing times small and allows for robust inference. We make a conservative estimate for burn-in (0.5 of total run time), and choose the total length of the run, and the number of repeated experiments, to achieve sufficient statistical accuracy in the log partition function. In Figure 10, we plot the distribution of independent estimates of the log partition function for a single model of each dataset. These estimates differ by no more than about 0.1, indicating that the estimate of the log-likelihood should be accurate to within about 0.05 nats.

H.2 CONSTRAINED LAPLACIAN BATCH NORMALIZATION

Rather than traditional batch normalization (Ioffe & Szegedy, 2015), we base our batch normalization on the L1 norm. Specifically, we use:
\[ \mathbf{y} = \mathbf{x} - \overline{\mathbf{x}} \]
\[ \mathbf{x}_{bn} = \mathbf{y} / \left( \overline{|\mathbf{y}|} + \epsilon \right) \odot s + o, \]
where \( \mathbf{x} \) is a minibatch of scalar values, \( \overline{\mathbf{x}} \) denotes the mean of \( \mathbf{x} \), \( \overline{|\mathbf{y}|} \) denotes the mean of the absolute values of \( \mathbf{y} \), \( \odot \) indicates element-wise multiplication, \( \epsilon \) is a small positive constant, \( s \) is a learned scale, and \( o \) is a learned offset. For the approximating posterior over the RBM units, we bound \( 2 \leq s \leq 3 \), and \( -s \leq o \leq s \). This helps ensure that all units are both active and inactive in each minibatch, and thus that all units are used.
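A minimal NumPy sketch of this normalization follows (our illustration; the value of \( \epsilon \) and the use of clipping to enforce the bounds on \( s \) and \( o \) are assumptions):

```python
import numpy as np

def l1_batch_norm(x, s, o, eps=1e-4):
    """L1-based batch normalization: center by the minibatch mean, then
    scale by the minibatch mean absolute deviation.
    x: activations, shape (batch, units); s, o: learned scale and offset."""
    y = x - x.mean(axis=0, keepdims=True)
    return y / (np.abs(y).mean(axis=0, keepdims=True) + eps) * s + o

def clip_posterior_params(s, o):
    """Enforce 2 <= s <= 3 and -s <= o <= s for the approximating
    posterior over the RBM units, e.g. after each optimizer update."""
    s = np.clip(s, 2.0, 3.0)
    return s, np.clip(o, -s, s)
```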
![Distribution of estimates of the log-partition function, using Bennett’s acceptance ratio method with parallel tempering, for a single model trained on dynamically binarized MNIST (a), statically binarized MNIST (b), Omniglot (c), and Caltech-101 Silhouettes (d)](page_232_186_1107_563.png)

Figure 10: Distribution of estimates of the log partition function, using Bennett’s acceptance ratio method with parallel tempering, for a single model trained on dynamically binarized MNIST (a), statically binarized MNIST (b), Omniglot (c), and Caltech-101 Silhouettes (d).

I COMPARISON MODELS

In Table 1, we compare the performance of the discrete variational autoencoder to a selection of recent, competitive models. For dynamically binarized MNIST, we compare to deep belief networks (DBN; Hinton et al., 2006), reporting the results of Murray & Salakhutdinov (2009); importance-weighted autoencoders (IWAE; Burda et al., 2016); and ladder variational autoencoders (Ladder VAE; Sønderby et al., 2016). For the static MNIST binarization of Salakhutdinov & Murray (2008), we compare to Hamiltonian variational inference (HVI; Salimans et al., 2015); the deep recurrent attentive writer (DRAW; Gregor et al., 2015); the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015); deep latent Gaussian models with normalizing flows (Normalizing flows; Rezende & Mohamed, 2015); and the variational Gaussian process (Tran et al., 2016). On Omniglot, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016); the ladder variational autoencoder (Ladder VAE; Sønderby et al., 2016); and the restricted Boltzmann machine (RBM; Smolensky, 1986) and deep belief network (DBN; Hinton et al., 2006), reporting the results of Burda et al. (2015). Finally, for Caltech-101 Silhouettes, we compare to the importance-weighted autoencoder (IWAE; Burda et al., 2016), reporting the results of Li & Turner (2016); reweighted wake-sleep with a deep sigmoid belief network (RWS SBN; Bornschein & Bengio, 2015); the restricted Boltzmann machine (RBM; Smolensky, 1986), reporting the results of Cho et al. (2013); and the neural adaptive importance sampler with neural autoregressive distribution estimator (NAIS NADE; Du et al., 2015).

![Evolution of samples from a discrete VAE trained on statically binarized MNIST, using persistent RBM Markov chains.](page_186_120_1217_495.png)

Figure 11: Evolution of samples from a discrete VAE trained on statically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the digit ID remains constant demonstrate that the RBM has distinct modes, each of which corresponds to a single digit ID, despite being trained in a wholly unsupervised manner.

J SUPPLEMENTARY RESULTS

To highlight the contribution of the various components of our generative model, we investigate performance on a selection of simplified models.\(^{21}\) First, we remove the continuous latent layers. The resulting prior, depicted in Figure 1b, consists of the bipartite Boltzmann machine (RBM), the smoothing variables \( \zeta \), and a factorial Bernoulli distribution over the observed variables \( x \) defined via a deep neural network with a logistic final layer. This probabilistic model achieves a log-likelihood of \( -86.9 \) with 128 RBM units and \( -85.2 \) with 200 RBM units. Next, we further restrict the neural network defining the distribution over the observed variables \( x \) given the smoothing variables \( \zeta \) to consist of a linear transformation followed by a pointwise logistic nonlinearity, analogous to a sigmoid belief network (SBN; Spiegelhalter & Lauritzen, 1990; Neal, 1992). This decreases the log-likelihood to \( -92.7 \) with 128 RBM units and \( -88.8 \) with 200 RBM units. We then remove the lateral connections in the RBM, reducing it to a set of independent binary random variables. The resulting network is a noisy sigmoid belief network. That is, samples are produced by drawing samples from the independent binary random variables, multiplying by an independent noise source, and then sampling from the observed variables as in a standard SBN.
With this SBN-like architecture, the discrete variational autoencoder achieves a log-likelihood of \( -97.0 \) with 200 binary latent variables. Finally, we replace the hierarchical approximating posterior of Figure 3a with the factorial approximating posterior of Figure 1a. This simplification of the approximating posterior, in addition to the prior, reduces the log-likelihood to \( -102.9 \) with 200 binary latent variables.

\footnotetext{21In all cases, we report the log-likelihood on statically binarized MNIST (Salakhutdinov & Murray, 2008), estimated with \( 10^4 \) importance-weighted samples (Burda et al., 2016).}

Figure 12: Evolution of samples from a discrete VAE trained on Omniglot, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM.

Figure 13: Evolution of samples from a discrete VAE trained on Caltech-101 Silhouettes, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. Vertical sequences in which the silhouette shape remains similar demonstrate that the RBM has distinct modes, each of which corresponds to a single silhouette type, despite being trained in a wholly unsupervised manner.

Figures 11, 12, and 13 repeat the analysis of Figure 5 for statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes. Specifically, they show the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the object class over many successive rows in Figures 11, 12, and 13 indicates that the RBM prior has well-separated modes.

On statically binarized MNIST, the RBM still learns distinct, separated modes corresponding to most of the different digit types. However, these modes are not as well separated as in dynamically binarized MNIST, as is evident from the more rapid switching between digit types in Figure 11. There are no obvious modes for Omniglot in Figure 12; it is plausible that an RBM with 128 units could not represent enough well-separated modes to capture the large number of distinct character types in the Omniglot dataset. On Caltech-101 Silhouettes, there may be a mode corresponding to large, roughly convex blobs.
ABSTRACT

Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.

1 INTRODUCTION

Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such as denoising and inpainting, and regularizing supervised tasks such as classification (Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Rasmus et al., 2015). Many datasets of practical interest are projections of underlying distributions over real-world objects into an observation space; the pixels of an image, for example. When the real-world objects are of discrete types subject to continuous transformations, these datasets comprise multiple disconnected smooth manifolds. For instance, natural images change smoothly with respect to the position and pose of objects, as well as scene lighting. At the same time, it is extremely difficult to directly transform the image of a person to one of a car while remaining on the manifold of natural images.

It would be natural to represent the space within each disconnected component with continuous variables, and the selection amongst these components with discrete variables. In contrast, most state-of-the-art probabilistic models use exclusively discrete variables — as do DBMs (Salakhutdinov & Hinton, 2009), NADEs (Larochelle & Murray, 2011), sigmoid belief networks (Spiegelhalter & Lauritzen, 1990; Bornschein et al., 2016), and DARNs (Gregor et al., 2014) — or exclusively continuous variables — as do VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and GANs (Goodfellow et al., 2014).\(^{1}\) Moreover, it would be desirable to apply the efficient variational autoencoder framework to models with discrete values, but this has proven difficult, since backpropagation through discrete variables is generally not possible (Bengio et al., 2013; Raiko et al., 2015).

We introduce a novel class of probabilistic models, comprising an undirected graphical model defined over binary latent variables, followed by multiple directed layers of continuous latent variables. This class of models captures both the discrete class of the object in an image, and its specific continuously deformable realization. Moreover, we show how these models can be trained efficiently using the variational autoencoder framework, including backpropagation through the binary latent variables. We ensure that the evidence lower bound remains tight by incorporating a hierarchical approximation to the posterior distribution of the latent variables, which can model strong correlations.
Since these models efficiently marry the variational autoencoder framework with discrete latent variables, we call them discrete variational autoencoders (discrete VAEs).

\footnotetext{1Spike-and-slab RBMs (Courville et al., 2011) use both discrete and continuous latent variables.}

1.1 VARIATIONAL AUTOENCODERS ARE INCOMPATIBLE WITH DISCRETE DISTRIBUTIONS

Conventionally, unsupervised learning algorithms maximize the log-likelihood of an observed dataset under a probabilistic model. Even stochastic approximations to the gradient of the log-likelihood generally require samples from the posterior and prior of the model. However, sampling from undirected graphical models is generally intractable (Long & Servedio, 2010), as is sampling from the posterior of a directed graphical model conditioned on its leaf variables (Dagum & Luby, 1993).

In contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound on the log-likelihood (Jordan et al., 1999), such as the evidence lower bound (ELBO, \( \mathcal{L}(x, \theta, \phi) \); Hinton & Zemel, 1994):
\[ \mathcal{L}(x, \theta, \phi) = \log p(x|\theta) - \mathrm{KL}\left[q(z|x, \phi)||p(z|x, \theta)\right], \tag{1} \]
where \( q(z|x, \phi) \) is a computationally tractable approximation to the posterior distribution \( p(z|x, \theta) \). We denote the observed random variables by \( x \), the latent random variables by \( z \), the parameters of the generative model by \( \theta \), and the parameters of the approximating posterior by \( \phi \). The variational autoencoder (VAE; Kingma & Welling, 2014; Rezende et al., 2014; Kingma et al., 2014) regroups the evidence lower bound of Equation 1 as:
\[ \mathcal{L}(x, \theta, \phi) = - \underbrace{\mathrm{KL}\left[q(z|x, \phi)||p(z|\theta)\right]}_{\text{KL term}} + \underbrace{\mathbb{E}_q\left[\log p(x|z, \theta)\right]}_{\text{autoencoding term}}. \tag{2} \]
In many cases of practical interest, such as Gaussian \( q(z|x) \) and \( p(z) \), the KL term of Equation 2 can be computed analytically. Moreover, a low-variance stochastic approximation to the gradient of the autoencoding term can be obtained using backpropagation and the reparameterization trick, so long as samples from the approximating posterior \( q(z|x) \) can be drawn using a differentiable, deterministic function \( f(x, \phi, \rho) \) of the combination of the inputs, the parameters, and a set of input- and parameter-independent random variables \( \rho \sim D \). For instance, samples can be drawn from a Gaussian distribution with mean and variance determined by the input, \( \mathcal{N}(m(x, \phi), v(x, \phi)) \), using \( f(x, \phi, \rho) = m(x, \phi) + \sqrt{v(x, \phi)} \cdot \rho \), where \( \rho \sim \mathcal{N}(0, 1) \). When such an \( f(x, \phi, \rho) \) exists,
\[ \frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x, \phi)} \left[\log p(x|z, \theta)\right] \approx \frac{1}{N} \sum_{\rho \sim D} \frac{\partial}{\partial \phi} \log p(x|f(x, \rho, \phi), \theta). \tag{3} \]
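For concreteness, the Gaussian example above can be written as the following sketch (our illustration; in practice \( m \) and \( v \) would be neural-network outputs \( m(x, \phi) \) and \( v(x, \phi) \)):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterized_gaussian(m, v):
    """Draw z ~ N(m, v) as a deterministic, differentiable function of
    (m, v) and parameter-independent noise rho ~ N(0, 1):
    f(x, phi, rho) = m + sqrt(v) * rho."""
    rho = rng.standard_normal(np.shape(m))
    return m + np.sqrt(v) * rho

z = reparameterized_gaussian(np.zeros(4), np.ones(4))  # toy usage
```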
The reparameterization trick can be generalized to a large set of distributions, including nonfactorial approximating posteriors. We address this issue carefully in Appendix A, where we find that an analog of Equation 3 holds. Specifically, \( D_i \) is the uniform distribution between 0 and 1, and
\[ f(x, \phi, \rho) = \mathbf{F}^{-1}(\rho), \tag{4} \]
where \( \mathbf{F} \) is the conditional-marginal cumulative distribution function (CDF) defined by:
\[ F_i(\mathbf{x}) = \int_{x_i' = -\infty}^{x_i} p\left(x_i'|x_1, \ldots, x_{i-1}\right) dx_i'. \tag{5} \]
However, this generalization is only possible if the inverse of the conditional-marginal CDF exists and is differentiable.

A formulation comparable to Equation 3 is not possible for discrete distributions, such as restricted Boltzmann machines (RBMs) (Smolensky, 1986):
\[ p(z) = \frac{1}{Z_p} e^{-E_p(z)} = \frac{1}{Z_p} \cdot e^{z^\top W z + b^\top z}, \tag{6} \]
where \( z \in \{0, 1\}^n \), \( Z_p \) is the partition function of \( p(z) \), and the lateral connection matrix \( W \) is triangular. Any approximating posterior that only assigns nonzero probability to a discrete domain corresponds to a CDF that is piecewise-constant. That is, the range of the CDF is a proper subset of the interval \([0, 1]\). The domain of the inverse CDF is thus also a proper subset of \([0, 1]\), and its derivative is not defined, as required in Equations 3 and 4.\footnote{This problem remains even if we use the quantile function, \( F_p^{-1}(\rho) = \inf \left\{ z \in \mathbb{R} : \int_{z' = -\infty}^{z} p(z') \geq \rho \right\} \), the derivative of which is either zero or infinite if \( p \) is a discrete distribution.}

In the following sections, we present the discrete variational autoencoder (discrete VAE), a hierarchical probabilistic model consisting of an RBM,\footnote{Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected to the rest of the model. In contrast to a traditional RBM, there is no distinction between the “visible” units and the “hidden” units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome “fully hidden bipartite Boltzmann machine.”} followed by multiple directed layers of continuous latent variables. This model is efficiently trainable using the variational autoencoder formalism, as in Equation 3, including backpropagation through its discrete latent variables.

1.2 RELATED WORK

Recently, there have been many efforts to develop effective unsupervised learning techniques by building upon variational autoencoders. Importance weighted autoencoders (Burda et al., 2016), Hamiltonian variational inference (Salimans et al., 2015), normalizing flows (Rezende & Mohamed, 2015), and variational Gaussian processes (Tran et al., 2016) improve the approximation to the posterior distribution. Ladder variational autoencoders (Sønderby et al., 2016) increase the power of the architecture of both approximating posterior and prior. Neural adaptive importance sampling (Du et al., 2015) and reweighted wake-sleep (Bornschein & Bengio, 2015) use sophisticated approximations to the gradient of the log-likelihood that do not admit direct backpropagation. Structured variational autoencoders use conjugate priors to construct powerful approximating posterior distributions (Johnson et al., 2016).

It is easy to construct a stochastic approximation to the gradient of the ELBO that admits both discrete and continuous latent variables, and only requires computationally tractable samples. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Mnih & Rezende, 2016), which we discuss in greater detail in Appendix B. Prior efforts by Makhzani et al.
(2015) to use multimodal priors with implicit discrete variables governing the modes did not successfully align the modes of the prior with the intrinsic clusters of the dataset. Rectified Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete variables are also implicit, and their prior is factorial and thus unimodal (Salimans, 2016). Graves (2016) computes VAE-like gradient approximations for mixture models, but the component models are assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful multimodal priors on the discrete variables, and a wider set of mappings to the continuous units.

The generative model underlying the discrete variational autoencoder resembles a deep belief network (DBN; Hinton et al., 2006). A DBN comprises a sigmoid belief network, the top layer of which is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltzmann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all hidden layers below the bipartite Boltzmann machine are composed of continuous latent variables with a fully autoregressive layer-wise connection architecture. Each layer \( j \) receives connections from all previous layers \( i < j \), with connections from the bipartite Boltzmann machine mediated by a set of smoothing variables. However, these architectural differences are secondary to those in the gradient estimation technique. Whereas DBNs are traditionally trained by unrolling a succession of RBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through the evidence lower bound.

2 BACKPROPAGATING THROUGH DISCRETE LATENT VARIABLES BY ADDING CONTINUOUS LATENT VARIABLES

When working with an approximating posterior over discrete latent variables, we can effectively smooth the conditional-marginal CDF (defined by Equation 5 and Appendix A) by augmenting the latent representation with a set of continuous random variables. The conditional-marginal CDF over the new continuous variables is invertible and its inverse is differentiable, as required in Equations 3 and 4. We redefine the generative model so that the conditional distribution of the observed variables given the latent variables only depends on the new continuous latent space. This does not alter the fundamental form of the model, or the KL term of Equation 2; rather, it can be interpreted as adding a noisy nonlinearity, like dropout (Srivastava et al., 2014) or batch normalization with a small minibatch (Ioffe & Szegedy, 2015), to each latent variable in the approximating posterior and the prior. The conceptual motivation for this approach is discussed in Appendix C.

Figure 1: Graphical models of the smoothed approximating posterior (a) \( q(\zeta, z|x) \) and prior (b) \( p(x, \zeta, z) \), and the network realizing the autoencoding term of the ELBO from Equation 2 (c). Continuous latent variables \( \zeta_i \) are smoothed analogs of discrete latent variables \( z_i \), and insulate \( z \) from the observed variables \( x \) in the prior (b). This facilitates the marginalization of the discrete \( z \) in the autoencoding term of the ELBO, resulting in a network (c) in which all operations are deterministic and differentiable given independent stochastic input \( \rho \sim U[0, 1] \).
Specifically, as shown in Figure 1a, we augment the latent representation in the approximating posterior with continuous random variables \( \zeta \),\footnote{We always use a variant of \( z \) for latent variables. This is zeta, or Greek \( \zeta \). The discrete latent variables \( z \) can conveniently be thought of as English \( z \).} conditioned on the discrete latent variables \( z \) of the RBM: \[ q(\zeta, z|x, \phi) = r(\zeta|z) \cdot q(z|x, \phi), \qquad \text{where} \] \[ r(\zeta|z) = \prod_i r(\zeta_i|z_i). \] The support of \( r(\zeta|z) \) for all values of \( z \) must be connected, so the marginal distribution \( q(\zeta|x, \phi) = \sum_z r(\zeta|z) \cdot q(z|x, \phi) \) has a constant, connected support so long as \( 0 < q(z|x, \phi) < 1 \). We further require that \( r(\zeta|z) \) is continuous and differentiable except at the endpoints of its support, so the inverse conditional-marginal CDF of \( q(\zeta|x, \phi) \) is differentiable in Equations 3 and 4, as we discuss in Appendix A. As shown in Figure 1b, we correspondingly augment the prior with \( \zeta \): \[ p(\zeta, z|\theta) = r(\zeta|z) \cdot p(z|\theta), \] where \( r(\zeta|z) \) is the same as for the approximating posterior. Finally, we require that the conditional distribution over the observed variables only depends on \( \zeta \): \[ p(x|\zeta, z, \theta) = p(x|\zeta, \theta). \tag{7} \] The smoothing distribution \( r(\zeta|z) \) transforms the model into a continuous function of the distribution over \( z \), and allows us to use Equations 2 and 3 directly to obtain low-variance stochastic approximations to the gradient. Given this expansion, we can simplify Equations 3 and 4 by dropping the dependence on \( z \) and applying Equation 16 of Appendix A, which generalizes Equation 3: \[ \frac{\partial}{\partial \phi} \mathbb{E}_{q(\zeta, z|x, \phi)} [\log p(x|\zeta, z, \theta)] \approx \frac{1}{N} \sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial \phi} \log p \left( x | \mathbf{F}_{q(\zeta|x, \phi)}^{-1}(\rho), \theta \right). \tag{8} \] If the approximating posterior is factorial, then each \( F_i \) is an independent CDF, without conditioning or marginalization. As we shall demonstrate in Section 2.1, \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) is a function of \( q(z = 1|x,\phi) \), where \( q(z = 1|x,\phi) \) is a deterministic probability value calculated by a parameterized function, such as a neural network. The autoencoder implicit in Equation 8 is shown in Figure 1c. Initially, input \( x \) is passed into a deterministic feedforward network \( q(z = 1|x,\phi) \), for which the final nonlinearity is the logistic function. Its output \( q \), along with an independent random variable \( \rho \sim U[0,1] \), is passed into the deterministic function \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) to produce a sample of \( \zeta \). This \( \zeta \), along with the original input \( x \), is finally passed to \( \log p(x|\zeta,\theta) \). The expectation of this log probability with respect to \( \rho \) is the autoencoding term of the VAE formalism, as in Equation 2. Moreover, conditioned on the input and the independent \( \rho \), this autoencoder is deterministic and differentiable, so backpropagation can be used to produce a low-variance, computationally-efficient approximation to the gradient. 
2.1 SPIKE-AND-EXPONENTIAL SMOOTHING TRANSFORMATION

As a concrete example consistent with sparse coding, consider the spike-and-exponential transformation from binary \( z \) to continuous \( \zeta \):
\[ r(\zeta_i|z_i = 0) = \begin{cases} \infty, & \text{if } \zeta_i = 0 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=0)}(\zeta') = 1 \]
\[ r(\zeta_i|z_i = 1) = \begin{cases} \frac{\beta e^{\beta \zeta_i}}{e^{\beta} - 1}, & \text{if } 0 \leq \zeta_i \leq 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=1)}(\zeta') = \left. \frac{e^{\beta \zeta}}{e^{\beta} - 1} \right|_0^{\zeta'} = \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1}, \]
where \( F_p(\zeta') = \int_{-\infty}^{\zeta'} p(\zeta) \cdot d\zeta \) is the CDF of probability distribution \( p \) in the domain \([0, 1]\). This transformation from \( z_i \) to \( \zeta_i \) is invertible: \( \zeta_i = 0 \Leftrightarrow z_i = 0 \), and \( \zeta_i > 0 \Leftrightarrow z_i = 1 \) almost surely.\footnote{In the limit \( \beta \to \infty \), \( \zeta_i = z_i \) almost surely, and the continuous variables \( \zeta \) can effectively be removed from the model. This trick can be used after training with finite \( \beta \) to produce a model without smoothing variables \( \zeta \).}

We can now find the CDF for \( q(\zeta|x, \phi) \) as a function of \( q(z = 1|x, \phi) \) in the domain \([0, 1]\), marginalizing out the discrete \( z \):
\[ F_{q(\zeta|x,\phi)}(\zeta') = (1 - q(z = 1|x, \phi)) \cdot F_{r(\zeta_i|z_i=0)}(\zeta') + q(z = 1|x, \phi) \cdot F_{r(\zeta_i|z_i=1)}(\zeta') = q(z = 1|x, \phi) \cdot \left( \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1} - 1 \right) + 1. \]
To evaluate the autoencoder of Figure 1c, and through it the gradient approximation of Equation 8, we must invert the conditional-marginal CDF \( F_{q(\zeta|x,\phi)} \):
\[ F_{q(\zeta|x,\phi)}^{-1}(\rho) = \begin{cases} \frac{1}{\beta} \cdot \log \left[ \left( \frac{\rho + q - 1}{q} \right) \cdot (e^{\beta} - 1) + 1 \right], & \text{if } \rho \geq 1 - q \\ 0, & \text{otherwise,} \end{cases} \tag{9} \]
where we use the substitution \( q(z = 1|x, \phi) \to q \) to simplify notation. For all values of the independent random variable \( \rho \sim U[0, 1] \), the function \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) rectifies the input \( q(z = 1|x, \phi) \) if \( q \leq 1 - \rho \) in a manner analogous to a rectified linear unit (ReLU), as shown in Figure 2a. It is also quasi-sigmoidal, in that \( F^{-1} \) is increasing but concave-down if \( q > 1 - \rho \). The effect of \( \rho \) on \( F^{-1} \) is qualitatively similar to that of dropout (Srivastava et al., 2014), depicted in Figure 2b, or the noise injected by batch normalization (Ioffe & Szegedy, 2015) using small minibatches, shown in Figure 2c.

Other expansions to the continuous space are possible. In Appendix D.1, we consider the case where both \( r(\zeta_i|z_i = 0) \) and \( r(\zeta_i|z_i = 1) \) are linear functions of \( \zeta \); in Appendix D.2, we develop a spike-and-slab transformation; and in Appendix E, we explore a spike-and-Gaussian transformation where the continuous \( \zeta \) is directly dependent on the input \( x \) in addition to the discrete \( z \).

Figure 2: Inverse CDF of the spike-and-exponential smoothing transformation for \( \rho \in \{0.2, 0.5, 0.8\} \); \( \beta = 1 \) (dotted), \( \beta = 3 \) (solid), and \( \beta = 5 \) (dashed) (a). Rectified linear unit with dropout rate 0.5 (b). Shift (red) and scale (green) noise from batch normalization, with magnitude 0.3 (dashed), -0.3 (dotted), or 0 (solid blue), before a rectified linear unit (c). In all cases, the abscissa is the input and the ordinate is the output of the effective transfer function. The novel stochastic nonlinearity \( F_{q(\zeta|x,\phi)}^{-1}(\rho) \) from Figure 1c, of which (a) is an example, is qualitatively similar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c).
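Equation 9 is straightforward to implement as a vectorized stochastic nonlinearity. The following NumPy sketch is our illustration (the small constant guarding the division is an assumption):

```python
import numpy as np

def spike_exp_inverse_cdf(rho, q, beta=3.0):
    """Inverse conditional-marginal CDF of Equation 9: maps uniform noise
    rho in [0, 1] and probabilities q = q(z = 1|x, phi) to smoothed samples
    zeta in [0, 1]. Returns 0 (the spike) wherever rho < 1 - q."""
    active = rho >= 1.0 - q
    arg = np.where(active, (rho + q - 1.0) / np.maximum(q, 1e-12), 0.0)
    # For active units: zeta = log(arg * (e^beta - 1) + 1) / beta
    return np.where(active, np.log(arg * np.expm1(beta) + 1.0) / beta, 0.0)
```

As the text notes, the output is identically zero when \( q \leq 1 - \rho \), and otherwise increases smoothly with \( q \), so the unit behaves like a noisy, quasi-sigmoidal ReLU.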
3 ACCOMMODATING EXPLAINING-AWAY WITH A HIERARCHICAL APPROXIMATING POSTERIOR

When a probabilistic model is defined in terms of a prior distribution \( p(z) \) and a conditional distribution \( p(x|z) \), the observation of \( x \) often induces strong correlations in the posterior \( p(z|x) \) due to phenomena such as explaining-away (Pearl, 1988). Moreover, we wish to use an RBM as the prior distribution (Equation 6), which itself may have strong correlations. In contrast, to maintain tractability, many variational approximations use a product of independent approximating posterior distributions (e.g., mean-field methods, but also Kingma & Welling (2014); Rezende et al. (2014)).

To accommodate strong correlations in the posterior distribution while maintaining tractability, we introduce a hierarchy into the approximating posterior \( q(z|x) \) over the discrete latent variables. Specifically, we divide the latent variables \( z \) of the RBM into disjoint groups, \( z_1, \ldots, z_k \),\footnote{The continuous latent variables \( \zeta \) are divided into complementary disjoint groups \( \zeta_1, \ldots, \zeta_k \).} and define the approximating posterior via a directed acyclic graphical model over these groups:
\[ q(z_1, \zeta_1, \ldots, z_k, \zeta_k|x, \phi) = \prod_{1 \leq j \leq k} r(\zeta_j|z_j) \cdot q\left(z_j|\zeta_{i<j}, x, \phi\right), \quad \text{where} \]
\[ q(z_j|\zeta_{i<j}, x, \phi) = \frac{e^{g_j(\zeta_{i<j}, x, \phi)^\top \cdot z_j}}{\prod_{z_i \in z_j} \left(1 + e^{g_{z_i}(\zeta_{i<j}, x, \phi)}\right)}, \tag{10} \]
\( z_j \in \{0, 1\}^n \), and \( g_j(\zeta_{i<j}, x, \phi) \) is a parameterized function of the inputs and preceding \( \zeta_i \), such as a neural network. The corresponding graphical model is depicted in Figure 3a, and the integration of such hierarchical approximating posteriors into the reparameterization trick is discussed in Appendix A. If each group \( z_j \) contains a single variable, this dependence structure is analogous to that of a deep autoregressive network (DARN; Gregor et al., 2014), and can represent any distribution. However, the dependence of \( z_j \) on the preceding discrete variables \( z_{i<j} \) is always mediated by the continuous variables \( \zeta_{i<j} \).

This hierarchical approximating posterior does not affect the form of the autoencoding term in Equation 8, except to increase the depth of the autoencoder, as shown in Figure 3b. The deterministic probability value \( q(z_j = 1|\zeta_{i<j}, x, \phi) \) of Equation 10 is parameterized, generally by a neural network, in a manner analogous to Section 2. However, the final logistic function is made explicit in Equation 10 to simplify Equation 12. For each successive layer \( j \) of the autoencoder, input \( x \) and all previous \( \zeta_{i<j} \) are passed into the network computing \( q(z = 1|\zeta_{i<j}, x, \phi) \).
Its output \( q_j \), along with an independent random variable \( \rho \sim U[0, 1] \), is passed to the deterministic function \( \mathbf{F}_{q(\zeta_j|\zeta_{i<j}, x, \phi)}^{-1}(\rho) \) to produce a sample of \( \zeta_j \). Once all \( \zeta_j \) have been recursively computed, the full \( \zeta \) along with the original input \( x \) is finally passed to \( \log p(x|\zeta, \theta) \). The expectation of this log probability with respect to \( \rho \) is again the autoencoding term of the VAE formalism, as in Equation 2.

Figure 3: Graphical model of the hierarchical approximating posterior (a) \( q(\zeta, z|x) \) and the network realizing the autoencoding term of the ELBO (b) from Equation 2. Discrete latent variables \( z_j \) only depend on the previous \( z_{i<j} \) through their smoothed analogs \( \zeta_{i<j} \). The autoregressive hierarchy allows the approximating posterior to capture correlations and multiple modes. Again, all operations in (b) are deterministic and differentiable given the stochastic input \( \rho \).

In Appendix F, we show that the gradients of the remaining KL term of the ELBO (Equation 2) can be estimated stochastically using:
\[ \frac{\partial}{\partial \theta} \mathrm{KL}\left[q||p\right] = \mathbb{E}_{q(z_1|x,\phi)} \left[ \cdots \left[ \mathbb{E}_{q(z_k|\zeta_{i<k}, x, \phi)} \left[ \frac{\partial E_p(z, \theta)}{\partial \theta} \right] \right] \right] - \mathbb{E}_{p(z|\theta)} \left[ \frac{\partial E_p(z, \theta)}{\partial \theta} \right] \tag{11} \]
and
\[ \frac{\partial}{\partial \phi} \mathrm{KL}\left[q||p\right] = \mathbb{E}_\rho \left[ (g(x, \zeta) - b)^\top \cdot \frac{\partial q}{\partial \phi} - z^\top \cdot W \cdot \left( \frac{1 - z}{1 - q} \odot \frac{\partial q}{\partial \phi} \right) \right]. \tag{12} \]
In particular, Equation 12 is substantially lower-variance than the naive approach to calculating \( \frac{\partial}{\partial \phi} \mathrm{KL}\left[q||p\right] \), based upon REINFORCE.
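The recursive structure of Figure 3b can be sketched as a simple loop. This is our illustration, with a hypothetical list `g_nets` standing in for the per-group networks \( g_j \); it reuses the `spike_exp_inverse_cdf` sketch given after Section 2.1.

```python
import numpy as np

def sample_hierarchical_posterior(x, g_nets, rng, beta=3.0):
    """Sample zeta_1, ..., zeta_k from the hierarchical approximating
    posterior: group j's logits g_j depend on x and all previously sampled
    zeta_{i<j}; zeta_j is then drawn via the inverse CDF of Equation 9."""
    zetas = []
    for g_net in g_nets:
        prev = np.concatenate(zetas) if zetas else np.empty(0)
        g_j = g_net(x, prev)              # logits for group j
        q_j = 1.0 / (1.0 + np.exp(-g_j))  # q(z_j = 1 | zeta_{i<j}, x)
        rho = rng.uniform(size=q_j.shape)
        zetas.append(spike_exp_inverse_cdf(rho, q_j, beta))
    return np.concatenate(zetas)
```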
4 MODELLING CONTINUOUS DEFORMATIONS WITH A HIERARCHY OF CONTINUOUS LATENT VARIABLES

We can make both the generative model and the approximating posterior more powerful by adding additional layers of latent variables below the RBM. While these layers can be discrete, we focus on continuous variables, which have proven to be powerful in generative adversarial networks (Goodfellow et al., 2014) and traditional variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014). When positioned below and conditioned on a layer of discrete variables, continuous variables can build continuous manifolds, from which the discrete variables can choose. This complements the structure of the natural world, where a percept is determined first by a discrete selection of the types of objects present in the scene, and then by the position, pose, and other continuous attributes of these objects.

Specifically, we augment the latent representation with continuous random variables \( \mathfrak{z} \),\footnote{We always use a variant of \( z \) for latent variables. This is Fraktur \( \mathfrak{z} \), or German \( z \).} and define both the approximating posterior and the prior to be layer-wise fully autoregressive directed graphical models. We use the same autoregressive variable order for the approximating posterior as for the prior, as in DRAW (Gregor et al., 2015), variational recurrent neural networks (Chung et al., 2015), the deep VAE of Salimans (2016), and ladder networks (Rasmus et al., 2015; Sønderby et al., 2016). We discuss the motivation for this ordering in Appendix G.

Figure 4: Graphical models of the approximating posterior (a) \( q(\mathfrak{z}, \zeta, z|x) \) and prior (b) \( p(x, \mathfrak{z}, \zeta, z) \) with a hierarchy of continuous latent variables. The shaded regions in parts (a) and (b) expand to Figures 3a and 1b respectively. The continuous latent variables \( \mathfrak{z} \) build continuous manifolds, capturing properties like position and pose, conditioned on the discrete latent variables \( z \), which can represent the discrete types of objects in the image.

The directed graphical models of the approximating posterior and prior are defined by:
\[ q(\mathfrak{z}_0, \ldots, \mathfrak{z}_n|x, \phi) = \prod_{0 \leq m \leq n} q\left( \mathfrak{z}_m | \mathfrak{z}_{<m}, x, \phi \right) \quad \text{and} \quad p(\mathfrak{z}_0, \ldots, \mathfrak{z}_n|\theta) = \prod_{0 \leq m \leq n} p\left( \mathfrak{z}_m | \mathfrak{z}_{<m}, \theta \right). \tag{13} \]
The full set of latent variables associated with the RBM is now denoted by \( \mathfrak{z}_0 = \{ z_1, \zeta_1, \ldots, z_k, \zeta_k \} \). However, the conditional distributions in Equation 13 only depend on the continuous \( \zeta_j \). Each \( \mathfrak{z}_{m \geq 1} \) denotes a layer of continuous latent variables, and Figure 4 shows the resulting graphical model.

The ELBO decomposes as:
\[ \mathcal{L}(x, \theta, \phi) = \mathbb{E}_{q(\mathfrak{z}|x, \phi)} [\log p(x|\mathfrak{z}, \theta)] - \sum_m \mathbb{E}_{q(\mathfrak{z}_{<m}|x, \phi)} \left[ \mathrm{KL}\left[ q(\mathfrak{z}_m|\mathfrak{z}_{<m}, x, \phi) || p(\mathfrak{z}_m|\mathfrak{z}_{<m}, \theta) \right] \right]. \tag{14} \]
If both \( q(\mathfrak{z}_m|\mathfrak{z}_{<m}, x, \phi) \) and \( p(\mathfrak{z}_m|\mathfrak{z}_{<m}, \theta) \) are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. Gradients can be passed through the \( q(\mathfrak{z}_{<m}|x, \phi) \) using the traditional reparameterization trick, described in Section 1.1.

5 RESULTS

Discrete variational autoencoders comprise a smoothed RBM (Section 2) with a hierarchical approximating posterior (Section 3), followed by a hierarchy of continuous latent variables (Section 4). We parameterize all distributions with neural networks, except the smoothing distribution \( r(\zeta|z) \) discussed in Section 2. Like NVIL (Mnih & Gregor, 2014) and VAEs (Kingma & Welling, 2014; Rezende et al., 2014), we define all approximating posteriors \( q \) to be explicit functions of \( x \), with parameters \( \phi \) shared between all inputs \( x \). For distributions over discrete variables, the neural networks output the parameters of a factorial Bernoulli distribution using a logistic final layer, as in Equation 10; for the continuous \( \mathfrak{z} \), the neural networks output the mean and log-standard deviation of a diagonal-covariance Gaussian distribution using a linear final layer.
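For reference, the closed-form KL divergence between diagonal Gaussians, used for each term of the sum in Equation 14, can be sketched as follows (our illustration; as stated above, the networks output means and log-standard deviations):

```python
import numpy as np

def diag_gaussian_kl(m_q, log_s_q, m_p, log_s_p):
    """KL[q || p] between diagonal Gaussians parameterized by means m
    and log-standard deviations log_s, summed over dimensions."""
    v_q, v_p = np.exp(2.0 * log_s_q), np.exp(2.0 * log_s_p)
    return 0.5 * np.sum(2.0 * (log_s_p - log_s_q)
                        + (v_q + (m_q - m_p) ** 2) / v_p - 1.0)
```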
Each layer of the neural networks parameterizing the distributions over \( z \), \( \mathfrak{z} \), and \( x \) consists of a linear transformation, batch normalization (Ioffe & Szegedy, 2015) (but see Appendix H.2), and a rectified-linear point-wise nonlinearity (ReLU). We stochastically approximate the expectation with respect to the RBM prior \( p(z|\theta) \) in Equation 11 using block Gibbs sampling on persistent Markov chains, analogous to persistent contrastive divergence (Tieleman, 2008). We minimize the ELBO using ADAM (Kingma & Ba, 2015) with a decaying step size.

The hierarchical structure of Section 4 is very powerful, and overfits without strong regularization of the prior, as shown in Appendix H. In contrast, powerful approximating posteriors do not induce significant overfitting. To address this problem, we use conditional distributions over the input \( p(x|\zeta, \theta) \) without any deterministic hidden layers, except on Omniglot. Moreover, all other neural networks in the prior have only one hidden layer, the size of which is carefully controlled. On statically binarized MNIST, Omniglot, and Caltech-101, we share parameters between the layers of the hierarchy over \( \mathfrak{z} \). We present the details of the architecture in Appendix H.

We train the resulting discrete VAEs on the permutation-invariant MNIST (LeCun et al., 1998), Omniglot\(^8\) (Lake et al., 2013), and Caltech-101 Silhouettes datasets (Marlin et al., 2010). For MNIST, we use both the static binarization of Salakhutdinov & Murray (2008) and dynamic binarization. Estimates of the log-likelihood\(^9\) of these models, computed using the method of Burda et al. (2016) with \( 10^4 \) importance-weighted samples, are listed in Table 1. The reported log-likelihoods for discrete VAEs are the average of 16 runs; the standard deviations of these log-likelihoods are 0.08, 0.04, 0.05, and 0.11 for dynamically and statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, respectively. Removing the RBM reduces the test set log-likelihood by 0.09, 0.37, 0.69, and 0.66.

<table> <tr> <th colspan="2">MNIST (dynamic binarization)</th> <th colspan="3">MNIST (static binarization)</th> </tr> <tr> <th></th> <th>LL</th> <th></th> <th>ELBO</th> <th>LL</th> </tr> <tr> <td>DBN</td> <td>-84.55</td> <td>HVI</td> <td>-88.30</td> <td>-85.51</td> </tr> <tr> <td>IWAE</td> <td>-82.90</td> <td>DRAW</td> <td>-87.40</td> <td></td> </tr> <tr> <td>Ladder VAE</td> <td>-81.74</td> <td>NAIS NADE</td> <td></td> <td>-83.67</td> </tr> <tr> <td>Discrete VAE</td> <td><b>-80.15</b></td> <td>Normalizing flows</td> <td>-85.10</td> <td></td> </tr> <tr> <td></td> <td></td> <td>Variational Gaussian process</td> <td></td> <td>-81.32</td> </tr> <tr> <td></td> <td></td> <td>Discrete VAE</td> <td><b>-84.58</b></td> <td><b>-81.01</b></td> </tr> <tr> <th colspan="2">Omniglot</th> <th colspan="3">Caltech-101 Silhouettes</th> </tr> <tr> <th></th> <th>LL</th> <th></th> <th colspan="2">LL</th> </tr> <tr> <td>IWAE</td> <td>-103.38</td> <td>IWAE</td> <td colspan="2">-117.2</td> </tr> <tr> <td>Ladder VAE</td> <td>-102.11</td> <td>RWS SBN</td> <td colspan="2">-113.3</td> </tr> <tr> <td>RBM</td> <td>-100.46</td> <td>RBM</td> <td colspan="2">-107.8</td> </tr> <tr> <td>DBN</td> <td>-100.45</td> <td>NAIS NADE</td> <td colspan="2">-100.0</td> </tr> <tr> <td>Discrete VAE</td> <td><b>-97.43</b></td> <td>Discrete VAE</td> <td colspan="2"><b>-97.6</b></td> </tr> </table>

Table 1: Test set log-likelihood of various models on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
For the discrete VAE, the reported log-likelihood is estimated with \( 10^4 \) importance-weighted samples (Burda et al., 2016). For comparison, we also report the performance of some recent state-of-the-art techniques. Full names and references are listed in Appendix I.

We further analyze the performance of discrete VAEs on dynamically binarized MNIST: the largest of the datasets, requiring the least regularization. Figure 5 shows the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the digit class over many successive rows in Figure 5 indicates that the RBM prior has well-separated modes. The RBM learns distinct, separated modes corresponding to the different digit types, except for 3/5 and 4/9, which are either nearby or overlapping; at least tens of thousands of iterations of single-temperature block Gibbs sampling are required to mix between the modes. We present corresponding figures for the other datasets, and results on simplified architectures, in Appendix J. The large mixing time of block Gibbs sampling on the RBM suggests that training may be constrained by sample quality. Figure 6a shows that performance\(^{10}\) improves as we increase the number of iterations of block Gibbs sampling performed per minibatch on the RBM prior: \( p(z|\theta) \) in Equation 11.

\footnotetext{8We use the partitioned, preprocessed Omniglot dataset of Burda et al. (2016), available from https://github.com/yburda/wae/tree/master/datasets/OMNIGLOT.}

\footnotetext{9The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partition function of the RBM. We describe our unbiased estimation method for the partition function in Appendix H.1.}

Figure 5: Evolution of samples from a discrete VAE trained on dynamically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. The long vertical sequences in which the digit ID remains constant demonstrate that the RBM has well-separated modes, each of which corresponds to a single digit ID (or occasionally two), despite being trained in a wholly unsupervised manner.

Figure 6: Log likelihood versus the number of iterations of block Gibbs sampling per minibatch (a), the number of units in the RBM (b), and the number of layers in the approximating posterior over the RBM (c). Better sampling (a) and hierarchical approximating posteriors (c) support better performance, but the network is robust to the size of the RBM (b).
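To make the sampling procedure concrete, the following sketch shows one block Gibbs iteration on a Bernoulli RBM, whose two blocks of units are conditionally independent given each other, and the persistent-chain update run between minibatches. It is an illustration under assumed parameter names (`W`, `b1`, `b2`), not the authors' code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def block_gibbs_step(z1, W, b1, b2, rng=np.random):
    """One block Gibbs iteration on a Bernoulli RBM: each block of units
    is conditionally independent given the other block, so both blocks
    can be resampled in two vectorized steps."""
    p2 = sigmoid(z1 @ W + b2)
    z2 = (rng.random(p2.shape) < p2).astype(z1.dtype)
    p1 = sigmoid(z2 @ W.T + b1)
    z1 = (rng.random(p1.shape) < p1).astype(z1.dtype)
    return z1, z2

def advance_persistent_chains(z1, W, b1, b2, n_steps):
    """Persistent chains, kept across minibatches as in PCD, are advanced
    by n_steps block Gibbs iterations per parameter update."""
    for _ in range(n_steps):
        z1, _ = block_gibbs_step(z1, W, b1, b2)
    return z1
```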
This dependence on sample quality suggests that a further improvement may be achieved by using a more effective sampling algorithm, such as parallel tempering (Swendsen & Wang, 1986).

\footnotetext{10 All models in Figure 6 use only 10 layers of continuous latent variables, for computational efficiency.}

Commensurate with the small number of intrinsic classes, a moderately sized RBM yields the best performance on MNIST. As shown in Figure 6b, the log-likelihood plateaus once the number of units in the RBM reaches at least 64. Presumably, we would need a much larger RBM to model a dataset like Imagenet, which has many classes and complicated relationships between the elements of various classes.

The benefit of the hierarchical approximating posterior over the RBM, introduced in Section 3, is apparent from Figure 6c. The reduction in performance when moving from 4 to 8 layers in the approximating posterior may be due to the fact that each additional hierarchical layer over the approximating posterior adds three layers to the encoder neural network: there are two deterministic hidden layers for each stochastic latent layer. As a result, expanding the number of RBM approximating posterior layers significantly increases the number of parameters that must be trained, and increases the risk of overfitting.

6 CONCLUSION

Datasets consisting of a discrete set of classes are naturally modeled using discrete latent variables. However, it is difficult to train probabilistic models over discrete latent variables using efficient gradient approximations based upon backpropagation, such as variational autoencoders, since it is generally not possible to backpropagate through a discrete variable (Bengio et al., 2013). We avoid this problem by symmetrically projecting the approximating posterior and the prior into a continuous space. We then evaluate the autoencoding term of the evidence lower bound exclusively in the continuous space, marginalizing out the original discrete latent representation. At the same time, we evaluate the KL divergence between the approximating posterior and the true prior in the original discrete space; due to the symmetry of the projection into the continuous space, it does not contribute to the KL term. To increase representational power, we make the approximating posterior over the discrete latent variables hierarchical, and add a hierarchy of continuous latent variables below them. The resulting discrete variational autoencoder achieves state-of-the-art performance on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.

ACKNOWLEDGEMENTS

Zhengbing Bian, Fabian Chudak, and Arash Vahdat helped run experiments. Jack Raymond provided the library used to estimate the log partition function of RBMs. Mani Ranjbar wrote the cluster management system, and a custom GPU acceleration library used for an earlier version of the code. We thank Evgeny Andriyash, William Macready, and Aaron Courville for helpful discussions; and one of our anonymous reviewers for identifying the problem addressed in Appendix D.3.
accept
Accept (Poster)
8.333333
81335abe42d8ba98222e5765c14c569fd400b337
iclr
2,017
ZONEOUT: REGULARIZING RNNs BY RANDOMLY PRESERVING HIDDEN ACTIVATIONS David Krueger1*, Tegan Maharaj2†*, János Kramár2 Mohammad Pezeshki1 Nicolas Ballas1, Nan Rosemary Ke2, Anirudh Goyal1 Yoshua Bengio1‡, Aaron Courville1‡, Christopher Pal2 1 MILA, Université de Montréal, firstname.lastname@umontreal.ca. 2 École Polytechnique de Montréal, firstname.lastname@polymtl.ca. * Equal contributions. †CIFAR Senior Fellow. ‡CIFAR Fellow. ABSTRACT We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization (Cooijmans et al., 2016) yields state-of-the-art results on permuted sequential MNIST. 1 INTRODUCTION Regularizing neural nets can significantly improve performance, as indicated by the widespread use of early stopping, and success of regularization methods such as dropout and its recurrent variants (Hinton et al., 2012; Srivastava et al., 2014; Zaremba et al., 2014; Gal, 2015). In this paper, we address the issue of regularization in recurrent neural networks (RNNs) with a novel method called zoneout. RNNs sequentially construct fixed-length representations of arbitrary-length sequences by folding new observations into their hidden state using an input-dependent transition operator. The repeated application of the same transition operator at the different time steps of the sequence, however, can make the dynamics of an RNN sensitive to minor perturbations in the hidden state; the transition dynamics can magnify components of these perturbations exponentially. Zoneout aims to improve RNNs’ robustness to perturbations in the hidden state in order to regularize transition dynamics. Like dropout, zoneout injects noise during training. But instead of setting some units’ activations to 0 as in dropout, zoneout randomly replaces some units’ activations with their activations from the previous timestep. As in dropout, we use the expectation of the random noise at test time. This results in a simple regularization approach which can be applied through time for any RNN architecture, and can be conceptually extended to any model whose state varies over time. Compared with dropout, zoneout is appealing because it preserves information flow forwards and backwards through the network. This helps combat the vanishing gradient problem (Hochreiter, 1991; Bengio et al., 1994), as we observe experimentally. We also empirically evaluate zoneout on classification using the permuted sequential MNIST dataset, and on language modelling using the Penn Treebank and Text8 datasets, demonstrating competitive or state of the art performance across tasks. In particular, we show that zoneout performs competitively with other proposed regularization methods for RNNs, including recently-proposed dropout variants. 
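As a minimal illustration of this mechanism, the sketch below applies zoneout to a vanilla RNN transition (illustrative names only; the authors' released code is linked below). During training, a per-unit Bernoulli mask decides whether each unit keeps its previous activation; at test time the mask is replaced by its expectation:

```python
import numpy as np

def zoneout(h_prev, h_new, z_prob, training, rng=np.random):
    """Per-unit zoneout: with probability z_prob a unit keeps its previous
    activation; otherwise it takes the freshly computed one. At test time,
    the stochastic mask is replaced by its expectation, as in dropout."""
    if training:
        keep_prev = (rng.random(h_prev.shape) < z_prob).astype(h_prev.dtype)
        return keep_prev * h_prev + (1.0 - keep_prev) * h_new
    return z_prob * h_prev + (1.0 - z_prob) * h_new

# Example: one step of a vanilla RNN with zoneout (W_x, W_h, b assumed given).
# h = zoneout(h, np.tanh(x_t @ W_x + h @ W_h + b), z_prob=0.05, training=True)
```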
Code for replicating all experiments can be found at: http://github.com/teganmaharaj/zoneout

2 RELATED WORK

2.1 RELATIONSHIP TO DROPOUT

Zoneout can be seen as a selective application of dropout to some of the nodes in a modified computational graph, as shown in Figure 1. In zoneout, instead of dropping out (being set to 0), units zone out and are set to their previous value (\( h_t = h_{t-1} \)). Zoneout, like dropout, can be viewed as a way to train a pseudo-ensemble (Bachman et al., 2014), injecting noise using a stochastic “identity-mask” rather than a zero-mask. We conjecture that identity-masking is more appropriate for RNNs, since it makes it easier for the network to preserve information from previous timesteps going forward, and facilitates, rather than hinders, the flow of gradient information going backward, as we demonstrate experimentally.

Figure 1: Zoneout as a special case of dropout; \( \tilde{h}_t \) is the unit h’s hidden activation for the next time step (if not zoned out). Zoneout can be seen as applying dropout on the hidden state delta, \( \tilde{h}_t - h_{t-1} \). When this update is dropped out (represented by the dashed line), \( h_t \) becomes \( h_{t-1} \).

2.2 DROPOUT IN RNNs

Initially successful applications of dropout in RNNs (Pham et al., 2013; Zaremba et al., 2014) only applied dropout to feed-forward connections (“up the stack”), and not recurrent connections (“forward through time”), but several recent works (Semeniuta et al., 2016; Moon et al., 2015; Gal, 2015) propose methods that are not limited in this way. Bayer et al. (2013) successfully apply fast dropout (Wang & Manning, 2013), a deterministic approximation of dropout, to RNNs. Semeniuta et al. (2016) apply recurrent dropout to the updates to LSTM memory cells (or GRU states), i.e. they drop out the input/update gate in LSTM/GRU. Like zoneout, their approach prevents the loss of long-term memories built up in the states/cells of GRUs/LSTMs, but zoneout does this by preserving units’ activations exactly. This difference is most salient when zoning out the hidden states (not the memory cells) of an LSTM, for which there is no analogue in recurrent dropout. Whereas saturated output gates or output nonlinearities would cause recurrent dropout to suffer from vanishing gradients (Bengio et al., 1994), zoned-out units still propagate gradients effectively in this situation. Furthermore, while the recurrent dropout method is specific to LSTMs and GRUs, zoneout generalizes to any model that sequentially builds distributed representations of its input, including vanilla RNNs.

Also motivated by preventing memory loss, Moon et al. (2015) propose rnnDrop. This technique amounts to using the same dropout mask at every timestep, which the authors show results in improved performance on speech recognition in their experiments. Semeniuta et al. (2016) show, however, that past states’ influence vanishes exponentially as a function of dropout probability when taking the expectation at test time in rnnDrop; this is problematic for tasks involving longer-term dependencies. Gal (2015) proposes another technique which uses the same mask at each timestep.
Motivated by variational inference, they drop out the rows of weight matrices in the input and output embeddings and LSTM gates, instead of dropping units’ activations. The proposed variational RNN technique achieves single-model state-of-the-art test perplexity of 73.4 on word-level language modelling of Penn Treebank.

2.3 RELATIONSHIP TO STOCHASTIC DEPTH

Zoneout can also be viewed as a per-unit version of stochastic depth (Huang et al., 2016), which randomly drops entire layers of feed-forward residual networks (ResNets (He et al., 2015)). This is equivalent to zoning out all of the units of a layer at the same time. In a typical RNN, there is a new input at each timestep, causing issues for a naive implementation of stochastic depth. Zoning out an entire layer in an RNN means the input at the corresponding timestep is completely ignored, whereas zoning out individual units allows the RNN to take each element of its input sequence into account. We also found that using residual connections in recurrent nets led to instability, presumably due to the parameter sharing in RNNs. Concurrent with our work, Singh et al. (2016) propose zoneout for ResNets, calling it SkipForward. In their experiments, zoneout is outperformed by stochastic depth, dropout, and their proposed Swapout technique, which randomly drops either or both of the identity or residual connections. Unlike Singh et al. (2016), we apply zoneout to RNNs, and find it outperforms stochastic depth and recurrent dropout.

2.4 SELECTIVELY UPDATING HIDDEN UNITS

Like zoneout, clockwork RNNs (Koutnik et al., 2014) and hierarchical RNNs (Hihi & Bengio, 1996) update only some units’ activations at every timestep, but their updates are periodic, whereas zoneout’s are stochastic. Inspired by clockwork RNNs, we experimented with zoneout variants that target different update rates or schedules for different units, but did not find any performance benefit. Hierarchical multiscale LSTMs (Chung et al., 2016) learn update probabilities for different units using the straight-through estimator (Bengio et al., 2013; Courbariaux et al., 2015), and combined with recently-proposed Layer Normalization (Ba et al., 2016) achieve competitive results on a variety of tasks. As the authors note, their method can be interpreted as an input-dependent form of adaptive zoneout. In recent work, Ha et al. (2016) use a hypernetwork to dynamically rescale the row-weights of a primary LSTM network, achieving state-of-the-art 1.21 BPC on character-level Penn Treebank when combined with layer normalization (Ba et al., 2016) in a two-layer network. This scaling can be viewed as an adaptive, differentiable version of the variational LSTM (Gal, 2015), and could similarly be used to create an adaptive, differentiable version of zoneout. Very recent work conditions zoneout probabilities on surprisal (a measure of the discrepancy between the predicted and actual state), and sets a new state of the art on enwik8 (Rocki et al., 2016).

3 ZONEOUT AND PRELIMINARIES

We now explain zoneout in full detail, and compare with other forms of dropout in RNNs. We start by reviewing recurrent neural networks (RNNs).

3.1 RECURRENT NEURAL NETWORKS

Recurrent neural networks process data \( x_1, x_2, \ldots, x_T \) sequentially, constructing a corresponding sequence of representations, \( h_1, h_2, \ldots, h_T \).
Each hidden state is trained (implicitly) to remember and emphasize all task-relevant aspects of the preceding inputs, and to incorporate new inputs via a transition operator, \( \mathcal{T} \), which converts the present hidden state and input into a new hidden state:
\[ h_t = \mathcal{T}(h_{t-1}, x_t). \]
Zoneout modifies these dynamics by mixing the original transition operator \( \mathcal{T} \) with the identity operator (as opposed to the null operator used in dropout), according to a vector of Bernoulli masks, \( d_t \):
\[ \text{Zoneout:} \quad \mathcal{T} = d_t \odot \mathcal{T} + (1 - d_t) \odot 1 \qquad \text{Dropout:} \quad \mathcal{T} = d_t \odot \mathcal{T} + (1 - d_t) \odot 0 \]

3.2 LONG SHORT-TERM MEMORY

In long short-term memory RNNs (LSTMs) (Hochreiter & Schmidhuber, 1997), the hidden state is divided into memory cell \( c_t \), intended for internal long-term storage, and hidden state \( h_t \), used as a transient representation of state at timestep \( t \). In the most widely used formulation of an LSTM (Gers et al., 2000), \( c_t \) and \( h_t \) are computed via a set of four “gates”, including the forget gate, \( f_t \), which directly connects \( c_t \) to the memories of the previous timestep \( c_{t-1} \), via an element-wise multiplication. Large values of the forget gate cause the cell to remember most (not all) of its previous value. The other gates control the flow of information in (\( i_t, g_t \)) and out (\( o_t \)) of the cell. Each gate has a weight matrix and bias vector; for example the forget gate has \( W_{xf}, W_{hf}, \) and \( b_f \). For brevity, we will write these as \( W_x, W_h, b \). An LSTM is defined as follows:
\[ i_t, f_t, o_t = \sigma(W_x x_t + W_h h_{t-1} + b) \\ g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g) \\ c_t = f_t \odot c_{t-1} + i_t \odot g_t \\ h_t = o_t \odot \tanh(c_t) \]
A naive application of dropout in LSTMs would zero-mask either or both of the memory cells and hidden states, without changing the computation of the gates (\(i, f, o, g\)). Dropping memory cells, for example, changes the computation of \(c_t\) as follows:
\[ c_t = d_t \odot (f_t \odot c_{t-1} + i_t \odot g_t) \]
Alternatives abound, however; masks can be applied to any subset of the gates, cells, and states. Semeniuta et al. (2016), for instance, zero-mask the input gate:
\[ c_t = (f_t \odot c_{t-1} + d_t \odot i_t \odot g_t) \]
When the input gate is masked like this, there is no additive contribution from the input or hidden state, and the value of the memory cell simply decays according to the forget gate.

Figure 2: (a) Zoneout, vs (b) the recurrent dropout strategy of Semeniuta et al. (2016) in an LSTM. Dashed lines are zero-masked; in zoneout, the corresponding dotted lines are masked with the corresponding opposite zero-mask. Rectangular nodes are embedding layers.

In **zoneout**, the values of the hidden state and memory cell randomly either maintain their previous value or are updated as usual. This introduces stochastic identity connections between subsequent time steps:
\[ c_t = d_t^c \odot c_{t-1} + (1 - d_t^c) \odot (f_t \odot c_{t-1} + i_t \odot g_t) \\ h_t = d_t^h \odot h_{t-1} + (1 - d_t^h) \odot (o_t \odot \tanh(f_t \odot c_{t-1} + i_t \odot g_t)) \]
We usually use different zoneout masks for cells and hiddens.
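For concreteness, here is a minimal sketch of one zoneout LSTM step implementing the two update equations above (illustrative code with assumed stacked-weight names, not the authors' implementation; per-unit masks \( d_t^c \) and \( d_t^h \) are sampled independently):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def zoneout_lstm_step(x_t, h_prev, c_prev, params, z_c, z_h,
                      training=True, rng=np.random):
    """One LSTM step with zoneout on both the memory cell and hidden state."""
    W_x, W_h, b = params          # weights stacked for the i, f, o, g gates
    pre = x_t @ W_x + h_prev @ W_h + b
    i, f, o, g_pre = np.split(pre, 4, axis=-1)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g_pre)

    c_new = f * c_prev + i * g
    h_new = o * np.tanh(c_new)

    if training:
        # Per-unit Bernoulli masks: 1 means the unit zones out (keeps its
        # previous value); separate masks for cells and hiddens.
        d_c = (rng.random(c_prev.shape) < z_c).astype(c_prev.dtype)
        d_h = (rng.random(h_prev.shape) < z_h).astype(h_prev.dtype)
    else:
        # Test time: use the expected masks, as in dropout.
        d_c, d_h = z_c, z_h

    c_t = d_c * c_prev + (1.0 - d_c) * c_new
    h_t = d_h * h_prev + (1.0 - d_h) * h_new
    return h_t, c_t
```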
We also experiment with a variant of recurrent dropout that reuses the input dropout mask to zone out the corresponding output gates:
\[ c_t = (f_t \odot c_{t-1} + d_t \odot i_t \odot g_t) \\ h_t = ((1 - d_t) \odot o_t + d_t \odot o_{t-1}) \odot \tanh(c_t) \]
The motivation for this variant is to prevent the network from being forced (by the output gate) to expose a memory cell which has not been updated, and hence may contain misleading information.

4 EXPERIMENTS AND DISCUSSION

We evaluate zoneout’s performance on the following tasks: (1) Character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993); (2) Word-level language modelling on the Penn Treebank corpus (Marcus et al., 1993); (3) Character-level language modelling on the Text8 corpus (Mahoney, 2011); (4) Classification of hand-written digits on permuted sequential MNIST (pMNIST) (Le et al., 2015). We also investigate the gradient flow to past hidden states, using pMNIST.

4.1 PENN TREEBANK LANGUAGE MODELLING DATASET

The Penn Treebank language model corpus contains 1 million words. The model is trained to predict the next word (evaluated on perplexity) or character (evaluated on BPC: bits per character) in a sequence.1

4.1.1 CHARACTER-LEVEL

For the character-level task, we train networks with one layer of 1000 hidden units. We train LSTMs with a learning rate of 0.002 on overlapping sequences of length 100 in batches of 32, optimize using Adam, and clip gradients with threshold 1. These settings match those used in Cooijmans et al. (2016). We also train GRUs and tanh-RNNs with the same parameters as above, except sequences are non-overlapping and we use learning rates of 0.001 and 0.0003 for GRUs and tanh-RNNs respectively. Small values (0.1, 0.05) of zoneout significantly improve generalization performance for all three models. Intriguingly, we find zoneout increases training time for GRU and tanh-RNN, but decreases training time for LSTMs.

We focus our investigation on LSTM units, where the dynamics of zoning out states, cells, or both provide interesting insight into zoneout’s behaviour. Figure 3 shows our exploration of zoneout in LSTMs, for various zoneout probabilities of cells and/or hiddens. Zoneout on cells with probability 0.5 or zoneout on states with probability 0.05 both outperform the best-performing recurrent dropout (\( p = 0.25 \)). Combining \( z_c = 0.5 \) and \( z_h = 0.05 \) leads to our best-performing model, which achieves 1.27 BPC, competitive with the recent state of the art set by Ha et al. (2016).

We compare zoneout to recurrent dropout (for \( p \in \{0.05, 0.2, 0.25, 0.5, 0.7\} \)), weight noise (\( \sigma = 0.075 \)), norm stabilizer (\( \beta = 50 \)) (Krueger & Memisevic, 2015), and explore stochastic depth (Huang et al., 2016) in a recurrent setting (analogous to zoning out an entire timestep). We also tried a shared-mask variant of zoneout as used in the pMNIST experiments, where the same mask is used for both cells and hiddens. Neither stochastic depth nor shared-mask zoneout performed as well as separate masks, sampled per unit. Figure 3 shows the best performance achieved with each regularizer, as well as an unregularized LSTM baseline. Results are reported in Table 1, and learning curves are shown in Figure 4.

Low zoneout probabilities (0.05-0.25) also improve over baseline in GRUs and tanh-RNNs, reducing BPC from 1.53 to 1.41 for GRU and 1.67 to 1.52 for tanh-RNN. Similarly, low zoneout probabilities work best on the hidden states of LSTMs.
For memory cells in LSTMs, however, higher probabilities (around 0.5) work well, perhaps because large forget-gate values approximate the effect of cells zoning out. We conjecture that best performance is achieved with zoneout LSTMs because of the stability of having both state and cell. The probability that both will be zoned out is very low, but having one or the other zoned out carries information from the previous timestep forward, while having the other react ‘normally’ to new information.

4.1.2 WORD-LEVEL

For the word-level task, we replicate settings from the best single-model performance of Zaremba et al. (2014). This network has 2 layers of 1500 units, with weights initialized uniformly in [-0.04, +0.04]. The model is trained for 14 epochs with learning rate 1, after which the learning rate is reduced by a factor of 1.15 after each epoch. Gradient norms are clipped at 10.

With no dropout on the non-recurrent connections (i.e. zoneout as the only regularization), we do not achieve competitive results. We did not perform any search over models, and conjecture that the large model size requires regularization of the feed-forward connections. Adding zoneout (\( z_c = 0.25 \) and \( z_h = 0.025 \)) on the recurrent connections to the model optimized for dropout on the non-recurrent connections, however, we are able to improve test perplexity from 78.4 to 77.4. We report the best performance achieved with a given technique in Table 1.

1 These metrics are deterministic functions of negative log-likelihood (NLL). Specifically, perplexity is exponentiated NLL, and BPC (entropy) is NLL divided by the natural logarithm of 2.

Figure 3: Validation BPC (bits per character) on character-level Penn Treebank, for different probabilities of zoneout on cells \( z_c \) and hidden states \( z_h \) (left), and comparison of an unregularized LSTM, zoneout \( z_c = 0.5, z_h = 0.05 \), stochastic depth zoneout \( z = 0.05 \), recurrent dropout \( p = 0.25 \), norm stabilizer \( \beta = 50 \), and weight noise \( \sigma = 0.075 \) (right).

Figure 4: Training and validation bits-per-character (BPC) comparing LSTM regularization methods on character-level Penn Treebank (left) and Text8 (right).

4.2 TEXT8

Enwik8 is a corpus made from the first \( 10^9 \) bytes of Wikipedia dumped on Mar. 3, 2006. Text8 is a "clean text" version of this corpus, with HTML tags removed, numbers spelled out, symbols converted to spaces, and all text lower-cased. Both datasets were created and are hosted by Mahoney (2011). We use a single-layer network of 2000 units, initialized orthogonally, with batch size 128, learning rate 0.001, and sequence length 180. We optimize with Adam (Kingma & Ba, 2014), clip gradients to a maximum norm of 1 (Pascanu et al., 2012), and use early stopping, again matching the settings of Cooijmans et al. (2016). Results are reported in Table 1, and Figure 4 shows training and validation learning curves for zoneout (\( z_c = 0.5, z_h = 0.05 \)) compared to an unregularized LSTM and to recurrent dropout.

4.3 PERMUTED SEQUENTIAL MNIST

In sequential MNIST, pixels of an image representing a number [0-9] are presented one at a time, left to right, top to bottom. The task is to classify the number shown in the image. In pMNIST, the pixels are presented in a (fixed) random order. We compare recurrent dropout and zoneout to an unregularized LSTM baseline.
All models have a single layer of 100 units, and are trained for 150 epochs using RMSProp (Tieleman & Hinton, 2012) with a decay rate of 0.5 for the moving average of gradient norms. The learning rate is set to 0.001 and the gradients are clipped to a maximum norm of 1 (Pascanu et al., 2012).

As shown in Figure 5 and Table 2, zoneout gives a significant performance boost compared to the LSTM baseline and outperforms recurrent dropout (Semeniuta et al., 2016), although recurrent batch normalization (Cooijmans et al., 2016) outperforms all three. However, by adding zoneout to the recurrent batch normalized LSTM, we achieve state-of-the-art performance. For this setting, the zoneout mask is shared between cells and states, and the recurrent dropout probability and zoneout probabilities are both set to 0.15.

Table 1: Validation and test results of different models on the three language modelling tasks. Results are reported for the best-performing settings. Performance on Char-PTB and Text8 is measured in bits-per-character (BPC); Word-PTB is measured in perplexity. For Char-PTB and Text8 all models are 1-layer unless otherwise noted; for Word-PTB all models are 2-layer. Results above the line are from our own implementation and experiments. Models below the line are: NR-dropout (non-recurrent dropout), V-Dropout (variational dropout), RBN (recurrent batchnorm), H-LSTM+LN (HyperLSTM + LayerNorm), 3-HM-LSTM+LN (3-layer Hierarchical Multiscale LSTM + LayerNorm).

<table>
<tr> <th rowspan="2">Model</th> <th colspan="2">Char-PTB</th> <th colspan="2">Word-PTB</th> <th colspan="2">Text8</th> </tr>
<tr> <th>Valid</th> <th>Test</th> <th>Valid</th> <th>Test</th> <th>Valid</th> <th>Test</th> </tr>
<tr> <td>Unregularized LSTM</td> <td>1.466</td> <td>1.356</td> <td>120.7</td> <td>114.5</td> <td>1.396</td> <td>1.408</td> </tr>
<tr> <td>Weight noise</td> <td>1.507</td> <td>1.344</td> <td>--</td> <td>--</td> <td>1.356</td> <td>1.367</td> </tr>
<tr> <td>Norm stabilizer</td> <td>1.459</td> <td>1.352</td> <td>--</td> <td>--</td> <td>1.382</td> <td>1.398</td> </tr>
<tr> <td>Stochastic depth</td> <td>1.432</td> <td>1.343</td> <td>--</td> <td>--</td> <td>1.337</td> <td>1.343</td> </tr>
<tr> <td>Recurrent dropout</td> <td>1.396</td> <td>1.286</td> <td>91.6</td> <td>87.0</td> <td>1.386</td> <td>1.401</td> </tr>
<tr> <td>Zoneout</td> <td>1.362</td> <td>1.252</td> <td>81.4</td> <td>77.4</td> <td>1.331</td> <td>1.336</td> </tr>
<tr> <td>NR-dropout (Zaremba et al., 2014)</td> <td>--</td> <td>--</td> <td>82.2</td> <td>78.4</td> <td>--</td> <td>--</td> </tr>
<tr> <td>V-dropout (Gal, 2015)</td> <td>--</td> <td>--</td> <td>--</td> <td>73.4</td> <td>--</td> <td>--</td> </tr>
<tr> <td>RBN (Cooijmans et al., 2016)</td> <td>--</td> <td>1.32</td> <td>--</td> <td>--</td> <td>--</td> <td>1.36</td> </tr>
<tr> <td>H-LSTM + LN (Ha et al., 2016)</td> <td>1.281</td> <td>1.250</td> <td>--</td> <td>--</td> <td>--</td> <td>--</td> </tr>
<tr> <td>3-HM-LSTM + LN (Chung et al., 2016)</td> <td>--</td> <td><b>1.24</b></td> <td>--</td> <td>--</td> <td>--</td> <td><b>1.29</b></td> </tr>
</table>

Table 2: Error rates on the pMNIST digit classification task. Zoneout outperforms recurrent dropout, and sets state of the art when combined with recurrent batch normalization.
<table>
<tr> <th>Model</th> <th>Valid</th> <th>Test</th> </tr>
<tr> <td>Unregularized LSTM</td> <td>0.092</td> <td>0.102</td> </tr>
<tr> <td>Recurrent dropout \( p = 0.5 \)</td> <td>0.083</td> <td>0.075</td> </tr>
<tr> <td>Zoneout \( z_c = z_h = 0.15 \)</td> <td>0.063</td> <td>0.069</td> </tr>
<tr> <td>Recurrent batchnorm</td> <td>-</td> <td>0.046</td> </tr>
<tr> <td>Recurrent batchnorm & Zoneout \( z_c = z_h = 0.15 \)</td> <td>0.045</td> <td><b>0.041</b></td> </tr>
</table>

Figure 5: Training and validation error rates for an unregularized LSTM, recurrent dropout, and zoneout on the task of permuted sequential MNIST digit classification.

4.4 GRADIENT FLOW

We investigate the hypothesis that identity connections introduced by zoneout facilitate gradient flow to earlier timesteps. Vanishing gradients are a perennial issue in RNNs. As effective as many techniques are for mitigating vanishing gradients (notably the LSTM architecture (Hochreiter & Schmidhuber, 1997)), we can always imagine a longer sequence to train on, or a longer-term dependence we want to capture. We compare gradient flow in an unregularized LSTM to zoning out (stochastic identity-mapping) and dropping out (stochastic zero-mapping) the recurrent connections after one epoch of training on pMNIST. We compute the average gradient norms \( \| \frac{\partial L}{\partial c_t} \| \) of loss \( L \) with respect to cell activations \( c_t \) at each timestep \( t \), and for each method, normalize the average gradient norms by the sum of average gradient norms for all timesteps. Figure 6 shows that zoneout propagates gradient information to early timesteps much more effectively than dropout on the recurrent connections, and even more effectively than an unregularized LSTM. The same effect was observed for hidden states \( h_t \).

Figure 6: Normalized average gradient norms \( \| \frac{\partial L}{\partial c_t} \| \) of loss \( L \) with respect to cell activations \( c_t \) at each timestep \( t \), for zoneout (\( z_c = 0.5 \)), dropout (\( z_c = 0.5 \)), and an unregularized LSTM, after one epoch of training on pMNIST.

5 CONCLUSION

We have introduced zoneout, a novel and simple regularizer for RNNs, which stochastically preserves hidden units' activations. Zoneout improves performance across tasks, outperforming many alternative regularizers to achieve results competitive with state of the art on the Penn Treebank and Text8 datasets, and state-of-the-art results on pMNIST. While searching over zoneout probabilities allows us to tune zoneout to each task, low zoneout probabilities (0.05 - 0.2) on states reliably improve performance of existing models. We perform no hyperparameter search to achieve these results, simply using settings from the previous state of the art. Results on pMNIST and word-level Penn Treebank suggest that zoneout works well in combination with other regularizers, such as recurrent batch normalization, and dropout on feedforward/embedding layers.
We conjecture that the benefits of zoneout arise from two main factors: (1) Introducing stochasticity makes the network more robust to changes in the hidden state; (2) The identity connections improve the flow of information forward and backward through the network.

ACKNOWLEDGMENTS

We are grateful to Hugo Larochelle, Jan Chorowski, and students at MILA, especially Çağlar Gülçehre, Marcin Moczulski, Chiheb Trabelsi, and Christopher Beckham, for helpful feedback and discussions. We thank the developers of Theano (Theano Development Team, 2016), Fuel, and Blocks (van Merriënboer et al., 2015). We acknowledge the computing resources provided by ComputeCanada and CalculQuebec. We also thank IBM and Samsung for their support. We would also like to acknowledge the work of Pranav Shyam on learning RNN hierarchies. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL). The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

REFERENCES

Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. URL http://arxiv.org/abs/1607.06450

Philip Bachman, Ouais Alsharif, and Doina Precup. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems, pp. 3365–3373, 2014.

J. Bayer, C. Osendorfer, D. Korhammer, N. Chen, S. Urban, and P. van der Smagt. On Fast Dropout and its Applicability to Recurrent Networks. ArXiv e-prints, November 2013.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. Neural Networks, IEEE Transactions on, 5(2):157–166, 1994.

Yoshua Bengio, Nicholas Léonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. CoRR, abs/1308.3432, 2013. URL http://arxiv.org/abs/1308.3432

Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural networks. CoRR, abs/1609.01704, 2016. URL http://arxiv.org/abs/1609.01704

Tim Cooijmans, Nicolas Ballas, César Laurent, Caglar Gulcehre, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NIPS, pp. 3123–3131, 2015.

Yarin Gal. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks. ArXiv e-prints, December 2015.

Felix A. Gers, Jürgen Schmidhuber, and Fred A. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451–2471, 2000.

David Ha, Andrew M. Dai, and Quoc V. Le. Hypernetworks. CoRR, abs/1609.09106, 2016. URL http://arxiv.org/abs/1609.09106

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems, 1996.

Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.

Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen netzen. Master’s thesis, Institut fur Informatik, Technische Universitat, Munchen, 1991.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.

Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork rnn. arXiv preprint arXiv:1402.3511, 2014.

David Krueger and Roland Memisevic. Regularizing rnns by stabilizing activations. arXiv preprint arXiv:1511.08400, 2015.

Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.

Matt Mahoney. About the test data, 2011. URL http://mattmahoney.net/dc/textdata

Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330, 1993.

Taesup Moon, Heeyoul Choi, Hoshik Lee, and Inchul Song. Rnndrop: A novel dropout for rnns in asr. Automatic Speech Recognition and Understanding (ASRU), 2015.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2012. URL http://arxiv.org/abs/1211.5063

V. Pham, T. Bluche, C. Kermorvant, and J. Louradour. Dropout improves Recurrent Neural Networks for Handwriting Recognition. ArXiv e-prints, November 2013.

Kamil Rocki, Tomasz Kornuta, and Tegan Maharaj. Surprisal-driven zoneout. CoRR, abs/1610.07675, 2016. URL http://arxiv.org/abs/1610.07675

Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. Recurrent dropout without memory loss. arXiv preprint arXiv:1603.05118, 2016.

S. Singh, D. Hoiem, and D. Forsyth. Swapout: Learning an ensemble of deep architectures. ArXiv e-prints, May 2016.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.

Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015.

Sida Wang and Christopher Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning, pp. 118–126, 2013.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.

6 APPENDIX

6.1 STATIC IDENTITY CONNECTIONS EXPERIMENT

This experiment was suggested by AnonReviewer2 during the ICLR review process, with the goal of disentangling the effects zoneout has (1) through noise injection in the training process and (2) through identity connections. Based on the results of this experiment, we observe that noise injection is essential for obtaining the regularization benefits of zoneout. In this experiment, one zoneout mask is sampled at the beginning of training, and used for all examples.
This means the identity connections introduced are static across training examples (but still different for each timestep). Using static identity connections resulted in slightly lower training (but not validation) error than zoneout, but worse performance than an unregularized LSTM on both train and validation sets, as shown in Figure 7.

Figure 7: Training and validation curves for an LSTM with static identity connections compared to zoneout (both \( z_c = 0.5 \) and \( z_h = 0.05 \)) and compared to a vanilla LSTM, showing that static identity connections fail to capture the benefits of zoneout.
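The distinction can be sketched in a few lines (hypothetical helper names; masks are indexed by timestep): the static variant samples one mask per timestep at the start of training and reuses it for every example, while standard zoneout resamples masks for every example and update.

```python
import numpy as np

# Static variant: one mask per timestep, fixed for the whole of training.
def make_static_masks(T, n_units, z_prob, rng=np.random):
    return (rng.random((T, n_units)) < z_prob).astype(np.float32)

# Standard zoneout: a fresh mask at every timestep of every example.
def sample_mask(n_units, z_prob, rng=np.random):
    return (rng.random(n_units) < z_prob).astype(np.float32)
```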
ABSTRACT We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization (Cooijmans et al., 2016) yields state-of-the-art results on permuted sequential MNIST. 1 INTRODUCTION Regularizing neural nets can significantly improve performance, as indicated by the widespread use of early stopping, and success of regularization methods such as dropout and its recurrent variants (Hinton et al., 2012; Srivastava et al., 2014; Zaremba et al., 2014; Gal, 2015). In this paper, we address the issue of regularization in recurrent neural networks (RNNs) with a novel method called zoneout. RNNs sequentially construct fixed-length representations of arbitrary-length sequences by folding new observations into their hidden state using an input-dependent transition operator. The repeated application of the same transition operator at the different time steps of the sequence, however, can make the dynamics of an RNN sensitive to minor perturbations in the hidden state; the transition dynamics can magnify components of these perturbations exponentially. Zoneout aims to improve RNNs’ robustness to perturbations in the hidden state in order to regularize transition dynamics. Like dropout, zoneout injects noise during training. But instead of setting some units’ activations to 0 as in dropout, zoneout randomly replaces some units’ activations with their activations from the previous timestep. As in dropout, we use the expectation of the random noise at test time. This results in a simple regularization approach which can be applied through time for any RNN architecture, and can be conceptually extended to any model whose state varies over time. Compared with dropout, zoneout is appealing because it preserves information flow forwards and backwards through the network. This helps combat the vanishing gradient problem (Hochreiter, 1991; Bengio et al., 1994), as we observe experimentally. We also empirically evaluate zoneout on classification using the permuted sequential MNIST dataset, and on language modelling using the Penn Treebank and Text8 datasets, demonstrating competitive or state of the art performance across tasks. In particular, we show that zoneout performs competitively with other proposed regularization methods for RNNs, including recently-proposed dropout variants. Code for replicating all experiments can be found at: http://github.com/teganmaharaj/zoneout 2 RELATED WORK 2.1 RELATIONSHIP TO DROPOUT Zoneout can be seen as a selective application of dropout to some of the nodes in a modified computational graph, as shown in Figure 1. In zoneout, instead of dropping out (being set to 0), units zone out and are set to their previous value (\( h_t = h_{t-1} \)). 
Zoneout, like dropout, can be viewed as a way to train a pseudo-ensemble (Bachman et al., 2014), injecting noise using a stochastic “identity-mask” rather than a zero-mask. We conjecture that identity-masking is more appropriate for RNNs, since it makes it easier for the network to preserve information from previous timesteps going forward, and facilitates, rather than hinders, the flow of gradient information going backward, as we demonstrate experimentally. ![Zoneout as a special case of dropout. \( \tilde{h}_t \) is the unit h’s hidden activation for the next time step (if not zoned out). Zoneout can be seen as applying dropout on the hidden state delta, \( \tilde{h}_t - h_{t-1} \). When this update is dropped out (represented by the dashed line), \( h_t \) becomes \( h_{t-1} \).](page_370_682_259_180.png) Figure 1: Zoneout as a special case of dropout; \( \tilde{h}_t \) is the unit h’s hidden activation for the next time step (if not zoned out). Zoneout can be seen as applying dropout on the hidden state delta, \( \tilde{h}_t - h_{t-1} \). When this update is dropped out (represented by the dashed line), \( h_t \) becomes \( h_{t-1} \). 2.2 DROPOUT IN RNNs Initially successful applications of dropout in RNNs (Pham et al., 2013; Zaremba et al., 2014) only applied dropout to feed-forward connections (“up the stack”), and not recurrent connections (“forward through time”), but several recent works (Semeniuta et al., 2016; Moon et al., 2015; Gal, 2015) propose methods that are not limited in this way. Bayer et al. (2013) successfully apply fast dropout (Wang & Manning, 2013), a deterministic approximation of dropout, to RNNs. Semeniuta et al. (2016) apply recurrent dropout to the updates to LSTM memory cells (or GRU states), i.e. they drop out the input/update gate in LSTM/GRU. Like zoneout, their approach prevents the loss of long-term memories built up in the states/cells of GRUs/LSTMs, but zoneout does this by preserving units’ activations exactly. This difference is most salient when zoning out the hidden states (not the memory cells) of an LSTM, for which there is no analogue in recurrent dropout. Whereas saturated output gates or output nonlinearities would cause recurrent dropout to suffer from vanishing gradients (Bengio et al., 1994), zoned-out units still propagate gradients effectively in this situation. Furthermore, while the recurrent dropout method is specific to LSTMs and GRUs, zoneout generalizes to any model that sequentially builds distributed representations of its input, including vanilla RNNs. Also motivated by preventing memory loss, Moon et al. (2015) propose rnnDrop. This technique amounts to using the same dropout mask at every timestep, which the authors show results in improved performance on speech recognition in their experiments. Semeniuta et al. (2016) show, however, that past states’ influence vanishes exponentially as a function of dropout probability when taking the expectation at test time in rnnDrop; this is problematic for tasks involving longer-term dependencies. Gal (2015) propose another technique which uses the same mask at each timestep. Motivated by variational inference, they drop out the rows of weight matrices in the input and output embeddings and LSTM gates, instead of dropping units’ activations. The proposed variational RNN technique achieves single-model state-of-the-art test perplexity of 73.4 on word-level language modelling of Penn Treebank. 
2.3 RELATIONSHIP TO STOCHASTIC DEPTH Zoneout can also be viewed as a per-unit version of stochastic depth (Huang et al., 2016), which randomly drops entire layers of feed-forward residual networks (ResNets (He et al., 2015)). This is equivalent to zoning out all of the units of a layer at the same time. In a typical RNN, there is a new input at each timestep, causing issues for a naive implementation of stochastic depth. Zoning out an entire layer in an RNN means the input at the corresponding timestep is completely ignored, whereas zoning out individual units allows the RNN to take each element of its input sequence into account. We also found that using residual connections in recurrent nets led to instability, presumably due to the parameter sharing in RNNs. Concurrent with our work, Singh et al. (2016) propose zoneout for ResNets, calling it SkipForward. In their experiments, zoneout is outperformed by stochastic depth, dropout, and their proposed Swapout technique, which randomly drops either or both of the identity or residual connections. Unlike Singh et al. (2016), we apply zoneout to RNNs, and find it outperforms stochastic depth and recurrent dropout. 2.4 SELECTIVELY UPDATING HIDDEN UNITS Like zoneout, clockwork RNNs (Koutnik et al., 2014) and hierarchical RNNs (Hihi & Bengio, 1996) update only some units’ activations at every timestep, but their updates are periodic, whereas zoneout’s are stochastic. Inspired by clockwork RNNs, we experimented with zoneout variants that target different update rates or schedules for different units, but did not find any performance benefit. Hierarchical multiscale LSTMs (Chung et al., 2016) learn update probabilities for different units using the straight-through estimator (Bengio et al., 2013; Courbariaux et al., 2015), and combined with recently-proposed Layer Normalization (Ba et al., 2016) achieve competitive results on a variety of tasks. As the authors note, their method can be interpreted as an input-dependent form of adaptive zoneout. In recent work, Ha et al. (2016) use a hypernetwork to dynamically rescale the row-weights of a primary LSTM network, achieving state-of-the-art 1.21 BPC on character-level Penn Treebank when combined with layer normalization (Ba et al., 2016) in a two-layer network. This scaling can be viewed as an adaptive, differentiable version of the variational LSTM (Gal, 2015), and could similarly be used to create an adaptive, differentiable version of zoneout. Very recent work conditions zoneout probabilities on supral (a measure of the discrepancy between the predicted and actual state), and sets a new state of the art on enwik8 (Rocki et al., 2016). 3 ZONEOUT AND PRELIMINARIES We now explain zoneout in full detail, and compare with other forms of dropout in RNNs. We start by reviewing recurrent neural networks (RNNs). 3.1 RECURRENT NEURAL NETWORKS Recurrent neural networks process data \( x_1, x_2, \ldots, x_T \) sequentially, constructing a corresponding sequence of representations, \( h_1, h_2, \ldots, h_T \). Each hidden state is trained (implicitly) to remember and emphasize all task-relevant aspects of the preceding inputs, and to incorporate new inputs via a transition operator, \( \mathcal{T} \), which converts the present hidden state and input into a new hidden state: \[ h_t = \mathcal{T}(h_{t-1}, x_t). 
\] Zoneout modifies these dynamics by mixing the original transition operator \( \mathcal{T} \) with the identity operator (as opposed to the null operator used in dropout), according to a vector of Bernoulli masks, \( d_t \): \[ \text{Zoneout:} \quad \mathcal{T} = d_t \odot \mathcal{T} + (1 - d_t) \odot 1 \qquad \text{Dropout:} \quad \mathcal{T} = d_t \odot \mathcal{T} + (1 - d_t) \odot 0 \] 3.2 LONG SHORT-TERM MEMORY In long short-term memory RNNs (LSTMs) (Hochreiter & Schmidhuber, 1997), the hidden state is divided into memory cell \( c_t \), intended for internal long-term storage, and hidden state \( h_t \), used as a transient representation of state at timestep \( t \). In the most widely used formulation of an LSTM (Gers et al., 2000), \( c_t \) and \( h_t \) are computed via a set of four “gates”, including the forget gate, \( f_t \), which directly connects \( c_t \) to the memories of the previous timestep \( c_{t-1} \), via an element-wise multiplication. Large values of the forget gate cause the cell to remember most (not all) of its previous value. The other gates control the flow of information in (\( i_t, g_t \)) and out (\( o_t \)) of the cell. Each gate has a weight matrix and bias vector; for example the forget gate has \( W_{xf}, W_{hf}, \) and \( b_f \). For brevity, we will write these as \( W_x, W_h, b \). An LSTM is defined as follows: \[ i_t, f_t, o_t = \sigma(W_x x_t + W_h h_{t-1} + b) \\ g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g) \\ c_t = f_t \odot c_{t-1} + i_t \odot g_t \\ h_t = o_t \odot \tanh(c_t) \] A naive application of dropout in LSTMs would zero-mask either or both of the memory cells and hidden states, without changing the computation of the gates (\(i, f, o, g\)). Dropping memory cells, for example, changes the computation of \(c_t\) as follows: \[ c_t = d_t \odot (f_t \odot c_{t-1} + i_t \odot g_t) \] Alternatives abound, however; masks can be applied to any subset of the gates, cells, and states. [Semeniuta et al. (2016)](https://arxiv.org/abs/1603.05117), for instance, zero-mask the input gate: \[ c_t = (f_t \odot c_{t-1} + d_t \odot i_t \odot g_t) \] When the input gate is masked like this, there is no additive contribution from the input or hidden state, and the value of the memory cell simply decays according to the forget gate. ![Diagram comparing zoneout and recurrent dropout in an LSTM](page_489_670_1012_180.png) Figure 2: (a) Zoneout, vs (b) the recurrent dropout strategy of [Semeniuta et al., 2016] in an LSTM. Dashed lines are zero-masked; in zoneout, the corresponding dotted lines are masked with the corresponding opposite zero-mask. Rectangular nodes are embedding layers. In **zoneout**, the values of the hidden state and memory cell randomly either maintain their previous value or are updated as usual. This introduces stochastic identity connections between subsequent time steps: \[ c_t = d_t^c \odot c_{t-1} + (1 - d_t^c) \odot (f_t \odot c_{t-1} + i_t \odot g_t) \\ h_t = d_t^h \odot h_{t-1} + (1 - d_t^h) \odot (o_t \odot \tanh(f_t \odot c_{t-1} + i_t \odot g_t)) \] We usually use different zoneout masks for cells and hiddens. 
We also experiment with a variant of recurrent dropout that reuses the input dropout mask to zoneout the corresponding output gates: \[ c_t = (f_t \odot c_{t-1} + d_t \odot i_t \odot g_t) \\ h_t = ((1 - d_t) \odot o_t + d_t \odot o_{t-1}) \odot \tanh(c_t) \] The motivation for this variant is to prevent the network from being forced (by the output gate) to expose a memory cell which has not been updated, and hence may contain misleading information. 4 EXPERIMENTS AND DISCUSSION We evaluate zoneout’s performance on the following tasks: (1) Character-level language modelling on the Penn Treebank corpus (Marcus et al., 1993); (2) Word-level language modelling on the Penn Treebank corpus (Marcus et al., 1993); (3) Character-level language modelling on the Text8 corpus (Mahoney, 2011); (4) Classification of hand-written digits on permuted sequential MNIST (pMNIST) (Le et al., 2015). We also investigate the gradient flow to past hidden states, using pMNIST. 4.1 PENN TREEBANK LANGUAGE MODELLING DATASET The Penn Treebank language model corpus contains 1 million words. The model is trained to predict the next word (evaluated on perplexity) or character (evaluated on BPC: bits per character) in a sequence.1 4.1.1 CHARACTER-LEVEL For the character-level task, we train networks with one layer of 1000 hidden units. We train LSTMs with a learning rate of 0.002 on overlapping sequences of 100 in batches of 32, optimize using Adam, and clip gradients with threshold 1. These settings match those used in [Coolmans et al., 2016]. We also train GRUs and tanh-RNNs with the same parameters as above, except sequences are non-overlapping and we use learning rates of 0.001, and 0.0003 for GRUs and tanh-RNNs respectively. Small values (0.1, 0.05) of zoneout significantly improve generalization performance for all three models. Intriguingly, we find zoneout increases training time for GRU and tanh-RNN, but decreases training time for LSTMs. We focus our investigation on LSTM units, where the dynamics of zoning out states, cells, or both provide interesting insight into zoneout’s behaviour. Figure 3 shows our exploration of zoneout in LSTMs, for various zoneout probabilities of cells and/or hiddens. Zoneout on cells with probability 0.5 or zoneout on states with probability 0.05 both outperform the best-performing recurrent dropout (\( p = 0.25 \)). Combining \( z_c = 0.5 \) and \( z_h = 0.05 \) leads to our best-performing model, which achieves 1.27 BPC, competitive with recent state-of-the-art set by [Ha et al., 2016]. We compare zoneout to recurrent dropout (for \( p \in \{0.05, 0.2, 0.25, 0.5, 0.7\} \)), weight noise (\( \sigma = 0.075 \)), norm stabilizer (\( \beta = 50 \)) [Krueger & Memisevic, 2015], and explore stochastic depth [Huang et al., 2016] in a recurrent setting (analogous to zoning out an entire timestep). We also tried a shared-mask variant of zoneout as used in \( \mu \)MNIST experiments, where the same mask is used for both cells and hiddens. Neither stochastic depth or shared-mask zoneout performed as well as separate masks, sampled per unit. Figure 3 shows the best performance achieved with each regularizer, as well as an unregularized LSTM baseline. Results are reported in Table 1 and learning curves shown in Figure 4. Low zoneout probabilities (0.05-0.25) also improve over baseline in GRUs and tanh-RNNs, reducing BPC from 1.53 to 1.41 for GRU and 1.67 to 1.52 for tanh-RNN. Similarly, low zoneout probabilities work best on the hidden states of LSTMs. 
For memory cells in LSTMs, however, higher probabilities (around 0.5) work well, perhaps because large forget-gate values approximate the effect of cells zoning out. We conjecture that best performance is achieved with zoneout LSTMs because of the stability of having both state and cell. The probability that both will be zoned out is very low, but having one or the other zoned out carries information from the previous timestep forward, while having the other react ‘normally’ to new information. 4.1.2 WORD-LEVEL For the word-level task, we replicate settings from [Zaremba et al., 2014]’s best single-model performance. This network has 2 layers of 1500 units, with weights initialized uniformly in [-0.04, +0.04]. The model is trained for 14 epochs with learning rate 1, after which the learning rate is reduced by a factor of 1.15 after each epoch. Gradient norms are clipped at 10. With no dropout on the non-recurrent connections (i.e. zoneout as the only regularization), we do not achieve competitive results. We did not perform any search over models, and conjecture that the large model size requires regularization of the feed-forward connections. However, adding zoneout (\( z_c = 0.25 \) and \( z_h = 0.025 \)) on the recurrent connections of the model optimized for dropout on the non-recurrent connections, we are able to improve test perplexity from 78.4 to 77.4. We report the best performance achieved with a given technique in Table 1. 1 These metrics are deterministic functions of negative log-likelihood (NLL). Specifically, perplexity is exponentiated NLL, and BPC (entropy) is NLL divided by the natural logarithm of 2. Figure 3: Validation BPC (bits per character) on Character-level Penn Treebank, for different probabilities of zoneout on cells \( z_c \) and hidden states \( z_h \) (left), and comparison of an unregularized LSTM, zoneout \( z_c = 0.5, z_h = 0.05 \), stochastic depth zoneout \( z = 0.05 \), recurrent dropout \( p = 0.25 \), norm stabilizer \( \beta = 50 \), and weight noise \( \sigma = 0.075 \) (right). Figure 4: Training and validation bits-per-character (BPC) comparing LSTM regularization methods on character-level Penn Treebank (left) and Text8 (right). 4.2 TEXT8 Enwik8 is a corpus made from the first \( 10^9 \) bytes of Wikipedia dumped on Mar. 3, 2006. Text8 is a "clean text" version of this corpus, with HTML tags removed, numbers spelled out, symbols converted to spaces, and all text lower-cased. Both datasets were created and are hosted by Mahoney (2011). We use a single-layer network of 2000 units, initialized orthogonally, with batch size 128, learning rate 0.001, and sequence length 180. We optimize with Adam (Kingma & Ba, 2014), clip gradients to a maximum norm of 1 (Pascanu et al., 2012), and use early stopping, again matching the settings of Cooijmans et al. (2016). Results are reported in Table 1, and Figure 4 shows training and validation learning curves for zoneout (\( z_c = 0.5, z_h = 0.05 \)) compared to an unregularized LSTM and to recurrent dropout. 4.3 PERMUTED SEQUENTIAL MNIST In sequential MNIST, pixels of an image representing a number [0-9] are presented one at a time, left to right, top to bottom. The task is to classify the number shown in the image. In pMNIST, the pixels are presented in a (fixed) random order. We compare recurrent dropout and zoneout to an unregularized LSTM baseline.
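As a concrete illustration of the pMNIST setup just described, here is a small sketch of how the permuted pixel sequences can be built; the helper name and the fixed seed are hypothetical, the only essential point being that one random permutation is sampled once and applied identically to every image.

```python
import numpy as np

def to_pmnist_sequences(images, seed=0):
    """Turn (N, 28, 28) digit images into (N, 784, 1) pixel sequences,
    permuted with a single fixed random order shared by all images."""
    rng = np.random.RandomState(seed)
    perm = rng.permutation(28 * 28)         # one fixed permutation
    flat = images.reshape(len(images), -1)  # left-to-right, top-to-bottom scan
    return flat[:, perm][..., None]         # feed one pixel per timestep
```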
All models have a single layer of 100 units, and are trained for 150 epochs using RMSProp (Tieleman & Hinton, 2012) with a decay rate of 0.5 for the moving average of gradient norms. The learning rate is set to 0.001 and the gradients are clipped to a maximum norm of 1 (Pascanu et al., 2012). As shown in Figure 5 and Table 2, zoneout gives a significant performance boost compared to the LSTM baseline and outperforms recurrent dropout (Semeniuta et al., 2016), although recurrent batch normalization (Cooijmans et al., 2016) outperforms all three. However, by adding zoneout to the recurrent batch normalized LSTM, we achieve state-of-the-art performance. For this setting, the zoneout mask is shared between cells and states, and the recurrent dropout probability and zoneout probabilities are both set to 0.15. Table 1: Validation and test results of different models on the three language modelling tasks. Results are reported for the best-performing settings. Performance on Char-PTB and Text8 is measured in bits-per-character (BPC); Word-PTB is measured in perplexity. For Char-PTB and Text8 all models are 1-layer unless otherwise noted; for Word-PTB all models are 2-layer. Results above the line are from our own implementation and experiments. Models below the line are: NR-dropout (non-recurrent dropout), V-Dropout (variational dropout), RBN (recurrent batchnorm), H-LSTM+LN (HyperLSTM + LayerNorm), 3-HM-LSTM+LN (3-layer Hierarchical Multiscale LSTM + LayerNorm). <table> <tr> <th rowspan="2">Model</th> <th colspan="2">Char-PTB</th> <th colspan="2">Word-PTB</th> <th colspan="2">Text8</th> </tr> <tr> <th>Valid</th> <th>Test</th> <th>Valid</th> <th>Test</th> <th>Valid</th> <th>Test</th> </tr> <tr> <td>Unregularized LSTM</td> <td>1.466</td> <td>1.356</td> <td>120.7</td> <td>114.5</td> <td>1.396</td> <td>1.408</td> </tr> <tr> <td>Weight noise</td> <td>1.507</td> <td>1.344</td> <td>--</td> <td>--</td> <td>1.356</td> <td>1.367</td> </tr> <tr> <td>Norm stabilizer</td> <td>1.459</td> <td>1.352</td> <td>--</td> <td>--</td> <td>1.382</td> <td>1.398</td> </tr> <tr> <td>Stochastic depth</td> <td>1.432</td> <td>1.343</td> <td>--</td> <td>--</td> <td>1.337</td> <td>1.343</td> </tr> <tr> <td>Recurrent dropout</td> <td>1.396</td> <td>1.286</td> <td>91.6</td> <td>87.0</td> <td>1.386</td> <td>1.401</td> </tr> <tr> <td>Zoneout</td> <td>1.362</td> <td>1.252</td> <td>81.4</td> <td>77.4</td> <td>1.331</td> <td>1.336</td> </tr> <tr> <td>NR-dropout (Zaremba et al., 2014)</td> <td>--</td> <td>--</td> <td>82.2</td> <td>78.4</td> <td>--</td> <td>--</td> </tr> <tr> <td>V-dropout (Gal, 2015)</td> <td>--</td> <td>--</td> <td>--</td> <td>73.4</td> <td>--</td> <td>--</td> </tr> <tr> <td>RBN (Cooijmans et al., 2016)</td> <td>--</td> <td>1.32</td> <td>--</td> <td>--</td> <td>--</td> <td>1.36</td> </tr> <tr> <td>H-LSTM + LN (Ha et al., 2016)</td> <td>1.281</td> <td>1.250</td> <td>--</td> <td>--</td> <td>--</td> <td>--</td> </tr> <tr> <td>3-HM-LSTM + LN (Chung et al., 2016)</td> <td>--</td> <td><b>1.24</b></td> <td>--</td> <td>--</td> <td>--</td> <td><b>1.29</b></td> </tr> </table> Table 2: Error rates on the pMNIST digit classification task. Zoneout outperforms recurrent dropout, and sets the state of the art when combined with recurrent batch normalization.
<table> <tr> <th>Model</th> <th>Valid</th> <th>Test</th> </tr> <tr> <td>Unregularized LSTM</td> <td>0.092</td> <td>0.102</td> </tr> <tr> <td>Recurrent dropout \( p = 0.5 \)</td> <td>0.083</td> <td>0.075</td> </tr> <tr> <td>Zoneout \( z_c = z_h = 0.15 \)</td> <td>0.063</td> <td>0.069</td> </tr> <tr> <td>Recurrent batchnorm</td> <td>-</td> <td>0.046</td> </tr> <tr> <td>Recurrent batchnorm & Zoneout \( z_c = z_h = 0.15 \)</td> <td>0.045</td> <td><b>0.041</b></td> </tr> </table> ![Line plot showing training and validation error rates for Vanilla LSTM, Recurrent Dropout, and Zoneout on the task of permuted sequential MNIST digit classification.](page_1012_1342_482_312.png) Figure 5: Training and validation error rates for an unregularized LSTM, recurrent dropout, and zoneout on the task of permuted sequential MNIST digit classification. 4.4 GRADIENT FLOW We investigate the hypothesis that identity connections introduced by zoneout facilitate gradient flow to earlier timesteps. Vanishing gradients are a perennial issue in RNNs. As effective as many techniques are for mitigating vanishing gradients (notably the LSTM architecture of Hochreiter & Schmidhuber (1997)), we can always imagine a longer sequence to train on, or a longer-term dependence we want to capture. We compare gradient flow in an unregularized LSTM to zoning out (stochastic identity-mapping) and dropping out (stochastic zero-mapping) the recurrent connections after one epoch of training on pMNIST. We compute the average gradient norms \( \| \frac{\partial L}{\partial c_t} \| \) of loss \( L \) with respect to cell activations \( c_t \) at each timestep \( t \), and for each method, normalize the average gradient norms by the sum of average gradient norms for all timesteps. Figure 6 shows that zoneout propagates gradient information to early timesteps much more effectively than dropout on the recurrent connections, and even more effectively than an unregularized LSTM. The same effect was observed for hidden states \( h_t \). ![Line plot showing normalized average gradient norms over time for dropout, zoneout, and unregularized LSTM on pMNIST](page_328_563_627_246.png) Figure 6: Normalized \( \sum \| \frac{\partial L}{\partial c_t} \| \) of loss \( L \) with respect to cell activations \( c_t \) at each timestep \( t \) for zoneout (\( z_c = 0.5 \)), dropout (\( z_c = 0.5 \)), and an unregularized LSTM on one epoch of pMNIST. 5 CONCLUSION We have introduced zoneout, a novel and simple regularizer for RNNs, which stochastically preserves hidden units' activations. Zoneout improves performance across tasks, outperforming many alternative regularizers to achieve results competitive with the state of the art on the Penn Treebank and Text8 datasets, and state-of-the-art results on pMNIST. While searching over zoneout probabilities allows us to tune zoneout to each task, low zoneout probabilities (0.05-0.2) on states reliably improve performance of existing models. We perform no hyperparameter search to achieve these results, simply using settings from the previous state of the art. Results on pMNIST and word-level Penn Treebank suggest that Zoneout works well in combination with other regularizers, such as recurrent batch normalization, and dropout on feedforward/embedding layers.
We conjecture that the benefits of zoneout arise from two main factors: (1) Introducing stochasticity makes the network more robust to changes in the hidden state; (2) The identity connections improve the flow of information forward and backward through the network. ACKNOWLEDGMENTS We are grateful to Hugo Larochelle, Jan Chorowski, and students at MILA, especially Çağlar Gülçehre, Marcin Moczulski, Chiheb Trabelsi, and Christopher Beckham, for helpful feedback and discussions. We thank the developers of Theano [Theano Development Team (2016)], Fuel, and Blocks [van Merriënboer et al. (2015)]. We acknowledge the computing resources provided by ComputeCanada and CalculQuebec. We also thank IBM and Samsung for their support. We would also like to acknowledge the work of Pranav Shyam on learning RNN hierarchies. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL). The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
accept
Accept (Poster)
7.666667
82e792b35faceba89375fdf19e8e9c03fb433e5d
iclr
2,017
CONTENT2VEC: SPECIALIZING JOINT REPRESENTATIONS OF PRODUCT IMAGES AND TEXT FOR THE TASK OF PRODUCT RECOMMENDATION Thomas Nedelec, Elena Smirnova & Flavian Vasile, Criteo Research, 32 Rue Blanche, Paris, France {t.nedelec,e.smirnova,f.vasile}@criteo.com ABSTRACT We propose a unified product embedded representation that is optimized for the task of retrieval-based product recommendation. We generate this representation using Content2Vec, a new deep architecture that merges product content information such as text and image, and we analyze its performance on hard recommendation setups such as cold-start and cross-category recommendations. In the case of a normal recommendation regime where collaborative information signal is available, we also merge in the product co-occurrence information, propose a second architecture, Content2Vec+, and show its lift in performance versus non-hybrid approaches in both cold-start and normal recommendation regimes. 1 INTRODUCTION Online product recommendation is now a key driver of demand, not only in E-commerce businesses that recommend physical products, such as Amazon (Marshall, 2006), TaoBao (Xiang, 2013) and Ebay (Academy, 2013), but also in online websites that recommend digital content such as news (Yahoo! - Agarwal et al., 2013; Google - Liu et al., 2010), movies (Netflix - Bell & Koren, 2007), music (Spotify - Johnson, 2015), videos (YouTube - Covington et al., 2016) and games (Xbox - Koenigstein et al., 2012). Two of the most challenging aspects of recommendation in general, and of product recommendation in particular, are scalability and freshness. The first one addresses the problem of making fast recommendations in parallel; the second addresses the problem of updating recommendations based on real-time user interaction. One of the most common architectural solutions for recommendation at scale divides the recommendation process in two stages: a candidate generation stage that prunes the number of recommendable items from billions to a few hundred, followed by a second item selection stage that decides the final set of items to be displayed to the user, as shown in Figure 1 (see Mazare, 2016; Cheng et al., 2016; Covington et al., 2016). The first stage generally involves the pre-generation of an inverted index over the set of recommendable products, paired with a real-time retrieval module, similar to a search engine architecture. In our current paper we focus on the cases where the system supports vectorial product queries. The sources of the vectorial representations range from the set of co-occurring products, as in neighborhood-based collaborative filtering, to a low-dimensional representation produced via matrix factorization, or to an embedded representation produced via a deep neural network. The second stage takes the candidate set and decides the final list of recommendations, usually by optimizing a ranking metric. This stage generally has much tighter latency constraints, since its use of real-time signal makes its predictions non-cacheable. Therefore, in terms of model choice, the first stage can be a lot more complex than the second. In terms of impact, the quality of the candidate set coming from the first stage is crucial, since it constitutes a hard threshold on the performance of the second stage and of the overall system.
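To make the two-stage setup concrete, here is a hedged sketch of a retrieve-then-rerank flow over indexed product vectors; the function names (`recommend`, `rerank_fn`) and the brute-force inner-product retrieval are illustrative assumptions standing in for a production inverted index, not the paper's implementation.

```python
import numpy as np

def recommend(query_vec, index_vecs, item_ids, rerank_fn, n_candidates=200, k=10):
    """Two-stage recommendation sketch: cheap vector retrieval, then re-ranking."""
    # Stage 1: candidate generation -- prune the catalog to a few hundred
    # items by inner-product similarity against the query vector.
    scores = index_vecs @ query_vec
    cand = np.argpartition(-scores, n_candidates)[:n_candidates]
    # Stage 2: item selection -- re-rank the candidates with a richer,
    # possibly personalized scoring function under tight latency constraints.
    ranked = sorted(cand, key=lambda idx: rerank_fn(item_ids[idx]), reverse=True)
    return [item_ids[idx] for idx in ranked[:k]]
```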
Because of the feasibility of using a more complex model and the potential impact on the final recommendation performance, we choose to concentrate our efforts on the task of optimal candidate generation. ![2-Stage Recommender System Architecture.](page_184_153_1207_496.png) Figure 1: 2-Stage Recommender System Architecture. We formalize the problem as a link prediction task, where, given a set of past co-purchased products, we try to predict unseen pairs of products. Related work in representation learning for recommendation has investigated the use of collaborative filtering (CF), text and product images, but to our knowledge, there has been no attempt to unify all of these signals in a single representation. We see this as an opportunity to investigate the leveraging effect of generating a Unified Product Representation via a deep-learning approach. In the following, we formally define the set of associated requirements we would like to satisfy: • Relevance: the representation should be optimized for product recommendation relevance, as measured by the associated target metrics (in this case, modeling it as a link prediction task and optimizing for the AUC of product pair prediction). • Coverage: the representation should leverage all available product information (in our case, all product information available in the product catalog together with observed product co-occurrences). • Cross-modality expressiveness: the representation should be able to account for interactions between various information sources such as text and image (it can take into account the fact that the word “red” and the “red” color detector are correlated). • Pair-wise expressiveness: the representation should be able to account for interactions between the two products. • Robustness: the representation should operate well (recommendation performance will not degrade dramatically) in hard recommendation situations such as product cold-start (new products, new product pairs) and cross-category recommendation. These are important use-cases in product recommendation, when the product catalog has high churn (as in the case of flash sales websites or classifieds) or the recommendation needs to leverage cross-advertiser signal (as in the case of new users and user acquisition advertising campaigns). This is a different goal from simply trying to optimize for relevance metrics, due to the inherent limitations of offline metrics in predicting future online performance. • Retrieval-optimized: the representation should be adapted to a content-retrieval setup, both on the query and on the indexing side, meaning that the vectors should be either small, sparse, or both. We propose a modular deep architecture that leverages state-of-the-art architectures for generating embedded representations for image, text and CF input, re-specializes the resulting product embeddings and combines them into a single product vector. This is a very general architecture that can plug in any network in the image and text domains and re-use it for the problem of product recommendation, along with its gains in representation learning for the two domains. We investigate multiple ways of merging the modality-specific product information and propose a new type of residual-inspired unit, which we name the Pairwise Residual Unit, that can model the joint aspects of the different product embeddings, and show that it leads to good improvements.
We analyze our proposed architecture on an Amazon dataset (McAuley et al., 2015) containing information on co-purchased products. We report our improvements versus a text-based and an image-based baseline introduced in previous work by McAuley et al. (2015), and show improvements both on normal and hard recommendation regimes such as cold-start and cross-category setups. Our approach is similar to the recent work of Covington et al. (2016), which proposes a solution for video recommendation at YouTube. Unlike their proposed solution, where, in order to support user vector queries, the candidate generation step co-embeds users and items, we are interested in co-embedding only the product pairs, which generally have a much smaller dimension. In our approach, the personalization step can happen after the per-item candidates are retrieved. Our main contributions are the following: • We propose a novel way of integrating deep-learning item representations in the context of a large-scale recommender system with a 2-stage serving architecture and introduce the new task of Unified Product Representation for optimal candidate selection in both cold-start and normal recommendation setups. • We introduce a new deep architecture that merges content and CF signal for the task of product recommendation and propose the Pairwise Residual Unit, a new learning component that models the joint product representations. • We introduce two novel experimental setups (hard cold start, cross-category) and test that the proposed Content2Vec architecture satisfies the requirements we defined. Though the focus of our work is on improving product recommendation through representation learning, we believe that simple extensions of our approach can be applied to many other recommendation scenarios. The rest of the paper is organized as follows: In Section 2 we cover previous related work and its relationship with our method. In Section 3 we present the Content2Vec model, followed by a detailed description of the resulting architecture in Section 4. In Section 5 we present the experimental setup and go over the results in Section 5.2. In Section 6 we summarize our findings and conclude with future directions of research. 2 RELATED WORK Our work fits in the new wave of deep-learning-based recommendation solutions, which, similarly to classical approaches, fall into 3 categories, namely collaborative filtering based, content based, or hybrid approaches. Several approaches use neural networks to build better item representations based on the co-occurrence matrix. The Prod2Vec algorithm (Grbovic et al., 2015) applies Word2Vec (Mikolov et al., 2013a), an algorithm that is originally a shallow neural language model, to sequences of product ids, to reach a low-dimensional representation of each product. Among other embedding solutions that use the item relationship graph are the more recent extensions to the Word2Vec algorithm, such as GloVe (Pennington et al., 2014) and SWIVEL (Shazeer et al., 2016), and the graph embedding solutions proposed in Node2Vec (Grover & Leskovec, 2016) and SDNE (Wang et al., 2016). Content-based methods recommend an item to a user based upon an item description and a user profile (Pazzani & Billsus, 2007).
This idea was deeply investigated in the information retrieval literature: in the context of web search, DSSM (Huang et al., 2013) and its extensions C-DSSM (Shen et al., 2014) and Deep Crossing (Shan et al., 2016) are some of the most successful methods that specialize query and document text embeddings in order to predict implicit feedback signal such as document click-through rate. In the context of product recommendation, in (McAuley et al., 2015) the authors feed a pre-trained CNN (a CNN trained on the ImageNet dataset, an image classification task that is very different from the task of image-based product recommendation) with product images and use the last layer of the network as the product embedding. This representation is subsequently used to compute similarities between products. Similarly, the authors in (Van den Oord et al., 2013) use CNNs to compute similarities between songs. Yosinski et al. (2014) show that the lower layers of DNNs trained on different tasks are often similar and that good performance can be reached by fine-tuning a network previously trained on another task. In the case of recommendation systems, this fine-tuning was implemented in Veit et al. (2015), where the authors specialize a GoogLeNet architecture to the task of predicting co-view events based on product pictures. The performance of Collaborative Filtering (CF) models is often higher than that of content-based ones, but CF suffers from the cold-start problem. To take advantage of the best of both worlds, hybrid models use both sources of information in order to make recommendations. One possible way to incorporate product information is to use it as side information in the product sequence model, as proposed in Meta-Prod2Vec (Vasile et al., 2016), leading to better product embeddings for products with low signal (low number of co-occurrences). In this work we continue the investigation of using both types of signal, this time at both training and product recommendation time. 3 CONTENT2VEC MODEL Our proposed approach takes the idea of specializing the input representations to the recommendation task and generalizes it for multi-modality inputs, in order to leverage all product information and, in particular, product images and product title and description text. The main design criterion for the Content2Vec architecture is to allow us to easily plug in new sources of signal and to replace existing embedding solutions with new versions. We are also interested in separating product-level embeddings from pair-level embeddings, such that the network can generate product vectors that are readily indexable. As a result, the Content2Vec architecture has three types of modules, as shown in Figure 2: • Content-specific embedding modules that take raw product information and generate the associated vectors. In this paper we cover embedding modules for text, image, categorical attributes and product co-occurrences (for an example, see Figure 3). • Overall product embedding modules that merge all the product information into a unified product representation. • A pair embedding module that merges the product-to-product interactions and computes the final similarity score. In the case of retrieval-optimized product embeddings, this module becomes the inner product between the two items, and all interactions between them are to be approximated within the product-level embedding modules. Content2Vec training follows the architecture, learning module-by-module.
In the first stage, we initialize the content-specific modules with embeddings from proxy tasks (classification for image, language modeling for text) and re-specialize them to the task of product recommendation. For the specialization task, as mentioned in Section 1, we frame the objective as a link prediction task where we try to predict the pairs of products purchased together. We describe the loss function in Section 3.1. In the second stage, we stack the modality-specific embeddings generated in the first stage into a general product vector and learn an additional residual vector using the same learning objective as in the specialization step. This is described in depth in Section 4.2. Finally, in the third stage, given the updated product vectors from stage two, we learn the linear combination between the similarities of the product vectors and make the final prediction. 3.1 LOSS FUNCTION The previous work on learning pair-wise item distances concentrated on using ranking (McFee & Lanckriet, 2010), siamese (Hadsell et al., 2006) or logistic loss (Zheng et al., 2015). For optimizing the link prediction objective we choose the logistic similarity loss (eq. 1), which has the advantage of having a fast approximation via the Negative Sampling loss (Mikolov et al., 2013b) shown in eq. 2. By using Negative Sampling, the prediction step can scale up to a large number of items, by using all positive pairs and sampling the negatives on the fly. Figure 2: Content2Vec architecture combines content-specific modules with a residual vector to produce an embedding vector for each product, then uses these vectors to compute similarities between products. \[ L(\theta) = \sum_{ij} -X_{ij}^{POS} \log \sigma(sim(a_i, b_j)) - X_{ij}^{NEG} \log \sigma(-sim(a_i, b_j)) \] (1) \[ L_{NS}(\theta) = \sum_{ij} -X_{ij}^{POS} (\log \sigma(sim(a_i, b_j)) + \sum_{l=1}^k \mathbb{E}_{n_l \sim P_D} \log \sigma(-sim(a_i, n_l))) \] (2) where \( \theta = (a_i, b_j) \) is the set of model parameters, with \( a_i \) and \( b_j \) the embedding vectors for products A and B; \( sim(a_i, b_j) = \alpha < a_i, b_j > + \beta \) is the similarity function between \( a_i \) and \( b_j \), with \( \alpha \) and \( \beta \) scalar values; \( X_{ij}^{POS} \) is the frequency of the observed item pair \( ij \) (i.e., the frequency of the positive pair \( i,j \)); \( X_{ij}^{NEG} = X_i - X_{ij}^{POS} \) is the frequency of the unobserved item pair \( ij \) (we assume that all unobserved pairs are negatives); \( P_D \) is the probability distribution used to sample negative context examples \( n_l \); and \( k \) is a hyperparameter specifying the number of negative examples per positive example. 4 CONTENT2VEC MODULES 4.1 CONTENT-SPECIFIC EMBEDDING MODULES Content-specific modules can have various architectures and are meant to be used separately in order to increase modularity. Their role is to map all types of item signal into embedded representations. Figure 3: An example of using the content-specific modules to create embedded representations of two products with images, text and CF signal. In Figure 3 we give an illustrative example of mapping a pair of products to their vectorial representations. In the following we analyze four types of input signal and embedding solutions for each one of them. For all of the modules, we use the \( L_{NS} \) loss (see eq. 2) as the specialization loss.
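As a sketch of eq. 2 for a single positive pair, the snippet below evaluates the negative-sampling loss with the scalar-parameterized similarity \( sim(a,b) = \alpha \langle a,b \rangle + \beta \); it omits the \( X_{ij}^{POS} \) frequency weighting over the full sum, and the function name is our own.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(a, b, negatives, alpha=1.0, beta=0.0):
    """L_NS contribution of one positive product pair (a, b), with
    `negatives` holding k product vectors sampled from P_D."""
    sim_pos = alpha * np.dot(a, b) + beta
    loss = -np.log(sigmoid(sim_pos))        # pull the positive pair together
    for n in negatives:                     # push sampled negatives apart
        sim_neg = alpha * np.dot(a, n) + beta
        loss += -np.log(sigmoid(-sim_neg))
    return loss
```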
4.1.1 EMBEDDING PRODUCT IMAGES: ALEXNET Model and proxy task: CNN for Image Classification. For generating the image embeddings we propose reusing a model trained for image classification, as in previous work by Krizhevsky et al. (2012) and He & McAuley (2015). In (He & McAuley, 2015), the authors have shown how to use the Inception architecture (Szegedy et al., 2015) and specialize it for the product recommendation task. However, the Inception architecture is very deep and requires extensive training time. For ease of experimentation we use AlexNet, a simpler architecture that was also a winner of the ImageNet challenge (Krizhevsky et al., 2012), prior to Inception. In Section 5.2, we show that, even if simpler, the AlexNet-based solution can perform very well on the recommendation task when combined with additional product text information. For our experiments, we use the pretrained version of AlexNet available on the University of Toronto website. We experimented with two different ways to specialize the representation in order to compute product similarities. In the first one, we learn a weighted inner product between the two representations (the fc7 layer of AlexNet). In the second one, we specialize the fc7 layer to detect product similarities. The second approach led to much better performance and is the one for which we report final results. 4.1.2 EMBEDDING PRODUCT TEXT: WORD2VEC AND CNN ON SENTENCES Model and proxy task: Word2Vec for Product Language Modeling. For generating word embeddings, we propose reusing Word2Vec (Mikolov et al., 2013b), a model for generating word representations that has been employed in a variety of text-understanding tasks. More recently, it has been shown in (Pennington et al., 2014) that Word2Vec is closely linked with matrix factorization techniques applied on the word co-occurrence matrix. For Content2Vec, we chose to pretrain Word2Vec on the entire product catalog text information and not use an available set of word embeddings such as the one created on the Google corpus. The main reason is that the text distribution within product descriptions is quite different from the general distribution. For example, the word 'jersey' has a very different conditional distribution within the product description corpus versus general online text. Text CNN (Kim, 2014) offers a simple solution for sentence-level embeddings using convolutions. The convolutions act as a form of n-gram filters, allowing the network to embed sentence-level information and to specialize word embeddings to higher-order tasks such as text classification or sentiment analysis. To the best of our knowledge, this is the first attempt to employ them for the task of product recommendation. For our task, we generate sentences based on the product titles and descriptions. 4.1.3 EMBEDDING PRODUCT CO-OCCURRENCES: PROD2VEC Prod2Vec (Grbovic et al., 2015) is an extension of the Word2Vec algorithm to product shopping sequences. As a result, Prod2Vec can be seen as a matrix factorization technique on the product co-occurrence matrix. In Content2Vec, the Prod2Vec-based similarity contains all of the information that can be derived from the sequential aspect of the user behavior, without taking into account the per-product meta-data. 4.1.4 EMBEDDING CATEGORICAL PRODUCT META-DATA: META-PROD2VEC Meta-Prod2Vec (Vasile et al., 2016) improves upon Prod2Vec by using the product meta-data side information to regularize the final product embeddings.
In Content2Vec, we use a similar technique of co-embedding product categorical information with product ids to generate the embedding values for the categorical features. 4.2 JOINT PRODUCT EMBEDDING: PAIRWISE RESIDUAL UNIT As stated in Section 1, the function of the product embedding module is two-fold: first, to model all interactions that exist between the modality-specific embeddings with respect to the final optimization objective, and second, to approximate interaction terms between the products that cannot be explained by a linear combination of the modality-specific similarities. With this in mind, we introduce a new type of learning unit, the Pairwise Residual Unit (eq. 4), which, similarly to the original residual unit introduced in He et al. (2015) (eq. 3), allows the layers to learn incremental, i.e. residual, representations (see Figure 4). In Hardt & Ma (2016) the authors motivate the use of residual units as helping preserve the representations learned in the previous layers. In our case we are interested in preserving the specialized image and text representations and learning an additional representation for their interactions. Though in previous work most residual units use at least two ReLU layers, we observe good results using just one. In order to model interactions between modalities, we could also learn a fully connected layer initialized with identity that takes as input the concatenated modality-specific vectors. However, in order to have a smaller number of parameters and increase model comprehensibility, we would like to keep the modality-specific representations separate and to model the final prediction as an ensemble. \[ y = F(x) + x \tag{3} \] \[ y = sim(F(x_1), F(x_2)) + sim(x_1, x_2) \tag{4} \] where \( x_1 \) and \( x_2 \) are the two product embedding vectors (obtained by stacking the modality-specific vectors), \( sim(\cdot,\cdot) \) is a similarity function over two embedding vectors, and \( F(x) \) is a Rectified Linear Unit. To be able to measure the incremental value of introducing a residual vector, we introduce a baseline architecture, denoted *Content2Vec-linear*, that computes the final prediction based on the linear combination of the modality-specific similarities, with the associated similarity function defined in eq. 5. ![Pairwise Residual Unit diagram](page_184_120_1207_563.png) Figure 4: Pairwise Residual Unit \[ sim_{c2v}(a_i, b_j) = \sum_{m \in Modalities} w_m \sigma(sim_m(a_i, b_j)) \] (5) Under this notation, the residual-based architecture, denoted Content2Vec-res, minimizes \( L_{NS} \) with the similarity function defined in eq. 6: \[ sim_{c2v-res}(a_i, b_j) = \sum_{m \in (Modalities+Residual)} w_m \sigma(sim_m(a_i, b_j)) \] (6) In order to learn the residual vector, we keep the modality-specific similarities fixed and co-train the final weights of each of the modalities together with the product-specific residual layers.
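A minimal sketch of the pairwise residual similarity of eq. 4 follows, assuming a single shared ReLU layer \( F(x) = \text{relu}(Wx) \) and the scalar-parameterized similarity from Section 3.1; the parameter names and shapes are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pairwise_residual_similarity(x1, x2, W, alpha=1.0, beta=0.0):
    """Eq. 4: similarity of the residual transforms F(x1), F(x2) added to
    the plain similarity of the stacked modality vectors x1, x2."""
    def sim(u, v):
        return alpha * np.dot(u, v) + beta
    return sim(relu(W @ x1), relu(W @ x2)) + sim(x1, x2)
```

In Content2Vec-res, this residual similarity simply enters eq. 6 as one additional term alongside the modality-specific similarities.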
For example, in the case of using only image and text signals, our final predictor can be defined as in eq. 7, where \( P_{txt} \) and \( P_{img} \) are pre-set and \( w_{txt}, w_{img}, w_{res} \) and \( P_{res} \) are learned together: \[ P(pos|a, b) = \sigma(w_{txt}P_{txt}(pos|a_{txt}, b_{txt}) + w_{img}P_{img}(pos|a_{img}, b_{img}) + w_{res}P_{res}(pos|a_{res}, b_{res})) \] (7) where \( pos \) is the positive outcome of products A and B being bought together and \( P_{res}(pos|a, b) = \sigma(\alpha < F([a_{txt}, a_{img}]), F([b_{txt}, b_{img}]) > +\beta) \). In Section 5.2 we compare the performance of Content2Vec-res and Content2Vec-linear and show that, as expected, the proposed architecture surpasses the performance of the linear model, while allowing for a retrieval-based candidate scoring solution. 4.3 PAIR EMBEDDING MODULE In a retrieval-based architecture, the pair embedding module cannot support more than a simple linear combination of the product embedding vectors, such that the final score can be computed via inner product. However, we are still interested in the trade-off in performance between an inner-product-based candidate scoring and a model that allows for explicit interaction terms between the items. To this end, we introduce two explicit interaction models: Content2Vec-crossfeat, a model where we discretize the text- and image-specific similarity scores and create explicit feature conjunctions between them, and Content2Vec-embedpairs, a model where we use a technique similar to the Pairwise Residual Unit, in this case modeling the residual of the linear similarity directly as a vector in the pair embedding layer, as shown in Figure 5. ![The two types of Pairwise Residual Units. By comparison with the first version that outputs a scalar, the second one outputs a vector that goes directly into the final prediction layer](page_184_232_1207_496.png) Figure 5: The two types of Pairwise Residual Units. By comparison with the first version that outputs a scalar, the second one outputs a vector that goes directly into the final prediction layer. In Section 5.2 we show that the two models have, as expected, better performance than the linear model and that the pair embedding variant is slightly better. 5 EXPERIMENTAL RESULTS 5.1 DATASET We perform our evaluation on the publicly available Amazon dataset (McAuley et al., 2015) that represents a collection of products that were co-bought on the Amazon website. Each item has a rich description containing product image, text and category (any of the modalities can be missing). In terms of size, the dataset contains around 10M pairs of products. We concentrate on the subgraph of Book and Movie product pairs, because both categories are large and they have a reasonably sized intersection. This allows us to look at recommendation performance on cross-category pairs (evaluating a model trained only on Book pairs on predicting Movie co-bought items) and mixed-category pairs (evaluating the models on Book-Movie product pairs). Based on the full Book & Movies data we generate three datasets with different characteristics: The first dataset simulates a hard cold-start regime, where all product pairs used in validation and testing are over products unseen in training. This tests the hardest recommendation setup, where all testing data is new. We decided to benchmark all of our hyperparameters on this regime and use the best setup on all datasets, since tuning on the harder dataset ensures the best generalization error (results shown in Table 1).
The second dataset simulates a non-cold-start regime, where the vast majority of the products in the test set are available at training time. The dataset is generated by taking the top 100k most connected products in the original dataset and keeping the links between them (results shown in Table 2). The third dataset simulates a soft cold-start regime, where some of the products in the test set are available at training time. The dataset is generated by taking the top 200k most connected products in the original dataset and sampling 10% of the links between them (results shown in Table 3). Hyper-parameters. We fixed the sizes of the embedding vectors for the image CNN module to 4096 hidden units, for the text CNN module to 256, for the Prod2Vec module to 50, and for the residual representation to 128. For optimization we use the Adam algorithm and we manually set the initial learning rate based on the validation set performance. The batch sizes vary for different datasets. We train all the models until validation set performance stops increasing. Evaluation task. We evaluate the recommendation methods on the product link prediction task, similar to (He & McAuley, 2015). We consider the observed product pairs as positive examples and all unknown pairs as negatives. We generate negative pairs according to the popularity of the products in the positive pairs (negative examples between popular products are more likely to be generated) with a positive-to-negative ratio of 1:2. Evaluation metrics. For the link prediction task, we use the Area Under the Curve (AUC) of the Precision/Recall curve as our evaluation metric. Competing methods: • ImageCNN: prediction based on specialized image embeddings similarity • TextCNN: prediction based on specialized text embeddings similarity • Content2Vec-linear: prediction based on the linear combination of text and image similarities • Content2Vec-crossfeat: prediction based on the linear combination of discretized image and text similarities and their conjunctions • Content2Vec-res: prediction based on the linear combination of text and image similarities plus product-level residual vector similarities • Content2Vec-embedpairs: prediction based on the linear combination of text and image similarities and a pair-level residual component • Prod2Vec: prediction based on the product vectors coming from the decomposition of the co-purchase matrix • Content2Vec+: prediction based on the ensemble of Prod2Vec and Content2Vec models 5.2 RESULTS The results on the hard and soft cold-start datasets (Tables 1 and 3) show that our main proposed method, Content2Vec-res, can leverage the additional signal provided by each of the input modalities in a joint manner and leads to significant gains in AUC versus the single-signal baselines (ImageCNN, TextCNN) and their linear combination (Content2Vec-linear). From the point of view of robustness, Content2Vec-res learns product representations that perform better than the baseline methods on out-of-sample recommendations such as cross-category pairs and mixed-category pairs (Table 1). We observe that adding an additional layer that represents pair-level interactions does not lead to big improvements in either of the two models we investigated (Content2Vec-crossfeat, Content2Vec-embedpairs), confirming that a product retrieval-based recommender system can achieve state-of-the-art results.
Finally, Content2Vec-res+, our proposed hybrid architecture that combines content and CF signal, achieves better performance than the content-only and CF-only models, with bigger lifts in the case of the third dataset (Table 3), where the CF signal is weaker due to higher sparsity. <table> <tr> <th>Recommendation Model</th> <th>Books</th> <th>Movies</th> <th>Mixed</th> </tr> <tr> <th colspan="4">Models trained on Books dataset</th> </tr> <tr> <td>Book ImageCNN specialized</td> <td>81%</td> <td>78%</td> <td>64%</td> </tr> <tr> <td>Book TextCNN</td> <td>72%</td> <td>79%</td> <td>76%</td> </tr> <tr> <td>Book Content2Vec-linear</td> <td>83%</td> <td>83%</td> <td>76%</td> </tr> <tr> <td>Book Content2Vec-crossfeat</td> <td>86%</td> <td>83%</td> <td>83%</td> </tr> <tr> <td>Book Content2Vec-res</td> <td>89%</td> <td>83%</td> <td>77%</td> </tr> <tr> <td>Book Content2Vec-embedpairs</td> <td>90%</td> <td>82%</td> <td>77%</td> </tr> <tr> <th colspan="4">Models trained on Movies dataset</th> </tr> <tr> <td>Movie ImageCNN specialized</td> <td>59%</td> <td>92%</td> <td>60%</td> </tr> <tr> <td>Movie TextCNN</td> <td>63%</td> <td>90%</td> <td>65%</td> </tr> <tr> <td>Movie Content2Vec-linear</td> <td>64%</td> <td>94%</td> <td>65%</td> </tr> <tr> <td>Movie Content2Vec-crossfeat</td> <td>62%</td> <td>94%</td> <td>63%</td> </tr> <tr> <td>Movie Content2Vec-res</td> <td>60%</td> <td>95%</td> <td>66%</td> </tr> <tr> <td>Movie Content2Vec-embedpairs</td> <td>64%</td> <td>94%</td> <td>65%</td> </tr> </table> Table 1: AUC results of image- and text-based embeddings on the hard cold-start dataset, on Book, Movie and Mixed-category test product pairs. <table> <tr> <th>Recommendation Model</th> <th>Test</th> </tr> <tr> <td>Content2Vec-linear</td> <td>84%</td> </tr> <tr> <td>Content2Vec-res</td> <td>87%</td> </tr> <tr> <td>Prod2Vec</td> <td>96%</td> </tr> <tr> <td>Content2Vec-linear+</td> <td>97%</td> </tr> <tr> <td>Content2Vec-res+</td> <td>97%</td> </tr> </table> Table 2: AUC results on the non-cold-start dataset. <table> <tr> <th>Recommendation Model</th> <th>Test</th> </tr> <tr> <td>ImageCNN</td> <td>80%</td> </tr> <tr> <td>TextCNN</td> <td>78%</td> </tr> <tr> <td>Content2Vec-linear</td> <td>88%</td> </tr> <tr> <td>Content2Vec-res</td> <td>89%</td> </tr> <tr> <td>Content2Vec-embedpairs</td> <td>90%</td> </tr> <tr> <td>Prod2Vec</td> <td>86%</td> </tr> <tr> <td>Content2Vec-linear+</td> <td>89%</td> </tr> <tr> <td>Content2Vec-res+</td> <td>92%</td> </tr> <tr> <td>Content2Vec-embedpairs+</td> <td>92%</td> </tr> </table> Table 3: AUC results on the soft cold-start dataset. 6 CONCLUSIONS This work has several key contributions. We show how to use all product signal for the task of product recommendation using a modular architecture that can leverage fast-evolving solutions for each type of input modality. We define a set of requirements for evaluating the resulting product embeddings and show that our method leads to significant improvements over the single-signal approaches in hard recommendation situations such as cold-start and cross-category evaluation. Finally, in order to model the joint aspects of the product embeddings, we introduce a new type of learning unit, named the Pairwise Residual Unit, and show the resulting gains on a real product co-purchases dataset. In the current work we have addressed all but one of the desired requirements, namely generating retrieval-optimized embeddings.
For the next steps, we want to pursue sparse and compressed product representations, in order to help the performance of the final product retrieval system. REFERENCES DataStax Academy. Slideshare presentation. http://www.slideshare.net/planetcassandra/e-bay-nyc, March 2013. Accessed: 2016-04-08. Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, and Raghu Ramakrishnan. Content recommendation on web portals. Communications of the ACM, 56(6):92–101, 2013. Robert M Bell and Yehuda Koren. Lessons from the Netflix Prize challenge. ACM SIGKDD Explorations Newsletter, 9(2):75–79, 2007. Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. Wide & deep learning for recommender systems. arXiv preprint arXiv:1606.07792, 2016. Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for YouTube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, pp. 191–198. ACM, 2016. Mihajlo Grbovic, Vladan Radosavljevic, Nemanja Djuric, Narayan Bhamidipati, Jaikit Savla, Varun Bhagwan, and Doug Sharp. E-commerce in your inbox: Product recommendations at scale. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’15, pp. 1809–1818, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3664-2. doi: 10.1145/2783258.2788627. URL http://doi.acm.org/10.1145/2783258.2788627. Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. 2016. Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), volume 2, pp. 1735–1742. IEEE, 2006. Moritz Hardt and Tengyu Ma. Identity matters in deep learning. arXiv preprint arXiv:1611.04231, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. Ruining He and Julian McAuley. VBPR: Visual Bayesian personalized ranking from implicit feedback. arXiv preprint arXiv:1510.01784, 2015. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pp. 2333–2338. ACM, 2013. Chris Johnson. Algorithmic music recommendations at Spotify, 2015. Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014. Noam Koenigstein, Nir Nice, Ulrich Paquet, and Nir Schleyen. The Xbox recommender system. In Proceedings of the sixth ACM conference on Recommender systems, pp. 281–284. ACM, 2012. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012. Jiahui Liu, Peter Dolan, and Elin Rønby Pedersen. Personalized news recommendation based on click behavior. In Proceedings of the 15th international conference on Intelligent user interfaces, pp. 31–40. ACM, 2010. Matt Marshall. Venture Beat article. http://venturebeat.com/2006/12/10/aggregate-knowledge-raises-5m-from-kleiner-on-a-roll/, December 2006. Accessed: 2016-04-08. Pierre-Emmanuel Mazare. Product recommendation at Criteo. http://labs.criteo.com/2016/09/product-recommendation-criteo/, September 2016. Accessed: 2016-10-26.
Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. Image-based recommendations on styles and substitutes. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 43–52. ACM, 2015. Brian McFee and Gert R Lanckriet. Metric learning to rank. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 775–782, 2010. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119, 2013b. Michael J Pazzani and Daniel Billsus. Content-based recommendation systems. In The adaptive web, pp. 325–341. Springer, 2007. Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543, Doha, Qatar, October 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/D14-1162. Ying Shan, T Ryan Hoens, Jian Jiao, Haijing Wang, Dong Yu, and JC Mao. Deep crossing: Web-scale modeling without manually crafted combinatorial features. 2016. Noam Shazeer, Ryan Doherty, Colin Evans, and Chris Waterson. Swivel: Improving embeddings by noticing what’s missing. arXiv preprint arXiv:1602.02215, 2016. Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Grégoire Mesnil. Learning semantic representations using convolutional neural networks for web search. In Proceedings of the 23rd International Conference on World Wide Web, pp. 373–374. ACM, 2014. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. Deep content-based music recommendation. In Advances in Neural Information Processing Systems, pp. 2643–2651, 2013. Flavian Vasile, Elena Smirnova, and Alexis Conneau. Meta-Prod2Vec: Product embeddings using side-information for recommendation. arXiv preprint arXiv:1607.07326, 2016. Andreas Veit, Balazs Kovacs, Sean Bell, Julian McAuley, Kavita Bala, and Serge Belongie. Learning visual clothing style with heterogeneous dyadic co-occurrences. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4642–4650, 2015. Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. 2016. Tracey Xiang. Technode article. http://technode.com/2013/06/14/how-does-taobao-uses-user-data/, June 2013. Accessed: 2016-04-08. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in neural information processing systems, pp. 3320–3328, 2014. Lilei Zheng, Khalid Idrissi, Christophe Garcia, Stefan Duffner, and Atilla Baskurt. Logistic similarity metric learning for face verification. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1951–1955. IEEE, 2015.
ABSTRACT We propose a unified product embedded representation that is optimized for the task of retrieval-based product recommendation. We generate this representation using Content2Vec, a new deep architecture that merges product content information such as text and image, and we analyze its performance on hard recommendation setups such as cold-start and cross-category recommendations. In the case of a normal recommendation regime where collaborative information signal is available, we merge the product co-occurrence information and propose a second architecture Content2vec+ and show its lift in performance versus non-hybrid approaches in both cold start and normal recommendation regimes. 1 INTRODUCTION Online product recommendation is now a key driver of demand, not only in E-commerce businesses that recommend physical products, such as Amazon (Marshall [2006]), TaoBao (Xiang [2013]) and Ebay (Academy [2013]), but also in online websites that recommend digital content such as news (Yahoo! - Agarwal et al. [2013]), Google - Liu et al. (2010), movies (Netflix - Bell & Koren [2007]), music (Spotify - Johnson [2015]), videos (YouTube - Covington et al. [2016]) and games (Xbox - Koenigstein et al. [2012]). Two of the most challenging aspects of recommendation in general and of product recommendation in particular, are scalability and freshness. The first one addresses the problem of making fast recommendations in parallel, the second addresses the problem of updating recommendations based on real-time user interaction. One of the most encountered architecture solutions for recommendation at scale divides the recommendation process in two stages: a candidate generation stage that prunes the number of recommendable items from billions to a couple of hundreds, followed by a second item selection stage that decides the final set of items to be displayed to the user, as shown in Figure 1 (see Mazare [2016], Cheng et al. [2016], Covington et al. [2016]). The first stage generally implies the pre-generation of an inverted index over the set of recommendable products, paired with a real-time retrieval module, similarly to a search engine architecture. In our current paper we focus on the cases where the system supports vectorial product queries. The sources of the vectorial representations range from the set of co-occurring products, like in the case of neighborhood-based collaborative filtering, to a low-dimensional representation produced via matrix factorization or to an embedded representation produced via a deep neural network. The second stage takes the candidate set and decides the final list of recommendations, usually by optimizing a ranking metric. This stage has in general a lot more constraints in terms of latency, due to its use of real-time signal that makes its predictions not cacheable. Therefore, in terms of model choice, the first stage can be a lot more complex than the second. In terms of impact, the quality of the candidate set coming from the first stage is crucial, since this constitutes a hard threshold on the performance of the second stage and of the overall system. Because of the feasibility of using a more complex model and the potential impact on the final recommendation performance, we choose to concentrate our efforts on the task of optimal candi- ![2-Stage Recommender System Architecture.](page_184_153_1207_496.png) Figure 1: 2-Stage Recommender System Architecture. date generation. 
We formalize the problem as a link prediction task, where given a set of past co-purchased products we try to predict unseen pairs of products. Related work in representation learning for recommendation investigated the use of collaborative filtering (CF), text and product images, but to our knowledge, there has been no attempt to unify all of these signals in a single representation. We see this as an opportunity to investigate the leveraging effect of generating a Unified Product Representation via a deep-learning approach. In the following, we formally define the set of associated requirements we would like to satisfy: • Relevance: the representation should be optimized for product recommendation relevance, as measured by the associated target metrics (in this case, modeling it as a link prediction task and optimizing for the AUC of product pair prediction). • Coverage: the representation should leverage all available product information (in our case, all product information available in the product catalog together with observed product co-occurrences). • Cross-modality expressiveness: the representation should be able to account for interactions between various information sources such as text and image (can take into account the fact that the word “red” and the “red” color detector are correlated). • Pair-wise expressiveness: the representation should be able to account for interactions between the two products. • Robustness: the representation should operate well (recommendation performance will not degrade dramatically) in hard recommendation situations such as product cold-start (new products, new product pairs) and cross-category recommendation. These are important use-cases in product recommendation, when the product catalog has high churn (as in the case of flash sales websites or classifieds) or the recommendation needs to leverage cross-advertiser signal (as in the case of new users and user acquisition advertising campaigns). This is a different goal from simply trying to optimize for relevance metrics, due to the inherent limitations of offline metrics in predicting future online performance. • Retrieval-optimized: the representation should be adapted to a content-retrieval setup, both on the query and on the indexing side, meaning that the vectors should be either small, sparse or both. We propose a modular deep architecture that leverages state-of-the-art architectures for generating embedded representations for image, text and CF input, re-specializes the resulting product embeddings and combines them into a single product vector. This is a very general architecture that can plugin any networks in the image and text domain and re-use them for the problem of product recommendation, along with their gains in representation learning for the two domains. We investigate multiple ways of merging the modality-specific product information and propose a new type of residual-inspired unit, which we name Pairwise Residual Unit, that can model the joint aspects of the different product embeddings and show that it leads to good improvements. We analyze our proposed architecture on an Amazon dataset (McAuley et al., 2015) containing information on co-purchased products. We report our improvements versus a text and an image-based baseline, that was introduced in previous work by (cite Julian) and show improvements both on normal and hard recommendation regimes such as cold-start and cross-category setups. 
Our approach is similar to the recent work by (Covington et al., 2016), which proposes a solution for video recommendation at YouTube. Unlike their proposed solution, where, in order to support user vector queries, the candidate generation step co-embeds users and items, we are interested in co-embedding just the product pairs, which generally have a much smaller dimension. In our approach, the personalization step can happen after the per-item candidates are retrieved. Our main contributions are the following:

• We propose a novel way of integrating deep-learning item representations in the context of a large-scale recommender system with a 2-stage serving architecture, and introduce the new task of Unified Product Representation for optimal candidate selection in both cold-start and normal recommendation setups.

• We introduce a new deep architecture that merges content and CF signal for the task of product recommendation, and propose the Pairwise Residual Unit, a new learning component that models the joint product representations.

• We introduce two novel experimental setups (hard cold-start, cross-category) and verify that the proposed Content2Vec architecture satisfies the requirements we defined.

Though the focus of our work is on improving product recommendation through representation learning, we believe that simple extensions of our approach can be applied to many other recommendation scenarios. The rest of the paper is organized as follows: in Section 2 we cover previous related work and its relationship with our method. In Section 3 we present the Content2Vec model, followed by a detailed description of the resulting architecture in Section 4. In Section 5 we present the experimental setup and go over the results in Section 5.2. In Section 6 we summarize our findings and conclude with future directions of research.

2 RELATED WORK

Our work fits in the new wave of deep-learning-based recommendation solutions, which, similarly to classical approaches, fall into three categories: collaborative filtering based, content based, and hybrid approaches. Several approaches use neural networks to build better item representations based on the co-occurrence matrix. The Prod2Vec algorithm (see (Grbovic et al., 2015)) applies Word2Vec (Mikolov et al., 2013a), an algorithm that is at its origin a shallow neural language model, to sequences of product ids, to reach a low-dimensional representation of each product. Among other embedding solutions that use the item relationship graph are the more recent extensions to the Word2Vec algorithm, such as GloVe (Pennington et al., 2014) and SWIVEL (Shazeer et al., 2016), and the graph embedding solutions proposed in Node2Vec (Grover & Leskovec, 2016) and SDNE (Wang et al., 2016). Content-based methods recommend an item to a user based upon an item description and a user profile (Pazzani & Billsus, 2007). This idea was deeply investigated in the information retrieval literature: in the context of web search, DSSM (Huang et al., 2013) and its extensions (Shen et al., 2014) (C-DSSM) and (Shan et al., 2016) are some of the most successful methods that specialize query and document text embeddings in order to predict implicit feedback signals such as document click-through rate.
In the context of product recommendation, in (McAuley et al., 2015) the authors feed product images to a pre-trained CNN (a CNN trained on the ImageNet dataset, an image classification task that is very different from the task of image-based product recommendation) and use the last layer of the network as the product embedding. This representation is subsequently used to compute similarities between products. Similarly, the authors in (Van den Oord et al., 2013) use CNNs to compute similarities between songs. Yosinski et al. (2014) show that the low layers of DNNs trained on different tasks are often similar and that good performance can be reached by fine-tuning a network previously trained on another task. In the case of recommendation systems, this fine-tuning was implemented in Veit et al. (2015), where the authors specialize a GoogLeNet architecture to the task of predicting co-view events based on product pictures.

The performance of Collaborative Filtering (CF) models is often higher than that of content-based ones, but CF suffers from the cold-start problem. To take advantage of the best of both worlds, hybrid models use both sources of information in order to make recommendations. One possible way to incorporate product information is to use it as side information in the product sequence model, as proposed in Meta-Prod2Vec (Vasile et al., 2016), leading to better product embeddings for products with low signal (low number of co-occurrences). In this work we continue the investigation of using both types of signal, this time at both training and product recommendation time.

3 CONTENT2VEC MODEL

Our proposed approach takes the idea of specializing the input representations to the recommendation task and generalizes it to multi-modality inputs, in order to leverage all product information and, in particular, product images and product title and description text. The main criterion for the Content2Vec architecture is to allow us to easily plug in new sources of signal and to replace existing embedding solutions with new versions. We are also interested in separating product-level embeddings from pair-level embeddings, such that the network can generate product vectors that are readily indexable. As a result, the Content2Vec architecture has three types of modules, as shown in Figure 2:

• Content-specific embedding modules that take raw product information and generate the associated vectors. In this paper we cover embedding modules for text, image, categorical attributes and product co-occurrences (for an example, see Figure 3).

• Overall product embedding modules that merge all the product information into a unified product representation.

• A pair embedding module that merges the product-to-product interactions and computes the final similarity score. In the case of retrieval-optimized product embeddings, this module becomes the inner product between the two items, and all interactions between them are to be approximated within the product-level embedding modules.

Content2Vec training follows this modular architecture, learning module-by-module. In the first stage, we initialize the content-specific modules with embeddings from proxy tasks (classification for image, language modeling for text) and re-specialize them to the task of product recommendation. For the specialization task, as mentioned in Section 1, we frame the objective as a link prediction task where we try to predict the pairs of products purchased together. We describe the loss function in Section 3.1.
In the second stage, we stack the modality-specific embeddings generated in the first stage into a general product vector and learn an additional residual vector using the same learning objective as in the specialization step. This will be described in depth in Section 4.2. Finally, in the third stage, given the updated product vectors from stage two, we learn the linear combination of the similarities of the product vectors and make the final prediction.

3.1 LOSS FUNCTION

Previous work on learning pair-wise item distances concentrated on using ranking (McFee & Lanckriet, 2010), siamese (Hadsell et al., 2006) or logistic loss (Zheng et al., 2015). For optimizing the link prediction objective we choose the logistic similarity loss (eq. 1), which has the advantage of having a fast approximation via the Negative Sampling loss (Mikolov et al., 2013b), shown in eq. 2. By using Negative Sampling, the prediction step can scale up to a large number of items, by using all positive pairs and sampling the negatives on the fly.

Figure 2: Content2Vec architecture combines content-specific modules with a residual vector to produce an embedding vector for each product, then uses these vectors to compute similarities between products.

\[ L(\theta) = \sum_{ij} -X_{ij}^{POS} \log \sigma(sim(a_i, b_j)) - X_{ij}^{NEG} \log \sigma(-sim(a_i, b_j)) \] (1)

\[ L_{NS}(\theta) = \sum_{ij} -X_{ij}^{POS} \left( \log \sigma(sim(a_i, b_j)) + \sum_{l=1}^k \mathbb{E}_{n_l \sim P_D} \log \sigma(-sim(a_i, n_l)) \right) \] (2)

where: \( \theta = (a_i, b_j) \) is the set of model parameters, with \( a_i \) and \( b_j \) the embedding vectors for the products A and B; \( sim(a_i, b_j) = \alpha < a_i, b_j > + \beta \) is the similarity function between \( a_i \) and \( b_j \), with \( \alpha \) and \( \beta \) scalar values; \( X_{ij}^{POS} \) is the frequency of the observed item pair \( ij \) (i.e., the frequency of the positive pair \( i,j \)); \( X_{ij}^{NEG} = X_i - X_{ij}^{POS} \) is the frequency of the unobserved item pair \( ij \) (we assume that all unobserved pairs are negatives); \( P_D \) is the probability distribution used to sample negative context examples \( n_l \); and \( k \) is a hyperparameter specifying the number of negative examples per positive example.

4 CONTENT2VEC MODULES

4.1 CONTENT-SPECIFIC EMBEDDING MODULES

Content-specific modules can have various architectures and are meant to be used separately in order to increase modularity. Their role is to map all types of item signal into embedded representations. In Figure 3 we give an illustrative example of mapping a pair of products to their vectorial representations. In the following we analyze four types of input signal and the embedding solutions for each of them. For all of the modules, we use the \( L_{NS} \) loss (see eq. 2) as the specialization loss.

Figure 3: An example of using the content-specific modules to create embedded representations of two products with images, text and CF signal.

4.1.1 EMBEDDING PRODUCT IMAGES: ALEXNET

Model and proxy task: CNN for Image Classification. For generating the image embeddings we propose reusing a model trained for image classification, as in previous work by [Krizhevsky et al., 2012] and [He & McAuley, 2015]. In [He & McAuley, 2015], the authors have shown how to use the Inception architecture [Szegedy et al., 2015] and specialize it for the product recommendation task. However, the Inception architecture is very deep and requires extensive training time.
For ease of experimentation we use AlexNet, a simpler architecture that was also a winner on the ImageNet task [Krizhevsky et al., 2012], prior to the Inception network. In Section 5.2 we will show that, even though it is simpler, when combined with additional product text information the AlexNet-based solution can perform very well on the recommendation task. For our experiments, we use the pre-trained version of AlexNet available on the University of Toronto website. We experimented with two different ways to specialize the representation in order to compute product similarities. In the first one, we learn a weighted inner product between the two representations (the fc7 layer of AlexNet). In the second one, we specialize the fc7 layer to detect product similarities. The second approach led to much better performance and is the one for which we report final results.

4.1.2 EMBEDDING PRODUCT TEXT: WORD2VEC AND CNN ON SENTENCES

Model and proxy task: Word2Vec for Product Language Modeling. For generating word embeddings, we propose reusing Word2Vec [Mikolov et al., 2013b], a model for generating language models that has been employed in a variety of text understanding tasks. More recently, it has been shown in [Pennington et al., 2014] that Word2Vec is closely linked to matrix factorization techniques applied to the word co-occurrence matrix. For Content2Vec, we chose to pre-train Word2Vec on the entire product catalog text information rather than use an available set of word embeddings such as the ones trained on the Google corpus. The main reason is that the text distribution within product descriptions is quite different from the general text distribution. For example, the word 'jersey' has a very different conditional distribution within the product description corpus than in general online text.

Text CNN [Kim (2014)] offers a simple solution for sentence-level embeddings using convolutions. The convolutions act as a form of n-gram filters, allowing the network to embed sentence-level information and to specialize word embeddings to higher-order tasks such as text classification or sentiment analysis. To the best of our knowledge, this is the first attempt to employ them for the task of product recommendation. For our task, we generate sentences based on the product titles and descriptions.

4.1.3 EMBEDDING PRODUCT CO-OCCURRENCES: PROD2VEC

Prod2Vec (Grbovic et al. (2015)) is an extension of the Word2Vec algorithm to product shopping sequences. As a result, Prod2Vec can be seen as a matrix factorization technique on the product co-occurrence matrix. In Content2Vec, the Prod2Vec-based similarity contains all of the information that can be derived from the sequential aspect of the user behavior, without taking into account the per-product meta-data.

4.1.4 EMBEDDING CATEGORICAL PRODUCT META-DATA: META-PROD2VEC

Meta-Prod2Vec (Vasile et al. (2016)) improves upon Prod2Vec by using the product meta-data side information to regularize the final product embeddings. In Content2Vec, we can use a similar technique of co-embedding product categorical information with product ids to generate the embedding values for the categorical features.
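To make the shared specialization objective concrete, below is a minimal sketch of the negative-sampling loss of eq. 2 for a single positive product pair; the embedding vectors, the sampling routine and the function name are illustrative assumptions, not our exact training code.

```python
import numpy as np

def negative_sampling_loss(a, b, negatives, alpha=1.0, beta=0.0):
    """Sketch of the L_NS loss (eq. 2) for one positive product pair.

    a, b       : embedding vectors of the co-purchased products A and B
    negatives  : list of embedding vectors n_l sampled from P_D
    alpha, beta: scalar parameters of sim(a, b) = alpha * <a, b> + beta
    """
    def log_sigmoid(x):
        # numerically stable log(sigmoid(x)) = -log(1 + exp(-x))
        return -np.logaddexp(0.0, -x)

    sim_pos = alpha * np.dot(a, b) + beta
    loss = -log_sigmoid(sim_pos)                 # positive-pair term
    for n in negatives:                          # k sampled negatives
        sim_neg = alpha * np.dot(a, n) + beta
        loss -= log_sigmoid(-sim_neg)            # negative-pair term
    return loss

# Toy usage: one positive pair with k = 2 sampled negatives.
rng = np.random.default_rng(0)
a, b = rng.normal(size=64), rng.normal(size=64)
negs = [rng.normal(size=64) for _ in range(2)]
print(negative_sampling_loss(a, b, negs))
```

In practice each module minimizes this loss over all observed positive pairs, with the negatives re-sampled on the fly at every step.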
4.2 JOINT PRODUCT EMBEDDING: PAIRWISE RESIDUAL UNIT

As stated in Section 1, the function of the product embedding module is two-fold: first, to model all interactions that exist between the modality-specific embeddings with respect to the final optimization objective, and second, to approximate interaction terms between the products that cannot be explained by a linear combination of the modality-specific similarities. With this in mind, we introduce a new type of learning unit, the Pairwise Residual Unit (eq. 4), which, similarly to the original residual unit introduced in He et al. (2015) (eq. 3), allows the layers to learn incremental, i.e. residual, representations (see Figure 4). In Hardt & Ma (2016) the authors motivate the use of residual units as helping preserve the representations learned in the previous layers. In our case we are interested in preserving the specialized image and text representations and learning an additional representation for their interactions. Though in previous work most residual units use at least two ReLU layers, we observe good results using just one. In order to model interactions between modalities, we could also learn a fully connected layer initialized with identity that takes as input the concatenated modality-specific vectors. However, in order to have a smaller number of parameters and increase model comprehensibility, we would like to keep the modality-specific representations separate and to model the final prediction as an ensemble.

\[ y = F(x) + x \tag{3} \]

\[ y = sim(F(x_1), F(x_2)) + sim(x_1, x_2) \tag{4} \]

where: \( x_1 \) and \( x_2 \) are the two product embedding vectors (obtained by stacking the modality-specific vectors), \( sim(\cdot,\cdot) \) is a similarity function over two embedding vectors, and \( F(x) \) is a Rectified Linear Unit layer.

To be able to measure the incremental value of introducing a residual vector, we introduce a baseline architecture that computes the final prediction based on the linear combination of the modality-specific similarities, denoted by *Content2Vec-linear*, with the associated similarity function defined in eq. 5.

Figure 4: Pairwise Residual Unit

\[ sim_{c2v}(a_i, b_j) = \sum_{m \in Modalities} w_m \sigma(sim_m(a_i, b_j)) \] (5)

Under this notation, the residual-based architecture, denoted Content2Vec-res, minimizes \( L_{NS} \) with the similarity function defined in eq. 6:

\[ sim_{c2v-res}(a_i, b_j) = \sum_{m \in (Modalities+Residual)} w_m \sigma(sim_m(a_i, b_j)) \] (6)

In order to learn the residual vector, we keep the modality-specific similarities fixed and co-train the final weights of each of the modalities together with the product-specific residual layers.
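As an illustration of the Pairwise Residual Unit, the following sketch computes the similarity of eq. 4, assuming an inner-product similarity and a single ReLU layer for \( F \) as described above; the weight matrix W and the dimensions are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pairwise_residual_similarity(x1, x2, W):
    """Sketch of the Pairwise Residual Unit of eq. 4:
    y = sim(F(x1), F(x2)) + sim(x1, x2), with sim(a, b) = <a, b>
    and F a single ReLU layer F(x) = relu(W x)."""
    base_sim = np.dot(x1, x2)                          # stacked-modality similarity
    residual_sim = np.dot(relu(W @ x1), relu(W @ x2))  # learned residual term
    return base_sim + residual_sim

# Toy usage: two products with stacked modality embeddings of size 8
# and a residual layer projecting to size 4.
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=8), rng.normal(size=8)
W = rng.normal(size=(4, 8))
print(pairwise_residual_similarity(x1, x2, W))
```

During the residual training stage only W and the modality weights would be updated, while the modality-specific similarities stay fixed, matching the procedure described above.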
For example, in the case of using only image and text signals, our final predictor can be defined as in eq. 7, where \( P_{txt} \) and \( P_{img} \) are pre-set and \( w_{txt}, w_{img}, w_{res} \) and \( P_{res} \) are learned together:

\[ P(pos|a, b) = \sigma(w_{txt}P_{txt}(pos|a_{txt}, b_{txt}) + w_{img}P_{img}(pos|a_{img}, b_{img}) + w_{res}P_{res}(pos|a_{res}, b_{res})) \] (7)

where \( pos \) is the positive outcome of products A and B being bought together, and \( P_{res}(pos|a, b) = \sigma(\alpha < F([a_{txt}, a_{img}]), F([b_{txt}, b_{img}]) > + \beta) \).

In Section 5.2 we compare the performance of Content2Vec-res and Content2Vec-linear and show that, as expected, the proposed architecture surpasses the performance of the linear model while still allowing for a retrieval-based candidate scoring solution.

4.3 PAIR EMBEDDING MODULE

In a retrieval-based architecture, the pair embedding module cannot support more than a simple linear combination of the product embedding vectors, such that the final score can be computed via inner product. However, we are still interested in knowing the trade-off in performance between an inner-product-based candidate scoring and a model that allows for explicit interaction terms between the items. To this end, we introduce two explicit interaction models: Content2Vec-crossfeat, a model where we discretize the text- and image-specific similarity scores and create explicit feature conjunctions between them, and Content2Vec-embedpairs, a model where we use a technique similar to the Pairwise Residual Unit, in this case modeling the residual of the linear similarity directly as a vector in the pair embedding layer, as shown in Figure 5. In Section 5.2 we show that the two models have, as expected, better performance than the linear model, and that the pair embedding version is slightly better.

Figure 5: The two types of Pairwise Residual Units. By comparison with the first version, which outputs a scalar, the second one outputs a vector that goes directly into the final prediction layer.

5 EXPERIMENTAL RESULTS

5.1 DATASET

We perform our evaluation on the publicly available Amazon dataset (McAuley et al., 2015), which represents a collection of products that were co-bought on the Amazon website. Each item has a rich description containing product image, text and category (any of the modalities can be missing). In terms of dimensionality, the dataset contains around 10M pairs of products. We concentrate on the subgraph of Book and Movie product pairs, because both categories are large and have a reasonably sized intersection. This allows us to look at recommendation performance on cross-category pairs (evaluating a model trained only on Book pairs at predicting Movie co-bought items) and mixed-category pairs (evaluating the models on Book-Movie product pairs). Based on the full Book & Movie data we generate three datasets with different characteristics. The first dataset simulates a hard cold-start regime, where all product pairs used in validation and testing are over products unseen in training. This tests the hardest recommendation setup, where all testing data is new. We decided to tune all of our hyperparameters on this regime and use the best setup on all datasets, since tuning on the harder dataset ensures the best generalization error (results shown in Table 1).
The second dataset simulates a non-cold-start regime, where the vast majority of the products in the test set are available at training time. The dataset is generated by taking the top 100k most connected products in the original dataset and keeping the links between them (results shown in Table 2). The third dataset simulates a soft cold-start regime, where some of the products in the test set are available at training time. The dataset is generated by taking the top 200k most connected products in the original dataset and sampling 10% of the links between them (results shown in Table 3).

Hyper-parameters. We fixed the embedding vector sizes to 4096 for the image CNN module, 256 for the text CNN module, 50 for the Prod2Vec module, and 128 for the residual representation. For optimization we use the Adam algorithm, and we manually set the initial learning rate based on validation set performance. The batch sizes vary for the different datasets. We train all the models until validation set performance stops increasing.

Evaluation task. We evaluate the recommendation methods on the product link prediction task, similar to (He & McAuley [2015]). We consider the observed product pairs as positive examples and all unknown pairs as negatives. We generate negative pairs according to the popularity of the products in the positive pairs (negative examples between popular products are more likely to be generated), with a positive-to-negative ratio of 1:2.

Evaluation metrics. For the link prediction task, we use the Area Under the Curve (AUC) of the Precision/Recall curve as our evaluation metric.

Competing methods:

• ImageCNN: prediction based on specialized image embedding similarity

• TextCNN: prediction based on specialized text embedding similarity

• Content2Vec-linear: prediction based on the linear combination of text and image similarities

• Content2Vec-crossfeat: prediction based on the linear combination of discretized image and text similarities and their conjunctions

• Content2Vec-res: prediction based on the linear combination of text and image similarities plus product-level residual vector similarities

• Content2Vec-embedpairs: prediction based on the linear combination of text and image similarities and a pair-level residual component

• Prod2Vec: prediction based on the product vectors coming from the decomposition of the co-purchase matrix

• Content2Vec+: prediction based on the ensemble of Prod2Vec and Content2Vec models

5.2 RESULTS

The results on the hard and soft cold-start datasets (Tables 1 and 3) show that our main proposed method, Content2Vec-res, can leverage the additional signal provided by each of the input modalities in a joint manner, leading to significant gains in AUC versus the single-signal baselines (ImageCNN, TextCNN) and their linear combination (Content2Vec-linear). From the point of view of robustness, Content2Vec-res learns product representations that perform better than the baseline methods on out-of-sample recommendations such as cross-category pairs and mixed-category pairs (Table 1). We observe that adding an additional layer that represents pair-level interactions does not lead to big improvements in either of the two models we investigated (Content2Vec-crossfeat, Content2Vec-embedpairs), confirming that a product retrieval-based recommender system can achieve state-of-the-art results.
Finally, Content2Vec-res+, our proposed hybrid architecture that combines content and CF signal, achieves better performance than the content-only and CF-only models, with bigger lifts in the case of the third dataset (Table 3), where the CF signal is weaker due to higher sparsity.

<table> <tr> <th>Recommendation Model</th> <th>Books</th> <th>Movies</th> <th>Mixed</th> </tr> <tr> <th colspan="4">Models trained on Books dataset</th> </tr> <tr> <td>Book ImageCNN specialized</td> <td>81%</td> <td>78%</td> <td>64%</td> </tr> <tr> <td>Book TextCNN</td> <td>72%</td> <td>79%</td> <td>76%</td> </tr> <tr> <td>Book Content2Vec-linear</td> <td>83%</td> <td>83%</td> <td>76%</td> </tr> <tr> <td>Book Content2Vec-crossfeat</td> <td>86%</td> <td>83%</td> <td>83%</td> </tr> <tr> <td>Book Content2Vec-res</td> <td>89%</td> <td>83%</td> <td>77%</td> </tr> <tr> <td>Book Content2Vec-embedpairs</td> <td>90%</td> <td>82%</td> <td>77%</td> </tr> <tr> <th colspan="4">Models trained on Movies dataset</th> </tr> <tr> <td>Movie ImageCNN specialized</td> <td>59%</td> <td>92%</td> <td>60%</td> </tr> <tr> <td>Movie TextCNN</td> <td>63%</td> <td>90%</td> <td>65%</td> </tr> <tr> <td>Movie Content2Vec-linear</td> <td>64%</td> <td>94%</td> <td>65%</td> </tr> <tr> <td>Movie Content2Vec-crossfeat</td> <td>62%</td> <td>94%</td> <td>63%</td> </tr> <tr> <td>Movie Content2Vec-res</td> <td>60%</td> <td>95%</td> <td>66%</td> </tr> <tr> <td>Movie Content2Vec-embedpairs</td> <td>64%</td> <td>94%</td> <td>65%</td> </tr> </table>

Table 1: AUC results of image- and text-based embeddings on the hard cold-start dataset, on Book, Movie and Mixed category test product pairs.

<table> <tr> <th>Recommendation Model</th> <th>Test</th> </tr> <tr> <td>Content2Vec-linear</td> <td>84%</td> </tr> <tr> <td>Content2Vec-res</td> <td>87%</td> </tr> <tr> <td>Prod2Vec</td> <td>96%</td> </tr> <tr> <td>Content2Vec-linear+</td> <td>97%</td> </tr> <tr> <td>Content2Vec-res+</td> <td>97%</td> </tr> </table>

Table 2: AUC results on the non-cold-start dataset.

<table> <tr> <th>Recommendation Model</th> <th>Test</th> </tr> <tr> <td>ImageCNN</td> <td>80%</td> </tr> <tr> <td>TextCNN</td> <td>78%</td> </tr> <tr> <td>Content2Vec-linear</td> <td>88%</td> </tr> <tr> <td>Content2Vec-res</td> <td>89%</td> </tr> <tr> <td>Content2Vec-embedpairs</td> <td>90%</td> </tr> <tr> <td>Prod2Vec</td> <td>86%</td> </tr> <tr> <td>Content2Vec-linear+</td> <td>89%</td> </tr> <tr> <td>Content2Vec-res+</td> <td>92%</td> </tr> <tr> <td>Content2Vec-embedpairs+</td> <td>92%</td> </tr> </table>

Table 3: AUC results on the soft cold-start dataset.

6 CONCLUSIONS

This work has several key contributions. We show how to use all product signals for the task of product recommendation using a modular architecture that can leverage fast-evolving solutions for each type of input modality. We define a set of requirements for evaluating the resulting product embeddings and show that our method leads to significant improvements over the single-signal approaches in hard recommendation situations such as cold-start and cross-category evaluation. Finally, in order to model the joint aspects of the product embeddings, we introduce a new type of learning unit, named the Pairwise Residual Unit, and show the resulting gains on a real product co-purchase dataset. In the current work we have addressed all but one of the desired requirements, namely generating retrieval-optimized embeddings.
As next steps, we want to pursue sparse and compressed product representations, in order to improve the performance of the final product retrieval system.
reject
Reject
3.666667
89408d75748d0c9e68ec4011faf0e1b48f00b95a
iclr
2,017
END-TO-END ANSWER CHUNK EXTRACTION AND RANKING FOR READING COMPREHENSION Yang Yu*, Wei Zhang*, Bowen Zhou, Kazi Hasan, Mo Yu, Bing Xiang {yu, zhangwei, zhou, kshasan, yum, bingxia}@us.ibm.com IBM Watson, Yorktown Heights, NY, USA

ABSTRACT This paper proposes the dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions. DCR is able to predict answers of variable lengths, whereas previous neural RC models primarily focused on predicting single tokens or entities. DCR encodes a document and an input question with recurrent neural networks, then applies a word-by-word attention mechanism to acquire question-aware representations for the document, followed by the generation of chunk representations and a ranking module that proposes the top-ranked chunk as the answer. Experimental results show that DCR achieves a 66.3% exact match and 74.7% F1 score on the Stanford Question Answering Dataset (Rajpurkar et al. [2016]).

1 INTRODUCTION

Reading comprehension-based question answering (RCQA) is the task of answering a question with a chunk of text taken from related document(s). A variety of neural models have been proposed recently, either for extracting a single entity or a single token as an answer from a given text (Hermann et al., 2015; Kadlec et al., 2016; Trischler et al., 2016b; Dhingra et al., 2016; Chen et al., 2016; Sordoni et al., 2016; Cui et al., 2016a), or for selecting the correct answer by ranking a small set of human-provided candidates (Yin et al., 2016; Trischler et al., 2016a). In both cases, the answer boundary is either easy to determine or already given.

Different from the above two assumptions for RCQA, in the real-world QA scenario people may ask questions about both entities (factoid) and non-entities such as explanations and reasons (non-factoid) (see Table 1 for examples). In this regard, RCQA has the potential to complement other QA approaches that leverage structured data (e.g., knowledge bases) for both of the above question types. This is because RCQA can exploit the textual evidence to ensure increased answer coverage, which is particularly helpful for non-factoid answers. However, it is also challenging for RCQA to identify answers of arbitrary length at arbitrary positions in the passage, especially for non-factoid answers which might be clauses or sentences. As a result, apart from a few exceptions (Rajpurkar et al., 2016; Wang & Jiang, 2016), this research direction has not been fully explored yet.

Compared to the relatively easier RC task of predicting single tokens/entities[1], predicting answers of arbitrary lengths and positions significantly increases the search space complexity: the number of possible candidates to consider is on the order of \( O(n^2) \), where \( n \) is the number of passage words. In contrast, for previous works in which answers are single tokens/entities or come from candidate lists, the complexity is in \( O(n) \) or the size of the candidate list \( l \) (usually \( l \leq 5 \)), respectively. To address the above complexity, Rajpurkar et al. (Rajpurkar et al., 2016) used a two-step chunk-and-rank approach that employs a rule-based algorithm to extract answer candidates from a passage,

* Both authors contribute equally

1 State-of-the-art RC models have a decent accuracy of ~70% on the widely used CNN/DailyMail dataset (Hermann et al., 2015).
Table 1: Example of questions (with answers) which can potentially be answered with RC on a Wikipedia passage. The first question is factoid, asking for an entity. The second and third are non-factoid.

<table> <tr> <th colspan="2">The United Kingdom (UK) intends to withdraw from the European Union (EU), a process commonly known as Brexit, as a result of a June 2016 referendum in which 51.9% voted to leave the EU. The separation process is complex, causing political and economic changes for the UK and other countries. As of September 2016, neither the timetable nor the terms for withdrawal have been established: in the meantime, the UK remains a full member of the European Union. The term "Brexit" is a portmanteau of the words "British" and "exit".</th> </tr> <tr> <th>Q1.</th> <td>Which country withdrew from EU in 2016?</td> </tr> <tr> <th>A1.</th> <td>United Kingdom</td> </tr> <tr> <th>Q2.</th> <td>How did UK decide to leave the European Union?</td> </tr> <tr> <th>A2.</th> <td>as a result of a June 2016 referendum in which 51.9% voted to leave the EU</td> </tr> <tr> <th>Q3.</th> <td>What has not been finalized for Brexit as of September 2016?</td> </tr> <tr> <th>A3.</th> <td>neither the timetable nor the terms for withdrawal</td> </tr> </table>

followed by a ranking approach with hand-crafted features to select the best answer. The rule-based chunking approach suffered from low coverage (\( \approx 70\% \) recall of answer chunks), which could not be improved during training, and the candidate ranking performance depends greatly on the quality of the hand-crafted features. More recently, Wang and Jiang (Wang & Jiang, 2016) proposed two end-to-end neural network models, one of which chunks a candidate answer by predicting the answer’s two boundary indices, while the other classifies each passage word as answer/not-answer. Both models improved significantly over the method proposed by Rajpurkar et al. (Rajpurkar et al., 2016).

Our proposed model, called dynamic chunk reader (DCR), not only differs significantly from both of the above systems in the way that answer candidates are generated and ranked, but also shares merits with both works. First, our model uses deep networks to learn better representations for candidate answer chunks, instead of using fixed feature representations as in (Rajpurkar et al., 2016). Second, it represents answer candidates as chunks, as in (Rajpurkar et al., 2016), instead of word-level representations (Wang & Jiang, 2016), to make the model aware of the subtle differences among candidates (importantly, overlapping candidates).

The contributions of this paper are three-fold. (1) We propose a novel neural network model for joint candidate answer chunking and ranking, where the candidate answer chunks are dynamically constructed and ranked in an end-to-end manner. (2) We propose a new question-attention mechanism to enhance passage word representation, which is subsequently used to construct chunk representations. (3) We also propose several simple but effective features to strengthen the attention mechanism, which fundamentally improves candidate ranking, with the by-product of higher exact boundary match accuracy. Experiments on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), which contains a variety of human-generated factoid and non-factoid questions, have shown the effectiveness of the above three contributions. Our paper is organized as follows. We formally define the RCQA problem first.
Next, we describe our baseline with a neural network component. We then present the end-to-end dynamic chunk reader model. Finally, we analyze our experimental results and discuss related work. In the appendix, we show the formal equations and details of the model.

2 PROBLEM DEFINITION

Table 1 shows an example of our RC setting, where the goal is to answer a question \( Q_i \), factoid (Q1) or non-factoid (Q2 and Q3), based on a supporting passage \( P_i \), by selecting a continuous sequence of text \( A_i \subseteq P_i \) as the answer. \( Q_i, P_i, \) and \( A_i \) are all word sequences, where each word is drawn from a vocabulary \( V \). The \( i \)-th instance in the training set is a triple of the form \( (P_i, Q_i, A_i) \), where \( P_i = (p_{i1}, \ldots, p_{i|P_i|}) \), \( Q_i = (q_{i1}, \ldots, q_{i|Q_i|}) \), and \( A_i = (a_{i1}, \ldots, a_{i|A_i|}) \) (\( p_{i*}, q_{i*}, a_{i*} \in V \)). Owing to disagreement among annotators, there could be more than one correct answer for the same question; the \( k \)-th answer to \( Q_i \) is denoted by \( A_i^k = \{ a_{i1}^k, \ldots, a_{i|A_i^k|}^k \} \). An answer candidate for the \( i \)-th training example is defined as \( c_i^{m,n} \), a sub-sequence in \( P_i \) that spans from position \( m \) to \( n \) (\( 1 \leq m \leq n \leq |P_i| \)). The ground truth answer \( A_i \) could be included in the set of all candidates \( C_i = \{c_i^{m,n} \mid \forall m, n \in \mathbb{N}^+, subj(m, n, P_i) \text{ and } 1 \leq m \leq n \leq |P_i| \} \), where \( subj(m, n, P_i) \) is a constraint put on the candidate chunk for \( P_i \), such as “\( c_i^{m,n} \) can have at most 10 tokens” or “\( c_i^{m,n} \) must have a pre-defined POS pattern”. To evaluate a system’s performance, its top answer to a question is matched against the corresponding gold standard answer(s).

Remark: Categories of RC Tasks. Other, simpler variants of the aforementioned RC task were explored in the past. For example, quiz-style datasets (e.g., MCTest (Richardson et al. [2013]), MovieQA (Tapaswi et al. [2015])) have multiple-choice questions with answer options. Cloze-style datasets (Hermann et al. [2015]; Hill et al. [2015]; Onishi et al. [2016]), usually automatically generated, have factoid “questions” created by replacing the answer in a sentence from the text with a blank. For the answer extraction task this paper focuses on, several datasets exist, e.g., TREC-QA for factoid answer extraction from multiple given passages, bAbI (Weston et al. [2014]) designed for inference purposes, and the SQuAD dataset (Rajpurkar et al. [2016]) used in this paper. To the best of our knowledge, the SQuAD dataset is the only one for both factoid and non-factoid answer extraction with a question distribution closer to real-world applications.

3 Baseline: Chunk-and-Rank Pipeline with Neural RC

In this section we modify a state-of-the-art RC system for cloze-style tasks for our answer extraction purpose, to see how large the gap between the two types of tasks is, and to inspire our end-to-end system in the next section. In order to make the cloze-style RC system produce chunk-level decisions, we use the RC model to generate features for chunks, which are further used in a feature-based ranker as in (Rajpurkar et al. [2016]). As a result, this baseline can be viewed as a deep-learning-based counterpart of the system in (Rajpurkar et al. [2016]).
It has two main components: 1) a stand-alone answer chunker, which is trained to produce overlapping candidate chunks, and 2) a neural RC model, which is used to score each word in a given passage, with the scores used thereafter for generating chunk scores.

Answer Chunking. To reduce the errors generated by the rule-based chunker in (Rajpurkar et al. [2016]), we first capture the part-of-speech (POS) pattern of all answer sub-sequences in the training dataset to form a POS-pattern trie tree, and then apply the answer POS patterns to passage \( P_i \) to acquire a collection of all sub-sequences (chunk candidates) \( C_i \) whose POS patterns can be matched to the POS-pattern trie. This is equivalent to putting a constraint \( subj(m, n, P_i) \) on the candidate answer chunk generation process such that only chunks with a POS pattern seen for answers in the training data are chosen. The sub-sequences \( C_i \) are then used as answer candidates for \( P_i \). Note that overlapping chunks could be generated for a passage, and we rely on the ranker to choose the best candidate based on features from the cloze-style RC system. Experiments showed that for > 90% of the questions on the development set, the ground truth answer is included in the candidate set constructed in this manner.

Feature Extraction and Ranking. For chunk ranking, we (1) use a neural RCQA model to annotate each \( p_{ij} \) in passage \( P_i \) with a score \( s_{ij} \), then (2) for every chunk \( c_i^{m,n} \) in passage \( i \), collect the scores \( (s_{im}, \ldots, s_{in}) \) for all the \( (p_{im}, \ldots, p_{in}) \) contained within \( c_i^{m,n} \), and (3) extract features on the sequence of scores \( (s_{im}, \ldots, s_{in}) \) to characterize its scale and distribution information, which serves as the feature representation of \( c_i^{m,n} \). In step (1), to acquire \( s_{ij} \) we train and apply a word-level single-layer Gated Attention Reader\(^2\) (Dhingra et al. [2016]), which has state-of-the-art performance on the CNN/DailyMail cloze-style RC task. In step (3), for chunk \( c_i^{m,n} \) we designed 5 features, including 4 statistics on \( (s_{im}, \ldots, s_{in}) \): *maximum*, *minimum*, *average* and *sum*, as well as the count of matched POS patterns within the chunk, which serves as an answer prior. We use these 5 features in a state-of-the-art ranker (Ganjisaffar et al. [2011]).

4 Dynamic Chunk Reader

The dynamic chunk reader (DCR) model is presented in Figure 1. Inspired by the baseline we built, DCR is deemed to be superior to the baseline for three reasons. First, each chunk has a representation constructed dynamically, instead of having a set of pre-defined feature values. Second, each passage word’s representation is enhanced by word-by-word attention that evaluates the relevance of the passage word to the question. Third, these components are all within a single, end-to-end model that can be trained in a joint manner.

\(^2\)We tried using more than one layer in the Gated Attention Reader, but no improvement was observed.

Figure 1: The main components in the dynamic chunk reader model (from bottom to top) are bi-GRU encoders for passage and question, a word-by-word attention bi-GRU for the passage, dynamic chunk representations that are transformed from pooled dynamic chunks of hidden states, question attention on every chunk representation, and final answer chunk prediction.

DCR works in five steps. First, the encoder layer encodes passage and question separately, using bidirectional recurrent neural networks (RNNs).
Second, the attention layer calculates the relevance of each passage word to the question. Third, the convolution layer generates unigram, bigram and trigram representations for each word; the bigram and trigram representations of a word end with that word, and proper padding is applied at the beginning so that the output has the same length as the input to the CNN layer. Fourth, the chunk representation layer dynamically extracts the candidate chunks from the given passage and creates chunk representations that encode the contextual information of each chunk. Fifth, the ranker layer scores the relevance between the representation of a chunk and the given question, and ranks all candidate chunks using a softmax layer. We describe each step below.

Encoder Layer. We use a bi-directional RNN encoder to encode \( P_i \) and \( Q_i \) of example \( i \), and get a hidden state for each word position \( p_{ij} \) and \( q_{ik} \).\(^3\) As RNN input, a word is represented by a row vector \( x \in \mathbb{R}^n \). \( x \) can be the concatenation of the word embedding and word features (see Fig. 1). The word vector for the \( t \)-th word is \( x_t \). A word sequence is processed using an RNN encoder with gated recurrent units (GRU) (Cho et al., 2014), which has proved effective in RC and neural machine translation tasks (Bahdanau et al., 2015; Kadlec et al., 2016; Dhingra et al., 2016). For each position \( t \), the GRU computes \( h_t \) from the input \( x_t \) and the previous state \( h_{t-1} \), as:

\footnotetext{3We can have separate parameters for the question and passage encoders, but a single shared encoder for both works better in the experiments.}

\[ r_t = \sigma(W_r x_t + U_r h_{t-1}) \tag{1} \]
\[ u_t = \sigma(W_u x_t + U_u h_{t-1}) \tag{2} \]
\[ \tilde{h}_t = \tanh(W x_t + U (r_t \odot h_{t-1})) \tag{3} \]
\[ h_t = (1 - u_t) \odot h_{t-1} + u_t \odot \tilde{h}_t \tag{4} \]

where \( h_t, r_t, \) and \( u_t \in \mathbb{R}^d \) are the d-dimensional hidden state, reset gate, and update gate, respectively; \( W_{\{r,u\}}, W \in \mathbb{R}^{n \times d} \) and \( U_{\{r,u\}}, U \in \mathbb{R}^{d \times d} \) are the parameters of the GRU; \( \sigma \) is the sigmoid function, and \( \odot \) denotes the element-wise product. For the word at position \( t \), we use the hidden state \( \overrightarrow{h}_t \) from the forward RNN as a representation of the preceding context, and \( \overleftarrow{h}_t \) from a backward RNN that encodes the text in reverse, to incorporate the context after \( t \). Next, \( h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t] \), the bi-directional contextual encoding of \( x_t \), is formed; \( [.;.] \) is the concatenation operator. To distinguish hidden states from different sources, we denote the \( h_j \) of the \( j \)-th word in P and the \( h_k \) of the \( k \)-th word in Q as \( h_j^p \) and \( h_k^q \), respectively.

Attention Layer. Attention mechanisms in previous RC models (Kadlec et al., 2016; Hermann et al., 2015; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016a,b) enable question-aware passage representations. We propose a novel attention mechanism inspired by word-by-word style attention methods (Rocktäschel et al., 2015; Wang & Jiang, 2015; Santos et al., 2016). For each \( p_j \), a question-attended representation \( v_j \) is computed as follows (the example index \( i \) is omitted for simplicity):

\[ \alpha_{jk} = h_j^p \cdot h_k^q, \tag{5} \]
\[ \beta_j = \sum_{k=1}^{|Q|} \alpha_{jk} h_k^q, \tag{6} \]
\[ v_j = [h_j^p; \beta_j] \tag{7} \]

where \( h_j^p \) and \( h_k^q \) are hidden states from the bi-directional RNN encoders (see Figure 1).
An inner product, \( \alpha_{jk} \), is calculated between \( h_j^p \) and every question word \( h_k^q \). It indicates how well the passage word \( p_j \) matches each question word \( q_k \). \( \beta_j \) is a weighted pooling of the \( |Q| \) question hidden states, which serves as a \( p_j \)-aware question representation. The concatenation of \( h_j^p \) and \( \beta_j \) leads to a passage-question joint representation, \( v_j \in \mathbb{R}^{4d} \). Next, we apply a second bi-GRU layer taking the \( v_j \)'s as inputs, and obtain forward and backward representations \( \overrightarrow{\gamma}_j \) and \( \overleftarrow{\gamma}_j \in \mathbb{R}^d \), and in turn their concatenation, \( \gamma_j = [\overrightarrow{\gamma}_j; \overleftarrow{\gamma}_j] \).

Convolution Layer. Every word is encoded with the complete passage context through the attention-layer RNN. We would like to model a richer representation of each word by introducing unigram, bigram and trigram representations. There are two benefits to this enhanced representation: 1) each word is enhanced with local context information, which helps identify the boundary of the answer chunk (using the previous words has been a common feature in POS tagging and named entity recognition); and 2) the n-gram information brought into the word representation can enhance the semantic match between the interior of the answer chunk and the question. Imagine the scenario of a three-word candidate, where the last word's representation includes the two previous words through the convolution layer: matching on the last word can then also match the semantics of the chunk's interior. Specifically, we create three representations for every word position \( j \), using n-grams ending with the hidden state at \( j \):

\[ \hat{\gamma}_{j1} = \gamma_j \cdot W_{c1} \tag{8} \]
\[ \hat{\gamma}_{j2} = [\gamma_{j-1}; \gamma_j] \cdot W_{c2} \tag{9} \]
\[ \hat{\gamma}_{j3} = [\gamma_{j-2}; \gamma_{j-1}; \gamma_j] \cdot W_{c3} \tag{10} \]

We also tried another word-by-word attention method, as in (Santos et al., 2016), which feeds a similar passage representation to the question side. However, this did not lead to improvement, due to the confusion caused by long passages in RC. Consequently, we use the proposed simplified version of word-by-word attention on the passage side only; the details are shown in the equations above. We use three different convolution kernels for the different n-grams.

Chunk Representation Layer. A candidate answer chunk representation is dynamically created given the convolution layer output. We first decide the text boundary for the candidate chunk, and then form a chunk representation using all or part of the \( \gamma_j \) outputs inside the chunk. To decide candidate chunk boundaries, we tried two approaches: (1) adopt the POS-trie-based approach used in our baseline, and (2) enumerate all possible chunks up to a maximum number of tokens. For (2), we create up to \( N \) (the maximum chunk length) chunks starting from every position \( j \) in \( P_i \). Approach (1) can generate candidates of arbitrary lengths, but fails to recall candidates whose POS pattern is unseen in the training set, whereas approach (2) considers all possible candidates within a window and is more flexible, but over-generates invalid candidates.
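As an illustration of boundary approach (2), here is a minimal sketch of the candidate enumeration; the token input and the 1-indexed \((m, n)\) convention follow the notation above, and the function name is illustrative.

```python
def enumerate_chunks(passage_tokens, max_len=10):
    """Sketch of boundary approach (2): enumerate all chunks c^{m,n}
    starting at any position, up to max_len (N) tokens long.
    Positions (m, n) are 1-indexed and inclusive, matching the text."""
    candidates = []
    for m in range(1, len(passage_tokens) + 1):
        # n ranges from m up to m + max_len - 1, clipped at passage end
        for n in range(m, min(m + max_len - 1, len(passage_tokens)) + 1):
            candidates.append((m, n))  # chunk spans tokens m..n inclusive
    return candidates

# A 300-token passage with N = 10 yields 2955 candidate spans,
# i.e. roughly O(n * N) candidates instead of O(n^2).
print(len(enumerate_chunks(["w"] * 300, max_len=10)))
```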
For a candidate answer chunk \( c^{m,n} \) spanning from position \( m \) to \( n \) inclusively, we construct the chunk representation \( \overrightarrow{\gamma}_{m,n} \in \mathbb{R}^{2d} \) using the \( \hat{\gamma}_{jl} \) within the range \([m, n]\), with a function \( g(\cdot) \) and \( l \in \{1, 2, 3\} \). Formally,

\[ \overrightarrow{\gamma}_{m,n} = g(\hat{\gamma}_{ml}, \ldots, \hat{\gamma}_{nl}) \]

Each \( \hat{\gamma}_{jl} \) is a convolution output over the concatenated forward and backward RNN hidden states from the attention layer, so the first half of \( \hat{\gamma}_{jl} \) encodes information in the forward RNN hidden states and the second half encodes information in the backward RNN hidden states. We experimented with several pooling functions (e.g., max, average) for \( g(\cdot) \), and found that, instead of pooling, the best \( g(\cdot) \) is to concatenate the first half of the convolution output of the chunk’s first word and the second half of the convolution output of the chunk’s last word. Formally,

\[ \overrightarrow{\gamma}_{m,n} = g(\hat{\gamma}_{ml}, \ldots, \hat{\gamma}_{nl}) = [\overrightarrow{\gamma}_{ml}; \overleftarrow{\gamma}_{nl}] \]

where \( \overrightarrow{\gamma}_{ml} \) is the half of the hidden state for the \( l \)-gram word representation corresponding to the forward attention RNN output. We hypothesize that the hidden states at the two ends can better represent the chunk’s contexts, which is critical for this task, than the states within the chunk. This observation also agrees with [Kobayashi et al., 2016].

Ranker Layer. A score \( s^l_{m,n} \) for each \( l \)-gram chunk representation \( \overrightarrow{\gamma}_{m,n} \), denoting the probability of that chunk being the true answer, is calculated as a dot product with the question representation. The question representation is the concatenation of the last hidden state of the forward RNN and the first hidden state of the backward RNN. Formally, for the chunk \( c^{m,n}_i \) we have

\[ s^l(c^{m,n}_i|P_i, Q_i) = \overrightarrow{\gamma}_{m,n} \cdot [\overrightarrow{h}^{Q_i}_k; \overleftarrow{h}^{Q_i}_1] \]

where \( s^l \) denotes the score generated from the \( l \)-gram representation, and \( \overrightarrow{h}^{Q_i}_k \) and \( \overleftarrow{h}^{Q_i}_1 \) are the hidden state outputs from question \( Q_i \)’s forward and backward RNN encoders, respectively. After that, the final score for \( c^{m,n}_i \) is evaluated as a linear combination of the three scores, followed by a softmax:

\[ s(c^{m,n}_i|P_i, Q_i) = softmax(W \cdot [s^1; s^2; s^3]) \]

where \( s^l \) is shorthand for \( s^l(c^{m,n}_i|P_i, Q_i) \), and \( W \in \mathbb{R}^3 \). At run time, the chunk with the highest probability is taken as the answer. In training, the following negative log-likelihood is minimized:

\[ \mathbb{L} = -\sum_{i=1}^N \log \mathbb{P}(A_i|P_i, Q_i) \]

Note that the \( i \)-th training instance is only used when \( A_i \) is included in the corresponding candidate chunk set \( C_i \), i.e. \( \exists m,n: A_i = c^{m,n}_i \). The softmax in the final layer serves as a list-wise ranking module, similar in spirit to [Cao et al., 2007].

5 EXPERIMENTS

Dataset. We used the Stanford Question Answering Dataset (SQuAD) [Rajpurkar et al., 2016] for our experiments. SQuAD came into our sight because it is a mix of factoid and non-factoid
questions, consists of real-world (crowd-sourced) data, and is of large scale (over 100K question-answer pairs collected from 536 Wikipedia articles). Answers range from single words to long, variable-length phrases/clauses. It relaxes the assumptions made by the cloze-style and quiz-style RC datasets described in the Problem Definition section.

Table 2: Results on the SQuAD dataset.

<table> <tr> <th rowspan="2">Models</th> <th colspan="2">Dev</th> <th colspan="2">Test</th> </tr> <tr> <th>EM</th> <th>F1</th> <th>EM</th> <th>F1</th> </tr> <tr> <td>Rajpurkar 2016</td> <td>39.8%</td> <td>51.0%</td> <td>40.4%</td> <td>51.0%</td> </tr> <tr> <td>Wang 2016</td> <td>59.1%</td> <td>70.0%</td> <td>59.5%</td> <td>70.3%</td> </tr> <tr> <td>DCR w/o Conv.</td> <td>62.5%</td> <td>71.2%</td> <td>62.5%</td> <td>71.0%</td> </tr> <tr> <td>DCR</td> <td>63.4%</td> <td>72.3%</td> <td>-</td> <td>-</td> </tr> <tr> <td>DCR Ensemble</td> <td>66.3%</td> <td>74.7%</td> <td>-</td> <td>-</td> </tr> </table>

Features. The input vector representation of each word \( w \) to the encoder RNNs has six parts, including a pre-trained 300-dimensional GloVe embedding (Pennington et al., 2014) and five features (see Figure 1): (1) a one-hot encoding (46 dimensions) for the part-of-speech (POS) tag of \( w \); (2) a one-hot encoding (14 dimensions) for the named entity (NE) tag of \( w \); (3) a binary value indicating whether \( w \)'s surface form is the same as any word in the question; (4) a binary value indicating whether the lemma form of \( w \) is the same as any word in the question; and (5) a binary value indicating whether \( w \) is capitalized. Features (3) and (4) are designed to help the model align the passage text with the question. Note that some types of questions (e.g., “who”, “when” questions) have answers with specific POS/NE tag patterns. For instance, “who” questions mostly have proper nouns/persons as answers and “when” questions frequently have numbers/dates (e.g., a year) as answers. Thus, we believe that with POS and NE tag features the model can more easily exploit the correlation between question types and answer POS/NE patterns. A sketch of this featurization follows the implementation details below.

Implementation Details. We pre-processed the SQuAD dataset using the Stanford CoreNLP tool\(^5\) (Manning et al., 2014) with its default settings to tokenize the text and obtain the POS and NE annotations. To train our model, we used stochastic gradient descent with the ADAM optimizer (Kingma & Ba, 2014), with an initial learning rate of 0.001. All GRU weights were initialized from a uniform distribution between (-0.01, 0.01). The hidden state size \( d \) was set to 300 for all GRUs. The question bi-GRU shared parameters with the passage bi-GRU, while the attention-based passage bi-GRU had its own parameters. We shuffled all training examples at the beginning of each epoch and adopted a curriculum learning approach (Bengio et al., 2009), sorting training instances by length in every 10 batches, to enable the model to start learning from relatively easier instances and progress to harder ones. We also applied dropout with rate 0.2 to the embedding layer of the input bi-GRU encoder, and gradient clipping when the norm of the gradients exceeded 10. We trained in mini-batch style (mini-batch size 180) and applied zero-padding to the passage and question inputs in each batch. We also set the maximum passage length to 300 tokens, and pruned all tokens after the 300-th token in the training set to save memory and speed up the training process. This step reduced the training set size by about 1.6%. During test, we use the full-length passage, so that we do not prune out potential candidates.
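As referenced above, here is a minimal sketch of the six-part input featurization; the helper name, the `glove` lookup, and the id-to-tag mappings are illustrative assumptions rather than our exact pre-processing code.

```python
def word_input_vector(word, lemma, pos_id, ne_id,
                      question_words, question_lemmas,
                      glove, n_pos=46, n_ne=14):
    """Sketch of the six-part word representation described above:
    GloVe (300-d) + POS one-hot (46-d) + NE one-hot (14-d)
    + three binary features, giving a 363-d input vector."""
    pos = [0.0] * n_pos
    pos[pos_id] = 1.0                               # (1) POS one-hot
    ne = [0.0] * n_ne
    ne[ne_id] = 1.0                                 # (2) NE one-hot
    in_q = float(word in question_words)            # (3) surface-form match
    lemma_in_q = float(lemma in question_lemmas)    # (4) lemma match
    capitalized = float(word[:1].isupper())         # (5) capitalization
    return list(glove[word]) + pos + ne + [in_q, lemma_in_q, capitalized]
```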
We trained the model for at most 30 epochs; in case the accuracy did not improve for 10 epochs, we stopped training. For the feature-ranking-based system, we used the jforest ranker (Ganjisaffar et al., 2011) with the LambdaMART-RegressionTree algorithm, and the ranking metric was NDCG@10. For the Gated Attention Reader in the baseline system, we replicated the method and used the same configurations as in (Dhingra et al., 2016).

Results. Table 2 shows our main results on the SQuAD dataset. Compared to the scores reported in (Wang & Jiang, 2016), our exact match (EM) and F1 on the development set and EM score on the test set are better, and F1 on the test set is comparable. We also studied how each component in our model contributes to the overall performance. Table 3 shows the details as well as the results of the baseline ranker. As the first row of Table 3 shows, our baseline system improves 10% (EM) over Rajpurkar et al. (Rajpurkar et al., 2016) (Table 2, row 1), the feature-based ranking system. However, when compared to our DCR model (Table 3, row 2), the baseline (row 1) is more than 12% (EM) behind, even though it is based on the state-of-the-art model for cloze-style RC tasks. This can be attributed to the advanced model structure and end-to-end manner of DCR.

\(^5\) stanfordnlp.github.io/CoreNLP/

Table 3: Detailed system experiments on the SQuAD development set.

<table> <tr> <th>Models</th> <th>EM</th> <th>F1</th> </tr> <tr> <td>Chunk-and-Rank Pipeline Baseline</td> <td>49.7%</td> <td>64.9%</td> </tr> <tr> <td>DCR w/o Convolution</td> <td>62.5%</td> <td>71.2%</td> </tr> <tr> <td>DCR w/o Word-by-Word Attention</td> <td>57.6%</td> <td>68.7%</td> </tr> <tr> <td>DCR w/o POS feature (1)</td> <td>59.2%</td> <td>68.8%</td> </tr> <tr> <td>DCR w/o NE feature (2)</td> <td>60.4%</td> <td>70.2%</td> </tr> <tr> <td>DCR w/o Question-word feature (3)</td> <td>59.5%</td> <td>69.0%</td> </tr> <tr> <td>DCR w/o Question-lemma feature (4)</td> <td>61.2%</td> <td>69.9%</td> </tr> <tr> <td>DCR w/o Capitalized feature (5)</td> <td>61.5%</td> <td>70.6%</td> </tr> <tr> <td>DCR w/o Conv. w POS-trie</td> <td>62.1%</td> <td>70.8%</td> </tr> </table>

Figure 2: (a) Variations of DCR performance on ground truth answer length (up to 10) in the development set. The curve with diamond knots also shows the percentage of answers of each length in the development set. (b) Performance comparisons for different question head words.

We also did ablation tests on our DCR model. First, replacing the word-by-word attention with Attentive Reader style attention (Hermann et al., 2015) decreases the EM score by about 4.5%, showing the strength of our proposed attention mechanism. Second, we removed the input features to see the contribution of each feature. The results show that the POS feature (1) and the question-word feature (3) are the two most important features. Finally, combining the DCR model with the proposed POS-trie constraints yields a score similar to the one obtained using the DCR model with all possible \( n \)-gram chunks. The result shows that (1) our chunk representations are powerful enough to differentiate even a huge number of chunks when no constraints are applied; and (2) the proposed POS-trie reduces the search space at the cost of a small drop in performance.
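For reference, the exact match and F1 metrics used throughout these results are typically computed at the token level in SQuAD-style evaluation; the sketch below shows one common formulation, omitting the answer-normalization details of the official evaluation script.

```python
from collections import Counter

def exact_match(prediction, ground_truth):
    """1.0 if the predicted span string matches the gold span exactly."""
    return float(prediction == ground_truth)

def f1_score(prediction, ground_truth):
    """Token-overlap F1 between a predicted span and a gold span."""
    pred_tokens, gold_tokens = prediction.split(), ground_truth.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# A near-miss span gets partial F1 credit but zero exact match.
print(f1_score("June 2016 referendum", "a June 2016 referendum"))  # ~0.857
```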
Analysis To better understand our system, we calculated the accuracy of the attention mechanism of the gated attention reader used in our deep-learning-based baseline. It is 72% accurate, i.e., 72% of the time a word with the highest attention score is inside the correct answer span. This means that, if we could accurately detect the boundary around the word with the highest attention score to form the answer span, we could achieve an accuracy close to 72%. In addition, we checked the answer recall of our candidate chunking approach: with a window size of 10, the ground truth answer is included in the extracted candidate chunk set 92% of the time. Thus the upper bound of the exact match score of our baseline system is around 66% (92% answer recall × 72% attention accuracy). Our DCR system’s exact match score is 62%, which shows that DCR is proficient at differentiating answer spans dynamically. To further analyze the system’s performance when predicting answers of different lengths, we show the exact match (EM) and F1 scores for answers with lengths up to 10 tokens in Figure 2(a). As answer length increases, both EM and F1 drop, but at different speeds, and the gap between F1 and exact match widens. However, the model still yields a decent accuracy when the answer is longer than a single word. Additionally, Figure 2(b) shows that the system is better at “when” and “who” questions, but performs poorly on “why” questions. The large gap between exact match and F1 on “why” questions means that perfectly identifying the span is harder than locating the core of the answer span. Since “what”, “which”, and “how” questions cover a broad range of question types, we split them further based on the bigram a question starts with; Figure 3 shows the breakdown for “what” questions. Figure 3: Development set performance comparisons for different types of “what” questions (considering the types with more than 20 examples in the development set). We can see that “what” questions asking for explanations, such as “what happens” and “what happened”, have lower EM and F1 scores. In contrast, “what” questions asking for years and numbers have much higher scores and, for these questions, exact match scores are close to F1 scores, which means chunking for these questions is easier for DCR. 6 RELATED WORK The Attentive Reader was the first neural model for factoid RCQA (Hermann et al., 2015). It uses bidirectional RNNs (Cho et al., 2014; Chung et al., 2014) to encode the document and the query respectively, and uses the query representation to match every token of the document. The Attention Sum Reader (Kadlec et al., 2016) simplifies the model to just predicting the positions of the correct answer in the document; both training speed and test accuracy are greatly improved on the CNN/Daily Mail dataset. (Chen et al., 2016) also simplified the Attentive Reader and reported higher accuracy. Window-based Memory Networks (MemN2N), introduced along with the CBT dataset (Hill et al., 2015), do not use RNN encoders, but embed contexts as memory and match questions with the embedded contexts. The mechanism of these models is to learn the match between the answer context and the question/query representation.
In contrast, memory-enhanced neural networks such as Neural Turing Machines (Graves et al., 2014) and their variants (Zhang et al., 2015; Gulcehre et al., 2016; Zaremba & Sutskever, 2015; Chandar et al., 2016; Grefenstette et al., 2015) were also potential candidates for the task; Gulcehre et al. (Gulcehre et al., 2016) reported results on the bAbI task that are worse than those of memory networks. Similarly, sequence-to-sequence models were also tried (Yu et al., 2015; Hermann et al., 2015), but they did not yield better results either. Recently, several models have been proposed to enable more complex inference for the RC task. For instance, the gated attention model (Dhingra et al., 2016) employs a multi-layer architecture, where each layer encodes the same document, but the attention is updated from layer to layer. EpiReader (Trischler et al., 2016b) adopted a joint training model for an answer extractor and a reasoner, where the extractor proposes top candidates and the reasoner weighs each candidate by examining the entailment relationship between the question-answer representation and the document. An iterative alternating attention mechanism and gating strategies were proposed in (Sordoni et al., 2016) to optimize the attention through several hops. In contrast, Cui et al. (Cui et al., 2016a;b) introduced fine-grained document attention from each question word and then aggregated those attentions from each question token by summation with or without weights; this system achieved the state-of-the-art score on the CNN dataset. These variations all result in roughly 3-5% improvement over the Attention Sum Reader, but none of them goes substantially beyond that. Other methods include dynamic entity representation with max-pooling (Kobayashi et al., 2016), which aims to change the entity representation with context, and Weissenborn’s (Weissenborn, 2016) system, which tries to separate the entity from the context and then match the question to the context, scoring an accuracy of around 70% on the CNN dataset. However, all of these models assume that the answers are single tokens, which limits the types of questions the models can answer. Wang and Jiang (Wang & Jiang, 2016) proposed a match-LSTM and achieved good results on SQuAD. However, their approach predicts either a chunk boundary or whether a word is part of a chunk. In contrast, our approach explicitly constructs chunk representations, and similar chunks are compared directly to determine correct answer boundaries. 7 CONCLUSION In this paper we proposed a novel neural reading comprehension model for question answering. Different from previously proposed models for factoid RCQA, the proposed model, the dynamic chunk reader, is not restricted to predicting a single named entity as an answer or selecting an answer from a small, pre-defined candidate list. Instead, it is capable of answering both factoid and non-factoid questions as it learns to select answer chunks that are suitable for an input question. DCR achieves this goal with a joint deep learning model enhanced with a novel attention mechanism and five simple yet effective features. Error analysis shows that the DCR model achieves good performance, but still needs to improve on predicting longer answers, which are usually non-factoid in nature. REFERENCES Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015. Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning.
In Proceedings of the 26th annual international conference on machine learning, pp. 41–48. ACM, 2009. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th international conference on Machine learning, pp. 129–136. ACM, 2007. Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016. Danqi Chen, Jason Bolton, and Christopher D Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. ACL, 2016. Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014. Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016a. Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus attention-based neural networks for Chinese reading comprehension. arXiv preprint arXiv:1607.02250, 2016b. Bhuwan Dhingra, Hanxiao Liu, William W Cohen, and Ruslan Salakhutdinov. Gated-attention readers for text comprehension. arXiv preprint arXiv:1606.01549, 2016. Yasser Ganjisaffar, Rich Caruana, and Cristina Lopes. Bagging gradient-boosted trees for high precision, low variance ranking models. In Proceedings of SIGIR, pp. 85–94. ACM, 2011. doi: 10.1145/2009916.2009932. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014. Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pp. 1828–1836, 2015. Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. Dynamic neural Turing machine with soft and hard addressing schemes. arXiv preprint arXiv:1607.00036, 2016. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693–1701, 2015. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015. Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. ACL, 2016. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representations with max-pooling improves machine reading. NAACL-HLT, 2016. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pp. 55–60, 2014. URL http://www.aclweb.org/anthology/P/P14/P14-5010. T. Onishi, H. Wang, M. Bansal, K. Gimpel, and D. McAllester. Who did What: A large-scale person-centered cloze dataset. In Proc. of EMNLP, 2016. Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532–1543, 2014.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 3, pp. 4, 2013. Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. Reasoning about entailment with neural attention. arXiv preprint arXiv:1509.06664, 2015. Cicero dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. Attentive pooling networks. arXiv preprint arXiv:1602.03609, 2016. Alessandro Sordoni, Philip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016. Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. MovieQA: Understanding stories in movies through question-answering. arXiv preprint arXiv:1512.02902, 2015. Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Philip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. arXiv preprint arXiv:1603.08884, 2016a. Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. arXiv preprint arXiv:1606.02270, 2016b. Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. arXiv preprint arXiv:1512.08849, 2015. Shuohang Wang and Jing Jiang. Machine comprehension using match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905, 2016. Dirk Weissenborn. Separating answers from queries for neural reading comprehension. arXiv preprint arXiv:1607.03316, 2016. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014. URL http://arxiv.org/abs/1410.3916. Wenpeng Yin, Sebastian Ebert, and Hinrich Schütze. Attention-based convolutional neural network for machine comprehension. arXiv preprint arXiv:1602.04341, 2016. Yang Yu, Wei Zhang, Chung-Wei Hang, and Bowen Zhou. Empirical study on deep learning models for question answering. arXiv preprint arXiv:1510.07526, 2015. Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015. Wei Zhang, Yang Yu, and Bowen Zhou. Structured memory for neural Turing machines. arXiv preprint arXiv:1510.03931, 2015.
ABSTRACT This paper proposes the dynamic chunk reader (DCR), an end-to-end neural reading comprehension (RC) model that is able to extract and rank a set of answer candidates from a given document to answer questions. DCR is able to predict answers of variable lengths, whereas previous neural RC models primarily focused on predicting single tokens or entities. DCR encodes a document and an input question with recurrent neural networks, and then applies a word-by-word attention mechanism to acquire question-aware representations of the document, followed by the generation of chunk representations and a ranking module that proposes the top-ranked chunk as the answer. Experimental results show that DCR achieves a 66.3% exact match and a 74.7% F1 score on the Stanford Question Answering Dataset (Rajpurkar et al., 2016). 1 INTRODUCTION Reading comprehension-based question answering (RCQA) is the task of answering a question with a chunk of text taken from related document(s). A variety of neural models have been proposed recently, either for extracting a single entity or a single token as an answer from a given text (Hermann et al., 2015; Kadlec et al., 2016; Trischler et al., 2016b; Dhingra et al., 2016; Chen et al., 2016; Sordoni et al., 2016; Cui et al., 2016a), or for selecting the correct answer by ranking a small set of human-provided candidates (Yin et al., 2016; Trischler et al., 2016a). In both cases, the answer boundary is either easy to determine or already given. Different from the above two assumptions for RCQA, in real-world QA scenarios people may ask questions about both entities (factoid) and non-entities such as explanations and reasons (non-factoid) (see Table 1 for examples). In this regard, RCQA has the potential to complement other QA approaches that leverage structured data (e.g., knowledge bases) for both of the above question types, because RCQA can exploit textual evidence to increase answer coverage, which is particularly helpful for non-factoid answers. However, it is also challenging for RCQA to identify answers at arbitrary positions in the passage and of arbitrary lengths, especially for non-factoid answers, which might be clauses or sentences. As a result, apart from a few exceptions (Rajpurkar et al., 2016; Wang & Jiang, 2016), this research direction has not been fully explored yet. Compared to the relatively easier RC task of predicting single tokens/entities[1], predicting answers of arbitrary lengths and positions significantly increases the search space complexity: the number of possible candidates to consider is on the order of \( O(n^2) \), where \( n \) is the number of passage words. In contrast, for previous works in which answers are single tokens/entities or are taken from candidate lists, the complexity is \( O(n) \) or the size of the candidate list \( l \) (usually \( l \leq 5 \)), respectively. To address the above complexity, Rajpurkar et al. (Rajpurkar et al., 2016) used a two-step chunk-and-rank approach. * Both authors contributed equally. 1 State-of-the-art RC models have a decent accuracy of ~70% on the widely used CNN/Daily Mail dataset (Hermann et al., 2015). Table 1: Example of questions (with answers) which can potentially be answered with RC on a Wikipedia passage. The first question is factoid, asking for an entity. The second and third are non-factoid.
<table> <tr> <th colspan="2">The United Kingdom (UK) intends to withdraw from the European Union (EU), a process commonly known as Brexit, as a result of a June 2016 referendum in which 51.9% voted to leave the EU. The separation process is complex, causing political and economic changes for the UK and other countries. As of September 2016, neither the timetable nor the terms for withdrawal have been established: in the meantime, the UK remains a full member of the European Union. The term "Brexit" is a portmanteau of the words "British" and "exit".</th> </tr> <tr> <th>Q1.</th> <td>Which country withdrew from EU in 2016?</td> </tr> <tr> <th>A1.</th> <td>United Kingdom</td> </tr> <tr> <th>Q2.</th> <td>How did UK decide to leave the European Union?</td> </tr> <tr> <th>A2.</th> <td>as a result of a June 2016 referendum in which 51.9% voted to leave the EU</td> </tr> <tr> <th>Q3.</th> <td>What has not been finalized for Brexit as of September 2016?</td> </tr> <tr> <th>A3.</th> <td>neither the timetable nor the terms for withdrawal</td> </tr> </table> Their approach employs a rule-based algorithm to extract answer candidates from a passage, followed by a ranking step with hand-crafted features to select the best answer. The rule-based chunking approach suffers from low coverage (\( \approx 70\% \) recall of answer chunks) that cannot be improved during training, and candidate ranking performance depends greatly on the quality of the hand-crafted features. More recently, Wang and Jiang (Wang & Jiang, 2016) proposed two end-to-end neural network models, one of which chunks a candidate answer by predicting the answer’s two boundary indices, while the other classifies each passage word as answer/not-answer. Both models improved significantly over the method proposed by Rajpurkar et al. (Rajpurkar et al., 2016). Our proposed model, called the dynamic chunk reader (DCR), not only differs significantly from both of the above systems in the way that answer candidates are generated and ranked, but also shares merits with both works. First, our model uses deep networks to learn better representations for candidate answer chunks, instead of using fixed feature representations as in (Rajpurkar et al., 2016). Second, it represents answer candidates as chunks, as in (Rajpurkar et al., 2016), instead of word-level representations (Wang & Jiang, 2016), to make the model aware of the subtle differences among candidates (importantly, overlapping candidates). The contributions of this paper are three-fold. (1) We propose a novel neural network model for joint candidate answer chunking and ranking, in which the candidate answer chunks are dynamically constructed and ranked in an end-to-end manner. (2) We propose a new question-attention mechanism to enhance passage word representations, which are subsequently used to construct chunk representations. (3) We propose several simple but effective features to strengthen the attention mechanism, which fundamentally improves candidate ranking, with the by-product of higher exact boundary match accuracy. Experiments on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), which contains a variety of human-generated factoid and non-factoid questions, show the effectiveness of the above three contributions. Our paper is organized as follows. We first formally define the RCQA problem, then describe our baseline with a neural network component, and next present the end-to-end dynamic chunk reader model. Finally, we analyze our experimental results and discuss related work. In the appendix, we give the formal equations and details of the model.
2 PROBLEM DEFINITION Table 1 shows an example of our RC setting, where the goal is to answer a question \( Q_i \), factoid (Q1) or non-factoid (Q2 and Q3), based on a supporting passage \( P_i \), by selecting a continuous sequence of text \( A_i \subseteq P_i \) as the answer. \( Q_i, P_i, \) and \( A_i \) are all word sequences, where each word is drawn from a vocabulary \( V \). The \( i \)-th instance in the training set is a triple \( (P_i, Q_i, A_i) \), where \( P_i = (p_{i1}, \ldots, p_{i|P_i|}) \), \( Q_i = (q_{i1}, \ldots, q_{i|Q_i|}) \), and \( A_i = (a_{i1}, \ldots, a_{i|A_i|}) \) (\( p_{i*}, q_{i*}, a_{i*} \in V \)). Owing to disagreement among annotators, there can be more than one correct answer for the same question; the \( k \)-th answer to \( Q_i \) is denoted by \( A_i^k = (a_{i1}^k, \ldots, a_{i|A_i^k|}^k) \). An answer candidate for the \( i \)-th training example is defined as \( c_i^{m,n} \), a sub-sequence in \( P_i \) that spans from position \( m \) to \( n \) (\( 1 \leq m \leq n \leq |P_i| \)). The ground truth answer \( A_i \) could be included in the set of all candidates \( C_i = \{c_i^{m,n} \mid \forall m, n \in \mathbb{N}^+, \text{subj}(m, n, P_i) \text{ and } 1 \leq m \leq n \leq |P_i| \} \), where \( \text{subj}(m, n, P_i) \) is a constraint put on the candidate chunks for \( P_i \), such as “\( c_i^{m,n} \) can have at most 10 tokens” or “\( c_i^{m,n} \) must have a pre-defined POS pattern”. To evaluate a system’s performance, its top answer to a question is matched against the corresponding gold standard answer(s). Remark: Categories of RC Tasks Other, simpler variants of the aforementioned RC task were explored in the past. For example, quiz-style datasets (e.g., MCTest (Richardson et al., 2013), MovieQA (Tapaswi et al., 2015)) have multiple-choice questions with answer options. Cloze-style datasets (Hermann et al., 2015; Hill et al., 2015; Onishi et al., 2016), usually automatically generated, have factoid “questions” created by replacing the answer in a sentence from the text with a blank. For the answer selection task this paper focuses on, several datasets exist, e.g., TREC-QA for factoid answer extraction from multiple given passages, bAbI (Weston et al., 2014), designed for inference purposes, and the SQuAD dataset (Rajpurkar et al., 2016) used in this paper. To the best of our knowledge, the SQuAD dataset is the only one for both factoid and non-factoid answer extraction with a question distribution close to real-world applications. 3 BASELINE: CHUNK-AND-RANK PIPELINE WITH NEURAL RC In this section we modify a state-of-the-art RC system for cloze-style tasks for our answer extraction purpose, both to see how large the gap between the two types of tasks is and to inspire our end-to-end system in the next section. In order to make the cloze-style RC system produce chunk-level decisions, we use the RC model to generate features for chunks, which are then used in a feature-based ranker, as in (Rajpurkar et al., 2016). As a result, this baseline can be viewed as a deep-learning-based counterpart of the system in (Rajpurkar et al., 2016). It has two main components: 1) a stand-alone answer chunker, which is trained to produce overlapping candidate chunks, and 2) a neural RC model, which is used to score each word in a given passage, with the scores thereafter used to generate chunk scores. Answer Chunking To reduce the errors generated by the rule-based chunker in (Rajpurkar et al.,
2016), we first capture the part-of-speech (POS) patterns of all answer sub-sequences in the training dataset to form a POS pattern trie, and then apply the answer POS patterns to passage \( P_i \) to acquire the collection of all sub-sequences (chunk candidates) \( C_i \) whose POS patterns can be matched in the POS pattern trie. This is equivalent to a constraint \( \text{subj}(m, n, P_i) \) on the candidate answer chunk generation process that only admits chunks with a POS pattern seen for answers in the training data. The sub-sequences \( C_i \) are then used as answer candidates for \( P_i \). Note that overlapping chunks can be generated for a passage, and we rely on the ranker to choose the best candidate based on features from the cloze-style RC system. Experiments showed that for more than 90% of the questions in the development set, the ground truth answer is included in the candidate set constructed in this manner. Feature Extraction and Ranking For chunk ranking, we (1) use the neural RCQA model to annotate each \( p_{ij} \) in passage \( P_i \) with a score \( s_{ij} \); then (2) for every chunk \( c_i^{m,n} \) in passage \( i \), collect the scores (\( s_{im}, \ldots, s_{in} \)) for all the (\( p_{im}, \ldots, p_{in} \)) contained within \( c_i^{m,n} \); and (3) extract features from the sequence of scores (\( s_{im}, \ldots, s_{in} \)) to characterize its scale and distribution, which serves as the feature representation of \( c_i^{m,n} \). In step (1), to acquire \( s_{ij} \) we train and apply a word-level single-layer Gated Attention Reader 2 (Dhingra et al., 2016), which has state-of-the-art performance on the CNN/Daily Mail cloze-style RC task. In step (3), for chunk \( c_i^{m,n} \) we designed 5 features, including 4 statistics of (\( s_{im}, \ldots, s_{in} \)): *maximum*, *minimum*, *average* and *sum*, as well as the count of matched POS patterns within the chunk, which serves as an answer prior. We use these 5 features in a state-of-the-art ranker (Ganjisaffar et al., 2011). 4 DYNAMIC CHUNK READER The dynamic chunk reader (DCR) model is presented in Figure 1. Motivated by the baseline we built, DCR improves over it in three ways. First, each chunk has a representation constructed dynamically, instead of having a set of pre-defined feature values. Second, each passage word’s representation is enhanced by word-by-word attention that evaluates the relevance of the passage word to the question. Third, these components are all combined in a single, end-to-end model that can be trained jointly. 2 We tried using more than one layer in the Gated Attention Reader, but no improvement was observed. Figure 1: The main components of the dynamic chunk reader model (from bottom to top) are bi-GRU encoders for passage and question, a word-by-word attention bi-GRU for the passage, dynamic chunk representations that are transformed from pooled dynamic chunks of hidden states, the question attention on every chunk representation, and the final answer chunk prediction. DCR works in five steps. First, the encoder layer encodes the passage and question separately, using bidirectional recurrent neural networks (RNNs). Second, the attention layer calculates the relevance of each passage word to the question. Third, the convolution layer generates unigram, bigram and trigram representations for each word; the bigram and trigram of a word end with that word, and proper padding is applied at the first words so that the convolution output has the same length as its input.
Fourth, the chunk representation layer dynamically extracts the candidate chunks from the given passage and creates a chunk representation that encodes the contextual information of each chunk. Fifth, the ranker layer scores the relevance between the representation of a chunk and the given question, and ranks all candidate chunks using a softmax layer. We describe each step below. Encoder Layer We use a bi-directional RNN encoder to encode \( P_i \) and \( Q_i \) of example \( i \), and obtain a hidden state for each word position \( p_{ij} \) and \( q_{ik} \). As RNN input, a word is represented by a row vector \( x \in \mathbb{R}^n \), which can be the concatenation of a word embedding and word features (see Fig. 1). The word vector for the \( t \)-th word is \( x_t \). A word sequence is processed using an RNN encoder with gated recurrent units (GRU) (Cho et al., 2014), which has proved effective in RC and neural machine translation tasks (Bahdanau et al., 2015; Kadlec et al., 2016; Dhingra et al., 2016). For each position \( t \), the GRU computes \( h_t \) from input \( x_t \) and previous state \( h_{t-1} \) as: 3 We could have separate parameters for the question and passage encoders, but a single shared encoder for both works better in our experiments. \[ r_t = \sigma(W_r x_t + U_r h_{t-1}) \tag{1} \\ u_t = \sigma(W_u x_t + U_u h_{t-1}) \tag{2} \\ \tilde{h}_t = \tanh(W x_t + U (r_t \odot h_{t-1})) \tag{3} \\ h_t = (1 - u_t) \odot h_{t-1} + u_t \odot \tilde{h}_t \tag{4} \] where \( h_t, r_t, \) and \( u_t \in \mathbb{R}^d \) are the d-dimensional hidden state, reset gate, and update gate, respectively; \( W_{\{r,u\}}, W \in \mathbb{R}^{n \times d} \) and \( U_{\{r,u\}}, U \in \mathbb{R}^{d \times d} \) are the parameters of the GRU; \( \sigma \) is the sigmoid function, and \( \odot \) denotes the element-wise product. For the word at position t, we use the hidden state \( \overrightarrow{h}_t \) from the forward RNN as a representation of the preceding context, and \( \overleftarrow{h}_t \) from a backward RNN that reads the text in reverse, to incorporate the context after t. Next, \( h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t] \), the bi-directional contextual encoding of \( x_t \), is formed; \( [\cdot;\cdot] \) is the concatenation operator. To distinguish hidden states from different sources, we denote the \( h_j \) of the j-th word in P and the \( h_k \) of the k-th word in Q as \( h_j^p \) and \( h_k^q \), respectively. Attention Layer Attention mechanisms in previous RC models (Kadlec et al., 2016; Hermann et al., 2015; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016a;b) enable question-aware passage representations. We propose a novel attention mechanism inspired by word-by-word style attention methods (Rocktäschel et al., 2015; Wang & Jiang, 2015; Santos et al., 2016). For each \( p_j \), a question-attended representation \( v_j \) is computed as follows (the example index i is omitted for simplicity): \[ \alpha_{jk} = h_j^p \cdot h_k^q, \tag{5} \\ \beta_j = \sum_{k=1}^{|Q|} \alpha_{jk} h_k^q, \tag{6} \\ v_j = [h_j^p; \beta_j] \tag{7} \] where \( h_j^p \) and \( h_k^q \) are hidden states from the bi-directional RNN encoders (see Figure 1). An inner product, \( \alpha_{jk} \), is calculated between \( h_j^p \) and every question word \( h_k^q \); it indicates how well the passage word \( p_j \) matches question word \( q_k \). \( \beta_j \) is a weighted pooling of the \( |Q| \) question hidden states, which serves as a \( p_j \)-aware question representation.
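A minimal numpy sketch of Eqs. (5)-(7) (ours, for illustration only; note that the alphas are raw inner products with no softmax, exactly as the equations are written):

```python
import numpy as np

def word_by_word_attention(Hp, Hq):
    """Eqs. (5)-(7): Hp holds |P| passage states of size 2d, Hq holds |Q|
    question states of size 2d; returns the |P| x 4d joint representation."""
    alpha = Hp @ Hq.T                          # (5): alpha_jk = h_j^p . h_k^q
    beta = alpha @ Hq                          # (6): weighted pooling of question states
    return np.concatenate([Hp, beta], axis=1)  # (7): v_j = [h_j^p ; beta_j]

# Toy shapes: 7 passage words, 4 question words, d = 3 (so 2d = 6).
V = word_by_word_attention(np.random.randn(7, 6), np.random.randn(4, 6))
print(V.shape)  # (7, 12), i.e. each v_j lies in R^{4d}
```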
The concatenation of \( h_j^p \) and \( \beta_j \) leads to a passage-question joint representation, \( v_j \in \mathbb{R}^{4d} \). Next, we apply a second bi-GRU layer taking the \( v_j \)'s as inputs, and obtain forward and backward representations \( \overrightarrow{\gamma}_j \) and \( \overleftarrow{\gamma}_j \in \mathbb{R}^d \), and in turn their concatenation, \( \gamma_j = [\overrightarrow{\gamma}_j; \overleftarrow{\gamma}_j] \). Convolution Layer Every word is encoded with the complete passage context through the attention-layer RNN. We would like to model richer representations of the words by introducing unigram, bigram and trigram representations. This enhanced representation has two benefits: 1) each word is enhanced with local context information, which helps identify answer chunk boundaries (using previous words is a common feature in POS tagging and named entity recognition); and 2) the n-gram information brought into the word representation can strengthen the semantic match between the interior of the answer chunk and the question. Consider a three-word candidate whose last word's representation includes the two previous words through the convolution layer: matching the last word then also captures the semantics of the chunk's interior. Specifically, we create for every word position j three representations, using the n-grams ending at position j: \[ \hat{\gamma}_{j1} = \gamma_j \cdot W_{c1} \tag{8} \\ \hat{\gamma}_{j2} = [\gamma_{j-1}; \gamma_j] \cdot W_{c2} \tag{9} \\ \hat{\gamma}_{j3} = [\gamma_{j-2}; \gamma_{j-1}; \gamma_j] \cdot W_{c3} \tag{10} \] We also tried another word-by-word attention method, as in (Santos et al., 2016), which feeds a similar passage representation to the question side. However, this did not lead to improvement, likely due to the confusion caused by long passages in RC; consequently, we used the proposed simplified version of word-by-word attention on the passage side only, as detailed in the equations above. We use three different convolution kernels for the different n-grams. Chunk Representation Layer A candidate answer chunk representation is created dynamically from the convolution layer output. We first decide the text boundary for the candidate chunk, and then form a chunk representation using all or part of the \( \gamma_j \) outputs inside the chunk. To decide a candidate chunk (boundary), we tried two approaches: (1) adopt the POS-trie-based approach used in our baseline, and (2) enumerate all possible chunks up to a maximum number of tokens. For (2), we create up to \( N \) (the maximum chunk length) chunks starting from any position \( j \) in \( P_i \). Approach (1) can generate candidates of arbitrary lengths, but fails to recall candidates whose POS pattern is unseen in the training set; approach (2) considers all possible candidates within a window and is more flexible, but over-generates invalid candidates. For a candidate answer chunk \( c^{m,n} \) spanning from position \( m \) to \( n \) inclusive, we construct the chunk representation \( \overrightarrow{\gamma}_{m,n} \in \mathbb{R}^{2d} \) using every \( \hat{\gamma}_{jl} \) within the range \([m, n]\), with a function \( g(\cdot) \) and \( l \in \{1, 2, 3\} \). Formally, \[ \overrightarrow{\gamma}_{m,n} = g(\hat{\gamma}_{ml}, \ldots, \hat{\gamma}_{nl}) \] Each \( \hat{\gamma}_{jl} \) is a convolution output over the concatenated forward and backward RNN hidden states from the attention layer.
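As a concrete illustration of approach (2), here is a minimal enumeration sketch (ours; span indices are 1-indexed and inclusive, following the notation above):

```python
def enumerate_chunks(passage_len, max_len=10):
    """Approach (2): every span (m, n), 1-indexed and inclusive,
    containing at most max_len tokens."""
    return [(m, n)
            for m in range(1, passage_len + 1)
            for n in range(m, min(m + max_len, passage_len + 1))]

print(len(enumerate_chunks(300)))  # 2955 candidates for a 300-token passage
```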
So the first half of \( \hat{\gamma}_{jl} \) encodes information from the forward RNN hidden states and the second half encodes information from the backward RNN hidden states. We experimented with several pooling functions (e.g., max, average) for \( g(\cdot) \), and found that, instead of pooling, the best \( g(\cdot) \) is to concatenate the first half of the convolution output of the chunk’s first word and the second half of the convolution output of the chunk’s last word. Formally, \[ \overrightarrow{\gamma}_{m,n} = g(\hat{\gamma}_{ml}, \ldots, \hat{\gamma}_{nl}) = [\overrightarrow{\hat{\gamma}}_{ml}; \overleftarrow{\hat{\gamma}}_{nl}] \] where \( \overrightarrow{\hat{\gamma}}_{ml} \) is the half of the \( l \)-gram representation corresponding to the forward attention-RNN output. We hypothesize that the hidden states at the two ends can better represent the chunk’s contexts, which is critical for this task, than the states within the chunk. This observation also agrees with (Kobayashi et al., 2016). Ranker Layer A score \( s^l_{m,n} \) for each \( l \)-gram chunk representation \( \overrightarrow{\gamma}_{m,n} \), denoting the probability of that chunk being the true answer, is calculated as a dot product with the question representation. The question representation is the concatenation of the last hidden state of the forward RNN and the first hidden state of the backward RNN. Formally, for the chunk \( c^{m,n}_i \) we have \[ s^l(c^{m,n}_i|P_i, Q_i) = \overrightarrow{\gamma}_{m,n} \cdot [\overrightarrow{h}^{Q_i}_{|Q_i|}; \overleftarrow{h}^{Q_i}_1] \] where \( s^l \) denotes the score generated from the \( l \)-gram representation, and \( \overrightarrow{h}^{Q_i}_k \) and \( \overleftarrow{h}^{Q_i}_k \) are the \( k \)-th hidden states output by question \( Q_i \)’s forward and backward RNN encoders, respectively. The final score for \( c^{m,n}_i \) is then evaluated as a linear combination of the three scores, followed by a softmax: \[ s(c^{m,n}_i|P_i, Q_i) = \mathrm{softmax}(W \cdot [s^1; s^2; s^3]) \] where \( s^l \) is shorthand for \( s^l(c^{m,n}_i|P_i, Q_i) \) and \( W \in \mathbb{R}^3 \). At run time, the chunk with the highest probability is taken as the answer. In training, the following negative log-likelihood is minimized: \[ \mathbb{L} = -\sum_{i=1}^N \log \mathbb{P}(A_i|P_i, Q_i) \] Note that the \( i \)-th training instance is only used when \( A_i \) is included in the corresponding candidate chunk set \( C_i \), i.e. \( \exists m, n: A_i = c^{m,n}_i \). The softmax in the final layer serves as a list-wise ranking module, similar in spirit to (Cao et al., 2007). 5 EXPERIMENTS Dataset We used the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) for the experiments. SQuAD is appealing because it is a mix of factoid and non-factoid questions, real-world (crowd-sourced) data, and of large scale (over 100K question-answer pairs collected from 536 Wikipedia articles); our results on it are reported in Table 2 above.
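The following minimal sketch (ours, with hypothetical names; rows are 0-indexed for simplicity) ties together the boundary-concatenation \( g(\cdot) \) and the dot-product ranker score defined above, before the softmax over candidates:

```python
import numpy as np

def chunk_representation(Gamma, m, n):
    """g(.): forward half of the first word's output concatenated with the
    backward half of the last word's output (rows of Gamma are [fwd; bwd])."""
    d = Gamma.shape[1] // 2
    return np.concatenate([Gamma[m, :d], Gamma[n, d:]])  # lies in R^{2d}

def chunk_score(Gamma, m, n, q_fwd_last, q_bwd_first):
    """Un-normalized s^l: dot product with [last forward state; first backward
    state] of the question; a softmax over all candidates follows in training."""
    q = np.concatenate([q_fwd_last, q_bwd_first])
    return chunk_representation(Gamma, m, n) @ q

# Toy usage: 7 words, d = 3; score the chunk covering (0-indexed) words 2..4.
Gamma = np.random.randn(7, 6)
print(chunk_score(Gamma, 2, 4, np.random.randn(3), np.random.randn(3)))
```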
reject
Reject
5
8b51bda8d1927a7b73064f105f5632a34d91c991
iclr
2,017
FINDING A JACK-OF-ALL-TRADES: AN EXAMINATION OF SEMI-SUPERVISED LEARNING IN READING COMPREHENSION Rudolf Kadlec*, Ondrej Bajgar*, Peter Hrincar & Jan Kleindienst IBM Watson V Parku 4, 140 00 Prague, Czech Republic {rudolf_kadlec,obajgar,phrincar,jankle}@cz.ibm.com ABSTRACT Deep learning has proven useful on many NLP tasks, including reading comprehension. However, it requires large amounts of training data, which are not available in some domains of application. Hence we examine the possibility of using data-rich domains to pre-train models and then apply them in domains where training data are harder to get. Specifically, we train a neural-network-based model on two context-question-answer datasets, the BookTest and CNN/Daily Mail, and we monitor transfer to subsets of bAbI, a set of artificial tasks designed to test specific reasoning abilities, and of SQuAD, a question-answering dataset which is much closer to real-world applications. Our experiments show very limited transfer if the model is not shown any training examples from the target domain; however, the results are encouraging if the model is shown at least a few target-domain examples. Furthermore, we show that the effect of pre-training is not limited to word embeddings. 1 INTRODUCTION Machine intelligence has had some notable successes, however often in narrow domains which are sometimes of little practical use to humans – for instance games like chess (Campbell et al., 2002) or Go (Silver et al., 2016). If we aimed to build a general AI that would be able to efficiently assist humans in a wide range of settings, we would want it to have a much larger set of skills – among them an ability to understand human language, to perform common-sense reasoning and to generalize its abilities to new situations the way humans do. If we want to achieve this goal through machine learning, we need data to learn from – a lot of data if the task at hand is complex, which is the case for many useful tasks. One way to achieve wide applicability would be to provide training data for each specific task we would like the machine to perform. However, it is unrealistic to obtain a sufficient amount of training data for some domains – they may for instance require expensive human annotation, or all domains of application may be difficult to predict in advance – while the amount of training data in other domains is practically unlimited (e.g. in language modelling or cloze-style question answering). The way to bridge this gap – and to achieve the aforementioned adaptability – is transfer learning (Pan & Yang, 2010) and the closely related semi-supervised learning (Zhu & Goldberg, 2009), which allow the system to acquire a set of skills on domains where data are abundant and then use these skills to succeed on previously unseen domains. Despite how important generalization is for general AI, a lot of research keeps focusing on solving narrow tasks. In this paper we would like to examine the transfer of learnt skills and knowledge within the domain of text comprehension, a field that has lately attracted a lot of attention within the NLP community (Hermann et al., 2015; Hill et al., 2015; Kobayashi et al., 2016; Kadlec et al., 2016b; Chen et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Trischler et al., 2016; Weissenborn, 2016; Cui et al., 2016b;a; Li et al., 2016; Shen et al., 2016). *These authors contributed equally to this work. Specifically, we would like to address the following research questions: 1.
Whether we can train models on natural-language tasks where data are abundant and transfer the learnt skills to tasks where in-domain training data may be difficult to obtain. We will first look into what reasoning abilities a model learns from two large-scale reading-comprehension datasets using artificial tasks, and then check whether it can transfer its skills to real-world tasks. Spoiler: both these transfers are very poor if we allow no training at all on the target task. 2. Whether pre-training on large-scale datasets helps if we allow the model to train on a small sample of examples from the target tasks. Here the results are much more positive. 3. Finally, we examine whether the benefits of pre-training are concentrated in any particular part of the model – namely the word-embedding part or the context encoder (the reasoning part). It turns out that pre-training is useful for both components. Although our results do not improve the current state of the art on any of the studied tasks, they show a clear positive effect of large-dataset pre-training on the performance of our baseline machine-learning model. Previous studies of transfer learning and semi-supervised learning in NLP focused on text classification (Dai & Le, 2015; Mou et al., 2016) and various parsing tasks (Collobert et al., 2011; Hashimoto et al., 2016). To our knowledge this work is the first study of transfer learning in reading comprehension, and we hope it will stimulate further work in this important area. We will first briefly introduce the datasets we use on the pre-training and target sides, then our baseline model, and afterwards describe in turn the method and results of each of the three experiments. 2 DATASETS 2.1 PRE-TRAINING DATASETS We have mentioned that for model pre-training we want a task where training data are abundant. An example of such a task is context-dependent cloze-style question answering, since the training data for this task can be generated automatically from a suitable corpus. We use two such pre-training datasets in our experiments: the BookTest (Bajgar et al., 2016) and the CNN/Daily Mail (CNN/DM) news dataset (Hermann et al., 2015). The task associated with both datasets is to answer a cloze-style question (i.e. fill in a blank in a sentence) whose answer needs to be inferred from a context document provided with the question. 2.1.1 BOOKTEST In the BookTest dataset, the context document is formed from 20 consecutive sentences from a book. The question is then formed by omitting a common noun or a named entity from the subsequent 21st sentence. Among datasets of this kind, the BookTest is one of the largest, with more than 14 million training examples coming from 3555 copyright-free books available thanks to Project Gutenberg. 2.1.2 CNN/DAILY MAIL In the CNN/DM dataset the context document is formed from a news article, while the cloze-style question is formed by removing a named entity from one of the short summary sentences which often appear at the top of the article. To stop the model from using world knowledge from outside the context article (and hence to truly test comprehension of the article), all named entities were replaced by anonymous tags, which are further shuffled for each example. This may make comprehension more difficult; however, since the answer is always one of the anonymized entities, it also reduces the number of possible answers, making guessing easier.
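To illustrate why such training data are practically unlimited, here is a minimal sketch of BookTest-style example generation (ours, not the dataset's release code; is_candidate stands in for the common-noun/named-entity detector):

```python
def make_cloze_examples(sentences, is_candidate, window=20, blank="XXXXX"):
    """BookTest-style generation: the context is `window` consecutive
    sentences; the question blanks one candidate word (a common noun or
    named entity, decided by is_candidate) in the following sentence."""
    for i in range(len(sentences) - window):
        context = sentences[i:i + window]
        target = sentences[i + window]
        for j, word in enumerate(target):
            if is_candidate(word):
                question = target[:j] + [blank] + target[j + 1:]
                yield context, question, word  # answer = the removed word

# Toy usage: treat capitalized words as answer candidates.
sents = [["Mary", "travelled", "to", "the", "hallway"]] * 21
context, question, answer = next(
    make_cloze_examples(sents, lambda w: w[:1].isupper()))
print(question, answer)  # ['XXXXX', 'travelled', ...] Mary
```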
2.2 TARGET DATASETS

2.2.1 bAbI

Our first target dataset is the bAbI tasks (Weston et al., 2016) – a set of artificial tasks, each designed to test a specific kind of reasoning. This toy dataset will allow us to observe what particular skills the model may be learning from each of the three training datasets. For our experiments we will be using an architecture designed to select one word from the context document as the answer. Hence we have selected Tasks 1, 2, 3, 4, 5, 11, 12, 13, 14 and 16, which fulfill this requirement, and added Task 15, which required a slight modification. Furthermore, because both pre-training datasets are cloze-style, we also converted the bAbI task questions into cloze style (e.g. "Where is John?" to "John is in the XXXXX."). For the models pre-trained on CNN/DM we also anonymized the tasks in a way similar to the pre-training dataset, i.e. we replaced all names of characters and all words that can appear as answers for the given task by anonymous tags in the style of CNN/DM. This gives even models that have not seen any training examples from the target domain a chance to answer the questions. Full details about these alterations can be found in Appendix A.

2.2.2 SQuAD

Secondly, we will look at transfer to the SQuAD dataset (Rajpurkar et al., 2016); here the associated task is already close to real-world usefulness. Although cloze-style questions have the huge advantage that they can be generated automatically from a suitable corpus – the path taken by CNN/DM and the BookTest – in practice humans would ask a proper question, not its cloze-style substitute. This creates the need for transfer from data-rich cloze-style training to the domain of proper questions, where data are much scarcer due to the necessary human annotation. The SQuAD dataset is a great target dataset for this purpose. As opposed to the bAbI tasks, this dataset targets a problem whose solution would be directly useful to humans: answering natural questions over a natural-language encyclopedic knowledge base. For our experiments we selected only the subset of the SQuAD training and development examples where the answer is a single word, since this is an inherent assumption of our machine learning model. This way we extracted 28,346 training examples out of the original 100,000 and 3,233 development examples out of 10,570.

3 MACHINE LEARNING MODEL: AS READER

We perform our experiments using the Attention Sum Reader (AS Reader; Kadlec et al., 2016b) model. The AS Reader is simple to implement while achieving strong performance on several text-comprehension tasks (Kadlec et al., 2016b; Bajgar et al., 2016; Chu et al., 2016). Since the AS Reader is a building block of many recent text-comprehension models (Trischler et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016a;b; Shen et al., 2016; Munkhdalai & Yu, 2016), it is a good representative of current research in this field. A high-level structure of the AS Reader is shown in Figure 1. The words from the document and the question are first converted into vector embeddings using a look-up matrix. The document is then read by a bidirectional Gated Recurrent Unit (GRU) network (Cho et al., 2014). A concatenation of the hidden states of the forward and backward GRUs at each word is then used as a contextual embedding of this word, intuitively representing the context in which the word is appearing.
We can also understand it as representing the set of questions to which this word may be an answer. Similarly, the question is read by a bidirectional GRU, but in this case only the final hidden states are concatenated to form the question embedding. The attention over each word in the context is then calculated as the dot product of its contextual embedding with the question embedding. This attention is normalized by the softmax function and summed across all occurrences of each answer candidate. The candidate with the most accumulated attention is selected as the final answer. For a more detailed description of the model, including equations, see Kadlec et al. (2016b).

Figure 1: Structure of the AS Reader model.
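To make the attention-sum step concrete, here is a minimal numpy sketch of the mechanism just described. The bidirectional-GRU encoders are replaced by random vectors, since only the attention and answer-selection logic is being illustrated; the dimensions and variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the bidirectional-GRU outputs: one contextual
# embedding per document word and a single question embedding.
doc_words = ["john", "went", "to", "the", "kitchen", "john", "slept"]
dim = 8
contextual = rng.normal(size=(len(doc_words), dim))  # per-word context vectors
question = rng.normal(size=dim)                      # question vector

# Dot-product attention over document positions, normalized by softmax.
scores = contextual @ question
attention = np.exp(scores - scores.max())
attention /= attention.sum()

# Attention-sum: accumulate attention over all occurrences of each
# candidate word; the candidate with the largest total is the answer.
totals = {}
for word, a in zip(doc_words, attention):
    totals[word] = totals.get(word, 0.0) + a
print(max(totals, key=totals.get))
```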
4 EXPERIMENTS: TRANSFER LEARNING IN TEXT COMPREHENSION

Now let us turn in more detail to the three kinds of experiments that we performed.

4.1 PRE-TRAINED WITHOUT TARGET ADJUSTMENT

In the first experiment we tested how a model trained on one of the large-scale pre-training datasets performs on the bAbI tasks without any opportunity to train on bAbI. Since the BookTest and CNN/DM tasks involve only cloze-style questions, we cannot expect a model trained on them to answer natural (non-cloze) questions. Hence we did not study the transfer to SQuAD in this case, only the transfer to the (cloze-converted) bAbI tasks.

4.1.1 METHOD

First we tested how the AS Reader architecture (Kadlec et al., 2016b) handles the tasks when trained directly on the bAbI training data for each task. Then we tested the degree of transfer from the BookTest and CNN/DM data to the 11 selected bAbI tasks. In the first part of the experiment we trained a separate instance of the AS Reader on the 10,000-example version of the bAbI training data for each of the 11 tasks (for more details see Appendix B.1). On 8 of them the architecture was able to learn the task with an accuracy of at least 95% (results for each task can be found in Table 4 in Appendix C). Hence, given appropriate training, the AS Reader is capable of the reasoning needed to solve most of the selected bAbI tasks. Now that we know the AS Reader is powerful enough to learn the target tasks, we can turn to transfer from the two large-scale datasets. The main part of this first experiment was then straightforward: we pre-trained multiple models on the BookTest and CNN/DM datasets and then simply evaluated them on the test datasets of the 11 selected bAbI tasks.

4.1.2 RESULTS

Table 1 summarizes the results of this experiment. Both the models trained on the BookTest and those trained on the CNN/DM dataset perform quite poorly on bAbI and achieve much lower accuracy than the models trained directly on each individual bAbI task.[1]

Table 1: The mean performance across the 11 bAbI tasks. The first two columns show a random baseline[2] and a baseline that selects the most frequent word from the context which also appears as an answer in the training data for the task. The following three columns show the performance of the AS Reader trained on different datasets; the last column shows the results of DMN+ (Xiong et al., 2016), the state-of-the-art model on the bAbI 10k dataset. For more detailed results listing per-task accuracies see Appendix C.

<table>
<tr> <th>Model</th> <th>Rnd.</th> <th>Most freq. cand.</th> <th>AS Reader (BookTest 14M)</th> <th>AS Reader (CNN/DM 1.2M)</th> <th>AS Reader (bAbI 10k)</th> <th>DMN+ (bAbI 10k)</th> </tr>
<tr> <td>bAbI mean (11 tasks)</td> <td>6.1</td> <td>29.9</td> <td>34.8</td> <td>38.1</td> <td>92.7</td> <td>95.7</td> </tr>
</table>

[1] It should be noted that there are several machine learning models that perform better than the AS Reader in the 10k weakly supervised setting, e.g. Sukhbaatar et al. (2015); Xiong et al. (2016); Graves et al. (2016); however, they often need significant fine-tuning. We, on the other hand, trained a plain AS Reader model without any modifications. Hyperparameter and feature fine-tuning could probably further increase its performance on individual tasks; however, that goes directly against the idea of generality which is at the heart of this work. For comparison with the state of the art we include the results of DMN+ (Xiong et al., 2016) in Table 1, which had the best average performance over the original 20 tasks.
[2] The random baseline selects uniformly at random among all unique words contained in the context document.

However, there is some transfer between the tasks, since the AS Reader trained on either the BookTest or CNN/DM outperforms the random baseline[2] and even an improved baseline which selects the most frequent word from the context that also appears as an answer in the training data for the task. The results also show that the models trained on CNN/DM perform somewhat better on most tasks than the BookTest models. This may be due to the fact that the bAbI tasks generally require the model to summarize information from the context document, which is also what the CNN/DM dataset is testing. On the other hand, the BookTest requires prediction of a possible continuation of a story, where the required kind of reasoning is much less clear but certainly different from pure summarization. Another explanation for the better performance of the CNN/DM models might be that they solve a slightly simpler task, since the candidate answers were already pre-selected in the entity-anonymization step. Readers interested in how the training-dataset size affects this kind of transfer can consult Kadlec et al. (2016a), where we show that the target-task performance is somewhat better if we use the large BookTest as opposed to its smaller subset, the Children's Book Test (CBT) (Hill et al., 2015).

The conclusion from this experiment is that the skills learned from the two large-scale datasets generalize surprisingly poorly even to simple toy tasks. This may make us ask whether most teams' focus on solving narrow tasks is truly beneficial if the skills learnt on these tasks are hard to apply elsewhere. However, it also brings us to our next experiment, where we try to provide some help to the struggling pre-trained models.
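The two baselines in Table 1 are easy to state precisely. The sketch below is our reading of their description (footnote 2 and the caption above); the function names and the fallback behaviour when no training answer occurs in the context are assumptions.

```python
import random
from collections import Counter

def random_baseline(context_tokens, rng):
    # Uniform choice among the *unique* words in the context document.
    return rng.choice(sorted(set(context_tokens)))

def most_frequent_candidate(context_tokens, train_answer_counts):
    # Most frequent training-set answer that also occurs in the context.
    candidates = set(context_tokens) & set(train_answer_counts)
    if not candidates:  # assumed fallback; the paper does not specify one
        return random_baseline(context_tokens, random.Random(0))
    return max(candidates, key=lambda w: train_answer_counts[w])

context = "mary went to the garden then mary went to the office".split()
answers = Counter({"garden": 120, "office": 80, "kitchen": 200})
print(most_frequent_candidate(context, answers))  # -> 'garden'
```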
4.2 PRE-TRAINED WITH TARGET ADJUSTMENT

After showing that the skills learnt from the BookTest and CNN/DM datasets are by themselves insufficient for solving the toy tasks, the next natural question is whether they are useful if helped by training on a small sample of examples from the target task. We call this additional phase of training target adjustment. For this experiment we again use the bAbI tasks; however, we also test transfer to a subset of the SQuAD dataset, which is much closer to real-world natural-language question answering. The results presented in this and the following section are based on training 3,701 model instances.

4.2.1 METHOD

Common to the bAbI and SQuAD datasets. In this experiment we started with a pre-trained model used in the previous experiment. However, after it finished training on one of the large pre-training datasets, we allowed it to train on a subset of training examples from the target dataset. We tried subsets of various sizes ranging from a single example to thousands. We trained four different pre-trained models and also, for comparison, four randomly initialized models with the same hyperparameters (see Appendix B.2 for details). The experiment with each task-model couple was run on 4 different data samples of each size, randomly drawn from the task's training dataset, to account for variations between these random samples – which may be substantial given the small sample size.[3]

bAbI. For each of these models we observed the test accuracy at the best-validation epoch and compared this number between the randomly initialized and pre-trained models. Validation was done using 100 examples which were set aside from the task's original 10k training data.[4] We perform the experiment with models pre-trained on the BookTest and also on CNN/DM.

SQuAD subset. In the SQuAD experiment we trained the model on a subset of the original training dataset where answers were only single words, and on its sub-subsets. We report the best-validation accuracy on a development set filtered in the same way. This experiment was performed only with the models pre-trained on the BookTest.

[3] We are planning to release the split training datasets soon.
[4] The other models trained on the full 10k dataset usually use 1,000 validation examples (Sukhbaatar et al., 2015; Xiong et al., 2016); however, we wanted to focus on the low-data regime, thus we used 10 times fewer examples.
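The control flow of the target-adjustment phase, as described above, amounts to fine-tuning with best-validation early stopping. The sketch below shows this loop; `train_epoch`, `evaluate` and the model object are hypothetical stand-ins for the actual training code, which the paper does not show.

```python
import copy

def target_adjust(model, target_train, target_valid, epochs, train_epoch, evaluate):
    """Fine-tune a (pre-trained or randomly initialized) model on a small
    target-domain sample, keeping the weights of the epoch with the best
    validation accuracy. All callables here are illustrative stand-ins."""
    best_acc = evaluate(model, target_valid)
    best_model = copy.deepcopy(model)
    for _ in range(epochs):
        train_epoch(model, target_train)     # one pass over e.g. 1-10,000 examples
        acc = evaluate(model, target_valid)  # e.g. the 100 held-out bAbI examples
        if acc > best_acc:
            best_acc, best_model = acc, copy.deepcopy(model)
    return best_model, best_acc
```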
4.2.2 RESULTS

The results of these experiments are summarized in Figures 2 and 3.

Figure 2: Sub-figure (a) shows the average across the 11 bAbI tasks of the best-validation model's test accuracy. (b) shows the test accuracy on SQuAD of each model we trained (the points); the lines join the accuracies of the best-validation models for each training size.

Figure 3: Example of 3 bAbI tasks where pre-training seems to help. Note that the task may be easier for the CNN/DM models due to answer anonymization, which restricts the choice of possible answers.

bAbI. Sub-figure 2a shows the mean test accuracy of the models that achieved the best validation result for each single task. The results for both the BookTest and CNN/DM experiments confirm the positive effect of pre-training compared to the randomly initialized baseline. Figure 3 shows performance on selected bAbI tasks where pre-training has a clearly positive effect; such a plot for each of the target tasks is provided in Appendix C.2 (Figure 4). Note that the CNN/DM models cannot be directly compared to the BookTest results due to the entity anonymization, which seems to simplify the task when the model is trained on smaller datasets. Since our evaluation methodology with different training-set sizes is novel, we can compare our results only to MemN2N (Sukhbaatar et al., 2015) trained on a 1k dataset. MemN2N is the only weakly supervised model that reports accuracy when trained on fewer than 10k examples. MemN2N achieves an average accuracy of 93.2% on the eleven selected tasks.[5] This is substantially better than both our random baseline (78.0%) and the BookTest-pre-trained model (79.5%); however, our model is not tuned in any way towards this particular task. One important conceptual difference is that the AS Reader processes the whole context as one sequence of words, whereas MemN2N receives the context split into single sentences, which simplifies the task for the network.

SQuAD subset. The results of the SQuAD experiment also confirm the positive effect of pre-training; see Sub-figure 2b. For now compare just the lines showing the performance of the fully pre-trained model and the randomly initialized model; the meaning of the remaining two lines will become clear in the next section. More detailed statistics about the results of this experiment can be found in Appendix D. We should note that the performance of our model is not competitive with the state-of-the-art models on this dataset. For instance the DCR model (Yu et al., 2016) trained on our SQuAD subset achieves a validation accuracy of 74.9% on this task, which is better than both our randomly initialized (35.4%) and pre-trained (51.6%) models.[6] However, the DCR model is designed specifically for the SQuAD task; for instance, it utilizes features that are not used by our model.

4.3 PARTIALLY PRE-TRAINED MODEL

Since our previous experiment confirmed the positive effect of pre-training if followed by target-domain adjustment, we wondered which part of the model contains the knowledge transferable to new domains. To examine this we performed the following experiment.

4.3.1 METHOD

Our machine learning model, the AS Reader, consists of two main parts: the word-embedding look-up and the bidirectional GRUs used to encode the document and the question (see Figure 1). Therefore a natural question is what the contribution of each of these parts is. To test this we created two models out of each pre-trained model used in the previous experiment. The first model variant uses the pre-trained word embeddings from the original model while the GRU encoders are randomly initialized. We say that this model has pre-trained embeddings. The second model variant uses the opposite setting, where the word embeddings are randomly initialized while the encoders are taken from a pre-trained model. We call this pre-trained encoders.

bAbI. For this experiment we selected only a subset of tasks, with a training set of 100 examples, where there was a significant difference in accuracy between the randomly initialized and pre-trained models. For evaluation we use the same methodology as in the previous experiment, that is, we report the accuracy of the best-validation model averaged over 4 training splits.

SQuAD subset. We evaluated both model variants on all training sets from the previous SQuAD experiment using the same methodology.

[5] MemN2N trained on each single task with PE LS RN features; see Sukhbaatar et al. (2015) for details.
[6] We would like to thank Yu et al. (2016) for training their system on our dataset.
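Constructing the two partially pre-trained variants amounts to copying a subset of the pre-trained weights into a freshly initialized model. The sketch below illustrates this under the assumption that parameters live in a plain dict; the key names and shapes are illustrative, not the actual implementation.

```python
import numpy as np

def init_params(rng, vocab=1000, emb=384, hid=384):
    # Illustrative parameter layout: an embedding look-up plus GRU weights.
    return {"embeddings": rng.normal(size=(vocab, emb)),
            "gru_doc": rng.normal(size=(emb + hid, 3 * hid)),
            "gru_query": rng.normal(size=(emb + hid, 3 * hid))}

def build_variant(pretrained, rng, keep):
    """Copy only the chosen component(s) from a pre-trained model,
    re-initializing the rest; keep is a subset of {'embeddings', 'encoders'}."""
    fresh = init_params(rng)
    if "embeddings" in keep:
        fresh["embeddings"] = pretrained["embeddings"].copy()
    if "encoders" in keep:
        fresh["gru_doc"] = pretrained["gru_doc"].copy()
        fresh["gru_query"] = pretrained["gru_query"].copy()
    return fresh

rng = np.random.default_rng(1)
pretrained = init_params(rng)
emb_only = build_variant(pretrained, rng, keep={"embeddings"})  # pre-trained embeddings
enc_only = build_variant(pretrained, rng, keep={"encoders"})    # pre-trained encoders
```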
Table 2: The effect of pre-training different components of the model for selected tasks. The first row shows the performance (average test accuracy across all trained model instances in each category) of a randomly initialized baseline model. The following three rows show the increase in accuracy (measured in percent absolute) when the model is initialized with weights pre-trained on the BookTest. The last line shows results for models initialized with Google News word2vec word embeddings (Mikolov et al., 2013).

<table>
<tr> <th rowspan="2">Model variant</th> <th colspan="4">bAbI task (100 ex.)</th> <th rowspan="2">SQuAD (28k ex.)</th> </tr>
<tr> <th>1</th> <th>5</th> <th>11</th> <th>14</th> </tr>
<tr> <td>Random init</td> <td>53%</td> <td>66%</td> <td>71%</td> <td>33%</td> <td>31%</td> </tr>
<tr> <td>Δ Pre-trained encoders</td> <td>+6</td> <td>+25</td> <td>+4</td> <td>+2</td> <td>+4</td> </tr>
<tr> <td>Δ Pre-trained embeddings</td> <td>+17</td> <td>+6</td> <td>+8</td> <td>+8</td> <td>+10</td> </tr>
<tr> <td>Δ Pre-trained full</td> <td>+34</td> <td>+22</td> <td>+14</td> <td>+13</td> <td>+17</td> </tr>
<tr> <td>Δ Pre-trained word2vec</td> <td>-2</td> <td>+5</td> <td>+1</td> <td>-1</td> <td>+5</td> </tr>
</table>

4.3.2 RESULTS

bAbI. Table 2 shows the improvement of pre-trained models over a randomly initialized baseline. In most cases (all except Task 5) the fully pre-trained model achieved the best accuracy.

SQuAD subset. The accuracies of the four model variants are plotted in Figure 2b together with the results of the previous SQuAD experiment. The graph shows that both pre-trained embeddings and pre-trained encoders alone improve performance over the randomly initialized baseline; however, the fully pre-trained model is always the best.

The overall result of this experiment is that both pre-training of the word embeddings and pre-training of the encoder parameters are important, since the fully pre-trained model outperforms both partially pre-trained variants.

5 CONCLUSION

Our experiments show that transfer from two large cloze-style question-answering datasets to our two target tasks is surprisingly poor if the models are not provided with any examples from the target domain. However, we show that pre-trained models perform significantly better than randomly initialized models if they are shown at least a few training examples from the target domain. The usefulness of pre-trained word embeddings is well known in the NLP community; however, we show that the power of our pre-trained model does not lie just in the embeddings. This suggests that once the text-comprehension community agrees on a sufficiently versatile model, much larger parts of the model could start being reused than just the word embeddings. The generalization of skills from a training domain to new tasks is an important ingredient of any system we would want to call intelligent. This work is an early step in exploring this direction.

REFERENCES

Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. Embracing Data Abundance: BookTest Dataset for Reading Comprehension. arXiv preprint arXiv:1610.00956, 2016.
Murray Campbell, A. Joseph Hoane, and Feng-hsiung Hsu. Deep Blue. Artificial Intelligence, 134(1):57–83, 2002.
Danqi Chen, Jason Bolton, and Christopher D. Manning. A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task. In Association for Computational Linguistics (ACL), 2016.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Empirical Methods in Natural Language Processing (EMNLP), 2014. URL http://arxiv.org/abs/1406.1078v3
Zewei Chu, Hai Wang, Kevin Gimpel, and David McAllester. Broad Context Language Modeling as Reading Comprehension. 2016.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa.
Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2461–2505, 2011.
Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-Attention Neural Networks for Reading Comprehension. 2016a. URL http://arxiv.org/abs/1607.04423
Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus Attention-based Neural Networks for Chinese Reading Comprehension. 2016b.
Andrew M. Dai and Quoc V. Le. Semi-supervised Sequence Learning. NIPS, 2015. URL http://arxiv.org/abs/1511.01432
Bhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. Gated-Attention Readers for Text Comprehension. 2016. URL http://arxiv.org/abs/1606.01549
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomènech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. Hybrid Computing Using a Neural Network with Dynamic External Memory. Nature, 2016. doi: 10.1038/nature20101.
Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks. Submitted to ICLR 2017, 2016.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching Machines to Read and Comprehend. In Advances in Neural Information Processing Systems, pp. 1684–1692, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations. arXiv preprint arXiv:1511.02301, 2015.
Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. From Particular to General: A Preliminary Case Study of Transfer Learning in Reading Comprehension. MAIN Workshop at NIPS, 2016a.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Neural Text Understanding with Attention Sum Reader. In Proceedings of ACL, 2016b.
Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic Entity Representation with Max-pooling Improves Machine Reading. In Proceedings of the North American Chapter of the Association for Computational Linguistics and Human Language Technologies (NAACL-HLT), 2016.
Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. Dataset and Neural Recurrent Sequence Labeling Model for Open-Domain Factoid Question Answering. 2016. URL https://arxiv.org/abs/1607.06275
Tomas Mikolov, Greg Corrado, Kai Chen, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of the International Conference on Learning Representations (ICLR 2013), 2013. URL http://arxiv.org/pdf/1301.3781v3.pdf
Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. How Transferable are Neural Networks in NLP Applications? EMNLP, 2016.
Tsendsuren Munkhdalai and Hong Yu. Reasoning with Memory Augmented Neural Networks for Language Comprehension. 2016. URL https://arxiv.org/abs/1610.06454v1
Sinno Jialin Pan and Qiang Yang. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, October 2010. doi: 10.1109/TKDE.2009.191.
URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5288526
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ Questions for Machine Comprehension of Text. 2016. URL http://arxiv.org/abs/1606.05250
Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. ReasoNet: Learning to Stop Reading in Machine Comprehension. 2016. URL http://arxiv.org/abs/1609.05284
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature, 529(7587):484–489, 2016. doi: 10.1038/nature16961. URL http://dx.doi.org/10.1038/nature16961
Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative Alternating Neural Attention for Machine Reading. 2016.
Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-To-End Memory Networks. 2015. URL http://arxiv.org/abs/1503.08895
Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural Language Comprehension with the EpiReader. 2016. URL http://arxiv.org/abs/1606.02270
Dirk Weissenborn. Separating Answers from Queries for Neural Reading Comprehension. 2016. URL http://arxiv.org/abs/1607.03316
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. 2016. URL https://arxiv.org/abs/1502.05698
Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic Memory Networks for Visual and Textual Question Answering. ICML, 2016. URL http://arxiv.org/abs/1603.01417
Yang Yu, Wei Zhang, Kazi Hasan, Mo Yu, Bing Xiang, and Bowen Zhou. End-to-End Reading Comprehension with Dynamic Answer Chunk Ranking. 2016. URL http://arxiv.org/abs/1610.09996
Xiaojin Zhu and Andrew B. Goldberg. Introduction to Semi-supervised Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1):1–130, 2009.

A CLOZE-STYLE bAbI DATASET

Since our AS Reader architecture is designed to select a single word from the context document as the answer (the task of the CBT and the BookTest), we selected 10 bAbI tasks that fulfill this requirement out of the original 20. These tasks are: 1. single supporting fact, 2. two supporting facts, 3. three supporting facts, 4. two argument relations, 5. three argument relations, 11. basic coreference, 12. conjunction, 13. compound coreference, 14. time reasoning and 16. basic induction. Task 15 needed a slight modification to satisfy this requirement: we converted the answers into the plural (e.g. "Q: What is Gertrude afraid of? A: wolf." was converted into "A: wolves."), which also seems to be the more natural way to formulate the answer to such a question.

Also, since the CBT and the BookTest train the model for cloze-style question answering, we modify the original bAbI dataset by reformulating the questions into cloze style. For example, we translate the question "Where is John ?" into "John is in the XXXXX ."

For the models pre-trained on CNN/DM we also replace two kinds of words by anonymized tags (e.g. "@entity56") in a style similar to the pre-training dataset. Specifically, we replace two (largely overlapping) categories of words:
1. Proper names of story characters (e.g. John, Sandra)
2. Any word that can appear as an answer for the particular task (e.g. kitchen, garden if the task is asking about locations)
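A toy sketch of this cloze reformulation is shown below. Real bAbI questions have a small number of fixed shapes per task, so a few hand-written templates suffice; the templates and the function here are our own illustration, not the original conversion script.

```python
def to_cloze(question, template_map):
    """Turn a wh-question into a cloze statement using hand-written templates.

    The templates are illustrative; the actual conversion was done per
    bAbI task, where the question shapes are fixed and known in advance.
    """
    for pattern, template in template_map.items():
        if question.startswith(pattern):
            subject = question[len(pattern):].rstrip(" ?")
            return template.format(subject)
    raise ValueError("no template for: " + question)

templates = {"Where is ": "{} is in the XXXXX .",
             "Where was ": "{} was in the XXXXX ."}
print(to_cloze("Where is John ?", templates))  # -> 'John is in the XXXXX .'
```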
B METHOD DETAILS

B.1 DIRECT TRAINING ON bAbI – METHOD

Here we give a more detailed description of the method we used to arrive at our results. We highlight only the facts particular to this experiment; a more detailed general description of training the AS Reader is given in Kadlec et al. (2016b). The results given for the AS Reader trained on bAbI are each for a single model with 64 hidden units in each direction of the GRU context encoder and an embedding dimension of 32, trained on the 10k training data provided with that particular task. The results for the AS Reader trained on the BookTest and the CNN/DM are for a greedy ensemble consisting of 4 models whose predictions were simply averaged. The models and the ensemble were all validated on the validation set corresponding to the training dataset. The performance on the bAbI tasks oscillated notably during training; however, the ensemble averaging does somewhat mitigate this and gives more representative numbers.

B.2 HYPERPARAMETERS FOR THE TARGET-ADJUSTMENT EXPERIMENTS

Table 3 lists the hyperparameters of the pre-trained AS Reader instances used in our experiments with target adjustment.

<table>
<tr> <th>Dataset</th> <th>Hidden units</th> <th>Embedding dim.</th> <th>Learning rate</th> <th>Dropout</th> </tr>
<tr> <td>BookTest</td> <td>768</td> <td>256</td> <td>0.0001</td> <td>0</td> </tr>
<tr> <td>BookTest</td> <td>384</td> <td>384</td> <td>0.0005</td> <td>0.2</td> </tr>
<tr> <td>BookTest</td> <td>384</td> <td>384</td> <td>0.0005</td> <td>0.4</td> </tr>
<tr> <td>BookTest</td> <td>512</td> <td>384</td> <td>0.0001</td> <td>0</td> </tr>
<tr> <td>CNN/DM</td> <td>128</td> <td>128</td> <td>0.001</td> <td>0</td> </tr>
<tr> <td>CNN/DM</td> <td>256</td> <td>128</td> <td>0.001</td> <td>0</td> </tr>
<tr> <td>CNN/DM</td> <td>384</td> <td>128</td> <td>0.001</td> <td>0</td> </tr>
<tr> <td>CNN/DM</td> <td>384</td> <td>384</td> <td>0.001</td> <td>0</td> </tr>
</table>

C DETAILED RESULTS

C.1 EXPERIMENTS WITHOUT TARGET ADJUSTMENT

Table 4 shows detailed results for the experiments on models which were just pre-trained on one of the pre-training datasets, without any target adjustment. It also shows several baselines and the results of a state-of-the-art model.

C.2 TARGET-ADJUSTMENT EXPERIMENTS

C.2.1 RESULTS FOR ALL bAbI TASKS

Figure 4 shows the test accuracies of all models that we trained in the target-adjustment experiments, as well as lines joining the accuracies of the best-validation models.

Table 4: Performance of the AS Reader when trained on the bAbI 10k, BookTest and CNN/DM datasets and then evaluated on the bAbI test data. The Dynamic Memory Network (DMN+) is the state-of-the-art model in the weakly supervised setting on the bAbI 10k dataset; its results are taken from Xiong et al. (2016). MemN2N (Sukhbaatar et al., 2015) is the state-of-the-art model on the 1k training dataset; for completeness we also include its results with the 10k training data.
<table>
<tr> <th>Task</th> <th>Random (not trained)</th> <th>Rnd cand. (bAbI 10k)</th> <th>MemN2N single, PE LS RN (bAbI 1k)</th> <th>MemN2N single, PE LS LW RN (bAbI 10k)</th> <th>DMN+ single (bAbI 10k)</th> <th>AS Reader (bAbI 10k)</th> <th>AS Reader (BookTest 14M)</th> <th>AS Reader (CNN/DM 1.2M)</th> </tr>
<tr> <td>1 Single supporting fact</td> <td>7.80</td> <td>31.20</td> <td>100.00</td> <td>100.00</td> <td>100.00</td> <td>100.00</td> <td>37.30</td> <td>51.50</td> </tr>
<tr> <td>2 Two supporting facts</td> <td>4.40</td> <td>26.96</td> <td>91.70</td> <td>99.70</td> <td>99.70</td> <td>91.90</td> <td>25.80</td> <td>28.90</td> </tr>
<tr> <td>3 Three supporting facts</td> <td>3.40</td> <td>19.14</td> <td>59.70</td> <td>97.90</td> <td>98.90</td> <td>86.00</td> <td>22.20</td> <td>27.40</td> </tr>
<tr> <td>4 Two-argument relations</td> <td>10.50</td> <td>33.58</td> <td>97.20</td> <td>100.00</td> <td>100.00</td> <td>100.00</td> <td>50.30</td> <td>54.90</td> </tr>
<tr> <td>5 Three-argument relations</td> <td>4.40</td> <td>21.42</td> <td>86.90</td> <td>99.20</td> <td>99.50</td> <td>99.80</td> <td>67.60</td> <td>68.10</td> </tr>
<tr> <td>11 Basic coreference</td> <td>6.20</td> <td>30.42</td> <td>99.10</td> <td>99.90</td> <td>100.00</td> <td>100.00</td> <td>33.00</td> <td>20.80</td> </tr>
<tr> <td>12 Conjunction</td> <td>6.70</td> <td>27.25</td> <td>99.80</td> <td>100.00</td> <td>100.00</td> <td>100.00</td> <td>30.40</td> <td>37.70</td> </tr>
<tr> <td>13 Compound coreference</td> <td>5.60</td> <td>27.73</td> <td>99.60</td> <td>100.00</td> <td>100.00</td> <td>100.00</td> <td>33.80</td> <td>14.00</td> </tr>
<tr> <td>14 Time reasoning</td> <td>5.00</td> <td>27.82</td> <td>98.30</td> <td>99.90</td> <td>99.80</td> <td>95.00</td> <td>27.60</td> <td>50.50</td> </tr>
<tr> <td>15 Basic deduction</td> <td>5.20</td> <td>37.20</td> <td>100.00</td> <td>100.00</td> <td>100.00</td> <td>96.70</td> <td>39.90</td> <td>17.60</td> </tr>
<tr> <td>16 Basic induction</td> <td>7.50</td> <td>45.65</td> <td>98.70</td> <td>48.20</td> <td>54.70</td> <td>50.30</td> <td>15.10</td> <td>48.00</td> </tr>
<tr> <td>bAbI mean (11 tasks)</td> <td>6.06</td> <td>29.85</td> <td>93.73</td> <td>94.98</td> <td>95.69</td> <td>92.70</td> <td>34.82</td> <td>38.13</td> </tr>
</table>

Figure 4: The test accuracies of all models that we trained in the target-adjustment experiments. The line joins the test accuracies of the best-validation models of each model type.

C.2.2 AVERAGE OVER ALL MODELS TRAINED ON bAbI TASKS

Figure 5 plots the mean accuracy of all models trained in our experiments. This suggests that pre-training helped all models, not only the top-performing ones selected by validation, as already shown in Figure 2a.

Figure 5: The average of the mean test accuracies across the 11 bAbI tasks. For the average of the best validation results see Figure 2a.

D MEANS, STANDARD DEVIATIONS AND P-VALUES BY EXPERIMENT

Table 5 shows the mean accuracy across all models trained for each combination of task, pre-training dataset and target-adjustment dataset size. Table 6 shows the corresponding standard deviations. Table 7 then shows the p-value for testing whether the expected accuracy of pre-trained models is greater than the expected accuracy of randomly initialized models.
This shows that the pre-trained models are statistically significantly better for all target-adjustment set sizes on the SQuAD dataset. On bAbI the BookTest pre-trained models perform convincingly better especially for target-adjustment dataset sizes 100, 500 and 1000, with Task 16 being the main exception to this because the AS Reader struggles to learn it in any setting. For the CNN+DM pre-training the results are not conclusive. <table> <tr> <th rowspan="2">Task</th> <th rowspan="2">Pretrain. set</th> <th rowspan="2">Model</th> <th colspan="10">Target-adjustment set size</th> </tr> <tr> <th>0</th> <th>1</th> <th>10</th> <th>100</th> <th>500</th> <th>1000</th> <th>5000</th> <th>10000</th> <th>28174</th> </tr> <tr> <td>SQuAD</td> <td>BookTest</td> <td>pre-trained</td> <td>0.025</td> <td>0.027</td> <td>0.049</td> <td>0.122</td> <td>NA</td> <td>0.245</td> <td>NA</td> <td>0.388</td> <td>0.484</td> </tr> <tr> <td>SQuAD</td> <td>BookTest</td> <td>rand. init.</td> <td>0.004</td> <td>0.006</td> <td>0.018</td> <td>0.042</td> <td>NA</td> <td>0.107</td> <td>NA</td> <td>0.214</td> <td>0.315</td> </tr> <tr> <td>Task 1</td> <td>BookTest</td> <td>pre-trained</td> <td>0.356</td> <td>0.383</td> <td>0.459</td> <td>0.870</td> <td>0.992</td> <td>0.995</td> <td>0.999</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 1</td> <td>BookTest</td> <td>rand. init.</td> <td>0.010</td> <td>0.327</td> <td>0.431</td> <td>0.529</td> <td>0.888</td> <td>0.916</td> <td>0.976</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 1</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.295</td> <td>0.385</td> <td>0.519</td> <td>0.689</td> <td>0.969</td> <td>0.985</td> <td>0.990</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 1</td> <td>CNN+DM</td> <td>rand. init.</td> <td>0.100</td> <td>0.354</td> <td>0.450</td> <td>0.582</td> <td>0.954</td> <td>0.941</td> <td>0.977</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 2</td> <td>BookTest</td> <td>pre-trained</td> <td>0.206</td> <td>0.295</td> <td>0.318</td> <td>0.339</td> <td>0.398</td> <td>0.410</td> <td>0.755</td> <td>0.783</td> <td>NA</td> </tr> <tr> <td>Task 2</td> <td>BookTest</td> <td>rand. init.</td> <td>0.003</td> <td>0.225</td> <td>0.290</td> <td>0.332</td> <td>0.358</td> <td>0.361</td> <td>0.528</td> <td>0.645</td> <td>NA</td> </tr> <tr> <td>Task 2</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.177</td> <td>0.265</td> <td>0.288</td> <td>0.359</td> <td>0.410</td> <td>0.398</td> <td>0.539</td> <td>0.586</td> <td>NA</td> </tr> <tr> <td>Task 2</td> <td>CNN+DM</td> <td>rand. init.</td> <td>0.005</td> <td>0.280</td> <td>0.320</td> <td>0.380</td> <td>0.371</td> <td>0.396</td> <td>0.478</td> <td>0.469</td> <td>NA</td> </tr> <tr> <td>Task 3</td> <td>BookTest</td> <td>pre-trained</td> <td>0.159</td> <td>0.192</td> <td>0.227</td> <td>0.314</td> <td>0.440</td> <td>0.508</td> <td>0.759</td> <td>0.857</td> <td>NA</td> </tr> <tr> <td>Task 3</td> <td>BookTest</td> <td>rand. init.</td> <td>0.005</td> <td>0.135</td> <td>0.182</td> <td>0.219</td> <td>0.370</td> <td>0.419</td> <td>0.542</td> <td>0.482</td> <td>NA</td> </tr> <tr> <td>Task 3</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.164</td> <td>0.213</td> <td>0.222</td> <td>0.303</td> <td>0.450</td> <td>0.489</td> <td>0.585</td> <td>0.687</td> <td>NA</td> </tr> <tr> <td>Task 3</td> <td>CNN+DM</td> <td>rand. 
init.</td> <td>0.001</td> <td>0.175</td> <td>0.227</td> <td>0.272</td> <td>0.385</td> <td>0.429</td> <td>0.551</td> <td>0.563</td> <td>NA</td> </tr> <tr> <td>Task 4</td> <td>BookTest</td> <td>pre-trained</td> <td>0.452</td> <td>0.490</td> <td>0.545</td> <td>0.631</td> <td>0.986</td> <td>0.989</td> <td>1.000</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 4</td> <td>BookTest</td> <td>rand. init.</td> <td>0.032</td> <td>0.532</td> <td>0.556</td> <td>0.582</td> <td>0.846</td> <td>0.982</td> <td>0.993</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 4</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.323</td> <td>0.413</td> <td>0.596</td> <td>0.766</td> <td>0.946</td> <td>0.986</td> <td>0.992</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 4</td> <td>CNN+DM</td> <td>rand. init.</td> <td>0.234</td> <td>0.536</td> <td>0.554</td> <td>0.593</td> <td>0.926</td> <td>0.990</td> <td>0.986</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 5</td> <td>BookTest</td> <td>pre-trained</td> <td>0.601</td> <td>0.604</td> <td>0.632</td> <td>0.877</td> <td>0.983</td> <td>0.982</td> <td>0.991</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 5</td> <td>BookTest</td> <td>rand. init.</td> <td>0.013</td> <td>0.162</td> <td>0.295</td> <td>0.635</td> <td>0.964</td> <td>0.973</td> <td>0.989</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 5</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.448</td> <td>0.492</td> <td>0.581</td> <td>0.842</td> <td>0.969</td> <td>0.984</td> <td>0.989</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 5</td> <td>CNN+DM</td> <td>rand. init.</td> <td>0.185</td> <td>0.252</td> <td>0.350</td> <td>0.844</td> <td>0.982</td> <td>0.984</td> <td>0.988</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 11</td> <td>BookTest</td> <td>pre-trained</td> <td>0.334</td> <td>0.415</td> <td>0.620</td> <td>0.847</td> <td>0.986</td> <td>0.988</td> <td>0.998</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 11</td> <td>BookTest</td> <td>rand. init.</td> <td>0.008</td> <td>0.540</td> <td>0.692</td> <td>0.711</td> <td>0.922</td> <td>0.951</td> <td>0.974</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 11</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.119</td> <td>0.492</td> <td>0.671</td> <td>0.762</td> <td>0.820</td> <td>0.972</td> <td>0.977</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 11</td> <td>CNN+DM</td> <td>rand. init.</td> <td>0.207</td> <td>0.679</td> <td>0.737</td> <td>0.734</td> <td>0.853</td> <td>0.934</td> <td>0.980</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 12</td> <td>BookTest</td> <td>pre-trained</td> <td>0.307</td> <td>0.429</td> <td>0.694</td> <td>0.786</td> <td>0.988</td> <td>0.991</td> <td>0.999</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 12</td> <td>BookTest</td> <td>rand. init.</td> <td>0.006</td> <td>0.499</td> <td>0.705</td> <td>0.721</td> <td>0.917</td> <td>0.966</td> <td>0.962</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 12</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.236</td> <td>0.518</td> <td>0.650</td> <td>0.779</td> <td>0.866</td> <td>0.968</td> <td>0.970</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 12</td> <td>CNN+DM</td> <td>rand. init.</td> <td>0.009</td> <td>0.661</td> <td>0.765</td> <td>0.735</td> <td>0.855</td> <td>0.921</td> <td>0.965</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 13</td> <td>BookTest</td> <td>pre-trained</td> <td>0.330</td> <td>0.505</td> <td>0.793</td> <td>0.944</td> <td>0.959</td> <td>0.976</td> <td>0.998</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 13</td> <td>BookTest</td> <td>rand. 
init.</td> <td>0.004</td> <td>0.617</td> <td>0.920</td> <td>0.937</td> <td>0.950</td> <td>0.966</td> <td>0.992</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 13</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.114</td> <td>0.612</td> <td>0.830</td> <td>0.942</td> <td>0.949</td> <td>0.946</td> <td>0.975</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 13</td> <td>CNN+DM</td> <td>rand. init.</td> <td>0.094</td> <td>0.828</td> <td>0.941</td> <td>0.944</td> <td>0.951</td> <td>0.961</td> <td>0.971</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 14</td> <td>BookTest</td> <td>pre-trained</td> <td>0.270</td> <td>0.266</td> <td>0.273</td> <td>0.465</td> <td>0.775</td> <td>0.807</td> <td>0.896</td> <td>0.912</td> <td>NA</td> </tr> <tr> <td>Task 14</td> <td>BookTest</td> <td>rand. init.</td> <td>0.007</td> <td>0.228</td> <td>0.277</td> <td>0.328</td> <td>0.597</td> <td>0.675</td> <td>0.852</td> <td>0.905</td> <td>NA</td> </tr> <tr> <td>Task 14</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.280</td> <td>0.314</td> <td>0.351</td> <td>0.458</td> <td>0.677</td> <td>0.790</td> <td>0.840</td> <td>0.904</td> <td>NA</td> </tr> <tr> <td>Task 14</td> <td>CNN+DM</td> <td>rand. init.</td> <td>0.054</td> <td>0.247</td> <td>0.297</td> <td>0.337</td> <td>0.543</td> <td>0.788</td> <td>0.901</td> <td>0.929</td> <td>NA</td> </tr> <tr> <td>Task 15</td> <td>BookTest</td> <td>pre-trained</td> <td>0.085</td> <td>0.417</td> <td>0.436</td> <td>0.491</td> <td>0.544</td> <td>0.546</td> <td>0.689</td> <td>0.853</td> <td>NA</td> </tr> <tr> <td>Task 15</td> <td>BookTest</td> <td>rand. init.</td> <td>0.003</td> <td>0.414</td> <td>0.430</td> <td>0.496</td> <td>0.517</td> <td>0.523</td> <td>0.584</td> <td>0.834</td> <td>NA</td> </tr> <tr> <td>Task 15</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.563</td> <td>0.604</td> <td>0.591</td> <td>0.608</td> <td>0.611</td> <td>0.635</td> <td>0.644</td> <td>0.597</td> <td>NA</td> </tr> <tr> <td>Task 15</td> <td>CNN+DM</td> <td>rand. init.</td> <td>0.392</td> <td>0.469</td> <td>0.534</td> <td>0.587</td> <td>0.623</td> <td>0.630</td> <td>0.656</td> <td>0.658</td> <td>NA</td> </tr> <tr> <td>Task 16</td> <td>BookTest</td> <td>pre-trained</td> <td>0.036</td> <td>0.456</td> <td>0.451</td> <td>0.465</td> <td>0.469</td> <td>0.474</td> <td>0.528</td> <td>0.566</td> <td>NA</td> </tr> <tr> <td>Task 16</td> <td>BookTest</td> <td>rand. init.</td> <td>0.001</td> <td>0.363</td> <td>0.449</td> <td>0.460</td> <td>0.469</td> <td>0.475</td> <td>0.489</td> <td>0.519</td> <td>NA</td> </tr> <tr> <td>Task 16</td> <td>CNN+DM</td> <td>pre-trained</td> <td>0.444</td> <td>0.467</td> <td>0.468</td> <td>0.474</td> <td>0.480</td> <td>0.505</td> <td>0.519</td> <td>0.547</td> <td>NA</td> </tr> <tr> <td>Task 16</td> <td>CNN+DM</td> <td>rand. init.</td> <td>0.280</td> <td>0.428</td> <td>0.480</td> <td>0.476</td> <td>0.483</td> <td>0.489</td> <td>0.489</td> <td>0.496</td> <td>NA</td> </tr> </table> Table 5: Mean test accuracy for each combination of task, model type and target-adjustment set size. <table> <tr> <th rowspan="2">Task</th> <th rowspan="2">Pretrain. set</th> <th rowspan="2">Model</th> <th colspan="13">Target-adjustment set size</th> </tr> <tr> <th>0</th> <th>1</th> <th>10</th> <th>100</th> <th>500</th> <th>1000</th> <th>5000</th> <th>10000</th> <th>28174</th> </tr> <tr><td>SQuAD</td><td>BookTest</td><td>pre-trained</td><td>0.025</td><td>0.027</td><td>0.049</td><td>0.122</td><td>NA</td><td>0.245</td><td>NA</td><td>0.388</td><td>0.484</td></tr> <tr><td>SQuAD</td><td>BookTest</td><td>rand. 
init.</td><td>0.004</td><td>0.006</td><td>0.018</td><td>0.042</td><td>NA</td><td>0.107</td><td>NA</td><td>0.214</td><td>0.315</td></tr> <tr><td>Task 1</td><td>BookTest</td><td>pre-trained</td><td>0.356</td><td>0.383</td><td>0.459</td><td>0.870</td><td>0.992</td><td>0.995</td><td>0.999</td><td>NA</td><td>NA</td></tr> <tr><td>Task 1</td><td>BookTest</td><td>rand. init.</td><td>0.010</td><td>0.327</td><td>0.431</td><td>0.529</td><td>0.888</td><td>0.916</td><td>0.976</td><td>NA</td><td>NA</td></tr> <tr><td>Task 1</td><td>CNN+DM</td><td>pre-trained</td><td>0.295</td><td>0.385</td><td>0.519</td><td>0.689</td><td>0.969</td><td>0.985</td><td>0.990</td><td>NA</td><td>NA</td></tr> <tr><td>Task 1</td><td>CNN+DM</td><td>rand. init.</td><td>0.100</td><td>0.354</td><td>0.450</td><td>0.582</td><td>0.954</td><td>0.941</td><td>0.977</td><td>NA</td><td>NA</td></tr> <tr><td>Task 2</td><td>BookTest</td><td>pre-trained</td><td>0.206</td><td>0.295</td><td>0.318</td><td>0.339</td><td>0.398</td><td>0.410</td><td>0.755</td><td>0.783</td><td>NA</td></tr> <tr><td>Task 2</td><td>BookTest</td><td>rand. init.</td><td>0.003</td><td>0.225</td><td>0.290</td><td>0.332</td><td>0.358</td><td>0.361</td><td>0.528</td><td>0.645</td><td>NA</td></tr> <tr><td>Task 2</td><td>CNN+DM</td><td>pre-trained</td><td>0.177</td><td>0.265</td><td>0.288</td><td>0.359</td><td>0.410</td><td>0.398</td><td>0.539</td><td>0.586</td><td>NA</td></tr> <tr><td>Task 2</td><td>CNN+DM</td><td>rand. init.</td><td>0.005</td><td>0.280</td><td>0.320</td><td>0.380</td><td>0.371</td><td>0.396</td><td>0.478</td><td>0.469</td><td>NA</td></tr> <tr><td>Task 3</td><td>BookTest</td><td>pre-trained</td><td>0.159</td><td>0.192</td><td>0.227</td><td>0.314</td><td>0.440</td><td>0.508</td><td>0.759</td><td>0.857</td><td>NA</td></tr> <tr><td>Task 3</td><td>BookTest</td><td>rand. init.</td><td>0.005</td><td>0.135</td><td>0.182</td><td>0.219</td><td>0.370</td><td>0.419</td><td>0.542</td><td>0.482</td><td>NA</td></tr> <tr><td>Task 3</td><td>CNN+DM</td><td>pre-trained</td><td>0.164</td><td>0.213</td><td>0.222</td><td>0.303</td><td>0.450</td><td>0.489</td><td>0.585</td><td>0.687</td><td>NA</td></tr> <tr><td>Task 3</td><td>CNN+DM</td><td>rand. init.</td><td>0.001</td><td>0.175</td><td>0.227</td><td>0.272</td><td>0.385</td><td>0.429</td><td>0.551</td><td>0.563</td><td>NA</td></tr> <tr><td>Task 4</td><td>BookTest</td><td>pre-trained</td><td>0.452</td><td>0.490</td><td>0.545</td><td>0.631</td><td>0.986</td><td>0.989</td><td>1.000</td><td>NA</td><td>NA</td></tr> <tr><td>Task 4</td><td>BookTest</td><td>rand. init.</td><td>0.032</td><td>0.532</td><td>0.556</td><td>0.582</td><td>0.846</td><td>0.982</td><td>0.993</td><td>NA</td><td>NA</td></tr> <tr><td>Task 4</td><td>CNN+DM</td><td>pre-trained</td><td>0.323</td><td>0.413</td><td>0.596</td><td>0.766</td><td>0.946</td><td>0.986</td><td>0.992</td><td>NA</td><td>NA</td></tr> <tr><td>Task 4</td><td>CNN+DM</td><td>rand. init.</td><td>0.234</td><td>0.536</td><td>0.554</td><td>0.593</td><td>0.926</td><td>0.990</td><td>0.986</td><td>NA</td><td>NA</td></tr> <tr><td>Task 5</td><td>BookTest</td><td>pre-trained</td><td>0.601</td><td>0.604</td><td>0.632</td><td>0.877</td><td>0.983</td><td>0.982</td><td>0.991</td><td>NA</td><td>NA</td></tr>
<tr><td>Task 5</td><td>BookTest</td><td>rand. init.</td><td>0.013</td><td>0.162</td><td>0.295</td><td>0.635</td><td>0.964</td><td>0.973</td><td>0.989</td><td>NA</td><td>NA</td></tr> <tr><td>Task 5</td><td>CNN+DM</td><td>pre-trained</td><td>0.448</td><td>0.492</td><td>0.581</td><td>0.842</td><td>0.969</td><td>0.984</td><td>0.989</td><td>NA</td><td>NA</td></tr> <tr><td>Task 5</td><td>CNN+DM</td><td>rand. init.</td><td>0.185</td><td>0.252</td><td>0.350</td><td>0.844</td><td>0.982</td><td>0.984</td><td>0.988</td><td>NA</td><td>NA</td></tr> <tr><td>Task 11</td><td>BookTest</td><td>pre-trained</td><td>0.334</td><td>0.415</td><td>0.620</td><td>0.847</td><td>0.986</td><td>0.988</td><td>0.998</td><td>NA</td><td>NA</td></tr> <tr><td>Task 11</td><td>BookTest</td><td>rand. init.</td><td>0.008</td><td>0.540</td><td>0.692</td><td>0.711</td><td>0.922</td><td>0.951</td><td>0.974</td><td>NA</td><td>NA</td></tr> <tr><td>Task 11</td><td>CNN+DM</td><td>pre-trained</td><td>0.119</td><td>0.492</td><td>0.671</td><td>0.762</td><td>0.820</td><td>0.972</td><td>0.977</td><td>NA</td><td>NA</td></tr> <tr><td>Task 11</td><td>CNN+DM</td><td>rand. init.</td><td>0.207</td><td>0.679</td><td>0.737</td><td>0.734</td><td>0.853</td><td>0.934</td><td>0.980</td><td>NA</td><td>NA</td></tr> <tr><td>Task 12</td><td>BookTest</td><td>pre-trained</td><td>0.307</td><td>0.429</td><td>0.694</td><td>0.786</td><td>0.988</td><td>0.991</td><td>0.999</td><td>NA</td><td>NA</td></tr> <tr><td>Task 12</td><td>BookTest</td><td>rand. init.</td><td>0.006</td><td>0.499</td><td>0.705</td><td>0.721</td><td>0.917</td><td>0.966</td><td>0.962</td><td>NA</td><td>NA</td></tr> <tr><td>Task 12</td><td>CNN+DM</td><td>pre-trained</td><td>0.236</td><td>0.518</td><td>0.650</td><td>0.779</td><td>0.866</td><td>0.968</td><td>0.970</td><td>NA</td><td>NA</td></tr> <tr><td>Task 12</td><td>CNN+DM</td><td>rand. init.</td><td>0.009</td><td>0.661</td><td>0.765</td><td>0.735</td><td>0.855</td><td>0.921</td><td>0.965</td><td>NA</td><td>NA</td></tr> <tr><td>Task 13</td><td>BookTest</td><td>pre-trained</td><td>0.330</td><td>0.505</td><td>0.793</td><td>0.944</td><td>0.959</td><td>0.976</td><td>0.998</td><td>NA</td><td>NA</td></tr> <tr><td>Task 13</td><td>BookTest</td><td>rand. init.</td><td>0.004</td><td>0.617</td><td>0.920</td><td>0.937</td><td>0.950</td><td>0.966</td><td>0.992</td><td>NA</td><td>NA</td></tr> <tr><td>Task 13</td><td>CNN+DM</td><td>pre-trained</td><td>0.114</td><td>0.612</td><td>0.830</td><td>0.942</td><td>0.949</td><td>0.946</td><td>0.975</td><td>NA</td><td>NA</td></tr> <tr><td>Task 13</td><td>CNN+DM</td><td>rand. init.</td><td>0.094</td><td>0.828</td><td>0.941</td><td>0.944</td><td>0.951</td><td>0.961</td><td>0.971</td><td>NA</td><td>NA</td></tr> <tr><td>Task 14</td><td>BookTest</td><td>pre-trained</td><td>0.270</td><td>0.266</td><td>0.273</td><td>0.465</td><td>0.775</td><td>0.807</td><td>0.896</td><td>0.912</td><td>NA</td></tr> <tr><td>Task 14</td><td>BookTest</td><td>rand. init.</td><td>0.007</td><td>0.228</td><td>0.277</td><td>0.328</td><td>0.597</td><td>0.675</td><td>0.852</td><td>0.905</td><td>NA</td></tr> <tr><td>Task 14</td><td>CNN+DM</td><td>pre-trained</td><td>0.280</td><td>0.314</td><td>0.351</td><td>0.458</td><td>0.677</td><td>0.790</td><td>0.840</td><td>0.904</td><td>NA</td></tr>
<tr><td>Task 14</td><td>CNN+DM</td><td>rand. init.</td><td>0.054</td><td>0.247</td><td>0.297</td><td>0.337</td><td>0.543</td><td>0.788</td><td>0.901</td><td>0.929</td><td>NA</td></tr> <tr><td>Task 15</td><td>BookTest</td><td>pre-trained</td><td>0.085</td><td>0.417</td><td>0.436</td><td>0.491</td><td>0.544</td><td>0.546</td><td>0.689</td><td>0.853</td><td>NA</td></tr> <tr><td>Task 15</td><td>BookTest</td><td>rand. init.</td><td>0.003</td><td>0.414</td><td>0.430</td><td>0.496</td><td>0.517</td><td>0.523</td><td>0.584</td><td>0.834</td><td>NA</td></tr> <tr><td>Task 15</td><td>CNN+DM</td><td>pre-trained</td><td>0.563</td><td>0.604</td><td>0.591</td><td>0.608</td><td>0.611</td><td>0.635</td><td>0.644</td><td>0.597</td><td>NA</td></tr> <tr><td>Task 15</td><td>CNN+DM</td><td>rand. init.</td><td>0.392</td><td>0.469</td><td>0.534</td><td>0.587</td><td>0.623</td><td>0.630</td><td>0.656</td><td>0.658</td><td>NA</td></tr> <tr><td>Task 16</td><td>BookTest</td><td>pre-trained</td><td>0.036</td><td>0.456</td><td>0.451</td><td>0.465</td><td>0.469</td><td>0.474</td><td>0.528</td><td>0.566</td><td>NA</td></tr> <tr><td>Task 16</td><td>BookTest</td><td>rand. init.</td><td>0.001</td><td>0.363</td><td>0.449</td><td>0.460</td><td>0.469</td><td>0.475</td><td>0.489</td><td>0.519</td><td>NA</td></tr> <tr><td>Task 16</td><td>CNN+DM</td><td>pre-trained</td><td>0.444</td><td>0.467</td><td>0.468</td><td>0.474</td><td>0.480</td><td>0.505</td><td>0.519</td><td>0.547</td><td>NA</td></tr> <tr><td>Task 16</td><td>CNN+DM</td><td>rand. init.</td><td>0.280</td><td>0.428</td><td>0.480</td><td>0.476</td><td>0.483</td><td>0.489</td><td>0.489</td><td>0.496</td><td>NA</td></tr> </table>

Table 6: Standard deviation in accuracies for each combination of task, model type and target-adjustment set size.

<table> <tr> <th rowspan="2">Task</th> <th rowspan="2">Pretraining</th> <th colspan="9">Target-adjustment set size</th> </tr> <tr> <th>0</th> <th>1</th> <th>10</th> <th>100</th> <th>500</th> <th>1000</th> <th>5000</th> <th>10000</th> <th>28174</th> </tr> <tr> <td>SQuAD</td> <td>BookTest</td> <td><span style="background-color:green">1.01e-45</span></td> <td><span style="background-color:green">4.07e-05</span></td> <td><span style="background-color:green">7.40e-05</span></td> <td><span style="background-color:green">7.82e-08</span></td> <td>NA</td> <td><span style="background-color:green">5.17e-08</span></td> <td>NA</td> <td><span style="background-color:green">3.93e-08</span></td> <td><span style="background-color:green">8.52e-03</span></td> </tr> <tr> <td>Task 1</td> <td>BookTest</td> <td><span style="background-color:green">3.34e-83</span></td> <td><span style="background-color:green">1.81e-03</span></td> <td>1.33e-01</td> <td><span style="background-color:green">2.35e-19</span></td> <td><span style="background-color:green">9.41e-04</span></td> <td><span style="background-color:green">1.67e-02</span></td> <td>1.32e-01</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 2</td> <td>BookTest</td> <td><span style="background-color:green">1.24e-34</span></td> <td><span style="background-color:green">3.86e-07</span></td> <td><span style="background-color:green">7.29e-03</span></td> <td>2.59e-01</td> <td><span style="background-color:green">1.39e-08</span></td> <td><span style="background-color:green">2.63e-06</span></td> <td><span style="background-color:green">7.54e-09</span></td> <td>2.04e-01</td> <td>NA</td> </tr>
style="background-color:green">9.84e-55</span></td> <td><span style="background-color:green">1.27e-05</span></td> <td><span style="background-color:green">7.66e-03</span></td> <td><span style="background-color:green">1.48e-03</span></td> <td><span style="background-color:green">3.18e-04</span></td> <td><span style="background-color:green">2.18e-03</span></td> <td><span style="background-color:green">2.16e-04</span></td> <td>1.03e-01</td> <td>NA</td> </tr> <tr> <td>Task 4</td> <td>BookTest</td> <td><span style="background-color:green">7.25e-78</span></td> <td>9.50e-01</td> <td>9.71e-01</td> <td>1.04e-05</td> <td>6.38e-03</td> <td><span style="background-color:green">1.70e-02</span></td> <td>1.81e-02</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 5</td> <td>BookTest</td> <td><span style="background-color:green">6.55e-115</span></td> <td><span style="background-color:green">9.88e-22</span></td> <td><span style="background-color:green">8.87e-19</span></td> <td><span style="background-color:green">5.25e-05</span></td> <td><span style="background-color:green">3.66e-03</span></td> <td>8.61e-02</td> <td><span style="background-color:green">5.65e-03</span></td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 11</td> <td>BookTest</td> <td><span style="background-color:green">6.78e-152</span></td> <td>1.00e+00</td> <td>9.94e-01</td> <td>4.07e-09</td> <td>2.50e-04</td> <td><span style="background-color:green">2.28e-02</span></td> <td>6.37e-02</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 12</td> <td>BookTest</td> <td><span style="background-color:green">2.27e-90</span></td> <td>9.10e-01</td> <td>6.46e-01</td> <td><span style="background-color:green">1.89e-05</span></td> <td><span style="background-color:green">2.78e-04</span></td> <td><span style="background-color:green">1.43e-02</span></td> <td><span style="background-color:green">2.36e-02</span></td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 13</td> <td>BookTest</td> <td><span style="background-color:green">5.30e-91</span></td> <td>9.75e-01</td> <td>9.99e-01</td> <td><span style="background-color:green">2.88e-02</span></td> <td><span style="background-color:green">2.74e-02</span></td> <td>1.03e-01</td> <td>7.06e-02</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 14</td> <td>BookTest</td> <td><span style="background-color:green">1.97e-200</span></td> <td><span style="background-color:green">1.01e-03</span></td> <td>6.79e-01</td> <td><span style="background-color:green">2.22e-14</span></td> <td><span style="background-color:green">3.40e-05</span></td> <td><span style="background-color:green">2.93e-03</span></td> <td><span style="background-color:green">3.66e-06</span></td> <td>3.97e-01</td> <td>NA</td> </tr> <tr> <td>Task 15</td> <td>BookTest</td> <td><span style="background-color:green">3.64e-09</span></td> <td>4.75e-01</td> <td>4.12e-01</td> <td>6.70e-01</td> <td><span style="background-color:green">1.68e-03</span></td> <td><span style="background-color:green">3.70e-03</span></td> <td>1.03e-05</td> <td>4.54e-01</td> <td>NA</td> </tr> <tr> <td>Task 16</td> <td>BookTest</td> <td><span style="background-color:green">1.81e-05</span></td> <td><span style="background-color:green">8.28e-04</span></td> <td>4.38e-01</td> <td>2.72e-01</td> <td>4.89e-01</td> <td>5.71e-01</td> <td>7.40e-03</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 1</td> <td>CNN+DM</td> <td><span style="background-color:green">9.43e-09</span></td> <td>2.99e-01</td> <td>1.11e-01</td> <td>1.05e-01</td> <td>9.54e-02</td> <td>1.45e-01</td> <td><span 
style="background-color:green">3.97e-03</span></td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 2</td> <td>CNN+DM</td> <td><span style="background-color:green">9.38e-17</span></td> <td>6.93e-01</td> <td>9.02e-01</td> <td>9.15e-01</td> <td><span style="background-color:green">1.05e-03</span></td> <td>4.20e-01</td> <td><span style="background-color:green">2.64e-03</span></td> <td>8.49e-02</td> <td>NA</td> </tr> <tr> <td>Task 3</td> <td>CNN+DM</td> <td><span style="background-color:green">2.42e-16</span></td> <td><span style="background-color:green">4.95e-02</span></td> <td>6.30e-01</td> <td>1.75e-01</td> <td><span style="background-color:green">2.13e-03</span></td> <td><span style="background-color:green">6.59e-04</span></td> <td><span style="background-color:green">4.68e-02</span></td> <td>1.24e-01</td> <td>NA</td> </tr> <tr> <td>Task 4</td> <td>CNN+DM</td> <td><span style="background-color:green">5.84e-03</span></td> <td>9.70e-01</td> <td>1.37e-01</td> <td><span style="background-color:green">4.83e-03</span></td> <td>3.33e-01</td> <td>8.84e-01</td> <td>1.08e-01</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 5</td> <td>CNN+DM</td> <td><span style="background-color:green">1.17e-10</span></td> <td><span style="background-color:green">7.00e-03</span></td> <td><span style="background-color:green">7.93e-04</span></td> <td>5.20e-01</td> <td>9.70e-01</td> <td>5.66e-01</td> <td>1.83e-01</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 11</td> <td>CNN+DM</td> <td>1.00e+00</td> <td>9.84e-01</td> <td>9.73e-01</td> <td>2.58e-01</td> <td>7.17e-01</td> <td>1.45e-01</td> <td>6.95e-01</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 12</td> <td>CNN+DM</td> <td><span style="background-color:green">1.93e-14</span></td> <td>9.32e-01</td> <td>9.92e-01</td> <td><span style="background-color:green">2.57e-02</span></td> <td>4.06e-01</td> <td>6.65e-02</td> <td>2.09e-01</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 13</td> <td>CNN+DM</td> <td>8.69e-02</td> <td>9.61e-01</td> <td>9.72e-01</td> <td>9.89e-01</td> <td>6.22e-01</td> <td>9.44e-01</td> <td>2.83e-01</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>Task 14</td> <td>CNN+DM</td> <td><span style="background-color:green">2.17e-12</span></td> <td>6.64e-02</td> <td>1.11e-01</td> <td><span style="background-color:green">2.05e-02</span></td> <td><span style="background-color:green">3.66e-02</span></td> <td>4.52e-01</td> <td>9.10e-01</td> <td>8.24e-01</td> <td>NA</td> </tr> <tr> <td>Task 15</td> <td>CNN+DM</td> <td><span style="background-color:green">1.36e-52</span></td> <td><span style="background-color:green">5.30e-03</span></td> <td><span style="background-color:green">3.48e-02</span></td> <td>7.21e-02</td> <td>8.36e-01</td> <td>3.09e-01</td> <td>8.47e-01</td> <td>9.84e-01</td> <td>NA</td> </tr> <tr> <td>Task 16</td> <td>CNN+DM</td> <td><span style="background-color:green">6.39e-35</span></td> <td><span style="background-color:green">4.56e-02</span></td> <td>9.66e-01</td> <td>5.95e-01</td> <td>7.19e-01</td> <td><span style="background-color:green">4.09e-02</span></td> <td><span style="background-color:green">2.51e-02</span></td> <td><span style="background-color:green">2.22e-03</span></td> <td>NA</td> </tr> </table> Table 7: One-sided p-value whether the mean accuracy of pre-trained models is greater than the accuracy of the randomly initialized ones for each combination of task pre-training dataset. p-values below 0.05 are marked in green.
ABSTRACT Deep learning has proven useful on many NLP tasks including reading comprehension. However, it requires large amounts of training data which are not available in some domains of application. Hence we examine the possibility of using data-rich domains to pre-train models and then apply them in domains where training data are harder to get. Specifically, we train a neural-network-based model on two context-question-answer datasets, the BookTest and CNN/Daily Mail, and we monitor transfer to subsets of bAbI, a set of artificial tasks designed to test specific reasoning abilities, and of SQuAD, a question-answering dataset which is much closer to real-world applications. Our experiments show very limited transfer if the model is not shown any training examples from the target domain; however, the results are encouraging if the model is shown at least a few target-domain examples. Furthermore, we show that the effect of pre-training is not limited to word embeddings. 1 INTRODUCTION Machine intelligence has had some notable successes, however often in narrow domains which are sometimes of little practical use to humans – for instance games like chess (Campbell et al., 2002) or Go (Silver et al., 2016). If we aimed to build a general AI that would be able to efficiently assist humans in a wide range of settings, we would want it to have a much larger set of skills – among them would be an ability to understand human language, to perform common-sense reasoning and to be able to generalize its abilities to new situations like humans do. If we want to achieve this goal through Machine Learning, we need data to learn from – a lot of data if the task at hand is complex, which is the case for many useful tasks. One way to achieve wide applicability would be to provide training data for each specific task we would like the machine to perform. However it is unrealistic to obtain a sufficient amount of training data for some domains – it may for instance require expensive human annotation, or all domains of application may be difficult to predict in advance – while the amount of training data in other domains is practically unlimited (e.g. in language modelling or cloze-style question answering). The way to bridge this gap – and to achieve the aforementioned adaptability – is transfer learning (Pan & Yang, 2010) and the closely related semi-supervised learning (Zhu & Goldberg, 2009), which allow the system to acquire a set of skills on domains where data are abundant and then use these skills to succeed on previously unseen domains. Despite how important generalization is for general AI, a lot of research keeps focusing on solving narrow tasks. In this paper we would like to examine transfer of learnt skills and knowledge within the domain of text comprehension, a field that has lately attracted a lot of attention within the NLP community (Hermann et al., 2015; Hill et al., 2015; Kobayashi et al., 2016; Kadlec et al., 2016b; Chen et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Trischler et al., 2016; Weissenborn, 2016; Cui et al., 2016b; Li et al., 2016; Shen et al., 2016). Specifically, we would like to address the following research questions: 1. Whether we could train models on natural-language tasks where data are abundant and transfer the learnt skills to tasks where in-domain training data may be difficult to obtain.
We will first look into what reasoning abilities a model learns from two large-scale reading-comprehension datasets using artificial tasks, and then check whether it can transfer its skills to real-world tasks. Spoiler: both these transfers are very poor if we allow no training at all on the target task. 2. Whether pre-training on large-scale datasets does help if we allow the model to train on a small sample of examples from the target tasks. Here the results are much more positive. 3. Finally we examine whether the benefits of pre-training are concentrated in any particular part of the model – namely the word-embedding part or the context encoder (the reasoning part). It turns out that pre-training is useful for both components. Although our results do not improve the current state of the art on any of the studied tasks, they show a clear positive effect of large-dataset pre-training on the performance of our baseline machine-learning model. Previous studies of transfer learning and semi-supervised learning in NLP focused on text classification (Dai & Le, 2015; Mou et al., 2016) and various parsing tasks (Collobert et al., 2011; Hashimoto et al., 2016). To our knowledge this work is the first study of transfer learning in reading comprehension, and we hope it will stimulate further work in this important area. We will first briefly introduce the datasets we will be using on the pre-training and target sides, then our baseline model, and afterwards in turn describe the method and results of each of the three experiments. 2 DATASETS 2.1 PRE-TRAINING DATASETS We have mentioned that for the model pre-training we would want to use a task where training data are abundant. An example of such a task is context-dependent cloze-style question answering, since the training data for this task can be generated automatically from a suitable corpus. We will use two such pre-training datasets in our experiments: the BookTest (Bajgar et al., 2016) and the CNN/Daily Mail (CNN/DM) news dataset (Hermann et al., 2015). The task associated with both datasets is to answer a cloze-style question (i.e. fill in a blank in a sentence) the answer to which needs to be inferred from a context document provided with the question. 2.1.1 BOOKTEST In the BookTest dataset, the context document is formed from 20 consecutive sentences from a book. The question is then formed by omitting a common noun or a named entity from the subsequent 21st sentence. Among datasets of this kind, the BookTest is one of the largest, with more than 14 million training examples coming from 3,555 copyright-free books available thanks to Project Gutenberg. 2.1.2 CNN/DAILY MAIL In the CNN/DM dataset the context document is formed from a news article while the cloze-style question is formed by removing a named entity from one of the short summary sentences which often appear at the top of the article. To stop the model from using world knowledge from outside the context article (and hence truly test its comprehension of the article), all named entities were replaced by anonymous tags, which are further shuffled for each example. This may make the comprehension more difficult; however, since the answer is always one of the anonymized entities, it also reduces the number of possible answers, making guessing easier. 2.2 TARGET DATASETS 2.2.1 bAbI The first target dataset is the set of bAbI tasks (Weston et al., 2016) – artificial tasks each of which is designed to test a specific kind of reasoning.
This toy dataset will allow us to observe what particular skills the model may be learning from each of the three training datasets. For our experiments we will be using an architecture designed to select one word from the context document as the answer. Hence we have selected Tasks 1, 2, 3, 4, 5, 11, 12, 13, 14 and 16, which fulfill this requirement, and added Task 15, which required a slight modification. Furthermore, because both pre-training datasets are cloze-style, we also converted the bAbI task questions into cloze style (e.g. “Where is John?” to “John is in the XXXXX.”). For the models pre-trained on CNN/DM we also anonymized the tasks in a way similar to the pre-training dataset – i.e. we replaced all names of characters, and also all words that can appear as answers for the given task, by anonymous tags in the style of CNN/DM. This gives even models that have not seen any training examples from the target domain a chance to answer the questions. Full details about these alterations can be found in Appendix A. 2.2.2 SQuAD Secondly, we will look at transfer to the SQuAD dataset (Rajpurkar et al., 2016); here the associated task may already be useful in the real world. Although cloze-style questions have the huge advantage of being automatically generable from a suitable corpus – the path taken by CNN/DM and the BookTest – in practice humans would use a proper question, not its cloze-style substitute. This brings us to the need for transfer from the data-rich cloze-style training to the domain of proper questions, where data are much scarcer due to the necessary human annotation. The SQuAD dataset is a great target dataset to use for this. As opposed to the bAbI tasks, the goal of this dataset is actually a problem whose solution would be useful to humans – answering natural questions based on a natural-language encyclopedic knowledge base. For our experiments we selected only a subset of the SQuAD training and development examples where the answer is a single word, since this is an inherent assumption of our machine learning model. This way we extracted 28,346 training examples out of the original 100,000 examples and 3,233 development examples out of 10,570. 3 MACHINE LEARNING MODEL: AS READER We perform our experiments using the Attention Sum Reader (AS Reader) (Kadlec et al., 2016b) model. The AS Reader is simple to implement while achieving strong performance on several text comprehension tasks (Kadlec et al., 2016b; Bajgar et al., 2016; Chu et al., 2016). Since the AS Reader is a building block of many recent text-comprehension models (Trischler et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Cui et al., 2016a;b; Shen et al., 2016; Munkhdalai & Yu, 2016), it is a good representative of current research in this field. A high-level structure of the AS Reader is shown in Figure 1. The words from the document and the question are first converted into vector embeddings using a look-up matrix. The document is then read by a bidirectional Gated Recurrent Unit (GRU) network (Cho et al., 2014). A concatenation of the hidden states of the forward and backward GRUs at each word is then used as a contextual embedding of this word, intuitively representing the context in which the word is appearing. We can also understand it as representing the set of questions to which this word may be an answer. Similarly the question is read by a bidirectional GRU, but in this case only the final hidden states are concatenated to form the question embedding.
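To make the architecture concrete, here is a minimal NumPy sketch of the AS Reader's prediction step, covering the embeddings just described together with the attention-sum step detailed in the next paragraph. The `bi_gru_*` encoder helpers (assumed to return all concatenated forward/backward hidden states, shape `(seq_len, 2 * hidden_dim)`) and all other names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def as_reader_predict(doc_ids, query_ids, embed, bi_gru_doc, bi_gru_query):
    # Contextual embedding of each document word: concatenated
    # forward/backward GRU states at that position.
    ctx = bi_gru_doc(embed[doc_ids])            # (doc_len, 2h)

    # Question embedding: final forward state (last position) and
    # final backward state (first position), concatenated.
    q_states = bi_gru_query(embed[query_ids])   # (query_len, 2h)
    h = q_states.shape[1] // 2
    q_emb = np.concatenate([q_states[-1, :h], q_states[0, h:]])

    # Attention over document words: dot product with the question
    # embedding, normalized by a softmax over positions.
    att = softmax(ctx @ q_emb)

    # Attention-sum: accumulate attention over all occurrences of
    # each candidate word; the largest total wins.
    scores = {}
    for w, a in zip(doc_ids, att):
        scores[w] = scores.get(w, 0.0) + float(a)
    return max(scores, key=scores.get)
```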
The attention over each word in the context is then calculated as the dot product of its contextual embedding with the question embedding. This attention is then normalized by the softmax function and summed across all occurrences of each answer candidate. The candidate with the most accumulated attention is selected as the final answer. For a more detailed description of the model, including equations, see Kadlec et al. (2016b). Figure 1: Structure of the AS Reader model. 4 EXPERIMENTS: TRANSFER LEARNING IN TEXT COMPREHENSION Now let us turn in more detail to the three kinds of experiments that we performed. 4.1 PRE-TRAINED WITHOUT TARGET ADJUSTMENT In the first experiment we tested how a model trained on one of the large-scale pre-training datasets performs on the bAbI tasks without any opportunity to train on bAbI. Since the BookTest and CNN/DM tasks involve only cloze-style questions, we cannot expect a model trained on them to answer natural questions. Hence we did not study the transfer to SQuAD in this case, only the transfer to the (cloze-converted) bAbI tasks. 4.1.1 METHOD First we tested how the AS Reader architecture (Kadlec et al., 2016b) handles the tasks if trained directly on the bAbI training data for each task. Then we tested the degree of transfer from the BookTest and CNN/DM data to the 11 selected bAbI tasks. In the first part of the experiment we trained a separate instance of the AS Reader on the 10,000-example version of the bAbI training data for each of the 11 tasks (for more details see Appendix B.1). On 8 of them the architecture was able to learn the task with an accuracy of at least 95% (results for each task can be found in Table 4 in Appendix C). Hence, given appropriate training, the AS Reader is capable of the reasoning needed to solve most of the selected bAbI tasks. Now that we know the AS Reader is powerful enough to learn the target tasks, we can turn to transfer from the two large-scale datasets. The main part of this first experiment was then straightforward: we pre-trained multiple models on the BookTest and CNN/DM datasets and then simply evaluated them on the test datasets of the 11 selected bAbI tasks. 4.1.2 RESULTS Table 1 summarizes the results of this experiment. Both the models trained on the BookTest and those trained on the CNN/DM dataset perform quite poorly on bAbI, achieving much lower accuracy than the models trained directly on each individual bAbI task.¹ ¹It should be noted that there are several machine learning models that perform better than the AS Reader in the 10k weakly supervised setting, e.g. Sukhbaatar et al. (2015); Xiong et al. (2016); Graves et al. (2016); however, they often need significant fine-tuning, whereas we trained a plain AS Reader model without any modifications. Hyperparameter and feature fine-tuning could probably further increase its performance on individual tasks; however, that goes directly against the idea of generality that is at the heart of this work. For comparison with the state of the art we include results of DMN+ (Xiong et al., 2016) in Table 1, which had the best average performance over the original 20 tasks. Table 1: The mean performance across 11 bAbI tasks. The first two columns show a random baseline² and a baseline that selects the most frequent word from the context which also appears as an answer in the training data for the task.
The following three columns show the performance of the AS Reader trained on different datasets; the last column shows the results of DMN+ (Xiong et al., 2016), the state-of-the-art model on the bAbI 10k dataset. For more detailed results listing per-task accuracies see Appendix C. <table> <tr> <th rowspan="2">Model</th> <th rowspan="2">Rnd.</th> <th rowspan="2">Most freq. cand.</th> <th colspan="3">AS Reader</th> <th>DMN+</th> </tr> <tr> <th>BookTest 14M</th> <th>CNN/DM 1.2M</th> <th>bAbI 10k</th> <th>bAbI 10k</th> </tr> <tr> <td>bAbI mean (11 tasks)</td> <td>6.1</td> <td>29.9</td> <td>34.8</td> <td>38.1</td> <td>92.7</td> <td>95.7</td> </tr> </table> However there is some transfer between the tasks, since the AS Reader trained on either the BookTest or CNN/DM outperforms a random baseline² and even an improved baseline which selects the most frequent word from the context that also appears as an answer in the training data for this task. The results also show that the models trained on CNN/DM perform somewhat better on most tasks than the BookTest models. This may be due to the fact that the bAbI tasks generally require the model to summarize information from the context document, which is also what the CNN/DM dataset is testing. On the other hand, the BookTest requires prediction of a possible continuation of a story, where the required kind of reasoning is much less clear but certainly different from pure summarization. Another explanation for the better performance of the CNN/DM models might be that they solve a slightly simpler task, since the candidate answers were already pre-selected in the entity anonymization step. Readers interested in how the training-dataset size affects this kind of transfer can check Kadlec et al. (2016a), where we show that the target-task performance is a bit better if we use the large BookTest as opposed to its smaller subset, the Children’s Book Test (CBT) (Hill et al., 2015). The conclusion from this experiment is that the skills learned from the two large-scale datasets generalize surprisingly poorly to even simple toy tasks. This may make us ask whether most teams’ focus on solving narrow tasks is truly beneficial if the skills learnt on these tasks are hard to apply elsewhere. However, it also brings us to our next experiment, where we try to provide some help to the struggling pre-trained models. 4.2 PRE-TRAINED WITH TARGET ADJUSTMENT After showing that the skills learnt from the BookTest and CNN/DM datasets are by themselves insufficient for solving the toy tasks, the next natural question is whether they are useful if helped by training on a small sample of examples from the target task. We call this additional phase of training target adjustment. For this experiment we again use the bAbI tasks; however, we also test transfer to a subset of the SQuAD dataset, which is much closer to real-world natural-language question answering. The results presented in this and the following section are based on training 3701 model instances. 4.2.1 METHOD Common to the bAbI and SQuAD datasets. In this experiment we started with a pre-trained model which we used in the previous experiment. However, after it finished training on one of the large pre-training datasets, we allowed it to train on a subset of training examples from the target dataset. We tried subsets of various sizes, ranging from a single example to thousands.
We tried training four different pre-trained models and also, for comparison, four randomly-initialized models with the same hyperparameters (see Appendix B.2 for details). The experiment with each task-model pair was run on 4 different data samples of each size, randomly drawn from the task's training dataset, to account for variations between these random samples – which may be substantial given the small sample size.\footnote{We are planning to release the split training datasets soon.} ²The random baseline selects uniformly at random among all unique words contained in the context document. Figure 2: Sub-figure (a) shows the average across the 11 bAbI tasks of the best-validation model’s test accuracy. (b) shows the test accuracy on SQuAD of each model we trained (the points); the lines join the accuracies of the best-validation models for each training size. **bAbI.** For each of these models we observed the test accuracy at the best-validation epoch and compared this number between the randomly initialized and pre-trained models. Validation was done using 100 examples which were set aside from the task’s original 10k training data.\footnote{The other models trained on the full 10k dataset usually use 1000 validation examples (Sukhbaatar et al., 2015; Xiong et al., 2016); however, we wanted to focus on the low-data regime, thus we used 10 times fewer examples.} We perform the experiment with models pre-trained on the BookTest and also on CNN/DM. **SQuAD subset.** In the SQuAD experiment, we trained the model on a subset of the original training dataset where answers were only single words, and on its sub-subsets. We report the best-validation accuracy on a development set filtered in the same way. This experiment was performed only with the models pre-trained on the BookTest. 4.2.2 RESULTS The results of these experiments are summarized in Figures 2 and 3. Figure 3: Example of 3 bAbI tasks where pre-training seems to help. Note that the task may be easier for the CNN/DM models due to answer anonymization, which restricts the choice of possible answers. bAbI. Sub-figure 2a shows the mean test accuracy of the models that achieved the best validation result for each single task. The results for both the BookTest and CNN/DM experiments confirm the positive effect of pre-training compared to the randomly initialized baseline. Figure 3 shows performance on selected bAbI tasks where pre-training has a clearly positive effect; such a plot for each of the target tasks is provided in Appendix C.2 (Figure 4). Note that the CNN/DM models cannot be directly compared to the BookTest results due to the entity anonymization, which seems to simplify the task when the model is trained on smaller datasets. Since our evaluation methodology with different training set sizes is novel, we can compare our result only to MemN2N (Sukhbaatar et al., 2015) trained on a 1k dataset, the only weakly supervised model that reports accuracy when trained on less than 10k examples. MemN2N achieves an average accuracy of 93.2% on the eleven selected tasks.⁵ This is substantially better than both our random baseline (78.0%) and the BookTest-pre-trained model (79.5%); however, our model is not tuned in any way towards this particular task. One important conceptual difference is that the AS Reader processes the whole context as one sequence of words, whereas MemN2N receives the context split into single sentences, which simplifies the task for the network. SQuAD subset.
The results of the SQuAD experiment also confirm the positive effect of pre-training; see Sub-figure 2b. For now, compare just the lines showing the performance of the fully pre-trained model and the randomly initialized model – the meaning of the remaining two lines will become clear in the next section. More detailed statistics about the results of this experiment can be found in Appendix D. We should note that the performance of our model is not competitive with the state-of-the-art models on this dataset. For instance, the DCR model (Yu et al., 2016) trained on our SQuAD subset achieves a validation accuracy of 74.9% in this task, which is better than our randomly initialized (35.4%) and pre-trained (51.6%) models.⁶ However, the DCR model is designed specifically for the SQuAD task; for instance, it utilizes features that are not used by our model. 4.3 PARTIALLY PRE-TRAINED MODEL Since our previous experiment confirmed the positive effect of pre-training if followed by target-domain adjustment, we wondered which part of the model contains the knowledge transferable to new domains. To examine this we performed the following experiment. 4.3.1 METHOD Our machine learning model, the AS Reader, consists of two main parts: the word-embedding look-up and the bidirectional GRUs used to encode the document and question (see Figure 1). Therefore a natural question was what the contribution of each of these parts is. To test this we created two models out of each pre-trained model used in the previous experiment. The first model variant uses the pre-trained word embeddings from the original model while the GRU encoders are randomly initialized. We say that this model has pre-trained embeddings. The second model variant uses the opposite setting, where the word embeddings are randomly initialized while the encoders are taken from a pre-trained model. We call this pre-trained encoders. bAbI. For this experiment we selected only a subset of tasks with a training set of 100 examples where there was a significant difference in accuracy between the randomly-initialized and pre-trained models. For evaluation we use the same methodology as in the previous experiment; that is, we report the accuracy of the best-validation model averaged over 4 training splits. SQuAD subset. We evaluated both model variants on all training sets from the previous SQuAD experiment using the same methodology. ⁵MemN2N trained on each single task with PE LS RN features; see Sukhbaatar et al. (2015) for details. ⁶We would like to thank Yu et al. (2016) for training their system on our dataset. Table 2: The effect of pre-training different components of the model for selected tasks. The first row shows the performance (average test accuracy across all trained model instances in each category) of a randomly initialized baseline model. The following three rows show the increase in accuracy (measured in percent absolute) when the model is initialized with weights pre-trained on the BookTest. The last line shows results for models initialized with Google News word2vec word embeddings (Mikolov et al., 2013).
<table> <tr> <th rowspan="2">Model variant</th> <th colspan="4">bAbI task (100 ex.)</th> <th rowspan="2">SQuAD (28k ex.)</th> </tr> <tr> <th>1.</th> <th>5.</th> <th>11.</th> <th>14.</th> </tr> <tr> <td>Random init</td> <td>53%</td> <td>66%</td> <td>71%</td> <td>33%</td> <td>31%</td> </tr> <tr> <td>△ Pre-trained encoders</td> <td>+6</td> <td>+25</td> <td>+4</td> <td>+2</td> <td>+4</td> </tr> <tr> <td>△ Pre-trained embeddings</td> <td>+17</td> <td>+6</td> <td>+8</td> <td>+8</td> <td>+10</td> </tr> <tr> <td>△ Pre-trained full</td> <td>+34</td> <td>+22</td> <td>+14</td> <td>+13</td> <td>+17</td> </tr> <tr> <td>△ Pre-trained word2vec</td> <td>-2</td> <td>+5</td> <td>+1</td> <td>-1</td> <td>+5</td> </tr> </table> 4.3.2 RESULTS bAbI. Table 2 shows the improvement of the pre-trained models over a randomly initialized baseline. In most cases (all except Task 5) the fully pre-trained model achieved the best accuracy. SQuAD subset. The accuracies of the four model variants are plotted in Figure 2b together with the results of the previous SQuAD experiment. The graph shows that both pre-trained embeddings and pre-trained encoders alone improve performance over the randomly initialized baseline; however, the fully pre-trained model is always the best. The overall result of this experiment is that both pre-training of the word embeddings and pre-training of the encoder parameters are important, since the fully pre-trained model outperforms both partially pre-trained variants. 5 CONCLUSION Our experiments show that transfer from two large cloze-style question-answering datasets to our two target tasks is surprisingly poor if the models aren’t provided with any examples from the target domain. However, we show that pre-trained models perform significantly better than a randomly initialized model if they are shown at least a few training examples from the target domain. The usefulness of pre-trained word embeddings is well known in the NLP community; however, we show that the power of our pre-trained model does not lie just in the embeddings. This suggests that once the text-comprehension community agrees on a sufficiently versatile model, much larger parts of the model could start being reused than just the word embeddings. The generalization of skills from a training domain to new tasks is an important ingredient of any system we would want to call intelligent. This work is an early step in exploring this direction.
reject
Reject
4.333333
8bb6c52a80fb44a8260830683e86e74d2f317fad
iclr
2,017
MULTI-LABEL LEARNING WITH SEMANTIC EMBEDDINGS Liping Jing, MiaoMiao Cheng & Liu Yang Beijing Key Lab of Traffic Data Analysis and Mining Beijing Jiaotong University Beijing, China, 100044 {lpjing,15112085,11112191}@bjtu.edu.cn Alex Gittens & Michael W. Mahoney ICSI and Department of Statistics, University of California at Berkeley Berkeley, CA, 94704 gittens@icsi.berkeley.edu, mmahoney@stat.berkeley.edu ABSTRACT Multi-label learning aims to automatically assign to an instance (e.g., an image or a document) the most relevant subset of labels from a large set of possible labels. The main challenge is to maintain accurate predictions while scaling efficiently on data sets with extremely large label sets and many training data points. We propose a simple but effective neural net approach, the Semantic Embedding Model (SEM), that models the labels for an instance as draws from a multinomial distribution parametrized by nonlinear functions of the instance features. A Gauss-Seidel mini-batch adaptive gradient descent algorithm is used to fit the model. To handle extremely large label sets, we propose and experimentally validate the efficacy of fitting randomly chosen marginal label distributions. Experimental results on eight real-world data sets show that SEM garners significant performance gains over existing methods. In particular, we compare SEM to four recent state-of-the-art algorithms (NNML, BMLPL, REmbed, and SLEEC) and find that SEM uniformly outperforms these algorithms in several widely used evaluation metrics while requiring significantly less training time. 1 INTRODUCTION The multi-label learning problem is to learn to predict potentially multiple relevant labels given an instance. Instances that have multiple labels naturally occur in many application domains, including multimedia information retrieval, tag recommendation, semantic scene classification, query categorization, gene function prediction, medical diagnosis, drug discovery, and marketing. A popular approach to the multi-label learning problem is to embed the labels in a low-dimensional latent space via linear or local non-linear embeddings. The approach of Hsu et al. (2009) projects the label vectors to a random low-dimensional space, fits a regression model in this space, then projects these predictions back to the original label space. Balasubramanian & Lebanon (2012) use a sparsity-regularized least squares reconstruction objective to select a small set of landmark labels that are used to predict the remaining labels. Bi & Kwok (2013) take a similar approach, with a greatly decreased computational cost, by posing the problem of selecting the landmark labels as one of column subset selection and adopting the leverage score sampling approach of Boutsidis et al. (2009). Recently, Yu et al. (2014) and Jing et al. (2015) propose using trace norm regularization to identify a low-dimensional representation of the original large label space. Mineiro & Karampatziakis (2015) use randomized dimensionality reduction to learn a low-dimensional embedding that explicitly captures correlations between the instance features and their labels. These approaches, like other linear embedding methods, assume that the label matrix is low-rank. However, the label matrix in most applications of multi-label learning is a sparse binary matrix, and thus is extremely likely to violate this low-rank assumption (Bhatia et al., 2015).
Rather than working with the original label and feature matrices, some methods work instead with label or feature similarity matrices, and seek to preserve the local structure of the data in the learned low-dimensional latent space. Tai & Lin (2010) use PCA on the label covariance matrix to extract a low-dimensional latent space for labels, and Chen & Lin (2012) extend this method to integrate feature information. Lin et al. (2014) apply PCA to a similarity matrix constructed using both label and feature information; this approach is time-consuming as it requires computing a large similarity matrix. Nam et al. (2014) introduce a neural network model to capture non-linear relationships between the input features and the labels. However, this approach is computationally infeasible when the number of possible labels is large. Similarly, Cissé et al. (2016) show that a deep learning approach built on top of an informative partitioning of the label space gives good performance; the scalability of this method was not characterized. Prabhu & Varma (2014) propose a method to efficiently train a classification tree by minimizing the Normalized Discounted Cumulative Gain. Rai et al. (2015) assume that the label vectors are generated by sampling from a weighted combination of label topics, where the mixture coefficients are determined by the instance features. Bhatia et al. (2015) propose a multi-phase algorithm (SLEEC) that first clusters the instances into a number of relatively small groups, learns label embeddings for each group via an SVD, and then trains linear regressors from the input features to the latent label factors for each group. SLEEC empirically outperforms previous state-of-the-art multi-label classifiers, but the label embedding in each group is learned from a nearest neighbor graph that is constructed solely from labelling information, ignoring the available feature matrix; the feature matrix has been shown repeatedly to be a source of useful information for label embedding (Chen & Lin 2012; Lin et al. 2014; Yu et al. 2014; Ping et al. 2015). The contribution of this paper is a scalable, accurate, and simple neural network approach to multi-label learning. Experiments establish that our method is faster and more accurate than SLEEC, the current state-of-the-art scalable algorithm. Notation: In the sequel, n is the number of training instances, c is the cardinality of the set of possible labels, d is the dimensionality of the feature vectors, and r is the dimension of the learned latent space. The matrix \( \mathbf{X} \in \mathbb{R}^{n \times d} \) contains the instance features, and \( \mathbf{Y} \in \{0,1\}^{n \times c} \) indicates the labels assigned to each instance. We denote the number of observed labels for instance i with \( \ell_i = \sum_{k=1}^c y_{ik} \). The notations \( \mathbf{A}_{i\cdot} \) and \( \mathbf{A}_{\cdot j} \) respectively refer to the ith row and jth column of the matrix \( \mathbf{A} \). Unless otherwise specified, the notation \( f(\mathbf{A}) \) denotes the elementwise application of an arbitrary function f to the entries of \( \mathbf{A} \), so for example \( \exp(\mathbf{A})_{ij} = \exp(a_{ij}) \). 2 THE SEMANTIC EMBEDDING MODEL Our Semantic Embedding Model (SEM) assumes that the underlying parameters determining the observed labels are low-rank, rather than that the observed label matrix is itself low-rank, and it uses a nonlinear model to fit the probability distributions over the labels, conditioned on the instance features.
SEM models the i-th row of \( \mathbf{Y} \) as the result of \( \ell_i \) draws from a multinomial distribution: \[ \mathbf{Y}_i \sim \text{Multinomial}(\ell_i; \mathbf{P}_i), \quad \text{where } \mathbf{P} = \left[ \frac{\exp(h_{ij})}{\sum_{k=1}^c \exp(h_{ik})} \right]_{i=1,\ldots,n, j=1,\ldots,c}. \] (1) The parameter matrix \( \mathbf{H} = \mathbf{U} \mathbf{V}^T + 1_n \mathbf{b}^T \) is the sum of label priors \( \mathbf{b} \in \mathbb{R}^c \) and the product of explanatory latent factors associated with the instances (\( \mathbf{U} \in \mathbb{R}^{n \times r} \)) and the labels (\( \mathbf{V} \in \mathbb{R}^{c \times r} \)). Further, we allow the latent factors associated with each instance to be a nonlinear function of the features associated with that instance, \( \mathbf{U} = f(\mathbf{X}, \mathbf{W}) \) for some \( \mathbf{W} \) to be learned. We note that if \( f(\mathbf{X}, \mathbf{W}) = \mathbf{X} \mathbf{W} \), SEM could be viewed as fitting a Bayesian Exponential Family PCA (Mohamed et al. 2009). However, throughout this paper we take \( f(\mathbf{X}, \mathbf{W}) = \sigma(\mathbf{X} \mathbf{W}) \), where \( \sigma(\mathbf{X}) = (1 + \exp(-\mathbf{X}))^{-1} \) denotes the elementwise application of the sigmoid function, as we find this gives good results; with this choice, SEM is more naturally viewed as a neural network model. We fit the SEM parameters by maximizing the likelihood of the observed labels. This is equivalent to minimizing the sum of the KL divergences between the empirical label distributions for each instance and the label distributions predicted by the model (Pawitan 2001). Accordingly, we define the empirical label distribution matrix \( \mathbf{G} \), whose ith row satisfies \( \mathbf{G}_i = \mathbf{Y}_i / \ell_i \), then minimize the row-wise Kullback-Leibler distance (Yang et al., 2011) between \( \mathbf{G} \) and \( \mathbf{P} \): \[ J_{\mathbf{G}|\mathbf{P}} = \sum_{i=1}^n \sum_{j=1}^c G_{ij} \log \frac{G_{ij}}{P_{ij}} \simeq - \sum_{i=1}^n \sum_{j=1}^c G_{ij} \log P_{ij}. \] (2) Recalling that \[ P_{ij} = \frac{\exp(h_{ij})}{\sum_{k=1}^c \exp(h_{ik})} = \frac{\exp((\sigma(\mathbf{XW})\mathbf{V}^T)_{ij} + b_j)}{\sum_{k=1}^c \exp((\sigma(\mathbf{XW})\mathbf{V}^T)_{ik} + b_k)}, \] some algebraic manipulations give the final objective \[ J(\mathbf{W}, \mathbf{V}, \mathbf{b}) = J_{\mathbf{G}|\mathbf{P}} = - \sum_{i=1}^n \sum_{j=1}^c G_{ij} \log \frac{\exp(\sigma(\mathbf{XW})_i (\mathbf{V}^T)_j + b_j)}{\sum_{k=1}^c \exp(\sigma(\mathbf{XW})_i (\mathbf{V}^T)_k + b_k)} \] \[ = - \sum_{i=1}^n \sum_{j=1}^c G_{ij} (\sigma(\mathbf{XW})_i (\mathbf{V}^T)_j + b_j) + \sum_{i=1}^n \log \left( \sum_{k=1}^c \exp(\sigma(\mathbf{XW})_i (\mathbf{V}^T)_k + b_k) \right) \] \[ = - \mathrm{Tr}(\mathbf{G} (\sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T)^T) + 1_n^T \log \left( \exp(\sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T)1_c \right). \] (3) Thus the SEM parameters are learned by solving the optimization problem \[ \min_{\mathbf{W}, \mathbf{V}, \mathbf{b}} J(\mathbf{W}, \mathbf{V}, \mathbf{b}). \] (4) Here \( \mathbf{V} \in \mathbb{R}^{c \times r} \) are the representations of the labels in a latent semantic space, \( \mathbf{W} \in \mathbb{R}^{d \times r} \) controls the nonlinear mapping from the instance features to the same semantic space, and the offsets \( \mathbf{b} \in \mathbb{R}^c \) allow for label-specific offsets in the mapping from the semantic space to the log probabilities. 3 MODEL FITTING The optimization problem (4) is non-convex.
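Before turning to the fitting procedure, a minimal NumPy sketch of the forward model and of objective (3) may be a useful reference. Shapes and names are illustrative assumptions, and the constant \( \sum_{ij} G_{ij} \log G_{ij} \) is dropped as in (2).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sem_forward(X, W, V, b):
    # U = f(X, W) = sigma(XW): latent factors of the instances (n x r).
    U = sigmoid(X @ W)
    # H = U V^T + 1_n b^T: unnormalized log-probabilities (n x c).
    H = U @ V.T + b                       # b broadcasts over rows
    H -= H.max(axis=1, keepdims=True)     # numerical stability
    expH = np.exp(H)
    return expH / expH.sum(axis=1, keepdims=True)   # P: row-wise softmax

def sem_objective(X, Y, W, V, b):
    # G: empirical label distributions, G_i = Y_i / ell_i.
    G = Y / Y.sum(axis=1, keepdims=True)
    P = sem_forward(X, W, V, b)
    # Objective (3): negative log-likelihood of the observed labels.
    return -(G * np.log(P)).sum()
```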
To solve problem (4) efficiently, we use a Gauss-Seidel approach combined with mini-batching. Namely, we cyclically update each of \( \mathbf{W}, \mathbf{V} \), and \( \mathbf{b} \) using AdaGrad (Duchi et al., 2011) while keeping the other two variables fixed. We compute the gradients using mini-batches. To state the expressions for the gradients with respect to the model parameters, we introduce some helpful notation: \( \mathbf{A} \odot \mathbf{B} \) denotes the entry-wise product of two matrices, \( \mathbf{M} = \sigma(\mathbf{XW}) \odot (1 - \sigma(\mathbf{XW})) \), and \( \mathbf{D} = \mathrm{Diag}\left( \exp \left( \sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T \right) 1_c \right) \). The gradients are readily computed from (2): \[ \mathcal{G}(\mathbf{W}) = \mathbf{X}^T \left( \mathbf{M} \odot \left[ \left( \mathbf{D}^{-1} \exp \left( \sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T \right) - \mathbf{G} \right) \mathbf{V} \right] \right) \] (5) \[ \mathcal{G}(\mathbf{V}) = \left( \exp \left( \sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T \right)^T \mathbf{D}^{-1} - \mathbf{G}^T \right) \sigma(\mathbf{XW}) \] (6) \[ \mathcal{G}(\mathbf{b}) = \left( \exp \left( \sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T \right)^T \mathbf{D}^{-1} - \mathbf{G}^T \right) 1_n. \] (7) Using AdaGrad, the update rule for \( \mathbf{W}^{(\tau)} \) is \[ \mathbf{W}^{(\tau)} = \mathbf{W}^{(\tau-1)} - \alpha_{\mathbf{W}}^{(\tau)} \odot \mathcal{G}(\mathbf{W}^{(\tau-1)}) \] (8) where \( \tau \) is the timestep and \( \alpha_{\mathbf{W}} \) is a matrix of step sizes computed via \[ (\alpha_{\mathbf{W}}^{(\tau)})_{iq} = \frac{\rho}{\sqrt{\sum_{m=1}^{\tau-1} \left( \mathcal{G}(\mathbf{W}^{(m)}) \odot \mathcal{G}(\mathbf{W}^{(m)}) \right)_{iq}} + \varepsilon}, \] (9) where \( \varepsilon \) and the learning rate \( \rho \) determine how much an entry \( \mathbf{W}_{iq} \) is updated during the first timestep. \( \mathbf{V}^{(\tau)} \) and \( \mathbf{b}^{(\tau)} \) are computed according to similar update rules obtained from (8) and (9) by substituting \( \mathcal{G}(\mathbf{W}) \) with \( \mathcal{G}(\mathbf{V}) \) (or \( \mathcal{G}(\mathbf{b}) \)), \( \mathbf{W} \) with \( \mathbf{V} \) (or \( \mathbf{b} \)), and \( \alpha_{\mathbf{W}} \) with \( \alpha_{\mathbf{V}} \) (or \( \alpha_{\mathbf{b}} \)). A listing of the proposed algorithm is given in Algorithm 1. Its computational complexity is \( O(T n r (d + c)) \), where \( T \) is the number of epochs. We note that the gradient calculations in lines 7–9 of Algorithm 1 are amenable to parallelization. Algorithm 1 Mini-Batched Gauss-Seidel Adaptive Gradient Descent for learning SEM parameters Input: Instance feature matrix \( \mathbf{X} \in \mathbb{R}^{n \times d} \), observed label matrix \( \mathbf{Y} \in \mathbb{R}^{n \times c} \), dimensionality of the latent space \( r \), learning rate \( \rho \) and \( \varepsilon > 0 \), mini-batch size \( m < n \), and number of epochs \( T \).
1: Initialize \( \mathbf{W}^{(0)}, \mathbf{V}^{(0)}, \) and \( \mathbf{b}^{(0)} \) 2: for \( t = 1, 2, \ldots, T \) do 3: Randomly choose \( n/m \) mini-batches \( I_z \subset \{1, ..., n\} \) of size \( m \) 4: for \( z = 1, 2, \ldots, n/m \) do 5: Set \( \tau = (t-1)(n/m) + z \) 6: Select the data instances in the \( z \)-th mini-batch by working with \( \mathbf{X}_{I_z \cdot} \) in lieu of \( \mathbf{X} \) 7: Update \( \mathbf{W}^{(\tau)} \) via (8) while fixing \( \mathbf{V} = \mathbf{V}^{(\tau-1)} \) and \( \mathbf{b} = \mathbf{b}^{(\tau-1)} \) 8: Update \( \mathbf{V}^{(\tau)} \) via the analog of (8) for \( \mathbf{V} \) while fixing \( \mathbf{W} = \mathbf{W}^{(\tau)} \) and \( \mathbf{b} = \mathbf{b}^{(\tau-1)} \) 9: Update \( \mathbf{b}^{(\tau)} \) via the analog of (8) for \( \mathbf{b} \) while fixing \( \mathbf{W} = \mathbf{W}^{(\tau)} \) and \( \mathbf{V} = \mathbf{V}^{(\tau)} \) 10: end for 11: end for Output: \( \mathbf{W}^{(\tau)}, \mathbf{V}^{(\tau)} \) and \( \mathbf{b}^{(\tau)} \) 3.1 INCREASED EFFICIENCY BY FITTING MARGINALS Although Algorithm 1 runs in time linear in the dimensions of the model parameters and the input datasets, it can be computationally expensive when there are more than a few thousand labels. To further reduce the running time of our algorithm, we note that in practice each instance is often associated with only \( \ell_i \ll c \) labels. To speed up the training, at each timestep \( \tau \), rather than attempting to minimize the divergence between the entire empirical and predicted label distributions of each instance, we sample a set of labels \( L_i^{(\tau)} \) for each instance and attempt to minimize the divergence between the empirical and predicted marginal label distributions over that set of labels \( L_i^{(\tau)} \). Let \( \mathrm{PL}_i \) denote the set of labels assigned to the \( i \)th instance, and \( \mathrm{AL}_i \) denote the set of labels not assigned to that instance. We sample from \( \mathrm{AL}_i \) to form a set \( \mathrm{NL}_i^{(\tau)} \), and use \( L_i^{(\tau)} = \mathrm{PL}_i \cup \mathrm{NL}_i^{(\tau)} \). This leads to the modified objective \[ J_{\text{Marginal}}(\mathbf{W}, \mathbf{V}, \mathbf{b})^{(\tau)} = - \sum_{i=1}^n \sum_{j \in L_i^{(\tau)}} G_{ij} \log \frac{\exp(\sigma(\mathbf{X}\mathbf{W})_i (\mathbf{V}^T)_j + b_j)}{\sum_{k \in L_i^{(\tau)}} \exp(\sigma(\mathbf{X}\mathbf{W})_i (\mathbf{V}^T)_{k} + b_k)}. \] Note that \( J_{\text{Marginal}}^{(\tau)} \) is a random function that changes at each timestep. Minimizing this stochastic objective effectively seeks SEM parameters which fit all the randomly sampled marginals encountered during training. Thus it is important to sample the sets \( \mathrm{NL}_i^{(\tau)} \) so that the selected marginals capture non-trivial information about the label distributions. One can imagine that uniformly sampling from \( \mathrm{AL}_i \) will not provide very informative marginals. As an improvement on this naive scheme, we sample labels from \( \mathrm{AL}_i \) with probability proportional to their frequency of occurrence in the training data set. The number of negative labels is set to be \( \beta \) times the number of positive labels, i.e., \( |\mathrm{NL}_i^{(\tau)}| = \beta |\mathrm{PL}_i| = \beta \ell_i \). Further, when \( m > 1 \), to facilitate efficient BLAS operations while mini-batching, we use the same marginals for each instance in the same mini-batch, i.e., we fit marginals over \( L^{(\tau)} := \bigcup_{i \in I_z} L_i^{(\tau)} \), where \( I_z \) denotes the set of instances in the current mini-batch.
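A minimal sketch of this sampling scheme for one mini-batch follows; names are illustrative assumptions, and for simplicity the per-instance budgets \( \beta \ell_i \) are pooled over the batch.

```python
import numpy as np

def sample_batch_marginal(Y_batch, label_freq, beta, rng):
    # PL: labels assigned to at least one instance in the mini-batch.
    pos = np.flatnonzero(Y_batch.sum(axis=0) > 0)
    # Candidate negatives: every other label, sampled with probability
    # proportional to its frequency in the training set.
    neg_pool = np.setdiff1d(np.arange(Y_batch.shape[1]), pos)
    p = label_freq[neg_pool] / label_freq[neg_pool].sum()
    # Pooled budget: beta times the number of positives in the batch.
    n_neg = min(len(neg_pool), int(beta * Y_batch.sum()))
    neg = rng.choice(neg_pool, size=n_neg, replace=False, p=p)
    # Shared marginal L = PL ∪ NL for the whole batch, keeping the
    # restricted softmax a dense, BLAS-friendly block of columns.
    return np.concatenate([pos, neg])
```

A training step then evaluates the restricted softmax and the gradients (5)–(7) only over the returned columns, e.g. working with `H[:, L]` and `G[:, L]` in place of the full matrices.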
In the experiments presented in Section 4, we found that \( \beta \) around 10 suffices when \( c \) is relatively small, and \( \beta \) around 100 suffices when \( c \) is on the order of tens of thousands. 3.2 LABEL PREDICTION We present two methods for predicting the labels for a new instance \( \mathbf{x} \in \mathbb{R}^d \) given the fitted SEM parameters. The first uses the generative model behind SEM: form \( \mathbf{h} = \sigma(\mathbf{x}^T \mathbf{W}) \mathbf{V}^T + \mathbf{b}^T \) and note that the probability that the \( j \)th label is assigned to that instance is given by \[ \mathbb{P}(y_j = 1) = \exp(h_j) / \sum_{k=1}^c \exp(h_k). \] Accordingly we assign the most probable labels to \( \mathbf{x} \). We call this prediction scheme the direct SEM method; it simply requires choosing the labels corresponding to the largest entries of \( \mathbf{h} \). The second method builds a kernel classifier in the semantic space obtained from the SEM factorization. Following Mineiro & Karampatziakis (2015), a classifier is trained on these semantic representations by solving the optimization problem \[ \min_{\mathbf{Z} \in \mathbb{R}^{c \times s}} \sum_{i=1}^n \ell(\mathbf{Y}_i, \mathbf{Z}\psi(\sigma(\mathbf{x}_i\mathbf{W}))) + \lambda \| \mathbf{Z} \|_F^2, \] where \( \ell \) is the log-loss penalty and \( \psi : \mathbb{R}^r \to \mathbb{R}^s \) is an \( s \)-dimensional Random Fourier Feature (RFF) map (Rahimi & Recht 2007): \[ \psi(\mathbf{x}) = \cos \left( \Phi \mathbf{x} + \boldsymbol{\theta} \right), \] where \( \Phi \in \mathbb{R}^{s \times r} \) is a matrix of i.i.d. standard Gaussians and \( \boldsymbol{\theta} \in [0, 2\pi)^s \) is a vector of i.i.d. uniform samples from \([0, 2\pi)\). At test time, the predicted label probabilities for an instance \( \mathbf{x} \) are given by \( \mathbf{Z}\psi(\sigma(\mathbf{x}^T\mathbf{W})) \), so we assign the most probable labels according to this model. We refer to this scheme as the kernelized SEM method. 4 EXPERIMENTS In the sequel we refer to the direct SEM scheme as simply SEM, and the kernelized SEM scheme as SEM-K. We compare SEM and SEM-K with several alternative multi-label learning algorithms: NNML (Nam et al. 2014), REmbed (Mineiro & Karampatziakis 2015), SLEEC (Bhatia et al. 2015), and BMLPL (Rai et al. 2015). We do not compare to the models proposed in (Tai & Lin 2010; Chen & Lin 2012; Bi & Kwok 2013; Yu et al. 2014; Prabhu & Varma 2014) because earlier works (Yu et al. 2014; Bhatia et al. 2015) have shown that they are inferior to SLEEC. 4.1 DATASETS Table 1 summarizes the eight datasets used in our experiments. Here \( n_{train} \) and \( n_{test} \) are the numbers of training and testing instances, \( d \) is the number of features, \( c \) is the number of labels/classes, and the avg(\( \ell_i \)) column reports the average number of labels per instance. In these datasets, the number of labels varies from 23 to 30938, the average label cardinality varies from 2.508 to 19.020, and the number of instances in different classes varies over a large range. Thus predicting the label assignments correctly over this collection of datasets is a challenging task.
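Before moving to the experimental setup, here is a small sketch of the RFF map and the test-time scoring of the kernelized scheme described in Section 3.2; `Z` is treated as already fitted, and all shapes and names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_rff(r, s, rng):
    # Phi: s x r i.i.d. standard Gaussians; theta: s i.i.d. uniform
    # draws from [0, 2*pi), as in Rahimi & Recht (2007).
    Phi = rng.standard_normal((s, r))
    theta = rng.uniform(0.0, 2.0 * np.pi, size=s)
    return lambda u: np.cos(Phi @ u + theta)

# Test-time scoring for one instance x, given SEM weights W (d x r)
# and a fitted regularized classifier Z (c x s).
rng = np.random.default_rng(0)
d, r, s, c = 210, 50, 2000, 101      # e.g. Mediamill-sized shapes
psi = make_rff(r, s, rng)
W = rng.standard_normal((d, r))      # stand-ins for fitted parameters
Z = rng.standard_normal((c, s))
x = rng.standard_normal(d)
scores = Z @ psi(sigmoid(x @ W))     # semantic rep -> RFF -> scores
top3 = np.argsort(scores)[::-1][:3]  # assign the most probable labels
```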
<table> <tr> <th>Dataset</th> <th>Domain</th> <th>\( n_{train} \)</th> <th>\( n_{test} \)</th> <th>d</th> <th>c</th> <th>avg(\( \ell_i \))</th> </tr> <tr> <td><i>MSRC</i></td> <td>image</td> <td>296</td> <td>295</td> <td>512</td> <td>23</td> <td>2.508</td> </tr> <tr> <td><i>Corel5K</i></td> <td>image</td> <td>4500</td> <td>500</td> <td>499</td> <td>374</td> <td>3.522</td> </tr> <tr> <td><i>SUN</i></td> <td>image</td> <td>12906</td> <td>1434</td> <td>512</td> <td>102</td> <td>15.526</td> </tr> <tr> <td><i>Delicious</i></td> <td>text</td> <td>12920</td> <td>3185</td> <td>500</td> <td>983</td> <td>19.020</td> </tr> <tr> <td><i>EurLex-sub</i></td> <td>text</td> <td>17413</td> <td>1935</td> <td>5000</td> <td>201</td> <td>2.213</td> </tr> <tr> <td><i>Mediamill</i></td> <td>video</td> <td>30993</td> <td>12914</td> <td>210</td> <td>101</td> <td>4.736</td> </tr> <tr> <td><i>EurLex-des</i></td> <td>text</td> <td>17413</td> <td>1935</td> <td>5000</td> <td>3993</td> <td>5.31</td> </tr> <tr> <td><i>Wiki10K</i></td> <td>text</td> <td>14146</td> <td>6616</td> <td>101938</td> <td>30938</td> <td>18.64</td> </tr> </table> 4.2 METHODOLOGY The code for the methods we compare to was provided by their authors; in particular, we note that the computationally intensive portions of REmbed, SLEEC and NNML are implemented in C, whereas, by way of comparison, our algorithms are entirely implemented in Matlab. Since each method has several parameters, we hand-tuned them for each dataset as suggested by the authors. All methods were run in MATLAB on a Windows server with 4GB memory and four 2.3GHz CPUs with eight cores. The prediction performance for each algorithm is evaluated according to widely-used metrics in the field of multi-label classification, viz., label-based Macro-F1 (MaF1) and Micro-F1 (MiF1) and instance-based Precision-at-k (P@k, esp. P@1 and P@3) (Zhang & Zhou 2014). MaF1 and MiF1 require predefining a threshold to determine the number of labels to be assigned to the testing data. In our experiments, the number of labels assigned to each testing instance was set according to its ground truth. Table 2: The classification performance of six multi-label classification algorithms (NNML, BMLPL, REmbed, SLEEC and the proposed SEM and SEM-K). The best and second best results are respectively bolded and underlined for each evaluation measure.
<table> <tr> <th rowspan="2"></th> <th colspan="4">MSRC</th> <th colspan="4">Corel5K</th> </tr> <tr> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> </tr> <tr> <td>NNML</td> <td>0.4086</td> <td>0.5944</td> <td>0.7356</td> <td>0.5073</td> <td>0.0547</td> <td>0.2967</td> <td>0.4020</td> <td>0.3047</td> </tr> <tr> <td>BMLPL</td> <td>0.4592</td> <td>0.6199</td> <td>0.7017</td> <td>0.5288</td> <td>0.0315</td> <td>0.2779</td> <td>0.3940</td> <td>0.2820</td> </tr> <tr> <td>REmbed</td> <td>0.3537</td> <td>0.5128</td> <td>0.5322</td> <td>0.4384</td> <td>0.0450</td> <td>0.2144</td> <td>0.3060</td> <td>0.2247</td> </tr> <tr> <td>SLEEC</td> <td>0.4973</td> <td>0.6314</td> <td>0.7353</td> <td>0.5243</td> <td>0.0534</td> <td>0.3188</td> <td>0.4360</td> <td>0.3287</td> </tr> <tr> <td>SEM</td> <td>0.5064</td> <td>0.6173</td> <td>0.7220</td> <td>0.5333</td> <td><b>0.0623</b></td> <td><b>0.3188</b></td> <td><b>0.4320</b></td> <td><b>0.3293</b></td> </tr> <tr> <td>SEM-K</td> <td><b>0.5770</b></td> <td><b>0.6492</b></td> <td><b>0.7458</b></td> <td><b>0.5525</b></td> <td>0.0589</td> <td>0.2649</td> <td>0.3600</td> <td>0.2773</td> </tr> <tr> <th rowspan="2"></th> <th colspan="4">SUN</th> <th colspan="4">Mediamill</th> </tr> <tr> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> </tr> <tr> <td>NNML</td> <td>0.2807</td> <td>0.5248</td> <td>0.9421</td> <td>0.8580</td> <td>0.0819</td> <td>0.5890</td> <td>0.8260</td> <td>0.6675</td> </tr> <tr> <td>BMLPL</td> <td>0.1897</td> <td>0.4766</td> <td>0.9024</td> <td>0.8001</td> <td>0.0855</td> <td>0.6012</td> <td>0.8478</td> <td>0.6854</td> </tr> <tr> <td>REmbed</td> <td>0.3408</td> <td>0.5125</td> <td>0.9393</td> <td>0.8591</td> <td>0.2634</td> <td>0.6371</td> <td>0.8741</td> <td>0.6988</td> </tr> <tr> <td>SLEEC</td> <td>0.2935</td> <td>0.5256</td> <td>0.9484</td> <td>0.8656</td> <td><b>0.2851</b></td> <td><b>0.6546</b></td> <td><b>0.8899</b></td> <td><b>0.7158</b></td> </tr> <tr> <td>SEM</td> <td>0.3648</td> <td><b>0.5486</b></td> <td>0.9365</td> <td>0.8642</td> <td>0.1593</td> <td>0.6296</td> <td>0.8746</td> <td>0.6996</td> </tr> <tr> <td>SEM-K</td> <td><b>0.3703</b></td> <td>0.5466</td> <td><b>0.9575</b></td> <td><b>0.8787</b></td> <td>0.2570</td> <td><b>0.6717</b></td> <td><b>0.8953</b></td> <td><b>0.7278</b></td> </tr> <tr> <th rowspan="2"></th> <th colspan="4">Delicious</th> <th colspan="4">EurLex-sub</th> </tr> <tr> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> </tr> <tr> <td>NNML</td> <td>0.1721</td> <td>0.3963</td> <td>0.6687</td> <td><b>0.6169</b></td> <td>0.5761</td> <td>0.8487</td> <td>0.9173</td> <td>0.6267</td> </tr> <tr> <td>BMLPL</td> <td>0.1061</td> <td>0.3739</td> <td>0.6378</td> <td>0.5772</td> <td>0.1459</td> <td>0.6011</td> <td>0.6789</td> <td>0.4697</td> </tr> <tr> <td>REmbed</td> <td>0.1549</td> <td>0.3713</td> <td>0.6353</td> <td>0.572</td> <td>0.5335</td> <td>0.8031</td> <td>0.8785</td> <td>0.5977</td> </tr> <tr> <td>SLEEC</td> <td>0.1257</td> <td>0.3859</td> <td>0.6674</td> <td>0.6112</td> <td>0.5433</td> <td>0.8461</td> <td>0.9152</td> <td>0.6191</td> </tr> <tr> <td>SEM</td> <td><b>0.1941</b></td> <td><b>0.3980</b></td> <td><b>0.6727</b></td> <td>0.6162</td> <td>0.5652</td> <td>0.8339</td> <td>0.8971</td> <td>0.6188</td> </tr> <tr> <td>SEM-K</td> <td>0.1675</td> <td>0.3886</td> <td>0.6658</td> <td>0.6112</td> <td><b>0.5807</b></td> <td><b>0.8494</b></td> <td><b>0.9188</b></td> <td><b>0.6269</b></td> </tr> </table> Table 3: The running times, in seconds, of six multi-label classification algorithms (NNML, BMLPL, REmbed, SLEEC and the proposed
SEM and SEM-K) for differing training sizes on the Mediamill dataset. <table> <tr> <th>\( n_{train} \)</th> <th>NNML</th> <th>BMLPL</th> <th>REmbed</th> <th>SLEEC</th> <th>SEM</th> <th>SEM-K</th> </tr> <tr> <td>439</td> <td>327.57</td> <td>10.29</td> <td>2.07</td> <td>16.11</td> <td>0.60</td> <td>1.50</td> </tr> <tr> <td>1756</td> <td>1333.91</td> <td>20.35</td> <td>3.02</td> <td>57.16</td> <td>2.41</td> <td>4.29</td> </tr> <tr> <td>3073</td> <td>2363.02</td> <td>48.2</td> <td>4.14</td> <td>145.88</td> <td>4.36</td> <td>6.99</td> </tr> <tr> <td>4391</td> <td>3264.79</td> <td>41.72</td> <td>5.45</td> <td>227.76</td> <td>6.65</td> <td>10.10</td> </tr> <tr> <td>8781</td> <td>4428.09</td> <td>84.09</td> <td>10.83</td> <td>815.66</td> <td>12.29</td> <td>21.73</td> </tr> <tr> <td>13172</td> <td>5170.00</td> <td>119.09</td> <td>17.04</td> <td>1041.07</td> <td>18.39</td> <td>26.49</td> </tr> <tr> <td>17563</td> <td>5170.17</td> <td>185.05</td> <td>20.90</td> <td>1692.7</td> <td>24.22</td> <td>42.21</td> </tr> <tr> <td>21954</td> <td>5297.75</td> <td>225.96</td> <td>44.20</td> <td>1772.52</td> <td>30.10</td> <td>50.64</td> </tr> <tr> <td>26344</td> <td>5947.94</td> <td>235.93</td> <td>52.75</td> <td>1985.82</td> <td>35.95</td> <td>59.42</td> </tr> <tr> <td>30735</td> <td>6604.93</td> <td>275.06</td> <td>58.74</td> <td>2181.48</td> <td>41.37</td> <td>61.30</td> </tr> </table> 4.3 PERFORMANCE ON DATASETS WITH SMALL LABEL SETS First we compare the performance on six multi-label learning problems with \( c < 1000 \). To fit both SEM models, we take the number of epochs to be 30 and the mini-batch size to be 200, i.e., \( T = 30 \) and \( m = 200 \) in Algorithm 1, and because \( c \) is small, we fit the full label distributions. The classification performances of our SEM algorithms and the baseline methods are shown in Table 2. SEM or SEM-K outperforms the alternative algorithms in most cases. Table 3 compares the running times of the algorithms as the size of the dataset is increased, using Mediamill. We see that SEM is the fastest model, followed by REmbed, then closely by SEM-K; the remaining three models are significantly more costly. It is clear that NNML, the previous neural network approach to multi-label learning, costs the most. For the other five algorithms, the latent space dimensionality \( r \) is set to 50. SLEEC is expensive because it constructs the nearest neighbor graph among the training data and computes the top \( r \) eigenvectors of the corresponding similarity matrix, which costs \( O(n^2 r + d^2 r) \). REmbed is efficient because its main cost is finding the singular vectors of a \( c \times (r + q) \) matrix (here \( c \) is the number of labels and \( q \) is a small integer), but its performance is inferior to SEM-K. The BMLPL code provided by the authors applies an SVD to the training data to initialize the model parameters and then uses conjugate gradient to update the parameters; thus it costs much more than REmbed and our proposed methods. 4.4 PERFORMANCE ON DATASETS WITH LARGE LABEL SETS We proposed using SEM to fit marginals rather than the entire label distribution when \( c \) is large, for computational efficiency. To judge the effectiveness of this proposal, we compare the accuracy and running times of the SEM and SEM-K models with baselines on EurLex-des and Wiki10K, two datasets with \( c > 1000 \). As baselines, we use REmbed and SLEEC, in accordance with the above discussion which showed that these two methods are efficient and/or have good performance.
The hyperparameters in SLEEC were set according to the original authors’ code: \( r \) is 100 for EurLex-des and 75 for Wiki10K, and 3 clusters are used for Eurlex-des and 5 for Wiki10K. To fit the SEM models, we used the same value of \( r \) as SLEEC on these two datasets, with the mini-batch size and number of epochs set to 200 and 10 respectively. For REmbed, the latent space size \( r \) was tuned via cross-validation; \( r = 300 \) for Eurlex-des and \( r = 150 \) for Wiki10K. The number of Random Fourier Features is 2000 for both REmbed and SEM-K. The number of threads is set to 8 for all methods. Table 4: The classification performance of five methods (REmbed, SLEEC and the proposed SEM and SEM-K with two values of \( \beta \)) on the Eurlex-des and Wiki10K datasets. The best and second best results are respectively bolded and underlined for each evaluation metric. <table> <tr> <th></th> <th></th> <th>REmbed</th> <th>SLEEC</th> <th>SEM (\( \beta = 500 \))</th> <th>SEM-K (\( \beta = 10 \))</th> <th>SEM-K (\( \beta = 60 \))</th> </tr> <tr> <th rowspan="3">Eurlex-des</th> <td>P@1</td> <td>0.7299</td> <td>0.8017</td> <td>0.7107</td> <td>0.8024</td> <td>0.8135</td> </tr> <tr> <td>P@3</td> <td>0.6064</td> <td>0.6539</td> <td>0.5874</td> <td>0.6621</td> <td>0.6714</td> </tr> <tr> <td>P@5</td> <td>0.5060</td> <td>0.5375</td> <td>0.4916</td> <td>0.5493</td> <td>0.5563</td> </tr> <tr> <th rowspan="3">Wiki10K</th> <td>P@1</td> <td>0.6963</td> <td>0.8554</td> <td>0.8517</td> <td>0.8582</td> <td>0.8671</td> </tr> <tr> <td>P@3</td> <td>0.5790</td> <td>0.7359</td> <td>0.7133</td> <td>0.7278</td> <td>0.7385</td> </tr> <tr> <td>P@5</td> <td>0.4929</td> <td>0.6310</td> <td>0.6171</td> <td>0.6236</td> <td>0.6353</td> </tr> </table> Table 5: The running times, in seconds, of five methods (REmbed, SLEEC and the proposed SEM and SEM-K for two values of \( \beta \)) on the Eurlex-des and Wiki10K datasets. <table> <tr> <th></th> <th>REmbed</th> <th>SLEEC</th> <th>SEM (\( \beta = 500 \))</th> <th>SEM-K (\( \beta = 10 \))</th> <th>SEM-K (\( \beta = 60 \))</th> </tr> <tr> <th>Eurlex-des</th> <td>358.63</td> <td>1571.30</td> <td>1210.30</td> <td>167.10</td> <td>250.77</td> </tr> <tr> <th>Wiki10K</th> <td>2858.96</td> <td>2497.00</td> <td>2003.43</td> <td>646.48</td> <td>769.18</td> </tr> </table> Table 4 compares the classification performances of the methods on these two datasets. It is clear that SEM-K with a small set of negative labels obtains better performance than both REmbed and SLEEC.
Table 5 shows that, additionally, the SEM-K models are fit much faster than the other models. 4.5 IMPACT OF THE SIZE OF THE MARGINALS Figure 1 illustrates the impact of the choice of \( \beta \) on the prediction performance (in terms of P@1) of SEM and SEM-K. The performances of SLEEC and REmbed are included for comparison. The hyperparameters of SLEEC, REmbed and SEM were set as in Section 4.4. It is evident that the performance of SEM increases significantly and monotonically with \( \beta \). However, SEM-K is insensitive to \( \beta \) once it passes a dataset-dependent threshold (e.g., \( \beta = 60 \) for *Eurlex-des* and \( \beta = 100 \) for *Wiki10K*). Note that on *Wiki10K*, even the simpler direct SEM outperforms REmbed when there are sufficient negative labels. Figure 1: The P@1 performance of SEM and SEM-K as a function of \( \beta \), in comparison to the performances of SLEEC and REmbed on the (a) Eurlex-des and (b) Wiki10K datasets. Figure 2 illustrates the effect of \( \beta \) on the running times of SEM and SEM-K. Note that the additional time required by SEM-K to fit the classifier in the semantic space is negligible compared to the time it takes to first fit the direct SEM model. Figure 2: Running time of SEM-K under varying \( \beta \). 4.6 ADDITIONAL CONSIDERATIONS There are other important ways in which the proposed SEM methods can be compared to the baseline multi-label learning methods, including their performance as a function of the latent space dimensionality and as a function of the amount of training data. Due to space constraints, a discussion of these two concerns and of the convergence behavior of Algorithm 1 is provided in the Supplementary material. 5 CONCLUSION We proposed a new semantic embedding model (SEM) for handling the multi-label learning task. A framework based on Gauss-Seidel mini-batched adaptive gradient descent was proposed for efficiently solving the non-convex optimization problem required to learn the SEM parameters. For large label sets, we proposed fitting the SEM to marginal distributions rather than the full label distribution. A series of experiments on eight real-world datasets empirically demonstrated that the proposed method is superior to state-of-the-art methods in terms of prediction performance and running time. REFERENCES K. Balasubramanian and G. Lebanon. The landmark selection method for multiple output prediction. In Proc. of ICML, 2012. K. Bhatia, H. Jain, M. Varma, and P. Jain. Sparse local embeddings for extreme multi-label classification. In Proc. of NIPS, 2015. W. Bi and J. Kwok. Efficient multi-label classification with many labels. In Proc. of ICML, 2013. C. Boutsidis, M. Mahoney, and P. Drineas. An improved approximation algorithm for the column subset selection problem. In Proc. of ACM SODA, 2009. Y. Chen and H. Lin. Feature-aware label space dimension reduction for multi-label classification. In Proc. of NIPS, 2012. M. Cissé, M. Al-Shedivat, and S. Bengio. ADIOS: Architectures Deep in Output Space. In Proc. of ICML, 2016. J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011. D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In Proc. of NIPS, 2009. L. Jing, L. Yang, J. Yu, and M. Ng. Semi-supervised low-rank mapping learning for multi-label classification. In Proc. of CVPR, 2015. Z. Lin, G. Ding, M. Hu, and J. Wang.
Multi-label classification via feature-aware implicit label space encoding. In Proc. of ICML, 2014. P. Mineiro and N. Karampatziakis. Fast label embeddings via randomized linear algebra. In Proc. of ECML, 2015. S. Mohamed, Z. Ghahramani, and K. A. Heller. Bayesian Exponential Family PCA. In Proc. of NIPS, 2009. J. Nam, J. Kim, E. Mencía, I. Gurevych, and J. Fürnkranz. Large-scale multi-label text classification - revisiting neural networks. In Proc. of ECML, 2014. Y. Pawitan. In All Likelihood: Statistical Modeling and Inference Using Likelihood. Oxford University Press, 2001. Y. Prabhu and M. Varma. FastXML: a fast, accurate and stable tree-classifier for extreme multi-label learning. In Proc. of ACM SIGKDD, 2014. A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Proc. of NIPS, 2007. P. Rai, C. Hu, R. Henao, and L. Carin. Large-scale bayesian multi-label learning via topic-based label embeddings. In Proc. of NIPS, 2015. F. Tai and H. Lin. Multi-label classification with principal label space transformation. In Proc. of ICML, 2010. Z. Yang, H. Zhang, Z. Yuan, and E. Oja. Kullback-Leibler divergence for nonnegative matrix factorization. In Proc. of ICANN, pp. 250–257, 2011. H. Yu, P. Jain, and I. Dhillon. Large-scale multi-label learning with missing labels. In Proc. of ICML, 2014. M. Zhang and Z. Zhou. A review on multi-label learning algorithms. IEEE Trans. on Knowledge and Data Engineering, 26(8):1819–1837, 2014. SUPPLEMENTARY MATERIAL A. Effect of Latent Space Dimensionality The latent space dimensionality \( r \) plays an important role in learning the latent factors \( \mathbf{V} \) and the feature mapping matrix \( \mathbf{W} \) in our proposed methods, as it does in the three baselines BMLPL, REmbed and SLEEC. In order to investigate this dependence, we conducted a series of experiments on the training data sets using 5-fold cross-validation, comparing BMLPL, REmbed, SLEEC and our proposed SEM and SEM-K. ![Two line plots showing the effect of latent space dimensionality on P@1 and MiF1 for the Delicious dataset](page_246_370_1097_246.png) (a) Delicious-P@1 (b) Delicious-MiF1 Figure 3: The effect of the latent space dimensionality \( r \) on BMLPL, REmbed, SLEEC, SEM and SEM-K in terms of MiF1 and P@1 on the Delicious dataset. In this experiment, we take the Delicious dataset as an example. The training data is separated into five folds, where four folds are used for training and one fold for validation, and the averaged results in terms of P@1 and MiF1 are given in Figure 3. It can be seen that the performance of each method usually improves with increasing \( r \) until it reaches an optimal value. However, once \( r \) becomes too large, performance degrades. This is reasonable: when \( r \) is too small, the learned parameters cannot fully characterize the hidden semantic structure in the classification problem, while when \( r \) is too large, the benefits of dimensionality reduction are lost, as the model begins to over-fit to the idiosyncrasies of the training data rather than capturing the semantic structure common to both the training and validation data. Usually, these methods obtain good performance at small \( r \), e.g., \( r = 45 \) for the Delicious dataset.
B. Effect of Training Data Size ![Two line plots showing the effect of varying training data size on P@1 and MiF1 for the Mediamill dataset](page_246_1012_1097_246.png) (a) P@1 (b) MiF1 Figure 4: Effect of varying the training data size, as a fraction of the combined test and training data, on five multi-label learning methods in terms of P@1 and MiF1 on the Mediamill dataset. Meanwhile, we studied the label prediction performance as a function of the amount of labeled training data. In this experiment, we fixed the testing data size and randomly selected training data from the training set so that the training data size varies from 1% to 70% of the combined training and testing data. In order to avoid the presence of empty categories and instances with no labels, at least one instance is kept for each label and at least one label is kept for each instance during this sampling process. For each fixed size of the training set, the desired amount of data is randomly sampled ten times, and the resulting average P@1 and MiF1 on the testing data are recorded. During training, the latent dimensionality parameter \( r \) is selected via 5-fold cross-validation. Figure 4 shows these results for the Mediamill dataset, which contains the largest number of instances. As expected, the performance of all the methods is positively correlated with the size of the training data set, and we also see that the proposed SEM-K uniformly outperforms the other methods regardless of the training data size. As it is often expensive to obtain large labeled data sets in real applications, this observation suggests that SEM-K is a better choice for these situations. C. Convergence In order to demonstrate the convergence of the proposed method, we show the value of the objective function (4) (at \( r = 45 \)) in Figure 5(a) and the prediction result (P@1) in Figure 5(b) as a function of the number of passes over the dataset (i.e., \( \tau \) in Algorithm 1). It can be seen that SEM converges and its prediction performance becomes stable in fewer than 50 epochs, which helps SEM scale to large data. ![Convergence curve and P@1 curve plots](page_328_670_924_246.png) Figure 5: Performance of the proposed SEM method (with \( r = 45, \rho = 0.1 \)) on the Delicious dataset: (a) objective function value relative to the minimum and (b) prediction result in terms of P@1, where the x-axis represents the number of passes over the dataset.
ABSTRACT Multi-label learning aims to automatically assign to an instance (e.g., an image or a document) the most relevant subset of labels from a large set of possible labels. The main challenge is to maintain accurate predictions while scaling efficiently to data sets with extremely large label sets and many training data points. We propose a simple but effective neural net approach, the Semantic Embedding Model (SEM), that models the labels for an instance as draws from a multinomial distribution parametrized by nonlinear functions of the instance features. A Gauss-Seidel mini-batch adaptive gradient descent algorithm is used to fit the model. To handle extremely large label sets, we propose and experimentally validate the efficacy of fitting randomly chosen marginal label distributions. Experimental results on eight real-world data sets show that SEM garners significant performance gains over existing methods. In particular, we compare SEM to four recent state-of-the-art algorithms (NNML, BMLPL, REmbed, and SLEEC) and find that SEM uniformly outperforms these algorithms in several widely used evaluation metrics while requiring significantly less training time. 1 INTRODUCTION The multi-label learning problem is to learn to predict potentially multiple relevant labels given an instance. Instances that have multiple labels naturally occur in many application domains, including multimedia information retrieval, tag recommendation, semantic scene classification, query categorization, gene function prediction, medical diagnosis, drug discovery, and marketing. A popular approach to the multi-label learning problem is to embed the labels in a low-dimensional latent space via linear or local non-linear embeddings. The approach of [Hsu et al. (2009)] projects the label vectors to a random low-dimensional space, fits a regression model in this space, then projects these predictions back to the original label space. [Balasubramanian & Lebanon (2012)] use a sparsity-regularized least squares reconstruction objective to select a small set of landmark labels that are used to predict the remaining labels. [Bi & Kwok (2013)] take a similar approach, with a greatly decreased computational cost, by posing the problem of selecting the landmark labels as one of column subset selection and adopting the leverage score sampling approach of [Boutsidis et al. (2009)]. Recently, [Yu et al. (2014)] and [Jing et al. (2015)] propose using trace norm regularization to identify a low-dimensional representation of the original large label space. [Mineiro & Karampatziakis (2015)] use randomized dimensionality reduction to learn a low-dimensional embedding that explicitly captures correlations between the instance features and their labels. These approaches, like other linear embedding methods, assume that the label matrix is low-rank. However, the label matrix in most applications of multi-label learning is a sparse binary matrix, and thus is extremely likely to violate this low-rank assumption [Bhatia et al. (2015)]. Rather than working with the original label and feature matrices, some methods work instead with label or feature similarity matrices, and seek to preserve the local structure of the data in the learned low-dimensional latent space. Tai & Lin (2010) use PCA on the label covariance matrix to extract a low-dimensional latent space for labels, and Chen & Lin (2012) extend this method to integrate feature information. Lin et al.
(2014) apply PCA to a similarity matrix constructed using both label and feature information; this approach is time-consuming as it requires computing a large similarity matrix. Nam et al. (2014) introduce a neural network model to capture non-linear relationships between the input features and the labels. However, this approach is computationally infeasible when the number of possible labels is large. Similarly, Cissé et al. (2016) show that using a deep learning approach built on top of an informative partitioning of the label space gives good performance; the scalability of this method was not characterized. Prabhu & Varma (2014) propose a method to efficiently train a classification tree by optimizing the Normalized Discounted Cumulative Gain. Rai et al. (2015) assume that the label vectors are generated by sampling from a weighted combination of label topics, where the mixture coefficients are determined by the instance features. Bhatia et al. (2015) propose a multi-phase algorithm (SLEEC) that first clusters the instances into a number of relatively small groups, learns label embeddings for each group via an SVD, and then trains linear regressors from the input features to the latent label factors for each group. SLEEC empirically outperforms previous state-of-the-art multi-label classifiers, but the label embedding in each group is learned from a nearest neighbor graph that is constructed solely from labelling information, ignoring the available feature matrix; the feature matrix has been shown repeatedly to be a source of useful information for label embedding (Chen & Lin 2012; Lin et al. 2014; Yu et al. 2014; Jing et al. 2015). The contribution of this paper is a scalable, accurate, and simple neural network approach to multi-label learning. Experiments establish that our method is faster and more accurate than SLEEC, the current state-of-the-art scalable algorithm. Notation: In the sequel, n is the number of training instances, c is the cardinality of the set of possible labels, d is the dimensionality of the feature vectors, and r is the dimension of the learned latent space. The matrix \( \mathbf{X} \in \mathbb{R}^{n \times d} \) contains the instance features, and \( \mathbf{Y} \in \{0,1\}^{n \times c} \) indicates the labels assigned to each instance. We denote the number of observed labels for instance i with \( \ell_i = \sum_{k=1}^c y_{ik} \). The notations \( \mathbf{A}_{i \cdot} \) and \( \mathbf{A}_{\cdot j} \) respectively refer to the ith row and jth column of the matrix \( \mathbf{A} \). Unless otherwise specified, the notation \( f(\mathbf{A}) \) denotes the elementwise application of an arbitrary function f to the entries of \( \mathbf{A} \), so for example \( \exp(\mathbf{A})_{ij} = \exp(a_{ij}) \). 2 THE SEMANTIC EMBEDDING MODEL Our Semantic Embedding Model (SEM) assumes that the underlying parameters determining the observed labels are low-rank, rather than that the observed label matrix is itself low-rank, and it uses a nonlinear model to fit the probability distributions over the labels, conditioned on the instance features. SEM models the i-th row of \( \mathbf{Y} \) as the result of \( \ell_i \) draws from a multinomial distribution: \[ \mathbf{Y}_i \sim \text{Multinomial}(\ell_i; \mathbf{P}_i), \quad \text{where } \mathbf{P} = \left[ \frac{\exp(h_{ij})}{\sum_{k=1}^c \exp(h_{ik})} \right]_{i=1,\ldots,n,\; j=1,\ldots,c}. \] (1)
The parameter matrix \( \mathbf{H} = \mathbf{U} \mathbf{V}^T + 1_n \mathbf{b}^T \) is the sum of label priors \( \mathbf{b} \in \mathbb{R}^c \) and the product of explanatory latent factors associated with the instances (\( \mathbf{U} \in \mathbb{R}^{n \times r} \)) and the labels (\( \mathbf{V} \in \mathbb{R}^{c \times r} \)). Further, we allow the latent factors associated with each instance to be a nonlinear function of the features associated with that instance, \( \mathbf{U} = f(\mathbf{X}, \mathbf{W}) \) for some \( \mathbf{W} \) to be learned. We note that if \( f(\mathbf{X}, \mathbf{W}) = \mathbf{X} \mathbf{W} \), SEM could be viewed as fitting a Bayesian Exponential Family PCA (Mohamed et al. 2009). However, throughout this paper we take \( f(\mathbf{X}, \mathbf{W}) = \sigma(\mathbf{X} \mathbf{W}) \), where \( \sigma(\mathbf{X}) = (1 + \exp(-\mathbf{X}))^{-1} \) denotes the elementwise application of the sigmoid function, as we find this gives good results; with this choice, SEM is more naturally viewed as a neural network model. We fit the SEM parameters by maximizing the likelihood of the observed labels. This is equivalent to minimizing the sum of the KL divergences between the empirical label distributions for each instance and the label distributions predicted by the model (Pawitan 2001). Accordingly, we define the empirical label distribution matrix \( \mathbf{G} \), whose ith row satisfies \( \mathbf{G}_i = \mathbf{Y}_i / \ell_i \), then minimize the row-wise Kullback-Leibler distance (Yang et al., 2011) between \( \mathbf{G} \) and \( \mathbf{P} \): \[ J_{\mathbf{G}|\mathbf{P}} = \sum_{i=1}^n \sum_{j=1}^c G_{ij} \log \frac{G_{ij}}{P_{ij}} \simeq - \sum_{i=1}^n \sum_{j=1}^c G_{ij} \log P_{ij}. \] (2) Recalling that \[ P_{ij} = \frac{\exp(h_{ij})}{\sum_{k=1}^c \exp(h_{ik})} = \frac{\exp((\sigma(\mathbf{XW})\mathbf{V}^T)_{ij} + b_j)}{\sum_{k=1}^c \exp((\sigma(\mathbf{XW})\mathbf{V}^T)_{ik} + b_k)}, \] some algebraic manipulations give the final objective \[ J(\mathbf{W}, \mathbf{V}, \mathbf{b}) = J_{\mathbf{G}|\mathbf{P}} = - \sum_{i=1}^n \sum_{j=1}^c G_{ij} \log \frac{\exp(\sigma(\mathbf{XW})_i (\mathbf{V}^T)_j + b_j)}{\sum_{k=1}^c \exp(\sigma(\mathbf{XW})_i (\mathbf{V}^T)_k + b_k)} \] \[ = - \sum_{i=1}^n \sum_{j=1}^c G_{ij} (\sigma(\mathbf{XW})_i (\mathbf{V}^T)_j + b_j) + \sum_{i=1}^n \log \left( \sum_{k=1}^c \exp(\sigma(\mathbf{XW})_i (\mathbf{V}^T)_k + b_k) \right) \] \[ = - \mathrm{Tr}(\mathbf{G} (\sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T)^T) + 1_n^T \log \left( \exp(\sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T)1_c \right). \] (3) Thus the SEM parameters are learned by solving the optimization problem \[ \min_{\mathbf{W}, \mathbf{V}, \mathbf{b}} J(\mathbf{W}, \mathbf{V}, \mathbf{b}). \] (4) Here \( \mathbf{V} \in \mathbb{R}^{c \times r} \) are the representations of the labels in a latent semantic space, \( \mathbf{W} \in \mathbb{R}^{d \times r} \) controls the nonlinear mapping from the instance features to the same semantic space, and the offsets \( \mathbf{b} \in \mathbb{R}^c \) allow for label-specific offsets in the mapping from the semantic space to the log probabilities. 3 MODEL FITTING The optimization problem (4) is non-convex. To solve it efficiently, we use a Gauss-Seidel approach combined with mini-batching. Namely, we cyclically update each of \( \mathbf{W}, \mathbf{V} \), and \( \mathbf{b} \) using AdaGrad (Duchi et al., 2011) while keeping the other two variables fixed. We compute the gradients using mini-batches.
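To make the notation concrete, the following is a minimal NumPy sketch (our own illustration under the definitions above, not the authors' code) of evaluating the objective \( J(\mathbf{W}, \mathbf{V}, \mathbf{b}) \) in (3); the function name is ours, and the row-wise max-subtraction is a standard numerical-stability trick that leaves \( J \) unchanged because each row of \( \mathbf{G} \) sums to one.

```python
import numpy as np

def sem_objective(X, G, W, V, b):
    """Evaluate J(W, V, b) from (3).

    X : (n, d) instance features       G : (n, c) empirical label distributions
    W : (d, r) feature-to-latent map   V : (c, r) label factors    b : (c,) offsets
    """
    U = 1.0 / (1.0 + np.exp(-X @ W))       # U = sigma(XW), instance latent factors
    H = U @ V.T + b                        # H = UV^T + 1_n b^T (b broadcasts over rows)
    H = H - H.max(axis=1, keepdims=True)   # stabilize; J is invariant since rows of G sum to 1
    log_Z = np.log(np.exp(H).sum(axis=1))  # row-wise log-partition terms
    # J = -Tr(G H^T) + 1_n^T log(exp(H) 1_c), the final form in (3)
    return -(G * H).sum() + log_Z.sum()
```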
To state the expressions for the gradients with respect to the model parameters, we introduce some helpful notation: \( \mathbf{A} \odot \mathbf{B} \) denotes the entry-wise product of two matrices, \( \mathbf{M} = \sigma(\mathbf{XW}) \odot (1 - \sigma(\mathbf{XW})) \), and \( \mathbf{D} = \mathrm{Diag}\left( \exp \left( \sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T \right) 1_c \right) \). The gradients are readily computed from (3): \[ \mathcal{G}(\mathbf{W}) = \mathbf{X}^T \left( \mathbf{M} \odot \left[ \left( \mathbf{D}^{-1} \exp \left( \sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T \right) - \mathbf{G} \right) \mathbf{V} \right] \right) \] (5) \[ \mathcal{G}(\mathbf{V}) = \left( \exp \left( \sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T \right)^T \mathbf{D}^{-1} - \mathbf{G}^T \right) \sigma(\mathbf{XW}) \] (6) \[ \mathcal{G}(\mathbf{b}) = \left( \exp \left( \sigma(\mathbf{XW})\mathbf{V}^T + 1_n \mathbf{b}^T \right)^T \mathbf{D}^{-1} - \mathbf{G}^T \right) 1_n. \] (7) Using AdaGrad, the update rule for \( \mathbf{W}^{(\tau)} \) is \[ \mathbf{W}^{(\tau)} = \mathbf{W}^{(\tau-1)} - \alpha_{\mathbf{W}}^{(\tau)} \odot \mathcal{G}(\mathbf{W}^{(\tau-1)}) \] (8) where \( \tau \) is the timestep and \( \alpha_{\mathbf{W}} \) is a matrix of step sizes computed via \[ (\alpha_{\mathbf{W}}^{(\tau)})_{iq} = \frac{\rho}{\sqrt{\sum_{m=1}^{\tau-1} \left( \mathcal{G}(\mathbf{W}^{(m)}) \odot \mathcal{G}(\mathbf{W}^{(m)}) \right)_{iq}} + \varepsilon}, \] (9) where \( \varepsilon \) and the learning rate \( \rho \) determine how much an entry \( \mathbf{W}_{iq} \) is updated during the first timestep. \( \mathbf{V}^{(\tau)} \) and \( \mathbf{b}^{(\tau)} \) are computed according to similar update rules obtained from (8) and (9) by substituting \( \mathcal{G}(\mathbf{W}) \) with \( \mathcal{G}(\mathbf{V}) \) (or \( \mathcal{G}(\mathbf{b}) \)), \( \mathbf{W} \) with \( \mathbf{V} \) (or \( \mathbf{b} \)), and \( \alpha_{\mathbf{W}} \) with \( \alpha_{\mathbf{V}} \) (or \( \alpha_{\mathbf{b}} \)). A listing of the proposed algorithm is given in Algorithm 1. Its computational complexity is \( O(T n r (d + c)) \), where \( T \) is the number of epochs. We note that the gradient calculations in lines 7–9 of Algorithm 1 are amenable to parallelization. Algorithm 1 Mini-Batched Gauss-Seidel Adaptive Gradient Descent for learning SEM parameters Input: Instance feature matrix \( \mathbf{X} \in \mathbb{R}^{n \times d} \), observed label matrix \( \mathbf{Y} \in \{0,1\}^{n \times c} \), dimensionality of the latent space \( r \), learning rate \( \rho \) and \( \varepsilon > 0 \), mini-batch size \( m < n \), and number of epochs \( T \).
1: Initialize \( \mathbf{W}^{(0)}, \mathbf{V}^{(0)}, \) and \( \mathbf{b}^{(0)} \) 2: for \( t = 1, 2, \ldots, T \) do 3: Randomly choose \( n/m \) mini-batches \( I_z \subset \{1, ..., n\} \) of size \( m \) 4: for \( z = 1, 2, \ldots, n/m \) do 5: Set \( \tau = (t-1)(n/m) + z \) 6: Select the data instances in the \( z \)-th mini-batch by working with \( \mathbf{X}_{I_{z,\cdot}} \) in lieu of \( \mathbf{X} \) 7: Update \( \mathbf{W}^{(\tau)} \) via (8) while fixing \( \mathbf{V} = \mathbf{V}^{(\tau-1)} \) and \( \mathbf{b} = \mathbf{b}^{(\tau-1)} \) 8: Update \( \mathbf{V}^{(\tau)} \) via the analog of (8) for \( \mathbf{V} \) while fixing \( \mathbf{W} = \mathbf{W}^{(\tau)} \) and \( \mathbf{b} = \mathbf{b}^{(\tau-1)} \) 9: Update \( \mathbf{b}^{(\tau)} \) via the analog of (8) for \( \mathbf{b} \) while fixing \( \mathbf{W} = \mathbf{W}^{(\tau)} \) and \( \mathbf{V} = \mathbf{V}^{(\tau)} \) 10: end for 11: end for Output: \( \mathbf{W}^{(\tau)}, \mathbf{V}^{(\tau)} \) and \( \mathbf{b}^{(\tau)} \) 3.1 INCREASED EFFICIENCY BY FITTING MARGINALS Although Algorithm 1 runs in time linear in the dimensions of the model parameters and the input datasets, it can be computationally expensive when there are more than a few thousand labels. To further reduce the running time of our algorithm, we note that in practice, each instance is often associated with \( \ell_i \ll c \) labels. To speed up the training, at each timestep \( \tau \), rather than attempting to minimize the divergence between the entire empirical and predicted label distributions of each instance, we sample a set of labels \( L_i^{(\tau)} \) for each instance and attempt to minimize the divergence between the empirical and predicted marginal label distributions over that set \( L_i^{(\tau)} \). Let \( \mathrm{PL}_i \) denote the set of labels assigned to the \( i \)th instance, and \( \mathrm{AL}_i \) denote the set of labels not assigned to that instance. We sample from \( \mathrm{AL}_i \) to form a set \( \mathrm{NL}_i^{(\tau)} \), and use \( L_i^{(\tau)} = \mathrm{PL}_i \cup \mathrm{NL}_i^{(\tau)} \). This leads to the modified objective \[ J_{\text{Marginal}}^{(\tau)}(\mathbf{W}, \mathbf{V}, \mathbf{b}) = - \sum_{i=1}^n \sum_{j \in L_i^{(\tau)}} G_{ij} \log \frac{\exp(\sigma(\mathbf{X}\mathbf{W})_i (\mathbf{V}^T)_j + b_j)}{\sum_{k \in L_i^{(\tau)}} \exp(\sigma(\mathbf{X}\mathbf{W})_i (\mathbf{V}^T)_{k} + b_k)}. \] Note that \( J^{(\tau)} \) is a random function that changes at each timestep. Minimizing this stochastic objective effectively seeks SEM parameters which fit all the randomly sampled marginals encountered during training. Thus it is important to sample the sets \( \mathrm{NL}_i \) so that the selected marginals capture non-trivial information about the label distributions. One can imagine that uniformly sampling from \( \mathrm{AL}_i \) will not provide very informative marginals. As an improvement on this naive scheme, we sample labels from \( \mathrm{AL}_i \) with probability proportional to their frequency of occurrence in the training data set. The number of negative labels is set to be \( \beta \) times the number of positive labels, i.e., \( |\mathrm{NL}_i| = \beta |\mathrm{PL}_i| = \beta \ell_i \). Further, when \( m > 1 \), to facilitate efficient BLAS operations while mini-batching, we use the same marginals for each instance in the same minibatch, i.e., we fit marginals over \( L^{(\tau)} := \bigcup_{i \in I_z} L_i^{(\tau)} \), where \( I_z \) denotes the set of instances in the current minibatch.
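As a concrete illustration of the sampling scheme just described, the following NumPy sketch (ours, not the authors' code) forms \( L_i^{(\tau)} = \mathrm{PL}_i \cup \mathrm{NL}_i^{(\tau)} \) for a single instance, drawing negative labels from \( \mathrm{AL}_i \) with probability proportional to their training-set frequency; all names, and the cap on \( |\mathrm{NL}_i| \) when few labels are absent, are our own additions.

```python
import numpy as np

def sample_marginal_labels(y_i, label_freq, beta, rng):
    """Form L_i = PL_i ∪ NL_i for one instance (Section 3.1).

    y_i : (c,) binary label vector        label_freq : (c,) label frequencies
    beta : target ratio |NL_i| / |PL_i|   rng : np.random.Generator
    """
    pos = np.flatnonzero(y_i == 1)                  # PL_i, the assigned labels
    absent = np.flatnonzero(y_i == 0)               # AL_i, the unassigned labels
    p = label_freq[absent] / label_freq[absent].sum()
    n_neg = min(int(beta * len(pos)), len(absent))  # |NL_i| = beta * l_i, capped
    neg = rng.choice(absent, size=n_neg, replace=False, p=p)
    return np.union1d(pos, neg)                     # L_i = PL_i ∪ NL_i

# Within a mini-batch I_z, one would take the union of these sets over i in I_z,
# so that a single marginal softmax can be computed with dense BLAS operations.
```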
In the experiments presented in Section 4, we found that \( \beta \) around 10 suffices when \( c \) is relatively small, and \( \beta \) around 100 suffices when \( c \) is on the order of tens of thousands. 3.2 LABEL PREDICTION We present two methods for predicting the labels for a new instance \( \mathbf{x} \in \mathbb{R}^d \) given the fitted SEM parameters. The first uses the generative model behind SEM: form \( \mathbf{h} = \sigma(\mathbf{x}^T \mathbf{W}) \mathbf{V}^T + \mathbf{b}^T \) and note that the probability that the \( j \)th label is assigned to that instance is given by \[ \mathbb{P}(y_j = 1) = \exp(h_j) / \sum_{k=1}^c \exp(h_k). \] Accordingly, we assign the most probable labels to x. We call this prediction scheme the direct SEM method; it simply requires choosing the labels corresponding to the largest entries of h. The second method builds a kernel classifier in the semantic space obtained from the SEM factorization. Following Mineiro & Karampatziakis (2015), a classifier is trained on these semantic representations by solving the optimization problem \[ \min_{\mathbf{Z} \in \mathbb{R}^{c \times s}} \sum_{i=1}^n \ell(\mathbf{Y}_i, \mathbf{Z}\psi(\mathbf{x}_i\mathbf{W})) + \lambda \| \mathbf{Z} \|_F^2, \] where \( \ell \) is the log-loss penalty and \( \psi : \mathbb{R}^r \to \mathbb{R}^s \) is an s-dimensional Random Fourier Feature (RFF) map (Rahimi & Recht 2007): \[ \psi(\mathbf{x}) = \cos \left( \Phi \mathbf{x} + \boldsymbol{\theta} \right), \] where \( \Phi \in \mathbb{R}^{s \times r} \) is a matrix of i.i.d. standard Gaussians and \( \boldsymbol{\theta} \in [0, 2\pi)^s \) is a vector of i.i.d. uniform samples from \([0, 2\pi)\). At test time, the predicted label probabilities for an instance x are given by \( \mathbf{Z}\psi(\mathbf{x}\mathbf{W}) \), so we assign the most probable labels according to this model. We refer to this scheme as the kernelized SEM method. 4 EXPERIMENTS In the sequel we refer to the direct SEM scheme as simply SEM, and the kernelized SEM scheme as SEM-K. We compare SEM and SEM-K with several alternative multi-label learning algorithms: NNML (Nam et al. 2014), REmbed (Mineiro & Karampatziakis 2015), SLEEC (Bhatia et al. 2015), and BMLPL (Rai et al. 2015). We do not compare to the models proposed in (Tai & Lin 2010; Chen & Lin 2012; Bi & Kwok 2013; Yu et al. 2014; Prabhu & Varma 2014) because earlier works (Yu et al. 2014; Bhatia et al. 2015) have shown that they are inferior to SLEEC. 4.1 DATASETS Table 1 summarizes the eight datasets used in our experiments. Here \( n_{train} \) and \( n_{test} \) are the numbers of training and testing instances, d is the number of features, c is the number of labels/classes, and the avg(\( \ell_i \)) column reports the average number of labels per instance. In these datasets, the number of labels varies from 23 to 30938, the average label cardinality varies from 2.508 to 19.020, and the number of instances in different classes varies over a large range. Thus predicting the label assignments correctly over this collection of datasets is a challenging task.
<table> <tr> <th>Dataset</th> <th>Domain</th> <th>\( n_{train} \)</th> <th>\( n_{test} \)</th> <th>d</th> <th>c</th> <th>avg(\( \ell_i \))</th> </tr> <tr> <td><i>MSRC</i></td> <td>image</td> <td>296</td> <td>295</td> <td>512</td> <td>23</td> <td>2.508</td> </tr> <tr> <td><i>Corel5K</i></td> <td>image</td> <td>4500</td> <td>500</td> <td>499</td> <td>374</td> <td>3.522</td> </tr> <tr> <td><i>SUN</i></td> <td>image</td> <td>12906</td> <td>1434</td> <td>512</td> <td>102</td> <td>15.526</td> </tr> <tr> <td><i>Delicious</i></td> <td>text</td> <td>12920</td> <td>3185</td> <td>500</td> <td>983</td> <td>19.020</td> </tr> <tr> <td><i>EurLex-sub</i></td> <td>text</td> <td>17413</td> <td>1935</td> <td>5000</td> <td>201</td> <td>2.213</td> </tr> <tr> <td><i>Mediamill</i></td> <td>video</td> <td>30993</td> <td>12914</td> <td>210</td> <td>101</td> <td>4.736</td> </tr> <tr> <td><i>Eurlex-des</i></td> <td>text</td> <td>17413</td> <td>1935</td> <td>5000</td> <td>3993</td> <td>5.31</td> </tr> <tr> <td><i>Wiki10K</i></td> <td>text</td> <td>14146</td> <td>6616</td> <td>101938</td> <td>30938</td> <td>18.64</td> </tr> </table> 4.2 METHODOLOGY The code for the methods we compare to was provided by their authors; in particular, we note that the computationally intensive portions of REmbed, SLEEC and NNML are implemented in C, whereas our algorithms are implemented entirely in MATLAB. Because each method has several parameters, we hand-tuned the parameters for each dataset as suggested by the authors. All methods were run in MATLAB on a Windows server with 4GB memory and four 2.3GHz CPUs with eight cores. The prediction performance of each algorithm is evaluated according to widely-used metrics in the field of multi-label classification, viz., label-based Macro-F1 (MaF1) and Micro-F1 (MiF1) and instance-based Precision-at-k (P@k, esp. P@1 and P@3) (Zhang & Zhou 2014). MaF1 and MiF1 require predefining a threshold to determine the number of labels to be assigned to the testing data. In our experiments, the number of labels assigned to each testing instance was set according to its ground truth. Table 2: The classification performance of six multi-label classification algorithms (NNML, BMLPL, REmbed, SLEEC and the proposed SEM and SEM-K). The best and second best results are respectively bolded and underlined for each evaluation measure.
<table> <tr> <th rowspan="2"></th> <th colspan="4">MSRC</th> <th colspan="4">Corel5K</th> </tr> <tr> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> </tr> <tr> <td>NNML</td> <td>0.4086</td> <td>0.5944</td> <td>0.7356</td> <td>0.5073</td> <td>0.0547</td> <td>0.2967</td> <td>0.4020</td> <td>0.3047</td> </tr> <tr> <td>BMLPL</td> <td>0.4592</td> <td>0.6199</td> <td>0.7017</td> <td>0.5288</td> <td>0.0315</td> <td>0.2779</td> <td>0.3940</td> <td>0.2820</td> </tr> <tr> <td>REmbed</td> <td>0.3537</td> <td>0.5128</td> <td>0.5322</td> <td>0.4384</td> <td>0.0450</td> <td>0.2144</td> <td>0.3060</td> <td>0.2247</td> </tr> <tr> <td>SLEEC</td> <td>0.4973</td> <td>0.6314</td> <td>0.7353</td> <td>0.5243</td> <td>0.0534</td> <td>0.3188</td> <td>0.4360</td> <td>0.3287</td> </tr> <tr> <td>SEM</td> <td>0.5064</td> <td>0.6173</td> <td>0.7220</td> <td>0.5333</td> <td><b>0.0623</b></td> <td><b>0.3188</b></td> <td><b>0.4320</b></td> <td><b>0.3293</b></td> </tr> <tr> <td>SEM-K</td> <td><b>0.5770</b></td> <td><b>0.6492</b></td> <td><b>0.7458</b></td> <td><b>0.5525</b></td> <td>0.0589</td> <td>0.2649</td> <td>0.3600</td> <td>0.2773</td> </tr> <tr> <th rowspan="2"></th> <th colspan="4">SUN</th> <th colspan="4">Mediamill</th> </tr> <tr> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> </tr> <tr> <td>NNML</td> <td>0.2807</td> <td>0.5248</td> <td>0.9421</td> <td>0.8580</td> <td>0.0819</td> <td>0.5890</td> <td>0.8260</td> <td>0.6675</td> </tr> <tr> <td>BMLPL</td> <td>0.1897</td> <td>0.4766</td> <td>0.9024</td> <td>0.8001</td> <td>0.0855</td> <td>0.6012</td> <td>0.8478</td> <td>0.6854</td> </tr> <tr> <td>REmbed</td> <td>0.3408</td> <td>0.5125</td> <td>0.9393</td> <td>0.8591</td> <td>0.2634</td> <td>0.6371</td> <td>0.8741</td> <td>0.6988</td> </tr> <tr> <td>SLEEC</td> <td>0.2935</td> <td>0.5256</td> <td>0.9484</td> <td>0.8656</td> <td><b>0.2851</b></td> <td><b>0.6546</b></td> <td><b>0.8899</b></td> <td><b>0.7158</b></td> </tr> <tr> <td>SEM</td> <td>0.3648</td> <td><b>0.5486</b></td> <td>0.9365</td> <td>0.8642</td> <td>0.1593</td> <td>0.6296</td> <td>0.8746</td> <td>0.6996</td> </tr> <tr> <td>SEM-K</td> <td><b>0.3703</b></td> <td>0.5466</td> <td><b>0.9575</b></td> <td><b>0.8787</b></td> <td>0.2570</td> <td><b>0.6717</b></td> <td><b>0.8953</b></td> <td><b>0.7278</b></td> </tr> <tr> <th rowspan="2"></th> <th colspan="4">Delicious</th> <th colspan="4">Eurlex-sub</th> </tr> <tr> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> <th>MaF1</th> <th>MiF1</th> <th>P@1</th> <th>P@3</th> </tr> <tr> <td>NNML</td> <td>0.1721</td> <td>0.3963</td> <td>0.6687</td> <td><b>0.6169</b></td> <td>0.5761</td> <td>0.8487</td> <td>0.9173</td> <td>0.6267</td> </tr> <tr> <td>BMLPL</td> <td>0.1061</td> <td>0.3739</td> <td>0.6378</td> <td>0.5772</td> <td>0.1459</td> <td>0.6011</td> <td>0.6789</td> <td>0.4697</td> </tr> <tr> <td>REmbed</td> <td>0.1549</td> <td>0.3713</td> <td>0.6353</td> <td>0.572</td> <td>0.5335</td> <td>0.8031</td> <td>0.8785</td> <td>0.5977</td> </tr> <tr> <td>SLEEC</td> <td>0.1257</td> <td>0.3859</td> <td>0.6674</td> <td>0.6112</td> <td>0.5433</td> <td>0.8461</td> <td>0.9152</td> <td>0.6191</td> </tr> <tr> <td>SEM</td> <td><b>0.1941</b></td> <td><b>0.3980</b></td> <td><b>0.6727</b></td> <td>0.6162</td> <td>0.5652</td> <td>0.8339</td> <td>0.8971</td> <td>0.6188</td> </tr> <tr> <td>SEM-K</td> <td>0.1675</td> <td>0.3886</td> <td>0.6658</td> <td>0.6112</td> <td><b>0.5807</b></td> <td><b>0.8494</b></td> <td><b>0.9188</b></td> <td><b>0.6269</b></td> </tr> </table> Table 3: The running times, in seconds, of six multi-label classification algorithms (NNML, BMLPL, REmbed, SLEEC and the proposed
SEM and SEM-K) for differing training sizes on the Mediamill dataset. <table> <tr> <th>\( n_{train} \)</th> <th>NNML</th> <th>BMLPL</th> <th>REmbed</th> <th>SLEEC</th> <th>SEM</th> <th>SEM-K</th> </tr> <tr> <td>439</td> <td>327.57</td> <td>10.29</td> <td>2.07</td> <td>16.11</td> <td>0.60</td> <td>1.50</td> </tr> <tr> <td>1756</td> <td>1333.91</td> <td>20.35</td> <td>3.02</td> <td>57.16</td> <td>2.41</td> <td>4.29</td> </tr> <tr> <td>3073</td> <td>2363.02</td> <td>48.2</td> <td>4.14</td> <td>145.88</td> <td>4.36</td> <td>6.99</td> </tr> <tr> <td>4391</td> <td>3264.79</td> <td>41.72</td> <td>5.45</td> <td>227.76</td> <td>6.65</td> <td>10.10</td> </tr> <tr> <td>8781</td> <td>4428.09</td> <td>84.09</td> <td>10.83</td> <td>815.66</td> <td>12.29</td> <td>21.73</td> </tr> <tr> <td>13172</td> <td>5170.00</td> <td>119.09</td> <td>17.04</td> <td>1041.07</td> <td>18.39</td> <td>26.49</td> </tr> <tr> <td>17563</td> <td>5170.17</td> <td>185.05</td> <td>20.90</td> <td>1692.7</td> <td>24.22</td> <td>42.21</td> </tr> <tr> <td>21954</td> <td>5297.75</td> <td>225.96</td> <td>44.20</td> <td>1772.52</td> <td>30.10</td> <td>50.64</td> </tr> <tr> <td>26344</td> <td>5947.94</td> <td>235.93</td> <td>52.75</td> <td>1985.82</td> <td>35.95</td> <td>59.42</td> </tr> <tr> <td>30735</td> <td>6604.93</td> <td>275.06</td> <td>58.74</td> <td>2181.48</td> <td>41.37</td> <td>61.30</td> </tr> </table> 4.3 PERFORMANCE ON DATASETS WITH SMALL LABEL SETS First we compare the performance on the six multi-label learning problems with \( c < 1000 \). To fit both SEM models, we take the number of epochs to be 30 and the mini-batch size to be 200 (i.e., \( T = 30 \) and \( m = 200 \) in Algorithm 1), and because \( c \) is small, we fit the full label distributions. The classification performances of our SEM algorithms and the baseline methods are shown in Table 2. SEM or SEM-K outperforms the alternative algorithms in most cases. Table 3 compares the running times of the algorithms as the size of the dataset is increased, using Mediamill. We see that SEM is the fastest model, followed by REmbed, then closely by SEM-K; the remaining three models are significantly more costly. It is clear that NNML, the previous neural network approach to multi-label learning, costs the most. For the other five algorithms, the latent space dimensionality \( r \) is set to 50. SLEEC is expensive because it constructs the nearest neighbor graph among the training data and computes the top \( r \) eigenvectors of the corresponding similarity matrix, which costs \( O(n^2 r + d^2 r) \). REmbed is efficient because its main cost is finding the singular vectors of a \( c \times (r + q) \) matrix (here \( c \) is the number of labels and \( q \) is a small integer), but its performance is inferior to SEM-K. The BMLPL code provided by the authors applies SVD to the training data to initialize the model parameters and then uses conjugate gradient to update them, so it costs much more than REmbed and our proposed methods. 4.4 PERFORMANCE ON DATASETS WITH LARGE LABEL SETS We proposed using SEM to fit marginals rather than the entire label distribution when \( c \) is large, for computational efficiency. To judge the effectiveness of this proposal, we compare the accuracy and running times of the SEM and SEM-K models with baselines on EurLex-des and Wiki10K, two datasets with \( c > 1000 \). As baselines, we use REmbed and SLEEC in accordance with the above discussion, which showed that these two methods are efficient and/or have good performance.
The hyperparameters in SLEEC were set according to the original authors’ code: \( r \) is 100 for EurLex-des and 75 for Wiki10K, and 3 clusters are used for Eurlex-des and 5 for Wiki10K. To fit the SEM models, we used the same value of \( r \) as SLEEC on these two datasets, with the mini-batch size and number of epochs set to 200 and 10 respectively. For REmbed, the latent space size \( r \) was tuned via cross-validation; \( r = 300 \) for Eurlex-des and \( r = 150 \) for Wiki10K. The number of Random Fourier Features is 2000 for both REmbed and SEM-K. The number of threads is set to 8 for all methods. Table 4: The classification performance of five methods (REmbed, SLEEC and the proposed SEM and SEM-K with two values of \( \beta \)) on the Eurlex-des and Wiki10K datasets. The best and second best results are respectively bolded and underlined for each evaluation metric. <table> <tr> <th></th> <th></th> <th>REmbed</th> <th>SLEEC</th> <th>SEM (\( \beta = 500 \))</th> <th>SEM-K (\( \beta = 10 \))</th> <th>SEM-K (\( \beta = 60 \))</th> </tr> <tr> <th rowspan="3">Eurlex-des</th> <td>P@1</td> <td>0.7299</td> <td>0.8017</td> <td>0.7107</td> <td>0.8024</td> <td>0.8135</td> </tr> <tr> <td>P@3</td> <td>0.6064</td> <td>0.6539</td> <td>0.5874</td> <td>0.6621</td> <td>0.6714</td> </tr> <tr> <td>P@5</td> <td>0.5060</td> <td>0.5375</td> <td>0.4916</td> <td>0.5493</td> <td>0.5563</td> </tr> <tr> <th rowspan="3">Wiki10K</th> <td>P@1</td> <td>0.6963</td> <td>0.8554</td> <td>0.8517</td> <td>0.8582</td> <td>0.8671</td> </tr> <tr> <td>P@3</td> <td>0.5790</td> <td>0.7359</td> <td>0.7133</td> <td>0.7278</td> <td>0.7385</td> </tr> <tr> <td>P@5</td> <td>0.4929</td> <td>0.6310</td> <td>0.6171</td> <td>0.6236</td> <td>0.6353</td> </tr> </table> Table 5: The running times, in seconds, of five methods (REmbed, SLEEC and the proposed SEM and SEM-K for two values of \( \beta \)) on the Eurlex-des and Wiki10K datasets. <table> <tr> <th></th> <th>REmbed</th> <th>SLEEC</th> <th>SEM (\( \beta = 500 \))</th> <th>SEM-K (\( \beta = 10 \))</th> <th>SEM-K (\( \beta = 60 \))</th> </tr> <tr> <th>Eurlex-des</th> <td>358.63</td> <td>1571.30</td> <td>1210.30</td> <td>167.10</td> <td>250.77</td> </tr> <tr> <th>Wiki10K</th> <td>2858.96</td> <td>2497.00</td> <td>2003.43</td> <td>646.48</td> <td>769.18</td> </tr> </table> Table 4 compares the classification performances of the methods on these two datasets. It is clear that SEM-K with a small set of negative labels obtains better performance than both REmbed and SLEEC.
Table 5 shows that, additionally, the SEM-K models are fit much faster than the other models. 4.5 IMPACT OF THE SIZE OF THE MARGINALS Figure 1 illustrates the impact of the choice of \( \beta \) on the prediction performance (in terms of P@1) of SEM and SEM-K. The performances of SLEEC and REmbed are included for comparison. The hyperparameters of SLEEC, REmbed and SEM were set as in Section 4.4. It is evident that the performance of SEM increases significantly and monotonically with \( \beta \). However, SEM-K is insensitive to \( \beta \) once it passes a dataset-dependent threshold (e.g., \( \beta = 60 \) for *Eurlex-des* and \( \beta = 100 \) for *Wiki10K*). Note that on *Wiki10K*, even the simpler direct SEM outperforms REmbed when there are sufficient negative labels. Figure 1: The P@1 performance of SEM and SEM-K as a function of \( \beta \), in comparison to the performances of SLEEC and REmbed on the (a) Eurlex-des and (b) Wiki10K datasets. Figure 2 illustrates the effect of \( \beta \) on the running times of SEM and SEM-K. Note that the additional time required by SEM-K to fit the classifier in the semantic space is negligible compared to the time it takes to first fit the direct SEM model. Figure 2: Running time of SEM-K under varying \( \beta \). 4.6 ADDITIONAL CONSIDERATIONS There are other important ways in which the proposed SEM methods can be compared to the baseline multi-label learning methods, including their performance as a function of the latent space dimensionality and as a function of the amount of training data. Due to space constraints, a discussion of these two concerns and of the convergence behavior of Algorithm 1 is provided in the Supplementary material. 5 CONCLUSION We proposed a new semantic embedding model (SEM) for handling the multi-label learning task. A framework based on Gauss-Seidel mini-batched adaptive gradient descent was proposed for efficiently solving the non-convex optimization problem required to learn the SEM parameters. For large label sets, we proposed fitting the SEM to marginal distributions rather than the full label distribution. A series of experiments on eight real-world datasets empirically demonstrated that the proposed method is superior to state-of-the-art methods in terms of prediction performance and running time. SUPPLEMENTARY MATERIAL A. Effect of Latent Space Dimensionality The latent space dimensionality \( r \) plays an important role in learning the latent factors \( \mathbf{V} \) and the feature mapping matrix \( \mathbf{W} \) in our proposed methods, as it does in the three baselines BMLPL, REmbed and SLEEC. In order to investigate this dependence, we conducted a series of experiments on the training data sets using 5-fold cross-validation, comparing BMLPL, REmbed, SLEEC and our proposed SEM and SEM-K. ![Two line plots showing the effect of latent space dimensionality on P@1 and MiF1 for the Delicious dataset](page_246_370_1097_246.png) (a) Delicious-P@1 (b) Delicious-MiF1 Figure 3: The effect of the latent space dimensionality \( r \) on BMLPL, REmbed, SLEEC, SEM and SEM-K in terms of MiF1 and P@1 on the Delicious dataset. In this experiment, we take the Delicious dataset as an example. The training data is separated into five folds, where four folds are used for training and one fold for validation, and the averaged results in terms of P@1 and MiF1 are given in Figure 3. It can be seen that the performance of each method usually improves with increasing \( r \) until it reaches an optimal value.
However, once \( r \) becomes too large, performance degrades. This is reasonable: when \( r \) is too small, the learned parameters cannot fully characterize the hidden semantic structure in the classification problem, while when \( r \) is too large, the benefits of dimensionality reduction are lost, as the model begins to over-fit to the idiosyncrasies of the training data rather than capturing the semantic structure common to both the training and validation data. Usually, these methods obtain good performance at small \( r \), e.g., \( r = 45 \) for the Delicious dataset. B. Effect of Training Data Size ![Two line plots showing the effect of varying training data size on P@1 and MiF1 for the Mediamill dataset](page_246_1012_1097_246.png) (a) P@1 (b) MiF1 Figure 4: Effect of varying the training data size, as a fraction of the combined test and training data, on five multi-label learning methods in terms of P@1 and MiF1 on the Mediamill dataset. Meanwhile, we studied the label prediction performance as a function of the amount of labeled training data. In this experiment, we fixed the testing data size and randomly selected training data from the training set so that the training data size varies from 1% to 70% of the combined training and testing data. In order to avoid the presence of empty categories and instances with no labels, at least one instance is kept for each label and at least one label is kept for each instance during this sampling process. For each fixed size of the training set, the desired amount of data is randomly sampled ten times, and the resulting average P@1 and MiF1 on the testing data are recorded. During training, the latent dimensionality parameter \( r \) is selected via 5-fold cross-validation. Figure 4 shows these results for the Mediamill dataset, which contains the largest number of instances. As expected, the performance of all the methods is positively correlated with the size of the training data set, and we also see that the proposed SEM-K uniformly outperforms the other methods regardless of the training data size. As it is often expensive to obtain large labeled data sets in real applications, this observation suggests that SEM-K is a better choice for these situations. C. Convergence In order to demonstrate the convergence of the proposed method, we show the value of the objective function (4) (at \( r = 45 \)) in Figure 5(a) and the prediction result (P@1) in Figure 5(b) as a function of the number of passes over the dataset (i.e., \( \tau \) in Algorithm 1). It can be seen that SEM converges and its prediction performance becomes stable in fewer than 50 epochs, which helps SEM scale to large data. ![Convergence curve and P@1 curve plots](page_328_670_924_246.png) Figure 5: Performance of the proposed SEM method (with \( r = 45, \rho = 0.1 \)) on the Delicious dataset: (a) objective function value relative to the minimum and (b) prediction result in terms of P@1, where the x-axis represents the number of passes over the dataset.
reject
Reject
4.333333
8cac3a5e5b053ebec1cf195ec0c59ea4d4eb0352
iclr
2017
UNDERSTANDING INTERMEDIATE LAYERS USING LINEAR CLASSIFIER PROBES Guillaume Alain & Yoshua Bengio Department of Computer Science and Operations Research Université de Montréal Montreal, QC. H3C 3J7 guillaume.alain.umontreal@gmail.com ABSTRACT Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers. This has direct consequences for the design of such models, and it enables the expert to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as “probes”, where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems. 1 INTRODUCTION The recent history of deep neural networks features an impressive number of new methods and technological improvements to allow the training of deeper and more powerful networks. Despite this, models still have a reputation for being black boxes. Neural networks are criticized for their lack of interpretability, which is a tradeoff that we accept because of their amazing performance on many tasks. Efforts have been made to identify the role played by each layer, but it can be hard to assign a meaning to individual layers. There are good arguments to support the claim that the first layers of a convolutional network for image recognition contain filters that are relatively “general”, in the sense that they would work great even if we switched to an entirely different dataset of images. The last layers are specific to the dataset being used, and have to be retrained when using a different dataset. In [Yosinski et al. (2014)] the authors try to pinpoint the layer at which this transition occurs, but they show that the exact transition is spread across multiple layers. In this paper, we introduce the concept of the linear classifier probe, referred to as a “probe” for short when the context is clear. We start from the concept of Shannon entropy, which is the classic way to describe the information contents of a random variable. We then seek to apply that concept to understand the roles of the intermediate layers of a neural network, to measure how much information is gained at every layer (answer: technically, none). We argue that it fails to apply, and so we propose an alternative framework to ask the same question again. This time around, we ask what would be the performance of an optimal linear classifier if it was trained on the inputs of a given layer from our model. We demonstrate how this powerful concept can be very useful to understand the dynamics involved in a deep neural network during training and after. 2 INFORMATION THEORY It was a great discovery when Claude Shannon repurposed the notion of entropy to represent information contents in a formal way. It laid the foundations for the discipline of information theory. We would refer the reader to the first chapters of [MacKay (2003)] for a good exposition on the matter. Naturally, we would like to ask some questions about the information contents of the many layers of convolutional neural networks. • What happens when we add more layers?
• Where does information flow in a neural network with multiple branches? • Does having multiple auxiliary losses help? (e.g. Inception model) Intuitively, for a training sample \( x_i \) with its associated label \( y_i \), a deep model is getting closer to the correct answer in the higher layers. It starts with the difficult job of classifying \( x_i \), which becomes easier as the higher layers distill \( x_i \) into a representation that is easier to classify. One might be tempted to say that this means that the higher layers have more *information* about the ground truth, but this would be incorrect. Here there is a mismatch between two different concepts of information. The notion of entropy *fails* to capture the essence of those questions. This is illustrated in a formal way by the *Data Processing Inequality*. It states that, for a set of three random variables satisfying the dependency \[ X \rightarrow Y \rightarrow Z, \] we have \[ I(X;Z) \leq I(X;Y), \] where \( I(X;Y) \) is the mutual information. Intuitively, this means that the deterministic transformations performed by the many layers of a deep neural network are not adding more information. In the best case, they preserve information and affect only the representation. But in almost all situations, they lose some information in the process. If we distill this further, we can think of the serious mismatch between the two following ideas: • Part of the genius of the notion of entropy is that it distills the essence of information to a quantity that does not depend on the particular representation. • A deep neural network is a series of simple deterministic transformations that affect the representation so that the final layer can be fed to a linear classifier. The former ignores the representation of data, while the latter is an expert in finding good representations. A deaf painter is working on a visual masterpiece to offer to a blind musician who plays music for him. We need a conceptual tool to analyze neural networks in a way that corresponds better to our intuitive notion of information. The role of data representation is important, but we would also argue that we have to think about this issue as it relates to computational complexity. A linear classifier is basically the simplest form of classifier that is neither trivial nor degenerate. We define a new notion of information that depends on our ability to classify features of a given layer with an optimal linear classifier. Then we have a conceptual tool to ask new questions and to get potentially interesting answers. We end this section with a conceptual example in Figure 1. If \( X \) contains an image of the savannah, and \( Y \in \{0,1\} \) refers to whether it contains a lion or not, then none of the subsequent layers are truly more informative than \( X \) itself. The raw bits from the picture file contain everything. 3 LINEAR CLASSIFIER PROBES In section 3.1 we present the main concept of this paper. We illustrate the concept in section 3.3. We then present a basic experiment in section 3.4. In section 3.6 we modify a very deep network in two different ways and we show how probes allow us to visualize the consequences (sometimes disastrous) of our design choices. (a) hex dump of picture of a lion (b) same lion in human-readable format Figure 1: The hex dump represented on the left has more information contents than the image on the right. Only one of them can be processed by the human brain in time to save their lives.
Computational convenience matters. Not just entropy. 3.1 PROBES As we discussed in the previous section, there is indeed a good reason to use many deterministic layers: they perform useful transformations to the data with the goal of ultimately fitting a linear classifier at the very end. That is the purpose of the many layers. They are a tool to transform data into a form to be fed to a boring linear classifier. With this in mind, it is natural to ask whether that transformation is sudden or progressive, and whether the intermediate layers already have a representation that is immediately useful to a linear classifier. We refer the reader to Figure 2 for a diagram of probes being inserted in the usual deep neural network. ![Diagram showing probes being added to every layer of a model](page_682_749_573_180.png) Figure 2: Probes being added to every layer of a model. These additional probes are not supposed to change the training of the model, so we add a little diode symbol through the arrows to indicate that the gradients will not backpropagate through those connections. The conceptual framework that we propose is one where the intuitive notion of information is equivalent to immediate suitability for a linear classifier (instead of being related to entropy). Just to be absolutely clear about what we call a linear classifier, we mean a function \[ f : H \to [0, 1]^D \\ h \mapsto \text{softmax } (Wh + b). \] where \( h \in H \) are the features of some hidden layer, \([0, 1]^D\) is the space of one-hot encodings of the \(D\) target classes, and \((W, b)\) are the probe weights and biases to be learned so as to minimize the usual cross-entropy loss. Over the course of training a model, the parameters of the model change. However, probes only make sense when we refer to a given training step. We can talk about the probes at iteration \(n\) of training, when the model parameters are \(\theta_n\). **These parameters are not affected by the probes.** We prevent backpropagation through the model either by stopping the gradient flow (done with `tf.stop_gradient` in tensorflow), or simply by specifying that the only variables to be updated are the probe parameters, while we keep \(\theta_n\) frozen. 3.1.1 TRAINING THE PROBES For the purposes of this paper, we train the probes up to convergence with fixed model parameters, and we report the prediction error on the training set. It is absolutely possible to train the probes simultaneously while training the model itself. This is a good approach if we consider how long it can take to train the model. However, this creates a potential problem if we optimize the loss of the model more quickly than the loss of the probes. This can present a skewed view of the actual situation that we would have if we trained the probes until convergence before updating the model parameters. If we accept this trade-off, then we can train the probes at the same time as the model. In some situations, the probes might overfit the training set, so we may want to do early stopping on the validation set and report the performance for the probes on the test set. This is what we do in section 3.4 with the simple MNIST convnet. We are still unsure if one of those variations should be preferred in general, and right now they all seem acceptable so long as we interpret the probe measurements properly. Note that training those probes represents a convex optimization problem. In practice, this does not guarantee that they are easy to train.
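To make the probe definition concrete, here is a minimal sketch of training one probe on frozen features. The features, labels, and sizes are placeholders; in a tensorflow model one would feed `tf.stop_gradient(h)` into the probe, while here we simply treat `h` as constant data, which has the same effect:

```python
import numpy as np

rng = np.random.RandomState(0)
N, H, D = 512, 64, 10                 # samples, feature dimension, classes
h = rng.randn(N, H)                   # frozen hidden features (no gradient flows back)
y = rng.randint(0, D, size=N)         # integer class labels

W = np.zeros((H, D))
b = np.zeros(D)
lr = 0.1

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for step in range(500):               # the probe loss is convex, so plain GD suffices
    p = softmax(h @ W + b)            # (N, D) predicted class probabilities
    p[np.arange(N), y] -= 1.0         # d(cross-entropy)/d(logits) = p - onehot(y)
    W -= lr * h.T @ p / N             # gradients touch only the probe parameters
    b -= lr * p.mean(axis=0)

pred = (h @ W + b).argmax(axis=1)
print("probe training prediction error:", float((pred != y).mean()))
```

Since the problem is convex, the final (W, b) can also warm-start the probe at the next checkpoint \(\theta_{n+1}\), as noted below.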
However, it is reassuring because it means that probes taken at time \( \theta_n \) can be used as initialization for probes at time \( \theta_{n+1} \). We use cross-entropy as the probe loss because all models studied here used cross-entropy. Other alternative losses could be justified in other settings. 3.2 PROBES ON A BIFURCATING TOY MODEL Here we show a hypothetical example in which a model contains a bifurcation with two paths that later recombine. We are interested in knowing whether those two branches are useful, or whether one is potentially redundant or useless. ![A diagram showing a bifurcating neural network with two paths, each with a prediction error labeled (0.75, 0.60), and their combination (0.45). The upper path is labeled "probe prediction error".](page_495_820_670_180.png) For example, the two different branches might contain convolutional layers with different dimensions. They may have a different number of sublayers, or one might represent a skip connection. We assume that the branches are combined through concatenation of their features, so that nothing is lost. For this hypothetical situation, we indicate the probe prediction errors on the graphical model. The upper path has a prediction error of 0.75, the lower path has 0.60, and their combination has 0.45. Small errors are preferred. Although the upper path has “less information” than the lower path, we can see here that it is not redundant information, because when we concatenate the features of the two branches we get a prediction error of \( 0.45 < 0.60 \). If the concatenated layer had a prediction error of 0.60 instead of 0.45, then we could declare that the upper branch did nothing useful. It may have nonzero weights, but it would still be useless. Naturally, this kind of conclusion might be entirely wrong. It might be the case that the branch above contains very meaningful features, and they simply happen to be useless to a linear classifier applied right there. The idea of using linear classification probes to understand the roles of different branches is suggested as a heuristic instead of a hard rule. Moreover, if the probes are not optimized perfectly, the conclusions drawn can be misleading. Note that we are reporting prediction errors here; it might be the case that the loss is indeed lower when we concatenate the two branches, but that this fails to translate into a lower prediction error. Figure 3: Toy experiment described in section 3.3 with linearly separable data (two labels), an untrained MLP with 32 layers, and probes at every layer. We report the prediction error for every probe, where 0.50 would be the performance of a coin flip and 0.00 would be ideal. Note that layer 0 here corresponds to the raw data, and the probes are indeed able to classify it perfectly. As expected, performance degrades when applying random transformations. If many more layers were present, it would be hard to imagine how the final layer (with the model loss) could get any useful signal to backpropagate. 3.3 PROBES ON UNTRAINED MODEL We start with a toy example to illustrate what kind of plots we expect from probes. We use a 32-layer MLP with 128 hidden units. All the layers are fully-connected and we use LeakyReLU(0.5) as the activation function. We will run the same experiment 100 times, with a different toy dataset each time. The goal is to use a data distribution \((X, Y)\) where \(X \in \mathbb{R}^{128}\) is drawn from \(\mathcal{N}(0, I)\) and where \(Y \in \{-1, 1\}\) is linearly separable (i.e.
super easy to classify with a one-layer neural network). To do this, we just pick a \(w \in \mathbb{R}^{128}\) for each experiment, and let the label \(y_n\) be the sign of \(x_n^T w\). We initialize this 32-layer MLP using glorot_normal initialization, we do not perform any training on the model, and we add one probe at every layer. We optimize the probes with RMSProp and a sufficiently small learning rate. In Figure 3, we show the prediction error rate for every probe, averaged over the 100 experiments. The graph includes a probe applied directly on the inputs \(X\), where we naturally have an error rate that is essentially zero (to be expected by the way we constructed our data), and which serves as a kind of sanity check. Given that we have only two possible labels, we also show a dotted horizontal line at 0.50, which is essentially the prediction error that we would get by flipping a coin. We can see that the prediction error rate climbs up towards 0.50 as we go deeper in the MLP (with untrained parameters). This illustrates the idea that the input signal is getting mangled by the successive layers, so much that it becomes rather useless by the time we reach the final layer. We checked the mean activation norm of the hidden units at layer 32 to be sure that numerical underflow was not the cause of the degradation. Note that this situation could be avoided by using orthogonal weights. One of the popular explanations for training difficulties in very deep models is that of exploding/vanishing gradients (Hochreiter, 1991; Bengio et al., 1993). Here we would like to offer another, complementary explanation, based on the observations from Figure 3. That is, at the beginning of training, the usefulness of layers decays as we go deeper, reaching the point where the deeper layers are utterly useless. The values contained in the last layer are then used in the final softmax classifier, and the loss backpropagates the values of the derivatives. Since that derivative is based on garbage activations, the backpropagated quantities are also garbage, which means that the weights are all going to be updated based on garbage. The weights stay bad, and we fail to train the model. The authors like to refer to that phenomenon as *garbage forwardprop, garbage backprop*, in reference to the popular concept of *garbage in, garbage out* in computer science.
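The section 3.3 experiment is small enough to sketch end to end. The sketch below is our own reconstruction: the sample count and the use of scikit-learn's logistic regression as the probe are conveniences, not choices from the paper. It pushes linearly separable data through an untrained 32-layer MLP with LeakyReLU(0.5) and glorot_normal-style weights, fitting a probe at a few depths:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
N, H, L = 2000, 128, 32
X = rng.randn(N, H)
w = rng.randn(H)
y = np.where(X @ w >= 0, 1, -1)          # labels in {-1, +1}, linearly separable

def glorot_normal(fan_in, fan_out, rng):
    return rng.randn(fan_in, fan_out) * np.sqrt(2.0 / (fan_in + fan_out))

h = X
for layer in range(L + 1):               # layer 0 is the raw input
    if layer % 8 == 0:                   # fit a probe on this layer's activations
        probe = LogisticRegression(max_iter=1000).fit(h, y)
        print("layer %2d probe error %.3f" % (layer, 1.0 - probe.score(h, y)))
    if layer < L:
        h = h @ glorot_normal(H, H, rng)  # untrained weights
        h = np.where(h > 0, h, 0.5 * h)   # LeakyReLU with slope 0.5
```

Layer 0 classifies perfectly by construction; the probes on deeper random layers drift toward the coin-flip error of 0.50, which is the shape of Figure 3.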
3.4 PROBES ON MNIST CONVNET In this section we run the MNIST convolutional model provided by the tensorflow github repo (tensorflow/models/image/mnist/convolutional.py). We selected that model for reproducibility and to demonstrate how to easily peek into popular models by using probes. We start by sketching the model in Figure 4. We report the results at the beginning and the end of training in Figure 5. One of the interesting dynamics to be observed there is how useful the first layers are, despite the fact that the model is completely untrained. Random projections can be useful to classify data, and this has been studied by others (Jarrett et al., 2009). ![A neural network architecture diagram showing convolutional and fully-connected layers, with convolution, ReLU, maxpool, matmul, and output logits.](page_312_186_972_120.png) Figure 4: This graphical model represents the neural network that we are going to use for MNIST. The model could be written in a more compact form, but we represent it this way to expose all the locations where we are going to insert probes. The model itself is simply two convolutional layers followed by two fully-connected layers (one being the final classifier). However, we insert probes on each side of each convolution, activation function, and pooling function. This is a bit overzealous, but the small size of the model makes this relatively easy to do. ![Two line plots showing test prediction error versus epochs for different layers, labeled (a) After initialization, no training. and (b) After training for 10 epochs.](page_120_420_1342_246.png) Figure 5: We represent here the test prediction error for each probe, at the beginning and at the end of training. This measurement was obtained through early stopping based on a validation set of \(10^4\) elements. The probes are prevented from overfitting the training data. We can see that, at the beginning of training (on the left), the randomly-initialized layers were still providing useful transformations. The test prediction error goes from 8% to 2% simply using those random features. The biggest impact comes from the first ReLU. At the end of training (on the right), the test prediction error is improving at every layer (with the exception of a minor kink on fcl_preact). 3.5 PROBES ON INCEPTION V3 We have performed an experiment using the Inception v3 model on the ImageNet dataset (Szegedy et al., 2015; Russakovsky et al., 2015). This is very similar to what is presented in section 3.4 but on a much larger scale. Due to the challenge presented by this experiment, we were not able to do everything that we had hoped. We have chosen to put those results in the appendix, section A.2. Certain layers of the Inception v3 model have approximately one million features. With 1000 classes, this means that some probes can take even more storage space than the whole model itself. In these cases, one of the creative solutions was to use only a random subset of the features. This is discussed in the appendix, section A.1. 3.6 AUXILIARY LOSS BRANCHES AND SKIP CONNECTIONS Here we investigate two ways to modify a deep model in order to facilitate training. Our goal is not to convince the reader that they should implement these suggestions in their own models. Rather, we want to demonstrate the usefulness of the linear classifier probes as a way to better understand what is happening in deep networks. In both cases we use a toy model with 128 fully-connected layers with 128 hidden units in each layer. We train on MNIST, and we use Glorot initialization along with leaky ReLUs. We chose this model because we wanted a pathologically deep model without getting involved in architecture details. The model is pathological in the sense that smaller models can easily be designed to achieve better performance, but also in the sense that the model is so deep that it is very hard to train it with gradient descent methods. In our experiments, depth 64 was where things started to break down, hence the choice here of depth 128. In the first scenario, we add one linear classifier at every 16 layers. These classifiers contribute to the loss minimization. They are not probes. This is very similar to what happens in the famous Inception model, where “auxiliary heads” are used (Szegedy et al., 2015). This is illustrated in Figure 6a, and it works nicely: the untrainable model is now made trainable through a judicious use of auxiliary classifier losses.
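A sketch of this first scenario, in tensorflow-1.x style. The layer names, the optimizer, and the leaky-ReLU slope are our assumptions; the text only specifies 128 layers of 128 units with a classifier head every 16 layers. The crucial difference from probes is that gradients do flow from the auxiliary heads into the model:

```python
import tensorflow as tf  # tensorflow 1.x style API

x = tf.placeholder(tf.float32, [None, 784])   # flattened MNIST digits
y = tf.placeholder(tf.int64, [None])

h, losses = x, []
for i in range(128):
    h = tf.nn.leaky_relu(tf.layers.dense(h, 128, name="layer_%d" % i), alpha=0.5)
    if (i + 1) % 16 == 0 and i + 1 < 128:     # auxiliary head: gradients DO flow back
        aux = tf.layers.dense(h, 10, name="aux_%d" % i)
        losses.append(tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=aux)))

logits = tf.layers.dense(h, 10, name="head")  # usual model loss at the last layer
losses.append(tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)))
train_op = tf.train.AdamOptimizer(1e-4).minimize(tf.add_n(losses))
```

Probes would instead be attached to `tf.stop_gradient(h)` and kept out of the summed loss, so the comparison between the two constructions is one line of code.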
The results are shown in Figure 7. In the second scenario, we look at adding a bridge (a skip connection) between layer 0 and layer 64. This means that the input features to layer 64 are obtained by concatenating the output of layer 63 with the features of layer 0. The idea here is that we might observe that the model would effectively train a submodel of depth 64, using the skip connection, and shift gears later to use the whole depth of 128 layers. This is illustrated in Figure 6b and the results are shown in Figure 8. It does not work as expected, but the failure of this approach is visualized very nicely with probes and serves as a great example of their usefulness in diagnosing problems with models. In both cases, there are two interesting observations that can be made with probes. We refer readers to https://youtu.be/x8j4ZHCR2FI for the full videos associated with Figures 5, 7 and 8. Firstly, at the beginning of training, we can see how the raw data is directly useful to perform linear classification, and how this degrades as more layers are added. In the case of the skip connection in Figure 8, this has the effect of creating two bumps. This is because layer 64 also has the input data as a direct parent, so it can fit a probe to that signal. Secondly, the prediction error goes down in all probes during training, but it does so in a way that starts with the parents before it spreads to their descendants. This is even more apparent in the full video (instead of the 3 frames provided here). This is a ripple effect, where the prediction error in Figures 7 and 8 visually spreads like a wave from the left of the plot to the right. ![Examples of deep neural network with one probe at every layer (drawn above the graph). We show here the addition of extra components to help training (under the graph, in orange).](page_1092_1042_420_120.png) Figure 6: Examples of a deep neural network with one probe at every layer (drawn above the graph). We show here the addition of extra components to help training (under the graph, in orange). Figure 7: A pathologically deep model with 128 layers gets an auxiliary loss added at every 16 layers (refer to the simplified sketch in Figure 6a if needed). This loss is added to the usual model loss at the last layer. We fit a probe at every layer to see how well each layer would perform if its values were used as a linear classifier. We plot the train prediction error associated with all the probes, at three different steps. Before adding those auxiliary losses, the model could not successfully be trained through usual gradient descent methods, but with the addition of those intermediate losses, the model is “guided” to achieve certain partial objectives. This leads to a successful training of the complete model. The final prediction error is not impressive, but the model was not designed to achieve state-of-the-art performance. Figure 8: A pathologically deep model with 128 layers gets a skip connection from layer 0 to layer 64 (refer to the sketch in Figure 6b if needed). We fit a probe at every layer to see how well each layer would perform if its values were used as a linear classifier. We plot the train prediction error associated with all the probes, at three different steps. We can see how the model completely ignores layers 1-63, even when we train it for a long time. The use of probes allows us to diagnose that problem through visual inspection.
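The bridge construction examined in Figure 8 amounts to a single concatenation. A minimal sketch, again in tensorflow-1.x style with made-up sizes and leaky-ReLU slope:

```python
import tensorflow as tf  # tensorflow 1.x style API

x = tf.placeholder(tf.float32, [None, 784])
h = x
for i in range(128):
    if i == 64:                           # bridge: concatenate layer 63's output
        h = tf.concat([h, x], axis=1)     # with the raw layer-0 features
    h = tf.nn.leaky_relu(tf.layers.dense(h, 128, name="layer_%d" % i), alpha=0.5)
```

Because concatenation loses nothing, the model is free to route all useful signal through the `x` half of the concatenation, which is exactly the failure mode the probes reveal in Figure 8.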
4 DISCUSSION AND FUTURE WORK We have presented mostly toy or simple models here instead of larger models such as Inception v3. In the appendix, section A.2, we show an experiment on Inception v3, which proved to be more challenging than expected. Future work in this domain would involve performing better experiments at a larger scale than small MNIST convnets, but still within a manageable size so we can properly train all the probes. This would allow us to produce nice videos showing many training steps in sequence. We have received many comments from people who thought about using multi-layer probes. This can be seen as a natural extension of the linear classifier probes. One downside to this idea is that we lose the convexity property of the probes. It might be worth pursuing in a particular setting, but as of now we feel that it is premature to start using multi-layer probes. This also leads to the convoluted idea of having a regular probe inside a multi-layer probe. 5 CONCLUSION In this paper we introduced the concept of the linear classifier probe as a conceptual tool to better understand the dynamics inside a neural network and the role played by the individual intermediate layers. We are now able to ask new questions and explore new areas. We have demonstrated how these probes can be used to identify certain problematic behaviors in models that might not be apparent when we traditionally have access to only the prediction loss and error. We hope that the notions presented in this paper can contribute to the understanding of deep neural networks and guide the intuition of researchers that design them. ACKNOWLEDGMENTS Yoshua Bengio is a senior CIFAR Fellow. The authors would like to acknowledge the support of the following agencies for research funding and computing support: NSERC, FQRNT, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. REFERENCES Yoshua Bengio, Paolo Frasconi, and Patrice Simard. The problem of learning long-term dependencies in recurrent networks. In IEEE International Conference on Neural Networks, pp. 1183–1188. IEEE, 1993. Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Technische Universität München, 1991. Kevin Jarrett, Koray Kavukcuoglu, Yann LeCun, et al. What is the best multi-stage architecture for object recognition? In 2009 IEEE 12th International Conference on Computer Vision, pp. 2146–2153. IEEE, 2009. David MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320–3328, 2014. A APPENDIX A.1 PROPOSAL: TRAIN PROBES USING ONLY SUBSETS OF FEATURES One of the challenges of training probes on the Inception v3 model is that many of the layers have more than 200,000 features. This is even worse in the first convolution layers, before the pooling operations, where we have around a million features.
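The storage figures quoted in the next paragraph can be checked with quick arithmetic (assuming float32 weights and ignoring the bias vector):

```python
F, K = 200_000, 1_000                 # probe features, output classes
weights_gib = F * K * 4 / 2**30       # 4 bytes per float32 weight
print(weights_gib)                    # ~0.75 GiB: the "almost 1GB" quoted below
# Gradients plus a momentum slot roughly triple this; RMSProp adds more still.
print(3 * weights_gib)                # ~2.2 GiB
F_sub = 1_000                         # the A.1 proposal: a fixed random subset
print(3 * F_sub * K * 4 / 2**20)      # ~11 MiB, in line with the quoted "10MB"
```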
With 1000 output classes, a probe using 200,000 features has a weight matrix taking almost 1GB of storage. When using stochastic gradient descent, we require space to store the gradients, and if we use momentum this ends up taking three times the memory on the GPU. This is even worse for RMSProp. Normally this might be acceptable for a model of reasonable size, but here it turns into almost 4GB of overhead per probe. We do not have to put a probe at every layer. We can also train probes independently. We can put probe parameters on the CPU instead of the GPU, if necessary. But when the act of training probes increases the complexity of the experiment beyond a certain point, the researcher might decide that they are not worth the trouble. We propose the following solution: for a given probe, use a fixed random subset of features instead of the whole set of features. With certain assumptions about the independence of the features and their shared role in predicting the correct class, we can make certain claims about how few features are actually required to assess the prediction error of a probe. We thank Yaroslav Bulatov for suggesting this approach. We ran an experiment in which we used data \( X \sim \mathcal{N}(0, I_D) \), where \( D = 100,000 \) is the number of features. We used \( K = 1000 \) classes and we generated the ground truth using a matrix \( W \) of shape \( (D, K) \). To obtain the class of a given \( x \), we simply compute \( x^T W \) and take the argmax over the \( K \) components of the result: \[ x \sim \mathcal{N}(0, I_D) \qquad y = \arg\max_{k=1..K} (x^T W[:, k]) . \] We selected a matrix \( W \) by drawing all its individual coefficients from a univariate gaussian. Instead of using all \( D = 100,000 \) features, we used only 1000 features picked at random. We trained a linear classifier on those features and, experimentally, it was relatively easy to achieve a 4% error rate on our first try. With all the features, we could achieve a 0% error rate, so 4% might not look great, but we have to keep in mind that we have \( K = 1000 \) classes, so random guesses yield an error rate of 99.9%. This can reduce the storage cost for a probe from 1GB down to 10MB. The former is hard to justify, and the latter is almost negligible. A.2 PROBES ON INCEPTION V3 We are interested in putting linear classifier probes in the popular Inception v3 model, training on the ImageNet dataset. We used the tensorflow implementation available online (tensorflow/models/inception/inception) and ran it on one GPU for 2 weeks. ![Diagram showing the architecture of the Inception v3 model with boxes representing probes at various layers](page_573_682_474_120.png) As described in section A.1, one of the challenges is that the number of features can be prohibitively large, and we have to consider taking only a subset of them. In this particular experiment, we had the most success by taking 1000 random features for each probe. This gives certain layers an unfair advantage if they start with 4000 features and we keep 1000, whereas in other cases the probe insertion point has 426,320 features and we keep 1000. There was no simple “fair” solution. That being said, 13 out of the 17 probes have more than 100,000 features, and 11 of those probes have more than 200,000 features, so things were relatively comparable. We put linear classifier probes at certain strategic layers. We represent this using boxes in the diagram above.
The prediction error of the probe given by the last layer of each box is illustrated by coloring the box. Red is bad (high prediction error) and green/blue is good (low prediction error). We would have liked to have a video to show the evolution of this during training, but this experiment had to be scaled back due to the large computational demands. We show here the prediction errors at a few moments of training. These correspond roughly to the beginning of training, then after a few days, and finally after a week. [Panels of Inception v3 probe training prediction errors at minibatches 001515, 050389, 100876, and 308230, each showing the main head and the auxiliary head.] Figure 9: Probes inserted in the Inception v3 model, evaluated at multiple moments during training on the ImageNet dataset. We represent here the prediction error evaluated on a random subset of 1000 features. As expected, at first all the probes have a 100% prediction error, but as training progresses we see that the model is getting better. Note that there are 1000 classes, so a prediction error of 50% is much better than a random guess. The auxiliary head, shown under the model, was observed to have a prediction error that was slightly better than the main head. This is not necessarily a condition that will hold at the end of training, but merely an observation.
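A scaled-down sketch of the A.1 experiment (the paper uses D = 100,000 features and K = 1000 classes; we shrink both so the sketch runs in seconds, and use scikit-learn's logistic regression as the probe):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
D, K, N, D_sub = 2000, 20, 5000, 200
W = rng.randn(D, K)                       # ground-truth matrix, gaussian coefficients
X = rng.randn(N, D)                       # X ~ N(0, I_D)
y = (X @ W).argmax(axis=1)                # y = argmax_k x^T W[:, k]

idx = rng.choice(D, size=D_sub, replace=False)   # fixed random subset of features
probe = LogisticRegression(max_iter=2000).fit(X[:, idx], y)
print("error on the feature subset:", 1.0 - probe.score(X[:, idx], y))
```

The error on the subset does not match the error of a full probe, but it tracks it well enough to compare layers, which is all the visualizations in Figure 9 require.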
reject
Reject
4.333333
8e679cfd4e5a37a90a68bb74946f2dd5a55bae5c
iclr
2,017
IMPROVING POLICY GRADIENT BY EXPLORING UNDER-APPRECIATED REWARDS Ofir Nachum*, Mohammad Norouzi, Dale Schuurmans† Google Brain {ofirnachum,mnorouzi,schuurmans}@google.com ABSTRACT This paper presents a novel form of policy gradient for model-free reinforcement learning (RL) with improved exploration properties. Current policy-based methods use entropy regularization to encourage undirected exploration of the reward landscape, which is ineffective in high dimensional spaces with sparse rewards. We propose a more directed exploration strategy that promotes exploration of under-appreciated reward regions. An action sequence is considered under-appreciated if its log-probability under the current policy under-estimates its resulting reward. The proposed exploration strategy is easy to implement, requiring small modifications to the REINFORCE algorithm. We evaluate the approach on a set of algorithmic tasks that have long challenged RL methods. Our approach reduces hyper-parameter sensitivity and demonstrates significant improvements over baseline methods. The proposed algorithm successfully solves a benchmark multi-digit addition task and generalizes to long sequences, which, to our knowledge, is the first time that a pure RL method has solved addition using only reward feedback. 1 INTRODUCTION Humans can reason about symbolic objects and solve algorithmic problems. After learning to count and then manipulate numbers via simple arithmetic, people eventually learn to invent new algorithms and even reason about their correctness and efficiency. The ability to invent new algorithms is fundamental to artificial intelligence (AI). Although symbolic reasoning has a long history in AI (Russell et al., 2003), only recently have statistical machine learning and neural network approaches begun to make headway in automated algorithm discovery (Reed & de Freitas, 2016; Kaiser & Sutskever, 2016; Neelakantan et al., 2016), which would constitute an important milestone on the path to AI. Nevertheless, most of the recent successes depend on the use of strong supervision to learn a mapping from a set of training inputs to outputs by maximizing a conditional log-likelihood, very much like neural machine translation systems (Sutskever et al., 2014; Bahdanau et al., 2015). Such a dependence on strong supervision is a significant limitation that does not match the ability of people to invent new algorithmic procedures based solely on trial and error. By contrast, reinforcement learning (RL) methods (Sutton & Barto, 1998) hold the promise of searching over discrete objects such as symbolic representations of algorithms by considering much weaker feedback in the form of a simple verifier that tests the correctness of a program execution on a given problem instance. Despite the recent excitement around the use of RL to tackle Atari games (Mnih et al., 2015) and Go (Silver et al., 2016), standard RL methods are not yet able to consistently and reliably solve algorithmic tasks in all but the simplest cases (Zaremba & Sutskever, 2014). A key property of algorithmic problems that makes them challenging for RL is reward sparsity, i.e., a policy usually has to get a long action sequence exactly right to obtain a non-zero reward. 
*Work done as a member of the Google Brain Residency program [g.co/brainresidency] †Also at the Department of Computing Science, University of Alberta, daes@ualberta.ca We believe one of the key factors limiting the effectiveness of current RL methods in a sparse reward setting is the use of *undirected exploration* strategies (Thrun, 1992), such as \( \epsilon \)-greedy and entropy regularization (Williams & Peng, 1991). For long action sequences with delayed sparse reward, it is hopeless to explore the space uniformly and blindly. Instead, we propose a formulation to encourage exploration of action sequences that are *under-appreciated* by the current policy. Our formulation considers an action sequence to be under-appreciated if the log-probability the model assigns to it under-estimates its resulting reward. Exploring under-appreciated states and actions encourages the policy to have a better calibration between its log-probabilities and observed reward values, even for action sequences with negligible rewards. This effectively increases exploration around neglected action sequences. We term our proposed technique *under-appreciated reward exploration (UREX)*. We show that the objective given by UREX is a combination of a mode seeking objective (standard REINFORCE) and a mean seeking term, which provides a well-motivated trade-off between exploitation and exploration. To empirically evaluate our method, we take a set of algorithmic tasks such as sequence reversal, multi-digit addition, and binary search. We choose to focus on these tasks because, although simple, they present a difficult sparse reward setting which has limited the success of standard RL approaches. The experiments demonstrate that UREX significantly outperforms baseline RL methods, such as entropy regularized REINFORCE and one-step Q-learning, especially on the more difficult tasks, such as multi-digit addition. Moreover, UREX is shown to be more robust to changes of hyper-parameters, which makes hyper-parameter tuning less tedious in practice. In addition to introducing a new variant of policy gradient with improved performance, our paper is the first to demonstrate strong results for an RL method on algorithmic tasks. To our knowledge, the addition task has not been solved by any model-free reinforcement learning approach. We observe that some of the policies learned by UREX can successfully generalize to long sequences; *e.g.*, in 2 out of 5 random restarts, the policy learned by UREX for the addition task correctly generalizes to addition of numbers with 2000 digits with no mistakes, even though training sequences are at most 33 digits long.
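The exact UREX objective is derived later in the paper; as a loose numerical illustration of the definition above (entirely our own construction, not the paper's formula), one can score sampled action sequences by how much their log-probability under-estimates their scaled reward:

```python
import numpy as np

tau = 0.1                                  # reward scale / temperature (assumed)
log_pi = np.array([-2.0, -8.0, -15.0])     # log pi_theta(a | h) of 3 sampled sequences
reward = np.array([0.0, 1.0, 1.0])         # sparse episode rewards r(a | h)

gap = reward / tau - log_pi                # how much log pi under-estimates reward
print("under-appreciation gap per sequence:", gap)
# The 3rd sequence earns the same reward as the 2nd but is assigned much lower
# probability, so it is the most under-appreciated; a directed exploration
# strategy should push probability mass toward it the most.
```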
Given a dataset of examples, one learns the network parameters by maximizing the conditional likelihood of the outputs via back-propagation (*e.g.*, Reed & de Freitas [2016]; Kaiser & Sutskever [2016]; Vinyals et al. [2015]). However, target outputs may not be available for novel tasks, for which no prior algorithm is known. A more desirable approach to inducing algorithms, followed in this paper, advocates using self-driven learning strategies that only receive reinforcement based on the outputs produced. Hence, just by having access to a verifier for an algorithmic problem, one can aim to learn an algorithm. For example, if one does not know how to sort an array, but can check the extent to which an array is sorted, then one can provide the reward signal necessary for learning sorting algorithms. We formulate learning algorithms as an RL problem and make use of model-free policy gradient methods to optimize a set of parameters associated with the algorithm. In this setting, the goal is to learn a policy \( \pi_\theta \) that given an observed state \( s_t \) at step \( t \), estimates a distribution over the next action \( a_t \), denoted \( \pi_\theta(a_t \mid s_t) \). Actions represent the commands within the algorithm and states represent the joint state of the algorithm and the environment. Previous work in this area has focused on augmenting a neural network with additional structure and increased capabilities (Zaremba & Sutskever [2015]; Graves et al. [2016]). In contrast, we utilize a simple architecture based on a standard recurrent neural network (RNN) with LSTM cells (Hochreiter & Schmidhuber [1997]) as depicted in Figure 1. At each episode, the environment is initialized with a latent state \( h_0 \), unknown to the agent, which determines \( s_1 \) and the subsequent state transition and reward functions. Once the agent observes \( s_1 \) as the input to the RNN, the network outputs a distribution \( \pi_\theta(a_1 \mid s_1) \), from which an action a_1 is sampled. This action is applied to the environment, and the agent receives a new state observation s_2. The state s_2 and the previous action a_1 are then fed into the RNN and the process repeats until the end of the episode. Upon termination, a reward signal is received. Figure 1: The agent's RNN architecture that represents a policy. The environment is initialized with a latent vector h. At time step t, the environment produces a state s_t, and the agent takes as input s_t and the previously sampled action a_{t-1} and produces a distribution over the next action \( \pi_\theta(a_t \mid s_t) \). Then, we sample a new action a_t and apply it to the environment. 3 LEARNING A POLICY BY MAXIMIZING EXPECTED REWARD We start by discussing the most common form of policy gradient, REINFORCE (Williams [1992]), and its entropy regularized variant (Williams & Peng [1991]). REINFORCE has been applied to model-free policy-based learning with neural networks and algorithmic domains (Zaremba & Sutskever [2015]; Graves et al. [2016]).
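For concreteness, a minimal sketch of this sampling loop is given below in Python. The `env` and `policy` interfaces (`reset`, `step`, `distribution`, `episode_reward`) are hypothetical stand-ins introduced only for illustration; they do not correspond to the implementation used in our experiments.

```python
import numpy as np

def rollout(env, policy, max_steps):
    """Sample one episode following Figure 1: the environment holds a
    latent state h; the agent conditions on (s_t, a_{t-1}) to pick a_t."""
    s = env.reset()                  # environment draws h and returns s_1 = g(h)
    a_prev = None
    actions, log_prob = [], 0.0
    for _ in range(max_steps):
        probs = policy.distribution(s, a_prev)      # pi_theta(. | s_t)
        a = np.random.choice(len(probs), p=probs)   # sample a_t
        log_prob += np.log(probs[a])
        actions.append(a)
        s, done = env.step(a)        # s_{t+1} = f(s_t, a_t | h)
        a_prev = a
        if done:
            break
    # The reward arrives only upon termination: r(a_{1:T} | h).
    return actions, log_prob, env.episode_reward()
```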
The goal is to learn a policy \( \pi_\theta \) that, given an observed state s_t at step t, estimates a distribution over the next action a_t, denoted \( \pi_\theta(a_t \mid s_t) \). The environment is initialized with a latent vector, h, which determines the initial observed state \( s_1 = g(h) \), and the transition function \( s_{t+1} = f(s_t, a_t \mid h) \). Note that the use of nondeterministic transitions f as in Markov decision processes (MDPs) may be recovered by assuming that h includes the random seed for any nondeterministic functions. Given a latent state h, and \( s_{1:T} \equiv (s_1, \ldots, s_T) \), the model probability of an action sequence \( a_{1:T} \equiv (a_1, \ldots, a_T) \) is expressed as, \[ \pi_\theta(a_{1:T} \mid h) = \prod_{t=1}^T \pi_\theta(a_t \mid s_t) , \quad \text{where} \quad s_1 = g(h), \quad s_{t+1} = f(s_t, a_t \mid h) \quad \text{for} \ 1 \leq t < T . \] The environment provides a reward at the end of the episode, denoted \( r(a_{1:T} \mid h) \). For ease of readability we drop the subscript from \( a_{1:T} \) and simply write \( \pi_\theta(a \mid h) \) and \( r(a \mid h) \). The objective used to optimize the policy parameters, \( \theta \), consists of maximizing expected reward under actions drawn from the policy, plus an optional maximum entropy regularizer. Given a distribution over initial latent environment states \( p(h) \), we express the regularized expected reward as, \[ \mathcal{O}_{RL}(\theta; \tau) = \mathbb{E}_{h \sim p(h)} \left\{ \sum_{a \in \mathcal{A}} \pi_\theta(a \mid h) \left[ r(a \mid h) - \tau \log \pi_\theta(a \mid h) \right] \right\} . \] (1) When \( \pi_\theta \) is a non-linear function defined by a neural network, finding the global optimum of \( \theta \) is challenging, and one often resorts to gradient-based methods to find a local optimum of \( \mathcal{O}_{RL}(\theta; \tau) \). Given that \( \frac{d}{d\theta} \pi_\theta(a) = \pi_\theta(a) \frac{d}{d\theta} \log \pi_\theta(a) \) for any a such that \( \pi_\theta(a) > 0 \), one can verify that, \[ \frac{d}{d\theta} \mathcal{O}_{RL}(\theta; \tau \mid h) = \sum_{a \in \mathcal{A}} \pi_\theta(a \mid h) \frac{d}{d\theta} \log \pi_\theta(a \mid h) \left[ r(a \mid h) - \tau \log \pi_\theta(a \mid h) - \tau \right] . \] (2) Because the space of possible actions \( \mathcal{A} \) is large, enumerating over all of the actions to compute this gradient is infeasible. Williams (1992) proposed to compute a stochastic estimate of this gradient using Monte Carlo samples: one first draws \( N \) *i.i.d.* samples from the latent environment states \( \{ \mathbf{h}^{(n)} \}_{n=1}^N \), and then draws \( K \) *i.i.d.* samples \( \{ \mathbf{a}^{(k)} \}_{k=1}^K \) from \( \pi_\theta(\mathbf{a} \mid \mathbf{h}^{(n)}) \) to approximate the gradient of (1) by using (2) as, \[ \frac{\mathrm{d}}{\mathrm{d}\theta} \mathcal{O}_{\mathrm{RL}}(\theta; \tau) \approx \frac{1}{NK} \sum_{n=1}^N \sum_{k=1}^K \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)}) \left[ r(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)}) - \tau \log \pi_\theta(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)}) - \tau \right]. \] (3) This reparametrization of the gradients is the key to the REINFORCE algorithm.
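As an illustration, a minimal NumPy sketch of the estimator in (3) is given below; it assumes the per-sample score gradients \( \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta \) have already been computed (in practice one would instead form an equivalent surrogate loss inside an automatic differentiation framework).

```python
import numpy as np

def reinforce_gradient(grad_log_probs, log_probs, rewards, tau):
    """Monte Carlo estimate of the entropy-regularized gradient in (3).
    grad_log_probs: [N, K, D] per-sample gradients of log pi_theta(a|h)
    log_probs:      [N, K]    values of log pi_theta(a^(k) | h^(n))
    rewards:        [N, K]    episodic rewards r(a^(k) | h^(n))
    Returns a length-D gradient estimate."""
    N, K = rewards.shape
    coeff = rewards - tau * log_probs - tau      # bracketed term in (3)
    return (grad_log_probs * coeff[..., None]).sum(axis=(0, 1)) / (N * K)
```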
To reduce the variance of (3), one uses rewards \( \hat{r} \) that are shifted by some offset values, \[ \hat{r}(\mathbf{a}^{(k)} \mid \mathbf{h}) = r(\mathbf{a}^{(k)} \mid \mathbf{h}) - b(\mathbf{h}) , \] (4) where \( b \) is known as a *baseline* or sometimes called a *critic*. Note that subtracting any offset from the rewards in (1) simply results in shifting the objective \( \mathcal{O}_{\mathrm{RL}} \) by a constant. Unfortunately, directly maximizing expected reward (*i.e.*, when \( \tau = 0 \)) is prone to getting trapped in a local optimum. To combat this tendency, Williams & Peng (1991) augmented the expected reward objective by including a maximum entropy regularizer (\( \tau > 0 \)) to promote greater exploration. We will refer to this variant of REINFORCE as MENT (maximum entropy exploration). 4 UNDER-APPRECIATED REWARD EXPLORATION (UREX) To explain our novel form of policy gradient, we first note that the optimal policy \( \pi^*_\tau \), which globally maximizes \( \mathcal{O}_{\mathrm{RL}}(\theta; \tau \mid \mathbf{h}) \) in (1) for any \( \tau > 0 \), can be expressed as, \[ \pi^*_\tau(\mathbf{a} \mid \mathbf{h}) = \frac{1}{Z(\mathbf{h})} \exp \left\{ \frac{1}{\tau} r(\mathbf{a} \mid \mathbf{h}) \right\} , \] (5) where \( Z(\mathbf{h}) \) is a normalization constant making \( \pi^*_\tau \) a distribution over the space of action sequences \( \mathcal{A} \). One can verify this by first observing that, \[ \mathcal{O}_{\mathrm{RL}}(\theta; \tau \mid \mathbf{h}) = -\tau D_{\mathrm{KL}} \left( \pi_\theta(\cdot \mid \mathbf{h}) \parallel \pi^*_\tau(\cdot \mid \mathbf{h}) \right) + \tau \log Z(\mathbf{h}) . \] (6) Since \( D_{\mathrm{KL}} (p \parallel q) \) is non-negative and zero iff \( p = q \), the policy \( \pi^*_\tau \) defined in (5) maximizes \( \mathcal{O}_{\mathrm{RL}} \). That said, given a particular form of \( \pi_\theta \), finding \( \theta \) that exactly characterizes \( \pi^*_\tau \) may not be feasible. The KL divergence \( D_{\mathrm{KL}} (\pi_\theta \parallel \pi^*_\tau) \) is known to be mode seeking (Murphy, 2012, Section 21.2.2) even with entropy regularization (\( \tau > 0 \)). Learning a policy by optimizing this direction of the KL is prone to falling into a local optimum resulting in a sub-optimal policy that omits some of the modes of \( \pi^*_\tau \). Although entropy regularization helps mitigate this issue, as confirmed in our experiments, it is not an effective exploration strategy as it is undirected and requires a small regularization coefficient \( \tau \) to avoid too much random exploration. Instead, we propose a directed exploration strategy that improves the mean seeking behavior of policy gradient in a principled way. We start by considering the alternate mean seeking direction of the KL divergence, \( D_{\mathrm{KL}} (\pi^*_\tau \parallel \pi_\theta) \). Norouzi et al. (2016) considered this direction of the KL to directly learn a policy by optimizing \[ \mathcal{O}_{\mathrm{RAML}}(\theta; \tau) = \mathbb{E}_{\mathbf{h} \sim p(\mathbf{h})} \left\{ \tau \sum_{\mathbf{a} \in \mathcal{A}} \pi^*_\tau(\mathbf{a} \mid \mathbf{h}) \log \pi_\theta(\mathbf{a} \mid \mathbf{h}) \right\} , \] (7) for structured prediction. This objective has the same optimal solution \( \pi^*_\tau \) as \( \mathcal{O}_{\mathrm{RL}} \) since, \[ \mathcal{O}_{\mathrm{RAML}}(\theta; \tau \mid \mathbf{h}) = -\tau D_{\mathrm{KL}} \left( \pi^*_\tau(\cdot \mid \mathbf{h}) \parallel \pi_\theta(\cdot \mid \mathbf{h}) \right) + \mathrm{const} . \] (8) Norouzi et al.
(2016) argue that in some structured prediction problems, when one can draw samples from \( \pi^*_\tau \), optimizing (7) is more effective than (1), since no sampling from a non-stationary policy \( \pi_\theta \) is required. If \( \pi_\theta \) is a log-linear model of a set of features, \( \mathcal{O}_{\mathrm{RAML}} \) is convex in \( \theta \) whereas \( \mathcal{O}_{\mathrm{RL}} \) is not, even in the log-linear case. Unfortunately, in scenarios where the reward landscape is unknown or computing the normalization constant \( Z(\mathbf{h}) \) is intractable, sampling from \( \pi^*_\tau \) is not straightforward. In RL problems, the reward landscape is completely unknown, hence sampling from \( \pi^*_\tau \) is intractable. This paper proposes to approximate the expectation with respect to \( \pi^*_\tau \) by using *self-normalized importance sampling* (Owen, 2013), where the proposal distribution is \( \pi_\theta \) and the reference distribution is \( \pi^*_\tau \). For importance sampling, one draws \( K \) *i.i.d.* samples \( \{ \mathbf{a}^{(k)} \}_{k=1}^K \) from \( \pi_\theta(\mathbf{a} \mid \mathbf{h}) \) and computes a set of normalized importance weights to approximate \( \mathcal{O}_{\text{RAML}}(\theta; \tau \mid \mathbf{h}) \) as, \[ \mathcal{O}_{\text{RAML}}(\theta; \tau \mid \mathbf{h}) \approx \tau \sum_{k=1}^K \frac{w_\tau(\mathbf{a}^{(k)} \mid \mathbf{h})}{\sum_{m=1}^K w_\tau(\mathbf{a}^{(m)} \mid \mathbf{h})} \log \pi_\theta(\mathbf{a}^{(k)} \mid \mathbf{h}) , \] (9) where \( w_\tau(\mathbf{a}^{(k)} \mid \mathbf{h}) \propto \pi^*_\tau / \pi_\theta \) denotes an importance weight defined by, \[ w_\tau(\mathbf{a}^{(k)} \mid \mathbf{h}) = \exp \left\{ \frac{1}{\tau} r(\mathbf{a}^{(k)} \mid \mathbf{h}) - \log \pi_\theta(\mathbf{a}^{(k)} \mid \mathbf{h}) \right\} . \] (10) One can view these importance weights as evaluating the discrepancy between scaled rewards \( r / \tau \) and the policy's log-probabilities \( \log \pi_\theta \). Among the \( K \) samples, a sample that is least appreciated by the model, *i.e.*, has the largest \( r / \tau - \log \pi_\theta \), receives the largest positive feedback in (9). In practice, we have found that just using the importance sampling RAML objective in (9) does not always yield promising solutions. Particularly, at the beginning of training, when \( \pi_\theta \) is still far away from \( \pi^*_\tau \), the variance of the importance weights is too large, and the self-normalized importance sampling procedure results in poor approximations. To stabilize early phases of training and ensure that the model distribution \( \pi_\theta \) achieves large expected reward scores, we combine the expected reward and RAML objectives to benefit from the best of their mode and mean seeking behaviors. Accordingly, we propose the following objective that we call *under-appreciated reward exploration (UREX)*, \[ \mathcal{O}_{\text{UREX}}(\theta; \tau) = \mathbb{E}_{\mathbf{h} \sim p(\mathbf{h})} \left\{ \sum_{\mathbf{a} \in \mathcal{A}} \left[ \pi_\theta(\mathbf{a} \mid \mathbf{h}) r(\mathbf{a} \mid \mathbf{h}) + \tau \pi^*_\tau(\mathbf{a} \mid \mathbf{h}) \log \pi_\theta(\mathbf{a} \mid \mathbf{h}) \right] \right\} , \] (11) which is the sum of the expected reward and RAML objectives. In our preliminary experiments, we considered a composite objective of \( \mathcal{O}_{\text{RL}} + \mathcal{O}_{\text{RAML}} \), but we found that removing the entropy term is beneficial. Hence, the \( \mathcal{O}_{\text{UREX}} \) objective does not include entropy regularization.
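The normalized weights in (9)-(10) reduce to a softmax over \( r/\tau - \log \pi_\theta \) across the \( K \) samples, which the following minimal NumPy sketch makes explicit (a numerically stabilized illustration, not the code used in our experiments):

```python
import numpy as np

def urex_weights(rewards, log_probs, tau):
    """Self-normalized importance weights of (9)-(10):
    w propto exp(r/tau - log pi_theta), normalized over the K samples.
    The most under-appreciated sample, i.e. the one with the largest
    gap r/tau - log pi_theta, receives the largest weight."""
    logits = rewards / tau - log_probs
    logits = logits - logits.max()    # subtract the max for stability
    w = np.exp(logits)
    return w / w.sum()
```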
Accordingly, the optimum policy for \( \mathcal{O}_{\text{UREX}} \) is no longer \( \pi^*_\tau \), as it was for \( \mathcal{O}_{\text{RL}} \) and \( \mathcal{O}_{\text{RAML}} \). Appendix A derives the optimal policy for UREX as a function of the optimal policy for \( \mathcal{O}_{\text{RL}} \). We find that the optimal policy of UREX is more sharply concentrated on the high reward regions of the action space, which may be an advantage for UREX, but we leave more analysis of this behavior to future work. To compute the gradient of \( \mathcal{O}_{\text{UREX}}(\theta; \tau) \), we use the self-normalized importance sampling estimate outlined in (9). We assume that the importance weights are constant and contribute no gradient to \( \frac{\mathrm{d}}{\mathrm{d}\theta} \mathcal{O}_{\text{UREX}}(\theta; \tau) \). To approximate the gradient, one draws \( N \) *i.i.d.* samples from the latent environment states \( \{ \mathbf{h}^{(n)} \}_{n=1}^N \), and then draws \( K \) *i.i.d.* samples \( \{ \mathbf{a}^{(k)} \}_{k=1}^K \) from \( \pi_\theta(\mathbf{a} \mid \mathbf{h}^{(n)}) \) to obtain \[ \frac{\mathrm{d}}{\mathrm{d}\theta} \mathcal{O}_{\text{UREX}}(\theta; \tau) \approx \frac{1}{N} \sum_{n=1}^N \sum_{k=1}^K \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)}) \left[ \frac{1}{K} \hat{r}(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)}) + \tau \frac{w_\tau(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)})}{\sum_{m=1}^K w_\tau(\mathbf{a}^{(m)} \mid \mathbf{h}^{(n)})} \right]. \] (12) As with REINFORCE, the rewards are shifted by an offset \( b(\mathbf{h}) \). In this gradient, the model log-probability of a sample action sequence \( \mathbf{a}^{(k)} \) is reinforced if the corresponding reward is large, or the corresponding importance weights are large, meaning that the action sequence is under-appreciated. The normalized importance weights are computed using a softmax operator, \( \operatorname{softmax}(r / \tau - \log \pi_\theta) \). 5 RELATED WORK Before presenting the experimental results, we briefly review some pieces of previous work that closely relate to the UREX approach. Reward-Weighted Regression. Both RAML and UREX objectives bear some similarity to a method in continuous control known as Reward-Weighted Regression (RWR) (Peters & Schaal [2007]; Wierstra et al. [2008]). Using our notation, the RWR objective is expressed as, \[ \mathcal{O}_{\text{RWR}}(\theta; \tau \mid \mathbf{h}) = \log \sum_{\mathbf{a} \in \mathcal{A}} \pi^*_\tau(\mathbf{a} \mid \mathbf{h}) \pi_\theta(\mathbf{a} \mid \mathbf{h}) \] (13) \[ \geq \sum_{\mathbf{a} \in \mathcal{A}} q(\mathbf{a} \mid \mathbf{h}) \log \frac{\pi^*_\tau(\mathbf{a} \mid \mathbf{h}) \pi_\theta(\mathbf{a} \mid \mathbf{h})}{q(\mathbf{a} \mid \mathbf{h})}. \] (14) To optimize \( \mathcal{O}_{\mathrm{RWR}} \), Peters & Schaal (2007) propose a technique inspired by the EM algorithm to maximize the variational lower bound in (14) based on a variational distribution \( q(a \mid h) \). The RWR objective can be interpreted as a log of the correlation between \( \pi^*_\tau \) and \( \pi_\theta \). By contrast, the RAML and UREX objectives are both based on a KL divergence between \( \pi^*_\tau \) and \( \pi_\theta \).
To optimize the RWR objective, one formulates the gradient as, \[ \frac{\mathrm{d}}{\mathrm{d}\theta} \mathcal{O}_{\mathrm{RWR}}(\theta; \tau \mid h) = \sum_{a \in \mathcal{A}} \frac{\pi^*_\tau(a \mid h)\pi_\theta(a \mid h)}{C} \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta(a \mid h), \] (15) where \( C \) denotes the normalization factor, i.e., \( C = \sum_{a \in \mathcal{A}} \pi^*_\tau(a \mid h)\pi_\theta(a \mid h) \). The expectation with respect to \( \pi^*_\tau(a \mid h)\pi_\theta(a \mid h)/C \) on the RHS can be approximated by self-normalized importance sampling, where the proposal distribution is \( \pi_\theta \). Accordingly, one draws \( K \) Monte Carlo samples \( \{a^{(k)}\}_{k=1}^K \) i.i.d. from \( \pi_\theta(a \mid h) \) and formulates the gradient as, \[ \frac{\mathrm{d}}{\mathrm{d}\theta} \mathcal{O}_{\mathrm{RWR}}(\theta; \tau \mid h) \approx \frac{1}{K} \sum_{k=1}^K \frac{u(a^{(k)} \mid h)}{\sum_{m=1}^K u(a^{(m)} \mid h)} \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta(a^{(k)} \mid h), \] (16) where \( u(a^{(k)} \mid h) = \exp\{\frac{1}{\tau} r(a^{(k)} \mid h)\} \). There is some similarity between (16) and (9) in that they both use self-normalized importance sampling, but note the critical difference that (16) and (9) estimate the gradients of two different objectives, and hence the importance weights in (16) do not correct for the sampling distribution \( \pi_\theta(a \mid h) \) as opposed to (9). Beyond these technical differences, the optimal policy of \( \mathcal{O}_{\mathrm{RWR}} \) is a one-hot distribution with all probability mass concentrated on an action sequence with maximal reward, whereas the optimal policies for RAML and UREX are everywhere nonzero, with the probability of different action sequences being assigned proportionally to their exponentiated reward (with UREX introducing an additional re-scaling; see Appendix A). Further, the notion of under-appreciated reward exploration evident in \( \mathcal{O}_{\mathrm{UREX}} \), which is key to UREX's performance, is missing in the RWR formulation. Exploration. The RL literature contains many different attempts at incorporating exploration that may be compared with our method. The most common exploration strategy considered in value-based RL is \( \epsilon \)-greedy Q-learning, where at each step the agent either takes the best action according to its current value approximation or, with probability \( \epsilon \), takes an action sampled uniformly at random. Like entropy regularization, such an approach applies undirected exploration, but it has achieved recent success in game playing environments (Mnih et al., 2013; Van Hasselt et al., 2016; Mnih et al., 2016). Prominent approaches to improving exploration beyond \( \epsilon \)-greedy in value-based or model-based RL have focused on reducing uncertainty by prioritizing exploration toward states and actions where the agent knows the least. This basic intuition underlies work on counter and recency methods (Thrun, 1992), exploration methods based on uncertainty estimates of values (Kaelbling, 1993; Tokic, 2010), methods that prioritize learning environment dynamics (Kearns & Singh, 2002; Stadie et al., 2015), and methods that provide an intrinsic motivation or curiosity bonus for exploring unknown states (Schmidhuber, 2006; Bellemare et al., 2016). In contrast to value-based methods, exploration for policy-based RL methods is often a by-product of the optimization algorithm itself.
Since algorithms like REINFORCE and Thompson sampling choose actions according to a stochastic policy, sub-optimal actions are chosen with some non-zero probability. The Q-learning algorithm may also be modified to sample an action from the softmax of the Q values rather than the argmax (Sutton & Barto, 1998). Asynchronous training has also been reported to have an exploration effect on both value- and policy-based methods. Mnih et al. (2016) report that asynchronous training can stabilize training by reducing the bias experienced by a single trainer. By using multiple separate trainers, an agent is less likely to become trapped at a policy found to be locally optimal only due to local conditions. In the same spirit, Osband et al. (2016) use multiple Q value approximators and sample only one to act for each episode as a way to implicitly incorporate exploration. By relating the concepts of value and policy in RL, the exploration strategy we propose tries to bridge the discrepancy between the two. In particular, UREX can be viewed as a hybrid combination of value-based and policy-based exploration strategies that attempts to capture the benefits of each. Bornschein & Bengio (2014) apply the same self-normalized importance sampling technique to optimize the log-likelihood of latent variable models. Per-step Reward. Finally, while we restrict ourselves to episodic settings where a reward is associated with an entire episode of states and actions, much work has been done to take advantage of environments that provide per-step rewards. These include policy-based methods such as actor-critic (Mnih et al., 2016; Schulman et al., 2016) and value-based approaches based on Q-learning (Van Hasselt et al., 2016; Schaul et al., 2016). Some of these value-based methods have proposed a softening of Q-values which can be interpreted as adding a form of maximum-entropy regularizer (Asadi & Littman, 2016; Azar et al., 2012; Fox et al., 2016; Ziebart, 2010). The episodic total-reward setting that we consider is naturally harder since the credit assignment to individual actions within an episode is unclear. 6 SIX ALGORITHMIC TASKS We assess the effectiveness of the proposed approach on five algorithmic tasks from the OpenAI Gym (Brockman et al., 2016), as well as a new binary search problem. Each task is summarized below, with further details available on the Gym website[1] or in the corresponding open-source code[2]. In each case, the environment has a hidden tape and a hidden sequence. The agent observes the sequence via a pointer to a single character, which can be moved by a set of pointer control actions. Thus an action \( a_t \) is represented as a tuple \( (m, w, o) \) where \( m \) denotes how to move, \( w \) is a boolean denoting whether to write, and \( o \) is the output symbol to write. 1. Copy: The agent should emit a copy of the sequence. The pointer actions are move left and right. 2. DuplicatedInput: In the hidden tape, each character is repeated twice. The agent must deduplicate the sequence and emit every other character. The pointer actions are move left and right. 3. RepeatCopy: The agent should emit the hidden sequence once, then emit the sequence in the reverse order, then emit the original sequence again. The pointer actions are move left and right. 4. Reverse: The agent should emit the hidden sequence in the reverse order. As before, the pointer actions are move left and right. 5. ReversedAddition: The hidden tape is a \( 2 \times n \) grid of digits representing two numbers in base 3 in little-endian order.
The agent must emit the sum of the two numbers, in little-endian order. The allowed pointer actions are move left, right, up, or down. The OpenAI Gym provides an additional harder task called ReversedAddition3, which involves adding three numbers. We omit this task, since none of the methods make much progress on it. For these tasks, the input sequences encountered during training range from a length of 2 to 33 characters. A reward of 1 is given for each correct emission. On an incorrect emission, a small penalty of \(-0.5\) is incurred and the episode is terminated. The agent is also terminated and penalized with a reward of \(-1\) if the episode exceeds a certain number of steps. For the experiments using UREX and MENT, we associate an episodic sequence of actions with the total reward, defined as the sum of the per-step rewards. The experiments using Q-learning, on the other hand, used the per-step rewards. Each of the Gym tasks has a success threshold, which determines the required average reward over 100 episodes for the agent to be considered successful. We also conduct experiments on an additional algorithmic task described below: 6. BinarySearch: Given an integer \( n \), the environment has a hidden array of \( n \) distinct numbers stored in ascending order. The environment also has a query number \( x \), unknown to the agent, that is contained somewhere in the array. The goal of the agent is to find the query number in the array in a small number of actions. The environment has three integer registers initialized at \( (n, 0, 0) \). At each step, the agent can interact with the environment via the four following actions: • INC\((i)\): increment the value of register \( i \) for \( i \in \{1, 2, 3\} \). • DIV\((i)\): divide the value of register \( i \) by 2 for \( i \in \{1, 2, 3\} \). • AVG\((i)\): replace the value of register \( i \) with the average of the two other registers. • CMP\((i)\): compare the value of register \( i \) with \( x \) and receive a signal indicating which value is greater. The agent succeeds when it calls CMP on an array cell holding the value \( x \). [1] gym.openai.com [2] github.com/openai/gym Table 1: Each cell shows the percentage of 60 trials with different hyper-parameters (\( \eta, c \)) and random restarts that successfully solve an algorithmic task. UREX is more robust to hyper-parameter changes than MENT. We evaluate MENT with a few temperatures and UREX with \( \tau = 0.1 \). <table> <tr> <th rowspan="2"></th> <th colspan="4">REINFORCE / MENT</th> <th>UREX</th> </tr> <tr> <th>\( \tau = 0.0 \)</th> <th>\( \tau = 0.005 \)</th> <th>\( \tau = 0.01 \)</th> <th>\( \tau = 0.1 \)</th> <th>\( \tau = 0.1 \)</th> </tr> <tr> <td>Copy</td> <td>85.0</td> <td>88.3</td> <td><b>90.0</b></td> <td>3.3</td> <td>75.0</td> </tr> <tr> <td>DuplicatedInput</td> <td>68.3</td> <td>73.3</td> <td>73.3</td> <td>0.0</td> <td><b>100.0</b></td> </tr> <tr> <td>RepeatCopy</td> <td>0.0</td> <td>0.0</td> <td>11.6</td> <td>0.0</td> <td><b>18.3</b></td> </tr> <tr> <td>Reverse</td> <td>0.0</td> <td>0.0</td> <td>3.3</td> <td>10.0</td> <td><b>16.6</b></td> </tr> <tr> <td>ReversedAddition</td> <td>0.0</td> <td>0.0</td> <td>1.6</td> <td>0.0</td> <td><b>30.0</b></td> </tr> <tr> <td>BinarySearch</td> <td>0.0</td> <td>0.0</td> <td>1.6</td> <td>0.0</td> <td><b>20.0</b></td> </tr> </table> The agent is terminated when the number of steps exceeds a maximum threshold of \( 2n+1 \) steps and receives a reward of 0.
If the agent finds \( x \) at step \( t \), it receives a reward of \( 10(1-t/(2n+1)) \). We set the maximum number of steps to \( 2n+1 \) to allow the agent to perform a full linear search. A policy performing full linear search achieves an average reward of 5, because \( x \) is chosen uniformly at random from the elements of the array. A policy employing binary search can find the number \( x \) in at most \( 2\log_2 n + 1 \) steps. If \( n \) is selected uniformly at random from the range \( 32 \leq n \leq 512 \), binary search yields an optimal average reward above 9.55. We set the <i>success threshold</i> for this task to an average reward of 9. 7 EXPERIMENTS We compare our policy gradient method using under-appreciated reward exploration (UREX) against two main RL baselines: (1) REINFORCE with entropy regularization, termed MENT (Williams & Peng, 1991), where the value of \( \tau \) determines the degree of regularization; when \( \tau = 0 \), standard REINFORCE is obtained. (2) One-step double Q-learning based on bootstrapping one-step future rewards. 7.1 ROBUSTNESS TO HYPER-PARAMETERS Hyper-parameter tuning is often tedious for RL algorithms. We found that the proposed UREX method significantly improves robustness to changes in hyper-parameters when compared to MENT. For our experiments, we perform a careful grid search over a set of hyper-parameters for both MENT and UREX. For any hyper-parameter setting, we run the MENT and UREX methods 5 times with different random restarts. We explore the following main hyper-parameters: • The <i>learning rate</i> denoted \( \eta \) chosen from a set of 3 possible values \( \eta \in \{0.1, 0.01, 0.001\} \). • The maximum L2 norm of the gradients, beyond which the gradients are <i>clipped</i>. This parameter, denoted \( c \), matters for training RNNs. The value of \( c \) is selected from \( c \in \{1, 10, 40, 100\} \). • The temperature parameter \( \tau \) that controls the degree of exploration for both MENT and UREX. For MENT, we use \( \tau \in \{0, 0.005, 0.01, 0.1\} \). For UREX, we only consider \( \tau = 0.1 \), which consistently performs well across the tasks. In all of the experiments, both MENT and UREX are treated exactly the same. In fact, the change of implementation is just a few lines of code. Given a value of \( \tau \), for each task, we run 60 training jobs comprising 3 learning rates, 4 clipping values, and 5 random restarts. We run each algorithm for a maximum number of steps determined based on the difficulty of the task. The training jobs for Copy, DuplicatedInput, RepeatCopy, Reverse, ReversedAddition, and BinarySearch are run for \( 2K \), \( 500 \), \( 50K \), \( 5K \), \( 50K \), and \( 2K \) stochastic gradient steps, respectively. We find that running a trainer job longer does not result in better performance. Our policy network comprises a single LSTM layer with 128 nodes. We use the Adam optimizer (Kingma & Ba, 2015) for the experiments. Table 1 shows the percentage of 60 trials on different hyper-parameters (\( \eta, c \)) and random restarts which successfully solve each of the algorithmic tasks. It is clear that UREX is more robust than MENT to changes in hyper-parameters, even though we only report the results of UREX for a single temperature. See Appendix B for more detailed tables on hyper-parameter robustness. 7.2 RESULTS Table 2 presents the number of successful attempts (out of 5 random restarts) and the expected reward values (averaged over 5 trials) for each RL algorithm given the best hyper-parameters.
One-step Q-learning results are also included in the table. We also present the training curves for MENT and UREX in Figure 2. It is clear that UREX outperforms the baselines on these tasks. On the more difficult tasks, such as Reverse and ReversedAddition, UREX is able to consistently find an appropriate algorithm, but MENT and Q-learning fall behind. Importantly, for the BinarySearch task, which exhibits many local maxima and necessitates smart exploration, UREX is the only method that can solve it consistently. The Q-learning baseline solves some of the simple tasks, but it makes little headway on the harder tasks. We believe that entropy regularization for policy gradient and \( \epsilon \)-greedy for Q-learning are relatively weak exploration strategies in long episodic tasks with delayed rewards. On such tasks, one random exploratory step in the wrong direction can take the agent off the optimal policy, hampering its ability to learn. In contrast, UREX provides a form of adaptive and smart exploration. In fact, we observe that the variance of the importance weights decreases as the agent approaches the optimal policy, effectively reducing exploration when it is no longer necessary; see Appendix E. Figure 2: Average reward during training for MENT (green) and UREX (blue) across six tasks (Copy, DuplicatedInput, RepeatCopy, Reverse, ReversedAddition, BinarySearch). We find the best hyper-parameters for each method, and run each algorithm 5 times with random restarts. The curves present the average reward as well as the single standard deviation region clipped at the min and max. 7.3 GENERALIZATION TO LONGER SEQUENCES To confirm whether our method is able to find the correct algorithm for multi-digit addition, we investigate its generalization to longer input sequences than provided during training. We evaluate the trained models on inputs up to a length of 2000 digits, even though training sequences were at most 33 characters. For each length, we test the model on 100 randomly generated inputs, stopping when the accuracy falls below 100%. Out of the 60 models trained on addition with UREX, we find that 5 models generalize to numbers up to 2000 digits without any observed mistakes. On the best UREX hyper-parameters, 2 out of the 5 random restarts are able to generalize successfully. For more detailed results on the generalization performance on 3 different tasks including Copy, DuplicatedInput, and ReversedAddition, see Appendix C. Table 2: Results on several algorithmic tasks comparing Q-learning and policy gradient based on MENT and UREX. We find the best hyper-parameters for each method, and run each algorithm 5 times with random restarts. The number of successful attempts (out of 5) that achieve a reward threshold is reported. The expected reward computed over the last few iterations of training is also reported. <table> <tr> <th rowspan="2"> </th> <th colspan="3">Num. of successful attempts out of 5</th> <th colspan="3">Expected reward</th> </tr> <tr> <th>Q-learning</th> <th>MENT</th> <th>UREX</th> <th>Q-learning</th> <th>MENT</th> <th>UREX</th> </tr> <tr> <td>Copy</td> <td>5</td> <td>5</td> <td>5</td> <td>31.2</td> <td>31.2</td> <td>31.2</td> </tr> <tr> <td>DuplicatedInput</td> <td>5</td> <td>5</td> <td>5</td> <td>15.4</td> <td>15.4</td> <td>15.4</td> </tr> <tr> <td>RepeatCopy</td> <td>1</td> <td>3</td> <td>4</td> <td>39.3</td> <td>69.2</td> <td>81.1</td> </tr> <tr> <td>Reverse</td> <td>0</td> <td>2</td> <td>4</td> <td>4.4</td> <td>21.9</td> <td>27.2</td> </tr> <tr> <td>ReversedAddition</td> <td>0</td> <td>1</td> <td>5</td> <td>1.1</td> <td>8.7</td> <td>30.2</td> </tr> <tr> <td>BinarySearch</td> <td>0</td> <td>1</td> <td>4</td> <td>5.2</td> <td>8.6</td> <td>9.1</td> </tr> </table>
During these evaluations, we take the action with the largest probability from \( \pi_\theta(a \mid h) \) at each time step rather than sampling randomly. We also looked into the generalization of the models trained on the BinarySearch task. We found that none of the agents perform proper binary search. Rather, those that solved the task perform a hybrid of binary and linear search: the first actions follow a binary search pattern, but then the agent switches to a linear search procedure once it narrows down the search space; see Appendix D for some execution traces for BinarySearch and ReversedAddition. Thus, on longer input sequences, the agent's running time complexity approaches linear rather than logarithmic. We hope that future work will make more progress on this task. This task is especially interesting because the reward signal should incorporate both the correctness and the efficiency of the algorithm. 7.4 IMPLEMENTATION DETAILS In all of the experiments, we make use of curriculum learning. The environment begins by only providing small inputs and moves on to longer sequences once the agent achieves close to maximal reward over a number of steps. For policy gradient methods including MENT and UREX, we only provide the agent with a reward at the end of the episode, and there is no notion of intermediate reward. For the value-based baseline, we implement one-step Q-learning as described in Mnih et al. (2016), Alg. 1, employing double Q-learning with \( \epsilon \)-greedy exploration. We use the same RNN as in our policy-based approaches to estimate the Q values. A grid search over exploration rate, exploration rate decay, learning rate, and sync frequency (between online and target network) is conducted to find the best hyper-parameters. Unlike our other methods, the Q-learning baseline uses intermediate rewards, as given by the OpenAI Gym on a per-step basis. Hence, the Q-learning baseline has a slight advantage over the policy gradient methods. In all of the tasks except Copy, our stochastic optimizer uses mini-batches comprising 400 policy samples from the model. These 400 samples correspond to 40 different random sequences drawn from the environment, and 10 random policy trajectories per sequence. In other words, we set \( K = 10 \) and \( N = 40 \) as defined in (3) and (12). For MENT, we use the 10 samples to subtract the mean of the coefficient of \( \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta(a \mid h) \), which includes the contribution of the reward and entropy regularization. For UREX, we use the 10 trajectories to subtract the mean reward and normalize the importance sampling weights.
We do not subtract the mean of the normalized importance weights. For the Copy task, we use mini-batches with 200 samples using \( K = 10 \) and \( N = 20 \). Experiments are conducted using TensorFlow (Abadi et al., 2016). 8 CONCLUSION We present a variant of policy gradient, called UREX, which promotes the exploration of action sequences that yield rewards larger than what the model expects. This exploration strategy is the result of importance sampling from the optimal policy. Our experimental results demonstrate that UREX significantly outperforms other value- and policy-based methods, while being more robust to changes of hyper-parameters. By using UREX, we can solve algorithmic tasks like multi-digit addition from only episodic reward, which other methods cannot reliably solve even given the best hyper-parameters. We introduce a new algorithmic task based on binary search to advocate more research in this area, especially when the computational complexity of the solution is also of interest. Solving these tasks is not only important for developing more human-like intelligence in learning algorithms, but also important for generic reinforcement learning, where smart and efficient exploration is the key to successful methods. 9 ACKNOWLEDGMENTS We thank Sergey Levine, Irwan Bello, Corey Lynch, George Tucker, Kelvin Xu, Volodymyr Mnih, and the Google Brain team for insightful comments and discussions. REFERENCES Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. arXiv:1605.08695, 2016. Dana Angluin. Learning regular sets from queries and counterexamples. Information and Computation, 1987. Kavosh Asadi and Michael L Littman. A new softmax operator for reinforcement learning. arXiv:1612.05628, 2016. Mohammad Gheshlaghi Azar, Vicenç Gómez, and Hilbert J Kappen. Dynamic policy programming. Journal of Machine Learning Research, 13(Nov):3207–3245, 2012. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015. Marc G. Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Rémi Munos. Unifying count-based exploration and intrinsic motivation. NIPS, 2016. Jörg Bornschein and Yoshua Bengio. Reweighted wake-sleep. arXiv:1406.2751, 2014. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv:1606.01540, 2016. Roy Fox, Ari Pakman, and Naftali Tishby. G-learning: Taming the noise in reinforcement learning via soft updates. Uncertainty in Artificial Intelligence, 2016. URL http://arxiv.org/abs/1512.08562. Gene Golub. Some modified matrix eigenvalue problems. SIAM Review, 1987. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio G. Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adria P. Badia, Karl M. Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. Hybrid computing using a neural network with dynamic external memory. Nature, 2016. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 1997. Leslie Pack Kaelbling. Learning in embedded systems. MIT Press, 1993. Lukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. ICLR, 2016. Michael Kearns and Satinder Singh.
Near-optimal reinforcement learning in polynomial time. Machine Learning, 2002. Charles Kemp, Noah Goodman, and Joshua Tenenbaum. Learning and using relational theories. NIPS, 2007. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015. N. Lavrac and S. Dzeroski. Inductive Logic Programming: Theory and Methods. Ellis Horwood, 1994. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing Atari with deep reinforcement learning. arXiv:1312.5602, 2013. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, et al. Human-level control through deep reinforcement learning. Nature, 2015. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. ICML, 2016. Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012. Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. ICLR, 2016. Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. Reward augmented maximum likelihood for neural structured prediction. NIPS, 2016. Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. NIPS, 2016. Art B. Owen. Monte Carlo theory, methods and examples. 2013. Jan Peters and Stefan Schaal. Reinforcement learning by reward-weighted regression for operational space control. In Proceedings of the 24th International Conference on Machine Learning, pp. 745–750. ACM, 2007. Scott E. Reed and Nando de Freitas. Neural programmer-interpreters. ICLR, 2016. Stuart Jonathan Russell, Peter Norvig, John F Canny, Jitendra M Malik, and Douglas D Edwards. Artificial intelligence: a modern approach, volume 2. Prentice Hall, Upper Saddle River, 2003. Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. ICLR, 2016. Jürgen Schmidhuber. Optimal artificial curiosity, creativity, music, and the fine arts. Connection Science, 2006. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. ICLR, 2016. David Silver, Aja Huang, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016. Bradly C. Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv:1507.00814, 2015. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. NIPS, 2014. Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998. Sebastian B Thrun. Efficient exploration in reinforcement learning. Technical report, 1992. Michel Tokic. Adaptive \( \varepsilon \)-greedy exploration in reinforcement learning based on value differences. AAAI, 2010. Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. AAAI, 2016. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. NIPS, 2015. Daan Wierstra, Tom Schaul, Jan Peters, and Juergen Schmidhuber. Episodic reinforcement learning by logistic reward-weighted regression. In International Conference on Artificial Neural Networks, pp. 407–416. Springer, 2008. Ronald J. Williams.
Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992. Ronald J Williams and Jing Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 1991. Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv:1410.4615, 2014. Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv:1505.00521, 2015. Brian D Ziebart. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. PhD thesis, Carnegie Mellon University, 2010. A OPTIMAL POLICY FOR THE UREX OBJECTIVE To derive the form of the optimal policy for the UREX objective (11), note that for each \( h \) one would like to maximize \[ \sum_{a \in \mathcal{A}} \left[ \pi_\theta(a) r(a) + \tau \pi^*_\tau(a) \log \pi_\theta(a) \right], \] (17) subject to the constraint \( \sum_{a \in \mathcal{A}} \pi_\theta(a) = 1 \). To enforce the constraint, we introduce a Lagrange multiplier \( \alpha \) and aim to maximize \[ \sum_{a \in \mathcal{A}} \left[ \pi_\theta(a) r(a) + \tau \pi^*_\tau(a) \log \pi_\theta(a) - \alpha \pi_\theta(a) \right] + \alpha . \] (18) Since the gradient of the Lagrangian (18) with respect to \( \theta \) is given by \[ \sum_{a \in \mathcal{A}} \frac{\mathrm{d}\pi_\theta(a)}{\mathrm{d}\theta} \left[ r(a) + \tau \frac{\pi^*_\tau(a)}{\pi_\theta(a)} - \alpha \right] , \] (19) the optimal choice for \( \pi_\theta \) is achieved by setting \[ \pi_\theta(a) = \frac{\tau \pi^*_\tau(a)}{\alpha - r(a)} \quad \text{for all } a \in \mathcal{A} , \] (20) forcing the gradient to be zero. The Lagrange multiplier \( \alpha \) can then be chosen so that \( \sum_{a \in \mathcal{A}} \pi_\theta(a) = 1 \) while also satisfying \( \alpha > \max_{a \in \mathcal{A}} r(a) \); see e.g. (Golub, 1987). B ROBUSTNESS TO HYPER-PARAMETERS Tables 3-8 provide more details on different cells of Table 1. Each table presents the results of MENT using the best temperature \( \tau \) vs. UREX with \( \tau = 0.1 \) on a variety of learning rates and clipping values. Each cell is the number of trials out of 5 random restarts that succeed at solving the task using a specific \( \eta \) and \( c \). Table 3: Copy – number of successful attempts out of 5. <table> <tr> <th rowspan="2"> </th> <th colspan="3">MENT (\( \tau = 0.01 \))</th> <th colspan="3">UREX (\( \tau = 0.1 \))</th> </tr> <tr> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> </tr> <tr> <td>c = 1</td> <td>3</td> <td>5</td> <td>5</td> <td>5</td> <td>5</td> <td>2</td> </tr> <tr> <td>c = 10</td> <td>5</td> <td>4</td> <td>5</td> <td>5</td> <td>5</td> <td>3</td> </tr> <tr> <td>c = 40</td> <td>3</td> <td>5</td> <td>5</td> <td>4</td> <td>4</td> <td>1</td> </tr> <tr> <td>c = 100</td> <td>4</td> <td>5</td> <td>5</td> <td>4</td> <td>5</td> <td>2</td> </tr> </table>
Table 4: DuplicatedInput – number of successful attempts out of 5. <table> <tr> <th rowspan="2"> </th> <th colspan="3">MENT (\( \tau = 0.01 \))</th> <th colspan="3">UREX (\( \tau = 0.1 \))</th> </tr> <tr> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> </tr> <tr> <td>c = 1</td> <td>3</td> <td>5</td> <td>3</td> <td>5</td> <td>5</td> <td>5</td> </tr> <tr> <td>c = 10</td> <td>2</td> <td>5</td> <td>3</td> <td>5</td> <td>5</td> <td>5</td> </tr> <tr> <td>c = 40</td> <td>4</td> <td>5</td> <td>3</td> <td>5</td> <td>5</td> <td>5</td> </tr> <tr> <td>c = 100</td> <td>2</td> <td>5</td> <td>4</td> <td>5</td> <td>5</td> <td>5</td> </tr> </table> Table 5: RepeatCopy – number of successful attempts out of 5. <table> <tr> <th rowspan="2"> </th> <th colspan="3">MENT (\( \tau = 0.01 \))</th> <th colspan="3">UREX (\( \tau = 0.1 \))</th> </tr> <tr> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> </tr> <tr> <td>c = 1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>2</td> <td>0</td> </tr> <tr> <td>c = 10</td> <td>0</td> <td>0</td> <td>2</td> <td>0</td> <td>4</td> <td>0</td> </tr> <tr> <td>c = 40</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>2</td> <td>0</td> </tr> <tr> <td>c = 100</td> <td>0</td> <td>0</td> <td>3</td> <td>0</td> <td>3</td> <td>0</td> </tr> </table> Table 6: Reverse – number of successful attempts out of 5. <table> <tr> <th rowspan="2"> </th> <th colspan="3">MENT \( (\tau = 0.1) \)</th> <th colspan="3">UREX \( (\tau = 0.1) \)</th> </tr> <tr> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> </tr> <tr> <td>\( c = 1 \)</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>\( c = 10 \)</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>4</td> <td>0</td> </tr> <tr> <td>\( c = 40 \)</td> <td>0</td> <td>2</td> <td>0</td> <td>0</td> <td>2</td> <td>1</td> </tr> <tr> <td>\( c = 100 \)</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>2</td> <td>1</td> </tr> </table> Table 7: ReversedAddition – number of successful attempts out of 5. <table> <tr> <th rowspan="2"> </th> <th colspan="3">MENT \( (\tau = 0.01) \)</th> <th colspan="3">UREX \( (\tau = 0.1) \)</th> </tr> <tr> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> </tr> <tr> <td>\( c = 1 \)</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>4</td> </tr> <tr> <td>\( c = 10 \)</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>3</td> <td>2</td> </tr> <tr> <td>\( c = 40 \)</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>5</td> </tr> <tr> <td>\( c = 100 \)</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td>3</td> </tr> </table> Table 8: BinarySearch – number of successful attempts out of 5.
<table> <tr> <th rowspan="2"> </th> <th colspan="3">MENT \( (\tau = 0.01) \)</th> <th colspan="3">UREX \( (\tau = 0.1) \)</th> </tr> <tr> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> <th>\( \eta = 0.1 \)</th> <th>\( \eta = 0.01 \)</th> <th>\( \eta = 0.001 \)</th> </tr> <tr> <td>\( c = 1 \)</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>4</td> <td>0</td> </tr> <tr> <td>\( c = 10 \)</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>3</td> <td>0</td> </tr> <tr> <td>\( c = 40 \)</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>3</td> <td>0</td> </tr> <tr> <td>\( c = 100 \)</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>2</td> <td>0</td> </tr> </table> C GENERALIZATION TO LONGER SEQUENCES Table 9 provides a more detailed look into the generalization performance of the trained models on Copy, DuplicatedInput, and ReversedAddition. The tables show how the number of models which can solve the task correctly drops off as the length of the input increases. Table 9: Generalization results. Each cell includes the number of runs out of 60 different hyper-parameters and random initializations that achieve 100% accuracy on input of length up to the specified length. The bottom row is the maximal length (\( \leq 2000 \)) up to which at least one model achieves 100% accuracy. <table> <tr> <th rowspan="2"></th> <th colspan="2">Copy</th> <th colspan="2">DuplicatedInput</th> <th colspan="2">ReversedAddition</th> </tr> <tr> <th>MENT</th> <th>UREX</th> <th>MENT</th> <th>UREX</th> <th>MENT</th> <th>UREX</th> </tr> <tr> <td>30</td> <td>54</td> <td>45</td> <td>44</td> <td>60</td> <td>1</td> <td>18</td> </tr> <tr> <td>100</td> <td>51</td> <td>45</td> <td>36</td> <td>56</td> <td>0</td> <td>6</td> </tr> <tr> <td>500</td> <td>27</td> <td>22</td> <td>19</td> <td>25</td> <td>0</td> <td>5</td> </tr> <tr> <td>1000</td> <td>3</td> <td>2</td> <td>12</td> <td>17</td> <td>0</td> <td>5</td> </tr> <tr> <td>2000</td> <td>0</td> <td>0</td> <td>6</td> <td>9</td> <td>0</td> <td>5</td> </tr> <tr> <td>Max</td> <td>1126</td> <td>1326</td> <td>2000</td> <td>2000</td> <td>38</td> <td>2000</td> </tr> </table> D EXAMPLE EXECUTION TRACES We provide the traces of two trained agents on the ReversedAddition task (Figure 3) and the BinarySearch task (Table 10). Figure 3: A graphical representation of a trained addition agent. The agent begins at the top left corner of a \(2 \times n\) grid of ternary digits. At each time step, it may move to the left, right, up, or down (observing one digit at a time) and optionally write to output. Table 10: Example trace on the BinarySearch task where \(n = 512\) and the number to find is at position 100. At time \(t\) the agent observes \(s_t\) from the environment and samples an action \(a_t\). We also include the inferred range of indices to which the agent has narrowed down the position of \(x\). We see that the first several steps of the agent follow a binary search algorithm. 
However, at some point the agent switches to a linear search procedure. <table> <tr> <th>\(R_0\)</th> <th>\(R_1\)</th> <th>\(R_2\)</th> <th>\(s_t\)</th> <th>\(a_t\)</th> <th>Inferred range</th> </tr> <tr> <td>512</td> <td>0</td> <td>0</td> <td>–</td> <td>AVG(2)</td> <td>(0, 512)</td> </tr> <tr> <td>512</td> <td>0</td> <td>256</td> <td>–</td> <td>CMP(2)</td> <td>(0, 512)</td> </tr> <tr> <td>512</td> <td>0</td> <td>256</td> <td>&lt;</td> <td>DIV(0)</td> <td>(0, 256)</td> </tr> <tr> <td>256</td> <td>0</td> <td>256</td> <td>–</td> <td>AVG(2)</td> <td>(0, 256)</td> </tr> <tr> <td>256</td> <td>0</td> <td>128</td> <td>–</td> <td>CMP(2)</td> <td>(0, 256)</td> </tr> <tr> <td>256</td> <td>0</td> <td>128</td> <td>&lt;</td> <td>DIV(0)</td> <td>(0, 128)</td> </tr> <tr> <td>128</td> <td>0</td> <td>128</td> <td>–</td> <td>AVG(2)</td> <td>(0, 128)</td> </tr> <tr> <td>128</td> <td>0</td> <td>64</td> <td>–</td> <td>CMP(2)</td> <td>(0, 128)</td> </tr> <tr> <td>128</td> <td>0</td> <td>64</td> <td>&gt;</td> <td>AVG(1)</td> <td>(64, 128)</td> </tr> <tr> <td>128</td> <td>96</td> <td>64</td> <td>–</td> <td>CMP(1)</td> <td>(64, 128)</td> </tr> <tr> <td>128</td> <td>96</td> <td>64</td> <td>&gt;</td> <td>AVG(2)</td> <td>(96, 128)</td> </tr> <tr> <td>128</td> <td>96</td> <td>112</td> <td>–</td> <td>CMP(2)</td> <td>(96, 128)</td> </tr> <tr> <td>128</td> <td>96</td> <td>112</td> <td>&lt;</td> <td>AVG(1)</td> <td>(96, 112)</td> </tr> <tr> <td>128</td> <td>120</td> <td>112</td> <td>–</td> <td>CMP(2)</td> <td>(96, 112)</td> </tr> <tr> <td>128</td> <td>120</td> <td>112</td> <td>&lt;</td> <td>DIV(1)</td> <td>(96, 112)</td> </tr> <tr> <td>128</td> <td>60</td> <td>112</td> <td>–</td> <td>AVG(2)</td> <td>(96, 112)</td> </tr> <tr> <td>128</td> <td>60</td> <td>94</td> <td>–</td> <td>CMP(2)</td> <td>(96, 112)</td> </tr> <tr> <td>128</td> <td>60</td> <td>94</td> <td>&gt;</td> <td>AVG(1)</td> <td>(96, 112)</td> </tr> <tr> <td>128</td> <td>111</td> <td>94</td> <td>–</td> <td>CMP(1)</td> <td>(96, 112)</td> </tr> <tr> <td>128</td> <td>111</td> <td>94</td> <td>&lt;</td> <td>INC(1)</td> <td>(96, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>94</td> <td>–</td> <td>INC(2)</td> <td>(96, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>95</td> <td>–</td> <td>CMP(2)</td> <td>(96, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>95</td> <td>&gt;</td> <td>INC(2)</td> <td>(96, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>96</td> <td>–</td> <td>CMP(2)</td> <td>(96, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>96</td> <td>&gt;</td> <td>INC(2)</td> <td>(96, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>97</td> <td>–</td> <td>CMP(2)</td> <td>(96, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>97</td> <td>&gt;</td> <td>INC(2)</td> <td>(97, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>98</td> <td>–</td> <td>CMP(2)</td> <td>(97, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>98</td> <td>&gt;</td> <td>INC(2)</td> <td>(98, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>99</td> <td>–</td> <td>CMP(2)</td> <td>(98, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>99</td> <td>&gt;</td> <td>INC(2)</td> <td>(99, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>100</td> <td>–</td> <td>CMP(2)</td> <td>(99, 111)</td> </tr> <tr> <td>128</td> <td>112</td> <td>100</td> <td>=</td> <td>–</td> <td>–</td> </tr> </table> E VARIANCE OF IMPORTANCE WEIGHTS Figure 4: This plot shows the variance of the importance weights in the UREX
updates, as well as the average reward, for two successful runs. We see that the variance starts off high and reaches near zero towards the end, when the optimal policy is found. In the first plot, we see a dip and rise in the variance, which corresponds to a plateau and subsequent increase in the average reward. F A SIMPLE BANDIT TASK ![Plot showing average performance of UREX (blue) and MENT (green) over 100 repeats](page_246_823_1097_355.png) Figure 5: In this plot we present the average performance of UREX (blue) and MENT (green) over 100 repeats of a bandit-like task after choosing optimal hyperparameters for each method. In the task, the agent chooses one of 10,000 actions at each step and receives a payoff corresponding to the entry in a reward vector \( r = (r_1, ..., r_{10,000}) \) such that \( r_i = u_i^\beta \), where \( u_i \in [0, 1] \) has been sampled randomly and independently from a uniform distribution. We parameterize the policy with a weight vector \( \theta \in \mathbb{R}^{30} \) such that \( \pi_\theta(a) \propto \exp(\phi^{(a)} \cdot \theta) \), where the basis vectors \( \phi^{(a)} \in \mathbb{R}^{30} \) for each action are sampled from a standard normal distribution. The plot shows the average rewards obtained by setting \( \beta = 8 \) over 100 experiments, consisting of 10 repeats (where \( r \) and \( \Phi = (\phi^{(1)}, \ldots, \phi^{(10,000)}) \) are redrawn at the start of each repeat), with 10 random restarts within each repeat (keeping \( r \) and \( \Phi \) fixed but reinitializing \( \theta \)). Thus, this task presents a relatively simple problem with a large action space, and we again see that UREX outperforms MENT.
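To make the setup of Figure 5 concrete, the following is a minimal NumPy sketch of the bandit task and its softmax policy; all names, the seed, and the value range used to draw the array are our choices for illustration, not taken from the original implementation.

```python
import numpy as np

# Minimal sketch of the bandit-like task in Figure 5 (names are ours).
rng = np.random.default_rng(0)
num_actions, dim, beta = 10_000, 30, 8.0

u = rng.uniform(size=num_actions)               # u_i ~ U[0, 1]
r = u ** beta                                   # payoff vector, r_i = u_i^beta
phi = rng.standard_normal((num_actions, dim))   # fixed basis vectors phi^(a)

theta = rng.standard_normal(dim)                # policy parameters (reinitialized per restart)

def policy_probs(theta):
    """Softmax policy: pi_theta(a) proportional to exp(phi^(a) . theta)."""
    logits = phi @ theta
    logits -= logits.max()                      # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

p = policy_probs(theta)
a = rng.choice(num_actions, p=p)                # sample one of the 10,000 actions
print("sampled action", a, "reward", r[a])
```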
ABSTRACT This paper presents a novel form of policy gradient for model-free reinforcement learning (RL) with improved exploration properties. Current policy-based methods use entropy regularization to encourage undirected exploration of the reward landscape, which is ineffective in high dimensional spaces with sparse rewards. We propose a more directed exploration strategy that promotes exploration of under-appreciated reward regions. An action sequence is considered under-appreciated if its log-probability under the current policy under-estimates its resulting reward. The proposed exploration strategy is easy to implement, requiring small modifications to the REINFORCE algorithm. We evaluate the approach on a set of algorithmic tasks that have long challenged RL methods. Our approach reduces hyper-parameter sensitivity and demonstrates significant improvements over baseline methods. The proposed algorithm successfully solves a benchmark multi-digit addition task and generalizes to long sequences, which, to our knowledge, is the first time that a pure RL method has solved addition using only reward feedback. 1 INTRODUCTION Humans can reason about symbolic objects and solve algorithmic problems. After learning to count and then manipulate numbers via simple arithmetic, people eventually learn to invent new algorithms and even reason about their correctness and efficiency. The ability to invent new algorithms is fundamental to artificial intelligence (AI). Although symbolic reasoning has a long history in AI (Russell et al., 2003), only recently have statistical machine learning and neural network approaches begun to make headway in automated algorithm discovery (Reed & de Freitas, 2016; Kaiser & Sutskever, 2016; Neelakantan et al., 2016), which would constitute an important milestone on the path to AI. Nevertheless, most of the recent successes depend on the use of strong supervision to learn a mapping from a set of training inputs to outputs by maximizing a conditional log-likelihood, very much like neural machine translation systems (Sutskever et al., 2014; Bahdanau et al., 2015). Such a dependence on strong supervision is a significant limitation that does not match the ability of people to invent new algorithmic procedures based solely on trial and error. By contrast, reinforcement learning (RL) methods (Sutton & Barto, 1998) hold the promise of searching over discrete objects such as symbolic representations of algorithms by considering much weaker feedback in the form of a simple verifier that tests the correctness of a program execution on a given problem instance. Despite the recent excitement around the use of RL to tackle Atari games (Mnih et al., 2015) and Go (Silver et al., 2016), standard RL methods are not yet able to consistently and reliably solve algorithmic tasks in all but the simplest cases (Zaremba & Sutskever, 2014). A key property of algorithmic problems that makes them challenging for RL is reward sparsity, i.e., a policy usually has to get a long action sequence exactly right to obtain a non-zero reward. *Work done as a member of the Google Brain Residency program [g.co/brainresidency] †Also at the Department of Computing Science, University of Alberta, daes@ualberta.ca We believe one of the key factors limiting the effectiveness of current RL methods in a sparse reward setting is the use of *undirected exploration* strategies (Thrun, 1992), such as \( \epsilon \)-greedy and entropy regularization (Williams & Peng, 1991).
For long action sequences with delayed sparse reward, it is hopeless to explore the space uniformly and blindly. Instead, we propose a formulation to encourage exploration of action sequences that are *under-appreciated* by the current policy. Our formulation considers an action sequence to be under-appreciated if the model’s log-probability for that sequence under-estimates its resulting reward. Exploring under-appreciated states and actions encourages the policy to have a better calibration between its log-probabilities and observed reward values, even for action sequences with negligible rewards. This effectively increases exploration around neglected action sequences. We term our proposed technique *under-appreciated reward exploration (UREX)*. We show that the objective given by UREX is a combination of a mode seeking objective (standard REINFORCE) and a mean seeking term, which provides a well-motivated trade-off between exploitation and exploration. To empirically evaluate our method, we take a set of algorithmic tasks such as sequence reversal, multi-digit addition, and binary search. We choose to focus on these tasks because, although simple, they present a difficult sparse reward setting which has limited the success of standard RL approaches. The experiments demonstrate that UREX significantly outperforms baseline RL methods, such as entropy regularized REINFORCE and one-step Q-learning, especially on the more difficult tasks, such as multi-digit addition. Moreover, UREX is shown to be more robust to changes of hyper-parameters, which makes hyper-parameter tuning less tedious in practice. In addition to introducing a new variant of policy gradient with improved performance, our paper is the first to demonstrate strong results for an RL method on algorithmic tasks. To our knowledge, the addition task has not been solved by any model-free reinforcement learning approach. We observe that some of the policies learned by UREX can successfully generalize to long sequences; *e.g.*, in 2 out of 5 random restarts, the policy learned by UREX for the addition task correctly generalizes to addition of numbers with 2000 digits with no mistakes, even though training sequences are at most 33 digits long. 2 NEURAL NETWORKS FOR LEARNING ALGORITHMS Although research on using neural networks to learn algorithms has witnessed a surge of recent interest, the problem of program induction from examples has a long history in many fields, including inductive logic programming (Lavrac & Dzeroski, 1994), relational learning (Kemp et al., 2007), and regular language learning (Angluin, 1987). Rather than presenting a comprehensive survey of program induction here, we focus on neural network approaches to algorithmic tasks and highlight the relative simplicity of our neural network architecture. Most successful applications of neural networks to algorithmic tasks rely on strong supervision, where the inputs and target outputs are completely known *a priori*. Given a dataset of examples, one learns the network parameters by maximizing the conditional likelihood of the outputs via back-propagation (*e.g.*, Reed & de Freitas, 2016; Kaiser & Sutskever, 2016; Vinyals et al., 2015). However, target outputs may not be available for novel tasks, for which no prior algorithm is known.
A more desirable approach to inducing algorithms, followed in this paper, advocates using self-driven learning strategies that only receive reinforcement based on the outputs produced. Hence, just by having access to a verifier for an algorithmic problem, one can aim to learn an algorithm. For example, if one does not know how to sort an array, but can check the extent to which an array is sorted, then one can provide the reward signal necessary for learning sorting algorithms. We formulate learning algorithms as an RL problem and make use of model-free policy gradient methods to optimize a set of parameters associated with the algorithm. In this setting, the goal is to learn a policy \( \pi_\theta \) that given an observed state \( s_t \) at step \( t \), estimates a distribution over the next action \( a_t \), denoted \( \pi_\theta(a_t \mid s_t) \). Actions represent the commands within the algorithm and states represent the joint state of the algorithm and the environment. Previous work in this area has focused on augmenting a neural network with additional structure and increased capabilities (Zaremba & Sutskever, 2015; Graves et al., 2016). In contrast, we utilize a simple architecture based on a standard recurrent neural network (RNN) with LSTM cells (Hochreiter & Schmidhuber, 1997) as depicted in Figure 1. ![The agent's RNN architecture that represents a policy. The environment is initialized with a latent vector h. At time step t, the environment produces a state s_t, and the agent takes as input s_t and the previously sampled action a_{t-1} and produces a distribution over the next action \( \pi_\theta(a_t \mid s_t) \). Then, we sample a new action a_t and apply it to the environment.](page_246_183_1097_246.png) Figure 1: The agent’s RNN architecture that represents a policy. The environment is initialized with a latent vector h. At time step t, the environment produces a state s_t, and the agent takes as input s_t and the previously sampled action a_{t-1} and produces a distribution over the next action \( \pi_\theta(a_t \mid s_t) \). Then, we sample a new action a_t and apply it to the environment. At each episode, the environment is initialized with a latent state \( h_0 \), unknown to the agent, which determines \( s_1 \) and the subsequent state transition and reward functions. Once the agent observes \( s_1 \) as the input to the RNN, the network outputs a distribution \( \pi_\theta(a_1 \mid s_1) \), from which an action a_1 is sampled. This action is applied to the environment, and the agent receives a new state observation s_2. The state s_2 and the previous action a_1 are then fed into the RNN and the process repeats until the end of the episode. Upon termination, a reward signal is received. 3 LEARNING A POLICY BY MAXIMIZING EXPECTED REWARD We start by discussing the most common form of policy gradient, REINFORCE (Williams, 1992), and its entropy regularized variant (Williams & Peng, 1991). REINFORCE has been applied to model-free policy-based learning with neural networks and algorithmic domains (Zaremba & Sutskever, 2015; Graves et al., 2016). The goal is to learn a policy \( \pi_\theta \) that, given an observed state s_t at step t, estimates a distribution over the next action a_t, denoted \( \pi_\theta(a_t \mid s_t) \). The environment is initialized with a latent vector, h, which determines the initial observed state \( s_1 = g(h) \), and the transition function \( s_{t+1} = f(s_t, a_t \mid h) \).
Note that the use of nondeterministic transitions f as in Markov decision processes (MDP) may be recovered by assuming that h includes the random seed for any nondeterministic functions. Given a latent state h, and \( s_{1:T} \equiv (s_1, \ldots, s_T) \), the model probability of an action sequence \( a_{1:T} \equiv (a_1, \ldots, a_T) \) is expressed as, \[ \pi_\theta(a_{1:T} \mid h) = \prod_{t=1}^T \pi_\theta(a_t \mid s_t) , \quad \text{where} \quad s_1 = g(h), \quad s_{t+1} = f(s_t, a_t \mid h) \quad \text{for} \ 1 \leq t < T . \] The environment provides a reward at the end of the episode, denoted \( r(a_{1:T} \mid h) \). For ease of readability we drop the subscript from \( a_{1:T} \) and simply write \( \pi_\theta(a \mid h) \) and \( r(a \mid h) \). The objective used to optimize the policy parameters, \( \theta \), consists of maximizing expected reward under actions drawn from the policy, plus an optional maximum entropy regularizer. Given a distribution over initial latent environment states \( p(h) \), we express the regularized expected reward as, \[ \mathcal{O}_{RL}(\theta; \tau) = \mathbb{E}_{h \sim p(h)} \left\{ \sum_{a \in \mathcal{A}} \pi_\theta(a \mid h) \left[ r(a \mid h) - \tau \log \pi_\theta(a \mid h) \right] \right\} . \] (1) When \( \pi_\theta \) is a non-linear function defined by a neural network, finding the global optimum of \( \theta \) is challenging, and one often resorts to gradient-based methods to find a local optimum of \( \mathcal{O}_{RL}(\theta; \tau) \). Given that \( \frac{d}{d\theta} \pi_\theta(a) = \pi_\theta(a) \frac{d}{d\theta} \log \pi_\theta(a) \) for any a such that \( \pi_\theta(a) > 0 \), one can verify that, \[ \frac{d}{d\theta} \mathcal{O}_{RL}(\theta; \tau \mid h) = \sum_{a \in \mathcal{A}} \pi_\theta(a \mid h) \frac{d}{d\theta} \log \pi_\theta(a \mid h) \left[ r(a \mid h) - \tau \log \pi_\theta(a \mid h) - \tau \right] . \] (2) Because the space of possible actions \( \mathcal{A} \) is large, enumerating over all of the actions to compute this gradient is infeasible. Williams (1992) proposed to compute the stochastic gradient of the expected reward by using Monte Carlo samples. Using Monte Carlo samples, one first draws \( N \) *i.i.d.* samples from the latent environment states \( \{ \mathbf{h}^{(n)} \}_{n=1}^N \), and then draws \( K \) *i.i.d.* samples \( \{ \mathbf{a}^{(k)} \}_{k=1}^K \) from \( \pi_\theta(\mathbf{a} \mid \mathbf{h}^{(n)}) \) to approximate the gradient of (1) by using (2) as, \[ \frac{\mathrm{d}}{\mathrm{d}\theta} \mathcal{O}_{\mathrm{RL}}(\theta; \tau) \approx \frac{1}{NK} \sum_{n=1}^N \sum_{k=1}^K \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)}) \left[ r(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)}) - \tau \log \pi_\theta(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)}) - \tau \right]. \] (3) This reparametrization of the gradients is the key to the REINFORCE algorithm. To reduce the variance of (3), one uses rewards \( \hat{r} \) that are shifted by some offset values, \[ \hat{r}(\mathbf{a}^{(k)} \mid \mathbf{h}) = r(\mathbf{a}^{(k)} \mid \mathbf{h}) - b(\mathbf{h}) , \] (4) where \( b \) is known as a *baseline* or sometimes called a *critic*. Note that subtracting any offset from the rewards in (1) simply results in shifting the objective \( \mathcal{O}_{\mathrm{RL}} \) by a constant. Unfortunately, directly maximizing expected reward (*i.e.*, when \( \tau = 0 \)) is prone to getting trapped in a local optimum.
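As an illustration, the following is a minimal sketch of how the per-sample scalar coefficients of the Monte Carlo estimate in (3), combined with the baseline shift of (4), could be formed; the function name is ours, and the gradients \( \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta \) themselves would come from backpropagation through the policy network.

```python
import numpy as np

# Sketch of the scalar coefficients in the Monte Carlo estimate (3);
# each coefficient multiplies d/dtheta log pi_theta(a^(k) | h^(n)).
def reinforce_coefficients(rewards, log_probs, tau=0.01, baseline=None):
    rewards = np.asarray(rewards, dtype=float)
    log_probs = np.asarray(log_probs, dtype=float)
    if baseline is None:
        baseline = rewards.mean()        # a simple mean baseline b(h), as in (4)
    return (rewards - baseline) - tau * log_probs - tau

# Example: three sampled action sequences with their rewards and log-probs.
coefs = reinforce_coefficients(rewards=[1.0, 0.0, 2.5], log_probs=[-3.2, -1.1, -4.0])
print(coefs)  # weight each sample's log-prob gradient by these scalars
```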
To combat this tendency to get trapped in local optima, Williams & Peng (1991) augmented the expected reward objective by including a maximum entropy regularizer (\( \tau > 0 \)) to promote greater exploration. We will refer to this variant of REINFORCE as MENT (maximum entropy exploration). 4 UNDER-APPRECIATED REWARD EXPLORATION (UREX) To explain our novel form of policy gradient, we first note that the optimal policy \( \pi^*_\tau \), which globally maximizes \( \mathcal{O}_{\mathrm{RL}}(\theta; \tau \mid \mathbf{h}) \) in (1) for any \( \tau > 0 \), can be expressed as, \[ \pi^*_\tau(\mathbf{a} \mid \mathbf{h}) = \frac{1}{Z(\mathbf{h})} \exp \left\{ \frac{1}{\tau} r(\mathbf{a} \mid \mathbf{h}) \right\} , \] (5) where \( Z(\mathbf{h}) \) is a normalization constant making \( \pi^*_\tau \) a distribution over the space of action sequences \( \mathcal{A} \). One can verify this by first acknowledging that, \[ \mathcal{O}_{\mathrm{RL}}(\theta; \tau \mid \mathbf{h}) = -\tau D_{\mathrm{KL}} \left( \pi_\theta(\cdot \mid \mathbf{h}) \parallel \pi^*_\tau(\cdot \mid \mathbf{h}) \right) . \] (6) Since \( D_{\mathrm{KL}} (p \parallel q) \) is non-negative and zero iff \( p = q \), the policy \( \pi^*_\tau \) defined in (5) maximizes \( \mathcal{O}_{\mathrm{RL}} \). That said, given a particular form of \( \pi_\theta \), finding \( \theta \) that exactly characterizes \( \pi^*_\tau \) may not be feasible. The KL divergence \( D_{\mathrm{KL}} (\pi_\theta \parallel \pi^*_\tau) \) is known to be mode seeking (Murphy, 2012, Section 21.2.2) even with entropy regularization (\( \tau > 0 \)). Learning a policy by optimizing this direction of the KL is prone to falling into a local optimum, resulting in a sub-optimal policy that omits some of the modes of \( \pi^*_\tau \). Although entropy regularization helps mitigate this issue, as confirmed in our experiments, it is not an effective exploration strategy as it is undirected and requires a small regularization coefficient \( \tau \) to avoid too much random exploration. Instead, we propose a directed exploration strategy that improves the mean seeking behavior of policy gradient in a principled way. We start by considering the alternate mean seeking direction of the KL divergence, \( D_{\mathrm{KL}} (\pi^*_\tau \parallel \pi_\theta) \). Norouzi et al. (2016) considered this direction of the KL to directly learn a policy by optimizing \[ \mathcal{O}_{\mathrm{RAML}}(\theta; \tau) = \mathbb{E}_{\mathbf{h} \sim p(\mathbf{h})} \left\{ \tau \sum_{\mathbf{a} \in \mathcal{A}} \pi^*_\tau(\mathbf{a} \mid \mathbf{h}) \log \pi_\theta(\mathbf{a} \mid \mathbf{h}) \right\} , \] (7) for structured prediction. This objective has the same optimal solution \( \pi^*_\tau \) as \( \mathcal{O}_{\mathrm{RL}} \) since, \[ \mathcal{O}_{\mathrm{RAML}}(\theta; \tau \mid \mathbf{h}) = -\tau D_{\mathrm{KL}} \left( \pi^*_\tau(\cdot \mid \mathbf{h}) \parallel \pi_\theta(\cdot \mid \mathbf{h}) \right) + \mathrm{const} . \] (8) Norouzi et al. (2016) argue that in some structured prediction problems when one can draw samples from \( \pi^*_\tau \), optimizing (7) is more effective than (1), since no sampling from a non-stationary policy \( \pi_\theta \) is required. If \( \pi_\theta \) is a log-linear model of a set of features, \( \mathcal{O}_{\mathrm{RAML}} \) is convex in \( \theta \) whereas \( \mathcal{O}_{\mathrm{RL}} \) is not, even in the log-linear case.
Unfortunately, in scenarios where the reward landscape is unknown or computing the normalization constant \( Z(\mathbf{h}) \) is intractable, sampling from \( \pi^*_\tau \) is not straightforward. In RL problems, the reward landscape is completely unknown, hence sampling from \( \pi^*_\tau \) is intractable. This paper proposes to approximate the expectation with respect to \( \pi^*_\tau \) by using *self-normalized importance sampling* (Owen, 2013), where the proposal distribution is \( \pi_\theta \) and the reference distribution is \( \pi^*_\tau \). For importance sampling, one draws \( K \) *i.i.d.* samples \( \{ \mathbf{a}^{(k)} \}_{k=1}^K \) from \( \pi_\theta(\mathbf{a} \mid \mathbf{h}) \) and computes a set of normalized importance weights to approximate \( \mathcal{O}_{\text{RAML}}(\theta; \tau \mid \mathbf{h}) \) as, \[ \mathcal{O}_{\text{RAML}}(\theta; \tau \mid \mathbf{h}) \approx \tau \sum_{k=1}^K \frac{w_\tau(\mathbf{a}^{(k)} \mid \mathbf{h})}{\sum_{m=1}^K w_\tau(\mathbf{a}^{(m)} \mid \mathbf{h})} \log \pi_\theta(\mathbf{a}^{(k)} \mid \mathbf{h}) , \] (9) where \( w_\tau(\mathbf{a}^{(k)} \mid \mathbf{h}) \propto \pi^*_\tau / \pi_\theta \) denotes an importance weight defined by, \[ w_\tau(\mathbf{a}^{(k)} \mid \mathbf{h}) = \exp \left\{ \frac{1}{\tau} r(\mathbf{a}^{(k)} \mid \mathbf{h}) - \log \pi_\theta(\mathbf{a}^{(k)} \mid \mathbf{h}) \right\} . \] (10) One can view these importance weights as evaluating the discrepancy between scaled rewards \( r / \tau \) and the policy’s log-probabilities \( \log \pi_\theta \). Among the \( K \) samples, a sample that is least appreciated by the model, *i.e.*, has the largest \( r / \tau - \log \pi_\theta \), receives the largest positive feedback in (9). In practice, we have found that just using the importance sampling RAML objective in (9) does not always yield promising solutions. Particularly, at the beginning of training, when \( \pi_\theta \) is still far away from \( \pi^*_\tau \), the variance of importance weights is too large, and the self-normalized importance sampling procedure results in poor approximations. To stabilize early phases of training and ensure that the model distribution \( \pi_\theta \) achieves large expected reward scores, we combine the expected reward and RAML objectives to benefit from the best of their mode and mean seeking behaviors. Accordingly, we propose the following objective that we call *under-appreciated reward exploration (UREX)*, \[ \mathcal{O}_{\text{UREX}}(\theta; \tau) = \mathbb{E}_{\mathbf{h} \sim p(\mathbf{h})} \left\{ \sum_{\mathbf{a} \in \mathcal{A}} \left[ \pi_\theta(\mathbf{a} \mid \mathbf{h}) r(\mathbf{a} \mid \mathbf{h}) + \tau \pi^*_\tau(\mathbf{a} \mid \mathbf{h}) \log \pi_\theta(\mathbf{a} \mid \mathbf{h}) \right] \right\} , \] (11) which is the sum of the expected reward and RAML objectives. In our preliminary experiments, we considered a composite objective of \( \mathcal{O}_{\text{RL}} + \mathcal{O}_{\text{RAML}} \), but we found that removing the entropy term is beneficial. Hence, the \( \mathcal{O}_{\text{UREX}} \) objective does not include entropy regularization. Accordingly, the optimum policy for \( \mathcal{O}_{\text{UREX}} \) is no longer \( \pi^*_\tau \), as it was for \( \mathcal{O}_{\text{RL}} \) and \( \mathcal{O}_{\text{RAML}} \). Appendix A derives the optimal policy for UREX as a function of the optimal policy for \( \mathcal{O}_{\text{RL}} \).
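To illustrate, below is a small sketch of how the normalized importance weights in (10) and the resulting per-sample coefficients of the UREX gradient (see (12) below) could be computed from sampled rewards and log-probabilities; the names are ours, and, as described next, the weights are treated as constants with respect to \( \theta \).

```python
import numpy as np

# Sketch of the self-normalized importance weights in (10) and the
# per-sample coefficients of the UREX gradient estimate (12).
def urex_coefficients(rewards, log_probs, tau=0.1):
    rewards = np.asarray(rewards, dtype=float)
    log_probs = np.asarray(log_probs, dtype=float)
    # normalized weights: softmax(r / tau - log pi), treated as constants
    z = rewards / tau - log_probs
    w = np.exp(z - z.max())
    w /= w.sum()
    r_hat = rewards - rewards.mean()      # mean-subtracted rewards (baseline b(h))
    K = len(rewards)
    return r_hat / K + tau * w            # coefficient of each log-prob gradient

print(urex_coefficients([1.0, 0.0, 2.5], [-3.2, -1.1, -4.0]))
```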
We find that the optimal policy of UREX is more sharply concentrated on the high reward regions of the action space, which may be an advantage for UREX, but we leave more analysis of this behavior to future work. To compute the gradient of \( \mathcal{O}_{\text{UREX}}(\theta; \tau) \), we use the self-normalized importance sampling estimate outlined in (9). We assume that the importance weights are constant and contribute no gradient to \( \frac{\mathrm{d}}{\mathrm{d}\theta} \mathcal{O}_{\text{UREX}}(\theta; \tau) \). To approximate the gradient, one draws \( N \) *i.i.d.* samples from the latent environment states \( \{ \mathbf{h}^{(n)} \}_{n=1}^N \), and then draws \( K \) *i.i.d.* samples \( \{ \mathbf{a}^{(k)} \}_{k=1}^K \) from \( \pi_\theta(\mathbf{a} \mid \mathbf{h}^{(n)}) \) to obtain \[ \frac{\mathrm{d}}{\mathrm{d}\theta} \mathcal{O}_{\text{UREX}}(\theta; \tau) \approx \frac{1}{N} \sum_{n=1}^N \sum_{k=1}^K \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)}) \left[ \frac{1}{K} \hat{r}(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)}) + \tau \frac{w_\tau(\mathbf{a}^{(k)} \mid \mathbf{h}^{(n)})}{\sum_{m=1}^K w_\tau(\mathbf{a}^{(m)} \mid \mathbf{h}^{(n)})} \right]. \] (12) As with REINFORCE, the rewards are shifted by an offset \( b(\mathbf{h}) \). In this gradient, the model log-probability of a sample action sequence \( \mathbf{a}^{(k)} \) is reinforced if the corresponding reward is large, or the corresponding importance weights are large, meaning that the action sequence is under-appreciated. The normalized importance weights are computed using a softmax operator \( \operatorname{softmax}(r / \tau - \log \pi_\theta) \). 5 RELATED WORK Before presenting the experimental results, we briefly review some pieces of previous work that closely relate to the UREX approach. Reward-Weighted Regression. Both RAML and UREX objectives bear some similarity to a method in continuous control known as Reward-Weighted Regression (RWR) (Peters & Schaal, 2007; Wierstra et al., 2008). Using our notation, the RWR objective is expressed as, \[ \mathcal{O}_{\text{RWR}}(\theta; \tau \mid \mathbf{h}) = \log \sum_{\mathbf{a} \in \mathcal{A}} \pi^*_\tau(\mathbf{a} \mid \mathbf{h}) \pi_\theta(\mathbf{a} \mid \mathbf{h}) \] (13) \[ \geq \sum_{\mathbf{a} \in \mathcal{A}} q(\mathbf{a} \mid \mathbf{h}) \log \frac{\pi^*_\tau(\mathbf{a} \mid \mathbf{h}) \pi_\theta(\mathbf{a} \mid \mathbf{h})}{q(\mathbf{a} \mid \mathbf{h})}. \] (14) To optimize \( \mathcal{O}_{\mathrm{RWR}} \), Peters & Schaal (2007) propose a technique inspired by the EM algorithm to maximize a variational lower bound in (14) based on a variational distribution \( q(a \mid h) \). The RWR objective can be interpreted as a log of the correlation between \( \pi^*_\tau \) and \( \pi_\theta \). By contrast, the RAML and UREX objectives are both based on a KL divergence between \( \pi^*_\tau \) and \( \pi_\theta \). To optimize the RWR objective, one formulates the gradient as, \[ \frac{\mathrm{d}}{\mathrm{d}\theta} \mathcal{O}_{\mathrm{RWR}}(\theta; \tau \mid h) = \sum_{a \in \mathcal{A}} \frac{\pi^*_\tau(a \mid h)\pi_\theta(a \mid h)}{C} \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta(a \mid h), \] (15) where \( C \) denotes the normalization factor, i.e., \( C = \sum_{a \in \mathcal{A}} \pi^*_\tau(a \mid h)\pi_\theta(a \mid h) \). The expectation with respect to \( \pi^*_\tau(a \mid h)\pi_\theta(a \mid h)/C \) on the RHS can be approximated by self-normalized importance sampling where the proposal distribution is \( \pi_\theta \).
Accordingly, one draws \( K \) Monte Carlo samples \( \{a^{(k)}\}_{k=1}^K \) i.i.d. from \( \pi_\theta(a \mid h) \) and formulates the gradient as, \[ \frac{\mathrm{d}}{\mathrm{d}\theta} \mathcal{O}_{\mathrm{RWR}}(\theta; \tau \mid h) \approx \frac{1}{K} \sum_{k=1}^K \frac{u(a^{(k)} \mid h)}{\sum_{m=1}^K u(a^{(m)} \mid h)} \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta(a^{(k)} \mid h), \] (16) where \( u(a^{(k)} \mid h) = \exp\{\frac{1}{\tau} r(a^{(k)} \mid h)\} \). There is some similarity between (16) and (9) in that they both use self-normalized importance sampling, but note the critical difference that (16) and (9) estimate the gradients of two different objectives, and hence the importance weights in (16) do not correct for the sampling distribution \( \pi_\theta(a \mid h) \) as opposed to (9). Beyond these technical differences, the optimal policy of \( \mathcal{O}_{\mathrm{RWR}} \) is a one-hot distribution with all probability mass concentrated on an action sequence with maximal reward, whereas the optimal policies for RAML and UREX are everywhere nonzero, with the probability of different action sequences being assigned proportionally to their exponentiated reward (with UREX introducing an additional re-scaling; see Appendix A). Further, the notion of under-appreciated reward exploration evident in \( \mathcal{O}_{\mathrm{UREX}} \), which is key to UREX’s performance, is missing in the RWR formulation. Exploration. The RL literature contains many different attempts at incorporating exploration that may be compared with our method. The most common exploration strategy considered in value-based RL is \( \epsilon \)-greedy Q-learning, where at each step the agent either takes the best action according to its current value approximation or with probability \( \epsilon \) takes an action sampled uniformly at random. Like entropy regularization, such an approach applies undirected exploration, but it has achieved recent success in game playing environments (Mnih et al., 2013; Van Hasselt et al., 2016; Mnih et al., 2016). Prominent approaches to improving exploration beyond \( \epsilon \)-greedy in value-based or model-based RL have focused on reducing uncertainty by prioritizing exploration toward states and actions where the agent knows the least. This basic intuition underlies work on counter and recency methods (Thrun, 1992), exploration methods based on uncertainty estimates of values (Kaelbling, 1993; Tokic, 2010), methods that prioritize learning environment dynamics (Kearns & Singh, 2002; Stadie et al., 2015), and methods that provide an intrinsic motivation or curiosity bonus for exploring unknown states (Schmidhuber, 2006; Bellemare et al., 2016). In contrast to value-based methods, exploration for policy-based RL methods is often a by-product of the optimization algorithm itself. Since algorithms like REINFORCE and Thompson sampling choose actions according to a stochastic policy, sub-optimal actions are chosen with some non-zero probability. The Q-learning algorithm may also be modified to sample an action from the softmax of the Q values rather than the argmax (Sutton & Barto, 1998). Asynchronous training has also been reported to have an exploration effect on both value- and policy-based methods. Mnih et al. (2016) report that asynchronous training can stabilize training by reducing the bias experienced by a single trainer. By using multiple separate trainers, an agent is less likely to become trapped at a policy found to be locally optimal only due to local conditions.
In the same spirit, Osband et al. (2016) use multiple Q value approximators and sample only one to act for each episode as a way to implicitly incorporate exploration. By relating the concepts of value and policy in RL, the exploration strategy we propose tries to bridge the discrepancy between the two. In particular, UREX can be viewed as a hybrid combination of value-based and policy-based exploration strategies that attempts to capture the benefits of each. Bornschein & Bengio (2014) apply the same trick to optimize the log-likelihood of latent variable models. Per-step Reward. Finally, while we restrict ourselves to episodic settings where a reward is associated with an entire episode of states and actions, much work has been done to take advantage of environments that provide per-step rewards. These include policy-based methods such as actor-critic (Mnih et al., 2016; Schulman et al., 2016) and value-based approaches based on Q-learning (Van Hasselt et al., 2016; Schaul et al., 2016). Some of these value-based methods have proposed a softening of Q-values which can be interpreted as adding a form of maximum-entropy regularizer (Asadi & Littman, 2016; Azar et al., 2012; Fox et al., 2016; Ziebart, 2010). The episodic total-reward setting that we consider is naturally harder since the credit assignment to individual actions within an episode is unclear. 6 SIX ALGORITHMIC TASKS We assess the effectiveness of the proposed approach on five algorithmic tasks from the OpenAI Gym (Brockman et al., 2016), as well as a new binary search problem. Each task is summarized below, with further details available on the Gym website[1] or in the corresponding open-source code[2]. In each case, the environment has a hidden tape and a hidden sequence. The agent observes the sequence via a pointer to a single character, which can be moved by a set of pointer control actions. Thus an action \( a_t \) is represented as a tuple \( (m, w, o) \) where \( m \) denotes how to move, \( w \) is a boolean denoting whether to write, and \( o \) is the output symbol to write.
1. Copy: The agent should emit a copy of the sequence. The pointer actions are move left and right.
2. DuplicatedInput: In the hidden tape, each character is repeated twice. The agent must deduplicate the sequence and emit every other character. The pointer actions are move left and right.
3. RepeatCopy: The agent should emit the hidden sequence once, then emit the sequence in the reverse order, then emit the original sequence again. The pointer actions are move left and right.
4. Reverse: The agent should emit the hidden sequence in the reverse order. As before, the pointer actions are move left and right.
5. ReversedAddition: The hidden tape is a \( 2 \times n \) grid of digits representing two numbers in base 3 in little-endian order. The agent must emit the sum of the two numbers, in little-endian order. The allowed pointer actions are move left, right, up, or down.
The OpenAI Gym provides an additional harder task called ReversedAddition3, which involves adding three numbers. We omit this task, since none of the methods make much progress on it. For these tasks, the input sequences encountered during training range from a length of 2 to 33 characters. A reward of 1 is given for each correct emission. On an incorrect emission, a small penalty of \(-0.5\) is incurred and the episode is terminated. The agent is also terminated and penalized with a reward of \(-1\) if the episode exceeds a certain number of steps.
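As a concrete reading of this reward scheme, the following minimal sketch (the function name and signature are ours) captures the per-emission logic just described:

```python
# Sketch of the per-emission reward logic for the tape tasks (our reading).
def emission_reward(predicted: str, target: str, steps: int, max_steps: int):
    """Return (reward, done) for a single write action."""
    if steps > max_steps:
        return -1.0, True        # episode ran too long: penalty and termination
    if predicted == target:
        return 1.0, False        # correct emission: reward of 1, episode continues
    return -0.5, True            # incorrect emission: small penalty, episode ends
```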
For the experiments using UREX and MENT, we associate an episodic sequence of actions with the total reward, defined as the sum of the per-step rewards. The experiments using Q-learning, on the other hand, used the per-step rewards. Each of the Gym tasks has a success threshold, which determines the required average reward over 100 episodes for the agent to be considered successful. We also conduct experiments on an additional algorithmic task described below:
6. BinarySearch: Given an integer \( n \), the environment has a hidden array of \( n \) distinct numbers stored in ascending order. The environment also has a query number \( x \) unknown to the agent that is contained somewhere in the array. The goal of the agent is to find the query number in the array in a small number of actions. The environment has three integer registers initialized at \( (n, 0, 0) \). At each step, the agent can interact with the environment via the following four actions:
• INC\((i)\): increment the value of register \( i \) for \( i \in \{0, 1, 2\} \).
• DIV\((i)\): divide the value of register \( i \) by 2 for \( i \in \{0, 1, 2\} \).
• AVG\((i)\): replace the value of register \( i \) with the average of the two other registers.
• CMP\((i)\): compare the value of the array cell indexed by register \( i \) with \( x \) and receive a signal indicating which value is greater.
The agent succeeds when it calls CMP on an array cell holding the value \( x \). The agent is terminated when the number of steps exceeds a maximum threshold of \( 2n+1 \) steps and receives a reward of 0. If the agent finds \( x \) at step \( t \), it receives a reward of \( 10(1-t/(2n+1)) \). We set the maximum number of steps to \( 2n+1 \) to allow the agent to perform a full linear search. A policy performing full linear search achieves an average reward of 5, because \( x \) is chosen uniformly at random from the elements of the array. A policy employing binary search can find the number \( x \) in at most \( 2\log_2 n + 1 \) steps. If \( n \) is selected uniformly at random from the range \( 32 \leq n \leq 512 \), binary search yields an optimal average reward above 9.55. We set the <i>success threshold</i> for this task to an average reward of 9. [1] gym.openai.com [2] github.com/openai/gym Table 1: Each cell shows the percentage of 60 trials with different hyper-parameters (\( \eta, c \)) and random restarts that successfully solve an algorithmic task. UREX is more robust to hyper-parameter changes than MENT. We evaluate MENT with a few temperatures and UREX with \( \tau = 0.1 \). <table> <tr> <th rowspan="2"></th> <th colspan="4">REINFORCE / MENT</th> <th>UREX</th> </tr> <tr> <th>\( \tau = 0.0 \)</th> <th>\( \tau = 0.005 \)</th> <th>\( \tau = 0.01 \)</th> <th>\( \tau = 0.1 \)</th> <th>\( \tau = 0.1 \)</th> </tr> <tr> <td>Copy</td> <td>85.0</td> <td>88.3</td> <td><b>90.0</b></td> <td>3.3</td> <td>75.0</td> </tr> <tr> <td>DuplicatedInput</td> <td>68.3</td> <td>73.3</td> <td>73.3</td> <td>0.0</td> <td><b>100.0</b></td> </tr> <tr> <td>RepeatCopy</td> <td>0.0</td> <td>0.0</td> <td>11.6</td> <td>0.0</td> <td><b>18.3</b></td> </tr> <tr> <td>Reverse</td> <td>0.0</td> <td>0.0</td> <td>3.3</td> <td>10.0</td> <td><b>16.6</b></td> </tr> <tr> <td>ReversedAddition</td> <td>0.0</td> <td>0.0</td> <td>1.6</td> <td>0.0</td> <td><b>30.0</b></td> </tr> <tr> <td>BinarySearch</td> <td>0.0</td> <td>0.0</td> <td>1.6</td> <td>0.0</td> <td><b>20.0</b></td> </tr> </table>
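To make the BinarySearch description concrete, here is a minimal sketch of such an environment; the class and method names are ours, boundary handling is simplified, and integer division stands in for unspecified rounding behavior.

```python
import random

class BinarySearchEnv:
    """Minimal sketch of the BinarySearch task described above (names ours)."""
    def __init__(self, n):
        self.n = n
        self.array = sorted(random.sample(range(10 * n), n))  # n distinct, ascending
        self.x = random.choice(self.array)                    # hidden query number
        self.reg = [n, 0, 0]                                  # registers (R0, R1, R2)
        self.t = 0
        self.max_steps = 2 * n + 1

    def step(self, op, i):
        """Apply one action; returns (reward, done, comparison signal)."""
        self.t += 1
        if self.t > self.max_steps:
            return 0.0, True, None                            # timeout: reward 0
        signal = None
        if op == "INC":
            self.reg[i] += 1
        elif op == "DIV":
            self.reg[i] //= 2
        elif op == "AVG":
            others = [self.reg[j] for j in range(3) if j != i]
            self.reg[i] = sum(others) // 2
        elif op == "CMP":
            v = self.array[min(self.reg[i], self.n - 1)]      # boundary simplified
            if v == self.x:                                   # success: 10(1 - t/(2n+1))
                return 10.0 * (1 - self.t / self.max_steps), True, "="
            signal = "<" if self.x < v else ">"
        return 0.0, False, signal
```

Replaying the first steps of Table 10 against this sketch (AVG(2), CMP(2), DIV(0), ...) reproduces the register values shown there, which is what motivated the 0-indexed registers above.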
7 EXPERIMENTS We compare our policy gradient method using under-appreciated reward exploration (UREX) against two main RL baselines: (1) REINFORCE with entropy regularization, termed MENT (Williams & Peng, 1991), where the value of \( \tau \) determines the degree of regularization and \( \tau = 0 \) recovers standard REINFORCE; and (2) one-step double Q-learning based on bootstrapping one-step future rewards. 7.1 ROBUSTNESS TO HYPER-PARAMETERS Hyper-parameter tuning is often tedious for RL algorithms. We found that the proposed UREX method significantly improves robustness to changes in hyper-parameters when compared to MENT. For our experiments, we perform a careful grid search over a set of hyper-parameters for both MENT and UREX. For any hyper-parameter setting, we run the MENT and UREX methods 5 times with different random restarts. We explore the following main hyper-parameters:
• The <i>learning rate</i> denoted \( \eta \) chosen from a set of 3 possible values \( \eta \in \{0.1, 0.01, 0.001\} \).
• The maximum L2 norm of the gradients, beyond which the gradients are <i>clipped</i>. This parameter, denoted \( c \), matters for training RNNs. The value of \( c \) is selected from \( c \in \{1, 10, 40, 100\} \).
• The temperature parameter \( \tau \) that controls the degree of exploration for both MENT and UREX. For MENT, we use \( \tau \in \{0, 0.005, 0.01, 0.1\} \). For UREX, we only consider \( \tau = 0.1 \), which consistently performs well across the tasks.
In all of the experiments, both MENT and UREX are treated exactly the same. In fact, the change of implementation is just a few lines of code. Given a value of \( \tau \), for each task, we run 60 training jobs comprising 3 learning rates, 4 clipping values, and 5 random restarts. We run each algorithm for a maximum number of steps determined based on the difficulty of the task. The training jobs for Copy, DuplicatedInput, RepeatCopy, Reverse, ReversedAddition, and BinarySearch are run for 2K, 500, 50K, 5K, 50K, and 2K stochastic gradient steps, respectively. We find that running a trainer job longer does not result in better performance. Our policy network comprises a single LSTM layer with 128 nodes. We use the Adam optimizer (Kingma & Ba, 2015) for the experiments. Table 1 shows the percentage of 60 trials on different hyper-parameters (\( \eta, c \)) and random restarts which successfully solve each of the algorithmic tasks. It is clear that UREX is more robust than MENT to changes in hyper-parameters, even though we only report the results of UREX for a single temperature. See Appendix B for more detailed tables on hyper-parameter robustness. 7.2 RESULTS Table 2 presents the number of successful attempts (out of 5 random restarts) and the expected reward values (averaged over 5 trials) for each RL algorithm given the best hyper-parameters. One-step Q-learning results are also included in the table. We also present the training curves for MENT and UREX in Figure 2. It is clear that UREX outperforms the baselines on these tasks. On the more difficult tasks, such as Reverse and ReversedAddition, UREX is able to consistently find an appropriate algorithm, but MENT and Q-learning fall behind. Importantly, for the BinarySearch task, which exhibits many local maxima and necessitates smart exploration, UREX is the only method that can solve it consistently. The Q-learning baseline solves some of the simple tasks, but it makes little headway on the harder tasks.
We believe that entropy regularization for policy gradient and \( \epsilon \)-greedy for Q-learning are relatively weak exploration strategies in long episodic tasks with delayed rewards. On such tasks, one random exploratory step in the wrong direction can take the agent off the optimal policy, hampering its ability to learn. In contrast, UREX provides a form of adaptive and smart exploration. In fact, we observe that the variance of the importance weights decreases as the agent approaches the optimal policy, effectively reducing exploration when it is no longer necessary; see Appendix E. ![Average reward during training for MENT (green) and UREX (blue) across six tasks: Copy, DuplicatedInput, RepeatCopy, Reverse, ReversedAddition, BinarySearch. Each plot shows the average reward as well as the single standard deviation region clipped at the min and max.](page_312_684_968_370.png) Figure 2: Average reward during training for MENT (green) and UREX (blue). We find the best hyper-parameters for each method, and run each algorithm 5 times with random restarts. The curves present the average reward as well as the single standard deviation region clipped at the min and max. 7.3 GENERALIZATION TO LONGER SEQUENCES To test whether our method is able to find the correct algorithm for multi-digit addition, we investigate its generalization to longer input sequences than provided during training. We evaluate the trained models on inputs up to a length of 2000 digits, even though training sequences were at most 33 characters. For each length, we test the model on 100 randomly generated inputs, stopping when the accuracy falls below 100%. Out of the 60 models trained on addition with UREX, we find that 5 models generalize to numbers up to 2000 digits without any observed mistakes. On the best UREX hyper-parameters, 2 out of the 5 random restarts are able to generalize successfully. For more detailed results on the generalization performance on 3 different tasks including Copy, DuplicatedInput, and ReversedAddition, see Appendix C. During these evaluations, we take the action with the largest probability from \( \pi_\theta(a \mid h) \) at each time step rather than sampling randomly. Table 2: Results on several algorithmic tasks comparing Q-learning and policy gradient based on MENT and UREX. We find the best hyper-parameters for each method, and run each algorithm 5 times with random restarts. Number of successful attempts (out of 5) that achieve a reward threshold is reported. Expected reward computed over the last few iterations of training is also reported. <table> <tr> <th rowspan="2"> </th> <th colspan="3">Num. of successful attempts out of 5</th> <th colspan="3">Expected reward</th> </tr> <tr> <th>Q-learning</th> <th>MENT</th> <th>UREX</th> <th>Q-learning</th> <th>MENT</th> <th>UREX</th> </tr> <tr> <td>Copy</td> <td>5</td> <td>5</td> <td>5</td> <td>31.2</td> <td>31.2</td> <td>31.2</td> </tr> <tr> <td>DuplicatedInput</td> <td>5</td> <td>5</td> <td>5</td> <td>15.4</td> <td>15.4</td> <td>15.4</td> </tr> <tr> <td>RepeatCopy</td> <td>1</td> <td>3</td> <td>4</td> <td>39.3</td> <td>69.2</td> <td>81.1</td> </tr> <tr> <td>Reverse</td> <td>0</td> <td>2</td> <td>4</td> <td>4.4</td> <td>21.9</td> <td>27.2</td> </tr> <tr> <td>ReversedAddition</td> <td>0</td> <td>1</td> <td>5</td> <td>1.1</td> <td>8.7</td> <td>30.2</td> </tr> <tr> <td>BinarySearch</td> <td>0</td> <td>1</td> <td>4</td> <td>5.2</td> <td>8.6</td> <td>9.1</td> </tr> </table>
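The evaluation loop just described admits a simple sketch; the helper `make_episode` and the `run_greedy` method are hypothetical stand-ins for environment construction and a greedy (argmax) rollout returning whether the output was fully correct, not part of the original implementation.

```python
# Sketch of the generalization evaluation described above (helper names are
# hypothetical placeholders, not from the original code).
def max_perfect_length(policy, make_episode, max_len=2000, trials=100):
    """Largest input length at which the greedy policy stays 100% accurate."""
    best = 0
    for length in range(1, max_len + 1):
        # greedy decoding: take the argmax action at each step, no sampling
        if all(make_episode(length).run_greedy(policy) for _ in range(trials)):
            best = length
        else:
            break                 # stop once accuracy falls below 100%
    return best
```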
We also looked into the generalization of the models trained on the BinarySearch task. We found that none of the agents perform proper binary search. Rather, those that solved the task perform a hybrid of binary and linear search: the first actions follow a binary search pattern, but then the agent switches to a linear search procedure once it narrows down the search space; see Appendix D for some execution traces for BinarySearch and ReversedAddition. Thus, on longer input sequences, the agent’s running time complexity approaches linear rather than logarithmic. We hope that future work will make more progress on this task. This task is especially interesting because the reward signal should incorporate both correctness and efficiency of the algorithm. 7.4 IMPLEMENTATION DETAILS In all of the experiments, we make use of curriculum learning. The environment begins by only providing small inputs and moves on to longer sequences once the agent achieves close to maximal reward over a number of steps. For policy gradient methods including MENT and UREX, we only provide the agent with a reward at the end of the episode, and there is no notion of intermediate reward. For the value-based baseline, we implement one-step Q-learning as described in Mnih et al. (2016, Alg. 1), employing double Q-learning with \( \epsilon \)-greedy exploration. We use the same RNN as in our policy-based approaches to estimate the Q values. A grid search over exploration rate, exploration rate decay, learning rate, and sync frequency (between online and target network) is conducted to find the best hyper-parameters. Unlike our other methods, the Q-learning baseline uses intermediate rewards, as given by the OpenAI Gym on a per-step basis. Hence, the Q-learning baseline has a slight advantage over the policy gradient methods. In all of the tasks except Copy, our stochastic optimizer uses mini-batches comprising 400 policy samples from the model. These 400 samples correspond to 40 different random sequences drawn from the environment, and 10 random policy trajectories per sequence. In other words, we set \( K = 10 \) and \( N = 40 \) as defined in (3) and (12). For MENT, we use the 10 samples to subtract the mean of the coefficient of \( \frac{\mathrm{d}}{\mathrm{d}\theta} \log \pi_\theta(a \mid h) \) which includes the contribution of the reward and entropy regularization. For UREX, we use the 10 trajectories to subtract the mean reward and normalize the importance sampling weights. We do not subtract the mean of the normalized importance weights. For the Copy task, we use mini-batches with 200 samples using \( K = 10 \) and \( N = 20 \). Experiments are conducted using Tensorflow (Abadi et al., 2016). 8 CONCLUSION We present a variant of policy gradient, called UREX, which promotes the exploration of action sequences that yield rewards larger than what the model expects. This exploration strategy is the result of importance sampling from the optimal policy. Our experimental results demonstrate that UREX significantly outperforms other value and policy based methods, while being more robust to changes of hyper-parameters. By using UREX, we can solve algorithmic tasks like multi-digit addition from only episodic reward, which other methods cannot reliably solve even given the best hyper-parameters. We introduce a new algorithmic task based on binary search to advocate more research in this area, especially when the computational complexity of the solution is also of interest.
Solving these tasks is not only important for developing more human-like intelligence in learning algorithms, but also for generic reinforcement learning, where smart and efficient exploration is the key to successful methods. 9 ACKNOWLEDGMENT We thank Sergey Levine, Irwan Bello, Corey Lynch, George Tucker, Kelvin Xu, Volodymyr Mnih, and the Google Brain team for insightful comments and discussions.
accept
Accept (Poster)
7.333333
8fd9a6926f17953301aee46cbb94e0b8723b4f68
iclr
2017
DIFFERENTIABLE CANONICAL CORRELATION ANALYSIS Matthias Dorfer Department of Computational Perception Johannes Kepler University Linz Linz, 4040, Austria matthias.dorfer@jku.at Jan Schlüter The Austrian Research Institute for Artificial Intelligence Vienna, 1010, Austria jan.schlueter@ofai.at Gerhard Widmer Department of Computational Perception Johannes Kepler University Linz Linz, 4040, Austria gerhard.widmer@jku.at ABSTRACT Canonical Correlation Analysis (CCA) computes maximally-correlated linear projections of two modalities. We propose Differentiable CCA, a formulation of CCA that can be cast as a layer within a multi-view neural network. Unlike Deep CCA, an earlier extension of CCA to nonlinear projections, our formulation enables gradient flow through the computation of the CCA projection matrices, and free choice of the final optimization target. We show the effectiveness of this approach in cross-modality retrieval experiments on two public image-to-text datasets, surpassing both Deep CCA and a multi-view network with freely-learned projections. We believe that Differentiable CCA could be a useful building block for many multi-modality tasks. 1 INTRODUCTION Deep Canonical Correlation Analysis (DCCA) (Andrew et al., 2013) is a non-linear extension of classic Canonical Correlation Analysis (CCA) (Hotelling, 1936) that learns highly correlated latent representations on top of two different neural networks. The central idea of our work is to extend this formulation and cast CCA as a fully differentiable neural network layer which allows for parameter optimization via back-propagation through the CCA projection matrices. This is in contrast to DCCA, where correlation analysis is the topmost part of the network and only used as an optimization target for maximizing the correlation between the respective views. DCCA has recently gained a lot of attention. It has inspired related methods such as Deep Linear Discriminant Analysis (Dorfer et al., 2015) as well as a discriminative re-formulation of DCCA (Elmadany et al., 2016) applied to improve speech-based emotion recognition. Wang et al. (2015a) show that joint optimization of correlation and reconstruction error in auto-encoder configurations can be used successfully for representation learning on a multi-modal speech production dataset. We take this as a motivation to evolve and extend the applicability of DCCA. In our experiments, we employ the proposed differentiable CCA layer in a cross-modality retrieval setup. Cross-modality retrieval is the task of retrieving relevant data of another type when a sample of a different modality is given as a search query. A recent survey by Wang et al. (2016) categorizes the task into binary and real-valued representation learning. In the case of real-valued representation learning, End-to-End DCCA (Yan & Mikolajczyk, 2015) achieves state-of-the-art retrieval results in combination with retrieval by cosine distance computation. With differentiable CCA, it becomes possible to train the networks to directly minimize the objective which will be used for retrieval (e.g., the cosine distance), while still benefitting from the optimally-correlated projections obtained by CCA. Results on two publicly available datasets (Flickr30k (Young et al., 2014), IAPR TC-12 (Grubinger et al., 2006)) suggest that our approach is capable of improving retrieval results in both directions. The remainder of our paper is structured as follows.
In Section 2, we review classic and deep CCA, which are the basis for the differentiable CCA layer proposed in Section 3. In Section 4, we show results of an experimental evaluation in a cross-modality retrieval setting and provide further investigations of the representations learned by our networks. Finally, Section 5 concludes the paper. 2 FOUNDATION: CANONICAL CORRELATION ANALYSIS AND DEEP CCA In this section, we review the concepts of classical and deep Canonical Correlation Analysis, the basis for the methodology proposed in this work. 2.1 CANONICAL CORRELATION ANALYSIS (CCA) Let \( \mathbf{x} \in \mathbb{R}^{d_x} \) and \( \mathbf{y} \in \mathbb{R}^{d_y} \) denote two random vectors with covariances \( \Sigma_{xx} \) and \( \Sigma_{yy} \) and cross-covariance \( \Sigma_{xy} \). The objective of CCA is to find two matrices \( \mathbf{A}^* \in \mathbb{R}^{d_x \times k} \) and \( \mathbf{B}^* \in \mathbb{R}^{d_y \times k} \) (with \( k \leq d_x \) and \( k \leq d_y \)) that project \( \mathbf{x} \) and \( \mathbf{y} \) into a common space maximizing their cross-correlation: \[ (\mathbf{A}^*, \mathbf{B}^*) = \arg\max_{\mathbf{A}, \mathbf{B}} \operatorname{corr}(\mathbf{A}'\mathbf{x}, \mathbf{B}'\mathbf{y}) \] (1) To fix the scale of \( \mathbf{A} \) and \( \mathbf{B} \) and force the projected dimensions to be uncorrelated, the optimization is further constrained to \( \mathbf{A}'\Sigma_{xx}\mathbf{A} = \mathbf{B}'\Sigma_{yy}\mathbf{B} = \mathbf{I} \), arriving at: \[ (\mathbf{A}^*, \mathbf{B}^*) = \underset{\mathbf{A}'\Sigma_{xx}\mathbf{A}=\mathbf{B}'\Sigma_{yy}\mathbf{B}=\mathbf{I}}{\arg\max} \mathbf{A}'\Sigma_{xy}\mathbf{B} \] (2) Let \( \mathbf{T} = \Sigma_{xx}^{-1/2}\Sigma_{xy}\Sigma_{yy}^{-1/2} \), and let \( \mathbf{T} = \mathbf{U} \operatorname{diag}(\mathbf{d}) \mathbf{V}' \) be the Singular Value Decomposition (SVD) of \( \mathbf{T} \) with ordered singular values \( d_i \geq d_{i+1} \). As shown by Mardia et al. (1979), we obtain \( \mathbf{A}^* \) and \( \mathbf{B}^* \) from the top \( k \) left- and right-singular vectors of \( \mathbf{T} \): \[ \mathbf{A}^* = \Sigma_{xx}^{-1/2} \mathbf{U}_{:,k} \qquad \mathbf{B}^* = \Sigma_{yy}^{-1/2} \mathbf{V}_{:,k} \] (3) Moreover, the cross-correlation in the projection space is the sum of the top \( k \) singular values: \[ \operatorname{corr}(\mathbf{A}^{*\prime}\mathbf{x}, \mathbf{B}^{*\prime}\mathbf{y}) = \sum_{i \leq k} d_i \] (4) In practice, the covariances and cross-covariance of \( \mathbf{x} \) and \( \mathbf{y} \) are usually not known, but estimated from a training set of \( m \) paired vectors, expressed as matrices \( \mathbf{X} \in \mathbb{R}^{d_x \times m}, \mathbf{Y} \in \mathbb{R}^{d_y \times m} \) (with \( \mathbf{1} \) denoting an \( m \)-dimensional vector of ones): \[ \overline{\mathbf{X}} = \mathbf{X} - \frac{1}{m} \mathbf{X} \mathbf{1}\mathbf{1}' \qquad \overline{\mathbf{Y}} = \mathbf{Y} - \frac{1}{m} \mathbf{Y} \mathbf{1}\mathbf{1}' \] (5) \[ \hat{\Sigma}_{xx} = \frac{1}{m-1} \overline{\mathbf{X}} \overline{\mathbf{X}}' + r \mathbf{I} \qquad \hat{\Sigma}_{xy} = \frac{1}{m-1} \overline{\mathbf{X}} \overline{\mathbf{Y}}' \qquad \hat{\Sigma}_{yy} = \frac{1}{m-1} \overline{\mathbf{Y}} \overline{\mathbf{Y}}' + r \mathbf{I} \] (6) Here, \( r \) is a regularization parameter ensuring the matrices are positive definite. Substituting these estimates for \( \Sigma_{xx}, \Sigma_{xy} \) and \( \Sigma_{yy} \), respectively, we can estimate \( \mathbf{A}^* \) and \( \mathbf{B}^* \) using Equation 3.
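As a compact sanity check of Equations 1–6, classical CCA could be sketched in NumPy as follows; the function name and the eigendecomposition route to \( \Sigma^{-1/2} \) are our choices for illustration, not taken from the papers discussed here.

```python
import numpy as np

# Sketch of classical CCA per Equations 1-6; r is the regularization
# parameter, k the number of projected dimensions. X, Y have shape (d, m).
def cca(X, Y, k, r=1e-4):
    m = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)      # center each view (Eq. 5)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Sxx = Xc @ Xc.T / (m - 1) + r * np.eye(X.shape[0])   # Eq. 6
    Syy = Yc @ Yc.T / (m - 1) + r * np.eye(Y.shape[0])
    Sxy = Xc @ Yc.T / (m - 1)

    def inv_sqrt(S):                            # S^(-1/2) via eigendecomposition
        e, V = np.linalg.eigh(S)
        return V @ np.diag(e ** -0.5) @ V.T

    Sxx_i, Syy_i = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, d, Vt = np.linalg.svd(Sxx_i @ Sxy @ Syy_i)        # SVD of T
    A = Sxx_i @ U[:, :k]                        # projections, Eq. 3
    B = Syy_i @ Vt[:k].T
    return A, B, d[:k].sum()                    # total correlation, Eq. 4
```

2.2 DEEP CANONICAL CORRELATION ANALYSIS (DCCA) Andrew et al.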
(2013) propose an extension of CCA that allows learning parametric nonlinear transformations of two variables maximizing the cross-correlation after optimal projection. Specifically, let \( \mathbf{a} \in \mathbb{R}^{d_a} \) and \( \mathbf{b} \in \mathbb{R}^{d_b} \) denote two random vectors, and let \( \mathbf{x} = f(\mathbf{a}; \Theta_f) \) and \( \mathbf{y} = g(\mathbf{b}; \Theta_g) \) denote their nonlinear transformations, parameterized by \( \Theta_f \) and \( \Theta_g \). For example, \( f \) and \( g \) could be feed-forward neural networks. As before, Equation 3 gives the linear transformations of \( \mathbf{x} \) and \( \mathbf{y} \) optimizing the CCA objective in Equation 2. Deep CCA optimizes \( \Theta_f \) and \( \Theta_g \) to further increase the cross-correlation. For \( d_x = d_y = k \), the CCA objective is equal to the sum of all singular values of \( \mathbf{T} \) (Equation 4), which is equal to its trace norm: \[ \operatorname{corr}(f(\mathbf{a}; \Theta_f), g(\mathbf{b}; \Theta_g)) = \operatorname{corr}(\mathbf{x}, \mathbf{y}) = \|\mathbf{T}\|_{\mathrm{tr}} = \operatorname{tr}\!\left( (\mathbf{T}'\mathbf{T})^{1/2} \right) \] (7) Figure 1: Comparison of DCCA and the proposed differentiable CCA layer. DCCA optimizes the correlation of the two different views and is therefore the topmost part of the network. In contrast, our CCA layer establishes gradient flow over the CCA computation. This allows us to use the projection output of CCA as input for subsequent components in a multi-view network (e.g., a retrieval objective such as cosine distance). Andrew et al. (2013) show how to compute the gradient of this Trace Norm Objective (TNO) with respect to x and y. Assuming f and g are differentiable with respect to \( \Theta_f \) and \( \Theta_g \) (as is the case for neural networks), this allows optimizing the nonlinear transformations via a gradient-based method. Figure 1a shows a schematic sketch of DCCA, as a fixed objective backpropagated through two neural networks. 3 DIFFERENTIABLE IMPLEMENTATION OF CCA In this section, we further extend DCCA to allow not only an arbitrary nonlinear transformation of the inputs, but also arbitrary transformations of (or objectives on) the projected vectors. This allows CCA to be used as a building block within a multi-modality neural network, instead of as a final objective only. In the following, we will discuss how to enable backpropagation through CCA, what to consider when doing stochastic updates, and how to apply it for cross-modality retrieval. 3.1 GRADIENT OF CCA As mentioned above, we can compute the canonical correlation along with the optimal projection matrices from the singular value decomposition \( \mathbf{T} = \Sigma_{xx}^{-1/2} \Sigma_{xy} \Sigma_{yy}^{-1/2} = \mathbf{U} \operatorname{diag}(\mathbf{d}) \mathbf{V}' \). Specifically, we obtain the correlation as \( \sum_i d_i \), and projections as \( \mathbf{A}^* = \Sigma_{xx}^{-1/2} \mathbf{U} \) and \( \mathbf{B}^* = \Sigma_{yy}^{-1/2} \mathbf{V} \). For DCCA, it suffices to compute the gradient of the total correlation wrt. x and y in order to backpropagate it through the two networks f and g. Using the chain rule, Andrew et al. (2013) decompose this into the gradients of the total correlation wrt. \( \Sigma_{xx} \), \( \Sigma_{xy} \) and \( \Sigma_{yy} \), and the gradients of those wrt. x and y. Their derivations of the former make use of the fact that both the gradient of \( \sum_i d_i \) wrt. T and the gradient of \( \|\mathbf{T}\|_{\mathrm{tr}} \) (the trace norm objective in Equation 7) wrt.
3 DIFFERENTIABLE IMPLEMENTATION OF CCA

In this section, we further extend DCCA to allow not only an arbitrary nonlinear transformation of the inputs, but also arbitrary transformations of (or objectives on) the projected vectors. This allows CCA to be used as a building block within a multi-modality neural network, instead of as a final objective only. In the following, we discuss how to enable backpropagation through CCA, what to consider when doing stochastic updates, and how to apply it to cross-modality retrieval.

3.1 GRADIENT OF CCA

As mentioned above, we can compute the canonical correlation along with the optimal projection matrices from the singular value decomposition \( \mathbf{T} = \Sigma_{xx}^{-1/2} \Sigma_{xy} \Sigma_{yy}^{-1/2} = \mathbf{U} \operatorname{diag}(\mathbf{d}) \mathbf{V}' \). Specifically, we obtain the correlation as \( \sum_i d_i \), and the projections as \( \mathbf{A}^* = \Sigma_{xx}^{-1/2} \mathbf{U} \) and \( \mathbf{B}^* = \Sigma_{yy}^{-1/2} \mathbf{V} \). For DCCA, it suffices to compute the gradient of the total correlation wrt. \( \mathbf{x} \) and \( \mathbf{y} \) in order to backpropagate it through the two networks \( f \) and \( g \). Using the chain rule, Andrew et al. (2013) decompose this into the gradients of the total correlation wrt. \( \Sigma_{xx} \), \( \Sigma_{xy} \) and \( \Sigma_{yy} \), and the gradients of those wrt. \( \mathbf{x} \) and \( \mathbf{y} \). Their derivations of the former make use of the fact that both the gradient of \( \sum_i d_i \) wrt. \( \mathbf{T} \) and the gradient of \( \|\mathbf{T}\|_{tr} \) (the trace norm objective in Equation 7) wrt. \( \mathbf{T}' \mathbf{T} \) have a simple form; see Andrew et al. (2013, Sec. 7) for details.

For our differentiable CCA, we instead need the gradients of the projected data \( \mathbf{A}^{*\prime} \mathbf{x} \) and \( \mathbf{B}^{*\prime} \mathbf{y} \) wrt. \( \mathbf{x} \) and \( \mathbf{y} \), which require \( \frac{\partial \mathbf{U}}{\partial \mathbf{x}, \mathbf{y}} \) and \( \frac{\partial \mathbf{V}}{\partial \mathbf{x}, \mathbf{y}} \). We could again decompose this into the gradients wrt. \( \mathbf{T} \), the gradients of \( \mathbf{T} \) wrt. \( \Sigma_{xx} \), \( \Sigma_{xy} \) and \( \Sigma_{yy} \), and the gradients of those wrt. \( \mathbf{x} \) and \( \mathbf{y} \). However, while the gradients of \( \mathbf{U} \) and \( \mathbf{V} \) wrt. \( \mathbf{T} \) are known (Papadopoulo & Lourakis, 2000), they involve solving \( O((d_x d_y)^2) \) linear \( 2 \times 2 \) systems. To arrive at a more practical implementation that does not require the gradient of the SVD, we reformulate the solution to use two symmetric eigendecompositions \( \mathbf{T} \mathbf{T}' = \mathbf{U} \operatorname{diag}(\mathbf{e}) \mathbf{U}' \) and \( \mathbf{T}' \mathbf{T} = \mathbf{V} \operatorname{diag}(\mathbf{e}) \mathbf{V}' \) (Petersen & Pedersen, 2012, Eq. 270). This gives us the same left and right singular vectors we would obtain from the SVD (save for possibly flipped signs, which are easy to fix), along with the squared singular values (\( e_i = d_i^2 \)). The gradients of eigenvectors of symmetric real eigensystems have a simple form (Magnus, 1985, Eq. 7), and both \( \mathbf{T}\mathbf{T}' \) and \( \mathbf{T}'\mathbf{T} \) are differentiable wrt. \( \mathbf{x} \) and \( \mathbf{y} \), enabling a sufficiently efficient implementation in a graph-based, auto-differentiating math compiler such as Theano (Theano Development Team, 2016).
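The following sketch illustrates the reformulation (our own NumPy rendering; the actual implementation relies on the differentiable symmetric eigendecomposition of a framework like Theano, which plain NumPy does not provide). It recovers \( \mathbf{U} \), \( \mathbf{V} \) and \( \mathbf{d} \) from the two symmetric eigensystems and fixes the sign indeterminacy:

```python
import numpy as np

def svd_via_eigh(T, k):
    """Recover the top-k singular vectors/values of T from the symmetric
    eigendecompositions of TT' and T'T (Petersen & Pedersen, 2012, Eq. 270)
    instead of an SVD. In a framework with a differentiable `eigh` (e.g.,
    Theano), gradients then flow through U and V into the CCA projections."""
    e_u, U = np.linalg.eigh(T @ T.T)                 # ascending eigenvalues
    e_v, V = np.linalg.eigh(T.T @ T)
    U = U[:, ::-1][:, :k]                            # reorder descending, keep top k
    V = V[:, ::-1][:, :k]
    d = np.sqrt(np.clip(e_u[::-1][:k], 0.0, None))   # e_i = d_i^2
    signs = np.sign(np.diag(U.T @ T @ V))            # fix possibly flipped signs
    V = V * np.where(signs == 0, 1.0, signs)
    return U, V, d
```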
3.2 STOCHASTIC OPTIMIZATION

For classical CCA, \( \Sigma_{xx}, \Sigma_{xy} \) and \( \Sigma_{yy} \) are estimated from a large set of \( m \) training examples (Equation 6). In contrast, gradient-based optimization of neural networks usually estimates the gradients wrt. network parameters from mini-batches of \( n \) randomly drawn examples, with \( n \ll m \). In Deep CCA as well as in our extension, the correlations are functions of the network parameters that we need to backpropagate through, effectively enforcing \( m = n \). Andrew et al. (2013) solve this discrepancy by optimizing the network parameters with L-BFGS on the full training set, which is infeasible for very large datasets. Yan & Mikolajczyk (2015) instead train on small mini-batches, estimating correlation matrices of size 4096 × 4096 from 100 examples only, which seems risky. We choose a middle ground, training on large mini-batches to obtain stable estimates. This approach was also taken by Wang et al. (2015b, Sec. 5.1), who found mini-batches of 400–1000 examples to even outperform full-batch L-BFGS. In addition, for testing, we optionally re-estimate the correlation matrices (and the corresponding projection matrices) using a larger set of \( m > n \) examples.

Another tempting option is to train on small mini-batches, but use exponential moving averages updated with each mini-batch as follows:

\[ \Sigma_{xx} \leftarrow \Sigma_{xx}(1-\alpha) + \hat{\Sigma}_{xx}\alpha \qquad \Sigma_{xy} \leftarrow \Sigma_{xy}(1-\alpha) + \hat{\Sigma}_{xy}\alpha \qquad \Sigma_{yy} \leftarrow \Sigma_{yy}(1-\alpha) + \hat{\Sigma}_{yy}\alpha \tag{8} \]

With proper initialization and a sufficiently small coefficient \( \alpha \), this gives stable estimates even for small \( n \). However, since only the estimates from the current mini-batch \( \hat{\Sigma}_{xx}, \hat{\Sigma}_{xy} \) and \( \hat{\Sigma}_{yy} \) can practically be considered in backpropagation, this changes the learning dynamics: for too small an \( \alpha \), the projection matrices are virtually degraded to constants. Empirically, we found that large mini-batches perform slightly better than small batches with moving averages (see Appendix B).
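For completeness, a minimal sketch of the moving-average update in Equation 8 (the class name and identity initialization are our assumptions):

```python
import numpy as np

class CovarianceEMA:
    """Exponential moving-average covariance estimate (Equation 8). Only the
    current mini-batch estimate would receive gradients in practice, which is
    why a too small alpha effectively freezes the projection matrices."""
    def __init__(self, d, alpha):
        self.sigma = np.eye(d)   # proper initialization, e.g., identity
        self.alpha = alpha

    def update(self, sigma_batch):
        self.sigma = (1.0 - self.alpha) * self.sigma + self.alpha * sigma_batch
        return self.sigma
```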
3.3 CROSS-MODALITY RETRIEVAL WITH DIFFERENTIABLE CCA

DCCA maximizes the correlation between the latent representations of two different neural networks. When the two network inputs \( \mathbf{a} \) and \( \mathbf{b} \) represent different views of an entity (e.g., an image and its textual description), DCCA projects them into a common space where they are highly correlated. This can be exploited for cross-modality retrieval: projecting one modality of an entity, we can find the best-matching representations of the second modality (e.g., an image for a textual description, or vice versa). To find the best matches, a common option is to compute nearest neighbors in terms of cosine distance (Yan & Mikolajczyk, 2015), which is closely related to correlation.

Given the methodology introduced above, we now have the means to optimize DCCA projections directly for the task at hand. In Figure 1b, we show a possible setting where we put the differentiable CCA layer on top of a multi-view network. Instead of optimizing the networks to maximize the correlation of the projected views (the TNO), we can optimize the networks towards a task-specific objective and still benefit from the optimality of the CCA projections. For this work, we optimize towards minimal cosine distance between the correlated views, the very metric used for retrieval. In the next section, we empirically show that this is indeed beneficial in terms of quantitative retrieval performance as well as convergence speed of network training.
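Retrieval itself then reduces to a nearest-neighbor search by cosine distance in the shared projection space; a minimal sketch (our own helper, assuming candidate projections stored row-wise):

```python
import numpy as np

def rank_candidates(query, candidates):
    """Rank candidate projections of the other modality by cosine distance
    to a query projection; all vectors live in the shared k-dim CCA space."""
    q = query / np.linalg.norm(query)
    C = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    cos_dist = 1.0 - C @ q       # one distance per candidate row
    return np.argsort(cos_dist)  # candidate indices, best match first
```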
4 EXPERIMENTS

We evaluate our approach in cross-modality retrieval experiments on two publicly available datasets (also considered by Yan & Mikolajczyk (2015)) and provide investigations on the representations learned by the network.

4.1 EXPERIMENTAL SETUP

For the evaluation of our approach, we consider Flickr30k and IAPR TC-12, two publicly available datasets for cross-modality retrieval. Flickr30k consists of image-caption pairs, where each image is annotated with five different textual descriptions. The train-validation-test split for Flickr30k is 28000-1000-1000. In terms of evaluation setup, we follow the related work and report results on two different evaluation protocols. Protocol pooled pools the five available captions into one "concatenated" text, meaning that only one, but richer, text annotation remains per image. This is done for all three sets. Protocol 5 captions pools only the captions of the train set and keeps five separate annotations for validation and test set. The IAPR TC-12 dataset contains 20000 natural images where only one – but, compared to Flickr30k, more detailed – caption is available for each image. As no predefined train-validation-test split is provided, we randomly select 2000 images for testing, 1000 for validation and keep the rest for training. Yan & Mikolajczyk (2015) also use 2000 images for testing, but do not explicitly mention holding out images for validation. Table 1 shows an example image along with its corresponding captions or caption for either dataset.

Table 1: Example images for Flickr30k (top) and IAPR TC-12 (bottom); the images themselves are not reproduced here, only their captions.
- Flickr30k: "A man in a white cowboy hat reclines in front of a window in an airport." / "A young man rests on an airport seat with a cowboy hat over his face." / "A woman relaxes on a couch, with a white cowboy hat over her head." / "A man is sleeping inside on a bench with his hat over his eyes." / "A person is sleeping at an airport with a hat on their head."
- IAPR TC-12: "a green and brown embankment with brown houses on the right and a light brown sandy beach at the dark blue sea on the left; a dark mountain range behind it and white clouds in a light blue sky in the background;"

The task at hand for both datasets is to retrieve the correct counterpart – either text or image – when given a query element of the other modality. We follow Yan & Mikolajczyk (2015) and use the cosine distance for retrieval in the projection space. As evaluation measures, we consider Recall@k (R@k) as well as the Median Rank (MR) and the Mean Average Precision (MAP). The R@k rate (higher is better) is the ratio of queries which have the correct corresponding counterpart in the first \( k \) retrieval results. The MR (lower is better) is the median position of the target in a similarity-ordered list of available candidates. Finally, we define the MAP (higher is better) as the mean value of \( 1/Rank \) over all queries.
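Given the 1-based rank of the correct counterpart for each query, the three measures can be computed as in the following sketch (our own helper; note that MAP as defined here coincides with the mean reciprocal rank):

```python
import numpy as np

def retrieval_metrics(ranks, k=5):
    """R@k, Median Rank and MAP (mean of 1/Rank, as defined above) from the
    1-based rank of the correct counterpart per query."""
    ranks = np.asarray(ranks, dtype=float)
    r_at_k = np.mean(ranks <= k)     # fraction of queries with target in top k
    mr = np.median(ranks)            # median position of the target
    mean_ap = np.mean(1.0 / ranks)   # mean value of 1/Rank over all queries
    return r_at_k, mr, mean_ap

# For example, retrieval_metrics([1, 3, 12, 2], k=5) yields (0.75, 2.5, ~0.48).
```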
The input to our networks is a 4096-dimensional image feature vector along with a corresponding text vector representation (5793-dimensional for Flickr30k, 2048-dimensional for IAPR TC-12). In terms of text pre-processing, we follow Yan & Mikolajczyk (2015), tokenizing and lemmatizing the raw captions as the first step. Based on the lemmatized captions, we compute l2-normalized TF/IDF vectors, omitting words that occur fewer than 5 times for Flickr30k and fewer than 3 times for IAPR TC-12, respectively. The image representations are computed from the last hidden layer of a network pretrained on ImageNet (layer fc7 of CNN_S by Chatfield et al. (2014)).

4.2 NETWORK ARCHITECTURES AND OPTIMIZATION DETAILS

We feed 4096-dimensional image vectors along with the corresponding text representation into our networks. The image representation is followed by a linear dense layer with 128 units (this will also be the dimensionality \( k = 128 \) of the resulting CCA retrieval space). The text vector is processed by two batch-normalized (Ioffe & Szegedy, 2015) dense layers of 1024 units each with an ELU activation function (Clevert et al., 2015). As the last layer of the text representation network, we again apply a dense layer with 128 linear units. For a fair comparison, we keep the structure (and number of parameters) of all networks in our experiments the same. The only parameters that vary are the objectives and the corresponding optimization/regularization strategies. In particular, we apply a grid search on the respective hyper-parameters and report the best results for each method. Optimization is performed either using Stochastic Gradient Descent (SGD) with momentum or the adam (Kingma & Ba, 2014) update rule.
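To fix ideas, a schematic forward pass of the two branches follows (an illustrative sketch only: the parameter dictionary, layer ordering and inference-form batch normalization are our assumptions, not the authors' Theano code):

```python
import numpy as np

def elu(x):
    return np.where(x > 0, x, np.exp(x) - 1.0)

def batchnorm(h, mean, var, gamma, beta, eps=1e-5):
    # inference-form batch normalization (Ioffe & Szegedy, 2015)
    return gamma * (h - mean) / np.sqrt(var + eps) + beta

def image_branch(img, P):
    return P["W_img"] @ img               # one linear dense layer, 128 units

def text_branch(txt, P):
    h = txt                               # l2-normalized TF/IDF vector
    for i in (1, 2):                      # two batch-normalized ELU layers, 1024 units each
        h = P[f"W{i}"] @ h + P[f"b{i}"]
        h = elu(batchnorm(h, *P[f"bn{i}"]))
    return P["W_txt"] @ h                 # final linear dense layer, 128 units
```

The 128-dimensional outputs of both branches are what enters the CCA layer (or the TNO) during training.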
Table 2: Cross-modality retrieval results on Flickr30k. “E2E-DCCA” is taken from Yan & Mikolajczyk (2015); all other results are our own. Methods marked with "*" re-estimate the projection matrices from a larger batch than used during training (10,000 training examples), see Section 3.2.

<table> <tr> <th rowspan="2">Protocol</th> <th rowspan="2">Method</th> <th colspan="4">Image-to-Text</th> <th colspan="4">Text-to-Image</th> </tr> <tr> <th>R@1</th> <th>R@5</th> <th>R@10</th> <th>MR</th> <th>R@1</th> <th>R@5</th> <th>R@10</th> <th>MR</th> </tr> <tr> <td rowspan="7">pooled</td> <td>E2E-DCCA</td> <td>27.9</td> <td>56.9</td> <td>68.2</td> <td>4</td> <td>26.8</td> <td>52.9</td> <td>66.9</td> <td>4</td> </tr> <tr> <td>TNO*</td> <td>29.9</td> <td>57.9</td> <td>67.9</td> <td>4</td> <td>21.8</td> <td>48.1</td> <td>64.0</td> <td>6</td> </tr> <tr> <td>learned-\(cos^2\)</td> <td>9.0</td> <td>23.3</td> <td>32.8</td> <td>28</td> <td>8.5</td> <td>23.3</td> <td>32.8</td> <td>26</td> </tr> <tr> <td>CCAL-\(l2\)</td> <td>18.2</td> <td>42.0</td> <td>53.6</td> <td>9</td> <td>17.7</td> <td>42.2</td> <td>53.2</td> <td>9</td> </tr> <tr> <td>CCAL-\(cos\)</td> <td>28.9</td> <td>57.5</td> <td>69.1</td> <td>4</td> <td>25.1</td> <td>53.1</td> <td>66.4</td> <td>5</td> </tr> <tr> <td>CCAL-\(cos^2\)</td> <td>30.7</td> <td>58.8</td> <td>70.1</td> <td>4</td> <td>28.0</td> <td>56.2</td> <td>68.3</td> <td>4</td> </tr> <tr> <td>CCAL-\(cos^2*\)</td> <td>34.1</td> <td>60.0</td> <td>70.6</td> <td>3.5</td> <td>29.2</td> <td>58.3</td> <td>69.7</td> <td>4</td> </tr> <tr> <td rowspan="4">5 captions</td> <td>E2E-DCCA</td> <td>16.7</td> <td>39.3</td> <td>52.9</td> <td>8</td> <td>12.6</td> <td>31.0</td> <td>43.0</td> <td>15</td> </tr> <tr> <td>TNO*</td> <td>17.5</td> <td>39.3</td> <td>51.4</td> <td>10</td> <td>13.4</td> <td>31.7</td> <td>41.3</td> <td>19</td> </tr> <tr> <td>CCAL-\(cos^2\)</td> <td>21.2</td> <td>44.4</td> <td>55.8</td> <td>8</td> <td>14.9</td> <td>35.9</td> <td>47.5</td> <td>12</td> </tr> <tr> <td>CCAL-\(cos^2*\)</td> <td>20.6</td> <td>45.9</td> <td>57.2</td> <td>7</td> <td>15.6</td> <td>37.0</td> <td>49.4</td> <td>11</td> </tr> </table>

As optimization targets, we consider the following candidates: (1) the Trace Norm Objective (\(TNO\)) as our baseline for cross-modality retrieval (Yan & Mikolajczyk, 2015); (2) the proposed differentiable CCA layer in combination with the objectives cosine distance (\(CCAL\text{-}cos\)), squared cosine distance (\(CCAL\text{-}cos^2\)) and euclidean distance (\(CCAL\text{-}l2\)). As an additional setting, we consider a freely-learnable projection layer where the projection matrices \(A\) and \(B\) are randomly initialized weights that are optimized by the network using SGD in the conventional way. This allows us to assess the benefit of using CCA-derived projections within a multi-view network under otherwise unchanged objectives. For this experiment, we optimize for the squared cosine distance and denote the setting by \(learned\text{-}cos^2\). The batch size is set to 1000 samples to allow stable covariance estimates for the CCA (Section 3.2). For further stabilization, we regularize the covariance matrices (Andrew et al., 2013) by adding scaled (\(r = 10^{-3}\)) identity matrices to the estimates \(\Sigma_{xx}\), \(\Sigma_{yy}\) and \(T\) (Section 2.1). The variants based on differentiable CCA are additionally regularized by \(L2\) weight decay. No dropout is used in these settings, as it harmed optimization in our experiments. When optimizing with the TNO, we follow Yan & Mikolajczyk (2015) and use dropout (\(p = 0.5\)) after the first two dense layers of the text network. In Table 4 in Appendix A, we provide the optimization settings for all configurations in detail, found using a grid search optimizing MAP on the validation set.
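The three CCAL objectives, applied to a pair of corresponding projections, could look as follows (a sketch: reading "squared cosine distance" as the square of the cosine distance is our interpretation, and in training the losses would be averaged over the mini-batch):

```python
import numpy as np

def cos_dist(u, v):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def ccal_cos(u, v):    # CCAL-cos: cosine distance
    return cos_dist(u, v)

def ccal_cos2(u, v):   # CCAL-cos^2: squared cosine distance
    return cos_dist(u, v) ** 2

def ccal_l2(u, v):     # CCAL-l2: euclidean distance
    return np.linalg.norm(u - v)
```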
4.3 EXPERIMENTAL RESULTS ON CROSS-MODALITY RETRIEVAL

Table 2 lists our results on Flickr30k. Along with our experiments, we also show the results reported by Yan & Mikolajczyk (2015) as a reference (E2E-DCCA). However, a direct comparison to our results may not be fair: E2E-DCCA uses a different ImageNet-pretrained network for the image representation, and finetunes this network while we keep it fixed (as we are only interested in comparing differentiable CCA to alternatives, not in obtaining the best possible results). Our TNO results use the same objective as E2E-DCCA, but our network architecture, permitting direct comparison.

When comparing the performance of our networks, we observe a gain both for image-to-text and text-to-image retrieval when training with the CCAL-\(cos^2\) objective compared to the TNO (e.g., R@1 of 34.1 compared to 29.9 under protocol pooled). This indicates that training a network directly on the objective used for retrieval (using differentiable CCA) is a reasonable design choice. A closer look at the results also reveals that the squared cosine distance is superior to the remaining objectives. We further observe that the randomly initialized projection matrices learned entirely by SGD (learned-\(cos^2\)) show poor performance compared to their CCA counterpart (even though, in theory, they could converge to exactly the same solution). This suggests that exploiting the beneficial properties of the CCA projections directly within a network during training is a powerful tool, supporting optimization of related objectives. CCAL-\(l2\), for example, performs worse than the variants including cosine losses, but still better than the version with learned weights. On protocol 5 captions, we only report the best results (CCAL-\(cos^2\)) along with the TNO and observe similar tendencies. Note that there are various other methods reporting results on Flickr30k (Karpathy et al., 2014; Socher et al., 2014; Mao et al., 2014; Kiros et al., 2014) which partly surpass ours, for example by using more elaborate processing of the textual descriptions. We omit these results as we focus on the comparison of DCCA with the proposed differentiable CCA layer.

In Table 3, we list our results on the IAPR TC-12 dataset. We again show the retrieval performances of Yan & Mikolajczyk (2015) as a baseline (again with limited comparability, due to a different architecture and a different train-validation-test split), along with our implementation of the TNO and the CCA layer trained with squared cosine distance. For image-to-text retrieval, we achieve slightly better retrieval performances when training with cosine distance and propagating the gradients back through the differentiable CCA layer. For the other direction, results are slightly worse.

Table 3: Cross-modality retrieval results on IAPR TC-12

<table> <tr> <th rowspan="2">Method</th> <th colspan="4">Image-to-Text</th> <th colspan="4">Text-to-Image</th> </tr> <tr> <th>R@1</th> <th>R@5</th> <th>MAP</th> <th>MR</th> <th>R@1</th> <th>R@5</th> <th>MAP</th> <th>MR</th> </tr> <tr> <td>E2E-DCCA</td> <td>30.2</td> <td>57.0</td> <td>0.426</td> <td></td> <td>29.5</td> <td>60.0</td> <td>0.415</td> <td></td> </tr> <tr> <td>TNO*</td> <td>30.0</td> <td>56.7</td> <td>0.424</td> <td>4</td> <td>28.0</td> <td>55.4</td> <td>0.410</td> <td>5</td> </tr> <tr> <td>CCAL-\( cos^2 \)*</td> <td>31.1</td> <td>58.4</td> <td>0.439</td> <td>4</td> <td>26.8</td> <td>55.1</td> <td>0.403</td> <td>4</td> </tr> </table>

Figure 2: Comparison of the TNO and CCAL-\( cos^2 \) based on the total amount of canonical correlation (sum over singular values \( d_i \)) as well as the cosine distance between corresponding samples. (a) Evolution of correlation (train); (b) MAP over training epochs and cosine distance (validation); (c) individual correlations.

4.4 INVESTIGATIONS ON LEARNED REPRESENTATIONS

In this section, we provide a more detailed look at the learned representations. We compare the representations learned with the TNO to the proposed CCA layer optimized with the squared cosine distance objective. For easier comparison, we re-train both networks with a reduced projection dimensionality of \( k = 64 \) – otherwise, the TNO takes much longer to converge than the CCA layer. This results in slightly decreased performance for both, but the relative tendencies are preserved. Figure 2a shows the evolution of the mean correlation (mean over singular values, with maximum 1.0) on the training set during optimization. Along with the correlation, we also plot the average cosine distance between corresponding pairs on the validation set. As expected, for the TNO we observe a continuous decrease of the cosine distance as the correlation increases. Interestingly, this is not the case for CCAL-\( cos^2 \). The result suggests that the network found a way of minimizing the cosine distance other than by increasing the correlation between the representations – the latter even decreases after a few training epochs. In Figure 2b, we plot the corresponding evolution of MAP on the training and validation set, confirming that the decreased cosine distance indeed also leads to improved retrieval performance. Finally, in Figure 2c, we compare the individual correlation coefficients (magnitudes of CCA singular values on the training set) of both representations after the last training epoch. This details the observation in Figure 2a: not only the total correlation, but also the individual correlation coefficients are considerably higher when training with the TNO, even though the retrieval performance is lower.
5 CONCLUSION

We presented a fully differentiable version of Canonical Correlation Analysis which enables us to back-propagate errors directly through the computation of CCA. As this requires establishing gradient flow through CCA, we formulate it to allow easy computation of the partial derivatives \( \frac{\partial \mathbf{A}^*}{\partial \mathbf{x}, \mathbf{y}} \) and \( \frac{\partial \mathbf{B}^*}{\partial \mathbf{x}, \mathbf{y}} \) of CCA’s projection matrices \( \mathbf{A}^* \) and \( \mathbf{B}^* \) with respect to the input data \( \mathbf{x} \) and \( \mathbf{y} \). With this formulation, we can incorporate CCA as a building block within multi-modality neural networks that produces maximally-correlated projections of its inputs. In our experiments, we use this building block within a cross-modality retrieval setting, optimizing a network to minimize the cosine distance of the correlated CCA projections. Experimental results show that when using the cosine distance for retrieval (as is common for correlated views), this is superior to optimizing a network for maximally-correlated projections (as done in Deep CCA), or to not using CCA at all. We further observed (Section 4.4) that maximum correlation is not necessarily required to achieve high retrieval performance. Finally, our differentiable CCA layer could provide a useful basis for further research, e.g., as an intermediate processing step for learning binary cross-modality retrieval representations.

ACKNOWLEDGMENTS

The research reported in this paper has been supported by the Austrian Federal Ministry for Transport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria in the frame of the COMET center SCCH, as well as by the Federal Ministry for Transport, Innovation & Technology (BMVIT) and the Austrian Science Fund (FWF): TRP 307-N23. The Tesla K40 used for this research was donated by the NVIDIA Corporation.

REFERENCES

Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep canonical correlation analysis. In Proceedings of the International Conference on Machine Learning, pp. 1247–1255, 2013.

K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In British Machine Vision Conference, 2014.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). International Conference on Learning Representations (ICLR) (arXiv:1511.07289), 2015.

Matthias Dorfer, Rainer Kelz, and Gerhard Widmer. Deep linear discriminant analysis. International Conference on Learning Representations (ICLR) (arXiv:1511.04707), 2015.

Nour El Din Elmadany, Yifeng He, and Ling Guan. Multiview learning via deep discriminative canonical correlation analysis. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2409–2413. IEEE, 2016.

Michael Grubinger, Paul Clough, Henning Müller, and Thomas Deselaers. The IAPR TC-12 benchmark: A new evaluation resource for visual information systems. In International Workshop OntoImage, volume 5, pp. 10, 2006.

Harold Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321–377, 1936.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. URL http://arxiv.org/abs/1502.03167.

Andrej Karpathy, Armand Joulin, and Li Fei-Fei. Deep fragment embeddings for bidirectional image sentence mapping. In Advances in Neural Information Processing Systems, pp. 1889–1897, 2014.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.

Jan R. Magnus. On differentiating eigenvalues and eigenvectors. Econometric Theory, 1(2):179–191, 1985. ISSN 02664666, 14694360.

Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L. Yuille. Explain images with multimodal recurrent neural networks. arXiv preprint arXiv:1410.1090, 2014.

K.V. Mardia, J.T. Kent, and J.M. Bibby. Multivariate Analysis. Probability and Mathematical Statistics. Academic Press, 1979. ISBN 9780124712508.

Théodore Papadopoulo and Manolis I.A. Lourakis. Estimating the Jacobian of the singular value decomposition: Theory and applications. In Proceedings of the 6th European Conference on Computer Vision (ECCV), 2000.

K. B. Petersen and M. S. Pedersen. The Matrix Cookbook, Nov 2012. Version 20121115.

Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207–218, 2014.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.

Kaiye Wang, Qiyue Yin, Wei Wang, Shu Wu, and Liang Wang. A comprehensive survey on cross-modal retrieval. arXiv preprint arXiv:1607.06215, 2016.

Weiran Wang, Raman Arora, Karen Livescu, and Jeff Bilmes. On deep multi-view representation learning. In Proceedings of the International Conference on Machine Learning, 2015a.

Weiran Wang, Raman Arora, Karen Livescu, and Jeff A. Bilmes. Unsupervised learning of acoustic features via deep canonical correlation analysis. In Proceedings of ICASSP, 2015b.

Fei Yan and Krystian Mikolajczyk. Deep correlation for matching images and text. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3441–3450, 2015.

Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78, 2014.
APPENDIX A: OPTIMIZATION SETTINGS

The table below provides a detailed listing of the optimization strategies for all our experiments. All our configurations are of course also available in our experimental code published at (will be added).

<table> <tr> <th rowspan="2">Objective</th> <th rowspan="2">Optimizer</th> <th rowspan="2">Units</th> <th colspan="2">Flickr30k</th> <th rowspan="2">Dropout</th> <th rowspan="2">L2</th> <th rowspan="2">r</th> </tr> <tr> <th>lr<sub>ini</sub></th> <th>lr-schedule</th> </tr> <tr> <td>TNO</td> <td>momentum</td> <td>2048</td> <td>0.05</td> <td>constant</td> <td>0.5</td> <td>none</td> <td>10<sup>-3</sup></td> </tr> <tr> <td>CCAL</td> <td>momentum</td> <td>1024</td> <td>0.5</td> <td>\( \times 0.7 \) from epoch 10</td> <td>none</td> <td>0.002</td> <td>10<sup>-3</sup></td> </tr> <tr> <td>learned-\( cos^2 \)</td> <td>momentum</td> <td>1024</td> <td>0.25</td> <td>none</td> <td>none</td> <td>0.002</td> <td>10<sup>-3</sup></td> </tr> </table>

<table> <tr> <th rowspan="2">Objective</th> <th rowspan="2">Optimizer</th> <th rowspan="2">Units</th> <th colspan="2">IAPR TC-12</th> <th rowspan="2">Dropout</th> <th rowspan="2">L2</th> <th rowspan="2">r</th> </tr> <tr> <th>lr<sub>ini</sub></th> <th>lr-schedule</th> </tr> <tr> <td>TNO</td> <td>adam</td> <td>1024</td> <td>0.001</td> <td>\( \times 0.1 \) in epoch 30</td> <td>none</td> <td>0.0001</td> <td>10<sup>-3</sup></td> </tr> <tr> <td>CCAL</td> <td>adam</td> <td>1024</td> <td>0.001</td> <td>\( \times 0.1 \) in epoch 50</td> <td>none</td> <td>0.0002</td> <td>10<sup>-3</sup></td> </tr> </table>

Table 4: Details on optimization strategies for the respective networks

APPENDIX B: INFLUENCE OF RUNNING AVERAGE STATISTICS

In this additional section, we investigate the influence of the weighting coefficient \( \alpha \) when using exponential moving average estimates of the covariance matrices for the CCA computation (see Section 3). A high \( \alpha \) (close to 1.0) means that the averaged estimate of \( \Sigma_{xx} \), \( \Sigma_{yy} \) and \( \Sigma_{xy} \) mostly depends on the current batch, and a low \( \alpha \) (close to 0.0) means it more strongly depends on the history of previous batches. To assess whether and under what circumstances exponential moving averages are helpful, we run an additional experiment on the IAPR TC-12 dataset as follows: we re-train one of the models of Section 4 both with batch size 1000 and with batch size 200, varying \( \alpha \) from 1.0 to 0.1 with a step size of 0.1 and measuring the MAP achieved on the validation set. We run each setting three times and report the average over the three runs.

Figure 3: Influence of parameter \( \alpha \) (MAP on the validation set vs. \( \alpha \), for batch sizes 200 and 1000).

Figure 3 shows the results of this experiment. For batch size 1000, we draw the same conclusion as was reported by Wang et al. (2015a;b): if the batch size is sufficiently large and representative of the entire population, learning on distribution parameters (in this case covariance matrices) is feasible, and the network performs best when trained with an \( \alpha \) close to one. This is not the case for batch size 200. In particular, the configurations with a large \( \alpha \) (small effective running-average window) perform poorly. We conclude that a batch size of 200 is too small to obtain stable and representative covariances. However, when choosing a small \( \alpha \), it is still possible to train the models and achieve reasonable retrieval performance. As a practical recommendation, we suggest using large batch sizes whenever possible (e.g., if feasible with the available hardware). If the batch size needs to be reduced (e.g., for very large models and limited memory), using small \( \alpha \) values still allows for training canonically correlated retrieval networks. For this work, we use a batch size of 1000 and fix \( \alpha = 1 \), disabling moving averages.
ABSTRACT Canonical Correlation Analysis (CCA) computes maximally-correlated linear projections of two modalities. We propose Differentiable CCA, a formulation of CCA that can be cast as a layer within a multi-view neural network. Unlike Deep CCA, an earlier extension of CCA to nonlinear projections, our formulation enables gradient flow through the computation of the CCA projection matrices, and free choice of the final optimization target. We show the effectiveness of this approach in cross-modality retrieval experiments on two public image-to-text datasets, surpassing both Deep CCA and a multi-view network with freely-learned projections. We assume that Differentiable CCA could be a useful building block for many multi-modality tasks. 1 INTRODUCTION Deep Canonical Correlation Analysis (DCCA) (Andrew et al., 2013) is a non-linear extension of classic Canonical Correlation Analysis (CCA) (Hotelling, 1936) that learns highly correlated latent representations on top of two different neural networks. The central idea of our work is to extend this formulation and cast CCA as a fully differentiable neural network layer which allows for parameter optimization via back-propagation through the CCA projection matrices. This is in contrast to DCCA, where correlation analysis is the topmost part of the network and only used as an optimization target for maximizing the correlation between the respective views. DCCA in general gained a lot of attention recently. It inspired related methods such as Deep Linear Discriminant Analysis (Dorfer et al., 2015) as well as a discriminative re-formulation of DCCA (Elmadany et al., 2016) applied to improve speech-based emotion recognition. Wang et al. (2015a) show that joint optimization of correlation and reconstruction error in auto-encoder configurations is successfully used for representation learning on a multi-modal speech production dataset. We take this as a motivation to evolve and extend the applicability of DCCA. In our experiments, we employ the proposed differentiable CCA layer in a cross-modality retrieval setup. Cross-modality retrieval is the task of retrieving relevant data of another type when a sample of a different modality is given as a search query. A recent survey by Wang et al. (2016) categorizes the task into binary and real-valued representation learning. In the case of real-valued representation learning, End-to-End DCCA (Yan & Mikolajczyk, 2015) achieves state of the art retrieval results in combination with retrieval by cosine distance computation. With differentiable CCA, it becomes possible to train the networks to directly minimize the objective which will be used for retrieval (e.g., the cosine distance), while still benefitting from the optimally-correlated projections obtained by CCA. Results on two publicly available datasets (Flickr30k (Young et al., 2014), IAPR TC-12 (Grubinger et al., 2006)) suggest that our approach is capable to improve retrieval results in both directions. The remainder of our paper is structured as follows. In Section 2, we review classic and deep CCA, which are the basis for the differentiable CCA layer proposed in Section 3. In Section 4, we show results of an experimental evaluation in a cross-modality retrieval setting and provide further investigations on the representations learned by our networks. Finally, Section 5 concludes the paper. 
2 FOUNDATION: CANONICAL CORRELATION ANALYSIS AND DEEP CCA In this section, we review the concepts of classical and deep Canonical Correlation Analysis, the basis for the methodology proposed in this work. 2.1 CANONICAL CORRELATION ANALYSIS (CCA) Let \( \mathbf{x} \in \mathbb{R}^{d_x} \) and \( \mathbf{y} \in \mathbb{R}^{d_y} \) denote two random vectors with covariances \( \Sigma_{xx} \) and \( \Sigma_{yy} \) and cross-covariance \( \Sigma_{xy} \). The objective of CCA is to find two matrices \( \mathbf{A}^* \in \mathbb{R}^{d_x \times k} \) and \( \mathbf{B}^* \in \mathbb{R}^{d_y \times k} \) (with \( k \leq d_x \) and \( k \leq d_y \)) that project \( \mathbf{x} \) and \( \mathbf{y} \) into a common space maximizing their cross-correlation: \[ (\mathbf{A}^*, \mathbf{B}^*) = \arg\max_{\mathbf{A}, \mathbf{B}} \operatorname{corr}(\mathbf{A}'\mathbf{x}, \mathbf{B}'\mathbf{y}) \] To fix the scale of \( \mathbf{A} \) and \( \mathbf{B} \) and force the projected dimensions to be uncorrelated, the optimization is further constrained to \( \mathbf{A}'\Sigma_{xx}\mathbf{A} = \mathbf{B}'\Sigma_{yy}\mathbf{B} = \mathbf{I} \), arriving at: \[ (\mathbf{A}^*, \mathbf{B}^*) = \underset{\mathbf{A}'\Sigma_{xx}\mathbf{A}=\mathbf{B}'\Sigma_{yy}\mathbf{B}=\mathbf{I}}{\arg\max} \mathbf{A}'\Sigma_{xy}\mathbf{B} \] Let \( \mathbf{T} = \Sigma_{xx}^{-1/2}\Sigma_{xy}\Sigma_{yy}^{-1/2} \), and let \( \mathbf{T} = \mathbf{U} \operatorname{diag}(\mathbf{d}) \mathbf{V}' \) be the Singular Value Decomposition (SVD) of \( \mathbf{T} \) with ordered singular values \( d_i \geq d_{i+1} \). As shown by Mardia et al. (1979), we obtain \( \mathbf{A}^* \) and \( \mathbf{B}^* \) from the top \( k \) left- and right-singular vectors of \( \mathbf{T} \): \[ \mathbf{A}^* = \Sigma_{xx}^{-1/2} \mathbf{U}_{:,k} \qquad \mathbf{B}^* = \Sigma_{yy}^{-1/2} \mathbf{V}_{:,k} \] Moreover, the cross-correlation in the projection space is the sum of the top \( k \) singular values: \[ \operatorname{corr}(\mathbf{A}^*\mathbf{x}, \mathbf{B}^*\mathbf{y}) = \sum_{i \leq k} d_i \] In practice, the covariances and cross-covariance of \( \mathbf{x} \) and \( \mathbf{y} \) are usually not known, but estimated from a training set of \( m \) paired vectors, expressed as matrices \( \mathbf{X} \in \mathbb{R}^{d_x \times m}, \mathbf{Y} \in \mathbb{R}^{d_y \times m} \): \[ \overline{\mathbf{X}} = \mathbf{X} - \frac{1}{m} \mathbf{X} \mathbf{1}_1 \qquad \overline{\mathbf{Y}} = \mathbf{Y} - \frac{1}{m} \mathbf{Y} \mathbf{1}_1 \] \[ \hat{\Sigma}_{xx} = \frac{1}{m-1} \overline{\mathbf{X}} \overline{\mathbf{X}}' + r \mathbf{I} \qquad \hat{\Sigma}_{xy} = \frac{1}{m-1} \overline{\mathbf{X}} \overline{\mathbf{Y}}' \qquad \hat{\Sigma}_{yy} = \frac{1}{m-1} \overline{\mathbf{Y}} \overline{\mathbf{Y}}' + r \mathbf{I} \] Here, \( r \) is a regularization parameter ensuring the matrices are positive definite. Substituting these estimates for \( \Sigma_{xx}, \Sigma_{xy} \) and \( \Sigma_{yy} \), respectively, we can estimate \( \mathbf{A}^* \) and \( \mathbf{B}^* \) using Equation 3. 2.2 DEEP CANONICAL CORRELATION ANALYSIS (DCCA) Andrew et al. (2013) propose an extension of CCA that allows learning parametric nonlinear transformations of two variables maximizing the cross-correlation after optimal projection. 
Specifically, let \( \mathbf{a} \in \mathbb{R}^{d_a} \) and \( \mathbf{b} \in \mathbb{R}^{d_b} \) denote two random vectors, and let \( \mathbf{x} = f(\mathbf{a}; \Theta_f) \) and \( \mathbf{y} = g(\mathbf{b}; \Theta_g) \) denote their nonlinear transformations, parameterized by \( \Theta_f \) and \( \Theta_g \). For example, \( f \) and \( g \) could be feed-forward neural networks. As before, Equation 3 gives the linear transformations of \( \mathbf{x} \) and \( \mathbf{y} \) optimizing the CCA objective in Equation 2. Deep CCA optimizes \( \Theta_f \) and \( \Theta_g \) to further increase the cross-correlation. For \( d_x = d_y = k \), the CCA objective is equal to the sum of all singular values of \( \mathbf{T} \) (Equation 4), which is equal to its trace norm: \[ \operatorname{corr}(f(\mathbf{a}; \Theta_f), g(\mathbf{b}; \Theta_g)) = \operatorname{corr}(\mathbf{x}, \mathbf{y}) = ||\mathbf{T}||_t = \operatorname{tr}(\mathbf{T}'\mathbf{T})^{1/2} \] Figure 1: Comparison of DCCA and the prosed differentiable CCA layer. DCCA optimizes the correlation of the two different views and is therefore the topmost part of the network. In contrast, our CCA layer establishes gradient flow over the CCA computation. This allows us to use the projection output of CCA as input for subsequent components in a multi-view network (e.g., a retrieval objective such as cosine distance). Andrew et al. (2013) show how to compute the gradient of this Trace Norm Objective (TNO) with respect to x and y. Assuming f and g are differentiable with respect to \( \Theta_f \) and \( \Theta_g \) (as is the case for neural networks), this allows to optimize the nonlinear transformations via a gradient-based method. Figure 1a shows a schematic sketch of DCCA, as a fixed objective backpropagated through two neural networks. 3 DIFFERENTIABLE IMPLEMENTATION OF CCA In this section, we further extend DCCA to allow not only an arbitrary nonlinear transformation of the inputs, but also arbitrary transformations of (or objectives on) the projected vectors. This allows CCA to be used as a building block within a multi-modality neural network, instead of as a final objective only. In the following, we will discuss how to enable backpropagation through CCA, what to consider when doing stochastic updates, and how to apply it for cross-modality retrieval. 3.1 GRADIENT OF CCA As mentioned above, we can compute the canonical correlation along with the optimal projection matrices from the singular value decomposition \( \mathbf{T} = \Sigma_{xx}^{-1/2} \Sigma_{xy} \Sigma_{yy}^{-1/2} = \mathbf{U} \operatorname{diag}(\mathbf{d}) \mathbf{V}' \). Specifically, we obtain the correlation as \( \sum_i d_i \), and projections as \( \mathbf{A}^* = \Sigma_{xx}^{-1/2} \mathbf{U} \) and \( \mathbf{B}^* = \Sigma_{yy}^{-1/2} \mathbf{V} \). For DCCA, it suffices to compute the gradient of the total correlation wrt. x and y in order to backpropagate it through the two networks f and g. Using the chain rule, Andrew et al. (2013) decompose this into the gradients of the total correlation wrt. \( \Sigma_{xx} \), \( \Sigma_{xy} \) and \( \Sigma_{yy} \), and the gradients of those wrt. x and y. Their derivations of the former make use of the fact that both the gradient of \( \sum_i d_i \) wrt. T and the gradient of \( \| \mathbf{T} \|_F \) (the trace norm objective in Equation 7) wrt. \( \mathbf{T}' \mathbf{T} \) have a simple form; see Andrew et al. (2013, Sec. 7) for details. 
For our differentiable CCA, we instead need the gradients of the projected data \( \mathbf{A}^* \mathbf{x} \) and \( \mathbf{B}^* \mathbf{y} \) wrt. x and y, which require \( \frac{\partial \mathbf{U}}{\partial \mathbf{x}, \mathbf{y}} \) and \( \frac{\partial \mathbf{V}}{\partial \mathbf{x}, \mathbf{y}} \). We could again decompose this into the gradients wrt. T, the gradients of T wrt. \( \Sigma_{xx} \), \( \Sigma_{xy} \) and \( \Sigma_{yy} \) and the gradients of those wrt. x and y. However, while the gradients of U and V wrt. T are known (Papadopoulo & Lourakis, 2000), they involve solving \( O((d_x d_y)^2) \) linear \( 2 \times 2 \) systems. To arrive at a more practical implementation that does not require the gradient of the SVD, we reformulate the solution to use two symmetric eigendecompositions \( \mathbf{T} \mathbf{T}' = \mathbf{U} \operatorname{diag}(\mathbf{e}) \mathbf{U}' \) and \( \mathbf{T}' \mathbf{T} = \mathbf{V} \operatorname{diag}(\mathbf{e}) \mathbf{V}' \) (Petersen & Pedersen, 2012, Eq. 270). This gives us the same left and right eigenvectors we would obtain from the SVD (save for possibly flipped signs, which are easy to fix), along with the squared singular values (\( e_i = d_i^2 \)). The gradients of eigenvectors of symmetric real eigensystems have a simple form (Magnus, 1985, Eq. 7) and both T'T' and T'T are differentiable wrt. x and y, enabling a sufficiently efficient implementation in a graph-based, auto-differentiating math compiler such as Theano (Theano Development Team, 2016). 3.2 STOCHASTIC OPTIMIZATION For classical CCA, \( \Sigma_{xx}, \Sigma_{xy} \) and \( \Sigma_{yy} \) are estimated from a large set of m training examples (Equation 6). In contrast, gradient-based optimization of neural networks usually estimates the gradients wrt. network parameters from mini-batches of n randomly drawn examples, with \( n \ll m \). In Deep CCA as well as in our extension, the correlations are functions of the network parameters that we need to backpropagate through, effectively enforcing \( m = n \). Andrew et al. (2013) solve this discrepancy by optimizing the network parameters with L-BFGS on the full training set, which is infeasible for very large datasets. Yan & Mikolajczyk (2015) instead train on small mini-batches, estimating correlation matrices of size 4096 × 4096 from 100 examples only, which seems risky. We will choose a way in between, training on large mini-batches to obtain stable estimates. This approach was also taken by Wang et al. (2015b, Sec. 5.1), who found mini-batches of 400–1000 examples to even outperform full-batch L-BFGS. In addition, for testing, we optionally re-estimate the correlation matrices (and the corresponding projection matrices) using a larger set of \( m > n \) examples. Another tempting option is to train on small mini-batches, but use exponential moving averages updated with each mini-batch as follows: \[ \Sigma_{xx} \leftarrow \Sigma_{xx}(1-\alpha) + \hat{\Sigma}_{xx}\alpha \quad \Sigma_{xy} \leftarrow \Sigma_{xy}(1-\alpha) + \hat{\Sigma}_{xy}\alpha \quad \Sigma_{yy} \leftarrow \Sigma_{yy}(1-\alpha) + \hat{\Sigma}_{yy}\alpha \] With proper initialization and a sufficiently small coefficient \( \alpha \), this gives stable estimates even for small \( n \). 
However, since only the estimates from the current mini-batch \( \hat{\Sigma}_{xx}, \hat{\Sigma}_{xy} \) and \( \hat{\Sigma}_{yy} \) can be practically considered in backpropagation, this changes the learning dynamics: For too small \( \alpha \), the projection matrices will be virtually degraded to constants. Empirically, we found that large mini-batches perform slightly better than small batches with moving averages (see Appendix B). 3.3 CROSS-MODALITY RETRIEVAL WITH DIFFERENTIABLE CCA DCCA maximizes the correlation between the latent representations of two different neural networks. When the two network inputs a and b represent different views of an entity (e.g., an image and its textual description), DCCA projects them into a common space where they are highly correlated. This can be exploited for cross-modality retrieval: Projecting one modality of an entity, we can find the best-matching representations of the second modality (e.g., an image for a textual description, or vice versa). To find the best matches, a common option is to compute nearest neighbors in terms of cosine distance (Yan & Mikolajczyk, 2015), which is closely related to correlation. Given the methodology introduced above, we now have the means to optimize DCCA projections directly for the task at hand. In Figure 1b, we show a possible setting where we put the differentiable CCA layer on top of a multi-view network. Instead of optimizing the networks to maximize the correlation of the projected views (the TNO), we can optimize the networks towards a task-specific objective and still benefit from the optimality of the CCA projections. For this work, we optimize towards minimal cosine distance between the correlated views, the very metric used for retrieval. In the next section, we empirically show that this is indeed beneficial in terms of quantitative retrieval performance as well as convergence speed of network training. 4 EXPERIMENTS We evaluate our approach in cross-modality retrieval experiments on two publicly available datasets (also considered by Yan & Mikolajczyk (2015)) and provide investigations on the representations learned by the network. 4.1 EXPERIMENTAL SETUP For the evaluation of our approach, we consider Flickr30k and IAPR TC-12, two publicly available datasets for cross-modality retrieval. Flickr30k consists of image-caption pairs, where each image A man in a white cowboy hat reclines in front of a window in an airport. A young man rests on an airport seat with a cowboy hat over his face. A woman relaxes on a couch, with a white cowboy hat over her head. A man is sleeping inside on a bench with his hat over his eyes. A person is sleeping at an airport with a hat on their head. a green and brown embankment with brown houses on the right and a light brown sandy beach at the dark blue sea on the left; a dark mountain range behind it and white clouds in a light blue sky in the background; Table 1: Example images for Flickr30k (top) and IAPR TC-12 (bottom) is annotated with five different textual descriptions. The train-validation-test split for Flickr30k is 28000-1000-1000. In terms of evaluation setup, we follow the related work and report results on two different evaluation protocols. Protocol pooled pools the five available captions into one "concatenated" text, meaning that only one but richer text annotation remains per image. This is done for all three sets. Protocol 5 captions pools only the captions of the train set and keeps five separate annotations for validation and test set. 
The IAPR TC-12 dataset contains 20000 natural images where only one – but compared to Flickr30k more detailed – caption is available for each image. As no predefined train-validation-test split is provided, we randomly select 2000 images for testing, 1000 for validation and keep the rest for training. Yan & Mikolajczyk (2015) also use 2000 images for testing, but did not explicitly mention hold out images for validation. Table 1 shows an example image along with its corresponding captions or caption for either dataset. The task at hand for both datasets is to retrieve the correct counterpart – either text or image – when given a query element of the other modality. We follow Yan & Mikolajczyk (2015) and use the cosine distance for retrieval in the projection space. As evaluation measures we consider the Recall@k (R@k) as well as the Median Rank (MR) and the Mean Average Precision (MAP). The R@k rate (high is better) is the ratio of queries which have the correct corresponding counterpart in the first k retrieval results. The MR is the median position (low is better) of the target in a similarity-ordered list of available candidates. Finally, we define the MAP (high is better) as the mean value of \( 1 / Rank \) over all queries. The input to our networks is a 4096-dimensional image feature vector along with a corresponding text vector representation (5793 for Flickr30k, 2048 for IAPR TC-12). In terms of text pre-processing, we follow Yan & Mikolajczyk (2015), tokenizing and lemmatizing the raw captions as the first step. Based on the lemmatized captions, we compute l2-normalized TF/IDF-vectors, omitting words with an overall occurrence smaller than 5 times for Flickr30k and 3 times for IAPR TC-12, respectively. The image representations are computed from the last hidden layer of a network pretrained on ImageNet (layer fc7 of CNN_S by Chatfield et al. (2014)). 4.2 Network Architectures and Optimization Details We feed 4096-dimensional image vectors along with the corresponding text representation into our networks. The image representation is followed by a linear dense layer with 128 units (this will also be the dimensionality \( k = 128 \) of the resulting CCA retrieval space). The text vector is processed by two batch-normalized (Ioffe & Szegedy, 2015) dense layers of 1024 units each and an ELU activation function (Clevert et al., 2015). As a last layer for the text representation network, we again apply a dense layer with 128 linear units. For a fair comparison, we keep the structure (and number of parameters) of all networks in our experiments the same. The only parameters that vary are the objectives and the corresponding optimization/regularization strategies. In particular, we apply a grid search on the respective hyper-parameters and report the best results for each method. Optimization is performed either using Stochastic Gradient Descent (SGD) with momentum or by the adam (Kingma & Ba, 2014) update rule. Table 2: Cross-modality retrieval results on Flickr30k. “E2E-DCCA” is taken from Yan & Mikolajczyk (2015), all other results are our own. Methods marked with "*" re-estimate projection matrices from a larger batch than used during training (10,000 training examples), see Section 3.2. 
<table> <tr> <th rowspan="2">Protocol</th> <th rowspan="2">Method</th> <th colspan="4">Image-to-Text</th> <th colspan="4">Text-to-Image</th> </tr> <tr> <th>R@1</th> <th>R@5</th> <th>R@10</th> <th>MR</th> <th>R@1</th> <th>R@5</th> <th>R@10</th> <th>MR</th> </tr> <tr> <td rowspan="6">pooled</td> <td>E2E-DCCA</td> <td>27.9</td> <td>56.9</td> <td>68.2</td> <td>4</td> <td>26.8</td> <td>52.9</td> <td>66.9</td> <td>4</td> </tr> <tr> <td>TNO*</td> <td>29.9</td> <td>57.9</td> <td>67.9</td> <td>4</td> <td>21.8</td> <td>48.1</td> <td>64.0</td> <td>6</td> </tr> <tr> <td>learned-\(cos^2\)</td> <td>9.0</td> <td>23.3</td> <td>32.8</td> <td>28</td> <td>8.5</td> <td>23.3</td> <td>32.8</td> <td>26</td> </tr> <tr> <td>CCAL-\(l2\)</td> <td>18.2</td> <td>42.0</td> <td>53.6</td> <td>9</td> <td>17.7</td> <td>42.2</td> <td>53.2</td> <td>9</td> </tr> <tr> <td>CCAL-\(cos\)</td> <td>28.9</td> <td>57.5</td> <td>69.1</td> <td>4</td> <td>25.1</td> <td>53.1</td> <td>66.4</td> <td>5</td> </tr> <tr> <td>CCAL-\(cos^2\)</td> <td>30.7</td> <td>58.8</td> <td>70.1</td> <td>4</td> <td>28.0</td> <td>56.2</td> <td>68.3</td> <td>4</td> </tr> <tr> <td>CCAL-\(cos^2*\)</td> <td>34.1</td> <td>60.0</td> <td>70.6</td> <td>3.5</td> <td>29.2</td> <td>58.3</td> <td>69.7</td> <td>4</td> </tr> <tr> <td rowspan="4">5 captions</td> <td>E2E-DCCA</td> <td>16.7</td> <td>39.3</td> <td>52.9</td> <td>8</td> <td>12.6</td> <td>31.0</td> <td>43.0</td> <td>15</td> </tr> <tr> <td>TNO*</td> <td>17.5</td> <td>39.3</td> <td>51.4</td> <td>10</td> <td>13.4</td> <td>31.7</td> <td>41.3</td> <td>19</td> </tr> <tr> <td>CCAL-\(cos^2\)</td> <td>21.2</td> <td>44.4</td> <td>55.8</td> <td>8</td> <td>14.9</td> <td>35.9</td> <td>47.5</td> <td>12</td> </tr> <tr> <td>CCAL-\(cos^2*\)</td> <td>20.6</td> <td>45.9</td> <td>57.2</td> <td>7</td> <td>15.6</td> <td>37.0</td> <td>49.4</td> <td>11</td> </tr> </table> As optimization targets, we consider the following candidates: (1) The Trace Norm Objective (\(TNO\)) as our base line for cross-modality retrieval (Yan & Mikolajczyk, 2015). (2) The proposed differentiable CCA layer in combination with the objectives cosine distance (\(CCAL-cos\)), squared cosine distance (\(CCAL-cos^2\)) and euclidean distance (\(CCAL-l2\)). As an additional setting, we consider a freely-learnable projection layer where the projection matrices \(A\) and \(B\) are randomly initialized weights that can be optimized by the network using SGD in the conventional way. This allows to assess the benefit of using CCA-derived projections within a multi-view network under otherwise unchanged objectives. For this experiment, we optimize for the squared cosine distance and denote the setting by \(learned-cos^2\). The batch size is set to 1000 samples to allow stable covariance estimates for the CCA (Section 3.2). For further stabilization, we regularize the covariance matrices (Andrew et al., 2013) by adding scaled (\(r = 10^{-3}\)) identity matrices to the estimates \(\Sigma_{xx}, \Sigma_{yy}\) and \(T\) (Section 2.1). The variants based on differentiable CCA are additionally regularized by \(L2\) weight decay. No dropout is used in this settings as it harmed optimization in our experiments. When optimizing with the TNO we follow Yan & Mikolajczyk (2015) and use dropout (\(p = 0.5\)) after the first two dense layers of the text network. In Table 4 in Appendix A we provide the optimization settings for all configurations in detail, found using a grid search optimizing MAP on the validation set. 
4.3 EXPERIMENTAL RESULTS ON CROSS-MODALITY RETRIEVAL Table 2 lists our results on Flickr30k. Along with our experiments, we also show the results reported in (Yan & Mikolajczyk, 2015) as a reference (E2E-DCCA). However, a direct comparison to our results may not be fair: E2E-DCCA uses a different ImageNet-pretrained network for the image representation, and finetunes this network while we keep it fixed (as we are only interested in comparing differentiable CCA to alternatives, not in obtaining the best possible results). Our TNO results use the same objective as E2E-DCCA, but our network architecture, permitting direct comparison. When comparing the performance of our networks, we observe a gain both for image-to-text and text-to-image retrieval when training with the CCAL-\(cos^2\) objective compared to TNO (e.g., R@1 of 34.1 compared to 29.9 under protocol pooled). This indicates that training a network directly on the objective used for retrieval (using differentiable CCA) is a reasonable design choice. A closer look at the results also reveals that the squared cosine distance is superior compared to the remaining objectives. We further observe that the randomly initialized projection matrices learned entirely by SGD (learned-\(cos^2\)) show poor performance compared to their CCA counterpart (even though in theory, they could converge to exactly the same solution). This suggests that exploiting the beneficial properties of the CCA projections directly within a network during training is a powerful tool, supporting optimization of related objectives. CCAL-\(l2\) for example performs poorer than the variants including cosine losses but still better than the version with learned weights. On protocol Table 3: Cross-modality retrieval results on IAPR TC-12 <table> <tr> <th rowspan="2">Method</th> <th colspan="4">Image-to-Text</th> <th colspan="4">Text-to-Image</th> </tr> <tr> <th>R@1</th> <th>R@5</th> <th>MAP</th> <th>MR</th> <th>R@1</th> <th>R@5</th> <th>MAP</th> <th>MR</th> </tr> <tr> <td>E2E-DCCA</td> <td>30.2</td> <td>57.0</td> <td>0.426</td> <td></td> <td>29.5</td> <td>60.0</td> <td>0.415</td> <td></td> </tr> <tr> <td>TNO*</td> <td>30.0</td> <td>56.7</td> <td>0.424</td> <td>4</td> <td>28.0</td> <td>55.4</td> <td>0.410</td> <td>5</td> </tr> <tr> <td>CCAL-\( cos^2 \)*</td> <td>31.1</td> <td>58.4</td> <td>0.439</td> <td>4</td> <td>26.8</td> <td>55.1</td> <td>0.403</td> <td>4</td> </tr> </table> ![Three line plots showing evolution of correlation (train), MAP over training epochs and cosine distance (validation), and individual correlations](page_184_670_1207_246.png) (a) Evolution of correlation (train) (b) MAP over training epochs and cosine distance (validation) (c) Individual Correlations Figure 2: Comparison of the TNO and CCAL-\( cos^2 \) based on the total amount of canonical correlation (sum over singular values d) as well as the cosine distance between corresponding samples. 5 captions, we only report the best results (CCAL-\( cos^2 \)) along with the TNO and observe similar tendencies. Note that there are various other methods reporting results on Flickr30k (Karpathy et al., 2014; Socher et al., 2014; Mao et al., 2014; Kiros et al., 2014) which partly surpass ours, for example by using more elaborate processing of the textual descriptions. We omit these results as we focus on the comparison of DCCA with the proposed differentiable CCA layer. In Table 3, we list our results on the IAPR TC-12 dataset. 
We again show the retrieval performances of Yan & Mikolajczyk (2015) as a baseline (again with limited comparability, due to a different architecture and a different train-validation-test split), along with our implementation of the TNO and the CCA layer trained with squared cosine distance. For image-to-text retrieval, we achieve slightly better retrieval performances when training with cosine distance and propagating the gradients back through the differentiable CCA layer. For the other direction, results are slightly worse. 4.4 INVESTIGATIONS ON LEARNED REPRESENTATIONS In this section, we provide a more detailed look at the learned representations. We compare the representations learned with the TNO to the proposed CCA layer optimized with the squared cosine distance objective. For easier comparison, we re-train both networks with a reduced projection dimensionality of \( h = 64 \) – otherwise, the TNO takes much longer to converge than the CCA layer. This results in slightly decreased performance for both, but the relative tendencies are preserved. Figure 2a shows the evolution of the mean correlation (mean over singular values with maximum 1.0) on the training set during optimization. Allong with the correlation, we also plot the average cosine distance between corresponding pairs on the validation set. As expected, for the TNO we observe a continuous decrease of cosine distance when the correlation increases. Interestingly, this is not the case for CCAL-\( cos^2 \). The result suggests that the network found a way of minimizing the cosine distance other than by increasing correlation between the representations – the latter even decreases after a few training epochs. In Figure 2b, we plot the corresponding evolution of MAP on the training and validation set, confirming that the decreased cosine distance indeed also leads to improved retrieval performance. Finally, in Figure 2c we compare the individual correlation coefficients (magnitudes of CCA singular values on the training set) of both representations after the last training epoch. This details the observation in Figure 2a: not only the total correlation, but also the individual correlation coefficients are considerably higher when training with TNO, even though the retrieval performance is lower. 5 CONCLUSION We presented a fully differentiable version of Canonical Correlation Analysis which enables us to back-propagate errors directly through the computation of CCA. As this requires to establish gradient flow through CCA, we formulate it to allow easy computation of the partial derivatives \( \frac{\partial \mathbf{A}^*}{\partial \mathbf{x}, \mathbf{y}} \) and \( \frac{\partial \mathbf{B}^*}{\partial \mathbf{x}, \mathbf{y}} \) of CCA’s projection matrices \( \mathbf{A}^* \) and \( \mathbf{B}^* \) with respect to the input data \( \mathbf{x} \) and \( \mathbf{y} \). With this formulation, we can incorporate CCA as a building block within multi-modality neural networks that produces maximally-correlated projections of its inputs. In our experiments, we use this building block within a cross-modality retrieval setting, optimizing a network to minimize the cosine distance of the correlated CCA projections. Experimental results show that when using the cosine distance for retrieval (as is common for correlated views), this is superior to optimizing a network for maximally-correlated projections (as done in Deep CCA), or not using CCA at all. 
ACKNOWLEDGMENTS The research reported in this paper has been supported by the Austrian Federal Ministry for Transport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria in the frame of the COMET center SCCH, as well as by the Federal Ministry for Transport, Innovation & Technology (BMVIT) and the Austrian Science Fund (FWF): TRP 307-N23. The Tesla K40 used for this research was donated by the NVIDIA Corporation.

APPENDIX A: OPTIMIZATION SETTINGS The tables below (Table 4) provide a detailed listing of the optimization strategies for all our experiments. All our configurations are also available in our experimental code published at (will be added).

<table> <tr> <th rowspan="2">Objective</th> <th rowspan="2">Optimizer</th> <th rowspan="2">Units</th> <th colspan="2">Flickr30k</th> <th rowspan="2">Dropout</th> <th rowspan="2">L2</th> <th rowspan="2">r</th> </tr> <tr> <th>lr<sub>ini</sub></th> <th>lr-schedule</th> </tr> <tr> <td>TNO</td> <td>momentum</td> <td>2048</td> <td>0.05</td> <td>constant</td> <td>0.5</td> <td>none</td> <td>10<sup>-3</sup></td> </tr> <tr> <td>CCAL</td> <td>momentum</td> <td>1024</td> <td>0.5</td> <td>\( \times 0.7 \) from epoch 10</td> <td>none</td> <td>0.002</td> <td>10<sup>-3</sup></td> </tr> <tr> <td>learned-\( cos^2 \)</td> <td>momentum</td> <td>1024</td> <td>0.25</td> <td>none</td> <td>none</td> <td>0.002</td> <td>10<sup>-3</sup></td> </tr> </table>

<table> <tr> <th rowspan="2">Objective</th> <th rowspan="2">Optimizer</th> <th rowspan="2">Units</th> <th colspan="2">IAPR TC-12</th> <th rowspan="2">Dropout</th> <th rowspan="2">L2</th> <th rowspan="2">r</th> </tr> <tr> <th>lr<sub>ini</sub></th> <th>lr-schedule</th> </tr> <tr> <td>TNO</td> <td>adam</td> <td>1024</td> <td>0.001</td> <td>\( \times 0.1 \) in epoch 30</td> <td>none</td> <td>0.0001</td> <td>10<sup>-3</sup></td> </tr> <tr> <td>CCAL</td> <td>adam</td> <td>1024</td> <td>0.001</td> <td>\( \times 0.1 \) in epoch 50</td> <td>none</td> <td>0.0002</td> <td>10<sup>-3</sup></td> </tr> </table>

Table 4: Details on optimization strategies for the respective networks

![Line plot showing MAP (1/Rank) vs alpha for batch sizes 200 and 1000](page_374_186_591_377.png) Figure 3: Influence of parameter \( \alpha \).

APPENDIX B: INFLUENCE OF RUNNING AVERAGE STATISTICS In this additional section, we investigate the influence of the weighting coefficient \( \alpha \) when using exponential moving average estimates of the covariance matrices for the CCA computation (see Section 3). A high \( \alpha \) (close to 1.0) means that the averaged estimate of \( \Sigma_{xx} \), \( \Sigma_{yy} \) and \( \Sigma_{xy} \) depends mostly on the current batch, and a low \( \alpha \) (close to 0.0) means it depends more strongly on the history of previous batches. To assess whether and under what circumstances exponential moving averages are helpful, we run an additional experiment on the IAPR TC-12 dataset as follows: we re-train one of the models of Section 4 both with batch size 1000 and with batch size 200, varying \( \alpha \) from 1.0 to 0.1 with a step size of 0.1 and measuring the \( MAP \) achieved on the validation set.
We run each setting three times and report the average over the three runs. Figure 3 shows the results of this experiment. For batch size 1000, we draw the same conclusion as reported in (Wang et al., 2015a;b): if the batch size is sufficiently large and representative of the entire population, learning on distribution parameters (in this case covariance matrices) is feasible, and the network performs best when trained with an \( \alpha \) close to one. This is not the case for batch size 200. In particular, the configurations with a large \( \alpha \) (a small effective running-average window) perform poorly. We conclude that a batch size of 200 is too small to obtain stable and representative covariances. However, when choosing a small \( \alpha \), it is still possible to train the models and achieve reasonable retrieval performance. As a practical recommendation, we suggest using batch sizes as large as the available hardware permits; if the batch size needs to be reduced (e.g., for very large models and limited memory), using small \( \alpha \) values still allows for training canonically correlated retrieval networks. For this work, we use a batch size of 1000 and fix \( \alpha = 1 \), disabling moving averages.
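To make the role of \( \alpha \) concrete, a minimal sketch of the moving-average covariance update could look as follows. This is our own illustration under stated assumptions (the history is detached so gradients only flow through the current batch; function and variable names are hypothetical):

```python
import torch

def ema_covariances(x, y, state, alpha):
    """Exponential-moving-average covariance estimates, as in Appendix B:
    blend the current batch's covariances with the running estimates.
    alpha = 1 reduces to pure batch statistics (the setting used here)."""
    n = x.shape[0]
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    batch = (x.t() @ x / (n - 1),   # Sigma_xx from the current batch
             y.t() @ y / (n - 1),   # Sigma_yy
             x.t() @ y / (n - 1))   # Sigma_xy
    # Detach the history so gradients only flow through the current batch.
    new_state = tuple(alpha * b + (1 - alpha) * s.detach()
                      for b, s in zip(batch, state))
    return new_state  # (Sxx, Syy, Sxy), fed into the CCA computation
```

With a small \( \alpha \), each update moves the estimates only slightly, which is what stabilizes the covariances when the batch size is too small to be representative on its own.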
reject
Reject
3.333333
95287ca906f44f404c1152541eeb12bef63cee04
iclr
2,017
Decomposing Motion and Content for Natural Video Sequence Prediction Ruben Villegas1 Jimei Yang2 Seunghoon Hong3,* Xunyu Lin4,* Honglak Lee1,5 1University of Michigan, Ann Arbor, USA 2Adobe Research, San Jose, CA 95110 3POSTECH, Pohang, Korea 4Beihang University, Beijing, China 5Google Brain, Mountain View, CA 94043

ABSTRACT We propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle the complex evolution of pixels in videos, we propose to decompose motion and content, the two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction. Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the proposed network architecture on human activity videos using the KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatio-temporal dynamics for pixel-level future prediction in natural videos.

1 INTRODUCTION Understanding videos has been one of the most important tasks in the field of computer vision. Compared to still images, the temporal component of videos provides much richer descriptions of the visual world, such as interaction between objects, human activities, and so on. Amongst the various tasks applicable to videos, the task of anticipating the future has recently received increased attention in the research community. Most prior works in this direction focus on predicting high-level semantics in a video, such as action (Vondrick et al., 2015; Ryoo, 2011; Lan et al., 2014), event (Yuen and Torralba, 2010; Hoai and Torre, 2013) and motion (Pintea et al., 2014; Walker et al., 2014; Pickup et al., 2014; Walker et al., 2016). Forecasting semantics provides information about what will happen in a video, and is essential to automate decision making. However, the predicted semantics are often specific to a particular task and provide only a partial description of the future. Also, training such models often requires heavily labeled training data, which leads to tremendous annotation costs, especially with videos. In this work, we aim to address the problem of prediction of future frames in natural video sequences. Pixel-level predictions provide a dense and direct description of the visual world, and existing video recognition models can be adopted on top of the predicted frames to infer various semantics of the future. Spatio-temporal correlations in videos provide self-supervision for frame prediction, which enables purely unsupervised training of a model by observing raw video frames. Unfortunately, estimating frames is an extremely challenging task, not only because of the inherent uncertainty of the future, but also because of the various factors of variation in videos leading to complicated dynamics in raw pixel values.
There have been a number of recent attempts at frame prediction (Srivastava et al., 2015; Mathieu et al., 2015; Oh et al., 2015; Goroshin et al., 2015; Lotter et al., 2015; Ranzato et al., 2014), which use a single encoder that needs to reason about all the different variations occurring in videos in order to make predictions of the future, or require extra information like foreground-background segmentation masks and a static background (Vondrick et al., 2016).

*This work was done while SH and XL were visiting the University of Michigan.

We propose a Motion-Content Network (MCnet) for robust future frame prediction. Our intuition is to split the inputs for video prediction into two easily identifiable groups, motion and content, and independently capture each information stream with separate encoder pathways. In this architecture, the motion pathway encodes the local dynamics of spatial regions, while the content pathway encodes the spatial layout of the salient parts of an image. The prediction of the future frame is then achieved by transforming the content of the last observed frame given the identified dynamics up to the last observation. Somewhat surprisingly, we show that such a network is end-to-end trainable without individual pathway supervision. Specifically, we show that an asymmetric architecture for the two pathways enables such decomposition without explicit supervision. The contributions of this paper are summarized below: • We propose MCnet for the task of frame prediction, which separates the information streams (motion and content) into different encoder pathways. • The proposed network is end-to-end trainable and naturally learns to decompose motion and content without separate training, and reduces the task of frame prediction to transforming the last observed frame into the next by the observed motion. • We evaluate the proposed model on challenging real-world video datasets, and show that it outperforms previous approaches on frame prediction. The rest of the paper is organized as follows. We briefly review related work in Section 2, and introduce an overview of the proposed algorithm in Section 3. The detailed configuration of the proposed network is described in Section 4. Section 5 describes the training and inference procedures. Section 6 illustrates implementation details and experimental results on challenging benchmarks.

2 RELATED WORK The problem of visual future prediction has received growing interest in the computer vision community. It has led to various tasks depending on the objective of future prediction, such as human activity (Vondrick et al., 2015; Ryoo, 2011; Lan et al., 2014), event (Yuen and Torralba, 2010; Hoai and Torre, 2013) and geometric path (Walker et al., 2014). Although previous work achieved reasonable success in specific tasks, they are often limited to estimating predefined semantics and require fully-labeled training data. To alleviate this issue, approaches predicting representations of the future beyond semantic labels have been proposed. Walker et al. (2014) proposed a data-driven approach to predict the motion of a moving object, and a coarse hallucination of the predicted motion. Vondrick et al. (2015) proposed a deep regression network to predict feature representations of future frames. These approaches are supervised and provide coarse predictions of what the future will look like. Our work also focuses on unsupervised learning for prediction of the future, but on a more direct visual prediction task: frame prediction.
Compared to predicting semantics, pixel-level prediction has been less investigated due to the difficulties in modeling the evolution of raw pixels over time. Fortunately, recent advances in deep learning provide a powerful tool for sequence modeling, and enable the creation of novel architectures for modeling complex sequential data. Ranzato et al. (2014) applied a recurrent neural network developed for language modeling to frame prediction by posing the task as classification of each image region to one of quantized patch dictionaries. Srivastava et al. (2015) applied a sequence-to-sequence model to video prediction, and showed that Long Short-Term Memory (LSTM) is able to capture pixel dynamics. Oh et al. (2015) proposed an action-conditional encoder-decoder network to predict future frames in Atari games. In addition to the different choices of architecture, some other works addressed the importance of selecting the right objective function: Lotter et al. (2015) used an adversarial loss with combined CNN and LSTM architectures, and Mathieu et al. (2015) employed a similar adversarial loss with additional regularization using a multi-scale encoder-decoder network. Finn et al. (2016) constructed a network that predicts transformations on the input pixels for next frame prediction. Patraucean et al. (2015) proposed a network that, by explicitly predicting optical flow features, is able to predict the next frame in a video. Vondrick et al. (2016) proposed a generative adversarial network for video which, by generating a background-foreground mask, is able to generate realistic-looking video sequences. However, none of the previously mentioned approaches exploit spatial and temporal information separately in an unsupervised fashion. In terms of the way data is observed, the closest work to ours is Xue et al. (2016). The differences are: (1) our model is deterministic and theirs is probabilistic, (2) our motion encoder is based on a convolutional LSTM (Shi et al., 2015), which is a more natural module to model long-term dynamics, (3) our content encoder observes a single-scale input and theirs observes many scales, and (4) we directly generate image pixel values, which is a more complicated task. We aim to exploit the existing spatio-temporal correlations in videos by decomposing the motion and content in our network architecture. To the best of our knowledge, the idea of separating motion and content has not been investigated in the task of unsupervised deterministic frame prediction. The proposed architecture shares similarities with the two-stream CNN (Simonyan and Zisserman, 2014), which is designed for action recognition to jointly exploit the information from frames and their temporal dynamics. However, in contrast to their network, we aim to learn features for temporal dynamics directly from the raw pixels, and we use the identified motion features in combination with spatial features to make pixel-level predictions of the future.

3 ALGORITHM OVERVIEW In this section, we formally define the task of frame prediction and the role of each component in the proposed architecture. Let \( \mathbf{x}_t \in \mathbb{R}^{w \times h \times c} \) denote the t-th frame in an input video \( \mathbf{x} \), where \( w, h, \) and \( c \) denote width, height, and number of channels, respectively. The objective of frame prediction is to generate the future frame \( \hat{\mathbf{x}}_{t+1} \) given the input frames \( \mathbf{x}_{1:t} \).
At the t-th time step, our network observes a history of previous consecutive frames up to frame t, and generates the prediction of the next frame \( \hat{\mathbf{x}}_{t+1} \) as follows: • **Motion Encoder** recurrently takes an image difference input between frame \( \mathbf{x}_t \) and \( \mathbf{x}_{t-1} \) starting from \( t = 2 \), and produces the hidden representation \( \mathbf{d}_t \) encoding the temporal dynamics of the scene components (Section 4.1). • **Content Encoder** takes the last observed frame \( \mathbf{x}_t \) as an input, and outputs the hidden representation \( s_t \) that encodes the spatial layout of the scene (Section 4.2). • **Multi-Scale Motion-Content Residual** takes the computed features, from both the motion and content encoders, at every scale right before pooling and computes residuals \( r_t \) (He et al., 2015) to mitigate the information loss caused by pooling in the encoding phase (Section 4.3). • **Combination Layers and Decoder** takes the outputs from both encoder pathways and residual connections, \( \mathbf{d}_t, s_t, \) and \( r_t \), and combines them to produce a pixel-level prediction of the next frame \( \hat{\mathbf{x}}_{t+1} \) (Section 4.4). The overall architecture of the proposed algorithm is described in Figure 1. The prediction of multiple frames, \( \hat{\mathbf{x}}_{t+1:t+T} \), can be achieved by recursively performing the above procedure over T time steps (Section 5). Each component of the proposed architecture is described in the following section.

Figure 1: Overall architecture of the proposed network. (a) illustrates MCnet without the Motion-Content Residual skip connections, and (b) illustrates MCnet with such connections. Our network observes a history of image differences through the motion encoder and the last observed image through the content encoder. Subsequently, our network computes motion-content features and communicates them to the decoder for the prediction of the next frame.

4 ARCHITECTURE This section describes the detailed configuration of the proposed architecture, including the two encoder pathways, multi-scale residual connections, combination layers, and decoder.

4.1 MOTION ENCODER The motion encoder captures the temporal dynamics of the scene’s components by recurrently observing subsequent difference images computed from \( \mathbf{x}_{t-1} \) and \( \mathbf{x}_t \), and outputs motion features by \[ [\mathbf{d}_t, \mathbf{c}_t] = f^{\text{dyn}} (\mathbf{x}_t - \mathbf{x}_{t-1}, \mathbf{d}_{t-1}, \mathbf{c}_{t-1}) , \] where \( \mathbf{x}_t - \mathbf{x}_{t-1} \) denotes element-wise subtraction between frames at time t and \( t-1 \), \( \mathbf{d}_t \in \mathbb{R}^{w' \times h' \times c'} \) is the feature tensor encoding the motion across the observed difference image inputs, and \( \mathbf{c}_t \in \mathbb{R}^{w' \times h' \times c'} \) is a memory cell that retains information on the dynamics observed over time. \( f^{\text{dyn}} \) is implemented in a fully-convolutional way to allow our model to identify local dynamics of frames rather than complicated global motion. For this, we use an encoder CNN with a Convolutional LSTM (Shi et al., 2015) layer on top.

4.2 CONTENT ENCODER The content encoder extracts important spatial features from a single frame, such as the spatial layout of the scene and salient objects in a video.
Specifically, it takes the last observed frame \( \mathbf{x}_t \) as an input, and produces content features by \[ \mathbf{s}_t = f^{\text{cont}}(\mathbf{x}_t), \] where \( \mathbf{s}_t \in \mathbb{R}^{w' \times h' \times c'} \) is the feature encoding the spatial content of the last observed frame, and \( f^{\text{cont}} \) is implemented by a Convolutional Neural Network (CNN) that specializes in extracting features from a single frame. It is important to note that our model employs an asymmetric architecture for the motion and content encoders. The content encoder takes the last observed frame, which carries the most critical clues for reconstructing the spatial layout of the near future, but has no information about dynamics. On the other hand, the motion encoder takes a history of previous image differences, which are less informative about the future spatial layout compared to the last observed frame, yet contain important spatio-temporal variations occurring over time. This asymmetric architecture encourages the encoders to exploit each of the two pieces of critical information to predict the future content and motion individually, and enables the model to learn motion and content decomposition naturally without any supervision.

4.3 MULTI-SCALE MOTION-CONTENT RESIDUAL To prevent information loss after the pooling operations in our motion and content encoders, we use residual connections (He et al., 2015). The residual connections in our network communicate motion-content features at every scale into the decoder layers after unpooling operations. The residual feature at layer \( l \) is computed by \[ \mathbf{r}^l_t = f^{\text{res}}\left([\mathbf{s}^l_t, \mathbf{d}^l_t]\right)^l, \] where \( \mathbf{r}^l_t \) is the residual output at layer \( l \), \( [\mathbf{s}^l_t, \mathbf{d}^l_t] \) is the concatenation of the content and motion features along the depth dimension at layer \( l \) of their respective encoders, and \( f^{\text{res}}(\cdot)^l \) is the residual function at layer \( l \), implemented as consecutive convolution layers and rectification with a final linear layer.

4.4 COMBINATION LAYERS AND DECODER The outputs from the two encoder pathways, \( \mathbf{d}_t \) and \( \mathbf{s}_t \), encode a high-level representation of motion and content, respectively. Given these representations, the objective of the decoder is to generate a pixel-level prediction of the next frame \( \hat{\mathbf{x}}_{t+1} \in \mathbb{R}^{w \times h \times c} \). To this end, it first combines the motion and content back into a unified representation by \[ f_t = g^{\text{comb}} \left( [\mathbf{d}_t, \mathbf{s}_t] \right), \] where \([\mathbf{d}_t, \mathbf{s}_t] \in \mathbb{R}^{w' \times h' \times 2c'}\) denotes the concatenation of the higher-level motion and content features in the depth dimension, and \(f_t \in \mathbb{R}^{w' \times h' \times c'}\) denotes the combined high-level representation of motion and content. \(g^{\text{comb}}\) is implemented by a CNN with bottleneck layers (Hinton and Salakhutdinov, 2006); it first projects both \(\mathbf{d}_t\) and \(\mathbf{s}_t\) into a lower-dimensional embedding space, and then maps the embedding back to the original size to construct the combined feature \(f_t\). Intuitively, \(f_t\) can be viewed as the content feature of the next time step, \(\mathbf{s}_{t+1}\), which is generated by transforming \(\mathbf{s}_t\) using the observed dynamics encoded in \(\mathbf{d}_t\). Then our decoder places \(f_t\) back into the original pixel space by \[ \hat{\mathbf{x}}_{t+1} = g^{\text{dec}} \left( f_t, \mathbf{r}_t \right), \] where \(\mathbf{r}_t\) is a list containing the residual connections from every layer of the motion and content encoders before pooling, sent to every layer of the decoder after unpooling. We employ the deconvolution network (Zeiler et al., 2011) for our decoder network \(g^{\text{dec}}\), which is composed of multiple successive operations of deconvolution, rectification and unpooling, with the addition of the motion-content residual connections after each unpooling operation. The output layer is passed through a tanh(·) activation function. Unpooling with fixed switches is used to upsample the intermediate activation maps.
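As a rough, self-contained illustration of the two-pathway step just described (Sections 4.1–4.4), the following PyTorch sketch wires together a convolutional LSTM motion encoder, a content encoder, the bottleneck combination, and a decoder. It deliberately omits the pooling/unpooling stages and the multi-scale residual connections, and all layer sizes are our own placeholders rather than the paper's VGG16-based configuration:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell in the spirit of Shi et al. (2015)."""
    def __init__(self, in_ch, hid_ch, k=5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

class MCnetStep(nn.Module):
    """One prediction step: motion pathway over the difference image,
    content pathway over the last frame, bottleneck combination, decoding."""
    def __init__(self, ch=3, feat=64):
        super().__init__()
        self.f_cont = nn.Sequential(nn.Conv2d(ch, feat, 3, padding=1), nn.ReLU())
        self.f_dyn_conv = nn.Sequential(nn.Conv2d(ch, feat, 5, padding=2), nn.ReLU())
        self.f_dyn_lstm = ConvLSTMCell(feat, feat)
        self.g_comb = nn.Sequential(            # bottleneck combination layers
            nn.Conv2d(2 * feat, feat // 2, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat // 2, feat, 3, padding=1), nn.ReLU())
        self.g_dec = nn.Sequential(nn.Conv2d(feat, ch, 3, padding=1), nn.Tanh())

    def forward(self, x_prev, x_t, state):
        d_t, state = self.f_dyn_lstm(self.f_dyn_conv(x_t - x_prev), state)  # motion
        s_t = self.f_cont(x_t)                                              # content
        f_t = self.g_comb(torch.cat([d_t, s_t], dim=1))                     # combine
        return self.g_dec(f_t), state           # prediction of the next frame
```

Multi-step prediction then amounts to feeding each prediction back in as the new last frame (with the LSTM state initialized to zero tensors of shape (batch, feat, h, w)), as formalized in Section 5 below.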
5 INFERENCE AND TRAINING Section 4 describes the procedure for single-frame prediction, while this section presents the extension of our algorithm to the prediction of multiple time steps.

5.1 MULTI-STEP PREDICTION Given an input video, our network observes the first \(n\) frames as image differences between frames \(\mathbf{x}_t\) and \(\mathbf{x}_{t-1}\), starting from \(t=2\) up to \(t=n\), to encode the initial temporal dynamics through the motion encoder. The last frame \(\mathbf{x}_n\) is given to the content encoder and transformed into the first prediction \(\hat{\mathbf{x}}_{n+1}\) by the identified motion features. For each subsequent time step \(t \in [n+1, n+T]\), where \(T\) is the desired number of prediction steps, our network takes the difference image between the latest prediction \(\hat{\mathbf{x}}_t\) and the previous frame, together with \(\hat{\mathbf{x}}_t\) itself, to predict the next frame \(\hat{\mathbf{x}}_{t+1}\), and so forth.

5.2 TRAINING OBJECTIVE To train our network, we use an objective function composed of different sub-losses, similar to Mathieu et al. (2015). Given the training data \(D = \{ \mathbf{x}_{1:T}^{(i)} \}_{i=1}^N\), our model is trained to minimize the prediction loss \[ \mathcal{L} = \alpha \mathcal{L}_{\text{img}} + \beta \mathcal{L}_{\text{GAN}}, \] where \(\alpha\) and \(\beta\) are hyper-parameters that control the effect of each sub-loss during optimization. \(\mathcal{L}_{\text{img}}\) is the loss in image space from Mathieu et al. (2015), defined by \[ \mathcal{L}_{\text{img}} = \sum_{k=1}^{T} \left[ \mathcal{L}_p \left( \mathbf{x}_{t+k}, \hat{\mathbf{x}}_{t+k} \right) + \mathcal{L}_{gdl} \left( \mathbf{x}_{t+k}, \hat{\mathbf{x}}_{t+k} \right) \right], \] where \[ \mathcal{L}_p \left( \mathbf{y}, \mathbf{z} \right) = \| \mathbf{y} - \mathbf{z} \|_p^p, \] \[ \mathcal{L}_{gdl} \left( \mathbf{y}, \mathbf{z} \right) = \sum_{i,j}^{h,w} \left( | (\mathbf{y}_{i,j} - \mathbf{y}_{i-1,j}) - (\mathbf{z}_{i,j} - \mathbf{z}_{i-1,j}) |^\lambda \right. \\ \left. + | (\mathbf{y}_{i,j-1} - \mathbf{y}_{i,j}) - (\mathbf{z}_{i,j-1} - \mathbf{z}_{i,j}) |^\lambda \right). \] Here, \(\mathbf{x}_{t+k}\) and \(\hat{\mathbf{x}}_{t+k}\) are the target and predicted frames, respectively, and \(p\) and \(\lambda\) are hyper-parameters for \(\mathcal{L}_p\) and \(\mathcal{L}_{gdl}\), respectively. Intuitively, \(\mathcal{L}_p\) guides our network to match the average pixel values directly, while \( \mathcal{L}_{gdl} \) guides our network to match the gradients of such pixel values. Overall, \( \mathcal{L}_{img} \) guides our network to learn parameters towards generating the correct average sequence given the input.
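A per-frame version of \( \mathcal{L}_p + \mathcal{L}_{gdl} \) is straightforward to write down. The sketch below is our own illustration (with the paper's settings \( p = 2 \) and \( \lambda = 1 \) as defaults); the caller sums it over the \( T \) predicted frames:

```python
import torch

def image_loss(y, z, p=2, lam=1):
    """Per-frame L_p + gradient difference loss (GDL) for a target frame y
    and a prediction z, each of shape (batch, channels, height, width)."""
    l_p = (y - z).abs().pow(p).sum()
    # GDL: match the differences of neighboring pixels along both image axes.
    gd_h = (y[..., 1:, :] - y[..., :-1, :]) - (z[..., 1:, :] - z[..., :-1, :])
    gd_w = (y[..., :, 1:] - y[..., :, :-1]) - (z[..., :, 1:] - z[..., :, :-1])
    l_gdl = gd_h.abs().pow(lam).sum() + gd_w.abs().pow(lam).sum()
    return l_p + l_gdl
```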
Training to generate average sequences, however, results in somewhat blurry generations, which is why we use an additional sub-loss. \( \mathcal{L}_{GAN} \) is the generator loss in adversarial training, which allows our model to predict realistic-looking frames; it is defined by \[ \mathcal{L}_{GAN} = -\log D \left( [\mathbf{x}_{1:t}, G(\mathbf{x}_{1:t})] \right), \] where \( \mathbf{x}_{1:t} \) is the concatenation of the input images, \( \mathbf{x}_{t+1:t+T} \) is the concatenation of the ground-truth future images, \( G(\mathbf{x}_{1:t}) = \hat{\mathbf{x}}_{t+1:t+T} \) is the concatenation of all predicted images along the depth dimension, and \( D(.) \) is the discriminator in adversarial training. The discriminative loss in adversarial training is defined by \[ \mathcal{L}_{disc} = -\log D \left( [\mathbf{x}_{1:t}, \mathbf{x}_{t+1:t+T}] \right) - \log (1 - D \left( [\mathbf{x}_{1:t}, G(\mathbf{x}_{1:t})] \right)). \] \( \mathcal{L}_{GAN} \), in addition to \( \mathcal{L}_{img} \), allows our network not only to generate the target sequence, but also to simultaneously enforce realism in the images through visual sharpness that fools the human eye. Note that our model uses its predictions as input for the next time step during training, which enables the gradients to flow through time and makes the network robust to error propagation during prediction. For a more detailed description of adversarial training, please refer to Appendix D.

6 EXPERIMENTS In this section, we present experiments using our network for video generation. We first evaluate our network, MCnet, on the KTH (Schuldt et al., 2004) and Weizmann action (Gorelick et al., 2007) datasets, and compare against a baseline convolutional LSTM (ConvLSTM) (Shi et al., 2015). We then proceed to evaluate on the more challenging UCF-101 (Soomro et al., 2012) dataset, in which we compare against the same ConvLSTM baseline and also the current state-of-the-art method by Mathieu et al. (2015). For all our experiments, we use \( \alpha = 1 \), \( \lambda = 1 \), and \( p = 2 \) in the loss functions. In addition to the results in this section, we also provide more qualitative comparisons in the supplementary material and in the videos on the project website: https://sites.google.com/a/umich.edu/rubenevillegas/iclr2017

Architectures. The content encoder of MCnet is built with the same architecture as VGG16 (Simonyan and Zisserman, 2015) up to the third pooling layer. The motion encoder of MCnet is also similar to VGG16 up to the third pooling layer, except that we replace its consecutive 3x3 convolutions with single 5x5, 5x5, and 7x7 convolutions in each layer. The combination layers are composed of 3 consecutive 3x3 convolutions (256, 128, and 256 channels in each layer). The multi-scale residuals are composed of 2 consecutive 3x3 convolutions. The decoder is the mirrored architecture of the content encoder, where we perform unpooling followed by deconvolution. For the baseline ConvLSTM, we use the same architecture as the motion encoder, residual connections, and decoder, except that we increase the number of channels in the encoder in order to have an overall number of parameters comparable to MCnet.

6.1 KTH AND WEIZMANN ACTION DATASETS Experimental settings. The KTH human action dataset (Schuldt et al., 2004) contains 6 categories of periodic motions on a simple background: running, jogging, walking, boxing, hand-clapping and hand-waving.
We use persons 1-16 for training and 17-25 for testing, and resize frames to 128x128 pixels. We train our network and baseline by observing 10 frames and predicting 10 frames into the future on the KTH dataset. We set \( \beta = 0.02 \) for training. We also select the walking, running, one-hand waving, and two-hands waving sequences from the Weizmann action dataset (Gorelick et al., 2007) for testing the networks’ generalizability. For all the experiments, we test the networks on predicting 20 time steps into the future. For evaluation, we use the same SSIM and PSNR metrics as in Mathieu et al. (2015). The evaluation on KTH was performed on sub-clips within each video in the test set. We sample sub-clips every 3 frames for running and jogging, and sample sub-clips every 20 frames (skipping the frames we have already predicted) for walking, boxing, hand-clapping, and hand-waving. Sub-clips for running, jogging, and walking were manually trimmed to ensure humans are always present in the frames. The evaluation on Weizmann was performed on all sub-clips in the selected sequences.

Figure 2: Quantitative comparison between MCnet and the ConvLSTM baseline with and without multi-scale residual connections (indicated by "+ RES"). Given 10 input frames, the models predict 20 frames recursively, one by one. Left column: evaluation on the KTH dataset (Schuldt et al., 2004). Right column: evaluation on the Weizmann (Gorelick et al., 2007) dataset.

Results. Figure 2 summarizes the quantitative comparisons among our MCnet, the ConvLSTM baseline, and their residual variations. On the KTH test set, our network outperforms the ConvLSTM baseline by a small margin. However, when we test the residual versions of MCnet and ConvLSTM on the Weizmann dataset (Gorelick et al., 2007), which contains similar motions, we can see that our network generalizes well to the unseen content, showing clear improvements, especially in long-term prediction. One reason for this result is that the test and training partitions of the KTH dataset have simple and similar image contents, so that ConvLSTM can memorize the average background and human appearance to make reasonable predictions. However, when tested on unseen data, ConvLSTM has to internally take care of both scene dynamics and image contents in a mingled representation, which makes generalization difficult. In contrast, the reason our network outperforms the ConvLSTM baseline on unseen data is that our network focuses on identifying general motion features and applying them to a learned content representation. Figure 3 presents qualitative results of multi-step prediction by our network and ConvLSTM. As expected, prediction results by our full architecture preserve human shapes more accurately than the baseline. It is worth noting that our network produces very sharp predictions over long-term time steps; this shows that MCnet is able to capture periodic motion cycles, which significantly reduces the uncertainty of future prediction. More qualitative comparisons are shown in the supplementary material and on the project website.

6.2 UCF-101 DATASET Experimental settings. This section presents results on the challenging real-world videos in the UCF-101 (Soomro et al., 2012) dataset. Collected from YouTube, the dataset contains 101 realistic human actions taken in the wild and exhibits various challenges, such as background clutter, occlusion, and complicated motion.
We employed the same network architecture as for the KTH dataset, but resized frames to 240x320 pixels and trained the network to observe 4 frames and predict a single frame. We set \( \beta = 0.001 \) for training. We also trained our convolutional LSTM baseline in the same way. Following the same protocol as Mathieu et al. (2015) for data pre-processing and evaluation metrics on full images, all networks were trained on the Sports-1M (Karpathy et al., 2014) dataset and tested on UCF-101 unless otherwise stated.[1]

Figure 3: Qualitative comparison between our MCnet model and ConvLSTM. We display predictions starting from the 12th frame, in every 3 timesteps. The first 3 rows correspond to the KTH dataset for the action of jogging, and the last 3 rows correspond to the Weizmann dataset for the action of walking.

Results. Figure 4 shows the quantitative comparisons between our network trained for single-step prediction and Mathieu et al. (2015). We can clearly see the advantage of our network over the baseline. The separation of motion and content in two encoder pathways allows our network to identify key motion and content features, which are then fed into the decoder to yield predictions of higher quality compared to the baseline.[2] In other words, our network only moves what shows motion in the past, and leaves the rest untouched. We also trained a residual version of MCnet on UCF-101, indicated by “MCnet + RES UCF101”, to compare how well our model generalizes when trained and tested on the same or different dataset(s). To our surprise, when tested on UCF-101, the MCnet trained on Sports-1M (MCnet + RES) roughly matches the performance of the MCnet trained on UCF-101 (MCnet + RES UCF101), which suggests that our model learns effective representations that generalize to new datasets. Figure 5 presents qualitative comparisons between frames generated by our network and Mathieu et al. (2015). Since the ConvLSTM and Mathieu et al. (2015) lack explicit motion and content modules, they lose the sense of the dynamics in the video, and the contents therefore become distorted quickly. More qualitative comparisons are shown in the supplementary material and on the project website.

[1] We use the code and model released by Mathieu et al. (2015) at https://github.com/couprieC/VideoPredictionICLR2016 [2] We were not able to get the model fine-tuned on UCF-101 from the authors, so it is not included in Figure 4.

Figure 4: Quantitative comparison between our model, convolutional LSTM (Shi et al., 2015), and Mathieu et al. (2015). Given 4 input frames, the models predict 8 frames recursively, one by one.

7 CONCLUSION We proposed a motion-content network for pixel-level prediction of future frames in natural video sequences. The proposed model employs two separate encoding pathways, and learns to decompose motion and content without explicit constraints or separate training. Experimental results suggest that separate modeling of motion and content improves the quality of pixel-level future prediction, and our model overall achieves state-of-the-art performance in predicting future frames in challenging real-world video datasets.

8 ACKNOWLEDGEMENTS This work was supported in part by ONR N00014-13-1-0762, NSF CAREER IIS-1453651, gifts from the Bosch Research and Technology Center, and a Sloan Research Fellowship. We also thank NVIDIA for donating K40c and TITAN X GPUs.
We thank Ye Liu, Junhyuk Oh, Xinchen Yan, Lajanugen Logeswaran, Yuting Zhang, Sungryull Sohn, Kibok Lee, Rui Zhang, and other collaborators for helpful discussions. R. Villegas was partly supported by the Rackham Merit Fellowship.

Figure 5: Qualitative comparisons among MCnet, ConvLSTM, and Mathieu et al. (2015). We display predicted frames (every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground truth. Clearer motion predictions can be seen on the project website.

REFERENCES
C. Finn, I. J. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In NIPS, 2016.
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. Transactions on Pattern Analysis and Machine Intelligence, 29(12):2247–2253, December 2007.
R. Goroshin, M. Mathieu, and Y. LeCun. Learning to linearize under uncertainty. In NIPS, 2015.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 2006.
M. Hoai and F. Torre. Max-margin early event detectors. IJCV, 2013.
A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
T. Lan, T. Chen, and S. Savarese. A hierarchical representation for future action prediction. In ECCV, 2014.
W. Lotter, G. Kreiman, and D. Cox. Unsupervised learning of visual structure using predictive generative networks. arXiv preprint arXiv:1511.06380, 2015.
M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. arXiv preprint arXiv:1511.05440, 2015.
J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in Atari games. In NIPS, 2015.
V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory. CoRR, abs/1511.06309, 2015.
L. C. Pickup, Z. Pan, D. Wei, Y. Shih, C. Zhang, A. Zisserman, B. Scholkopf, and W. T. Freeman. Seeing the arrow of time. In CVPR, 2014.
S. L. Pintea, J. C. van Gemert, and A. W. M. Smeulders. Dejavu: Motion prediction in static images. In ECCV, 2014.
M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604, 2014.
M. S. Ryoo. Human activity prediction: Early recognition of ongoing activities from streaming videos. In ICCV, 2011.
C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. In ICPR, 2004.
X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-k. Wong, and W.-c. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In NIPS, 2015.
K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
C. Vondrick, H. Pirsiavash, and A. Torralba. Anticipating the future by watching unlabeled video. arXiv preprint arXiv:1504.08023, 2015.
C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In NIPS, 2016.
J. Walker, A. Gupta, and M. Hebert. Patch to the future: Unsupervised visual prediction. In CVPR, 2014.
J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using variational autoencoders. CoRR, abs/1606.07873, 2016.
P. Weinzaepfel, J. Revaud, Z. Harchaoui, and C. Schmid. DeepFlow: Large displacement optical flow with deep matching. In ICCV, 2013.
T. Xue, J. Wu, K. L. Bouman, and W. T. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In NIPS, 2016.
J. Yuen and A. Torralba. A data-driven approach for event prediction. In ECCV, 2010.
M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In ICCV, 2011.

9 APPENDIX ![Qualitative comparisons on the KTH test set. Three columns show predictions for Boxing, Running, and Walking, with t=12, t=15, t=18, t=21, t=24, t=27, t=30. Each column compares MCnet, ConvLSTM, and G.T.](page_246_246_1092_1092.png) Figure 6: Qualitative comparisons on the KTH test set. We display predictions starting from the 12th frame, for every 3 timesteps. Clearer motion predictions can be seen on the project website. Figure 7: Qualitative comparisons on the KTH test set. We display predictions starting from the 12th frame, for every 3 timesteps. Clearer motion predictions can be seen on the project website. Figure 8: Qualitative comparisons on UCF-101. We display predictions (every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground truth. Clearer motion predictions can be seen on the project website.

A QUALITATIVE AND QUANTITATIVE COMPARISON WITH CONSIDERABLE CAMERA MOTION AND ANALYSIS In this section, we show frame prediction examples in which considerable camera motion occurs. We analyze the effects of camera motion on our best network and the corresponding baselines. First, we analyze qualitative examples on UCF101 (more complicated camera motion) and then on KTH (zoom-in and zoom-out camera effects). UCF101 Results. As seen in Figures 9 and 10, our model handles foreground and camera motion for a few steps. We hypothesize that for the first few steps, the motion signals from the images are clear. However, as frames are predicted, the motion signals start to deteriorate due to prediction errors. When a considerable amount of camera motion is present in an image sequence, the motion signals are very dense. As predictions evolve into the future, our motion encoder has to handle large motion deterioration due to prediction errors, which causes the motion signals to become confused and quickly lost. ![Qualitative comparisons on UCF-101. Four columns: G.T., MCnet, ConvLSTM, Mathieu et al. (2015). Each row shows frames at t=5, t=7, t=9, t=11. Green arrows indicate optical flow vectors.](page_384_668_1047_482.png) Figure 9: Qualitative comparisons on UCF-101. We display predictions (every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground truth.
Clearer motion predictions can be seen on the project website. Figure 10: Qualitative comparisons on UCF-101. We display predictions (every other frame) starting from the 5th frame. The green arrows denote the top-30 closest optical flow vectors within image patches between MCnet and ground truth. Clearer motion predictions can be seen on the project website.

KTH Results. We were unable to find videos with background motion in the KTH dataset, but we found videos where the camera is zooming in or out for the actions of boxing, handclapping, and handwaving. In Figure 11, we display qualitative results for such videos. Our model is able to predict the camera's zoom change while continuing the action motion. Compared to the performance observed on UCF101, the background does not change much; thus, the motion signals are well localized in the foreground motion (the human), and do not get confused with the background and lost as quickly. ![Qualitative comparisons on the KTH test set. Top row: Boxing. Bottom row: Handclapping.](page_324_670_938_563.png) Figure 11: Qualitative comparisons on the KTH test set. We display predictions starting from the 12th frame, in every 3 timesteps. Clearer motion predictions can be seen on the project website.

B EXTENDED QUANTITATIVE EVALUATION In this section, we show an additional quantitative comparison with a baseline that copies the last observed frame through time, for the KTH and UCF101 datasets. Copying the last observed frame through time ensures perfect background prediction in videos where most of the motion comes from the foreground (i.e., a person performing an action). However, if this foreground makes up only a small part of the video, the copying baseline obtains a high prediction quality score regardless of the naive copying. In Figure 12 below, we show the quantitative comparison on both datasets. Copying the last observed frame through time does a reasonable job on both datasets; however, the impact is larger on UCF101. Videos in the KTH dataset have simple backgrounds with minimal camera motion, which allows our network to easily predict both foreground and background motion, resulting in better image quality scores. Videos in UCF101, however, contain more complicated and diverse backgrounds, which in combination with camera motion present a much greater challenge to video prediction networks. From the qualitative results in Section A and Figures 5, 8, 9 and 10, we can see that our network performs better on videos that contain isolated areas of motion than on videos with dense motion. A simple copy/paste of the last observed frame ensures very high prediction scores in videos with very small motion. The considerable score boost from videos with small motion causes the simple copy/paste baseline to outperform MCnet in overall performance on UCF101. ![Extended quantitative comparison including a baseline based on copying the last observed frame through time.](page_346_728_1092_482.png) Figure 12: Extended quantitative comparison including a baseline based on copying the last observed frame through time.

C UCF101 MOTION DISAMBIGUATION EXPERIMENTS Due to the observed bias from videos with small motion, we perform experiments measuring image quality scores only in areas of motion. These experiments are similar to the ones performed in Mathieu et al. (2015). We compute DeepFlow optical flow (Weinzaepfel et al., 2013) between the previous and the current ground-truth image of interest, compute the magnitude, and normalize it to [0, 1].
The computed optical flow magnitude is used to mask the pixels where motion was observed. We set to zero the pixels where the optical flow magnitude is less than 0.2, and leave all other pixels untouched, in both the ground-truth and predicted images. Additionally, we separate the test videos by the average \( \ell_2 \)-norm of the time difference between target frames. We split the test videos into deciles based on the computed average \( \ell_2 \)-norms and compute image quality on each decile. Intuitively, the 1st decile contains the videos with the least overall motion (i.e., frames that show the smallest change over time), and the 10th decile contains the videos with the most overall motion (i.e., frames that show the largest change over time). As shown in Figure 13, when we only evaluate on pixels where rough motion is observed, MCnet achieves higher PSNR and SSIM, and clearly outperforms all the baselines in terms of SSIM. The SSIM results show that our network is able to predict structure (i.e., textures, edges, etc.) similar to the ground-truth images within the areas of motion. The PSNR results, however, show that our method outperforms the simple copy/paste baseline for the first few steps, but then performs slightly worse. The discrepancy between the PSNR and SSIM scores could be due to the fact that some of the predicted images do not match the exact pixel values of the ground truth even when the structures are similar. SSIM scores are known to take into consideration image features that go beyond directly matching pixel values, reflecting more accurately how humans perceive image quality. ![Two line plots comparing Peak Signal to Noise Ratio and Structural Similarity across time steps for different methods on UCF101](page_184_670_1207_246.png) Figure 13: Extended quantitative comparison on UCF101 including a baseline based on copying the last observed frame through time, using a motion-based pixel mask.

Figures 14 and 15 show the evaluation obtained by separating the test videos into deciles based on the average \( \ell_2 \)-norm of the time difference between target frames. This evaluation confirms that the copy-last-frame baseline scores highest on the videos with the smallest motion. The first few deciles (videos with small motion) show that our network is not just copying the last observed frame through time; otherwise it would perform similarly to the copy-last-frame baseline. The last deciles (videos with large motion) show our network outperforming all the baselines, including the copy-last-frame baseline, effectively confirming that our network does predict motion similar to the motion observed in the video. Figure 14: Quantitative comparison on UCF101 using a motion-based pixel mask, separating the dataset by the average \( \ell_2 \)-norm of the time difference between target frames. Figure 15: Quantitative comparison on UCF101 using a motion-based pixel mask, separating the dataset by the average \( \ell_2 \)-norm of the time difference between target frames.

D ADVERSARIAL TRAINING Mathieu et al. (2015) proposed adversarial training for frame prediction. Inspired by Goodfellow et al. (2014), they proposed a training procedure that involves a generative model \( G \) and a discriminative model \( D \). The two models compete in a two-player minimax game. The discriminator \( D \) is optimized to correctly classify its inputs as either coming from the training data (real frame sequences) or from the generator \( G \) (synthetic frame sequences).
The generator \( G \) is optimized to generate frames that fool the discriminator into believing that they come from the training data. At training time, \( D \) takes the concatenation of the input frames that go into \( G \) and the images produced by \( G \). The adversarial training objective is defined as follows: \[ \min_G \max_D \log D ([x_{1:t}, x_{t+1:t+T}]) + \log (1 - D ([x_{1:t}, G(x_{1:t})])) , \] where \( [.,.] \) denotes concatenation in the depth dimension, \( x_{1:t} \) denotes the input frames to \( G \), \( x_{t+1:t+T} \) are the target frames, and \( G(x_{1:t}) = \hat{x}_{t+1:t+T} \) are the frames predicted by \( G \). In practice, we split the minimax objective into two separate, but equivalent, objectives: \( \mathcal{L}_{GAN} \) and \( \mathcal{L}_{disc} \). During optimization, we minimize the adversarial objective alternating between \( \mathcal{L}_{GAN} \) and \( \mathcal{L}_{disc} \). \( \mathcal{L}_{GAN} \) is defined by \[ \mathcal{L}_{GAN} = - \log D ([x_{1:t}, G(x_{1:t})]) , \] where we optimize the parameters of \( G \) to minimize \( \mathcal{L}_{GAN} \) while the parameters of \( D \) stay untouched. As a result, \( G \) is optimized to generate images that make \( D \) believe that they come from the training data. Thus, the generated images look sharper and more realistic. \( \mathcal{L}_{disc} \) is defined by \[ \mathcal{L}_{disc} = - \log D ([x_{1:t}, x_{t+1:t+T}]) - \log (1 - D ([x_{1:t}, G(x_{1:t})])) , \] where we optimize the parameters of \( D \) to minimize \( \mathcal{L}_{disc} \) while the parameters of \( G \) stay untouched. \( D \) tells us whether its input came from the training data or from the generator \( G \). Alternating between the two objectives causes \( G \) to generate very realistic images, and leaves \( D \) unable to distinguish between generated frames and frames from the training data.
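Putting this alternating scheme into code form, a single update could look like the following sketch. Here G, D, the optimizers, and the image_loss helper from the earlier sketch are assumed to exist, with D ending in a sigmoid so its output lies in (0, 1); the weights follow Section 5.2 (the default beta is the KTH setting):

```python
import torch

def adversarial_step(G, D, opt_G, opt_D, x_in, x_target, alpha=1.0, beta=0.02):
    """One alternating update of the minimax objective above (a sketch).
    Frame sequences are concatenated along the channel (depth) dimension."""
    # Discriminator update: minimize L_disc with G held fixed (detach).
    x_pred = G(x_in).detach()
    d_real = D(torch.cat([x_in, x_target], dim=1))
    d_fake = D(torch.cat([x_in, x_pred], dim=1))
    l_disc = -(torch.log(d_real) + torch.log(1 - d_fake)).mean()
    opt_D.zero_grad(); l_disc.backward(); opt_D.step()
    # Generator update: minimize alpha * L_img + beta * L_GAN with D fixed.
    x_pred = G(x_in)
    l_gan = -torch.log(D(torch.cat([x_in, x_pred], dim=1))).mean()
    # image_loss is applied elementwise, so the concatenated sequence is
    # scored over all predicted frames at once.
    l_g = alpha * image_loss(x_target, x_pred) + beta * l_gan
    opt_G.zero_grad(); l_g.backward(); opt_G.step()
```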
ABSTRACT We propose a deep neural network for the prediction of future frames in natural video sequences. To effectively handle complex evolution of pixels in videos, we propose to decompose the motion and content, two key components generating dynamics in videos. Our model is built upon the Encoder-Decoder Convolutional Neural Network and Convolutional LSTM for pixel-level prediction, which independently capture the spatial layout of an image and the corresponding temporal dynamics. By independently modeling motion and content, predicting the next frame reduces to converting the extracted content features into the next frame content by the identified motion features, which simplifies the task of prediction. Our model is end-to-end trainable over multiple time steps, and naturally learns to decompose motion and content without separate training. We evaluate the proposed network architecture on human activity videos using KTH, Weizmann action, and UCF-101 datasets. We show state-of-the-art performance in comparison to recent approaches. To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatio-temporal dynamics for pixel-level future prediction in natural videos. 1 INTRODUCTION Understanding videos has been one of the most important tasks in the field of computer vision. Compared to still images, the temporal component of videos provides much richer descriptions of the visual world, such as interaction between objects, human activities, and so on. Amongst the various tasks applicable on videos, the task of anticipating the future has recently received increased attention in the research community. Most prior works in this direction focus on predicting high-level semantics in a video such as action (Vondrick et al., 2015) (Rvo, 2011), Lan et al., 2014), event (Yuen and Torralba, 2010; Hoai and Torre, 2013) and motion (Pinte et al., 2014; Walker et al., 2014; Pickup et al., 2014 [Walker et al., 2016]. Forecasting semantics provides information about what will happen in a video, and is essential to automate decision making. However, the predicted semantics are often specific to a particular task and provide only a partial description of the future. Also, training such models often requires heavily labeled training data which leads to tremendous annotation costs especially with videos. In this work, we aim to address the problem of prediction of future frames in natural video sequences. Pixel-level predictions provide dense and direct description of the visual world, and existing video recognition models can be adopted on top of the predicted frames to infer various semantics of the future. Spatio-temporal correlations in videos provide a self-supervision for frame prediction, which enables purely unsupervised training of a model by observing raw video frames. Unfortunately, estimating frames is an extremely challenging task; not only because of the inherent uncertainty of the future, but also various factors of variation in videos leading to complicated dynamics in raw pixel values. There have been a number of recent attempts on frame prediction (Srivastava et al., 2015 [Mathieu et al., 2015] [Oh et al., 2015] [Goroshin et al., 2015] [Lotter et al., 2015] [Ranzato et al., 2014]), *This work was done while SH and XL were visiting the University of Michigan. 
which use a single encoder that needs to reason about all the different variations occurring in videos in order to make predictions of the future, or require extra information like foreground-background segmentation masks and static background (Vondrick et al., 2016). We propose a Motion-Content Network (MCnet) for robust future frame prediction. Our intuition is to split the inputs for video prediction into two easily identifiable groups, motion and content, and independently capture each information stream with separate encoder pathways. In this architecture, the motion pathway encodes the local dynamics of spatial regions, while the content pathway encodes the spatial layout of the salient parts of an image. The prediction of the future frame is then achieved by transforming the content of the last observed frame given the identified dynamics up to the last observation. Somewhat surprisingly, we show that such a network is end-to-end trainable without individual path way supervision. Specifically, we show that an asymmetric architecture for the two pathways enables such decompositions without explicit supervision. The contributions of this paper are summarized below: • We propose MCnet for the task of frame prediction, which separates the information streams (motion and content) into different encoder pathways. • The proposed network is end-to-end trainable and naturally learns to decompose motion and content without separate training, and reduces the task of frame prediction to transforming the last observed frame into the next by the observed motion. • We evaluate the proposed model on challenging real-world video datasets, and show that it outperforms previous approaches on frame prediction. The rest of the paper is organized as follows. We briefly review related work in Section 2, and introduce an overview of the proposed algorithm in Section 3. The detailed configuration of the proposed network is described in Section 4. Section 5 describes training and inference procedure. Section 6 illustrates implementation details and experimental results on challenging benchmarks. 2 RELATED WORK The problem of visual future prediction has received growing interests in the computer vision community. It has led to various tasks depending on the objective of future prediction, such as human activity (Vondrick et al., 2015; Kyoo, 2011; Lan et al., 2014), event (Yuen and Torralba, 2010; Hoai and Torre, 2013) and geometric path (Walker et al., 2014). Although previous work achieved reasonable success in specific tasks, they are often limited to estimating predefined semantics, and require fully-labeled training data. To alleviate this issue, approaches predicting representation of the future beyond semantic labels have been proposed. Walker et al. (2014) proposed a data-driven approach to predict the motion of a moving object, and coarse hallucination of the predicted motion. Vondrick et al. (2015) proposed a deep regression network to predict feature representations of the future frames. These approaches are supervised and provide coarse predictions of how the future will look like. Our work also focuses on unsupervised learning for prediction of the future, but to a more direct visual prediction task: frame prediction. Compared to predicting semantics, pixel-level prediction has been less investigated due to the difficulties in modeling evolution of raw pixels over time. 
Fortunately, recent advances in deep learning provide a powerful tool for sequence modeling, and enable the creation of novel architectures for modeling complex sequential data. Ranzato et al. (2014) applied a recurrent neural network developed for language modeling to frame prediction by posing the task as classification of each image region into one of a set of quantized patch dictionaries. Srivastava et al. (2015) applied a sequence-to-sequence model to video prediction, and showed that Long Short-Term Memory (LSTM) is able to capture pixel dynamics. Oh et al. (2015) proposed an action-conditional encoder-decoder network to predict future frames in Atari games. In addition to these different choices of architecture, other works have addressed the importance of selecting the right objective function: Lotter et al. (2015) used an adversarial loss with combined CNN and LSTM architectures, and Mathieu et al. (2015) employed a similar adversarial loss with additional regularization using a multi-scale encoder-decoder network. Finn et al. (2016) constructed a network that predicts transformations on the input pixels for next-frame prediction. Patraucean et al. (2015) proposed a network that is able to predict the next frame in a video by explicitly predicting optical flow features. Vondrick et al. (2016) proposed a generative adversarial network for video which, by generating a background-foreground mask, is able to generate realistic-looking video sequences. However, none of the previously mentioned approaches exploit spatial and temporal information separately in an unsupervised fashion.

In terms of the way data is observed, the closest work to ours is Xue et al. (2016). The differences are: (1) our model is deterministic while theirs is probabilistic; (2) our motion encoder is based on a convolutional LSTM (Shi et al., 2015), which is a more natural module to model long-term dynamics; (3) our content encoder observes a single-scale input while theirs observes many scales; and (4) we directly generate image pixel values, which is a more complicated task. We aim to exploit the existing spatio-temporal correlations in videos by decomposing motion and content in our network architecture. To the best of our knowledge, the idea of separating motion and content has not been investigated in the task of unsupervised deterministic frame prediction. The proposed architecture shares similarities with the two-stream CNN (Simonyan and Zisserman, 2014), which is designed for action recognition to jointly exploit the information from frames and their temporal dynamics. However, in contrast to their network, we aim to learn features for temporal dynamics directly from the raw pixels, and we use the identified motion features in combination with spatial features to make pixel-level predictions of the future.

3 ALGORITHM OVERVIEW

In this section, we formally define the task of frame prediction and the role of each component in the proposed architecture. Let \( \mathbf{x}_t \in \mathbb{R}^{w \times h \times c} \) denote the t-th frame in an input video \( \mathbf{x} \), where \( w, h, \) and \( c \) denote the width, height, and number of channels, respectively. The objective of frame prediction is to generate the future frame \( \hat{\mathbf{x}}_{t+1} \) given the input frames \( \mathbf{x}_{1:t} \).
At the t-th time step, our network observes a history of previous consecutive frames up to frame t, and generates the prediction of the next frame \( \hat{\mathbf{x}}_{t+1} \) as follows:

• **Motion Encoder** recurrently takes an image difference input between frame \( \mathbf{x}_t \) and \( \mathbf{x}_{t-1} \), starting from \( t = 2 \), and produces the hidden representation \( \mathbf{d}_t \) encoding the temporal dynamics of the scene components (Section 4.1).

• **Content Encoder** takes the last observed frame \( \mathbf{x}_t \) as an input, and outputs the hidden representation \( \mathbf{s}_t \) that encodes the spatial layout of the scene (Section 4.2).

• **Multi-Scale Motion-Content Residual** takes the features computed by both the motion and content encoders at every scale right before pooling, and computes residuals \( \mathbf{r}_t \) (He et al., 2015) to compensate for the information loss caused by pooling in the encoding phase (Section 4.3).

• **Combination Layers and Decoder** take the outputs from both encoder pathways and the residual connections, \( \mathbf{d}_t, \mathbf{s}_t, \) and \( \mathbf{r}_t \), and combine them to produce a pixel-level prediction of the next frame \( \hat{\mathbf{x}}_{t+1} \) (Section 4.4).

The overall architecture of the proposed algorithm is described in Figure 1. The prediction of multiple frames, \( \hat{\mathbf{x}}_{t+1:t+T} \), can be achieved by recursively performing the above procedure over T time steps (Section 5). Each component of the proposed architecture is described in the following section.

Figure 1: Overall architecture of the proposed network. (a) illustrates MCnet without the Motion-Content Residual skip connections, and (b) illustrates MCnet with such connections. Our network observes a history of image differences through the motion encoder and the last observed image through the content encoder. Subsequently, our network computes motion-content features and communicates them to the decoder for the prediction of the next frame.

4 ARCHITECTURE

This section describes the detailed configuration of the proposed architecture, including the two encoder pathways, multi-scale residual connections, combination layers, and decoder.

4.1 MOTION ENCODER

The motion encoder captures the temporal dynamics of the scene's components by recurrently observing subsequent difference images computed from \( \mathbf{x}_{t-1} \) and \( \mathbf{x}_t \), and outputs motion features by

\[ [\mathbf{d}_t, \mathbf{c}_t] = f^{\text{dyn}} (\mathbf{x}_t - \mathbf{x}_{t-1}, \mathbf{d}_{t-1}, \mathbf{c}_{t-1}), \]

where \( \mathbf{x}_t - \mathbf{x}_{t-1} \) denotes element-wise subtraction between the frames at times t and \( t-1 \), \( \mathbf{d}_t \in \mathbb{R}^{w' \times h' \times c'} \) is the feature tensor encoding the motion across the observed difference-image inputs, and \( \mathbf{c}_t \in \mathbb{R}^{w' \times h' \times c'} \) is a memory cell that retains information about the dynamics observed through time. \( f^{\text{dyn}} \) is implemented in a fully convolutional way to allow our model to identify local dynamics of frames rather than complicated global motion. For this, we use an encoder CNN with a Convolutional LSTM (Shi et al., 2015) layer on top.
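To make the motion pathway concrete, the following is a minimal sketch of \( f^{\text{dyn}} \), assuming PyTorch; the layer sizes and the simplified ConvLSTM cell below are illustrative assumptions rather than the exact VGG-style configuration described in Section 6.

```python
# A minimal sketch of the motion encoder f^dyn, assuming PyTorch.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell (Shi et al., 2015): gates are computed with
    convolutions so the recurrent state keeps its spatial layout."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        d, c = state  # hidden features d_{t-1} and memory cell c_{t-1}
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, d], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)  # update memory
        d = torch.sigmoid(o) * torch.tanh(c)  # new motion features d_t
        return d, c

class MotionEncoder(nn.Module):
    """Encoder CNN over difference images with a ConvLSTM layer on top."""
    def __init__(self, ch=3, feat=64):
        super().__init__()
        self.feat = feat
        self.cnn = nn.Sequential(  # illustrative stand-in for the VGG-style encoder
            nn.Conv2d(ch, feat, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(feat, feat, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = ConvLSTMCell(feat, feat)

    def forward(self, frames):
        # frames: (batch, time, ch, h, w); the input at step t is x_t - x_{t-1}.
        b, t, _, h, w = frames.shape
        d = torch.zeros(b, self.feat, h // 4, w // 4)
        c = torch.zeros_like(d)
        for step in range(1, t):
            diff = frames[:, step] - frames[:, step - 1]  # element-wise subtraction
            d, c = self.rnn(self.cnn(diff), (d, c))
        return d  # motion features for the last observed step

# Usage: d_t = MotionEncoder()(torch.randn(2, 5, 3, 64, 64))
```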
4.2 CONTENT ENCODER

The content encoder extracts important spatial features from a single frame, such as the spatial layout of the scene and salient objects in a video. Specifically, it takes the last observed frame \( \mathbf{x}_t \) as an input, and produces content features by

\[ \mathbf{s}_t = f^{\text{cont}}(\mathbf{x}_t), \]

where \( \mathbf{s}_t \in \mathbb{R}^{w' \times h' \times c'} \) is the feature encoding the spatial content of the last observed frame, and \( f^{\text{cont}} \) is implemented by a Convolutional Neural Network (CNN) that specializes in extracting features from a single frame.

It is important to note that our model employs an asymmetric architecture for the motion and content encoders. The content encoder takes the last observed frame, which holds the most critical clues for reconstructing the spatial layout of the near future, but carries no information about dynamics. On the other hand, the motion encoder takes a history of previous image differences, which are less informative about the future spatial layout than the last observed frame, yet contain the important spatio-temporal variations occurring over time. This asymmetric architecture encourages each encoder to exploit one of the two pieces of critical information to predict the future content and motion individually, and enables the model to learn the motion and content decomposition naturally without any supervision.

4.3 MULTI-SCALE MOTION-CONTENT RESIDUAL

To prevent information loss after the pooling operations in our motion and content encoders, we use residual connections (He et al., 2015). The residual connections in our network communicate motion-content features at every scale into the decoder layers after the unpooling operations. The residual feature at layer \( l \) is computed by

\[ \mathbf{r}_t^l = f_l^{\text{res}}\left([\mathbf{s}_t^l, \mathbf{d}_t^l]\right), \]

where \( \mathbf{r}_t^l \) is the residual output at layer \( l \), \( [\mathbf{s}_t^l, \mathbf{d}_t^l] \) is the concatenation of the content and motion features along the depth dimension at layer \( l \) of their respective encoders, and \( f_l^{\text{res}}(\cdot) \) is the residual function at layer \( l \), implemented as consecutive convolution layers with rectification and a final linear layer.

4.4 COMBINATION LAYERS AND DECODER

The outputs from the two encoder pathways, \( \mathbf{d}_t \) and \( \mathbf{s}_t \), encode a high-level representation of motion and content, respectively. Given these representations, the objective of the decoder is to generate a pixel-level prediction of the next frame \( \hat{\mathbf{x}}_{t+1} \in \mathbb{R}^{w \times h \times c} \). To this end, it first combines motion and content into a unified representation by

\[ \mathbf{f}_t = g^{\text{comb}} \left( [\mathbf{d}_t, \mathbf{s}_t] \right), \]

where \([\mathbf{d}_t, \mathbf{s}_t] \in \mathbb{R}^{w' \times h' \times 2c'}\) denotes the concatenation of the higher-level motion and content features in the depth dimension, and \(\mathbf{f}_t \in \mathbb{R}^{w' \times h' \times c'}\) denotes the combined high-level representation of motion and content. \(g^{\text{comb}}\) is implemented by a CNN with bottleneck layers (Hinton and Salakhutdinov, 2006); it first projects both \(\mathbf{d}_t\) and \(\mathbf{s}_t\) into a lower-dimensional embedding space, and then maps the embedding back to the original size to construct the combined feature \(\mathbf{f}_t\). Intuitively, \(\mathbf{f}_t\) can be viewed as the content feature of the next time step, \(\mathbf{s}_{t+1}\), which is generated by transforming \(\mathbf{s}_t\) using the observed dynamics encoded in \(\mathbf{d}_t\).
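As an illustration of \( g^{\text{comb}} \), a minimal bottleneck sketch is given below, assuming PyTorch; matching \( c' \) to the 256-channel, 3x3-convolution setting reported later in Section 6 is our assumption here.

```python
# A minimal sketch of the combination layers g^comb, assuming PyTorch.
import torch
import torch.nn as nn

class Combination(nn.Module):
    """Fuses motion features d_t and content features s_t into f_t via a
    bottleneck: project to fewer channels, then expand back."""
    def __init__(self, c_feat=256, c_bottleneck=128):
        super().__init__()
        self.net = nn.Sequential(
            # input is [d_t, s_t] concatenated along depth: 2 * c_feat channels
            nn.Conv2d(2 * c_feat, c_feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c_feat, c_bottleneck, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c_bottleneck, c_feat, 3, padding=1), nn.ReLU(),
        )

    def forward(self, d_t, s_t):
        return self.net(torch.cat([d_t, s_t], dim=1))  # f_t: (b, c_feat, h', w')

# Usage: f_t = Combination()(torch.randn(1, 256, 16, 16), torch.randn(1, 256, 16, 16))
```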
Then our decoder places \(\mathbf{f}_t\) back into the original pixel space by

\[ \hat{\mathbf{x}}_{t+1} = g^{\text{dec}} \left( \mathbf{f}_t, \mathbf{r}_t \right), \]

where \(\mathbf{r}_t\) is a list containing the residual connections from every layer of the motion and content encoders before pooling, sent to every layer of the decoder after unpooling. We employ the deconvolution network (Zeiler et al., 2011) for our decoder network \(g^{\text{dec}}\), which is composed of multiple successive operations of deconvolution, rectification and unpooling, with the addition of the motion-content residual connections after each unpooling operation. The output layer is passed through a tanh(.) activation function. Unpooling with fixed switches is used to upsample the intermediate activation maps.

5 INFERENCE AND TRAINING

Section 4 describes the procedure for single-frame prediction; this section presents the extension of our algorithm to the prediction of multiple time steps.

5.1 MULTI-STEP PREDICTION

Given an input video, our network observes the first \(n\) frames as image differences between frames \(\mathbf{x}_t\) and \(\mathbf{x}_{t-1}\), for \(t=2\) up to \(t=n\), to encode the initial temporal dynamics through the motion encoder. The last frame \(\mathbf{x}_n\) is given to the content encoder and transformed into the first prediction \(\hat{\mathbf{x}}_{n+1}\) by the identified motion features. For each time step \(t \in \{n+1, \ldots, n+T\}\), where \(T\) is the desired number of prediction steps, our network takes the difference image between the most recent prediction \(\hat{\mathbf{x}}_t\) and the previous frame, together with \(\hat{\mathbf{x}}_t\) itself, to predict the next frame \(\hat{\mathbf{x}}_{t+1}\), and so forth.

5.2 TRAINING OBJECTIVE

To train our network, we use an objective function composed of different sub-losses, similar to Mathieu et al. (2015). Given the training data \(D = \{ \mathbf{x}_{1:T}^{(i)} \}_{i=1}^N\), our model is trained to minimize the prediction loss

\[ \mathcal{L} = \alpha \mathcal{L}_{\text{img}} + \beta \mathcal{L}_{\text{GAN}}, \]

where \(\alpha\) and \(\beta\) are hyper-parameters that control the effect of each sub-loss during optimization. \(\mathcal{L}_{\text{img}}\) is the loss in image space from Mathieu et al. (2015), defined by

\[ \mathcal{L}_{\text{img}} = \mathcal{L}_p \left( \mathbf{x}_{t+k}, \hat{\mathbf{x}}_{t+k} \right) + \mathcal{L}_{\text{gdl}} \left( \mathbf{x}_{t+k}, \hat{\mathbf{x}}_{t+k} \right), \]

where

\[ \mathcal{L}_p \left( \mathbf{y}, \mathbf{z} \right) = \sum_{k=1}^T \| \mathbf{y} - \mathbf{z} \|_p^p, \]

\[ \mathcal{L}_{\text{gdl}} \left( \mathbf{y}, \mathbf{z} \right) = \sum_{i,j}^{h,w} \left( \left| (\mathbf{y}_{i,j} - \mathbf{y}_{i-1,j}) - (\mathbf{z}_{i,j} - \mathbf{z}_{i-1,j}) \right|^\lambda + \left| (\mathbf{y}_{i,j-1} - \mathbf{y}_{i,j}) - (\mathbf{z}_{i,j-1} - \mathbf{z}_{i,j}) \right|^\lambda \right). \]

Here, \(\mathbf{x}_{t+k}\) and \(\hat{\mathbf{x}}_{t+k}\) are the target and predicted frames, respectively, and \(p\) and \(\lambda\) are hyper-parameters for \(\mathcal{L}_p\) and \(\mathcal{L}_{\text{gdl}}\), respectively. Intuitively, \(\mathcal{L}_p\) guides our network to match the average pixel values directly, while \(\mathcal{L}_{\text{gdl}}\) guides our network to match the gradients of those pixel values. Overall, \(\mathcal{L}_{\text{img}}\) guides our network to learn parameters towards generating the correct average sequence given the input.
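For concreteness, the image loss above can be computed as in the following sketch, assuming PyTorch; the summation over predicted steps follows the equations, while the exact reduction over the batch is our assumption.

```python
# A minimal sketch of the image loss L_img = L_p + L_gdl, assuming PyTorch.
import torch

def lp_loss(y, z, p=2):
    """L_p: match the pixel values of target y and prediction z directly."""
    return (y - z).abs().pow(p).sum()

def gdl_loss(y, z, lam=1):
    """Gradient difference loss: match image gradients along both axes."""
    dy_h = (y[..., 1:, :] - y[..., :-1, :]) - (z[..., 1:, :] - z[..., :-1, :])
    dy_w = (y[..., :, 1:] - y[..., :, :-1]) - (z[..., :, 1:] - z[..., :, :-1])
    return dy_h.abs().pow(lam).sum() + dy_w.abs().pow(lam).sum()

def img_loss(targets, preds, p=2, lam=1):
    """Sum L_p + L_gdl over all predicted time steps.
    targets, preds: (batch, T, channels, h, w)."""
    return sum(lp_loss(targets[:, k], preds[:, k], p)
               + gdl_loss(targets[:, k], preds[:, k], lam)
               for k in range(targets.shape[1]))

# Usage: loss = img_loss(torch.rand(2, 10, 3, 64, 64), torch.rand(2, 10, 3, 64, 64))
```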
Training to generate average sequences, however, results in somewhat blurry generations, which is why we use an additional sub-loss. \( \mathcal{L}_{\text{GAN}} \) is the generator loss in adversarial training, which allows our model to predict realistic-looking frames; it is defined by

\[ \mathcal{L}_{\text{GAN}} = -\log D \left( [\mathbf{x}_{1:t}, G(\mathbf{x}_{1:t})] \right), \]

where \( \mathbf{x}_{1:t} \) is the concatenation of the input images, \( \mathbf{x}_{t+1:t+T} \) is the concatenation of the ground-truth future images, \( G(\mathbf{x}_{1:t}) = \hat{\mathbf{x}}_{t+1:t+T} \) is the concatenation of all predicted images along the depth dimension, and \( D(\cdot) \) is the discriminator in adversarial training. The discriminator loss in adversarial training is defined by

\[ \mathcal{L}_{\text{disc}} = -\log D \left( [\mathbf{x}_{1:t}, \mathbf{x}_{t+1:t+T}] \right) - \log \left( 1 - D \left( [\mathbf{x}_{1:t}, G(\mathbf{x}_{1:t})] \right) \right). \]

In addition to \( \mathcal{L}_{\text{img}} \), \( \mathcal{L}_{\text{GAN}} \) allows our network not only to generate the target sequence, but also to simultaneously enforce realism in the images through visual sharpness that fools the human eye. Note that our model uses its own predictions as input for the next time step during training, which enables the gradients to flow through time and makes the network robust to error propagation during prediction. For a more detailed description of adversarial training, please refer to Appendix D.
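A minimal sketch of these two adversarial sub-losses, assuming PyTorch and a discriminator \( D \) that outputs a probability, is given below; the tensor layout (frames stacked along the depth dimension) is our assumption.

```python
# A minimal sketch of L_GAN and L_disc from the equations above, assuming PyTorch.
import torch

def gan_losses(D, inputs, targets, preds, eps=1e-8):
    """inputs: x_{1:t}; targets: x_{t+1:t+T}; preds: G(x_{1:t}).
    All tensors: (batch, frames*channels, h, w), stacked along depth."""
    d_real = D(torch.cat([inputs, targets], dim=1))
    d_fake = D(torch.cat([inputs, preds], dim=1))
    # Generator loss: push the discriminator to call the predictions real.
    loss_gan = -torch.log(d_fake + eps).mean()
    # Discriminator loss: real sequences -> 1, predicted sequences -> 0.
    loss_disc = (-torch.log(d_real + eps)
                 - torch.log(1 - d_fake + eps)).mean()
    return loss_gan, loss_disc
```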
6 EXPERIMENTS

In this section, we present experiments using our network for video generation. We first evaluate our network, MCnet, on the KTH (Schuldt et al., 2004) and Weizmann action (Gorelick et al., 2007) datasets, and compare against a baseline convolutional LSTM (ConvLSTM) (Shi et al., 2015). We then proceed to evaluate on the more challenging UCF-101 (Soomro et al., 2012) dataset, on which we compare against the same ConvLSTM baseline and also the current state-of-the-art method of Mathieu et al. (2015). For all our experiments, we use \( \alpha = 1 \), \( \lambda = 1 \), and \( p = 2 \) in the loss functions. In addition to the results in this section, we also provide more qualitative comparisons in the supplementary material and in the videos on the project website: https://sites.google.com/a/umich.edu/rubenevillegas/iclr2017

Architectures. The content encoder of MCnet has the same architecture as VGG16 (Simonyan and Zisserman, 2015) up to the third pooling layer. The motion encoder of MCnet is also similar to VGG16 up to the third pooling layer, except that we replace its consecutive 3x3 convolutions with single 5x5, 5x5, and 7x7 convolutions in each layer. The combination layers are composed of 3 consecutive 3x3 convolutions (256, 128, and 256 channels in each layer). The multi-scale residuals are composed of 2 consecutive 3x3 convolutions. The decoder mirrors the architecture of the content encoder, with unpooling followed by deconvolution. For the baseline ConvLSTM, we use the same architecture as the motion encoder, residual connections, and decoder, except that we increase the number of channels in the encoder in order to have an overall comparable number of parameters with MCnet.

6.1 KTH AND WEIZMANN ACTION DATASETS

Experimental settings. The KTH human action dataset (Schuldt et al., 2004) contains 6 categories of periodic motions on a simple background: running, jogging, walking, boxing, hand-clapping and hand-waving. We use persons 1-16 for training and 17-25 for testing, and resize frames to 128x128 pixels. We train our network and the baseline by observing 10 frames and predicting 10 frames into the future on the KTH dataset. We set \( \beta = 0.02 \) for training. We also select the walking, running, one-hand waving, and two-hands waving sequences from the Weizmann action dataset (Gorelick et al., 2007) for testing the networks' generalizability. For all the experiments, we test the networks on predicting 20 time steps into the future. For evaluation, we use the same SSIM and PSNR metrics as in Mathieu et al. (2015). The evaluation on KTH was performed on sub-clips within each video in the test set. We sample sub-clips every 3 frames for running and jogging, and every 20 frames (skipping the frames we have already predicted) for walking, boxing, hand-clapping, and hand-waving. Sub-clips for running, jogging, and walking were manually trimmed to ensure that humans are always present in the frames. The evaluation on Weizmann was performed on all sub-clips in the selected sequences.

Figure 2: Quantitative comparison between MCnet and the ConvLSTM baseline with and without multi-scale residual connections (indicated by "+ RES"). Given 10 input frames, the models predict 20 frames recursively, one by one. Left column: evaluation on the KTH dataset (Schuldt et al., 2004). Right column: evaluation on the Weizmann dataset (Gorelick et al., 2007).

Results. Figure 2 summarizes the quantitative comparisons among our MCnet, the ConvLSTM baseline, and their residual variations. On the KTH test set, our network outperforms the ConvLSTM baseline by a small margin. However, when we test the residual versions of MCnet and ConvLSTM on the Weizmann dataset (Gorelick et al., 2007), which contains similar motions, we can see that our network generalizes well to the unseen contents, showing clear improvements, especially in long-term prediction. One reason for this result is that the test and training partitions of the KTH dataset have simple and similar image contents, so that ConvLSTM can memorize the average background and human appearance to make reasonable predictions. However, when tested on unseen data, ConvLSTM has to internally handle both scene dynamics and image contents in a mingled representation, which makes generalization difficult. In contrast, our network outperforms the ConvLSTM baseline on unseen data because it focuses on identifying general motion features and applying them to a learned content representation.

Figure 3 presents qualitative results of multi-step prediction by our network and ConvLSTM. As expected, the prediction results of our full architecture preserve human shapes more accurately than the baseline. It is worth noticing that our network produces very sharp predictions over long-term time steps; this shows that MCnet is able to capture periodic motion cycles, which significantly reduces the uncertainty of future prediction. More qualitative comparisons are shown in the supplementary material and on the project website.

6.2 UCF-101 DATASET

Experimental settings. This section presents results on the challenging real-world videos in the UCF-101 (Soomro et al., 2012) dataset. Collected from YouTube, the dataset contains 101 realistic human actions taken in the wild and exhibits various challenges, such as background clutter, occlusion, and complicated motion.
We employed the same network architecture as for the KTH dataset, but resized frames to 240x320 pixels, and trained the network to observe 4 frames and predict a single frame. We set \( \beta = 0.001 \) for training. We also trained our convolutional LSTM baseline in the same way. Following the same protocol as Mathieu et al. (2015) for data pre-processing and evaluation metrics on full images, all networks were trained on the Sports-1M (Karpathy et al., 2014) dataset and tested on UCF-101 unless otherwise stated. (We use the code and model released by Mathieu et al. (2015) at https://github.com/couprieC/VideoPredictionICLR2016)

Figure 3: Qualitative comparison between our MCnet model and ConvLSTM. We display predictions starting from the 12th frame, every 3 timesteps. The first 3 rows correspond to the KTH dataset for the action of jogging, and the last 3 rows correspond to the Weizmann dataset for the action of walking.

Results. Figure 4 shows the quantitative comparison between our network trained for single-step prediction and Mathieu et al. (2015). We can clearly see the advantage of our network over the baseline. The separation of motion and content in two encoder pathways allows our network to identify key motion and content features, which are then fed into the decoder to yield predictions of higher quality compared to the baseline. (We were not able to obtain the model fine-tuned on UCF-101 from the authors, so it is not included in Figure 4.) In other words, our network only moves what shows motion in the past, and leaves the rest untouched. We also trained a residual version of MCnet on UCF-101, indicated by "MCnet + RES UCF101", to compare how well our model generalizes when trained and tested on the same or different dataset(s). To our surprise, when tested on UCF-101, the MCnet trained on Sports-1M (MCnet + RES) roughly matches the performance of the MCnet trained on UCF-101 (MCnet + RES UCF101), which suggests that our model learns effective representations that can generalize to new datasets. Figure 5 presents qualitative comparisons between frames generated by our network and Mathieu et al. (2015). Since ConvLSTM and Mathieu et al. (2015) lack explicit motion and content modules, they lose track of the dynamics in the video, and the contents therefore become distorted quickly. More qualitative comparisons are shown in the supplementary material and on the project website.

Figure 4: Quantitative comparison between our model, the convolutional LSTM (Shi et al., 2015), and Mathieu et al. (2015). Given 4 input frames, the models predict 8 frames recursively, one by one.

7 CONCLUSION

We proposed a motion-content network for pixel-level prediction of future frames in natural video sequences. The proposed model employs two separate encoding pathways, and learns to decompose motion and content without explicit constraints or separate training. Experimental results suggest that separate modeling of motion and content improves the quality of pixel-level future prediction, and our model overall achieves state-of-the-art performance in predicting future frames in challenging real-world video datasets.

8 ACKNOWLEDGEMENTS

This work was supported in part by ONR N00014-13-1-0762, NSF CAREER IIS-1453651, gifts from the Bosch Research and Technology Center, and a Sloan Research Fellowship. We also thank NVIDIA for donating K40c and TITAN X GPUs.
We thank Ye Liu, Junhyuk Oh, Xinchen Yan, Lajanugen Logeswaran, Yuting Zhang, Sungryull Sohn, Kibok Lee, Rui Zhang, and other collaborators for helpful discussions. R. Villegas was partly supported by the Rackham Merit Fellowship.
accept
Accept (Poster)
6.666667
97aed3d11ddc9aa9eefbda1edd5418969d456770
iclr
2,017
LEARNING TO OPTIMIZE

Ke Li & Jitendra Malik
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
Berkeley, CA 94720
United States
{ke.li,malik}@eecs.berkeley.edu

ABSTRACT

Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value.

1 INTRODUCTION

Continuous optimization algorithms are some of the most ubiquitous tools used in virtually all areas of science and engineering. Indeed, they are the workhorse of machine learning and power most learning algorithms. Consequently, optimization difficulties become learning challenges; because their causes are often not well understood, they are among the most vexing issues that arise in practice. One solution is to design better optimization algorithms that are immune to these failure cases. This requires careful analysis of existing optimization algorithms and clever solutions to overcome their weaknesses; doing so is both laborious and time-consuming. Is there a better way? If the mantra of machine learning is to learn what is traditionally manually designed, why not take it a step further and learn the optimization algorithm itself?

Consider the general structure of an algorithm for unconstrained continuous optimization, which is outlined in Algorithm 1. Starting from a random location in the domain of the objective function, the algorithm iteratively updates the current iterate by a step vector \( \Delta x \) computed from some functional \( \pi \) of the objective function, the current iterate and past iterates.

Algorithm 1 General structure of unconstrained optimization algorithms
Require: Objective function \( f \)
  \( x^{(0)} \leftarrow \) random point in the domain of \( f \)
  for \( i = 1, 2, \ldots \) do
    \( \Delta x \leftarrow \pi(f, \{x^{(0)}, \ldots, x^{(i-1)}\}) \)
    if stopping condition is met then
      return \( x^{(i-1)} \)
    end if
    \( x^{(i)} \leftarrow x^{(i-1)} + \Delta x \)
  end for

Different optimization algorithms differ only in the choice of the update formula \( \pi \). Examples of existing optimization algorithms and their corresponding update formulas are shown in Table 1. If we can learn \( \pi \), we will be able to learn an optimization algorithm. Since it is difficult to model general functionals, in practice we restrict the dependence of \( \pi \) on the objective function \( f \) to objective values and gradients evaluated at current and past iterates.
Hence, \( \pi \) can simply be modelled as a function from the objective values and gradients along the trajectory taken by the optimizer so far to the next step vector. If we model \( \pi \) with a universal function approximator like a neural net, it is then possible to search over the space of optimization algorithms by learning the parameters of the neural net.

Gradient Descent: \( \pi(\cdot) = -\gamma \nabla f(x^{(i-1)}) \)
Momentum: \( \pi(\cdot) = -\gamma \left( \sum_{j=0}^{i-1} \alpha^{i-1-j} \nabla f(x^{(j)}) \right) \)
Conjugate Gradient: \( \pi(\cdot) = -\gamma \left( \nabla f(x^{(i-1)}) + \sum_{j=0}^{i-2} \left( \frac{\| \nabla f(x^{(j+1)}) \|}{\| \nabla f(x^{(j)}) \|} \right)^{i-1-j} \nabla f(x^{(j)}) \right) \)

Table 1: Choices of the update formula \( \pi \) made by hand-engineered optimization algorithms. We propose learning \( \pi \) automatically in the hope of learning an optimization algorithm that converges faster and to better optima on objective functions of interest.

We formulate this as a reinforcement learning problem, where any particular optimization algorithm simply corresponds to a policy. Learning an optimization algorithm then reduces to finding an optimal policy. For this purpose, we use an off-the-shelf reinforcement learning algorithm known as guided policy search (Levine & Abbeel, 2014), which has demonstrated success in a variety of robotic control settings (Levine et al., 2015a; Finn et al., 2015; Levine et al., 2015b; Han et al., 2015). Our goal is to learn about regularities in the geometry of the error surface induced by a class of objective functions of interest, and to exploit this knowledge to optimize that class of objective functions faster. This is potentially advantageous, since the learned optimizer is trained on the actual objective functions that arise in practice, whereas hand-engineered optimizers are often analyzed in the convex setting and then applied in the non-convex setting.

1.1 Learning How to Learn

When the objective functions under consideration correspond to loss functions for training a model, the proposed framework effectively learns how to learn. The loss function for training a model on a particular task/dataset is a particular objective function, so the losses on many tasks correspond to a set of objective functions. We can train an optimizer on this set of objective functions, which can hopefully learn to exploit regularities of the model and train it faster irrespective of the task. As a concrete example, if the model is a neural net with ReLU activations, our goal is to learn an optimizer that can leverage the piecewise linearity of the model.

We evaluate the learned optimizer on its ability to generalize to unseen objective functions. Akin to the supervised learning paradigm, we divide the dataset of objective functions into training and test sets. At test time, the learned optimizer can be used stand-alone and functions exactly like a hand-engineered optimizer, except that the update formula is replaced with a neural net and no hyperparameters like step size or momentum need to be specified by the user. In particular, it does not perform multiple trials on the same objective function at test time, unlike hyperparameter optimization.
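For concreteness, the sketch below shows how a learned update formula could act as a stand-alone optimizer at test time, following Algorithm 1; the `policy` interface and feature handling are illustrative assumptions, and a hand-specified policy is shown recovering gradient descent from Table 1.

```python
# A minimal sketch of Algorithm 1 with a pluggable update formula pi.
import numpy as np

def learned_optimize(f, grad_f, x0, policy, num_steps=100, tol=1e-8):
    """Minimize f starting from x0, with the step vector produced by a
    learned policy instead of a hand-engineered update formula."""
    x = np.asarray(x0, dtype=float)
    history = []  # past objective values and gradients observed so far
    for _ in range(num_steps):
        val, grad = f(x), grad_f(x)
        history.append((val, grad))
        if np.linalg.norm(grad) < tol:  # stopping condition
            break
        delta_x = policy(history)  # Delta x <- pi(f, past iterates)
        x = x + delta_x
    return x

# Illustrative use on a quadratic; gradient descent is recovered as a
# particular hand-specified policy, pi(.) = -gamma * grad f(x^{(i-1)}):
f = lambda x: 0.5 * np.sum(x ** 2)
grad_f = lambda x: x
gd_policy = lambda history: -0.1 * history[-1][1]
x_star = learned_optimize(f, grad_f, np.array([3.0, -2.0]), gd_policy)
```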
Since different objective functions correspond to the losses for training a model on different tasks, the optimizer is effectively asked to train on its experience of learning on some tasks and generalize to other, possibly unrelated, tasks. It is therefore critical to ensure that the optimizer does not learn anything about particular tasks; this would be considered overfitting under this setting. Notably, this goal is different from the line of work on "learning to learn" or "meta-learning", whose goal is to learn something about a family of tasks. To prevent overfitting to particular tasks, we train the optimizer on randomly generated tasks.

2 RELATED WORK

2.1 META-LEARNING

When the objective functions optimized by the learned optimizer correspond to loss functions for training a model, learning the optimizer can be viewed as learning how to learn. This theme of learning about learning itself has been explored and is referred to as "learning to learn" or "meta-learning" (Baxter et al., 1995; Vilalta & Drissi, 2002; Brazdil et al., 2008; Thrun & Pratt, 2012). Various authors have used the term in different ways and there is no consensus on its precise definition. While there is agreement on what kinds of knowledge should be learned at the base level, it is less clear what kinds of meta-knowledge should be learned at the meta level. We briefly summarize the various perspectives that have been presented below.

One form of meta-knowledge is the commonalities across a family of related tasks (Abu-Mostafa, 1993). Under this framework, the goal of the meta-learner is to learn about such commonalities, so that given the learned commonalities, base-level learning on new tasks from the family can be performed faster. This line of work is often better known as transfer learning and multi-task learning.

A different approach (Brazdil et al., 2003) is to learn how to select the base-level learner that achieves the best performance for a given task. Under this setting, the meta-knowledge is the correlation between properties of tasks and the performance of different base-level learners trained on them. There are two challenges associated with this approach: the need to devise meta-features on tasks that capture similarity between different tasks, and the need to parameterize the space of base-level learners to make search in this space tractable. Schmidhuber (2004) proposes representing base-level learners as general-purpose programs, that is, sequences of primitive operations. While such a representation can in principle encode all base-level learners, searching in this space takes time exponential in the length of the target program.

The proposed method differs from these lines of work in several important ways. First, the proposed method learns regularities in the optimization/learning process itself, rather than regularities that are shared by different tasks or regularities in the mapping between tasks and best-performing base-level learners. More concretely, the meta-knowledge in the proposed framework can capture regularities in the error surface. Second, unlike the approaches above, we explicitly aim to avoid capturing any regularities about the task. Under the proposed framework, only model-specific regularities are captured at the meta level, while task-specific regularities are captured at the base level.
2.2 PROGRAM INDUCTION

Because the proposed method learns an algorithm, it is related to program induction, which considers the problem of learning programs from examples of input and output. Several different approaches have been proposed: genetic programming (Cramer, 1985) represents programs as abstract syntax trees and evolves them using genetic algorithms; Liang et al. (2010) represents programs explicitly using a formal language, constructs a hierarchical Bayesian prior over programs and performs inference using an MCMC sampling procedure; and Graves et al. (2014) represents programs implicitly as sequences of memory access operations and trains a recurrent neural net to learn the underlying patterns in the memory access operations. Hochreiter et al. (2001) considers the special case of online learning algorithms, each of which is represented as a recurrent neural net with a particular setting of weights, and learns the online learning algorithm by learning the neural net weights. While the learned program/algorithm improves as training progresses, the algorithms learned using these methods have not been able to match the performance of simple hand-engineered algorithms. In contrast, our aim is to learn an algorithm that is better than known hand-engineered algorithms.

2.3 HYPERPARAMETER OPTIMIZATION

There is a large body of work on hyperparameter optimization, which studies the optimization of hyperparameters used to train a model, such as the learning rate, the momentum decay factor and regularization parameters. Most methods (Hutter et al., 2011; Bergstra et al., 2011; Snoek et al., 2012; Swersky et al., 2013; Feurer et al., 2015) rely on sequential model-based Bayesian optimization (Mockus et al., 1978; Brochu et al., 2010), while others adopt a random search approach (Bergstra & Bengio, 2012) or use gradient-based optimization (Bengio, 2000; Domke, 2012; Maclaurin et al., 2015). Because each hyperparameter setting corresponds to a particular instantiation of an optimization algorithm, these methods can be viewed as a way to search over different instantiations of the same optimization algorithm. The proposed method, on the other hand, can search over a larger space of possible optimization algorithms. In addition, as noted previously, when presented with a new objective function at test time, the learned optimizer does not need to conduct multiple trials with different hyperparameter settings.

2.4 Special Cases and Other Related Work

Work on online hyperparameter adaptation studies ways to choose the step size or other hyperparameters adaptively while performing optimization. Stochastic meta-descent (Bray et al., 2004) derives a rule for adaptively choosing the step size, Ruvolo et al. (2009) learns a policy for picking the damping factor in the Levenberg-Marquardt algorithm, and recent work (Hansen, 2016; Daniel et al., 2016; Fu et al., 2016) explores learning a policy for choosing the step size. Unlike this line of work, the proposed method learns a policy for choosing the step direction as well as the step size, thereby making it possible to learn a new optimization algorithm that is different from known algorithms. A different line of work (Gregor & LeCun, 2010; Sprechmann et al., 2013) explores learning special-purpose solvers for a class of optimization problems that arise in sparse coding. Work that appeared on arXiv after this paper (Andrychowicz et al., 2016) explores a similar theme under a different setting, where the goal is to learn a task-dependent optimization algorithm.
The optimizer is trained from the experience of training on a particular task or family of tasks and is evaluated on its ability to train on the same task or family of tasks. Under this setting, the optimizer learns regularities about the task itself rather than regularities of the model in general. 3 Background on Reinforcement Learning 3.1 Markov Decision Process In the reinforcement learning setting, the learner is given a choice of actions to take in each time step, which changes the state of the environment in an unknown fashion, and receives feedback based on the consequence of the action. The feedback is typically given in the form of a reward or cost, and the objective of the learner is to choose a sequence of actions based on observations of the current environment that maximizes cumulative reward or minimizes cumulative cost over all time steps. More formally, a reinforcement learning problem can be characterized by a Markov decision process (MDP). We consider an undiscounted finite-horizon MDP with continuous state and action spaces defined by the tuple \((\mathcal{S}, \mathcal{A}, p_0, p, c, T)\), where \(\mathcal{S} \subseteq \mathbb{R}^D\) is the set of states, \(\mathcal{A} \subseteq \mathbb{R}^d\) is the set of actions, \(p_0 : \mathcal{S} \to \mathbb{R}^+\) is the probability density over initial states, \(p : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}^+\) is the transition probability density, that is, the conditional probability density over successor states given the current state and action, \(c : \mathcal{S} \to \mathbb{R}\) is a function that maps state to cost and \(T\) is the time horizon. A policy \(\pi : \mathcal{S} \times \mathcal{A} \times \{0, \ldots, T-1\} \to \mathbb{R}^+\) is a conditional probability density over actions given the state at each time step. When a policy is independent of the time step, it is referred to as stationary. 3.2 Policy Search This problem of finding the cost-minimizing policy is known as the policy search problem. More precisely, the objective is to find a policy \(\pi^*\) such that \[ \pi^* = \arg\min_{\pi} \mathbb{E}_{s_0, a_0, s_1, \ldots, s_T} \left[ \sum_{t=0}^T c(s_t) \right], \] where the expectation is taken with respect to the joint distribution over the sequence of states and actions, often referred to as a trajectory, which has the density \[ q\left(s_0, a_0, s_1, \ldots, s_T\right) = p_0\left(s_0\right) \prod_{t=0}^{T-1} \pi\left(a_t|s_t, t\right) p\left(s_{t+1}|s_t, a_t\right). \] To enable generalization to unseen states, the policy is typically parameterized and minimization is performed over representable policies. Solving this problem exactly is intractable in all but selected special cases. Therefore, policy search methods generally tackle this problem by solving it approximately. In addition, the transition probability density \(p\) is typically not known, but may be accessed via sampling. 3.3 Guided Policy Search Guided policy search (GPS) (Levine & Abbeel, 2014) is a method for searching over expressive non-linear policy classes in continuous state and action spaces. It works by alternating between computing a mixture of target trajectories and training the policy to replicate them. Successive iterations locally improve target trajectories while ensuring proximity to behaviours that are reproducible by the policy. 
Target trajectories are computed by fitting local approximations to the cost and transition probability density and optimizing over a restricted class of time-varying linear target policies subject to a trust region constraint. The stationary non-linear policy is trained to minimize the squared Mahalanobis distance between the predicted and target actions at each time step. More precisely, GPS works by solving the following constrained optimization problem: \[ \min_{\theta, \eta} \mathbb{E}_{\psi} \left[ \sum_{t=0}^T c(s_t) \right] \quad \text{s.t. } \psi(a_t|s_t, t; \eta) = \pi(a_t|s_t; \theta) \quad \forall a_t, s_t, t, \] where \( \psi \) denotes the time-varying target policy, \( \pi \) denotes the stationary non-linear policy, and \( \mathbb{E}_{\psi}[\cdot] \) denotes the expectation taken with respect to the trajectory induced by the target policy \( \psi \). \( \psi \) is assumed to be conditionally Gaussian whose mean is linear in \( s_t \) and \( \pi \) is assumed to be conditionally Gaussian whose mean could be an arbitrary function of \( s_t \). To solve this problem, the equality constraint is relaxed and replaced with a penalty on the KL-divergence between \( \psi \) and \( \pi \). Different flavours of GPS (Levine & Abbeel, 2014; Levine et al., 2015a) use different constrained optimization methods, which all involve alternating between optimizing the parameters of \( \psi \) and \( \pi \). For updating \( \psi \), GPS first builds a model \( \hat{p} \) of the transition probability density \( p \) of the form \( \hat{p}(s_{t+1}|s_t, a_t, t) := \mathcal{N}(A_t s_t + B_t a_t + c_t, F_t) \) where \( A_t, B_t \) and \( c_t \) are parameters estimated from samples drawn from the trajectory induced by the existing \( \psi \). It also computes local quadratic approximations to the cost, so that \( c(s_t) \approx \frac{1}{2} s_t^T C_t s_t + d_t^T s_t + h_t \) for \( s_t \)'s that are near the samples. It then solves the following: \[ \min_{K_t, k_t, G_t} \mathbb{E}_{\hat{\psi}} \left[ \sum_{t=0}^T \frac{1}{2} s_t^T C_t s_t + d_t^T s_t \right] \] \[ \text{s.t. } \sum_{t=0}^T D_{KL} \left( p(s_t) \psi(\cdot|s_t, t; \eta) \| p(s_t) \psi(\cdot|s_t, t; \eta') \right) \leq \epsilon, \] where \( \mathbb{E}_{\hat{\psi}}[\cdot] \) denotes the expectation taken with respect to the trajectory induced by the target policy \( \psi \) if states transition according to the model \( \hat{p} \). \( K_t, k_t, G_t \) are the parameters of \( \psi(a_t|s_t, t; \eta) := \mathcal{N}(K_t s_t + k_t, G_t) \) and \( \eta' \) denotes the parameters of the previous target policy. It turns out that this optimization problem can be solved in closed form using a dynamic programming algorithm known as linear-quadratic-Gaussian regulator (LQG). For updating \( \pi \), GPS minimizes \( D_{KL} \left( p(s_t) \pi(\cdot|s_t) \| p(s_t) \psi(\cdot|s_t, t) \right) \). Assuming fixed covariance and omitting dual variables, this corresponds to minimizing the following: \[ \mathbb{E}_{\psi} \left[ \sum_{t=0}^T (\mathbb{E}_{\pi}[a_t|s_t] - \mathbb{E}_{\psi}[a_t|s_t, t])^T G_t^{-1} (\mathbb{E}_{\pi}[a_t|s_t] - \mathbb{E}_{\psi}[a_t|s_t, t]) \right], \] where \( \mathbb{E}_{\pi}[\cdot] \) denotes the expectation taken with respect to the trajectory induced by the non-linear policy \( \pi \). We refer interested readers to (Levine & Abbeel, 2014) and (Levine et al., 2015a) for details. 
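As an illustration of the dynamics-fitting step in GPS, the sketch below estimates the time-varying linear-Gaussian model \( \hat{p} \) from sampled trajectories with per-step least squares, assuming numpy; it omits the mixture-of-Gaussians prior described in Section 5, so it is a simplification rather than the exact procedure.

```python
# A minimal sketch of fitting p_hat(s_{t+1}|s_t, a_t, t) = N(A_t s_t + B_t a_t + c_t, F_t).
import numpy as np

def fit_dynamics(states, actions):
    """states: (num_traj, T+1, dim_s); actions: (num_traj, T, dim_a).
    Returns per-step (A_t, B_t, c_t, F_t) estimated from the samples."""
    n, T1, ds = states.shape
    da = actions.shape[2]
    params = []
    for t in range(T1 - 1):
        # Regress s_{t+1} on [s_t, a_t, 1] across all sampled trajectories.
        X = np.hstack([states[:, t], actions[:, t], np.ones((n, 1))])
        Y = states[:, t + 1]
        W, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)  # shape (ds+da+1, ds)
        A_t, B_t, c_t = W[:ds].T, W[ds:ds + da].T, W[-1]
        resid = Y - X @ W
        F_t = resid.T @ resid / n  # empirical covariance of the residuals
        params.append((A_t, B_t, c_t, F_t))
    return params
```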
4 FORMULATION

We observe that the execution of an optimization algorithm can be viewed as the execution of a particular policy in an MDP: the state consists of the current iterate and the objective values and gradients evaluated at the current and past iterates, the action is the step vector that is used to update the current iterate, and the transition probability is partially characterized by the update formula, \( x^{(i)} \leftarrow x^{(i-1)} + \Delta x \). The policy that is executed corresponds precisely to the choice of \( \pi \) used by the optimization algorithm. For this reason, we will also use \( \pi \) to denote the policy at hand. Under this formulation, searching over policies corresponds to searching over possible optimization algorithms.

¹In a slight abuse of notation, we use \( \mathcal{N}(\mu, \Sigma) \) to denote the density of a Gaussian distribution with mean \( \mu \) and covariance \( \Sigma \).

To learn \( \pi \), we need to define the cost function, which should penalize policies that exhibit undesirable behaviours during their execution. Since the performance metric of interest for optimization algorithms is the speed of convergence, the cost function should penalize policies that converge slowly. To this end, assuming the goal is to minimize the objective function, we define the cost at a state to be the objective value at the current iterate. This encourages the policy to reach the minimum of the objective function as quickly as possible. We choose to parameterize the mean of \( \pi \) using a neural net, due to its appealing properties as a universal function approximator and its strong empirical performance in a variety of applications. We use GPS to learn \( \pi \).

5 IMPLEMENTATION DETAILS

We store the current iterate, previous gradients and improvements in the objective value from previous iterations in the state. We keep track of only the information pertaining to the previous \( H \) time steps and use \( H = 25 \) in our experiments. More specifically, the dimensions of the state space encode the following information:

• Current iterate

• Change in the objective value at the current iterate relative to the objective value at the \( i \)th most recent iterate for all \( i \in \{2, \ldots, H+1\} \)

• Gradient of the objective function evaluated at the \( i \)th most recent iterate for all \( i \in \{2, \ldots, H+1\} \)

Initially, we set the dimensions corresponding to historical information to zero. The current iterate is only used to compute the cost; because the policy should not depend on the absolute coordinates of the current iterate, we exclude it from the input that is fed into the neural net. We use a small neural net with a single hidden layer of 50 hidden units to model the mean of \( \pi \). Softplus activation units are used at the hidden layer and linear activation units are used at the output layer. We initialize the weights of the neural net randomly and do not regularize the magnitude of the weights.

Initially, we set the target trajectory distribution so that the mean action given the state at each time step matches the step vector used by the gradient descent method with momentum. We choose the best settings of the step size and momentum decay factor for each objective function in the training set by performing a grid search over hyperparameters and running noiseless gradient descent with momentum for each hyperparameter setting. We use a mixture of 10 Gaussians as a prior for fitting the parameters of the transition probability density.
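The state featurization and policy parameterization described above can be sketched as follows, assuming numpy; the exact ordering of state dimensions and the zero-filling of missing history follow the description above, while the calling convention is our assumption.

```python
# A minimal sketch of the policy-net input features and the mean of pi.
import numpy as np

H = 25  # number of past time steps tracked in the state

def policy_input(obj_values, gradients, dim):
    """Build the policy-net input from the optimizer's history.
    The current iterate itself is excluded, since the policy should not
    depend on its absolute coordinates. Missing history is zero-filled."""
    deltas = np.zeros(H)             # objective change vs. the i-th most recent iterate
    past_grads = np.zeros((H, dim))  # gradient at the i-th most recent iterate
    for i in range(2, min(H + 1, len(obj_values)) + 1):
        deltas[i - 2] = obj_values[-1] - obj_values[-i]
        past_grads[i - 2] = gradients[-i]
    return np.concatenate([deltas, past_grads.ravel()])

def policy_mean(features, W1, b1, W2, b2):
    """Mean of pi: one hidden layer of 50 softplus units, linear output."""
    hidden = np.log1p(np.exp(W1 @ features + b1))  # softplus activation
    return W2 @ hidden + b2  # the step vector Delta x
```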
For training, we sample 20 trajectories with a length of 40 time steps for each objective function in the training set. After each iteration of guided policy search, we sample new trajectories from the new distribution and discard the trajectories from the preceding iteration.

6 EXPERIMENTS

We learn optimization algorithms for various convex and non-convex classes of objective functions that correspond to loss functions for different machine learning models. We learn an optimizer for logistic regression, robust linear regression using the Geman-McClure M-estimator, and a two-layer neural net classifier with ReLU activation units. The geometry of the error surface becomes progressively more complex: the loss for logistic regression is convex, the loss for robust linear regression is non-convex, and the loss for the neural net has many local minima.

Figure 1: (a) Mean margin of victory of each algorithm for optimizing the logistic regression loss. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour.

6.1 LOGISTIC REGRESSION

We consider a logistic regression model with an \( \ell_2 \) regularizer on the weight vector. Training the model requires optimizing the following objective:

\[ \min_{\mathbf{w}, b} - \frac{1}{n} \sum_{i=1}^n \left[ y_i \log \sigma \left( \mathbf{w}^T \mathbf{x}_i + b \right) + (1 - y_i) \log \left( 1 - \sigma \left( \mathbf{w}^T \mathbf{x}_i + b \right) \right) \right] + \frac{\lambda}{2} \| \mathbf{w} \|_2^2, \]

where \( \mathbf{w} \in \mathbb{R}^d \) and \( b \in \mathbb{R} \) denote the weight vector and bias respectively, \( \mathbf{x}_i \in \mathbb{R}^d \) and \( y_i \in \{0, 1\} \) denote the feature vector and label of the \( i \)th instance, \( \lambda \) denotes the coefficient on the regularizer and \( \sigma(z) := \frac{1}{1 + e^{-z}} \). For our experiments, we choose \( \lambda = 0.0005 \) and \( d = 3 \). This objective is convex in \( \mathbf{w} \) and \( b \).

We train an algorithm for optimizing objectives of this form. Different examples in the training set correspond to such objective functions with different instantiations of the free variables, which in this case are \( \mathbf{x}_i \) and \( y_i \). Hence, each objective function in the training set corresponds to a logistic regression problem on a different dataset. To construct the training set, we randomly generate a dataset of 100 instances for each function in the training set. The instances are drawn randomly from two multivariate Gaussians with random means and covariances, with half drawn from each. Instances from the same Gaussian are assigned the same label and instances from different Gaussians are assigned different labels.

We train the optimizer on a set of 90 objective functions. We evaluate it on a test set of 100 random objective functions generated using the same procedure and compare to popular hand-engineered algorithms, such as gradient descent, momentum, conjugate gradient and L-BFGS. All baselines are run with the best hyperparameter settings tuned on the training set. For each algorithm and objective function in the test set, we compute the difference between the objective value achieved by a given algorithm and that achieved by the best of the competing algorithms at every iteration, a quantity we will refer to as "the margin of victory".
This quantity is positive when the current algorithm is better than all other algorithms and negative otherwise. In Figure 1a, we plot the mean margin of victory of each algorithm at each iteration, averaged over all objective functions in the test set. As shown, the learned optimizer, which we will henceforth refer to as "predicted step descent", outperforms gradient descent, momentum and conjugate gradient at almost every iteration. The margin of victory of predicted step descent is high in early iterations, indicating that it converges much faster than the other algorithms. It is interesting to note that despite having seen only trajectories of length 40 at training time, the learned optimizer is able to generalize to much longer time horizons at test time. L-BFGS converges to slightly better optima than predicted step descent and the momentum method. This is not surprising, as the objective functions are convex and L-BFGS is known to be a very good optimizer for convex problems.

We show the performance of each algorithm on two objective functions from the test set in Figures 1b and 1c. In Figure 1b, predicted step descent converges faster than all other algorithms. In Figure 1c, predicted step descent initially converges faster than all other algorithms but is later overtaken by L-BFGS, while remaining faster than all other optimizers. However, it eventually achieves the same objective value as L-BFGS, while the objective values achieved by gradient descent and momentum remain much higher.

Figure 2: (a) Mean margin of victory of each algorithm for optimizing the robust linear regression loss. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour.

6.2 ROBUST LINEAR REGRESSION

Next, we consider the problem of linear regression using a robust loss function. One way to ensure robustness is to use an M-estimator for parameter estimation. A popular choice is the Geman-McClure estimator, which induces the following objective:

\[ \min_{\mathbf{w}, b} \frac{1}{n} \sum_{i=1}^n \frac{(y_i - \mathbf{w}^T \mathbf{x}_i - b)^2}{c^2 + (y_i - \mathbf{w}^T \mathbf{x}_i - b)^2}, \]

where \( \mathbf{w} \in \mathbb{R}^d \) and \( b \in \mathbb{R} \) denote the weight vector and bias respectively, \( \mathbf{x}_i \in \mathbb{R}^d \) and \( y_i \in \mathbb{R} \) denote the feature vector and label of the \( i \)th instance, and \( c \in \mathbb{R} \) is a constant that modulates the shape of the loss function. For our experiments, we use \( c = 1 \) and \( d = 3 \). This loss function is not convex in either \( \mathbf{w} \) or \( b \).

As in the preceding section, each objective function in the training set is a function of the above form with a particular instantiation of \( \mathbf{x}_i \) and \( y_i \). The dataset for each objective function is generated by drawing 25 random samples from each one of four multivariate Gaussians, each of which has a random mean and the identity covariance matrix. For all points drawn from the same Gaussian, their labels are generated by projecting them along the same random vector, adding the same randomly generated bias, and perturbing them with i.i.d. Gaussian noise. The optimizer is trained on a set of 120 objective functions. We evaluate it on 100 randomly generated objective functions using the same metric as above.
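For reference, the Geman-McClure objective above and its gradient, which the optimizers receive as black-box value and gradient evaluations, can be sketched as follows, assuming numpy.

```python
# A minimal sketch of the robust linear regression objective and its gradient.
import numpy as np

def geman_mcclure_loss(w, b, X, y, c=1.0):
    """Geman-McClure M-estimator loss: bounded in [0, 1) per residual,
    so outliers with large residuals contribute at most 1 each."""
    r2 = (y - X @ w - b) ** 2          # squared residuals
    return np.mean(r2 / (c ** 2 + r2))

def geman_mcclure_grad(w, b, X, y, c=1.0):
    """Gradient w.r.t. (w, b); uses d/dr [r^2/(c^2+r^2)] = 2 c^2 r / (c^2+r^2)^2."""
    r = y - X @ w - b
    dr = 2 * c ** 2 * r / (c ** 2 + r ** 2) ** 2
    return -X.T @ dr / len(y), -np.mean(dr)

# Usage with the setting c = 1, d = 3 from the experiments:
X, y = np.random.randn(100, 3), np.random.randn(100)
val = geman_mcclure_loss(np.zeros(3), 0.0, X, y)
```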
As shown in Figure 2a, predicted step descent outperforms all hand-engineered algorithms except at the early iterations. While it dominates gradient descent, conjugate gradient and L-BFGS at all times, it does not make progress as quickly as the momentum method initially. However, after around 30 iterations, it is able to close the gap and surpass the momentum method. On this optimization problem, both conjugate gradient and L-BFGS diverge quickly. Interestingly, unlike in the previous experiment, L-BFGS no longer performs well, which could be caused by the non-convexity of the objective functions.

Figures 2b and 2c show performance on objective functions from the test set. In Figure 2b, predicted step descent not only converges the fastest, but also reaches a better optimum than all other algorithms. In Figure 2c, predicted step descent converges the fastest and is able to avoid most of the oscillations that hamper gradient descent and momentum after reaching the optimum.

Figure 3: (a) Mean margin of victory of each algorithm for training neural net classifiers. Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour.

6.3 Neural Net Classifier

Finally, we train an optimizer to train a small neural net classifier. We consider a two-layer neural net with ReLU activation on the hidden units and softmax activation on the output units. We use the cross-entropy loss combined with \( \ell_2 \) regularization on the weights. To train the model, we need to optimize the following objective:

\[ \min_{W,U,\mathbf{b},\mathbf{c}} - \frac{1}{n} \sum_{i=1}^n \log \left( \frac{\exp \left( \left( U \max(W \mathbf{x}_i + \mathbf{b}, 0) + \mathbf{c} \right)_{y_i} \right)}{\sum_{j} \exp \left( \left( U \max(W \mathbf{x}_i + \mathbf{b}, 0) + \mathbf{c} \right)_j \right)} \right) + \frac{\lambda}{2} \| W \|_F^2 + \frac{\lambda}{2} \| U \|_F^2, \]

where \( W \in \mathbb{R}^{h \times d}, \mathbf{b} \in \mathbb{R}^h, U \in \mathbb{R}^{p \times h}, \mathbf{c} \in \mathbb{R}^p \) denote the first-layer and second-layer weights and biases, \( \mathbf{x}_i \in \mathbb{R}^d \) and \( y_i \in \{1, \ldots, p\} \) denote the input and target class label of the \( i \)th instance, \( \lambda \) denotes the coefficient on the regularizers and \( (\mathbf{v})_j \) denotes the \( j \)th component of \( \mathbf{v} \). For our experiments, we use \( \lambda = 0.0005 \) and \( d = h = p = 2 \). The error surface is known to have complex geometry and multiple local optima, making this a challenging optimization problem.

The training set consists of 80 objective functions, each of which corresponds to the objective for training a neural net on a different dataset. Each dataset is generated by sampling four multivariate Gaussians with random means and covariances and drawing 25 points from each. The points from the same Gaussian are assigned the same random label of either 0 or 1. We make sure that not all of the points in the dataset are assigned the same label. We evaluate the learned optimizer in the same manner as above.

As shown in Figure 3a, predicted step descent significantly outperforms all other algorithms. In particular, as evidenced by the sizeable and sustained gap between the margins of victory of predicted step descent and the momentum method, predicted step descent is able to reach much better optima and is less prone to getting trapped in local optima compared to other methods.
This gap is also larger compared to that exhibited in previous sections, suggesting that hand-engineered algorithms are more sub-optimal on challenging optimization problems, and so the potential for improvement from learning the algorithm is greater in such settings. Due to non-convexity, conjugate gradient and L-BFGS often diverge. Performance on examples of objective functions from the test set is shown in Figures 3b and 3c. As shown, predicted step descent is able to reach better optima than all other methods and largely avoids the oscillations that other methods suffer from.

6.4 Visualization of Optimization Trajectories

We visualize optimization trajectories followed by the learned algorithm and various hand-engineered algorithms to gain further insights into the behaviour of the learned algorithm. We generate random two-dimensional logistic regression problems and plot the trajectories followed by different algorithms on each problem in Figure 4.

Figure 4: Objective values and trajectories produced by different algorithms on unseen random two-dimensional logistic regression problems. Each pair of plots corresponds to a different logistic regression problem. Objective values are shown on the vertical axis in the left plot and as contour levels in the right plot, where darker shading represents higher objective values. In the right plot, the axes represent the values of the iterates in each dimension and are of the same scale. Each arrow represents one iteration of an algorithm, whose tail and tip correspond to the preceding and subsequent iterates respectively. Best viewed in colour.

As shown, the learned algorithm exhibits some interesting behaviours. In Figure 4a, the learned algorithm does not take as large a step as L-BFGS initially, but takes larger steps than L-BFGS later on as it approaches the optimum. In other words, the learned algorithm appears to be not as greedy as L-BFGS. In Figures 4b and 4d, the learned algorithm initially overshoots, but appears to have learned how to recover while avoiding oscillations. In Figure 4c, the learned algorithm is able to make rapid progress despite vanishing gradients.

7 CONCLUSION

We presented a method for learning a better optimization algorithm. We formulated this as a reinforcement learning problem, in which any particular optimization algorithm can be represented as a policy. Learning an optimization algorithm then reduces to finding the optimal policy. We used guided policy search for this purpose and trained optimizers for different classes of convex and non-convex objective functions. We demonstrated that the learned optimizer converges faster and/or reaches better optima than hand-engineered optimizers. We hope optimizers learned using the proposed approach can be used to solve various common classes of optimization problems more quickly and help accelerate the pace of research in science and engineering.

ACKNOWLEDGEMENTS

This work was supported by ONR MURI N00014-14-1-0671. Ke Li thanks the Natural Sciences and Engineering Research Council of Canada (NSERC) for fellowship support. The authors also thank Chelsea Finn for code and Pieter Abbeel, Sandy Huang and Zoe McCarthy for feedback. This research used the Savio computational cluster resource provided by the Berkeley Research Computing program at the University of California, Berkeley (supported by the UC Berkeley Chancellor, Vice Chancellor for Research, and Chief Information Officer).

REFERENCES

Yaser S Abu-Mostafa. A method for learning from hints.
In Advances in Neural Information Processing Systems, pp. 73–80, 1993.

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.

Jonathan Baxter, Rich Caruana, Tom Mitchell, Lorien Y Pratt, Daniel L Silver, and Sebastian Thrun. NIPS 1995 workshop on learning to learn: Knowledge consolidation and transfer in inductive systems. https://web.archive.org/web/20000618135816/http://www.cs.cmu.edu/afs/cs.cmu.edu/user/caruana/pub/transfer.html, 1995. Accessed: 2015-12-05.

Yoshua Bengio. Gradient-based optimization of hyperparameters. Neural Computation, 12(8):1889–1900, 2000.

James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281–305, 2012.

James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pp. 2546–2554, 2011.

Matthieu Bray, Esther Koller-Meier, Pascal Müller, Luc Van Gool, and Nicol N Schraudolph. 3D hand tracking by rapid stochastic gradient descent using a skinning model. In 1st European Conference on Visual Media Production (CVMP). Citeseer, 2004.

Pavel Brazdil, Christophe Giraud-Carrier, Carlos Soares, and Ricardo Vilalta. Metalearning: Applications to data mining. Springer Science & Business Media, 2008.

Pavel B Brazdil, Carlos Soares, and Joaquim Pinto Da Costa. Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results. Machine Learning, 50(3):251–277, 2003.

Eric Brochu, Vlad M Cora, and Nando De Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.

Michael Lynn Cramer. A representation for the adaptive generation of simple sequential programs. In Proceedings of the First International Conference on Genetic Algorithms, pp. 183–187, 1985.

Christian Daniel, Jonathan Taylor, and Sebastian Nowozin. Learning step size controllers for robust neural network training. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.

Justin Domke. Generic methods for optimization-based modeling. In AISTATS, volume 22, pp. 318–326, 2012.

Matthias Feurer, Jost Tobias Springenberg, and Frank Hutter. Initializing Bayesian hyperparameter optimization via meta-learning. In AAAI, pp. 1128–1135, 2015.

Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Learning visual feature spaces for robotic manipulation with deep spatial autoencoders. arXiv preprint arXiv:1509.06113, 2015.

Jie Fu, Zichuan Lin, Miao Liu, Nicholas Leonard, Jiashi Feng, and Tat-Seng Chua. Deep Q-networks for accelerating the training of deep neural networks. arXiv preprint arXiv:1606.01467, 2016.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.

Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 399–406, 2010.

Weiqiao Han, Sergey Levine, and Pieter Abbeel. Learning compound multi-step controllers under unknown dynamics. In International Conference on Intelligent Robots and Systems, 2015.

Samantha Hansen. Using deep Q-learning to control optimization hyperparameters. arXiv preprint arXiv:1602.04062, 2016.
Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87–94. Springer, 2001.

Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Learning and Intelligent Optimization, pp. 507–523. Springer, 2011.

Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pp. 1071–1079, 2014.

Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. arXiv preprint arXiv:1504.00702, 2015a.

Sergey Levine, Nolan Wagener, and Pieter Abbeel. Learning contact-rich manipulation skills with guided policy search. arXiv preprint arXiv:1501.05611, 2015b.

Percy Liang, Michael I Jordan, and Dan Klein. Learning programs: A hierarchical Bayesian approach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 639–646, 2010.

Dougal Maclaurin, David Duvenaud, and Ryan P Adams. Gradient-based hyperparameter optimization through reversible learning. arXiv preprint arXiv:1502.03492, 2015.

Jonas Mockus, Vytautas Tiesis, and Antanas Zilinskas. The application of Bayesian methods for seeking the extremum. Towards Global Optimization, 2(117-129):2, 1978.

Paul L Ruvolo, Ian Fasel, and Javier R Movellan. Optimization on a budget: A reinforcement learning approach. In Advances in Neural Information Processing Systems, pp. 1385–1392, 2009.

Jürgen Schmidhuber. Optimal ordered problem solver. Machine Learning, 54(3):211–254, 2004.

Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pp. 2951–2959, 2012.

Pablo Sprechmann, Roei Litman, Tal Ben Yakar, Alexander M Bronstein, and Guillermo Sapiro. Supervised sparse analysis and synthesis operators. In Advances in Neural Information Processing Systems, pp. 908–916, 2013.

Kevin Swersky, Jasper Snoek, and Ryan P Adams. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems, pp. 2004–2012, 2013.

Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.

Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77–95, 2002.

8 APPENDIX

8.1 TRANSFER TO OBJECTIVE FUNCTIONS DRAWN FROM DIFFERENT DISTRIBUTIONS

We evaluate the learned neural net optimizer, which is trained on neural net classification problems on data drawn from mixtures of four random Gaussians, on neural net classification problems on data drawn from mixtures of different numbers of random Gaussians. As the number of mixture components increases, the data distributions used at test time become more dissimilar from those seen during training. As shown in Figure 5, the learned optimizer seems to be fairly robust and performs reasonably well despite deviations of the data distributions from those used for training.

Figure 5: Performance of the learned neural net optimizer on classification problems on data drawn from mixtures of different numbers (5, 6, 8 and 12) of random Gaussians.
The left column shows the mean margin of victory of each algorithm, and the middle and right columns show objective values achieved by each algorithm on two neural net classification problems from the test set. Best viewed in colour.
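The data-generation protocol used in these experiments and in the transfer study, random Gaussian mixtures with randomly labelled components, can be sketched as follows; the function name and defaults are our own, and the transfer setting corresponds to calling it with k set to 5, 6, 8 or 12 rather than the 4 components used at training time.

```python
import numpy as np

def make_mixture_dataset(k=4, points_per_component=25, d=2, rng=None):
    """Toy classification data from a mixture of k random Gaussians; each
    component gets a random binary label, with both classes guaranteed."""
    rng = rng or np.random.default_rng()
    labels = rng.integers(0, 2, size=k)
    while labels.min() == labels.max():       # re-draw if all labels agree
        labels = rng.integers(0, 2, size=k)
    X, y = [], []
    for label in labels:
        mean = rng.normal(size=d)
        A = rng.normal(size=(d, d))
        cov = A @ A.T + 0.1 * np.eye(d)       # random positive-definite covariance
        X.append(rng.multivariate_normal(mean, cov, size=points_per_component))
        y.append(np.full(points_per_component, label))
    return np.concatenate(X), np.concatenate(y)
```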
ABSTRACT

Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value.

1 INTRODUCTION

Continuous optimization algorithms are some of the most ubiquitous tools used in virtually all areas of science and engineering. Indeed, they are the workhorse of machine learning and power most learning algorithms. Consequently, optimization difficulties become learning challenges; because their causes are often not well understood, they are among the most vexing issues that arise in practice. One solution is to design better optimization algorithms that are immune to these failure cases. This requires careful analysis of existing optimization algorithms and clever solutions to overcome their weaknesses; thus, doing so is both laborious and time-consuming. Is there a better way? If the mantra of machine learning is to learn what is traditionally manually designed, why not take it a step further and learn the optimization algorithm itself?

Consider the general structure of an algorithm for unconstrained continuous optimization, which is outlined in Algorithm 1. Starting from a random location in the domain of the objective function, the algorithm iteratively updates the current iterate by a step vector \( \Delta x \) computed from some functional \( \pi \) of the objective function, the current iterate and past iterates.

Algorithm 1 General structure of unconstrained optimization algorithms
Require: Objective function \( f \)
  \( x^{(0)} \leftarrow \) random point in the domain of \( f \)
  for \( i = 1, 2, \ldots \) do
    \( \Delta x \leftarrow \pi(f, \{x^{(0)}, \ldots, x^{(i-1)}\}) \)
    if stopping condition is met then
      return \( x^{(i-1)} \)
    end if
    \( x^{(i)} \leftarrow x^{(i-1)} + \Delta x \)
  end for

Different optimization algorithms only differ in the choice of the update formula \( \pi \). Examples of existing optimization algorithms and their corresponding update formulas are shown in Table 1. If we can learn \( \pi \), we will be able to learn an optimization algorithm. Since it is difficult to model general functionals, in practice, we restrict the dependence of \( \pi \) on the objective function \( f \) to objective values and gradients evaluated at current and past iterates. Hence, \( \pi \) can be simply modelled as a function from the objective values and gradients along the trajectory taken by the optimizer so far to the next step vector.

<table> <tr> <th>Algorithm</th> <th>Update Formula \( \pi \)</th> </tr> <tr> <td>Gradient Descent</td> <td>\( \pi(\cdot) = -\gamma \nabla f(x^{(i-1)}) \)</td> </tr> <tr> <td>Momentum</td> <td>\( \pi(\cdot) = -\gamma \left( \sum_{j=0}^{i-1} \alpha^{i-1-j} \nabla f(x^{(j)}) \right) \)</td> </tr> <tr> <td>Conjugate Gradient</td> <td>\( \pi(\cdot) = -\gamma \left( \nabla f(x^{(i-1)}) + \sum_{j=0}^{i-2} \left( \frac{\| \nabla f(x^{(j+1)}) \|}{\| \nabla f(x^{(j)}) \|} \right)^{i-1-j} \nabla f(x^{(j)}) \right) \)</td> </tr> </table>

Table 1: Choices of the update formula \( \pi \) made by hand-engineered optimization algorithms.
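A minimal sketch of Algorithm 1 in code makes the role of \( \pi \) explicit: the update formula is the only pluggable part, the hand-engineered rules of Table 1 become one-line instances of it, and a learned optimizer simply substitutes a trained neural net for `pi`. The names, the dictionary-based history and the step-norm stopping rule are our own notation, not the authors' code.

```python
import numpy as np

def run_optimizer(pi, f, grad_f, x0, n_iters=100, tol=1e-8):
    """Algorithm 1: iteratively apply the update formula pi, which maps the
    history of objective values and gradients to the next step vector."""
    x = np.asarray(x0, dtype=float)
    hist = {"f": [f(x)], "g": [grad_f(x)]}
    for _ in range(n_iters):
        dx = pi(hist)
        if np.linalg.norm(dx) < tol:         # a simple stand-in stopping condition
            break
        x = x + dx
        hist["f"].append(f(x))
        hist["g"].append(grad_f(x))
    return x

# Hand-engineered instances of pi from Table 1:
def gradient_descent(gamma=0.1):
    return lambda h: -gamma * h["g"][-1]

def momentum(gamma=0.1, alpha=0.9):
    return lambda h: -gamma * sum(alpha ** (len(h["g"]) - 1 - j) * g
                                  for j, g in enumerate(h["g"]))
```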
We propose learning \( \pi \) automatically in the hope of learning an optimization algorithm that converges faster and to better optima on objective functions of interest. If we model \( \pi \) with a universal function approximator like a neural net, it is then possible to search over the space of optimization algorithms by learning the parameters of the neural net. We formulate this as a reinforcement learning problem, where any particular optimization algorithm simply corresponds to a policy. Learning an optimization algorithm then reduces to finding an optimal policy. For this purpose, we use an off-the-shelf reinforcement learning algorithm known as guided policy search (Levine & Abbeel, 2014), which has demonstrated success in a variety of robotic control settings (Levine et al., 2015a; Finn et al., 2015; Levine et al., 2015b; Han et al., 2015). Our goal is to learn about regularities in the geometry of the error surface induced by a class of objective functions of interest and exploit this knowledge to optimize the class of objective functions faster. This is potentially advantageous, since the learned optimizer is trained on the actual objective functions that arise in practice, whereas hand-engineered optimizers are often analyzed in the convex setting and applied to the non-convex setting.

1.1 Learning How to Learn

When the objective functions under consideration correspond to loss functions for training a model, the proposed framework effectively learns how to learn. The loss function for training a model on a particular task/dataset is a particular objective function, and so the loss on many tasks corresponds to a set of objective functions. We can train an optimizer on this set of objective functions, which can hopefully learn to exploit regularities of the model and train it faster irrespective of the task. As a concrete example, if the model is a neural net with ReLU activations, our goal is to learn an optimizer that can leverage the piecewise linearity of the model.

We evaluate the learned optimizer on its ability to generalize to unseen objective functions. Akin to the supervised learning paradigm, we divide the dataset of objective functions into training and test sets. At test time, the learned optimizer can be used stand-alone and functions exactly like a hand-engineered optimizer, except that the update formula is replaced with a neural net and no hyperparameters like step size or momentum need to be specified by the user. In particular, it does not perform multiple trials on the same objective function at test time, unlike hyperparameter optimization. Since different objective functions correspond to the loss for training a model on different tasks, the optimizer is effectively asked to train on its experience of learning on some tasks and generalize to other possibly unrelated tasks. It is therefore critical to ensure that the optimizer does not learn anything about particular tasks; this would be considered overfitting under this setting. Notably, this goal is different from the line of work on “learning to learn” or “meta-learning”, whose goal is to learn something about a family of tasks. To prevent overfitting to particular tasks, we train the optimizer to learn on randomly generated tasks.

2 RELATED WORK

2.1 META-LEARNING

When the objective functions optimized by the learned optimizer correspond to loss functions for training a model, learning the optimizer can be viewed as learning how to learn.
This theme of learning about learning itself has been explored and is referred to as “learning to learn” or “meta-learning” (Baxter et al., 1995; Vilalta & Drissi, 2002; Brazdil et al., 2008; Thrun & Pratt, 2012). Various authors have used the term in different ways and there is no consensus on its precise definition. While there is agreement on what kinds of knowledge should be learned at the base-level, it is less clear what kinds of meta-knowledge should be learned at the meta-level. We briefly summarize the various perspectives that have been presented below.

One form of meta-knowledge is the commonalities across a family of related tasks (Abu-Mostafa, 1993). Under this framework, the goal of the meta-learner is to learn about such commonalities, so that given the learned commonalities, base-level learning on new tasks from the family can be performed faster. This line of work is often better known as transfer learning and multi-task learning. A different approach (Brazdil et al., 2003) is to learn how to select the base-level learner that achieves the best performance for a given task. Under this setting, the meta-knowledge is the correlation between properties of tasks and the performance of different base-level learners trained on them. There are two challenges associated with this approach: the need to devise meta-features on tasks that capture similarity between different tasks, and the need to parameterize the space of base-level learners to make search in this space tractable. Schmidhuber (2004) proposes representing base-level learners as general-purpose programs, that is, sequences of primitive operations. While such a representation can in principle encode all base-level learners, searching in this space takes time exponential in the length of the target program.

The proposed method differs from these lines of work in several important ways. First, the proposed method learns regularities in the optimization/learning process itself, rather than regularities that are shared by different tasks or regularities in the mapping between tasks and best-performing base-level learners. More concretely, the meta-knowledge in the proposed framework can capture regularities in the error surface. Second, unlike the approaches above, we explicitly aim to avoid capturing any regularities about the task. Under the proposed framework, only model-specific regularities are captured at the meta-level, while task-specific regularities are captured at the base-level.

2.2 PROGRAM INDUCTION

Because the proposed method learns an algorithm, it is related to program induction, which considers the problem of learning programs from examples of input and output. Several different approaches have been proposed: genetic programming (Cramer, 1985) represents programs as abstract syntax trees and evolves them using genetic algorithms; Liang et al. (2010) represents programs explicitly using a formal language, constructs a hierarchical Bayesian prior over programs and performs inference using an MCMC sampling procedure; and Graves et al. (2014) represents programs implicitly as sequences of memory access operations and trains a recurrent neural net to learn the underlying patterns in the memory access operations. Hochreiter et al. (2001) considers the special case of online learning algorithms, each of which is represented as a recurrent neural net with a particular setting of weights, and learns the online learning algorithm by learning the neural net weights.
While the program/algorithm improves as training progresses, the algorithms learned using these methods have not been able to match the performance of simple hand-engineered algorithms. In contrast, our aim is to learn an algorithm that is better than known hand-engineered algorithms.

2.3 HYPERPARAMETER OPTIMIZATION

There is a large body of work on hyperparameter optimization, which studies the optimization of hyperparameters used to train a model, such as the learning rate, the momentum decay factor and regularization parameters. Most methods (Hutter et al., 2011; Bergstra et al., 2011; Snoek et al., 2012; Swersky et al., 2013; Feurer et al., 2015) rely on sequential model-based Bayesian optimization (Mockus et al., 1978; Brochu et al., 2010), while others adopt a random search approach (Bergstra & Bengio, 2012) or use gradient-based optimization (Bengio, 2000; Domke, 2012; Maclaurin et al., 2015). Because each hyperparameter setting corresponds to a particular instantiation of an optimization algorithm, these methods can be viewed as a way to search over different instantiations of the same optimization algorithm. The proposed method, on the other hand, can search over a larger space of possible optimization algorithms. In addition, as noted previously, when presented with a new objective function at test time, the learned optimizer does not need to conduct multiple trials with different hyperparameter settings.

2.4 Special Cases and Other Related Work

Work on online hyperparameter adaptation studies ways to choose the step size or other hyperparameters adaptively while performing optimization. Stochastic meta-descent (Bray et al., 2004) derives a rule for adaptively choosing the step size; Ruvolo et al. (2009) learns a policy for picking the damping factor in the Levenberg-Marquardt algorithm; and recent work (Hansen, 2016; Daniel et al., 2016; Fu et al., 2016) explores learning a policy for choosing the step size. Unlike this line of work, the proposed method learns a policy for choosing the step direction as well as the step size, thereby making it possible to learn a new optimization algorithm that is different from known algorithms. A different line of work (Gregor & LeCun, 2010; Sprechmann et al., 2013) explores learning special-purpose solvers for a class of optimization problems that arise in sparse coding. Work that appeared on arXiv after this paper (Andrychowicz et al., 2016) explores a similar theme under a different setting, where the goal is to learn a task-dependent optimization algorithm. The optimizer is trained from the experience of training on a particular task or family of tasks and is evaluated on its ability to train on the same task or family of tasks. Under this setting, the optimizer learns regularities about the task itself rather than regularities of the model in general.

3 Background on Reinforcement Learning

3.1 Markov Decision Process

In the reinforcement learning setting, the learner is given a choice of actions to take in each time step, which changes the state of the environment in an unknown fashion, and receives feedback based on the consequence of the action. The feedback is typically given in the form of a reward or cost, and the objective of the learner is to choose a sequence of actions based on observations of the current environment that maximizes cumulative reward or minimizes cumulative cost over all time steps. More formally, a reinforcement learning problem can be characterized by a Markov decision process (MDP).
We consider an undiscounted finite-horizon MDP with continuous state and action spaces defined by the tuple \((\mathcal{S}, \mathcal{A}, p_0, p, c, T)\), where \(\mathcal{S} \subseteq \mathbb{R}^D\) is the set of states, \(\mathcal{A} \subseteq \mathbb{R}^d\) is the set of actions, \(p_0 : \mathcal{S} \to \mathbb{R}^+\) is the probability density over initial states, \(p : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}^+\) is the transition probability density, that is, the conditional probability density over successor states given the current state and action, \(c : \mathcal{S} \to \mathbb{R}\) is a function that maps state to cost and \(T\) is the time horizon. A policy \(\pi : \mathcal{S} \times \mathcal{A} \times \{0, \ldots, T-1\} \to \mathbb{R}^+\) is a conditional probability density over actions given the state at each time step. When a policy is independent of the time step, it is referred to as stationary.

3.2 Policy Search

The problem of finding the cost-minimizing policy is known as the policy search problem. More precisely, the objective is to find a policy \(\pi^*\) such that
\[ \pi^* = \arg\min_{\pi} \mathbb{E}_{s_0, a_0, s_1, \ldots, s_T} \left[ \sum_{t=0}^T c(s_t) \right], \]
where the expectation is taken with respect to the joint distribution over the sequence of states and actions, often referred to as a trajectory, which has the density
\[ q\left(s_0, a_0, s_1, \ldots, s_T\right) = p_0\left(s_0\right) \prod_{t=0}^{T-1} \pi\left(a_t|s_t, t\right) p\left(s_{t+1}|s_t, a_t\right). \]
To enable generalization to unseen states, the policy is typically parameterized and minimization is performed over representable policies. Solving this problem exactly is intractable in all but selected special cases. Therefore, policy search methods generally tackle this problem by solving it approximately. In addition, the transition probability density \(p\) is typically not known, but may be accessed via sampling.

3.3 Guided Policy Search

Guided policy search (GPS) (Levine & Abbeel, 2014) is a method for searching over expressive non-linear policy classes in continuous state and action spaces. It works by alternating between computing a mixture of target trajectories and training the policy to replicate them. Successive iterations locally improve target trajectories while ensuring proximity to behaviours that are reproducible by the policy. Target trajectories are computed by fitting local approximations to the cost and transition probability density and optimizing over a restricted class of time-varying linear target policies subject to a trust region constraint. The stationary non-linear policy is trained to minimize the squared Mahalanobis distance between the predicted and target actions at each time step. More precisely, GPS works by solving the following constrained optimization problem:
\[ \min_{\theta, \eta} \mathbb{E}_{\psi} \left[ \sum_{t=0}^T c(s_t) \right] \quad \text{s.t. } \psi(a_t|s_t, t; \eta) = \pi(a_t|s_t; \theta) \quad \forall a_t, s_t, t, \]
where \( \psi \) denotes the time-varying target policy, \( \pi \) denotes the stationary non-linear policy, and \( \mathbb{E}_{\psi}[\cdot] \) denotes the expectation taken with respect to the trajectory induced by the target policy \( \psi \). \( \psi \) is assumed to be conditionally Gaussian with a mean that is linear in \( s_t \), and \( \pi \) is assumed to be conditionally Gaussian with a mean that could be an arbitrary function of \( s_t \).
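As a concrete reading of the policy-search objective in Section 3.2, the following sketch estimates \( \mathbb{E}[\sum_t c(s_t)] \) for a given policy by Monte Carlo rollouts; the interface names are our own assumptions for illustration. GPS, as described next, avoids relying on such naive sampled evaluation by fitting local models of the dynamics and cost.

```python
import numpy as np

def expected_cost(policy, sample_s0, transition, cost, T=40, n_rollouts=20,
                  rng=None):
    """Monte Carlo estimate of E[sum_{t=0}^T c(s_t)]: sample trajectories
    s_0 ~ p_0, a_t ~ pi(.|s_t, t), s_{t+1} ~ p(.|s_t, a_t), and average
    the cumulative cost over the sampled trajectories."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n_rollouts):
        s = sample_s0(rng)                   # draw an initial state
        total += cost(s)
        for t in range(T):
            a = policy(s, t, rng)            # draw an action from the policy
            s = transition(s, a, rng)        # sample the next state
            total += cost(s)
    return total / n_rollouts
```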
To solve this problem, the equality constraint is relaxed and replaced with a penalty on the KL-divergence between \( \psi \) and \( \pi \). Different flavours of GPS (Levine & Abbeel, 2014; Levine et al., 2015a) use different constrained optimization methods, which all involve alternating between optimizing the parameters of \( \psi \) and \( \pi \). For updating \( \psi \), GPS first builds a model \( \hat{p} \) of the transition probability density \( p \) of the form \( \hat{p}(s_{t+1}|s_t, a_t, t) := \mathcal{N}(A_t s_t + B_t a_t + c_t, F_t) \), where \( A_t, B_t \) and \( c_t \) are parameters estimated from samples drawn from the trajectory induced by the existing \( \psi \). (In a slight abuse of notation, we use \( \mathcal{N}(\mu, \Sigma) \) to denote the density of a Gaussian distribution with mean \( \mu \) and covariance \( \Sigma \).) It also computes local quadratic approximations to the cost, so that \( c(s_t) \approx \frac{1}{2} s_t^T C_t s_t + d_t^T s_t + h_t \) for \( s_t \)'s that are near the samples. It then solves the following:
\[ \min_{K_t, k_t, G_t} \mathbb{E}_{\hat{\psi}} \left[ \sum_{t=0}^T \frac{1}{2} s_t^T C_t s_t + d_t^T s_t \right] \]
\[ \text{s.t. } \sum_{t=0}^T D_{KL} \left( p(s_t) \psi(\cdot|s_t, t; \eta) \| p(s_t) \psi(\cdot|s_t, t; \eta') \right) \leq \epsilon, \]
where \( \mathbb{E}_{\hat{\psi}}[\cdot] \) denotes the expectation taken with respect to the trajectory induced by the target policy \( \psi \) if states transition according to the model \( \hat{p} \). \( K_t, k_t, G_t \) are the parameters of \( \psi(a_t|s_t, t; \eta) := \mathcal{N}(K_t s_t + k_t, G_t) \) and \( \eta' \) denotes the parameters of the previous target policy. It turns out that this optimization problem can be solved in closed form using a dynamic programming algorithm known as the linear-quadratic-Gaussian (LQG) regulator. For updating \( \pi \), GPS minimizes \( D_{KL} \left( p(s_t) \pi(\cdot|s_t) \| p(s_t) \psi(\cdot|s_t, t) \right) \). Assuming fixed covariance and omitting dual variables, this corresponds to minimizing the following:
\[ \mathbb{E}_{\psi} \left[ \sum_{t=0}^T (\mathbb{E}_{\pi}[a_t|s_t] - \mathbb{E}_{\psi}[a_t|s_t, t])^T G_t^{-1} (\mathbb{E}_{\pi}[a_t|s_t] - \mathbb{E}_{\psi}[a_t|s_t, t]) \right], \]
where \( \mathbb{E}_{\pi}[\cdot] \) denotes the expectation taken with respect to the trajectory induced by the non-linear policy \( \pi \). We refer interested readers to (Levine & Abbeel, 2014) and (Levine et al., 2015a) for details.

4 FORMULATION

We observe that the execution of an optimization algorithm can be viewed as the execution of a particular policy in an MDP: the state consists of the current iterate and the objective values and gradients evaluated at the current and past iterates, the action is the step vector that is used to update the current iterate, and the transition probability is partially characterized by the update formula, \( x^{(i)} \leftarrow x^{(i-1)} + \Delta x \). The policy that is executed corresponds precisely to the choice of \( \pi \) used by the optimization algorithm. For this reason, we will also use \( \pi \) to denote the policy at hand. Under this formulation, searching over policies corresponds to searching over possible optimization algorithms. To learn \( \pi \), we need to define the cost function, which should penalize policies that exhibit undesirable behaviours during their execution.
Since the performance metric of interest for optimization algorithms is the speed of convergence, the cost function should penalize policies that converge slowly. To this end, assuming the goal is to minimize the objective function, we define the cost at a state to be the objective value at the current iterate. This encourages the policy to reach the minimum of the objective function as quickly as possible. We choose to parameterize the mean of \( \pi \) using a neural net, due to its appealing properties as a universal function approximator and its strong empirical performance in a variety of applications. We use GPS to learn \( \pi \).

5 IMPLEMENTATION DETAILS

We store the current iterate, previous gradients and improvements in the objective value from previous iterations in the state. We keep track of only the information pertaining to the previous \( H \) time steps and use \( H = 25 \) in our experiments. More specifically, the dimensions of the state space encode the following information:

• Current iterate
• Change in the objective value at the current iterate relative to the objective value at the \( i \)th most recent iterate for all \( i \in \{2, \ldots, H+1\} \)
• Gradient of the objective function evaluated at the \( i \)th most recent iterate for all \( i \in \{2, \ldots, H+1\} \)

Initially, we set the dimensions corresponding to historical information to zero. The current iterate is only used to compute the cost; because the policy should not depend on the absolute coordinates of the current iterate, we exclude it from the input that is fed into the neural net. We use a small neural net with a single hidden layer of 50 hidden units to model the mean of \( \pi \). Softplus activation units are used at the hidden layer and linear activation units are used at the output layer. We initialize the weights of the neural net randomly and do not regularize the magnitude of weights. Initially, we set the target trajectory distribution so that the mean action given state at each time step matches the step vector used by the gradient descent method with momentum. We choose the best settings of the step size and momentum decay factor for each objective function in the training set by performing a grid search over hyperparameters and running noiseless gradient descent with momentum for each hyperparameter setting. We use a mixture of 10 Gaussians as a prior for fitting the parameters of the transition probability density. For training, we sample 20 trajectories with a length of 40 time steps for each objective function in the training set. After each iteration of guided policy search, we sample new trajectories from the new distribution and discard the trajectories from the preceding iteration.

6 EXPERIMENTS

We learn optimization algorithms for various convex and non-convex classes of objective functions that correspond to loss functions for different machine learning models. We learn an optimizer for logistic regression, robust linear regression using the Geman-McClure M-estimator and a two-layer neural net classifier with ReLU activation units. The geometry of the error surface becomes progressively more complex: the loss for logistic regression is convex, the loss for robust linear regression is non-convex, and the loss for the neural net has many local minima.

Figure 1: (a) Mean margin of victory of each algorithm for optimizing the logistic regression loss. Higher margin of victory indicates better performance.
(b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour. 6.1 LOGISTIC REGRESSION We consider a logistic regression model with an \( \ell_2 \) regularizer on the weight vector. Training the model requires optimizing the following objective: \[ \min_{\mathbf{w}, b} - \frac{1}{n} \sum_{i=1}^n y_i \log \sigma \left( \mathbf{w}^T \mathbf{x}_i + b \right) + (1 - y_i) \log \left( 1 - \sigma \left( \mathbf{w}^T \mathbf{x}_i + b \right) \right) + \frac{\lambda}{2} \| \mathbf{w} \|_2^2, \] where \( \mathbf{w} \in \mathbb{R}^d \) and \( b \in \mathbb{R} \) denote the weight vector and bias respectively, \( \mathbf{x}_i \in \mathbb{R}^d \) and \( y_i \in \{0, 1\} \) denote the feature vector and label of the \( i \)th instance, \( \lambda \) denotes the coefficient on the regularizer and \( \sigma(z) := \frac{1}{1 + e^{-z}} \). For our experiments, we choose \( \lambda = 0.0005 \) and \( d = 3 \). This objective is convex in \( \mathbf{w} \) and \( b \). We train an algorithm for optimizing objectives of this form. Different examples in the training set correspond to such objective functions with different instantiations of the free variables, which in this case are \( \mathbf{x}_i \) and \( y_i \). Hence, each objective function in the training set corresponds to a logistic regression problem on a different dataset. To construct the training set, we randomly generate a dataset of 100 instances for each function in the training set. The instances are drawn randomly from two multivariate Gaussians with random means and covariances, with half drawn from each. Instances from the same Gaussian are assigned the same label and instances from different Gaussians are assigned different labels. We train the optimizer on a set of 90 objective functions. We evaluate it on a test set of 100 random objective functions generated using the same procedure and compare to popular hand-engineered algorithms, such as gradient descent, momentum, conjugate gradient and L-BFGS. All baselines are run with the best hyperparameter settings tuned on the training set. For each algorithm and objective function in the test set, we compute the difference between the objective value achieved by a given algorithm and that achieved by the best of the competing algorithms at every iteration, a quantity we will refer to as “the margin of victory”. This quantity is positive when the current algorithm is better than all other algorithms and negative otherwise. In Figure 1a, we plot the mean margin of victory of each algorithm at each iteration averaged over all objective functions in the test set. As shown, the learned optimizer, which we will henceforth refer to as “predicted step descent”, outperforms gradient descent, momentum and conjugate gradient at almost every iteration. The margin of victory for predicted step descent is high in early iterations, indicating that it converges much faster than other algorithms. It is interesting to note that despite having seen only trajectories of length 40 at training time, the learned optimizer is able to generalize to much longer time horizons at test time. L-BFGS converges to slightly better optima than predicted step descent and the momentum method. This is not surprising, as the objective functions are convex and L-BFGS is known to be a very good optimizer for convex problems. Figure 2: (a) Mean margin of victory of each algorithm for optimizing the robust linear regression loss. 
Higher margin of victory indicates better performance. (b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour. We show the performance of each algorithm on two objective functions from the test set in Figures 1b and 1c. In Figure 1b, predicted step descent converges faster than all other algorithms. In Figure 1c, predicted step descent initially converges faster than all other algorithms but is later overtaken by L-BFGS, while remaining faster than all other optimizers. However, it eventually achieves the same objective value as L-BFGS, while the objective values achieved by gradient descent and momentum remain much higher. 6.2 ROBUST LINEAR REGRESSION Next, we consider the problem of linear regression using a robust loss function. One way to ensure robustness is to use an M-estimator for parameter estimation. A popular choice is the Geman-McClure estimator, which induces the following objective: \[ \min_{\mathbf{w}, b} \frac{1}{n} \sum_{i=1}^n \frac{(y_i - \mathbf{w}^T \mathbf{x}_i - b)^2}{c^2 + (y_i - \mathbf{w}^T \mathbf{x}_i - b)^2}, \] where \( \mathbf{w} \in \mathbb{R}^d \) and \( b \in \mathbb{R} \) denote the weight vector and bias respectively, \( \mathbf{x}_i \in \mathbb{R}^d \) and \( y_i \in \mathbb{R} \) denote the feature vector and label of the \( i \)th instance and \( c \in \mathbb{R} \) is a constant that modulates the shape of the loss function. For our experiments, we use \( c = 1 \) and \( d = 3 \). This loss function is not convex in either \( \mathbf{w} \) or \( b \). As with the preceding section, each objective function in the training set is a function of the above form with a particular instantiation of \( \mathbf{x}_i \) and \( y_i \). The dataset for each objective function is generated by drawing 25 random samples from each one of four multivariate Gaussians, each of which has a random mean and the identity covariance matrix. For all points drawn from the same Gaussian, their labels are generated by projecting them along the same random vector, adding the same randomly generated bias and perturbing them with i.i.d. Gaussian noise. The optimizer is trained on a set of 120 objective functions. We evaluate it on 100 randomly generated objective functions using the same metric as above. As shown in Figure 2a, predicted step descent outperforms all hand-engineered algorithms except at early iterations. While it dominates gradient descent, conjugate gradient and L-BFGS at all times, it does not make progress as quickly as the momentum method initially. However, after around 30 iterations, it is able to close the gap and surpass the momentum method. On this optimization problem, both conjugate gradient and L-BFGS diverge quickly. Interestingly, unlike in the previous experiment, L-BFGS no longer performs well, which could be caused by non-convexity of the objective functions. Figures 2b and 2c show performance on objective functions from the test set. In Figure 2b, predicted step descent not only converges the fastest, but also reaches a better optimum than all other algorithms. In Figure 2c, predicted step descent converges the fastest and is able to avoid most of the oscillations that hamper gradient descent and momentum after reaching the optimum. Figure 3: (a) Mean margin of victory of each algorithm for training neural net classifiers. Higher margin of victory indicates better performance. 
(b-c) Objective values achieved by each algorithm on two objective functions from the test set. Lower objective values indicate better performance. Best viewed in colour.

6.3 Neural Net Classifier

Finally, we train an optimizer to train a small neural net classifier. We consider a two-layer neural net with ReLU activation on the hidden units and softmax activation on the output units. We use the cross-entropy loss combined with \( \ell_2 \) regularization on the weights. To train the model, we need to optimize the following objective:
\[ \min_{W,U,b,c} - \frac{1}{n} \sum_{i=1}^n \log \left( \frac{\exp \left( (U \max(W x_i + b, 0) + c)_{y_i} \right)}{\sum_j \exp \left( (U \max(W x_i + b, 0) + c)_j \right)} \right) + \frac{\lambda}{2} \| W \|_F^2 + \frac{\lambda}{2} \| U \|_F^2, \]
where \( W \in \mathbb{R}^{h \times d}, b \in \mathbb{R}^h, U \in \mathbb{R}^{p \times h}, c \in \mathbb{R}^p \) denote the first-layer and second-layer weights and biases, \( x_i \in \mathbb{R}^d \) and \( y_i \in \{1, \ldots, p\} \) denote the input and target class label of the \( i \)th instance, \( \lambda \) denotes the coefficient on the regularizers and \( (v)_j \) denotes the \( j \)th component of \( v \). For our experiments, we use \( \lambda = 0.0005 \) and \( d = h = p = 2 \). The error surface is known to have complex geometry and multiple local optima, making this a challenging optimization problem.

The training set consists of 80 objective functions, each of which corresponds to the objective for training a neural net on a different dataset. Each dataset is generated by sampling four multivariate Gaussians with random means and covariances and drawing 25 points from each. The points from the same Gaussian are assigned the same random label of either 0 or 1. We make sure not all of the points in the dataset are assigned the same label. We evaluate the learned optimizer in the same manner as above. As shown in Figure 3a, predicted step descent significantly outperforms all other algorithms. In particular, as evidenced by the sizeable and sustained gap between the margins of victory of predicted step descent and the momentum method, predicted step descent is able to reach much better optima and is less prone to getting trapped in local optima compared to other methods. This gap is also larger compared to that exhibited in previous sections, suggesting that hand-engineered algorithms are more sub-optimal on challenging optimization problems, and so the potential for improvement from learning the algorithm is greater in such settings. Due to non-convexity, conjugate gradient and L-BFGS often diverge. Performance on examples of objective functions from the test set is shown in Figures 3b and 3c. As shown, predicted step descent is able to reach better optima than all other methods and largely avoids the oscillations that other methods suffer from.

6.4 Visualization of Optimization Trajectories

We visualize optimization trajectories followed by the learned algorithm and various hand-engineered algorithms to gain further insights into the behaviour of the learned algorithm. We generate random two-dimensional logistic regression problems and plot the trajectories followed by different algorithms on each problem in Figure 4.

Figure 4: Objective values and trajectories produced by different algorithms on unseen random two-dimensional logistic regression problems. Each pair of plots corresponds to a different logistic regression problem.
Objective values are shown on the vertical axis in the left plot and as contour levels in the right plot, where darker shading represents higher objective values. In the right plot, the axes represent the values of the iterates in each dimension and are of the same scale. Each arrow represents one iteration of an algorithm, whose tail and tip correspond to the preceding and subsequent iterates respectively. Best viewed in colour.

As shown, the learned algorithm exhibits some interesting behaviours. In Figure 4a, the learned algorithm does not take as large a step as L-BFGS initially, but takes larger steps than L-BFGS later on as it approaches the optimum. In other words, the learned algorithm appears to be not as greedy as L-BFGS. In Figures 4b and 4d, the learned algorithm initially overshoots, but appears to have learned how to recover while avoiding oscillations. In Figure 4c, the learned algorithm is able to make rapid progress despite vanishing gradients.

7 CONCLUSION

We presented a method for learning a better optimization algorithm. We formulated this as a reinforcement learning problem, in which any particular optimization algorithm can be represented as a policy. Learning an optimization algorithm then reduces to finding the optimal policy. We used guided policy search for this purpose and trained optimizers for different classes of convex and non-convex objective functions. We demonstrated that the learned optimizer converges faster and/or reaches better optima than hand-engineered optimizers. We hope optimizers learned using the proposed approach can be used to solve various common classes of optimization problems more quickly and help accelerate the pace of research in science and engineering.

ACKNOWLEDGEMENTS

This work was supported by ONR MURI N00014-14-1-0671. Ke Li thanks the Natural Sciences and Engineering Research Council of Canada (NSERC) for fellowship support. The authors also thank Chelsea Finn for code and Pieter Abbeel, Sandy Huang and Zoe McCarthy for feedback. This research used the Savio computational cluster resource provided by the Berkeley Research Computing program at the University of California, Berkeley (supported by the UC Berkeley Chancellor, Vice Chancellor for Research, and Chief Information Officer).
accept
Accept (Poster)
6.666667
995cc46b9902c15fbade5e957bb9b000451ec682
iclr
2,017
Layer Recurrent Neural Networks
Weidi Xie, Alison Noble & Andrew Zisserman
Department of Engineering Science, University of Oxford, UK

Abstract

In this paper, we propose a Layer-RNN (L-RNN) module that is able to learn contextual information adaptively using within-layer recurrence. Our contributions are three-fold: (i) we propose a hybrid neural network architecture that interleaves traditional convolutional layers with L-RNN modules for learning long-range dependencies at multiple levels; (ii) we show that an L-RNN module can be seamlessly inserted into any convolutional layer of a pre-trained CNN, and the entire network then fine-tuned, leading to a boost in performance; (iii) we report experiments on the CIFAR-10 classification task, showing that a network with interleaved convolutional layers and L-RNN modules achieves results (5.39% top-1 error) comparable to ResNet-164 (5.46%) using only 15 layers and fewer parameters; and on the PASCAL VOC2012 semantic segmentation task, we show that the performance of a pre-trained FCN network can be boosted by 5% (mean IOU) by simply inserting Layer-RNNs.

1 Introduction

In computer vision tasks, such as image classification or pixel-level prediction, multi-scale contextual information plays a very important role in achieving high performance. The original architectures for these tasks (e.g. He et al., 2016a; Krizhevsky et al., 2012; Long et al., 2015; Ronneberger et al., 2015; Simonyan & Zisserman, 2015; Szegedy et al., 2015) were able to obtain multi-scale context with a large spatial footprint by the combination of filters through the layers of the network, so that a large receptive field was effectively built up. Indeed, the final layers of these networks use average pooling or fully connected layers (convolution with a large kernel) so that the effective receptive field covers the entire input image patch. More recent pixel prediction architectures have used dilated convolutions (Yu & Koltun, 2016; Chen et al., 2016), which are able to aggregate multi-scale contextual information without losing resolution (due to the spatial pooling and strides in the original architectures), and without incurring the penalty of having to learn many parameters for convolutions with very large kernels.

In this paper we introduce an alternative ‘module’ for learning multi-scale spatial contextual information by using Recurrent Neural Networks (RNNs) within layers. This approach is inspired by the ReNet architecture of Visin et al. (2015), which we extend here into a hybrid architecture that interleaves traditional convolutional neural network (CNN) modules with layer recurrent modules, and which we term a Layer Recurrent Neural Network (L-RNN). An L-RNN module is a combination of 1D RNNs, and is able to learn contextual information adaptively, with the effective receptive field able to reach across the entire feature map or image, if that is required for the task. The hybrid network combines the best of both worlds: canonical CNNs are composed of filters that are efficient in capturing features in a local region, whilst the L-RNNs are able to learn long-range dependencies across a layer efficiently with only a small number of parameters.

We describe the basic L-RNN module in Section 2, and discuss different fusion choices for the hybrid architecture by incorporating L-RNNs into residual blocks (He et al., 2016b) in Section 3. In addition, in Section 4, we explain how L-RNN modules can be inserted into pre-trained CNNs seamlessly.
This means that the entire network does not have to be trained from scratch: only the added L-RNNs are fine-tuned together with the pre-trained network, and the experiments show that this addition always improves performance. In Section 5, we report experiments on CIFAR-10 classification with hybrid networks of increasing depth; by using Layer Normalization (Ba et al., 2016), we are able to train vanilla RNNs to match the performance of GRUs (Chung et al., 2015) while using fewer parameters. In addition, we fine-tune a truncated VGG-16 FCN base net for semantic segmentation on the Pascal VOC 2012 and COCO (Lin et al., 2014) datasets.

It is worth noting that (broadly) recurrence can be used in feed-forward multi-layer convolutional neural network architectures in two ways: between layers, and within layers. For example, between-layer recurrence was used for scene labelling in (Liang et al., 2015; Pinheiro & Collobert, 2014), with convolutions applied recursively on top of feature maps from different layers or raw input images. And in (Zheng et al., 2015), spatial dependencies are modelled explicitly for semantic segmentation with densely connected Gaussian CRFs, by iterated application of bilateral filtering using between-layer recurrence. By contrast, our Layer-RNN architecture falls into the second category, where within-layer recurrence is used to capture dependencies. Others have learnt contextual information from within-layer recurrence for tasks such as object detection (Bell et al., 2016), and low-level vision problems, such as de-noising, colourization and smoothing (Liu et al., 2016). We postpone discussing in detail the relationships of the proposed Layer-RNN modules to these architectures, and to those of ReNet (Visin et al., 2015) and ReSeg (Visin et al., 2016), until we have introduced the L-RNN in Section 2.

2 LAYER-RNN ARCHITECTURE

The architecture of the network (Figure 1) is composed of two parts: local features are calculated by the low-level CNN module, and the Layer-RNN (L-RNN) module, consisting of several 1D spatial RNNs, is applied to capture the spatial dependencies. By scanning across the feature maps in different directions, the complete L-RNN is able to learn the receptive field in an adaptive way, up to the size of the entire image. These two modules can be combined to build networks in various ways; for example, an L-RNN module can be stacked on top of several CNN modules at the final layer, or CNN and L-RNN modules can be interleaved at multiple levels.

Figure 1: Basic Architecture: Given the input image, local features are calculated by the CNN module (A). In (B), two 1D spatial RNNs are applied to scan along each row independently from different directions, hidden states are calculated at every spatial step, and the output feature maps can either be concatenated or summed up.
The receptive field for the black pixel in (B) is labelled in orange; in (C), two 1D spatial RNNs are applied to scan along each column from two directions. The combination of (B) and (C) defines the L-RNN module that is able to propagate information over the entire image.

2.1 LAYER-RNN MODULE

As shown in Figure 1, the Layer-RNN (L-RNN) module is a combination of the 1D spatial recurrent modules (B) and (C). In each module, there are two 1D RNNs scanning across the feature maps horizontally or vertically from two directions (bidirectional spatial RNNs), and their hidden states are updated at every spatial step. Consequently, for each of the horizontal and vertical directions, two output feature maps are obtained with the same width and height as the input feature maps. In our implementation, we simply sum up these output feature maps (an alternative is to concatenate the output feature maps, but that would increase the number of parameters). More formally, assume the feature maps (layer \( L \)) coming into the L-RNN module are \( X^L \in \mathbb{R}^{m \times n \times d} \) and the output is \( X^{L+1} \) (layer \( L + 1 \)), where \( m, n, d \) refer to the width, height, and the number of feature maps respectively for the input layer. For simplicity, assume the input to the 1D spatial RNNs from \( X^L \) is a feature vector at each spatial location; each row or column on the feature maps is treated as one sequence. When scanning from left to right, the feature responses for location \( ij \) can be calculated as:
\[ x_{i,j}^{L+1} = f(U x_{i,j}^L + V x_{i,j-1}^{L+1} + b) \qquad \text{left to right} \tag{1} \]
where \( x_{i,0}^{L+1} = 0,\ x_{i,j}^L \in \mathbb{R}^{d \times 1},\ x_{i,j}^{L+1}, x_{i,j-1}^{L+1} \in \mathbb{R}^{D \times 1},\ U \in \mathbb{R}^{D \times d},\ V \in \mathbb{R}^{D \times D},\ b \in \mathbb{R}^{D \times 1} \), \( D \) denotes the number of nodes used in the 1D spatial RNN, and \( f \) refers to the non-linearity function. 1D spatial RNNs scanning in other directions can be calculated similarly. Notice that the first term of equation (1) encodes local information independently, resembling a normal convolutional layer, and the second term characterizes the within-layer recurrence (\( U \) is a convolution matrix, \( V \) a recurrence matrix). We make use of this observation in Section 4.

2.2 DISCUSSION AND RELATION TO OTHER WORK

As can be seen in Figure 1C, the effective receptive field can cover the entire image. However, the actual receptive field depends on the parameters of the RNNs, and can be learnt adaptively. As an insight into what is learnt, consider a separable filter, such as an axis-aligned 2D Gaussian. Such filters can be applied exactly by a composition of 1D Gaussian convolutions in the horizontal and vertical directions. The 1D spatial RNNs can approximate finite 1D convolutions of this type. We next discuss the relation of the L-RNN to prior work. First, consider ReNets (Visin et al., 2015), an architecture made entirely of 1D RNNs (i.e. no CNNs). In ReNets, the input images are first split into non-overlapping patches of size \( m \times n \times d \), where \( m, n, d \) refer to width, height and feature channels respectively. The 1D RNNs take the flattened patch (\( mn \times d \)) as input, and output a feature vector of size \( D \times 1 \), where \( D \) refers to the number of nodes used in the RNNs. In contrast, we interleave the L-RNN and CNN modules.
There are two benefits of this interleaving: first, CNNs are more efficient than RNNs at capturing local features, and the L-RNN stacked upon them is able to learn dependencies between these local features (rather than between reformatted input channels); second, we are able to introduce more non-linearities between the hierarchical layers (through the convolution+ReLU and pooling layers), while the RNN provides non-linearities within the same layer.

The 2D-RNN, proposed in (Graves & Schmidhuber, 2009; Theis & Bethge, 2015), is able to scan across the image or feature maps row-by-row or column-by-column sequentially, with each RNN node accepting input from three sources, namely a projection of the current input and feedback from its two neighbouring nodes. By contrast, we use unidirectional 1D spatial RNNs, with each hidden node accepting feedback only from its previous node. Another advantage of our model is that rows or columns can be processed in parallel on GPUs, shortening training time.

Bell et al. (2016) (Inside-Outside Net) and Visin et al. (2016) (ReSeg) describe similar ideas for object detection and semantic segmentation. Both architectures follow a pipeline that consists of a CNN feature extractor (VGG Net) followed by spatial RNNs at the final prediction stage. In contrast, we treat the L-RNN module as a general computational layer that can be inserted into any layer of modern architectures, and interleaved with CNN modules. This enables a network to learn contextual information in a flexible way at multiple levels, rather than with hand-crafted kernel sizes and receptive fields.

Note that the vanilla RNN unit consists of two terms, a local term and a recurrence term, where the local term is exactly the convolution operation. Therefore, the spatial RNN can be seen as a generalisation of the convolutional layer; in the worst case, when the RNN learns no context, the layer simply becomes a convolutional one. For tasks with limited data (semantic segmentation in our case), we propose a regime for inserting the L-RNN into a pre-trained FCN and fine-tuning the entire network end-to-end. This means that we directly increase the representational power of the model, and set the pre-trained model free to learn contextual information if it is needed.

3 CNNs & LAYER-RNN MODULES

In this section, we describe the architecture for incorporating 1D spatial RNNs into the computational block of a Residual Network (He et al., 2016b), and also discuss fusion methods for such blocks. We start with the standard residual block of He et al. (2016b) (Figure 2(a)), and then replace the included CNN layer with bidirectional spatial RNNs, to include an L-RNN module instead.

![Figure 2](page_184_370_1092_246.png)

Figure 2: Basic Modules for Classification: the CNN module is defined in the same way as in ResNet (He et al., 2016b); the L-RNN module is defined as a cascade of bidirectional spatial RNNs. Forward, Sum or Concatenation can be used for skip layers. Batch Normalization is used when training the architecture from scratch (Section 5.1).

We consider three fusion options for combining the features from such blocks with the input to subsequent layers, namely forward, sum and concatenation. Forward refers to the traditional feed-forward architecture:

\[ X^{L+1} = F(X^L, W) \] (2)

i.e.
the block simply becomes a new layer; sum denotes the method of the original residual networks:

\[ X^{L+1} = X^L + F(X^L, W) \] (3)

so that the L-RNN module acts as a residual block; whilst, in concatenation, features from multiple layers (of the same spatial size) are concatenated:

\[ X^{L+1} = [X^L; F(X^L, W)] \qquad \text{where ; refers to concatenation} \] (4)

Therefore, the number of channels of the output feature maps is the sum of the channels of the two concatenated layers (which increases the number of parameters in the next layers). In the experimental evaluation of Section 5.1 we compare these options.

4 ADDING A LAYER-RNN TO A PRE-TRAINED CNN

In this section, we describe how a Layer-RNN module can be seamlessly inserted into a pre-trained CNN. In a typical scenario, the CNN would be trained for classification on ImageNet (where there are copious annotations). After inserting the L-RNN modules, the hybrid L-RNN network can then be fine-tuned for a new task such as pixel-level prediction, e.g. semantic segmentation (where the annotated data is usually more limited). This trick naturally allows multi-level contextual information to be effortlessly incorporated. Avoiding training the network from scratch means the entire network can be re-purposed with the available annotations and trained end-to-end for the new task, whilst benefiting from the earlier classification training.

We illustrate the idea using 1D convolution, but the same principles hold for the entire L-RNN module. As shown in Figure 3, the canonical CNN architecture for a 1D convolution can be denoted as:

\[ X^{L+1} = f(W * X^L + b) \] (5)

where * refers to convolution, \( W \) and \( b \) are the parameters of the CNN, and \( L, L+1 \) denote the layers. The 1D spatial RNN can be written as:

\[ X_i^{L+1} = f(U * X_i^L + V X_{i-1}^{L+1} + b) \] (6)

where \( U, V, b \) refer to the parameters that are shared across the whole scan-line, and \( i \) indexes the position along the scan-line. Notice that the 1D spatial RNN is designed to incorporate two terms: a projection from the local region (input-to-hidden) and a recurrence term from the previous hidden unit (hidden-to-hidden).

Figure 3: CNNs & Spatial RNNs. Spatial RNNs can be re-expressed as a two-step process: CNNs (local features) + recurrence. The similarity between CNNs and spatial RNNs is highlighted by the yellow box; the difference is shown in the blue box and arrow.

In fact, it is the presence of a non-zero recurrence matrix \( V \) that characterizes the 1D spatial RNN, and it can be calculated in a two-step way as:

\[ X^{inter} = U * X^L + b \quad \text{(Convolution)} \tag{7} \]
\[ X_i^{L+1} = f(X_i^{inter}) \quad (i = 1, \text{zero initial states}) \tag{8} \]
\[ X_i^{L+1} = f(X_i^{inter} + V X_{i-1}^{L+1}) \quad (i > 1) \tag{9} \]

By interpreting the recurrence in this way, 1D spatial RNNs can be constructed by inserting recurrence directly into any convolutional layer, right after the convolution. If the recurrence matrix \( V \) is initialized to zero, and ReLU is the activation function, then the 1D spatial RNN is initialized to compute exactly the same function as the pre-trained CNN. The complete L-RNN can be constructed by inserting two bidirectional spatial RNNs into subsequent layers of the pre-trained CNN. We derive the expression of the within-layer gradient for use in back-prop fine-tuning in Appendix B.

5 EXPERIMENTAL EVALUATION

We test the proposed Layer-RNN on two supervised learning tasks: CIFAR-10 classification in Section 5.1, and PASCAL VOC 2012 segmentation in Section 5.2.
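Before turning to the experiments, the two-step computation of equations (7)-(9) and the zero-initialization property of Section 4 can be made concrete with a minimal NumPy sketch on a single scan-line (the names are ours and purely illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def spatial_rnn_scanline(X_inter, V):
    """Equations (8)-(9): X_inter (T x D) is the pre-activation
    U * X^L + b of the existing convolution (equation (7)); V is the
    recurrence matrix inserted right after that convolution."""
    H = np.empty_like(X_inter)
    H[0] = relu(X_inter[0])  # zero initial state, equation (8)
    for i in range(1, len(X_inter)):
        H[i] = relu(X_inter[i] + H[i - 1] @ V.T)  # equation (9)
    return H

# With V initialized to zero and ReLU as the activation, the layer
# reproduces the pre-trained convolution exactly, so fine-tuning starts
# from the original network's behaviour:
X_inter = np.random.randn(7, 4)
assert np.allclose(spatial_rnn_scanline(X_inter, np.zeros((4, 4))),
                   relu(X_inter))
```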
5.1 IMAGE CLASSIFICATION

In this section, we investigate classification performance under variations in an architecture containing L-RNN modules. We vary the depth of the network, the number and position of the L-RNN modules, the type of recurrent units in the RNNs, the pooling mechanism for the last pooling layer, and the method of fusing the block outputs.

Architectures: An overview of the architectures is given in Table 1 (with the fundamental building modules (CNN and L-RNN) adapted from Figure 2). There are two principal architectural variations. The first, from Network A to D, is that we gradually increase the network depth by adding CNN modules, with the L-RNN module always stacked at the final stage to capture global information over the entire image, in a similar manner to the fully connected layers or average pooling in other networks. Network A has 5 convolutional layers. The second principal variation, in Networks E and F, is to interleave CNN and L-RNN modules. This means that the network is capable of learning representations across large spatial footprints at any stage in the network. To show the effectiveness of adding L-RNN modules, we include a Baseline-CNN composed of only convolutional layers (7 layers, with concatenation used at every skip layer). Network E is built upon the Baseline-CNN by inserting L-RNN modules before CNN modules at multiple stages. To make sure the performance gain is not from the increased number of parameters, we cut down the number of filters in the last CNN module to 128 (this number is 256 in the Baseline-CNN). Network F uses more convolutional layers interleaved with L-RNN modules.

<table>
<tr>
<th>Baseline-CNN</th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> <th>E</th> <th>F</th>
</tr>
<tr>
<td colspan="7">input (\(32 \times 32 \times 3\))</td>
</tr>
<tr>
<td colspan="7">convolution (\(3 \times 3 \times 64\))</td>
</tr>
<tr>
<td>CNN Module (\(3 \times 3 \times 64\)) Concatenate</td>
<td>CNN Module (\(3 \times 3 \times 64\))<br><span style="color:blue;">Feature Fusion</span></td>
<td>CNN Module (\(3 \times 3 \times 64\))<br>Forward<br>CNN Module (\(3 \times 3 \times 64\)) Concatenate</td>
<td>CNN Module (\(3 \times 3 \times 64\))<br>Forward<br>CNN Module (\(3 \times 3 \times 64\)) Concatenate</td>
<td>CNN Module (\(3 \times 3 \times 64\))<br>Forward<br>CNN Module (\(3 \times 3 \times 64\)) Concatenate<br>CNN Module (\(3 \times 3 \times 128\)) Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td>
<td>CNN Module (\(3 \times 3 \times 64\))<br>Forward<br>CNN Module (\(3 \times 3 \times 64\)) Concatenate<br>CNN Module (\(3 \times 3 \times 128\)) Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td>
<td>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate</td>
</tr>
<tr>
<td colspan="7">MaxPooling (2)</td>
</tr>
<tr>
<td>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td>
<td>CNN Module (\(3 \times 3 \times 128\))<br><span style="color:blue;">Feature Fusion</span></td>
<td>CNN Module (\(3 \times 3 \times 128\))<br>Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td>
<td>CNN Module (\(3 \times 3 \times 128\))<br>Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td>
<td>CNN Module (\(3 \times 3 \times 128\))<br>Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate<br>CNN Module (\(3 \times 3 \times 128\)) Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td>
<td>CNN Module (\(3 \times 3 \times 128\))<br>Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate<br>CNN Module (\(3 \times 3 \times 128\)) Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td>
<td>LRNN Module (128)<br>Forward<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate<br>LRNN Module (128)<br>Forward<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate</td>
</tr>
<tr>
<td colspan="7">MaxPooling (2)</td>
</tr>
<tr>
<td>CNN Module (\(3 \times 3 \times 256\)) Concatenate</td>
<td>LRNN Module (256)<br><span style="color:blue;">Feature Fusion</span></td>
<td>LRNN Module (256) Concatenate</td>
<td>LRNN Module (256) Concatenate</td>
<td>LRNN Module (256) Concatenate<br>LRNN Module (256) Concatenate</td>
<td>LRNN Module (256) Concatenate<br>LRNN Module (256) Concatenate</td>
<td>LRNN Module (128)<br>Forward<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate<br>LRNN Module (128)<br>Forward<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate</td>
</tr>
<tr>
<td colspan="7">Global Pooling (8)</td>
</tr>
<tr>
<td colspan="7">Dropout (0.5)</td>
</tr>
<tr>
<td colspan="7">Softmax (10)</td>
</tr>
</table>

Table 1: Network architectures for CIFAR-10 experiments. In Network A, a variety of selections are tested (coded in blue): for Feature Fusion, we may choose Forward, Sum, or Concatenation; in the L-RNN module, GRU and vanilla RNNs are tested; max pooling or average pooling can be used for the global pooling. From Network A to D, the depth of the networks is gradually increased by adding CNN modules; for example, comparing C to B, two more CNN modules are added to B (coded in red). Comparing Networks E and F with the Baseline-CNN, L-RNN modules (green) are interleaved with CNN modules.

Other variations of the architectures include: firstly, we may use Forward, Sum, or Concatenation to fuse features; secondly, GRU and vanilla RNN units are compared for the L-RNN modules, with ReLU used in both cases as the non-linear activation; thirdly, both max pooling and average pooling are tested as the global pooling. For clarity, we name the networks by these variations in Table 2. When Forward is selected to fuse features, Network A-Forward simply follows the traditional CNN with pure feed-forward layers. A-Concat uses concatenation as an alternative, and A-Sum follows the idea of residual networks proposed in He et al. (2016b); the number of filters is gradually increased as the networks get deeper. To match dimensions for summation, \(1 \times 1\) convolution is used in A-Sum. In our experiments, we found that concatenation works better than sum (Table 2). Therefore, in all other architectures (B, C, D), as we gradually increase the network depth by adding CNN modules, we fuse the skip layers by alternating only between concatenation and forward.

Following VGG-net (Simonyan & Zisserman, 2015), in all architectures, convolutional kernels in the CNN module are of size \(3 \times 3\). Max pooling (\(2 \times 2\)) is used as intermediate pooling, and \(8 \times 8\) global pooling (average or max) is applied at the end. To avoid overfitting, we use dropout (0.5). Training details and recurrent units are described in Appendix A. Implementations are mostly based on Theano (Theano Development Team, 2016) with a single NVIDIA Titan X.

Dataset & Evaluation. We conducted experiments on the CIFAR-10 dataset, which consists of 40k training images, 10k validation and 10k testing images in 10 classes; each image is of \(32 \times 32\) pixels with RGB channels.
We augment the training data with simple transformations (rotation, flipping, scaling) on the fly. The mean image over the whole training set is subtracted from each image during training. Following the standard evaluation protocol, we report the *top1* error on the testing set.

Results & Discussion. We present detailed comparisons with other published methods in Table 2.

<table>
<tr>
<th>CIFAR-10</th> <th># Params</th> <th># Conv Layers</th> <th>Approx. Time / Epoch (s)</th> <th>Top1 Error(%)</th>
</tr>
<tr><td>ReNet (Visin et al., 2015)</td> <td>–</td> <td>0</td> <td>–</td> <td>12.35</td></tr>
<tr><td>NIN (Lin et al., 2013)</td> <td>–</td> <td>–</td> <td>–</td> <td>8.81</td></tr>
<tr><td>FitNet (Romero et al., 2014)</td> <td>2.5M</td> <td>19</td> <td>–</td> <td>8.39</td></tr>
<tr><td>Highway (Srivastava et al., 2015)</td> <td>2.3M</td> <td>19</td> <td>–</td> <td>7.54</td></tr>
<tr><td>ResNet-110 (He et al., 2016a)</td> <td>1.7M</td> <td>110</td> <td>–</td> <td>6.61</td></tr>
<tr><td>ResNet-164 (He et al., 2016b)</td> <td>1.7M</td> <td>164</td> <td>–</td> <td>5.46</td></tr>
<tr><td>Dense Net (Huang et al., 2016)</td> <td>27.2M</td> <td>100</td> <td>–</td> <td><b>3.74</b></td></tr>
<tr><td>Baseline-CNN-Avg</td> <td>1.56M</td> <td>7</td> <td>331</td> <td>9.07</td></tr>
<tr><td>Baseline-CNN-Max</td> <td>1.56M</td> <td>7</td> <td>331</td> <td>8.48</td></tr>
<tr><td>A-Concat-RNN-Avg</td> <td>0.9M</td> <td>5</td> <td>293</td> <td>7.65</td></tr>
<tr><td>A-Concat-RNN-Max</td> <td>0.9M</td> <td>5</td> <td>293</td> <td>7.43</td></tr>
<tr><td>A-Forward-GRU-Max</td> <td>1.68M</td> <td>5</td> <td>315</td> <td>7.57</td></tr>
<tr><td>A-Concat-GRU-Max</td> <td>1.95M</td> <td>5</td> <td>377</td> <td>7.35</td></tr>
<tr><td>A-Sum-GRU-Max</td> <td>1.99M</td> <td>5</td> <td>383</td> <td>7.69</td></tr>
<tr><td>B-GRU-Max</td> <td>2.3M</td> <td>9</td> <td>542</td> <td>6.62</td></tr>
<tr><td>B-RNN-Max</td> <td>1.27M</td> <td>9</td> <td>483</td> <td>6.78</td></tr>
<tr><td>C (GRU-Max)</td> <td>2.5M</td> <td>13</td> <td>726</td> <td>6.21</td></tr>
<tr><td>D (GRU-Max)</td> <td>3M</td> <td>19</td> <td>1321</td> <td>5.73</td></tr>
<tr><td>E (RNN-Max)</td> <td>0.97M</td> <td>7</td> <td>462</td> <td>5.96</td></tr>
<tr><td>F (RNN-Max)</td> <td>1.55M</td> <td>15</td> <td>(Tensorflow on 2 GPUs)</td> <td><b>5.39</b></td></tr>
</table>

Table 2: Comparison with previously published methods on CIFAR-10. The networks are named by the chosen operation at every step; for instance, *A-Forward-GRU-Max* refers to architecture A with Forward feature fusion, GRU in the L-RNN module, and max pooling as the final global pooling.

From the experimental results, we can draw the following conclusions:

Comparison of basic choices. Max pooling consistently performs better when used as the global pooling in our case; this is seen in the results of Baseline-CNN-Avg (9.07%) vs. Baseline-CNN-Max (8.48%), and A-Concat-RNN-Avg (7.65%) vs. A-Concat-RNN-Max (7.43%). One possible explanation would be that for classification tasks, decisions are based on the most salient features. In our experiments with shallow networks, the summing of residual connections shows no benefit compared to feed-forward or concatenation. This observation is made from the results of A-Forward-GRU-Max (7.57%), A-Concat-GRU-Max (7.35%) and A-Sum-GRU-Max (7.69%). Thus, as also employed in U-Net and DenseNet (Ronneberger et al., 2015; Huang et al., 2016), concatenation can be used as an alternative to summation in building deeper networks.
It can be seen that vanilla RNN units trained with Layer Normalization (Ba et al., 2016) can perform almost as well as GRUs, while saving a large number of parameters (compare A-Concat-RNN-Max with 0.9\( M \) parameters (7.43%) to A-Concat-GRU-Max with 1.95\( M \) parameters (7.35%), and B-RNN-Max with 1.27\( M \) parameters (6.78%) to B-GRU-Max with 2.3\( M \) parameters (6.62%)).

Networks with the L-RNN module stacked at the final stage. Even shallow networks with L-RNN modules (architecture A) can achieve performance comparable or superior to deep 19-layer architectures that require more parameters (e.g. Network A-Concat-RNN-Max (0.9\( M \)) vs. Highway (2.3\( M \))). This confirms that when an L-RNN module is stacked on top of CNNs, it is able to capture global information, avoiding the multiple-layer route to increasing receptive fields in standard architectures, e.g. in (Romero et al., 2014; Srivastava et al., 2015). As expected, networks can always improve classification performance by adding more CNN modules (going from architecture A to D). Network D with 19 convolutional layers performs better than ResNet-110 (5.73% vs. 6.61% *top1* error, though Network D has more parameters than ResNet-110), and slightly worse than ResNet-164 (5.73% vs. 5.46%). Thus, following this trend, it is reasonable to expect a benefit if L-RNN modules are combined with very deep networks, like the residual variants.

Networks with L-RNN modules interleaved with CNN modules. Comparing the performance of Baseline-CNN-Max (8.48%) with that of Network E (5.96%), there is a significant performance boost (2.5%) brought by simply inserting L-RNN modules. Network E also has other advantages over Networks A to D: the number of parameters, the network depth, and the running time. Furthermore, when we continue increasing the network depth and interleaving L-RNN modules, Network F achieves results (5.39%) comparable to ResNet-164 (5.46%) with fewer parameters (1.55\( M \) vs. 1.7\( M \)). This confirms that, firstly, L-RNN modules can be combined with very deep networks; and secondly, rather than hand-crafting kernel sizes, we should set the model free to learn contextual information at any stage.

5.2 SEMANTIC SEGMENTATION

In this section, we insert L-RNN modules into the VGG-16 network (pre-trained on ImageNet (Deng et al., 2009)), and fine-tune the entire network for the PASCAL VOC 2012 segmentation task. The objective is to boost the segmentation performance by providing contextual information via the L-RNNs. In particular, we consider the two FCN segmentation architectures originally introduced by Long et al. (2015), FCN-32s and FCN-8s; these are described below.

We proceed in three steps. First, we establish baselines by training our own FCN-32s and FCN-8s (Appendix C), and comparing their performance to those of Long et al. (2015). We also investigate the loss in performance as the fully connected (FC) layer is gradually reduced from 4096 to 512 channels; the reason for doing this is that when we insert the L-RNN module, its complexity (the dimension of the hidden units) depends on this number of channels, and so the overall complexity can be varied. In the second step, we insert L-RNNs into the FCN-32s architecture and evaluate the change in performance. Finally, we insert L-RNNs into the FCN-8s architecture and compare with previously published methods.

Dataset & Evaluation.
We used a training set consisting of the VOC2012 training data (1,464 images provided by the challenge organizers), augmented with the training and validation data of Hariharan et al. (2014), which extends the training set to a total of 11,685 images with pixel-level annotation. After removing the images overlapping between the VOC2012 validation data and this dataset, we are left with 346 images from the original VOC2012 validation set to validate our model. In all the following experiments, we use a single scale for the input images (\(384 \times 384\)), and only horizontal flipping is used for data augmentation. The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes.

5.2.1 BASELINE ARCHITECTURES AND TRAINING

Architecture & Training. In the FCN-32s, input images are passed through the whole network, ending with predictions of size \(12 \times 12 \times 21\); up-sampling layers are then used directly to map the predictions back to \(384 \times 384\) (a factor of 32). In the FCN-16s, instead of directly up-sampling by a factor of 32, the predictions are first up-sampled by 2 and summed with the stream predictions from pool4 (named after VGG-16), then up-sampled by a factor of 16. In the FCN-8s, the stream predictions from pool3 are further added to the results from the FCN-16s; thus, up-sampling by only a factor of 8 is needed (Appendix C).

For all the architectures, the base net (VGG-16) is pre-trained on ImageNet (Deng et al., 2009), and we further train on PASCAL VOC2012 for 50 epochs. As in the CIFAR-10 experiments, we iteratively increase or decrease the learning rate between \(10^{-3}\) and \(10^{-5}\) every 10 epochs. The 4096-channel architectures are trained first, and then the number of channels in the FC layer is gradually reduced by randomly cutting channels (e.g. from 4096 to 2048) and re-training the networks.

Results & Discussion. Table 3 shows the performance of the six baselines: FCN-32s and FCN-8s with the number of channels varying from 512 to 4096. We observe that reducing the nodes in the FC layers does produce a performance drop (from 4096 to 1024 nodes, about 1% mean IOU) in both FCN-32s and FCN-8s. Although the improvement from 1024 to 4096 nodes is tiny, the difference in the number of parameters is over 64 million. Consequently, in the following experiments we perform experiments based only on networks with 512, 1024 or 2048 channels (i.e. not 4096). Our trained FCN-8s exceeds the original performance reported for this architecture in Long et al. (2015) (64.4 vs. 61.3 mean IOU). Thus, we use our trained networks as baselines.

5.2.2 FCN-32s WITH L-RNN MODULES

Architecture & Training. The architecture FCN-32s(L-RNN) is shown in Figure 4; the convolutional part of the architecture is initialized with the pre-trained FCN-32s (2048 channels in the FC layer) baseline. Then, two 1D spatial RNNs are inserted into the fc1 layer in the horizontal direction, and two 1D spatial RNNs are inserted into the fc2 layer in the vertical direction. The convolution activations of fc1 are shared for both left-right and right-left scanning; similarly for fc2, the convolution activations are shared for up-down and down-up scanning. Thus the fc1 and fc2 layers, together with the added 1D spatial RNNs, form a complete L-RNN module. During training, as described in Section 4, the 1D spatial RNNs are initialized with a zero recurrence matrix. The entire network is then fine-tuned end-to-end with the PASCAL VOC2012 data.
We adopt RMSprop (Tieleman & Hinton, 2012) for 30 epochs with hyper-parameters \(lr = 10^{-4}\), \(\rho = 0.9\), \(\epsilon = 10^{-8}\), and then decrease the learning rate to \(lr = 10^{-5}\) for 10 epochs.

Results & Discussion. The results are shown in Table 3; compare the 32s rows with and without the L-RNN for the FC layers with 512, 1024, and 2048 channels. As can be seen, the addition of the L-RNN always improves the segmentation performance over the pre-trained FCN-32s baselines. However, the improvement is not large (about 1 to 1.5% mean IOU). This is because the receptive field in the fully connected layers of the FCN-32s is already sufficiently large to cover \(224 \times 224\) pixels of the input patch, and consequently the network is not able to benefit much from the context provided by the L-RNN. The benefit is greater when L-RNNs are added to the lower layers (where the receptive fields of the convolutions are much smaller), and we turn to that case next.

5.2.3 FCN-8s WITH L-RNN MODULES

Architecture & Training. The architecture FCN-8s(L-RNN) is shown in Figure 4. As with the FCN-32s architecture, 1D spatial RNNs are inserted into the fc1 and fc2 layers to form an L-RNN module. L-RNNs are also inserted into the lower layers, namely after the pool3 and pool4 layers. Unlike the FC layers in the FCN-32s, where the prediction for each central pixel comes from image patches of size \(224 \times 224\), the predictions from pool3 and pool4 are based on much smaller receptive fields on the image (around \(44 \times 44\) and \(100 \times 100\) pixels respectively). Thus, the inserted L-RNN modules must be able to model relatively long-range dependencies.

Figure 4: FCN-32s (above the blue dashed line) and FCN-8s with L-RNN modules. Spatial RNNs are inserted into the fully connected (FC) layers in all FCNs; every two FC layers form a complete L-RNN module. {384, 192, 96} indicate the spatial sizes of the feature maps. Kernel sizes for the fully connected layers (\(n\) is an experimental variable, the number of channels): fc1: \(7 \times 7 \times 512 \times n\), fc2: \(1 \times 1 \times n \times n\), fc3: \(1 \times 1 \times n \times 21\); fc4: \(1 \times 1 \times 512 \times 1024\), fc5: \(1 \times 1 \times 1024 \times 1024\), fc6: \(1 \times 1 \times 1024 \times 21\); fc7: \(1 \times 1 \times 256 \times 1024\), fc8: \(1 \times 1 \times 1024 \times 1024\), fc9: \(1 \times 1 \times 1024 \times 21\).

During training, the network is initialized from the FCN-8s baseline, and then fine-tuned using segmentation data; again the PASCAL VOC dataset is used. Furthermore, when comparing to the other previously published methods, the network is further trained on the COCO trainval dataset, and we use a densely connected CRF as post-processing (Krähenbühl & Koltun, 2012).

Results on the PASCAL VOC Validation set. The experimental results are shown in Table 3.
<table>
<tr>
<th>Type</th> <th># of channels in FC</th> <th>L-RNNs added</th> <th>Pixel Acc %</th> <th>Mean IOU %</th>
</tr>
<tr><td>32s</td> <td>512</td> <td>NO</td> <td>90.4</td> <td>61.5</td></tr>
<tr><td>32s</td> <td>1024</td> <td>NO</td> <td>90.5</td> <td>62.1</td></tr>
<tr><td>32s</td> <td>2048</td> <td>NO</td> <td>90.7</td> <td>62.7</td></tr>
<tr><td>32s</td> <td>4096</td> <td>NO</td> <td>90.7</td> <td>62.9</td></tr>
<tr><td>8s</td> <td>1024</td> <td>NO</td> <td>91.3</td> <td>63.8</td></tr>
<tr><td>8s</td> <td>2048</td> <td>NO</td> <td>91.2</td> <td>64.1</td></tr>
<tr><td>8s</td> <td>4096</td> <td>NO</td> <td>91.3</td> <td>64.4</td></tr>
<tr><td>8s (original, Long et al. (2015))</td> <td>4096</td> <td>--</td> <td>--</td> <td>61.3</td></tr>
<tr><td>32s</td> <td>512</td> <td>YES</td> <td>90.8</td> <td>62.7</td></tr>
<tr><td>32s</td> <td>1024</td> <td>YES</td> <td>90.9</td> <td>63.4</td></tr>
<tr><td>32s</td> <td>2048</td> <td>YES</td> <td>91.1</td> <td>64.2</td></tr>
<tr><td>8s</td> <td>2048</td> <td>YES</td> <td>92.6</td> <td>69.1</td></tr>
</table>

Table 3: Comparison of FCN networks on the PASCAL VOC2012 segmentation validation set.

Comparing the rows for 32s with and without the L-RNN to those for 8s with and without the L-RNN, we can draw the following conclusions:

Improvement due to the skip layers. It can be seen (for IOU) that going from FCN-32s(2048) to FCN-8s(2048), where there are additional skip layers, the performance is boosted from 62.7 to 64.1. The skip layers in the FCN-8s architecture introduce more parameters, but this is not the reason for the performance boost, since FCN-8s(2048) and FCN-32s(4096) have a similar number of parameters yet perform very differently (64.1 vs. 62.9). This observation confirms that the performance gain is brought by the skip layers, rather than by the increased number of parameters.

Improvement due to the L-RNN module. Inserting an L-RNN into the FC layers of FCN-32s(2048) improves the performance only from 62.7 to 64.2. However, as noted earlier, since the nodes in the FC layers already cover the entire input patch of size \(224 \times 224\), the L-RNN can contribute only a little context here. In contrast, adding L-RNNs to the FCN-8s brings a substantial improvement, from 64.1 (FCN-8s) to 69.1 (FCN-8s-LRNN). This does introduce more parameters, due to the recurrence term in the RNNs, but it is clear that the improvement comes mainly from the L-RNN modules inserted after pool3 and pool4 in the FCN-8s, rather than from the increased number of parameters. The reason is that, when comparing FCN-8s (2048 channels, without L-RNN) to FCN-8s (4096 channels, without L-RNN), although the number of parameters increases dramatically, the performance only increases from 64.1 to 64.4; FCN-8s (4096 channels, without L-RNN) has roughly the same number of parameters as FCN-8s (2048 channels, with L-RNNs), yet the latter raises the performance from 64.4 to 69.1. In conclusion, the L-RNN is able to learn contextual information over a much larger range than the receptive field of pure local convolutions.

Results on the PASCAL VOC Test set. Table 4 shows the results of the FCN-8s with L-RNNs on the PASCAL VOC test data, and also compares to others who have published on this dataset. The performance is far superior to the original result of Long et al. (2015) using an FCN-8s with 4096 channels (whereas only 2048 channels are used here).
We also compare to the dilated convolution network of Yu & Koltun (2016), obtaining comparable, though slightly better, performance. Note that in Yu & Koltun (2016), multi-scale contextual information is captured by explicitly designed dilated convolution kernels, while the L-RNN is able to learn contextual information implicitly. Finally, we compare to Zheng et al. (2015), who add a densely connected CRF to the FCN-8s. If we also add a dense CRF as post-processing, we boost the performance by about 1% in IOU (the same boost as obtained by Yu & Koltun (2016)).

In Figure 5, we show sample semantic segmentations on the PASCAL VOC2012 validation set. In each figure, we show our predictions and the results after CRF post-processing. Compared with the end-to-end trainable CRF-RNN (Zheng et al., 2015), our predictions miss small details, like the wheels of the bicycle, but show much better performance in determining the class of the segmented regions, something that context can really contribute to.

<table>
<tr>
<th>Methods</th> <th>P</th> <th>P+CRF</th> <th>P+COCO</th> <th>P+COCO+CRF</th>
</tr>
<tr><td>FCN-8s (Long et al., 2015)</td> <td>62.2</td> <td>n/a</td> <td>n/a</td> <td>n/a</td></tr>
<tr><td>CRF-RNNs (Zheng et al., 2015)</td> <td>n/a</td> <td>72.0</td> <td>n/a</td> <td>74.7</td></tr>
<tr><td>Dilated Conv. (Yu & Koltun, 2016)</td> <td>n/a</td> <td>n/a</td> <td>73.5</td> <td>74.7</td></tr>
<tr><td>FCN-8s-LRNN (2048)</td> <td><b>71.9</b></td> <td><b>72.7</b></td> <td><b>74.2</b></td> <td><b>75.7</b></td></tr>
</table>

Table 4: Comparison of mean IOU on the PASCAL VOC2012 segmentation Test set (all results are based on the VGG-16 net). Training is on P: PASCAL VOC2012; COCO: COCO dataset.

http://host.robots.ox.ac.uk:8080/anonymous/YJBLI7.html

6 CONCLUSION & FUTURE WORK

This paper has shown that the proposed L-RNN module is an alternative way of adding multi-level spatial context to a network. L-RNNs can be interleaved with convolutional layers to learn context at any stage; when the L-RNN is used only at the final stage after the CNNs, it gives shallow networks the receptive fields of far deeper networks. Furthermore, we have demonstrated that inserting L-RNNs can boost the performance of pre-trained networks, and given an initialization procedure that makes this training a simple matter of end-to-end fine-tuning.

There is much left to investigate using L-RNNs as a new building block, and we suggest some avenues here: (i) training the hybrid architectures on larger datasets, such as ImageNet (Deng et al., 2009), and learning representations that can be transferred to other vision tasks; (ii) a similar investigation for deep residual networks, where the residual blocks are either convolutional or L-RNNs; and (iii) including a CRF final layer in end-to-end training.

Figure 5: Qualitative Results. First column: input image. Second column: prediction from Zheng et al. (2015). Third column: prediction from our networks. Fourth column: CRF post-processing. Fifth column: ground-truth annotation.

REFERENCES

Ba, Jimmy Lei, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. https://arxiv.org/abs/1607.06450, 2016.

Bell, Sean, Zitnick, C Lawrence, Bala, Kavita, and Girshick, Ross. Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. CVPR, 2016.

Chen, Liang-Chieh, Papandreou, George, Kokkinos, Iasonas, Murphy, Kevin, and Yuille, Alan L.
Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv preprint arXiv:1606.00915, 2016.

Chung, Junyoung, Gulcehre, Caglar, Cho, Kyunghyun, and Bengio, Yoshua. Gated feedback recurrent neural networks. NIPS, 2015.

Dauphin, Yann N, Pascanu, Razvan, Gulcehre, Caglar, Cho, Kyunghyun, Ganguli, Surya, and Bengio, Yoshua. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. NIPS, 2014.

Deng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. ImageNet: A large-scale hierarchical image database. CVPR, 2009.

Graves, Alex and Schmidhuber, Jürgen. Offline handwriting recognition with multidimensional recurrent neural networks. NIPS, 2009.

Hariharan, Bharath, Arbeláez, Pablo, Girshick, Ross, and Malik, Jitendra. Simultaneous detection and segmentation. ECCV, 2014.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. CVPR, 2016a.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Identity mappings in deep residual networks. ECCV, 2016b.

Huang, Gao, Liu, Zhuang, and Weinberger, Kilian Q. Densely connected convolutional networks. https://arxiv.org/abs/1608.06993, 2016.

Krähenbühl, Philipp and Koltun, Vladlen. Efficient inference in fully connected CRFs with gaussian edge potentials. NIPS, 2012.

Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. ImageNet classification with deep convolutional neural networks. NIPS, 2012.

Liang, Ming, Hu, Xiaolin, and Zhang, Bo. Convolutional neural networks with intra-layer recurrent connections for scene labeling. NIPS, 2015.

Lin, Min, Chen, Qiang, and Yan, Shuicheng. Network in network. arXiv preprint arXiv:1312.4400, 2013.

Lin, Tsung-Yi, Maire, Michael, Belongie, Serge, Bourdev, Lubomir, Girshick, Ross, Hays, James, Perona, Pietro, Ramanan, Deva, Zitnick, C. Lawrence, and Dollár, Piotr. Microsoft COCO: Common objects in context. ECCV, 2014.

Liu, Sifei, Pan, Jinshan, and Yang, Ming-Hsuan. Learning recursive filters for low-level vision via a hybrid neural network. ECCV, 2016.

Long, Jonathan, Shelhamer, Evan, and Darrell, Trevor. Fully convolutional networks for semantic segmentation. CVPR, 2015.

Pinheiro, Pedro HO and Collobert, Ronan. Recurrent convolutional neural networks for scene labeling. ICML, 2014.

Romero, Adriana, Ballas, Nicolas, Kahou, Samira Ebrahimi, Chassang, Antoine, Gatta, Carlo, and Bengio, Yoshua. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.

Ronneberger, Olaf, Fischer, Philipp, and Brox, Thomas. U-net: Convolutional networks for biomedical image segmentation. MICCAI, 2015.

Simonyan, Karen and Zisserman, Andrew. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.

Srivastava, Rupesh K, Greff, Klaus, and Schmidhuber, Jürgen. Training very deep networks. NIPS, 2015.

Szegedy, Christian, Liu, Wei, Jia, Yangqing, Sermanet, Pierre, Reed, Scott E., Anguelov, Dragomir, Erhan, Dumitru, Vanhoucke, Vincent, and Rabinovich, Andrew. Going deeper with convolutions. CVPR, 2015.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.

Theis, Lucas and Bethge, Matthias. Generative image modeling using spatial LSTMs. NIPS, 2015.

Tieleman, Tijmen and Hinton, Geoffrey. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
Visin, Francesco, Kastner, Kyle, Cho, Kyunghyun, Matteucci, Matteo, Courville, Aaron, and Bengio, Yoshua. Renet: A recurrent neural network based alternative to convolutional networks. arXiv preprint arXiv:1505.00393, 2015.

Visin, Francesco, Ciccone, Marco, Romero, Adriana, Kastner, Kyle, Cho, Kyunghyun, Bengio, Yoshua, Matteucci, Matteo, and Courville, Aaron. Reseg: A recurrent neural network-based model for semantic segmentation. CVPR, 2016.

Yu, Fisher and Koltun, Vladlen. Multi-scale context aggregation by dilated convolutions. ICLR, 2016.

Zheng, Shuai, Jayasumana, Sadeep, Romera-Paredes, Bernardino, Vineet, Vibhav, Su, Zhizhong, Du, Dalong, Huang, Chang, and Torr, Philip HS. Conditional random fields as recurrent neural networks. ICCV, 2015.

Appendices

A TRAINING DETAILS FOR CIFAR-10

A.1 RNN WITH GATED RECURRENT UNITS

In the Layer-RNN, we test gated recurrent units (GRU) for the RNN blocks (Chung et al., 2015). The GRU has two gates, a reset gate \( r \) and an update gate \( z \). Intuitively, the reset gate determines how to combine the new input with the previous memory, and the update gate defines how much of the previous memory to use; thus, the hidden state \( s_t \) of the GRU at time \( t \) can be computed as:

\[ z = \sigma(x_t U^z + s_{t-1} W^z) \tag{10} \]
\[ r = \sigma(x_t U^r + s_{t-1} W^r) \tag{11} \]
\[ h = f(x_t U^h + (s_{t-1} \circ r) W^h) \tag{12} \]
\[ s_t = (1 - z) \circ h + z \circ s_{t-1} \tag{13} \]

A.2 VANILLA RNN WITH LAYER NORMALIZATION

To simplify the training process and reduce the number of parameters, we also test vanilla RNNs with Layer Normalization (Ba et al., 2016) for the RNN blocks. In a standard RNN, the pre-activations in the recurrent layer are calculated from the current input \( x_t \) and the previous hidden state \( h_{t-1} \), denoted \( a_t = U x_t + V h_{t-1} \). The layer-normalized output is computed as:

\[ h_t = f\left(\frac{g}{\sigma_t} \circ (a_t - \mu_t) + b\right) \tag{14} \]
\[ \mu_t = \frac{1}{H} \sum_{i=1}^{H} a_t^i \qquad \sigma_t = \sqrt{\frac{1}{H} \sum_{i=1}^{H} (a_t^i - \mu_t)^2} \tag{15} \]

where \( U \) is the input-to-hidden term, \( V \) is the hidden-to-hidden recurrence term, and \( b \) and \( g \) are the bias and gain parameters, of the same dimension as \( h_t \).

During training, we iteratively increase and decrease the learning rate (learning rate restarts) between \( 10^{-3} \) and \( 10^{-5} \), based on the conjecture (Figure 6) that networks tend to get trapped in regions with small derivatives, such as saddle points or bad local minima (Dauphin et al., 2014). Traditionally, the learning rate is decreased every several epochs, and the updates to the parameters depend on both the learning rate and the derivatives of the loss function. At the end of training, both of these terms tend to be very small, so it becomes difficult for the network to escape from such regions. In our training, we therefore restart the learning rate periodically (every 60 or 80 epochs), and decrease it gradually in between.

![Figure 6](page_109_1347_1342_246.png)

Figure 6: Intuitive Loss Surfaces. Deep neural networks may easily be trapped at saddle points or bad local minima.

B FINE-TUNING LAYER-RNNs WITH ZERO RECURRENT MATRIX

In this section, we derive the procedure for fine-tuning the recurrence matrix when it is initialized as zero.
We consider only 1D scan-lines of the spatial RNN, and therefore simplify the derivation to a 1D sequence; we also consider a fully connected layer for simplicity. \( L, L+1 \) denote layers, \( t \) refers to the index within the sequence, \( f \) refers to ReLU, and \( U, V \) refer to the input-to-hidden matrix and recurrence matrix respectively:

\[ s_t = UX_t^L + VX_{t-1}^{L+1} + b \tag{16} \]
\[ X_t^{L+1} = f(s_t) \tag{17} \]

Assume \( E \) denotes the loss function for a specific task. Since \( V \) is shared across the whole 1D sequence (of length \( T \)), the back-propagation *within* layer \( L+1 \) can then be derived as:

\[ \frac{\partial E}{\partial V} = \sum_T \sum_{t \leq T} \frac{\partial E}{\partial X_T^{L+1}} \cdot \frac{\partial X_T^{L+1}}{\partial X_t^{L+1}} \cdot \frac{\partial X_t^{L+1}}{\partial s_t} \cdot \frac{\partial s_t}{\partial V} \tag{18} \]

where

\[ \frac{\partial X_T^{L+1}}{\partial X_t^{L+1}} = \frac{\partial X_T^{L+1}}{\partial X_{T-1}^{L+1}} \frac{\partial X_{T-1}^{L+1}}{\partial X_{T-2}^{L+1}} \cdots \frac{\partial X_{t+1}^{L+1}}{\partial X_t^{L+1}} \quad \text{and} \quad \frac{\partial X_{t+1}^{L+1}}{\partial X_t^{L+1}} = V^T \cdot diag(f') \tag{19} \]

Each Jacobian \( \frac{\partial X_{t+1}^{L+1}}{\partial X_t^{L+1}} \) is a product of two matrices: (a) the recurrence weight matrix \( V \), and (b) the diagonal matrix composed of the derivative of ReLU (\( f' \)). Therefore, when \( V \) is initialized to zero at the starting point, there are no long-range dependencies: for \( t < T \), \( \frac{\partial X_{t+1}^{L+1}}{\partial X_t^{L+1}} \) is zero, and only the \( t = T \) terms survive:

\[ \frac{\partial E}{\partial V} = \sum_T \frac{\partial E}{\partial X_T^{L+1}} \cdot \frac{\partial X_T^{L+1}}{\partial s_T} \cdot \frac{\partial s_T}{\partial V} \quad \text{where} \quad \frac{\partial X_T^{L+1}}{\partial s_T} = f', \quad \frac{\partial s_T}{\partial V} = X_{T-1}^{L+1} \tag{20} \]

\[ V_1 = V_0 - \alpha \frac{\partial E}{\partial V} \quad \text{(gradient descent at the first iteration)} \tag{21} \]

Since \( V_0 \) is initialized as zero, \( V_1 = -\alpha \frac{\partial E}{\partial V} \). In other words, instead of initializing the recurrence matrix \( V \) randomly or as the identity matrix, we actually initialize it based on the features in a local neighbourhood (equation (20)). During the back-propagation of spatial RNNs, gradients flow *within* layers; \( \frac{\partial E}{\partial U} \) (*between layers*) is calculated in the same way as for normal convolutional layers.

C DETAILED DESCRIPTION OF THE FCNs USED IN THE PAPER

The complete FCN architectures used in the paper are shown in Figure 7.

![Figure 7](page_232_314_1167_388.png)

Figure 7: Complete FCNs used extensively in the paper.

In the FCN-32s, output feature maps of spatial size \(12 \times 12\) are directly up-sampled by a factor of 32. In the FCN-16s, output feature maps of spatial size \(12 \times 12\) are first up-sampled by 2, then summed with the prediction scores calculated from feature maps of spatial size \(24 \times 24\), and up-sampled by a factor of 16. In the FCN-8s, the summed prediction scores are further up-sampled by 2, then summed with the prediction scores calculated from feature maps of spatial size \(48 \times 48\), and up-sampled by a factor of 8.
Kernel sizes for the fully connected layers:

fc1: \(7 \times 7 \times 512 \times 4096\), fc2: \(1 \times 1 \times 4096 \times 4096\), fc3: \(1 \times 1 \times 4096 \times 21\);
fc4: \(1 \times 1 \times 512 \times 1024\), fc5: \(1 \times 1 \times 1024 \times 1024\), fc6: \(1 \times 1 \times 1024 \times 21\);
fc7: \(1 \times 1 \times 256 \times 1024\), fc8: \(1 \times 1 \times 1024 \times 1024\), fc9: \(1 \times 1 \times 1024 \times 21\).
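As a sanity check on these spatial sizes, the skip fusion of Figure 7 can be sketched in a few lines of NumPy (nearest-neighbour up-sampling stands in for the actual up-sampling layers, and the names are illustrative):

```python
import numpy as np

def upsample(x, factor):
    # nearest-neighbour stand-in for the learned up-sampling layers
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def fcn8s_fuse(score_fc, score_pool4, score_pool3):
    """Skip fusion of Figure 7: 12x12x21 scores from the FC stream,
    24x24x21 from the pool4 stream, 48x48x21 from the pool3 stream."""
    s16 = upsample(score_fc, 2) + score_pool4  # FCN-16s fusion
    s8 = upsample(s16, 2) + score_pool3        # FCN-8s fusion
    return upsample(s8, 8)                     # back to 384 x 384 x 21

out = fcn8s_fuse(np.zeros((12, 12, 21)),
                 np.zeros((24, 24, 21)),
                 np.zeros((48, 48, 21)))
assert out.shape == (384, 384, 21)
```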
This means that the entire network does not have to be trained from scratch, only the added L-RNNs are fine-tuned together with pre-trained networks, and the experiments show that this addition always improves performance. In Section 5 we experiment on the CIFAR-10 classification with the hybrid networks of increasing depths, by using Layer Normalization ([Ba et al., 2016]), we are able to train vanilla RNNs to match the performance of GRU ([Chung et al., 2015], while using fewer parameters. In addition, we fine-tune a truncated VGG-16 FCN base net for semantic segmentation on the Pascal VOC 2012 and COCO (Lin et al., 2014) dataset. It is worth noting that (broadly) recurrence can be used in feed-forward multi-layer convolutional neural network architectures in two ways: between layers, and within layers. For example, between-layer recurrence was used for scene labelling in (Liang et al., 2015)(Pinheiro & Collobert, 2014) with convolutions applied recursively on top of feature maps from different layers or raw input images. And in (Zheng et al., 2015), spatial dependencies are modelled explicitly for semantic segmentation with densely connected Gaussian CRFs by iterated application of bilateral filtering using between-layer recurrence. By contrast, our Layer-RNN architecture falls into the second category, where within-layer recurrence is used to capture dependencies. Others have learnt contextual information from within layer recurrence for tasks such as object detection (Bell et al., 2016), and low-level vision problems, such as de-noising, colourization and smoothing (Liu et al., 2016). We postpone discussing in detail the relationships of the proposed Layer-RNN modules to these architectures, and to that of ReNet (Visin et al., 2015) and ReSeg (Visin et al., 2016), until we have introduced the L-RNN in Section 2 2 LAYER-RNN ARCHITECTURE The architecture of the network (Figure 1) is composed of two parts. Local features are calculated by the low-level CNNs module, the Layer-RNN (L-RNN) module, consisting of several 1D spatial RNNs is applied to capture the spatial dependencies. By scanning across the feature maps in different directions, the complete L-RNN is able to learn the receptive field in an adaptive way, up to the size of the entire image. These two modules can be combined to build networks in various ways; for example, an L-RNN module can be stacked on top of several CNN modules at the final layer, or CNN and L-RNN modules can be interleaved at multiple levels. ![Basic Architecture: Given the input image, local features are calculated by the CNN module (A). In (B), two 1D spatial RNNs are applied to scan along each row independently from different directions, hidden states are calculated at every spatial step, and the output feature maps can either be concatenated or summed up. The receptive field for the black pixel in (B) is labelled in orange; In (C), two 1D spatial RNNs are applied to scan along each column from two directions. The combination of (B) and (C) defines the L-RNN module that is able to propagate information over the entire image.](page_370_1012_808_312.png) Figure 1: Basic Architecture: Given the input image, local features are calculated by the CNN module (A). In (B), two 1D spatial RNNs are applied to scan along each row independently from different directions, hidden states are calculated at every spatial step, and the output feature maps can either be concatenated or summed up. 
The receptive field for the black pixel in (B) is labelled in orange; In (C), two 1D spatial RNNs are applied to scan along each column from two directions. The combination of (B) and (C) defines the L-RNN module that is able to propagate information over the entire image. 2.1 LAYER-RNN MODULE As shown in Figure 1, the Layer-RNN (L-RNN) module is a combination of the 1D spatial recurrent modules (B) and (C). In each module, there are two 1D RNNs scanning across the feature maps horizontally or vertically from two directions (bidirectional spatial RNNs), and their hidden states are updated at every spatial step. Consequently, for each of the horizontal and vertical directions, two output feature maps are obtained with the same width and height as the input feature maps. In our implementation, we simply sum up these output feature maps (an alternative is to concatenate the output feature maps, but that would increase the number of parameters). More formally, assume the feature maps (layer \( L \)) coming into the L-RNN module are \( X^L \in \mathbb{R}^{m \times n \times d} \) and output \( X^{L+1} \) (layer \( L + 1 \)), where \( m, n, d \) refers to the width, height, and the number of feature maps respectively for the input layer. For simplicity, assume the input to the 1D spatial RNNs from \( X^L \) is a feature vector at each spatial location, each row or column on the feature maps is treated as one sequence. When scanning from left to right, the feature responses for location \( ij \) can be calculated: \[ x_{i,j}^{L+1} = f(U x_{i,j}^L + V x_{i,j-1}^{L+1} + b) \qquad \text{left to right} \] Where \( x_{i,0}^{L+1} = 0,\ x_{i,j}^L \in \mathbb{R}^{d \times 1},\ x_{i,j}^{L+1}, x_{i,j-1}^{L+1} \in \mathbb{R}^{D \times 1},\ U \in \mathbb{R}^{D \times d},\ V \in \mathbb{R}^{D \times D},\ b \in \mathbb{R}^{D \times 1},\ D \) denotes the number of nodes used in the 1D spatial RNN, and \( f \) refers to the non-linearity function. 1D spatial RNNs scanning other directions can be calculated similarly. Notice that, the first term of equation (1) encodes local information independently, resembling the normal convolutional layer, and the second term characterizes the within-layer recurrence (\( U \) is a convolution matrix, \( V \) a recurrence matrix). We make use of this observation in Section 4. 2.2 DISCUSSION AND RELATION TO OTHER WORK As can be seen in Figure 1C, the effective receptive field can cover the entire image. However, the actual receptive field depends on the parameters of the RNNs, and can be learnt adaptively. As an insight to what is learnt, consider a separable filter, such as an axis aligned 2D Gaussian. Such filters can be applied exactly by a composition of 1D Gaussian convolutions in the horizontal and vertical directions. The 1D spatial RNNs can approximate finite 1D convolutions of this type. We next discuss the relation of the L-RNN to prior work. First, ReNets (Visin et al., 2015), which is an architecture completely made of 1D RNNs (i.e. no CNNs). In ReNets, the input images are first split into non-overlapping patches of size \( m \times n \times d \), where \( m, n, d \) refer to width, height and feature channels respectively. The 1D RNNs takes the flattened patch (\( mn \times d \)) as input, and outputs feature vector of size \( D \times 1 \), where \( D \) refers to the number of nodes used in the RNNs. In contrast, we interleave the L-RNN and CNN modules. 
There are two benefits of this: first, CNNs are more efficient at capturing local features than RNNs, the L-RNN stacked upon them is able to learn dependencies between local features (rather than the input channel reformatted); second, we are able to introduce more non-linearities between the hierarchical layers (through the convolutional+ReLU and pooling layers), and a RNN provides non-linearities within the same layer. The 2D-RNN, proposed in (Graves & Schmidhuber, 2009; Theis & Bethge, 2015), is able to scan across the image or feature maps row-by-row, or column-by-column sequentially, with each RNN node accept input from three sources, namely, projections of current input, and feedbacks from the two neighbour nodes. By contrast, we use unidirectional 1D spatial RNNs, with each hidden node only accepting feedbacks from its previous node. Another advantage of our model is that rows or columns can be processed in parallel on GPUs, and training time is shortened. Bell et al. (2016) (Inside-Outside Net) and Visin et al. (2016) (ReSeg) describe similar ideas for object detection and semantic segmentation. Both architectures follow a pipeline that consists of a CNN feature extractor (VGG Net) followed by spatial RNNs at the final prediction stage. In contrast, we treat the L-RNN module as a general computational layer, that can be inserted into any layer of modern architectures, and interleaved with CNN modules. This enables a network to be capable of learning contextual information in a flexible way at multiple levels, rather than with hand-crafted kernel sizes and receptive fields. Note that the vanilla RNN unit consists of two terms, a local term and a recurrence term, where the local term is exactly the convolution operation. Therefore, the spatial RNN can be seen as a generalisation of the convolutional layer, and in the worst case, when the RNN learns no context, the layer simply becomes a convolutional one. For tasks with limited data (semantic segmentation in our case), we propose a regime for inserting the L-RNN into the pre-trained FCN and fine-tuning the entire network end-to-end. This means that we directly increase the representational power of the model, and set the pre-trained model free to learn contextual information if it is needed. 3 CNNs & Layer-RNN MODULES In this section, we describe the architecture for incorporating 1D spatial RNNs into the computational block of a Residual Networks (He et al., 2016b), and also discuss fusion methods for such blocks. We start with the standard residual block of He et al. (2016b) (Figure 2(a)), and then replace the included CNN layer with bidirectional spatial RNNs, to includ a L-RNN module instead. ![Basic modules for classification: CNN module and L-RNN module diagrams](page_184_370_1092_246.png) Figure 2: Basic Modules for Classification: CNN module is defined in a same way as ResNet (He et al., 2016b). L-RNN module is defined as a cascade of bidirectional spatial RNNs. Forward, Sum or Concatenation can be used for skip layers. Batch Normalizations are used when training the architecture from scratch (Section 5.1). We consider three fusion options for combining the features from such blocks with the input to subsequent layers; namely forward, sum and concatenation. Forward refers to the traditional feed-forward architectures: \[ X^{L+1} = F(X^L, W) \] (2) i.e. 
the block simply becomes a new layer; sum denotes the method of the original residual networks: \[ X^{L+1} = X^L + F(X^L, W) \] (3) so that the L-RNN module acts as a residual block; whilst, in concatenation, features from multiple layers (of the same spatial sizes) are concatenated: \[ X^{L+1} = [X^L; F(X^L, W)] \] (4) where \( [\,\cdot\,;\,\cdot\,] \) refers to concatenation. Therefore, the number of channels of the output feature maps will be the sum of the channels of the two concatenated layers (increasing the number of parameters in the next layers). In the experimental evaluation of Section 5.1 we compare these options. 4 ADDING A LAYER-RNN TO A PRE-TRAINED CNN In this section, we describe how a Layer-RNN module can be seamlessly inserted into a pre-trained CNN. In a typical scenario, the CNN would be trained for classification on ImageNet (where there are copious annotations). After inserting the L-RNN modules, the hybrid L-RNN network can then be fine-tuned for a new task such as pixel-level prediction, e.g. semantic segmentation (where the annotated data is usually more limited). This trick naturally allows multi-level contextual information to be effortlessly incorporated. Avoiding training the network from scratch means the entire network can be re-purposed with the available annotations and trained end-to-end for the new task, whilst benefiting from the earlier classification training. We illustrate the idea using a 1D convolution, but the same principles hold for the entire L-RNN module. As shown in Figure 3, the canonical CNN architecture for a 1D convolution can be denoted as: \[ X^{L+1} = f(W * X^L + b) \] (5) where \( * \) refers to convolution, \( W \) and \( b \) are the parameters of the CNN, and \( L, L+1 \) denote the layers. The 1D spatial RNN can be written as: \[ X_i^{L+1} = f(U * X_i^L + V X_{i-1}^{L+1} + b) \] (6) where \( U, V, b \) refer to the parameters that are shared across the whole scan-line. Notice that the 1D spatial RNN is designed to incorporate two terms: projections from the local region (input-to-hidden) and a recurrence term from the previous hidden unit (hidden-to-hidden). Figure 3: CNNs & Spatial RNNs. Spatial RNNs can be re-expressed as a two-step process, CNNs (local features) + recurrence. The similarity between CNNs and spatial RNNs is highlighted by the yellow box; the difference is shown in the blue box and arrow. In fact, it is the presence of a non-zero recurrence matrix \( V \) that characterizes the 1D spatial RNN, and it can be calculated in a two-step way as: \[ X^{inter} = U * X^L + b \quad \text{(Convolution)} \tag{7} \] \[ X_i^{L+1} = f(X_i^{inter}) \quad (i = 1, \text{zero initial states}) \tag{8} \] \[ X_i^{L+1} = f(X_i^{inter} + V X_{i-1}^{L+1}) \quad (i > 1) \tag{9} \] By interpreting the recurrence in this way, 1D spatial RNNs can be constructed by inserting recurrence directly into any convolutional layer, right after the convolution. If the recurrence matrix \( V \) is initialized as zero, and ReLU is the activation function, then the 1D spatial RNN will be initialized exactly as the pre-trained CNN. The complete L-RNN can be constructed by inserting two bidirectional spatial RNNs into subsequent layers of the pre-trained CNNs. We derive the expression of the within-layer gradient for use in back-prop fine-tuning in Appendix B. 5 EXPERIMENTAL EVALUATION We test the proposed Layer-RNN on two supervised learning tasks: CIFAR-10 classification in Section 5.1 and PASCAL VOC 2012 segmentation in Section 5.2.
5.1 IMAGE CLASSIFICATION In this section, we investigate classification performance under variations in an architecture containing L-RNN modules. We vary the depth of the network, the number and position of the L-RNN modules, the type of recurrent units in RNNs, the pooling mechanisms for the last pooling layer, and the method of fusing the block outputs. Architectures: An overview of the architectures is given in Table 1 (with the fundamental building modules (CNN and L-RNN) adapted from Figure 2). There are two principal architectural variations. The first variation is that, from Networks A to D, we gradually increase the network depth by adding CNN Modules, with the L-RNN module always stacked at the final stage to capture global information over the entire image, in a similar manner to the fully connected layers or average pooling in other networks. Network A has 5 convolutional layers. The second principal variation, in Networks E and F, is to interleave CNN and L-RNN modules. This means that the network is capable of learning representations across large spatial footprints at any stage in the network. To show the effectiveness of adding L-RNN modules, we include a Baseline-CNN composed of only convolutional layers (7 layers, with concatenation used at every skip layer). Network E is built upon the Baseline-CNN by inserting L-RNN modules before CNN modules at multiple stages. To make sure the performance gain is not from the increased number of parameters, we cut down the number of filters in the last CNN module to 128 (this number is 256 in the Baseline-CNN). Network F uses more convolutional layers interleaved with L-RNN modules. <table> <tr> <th>Baseline-CNN</th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> <th>E</th> <th>F</th> </tr> <tr> <td colspan="7">input (\(32 \times 32 \times 3\))</td> </tr> <tr> <td colspan="7">convolution (\(3 \times 3 \times 64\))</td> </tr> <tr> <td>CNN Module (\(3 \times 3 \times 64\)) Concatenate</td> <td>CNN Module (\(3 \times 3 \times 64\))<br><span style="color:blue;">Feature Fusion</span></td> <td>CNN Module (\(3 \times 3 \times 64\))<br>Forward<br>CNN Module (\(3 \times 3 \times 64\)) Concatenate</td> <td>CNN Module (\(3 \times 3 \times 64\))<br>Forward<br>CNN Module (\(3 \times 3 \times 64\)) Concatenate</td> <td>CNN Module (\(3 \times 3 \times 64\))<br>Forward<br>CNN Module (\(3 \times 3 \times 64\)) Concatenate<br>CNN Module (\(3 \times 3 \times 128\)) Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td> <td>CNN Module (\(3 \times 3 \times 64\))<br>Forward<br>CNN Module (\(3 \times 3 \times 64\)) Concatenate<br>CNN Module (\(3 \times 3 \times 128\)) Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td> <td>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate</td> </tr> <tr> <td colspan="7">MaxPooling (2)</td> </tr> <tr> <td>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td> <td>CNN Module (\(3 \times 3 \times 128\))<br><span style="color:blue;">Feature Fusion</span></td> <td>CNN Module (\(3 \times 3 \times 128\))<br>Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td> <td>CNN Module (\(3 \times 3 \times 128\))<br>Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td> <td>CNN Module (\(3 \times 3 \times 128\))<br>Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate<br>CNN Module (\(3 \times 3 \times 128\)) Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td>
<td>CNN Module (\(3 \times 3 \times 128\))<br>Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate<br>CNN Module (\(3 \times 3 \times 128\)) Forward<br>CNN Module (\(3 \times 3 \times 128\)) Concatenate</td> <td>LRNN Module (128)<br>Forward<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate<br>LRNN Module (128)<br>Forward<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate</td> </tr> <tr> <td colspan="7">MaxPooling (2)</td> </tr> <tr> <td>CNN Module (\(3 \times 3 \times 256\)) Concatenate</td> <td>LRNN Module (256)<br><span style="color:blue;">Feature Fusion</span></td> <td>LRNN Module (256) Concatenate</td> <td>LRNN Module (256) Concatenate</td> <td>LRNN Module (256) Concatenate<br>LRNN Module (256) Concatenate</td> <td>LRNN Module (256) Concatenate<br>LRNN Module (256) Concatenate</td> <td>LRNN Module (128)<br>Forward<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate<br>LRNN Module (128)<br>Forward<br>CNN Module (\(3 \times 3 \times 64\))<br>Concatenate</td> </tr> <tr> <td colspan="7">Global Pooling (8)</td> </tr> <tr> <td colspan="7">Dropout (0.5)</td> </tr> <tr> <td colspan="7">Softmax (10)</td> </tr> </table> Table 1: Network architectures for CIFAR-10 experiments. In Network A, a variety of selections are tested (coded in blue): in Feature Fusion, we may choose Forward, Sum, or Concatenation; in the LRNN module, GRU and vanilla RNNs are tested; max pooling or average pooling can be used for global pooling. From Network A to D, the depth of the networks is gradually increased by adding CNN modules; for example, comparing C to B, two more CNN modules are added to B (coded in red). Comparing Networks E and F with the Baseline-CNN, LRNN modules (green) are interleaved with CNN modules. Other architectural variations include: firstly, we may use Forward, Sum, or Concatenation to fuse features; secondly, GRU and vanilla RNN units are compared for the L-RNN modules, with ReLU used as the non-linear activation in both cases; thirdly, both max pooling and average pooling are tested as global pooling. For clarity, we name the networks by these variations in Table 2. When Forward is selected to fuse features, Network A-Forward simply follows the traditional CNN with pure feed-forward layers. A-Concat uses concatenation as an alternative, and A-Sum follows the idea of residual networks proposed in He et al. (2016b); the number of filters is gradually increased as the networks get deeper. To match dimensions for summation, a \(1 \times 1\) convolution is used in A-Sum. In our experiments, we found that concatenation works better than sum (Table 2). Therefore, in all other architectures (B, C, D), as we gradually increase the network depth by adding CNN modules, we fuse the skip layers by only alternating between concatenation and forward. Following the VGG-net (Simonyan & Zisserman, 2015), in all architectures the convolutional kernels in the CNN Module are of size \(3 \times 3\). Max-pooling (\(2 \times 2\)) is used as intermediate pooling, and \(8 \times 8\) global pooling (average or max) is applied at the end. To avoid overfitting, we use dropout (0.5). Training details and recurrent units are described in Appendix A. Implementations are mostly based on Theano (Theano Development Team, 2016) with a single NVIDIA Titan X. Dataset & Evaluation. We conducted experiments on the CIFAR-10 dataset, which consists of 40k training images, 10k validation images and 10k testing images in 10 classes; each image is \(32 \times 32\) pixels with RGB channels.
We augment the training data with simple transformations (rotation, flipping, scaling) on the fly. The mean image over the whole training set is subtracted from each image during training. Following the standard evaluation protocol, we report the *top1* error on the testing set. Results & Discussion. We present detailed comparisons with other published methods in Table 2. <table> <tr> <th>CIFAR-10</th> <th># Params</th> <th># Conv Layers</th> <th>Approx. Time / Epoch (s)</th> <th>Top1 Error(%)</th> </tr> <tr> <td>ReNet (Visin et al., 2015)</td> <td>–</td> <td>0</td> <td>–</td> <td>12.35</td> </tr> <tr> <td>NIN (Lin et al., 2013)</td> <td>–</td> <td>–</td> <td>–</td> <td>8.81</td> </tr> <tr> <td>FitNet (Romero et al., 2014)</td> <td>2.5M</td> <td>19</td> <td>–</td> <td>8.39</td> </tr> <tr> <td>Highway (Srivastava et al., 2015)</td> <td>2.3M</td> <td>19</td> <td>–</td> <td>7.54</td> </tr> <tr> <td>ResNet-110 (He et al., 2016a)</td> <td>1.7M</td> <td>110</td> <td>–</td> <td>6.61</td> </tr> <tr> <td>ResNet-164 (He et al., 2016b)</td> <td>1.7M</td> <td>164</td> <td>–</td> <td>5.46</td> </tr> <tr> <td>Dense Net (Huang et al., 2016)</td> <td>27.2M</td> <td>100</td> <td>–</td> <td><b>3.74</b></td> </tr> <tr> <td>Baseline-CNN-Avg</td> <td>1.56M</td> <td>7</td> <td>331</td> <td>9.07</td> </tr> <tr> <td>Baseline-CNN-Max</td> <td>1.56M</td> <td>7</td> <td>331</td> <td>8.48</td> </tr> <tr> <td>A-Concat-RNN-Avg</td> <td>0.9M</td> <td>5</td> <td>293</td> <td>7.65</td> </tr> <tr> <td>A-Concat-RNN-Max</td> <td>0.9M</td> <td>5</td> <td>293</td> <td>7.43</td> </tr> <tr> <td>A-Forward-GRU-Max</td> <td>1.68M</td> <td>5</td> <td>315</td> <td>7.57</td> </tr> <tr> <td>A-Concat-GRU-Max</td> <td>1.95M</td> <td>5</td> <td>377</td> <td>7.35</td> </tr> <tr> <td>A-Sum-GRU-Max</td> <td>1.99M</td> <td>5</td> <td>383</td> <td>7.69</td> </tr> <tr> <td>B-GRU-Max</td> <td>2.3M</td> <td>9</td> <td>542</td> <td>6.62</td> </tr> <tr> <td>B-RNN-Max</td> <td>1.27M</td> <td>9</td> <td>483</td> <td>6.78</td> </tr> <tr> <td>C (GRU-Max)</td> <td>2.5M</td> <td>13</td> <td>726</td> <td>6.21</td> </tr> <tr> <td>D (GRU-Max)</td> <td>3M</td> <td>19</td> <td>1321</td> <td>5.73</td> </tr> <tr> <td>E (RNN-Max)</td> <td>0.97M</td> <td>7</td> <td>462</td> <td>5.96</td> </tr> <tr> <td>F (RNN-Max)</td> <td>1.55M</td> <td>15</td> <td>(TensorFlow on 2 GPUs)</td> <td><b>5.39</b></td> </tr> </table> Table 2: Comparison with previously published methods on CIFAR-10. The networks are named by the chosen operation at every step; for instance, *A-Forward-GRU-Max* refers to architecture A with Forward feature fusion, GRU in the L-RNN Module, and max pooling as the final global pooling. From the experimental results, we can draw the following conclusions: Comparison of basic choices. Max pooling consistently performs better when used as the global pooling in our case; this is seen in the results for Baseline-CNN-Avg (9.07%) vs. Baseline-CNN-Max (8.48%), and A-Concat-RNN-Avg (7.65%) vs. A-Concat-RNN-Max (7.43%). One possible explanation would be that for classification tasks, decisions are based on the most salient features. In our experiments on shallow networks, the summing of residual connections shows no benefit compared to feed-forward or concatenation. This observation is made from the results for A-Forward-GRU-Max (7.57%), A-Concat-GRU-Max (7.35%) and A-Sum-GRU-Max (7.69%). Thus, as also employed in U-Net or DenseNet (Ronneberger et al., 2015; Huang et al., 2016), concatenation can be used as an alternative to summation in building deeper networks.
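As a concrete illustration of the three fusion options behind this naming scheme (equations (2)–(4)), here is a minimal NumPy sketch under our own naming, not the paper's implementation:

```python
import numpy as np

def fuse(X, F_X, mode):
    """Fusion options of equations (2)-(4): X is the block input and F_X
    the block output, with matching spatial sizes (e.g. height x width x c).
    'forward' simply passes the block output on; 'sum' forms a residual
    block; 'concat' stacks channels, so subsequent layers see 2c channels."""
    if mode == "forward":
        return F_X                                  # equation (2)
    if mode == "sum":
        return X + F_X                              # equation (3)
    if mode == "concat":
        return np.concatenate([X, F_X], axis=-1)    # equation (4)
    raise ValueError(f"unknown fusion mode: {mode}")

X, F_X = np.random.randn(8, 8, 64), np.random.randn(8, 8, 64)
assert fuse(X, F_X, "sum").shape[-1] == 64      # channels preserved
assert fuse(X, F_X, "concat").shape[-1] == 128  # channels doubled
```

Note that when the two inputs differ in channel count, as in A-Sum, the paper matches dimensions with a \(1 \times 1\) convolution before summing.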
It can be seen that vanilla RNN units trained with Layer Normalization (Ba et al., 2016) can perform almost as well as GRUs, while saving a large number of parameters (compare the results of A-Concat-RNN-Max with 0.9\( M \) parameters (7.43%) with those of A-Concat-GRU-Max with 1.95\( M \) parameters (7.35%), and B-RNN-Max with 1.27\( M \) parameters (6.78%) vs. B-GRU-Max with 2.3\( M \) parameters (6.62%)). Networks with the L-RNN module stacked at the final stage. Even shallow networks with L-RNN modules (architectures A) can achieve performance comparable or superior to deep 19-layer architectures that require more parameters (e.g. Network A-Concat-RNN-Max (0.9\( M \)) vs. Highway (2.3\( M \))). This confirms that when an L-RNN module is stacked on top of CNNs, it is able to capture global information, avoiding the multiple-layer route to increasing receptive fields in standard architectures, e.g. in (Romero et al., 2014; Srivastava et al., 2015). As expected, networks can always improve classification performance by adding more CNN modules (going from architecture A to D). Network D with 19 convolutional layers performs better than ResNet-110 (5.73% vs. 6.61% \( top1 \) error, though Network D has more parameters than ResNet-110) and is slightly worse than ResNet-164 (5.73% vs. 5.46%). Thus, following this trend, it is reasonable to expect a benefit if L-RNN Modules are combined with very deep networks, like the residual variants. Networks with L-RNN modules interleaved with CNN modules. Comparing the performance of Baseline-CNN-Max (8.48%) with that of Network E (5.96%), there is a significant performance boost (2.5%) brought by simply inserting L-RNN modules. Network E also has other advantages over networks A to D: the number of parameters, network depth, and running time. Furthermore, when we continue increasing the network depth and interleaving L-RNN modules, Network F achieves results (5.39%) comparable to ResNet-164 (5.46%) with fewer parameters (1.55\( M \) vs. 1.7\( M \)). This confirms that, firstly, L-RNN modules can be combined with very deep networks; and secondly, rather than hand-crafting kernel sizes, we should leave the model free to learn contextual information at any stage. 5.2 SEMANTIC SEGMENTATION In this section, we insert L-RNN modules into the VGG-16 network (pre-trained on ImageNet (Deng et al., 2009)), and fine-tune the entire network for the PASCAL VOC 2012 segmentation task. The objective is to boost the segmentation performance by providing contextual information via the L-RNNs. In particular, we consider the two FCN segmentation architectures originally introduced by Long et al. (2015), FCN-32s and FCN-8s; these are described below. We proceed in three steps: first, we establish baselines by training our own FCN-32s and FCN-8s (Appendix C), and compare their performance to that of Long et al. (2015). We also investigate the loss in performance as the fully connected (FC) layer is gradually reduced from 4096 to 512 channels. The reason for doing this is that when we insert the L-RNN module, its complexity (the dimension of the hidden units) depends on this number of channels, and so the overall complexity can be varied. In the second step, we insert L-RNNs into the FCN-32s architecture and evaluate the change in performance. Finally, we insert L-RNNs into the FCN-8s architecture and compare with previously published methods. Dataset & Evaluation.
We used a training set consisting of the VOC2012 training data (1464 images provided by the challenge organizers), augmented with training and validation data from Hariharan et al. (2014), which further extends the training set to a total of 11,685 images with pixel-level annotation. After removing the images that overlap with the VOC2012 validation data, we are left with 346 images from the original VOC2012 validation set to validate our model. In all the following experiments, we use a single scale for the input images (384 × 384), and only horizontal flipping is used for data augmentation. The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes. 5.2.1 BASELINE ARCHITECTURES AND TRAINING Architecture & Training. In the FCN-32s, input images are passed through the whole network, ending with predictions of \(12 \times 12 \times 21\); up-sampling layers are then used directly to map the predictions back to \(384 \times 384\) (a factor of 32). In the FCN-16s, instead of directly up-sampling by 32 times, the predictions are first up-sampled by 2 and summed with the stream predictions from pool4 (named following VGG16), then up-sampled by 16 times. In the FCN-8s, the stream predictions from pool3 are further added to the results from FCN-16s, so that only factor-8 up-sampling layers are needed (Appendix C). For all the architectures, the base net (VGG16) is pre-trained on ImageNet (Deng et al., 2009); we further train on PASCAL VOC2012 for 50 epochs. As in the CIFAR-10 experiments, we iteratively increase or decrease the learning rate between \(10^{-3}\) and \(10^{-5}\) after every 10 epochs. The 4096-channel architectures are trained first, and then the number of channels in the FC layer is gradually reduced by randomly cutting them (e.g. from 4096 to 2048) and re-training the networks. Results & Discussion. Table 3 shows the performance of the six baselines: FCN-32s and FCN-8s with the number of channels varying from 512 to 4096. We observe that reducing the number of nodes in the FC layers does produce a performance drop (about 1% mean IOU from 4096 to 1024 nodes) in both FCN-32s and FCN-8s. Although the improvement from 1024 to 4096 nodes is tiny, the difference in the number of parameters is over 64 million. Consequently, in the following experiments we choose to use networks with 512, 1024 or 2048 channels only (i.e. not 4096). Our trained FCN-8s exceeds the original performance reported for this architecture by Long et al. (2015) (64.4 vs. 61.3 mean IOU). Thus, we use our trained networks as a baseline. 5.2.2 FCN-32s WITH L-RNN MODULES Architecture & Training. The architecture FCN-32s(L-RNN) is shown in Figure 4; the convolutional part of the architecture is initialized with the pre-trained FCN-32s (2048 channels in the FC layer) baseline. Then, two 1D spatial RNNs are inserted into the fc1 layer in the horizontal direction, and two 1D spatial RNNs are inserted into the fc2 layer in the vertical direction. The convolution activations of fc1 are shared for both left-right and right-left scanning. Similarly for fc2, the convolution activations are shared for up-down and down-up scanning. Thus the fc1 and fc2 layers, together with the added 1D spatial RNNs, form a complete L-RNN module. During training, as described in Section 4, the 1D spatial RNNs are initialized with a zero recurrence matrix (a sketch of this initialization is given below). The entire network is then fine-tuned end-to-end with the PASCAL VOC2012 data.
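A minimal NumPy sketch of the zero-recurrence initialization, following the two-step computation of equations (7)–(9); the names are illustrative, and it operates on a single scan-line rather than full feature maps:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def rnn_from_pretrained_conv(X_inter, V):
    """Two-step spatial RNN of equations (7)-(9). X_inter is the output of
    the pre-trained convolution (U * X + b) for one scan-line, shape (n, D);
    V is the (D, D) recurrence matrix. With V initialized to zero and ReLU
    as the activation, the output equals relu(X_inter), i.e. exactly the
    response of the pre-trained CNN layer."""
    n, _ = X_inter.shape
    H = np.zeros_like(X_inter)
    H[0] = relu(X_inter[0])                     # i = 1, zero initial state
    for i in range(1, n):
        H[i] = relu(X_inter[i] + V @ H[i - 1])  # i > 1
    return H

# Sanity check of the initialization: zero recurrence reproduces the CNN.
X_inter = np.random.randn(7, 16)
assert np.allclose(rnn_from_pretrained_conv(X_inter, np.zeros((16, 16))),
                   relu(X_inter))
```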
We adopt RMS-prop (Tieleman & Hinton, 2012) for 30 epochs with hyper-parameters \(lr = 10^{-4}\), \(\rho = 0.9\), \(\epsilon = 10^{-8}\), and then decrease the learning rate to \(lr = 10^{-5}\) for 10 epochs. Results & Discussion. The results are shown in Table 3. Compare the 32s rows with and without the L-RNN for the FC layers with 512, 1024, and 2048 channels. As can be seen, the addition of the L-RNN always improves the segmentation performance over the pre-trained FCN-32s baselines. However, the improvement is not large, about 1 to 1.5% mean IOU. This is because the receptive field in the fully connected layers of FCN-32s is already sufficiently large to cover \(224 \times 224\) pixels of the input patch, and consequently the networks are not able to benefit much from the context provided by the L-RNN. The benefit is greater when L-RNNs are added to the lower layers (where the receptive fields of the convolutions are much smaller), and we turn to that case next. 5.2.3 FCN-8s WITH L-RNN MODULES Architecture & Training. The architecture FCN-8s(L-RNN) is shown in Figure 4. As with the FCN-32s architecture, 1D spatial RNNs are inserted into the fc1 and fc2 layers to form an L-RNN module. L-RNNs are also inserted into the lower layers, namely the pool3 and pool4 layers. Unlike the FC layers in the FCN-32s, where the prediction for each central pixel comes from image patches of size \(224 \times 224\), the predictions from pool3 and pool4 are based on much smaller receptive fields on the image (around \(44 \times 44\) and \(100 \times 100\) pixels respectively). Thus, the inserted L-RNN modules must be able to model relatively long-range dependencies. Figure 4: FCN-32s (above the blue dashed line) and FCN-8s with L-RNN modules. Spatial RNNs are inserted into the fully connected (FC) layers in all FCNs; every two FC layers form a complete L-RNN module. {384, 192, 96} indicate the spatial sizes of the feature maps. Kernel sizes for the fully connected layers (\(n\) is an experimental variable, the number of channels): fc1 : \(7 \times 7 \times 512 \times n\), fc2 : \(1 \times 1 \times n \times n\), fc3 : \(1 \times 1 \times n \times 21\), fc4 : \(1 \times 1 \times 512 \times 1024\), fc5 : \(1 \times 1 \times 1024 \times 1024\), fc6 : \(1 \times 1 \times 1024 \times 21\), fc7 : \(1 \times 1 \times 256 \times 1024\), fc8 : \(1 \times 1 \times 1024 \times 1024\), fc9 : \(1 \times 1 \times 1024 \times 21\). During training, the network is initialized from the FCN-8s baseline and then fine-tuned using segmentation data. Again the PASCAL VOC dataset is used. Furthermore, when comparing to the other previously published methods, the network is further trained on the COCO trainval dataset, and we use a densely connected CRF as post-processing (Krähenbühl & Koltun, 2012). Results on PASCAL VOC Validation set. The experimental results are shown in Table 3.
<table> <tr> <th>Type</th> <th># of channels in FC</th> <th>L-RNNs added</th> <th>Pixel Acc %</th> <th>Mean IOU %</th> </tr> <tr> <td>32s</td> <td>512</td> <td>NO</td> <td>90.4</td> <td>61.5</td> </tr> <tr> <td>32s</td> <td>1024</td> <td>NO</td> <td>90.5</td> <td>62.1</td> </tr> <tr> <td>32s</td> <td>2048</td> <td>NO</td> <td>90.7</td> <td>62.7</td> </tr> <tr> <td>32s</td> <td>4096</td> <td>NO</td> <td>90.7</td> <td>62.9</td> </tr> <tr> <td>8s</td> <td>1024</td> <td>NO</td> <td>91.3</td> <td>63.8</td> </tr> <tr> <td>8s</td> <td>2048</td> <td>NO</td> <td>91.2</td> <td>64.1</td> </tr> <tr> <td>8s</td> <td>4096</td> <td>NO</td> <td>91.3</td> <td>64.4</td> </tr> <tr> <td>8s (original, Long et al. (2015))</td> <td>4096</td> <td>--</td> <td>--</td> <td>61.3</td> </tr> <tr> <td>32s</td> <td>512</td> <td>YES</td> <td>90.8</td> <td>62.7</td> </tr> <tr> <td>32s</td> <td>1024</td> <td>YES</td> <td>90.9</td> <td>63.4</td> </tr> <tr> <td>32s</td> <td>2048</td> <td>YES</td> <td>91.1</td> <td>64.2</td> </tr> <tr> <td>8s</td> <td>2048</td> <td>YES</td> <td>92.6</td> <td>69.1</td> </tr> </table> Table 3: Comparison of FCN networks on the PASCAL VOC2012 segmentation validation set. Comparing the rows for 32s with and without L-RNN to those for 8s with and without L-RNN, we can draw the following conclusions: Improvement due to the skip layers. It can be seen (for IOU) that going from FCN-32s(2048) to FCN-8s(2048), where there are additional skip layers, the performance is boosted from 62.7 to 64.1. The skip layers in the FCN-8s architecture introduce more parameters, but this is not the reason for the performance boost, since FCN-8s(2048) and FCN-32s(4096) have a similar number of parameters yet perform very differently (64.1 vs. 62.9). This observation confirms that the performance gain is brought by the skip layers, rather than the increased number of parameters. Improvement due to the L-RNN module. Inserting an L-RNN into the FC layers of FCN-32s(2048) improves the performance only from 62.7 to 64.2. However, as noted earlier, since the nodes in the FC layers already cover the entire input patch of size \(224 \times 224\), the L-RNN can contribute only a little context here. In contrast, adding L-RNNs to FCN-8s brings a substantial improvement, from 64.1 (FCN-8s) to 69.1 (FCN-8s-LRNN). This process introduces more parameters due to the recurrence term in the RNNs, but it is clear that the improvement comes mainly from the L-RNN modules inserted after pool3 and pool4 in FCN-8s, rather than from the increased number of parameters. The reason is that, comparing FCN-8s (2048 channels without L-RNN) to FCN-8s (4096 channels without L-RNN), although the number of parameters increases dramatically, the performance only increases from 64.1 to 64.4. Meanwhile, FCN-8s (4096 channels without L-RNN) has roughly the same number of parameters as FCN-8s (2048 channels with L-RNN), yet the latter improves the performance from 64.4 to 69.1. In conclusion, the L-RNN is able to learn contextual information over a much larger range than the receptive field of pure local convolutions. Results on PASCAL VOC Test set. Table 4 shows the results of the FCN-8s with L-RNNs on the PASCAL VOC test data, and also compares to others who have published on this dataset. The performance is far superior to the original result (Long et al., 2015) using an FCN-8s with 4096 channels (whereas only 2048 channels are used here).
We also compare to the dilated convolution network of Yu & Koltun (2016), obtaining comparable, though slightly better, performance. Note that in Yu & Koltun (2016), multi-scale contextual information is captured by explicitly designing dilated convolution kernels, while the L-RNN is able to learn contextual information implicitly. Finally, we compare to Zheng et al. (2015), who add a densely connected CRF to FCN-8s. If we also add a dense CRF as post-processing, we boost the performance by 1% in IOU (the same boost as obtained by Yu & Koltun (2016)). In Figure 5, we show sample semantic segmentations on the PASCAL VOC2012 validation set. In each figure, we show our predictions and the results after CRF post-processing. Compared with the end-to-end trainable CRF-RNN (Zheng et al., 2015), our predictions miss small details, like the wheel of the bicycle, but show much better performance in determining the class of the segmented regions, something that context can really contribute to. <table> <tr> <th>Methods</th> <th>P</th> <th>P+CRF</th> <th>P+COCO</th> <th>P+COCO+CRF</th> </tr> <tr> <td>FCN-8s (Long et al., 2015)</td> <td>62.2</td> <td>n/a</td> <td>n/a</td> <td>n/a</td> </tr> <tr> <td>CRF-RNNs (Zheng et al., 2015)</td> <td>n/a</td> <td>72.0</td> <td>n/a</td> <td>74.7</td> </tr> <tr> <td>Dilated Conv. (Yu & Koltun, 2016)</td> <td>n/a</td> <td>n/a</td> <td>73.5</td> <td>74.7</td> </tr> <tr> <td>FCN-8s-LRNN (2048)</td> <td><b>71.9</b></td> <td><b>72.7</b></td> <td><b>74.2</b></td> <td><b>75.7</b></td> </tr> </table> Table 4: Comparison of mean IOU on the PASCAL VOC2012 segmentation Test set. (All results are based on the VGG-16 net.) Training is on P: PASCAL VOC2012; COCO: COCO dataset. http://host.robots.ox.ac.uk:8080/anonymous/YJBLI7.html 6 CONCLUSION & FUTURE WORK This paper has shown that the proposed L-RNN module is an alternative way of adding multi-level spatial context to a network. In fact, L-RNNs can be interleaved with convolutional layers to learn context at any stage. When the L-RNN is only used at the final stage after the CNNs, it gives shallow networks the receptive fields of far deeper networks. Furthermore, we have demonstrated that inserting L-RNNs can boost the performance of pre-trained networks, and given an initialization procedure that makes this training a simple matter of end-to-end fine-tuning. There is much left to investigate using L-RNNs as a new building block, and we suggest some avenues here: (i) training the hybrid architectures on larger datasets, such as ImageNet (Deng et al., 2009), and learning representations that can be transferred to other vision tasks; (ii) a similar investigation for deep residual networks where the residual blocks are either convolutional or L-RNNs; and (iii) including a CRF final layer in end-to-end training. Figure 5: Qualitative Results. First column: input image. Second column: prediction from Zheng et al. (2015). Third column: prediction from our networks. Fourth column: CRF post-processing. Fifth column: ground-truth annotation.
reject
Reject
6.25
99c1b9d0bc42e00f8df00467997956ff5b4d8bef
iclr
2,017
A WAY OUT OF THE ODYSSEY: ANALYZING AND COMBINING RECENT INSIGHTS FOR LSTMs Shayne Longpre Salesforce Research Palo Alto, California slongpre@cs.stanford.edu Sabeek Pradhan Stanford University Palo Alto, California sabeekp@cs.stanford.edu Caiming Xiong, Richard Socher Salesforce Research Palo Alto, California {cxiong,rsocher}@salesforce.com ABSTRACT LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of augmentations and modifications to LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, average pooling, and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high-quality baseline model. 1 INTRODUCTION When exploring a new problem, having a simple yet competitive off-the-shelf baseline is fundamental to new research. For instance, Caruana et al. (2008) showed random forests to be a strong baseline for many high-dimensional supervised learning tasks. For computer vision, off-the-shelf convolutional neural networks (CNNs) have earned their reputation as a strong baseline (Sharif Razavian et al., 2014) and basic building block for more complex models like visual question answering (Xiong et al., 2016). For natural language processing (NLP) and other sequential modeling tasks, recurrent neural networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, with a linear projection layer at the end have begun to attain a similar status. However, the standard LSTM is in many ways lacking as a baseline. Zaremba (2015), Gal (2015), and others show that large improvements are possible using a forget bias, inverted dropout regularization, or bidirectionality. We make three major additions with similar improvements to off-the-shelf LSTMs: Monte Carlo model averaging, embed average pooling, and residual connections. We analyze these and other more common improvements. 2 LSTM NETWORK LSTM networks are among the most commonly used models for tasks involving variable-length sequences of data, such as text classification. The basic LSTM layer consists of six equations: \[ i_t = \tanh(W_i x_t + R_i h_{t-1} + b_i) \tag{1} \] \[ j_t = \sigma(W_j x_t + R_j h_{t-1} + b_j) \tag{2} \] \[ f_t = \sigma(W_f x_t + R_f h_{t-1} + b_f) \tag{3} \] \[ o_t = \tanh(W_o x_t + R_o h_{t-1} + b_o) \tag{4} \] \[ c_t = i_t \odot j_t + f_t \odot c_{t-1} \tag{5} \] \[ h_t = o_t \odot \tanh(c_t) \tag{6} \] where \( \sigma \) is the sigmoid function, \( \odot \) is element-wise multiplication, and \( v_t \) is the value of variable \( v \) at timestep \( t \). Each layer receives \( x_t \) from the layer that came before it and \( h_{t-1} \) and \( c_{t-1} \) from the previous timestep, and it outputs \( h_t \) to the layer that comes after it and \( h_t \) and \( c_t \) to the next timestep. The \( c \) and \( h \) values jointly constitute the recurrent state of the LSTM that is passed from one timestep to the next. (a) Monte Carlo for SST fine-grained error. (b) Monte Carlo for IMDB binary error. Figure 1: A comparison of the performance of Monte Carlo averaging, over sample size, to regular single-sample inverted dropout at test-time.
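To make the layer equations above concrete, here is a minimal NumPy sketch of a single timestep; it follows equations (1)–(6) exactly as written (note the paper's notation, with \( i \) and \( o \) using tanh), and the parameter names are our own:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One timestep of the LSTM layer above; p maps parameter names
    (W_*, R_*, b_* for * in {i, j, f, o}) to arrays."""
    i = np.tanh(p["W_i"] @ x_t + p["R_i"] @ h_prev + p["b_i"])   # eq. (1)
    j = sigmoid(p["W_j"] @ x_t + p["R_j"] @ h_prev + p["b_j"])   # eq. (2)
    f = sigmoid(p["W_f"] @ x_t + p["R_f"] @ h_prev + p["b_f"])   # eq. (3)
    o = np.tanh(p["W_o"] @ x_t + p["R_o"] @ h_prev + p["b_o"])   # eq. (4)
    c = i * j + f * c_prev                                       # eq. (5)
    h = o * np.tanh(c)                                           # eq. (6)
    return h, c
```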
Since the \( h \) value completely updates at each timestep while the \( c \) value maintains part of its own value through multiplication by the forget gate \( f \), \( h \) and \( c \) complement each other very well, with \( h \) forming a “fast” state that can quickly adapt to new information and \( c \) forming a “slow” state that allows information to be retained over longer periods of time (Zaremba, 2015). While various papers have tried to systematically experiment with the 6 core equations constituting an LSTM (Greff et al., 2015; Zaremba, 2015), in general the basic LSTM equations have proven extremely resilient and, if not optimal, at least a local maximum. 3 MONTE CARLO MODEL AVERAGING It is common practice when applying dropout in neural networks to scale the weights up at train time (inverted dropout). This ensures that the expected magnitude of the inputs to any given layer is equivalent between train and test, allowing for an efficient computation of test-time predictions. However, for a model trained with dropout, test-time predictions generated without dropout merely approximate the ensemble of smaller models that dropout is meant to provide. A higher-fidelity method requires that test-time dropout be conducted in a manner consistent with how the model was trained. To achieve this, we sample \( k \) neural nets with dropout applied for each test example and average the predictions. With sufficiently large \( k \) this Monte Carlo average should approach the true model average (Srivastava et al., 2014). We show in Figure 1 that this technique can yield more accurate predictions on test-time data than the standard practice. This is demonstrated over a number of datasets, suggesting its applicability to many types of sequential architectures. While running multiple Monte Carlo samples is more computationally expensive, the overall increase is minimal as the process is only run on test-time forward passes and is highly parallelizable. We show that higher performance can be achieved with relatively few Monte Carlo samples, and that this number of samples is similar across different NLP datasets and tasks. We encountered one ambiguity of Monte Carlo model averaging that to our knowledge remains unaddressed in prior literature: there is relatively little exploration as to where and how the model averaging is most appropriately handled. We investigated averaging over the output of the final recurrent layer (just before the projection layer), over the output of the projection layer (the pre-softmax unnormalized logits), and over the post-softmax normalized probabilities, which is the approach taken by Gal (2015) for language modeling. We saw no discernible difference in performance between averaging the pre-projection and post-projection outputs. Averaging over the post-softmax probabilities showed marginal improvements over these two methods, but interestingly only for bidirectional models. We also explored using majority voting among the sampled models. This involves tallying the maximum post-softmax probabilities and selecting the class that received the most votes.
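A minimal sketch of this sampling-and-voting procedure, assuming a hypothetical `model(x, rng)` callable that runs one forward pass with a fresh dropout mask and returns post-softmax class probabilities:

```python
import numpy as np

def mc_majority_vote(model, x, k=60, rng=np.random.default_rng(0)):
    """Monte Carlo test-time prediction by majority vote: draw k
    dropout-masked forward passes and return the class that wins
    the arg-max most often."""
    n_classes = len(model(x, rng))
    votes = np.zeros(n_classes)
    for _ in range(k):
        probs = model(x, rng)            # one stochastic forward pass
        votes[np.argmax(probs)] += 1     # tally this sample's prediction
    return int(np.argmax(votes))
```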
This voting method differs from averaging the post-softmax probabilities in the same way max-margin differs from maximum likelihood estimation (MLE), de-emphasizing the points well inside the decision boundary or the models that predicted a class with extremely high probability. With sufficiently large \( k \), this voting method seemed to work best of the averaging methods we tried, and thus all of our displayed models use this technique. However, for classification problems with more classes, more Monte Carlo samples might be necessary to guarantee a meaningful plurality of class predictions. We conclude that the majority-vote Monte Carlo averaging method is preferable in the case where the ratio of Monte Carlo samples to the number of classification labels (\( k/output\_size \)) is large. The Monte Carlo model averaging experiments, shown in Figure 1, were conducted as follows. We drew \( k = 400 \) separate test samples for each example, differentiated by their dropout masks. For each sample size \( p \) (whose values, plotted on the x-axis, were in the range from 2 to 200 with step-size 2) we selected \( p \) of our \( k \) samples randomly without replacement and performed the relevant Monte Carlo averaging technique for that task, as discussed above. We do this \( m = 20 \) times for each point to establish the mean and variance for that number of Monte Carlo iterations/samples \( p \). The variance is used to visualize the 90% confidence interval in blue, while the red line denotes the test accuracy computed using the traditional approximation method (inverted dropout at train-time, and no dropout at test-time). 4 EMBED AVERAGE POOLING Reliably retaining long-range information is a well-documented weakness of LSTM networks (Karpathy et al., 2015). This is especially the case for very long sequences like the IMDB sentiment dataset (Maas et al., 2011), where deep sequential models fail to capture uni- and bi-gram occurrences over long sequences. This is likely why n-gram based models, such as a bi-gram NBSVM (Wang and Manning, 2012), outperform RNN models on such datasets. It was shown by Iyyer et al. (2015) and others that for general NLP classification tasks, the use of a deep, unordered composition (or bag-of-words) of a sequence can yield strong results. Their solution, the deep averaging network (DAN), combines the observed effectiveness of depth with the unreasonable effectiveness of unordered representations of long sequences. We suspect that the primary advantage of DANs is their ability to keep track of information that would have otherwise been forgotten by a sequential model, such as information early in the sequence for a unidirectional RNN or information in the middle of the sequence for a bidirectional RNN. Our embed average pooling supplements the bidirectional RNN with the information from a DAN at a relatively negligible computational cost. Figure 2: An illustration of the embed average pooling extension to a standard RNN model. The output of the multilayer perceptron is concatenated to the final hidden state output by the RNN. (a) Res-V1: An illustration of vertical residual connections. (b) Res-V2: An illustration of vertical and lateral residual connections. Figure 3: An illustration of vertical (ResV) and lateral residual (ResL) connections added to a 3-layer RNN. A model with only vertical residuals is denoted Res-V1, whereas a model with both vertical and lateral residuals is denoted Res-V2. As shown in Figure 2, embed average pooling works by averaging the sequence of word vectors and passing this average through an MLP (a sketch of the full mechanism follows below).
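A minimal sketch of that mechanism, under our own naming; the tanh non-linearity and the omission of the dropout masks described below are simplifying assumptions:

```python
import numpy as np

def embed_average_pool(word_vectors, W1, b1, W2, b2, rnn_final_state):
    """Embed average pooling: temporally average the word vectors, pass
    the average through a one-hidden-layer MLP, and concatenate the MLP
    output to the RNN's final output before the projection/softmax layer.
    Shapes: word_vectors (T, d_embed); W1 (d_hidden, d_embed);
    W2 (d_out, d_hidden)."""
    avg = word_vectors.mean(axis=0)        # temporal (not spatial) average
    hidden = np.tanh(W1 @ avg + b1)        # MLP hidden layer
    pooled = np.tanh(W2 @ hidden + b2)     # MLP output
    return np.concatenate([rnn_final_state, pooled])
```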
The averaging is similar to an average pooling layer in a CNN (hence the name), but with the averaging done temporally rather than spatially. The output of this MLP is concatenated to the final output of the RNN, and the combined vector is then passed into the projection and softmax layer. We apply the same dropout mask to the word vectors when passing them to the RNN as when averaging them, and we apply a different dropout mask on the output of the MLP. We experimented with applying the MLP before rather than after averaging the word vectors but found the latter to be most effective. 5 RESIDUAL CONNECTIONS For feed-forward convolutional neural networks used in computer vision tasks, residual networks, or ResNets, have obtained state-of-the-art results (He et al., 2015). Rather than having each layer learn a wholly new representation of the data, as is customary for neural networks, ResNets have each layer (or group of layers) learn a residual which is added to the layer's input and then passed on to the next layer. More formally, if the input to a layer (or group of layers) is \( x \) and the output of that layer (or group of layers) is \( F(x) \), then the input to the next layer (or group of layers) is \( x + F(x) \), whereas it would be \( F(x) \) in a conventional neural network. This architecture allows the training of far deeper models. He et al. (2015) trained convolutional neural networks as deep as 152 layers, compared to the 16 layers used in VGGNets (Simonyan and Zisserman, 2014) or the 22 layers used in GoogLeNet (Szegedy et al., 2015), and won the 2015 ImageNet Challenge. Since then, various papers have tried to build upon the ResNet paradigm (Huang et al., 2016; Szegedy et al., 2016), and various others have tried to provide convincing theoretical explanations for ResNet's success (Liao and Poggio, 2016; Veit et al., 2016). We explored many different ways to incorporate residual connections in an RNN. The two most successful ones, which we call Res-V1 and Res-V2, are depicted in Figure 3. Res-V1 incorporates only vertical residuals, while Res-V2 incorporates both vertical and lateral residuals. With vertical residual connections, the input to a layer is added to its output and then passed to the next layer, as is done in feed-forward ResNets. Thus, whereas the input to a layer is normally the \( h_t \) from the previous layer, with vertical residuals the input becomes the \( h_t + x_t \) from the previous layer. This maintains many of the attractive properties of ResNets (e.g. unimpeded gradient flow across layers, adding/averaging the contributions of each layer) and thus lends itself naturally to deeper networks. However, it can interact unpredictably with the LSTM architecture, as the “fast” state of the LSTM no longer reflects the network's full representation of the data at that point. To mitigate this unpredictability, Res-V2 also includes lateral residual connections. With lateral residual connections, the input to a layer is added to its output and then passed to the next timestep as the fast state of the LSTM. This is equivalent to replacing equation 6 with \( h_t = o_t \odot \tanh(c_t) + x_t \). Thus, applying both vertical and lateral residuals ensures that the same value is passed both to the next layer as input and to the next timestep as the “fast” state. In addition to these two, we explored various other, ultimately less successful, ways of adding residual connections to an LSTM, the primary one being horizontal residual connections.
In this architecture, rather than adding the input from the previous layer to a layer's output, we added the fast state from the previous timestep. The hope was that residual connections across timesteps would allow information to flow more effectively and thus improve the performance of RNNs that are deep across timesteps, much as ResNets do for networks that are deep across layers. Thus, we believed horizontal residual connections could solve the problem of LSTMs not learning long-term dependencies, the same problem we also hoped to mitigate with embed average pooling. Unfortunately, horizontal residuals failed, possibly because they blurred the distinction between the LSTM's “fast” state and “slow” state and thus prevented the LSTM from quickly adapting to new data. Alternate combinations of horizontal, vertical, and lateral residual connections were also experimented with but yielded poor results. 6 EXPERIMENTAL RESULTS 6.1 DATASETS We chose two commonly used benchmark datasets for our experiments: the Stanford Sentiment Treebank (SST) (Socher et al., 2013) and the IMDB sentiment dataset (Maas et al., 2011). This allowed us to compare the performance of our models to existing work and review the flexibility of our proposed model extensions across fairly disparate types of classification datasets. SST contains relatively well-curated, short-sequence sentences, in contrast to IMDB's comparatively colloquial and lengthy sequences (some up to 2,000 tokens). To further differentiate the classification tasks, we chose to experiment with fine-grained, five-class sentiment on SST, while IMDB only offered binary labels. For IMDB, we randomly split the training set of 25,000 examples into training and validation sets containing 22,500 and 2,500 examples respectively, as done in Maas et al. (2011). 6.2 METHODOLOGY Our objective is to show a series of compounding extensions to the standard LSTM baseline that enhance accuracy. To ensure scientific reliability, the addition of each feature is the only change from the previous model (see Figures 4 and 5). The baseline model is a 2-layer stacked LSTM with hidden size 170 for SST and 120 for IMDB, as used in Tai et al. (2015). All models in this paper used publicly available 300-dimensional word vectors, pre-trained using GloVe on 840 billion tokens of Common Crawl data (Pennington et al., 2014), and both the word vectors and the subsequent weight matrices were trained using Adam with a learning rate of \( 10^{-4} \). The first set of basic feature additions were adding a forget bias and using dropout. Adding a bias of 1.0 to the forget gate (i.e. adding 1.0 to the inside of the sigmoid function in equation 3) improves results across NLP tasks, especially for learning long-range dependencies (Zaremba, 2015); a minimal sketch of this is given below. Dropout (Srivastava et al., 2014) is a highly effective regularizer for deep models. For SST and IMDB we used grid search to select dropout probabilities of 0.5 and 0.7 respectively, applied to the input of each layer, including the projection/softmax layer. While forget bias appears to hurt performance in Figure 5, the combination of dropout and forget bias yielded better results in all cases than dropout without forget bias. Our last two basic optimizations were increasing the hidden sizes and then adding shared-weight bidirectionality to the RNN. The hidden sizes for SST and IMDB were increased to 800 and 360 respectively; we found significantly diminishing returns to performance from increases beyond this.
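A minimal sketch of the forget-gate bias, following equation (3); at initialization (with weights near zero) the gate then sits near \( \sigma(1.0) \approx 0.73 \), so the cell state is mostly retained and long-range information survives early training:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forget_gate(x_t, h_prev, W_f, R_f, b_f, forget_bias=1.0):
    """Forget gate of equation (3) with a constant bias of 1.0 added
    inside the sigmoid, as described above."""
    return sigmoid(W_f @ x_t + R_f @ h_prev + b_f + forget_bias)
```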
Having increased the hidden sizes, we chose shared-weight bidirectionality to ensure the model size did not increase any further. Specifically, the forward and backward weights are shared, and the input to the projection/softmax layer is a concatenation of the forward and backward passes' final hidden states. All of our subsequent proposed model extensions are described at length in their own sections. For both datasets, we used 60 Monte Carlo samples, and the embed average pooling MLP had one hidden layer, with both the hidden dimension and the output dimension set to 300. Note that although the MLP weights increased the size of their respective models, this increase is negligible (equivalent to increasing the hidden size for SST from 800 to 804 or the hidden size of IMDB from 360 to 369), and we found that such a size increase had no discernible effect on accuracy when done without the embed average pooling. 6.3 RESULTS Since each of our proposed modifications operates independently, they are well suited to use in combination as well as in isolation. In Figures 4 and 5 we compound these features on top of the more traditional enhancements. Due to the expense of bidirectional models, Figure 4 also shows these compounding features on SST with and without bidirectionality. The validation accuracy distributions show that each augmentation usually provides some small but noticeable improvement on the previous model, as measured by consistent improvements in mean and median accuracy. (a) Compounding feature models on 5-Class SST. (b) Compounding feature models (minus bidirectional) for 5-Class SST. Figure 4: These box-plots show the performance of compounding model features on fine-grained SST validation accuracy. The red points, red lines, blue boxes, whiskers and plus-shaped points indicate the mean, median, quartiles, range, and outliers, respectively. We originally suspected that MC would provide marginal yet consistent improvements across datasets, while embed average pooling would especially excel for long sequences like in IMDB, where n-gram based models and deep unordered compositions have benefited from their ability to retain information from disparate parts of the text. The former hypothesis was largely confirmed. However, while embed average pooling was generally performance-enhancing, the performance boost it yielded for IMDB was not significantly larger than the one it yielded for SST, though that may have been because the other enhancements already encompassed most of the advantages provided by deep unordered compositions. The only evident exceptions to the positive trend are the variations of residual connections. Which of Res-V1 (vertical only) and Res-V2 (vertical and lateral) outperformed the other depended on the dataset and whether the network was bidirectional. The Res-V2 architecture dominated in Figures 4b and 5, while the Res-V1 architecture (only vertical residuals) is most performant in Figure 4a. This suggests that, for short sequences, bidirectionality and lateral residuals conflict. Figure 5: These box-plots show the performance of compounding model features on binary IMDB validation accuracy. Figure 6: Comparing the effects of layer depth between Vanilla RNNs, Res-V1 and Res-V2 models on fine-grained sentiment classification (SST). As we increase the layers, we decrease the hidden size to maintain equivalent model sizes.
The points indicate average validation accuracy, while the shaded regions indicate 90% confidence intervals. Further analysis of the effect of residual connections and model depth can be found in Figure 6. In that figure, the number of parameters, and hence the model size, is kept uniform by modifying the hidden size as the layer depth changes. The hidden sizes used for 1-, 2-, 4-, 6-, and 8-layer models were 250, 170, 120, 100, and 85 respectively, maintaining \( \approx 550,000 \) total parameters for all models. As the graph demonstrates, normal LSTMs (“Vanilla”) perform drastically worse as they become deeper and narrower, while Res-V1 and Res-V2 both see their performance stay much steadier or even briefly rise. <table> <tr> <th>Model</th> <th># Params (M)</th> <th>Train Time / Epoch (sec)</th> <th>Test Acc (%)</th> </tr> <tr> <td>RNTN (Socher et al., 2013)</td> <td>–</td> <td>–</td> <td>45.7</td> </tr> <tr> <td>CNN-MC (Kim, 2014)</td> <td>–</td> <td>–</td> <td>47.4</td> </tr> <tr> <td>DRNN (Irsoy and Cardie, 2014)</td> <td>–</td> <td>–</td> <td>49.8</td> </tr> <tr> <td>CT-LSTM (Tai et al., 2015)</td> <td>0.317</td> <td>–</td> <td>51.0</td> </tr> <tr> <td>DMN (Kumar et al., 2016)</td> <td>–</td> <td>–</td> <td>52.1</td> </tr> <tr> <td>NTI-SLSTM-LSTM (Munkhdalai and Yu, 2016)</td> <td>–</td> <td>–</td> <td>53.1</td> </tr> <tr> <td>Baseline 2-LSTM</td> <td>0.553</td> <td>≈ 2,100</td> <td>46.4</td> </tr> <tr> <td>Large 2-LSTM</td> <td>8.650</td> <td>≈ 3,150</td> <td>48.7</td> </tr> <tr> <td>Bi-2-LSTM</td> <td>8.650</td> <td>≈ 6,100</td> <td>50.9</td> </tr> <tr> <td>Bi-2-LSTM+MC+Pooling+ResV</td> <td>8.740</td> <td>≈ 8,050</td> <td>52.2</td> </tr> <tr> <td>2-LSTM+MC+Pooling+ResV+ResL</td> <td>8.740</td> <td>≈ 4,800</td> <td>51.6</td> </tr> </table> Table 1: Test performance on the Stanford Sentiment Treebank (SST) sentiment classification task. <table> <tr> <th>Model</th> <th># Params (M)</th> <th>Train Time / Epoch (sec)</th> <th>Test Acc (%)</th> </tr> <tr> <td>SVM-bi (Wang and Manning, 2012)</td> <td>–</td> <td>–</td> <td>89.2</td> </tr> <tr> <td>DAN-RAND (Iyyer et al., 2015)</td> <td>–</td> <td>–</td> <td>88.8</td> </tr> <tr> <td>DAN (Iyyer et al., 2015)</td> <td>–</td> <td>–</td> <td>89.4</td> </tr> <tr> <td>NBSVM-bi (Wang and Manning, 2012)</td> <td>–</td> <td>–</td> <td>91.2</td> </tr> <tr> <td>NBSVM-tri, RNN, Sentence-Vec Ensemble (Mesnil et al., 2014)</td> <td>–</td> <td>–</td> <td>92.6</td> </tr> <tr> <td>Baseline 2-LSTM</td> <td>0.318</td> <td>≈ 1,800</td> <td>85.3</td> </tr> <tr> <td>Large 2-LSTM</td> <td>2.00</td> <td>≈ 2,500</td> <td>87.6</td> </tr> <tr> <td>Bi-2-LSTM</td> <td>2.00</td> <td>≈ 5,100</td> <td>88.9</td> </tr> <tr> <td>Bi-2-LSTM+MC+Pooling+ResV+ResL</td> <td>2.08</td> <td>≈ 5,500</td> <td>90.1</td> </tr> </table> Table 2: Test performance on the IMDB sentiment classification task. While depth wound up being far from a panacea for the datasets we experimented on, the ability of an LSTM with residual connections to maintain its performance as it gets deeper holds promise for other domains where the extra expressive power provided by depth might prove more crucial.
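The lateral residual modification itself is tiny; here is a sketch of the Res-V2 wiring around a single LSTM step, reusing the hypothetical lstm_step from the earlier sketch and assuming \( x_t \) and \( h_t \) share a dimension:

```python
import numpy as np

def res_v2_step(lstm_step, x_t, h_prev, c_prev, params):
    """Res-V2 wiring: equation (6) effectively becomes
    h_t = o_t * tanh(c_t) + x_t, so the same residual value feeds both
    the next layer (vertical) and the next timestep's fast state (lateral)."""
    h, c = lstm_step(x_t, h_prev, c_prev, params)
    h_res = h + x_t   # add the layer input to the layer output
    return h_res, c   # h_res goes to both the next layer and next timestep
```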
Selecting the best results for each model, we see results competitive with state-of-the-art performance for both IMDB\(^1\) and SST, even though many state-of-the-art models either use parse-tree information (Tai et al., 2015) or multiple passes through the data (Kumar et al., 2016), or incur tremendous train- and test-time computational and memory expenses (Le and Mikolov, 2014). To our knowledge, our models achieve the best performance among purely sequential, single-pass, and computationally feasible models, precisely the desired features of a solid out-of-the-box baseline. Furthermore, for SST, the compounding enhancement model without bidirectionality, the final model shown in Figure 4b, greatly exceeded the performance of the large bidirectional model (51.6% vs. 50.9%), with significantly less training time (Table 1). This suggests our enhancements could provide a similarly reasonable and efficient alternative to shared-weight bidirectionality for other such datasets. 7 CONCLUSION We explore several easy-to-implement enhancements to the basic LSTM network that positively impact performance. These include both fairly well-established extensions (biasing the forget gate, dropout, increasing the model size, bidirectionality) and several more novel ones (Monte Carlo model averaging, embed average pooling, residual connections). We find that these enhancements improve the performance of the LSTM in classification tasks, both in conjunction and in isolation, with an accuracy close to the state of the art despite being more lightweight and using less information than the current state-of-the-art models. Our results suggest that these extensions should be incorporated into LSTM baselines. \(^1\)For IMDB, we benchmark only against results obtained from training exclusively on the labeled training set. Thus, we omit results from unsupervised models that leveraged the additional 50,000 unlabeled examples, such as Miyato et al. (2016). REFERENCES Rich Caruana, Nikos Karampatziakis, and Ainur Yessenalina. An empirical evaluation of supervised learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 96–103. ACM, 2008. Yarin Gal. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprint arXiv:1512.05287, 2015. Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R Steunebrink, and Jürgen Schmidhuber. LSTM: A search space odyssey. arXiv preprint arXiv:1503.04069, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385. Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016. Ozan Irsoy and Claire Cardie. Modeling compositionality with multiplicative recurrent neural networks. arXiv preprint arXiv:1412.6577, 2014. Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. Deep unordered composition rivals syntactic methods for text classification. In Association for Computational Linguistics, 2015. Andrej Karpathy, Justin Johnson, and Fei-Fei Li. Visualizing and understanding recurrent networks. CoRR, abs/1506.02078, 2015. Yoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014. Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher.
Ask me anything: Dynamic memory networks for natural language processing. In ICML, 2016. Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. In ICML, volume 14, pages 1188–1196, 2014. Qianli Liao and Tomaso A. Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. CoRR, abs/1604.03640, 2016. URL http://arxiv.org/abs/1604.03640. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015. Grégoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, and Yoshua Bengio. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprint arXiv:1412.5335, 2014. Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Virtual adversarial training for semi-supervised text classification. arXiv preprint arXiv:1605.07725, 2016. Tsendsuren Munkhdalai and Hong Yu. Neural tree indexers for text understanding. CoRR, abs/1607.04492, 2016. URL http://arxiv.org/abs/1607.04492. Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pages 1532–43, 2014. Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2014. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), volume 1631, page 1642. Citeseer, 2013. Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015. Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, Inception-ResNet and the impact of residual connections on learning. CoRR, abs/1602.07261, 2016. URL http://arxiv.org/abs/1602.07261. Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015. Andreas Veit, Michael J. Wilber, and Serge J. Belongie. Residual networks are exponential ensembles of relatively shallow networks. CoRR, abs/1605.06431, 2016. URL http://arxiv.org/abs/1605.06431. Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pages 90–94.
Association for Computational Linguistics, 2012. Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. In ICML, 2016. Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015.
ABSTRACT LSTMs have become a basic building block for many deep NLP models. In recent years, many improvements and variations have been proposed for deep sequence models in general, and LSTMs in particular. We propose and analyze a series of augmentations and modifications to LSTM networks resulting in improved performance for text classification datasets. We observe compounding improvements on traditional LSTMs using Monte Carlo test-time model averaging, average pooling, and residual connections, along with four other suggested modifications. Our analysis provides a simple, reliable, and high-quality baseline model. 1 INTRODUCTION When exploring a new problem, having a simple yet competitive off-the-shelf baseline is fundamental to new research. For instance, Caruana et al. (2008) showed random forests to be a strong baseline for many high-dimensional supervised learning tasks. For computer vision, off-the-shelf convolutional neural networks (CNNs) have earned their reputation as a strong baseline (Sharif Razavian et al., 2014) and basic building block for more complex models like visual question answering (Xiong et al., 2016). For natural language processing (NLP) and other sequential modeling tasks, recurrent neural networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, with a linear projection layer at the end have begun to attain a similar status. However, the standard LSTM is in many ways lacking as a baseline. Zaremba (2015), Gal (2015), and others show that large improvements are possible using a forget bias, inverted dropout regularization or bidirectionality. We add three major additions with similar improvements to off-the-shelf LSTMs: Monte Carlo model averaging, embed average pooling, and residual connections. We analyze these and other more common improvements. 2 LSTM NETWORK LSTM networks are among the most commonly used models for tasks involving variable-length sequences of data, such as text classification. The basic LSTM layer consists of six equations: \[ i_t = \tanh(W_i x_t + R_i h_{t-1} + b_i) \tag{1} \] \[ j_t = \sigma(W_j x_t + R_j h_{t-1} + b_j) \tag{2} \] \[ f_t = \sigma(W_f x_t + R_f h_{t-1} + b_f) \tag{3} \] \[ o_t = \sigma(W_o x_t + R_o h_{t-1} + b_o) \tag{4} \] \[ c_t = i_t \odot j_t + f_t \odot c_{t-1} \tag{5} \] \[ h_t = o_t \odot \tanh(c_t) \tag{6} \] where \( \sigma \) is the sigmoid function, \( \odot \) is element-wise multiplication, and \( v_t \) is the value of variable \( v \) at timestep \( t \). Each layer receives \( x_t \) from the layer that came before it and \( h_{t-1} \) and \( c_{t-1} \) from the previous timestep, and it outputs \( h_t \) to the layer that comes after it and \( h_t \) and \( c_t \) to the next timestep. The \( c \) and \( h \) values jointly constitute the recurrent state of the LSTM that is passed from one timestep to the next. Since the \( h \) value completely updates at each timestep while the \( c \) value maintains part of its own value through multiplication by the forget gate \( f \), \( h \) and \( c \) complement each other very well, with \( h \) forming a “fast” state that can quickly adapt to new information and \( c \) forming a “slow” state that allows information to be retained over longer periods of time (Zaremba, 2015).
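To make Equations 1–6 concrete, the following is a minimal NumPy sketch of a single LSTM timestep; the per-gate weight names (`W`, `R`, `b`) follow the equations above, while the dict-based packing is purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, R, b):
    # W, R, b are dicts keyed by 'i', 'j', 'f', 'o', one weight matrix and
    # bias per equation. Shapes: W[k] is (hidden, input), R[k] is (hidden, hidden).
    i_t = np.tanh(W['i'] @ x_t + R['i'] @ h_prev + b['i'])   # block input (Eq. 1)
    j_t = sigmoid(W['j'] @ x_t + R['j'] @ h_prev + b['j'])   # input gate (Eq. 2)
    f_t = sigmoid(W['f'] @ x_t + R['f'] @ h_prev + b['f'])   # forget gate (Eq. 3)
    o_t = sigmoid(W['o'] @ x_t + R['o'] @ h_prev + b['o'])   # output gate (Eq. 4)
    c_t = i_t * j_t + f_t * c_prev    # "slow" state (Eq. 5)
    h_t = o_t * np.tanh(c_t)          # "fast" state (Eq. 6)
    return h_t, c_t
```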
While various papers have tried to systematically experiment with the 6 core equations constituting an LSTM (Greff et al., 2015; Zaremba, 2015), in general the basic LSTM equations have proven extremely resilient and, if not optimal, at least a local maximum. 3 MONTE CARLO MODEL AVERAGING It is common practice when applying dropout in neural networks to scale the weights up at train time (inverted dropout). This ensures that the expected magnitude of the inputs to any given layer are equivalent between train and test, allowing for an efficient computation of test-time predictions. However, for a model trained with dropout, test-time predictions generated without dropout merely approximate the ensemble of smaller models that dropout is meant to provide. A higher fidelity method requires that test-time dropout be conducted in a manner consistent with how the model was trained. To achieve this, we sample \( k \) neural nets with dropout applied for each test example and average the predictions. With sufficiently large \( k \) this Monte Carlo average should approach the true model average (Srivastava et al., 2014). We show in Figure 1 that this technique can yield more accurate predictions on test-time data than the standard practice. This is demonstrated over a number of datasets, suggesting its applicability to many types of sequential architectures. While running multiple Monte Carlo samples is more computationally expensive, the overall increase is minimal as the process is only run on test-time forward passes and is highly parallelizable. We show that higher performance can be achieved with relatively few Monte Carlo samples, and that this number of samples is similar across different NLP datasets and tasks. (a) Monte Carlo for SST fine-grained error (b) Monte Carlo for IMDB binary error Figure 1: A comparison of the performance of Monte Carlo averaging, over sample size, to regular single-sample inverted dropout at test-time. We encountered one ambiguity of Monte Carlo model averaging that to our knowledge remains unaddressed in prior literature: there is relatively little exploration as to where and how the model averaging is most appropriately handled. We investigated averaging over the output of the final recurrent layer (just before the projection layer), over the output of the projection layer (the pre-softmax unnormalized logits), and the post-softmax normalized probabilities, which is the approach taken by Gal (2015) for language modeling. We saw no discernible difference in performance between averaging the pre-projection and post-projection outputs. Averaging over the post-softmax probabilities showed marginal improvements over these two methods, but interestingly only for bidirectional models. We also explored using majority voting among the sampled models. This involves tallying the maximum post-softmax probabilities and selecting the class that received the most votes. This method differs from averaging the post-softmax probabilities in the same way max-margin differs from maximum likelihood estimation (MLE), de-emphasizing the points well inside the decision boundary or the models that predicted a class with extremely high probability. With sufficiently large \( k \), this voting method seemed to work best of the averaging methods we tried, and thus all of our displayed models use this technique. However, for classification problems with more classes, more Monte Carlo samples might be necessary to guarantee a meaningful plurality of class predictions.
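A sketch of the majority-vote variant described above; `predict_with_dropout` is an assumed stand-in for a single forward pass with a freshly sampled dropout mask, returning post-softmax probabilities.

```python
import numpy as np

def mc_majority_vote(predict_with_dropout, x, k=60):
    # Each of the k dropout samples casts one vote for its argmax class;
    # the class with the most votes is the final prediction.
    votes = None
    for _ in range(k):
        probs = predict_with_dropout(x)   # shape: (num_classes,)
        if votes is None:
            votes = np.zeros_like(probs)
        votes[np.argmax(probs)] += 1
    return int(np.argmax(votes))
```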
We conclude that the majority-vote Monte Carlo averaging method is preferable when the ratio of Monte Carlo samples to the number of classification labels, \( k / \text{output\_size} \), is large. The Monte Carlo model averaging experiments, shown in Figure 1, were conducted as follows. We drew \( k = 400 \) separate test samples for each example, differentiated by their dropout masks. For each sample size \( p \) (whose values, plotted on the x-axis, were in the range from 2 to 200 with step-size 2) we selected \( p \) of our \( k \) samples randomly without replacement and performed the relevant Monte Carlo averaging technique for that task, as discussed above. We do this \( m = 20 \) times for each point, to establish the mean and variance for that number of Monte Carlo iterations/samples \( p \). The variance is used to visualize the 90% confidence interval in blue, while the red line denotes the test accuracy computed using the traditional approximation method (inverted dropout at train-time, and no dropout at test-time). 4 EMBED AVERAGE POOLING Reliably retaining long-range information is a well documented weakness of LSTM networks (Karpathy et al., 2015). This is especially the case for very long sequences like the IMDB sentiment dataset (Maas et al., 2011), where deep sequential models fail to capture uni- and bi-gram occurrences over long sequences. This is likely why n-gram based models, such as a bi-gram NBSVM (Wang and Manning, 2012), outperform RNN models on such datasets. It was shown by Iyyer et al. (2015) and others that for general NLP classification tasks, the use of a deep, unordered composition (or bag-of-words) of a sequence can yield strong results. Their solution, the deep averaging network (DAN), combines the observed effectiveness of depth with the unreasonable effectiveness of unordered representations of long sequences. We suspect that the primary advantage of DANs is their ability to keep track of information that would have otherwise been forgotten by a sequential model, such as information early in the sequence for a unidirectional RNN or information in the middle of the sequence for a bidirectional RNN. Our embed average pooling supplements the bidirectional RNN with the information from a DAN at a relatively negligible computational cost. Figure 2: An illustration of the embed average pooling extension to a standard RNN model. The output of the multilayer perceptron is concatenated to the final hidden state output by the RNN. As shown in Figure 2, embed average pooling works by averaging the sequence of word vectors and passing this average through an MLP. The averaging is similar to an average pooling layer in a CNN (hence the name), but with the averaging being done temporally rather than spatially. The output of this MLP is concatenated to the final output of the RNN, and the combined vector is then passed into the projection and softmax layer. We apply the same dropout mask to the word vectors when passing them to the RNN as when averaging them, and we apply a different dropout mask on the output of the MLP. We experimented with applying the MLP before rather than after averaging the word vectors but found the latter to be the more effective.
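A sketch of embed average pooling for one example; the MLP weight names are illustrative, and the ReLU hidden nonlinearity is an assumption, since the text does not pin it down.

```python
import numpy as np

def embed_average_pool(word_vectors, W1, b1, W2, b2):
    # word_vectors: (seq_len, embed_dim). Average temporally, then apply a
    # one-hidden-layer MLP; in the experiments below both the hidden and
    # output dimensions are 300.
    avg = word_vectors.mean(axis=0)           # temporal average pooling
    hidden = np.maximum(0.0, W1 @ avg + b1)   # assumed ReLU hidden layer
    return W2 @ hidden + b2

# The result is concatenated with the RNN's final hidden state before the
# projection/softmax layer, e.g.:
# combined = np.concatenate([rnn_final_hidden, embed_average_pool(...)])
```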
5 RESIDUAL CONNECTIONS For feed-forward convolutional neural networks used in computer vision tasks, residual networks, or ResNets, have obtained state-of-the-art results (He et al., 2015). Rather than having each layer learn a wholly new representation of the data, as is customary for neural networks, ResNets have each layer (or group of layers) learn a residual which is added to the layer’s input and then passed on to the next layer. More formally, if the input to a layer (or group of layers) is \( x \) and the output of that layer (or group of layers) is \( F(x) \), then the input to the next layer (or group of layers) is \( x + F(x) \), whereas it would be \( F(x) \) in a conventional neural network. This architecture allows the training of far deeper models. He et al. (2015) trained convolutional neural networks as deep as 152 layers, compared to 16 layers used in VGGNets (Simonyan and Zisserman, 2014) or 22 layers used in GoogLeNet (Szegedy et al., 2015), and won the 2015 ImageNet Challenge. Since then, various papers have tried to build upon the ResNet paradigm (Huang et al., 2016; Szegedy et al., 2016), and various others have tried to create convincing theoretical reasons for ResNet’s success (Liao and Poggio, 2016; Veit et al., 2016). We explored many different ways to incorporate residual connections in an RNN. The two most successful ones, which we call Res-V1 and Res-V2, are depicted in Figure 3. (a) Res-V1: An illustration of vertical residual connections (b) Res-V2: An illustration of vertical and lateral residual connections Figure 3: An illustration of vertical (ResV) and lateral residual (ResL) connections added to a 3-layer RNN. A model with only vertical residuals is denoted Res-V1, whereas a model with vertical and lateral residuals is denoted “Res-V2”. Res-V1 incorporates only vertical residuals, while Res-V2 incorporates both vertical and lateral residuals. With vertical residual connections, the input to a layer is added to its output and then passed to the next layer, as is done in feed-forward ResNets. Thus, whereas the input to a layer is normally the \( h_t \) from the previous layer, with vertical residuals the input becomes the \( h_t + x_t \) from the previous layer. This maintains many of the attractive properties of ResNets (e.g. unimpeded gradient flow across layers, adding/averaging the contributions of each layer) and thus lends itself naturally to deeper networks. However, it can interact unpredictably with the LSTM architecture, as the “fast” state of the LSTM no longer reflects the network’s full representation of the data at that point. To mitigate this unpredictability, Res-V2 also includes lateral residual connections. With lateral residual connections, the input to a layer is added to its output and then passed to the next timestep as the fast state of the LSTM. It is equivalent to replacing equation 6 with \( h_t = o_t \odot \tanh(c_t) + x_t \). Thus, applying both vertical and lateral residuals ensures that the same value is passed both to the next layer as input and to the next timestep as the “fast” state. In addition to these two, we explored various other, ultimately less successful, ways of adding residual connections to an LSTM, the primary one being horizontal residual connections. In this architecture, rather than adding the input from the previous layer to a layer’s output, we added the fast state from the previous timestep. The hope was that adding residual connections across timesteps would allow information to flow more effectively across timesteps and thus improve the performance of RNNs that are deep across timesteps, much as ResNets do for networks that are deep across layers. Thus, we believed horizontal residual connections could solve the problem of LSTMs not learning long-term dependencies, the same problem we also hoped to mitigate with embed average pooling.
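The two successful variants can be expressed as a thin wrapper around a plain LSTM step (such as the sketch in Section 2); a minimal illustration:

```python
def residual_lstm_step(x_t, h_prev, c_prev, lstm_step, params, lateral=False):
    # Vertical residuals (Res-V1): the layer input x_t is added to the output
    # passed up to the next layer. Adding lateral residuals as well (Res-V2)
    # also feeds h_t + x_t to the next timestep as the "fast" state, i.e.
    # h_t = o_t * tanh(c_t) + x_t everywhere.
    h_t, c_t = lstm_step(x_t, h_prev, c_prev, *params)
    to_next_layer = h_t + x_t                       # vertical residual
    to_next_step = h_t + x_t if lateral else h_t    # lateral residual
    return to_next_layer, to_next_step, c_t
```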
Unfortunately, horizontal residuals failed, possibly because they blurred the distinction between the LSTM’s “fast” state and “slow” state and thus prevented the LSTM from quickly adapting to new data. Alternate combinations of horizontal, vertical, and lateral residual connections were also experimented with but yielded poor results. 6 EXPERIMENTAL RESULTS 6.1 DATASETS We chose two commonly used benchmark datasets for our experiments: the Stanford Sentiment Treebank (SST) (Socher et al., 2013) and the IMDB sentiment dataset (Maas et al., 2011). This allowed us to compare the performance of our models to existing work and review the flexibility of our proposed model extensions across fairly disparate types of classification datasets. SST contains relatively well curated, short-sequence sentences, in contrast to IMDB’s comparatively colloquial and lengthy sequences (some up to 2,000 tokens). To further differentiate the classification tasks we chose to experiment with fine-grained, five-class sentiment on SST, while IMDB only offered binary labels. For IMDB, we randomly split the training set of 25,000 examples into training and validation sets containing 22,500 and 2,500 examples respectively, as done in Maas et al. (2011). 6.2 METHODOLOGY Our objective is to show a series of compounding extensions to the standard LSTM baseline that enhance accuracy. To ensure scientific reliability, the addition of each feature is the only change from the previous model (see Figures 4 and 5). The baseline model is a 2-layer stacked LSTM with hidden size 170 for SST and 120 for IMDB, as used in Tai et al. (2015). All models in this paper used publicly available 300-dimensional word vectors, pre-trained using GloVe on 840 million tokens of Common Crawl Data (Pennington et al., 2014), and both the word vectors and the subsequent weight matrices were trained using Adam with a learning rate of \( 10^{-4} \). The first set of basic feature additions were adding a forget bias and using dropout. Adding a bias of 1.0 to the forget gate (i.e. adding 1.0 to the inside of the sigmoid function in equation 3) improves results across NLP tasks, especially for learning long-range dependencies (Zaremba, 2015). Dropout (Srivastava et al., 2014) is a highly effective regularizer for deep models. For SST and IMDB we used grid search to select dropout probabilities of 0.5 and 0.7 respectively, applied to the input of each layer, including the projection/softmax layer. While forget bias appears to hurt performance in Figure 5, the combination of dropout and forget bias yielded better results in all cases than dropout without forget bias. Our last two basic optimizations were increasing the hidden sizes and then adding shared-weight bidirectionality to the RNN. The hidden sizes for SST and IMDB were increased to 800 and 360 respectively; we found significantly diminishing returns to performance from increases beyond this. We chose shared-weight bidirectionality to ensure the model size did not increase any further. Specifically, the forward and backward weights are shared, and the input to the projection/softmax layer is a concatenation of the forward and backward passes’ final hidden states. All of our subsequent proposed model extensions are described at length in their own sections. For both datasets, we used 60 Monte Carlo samples, and the embed average pooling MLP had one hidden layer, with both the hidden dimension and the output dimension set to 300.
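Of the basic additions above, the forget-gate bias is easy to state exactly; a minimal sketch of the modified Equation 3, with illustrative names:

```python
import numpy as np

def forget_gate_with_bias(x_t, h_prev, W_f, R_f, b_f, forget_bias=1.0):
    # Adding 1.0 inside the sigmoid starts the gate near "remember",
    # which helps long-range dependencies early in training.
    return 1.0 / (1.0 + np.exp(-(W_f @ x_t + R_f @ h_prev + b_f + forget_bias)))
```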
Note that although the MLP weights increased the size of their respective models, this increase is negligible (equivalent to increasing the hidden size for SST from 800 to 804 or the hidden size of IMDB from 360 to 369), and we found that such a size increase had no discernible effect on accuracy when done without the embed average pooling. 6.3 RESULTS Since each of our proposed modifications operates independently, they are well suited to use in combination as well as in isolation. In Figures 4 and 5 we compound these features on top of the more traditional enhancements. Due to the expense of bidirectional models, Figure 4 also shows these compounding features on SST with and without bidirectionality. The validation accuracy distributions show that each augmentation usually provides some small but noticeable improvement on the previous model, as measured by consistent improvements in mean and median accuracy. (a) Compounding feature models on 5-Class SST. (b) Compounding feature models (minus bidirectional) for 5-Class SST. Figure 4: These box-plots show the performance of compounding model features on fine-grain SST validation accuracy. The red points, red lines, blue boxes, whiskers and plus-shaped points indicate the mean, median, quartiles, range, and outliers, respectively. We originally suspected that MC would provide marginal yet consistent improvements across datasets, while embed average pooling would especially excel for long sequences like in IMDB, where n-gram based models and deep unordered compositions have benefited from their ability to retain information from disparate parts of the text. The former hypothesis was largely confirmed. However, while embed average pooling was generally performance-enhancing, the performance boost it yielded for IMDB was not significantly larger than the one it yielded for SST, though that may have been because the other enhancements already encompassed most of the advantages provided by deep unordered compositions. The only evident exceptions to the positive trend are the variations of residual connections. Which of Res-V1 (vertical only) and Res-V2 (vertical and lateral) outperformed the other depended on the dataset and whether the network was bidirectional. The Res-V2 architecture dominated in Figures 4b and 5 while the Res-V1 (only vertical residuals) architecture is most performant in Figure 4a. This suggests that for short sequences, bidirectionality and lateral residuals conflict. Figure 5: These box-plots show the performance of compounding model features on binary IMDB validation accuracy. Further analysis of the effect of residual connections and model depth can be found in Figure 6. In that figure, the number of parameters, and hence model size, are kept uniform by modifying the hidden size as the layer depth changes. The hidden sizes used for 1, 2, 4, 6, and 8 layer models were 250, 170, 120, 100, and 85 respectively, maintaining \( \approx 550,000 \) total parameters for all models. Figure 6: Comparing the effects of layer depth between Vanilla RNNs, Res-V1 and Res-V2 models on fine-grained sentiment classification (SST). As we increase the layers, we decrease the hidden size to maintain equivalent model sizes. The points indicate average validation accuracy, while the shaded regions indicate 90% confidence intervals.
<table> <tr> <th>Model</th> <th># Params (M)</th> <th>Train Time / Epoch (sec)</th> <th>Test Acc (%)</th> </tr> <tr> <td>RNTN (Socher et al., 2013)</td> <td>–</td> <td>–</td> <td>45.7</td> </tr> <tr> <td>CNN-MC (Kim, 2014)</td> <td>–</td> <td>–</td> <td>47.4</td> </tr> <tr> <td>DRNN (Irsoy and Cardie, 2014)</td> <td>–</td> <td>–</td> <td>49.8</td> </tr> <tr> <td>CT-LSTM (Tai et al., 2015)</td> <td>0.317</td> <td>–</td> <td>51.0</td> </tr> <tr> <td>DMN (Kumar et al., 2016)</td> <td>–</td> <td>–</td> <td>52.1</td> </tr> <tr> <td>NTI-SLSTM-LSTM (Munkhdalai and Yu, 2016)</td> <td>–</td> <td>–</td> <td>53.1</td> </tr> <tr> <td>Baseline 2-LSTM</td> <td>0.553</td> <td>≈ 2,100</td> <td>46.4</td> </tr> <tr> <td>Large 2-LSTM</td> <td>8.650</td> <td>≈ 3,150</td> <td>48.7</td> </tr> <tr> <td>Bi-2-LSTM</td> <td>8.650</td> <td>≈ 6,100</td> <td>50.9</td> </tr> <tr> <td>Bi-2-LSTM+MC+Pooling+ResV</td> <td>8.740</td> <td>≈ 8,050</td> <td>52.2</td> </tr> <tr> <td>2-LSTM+MC+Pooling+ResV+ResL</td> <td>8.740</td> <td>≈ 4,800</td> <td>51.6</td> </tr> </table> Table 1: Test performance on the Stanford Sentiment Treebank (SST) sentiment classification task. <table> <tr> <th>Model</th> <th># Params (M)</th> <th>Train Time / Epoch (sec)</th> <th>Test Acc (%)</th> </tr> <tr> <td>SVM-bi (Wang and Manning, 2012)</td> <td>–</td> <td>–</td> <td>89.2</td> </tr> <tr> <td>DAN-RAND (Iyyer et al., 2015)</td> <td>–</td> <td>–</td> <td>88.8</td> </tr> <tr> <td>DAN (Iyyer et al., 2015)</td> <td>–</td> <td>–</td> <td>89.4</td> </tr> <tr> <td>NBSVM-bi (Wang and Manning, 2012)</td> <td>–</td> <td>–</td> <td>91.2</td> </tr> <tr> <td>NBSVM-tri, RNN, Sentence-Vec Ensemble (Mesnil et al., 2014)</td> <td>–</td> <td>–</td> <td>92.6</td> </tr> <tr> <td>Baseline 2-LSTM</td> <td>0.318</td> <td>≈ 1,800</td> <td>85.3</td> </tr> <tr> <td>Large 2-LSTM</td> <td>2.00</td> <td>≈ 2,500</td> <td>87.6</td> </tr> <tr> <td>Bi-2-LSTM</td> <td>2.00</td> <td>≈ 5,100</td> <td>88.9</td> </tr> <tr> <td>Bi-2-LSTM+MC+Pooling+ResV+ResL</td> <td>2.08</td> <td>≈ 5,500</td> <td>90.1</td> </tr> </table> Table 2: Test performance on the IMDB sentiment classification task. As the graph demonstrates, normal LSTMs ("Vanilla") perform drastically worse as they become deeper and narrower, while Res-V1 and Res-V2 both see their performance stay much steadier or even briefly rise. While depth wound up being far from a panacea for the datasets we experimented on, the ability of an LSTM with residual connections to maintain its performance as it gets deeper holds promise for other domains where the extra expressive power provided by depth might prove more crucial. Selecting the best results for each model, we see results competitive with state-of-the-art performance for both IMDB\(^1\) and SST, even though many state-of-the-art models use either parse-tree information (Tai et al., 2015), multiple passes through the data (Kumar et al., 2016) or tremendous train and test-time computational and memory expenses (Le and Mikolov, 2014). To our knowledge, our models constitute the best performance of purely sequential, single-pass, and computationally feasible models, precisely the desired features of a solid out-of-the-box baseline. Furthermore, for SST, the compounding enhancement model without bidirectionality, the final model shown in Figure 4b, greatly exceeded the performance of the large bidirectional model (51.6% vs 50.9%), with significantly less training time (Table 1).
This suggests our enhancements could provide a similarly reasonable and efficient alternative to shared-weight bidirectionality for other such datasets. 7 CONCLUSION We explore several easy-to-implement enhancements to the basic LSTM network that positively impact performance. These include both fairly well established extensions (biasing the forget gate, dropout, increasing the model size, bidirectionality) and several more novel ones (Monte Carlo model averaging, embed average pooling, residual connections). We find that these enhancements improve the performance of the LSTM in classification tasks, both in conjunction and in isolation, with an accuracy close to the state of the art despite being more lightweight and using less information than the current state-of-the-art models. Our results suggest that these extensions should be incorporated into LSTM baselines. \(^1\)For IMDB, we benchmark only against results obtained from training exclusively on the labeled training set. Thus, we omit results from unsupervised models that leveraged the additional 50,000 unlabeled examples, such as Miyato et al. (2016).
reject
Reject
5
ICLR_2017_paper_0062
iclr
2,017
# LOSSY IMAGE COMPRESSION WITH COMPRESSIVE AUTOENCODERS Lucas Theis, Wenzhe Shi, Andrew Cunningham & Ferenc Huszár Twitter London, UK {ltheis,wshi,acunningham,fhuszar}@twitter.com ## ABSTRACT We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images. ## 1 INTRODUCTION Advances in training of neural networks have helped to improve performance in a number of domains, but neural networks have yet to surpass existing codecs in lossy image compression. Promising first results have recently been achieved using autoencoders (Ballé et al., 2016; Toderici et al., 2016b) – in particular on small images (Toderici et al., 2016a; Gregor et al., 2016; van den Oord et al., 2016b) – and neural networks are already achieving state-of-the-art results in lossless image compression (Theis & Bethge, 2015; van den Oord et al., 2016a). Autoencoders have the potential to address an increasing need for flexible lossy compression algorithms. Depending on the situation, encoders and decoders of different computational complexity are required. When sending data from a server to a mobile device, it may be desirable to pair a powerful encoder with a less complex decoder, but the requirements are reversed when sending data in the other direction. The amount of computational power and bandwidth available also changes over time as new technologies become available. For the purpose of archiving, encoding and decoding times matter less than for streaming applications. Finally, existing compression algorithms may be far from optimal for new media formats such as lightfield images, 360 video or VR content. While the development of a new codec can take years, a more general compression framework based on neural networks may be able to adapt much more quickly to these changing tasks and environments. Unfortunately, lossy compression is an inherently non-differentiable problem. In particular, quantization is an integral part of the compression pipeline but is not differentiable. This makes it difficult to train neural networks for this task. Existing transformations have typically been manually chosen (e.g., the DCT transformation used in JPEG) or have been optimized for a task different from lossy compression (e.g. Testa & Rossi, 2016, used denoising autoencoders for compression). In contrast to most previous work, but in line with Ballé et al. (2016), we here aim at directly optimizing the rate-distortion tradeoff produced by an autoencoder. We propose a simple but effective approach for dealing with the non-differentiability of rounding-based quantization, and for approximating the non-differentiable cost of coding the generated coefficients.
Using this approach, we achieve performance similar to or better than JPEG 2000 when evaluated for perceptual quality. Unlike JPEG 2000, however, our framework can be optimized for specific content (e.g., thumbnails or non-natural images), arbitrary metrics, and is readily generalizable to other forms of media. Notably, we achieve this performance using efficient neural network architectures which would allow near real-time decoding of large images even on low-powered consumer devices. ![](images/1_0.jpg) <center>Figure 1: Effects of rounding and differentiable alternatives when used as replacements in JPEG compression. A: A crop of an image before compression (GoToVan, 2014). B: Blocking artefacts in JPEG are caused by rounding of DCT coefficients to the nearest integer. Since rounding is used at test time, a good approximation should produce similar artefacts. C: Stochastic rounding to the nearest integer similar to the binarization of Toderici et al. (2016a). D: Uniform additive noise (Ballé et al., 2016). </center> ## 2 COMPRESSIVE AUTOENCODERS We define a compressive autoencoder (CAE) to have three components: an encoder \(f\) , a decoder \(g\) , and a probabilistic model \(Q\) , \[f:\mathbb{R}^{N}\to \mathbb{R}^{M},\quad g:\mathbb{R}^{M}\to \mathbb{R}^{N},\quad Q:\mathbb{Z}^{M}\to [0,1]. \quad (1)\] The discrete probability distribution defined by \(Q\) is used to assign a number of bits to representations based on their frequencies, that is, for entropy coding. All three components may have parameters and our goal is to optimize the tradeoff between using a small number of bits and having small distortion, \[-\underbrace{\log_{2}Q\left([f(\mathbf{x})]\right)}_{{\mathrm{Number~of~bits}}}+\beta\cdot\underbrace{d\left(\mathbf{x},g([f(\mathbf{x})])\right)}_{{\mathrm{Distortion}}}. \quad (2)\] Here, \(\beta\) controls the tradeoff, square brackets indicate quantization through rounding to the nearest integer, and \(d\) measures the distortion introduced by coding and decoding. The quantized output of the encoder is the code used to represent an image and is stored losslessly. The main source of information loss is the quantization (Appendix A.3). Additional information may be discarded by the encoder, and the decoder may not perfectly decode the available information, increasing distortion. Unfortunately we cannot optimize Equation 2 directly using gradient-based techniques, as \(Q\) and \([\cdot ]\) are non-differentiable. The following two sections propose a solution to deal with this problem. ### 2.1 QUANTIZATION AND DIFFERENTIABLE ALTERNATIVES The derivative of the rounding function is zero everywhere except at integers, where it is undefined. We propose to replace its derivative in the backward pass of backpropagation (Rumelhart et al., 1986) with the derivative of a smooth approximation, \(r\) , that is, effectively defining the derivative to be \[\frac{d}{d y}\left[y\right]:= \frac{d}{d y} r(y). \quad (3)\] Importantly, we do not fully replace the rounding function with a smooth approximation but only its derivative, which means that quantization is still performed as usual in the forward pass. If we replaced rounding with a smooth approximation completely, the decoder might learn to invert the smooth approximation, thereby removing the information bottleneck that forces the network to compress information. Empirically, we found the identity, \(r(y) = y\) , to work as well as more sophisticated choices.
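As a minimal sketch of this trick with the identity surrogate \(r(y) = y\): the forward pass rounds as usual, while the backward pass returns incoming gradients unchanged (in an autodiff framework this would correspond to overriding the gradient of the rounding op).

```python
import numpy as np

def quantize_forward(y):
    # Forward pass: genuine rounding to the nearest integer, exactly as at
    # test time, so the information bottleneck is kept intact.
    return np.round(y)

def quantize_backward(grad_wrt_output):
    # Backward pass: the derivative of rounding is replaced by that of the
    # smooth surrogate r(y) = y, so gradients pass through unchanged.
    return grad_wrt_output
```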
This makes this operation easy to implement, as we simply have to pass gradients without modification from the decoder to the encoder. Note that the gradient with respect to the decoder's parameters can be computed without resorting to approximations, assuming \(d\) is differentiable. In contrast to related approaches, our approach has the advantage that it does not change the gradients of the decoder, since the forward pass is kept the same. In the following, we discuss alternative approaches proposed by other authors. Motivated by theoretical links to dithering, Ballé et al. (2016) proposed to replace quantization by additive uniform noise, \[[f(\mathbf{x})] \approx f(\mathbf{x}) + \mathbf{u}. \quad (4)\] Toderici et al. (2016a), on the other hand, used a stochastic form of binarization (Williams, 1992). Generalizing this idea to integers, we define the following stochastic rounding operation: \[\{y\} \approx \lfloor y\rfloor +\epsilon ,\quad \epsilon \in \{0,1\} ,\quad P(\epsilon = 1) = y - \lfloor y\rfloor , \quad (5)\] where \(\lfloor \cdot \rfloor\) is the floor operator. In the backward pass, the derivative is replaced with the derivative of the expectation, \[\frac{d}{d y}\{y\} := \frac{d}{d y}\mathbb{E}\left[\{y\} \right] = \frac{d}{d y} y = 1. \quad (6)\] Figure 1 shows the effect of using these two alternatives as part of JPEG, whose encoder and decoder are based on a block-wise DCT transformation (Pennebaker & Mitchell, 1993). Note that the output is visibly different from the output produced with regular quantization by rounding and that the error signal sent to the autoencoder depends on these images. Whereas in Fig. 1B the error signal received by the decoder would be to remove blocking artefacts, the signal in Fig. 1D will be to remove high-frequency noise. We expect this difference to be less of a problem with simple metrics such as mean-squared error and to have a bigger impact when using more perceptually meaningful measures of distortion. An alternative would be to use the latter approximations only for the gradient of the encoder but not for the gradients of the decoder. While this is possible, it comes at the cost of increased computational and implementational complexity, since we would have to perform the forward and backward pass through the decoder twice: once using rounding, once using the approximation. With our approach the gradient of the decoder is correct even for a single forward and backward pass. ### 2.2 ENTROPY RATE ESTIMATION Since \(Q\) is a discrete function, we cannot differentiate it with respect to its argument, which prevents us from computing a gradient for the encoder. To solve this problem, we use a continuous, differentiable approximation. We upper-bound the non-differentiable number of bits by first expressing the model's distribution \(Q\) in terms of a probability density \(q\) , \[Q(\mathbf{z}) = \int_{[-.5,.5]^{M}}q(\mathbf{z} + \mathbf{u})d\mathbf{u}. \quad (7)\] An upper bound is given by: \[-\log_{2}Q\left(\mathbf{z}\right) = -\log_{2}\int_{[-.5,.5]^{M}}q(\mathbf{z} + \mathbf{u})d\mathbf{u}\leq \int_{[-.5,.5]^{M}} - \log_{2}q(\mathbf{z} + \mathbf{u})d\mathbf{u}, \quad (8)\] where the second step follows from Jensen's inequality (see also Theis et al., 2016). An unbiased estimate of the upper bound is obtained by sampling \(\mathbf{u}\) from the unit cube \([- .5,.5]^{M}\) . If we use a differentiable density, this estimate will be differentiable in \(\mathbf{z}\) and therefore can be used to train the encoder.
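Both the stochastic rounding of Equation 5 and the one-sample estimate of the upper bound in Equation 8 are simple to sketch; `log2_q` below is an assumed differentiable log2-density provided by the model.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(y):
    # Equation 5: round down, then add 1 with probability equal to the
    # fractional part, so that E[{y}] = y.
    floor = np.floor(y)
    return floor + (rng.random(y.shape) < (y - floor))

def estimated_bits(z, log2_q):
    # Unbiased one-sample estimate of the upper bound in Equation 8:
    # draw u uniformly from [-0.5, 0.5]^M and evaluate -log2 q(z + u).
    u = rng.random(z.shape) - 0.5
    return -np.sum(log2_q(z + u))
```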
### 2.3 VARIABLE BIT RATES In practice we often want fine-grained control over the number of bits used. One way to achieve this is to train an autoencoder for different rate-distortion tradeoffs. But this would require us to train and store a potentially large number of models. To reduce these costs, we finetune a pre-trained autoencoder for different rates by introducing scale parameters \(\lambda \in \mathbb{R}^{M}\) , \[-\log_{2}q\left(\left[f(\mathbf{x})\circ \lambda \right] + \mathbf{u}\right) + \beta \cdot d\left(\mathbf{x},g(\left[f(\mathbf{x})\circ \lambda \right] / \lambda)\right). \quad (9)\] Here, \(\circ\) indicates point-wise multiplication and division is also performed point-wise. To reduce the number of trainable scales, they may furthermore be shared across dimensions. Where \(f\) and \(g\) are convolutional, for example, we share scale parameters across spatial dimensions but not across channels. An example of learned scale parameters is shown in Figure 3A. For more fine-grained control over bit rates, the optimized scales can be interpolated. ### 2.4 RELATED WORK Perhaps most closely related to our work is the work of Ballé et al. (2016). The main differences lie in the way we deal with quantization (see Section 2.1) and entropy rate estimation. The transformations used by Ballé et al. (2016) consist of a single linear layer combined with a form of contrast gain control, while our framework relies on more standard deep convolutional neural networks. Toderici et al. (2016a) proposed to use recurrent neural networks (RNNs) for compression. Instead of entropy coding as in our work, the network tries to minimize the distortion for a given number of bits. The image is encoded in an iterative manner, and decoding is performed in each step to be able to take into account residuals at the next iteration. An advantage of this design is that it allows for progressive coding of images. A disadvantage is that compression is much more time-consuming than in our approach, as we use efficient convolutional neural networks and do not necessarily require decoding at the encoding stage. Gregor et al. (2016) explored using variational autoencoders with recurrent encoders and decoders for compression of small images. This type of autoencoder is trained to maximize the lower bound of a log-likelihood, or equivalently to minimize \[-\mathbb{E}_{p(\mathbf{y}|\mathbf{x})}\left[\log \frac{q(\mathbf{y})q(\mathbf{x}\mid\mathbf{y})}{p(\mathbf{y}\mid\mathbf{x})}\right], \quad (10)\] where \(p(\mathbf{y}\mid \mathbf{x})\) plays the role of the encoder, and \(q(\mathbf{x}\mid \mathbf{y})\) plays the role of the decoder. While Gregor et al. (2016) used a Gaussian distribution for the encoder, we can link their approach to the work of Ballé et al. (2016) by assuming it to be uniform, \(\mathbf{y} = f(\mathbf{x}) + \mathbf{u}\) . If we also assume a Gaussian likelihood with fixed variance, \(q(\mathbf{x}\mid \mathbf{y}) = \mathcal{N}(\mathbf{x}\mid g(\mathbf{y}),\sigma^{2}\mathbf{I})\) , the objective function can be written \[\mathbb{E}_{\mathbf{u}}\left[-\log q(f(\mathbf{x}) + \mathbf{u}) + \frac{1}{2\sigma^{2}} ||\mathbf{x} - g(f(\mathbf{x}) + \mathbf{u})||^{2}\right] + C. \quad (11)\] Here, \(C\) is a constant which encompasses the negative entropy of the encoder and the normalization constant of the Gaussian likelihood.
Note that this equation is identical to a rate-distortion trade-off with \(\beta = \sigma^{- 2} / 2\) and quantization replaced by additive uniform noise. However, not all distortions have an equivalent formulation as a variational autoencoder (Kingma & Welling, 2014). This only works if \(e^{- d(\mathbf{x},\mathbf{y})}\) is normalizable in \(\mathbf{x}\) and the normalization constant does not depend on \(\mathbf{y}\) , or otherwise \(C\) will not be constant. A direct empirical comparison of our approach with variational autoencoders is provided in Appendix A.5. Ollivier (2015) discusses variational autoencoders for lossless compression as well as connections to denoising autoencoders. ![](images/4_0.jpg) <center>Figure 2: Illustration of the compressive autoencoder architecture used in this paper. Inspired by the work of Shi et al. (2016), most convolutions are performed in a downsampled space to speed up computation, and upsampling is performed using sub-pixel convolutions (convolutions followed by reshaping/reshuffling of the coefficients). To reduce clutter, only two residual blocks of the encoder and the decoder are shown. Convolutions followed by leaky rectifications are indicated by solid arrows, while transparent arrows indicate absence of additional nonlinearities. As a model for the distributions of quantized coefficients we use Gaussian scale mixtures. The notation \(C \times K \times K\) refers to \(K \times K\) convolutions with \(C\) filters. The number following the slash indicates stride in the case of convolutions, and upsampling factors in the case of sub-pixel convolutions. </center> ## 3 EXPERIMENTS ### 3.1 ENCODER, DECODER, AND ENTROPY MODEL We use common convolutional neural networks (LeCun et al., 1998) for the encoder and the decoder of the compressive autoencoder. Our architecture was inspired by the work of Shi et al. (2016), who demonstrated that super-resolution can be achieved much more efficiently by operating in the low-resolution space, that is, by convolving images and then upsampling instead of upsampling first and then convolving an image. The first two layers of the encoder perform preprocessing, namely mirror padding and a fixed pixelwise normalization. The mirror-padding was chosen such that the output of the encoder has the same spatial extent as an 8 times downsampled image. The normalization centers the distribution of each channel's values and ensures it has approximately unit variance. Afterwards, the image is convolved and spatially downsampled while at the same time increasing the number of channels to 128. This is followed by three residual blocks (He et al., 2015), where each block consists of an additional two convolutional layers with 128 filters each. A final convolutional layer is applied and the coefficients downsampled again before quantization through rounding to the nearest integer. The decoder mirrors the architecture of the encoder (Figure 9). Instead of mirror-padding and valid convolutions, we use zero-padded convolutions. Upsampling is achieved through convolution followed by a reorganization of the coefficients. This reorganization turns a tensor with many channels into a tensor of the same dimensionality but with fewer channels and larger spatial extent (for details, see Shi et al., 2016). A convolution and reorganization of coefficients together form a sub-pixel convolution layer. Following three residual blocks, two sub-pixel convolution layers upsample the image to the resolution of the input.
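The coefficient reshuffling behind a sub-pixel convolution can be sketched as follows, assuming a (channels, height, width) layout and one common ordering of the sub-pixel positions:

```python
import numpy as np

def pixel_shuffle(x, r):
    # Turn a (C*r*r, H, W) tensor into (C, H*r, W*r): the reorganization
    # that, applied after a convolution, forms a sub-pixel convolution
    # layer (Shi et al., 2016).
    c, h, w = x.shape
    assert c % (r * r) == 0
    x = x.reshape(c // (r * r), r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)          # -> (C, H, r, W, r)
    return x.reshape(c // (r * r), h * r, w * r)
```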
Finally, after denormalization, the pixel values are clipped to the range of 0 to 255. Similar to how we deal with gradients of the rounding function, we redefine the gradient of the clipping function to be 1 outside the clipped range. This ensures that the training signal is non-zero even when the decoded pixels are outside this range (Appendix A.1). To model the distribution of coefficients and estimate the bit rate, we use independent Gaussian scale mixtures (GSMs), \[\log_{2}q(\mathbf{z} + \mathbf{u}) = \sum_{i,j,k}\log_{2}\sum_{s}\pi_{ks}\mathcal{N}(z_{kij} + u_{kij};0,\sigma_{ks}^{2}), \quad (12)\] where \(i\) and \(j\) iterate over spatial positions, and \(k\) iterates over channels of the coefficients for a single image \(\mathbf{z}\) . GSMs are well established as useful building blocks for modelling filter responses of natural images (e.g., Portilla et al., 2003). We used 6 scales in each GSM. Rather than using the more common parametrization above, we parametrized the GSM so that it can be easily used with gradient-based methods, optimizing log-weights and log-precisions rather than weights and variances. We note that the leptokurtic nature of GSMs (Andrews & Mallows, 1974) means that the rate term encourages sparsity of coefficients. All networks were implemented in Python using Theano (2016) and Lasagne (Dieleman et al., 2015). For entropy encoding of the quantized coefficients, we first created Laplace-smoothed histogram estimates of the coefficient distributions across a training set. The estimated probabilities were then used with a publicly available BSD licensed implementation of a range coder<sup>2</sup>. ![](images/5_0.jpg) <center>Figure 3: A: Scale parameters obtained by finetuning a compressive autoencoder (blue). More fine-grained control over bit rates can be achieved by interpolating scales (gray). Each dot corresponds to the scale parameter of one coefficient for a particular rate-distortion trade-off. The coefficients are ordered due to the incremental training procedure. B: Comparison of incremental training versus non-incremental training. The learning rate was decreased after 116,000 iterations (bottom two lines). Non-incremental training is initially less stable and shows worse performance at later iterations. Using a small learning rate from the beginning stabilizes non-incremental training but is considerably slower (top line). </center> ### 3.2 INCREMENTAL TRAINING All models were trained using Adam (Kingma & Ba, 2015) applied to batches of 32 images \(128 \times 128\) pixels in size. We found it beneficial to optimize coefficients in an incremental manner (Figure 3B). This is done by introducing an additional binary mask \(\mathbf{m}\) , \[-\log_{2}q\left([f(\mathbf{x})]\circ \mathbf{m} + \mathbf{u}\right) + \beta \cdot d\left(\mathbf{x},g([f(\mathbf{x})]\circ \mathbf{m})\right). \quad (13)\] Initially, all but 2 entries of the mask are set to zero. Networks are trained until performance improvements fall below a threshold, and then another coefficient is enabled by setting an entry of the binary mask to 1. After all coefficients have been enabled, the learning rate is reduced from an initial value of \(10^{- 4}\) to \(10^{- 5}\) . Training was performed for up to \(10^{6}\) updates but usually reached good performance much earlier.
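A sketch of the entropy model and the incremental mask; the shapes and the log-parametrization follow Equations 12 and 13, while the variable names are illustrative. The rate term of Equation 2 is then estimated as minus the sum of the returned log2-densities over all coefficients.

```python
import numpy as np

def gsm_log2_density(x, log_weights, log_precisions):
    # Equation 12: an independent Gaussian scale mixture per channel k with
    # S scales, parametrized via log-weights and log-precisions so that
    # gradient-based optimizers can update them freely.
    # x: coefficients plus uniform noise, shape (K, H, W);
    # log_weights, log_precisions: shape (K, S).
    w = np.exp(log_weights)
    w = w / w.sum(axis=1, keepdims=True)               # mixture weights pi_ks
    prec = np.exp(log_precisions)[:, :, None, None]    # precisions 1/sigma^2
    comp = np.sqrt(prec / (2.0 * np.pi)) * np.exp(-0.5 * prec * x[:, None] ** 2)
    return np.log2((w[:, :, None, None] * comp).sum(axis=1))  # (K, H, W)

def masked_codes(z, mask):
    # Equation 13: a binary mask enables coefficients one at a time during
    # incremental training; disabled coefficients are simply zeroed out.
    return z * mask
```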
After a model has been trained for a fixed rate-distortion trade-off \((\beta)\) , we introduce and fine-tune scale parameters (Equation 9) for other values of \(\beta\) while keeping all other parameters fixed. Here we used an initial learning rate of \(10^{- 3}\) and continuously decreased it by a factor of \(\tau^{\kappa} / (\tau + t)^{\kappa}\) , where \(t\) is the current number of updates performed, \(\kappa = .8\) , and \(\tau = 1000\) . Scales were optimized for 10,000 iterations. For even more fine-grained control over the bit rates, we interpolated between scales optimized for nearby rate-distortion tradeoffs. ![](images/6_0.jpg) <center>Figure 4: Comparison of different compression algorithms with respect to PSNR, SSIM, and MSSSIM on the Kodak PhotoCD image dataset. We note that the blue line refers to the results of Toderici et al. (2016b) achieved without entropy encoding. </center> ### 3.3 NATURAL IMAGES We trained compressive autoencoders on 434 high quality images licensed under creative commons and obtained from flickr.com. The images were downsampled to below \(1536 \times 1536\) pixels and stored as lossless PNGs to avoid compression artefacts. From these images, we extracted \(128 \times 128\) crops to train the network. Mean squared error was used as a measure of distortion during training. Hyperparameters affecting network architecture and training were evaluated on a small set of held-out Flickr images. For testing, we use the commonly used Kodak PhotoCD dataset of 24 uncompressed \(768 \times 512\) pixel images<sup>3</sup>. We compared our method to JPEG (Wallace, 1991), JPEG 2000 (Skodras et al., 2001), and the RNN-based method of Toderici et al. (2016b)<sup>4</sup>. Bits for header information were not counted towards the bit rate of JPEG and JPEG 2000. Among the different variants of JPEG, we found that optimized JPEG with 4:2:0 chroma sub-sampling generally worked best (Appendix A.2). While fine-tuning a single compressive autoencoder for a wide range of bit rates worked well, optimizing all parameters of a network for a particular rate-distortion trade-off still worked better. We here chose the compromise of combining autoencoders trained for low, medium or high bit rates (see Appendix A.4 for details). For each image and bit rate, we choose the autoencoder producing the smallest distortion. This increases the time needed to compress an image, since an image has to be encoded and decoded multiple times. However, decoding an image is still just as fast, since it only requires choosing and running one decoder network. A more efficient but potentially less performant solution would be to always choose the same autoencoder for a given rate-distortion tradeoff. We added 1 byte to the coding cost to encode which autoencoder of an ensemble is used. Rate-distortion curves averaged over all test images are shown in Figure 4. We evaluated the different methods in terms of PSNR, SSIM (Wang et al., 2004a), and multiscale SSIM (MS-SSIM; Wang et al., 2004b). We used the implementation of van der Walt et al. (2014) for SSIM and the implementation of Toderici et al. (2016b) for MS-SSIM.
We find that in terms of PSNR, our method performs similarly to JPEG 2000, although slightly worse at low and medium bit rates and slightly better at high bit rates. In terms of SSIM, our method outperforms all other tested methods. MS-SSIM produces very similar scores for all methods, except at very low bit rates. However, we also find these results to be highly image dependent. Results for individual images are provided as supplementary material<sup>5</sup>. In Figure 5 we show crops of images compressed to low bit rates. In line with quantitative results, we find that JPEG 2000 reconstructions appear visually more similar to CAE reconstructions than those of other methods. However, artefacts produced by JPEG 2000 seem more noisy than CAE's, which are smoother and sometimes appear Gabor-filter-like. ![](images/7_0.jpg) <center>Figure 5: Closeups of images produced by different compression algorithms at relatively low bit rates. The second row shows an example where our method performs well, producing sharper lines and fewer artefacts than other methods. The fourth row shows an example where our method struggles, producing noticeable artefacts in the hair and discolouring the skin. At higher bit rates, these problems disappear and CAE reconstructions appear sharper than those of JPEG 2000 (fifth row). Complete images are provided in Appendix A.6. </center> To quantify the subjective quality of compressed images, we ran a mean opinion score (MOS) test. While MOS tests have their limitations, they are a widely used standard for evaluating perceptual quality (Streijl et al., 2014). Our MOS test set included the 24 full-resolution uncompressed originals from the Kodak dataset, as well as the same images compressed using each of four algorithms at or near three different bit rates: 0.25, 0.375 and 0.5 bits per pixel. Only the low-bit-rate CAE was included in this test. For each image, we chose the CAE setting which produced the highest bit rate but did not exceed the target bit rate. The average bit rates of CAE compressed images were 0.24479, 0.36446, and 0.48596, respectively. We then chose the smallest quality factor for JPEG and JPEG 2000 for which the bit rate exceeded that of the CAE. The average bit rates for JPEG were 0.25221, 0.37339 and 0.49534, for JPEG 2000 0.24631, 0.36748 and 0.49373. For some images the bit rate of the CAE at the lowest setting was still higher than the target bit rate. These images were excluded from the final results, leaving 15, 21, and 23 images, respectively. The perceptual quality of the resulting 273 images was rated by \(n = 24\) non-expert evaluators. One evaluator did not finish the experiment, so her data was discarded. The images were presented to each individual in a random order. The evaluators gave a discrete opinion score for each image on a scale from 1 (bad) to 5 (excellent). Before the rating began, subjects were presented an uncompressed calibration image of the same dimensions as the test images (but not from the Kodak dataset). They were then shown four versions of the calibration image using the worst quality setting of all four compression methods, and given the instruction "These are examples of compressed images. These are some of the worst quality examples." ![](images/8_0.jpg) <center>Figure 6: Results of a mean opinion score test. </center> Figure 6 shows average MOS results for each algorithm at each bit rate. 95% confidence intervals were computed via bootstrapping. We found that CAE and JPEG 2000 achieved higher MOS than JPEG or the method of Toderici et al. (2016b) at all bit rates we tested. We also found that CAE significantly outperformed JPEG 2000 at \(0.375\) bpp ( \(p < 0.05\) ) and \(0.5\) bpp ( \(p < 0.001\) ).
## 4 DISCUSSION

We have introduced a simple but effective way of dealing with non-differentiability in training autoencoders for lossy compression. Together with an incremental training strategy, this enabled us to achieve better performance than JPEG 2000 in terms of SSIM and MOS scores. Notably, this performance was achieved using an efficient convolutional architecture, combined with simple rounding-based quantization and a simple entropy coding scheme. Existing codecs often benefit from hardware support, allowing them to run at low energy costs. However, hardware chips optimized for convolutional neural networks are likely to be widely available soon, given that these networks are now key to good performance in so many applications.

While other trained algorithms have been shown to provide similar results as JPEG 2000 (e.g., van den Oord & Schrauwen, 2014), to our knowledge this is the first time that an end-to-end trained architecture has been demonstrated to achieve this level of performance on high-resolution images. An end-to-end trained autoencoder has the advantage that it can be optimized for arbitrary metrics. Unfortunately, research on perceptually relevant metrics suitable for optimization is still in its infancy (e.g., Dosovitskiy & Brox, 2016; Ballé et al., 2016). While perceptual metrics exist which correlate well with human perception for certain types of distortions (e.g., Wang et al., 2004a; Laparra et al., 2016), developing a perceptual metric which can be optimized is a more challenging task, since this requires the metric to behave well for a much larger variety of distortions and image pairs.

In future work, we would like to explore the optimization of compressive autoencoders for different metrics. A promising direction was presented by Bruna et al. (2016), who achieved interesting super-resolution results using metrics based on neural networks trained for image classification. Gatys et al. (2016) used similar representations to achieve a breakthrough in perceptually meaningful style transfer. An alternative to perceptual metrics may be to use generative adversarial networks (GANs; Goodfellow et al., 2014). Building on the work of Bruna et al. (2016) and Dosovitskiy & Brox (2016), Ledig et al. (2016) recently demonstrated impressive super-resolution results by combining GANs with feature-based metrics.

## ACKNOWLEDGMENTS

We would like to thank Zehan Wang, Aly Tejani, Clément Farabet, and Luke Alonso for helpful feedback on the manuscript.

## REFERENCES

D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society, Series B, 36(1):99-102, 1974.

J. Ballé, V. Laparra, and E. P. Simoncelli. End-to-end optimization of nonlinear transform codes for perceptual quality. In Picture Coding Symposium, 2016.

J. Bruna, P. Sprechmann, and Y. LeCun. Super-resolution with deep convolutional sufficient statistics. In The International Conference on Learning Representations, 2016.

S. Dieleman, J. Schlüter, C. Raffel, E. Olson, S. K. Sønderby, D. Nouri, D. Maturana, M. Thoma, E. Battenberg, J. Kelly, J. De Fauw, M. Heilman, D. Moitinho de Almeida, B. McFee, H. Weideman, G. Takács, P. de Rivaz, J. Crall, G. Sanders, K. Rasul, C. Liu, G. French, and J. Degrave. Lasagne: First release, 2015. URL http://dx.doi.org/10.5281/zenodo.27878.

A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks, 2016. arXiv:1602.02644.

L. A. Gatys, A. S. Ecker, and M. Bethge.
Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, 2014.

GoToVan. Canada day parade, 2014. URL https://www.flickr.com/photos/gotovan/14579921203.

K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. Towards conceptual compression, 2016. arXiv:1601.06759.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition, 2015. arXiv:1512.03385.

D. Kingma and M. Welling. Auto-encoding variational bayes. In International Conference on Learning Representations, 2014.

D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In The International Conference on Learning Representations, 2015.

V. Laparra, J. Ballé, and E. P. Simoncelli. Perceptual image quality assessment using a normalized Laplacian pyramid. In SPIE, Conf. on Human Vision and Electronic Imaging, XXI, 2016.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 1998.

C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi. Photo-realistic single image super-resolution using a generative adversarial network, 2016. arXiv:1609.04802.

Y. Ollivier. Auto-encoders: reconstruction versus compression, 2015. arXiv:1403.7752.

W. B. Pennebaker and J. L. Mitchell. JPEG still image data compression standard. Springer, 3rd edition, 1993.

J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Image Process., 12(11):1338-1351, 2003.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533-536, 1986.

W. Shi, J. Caballero, F. Huszár, J. Totz, A. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In IEEE Conf. on Computer Vision and Pattern Recognition, 2016.

A. Skodras, C. Christopoulos, and T. Ebrahimi. The JPEG 2000 still image compression standard. Signal Processing Magazine, 18(5):36-58, 2001.

R. C. Streijl, S. Winkler, and D. S. Hands. Mean opinion score (MOS) revisited: methods and applications, limitations and alternatives. Multimedia Systems, pp. 1-15, 2014.

D. Del Testa and M. Rossi. Lightweight lossy compression of biometric patterns via denoising autoencoders. IEEE Signal Processing Letters, 22(12), 2016.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions, 2016. arXiv:1605.02688.

L. Theis and M. Bethge. Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems 28, 2015.

L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In The International Conference on Learning Representations, 2016.

G. Toderici, S. M. O'Malley, S. J. Hwang, D. Vincent, D. Minnen, S. Baluja, M. Covell, and R. Sukthankar. Variable rate image compression with recurrent neural networks. In The International Conference on Learning Representations, 2016a.

G. Toderici, D. Vincent, N. Johnston, S. J. Hwang, D. Minnen, J. Shor, and M. Covell. Full resolution image compression with recurrent neural networks, 2016b. arXiv:1608.05148v1.
A. van den Oord and B. Schrauwen. The Student-t mixture as a natural image patch prior with application to image compression. Journal of Machine Learning Research, 15(1):2061-2086, 2014.

A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning, 2016a.

A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu. Conditional image generation with PixelCNN decoders, 2016b. arXiv:1606.05328v2.

S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu. scikit-image: image processing in Python. PeerJ, 2, 2014.

G. K. Wallace. The JPEG still picture compression standard. Communications of the ACM, 34(4):30-44, 1991.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004a.

Z. Wang, E. P. Simoncelli, and A. C. Bovik. Multiscale structural similarity for image quality assessment. In Conference Record of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers, volume 2, pp. 1398-1402, 2004b.

R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

## A APPENDIX

## A.1 GRADIENT OF CLIPPING

We redefine the gradient of the clipping operation to be constant,

\[\frac{d}{d\hat{x}}\mathrm{clip}_{0,255}(\hat{x}) := 1. \quad (14)\]

Consider how this affects the gradients of a squared loss,

\[\frac{d}{d\hat{x}}\left(\mathrm{clip}_{0,255}(\hat{x}) - x\right)^2 = 2\left(\mathrm{clip}_{0,255}(\hat{x}) - x\right)\frac{d}{d\hat{x}}\mathrm{clip}_{0,255}(\hat{x}). \quad (15)\]

Assume that \(\hat{x}\) is larger than 255. Without redefinition of the derivative, the error signal will be 0 and not helpful. Without any clipping, the error signal will depend on the value of \(\hat{x}\), even though any value above 255 will have the same effect on the loss at test time. On the other hand, using clipping but a different signal in the backward pass is intuitive, as it yields an error signal which is proportional to the error that would also be incurred at test time.

## A.2 DIFFERENT MODES OF JPEG

We compared optimized and non-optimized JPEG with (4:2:0) and without (4:4:4) chroma sub-sampling. Optimized JPEG computes a Huffman table specific to a given image, while unoptimized JPEG uses a predefined Huffman table. We did not count bits allocated to the header of the file format, but for optimized JPEG we counted the bits required to store the Huffman table. We found that on average, chroma-subsampled and optimized JPEG performed better on the Kodak dataset (Figure 7).

## A.3 COMPRESSION VS DIMENSIONALITY REDUCTION

Since a single real number can carry as much information as a high-dimensional entity, dimensionality reduction alone does not amount to compression. However, if we constrain the architecture of the encoder, it may be forced to discard certain information. To better understand how much information is lost due to dimensionality reduction and how much is lost due to quantization, Figure 8 shows reconstructions produced by a compressive autoencoder with and without quantization. The effect of dimensionality reduction is minimal compared to the effect of quantization.

![](images/12_0.jpg) <center>Figure 7: A comparison of different JPEG modes on the Kodak PhotoCD image dataset.
Optimized Huffman tables perform better than default Huffman tables. </center>

![](images/12_1.jpg) <center>Figure 8: To disentangle the effects of quantization and dimensionality reduction, we reconstructed images with quantization disabled. A: The original uncompressed image. B: A reconstruction generated by a compressive autoencoder, but with the rounding operation removed. The dimensionality of the encoder's output is \(3 \times\) smaller than the input. C: A reconstruction generated by the same compressive autoencoder. While the effects of dimensionality reduction are almost imperceptible, quantization introduces visible artefacts. </center>

## A.4 ENSEMBLE

![](images/12_2.jpg) <center>Figure 9: Comparison of CAEs optimized for low, medium, or high bit rates. </center>

To bring the parameter controlling the rate-distortion trade-off into a more intuitive range, we rescaled the distortion term and expressed the objective as follows:

\[-\frac{\alpha}{N}\ln q\left(\left[f(\mathbf{x})\circ \lambda \right] + \mathbf{u}\right) + \frac{1 - \alpha}{1000\cdot M}\cdot \left\| \mathbf{x} - g(\left[f(\mathbf{x})\circ \lambda \right] / \lambda)\right\|^{2}. \quad (16)\]

Here, \(N\) is the number of coefficients produced by the encoder and \(M\) is the dimensionality of \(\mathbf{x}\) (i.e., 3 times the number of pixels). The high-bit-rate CAE was trained with \(\alpha = 0.01\) and 96 output channels, the medium-bit-rate CAE was trained with \(\alpha = 0.05\) and 96 output channels, and the low-bit-rate CAE was trained with \(\alpha = 0.2\) and 64 output channels.

## A.5 COMPARISON WITH VAE

![](images/13_0.jpg) <center>Figure 10: An alternative to our approach is to replace the rounding function with additive uniform noise during training (Ballé et al., 2016). Using mean-squared error for measuring distortion, optimizing rate-distortion this way is equivalent to training a variational autoencoder (Kingma & Welling, 2014) with a Gaussian likelihood and uniform encoder (Section 2.4). Using the same training procedure and autoencoder architecture for both approaches (here trained for high bit rates), we find that additive noise performs worse than redefining derivatives as in our approach. Rounding-based quantization is used at test time in both approaches. </center>

## A.6 COMPLETE IMAGES

Below we show complete images corresponding to the crops in Figure 5. For each image, we show the original image (top left), reconstructions using CAE (top right), reconstructions using JPEG 2000 (bottom left), and reconstructions using the method of Toderici et al. (2016b) (bottom right). The images are best viewed on a monitor screen.

![](images/14_0.jpg)

![](images/15_0.jpg)

![](images/16_0.jpg)

![](images/17_0.jpg)

![](images/18_0.jpg)
## ABSTRACT

We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression, which used coarser approximations, shallower architectures, or computationally expensive methods, or focused on small images.

## 1 INTRODUCTION

Advances in training of neural networks have helped to improve performance in a number of domains, but neural networks have yet to surpass existing codecs in lossy image compression. Promising first results have recently been achieved using autoencoders (Ballé et al., 2016; Toderici et al., 2016b) – in particular on small images (Toderici et al., 2016a; Gregor et al., 2016; van den Oord et al., 2016b) – and neural networks are already achieving state-of-the-art results in lossless image compression (Theis & Bethge, 2015; van den Oord et al., 2016a).

Autoencoders have the potential to address an increasing need for flexible lossy compression algorithms. Depending on the situation, encoders and decoders of different computational complexity are required. When sending data from a server to a mobile device, it may be desirable to pair a powerful encoder with a less complex decoder, but the requirements are reversed when sending data in the other direction. The amount of computational power and bandwidth available also changes over time as new technologies become available. For the purpose of archiving, encoding and decoding times matter less than for streaming applications. Finally, existing compression algorithms may be far from optimal for new media formats such as lightfield images, 360 video or VR content. While the development of a new codec can take years, a more general compression framework based on neural networks may be able to adapt much quicker to these changing tasks and environments.

Unfortunately, lossy compression is an inherently non-differentiable problem. In particular, quantization is an integral part of the compression pipeline but is not differentiable. This makes it difficult to train neural networks for this task. Existing transformations have typically been manually chosen (e.g., the DCT transformation used in JPEG) or have been optimized for a task different from lossy compression (e.g., Del Testa & Rossi, 2016, used denoising autoencoders for compression).

In contrast to most previous work, but in line with Ballé et al. (2016), we here aim at directly optimizing the rate-distortion tradeoff produced by an autoencoder. We propose a simple but effective approach for dealing with the non-differentiability of rounding-based quantization, and for approximating the non-differentiable cost of coding the generated coefficients. Using this approach, we achieve performance similar to or better than JPEG 2000 when evaluated for perceptual quality.
Unlike JPEG 2000, however, our framework can be optimized for specific content (e.g., thumbnails or non-natural images), arbitrary metrics, and is readily generalizable to other forms of media. Notably, we achieve this performance using efficient neural network architectures which would allow near real-time decoding of large images even on low-powered consumer devices.

![](images/1_0.jpg) <center>Figure 1: Effects of rounding and differentiable alternatives when used as replacements in JPEG compression. A: A crop of an image before compression (GoToVan, 2014). B: Blocking artefacts in JPEG are caused by rounding of DCT coefficients to the nearest integer. Since rounding is used at test time, a good approximation should produce similar artefacts. C: Stochastic rounding to the nearest integer similar to the binarization of Toderici et al. (2016a). D: Uniform additive noise (Ballé et al., 2016). </center>

## 2 COMPRESSIVE AUTOENCODERS

We define a compressive autoencoder (CAE) to have three components: an encoder \(f\), a decoder \(g\), and a probabilistic model \(Q\),

\[f:\mathbb{R}^{N}\to \mathbb{R}^{M},\quad g:\mathbb{R}^{M}\to \mathbb{R}^{N},\quad Q:\mathbb{Z}^{M}\to [0,1]. \quad (1)\]

The discrete probability distribution defined by \(Q\) is used to assign a number of bits to representations based on their frequencies, that is, for entropy coding. All three components may have parameters and our goal is to optimize the tradeoff between using a small number of bits and having small distortion,

\[-\underbrace{\log_{2}Q\left([f(\mathbf{x})]\right)}_{\mathrm{Number~of~bits}}+\beta\cdot\underbrace{d\left(\mathbf{x},g([f(\mathbf{x})])\right)}_{\mathrm{Distortion}}. \quad (2)\]

Here, \(\beta\) controls the tradeoff, square brackets indicate quantization through rounding to the nearest integer, and \(d\) measures the distortion introduced by coding and decoding. The quantized output of the encoder is the code used to represent an image and is stored losslessly. The main source of information loss is the quantization (Appendix A.3). Additional information may be discarded by the encoder, and the decoder may not perfectly decode the available information, increasing distortion.

Unfortunately we cannot optimize Equation 2 directly using gradient-based techniques, as \(Q\) and \([\cdot]\) are non-differentiable. The following two sections propose a solution to deal with this problem.

### 2.1 QUANTIZATION AND DIFFERENTIABLE ALTERNATIVES

The derivative of the rounding function is zero everywhere except at integers, where it is undefined. We propose to replace its derivative in the backward pass of backpropagation (Rumelhart et al., 1986) with the derivative of a smooth approximation, \(r\), that is, effectively defining the derivative to be

\[\frac{d}{dy}\left[y\right] := \frac{d}{dy} r(y). \quad (3)\]

Importantly, we do not fully replace the rounding function with a smooth approximation but only its derivative, which means that quantization is still performed as usual in the forward pass. If we replaced rounding with a smooth approximation completely, the decoder might learn to invert the smooth approximation, thereby removing the information bottleneck that forces the network to compress information.

Empirically, we found the identity, \(r(y) = y\), to work as well as more sophisticated choices. This makes this operation easy to implement, as we simply have to pass gradients without modification from the decoder to the encoder.
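As a concrete illustration, here is a minimal PyTorch-style sketch of rounding with the derivative of the identity substituted in the backward pass (Equation 3). This is our reading of the idea, not the authors' Theano/Lasagne code; the function name is our own.

```python
import torch

def round_ste(y):
    # Forward pass: true rounding. Backward pass: torch.round(y) is wrapped
    # in detach(), so its (zero almost everywhere) derivative is discarded
    # and the gradient of the identity r(y) = y flows through instead.
    return y + (torch.round(y) - y).detach()

y = torch.tensor([0.3, 1.7, -2.4], requires_grad=True)
z = round_ste(y)       # forward: tensor([ 0., 2., -2.])
z.sum().backward()
print(y.grad)          # tensor([1., 1., 1.]) -- gradients pass straight through
```

The clipping of decoded pixel values (Appendix A.1) can be treated the same way, with `torch.clamp(y, 0, 255)` in place of `torch.round(y)`.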
Note that the gradient with respect to the decoder's parameters can be computed without resorting to approximations, assuming \(d\) is differentiable. In contrast to related approaches, our approach has the advantage that it does not change the gradients of the decoder, since the forward pass is kept the same. In the following, we discuss alternative approaches proposed by other authors.

Motivated by theoretical links to dithering, Ballé et al. (2016) proposed to replace quantization by additive uniform noise,

\[[f(\mathbf{x})] \approx f(\mathbf{x}) + \mathbf{u}. \quad (4)\]

Toderici et al. (2016a), on the other hand, used a stochastic form of binarization (Williams, 1992). Generalizing this idea to integers, we define the following stochastic rounding operation:

\[\{y\} \approx \lfloor y\rfloor + \epsilon, \quad \epsilon \in \{0,1\}, \quad P(\epsilon = 1) = y - \lfloor y\rfloor, \quad (5)\]

where \(\lfloor \cdot \rfloor\) is the floor operator. In the backward pass, the derivative is replaced with the derivative of the expectation,

\[\frac{d}{dy}\{y\} := \frac{d}{dy}\mathbb{E}\left[\{y\}\right] = \frac{d}{dy} y = 1. \quad (6)\]

Figure 1 shows the effect of using these two alternatives as part of JPEG, whose encoder and decoder are based on a block-wise DCT transformation (Pennebaker & Mitchell, 1993). Note that the output is visibly different from the output produced with regular quantization by rounding and that the error signal sent to the autoencoder depends on these images. Whereas in Fig. 1B the error signal received by the decoder would be to remove blocking artefacts, the signal in Fig. 1D will be to remove high-frequency noise. We expect this difference to be less of a problem with simple metrics such as mean-squared error and to have a bigger impact when using more perceptually meaningful measures of distortion.

An alternative would be to use the latter approximations only for the gradient of the encoder but not for the gradients of the decoder. While this is possible, it comes at the cost of increased computational and implementational complexity, since we would have to perform the forward and backward pass through the decoder twice: once using rounding, once using the approximation. With our approach the gradient of the decoder is correct even for a single forward and backward pass.

### 2.2 ENTROPY RATE ESTIMATION

Since \(Q\) is a discrete function, we cannot differentiate it with respect to its argument, which prevents us from computing a gradient for the encoder. To solve this problem, we use a continuous, differentiable approximation. We upper-bound the non-differentiable number of bits by first expressing the model's distribution \(Q\) in terms of a probability density \(q\),

\[Q(\mathbf{z}) = \int_{[-.5,.5]^{M}} q(\mathbf{z} + \mathbf{u})\, d\mathbf{u}. \quad (7)\]

An upper bound is given by:

\[-\log_{2}Q\left(\mathbf{z}\right) = -\log_{2}\int_{[-.5,.5]^{M}} q(\mathbf{z} + \mathbf{u})\, d\mathbf{u} \leq \int_{[-.5,.5]^{M}} -\log_{2} q(\mathbf{z} + \mathbf{u})\, d\mathbf{u}, \quad (8)\]

where the second step follows from Jensen's inequality (see also Theis et al., 2016). An unbiased estimate of the upper bound is obtained by sampling \(\mathbf{u}\) from the unit cube \([-.5,.5]^{M}\). If we use a differentiable density, this estimate will be differentiable in \(\mathbf{z}\) and therefore can be used to train the encoder.
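To illustrate, a single-sample Monte Carlo estimate of the upper bound in Equation 8 might look as follows; the isotropic Gaussian stand-in for the density \(q\), with a learnable log-scale, is a hypothetical simplification of the paper's Gaussian scale mixture (Section 3.1).

```python
import torch

# Hypothetical stand-in for the model density q: an isotropic Gaussian
# with a learnable log-scale (the paper instead uses Gaussian scale mixtures).
log_scale = torch.zeros(1, requires_grad=True)

def log2_q(x):
    var = torch.exp(2.0 * log_scale)
    log_pdf = -0.5 * (x ** 2 / var + torch.log(2.0 * torch.pi * var))
    return log_pdf / torch.log(torch.tensor(2.0))   # convert nats to bits

def rate_upper_bound(z):
    # A single sample u ~ Uniform([-0.5, 0.5]^M) gives an unbiased and,
    # since q is differentiable, differentiable estimate of Equation 8.
    u = torch.rand_like(z) - 0.5
    return -log2_q(z + u).sum()

z = torch.tensor([0.0, 2.0, -2.0])   # quantized coefficients
bits = rate_upper_bound(z)
bits.backward()                      # gradients reach the density's parameters
```

In a full model, `z` would itself be a differentiable function of the encoder's output, so the same estimate also delivers a gradient to the encoder.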
### 2.3 VARIABLE BIT RATES

In practice we often want fine-grained control over the number of bits used. One way to achieve this is to train an autoencoder for different rate-distortion tradeoffs. But this would require us to train and store a potentially large number of models. To reduce these costs, we finetune a pre-trained autoencoder for different rates by introducing scale parameters \(\lambda \in \mathbb{R}^{M}\),

\[-\log_{2} q\left(\left[f(\mathbf{x})\circ \lambda\right] + \mathbf{u}\right) + \beta \cdot d\left(\mathbf{x}, g(\left[f(\mathbf{x})\circ \lambda\right] / \lambda)\right). \quad (9)\]

Here, \(\circ\) indicates point-wise multiplication and division is also performed point-wise. To reduce the number of trainable scales, they may furthermore be shared across dimensions. Where \(f\) and \(g\) are convolutional, for example, we share scale parameters across spatial dimensions but not across channels. An example of learned scale parameters is shown in Figure 3A. For more fine-grained control over bit rates, the optimized scales can be interpolated.

### 2.4 RELATED WORK

Perhaps most closely related to our work is the work of Ballé et al. (2016). The main differences lie in the way we deal with quantization (see Section 2.1) and entropy rate estimation. The transformations used by Ballé et al. (2016) consist of a single linear layer combined with a form of contrast gain control, while our framework relies on more standard deep convolutional neural networks.

Toderici et al. (2016a) proposed to use recurrent neural networks (RNNs) for compression. Instead of entropy coding as in our work, the network tries to minimize the distortion for a given number of bits. The image is encoded in an iterative manner, and decoding is performed in each step to be able to take into account residuals at the next iteration. An advantage of this design is that it allows for progressive coding of images. A disadvantage is that compression is much more time-consuming than in our approach, as we use efficient convolutional neural networks and do not necessarily require decoding at the encoding stage.

Gregor et al. (2016) explored using variational autoencoders with recurrent encoders and decoders for compression of small images. This type of autoencoder is trained to maximize the lower bound of a log-likelihood, or equivalently to minimize

\[-\mathbb{E}_{p(\mathbf{y}|\mathbf{x})}\left[\log \frac{q(\mathbf{y})\, q(\mathbf{x}\mid\mathbf{y})}{p(\mathbf{y}\mid\mathbf{x})}\right], \quad (10)\]

where \(p(\mathbf{y}\mid \mathbf{x})\) plays the role of the encoder, and \(q(\mathbf{x}\mid \mathbf{y})\) plays the role of the decoder. While Gregor et al. (2016) used a Gaussian distribution for the encoder, we can link their approach to the work of Ballé et al. (2016) by assuming it to be uniform, \(p(\mathbf{y}\mid \mathbf{x}) = f(\mathbf{x}) + \mathbf{u}\). If we also assume a Gaussian likelihood with fixed variance, \(q(\mathbf{x}\mid \mathbf{y}) = \mathcal{N}(\mathbf{x}\mid g(\mathbf{y}), \sigma^{2}\mathbf{I})\), the objective function can be written

\[\mathbb{E}_{\mathbf{u}}\left[-\log q(f(\mathbf{x}) + \mathbf{u}) + \frac{1}{2\sigma^{2}} \|\mathbf{x} - g(f(\mathbf{x}) + \mathbf{u})\|^{2}\right] + C. \quad (11)\]

Here, \(C\) is a constant which encompasses the negative entropy of the encoder and the normalization constant of the Gaussian likelihood. Note that this equation is identical to a rate-distortion trade-off with \(\beta = \sigma^{-2} / 2\) and quantization replaced by additive uniform noise.
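In code, this additive-noise surrogate amounts to swapping the rounding operation for uniform noise during training only; a minimal hypothetical sketch, for contrast with the derivative redefinition of Section 2.1:

```python
import torch

def quantize(y, training):
    # Training: replace rounding with uniform noise on [-0.5, 0.5], which
    # keeps the objective differentiable (Equation 4 / Equation 11).
    if training:
        return y + (torch.rand_like(y) - 0.5)
    # Test time: actual rounding, followed by entropy coding elsewhere.
    return torch.round(y)
```

Appendix A.5 compares this surrogate empirically with the derivative redefinition used in this paper; rounding is restored at test time in both cases.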
However, not all distortions have an equivalent formulation as a variational autoencoder (Kingma & Welling, 2014). This only works if \(e^{-d(\mathbf{x},\mathbf{y})}\) is normalizable in \(\mathbf{x}\) and the normalization constant does not depend on \(\mathbf{y}\), or otherwise \(C\) will not be constant. A direct empirical comparison of our approach with variational autoencoders is provided in Appendix A.5. Ollivier (2015) discusses variational autoencoders for lossless compression as well as connections to denoising autoencoders.

![](images/4_0.jpg) <center>Figure 2: Illustration of the compressive autoencoder architecture used in this paper. Inspired by the work of Shi et al. (2016), most convolutions are performed in a downsampled space to speed up computation, and upsampling is performed using sub-pixel convolutions (convolutions followed by reshaping/reshuffling of the coefficients). To reduce clutter, only two residual blocks of the encoder and the decoder are shown. Convolutions followed by leaky rectifications are indicated by solid arrows, while transparent arrows indicate absence of additional nonlinearities. As a model for the distributions of quantized coefficients we use Gaussian scale mixtures. The notation \(C \times K \times K\) refers to \(K \times K\) convolutions with \(C\) filters. The number following the slash indicates stride in the case of convolutions, and upsampling factors in the case of sub-pixel convolutions. </center>

## 3 EXPERIMENTS

### 3.1 ENCODER, DECODER, AND ENTROPY MODEL

We use common convolutional neural networks (LeCun et al., 1998) for the encoder and the decoder of the compressive autoencoder. Our architecture was inspired by the work of Shi et al. (2016), who demonstrated that super-resolution can be achieved much more efficiently by operating in the low-resolution space, that is, by convolving images and then upsampling instead of upsampling first and then convolving an image.

The first two layers of the encoder perform preprocessing, namely mirror padding and a fixed pixelwise normalization. The mirror padding was chosen such that the output of the encoder has the same spatial extent as an 8 times downsampled image. The normalization centers the distribution of each channel's values and ensures it has approximately unit variance. Afterwards, the image is convolved and spatially downsampled while at the same time increasing the number of channels to 128. This is followed by three residual blocks (He et al., 2015), where each block consists of an additional two convolutional layers with 128 filters each. A final convolutional layer is applied and the coefficients downsampled again before quantization through rounding to the nearest integer.

The decoder mirrors the architecture of the encoder (Figure 2). Instead of mirror padding and valid convolutions, we use zero-padded convolutions. Upsampling is achieved through convolution followed by a reorganization of the coefficients. This reorganization turns a tensor with many channels into a tensor of the same dimensionality but with fewer channels and larger spatial extent (for details, see Shi et al., 2016). A convolution and reorganization of coefficients together form a sub-pixel convolution layer. Following three residual blocks, two sub-pixel convolution layers upsample the image to the resolution of the input.
Finally, after denormalization, the pixel values are clipped to the range of 0 to 255. Similar to how we deal with gradients of the rounding function, we redefine the gradient of the clipping function to be 1 outside the clipped range. This ensures that the training signal is non-zero even when the decoded pixels are outside this range (Appendix A.1).

![](images/5_0.jpg) <center>Figure 3: A: Scale parameters obtained by finetuning a compressive autoencoder (blue). More fine-grained control over bit rates can be achieved by interpolating scales (gray). Each dot corresponds to the scale parameter of one coefficient for a particular rate-distortion trade-off. The coefficients are ordered due to the incremental training procedure. B: Comparison of incremental training versus non-incremental training. The learning rate was decreased after 116,000 iterations (bottom two lines). Non-incremental training is initially less stable and shows worse performance at later iterations. Using a small learning rate from the beginning stabilizes non-incremental training but is considerably slower (top line). </center>

To model the distribution of coefficients and estimate the bit rate, we use independent Gaussian scale mixtures (GSMs),

\[\log_{2} q(\mathbf{z} + \mathbf{u}) = \sum_{i,j,k}\log_{2}\sum_{s}\pi_{ks}\mathcal{N}(z_{kij} + u_{kij}; 0, \sigma_{ks}^{2}), \quad (12)\]

where \(i\) and \(j\) iterate over spatial positions, and \(k\) iterates over channels of the coefficients for a single image \(\mathbf{z}\). GSMs are well established as useful building blocks for modelling filter responses of natural images (e.g., Portilla et al., 2003). We used 6 scales in each GSM. Rather than using the more common parametrization above, we parametrized the GSM so that it can be easily used with gradient-based methods, optimizing log-weights and log-precisions rather than weights and variances. We note that the leptokurtic nature of GSMs (Andrews & Mallows, 1974) means that the rate term encourages sparsity of coefficients.

All networks were implemented in Python using Theano (Theano Development Team, 2016) and Lasagne (Dieleman et al., 2015). For entropy encoding of the quantized coefficients, we first created Laplace-smoothed histogram estimates of the coefficient distributions across a training set. The estimated probabilities were then used with a publicly available BSD-licensed implementation of a range coder<sup>2</sup>.

### 3.2 INCREMENTAL TRAINING

All models were trained using Adam (Kingma & Ba, 2015) applied to batches of 32 images \(128 \times 128\) pixels in size. We found it beneficial to optimize coefficients in an incremental manner (Figure 3B). This is done by introducing an additional binary mask \(\mathbf{m}\),

\[-\log_{2} q\left([f(\mathbf{x})]\circ \mathbf{m} + \mathbf{u}\right) + \beta \cdot d\left(\mathbf{x}, g([f(\mathbf{x})]\circ \mathbf{m})\right). \quad (13)\]

Initially, all but 2 entries of the mask are set to zero. Networks are trained until performance improvements drop below a threshold, and then another coefficient is enabled by setting an entry of the binary mask to 1. After all coefficients have been enabled, the learning rate is reduced from an initial value of \(10^{-4}\) to \(10^{-5}\). Training was performed for up to \(10^{6}\) updates but usually reached good performance much earlier.
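A sketch of this incremental schedule, under stated assumptions, is shown below; `model.encode` (returning quantized coefficients of shape `(batch, n_coeffs)`), `model.decode`, `model.rate`, `model.distortion`, `model.beta`, and the stopping threshold are hypothetical placeholders rather than the paper's actual interface.

```python
import torch

def incremental_training(model, loader, optimizer, n_coeffs, threshold=1e-3):
    # Binary mask over the encoder's quantized outputs (Equation 13):
    # start with two active coefficients and enable more one at a time.
    mask = torch.zeros(n_coeffs)
    mask[:2] = 1.0
    enabled, best = 2, float("inf")
    for x in loader:
        z = model.encode(x) * mask          # Equation 13: [f(x)] o m
        loss = model.rate(z) + model.beta * model.distortion(x, model.decode(z))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Once improvements drop below the threshold, enable one more entry.
        if best - loss.item() < threshold and enabled < n_coeffs:
            mask[enabled] = 1.0
            enabled += 1
        best = min(best, loss.item())
```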
accept
Accept (Poster)
6.666667
ICLR_2017_paper_0145
iclr
2,017
# \(\beta\)-VAE: LEARNING BASIC VISUAL CONCEPTS WITH A CONSTRAINED VARIATIONAL FRAMEWORK

Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner

Google DeepMind

{irinah, lmatthey, arkap, cpburgess, glorotx, botvinick, shakir, lerchner}@google.com

## ABSTRACT

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce \(\beta\)-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter \(\beta\) that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that \(\beta\)-VAE with appropriately tuned \(\beta > 1\) qualitatively outperforms VAE \((\beta = 1)\), as well as state-of-the-art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, \(\beta\)-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter \(\beta\), which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.

## 1 INTRODUCTION

The difficulty of learning a task for a given machine learning approach can vary significantly depending on the choice of the data representation. Having a representation that is well suited to the particular task and data domain can significantly improve the learning success and robustness of the chosen model (Bengio et al., 2013). It has been suggested that learning a disentangled representation of the generative factors in the data can be useful for a large variety of tasks and domains (Bengio et al., 2013; Ridgeway, 2016). A disentangled representation can be defined as one where single latent units are sensitive to changes in single generative factors, while being relatively invariant to changes in other factors (Bengio et al., 2013). For example, a model trained on a dataset of 3D objects might learn independent latent units sensitive to single independent data generative factors, such as object identity, position, scale, lighting or colour, thus acting as an inverse graphics model (Kulkarni et al., 2015). In a disentangled representation, knowledge about one factor can generalise to novel configurations of other factors. According to Lake et al. (2016), disentangled representations could boost the performance of state-of-the-art AI approaches in situations where they still struggle but where humans excel. Such scenarios include those which require knowledge transfer, where faster learning is achieved by reusing learnt representations for numerous tasks; zero-shot inference, where reasoning about new data is enabled by recombining previously learnt factors; or novelty detection.
Unsupervised learning of a disentangled posterior distribution over the underlying generative factors of sensory data is a major challenge in AI research (Bengio et al., 2013; Lake et al., 2016). Most previous attempts required a priori knowledge of the number and/or nature of the data generative factors (Hinton et al., 2011; Rippel & Adams, 2013; Reed et al., 2014; Zhu et al., 2014; Yang et al., 2015; Goroshin et al., 2015; Kulkarni et al., 2015; Cheung et al., 2015; Whitney et al., 2016; Karaletsos et al., 2016). This is not always feasible in the real world, where the newly initialised learner may be exposed to complex data where no a priori knowledge of the generative factors exists, and little to no supervision for discovering the factors is available. Until recently purely unsupervised approaches to disentangled factor learning have not scaled well (Schmidhuber, 1992; Desjardins et al., 2012; Tang et al., 2013; Cohen & Welling, 2014; 2015).

![](images/1_0.jpg) <center>Figure 1: Manipulating latent variables on celebA: Qualitative results comparing disentangling performance of \(\beta\)-VAE \((\beta = 250)\), VAE (Kingma & Welling, 2014) \((\beta = 1)\) and InfoGAN (Chen et al., 2016). In all figures of latent code traversal each block corresponds to the traversal of a single latent variable while keeping others fixed to either their inferred (\(\beta\)-VAE, VAE and DC-IGN where applicable) or sampled (InfoGAN) values. Each row represents a different seed image used to infer the latent values in the VAE-based models, or a random sample of the noise variables in InfoGAN. \(\beta\)-VAE and VAE traversal is over the [-3, 3] range. InfoGAN traversal is over ten-dimensional categorical latent variables. Only \(\beta\)-VAE and InfoGAN learnt to disentangle factors like azimuth (a), emotion (b) and hair style (c), whereas VAE learnt an entangled representation (e.g. azimuth is entangled with emotion, presence of glasses and gender). InfoGAN images adapted from Chen et al. (2016). Reprinted with permission. </center>

Recently a scalable unsupervised approach for disentangled factor learning has been developed, called InfoGAN (Chen et al., 2016). InfoGAN extends the generative adversarial network (GAN) (Goodfellow et al., 2014) framework to additionally maximise the mutual information between a subset of the generating noise variables and the output of a recognition network. It has been reported to be capable of discovering at least a subset of data generative factors and of learning a disentangled representation of these factors. The reliance of InfoGAN on the GAN framework, however, comes at the cost of training instability and reduced sample diversity. Furthermore, InfoGAN requires some a priori knowledge of the data, since its performance is sensitive to the choice of the prior distribution and the number of the regularised noise variables. InfoGAN also lacks a principled inference network (although the recognition network can be used as one). The ability to infer the posterior latent distribution from sensory input is important when using the unsupervised model in transfer learning or zero-shot inference scenarios. Hence, while InfoGAN is an important step in the right direction, we believe that further improvements are necessary to achieve a principled way of using unsupervised learning for developing more human-like learning and reasoning in algorithms as described by Lake et al. (2016).
Finally, there is currently no general method for quantifying the degree of learnt disentanglement. Therefore there is no way to quantitatively compare the degree of disentanglement achieved by different models or when optimising the hyperparameters of a single model.

![](images/2_0.jpg) <center>Figure 2: Manipulating latent variables on 3D chairs: Qualitative results comparing disentangling performance of \(\beta\)-VAE \((\beta = 5)\), VAE (Kingma & Welling, 2014) \((\beta = 1)\), InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015). InfoGAN traversal is over the [-1, 1] range. VAE always learns an entangled representation (e.g. chair width is entangled with azimuth and leg style (b)). All models apart from VAE learnt to disentangle the labelled data generative factor, azimuth (a). InfoGAN and \(\beta\)-VAE were also able to discover unlabelled factors in the dataset, such as chair width (b). Only \(\beta\)-VAE, however, learnt about the unlabelled factor of chair leg style (c). InfoGAN and DC-IGN images adapted from Chen et al. (2016) and Kulkarni et al. (2015), respectively. Reprinted with permission. </center>

In this paper we attempt to address these issues. We propose \(\beta\)-VAE, a deep unsupervised generative approach for disentangled factor learning that can automatically discover the independent latent factors of variation in unsupervised data. Our approach is based on the variational autoencoder (VAE) framework (Kingma & Welling, 2014; Rezende et al., 2014), which brings scalability and training stability. While the original VAE work has been shown to achieve limited disentangling performance on simple datasets, such as FreyFaces or MNIST (Kingma & Welling, 2014), disentangling performance does not scale to more complex datasets (e.g., Aubry et al., 2014; Paysan et al., 2009; Liu et al., 2015), prompting the development of more elaborate semi-supervised VAE-based approaches for learning disentangled factors (e.g., Kulkarni et al., 2015; Karaletsos et al., 2016).

We propose augmenting the original VAE framework with a single hyperparameter \(\beta\) that modulates the learning constraints applied to the model. These constraints impose a limit on the capacity of the latent information channel and control the emphasis on learning statistically independent latent factors. \(\beta\)-VAE with \(\beta = 1\) corresponds to the original VAE framework (Kingma & Welling, 2014; Rezende et al., 2014). With \(\beta > 1\) the model is pushed to learn a more efficient latent representation of the data, which is disentangled if the data contains at least some underlying factors of variation that are independent. We show that this simple modification allows \(\beta\)-VAE to significantly improve the degree of disentanglement in learnt latent representations compared to the unmodified VAE framework (Kingma & Welling, 2014; Rezende et al., 2014). Furthermore, we show that \(\beta\)-VAE achieves state-of-the-art disentangling performance against both the best unsupervised (InfoGAN: Chen et al., 2016) and semi-supervised (DC-IGN: Kulkarni et al., 2015) approaches for disentangled factor learning on a number of benchmark datasets, such as CelebA (Liu et al., 2015), chairs (Aubry et al., 2014) and faces (Paysan et al., 2009) using qualitative evaluation.
Finally, to help quantify the differences, we develop a new measure of disentanglement and show that \(\beta\)-VAE significantly outperforms all our baselines on this measure (ICA, PCA, VAE (Kingma & Welling, 2014), DC-IGN (Kulkarni et al., 2015), and InfoGAN (Chen et al., 2016)). Our main contributions are the following: 1) we propose \(\beta\)-VAE, a new unsupervised approach for learning disentangled representations of independent visual data generative factors; 2) we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models; 3) we demonstrate both qualitatively and quantitatively that our \(\beta\)-VAE approach achieves state-of-the-art disentanglement performance compared to various baselines on a variety of complex datasets.

![](images/3_0.jpg) <center>Figure 3: Manipulating latent variables on 3D faces: Qualitative results comparing the disentangling performance of \(\beta\)-VAE \((\beta = 20)\), VAE (Kingma & Welling, 2014) \((\beta = 1)\), InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015). InfoGAN traversal is over the [-1, 1] range. All models learnt to disentangle lighting (b) and elevation (c). DC-IGN and VAE struggled to continuously interpolate between different azimuth angles (a), unlike \(\beta\)-VAE, which additionally learnt to encode a wider range of azimuth angles than other models. InfoGAN and DC-IGN images adapted from Chen et al. (2016) and Kulkarni et al. (2015), respectively. Reprinted with permission.</center>

![](images/3_1.jpg) <center>Figure 4: Latent factors learnt by \(\beta\)-VAE on celebA: traversal of individual latents demonstrates that \(\beta\)-VAE discovered in an unsupervised manner factors that encode skin colour, transition from an elderly male to younger female, and image saturation.</center>

## 2 \(\beta\)-VAE FRAMEWORK DERIVATION

Let \(\mathcal{D} = \{X,V,W\}\) be the set that consists of images \(\mathbf{x}\in \mathbb{R}^{N}\) and two sets of ground truth data generative factors: conditionally independent factors \(\mathbf{v}\in \mathbb{R}^{K}\), where \(\log p(\mathbf{v}|\mathbf{x}) = \sum_{k}\log p(v_{k}|\mathbf{x})\); and conditionally dependent factors \(\mathbf{w}\in \mathbb{R}^{H}\). We assume that the images \(\mathbf{x}\) are generated by the true world simulator using the corresponding ground truth data generative factors: \(p(\mathbf{x}|\mathbf{v},\mathbf{w}) = \mathbf{Sim}(\mathbf{v},\mathbf{w})\).

We want to develop an unsupervised deep generative model that, using samples from \(X\) only, can learn the joint distribution of the data \(\mathbf{x}\) and a set of generative latent factors \(\mathbf{z}\) (\(\mathbf{z} \in \mathbb{R}^{M}\), where \(M \geq K\)) such that \(\mathbf{z}\) can generate the observed data \(\mathbf{x}\); that is, \(p(\mathbf{x}|\mathbf{z}) \approx p(\mathbf{x}|\mathbf{v}, \mathbf{w}) = \mathbf{Sim}(\mathbf{v}, \mathbf{w})\). Thus a suitable objective is to maximise the marginal (log-)likelihood of the observed data \(\mathbf{x}\) in expectation over the whole distribution of latent factors \(\mathbf{z}\):

\[\max_{\theta}\mathbb{E}_{p_{\theta}(\mathbf{z})}[p_{\theta}(\mathbf{x}|\mathbf{z})] \quad (1)\]

For a given observation \(\mathbf{x}\), we describe the inferred posterior configurations of the latent factors \(\mathbf{z}\) by a probability distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\).
Our aim is to ensure that the inferred latent factors \(q_{\phi}(\mathbf{z}|\mathbf{x})\) capture the generative factors \(\mathbf{v}\) in a disentangled manner. The conditionally dependent data generative factors \(\mathbf{w}\) can remain entangled in a separate subset of \(\mathbf{z}\) that is not used for representing \(\mathbf{v}\). In order to encourage this disentangling property in the inferred \(q_{\phi}(\mathbf{z}|\mathbf{x})\), we introduce a constraint over it by trying to match it to a prior \(p(\mathbf{z})\) that can both control the capacity of the latent information bottleneck and embody the desiderata of statistical independence mentioned above. This can be achieved if we set the prior to be an isotropic unit Gaussian \((p(\mathbf{z}) = \mathcal{N}(\mathbf{0}, I))\), hence arriving at the constrained optimisation problem in Eq. 2, where \(\epsilon\) specifies the strength of the applied constraint.

\[\max_{\phi ,\theta}\mathbb{E}_{\mathbf{x}\sim \mathcal{D}}\left[\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})]\right]\quad \mathrm{subject~to}\quad D_{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z}))< \epsilon \quad (2)\]

Re-writing Eq. 2 as a Lagrangian under the KKT conditions (Kuhn & Tucker, 1951; Karush, 1939), we obtain:

\[\mathcal{F}(\theta ,\phi ,\beta ;\mathbf{x},\mathbf{z}) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \beta (D_{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) - \epsilon) \quad (3)\]

where the KKT multiplier \(\beta\) is the regularisation coefficient that constrains the capacity of the latent information channel \(\mathbf{z}\) and puts implicit independence pressure on the learnt posterior due to the isotropic nature of the Gaussian prior \(p(\mathbf{z})\). Since \(\beta , \epsilon \geq 0\) according to the complementary slackness KKT condition, Eq. 3 can be re-written to arrive at the \(\beta\)-VAE formulation: the familiar variational free energy objective function as described by Jordan et al. (1999), but with the addition of the \(\beta\) coefficient:

\[\mathcal{F}(\theta ,\phi ,\beta ;\mathbf{x},\mathbf{z})\geq \mathcal{L}(\theta ,\phi ;\mathbf{x},\mathbf{z},\beta) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \beta D_{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) \quad (4)\]

Varying \(\beta\) changes the degree of applied learning pressure during training, thus encouraging different learnt representations. \(\beta\)-VAE where \(\beta = 1\) corresponds to the original VAE formulation of Kingma & Welling (2014). We postulate that in order to learn disentangled representations of the conditionally independent data generative factors \(\mathbf{v}\), it is important to set \(\beta > 1\), thus putting a stronger constraint on the latent bottleneck than in the original VAE formulation of Kingma & Welling (2014). These constraints limit the capacity of \(\mathbf{z}\), which, combined with the pressure to maximise the log likelihood of the training data \(\mathbf{x}\) under the model, should encourage the model to learn the most efficient representation of the data.
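To make Eq. 4 concrete, below is a minimal sketch of the \(\beta\)-VAE training objective in PyTorch, assuming a diagonal Gaussian posterior \(q_{\phi}(\mathbf{z}|\mathbf{x})\) and a Bernoulli decoder (the likelihood used for most of the datasets in Tbl. 1); all function and variable names are illustrative rather than taken from any released implementation, and setting beta = 1 recovers the standard VAE objective.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_logits, mu, logvar, beta=4.0):
    """Minibatch estimate of the (negated) beta-VAE objective in Eq. 4.

    x          : inputs in [0, 1], shape (B, N)
    x_logits   : decoder outputs before the sigmoid, shape (B, N)
    mu, logvar : parameters of the diagonal Gaussian q(z|x), shape (B, M)
    beta       : KL weight; beta = 1 recovers the standard VAE.
    """
    # -E_q[log p(x|z)] for a Bernoulli decoder, summed over pixels, averaged over the batch.
    recon_nll = F.binary_cross_entropy_with_logits(
        x_logits, x, reduction="sum") / x.size(0)
    # Closed-form KL(q(z|x) || N(0, I)), computed per latent dimension. Averaging
    # kl_per_dim over the batch also ranks latents by informativeness (cf. Sec. 4.2).
    kl_per_dim = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp())
    kl = kl_per_dim.sum(dim=1).mean()
    return recon_nll + beta * kl  # quantity to minimise

def reparameterise(mu, logvar):
    # z = mu + sigma * eps: the standard reparameterisation trick for sampling q(z|x).
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
```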
Since the data \(\mathbf{x}\) is generated using at least some conditionally independent ground truth factors \(\mathbf{v}\), and the \(D_{KL}\) term of the \(\beta\)-VAE objective function encourages conditional independence in \(q_{\phi}(\mathbf{z}|\mathbf{x})\), we hypothesise that higher values of \(\beta\) should encourage learning a disentangled representation of \(\mathbf{v}\). The extra pressure coming from high \(\beta\) values, however, may create a trade-off between reconstruction fidelity and the quality of disentanglement within the learnt latent representations. Disentangled representations emerge when the right balance is found between information preservation (reconstruction cost as regularisation) and latent channel capacity restriction \((\beta > 1)\). The latter can lead to poorer reconstructions due to the loss of high frequency details when passing through a constrained latent bottleneck. Hence, the log likelihood of the data under the learnt model is a poor metric for evaluating disentangling in \(\beta\)-VAEs. Instead we propose a quantitative metric that directly measures the degree of learnt disentanglement in the latent representation.

Since our proposed hyperparameter \(\beta\) directly affects the degree of learnt disentanglement, we would like to estimate the optimal \(\beta\) for learning a disentangled latent representation directly. However, this is not possible, because the optimal \(\beta\) will depend on the value of \(\epsilon\) in Eq. 2, and different datasets and different model architectures will require different optimal values of \(\epsilon\). However, when optimising \(\beta\) in Eq. 4, we are indirectly also optimising \(\epsilon\) for the best disentanglement (see Sec. A.7 for details); while we cannot learn the optimal value of \(\beta\) directly, we can instead estimate it using either our proposed disentanglement metric (see Sec. 3) or through visual inspection heuristics.

![](images/5_0.jpg) <center>Figure 5: Schematic of the proposed disentanglement metric: over a batch of \(L\) samples, each pair of images has a fixed value for one target generative factor \(y\) (here \(y = scale\)) and differs on all others. A linear classifier is then trained to identify the target factor using the average pairwise difference \(\mathbf{z}_{\mathrm{diff}}^{l}\) in the latent space over \(L\) samples.</center>

## 3 DISENTANGLEMENT METRIC

It is important to be able to quantify the level of disentanglement achieved by different models. Designing a metric for this, however, is not straightforward. We begin by defining the properties that we expect a disentangled representation to have. Then we describe our proposed solution for quantifying the presence of such properties in a learnt representation.

As stated above, we assume that the data is generated by a ground truth simulation process which uses a number of data generative factors, some of which are conditionally independent, and we also assume that they are interpretable. For example, the simulator might sample independent factors corresponding to object shape, colour and size to generate an image of a small green apple. Because of the independence property, the simulator can also generate small red apples or big green apples. A representation of the data that is disentangled with respect to these generative factors, i.e.
which encodes them in separate latents, would enable robust classification even using very simple linear classifiers (hence providing interpretability). For example, a classifier that learns a decision boundary that relies on object shape would perform as well when other data generative factors, such as size or colour, are varied.

Note that a representation consisting of independent latents is not necessarily disentangled, according to our desiderata. Independence can readily be achieved by a variety of approaches (such as PCA or ICA) that learn to project the data onto independent bases. Representations learnt by such approaches do not in general align with the data generative factors and hence may lack interpretability. For this reason, a simple cross-correlation calculation between the inferred latents would not suffice as a disentanglement metric. Our proposed disentangling metric, therefore, measures both the independence and interpretability (due to the use of a simple classifier) of the inferred latents.

To apply our metric, we run inference on a number of images that are generated by fixing the value of one data generative factor while randomly sampling all others. If the independence and interpretability properties hold for the inferred representations, there will be less variance in the inferred latents that correspond to the fixed generative factor. We use a low capacity linear classifier to identify this factor and report the accuracy value as the final disentanglement metric score. Smaller variance in the latents corresponding to the target factor will make the job of this classifier easier, resulting in a higher score under the metric. See Fig. 5 for a representation of the full process.

More formally, we start from a dataset \(\mathcal{D} = \{X, V, W\}\) as described in Sec. 2, assumed to contain a balanced distribution of ground truth factors \((\mathbf{v}, \mathbf{w})\), where image data points are obtained from a ground truth simulation process \(\mathbf{x} \sim \mathbf{Sim}(\mathbf{v}, \mathbf{w})\). We also assume we are given labels identifying a subset of the independent data generative factors \(\mathbf{v} \in V\) for at least some instances. We then construct a batch of \(B\) vectors \(\mathbf{z}_{\mathrm{diff}}^{b}\), to be fed as inputs to a linear classifier, as follows:

1. Choose a factor \(y \sim \mathrm{Unif}[1 \ldots K]\) (e.g. \(y = \mathrm{scale}\) in Fig. 5).
2. For a batch of \(L\) samples:
   (a) Sample two sets of latent representations, \(\mathbf{v}_{1,l}\) and \(\mathbf{v}_{2,l}\), enforcing \([\mathbf{v}_{1,l}]_{k} = [\mathbf{v}_{2,l}]_{k}\) if \(k = y\) (so that the value of factor \(k = y\) is kept fixed).
   (b) Simulate image \(\mathbf{x}_{1,l} \sim \mathbf{Sim}(\mathbf{v}_{1,l})\), then infer \(\mathbf{z}_{1,l} = \mu (\mathbf{x}_{1,l})\), using the encoder \(q(\mathbf{z}|\mathbf{x}) \sim \mathcal{N}(\mu (\mathbf{x}), \sigma (\mathbf{x}))\). Repeat the process for \(\mathbf{v}_{2,l}\).
   (c) Compute the difference \(\mathbf{z}_{\mathrm{diff}}^{l} = |\mathbf{z}_{1,l} - \mathbf{z}_{2,l}|\), the absolute linear difference between the inferred latent representations.
3. Use the average \(\mathbf{z}_{\mathrm{diff}}^{b} = \frac{1}{L} \sum_{l = 1}^{L} \mathbf{z}_{\mathrm{diff}}^{l}\) to predict \(p(y|\mathbf{z}_{\mathrm{diff}}^{b})\) (again, \(y = \mathrm{scale}\) in Fig. 5) and report the accuracy of this predictor as the disentanglement metric score.
The classifier's goal is to predict the index \(y\) of the generative factor that was kept fixed for a given \(\mathbf{z}_{\mathrm{diff}}^{b}\). The accuracy of this classifier over multiple batches is used as our disentanglement metric score. We choose a linear classifier with low VC-dimension in order to ensure it has no capacity to perform nonlinear disentangling by itself. We take differences of two inferred latent vectors to reduce the variance in the inputs to the classifier, and to reduce the conditional dependence on the inputs \(\mathbf{x}\). This ensures that on average \(\left[\mathbf{z}_{\mathrm{diff}}^{b}\right]_{y} < \left[\mathbf{z}_{\mathrm{diff}}^{b}\right]_{k\neq y}\). See Equations 5 in Appendix A.4 for more details of the process.
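The following is a minimal sketch of the metric just described, assuming access to the dataset's ground truth simulator and a trained encoder; sample_factors, simulate and encode_mean are hypothetical stand-ins for \(p(\mathbf{v})\), \(\mathbf{Sim}\) and the encoder mean \(\mu(\mathbf{x})\), and any scikit-learn-style linear classifier (e.g. logistic regression) can play the role of the low-capacity predictor.

```python
import numpy as np

def z_diff_batch(y, L, sample_factors, simulate, encode_mean):
    """One classifier input z_diff^b for a fixed factor index y (steps 2-3 above)."""
    diffs = []
    for _ in range(L):
        v1, v2 = sample_factors(), sample_factors()  # two random factor vectors
        v2[y] = v1[y]                                # keep factor y fixed, vary the rest
        z1 = encode_mean(simulate(v1))               # infer latent means for both images
        z2 = encode_mean(simulate(v2))
        diffs.append(np.abs(z1 - z2))                # absolute per-latent difference
    return np.mean(diffs, axis=0)                    # average over the L pairs

def disentanglement_score(B, L, K, sample_factors, simulate, encode_mean, classifier):
    """Fit a low-capacity linear classifier to recover y from z_diff^b, report accuracy."""
    ys = np.random.randint(0, K, size=B)             # step 1: y ~ Unif over the K factors
    X = np.stack([z_diff_batch(y, L, sample_factors, simulate, encode_mean) for y in ys])
    classifier.fit(X, ys)                            # e.g. sklearn LogisticRegression
    return classifier.score(X, ys)                   # in practice, score a held-out set
```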
## 4 EXPERIMENTS

In this section we first qualitatively demonstrate that our proposed \(\beta\)-VAE framework consistently discovers more latent factors and disentangles them in a cleaner fashion than either the unmodified VAE (Kingma & Welling, 2014) or the state-of-the-art unsupervised (InfoGAN: Chen et al., 2016) and semi-supervised (DC-IGN: Kulkarni et al., 2015) solutions for disentangled factor learning on a variety of benchmarks. We then quantify and characterise the differences in disentangled factor learning between our \(\beta\)-VAE framework and a variety of benchmarks using our proposed new disentangling metric.

### 4.1 QUALITATIVE BENCHMARKS

We trained \(\beta\)-VAE (see Tbl. 1 for architecture details) on a variety of datasets commonly used to evaluate the disentangling performance of models: celebA (Liu et al., 2015), chairs (Aubry et al., 2014) and faces (Paysan et al., 2009). Figures 1-3 provide a qualitative comparison of the disentangling performance of \(\beta\)-VAE, VAE \((\beta = 1)\) (Kingma & Welling, 2014), InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015) as appropriate. It can be seen that across all datasets \(\beta\)-VAE is able to automatically discover and learn to disentangle all of the factors learnt by the semi-supervised DC-IGN (Kulkarni et al., 2015): azimuth (Fig. 3a, Fig. 2a), lighting and elevation (Fig. 3b,c). Often it acts as a more convincing inverse graphics network than DC-IGN (e.g. Fig. 3a) or InfoGAN (e.g. Fig. 2a, Fig. 1a-c or Fig. 3a). Furthermore, unlike DC-IGN, \(\beta\)-VAE requires no supervision and hence can learn about extra unlabelled data generative factors that DC-IGN cannot learn by design, such as chair width or leg style (Fig. 2b,c). The unsupervised InfoGAN (Chen et al., 2016) approach shares this quality with \(\beta\)-VAE, and the two frameworks tend to discover overlapping, but not necessarily identical, sets of data generative factors. For example, both \(\beta\)-VAE and InfoGAN (but not DC-IGN) learn about the width of chairs (Fig. 2b). Only \(\beta\)-VAE, however, learns about the chair leg style (Fig. 2c). It is interesting to note how \(\beta\)-VAE is able to generate an armchair with a round office chair base, even though such armchairs do not exist in the dataset (or, perhaps, reality). Furthermore, only \(\beta\)-VAE is able to discover all three factors of variation (chair azimuth, width and leg style) within a single model, while InfoGAN learns to allocate its continuous latent variable to either azimuth or width. InfoGAN sometimes discovers factors that \(\beta\)-VAE does not precisely disentangle, such as the presence of sunglasses in celebA. \(\beta\)-VAE does, however, discover numerous extra factors such as skin colour, image saturation, and age/gender that are not reported in the InfoGAN paper (Chen et al., 2016) (Fig. 4). Furthermore, \(\beta\)-VAE latents tend to learn a smooth continuous transformation over a wider range of factor values than InfoGAN (e.g. rotation over a wider range of angles, as shown in Figs. 1-3a).

Overall \(\beta\)-VAE tends to consistently and robustly discover more latent factors and learn cleaner disentangled representations of them than either InfoGAN or DC-IGN. This holds even on such challenging datasets as celebA. Furthermore, unlike InfoGAN and DC-IGN, \(\beta\)-VAE requires no design decisions or assumptions about the data, and is very stable to train. When compared to the unmodified VAE baseline \((\beta = 1)\), \(\beta\)-VAE consistently learns significantly more disentangled latent representations. For example, when learning about chairs, VAE entangles chair width with leg style (Fig. 2b). When learning about celebA, VAE entangles azimuth with emotion and gender (Fig. 1a); emotion with hair style, skin colour and identity (Fig. 1b); while the VAE fringe latent also codes for baldness and head size (Fig. 1c). Although VAE performs relatively well on the faces dataset, it still struggles to learn a clean representation of azimuth (Fig. 3a). This, however, suggests that a continuum of disentanglement quality exists, and it can be traversed by varying \(\beta\) within the \(\beta\)-VAE framework. While increasing \(\beta\) often leads to better disentanglement, it may come at the cost of blurrier reconstructions and losing representations for some factors, particularly those that correspond to only minor changes in pixel space.

### 4.2 QUANTITATIVE BENCHMARKS

In order to quantitatively compare the disentangling performance of \(\beta\)-VAE against various baselines, we created a synthetic dataset of 737,280 binary 2D shapes (heart, oval and square) generated from the Cartesian product of the shape and four independent generative factors \(v_{k}\) defined in vector graphics: position X (32 values), position Y (32 values), scale (6 values) and rotation (40 values over the \(2\pi\) range). To ensure smooth affine object transforms, each two subsequent values for each factor \(v_{k}\) were chosen to ensure minimal differences in pixel space given the 64x64 pixel image resolution. This dataset was chosen because it contains no confounding factors apart from its five independent data generative factors (identity, position X, position Y, scale and rotation). This gives us knowledge of the ground truth for comparing the disentangling performance of different models in an objective manner.

We used our proposed disentanglement metric (see Sec. 3) to quantitatively compare the ability of \(\beta\)-VAE to automatically discover and learn a disentangled representation of the data generative factors of the synthetic dataset of 2D shapes described above with that of a number of benchmarks (see Tbl. 1 in the Appendix for model architecture details). The table in Fig. 6 (left) reports the classification accuracy of the disentanglement metric for 5,000 test samples. It can be seen that \(\beta\)-VAE \((\beta = 4)\) significantly outperforms all baselines, such as an untrained VAE and the original VAE formulation of Kingma & Welling (2014) \((\beta = 1)\) with the same architecture as \(\beta\)-VAE, the top ten PCA or ICA components of the data (see Sec.
A.3 for details), or when using the raw pixels directly. \(\beta\)-VAE also does better than InfoGAN. Remarkably, \(\beta\)-VAE performs on the same level as DC-IGN, despite the latter being semi-supervised and the former wholly unsupervised. Furthermore, \(\beta\)-VAE achieved similar classification accuracy as the ground truth vectors used for data generation, thus suggesting that it was able to learn a very good disentangled representation of the data generative factors.

We also examined qualitatively the representations learnt by \(\beta\)-VAE, VAE, InfoGAN and DC-IGN on the synthetic dataset of 2D shapes. Fig. 7A demonstrates that after training, \(\beta\)-VAE with \(\beta = 4\) learnt a good (though not perfect) disentangled representation of the data generative factors, and its decoder learnt to act as a rendering engine. Its performance was comparable to that of DC-IGN (Fig. 7C), with the difference that DC-IGN required a priori knowledge about the number of the data generative factors, while \(\beta\)-VAE was able to discover them in an unsupervised manner. The most informative latent units \(z_{m}\) of \(\beta\)-VAE have the highest KL divergence from the unit Gaussian prior \((p(\mathbf{z}) = \mathcal{N}(\mathbf{0}, I))\), while the uninformative latents have KL divergence close to zero. Fig. 7A demonstrates the selectivity of each latent \(z_{m}\) to the independent data generating factors: \(z_{m}^{t} = f(v_{k})\ \forall v_{k} \in \{v_{\mathrm{position}X}, v_{\mathrm{position}Y}, v_{\mathrm{scale}}, v_{\mathrm{rotation}}\}\) (top three rows), where \(z_{m}^{t}\) is the learnt Gaussian mean of latent unit \(z_{m}\). The effect of traversing each latent \(z_{m}\) on the resulting reconstructions is shown in the bottom five rows of Fig. 7A. The latents \(z_{6}\) and \(z_{2}\) learnt to encode the X and Y coordinates of the objects respectively; unit \(z_{1}\) learnt to encode scale; and units \(z_{5}\) and \(z_{7}\) learnt to encode rotation. The frequency of oscillations in each rotational latent corresponds to the rotational symmetry of the corresponding object (\(2\pi\) for heart, \(\pi\) for oval and \(\pi /2\) for square). Furthermore, the two rotational latents seem to encode cos and sin rotational coordinates, while the positional latents align with the Cartesian axes. While such alignment with factors that are intuitive to humans is not guaranteed, empirically we found it to be very common.

<table><tr><td>Model</td><td>Disentanglement metric score</td></tr><tr><td>Ground truth</td><td>100%</td></tr><tr><td>Raw pixels</td><td>45.75 ± 0.8%</td></tr><tr><td>PCA</td><td>84.9 ± 0.4%</td></tr><tr><td>ICA</td><td>42.03 ± 10.6%</td></tr><tr><td>DC-IGN</td><td>99.3 ± 0.1%</td></tr><tr><td>InfoGAN</td><td>73.5 ± 0.9%</td></tr><tr><td>VAE untrained</td><td>44.14 ± 2.5%</td></tr><tr><td>VAE</td><td>61.58 ± 0.5%</td></tr><tr><td>β-VAE</td><td>99.23 ± 0.1%</td></tr></table>

![](images/8_0.jpg) <center>Figure 6: Disentanglement metric classification accuracy for the 2D shapes dataset. Left: Accuracy for different models and training regimes. Right: A positive correlation is present between the size of \(\mathbf{z}\) and the optimal normalised values of \(\beta\) for disentangled factor learning for a fixed \(\beta\)-VAE architecture. \(\beta\) values are normalised by latent \(\mathbf{z}\) size \(M\) and input \(\mathbf{x}\) size \(N\). Note that \(\beta\) values are not uniformly sampled. Orange approximately corresponds to unnormalised \(\beta = 1\). Good reconstructions are associated with entangled representations (lower disentanglement scores). Disentangled representations (high disentanglement scores) often result in blurry reconstructions.</center>
Fig. 7B demonstrates that the unmodified VAE baseline \((\beta = 1)\) is not able to disentangle the generative factors in the data as well as \(\beta\)-VAE with appropriate learning pressures. Instead each latent \(\mathbf{z}\) (apart from \(\mathbf{z}_0\), which learnt rotation) encodes at least two data generative factors. InfoGAN also achieved a degree of disentangling (see Fig. 7D), particularly for positional factors. However, despite our best efforts to train InfoGAN, we were not able to achieve the same degree of disentangling in other factors, such as rotation, scale and shape. We also found its ability to generate the different shapes in the dataset to be inaccurate and unstable during training, possibly due to reported limitations of the GAN framework, which can struggle to learn the full data distribution and instead will often learn a small subset of its modes (Salimans et al., 2016; Zhao et al., 2016).

Understanding the effects of \(\beta\). We hypothesised that constrained optimisation is important for enabling deep unsupervised models to learn disentangled representations of the independent data generative factors (Sec. 2). In the \(\beta\)-VAE framework this corresponds to tuning the \(\beta\) coefficient. One way to view \(\beta\) is as a mixing coefficient (see Sec. A.6 for a derivation) for balancing the magnitudes of gradients from the reconstruction and the prior-matching components of the VAE lower bound formulation in Eq. 4 during training. In this context it makes sense to normalise \(\beta\) by latent \(\mathbf{z}\) size \(M\) and input \(\mathbf{x}\) size \(N\) in order to compare its different values across different latent layer sizes and different datasets \((\beta_{norm} = \frac{\beta M}{N})\). We found that larger latent \(\mathbf{z}\) layer sizes \(M\) require higher constraint pressures (higher \(\beta\) values); see Fig. 6 (Right). Furthermore, the relationship between \(\beta\) and disentanglement for a given \(M\) is characterised by an inverted-U curve. When \(\beta\) is too low or too high the model learns an entangled latent representation due to either too much or too little capacity in the latent \(\mathbf{z}\) bottleneck. We find that in general \(\beta > 1\) is necessary to achieve good disentanglement. However, if \(\beta\) is too high and the resulting capacity of the latent channel is lower than the number of data generative factors, then the learnt representation necessarily has to be entangled (as a low-rank projection of the true data generative factors will compress them in a non-factorial way to still capture the full data distribution well). We also note that VAE reconstruction quality is a poor indicator of learnt disentanglement. Good disentangled representations often lead to blurry reconstructions due to the restricted capacity of the latent information channel \(\mathbf{z}\), while entangled representations often result in the sharpest reconstructions. We therefore suggest that one should not necessarily strive for perfect reconstructions when using \(\beta\)-VAEs as unsupervised feature learners, though it is often possible to find the right \(\beta\)-VAE architecture and the right value of \(\beta\) to have both well disentangled latent representations and good reconstructions.
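As a concrete illustration of this normalisation (derived in Sec. A.6), the following trivial sketch converts an unnormalised \(\beta\) to \(\beta_{norm}\) so that values can be compared across architectures; the example numbers come from Tbl. 1 and Sec. 4.2.

```python
def beta_norm(beta, latent_size_m, input_size_n):
    # beta_norm = beta * M / N, comparable across latent sizes and datasets.
    return beta * latent_size_m / input_size_n

# The 2D shapes VAE of Tbl. 1 has M = 10 latents and N = 4096 input pixels, so the
# unnormalised beta = 4 used in Sec. 4.2 corresponds to beta_norm = 40/4096 ≈ 0.0098.
print(beta_norm(4.0, latent_size_m=10, input_size_n=4096))
```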
We propose a principled way of choosing \(\beta\) for datasets with at least weak label information. If label information exists for at least a small subset of the independent data generative factors of variation, one can apply the disentanglement metric described in Sec. 3 to approximate the level of learnt disentanglement for various \(\beta\) choices during a hyperparameter sweep. When such labelled information is not available, the optimal value of \(\beta\) can be found through visual inspection of what effect the traversal of each single latent unit \(z_{m}\) has on the generated images \((\mathbf{x}|\mathbf{z})\) in pixel space (as shown in Fig. 7, rows 4-8). For the 2D shapes dataset, we found that the optimal values of \(\beta\) as determined by visual inspection closely match the optimal values as determined by the disentanglement metric.

![](images/9_0.jpg) <center>Figure 7: A: Representations learnt by a \(\beta\)-VAE \((\beta = 4)\). Each column represents a latent \(z_{i}\), ordered according to the learnt Gaussian variance (last row). Row 1 (position) shows the mean activation (red represents high values) of each latent \(z_{i}\) as a function of all 32x32 locations, averaged across objects, rotations and scales. Rows 2 and 3 show the mean activation of each unit \(z_{i}\) as a function of scale (respectively rotation), averaged across rotations and positions (respectively scales and positions). Square is red, oval is green and heart is blue. Rows 4-8 (second group) show reconstructions resulting from the traversal of each latent \(z_{i}\) over three standard deviations around the unit Gaussian prior mean, while keeping the remaining 9/10 latent units fixed to the values obtained by running inference on an image from the dataset. B: Similar analysis for VAE \((\beta = 1)\). C: Similar analysis for DC-IGN, clamping a single latent each for scale, positions and orientation, and 5 for shape. D: Similar analysis for InfoGAN, using 5 continuous latents regularised using the mutual information cost, and 5 additional unconstrained noise latents (not shown).</center>

## 5 CONCLUSION

In this paper we have reformulated the standard VAE framework (Kingma & Welling, 2014; Rezende et al., 2014) as a constrained optimisation problem with strong latent capacity constraint and independence prior pressures. By augmenting the lower bound formulation with the \(\beta\) coefficient that regulates the strength of such pressures and, as a consequence, the qualitative nature of the representations learnt by the model, we have achieved state-of-the-art results for learning disentangled representations of data generative factors. We have shown that our proposed \(\beta\)-VAE framework significantly outperforms, both qualitatively and quantitatively, the original VAE (Kingma & Welling, 2014), as well as state-of-the-art unsupervised (InfoGAN: Chen et al., 2016) and semi-supervised (DC-IGN: Kulkarni et al., 2015) approaches to disentangled factor learning. Furthermore, we have shown that \(\beta\)-VAE consistently and robustly discovers more factors of variation in the data, and it learns a representation that covers a wider range of factor values and is disentangled more cleanly than other benchmarks, all in a completely unsupervised manner. Unlike InfoGAN and DC-IGN, our approach does not depend on any a priori knowledge about the number or the nature of data generative factors.
Our preliminary investigations suggest that the performance of the \(\beta\)-VAE framework may depend on the sampling density of the data generative factors within a training dataset (see Appendix A.8 for more details). It appears that having more densely sampled data generative factors results in better disentangling performance of \(\beta\)-VAE; however, we leave a more principled investigation of this effect to future work.

\(\beta\)-VAE is robust with respect to different architectures, optimisation parameters and datasets, hence requiring few design decisions. Our approach relies on the optimisation of a single hyperparameter \(\beta\), which can be found directly through a hyperparameter search if weakly labelled data is available to calculate our new proposed disentangling metric. Alternatively, the optimal \(\beta\) can be estimated heuristically in purely unsupervised scenarios. Learning an interpretable factorised representation of the independent data generative factors in a completely unsupervised manner is an important precursor for the development of artificial intelligence that understands the world in the same way that humans do (Lake et al., 2016). We believe that using our approach as an unsupervised pretraining stage for supervised or reinforcement learning will produce significant improvements for scenarios such as transfer or fast learning.

## 6 ACKNOWLEDGEMENTS

We would like to thank Charles Blundell, Danilo Rezende, Tejas Kulkarni and David Pfau for helpful comments that improved the manuscript.

## REFERENCES

M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic. Seeing 3d chairs: exemplar part-based 2d-3d alignment using a large dataset of cad models. In CVPR, 2014.

Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2013.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. arXiv, 2016.

Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, and Bruno A. Olshausen. Discovering hidden factors of variation in deep networks. In Proceedings of the International Conference on Learning Representations, Workshop Track, 2015.

T. Cohen and M. Welling. Transformation properties of learned visual representations. In ICLR, 2015.

Taco Cohen and Max Welling. Learning the irreducible representations of commutative Lie groups. arXiv, 2014.

G. Desjardins, A. Courville, and Y. Bengio. Disentangling factors of variation via generative entangling. arXiv, 2012.

Carl Doersch. Tutorial on variational autoencoders. arXiv, 2016.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 2011.

I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. NIPS, pp. 2672-2680, 2014.

Ross Goroshin, Michael Mathieu, and Yann LeCun. Learning to linearize under uncertainty. NIPS, 2015.

G. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. International Conference on Artificial Neural Networks, 2011.

Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.

Theofanis Karaletsos, Serge Belongie, and Gunnar Rätsch.
Bayesian representation learning with oracle constraints. ICLR, 2016.

W. Karush. Minima of Functions of Several Variables with Inequalities as Side Constraints. Master's thesis, Univ. of Chicago, Chicago, Illinois, 1939.

D. P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv, 2014.

D. P. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2014.

H. W. Kuhn and A. W. Tucker. Nonlinear programming. In Proceedings of the 2nd Berkeley Symposium, pp. 481-492, 1951.

Tejas Kulkarni, William Whitney, Pushmeet Kohli, and Joshua Tenenbaum. Deep convolutional inverse graphics network. NIPS, 2015.

Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. arXiv, 2016.

Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. ICCV, 2015.

P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter. A 3d face model for pose and illumination invariant face recognition. AVSS, 2009.

Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, and David Cournapeau. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 2011.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv, 2015.

Scott Reed, Kihyuk Sohn, Yuting Zhang, and Honglak Lee. Learning to disentangle factors of variation with manifold interaction. ICML, 2014.

Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv, 2014.

Karl Ridgeway. A survey of inductive biases for factorial representation learning. arXiv, 2016. URL http://arxiv.org/abs/1612.05299.

Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models. arXiv, 2013.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv, 2016. URL http://arxiv.org/abs/1606.03498.

Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-869, 1992.

Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. arXiv, 2016.

Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Tensor analyzers. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, USA, 2013.

William F. Whitney, Michael Chang, Tejas Kulkarni, and Joshua B. Tenenbaum. Understanding visual concepts with continuation learning. arXiv, 2016. URL http://arxiv.org/pdf/1602.06822.pdf.

Jimei Yang, Scott Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. NIPS, 2015.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv, 2016. URL http://arxiv.org/abs/1609.03126.

Z. Zhu, P. Luo, X. Wang, and X. Tang. Multi-view perceptron: a deep model for learning face identity and view representations. In Advances in Neural Information Processing Systems 27, 2014.

## A APPENDIX

## A.1 MODEL ARCHITECTURE DETAILS

A summary of all model architectures used in this paper can be seen in Tbl. 1.
## A.2 INFOGAN TRAINING

To train the InfoGAN network described in Tbl. 1 on the 2D shapes dataset (Fig. 7), we followed the training paradigm described in Chen et al. (2016) with the following modifications. For the mutual information regularised latent code, we used 5 continuous variables \(c_{i}\) sampled uniformly from \((-1, 1)\). We used 5 noise variables \(z_{i}\), as we found that using a reduced number of noise variables improved the quality of generated samples for this dataset. To help stabilise training, we used the instance noise trick described in Shi et al. (2016), adding Gaussian noise to the discriminator inputs (0.2 standard deviation on images scaled to \([-1, 1]\)). We followed Radford et al. (2015) for the architecture of the convolutional layers, and used batch normalisation in all layers except the last in the generator and the first in the discriminator.

<table><tr><td>Dataset</td><td>Optimiser</td><td></td><td>Architecture</td></tr><tr><td rowspan="4">2D shapes (VAE)</td><td rowspan="4">Adagrad<br>1e-2</td><td>Input</td><td>4096 (flattened 64x64x1).</td></tr><tr><td>Encoder</td><td>FC 1200, 1200. ReLU activation.</td></tr><tr><td>Latents</td><td>10</td></tr><tr><td>Decoder</td><td>FC 1200, 1200, 1200, 4096. Tanh activation. Bernoulli.</td></tr><tr><td rowspan="3">2D shapes (DC-IGN)</td><td rowspan="3">rmsprop<br>(as in Kulkarni et al., 2015)</td><td>Input</td><td>64x64x1.</td></tr><tr><td>Encoder</td><td>Conv 96x3x3, 48x3x3, 48x3x3 (padding 1). ReLU activation and max pooling 2x2.</td></tr><tr><td>Latents</td><td>10</td></tr><tr><td rowspan="4">2D shapes (InfoGAN)</td><td rowspan="4">Adam<br>1e-3 (gen)<br>2e-4 (dis)</td><td>Latents</td><td>10: z1...5 ~ Unif(-1, 1), c1...5 ~ Unif(-1, 1)</td></tr><tr><td>Generator</td><td>FC 256, 256, Deconv 128x4x4, 64x4x4 (stride 2). Tanh.</td></tr><tr><td>Discriminator</td><td>Conv and FC reverse of generator. Leaky ReLU activation.</td></tr><tr><td>Recognition</td><td>FC 1. Sigmoid activation.</td></tr><tr><td rowspan="4">Chairs (VAE)</td><td rowspan="4">Adam<br>1e-4</td><td>Input</td><td>64x64x1.</td></tr><tr><td>Encoder</td><td>Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride 2), FC 256. ReLU activation.</td></tr><tr><td>Latents</td><td>32</td></tr><tr><td>Decoder</td><td>Deconv reverse of encoder. ReLU activation. Bernoulli.</td></tr><tr><td rowspan="4">CelebA (VAE)</td><td rowspan="4">Adam<br>1e-4</td><td>Input</td><td>64x64x3.</td></tr><tr><td>Encoder</td><td>Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride 2), FC 256. ReLU activation.</td></tr><tr><td>Latents</td><td>32</td></tr><tr><td>Decoder</td><td>Deconv reverse of encoder. ReLU activation. Gaussian.</td></tr><tr><td rowspan="4">3DFaces (VAE)</td><td rowspan="4">Adam<br>1e-4</td><td>Input</td><td>64x64x1.</td></tr><tr><td>Encoder</td><td>Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride 2), FC 256. ReLU activation.</td></tr><tr><td>Latents</td><td>32</td></tr><tr><td>Decoder</td><td>Deconv reverse of encoder. ReLU activation. Bernoulli.</td></tr></table>

Table 1: Details of model architectures used in the paper. The models were trained using either the adagrad (Duchi et al., 2011) or adam (Kingma & Ba, 2014) optimisers.
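For concreteness, below is a minimal PyTorch sketch of the fully connected 2D shapes VAE from the first row of Tbl. 1; the layer sizes follow the table, while everything else (class and variable names, exact output parameterisation) is an illustrative assumption rather than the authors' released code.

```python
import torch
import torch.nn as nn

class ShapesVAE(nn.Module):
    """Fully connected VAE matching the 2D shapes row of Tbl. 1."""
    def __init__(self, n_in=4096, n_latent=10):
        super().__init__()
        self.encoder = nn.Sequential(            # FC 1200, 1200 with ReLU
            nn.Linear(n_in, 1200), nn.ReLU(),
            nn.Linear(1200, 1200), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(1200, n_latent)       # Gaussian posterior mean
        self.fc_logvar = nn.Linear(1200, n_latent)   # Gaussian posterior log-variance
        self.decoder = nn.Sequential(            # FC 1200, 1200, 1200, 4096 with Tanh
            nn.Linear(n_latent, 1200), nn.Tanh(),
            nn.Linear(1200, 1200), nn.Tanh(),
            nn.Linear(1200, 1200), nn.Tanh(),
            nn.Linear(1200, n_in),               # logits of the Bernoulli likelihood
        )

    def forward(self, x):
        h = self.encoder(x.view(x.size(0), -1))  # flatten 64x64x1 images to 4096
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation
        return self.decoder(z), mu, logvar
```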
## A.3 ICA AND PCA BASELINES

In order to calculate the ICA benchmark, we applied the fastICA algorithm (Pedregosa et al., 2011) to the whitened pixel data. Due to memory limitations we had to apply the algorithm to pairwise combinations of the subsets of the dataset corresponding to the transforms of each of the three 2D object identities. We calculated the disentangling metric for all three ICA models trained on each of the three pairwise combinations of 2D objects, before presenting the average of these scores in Fig. 6.

We performed PCA on the raw and whitened pixel data. Both approaches resulted in similar disentangling metric scores. Fig. 6 reports the PCA results calculated using whitened pixel data for a more direct comparison with the ICA score.

## A.4 DISENTANGLEMENT METRIC DETAILS

We used a linear classifier to learn the identity of the generative factor that produced \(\mathbf{z}_{\mathrm{diff}}^{b}\) (see Equations (5) for the process used to obtain samples of \(\mathbf{z}_{\mathrm{diff}}^{b}\)). We used a fully connected linear classifier to predict \(p(y|\mathbf{z}_{\mathrm{diff}}^{b})\), where \(y\) is one of four generative factors (position X, position Y, scale and rotation). We used a softmax output nonlinearity and a negative log likelihood loss function. The classifier was trained using the Adagrad (Duchi et al., 2011) optimisation algorithm with a learning rate of 1e-2 until convergence.

\[\mathcal{D} = \{V\in \mathbb{R}^{K},W\in \mathbb{R}^{H},X\in \mathbb{R}^{N}\},\quad y\sim \mathrm{Unif}[1\dots K]\]

Repeat for \(b = 1\dots B\) and, within each batch, for \(l = 1\dots L\):

\[\begin{array}{rl} & {\mathbf{v}_{1,l}\sim p(\mathbf{v}),\quad \mathbf{w}_{1,l}\sim p(\mathbf{w}),\quad \mathbf{w}_{2,l}\sim p(\mathbf{w}),\quad [\mathbf{v}_{2,l}]_{k} = \left\{ \begin{array}{ll}{[\mathbf{v}_{1,l}]_{k},} & {\mathrm{if}\ k = y}\\ {\sim p(v_{k}),} & {\mathrm{otherwise}} \end{array} \right.}\\ & {\mathbf{x}_{1,l}\sim \mathbf{Sim}(\mathbf{v}_{1,l},\mathbf{w}_{1,l}),\quad \mathbf{x}_{2,l}\sim \mathbf{Sim}(\mathbf{v}_{2,l},\mathbf{w}_{2,l}),}\\ & {q(\mathbf{z}|\mathbf{x})\sim \mathcal{N}(\mu (\mathbf{x}),\sigma (\mathbf{x})),\quad \mathbf{z}_{1,l} = \mu (\mathbf{x}_{1,l}),\quad \mathbf{z}_{2,l} = \mu (\mathbf{x}_{2,l}),}\\ & {\mathbf{z}_{\mathrm{diff}}^{l} = |\mathbf{z}_{1,l} - \mathbf{z}_{2,l}|,\quad \mathbf{z}_{\mathrm{diff}}^{b} = \frac{1}{L}\sum_{l = 1}^{L}\mathbf{z}_{\mathrm{diff}}^{l}} \end{array} \quad (5)\]

All disentanglement metric score results reported in the paper were calculated in the following manner. Ten replicas of each model with the same hyperparameters were trained using different random seeds to obtain disentangled representations. Each of the ten trained model replicas was evaluated three times using the disentanglement metric score algorithm, each time using a different random seed to initialise the linear classifier. We then discarded the bottom \(50\%\) of the thirty resulting scores and reported the remaining results. This was done to control for the outlier results from the few experiments that diverged during training.

The results reported in the table in Fig. 6 (left) were calculated using the following data. Ground truth uses the independent data generating factors \(\mathbf{v}\) (our dataset did not contain any correlated data generating factors \(\mathbf{w}\)). The PCA and ICA decompositions keep the first ten components (the PCA components explain \(60.8\%\) of the variance). \(\beta\)-VAE \((\beta = 4)\), VAE \((\beta = 1)\) and the untrained VAE have the same fully connected architecture with ten latent units \(\mathbf{z}\).
InfoGAN uses "inferred" values of the five continuous latents that were regularised with the mutual information objective during training. ## A.5 CLASSIFYING THE GROUND TRUTH DATA GENERATIVE FACTORS VALUES In order to further verify the validity of our proposed disentanglement metric we ran an extra quantitative test: we trained a linear classifier to predict the ground truth value of each of the five data generative factors used to generate the 2D shapes dataset. While this test does not measure disentangling directly (since it does not measure independence of the latent representation), a disentangled representation should make such a classification trivial. It can be seen in Table 2 that the representation learnt by \(\beta\) - VAE is on average the best representation for factor classification across all five factors. It is closely followed by DC- IGN. It is interesting to note that ICA does well only at encoding object identity, while PCA manages to learn a very good representation of object position. Table 2: Linear classifier classification accuracy for predicting the ground truth values for each data generative factor from different latent representations. Each factor could take a variable number of possible values: 3 for id, 6 for scale, 40 for rotation and 32 for position X or Y. Best performing model results in each column are printed in bold. <table><tr><td rowspan="2">Model</td><td colspan="5">Classification accuracy</td></tr><tr><td>id</td><td>scale</td><td>rotation</td><td>position X</td><td>position Y</td></tr><tr><td>PCA</td><td>43.38</td><td>36.08</td><td>5.96</td><td>60.66</td><td>60.15</td></tr><tr><td>ICA</td><td>59.6</td><td>34.4</td><td>7.61</td><td>25.96</td><td>25.12</td></tr><tr><td>DC-IGN</td><td>44.82</td><td>45.92</td><td>15.89</td><td>47.64</td><td>45.88</td></tr><tr><td>InfoGAN</td><td>44.47</td><td>40.91</td><td>6.39</td><td>27.51</td><td>23.73</td></tr><tr><td>VAE untrained</td><td>39.44</td><td>25.33</td><td>6.09</td><td>16.69</td><td>14.39</td></tr><tr><td>VAE</td><td>41.55</td><td>24.07</td><td>8</td><td>16.5</td><td>18.72</td></tr><tr><td>β-VAE</td><td>50.08</td><td>43.03</td><td>20.36</td><td>52.25</td><td>49.5</td></tr></table> <--- Page Split ---> ## A.6 INTERPRETING NORMALISED \(\beta\) We start with the \(\beta\) - VAE constrained optimisation formulation that we have derived in Sec. 2. \[\mathcal{L}(\theta ,\phi ;\mathbf{x},\mathbf{z},\beta) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \beta D_{K L}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) \quad (6)\] We make the assumption that every pixel \(n\) in \(\mathbf{x}\in \mathbb{R}^{N}\) is conditionally independent given \(\mathbf{z}\) (Doersch, 2016). The first term of Eq. 6 then becomes: \[\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log \prod_{n}p_{\theta}(x_{n}|\mathbf{z})] = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\sum_{n}\log p_{\theta}(x_{n}|\mathbf{z})] \quad (7)\] Dividing both sides of Eq. 6 by \(N\) produces: \[\mathcal{L}(\theta ,\phi ;\mathbf{x},\mathbf{z},\beta)\propto \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\mathbb{E}_{n}[\log p_{\theta}(x_{n}|\mathbf{z})] - \frac{\beta}{N} D_{K L}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) \quad (8)\] We design \(\beta\) - VAE to learn conditionally independent factors of variation in the data. 
Hence we assume conditional independence of every latent \(z_{m}\) given \(\mathbf{x}\) (where \(m\in 1\dots M\), and \(M\) is the dimensionality of \(\mathbf{z}\)). Since our prior \(p(\mathbf{z})\) is an isotropic unit Gaussian, we can re-write the second term of Eq. 6 as:

\[D_{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) = \int_{\mathbf{z}}q_{\phi}(\mathbf{z}|\mathbf{x})\log \frac{q_{\phi}(\mathbf{z}|\mathbf{x})}{p(\mathbf{z})} = \sum_{m}\int_{z_{m}}q_{\phi}(z_{m}|\mathbf{x})\log \frac{q_{\phi}(z_{m}|\mathbf{x})}{p(z_{m})} \quad (9)\]

Multiplying the second term in Eq. 8 by a factor \(\frac{M}{M}\) produces:

\[\begin{array}{rl} & {\mathcal{L}(\theta ,\phi ;\mathbf{x},\mathbf{z},\beta)\propto \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\mathbb{E}_{n}[\log p_{\theta}(x_{n}|\mathbf{z})] - \frac{\beta M}{N}\mathbb{E}_{m}\left[\int_{z_{m}}q_{\phi}(z_{m}|\mathbf{x})\log \frac{q_{\phi}(z_{m}|\mathbf{x})}{p(z_{m})}\right]}\\ & {\qquad = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\mathbb{E}_{n}[\log p_{\theta}(x_{n}|\mathbf{z})] - \frac{\beta M}{N}\mathbb{E}_{m}[D_{KL}(q_{\phi}(z_{m}|\mathbf{x})||p(z_{m}))]} \end{array} \quad (10)\]

Hence using

\[\beta_{norm} = \frac{\beta M}{N}\]

in Eq. 10 is equivalent to optimising the original \(\beta\)-VAE formulation from Sec. 2, but with the additional independence assumptions that let us calculate the data log likelihood and KL divergence terms in expectation over the individual pixels \(x_{n}\) and individual latents \(z_{m}\).

## A.7 RELATIONSHIP BETWEEN \(\beta\) AND \(\epsilon\)

For a given \(\epsilon\) we can solve the constrained optimisation problem in Eq. 3 (find the optimal \((\theta^{*},\phi^{*},\beta^{*})\) such that \(\Delta \mathcal{F}(\theta^{*},\phi^{*},\beta^{*}) = 0\)). We can then re-write our optimal solution to the original optimisation problem in Eq. 2 as a function of \(\epsilon\):

\[\mathcal{G}(\theta^{*}(\epsilon),\phi^{*}(\epsilon)) = \mathbb{E}_{q_{\phi^{*}(\epsilon)}(\mathbf{z}|\mathbf{x})}[\log p_{\theta^{*}(\epsilon)}(\mathbf{x}|\mathbf{z})] \quad (11)\]

Now \(\beta\) can be interpreted as the rate of change of the optimal solution \((\theta^{*},\phi^{*})\) to \(\mathcal{G}\) when varying the constraint \(\epsilon\):

\[\frac{\delta \mathcal{G}}{\delta \epsilon} = \beta^{*}(\epsilon) \quad (12)\]

## A.8 DATA CONTINUITY

We hypothesise that data continuity plays a role in guiding unsupervised models towards learning the correct data manifolds. To test this idea we measured how the degree of learnt disentangling changes with reduced continuity in the 2D shapes dataset. We trained a \(\beta\)-VAE with \(\beta = 4\) (Figure 7A) on subsamples of the original 2D shapes dataset, where we progressively decreased the generative factor sampling density. Data continuity is inversely related to the average pixel-wise (Hamming) distance between two consecutive transforms of each object, normalised by the average number of pixels occupied by each of the two adjacent transforms of an object to account for object scale. Figure 8 demonstrates that as the continuity in the data reduces, the degree of disentanglement in the learnt representations also drops. This effect holds after additional hyperparameter tuning and cannot solely be explained by the decrease in dataset size, since the same VAE can learn disentangled representations from a data subset that preserves data continuity but is approximately \(55\%\) of the original size (results not shown).
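A small sketch of this continuity measure, assuming the two consecutive object transforms are available as binary numpy arrays; the normalisation follows the description above.

```python
import numpy as np

def normalised_hamming(img_a, img_b):
    """Pixel-wise Hamming distance between two consecutive binary object transforms,
    normalised by the average number of pixels the two objects occupy."""
    hamming = np.sum(img_a != img_b)
    occupied = 0.5 * (img_a.sum() + img_b.sum())  # average object size in pixels
    return hamming / occupied                     # assumes non-empty object masks
```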
![](images/15_0.jpg) <center>Figure 8: Negative correlation between data transform continuity and the degree of disentangling achieved by \(\beta\)-VAE. The abscissa is the average normalised Hamming distance between each of the two consecutive transforms of each object; the ordinate is the disentanglement metric score. Disentangling performance is robust to Bernoulli noise added to the data at test time, as shown by the slowly degrading classification accuracy up to a \(10\%\) noise level, considering that the 2D objects occupy on average between \(2-7\%\) of the image depending on scale. Fluctuations in classification accuracy for similar Hamming distances are due to the different nature of the subsampled generative factors (i.e. symmetries are present in rotation but are lacking in position).</center>

## A.9 \(\beta\)-VAE SAMPLES

Samples from \(\beta\)-VAE models that learnt disentangled \((\beta = 4)\) and entangled \((\beta = 1)\) representations can be seen in Figure 9.

## A.10 EXTRA \(\beta\)-VAE TRAVERSAL PLOTS

We present extra latent traversal plots from \(\beta\)-VAE models that learnt disentangled representations of the 3D chairs (Figures 10-11) and CelebA (Figures 12-14) datasets. Here we show traversals from all informative latents for a large number of seed images.

![](images/16_0.jpg) <center>Figure 9: Samples from \(\beta\)-VAE trained on the dataset of 2D shapes that learnt either a disentangled (left, \(\beta = 4\)) or an entangled (right, \(\beta = 1\)) representation of the data generative factors. It can be seen that sampling from an entangled representation results in some unrealistic looking samples. A disentangled representation that inverts the original data generation process does not suffer from such errors.</center>

![](images/17_0.jpg) <center>Figure 10: Latent traversal plots from \(\beta\)-VAE that learnt disentangled representations on the 3D chairs dataset.</center>

![](images/18_0.jpg) <center>Figure 11: Latent traversal plots from \(\beta\)-VAE that learnt disentangled representations on the 3D chairs dataset.</center>

![](images/19_0.jpg) <center>Figure 12: Latent traversal plots from \(\beta\)-VAE that learnt disentangled representations on the CelebA dataset.</center>

![](images/20_0.jpg) <center>Figure 13: Latent traversal plots from \(\beta\)-VAE that learnt disentangled representations on the CelebA dataset.</center>

![](images/21_0.jpg) <center>Figure 14: Latent traversal plots from \(\beta\)-VAE that learnt disentangled representations on the CelebA dataset.</center>
Most previous attempts required a priori knowledge of the number and/or nature of the data generative factors (Hinton et al., 2011; Rippel & Adams, 2013; Reed et al., 2014; Zhu et al., 2014; Yang et al., 2015; Goroshin et al., 2015; Kulkarni et al., 2015; Cheung et al., 2015; Whitney et al., 2016; Karaletsos et al., 2016). This is not always feasible in the real world, where the newly initialised learner may be exposed to complex data where no a priori knowledge of the generative factors exists, and little to no supervision for discovering the factors is available. Until recently purely unsupervised <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Manipulating latent variables on celebA: Qualitative results comparing disentangling performance of \(\beta\) -VAE \((\beta = 250)\) , VAE (Kingma & Welling, 2014) \((\beta = 1)\) and InfoGAN (Chen et al., 2016). In all figures of latent code traversal each block corresponds to the traversal of a single latent variable while keeping others fixed to either their inferred \((\beta\) -VAE, VAE and DC-IGN where applicable) or sampled (InfoGAN) values. Each row represents a different seed image used to infer the latent values in the VAE-based models, or a random sample of the noise variables in InfoGAN. \(\beta\) -VAE and VAE traversal is over the [-3, 3] range. InfoGAN traversal is over ten dimensional categorical latent variables. Only \(\beta\) -VAE and InfoGAN learnt to disentangle factors like azimuth (a), emotion (b) and hair style (c), whereas VAE learnt an entangled representation (e.g. azimuth is entangled with emotion, presence of glasses and gender). InfoGAN images adapted from Chen et al. (2016). Reprinted with permission. </center> approaches to disentangled factor learning have not scaled well (Schmidhuber, 1992; Desjardins et al., 2012; Tang et al., 2013; Cohen & Welling, 2014; 2015). Recently a scalable unsupervised approach for disentangled factor learning has been developed, called InfoGAN (Chen et al., 2016). InfoGAN extends the generative adversarial network (GAN) (Goodfellow et al., 2014) framework to additionally maximise the mutual information between a subset of the generating noise variables and the output of a recognition network. It has been reported to be capable of discovering at least a subset of data generative factors and of learning a disentangled representation of these factors. The reliance of InfoGAN on the GAN framework, however, comes at the cost of training instability and reduced sample diversity. Furthermore, InfoGAN requires some a priori knowledge of the data, since its performance is sensitive to the choice of the prior distribution and the number of the regularised noise variables. InfoGAN also lacks a principled inference network (although the recognition network can be used as one). The ability to infer the posterior latent distribution from sensory input is important when using the unsupervised model in transfer learning or zero- shot inference scenarios. Hence, while InfoGAN is an important step in the right direction, we believe that further improvements are necessary to achieve a principled way of using unsupervised learning for developing more human- like learning and reasoning in algorithms as described by Lake et al. (2016). Finally, there is currently no general method for quantifying the degree of learnt disentanglement. Therefore there is no way to quantitatively compare the degree of disentanglement achieved by different models or when optimising the hyperparameters of a single model. 
<--- Page Split ---> ![](images/2_0.jpg) <center>Figure 2: Manipulating latent variables on 3D chairs: Qualitative results comparing disentangling performance of \(\beta\) -VAE \((\beta = 5)\) , VAE (Kingma & Welling, 2014) \((\beta = 1)\) , InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015). InfoGAN traversal is over the [-1, 1] range. VAE always learns an entangled representation (e.g. chair width is entangled with azimuth and leg style (b)). All models apart from VAE learnt to disentangle the labelled data generative factor, azimuth (a). InfoGAN and \(\beta\) -VAE were also able to discover unlabelled factors in the dataset, such as chair width (b). Only \(\beta\) -VAE, however, learnt about the unlabelled factor of chair leg style (c). InfoGAN and DC-IGN images adapted from Chen et al. (2016) and Kulkarni et al. (2015), respectively. Reprinted with permission. </center> In this paper we attempt to address these issues. We propose \(\beta\) - VAE, a deep unsupervised generative approach for disentangled factor learning that can automatically discover the independent latent factors of variation from unlabelled data. Our approach is based on the variational autoencoder (VAE) framework (Kingma & Welling, 2014; Rezende et al., 2014), which brings scalability and training stability. While the original VAE work has been shown to achieve limited disentangling performance on simple datasets, such as FreyFaces or MNIST (Kingma & Welling, 2014), this disentangling performance does not scale to more complex datasets (e.g. Aubry et al., 2014; Paysan et al., 2009; Liu et al., 2015), prompting the development of more elaborate semi- supervised VAE- based approaches for learning disentangled factors (e.g. Kulkarni et al., 2015; Karaletsos et al., 2016). We propose augmenting the original VAE framework with a single hyperparameter \(\beta\) that modulates the learning constraints applied to the model. These constraints impose a limit on the capacity of the latent information channel and control the emphasis on learning statistically independent latent factors. \(\beta\) - VAE with \(\beta = 1\) corresponds to the original VAE framework (Kingma & Welling, 2014; Rezende et al., 2014). With \(\beta > 1\) the model is pushed to learn a more efficient latent representation of the data, which is disentangled if the data contains at least some underlying factors of variation that are independent. We show that this simple modification allows \(\beta\) - VAE to significantly improve the degree of disentanglement in learnt latent representations compared to the unmodified VAE framework (Kingma & Welling, 2014; Rezende et al., 2014). Furthermore, we show that \(\beta\) - VAE achieves state of the art disentangling performance against both the best unsupervised (InfoGAN: Chen et al., 2016) and semi- supervised (DC- IGN: Kulkarni et al., 2015) approaches for disentangled factor learning on a number of benchmark datasets, such as CelebA (Liu et al., 2015), chairs (Aubry et al., 2014) and faces (Paysan et al., 2009), using qualitative evaluation. Finally, to help quantify the differences, we develop a new measure of disentanglement and show that \(\beta\) - VAE significantly outperforms all our baselines on this measure (ICA, PCA, VAE Kingma & Welling (2014), DC- IGN Kulkarni et al. (2015), and InfoGAN Chen et al. (2016)). 
Our main contributions are the following: 1) we propose \(\beta\) - VAE, a new unsupervised approach for learning disentangled representations of independent visual data generative factors; 2) we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models; 3) we demonstrate both qualitatively and quantitatively that our \(\beta\) - VAE approach achieves state- of- the- art disentanglement performance compared to various baselines on a variety of complex datasets. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 3: Manipulating latent variables on 3D faces: Qualitative results comparing disentangling performance of \(\beta\) -VAE ( \(\beta = 20\) ), VAE (Kingma & Welling, 2014) ( \(\beta = 1\) ), InfoGAN (Chen et al., 2016) and DC-IGN (Kulkarni et al., 2015). InfoGAN traversal is over the [-1, 1] range. All models learnt to disentangle lighting (b) and elevation (c). DC-IGN and VAE struggled to continuously interpolate between different azimuth angles (a), unlike \(\beta\) -VAE, which additionally learnt to encode a wider range of azimuth angles than other models. InfoGAN and DC-IGN images adapted from Chen et al. (2016) and Kulkarni et al. (2015), respectively. Reprinted with permission. </center> ![](images/3_1.jpg) <center>Figure 4: Latent factors learnt by \(\beta\) -VAE on celebA: traversal of individual latents demonstrates that \(\beta\) -VAE discovered in an unsupervised manner factors that encode skin colour, transition from an elderly male to a younger female, and image saturation. </center> ## 2 \(\beta\) -VAE FRAMEWORK DERIVATION Let \(\mathcal{D} = \{X,V,W\}\) be the set that consists of images \(\mathbf{x}\in \mathbb{R}^{N}\) and two sets of ground truth data generative factors: conditionally independent factors \(\mathbf{v}\in \mathbb{R}^{K}\) , where \(\log p(\mathbf{v}|\mathbf{x}) = \sum_{k}\log p(v_{k}|\mathbf{x})\) ; and conditionally dependent factors \(\mathbf{w}\in \mathbb{R}^{H}\) . We assume that the images \(\mathbf{x}\) are generated by the true world simulator using the corresponding ground truth data generative factors: \(p(\mathbf{x}|\mathbf{v},\mathbf{w}) = \mathbf{Sim}(\mathbf{v},\mathbf{w})\) . <--- Page Split ---> We want to develop an unsupervised deep generative model that, using samples from \(X\) only, can learn the joint distribution of the data \(\mathbf{x}\) and a set of generative latent factors \(\mathbf{z}\) ( \(\mathbf{z} \in \mathbb{R}^{M}\) , where \(M \geq K\) ) such that \(\mathbf{z}\) can generate the observed data \(\mathbf{x}\) ; that is, \(p(\mathbf{x}|\mathbf{z}) \approx p(\mathbf{x}|\mathbf{v}, \mathbf{w}) = \mathbf{Sim}(\mathbf{v}, \mathbf{w})\) . Thus a suitable objective is to maximise the marginal (log-)likelihood of the observed data \(\mathbf{x}\) in expectation over the whole distribution of latent factors \(\mathbf{z}\) : \[\max_{\theta}\mathbb{E}_{p_{\theta}(\mathbf{z})}[p_{\theta}(\mathbf{x}|\mathbf{z})] \quad (1)\] For a given observation \(\mathbf{x}\) , we describe the inferred posterior configurations of the latent factors \(\mathbf{z}\) by a probability distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\) . Our aim is to ensure that the inferred latent factors \(q_{\phi}(\mathbf{z}|\mathbf{x})\) capture the generative factors \(\mathbf{v}\) in a disentangled manner. The conditionally dependent data generative factors \(\mathbf{w}\) can remain entangled in a separate subset of \(\mathbf{z}\) that is not used for representing \(\mathbf{v}\) . 
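To make the inference and generator networks concrete before deriving the objective, the following is a minimal sketch (ours, not the authors' released code) of \(q_{\phi}(\mathbf{z}|\mathbf{x})\) and \(p_{\theta}(\mathbf{x}|\mathbf{z})\) . The layer sizes follow the fully connected 2D shapes architecture listed in Tbl. 1 of the Appendix; the PyTorch framing and all identifier names are our own assumptions.

```python
import torch.nn as nn

class Encoder(nn.Module):
    """Amortised inference network q_phi(z|x): maps a flattened 64x64 image
    to the mean and log-variance of a diagonal Gaussian over z."""
    def __init__(self, n_in=4096, n_hid=1200, n_lat=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_in, n_hid), nn.ReLU(),
            nn.Linear(n_hid, n_hid), nn.ReLU())
        self.mu = nn.Linear(n_hid, n_lat)       # mean of q(z|x)
        self.logvar = nn.Linear(n_hid, n_lat)   # log-variance of q(z|x)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """Generator p_theta(x|z) with a Bernoulli likelihood over pixels;
    returns the logits of the per-pixel Bernoulli means."""
    def __init__(self, n_lat=10, n_hid=1200, n_out=4096):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_lat, n_hid), nn.Tanh(),
            nn.Linear(n_hid, n_hid), nn.Tanh(),
            nn.Linear(n_hid, n_hid), nn.Tanh(),
            nn.Linear(n_hid, n_out))

    def forward(self, z):
        return self.body(z)
```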
In order to encourage this disentangling property in the inferred \(q_{\phi}(\mathbf{z}|\mathbf{x})\) , we introduce a constraint over it by trying to match it to a prior \(p(\mathbf{z})\) that can both control the capacity of the latent information bottleneck and embody the desiderata of statistical independence mentioned above. This can be achieved if we set the prior to be an isotropic unit Gaussian \((p(\mathbf{z}) = \mathcal{N}(\mathbf{0}, I))\) , hence arriving at the constrained optimisation problem in Eq. 2, where \(\epsilon\) specifies the strength of the applied constraint. \[\max_{\phi ,\theta}\mathbb{E}_{\mathbf{x}\sim \mathcal{D}}\left[\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})]\right]\quad \mathrm{subject~to}\quad D_{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z}))< \epsilon \quad (2)\] Re- writing Eq. 2 as a Lagrangian under the KKT conditions (Kuhn & Tucker, 1951; Karush, 1939), we obtain: \[\mathcal{F}(\theta ,\phi ,\beta ;\mathbf{x},\mathbf{z}) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \beta (D_{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) - \epsilon) \quad (3)\] where the KKT multiplier \(\beta\) is the regularisation coefficient that constrains the capacity of the latent information channel \(\mathbf{z}\) and puts implicit independence pressure on the learnt posterior due to the isotropic nature of the Gaussian prior \(p(\mathbf{z})\) . Since \(\beta , \epsilon \geq 0\) according to the complementary slackness KKT condition, Eq. 3 can be re- written to arrive at the \(\beta\) - VAE formulation: the familiar variational free energy objective function as described by Jordan et al. (1999), but with the addition of the \(\beta\) coefficient: \[\mathcal{F}(\theta ,\phi ,\beta ;\mathbf{x},\mathbf{z})\geq \mathcal{L}(\theta ,\phi ;\mathbf{x},\mathbf{z},\beta) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \beta D_{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) \quad (4)\] Varying \(\beta\) changes the degree of applied learning pressure during training, thus encouraging different learnt representations. \(\beta\) - VAE where \(\beta = 1\) corresponds to the original VAE formulation of Kingma & Welling (2014). We postulate that in order to learn disentangled representations of the conditionally independent data generative factors \(\mathbf{v}\) , it is important to set \(\beta > 1\) , thus putting a stronger constraint on the latent bottleneck than in the original VAE formulation of Kingma & Welling (2014). These constraints limit the capacity of \(\mathbf{z}\) , which, combined with the pressure to maximise the log likelihood of the training data \(\mathbf{x}\) under the model, should encourage the model to learn the most efficient representation of the data. Since the data \(\mathbf{x}\) is generated using at least some conditionally independent ground truth factors \(\mathbf{v}\) , and the \(D_{KL}\) term of the \(\beta\) - VAE objective function encourages conditional independence in \(q_{\phi}(\mathbf{z}|\mathbf{x})\) , we hypothesise that higher values of \(\beta\) should encourage learning a disentangled representation of \(\mathbf{v}\) . The extra pressures coming from high \(\beta\) values, however, may create a trade- off between reconstruction fidelity and the quality of disentanglement within the learnt latent representations. 
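In implementation terms, Eq. 4 differs from a standard VAE objective only through the \(\beta\) weight on the KL term. Below is a minimal sketch of the loss, assuming a Bernoulli decoder, a diagonal Gaussian posterior and a single-sample Monte Carlo estimate of the expectation; the function names are ours, not the authors'.

```python
import torch
import torch.nn.functional as F

def reparameterise(mu, logvar):
    """Differentiable sample z ~ q(z|x): z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def beta_vae_loss(x, x_logits, mu, logvar, beta=4.0):
    """Negative of the lower bound in Eq. 4, to be minimised."""
    # -E_q[log p(x|z)] for a Bernoulli likelihood, estimated with one z sample
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction='sum')
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```

Setting beta=1 recovers the original VAE bound, while beta>1 strengthens the bottleneck and independence pressures discussed above.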
Disentangled representations emerge when the right balance is found between information preservation (reconstruction cost as regularisation) and latent channel capacity restriction \((\beta > 1)\) . The latter can lead to poorer reconstructions due to the loss of high frequency details when passing through a constrained latent bottleneck. Hence, the log likelihood of the data under the learnt model is a poor metric for evaluating disentangling in \(\beta\) - VAEs. Instead we propose a quantitative metric that directly measures the degree of learnt disentanglement in the latent representation. Since our proposed hyperparameter \(\beta\) directly affects the degree of learnt disentanglement, we would like to estimate the optimal \(\beta\) for learning a disentangled latent representation directly. However, it is not possible to do so. This is because the optimal \(\beta\) will depend on the value of \(\epsilon\) in Eq.2. Different datasets and different model architectures will require different optimal values of \(\epsilon\) . However, when optimising \(\beta\) in Eq. 4, we are indirectly also optimising \(\epsilon\) for the best disentanglement (see Sec.A.7 for details), and while we can not learn the optimal value of \(\beta\) directly, we can instead estimate it using either our proposed disentanglement metric (see Sec. 3) or through visual inspection heuristics. <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 5: Schematic of the proposed disentanglement metric: over a batch of \(L\) samples, each pair of images has a fixed value for one target generative factor \(y\) (here \(y = scale\) ) and differs on all others. A linear classifier is then trained to identify the target factor using the average pairwise difference \(\mathbf{z}_{\mathrm{diff}}^{l}\) in the latent space over \(L\) samples. </center> ## 3 DISENTANGLEMENT METRIC It is important to be able to quantify the level of disentanglement achieved by different models. Designing a metric for this, however, is not straightforward. We begin by defining the properties that we expect a disentangled representation to have. Then we describe our proposed solution for quantifying the presence of such properties in a learnt representation. As stated above, we assume that the data is generated by a ground truth simulation process which uses a number of data generative factors, some of which are conditionally independent, and we also assume that they are interpretable. For example, the simulator might sample independent factors corresponding to object shape, colour and size to generate an image of a small green apple. Because of the independence property, the simulator can also generate small red apples or big green apples. A representation of the data that is disentangled with respect to these generative factors, i.e. which encodes them in separate latents, would enable robust classification even using very simple linear classifiers (hence providing interpretability). For example, a classifier that learns a decision boundary that relies on object shape would perform as well when other data generative factors, such as size or colour, are varied. Note that a representation consisting of independent latents is not necessarily disentangled, according to our desiderata. Independence can readily be achieved by a variety of approaches (such as PCA or ICA) that learn to project the data onto independent bases. 
Representations learnt by such approaches do not in general align with the data generative factors and hence may lack interpretability. For this reason, a simple cross- correlation calculation between the inferred latents would not suffice as a disentanglement metric. Our proposed disentangling metric, therefore, measures both the independence and interpretability (due to the use of a simple classifier) of the inferred latents. To apply our metric, we run inference on a number of images that are generated by fixing the value of one data generative factor while randomly sampling all others. If the independence and interpretability properties hold for the inferred representations, there will be less variance in the inferred latents that correspond to the fixed generative factor. We use a low capacity linear classifier to identify this factor and report the accuracy value as the final disentanglement metric score. Smaller variance in the latents corresponding to the target factor will make the job of this classifier easier, resulting in a higher score under the metric. See Fig. 5 for a representation of the full process. More formally, we start from a dataset \(\mathcal{D} = \{X, V, W\}\) as described in Sec. 2, assumed to contain a balanced distribution of ground truth factors \((\mathbf{v}, \mathbf{w})\) , where images data points are obtained using a ground truth simulator process \(\mathbf{x} \sim \mathbf{Sim}(\mathbf{v}, \mathbf{w})\) . We also assume we are given labels identifying a subset of the independent data generative factors \(\mathbf{v} \in V\) for at least some instances. We then construct a batch of \(B\) vectors \(\mathbf{z}_{\mathrm{diff}}^{l}\) , to be fed as inputs to a linear classifier as follows: 1. Choose a factor \(y \sim \text{Unif}[1 \ldots K]\) (e.g. \(y = \text{scale}\) in Fig. 5). <--- Page Split ---> 2. For a batch of \(L\) samples: (a) Sample two sets of latent representations, \(\mathbf{v}_{1,l}\) and \(\mathbf{v}_{2,l}\) , enforcing \([\mathbf{v}_{1,l}]_{k} = [\mathbf{v}_{2,l}]_{k}\) if \(k = y\) (so that the value of factor \(k = y\) is kept fixed). (b) Simulate image \(\mathbf{x}_{1,l} \sim \mathbf{Sim}(\mathbf{v}_{1,l})\) , then infer \(\mathbf{z}_{1,l} = \mu (\mathbf{x}_{1,l})\) , using the encoder \(q(\mathbf{z}|\mathbf{x}) \sim N(\mu (\mathbf{x}), \sigma (\mathbf{x}))\) . Repeat the process for \(\mathbf{v}_{2,l}\) . (c) Compute the difference \(\mathbf{z}_{\mathrm{diff}}^{l} = |\mathbf{z}_{1,l} - \mathbf{z}_{2,l}|\) , the absolute linear difference between the inferred latent representations. 3. Use the average \(\mathbf{z}_{\mathrm{diff}}^{b} = \frac{1}{L} \sum_{l = 1}^{L} \mathbf{z}_{\mathrm{diff}}^{l}\) to predict \(p(y|\mathbf{z}_{\mathrm{diff}}^{b})\) (again, \(y = scale\) in Fig. 5) and report the accuracy of this predictor as disentanglement metric score. The classifier's goal is to predict the index \(y\) of the generative factor that was kept fixed for a given \(\mathbf{z}_{\mathrm{diff}}^{b}\) . The accuracy of this classifier over multiple batches is used as our disentanglement metric score. We choose a linear classifier with low VC- dimension in order to ensure it has no capacity to perform nonlinear disentangling by itself. We take differences of two inferred latent vectors to reduce the variance in the inputs to the classifier, and to reduce the conditional dependence on the inputs \(\mathbf{x}\) . This ensures that on average \(\left[\mathbf{z}_{\mathrm{diff}}^{b}\right]_{y} < \left[\mathbf{z}_{\mathrm{diff}}^{b}\right]_{\{y\}}\) . 
See Equations 5 in Appendix A.4 for more details of the process. ## 4 EXPERIMENTS In this section we first qualitatively demonstrate that our proposed \(\beta\) - VAE framework consistently discovers more latent factors and disentangles them in a cleaner fashion than either the unmodified VAE (Kingma & Welling, 2014) or state of the art unsupervised (InfoGAN: Chen et al., 2016) and semi- supervised (DC- IGN: Kulkarni et al., 2015) solutions for disentangled factor learning on a variety of benchmarks. We then quantify and characterise the differences in disentangled factor learning between our \(\beta\) - VAE framework and a variety of benchmarks using our proposed new disentangling metric. ### 4.1 QUALITATIVE BENCHMARKS We trained \(\beta\) - VAE (see Tbl. 1 for architecture details) on a variety of datasets commonly used to evaluate disentangling performance of models: celebA (Liu et al., 2015), chairs (Aubry et al., 2014) and faces (Paysan et al., 2009). Figures 1- 3 provide a qualitative comparison of the disentangling performance of \(\beta\) - VAE, VAE \((\beta = 1)\) (Kingma & Welling, 2014), InfoGAN (Chen et al., 2016) and DC- IGN (Kulkarni et al., 2015) as appropriate. It can be seen that across all datasets \(\beta\) - VAE is able to automatically discover and learn to disentangle all of the factors learnt by the semi- supervised DC- IGN (Kulkarni et al., 2015): azimuth (Fig. 3a, Fig. 2a), lighting and elevation (Fig. 3b,c). Often it acts as a more convincing inverse graphics network than DC- IGN (e.g. Fig. 3a) or InfoGAN (e.g. Fig. 2a, Fig. 1a- c or Fig. 3a). Furthermore, unlike DC- IGN, \(\beta\) - VAE requires no supervision and hence can learn about extra unlabelled data generative factors that DC- IGN cannot learn by design, such as chair width or leg style (Fig. 2b,c). The unsupervised InfoGAN (Chen et al., 2016) approach shares this quality with \(\beta\) - VAE, and the two frameworks tend to discover overlapping, but not necessarily identical sets of data generative factors. For example, both \(\beta\) - VAE and InfoGAN (but not DC- IGN) learn about the width of chairs (Fig. 2b). Only \(\beta\) - VAE, however, learns about the chair leg style (Fig. 2c). It is interesting to note how \(\beta\) - VAE is able to generate an armchair with a round office chair base, even though such armchairs do not exist in the dataset (or, perhaps, reality). Furthermore, only \(\beta\) - VAE is able to discover all three factors of variation (chair azimuth, width and leg style) within a single model, while InfoGAN learns to allocate its continuous latent variable to either azimuth or width. InfoGAN sometimes discovers factors that \(\beta\) - VAE does not precisely disentangle, such as the presence of sunglasses in celebA. \(\beta\) - VAE does, however, discover numerous extra factors such as skin colour, image saturation, and age/gender that are not reported in the InfoGAN paper (Chen et al., 2016) (Fig. 4). Furthermore, \(\beta\) - VAE latents tend to learn a smooth continuous transformation over a wider range of factor values than InfoGAN (e.g. rotation over a wider range of angles as shown in Figs. 1- 3a). <--- Page Split ---> Overall \(\beta\) - VAE tends to consistently and robustly discover more latent factors and learn cleaner disentangled representations of them than either InfoGAN or DC- IGN. This holds even on such challenging datasets as celebA. 
Furthermore, unlike InfoGAN and DC- IGN, \(\beta\) - VAE requires no design decisions or assumptions about the data, and is very stable to train. When compared to the unmodified VAE baseline ( \(\beta = 1\) ), \(\beta\) - VAE consistently learns significantly more disentangled latent representations. For example, when learning about chairs, VAE entangles chair width with leg style (Fig. 2b). When learning about celebA, VAE entangles azimuth with emotion and gender (Fig. 1a); emotion with hair style, skin colour and identity (Fig. 1b); while the VAE fringe latent also codes for baldness and head size (Fig. 1c). Although VAE performs relatively well on the faces dataset, it still struggles to learn a clean representation of azimuth (Fig. 3a). This suggests that a continuum of disentanglement quality exists, and that it can be traversed by varying \(\beta\) within the \(\beta\) - VAE framework. While increasing \(\beta\) often leads to better disentanglement, it may come at the cost of blurrier reconstructions and the loss of representations for some factors, particularly those that correspond to only minor changes in pixel space. ### 4.2 QUANTITATIVE BENCHMARKS In order to quantitatively compare the disentangling performance of \(\beta\) - VAE against various baselines, we created a synthetic dataset of 737,280 binary 2D shapes (heart, oval and square) generated from the Cartesian product of the shape and four independent generative factors \(v_{k}\) defined in vector graphics: position X (32 values), position Y (32 values), scale (6 values) and rotation (40 values over the \(2\pi\) range). To ensure smooth affine object transforms, consecutive values of each factor \(v_{k}\) were chosen to give minimal differences in pixel space at the 64x64 pixel image resolution. This dataset was chosen because it contains no confounding factors apart from its five independent data generative factors (identity, position X, position Y, scale and rotation). This gives us knowledge of the ground truth for comparing the disentangling performance of different models in an objective manner. We used our proposed disentanglement metric (see Sec. 3) to quantitatively compare the ability of \(\beta\) - VAE to automatically discover and learn a disentangled representation of the data generative factors of the synthetic dataset of 2D shapes described above with that of a number of benchmarks (see Tbl. 1 in Appendix for model architecture details). The table in Fig. 6 (left) reports the classification accuracy of the disentanglement metric for 5,000 test samples. It can be seen that \(\beta\) - VAE ( \(\beta = 4\) ) significantly outperforms all baselines: an untrained VAE, the original VAE formulation of Kingma & Welling (2014) ( \(\beta = 1\) ) with the same architecture as \(\beta\) - VAE, the top ten PCA or ICA components of the data (see Sec. A.3 for details), and the raw pixels used directly. \(\beta\) - VAE also does better than InfoGAN. Remarkably, \(\beta\) - VAE performs on the same level as DC- IGN despite the latter being semi- supervised and the former wholly unsupervised. Furthermore, \(\beta\) - VAE achieved similar classification accuracy as the ground truth vectors used for data generation, thus suggesting that it was able to learn a very good disentangled representation of the data generative factors. We also examined qualitatively the representations learnt by \(\beta\) - VAE, VAE, InfoGAN and DC- IGN on the synthetic dataset of 2D shapes. Fig. 
7A demonstrates that after training, \(\beta\) - VAE with \(\beta = 4\) learnt a good (while not perfect) disentangled representation of the data generative factors, and its decoder learnt to act as a rendering engine. Its performance was comparable to that of DC- IGN (Fig. 7C), with the difference that DC- IGN required a priori knowledge about the quantity of the data generative factors, while \(\beta\) - VAE was able to discover them in an unsupervised manner. The most informative latent units \(z_{m}\) of \(\beta\) - VAE have the highest KL divergence from the unit Gaussian prior ( \(p(z) = \mathcal{N}(0, I)\) ), while the uninformative latents have KL divergence close to zero. Fig. 7A demonstrates the selectivity of each latent \(z_{m}\) to the independent data generating factors: \(z_{m}^{t} = f(v_{k})\ \forall v_{k} \in \{v_{\text{positionX}}, v_{\text{positionY}}, v_{\text{scale}}, v_{\text{rotation}}\}\) (top three rows), where \(z_{m}^{t}\) is the learnt Gaussian mean of latent unit \(z_{m}\) . The effect of traversing each latent \(z_{m}\) on the resulting reconstructions is shown in the bottom five rows of Fig. 7A. The latents \(z_{6}\) and \(z_{2}\) learnt to encode X and Y coordinates of the objects respectively; unit \(z_{1}\) learnt to encode scale; and units \(z_{5}\) and \(z_{7}\) learnt to encode rotation. The frequency of oscillations in each rotational latent corresponds to the rotational symmetry of the corresponding object ( \(2\pi\) for heart, \(\pi\) for oval and \(\pi /2\) for square). Furthermore, the two rotational latents seem to encode cos and sin rotational coordinates, while the positional latents align with the Cartesian axes. While such alignment with intuitive factors for humans is not guaranteed, empirically we found it to be very common. Fig. 7B demonstrates that the unmodified <--- Page Split ---> <table><tr><td>Model</td><td>Disentanglement metric score</td></tr><tr><td>Ground truth</td><td>100%</td></tr><tr><td>Raw pixels</td><td>45.75 ± 0.8%</td></tr><tr><td>PCA</td><td>84.9 ± 0.4%</td></tr><tr><td>ICA</td><td>42.03 ± 10.6%</td></tr><tr><td>DC-IGN</td><td>99.3 ± 0.1%</td></tr><tr><td>InfoGAN</td><td>73.5 ± 0.9%</td></tr><tr><td>VAE untrained</td><td>44.14 ± 2.5%</td></tr><tr><td>VAE</td><td>61.58 ± 0.5%</td></tr><tr><td>β-VAE</td><td>99.23 ± 0.1%</td></tr></table> ![](images/8_0.jpg) <center>Figure 6: Disentanglement metric classification accuracy for 2D shapes dataset. Left: Accuracy for different models and training regimes. Right: Positive correlation is present between the size of \(\mathbf{z}\) and the optimal normalised values of \(\beta\) for disentangled factor learning for a fixed \(\beta\) -VAE architecture. \(\beta\) values are normalised by latent \(\mathbf{z}\) size \(m\) and input \(\mathbf{x}\) size \(n\) . Note that \(\beta\) values are not uniformly sampled. Orange approximately corresponds to unnormalised \(\beta = 1\) . Good reconstructions are associated with entangled representations (lower disentanglement scores). Disentangled representations (high disentanglement scores) often result in blurry reconstructions. </center> VAE baseline ( \(\beta = 1\) ) is not able to disentangle generative factors in the data as well as \(\beta\) - VAE with appropriate learning pressures. Instead each latent \(\mathbf{z}\) (apart from \(\mathbf{z}_0\) , which learnt rotation) encodes at least two data generative factors. InfoGAN also achieved a degree of disentangling (see Fig. 7D), particularly for positional factors. 
However, despite our best efforts to train InfoGAN, we were not able to achieve the same degree of disentangling in other factors, such as rotation, scale and shape. We also found its ability to generate the different shapes in the dataset to be inaccurate and unstable during training, possibly due to reported limitations of the GAN framework, which can struggle to learn the full data distribution and instead will often learn a small subset of its modes (Salimans et al., 2016; Zhao et al., 2016). Understanding the effects of \(\beta\) We hypothesised that constrained optimisation is important for enabling deep unsupervised models to learn disentangled representations of the independent data generative factors (Sec. 2). In the \(\beta\) - VAE framework this corresponds to tuning the \(\beta\) coefficient. One way to view \(\beta\) is as a mixing coefficient (see Sec. A.6 for a derivation) for balancing the magnitudes of gradients from the reconstruction and the prior- matching components of the VAE lower bound formulation in Eq. 4 during training. In this context it makes sense to normalise \(\beta\) by latent \(\mathbf{z}\) size \(m\) and input \(\mathbf{x}\) size \(n\) in order to compare its different values across different latent layer sizes and different datasets ( \(\beta_{norm} = \frac{\beta M}{N}\) ). We found that larger latent \(\mathbf{z}\) layer sizes \(m\) require higher constraint pressures (higher \(\beta\) values), see Fig. 6 (Right). Furthermore, the relationship of \(\beta\) for a given \(m\) is characterised by an inverted U curve. When \(\beta\) is too low or too high the model learns an entangled latent representation due to either too much or too little capacity in the latent \(\mathbf{z}\) bottleneck. We find that in general \(\beta > 1\) is necessary to achieve good disentanglement. However if \(\beta\) is too high and the resulting capacity of the latent channel is lower than the number of data generative factors, then the learnt representation necessarily has to be entangled (as a low- rank projection of the true data generative factors will compress them in a non- factorial way to still capture the full data distribution well). We also note that VAE reconstruction quality is a poor indicator of learnt disentanglement. Good disentangled representations often lead to blurry reconstructions due to the restricted capacity of the latent information channel \(\mathbf{z}\) , while entangled representations often result in the sharpest reconstructions. We therefore suggest that one should not necessarily strive for perfect reconstructions when using \(\beta\) - VAEs as unsupervised feature learners - though it is often possible to find the right \(\beta\) - VAE architecture and the right value of \(\beta\) to have both well disentangled latent representations and good reconstructions. We proposed a principled way of choosing \(\beta\) for datasets with at least weak label information. If label information exists for at least a small subset of the independent data generative factors of variation, one can apply the disentanglement metric described in Sec. 3 to approximate the level of learnt disentanglement for various \(\beta\) choices during a hyperparameter sweep. When such labelled information is not available, the optimal value of \(\beta\) can be found through visual inspection of what <--- Page Split ---> ![](images/9_0.jpg) <center>Figure 7: A: Representations learnt by a \(\beta\) -VAE \((\beta = 4)\) . 
Each column represents a latent \(z_{i}\) , ordered according to the learnt Gaussian variance (last row). Row 1 (position) shows the mean activation (red represents high values) of each latent \(z_{i}\) as a function of all 32x32 locations averaged across objects, rotations and scales. Row 2 and 3 show the mean activation of each unit \(z_{i}\) as a function of scale (respectively rotation), averaged across rotations and positions (respectively scales and positions). Square is red, oval is green and heart is blue. Rows 4-8 (second group) show reconstructions resulting from the traversal of each latent \(z_{i}\) over three standard deviations around the unit Gaussian prior mean while keeping the remaining 9/10 latent units fixed to the values obtained by running inference on an image from the dataset. B: Similar analysis for VAE \((\beta = 1)\) . C: Similar analysis for DCIGN, clamping a single latent each for scale, positions, orientation and 5 for shape. D: Similar analysis for InfoGAN, using 5 continuous latents regularized using the mutual information cost, and 5 additional unconstrained noise latents (not shown). </center> effect the traversal of each single latent unit \(z_{m}\) has on the generated images \((\mathbf{x}|\mathbf{z})\) in pixel space (as shown in Fig. 7 rows 4- 8). For the 2D shapes dataset, we have found that the optimal values of \(\beta\) as determined by visual inspection match closely the optimal values as determined by the disentanglement metric. ## 5 CONCLUSION In this paper we have reformulated the standard VAE framework (Kingma & Welling, 2014; Rezende et al., 2014) as a constrained optimisation problem with strong latent capacity constraint and independence prior pressures. By augmenting the lower bound formulation with the \(\beta\) coefficient that regulates the strength of such pressures and, as a consequence, the qualitative nature of the representations learnt by the model, we have achieved state of the art results for learning disentangled representations of data generative factors. We have shown that our proposed \(\beta\) - VAE framework significantly outperforms both qualitatively and quantitatively the original VAE (Kingma & Welling, 2014), as well as state- of- the- art unsupervised (InfoGAN: Chen et al., 2016) and semi- supervised (DC- IGN: Kulkarni et al., 2015) approaches to disentangled factor learning. Furthermore, we have shown that \(\beta\) - VAE consistently and robustly discovers more factors of variation in the data, and it learns a representation that covers a wider range of factor values and is disentangled more cleanly than other benchmarks, all in a completely unsupervised manner. Unlike InfoGAN and DC- IGN, our approach does not depend on any a priori knowledge about the number or the nature of data generative factors. Our preliminary investigations suggest that the performance of the \(\beta\) - VAE framework may depend on the sampling density of the data generative factors within a training dataset (see Appendix A.8 for more details). It appears that having more densely sampled data generative factors results in better disentangling performance of \(\beta\) - VAE, however we leave a more principled investigation of this effect to future work. <--- Page Split ---> \(\beta\) - VAE is robust with respect to different architectures, optimisation parameters and datasets, hence requiring few design decisions. 
Our approach relies on the optimisation of a single hyperparameter \(\beta\) , which can be found directly through a hyperparameter search if weakly labelled data is available to calculate our new proposed disentangling metric. Alternatively the optimal \(\beta\) can be estimated heuristically in purely unsupervised scenarios. Learning an interpretable factorised representation of the independent data generative factors in a completely unsupervised manner is an important precursor for the development of artificial intelligence that understands the world in the same way that humans do (Lake et al., 2016). We believe that using our approach as an unsupervised pretraining stage for supervised or reinforcement learning will produce significant improvements for scenarios such as transfer or fast learning. ## 6 ACKNOWLEDGEMENTS We would like to thank Charles Blundell, Danilo Rezende, Tejas Kulkarni and David Pfau for helpful comments that improved the manuscript. ## A APPENDIX ## A.1 MODEL ARCHITECTURE DETAILS A summary of all model architectures used in this paper can be seen in Tbl 1. ## A.2 INFOGAN TRAINING To train the InfoGAN network described in Tbl. 1 on the 2D shapes dataset (Fig. 7), we followed the training paradigm described in Chen et al. (2016) with the following modifications. For the mutual information regularised latent code, we used 5 continuous variables \(c_{i}\) sampled uniformly from \((- 1,1)\) . We used 5 noise variables \(z_{i}\) , as we found that using a reduced number of noise variables improved the quality of generated samples for this dataset. To help stabilise training, we used the instance noise trick described in Shi et al. (2016), adding Gaussian noise to the discriminator inputs (0.2 standard deviation on images scaled to \([- 1,1]\) ). We followed Radford et al. (2015) for the architecture of the convolutional layers, and used batch normalisation in all layers except the last in the generator and the first in the discriminator. <--- Page Split ---> <table><tr><td>Dataset</td><td>Optimiser</td><td></td><td>Architecture</td></tr><tr><td rowspan="4">2D shapes (VAE)</td><td rowspan="4">Adagrad<br>1e-2</td><td>Input</td><td>4096 (flattened 64x64x1).</td></tr><tr><td>Encoder</td><td>FC 1200, 1200. ReLU activation.</td></tr><tr><td>Latents</td><td>10</td></tr><tr><td>Decoder</td><td>FC 1200, 1200, 1200, 4096. Tanh activation. Bernoulli.</td></tr><tr><td rowspan="4">2D shapes (DC-IGN)</td><td rowspan="4">rmsprop<br>(as in Kulkarni et al., 2015)</td><td>Input</td><td>64x64x1.</td></tr><tr><td>Encoder</td><td>Conv 96x3x3, 48x3x3, 48x3x3 (padding 1).</td></tr><tr><td>Latents</td><td>ReLU activation and Max pooling 2x2.</td></tr><tr><td>Decoder</td><td>10</td></tr><tr><td rowspan="3">2D shapes (InfoGAN)</td><td rowspan="3">Adam<br>1e-3 (gen)<br>2e-4 (dis)</td><td>Generator</td><td>FC 256, 256, Deconv 128x4x4, 64x4x4 (stride 2). Tanh.</td></tr><tr><td>Discriminator</td><td>Conv and FC reverse of generator. Leaky ReLU activation.</td></tr><tr><td>Recognition</td><td>FC 1. Sigmoid activation.</td></tr><tr><td rowspan="3">Chairs (VAE)</td><td rowspan="3">Adam<br>1e-4</td><td>Latents</td><td>10: z1...5 ~ Unif(-1, 1), c1...5 ~ Unif(-1, 1)</td></tr><tr><td>Input</td><td>64x64x1.</td></tr><tr><td>Encoder</td><td>Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride2), FC 256. ReLU activation.</td></tr><tr><td rowspan="3">CelebA (VAE)</td><td rowspan="3">Adam<br>1e-4</td><td>Latents</td><td>32</td></tr><tr><td>Decoder</td><td>Deconv reverse of encoder. 
ReLU activation. Bernoulli.</td></tr><tr><td>Input</td><td>64x64x3.</td></tr><tr><td rowspan="3">3DFaces (VAE)</td><td rowspan="3">Adam<br>1e-4</td><td>Encoder</td><td>Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride2), FC 256. ReLU activation.</td></tr><tr><td>Latents</td><td>32</td></tr><tr><td>Decoder</td><td>Deconv reverse of encoder. ReLU activation. Gaussian.</td></tr><tr><td rowspan="3">3DFaces (VAE)</td><td rowspan="3">Adam<br>1e-4</td><td>Input</td><td>64x64x1.</td></tr><tr><td>Encoder</td><td>Conv 32x4x4 (stride 2), 32x4x4 (stride 2), 64x4x4 (stride 2), 64x4x4 (stride2), FC 256. ReLU activation.</td></tr><tr><td>Latents</td><td>32</td></tr><tr><td></td><td></td><td>Decoder</td><td>Deconv reverse of encoder. ReLU activation. Bernoulli.</td></tr></table> Table 1: Details of model architectures used in the paper. The models were trained using either adagrad (Duchi et al., 2011) or adam (Kingma & Ba, 2014) optimisers. ## A.3 ICA AND PCA BASELINES In order to calculate the ICA benchmark, we applied the fastICA algorithm (Pedregosa et al., 2011) to the whitened pixel data. Due to memory limitations we had to apply the algorithm to pairwise combinations of the subsets of the dataset corresponding to the transforms of each of the three 2D object identities. We calculated the disentangling metric for all three ICA models trained on each of the three pairwise combinations of 2D objects, before presenting the average of these scores in Fig. 6. We performed PCA on the raw and whitened pixel data. Both approaches resulted in similar disentangling metric scores. Fig. 6 reports the PCA results calculated using whitened pixel data for more direct comparison with the ICA score. ## A.4 DISENTANGLEMENT METRIC DETAILS We used a linear classifier to learn the identity of the generative factor that produced \(\mathbf {z}_{\mathrm {diff}}^{b}\) (see Equations (5) for the process used to obtain samples of \(\mathbf {z}_{\mathrm {diff}}^{b}\) ). We used a fully connected linear <--- Page Split ---> classifier to predict \(p(y|\mathbf{z}_{\mathrm{diff}}^{b})\) , where \(y\) is one of four generative factors (position X, position Y, scale and rotation). We used softmax output nonlinearity and a negative log likelihood loss function. The classifier was trained using the Adagrad (Duchi et al., 2011) optimisation algorithm with learning rate of 1e- 2 until convergence. \[\mathcal{D} = \{V\in \mathbb{R}^{K},W\in \mathbb{R}^{H},X\in \mathbb{R}^{N}\} ,\quad y\sim \mathrm{Unif}[1\dots K]\] Repeat for \(b = 1\dots B\) (with \(l = 1\dots L\) within each batch): \[\begin{array}{r l} & {\mathbf{v}_{1,l}\sim p(\mathbf{v}),\quad \mathbf{w}_{1,l}\sim p(\mathbf{w}),\quad \mathbf{w}_{2,l}\sim p(\mathbf{w}),\quad [\mathbf{v}_{2,l}]_{k} = \begin{cases}[\mathbf{v}_{1,l}]_{k}, & \mathrm{if~}k = y\\ \sim p(v_{k}), & \mathrm{otherwise}\end{cases}}\\ & {\mathbf{x}_{1,l}\sim \mathbf{Sim}(\mathbf{v}_{1,l},\mathbf{w}_{1,l}),\quad \mathbf{x}_{2,l}\sim \mathbf{Sim}(\mathbf{v}_{2,l},\mathbf{w}_{2,l}),}\\ & {q(\mathbf{z}|\mathbf{x})\sim \mathcal{N}(\mu (\mathbf{x}),\sigma (\mathbf{x})),\quad \mathbf{z}_{1,l} = \mu (\mathbf{x}_{1,l}),\quad \mathbf{z}_{2,l} = \mu (\mathbf{x}_{2,l}),}\\ & {\mathbf{z}_{\mathrm{diff}}^{l} = |\mathbf{z}_{1,l} - \mathbf{z}_{2,l}|,\quad \mathbf{z}_{\mathrm{diff}}^{b} = \frac{1}{L}\sum_{l = 1}^{L}\mathbf{z}_{\mathrm{diff}}^{l}} \end{array} \quad (5)\] All disentanglement metric score results reported in the paper were calculated in the following manner. 
Ten replicas of each model with the same hyperparameters were trained using different random seeds to obtain disentangled representations. Each of the ten trained model replicas was evaluated three times using the disentanglement metric score algorithm, each time using a different random seed to initialise the linear classifier. We then discarded the bottom \(50\%\) of the thirty resulting scores and reported the remaining results. This was done to control for the outlier results from the few experiments that diverged during training. The results reported in table in Fig. 6 (left) were calculated using the following data. Ground truth uses independent data generating factors \(\mathbf{v}\) (our dataset did not contain any correlated data generating factors \(\mathbf{w}\) ). PCA and ICA decompositions keep the first ten components (PCA components explain \(60.8\%\) of variance). \(\beta\) - VAE \((\beta = 4)\) , VAE \((\beta = 1)\) and VAE untrained have the same fully connected architecture with ten latent units \(\mathbf{z}\) . InfoGAN uses "inferred" values of the five continuous latents that were regularised with the mutual information objective during training. ## A.5 CLASSIFYING THE GROUND TRUTH DATA GENERATIVE FACTORS VALUES In order to further verify the validity of our proposed disentanglement metric we ran an extra quantitative test: we trained a linear classifier to predict the ground truth value of each of the five data generative factors used to generate the 2D shapes dataset. While this test does not measure disentangling directly (since it does not measure independence of the latent representation), a disentangled representation should make such a classification trivial. It can be seen in Table 2 that the representation learnt by \(\beta\) - VAE is on average the best representation for factor classification across all five factors. It is closely followed by DC- IGN. It is interesting to note that ICA does well only at encoding object identity, while PCA manages to learn a very good representation of object position. Table 2: Linear classifier classification accuracy for predicting the ground truth values for each data generative factor from different latent representations. Each factor could take a variable number of possible values: 3 for id, 6 for scale, 40 for rotation and 32 for position X or Y. Best performing model results in each column are printed in bold. <table><tr><td rowspan="2">Model</td><td colspan="5">Classification accuracy</td></tr><tr><td>id</td><td>scale</td><td>rotation</td><td>position X</td><td>position Y</td></tr><tr><td>PCA</td><td>43.38</td><td>36.08</td><td>5.96</td><td>60.66</td><td>60.15</td></tr><tr><td>ICA</td><td>59.6</td><td>34.4</td><td>7.61</td><td>25.96</td><td>25.12</td></tr><tr><td>DC-IGN</td><td>44.82</td><td>45.92</td><td>15.89</td><td>47.64</td><td>45.88</td></tr><tr><td>InfoGAN</td><td>44.47</td><td>40.91</td><td>6.39</td><td>27.51</td><td>23.73</td></tr><tr><td>VAE untrained</td><td>39.44</td><td>25.33</td><td>6.09</td><td>16.69</td><td>14.39</td></tr><tr><td>VAE</td><td>41.55</td><td>24.07</td><td>8</td><td>16.5</td><td>18.72</td></tr><tr><td>β-VAE</td><td>50.08</td><td>43.03</td><td>20.36</td><td>52.25</td><td>49.5</td></tr></table> <--- Page Split ---> ## A.6 INTERPRETING NORMALISED \(\beta\) We start with the \(\beta\) - VAE constrained optimisation formulation that we have derived in Sec. 2. 
\[\mathcal{L}(\theta ,\phi ;\mathbf{x},\mathbf{z},\beta) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \beta D_{K L}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) \quad (6)\] We make the assumption that every pixel \(n\) in \(\mathbf{x}\in \mathbb{R}^{N}\) is conditionally independent given \(\mathbf{z}\) (Doersch, 2016). The first term of Eq. 6 then becomes: \[\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log \prod_{n}p_{\theta}(x_{n}|\mathbf{z})] = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\sum_{n}\log p_{\theta}(x_{n}|\mathbf{z})] \quad (7)\] Dividing both sides of Eq. 6 by \(N\) produces: \[\mathcal{L}(\theta ,\phi ;\mathbf{x},\mathbf{z},\beta)\propto \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\mathbb{E}_{n}[\log p_{\theta}(x_{n}|\mathbf{z})] - \frac{\beta}{N} D_{K L}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) \quad (8)\] We design \(\beta\) - VAE to learn conditionally independent factors of variation in the data. Hence we assume conditional independence of every latent \(z_{m}\) given \(x\) (where \(m\in 1\dots M\) , and \(M\) is the dimensionality of \(\mathbf{z}\) ). Since our prior \(p(\mathbf{z})\) is an isotropic unit Gaussian, we can re- write the second term of Eq. 6 as: \[D_{K L}(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})) = \int_{z}q_{\phi}(\mathbf{z}|\mathbf{x})log\frac{q_{\phi}(\mathbf{z}|\mathbf{x})}{p(\mathbf{z})} = \sum_{m}\int_{z_{m}}q_{\phi}(z_{m}|\mathbf{x})log\frac{q_{\phi}(z_{m}|\mathbf{x})}{p(z_{m})} \quad (9)\] Multiplying the second term in Eq. 8 by a factor \(\frac{M}{M}\) produces: \[\begin{array}{r l} & {\mathcal{L}(\theta ,\phi ;\mathbf{x},\mathbf{z},\beta)\propto \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\mathbb{E}_{n}[\log p_{\theta}(x_{n}|\mathbf{z})] - \frac{\beta M}{N}\mathbb{E}_{m}\int_{z_{m}}[q_{\phi}(z_{m}|\mathbf{x})log\frac{q_{\phi}(z_{m}|\mathbf{x})}{p(z_{m})}]}\\ & {\qquad = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\mathbb{E}_{n}[\log p_{\theta}(x_{n}|\mathbf{z})] - \frac{\beta M}{N}\mathbb{E}_{m}[D_{K L}(q_{\phi}(z_{m}|\mathbf{x})||p(z_{m}))]} \end{array} \quad (10)\] Hence using \[\beta_{norm} = \frac{\beta M}{N}\] in Eq. 10 is equivalent to optimising the original \(\beta\) - VAE formulation from Sec. 2, but with the additional independence assumptions that let us calculate data log likelihood and KL divergence terms in expectation over the individual pixels \(x_{n}\) and individual latents \(z_{m}\) . ## A.7 RELATIONSHIP BETWEEN \(\beta\) AND \(\epsilon\) For a given \(\epsilon\) we can solve the constrained optimisation problem in Eq. 3 (find the optimal \((\theta^{*},\phi^{*},\beta^{*})\) such that \(\Delta \mathcal{F}(\theta^{*},\phi^{*},\beta^{*}) = 0\) ). We can then re- write our optimal solution to the original optimisation problem in Eq. 2 as a function of \(\epsilon\) : \[\mathcal{G}(\theta^{*}(\epsilon),\phi^{*}(\epsilon)) = \mathbb{E}_{q_{\phi^{*}(\epsilon)}(\mathbf{z}|\mathbf{x})}[\log p_{\theta^{*}(\epsilon)}(\mathbf{x}|\mathbf{z})] \quad (11)\] Now \(\beta\) can be interpreted as the rate of change of the optimal solution \((\theta^{*},\phi^{*})\) to \(\mathcal{G}\) when varying the constraint \(\epsilon\) : \[\frac{\delta\mathcal{G}}{\delta\epsilon} = \beta^{*}(\epsilon) \quad (12)\] ## A.8 DATA CONTINUITY We hypothesise that data continuity plays a role in guiding unsupervised models towards learning the correct data manifolds. 
To test this idea we measure how the degree of learnt disentangling changes with reduced continuity in the 2D shapes dataset. We trained a \(\beta\) - VAE with \(\beta = 4\) (Figure 7A) on subsamples of the original 2D shapes dataset, where we progressively decreased the generative factor sampling density. The reduction in data continuity corresponds to an increase in the average pixel- wise (Hamming) distance between two consecutive transforms of each object (normalised by the average number of pixels occupied by each of the two adjacent transforms of an object to account for object <--- Page Split ---> scale). Figure 8 demonstrates that as the continuity in the data reduces, the degree of disentanglement in the learnt representations also drops. This effect holds after additional hyperparameter tuning and cannot be explained solely by the decrease in dataset size, since the same VAE can learn disentangled representations from a data subset that preserves data continuity but is approximately \(55\%\) of the original size (results not shown). ![](images/15_0.jpg) <center>Figure 8: Negative correlation between data transform continuity and the degree of disentangling achieved by \(\beta\) -VAE. Abscissa is the average normalized Hamming distance between each of the two consecutive transforms of each object. Ordinate is the disentanglement metric score. Disentangling performance is robust to Bernoulli noise added to the data at test time, as shown by slowly degrading classification accuracy up to \(10\%\) noise level, considering that the 2D objects occupy on average between \(2 - 7\%\) of the image depending on scale. Fluctuations in classification accuracy for similar Hamming distances are due to the different nature of the subsampled generative factors (i.e. symmetries are present in rotation but are lacking in position). </center> ## A.9 \(\beta\) -VAE SAMPLES Samples from \(\beta\) - VAE that learnt disentangled ( \(\beta = 4\) ) and entangled ( \(\beta = 1\) ) representations can be seen in Figure 9. ## A.10 EXTRA \(\beta\) -VAE TRAVERSAL PLOTS We present extra latent traversal plots from \(\beta\) - VAE that learnt disentangled representations of the 3D chairs (Figures 10- 11) and CelebA (Figures 12- 14) datasets. Here we show traversals from all informative latents for a large number of seed images. <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 9: Samples from \(\beta\) -VAE trained on the dataset of 2D shapes that learnt either a disentangled (left, \(\beta = 4\) ) or an entangled (right, \(\beta = 1\) ) representation of the data generative factors. It can be seen that sampling from an entangled representation results in some unrealistic looking samples. A disentangled representation that inverts the original data generation process does not suffer from such errors. </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 10: Latent traversal plots from \(\beta\) -VAE that learnt disentangled representations on the 3D chairs dataset. </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 11: Latent traversal plots from \(\beta\) -VAE that learnt disentangled representations on the 3D chairs dataset. </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 12: Latent traversal plots from \(\beta\) -VAE that learnt disentangled representations on the CelebA dataset. </center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 13: Latent traversal plots from \(\beta\) -VAE that learnt disentangled representations on the CelebA dataset. 
</center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 14: Latent traversal plots from \(\beta\) -VAE that learnt disentangled representations on the CelebA dataset. </center> <--- Page Split --->
accept
Accept (Poster)
6.25
ICLR_2017_paper_0173
iclr
2,017
# UNSUPERVISED CROSS-DOMAIN IMAGE GENERATION Yaniv Taigman, Adam Polyak & Lior Wolf Facebook AI Research Tel- Aviv, Israel {yaniv, adampolyak, wolf}@fb.com ## ABSTRACT We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, \(S\) and \(T\) , we would like to learn a generative function \(G\) that maps an input sample from \(S\) to the domain \(T\) , such that the output of a given representation function \(f\) , which accepts inputs in either domain, would remain unchanged. Other than \(f\) , the training data is unsupervised and consists of a set of samples from each domain, without any mapping between them. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an \(f\) - preserving component, and a regularizing component that encourages \(G\) to map samples from \(T\) to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity. ## 1 INTRODUCTION Humans excel in tasks that require making analogies between distinct domains, transferring elements from one domain to another, and using these capabilities in order to blend concepts that originated from multiple source domains. Our experience tells us that these remarkable capabilities are developed with very little, if any, supervision that is given in the form of explicit analogies. Recent achievements replicate some of these capabilities to some degree: Generative Adversarial Networks (GANs) are able to convincingly generate novel samples that match those of a given training set; style transfer methods are able to alter the visual style of images; domain adaptation methods are able to generalize learned functions to new domains even without labeled samples in the target domain; and transfer learning is now commonly used to import existing knowledge and to make learning much more efficient. These capabilities, however, do not address the general analogy synthesis problem that we tackle in this work. Namely, given separated but otherwise unlabeled samples from domains \(S\) and \(T\) and a perceptual function \(f\) , learn a mapping \(G:S\to T\) such that \(f(x)\sim f(G(x))\) . In order to solve this problem, we make use of deep neural networks of a specific structure in which the function \(G\) is a composition of the input function \(f\) and a learned function \(g\) . A compound loss that integrates multiple terms is used. One term is a Generative Adversarial Network (GAN) term that encourages the creation of samples \(G(x)\) that are indistinguishable from the training samples of the target domain, regardless of \(x\in S\) or \(x\in T\) . The second loss term enforces that for every \(x\) in the source domain training set, \(\| f(x) - f(G(x))\|\) is small. The third loss term is a regularizer that encourages \(G\) to be the identity mapping for all \(x\in T\) (a schematic sketch of this compound objective is given below). The types of problems we focus on in our experiments are visual, although our methods are not limited to visual or even to perceptual tasks. Typically, \(f\) would be a neural network representation that is taken as the activations of a network that was trained, e.g., by using the cross entropy loss, in order to classify or to capture identity. As a main application challenge, we tackle the problem of emoji generation for a given facial image. 
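As an illustration of the compound objective described above, here is a minimal sketch of the generator-side loss. It is our own sketch, not the paper's implementation: it assumes a simple binary real/fake discriminator rather than the multiclass variant used by the DTN, a mean-squared distance for both the \(f\) - constancy and identity terms, and illustrative weights.

```python
import torch
import torch.nn.functional as F

def dtn_generator_loss(x_s, x_t, f, g, D, alpha=1.0, beta=1.0):
    """Compound loss for G = g o f: adversarial + f-constancy + identity.
    f is assumed fixed (pretrained); only g is being trained here."""
    g_s, g_t = g(f(x_s)), g(f(x_t))        # G applied to source and target
    d_s, d_t = D(g_s), D(g_t)              # D emits a single real/fake logit
    # GAN term: G(x) should look like a real target sample for x in S or T
    adv = (F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s)) +
           F.binary_cross_entropy_with_logits(d_t, torch.ones_like(d_t)))
    # f-constancy: f(x) is preserved under G for source samples
    f_const = F.mse_loss(f(g_s), f(x_s))
    # identity regularizer: G maps target samples to themselves
    tid = F.mse_loss(g_t, x_t)
    return adv + alpha * f_const + beta * tid
```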
Despite a growing interest in emoji and the hurdle of creating such personal emoji manually, no system has been proposed, to our knowledge, that can solve this problem. Our method is able to <--- Page Split ---> produce face emoji that are visually appealing and capture much more of the facial characteristics than the emoji created by well-trained human annotators who use the conventional tools. ## 2 RELATED WORK As far as we know, the domain transfer problem we formulate is novel despite being ecological (i.e., appearing naturally in the real world), widely applicable, and related to cognitive reasoning (Fauconnier & Turner, 2003). In the discussion below, we survey recent GAN work, compare our work to the recent image synthesis work, and make links to unsupervised domain adaptation. GAN (Goodfellow et al., 2014) methods train a generator network \(G\) that synthesizes samples from a target distribution given noise vectors. \(G\) is trained jointly with a discriminator network \(D\), which distinguishes between samples generated by \(G\) and a training set from the target distribution. The goal of \(G\) is to create samples that are classified by \(D\) as real samples. While originally proposed for generating random samples, GANs can be used as a general tool to measure equivalence between distributions. Specifically, the optimization of \(D\) corresponds to taking the most discriminative \(D\) achievable, which in turn implies that the indistinguishability holds for every \(D\). Formally, Ganin et al. (2016) linked the GAN loss to the H-divergence between two distributions of Ben-David et al. (2006). The generative architecture that we employ is based on the successful architecture of Radford et al. (2015). There has recently been a growing concern about the uneven distribution of the samples generated by \(G\), namely that they tend to cluster around a set of modes in the target domain (Salimans et al., 2016). In general, we do not observe such an effect in our results, due to the requirement to generate samples that satisfy specific \(f\)-constancy criteria. A few contributions ("Conditional GANs") have employed GANs in order to generate samples from a specific class (Mirza & Osindero, 2014), or even based on a textual description (Reed et al., 2016). When performing such conditioning, one can distinguish between samples that were correctly generated but fail to match the conditional constraint and samples that were not correctly generated. This is modeled as a ternary discriminative function \(D\) (Reed et al., 2016; Brock et al., 2016). The recent work by Dosovitskiy & Brox (2016) has shown promising results for learning to map embeddings to their pre-images, given input-target pairs. Like us, they employ a GAN as well as additional losses in the feature space and the pixel space. Their method is able to invert the mid-level activations of AlexNet and reconstruct the input image. In contrast, we solve the problem of unsupervised domain transfer and apply the loss terms in different domains: pixel loss in the target domain, and feature loss in the source domain. Another class of very promising generative techniques that has recently gained traction is neural style transfer. In these methods, new images are synthesized by minimizing the content loss with respect to one input sample and the style loss with respect to one or more input samples. The content loss is typically the encoding of the image by a network trained for an image categorization task, similar to our work.
The style loss compares the statistics of the activations in various layers of the neural network. We do not employ style losses in our method. While initially style transfer was obtained by a slow optimization process (Gatys et al., 2016), recently the emphasis has been put on feed-forward methods (Ulyanov et al., 2016; Johnson et al., 2016). There are many links between style transfer and our work: both are unsupervised and generate a sample under \(f\)-constancy given an input sample. However, our work is much more general in its scope and does not rely on a predefined family of perceptual losses. Our method can be used in order to perform style transfer, but not the other way around. Another key difference is that the current style transfer methods are aimed at replicating the style of one or several images, while our work considers a distribution in the target space. In many applications, there is an abundance of unlabeled data in the target domain \(T\), which can be modeled accurately in an unsupervised manner. Given the impressive results of recent style transfer work, in particular for face images, one might get the false impression that emoji are just a different style of drawing faces. By way of analogy, this claim is similar to stating that a Siamese cat is a Labrador in a different style. Emoji differ from facial photographs in both content and style. Style transfer can create visually appealing face images; however, the properties of the target domain are compromised. <--- Page Split ---> In the computer vision literature, work has been done to automatically generate sketches from images; see Kyprianidis et al. (2013) for a survey. These systems are able to emphasize image edges and facial features in a convincing way. However, unlike our method, they require matching pairs of samples, and were not shown to work across two distant domains as in our method. Due to the lack of supervised training data, we did not try to apply such methods to our problems. However, one can assume that if such methods were appropriate for emoji synthesis, automatic face emoji services would be available. Unsupervised domain adaptation addresses the following problem: given a labeled training set in \(S\times Y\), for some target space \(Y\), and an unlabeled set of samples from domain \(T\), learn a function \(h:T\to Y\) (Chen et al., 2012; Ganin et al., 2016). One can solve the sample transfer problem (our problem) using domain adaptation and vice versa. In both cases, the solution is indirect. In order to solve domain adaptation using domain transfer, one would learn a function from \(S\) to \(Y\) and use it as the input method of the domain transfer algorithm in order to obtain a map from \(S\) to \(T\). The training samples could then be transferred to \(T\) and used to learn a classifier there, as sketched below. In the other direction, given the function \(f\), one can invert \(f\) in the domain \(T\) by generating training samples \((f(x),x)\) for \(x\in T\) and learn from them a function \(h\) from \(f(T) = \{f(x)|x\in T\}\) to \(T\). Domain adaptation can then be used in order to map \(f(S) = \{f(x)|x\in S\}\) to \(T\), thus achieving domain transfer. Based on the work by Zhmoginov & Sandler (2016), we expect that \(h\), even in the target domain of emoji, will be hard to learn, making this solution hypothetical at this point.
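To make the first reduction concrete, here is a minimal sketch in Python with scikit-learn; the same recipe, with a nearest neighbor classifier, is used in Sec. 5.1. The trained transfer function `G` and all other names are hypothetical stand-ins.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def adapt_via_transfer(G, x_s, y_s, x_t):
    """Domain adaptation via domain transfer: map the labeled source samples
    into the target domain with G, fit a simple classifier there, and apply
    it directly to target-domain queries."""
    g_s = np.stack([G(x) for x in x_s]).reshape(len(x_s), -1)
    clf = KNeighborsClassifier(n_neighbors=1).fit(g_s, y_s)
    return clf.predict(x_t.reshape(len(x_t), -1))

# Toy usage, with an identity "transfer" standing in for a trained G:
x_s = np.random.rand(100, 32, 32)
y_s = np.random.randint(0, 10, size=100)
x_t = np.random.rand(5, 32, 32)
print(adapt_via_transfer(lambda x: x, x_s, y_s, x_t))
```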
## 3 A BASELINE PROBLEM FORMULATION Given a set \(\mathbf{s}\) of unlabeled samples in a source domain \(S\) sampled i.i.d. according to some distribution \(\mathcal{D}_{S}\), a set of samples in the target domain \(\mathbf{t}\subset T\) sampled i.i.d. from distribution \(\mathcal{D}_{T}\), a function \(f\) from the domain \(S\cup T\), some metric \(d\), and a weight \(\alpha\), we wish to learn a function \(G:S\to T\) that minimizes the combined risk \(R = R_{\mathrm{GAN}} + \alpha R_{\mathrm{CONST}}\), which is comprised of \[R_{\mathrm{GAN}} = \max_{D}\mathbb{E}_{x\sim \mathcal{D}_{S}}\log [1 - D(G(x))] + \mathbb{E}_{x\sim \mathcal{D}_{T}}\log [D(x)], \quad (1)\] where \(D\) is a binary classification function on \(T\) and \(D(x)\) is the probability of the class 1 that it assigns to a sample \(x\in T\), and \[R_{\mathrm{CONST}} = \mathbb{E}_{x\sim \mathcal{D}_{S}}d(f(x),f(G(x))). \quad (2)\] The first term is the adversarial risk, which requires that for every discriminative function \(D\), the samples from the target domain would be indistinguishable from the samples generated by \(G\) for samples in the source domain. An adversarial risk is not the only option. An alternative term that does not employ GANs would directly compare the distribution \(\mathcal{D}_{T}\) to the distribution of \(G(x)\) where \(x\sim \mathcal{D}_{S}\), e.g., by using the KL-divergence. The second term is the \(f\)-constancy term, which requires that \(f\) is invariant under \(G\). In practice, we have experimented with multiple forms of \(d\), including Mean Squared Error (MSE) and cosine distance, as well as other variants including metric learning losses (hinge) and triplet losses. The performance is mostly unchanged, and we report results using the simplest MSE solution. Similarly to other GAN formulations, one can minimize the loss associated with the risk \(R\) over \(G\), while maximizing it over \(D\), where \(G\) and \(D\) are deep neural networks, and the expectations in \(R\) are replaced by summations over the corresponding training sets. However, this baseline solution, as we will show experimentally, does not produce desirable results. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 1: The Domain Transfer Network. Losses are drawn with dashed lines, input/output with solid lines. After training, the forward model G is used for the sample transfer. </center> ## 4 THE DOMAIN TRANSFER NETWORK We suggest employing a more elaborate architecture that contains two high-level modifications. First, we employ \(f(x)\) as the baseline representation for the function \(G\). Second, we consider, during training, the generated samples \(G(x)\) for \(x \in \mathbf{t}\). The first change is stated as \(G = g \circ f\), for some learned function \(g\). By applying this, we focus the learning effort of \(G\) on the aspects that are most relevant to \(R_{\mathrm{CONST}}\). In addition, in most applications, \(f\) is not as accurate on \(T\) as it is on \(S\). The composed function, which is trained on samples from both \(S\) and \(T\), adds layers on top of \(f\), which adapt it. The second change alters the form of \(L_{\mathrm{GAN}}\), making it multiclass instead of binary. It also introduces a new term \(L_{\mathrm{TID}}\) that requires \(G\) to be the identity mapping on samples from \(T\).
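Before stating the two training losses formally, the following minimal PyTorch-style sketch shows how these changes play out: a ternary discriminator and an identity term on target samples. As in the sketch above, the dimensions and module definitions are hypothetical stand-ins for the actual networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, k = 1024, 128
f = nn.Linear(d, k)                                        # pretrained representation (kept fixed)
g = nn.Sequential(nn.Linear(k, d), nn.Tanh())              # learned function; G = g . f
D = nn.Linear(d, 3)                                        # ternary discriminator: logits for classes 1, 2, 3
G = lambda x: g(f(x))
cls = lambda n, i: torch.full((n,), i, dtype=torch.long)   # batch of labels for class i (0-indexed)

def discriminator_loss(x_s, x_t):
    # D should label G(source) as class 1, G(target) as class 2, and real target samples as class 3.
    return F.cross_entropy(D(G(x_s)), cls(len(x_s), 0)) \
         + F.cross_entropy(D(G(x_t)), cls(len(x_t), 1)) \
         + F.cross_entropy(D(x_t), cls(len(x_t), 2))

def generator_loss(x_s, x_t, alpha=15.0, beta=15.0):
    # G tries to have all of its outputs labeled as real target samples (class 3) ...
    l_gang = F.cross_entropy(D(G(x_s)), cls(len(x_s), 2)) \
           + F.cross_entropy(D(G(x_t)), cls(len(x_t), 2))
    # ... while staying f-constant on S and close to the identity on T.
    l_const = ((f(x_s) - f(G(x_s))) ** 2).mean()
    l_tid = ((x_t - G(x_t)) ** 2).mean()
    return l_gang + alpha * l_const + beta * l_tid

x_s, x_t = torch.randn(8, d), torch.randn(8, d)            # placeholder batches
print(discriminator_loss(x_s, x_t).item(), generator_loss(x_s, x_t).item())
```

For image-shaped outputs, the total variation term of Eq. 7 below would be added to the generator loss with weight \(\gamma\).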
Taken together and written in terms of training loss, we now have two losses, \(L_{D}\) and \(L_{G} = L_{\mathrm{GANG}} + \alpha L_{\mathrm{CONST}} + \beta L_{\mathrm{TID}} + \gamma L_{\mathrm{TV}}\), for some weights \(\alpha , \beta , \gamma\), where \[L_{\mathrm{D}} = -\sum_{x\in \mathbf{s}}\log D_{1}(g(f(x))) - \sum_{x\in \mathbf{t}}\log D_{2}(g(f(x))) - \sum_{x\in \mathbf{t}}\log D_{3}(x) \quad (3)\] \[L_{\mathrm{GANG}} = -\sum_{x\in \mathbf{s}}\log D_{3}(g(f(x))) - \sum_{x\in \mathbf{t}}\log D_{3}(g(f(x))) \quad (4)\] \[L_{\mathrm{CONST}} = \sum_{x\in \mathbf{s}}d(f(x),f(g(f(x)))) \quad (5)\] \[L_{\mathrm{TID}} = \sum_{x\in \mathbf{t}}d_{2}(x,G(x)) \quad (6)\] and where \(D\) is a ternary classification function from the domain \(T\) to \(\{1,2,3\}\), \(D_{i}(x)\) is the probability it assigns to class \(i = 1,2,3\) for an input sample \(x\), and \(d_{2}\) is a distance function in \(T\). During optimization, \(L_{G}\) is minimized over \(g\) and \(L_{D}\) is minimized over \(D\). See Fig. 1 for an illustration of our method. Eqs. 3 and 4 make sure that the generated analogy, i.e., the output of \(G\), is in the target space \(T\). Since \(D\) is ternary and can therefore confuse classes in more than one way, this role, which is captured by Eq. 1 in the baseline formulation, is split into two. However, the two equations do not enforce any similarity between the source sample \(x\) and the generated \(G(x)\). This is done by Eqs. 5 and 6: Eq. 5 enforces \(f\)-constancy for \(x \in S\), while Eq. 6 enforces that for samples \(x \in T\), which are already in the target space, \(G\) is the identity mapping. The latter is a desirable behavior: e.g., for the cartooning task, given an input emoji, one would like it to remain unchanged under the mapping of \(G\). It can also be seen as an autoencoder type of loss, applied only to samples from \(T\). The experiments reported in Sec. 5 evaluate the contributions of \(L_{\mathrm{CONST}}\) and \(L_{\mathrm{TID}}\) and reveal that at least one of these is required, and that when employing only one loss, \(L_{\mathrm{CONST}}\) leads to a better performance than \(L_{\mathrm{TID}}\). <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 2: Domain transfer in two visual domains. Input in odd columns; output in even columns. (a) Transfer from SVHN to MNIST. (b) Transfer from face photos (Facescrub dataset) to emoji. </center> The last loss, \(L_{\mathrm{TV}}\), is an anisotropic total variation loss (Rudin et al., 1992; Mahendran & Vedaldi, 2015), which is added in order to slightly smooth the resulting image. The loss is defined on the generated image \(z = [z_{ij}] = G(x)\) as \[L_{\mathrm{TV}}(z) = \sum_{i,j}\left((z_{i,j + 1} - z_{ij})^2 +(z_{i + 1,j} - z_{ij})^2\right)^{\frac{B}{2}}, \quad (7)\] where we employ \(B = 1\). In our work, MSE is used for both \(d\) and \(d_{2}\). We also experimented with replacing \(d_{2}\), which, in visual domains, compares images, with a second GAN. No noticeable improvement was observed. Throughout the experiments, the adaptive learning rate method Adam by Kingma & Ba (2016) is used as the optimization algorithm. ## 5 EXPERIMENTS The Domain Transfer Network (DTN) is evaluated in two application domains: digits and face images. In the first domain, we transfer images from the Street View House Number (SVHN) dataset of Netzer et al. (2011) to the domain of the MNIST dataset by LeCun & Cortes (2010). <--- Page Split ---> Table 1: Accuracy of the MNIST classifier on the samples transferred by our DTN method from SVHN to MNIST.
<table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>Baseline method (Sec. 3)</td><td>13.71%</td></tr><tr><td>DTN</td><td>90.66%</td></tr><tr><td>DTN w/o LTID</td><td>88.40%</td></tr><tr><td>DTN w/o LCONST</td><td>74.55%</td></tr><tr><td>DTN, G does not contain f</td><td>36.90%</td></tr><tr><td>DTN w/o LD and LGANG</td><td>34.70%</td></tr><tr><td>DTN w/o LCONST &amp; LTID</td><td>5.28%</td></tr><tr><td>Original SVHN image</td><td>40.06%</td></tr></table> Table 2: Domain adaptation from SVHN to MNIST <table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>SA Fernando et al. (2013)</td><td>59.32%</td></tr><tr><td>DANN Ganin et al. (2016)</td><td>73.85%</td></tr><tr><td>DTN on SVHN, transferring the train split s</td><td>84.44%</td></tr><tr><td>DTN on SVHN, transferring the test split</td><td>79.72%</td></tr></table> In the face domain, we transfer a set of random and unlabeled face images to a space of emoji images. In both cases, the source and target domains differ considerably. ### 5.1 DIGITS: FROM SVHN TO MNIST For working with digits, we employ the extra training split of SVHN, which contains 531,131 images, for two purposes: learning the function \(f\) and serving as an unsupervised training set \(\mathbf{s}\) for the domain transfer method. The evaluation is done on the test split of SVHN, comprised of 26,032 images. The architecture of \(f\) consists of four convolutional layers with 64, 128, 256, and 128 filters respectively, each followed by max pooling and a ReLU non-linearity. The error on the test split is \(4.95\%\). Even though this accuracy is far from the best reported results, it seems to be sufficient for the purpose of domain transfer. Within the DTN, \(f\) maps a \(32 \times 32\) RGB image to the activations of the last convolutional layer, of size \(128 \times 1 \times 1\) (after a \(4 \times 4\) max pooling and before the ReLU). In order to apply \(f\) to MNIST images, we replicate the grayscale image three times. The set \(\mathbf{t}\) contains the test set of the MNIST dataset. To support quantitative evaluation, we have trained a classifier on the train set of the MNIST dataset, with the same architecture as \(f\). The accuracy of this classifier on the test set approaches perfect performance at \(99.4\%\) accuracy, and it is, therefore, trustworthy as an evaluation metric. In comparison, the network \(f\) achieves \(76.08\%\) accuracy on \(\mathbf{t}\). Network \(g\), inspired by Radford et al. (2015), maps the SVHN-trained \(f\)'s 128D representations to \(32 \times 32\) grayscale images. \(g\) employs four blocks of deconvolution, batch normalization, and ReLU, with a hyperbolic tangent terminal. The architecture of \(D\) consists of four batch-normalized convolutional layers and employs ReLU. See Radford et al. (2015) for more details on the network architectures. In the digit experiments, the results were obtained with the tradeoff hyperparameters \(\alpha = \beta = 15\). We did not observe a need to add a smoothness term, and the weight of \(L_{\mathrm{TV}}\) was set to \(\gamma = 0\). Despite not being very accurate on either domain (and also considerably worse than the SVHN state of the art), we were able to achieve visually appealing domain transfer, as shown in Fig. 2(a). In order to evaluate the contribution of each of the method's components, we have employed the MNIST network on the set of samples \(G(\mathbf{s}_{TEST}) = \{G(x)|x \in \mathbf{s}_{TEST}\}\), using the true SVHN labels of the test set. We first compare to the baseline method of Sec.
3, where the generative function, which works directly with samples in \(S\), is composed of a few additional layers at the bottom of \(G\). The results, shown in Tab. 1, demonstrate that DTN has a clear advantage over the baseline method. In addition, the contribution of each one of the terms in the loss function is shown in the table. The regularization term \(L_{\mathrm{TID}}\) seems less crucial than the constancy term. However, at least one of them is required in order to obtain good performance. The GAN constraints are also important. Finally, the inclusion of \(f\) within the generator function \(G\) has a dramatic influence on the results. As explained in Sec. 2, domain transfer can be used in order to perform unsupervised domain adaptation. For this purpose, we transformed the set \(\mathbf{s}\) to the MNIST domain (as above), and, using the true labels of \(\mathbf{s}\), employed a simple nearest neighbor classifier there. <--- Page Split ---> Table 3: Comparison of recognition accuracy of the digit 3 as generated in MNIST <table><tr><td>Method</td><td>Accuracy of ‘3’</td></tr><tr><td>DTN</td><td>94.67%</td></tr><tr><td>‘3’ was not shown in s</td><td>93.33%</td></tr><tr><td>‘3’ was not shown in t</td><td>40.13%</td></tr><tr><td>‘3’ was not shown in both s and t</td><td>60.02%</td></tr><tr><td>‘3’ was not shown in s, t, and during the training of f</td><td>4.52%</td></tr></table> The choice of classifier was made to emphasize the simplicity of the approach; however, the constraints of the unsupervised domain transfer problem would be respected for any classifier trained on \(G(\mathbf{s})\). The results of this experiment are reported in Tab. 2, which shows a clear advantage over the state-of-the-art method of Ganin et al. (2016). This is true both when transferring the samples of the set \(\mathbf{s}\) and when transferring the test set of SVHN, which is much smaller and was not seen during the training of the DTN. #### 5.1.1 UNSEEN DIGITS Another set of experiments was performed in order to study the ability of the domain transfer network to overcome the omission of a class of samples. This type of ablation can occur in the source or the target domain, or during the training of \(f\), and can help us understand the importance of each of these inputs. The results are shown visually in Fig. 3 and quantitatively in Tab. 3, based on the accuracy of the MNIST classifier only on the transferred samples from the test set of SVHN that belong to class '3'. It is evident that not including the class in the source domain is much less detrimental than eliminating it from the target domain. This is the desirable behavior: never seeing any '3'-like shapes in \(\mathbf{t}\), the generator should not generate such samples. Results are better when not observing '3' in both \(\mathbf{s}\) and \(\mathbf{t}\) than when not seeing it only in \(\mathbf{t}\), since in the latter case \(G\) learns to map source samples of '3' to target images of other classes. ![](images/6_0.jpg) <center>Figure 3: A random subset of the digit '3' from SVHN, transferred to MNIST. (a) The input images. (b) Results of our DTN. In all plots, the cases keep their respective locations, and are sorted by the probability of '3' as inferred by the MNIST classifier on the results of our DTN. (c) The obtained results, in which the digit 3 was not shown as part of the set \(\mathbf{s}\) of unlabeled samples from SVHN. (d) The obtained results, in which the digit 3 was not shown as part of the set \(\mathbf{t}\) of unlabeled samples in MNIST. (e) The digit 3 was not shown in both \(\mathbf{s}\) and \(\mathbf{t}\).
(f) The digit 3 was not shown in \(\mathbf{s}\), \(\mathbf{t}\), and during the training of \(f\). </center> <--- Page Split ---> Table 4: Comparison of retrieval accuracy out of a set of 100,001 face images for either the manually created emoji or the ones created by the DTN method. <table><tr><td>Measure</td><td>Manual</td><td>Emoji by DTN</td></tr><tr><td>Median rank</td><td>16311</td><td>16</td></tr><tr><td>Mean rank</td><td>27,992.34</td><td>535.47</td></tr><tr><td>Rank-1 accuracy</td><td>0%</td><td>22.88%</td></tr><tr><td>Rank-5 accuracy</td><td>0%</td><td>34.75%</td></tr></table> ### 5.2 FACES: FROM PHOTOS TO EMOJI For face images, we use a set \(\mathbf{s}\) of one million random images without identity information. The set \(\mathbf{t}\) consists of assorted facial avatars (emoji) created by an online service (bitmoji.com). The emoji images were processed by a fully automatic process that localizes, based on a set of heuristics, the center of the irides and the tip of the nose. Based on these coordinates, the emoji were centered and scaled into \(152\times 152\) RGB images. As the function \(f\), we employ the representation layer of the DeepFace network of Taigman et al. (2014). This representation is 256-dimensional and was trained on a labeled set of four million images that does not intersect the set \(\mathbf{s}\). Network \(D\) takes \(152\times 152\) RGB images (either natural or scaled-up emoji) and consists of 6 blocks, each containing a convolution with stride 2, batch normalization, and a leaky ReLU with a parameter of 0.2. Network \(g\) maps \(f\)'s 256D representations to \(64\times 64\) RGB images through a network with 5 blocks, each consisting of an upscaling convolution, batch normalization, and ReLU. Adding a \(1\times 1\) convolution to each block resulted in lower \(L_{\mathrm{CONST}}\) training errors, and made \(g\) 9 layers deep. We set \(\alpha = 100\), \(\beta = 1\), \(\gamma = 0.05\) as the tradeoff hyperparameters within \(L_{G}\) via validation. As expected, higher values of \(\alpha\) resulted in better \(f\)-constancy, but introduced artifacts such as general noise or distortions. The network was trained for 3 epochs, the point at which no further reduction of the validation error on \(L_{\mathrm{CONST}}\) was observed. In order to upscale the \(64\times 64\) output to print quality, we used the method of Dong et al. (2015), which was shown to work well on art. We did not retrain this network for our application, and apply the published one to the final output of our method after its training was finished. Results without this upscale are shown, for comparison, in Appendix C. Comparison With Human Annotators For evaluation purposes only, a team of professional annotators manually created an emoji, using a web service, for 118 random images from the CelebA dataset (Yang et al., 2015). Fig. 4 shows, side by side, samples of the original image, the human-generated emoji, and the emoji generated by the learned generator function \(G\). As can be seen, the automatically generated emoji tend to be more informative, albeit less restrictive, than the ones created manually. In order to evaluate the identifiability of the resulting emoji, we have collected a second example for each identity in the set of 118 CelebA images and a set \(s'\) of 100,000 random face images, which were not included in \(\mathbf{s}\). We then employed the VGG face CNN descriptor of Parkhi et al. (2015) in order to perform retrieval as follows.
For each image \(x\) in our manually annotated set, we create a gallery \(s' \cup \{x'\}\), where \(x'\) is the other image of the person in \(x\). We then perform retrieval using the VGG face descriptor, using either the manually created emoji or \(G(x)\) as the probe. The VGG network is used in order to avoid a bias that might be caused by using \(f\) both for training the DTN and for evaluation. The results are reported in Tab. 4. As can be seen, the emoji generated by \(G\) are much more discriminative than the emoji created manually and obtain a median rank of 16 in cross-domain identification out of \(10^{5}\) distractors. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 4: Shown, side by side, are sample images from the CelebA dataset, the emoji images created manually using a web interface (for validation only), and the result of the unsupervised DTN. See Tab. 4 for retrieval performance. </center> Multiple Images Per Person We evaluate the visual quality that is obtained per person, and not just per image, by testing DTN on the Facescrub dataset (Ng & Winkler, 2014). For each person \(p\), we considered the set of their images \(X_{p}\), and selected the emoji that was most similar to their source image: \[\arg \min_{x\in X_{p}}\| f(x) - f(G(x))\| \quad (8)\] This simple heuristic seems to work well in practice; the general problem of mapping a set \(X \subset S\) to a single output in \(T\) is left for future work. Fig. 2(b) contains several examples from the Facescrub dataset. For the complete set of identities, see Appendix A. Transferring both identity and expression We also experimented with multiple expressions. As it turns out, the face identification network \(f\) encodes enough expression information to support a successful transfer of both identity and expression; see Appendix B. Network Visualization The obtained mapping \(g\) can serve as a visualization tool for studying the properties of the face representation. This is studied in Appendix D by computing the emoji generated for the standard basis of \(\mathbb{R}^{256}\). The resulting images present a large amount of variability, indicating that \(g\) does not present a significant mode effect. ### 5.3 STYLE TRANSFER AS A SPECIFIC DOMAIN TRANSFER TASK Fig. 5(a-c) demonstrates that the neural style transfer of Gatys et al. (2016) cannot solve the photo-to-emoji transfer task in a convincing way. The output image is perhaps visually appealing; however, it does not belong to the space \(T\) of emoji. Our result is given in Fig. 5(d) for comparison. Note that DTN is able to fix the missing hair in the image. Domain transfer is more general than style transfer in the sense that we can perform style transfer using a DTN. In order to show this, we have transformed, using the method of Johnson et al. (2016), the training images of CelebA based on the style of a single image (shown in Fig. 5(e)). The original photos were used as the set \(\mathbf{s}\), and the transformed images were used as \(\mathbf{t}\). Applying DTN with the face representation \(f\), we obtained styled face images such as the one shown in Fig. 5(f). <--- Page Split ---> ![](images/9_0.jpg) <center>Figure 5: Style transfer as a specific case of Domain Transfer. (a) The input content photo. (b) An emoji taken as the input style image. (c) The result of applying the style transfer method of Gatys et al. (2016). (d) The result of the emoji DTN. (e) Source image for style transfer.
(f) The result, on the same input image, of a DTN trained to perform style transfer. </center> ## 6 DISCUSSION AND LIMITATIONS Asymmetry is central to our work. Not only does our solution handle the two domains \(S\) and \(T\) differently, but the function \(f\) is also unlikely to be equally effective in both domains, since in most practical cases \(f\) would be trained on samples from one domain. While an explicit domain adaptation step can be added in order to make \(f\) more effective on the second domain, we found it to be unnecessary. Adaptation of \(f\) occurs implicitly due to the application of \(D\) downstream. Using the same function \(f\), we can swap the roles of the two domains, \(S\) and \(T\). For example, we can synthesize an SVHN image that resembles a given MNIST image, or synthesize a face that matches an emoji. As expected, this yields less appealing results due to the asymmetric nature of \(f\) and the lower information content in these new source domains; see Appendix E. Domain transfer, as an unsupervised method, could prove useful across a wide variety of computational tasks. Here, we demonstrate the ability to use domain transfer in order to perform unsupervised domain adaptation. While this is currently only shown in a single experiment, the simplicity of performing domain adaptation and the fact that state-of-the-art results were obtained effortlessly with a simple nearest neighbor classifier suggest it to be a promising direction for future research. ## REFERENCES Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In NIPS, pp. 137-144, 2006. Andrew Brock, Theodore Lim, J. M. Ritchie, and Nick Weston. Neural photo editing with introspective adversarial networks. arXiv preprint arXiv:1609.07093, 2016. Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. Marginalized denoising autoencoders for domain adaptation. In ICML, pp. 767-774, 2012. Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. arXiv preprint arXiv:1501.00092, 2015. Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. CoRR, abs/1602.02644, 2016. Gilles Fauconnier and Mark Turner. The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. Basic Books, 2003. Basura Fernando, Amaury Habrard, Marc Sebban, and Tinne Tuytelaars. Unsupervised visual domain adaptation using subspace alignment. In ICCV, pp. 2960-2967, 2013. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. JMLR, 17(1):2096-2030, January 2016. Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016. <--- Page Split ---> Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pp. 2672-2680, 2014. Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016. D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In The International Conference on Learning Representations (ICLR), 2016. J. E. Kyprianidis, J. Collomosse, T. Wang, and T. Isenberg. State of the "art": A taxonomy of artistic stylization techniques for images and video.
IEEE Transactions on Visualization and Computer Graphics, 19(5):866-885, 2013. Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/. A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. In CVPR, 2015. Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. H. W. Ng and S. Winkler. A data-driven approach to cleaning large face datasets. In Proc. IEEE International Conference on Image Processing (ICIP), Paris, France, 2014. O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. In British Machine Vision Conference, 2015. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In ICML, 2016. Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. In International Conference of the Center for Nonlinear Studies on Experimental Mathematics: Computational Issues in Nonlinear Science, pp. 259-268, 1992. Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016. Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap to human-level performance in face verification. In CVPR, 2014. D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In ICML, 2016. Shuo Yang, Ping Luo, Chen Change Loy, and Xiaoou Tang. From facial parts responses to face detection: A deep learning approach. In ICCV, pp. 3676-3684, 2015. Andrey Zhmoginov and Mark Sandler. Inverting face embeddings with convolutional neural networks. arXiv preprint arXiv:1606.04189, 2016. <--- Page Split ---> ## A FACESCRUB DATASET GENERATIONS In Fig. 6 we show the full set of identities of the Facescrub dataset, and their corresponding generated emoji. ![](images/11_0.jpg) <center>Figure 6: All 80 identities of the Facescrub dataset. The even columns show the results obtained for the images in the odd column to the left. Best viewed in color and zoom. </center> <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 7: Maintaining expression in the domain transfer. In order to support a smiling expression, random smiling emoji were added to the set of unlabeled samples from the domain \(T\) and the DTN was re-trained. Each quadruplet includes two pairs of {face, emoji} of the same identity in the two modes respectively: not smiling and smiling. Odd columns are input; subsequent even columns are output. </center> ## B TRANSFERRING NON-IDENTITY DATA \(f\) may encode, in addition to identity, other data that is desirable to transfer. In the example of faces, this information might include expression, facial hair, glasses, pose, etc. In order to transfer such information, it is important that the set of samples in the target domain \(\mathbf{t}\) presents variability along the desirable dimensions. Otherwise, the GAN applied in the target domain (Eq.
4) would keep these dimensions fixed. The set \(\mathbf{t}\) employed throughout our experiments in Sec. 5.2 was constructed by sampling emoji of neutral expression. To support a smiling expression, for example, we simply added random smiling emoji to the set \(\mathbf{t}\) and re-trained the DTN. The results, presented in Fig. 7, demonstrate that \(f\) contains expression information in addition to identity information, and that this information is enough to transfer smiling photos to smiling emoji. ## C THE EFFECT OF SUPER-RESOLUTION As mentioned in Sec. 5, in order to upscale the \(64 \times 64\) output to print quality, the method of Dong et al. (2015) is used. Fig. 8 shows the effect of applying this postprocessing step. ## D THE BASIS ELEMENTS OF THE FACE REPRESENTATION Fig. 9 depicts the face emoji generated by \(g\) for the standard basis of the face representation (Taigman et al., 2014), viewed as the vector space \(\mathbb{R}^{256}\). <--- Page Split ---> ## E DOMAIN TRANSFER IN THE REVERSE DIRECTION For completeness, we present, in Fig. 10, results obtained by performing domain transfer using DTNs in the reverse direction of the one reported in Sec. 5. ![](images/13_0.jpg) <center>Figure 8: The images in Fig. 4 above with (right version) and without (left version) applying super-resolution. Best viewed on screen. </center> <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 9: The emoji visualization of the standard basis vectors in the space of the face representation, i.e., \(g(e_{1}), \ldots , g(e_{256})\), where \(e_{i}\) is the \(i\)-th standard basis vector in \(\mathbb{R}^{256}\). </center> ![](images/14_1.jpg) <center>Figure 10: Domain transfer in the other direction (see limitations in Sec. 6). Input (output) in odd (even) columns. (a) Transfer from MNIST to SVHN. (b) Transfer from emoji to face photos. </center> <--- Page Split --->
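As a concrete companion to Eq. 8 and to the basis visualization of Appendix D, the following is a minimal PyTorch sketch; the toy `f` and `g` below are hypothetical stand-ins for the trained networks of Sec. 5.2.

```python
import torch
import torch.nn as nn

f = nn.Linear(64, 256)   # stand-in for the face representation of Sec. 5.2
g = nn.Linear(256, 64)   # stand-in for the learned mapping; G = g . f

def select_emoji(f, g, X_p):
    """Eq. 8: among a person's images X_p, pick the one whose transfer
    G(x) = g(f(x)) is the most f-constant, and return its emoji."""
    G = lambda x: g(f(x))
    errors = torch.stack([torch.norm(f(x) - f(G(x))) for x in X_p])
    return G(X_p[int(errors.argmin())])

def basis_emoji(g, dim=256):
    """Appendix D: the emoji generated for each standard basis vector of R^dim."""
    return [g(e) for e in torch.eye(dim)]

X_p = [torch.randn(64) for _ in range(5)]   # placeholder images of one person
print(select_emoji(f, g, X_p).shape, len(basis_emoji(g)))
```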
## ABSTRACT We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, \(S\) and \(T\) , we would like to learn a generative function \(G\) that maps an input sample from \(S\) to the domain \(T\) , such that the output of a given representation function \(f\) , which accepts inputs in either domains, would remain unchanged. Other than \(f\) , the training data is unsupervised and consist of a set of samples from each domain, without any mapping between them. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an \(f\) preserving component, and a regularizing component that encourages \(G\) to map samples from \(T\) to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity. ## 1 INTRODUCTION Humans excel in tasks that require making analogies between distinct domains, transferring elements from one domain to another, and using these capabilities in order to blend concepts that originated from multiple source domains. Our experience tells us that these remarkable capabilities are developed with very little, if any, supervision that is given in the form of explicit analogies. Recent achievements replicate some of these capabilities to some degree: Generative Adversarial Networks (GANs) are able to convincingly generate novel samples that match that of a given training set; style transfer methods are able to alter the visual style of images; domain adaptation methods are able to generalize learned functions to new domains even without labeled samples in the target domain and transfer learning is now commonly used to import existing knowledge and to make learning much more efficient. These capabilities, however, do not address the general analogy synthesis problem that we tackle in this work. Namely, given separated but otherwise unlabeled samples from domains \(S\) and \(T\) and a perceptual function \(f\) , learn a mapping \(G:S\to T\) such that \(f(x)\sim f(G(x))\) In order to solve this problem, we make use of deep neural networks of a specific structure in which the function \(G\) is a composition of the input function \(f\) and a learned function \(g\) . A compound loss that integrates multiple terms is used. One term is a Generative Adversarial Network (GAN) term that encourages the creation of samples \(G(x)\) that are indistinguishable from the training samples of the target domain, regardless of \(x\in S\) or \(x\in T\) . The second loss term enforces that for every \(x\) in the source domain training set, \(\| f(x) - f(G(x))\|\) is small. The third loss term is a regularizer that encourages \(G\) to be the identity mapping for all \(x\in T\) . The type of problems we focus on in our experiments are visual, although our methods are not limited to visual or even to perceptual tasks. Typically, \(f\) would be a neural network representation that is taken as the activations of a network that was trained, e.g., by using the cross entropy loss, in order to classify or to capture identity. As a main application challenge, we tackle the problem of emoji generation for a given facial image. Despite a growing interest in emoji and the hurdle of creating such personal emoji manually, no system has been proposed, to our knowledge, that can solve this problem. 
Our method is able to <--- Page Split ---> produce face emoji that are visually appealing and capture much more of the facial characteristics than the emoji created by well- trained human annotators who use the conventional tools. ## 2 RELATED WORK As far as we know, the domain transfer problem we formulate is novel despite being ecological (i.e., appearing naturally in the real- world), widely applicable, and related to cognitive reasoning (Fauconnier & Turner, 2003). In the discussion below, we survey recent GAN work, compare our work to the recent image synthesis work and make links to unsupervised domain adaptation. GAN (Goodfellow et al., 2014) methods train a generator network \(G\) that synthesizes samples from a target distribution given noise vectors. \(G\) is trained jointly with a discriminator network \(D\) , which distinguishes between samples generated by \(G\) and a training set from the target distribution. The goal of \(G\) is to create samples that are classified by \(D\) as real samples. While originally proposed for generating random samples, GANs can be used as a general tool to measure equivalence between distributions. Specifically, the optimization of \(D\) corresponds to taking the most discriminative \(D\) achievable, which in turn implies that the indistinguishability is true for every \(D\) . Formally, Ganin et al. (2016) linked the GAN loss to the H- divergence between two distributions of Ben- david et al. (2006). The generative architecture that we employ is based on the successful architecture of Radford et al. (2015). There has recently been a growing concern about the uneven distribution of the samples generated by \(G\) – that they tend to cluster around a set of modes in the target domain (Salimans et al., 2016). In general, we do not observe such an effect in our results, due to the requirement to generate samples that satisfy specific \(f\) - constancy criteria. A few contributions ("Conditional GANs") have employed GANs in order to generate samples from a specific class (Mirza & Osindero, 2014), or even based on a textual description (Reed et al., 2016). When performing such conditioning, one can distinguish between samples that were correctly generated but fail to match the conditional constraint and samples that were not correctly generated. This is modeled as a ternary discriminative function \(D\) (Reed et al., 2016; Brock et al., 2016). The recent work by Dosovitskiy & Brox (2016), has shown promising results for learning to map embeddings to their pre- images, given input- target pairs. Like us, they employ a GAN as well as additional losses in the feature- and the pixel- space. Their method is able to invert the mid- level activations of AlexNet and reconstruct the input image. In contrast, we solve the problem of unsupervised domain transfer and apply the loss terms in different domains: pixel loss in the target domain, and feature loss in the source domain. Another class of very promising generative techniques that has recently gained traction is neural style transfer. In these methods, new images are synthesized by minimizing the content loss with respect to one input sample and the style loss with respect to one or more input samples. The content loss is typically the encoding of the image by a network training for an image categorization task, similar to our work. The style loss compares the statistics of the activations in various layers of the neural network. We do not employ style losses in our method. 
While initially style transfer was obtained by a slow optimization process (Gatys et al., 2016), recently, the emphasis was put on feed- forward methods (Ulyanov et al., 2016; Johnson et al., 2016). There are many links between style transfer and our work: both are unsupervised and generate a sample under \(f\) constancy given an input sample. However, our work is much more general in its scope and does not rely on a predefined family of perceptual losses. Our method can be used in order to perform style transfer, but not the other way around. Another key difference is that the current style transfer methods are aimed at replicating the style of one or several images, while our work considers a distribution in the target space. In many applications, there is an abundance of unlabeled data in the target domain \(T\) , which can be modeled accurately in an unsupervised manner. Given the impressive results of recent style transfer work, in particular for face images, one might get the false impression that emoji are just a different style of drawing faces. By way of analogy, this claim is similar to stating that a Siamese cat is a Labrador in a different style. Emoji differ from facial photographs in both content and style. Style transfer can create visually appealing face images; However, the properties of the target domain are compromised. <--- Page Split ---> In the computer vision literature, work has been done to automatically generate sketches from images, see Kyprianidis et al. (2013) for a survey. These systems are able to emphasize image edges and facial features in a convincing way. However, unlike our method, they require matching pairs of samples, and were not shown to work across two distant domains as in our method. Due to the lack of supervised training data, we did not try to apply such methods to our problems. However, one can assume that if such methods were appropriate for emoji synthesis, automatic face emoji services would be available. Unsupervised domain adaptation addresses the following problem: given a labeled training set in \(S\times Y\) , for some target space \(Y\) , and an unlabeled set of samples from domain \(T\) , learn a function \(h:T\to Y\) (Chen et al., 2012; Ganin et al., 2016). One can solve the sample transfer problem (our problem) using domain adaptation and vice versa. In both cases, the solution is indirect. In order to solve domain adaptation using domain transfer, one would learn a function from \(S\) to \(Y\) and use it as the input method of the domain transfer algorithm in order to obtain a map from \(S\) to \(T^{1}\) . The training samples could then be transferred to \(T\) and used to learn a classifier there. In the other direction, given the function \(f\) , one can invert \(f\) in the domain \(T\) by generating training samples \((f(x),x)\) for \(x\in T\) and learn from them a function \(h\) from \(f(T) = \{f(x)|x\in T\}\) to \(T\) . Domain adaptation can then be used in order to map \(f(S) = \{f(x)|x\in S\}\) to \(T\) , thus achieving domain transfer. Based on the work by Zhmoginov & Sandler (2016), we expect that \(h\) , even in the target domain of emoji, will be hard to learn, making this solution hypothetical at this point. 
## 3 A BASELINE PROBLEM FORMULATION Given a set \(s\) of unlabeled samples in a source domain \(S\) sampled i.i.d according to some distribution \(\mathcal{D}_{S}\) , a set of samples in the target domain \(\mathbf{t}\subset T\) sampled i.i.d from distribution \(\mathcal{D}_{T}\) , a function \(f\) from the domain \(S\cup T\) , some metric \(d\) , and a weight \(\alpha\) , we wish to learn a function \(G:S\to T\) that minimizes the combined risk \(R = R_{\mathrm{GAN}} + \alpha R_{\mathrm{CONST}}\) , which is comprised of \[R_{\mathrm{GAN}} = \max_{D}\mathbb{E}_{x\sim \mathcal{D}_{S}}\log [1 - D(G(x))] + \mathbb{E}_{x\sim \mathcal{D}_{T}}\log [D(x)], \quad (1)\] where \(D\) is a binary classification function from \(T\) , \(D(x)\) the probability of the class 1 it assigns for a sample \(x\in T\) , and \[R_{\mathrm{CONST}} = \mathbb{E}_{x\sim \mathcal{D}_{S}}d(f(x),f(G(x))) \quad (2)\] The first term is the adversarial risk, which requires that for every discriminative function \(D\) , the samples from the target domain would be indistinguishable from the samples generated by \(G\) for samples in the source domain. An adversarial risk is not the only option. An alternative term that does not employ GANs would directly compare the distribution \(\mathcal{D}_{T}\) to the distribution of \(G(x)\) where \(x\sim \mathcal{D}_{S}\) , e.g., by using KL- divergence. The second term is the \(f\) - constancy term, which requires that \(f\) is invariant under \(G\) . In practice, we have experimented with multiple forms of \(d\) including Mean Squared Error (MSE) and cosine distance, as well as other variants including metric learning losses (hinge) and triplet losses. The performance is mostly unchanged, and we report results using the simplest MSE solution. Similarly to other GAN formulations, one can minimize the loss associated with the risk \(R\) over \(G\) , while maximizing it over \(D\) , where \(G\) and \(D\) are deep neural networks, and the expectations in \(R\) are replaced by summations over the corresponding training sets. However, this baseline solution, as we will show experimentally, does not produce desirable results. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 1: The Domain Transfer Network. Losses are drawn with dashed lines, input/output with solid lines. After training, the forward model G is used for the sample transfer. </center> ## 4 THE DOMAIN TRANSFER NETWORK We suggest to employ a more elaborate architecture that contains two high level modifications. First, we employ \(f(x)\) as the baseline representation to the function \(G\) . Second, we consider, during training, the generated samples \(G(x)\) for \(x \in \mathbf{t}\) . The first change is stated as \(G = g \circ f\) , for some learned function \(g\) . By applying this, we focus the learning effort of \(G\) on the aspects that are most relevant to \(R_{\mathrm{CONST}}\) . In addition, in most applications, \(f\) is not as accurate on \(T\) as it on \(S\) . The composed function, which is trained on samples from both \(S\) and \(T\) , adds layers on top of \(f\) , which adapt it. The second change alters the form of \(L_{\mathrm{GAN}}\) , making it multiclass instead of binary. It also introduces a new term \(L_{TID}\) that requires \(G\) to be the identity matrix on samples from \(T\) . 
Taken together and written in terms of training loss, we now have two losses \(L_{D}\) and \(L_{G} = L_{\mathrm{GANG}} + \alpha L_{\mathrm{CONST}} + \beta L_{\mathrm{TID}} + \gamma L_{\mathrm{TV}}\) , for some weights \(\alpha , \beta , \gamma\) , where \[L_{\mathrm{D}} = -\sum_{x\in \mathbf{s}}\log D_{1}(g(f(x))) - \sum_{x\in \mathbf{t}}\log D_{2}(g(f(x))) - \sum_{x\in \mathbf{t}}\log D_{3}(x) \quad (3)\] \[L_{\mathrm{GANG}} = -\sum_{x\in \mathbf{s}}\log D_{3}(g(f(x))) - \sum_{x\in \mathbf{t}}\log D_{3}(g(f(x))) \quad (4)\] \[L_{\mathrm{CONST}} = \sum_{x\in \mathbf{s}}d(f(x),f(g(f(x)))) \quad (5)\] \[L_{\mathrm{TID}} = \sum_{x\in \mathbf{t}}d_{2}(x,G(x)) \quad (6)\] and where \(D\) is a ternary classification function from the domain \(T\) to \(1,2,3\) , and \(D_{i}(x)\) is the probability it assigns to class \(i = 1,2,3\) for an input sample \(x\) , and \(d_{2}\) is a distance function in \(T\) . During optimization, \(L_{G}\) is minimized over \(g\) and \(L_{D}\) is minimized over \(D\) . See Fig. 1 for an illustration of our method. Eq. 3 and 4 make sure that the generated analogy, i.e., the output of \(G\) , is in the target space \(T\) . Since \(D\) is ternary and can therefore confuse classes in more than one way, this role, which is captured by Eq. 1 in the baseline formulation, is split into two. However, the two equations do not enforce any similarity between the source sample \(x\) and the generated \(G(x)\) . This is done by Eq. 5 and 6: Eq. 5 enforces \(f\) - constancy for \(x \in S\) , while Eq. 6 enforces that for samples \(x \in T\) , which are already in the target space, \(G\) is the identity mapping. The latter is a desirable behavior, e.g., for the cartooning task, given an input emoji, one would like it to remain constant under the mapping of \(G\) . It can also be seen as an autoencoder type of loss, applied only to samples from \(T\) . The experiments reported in Sec. 5 evaluate the contributions of \(L_{CONST}\) and \(L_{TID}\) and reveal that at least one of these is required, and that when employing only one loss, \(L_{CONST}\) leads to a better performance than \(L_{TID}\) . <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 2: Domain transfer in two visual domains. Input in odd columns; output in even columns. (a) Transfer from SVHN to MNIST. (b) Transfer from face photos (Facescrub dataset) to emoji. </center> The last loss, \(L_{TV}\) is an anisotropic total variation loss (Rudin et al., 1992; Mahendran & Vedaldi, 2015), which is added in order to slightly smooth the resulting image. The loss is defined on the generated image \(z = [z_{ij}] = G(x)\) as \[L_{TV}(z) = \sum_{i,j}\left((z_{i,j + 1} - z_{ij})^2 +(z_{i + 1,j} - z_{ij})^2\right)^{\frac{B}{2}}, \quad (7)\] where we employ \(B = 1\) . In our work, MSE is used for both \(d\) and \(d_{2}\) . We also experimented with replacing \(d_{2}\) , which, in visual domains, compares images, with a second GAN. No noticeable improvement was observed. Throughout the experiments, the adaptive learning rate method Adam by Kingma & Ba (2016) is used as the optimization algorithm. ## 5 EXPERIMENTS The Domain Transfer Network (DTN) is evaluated in two application domains: digits and face images. In the first domain, we transfer images from the Street View House Number (SVHN) dataset of Netzer et al. (2011) to the domain of the MNIST dataset by LeCun & Cortes (2010). In <--- Page Split ---> Table 1: Accuracy of the MNIST classifier on the sampled transferred by our DTN method from SHVN to MNIST. 
<table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>Baseline method (Sec. 3)</td><td>13.71%</td></tr><tr><td>DTN</td><td>90.66%</td></tr><tr><td>DTN w/0 LTD</td><td>88.40%</td></tr><tr><td>DTN w/0 LCONST</td><td>74.55%</td></tr><tr><td>DTN G does not contain f</td><td>36.90%</td></tr><tr><td>DTN w/0 LD and LGANG</td><td>34.70%</td></tr><tr><td>DTN w/0 LCONST &amp;amp; LTD</td><td>5.28%</td></tr><tr><td>Original SHVN image</td><td>40.06%</td></tr></table> Table 2: Domain adaptation from SVHN to MNIST <table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>SA Fernando et al. (2013)</td><td>59.32%</td></tr><tr><td>DANN Ganin et al. (2016)</td><td>73.85%</td></tr><tr><td>DTN on SVHN transferring the train split s</td><td>84.44%</td></tr><tr><td>DTN on SVHN transferring the test split</td><td>79.72%</td></tr></table> the face domain, we transfer a set of random and unlabeled face images to a space of emoji images. In both cases, the source and target domains differ considerably. ### 5.1 DIGITS: FROM SVHN TO MNIST For working with digits, we employ the extra training split of SVHN, which contains 531,131 images for two purposes: learning the function \(f\) and as an unsupervised training set \(\mathbf{s}\) for the domain transfer method. The evaluation is done on the test split of SVHN, comprised of 26,032 images. The architecture of \(f\) consists of four convolutional layers with 64, 128, 256, 128 filters respectively, each followed by max pooling and ReLU non- linearity. The error on the test split is \(4.95\%\) . Even though this accuracy is far from the best reported results, it seems to be sufficient for the purpose of domain transfer. Within the DTN, \(f\) maps a \(32 \times 32\) RGB image to the activations of the last convolutional layer of size \(128 \times 1 \times 1\) (post a \(4 \times 4\) max pooling and before the ReLU). In order to apply \(f\) on MNIST images, we replicate the grayscale image three times. The set t contains the test set of the MNIST dataset. For supporting quantitative evaluation, we have trained a classifier on the train set of the MNIST dataset, consisting of the same architecture as \(f\) . The accuracy of this classifier on the test set approaches perfect performance at \(99.4\%\) accuracy, and is, therefore, trustworthy as an evaluation metric. In comparison, the network \(f\) , achieves \(76.08\%\) accuracy on t. Network \(g\) , inspired by Radford et al. (2015), maps SVHN- trained \(f\) 's 128D representations to \(32 \times 32\) grayscale images. \(g\) employs four blocks of deconvolution, batch- normalization, and ReLU, with a hyperbolic tangent terminal. The architecture of \(D\) consists of four batch- normalized convolutional layers and employs ReLU. See Radford et al. (2015) for more details on the networks architecture. In the digit experiments, the results were obtained with the tradeoff hyperparameters \(\alpha = \beta = 15\) . We did not observe a need to add a smoothness term and the weight of \(L_{\mathrm{TV}}\) was set to \(\gamma = 0\) . Despite not being very accurate on both domains (and also considerably worse than the SVHN state of the art), we were able to achieve visually appealing domain transfer, as shown in Fig. 2(a). In order to evaluate the contribution of each of the method's components, we have employed the MNIST network on the set of samples \(G(\mathbf{s}_{TEST}) = \{G(x)|x \in \mathbf{s}_{TEST}\}\) , using the true SVHN labels of the test set. We first compare to the baseline method of Sec. 
3, where the generative function, which works directly with samples in \(S\) , is composed of a few additional layers at the bottom of \(G\) . The results, shown in Tab. 1, demonstrate that the DTN has a clear advantage over the baseline method. In addition, the contribution of each one of the terms in the loss function is shown in the table. The regularization term \(L_{\mathrm{TID}}\) seems less crucial than the constancy term; however, at least one of them is required in order to obtain good performance. The GAN constraints are also important. Finally, the inclusion of \(f\) within the generator function \(G\) has a dramatic influence on the results.

As explained in Sec. 2, domain transfer can be used in order to perform unsupervised domain adaptation. For this purpose, we transformed the set \(\mathbf{s}\) to the MNIST domain (as above) and, using the true labels of \(\mathbf{s}\) , employed a simple nearest neighbor classifier there. The choice of classifier was <--- Page Split ---> Table 3: Comparison of recognition accuracy of the digit 3 as generated in MNIST <table><tr><td>Method</td><td>Accuracy of ‘3’</td></tr><tr><td>DTN</td><td>94.67%</td></tr><tr><td>‘3’ was not shown in s</td><td>93.33%</td></tr><tr><td>‘3’ was not shown in t</td><td>40.13%</td></tr><tr><td>‘3’ was not shown in both s or t</td><td>60.02%</td></tr><tr><td>‘3’ was not shown in s, t, and during the training of f</td><td>4.52%</td></tr></table> to emphasize the simplicity of the approach; however, the constraints of the unsupervised domain transfer problem would be respected for any classifier trained on \(G(\mathbf{s})\) . The results of this experiment are reported in Tab. 2, which shows a clear advantage over the state of the art method of Ganin et al. (2016). This is true both when transferring the samples of the set \(\mathbf{s}\) and when transferring the test set of SVHN, which is much smaller and was not seen during the training of the DTN.

#### 5.1.1 UNSEEN DIGITS

Another set of experiments was performed in order to study the ability of the domain transfer network to overcome the omission of a class of samples. This type of ablation can occur in the source or the target domain, or during the training of \(f\) , and can help us understand the importance of each of these inputs. The results are shown visually in Fig. 3, and quantitatively in Tab. 3, based on the accuracy of the MNIST classifier only on the transferred samples from the test set of SVHN that belong to class '3'. It is evident that not including the class in the source domain is much less detrimental than eliminating it from the target domain. This is the desirable behavior: never seeing any '3'-like shapes in \(\mathbf{t}\) , the generator should not generate such samples. Results are better when not observing '3' in both \(\mathbf{s}\) and \(\mathbf{t}\) than when not seeing it only in \(\mathbf{t}\) , since in the latter case, \(G\) learns to map source samples of '3' to target images of other classes.

![](images/6_0.jpg) <center>Figure 3: A random subset of the digit '3' from SVHN, transferred to MNIST. (a) The input images. (b) Results of our DTN. In all plots, the cases keep their respective locations, and are sorted by the probability of '3' as inferred by the MNIST classifier on the results of our DTN. (c) The obtained results, in which the digit 3 was not shown as part of the set s of unlabeled samples from SVHN. (d) The obtained results, in which the digit 3 was not shown as part of the set t of unlabeled samples in MNIST. (e) The digit 3 was not shown in both s and t.
(f) The digit 3 was not shown in s, t, and during the training of \(f\) . </center> <--- Page Split ---> Table 4: Comparison of retrieval accuracy out of a set of 100,001 face images for either the manually created emoji or the ones created by the DTN method. <table><tr><td>Measure</td><td>Manual</td><td>Emoji by DTN</td></tr><tr><td>Median rank</td><td>16,311</td><td>16</td></tr><tr><td>Mean rank</td><td>27,992.34</td><td>535.47</td></tr><tr><td>Rank-1 accuracy</td><td>0%</td><td>22.88%</td></tr><tr><td>Rank-5 accuracy</td><td>0%</td><td>34.75%</td></tr></table>

### 5.2 FACES: FROM PHOTOS TO EMOJI

For face images, we use a set \(\mathbf{s}\) of one million random images without identity information. The set \(\mathbf{t}\) consists of assorted facial avatars (emoji) created by an online service (bitmoji.com). The emoji images were processed by a fully automatic process that localizes, based on a set of heuristics, the center of the irides and the tip of the nose. Based on these coordinates, the emoji were centered and scaled into \(152\times 152\) RGB images.

As the function \(f\) , we employ the representation layer of the DeepFace network of Taigman et al. (2014). This representation is 256-dimensional and was trained on a labeled set of four million images that does not intersect the set \(\mathbf{s}\) . Network \(D\) takes \(152\times 152\) RGB images (either natural or scaled-up emoji) and consists of 6 blocks, each containing a convolution with stride 2, batch normalization, and a leaky ReLU with a parameter of 0.2. Network \(g\) maps \(f\) 's 256D representations to \(64\times 64\) RGB images through a network with 5 blocks, each consisting of an upscaling convolution, batch normalization, and ReLU. Adding a \(1\times 1\) convolution to each block resulted in lower \(L_{\mathrm{CONST}}\) training errors, and made \(g\) 9 layers deep. We set \(\alpha = 100\) , \(\beta = 1\) , \(\gamma = 0.05\) as the tradeoff hyperparameters within \(L_{G}\) via validation. As expected, higher values of \(\alpha\) resulted in better \(f\) -constancy, but introduced artifacts such as general noise or distortions. The network was trained for 3 epochs, the point at which no further reduction of validation error was observed on \(L_{\mathrm{CONST}}\) .

In order to upscale the \(64\times 64\) output to print quality, we used the method of Dong et al. (2015), which was shown to work well on art. We did not retrain this network for our application, and applied the published one to the final output of our method after its training was finished. Results without this upscaling are shown, for comparison, in Appendix C.

Comparison With Human Annotators For evaluation purposes only, a team of professional annotators manually created an emoji, using a web service, for 118 random images from the CelebA dataset (Yang et al., 2015). Fig. 4 shows side by side samples of the original image, the human-generated emoji, and the emoji generated by the learned generator function \(G\) . As can be seen, the automatically generated emoji tend to be more informative, albeit less restrictive, than the ones created manually.

In order to evaluate the identifiability of the resulting emoji, we have collected a second example for each identity in the set of 118 CelebA images and a set \(\mathbf{s}'\) of 100,000 random face images, which were not included in \(\mathbf{s}\) . We then employed the VGG face CNN descriptor of Parkhi et al. (2015) in order to perform retrieval as follows.
For each image \(x\) in our manually annotated set, we create a gallery \(\mathbf{s}' \cup \{x'\}\) , where \(x'\) is the other image of the person in \(x\) . We then perform retrieval with the VGG face descriptor, using either the manually created emoji or \(G(x)\) as the probe. The VGG network is used in order to avoid a bias that might be caused by using \(f\) both for training the DTN and for evaluation.

The results are reported in Tab. 4. As can be seen, the emoji generated by \(G\) are much more discriminative than the emoji created manually and obtain a median rank of 16 in cross-domain identification out of \(10^{5}\) distractors.

Multiple Images Per Person We evaluate the visual quality that is obtained per person, and not just per image, by testing the DTN on the Facescrub dataset (Ng & Winkler, 2014). For each person \(p\) , we considered the set of their images \(X_{p}\) , and selected the emoji that was most similar to their <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 4: Shown, side by side, are sample images from the CelebA dataset, the emoji images created manually using a web interface (for validation only), and the result of the unsupervised DTN. See Tab. 4 for retrieval performance. </center> source image: \[\arg \min_{x\in X_{p}}\| f(x) - f(G(x))\| \quad (8)\] This simple heuristic seems to work well in practice; the general problem of mapping a set \(X \subset S\) to a single output in \(T\) is left for future work. Fig. 2(b) contains several examples from the Facescrub dataset. For the complete set of identities, see Appendix A.

Transferring both identity and expression We also experimented with multiple expressions. As it turns out, the face identification network \(f\) encodes enough expression information to support a successful transfer of both identity and expression; see Appendix B.

Network Visualization The obtained mapping \(g\) can serve as a visualization tool for studying the properties of the face representation. This is studied in Appendix D by computing the emoji generated for the standard basis of \(\mathbb{R}^{256}\) . The resulting images present a large amount of variability, indicating that \(g\) does not exhibit a significant mode effect.

### 5.3 STYLE TRANSFER AS A SPECIFIC DOMAIN TRANSFER TASK

Fig. 5(a-c) demonstrates that neural style transfer (Gatys et al., 2016) cannot solve the photo-to-emoji transfer task in a convincing way. The output image is perhaps visually appealing; however, it does not belong to the space \(\mathbf{t}\) of emoji. Our results are given in Fig. 5(d) for comparison. Note that the DTN is able to fix the missing hair in the image.

Domain transfer is more general than style transfer in the sense that we can perform style transfer using a DTN. In order to show this, we have transformed, using the method of Johnson et al. (2016), the training images of CelebA based on the style of a single image (shown in Fig. 5(e)). The original photos were used as the set \(\mathbf{s}\) , and the transformed images were used as \(\mathbf{t}\) . Applying the DTN, using the face representation \(f\) , we obtained styled face images such as the one shown in Fig. 5(f). <--- Page Split ---> ![](images/9_0.jpg) <center>Figure 5: Style transfer as a specific case of domain transfer. (a) The input content photo. (b) An emoji taken as the input style image. (c) The result of applying the style transfer method of Gatys et al. (2016). (d) The result of the emoji DTN. (e) Source image for style transfer.
(f) The result, on the same input image, of a DTN trained to perform style transfer. </center>

## 6 DISCUSSION AND LIMITATIONS

Asymmetry is central to our work. Not only does our solution handle the two domains \(S\) and \(T\) differently, but the function \(f\) is also unlikely to be equally effective in both domains, since in most practical cases \(f\) would be trained on samples from one domain. While an explicit domain adaptation step can be added in order to make \(f\) more effective on the second domain, we found it to be unnecessary. Adaptation of \(f\) occurs implicitly due to the application of \(D\) downstream.

Using the same function \(f\) , we can swap the roles of the two domains, \(S\) and \(T\) . For example, we can synthesize an SVHN image that resembles a given MNIST image, or synthesize a face that matches an emoji. As expected, this yields less appealing results due to the asymmetric nature of \(f\) and the lower information content in these new source domains; see Appendix E.

Domain transfer, as an unsupervised method, could prove useful across a wide variety of computational tasks. Here, we demonstrate the ability to use domain transfer in order to perform unsupervised domain adaptation. While this is currently only shown in a single experiment, the simplicity of performing domain adaptation and the fact that state of the art results were obtained effortlessly with a simple nearest neighbor classifier suggest it to be a promising direction for future research.

## A FACESCRUB DATASET GENERATIONS

In Fig. 6 we show the full set of identities of the Facescrub dataset and their corresponding generated emoji.

![](images/11_0.jpg) <center>Figure 6: All 80 identities of the Facescrub dataset. The even columns show the results obtained for the images in the odd column to the left. Best viewed in color and zoom. </center> <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 7: Maintaining expression in the domain transfer. In order to support a smiling expression, random smiling emoji were added to the set of unlabeled samples from the domain \(T\) and the DTN was re-trained. Each quadruplet includes two pairs of {face, emoji} of the same identity in the two modes, respectively: not smiling and smiling. Odd columns are input; subsequent even columns are output. </center>

## B TRANSFERRING NON-IDENTITY DATA

\(f\) may encode, in addition to identity, other data that is desirable to transfer. In the example of faces, this information might include expression, facial hair, glasses, pose, etc. In order to transfer such information, it is important that the set of samples in the target domain \(\mathbf{t}\) presents variability along the desirable dimensions. Otherwise, the GAN applied in the target domain (Eq. 4) would keep these dimensions fixed. The set \(\mathbf{t}\) employed throughout our experiments in Sec. 5.2 was constructed by sampling emoji with a neutral expression. To support a smiling expression, for example, we simply added random smiling emoji to the set \(\mathbf{t}\) and re-trained the DTN. The results, presented in Fig. 7, demonstrate that \(f\) contains expression information in addition to identity information, and that this information is enough to transfer smiling photos to smiling emoji.

## C THE EFFECT OF SUPER-RESOLUTION

As mentioned in Sec. 5, in order to upscale the \(64 \times 64\) output to print quality, the method of Dong et al. (2015) is used. Fig. 8 shows the effect of applying this postprocessing step.

## D THE BASIS ELEMENTS OF THE FACE REPRESENTATION

Fig.
9 depicts the face emoji generated by \(g\) for the standard basis of the face representation (Taigman et al., 2014), viewed as the vector space \(\mathbb{R}^{256}\) . <--- Page Split --->

## E DOMAIN TRANSFER IN THE REVERSE DIRECTION

For completeness, we present in Fig. 10 results obtained by performing domain transfer using DTNs in the reverse direction of the one reported in Sec. 5.

![](images/13_0.jpg) <center>Figure 8: The images in Fig. 4 above with (right version) and without (left version) applying super-resolution. Best viewed on screen. </center> <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 9: The emoji visualization of the standard basis vectors in the space of the face representation, i.e., \(g(e_{1}), \ldots , g(e_{256})\) , where \(e_{i}\) is the \(i\)-th standard basis vector in \(\mathbb{R}^{256}\) . </center> ![](images/14_1.jpg) <center>Figure 10: Domain transfer in the other direction (see limitations in Sec. 6). Input (output) in odd (even) columns. (a) Transfer from MNIST to SVHN. (b) Transfer from emoji to face photos. </center> <--- Page Split --->
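As a concrete summary of the objective, the sketch below assembles the training losses of Eqs. 3-7 in PyTorch for one batch. The discriminator is assumed to output three logits, with indices 0/1/2 standing for the paper's classes 1/2/3, so the cross-entropy terms correspond to the \(-\log D_{i}\) terms; all module and variable names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dtn_losses(f, g, D, x_s, x_t, alpha=15.0, beta=15.0, gamma=0.0):
    """Sketch of the DTN losses (Eqs. 3-7): f is the pretrained feature
    network, g the generator head (so G = g o f), and D a ternary
    discriminator over {G(source), G(target), real target}."""
    def labels(x, c):
        return torch.full((x.size(0),), c, dtype=torch.long, device=x.device)

    g_s = g(f(x_s))  # analogies of source samples
    g_t = g(f(x_t))  # target samples passed through G

    # Eq. 3: D classifies G(source) as class 1, G(target) as 2, real t as 3.
    L_D = (F.cross_entropy(D(g_s.detach()), labels(x_s, 0))
           + F.cross_entropy(D(g_t.detach()), labels(x_t, 1))
           + F.cross_entropy(D(x_t), labels(x_t, 2)))

    # Eq. 4: the generator tries to make D label both outputs as real (class 3).
    L_GANG = (F.cross_entropy(D(g_s), labels(x_s, 2))
              + F.cross_entropy(D(g_t), labels(x_t, 2)))

    # Eq. 5: f-constancy on source samples (d = MSE, as in the paper).
    L_CONST = F.mse_loss(f(g_s), f(x_s))

    # Eq. 6: G should act as the identity on target samples (d2 = MSE).
    L_TID = F.mse_loss(g_t, x_t)

    # Eq. 7 with B = 1: anisotropic total variation on the generated analogies
    # (a small epsilon is added for numerical stability of the square root).
    dh = g_s[:, :, 1:, :-1] - g_s[:, :, :-1, :-1]
    dw = g_s[:, :, :-1, 1:] - g_s[:, :, :-1, :-1]
    L_TV = (dh.pow(2) + dw.pow(2) + 1e-8).sqrt().sum()

    L_G = L_GANG + alpha * L_CONST + beta * L_TID + gamma * L_TV
    return L_D, L_G
```

In a training loop, `L_D` would be minimized over the parameters of `D` and `L_G` over those of `g`, alternating the two updates as in standard GAN training.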
accept
Accept (Poster)
6.666667
ICLR_2017_paper_0277
iclr
2,017
# DENSITY ESTIMATION USING REAL NVP

Laurent Dinh\* Montreal Institute for Learning Algorithms, University of Montreal, Montreal, QC H3T1J4

Jascha Sohl-Dickstein Google Brain

Samy Bengio Google Brain

## ABSTRACT

Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference, and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.

## 1 Introduction

The domain of representation learning has undergone tremendous advances due to improved supervised learning techniques. However, unsupervised learning has the potential to leverage large pools of unlabeled data, and extend these advances to modalities that are otherwise impractical or impossible.

One principled approach to unsupervised learning is generative probabilistic modeling. Not only do generative probabilistic models have the ability to create novel content, they also have a wide range of reconstruction related applications including inpainting [61, 46, 59], denoising [3], colorization [71], and super-resolution [9].

As data of interest are generally high-dimensional and highly structured, the challenge in this domain is building models that are powerful enough to capture their complexity yet still trainable. We address this challenge by introducing real-valued non-volume preserving (real NVP) transformations, a tractable yet expressive approach to modeling high-dimensional data. This model can perform efficient and exact inference, sampling and log-density estimation of data points. Moreover, the architecture presented in this paper enables exact and efficient reconstruction of input images from the hierarchical features extracted by this model.

## 2 Related work

Substantial work on probabilistic generative models has focused on training models using maximum likelihood. One class of maximum likelihood models are those described by probabilistic undirected graphs, such as Restricted Boltzmann Machines [58] and Deep Boltzmann Machines [53]. These models are trained by taking advantage of the conditional independence property of their bipartite structure to allow efficient exact or approximate posterior inference on latent variables. However, because of the intractability of the associated marginal distribution over latent variables, their training, evaluation, and sampling procedures necessitate the use of approximations like Mean Field inference and Markov Chain Monte Carlo, whose convergence time for such complex models <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Real NVP learns an invertible, stable mapping between a data distribution \(\hat{p}_{X}\) and a latent distribution \(p_{Z}\) (typically a Gaussian). Here we show a mapping that has been learned on a toy 2-d dataset. The function \(f(x)\) maps samples \(x\) from the data distribution in the upper left into approximate samples \(z\) from the latent distribution, in the upper right.
This corresponds to exact inference of the latent state given the data. The inverse function, \(f^{-1}(z)\) , maps samples \(z\) from the latent distribution in the lower right into approximate samples \(x\) from the data distribution in the lower left. This corresponds to exact generation of samples from the model. The transformation of grid lines in \(\mathcal{X}\) and \(\mathcal{Z}\) space is additionally illustrated for both \(f(x)\) and \(f^{-1}(z)\) . </center> remains undetermined, often resulting in generation of highly correlated samples. Furthermore, these approximations can often hinder their performance [7].

Directed graphical models are instead defined in terms of an ancestral sampling procedure, which is appealing both for its conceptual and computational simplicity. They lack, however, the conditional independence structure of undirected models, making exact and approximate posterior inference on latent variables cumbersome [56]. Recent advances in stochastic variational inference [27] and amortized inference [13, 43, 35, 49] allowed efficient approximate inference and learning of deep directed graphical models by maximizing a variational lower bound on the log-likelihood [45]. In particular, the variational autoencoder algorithm [35, 49] simultaneously learns a generative network that maps Gaussian latent variables \(z\) to samples \(x\) , and a matched approximate inference network that maps samples \(x\) to a semantically meaningful latent representation \(z\) , by exploiting the reparametrization trick [68]. Its success in leveraging recent advances in backpropagation [51, 39] in deep neural networks resulted in its adoption for several applications ranging from speech synthesis [12] to language modeling [8]. Still, the approximation in the inference process limits its ability to learn high dimensional deep representations, motivating recent work in improving approximate inference [42, 48, 55, 63, 10, 59, 34].

Such approximations can be avoided altogether by abstaining from using latent variables. Autoregressive models [18, 6, 37, 20] can implement this strategy while typically retaining a great deal of flexibility. This class of algorithms tractably models the joint distribution by decomposing it into a product of conditionals using the probability chain rule according to a fixed ordering over dimensions, simplifying log-likelihood evaluation and sampling. Recent work in this line of research has taken advantage of recent advances in recurrent networks [51], in particular long short-term memory [26], and residual networks [25, 24] in order to learn state-of-the-art generative image models [61, 46] and language models [32]. The ordering of the dimensions, although often arbitrary, can be critical to the training of the model [66]. The sequential nature of this model limits its computational efficiency. For example, its sampling procedure is sequential and non-parallelizable, which can become cumbersome in applications like speech and music synthesis, or real-time rendering. Additionally, there is no natural latent representation associated with autoregressive models, and they have not yet been shown to be useful for semi-supervised learning. <--- Page Split ---> Generative Adversarial Networks (GANs) [21] on the other hand can train any differentiable generative network by avoiding the maximum likelihood principle altogether. Instead, the generative network is associated with a discriminator network whose task is to distinguish between samples and real data.
Rather than using an intractable log-likelihood, this discriminator network provides the training signal in an adversarial fashion. Successfully trained GAN models [21, 15, 47] can consistently generate sharp and realistic-looking samples [38]. However, metrics that measure the diversity in the generated samples are currently intractable [62, 22, 30]. Additionally, instability in their training process [47] requires careful hyperparameter tuning to avoid diverging behavior.

Training such a generative network \(g\) that maps latent variable \(z \sim p_{Z}\) to a sample \(x \sim p_{X}\) does not in theory require a discriminator network as in GANs, or approximate inference as in variational autoencoders. Indeed, if \(g\) is bijective, it can be trained through maximum likelihood using the change of variable formula: \[p_{X}(x) = p_{Z}(z)\left|\operatorname*{det}\left(\frac{\partial g(z)}{\partial z^{T}}\right)\right|^{-1}. \quad (1)\] This formula has been discussed in several papers including the maximum likelihood formulation of independent components analysis (ICA) [4, 28], gaussianization [14, 11] and deep density models [5, 50, 17, 3]. As the existence proof of nonlinear ICA solutions [29] suggests, auto-regressive models can be seen as tractable instances of maximum likelihood nonlinear ICA, where the residual corresponds to the independent components. However, naive application of the change of variable formula produces models which are computationally expensive and poorly conditioned, and so large scale models of this type have not entered general use.

## 3 Model definition

In this paper, we will tackle the problem of learning highly nonlinear models in high-dimensional continuous spaces through maximum likelihood. In order to optimize the log-likelihood, we introduce a more flexible class of architectures that enables the computation of log-likelihood on continuous data using the change of variable formula. Building on our previous work in [17], we define a powerful class of bijective functions which enable exact and tractable density evaluation and exact and tractable inference. Moreover, the resulting cost function does not rely on a fixed form reconstruction cost such as squared error [38, 47], and generates sharper samples as a result. Also, this flexibility helps us leverage recent advances in batch normalization [31] and residual networks [24, 25] to define a very deep multi-scale architecture with multiple levels of abstraction.

### 3.1 Change of variable formula

Given an observed data variable \(x \in X\) , a simple prior probability distribution \(p_{Z}\) on a latent variable \(z \in Z\) , and a bijection \(f: X \to Z\) (with \(g = f^{-1}\) ), the change of variable formula defines a model distribution on \(X\) by \[\begin{array}{c}{p_{X}(x) = p_{Z}\big(f(x)\big)\left|\operatorname*{det}\left(\frac{\partial f(x)}{\partial x^{T}}\right)\right|}\\ {\log \left(p_{X}(x)\right) = \log \left(p_{Z}\big(f(x)\big)\right) + \log \left(\left|\operatorname*{det}\left(\frac{\partial f(x)}{\partial x^{T}}\right)\right|\right),} \end{array} \quad (2)\] where \(\frac{\partial f(x)}{\partial x^{T}}\) is the Jacobian of \(f\) at \(x\) . Exact samples from the resulting distribution can be generated by using the inverse transform sampling rule [16]. A sample \(z \sim p_{Z}\) is drawn in the latent space, and its inverse image \(x = f^{-1}(z) = g(z)\) generates a sample in the original space.
Computing the density on a point \(x\) is accomplished by computing the density of its image \(f(x)\) and multiplying by the associated Jacobian determinant \(\operatorname*{det}\left(\frac{\partial f(x)}{\partial x^{T}}\right)\) . See also Figure 1. Exact and efficient inference enables the accurate and fast evaluation of the model. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: Computational graphs for forward and inverse propagation. A coupling layer applies a simple invertible transformation consisting of scaling followed by addition of a constant offset to one part \(\mathbf{x}_{2}\) of the input vector, conditioned on the remaining part of the input vector \(\mathbf{x}_{1}\) . Because of its simple nature, this transformation is both easily invertible and possesses a tractable determinant. However, the conditional nature of this transformation, captured by the functions \(s\) and \(t\) , significantly increases the flexibility of this otherwise weak function. The forward and inverse propagation operations have identical computational cost. </center>

### 3.2 Coupling layers

Computing the Jacobian of functions with high-dimensional domain and codomain and computing the determinants of large matrices are in general computationally very expensive. This, combined with the restriction to bijective functions, makes Equation 2 appear impractical for modeling arbitrary distributions. As shown however in [17], by careful design of the function \(f\) , a bijective model can be learned which is both tractable and extremely flexible. As computing the Jacobian determinant of the transformation is crucial to effectively train using this principle, this work exploits the simple observation that the determinant of a triangular matrix can be efficiently computed as the product of its diagonal terms.

We will build a flexible and tractable bijective function by stacking a sequence of simple bijections. In each simple bijection, part of the input vector is updated using a function which is simple to invert, but which depends on the remainder of the input vector in a complex way. We refer to each of these simple bijections as an affine coupling layer. Given a \(D\) dimensional input \(x\) and \(d < D\) , the output \(y\) of an affine coupling layer follows the equations \[\begin{array}{r}{y_{1:d} = x_{1:d}}\\ {y_{d + 1:D} = x_{d + 1:D}\odot \exp \left(s(x_{1:d})\right) + t(x_{1:d}),} \end{array} \quad (4)\] where \(s\) and \(t\) stand for scale and translation, and are functions from \(R^{d} \mapsto R^{D - d}\) , and \(\odot\) is the Hadamard or element-wise product (see Figure 2(a)).

### 3.3 Properties

The Jacobian of this transformation is \[\frac{\partial y}{\partial x^{T}} = \left[ \begin{array}{cc}\mathbb{I}_{d} & 0\\ \frac{\partial y_{d + 1:D}}{\partial x_{1:d}^{T}} & \mathrm{diag}\left(\exp \left[s\left(x_{1:d}\right)\right]\right) \end{array} \right], \quad (6)\] where \(\mathrm{diag}\left(\exp \left[s\left(x_{1:d}\right)\right]\right)\) is the diagonal matrix whose diagonal elements correspond to the vector \(\exp \left[s\left(x_{1:d}\right)\right]\) . Given the observation that this Jacobian is triangular, we can efficiently compute its determinant as \(\exp \left[\sum_{j}s\left(x_{1:d}\right)_{j}\right]\) . Since computing the Jacobian determinant of the coupling layer operation does not involve computing the Jacobian of \(s\) or \(t\) , those functions can be arbitrarily complex. We will make them deep convolutional neural networks.
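To make the coupling-layer mechanics concrete, here is a minimal PyTorch sketch on flattened \(D\)-dimensional inputs, assuming fully connected \(s\) and \(t\) (the paper uses deep convolutional networks); it returns the log-determinant \(\sum_{j}s(x_{1:d})_{j}\) alongside the forward output, and inverts without ever inverting \(s\) or \(t\). The layer sizes are illustrative choices.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Sketch of the affine coupling layer of Eqs. 4-8 on flat inputs."""

    def __init__(self, dim, d, hidden=256):
        super().__init__()
        self.d = d
        out = dim - d
        # Tanh keeps the log-scale bounded, a common stabilizing choice.
        self.s = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                               nn.Linear(hidden, out), nn.Tanh())
        self.t = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                               nn.Linear(hidden, out))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.s(x1), self.t(x1)
        y2 = x2 * torch.exp(s) + t            # Eq. 4: affine update of x2
        log_det = s.sum(dim=1)                # log|det J| = sum_j s(x_{1:d})_j
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.s(y1), self.t(y1)
        x2 = (y2 - t) * torch.exp(-s)         # Eq. 7: no inverse of s or t needed
        return torch.cat([y1, x2], dim=1)
```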
Note that the hidden layers of \(s\) and \(t\) can have more features than their input and output layers.

Another interesting property of these coupling layers in the context of defining probabilistic models is their invertibility. Indeed, computing the inverse is no more complex than the forward propagation <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 3: Masking schemes for affine coupling layers. On the left, a spatial checkerboard pattern mask. On the right, a channel-wise masking. The squeezing operation reduces the \(4 \times 4 \times 1\) tensor (on the left) into a \(2 \times 2 \times 4\) tensor (on the right). Before the squeezing operation, a checkerboard pattern is used for coupling layers while a channel-wise masking pattern is used afterward. </center> (see Figure 2(b)), \[\left\{ \begin{array}{l l}{x_{1:d}} & {= y_{1:d}}\\ {x_{d + 1:D}} & {= \left(y_{d + 1:D} - t(y_{1:d})\right)\odot \exp \left(-s(y_{1:d})\right),} \end{array} \right. \quad (7)\] meaning that sampling is as efficient as inference for this model. Note again that computing the inverse of the coupling layer does not require computing the inverse of \(s\) or \(t\) , so these functions can be arbitrarily complex and difficult to invert.

### 3.4 Masked convolution

Partitioning can be implemented using a binary mask \(b\) , and using the functional form for \(y\) , \[y = b\odot x + (1 - b)\odot \left(x\odot \exp \left(s(b\odot x)\right) + t(b\odot x)\right). \quad (9)\] We use two partitionings that exploit the local correlation structure of images: spatial checkerboard patterns, and channel-wise masking (see Figure 3). The spatial checkerboard pattern mask has value 1 where the sum of spatial coordinates is odd, and 0 otherwise. The channel-wise mask \(b\) is 1 for the first half of the channel dimensions and 0 for the second half. For the models presented here, both \(s(\cdot)\) and \(t(\cdot)\) are rectified convolutional networks.

### 3.5 Combining coupling layers

Although coupling layers can be powerful, their forward transformation leaves some components unchanged. This difficulty can be overcome by composing coupling layers in an alternating pattern, such that the components that are left unchanged in one coupling layer are updated in the next (see Figure 4(a)). The Jacobian determinant of the resulting function remains tractable, relying on the fact that \[\begin{array}{c}{\frac{\partial(f_{b}\circ f_{a})}{\partial x_{a}^{T}}(x_{a}) = \frac{\partial f_{a}}{\partial x_{a}^{T}}(x_{a})\cdot \frac{\partial f_{b}}{\partial x_{b}^{T}}\big(x_{b} = f_{a}(x_{a})\big)}\\ {\operatorname*{det}(A\cdot B) = \operatorname*{det}(A)\operatorname*{det}(B).} \end{array} \quad (10)\] Similarly, its inverse can be computed easily as \[(f_{b}\circ f_{a})^{-1} = f_{a}^{-1}\circ f_{b}^{-1}. \quad (12)\] <--- Page Split ---> ![](images/5_0.jpg) <center>(a) In this alternating pattern, units which remain identical in one transformation are modified in the next. </center> ![](images/5_1.jpg) <center>(b) Factoring out variables. At each step, half the variables are directly modeled as Gaussians, while the other half undergo further transformation. </center>
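The masked form of Eq. 9, the alternating composition of Eqs. 10-12, and the squeezing operation illustrated in Figure 3 (used by the multi-scale architecture of Sec. 3.6 below) can be sketched as follows. This is an illustrative sketch, not the released implementation; `couplings` is assumed to be a list of `(s_net, t_net)` pairs whose outputs match the input shape.

```python
import torch

def checkerboard_mask(h, w):
    # 1 where the sum of spatial coordinates is odd, 0 otherwise (Sec. 3.4).
    ii, jj = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    return ((ii + jj) % 2 == 1).float()

def masked_coupling(x, b, s_net, t_net):
    # Eq. 9: y = b*x + (1-b)*(x*exp(s(b*x)) + t(b*x)); the log|det| sums the
    # log-scales over the unmasked coordinates only (masked ones contribute 0).
    bx = b * x
    s = s_net(bx) * (1 - b)
    t = t_net(bx) * (1 - b)
    y = bx + (1 - b) * (x * torch.exp(s) + t)
    return y, s.flatten(1).sum(dim=1)

def compose(x, b, couplings):
    # Alternate b and 1-b (Fig. 4a); log-dets add since det(AB) = det(A)det(B).
    log_det = x.new_zeros(x.size(0))
    for i, (s_net, t_net) in enumerate(couplings):
        mask = b if i % 2 == 0 else 1 - b
        x, ld = masked_coupling(x, mask, s_net, t_net)
        log_det = log_det + ld
    return x, log_det

def squeeze(x):
    # Fold each 2x2 spatial block into channels: (N, c, s, s) -> (N, 4c, s/2, s/2),
    # trading spatial size for channels as in Figure 3.
    n, c, h, w = x.shape
    x = x.view(n, c, h // 2, 2, w // 2, 2).permute(0, 1, 3, 5, 2, 4)
    return x.contiguous().view(n, 4 * c, h // 2, w // 2)
```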
### 3.6 Multi-scale architecture

We implement a multi-scale architecture using a squeezing operation: for each channel, it divides the image into subsquares of shape \(2\times 2\times c\) , then reshapes them into subsquares of shape \(1\times 1\times 4c\) . The squeezing operation transforms an \(s\times s\times c\) tensor into an \(\frac{s}{2}\times \frac{s}{2}\times 4c\) tensor (see Figure 3), effectively trading spatial size for number of channels.

At each scale, we combine several operations into a sequence: we first apply three coupling layers with alternating checkerboard masks, then perform a squeezing operation, and finally apply three more coupling layers with alternating channel-wise masking. The channel-wise masking is chosen so that the resulting partitioning is not redundant with the previous checkerboard masking (see Figure 3). For the final scale, we only apply four coupling layers with alternating checkerboard masks.

Propagating a \(D\) dimensional vector through all the coupling layers would be cumbersome, in terms of computational and memory cost, and in terms of the number of parameters that would need to be trained. For this reason we follow the design choice of [57] and factor out half of the dimensions at regular intervals (see Equation 14). We can define this operation recursively (see Figure 4(b)), \[\begin{array}{c}{h^{(0)} = x}\\ {(z^{(i + 1)},h^{(i + 1)}) = f^{(i + 1)}(h^{(i)})}\\ {z^{(L)} = f^{(L)}(h^{(L - 1)})}\\ {z = (z^{(1)},\ldots ,z^{(L)}).} \end{array} \quad (13)\] In our experiments, we use this operation for \(i < L\) . The sequence of coupling-squeezing-coupling operations described above is performed per layer when computing \(f^{(i)}\) (Equation 14). At each layer, as the spatial resolution is reduced, the number of hidden layer features in \(s\) and \(t\) is doubled. All variables which have been factored out at different scales are concatenated to obtain the final transformed output (Equation 16).

As a consequence, the model must Gaussianize units which are factored out at a finer scale (in an earlier layer) before those which are factored out at a coarser scale (in a later layer). This results in the definition of intermediary levels of representation [53, 49] corresponding to more local, fine-grained features, as shown in Appendix D. Moreover, Gaussianizing and factoring out units in earlier layers has the practical benefit of distributing the loss function throughout the network, following a philosophy similar to guiding intermediate layers using intermediate classifiers [40]. It also reduces significantly the amount of computation and memory used by the model, allowing us to train larger models. <--- Page Split --->

### 3.7 Batch normalization

To further improve the propagation of the training signal, we use deep residual networks [24, 25] with batch normalization [31] and weight normalization [2, 54] in \(s\) and \(t\) . As described in Appendix E, we introduce and use a novel variant of batch normalization which is based on a running average over recent minibatches, and is thus more robust when training with very small minibatches. We also apply batch normalization to the whole coupling layer output. The effects of batch normalization are easily included in the Jacobian computation, since it acts as a linear rescaling on each dimension.
That is, given the estimated batch statistics \(\tilde{\mu}\) and \(\tilde{\sigma}^{2}\) , the rescaling function \[x\mapsto \frac{x - \tilde{\mu}}{\sqrt{\tilde{\sigma}^{2} + \epsilon}} \quad (17)\] has a Jacobian determinant \[\left(\prod_{i}\left(\tilde{\sigma}_{i}^{2} + \epsilon\right)\right)^{-\frac{1}{2}}. \quad (18)\] This form of batch normalization can be seen as similar to reward normalization in deep reinforcement learning [44, 65]. We found that the use of this technique not only allowed training with a deeper stack of coupling layers, but also alleviated the instability problem that practitioners often encounter when training conditional distributions with a scale parameter through a gradient-based approach.

## 4 Experiments

### 4.1 Procedure

The algorithm described in Equation 2 shows how to learn distributions on unbounded space. In general, the data of interest have bounded magnitude. For example, the pixel values of an image typically lie in \([0, 256]^{D}\) after application of the recommended jittering procedure [64, 62]. In order to reduce the impact of boundary effects, we instead model the density of \(\mathrm{logit}(\alpha + (1 - \alpha)\odot \frac{x}{256})\) , where \(\alpha\) is picked here as .05. We take this transformation into account when computing log-likelihood and bits per dimension. We also augment the CIFAR-10, CelebA and LSUN datasets during training to also include horizontal flips of the training examples.

We train our model on four natural image datasets: CIFAR-10 [36], Imagenet [52], Large-scale Scene Understanding (LSUN) [70], and CelebFaces Attributes (CelebA) [41]. More specifically, we train on the \(32\times 32\) and \(64\times 64\) downsampled versions of Imagenet [46]. For the LSUN dataset, we train on the bedroom, tower and church outdoor categories. The procedure for LSUN is the same as in [47]: we downsample the image so that the smallest side is 96 pixels and take random crops of \(64\times 64\) . For CelebA, we use the same procedure as in [38]: we take an approximately central crop of \(148\times 148\) then resize it to \(64\times 64\) .

We use the multi-scale architecture described in Section 3.6 and use deep convolutional residual networks in the coupling layers with rectifier nonlinearity and skip-connections as suggested by [46]. To compute the scaling functions \(s\) , we use a hyperbolic tangent function multiplied by a learned scale, whereas the translation function \(t\) has an affine output. Our multi-scale architecture is repeated recursively until the input of the last recursion is a \(4\times 4\times c\) tensor. For datasets of images of size \(32\times 32\) , we use 4 residual blocks with 32 hidden feature maps for the first coupling layers with checkerboard masking. Only 2 residual blocks are used for images of size \(64\times 64\) . We use a batch size of 64. For CIFAR-10, we use 8 residual blocks, 64 feature maps, and downscale only once. We optimize with ADAM [33] with default hyperparameters and use an \(L_{2}\) regularization on the weight scale parameters with coefficient \(5\cdot 10^{-5}\) .

We set the prior \(p_{Z}\) to be an isotropic unit norm Gaussian. However, any distribution could be used for \(p_{Z}\) , including distributions that are also learned during training, such as from an auto-regressive model, or (with slight modifications to the training objective) a variational autoencoder.
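For concreteness, the boundary-effect preprocessing described above can be sketched as follows; the log-determinant of the logit map must be added back to the model log-likelihood so that bits/dim are still measured in pixel space. This is a hedged sketch assuming dequantization noise has already been added to the pixels.

```python
import math
import torch

def preprocess(x, alpha=0.05):
    """Map pixels x in [0, 256) to y = logit(alpha + (1 - alpha) * x / 256)
    and return the per-example log|det| of the transformation."""
    p = alpha + (1 - alpha) * x / 256.0
    y = torch.log(p) - torch.log1p(-p)  # logit(p)
    # Elementwise dy/dx = (1 - alpha) / (256 * p * (1 - p)).
    const = math.log(1 - alpha) - math.log(256.0)
    log_det = (const - torch.log(p) - torch.log1p(-p)).flatten(1).sum(dim=1)
    return y, log_det

# For D-dimensional images, bits per dimension would then be computed as
# bits_per_dim = -(log_p_in_logit_space + log_det) / (D * math.log(2)).
```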
<--- Page Split ---> <table><tr><td>Dataset</td><td>PixelRNN [46]</td><td>Real NVP</td><td>Conv DRAW [22]</td><td>IAF-VAE [34]</td></tr><tr><td>CIFAR-10</td><td>3.00</td><td>3.49</td><td>&lt; 3.59</td><td>&lt; 3.28</td></tr><tr><td>Imagenet (32 × 32)</td><td>3.86 (3.83)</td><td>4.28 (4.26)</td><td>&lt; 4.40 (4.35)</td><td></td></tr><tr><td>Imagenet (64 × 64)</td><td>3.63 (3.57)</td><td>3.98 (3.75)</td><td>&lt; 4.10 (4.04)</td><td></td></tr><tr><td>LSUN (bedroom)</td><td></td><td>2.72 (2.70)</td><td></td><td></td></tr><tr><td>LSUN (tower)</td><td></td><td>2.81 (2.78)</td><td></td><td></td></tr><tr><td>LSUN (church outdoor)</td><td></td><td>3.08 (2.94)</td><td></td><td></td></tr><tr><td>CelebA</td><td></td><td>3.02 (2.97)</td><td></td><td></td></tr></table> Table 1: Bits/dim results for CIFAR-10, Imagenet, LSUN datasets and CelebA. Test results for CIFAR-10 and validation results for Imagenet, LSUN and CelebA (with training results in parentheses for reference). ![](images/7_0.jpg) <center>Figure 5: On the left column, examples from the dataset. On the right column, samples from the model trained on the dataset. The datasets shown in this figure are, in order: CIFAR-10, Imagenet \((32 \times 32)\) , Imagenet \((64 \times 64)\) , CelebA, LSUN (bedroom). </center>

### 4.2 Results

We show in Table 1 that the number of bits per dimension, while not improving over the PixelRNN [46] baseline, is competitive with other generative methods. As we notice that our performance increases with the number of parameters, larger models are likely to further improve performance. For CelebA and LSUN, the bits per dimension for the validation set was decreasing throughout training, so little overfitting is expected.

We show in Figure 5 samples generated from the model with training examples from the dataset for comparison. As mentioned in [62, 22], maximum likelihood is a principle that values diversity <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 6: Manifold generated from four examples in the dataset. Clockwise from top left: CelebA, Imagenet \((64 \times 64)\) , LSUN (tower), LSUN (bedroom). </center> over sample quality in a limited capacity setting. As a result, our model sometimes outputs highly improbable samples, as can be noticed especially on CelebA. As opposed to variational autoencoders, the samples generated from our model look not only globally coherent but also sharp. Our hypothesis is that, as opposed to these models, real NVP does not rely on a fixed form reconstruction cost like an \(L_{2}\) norm, which tends to reward capturing low frequency components more heavily than high frequency components. Unlike autoregressive models, sampling from our model is done very efficiently as it is parallelized over input dimensions. On Imagenet and LSUN, our model seems to have captured well the notion of background/foreground and lighting interactions such as luminosity and consistent light source direction for reflectance and shadows.

We also illustrate the smooth, semantically consistent meaning of our latent variables. In the latent space, we define a manifold based on four validation examples \(z_{(1)}, z_{(2)}, z_{(3)}, z_{(4)}\) , parametrized by two parameters \(\phi\) and \(\phi'\) by \[z = \cos (\phi)\left(\cos (\phi^{\prime})z_{(1)} + \sin (\phi^{\prime})z_{(2)}\right) + \sin (\phi)\left(\cos (\phi^{\prime})z_{(3)} + \sin (\phi^{\prime})z_{(4)}\right). \quad (19)\] We project the resulting manifold back into the data space by computing \(g(z)\) .
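A sketch of the manifold construction of Eq. 19 follows, assuming four latent codes of identical shape; with `steps = 8`, the two angles sweep \(\{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\) as in the appendix figures, and each grid point would then be decoded with \(g(z)\).

```python
import torch

def manifold_grid(z1, z2, z3, z4, steps=8):
    """Build the two-parameter latent manifold of Eq. 19 from four codes."""
    angles = torch.arange(steps) * (2 * torch.pi / steps)
    rows = []
    for phi in angles:
        # One row of the grid: phi fixed, phi' sweeping the same angles.
        row = [torch.cos(phi) * (torch.cos(p) * z1 + torch.sin(p) * z2)
               + torch.sin(phi) * (torch.cos(p) * z3 + torch.sin(p) * z4)
               for p in angles]
        rows.append(torch.stack(row))
    return torch.stack(rows)  # shape: (steps, steps, *z1.shape)
```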
Results are shown in Figure 6. We observe that the model seems to have organized the latent space with a notion of meaning that goes well beyond pixel space interpolation. More manifold visualizations are shown in the Appendix. To further test whether the latent space has a consistent semantic interpretation, we trained a class-conditional model on CelebA, and found that the learned representation had a consistent semantic meaning across class labels (see Appendix F).

## 5 Discussion and conclusion

In this paper, we have defined a class of invertible functions with tractable Jacobian determinant, enabling exact and tractable log-likelihood evaluation, inference, and sampling. We have shown that this class of generative model achieves competitive performance, both in terms of sample quality and log-likelihood. Many avenues exist to further improve the functional form of the transformations, for instance by exploiting the latest advances in dilated convolutions [69] and residual network architectures [60].

This paper presented a technique bridging the gap between auto-regressive models, variational autoencoders, and generative adversarial networks. Like auto-regressive models, it allows tractable and exact log-likelihood evaluation for training. It allows, however, a much more flexible functional form, similar to that in the generative model of variational autoencoders. This allows for fast and exact sampling from the model distribution. Like GANs, and unlike variational autoencoders, our technique does not require the use of a fixed form reconstruction cost, and instead defines a cost in terms of higher level features, generating sharper images. Finally, unlike both variational <--- Page Split ---> autoencoders and GANs, our technique is able to learn a semantically meaningful latent space which is as high dimensional as the input space. This may make the algorithm particularly well suited to semi-supervised learning tasks, as we hope to explore in future work.

Real NVP generative models can additionally be conditioned on additional variables (for instance class labels) to create a structured output algorithm. Moreover, as the resulting class of invertible transformations can be treated as a probability distribution in a modular way, it can also be used to improve upon other probabilistic models like auto-regressive models and variational autoencoders. For variational autoencoders, these transformations could be used both to enable a more flexible reconstruction cost [38] and a more flexible stochastic inference distribution [48]. Probabilistic models in general can also benefit from batch normalization techniques as applied in this paper.

The definition of powerful and trainable invertible functions can also benefit domains other than generative unsupervised learning. For example, in reinforcement learning, these invertible functions can help extend the set of functions for which an argmax operation is tractable for continuous \(Q\)-learning [23], or find representations where local linear Gaussian approximations are more appropriate [67].

## 6 Acknowledgments

The authors thank the developers of Tensorflow [1]. We thank Sherry Moore, David Andersen and Jon Shlens for their help in implementing the model. We thank Aaron van den Oord, Yann Dauphin, Kyle Kastner, Chelsea Finn, Maithra Raghu, David Warde-Farley, Daniel Jiwoong Im and Oriol Vinyals for fruitful discussions. Finally, we thank Ben Poole, Rafal Jozefowicz and George Dahl for their input on a draft of the paper.
## References [1] Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large- scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016. [2] Vijay Badrinarayanan, Bamdev Mishra, and Roberto Cipolla. Understanding symmetries in deep networks. arXiv preprint arXiv:1511.01029, 2015. [3] Johannes Ballé, Valero Laparra, and Eero P Simoncelli. Density modeling of images using a generalized normalization transformation. arXiv preprint arXiv:1511.06281, 2015. [4] Anthony J Bell and Terrence J Sejnowski. An information- maximization approach to blind separation and blind deconvolution. Neural computation, 7(6):1129- 1159, 1995. [5] Yoshua Bengio. Artificial neural networks and their application to sequence recognition. 1991. [6] Yoshua Bengio and Samy Bengio. Modeling high- dimensional discrete data with multi- layer neural networks. In NIPS, volume 99, pages 400- 406, 1999. [7] Mathias Berglund and Tapani Raiko. Stochastic gradient estimate variance in contrastive divergence and persistent contrastive divergence. arXiv preprint arXiv:1312.6002, 2013. [8] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015. [9] Joan Bruna, Pablo Sprechmann, and Yann LeCun. Super- resolution with deep convolutional sufficient statistics. arXiv preprint arXiv:1511.05666, 2015. [10] Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015. [11] Scott Shaobing Chen and Ramesh A Gopinath. Gaussianization. In Advances in Neural Information Processing Systems, 2000. [12] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pages 2962- 2970, 2015. [13] Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The helmholtz machine. Neural computation, 7(5):889- 904, 1995. [14] Gustavo Deco and Wilfried Brauer. Higher order statistical decorrelation without information loss. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 247- 254. MIT Press, 1995. [15] Emily L. Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 28: <--- Page Split ---> Annual Conference on Neural Information Processing Systems 2015, December 7- 12, 2015, Montreal, Quebec, Canada, pages 1486- 1494, 2015. [16] Luc Devroye. Sample- based non- uniform random variate generation. In Proceedings of the 18th conference on Winter simulation, pages 260- 265. ACM, 1986. [17] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: non- linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. [18] Brendan J Frey. Graphical models for machine learning and digital communication. MIT press, 1998. [19] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7- 12, 2015, Montreal, Quebec, Canada, pages 262- 270, 2015. 
[20] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked autoencoder for distribution estimation. CoRR, abs/1502.03509, 2015.
[21] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pages 2672-2680, 2014.
[22] Karol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards conceptual compression. arXiv preprint arXiv:1604.08772, 2016.
[23] Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep Q-learning with model-based acceleration. arXiv preprint arXiv:1603.00748, 2016.
[24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. CoRR, abs/1603.05027, 2016.
[26] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[27] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303-1347, 2013.
[28] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent Component Analysis, volume 46. John Wiley & Sons, 2004.
[29] Aapo Hyvärinen and Petteri Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429-439, 1999.
[30] Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
[31] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[32] Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016.
[33] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[34] Diederik P. Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.
[35] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[36] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
[37] Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
[38] Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. CoRR, abs/1512.09300, 2015.
[39] Yann A. LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9-48. Springer, 2012.
[40] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. arXiv preprint arXiv:1409.5185, 2014.
[41] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), December 2015.
[42] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[43] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.
[44] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[45] Radford M. Neal and Geoffrey E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Learning in Graphical Models, pages 355-368. Springer, 1998.
[46] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[47] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
[48] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
[49] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[50] Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models. arXiv preprint arXiv:1302.5125, 2013.
[51] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1, 1988.
[52] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
[53] Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pages 448-455, 2009.
[54] Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
[55] Tim Salimans, Diederik P. Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. arXiv preprint arXiv:1410.6460, 2014.
[56] Lawrence K. Saul, Tommi Jaakkola, and Michael I. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4(1):61-76, 1996.
[57] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[58] Paul Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, DTIC Document, 1986.
[59] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015), pages 2256-2265, 2015.
[60] Sasha Targ, Diogo Almeida, and Kevin Lyman. Resnet in resnet: Generalizing residual architectures. CoRR, abs/1603.08029, 2016.
[61] Lucas Theis and Matthias Bethge. Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems, pages 1918-1926, 2015.
[62] Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. CoRR, abs/1511.01844, 2015.
[63] Dustin Tran, Rajesh Ranganath, and David M. Blei. Variational Gaussian process. arXiv preprint arXiv:1511.06499, 2015.
[64] Benigno Uria, Iain Murray, and Hugo Larochelle. RNADE: The real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems, pages 2175-2183, 2013.
[65] Hado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning functions across many orders of magnitudes. arXiv preprint arXiv:1602.07714, 2016.
[66] Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint arXiv:1511.06391, 2015.
[67] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pages 2728-2736, 2015.
[68] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
[69] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
[70] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
[71] Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. arXiv preprint arXiv:1603.08511, 2016.

## A Samples

![](images/12_0.jpg)
<center>Figure 7: Samples from a model trained on Imagenet \((64 \times 64)\).</center>

![](images/13_0.jpg)
<center>Figure 8: Samples from a model trained on CelebA.</center>

![](images/14_0.jpg)
<center>Figure 9: Samples from a model trained on LSUN (bedroom category).</center>

![](images/15_0.jpg)
<center>Figure 10: Samples from a model trained on LSUN (church outdoor category).</center>

![](images/16_0.jpg)
<center>Figure 11: Samples from a model trained on LSUN (tower category).</center>

## B Manifold

![](images/17_0.jpg)
<center>Figure 12: Manifold from a model trained on Imagenet \((64 \times 64)\). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to \(\phi\), and the y-axis to \(\phi'\), and where \(\phi, \phi' \in \{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\).</center>

![](images/18_0.jpg)
<center>Figure 13: Manifold from a model trained on CelebA. Images with red borders are taken from the training set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to \(\phi\), and the y-axis to \(\phi'\), and where \(\phi, \phi' \in \{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\).</center>

![](images/19_0.jpg)
<center>Figure 14: Manifold from a model trained on LSUN (bedroom category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to \(\phi\), and the y-axis to \(\phi'\), and where \(\phi, \phi' \in \{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\).</center>

![](images/20_0.jpg)
<center>Figure 15: Manifold from a model trained on LSUN (church outdoor category).
Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to \(\phi\), and the y-axis to \(\phi'\), and where \(\phi, \phi' \in \{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\).</center>

![](images/21_0.jpg)
<center>Figure 16: Manifold from a model trained on LSUN (tower category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to \(\phi\), and the y-axis to \(\phi'\), and where \(\phi, \phi' \in \{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\).</center>

## C Extrapolation

Inspired by the texture generation work of [19, 61] and the extrapolation test with DCGAN [47], we also evaluate the statistics captured by our model by generating images twice or ten times as large as those present in the dataset. As we can observe in the following figures, our model seems to successfully create a "texture" representation of the dataset while maintaining spatial smoothness throughout the image. Our convolutional architecture is only aware of the position of the considered pixel through edge effects in convolutions; therefore, our model behaves similarly to a stationary process. This also explains why these samples are more consistent in LSUN, where the training data was obtained using random crops.

![](images/22_0.jpg)
<center>Figure 17: We generate samples a factor bigger than the training set image size on Imagenet (64 × 64).</center>

![](images/23_0.jpg)
<center>Figure 18: We generate samples a factor bigger than the training set image size on CelebA.</center>

![](images/24_0.jpg)
<center>Figure 19: We generate samples a factor bigger than the training set image size on LSUN (bedroom category).</center>

![](images/25_0.jpg)
<center>Figure 20: We generate samples a factor bigger than the training set image size on LSUN (church outdoor category).</center>

![](images/26_0.jpg)
<center>Figure 21: We generate samples a factor bigger than the training set image size on LSUN (tower category).</center>

## D Latent variables semantic

As in [22], we further try to grasp the semantics of the latent variables in our learned layers by performing ablation tests. We infer the latent variables and resample the lowest levels of latent variables from a standard Gaussian, increasing the highest level affected by this resampling. As we can see in the following figures, the semantics of our latent space seem to operate more at a graphical level than at the level of higher concepts. Although the heavy use of convolution improves learning by exploiting image prior knowledge, it is also likely to be responsible for this limitation.

![](images/27_0.jpg)
<center>Figure 22: Conceptual compression from a model trained on Imagenet \((64 \times 64)\). The leftmost column represents the original images; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: \(100\%\), \(50\%\), \(25\%\), \(12.5\%\) and \(6.25\%\) of the latent variables are kept.</center>

![](images/27_1.jpg)
<center>Figure 23: Conceptual compression from a model trained on CelebA. The leftmost column represents the original images; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: \(100\%\), \(50\%\), \(25\%\), \(12.5\%\) and \(6.25\%\) of the latent variables are kept.</center>

![](images/28_0.jpg)
<center>Figure 24: Conceptual compression from a model trained on LSUN (bedroom category). The leftmost column represents the original images; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: \(100\%\), \(50\%\), \(25\%\), \(12.5\%\) and \(6.25\%\) of the latent variables are kept.</center>

![](images/28_1.jpg)
<center>Figure 25: Conceptual compression from a model trained on LSUN (church outdoor category). The leftmost column represents the original images; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: \(100\%\), \(50\%\), \(25\%\), \(12.5\%\) and \(6.25\%\) of the latent variables are kept.</center>

![](images/29_0.jpg)
<center>Figure 26: Conceptual compression from a model trained on LSUN (tower category). The leftmost column represents the original images; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: \(100\%\), \(50\%\), \(25\%\), \(12.5\%\) and \(6.25\%\) of the latent variables are kept.</center>

## E Batch normalization

We further experimented with batch normalization by using a weighted average of a moving average of the layer statistics \(\tilde{\mu}_{t}, \tilde{\sigma}_{t}^{2}\) and the current batch statistics \(\hat{\mu}_{t}, \hat{\sigma}_{t}^{2}\),

\[\begin{array}{r}{\tilde{\mu}_{t + 1} = \rho \tilde{\mu}_{t} + (1 - \rho)\hat{\mu}_{t}}\\ {\tilde{\sigma}_{t + 1}^{2} = \rho \tilde{\sigma}_{t}^{2} + (1 - \rho)\hat{\sigma}_{t}^{2},} \end{array} \quad (21)\]

where \(\rho\) is the momentum. When using \(\tilde{\mu}_{t + 1}, \tilde{\sigma}_{t + 1}^{2}\), we only propagate gradient through the current batch statistics \(\hat{\mu}_{t}, \hat{\sigma}_{t}^{2}\). We observe that using this lag helps the model train with very small minibatches. We used batch normalization with a moving average for our results on CIFAR-10.

## F Attribute change

Additionally, we exploit the attribute information \(y\) in CelebA to build a conditional model, i.e. the invertible function \(f\) from image to latent variable uses the labels in \(y\) to define its parameters. In order to observe the information stored in the latent variables, we choose to encode a batch of images \(x\) with their original attributes \(y\) and decode them using a new set of attributes \(y'\), built by shuffling the original attributes inside the batch. We obtain the new images \(x' = g(f(x; y); y')\). We observe that, although the faces are changed so as to respect the new attributes, several properties remain unchanged, like position and background.

![](images/30_0.jpg)
<center>Figure 27: Examples \(x\) from the CelebA dataset.</center>

![](images/31_0.jpg)
<center>Figure 28: From a model trained on pairs of images and attributes from the CelebA dataset, we encode a batch of images with their original attributes before decoding them with a new set of attributes. We notice that the new images often share similar characteristics with those in Fig. 27, including position and background.</center>
## ABSTRACT Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real- valued non- volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log- likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log- likelihood evaluation, and latent variable manipulations. ## 1 Introduction The domain of representation learning has undergone tremendous advances due to improved supervised learning techniques. However, unsupervised learning has the potential to leverage large pools of unlabeled data, and extend these advances to modalities that are otherwise impractical or impossible. One principled approach to unsupervised learning is generative probabilistic modeling. Not only do generative probabilistic models have the ability to create novel content, they also have a wide range of reconstruction related applications including inpainting [61, 46, 59], denoising [3], colorization [71], and super- resolution [9]. As data of interest are generally high- dimensional and highly structured, the challenge in this domain is building models that are powerful enough to capture its complexity yet still trainable. We address this challenge by introducing real- valued non- volume preserving (real NVP) transformations, a tractable yet expressive approach to modeling high- dimensional data. This model can perform efficient and exact inference, sampling and log- density estimation of data points. Moreover, the architecture presented in this paper enables exact and efficient reconstruction of input images from the hierarchical features extracted by this model. ## 2 Related work Substantial work on probabilistic generative models has focused on training models using maximum likelihood. One class of maximum likelihood models are those described by probabilistic undirected graphs, such as Restricted Boltzmann Machines [58] and Deep Boltzmann Machines [53]. These models are trained by taking advantage of the conditional independence property of their bipartite structure to allow efficient exact or approximate posterior inference on latent variables. However, because of the intractability of the associated marginal distribution over latent variables, their training, evaluation, and sampling procedures necessitate the use of approximations like Mean Field inference and Markov Chain Monte Carlo, whose convergence time for such complex models <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Real NVP learns an invertible, stable, mapping between a data distribution \(\hat{p}_{X}\) and a latent distribution \(p_{Z}\) (typically a Gaussian). Here we show a mapping that has been learned on a toy 2-d dataset. The function \(f(x)\) maps samples \(x\) from the data distribution in the upper left into approximate samples \(z\) from the latent distribution, in the upper right. This corresponds to exact inference of the latent state given the data. The inverse function, \(f^{-1}(z)\) , maps samples \(z\) from the latent distribution in the lower right into approximate samples \(x\) from the data distribution in the lower left. 
This corresponds to exact generation of samples from the model. The transformation of grid lines in \(\mathcal{X}\) and \(\mathcal{Z}\) space is additionally illustrated for both \(f(x)\) and \(f^{-1}(z)\) . </center> remains undetermined, often resulting in generation of highly correlated samples. Furthermore, these approximations can often hinder their performance [7]. Directed graphical models are instead defined in terms of an ancestral sampling procedure, which is appealing both for its conceptual and computational simplicity. They lack, however, the conditional independence structure of undirected models, making exact and approximate posterior inference on latent variables cumbersome [56]. Recent advances in stochastic variational inference [27] and amortized inference [13, 43, 35, 49], allowed efficient approximate inference and learning of deep directed graphical models by maximizing a variational lower bound on the log- likelihood [45]. In particular, the variational autoencoder algorithm [35, 49] simultaneously learns a generative network, that maps gaussian latent variables \(z\) to samples \(x\) , and a matched approximate inference network that maps samples \(x\) to a semantically meaningful latent representation \(z\) , by exploiting the reparametrization trick [68]. Its success in leveraging recent advances in backpropagation [51, 39] in deep neural networks resulted in its adoption for several applications ranging from speech synthesis [12] to language modeling [8]. Still, the approximation in the inference process limits its ability to learn high dimensional deep representations, motivating recent work in improving approximate inference [42, 48, 55, 63, 10, 59, 34]. Such approximations can be avoided altogether by abstaining from using latent variables. Autoregressive models [18, 6, 37, 20] can implement this strategy while typically retaining a great deal of flexibility. This class of algorithms tractably models the joint distribution by decomposing it into a product of conditionals using the probability chain rule according to a fixed ordering over dimensions, simplifying log- likelihood evaluation and sampling. Recent work in this line of research has taken advantage of recent advances in recurrent networks [51], in particular long- short term memory [26], and residual networks [25, 24] in order to learn state- of- the- art generative image models [61, 46] and language models [32]. The ordering of the dimensions, although often arbitrary, can be critical to the training of the model [66]. The sequential nature of this model limits its computational efficiency. For example, its sampling procedure is sequential and non- parallelizable, which can become cumbersome in applications like speech and music synthesis, or real- time rendering. Additionally, there is no natural latent representation associated with autoregressive models, and they have not yet been shown to be useful for semi- supervised learning. <--- Page Split ---> Generative Adversarial Networks (GANs) [21] on the other hand can train any differentiable generative network by avoiding the maximum likelihood principle altogether. Instead, the generative network is associated with a discriminator network whose task is to distinguish between samples and real data. Rather than using an intractable log- likelihood, this discriminator network provides the training signal in an adversarial fashion. Successfully trained GAN models [21, 15, 47] can consistently generate sharp and realistically looking samples [38]. 
However, metrics that measure the diversity in the generated samples are currently intractable [62, 22, 30]. Additionally, instability in their training process [47] requires careful hyperparameter tuning to avoid diverging behavior.

Training such a generative network \(g\) that maps latent variable \(z \sim p_{Z}\) to a sample \(x \sim p_{X}\) does not in theory require a discriminator network as in GANs, or approximate inference as in variational autoencoders. Indeed, if \(g\) is bijective, it can be trained through maximum likelihood using the change of variable formula:

\[p_{X}(x) = p_{Z}(z)\left|\operatorname *{det}\left(\frac{\partial g(z)}{\partial z^{T}}\right)\right|^{-1}. \quad (1)\]

This formula has been discussed in several papers, including the maximum likelihood formulation of independent components analysis (ICA) [4, 28], gaussianization [14, 11] and deep density models [5, 50, 17, 3]. As the existence proof of nonlinear ICA solutions [29] suggests, auto-regressive models can be seen as a tractable instance of maximum likelihood nonlinear ICA, where the residual corresponds to the independent components. However, naive application of the change of variable formula produces models which are computationally expensive and poorly conditioned, and so large-scale models of this type have not entered general use.

## 3 Model definition

In this paper, we will tackle the problem of learning highly nonlinear models in high-dimensional continuous spaces through maximum likelihood. In order to optimize the log-likelihood, we introduce a more flexible class of architectures that enables the computation of log-likelihood on continuous data using the change of variable formula. Building on our previous work in [17], we define a powerful class of bijective functions which enable exact and tractable density evaluation and exact and tractable inference. Moreover, the resulting cost function does not need to rely on a fixed-form reconstruction cost such as square error [38, 47], and generates sharper samples as a result. Also, this flexibility helps us leverage recent advances in batch normalization [31] and residual networks [24, 25] to define a very deep multi-scale architecture with multiple levels of abstraction.

### 3.1 Change of variable formula

Given an observed data variable \(x \in X\), a simple prior probability distribution \(p_{Z}\) on a latent variable \(z \in Z\), and a bijection \(f: X \to Z\) (with \(g = f^{-1}\)), the change of variable formula defines a model distribution on \(X\) by

\[\begin{array}{c}{p_{X}(x) = p_{Z}\big(f(x)\big)\left|\operatorname *{det}\left(\frac{\partial f(x)}{\partial x^{T}}\right)\right|}\\ {\log \left(p_{X}(x)\right) = \log \left(p_{Z}\big(f(x)\big)\right) + \log \left(\left|\operatorname *{det}\left(\frac{\partial f(x)}{\partial x^{T}}\right)\right|\right),} \end{array} \quad (2)\]

where \(\frac{\partial f(x)}{\partial x^{T}}\) is the Jacobian of \(f\) at \(x\). Exact samples from the resulting distribution can be generated by using the inverse transform sampling rule [16]. A sample \(z \sim p_{Z}\) is drawn in the latent space, and its inverse image \(x = f^{-1}(z) = g(z)\) generates a sample in the original space. Computing the density at a point \(x\) is accomplished by computing the density of its image \(f(x)\) and multiplying by the associated Jacobian determinant \(\operatorname *{det}\left(\frac{\partial f(x)}{\partial x^{T}}\right)\). See also Figure 1. Exact and efficient inference enables the accurate and fast evaluation of the model.
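To make Equations 1 and 2 concrete, here is a minimal NumPy sketch of the change-of-variable formula and inverse transform sampling for a toy one-dimensional bijection. The choice \(f(x) = \sinh(x)\) and all function names are illustrative assumptions, not part of the model described in this paper.

```python
import numpy as np

# Toy bijection f: X -> Z with analytic inverse g = f^{-1} and a
# strictly positive derivative, so |det| reduces to cosh(x) in 1-d.
def f(x):
    return np.sinh(x)

def g(z):
    return np.arcsinh(z)

def log_pz(z):
    # Standard Gaussian prior density on the latent variable z.
    return -0.5 * (z ** 2 + np.log(2.0 * np.pi))

def log_px(x):
    # Equation 2: log p_X(x) = log p_Z(f(x)) + log|det(df/dx)|.
    return log_pz(f(x)) + np.log(np.cosh(x))

def sample(n, seed=0):
    # Inverse transform sampling: draw z ~ p_Z, return x = g(z).
    z = np.random.default_rng(seed).standard_normal(n)
    return g(z)
```

The same two ingredients, a prior log-density and a tractable log-Jacobian-determinant, are all that the deep models in the following sections need to expose.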
<--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: Computational graphs for forward and inverse propagation. A coupling layer applies a simple invertible transformation consisting of scaling followed by addition of a constant offset to one part \(\mathbf{x}_{2}\) of the input vector conditioned on the remaining part of the input vector \(\mathbf{x}_{1}\) . Because of its simple nature, this transformation is both easily invertible and possesses a tractable determinant. However, the conditional nature of this transformation, captured by the functions \(s\) and \(t\) , significantly increase the flexibility of this otherwise weak function. The forward and inverse propagation operations have identical computational cost. </center> ### 3.2 Coupling layers Computing the Jacobian of functions with high- dimensional domain and codomain and computing the determinants of large matrices are in general computationally very expensive. This combined with the restriction to bijective functions makes Equation 2 appear impractical for modeling arbitrary distributions. As shown however in [17], by careful design of the function \(f\) , a bijective model can be learned which is both tractable and extremely flexible. As computing the Jacobian determinant of the transformation is crucial to effectively train using this principle, this work exploits the simple observation that the determinant of a triangular matrix can be efficiently computed as the product of its diagonal terms. We will build a flexible and tractable bijective function by stacking a sequence of simple bijections. In each simple bijection, part of the input vector is updated using a function which is simple to invert, but which depends on the remainder of the input vector in a complex way. We refer to each of these simple bijections as an affine coupling layer. Given a \(D\) dimensional input \(x\) and \(d < D\) , the output \(y\) of an affine coupling layer follows the equations \[\begin{array}{r}{y_{1:d} = x_{1:d}}\\ {y_{d + 1:D} = x_{d + 1:D}\odot \exp \left(s(x_{1:d})\right) + t(x_{1:d}),} \end{array} \quad (4)\] where \(s\) and \(t\) stand for scale and translation, and are functions from \(R^{d} \mapsto R^{D - d}\) , and \(\odot\) is the Hadamard product or element- wise product (see Figure 2(a)). ### 3.3 Properties The Jacobian of this transformation is \[\frac{\partial y}{\partial x^{T}} = \left[ \begin{array}{cc}\mathbb{I}_{d} & 0\\ \frac{\partial y_{d + 1:D}}{\partial x_{1:d}^{T}} & \mathrm{diag}\left(\exp \left[s\left(x_{1:d}\right)\right]\right) \end{array} \right], \quad (6)\] where \(\mathrm{diag}\left(\exp \left[s\left(x_{1:d}\right)\right]\right)\) is the diagonal matrix whose diagonal elements correspond to the vector \(\exp \left[s\left(x_{1:d}\right)\right]\) . Given the observation that this Jacobian is triangular, we can efficiently compute its determinant as \(\exp \left[\sum_{j}s\left(x_{1:d}\right)_{j}\right]\) . Since computing the Jacobian determinant of the coupling layer operation does not involve computing the Jacobian of \(s\) or \(t\) , those functions can be arbitrarily complex. We will make them deep convolutional neural networks. Note that the hidden layers of \(s\) and \(t\) can have more features than their input and output layers. Another interesting property of these coupling layers in the context of defining probabilistic models is their invertibility. 
Indeed, computing the inverse is no more complex than the forward propagation (see Figure 2(b)),

\[\left\{ \begin{array}{l l}{x_{1:d}} & {= y_{1:d}}\\ {x_{d + 1:D}} & {= \left(y_{d + 1:D} - t(y_{1:d})\right)\odot \exp \left(-s(y_{1:d})\right),} \end{array} \right. \quad (7)\]

meaning that sampling is as efficient as inference for this model. Note again that computing the inverse of the coupling layer does not require computing the inverse of \(s\) or \(t\), so these functions can be arbitrarily complex and difficult to invert.

![](images/4_0.jpg)
<center>Figure 3: Masking schemes for affine coupling layers. On the left, a spatial checkerboard pattern mask. On the right, a channel-wise masking. The squeezing operation reduces the \(4 \times 4 \times 1\) tensor (on the left) into a \(2 \times 2 \times 4\) tensor (on the right). Before the squeezing operation, a checkerboard pattern is used for coupling layers, while a channel-wise masking pattern is used afterward.</center>

### 3.4 Masked convolution

Partitioning can be implemented using a binary mask \(b\), and using the functional form for \(y\),

\[y = b\odot x + (1 - b)\odot \left(x\odot \exp \left(s(b\odot x)\right) + t(b\odot x)\right). \quad (9)\]

We use two partitionings that exploit the local correlation structure of images: spatial checkerboard patterns, and channel-wise masking (see Figure 3). The spatial checkerboard pattern mask has value 1 where the sum of spatial coordinates is odd, and 0 otherwise. The channel-wise mask \(b\) is 1 for the first half of the channel dimensions and 0 for the second half. For the models presented here, both \(s(\cdot)\) and \(t(\cdot)\) are rectified convolutional networks.

### 3.5 Combining coupling layers

Although coupling layers can be powerful, their forward transformation leaves some components unchanged. This difficulty can be overcome by composing coupling layers in an alternating pattern, such that the components that are left unchanged in one coupling layer are updated in the next (see Figure 4(a)). The Jacobian determinant of the resulting function remains tractable, relying on the fact that

\[\begin{array}{c}{\frac{\partial(f_{b}\circ f_{a})}{\partial x_{a}^{T}}(x_{a}) = \frac{\partial f_{a}}{\partial x_{a}^{T}}(x_{a})\cdot \frac{\partial f_{b}}{\partial x_{b}^{T}}\big(x_{b} = f_{a}(x_{a})\big)}\\ {\operatorname *{det}(A\cdot B) = \operatorname *{det}(A)\operatorname *{det}(B).} \end{array} \quad (10)\]

Similarly, its inverse can be computed easily as

\[(f_{b}\circ f_{a})^{-1} = f_{a}^{-1}\circ f_{b}^{-1}. \quad (12)\]

![](images/5_0.jpg)
<center>(a) In this alternating pattern, units which remain identical in one transformation are modified in the next.</center>

![](images/5_1.jpg)
<center>(b) Factoring out variables. At each step, half the variables are directly modeled as Gaussians, while the other half undergo further transformation.</center>

### 3.6 Multi-scale architecture

We implement a multi-scale architecture using a squeezing operation: for each channel, it divides the image into subsquares of shape \(2\times 2\times c\), then reshapes them into subsquares of shape \(1\times 1\times 4c\). The squeezing operation transforms an \(s\times s\times c\) tensor into an \(\frac{s}{2}\times \frac{s}{2}\times 4c\) tensor (see Figure 3), effectively trading spatial size for number of channels.
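The pieces introduced in Sections 3.2-3.6 are compact enough to sketch directly. Below is a minimal NumPy illustration of an affine coupling layer (forward map of Equation 4, inverse of Equation 7, and the triangular-Jacobian log-determinant) together with the squeezing operation; the toy affine stand-ins for the deep convolutional networks \(s\) and \(t\), and all names, are illustrative assumptions.

```python
import numpy as np

D, d = 6, 3                               # input dimension and split point
rng = np.random.default_rng(0)
Ws, Wt = rng.normal(size=(D - d, d)), rng.normal(size=(D - d, d))

def s(x1):                                # stand-in for the deep scale network
    return np.tanh(Ws @ x1)

def t(x1):                                # stand-in for the deep translation network
    return Wt @ x1

def coupling_forward(x):
    x1, x2 = x[:d], x[d:]
    y2 = x2 * np.exp(s(x1)) + t(x1)       # Equation 4
    log_det = np.sum(s(x1))               # sum over the diagonal terms exp(s)
    return np.concatenate([x1, y2]), log_det

def coupling_inverse(y):
    y1, y2 = y[:d], y[d:]
    x2 = (y2 - t(y1)) * np.exp(-s(y1))    # Equation 7: s and t are never inverted
    return np.concatenate([y1, x2])

def squeeze(x):
    # An s x s x c tensor becomes (s/2) x (s/2) x 4c, trading space for channels.
    n, _, c = x.shape
    x = x.reshape(n // 2, 2, n // 2, 2, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(n // 2, n // 2, 4 * c)

x = rng.standard_normal(D)
y, _ = coupling_forward(x)
assert np.allclose(coupling_inverse(y), x)               # exact invertibility
assert squeeze(rng.standard_normal((4, 4, 1))).shape == (2, 2, 4)
```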
At each scale, we combine several operations into a sequence: we first apply three coupling layers with alternating checkerboard masks, then perform a squeezing operation, and finally apply three more coupling layers with alternating channel- wise masking. The channel- wise masking is chosen so that the resulting partitioning is not redundant with the previous checkerboard masking (see Figure 3). For the final scale, we only apply four coupling layers with alternating checkerboard masks. Propagating a \(D\) dimensional vector through all the coupling layers would be cumbersome, in terms of computational and memory cost, and in terms of the number of parameters that would need to be trained. For this reason we follow the design choice of [57] and factor out half of the dimensions at regular intervals (see Equation 14). We can define this operation recursively (see Figure 4(b)), \[\begin{array}{c}{h^{(0)} = x}\\ {(z^{(i + 1)},h^{(i + 1)}) = f^{(i + 1)}(h^{(i)})}\\ {z^{(L)} = f^{(L)}(h^{(L - 1)})}\\ {z = (z^{(1)},\ldots ,z^{(L)}).} \end{array} \quad (13)\] In our experiments, we use this operation for \(i< L\) . The sequence of coupling- squeezing- coupling operations described above is performed per layer when computing \(f^{(i)}\) (Equation 14). At each layer, as the spatial resolution is reduced, the number of hidden layer features in \(s\) and \(t\) is doubled. All variables which have been factored out at different scales are concatenated to obtain the final transformed output (Equation 16). As a consequence, the model must Gaussianize units which are factored out at a finer scale (in an earlier layer) before those which are factored out at a coarser scale (in a later layer). This results in the definition of intermediary levels of representation [53, 49] corresponding to more local, fine- grained features as shown in Appendix D. Moreover, Gaussianizing and factoring out units in earlier layers has the practical benefit of distributing the loss function throughout the network, following the philosophy similar to guiding intermediate layers using intermediate classifiers [40]. It also reduces significantly the amount of computation and memory used by the model, allowing us to train larger models. <--- Page Split ---> ### 3.7 Batch normalization To further improve the propagation of training signal, we use deep residual networks [24, 25] with batch normalization [31] and weight normalization [2, 54] in \(s\) and \(t\) . As described in Appendix E we introduce and use a novel variant of batch normalization which is based on a running average over recent minibatches, and is thus more robust when training with very small minibatches. We also apply batch normalization to the whole coupling layer output. The effects of batch normalization are easily included in the Jacobian computation, since it acts as a linear rescaling on each dimension. That is, given the estimated batch statistics \(\tilde{\mu}\) and \(\tilde{\sigma}^{2}\) , the rescaling function \[x\mapsto \frac{x - \tilde{\mu}}{\sqrt{\tilde{\sigma}^{2} + \epsilon}} \quad (17)\] has a Jacobian determinant \[\left(\prod_{i}\left(\tilde{\sigma}_{i}^{2} + \epsilon\right)\right)^{-\frac{1}{2}}. \quad (18)\] This form of batch normalization can be seen as similar to reward normalization in deep reinforcement learning [44, 65]. 
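As a concrete illustration of the rescaling in Equation 17 and its log-Jacobian-determinant contribution in Equation 18, here is a minimal NumPy sketch; `mu`, `var` and `EPS` stand for the estimated batch statistics and the stabilizing constant \(\epsilon\), and the names are illustrative.

```python
import numpy as np

EPS = 1e-4  # illustrative choice for the stabilizing constant epsilon

def bn_rescale(x, mu, var):
    # Equation 17: per-dimension linear rescaling by the batch statistics.
    return (x - mu) / np.sqrt(var + EPS)

def bn_log_det(var):
    # Equation 18: |det| = prod_i (var_i + eps)^(-1/2), hence
    # log|det| = -0.5 * sum_i log(var_i + eps).
    return -0.5 * np.sum(np.log(var + EPS))
```

Because the operation is a fixed linear rescaling of each dimension, this term can simply be added to the coupling layers' log-determinants when evaluating Equation 2.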
We found that the use of this batch normalization technique not only allowed training with a deeper stack of coupling layers, but also alleviated the instability problem that practitioners often encounter when training conditional distributions with a scale parameter through a gradient-based approach.

## 4 Experiments

### 4.1 Procedure

The algorithm described in Equation 2 shows how to learn distributions on unbounded space. In general, the data of interest have bounded magnitude. For example, the pixel values of an image typically lie in \([0, 256]^{D}\) after application of the recommended jittering procedure [64, 62]. In order to reduce the impact of boundary effects, we instead model the density of \(\mathrm{logit}\left(\alpha + (1 - \alpha)\odot \frac{x}{256}\right)\), where \(\alpha\) is picked here as \(0.05\). We take this transformation into account when computing log-likelihood and bits per dimension. We also augment the CIFAR-10, CelebA and LSUN datasets during training to include horizontal flips of the training examples.

We train our model on four natural image datasets: CIFAR-10 [36], Imagenet [52], Large-scale Scene Understanding (LSUN) [70], and CelebFaces Attributes (CelebA) [41]. More specifically, we train on the downsampled \(32\times 32\) and \(64\times 64\) versions of Imagenet [46]. For the LSUN dataset, we train on the bedroom, tower and church outdoor categories. The procedure for LSUN is the same as in [47]: we downsample the image so that the smallest side is 96 pixels and take random crops of \(64\times 64\). For CelebA, we use the same procedure as in [38]: we take an approximately central crop of \(148\times 148\), then resize it to \(64\times 64\).

We use the multi-scale architecture described in Section 3.6 and use deep convolutional residual networks in the coupling layers with rectifier nonlinearity and skip-connections as suggested by [46]. To compute the scaling functions \(s\), we use a hyperbolic tangent function multiplied by a learned scale, whereas the translation function \(t\) has an affine output. Our multi-scale architecture is repeated recursively until the input of the last recursion is a \(4\times 4\times c\) tensor. For datasets of images of size \(32\times 32\), we use 4 residual blocks with 32 hidden feature maps for the first coupling layers with checkerboard masking. Only 2 residual blocks are used for images of size \(64\times 64\). We use a batch size of 64. For CIFAR-10, we use 8 residual blocks, 64 feature maps, and downscale only once. We optimize with ADAM [33] with default hyperparameters and use an \(L_{2}\) regularization on the weight scale parameters with coefficient \(5\cdot 10^{-5}\).

We set the prior \(p_{Z}\) to be an isotropic unit norm Gaussian. However, any distribution could be used for \(p_{Z}\), including distributions that are also learned during training, such as from an auto-regressive model, or (with slight modifications to the training objective) a variational autoencoder.
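A minimal sketch of this pre-processing, assuming jittered 8-bit pixel values in \([0, 256]\); the function names are illustrative. The per-pixel log-derivative below is the quantity that must be accounted for so that log-likelihood and bits per dimension are reported in the original pixel space.

```python
import numpy as np

ALPHA = 0.05

def to_logit_space(x):
    # x: jittered pixel values in [0, 256); maps them to an unbounded space.
    p = ALPHA + (1.0 - ALPHA) * x / 256.0
    return np.log(p) - np.log(1.0 - p)            # logit(p)

def from_logit_space(y):
    p = 1.0 / (1.0 + np.exp(-y))                  # sigmoid, inverse of logit
    return 256.0 * (p - ALPHA) / (1.0 - ALPHA)

def log_det_per_pixel(x):
    # log|d(logit space)/dx| = log(1 - alpha) - log 256 - log p - log(1 - p),
    # added to the model log-likelihood to report densities in pixel space.
    p = ALPHA + (1.0 - ALPHA) * x / 256.0
    return np.log(1.0 - ALPHA) - np.log(256.0) - np.log(p) - np.log(1.0 - p)
```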
<table>
<tr><td>Dataset</td><td>PixelRNN [46]</td><td>Real NVP</td><td>Conv DRAW [22]</td><td>IAF-VAE [34]</td></tr>
<tr><td>CIFAR-10</td><td>3.00</td><td>3.49</td><td>&lt; 3.59</td><td>&lt; 3.28</td></tr>
<tr><td>Imagenet (32 × 32)</td><td>3.86 (3.83)</td><td>4.28 (4.26)</td><td>&lt; 4.40 (4.35)</td><td></td></tr>
<tr><td>Imagenet (64 × 64)</td><td>3.63 (3.57)</td><td>3.98 (3.75)</td><td>&lt; 4.10 (4.04)</td><td></td></tr>
<tr><td>LSUN (bedroom)</td><td></td><td>2.72 (2.70)</td><td></td><td></td></tr>
<tr><td>LSUN (tower)</td><td></td><td>2.81 (2.78)</td><td></td><td></td></tr>
<tr><td>LSUN (church outdoor)</td><td></td><td>3.08 (2.94)</td><td></td><td></td></tr>
<tr><td>CelebA</td><td></td><td>3.02 (2.97)</td><td></td><td></td></tr>
</table>

Table 1: Bits/dim results for CIFAR-10, Imagenet, LSUN datasets and CelebA. Test results for CIFAR-10 and validation results for Imagenet, LSUN and CelebA (with training results in parentheses for reference).

![](images/7_0.jpg)
<center>Figure 5: On the left column, examples from the dataset. On the right column, samples from the model trained on the dataset. The datasets shown in this figure are, in order: CIFAR-10, Imagenet \((32 \times 32)\), Imagenet \((64 \times 64)\), CelebA, LSUN (bedroom).</center>

### 4.2 Results

We show in Table 1 that the number of bits per dimension, while not improving over the PixelRNN [46] baseline, is competitive with other generative methods. As we notice that our performance increases with the number of parameters, larger models are likely to further improve performance. For CelebA and LSUN, the bits per dimension for the validation set was decreasing throughout training, so little overfitting is expected.

We show in Figure 5 samples generated from the model, with training examples from the dataset for comparison. As mentioned in [62, 22], maximum likelihood is a principle that values diversity over sample quality in a limited capacity setting. As a result, our model sometimes outputs highly improbable samples, as we can notice especially on CelebA. As opposed to variational autoencoders, the samples generated from our model look not only globally coherent but also sharp. Our hypothesis is that, as opposed to these models, real NVP does not rely on a fixed-form reconstruction cost like an \(L_{2}\) norm, which tends to reward capturing low frequency components more heavily than high frequency components. Unlike autoregressive models, sampling from our model is done very efficiently as it is parallelized over input dimensions. On Imagenet and LSUN, our model seems to have captured well the notion of background/foreground and lighting interactions such as luminosity and consistent light source direction for reflectance and shadows.

We also illustrate the smooth, semantically consistent meaning of our latent variables. In the latent space, we define a manifold based on four validation examples \(z_{(1)}, z_{(2)}, z_{(3)}, z_{(4)}\), parametrized by two parameters \(\phi\) and \(\phi'\) by

\[z = \cos (\phi)\left(\cos (\phi^{\prime})z_{(1)} + \sin (\phi^{\prime})z_{(2)}\right) + \sin (\phi)\left(\cos (\phi^{\prime})z_{(3)} + \sin (\phi^{\prime})z_{(4)}\right). \quad (19)\]

We project the resulting manifold back into the data space by computing \(g(z)\).

![](images/8_0.jpg)
<center>Figure 6: Manifold generated from four examples in the dataset. Clockwise from top left: CelebA, Imagenet \((64 \times 64)\), LSUN (tower), LSUN (bedroom).</center>
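A minimal sketch of the manifold parametrization in Equation 19, assuming four latent codes \(z_{(1)}, \dots, z_{(4)}\) obtained by encoding examples with \(f\); the grid of angles matches the \(8 \times 8\) layout used in the appendix figures, and all names are illustrative.

```python
import numpy as np

def manifold_point(z1, z2, z3, z4, phi, phi_p):
    # Equation 19: a two-parameter combination of four latent codes.
    return (np.cos(phi) * (np.cos(phi_p) * z1 + np.sin(phi_p) * z2)
            + np.sin(phi) * (np.cos(phi_p) * z3 + np.sin(phi_p) * z4))

angles = np.arange(8) * np.pi / 4.0        # phi, phi' in {0, pi/4, ..., 7pi/4}
rng = np.random.default_rng(0)
z1, z2, z3, z4 = rng.standard_normal((4, 16))   # stand-ins for encoded examples
grid = [[manifold_point(z1, z2, z3, z4, a, b) for b in angles] for a in angles]
# Each grid entry would then be decoded back to image space with x = g(z).
```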
Results are shown in Figure 6. We observe that the model seems to have organized the latent space with a notion of meaning that goes well beyond pixel space interpolation. More manifold visualizations are shown in the Appendix. To further test whether the latent space has a consistent semantic interpretation, we trained a class-conditional model on CelebA, and found that the learned representation had a consistent semantic meaning across class labels (see Appendix F).

## 5 Discussion and conclusion

In this paper, we have defined a class of invertible functions with tractable Jacobian determinant, enabling exact and tractable log-likelihood evaluation, inference, and sampling. We have shown that this class of generative model achieves competitive performances, both in terms of sample quality and log-likelihood. Many avenues exist to further improve the functional form of the transformations, for instance by exploiting the latest advances in dilated convolutions [69] and residual network architectures [60].

This paper presented a technique bridging the gap between auto-regressive models, variational autoencoders, and generative adversarial networks. Like auto-regressive models, it allows tractable and exact log-likelihood evaluation for training. It allows, however, a much more flexible functional form, similar to that in the generative model of variational autoencoders. This allows for fast and exact sampling from the model distribution. Like GANs, and unlike variational autoencoders, our technique does not require the use of a fixed-form reconstruction cost, and instead defines a cost in terms of higher-level features, generating sharper images. Finally, unlike both variational autoencoders and GANs, our technique is able to learn a semantically meaningful latent space which is as high dimensional as the input space. This may make the algorithm particularly well suited to semi-supervised learning tasks, as we hope to explore in future work.

Real NVP generative models can additionally be conditioned on additional variables (for instance class labels) to create a structured output algorithm. Moreover, as the resulting class of invertible transformations can be treated as a probability distribution in a modular way, it can also be used to improve upon other probabilistic models like auto-regressive models and variational autoencoders. For variational autoencoders, these transformations could be used both to enable a more flexible reconstruction cost [38] and a more flexible stochastic inference distribution [48]. Probabilistic models in general can also benefit from batch normalization techniques as applied in this paper.

The definition of powerful and trainable invertible functions can also benefit domains other than generative unsupervised learning. For example, in reinforcement learning, these invertible functions can help extend the set of functions for which an argmax operation is tractable for continuous \(Q\)-learning [23], or find representations where local linear Gaussian approximations are more appropriate [67].

## 6 Acknowledgments

The authors thank the developers of Tensorflow [1]. We thank Sherry Moore, David Andersen and Jon Shlens for their help in implementing the model. We thank Aaron van den Oord, Yann Dauphin, Kyle Kastner, Chelsea Finn, Maithra Raghu, David Warde-Farley, Daniel Jiwoong Im and Oriol Vinyals for fruitful discussions. Finally, we thank Ben Poole, Rafal Jozefowicz and George Dahl for their input on a draft of the paper.
## A Samples

![](images/12_0.jpg)
<center>Figure 7: Samples from a model trained on Imagenet \((64 \times 64)\).</center>

![](images/13_0.jpg)
<center>Figure 8: Samples from a model trained on CelebA.</center>

![](images/14_0.jpg)
<center>Figure 9: Samples from a model trained on LSUN (bedroom category).</center>

![](images/15_0.jpg)
<center>Figure 10: Samples from a model trained on LSUN (church outdoor category).</center>

![](images/16_0.jpg)
<center>Figure 11: Samples from a model trained on LSUN (tower category).</center>

## B Manifold

![](images/17_0.jpg)
<center>Figure 12: Manifold from a model trained on Imagenet \((64 \times 64)\). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to \(\phi\), and the y-axis to \(\phi'\), and where \(\phi, \phi' \in \{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\).</center>

![](images/18_0.jpg)
<center>Figure 13: Manifold from a model trained on CelebA. Images with red borders are taken from the training set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to \(\phi\), and the y-axis to \(\phi'\), and where \(\phi, \phi' \in \{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\).</center>

![](images/19_0.jpg)
<center>Figure 14: Manifold from a model trained on LSUN (bedroom category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to \(\phi\), and the y-axis to \(\phi'\), and where \(\phi, \phi' \in \{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\).</center>

![](images/20_0.jpg)
<center>Figure 15: Manifold from a model trained on LSUN (church outdoor category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to \(\phi\), and the y-axis to \(\phi'\), and where \(\phi, \phi' \in \{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\).</center>

![](images/21_0.jpg)
<center>Figure 16: Manifold from a model trained on LSUN (tower category). Images with red borders are taken from the validation set, and define the manifold. The manifold was computed as described in Equation 19, where the x-axis corresponds to \(\phi\), and the y-axis to \(\phi'\), and where \(\phi, \phi' \in \{0, \frac{\pi}{4}, \dots, \frac{7\pi}{4}\}\).</center>

## C Extrapolation

Inspired by the texture generation work of [19, 61] and the extrapolation test with DCGAN [47], we also evaluate the statistics captured by our model by generating images twice or ten times as large as those present in the dataset. As we can observe in the following figures, our model seems to successfully create a "texture" representation of the dataset while maintaining spatial smoothness throughout the image. Our convolutional architecture is only aware of the position of the considered pixel through edge effects in convolutions; therefore, our model behaves similarly to a stationary process. This also explains why these samples are more consistent in LSUN, where the training data was obtained using random crops.

![](images/22_0.jpg)
<center>Figure 17: We generate samples a factor bigger than the training set image size on Imagenet (64 × 64).</center>

![](images/23_0.jpg)
<center>Figure 18: We generate samples a factor bigger than the training set image size on CelebA.</center>

![](images/24_0.jpg)
<center>Figure 19: We generate samples a factor bigger than the training set image size on LSUN (bedroom category).</center>

![](images/25_0.jpg)
<center>Figure 20: We generate samples a factor bigger than the training set image size on LSUN (church outdoor category).</center>

![](images/26_0.jpg)
<center>Figure 21: We generate samples a factor bigger than the training set image size on LSUN (tower category).</center>

## D Latent variables semantic

As in [22], we further try to grasp the semantics of the latent variables in our learned layers by performing ablation tests. We infer the latent variables and resample the lowest levels of latent variables from a standard Gaussian, increasing the highest level affected by this resampling. As we can see in the following figures, the semantics of our latent space seem to operate more at a graphical level than at the level of higher concepts. Although the heavy use of convolution improves learning by exploiting image prior knowledge, it is also likely to be responsible for this limitation.

![](images/27_0.jpg)
<center>Figure 22: Conceptual compression from a model trained on Imagenet \((64 \times 64)\). The leftmost column represents the original images; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: \(100\%\), \(50\%\), \(25\%\), \(12.5\%\) and \(6.25\%\) of the latent variables are kept.</center>

![](images/27_1.jpg)
<center>Figure 23: Conceptual compression from a model trained on CelebA. The leftmost column represents the original images; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: \(100\%\), \(50\%\), \(25\%\), \(12.5\%\) and \(6.25\%\) of the latent variables are kept.</center>

![](images/28_0.jpg)
<center>Figure 24: Conceptual compression from a model trained on LSUN (bedroom category). The leftmost column represents the original images; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: \(100\%\), \(50\%\), \(25\%\), \(12.5\%\) and \(6.25\%\) of the latent variables are kept.</center>

![](images/28_1.jpg)
<center>Figure 25: Conceptual compression from a model trained on LSUN (church outdoor category). The leftmost column represents the original images; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: \(100\%\), \(50\%\), \(25\%\), \(12.5\%\) and \(6.25\%\) of the latent variables are kept.</center>

![](images/29_0.jpg)
<center>Figure 26: Conceptual compression from a model trained on LSUN (tower category). The leftmost column represents the original images; the subsequent columns were obtained by storing the higher-level latent variables and resampling the others, storing less and less as we go right. From left to right: \(100\%\), \(50\%\), \(25\%\), \(12.5\%\) and \(6.25\%\) of the latent variables are kept.</center>

## E Batch normalization

We further experimented with batch normalization by using a weighted average of a moving average of the layer statistics \(\tilde{\mu}_{t}, \tilde{\sigma}_{t}^{2}\) and the current batch statistics \(\hat{\mu}_{t}, \hat{\sigma}_{t}^{2}\),

\[\begin{array}{r}{\tilde{\mu}_{t + 1} = \rho \tilde{\mu}_{t} + (1 - \rho)\hat{\mu}_{t}}\\ {\tilde{\sigma}_{t + 1}^{2} = \rho \tilde{\sigma}_{t}^{2} + (1 - \rho)\hat{\sigma}_{t}^{2},} \end{array} \quad (21)\]

where \(\rho\) is the momentum. When using \(\tilde{\mu}_{t + 1}, \tilde{\sigma}_{t + 1}^{2}\), we only propagate gradient through the current batch statistics \(\hat{\mu}_{t}, \hat{\sigma}_{t}^{2}\). We observe that using this lag helps the model train with very small minibatches. We used batch normalization with a moving average for our results on CIFAR-10.

## F Attribute change

Additionally, we exploit the attribute information \(y\) in CelebA to build a conditional model, i.e. the invertible function \(f\) from image to latent variable uses the labels in \(y\) to define its parameters. In order to observe the information stored in the latent variables, we choose to encode a batch of images \(x\) with their original attributes \(y\) and decode them using a new set of attributes \(y'\), built by shuffling the original attributes inside the batch. We obtain the new images \(x' = g(f(x; y); y')\). We observe that, although the faces are changed so as to respect the new attributes, several properties remain unchanged, like position and background.

![](images/30_0.jpg)
<center>Figure 27: Examples \(x\) from the CelebA dataset.</center>

![](images/31_0.jpg)
<center>Figure 28: From a model trained on pairs of images and attributes from the CelebA dataset, we encode a batch of images with their original attributes before decoding them with a new set of attributes. We notice that the new images often share similar characteristics with those in Fig. 27, including position and background.</center>
accept
Accept (Poster)
7.666667
ICLR_2017_paper_0336
iclr
2,017
# LR-GAN: LAYERED RECURSIVE GENERATIVE ADVERSARIAL NETWORKS FOR IMAGE GENERATION Jianwei Yang\* Virginia Tech Blacksburg, VA jw2yang@vt.edu Anitha Kannan Facebook AI Research Menlo Park, CA akannan@fb.com Dhruv Batra\* and Devi Parikh\* Georgia Institute of Technology Atlanta, GA {dbatra, parikh}@gatech.edu ## ABSTRACT We present LR- GAN: an adversarial image generation model which takes scene structure and context into account. Unlike previous generative adversarial networks (GANs), the proposed GAN learns to generate image background and foregrounds separately and recursively, and stitch the foregrounds on the background in a contextually relevant manner to produce a complete natural image. For each foreground, the model learns to generate its appearance, shape and pose. The whole model is unsupervised, and is trained in an end- to- end manner with gradient descent methods. The experiments demonstrate that LR- GAN can generate more natural images with objects that are more human recognizable than DCGAN. ## 1 INTRODUCTION Generative adversarial networks (GANs) (Goodfellow et al., 2014) have shown significant promise as generative models for natural images. A flurry of recent work has proposed improvements over the original GAN work for image generation (Radford et al., 2015; Denton et al., 2015; Salimans et al., 2016; Chen et al., 2016; Zhu et al., 2016; Zhao et al., 2016), multi- stage image generation including part- based models (Im et al., 2016; Kwak & Zhang, 2016), image generation conditioned on input text or attributes (Mansimov et al., 2015; Reed et al., 2016b;a), image generation based on 3D structure (Wang & Gupta, 2016), and even video generation (Vondrick et al., 2016). While the holistic 'gist' of images generated by these approaches is beginning to look natural, there is clearly a long way to go. For instance, the foreground objects in these images tend to be deformed, blended into the background, and not look realistic or recognizable. One fundamental limitation of these methods is that they attempt to generate images without taking into account that images are 2D projections of a 3D visual world, which has a lot of structures in it. This manifests as structure in the 2D images that capture this world. One example of this structure is that images tend to have a background, and foreground objects are placed in this background in contextually relevant ways. We develop a GAN model that explicitly encodes this structure. Our proposed model generates images in a recursive fashion: it first generates a background, and then conditioned on the background generates a foreground along with a shape (mask) and a pose (affine transformation) that together define how the background and foreground should be composed to obtain a complete image. Conditioned on this composite image, a second foreground and an associated shape and pose are generated, and so on. As a byproduct in the course of recursive image generation, our approach generates some object- shape foreground- background masks in a completely unsupervised way, without access to any object masks for training. Note that decomposing a scene into foreground- background layers is a classical ill- posed problem in computer vision. By explicitly factorizing appearance and transformation, LR- GAN encodes natural priors about the images that the same foreground can be 'pasted' to the different backgrounds, under different affine transformations. 
According to the experiments, the absence of these priors results in degenerate foreground-background decompositions, and thus also degenerate final composite images.

![](images/1_0.jpg)
<center>Figure 1: Generation results of our model on CUB-200 (Welinder et al., 2010). It generates images in two timesteps: at the first timestep, it generates background images; at the second timestep, it generates foreground images, masks and transformations. Then, they are composed to obtain the final images. From top left to bottom right (row major), the blocks are real images, generated background images, foreground images, foreground masks, carved foreground images, carved and transformed foreground images, final composite images, and their nearest neighbor real images in the training set. Note that the model is trained in a completely unsupervised manner.</center>

We mainly evaluate our approach on four datasets: MNIST-ONE (one digit) and MNIST-TWO (two digits) synthesized from MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky & Hinton, 2009) and CUB-200 (Welinder et al., 2010). We show qualitatively (via samples) and quantitatively (via evaluation metrics and human studies on Amazon Mechanical Turk) that LR-GAN generates images that globally look natural and contain clear background and object structures that are realistic and recognizable by humans as semantic entities. An experimental snapshot on CUB-200 is shown in Fig. 1. We also find that LR-GAN generates foreground objects that are contextually relevant to the backgrounds (e.g., horses on grass, airplanes in skies, ships in water, cars on streets, etc.). For quantitative comparison, besides existing metrics in the literature, we propose two new quantitative metrics to evaluate the quality of generated images. The proposed metrics are derived from the sufficient conditions for the closeness between the generated image distribution and the real image distribution, and thus supplement existing metrics.

## 2 RELATED WORK

Early work in parametric texture synthesis was based on a set of hand-crafted features (Portilla & Simoncelli, 2000). Recent improvements in image generation using deep neural networks mainly fall into one of two stochastic models: variational autoencoders (VAEs) (Kingma et al., 2016) and generative adversarial networks (GANs) (Goodfellow et al., 2014). VAEs pair a top-down probabilistic generative network with a bottom-up recognition network for amortized probabilistic inference. The two networks are jointly trained to maximize a variational lower bound on the data likelihood. GANs consist of a generator and a discriminator in a minmax game, with the generator aiming to fool the discriminator with its samples and the latter aiming not to get fooled.

Sequential models have been pivotal for improved image generation using variational autoencoders: DRAW (Gregor et al., 2015) uses attention-based recurrence, conditioning on the canvas drawn so far. In Eslami et al. (2016), a recurrent generative model that draws one object at a time to the canvas was used as the decoder in a VAE. These methods are yet to show scalability to natural images. Early compelling results using GANs used sequential coarse-to-fine multiscale generation and class-conditioning (Denton et al., 2015). Since then, improved training schemes (Salimans et al., 2016) and better convolutional structure (Radford et al., 2015) have improved the generation results using
PixelRNN (van den Oord et al., 2016) has also recently been proposed; it generates an image sequentially, one pixel at a time, along the two spatial dimensions. In this paper, we combine the merits of sequential generation with the flexibility of GANs. Our model for sequential generation adopts a recursive structure that more naturally mimics image composition by inferring three components: appearance, shape, and pose. One closely related work combining a recursive structure with GANs is that of Im et al. (2016), but it does not explicitly model object composition and follows a paradigm similar to that of Gregor et al. (2015). Another closely related work is that of Kwak & Zhang (2016), which combines a recursive structure with alpha blending. However, our work differs in three main ways: (1) we explicitly use a generator for modeling the foreground poses, which provides a significant advantage on natural images, as shown by our ablation studies; (2) our shape generator is separate from the appearance generator, and this factored representation allows more flexibility in the generated scenes; (3) our recursive framework generates subsequent objects conditioned on the current and previous hidden vectors and the previously generated object, which allows explicit contextual modeling among the generated elements in the scene. See Fig. 17 for contextually relevant foregrounds generated for the same background, or Fig. 6 for meaningful placement of two MNIST digits relative to each other.

Models that provide supervision to image generation using conditioning variables have also been proposed: the Style/Structure GAN (Wang & Gupta, 2016) learns separate generative models for style and structure that are then composed to obtain final images. In Reed et al. (2016a), GAN-based image generation is conditioned on text and on the image region where the text manifests, specified during training via keypoints or bounding boxes. While not the focus of our work, the model proposed in this paper can easily be extended to take these forms of supervision into account.

## 3 PRELIMINARIES

### 3.1 GENERATIVE ADVERSARIAL NETWORKS

Generative adversarial networks (GANs) consist of a generator \(G\) and a discriminator \(D\) that are simultaneously trained with competing goals: the generator \(G\) is trained to generate samples that can 'fool' the discriminator \(D\), while the discriminator is trained to classify its inputs as either real (coming from the training dataset) or fake (coming from the samples of \(G\)). This competition leads to a minimax formulation with a value function:

\[\min_{\theta_{G}}\max_{\theta_{D}}\left(\mathrm{E}_{x\sim p_{\text{data}}(x)}[\log D(x;\theta_{D})] + \mathrm{E}_{z\sim p_{z}(z)}[\log (1 - D(G(z;\theta_{G});\theta_{D}))]\right), \quad (1)\]

where \(z\) is a random vector drawn from a standard multivariate Gaussian or a uniform distribution \(p_{z}(z)\), \(G(z;\theta_{G})\) maps \(z\) to the data space, and \(D(x)\) is the probability, estimated by \(D\), that \(x\) is real. An advantage of the GAN formulation is that it does not require a hand-crafted loss function; instead, the discriminator is used to optimize the generative model. The discriminator, in turn, only cares whether the sample it receives lies on the data manifold, not whether it exactly matches a particular training example (as opposed to losses such as MSE). Hence, the discriminator provides a gradient signal only when the generated samples do not lie on the data manifold, so that the generator can readjust its parameters accordingly.
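To make the alternating optimization of Eq. (1) concrete, here is a minimal sketch of one training iteration in PyTorch. It is an illustration, not the training code of this paper: the generator `G`, the discriminator `D` (assumed to output probabilities of shape `(n, 1)`), the optimizers, and the non-saturating generator loss are all standard-GAN assumptions.

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, real, z_dim=100):
    """One alternating minimax update of Eq. (1): D ascends, G descends."""
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator update: maximize log D(x) + log(1 - D(G(z))).
    z = torch.randn(n, z_dim)
    fake = G(z).detach()                       # block gradients into G
    loss_D = F.binary_cross_entropy(D(real), ones) + \
             F.binary_cross_entropy(D(fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator update: the common non-saturating variant maximizes log D(G(z)).
    z = torch.randn(n, z_dim)
    loss_G = F.binary_cross_entropy(D(G(z)), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```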
This form of training enables learning the data manifold of the training set, rather than merely optimizing to reconstruct the dataset as in autoencoders and their variants. While the GAN framework is largely agnostic to the choice of \(G\) and \(D\), it is clear that generative models with the 'right' inductive biases will be more effective at learning from the gradient information (Denton et al., 2015; Im et al., 2016; Gregor et al., 2015; Reed et al., 2016a; Yan et al., 2015). With this motivation, we propose a generator that models image generation via a recurrent process: in each timestep of the recurrence, an object with its own appearance and shape is generated and warped according to a generated pose to compose an image in layers.

### 3.2 LAYERED STRUCTURE OF IMAGES

An image of our 3D world typically has a layered structure. One way of representing an image layer is by its appearance and shape. For example, an image \(x\) with two layers, foreground \(f\) and background \(b\), may be factorized as:

\[x = f\odot m + b\odot (1 - m), \quad (2)\]

where \(\pmb{m}\) is the mask depicting the shapes of the image layers, and \(\odot\) is the element-wise multiplication operator. Some existing methods assume access to the object's shape either during training (Isola & Liu, 2013) or at both train and test time (Reed et al., 2016a; Yan et al., 2015). Representing images in a layered structure is even more natural for video with moving objects (Darrell & Pentland, 1991; Wang & Adelson, 1994; Kannan et al., 2005). Vondrick et al. (2016) generate videos by separately generating a fixed background and moving foregrounds; a similar approach to generating a single image can be found in Kwak & Zhang (2016). Another way is to model the layered structure with object appearance and pose:

\[\pmb{x} = ST(\pmb{f},\pmb{a}) + \pmb{b}, \quad (3)\]

where \(\pmb{f}\) and \(\pmb{b}\) are the foreground and background, respectively, \(\pmb{a}\) is an affine transformation, and \(ST\) is the spatial transformation operator. Several works fall into this group (Roux et al., 2011; Huang & Murphy, 2015; Eslami et al., 2016). In Huang & Murphy (2015), images are decomposed into layers of objects with specific poses in a variational autoencoder framework, while the number of objects (i.e., layers) is adaptively estimated in Eslami et al. (2016). In contrast to these works, LR-GAN uses a layered composition in which the foreground layers simultaneously model all three dominant factors of variation: appearance \(\pmb{f}\), shape \(\pmb{m}\) and pose \(\pmb{a}\). We elaborate on this in the following section.
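Before moving on, the two layered-image formulations above are easy to state in code. The sketch below assumes `(N, C, H, W)` tensors and implements Eq. (2) by alpha blending and Eq. (3) with PyTorch's `affine_grid`/`grid_sample` as the spatial transformer; it is a sketch of the formulations, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def compose_with_mask(f, b, m):
    """Eq. (2): x = f * m + b * (1 - m); m has values in (0,1) and
    broadcasts over channels."""
    return f * m + (1.0 - m) * b

def compose_with_pose(f, b, theta):
    """Eq. (3): x = ST(f, a) + b, with ST an affine spatial transformer.
    theta: (N, 2, 3) affine matrices; pixels sampled from outside the
    extent of f are zero (grid_sample's default zero padding)."""
    grid = F.affine_grid(theta, f.size(), align_corners=False)
    return F.grid_sample(f, grid, align_corners=False) + b
```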
## 4 LAYERED RECURSIVE GAN (LR-GAN)

The basic structure of LR-GAN is similar to that of a GAN: it consists of a discriminator and a generator that are simultaneously trained using the minimax formulation described in §3.1. The key innovation of our work is the layered recursive generator, which we describe in this section. The generator in LR-GAN is recursive in that the image is constructed step by step using a recurrent network, and layered in that each recursive step composes an object layer that is 'pasted' onto the image generated so far. The object layer at timestep \(t\) is parameterized by three constituents: a 'canonical' appearance \(\pmb{f}_{t}\), a shape (or mask) \(\pmb{m}_{t}\), and a pose (or affine transformation) \(\pmb{a}_{t}\) for warping the object before pasting it into the image composition. Fig. 2 shows the architecture of LR-GAN with the generator unrolled for generating the background \(\pmb{x}_{0}\) \((\equiv \pmb{x}_{b})\) and foregrounds \(\pmb{x}_{1}\) and \(\pmb{x}_{2}\). At each timestep \(t\), the generator composes the next image \(\pmb{x}_{t}\) via the following recursive computation:

\[\pmb{x}_{t} = \underbrace{ST(\pmb{m}_{t},\pmb{a}_{t})}_{\text{affine-transformed mask}} \odot \underbrace{ST(\pmb{f}_{t},\pmb{a}_{t})}_{\text{affine-transformed appearance}} + \underbrace{\left(1 - ST(\pmb{m}_{t},\pmb{a}_{t})\right)\odot\pmb{x}_{t-1}}_{\text{pasting on the image composed so far}}, \quad \forall t\in [1,T], \quad (4)\]

where \(ST(\diamond,\pmb{a}_{t})\) is a spatial transformation operator that outputs the affine-transformed version of \(\diamond\), with \(\pmb{a}_{t}\) giving the parameters of the affine transformation. Since our proposed model has an explicit transformation variable \(\pmb{a}_{t}\) that is used to warp the object, it can learn a canonical object representation that can be re-used to generate scenes in which the object occurs merely as transformations of it, such as at different scales or rotations. By factorizing appearance, shape and pose, the object generator can focus on separately capturing the regularities of the three factors that constitute an object. In Sections 5.5 and 5.6, we demonstrate experimentally that removing these factorizations leads the model to spend its capacity on variability that is not solely about the object.

### 4.1 DETAILS OF GENERATOR ARCHITECTURE

Fig. 2 shows our LR-GAN architecture in detail; we use different shapes to indicate different kinds of layers (convolutional, fractional convolutional, (non)linear, etc.), as indicated by the legend. Our model consists of two main pieces, a background generator \(G_{b}\) and a foreground generator \(G_{f}\). \(G_{b}\) and \(G_{f}\) do not share parameters with each other. The \(G_{b}\) computation happens only once, while \(G_{f}\) is recurrent over time, i.e., all object generators share the same parameters. In the following, we introduce each module and the connections between them.

![](images/4_0.jpg)
<center>Figure 2: LR-GAN architecture unfolded to three timesteps. It mainly consists of one background generator, one foreground generator, temporal connections and one discriminator. The meaning of each component is explained in the legend. </center>

Temporal Connections. LR-GAN has two kinds of temporal connections, informally speaking, one on 'top' and one on 'bottom'. The 'top' connections perform the act of sequentially 'pasting' object layers (Eqn. 4). The 'bottom' connections are constructed by an LSTM over the noise vectors \(z_{0}, z_{1}, z_{2}\); intuitively, this noise-vector-LSTM provides information to the foreground generator about what has been generated in the past. In addition, when generating multiple objects, we use a pooling layer \(P_{f}^{c}\) and a fully-connected layer \(E_{f}^{c}\) to extract information from the previously generated object's response map. In this way, the model is able to 'see' previously generated objects.

Background Generator. The background generator \(G_{b}\) is purposely kept simple. It takes the first hidden state \(h^{0}\) of the noise-vector-LSTM as input and passes it through a number of fractional convolutional layers (also called 'deconvolution' layers in some papers) to generate an image.
The output of the background generator, \(x_{b}\), is used as the canvas for the subsequently generated foregrounds.

Foreground Generator. The foreground generator \(G_{f}\) is used to generate an object with an appearance and a shape. Correspondingly, \(G_{f}\) consists of three sub-modules: a common 'trunk' \(G_{f}^{c}\), whose outputs are shared by \(G_{f}^{i}\) and \(G_{f}^{m}\); \(G_{f}^{i}\), which generates the foreground appearance \(f_{t}\); and \(G_{f}^{m}\), which generates the mask \(m_{t}\) for the foreground. All three sub-modules consist of one or more fractional convolutional layers combined with batch normalization and nonlinear layers. The generated foreground appearance and mask have the same spatial size as the background. The top of \(G_{f}^{m}\) is a sigmoid layer, so that it produces a one-channel mask whose values lie in \((0, 1)\).

Spatial Transformer. To spatially transform foreground objects, we need to estimate the transformation matrix. As in Jaderberg et al. (2015), we predict the affine transformation matrix with a linear layer \(T_{f}\) that has a six-dimensional output. Based on the predicted transformation matrix, we use a grid generator \(G_{g}\) to generate, for each location of the output, the corresponding sampling coordinates in the input. The generated foreground appearance and mask share the same transformation matrix, and thus the same sampling grid. Given the grid, the sampler \(S\) simultaneously samples \(f_{t}\) and \(m_{t}\) to obtain \(\hat{f}_{t}\) and \(\hat{m}_{t}\), respectively. Unlike in Jaderberg et al. (2015), our sampler normally performs downsampling, since the foreground typically has a smaller size than the background. Pixels in \(\hat{f}_{t}\) and \(\hat{m}_{t}\) that map from outside the extent of \(f_{t}\) and \(m_{t}\) are set to zero. Finally, \(\hat{f}_{t}\) and \(\hat{m}_{t}\) are sent to the compositor \(C\), which combines the canvas \(x_{t-1}\) and \(\hat{f}_{t}\) through layered composition with blending weights given by \(\hat{m}_{t}\) (Eqn. 4). Pseudo-code for our approach and the detailed model configurations are provided in the Appendix.
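Putting §4.1 together, one recursive step of the generator can be sketched as follows. The module names (`lstm`, `trunk`, `head_f`, `head_m`, `pose`) are hypothetical stand-ins for the paper's LSTM, \(G_{f}^{c}\), \(G_{f}^{i}\), \(G_{f}^{m}\) and \(T_{f}\); the conditioning on previously generated objects (\(P_{f}^{c}\), \(E_{f}^{c}\), \(E_{f}^{l}\)) is omitted for brevity, so this is a simplified sketch rather than the full model.

```python
import torch
import torch.nn.functional as F

def foreground_step(x_prev, z_t, h, c, lstm, trunk, head_f, head_m, pose):
    """One timestep of Eq. (4): generate appearance, mask and pose, then paste.
    x_prev: (N, 3, H, W) canvas composed so far; z_t: (N, z_dim) noise;
    lstm is an nn.LSTMCell; trunk maps the hidden vector to a feature cube."""
    h, c = lstm(z_t, (h, c))                   # 'bottom' temporal connection
    s = trunk(h)                               # shared cube for both heads
    f = head_f(s)                              # foreground appearance f_t
    m = head_m(s)                              # one-channel sigmoid mask m_t
    theta = pose(h).view(-1, 2, 3)             # six affine parameters from T_f

    grid = F.affine_grid(theta, f.size(), align_corners=False)
    f_hat = F.grid_sample(f, grid, align_corners=False)   # ST(f_t, a_t)
    m_hat = F.grid_sample(m, grid, align_corners=False)   # ST(m_t, a_t)
    x_t = m_hat * f_hat + (1.0 - m_hat) * x_prev          # Eq. (4)
    return x_t, h, c
```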
### 4.2 NEW EVALUATION METRICS

Several metrics have been proposed to evaluate GANs, such as the Gaussian Parzen window (Goodfellow et al., 2014), the Generative Adversarial Metric (GAM) (Im et al., 2016) and the Inception Score (Salimans et al., 2016). The common goal is to measure the similarity between the generated data distribution \(P_{g}(\pmb{x})\), induced by \(G(z;\theta_{G})\), and the real data distribution \(P(\pmb{x})\). Most recently, the Inception Score has been used in several works (Salimans et al., 2016; Zhao et al., 2016); however, it is an asymmetric metric and can easily be fooled by a model that generates only the centers of data modes. In addition to these metrics, we present two new metrics based on the following intuition: a sufficient (but not necessary) condition for the closeness of \(P_{g}(\pmb{x})\) and \(P(\pmb{x})\) is the closeness of \(P_{g}(\pmb{x}|y)\) and \(P(\pmb{x}|y)\), i.e., of the distributions of generated and real data conditioned on a variable of interest \(y\), e.g., the category label. One way to obtain this variable of interest \(y\) is via human annotation: given data sampled from \(P_{g}(\pmb{x})\) and \(P(\pmb{x})\), we ask people to label the category of the samples according to some rules. Note that such human annotation is often easier than comparing samples from the two distributions (e.g., because there is no 1:1 correspondence between samples with which to conduct forced-choice tests). After annotation, we need to verify whether the two distributions are similar in each category. Clearly, directly comparing the distributions \(P_{g}(\pmb{x}|y)\) and \(P(\pmb{x}|y)\) may be as difficult as comparing \(P_{g}(\pmb{x})\) and \(P(\pmb{x})\). Fortunately, we can use Bayes' rule and instead compare \(P_{g}(y|\pmb{x})\) and \(P(y|\pmb{x})\), which is a much easier task: we can simply train a discriminative model on the samples from \(P_{g}(\pmb{x})\) and \(P(\pmb{x})\), together with the human annotations of the categories of these samples. With a slight abuse of notation, we use \(P_{g}(y|\pmb{x})\) and \(P(y|\pmb{x})\) to denote the probability outputs of these two classifiers (trained on generated and on real samples, respectively). We can then use the two classifiers to compute the following two evaluation metrics:

Adversarial Accuracy: computes the classification accuracies achieved by the two classifiers on a validation set, which can be the training set or another set of real images sampled from \(P(\pmb{x})\). If \(P_{g}(\pmb{x})\) is close to \(P(\pmb{x})\), we expect to see similar accuracies.

Adversarial Divergence: computes the KL divergence between \(P_{g}(y|\pmb{x})\) and \(P(y|\pmb{x})\). The lower the adversarial divergence, the closer the two distributions are. The lower bound of this metric is exactly zero, attained when \(P_{g}(y|\pmb{x}) = P(y|\pmb{x})\) for all samples in the validation set.

As discussed above, we need human effort to label the real and generated samples. Fortunately, we can simplify this further: based on the labels of the training data, we split the training data into categories and train one generator per category. With these generators, we generate samples of all categories. This strategy is used in our experiments on datasets for which labels are available.
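Given the two trained classifiers, both metrics take only a few lines to compute. This sketch assumes `clf_real` and `clf_gen` return `(N, K)` class-probability arrays; the direction of the KL divergence, from \(P_{g}(y|\pmb{x})\) to \(P(y|\pmb{x})\), is our reading of the definition above.

```python
import numpy as np

def adversarial_metrics(clf_real, clf_gen, x_val, y_val, eps=1e-12):
    """clf_real / clf_gen: callables mapping a batch of validation images
    to (N, K) class probabilities, trained on real and generated samples."""
    p_real = clf_real(x_val)                   # P(y|x)
    p_gen = clf_gen(x_val)                     # P_g(y|x)

    # Adversarial Accuracy: each classifier's accuracy on human labels.
    acc_real = float(np.mean(p_real.argmax(axis=1) == y_val))
    acc_gen = float(np.mean(p_gen.argmax(axis=1) == y_val))

    # Adversarial Divergence: mean KL(P_g(y|x) || P(y|x)); zero iff equal.
    kl = np.sum(p_gen * (np.log(p_gen + eps) - np.log(p_real + eps)), axis=1)
    return acc_real, acc_gen, float(kl.mean())
```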
## 5 EXPERIMENT

We conduct qualitative and quantitative evaluations on three source datasets: 1) MNIST (LeCun et al., 1998); 2) CIFAR-10 (Krizhevsky & Hinton, 2009); 3) CUB-200 (Welinder et al., 2010). To add variability to the MNIST images, we randomly scale (by a factor of 0.8 to 1.2) and rotate (by \(-\frac{\pi}{4}\) to \(\frac{\pi}{4}\)) the digits and then stitch them onto \(48\times 48\) uniform backgrounds with a random grayscale value in [0, 200]. Images are then rescaled back to \(32\times 32\). Each image thus has a different background grayscale value and a different transformed digit as foreground. We call this synthesized dataset MNIST-ONE (a single digit on a gray background). We also synthesize a dataset MNIST-TWO containing two digits on a grayscale background: we randomly select two digit images, apply similar transformations as described above, put one on the left and the other on the right side of a \(78\times 78\) gray background, and resize the whole image to \(64\times 64\).

We develop LR-GAN based on open source code<sup>1</sup>. We assume the number of objects is known. Therefore, for MNIST-ONE, MNIST-TWO, CIFAR-10, and CUB-200, our model has two, three, two, and two timesteps, respectively. Since the size of a foreground object should be smaller than that of the canvas, we set the minimal allowed scale<sup>2</sup> of the affine transformation to 1.2 for all datasets except MNIST-TWO, for which it is set to 2 (objects are smaller in MNIST-TWO). In LR-GAN, the background generator and foreground generator have similar architectures; one difference is that the number of channels in the background generator is half that in the foreground generator. We compare our results to those of DCGAN (Radford et al., 2015). Note that LR-GAN without the LSTM at the first timestep corresponds exactly to DCGAN, which allows us to run controlled experiments. In both the generator and the discriminator, all the (fractional) convolutional layers have \(4 \times 4\) filters with stride 2; as a result, the number of layers in the generator and discriminator automatically adapts to the size of the training images. Please see the Appendix (Section 6.2) for details of the configurations. We use three metrics for quantitative evaluation: the Inception Score (Salimans et al., 2016) and the proposed Adversarial Accuracy and Adversarial Divergence. Note that we report two versions of the Inception Score: one based on the pre-trained Inception net, and one based on a classifier trained on the target dataset.

### 5.1 QUALITATIVE RESULTS

In Fig. 3 and 4, we show generated samples for CIFAR-10 and CUB-200, respectively; MNIST results are shown in the next subsection. As we can see, the compositional nature of our model results in images that are free of blending artifacts between backgrounds and foregrounds. For CIFAR-10, we can see horses and cars with clear shapes. For CUB-200, the bird shapes tend to be even sharper.

![](images/6_0.jpg)
<center>Figure 3: Generated images on CIFAR-10 based on our model. </center>

![](images/6_1.jpg)
<center>Figure 4: Generated images on CUB-200 based on our model. </center>

### 5.2 MNIST-ONE AND MNIST-TWO

We now report the results on MNIST-ONE and MNIST-TWO. Fig. 5 shows the generation results of our model on MNIST-ONE. As we can see, our model generates the background and the foreground at separate timesteps, and can disentangle the foreground digits from the background nearly perfectly. Though the initial values of the mask are randomly distributed in the range (0, 1), after training the masks are nearly binary and accurately carve the digits out of the generated foreground. More results on MNIST-ONE (including human studies) can be found in the Appendix (Section 6.3). Fig. 6 shows the generation results for MNIST-TWO. Similarly, the model is able to generate the background and the two foreground objects separately. The foreground generator tends to generate a single digit at each timestep. Meanwhile, it captures context information from the previous timesteps: when the first digit is placed on the left side, the second one tends to be placed on the right side, and vice versa.
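For reference, the MNIST-ONE synthesis described at the start of this section can be sketched as below; where the text leaves details unspecified (the digit's placement on the canvas and how it is blended onto the gray background), the choices here are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def make_mnist_one(digit, rng):
    """Paste a randomly scaled/rotated digit onto a random gray 48x48 canvas,
    then rescale to 32x32 (MNIST-ONE). digit: (28, 28) array in [0, 255]."""
    s = rng.uniform(0.8, 1.2)                       # random scale factor
    angle = rng.uniform(-45.0, 45.0)                # -pi/4..pi/4 in degrees
    d = zoom(digit, s)                              # resample the digit
    d = rotate(d, angle, reshape=False, order=1)

    canvas = np.full((48, 48), rng.uniform(0, 200))  # random gray background
    h, w = d.shape
    y, x = rng.integers(0, 48 - h), rng.integers(0, 48 - w)  # assumed: random placement
    canvas[y:y+h, x:x+w] = np.maximum(canvas[y:y+h, x:x+w], d)  # assumed: max blend
    return zoom(canvas, 32 / 48)                    # rescale back to 32x32

rng = np.random.default_rng(0)
```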
![](images/7_0.jpg)
<center>Figure 5: Generation results of our model on MNIST-ONE. From left to right, the image blocks are real images, generated background images, generated foreground images, generated masks and final composite images, respectively. </center>

![](images/7_1.jpg)
<center>Figure 6: Generation results of our model on MNIST-TWO. From top left to bottom right (row major), the image blocks are real images, generated background images, foreground images and masks at the second timestep, composite images at the second timestep, generated foreground images and masks at the third timestep, and the final composite images, respectively. </center>

### 5.3 CUB-200

We study the effectiveness of our model trained on the CUB-200 bird dataset. In Fig. 1, we have shown a random set of generated images, along with the intermediate generation results of the model. While being completely unsupervised, the model, for a large fraction of the samples, is able to successfully disentangle the foreground and the background. This is evident from the generated bird-like masks.

We conduct a comparative study on Amazon Mechanical Turk (AMT) between DCGAN and LR-GAN to quantify the relative visual quality of the generated images. We first generated 1000 samples from both models. Then, we performed a perfect matching between the two image sets using the Hungarian algorithm on \(L2\) distances in pixel space, resulting in 1000 image pairs; some exemplar pairs are shown in Fig. 7. For each image pair, 9 judges were asked to choose the one that is more realistic. Based on majority voting, our generated images are selected \(68.4\%\) of the time, compared with \(31.6\%\) for DCGAN, demonstrating that our model generates more realistic images than DCGAN. We attribute this difference to our model's ability to generate the foreground separately from the background, enabling stronger edge cues.

![](images/7_2.jpg)
<center>Figure 7: Matched pairs of generated images based on DCGAN and LR-GAN, respectively. The odd columns are generated by DCGAN, and the even columns are generated by LR-GAN. These are paired according to the perfect matching based on the Hungarian algorithm. </center>
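The pairing step of this study is straightforward to reproduce with `scipy`'s Hungarian-algorithm solver; the sketch below assumes the two sample sets are flattened to `(N, D)` float arrays.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_for_ab_test(samples_a, samples_b):
    """Perfect matching between two equally sized image sets, flattened to
    (N, D), minimizing total pixel-space L2 distance (Hungarian algorithm)."""
    # Pairwise squared distances via ||a||^2 + ||b||^2 - 2 a.b (memory-safe).
    a2 = (samples_a ** 2).sum(axis=1)[:, None]
    b2 = (samples_b ** 2).sum(axis=1)[None, :]
    sq = np.maximum(a2 + b2 - 2.0 * samples_a @ samples_b.T, 0.0)
    rows, cols = linear_sum_assignment(np.sqrt(sq))   # optimal pairing
    return list(zip(rows.tolist(), cols.tolist()))
```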
### 5.4 CIFAR-10

We now qualitatively and quantitatively evaluate our model on CIFAR-10, which contains multiple object categories and various backgrounds.

Comparison of image generation quality: We conduct AMT studies to compare the fidelity of image generation. Toward this goal, we generate 1000 images each from DCGAN and LR-GAN. We ask 5 judges to label each image as one of the 10 categories, as 'non-recognizable', or as 'recognizable but not belonging to the listed categories'. We then assign each image a quality level in [0, 5] that captures the number of judges that agree with the majority choice. Fig. 8 shows the images generated by both approaches, ordered by increasing quality level. We merge images at quality level 0 (all judges said non-recognizable) and 1 together, and similarly images at levels 4 and 5. Visually, the samples generated by our model have clearer boundaries and object structures. We also computed the fraction of non-recognizable images: our model had a \(10\%\) absolute drop in the non-recognizability rate (\(67.3\%\) for ours vs. \(77.7\%\) for DCGAN). For reference, \(11.4\%\) of real CIFAR images were categorized as non-recognizable. Fig. 9 shows more generated (intermediate) results of our model.

![](images/8_0.jpg)
<center>Figure 8: Qualitative comparison on CIFAR-10. Top three rows are images generated by DCGAN; bottom three rows are by LR-GAN. From left to right, the blocks display generated images with increasing quality level as determined by human studies. </center>

Quantitative evaluation on generators: We evaluate the generators with three metrics: 1) Inception Score; 2) Adversarial Accuracy; 3) Adversarial Divergence. To obtain a classifier model for evaluation, we remove the top layer of the discriminator used in our model and append two fully connected layers on top of it. We train this classifier on the annotated training samples of CIFAR-10. Following Salimans et al. (2016), we generated 50,000 images each from DCGAN and LR-GAN. We compute two types of Inception Score: the standard one, based on the Inception net as in Salimans et al. (2016), and a contextual one, based on our trained classifier model. To distinguish them, we denote the standard one by 'Inception Score†' and the contextual one by 'Inception Score††'. To obtain the Adversarial Accuracy and Adversarial Divergence scores, we train one generator on each of the 10 categories for DCGAN and LR-GAN, respectively, and use these generators to generate samples of the different categories. Given the generated samples, we train the classifiers for DCGAN and LR-GAN separately. Along with the classifier trained on the real samples, we compute the Adversarial Accuracy and Adversarial Divergence on the real training samples. In Table 1, we report the Inception Scores, Adversarial Accuracy and Adversarial Divergence for comparison. We can see that our model outperforms DCGAN across the board. We point out that we obtain different Inception Scores with different classifier models, which indicates that the Inception Score varies with the model used to compute it.

Table 1: Quantitative comparison between DCGAN and LR-GAN on CIFAR-10.

<table><tr><td>Training Data</td><td>Real Images</td><td>DCGAN</td><td>Ours</td></tr><tr><td>Inception Score†</td><td>11.18±0.18</td><td>6.64±0.14</td><td>7.17±0.07</td></tr><tr><td>Inception Score††</td><td>7.23±0.09</td><td>5.69±0.07</td><td>6.11±0.06</td></tr><tr><td>Adversarial Accuracy</td><td>83.33±0.08</td><td>37.81±0.02</td><td>44.22±0.08</td></tr><tr><td>Adversarial Divergence</td><td>0</td><td>7.58±0.04</td><td>5.57±0.06</td></tr></table>

† Evaluated using the pre-trained Inception net as in Salimans et al. (2016).
†† Evaluated using a supervised classifier built on the LR-GAN discriminator.

![](images/9_0.jpg)
<center>Figure 9: Generation results of our model on CIFAR-10. From left to right, the blocks are: generated background images, foreground images, foreground masks, foreground images carved out by masks, carved foregrounds after spatial transformation, final composite images and nearest neighbor training images to the generated images. </center>

![](images/9_1.jpg)
<center>Figure 10: Category specific generation results of our model on CIFAR-10 categories of horse, frog, and cat (top to bottom). The blocks from left to right are: generated background images, foreground images, foreground masks, foreground images carved out by masks, carved foregrounds after spatial transformation and final composite images. </center>
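For completeness, the (standard or contextual) Inception Score reported in Table 1 is \(\exp(\mathrm{E}_{x}[\mathrm{KL}(p(y|x)\,\|\,p(y))])\); a sketch of the computation, assuming precomputed class probabilities and the usual 10-split protocol of Salimans et al. (2016), is:

```python
import numpy as np

def inception_score(probs, splits=10, eps=1e-12):
    """probs: (N, K) class probabilities p(y|x) for generated images.
    Returns mean and std of exp(E_x KL(p(y|x) || p(y))) over splits."""
    scores = []
    for chunk in np.array_split(probs, splits):
        p_y = chunk.mean(axis=0, keepdims=True)      # marginal p(y)
        kl = (chunk * (np.log(chunk + eps) - np.log(p_y + eps))).sum(axis=1)
        scores.append(np.exp(kl.mean()))
    return float(np.mean(scores)), float(np.std(scores))
```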
Quantitative evaluation on discriminators: We evaluate the discriminator as an extractor of deep representations. Specifically, we use the output of the last convolutional layer in the discriminator as features and perform 1-NN classification on the test set given the full training set, with cosine similarity as the metric. On the test set, our model achieves \(62.09\% \pm 0.01\%\) compared to DCGAN's \(56.05\% \pm 0.02\%\).

Contextual generation: We also show the efficacy of our approach at generating diverse foregrounds conditioned on a fixed background. The results in Fig. 17 in the Appendix show that the foreground generator produces objects that are compatible with the background, indicating that the model has captured contextual dependencies between the image layers.

Category specific models: The objects in CIFAR-10 exhibit huge variability in shape, which can partly explain why some of the generated shapes in Fig. 9 are not as compelling. To test this hypothesis, we reuse the generators trained on each of the 10 categories for our metrics to obtain generation results. Fig. 10 shows results for the categories 'horse', 'frog' and 'cat'. We can see that the model is now able to generate object-specific appearances and shapes, in a similar vein to our results on the CUB-200 dataset.

### 5.5 IMPORTANCE OF TRANSFORMATIONS

![](images/10_0.jpg)
<center>Figure 11: Generation results from an ablated LR-GAN model without affine transformations. From top to bottom, the block rows correspond to different datasets: MNIST-ONE, CUB-200, CIFAR-10. From left to right, the blocks show generated background images, foreground images, foreground masks, and final composite images. For comparison, the rightmost column block shows final generated images from a non-ablated model with affine transformations. </center>

Fig. 11 shows results from an ablated model without affine transformations in the foreground layers, and compares them with the full model that does include these transformations. One significant problem emerges: the decompositions are degenerate, in the sense that the model is unable to break the symmetry between foreground and background layers, often generating object appearances in the model's background layer and vice versa. For CUB-200, the final generated images show some blending between foregrounds and backgrounds; this is particularly the case for those images without bird-shaped masks. For CIFAR-10, a number of generated masks are inverted, so that the background images are carved out as the foreground objects. The foreground generator then takes on almost all of the duty of generating the final images, which makes it harder to generate images as clear as those of the model with transformations. From these comparisons, we qualitatively demonstrate the importance of modeling transformations in the foreground generation process. Another merit of using transformations is that the intermediate outputs of the model are more interpretable and facilitate downstream tasks such as scene parsing, as demonstrated in Section 6.8.

### 5.6 IMPORTANCE OF SHAPES

We perform another ablation study by removing the mask generator, to understand the importance of modeling object shapes. In this case, the generated foreground is simply pasted on top of the generated background after being transformed.

![](images/11_0.jpg)
<center>Figure 12: Generation results from an ablated LR-GAN model without the mask generator. The block rows correspond to different datasets (from top to bottom: MNIST-ONE, CUB-200, CIFAR-10). From left to right, the blocks show generated background images, foreground images, transformed foreground images, and final composite images. For comparison, the rightmost column block shows final generated images from a non-ablated model with the mask generator. </center>
There is no alpha blending between the foregrounds and backgrounds. The generation results for the three datasets MNIST-ONE, CUB-200 and CIFAR-10 are shown in Fig. 12. Though the model works well for MNIST-ONE, it fails to generate reasonable images for the other datasets; in particular, training does not even converge for CUB-200. Based on these results, we qualitatively demonstrate that the mask generator in our model is fairly important for obtaining plausible results, especially on realistic images.

## REFERENCES

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. arXiv preprint arXiv:1606.03657, 2016.

Trevor Darrell and Alex Pentland. Robust estimation of a multi-layered motion representation. IEEE Workshop on Visual Motion, 1991.

Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486-1494, 2015.

S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geoffrey E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. CoRR, abs/1603.08575, 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.

Jonathan Huang and Kevin Murphy. Efficient inference in occlusion-aware generative models of images. CoRR, abs/1511.06362, 2015.

Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.

Phillip Isola and Ce Liu. Scene collaging: Analysis and synthesis of natural images with semantic layers. In IEEE International Conference on Computer Vision, pp. 3048-3055, 2013.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In Advances in Neural Information Processing Systems 28, pp. 2017-2025, 2015.

Anitha Kannan, Nebojsa Jojic, and Brendan Frey. Generative model for layers of appearance and deformation. AISTATS, 2005.

Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Hanock Kwak and Byoung-Tak Zhang. Generating images part by part with composite generative adversarial networks. arXiv preprint arXiv:1607.05387, 2016.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating images from captions with attention. arXiv preprint arXiv:1511.02793, 2015.
Javier Portilla and Eero P Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49-70, 2000.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Scott Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. arXiv preprint arXiv:1610.02454, 2016a.

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016b.

Nicolas Le Roux, Nicolas Heess, Jamie Shotton, and John Winn. Learning a generative model of images by factoring appearance and shape. Neural Computation, 23:593-650, 2011.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. CoRR, abs/1601.06759, 2016.

Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. arXiv preprint arXiv:1609.02612, 2016.

John Wang and Edward Adelson. Representing moving images with layers. IEEE Transactions on Image Processing, 1994.

Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. arXiv preprint arXiv:1603.05631, 2016.

P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.

Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. CoRR, abs/1512.00570, 2015.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision, pp. 597-613. Springer, 2016.

## 6 APPENDIX

### 6.1 ALGORITHM

Algo. 1 illustrates the generative process in our model. \(g(\star)\) evaluates the function \(g\) at \(\star\), and \(\circ\) is a composition operator such that \(f \circ g(\star) = f(g(\star))\).
# Algorithm 1 Stochastic Layered Recursive Image Generation

1: \(z_{0}\sim \mathcal{N}(0,I)\)
2: \(x_{0}\gets G_{b}(z_{0})\) \(\triangleright\) background generator
3: \(h^{0}\gets 0\)
4: \(c^{0}\gets 0\)
5: for \(t\in [1\dots T]\) do
6: &nbsp;&nbsp; \(z_{t}\sim \mathcal{N}(0,I)\)
7: &nbsp;&nbsp; \(h^{t},c^{t}\gets \mathrm{LSTM}(z_{t},h^{t-1},c^{t-1})\) \(\triangleright\) pass through LSTM
8: &nbsp;&nbsp; if \(t = 1\) then
9: &nbsp;&nbsp;&nbsp;&nbsp; \(y_{t}\gets h^{t}\)
10: &nbsp;&nbsp; else
11: &nbsp;&nbsp;&nbsp;&nbsp; \(y_{t}\gets E_{f}^{l}([h^{t},h_{f}^{t-1}])\) \(\triangleright\) pass through non-linear embedding layers \(E_{f}^{l}\)
12: &nbsp;&nbsp; end if
13: &nbsp;&nbsp; \(s_{t}\gets G_{f}^{c}(y_{t})\) \(\triangleright\) predict shared cube for \(G_{f}^{i}\) and \(G_{f}^{m}\)
14: &nbsp;&nbsp; \(a_{t}\gets T_{f}(y_{t})\) \(\triangleright\) object transformation
15: &nbsp;&nbsp; \(f_{t}\gets G_{f}^{i}(s_{t})\) \(\triangleright\) generate object appearance
16: &nbsp;&nbsp; \(m_{t}\gets G_{f}^{m}(s_{t})\) \(\triangleright\) generate object shape
17: &nbsp;&nbsp; \(h_{f}^{t}\gets E_{f}^{c}\circ P_{f}^{c}(s_{t})\) \(\triangleright\) predict shared representation embedding
18: &nbsp;&nbsp; \(x_{t}\gets ST(m_{t},a_{t})\odot ST(f_{t},a_{t}) + (1 - ST(m_{t},a_{t}))\odot x_{t-1}\)
19: end for

### 6.2 MODEL CONFIGURATIONS

Table 2 lists the dataset information and model configuration for the different datasets. The dimensions of the random vectors and hidden vectors are all set to 100. We also compare the number of parameters in DCGAN and LR-GAN: the numbers before the '/' are for our model, and those after it are for DCGAN.

Table 2: Information and model configurations on different datasets.

<table><tr><td>Dataset</td><td>MNIST-ONE</td><td>MNIST-TWO</td><td>CIFAR-10</td><td>CUB-200</td></tr><tr><td>Image Size</td><td>32</td><td>64</td><td>32</td><td>64</td></tr><tr><td>#Images</td><td>60,000</td><td>60,000</td><td>50,000</td><td>5,994</td></tr><tr><td>#Timesteps</td><td>2</td><td>3</td><td>2</td><td>2</td></tr><tr><td>#Parameters</td><td>5.25M/4.11M</td><td>7.53M/6.33M</td><td>5.26M/4.11M</td><td>27.3M/6.34M</td></tr></table>

Using the same notation as Zhao et al. (2016), the architectures for the different datasets are:

- MNIST-ONE: \(G_{b}\): (256)4c-(128)4c2s-(64)4c2s-(3)4c2s; \(G_{f}^{c}\): (512)4c-(256)4c2s-(128)4c2s; \(G_{f}^{i}\): (3)4c2s; \(G_{f}^{m}\): (1)4c2s; \(D\): (64)4c2s-(128)4c2s-(256)4c2s-(256)4p4s-1
- MNIST-TWO: \(G_{b}\): (256)4c-(128)4c2s-(64)4c2s-(32)4c2s-(3)4c2s; \(G_{f}^{c}\): (512)4c-(256)4c2s-(128)4c2s-(64)4c2s; \(G_{f}^{i}\): (3)4c2s; \(G_{f}^{m}\): (1)4c2s; \(D\): (64)4c2s-(128)4c2s-(256)4c2s-(512)4c2s-(512)4c2s-(512)4p4s-1
- CUB-200: \(G_{b}\): (512)4c-(256)4c2s-(128)4c2s-(64)4c2s-(3)4c2s; \(G_{f}^{c}\): (1024)4c-(512)4c2s-(256)4c2s-(128)4c2s; \(G_{f}^{i}\): (3)4c2s; \(G_{f}^{m}\): (1)4c2s; \(D\): (128)4c2s-(256)4c2s-(512)4c2s-(1024)4c2s-(1024)4p4s-1
- CIFAR-10: \(G_{b}\): (256)4c-(128)4c2s-(64)4c2s-(3)4c2s; \(G_{f}^{c}\): (512)4c-(256)4c2s-(128)4c2s; \(G_{f}^{i}\): (3)4c2s; \(G_{f}^{m}\): (1)4c2s; \(D\): (64)4c2s-(128)4c2s-(256)4c2s-(256)4p4s-1
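Reading the notation above as '(n)kc[ss]', i.e., a (fractional) convolution with n output maps, kernel size k, and stride s when given (following Zhao et al., 2016), the MNIST-ONE background generator can be sketched as the PyTorch module below. Treating the first unstrided layer as a 1-to-4 fractional convolution, and the batch-norm/ReLU/Tanh placement (DCGAN conventions), are our assumptions.

```python
import torch
import torch.nn as nn

class BackgroundGenerator(nn.Module):
    """Sketch of G_b for MNIST-ONE: (256)4c-(128)4c2s-(64)4c2s-(3)4c2s,
    mapping a 100-d vector to a 3x32x32 image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0),            # 1x1 -> 4x4
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),              # 4 -> 8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),               # 8 -> 16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),      # 16 -> 32
        )

    def forward(self, h):                      # h: (N, z_dim) hidden vector
        return self.net(h.view(h.size(0), -1, 1, 1))
```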
### 6.3 RESULTS ON MNIST-ONE

We conduct human studies on the generation results for MNIST-ONE. Specifically, we generate 1,000 images using both LR-GAN and DCGAN; as references, we also include 1,000 real images. We then ask users on AMT to label each image as one of the digits (0-9), also providing the option 'non-recognizable' in case the image does not appear to contain a digit. Each image was judged by 5 unique workers. As for CIFAR-10, if an image is recognized as the same digit by all 5 users, it is assigned quality level 5; if it is not recognizable according to all users, it is assigned quality level 0. Fig. 13 (left) shows the number of images assigned to each of the six quality levels. Compared to DCGAN, our model generated more samples at high quality levels; as expected, the real images also have many samples at high quality levels. In Fig. 13 (right), we show the number of images recognized as each digit category (0-9). For qualitative comparison, we show exemplar images at each quality level in Fig. 14. From left to right, the quality level increases from 0 to 5; as expected, images at higher quality levels are clearer.

For quantitative evaluation, we follow the same protocol as for CIFAR-10. The classifier model used for the contextual Inception Score is trained on the training set. We generate 60,000 samples each from DCGAN and LR-GAN for evaluation. To obtain the Adversarial Accuracy and Adversarial Divergence, we first train 10 generators for the 10 digit categories separately, and then use the generated samples to train the classifier. As shown in Table 3, our model has higher scores than DCGAN on both the standard and the contextual Inception Score; it also has a slightly higher adversarial accuracy and a lower adversarial divergence than DCGAN. We find that all three image sets have low standard Inception Scores, mainly because the Inception net is trained on ImageNet, whose data distribution is very different from that of MNIST. Based on this, we argue that the standard Inception Score is not suitable for some image datasets.

![](images/14_0.jpg)
<center>Figure 13: Statistics of annotations in human studies on MNIST-ONE. Left: distribution of quality level; Right: distribution of recognized digit categories. </center>

![](images/15_0.jpg)
<center>Figure 14: Qualitative comparison on MNIST-ONE. Top three rows are samples generated by DCGAN. Bottom three rows are samples generated by LR-GAN. The quality level increases from left to right as determined via human studies. </center>

Table 3: Quantitative comparison on MNIST-ONE.

<table><tr><td>Training Data</td><td>Real Images</td><td>DCGAN</td><td>Ours</td></tr><tr><td>Inception Score†</td><td>1.83±0.01</td><td>2.03±0.01</td><td>2.06±0.01</td></tr><tr><td>Inception Score††</td><td>9.15±0.04</td><td>6.42±0.03</td><td>7.15±0.04</td></tr><tr><td>Adversarial Accuracy</td><td>95.22 ± 0.25</td><td>26.12 ± 0.07</td><td>26.61 ± 0.06</td></tr><tr><td>Adversarial Divergence</td><td>0</td><td>8.47 ± 0.03</td><td>8.39 ± 0.04</td></tr></table>

† Evaluated using the pre-trained Inception net as in Salimans et al. (2016).
†† Evaluated using a supervised classifier built on the LR-GAN discriminator.
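The quality-level assignment used in these human studies can be computed as below; the handling of ties and of partially non-recognizable majorities is not fully specified in the text, so those choices are assumptions.

```python
import numpy as np
from collections import Counter

def quality_levels(judgments):
    """judgments: (N, 5) int array of per-judge labels, -1 = non-recognizable.
    Quality level = number of judges agreeing with the majority choice,
    with level 0 reserved for images all judges found non-recognizable."""
    levels = []
    for row in judgments:
        label, count = Counter(row.tolist()).most_common(1)[0]
        levels.append(0 if label == -1 and count == len(row) else count)
    return np.array(levels)
```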
### 6.4 MORE RESULTS ON CUB-200

In this experiment, we reduce the minimal allowed object scale to 1.1, which allows the model to generate larger foreground objects. The results are shown in Fig. 15. As with the results under the 1.2 constraint, crisp bird-like masks are generated automatically by our model.

![](images/15_1.jpg)
<center>Figure 15: Generation results of our model on CUB-200 when setting the minimal allowed scale to 1.1. From left to right, the blocks show the generated background images, foreground images, foreground masks, foreground images carved out by masks, and carved foreground images after spatial transformation. The sixth and seventh blocks are the final composite images and the nearest neighbor real images. </center>

### 6.5 MORE RESULTS ON CIFAR-10

#### 6.5.1 QUALITATIVE RESULTS

In Fig. 16, we show more results on CIFAR-10 when setting the minimal allowed object scale to 1.1. The rightmost column block also shows the training images that are closest to the generated images (by cosine similarity in pixel space); we can see that our model does not memorize the training data.

![](images/16_0.jpg)
<center>Figure 16: Generation results of our model on CIFAR-10 with minimal allowed scale 1.1. From left to right, the layout is the same as in Fig. 15. </center>

#### 6.5.2 WALKING IN THE LATENT SPACE

Similar to DCGAN, we also show results of walking in the latent space. Note that our model has two or more latent inputs, so we can walk along any of them or their combination. In Fig. 17, we generate multiple foregrounds for the same fixed generated background. We find that our model consistently generates contextually compatible foregrounds: for example, for grass-like backgrounds the foreground generator generates horses and deer, and for blue sky it generates airplane-like objects.

![](images/17_0.jpg)
<center>Figure 17: Walking in the latent foreground space by fixing backgrounds in our model on CIFAR-10. From left to right, the blocks are: generated background images, foreground images, foreground masks, foreground images carved out by masks, carved out foreground images after spatial transformation, and final composite images. Each row has the same background, but different foregrounds. </center>

#### 6.5.3 WORD CLOUD BASED ON HUMAN STUDY

As mentioned above, we conducted human studies on CIFAR-10. Besides asking people to select a name from a list for each image, we conducted another human study in which we asked people to use one word (free-form) to describe the main object in the image. Each image was 'named' by 5 unique people. We generate word clouds for real images and for images generated by DCGAN and LR-GAN, as shown in Fig. 18.

![](images/17_1.jpg)
<center>Figure 18: Statistics of annotations in human studies on CIFAR-10. Left to right: word cloud for real images, images generated by DCGAN, images generated by LR-GAN. </center>

### 6.6 RESULTS ON LFW FACE DATASET

We conduct an experiment on the face images of the LFW dataset (Huang et al., 2007). Unlike previous works that use cropped and aligned faces, we directly generate the original images, which contain a large portion of background. This setting helps verify LR-GAN's ability to model object appearance, shape and pose. In Fig. 19, we show the (intermediate) generation results of LR-GAN. Surprisingly, without any supervision, the model generates the background and the faces in separate steps, and the generated masks accurately depict face shapes. Moreover, the model learns where to place the generated faces so that the whole image looks natural. For comparison, refer to Kwak & Zhang (2016), whose model does not include transformations and whose generation results degrade considerably.

![](images/17_2.jpg)
<center>Figure 19: Generation results of our model on LFW. From left to right, the blocks are: generated background images, foreground images, foreground masks, carved out foreground images after spatial transformation, and final composite images. </center>

### 6.7 STATISTICS ON TRANSFORMATION MATRICES

In this part, we analyze the statistics of the transformation matrices generated by our model for the different datasets: MNIST-ONE, CUB-200, CIFAR-10 and LFW.
We use affine transformations in our model, so there are six parameters: scaling in the x coordinate \((s_{x})\), scaling in the y coordinate \((s_{y})\), translation in the x coordinate \((t_{x})\), translation in the y coordinate \((t_{y})\), rotation in the x coordinate \((r_{x})\) and rotation in the y coordinate \((r_{y})\). In Fig. 20, we show histograms of the parameters for the different datasets. These histograms show that the model produces non-trivially varied scaling, translation and rotation on all datasets. The learned transformations exhibit different patterns on different datasets; we hypothesize that this is mainly determined by the configurations of objects in the images. For example, on MNIST-ONE, all six parameters fluctuate, since the synthetic dataset contains digits randomly placed at different locations. For the other three datasets, the scalings converge to a single value, since the object sizes do not vary much and the variations in rotation and translation suffice to generate realistic images. Specifically, the generator relies largely on translation along the x coordinate for CUB-200, which makes sense since birds in those images have similar scales and orientations but varied horizontal locations. For CIFAR-10, since there are 10 different object categories, the configurations are more diverse, and hence the generator uses all parameters except scaling. For LFW, since faces have similar configurations, the learned transformations show less fluctuation as well. We conclude that LR-GAN indeed models transformations of the foreground to generate images.

### 6.8 CONDITIONAL IMAGE GENERATION

Considering that our model can generate object-like masks (shapes) for images, we conducted an experiment to evaluate whether it could potentially be used for image segmentation and object detection. We make a few changes to the model. For the background generator, the input is a real image instead of a random vector: the image is passed through an encoder to extract hidden features, which replace the random vector \(z_{0}\) and are fed to the background generator. For the foreground generator, we subtract the image generated by the background generator from the input image to obtain a residual image; this residual image is fed through the same encoder, and the resulting features are used as the input to the foreground generator. Since we want the conditional model to reconstruct the image, we add a reconstruction loss alongside the adversarial loss. We train this conditional model on CIFAR-10. The (intermediate) outputs of the model are shown in Fig. 21. Interestingly, the model successfully learned to decompose the input images into background and foreground: the background generator tends to perform image inpainting, generating a complete background without the object, while the foreground generator acts as a segmentation model, extracting the object mask from the input image.
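A sketch of this conditional variant is given below, reusing the hypothetical `foreground_step` from the sketch in §4.1; the encoder `E` and the choice of an L1 reconstruction loss are assumptions where the text does not pin them down.

```python
import torch
import torch.nn.functional as F

def conditional_forward(x_real, E, G_b, foreground_step, lstm_state, modules):
    """Conditional LR-GAN sketch: encode the real image in place of z_0,
    then encode the residual image to drive the foreground generator."""
    z0 = E(x_real)                              # encoder features replace z_0
    x_b = G_b(z0)                               # inpainted background canvas
    z1 = E(x_real - x_b)                        # residual image -> features
    x_rec, h, c = foreground_step(x_b, z1, *lstm_state, *modules)
    loss_rec = F.l1_loss(x_rec, x_real)         # reconstruction loss (assumed L1)
    return x_rec, loss_rec                      # adversarial loss added as usual
```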
Similarly, we run the conditional LR-GAN on the LFW dataset. As we can see in Fig. 22, the foreground generator automatically and consistently learned to generate the face regions, even though a large portion of each input image is background. In other words, the conditional LR-GAN successfully learned to detect faces in images. We suspect this succeeds because it is cheaper for the model to generate similar images this way, so that training converges to a solution in which the first generator produces the background and the second produces the face. Based on these experiments, we argue that our model could potentially be used for image segmentation and object detection in a generative and unsupervised manner. One direction for future work is to verify this on higher-resolution and more complicated datasets.

![](images/19_0.jpg)
<center>Figure 20: Histograms of transformation parameters learned by our model for different datasets. From left to right, the datasets are: MNIST-ONE, CUB-200, CIFAR-10 and LFW. From top to bottom, they are scaling \(s_{x}\), \(s_{y}\), translation \(t_{x}\), \(t_{y}\), and rotation \(r_{x}\), \(r_{y}\) in the \(x\) and \(y\) coordinates, respectively. </center>

![](images/20_0.jpg)
<center>Figure 21: Conditional generation results of our model on CIFAR-10. From left to right, the blocks are: real images, generated background images, foreground images, foreground masks, foreground images carved out by masks, carved foreground images after spatial transformation, and final composite (reconstructed) images. </center>

![](images/20_1.jpg)
<center>Figure 22: Conditional generation results of our model on LFW, displayed with the same layout as Fig. 21. </center>
## ABSTRACT We present LR- GAN: an adversarial image generation model which takes scene structure and context into account. Unlike previous generative adversarial networks (GANs), the proposed GAN learns to generate image background and foregrounds separately and recursively, and stitch the foregrounds on the background in a contextually relevant manner to produce a complete natural image. For each foreground, the model learns to generate its appearance, shape and pose. The whole model is unsupervised, and is trained in an end- to- end manner with gradient descent methods. The experiments demonstrate that LR- GAN can generate more natural images with objects that are more human recognizable than DCGAN. ## 1 INTRODUCTION Generative adversarial networks (GANs) (Goodfellow et al., 2014) have shown significant promise as generative models for natural images. A flurry of recent work has proposed improvements over the original GAN work for image generation (Radford et al., 2015; Denton et al., 2015; Salimans et al., 2016; Chen et al., 2016; Zhu et al., 2016; Zhao et al., 2016), multi- stage image generation including part- based models (Im et al., 2016; Kwak & Zhang, 2016), image generation conditioned on input text or attributes (Mansimov et al., 2015; Reed et al., 2016b;a), image generation based on 3D structure (Wang & Gupta, 2016), and even video generation (Vondrick et al., 2016). While the holistic 'gist' of images generated by these approaches is beginning to look natural, there is clearly a long way to go. For instance, the foreground objects in these images tend to be deformed, blended into the background, and not look realistic or recognizable. One fundamental limitation of these methods is that they attempt to generate images without taking into account that images are 2D projections of a 3D visual world, which has a lot of structures in it. This manifests as structure in the 2D images that capture this world. One example of this structure is that images tend to have a background, and foreground objects are placed in this background in contextually relevant ways. We develop a GAN model that explicitly encodes this structure. Our proposed model generates images in a recursive fashion: it first generates a background, and then conditioned on the background generates a foreground along with a shape (mask) and a pose (affine transformation) that together define how the background and foreground should be composed to obtain a complete image. Conditioned on this composite image, a second foreground and an associated shape and pose are generated, and so on. As a byproduct in the course of recursive image generation, our approach generates some object- shape foreground- background masks in a completely unsupervised way, without access to any object masks for training. Note that decomposing a scene into foreground- background layers is a classical ill- posed problem in computer vision. By explicitly factorizing appearance and transformation, LR- GAN encodes natural priors about the images that the same foreground can be 'pasted' to the different backgrounds, under different affine transformations. According to the experiments, the absence of these priors result in degenerate foreground- background decompositions, and thus also degenerate final composite images. <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Generation results of our model on CUB-200 (Welinder et al., 2010). It generates images in two timesteps. 
At the first timestep, it generates background images, while generates foreground images, masks and transformations at the second timestep. Then, they are composed to obtain the final images. From top left to bottom right (row major), the blocks are real images, generated background images, foreground images, foreground masks, carved foreground images, carved and transformed foreground images, final composite images, and their nearest neighbor real images in the training set. Note that the model is trained in a completely unsupervised manner. </center> We mainly evaluate our approach on four datasets: MNIST- ONE (one digit) and MNIST- TWO (two digits) synthesized from MNIST (LeCun et al., 1998), CIFAR- 10 (Krizhevsky & Hinton, 2009) and CUB- 200 (Welinder et al., 2010). We show qualitatively (via samples) and quantitatively (via evaluation metrics and human studies on Amazon Mechanical Turk) that LR- GAN generates images that globally look natural and contain clear background and object structures in them that are realistic and recognizable by humans as semantic entities. An experimental snapshot on CUB- 200 is shown in Fig. 1. We also find that LR- GAN generates foreground objects that are contextually relevant to the backgrounds (e.g., horses on grass, airplanes in skies, ships in water, cars on streets, etc.). For quantitative comparison, besides existing metrics in the literature, we propose two new quantitative metrics to evaluate the quality of generated images. The proposed metrics are derived from the sufficient conditions for the closeness between generated image distribution and real image distribution, and thus supplement existing metrics. ## 2 RELATED WORK Early work in parametric texture synthesis was based on a set of hand- crafted features (Portilla & Simoncelli, 2000). Recent improvements in image generation using deep neural networks mainly fall into one of the two stochastic models: variational autoencoders (VAEs) (Kingma et al., 2016) and generative adversarial networks (GANs) (Goodfellow et al., 2014). VAEs pair a top- down probabilistic generative network with a bottom up recognition network for amortized probabilistic inference. Two networks are jointly trained to maximize a variational lower bound on the data likelihood. GANs consist of a generator and a discriminator in a minmax game with the generator aiming to fool the discriminator with its samples with the latter aiming to not get fooled. Sequential models have been pivotal for improved image generation using variational autoencoders: DRAW (Gregor et al., 2015) uses attention based recurrence conditioning on the canvas drawn so far. In Eslami et al. (2016), a recurrent generative model that draws one object at a time to the canvas was used as the decoder in VAE. These methods are yet to show scalability to natural images. Early compelling results using GANs used sequential coarse- to- fine multiscale generation and class- conditioning (Denton et al., 2015). Since then, improved training schemes (Salimans et al., 2016) and better convolutional structure (Radford et al., 2015) have improved the generation results using <--- Page Split ---> GANs. PixelRNN (van den Oord et al., 2016) is also recently proposed to sequentially generates a pixel at a time, along the two spatial dimensions. In this paper, we combine the merits of sequential generation with the flexibility of GANs. 
Our model for sequential generation embodies a recursive structure that more naturally mimics image composition by inferring three components: appearance, shape, and pose. One closely related work combining a recursive structure with GANs is that of Im et al. (2016), but it does not explicitly model object composition and follows a paradigm similar to that of Gregor et al. (2015). Another closely related work is that of Kwak & Zhang (2016), which combines a recursive structure with alpha blending. However, our work differs in three main ways: (1) We explicitly use a generator for modeling the foreground poses. This provides a significant advantage for natural images, as shown by our ablation studies; (2) Our shape generator is separate from the appearance generator. This factored representation allows more flexibility in the generated scenes; (3) Our recursive framework generates subsequent objects conditioned on the current and previous hidden vectors, and on the previously generated object. This allows for explicit contextual modeling among generated elements in the scene. See Fig. 17 for contextually relevant foregrounds generated for the same background, or Fig. 6 for meaningful placement of two MNIST digits relative to each other. Models that provide supervision to image generation using conditioning variables have also been proposed: the Style/Structure GAN (Wang & Gupta, 2016) learns separate generative models for style and structure that are then composed to obtain final images. In Reed et al. (2016a), GAN-based image generation is conditioned on text and on the region in the image where the text manifests, specified during training via keypoints or bounding boxes. While not the focus of our work, the model proposed in this paper can easily be extended to take these forms of supervision into account.

## 3 PRELIMINARIES

### 3.1 GENERATIVE ADVERSARIAL NETWORKS

Generative Adversarial Networks (GANs) consist of a generator \(G\) and a discriminator \(D\) that are simultaneously trained with competing goals: the generator \(G\) is trained to generate samples that can 'fool' the discriminator \(D\), while the discriminator is trained to classify its inputs as either real (coming from the training dataset) or fake (coming from the samples of \(G\)). This competition leads to a minmax formulation with a value function:

\[\min_{\theta_{G}}\max_{\theta_{D}}\left(\mathrm{E}_{x\sim p_{data}(x)}[\log D(x;\theta_{D})] + \mathrm{E}_{z\sim p_{z}(z)}[\log (1 - D(G(z;\theta_{G});\theta_{D}))]\right), \quad (1)\]

where \(z\) is a random vector drawn from a standard multivariate Gaussian or a uniform distribution \(p_{z}(z)\), \(G(z;\theta_{G})\) maps \(z\) to the data space, and \(D(x)\) is the probability that \(x\) is real, as estimated by \(D\). An advantage of the GAN formulation is that it lacks an explicit loss function and instead uses the discriminator to optimize the generative model. The discriminator, in turn, only cares whether the sample it receives is on the data manifold, and not whether it exactly matches a particular training example (as opposed to losses such as MSE). Hence, the discriminator provides a gradient signal only when the generated samples do not lie on the data manifold, so that the generator can readjust its parameters accordingly. This form of training enables learning the data manifold of the training set rather than just optimizing to reconstruct the dataset, as in autoencoders and their variants.
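To make the objective in Eqn. 1 concrete, the following is a minimal sketch of one alternating update in PyTorch. It illustrates the standard GAN objective rather than the released LR-GAN code; the names (`G`, `D`, `opt_G`, `opt_D`) and the assumption that \(D\) ends in a sigmoid are illustrative.

```python
import torch

def gan_step(G, D, opt_G, opt_D, real, z_dim=100, eps=1e-8):
    """One alternating update of the minmax objective in Eqn. 1.

    Assumes D(x) returns the probability that x is real (sigmoid output)
    and G(z) maps a noise vector to image space.
    """
    b = real.size(0)

    # Discriminator ascent on log D(x) + log(1 - D(G(z))).
    z = torch.randn(b, z_dim, device=real.device)
    fake = G(z).detach()  # block gradients into G for this update
    loss_d = -(torch.log(D(real) + eps).mean()
               + torch.log(1.0 - D(fake) + eps).mean())
    opt_D.zero_grad(); loss_d.backward(); opt_D.step()

    # Generator update; the non-saturating form -log D(G(z)) is common practice.
    z = torch.randn(b, z_dim, device=real.device)
    loss_g = -torch.log(D(G(z)) + eps).mean()
    opt_G.zero_grad(); loss_g.backward(); opt_G.step()
    return loss_d.item(), loss_g.item()
```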
While the GAN framework is largely agnostic to the choice of \(G\) and \(D\), it is clear that generative models with the 'right' inductive biases will be more effective in learning from the gradient information (Denton et al., 2015; Im et al., 2016; Gregor et al., 2015; Reed et al., 2016a; Yan et al., 2015). With this motivation, we propose a generator that models image generation via a recurrent process: in each time step of the recurrence, an object with its own appearance and shape is generated and warped according to a generated pose to compose an image in layers.

### 3.2 LAYERED STRUCTURE OF IMAGES

An image taken of our 3D world typically contains a layered structure. One way of representing an image layer is by its appearance and shape. As an example, an image \(x\) with two layers, foreground \(f\) and background \(b\), may be factorized as:

\[x = f\odot m + b\odot (1 - m), \quad (2)\]

<--- Page Split ---> where \(\pmb{m}\) is the mask depicting the shapes of the image layers, and \(\odot\) is the element-wise multiplication operator. Some existing methods assume access to the shape of the object either during training (Isola & Liu, 2013) or at both train and test time (Reed et al., 2016a; Yan et al., 2015). Representing images in a layered structure is even more natural for video with moving objects (Darrell & Pentland, 1991; Wang & Adelson, 1994; Kannan et al., 2005). Vondrick et al. (2016) generate videos by separately generating a fixed background and moving foregrounds. A similar way of generating a single image can be found in Kwak & Zhang (2016). Another way is to model the layered structure with object appearance and pose as:

\[\pmb {x} = ST(\pmb {f},\pmb {a}) + \pmb {b}, \quad (3)\]

where \(\pmb{f}\) and \(\pmb{b}\) are the foreground and background, respectively; \(\pmb{a}\) is the affine transformation; and \(ST\) is the spatial transformation operator. Several works fall into this group (Roux et al., 2011; Huang & Murphy, 2015; Eslami et al., 2016). In Huang & Murphy (2015), images are decomposed into layers of objects with specific poses in a variational autoencoder framework, while the number of objects (i.e., layers) is adaptively estimated in Eslami et al. (2016). In contrast to these works, LR-GAN uses a layered composition in which the foreground layers simultaneously model all three dominant factors of variation: appearance \(\pmb{f}\), shape \(\pmb{m}\) and pose \(\pmb{a}\). We elaborate on this in the following section.

## 4 LAYERED RECURSIVE GAN (LR-GAN)

The basic structure of LR-GAN is similar to that of a GAN: it consists of a discriminator and a generator that are simultaneously trained using the minmax formulation of GANs, as described in §3.1. The key innovation of our work is the layered recursive generator, which is what we describe in this section. The generator in LR-GAN is recursive in that the image is constructed recursively using a recurrent network. It is layered in that each recursive step composes an object layer that is 'pasted' onto the image generated so far. The object layer at timestep \(t\) is parameterized by three constituents: a 'canonical' appearance \(\pmb{f}_{t}\), a shape (or mask) \(\pmb{m}_{t}\), and a pose (or affine transformation) \(\pmb{a}_{t}\) for warping the object before pasting it into the image composition. Fig. 2 shows the architecture of LR-GAN with the generator unrolled for generating the background \(\pmb{x}_{0}\) \((\equiv \pmb{x}_{b})\) and foregrounds \(\pmb{x}_{1}\) and \(\pmb{x}_{2}\).
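As a preview of the composition step formalized in Eqn. 4 below, here is a minimal PyTorch sketch of how an affine-warped foreground and mask can be pasted onto the canvas. It is an illustrative fragment under our own naming, not the exact compositor of the released implementation.

```python
import torch.nn.functional as F

def compose(canvas, fg, mask, theta):
    """Paste an affine-warped foreground onto the canvas (cf. Eqns. 2-4).

    canvas: (B, 3, H, W) image composed so far, x_{t-1}
    fg:     (B, 3, H, W) canonical foreground appearance f_t
    mask:   (B, 1, H, W) shape m_t, values in (0, 1) from a sigmoid
    theta:  (B, 2, 3)    affine transformation a_t
    """
    grid = F.affine_grid(theta, canvas.size(), align_corners=False)
    fg_w = F.grid_sample(fg, grid, align_corners=False)   # ST(f_t, a_t)
    # The mask shares the same grid, so appearance and shape stay aligned;
    # the default zero padding outside the source extent matches Section 4.1.
    m_w = F.grid_sample(mask, grid, align_corners=False)  # ST(m_t, a_t)
    return m_w * fg_w + (1.0 - m_w) * canvas
```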
At each time step \(t\), the generator composes the next image \(\pmb{x}_{t}\) via the following recursive computation:

\[\pmb{x}_{t} = \underbrace{ST(\pmb{m}_{t},\pmb{a}_{t})}_{\text{affine-transformed mask}} \odot \underbrace{ST(\pmb{f}_{t},\pmb{a}_{t})}_{\text{affine-transformed appearance}} + \underbrace{(1 - ST(\pmb{m}_{t},\pmb{a}_{t}))\odot\pmb{x}_{t - 1}}_{\text{pasting on image composed so far}}, \quad \forall t\in [1,T] \quad (4)\]

where \(ST(\diamond ,\pmb{a}_{t})\) is a spatial transformation operator that outputs the affine-transformed version of \(\diamond\), with \(\pmb{a}_{t}\) indicating the parameters of the affine transformation. Since our proposed model has an explicit transformation variable \(\pmb{a}_{t}\) that is used to warp the object, it can learn a canonical object representation that can be re-used to generate scenes where the object occurs as a mere transformation of it, such as at different scales or rotations. By factorizing appearance, shape and pose, the object generator can focus on separately capturing the regularities in these three factors that constitute an object. We demonstrate in Sections 5.5 and 5.6 that removing these factorizations from the model leads to it spending capacity on variability that is not solely about the object.

### 4.1 DETAILS OF GENERATOR ARCHITECTURE

Fig. 2 shows our LR-GAN architecture in detail – we use different shapes to indicate different kinds of layers (convolutional, fractional convolutional, (non)linear, etc.), as indicated by the legend. Our model consists of two main pieces – a background generator \(G_{b}\) and a foreground generator \(G_{f}\). \(G_{b}\) and \(G_{f}\) do not share parameters with each other. The \(G_{b}\) computation happens only once, while \(G_{f}\) is recurrent over time, i.e., all object generators share the same parameters. In the following, we introduce each module and the connections between them.

Temporal Connections. LR-GAN has two kinds of temporal connections – informally speaking, one on 'top' and one on 'bottom'. The 'top' connections perform the act of sequentially 'pasting' <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 2: LR-GAN architecture unfolded to three timesteps. It mainly consists of one background generator, one foreground generator, temporal connections and one discriminator. The meaning of each component is explained in the legend. </center> object layers (Eqn. 4). The 'bottom' connections are constructed by an LSTM on the noise vectors \(z_{0}, z_{1}, z_{2}\). Intuitively, this noise-vector LSTM provides information to the foreground generator about what has been generated in the past. In addition, when generating multiple objects, we use a pooling layer \(P_{f}^{c}\) and a fully-connected layer \(E_{f}^{c}\) to extract information from the previously generated object's response map. In this way, the model is able to 'see' previously generated objects.

Background Generator. The background generator \(G_{b}\) is purposely kept simple. It takes the hidden state \(h^{0}\) of the noise-vector LSTM as input and passes it through a number of fractional convolutional layers (also called 'deconvolution' layers in some papers) to generate an image at its end. The output of the background generator, \(x_{b}\), is used as the canvas for the subsequently generated foregrounds.

Foreground Generator. The foreground generator \(G_{f}\) is used to generate an object with appearance and shape.
Correspondingly, \(G_{f}\) consists of three sub-modules: \(G_{f}^{c}\), a common 'trunk' whose outputs are shared by \(G_{f}^{i}\) and \(G_{f}^{m}\); \(G_{f}^{i}\), which generates the foreground appearance \(f_{t}\); and \(G_{f}^{m}\), which generates the mask \(m_{t}\) for the foreground. All three sub-modules consist of one or more fractional convolutional layers combined with batch-normalization and non-linear layers. The generated foreground appearance and mask have the same spatial size as the background. The top of \(G_{f}^{m}\) is a sigmoid layer, so that it generates a one-channel mask whose values lie in \((0, 1)\).

Spatial Transformer. To spatially transform foreground objects, we need to estimate the transformation matrix. As in Jaderberg et al. (2015), we predict the affine transformation matrix with a linear layer \(T_{f}\) that has a six-dimensional output. Based on the predicted transformation matrix, we use a grid generator \(G_{g}\) to generate, for each location in the output, the corresponding sampling coordinates in the input. The generated foreground appearance and mask share the same transformation matrix, and thus the same sampling grid. Given the grid, the sampler \(S\) simultaneously samples \(f_{t}\) and \(m_{t}\) to obtain \(\hat{f}_{t}\) and \(\hat{m}_{t}\), respectively. Differently from Jaderberg et al. (2015), our sampler normally performs downsampling, since the foreground typically has a smaller size than the background. Pixels in \(\hat{f}_{t}\) and \(\hat{m}_{t}\) that are sampled from outside the extent of \(f_{t}\) and \(m_{t}\) are set to zero. Finally, \(\hat{f}_{t}\) and \(\hat{m}_{t}\) are sent to the compositor \(C\), which combines the canvas \(x_{t - 1}\) and \(\hat{f}_{t}\) through layered composition with blending weights given by \(\hat{m}_{t}\) (Eqn. 4). Pseudo-code for our approach and the detailed model configurations are provided in the Appendix. <--- Page Split --->

### 4.2 NEW EVALUATION METRICS

Several metrics have been proposed to evaluate GANs, such as the Gaussian Parzen window (Goodfellow et al., 2014), the Generative Adversarial Metric (GAM) (Im et al., 2016) and the Inception Score (Salimans et al., 2016). The common goal is to measure the similarity between the generated data distribution \(P_{g}(\pmb {x})\) induced by \(G(\pmb{z};\theta_{G})\) and the real data distribution \(P(\pmb {x})\). Most recently, the Inception Score has been used in several works (Salimans et al., 2016; Zhao et al., 2016). However, it is an asymmetric metric and can easily be fooled by generating only the centers of data modes. In addition to these metrics, we present two new metrics based on the following intuition: a sufficient (but not necessary) condition for the closeness of \(P_{g}(\pmb {x})\) and \(P(\pmb {x})\) is the closeness of \(P_{g}(\pmb {x}|y)\) and \(P(\pmb {x}|y)\), i.e., of the distributions of generated data and real data conditioned on all possible variables of interest \(y\), e.g., the category label. One way to obtain this variable of interest \(y\) is via human annotation. Specifically, given data sampled from \(P_{g}(\pmb {x})\) and \(P(\pmb {x})\), we ask people to label the category of the samples according to some rules. Note that such human annotation is often easier than comparing samples from the two distributions (e.g., because there is no 1:1 correspondence between samples with which to conduct forced-choice tests). After the annotations, we need to verify whether the two distributions are similar in each category.
Clearly, directly comparing the distributions \(P_{g}(\pmb {x}|y)\) and \(P(\pmb {x}|y)\) may be as difficult as comparing \(P_{g}(\pmb {x})\) and \(P(\pmb {x})\). Fortunately, we can use Bayes' rule and instead compare \(P_{g}(y|\pmb {x})\) and \(P(y|\pmb {x})\), which is a much easier task. In this case, we can simply train a discriminative model on the samples from \(P_{g}(\pmb {x})\) and \(P(\pmb {x})\), together with the human annotations of the categories of these samples. With a slight abuse of notation, we use \(P_{g}(y|\pmb {x})\) and \(P(y|\pmb {x})\) to denote the probability outputs of these two classifiers (trained on generated samples vs. trained on real samples). We then use these two classifiers to compute the following two evaluation metrics (a code sketch of both metrics is given at the end of Section 5):

Adversarial Accuracy: computes the classification accuracies achieved by the two classifiers on a validation set, which can be the training set or another set of real images sampled from \(P(\pmb {x})\). If \(P_{g}(\pmb {x})\) is close to \(P(\pmb {x})\), we expect to see similar accuracies.

Adversarial Divergence: computes the KL divergence between \(P_{g}(y|\pmb {x})\) and \(P(y|\pmb {x})\). The lower the adversarial divergence, the closer the two distributions are. The lower bound of this metric is exactly zero, attained when \(P_{g}(y|\pmb {x}) = P(y|\pmb {x})\) for all samples in the validation set.

As discussed above, we need human effort to label the real and generated samples. Fortunately, we can simplify this further. Based on the labels given on the training data, we split the training data into categories and train one generator per category. With all these generators, we generate samples of all categories. This strategy is used in our experiments on datasets where labels are available.

## 5 EXPERIMENT

We conduct qualitative and quantitative evaluations on three datasets: 1) MNIST (LeCun et al., 1998); 2) CIFAR-10 (Krizhevsky & Hinton, 2009); 3) CUB-200 (Welinder et al., 2010). To add variability to the MNIST images, we randomly scale (by a factor of 0.8 to 1.2) and rotate (by \(-\frac{\pi}{4}\) to \(\frac{\pi}{4}\)) the digits and then stitch them onto \(48\times 48\) uniform backgrounds with a random grayscale value in [0, 200]. Images are then rescaled back to \(32\times 32\). Each image thus has a different background grayscale value and a different transformed digit as foreground. We call this synthesized dataset MNIST-ONE (a single digit on a gray background). We also synthesize a dataset MNIST-TWO containing two digits on a grayscale background: we randomly select two digit images, perform transformations similar to those described above, and put one on the left and the other on the right side of a \(78\times 78\) gray background. We resize the whole image to \(64\times 64\). We develop LR-GAN based on open source code<sup>1</sup>. We assume the number of objects is known. Therefore, for MNIST-ONE, MNIST-TWO, CIFAR-10, and CUB-200, our model has two, three, two, and two timesteps, respectively. Since the size of a foreground object should be smaller than that of the canvas, we set the minimal allowed scale<sup>2</sup> in the affine transformation to 1.2 for all datasets except MNIST-TWO, for which it is set to 2 (objects are smaller in MNIST-TWO). In LR-GAN, the <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 3: Generated images on CIFAR-10 based on our model. </center> ![](images/6_1.jpg) <center>Figure 4: Generated images on CUB-200 based on our model.
</center> background generator and foreground generator have similar architectures. One difference is that the number of channels in the background generator is half that in the foreground generator. We compare our results to those of DCGAN (Radford et al., 2015). Note that LR-GAN without the LSTM, at the first timestep, corresponds exactly to DCGAN. This allows us to run controlled experiments. In both the generator and the discriminator, all (fractional) convolutional layers have \(4 \times 4\) filters with stride 2. As a result, the number of layers in the generator and discriminator automatically adapts to the size of the training images. Please see the Appendix (Section 6.2) for details of the configurations. We use three metrics for quantitative evaluation: the Inception Score (Salimans et al., 2016) and the proposed Adversarial Accuracy and Adversarial Divergence. Note that we report two versions of the Inception Score: one based on the pre-trained Inception net, and one based on a classifier pre-trained on the target datasets.

### 5.1 QUALITATIVE RESULTS

In Figs. 3 and 4, we show the generated samples for CIFAR-10 and CUB-200, respectively. MNIST results are shown in the next subsection. As we can see from the images, the compositional nature of our model results in images that are free of blending artifacts between backgrounds and foregrounds. For CIFAR-10, we can see horses and cars with clear shapes. For CUB-200, the bird shapes tend to be even sharper.

### 5.2 MNIST-ONE AND MNIST-TWO

We now report the results on MNIST-ONE and MNIST-TWO. Fig. 5 shows the generation results of our model on MNIST-ONE. As we can see, our model generates the background and the foreground in separate timesteps, and can disentangle the foreground digits from the background nearly perfectly. Though the initial values of the mask are randomly distributed in the range (0, 1), after training the masks are nearly binary and accurately carve the digits out of the generated foreground. More results on MNIST-ONE (including human studies) can be found in the Appendix (Section 6.3). Fig. 6 shows the generation results for MNIST-TWO. Similarly, the model is able to generate the background and the two foreground objects separately. The foreground generator tends to generate a single digit at each timestep. Meanwhile, it captures the contextual information from the previous timesteps: when the first digit is placed on the left side, the second one tends to be placed on the right side, and vice versa. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 5: Generation results of our model on MNIST-ONE. From left to right, the image blocks are real images, generated background images, generated foreground images, generated masks and final composite images, respectively. </center> ![](images/7_1.jpg) <center>Figure 6: Generation results of our model on MNIST-TWO. From top left to bottom right (row major), the image blocks are real images, generated background images, foreground images and masks at the second timestep, composite images at the second timestep, generated foreground images and masks at the third timestep and the final composite images, respectively. </center>

### 5.3 CUB-200

We study the effectiveness of our model trained on the CUB-200 bird dataset. In Fig. 1, we have shown a random set of generated images, along with the intermediate generation results of the model.
While being completely unsupervised, the model is, for a large fraction of the samples, able to successfully disentangle the foreground and the background. This is evident from the generated bird-like masks.

![](images/7_2.jpg) <center>Figure 7: Matched pairs of generated images based on DCGAN and LR-GAN, respectively. The odd columns are generated by DCGAN, and the even columns are generated by LR-GAN. These are paired according to the perfect matching based on the Hungarian algorithm. </center> <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 8: Qualitative comparison on CIFAR-10. Top three rows are images generated by DCGAN; bottom three rows are by LR-GAN. From left to right, the blocks display generated images with increasing quality level as determined by human studies. </center>

We conduct a comparative study based on Amazon Mechanical Turk (AMT) between DCGAN and LR-GAN to quantify the relative visual quality of the generated images. We first generated 1000 samples from both models. Then, we performed perfect matching between the two image sets using the Hungarian algorithm on the \(L2\) distance in pixel space. This resulted in 1000 image pairs. Some exemplar pairs are shown in Fig. 7. For each image pair, 9 judges were asked to choose the one that is more realistic. Based on majority voting, we find that our generated images were selected \(68.4\%\) of the time, compared with \(31.6\%\) for DCGAN. This demonstrates that our model generates more realistic images than DCGAN. We attribute this difference to our model's ability to generate the foreground separately from the background, enabling stronger edge cues.

### 5.4 CIFAR-10

We now qualitatively and quantitatively evaluate our model on CIFAR-10, which contains multiple object categories and various backgrounds.

Comparison of image generation quality: We conduct AMT studies to compare the fidelity of image generation. Towards this goal, we generate 1000 images each from DCGAN and LR-GAN. We ask 5 judges to label each image as belonging to one of the 10 categories, as 'non recognizable', or as 'recognizable but not belonging to the listed categories'. We then assign each image a quality level between [0, 5] that captures the number of judges who agree with the majority choice. Fig. 8 shows the images generated by both approaches, ordered by increasing quality level. We merge images at quality levels 0 (all judges said non-recognizable) and 1 together, and similarly images at levels 4 and 5. Visually, the samples generated by our model have clearer boundaries and object structures. We also computed the fraction of non-recognizable images: our model has a \(10\%\) absolute drop in non-recognizability rate (\(67.3\%\) for ours vs. \(77.7\%\) for DCGAN). For reference, \(11.4\%\) of real CIFAR images were categorized as non-recognizable. Fig. 9 shows more generated (intermediate) results of our model.

Quantitative evaluation of generators: We evaluate the generators based on three metrics: 1) Inception Score; 2) Adversarial Accuracy; 3) Adversarial Divergence. To obtain a classifier model for evaluation, we remove the top layer of the discriminator used in our model, and append two fully connected layers on top of it. We train this classifier on the annotated training samples of CIFAR-10. Following Salimans et al. (2016), we generated 50,000 images each based on DCGAN and LR-GAN.

Table 1: Quantitative comparison between DCGAN and LR-GAN on CIFAR-10.

<table><tr><td>Training Data</td><td>Real Images</td><td>DCGAN</td><td>Ours</td></tr><tr><td>Inception Score†</td><td>11.18±0.18</td><td>6.64±0.14</td><td>7.17±0.07</td></tr><tr><td>Inception Score††</td><td>7.23±0.09</td><td>5.69±0.07</td><td>6.11±0.06</td></tr><tr><td>Adversarial Accuracy</td><td>83.33±0.08</td><td>37.81±0.02</td><td>44.22±0.08</td></tr><tr><td>Adversarial Divergence</td><td>0</td><td>7.58±0.04</td><td>5.57±0.06</td></tr></table>

\(^{\dagger}\) Evaluated using the pre-trained Inception net as in Salimans et al. (2016). \(^{\dagger \dagger}\) Evaluated using the supervisedly trained classifier based on the discriminator in LR-GAN.

<--- Page Split ---> ![](images/9_0.jpg) <center>Figure 9: Generation results of our model on CIFAR-10. From left to right, the blocks are: generated background images, foreground images, foreground masks, foreground images carved out by masks, carved foregrounds after spatial transformation, final composite images and nearest neighbor training images to the generated images. </center> ![](images/9_1.jpg) <center>Figure 10: Category specific generation results of our model on CIFAR-10 categories of horse, frog, and cat (top to bottom). The blocks from left to right are: generated background images, foreground images, foreground masks, foreground images carved out by masks, carved foregrounds after spatial transformation and final composite images. </center>

We compute two types of Inception Score: the standard one, based on the Inception net as in Salimans et al. (2016), and a contextual one, based on our trained classifier model. To distinguish them, we denote the standard one as 'Inception Score†' and the contextual one as 'Inception Score††'. To obtain the Adversarial Accuracy and Adversarial Divergence scores, we train one generator on each of the 10 categories for DCGAN and LR-GAN, respectively. Then, we use these generators to generate samples of the different categories. Given these generated samples, we train the classifiers for DCGAN and LR-GAN separately. Together with the classifier trained on the real samples, we compute the Adversarial Accuracy <--- Page Split ---> and Adversarial Divergence on the real training samples. In Table 1, we report the Inception Scores, Adversarial Accuracy and Adversarial Divergence for comparison. We can see that our model outperforms DCGAN across the board. Notably, we obtain different Inception Scores with different classifier models, which indicates that the Inception Score varies with the model used to compute it.

Quantitative evaluation of discriminators: We evaluate the discriminator as an extractor of deep representations. Specifically, we use the output of the last convolutional layer in the discriminator as features, and perform 1-NN classification on the test set given the full training set, with cosine similarity as the metric. On the test set, our model achieves \(62.09\% \pm 0.01\%\) compared to DCGAN's \(56.05\% \pm 0.02\%\).

Contextual generation: We also show the efficacy of our approach in generating diverse foregrounds conditioned on a fixed background. The results in Fig. 17 in the Appendix show that the foreground generator generates objects that are compatible with the background. This indicates that the model has captured contextual dependencies between the image layers.

Category specific models: The objects in CIFAR-10 exhibit huge variability in shape.
That can partly explain why some of the generated shapes in Fig. 9 are not as compelling. To test this hypothesis, we reuse the per-category generators trained for our metrics to obtain generation results. Fig. 10 shows results for the categories 'horse', 'frog' and 'cat'. We can see that the model is now able to generate object-specific appearances and shapes, similar in vein to our results on the CUB-200 dataset.

### 5.5 IMPORTANCE OF TRANSFORMATIONS

![](images/10_0.jpg) <center>Figure 11: Generation results from an ablated LR-GAN model without affine transformations. From top to bottom, the block rows correspond to different datasets: MNIST-ONE, CUB-200, CIFAR-10. From left to right, the blocks show generated background images, foreground images, foreground masks, and final composite images. For comparison, the rightmost column block shows final generated images from a non-ablated model with affine transformations. </center>

Fig. 11 shows results from an ablated model without affine transformations in the foreground layers, and compares them with the full model that does include these transformations. One significant problem emerges: the decompositions are degenerate, in the sense that the model is unable to break the symmetry between foreground and background layers, often generating object appearances in the model's background layer and vice versa. For CUB-200, the final generated images show some blending between foregrounds and backgrounds. This is particularly the case for <--- Page Split ---> ![](images/11_0.jpg) <center>Figure 12: Generation results from an ablated LR-GAN model without the mask generator. The block rows correspond to different datasets (from top to bottom: MNIST-ONE, CUB-200, CIFAR-10). From left to right, the blocks show generated background images, foreground images, transformed foreground images, and final composite images. For comparison, the rightmost column block shows final generated images from a non-ablated model with the mask generator. </center> those images without bird-shaped masks. For CIFAR-10, a number of generated masks are inverted: the background images are carved out as the foreground objects. The foreground generator then takes on almost all of the work of generating the final images, which makes it harder to generate images as clear as those of the model with transformations. These comparisons qualitatively demonstrate the importance of modeling transformations in the foreground generation process. Another merit of using transformations is that the intermediate outputs of the model are more interpretable and facilitate downstream tasks, such as scene parsing, as demonstrated in Section 6.8.

### 5.6 IMPORTANCE OF SHAPES

We perform another ablation study by removing the mask generator to understand the importance of modeling object shapes. In this case, the generated foreground is simply pasted on top of the generated background after being transformed; there is no alpha blending between foregrounds and backgrounds. The generation results for the three datasets MNIST-ONE, CUB-200 and CIFAR-10 are shown in Fig. 12. Though the model works well for MNIST-ONE, it fails to generate reasonable images on the other datasets. In particular, training does not even converge for CUB-200. Based on these results, we qualitatively demonstrate that the mask generator in our model is fairly important for obtaining plausible results, especially for realistic images.
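For concreteness, the following is a minimal sketch of how the Adversarial Accuracy and Adversarial Divergence of Section 4.2 might be computed from the two trained classifiers. The function and variable names are illustrative assumptions, not part of the released code; both classifiers are assumed to return logits over the same label set.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def adversarial_metrics(clf_real, clf_gen, val_images, val_labels):
    """Adversarial Accuracy and Adversarial Divergence (Section 4.2).

    clf_real: classifier trained on real images with human labels
    clf_gen:  classifier trained on generated images with human labels
    """
    logits_r = clf_real(val_images)
    logits_g = clf_gen(val_images)

    # Adversarial Accuracy: similar accuracies indicate P_g(x) close to P(x).
    acc_real = (logits_r.argmax(1) == val_labels).float().mean().item()
    acc_gen = (logits_g.argmax(1) == val_labels).float().mean().item()

    # Adversarial Divergence: KL(P(y|x) || P_g(y|x)), averaged over the
    # validation set; exactly zero when the two posteriors coincide.
    log_p_r = F.log_softmax(logits_r, dim=1)
    log_p_g = F.log_softmax(logits_g, dim=1)
    kl = (log_p_r.exp() * (log_p_r - log_p_g)).sum(1).mean().item()
    return acc_real, acc_gen, kl
```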
## 6 APPENDIX

### 6.1 ALGORITHM

Algo. 1 illustrates the generative process in our model. \(g(\star)\) evaluates the function \(g\) at \(\star\). \(\circ\) is a composition operator that composes its operands so that \(f \circ g(\star) = f(g(\star))\).

Algorithm 1: Stochastic Layered Recursive Image Generation

1: \(z_{0}\sim \mathcal{N}(0,I)\)
2: \(x_{0} = G_{b}(z_{0})\) \(\triangleright\) background generator
3: \(h^{0}\gets 0\)
4: \(c^{0}\gets 0\)
5: for \(t\in [1\dots T]\) do
6: &nbsp;&nbsp; \(z_{t}\sim \mathcal{N}(0,I)\)
7: &nbsp;&nbsp; \(h^{t},c^{t}\gets \mathrm{LSTM}([z_{t},h^{t - 1},c^{t - 1}])\) \(\triangleright\) pass through LSTM
8: &nbsp;&nbsp; if \(t = 1\) then
9: &nbsp;&nbsp;&nbsp;&nbsp; \(y_{t}\gets h^{t}\)
10: &nbsp;&nbsp; else
11: &nbsp;&nbsp;&nbsp;&nbsp; \(y_{t}\gets E_{f}^{l}([h^{t},h_{f}^{t - 1}])\) \(\triangleright\) pass through non-linear embedding layers \(E_{f}^{l}\)
12: &nbsp;&nbsp; end if
13: &nbsp;&nbsp; \(s_{t}\gets G_{f}^{c}(y_{t})\) \(\triangleright\) predict shared cube for \(G_{f}^{i}\) and \(G_{f}^{m}\)
14: &nbsp;&nbsp; \(a_{t}\gets T_{f}(y_{t})\) \(\triangleright\) object transformation
15: &nbsp;&nbsp; \(f_{t}\gets G_{f}^{i}(s_{t})\) \(\triangleright\) generate object appearance
16: &nbsp;&nbsp; \(m_{t}\gets G_{f}^{m}(s_{t})\) \(\triangleright\) generate object shape
17: &nbsp;&nbsp; \(h_{f}^{t}\gets E_{f}^{c}\circ P_{f}^{c}(s_{t})\) \(\triangleright\) predict shared representation embedding
18: &nbsp;&nbsp; \(x_{t}\gets ST(m_{t},a_{t})\odot ST(f_{t},a_{t}) + (1 - ST(m_{t},a_{t}))\odot x_{t - 1}\)
19: end for

### 6.2 MODEL CONFIGURATIONS

Table 2 lists dataset information and the model configuration for the different datasets. The dimensions of the random vectors and hidden vectors are all set to 100. We also compare the number of parameters in DCGAN and LR-GAN: the numbers before '/' are for our model, and those after '/' are for DCGAN. Using the same notation as in Zhao et al. (2016), the architectures for the different datasets are: <--- Page Split ---> Table 2: Information and model configurations on different datasets. <table><tr><td>Dataset</td><td>MNIST-ONE</td><td>MNIST-TWO</td><td>CIFAR-10</td><td>CUB-200</td></tr><tr><td>Image Size</td><td>32</td><td>64</td><td>32</td><td>64</td></tr><tr><td>#Images</td><td>60,000</td><td>60,000</td><td>50,000</td><td>5,994</td></tr><tr><td>#Timesteps</td><td>2</td><td>3</td><td>2</td><td>2</td></tr><tr><td>#Parameters</td><td>5.25M/4.11M</td><td>7.53M/6.33M</td><td>5.26M/4.11M</td><td>27.3M/6.34M</td></tr></table>

- MNIST-ONE: \(G_{b}\): (256)4c-(128)4c2s-(64)4c2s-(3)4c2s; \(G_{f}^{c}\): (512)4c-(256)4c2s-(128)4c2s; \(G_{f}^{i}\): (3)4c2s; \(G_{f}^{m}\): (1)4c2s; \(D\): (64)4c2s-(128)4c2s-(256)4c2s-(256)4p4s-1
- MNIST-TWO: \(G_{b}\): (256)4c-(128)4c2s-(64)4c2s-(32)4c2s-(3)4c2s; \(G_{f}^{c}\): (512)4c-(256)4c2s-(128)4c2s-(64)4c2s; \(G_{f}^{i}\): (3)4c2s; \(G_{f}^{m}\): (1)4c2s; \(D\): (64)4c2s-(128)4c2s-(256)4c2s-(512)4c2s-(512)4c2s-(512)4p4s-1
- CUB-200: \(G_{b}\): (512)4c-(256)4c2s-(128)4c2s-(64)4c2s-(3)4c2s; \(G_{f}^{c}\): (1024)4c-(512)4c2s-(256)4c2s-(128)4c2s; \(G_{f}^{i}\): (3)4c2s; \(G_{f}^{m}\): (1)4c2s; \(D\): (128)4c2s-(256)4c2s-(512)4c2s-(1024)4c2s-(1024)4p4s-1
- CIFAR-10: \(G_{b}\): (256)4c-(128)4c2s-(64)4c2s-(3)4c2s; \(G_{f}^{c}\): (512)4c-(256)4c2s-(128)4c2s; \(G_{f}^{i}\): (3)4c2s; \(G_{f}^{m}\): (1)4c2s; \(D\): (64)4c2s-(128)4c2s-(256)4c2s-(256)4p4s-1

### 6.3 RESULTS ON MNIST-ONE

We conduct human studies on the generation results on MNIST-ONE. Specifically, we generate 1,000 images using both LR-GAN and DCGAN. As references, we also include 1,000 real images. We then ask users on AMT to label each image as one of the digits (0-9).
We also provide an option 'non recognizable' in case the generated image does not appear to contain a digit. Each image was judged by 5 unique workers. As for CIFAR-10, if an image is recognized as the same digit by all 5 users, it is assigned quality level 5; if it is not recognizable according to all users, it is assigned quality level 0. Fig. 13 (left) shows the number of images assigned to each of the six quality levels. Compared to DCGAN, our model generates more samples at high quality levels. As expected, the real images have many samples at high quality levels. In Fig. 13 (right), we show the number of images recognized as each digit category (0-9). For qualitative comparison, we show exemplar images at each quality level in Fig. 14. From left to right, the quality level increases from 0 to 5. As expected, images with higher quality levels are clearer. For quantitative evaluation, we follow the same protocol as for CIFAR-10. The classifier model used for the contextual Inception Score is trained on the training set. We generate 60,000 samples each from DCGAN and LR-GAN for evaluation. To obtain the Adversarial Accuracy and Adversarial Divergence, we first train 10 generators for the 10 digit categories separately, and then use the generated samples to train the classifier. As shown in Table 3, our model has higher scores than DCGAN on both the standard and the contextual Inception Score, as well as a slightly higher adversarial accuracy and a lower adversarial divergence. We find that all three image sets have low standard Inception Scores. This is mainly because the Inception net is trained on ImageNet, which has a very different data distribution from the MNIST dataset. Based on this, we argue that the standard Inception Score is not suitable for some image datasets.

![](images/14_0.jpg) <center>Figure 13: Statistics of annotations in human studies on MNIST-ONE. Left: distribution of quality levels; right: distribution of recognized digit categories. </center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 14: Qualitative comparison on MNIST-ONE. Top three rows are samples generated by DCGAN. Bottom three rows are samples generated by LR-GAN. The quality level increases from left to right as determined via human studies. </center>

Table 3: Quantitative comparison on MNIST-ONE.

<table><tr><td>Training Data</td><td>Real Images</td><td>DCGAN</td><td>Ours</td></tr><tr><td>Inception Score†</td><td>1.83±0.01</td><td>2.03±0.01</td><td>2.06±0.01</td></tr><tr><td>Inception Score††</td><td>9.15±0.04</td><td>6.42±0.03</td><td>7.15±0.04</td></tr><tr><td>Adversarial Accuracy</td><td>95.22 ± 0.25</td><td>26.12 ± 0.07</td><td>26.61 ± 0.06</td></tr><tr><td>Adversarial Divergence</td><td>0</td><td>8.47 ± 0.03</td><td>8.39 ± 0.04</td></tr></table>

\(^{\dagger}\) Evaluated using the pre-trained Inception net as in Salimans et al. (2016). \(^{\dagger \dagger}\) Evaluated using the supervisedly trained classifier based on the discriminator in LR-GAN.

### 6.4 MORE RESULTS ON CUB-200

In this experiment, we reduce the minimal allowed object scale to 1.1, which allows the model to generate larger foreground objects. The results are shown in Fig. 15. As with the results for the constraint of 1.2, crisp bird-like masks are generated automatically by our model.

![](images/15_1.jpg) <center>Figure 15: Generation results of our model on CUB-200 when setting the minimal allowed scale to 1.1.
From left to right, the blocks show the generated background images, foreground images, foreground masks, foreground images carved out by masks, and carved foreground images after spatial transformation. The sixth and seventh blocks are final composite images and the nearest neighbor real images. </center> <--- Page Split --->

### 6.5 MORE RESULTS ON CIFAR-10

#### 6.5.1 QUALITATIVE RESULTS

In Fig. 16, we show more results on CIFAR-10 with the minimal allowed object scale set to 1.1. The rightmost column block also shows the training images that are closest to the generated images (cosine similarity in pixel space). We can see that our model does not memorize the training data.

![](images/16_0.jpg) <center>Figure 16: Generation results of our model on CIFAR-10 with the minimal allowed scale set to 1.1. From left to right, the layout is the same as in Fig. 15. </center>

#### 6.5.2 WALKING IN THE LATENT SPACE

Similarly to DCGAN, we also show results of walking in the latent space. Note that our model has two or more latent inputs, so we can walk along any of them or their combination. In Fig. 17, we generate multiple foregrounds for the same fixed generated background. We find that our model consistently generates contextually compatible foregrounds. For example, for grass-like backgrounds, the foreground generator generates horses and deer, and it generates airplane-like objects for blue skies.

#### 6.5.3 WORD CLOUD BASED ON HUMAN STUDY

As mentioned above, we conducted human studies on CIFAR-10. Besides asking people to select a name from a list for each image, we conducted another human study in which we ask people to use one word (free-form) to describe the main object in the image. Each image was 'named' by 5 unique people. We generate word clouds for real images, images generated by DCGAN, and images generated by LR-GAN, as shown in Fig. 18.

### 6.6 RESULTS ON LFW FACE DATASET

We conduct experiments on face images from the LFW dataset (Huang et al., 2007). Unlike previous works, which use cropped and aligned faces, we directly generate the original images, which contain a large portion of background. This configuration helps to verify the ability of LR-GAN to model object appearance, shape and pose. In Fig. 19, we show the (intermediate) generation results of LR-GAN. Surprisingly, without any supervision, the model generates the background and faces in separate steps, and the generated masks accurately depict face shapes. Moreover, the model <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 17: Walking in the latent foreground space by fixing backgrounds in our model on CIFAR-10. From left to right, the blocks are: generated background images, foreground images, foreground masks, foreground images carved out by masks, carved out foreground images after spatial transformation, and final composite images. Each row has the same background, but different foregrounds. </center> ![](images/17_1.jpg) <center>Figure 18: Statistics of annotations in human studies on CIFAR-10. Left to right: word cloud for real images, images generated by DCGAN, images generated by LR-GAN. </center> ![](images/17_2.jpg) <center>Figure 19: Generation results of our model on LFW. From left to right, the blocks are: generated background images, foreground images, foreground masks, carved out foreground images after spatial transformation, and final composite images. </center> <--- Page Split ---> learns where to place the generated faces so that the whole image looks natural.
For comparison, please refer to Kwak & Zhang (2016), which does not model transformations and whose generation results degrade considerably.

### 6.7 STATISTICS ON TRANSFORMATION MATRICES

In this part, we analyze the statistics of the transformation matrices generated by our model for different datasets: MNIST-ONE, CUB-200, CIFAR-10 and LFW. We use affine transformations in our model, so there are 6 parameters: scaling in the x coordinate \((s_{x})\), scaling in the y coordinate \((s_{y})\), translation in the x coordinate \((t_{x})\), translation in the y coordinate \((t_{y})\), rotation in the x coordinate \((r_{x})\) and rotation in the y coordinate \((r_{y})\). In Fig. 20, we show histograms of the different parameters for the different datasets. These histograms show that the model produces non-trivially varied scaling, translation and rotation on all datasets. For different datasets, the learned transformations exhibit different patterns. We hypothesize that this is mainly determined by the configurations of objects in the images. For example, on MNIST-ONE, all six parameters fluctuate, since the synthetic dataset contains digits randomly placed at different locations. For the other three datasets, the scalings converge to a single value, since the object sizes do not vary much and the variations in rotation and translation suffice to generate realistic images. Specifically, we find that the generator relies largely on translation along the x coordinate for generating CUB-200. This makes sense, since the birds in the images have similar scales and orientations but various horizontal locations. For CIFAR-10, since there are 10 different object categories, the configurations are more diverse; hence the generator uses all parameters except the scaling. For LFW, since faces have similar configurations, the learned transformations show less fluctuation as well. In summary, LR-GAN indeed models transformations of the foreground to generate images.

### 6.8 CONDITIONAL IMAGE GENERATION

Since our model can generate object-like masks (shapes) for images, we conducted an experiment to evaluate whether our model can potentially be used for image segmentation and object detection. We make some changes to the model. For the background generator, the input is a real image instead of a random vector: the image is passed through an encoder to extract hidden features, which replace the random vector \(z_{0}\) and are fed to the background generator. For the foreground generator, we subtract the image generated by the background generator from the input image to obtain a residual image. This residual image is fed to the same encoder, and the resulting hidden features are used as the input to the foreground generator. In this conditional model we want to reconstruct the image, so we add a reconstruction loss alongside the adversarial loss. We train this conditional model on CIFAR-10. The (intermediate) outputs of the model are shown in Fig. 21. Interestingly, the model successfully learns to decompose the input images into background and foreground. The background generator tends to perform image inpainting by generating a complete background without objects, while the foreground generator works as a segmentation model that extracts the object mask from the input image. Similarly, we also run the conditional LR-GAN on the LFW dataset. As we can see in Fig.
22, the foreground generator automatically and consistently learns to generate the face regions, even though there is a large portion of background in the input images. In other words, the conditional LR-GAN successfully learns to detect faces in images (a schematic code sketch of this conditional variant is given at the end of this appendix). We suspect this success arises because it is cheaper for the two generators to specialize, so that training converges to a solution in which the first generator produces the background and the second generator produces the face. Based on these experiments, we argue that our model can potentially be used for image segmentation and object detection in a generative and unsupervised manner. One direction for future work is to verify this by applying the model to high-resolution and more complicated datasets. <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 20: Histograms of transformation parameters learnt by our model for different datasets. From left to right, the datasets are: MNIST-ONE, CUB-200, CIFAR-10 and LFW. From top to bottom: scaling \(s_{x}\), \(s_{y}\), translation \(t_{x}\), \(t_{y}\), and rotation \(r_{x}\), \(r_{y}\) in the \(x\) and \(y\) coordinates, respectively. </center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 21: Conditional generation results of our model on CIFAR-10. From left to right, the blocks are: real images, generated background images, foreground images, foreground masks, foreground images carved out by masks, carved foreground images after spatial transformation, and final composite (reconstructed) images. </center> ![](images/20_1.jpg) <center>Figure 22: Conditional generation results of our model on LFW, displayed with the same layout as Fig. 21. </center> <--- Page Split --->
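Putting the pieces of Section 6.8 together, the following is a schematic sketch of the conditional forward pass. The names `E`, `G_b`, `G_f`, the loss weighting `lam`, and the assumed interface of the foreground generator are illustrative assumptions; `compose` refers to the compositing sketch given in Section 4.

```python
import torch.nn.functional as F

def conditional_step(E, G_b, G_f, x, lam=1.0):
    """Schematic conditional LR-GAN pass (Section 6.8).

    E:   shared image encoder replacing the noise vectors
    G_b: background generator
    G_f: foreground generator, assumed to return (appearance, mask, theta)
    """
    bg = G_b(E(x))                         # inpainting-like background from the image
    fg, mask, theta = G_f(E(x - bg))       # foreground from the residual image
    recon = compose(bg, fg, mask, theta)   # layered composition, cf. Eqn. 4
    rec_loss = lam * F.mse_loss(recon, x)  # added to the usual adversarial loss
    return recon, rec_loss
```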
accept
Accept (Poster)
6.333333
ICLR_2017_paper_0436
iclr
2,017
# WHAT DOES IT TAKE TO GENERATE NATURAL TEXTURES? Ivan Ustyuzhaninov \(^{*,1,2,3}\) , Wieland Brendel \(^{*,1,2}\) , Leon Gatys \(^{1,2,3}\) , Matthias Bethge \(^{1,2,3,4}\) \(^{*}\) contributed equally \(^{1}\) Centre for Integrative Neuroscience, University of Tübingen, Germany \(^{2}\) Bernstein Center for Computational Neuroscience, Tübingen, Germany \(^{3}\) Graduate School of Neural Information Processing, University of Tübingen, Germany \(^{4}\) Max Planck Institute for Biological Cybernetics, Tübingen, Germany first.last@bethgelab.org ## ABSTRACT Natural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state- of- the- art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters. ## 1 INTRODUCTION During the last two years several different approaches towards natural image generation have been suggested, among them generative adversarial networks (Goodfellow et al., 2014; Chen et al., 2016), probabilistic generative models like the conditional PixelCNN (van den Oord et al., 2016b;a) or maximum entropy models that rely on the representations of deep neural networks (e.g. Gatys et al., 2015b; Johnson et al., 2016; Ulyanov et al., 2016). The latter approach has been particularly groundbreaking for artistic style transfer and natural texture generation (e.g. Gatys et al., 2015a;b) and has the potential to uncover the regularities that supervisedly trained deep neural networks infer from natural images. For the sake of clarity and concreteness, this paper will focus on natural texture synthesis. Parametric texture models aim to uniquely describe each texture by a set of statistical measurements that are taken over the spatial extent of the image. Each image with the same spatial summary statistics should be perceived as the same texture. Consequently, synthesizing a texture corresponds to finding a new image that reproduces the summary statistics inferred from the reference texture. Starting from Nth- order joint histograms of the pixels by Julesz (1962), many different statistical measures have been proposed (see e.g. Heeger & Bergen, 1995; Portilla & Simoncelli, 2000). The quality of the synthesized textures is usually determined by human inspection; the synthesis is successful if a human observer cannot tell the reference texture from the synthesized ones. The current state of the art in parametric texture modeling (Gatys et al., 2015a) employs the hierarchical image representation in a deep 19- layer convolutional network (Simonyan & Zisserman (2014); in the following referred to as VGG network) that was trained on object recognition in natural images (Russakovsky et al. (2015)). In this model textures are described by the raw correlations between feature activations in response to the texture image from a collection of network layers (see section 5 for details). 
Since its initial reception, several papers have explored which additional elements or constraints can further increase the perceptual quality of the generated textures (Berger & Memisevic, 2016; Liu et al., 2016; Aittala et al., 2016). In this work we go the opposite way and ask which elements of the original texture synthesis algorithm (Gatys et al., 2015a) are absolutely indispensable. <--- Page Split ---> In particular, two aspects have been deemed critical for natural texture synthesis: the hierarchical multi-layer representation of the textures, and the supervised training of the feature spaces. Here we show that neither aspect is imperative for texture modeling and that in fact a single convolutional layer with random features can synthesize textures that often rival the perceptual quality of Gatys et al. (2015a). This is in contrast to earlier reports (Gatys et al., 2015a) which suggested that networks with random weights fail to generate perceptually interesting images. We suggest that this discrepancy originates from a more elaborate tuning of the optimization procedure (see section 4). Our main contributions are:

- We present a strong minimal baseline for parametric texture synthesis that relies solely on a single-layer network and random, data-independent filters.
- We show that textures synthesized from the baseline are of high quality and often rival state-of-the-art approaches, suggesting that the depth and the pre-training of multi-layer image representations are not as indispensable for natural image generation as has previously been thought.
- We test and compare a wide range of single-layer architectures with different filter sizes and different types of filters (random, hand-crafted and unsupervisedly learnt filters) against the state-of-the-art texture model by Gatys et al. (2015a).
- We utilize a quantitative texture quality measure based on the synthesis loss in the VGG-based model (Gatys et al., 2015a) to replace the commonplace evaluation of texture models through qualitative human inspection.
- We discuss a formal generalization of maximum entropy models to account for the natural variability of textures with limited spatial extent.

## 2 CONVOLUTIONAL NEURAL NETWORK

Unless mentioned otherwise, all our models employ single-layer CNNs with standard rectified linear units (ReLUs) and convolutions with stride one, no bias, and padding \((f - 1)/2\), where \(f\) is the filter size. This choice ensures that the spatial dimensions of the output feature maps are the same as those of the input. All networks except the last one employ filters of size \(11 \times 11 \times 3\) (filter width \(\times\) filter height \(\times\) no. of input channels), but the number of feature maps as well as the selection of the filters differ:

- Fourier-363: Each color channel (R, G, B) is filtered separately by each element \(\mathbf{B}_i \in \mathbb{R}^{11 \times 11}\) of the 2D Fourier basis (\(11 \times 11 = 121\) feature maps per channel), yielding \(3 \cdot 121 = 363\) feature maps in total. More concretely, each filter can be described as the tensor product \(\mathbf{B}_i \otimes \mathbf{e}_k\), where the elements of the unit-norm \(\mathbf{e}_k \in \mathbb{R}^3\) are all zero except one.
- Fourier-3267: All color channels (R, G, B) are filtered simultaneously by each element \(\mathbf{B}_i\) of the 2D Fourier basis but with different weighting terms \(w_R, w_G, w_B \in \{1, 0, -1\}\), yielding \(3 \cdot 3 \cdot 3 \cdot 121 = 3267\) feature maps in total.
More concretely, each filter can be described by the tensor product \(\mathbf{B}_i \otimes [w_R, w_G, w_B]\).

- Kmeans-363: We randomly sample and whiten 1e7 patches of size \(11 \times 11\) from the Imagenet dataset (Russakovsky et al., 2015), partition the patches into 363 clusters using k-means (Rubinstein et al., 2009), and use the cluster means as convolutional filters.
- Kmeans-3267: Same as Kmeans-363 but with 3267 clusters.
- Kmeans-NonWhite-363/3267: Same as Kmeans-363/3267 but without whitening of the patches.
- Kmeans-Sample-363/3267: Same as Kmeans-363/3267, but patches are only sampled from the target texture.
- PCA-363: We randomly sample 1e7 patches of size \(11 \times 11\) from the Imagenet dataset (Russakovsky et al., 2015), vectorize each patch, perform PCA and use the set of principal axes as convolutional filters.
- Random-363: Filters are drawn from a uniform distribution according to Glorot & Bengio (2010); 363 feature maps in total.
- Random-3267: Same as Random-363 but with 3267 feature maps.

<--- Page Split ---> ![](images/2_0.jpg) <center>Figure 1: Influence of the feature maps on texture synthesis performance. (Top) Samples synthesized from several single-layer models with 363 feature maps (see sec. 2) for three different textures (rows). Reference textures are shown in the first column. (Bottom) Samples synthesized from several single-layer models with 3267 feature maps (see sec. 2) for three different textures (rows). Additionally, the first column shows samples from the VGG model (Gatys et al., 2015a), and the last column from the multi-scale model (with 1024 feature maps). </center>

- Random-Multiscale: Eight different filter sizes \(f \times f \times 3\) with \(f = 3, 5, 7, 11, 15, 23, 37, 55\) and 128 feature maps each (1024 feature maps in total). Filters are drawn from a uniform distribution according to Glorot & Bengio (2010).

The networks were implemented in Lasagne (Dieleman et al., 2015; Theano Development Team, 2016). We remove the DC component of the inputs by subtracting the mean intensity of each color channel (estimated over the Imagenet dataset (Russakovsky et al., 2015)).

## 3 TEXTURE MODEL

The texture model closely follows Gatys et al. (2015a). In essence, to characterise a given vectorised texture \(\mathbf{x} \in \mathbb{R}^{M}\), we first pass \(\mathbf{x}\) through the convolutional layer and compute the output activations. The output can be understood as a non-linear filter bank, and thus its activations form a set of filtered images (so-called feature maps). For \(N\) distinct feature maps, the rectified output activations can be <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: Influence of the scale and the non-linearity on texture synthesis performance. (1st column) Samples from the random multi-scale model for comparison (same as in Fig. 1). (2nd - 7th column) Samples from the random single-scale model with different spatial filter sizes. (Last column) Samples from the random multi-scale model without ReLU nonlinearity. </center> described by a matrix \(\mathbf{F} \in \mathbb{R}^{N \times M}\). To capture the stationary structure of the textures, we compute the covariances (or, more precisely, the Gram matrix) \(\mathbf{G} \in \mathbb{R}^{N \times N}\) between the feature activations \(\mathbf{F}\) by averaging the outer product of the point-wise feature vectors,

\[G_{ij} = \frac{1}{M}\sum_{m = 1}^{M}F_{im}F_{jm}.
We will denote by \(\mathbf{G}(\mathbf{x})\) the Gram matrix of the feature activations for the input \(\mathbf{x}\). To determine the relative distance between two textures \(\mathbf{x}\) and \(\mathbf{y}\) we compute the Euclidean distance between the normalized Gram matrices,
\[d(\mathbf{x},\mathbf{y}) = \frac{1}{\sqrt{\sum_{m,n}G_{mn}(\mathbf{x})^{2}}\sqrt{\sum_{m,n}G_{mn}(\mathbf{y})^{2}}}\sum_{i,j = 1}^{N}\left(G_{ij}(\mathbf{x}) - G_{ij}(\mathbf{y})\right)^{2}. \quad (2)\]
To compare with the distance in the raw pixel values, we compute
\[d_{p}(\mathbf{x},\mathbf{y}) = \frac{1}{\sqrt{\sum_{m}x_{m}^{2}}\sqrt{\sum_{m}y_{m}^{2}}}\sum_{i = 1}^{M}\left(x_{i} - y_{i}\right)^{2}. \quad (3)\]

## 4 TEXTURE SYNTHESIS

To generate a new texture we start from a uniform noise image (in the range [0, 1]) and iteratively optimize it to match the Gram matrix of the reference texture. More precisely, let \(\mathbf{G}(\mathbf{x})\) be the Gram matrix of the reference texture. The goal is to find a synthesised image \(\tilde{\mathbf{x}}\) such that the squared distance between \(\mathbf{G}(\mathbf{x})\) and the Gram matrix \(\mathbf{G}(\tilde{\mathbf{x}})\) of the synthesised image is minimized, i.e.
\[\tilde{\mathbf{x}} = \underset{\mathbf{y}\in \mathbb{R}^{M}}{\mathrm{argmin}}\, E(\mathbf{y}), \quad (4)\]
\[E(\mathbf{y}) = \frac{1}{\sum_{i,j = 1}^{N}G_{ij}(\mathbf{x})^{2}}\sum_{i,j = 1}^{N}\left(G_{ij}(\mathbf{x}) - G_{ij}(\mathbf{y})\right)^{2}. \quad (5)\]
The gradient \(\partial E(\mathbf{y})/\partial \mathbf{y}\) of the reconstruction error with respect to the image can readily be computed using standard backpropagation, which we then use in conjunction with the L-BFGS-B algorithm (Jones et al., 2001) to solve (4). We leave all parameters of the optimization algorithm at their default values except for the maximum number of iterations (2000), and add box constraints with range [0, 1]. In addition, we scale the loss and the gradients by a factor of \(10^{7}\) in order to avoid early stopping of the optimization algorithm.
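The following sketch (ours) mirrors this procedure with SciPy's L-BFGS-B, reusing `gram()` and `relu_feature_matrix()` from the earlier sketch. The paper backpropagates the analytic gradient (via Theano); purely for illustration on tiny images, we let SciPy approximate the gradient numerically, which is far too slow for practical use:

```python
# Minimal sketch of Eqs. (4)-(5) (ours): Gram matching with L-BFGS-B,
# box constraints in [0, 1], and the 1e7 loss scaling mentioned above.
import numpy as np
from scipy.optimize import minimize

def synthesize(x_ref, filters, max_iter=2000, seed=1):
    shape = x_ref.shape                                # e.g. (3, 16, 16)
    G_ref = gram(relu_feature_matrix(x_ref, filters))  # reference statistics
    norm = (G_ref ** 2).sum()

    def loss(y_flat):
        F_y = relu_feature_matrix(y_flat.reshape(shape), filters)
        return 1e7 * ((G_ref - gram(F_y)) ** 2).sum() / norm  # scaled Eq. (5)

    y0 = np.random.default_rng(seed).uniform(0.0, 1.0, size=x_ref.size)
    res = minimize(loss, y0, method="L-BFGS-B",
                   bounds=[(0.0, 1.0)] * y0.size,  # box constraint [0, 1]
                   options={"maxiter": max_iter})  # numerical gradient only
    return res.x.reshape(shape)
```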
![](images/4_0.jpg)
<center>Figure 3: Each row shows the reference texture (left, gray background) and three samples that were synthesized from different (random) initial images using our multi-scale model. Most importantly, the multi-scale model generates samples that are perceptually different. All three textures are taken from Portilla & Simoncelli (2000).</center>

## 5 TEXTURE EVALUATION

Evaluating the quality of synthesized textures is traditionally performed by human inspection. Optimal texture synthesis should generate samples that humans perceive as being the same texture as the reference. The high quality of the textures synthesized by Gatys et al. (2015a) suggests that the summary statistics from multiple layers of VGG can approximate the perceptual metric of humans. Even though the VGG texture representation is not perfect, this allows us to utilize these statistics as a more objective quantification of texture quality. For all details of the VGG-based texture model see Gatys et al. (2015a). Here we use the standard 19-layer VGG network (Simonyan & Zisserman, 2014) with pretrained weights and average- instead of max-pooling. We compute a Gram matrix on the output of each convolutional layer that follows a pooling layer. Let \(\mathbf{G}^{\ell}(\cdot)\) be the Gram matrix on the activations of the \(\ell\)-th layer and let
\[E^{\ell}(\mathbf{y}) = \frac{1}{\sum_{i,j = 1}^{N}G_{ij}^{\ell}(\mathbf{x})^{2}}\sum_{i,j = 1}^{N}\left(G_{ij}^{\ell}(\mathbf{x}) - G_{ij}^{\ell}(\mathbf{y})\right)^{2} \quad (6)\]
be the corresponding relative reconstruction cost. The total reconstruction cost is then defined as the average distance between the reference Gram matrices and the synthesized ones, i.e.
\[E(\mathbf{y}) = \frac{1}{5}\sum_{\ell = 1}^{5}E^{\ell}(\mathbf{y}). \quad (7)\]
This cost is reported on top of each synthesised texture in Figure 4. To visually evaluate samples from our single- and multi-scale models against the VGG-based model (Gatys et al., 2015a), we additionally synthesize textures from VGG by minimizing (7) using L-BFGS-B as in section 4.
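To make Eqs. (6)-(7) concrete, here is a small sketch (ours) of the evaluation loss. It assumes the five per-layer Gram matrices have already been computed with a pretrained VGG-19 in any standard deep learning framework; which five layers enter the average follows the description above.

```python
# Minimal sketch of the VGG-based evaluation loss in Eqs. (6)-(7) (ours).
# grams_ref / grams_syn: lists of five Gram matrices, one per selected
# VGG-19 layer, computed for the reference and the synthesized texture.
import numpy as np

def layer_cost(G_ref, G_syn):
    # Eq. (6): relative squared Gram reconstruction cost for one layer.
    return ((G_ref - G_syn) ** 2).sum() / (G_ref ** 2).sum()

def vgg_loss(grams_ref, grams_syn):
    # Eq. (7): average of the five per-layer costs.
    assert len(grams_ref) == len(grams_syn) == 5
    return float(np.mean([layer_cost(a, b)
                          for a, b in zip(grams_ref, grams_syn)]))
```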
## 6 RESULTS

In Fig. 1 we show textures synthesised from the two random single- and multi-scale models, as well as eight other non-random single-layer models, for three different source images (top left). For comparison, we also plot samples generated from the VGG model (Gatys et al., 2015a) (bottom left). There are roughly two groups of models: those with a small number of feature maps (363, top row), and those with a large number of feature maps (3267, bottom row). Only the multi-scale model employs 1024 feature maps. Within each group, we can differentiate models whose filters are unsupervisedly trained on natural images (e.g. sparse coding filters from k-means), hand-crafted filter banks (e.g. the 2D Fourier basis) and completely random filters (see sec. 2 for all details). All single-layer networks except the multi-scale one feature \(11 \times 11\) filters.

Remarkably, despite the small spatial size of the filters, all models capture much of the small- and mid-scale structure of the textures, in particular if the number of feature maps is large. Notably, the scale of these structures extends far beyond the receptive fields of the single units (see e.g. the pebble texture). We further observe that a larger number of feature maps generally increases the perceptual quality of the generated textures. Surprisingly, however, completely random filters perform on par with or better than filters that have been trained on the statistics of natural images. This is particularly true for the multi-scale model, which clearly outperforms the single-scale models on all textures. The structures captured by the multi-scale model are generally much larger and often reach the full size of the texture (see e.g. the wall).

While the above results show that for natural texture synthesis one needs neither a hierarchical deep network architecture with spatial pooling nor filters that are adapted to the statistics of natural images, we now focus on the aspects that are crucial for high-quality texture synthesis. First, we evaluate whether the success of the random multi-scale network arises from the combination of filters on multiple scales or whether it is simply the increased size of its largest receptive fields (\(55 \times 55\) vs. \(11 \times 11\)) that leads to the improvement compared to the single-scale model. Thus, to investigate the influence of the spatial extent of the filters and the importance of combining multiple filter sizes in one model, we generate textures from multiple single-scale models, where each model has the same number of random filters as the multi-scale model (1024) but only uses filters from a single scale of the multi-scale model (Fig. 2). We find that while \(3 \times 3\) filters mainly capture the marginal distribution of the color channels, larger filters like \(11 \times 11\) model small- to mid-scale structures (like small stones) but miss more long-range structures (larger stones are not well separated). Very large filters like \(55 \times 55\), on the other hand, are capable of modeling long-range structures but then miss much of the small- to mid-scale statistics (like the texture of the stone). Therefore we conclude that the combination of different scales in the multi-scale network is important for good texture synthesis, since it allows the model to simultaneously capture small-, mid- and long-range correlations of the textures. Finally, we note that a further indispensable component of good texture models is the non-linearity: textures synthesised from the multi-scale model without ReLU (Fig. 2, right column) are unable to capture the statistical dependencies of the texture.

The perceptual quality of the textures generated from models with only a single layer and random filters is quite remarkable and surpasses parametric methods like Portilla & Simoncelli (2000) that were the state of the art two years ago (before the use of DNNs). The multi-scale model often rivals the current state of the art (Gatys et al., 2015a), as we show in Fig. 4, where we compare samples synthesized from 20 different textures for the random single- and multi-scale models, as well as VGG. The multi-scale model generates very competitive samples, in particular for textures with extremely regular structures across the whole image (e.g. for the brick wall, the grids or the scales). In part, this effect can be attributed to the more robust optimization of the single-layer model, which is less prone to local minima than the optimization in deeper models. This can be seen by initializing the VGG-based synthesis with textures from the single-layer model, which consistently yields superior synthesis results (see Appendix A, Fig. 5). In addition, for a few textures such as the grid structures, the VGG-based loss is paradoxically lower for samples from the multi-scale model than for the VGG-based model (which directly optimizes the VGG-based loss). This suggests that the naive synthesis performed here favors images that are perceptually similar to the reference texture and thus loses variability (see sec. 7 for further discussion). Nonetheless, samples from the single-layer model still exhibit large perceptual differences, see Fig. 3. The VGG-based loss (7) appears to generally be an acceptable approximation of the perceptual differences between the reference and the synthesized texture. Only for a few textures, especially those with very regular man-made structures (e.g. the wall or the grids), does the VGG-based loss fail to capture the perceptual advantage of the multi-scale synthesis.
![](images/6_0.jpg)
<center>Figure 4: Each row shows the reference texture (left, gray background) and three samples that were synthesized from different (random) initial images using three different models: a single-layer network with 1024 feature maps and random \(11 \times 11\) filters; a multi-scale single-layer network with filters of sizes \(f\times f\), where \(f = \{3,5,7,11,15,23,37,55\}\) and 128 feature maps per filter size; and the VGG-based model (Gatys et al., 2015a). Numbers above figures show the values of the normalized VGG-loss (7) for the corresponding textures.</center>

## 7 DISCUSSION

We proposed a generative model of natural textures based on a single-layer convolutional neural network with completely random filters and showed that the model is able to qualitatively capture the perceptual differences between natural textures. Samples from the model often rival the current state of the art (Gatys et al., 2015a) (Fig. 4, third vs. fourth row), even though the latter relies on a high-performance deep neural network with features that are tuned to the statistics of natural images. Seen more broadly, this finding suggests that natural image generation does not necessarily depend on deep hierarchical representations or on the training of the feature maps. Instead, for texture synthesis, both aspects rather seem to serve as a fine-tuning of the image representation.

One concern about the proposed single-layer multi-scale model is its computational inefficiency, since it involves convolutions with spatially large filters (up to \(55 \times 55\)). A more efficient way to achieve receptive fields of similar size would be to use a hierarchical multi-layer net. We conducted extensive experiments with various hierarchical architectures, and while the synthesis is indeed significantly faster, the quality of the synthesized textures does not improve compared to a single-layer model. Thus, for a minimal model of natural textures, deep hierarchical representations are not necessary, but they can improve the efficiency of the texture synthesis.

Our results clearly demonstrate that Gram matrices computed from the feature maps of convolutional neural networks generically lead to useful summary statistics for texture synthesis. The Gram matrix on the feature maps transforms the representations from the convolutional neural network into a stationary feature space that captures the pairwise correlations between different features. If the number of feature maps is large, then the local structures in the image are well preserved in the projected space, and the overlaps of the convolutional filtering add additional constraints. At the same time, averaging out the spatial dimensions yields sufficient flexibility to generate entirely new textures that differ from the reference on a patch-by-patch level but still share much of the small- and long-range statistics. The success of shallow convolutional networks with random filters in reproducing the structure of the reference texture is remarkable and indicates that they can be useful for parametric texture synthesis.

Besides reproducing the stationary correlation structure of the reference image ("perceptual similarity"), another desideratum of a texture synthesis algorithm is to exhibit a large variety between different samples generated from the same given image ("variability"). Hence, synthesis algorithms need to balance perceptual similarity and variability.
This balance is determined by a complex interplay between the choice of summary statistics and the optimization algorithm used. For example, the stopping criterion of the optimization algorithm can be adjusted to trade perceptual similarity for larger variability. Finding the right balance between perceptual similarity and variability is challenging because we are currently lacking robust measures of these quantities. In this work we introduced the VGG-loss as a measure of perceptual similarity, and even though it works much better than other common measures such as the Structural Similarity Index (SSIM, Wang et al., 2004; see Appendix A, Figure 6) or the Euclidean distance in pixel space (not shown), it is still not perfect (Figure 4). Measuring variability is probably even more difficult: in principle it requires measuring the entropy of generated samples, which is intractable in a high-dimensional space. A different approach could be based on a psychophysical assessment of generated samples. For example, we could use an inpainting task (illustrated in Appendix A, Figure 7) to make human observers decide between actual texture patches and inpainted ones. Performance close to chance level would indicate that the texture model produces samples variable enough to capture the diversity of actual patches. The further exploration of variability measures lies, however, beyond the scope of this work.

In this paper we focused on maximizing perceptual similarity only, and it is worth pointing out that additional efforts will be necessary to find an optimal trade-off between perceptual similarity and variability. For the synthesis of textures from the random models considered here, the trade-off leans more towards perceptual similarity in comparison to Gatys et al. (2015a) (due to the simpler optimization), which also explains the superior performance on some samples. In fact, we found some anecdotal evidence (not shown) in deeper multi-layer random CNNs where the reference texture was exactly reconstructed during the synthesis. From a theoretical point of view this is likely a finite-size effect which does not necessarily constitute a failure of the chosen summary statistics: for finite-size images it is quite possible that only the reference image can exactly reproduce all the summary statistics. Therefore, in practice, the Gram matrices are not treated as hard constraints but as soft constraints only. More generally, we do not expect a perceptual distance metric to assign exactly zero to a random pair of patches from the same texture. Instead, we expect it to assign small values to pairs from the same texture, and large values to patches from different textures. Therefore, the selection of constraints is not sufficient to characterize a texture synthesis model but only determines the exact minima of the objective function (which are sought by the synthesis). If we additionally consider images with small but non-zero distance to the reference statistics, then the set of equivalent textures increases substantially, and the precise composition of this set becomes critically dependent on the perceptual distance metric.

Mathematically, parametric texture synthesis models are described as ergodic random fields that have maximum entropy subject to certain constraints (Zhu et al., 1997; Zhu et al., 2000; Bruna & Mallat, 2013) (the MaxEnt framework). Practical texture synthesis algorithms, however, always deal with finite-size images.
As discussed above, two finite-size patches from the same ergodic random field will almost never feature the exact same summary statistics. This additional uncertainty in estimating the constraints on finite-length processes is not thoroughly accounted for by the MaxEnt framework (see the discussion of its "ad hockeries" by Jaynes (1982)). Thus, a critical difference between practical implementations of texture synthesis algorithms and the conceptual MaxEnt texture modeling framework is that the former genuinely allow a small mismatch in the constraints. Accordingly, specifying the summary statistics is not sufficient; a comprehensive definition of a texture synthesis model should specify:

1. A metric \(d(\mathbf{x},\mathbf{y})\) that determines the distance between any two arbitrary textures \(\mathbf{x},\mathbf{y}\).
2. A bipartition \(P_{\mathbf{x}}\) of the image space that determines which images are considered perceptually equivalent to a reference texture \(\mathbf{x}\). A simple example of such a partition is the \(\epsilon\)-environment \(U_{\epsilon}(\mathbf{x}) := \{\mathbf{y} : d(\mathbf{y},\mathbf{x}) < \epsilon\}\) and its complement.

This definition is relevant for both under- as well as over-constrained models, but its importance becomes particularly obvious for the latter. According to the Minimax entropy principle for texture modeling suggested by Zhu et al. (1997), as many constraints as possible should be used to reduce the (Kullback-Leibler) divergence between the true texture model and its estimate. However, for finite spatial size, the synthetic samples become exactly equivalent to the reference texture (up to shifts) in the limit of sufficiently many independent constraints. In contrast, if we explicitly allow for a small mismatch between the summary statistics of the reference image and the synthesized textures, then the set of possible textures does not constitute a low-dimensional manifold but rather a small volume within the pixel space. Alternatively, instead of introducing an \(\epsilon\)-environment, it is also possible to extend the MaxEnt framework to allow for variability in the summary statistics (Joan Bruna, personal communication). It will be interesting to compare in the future to what extent the difference between the two approaches can lead to differences in the perceptual appearance of the textures.

Taken together, we have shown that simple single-layer CNNs with random filters can serve as the basis for excellent texture synthesis models that outperform previous hand-crafted synthesis models and sometimes even rival the current state of the art. This finding refutes previous observations that suggested a critical role for the multi-layer representations in trained deep networks for natural texture generation. On the other hand, it is not enough to just use sufficiently many constraints, as one would predict from the MaxEnt framework. Instead, for the design of good texture synthesis algorithms it will be crucial to find distance measures for which the \(\epsilon\)-environment around the reference texture leads to perceptually satisfying results. In this way, building better texture synthesis models is inherently related to better quantitative models of human perception.

## REFERENCES

M. Aittala, T. Aila, and J. Lehtinen. Reflectance modeling by neural texture synthesis. ACM Transactions on Graphics, 35, 2016.

G. Berger and R. Memisevic. Incorporating long-range consistency in CNN-based texture generation. June 2016.
Joan Bruna and Stéphane Mallat. Audio texture synthesis with scattering moments. CoRR, abs/1311.0407, 2013. URL http://dblp.uni-trier.de/db/journals/corr/corr1311.html#BrunaM13.

X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. ArXiv e-prints, June 2016.

Sander Dieleman, Jan Schlüter, Colin Raffel, Eben Olson, Søren Kaae Sønderby, Daniel Nouri, Daniel Maturana, Martin Thoma, and other contributors. Lasagne: First release, August 2015. URL http://dx.doi.org/10.5281/zenodo.27878.

L. A. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems 28, May 2015a. URL http://arxiv.org/abs/1505.07376.

L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. August 2015b. URL http://arxiv.org/abs/1508.06576.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS'10), 2010.

I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial networks. ArXiv e-prints, June 2014.

David J. Heeger and James R. Bergen. Pyramid-based texture analysis/synthesis. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '95, pp. 229-238, New York, NY, USA, 1995. ACM. ISBN 0-89791-701-4. doi: 10.1145/218380.218446. URL http://doi.acm.org/10.1145/218380.218446.

E. T. Jaynes. On the rationale of maximum-entropy methods. Proceedings of the IEEE, 70(9):939-952, September 1982. ISSN 0018-9219.

Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016.

Eric Jones, Travis Oliphant, Pearu Peterson, et al. SciPy: Open source scientific tools for Python, 2001-. URL http://www.scipy.org/. [Online; accessed 2016-05-12].

B. Julesz. Visual pattern discrimination. IRE Transactions on Information Theory, 8(2):84-92, February 1962. ISSN 0096-1000. doi: 10.1109/TIT.1962.1057698.

G. Liu, Y. Gousseau, and G. Xia. Texture synthesis through convolutional neural networks and spectrum constraints. May 2016.

Javier Portilla and Eero P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49-70, October 2000. ISSN 0920-5691. doi: 10.1023/A:1026553619983.

Ron Rubinstein, Michael Zibulevsky, and Michael Elad. Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit, 2009.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. URL http://arxiv.org/abs/1605.02688.
Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. arXiv:1603.03417 [cs], March 2016. URL http://arxiv.org/abs/1603.03417.

A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. ArXiv e-prints, January 2016a.

A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu. Conditional image generation with PixelCNN decoders. ArXiv e-prints, June 2016b.

Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.

Song Chun Zhu, Ying Nian Wu, and David Mumford. Minimax entropy principle and its application to texture modeling. Neural Computation, 9(8):1627-1660, 1997.

Song Chun Zhu, Xiuwen Liu, and Ying Nian Wu. Exploring texture ensembles by efficient Markov chain Monte Carlo - toward a "trichromacy" theory of texture. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(6):554-569, 2000. doi: 10.1109/34.862195.

## A APPENDIX

![](images/11_0.jpg)
<center>Figure 5: Initializing VGG-synthesis with a sample from the random multi-scale model. The first column shows the original textures, the second and third columns show samples from the standard VGG-based synthesis (random initialization) (Gatys et al., 2015a) and the random multi-scale model. The last column shows samples from the VGG-based model, which was initialized with samples from the random multi-scale model (from column 3). On top of all samples we report the corresponding values of the VGG-loss (7). Empirically, the VGG loss is up to two orders of magnitude lower in the last column relative to the standard VGG synthesis.</center>

![](images/12_0.jpg)
<center>Figure 6: Similarity measures between textures computed using the structural similarity index on the pixels (SSIM, left column) and normalized Euclidean distances in the feature spaces of VGG (second column) and two shallow texture models (third and fourth column). Ten random patches were extracted from each of ten different textures (examples of patches are shown in the top row). The matrix element (i, j) of each similarity matrix corresponds to the median distance between patches from textures i and j. The values for all but the SSIM matrix are shown on a log scale.</center>

![](images/12_1.jpg)
<center>Figure 7: Examples of inpainted textures for the VGG model (Gatys et al., 2015a; two top rows) and the random multi-scale model (two bottom rows). Textures were inpainted starting from three different initial conditions (10%, 15%, 20%, corresponding to the width of the frame used for initialization), and for each initial condition the texture was inpainted either by matching the Gram matrix of the patch used for initializing the frame (regular sample) or by matching the Gram matrix estimated over many (500) randomly extracted patches from the texture (estimate sample).</center>
## ABSTRACT Natural image generation is currently one of the most actively explored fields in Deep Learning. Many approaches, e.g. for state- of- the- art artistic style transfer or natural texture synthesis, rely on the statistics of hierarchical representations in supervisedly trained deep neural networks. It is, however, unclear what aspects of this feature representation are crucial for natural image generation: is it the depth, the pooling or the training of the features on natural images? We here address this question for the task of natural texture synthesis and show that none of the above aspects are indispensable. Instead, we demonstrate that natural textures of high perceptual quality can be generated from networks with only a single layer, no pooling and random filters. ## 1 INTRODUCTION During the last two years several different approaches towards natural image generation have been suggested, among them generative adversarial networks (Goodfellow et al., 2014; Chen et al., 2016), probabilistic generative models like the conditional PixelCNN (van den Oord et al., 2016b;a) or maximum entropy models that rely on the representations of deep neural networks (e.g. Gatys et al., 2015b; Johnson et al., 2016; Ulyanov et al., 2016). The latter approach has been particularly groundbreaking for artistic style transfer and natural texture generation (e.g. Gatys et al., 2015a;b) and has the potential to uncover the regularities that supervisedly trained deep neural networks infer from natural images. For the sake of clarity and concreteness, this paper will focus on natural texture synthesis. Parametric texture models aim to uniquely describe each texture by a set of statistical measurements that are taken over the spatial extent of the image. Each image with the same spatial summary statistics should be perceived as the same texture. Consequently, synthesizing a texture corresponds to finding a new image that reproduces the summary statistics inferred from the reference texture. Starting from Nth- order joint histograms of the pixels by Julesz (1962), many different statistical measures have been proposed (see e.g. Heeger & Bergen, 1995; Portilla & Simoncelli, 2000). The quality of the synthesized textures is usually determined by human inspection; the synthesis is successful if a human observer cannot tell the reference texture from the synthesized ones. The current state of the art in parametric texture modeling (Gatys et al., 2015a) employs the hierarchical image representation in a deep 19- layer convolutional network (Simonyan & Zisserman (2014); in the following referred to as VGG network) that was trained on object recognition in natural images (Russakovsky et al. (2015)). In this model textures are described by the raw correlations between feature activations in response to the texture image from a collection of network layers (see section 5 for details). Since its initial reception several papers explored which additional elements or constraints can further increase the perceptual quality of the generated textures (Berger & Memisevic, 2016; Liu et al., 2016; Aittala et al., 2016). In this work we go the opposite way and ask which elements of the original texture synthesis algorithm (Gatys et al., 2015a) are absolutely indispensable. <--- Page Split ---> In particular two aspects have been deemed critical for natural texture synthesis: the hierarchical multi- layer representation of the textures, and the supervised training of the feature spaces. 
Here we show that neither aspect is imperative for texture modeling and that in fact a single convolutional layer with random features can synthesize textures that often rival the perceptual quality of Gatys et al. (2015a). This is in contrast to earlier reports (Gatys et al., 2015a) that suggested that networks with random weights fail to generate perceptually interesting images. We suggest that this discrepancy originates from a more elaborate tuning of the optimization procedure (see section 4). Our main contributions are: - We present a strong minimal baseline for parametric texture synthesis that solely relies on a single-layer network and random, data-independent filters.- We show that textures synthesized from the baseline are of high quality and often rival state-of-the-art approaches, suggesting that the depth and the pre-training of multi-layer image representations are not as indispensable for natural image generation as has previously been thought.- We test and compare a wide range of single-layer architectures with different filter-sizes and different types of filters (random, hand-crafted and unsupervisedly learnt filters) against the state-of-the-art texture model by Gatys et al. (2015a).- We utilize a quantitative texture quality measure based on the synthesis loss in the VGG-based model (Gatys et al., 2015a) to replace the common-place evaluation of texture models through qualitative human inspection.- We discuss a formal generalization of maximum entropy models to account for the natural variability of textures with limited spatial extent. ## 2 CONVOLUTIONAL NEURAL NETWORK If not mentioned otherwise, all our models employ single- layer CNNs with standard rectified linear units (ReLUs) and convolutions with stride one, no bias and padding \((f - 1) / 2\) where \(f\) is the filter- size. This choice ensures that the spatial dimension of the output feature maps is the same as the input. All networks except the last one employ filters of size \(11 \times 11 \times 3\) (filter width \(\times\) filter height \(\times\) no. of input channels), but the number of feature maps as well as the selection of the filters differ: - Fourier-363: Each color channel (R, G, B) is filtered separately by each element \(\mathbf{B}_i \in \mathbb{R}^{11 \times 11}\) of the 2D Fourier basis \((11 \times 11 = 121\) feature maps/channel), yielding \(3 \cdot 121 = 363\) feature maps in total. More concretely, each filter can be described as the tensor product \(\mathbf{B}_i \otimes \mathbf{e}_k\) where the elements of the unit-norm \(\mathbf{e}_k \in \mathbb{R}^3\) are all zero except one. - Fourier-3267: All color channels (R, G, B) are filtered simultaneously by each element \(\mathbf{B}_i\) of the 2D Fourier basis but with different weighting terms \(w_R, w_G, w_B \in [1, 0, -1]\) , yielding \(3 \cdot 3 \cdot 3 \cdot 121 = 3267\) feature maps in total. More concretely, each filter can be described by the tensor product \(\mathbf{B}_i \otimes [w_R, w_G, w_B]\) . - Kmeans-363: We randomly sample and whiten 1e7 patches of size \(11 \times 11\) from the Imagenet dataset (Russakovsky et al., 2015), partition the patches into 363 clusters using k-means (Rubinstein et al., 2009), and use the cluster means as convolutional filters. - Kmeans-3267: Same as Kmeans-363 but with 3267 clusters. - Kmeans-NonWhite-363/3267: Same as Kmeans-363/3267 but without whitening of the patches. - Kmeans-Sample-363/3267: Same as Kmeans-363/3267, but patches are only sampled from the target texture. 
- PCA-363: We randomly sample 1e7 patches of size \(11 \times 11\) from the Imagenet dataset (Russakovsky et al., 2015), vectorize each patch, perform PCA and use the set of principal axes as convolutional filters. - Random-363: Filters are drawn from a uniform distribution according to (Glorot & Bengio, 2010), 363 feature maps in total. - Random-3267: Same as Random-363 but with 3267 feature maps. <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 1: Influence of the feature maps on texture synthesis performance. (Top) Samples synthesized from several single-layer models with 363 feature maps (see sec. 2) for three different textures (rows). Reference textures are shown in the first column. (Bottom) Samples synthesized from several single-layer models with 3267 feature maps (see sec. 2) for three different textures (rows). Additionally, the first column shows samples from the VGG model (Gatys et al., 2015a), and the last column from the multi-scale model (with 1024 feature maps). </center> - Random-Multiscale Eight different filter sizes \(f \times f \times 3\) with \(f = 3, 5, 7, 11, 15, 23, 37, 55\) and 128 feature maps each (1024 feature maps in total). Filters are drawn from a uniform distribution according to (Glorot & Bengio, 2010). The networks were implemented in Lasagne (Dieleman et al., 2015; Theano Development Team, 2016). We remove the DC component of the inputs by subtracting the mean intensity in each color channel (estimated over the Imagenet dataset (Russakovsky et al., 2015)). ## 3 TEXTURE MODEL The texture model closely follows (Gatys et al., 2015a). In essence, to characterise a given vectorised texture \(\mathbf{x} \in \mathbb{R}^{M}\) , we first pass \(\mathbf{x}\) through the convolutional layer and compute the output activations. The output can be understood as a non- linear filter bank, and thus its activations form a set of filtered images (so- called feature maps). For \(N\) distinct feature maps, the rectified output activations can be <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: Influence of the scale and the non-linearity on texture synthesis performance. (1st column) Samples from the random multi-scale model for comparison (same as in Fig. 1). (2nd - 7th column) Samples from the random single-scale model with different spatial filter sizes. (Last column) Samples from the random multi-scale model without ReLU nonlinearity. </center> described by a matrix \(\mathbf{F} \in \mathbb{R}^{N \times M}\) . To capture the stationary structure of the textures, we compute the covariances (or, more precisely, the Gramian matrix) \(\mathbf{G} \in \mathbb{R}^{N \times N}\) between the feature activations \(\mathbf{F}\) by averaging the outer product of the point- wise feature vectors, \[G_{i j} = \frac{1}{M}\sum_{m = 1}^{M}F_{i m}F_{j m}. \quad (1)\] We will denote \(\mathbf{G}(\mathbf{x})\) as the Gram matrix of the feature activations for the input \(\mathbf{x}\) . To determine the relative distance between two textures \(\mathbf{x}\) and \(\mathbf{y}\) we compute the euclidean distance of the normalized Gram matrices, \[d(\mathbf{x},\mathbf{y}) = \frac{1}{\sqrt{\sum_{m,n}G_{m n}(\mathbf{x})^{2}}\sqrt{\sum_{m,n}G_{m n}(\mathbf{y})^{2}}}\sum_{i,j = 1}^{N}\left(G_{i j}(\mathbf{x}) - G_{i j}(\mathbf{y})\right)^{2}. \quad (2)\] To compare with the distance in the raw pixel values, we compute \[d_{p}(\mathbf{x},\mathbf{y}) = \frac{1}{\sqrt{\sum_{m}x_{m}^{2}}\sqrt{\sum_{m}y_{m}^{2}}}\sum_{i = 1}^{N}\left(x_{i} - y_{i}\right)^{2}. 
\quad (3)\] ## 4 TEXTURE SYNTHESIS To generate a new texture we start from a uniform noise image (in the range [0, 1]) and iteratively optimize it to match the Gram matrix of the reference texture. More precisely, let \(\mathbf{G}(\mathbf{x})\) be the Gram matrix of the reference texture. The goal is to find a synthesised image \(\tilde{\mathbf{x}}\) such that the squared distance between \(\mathbf{G}(\mathbf{x})\) and the Gram matrix \(\mathbf{G}(\tilde{\mathbf{x}})\) of the synthesised image is minimized, i.e. \[\tilde{\mathbf{x}} = \underset {\mathbf{y}\in \mathbb{R}^{M}}{\mathrm{argmin}}E(\mathbf{y}), \quad (4)\] \[E(\mathbf{y}) = \frac{1}{\sum_{i,j = 1}^{N}G_{i j}(\mathbf{x})^{2}}\sum_{i,j = 1}^{N}\left(G_{i j}(\mathbf{x}) - G_{i j}(\mathbf{y})\right)^{2}. \quad (5)\] The gradient \(\partial E(\mathbf{y}) / \partial \mathbf{y}\) of the reconstruction error with respect to the image can readily be computed using standard backpropagation, which we then use in conjunction with the L- BFGS- B algorithm (Jones et al., 2001- ) to solve (4). We leave all parameters of the optimization algorithm at their default value except for the maximum number of iterations (2000), and add a box constraints with range [0, 1]. In addition, we scale the loss and the gradients by a factor of \(10^{7}\) in order to avoid early stopping of the optimization algorithm. <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 3: Each row shows the reference texture (left, gray background) and three samples that were synthesized from different (random) initial images using our multi-scale model. Most importantly, the multi-scale model generates samples that are perceptually different. All three textures are taken from Portilla & Simoncelli (2000). </center> ## 5 TEXTURE EVALUATION Evaluating the quality of the synthesized textures is traditionally performed by human inspection. Optimal texture synthesis should generate samples that humans perceive as being the same texture as the reference. The high quality of the synthesized textures by (Gatys et al., 2015a) suggests that the summary statistics from multiple layers of VGG can approximate the perceptual metric of humans. Even though the VGG texture representation is not perfect, this allows us to utilize these statistics as a more objective quantification of texture quality. For all details of the VGG- based texture model see (Gatys et al., 2015a). Here we use the standard 19- layer VGG network (Simonyan & Zisserman, 2014) with pretrained weights and average- instead of max- pooling'. We compute a Gram matrix on the output of each convolutional layer that follows a pooling layer. Let \(\mathbf{G}^{\ell}(\cdot)\) be the Gram matrix on the activations of the \(\ell\) - th layer and \[E^{\ell}(\mathbf{y}) = \frac{1}{\sum_{i,j = 1}^{N}G_{ij}^{\ell}(\mathbf{x})^{2}}\sum_{i,j = 1}^{N}\left(G_{ij}^{\ell}(\mathbf{x}) - G_{ij}^{\ell}(\mathbf{y})\right)^{2}. \quad (6)\] the corresponding relative reconstruction cost. The total reconstruction cost is then defined as the average distance between the reference Gram matrices and the synthesized ones, i.e. \[E(\mathbf{y}) = \frac{1}{5}\sum_{\ell = 1}^{5}E^{\ell}(\mathbf{y}). \quad (7)\] This cost is reported on top of each synthesised texture in Figures 4. To visually evaluate samples from our single- and multi- scale model against the VGG- based model (Gatys et al., 2015a), we additionally synthesize textures from VGG by minimizing (7) using L- BFGS- B as in section 4. ## 6 RESULTS In Fig. 
1 we show textures synthesised from two random single- and multi- scale models, as well as eight other non- random single- layer models for three different source images (top left). For <--- Page Split ---> comparison, we also plot samples generated from the VGG model by Gatys et al. (Gatys et al., 2015a) (bottom left). There are roughly two groups of models: those with a small number of feature maps (363, top row), and those with a large number of feature maps (3267, bottom row). Only the multi- scale model employs 1024 feature maps. Within each group, we can differentiate models for which the filters are unsupervisedly trained on natural images (e.g. sparse coding filters from k- means), principally devised filter banks (e.g. 2D Fourier basis) and completely random filters (see sec. 2 for all details). All single- layer networks, except for multi- scale, feature \(11 \times 11\) filters. Remarkably, despite the small spatial size of the filters, all models capture much of the small- and mid- scale structure of the textures, in particular if the number of feature maps is large. Notably, the scale of these structures extends far beyond the receptive fields of the single units (see e.g. the pebble texture). We further observe that a larger number of feature maps generally increases the perceptual quality of the generated textures. Surprisingly, however, completely random filters perform on par or better than filters that have been trained on the statistics of natural images. This is particularly true for the multi- scale model that clearly outperforms the single- scale models on all textures. The captured structures in the multi- scale model are generally much larger and often reach the full size of the texture (see e.g. the wall). While the above results show that for natural texture synthesis one neither needs a hierarchical deep network architecture with spatial pooling nor filters that are adapted to the statistics of natural images, we now focus on the aspects that are crucial for high quality texture synthesis. First, we evaluate whether the success of the random multi- scale network arises from the combination of filters on multiple scales or whether it is simply the increased size of its largest receptive fields \((55 \times 55\) vs. \(11 \times 11\) ) that leads to the improvement compared to the single- scale model. Thus, to investigate the influence of the spatial extend of the filters and the importance of combining multiple filter sizes in one model, we generate textures from multiple single- scale models, where each model has the same number of random filters as the multi- scale model (1024) but only uses filters from a single scale of the multi- scale model (Fig. 2). We find that while \(3 \times 3\) filters mainly capture the marginal distribution of the color channels, larger filters like \(11 \times 11\) model small- to mid- scale structures (like small stones) but miss more long- range structures (larger stones are not well separated). Very large filters like \(55 \times 55\) , on the other hand, are capable of modeling long- range structures but then miss much of the small- to midscale statistics (like the texture of the stone). Therefore we conclude that the combination of different scales in the multi- scale network is important for good texture synthesis since it allows to simultaneously model small- , mid- and long- range correlations of the textures. 
Finally we note that a further indispensable component for good texture models are the non- linearities: textures synthesised the multi- scale model without ReLU (Fig. 2, right column) are unable to capture the statistical dependencies of the texture. The perceptual quality of the textures generated from models with only a single layer and random filters is quite remarkable and surpasses parametric methods like Portilla & Simoncelli (2000) that have been state- of- the- art two years ago (before the use of DNNs). The multi- scale model often rivals the current state of the art (Gatys et al., 2015a) as we show in Fig. 4 where we compare samples synthesized from 20 different textures for the random single- and multi- scale model, as well as VGG. The multi- scale model generates very competitive samples in particular for textures with extremely regular structures across the whole image (e.g. for the brick wall, the grids or the scales). In part, this effect can be attributed to the more robust optimization of the single- layer model that is less prone to local minima then the optimization in deeper models. This can be seen by initializing the VGG- based synthesis with textures from the single- layer model, which consistently yields superior synthesis results (see Appendix A, Fig. 5). In addition, for a few textures such as the grid structures, the VGG- based loss is paradoxically lower for samples from the multi- scale model then for the VGG- based model (which directly optimized the VGG- based loss). This suggests that the naive synthesis performed here favors images that are perceptually similar to the reference texture and thus looses variability (see sec. 7 for further discussion). Nonetheless, samples from the single- layer model still exhibit large perceptual differences, see Fig. 3. The VGG- based loss (7) appears to generally be an acceptable approximation of the perceptual differences between the reference and the synthesized texture. Only for a few textures, especially those with very regular men- made structures (e.g. the wall or the grids), the VGG- based loss fails to capture the perceptual advantage of the multi- scale synthesis. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 4: Each row shows the reference texture (left, gray background) and three samples that were synthesized from different (random) initial images using three different models: single-layer network with 1024 feature maps and random 11x11 filters; multi-scale single layer network with filters of sizes \(f\times f\) , where \(f = \{3,5,7,11,15,23,37,55\}\) and 128 feature maps correspond to filters of each size; and the VGG-based model (Gatys et al., 2015a). Numbers above figures show the values of the normalized VGG-loss (7) for corresponding textures. </center> <--- Page Split ---> ## 7 DISCUSSION We proposed a generative model of natural textures based on a single- layer convolutional neural network with completely random filters and showed that the model is able to qualitatively capture the perceptual differences between natural textures. Samples from the model often rival the current state- of- the- art (Gatys et al., 2015a) (Fig. 4, third vs fourth row), even though the latter relies on a high- performance deep neural network with features that are tuned to the statistics of natural images. Seen more broadly, this finding suggests that natural image generation does not necessarily depend on deep hierarchical representations or on the training of the feature maps. 
Instead, for texture synthesis, both aspects rather seem to serve as fine- tuning of the image representation. One concern about the proposed single- layer multi- scale model is its computational inefficiency since it involves convolutions with spatially large filters (up to \(55 \times 55\) ). A more efficient way to achieve receptive fields of similar size would be to use a hierarchical multi- layer net. We conducted extensive experiments with various hierarchical architectures and while the synthesis is indeed significantly faster, the quality of the synthesized textures does not improve compared to a single- layer model. Thus for a minimal model of natural textures, deep hierarchical representations are not necessary but they can improve the efficiency of the texture synthesis. Our results clearly demonstrate that Gram matrices computed from the feature maps of convolutional neural networks generically lead to useful summary statistics for texture synthesis. The Gram matrix on the feature maps transforms the representations from the convolutional neural network into a stationary feature space that captures the pairwise correlations between different features. If the number of feature maps is large, then the local structures in the image are well preserved in the projected space and the overlaps of the convolutional filtering add additional constraints. At the same time, averaging out the spatial dimensions yields sufficient flexibility to generate entirely new textures that differ from the reference on a patch by patch level, but still share much of the small- and long- range statistics. The success of shallow convolutional networks with random filters in reproducing the structure of the reference texture is remarkable and indicates that they can be useful for parametric texture synthesis. Besides reproducing the stationary correlation structure of the reference image ("perceptual similarity") another desideratum of a texture synthesis is to exhibit a large variety between different samples generated from the same given image ("variability"). Hence, synthesis algorithms need to balance perceptual similarity and variability. This balance is determined by a complex interplay between the choice of summary statistics and the optimization algorithm used. For example the stopping criterion of the optimization algorithm can be adjusted to trade perceptual similarity for larger variability. Finding the right balance between perceptual similarity and variability is challenging because we are currently lacking robust measures of these quantities. In this work we introduced VGG- loss as a measure of perceptual similarity, and, even though, it works much better than other common measures such as Structural Similarity Index (SSIM, Wang et al., 2004, see Appendix A, Figure 6) or Euclidean distance in the pixel space (not shown), it is still not perfect (Figure 4). Measuring variability is probably even more difficult: in principle it requires measuring the entropy of generated samples, which is intractable in a high- dimensional space. A different approach could be based on a psychophysical assessment of generated samples. For example, we could use an inpainting task (illustrated in Appendix A, Figure 7) to make human observers decide between actual texture patches and inpainted ones. Performance close to a chance- level would indicate that the texture model produces variable enough samples to capture the diversity of actual patches. 
The further exploration of variability measures lies, however, beyond the scope of this work. In this paper we focused on maximizing perceptual similarity only, and it is worth pointing out that additional efforts will be necessary to find an optimal trade- off between perceptual similarity and variability. For the synthesis of textures from the random models considered here, the trade- off leans more towards perceptual similarity in comparison to Gatys et al. (2015a)(due to the simpler optimization) which also explains the superior performance on some samples. In fact, we found some anecdotal evidence (not shown) in deeper multi- layer random CNNs where the reference texture was exactly reconstructed during the synthesis. From a theoretical point of view this is likely a finite size effect which does not necessarily constitute a failure of the chosen summary statistics: for finite size images it is well possible that only the reference image can exactly reproduce all the summary <--- Page Split ---> statistics. Therefore, in practice, the Gram matrices are not treated as hard constraints but as soft constraints only. More generally, we do not expect a perceptual distance metric to assign exactly zero to a random pair of patches from the same texture. Instead, we expect it to assign small values for pairs from the same texture, and large values for patches from different textures. Therefore, the selection of constraints is not sufficient to characterize a texture synthesis model but only determines the exact minima of the objective function (which are sought for by the synthesis). If we additionally consider images with small but non- zero distance to the reference statistics, then the set of equivalent textures increases substantially, and the precise composition of this set becomes critically dependent on the perceptual distance metric. Mathematically, parametric texture synthesis models are described as ergodic random fields that have maximum entropy subject to certain constraints Zhu et al. (1997); Bruna & Mallat (2013); Zhu et al. (2000) (MaxEnt framework). Practical texture synthesis algorithms, however, always deal with finite size images. As discussed above, two finite- size patches from the same ergodic random field will almost never feature the exact same summary statistics. This additional uncertainty in estimating the constraints on finite length processes is not thoroughly accounted for by the MaxEnt framework (see discussion on its "ad hockeries" by Jaynes (Jaynes (1982))). Thus, a critical difference of practical implementations of texture synthesis algorithms from the conceptual MaxEnt texture modeling framework is that they genuinely allow a small mismatch in the constraints. Accordingly, specifying the summary statistics is not sufficient but a comprehensive definition of a texture synthesis model should specify: 1. A metric \(d(\mathbf{x},\mathbf{y})\) that determines the distance between any two arbitrary textures \(\mathbf{x},\mathbf{y}\) 2. A bipartition \(P_{\mathbf{x}}\) of the image space that determines which images are considered perceptually equivalent to a reference texture \(\mathbf{x}\) . A simple example for such a partition is the \(\epsilon\) -environment \(U_{\epsilon}(\mathbf{y}):= \{\mathbf{y}:d(\mathbf{y},\mathbf{x})< \epsilon \}\) and its complement. This definition is relevant for both under- as well as over- constrained models, but its importance becomes particularly obvious for the latter. 
According to the Minimax entropy principle for texture modeling suggested by Zhu et al Zhu et al. (1997), as many constraints as possible should be used to reduce the (Kullback- Leibler) divergence between the true texture model and its estimate. However, for finite spatial size, the synthetic samples become exactly equivalent to the reference texture (up to shifts) in the limit of sufficiently many independent constraints. In contrast, if we explicitly allow for a small mismatch between the summary statistics of the reference image and the synthesized textures, then the set of possible textures does not constitute a low- dimensional manifold but rather a small volume within the pixel space. Alternatively, instead of introducing an \(\epsilon\) - environment it is also possible to extent the MaxEnt framework to allow for variability in the summary statistics (Joan Bruna, personal communication). It will be interesting to compare in the future to what extent the difference between the two approaches can lead to differences in the perceptual appearance of the textures. Taken together we have shown that simple single- layer CNNs with random filters can serve as the basis for excellent texture synthesis models that outperform previous hand- crafted synthesis models and sometimes even rivals the current state- of- the- art. This finding repeals previous observations that suggested a critical role for the multi- layer representations in trained deep networks for natural texture generation. On the other hand, it is not enough to just use sufficiently many constraints as one would predict from the MaxEnt framework. Instead, for the design of good texture synthesis algorithms it will be crucial to find distance measures for which the \(\epsilon\) - environment around the reference texture leads to perceptually satisfying results. In this way, building better texture synthesis models is inherently related to better quantitative models of human perception. ## A APPENDIX ![](images/11_0.jpg) <center>Figure 5: Initializing VGG-synthesis with a sample from the random multi-scale model. The first column shows the original textures, the second and third columns show samples from the standard VGG-based synthesis (random initialization) (Gatys et al., 2015a) and the random multi-scale model. The last column shows samples from the VGG-based model, which was initialized with samples from the random multi-scale model (from column 3). On top of all samples we report the corresponding values of the VGG-loss (7). Empirically, the VGG loss is up to two orders of magnitude lower in the last column relative to the standard VGG synthesis. </center> <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 6: Similarity measures between textures computed using the structural similarity index on the pixels (SSIM, left column) and normalized euclidean distances in the feature spaces of VGG (second column) and two shallow texture models (third and fourth column). Ten random patches were extracted from each of ten different textures (examples of patches are shown in top row). The matrix element (i, j) of each similarity matrix corresponds to the median distance between patches from textures i and j. The values for all but the SSIM matrix are shown on a log-scale. </center> ![](images/12_1.jpg) <center>Figure 7: Examples of inpainted textures for the VGG model (Gatys et al., 2015a, , two top rows) and random multi-scale model (two bottom rows). 
Textures were inpainted starting from three different initial conditions (frames of width 10%, 15%, and 20% of the image were used for initialization), and for each initial condition the texture was inpainted either by matching the Gram matrix of the patch used for initializing the frame (regular sample) or by matching the Gram matrix estimated over many (500) randomly extracted patches from the texture (estimate sample). </center> <--- Page Split --->
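The inpainting procedure described in this caption can be sketched in a few lines: the unknown pixels are optimized so that the Gram matrix of single-layer random-filter features matches a target computed either from the initializing frame or from many reference patches. The following PyTorch sketch is a simplified single-scale version; the filter bank, image sizes, frame handling, and optimizer settings are illustrative assumptions rather than the paper's actual multi-scale setup.

```python
import torch

def gram(feat):                       # feat: (K, H, W) feature maps
    flat = feat.flatten(1)
    return flat @ flat.T / flat.shape[1]

def features(img, filters):           # img: (C, H, W), filters: (K, C, f, f)
    return torch.relu(torch.conv2d(img[None], filters))[0]

def inpaint(image, mask, target, filters, steps=500, lr=0.05):
    """Optimize the masked pixels so the Gram matrix of random-filter features
    matches `target` (from the frame patch, or averaged over many patches)."""
    fill = torch.rand_like(image).requires_grad_(True)
    opt = torch.optim.Adam([fill], lr=lr)
    for _ in range(steps):
        x = torch.where(mask, fill, image)
        loss = ((gram(features(x, filters)) - target) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.where(mask, fill, image).detach()

# Example: inpaint the interior of a noise texture from its ~20%-wide frame.
torch.manual_seed(0)
filters = torch.randn(16, 1, 5, 5)
image = torch.rand(1, 64, 64)
mask = torch.zeros(1, 64, 64, dtype=torch.bool)
mask[:, 13:51, 13:51] = True                       # unknown interior
target = gram(features(image * (~mask), filters))  # crude frame statistics
print(inpaint(image, mask, target, filters).shape)
```

Passing a `target` averaged over the Gram matrices of many randomly extracted patches would correspond to the "estimate sample" variant of the caption.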
accept
Accept (Poster)
7.666667
ICLR_2017_paper_0483
iclr
2,017
# COOPERATIVE TRAINING OF DESCRIPTOR AND GENERATOR NETWORKS Jianwen Xie, Yang Lu, Ruiqi Gao, Song- Chun Zhu & Ying Nian Wu Department of Statistics University of California, Los Angeles {jianwen, yanglv, ruiqigao}@ucla.edu, {sczhu, ywu}@stat.ucla.edu ## ABSTRACT This paper studies the cooperative training of two probabilistic models of signals such as images. Both models are parametrized by convolutional neural networks (ConvNets). The first network is a descriptor network, which is an exponential family model or an energy- based model, whose feature statistics or energy function are defined by a bottom- up ConvNet, which maps the observed signal to the feature statistics. The second network is a generator network, which is a nonlinear version of factor analysis. It is defined by a top- down ConvNet, which maps the latent factors to the observed signal. The maximum likelihood training algorithms of both the descriptor net and the generator net are in the form of alternating back- propagation, and both algorithms involve Langevin sampling. We observe that the two training algorithms can cooperate with each other by jump- starting each other's Langevin sampling, and they can be seamlessly interwoven into a CoopNets algorithm that can train both nets simultaneously. ## 1 INTRODUCTION ### 1.1 TWO CONVNETS OF OPPOSITE DIRECTIONS We begin with a story that the reader of this paper can readily relate to. A student writes up an initial draft of a paper. His advisor then revises it. After that they submit the revised paper for review. The student then learns from his advisor's revision, while the advisor learns from the outside review. In this story, the advisor guides the student, but the student does most of the work. This paper is about two probabilistic models of signals such as images, and they play the roles of student and advisor as described above. Both models are parametrized by convolutional neural networks (ConvNets or CNNs) (LeCun et al., 1998; Krizhevsky et al., 2012). The two nets take two opposite directions. One is bottom- up, and the other is top- down, as illustrate by the following diagram: \[\begin{array}{r l} & {\mathrm{Bottom - up~ConvNet}}\\ & {\mathrm{features}}\\ & {\uparrow}\\ & {\mathrm{signal}}\\ & {\mathrm{(a)~Descriptor~Net}} \end{array} \qquad \begin{array}{r l} & {\mathrm{Top - down~ConvNet}}\\ & {\mathrm{latent~variables}}\\ & {\downarrow}\\ & {\mathrm{signal}}\\ & {\mathrm{(b)~Generator~Net}} \end{array} \quad (1)\] The simultaneous training of such two nets was first studied by the recent work of Kim & Bengio (2016). These two nets belong to two major classes of probabilistic models. (a) The exponential family models or the energy- based models (LeCun et al., 2006) or the Markov random field models (Zhu et al., 1997), where the probability distribution is defined by feature statistics or energy function computed from the signal by a bottom- up process. (b) The latent variable models or the directed graphical models, where the signal is assumed to be a transformation of the latent factors that follow a known prior distribution. The latent factors generate the signal by a top- down process. A classical example is factor analysis. The two classes of models have been contrasted by Zhu (2003); Teh et al. (2003); Ngiam et al. (2011). Zhu (2003) called the two classes of models the descriptive models and the generative <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: (a) Algorithm D involves sampling from the current model by Langevin dynamics. 
(b) Algorithm G involves sampling from the posterior distribution of the latent factors by Langevin dynamics. (c) CoopNets algorithm. The part of the flowchart for training the descriptor is similar to Algorithm D, except that the D1 Langevin sampling is initialized from the initial synthesized examples supplied by the generator. The part of the flowchart for training the generator can also be mapped to Algorithm G, except that the revised synthesized examples play the role of the observed data, and the known generated latent factors can be used as inferred latent factors (or be used to initialize the G1 Langevin sampling of the latent factors). </center> models respectively. Both classes of models can benefit from the high capacity of the multi- layer ConvNets. (a) In the exponential family models or the energy- based models, the feature statistics or the energy function can be defined by a bottom- up ConvNet that maps the signal to the features and the energy function (Ngiam et al., 2011; Xie et al., 2016). We call the resulting model a descriptive network or a descriptor net following Zhu (2003), because it is built on descriptive feature statistics. (b) In the latent variable models or the directed graphical models, the transformation from the latent factors to the signal can be defined by a top- down ConvNet (Dosovitskiy et al., 2015), which maps the latent factors to the signal. We call the resulting model a generative network or generator net following Goodfellow et al. (2014), who proposed such a model in their work on the generative adversarial networks (GAN). ### 1.2 TWO TRAINING ALGORITHMS AND THEIR COOPERATION Fig. 1(a) and (b) display the flowcharts of the maximum likelihood learning algorithms for training the descriptor and generator nets. We call the two algorithms Algorithm D and Algorithm G respectively. Algorithm D (Xie et al., 2016) iterates two steps: Step D1 synthesizes examples by sampling from the current model by Langevin dynamics. Step D2 updates the parameters to shift the density from the synthesized examples towards the observed examples. Algorithm G (Han et al., 2017) also iterates two steps. Step G1 infers latent factors for each observed example by sampling from their posterior distribution by Langevin dynamics. Step G2 updates the parameters by a non- linear regression of the observed examples on their corresponding latent factors. We use Langevin dynamics for Markov chain Monte Carlo (MCMC) sampling because the gradient term of Langevin dynamics can be readily computed via back- propagation. Thus all the steps D1, D2 and G1, G2 are powered by back- propagation, and both Algorithms D and G are alternating back- propagation algorithms. In this article, we propose to couple Algorithms D and G into a cooperative training algorithm that interweaves the steps of the two algorithms seamlessly. We call the resulting algorithm the CoopNets algorithm, and we show that it can train both nets simultaneously. Figure 1(c) displays the flowchart of the CoopNets algorithm. The generator is like the student. It generates the initial draft of the synthesized examples. The descriptor is like the advisor. It revises the initial draft by initializing its Langevin dynamics from the initial draft in Step D1, which produces the revised draft of the synthesized examples. The descriptor learns from the outside review in Step D2, which is in the form of the difference between the observed examples and the revised <--- Page Split ---> synthesized examples. 
The generator learns from how the descriptor revises the initial draft by reconstructing the revised draft in Step G2. For each synthesized example, the generator knows the latent factors that generate the initial draft, so that Step G1 can infer the latent factors by initializing its Langevin dynamics from their known values. In the CoopNets algorithm, the generator fuels the MCMC of the descriptor by supplying initial synthesized examples, which can be obtained by direct ancestral sampling. The generator then learns from the revised synthesized examples with virtually known latent factors. The cooperation is thus beneficial to both nets. ## 2 RELATED WORK Our work is inspired by the generative adversarial networks (GAN) (Goodfellow et al., 2014; Denton et al., 2015; Radford et al., 2015). In GAN, the generator net is paired with a discriminator net. The two nets play adversarial roles. In our work, the generator net and the descriptor net play cooperative roles, and they feed each other the initial, revised and reconstructed synthesized data. The learning of both nets is based on maximum likelihood, and the learning process is quite stable because of the cooperative nature and the consistent directions of the two maximum likelihood training algorithms. Another method to train the generator network is variational auto- encoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014; Mnih & Gregor, 2014), which learns an inferential or recognition network to approximate the posterior distribution of the latent factors. The connection between the descriptor net and the discriminator net has been explored by Xie et al. (2016), where the descriptor can be derived from the discriminator. Our work is most similar to the recent work of Kim & Bengio (2016). In fact, the settings of the two nets are the same. In their work, the generator learns from the descriptor by minimizing the Kullback- Leibler divergence from the generator to the descriptor, which can be decomposed into an energy term and an entropy term. In our work, the two nets interact with each other via synthesized data, and the generator learns from the descriptor by reconstructing the revised draft of synthesized examples. Our method does not need to approximate the intractable entropy term. Our work is related to the contrastive divergence algorithm (Hinton, 2002) for training the descriptor net. The contrastive divergence initializes the MCMC sampling from the observed examples. The CoopNets algorithm initializes the MCMC sampling from the examples supplied by the generator. ## 3 TWO NETS AND TWO TRAINING ALGORITHMS ### 3.1 DESCRIPTOR NET AND TRAINING ALGORITHM Let \(Y\) be the \(D\) - dimensional signal, such as an image. The descriptor model is in the form of exponential tilting of a reference distribution (Xie et al., 2016): \[P_{\mathcal{D}}(Y;W_{\mathcal{D}}) = \frac{1}{Z(W_{\mathcal{D}})}\exp \left[f(Y;W_{\mathcal{D}})\right]q(Y), \quad (2)\] where \(q(Y)\) is the reference distribution such as Gaussian white noise \(q(Y)\propto \exp \left(- \| Y\|^{2} / 2s^{2}\right)\) , \(f(Y;W_{\mathcal{D}})\) ( \(f\) stands for features) is the feature statistics or energy function, defined by a ConvNet whose parameters are denoted by \(W_{\mathcal{D}}\) . This ConvNet is bottom- up because it maps the signal \(Y\) to a number. See the diagram in (1). 
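As a concrete reading of the model in Eq. (2), the following is a minimal PyTorch sketch of a bottom-up descriptor net computing \(f(Y;W_{\mathcal{D}})\) for 64x64 RGB images; the layer sizes here are placeholder assumptions, not the architectures used in Section 5.

```python
import torch
import torch.nn as nn

class Descriptor(nn.Module):
    """Bottom-up ConvNet f(Y; W_D): image -> scalar, so that
    P_D(Y; W_D) = exp[f(Y; W_D)] q(Y) / Z(W_D) as in Eq. (2)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 1),
        )

    def forward(self, y):               # y: (batch, 3, 64, 64)
        return self.net(y).squeeze(-1)  # (batch,) one score per image
```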
\(Z(W_{\mathcal{D}}) = \int \exp \left[f(Y;W_{\mathcal{D}})\right]q(Y)dY = \mathrm{E}_{q}\{\exp [f(Y;W_{\mathcal{D}})]\}\) is the normalizing constant, where \(\mathrm{E}_{q}\) is the expectation with respect to \(q\) . Suppose we observe training examples \(\{Y_{i},i = 1,\dots,n\}\) from an unknown data distribution \(P_{\mathrm{data}}(Y)\) . The maximum likelihood training seeks to maximize the log-likelihood function \(L_{\mathcal{D}}(W_{\mathcal{D}}) = \frac{1}{n}\sum_{i = 1}^{n}\log P_{\mathcal{D}}(Y_{i};W_{\mathcal{D}})\) . If the sample size \(n\) is large, the maximum likelihood estimator minimizes \(\mathrm{KL}(P_{\mathrm{data}}|P_{\mathcal{D}})\) , the Kullback-Leibler divergence from the data distribution \(P_{\mathrm{data}}\) to the model distribution \(P_{\mathcal{D}}\) . The gradient of \(L_{\mathcal{D}}(W_{\mathcal{D}})\) is \[L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}}) = \frac{1}{n}\sum_{i = 1}^{n}\frac{\partial}{\partial W_{\mathcal{D}}} f(Y_{i};W_{\mathcal{D}}) - \mathrm{E}_{W_{\mathcal{D}}}\left[\frac{\partial}{\partial W_{\mathcal{D}}} f(Y;W_{\mathcal{D}})\right], \quad (3)\] <--- Page Split ---> where \(\mathrm{E}_{W_{\mathcal{D}}}\) denotes the expectation with respect to \(P_{\mathcal{D}}(Y;W_{\mathcal{D}})\) . The expectation in equation (3) is analytically intractable and has to be approximated by MCMC, such as Langevin dynamics, which iterates the following step: \[Y_{\tau +1} = Y_{\tau} - \frac{\delta^{2}}{2}\left[\frac{Y_{\tau}}{s^{2}} -\frac{\partial}{\partial Y} f(Y_{\tau};W_{\mathcal{D}})\right] + \delta U_{\tau}, \quad (4)\] where \(\tau\) indexes the time steps of the Langevin dynamics, \(\delta\) is the step size, and \(U_{\tau} \sim \mathrm{N}(0, I_{D})\) is the Gaussian white noise term. We can run \(\tilde{n}\) parallel chains of Langevin dynamics according to (4) to obtain the synthesized examples \(\{\tilde{Y}_{i}, i = 1, \ldots , \tilde{n}\}\) . The Monte Carlo approximation to \(L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}})\) is \[L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}}) \approx \frac{1}{n} \sum_{i = 1}^{n} \frac{\partial}{\partial W_{\mathcal{D}}} f(Y_{i};W_{\mathcal{D}}) - \frac{1}{\tilde{n}} \sum_{i = 1}^{\tilde{n}} \frac{\partial}{\partial W_{\mathcal{D}}} f(\tilde{Y}_{i};W_{\mathcal{D}}), \quad (5)\] which is used to update \(W_{\mathcal{D}}\) . Algorithm D (Xie et al., 2016) iterates the following two steps after initializing \(W_{\mathcal{D}}\) and \(\{\tilde{Y}_{i}, i = 1, \ldots , \tilde{n}\}\) . Step \(D1\) : run \(l_{\mathcal{D}}\) steps of Langevin dynamics from the current \(\{\tilde{Y}_{i}\}\) according to (4). Step \(D2\) : update \(W_{\mathcal{D}}^{(t + 1)} = W_{\mathcal{D}}^{(t)} + \gamma_{t} L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}}^{(t)})\) with learning rate \(\gamma_{t}\) . The convergence of such an algorithm follows Younes (1999). ### 3.2 GENERATOR NET AND TRAINING ALGORITHM The generator net (Goodfellow et al., 2014) seeks to explain the signal \(Y\) of dimension \(D\) by a vector of latent factors \(X\) of dimension \(d\) , and usually \(d \ll D\) . The model is of the following form: \[X \sim \mathrm{N}(0, I_{d}), Y = g(X; W_{\mathcal{G}}) + \epsilon , \epsilon \sim \mathrm{N}(0, \sigma^{2} I_{D}). \quad (6)\] \(g(X; W_{\mathcal{G}})\) ( \(g\) stands for generator) is a top-down ConvNet defined by the parameters \(W_{\mathcal{G}}\) . The ConvNet \(g\) maps the latent factors \(X\) to the signal \(Y\) . See the diagram in (1).
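Step D1 above is mechanical to implement with automatic differentiation. Below is a minimal PyTorch sketch of the Langevin update of Eq. (4), usable with the Descriptor sketch earlier; the defaults for the step size, the number of steps, and \(s\) are placeholders.

```python
import torch

def langevin_descriptor(f, y, n_steps=10, delta=0.002, s=1.0):
    """Eq. (4): Y <- Y - (delta^2 / 2) * [Y / s^2 - df/dY] + delta * U,
    run in parallel over a batch of chains."""
    y = y.clone().detach()
    for _ in range(n_steps):
        y.requires_grad_(True)
        grad_f = torch.autograd.grad(f(y).sum(), y)[0]  # df/dY per chain
        y = (y - 0.5 * delta ** 2 * (y / s ** 2 - grad_f)
             + delta * torch.randn_like(y)).detach()
    return y
```

The update of Eq. (5) then needs no explicit gradient formula: differentiating `f(observed).mean() - f(synthesized).mean()` with respect to the descriptor parameters yields exactly the Monte Carlo gradient.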
The joint density of model (6) is \(P_{\mathcal{G}}(X, Y; W_{\mathcal{G}}) = P_{\mathcal{G}}(X) P_{\mathcal{G}}(Y | X; W_{\mathcal{G}})\) , and \[\log P_{\mathcal{G}}(X, Y; W_{\mathcal{G}}) = -\frac{1}{2\sigma^{2}} \| Y - g(X; W_{\mathcal{G}}) \|^{2} - \frac{1}{2} \| X \|^{2} + \mathrm{constant}, \quad (7)\] where the constant term is independent of \(X\) , \(Y\) and \(W_{\mathcal{G}}\) . The marginal density is obtained by integrating out the latent factors \(X\) , i.e., \(P_{\mathcal{G}}(Y; W_{\mathcal{G}}) = \int P_{\mathcal{G}}(X, Y; W_{\mathcal{G}}) dX\) . The inference of \(X\) given \(Y\) is based on the posterior density \(P_{\mathcal{G}}(X | Y; W_{\mathcal{G}}) = P_{\mathcal{G}}(X, Y; W_{\mathcal{G}}) / P_{\mathcal{G}}(Y; W_{\mathcal{G}}) \propto P_{\mathcal{G}}(X, Y; W_{\mathcal{G}})\) as a function of \(X\) . For the training data \(\{Y_{i}, i = 1, \ldots , n\}\) , the generator net can be trained by maximizing the log-likelihood \(L_{\mathcal{G}}(W_{\mathcal{G}}) = \frac{1}{n} \sum_{i = 1}^{n} \log P_{\mathcal{G}}(Y_{i}; W_{\mathcal{G}})\) . For a large sample, the learned \(W_{\mathcal{G}}\) minimizes the Kullback-Leibler divergence \(\mathrm{KL}(P_{\mathrm{data}}|P_{\mathcal{G}})\) from the data distribution \(P_{\mathrm{data}}\) to the model distribution \(P_{\mathcal{G}}\) . The gradient of \(L_{\mathcal{G}}(W_{\mathcal{G}})\) is obtained according to the following identity \[\begin{array}{r l r}{{\frac{\partial}{\partial W_{\mathcal{G}}}\log P_{\mathcal{G}}(Y;W_{\mathcal{G}})}&{=}&{\frac{1}{P_{\mathcal{G}}(Y;W_{\mathcal{G}})}\frac{\partial}{\partial W_{\mathcal{G}}}\int P_{\mathcal{G}}(Y,X;W_{\mathcal{G}})dX}\\ &{=}&{\frac{1}{P_{\mathcal{G}}(Y;W_{\mathcal{G}})}\int\left[\frac{\partial}{\partial W_{\mathcal{G}}}\log P_{\mathcal{G}}(Y,X;W_{\mathcal{G}})\right]P_{\mathcal{G}}(Y,X;W_{\mathcal{G}})dX}\\ &{=}&{\int\left[\frac{\partial}{\partial W_{\mathcal{G}}}\log P_{\mathcal{G}}(Y,X;W_{\mathcal{G}})\right]\frac{P_{\mathcal{G}}(Y,X;W_{\mathcal{G}})}{P_{\mathcal{G}}(Y;W_{\mathcal{G}})}dX}\\ &{=}&{\mathrm{E}_{P_{\mathcal{G}}(X|Y;W_{\mathcal{G}})}\left[\frac{\partial}{\partial W_{\mathcal{G}}}\log P_{\mathcal{G}}(X,Y;W_{\mathcal{G}})\right],}\end{array} \quad (8)\] which underlies the EM algorithm. In general, the expectation in (8) is analytically intractable, and has to be approximated by MCMC that samples from the posterior \(P_{\mathcal{G}}(X | Y; W_{\mathcal{G}})\) , such as Langevin dynamics, which iterates \[X_{\tau +1} = X_{\tau} + \frac{\delta^{2}}{2} \frac{\partial}{\partial X} \log P_{\mathcal{G}}(X_{\tau}, Y; W_{\mathcal{G}}) + \delta U_{\tau}, \quad (9)\] <--- Page Split ---> ## Algorithm 1 CoopNets Algorithm ## Input: (1) training examples \(\{Y_{i},i = 1,\dots,n\}\) (2) numbers of Langevin steps \(l_{\mathcal{D}}\) and \(l_{\mathcal{G}}\) (3) number of learning iterations \(T\) ## Output: (1) estimated parameters \(W_{\mathcal{D}}\) and \(W_{\mathcal{G}}\) (2) synthesized examples \(\{\hat{Y}_{i},\tilde{Y}_{i},i = 1,\dots,\tilde{n}\}\) 1: Let \(t\gets 0\) , initialize \(W_{\mathcal{D}}\) and \(W_{\mathcal{G}}\) 2: repeat 3: Step G0: For \(i = 1,\dots,\tilde{n}\) , generate \(\hat{X}_{i}\sim \mathrm{N}(0,I_{d})\) , and generate \(\hat{Y}_{i} = g(\hat{X}_{i};W_{\mathcal{G}}^{(t)}) + \epsilon_{i}\) 4: Step D1: For \(i = 1,\dots,\tilde{n}\) , starting from \(\hat{Y}_{i}\) , run \(l_{\mathcal{D}}\) steps of Langevin dynamics to obtain \(\tilde{Y}_{i}\) , each step following equation (4).
5: Step G1: Treat the current \(\{\tilde{Y}_{i},i = 1,\dots,\tilde{n}\}\) as the training data; for each \(i\) , infer \(X_{i} = \hat{X}_{i}\) , or, more rigorously, starting from \(X_{i} = \hat{X}_{i}\) , run \(l_{\mathcal{G}}\) steps of Langevin dynamics to update \(X_{i}\) , each step following equation (9). 6: Step D2: Update \(W_{\mathcal{D}}^{(t + 1)} = W_{\mathcal{D}}^{(t)} + \gamma_{t}L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}}^{(t)})\) , where \(L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}}^{(t)})\) is computed according to (5). 7: Step G2: Update \(W_{\mathcal{G}}^{(t + 1)} = W_{\mathcal{G}}^{(t)} + \gamma_{t}L_{\mathcal{G}}^{\prime}(W_{\mathcal{G}}^{(t)})\) , where \(L_{\mathcal{G}}^{\prime}(W_{\mathcal{G}})\) is computed according to (10), except that \(Y_{i}\) is replaced by \(\tilde{Y}_{i}\) , and \(n\) by \(\tilde{n}\) . 8: Let \(t\gets t + 1\) 9: until \(t = T\) (A minimal code sketch of one iteration of this algorithm appears below, after the face completion experiment.) ## 5 EXPERIMENTS We use the MatConvNet of Vedaldi & Lenc (2015) for coding. For the descriptor net, we adopt the structure of Xie et al. (2016), where the bottom-up network consists of multiple layers of convolution by linear filtering, ReLU non-linearity, and down-sampling. We adopt the structure of the generator network of Radford et al. (2015); Dosovitskiy et al. (2015), where the top-down network consists of multiple layers of deconvolution by linear superposition, ReLU non-linearity, and up-sampling, with tanh non-linearity at the bottom layer (Radford et al., 2015) to make the signals fall within \([- 1,1]\) . ### 5.1 QUANTITATIVE EXPERIMENT ON FACE COMPLETION We conduct an experiment on learning from complete training images of human faces, and then testing the learned model on completing occluded testing images. The structure of the generator network is the same as in (Radford et al., 2015; Dosovitskiy et al., 2015). We adopt a 4-layer descriptor net. The first layer has \(96 \times 5 \times 5\) filters with sub-sampling of 2, the second layer has \(128 \times 5 \times 5\) filters with sub-sampling of 2, the third layer has \(256 \times 5 \times 5\) filters with sub-sampling of 2, and the final layer is a fully connected layer with 50 channels as output. We use \(L = 10\) steps of Langevin revision dynamics within each learning iteration, and the Langevin step size is set at 0.002. The learning rate is 0.07. The training data are 10,000 human faces randomly selected from the CelebA dataset (Liu et al., 2015). We run 600 cooperative learning iterations. Figure 2 displays 144 human faces synthesized by the descriptor net. To quantitatively test whether we have learned a good generator net \(g(X;W_{\mathcal{G}})\) even though it has never seen the training images directly in the training stage, we apply it to the task of recovering the occluded pixels of testing images. For each occluded testing image \(\hat{Y}\) , we use Step G1 of Algorithm G to infer the latent factors \(\hat{X}\) . The only change is with respect to the term \(\| \hat{Y} - g(X;W_{\mathcal{G}})\|^{2}\) , where the sum of squares is over all the observed pixels of \(\hat{Y}\) in the back-propagation computation. We run 1000 Langevin steps, initializing \(X\) from \(\mathrm{N}(0,I_{d})\) . After inferring \(X\) , the completed image \(g(X;W_{\mathcal{G}})\) is automatically obtained. We design 3 experiments, where we randomly place a \(20 \times 20\) , \(30 \times 30\) , or \(40 \times 40\) mask on each \(64 \times 64\) testing image. These 3 experiments are denoted by M20, M30, and M40 respectively (M for mask). We report the recovery errors and compare our method with 8 different image inpainting methods as well as the DCGAN of Radford et al. (2015).
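As flagged after Algorithm 1, here is a minimal PyTorch sketch of one CoopNets iteration (Steps G0, D1, G1, D2, G2), reusing the Descriptor and langevin_descriptor sketches above. The generator architecture, sigma, chain count, and the use of Adam optimizers are illustrative assumptions; since equation (10), referenced in Step G2, falls on a page boundary missing from this excerpt, the G2 update is written directly as the reconstruction objective implied by Eqs. (7)-(8), with the revised examples playing the role of the data and the known latent factors taken as the inferred ones (the shortcut in Step G1).

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Top-down ConvNet g(X; W_G) mapping d latent factors to a 64x64 image,
    with tanh output so signals fall within [-1, 1]."""
    def __init__(self, d=100):
        super().__init__()
        self.d = d
        self.net = nn.Sequential(
            nn.ConvTranspose2d(d, 128, 4, 1, 0), nn.ReLU(),    # 1x1 -> 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 4x4 -> 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 8x8 -> 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh(),     # 32x32 -> 64x64
        )

    def forward(self, x):
        return self.net(x.view(x.size(0), self.d, 1, 1))

def coopnets_step(descriptor, generator, opt_d, opt_g, y_obs,
                  n_tilde=64, sigma=0.3, l_d=10, delta=0.002):
    # Step G0: ancestral sampling of the initial drafts Y_hat = g(X_hat) + eps.
    x_hat = torch.randn(n_tilde, generator.d)
    with torch.no_grad():
        y_hat = generator(x_hat) + sigma * torch.randn(n_tilde, 3, 64, 64)
    # Step D1: Langevin revision of the drafts under the descriptor (Eq. 4).
    y_tilde = langevin_descriptor(descriptor, y_hat, n_steps=l_d, delta=delta)
    # Step G1 (shortcut): take the known latent factors as the inferred ones.
    # Step D2: shift density from revised drafts toward the observed data (Eq. 5).
    opt_d.zero_grad()
    loss_d = descriptor(y_tilde).mean() - descriptor(y_obs).mean()
    loss_d.backward()
    opt_d.step()
    # Step G2: non-linear regression of the revised drafts on the known latents.
    opt_g.zero_grad()
    loss_g = ((y_tilde - generator(x_hat)) ** 2).sum() / (2 * sigma ** 2 * n_tilde)
    loss_g.backward()
    opt_g.step()
    return y_tilde
```

A full run would construct opt_d and opt_g (e.g., torch.optim.Adam over each net's parameters) and call coopnets_step on a minibatch of observed images for T iterations, mirroring lines 2-9 of Algorithm 1.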
<--- Page Split ---> ![](images/5_0.jpg) <center>Figure 2: Generating human face patterns. The synthesized images are generated by the CoopNets algorithm that learns from 10,000 images. </center> For DCGAN, we use the parameter setting in Radford et al. (2015) except changing the number of learning iterations to 600. We use the same 10,000 training images to learn DCGAN. After the model is learned, we keep the generator and use the same method as ours to infer the latent factors \(X\) and recover the unobserved pixels. Among the 8 inpainting methods, Methods 1 and 2 are based on a Markov random field prior whose nearest-neighbor potential terms are \(\ell_{2}\) and \(\ell_{1}\) differences respectively. Methods 3 to 8 are interpolation methods. Please refer to D'Errico (2004) for more details. Table 1 displays the recovery errors of the 3 experiments, where the error is measured by the per-pixel difference (relative to the range of pixel values) between the original image and the recovered image on the occluded region, averaged over 100 testing images. Fig. 3 displays some recovery results by our method. The first row shows the original images as the ground truth. The second row displays the testing images with occluded pixels. The third row displays the recovered images by the generator net trained by the CoopNets algorithm on the 10,000 training images. Table 1: Comparison of recovery errors among different inpainting methods in 3 experiments <table><tr><td>Exp</td><td>Ours</td><td>GAN</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td></tr><tr><td>M20</td><td>.0966</td><td>.2535</td><td>.1545</td><td>.1506</td><td>.1277</td><td>.1123</td><td>.2493</td><td>.1123</td><td>.1126</td><td>.1277</td></tr><tr><td>M30</td><td>.1112</td><td>.2606</td><td>.1820</td><td>.1792</td><td>.1679</td><td>.1321</td><td>.3367</td><td>.1310</td><td>.1312</td><td>.1679</td></tr><tr><td>M40</td><td>.1184</td><td>.2618</td><td>.2055</td><td>.2032</td><td>.1894</td><td>.1544</td><td>.3809</td><td>.1525</td><td>.1526</td><td>.1894</td></tr></table> ![](images/5_1.jpg) <center>Figure 3: Row 1: ground-truth images. Row 2: testing images with occluded pixels. Row 3: recovered images by our method. </center> <--- Page Split ---> ### 5.2 QUALITATIVE EXPERIMENT ON SYNTHESIS We conduct an experiment on synthesizing images of categories from the Imagenet ILSVRC2012 dataset (Deng et al., 2009) and the MIT places205 dataset (Zhou et al., 2014). We adopt a 4-layer descriptor net. The first layer has \(64 \times 5 \times 5\) filters with sub-sampling of 2, the second layer has \(128 \times 3 \times 3\) filters with sub-sampling of 2, the third layer has \(256 \times 3 \times 3\) filters with sub-sampling of 1, and the final layer is a fully connected layer with 100 channels as output. We set the number of Langevin dynamics steps in each learning iteration to 10 and the step size to 0.002. The learning rate is 0.07. For each category, we randomly choose 1,000 images as training data and resize the images to \(64 \times 64\) . We run 1,000 cooperative learning iterations to train the model. Figures 4 and 5 display the results for two categories, where for each category, we show 144 original images sampled from the training set, and 144 synthesized images generated by our method. The appendix contains more synthesis results. As a comparison, we apply Algorithm G alone and the GAN code on the same 1,000 hotel room training images to learn a generator of the same structure as in CoopNets.
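As a concrete reading of Table 1's error measure (per-pixel difference on the occluded region, relative to the range of pixel values), here is a small numpy sketch; the value range of 2.0 assumes images scaled to \([-1,1]\) by the tanh output, and the mask geometry is illustrative.

```python
import numpy as np

def recovery_error(original, recovered, mask, value_range=2.0):
    """Mean absolute per-pixel difference over occluded pixels (mask == True),
    relative to the pixel value range."""
    return np.abs(original - recovered)[mask].mean() / value_range

# Example: a 64x64 image with a random 30x30 occlusion mask (experiment M30).
rng = np.random.default_rng(0)
original = rng.uniform(-1.0, 1.0, size=(64, 64))
recovered = np.clip(original + rng.normal(0.0, 0.05, size=(64, 64)), -1.0, 1.0)
mask = np.zeros((64, 64), dtype=bool)
i, j = rng.integers(0, 64 - 30, size=2)
mask[i:i + 30, j:j + 30] = True
print(recovery_error(original, recovered, mask))
```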
Figure 6 displays the synthesis results. We also try to synthesize images at high resolution ( \(224 \times 224\) ). We adopt a 4-layer descriptor net. The first layer has \(128 \times 15 \times 15\) filters with sub-sampling of 3, the second layer has \(256 \times 3 \times 3\) filters with sub-sampling of 2, the third layer has \(512 \times 3 \times 3\) filters with sub-sampling of 1, and the final layer is a fully connected layer with 100 channels as output. We enlarge the filters of the final layer of the generator net to \(14 \times 14\) to generate \(224 \times 224\) images. The learning rate is 0.05. We run 1000 cooperative learning iterations to train the model. Figures 7 and 8 show the synthesized images of two categories from the MIT places205 dataset. ## 6 CONCLUSION The most unique feature of our work is that the two networks feed each other the synthesized data in the learning process, including the initial, revised, and reconstructed synthesized data. Another unique feature of our work is that the learning process interweaves the existing maximum likelihood learning algorithms for the two networks. A third unique feature of our work is that the MCMC for the descriptor keeps rejuvenating the chains by refreshing the samples with independent replacements supplied by the generator, so that a single chain effectively amounts to an infinite number of chains, or the evolution of the whole marginal distribution modeled by the generator. ## CODE AND DATA http://www.stat.ucla.edu/~ywu/CoopNets/main.html ## 7 APPENDIX: CONVERGENCE ### 7.1 GENERATOR OF INFINITE CAPACITY In the CoopNets algorithm, the descriptor learns from the observed examples, while the generator learns from the descriptor through the synthesized examples. Therefore, the descriptor is the driving force in terms of learning, although the generator is the driving force in terms of synthesis. In order to understand the convergence of learning, we can start from Algorithm D for learning the descriptor. Algorithm D is a stochastic approximation algorithm (Robbins & Monro, 1951), except that the samples are generated by finite-step MCMC transitions. According to Younes (1999), Algorithm D converges to the maximum likelihood estimate under suitable regularity conditions on the mixing of the transition kernel of the MCMC and the schedule of the learning rate \(\gamma_{t}\) , even if the number of Langevin steps \(l_{\mathcal{D}}\) is finite or small (e.g., \(l_{\mathcal{D}} = 1\) ), and even if the number of parallel chains \(\tilde{n}\) is finite or small (e.g., \(\tilde{n} = 1\) ). The reason is that the random fluctuations caused by the finite number of chains, \(\tilde{n}\) , and the limited mixing caused by the finite steps of MCMC, \(l_{\mathcal{D}}\) , are mitigated if the <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 4: Generating forest road images. The category is from the MIT places205 dataset. </center> <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 5: Generating hotel room images. The category is from the MIT places205 dataset. </center> <--- Page Split ---> ![](images/9_0.jpg) <center>Figure 6: Generating hotel room images by Algorithm G alone and by GAN. </center> <--- Page Split ---> ![](images/10_0.jpg) <center>Figure 7: Generating forest road images at high resolution \((224 \times 224)\) . </center> <--- Page Split ---> ![](images/11_0.jpg) <center>Figure 8: Generating hotel room images at high resolution \((224 \times 224)\) . </center> <--- Page Split ---> learning rate \(\gamma_{t}\) is sufficiently small.
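This convergence claim is easy to see in a toy numpy illustration (not the paper's setting): with \(f(y;w) = wy\) and reference \(q = \mathrm{N}(0,1)\) , the descriptor model is \(\mathrm{N}(y|w,1)\) , so the maximum likelihood estimate of \(w\) is the data mean, which the stochastic approximation update approaches even with only a few short persistent chains; all constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy descriptor: f(y; w) = w*y with reference q = N(0, 1), so that
# P(y; w) = exp(w*y) q(y) / Z(w) = N(y | w, 1); the MLE of w is the data mean.
data = rng.normal(2.0, 1.0, size=1000)
w = 0.0
chains = np.zeros(4)                      # a few persistent chains (n~ = 4)

for t in range(2000):
    for _ in range(5):                    # l_D = 5 short Langevin steps
        grad_log_p = w - chains           # d/dy [w*y - y^2/2]
        chains += 0.05 * grad_log_p + np.sqrt(0.1) * rng.normal(size=4)
    gamma = 1.0 / (10.0 + t)              # decaying learning rate gamma_t
    w += gamma * (data.mean() - chains.mean())   # Eq. (5)-style update

print(w, data.mean())                     # w ends near the data mean
```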
At learning iteration \(t\) , let \(W_{\mathcal{D}}^{(t)}\) be the estimated parameters of the descriptor. Let \(P_{\mathcal{D}}^{(t + 1)}\) be the marginal distribution of the revised examples \(\{\tilde{Y}_{i}\}\) . Even though \(P_{\mathcal{D}}^{(t + 1)}\neq P_{\mathcal{D}}(Y;W_{\mathcal{D}}^{(t)})\) because \(l_{\mathcal{D}}\) is finite \((P_{\mathcal{D}}^{(t + 1)} = P_{\mathcal{D}}(Y;W_{\mathcal{D}}^{(t)})\) if \(l_{\mathcal{D}}\to \infty\) ), we still have \(W_{\mathcal{D}}^{(t)}\to \hat{W}_{\mathcal{D}}\) in probability according to Younes (1999), where \(\hat{W}_{\mathcal{D}}\) is the maximum likelihood estimate of \(W_{\mathcal{D}}\) . The efficiency of Algorithm D increases if the number of parallel chains \(\tilde{n}\) is large, because it leads to a more accurate estimation of the expectation in the gradient \(L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}})\) of equation (3), so that we can afford to use a larger learning rate \(\gamma_{t}\) for faster convergence. Now let us come back to the CoopNets algorithm. In order to understand how the descriptor net helps the training of the generator net, let us consider the idealized scenario where the number of parallel chains \(\tilde{n}\to \infty\) , the generator has infinite capacity, and in each iteration the generator estimates \(W_{\mathcal{G}}\) by maximum likelihood using the synthesized data from \(P_{\mathcal{D}}^{(t + 1)}\) . In this idealized scenario, the learned generator \(P_{\mathcal{G}}(Y;W_{\mathcal{G}}^{(t + 1)})\) will reproduce \(P_{\mathcal{D}}^{(t + 1)}\) by minimizing \(\mathrm{KL}(P_{\mathcal{D}}^{(t + 1)}(Y)|P_{\mathcal{G}}(Y;W_{\mathcal{G}}))\) , with \(P_{\mathcal{D}}^{(t + 1)}\) serving as its data distribution. Then eventually the learned generator \(P_{\mathcal{G}}(Y;\hat{W}_{\mathcal{G}})\) will reproduce \(P_{\mathcal{D}}(Y;\hat{W}_{\mathcal{D}})\) . Thus the cooperative training helps the learning of the generator. Note that the learned generator \(P_{\mathcal{G}}(Y;\hat{W}_{\mathcal{G}})\) will not reproduce the distribution of the observed data \(P_{\mathrm{data}}\) , unless the descriptor is of infinite capacity too. Conversely, the generator net also helps the learning of the descriptor net in the CoopNets algorithm. In Algorithm D, it is impractical to make the number of parallel chains \(\tilde{n}\) too large. On the other hand, it would be difficult for a small number of chains \(\{\tilde{Y}_{i},i = 1,\dots,\tilde{n}\}\) to explore the state space. In the CoopNets algorithm, because \(P_{\mathcal{G}}(Y;W_{\mathcal{G}}^{(t)})\) reproduces \(P_{\mathcal{D}}^{(t)}\) , we can generate a completely new batch of independent samples \(\{\hat{Y}_{i}\}\) from \(P_{\mathcal{G}}(Y;W_{\mathcal{G}}^{(t)})\) , and revise \(\{\hat{Y}_{i}\}\) to \(\{\tilde{Y}_{i}\}\) by Langevin dynamics, instead of running Langevin dynamics from the same old batch of \(\{\tilde{Y}_{i}\}\) as in the original Algorithm D. This is like implementing an infinite number of parallel chains, because each iteration evolves a fresh batch of examples, as if each iteration evolves a new set of chains. By updating the generator \(W_{\mathcal{G}}\) , it is as if we were updating this infinite number of parallel chains, because \(W_{\mathcal{G}}\) memorizes the whole distribution. Even if \(\tilde{n}\) in the CoopNets algorithm is small, e.g., \(\tilde{n} = 1\) , viewed from the perspective of Algorithm D, it is as if \(\tilde{n}\to \infty\) . Thus the above idealization \(\tilde{n}\to \infty\) is sound.
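The contrast drawn here is, in code, just a difference in how Step D1 is initialized; the helper names below follow the earlier CoopNets sketch and are assumptions, not the paper's implementation.

```python
import torch

def d1_init_persistent(y_tilde_prev):
    """Original Algorithm D: continue the same chains from the previous
    iteration's synthesized batch."""
    return y_tilde_prev

def d1_init_rejuvenated(generator, n_tilde=64):
    """CoopNets: draw a fresh ancestral sample from the generator each
    iteration; since W_G memorizes the whole distribution, this behaves
    like an infinite number of parallel chains."""
    with torch.no_grad():
        return generator(torch.randn(n_tilde, generator.d))
```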
### 7.2 GENERATOR OF FINITE CAPACITY From an information geometry point of view, let \(\mathcal{D} = \{P_{\mathcal{D}}(Y;W_{\mathcal{D}}),\forall W_{\mathcal{D}}\}\) be the manifold of the descriptor models, where each distribution \(P_{\mathcal{D}}(Y;W_{\mathcal{D}})\) is a point on this manifold. Then the maximum likelihood estimate of \(W_{\mathcal{D}}\) is a projection of the data distribution \(P_{\mathrm{data}}\) onto the manifold \(\mathcal{D}\) . Let \(\mathcal{G} = \{P_{\mathcal{G}}(Y;W_{\mathcal{G}}),\forall W_{\mathcal{G}}\}\) be the manifold of the generator models, where each distribution \(P_{\mathcal{G}}(Y;W_{\mathcal{G}})\) is a point on this manifold. Then the maximum likelihood estimate of \(W_{\mathcal{G}}\) is a projection of the data distribution \(P_{\mathrm{data}}\) onto the manifold \(\mathcal{G}\) . From now on, for notational simplicity and with a slight abuse of notation, we use \(W_{\mathcal{D}}\) to denote the descriptor distribution \(P_{\mathcal{D}}(Y;W_{\mathcal{D}})\) , and use \(W_{\mathcal{G}}\) to denote the generator distribution \(P_{\mathcal{G}}(Y;W_{\mathcal{G}})\) . We assume both the observed data size \(n\) and the synthesized data size \(\tilde{n}\) are large enough so that we shall work on distributions or populations instead of finite samples. As explained above, assuming \(\tilde{n}\to \infty\) is sound because the generator net can supply an unlimited number of examples. The Langevin revision dynamics runs a Markov chain from \(W_{\mathcal{G}}^{(t)}\) towards \(W_{\mathcal{D}}^{(t)}\) . Let \(\mathbf{L}_{W_{\mathcal{D}}}\) be the Markov transition kernel of \(l_{\mathcal{D}}\) steps of Langevin revisions towards \(W_{\mathcal{D}}\) . The distribution of the revised synthesized data is \[P_{\mathcal{D}}^{(t + 1)} = \mathbf{L}_{W_{\mathcal{D}}^{(t)}}\cdot W_{\mathcal{G}}^{(t)}, \quad (12)\] where the notation \(\mathbf{L}\cdot P\) denotes the marginal distribution obtained by running the Markov transition \(\mathbf{L}\) from \(P\) . The distribution \(P_{\mathcal{D}}^{(t + 1)}\) is in the middle between the two nets \(W_{\mathcal{G}}^{(t)}\) and \(W_{\mathcal{D}}^{(t)}\) , and it serves as the data distribution to train the generator, i.e., we project this distribution onto the manifold \(\mathcal{G} = \{P_{\mathcal{G}}(Y;W_{\mathcal{G}}),\forall W_{\mathcal{G}}\} = \{W_{\mathcal{G}}\}\) (recall we use \(W_{\mathcal{G}}\) to denote the distribution \(P_{\mathcal{G}}(Y;W_{\mathcal{G}})\) ) in the <--- Page Split ---> information geometry picture, so that \[W_{\mathcal{G}}^{(t + 1)} = \arg \min_{\mathcal{G}}\mathrm{KL}(P_{\mathcal{D}}^{(t + 1)}|W_{\mathcal{G}}). \quad (13)\] The learning process alternates between the Markov transition in (12) and the projection in (13), as illustrated by Figure 9. ![](images/13_0.jpg) <center>Figure 9: The learning of the generator alternates between Markov transition and projection. The family of the generator models \(\mathcal{G}\) is illustrated by the black curve. Each distribution is illustrated by a point. </center> In the case of \(l_{\mathcal{D}}\to \infty\) , \[\begin{array}{r l} & {W_{\mathcal{D}}^{(t)}\to \hat{W}_{\mathcal{D}} = \arg \min_{\mathcal{D}}\mathrm{KL}(P_{\mathrm{data}}|W_{\mathcal{D}}), \quad (14)}\\ & {W_{\mathcal{G}}^{(t)}\to \hat{W}_{\mathcal{G}} = \arg \min_{\mathcal{G}}\mathrm{KL}(\hat{W}_{\mathcal{D}}|W_{\mathcal{G}}). \quad (15)} \end{array}\] That is, we first project \(P_{\mathrm{data}}\) onto \(\mathcal{D}\) , and from there continue to project onto \(\mathcal{G}\) .
Therefore, \(W_{\mathcal{D}}\) converges to the maximum likelihood estimate with \(P_{\mathrm{data}}\) being the data distribution, while \(W_{\mathcal{G}}\) converges to the maximum likelihood estimate with \(\hat{W}_{\mathcal{D}}\) serving as the data distribution. For finite \(l_{\mathcal{D}}\) , the algorithm may converge to the following fixed points. The fixed point for the generator satisfies \[\hat{W}_{\mathcal{G}} = \arg \min_{\mathcal{G}}\mathrm{KL}(\mathbf{L}_{\hat{W}_{\mathcal{D}}}\cdot \hat{W}_{\mathcal{G}}|W_{\mathcal{G}}). \quad (16)\] The fixed point for the descriptor satisfies \[\hat{W}_{\mathcal{D}} = \arg \min_{\mathcal{D}}\left[\mathrm{KL}(P_{\mathrm{data}}|W_{\mathcal{D}}) - \mathrm{KL}(\mathbf{L}_{\hat{W}_{\mathcal{D}}}\cdot \hat{W}_{\mathcal{G}}|W_{\mathcal{D}})\right], \quad (17)\] which is similar to contrastive divergence (Hinton, 2002), except that \(\hat{W}_{\mathcal{G}}\) takes the place of \(P_{\mathrm{data}}\) in the second Kullback-Leibler divergence. Because \(\hat{W}_{\mathcal{G}}\) is supposed to be close to \(\hat{W}_{\mathcal{D}}\) , the second Kullback-Leibler divergence is supposed to be small, hence our algorithm is closer to maximum likelihood learning than contrastive divergence. Kim & Bengio (2016) learned the generator by gradient descent on \(\mathrm{KL}(W_{\mathcal{G}}|W_{\mathcal{D}}^{(t)})\) over \(\mathcal{G}\) . The objective function is \(\mathrm{KL}(W_{\mathcal{G}}|W_{\mathcal{D}}^{(t)}) = \mathrm{E}_{W_{\mathcal{G}}}[\log P_{\mathcal{G}}(Y;W_{\mathcal{G}})] - \mathrm{E}_{W_{\mathcal{G}}}[\log P_{\mathcal{D}}(Y;W_{\mathcal{D}}^{(t)})]\) , where the first term is the negative entropy that is intractable, and the second term is the expected energy that is tractable. Our learning method for the generator is consistent with the learning objective \(\mathrm{KL}(W_{\mathcal{G}}|W_{\mathcal{D}}^{(t)})\) , because \[\mathrm{KL}(P_{\mathcal{D}}^{(t + 1)}|W_{\mathcal{D}}^{(t)})\leq \mathrm{KL}(W_{\mathcal{G}}^{(t)}|W_{\mathcal{D}}^{(t)}). \quad (18)\] In fact, \(\mathrm{KL}(P_{\mathcal{D}}^{(t + 1)}|W_{\mathcal{D}}^{(t)}) \to 0\) monotonically as \(l_{\mathcal{D}} \to \infty\) due to the second law of thermodynamics. The reduction of the Kullback-Leibler divergence in (18) and the projection in (13) in our learning of the generator are consistent with the learning objective of reducing \(\mathrm{KL}(W_{\mathcal{G}}|W_{\mathcal{D}}^{(t)})\) in Kim & Bengio (2016). But the Monte Carlo implementation of \(\mathbf{L}\) in our work avoids the need to approximate the intractable entropy term. ### 7.3 MORE SYNTHESIS RESULTS We display more synthesis results at the resolution of \(64 \times 64\) . <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 10: Generating swimming pool images. The category is from the MIT places205 dataset. </center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 11: Generating volcano images. The category is from the MIT places205 dataset. </center> <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 12: Generating rock images. The category is from the MIT places205 dataset. </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 13: Generating desert images. The category is from the MIT places205 dataset. </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 14: Generating school bus images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 15: Generating lifeboat images. The category is from the Imagenet ILSVRC2012 1000 object categories.
</center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 16: Generating zebra images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 17: Generating strawberry images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center> <--- Page Split ---> ![](images/22_0.jpg) <center>Figure 18: Generating lemon images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center> <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 19: Generating apartment building images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center> <--- Page Split ---> ![](images/24_0.jpg) <center>Figure 20: Generating dining table images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center> <--- Page Split ---> ![](images/25_0.jpg) <center>Figure 21: Generating balloon images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center> <--- Page Split ---> ## ACKNOWLEDGEMENT We thank Hansheng Jiang for her work on this project as a summer visiting student. We thank Tian Han for sharing the code on learning the generator network, and for helpful discussions. The work is supported by NSF DMS 1310391, DARPA SIMPLEX N66001-15-C-4035, ONR MURI N00014-16-1-2007, and DARPA ARO W911NF-16-1-0579. <--- Page Split ---> ## REFERENCES Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248-255. IEEE, 2009. Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486-1494, 2015. John D'Errico. Interpolation inpainting, 2004. URL https://www.mathworks.com/matlabcentral/fileexchange/4551-inpaint-nans. A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014. Tian Han, Yang Lu, Song-Chun Zhu, and Ying Nian Wu. Alternating back-propagation for generator network. In 31st AAAI Conference on Artificial Intelligence, 2017. Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002. Taesup Kim and Yoshua Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016. Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. ICLR, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. Yann LeCun, Sumit Chopra, Raia Hadsell, Marc'Aurelio Ranzato, and Fu Jie Huang. A tutorial on energy-based learning. In Predicting Structured Data. MIT Press, 2006. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild.
In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730-3738, 2015. Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In ICML, 2014. Jiquan Ngiam, Zhenghao Chen, Pang Wei Koh, and Andrew Y. Ng. Learning deep energy models. In International Conference on Machine Learning, 2011. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Tony Jebara and Eric P. Xing (eds.), ICML, pp. 1278-1286. JMLR Workshop and Conference Proceedings, 2014. Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400-407, 1951. Yee Whye Teh, Max Welling, Simon Osindero, and Geoffrey E Hinton. Energy-based models for sparse overcomplete representations. Journal of Machine Learning Research, 4(Dec):1235-1260, 2003. <--- Page Split ---> A. Vedaldi and K. Lenc. MatConvNet: convolutional neural networks for MATLAB. In Proceedings of the ACM International Conference on Multimedia, 2015. Jianwen Xie, Yang Lu, Song-Chun Zhu, and Ying Nian Wu. A theory of generative ConvNet. In ICML, 2016. Laurent Younes. On the convergence of Markovian stochastic algorithms with rapidly decreasing ergodicity rates. Stochastics: An International Journal of Probability and Stochastic Processes, 65(3-4):177-228, 1999. Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features for scene recognition using places database. In Advances in Neural Information Processing Systems, pp. 487-495, 2014. Song-Chun Zhu. Statistical modeling and conceptualization of visual patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(6):691-712, 2003. Song-Chun Zhu, Ying Nian Wu, and David Mumford. Minimax entropy principle and its application to texture modeling. Neural Computation, 9(8):1627-1660, 1997. <--- Page Split --->
## ABSTRACT This paper studies the cooperative training of two probabilistic models of signals such as images. Both models are parametrized by convolutional neural networks (ConvNets). The first network is a descriptor network, which is an exponential family model or an energy- based model, whose feature statistics or energy function are defined by a bottom- up ConvNet, which maps the observed signal to the feature statistics. The second network is a generator network, which is a nonlinear version of factor analysis. It is defined by a top- down ConvNet, which maps the latent factors to the observed signal. The maximum likelihood training algorithms of both the descriptor net and the generator net are in the form of alternating back- propagation, and both algorithms involve Langevin sampling. We observe that the two training algorithms can cooperate with each other by jump- starting each other's Langevin sampling, and they can be seamlessly interwoven into a CoopNets algorithm that can train both nets simultaneously. ## 1 INTRODUCTION ### 1.1 TWO CONVNETS OF OPPOSITE DIRECTIONS We begin with a story that the reader of this paper can readily relate to. A student writes up an initial draft of a paper. His advisor then revises it. After that they submit the revised paper for review. The student then learns from his advisor's revision, while the advisor learns from the outside review. In this story, the advisor guides the student, but the student does most of the work. This paper is about two probabilistic models of signals such as images, and they play the roles of student and advisor as described above. Both models are parametrized by convolutional neural networks (ConvNets or CNNs) (LeCun et al., 1998; Krizhevsky et al., 2012). The two nets take two opposite directions. One is bottom- up, and the other is top- down, as illustrate by the following diagram: \[\begin{array}{r l} & {\mathrm{Bottom - up~ConvNet}}\\ & {\mathrm{features}}\\ & {\uparrow}\\ & {\mathrm{signal}}\\ & {\mathrm{(a)~Descriptor~Net}} \end{array} \qquad \begin{array}{r l} & {\mathrm{Top - down~ConvNet}}\\ & {\mathrm{latent~variables}}\\ & {\downarrow}\\ & {\mathrm{signal}}\\ & {\mathrm{(b)~Generator~Net}} \end{array} \quad (1)\] The simultaneous training of such two nets was first studied by the recent work of Kim & Bengio (2016). These two nets belong to two major classes of probabilistic models. (a) The exponential family models or the energy- based models (LeCun et al., 2006) or the Markov random field models (Zhu et al., 1997), where the probability distribution is defined by feature statistics or energy function computed from the signal by a bottom- up process. (b) The latent variable models or the directed graphical models, where the signal is assumed to be a transformation of the latent factors that follow a known prior distribution. The latent factors generate the signal by a top- down process. A classical example is factor analysis. The two classes of models have been contrasted by Zhu (2003); Teh et al. (2003); Ngiam et al. (2011). Zhu (2003) called the two classes of models the descriptive models and the generative <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: (a) Algorithm D involves sampling from the current model by Langevin dynamics. (b) Algorithm G involves sampling from the posterior distribution of the latent factors by Langevin dynamics. (c) CoopNets algorithm. 
The part of the flowchart for training the descriptor is similar to Algorithm D, except that the D1 Langevin sampling is initialized from the initial synthesized examples supplied by the generator. The part of the flowchart for training the generator can also be mapped to Algorithm G, except that the revised synthesized examples play the role of the observed data, and the known generated latent factors can be used as inferred latent factors (or be used to initialize the G1 Langevin sampling of the latent factors). </center> models respectively. Both classes of models can benefit from the high capacity of the multi- layer ConvNets. (a) In the exponential family models or the energy- based models, the feature statistics or the energy function can be defined by a bottom- up ConvNet that maps the signal to the features and the energy function (Ngiam et al., 2011; Xie et al., 2016). We call the resulting model a descriptive network or a descriptor net following Zhu (2003), because it is built on descriptive feature statistics. (b) In the latent variable models or the directed graphical models, the transformation from the latent factors to the signal can be defined by a top- down ConvNet (Dosovitskiy et al., 2015), which maps the latent factors to the signal. We call the resulting model a generative network or generator net following Goodfellow et al. (2014), who proposed such a model in their work on the generative adversarial networks (GAN). ### 1.2 TWO TRAINING ALGORITHMS AND THEIR COOPERATION Fig. 1(a) and (b) display the flowcharts of the maximum likelihood learning algorithms for training the descriptor and generator nets. We call the two algorithms Algorithm D and Algorithm G respectively. Algorithm D (Xie et al., 2016) iterates two steps: Step D1 synthesizes examples by sampling from the current model by Langevin dynamics. Step D2 updates the parameters to shift the density from the synthesized examples towards the observed examples. Algorithm G (Han et al., 2017) also iterates two steps. Step G1 infers latent factors for each observed example by sampling from their posterior distribution by Langevin dynamics. Step G2 updates the parameters by a non- linear regression of the observed examples on their corresponding latent factors. We use Langevin dynamics for Markov chain Monte Carlo (MCMC) sampling because the gradient term of Langevin dynamics can be readily computed via back- propagation. Thus all the steps D1, D2 and G1, G2 are powered by back- propagation, and both Algorithms D and G are alternating back- propagation algorithms. In this article, we propose to couple Algorithms D and G into a cooperative training algorithm that interweaves the steps of the two algorithms seamlessly. We call the resulting algorithm the CoopNets algorithm, and we show that it can train both nets simultaneously. Figure 1(c) displays the flowchart of the CoopNets algorithm. The generator is like the student. It generates the initial draft of the synthesized examples. The descriptor is like the advisor. It revises the initial draft by initializing its Langevin dynamics from the initial draft in Step D1, which produces the revised draft of the synthesized examples. The descriptor learns from the outside review in Step D2, which is in the form of the difference between the observed examples and the revised <--- Page Split ---> synthesized examples. The generator learns from how the descriptor revises the initial draft by reconstructing the revised draft in Step G2. 
For each synthesized example, the generator knows the latent factors that generate the initial draft, so that Step G1 can infer the latent factors by initializing its Langevin dynamics from their known values. In the CoopNets algorithm, the generator fuels the MCMC of the descriptor by supplying initial synthesized examples, which can be obtained by direct ancestral sampling. The generator then learns from the revised synthesized examples with virtually known latent factors. The cooperation is thus beneficial to both nets. ## 2 RELATED WORK Our work is inspired by the generative adversarial networks (GAN) (Goodfellow et al., 2014; Denton et al., 2015; Radford et al., 2015). In GAN, the generator net is paired with a discriminator net. The two nets play adversarial roles. In our work, the generator net and the descriptor net play cooperative roles, and they feed each other the initial, revised and reconstructed synthesized data. The learning of both nets is based on maximum likelihood, and the learning process is quite stable because of the cooperative nature and the consistent directions of the two maximum likelihood training algorithms. Another method to train the generator network is variational auto- encoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014; Mnih & Gregor, 2014), which learns an inferential or recognition network to approximate the posterior distribution of the latent factors. The connection between the descriptor net and the discriminator net has been explored by Xie et al. (2016), where the descriptor can be derived from the discriminator. Our work is most similar to the recent work of Kim & Bengio (2016). In fact, the settings of the two nets are the same. In their work, the generator learns from the descriptor by minimizing the Kullback- Leibler divergence from the generator to the descriptor, which can be decomposed into an energy term and an entropy term. In our work, the two nets interact with each other via synthesized data, and the generator learns from the descriptor by reconstructing the revised draft of synthesized examples. Our method does not need to approximate the intractable entropy term. Our work is related to the contrastive divergence algorithm (Hinton, 2002) for training the descriptor net. The contrastive divergence initializes the MCMC sampling from the observed examples. The CoopNets algorithm initializes the MCMC sampling from the examples supplied by the generator. ## 3 TWO NETS AND TWO TRAINING ALGORITHMS ### 3.1 DESCRIPTOR NET AND TRAINING ALGORITHM Let \(Y\) be the \(D\) - dimensional signal, such as an image. The descriptor model is in the form of exponential tilting of a reference distribution (Xie et al., 2016): \[P_{\mathcal{D}}(Y;W_{\mathcal{D}}) = \frac{1}{Z(W_{\mathcal{D}})}\exp \left[f(Y;W_{\mathcal{D}})\right]q(Y), \quad (2)\] where \(q(Y)\) is the reference distribution such as Gaussian white noise \(q(Y)\propto \exp \left(- \| Y\|^{2} / 2s^{2}\right)\) , \(f(Y;W_{\mathcal{D}})\) ( \(f\) stands for features) is the feature statistics or energy function, defined by a ConvNet whose parameters are denoted by \(W_{\mathcal{D}}\) . This ConvNet is bottom- up because it maps the signal \(Y\) to a number. See the diagram in (1). \(Z(W_{\mathcal{D}}) = \int \exp \left[f(Y;W_{\mathcal{D}})\right]q(Y)dY = \mathrm{E}_{q}\{\exp [f(Y;W_{\mathcal{D}})]\}\) is the normalizing constant, where \(\mathrm{E}_{q}\) is the expectation with respect to \(q\) . 
Suppose we observe training examples \(\{Y_{i},i = 1,\dots,n\}\) from an unknown data distribution \(P_{\mathrm{data}}(Y)\) . The maximum likelihood training seeks to maximize the log-likelihood function \(L_{\mathcal{D}}(W_{\mathcal{D}}) = \frac{1}{n}\sum_{i = 1}^{n}\log P_{\mathcal{D}}(Y_{i};W_{\mathcal{D}})\) . If the sample size \(n\) is large, the maximum likelihood estimator minimizes \(\mathrm{KL}(P_{\mathrm{data}}|P_{\mathcal{D}})\) , the Kullback-Leibler divergence from the data distribution \(P_{\mathrm{data}}\) to the model distribution \(P_{\mathcal{D}}\) . The gradient of \(L_{\mathcal{D}}(W_{\mathcal{D}})\) is \[L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}}) = \frac{1}{n}\sum_{i = 1}^{n}\frac{\partial}{\partial W_{\mathcal{D}}} f(Y_{i};W_{\mathcal{D}}) - \mathrm{E}_{W_{\mathcal{D}}}\left[\frac{\partial}{\partial W_{\mathcal{D}}} f(Y;W_{\mathcal{D}})\right], \quad (3)\] <--- Page Split ---> where \(\mathrm{E}_{W_{\mathcal{D}}}\) denotes the expectation with respect to \(P_{\mathcal{D}}(Y;W_{\mathcal{D}})\) . The expectation in equation (3) is analytically intractable and has to be approximated by MCMC, such as Langevin dynamics, which iterates the following step: \[Y_{\tau +1} = Y_{\tau} - \frac{\delta^{2}}{2}\left[\frac{Y_{\tau}}{s^{2}} -\frac{\partial}{\partial Y} f(Y_{\tau};W_{\mathcal{D}})\right] + \delta U_{\tau}, \quad (4)\] where \(\tau\) indexes the time steps of the Langevin dynamics, \(\delta\) is the step size, and \(U_{\tau} \sim \mathrm{N}(0, I_{D})\) is the Gaussian white noise term. We can run \(\tilde{n}\) parallel chains of Langevin dynamics according to (4) to obtain the synthesized examples \(\{\tilde{Y}_{i}, i = 1, \ldots , \tilde{n}\}\) . The Monte Carlo approximation to \(L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}})\) is \[L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}}) \approx \frac{1}{n} \sum_{i = 1}^{n} \frac{\partial}{\partial W_{\mathcal{D}}} f(Y_{i};W_{\mathcal{D}}) - \frac{1}{\tilde{n}} \sum_{i = 1}^{\tilde{n}} \frac{\partial}{\partial W_{\mathcal{D}}} f(\tilde{Y}_{i};W_{\mathcal{D}}), \quad (5)\] which is used to update \(W_{\mathcal{D}}\) . Algorithm D (Xie et al., 2016) iterates the following two steps after initializing \(W_{\mathcal{D}}\) and \(\{\tilde{Y}_{i}, i = 1, \ldots , \tilde{n}\}\) . Step \(D1\) : run \(l_{\mathcal{D}}\) steps of Langevin dynamics from the current \(\{\tilde{Y}_{i}\}\) according to (4). Step \(D2\) : update \(W_{\mathcal{D}}^{(t + 1)} = W_{\mathcal{D}}^{(t)} + \gamma_{t} L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}}^{(t)})\) with learning rate \(\gamma_{t}\) . The convergence of such an algorithm follows Younes (1999). ### 3.2 GENERATOR NET AND TRAINING ALGORITHM The generator net (Goodfellow et al., 2014) seeks to explain the signal \(Y\) of dimension \(D\) by a vector of latent factors \(X\) of dimension \(d\) , and usually \(d \ll D\) . The model is of the following form: \[X \sim \mathrm{N}(0, I_{d}), Y = g(X; W_{\mathcal{G}}) + \epsilon , \epsilon \sim \mathrm{N}(0, \sigma^{2} I_{D}). \quad (6)\] \(g(X; W_{\mathcal{G}})\) ( \(g\) stands for generator) is a top-down ConvNet defined by the parameters \(W_{\mathcal{G}}\) . The ConvNet \(g\) maps the latent factors \(X\) to the signal \(Y\) . See the diagram in (1).
The joint density of model (6) is \(P_{\mathcal{G}}(X, Y; W_{\mathcal{G}}) = P_{\mathcal{G}}(X) P_{\mathcal{G}}(Y | X; W_{\mathcal{G}})\), and

\[\log P_{\mathcal{G}}(X, Y; W_{\mathcal{G}}) = -\frac{1}{2\sigma^{2}} \| Y - g(X; W_{\mathcal{G}}) \|^{2} - \frac{1}{2} \| X \|^{2} + \mathrm{constant}, \quad (7)\]

where the constant term is independent of \(X\), \(Y\) and \(W_{\mathcal{G}}\). The marginal density is obtained by integrating out the latent factors \(X\), i.e., \(P_{\mathcal{G}}(Y; W_{\mathcal{G}}) = \int P_{\mathcal{G}}(X, Y; W_{\mathcal{G}}) dX\). The inference of \(X\) given \(Y\) is based on the posterior density \(P_{\mathcal{G}}(X | Y; W_{\mathcal{G}}) = P_{\mathcal{G}}(X, Y; W_{\mathcal{G}}) / P_{\mathcal{G}}(Y; W_{\mathcal{G}}) \propto P_{\mathcal{G}}(X, Y; W_{\mathcal{G}})\) as a function of \(X\).

For the training data \(\{Y_{i}, i = 1, \ldots , n\}\), the generator net can be trained by maximizing the log-likelihood \(L_{\mathcal{G}}(W_{\mathcal{G}}) = \frac{1}{n} \sum_{i = 1}^{n} \log P_{\mathcal{G}}(Y_{i}; W_{\mathcal{G}})\). For a large sample, the learned \(W_{\mathcal{G}}\) minimizes the Kullback-Leibler divergence \(\mathrm{KL}(P_{\mathrm{data}}|P_{\mathcal{G}})\) from the data distribution \(P_{\mathrm{data}}\) to the model distribution \(P_{\mathcal{G}}\). The gradient of \(L_{\mathcal{G}}(W_{\mathcal{G}})\) is obtained according to the following identity:

\[\begin{array}{r l}{\frac{\partial}{\partial W_{\mathcal{G}}}\log P_{\mathcal{G}}(Y;W_{\mathcal{G}})}&{= \frac{1}{P_{\mathcal{G}}(Y;W_{\mathcal{G}})}\frac{\partial}{\partial W_{\mathcal{G}}}\int P_{\mathcal{G}}(Y,X;W_{\mathcal{G}})dX}\\ &{= \frac{1}{P_{\mathcal{G}}(Y;W_{\mathcal{G}})}\int\left[\frac{\partial}{\partial W_{\mathcal{G}}}\log P_{\mathcal{G}}(Y,X;W_{\mathcal{G}})\right]P_{\mathcal{G}}(Y,X;W_{\mathcal{G}})dX}\\ &{= \int\left[\frac{\partial}{\partial W_{\mathcal{G}}}\log P_{\mathcal{G}}(Y,X;W_{\mathcal{G}})\right]\frac{P_{\mathcal{G}}(Y,X;W_{\mathcal{G}})}{P_{\mathcal{G}}(Y;W_{\mathcal{G}})}dX}\\ &{= \mathrm{E}_{P_{\mathcal{G}}(X|Y;W_{\mathcal{G}})}\left[\frac{\partial}{\partial W_{\mathcal{G}}}\log P_{\mathcal{G}}(X,Y;W_{\mathcal{G}})\right],}\end{array} \quad (8)\]

which underlies the EM algorithm. In general, the expectation in (8) is analytically intractable and has to be approximated by MCMC that samples from the posterior \(P_{\mathcal{G}}(X | Y; W_{\mathcal{G}})\), such as Langevin dynamics, which iterates

\[X_{\tau +1} = X_{\tau} + \frac{\delta^{2}}{2} \frac{\partial}{\partial X} \log P_{\mathcal{G}}(X_{\tau}, Y; W_{\mathcal{G}}) + \delta U_{\tau}, \quad (9)\]

<--- Page Split --->

## Algorithm 1 CoopNets Algorithm

## Input:
(1) training examples \(\{Y_{i},i = 1,\dots,n\}\)
(2) numbers of Langevin steps \(l_{\mathcal{D}}\) and \(l_{\mathcal{G}}\)
(3) number of learning iterations \(T\)

## Output:
(1) estimated parameters \(W_{\mathcal{D}}\) and \(W_{\mathcal{G}}\)
(2) synthesized examples \(\{\hat{Y}_{i},\tilde{Y}_{i},i = 1,\dots,\tilde{n}\}\)

1: Let \(t\gets 0\), initialize \(W_{\mathcal{D}}\) and \(W_{\mathcal{G}}\)
2: repeat
3: Step G0: For \(i = 1,\dots,\tilde{n}\), generate \(\hat{X}_{i}\sim \mathrm{N}(0,I_{d})\), and generate \(\hat{Y}_{i} = g(\hat{X}_{i};W_{\mathcal{G}}^{(t)}) + \epsilon_{i}\).
4: Step D1: For \(i = 1,\dots,\tilde{n}\), starting from \(\hat{Y}_{i}\), run \(l_{\mathcal{D}}\) steps of Langevin dynamics to obtain \(\tilde{Y}_{i}\), each step following equation (4).
5: Step G1: Treat the current \(\{\tilde{Y}_{i},i = 1,\dots,\tilde{n}\}\) as the training data; for each \(i\), infer \(\tilde{X}_{i} = \hat{X}_{i}\). Or, more rigorously, starting from \(\tilde{X}_{i} = \hat{X}_{i}\), run \(l_{\mathcal{G}}\) steps of Langevin dynamics to update \(\tilde{X}_{i}\), each step following equation (9).
6: Step D2: Update \(W_{\mathcal{D}}^{(t + 1)} = W_{\mathcal{D}}^{(t)} + \gamma_{t}L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}}^{(t)})\), where \(L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}}^{(t)})\) is computed according to (5).
7: Step G2: Update \(W_{\mathcal{G}}^{(t + 1)} = W_{\mathcal{G}}^{(t)} + \gamma_{t}L_{\mathcal{G}}^{\prime}(W_{\mathcal{G}}^{(t)})\), where \(L_{\mathcal{G}}^{\prime}(W_{\mathcal{G}})\) is computed according to (10), except that \(Y_{i}\) is replaced by \(\tilde{Y}_{i}\), and \(n\) by \(\tilde{n}\).
8: Let \(t\gets t + 1\)
9: until \(t = T\)

## 5 EXPERIMENTS

We use the MatConvNet of Vedaldi & Lenc (2015) for coding. For the descriptor net, we adopt the structure of Xie et al. (2016), where the bottom-up network consists of multiple layers of convolution by linear filtering, ReLU non-linearity, and down-sampling. We adopt the structure of the generator network of Radford et al. (2015); Dosovitskiy et al. (2015), where the top-down network consists of multiple layers of deconvolution by linear superposition, ReLU non-linearity, and up-sampling, with a tanh non-linearity at the bottom layer (Radford et al., 2015) to make the signals fall within \([-1,1]\).

### 5.1 QUANTITATIVE EXPERIMENT ON FACE COMPLETION

We conduct an experiment on learning from complete training images of human faces, and then testing the learned model on completing occluded testing images. The structure of the generator network is the same as in (Radford et al., 2015; Dosovitskiy et al., 2015). We adopt a 4-layer descriptor net. The first layer has \(96 \times 5 \times 5\) filters with sub-sampling of 2, the second layer has \(128 \times 5 \times 5\) filters with sub-sampling of 2, the third layer has \(256 \times 5 \times 5\) filters with sub-sampling of 2, and the final layer is a fully connected layer with 50 channels as output. We use \(l_{\mathcal{D}} = 10\) steps of Langevin revision dynamics within each learning iteration, and the Langevin step size is set at 0.002. The learning rate is 0.07. The training data are 10,000 human faces randomly selected from the CelebA dataset (Liu et al., 2015). We run 600 cooperative learning iterations. Figure 2 displays 144 synthesized human faces generated by the descriptor net.

To quantitatively test whether we have learned a good generator net \(g(X;W_{\mathcal{G}})\), even though it has never seen the training images directly during training, we apply it to the task of recovering the occluded pixels of testing images. For each occluded testing image \(Y\), we use Step G1 of Algorithm G to infer the latent factors \(X\). The only change is with respect to the term \(\| Y - g(X;W_{\mathcal{G}})\|^{2}\), where the sum of squares is computed over only the observed pixels of \(Y\) in the back-propagation computation. We run 1000 Langevin steps, initializing \(X\) from \(\mathrm{N}(0,I_{d})\). After inferring \(X\), the completed image \(g(X;W_{\mathcal{G}})\) is automatically obtained. We design 3 experiments, where we randomly place a \(20 \times 20\), \(30 \times 30\), or \(40 \times 40\) mask on each \(64 \times 64\) testing image. These 3 experiments are denoted M20, M30, and M40 respectively (M for mask). We report the recovery errors and compare our method with 8 different image inpainting methods as well as the DCGAN of Radford et al. (2015).
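The masked inference just described can be sketched in a few lines. The following illustrative PyTorch version runs the Langevin dynamics of equation (9), with the reconstruction term of (7) restricted to observed pixels. The generator `g_net`, the latent dimension, and the value of `sigma` are placeholders (a concrete generator may expect the latent code reshaped to a 4-D tensor), and the step count follows the 1000 steps quoted above.

```python
import torch

def complete_image(y_obs, mask, g_net, d=100, n_steps=1000, delta=0.002, sigma=0.3):
    """Recover occluded pixels: Langevin inference of X given the observed
    pixels of y_obs (mask = 1 observed, 0 occluded), then return g(X; W_G)."""
    x = torch.randn(y_obs.size(0), d)        # initialize X from N(0, I_d)
    for _ in range(n_steps):
        x.requires_grad_(True)
        # reconstruction error summed over observed pixels only
        recon = (((y_obs - g_net(x)) * mask) ** 2).flatten(1).sum(dim=1)
        log_joint = -recon / (2 * sigma ** 2) - 0.5 * (x ** 2).sum(dim=1)
        grad = torch.autograd.grad(log_joint.sum(), x)[0]
        with torch.no_grad():                # Langevin update of equation (9)
            x = x + (delta ** 2 / 2) * grad + delta * torch.randn_like(x)
    with torch.no_grad():
        return g_net(x)                      # completed image g(X; W_G)
```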
<--- Page Split --->

![](images/5_0.jpg)

<center>Figure 2: Generating human face patterns. The synthesized images are generated by the CoopNets algorithm learning from 10,000 images. </center>

For DCGAN, we use the parameter settings of Radford et al. (2015), except that we change the number of learning iterations to 600. We use the same 10,000 training images to learn DCGAN. After the model is learned, we keep the generator and use the same method as ours to infer the latent factors \(X\) and recover the unobserved pixels. Among the 8 inpainting methods, Methods 1 and 2 are based on a Markov random field prior whose nearest-neighbor potential terms are the \(\ell_{2}\) and \(\ell_{1}\) differences respectively. Methods 3 to 8 are interpolation methods; please refer to D'Errico (2004) for more details.

Table 1 displays the recovery errors of the 3 experiments, where the error is measured by the per-pixel difference (relative to the range of pixel values) between the original image and the recovered image on the occluded region, averaged over 100 testing images. Fig. 3 displays some recovery results by our method. The first row shows the original images as the ground truth. The second row displays the testing images with occluded pixels. The third row displays the recovered images produced by the generator net trained by the CoopNets algorithm on the 10,000 training images.

Table 1: Comparison of recovery errors among different inpainting methods in the 3 experiments

<table><tr><td>Exp</td><td>Ours</td><td>GAN</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td></tr><tr><td>M20</td><td>.0966</td><td>.2535</td><td>.1545</td><td>.1506</td><td>.1277</td><td>.1123</td><td>.2493</td><td>.1123</td><td>.1126</td><td>.1277</td></tr><tr><td>M30</td><td>.1112</td><td>.2606</td><td>.1820</td><td>.1792</td><td>.1679</td><td>.1321</td><td>.3367</td><td>.1310</td><td>.1312</td><td>.1679</td></tr><tr><td>M40</td><td>.1184</td><td>.2618</td><td>.2055</td><td>.2032</td><td>.1894</td><td>.1544</td><td>.3809</td><td>.1525</td><td>.1526</td><td>.1894</td></tr></table>

![](images/5_1.jpg)

<center>Figure 3: Row 1: ground-truth images. Row 2: testing images with occluded pixels. Row 3: recovered images by our method. </center>

<--- Page Split --->

### 5.2 QUALITATIVE EXPERIMENT ON SYNTHESIS

We conduct an experiment on synthesizing images of categories from the Imagenet ILSVRC2012 dataset (Deng et al., 2009) and the MIT places205 dataset (Zhou et al., 2014). We adopt a 4-layer descriptor net. The first layer has \(64 \times 5 \times 5\) filters with sub-sampling of 2, the second layer has \(128 \times 3 \times 3\) filters with sub-sampling of 2, the third layer has \(256 \times 3 \times 3\) filters with sub-sampling of 1, and the final layer is a fully connected layer with 100 channels as output. We set the number of Langevin dynamics steps in each learning iteration to 10 and the step size to 0.002. The learning rate is 0.07. For each category, we randomly choose 1,000 images as training data and resize the images to \(64 \times 64\). We run 1,000 cooperative learning iterations to train the model. Figures 4 and 5 display the results for two categories, where for each category we show 144 original images sampled from the training set and 144 synthesized images generated by our method. The appendix contains more synthesis results. As a comparison, we apply Algorithm G alone and the GAN code to the same 1,000 hotel room training images to learn a generator of the same structure as in CoopNets.
Figure 6 displays the synthesis results. We also synthesize images at high resolution (\(224 \times 224\)). We adopt a 4-layer descriptor net. The first layer has \(128 \times 15 \times 15\) filters with sub-sampling of 3, the second layer has \(256 \times 3 \times 3\) filters with sub-sampling of 2, the third layer has \(512 \times 3 \times 3\) filters with sub-sampling of 1, and the final layer is a fully connected layer with 100 channels as output. We enlarge the filters of the final layer of the generator net to \(14 \times 14\) to generate \(224 \times 224\) images. The learning rate is 0.05. We run 1000 cooperative learning iterations to train the model. Figures 7 and 8 show the synthesized images of two categories from the MIT places205 dataset.

## 6 CONCLUSION

The most distinctive feature of our work is that the two networks feed each other the synthesized data during the learning process, including the initial, revised, and reconstructed synthesized data. Another distinctive feature of our work is that the learning process interweaves the existing maximum likelihood learning algorithms for the two networks. A third distinctive feature of our work is that the MCMC for the descriptor keeps rejuvenating the chains by refreshing the samples with independent replacements supplied by the generator, so that a single chain effectively amounts to an infinite number of chains, i.e., the evolution of the whole marginal distribution modeled by the generator.

## CODE AND DATA

http://www.stat.ucla.edu/~ywu/CoopNets/main.html

## 7 APPENDIX: CONVERGENCE

### 7.1 GENERATOR OF INFINITE CAPACITY

In the CoopNets algorithm, the descriptor learns from the observed examples, while the generator learns from the descriptor through the synthesized examples. Therefore, the descriptor is the driving force in terms of learning, although the generator is the driving force in terms of synthesis. In order to understand the convergence of learning, we can start from Algorithm D for learning the descriptor.

Algorithm D is a stochastic approximation algorithm (Robbins & Monro, 1951), except that the samples are generated by finite-step MCMC transitions. According to Younes (1999), Algorithm D converges to the maximum likelihood estimate under suitable regularity conditions on the mixing of the transition kernel of the MCMC and on the schedule of the learning rate \(\gamma_{t}\), even if the number of Langevin steps \(l_{\mathcal{D}}\) is finite or small (e.g., \(l_{\mathcal{D}} = 1\)), and even if the number of parallel chains \(\tilde{n}\) is finite or small (e.g., \(\tilde{n} = 1\)). The reason is that the random fluctuations caused by the finite number of chains, \(\tilde{n}\), and the limited mixing caused by the finite steps of MCMC, \(l_{\mathcal{D}}\), are mitigated if the learning rate \(\gamma_{t}\) is sufficiently small.

<--- Page Split --->

![](images/7_0.jpg)

<center>Figure 4: Generating forest road images. The category is from the MIT places205 dataset. </center>

<--- Page Split --->

![](images/8_0.jpg)

<center>Figure 5: Generating hotel room images. The category is from the MIT places205 dataset. </center>

<--- Page Split --->

![](images/9_0.jpg)

<center>Figure 6: Generating hotel room images by Algorithm G alone and by GAN. </center>

<--- Page Split --->

![](images/10_0.jpg)

<center>Figure 7: Generating forest road images at high resolution \((224 \times 224)\). </center>

<--- Page Split --->

![](images/11_0.jpg)

<center>Figure 8: Generating hotel room images at high resolution \((224 \times 224)\). </center>

<--- Page Split --->
At learning iteration \(t\), let \(W_{\mathcal{D}}^{(t)}\) be the estimated parameter of the descriptor, and let \(P_{\mathcal{D}}^{(t + 1)}\) be the marginal distribution of the revised examples \(\{\tilde{Y}_{i}\}\). Even though \(P_{\mathcal{D}}^{(t + 1)}\neq P_{\mathcal{D}}(Y;W_{\mathcal{D}}^{(t)})\) because \(l_{\mathcal{D}}\) is finite (\(P_{\mathcal{D}}^{(t + 1)} = P_{\mathcal{D}}(Y;W_{\mathcal{D}}^{(t)})\) if \(l_{\mathcal{D}}\to \infty\)), we still have \(W_{\mathcal{D}}^{(t)}\to \hat{W}_{\mathcal{D}}\) in probability according to Younes (1999), where \(\hat{W}_{\mathcal{D}}\) is the maximum likelihood estimate of \(W_{\mathcal{D}}\). The efficiency of Algorithm D increases if the number of parallel chains \(\tilde{n}\) is large, because this leads to a more accurate estimation of the expectation in the gradient \(L_{\mathcal{D}}^{\prime}(W_{\mathcal{D}})\) of equation (3), so that we can afford to use a larger learning rate \(\gamma_{t}\) for faster convergence.

Now let us come back to the CoopNets algorithm. In order to understand how the descriptor net helps the training of the generator net, consider the idealized scenario where the number of parallel chains \(\tilde{n}\to \infty\), the generator has infinite capacity, and in each iteration the generator estimates \(W_{\mathcal{G}}\) by maximum likelihood using the synthesized data from \(P_{\mathcal{D}}^{(t + 1)}\). In this idealized scenario, the learned generator \(P_{\mathcal{G}}(Y;W_{\mathcal{G}}^{(t + 1)})\) will reproduce \(P_{\mathcal{D}}^{(t + 1)}\) by minimizing \(\mathrm{KL}(P_{\mathcal{D}}^{(t + 1)}(Y)|P_{\mathcal{G}}(Y;W_{\mathcal{G}}))\), with \(P_{\mathcal{D}}^{(t + 1)}\) serving as its data distribution. Eventually the learned generator \(P_{\mathcal{G}}(Y;\hat{W}_{\mathcal{G}})\) will reproduce \(P_{\mathcal{D}}(Y;\hat{W}_{\mathcal{D}})\). Thus the cooperative training helps the learning of the generator. Note that the learned generator \(P_{\mathcal{G}}(Y;\hat{W}_{\mathcal{G}})\) will not reproduce the distribution of the observed data \(P_{\mathrm{data}}\), unless the descriptor is of infinite capacity too.

Conversely, the generator net also helps the learning of the descriptor net in the CoopNets algorithm. In Algorithm D, it is impractical to make the number of parallel chains \(\tilde{n}\) too large. On the other hand, it would be difficult for a small number of chains \(\{\tilde{Y}_{i},i = 1,\dots,\tilde{n}\}\) to explore the state space. In the CoopNets algorithm, because \(P_{\mathcal{G}}(Y;W_{\mathcal{G}}^{(t)})\) reproduces \(P_{\mathcal{D}}^{(t)}\), we can generate a completely new batch of independent samples \(\{\hat{Y}_{i}\}\) from \(P_{\mathcal{G}}(Y;W_{\mathcal{G}}^{(t)})\), and revise \(\{\hat{Y}_{i}\}\) to \(\{\tilde{Y}_{i}\}\) by Langevin dynamics, instead of running Langevin dynamics from the same old batch of \(\{\tilde{Y}_{i}\}\) as in the original Algorithm D. This is like implementing an infinite number of parallel chains, because each iteration evolves a fresh batch of examples, as if each iteration evolves a new set of chains. By updating the generator \(W_{\mathcal{G}}\), it is as if we are updating the infinite number of parallel chains, because \(W_{\mathcal{G}}\) memorizes the whole distribution. Even if \(\tilde{n}\) in the CoopNets algorithm is small, e.g., \(\tilde{n} = 1\), viewed from the perspective of Algorithm D it is as if \(\tilde{n}\to \infty\). Thus the above idealization \(\tilde{n}\to \infty\) is sound.
### 7.2 GENERATOR OF FINITE CAPACITY

From an information geometry point of view, let \(\mathcal{D} = \{P_{\mathcal{D}}(Y;W_{\mathcal{D}}),\forall W_{\mathcal{D}}\}\) be the manifold of the descriptor models, where each distribution \(P_{\mathcal{D}}(Y;W_{\mathcal{D}})\) is a point on this manifold. Then the maximum likelihood estimate of \(W_{\mathcal{D}}\) is a projection of the data distribution \(P_{\mathrm{data}}\) onto the manifold \(\mathcal{D}\). Let \(\mathcal{G} = \{P_{\mathcal{G}}(Y;W_{\mathcal{G}}),\forall W_{\mathcal{G}}\}\) be the manifold of the generator models, where each distribution \(P_{\mathcal{G}}(Y;W_{\mathcal{G}})\) is a point on this manifold. Then the maximum likelihood estimate of \(W_{\mathcal{G}}\) is a projection of the data distribution \(P_{\mathrm{data}}\) onto the manifold \(\mathcal{G}\).

From now on, for notational simplicity and with a slight abuse of notation, we use \(W_{\mathcal{D}}\) to denote the descriptor distribution \(P_{\mathcal{D}}(Y;W_{\mathcal{D}})\), and \(W_{\mathcal{G}}\) to denote the generator distribution \(P_{\mathcal{G}}(Y;W_{\mathcal{G}})\). We assume both the observed data size \(n\) and the synthesized data size \(\tilde{n}\) are large enough that we may work with distributions or populations instead of finite samples. As explained above, assuming \(\tilde{n}\to \infty\) is sound because the generator net can supply an unlimited number of examples.

The Langevin revision dynamics runs a Markov chain from \(W_{\mathcal{G}}^{(t)}\) towards \(W_{\mathcal{D}}^{(t)}\). Let \(\mathbf{L}_{W_{\mathcal{D}}}\) be the Markov transition kernel of \(l_{\mathcal{D}}\) steps of Langevin revisions towards \(W_{\mathcal{D}}\). The distribution of the revised synthesized data is

\[P_{\mathcal{D}}^{(t + 1)} = \mathbf{L}_{W_{\mathcal{D}}^{(t)}}\cdot W_{\mathcal{G}}^{(t)}, \quad (12)\]

where the notation \(\mathbf{L}\cdot P\) denotes the marginal distribution obtained by running the Markov transition \(\mathbf{L}\) from \(P\). The distribution \(P_{\mathcal{D}}^{(t + 1)}\) lies in the middle between the two nets \(W_{\mathcal{G}}^{(t)}\) and \(W_{\mathcal{D}}^{(t)}\), and it serves as the data distribution to train the generator, i.e., we project this distribution onto the manifold \(\mathcal{G} = \{P_{\mathcal{G}}(Y;W_{\mathcal{G}}),\forall W_{\mathcal{G}}\} = \{W_{\mathcal{G}}\}\) (recall that we use \(W_{\mathcal{G}}\) to denote the distribution \(P_{\mathcal{G}}(Y;W_{\mathcal{G}})\)) in the

<--- Page Split --->

information geometry picture, so that

\[W_{\mathcal{G}}^{(t + 1)} = \arg \min_{\mathcal{G}}\mathrm{KL}(P_{\mathcal{D}}^{(t + 1)}|W_{\mathcal{G}}). \quad (13)\]

The learning process alternates between the Markov transition in (12) and the projection in (13), as illustrated by Figure 9.

![](images/13_0.jpg)

<center>Figure 9: The learning of the generator alternates between Markov transition and projection. The family of the generator models \(\mathcal{G}\) is illustrated by the black curve. Each distribution is illustrated by a point. </center>

In the case of \(l_{\mathcal{D}}\to \infty\),

\[W_{\mathcal{D}}^{(t)}\to \hat{W}_{\mathcal{D}} = \arg \min_{\mathcal{D}}\mathrm{KL}(P_{\mathrm{data}}|W_{\mathcal{D}}), \quad (14)\]

\[W_{\mathcal{G}}^{(t)}\to \hat{W}_{\mathcal{G}} = \arg \min_{\mathcal{G}}\mathrm{KL}(\hat{W}_{\mathcal{D}}|W_{\mathcal{G}}). \quad (15)\]

That is, we first project \(P_{\mathrm{data}}\) onto \(\mathcal{D}\), and from there continue to project onto \(\mathcal{G}\).
Therefore, \(W_{\mathcal{D}}\) converges to the maximum likelihood estimate with \(P_{\mathrm{data}}\) being the data distribution, while \(W_{\mathcal{G}}\) converges to the maximum likelihood estimate with \(\hat{W}_{\mathcal{D}}\) serving as the data distribution.

For finite \(l_{\mathcal{D}}\), the algorithm may converge to the following fixed points. The fixed point for the generator satisfies

\[\hat{W}_{\mathcal{G}} = \arg \min_{\mathcal{G}}\mathrm{KL}(\mathbf{L}_{\hat{W}_{\mathcal{D}}}\cdot \hat{W}_{\mathcal{G}}|W_{\mathcal{G}}). \quad (16)\]

The fixed point for the descriptor satisfies

\[\hat{W}_{\mathcal{D}} = \arg \min_{\mathcal{D}}\left[\mathrm{KL}(P_{\mathrm{data}}|W_{\mathcal{D}}) - \mathrm{KL}(\mathbf{L}_{\hat{W}_{\mathcal{D}}}\cdot \hat{W}_{\mathcal{G}}|W_{\mathcal{D}})\right], \quad (17)\]

which is similar to contrastive divergence (Hinton, 2002), except that \(\hat{W}_{\mathcal{G}}\) takes the place of \(P_{\mathrm{data}}\) in the second Kullback-Leibler divergence. Because \(\hat{W}_{\mathcal{G}}\) is supposed to be close to \(\hat{W}_{\mathcal{D}}\), the second Kullback-Leibler divergence is supposed to be small; hence our algorithm is closer to maximum likelihood learning than contrastive divergence.

Kim & Bengio (2016) learn the generator by gradient descent on \(\mathrm{KL}(W_{\mathcal{G}}|W_{\mathcal{D}}^{(t)})\) over \(\mathcal{G}\). The objective function is \(\mathrm{KL}(W_{\mathcal{G}}|W_{\mathcal{D}}^{(t)}) = \mathrm{E}_{W_{\mathcal{G}}}[\log P_{\mathcal{G}}(Y;W_{\mathcal{G}})] - \mathrm{E}_{W_{\mathcal{G}}}[\log P_{\mathcal{D}}(Y;W_{\mathcal{D}}^{(t)})]\), where the first term is the negative entropy, which is intractable, and the second term is the expected energy, which is tractable. Our learning method for the generator is consistent with the learning objective \(\mathrm{KL}(W_{\mathcal{G}}|W_{\mathcal{D}}^{(t)})\), because

\[\mathrm{KL}(P_{\mathcal{D}}^{(t + 1)}|W_{\mathcal{D}}^{(t)})\leq \mathrm{KL}(W_{\mathcal{G}}^{(t)}|W_{\mathcal{D}}^{(t)}). \quad (18)\]

In fact, \(\mathrm{KL}(P_{\mathcal{D}}^{(t + 1)}|W_{\mathcal{D}}^{(t)}) \to 0\) monotonically as \(l_{\mathcal{D}} \to \infty\) due to the second law of thermodynamics. The reduction of the Kullback-Leibler divergence in (18) and the projection in (13) in our learning of the generator are consistent with the learning objective of reducing \(\mathrm{KL}(W_{\mathcal{G}}|W_{\mathcal{D}}^{(t)})\) in Kim & Bengio (2016). But the Monte Carlo implementation of \(\mathbf{L}\) in our work avoids the need to approximate the intractable entropy term.

### 7.3 MORE SYNTHESIS RESULTS

We display more synthesis results at the resolution of \(64 \times 64\).

<--- Page Split --->

![](images/14_0.jpg)

<center>Figure 10: Generating swimming pool images. The category is from the MIT places205 dataset. </center>

<--- Page Split --->

![](images/15_0.jpg)

<center>Figure 11: Generating volcano images. The category is from the MIT places205 dataset. </center>

<--- Page Split --->

![](images/16_0.jpg)

<center>Figure 12: Generating rock images. The category is from the MIT places205 dataset. </center>

<--- Page Split --->

![](images/17_0.jpg)

<center>Figure 13: Generating desert images. The category is from the MIT places205 dataset. </center>

<--- Page Split --->

![](images/18_0.jpg)

<center>Figure 14: Generating schoolbus images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center>

<--- Page Split --->

![](images/19_0.jpg)

<center>Figure 15: Generating lifeboat images. The category is from the Imagenet ILSVRC2012 1000 object categories.
</center>

<--- Page Split --->

![](images/20_0.jpg)

<center>Figure 16: Generating zebra images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center>

<--- Page Split --->

![](images/21_0.jpg)

<center>Figure 17: Generating strawberry images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center>

<--- Page Split --->

![](images/22_0.jpg)

<center>Figure 18: Generating lemon images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center>

<--- Page Split --->

![](images/23_0.jpg)

<center>Figure 19: Generating apartment building images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center>

<--- Page Split --->

![](images/24_0.jpg)

<center>Figure 20: Generating dining table images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center>

<--- Page Split --->

![](images/25_0.jpg)

<center>Figure 21: Generating balloon images. The category is from the Imagenet ILSVRC2012 1000 object categories. </center>

<--- Page Split --->

## ACKNOWLEDGEMENT

We thank Hansheng Jiang for her work on this project as a summer visiting student. We thank Tian Han for sharing the code for learning the generator network, and for helpful discussions. The work is supported by NSF DMS 1310391, DARPA SIMPLEX N66001-15-C-4035, ONR MURI N00014-16-1-2007, and DARPA ARO W911NF-16-1-0579.

<--- Page Split --->
reject
Reject
4.333333
ICLR_2018_paper_0009
iclr
2,018
# CLASSIFICATION AND DISEASE LOCALIZATION IN HISTOPATHOLOGY USING ONLY GLOBAL LABELS: A WEAKLY-SUPERVISED APPROACH

Anonymous authors Paper under double-blind review

## ABSTRACT

Analysis of histopathology slides is a critical step for many diagnoses, in particular in oncology where it defines the gold standard. In the case of digital histopathological analysis, highly trained pathologists must review vast whole-slide images of extreme digital resolution (\(100,000^{2}\) pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions. The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the cost of generating ground-truth localized annotations for training interpretable classification and segmentation models. We propose a method for disease localization in the context of weakly supervised learning, where only image-level labels are available during training. Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge. We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, and learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.

## 1 INTRODUCTION

Histopathological image analysis (HIA) is a critical element of diagnosis in many areas of medicine, and especially in oncology, where it defines the gold standard. Recent works have sought to leverage modern developments in machine learning (ML) to aid pathologists in disease detection tasks, but the majority of these techniques require localized annotation masks as training data. These annotations are even more costly to obtain than the original diagnosis, as pathologists must spend time assembling pixel-by-pixel segmentation maps of diseased tissue at extreme resolution; thus HIA datasets with annotations are very limited in size. Additionally, such localized annotations may not be available when facing new problems in HIA, such as new disease subtype classification, prognosis estimation, or drug response prediction. Thus, the critical question for HIA is: can one design a learning architecture which achieves accurate classification with no additional localized annotation? A successful technique would be able to train algorithms to assist pathologists during analysis, and could also be used to identify previously unknown structures and regions of interest.

Indeed, while histopathology is the gold standard diagnostic in oncology, it is extremely costly, requiring many hours of focus from pathologists to make a single diagnosis (Litjens et al., 2016; Weaver, 2010). Additionally, as correct diagnosis for certain diseases requires pathologists to identify a few cells out of millions, these tasks are akin to "finding a needle in a haystack." Hard numbers on diagnostic error rates in histopathology are difficult to obtain, being dependent upon the disease and tissue in question as well as on self-reporting of diagnostic errors by pathologists. However, as reported in the review of Santana & Ferreira (2017), false negatives in cancer diagnosis can lead not only to catastrophic consequences for the patient, but also to serious financial risk to the pathologist.
Any tool which can aid pathologists to focus their attention and effort on the most suspect regions can help reduce false negatives and improve patient outcomes through more accurate diagnoses (Djuric et al., 2017). Medical researchers have looked to computer-aided diagnosis for decades, but the lack of computational resources and data has prevented widespread implementation <--- Page Split ---> and usage of such tools (Gurcan et al., 2009). Since the advent of automated digital WSI capture in the 1990s, researchers have sought approaches for easing the pathologist's workload and improving patient outcomes through image processing algorithms (Gurcan et al., 2009; Litjens et al., 2017). Rather than predicting final diagnosis, many of these procedures focused instead on segmentation, either for cell-counting or for the detection of suspect regions in the WSI. Historical methods have focused on the use of hand-crafted texture or morphological features (Demir & Yener, 2005) used in conjunction with unsupervised techniques such as K-means clustering, or other dimensionality reduction techniques, prior to classification via k-Nearest Neighbors or a support vector machine.

Over the past decade, fruitful developments in deep learning (LeCun et al., 2015) have led to an explosion of research into the automation of image processing tasks. While the application of such advanced ML techniques to image tasks has been successful for many consumer applications, adoption within the field of medical imaging has been more gradual. However, these techniques demonstrate remarkable promise in the field of HIA. Specifically, in digital pathology with whole-slide imaging (WSI) (Yagi & Gilbertson, 2005; Snead et al., 2016), highly trained and skilled pathologists review digitally captured microscopy images from prepared and stained tissue samples in order to make diagnoses. Digital WSI are massive datasets, consisting of images captured at multiple zoom levels. At the greatest magnification levels, a WSI may have a digital resolution upwards of 100 thousand pixels in both dimensions. However, since localized annotations are very difficult to obtain, datasets may only contain WSI-level diagnosis labels, falling into the category of weakly-supervised learning.

The use of deep convolutional neural networks (DCNNs) was first proposed for HIA in Ciresan et al. (2013), where the authors were able to train a model for mitosis detection in H&E stained images. A similar technique was applied to WSI for the detection of invasive ductal carcinoma in Cruz-Roa et al. (2014). These approaches demonstrated the usefulness of learned features as an effective replacement for hand-crafted image features. It is possible to train deep architectures from scratch for the classification of tile images (Wang et al., 2016; Hou et al., 2016). However, training such DCNN architectures can be extremely resource intensive. For this reason, many recent approaches applying DCNNs to HIA make use of large pre-trained networks to act as rich feature extractors for tiles (Källén et al., 2016; Kim et al., 2016; Litjens et al., 2016; Xu et al., 2017; Song et al., 2017). Such approaches have found success, as the aggregation of rich representations from pre-trained DCNNs has proven to be quite effective even without from-scratch training on WSI tiles.
In this paper, we propose CHOWDER<sup>1</sup>, an approach for the interpretable prediction of general localized diseases in WSI with only weak, whole-image disease labels and without any additional expert-produced localized annotations, i.e. per-pixel segmentation maps, of diseased areas within the WSI. To accomplish this, we adapt an existing architecture from the field of multiple instance learning and object region detection (Durand et al., 2016) to WSI diagnosis prediction. By modifying the pre-trained DCNN model (He et al., 2016), introducing an additional set of fully-connected layers for context-aware classification from tile instances, developing a random tile sampling scheme for efficient training over massive WSI, and enforcing a strict set of regularizations, we are able to demonstrate performance equivalent to the best human pathologists (Bejnordi et al., 2017). Notably, while the approach we propose makes use of a pre-trained DCNN as a feature extractor, the entire procedure is a true end-to-end classification technique, and therefore the transferred pre-trained layers can be fine-tuned to the context of H&E WSI. We demonstrate, using only whole-slide labels, performance comparable to top-10 ranked methods trained with strong, pixel-level labels on the Camelyon-16 challenge dataset, while also producing disease segmentations that closely match the ground-truth annotations. We also present results for diagnosis prediction on WSI obtained from The Cancer Genome Atlas (TCGA), where strong annotations are not available and diseases may not be strongly localized within the tissue sample.

## 2 LEARNING WITHOUT LOCAL ANNOTATIONS

While approaches using localized annotations have shown promise for HIA, they fail to address the cost associated with the acquisition of hand-labeled datasets, as in each case these methods require access to pixel-level labels. As shown with ImageNet (Deng et al., 2009), access to data drives innovation; however, for HIA, hand-labeled segmentation maps are costly to produce, often subject <--- Page Split ---> to missed diseased areas, and cannot scale to the size of datasets required for truly effective deep learning. Because of these considerations, HIA is uniquely suited to the weakly supervised learning (WSL) setting. Here, we define the WSL task for HIA to be the identification of suspect regions of WSI when the training data only contains image-wide labels of diagnoses made by expert pathologists.

Since WSI are often digitally processed in small patches, or tiles, the aggregation of these tiles into groups with a single label (e.g. "healthy", "cancer present") can be used within the framework of multiple instance learning (MIL) (Dietterich et al., 1997; Amores, 2013; Xu et al., 2014). In MIL for binary classification, one often makes the standard multi-instance (SMI) assumption: a bag is classified as positive iff at least one instance (here, a tile) in the bag is labelled positive. The goal is to take the bag-level labels and learn a set of instance-level rules for the classification of single instances. In the case of HIA, learning such rules provides the ability to infer localized regions of abnormal cells within the large-scale WSI. In the recent work of Hou et al. (2016) on WSI classification in the WSL setting, the authors propose an EM-based method to identify discriminative patches in high-resolution images automatically during patch-level CNN training.
They also introduced a decision-level fusion method for HIA, which is more robust than max-pooling and can be thought of as a count-based multiple instance (CMI) learning method with two-level learning. While this approach was shown to be effective in the case of glioma classification and achieves the best reported result, it only slightly outperforms the much simpler approaches presented in Hou et al. (2016), but at much greater computational cost.

In the case of natural images, the WELDON and WILDCAT techniques of Durand et al. (2016) and Durand et al. (2017), respectively, demonstrated state-of-the-art performance for object detection and localization in WSL with image-wide labels. In the case of WELDON, the authors propose an end-to-end trainable CNN model based on MIL learning with top instances (Li & Vasconcelos, 2015) as well as negative evidence, relaxing the SMI assumption. Specifically, in the case of semantic segmentation, Li & Vasconcelos (2015) argue that a target concept might not exist just at the subregion level, but that the proportion of positive and negative samples in a bag has a larger effect on the determination of label assignment. This argument also holds for the case of HIA, where pathologist diagnosis arises from a synthesis of observations across multiple resolution levels as well as the relative abundance of diseased cells. In Sec. 2.3, we will detail our proposed approach, which makes a number of improvements on the framework of Durand et al. (2016), adapting it to the context of large-scale WSI for HIA.

### 2.1 WSI PRE-PROCESSING

Tissue Detection. As seen in Fig. 1, large regions of a WSI may contain no tissue at all, and are therefore not useful for training and inference. To extract only tiles with content relevant to the task, we use the same approach as Wang et al. (2016), namely, Otsu's method (Otsu, 1979) applied to the hue and saturation channels of the image after transformation into the HSV color space, producing two masks which are then combined to produce the final tissue segmentation (a minimal code sketch of this masking step is given after Sec. 2.2 below). Subsequently, only tiles within the foreground segmentation are extracted for training and inference.

Color Normalization. According to Ciompi et al. (2017), stain normalization is an important step in HIA, since the result of the H&E staining procedure can vary greatly between any two slides. We utilize a simple histogram equalization algorithm consisting of left-shifting the RGB channels and subsequently rescaling them to \([0, 255]\), as proposed in Nikitenko et al. (2008). In this work, we place a particular emphasis on the tile aggregation method rather than color normalization, so we did not make use of more advanced color normalization algorithms, such as that of Khan et al. (2014).

Tiling. The tiling step is necessary in histopathology analysis: due to the large size of the WSI, it is computationally intractable to process the slide in its entirety. For example, on the highest resolution zoom level, denoted as scale 0, for a fixed grid of non-overlapping tiles, a WSI may possess more than 200,000 tiles of \(224 \times 224\) pixels. Because of the computational burden associated with processing the set of all possible tiles, we instead turn to uniform random sampling from the space of possible tiles. Additionally, due to the large-scale nature of WSI datasets, the computational burden associated with sampling potentially overlapping tiles from arbitrary locations is a prohibitive cost for batch construction during training.
<--- Page Split ---> Instead, we propose that all tiles from the non-overlapping grid should be processed and stored to disk prior to training. As the tissue structure does not exhibit any strong periodicity, we find that sampling tiles along a fixed grid without overlap provides a reasonably representative sampling while maximizing the total sampled area. Given a target scale \(\ell \in \{0,1,\ldots ,L\}\), we denote the number of possible tiles in the WSI indexed by \(i\in \{1,2,\ldots ,N\}\) as \(M_{i,\ell}^{\mathrm{T}}\). The number of tiles sampled for training or inference is denoted by \(M_{i,\ell}^{\mathrm{S}}\) and is chosen according to

\[M_{i,\ell}^{\mathrm{S}} = \min \left(M_{i,\ell}^{\mathrm{T}}, \max \left(M_{\min}^{\mathrm{T}}, \frac{1}{2} \cdot \bar{M}_{\ell}^{\mathrm{T}}\right)\right), \quad (1)\]

where \(\bar{M}_{\ell}^{\mathrm{T}} = \frac{1}{N}\sum_{i}M_{i,\ell}^{\mathrm{T}}\) is the empirical average of the number of tiles at scale \(\ell\) over the entire set of training data.

Feature Extraction. We make use of the ResNet-50 (He et al., 2016) architecture trained on the ImageNet natural image dataset. In empirical comparisons with VGG and Inception architectures, we found that the ResNet architecture provides features better suited to HIA. Additionally, the ResNet architecture is provided at a variety of depths (ResNet-101, ResNet-152). However, we found that ResNet-50 provides the best balance between the computational burden of forward inference and the richness of representation for HIA. In our approach, for every tile we use the values of the ResNet-50 pre-output layer, a set of \(P = 2048\) floating point values, as the feature vector for the tile. Since the fixed input resolution for ResNet-50 is \(224 \times 224\) pixels, we set the resolution for the tiles extracted from the WSI to the same pixel resolution at every scale \(\ell\).

### 2.2 BASELINE METHOD

Given a WSI, extracting tile-level features produces a bag of feature vectors which one attempts to use for classification against the known image-wide label. The dimension of these local descriptors is \(M^{\mathrm{S}} \times P\), where \(P\) is the number of features output from the pre-trained image DCNN and \(M^{\mathrm{S}}\) is the number of sampled tiles. Approaches such as Bag-of-visual-words (BoVW) or VLAD (Jégou et al., 2010) could be chosen as baseline aggregation methods to generate a single image-wide descriptor of size \(P \times 1\), but would require enormous computational power given the dimensionality of the input. Instead, we try two common approaches for the aggregation of local features, specifically MaxPool and MeanPool, and subsequently apply a classifier to the aggregated features. After applying these pooling methods over the axis of tile indices, one obtains a single feature descriptor for the whole image. Other pooling approaches have been used in the context of HIA, including Fisher vector encodings (Song et al., 2017) and \(p\)-norm pooling (Xu et al., 2017). However, as the reported effect of these aggregations is quite small, we do not consider these approaches when constructing our baseline. After aggregation, a classifier can be trained to produce the desired diagnosis labels given the global aggregated WSI descriptor. For our baseline method, we use a logistic regression for this final prediction layer of the model. We present a description of the baseline approach in Fig. 1.
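As a concrete illustration of the tissue detection step of Sec. 2.1, the following is a minimal sketch of Otsu masking on the hue and saturation channels using scikit-image. It assumes a thumbnail-sized RGB array as input; the exact mask combination rule used by Wang et al. (2016) may differ in detail.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu

def tissue_mask(slide_rgb):
    """Foreground (tissue) segmentation: Otsu's threshold applied to the
    hue and saturation channels of the HSV image, masks then combined."""
    hsv = rgb2hsv(slide_rgb)               # channel values in [0, 1]
    hue, sat = hsv[..., 0], hsv[..., 1]
    mask_h = hue > threshold_otsu(hue)     # stained tissue has stronger hue
    mask_s = sat > threshold_otsu(sat)     # and saturation than background
    return mask_h & mask_s
```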
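Likewise, the baseline method of Sec. 2.2 reduces to a few lines once tile features are extracted. This sketch is illustrative: `features` is a hypothetical list of per-slide tile descriptor matrices, and scikit-learn's logistic regression stands in for the final prediction layer described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def baseline_fit(features, labels, pooling="mean"):
    """features[i]: (M_S, P) bag of ResNet-50 tile descriptors for slide i."""
    pool = np.mean if pooling == "mean" else np.max
    # MeanPool / MaxPool over the tile axis: one P-dim descriptor per WSI
    X = np.stack([pool(f, axis=0) for f in features])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```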
### 2.3 CHOWDER METHOD

In experimentation, we observe that the baseline approach of the previous section works well for diffuse disease, as evidenced by the results in Table 1 for TCGA-Lung. Here, diffuse implies that the number of disease-containing tiles pertinent to the diagnosis label is roughly proportional to the number of tiles containing healthy tissue. However, if one applies the same approach to different WSI datasets, such as Camelyon-16, the performance significantly degrades. In the case of Camelyon-16, the diseased regions of most of the slides are highly localized, restricted to a very small area within the WSI. When presented with such imbalanced bags, simple aggregation into global slide descriptors will drown out the features of the disease-containing tiles. Instead, we propose an adaptation and improvement of the WELDON method (Durand et al., 2016) designed for histopathology image analysis. As in their approach, rather than creating a global <--- Page Split --->

![](images/4_0.jpg)

<center>Figure 1: Description of the BASELINE approach for WSI classification via aggregation of tile-level features into global slide descriptors. </center>

![](images/4_1.jpg)

<center>Figure 2: Description of the CHOWDER architecture (for \(R = 2\) ) for WSI classification via MLP operating on top positive and negative instances, shown for a single mini-batch sample. </center>

slide descriptor by aggregating all tile features, a MIL approach is used that combines both top instances and negative evidence. A visual description of the approach is given in Fig. 2.

Feature Embedding. First, a set of one-dimensional embeddings of the \(P = 2048\) ResNet-50 features is calculated via \(J\) one-dimensional convolutional layers strided across the tile index axis. For tile \(t\) with features \(\mathbf{k}_{t}\), the embedding according to kernel \(j\) is calculated as \(e_{j,t} = \langle \mathbf{w}_{j},\mathbf{k}_{t}\rangle\). Notably, the kernels \(\mathbf{w}_{j}\) have dimensionality \(P\). This one-dimensional convolution is, in essence, a shortcut for enforcing a fully-connected layer with tied weights across tiles, i.e. the same embedding for every tile (Durand et al., 2016). In our experiments, we found that the use of a single embedding, \(J = 1\), is an appropriate choice for WSI datasets when the number of available slides is small (\(< 1000\)). In this case, choosing \(J > 1\) will decrease training error, but will increase generalization error. Avoiding overtraining and ensuring model generality remains a major challenge for the application of WSL to WSI datasets.

Top Instances and Negative Evidence. After feature embedding, we now have an \(M^{\mathrm{S}} \times 1\) vector of local tile-level (instance) descriptors. As in Durand et al. (2016), these instance descriptors are sorted by value. Of these sorted embedding values, only the top and bottom \(R\) entries are retained, resulting in a tensor of \(2R \times 1\) entries to use for diagnosis classification. This can be easily accomplished through a MinMax layer on the output of the one-dimensional convolution layer. The purpose of this layer is to retain not only the top instances, but also the negative evidence, that is, the regions which best support the absence of the class. During training, back-propagation runs only through the selected tiles, i.e. the positive and negative evidence. When applied to WSI, the MinMax layer serves as a powerful tile selection procedure.
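The embedding and MinMax selection described above can be summarized in a short PyTorch sketch. This is a minimal re-implementation for illustration, not the authors' code: the layer sizes follow the description in the text (a \(J = 1\) tied embedding, top-\(R\) and bottom-\(R\) instances, and the MLP classifier described next, with 200- and 100-unit sigmoid layers).

```python
import torch
import torch.nn as nn

class Chowder(nn.Module):
    """Sketch of the CHOWDER head over pre-extracted tile features."""
    def __init__(self, p=2048, r=5):
        super().__init__()
        self.r = r
        self.embed = nn.Conv1d(p, 1, kernel_size=1)  # J = 1 tied embedding
        self.mlp = nn.Sequential(
            nn.Linear(2 * r, 200), nn.Sigmoid(),
            nn.Linear(200, 100), nn.Sigmoid(),
            nn.Linear(100, 1),
        )

    def forward(self, tiles):                        # tiles: (batch, P, n_tiles)
        scores = self.embed(tiles).squeeze(1)        # (batch, n_tiles)
        top, _ = scores.topk(self.r, dim=1)          # top-R instances
        bottom, _ = scores.topk(self.r, dim=1, largest=False)  # negative evidence
        return self.mlp(torch.cat([top, bottom], dim=1)).squeeze(1)  # logit
```

Training such a head would minimize binary cross-entropy on the output logit, e.g. with `torch.nn.BCEWithLogitsLoss`, so that gradients flow only through the \(2R\) selected tiles.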
Multi-layer Perceptron (MLP) Classifier. In the WELDON architecture, the last layer consists of a sum applied over the \(2R \times 1\) output of the MinMax layer. However, we find that this approach can be improved for WSI classification. We investigate the possibility of richer interactions between the top and bottom instances by instead using an MLP as the final classifier. In our implementation of CHOWDER, we use an MLP with two fully connected layers of 200 and 100 neurons with sigmoid activations. <--- Page Split --->

## 3 EXPERIMENTAL RESULTS

### 3.1 TRAINING DETAILS

First, for pre-processing, we fix a single tile scale for all methods and datasets. We chose a fixed zoom level of \(0.5 \mu \mathrm{m / pixel}\), which corresponds to \(\ell = 0\) for slides scanned at 20x magnification, or \(\ell = 1\) for slides scanned at 40x magnification. Next, since WSI datasets often contain only a few hundred images, far from the millions of images in the ImageNet dataset, strong regularization is required to prevent over-fitting. We applied \(\ell_{2}\)-regularization of 0.5 on the convolutional feature embedding layer and dropout on the MLP with a rate of 0.5. However, these values may not be globally optimal, as we did not apply any hyper-parameter optimization to tune them. To optimize the model parameters, we use Adam (Kingma & Ba, 2014) to minimize the binary cross-entropy loss over 30 epochs with a mini-batch size of 10 and a learning rate of 0.001.

To reduce variance and prevent over-fitting, we trained an ensemble of \(E\) CHOWDER networks which differ only in their initial weights. The average of the predictions made by these \(E\) networks establishes the final prediction. Although we set \(E = 10\) for the results presented in Table 1, we used a larger ensemble of \(E = 50\) with \(R = 5\) to obtain the best possible model and compare our method to those presented in Table 2. We also use an ensemble of \(E = 10\) when reporting the results for WELDON. As the training of one epoch requires about 30 seconds on our available hardware, the total training time for the ensemble was just over twelve hours. While the ResNet-50 features were extracted using a GPU for efficient feed-forward calculations, the CHOWDER network is trained on CPU in order to take advantage of larger system RAM sizes, compared to on-board GPU RAM. This allows us to store all the training tiles in memory, providing faster training than on a GPU due to reduced transfer overhead.

### 3.2 TCGA

The public Cancer Genome Atlas (TCGA) provides approximately 11,000 tissue slide images of cancers of various organs<sup>2</sup>. For our first experiment, we selected 707 lung cancer WSIs (TCGA-Lung), which were downloaded in March 2017. Subsequently, a set of new lung slides has been added to TCGA, increasing the count of lung slides to 1,009. Along with the slides themselves, TCGA also provides labels representing the type of cancer present in each WSI. However, no local segmentation annotations of cancerous tissue regions are provided. The pre-processing step extracts 1,411,043 tiles and their corresponding ResNet-50 representations. The task of these experiments is then to predict which type of cancer is contained in each WSI: adenocarcinoma or squamous cell carcinoma. We evaluate the quality of the classification according to the area under the curve (AUC) of the receiver operating characteristic (ROC) curve generated from the raw output predictions.
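Given the trained ensemble, the final prediction and the evaluation metric of this section amount to a few lines. In this sketch, `preds` is a hypothetical array of raw outputs from the \(E\) independently initialized networks on the test slides.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_auc(preds, labels):
    """preds: (E, n_slides) raw outputs of the E networks; labels: (n_slides,)."""
    avg = preds.mean(axis=0)           # average the E network predictions
    return roc_auc_score(labels, avg)  # AUC of the ROC on raw outputs
```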
As expected in the case of diffuse disease, the advantage provided by CHOWDER over the MeanPool baseline is slight, as evidenced in Table 1. Additionally, as the full aggregation techniques work quite well in this setting, the value of \(R\) does not seem to have a strong effect on the performance of CHOWDER as it increases to \(R = 100\). In this setting of highly homogeneous tissue content, we can expect that global aggregate descriptors are able to effectively separate the two classes of carcinoma.

### 3.3 CAMELYON-16

For our second experiment, we use the Camelyon-16 challenge dataset<sup>3</sup>, which consists of 400 WSIs taken from sentinel lymph nodes, which are either healthy or exhibit metastases of some form. In addition to the WSIs themselves and their labels (healthy, contains metastases), a segmentation mask is provided for each WSI, representing an expert analysis of the location of metastases within the WSI. Human labeling of sentinel lymph node slides is known to be quite tedious, as noted in Litjens et al. (2016); Weaver (2010).

Teams participating in the challenge had access to, and utilized, the ground-truth masks when training their diagnosis prediction and tumor localization models. For our approach, we set aside the masks of <--- Page Split --->
In the case of WELDON, the final MLP is not used and instead a summing is applied to the MinMax layer. The value of \(R\) retains the same meaning in both cases: the number of both high and low scoring tiles to pass on to the classification layers. We test a range of values \(R\) for both WELDON and CHOWDER. We find that over all values of \(R\) , CHOWDER provides a significant advantage over both the baseline aggregation techniques as well as WELDON. We also note that the optimal performance can be obtained without using a large number of discriminative tiles, i.e. \(R = 5\) . We also present in Table 2 our performance as compared to the public Camelyon leader boards for \(E = 50\) . In this case, we are able to obtain an effective \(11^{\mathrm{th}}\) place rank, but without using any of the ground- truth disease segmentation maps. This is a remarkable result, as the winning approach of Wang et al. (2016) required tile- level disease labels derived from expert- provided annotations in order to train a full 27- layer GoogLeNet (Szegedy et al., 2015) architecture for tumor prediction. We also show the ROC curve for this result in Fig. 3. Finally, we note that CHOWDER's performance on this task roughly is equivalent to the best- performing human pathologist, an AUC of 0.884 as reported by Bejnordi et al. (2017), and better than the average human pathologist performance, an AUC of 0.810. Notably, this human- level performance is achieved without human assistance during training, beyond the diagnosis labels themselves. Localization Performance. Obtaining high performance in terms of whole slide classification is well and good, but it is not worth much without an interpretable result which can be used by pathologists to aid their diagnosis. For example, the MeanPool baseline aggregation approach provides no information during inference from which one could derive tumor locations in the WSI: all locality information is lost with the aggregation. With MaxPool, one at least retains some information via the tile locations which provide each maximum aggregate feature. For CHOWDER, we propose the use of the full set of outputs from the convolutional feature embedding layer. These are then sorted and thresholded according to value \(\tau\) such that tiles with an embedded value larger than \(\tau\) are classified as diseased and those with lower values are classified as healthy. We show an example of disease localization produced by CHOWDER in Fig. 4. Here, we see that CHOWDER is able to very accurately localize the tumorous region in the WSI even though it has only been trained using global slide- wide labels and without any local annotations. While some potential false detections occur outside of the tumor region, we see that the strongest <--- Page Split ---> Table 1: Classification (AUC) results for the Camelyon-16 (left) and TCGA-Lung (right) datasets for CHOWDER, WELDON, and the baseline approach. For Camelyon-16, we present two scores, one for the fixed competition test split of 130 WSIs, and one for a cross-validated average over 3 folds (CV) on the 270 training WSIs. For TCGA-Lung, we present scores as a cross-validated average over 5 folds. 
We also present FROC scores for CHOWDER in Table 2 as compared to the leader board results. Here, we obtain results comparable to the \(18^{\mathrm{th}}\) rank. However, this result is notable, as all other approaches made use of tile-level classification in order to train their segmentation techniques. We also show the FROC curve in Fig. 3.

<--- Page Split ---> Table 1: Classification (AUC) results for the Camelyon-16 (left) and TCGA-Lung (right) datasets for CHOWDER, WELDON, and the baseline approach. For Camelyon-16, we present two scores, one for the fixed competition test split of 130 WSIs, and one for a cross-validated average over 3 folds (CV) on the 270 training WSIs. For TCGA-Lung, we present scores as a cross-validated average over 5 folds.

<table><tr><td rowspan="2">Method</td><td colspan="2">AUC</td></tr><tr><td>CV</td><td>Competition</td></tr><tr><td>BASELINE</td><td></td><td></td></tr><tr><td>MaxPool</td><td>0.749</td><td>0.655</td></tr><tr><td>MeanPool</td><td>0.802</td><td>0.530</td></tr><tr><td>WELDON</td><td></td><td></td></tr><tr><td>R = 1</td><td>0.782</td><td>0.765</td></tr><tr><td>R = 10</td><td>0.832</td><td>0.670</td></tr><tr><td>R = 100</td><td>0.809</td><td>0.600</td></tr><tr><td>R = 300</td><td>0.761</td><td>0.573</td></tr><tr><td>CHOWDER</td><td></td><td></td></tr><tr><td>R = 1</td><td>0.809</td><td>0.821</td></tr><tr><td>R = 5</td><td>0.903</td><td>0.858</td></tr><tr><td>R = 10</td><td>0.900</td><td>0.843</td></tr><tr><td>R = 100</td><td>0.870</td><td>0.775</td></tr><tr><td>R = 300</td><td>0.837</td><td>0.652</td></tr></table> <table><tr><td>Method</td><td>AUC</td></tr><tr><td>BASELINE</td><td></td></tr><tr><td>MaxPool</td><td>0.860</td></tr><tr><td>MeanPool</td><td>0.903</td></tr><tr><td>CHOWDER</td><td></td></tr><tr><td>R = 1</td><td>0.900</td></tr><tr><td>R = 10</td><td>0.915</td></tr><tr><td>R = 100</td><td>0.909</td></tr></table>

## 4 DISCUSSION

We have shown that using state-of-the-art techniques from MIL in computer vision, such as the top instance and negative evidence approach of Durand et al. (2016), one can construct an effective technique for diagnosis prediction and disease localization for WSI in histopathology without the need for expensive localized annotations produced by expert pathologists.

<table><tr><td>Rank</td><td>Team</td><td>AUC</td><td>Rank</td><td>Team</td><td>FROC</td></tr><tr><td>1</td><td>HMS &amp; MIT</td><td>0.9935</td><td>1</td><td>HMS &amp; MIT</td><td>0.8074</td></tr><tr><td>2</td><td>HMS-MGH</td><td>0.9763</td><td>2</td><td>HMS-MGH</td><td>0.7600</td></tr><tr><td>3</td><td>HMS-MGH</td><td>0.9650</td><td>3</td><td>HMS-MGH</td><td>0.7289</td></tr><tr><td>4</td><td>CUHK</td><td>0.9415</td><td>4</td><td>CUHK</td><td>0.7030</td></tr><tr><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td></tr><tr><td>9</td><td>CUHK</td><td>0.9056</td><td>16</td><td>Osaka University</td><td>0.3467</td></tr><tr><td>10</td><td>DeepCare Inc.</td><td>0.8833</td><td>17</td><td>SIT</td><td>0.3385</td></tr><tr><td></td><td>CHOWDER (No Annotation)</td><td>0.8706</td><td></td><td>CHOWDER (No Annotation)</td><td>0.3103</td></tr><tr><td>11</td><td>Indep. DE</td><td>0.8654</td><td>18</td><td>Warwick-QU</td><td>0.3052</td></tr><tr><td>12</td><td>METU</td><td>0.8642</td><td>19</td><td>U. Munich (CAMP)</td><td>0.2733</td></tr><tr><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td></tr><tr><td>32</td><td>Sorbonne LIB</td><td>0.5561</td><td>32</td><td>Mines Paris Tech</td><td>0.0970</td></tr></table> Table 2: Final leader boards for the Camelyon-16 competition. All competition methods had access to the full set of strong annotations for training their models. In contrast, our proposed approach only utilizes image-wide diagnosis labels and obtains performance comparable to the top-10 methods.
<--- Page Split ---> ![](images/8_0.jpg) <center>Figure 3: Performance curves for the Camelyon-16 dataset for both classification and segmentation tasks for the different tested approaches. Left: ROC curves for the classification task. Right: FROC curves for the lesion detection task. </center> ![](images/8_1.jpg) <center>Figure 4: Visualization of metastasis detection on test image 27 of the Camelyon-16 dataset using our proposed approach. Left: Full WSI at zoom level 6 with ground truth annotation of metastases shown via black border. Tiles with positive feature embeddings are colored from white to red according to their magnitude, with red representing the largest magnitude. Right: Detail of metastases at zoom level 2 overlaid with classification output of our proposed approach. Here, the outputs of all tested tiles are shown and colored according to their value, from blue to white to red, with blue representing the most negative values, and red the most positive. Tiles without color were not included when randomly selecting tiles for inference. </center>

By removing this requirement, we hope to accelerate the production of computer-assistance tools for pathologists, to greatly improve the turn-around time in pathology labs, and to help surgeons and oncologists make rapid and effective patient care decisions. This also opens the way to tackle problems where expert pathologists may not know precisely where relevant tissue is located within the slide image, for instance in prognosis estimation or drug-response prediction tasks. The ability of our approach to discover associated regions of interest without prior localized annotations hence offers a novel discovery tool for the field of pathology. Moreover, using the suggested localization from CHOWDER, one may considerably speed up the process of obtaining ground-truth localized annotations. A number of improvements can be made to the CHOWDER method, especially in the production of disease localization maps. As presented, we use the raw values from the convolutional embedding layer, which means that the resolution of the produced disease localization map is fixed to that of the sampled tiles. However, one could also sample overlapping tiles and then use a data fusion technique to generate a final localization map. Additionally, as a variety of annotations may be available, CHOWDER could be extended to the case of heterogeneous annotation, e.g. some slides with expert-produced localized annotations and others with only whole-slide annotations. <--- Page Split --->

## REFERENCES

Jaume Amores. Multiple instance classification: Review, taxonomy and comparative study. Artificial Intelligence, 201:81-105, 2013. Babak Ehteshami Bejnordi, Mitko Veta, Paul Johannes van Diest, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. Journal of the American Medical Association, 318(22):2199-2210, 2017. Francesco Ciompi, Oscar Geessink, Babak Ehteshami Bejnordi, Gabriel Silva de Souza, Alexi Baidoshvili, Geert Litjens, Bram van Ginneken, Iris Nagtegaal, and Jeroen van der Laak. The importance of stain normalization in colorectal tissue classification with convolutional networks. arXiv Preprint [cs.CV]:1702.05931, 2017. Dan C. Ciresan, Alessandro Giusti, Luca M. Gambardella, and Jürgen Schmidhuber. Mitosis detection in breast cancer histology images with deep neural networks. In Int. Conf.
on Medical Image Computing and Computer-Assisted Intervention, pp. 411-418, 2013. Angel Cruz-Roa, Ajay Basavanhally, Fabio González, Hannah Gilmore, Michael Feldman, Shridar Ganesan, Natalie Shih, John Tomaszewski, and Anant Madabhushi. Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. SPIE Medical Imaging, 9041, 2014. Cigdem Demir and Bülent Yener. Automated cancer diagnosis based on histopathological images: A systematic survey. Rensselaer Polytechnic Institute, Tech. Rep., 2005. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 248-255, June 2009. Thomas G. Dietterich, Richard H. Lathrop, and Tomás Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31-71, 1997. Ugljesa Djuric, Gelareh Zadeh, Kenneth Aldape, and Phedias Diamandis. Precision histology: How deep learning is poised to revitalize histomorphology for personalized cancer care. npj Precision Oncology, 1(1):22, June 2017. Thibaut Durand, Nicolas Thome, and Matthieu Cord. WELDON: Weakly supervised learning of deep convolutional neural networks. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 4743-4752, June 2016. Thibaut Durand, Taylor Mordan, Nicolas Thome, and Matthieu Cord. WILDCAT: Weakly supervised learning of deep ConvNets for image classification, pointwise localization and segmentation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 642-651, July 2017. Metin N. Gurcan, Laura Boucheron, Ali Can, Anant Madabhushi, Nasir Rajpoot, and Bulent Yener. Histopathological image analysis: A review. IEEE Reviews in Biomedical Engineering, 2:147-171, 2009. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 770-778, June 2016. Le Hou, Dimitris Samaras, Tahsin M. Kurc, Yi Gao, James E. Davis, and Joel H. Saltz. Patch-based convolutional neural network for whole slide tissue image classification. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 2424-2433, June 2016. Hervé Jégou, Matthijs Douze, Cordelia Schmid, and Patrick Pérez. Aggregating local descriptors into a compact image representation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 3304-3311, June 2010. Hanna Källén, Jesper Molin, Anders Heyden, Claes Lundström, and Kalle Åström. Towards grading Gleason score using generically trained deep convolutional neural networks. In Proc. IEEE Int. Symp. on Biomedical Imaging, pp. 1163-1167, April 2016. Adnan Mujahid Khan, Nasir Rajpoot, Darren Treanor, and Derek Magee. A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution. IEEE Trans. Biomedical Engineering, 61(6):1729-1738, 2014. Edward Kim, Miguel Corte-Real, and Zubair Baloch. A deep semantic mobile application for thyroid cytopathology. Proc. SPIE, 9789, 2016. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv Preprint [cs.LG]:1412.6980, 2014. <--- Page Split ---> Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015. Weixin Li and Nuno Vasconcelos. Multiple instance learning for soft bags via top instances. In Proc. IEEE Conf.
on Computer Vision and Pattern Recognition, pp. 4277-4285, June 2015. Geert Litjens, Clara I. Sanchez, Nadya Timofeeva, Meyke Hermsen, Iris Nagtegaal, Iringo Kovacs, Christina Hulsbergen van de Kaa, Peter Bult, Bram van Ginneken, and Jeroen van der Laak. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Scientific Reports, 6:26286, 2016. Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A. W. M. van der Laak, Bram van Ginneken, and Clara I. Sanchez. A survey on deep learning in medical image analysis. arXiv Preprint [cs.CV]:1702.05747, 2017. Dennis Nikitenko, Michael A. Wirth, and Kataline Trudel. Applicability of white-balancing algorithms to restoring faded colour slides: An empirical evaluation. Journal of Multimedia, 3(5):9-18, 2008. N. Otsu. A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1):62-66, 1979. Monique F. Santana and Luiz Carlos L. Ferreira. Diagnostic errors in surgical pathology. Jornal Brasileiro de Patologia e Medicina Laboratorial, 53(2):124-129, 2017. David R J Snead, Yee-Wah Tsang, Aisha Meskiri, Peter K Kimani, Richard Crossman, Nasir M Rajpoot, Elaine Blessing, Klaus Chen, Kishore Gopalakrishnan, Paul Matthews, Navid Momtahan, Sarah Read-Jones, Shatrughan Sah, Emma Simmons, Bidisa Sinha, Sari Suortamo, Yen Yeo, Hesham El Daly, and Ian A Cree. Validation of digital pathology imaging for primary histopathological diagnosis. Histopathology, 68(7):1063-1072, 2016. Yang Song, Ju Jia Zou, Hang Chang, and Weidong Cai. Adapting Fisher vectors for histopathology image classification. In Proc. IEEE Int. Symp. on Biomedical Imaging, pp. 600-603, April 2017. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1-9, June 2015. Dayong Wang, Aditya Khosla, Rishab Gargeya, Humayun Irshad, and Andrew H. Beck. Deep learning for identifying metastatic breast cancer. arXiv Preprint [q-bio.QM]:1606.05718, 2016. Donald L. Weaver. Pathology evaluation of sentinel lymph nodes in breast cancer: Protocol recommendations and rationale. Modern Pathology, 23:S26-S32, 2010. Yan Xu, Tao Mo, Qiwei Feng, Peilin Zhong, Maode Lai, and Eric I-Chao Chang. Deep learning of feature representation with multiple instance learning for medical image analysis. In Proc. IEEE Conf. on Acoustics, Speech and Signal Processing, pp. 1626-1630, May 2014. Yan Xu, Zhipeng Jia, Liang-Bo Wang, Yuqing Ai, Fang Zhang, Maode Lai, and Eric I-Chao Chang. Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features. BMC Bioinformatics, 18:281, 2017. Yukako Yagi and John R. Gilbertson. Digital imaging in pathology: The case for standardization. Journal of Telemedicine and Telecare, 11(3):109-116, 2005. <--- Page Split --->

## A FURTHER RESULTS

![](images/11_0.jpg) <center>Figure 5: Visualization of metastasis detection on test image 2 of the Camelyon-16 dataset using our proposed approach. Left: Full WSI at zoom level 6 with ground truth annotation of metastases shown via black border. Tiles with positive feature embeddings are colored from white to red according to their magnitude, with red representing the largest magnitude.
Right: Detail of metastases at zoom level 2 overlaid with classification output of our proposed approach. Here, the outputs of all tested tiles are shown and colored according to their value, from blue to white to red, with blue representing the most negative values, and red the most positive. Tiles without color were not included when randomly selecting tiles for inference. </center> <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 6: Visualization of metastasis detection on test image 92 of the Camelyon-16 dataset using our proposed approach. Left: Full WSI at zoom level 6 with ground truth annotation of metastases shown via black border. Tiles with positive feature embeddings are colored from white to red according to their magnitude, with red representing the largest magnitude. Right: Detail of metastases at zoom level 2 overlaid with classification output of our proposed approach. Here, the outputs of all tested tiles are shown and colored according to their value, from blue to white to red, with blue representing the most negative values, and red the most positive. Tiles without color were not included when randomly selecting tiles for inference. </center> <--- Page Split --->
## ABSTRACT

Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard. In the case of digital histopathological analysis, highly trained pathologists must review vast whole-slide images of extreme digital resolution (\(100,000^{2}\) pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions. The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the cost of generating ground-truth localized annotations for training interpretable classification and segmentation models. We propose a method for disease localization in the context of weakly supervised learning, where only image-level labels are available during training. Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge. We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.

## 1 INTRODUCTION

Histopathological image analysis (HIA) is a critical element of diagnosis in many areas of medicine, and especially in oncology, where it defines the gold standard metric. Recent works have sought to leverage modern developments in machine learning (ML) to aid pathologists in disease detection tasks, but the majority of these techniques require localized annotation masks as training data. These annotations are even more costly to obtain than the original diagnosis, as pathologists must spend time to assemble pixel-by-pixel segmentation maps of diseased tissue at extreme resolution; thus HIA datasets with annotations are very limited in size. Additionally, such localized annotations may not be available when facing new problems in HIA, such as new disease subtype classification, prognosis estimation, or drug response prediction. Thus, the critical question for HIA is: can one design a learning architecture which achieves accurate classification with no additional localized annotation? A successful technique would be able to train algorithms to assist pathologists during analysis, and could also be used to identify previously unknown structures and regions of interest. Indeed, while histopathology is the gold standard diagnostic in oncology, it is extremely costly, requiring many hours of focus from pathologists to make a single diagnosis (Litjens et al., 2016; Weaver, 2010). Additionally, as correct diagnosis for certain diseases requires pathologists to identify a few cells out of millions, these tasks are akin to "finding a needle in a haystack." Hard numbers on diagnostic error rates in histopathology are difficult to obtain, being dependent upon the disease and tissue in question as well as self-reporting by pathologists of diagnostic errors. However, as reported in the review of Santana & Ferreira (2017), false negatives in cancer diagnosis can lead not only to catastrophic consequences for the patient, but also to considerable financial risk to the pathologist.
Any tool which can help pathologists focus their attention and effort on the most suspect regions can help reduce false negatives and improve patient outcomes through more accurate diagnoses (Djuric et al., 2017). Medical researchers have looked to computer-aided diagnosis for decades, but the lack of computational resources and data has prevented the wide-spread implementation and usage of such tools (Gurcan et al., 2009). <--- Page Split ---> Since the advent of automated digital WSI capture in the 1990s, researchers have sought approaches for easing the pathologist's workload and improving patient outcomes through image processing algorithms (Gurcan et al., 2009; Litjens et al., 2017). Rather than predicting final diagnosis, many of these procedures focused instead on segmentation, either for cell-counting, or for the detection of suspect regions in the WSI. Historical methods have focused on the use of hand-crafted texture or morphological features (Demir & Yener, 2005) used in conjunction with unsupervised techniques such as K-means clustering or other dimensionality reduction techniques prior to classification via k-Nearest Neighbor or a support vector machine. Over the past decade, fruitful developments in deep learning (LeCun et al., 2015) have led to an explosion of research into the automation of image processing tasks. While the application of such advanced ML techniques to image tasks has been successful for many consumer applications, the adoption of such approaches within the field of medical imaging has been more gradual. However, these techniques demonstrate remarkable promise in the field of HIA. Specifically, in digital pathology with whole-slide imaging (WSI) (Yagi & Gilbertson, 2005; Snead et al., 2016), highly trained and skilled pathologists review digitally captured microscopy images from prepared and stained tissue samples in order to make diagnoses. Digital WSI are massive datasets, consisting of images captured at multiple zoom levels. At the greatest magnification levels, a WSI may have a digital resolution upwards of 100 thousand pixels in both dimensions. However, since localized annotations are very difficult to obtain, datasets may only contain WSI-level diagnosis labels, falling into the category of weakly-supervised learning. The use of DCNNs was first proposed for HIA in Ciresan et al. (2013), where the authors were able to train a model for mitosis detection in H&E stained images. A similar technique was applied to WSI for the detection of invasive ductal carcinoma in Cruz-Roa et al. (2014). These approaches demonstrated the usefulness of learned features as an effective replacement for hand-crafted image features. It is possible to train deep architectures from scratch for the classification of tile images (Wang et al., 2016; Hou et al., 2016). However, training such DCNN architectures can be extremely resource-intensive. For this reason, many recent approaches applying DCNNs to HIA make use of large pre-trained networks to act as rich feature extractors for tiles (Källén et al., 2016; Kim et al., 2016; Litjens et al., 2016; Xu et al., 2017; Song et al., 2017). Such approaches have found success, as the aggregation of rich representations from pre-trained DCNNs has proven to be quite effective, even without from-scratch training on WSI tiles.
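This transfer-learning recipe can be illustrated with a few lines of code (our own sketch, not code from any of the cited works; we use the ResNet-50 backbone that this paper itself adopts in Sec. 2.1, and the ImageNet weights are downloaded on first use):

```python
import numpy as np
import tensorflow as tf

# ImageNet-pretrained backbone, truncated before its classification head;
# global average pooling yields one 2048-dimensional vector per tile.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")

def embed_tiles(tiles):
    """tiles: (M, 224, 224, 3) uint8 RGB tile images -> (M, 2048) features."""
    x = tf.keras.applications.resnet50.preprocess_input(
        tiles.astype(np.float32))
    return backbone.predict(x, verbose=0)

# Toy batch of 4 random "tiles".
features = embed_tiles(
    np.random.randint(0, 256, size=(4, 224, 224, 3), dtype=np.uint8))
print(features.shape)  # (4, 2048)
```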
In this paper, we propose CHOWDER<sup>1</sup>, an approach for the interpretable prediction of general localized diseases in WSI with only weak, whole-image disease labels and without any additional expert-produced localized annotations, i.e. per-pixel segmentation maps, of diseased areas within the WSI. To accomplish this, we adapt an existing architecture from the field of multiple instance learning and object region detection (Durand et al., 2016) to WSI diagnosis prediction. By modifying the pre-trained DCNN model (He et al., 2016), introducing an additional set of fully-connected layers for context-aware classification from tile instances, developing a random tile sampling scheme for efficient training over massive WSI, and enforcing a strict set of regularizations, we are able to demonstrate performance equivalent to the best human pathologists (Bejnordi et al., 2017). Notably, while the approach we propose makes use of a pre-trained DCNN as a feature extractor, the entire procedure is a true end-to-end classification technique, and therefore the transferred pre-trained layers can be fine-tuned to the context of H&E WSI. We demonstrate, using only whole-slide labels, performance comparable to the top-10 ranked methods trained with strong, pixel-level labels on the Camelyon-16 challenge dataset, while also producing disease segmentations that closely match ground-truth annotations. We also present results for diagnosis prediction on WSI obtained from The Cancer Genome Atlas (TCGA), where strong annotations are not available and diseases may not be strongly localized within the tissue sample.

## 2 LEARNING WITHOUT LOCAL ANNOTATIONS

While approaches using localized annotations have shown promise for HIA, they fail to address the cost associated with the acquisition of hand-labeled datasets, as in each case these methods require access to pixel-level labels. As shown with ImageNet (Deng et al., 2009), access to data drives innovation; however, for HIA, hand-labeled segmentation maps are costly to produce, often subject <--- Page Split ---> to missed diseased areas, and cannot scale to the size of datasets required for truly effective deep learning. Because of these considerations, HIA is uniquely suited to the weakly supervised learning (WSL) setting. Here, we define the WSL task for HIA to be the identification of suspect regions of WSI when the training data only contains image-wide labels of diagnoses made by expert pathologists. Since WSI are often digitally processed in small patches, or tiles, the aggregation of these tiles into groups with a single label (e.g. "healthy", "cancer present") can be used within the framework of multiple instance learning (MIL) (Dietterich et al., 1997; Amores, 2013; Xu et al., 2014). In MIL for binary classification, one often makes the standard multi-instance (SMI) assumption: a bag is classified as positive iff at least one instance (here, a tile) in the bag is labelled positive. The goal is to take the bag-level labels and learn a set of instance-level rules for the classification of single instances. In the case of HIA, learning such rules provides the ability to infer localized regions of abnormal cells within the large-scale WSI. In the recent work of Hou et al. (2016) for WSI classification in the WSL setting, the authors propose an EM-based method to identify discriminative patches in high resolution images automatically during patch-level CNN training.
They also introduced a decision-level fusion method for HIA, which is more robust than max-pooling and can be thought of as a Count-based Multiple Instance (CMI) learning method with two-level learning. While this approach was shown to be effective in the case of glioma classification, where it obtains the best result, it only slightly outperforms the much simpler approaches presented in the same work (Hou et al., 2016), and at much greater computational cost. In the case of natural images, the WELDON and WILDCAT techniques of Durand et al. (2016) and Durand et al. (2017), respectively, demonstrated state-of-the-art performance for object detection and localization for WSL with image-wide labels. In the case of WELDON, the authors propose an end-to-end trainable CNN model based on MIL learning with top instances (Li & Vasconcelos, 2015) as well as negative evidence, relaxing the SMI assumption. Specifically, in the case of semantic segmentation, Li & Vasconcelos (2015) argue that a target concept might not exist just at the subregion level, but that the proportion of positive and negative samples in a bag has a larger effect in the determination of label assignment. This argument also holds for the case of HIA, where pathologist diagnosis arises from a synthesis of observations across multiple resolution levels as well as the relative abundance of diseased cells. In Sec. 2.3, we will detail our proposed approach, which makes a number of improvements on the framework of Durand et al. (2016), adapting it to the context of large-scale WSI for HIA.

### 2.1 WSI PRE-PROCESSING

Tissue Detection. As seen in Fig. 1, large regions of a WSI may contain no tissue at all, and are therefore not useful for training and inference. To extract only tiles with content relevant to the task, we use the same approach as Wang et al. (2016), namely, Otsu's method (Otsu, 1979) applied to the hue and saturation channels of the image after transformation into the HSV color space to produce two masks, which are then combined to produce the final tissue segmentation (a minimal sketch of this masking step is given below). Subsequently, only tiles within the foreground segmentation are extracted for training and inference.

Color Normalization. According to Ciompi et al. (2017), stain normalization is an important step in HIA, since the result of the H&E staining procedure can vary greatly between any two slides. We utilize a simple histogram equalization algorithm consisting of left-shifting RGB channels and subsequently rescaling them to \([0, 255]\), as proposed in Nikitenko et al. (2008). In this work, we place a particular emphasis on the tile aggregation method rather than color normalization, so we did not make use of more advanced color normalization algorithms, such as that of Khan et al. (2014).
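The tissue-detection step referenced above can be sketched as follows (our own illustration using scikit-image; combining the two channel masks by intersection is our assumption about the approach of Wang et al. (2016)):

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu

def tissue_mask(slide_rgb):
    """Foreground (tissue) segmentation via Otsu's method on the hue and
    saturation channels of the HSV-transformed slide thumbnail.

    slide_rgb: (H, W, 3) RGB image (uint8 or float).
    """
    hsv = rgb2hsv(slide_rgb)
    hue, sat = hsv[..., 0], hsv[..., 1]
    mask_hue = hue > threshold_otsu(hue)
    mask_sat = sat > threshold_otsu(sat)
    # Combine the two masks into the final tissue segmentation.
    return mask_hue & mask_sat

mask = tissue_mask(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
print(mask.shape, mask.dtype)  # (64, 64) bool
```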
Tiling. The tiling step is necessary in histopathology analysis. Indeed, due to the large size of the WSI, it is computationally intractable to process the slide in its entirety. For example, at the highest resolution zoom level, denoted as scale 0, for a fixed grid of non-overlapping tiles, a WSI may possess more than 200,000 tiles of \(224 \times 224\) pixels. Because of the computational burden associated with processing the set of all possible tiles, we instead turn to uniform random sampling from the space of possible tiles. Additionally, due to the large-scale nature of WSI datasets, the computational burden associated with sampling potentially overlapping tiles from arbitrary locations is a prohibitive cost for batch construction during training. <--- Page Split ---> Instead, we propose that all tiles from the non-overlapping grid should be processed and stored to disk prior to training. As the tissue structure does not exhibit any strong periodicity, we find that sampling tiles along a fixed grid without overlap provides a reasonably representative sampling while maximizing the total sampled area. Given a target scale \(\ell \in \{0,1,\ldots ,L\}\), we denote the number of possible tiles in the WSI indexed by \(i\in \{1,2,\ldots ,N\}\) as \(M_{i,\ell}^{\mathrm{T}}\). The number of tiles sampled for training or inference is denoted by \(M_{i,\ell}^{\mathrm{S}}\) and is chosen according to \[M_{i,\ell}^{\mathrm{S}} = \min \left(M_{i,\ell}^{\mathrm{T}}, \max \left(M_{\min}^{\mathrm{T}}, \frac{1}{2} \cdot \bar{M}_{\ell}^{\mathrm{T}}\right)\right), \quad (1)\] where \(\bar{M}_{\ell}^{\mathrm{T}} = \frac{1}{N}\sum_{i}M_{i,\ell}^{\mathrm{T}}\) is the empirical average of the number of tiles at scale \(\ell\) over the entire set of training data.

Feature Extraction. We make use of the ResNet-50 (He et al., 2016) architecture trained on the ImageNet natural image dataset. In empirical comparisons with VGG and Inception architectures, we found that the ResNet architecture provides features better suited to HIA. Additionally, the ResNet architecture is provided at a variety of depths (ResNet-101, ResNet-152). However, we found that ResNet-50 provides the best balance between the computational burden of forward inference and richness of representation for HIA. In our approach, for every tile we use the values of the ResNet-50 pre-output layer, a set of \(P = 2048\) floating point values, as the feature vector for the tile. Since the fixed input resolution for ResNet-50 is \(224 \times 224\) pixels, we set the resolution for the tiles extracted from the WSI to the same pixel resolution at every scale \(\ell\).

### 2.2 BASELINE METHOD

Given a WSI, extracting tile-level features produces a bag of feature vectors which one attempts to use for classification against the known image-wide label. The dimension of these local descriptors is \(M^{\mathrm{S}} \times P\), where \(P\) is the number of features output from the pre-trained image DCNN and \(M^{\mathrm{S}}\) is the number of sampled tiles. Approaches such as Bag-of-visual-words (BoVW) or VLAD (Jégou et al., 2010) could be chosen as a baseline aggregation method to generate a single image-wide descriptor of size \(P \times 1\), but would require significant computational power given the dimensionality of the input. Instead, we try two common approaches for the aggregation of local features, specifically MaxPool and MeanPool, and subsequently apply a classifier on the aggregated features. After applying these pooling methods over the axis of tile indices, one obtains a single feature descriptor for the whole image. Other pooling approaches have been used in the context of HIA, including Fisher vector encodings (Song et al., 2017) and \(p\)-norm pooling (Xu et al., 2017). However, as the reported effect of these aggregations is quite small, we do not consider these approaches when constructing our baseline. After aggregation, a classifier can be trained to produce the desired diagnosis labels given the global WSI aggregated descriptor. For our baseline method, we use a logistic regression for this final prediction layer of the model. We present a description of the baseline approach in Fig. 1.
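A minimal sketch of this baseline follows (our own illustration with scikit-learn; synthetic data stands in for the real bags of ResNet-50 descriptors):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aggregate(tile_features, mode="mean"):
    """Collapse an (M, P) bag of tile descriptors into one (P,) slide descriptor."""
    return tile_features.max(axis=0) if mode == "max" else tile_features.mean(axis=0)

# Toy training set: 20 slides, each a bag of 50 tiles with P = 2048 features.
rng = np.random.default_rng(0)
bags = [rng.normal(size=(50, 2048)) for _ in range(20)]
labels = np.array([0, 1] * 10)

X = np.stack([aggregate(bag, mode="mean") for bag in bags])  # MeanPool baseline
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(X[:2]))
```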
### 2.3 CHOWDER METHOD

In experimentation, we observe that the baseline approach of the previous section works well for diffuse disease, as evidenced in the results of Table 1 for TCGA-Lung. Here, diffuse implies that the number of disease-containing tiles, pertinent to the diagnosis label, is roughly proportional to the number of tiles containing healthy tissue. However, if one applies the same approach to different WSI datasets, such as Camelyon-16, the performance significantly degrades. In the case of Camelyon-16, the diseased regions of most of the slides are highly localized, restricted to a very small area within the WSI. When presented with such imbalanced bags, simple aggregation approaches for global slide descriptors will drown out the features of the disease-containing tiles. Instead, we propose an adaptation and improvement of the WELDON method (Durand et al., 2016) designed for histopathology image analysis. As in their approach, rather than creating a global slide descriptor by aggregating all tile features, a MIL approach is used that combines both top instances and negative evidence. A visual description of the approach is given in Fig. 2. <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 1: Description of the BASELINE approach for WSI classification via aggregation of tile-level features into global slide descriptors. </center> ![](images/4_1.jpg) <center>Figure 2: Description of the CHOWDER architecture (for \(R = 2\)) for WSI classification via an MLP operating on the top positive and negative instances, shown for a single mini-batch sample. </center>

Feature Embedding. First, a set of one-dimensional embeddings for the \(P = 2048\) ResNet-50 features is calculated via \(J\) one-dimensional convolutional layers strided across the tile index axis. For tile \(t\) with features \(\mathbf{k}_{t}\), the embedding according to kernel \(j\) is calculated as \(e_{j,t} = \langle \mathbf{w}_{j},\mathbf{k}_{t}\rangle\). Notably, the kernels \(\mathbf{w}_{j}\) have dimensionality \(P\). This one-dimensional convolution is, in essence, a shortcut for enforcing a fully-connected layer with tied weights across tiles, i.e. the same embedding for every tile (Durand et al., 2016). In our experiments, we found that the use of a single embedding, \(J = 1\), is an appropriate choice for WSI datasets when the number of available slides is small \((< 1000)\). In this case, choosing \(J > 1\) will decrease training error, but will increase generalization error. Avoiding overtraining and ensuring model generality remains a major challenge for the application of WSL to WSI datasets.

Top Instances and Negative Evidence. After feature embedding, we now have an \(M^{\mathrm{S}} \times 1\) vector of local tile-level (instance) descriptors. As in (Durand et al., 2016), these instance descriptors are sorted by value. Of these sorted embedding values, only the top and bottom \(R\) entries are retained, resulting in a tensor of \(2R \times 1\) entries to use for diagnosis classification. This can be easily accomplished through a MinMax layer on the output of the one-dimensional convolution layer. The purpose of this layer is to retain not only the top instances but also the negative evidence, that is, the regions which best support the absence of the class. During training, back-propagation runs only through the selected tiles, i.e. the positive and negative evidence. When applied to WSI, the MinMax serves as a powerful tile selection procedure.
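The embedding and MinMax selection reduce to a few array operations; the following forward-pass sketch (our own naming, with \(J = 1\) as in the paper's experiments) makes the tile-selection mechanics explicit:

```python
import numpy as np

def chowder_minmax(tile_features, w, R=5):
    """Tied per-tile embedding followed by MinMax tile selection (forward pass).

    tile_features: (M, P) bag of ResNet-50 descriptors for one slide
    w: (P,) weights of the single one-dimensional convolution kernel (J = 1)
    Returns the 2R top- and bottom-scoring embedded values, which are the
    only entries passed on to the final classifier.
    """
    scores = tile_features @ w              # (M,) one embedded value per tile
    s = np.sort(scores)                     # ascending order
    return np.concatenate([s[-R:], s[:R]])  # top-R instances + bottom-R evidence

rng = np.random.default_rng(1)
feats, w = rng.normal(size=(200, 2048)), rng.normal(size=2048)
print(chowder_minmax(feats, w, R=2))        # 4 values feed the MLP classifier
```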
Multi-layer Perceptron (MLP) Classifier. In the WELDON architecture, the last layer consists of a sum applied over the \(2R \times 1\) output from the MinMax layer. However, we find that this approach can be improved for WSI classification. We investigate the possibility of richer interactions between the top and bottom instances by instead using an MLP as the final classifier. In our implementation of CHOWDER, we use an MLP with two fully connected layers of 200 and 100 neurons with sigmoid activations. <--- Page Split --->

## 3 EXPERIMENTAL RESULTS

### 3.1 TRAINING DETAILS

First, for pre-processing, we fix a single tile scale for all methods and datasets. We chose a fixed zoom level of \(0.5 \mu \mathrm{m / pixel}\), which corresponds to \(\ell = 0\) for slides scanned at \(20 \mathrm{x}\) magnification, or \(\ell = 1\) for slides scanned at \(40 \mathrm{x}\) magnification. Next, since WSI datasets often only contain a few hundred images, far from the millions of images in the ImageNet dataset, strong regularization is required to prevent over-fitting. We applied \(\ell_{2}\)-regularization of 0.5 on the convolutional feature embedding layer and dropout on the MLP with a rate of 0.5. However, these values may not be globally optimal, as we did not apply any hyper-parameter optimization to tune them. To optimize the model parameters, we use Adam (Kingma & Ba, 2014) to minimize the binary cross-entropy loss over 30 epochs with a mini-batch size of 10 and a learning rate of 0.001. To reduce variance and prevent over-fitting, we trained an ensemble of \(E\) CHOWDER networks which differ only in their initial weights. The average of the predictions made by these \(E\) networks establishes the final prediction (a compact sketch of the model and this training configuration is given at the end of Sec. 3.2). Although we set \(E = 10\) for the results presented in Table 1, we used a larger ensemble of \(E = 50\) with \(R = 5\) to obtain the best possible model and compare our method to those presented in Table 2. We also use an ensemble of \(E = 10\) when reporting the results for WELDON. As the training of one epoch requires about 30 seconds on our available hardware, the total training time for the ensemble was just over twelve hours. While the ResNet-50 features were extracted using a GPU for efficient feed-forward calculations, the CHOWDER network is trained on CPU in order to take advantage of larger system RAM sizes, compared to on-board GPU RAM. This allows us to store all the training tiles in memory, which provides faster training than on a GPU due to reduced transfer overhead.

### 3.2 TCGA

The public Cancer Genome Atlas (TCGA) provides approximately 11,000 tissue slide images of cancers of various organs<sup>2</sup>. For our first experiment, we selected 707 lung cancer WSIs (TCGA-Lung), which were downloaded in March 2017. Subsequently, a set of new lung slides has been added to TCGA, increasing the count of lung slides to 1,009. Along with the slides themselves, TCGA also provides labels representing the type of cancer present in each WSI. However, no local segmentation annotations of cancerous tissue regions are provided. The pre-processing step extracts 1,411,043 tiles and their corresponding representations from ResNet-50. The task of these experiments is then to predict which type of cancer is contained in each WSI: adenocarcinoma or squamous cell carcinoma. We evaluate the quality of the classification according to the area under the curve (AUC) of the receiver operating characteristic (ROC) curve generated using the raw output predictions.
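Putting the pieces together, the following Keras sketch assembles the CHOWDER architecture with the training configuration of Sec. 3.1 (our own reconstruction, not the authors' released code; the number of tiles per slide is fixed here for simplicity, and the exact placement of dropout is our assumption):

```python
import tensorflow as tf

M_TILES, P, R = 100, 2048, 5  # tiles per bag (fixed here), feature dim, MinMax R

def build_chowder():
    bags = tf.keras.Input(shape=(M_TILES, P))
    # One-dimensional convolution (J = 1): a fully-connected embedding layer
    # with weights tied across tiles, l2-regularized as in Sec. 3.1.
    e = tf.keras.layers.Conv1D(
        1, kernel_size=1, use_bias=False,
        kernel_regularizer=tf.keras.regularizers.l2(0.5))(bags)
    e = tf.keras.layers.Reshape((M_TILES,))(e)
    # MinMax layer: keep only the R lowest and R highest embedded values.
    minmax = tf.keras.layers.Lambda(
        lambda t: tf.concat([tf.sort(t, axis=-1)[:, :R],
                             tf.sort(t, axis=-1)[:, -R:]], axis=-1))(e)
    h = tf.keras.layers.Dense(200, activation="sigmoid")(minmax)
    h = tf.keras.layers.Dropout(0.5)(h)
    h = tf.keras.layers.Dense(100, activation="sigmoid")(h)
    h = tf.keras.layers.Dropout(0.5)(h)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(h)
    model = tf.keras.Model(bags, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="binary_crossentropy")
    return model

# An ensemble of E such models, differing only in initialization, is trained
# and their predictions are averaged to form the final prediction.
ensemble = [build_chowder() for _ in range(10)]  # E = 10
```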
reject
Reject
5.333333
ICLR_2018_paper_0102
iclr
2,018
# SCAN: LEARNING HIERARCHICAL COMPOSITIONAL VISUAL CONCEPTS

Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis, Alexander Lerchner DeepMind, London, UK {irinah, sonnerat, lmatthey, arkap, cpburgess, matko, botvinick, demishassabis, lerchner}@google.com

## ABSTRACT

The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner. Unlike state-of-the-art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts.

## 1 INTRODUCTION

State-of-the-art deep learning approaches to machine learning have achieved impressive results in many problem domains, including classification (He et al., 2016; Szegedy et al., 2015), density modelling (Gregor et al., 2015; Oord et al., 2016a;b), and reinforcement learning (Mnih et al., 2015; 2016; Jaderberg et al., 2017; Silver et al., 2016). They are still, however, far from possessing many traits characteristic of human intelligence. Such deep learning techniques tend to be overly data-hungry, often rely on significant human supervision and tend to overfit to the training data distribution (Lake et al., 2016; Garnelo et al., 2016). An important step towards bridging the gap between human and artificial intelligence is endowing algorithms with compositional concepts (Lake et al., 2016; Garnelo et al., 2016). Compositionality allows for the reuse of a finite set of primitives (addressing the data efficiency and human supervision issues) across many scenarios by recombining them to produce an exponentially large number of novel yet coherent and potentially useful concepts (addressing the overfitting problem). Compositionality is at the core of such human abilities as creativity, imagination and language-based communication. We propose that concepts are abstractions over a set of primitives. For example, consider a toy hierarchy of visual concepts shown in Fig. 1. Each node in this hierarchy is defined as a subset of visual primitives that make up the scene in the input image. These visual primitives might include factors like object identity, object colour, floor colour and wall colour. As one traverses the hierarchy from the subordinate, through basic, to superordinate levels of abstraction (Rosch, 1978) (i.e.
from the more specific to the more general concepts corresponding to the same visual scene), the number of concept-defining visual primitives decreases. Hence, each parent concept in such a hierarchy is an

<--- Page Split --->

![](images/1_0.jpg)

<center>Figure 1: Schematic of an implicit concept hierarchy built upon a subset of four visual primitives: object identity \((I)\), object colour \((O)\), floor colour \((F)\) and wall colour \((W)\) (other visual primitives necessary to generate the scene are ignored in this example). Concepts form an implicit hierarchy, where each parent is an abstraction over its children and over the original set of visual primitives (the values of the concept-defining sets of visual primitives are indicated by the bold capital letters). In order to generate an image that corresponds to a concept, one has to fill in values for the factors that got abstracted away (represented as "-"), e.g. by sampling from their respective priors. Given certain nodes in the concept hierarchy, one can traverse the other nodes using logical operations. See Sec. 3 for our formal definition of concepts.</center>

abstraction (i.e. a subset) over its children and over the original set of visual primitives. A more formal definition of concepts is provided in Sec. 3.

Intelligent agents are able to discover and learn abstract compositional concepts using little supervision (Baillargeon, 1987; Spelke, 1990; Baillargeon, 2004; Smith & Vul, 2013). Think of human word learning – we acquire the meaning of words through a combination of a continual stream of unsupervised visual data occasionally paired with a corresponding word label. This paper describes SCAN (Symbol-Concept Association Network, see Fig. 2A), a neural network model capable of learning grounded visual concepts in a largely unsupervised manner through fast symbol association. First, we use the \(\beta\)-VAE (Higgins et al., 2017a) to learn a set of independent representational primitives through unsupervised exposure to the visual data. This is equivalent to learning a disentangled (factorised and interpretable) representation of the independent ground truth "generative factors" of the data (Bengio et al., 2013). Next, we allow SCAN to discover meaningful abstractions over these disentangled primitives by exposing it to a small number of symbol-image pairs that apply to a particular concept (e.g. a few example images of an apple paired with the symbol "apple"). SCAN learns the meaning of the concept by identifying the set of visual primitives that all the visual examples have in common (e.g. all observed apples are small, round and red). The corresponding symbol ("apple") then becomes a "pointer" to the newly acquired concept {small, round, red} – a way to access and manipulate the concept without having to know its exact representational form. Our approach does not make any assumptions about how these symbols are encoded, which also allows SCAN to learn multiple referents to the same concept, i.e. synonyms.

Once a concept is acquired, it should be possible to use it for bi-directional inference: the model should be able to generate diverse visual samples that correspond to a particular concept (sym2img) and vice versa (img2sym). Since the projection from the space of visual primitives to the space of concepts (img2sym, red dashed arrow in Fig.
1) involves abstraction and hence a loss of information, one then needs to add compatible information back in when moving from the space of concepts to that of visual primitives (sym2img, blue dotted arrow in Fig. 1). In our setup, concepts are defined in terms of a set of relevant visual primitives (e.g. colour, shape and size for "apple"). This leaves a set of irrelevant visual attributes (e.g. lighting, position, background) to be "filled in". We do so by defaulting them to their respective priors, which ensures high diversity of samples (in both image and symbol space) for each concept during img2sym and sym2img inferences.

The structured nature of the concepts acquired by SCAN allows for sample efficient learning of logical recombination operators: AND (corresponding to a set union of relevant primitives), IN COMMON (corresponding to set intersection) and IGNORE (corresponding to set difference), by pairing a small number of valid visual examples of recombined concepts with the respective operator names. Once the meaning of the operators has been successfully learned, SCAN can exploit the compositionality of the acquired concepts, and traverse previously unexplored parts of the implicit underlying concept hierarchy by manipulating and recombining existing concepts in novel ways. For example, a new node corresponding to the concept {blue, small} can be reached through the following instructions: "blue" AND "small" (going down the hierarchy from more general to more

<--- Page Split --->

![](images/2_0.jpg)

<center>Figure 2: A: SCAN model architecture. The capital letters correspond to four disentangled visual primitives: object identity \((I)\), object colour \((O)\), floor colour \((F)\) and wall colour \((W)\). B: Mode coverage of the extra KL term of the SCAN loss function. Forward KL divergence \(D_{KL}(\mathbf{z}_x \parallel \mathbf{z}_y)\) allows SCAN to learn abstractions (wide yellow distribution \(\mathbf{z}_y\)) over the visual primitives that are irrelevant to the meaning of a concept (blue modes correspond to the inferred values of \(\mathbf{z}_x\) for different visual examples matching symbol \(\mathbf{y}\)). C: \(\beta\)-VAE\(_{DAE}\) model architecture.</center>

specific), "blueberry" IN COMMON "bluebell" (going up the hierarchy from more specific to more general) or "blueberry" IGNORE "round" (also going up the hierarchy).

To summarise, our paper 1) presents SCAN, a neural network model capable of learning compositional and hierarchical representations of visual concepts; 2) demonstrates that SCAN can be successfully trained with very little supervised data; 3) shows that after training, SCAN can perform multimodal (visual and symbolic) bi-directional inference and generation with high accuracy and diversity, outperforming all baselines; 4) shows that the addition of logical recombination operations allows SCAN to break out of its limited training data distribution and reach new nodes within the implicit hierarchy of concepts.

## 2 RELATED WORK

To the best of our knowledge, no framework currently exists that is directly equivalent to SCAN.
Past relevant literature can broadly be split into three categories: 1) Bayesian models that try to mimic fast human concept learning (Tenenbaum, 1999; Lake et al., 2015); 2) conditional generative models that aim to generate faithful images conditioned on a list of attributes or other labels (Reed et al., 2016b;a; Kingma et al., 2014; Yan et al., 2016; Sohn et al., 2015; Pandey & Dukkipati, 2017); and 3) multimodal generative models that aim to embed visual and symbolic inputs in a joint latent space in order to be able to run bi-directional inferences (Vedantam et al., 2017; Suzuki et al., 2017; Pu et al., 2016; Wang et al., 2016; Srivastava & Salakhutdinov, 2014).

Bayesian models by Tenenbaum (1999) and Lake et al. (2015) can learn from few examples, but, unlike SCAN, they are not fully grounded in visual data. Conditional and joint multimodal models are fully grounded in visual data; however, unlike SCAN, they require a large number of image-symbol pairs for training. An exception to this is the model by Srivastava & Salakhutdinov (2014), which, however, cannot generate images, instead relying on feature-guided nearest-neighbour lookup within existing data, and also requires slow MCMC sampling. Multimodal generative models are capable of bi-directional inference; however, they tend to learn a flat unstructured latent space, unlike the hierarchical compositional latent space of SCAN. Hence these baselines underperform SCAN in terms of sample diversity and the ability to break out of their training data distribution through symbolically instructed logical operations.

## 3 FORMALISING CONCEPTS

In Sec. 1 we informally proposed that concepts are abstractions over visual representational primitives. Hence, in order to formally define concepts we first define the visual representations used to ground them. These are defined as tuples of the form \((Z_{1}, \ldots , Z_{K})\), where \(\{1, \ldots , K\}\) is the set of indices of the independent latent factors sufficient to generate the visual input \(\mathbf{x}\), and \(Z_{k}\) is a random variable. The set \(\mathbb{R}^{K}\) of all such tuples is a K-dimensional visual representation space.

<--- Page Split --->

We define a concept \(C_{i}\) in such a K-dimensional representation space as a set of assignments of probability distributions to the random variables \(Z_{k}\), with the following form:

\[C_{i} = \{(k,p_{k}^{i}(Z_{k}))\mid k\in S_{i}\} \quad (1)\]

where \(S_{i}\subseteq \{1,\dots,K\}\) is the set of visual latent primitives that are relevant to concept \(C_{i}\) and \(p_{k}^{i}(Z_{k})\) is a probability distribution specified for the visual latent factor represented by the random variable \(Z_{k}\). Since \(S_{i}\) are subsets of \(\{1,\dots,K\}\), concepts are abstractions over the K-dimensional visual representation space. To generate a visual sample corresponding to a concept \(C_{i}\), it is necessary to fill in details for latents that got abstracted away during concept learning. This corresponds to the probability distributions \(\{p_{k}(Z_{k})\mid k\in \bar{S_{i}}\}\), where \(\bar{S_{i}} = \{1,\dots,K\} \setminus S_{i}\) is the set of visual latent primitives that are irrelevant to the concept \(C_{i}\). In SCAN we set these to the unit Gaussian prior: \(p_{k}(Z_{k}) = \mathcal{N}(0,1)\), \(\forall k\in \bar{S_{i}}\).

In order to improve readability, we will use a simplified notation for concepts throughout the rest of the paper.
For example, \(\{(size, p(Z_{size} = \text{small})), (colour, p(Z_{colour} = \text{blue}))\}\) will become either {small, blue} or {small, blue, __}, depending on whether we explicitly mark the irrelevant primitives \(k\in \bar{S_{i}}\) with placeholder symbols "__". Note that unlike the formal notation, the ordering of attributes within the simplified notation is fixed and meaningful.

Since we define concepts as sets, we can also define binary relations and operators on these sets. If \(C_{1}\) and \(C_{2}\) are concepts, and \(C_{1}\subset C_{2}\), we say that \(C_{1}\) is superordinate to \(C_{2}\), and \(C_{2}\) is subordinate to \(C_{1}\). Two concepts \(C_{1}\) and \(C_{2}\) are orthogonal if \(S_{1}\cap S_{2} = \emptyset\). The conjunction of two orthogonal concepts \(C_{1}\) and \(C_{2}\) is the concept \(C_{1}\cup C_{2}\) (e.g. {small, __, __} AND {__, round, __} = {small, round, __}). The overlap of two non-orthogonal concepts \(C_{1}\) and \(C_{2}\) is the concept \(C_{1}\cap C_{2}\) (e.g. {small, round, __} IN COMMON {__, round, red} = {__, round, __}). The difference between two concepts \(C_{1}\) and \(C_{2}\), where \(C_{1}\subset C_{2}\), is the concept \(C_{2}\setminus C_{1}\) (e.g. {small, round, __} IGNORE {__, round, __} = {small, __, __}). These operators allow for a traversal over a broader set of concepts within the implicit hierarchy given knowledge of a limited training subset of concepts.

## 4 MODEL ARCHITECTURE

Learning visual representational primitives The discovery of the generative structure of the visual world is the goal of disentangled factor learning research (Bengio et al., 2013). In this work we build SCAN on top of \(\beta\)-VAE, a state of the art model for unsupervised visual disentangled factor learning. \(\beta\)-VAE is a modification of the variational autoencoder (VAE) framework (Kingma & Welling, 2014; Rezende et al., 2014) that introduces an adjustable hyperparameter \(\beta\) to the original VAE objective:

\[\mathcal{L}_{x}(\theta ,\phi ;\mathbf{x},\mathbf{z}_{x},\beta) = \mathbb{E}_{q_{\phi}(\mathbf{z}_{x}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z}_{x})] - \beta D_{KL}\big(q_{\phi}(\mathbf{z}_{x}|\mathbf{x})\parallel p(\mathbf{z}_{x})\big) \quad (2)\]

where \(\phi\), \(\theta\) parametrise the distributions of the encoder and the decoder respectively. Well-chosen values of \(\beta\) (usually \(\beta >1\)) result in more disentangled latent representations \(\mathbf{z}_{x}\) by setting the right balance between reconstruction accuracy, latent channel capacity and independence constraints to encourage disentangling. For some datasets, however, this balance is tipped too far away from reconstruction accuracy. In these scenarios, disentangled latent representations \(\mathbf{z}_{x}\) may be learnt at the cost of losing crucial information about the scene, particularly if that information takes up a small proportion of the observations \(\mathbf{x}\) in pixel space. Hence, we adopt the solution used in Higgins et al. (2017b) that replaces the pixel log-likelihood term in Eq. 2 with an L2 loss in the high-level feature space of a denoising autoencoder (DAE) (Vincent et al., 2010) trained on the same data (see Fig. 2C for model architecture).
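In code, this substitution simply changes where the reconstruction error is measured. The following minimal NumPy sketch is ours, not the paper's implementation; it assumes a pre-trained DAE feature map `J` and diagonal Gaussian encoder outputs `mu`, `logvar`, and writes the objective as a quantity to be minimised:

```python
import numpy as np

def kl_to_unit_gaussian(mu, logvar):
    # Closed-form D_KL( N(mu, diag(exp(logvar))) || N(0, I) ), one value per sample.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def beta_vae_dae_loss(x, x_hat, mu, logvar, J, beta=53.0):
    # Reconstruction error measured in the DAE's high-level feature space
    # rather than in pixel space: ||J(x_hat) - J(x)||^2_2.
    recon = np.sum((J(x_hat) - J(x))**2, axis=-1)
    # Total quantity to minimise: feature-space L2 plus the beta-weighted KL.
    return np.mean(recon + beta * kl_to_unit_gaussian(mu, logvar))
```

Here `beta=53.0` matches the value reported for the disentangled \(\beta\)-VAE in Sec. A.1.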
The resulting \(\beta\)-VAE\(_{DAE}\) architecture optimises the following objective function:

\[\mathcal{L}_{x}(\theta ,\phi ;\mathbf{x},\mathbf{z}_{x},\beta) = \mathbb{E}_{q_{\phi}(\mathbf{z}_{x}|\mathbf{x})}\| J(\hat{\mathbf{x}}) - J(\mathbf{x})\|_{2}^{2} - \beta D_{KL}\big(q_{\phi}(\mathbf{z}_{x}|\mathbf{x})\parallel p(\mathbf{z}_{x})\big) \quad (3)\]

where \(\hat{\mathbf{x}}\sim p_{\theta}(\mathbf{x}|\mathbf{z}_{x})\) and \(J:\mathbb{R}^{W\times H\times C}\to \mathbb{R}^{N}\) is the function that maps images from pixel space with dimensionality \(\mathrm{Width}\times \mathrm{Height}\times \mathrm{Channels}\) to a high-level feature space with dimensionality \(N\), given by a stack of DAE layers up to a certain layer depth (a hyperparameter). Note that this adjustment means that we are no longer optimising the variational lower bound, and \(\beta\)-VAE\(_{DAE}\) with \(\beta = 1\) loses its equivalence to the original VAE framework.

<--- Page Split --->

Learning visual concepts This section describes how our proposed SCAN framework (Fig. 2A) exploits the particular parametrisation of the visual building blocks acquired by \(\beta\)-VAE to learn an implicit hierarchy of visual concepts formalised in Sec. 3. SCAN is based on a modified VAE framework. In order to encourage the model to learn visually grounded abstractions, we initialise the space of concepts (the latent space \(\mathbf{z}_{y}\) of SCAN) to be structurally identical to the space of visual primitives (the latent space \(\mathbf{z}_{x}\) of \(\beta\)-VAE). Both spaces are parametrised as multivariate Gaussian distributions with diagonal covariance matrices, and \(\dim (\mathbf{z}_{y}) = \dim (\mathbf{z}_{x}) = K\). The grounding is performed by aiming to minimise the KL divergence between the two distributions. The abstraction step corresponds to setting SCAN latents \(z_{y}^{k}\) corresponding to the relevant factors to narrow distributions, while defaulting those corresponding to the irrelevant factors to the wider unit Gaussian prior. This is done by minimising the forward KL divergence \(D_{KL}\left(q(\mathbf{z}_{x})\parallel q(\mathbf{z}_{y})\right)\), rather than the mode-picking reverse KL divergence \(D_{KL}\left(q(\mathbf{z}_{y})\parallel q(\mathbf{z}_{x})\right)\). Fig. 2B demonstrates the differences. Each blue mode corresponds to an inferred visual latent distribution \(q(z_{x}^{k}|x_{i})\) given an image \(x_{i}\). The yellow distribution corresponds to the learnt conceptual latent distribution \(q(z_{y}^{k})\). When presented with visual examples that have high variability for a particular generative factor, e.g. various lighting conditions when viewing examples of apples, the forward KL allows SCAN to learn a broad distribution for the corresponding conceptual latent \(q(z_{y}^{k})\) that is close to the prior \(p(z_{y}^{k}) = \mathcal{N}(0,1)\).
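The asymmetry between the two KL directions can be checked numerically. The sketch below is an illustration of ours, not the paper's code: it treats the visual posteriors for three examples of one concept as a narrow, well-separated 1D mixture, and scores a "broad" versus a "narrow" candidate for \(q(z_y^k)\) under both directions via Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
modes, s_x = np.array([-2.0, 0.0, 2.0]), 0.1   # three narrow visual posteriors q(z_x|x_i)

def log_mix(z):
    # Log density of the equal-weight mixture of the three visual posteriors.
    comps = -0.5 * ((z[:, None] - modes) / s_x)**2 - np.log(s_x * np.sqrt(2 * np.pi))
    return np.logaddexp.reduce(comps, axis=1) - np.log(len(modes))

def log_gauss(z, mu, s):
    return -0.5 * ((z - mu) / s)**2 - np.log(s * np.sqrt(2 * np.pi))

def mc_kl(sample, logp, logq, n=200_000):
    z = sample(n)
    return float(np.mean(logp(z) - logq(z)))   # Monte Carlo estimate of D_KL(p || q)

for name, (mu_y, s_y) in [("broad", (0.0, 2.0)), ("narrow", (-2.0, 0.1))]:
    fwd = mc_kl(lambda n: rng.choice(modes, n) + s_x * rng.standard_normal(n),
                log_mix, lambda z: log_gauss(z, mu_y, s_y))   # D_KL(z_x || z_y)
    rev = mc_kl(lambda n: mu_y + s_y * rng.standard_normal(n),
                lambda z: log_gauss(z, mu_y, s_y), log_mix)   # D_KL(z_y || z_x)
    print(f"{name}: forward KL = {fwd:.1f}, reverse KL = {rev:.1f}")
```

Minimising the forward direction favours the broad, mode-covering candidate (an abstraction over the irrelevant factor), while the reverse direction favours the narrow, mode-picking one.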
Hence, SCAN is trained by maximising:

\[\begin{array}{r l} & {\mathcal{L}_{y}(\theta_{y},\phi_{y};\mathbf{y},\mathbf{x},\mathbf{z}_{y},\beta ,\lambda) = \mathbb{E}_{q_{\phi_{y}}(\mathbf{z}_{y}|\mathbf{y})}[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z}_{y})] - \beta D_{KL}\big(q_{\phi_{y}}(\mathbf{z}_{y}|\mathbf{y})\parallel p(\mathbf{z}_{y})\big)}\\ & {\qquad -\lambda D_{KL}\big(q_{\phi_{x}}(\mathbf{z}_{x}|\mathbf{x})\parallel q_{\phi_{y}}(\mathbf{z}_{y}|\mathbf{y})\big)} \end{array} \quad (4)\]

where \(\mathbf{y}\) are the symbol inputs, \(\mathbf{z}_{y}\) is the latent space of concepts, \(\mathbf{z}_{x}\) is the latent space of the pre-trained \(\beta\)-VAE containing the visual primitives which ground the abstract concepts \(\mathbf{z}_{y}\), and \(\mathbf{x}\) are example images that correspond to the concepts \(\mathbf{z}_{y}\) activated by symbols \(\mathbf{y}\). It is important to up-weight the forward KL term relative to the other terms in the cost function (e.g. \(\lambda = 10\), \(\beta = 1\)).

The SCAN architecture does not make any assumptions on the nature of the symbols \(\mathbf{y}\). In this paper we use a commonly used k-hot encoding (Vedantam et al., 2017; Suzuki et al., 2017), where each concept is described in terms of the \(k \leq K\) visual attributes it refers to (e.g. an apple could be referred to by a 3-hot symbol "round, small, red"). In principle, other possible encoding schemes for \(\mathbf{y}\) can also be used, including word embeddings (Mikolov et al., 2013), or even entirely random vectors. We leave the empirical demonstration of this to future work.

Once trained, SCAN allows for bi-directional inference and generation (img2sym and sym2img). In order to generate visual samples that correspond to a particular concept (sym2img), we infer the concept \(\mathbf{z}_{y}\) by presenting an appropriate symbol \(\mathbf{y}\) to the inference network of SCAN. One can then sample from the inferred concept \(q_{\phi_{y}}(\mathbf{z}_{y}|\mathbf{y})\) and use the generative part of \(\beta\)-VAE to visualise the corresponding image samples \(p_{\theta}(\mathbf{x}|\mathbf{z}_{y})\). SCAN can also be used to infer a description of an image in terms of the different learnt concepts via their respective symbols. To do so, an image \(\mathbf{x}\) is presented to the inference network of the \(\beta\)-VAE to obtain its description in terms of the visual primitives \(\mathbf{z}_{x}\). One then uses the generative part of SCAN to sample descriptions \(p_{\theta_{y}}(\mathbf{y}|\mathbf{z}_{x})\) in terms of symbols that correspond to the previously inferred visual building blocks \(q_{\phi_{x}}(\mathbf{z}_{x}|\mathbf{x})\).

Learning concept recombination operators The compositional and hierarchical structure of the concept latent space \(\mathbf{z}_{y}\) learnt by SCAN can be exploited to break away from the training data distribution and imagine new concepts. This can be done by using the logical concept manipulation operators AND, IN COMMON and IGNORE formally defined in Sec. 3. These operators are implemented within a conditional convolutional module parametrised by \(\psi\) (Fig. 3A) that accepts two multivariate Gaussian distributions \(\mathbf{z}_{y_{1}}\) and \(\mathbf{z}_{y_{2}}\) corresponding to the two concepts that are to be recombined, and a conditioning vector \(\mathbf{r}\) specifying the recombination operator.
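Before detailing the module, it helps to restate the set semantics from Sec. 3 that these operators are trained to respect. The toy sketch below is purely illustrative (plain Python over factor-name dictionaries; it is not part of the model, which operates on distribution parameters):

```python
def AND(c1, c2):
    # Conjunction of orthogonal concepts: the union of their assignments.
    assert not set(c1) & set(c2), "AND is only defined for orthogonal concepts"
    return {**c1, **c2}

def IN_COMMON(c1, c2):
    # Overlap of non-orthogonal concepts: keep the shared assignments.
    return {k: v for k, v in c1.items() if c2.get(k) == v}

def IGNORE(c2, c1):
    # Difference C2 \ C1 (defined when C1 is a subset of C2): drop C1's factors.
    return {k: v for k, v in c2.items() if k not in c1}

small_round = AND({"size": "small"}, {"shape": "round"})            # {small, round, __}
print(IN_COMMON(small_round, {"shape": "round", "colour": "red"}))  # {'shape': 'round'}
print(IGNORE(small_round, {"shape": "round"}))                      # {'size': 'small'}
```

Unspecified factors (the "__" placeholders of the simplified notation) are simply absent from the dictionary and default to their priors at generation time.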
The input distributions \(\mathbf{z}_{y_{1}}\) and \(\mathbf{z}_{y_{2}}\) are inferred from the two corresponding input symbols \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\), respectively, using a pre-trained SCAN. The convolutional module strides over the parameters of each matching component \(z_{y_{1}}^{k}\) and \(z_{y_{2}}^{k}\) one at a time and outputs the corresponding parametrised component \(z_{r}^{k}\) of a recombined multivariate Gaussian distribution \(\mathbf{z}_{r}\) with a diagonal covariance matrix. We used a 1-hot encoding

<--- Page Split --->

![](images/5_0.jpg)

<center>Figure 3: A: Learning AND, IN COMMON or IGNORE recombination operators with a SCAN model architecture. Inset demonstrates the convolutional recombination operator that takes in \(\{\mu_{y_{1}}^{k},\sigma_{y_{1}}^{k};\mu_{y_{2}}^{k},\sigma_{y_{2}}^{k}\}\) and outputs \(\{\mu_{r}^{k},\sigma_{r}^{k}\}\). The capital letters correspond to four disentangled visual primitives: object identity \((I)\), object colour \((O)\), floor colour \((F)\) and wall colour \((W)\). B: Visual samples produced by SCAN and JMVAE when instructed with a novel concept recombination. SCAN samples consistently match the expected ground truth recombined concept, while maintaining high variability in the irrelevant visual primitives. JMVAE samples lack accuracy. Recombination instructions are used to imagine concepts that have never been seen during model training. Top: samples for IGNORE; Middle: samples for IN COMMON; Bottom: samples for AND.</center>

for the conditioning vector \(\mathbf{r}\), where \([1\ 0\ 0]\), \([0\ 1\ 0]\) and \([0\ 0\ 1]\) stood for AND, IN COMMON and IGNORE respectively. The conditioning was implemented as a tensor product that takes in \(\mathbf{z}_{y_{1}}\) and \(\mathbf{z}_{y_{2}}\) and outputs \(\mathbf{z}_{r}\), where \(\mathbf{r}\) effectively selects the appropriate trainable transformation matrix parametrised by \(\psi\). The conditional convolutional module is trained through the same visual grounding process as SCAN—each recombination instruction is paired with a small number of appropriate example images (e.g. "blue, suitcase" IGNORE "suitcase" might be paired with various example images containing a blue object). The recombination module is trained by minimising:

\[\mathcal{L}_{r}(\psi ;\mathbf{z}_{x},\mathbf{z}_{r}) = D_{KL}\big[q_{\phi_{x}}(\mathbf{z}_{x}|\mathbf{x}_{i})\ \big\|\ q_{\psi}\big(\mathbf{z}_{r}\mid q_{\phi_{y}}(\mathbf{z}_{y_{1}}|\mathbf{y}_{1}),q_{\phi_{y}}(\mathbf{z}_{y_{2}}|\mathbf{y}_{2}),\mathbf{r}\big)\big] \quad (5)\]

where \(q_{\phi_{x}}(\mathbf{z}_{x}|\mathbf{x}_{i})\) is the inferred latent distribution of the \(\beta\)-VAE given a seed image \(\mathbf{x}_{i}\) that matches the specified symbolic description. The resulting \(\mathbf{z}_{r}\) lives in the same space as \(\mathbf{z}_{y}\) and corresponds to a node within the implicit hierarchy of visual concepts. Hence, all the properties of concepts \(\mathbf{z}_{y}\) discussed in the previous section also hold for \(\mathbf{z}_{r}\).

## 5 EXPERIMENTS

### 5.1 DEEPMIND LAB EXPERIMENTS

Environment We evaluate the performance of SCAN on a dataset of visual frames and corresponding symbolic descriptions collected within the DeepMind Lab environment (Beattie et al., 2016). DeepMind Lab was chosen because it gave us good control of the ground truth generative process. The visual frames were collected from a static viewpoint situated in a room containing a single object.
The generative process was specified by four factors of variation: wall colour, floor colour and object colour, with 16 possible values each, and object identity, with 3 possible values: hat, ice lolly and suitcase. Other factors of variation were also added to the dataset by the DeepMind Lab engine, such as the spawn animation, horizontal camera rotation and the rotation of objects around the vertical axis. We split the dataset into two subsets. One was used for training the models, while the other one contained a held-out set of 300 four-gram concepts that were never seen during training, either visually or symbolically. We used the held-out set to evaluate the model's ability to imagine new concepts.

<--- Page Split --->

![](images/6_0.jpg)

<center>Figure 4: A: sym2img inferences with "white suitcase", "white suitcase, blue wall", and "white suitcase, blue wall, magenta floor" as input. The latter one points to a concept that the model has never seen during training, either visually or symbolically. All samples are consistently accurate, while showing good diversity in terms of the irrelevant visual attributes. B: when presented with an image, SCAN is able to describe it in terms of all concepts it has learnt, including synonyms (e.g. "dub", which corresponds to {ice lolly, white wall}). The histograms show the distributions of unique concepts the model used to describe each image, the most probable of which are printed in descending order next to the corresponding image. The few confusions SCAN makes are intuitive to humans too (e.g. confusing orange and yellow colours).</center>

Learning grounded concepts In this section we demonstrate that SCAN is capable of learning the meaning of new concepts from very few image-symbol pairs. We evaluate the model's concept understanding through qualitative analysis of sym2img and img2sym samples. First we pre-trained a \(\beta\)-VAE to learn a disentangled representation of the DeepMind Lab dataset (see Sec. A.3 in Supplementary Materials for details). Then we trained SCAN on a random subset of 133 out of 18,883 possible concepts sampled from all levels of the implicit hierarchy (these concepts specify between one and four visual primitives, and are associated with 1- to 4-hot symbols respectively). The set of symbols also included a number of 1-hot synonyms (e.g. a blue wall may be described by symbols "blue wall", "bright blue wall" or "blue wall synonym"). Each concept was associated with ten visual examples during training.

Fig. 4A shows samples drawn from SCAN when asked to imagine a bigram concept {white, suitcase}, a trigram concept {white, suitcase, blue wall}, or a four-gram {white, suitcase, blue wall, magenta floor}. Note that the latter is a concept drawn from the held-out test set that neither \(\beta\)-VAE nor SCAN has ever seen during training, and the first two concepts are novel to SCAN, but have been experienced by \(\beta\)-VAE. It is evident that the model demonstrates a good understanding of all three concepts, producing visual samples that match the meaning of the concept, and showing good variability over the irrelevant factors. Confusions do sometimes arise due to the sampling process (e.g. one of the suitcase samples is actually an ice lolly). Fig. 4B demonstrates that the same model can also correctly describe an image. The labels are mostly consistent with the image and display good diversity (SCAN is able to describe the same image using different symbols, including synonyms).
The few confusions that SCAN does make are between concepts that are easily confusable for humans too (e.g. red, orange and yellow colours).

Evolution of concept understanding In this section we take a closer look inside SCAN as it learns a new concept. In Sec. 3 we suggested that concepts should be grounded in terms of specified factors (the corresponding latent units \(z_{y}^{k}\ \forall k \in S\) should have low inferred standard deviations \(\sigma_{y}^{k}\)), while the unspecified visual primitives should be sampled from the unit Gaussian prior (the corresponding latent units \(z_{y}^{k}\ \forall k \in \bar{S}\) should have \(\sigma_{y}^{k} \approx 1\)). We visualise this process by teaching SCAN the meaning of the concept {cyan wall} using a curriculum of fifteen progressively more diverse visual examples (see Fig. 5, bottom row). After training SCAN on each set of five visual examples, we test the model's understanding of the concept through sym2img sampling using the symbol "cyan wall" (Fig. 5, top four rows). We also plot the average inferred specificity of all 32 latent units \(z_{y}^{k}\) during training

<--- Page Split --->

![](images/7_0.jpg)

<center>Figure 5: Evolution of understanding of the meaning of concept {cyan wall} as SCAN is exposed to progressively more diverse visual examples. Left: top row contains three sets of visual samples (sym2img) generated by SCAN after seeing each set of five visual examples presented in the bottom row. Right: average inferred specificity of concept latents \(z_{y}^{k}\) during training. Vertical dashed lines correspond to the vertical dashed lines in the left plot and indicate a switch to the next set of five more diverse visual examples. Only 6/32 latents \(z_{y}^{k}\) are shown, labelled according to their corresponding visual primitives in \(\mathbf{z}_{x}\).</center>

(Fig. 5, right). It can be seen that the number of specified latents \(z_{y}^{k}\) drops from six, through four, to two as the diversity of visual examples seen by SCAN increases. The remaining two highly specified latents \(z_{y}^{k}\) correctly correspond to the visual primitives \(\mathbf{z}_{x}\) representing wall hue and brightness.

Quantitative comparison to baselines In this section we quantitatively compare the accuracy and diversity of the sym2img samples produced by SCAN to those of the baselines – a SCAN-like architecture trained with a reverse KL used for grounding conceptual representations in vision (SCAN\(_R\)), another modification of SCAN that tries to ground conceptual representations in unstructured (entangled) visual representations (SCAN\(_U\), with various levels of visual entanglement), and two of the latest multimodal joint density models, the JMVAE (Suzuki et al., 2017) and the triple ELBO (TrELBO) (Vedantam et al., 2017). The two metrics, accuracy and diversity, measure different aspects of the models' performance. High accuracy means that the models understand the meaning of a symbol (e.g. samples of a "blue suitcase" should contain blue suitcases). High diversity means that the models were able to learn an abstraction. It quantifies the variety of samples in terms of the unspecified visual attributes (e.g. samples of blue suitcases should include a high diversity of wall colours and floor colours). There is a correlation between the two metrics, since samples with low accuracy often result in higher diversity scores. We use a pre-trained classifier achieving \(99\%\) average accuracy over all data generative factors to evaluate the accuracy of sym2img samples.
Since some colours in the dataset are hard to differentiate even for humans (e.g. yellow and orange), we use top-3 accuracy for colour-related factors. We evaluate the diversity of visual samples by estimating the KL divergence \(D_{KL}(u(\mathbf{y}_i) \parallel p(\mathbf{y}_i))\) between the flat prior and the inferred factor distribution, where \(p(\mathbf{y}_i)\) is the joint distribution over the factors \(k \in \bar{S_i}\) irrelevant to the \(i\)th concept (inferred by the classifier) and \(u(\mathbf{y}_i)\) is the equivalent flat distribution (i.e., with each factor value having equal probability). See Sec. A.1 in Supplementary Materials for more details.

All models were trained on a random subset of 133 out of 18,883 possible concepts sampled from all levels of the implicit hierarchy, with ten visual examples each. The accuracy and diversity metrics were calculated on two sets of sym2img samples: 1) train, corresponding to the 133 symbols used to train the models; and 2) test (symbols), corresponding to a held-out set of 50 symbols. Tbl. 1 demonstrates that SCAN outperforms all baselines in terms of both metrics. SCAN\(_R\) learns very accurate representations; however, it overfits to a single mode of each of the irrelevant visual factors and hence lacks diversity. SCAN\(_U\) experiments show that as the level of disentanglement within the visual representation is increased (the higher the \(\beta\), the more disentangled the representation), the accuracy and the diversity of the sym2img samples also get better. Note that baselines with poor sample accuracy inadvertently have good diversity scores, because samples that are hard to classify produce a relatively flat classifier distribution \(p(\mathbf{y}_i)\) close to the uniform prior \(u(\mathbf{y}_i)\). TrELBO learns an entangled and unstructured conceptual representation that produces accurate yet stereotypical sym2img samples that lack diversity. Finally, JMVAE is the model that comes closest to SCAN

<--- Page Split --->

Table 1: Quantitative results comparing the accuracy and diversity of visual samples produced through sym2img inference by SCAN and its baselines: a SCAN with unstructured vision (SCAN\(_U\); lower \(\beta\) means more visual entanglement), a SCAN with a reverse grounding KL term for both the model itself and its recombination operator (SCAN\(_R\)), and two recent joint multimodal embedding models, JMVAE and TrELBO. Higher accuracy and lower diversity indicate better performance. Test values can be computed either by directly feeding the ground truth symbols (test symbols), or by applying trained recombination operators to make the model recombine in the latent space (test operators).
| Model | Accuracy (train) | Accuracy (test, symbols) | Accuracy (test, operators) | Diversity (train) | Diversity (test, symbols) | Diversity (test, operators) |
|---|---|---|---|---|---|---|
| TrELBO | 0.81 | 0.69 | 0.37 | 9.41 | 6.86 | 0.63 |
| JMVAE | 0.75 | 0.68 | 0.61 | 4.32 | 2.87 | 0.86 |
| SCAN\(_R\) | 0.86 | 0.81 | 0.67 | 13.17 | 9.2 | 9.94 |
| SCAN\(_U\) (\(\beta = 0.1\)) | 0.27 | 0.26 | 0.25 | 5.51 | 1.23 | 1.66 |
| SCAN\(_U\) (\(\beta = 1\)) | 0.58 | 0.36 | 0.33 | 2.07 | 1.22 | 1.34 |
| SCAN\(_U\) (\(\beta = 20\)) | 0.65 | 0.42 | 0.32 | 1.41 | 3.98 | 4.57 |
| SCAN (\(\beta = 53\)) | 0.82 | 0.79 | 0.79 | 1.46 | 1.08 | 1.05 |

in terms of performance. It manages to exploit the structure of the symbolic inputs to learn a representation of the joint posterior that is almost as disentangled as that of SCAN. Similarly to SCAN, it also uses a forward KL term to match unimodal posteriors to the joint posterior. Hence, given that there is enough supervision within the symbols to help JMVAE learn a disentangled joint posterior, it should become equivalent to SCAN, whereby the joint \(q(\mathbf{z}|\mathbf{x},\mathbf{y})\) and unimodal \(q(\mathbf{z}|\mathbf{y})\) posteriors of JMVAE become equivalent to the visual \(q(\mathbf{z}_x|\mathbf{x})\) and symbolic \(q(\mathbf{z}_y|\mathbf{y})\) posteriors of SCAN respectively. Yet in practice we found that JMVAE training is much more sensitive to various architectural and hyperparameter choices compared to SCAN, which often results in mode collapse, leading to the reasonable accuracy yet poor diversity of the JMVAE sym2img samples. See Sec. A.3 for more details of the baselines' performance. Finally, SCAN is the only model that was able to exploit the k-hot structure of the symbols and the compositional nature of its representations to generalise well to the test set (test symbols results), while all of the other baselines lost a lot of their sample accuracy.

Learning recombination operators In Sec. 4 we suggested a way to traverse the implicit hierarchy of concepts towards novel nodes without any knowledge of how to point to these nodes through a symbolic reference. We suggested doing so by instructing a recombination of known concepts in the latent space. To test this, we trained a recombination module using 10 recombination instructions for each of the three operators, with 20 visual examples each. Tbl. 1 (test operators) demonstrates that we were able to reach the nodes corresponding to the 50 novel test concepts using such a pre-trained recombination operator module. This, however, only worked for SCAN, since the successful training of the recombination module relies on a structured latent space that all the other baselines lacked. SCAN with the recombination module preserved the accuracy and the diversity of samples, as shown quantitatively in Tbl. 1 and qualitatively in Fig. 3B. JMVAE, the closest baseline to SCAN in terms of the recombination module performance, produced samples with low accuracy (the drop in accuracy resulted in an increase in the diversity score).
It is interesting to note that the recombination operator training relies on the same kind of visual grounding as SCAN, hence it can often improve the diversity of the original model.

### 5.2 CELEBA EXPERIMENTS

We ran additional experiments on the more realistic CelebA dataset (Liu et al., 2015), after performing minimal dataset pre-processing (cropping the frames to \(64 \times 64\)). Unlike other approaches (Vedantam et al., 2017; Perarnau et al., 2016), which only use the 18 best attributes for training their models, we used all 40 attributes. Many of these 40 attributes are not useful, since they are either: 1) subjective (e.g. "attractiveness"); 2) refer to parts of the image that have been cropped out (e.g. "wearing necktie"); 3) refer to visual features that have not been discovered by \(\beta\)-VAE (e.g. "sideburns", see Higgins et al. (2017a) for a discussion of the types of factors that \(\beta\)-VAE tends to learn on this dataset); or 4) are confusing due to mislabelling (e.g. "bald female" as reported by Vedantam et al. (2017)). Hence, our experiments test the robustness of SCAN to learning concepts in an adversarial setting, where the model is taught concepts that do not necessarily relate well to their corresponding visual examples. For these experiments we used the controlled capacity schedule (Burgess et al., 2017) for \(\beta\)-VAE training to increase the quality of the generative process of the model.

We found that SCAN trained on CelebA was able to outperform its baselines of JMVAE and TrELBO. First, we checked which of the 40 attributes SCAN was able to understand after training. To do so,

<--- Page Split --->

![](images/9_0.jpg)

<center>Figure 6: Comparison of sym2img samples of SCAN, JMVAE and TrELBO trained on CelebA. See Fig. 19 in Supplementary Materials for larger samples.</center>

![](images/9_1.jpg)

<center>Figure 7: Example sym2img samples of SCAN trained on CelebA. We run inference using four different values for each attribute. We found that the model was more sensitive to changes in values in the positive rather than the negative direction, hence we use the following values: \(\{-6, -3, 1, 2\}\). See Fig. 20 in Supplementary Materials for larger samples.</center>

we inferred \(q(\mathbf{z}_y|\mathbf{y}_i)\) for all \(\mathbf{y}_i \in \mathbb{R}^{40}\), where \(\mathbf{y}_i\) is a 1-hot encoding of the \(i\)th attribute. We then approximated the number of specified latents for each posterior \(q(\mathbf{z}_y|\mathbf{y}_i)\). If an attribute \(i\) did not correspond to anything meaningful in the corresponding visual examples seen during training, it would have no specified latents and \(D_{KL}(q(\mathbf{z}_y|\mathbf{y}_i)\parallel p(\mathbf{z}_y)) \approx 0\). We found that SCAN did indeed learn the meaning of a large number of attributes. Fig. 6 shows sym2img samples for some of them compared to the equivalent samples for the baseline models: JMVAE and TrELBO. It can be seen that SCAN samples tend to be more faithful than those of JMVAE, and both models produce much better diversity of samples than TrELBO.

A notable difference between SCAN and the two baselines is that despite being trained on binary k-hot attribute vectors (where k varies for each sample), SCAN learnt meaningful directions of continuous variability in its conceptual latent space \(\mathbf{z}_y\). For example, if we vary the value of an individual symbolic attribute, we will get meaningful sym2img samples that range between extreme positive and extreme negative examples of that attribute (e.g.
by changing the values of the "pale skin" symbol \(\mathbf{y}\), we can generate samples with various skin tones, as shown in Fig. 7). This is in contrast to JMVAE and TrELBO, which only produce meaningful sym2img samples if the value of the attribute is set to 1 (attribute is present) or 0 (attribute is not enforced). This means that unlike SCAN, it is impossible to force JMVAE or TrELBO to generate samples with darker skin colours, despite the models knowing the meaning of the "pale skin" attribute.

Note that sometimes SCAN picks up implicit biases in the dataset. For example, after training SCAN interprets "attractive" as a term that refers to young white females and less so to males, especially if these males are also older and have darker skin tones (Fig. 7). Similarly, SCAN learns to use the term "big lips" to describe younger ethnic individuals, and less so older white males; while "arched eyebrows" is deemed appropriate to use when describing young white females, but not when

<--- Page Split --->

describing people wearing sunglasses or hats, presumably because one cannot see how arched their eyebrows are.

## 6 CONCLUSION

This paper introduced a new approach to learning grounded visual concepts. We defined concepts as abstractions over independent (and often interpretable) visual primitives, where each concept is given by learned distributions over a set of relevant visual factors. We proposed that all other (irrelevant) visual factors should default to their prior in order to produce a diverse set of samples corresponding to a concept. We then proposed SCAN, a neural network implementation of such an approach, which was able to discover and learn an implicit hierarchy of abstract concepts from as few as five symbol-image pairs per concept and no assumptions on the nature of symbolic representations. SCAN was then capable of bi-directional inference, generating diverse and accurate image samples from symbolic instructions, and vice versa, qualitatively and quantitatively outperforming all baselines, including on a realistic CelebA dataset with noisy attribute labels. The structure of the learnt concepts allowed us to train an extension to SCAN that could perform logical recombination operations. We demonstrated how such operators could be used to traverse the implicit concept hierarchy, including imagining completely new concepts. Due to the sample efficiency and the limited number of assumptions in our approach, the representations learnt by SCAN should be immediately applicable within a large set of broader problem domains, including reinforcement learning, classification, control and planning.

<--- Page Split --->

## REFERENCES

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.

Renée Baillargeon. Young infants' reasoning about the physical and spatial properties of a hidden object.
Cognitive Development, 2(3):179-200, 1987.

Renée Baillargeon. Infants' physical world. Current Directions in Psychological Science, 13(3):89-94, 2004.

Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Kuttler, Andrew Lefrancq, Simon Green, Victor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. DeepMind Lab. arXiv preprint arXiv:1612.03801, 2016.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.

Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in \(\beta\)-VAE. NIPS Workshop on Learning Disentangled Features, 2017.

Marta Garnelo, Kai Arulkumaran, and Murray Shanahan. Towards deep symbolic reinforcement learning. arXiv preprint arXiv:1609.05518, 2016.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. ICML, 37:1462-1471, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, pp. 770-778, 2016.

Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. \(\beta\)-VAE: Learning basic visual concepts with a constrained variational framework. ICLR, 2017a.

Irina Higgins, Arka Pal, Andrei Rusu, Loic Matthey, Christopher Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, and Alexander Lerchner. DARLA: Improving zero-shot transfer in reinforcement learning. ICML, 2017b.

Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. ICLR, 2017.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. ICLR, 2014.

Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. NIPS, 2014.

Brenden M. Lake, R. Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.

Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, pp. 1-101, 2016.

Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. ICCV, 2015.

Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dSprites: Disentanglement testing sprites dataset, 2017. URL https://github.com/deepmind/dsprites-dataset/.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. NIPS, pp. 3111-3119, 2013.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. ICML, 2016.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.

Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. NIPS, 2016b.

Gaurav Pandey and Ambedkar Dukkipati. Variational methods for conditional multimodal deep learning. IJCNN, pp. 308-315, 2017.

Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. CVPR, pp. 2536-2544, 2016.

<--- Page Split --->

Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and Jose M Alvarez. Invertible conditional GANs for image editing. NIPS Workshop on Adversarial Training, 2016.

Yunchen Pu, Zhe Gan, Ricardo Henao, Xin Yuan, Chunyuan Li, Andrew Stevens, and Lawrence Carin. Variational autoencoder for deep learning of images, labels and captions. NIPS, 2016.

Scott Reed, Zeynep Akata, Bernt Schiele, and Honglak Lee. Learning deep representations of fine-grained visual descriptions. CVPR, 2016a.

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. ICML, 2016b.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 32(2):1278-1286, 2014.

Eleanor H. Rosch. Cognition and Categorization, chapter Principles of Categorization, pp. 27-48. Lawrence Erlbaum Associates, Hillsdale, 1978.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

K.A. Smith and E. Vul. Sources of uncertainty in intuitive physics. Topics in Cognitive Science, 5(1):185-199, 2013.

Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. NIPS, 2015.

Elizabeth S Spelke. Principles of object perception. Cognitive Science, 14(1):29-56, 1990.

Nitish Srivastava and Ruslan Salakhutdinov. Multimodal learning with deep Boltzmann machines. Journal of Machine Learning Research, 15:2949-2980, 2014.

Masahiro Suzuki, Kotaro Nakayama, and Yutaka Matsuo. Joint multimodal learning with deep generative models. ICLR Workshop track, 2017.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. CVPR, 2015.

Joshua B. Tenenbaum. Bayesian modeling of human concept learning. NIPS, 1999.

Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, and Kevin Murphy. Generative models of visually grounded imagination. arXiv preprint arXiv:1705.10762, 2017.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol.
Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371-3408, 2010.

Weiran Wang, Xinchen Yan, Honglak Lee, and Karen Livescu. Deep variational canonical correlation analysis. arXiv preprint arXiv:1610.03454, 2016.

Wikipedia. HSL and HSV, 2017. URL https://en.wikipedia.org/wiki/HSL_and_HSV/. Online; accessed 15-June-2017.

Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2Image: Conditional image generation from visual attributes. ECCV, pp. 776-791, 2016.

<--- Page Split --->

## A SUPPLEMENTARY INFORMATION

## A.1 MODEL DETAILS

\(\beta\)-VAE. We re-used the architecture and the training setup for \(\beta\)-VAE specified in Higgins et al. (2017b). In particular, we used an L2 loss within a pre-trained denoising autoencoder (DAE) to calculate the reconstruction part of the \(\beta\)-VAE loss function. The DAE was trained with occlusion-style masking noise in the vein of Pathak et al. (2016). Concretely, two values were independently sampled from \(U[0,W]\) and two from \(U[0,H]\), where \(W\) and \(H\) were the width and height of the input frames. These four values determined the corners of the rectangular mask applied; all pixels that fell within the mask were set to zero. The DAE architecture consisted of four convolutional layers, each with kernel size 4 and stride 2 in both the height and width dimensions. The number of filters learnt for each layer was \(\{32,32,64,64\}\) respectively. The bottleneck layer consisted of a fully connected layer of size 100 neurons. This was followed by four deconvolutional layers, again with kernel sizes 4, strides 2, and \(\{64,64,32,32\}\) filters. The padding algorithm used was 'SAME' in TensorFlow (Abadi et al., 2015). ELU non-linearities were used throughout. The optimiser used was Adam (Kingma & Ba, 2015) with a learning rate of \(1\mathrm{e}{-3}\) and \(\epsilon = 1\mathrm{e}{-8}\). We pre-trained the DAE for 200,000 steps, using a batch size of 100, before training \(\beta\)-VAE.

The \(\beta\)-VAE architecture was as follows. We used an encoder of four convolutional layers, each with kernel size 4, and stride 2 in the height and width dimensions. The number of filters learnt for each layer was \(\{32,32,64,64\}\) respectively. This was followed by a fully connected layer of size 256 neurons. The latent layer comprised 64 neurons parametrising 32 (marginally) independent Gaussian distributions. The decoder architecture was simply the reverse of the encoder, utilising deconvolutional layers. The decoder output distribution was Bernoulli. The padding algorithm used was 'SAME' in TensorFlow. ReLU non-linearities were used throughout. The reconstruction error was computed in the last layer of the DAE (i.e. in the pixel space of DAE reconstructions) using an L2 loss, before the non-linearity. The optimiser used was Adam with a learning rate of \(1\mathrm{e}{-4}\) and \(\epsilon = 1\mathrm{e}{-8}\). We pre-trained \(\beta\)-VAE until convergence using a batch size of 100. The disentangled \(\beta\)-VAE had \(\beta = 53\), while the entangled \(\beta\)-VAE used within the SCAN\(_U\) baseline had \(\beta = 0.1\).

SCAN The encoder and decoder of SCAN were simple single-layer MLPs with 100 hidden units for the DeepMind Lab experiments, and two-layer MLPs with 500 hidden units in each hidden layer for the CelebA experiments. We used ReLU non-linearities in both cases. The decoder was parametrised as a Bernoulli distribution over the output space of size 375.
We set \(\beta_{y} = 1\) for all experiments, and \(\lambda = 10\). We trained the model using the Adam optimiser with a learning rate of \(1\mathrm{e}{-4}\) and a batch size of 16.

SCAN recombination operator The recombination operator was implemented as a convolutional operator with kernel size 1 and stride 1. The operator was parametrised as a 2-layer MLP with 30 and 15 hidden units per layer, and ReLU non-linearities. The optimiser used was Adam with a learning rate of \(1\mathrm{e}{-3}\) and a batch size of 16. We trained the recombination operator for \(50k\) steps.

JMVAE The JMVAE was trained using the loss as described in Suzuki et al. (2017):

\[\begin{array}{r l} & {\mathcal{L}_{JM}(\theta_{x},\theta_{y},\phi_{x},\phi_{y},\phi ;\mathbf{x},\mathbf{y},\alpha) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta_{x}}(\mathbf{x}|\mathbf{z})\right] + \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z})\right]}\\ & {\qquad -D_{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\parallel p(\mathbf{z})\big)}\\ & {\qquad -\alpha \big[D_{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\parallel q_{\phi_{x}}(\mathbf{z}|\mathbf{x})\big)}\\ & {\qquad +D_{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\parallel q_{\phi_{y}}(\mathbf{z}|\mathbf{y})\big)\big]} \end{array} \quad (6)\]

where \(\alpha\) is a hyperparameter. We tried \(\alpha\) values \(\{0.01,0.1,1.0,10.0\}\) as in the original paper and found that the best results were obtained with \(\alpha = 1.0\). All results were reported with this value. The architectural choices for JMVAE were made to match as closely as possible those made for SCAN. Thus the visual encoder \(q_{\phi_{x}}\) consisted of four convolutional layers, each with kernel size 4 and stride 2 in both the height and width dimensions, with \(\{32,32,64,64\}\) filters learned at the respective layers. The convolutional stack was followed by a single fully connected layer with 256 hidden units. The encoder output the parameters of a 32-dimensional diagonal Gaussian latent distribution. The symbol encoder \(q_{\phi_{y}}\) consisted of a single-layer MLP with 100 hidden units for the DeepMind Lab experiments, or a two-layer MLP with 500 hidden units per layer for the CelebA experiments, as in SCAN. The joint encoder \(q_{\phi}\) consisted of the same convolutional stack as in the visual encoder to process the visual input, while the symbol input was passed through a two-layer MLP with 32 and 100 hidden units. These two embeddings were then concatenated and passed through a further two-layer MLP with 256 hidden units per layer, before outputting the 64 parameters of the diagonal Gaussian latents.

<--- Page Split --->

The visual decoder \(p_{\theta_{x}}\) was simply the reverse of the visual encoder using transposed convolutions. Similarly, the symbol decoder \(p_{\theta_{y}}\) was again a single-layer MLP with 100 hidden units. The output distributions of both decoders were parameterised as Bernoulli. The model was trained using the Adam optimiser with a learning rate of \(1\mathrm{e}{-4}\) and a batch size of 16.
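For concreteness, the three KL regularisers of Eq. 6 can be written down directly for diagonal Gaussian posteriors. The following sketch is our own illustration, not the JMVAE codebase; `mu_j, var_j`, `mu_x, var_x` and `mu_y, var_y` stand for the parameters of \(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\), \(q_{\phi_x}(\mathbf{z}|\mathbf{x})\) and \(q_{\phi_y}(\mathbf{z}|\mathbf{y})\) respectively:

```python
import numpy as np

def kl_gauss(mu_a, var_a, mu_b, var_b):
    # Closed-form D_KL between diagonal Gaussians, summed over latent dimensions.
    return 0.5 * np.sum(np.log(var_b / var_a)
                        + (var_a + (mu_a - mu_b)**2) / var_b - 1.0, axis=-1)

def jmvae_kl_terms(mu_j, var_j, mu_x, var_x, mu_y, var_y, alpha=1.0):
    # The regularisers of Eq. 6 (the two reconstruction terms are omitted):
    # the joint posterior q(z|x,y) is pulled towards the prior, and the
    # unimodal posteriors q(z|x), q(z|y) are pulled towards the joint one.
    kl_prior = kl_gauss(mu_j, var_j, np.zeros_like(mu_j), np.ones_like(var_j))
    kl_to_x = kl_gauss(mu_j, var_j, mu_x, var_x)   # D_KL(q(z|x,y) || q(z|x))
    kl_to_y = kl_gauss(mu_j, var_j, mu_y, var_y)   # D_KL(q(z|x,y) || q(z|y))
    return kl_prior + alpha * (kl_to_x + kl_to_y)
```

With \(\alpha = 1.0\) (the best value found above), these terms enter the loss with a negative sign, i.e. they are minimised while the two reconstruction log-likelihoods are maximised.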
Triple ELBO (TrELBO) The Triple ELBO (TrELBO) model was trained using the loss as described in (Vedantam et al., 2017):

\[\begin{aligned}
\mathcal{L}_{trelbo}(\theta_{x},\theta_{y},\phi_{x},\phi_{y},\phi;\mathbf{x},\mathbf{y},\lambda_{y}^{xy},\lambda_{y}^{y}) ={}& \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta_{x}}(\mathbf{x}|\mathbf{z})\right] + \mathbb{E}_{q_{\phi_{x}}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta_{x}}(\mathbf{x}|\mathbf{z})\right] \\
&+ \lambda_{y}^{xy}\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z})\right] + \lambda_{y}^{y}\mathbb{E}_{q_{\phi_{y}}(\mathbf{z}|\mathbf{y})}\left[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z})\right] \\
&- D_{KL}\left(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\parallel p(\mathbf{z})\right) - D_{KL}\left(q_{\phi_{x}}(\mathbf{z}|\mathbf{x})\parallel p(\mathbf{z})\right) \\
&- D_{KL}\left(q_{\phi_{y}}(\mathbf{z}|\mathbf{y})\parallel p(\mathbf{z})\right)
\end{aligned} \quad (7)\]

where \(\lambda_{y}^{xy}\) and \(\lambda_{y}^{y}\) are hyperparameters. We set these to 10 and 100 respectively, following the reported best values from (Vedantam et al., 2017). We trained the model using the frozen-likelihood trick shown to improve the model performance in Vedantam et al. (2017). The symbol decoder parameters \(\theta_{y}\) were trained only using the \(\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z})\right]\) term and not the \(\mathbb{E}_{q_{\phi_{y}}(\mathbf{z}|\mathbf{y})}\left[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z})\right]\) term. For fair comparison to the other models, we did not utilise a product of experts for the inference network \(q_{\phi_{y}}(\mathbf{z}|\mathbf{y})\). In all architectural respects, the networks used were identical to those reported above for the JMVAE. The same training procedures were followed.

Accuracy and diversity evaluation The classifier used to evaluate the samples generated by each model was trained to discriminate the four room configuration factors in the DeepMind Lab dataset: wall colour, floor colour, object colour and object identity. We used a network of four 2-strided convolutional layers (with \(\{32,64,128,256\}\) filters in each successive layer, and kernels sized 3x3), followed by a fully connected layer with 256 neurons, with ReLU activations used throughout. The output layer consisted of four fully connected softmax heads, one for each predicted factor (with dimensionality 16 for each of the colour factors, and 3 for object identity). The classifier was trained until convergence using the Adam optimiser, with a learning rate of \(1\mathrm{e}{-4}\) and a batch size of 100 (reaching an overall accuracy of 0.992).

The accuracy metric for the sym2img samples was computed as the average top-k accuracy across the factors (with \(k = 3\) for the colour factors, and \(k = 1\) for the object identity factor), against the ground-truth factors specified by the concept used to generate each sym2img sample. The top-k of the factors in each image sample was calculated using the top-k softmax outputs of the classifier. Sample diversity of the sym2img data was characterised by estimating the KL divergence of the irrelevant factor distribution inferred for each concept from a flat distribution, \(D_{KL}(u(\mathbf{y}_{i})\parallel p(\mathbf{y}_{i}))\).
Here, \(p(\mathbf{y}_{i})\) is the joint distribution of the irrelevant factors in the sym2img set of images generated from the \(i\)th concept, which we estimated by averaging the classifier predictions across those images. \(u(\mathbf{y}_{i})\) is the desired (flat) joint distribution of the same factors (i.e., where each factor value has equal probability). We also computed the expected KL if \(p(\mathbf{y}_{i})\) were estimated using samples drawn from the flat distribution \(u(\mathbf{y}_{i})\). We report the mean of this KL across all the k-grams. We used 64 sym2img samples per concept.

## A.2 DEEPMIND LAB DATASET DETAILS

RGB to HSV conversion The majority of the data generative factors to be learnt in the DeepMind Lab dataset correspond to colours (floor, wall and object). We found that it was hard to learn disentangled representations of these data generative factors with \(\beta\)-VAE. We believe this is because \(\beta\)-VAE requires a degree of smoothness in pixel space when traversing a manifold for a particular data generative factor in order to correctly learn this factor (Higgins et al., 2017a). The intuitively smooth notion of colour, however, is disrupted in RGB space (see Fig. 8). Instead, the intuitive human notion of colour is more closely aligned with hue in HSV space. Hence we added a pre-processing step that converted the DeepMind Lab frames from the RGB to HSV space before training \(\beta\)-VAE. This conversion preserved the dimensionality of the frames, since both RGB and HSV require three channels. We found that this conversion enabled \(\beta\)-VAE to achieve good disentangling results.

k-hot experiments Our DeepMind Lab (Beattie et al., 2016) dataset contained 73 frames per room, where the configuration of each room was randomly sampled from the outer product of the four data generative factors: object identity and colour, wall and floor colour (18,883 unique factor combinations). All models were trained using a randomly sampled subset of 133 concepts (with 10 example images per concept), 30 extra concepts were used for training the recombination operators (20 example images per concept), and a further set of 50 concepts were used to evaluate the models' ability to break away from their training distribution using recombination operators.

![](images/15_0.jpg)
<center>Figure 8: A: Comparison of hue traversal in HSV space, which closely aligns with the intuitive human understanding of colour, and the equivalent highly non-monotonic changes in RGB space. \(H\) stands for hue, \(S\) stands for saturation and \(V\) stands for value/brightness. Adapted from Wikipedia (2017). B: Visualisation of colours used in DeepMind Lab in RGB. C: Visualisation of colours used in DeepMind Lab in HSV. It can be seen that the HSV projection of the DeepMind Lab colours appears significantly more structured than the equivalent RGB projection.</center>

Training the recombination operator The recombination operator was trained by sampling two concepts, \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\), and an operator \(\mathbf{r}\) as input. The training objective was to ground \(\mathbf{z}_{r}\) in the ground truth latent distribution \(\mathbf{z}_{x}\) inferred from an image \(\mathbf{x}_{r}\). The ground truth was obtained by applying the binary logical operation corresponding to \(\mathbf{r}\) to the binary symbols \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\).
This produces the ground truth recombined symbol \(\mathbf{y}_{r}\), which can then be used to fetch a corresponding ground truth image \(\mathbf{x}_{r}\) from the dataset. To make sure that the logical operators were not presented with nonsensical instructions, we used the following scheme for sampling minibatches of \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\) during training. The IN COMMON and AND operators were trained by sampling two k-grams \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\) with \(k \in \{1, 2, 3\}\). The IN COMMON operator had an additional restriction that the intersection cannot be an empty set. The IGNORE operator was trained by sampling a k-gram with \(k \in \{1, 2, 3\}\) and a unigram selected from one of the factors specified by the k-gram.

## A.3 DEEPMIND LAB EXPERIMENTS

Unsupervised visual representation learning SCAN relies on the presence of structured visual primitives. Hence, we first investigate whether \(\beta\)-VAE trained in an unsupervised manner on the visually complex DeepMind Lab dataset has discovered a disentangled representation of all its data generative factors. As can be seen in Fig. 9 (left panel), \(\beta\)-VAE has learnt to represent each of the object, wall and floor colours using two latents: one for hue and one for brightness. Learning a disentangled representation of colour is challenging, but we were able to achieve it by projecting the input images from RGB to HSV space, which is better aligned with human intuitions of colour (see Sec. A.2). We noticed that \(\beta\)-VAE confused certain colours (e.g. red floors are reconstructed as magenta, see the top right image in the Reconstructions pane of Fig. 9). We speculate that this is caused by trying to approximate the circular hue space using a linear latent. Red and magenta end up on the opposite ends of the linear latent while being neighbours in the circular space. Compare the disentangled representations of Fig. 9 to the entangled equivalents in Fig. 10. Fig. 10 shows that an entangled \(\beta\)-VAE was able to reconstruct the data well; however, due to the entangled nature of its learnt representations, its latent traversal plots and samples are not as good as those of a disentangled \(\beta\)-VAE (Fig. 9).

\(\mathrm{SCAN_U}\) analysis As shown in Fig. 10, SCAN with unstructured vision (\(\mathrm{SCAN_U}\)) is based on a \(\beta\)-VAE that learnt a good (yet entangled) representation of the DeepMind Lab dataset. Due to the unstructured entangled nature of the visual latent space \(\mathbf{z}_{x}\), the additional forward KL term of the SCAN loss function (Eq. 4) is not able to pick out the relevant visual primitives for each training concept. Instead, all latents end up in the irrelevant set, since the relevant and irrelevant ground truth factors end up being entangled in the latent space \(\mathbf{z}_{x}\). This disrupts the ability of SCAN with entangled vision to learn useful concepts, as demonstrated in Fig. 11.

JMVAE analysis In this section we provide some insights into the nature of the representations learnt by JMVAE (Suzuki et al., 2017). Fig. 12 demonstrates that after training JMVAE is capable of reconstructing the data and drawing reasonable visual samples. Furthermore, the latent traversal plots indicate that the model learnt a reasonably disentangled representation of the data generative factors.
Apart from failing to learn a latent to represent the spawn animation and a latent to represent all object identities (while the hat and the ice lolly are represented, the suitcase is missing), the representations learnt by JMVAE match those learnt by \(\beta\)-VAE (compare Figs. 9 and 12).

![](images/16_0.jpg)
<center>Figure 9: Reconstructions, samples and latent traversals of \(\beta\)-VAE \((\beta = 53)\) trained to disentangle the data generative factors of variation within the DeepMind Lab dataset. For the latent traversal plots we sampled the posterior, then visualised \(\beta\)-VAE reconstructions while resampling each latent unit one at a time in the \([-3,3]\) range while keeping all other latents fixed to their originally sampled values. This process helps visualise which data generative factor each latent unit has learnt to represent.</center>

Note, however, that unlike \(\beta\)-VAE, which managed to discover and learn a disentangled representation of the data generative factors in a completely unsupervised manner, JMVAE was able to achieve its disentangling performance by exploiting the extra supervision signal coming from the symbolic inputs. JMVAE is unable to learn a hierarchical compositional latent representation of concepts like SCAN does. Instead, it learns a flat representation of visual primitives like the representation learnt by \(\beta\)-VAE. Such a flat representation is problematic, as evidenced by the accuracy/diversity metrics shown in Tbl. 1. Further evidence comes from examining the sym2img samples produced by JMVAE (see Fig. 13). It can be seen that JMVAE fails to learn the abstract concepts as defined in Sec. 3. While the samples in Fig. 13 mostly include correct wall colours that match their respective input symbols, the samples have limited diversity. Many samples are exact copies of each other, a sign of mode collapse.

TrELBO analysis This section examines the nature of the representations learnt by TrELBO (Vedantam et al., 2017). Fig. 14 demonstrates that after training TrELBO is capable of reconstructing the data; however, it produces poor samples. This is due to the highly entangled nature of its learnt representation, as also evidenced by the traversal plots. Since TrELBO is not able to learn a compositional latent representation of concepts like that acquired by SCAN, it also struggles to produce diverse sym2img samples when instructed with symbols from the training set (see Fig. 15). Furthermore, this lack of structure in the learnt concept representations precludes successful recombination operator training. Hence, sym2img samples of test symbols instructed through recombination operators lack accuracy (Fig. 16).

Data efficiency analysis We evaluate the effect of the training set size on the performance of SCAN, JMVAE and TrELBO by comparing their accuracy and diversity scores after training on {5, 10, 15, 20, 25, 50, 75} concepts. Fig. 17 shows that SCAN consistently outperforms its baselines in terms of the absolute scores, while also displaying less variance when trained on datasets of various sizes. For this set of experiments we also halved the number of training iterations for all models, which affected the baselines but not SCAN. The diversity of JMVAE and TrELBO is better in this plot compared to the results reported in Tbl. 1 because the sym2img samples used for this plot were blurrier than those described in the main text.
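For reference, the diversity score described in the 'Accuracy and diversity evaluation' paragraph above reduces to a short computation. The sketch below treats a single irrelevant factor (the text uses the joint distribution over all irrelevant factors); the function name and the Dirichlet toy data are our own.

```python
import numpy as np

def diversity_kl(classifier_probs):
    """Estimate D_KL(u || p) for one irrelevant factor.

    `classifier_probs` is (n_samples, n_values): softmax outputs of the
    factor classifier for each sym2img sample. p is the factor
    distribution averaged over samples; u is flat. Lower values mean
    more diverse samples.
    """
    p = classifier_probs.mean(axis=0)
    p = p / p.sum()  # guard against numerical drift
    u = np.full_like(p, 1.0 / len(p))
    return float(np.sum(u * np.log(u / np.clip(p, 1e-12, None))))

# Example: 64 sym2img samples of a 16-valued colour factor.
rng = np.random.default_rng(1)
print(diversity_kl(rng.dirichlet(np.ones(16), size=64)))
```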
![](images/17_0.jpg)
<center>Figure 10: Samples, reconstructions and latent traversals of \(\beta\)-VAE that did not learn a structured disentangled representation \((\beta = 0.1)\). It is evident that the model learnt to reconstruct the data despite learning an entangled latent space.</center>

![](images/17_1.jpg)
<center>Figure 11: Visual samples (sym2img) of SCAN grounded in unstructured vision when presented with the symbols "hat" and "ice lolly". It is evident that the model struggled to learn a good understanding of the meaning of these concepts.</center>

## A.4 DSPRITES EXPERIMENTS

In this section we describe additional experiments testing SCAN on the dSprites (Matthey et al., 2017) dataset. The dataset consists of binary sprites fully specified by five ground truth factors: position x (32 values), position y (32 values), scale (6 values), rotation (40 values) and sprite identity (3 values). For our experiments we defined a conceptual space spanned by three of the data generative factors: horizontal and vertical positions, and scale. We quantised the values of each chosen factor into halves (top/bottom, left/right, big/small) and assigned one-hot encoded symbols to each of the \(\sum_{k = 1}^{K} \binom{K}{k} N^k = 26\) possible concepts to be learnt (since \(K = 3\) is the number of factors to be learnt and \(N = 2\) is the number of values each factor can take). We compared the performance of SCAN grounded in disentangled visual representations (\(\beta\)-VAE with \(\beta = 12\)) to that of SCAN grounded in entangled visual representations (\(\beta\)-VAE with \(\beta = 0\)). We trained both models on a random subset of image-symbol pairs \((x_i, y_i)\) making up \(< 0.01\%\) of the full dataset. We quantified how well the models understood the meaning of the positional and scale concepts after training by running sym2img inference and counting the number of white pixels within each of the four quadrants of the canvas (for position) or in total in the whole image (for scale). This can be compared to similar values calculated over a batch of ground truth images that match the same input symbols. Samples from SCAN closely matched the statistics of the ground truth samples (see Fig. 18). \(\mathrm{SCAN_{U}}\), however, failed to produce meaningful samples despite being able to reconstruct the dataset almost perfectly.

![](images/18_0.jpg)
<center>Figure 12: Samples, reconstructions and latent traversals of JMVAE. The model learns good disentangled latents, making use of the supervised symbolic information available.</center>

![](images/18_1.jpg)
<center>Figure 13: Visualisation of sym2img visual samples produced by JMVAE in response to symbols specifying wall colour names. It is evident that the model suffers from mode collapse, since a significant number of samples are copies of each other.</center>

## A.5 CELEBA EXPERIMENTS

![](images/19_0.jpg)
<center>Figure 14: Samples, reconstructions and latent traversals of TrELBO. The model learns a very entangled representation.</center>

![](images/19_1.jpg)
<center>Figure 15: Visualisation of sym2img visual samples produced by TrELBO in response to train symbols: "magenta object", "ice lolly", "purple floor" and "blue wall". It is evident that the model has good accuracy but very low diversity.
</center>

![](images/20_0.jpg)
<center>Figure 16: Visualisation of sym2img visual samples produced by TrELBO in response to test symbols instructed using recombination operators: "yellow object", "hat", "orange floor" and "cyan wall". It is evident that the model has very low accuracy but decent diversity.</center>

![](images/20_1.jpg)
<center>Figure 17: Accuracy and diversity scores of SCAN, JMVAE and TrELBO after being trained on {5, 10, 15, 20, 25, 50, 75} concepts with 10 visual examples each. The size of the circle corresponds to the training set size. We used symbols from the train set to generate the sym2img samples used to calculate the scores. SCAN outperforms both baselines and shows less susceptibility to the training set size.</center>

![](images/21_0.jpg)
<center>Figure 18: sym2img inference performance of SCAN and SCAN<sub>U</sub> for the symbols "left" and "large top". The first line in each subplot demonstrates ground truth samples from the dSprites dataset that correspond to the respective symbol. The next three lines illustrate the comparative performance of SCAN (left) vs SCAN<sub>U</sub> (right), including their respective sym2img samples, as well as the quantitative comparison of each model (green) to the ground truth (red) in terms of scale understanding (each bar corresponds to the average number of pixels per sample image) and positional understanding (each bar corresponds to the average number of pixels in one of the four quadrants of the samples: T - top, B - bottom, R - right, L - left). The closer the green bars are to the red bars, the better the model's understanding of the learnt concepts.</center>

![](images/22_0.jpg)
<center>Figure 19: Large version of Fig. 6</center>

![](images/23_0.jpg)
<center>Figure 20: Large version of Fig. 7</center>
## ABSTRACT

The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner. Unlike state-of-the-art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts.

## 1 INTRODUCTION

State-of-the-art deep learning approaches have achieved impressive results in many problem domains, including classification (He et al., 2016; Szegedy et al., 2015), density modelling (Gregor et al., 2015; Oord et al., 2016a;b), and reinforcement learning (Mnih et al., 2015; 2016; Jaderberg et al., 2017; Silver et al., 2016). They are still, however, far from possessing many traits characteristic of human intelligence. Such deep learning techniques tend to be overly data-hungry, often rely on significant human supervision, and tend to overfit to the training data distribution (Lake et al., 2016; Garnelo et al., 2016). An important step towards bridging the gap between human and artificial intelligence is endowing algorithms with compositional concepts (Lake et al., 2016; Garnelo et al., 2016). Compositionality allows for the reuse of a finite set of primitives (addressing the data efficiency and human supervision issues) across many scenarios by recombining them to produce an exponentially large number of novel yet coherent and potentially useful concepts (addressing the overfitting problem). Compositionality is at the core of such human abilities as creativity, imagination and language-based communication.

We propose that concepts are abstractions over a set of primitives. For example, consider the toy hierarchy of visual concepts shown in Fig. 1. Each node in this hierarchy is defined as a subset of the visual primitives that make up the scene in the input image. These visual primitives might include factors like object identity, object colour, floor colour and wall colour. As one traverses the hierarchy from the subordinate over basic to superordinate levels of abstraction (Rosch, 1978) (i.e. from the more specific to the more general concepts corresponding to the same visual scene), the number of concept-defining visual primitives decreases.
![](images/1_0.jpg)
<center>Figure 1: Schematic of an implicit concept hierarchy built upon a subset of four visual primitives: object identity \((I)\), object colour \((O)\), floor colour \((F)\) and wall colour \((W)\) (other visual primitives necessary to generate the scene are ignored in this example). Concepts form an implicit hierarchy, where each parent is an abstraction over its children and over the original set of visual primitives (the values of the concept-defining sets of visual primitives are indicated by the bold capital letters). In order to generate an image that corresponds to a concept, one has to fill in values for the factors that got abstracted away (represented as "-"), e.g. by sampling from their respective priors. Given certain nodes in the concept hierarchy, one can traverse the other nodes using logical operations. See Sec. 3 for our formal definition of concepts.</center>

Hence, each parent concept in such a hierarchy is an abstraction (i.e. a subset) over its children and over the original set of visual primitives. A more formal definition of concepts is provided in Sec. 3.

Intelligent agents are able to discover and learn abstract compositional concepts using little supervision (Baillargeon, 1987; Spelke, 1990; Baillargeon, 2004; Smith & Vul, 2013). Think of human word learning: we acquire the meaning of words through a combination of a continual stream of unsupervised visual data occasionally paired with a corresponding word label. This paper describes SCAN (Symbol-Concept Association Network, see Fig. 2A), a neural network model capable of learning grounded visual concepts in a largely unsupervised manner through fast symbol association. First, we use the \(\beta\)-VAE (Higgins et al., 2017a) to learn a set of independent representational primitives through unsupervised exposure to the visual data. This is equivalent to learning a disentangled (factorised and interpretable) representation of the independent ground truth "generative factors" of the data (Bengio et al., 2013). Next, we allow SCAN to discover meaningful abstractions over these disentangled primitives by exposing it to a small number of symbol-image pairs that apply to a particular concept (e.g. a few example images of an apple paired with the symbol "apple"). SCAN learns the meaning of the concept by identifying the set of visual primitives that all the visual examples have in common (e.g. all observed apples are small, round and red). The corresponding symbol ("apple") then becomes a "pointer" to the newly acquired concept {small, round, red}: a way to access and manipulate the concept without having to know its exact representational form. Our approach does not make any assumptions about how these symbols are encoded, which also allows SCAN to learn multiple referents to the same concept, i.e. synonyms.

Once a concept is acquired, it should be possible to use it for bi-directional inference: the model should be able to generate diverse visual samples that correspond to a particular concept (sym2img) and vice versa (img2sym). Since the projection from the space of visual primitives to the space of concepts (img2sym, red dash arrow in Fig. 1) involves abstraction and hence a loss of information, one then needs to add compatible information back in when moving from the space of concepts to that of visual primitives (sym2img, blue dot arrow in Fig. 1). In our setup, concepts are defined in terms of a set of relevant visual primitives (e.g.
colour, shape and size for "apple"). This leaves a set of irrelevant visual attributes (e.g. lighting, position, background) to be "filled in". We do so by defaulting them to their respective priors, which ensures high diversity of samples (in both image and symbol space) for each concept during img2sym and sym2img inferences.

The structured nature of the learnt concepts acquired by SCAN allows for sample-efficient learning of logical recombination operators: AND (corresponding to a set union of relevant primitives), IN COMMON (corresponding to set intersection) and IGNORE (corresponding to set difference), by pairing a small number of valid visual examples of recombined concepts with the respective operator names. Once the meaning of the operators has been successfully learned, SCAN can exploit the compositionality of the acquired concepts, and traverse previously unexplored parts of the implicit underlying concept hierarchy by manipulating and recombining existing concepts in novel ways. For example, a new node corresponding to the concept {blue, small} can be reached through the following instructions: "blue" AND "small" (going down the hierarchy from more general to more specific), "blueberry" IN COMMON "bluebell" (going up the hierarchy from more specific to more general) or "blueberry" IGNORE "round" (also going up the hierarchy).

![](images/2_0.jpg)
<center>Figure 2: A: SCAN model architecture. The capital letters correspond to four disentangled visual primitives: object identity \((I)\), object colour \((O)\), floor colour \((F)\) and wall colour \((W)\). B: Mode coverage of the extra KL term of the SCAN loss function. Forward KL divergence \(D_{KL}(\mathbf{z}_x \parallel \mathbf{z}_y)\) allows SCAN to learn abstractions (wide yellow distribution \(\mathbf{z}_y\)) over the visual primitives that are irrelevant to the meaning of a concept (blue modes correspond to the inferred values of \(\mathbf{z}_x\) for different visual examples matching symbol \(\mathbf{y}\)). C: \(\beta\)-VAE\(_{DAE}\) model architecture.</center>

To summarise, our paper 1) presents SCAN, a neural network model capable of learning compositional and hierarchical representations of visual concepts; 2) demonstrates that SCAN can be successfully trained with very little supervised data; 3) shows that after training, SCAN can perform multimodal (visual and symbolic) bi-directional inference and generation with high accuracy and diversity, outperforming all baselines; 4) shows that the addition of logical recombination operations allows SCAN to break out of its limited training data distribution and reach new nodes within the implicit hierarchy of concepts.

## 2 RELATED WORK

To the best of our knowledge no framework currently exists that is directly equivalent to SCAN. Past relevant literature can broadly be split into three categories: 1) Bayesian models that try to mimic fast human concept learning (Tenenbaum, 1999; Lake et al., 2015); 2) conditional generative models that aim to generate faithful images conditioned on a list of attributes or other labels (Reed et al., 2016b;a; Kingma et al., 2014; Yan et al., 2016; Sohn et al., 2015; Pandey & Dukkipati, 2017); and 3) multimodal generative models that aim to embed visual and symbolic inputs in a joint latent space in order to be able to run bi-directional inferences (Vedantam et al., 2017; Suzuki et al., 2017; Pu et al., 2016; Wang et al., 2016; Srivastava & Salakhutdinov, 2014). Bayesian models by Tenenbaum (1999) and Lake et al.
(2015) can learn from few examples, but, unlike SCAN, they are not fully grounded in visual data. Conditional and joint multimodal models are fully grounded in visual data; however, unlike SCAN, they require a large number of image-symbol pairs for training. An exception to this is the model by Srivastava & Salakhutdinov (2014), which, however, cannot generate images, instead relying on feature-guided nearest-neighbour lookup within existing data, and also requires slow MCMC sampling. Multimodal generative models are capable of bi-directional inference; however, they tend to learn a flat unstructured latent space, unlike the hierarchical compositional latent space of SCAN. Hence these baselines underperform SCAN in terms of sample diversity and the ability to break out of their training data distribution through symbolically instructed logical operations.

## 3 FORMALISING CONCEPTS

In Sec. 1 we informally proposed that concepts are abstractions over visual representational primitives. Hence, in order to formally define concepts we first define the visual representations used to ground them. These are defined as tuples of the form \((Z_{1}, \ldots, Z_{K})\), where \(\{1, \ldots, K\}\) is the set of indices of the independent latent factors sufficient to generate the visual input \(\mathbf{x}\), and \(Z_{k}\) is a random variable. The set \(\mathbb{R}^{K}\) of all such tuples is a K-dimensional visual representation space.

We define a concept \(C_{i}\) in such a K-dimensional representation space as a set of assignments of probability distributions to the random variables \(Z_{k}\), with the following form:

\[C_{i} = \{(k, p_{k}^{i}(Z_{k})) \mid k \in S_{i}\} \quad (1)\]

where \(S_{i} \subseteq \{1, \ldots, K\}\) is the set of visual latent primitives that are relevant to concept \(C_{i}\) and \(p_{k}^{i}(Z_{k})\) is a probability distribution specified for the visual latent factor represented by the random variable \(Z_{k}\). Since the \(S_{i}\) are subsets of \(\{1, \ldots, K\}\), concepts are abstractions over the K-dimensional visual representation space. To generate a visual sample corresponding to a concept \(C_{i}\), it is necessary to fill in details for the latents that got abstracted away during concept learning. This corresponds to the probability distributions \(\{p_{k}(Z_{k}) \mid k \in \bar{S_{i}}\}\), where \(\bar{S_{i}} = \{1, \ldots, K\} \setminus S_{i}\) is the set of visual latent primitives that are irrelevant to the concept \(C_{i}\). In SCAN we set these to the unit Gaussian prior: \(p_{k}(Z_{k}) = \mathcal{N}(0,1)\), \(\forall k \in \bar{S_{i}}\).

In order to improve readability, we will use a simplified notation for concepts throughout the rest of the paper. For example, \(\{(size, p(Z_{size} = \text{small})), (colour, p(Z_{colour} = \text{blue}))\}\) will become either {small, blue} or {small, blue, __}, depending on whether we signify the irrelevant primitives \(k \in \bar{S_{i}}\) as placeholder symbols "__". Note that unlike the formal notation, the ordering of attributes within the simplified notation is fixed and meaningful.

Since we define concepts as sets, we can also define binary relations and operators on these sets. If \(C_{1}\) and \(C_{2}\) are concepts, and \(C_{1} \subset C_{2}\), we say that \(C_{1}\) is superordinate to \(C_{2}\), and \(C_{2}\) is subordinate to \(C_{1}\). Two concepts \(C_{1}\) and \(C_{2}\) are orthogonal if \(S_{1} \cap S_{2} = \emptyset\).
The conjunction of two orthogonal concepts \(C_{1}\) and \(C_{2}\) is the concept \(C_{1} \cup C_{2}\) (e.g. {small, __, __} AND {__, round, __} = {small, round, __}). The overlap of two non-orthogonal concepts \(C_{1}\) and \(C_{2}\) is the concept \(C_{1} \cap C_{2}\) (e.g. {small, round, __} IN COMMON {__, round, red} = {__, round, __}). The difference between two concepts \(C_{1}\) and \(C_{2}\), where \(C_{1} \subset C_{2}\), is the concept \(C_{2} \setminus C_{1}\) (e.g. {small, round, __} IGNORE {__, round, __} = {small, __, __}). These operators allow for a traversal over a broader set of concepts within the implicit hierarchy given knowledge of a limited training subset of concepts.

## 4 MODEL ARCHITECTURE

Learning visual representational primitives The discovery of the generative structure of the visual world is the goal of disentangled factor learning research (Bengio et al., 2013). In this work we build SCAN on top of \(\beta\)-VAE, a state-of-the-art model for unsupervised visual disentangled factor learning. \(\beta\)-VAE is a modification of the variational autoencoder (VAE) framework (Kingma & Welling, 2014; Rezende et al., 2014) that introduces an adjustable hyperparameter \(\beta\) to the original VAE objective:

\[\mathcal{L}_{x}(\theta, \phi; \mathbf{x}, \mathbf{z}_{x}, \beta) = \mathbb{E}_{q_{\phi}(\mathbf{z}_{x}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z}_{x})] - \beta D_{KL}\big(q_{\phi}(\mathbf{z}_{x}|\mathbf{x}) \parallel p(\mathbf{z}_{x})\big) \quad (2)\]

where \(\phi\), \(\theta\) parametrise the distributions of the encoder and the decoder respectively. Well-chosen values of \(\beta\) (usually \(\beta > 1\)) result in more disentangled latent representations \(\mathbf{z}_{x}\) by setting the right balance between reconstruction accuracy, latent channel capacity and independence constraints to encourage disentangling. For some datasets, however, this balance is tipped too far away from reconstruction accuracy. In these scenarios, disentangled latent representations \(\mathbf{z}_{x}\) may be learnt at the cost of losing crucial information about the scene, particularly if that information takes up a small proportion of the observations \(\mathbf{x}\) in pixel space. Hence, we adopt the solution used in Higgins et al. (2017b) that replaces the pixel log-likelihood term in Eq. 2 with an L2 loss in the high-level feature space of a denoising autoencoder (DAE) (Vincent et al., 2010) trained on the same data (see Fig. 2C for the model architecture). The resulting \(\beta\)-VAE\(_{DAE}\) architecture optimises the following objective function:

\[\mathcal{L}_{x}(\theta, \phi; \mathbf{x}, \mathbf{z}_{x}, \beta) = \mathbb{E}_{q_{\phi}(\mathbf{z}_{x}|\mathbf{x})}\| J(\hat{\mathbf{x}}) - J(\mathbf{x})\|_{2}^{2} - \beta D_{KL}\big(q_{\phi}(\mathbf{z}_{x}|\mathbf{x}) \parallel p(\mathbf{z}_{x})\big) \quad (3)\]

where \(\hat{\mathbf{x}} \sim p_{\theta}(\mathbf{x}|\mathbf{z}_{x})\) and \(J: \mathbb{R}^{W \times H \times C} \to \mathbb{R}^{N}\) is the function that maps images from pixel space with dimensionality \(\mathrm{Width} \times \mathrm{Height} \times \mathrm{Channels}\) to a high-level feature space with dimensionality \(N\), given by a stack of DAE layers up to a certain layer depth (a hyperparameter). Note that this adjustment means that we are no longer optimising the variational lower bound, and \(\beta\)-VAE\(_{DAE}\) with \(\beta = 1\) loses its equivalence to the original VAE framework.
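As a sketch of how Eq. 3 is evaluated in practice, the following computes the \(\beta\)-VAE\(_{DAE}\) objective for one batch, written as a loss to be minimised (note the sign convention). The feature map \(J\) and the reconstruction \(\hat{\mathbf{x}}\) are placeholders for the pre-trained DAE stack and the decoder sample; only the loss arithmetic is shown, and the function names are ours.

```python
import numpy as np

def kl_to_unit_gauss(mu, logvar):
    """D_KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def beta_vae_dae_loss(x, x_hat, mu, logvar, J, beta=53.0):
    """Batch value of the Eq. 3 objective as a loss.

    `J` maps a batch of images to the DAE's high-level feature space;
    the reconstruction term is an L2 distance in that space rather
    than a pixel log-likelihood.
    """
    recon = np.sum((J(x_hat) - J(x)) ** 2, axis=-1)  # per-example L2
    return float(np.mean(recon + beta * kl_to_unit_gauss(mu, logvar)))
```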
Learning visual concepts This section describes how our proposed SCAN framework (Fig. 2A) exploits the particular parametrisation of the visual building blocks acquired by \(\beta\)-VAE\(^{1}\) to learn an implicit hierarchy of visual concepts as formalised in Sec. 3. SCAN is based on a modified VAE framework. In order to encourage the model to learn visually grounded abstractions, we initialise the space of concepts (the latent space \(\mathbf{z}_{y}\) of SCAN) to be structurally identical to the space of visual primitives (the latent space \(\mathbf{z}_{x}\) of \(\beta\)-VAE). Both spaces are parametrised as multivariate Gaussian distributions with diagonal covariance matrices, and \(\dim(\mathbf{z}_{y}) = \dim(\mathbf{z}_{x}) = K\). The grounding is performed by aiming to minimise the KL divergence between the two distributions. The abstraction step corresponds to setting the SCAN latents \(z_{y}^{k}\) corresponding to the relevant factors to narrow distributions, while defaulting those corresponding to the irrelevant factors to the wider unit Gaussian prior. This is done by minimising the forward KL divergence \(D_{KL}\left(q(\mathbf{z}_{x}) \parallel q(\mathbf{z}_{y})\right)\), rather than the mode-picking reverse KL divergence \(D_{KL}\left(q(\mathbf{z}_{y}) \parallel q(\mathbf{z}_{x})\right)\). Fig. 2B demonstrates the differences. Each blue mode corresponds to an inferred visual latent distribution \(q(z_{x}^{k}|x_{i})\) given an image \(x_{i}\). The yellow distribution corresponds to the learnt conceptual latent distribution \(q(z_{y}^{k})\). When presented with visual examples that have high variability for a particular generative factor, e.g. various lighting conditions when viewing examples of apples, the forward KL allows SCAN to learn a broad distribution for the corresponding conceptual latent \(q(z_{y}^{k})\) that is close to the prior \(p(z_{y}^{k}) = \mathcal{N}(0,1)\). Hence, SCAN is trained by minimising:

\[\begin{aligned}
\mathcal{L}_{y}(\theta_{y},\phi_{y};\mathbf{y},\mathbf{x},\mathbf{z}_{y},\beta,\lambda) ={}& \mathbb{E}_{q_{\phi_{y}}(\mathbf{z}_{y}|\mathbf{y})}[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z}_{y})] - \beta D_{KL}\big(q_{\phi_{y}}(\mathbf{z}_{y}|\mathbf{y}) \parallel p(\mathbf{z}_{y})\big) \\
&- \lambda D_{KL}\big(q_{\phi_{x}}(\mathbf{z}_{x}|\mathbf{x}) \parallel q_{\phi_{y}}(\mathbf{z}_{y}|\mathbf{y})\big)
\end{aligned} \quad (4)\]

where \(\mathbf{y}\) denotes the symbol inputs, \(\mathbf{z}_{y}\) is the latent space of concepts, \(\mathbf{z}_{x}\) is the latent space of the pre-trained \(\beta\)-VAE containing the visual primitives which ground the abstract concepts \(\mathbf{z}_{y}\), and \(\mathbf{x}\) are example images that correspond to the concepts \(\mathbf{z}_{y}\) activated by symbols \(\mathbf{y}\). It is important to up-weight the forward KL term relative to the other terms in the cost function (e.g. \(\lambda = 10\), \(\beta = 1\)).

The SCAN architecture does not make any assumptions about the nature of the symbols \(\mathbf{y}\). In this paper we use a commonly used k-hot encoding (Vedantam et al., 2017; Suzuki et al., 2017), where each concept is described in terms of the \(k \leq K\) visual attributes it refers to (e.g. an apple could be referred to by a 3-hot symbol "round, small, red"). In principle, other possible encoding schemes for \(\mathbf{y}\) can also be used, including word embeddings (Mikolov et al., 2013), or even entirely random vectors. We leave the empirical demonstration of this to future work.
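The mode-covering behaviour of the forward KL in Eq. 4 can be made concrete with a small numerical experiment. Below, an irrelevant factor takes three well-separated values across the visual examples of a concept; all numbers are illustrative toy values, not taken from the trained model.

```python
import numpy as np

def kl_1d(mu_a, var_a, mu_b, var_b):
    """D_KL(N(mu_a, var_a) || N(mu_b, var_b)) for 1-D Gaussians."""
    return 0.5 * (np.log(var_b / var_a)
                  + (var_a + (mu_a - mu_b) ** 2) / var_b - 1.0)

# Inferred visual latents q(z_x|x_i) for one irrelevant factor across
# visual examples of the same concept: three narrow, separated modes.
modes = [(-2.0, 0.05), (0.0, 0.05), (2.0, 0.05)]

def avg_forward_kl(mu_y, var_y):
    # E_x[ D_KL(q(z_x|x) || q(z_y|y)) ], the grounding term of Eq. 4
    return np.mean([kl_1d(m, v, mu_y, var_y) for m, v in modes])

def avg_reverse_kl(mu_y, var_y):
    return np.mean([kl_1d(mu_y, var_y, m, v) for m, v in modes])

broad, narrow = (0.0, 1.0), (0.0, 0.05)
print(avg_forward_kl(*broad) < avg_forward_kl(*narrow))   # True: covers all modes
print(avg_reverse_kl(*narrow) < avg_reverse_kl(*broad))   # True: picks one mode
```

The forward direction is minimised by a broad, prior-like \(q(z_y)\) that covers every mode, which is exactly the abstraction behaviour SCAN needs for irrelevant factors; the reverse direction instead rewards collapsing onto a single mode.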
Once trained, SCAN allows for bi-directional inference and generation (img2sym and sym2img). In order to generate visual samples that correspond to a particular concept (sym2img), we infer the concept \(\mathbf{z}_{y}\) by presenting an appropriate symbol \(\mathbf{y}\) to the inference network of SCAN. One can then sample from the inferred concept \(q_{\phi_{y}}(\mathbf{z}_{y}|\mathbf{y})\) and use the generative part of \(\beta\)-VAE to visualise the corresponding image samples \(p_{\theta}(\mathbf{x}|\mathbf{z}_{y})\). SCAN can also be used to infer a description of an image in terms of the different learnt concepts via their respective symbols. To do so, an image \(\mathbf{x}\) is presented to the inference network of the \(\beta\)-VAE to obtain its description in terms of the visual primitives \(\mathbf{z}_{x}\). One then uses the generative part of SCAN to sample descriptions \(p_{\theta_{y}}(\mathbf{y}|\mathbf{z}_{x})\) in terms of symbols that correspond to the previously inferred visual building blocks \(q_{\phi_{x}}(\mathbf{z}_{x}|\mathbf{x})\).

Learning concept recombination operators The compositional and hierarchical structure of the concept latent space \(\mathbf{z}_{y}\) learnt by SCAN can be exploited to break away from the training data distribution and imagine new concepts. This can be done by using the logical concept manipulation operators AND, IN COMMON and IGNORE formally defined in Sec. 3. These operators are implemented within a conditional convolutional module parametrised by \(\psi\) (Fig. 3A) that accepts two multivariate Gaussian distributions \(\mathbf{z}_{y_{1}}\) and \(\mathbf{z}_{y_{2}}\) corresponding to the two concepts that are to be recombined, and a conditioning vector \(\mathbf{r}\) specifying the recombination operator. The input distributions \(\mathbf{z}_{y_{1}}\) and \(\mathbf{z}_{y_{2}}\) are inferred from the two corresponding input symbols \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\), respectively, using a pre-trained SCAN. The convolutional module strides over the parameters of each matching component \(z_{y_{1}}^{k}\) and \(z_{y_{2}}^{k}\) one at a time and outputs the corresponding parametrised component \(z_{r}^{k}\) of a recombined multivariate Gaussian distribution \(\mathbf{z}_{r}\) with a diagonal covariance matrix.\(^{2}\) We used a 1-hot encoding for the conditioning vector \(\mathbf{r}\), where \([1\ 0\ 0]\), \([0\ 1\ 0]\) and \([0\ 0\ 1]\) stood for AND, IN COMMON and IGNORE respectively.
The conditioning was implemented as a tensor product that takes in \(\mathbf{z}_{y_{1}}\) and \(\mathbf{z}_{y_{2}}\) and outputs \(\mathbf{z}_{r}\), where \(\mathbf{r}\) effectively selects the appropriate trainable transformation matrix parametrised by \(\psi\). The conditional convolutional module is trained through the same visual grounding process as SCAN: each recombination instruction is paired with a small number of appropriate example images (e.g. "blue, suitcase" IGNORE "suitcase" might be paired with various example images containing a blue object). The recombination module is trained by minimising:

\[\mathcal{L}_{r}(\psi; \mathbf{z}_{x}, \mathbf{z}_{r}) = D_{KL}\big[q_{\phi_{x}}(\mathbf{z}_{x}|\mathbf{x}_{i}) \parallel q_{\psi}\big(\mathbf{z}_{r} \mid q_{\phi_{y}}(\mathbf{z}_{y_{1}}|\mathbf{y}_{1}), q_{\phi_{y}}(\mathbf{z}_{y_{2}}|\mathbf{y}_{2}), \mathbf{r}\big)\big] \quad (5)\]

where \(q_{\phi_{x}}(\mathbf{z}_{x}|\mathbf{x}_{i})\) is the inferred latent distribution of the \(\beta\)-VAE given a seed image \(\mathbf{x}_{i}\) that matches the specified symbolic description. The resulting \(\mathbf{z}_{r}\) lives in the same space as \(\mathbf{z}_{y}\) and corresponds to a node within the implicit hierarchy of visual concepts. Hence, all the properties of the concepts \(\mathbf{z}_{y}\) discussed in the previous section also hold for \(\mathbf{z}_{r}\).

![](images/5_0.jpg)
<center>Figure 3: A: Learning the AND, IN COMMON or IGNORE recombination operators with a SCAN model architecture. The inset demonstrates the convolutional recombination operator that takes in \(\{\mu_{y_{1}}^{k},\sigma_{y_{1}}^{k};\mu_{y_{2}}^{k},\sigma_{y_{2}}^{k}\}\) and outputs \(\{\mu_{r}^{k},\sigma_{r}^{k}\}\). The capital letters correspond to four disentangled visual primitives: object identity \((I)\), object colour \((O)\), floor colour \((F)\) and wall colour \((W)\). B: Visual samples produced by SCAN and JMVAE when instructed with a novel concept recombination. SCAN samples consistently match the expected ground truth recombined concept, while maintaining high variability in the irrelevant visual primitives. JMVAE samples lack accuracy. Recombination instructions are used to imagine concepts that have never been seen during model training. Top: samples for IGNORE; Middle: samples for IN COMMON; Bottom: samples for AND.</center>

## 5 EXPERIMENTS

### 5.1 DEEPMIND LAB EXPERIMENTS

Environment We evaluate the performance of SCAN on a dataset of visual frames and corresponding symbolic descriptions collected within the DeepMind Lab environment (Beattie et al., 2016). DeepMind Lab was chosen because it gave us good control of the ground truth generative process. The visual frames were collected from a static viewpoint situated in a room containing a single object. The generative process was specified by four factors of variation: wall colour, floor colour and object colour, with 16 possible values each, and object identity, with 3 possible values: hat, ice lolly and suitcase. Other factors of variation were also added to the dataset by the DeepMind Lab engine, such as the spawn animation, horizontal camera rotation and the rotation of objects around the vertical axis. We split the dataset into two subsets. One was used for training the models, while the other contained a held-out set of 300 four-gram concepts that were never seen during training, either visually or symbolically. We used the held-out set to evaluate the model's ability to imagine new concepts.

![](images/6_0.jpg)
<center>Figure 4: A: sym2img inferences with "white suitcase", "white suitcase, blue wall", and "white suitcase, blue wall, magenta floor" as input. The latter points to a concept that the model has never seen during training, either visually or symbolically. All samples are consistently accurate, while showing good diversity in terms of the irrelevant visual attributes. B: when presented with an image, SCAN is able to describe it in terms of all the concepts it has learnt, including synonyms (e.g. "dub", which corresponds to {ice lolly, white wall}). The histograms show the distributions of unique concepts the model used to describe each image, the most probable of which are printed in descending order next to the corresponding image. The few confusions SCAN makes are intuitive to humans too (e.g. confusing orange and yellow colours).</center>
Learning grounded concepts In this section we demonstrate that SCAN is capable of learning the meaning of new concepts from very few image-symbol pairs. We evaluate the model's concept understanding through qualitative analysis of sym2img and img2sym samples. First, we pre-trained a \(\beta\)-VAE to learn a disentangled representation of the DeepMind Lab dataset (see Sec. A.3 in Supplementary Materials for details). Then we trained SCAN on a random subset of 133 out of 18,883 possible concepts sampled from all levels of the implicit hierarchy (these concepts specify between one and four visual primitives, and are associated with 1- to 4-hot symbols respectively). The set of symbols also included a number of 1-hot synonyms (e.g. a blue wall may be described by the symbols "blue wall", "bright blue wall" or "blue wall synonym"). Each concept was associated with ten visual examples during training.

Fig. 4A shows samples drawn from SCAN when asked to imagine a bigram concept {white, suitcase}, a trigram concept {white, suitcase, blue wall}, or a four-gram {white, suitcase, blue wall, magenta floor}. Note that the latter is a concept drawn from the held-out test set that neither \(\beta\)-VAE nor SCAN has ever seen during training, and the first two concepts are novel to SCAN, but have been experienced by \(\beta\)-VAE. It is evident that the model demonstrates a good understanding of all three concepts, producing visual samples that match the meaning of the concept, and showing good variability over the irrelevant factors. Confusions do sometimes arise due to the sampling process (e.g. one of the suitcase samples is actually an ice lolly). Fig. 4B demonstrates that the same model can also correctly describe an image. The labels are mostly consistent with the image and display good diversity (SCAN is able to describe the same image using different symbols, including synonyms). The few confusions that SCAN does make are between concepts that are easily confusable for humans too (e.g. red, orange and yellow colours).

Evolution of concept understanding In this section we take a closer look inside SCAN as it learns a new concept. In Sec. 3 we suggested that concepts should be grounded in terms of the specified factors (the corresponding latent units \(z_{y}^{k}\ \forall k \in S\) should have low inferred standard deviations \(\sigma_{y}^{k}\)), while the unspecified visual primitives should be sampled from the unit Gaussian prior (the corresponding latent units \(z_{y}^{k}\ \forall k \in \bar{S}\) should have \(\sigma_{y}^{k} \approx 1\)). We visualise this process by teaching SCAN the meaning of the concept {cyan wall} using a curriculum of fifteen progressively more diverse visual examples (see Fig. 5, bottom row). After training SCAN on each set of five visual examples, we test the model's understanding of the concept through sym2img sampling using the symbol "cyan wall" (Fig. 5, top four rows). We also plot the average inferred specificity of all 32 latent units \(z_{y}^{k}\) during training (Fig. 5, right).

![](images/7_0.jpg)
<center>Figure 5: Evolution of understanding of the meaning of the concept {cyan wall} as SCAN is exposed to progressively more diverse visual examples. Left: the top row contains three sets of visual samples (sym2img) generated by SCAN after seeing each set of five visual examples presented in the bottom row. Right: average inferred specificity of the concept latents \(z_{y}^{k}\) during training.
Vertical dashed lines correspond to the vertical dashed lines in the left plot and indicate a switch to the next set of five more diverse visual examples. Only 6 of the 32 latents \(z_{y}^{k}\) are shown, labelled according to their corresponding visual primitives in \(\mathbf{z}_{x}\).</center>

It can be seen that the number of specified latents \(z_{y}^{k}\) drops from six, through four, to two as the diversity of the visual examples seen by SCAN increases. The remaining two highly specified latents \(z_{y}^{k}\) correctly correspond to the visual primitives \(\mathbf{z}_{x}\) representing wall hue and brightness.

Quantitative comparison to baselines In this section we quantitatively compare the accuracy and diversity of the sym2img samples produced by SCAN to those of the baselines: a SCAN-like architecture trained with a reverse KL for grounding conceptual representations in vision (\(\mathrm{SCAN_R}\)), another modification of SCAN that tries to ground conceptual representations in unstructured (entangled) visual representations (\(\mathrm{SCAN_U}\), with various levels of visual entanglement), and two of the latest multimodal joint density models, the JMVAE (Suzuki et al., 2017) and the triple ELBO (TrELBO) (Vedantam et al., 2017). The two metrics, accuracy and diversity, measure different aspects of the models' performance. High accuracy means that the models understand the meaning of a symbol (e.g. samples of a "blue suitcase" should contain blue suitcases). High diversity means that the models were able to learn an abstraction. It quantifies the variety of samples in terms of the unspecified visual attributes (e.g. samples of blue suitcases should include a high diversity of wall colours and floor colours). There is a correlation between the two metrics, since samples with low accuracy often result in higher diversity scores.

We use a pre-trained classifier achieving \(99\%\) average accuracy over all data generative factors to evaluate the accuracy of the sym2img samples. Since some colours in the dataset are hard to differentiate even for humans (e.g. yellow and orange), we use top-3 accuracy for colour related factors. We evaluate the diversity of visual samples by estimating the KL divergence of the inferred factor distribution from the flat prior: \(D_{KL}(u(\mathbf{y}_i) \parallel p(\mathbf{y}_i))\), where \(p(\mathbf{y}_i)\) is the joint distribution over the factors irrelevant to the \(i\)th concept, \(k \in \overline{S_i}\) (inferred by the classifier), and \(u(\mathbf{y}_i)\) is the equivalent flat distribution (i.e., with each factor value having equal probability). See Sec. A.1 in Supplementary Materials for more details. All models were trained on a random subset of 133 out of 18,883 possible concepts sampled from all levels of the implicit hierarchy, with ten visual examples each. The accuracy and diversity metrics were calculated on two sets of sym2img samples: 1) train, corresponding to the 133 symbols used to train the models; and 2) test (symbols), corresponding to a held-out set of 50 symbols.

Tbl. 1 demonstrates that SCAN outperforms all baselines in terms of both metrics. \(\mathrm{SCAN_R}\) learns very accurate representations; however, it overfits to a single mode of each of the irrelevant visual factors and hence lacks diversity. The \(\mathrm{SCAN_U}\) experiments show that as the level of disentanglement within the visual representation is increased (the higher the \(\beta\), the more disentangled the representation), the accuracy and the diversity of the sym2img samples also improve.
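As a concrete reading of the accuracy metric, the computation reduces to average top-k classification accuracy over the four factors. The factor names and dictionary layout below are our own illustrative choices, not those of the original evaluation code.

```python
import numpy as np

def topk_hit(probs, target, k):
    """1 if `target` is among the k most probable classes."""
    return int(target in np.argsort(probs)[-k:])

def sym2img_accuracy(factor_probs, targets):
    """Average top-k accuracy across the DeepMind Lab factors:
    k = 3 for the colour factors, k = 1 for object identity.

    `factor_probs`: factor name -> (n_samples, n_values) softmax
    outputs of the evaluation classifier; `targets`: factor name ->
    ground-truth value index specified by the concept.
    """
    ks = {"wall_colour": 3, "floor_colour": 3,
          "object_colour": 3, "object_identity": 1}
    per_factor = [
        np.mean([topk_hit(p, targets[f], ks[f]) for p in factor_probs[f]])
        for f in ks
    ]
    return float(np.mean(per_factor))
```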
Note that baselines with poor sample accuracy inadvertently have good diversity scores, because samples that are hard to classify produce a relatively flat classifier distribution \(p(\mathbf{y}_i)\) close to the uniform prior \(u(\mathbf{y}_i)\). TrELBO learns an entangled and unstructured conceptual representation that produces accurate yet stereotypical sym2img samples that lack diversity. JMVAE is the model that comes closest to SCAN in terms of performance. It manages to exploit the structure of the symbolic inputs to learn a representation of the joint posterior that is almost as disentangled as that of SCAN. Similarly to SCAN, it also uses a forward KL term to match the unimodal posteriors to the joint posterior. Hence, given that there is enough supervision within the symbols to help JMVAE learn a disentangled joint posterior, it should become equivalent to SCAN, whereby the joint \(q(\mathbf{z}|\mathbf{x},\mathbf{y})\) and unimodal \(q(\mathbf{z}|\mathbf{y})\) posteriors of JMVAE become equivalent to the visual \(q(\mathbf{z}_x|\mathbf{x})\) and symbolic \(q(\mathbf{z}_y|\mathbf{y})\) posteriors of SCAN respectively. Yet in practice we found that JMVAE training is much more sensitive to various architectural and hyperparameter choices compared to SCAN, which often results in mode collapse, leading to the reasonable accuracy yet poor diversity of the JMVAE sym2img samples. See Sec. A.3 for more details of the baselines' performance. Finally, SCAN is the only model that was able to exploit the k-hot structure of the symbols and the compositional nature of its representations to generalise well to the test set (test symbols results), while all of the other baselines lost a lot of their sample accuracy.

Table 1: Quantitative results comparing the accuracy and diversity of visual samples produced through sym2img inference by SCAN and three baselines: a SCAN with unstructured vision (\(\mathrm{SCAN_U}\); lower \(\beta\) means more visual entanglement), a SCAN with a reverse grounding KL term for both the model itself and its recombination operator (\(\mathrm{SCAN_R}\)), and two recent joint multimodal embedding models, JMVAE and TrELBO. Higher accuracy and lower diversity indicate better performance. Test values can be computed either by directly feeding the ground truth symbols (test symbols), or by applying trained recombination operators to make the model recombine in the latent space (test operators).

<table><tr><td rowspan="2">MODEL</td><td colspan="3">ACCURACY</td><td colspan="3">DIVERSITY</td></tr><tr><td>TRAIN</td><td>TEST (SYMBOLS)</td><td>TEST (OPERATORS)</td><td>TRAIN</td><td>TEST (SYMBOLS)</td><td>TEST (OPERATORS)</td></tr><tr><td>TRELBO</td><td>0.81</td><td>0.69</td><td>0.37</td><td>9.41</td><td>6.86</td><td>0.63</td></tr><tr><td>JMVAE</td><td>0.75</td><td>0.68</td><td>0.61</td><td>4.32</td><td>2.87</td><td>0.86</td></tr><tr><td>SCAN<sub>R</sub></td><td>0.86</td><td>0.81</td><td>0.67</td><td>13.17</td><td>9.2</td><td>9.94</td></tr><tr><td>SCAN<sub>U</sub> (β = 0.1)</td><td>0.27</td><td>0.26</td><td>0.25</td><td>5.51</td><td>1.23</td><td>1.66</td></tr><tr><td>SCAN<sub>U</sub> (β = 1)</td><td>0.58</td><td>0.36</td><td>0.33</td><td>2.07</td><td>1.22</td><td>1.34</td></tr><tr><td>SCAN<sub>U</sub> (β = 20)</td><td>0.65</td><td>0.42</td><td>0.32</td><td>1.41</td><td>3.98</td><td>4.57</td></tr><tr><td>SCAN (β = 53)</td><td>0.82</td><td>0.79</td><td>0.79</td><td>1.46</td><td>1.08</td><td>1.05</td></tr></table>

Learning recombination operators In Sec.
4 we suggested a way to traverse the implicit hierarchy of concepts towards novel nodes without any knowledge of how to point to these nodes through a symbolic reference. We suggested doing so by instructing a recombination of known concepts in the latent space. To test this, we trained a recombination module using 10 recombination instructions for each of the three operators, with 20 visual examples each. Tbl. 1 (test operators) demonstrates that we were able to reach the nodes corresponding to the 50 novel test concepts using such a pre-trained recombination operator module. This, however, only worked for SCAN, since successful training of the recombination module relies on a structured latent space that all the other baselines lack. SCAN with the recombination module preserved the accuracy and diversity of samples, as shown quantitatively in Tbl. 1 and qualitatively in Fig. 3B. JMVAE, the closest baseline to SCAN in terms of recombination module performance, produced samples with low accuracy (the drop in accuracy resulted in an increase in the diversity score). It is interesting to note that the recombination operator training relies on the same kind of visual grounding as SCAN, hence it can often improve the diversity of the original model. ### 5.2 CELEBA EXPERIMENTS We ran additional experiments on the more realistic CelebA dataset (Liu et al., 2015), after performing minimal pre-processing (cropping the frames to \(64 \times 64\)). Unlike other approaches (Vedantam et al., 2017; Perarnau et al., 2016), which use only the 18 best attributes for training their models, we used all 40 attributes. Many of these 40 attributes are not useful, since they are either: 1) subjective (e.g. "attractiveness"); 2) refer to parts of the image that have been cropped out (e.g. "wearing necktie"); 3) refer to visual features that have not been discovered by \(\beta\)-VAE (e.g. "sideburns", see Higgins et al. (2017a) for a discussion of the types of factors that \(\beta\)-VAE tends to learn on this dataset); or 4) are confusing due to mislabelling (e.g. "bald female", as reported by Vedantam et al. (2017)). Hence, our experiments test the robustness of SCAN to learning concepts in an adversarial setting, where the model is taught concepts that do not necessarily relate well to their corresponding visual examples. For these experiments we used the controlled capacity schedule (Burgess et al., 2017) for \(\beta\)-VAE training to increase the quality of the generative process of the model. We found that SCAN trained on CelebA was able to outperform its baselines, JMVAE and TrELBO. First, we checked which of the 40 attributes SCAN was able to understand after training. To do so, <--- Page Split ---> ![](images/9_0.jpg) <center>Figure 6: Comparison of sym2img samples of SCAN, JMVAE and TrELBO trained on CelebA. See Fig. 19 in Supplementary Materials for larger samples. </center> ![](images/9_1.jpg) <center>Figure 7: Example sym2img samples of SCAN trained on CelebA. We run inference using four different values for each attribute. We found that the model was more sensitive to changes of values in the positive rather than the negative direction, hence we use the following values: \(\{-6, -3, 1, 2\}\). See Fig. 20 in Supplementary Materials for larger samples. </center> we inferred \(q(\mathbf{z}_y|\mathbf{y}_i)\) for all \(\mathbf{y}_i \in \mathbb{R}^{40}\), where \(\mathbf{y}_i\) is a 1-hot encoding of the \(i\)th attribute.
We then approximated the number of specified latents for each posterior \(q(\mathbf{z}_y|\mathbf{y}_i)\). If an attribute \(i\) did not correspond to anything meaningful in the corresponding visual examples seen during training, it would have no specified latents and \(D_{KL}(q(\mathbf{z}_y|\mathbf{y}_i)||p(\mathbf{z}_y)) \approx 0\). We found that SCAN did indeed learn the meaning of a large number of attributes. Fig. 6 shows sym2img samples for some of them, compared to the equivalent samples for the baseline models, JMVAE and TrELBO. It can be seen that SCAN samples tend to be more faithful than those of JMVAE, and both models produce a much better diversity of samples than TrELBO. A notable difference between SCAN and the two baselines is that despite being trained on binary k-hot attribute vectors (where k varies for each sample), SCAN learnt meaningful directions of continuous variability in its conceptual latent space \(\mathbf{z}_y\). For example, if we vary the value of an individual symbolic attribute, we get meaningful sym2img samples that range between extreme positive and extreme negative examples of that attribute (e.g. by changing the values of the "pale skin" symbol \(\mathbf{y}\), we can generate samples with various skin tones, as shown in Fig. 7). This is in contrast to JMVAE and TrELBO, which only produce meaningful sym2img samples if the value of the attribute is set to 1 (attribute is present) or 0 (attribute is not enforced). This means that, unlike SCAN, it is impossible to force JMVAE or TrELBO to generate samples with darker skin colours, despite the models knowing the meaning of the "pale skin" attribute. Note that sometimes SCAN picks up implicit biases in the dataset. For example, after training SCAN interprets "attractive" as a term that refers to young white females and less so to males, especially if these males are also older and have darker skin tones (Fig. 7). Similarly, SCAN learns to use the term "big lips" to describe younger ethnic individuals, and less so older white males; while "arched eyebrows" is deemed appropriate to use when describing young white females, but not when <--- Page Split ---> describing people wearing sunglasses or hats, presumably because one cannot see how arched their eyebrows are. ## 6 CONCLUSION This paper introduced a new approach to learning grounded visual concepts. We defined concepts as abstractions over independent (and often interpretable) visual primitives, where each concept is given by learned distributions over a set of relevant visual factors. We proposed that all other (irrelevant) visual factors should default to their prior in order to produce a diverse set of samples corresponding to a concept. We then proposed SCAN, a neural network implementation of this approach, which was able to discover and learn an implicit hierarchy of abstract concepts from as few as five symbol-image pairs per concept and no assumptions on the nature of the symbolic representations. SCAN was then capable of bi-directional inference, generating diverse and accurate image samples from symbolic instructions, and vice versa, qualitatively and quantitatively outperforming all baselines, including on the realistic CelebA dataset with noisy attribute labels. The structure of the learnt concepts allowed us to train an extension to SCAN that could apply logical recombination operators. We demonstrated how such operators could be used to traverse the implicit concept hierarchy, including imagining completely new concepts.
Due to the sample efficiency and the limited number of assumptions in our approach, the representations learnt by SCAN should be immediately applicable within a large set of broader problem domains, including reinforcement learning, classification, control and planning. <--- Page Split ---> ## A SUPPLEMENTARY INFORMATION ## A.1 MODEL DETAILS \(\beta\)-VAE. We re-used the architecture and the training setup for \(\beta\)-VAE specified in Higgins et al. (2017b). In particular, we used an L2 loss within a pre-trained denoising autoencoder (DAE) to calculate the reconstruction part of the \(\beta\)-VAE loss function. The DAE was trained with occlusion-style masking noise in the vein of Pathak et al. (2016). Concretely, two values were independently sampled from \(U[0,W]\) and two from \(U[0,H]\), where \(W\) and \(H\) were the width and height of the input frames. These four values determined the corners of the rectangular mask applied; all pixels that fell within the mask were set to zero. The DAE architecture consisted of four convolutional layers, each with kernel size 4 and stride 2 in both the height and width dimensions. The number of filters learnt for each layer was \(\{32,32,64,64\}\) respectively. The bottleneck layer consisted of a fully connected layer of 100 neurons. This was followed by four deconvolutional layers, again with kernel size 4, stride 2, and \(\{64,64,32,32\}\) filters. The padding algorithm used was 'SAME' in TensorFlow (Abadi et al., 2015). ELU non-linearities were used throughout. The optimiser used was Adam (Kingma & Ba, 2015) with a learning rate of \(1\mathrm{e}{-3}\) and \(\epsilon = 1\mathrm{e}{-8}\). We pre-trained the DAE for 200,000 steps with a batch size of 100 before training \(\beta\)-VAE. The \(\beta\)-VAE architecture was the following. We used an encoder of four convolutional layers, each with kernel size 4 and stride 2 in the height and width dimensions. The number of filters learnt for each layer was \(\{32,32,64,64\}\) respectively. This was followed by a fully connected layer of 256 neurons. The latent layer comprised 64 neurons parametrising 32 (marginally) independent Gaussian distributions. The decoder architecture was simply the reverse of the encoder, utilising deconvolutional layers, with a Bernoulli output distribution. The padding algorithm used was 'SAME' in TensorFlow. ReLU non-linearities were used throughout. The reconstruction error was taken in the last layer of the DAE (in the pixel space of DAE reconstructions), using an L2 loss, before the non-linearity. The optimiser used was Adam with a learning rate of \(1\mathrm{e}{-4}\) and \(\epsilon = 1\mathrm{e}{-8}\). We pre-trained \(\beta\)-VAE until convergence with a batch size of 100. The disentangled \(\beta\)-VAE had \(\beta = 53\), while the entangled \(\beta\)-VAE used within the \(\mathrm{SCAN_U}\) baseline had \(\beta = 0.1\). SCAN The encoder and decoder of SCAN were simple single-layer MLPs with 100 hidden units for the DeepMind Lab experiments, and two-layer MLPs with 500 hidden units in each hidden layer for the CelebA experiments. We used ReLU non-linearities in both cases. The decoder was parametrised as a Bernoulli distribution over the output space of size 375. We set \(\beta_{y} = 1\) for all experiments, and \(\lambda = 10\). We trained the model using the Adam optimiser with a learning rate of \(1\mathrm{e}{-4}\) and a batch size of 16.
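The SCAN objective itself is given as Eq. 4 of the main text, which is not reproduced in this excerpt; the NumPy sketch below therefore reflects our reading of it — a symbol ELBO with a \(\beta_y\)-weighted prior KL, plus a \(\lambda\)-weighted forward KL that grounds the conceptual posterior \(q(\mathbf{z}_y|\mathbf{y})\) in the visual posterior \(q(\mathbf{z}_x|\mathbf{x})\). Treat the exact arrangement of terms as an assumption.

```python
# Hedged sketch of the SCAN loss with diagonal Gaussian posteriors and the
# Bernoulli symbol decoder described above (beta_y = 1, lambda = 10). The
# combination of terms follows our reading of Eq. 4 and is an assumption.
import numpy as np

def kl_diag_gauss(mu_a, var_a, mu_b, var_b):
    """Closed-form KL( N(mu_a, var_a) || N(mu_b, var_b) ), summed over dims."""
    return 0.5 * np.sum(np.log(var_b / var_a)
                        + (var_a + (mu_a - mu_b) ** 2) / var_b - 1.0)

def scan_loss(y, y_logits, mu_y, var_y, mu_x, var_x, beta_y=1.0, lam=10.0):
    # Bernoulli negative log-likelihood of the 375-dim binary symbol vector y.
    nll = np.sum(y * np.logaddexp(0, -y_logits)
                 + (1 - y) * np.logaddexp(0, y_logits))
    kl_prior = kl_diag_gauss(mu_y, var_y, np.zeros_like(mu_y), np.ones_like(var_y))
    kl_ground = kl_diag_gauss(mu_x, var_x, mu_y, var_y)  # forward KL: vision -> concept
    return nll + beta_y * kl_prior + lam * kl_ground     # quantity to minimise
```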
SCAN recombination operator The recombination operator was implemented as a convolutional operator with kernel size 1 and stride 1. The operator was parametrised as a 2-layer MLP with 30 and 15 hidden units per layer, and ReLU non-linearities. The optimiser used was Adam with a learning rate of \(1\mathrm{e}{-3}\) and a batch size of 16. We trained the recombination operator for \(50k\) steps. JMVAE The JMVAE was trained using the loss described in Suzuki et al. (2017): \[\begin{array}{rl} \mathcal{L}_{JM}(\theta_{x},\theta_{y},\phi_{x},\phi_{y},\phi;\mathbf{x},\mathbf{y},\alpha) = & \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta_{x}}(\mathbf{x}|\mathbf{z})\right] + \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z})\right]\\ & - D_{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\parallel p(\mathbf{z})\big)\\ & - \alpha\big[D_{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\parallel q_{\phi_{x}}(\mathbf{z}|\mathbf{x})\big) + D_{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\parallel q_{\phi_{y}}(\mathbf{z}|\mathbf{y})\big)\big] \end{array} \quad (6)\] where \(\alpha\) is a hyperparameter. We tried \(\alpha\) values \(\{0.01, 0.1, 1.0, 10.0\}\), as in the original paper, and found that the best results were obtained with \(\alpha = 1.0\). All results are reported with this value. The architectural choices for JMVAE were made to match as closely as possible those made for SCAN. Thus the visual encoder \(q_{\phi_{x}}\) consisted of four convolutional layers, each with kernel size 4 and stride 2 in both the height and width dimensions, with \(\{32,32,64,64\}\) filters learned at the respective layers. The convolutional stack was followed by a single fully connected layer with 256 hidden units. The encoder output the parametrisation of a 32-dimensional diagonal Gaussian latent distribution. The symbol encoder \(q_{\phi_{y}}\) consisted of a single-layer MLP with 100 hidden units for the DeepMind Lab experiments, or a two-layer MLP with 500 hidden units per layer for the CelebA experiments, as in SCAN. The joint encoder \(q_{\phi}\) consisted of the same convolutional stack as in the visual encoder to process the visual input, while the symbol input was passed through a two-layer MLP with 32 and 100 hidden units. These two embeddings were then concatenated and passed through a further two-layer MLP of 256 hidden units each, before outputting the 64 parameters of the diagonal Gaussian latents. <--- Page Split ---> The visual decoder \(p_{\theta_{x}}\) was simply the reverse of the visual encoder, using transposed convolutions. Similarly, the symbol decoder \(p_{\theta_{y}}\) was again a single-layer MLP with 100 hidden units. The output distributions of both decoders were parametrised as Bernoulli. The model was trained using the Adam optimiser with a learning rate of \(1\mathrm{e}{-4}\) and a batch size of 16.
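For reference, the KL terms of Eq. 6 have a closed form under the diagonal Gaussian posteriors described above; the sketch below computes that regularisation part with \(\alpha = 1.0\), leaving the two decoder log-likelihood terms out since they depend on the decoder parametrisation. The function and argument names are illustrative, not taken from the paper's code.

```python
# Sketch of the Eq. 6 KL regularisers for JMVAE with diagonal Gaussian
# posteriors: KL(q(z|x,y)||p(z)) + alpha * [KL(q(z|x,y)||q(z|x))
#                                           + KL(q(z|x,y)||q(z|y))].
import numpy as np

def kl_diag(mu_a, var_a, mu_b, var_b):
    return 0.5 * np.sum(np.log(var_b / var_a)
                        + (var_a + (mu_a - mu_b) ** 2) / var_b - 1.0)

def jmvae_kl_terms(mu_j, var_j, mu_x, var_x, mu_y, var_y, alpha=1.0):
    to_prior = kl_diag(mu_j, var_j, np.zeros_like(mu_j), np.ones_like(var_j))
    to_vision = kl_diag(mu_j, var_j, mu_x, var_x)   # joint posterior -> q(z|x)
    to_symbol = kl_diag(mu_j, var_j, mu_y, var_y)   # joint posterior -> q(z|y)
    return to_prior + alpha * (to_vision + to_symbol)
```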
Triple ELBO (TrELBO) The TrELBO model was trained using the loss described in Vedantam et al. (2017): \[\begin{array}{rl} \mathcal{L}_{trelbo}(\theta_{x},\theta_{y},\phi_{x},\phi_{y},\phi;\mathbf{x},\mathbf{y},\lambda_{y}^{xy},\lambda_{y}^{y}) = & \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta_{x}}(\mathbf{x}|\mathbf{z})\right] + \mathbb{E}_{q_{\phi_{x}}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta_{x}}(\mathbf{x}|\mathbf{z})\right]\\ & + \lambda_{y}^{xy}\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z})\right] + \lambda_{y}^{y}\mathbb{E}_{q_{\phi_{y}}(\mathbf{z}|\mathbf{y})}\left[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z})\right]\\ & - D_{KL}\left(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\parallel p(\mathbf{z})\right) - D_{KL}\left(q_{\phi_{x}}(\mathbf{z}|\mathbf{x})\parallel p(\mathbf{z})\right) - D_{KL}\left(q_{\phi_{y}}(\mathbf{z}|\mathbf{y})\parallel p(\mathbf{z})\right) \end{array} \quad (7)\] where \(\lambda_{y}^{xy}\) and \(\lambda_{y}^{y}\) are hyperparameters. We set these to 10 and 100 respectively, following the reported best values from Vedantam et al. (2017). We trained the model using the frozen-likelihood trick shown to improve model performance in Vedantam et al. (2017): the symbol decoder parameters \(\theta_{y}\) were trained only using the \(\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}\left[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z})\right]\) term, and not the \(\mathbb{E}_{q_{\phi_{y}}(\mathbf{z}|\mathbf{y})}\left[\log p_{\theta_{y}}(\mathbf{y}|\mathbf{z})\right]\) term. For fair comparison to the other models, we did not utilise a product of experts for the inference network \(q_{\phi_{y}}(\mathbf{z}|\mathbf{y})\). In all architectural respects, the networks used were identical to those reported above for the JMVAE, and the same training procedures were followed. Accuracy and diversity evaluation The classifier used to evaluate the samples generated by each model was trained to discriminate the four room configuration factors in the DeepMind Lab dataset: wall colour, floor colour, object colour and object identity. We used a network of four 2-strided convolutional layers (with \(\{32,64,128,256\}\) filters in each successive layer, and 3x3 kernels), followed by a fully connected layer with 256 neurons, with ReLU activations used throughout. The output layer consisted of four fully connected softmax heads, one for each predicted factor (with dimensionality 16 for each of the colour factors, and 3 for object identity). The classifier was trained until convergence using the Adam optimiser, with a learning rate of \(1\mathrm{e}{-4}\) and a batch size of 100 (reaching an overall accuracy of 0.992). The accuracy metric for the sym2img samples was computed as the average top-k accuracy across the factors (with \(k = 3\) for the colour factors, and \(k = 1\) for the object identity factor), against the ground-truth factors specified by the concept used to generate each sym2img sample. The top-k of the factors in each image sample was calculated using the top-k softmax outputs of the classifier. Sample diversity of the sym2img data was characterised by estimating the KL divergence of the irrelevant factor distribution inferred for each concept from a flat distribution, \(D_{KL}(u(\mathbf{y}_{i})\parallel p(\mathbf{y}_{i}))\).
Here, \(p(\mathbf{y}_{i})\) is the joint distribution of the irrelevant factors in the sym2img set of images generated from the \(i\)th concept, which we estimated by averaging the classifier predictions across those images. \(u(\mathbf{y}_{i})\) is the desired (flat) joint distribution of the same factors (i.e., where each factor value has equal probability). We also computed the expected KL if \(p(\mathbf{y}_{i})\) were estimated using samples drawn from the flat distribution \(u(\mathbf{y}_{i})\). We report the mean of this KL across all the k-grams. We used 64 sym2img samples per concept. ## A.2 DEEPMIND LAB DATASET DETAILS RGB to HSV conversion The majority of the data generative factors to be learnt in the DeepMind Lab dataset correspond to colours (floor, wall and object). We found that it was hard to learn disentangled representations of these data generative factors with \(\beta\)-VAE. We believe this is because \(\beta\)-VAE requires a degree of smoothness in pixel space when traversing a manifold for a particular data generative factor in order to correctly learn this factor (Higgins et al., 2017a). The intuitively smooth notion of colour, however, is disrupted in RGB space (see Fig. 8). Instead, the intuitive human notion of colour is more closely aligned with hue in HSV space. Hence we added a pre-processing step that converted the DeepMind Lab frames from RGB to HSV space before training \(\beta\)-VAE. This conversion preserved the dimensionality of the frames, since both RGB and HSV require three channels. We found that this conversion enabled \(\beta\)-VAE to achieve good disentangling results. k-hot experiments Our DeepMind Lab (Beattie et al., 2016) dataset contained 73 frames per room, where the configuration of each room was randomly sampled from the outer product of the four data generative factors: object identity and colour, and wall and floor colour (18,883 unique factor combinations). All models were trained using a randomly sampled subset of 133 concepts (with 10 example images per concept), 30 extra concepts were used for training the recombination operators (20 example images per concept), and a further set of 50 concepts <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 8: A: Comparison of hue traversal in HSV space, which closely aligns with the intuitive human understanding of colour, and the equivalent highly non-monotonic changes in RGB space. \(H\) stands for hue, \(S\) for saturation and \(V\) for value/brightness. Adapted from Wikipedia (2017). B: Visualisation of colours used in DeepMind Lab in RGB. C: Visualisation of colours used in DeepMind Lab in HSV. It can be seen that the HSV projection of the DeepMind Lab colours appears significantly more structured than the equivalent RGB projection. </center> were used to evaluate the models' ability to break away from their training distribution using recombination operators. Training the recombination operator The recombination operator was trained by sampling two concepts, \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\), and an operator \(\mathbf{r}\) as input. The training objective was to ground the recombined latent \(\mathbf{z}_{r}\) in the ground truth visual latent inferred from a matching image \(\mathbf{x}_{r}\). The ground truth image was obtained by applying the binary logical operation corresponding to \(\mathbf{r}\) to the binary symbols \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\).
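The sketch below illustrates one way these instruction triples could be sampled, including the operator restrictions spelled out in the next paragraph. We read the "binary logical operation" on the k-hot symbol vectors as element-wise set operations (union for AND, intersection for IN COMMON, removal of the unigram for IGNORE); both that mapping and the factor layout are our assumptions for illustration.

```python
# Hedged sketch of sampling (y1, y2, y_r) training triples for the three
# recombination operators. The factor layout below is illustrative, not the
# paper's exact symbol encoding.
import numpy as np

rng = np.random.default_rng(0)
FACTORS = [slice(0, 16), slice(16, 32), slice(32, 48), slice(48, 51)]

def sample_k_gram(k):
    """k-hot symbol: one active value in each of k distinct factors."""
    y = np.zeros(51, dtype=int)
    for f in rng.choice(len(FACTORS), size=k, replace=False):
        y[rng.integers(FACTORS[f].start, FACTORS[f].stop)] = 1
    return y

def sample_instruction(op):
    if op in ("AND", "IN COMMON"):
        while True:
            y1 = sample_k_gram(rng.integers(1, 4))   # k in {1, 2, 3}
            y2 = sample_k_gram(rng.integers(1, 4))
            if op == "AND":
                # A fuller implementation might also reject pairs that set
                # conflicting values within the same factor.
                return y1, y2, y1 | y2               # union of specified values
            if (y1 & y2).any():                      # intersection must be non-empty
                return y1, y2, y1 & y2
    y1 = sample_k_gram(rng.integers(1, 4))           # IGNORE
    unigram = np.zeros_like(y1)
    unigram[rng.choice(np.flatnonzero(y1))] = 1      # drop one specified factor
    return y1, unigram, y1 & (1 - unigram)
```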
This produces the ground truth recombined symbol \(\mathbf{y}_{r}\), which can then be used to fetch a corresponding ground truth image \(\mathbf{x}_{r}\) from the dataset. To make sure that the logical operators were not presented with nonsensical instructions, we used the following logic for sampling minibatches of \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\) during training (cf. the sketch above). The IN COMMON and AND operators were trained by sampling two k-grams \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\) with \(k \in \{1, 2, 3\}\). The IN COMMON operator had the additional restriction that the intersection cannot be an empty set. The IGNORE operator was trained by sampling a k-gram with \(k \in \{1, 2, 3\}\) and a unigram selected from one of the factors specified by the k-gram. ## A.3 DEEPMIND LAB EXPERIMENTS Unsupervised visual representation learning SCAN relies on the presence of structured visual primitives. Hence, we first investigate whether \(\beta\)-VAE trained in an unsupervised manner on the visually complex DeepMind Lab dataset has discovered a disentangled representation of all its data generative factors. As can be seen in Fig. 9 (left panel), \(\beta\)-VAE has learnt to represent each of the object, wall and floor colours using two latents - one for hue and one for brightness. Learning a disentangled representation of colour is challenging, but we were able to achieve it by projecting the input images from RGB to HSV space, which is better aligned with human intuitions of colour (see Sec. A.2). We noticed that \(\beta\)-VAE confused certain colours (e.g. red floors are reconstructed as magenta, see the top right image in the Reconstructions pane of Fig. 9). We speculate that this is caused by trying to approximate the circular hue space using a linear latent: red and magenta end up on the opposite ends of the linear latent while being neighbours in the circular space. Compare the disentangled representations of Fig. 9 to the entangled equivalents in Fig. 10. Fig. 10 shows that an entangled \(\beta\)-VAE was able to reconstruct the data well; however, due to the entangled nature of its learnt representations, its latent traversal plots and samples are not as good as those of a disentangled \(\beta\)-VAE (Fig. 9). \(\mathrm{SCAN_U}\) analysis As shown in Fig. 10, SCAN with unstructured vision is based on a \(\beta\)-VAE that learnt a good (yet entangled) representation of the DeepMind Lab dataset. Due to the unstructured entangled nature of the visual latent space \(\mathbf{z}_{x}\), the additional forward KL term of the SCAN loss function (Eq. 4) is not able to pick out the relevant visual primitives for each training concept. Instead, all latents end up in the irrelevant set, since the relevant and irrelevant ground truth factors end up being entangled in the latent space \(\mathbf{z}_{x}\). This disrupts the ability of SCAN with entangled vision to learn useful concepts, as demonstrated in Fig. 11. JMVAE analysis In this section we provide some insights into the nature of the representations learnt by JMVAE (Suzuki et al., 2017). Fig. 12 demonstrates that after training JMVAE is capable of reconstructing the data and drawing reasonable visual samples. Furthermore, the latent traversal plots indicate that the model learnt a reasonably disentangled representation of the data generative factors.
Apart from failing to learn a latent to represent the spawn animation and a latent to represent all object identities (while the hat and the ice lolly are represented, the suitcase is missing), the representations learnt by JMVAE match those learnt by \(\beta\)-VAE (compare Figs. 9 and 12). Note, however, that unlike \(\beta\)-VAE, which managed to discover and learn a disentangled <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 9: Reconstructions, samples and latent traversals of \(\beta\)-VAE \((\beta = 53)\) trained to disentangle the data generative factors of variation within the DeepMind Lab dataset. For the latent traversal plots we sampled the posterior, then visualised \(\beta\)-VAE reconstructions while resampling each latent unit one at a time in the \([-3,3]\) range, keeping all other latents fixed to their originally sampled values. This process helps visualise which data generative factor each latent unit has learnt to represent. </center> representation of the data generative factors in a completely unsupervised manner, JMVAE was able to achieve its disentangling performance by exploiting the extra supervision signal coming from the symbolic inputs. JMVAE is unable to learn a hierarchical compositional latent representation of concepts like SCAN does. Instead, it learns a flat representation of visual primitives, like the representation learnt by \(\beta\)-VAE. Such a flat representation is problematic, as evidenced by the accuracy/diversity metrics shown in Tbl. 1. Further evidence comes from examining the sym2img samples produced by JMVAE (see Fig. 13). It can be seen that JMVAE fails to learn the abstract concepts as defined in Sec. 3. While the samples in Fig. 13 mostly include correct wall colours that match their respective input symbols, the samples have limited diversity. Many samples are exact copies of each other - a sign of mode collapse. TrELBO analysis This section examines the nature of the representations learnt by TrELBO (Vedantam et al., 2017). Fig. 14 demonstrates that after training TrELBO is capable of reconstructing the data; however, it produces poor samples. This is due to the highly entangled nature of its learnt representation, as also evidenced by the traversal plots. Since TrELBO is not able to learn a compositional latent representation of concepts like that acquired by SCAN, it also struggles to produce diverse sym2img samples when instructed with symbols from the training set (see Fig. 15). Furthermore, this lack of structure in the learnt concept representations precludes successful recombination operator training. Hence, sym2img samples of test symbols instructed through recombination operators lack accuracy (Fig. 16). Data efficiency analysis We evaluate the effect of the training set size on the performance of SCAN, JMVAE and TrELBO by comparing their accuracy and diversity scores after training on {5, 10, 15, 20, 25, 50, 75} concepts. Fig. 17 shows that SCAN consistently outperforms its baselines in terms of the absolute scores, while also displaying less variance when trained on datasets of various sizes. For this set of experiments we also halved the number of training iterations for all models, which affected the baselines but not SCAN. The diversity of JMVAE and TrELBO is better in this plot compared to the results reported in Tbl. 1 because the sym2img samples used for this plot were blurrier than those described in the main text.
<--- Page Split ---> ![](images/17_0.jpg) <center>Figure 10: Samples, reconstructions and latent traversals of a \(\beta\)-VAE that did not learn a structured disentangled representation \((\beta = 0.1)\). It is evident that the model learnt to reconstruct the data despite learning an entangled latent space. </center> ![](images/17_1.jpg) <center>Figure 11: Visual samples (sym2img) of SCAN grounded in unstructured vision when presented with the symbols "hat" and "ice lolly". It is evident that the model struggled to learn a good understanding of the meaning of these concepts. </center> ## A.4 DSPRITES EXPERIMENTS In this section we describe additional experiments testing SCAN on the dSprites (Matthey et al., 2017) dataset. The dataset consists of binary sprites fully specified by five ground truth factors: position x (32 values), position y (32 values), scale (6 values), rotation (40 values) and sprite identity (3 values). For our experiments we defined a conceptual space spanned by three of the data generative factors - horizontal and vertical positions, and scale. We quantised the values of each chosen factor into halves (top/bottom, left/right, big/small) and assigned one-hot encoded symbols to each of the \(\sum_{k = 1}^{K} \binom{K}{k} N^k = 26\) possible concepts to be learnt (since \(K = 3\) is the number of factors to be learnt and \(N = 2\) is the number of values each factor can take). We compared the performance of SCAN grounded in disentangled visual representations (\(\beta\)-VAE with \(\beta = 12\)) to that of SCAN grounded in entangled visual representations (\(\beta\)-VAE with \(\beta = 0\)). We trained both models on a random subset of image-symbol pairs \((x_i, y_i)\) making up \(< 0.01\%\) of the full dataset. We quantified how well the models understood the meaning of the positional and scale concepts after training by running sym2img inference and counting the number of white pixels within each of the four quadrants of the canvas (for position) or in total in the whole image (for scale). This can be compared to similar values calculated <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 12: Samples, reconstructions and latent traversals of JMVAE. The model learns good disentangled latents, making use of the supervised symbolic information available. </center> ![](images/18_1.jpg) <center>Figure 13: Visualisation of sym2img visual samples produced by JMVAE in response to symbols specifying wall colour names. It is evident that the model suffers from mode collapse, since a significant number of samples are copies of each other. </center> over a batch of ground truth images that match the same input symbols. Samples from SCAN closely matched the statistics of the ground truth samples (see Fig. 18). \(\mathrm{SCAN}_{\mathrm{U}}\), however, failed to produce meaningful samples despite being able to reconstruct the dataset almost perfectly. ## A.5 CELEBA EXPERIMENTS <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 14: Samples, reconstructions and latent traversals of TrELBO. The model learns a very entangled representation. </center> ![](images/19_1.jpg) <center>Figure 15: Visualisation of sym2img visual samples produced by TrELBO in response to train symbols: "magenta object", "ice lolly", "purple floor" and "blue wall". It is evident that the model has good accuracy but very low diversity.
</center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 16: Visualisation of sym2img visual samples produced by TrELBO in response to test symbols instructed using recombination operators: "yellow object", "hat", "orange floor" and "cyan wall". It is evident that the model has very low accuracy but decent diversity. </center> ![](images/20_1.jpg) <center>Figure 17: Accuracy and diversity scores of SCAN, JMVAE and TrELBO after being trained on {5, 10, 15, 20, 25, 50, 75} concepts with 10 visual examples each. The size of the circle corresponds to the training set size. We used symbols from the train set to generate the sym2img samples used to calculate the scores. SCAN outperforms both baselines and shows less susceptibility to the training set size. </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 18: sym2img inference performance of SCAN and SCAN<sub>U</sub> for the symbols "left" and "large top". The first line in each subplot shows ground truth samples from the dSprites dataset that correspond to the respective symbol. The next three lines illustrate the comparative performance of SCAN (left) vs SCAN<sub>U</sub> (right), including their respective sym2img samples, as well as the quantitative comparison of each model (green) to the ground truth (red) in terms of scale understanding (each bar corresponds to the average number of pixels per sample image) and positional understanding (each bar corresponds to the average number of pixels in one of the four quadrants of the samples: T - top, B - bottom, R - right, L - left). The closer the green bars are to the red bars, the better the model's understanding of the learnt concepts. </center> <--- Page Split ---> ![](images/22_0.jpg) <center>Figure 19: Large version of Fig. 6 </center> <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 20: Large version of Fig. 7 </center> <--- Page Split --->
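As a code-level illustration of the dSprites evaluation protocol from Sec. A.4, the sketch below counts white pixels in total (scale understanding) and per half of the canvas (positional understanding, the T/B/L/R bars of Fig. 18), for comparison against the same statistics computed on matching ground-truth images. The array names and the \(64 \times 64\) canvas size are assumptions.

```python
# Sketch of the white-pixel statistics used to evaluate sym2img samples on
# dSprites (Sec. A.4). Assumes binary (n, 64, 64) image batches; illustrative.
import numpy as np

def pixel_stats(images):
    """Mean white-pixel counts: total (scale), and top/bottom/left/right
    halves (position), matching the T/B/L/R bars of Fig. 18."""
    total = images.sum(axis=(1, 2)).mean()
    halves = {
        "T": images[:, :32, :].sum(axis=(1, 2)).mean(),
        "B": images[:, 32:, :].sum(axis=(1, 2)).mean(),
        "L": images[:, :, :32].sum(axis=(1, 2)).mean(),
        "R": images[:, :, 32:].sum(axis=(1, 2)).mean(),
    }
    return total, halves

# Compare model samples (green bars in Fig. 18) to ground truth (red bars).
samples = (np.random.rand(16, 64, 64) > 0.9).astype(float)       # stand-in
ground_truth = (np.random.rand(16, 64, 64) > 0.9).astype(float)  # stand-in
print(pixel_stats(samples), pixel_stats(ground_truth))
```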
accept
Accept (Poster)
6
ICLR_2018_paper_0292
iclr
2,018
# LATENT CONSTRAINTS: LEARNING TO GENERATE CONDITIONALLY FROM UNCONDITIONAL GENERATIVE MODELS Jesse Engel Google Brain San Francisco, CA, USA Matthew D. Hoffman Google Inc. San Francisco, CA, USA Adam Roberts Google Brain San Francisco, CA, USA ## ABSTRACT Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal "realism" constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function. ## 1 INTRODUCTION Generative modeling of complicated data such as images and audio is a long-standing challenge in machine learning. While unconditional sampling is an interesting technical problem, it is arguably of limited practical interest in its own right: if one needs a non-specific image (or sound, song, document, etc.), one can simply pull something at random from the unfathomably vast media databases on the web. But that naive approach may not work for conditional sampling (i.e., generating data to match a set of user-specified attributes), since as more attributes are specified, it becomes exponentially less likely that a satisfactory example can be pulled from a database. One might also want to modify some attributes of an object while preserving its core identity. These are crucial tasks in creative applications, where the typical user desires fine-grained controls (Bernardo et al., 2017). One can enforce user-specified constraints at training time, either by training on a curated subset of data or with conditioning variables. These approaches can be effective if there is enough labeled data available, but they require expensive model retraining for each new set of constraints and may not leverage commonalities between tasks. Deep latent-variable models, such as Generative Adversarial Networks (GANs; Goodfellow et al., 2014) and Variational Autoencoders (VAEs; Kingma & Welling, 2013; Rezende et al., 2014), learn to unconditionally generate realistic and varied outputs by sampling from a semantically structured latent space. One might hope to leverage that structure in creating new conditional controls for sampling and transformations (Brock et al., 2016). Here, we show that new constraints can be enforced post-hoc on pre-trained unsupervised generative models. This approach removes the need to retrain the model for each new set of constraints, allowing users to more easily define custom behavior.
We separate the problem into (1) creating an unsupervised model that learns how to reconstruct data from latent embeddings, and (2) leveraging the latent structure exposed in that embedding space as a source of prior knowledge, upon which we can impose behavioral constraints. Our key contributions are as follows: <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: (a) Diagram of latent constraints for a VAE. We use one critic \(D_{\mathrm{attr}}\) to predict which regions of the latent space will generate outputs with desired attributes, and another critic \(D_{\mathrm{realism}}\) to predict which regions have high mass under the marginal posterior, \(q(z)\), of the training data. (b) We begin by pretraining a standard VAE, with an emphasis on achieving good reconstructions. (c) To train the actor-critic pair we use constraint-satisfaction labels, \(c\), to train \(D\) to discriminate between encodings of actual data, \(z \sim q(z|x)\), versus latent vectors \(z \sim p(z)\) sampled from the prior or transformed prior samples \(G(z \sim p(z), y)\). Similar to a Conditional GAN, both \(G\) and \(D\) operate on a concatenation of \(z\) and a binary attribute vector, \(y\), allowing \(G\) to learn conditional mappings in latent space. If \(G\) is an optimizer, a separate attribute discriminator, \(D_{\mathrm{attr}}\), is trained and the latent vector is optimized to reduce the cost of both \(D_{\mathrm{attr}}\) and \(D_{\mathrm{realism}}\). (d) To sample from the intersection of these regions, we use either gradient-based optimization or an amortized generator, \(G\), to shift latent samples from either the prior (\(z \sim p(z)\), sampling) or from the data (\(z \sim q(z|x)\), transformation). </center>

- We show that it is possible to generate conditionally from an unconditional model, learning a critic function \(D(z)\) in latent space and generating high-value samples with either gradient-based optimization or an amortized actor function \(G(z)\), even with a non-differentiable decoder (e.g., discrete sequences).
- Focusing on VAEs, we address the tradeoff between reconstruction quality and sample quality (without sacrificing diversity) by enforcing a universal "realism" constraint that requires samples in latent space to be indistinguishable from encoded data (rather than prior samples).
- Because we start from a VAE that can reconstruct inputs well, we are able to apply identity-preserving transformations by making the minimal adjustment in latent space needed to satisfy the desired constraints. For example, when we adjust a person's expression or hair, the result is still clearly identifiable as the same person (see Figure 5). This contrasts with pure GAN-based transformation approaches, which often fail to preserve identity.
- Zero-shot conditional generation. Using samples from the VAE to generate exemplars, we can learn an actor-critic pair that satisfies user-specified rule-based constraints in the absence of any labeled data.

## 2 BACKGROUND Decoder-based deep generative models such as VAEs and GANs generate samples that approximate a population distribution \(p^{\star}(x)\) by passing samples from some simple tractable distribution \(p(z)\) (often \(p(z) \triangleq \mathcal{N}(0, I)\)) through a deep neural network. GANs are trained to fool an auxiliary classifier that tries to learn to distinguish between real and synthetic samples.
VAEs are fit to data using a variational approximation to maximum-likelihood estimation: \[\mathcal{L}^{\mathrm{ELBO}} \triangleq \frac{1}{N} \sum_{n} \Big(\mathbb{E}_{z \sim q(z|x_{n})} [\log \pi (x_{n}; g(z))] - \mathrm{KL}(q(z \mid x_{n}) \parallel p(z))\Big) \leq \frac{1}{N} \sum_{n} \log p(x_{n}), \quad (1)\] <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 2: Typical VAEs use a pixel-wise data likelihood, \(\mathcal{N}(\mu_{x}(z),\sigma_{x}I)\), with \(\sigma_{x} = 1\) to produce coherent samples at the expense of visual and conceptual blurriness (Row 3). Some reconstructions (Row 2) actually change attributes of the original data. Decreasing \(\sigma_{x}\) to 0.1 maximizes the ELBO (supplemental Table 4) and increases the fidelity of reconstructions (Row 4) at the cost of sample realism (Row 5). Using an actor to shift prior samples to satisfy the realism constraint, we achieve more realistic samples without sacrificing sharpness (Row 6). The samples are mapped to the closest point in latent space that both satisfies the realism constraint and has the same attributes as the original data. </center> where the "encoder" distribution \(q(z \mid x)\) is an approximation to the posterior \(p(z \mid x)\), \(\pi (x; g(z)) \triangleq p(x \mid z)\) is a tractable likelihood function that depends on some parameters output by a "decoder" function \(g(z)\), and \(q\) and \(g\) are fit to maximize the evidence lower bound (ELBO) \(\mathcal{L}^{\mathrm{ELBO}}\). The likelihood \(\pi (x; g)\) is often chosen to be a product of simple distributions, such as \(\pi (x; g) = \mathcal{N}(x; g, \sigma_{x}^{2} I)\) for continuous data or \(\pi (x; g) = \prod_{d} \mathrm{Bernoulli}(x_{d}; g_{d})\) for binary data. GANs and VAEs have complementary strengths and weaknesses. GANs suffer from the "mode-collapse" problem, where the generator assigns mass to a small subset of the support of the population distribution - that is, it may generate realistic samples, but there are many more realistic samples that it cannot generate. This is particularly problematic if we want to use GANs to manipulate data rather than generate new data; even GAN variants that include some kind of inference machinery (e.g., Donahue et al., 2016; Dumoulin et al., 2016; Perarnau et al., 2016) to determine what \(z\) best matches some \(x\) tend to produce reconstructions that are reminiscent of the input but do not preserve its identity. On the other hand, VAEs (especially those with simple likelihoods \(\pi\)) often exhibit a tradeoff between sharp reconstructions and sensible-looking samples (see Figure 2). That is, depending on what hyperparameters they are trained with (e.g., latent dimensionality and the scale of the likelihood term), VAEs tend to either produce blurry reconstructions and plausible (but blurry) novel samples, or bizarre samples but sharp reconstructions. It has been argued (Makhzani et al., 2016) that this is due to the "holes" problem: the decoder is trained on samples from the marginal posterior \(q(z) \triangleq \frac{1}{N} \sum_{n} q(z \mid x_{n})\), which may have very high KL divergence to the presupposed marginal \(p(z)\) (Hoffman & Johnson, 2016). In particular, if the decoder, \(g(z)\), can reconstruct arbitrary values of \(x\) with high accuracy (as in the case of small \(\sigma_{x}\)), then the typical posterior \(p(z \mid x)\) will be highly concentrated. We show this experimentally in supplemental Figure 16.
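For concreteness, here is a minimal NumPy sketch of the per-example ELBO of Eq. 1 with the Gaussian likelihood \(\pi(x; g) = \mathcal{N}(x; g, \sigma_x^2 I)\) and a diagonal Gaussian encoder; the decoder \(g\) is a placeholder and the names are illustrative.

```python
# Single-sample Monte Carlo estimate of the Eq. 1 ELBO for one datapoint,
# with likelihood N(x; g(z), sigma_x^2 I) and encoder q(z|x) = N(mu, exp(logvar)).
import numpy as np

def elbo(x, mu_z, logvar_z, g, sigma_x=0.1, rng=np.random.default_rng(0)):
    # Reparameterised sample z ~ q(z|x).
    z = mu_z + np.exp(0.5 * logvar_z) * rng.standard_normal(mu_z.shape)
    recon = g(z)
    log_lik = -0.5 * np.sum((x - recon) ** 2 / sigma_x**2
                            + np.log(2 * np.pi * sigma_x**2))
    # Closed-form KL( q(z|x) || N(0, I) ).
    kl = 0.5 * np.sum(np.exp(logvar_z) + mu_z**2 - 1.0 - logvar_z)
    return log_lik - kl
```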
If \(q(z \mid x)\) underestimates the posterior variance (as it usually does), then the marginal posterior \(q(z)\) will also be highly concentrated, and samples from \(p(x) = \int_{z} p(z) p(x \mid z) dz\) may produce results that are far from typical reconstructions \(\mathbb{E}_{p}[x \mid z \sim q(z \mid x)]\). If we tune \(\sigma_{x}\) to maximize the ELBO (Bishop, 2006), we find the optimal \(\sigma_{x} \approx 0.1\) (supplemental Table 4). Figure 2 shows that this choice does indeed lead to good reconstructions but strange-looking samples. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 3: Contour maps of the critic value functions for the marginal posterior ("realism") constraint. We look at the two latent dimensions that have the lowest average posterior standard deviation on the training set, taking low variance in \(z\) space as a proxy for influence over the generated images. All other latent dimensions are held fixed at their original values (from a sample from \(p(z)\) on the left, and from a sample from \(q(z \mid x)\) for a held-out \(x\) on the right). Gray x marks correspond to the points in latent space of the generated images to the right. The cross-section on the left, taken from a prior sample, shows contours that point towards more realistic looking digits. In the cross-section on the right, a sample from the validation set (indicated by orange squares) resides within a local maximum of the critic, as one would hope. </center> Conditional GANs (CGAN; Mirza & Osindero, 2014) and conditional VAEs (CVAE; Sohn et al., 2015) can generate samples conditioned on attribute information when available, but they must be trained with knowledge of the attribute labels for the whole training set, and it is not clear how to adapt them to new attributes without retraining from scratch. Furthermore, CGANs and CVAEs suffer from the same problems of mode collapse and blurriness as their unconditional cousins. We take a different approach to conditional generation and identity-preserving transformation. We begin by training an unconditional VAE with hyperparameters chosen to ensure good reconstruction (at the expense of sample quality). We then train a "realism" critic to predict whether a given \(z\) maps to a high-quality sample. We also train critics to predict whether a given \(z\) maps to a sample that manifests various attributes of interest. To generate samples that are both realistic and exhibit desired attributes, one option is to optimize random \(z\) vectors until they satisfy both the realism and attribute critics. Alternatively, we can amortize this cost by training an "actor" network to map a random set of \(z\) vectors to a subregion of latent space that satisfies the constraints encoded by the critics. By encouraging these transformed \(z\) vectors to remain as close as possible to where they started, we alleviate the mode-collapse problem common to GANs. Our approach is summarized visually in Figure 1. The details follow in sections 3, 4, 5, and 6. ## 3 THE "REALISM" CONSTRAINT: SHARPENING VAE SAMPLES We define the realism constraint implicitly as being satisfied by samples from the marginal posterior \(q(z) \triangleq \frac{1}{N} \sum_{n} q(z \mid x_{n})\) and not by those from \(p(z)\). By enforcing this constraint, we can close the gap between reconstruction quality and sample quality (without sacrificing sample diversity). As shown in Figure 1, we can train a critic \(D\) to differentiate between samples from \(p(z)\) and \(q(z)\).
The critic loss, \(\mathcal{L}_{D}(z)\), is simply the cross-entropy, with labels \(c = 1\) for \(z \sim q(z \mid x)\) and \(c = 0\) for \(z \sim p(z)\). We found that the realism critic had little trouble generalizing to unseen data; that is, it was able to recognize samples from \(q(z \mid x^{\mathrm{held-out}})\) as being "realistic" (Figure 3). Sampling from the prior is sufficient to train \(D\) for models with lower KL divergence, but if the KL divergence between \(q\) and \(p\) is large, the chance of sampling a point from \(p(z)\) that has high probability under \(q(z)\) becomes vanishingly small. This leads to poor sample quality and makes it difficult for \(D\) to learn a tight approximation of \(q(z)\) solely by sampling from \(p(z)\). Instead, we use an inner loop of gradient-based optimization, \(G_{\mathrm{opt}}(z) = \mathrm{GradientDescent}(z; \mathcal{L}_{D}(z))\), to move prior samples to points deemed more like \(q(z)\) by \(D\). For clarity, we introduce the shorthand \(\mathcal{L}_{c = 1}(z) \triangleq - \log (D(z))\) and \(\mathcal{L}_{c = 0}(z) \triangleq - \log (1 - D(z))\). This gives us our critic loss for the realism constraint: \[\mathcal{L}_{D}(z) = \mathbb{E}_{z\sim q(z\mid x)}[\mathcal{L}_{c = 1}(z)] + \mathbb{E}_{z\sim p(z)}[\mathcal{L}_{c = 0}(z)] + \mathbb{E}_{z\sim G(p(z))}[\mathcal{L}_{c = 0}(z)] \quad (2)\] <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 4: Conditional generation with a CGAN actor-critic pair acting in the latent space of a VAE with \(\sigma_{x} = 0.1\). Each row starts from a different prior sample and maps it to a new point in latent space that satisfies both the attribute constraints and the realism constraint. The attribute constraints are changed one at a time to produce as smooth a transition as possible from left to right. The bottom CGAN is regularized during training to prefer small shifts in latent space (\(\lambda_{\mathrm{dist}} = 0.1\)), while the top is not (\(\lambda_{\mathrm{dist}} = 0.0\)). Compared to the images generated by the unregularized model, the images generated by the regularized model are much less diverse across columns, suggesting that the regularization does indeed enforce some degree of identity preservation. The regularized model produces images that are somewhat more diverse across rows, suggesting that the regularization fights mode collapse (arguably at the expense of image quality). For each column, the complete list of attributes is given in supplemental Table 3. </center> Since this inner loop of optimization can slow down training, we amortize the generation by using a neural network as a function approximator. There are many examples of such amortization tricks, including the encoder of a VAE, the generator of a GAN, and fast neural style transfer (Ulyanov et al., 2016; Li & Wand, 2016; Johnson et al., 2016). As with a traditional GAN, the parameters of the function \(G\) are updated to maximize the value \(D\) ascribes to the shifted latent points. One of the challenges of using a GAN in this situation is that it is prone to mode collapse. However, an advantage of applying the GAN in latent space is that we can regularize \(G\) to try and find the closest point in latent space that satisfies \(D\), thus encouraging diverse solutions. We introduce a regularization term, \(\mathcal{L}_{\mathrm{dist}}(z^{\prime},z) = \frac{1}{\bar{\sigma}_{z}^{2}}\log (1 + (z^{\prime} - z)^{2})\), to encourage nearby solutions while allowing more exploration than a mean squared error term.
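A minimal sketch of the realism-critic loss of Eq. 2 and the distance regularizer just defined; \(D\) and \(G\) are placeholder callables returning probabilities and shifted latents, so treat the names as illustrative.

```python
# Eq. 2 critic loss with the shorthand L_{c=1}(z) = -log D(z) and
# L_{c=0}(z) = -log(1 - D(z)), plus the latent distance penalty L_dist.
import numpy as np

def critic_loss(D, G, z_data, z_prior, eps=1e-8):
    l_real = -np.log(D(z_data) + eps).mean()               # z ~ q(z|x),  c = 1
    l_prior = -np.log(1.0 - D(z_prior) + eps).mean()       # z ~ p(z),    c = 0
    l_shifted = -np.log(1.0 - D(G(z_prior)) + eps).mean()  # z ~ G(p(z)), c = 0
    return l_real + l_prior + l_shifted

def dist_penalty(z_shifted, z, sigma_bar):
    """L_dist(z', z): log(1 + (z' - z)^2) per dimension, scaled by 1/sigma^2."""
    return np.sum(np.log1p((z_shifted - z) ** 2) / sigma_bar**2)
```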
As a VAE utilizes only a fraction of its latent dimensions, we scale the distance penalty of each dimension by its utilization, as indicated by the squared reciprocal of the scale \(\sigma_{z}(x)\) of the encoder distribution \(q(z \mid x)\), averaged over the training dataset, \(\bar{\sigma}_{z} \triangleq \frac{1}{N} \sum_{n} \sigma_{z}(x_{n})\). The regularized loss is \[\mathcal{L}_{G}(z) = \mathbb{E}_{z\sim p(z)}[\mathcal{L}_{c = 1}(G(z)) + \lambda_{\mathrm{dist}}\mathcal{L}_{\mathrm{dist}}(G(z),z)]. \quad (3)\] ## 4 ATTRIBUTE CONSTRAINTS: CONDITIONAL GENERATION We want to generate samples that are realistic, but we also want to control what attributes they exhibit. Given binary attribute labels \(y\) for a dataset, we can accomplish this by using a CGAN in <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 5: Identity-preserving transformations with optimization. Two separate critics are trained, one for attributes and one for the realism constraint. Starting at the latent points corresponding to the data reconstructions, we then perform gradient ascent in latent space on a weighted combination of critic values (1.0 attribute, 0.1 marginal posterior), stopping when a threshold value is passed for both critics. Images remain semantically close to the original because the pixel-wise likelihood of VAE training encourages identity-preserving reconstructions, and the dynamics of gradient ascent are naturally limited to finding solutions close in latent space. Panels are black for attributes of the original image, as the procedure just returns the original point in latent space. </center> the latent space, which amounts to replacing \(D(z)\) and \(G(z)\) with conditional versions \(D(z,y)\) and \(G(z,y)\) and concatenating \(y\) to \(z\) as input. If both the actor and critic see attribute information, \(G\) must find points in latent space that could be samples from \(q(z)\) with attributes \(y\). This procedure is computationally inexpensive relative to training a generative model from scratch. In most of our experiments, we use a relatively large CGAN actor-critic pair (4 fully connected ReLU layers of 2048 units each), which during training uses about \(96 \times\) fewer FLOPs/iteration than the unconditional VAE. We also trained a much smaller CGAN actor-critic pair (3 fully connected ReLU layers of 256 units), which uses about \(2884 \times\) fewer FLOPs/iteration than the VAE and achieves only slightly worse results than the larger CGAN (supplemental Figure 14 and Table 1). Figure 4 demonstrates the quality of conditional samples from a CGAN actor-critic pair and the effect of the distance penalty, which constrains generation to be closer to the prior sample, maintaining similarity between samples with different attributes. The regularized CGAN actor has less freedom to ignore modes by pushing many random \(z\) vectors to the same area of the latent space, since it is penalized for moving samples from \(p(z)\) too far. The increased diversity across rows of the regularized CGAN is evidence that this regularization does fight mode collapse (additional qualitative evidence is in supplemental Figures 7 and 8). However, without a distance penalty, samples appear a bit more realistic, with more prominent attributes. This is supported by Table 1, where we use a separately trained attribute classification model to quantitatively evaluate samples.
The actor with no penalty generates samples that are more accurately classified than the actor with a penalty, but also shifts the samples much farther in latent space. <--- Page Split ---> <table><tr><td>CelebA</td><td>Accuracy</td><td>Precision</td><td>Recall</td><td>F1 Score</td><td>\(z_{\mathrm{MSE}}\)</td></tr><tr><td>(This Work) 10 Attributes</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Test Data</td><td>0.936</td><td>0.901</td><td>0.893</td><td>0.895</td><td></td></tr><tr><td>\(G_{CGAN}(\lambda_{\mathrm{dist}}=0)\)</td><td>0.942</td><td>0.914</td><td>0.904</td><td>0.906</td><td>80.7</td></tr><tr><td>\(G_{CGAN}(\lambda_{\mathrm{dist}}=0)\) (Small Model)</td><td>0.926</td><td>0.898</td><td>0.860</td><td>0.870</td><td>58.9</td></tr><tr><td>\(G_{CGAN}(\lambda_{\mathrm{dist}}=0.1)\)</td><td>0.928</td><td>0.903</td><td>0.863</td><td>0.874</td><td>17.0</td></tr><tr><td>(Perarnau et al., 2016) 18 Attributes</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Test Data</td><td>0.928</td><td></td><td></td><td>0.715</td><td></td></tr><tr><td>ICGAN</td><td>0.860</td><td></td><td></td><td>0.524</td><td></td></tr></table> Table 1: Accuracy of a separate model trained to classify attributes from images, evaluated on test data and generated images. We condition and evaluate the generated images on the same labels as the test data. For comparison, the results of a similar task using invertible CGANs for generation (Perarnau et al., 2016) are provided. However, since the full list of salient attributes was not given in that paper, we emphasize that the results are not directly comparable, as the two experiments use slightly different sets of attribute labels. We also measure the distance in latent space that prior samples are shifted, weighted by \(1 / \bar{\sigma}_{z}^{2}\). Actors trained with a latent distance penalty \(\lambda_{\mathrm{dist}}\) have slightly worse accuracy but find latent points much closer to the prior samples and produce a greater diversity of images (see supplemental Figures 7 and 8). Interestingly, an actor trained without a distance penalty achieves higher classification accuracy than the test set itself, possibly by generating images with more exaggerated and distinctive features than real data. A "small model" CGAN with 85x fewer parameters (3 fully connected layers of 256 units) generates images (supplemental Figure 14) of comparable quality. Due to its smaller capacity, the model finds more local solutions (smaller \(z_{\mathrm{MSE}}\)) that have slightly lower attribute accuracy but are more visually similar to the prior sample, without an explicit regularization term. Although we used a VAE as the base generative model, our approach could also be used to generate high-quality conditional samples from pretrained classical autoencoders. We show in supplemental Figure 15 that we obtain reasonably good conditional samples (albeit with high-frequency spatial artifacts) as \(\sigma_{x} \to 0\) (equivalent to a classical autoencoder). Learning the decoder using VAE training encourages \(q(z)\) to fill up as much of the latent space as possible (without sacrificing reconstruction quality), which in turn encourages the decoder to map more of the latent space to reasonable-looking images. The prior \(p(z) = \mathcal{N}(0, I)\) also imposes a natural scale on the latent variables. ## 5 IDENTITY-PRESERVING TRANSFORMATIONS If we have a VAE that can produce good reconstructions of held-out data, we can transform the attributes of the output by gradient-based optimization.
We simply need to train a critic, \(D_{\mathrm{attr}}(z)\), to predict the attribute labels \(p(y \mid z)\) of the data embeddings \(z \sim q(z \mid x)\), trained with a cross-entropy loss. Then, starting from a data point \(z \sim q(z \mid x)\), we can perform gradient descent on the realism constraint and attribute constraint jointly, \(\mathcal{L}_{D_{\mathrm{real}}}(z) + \lambda_{\mathrm{attr}}\mathcal{L}_{D_{\mathrm{attr}}}(z)\). Note that it is helpful to maintain the realism constraint to keep the image from distorting unrealistically. Using the same procedure, we can also conditionally generate new samples (supplemental Figure 9) by starting from \(z \sim p(z)\). Figure 5 demonstrates transformations applied to samples from the held-out evaluation dataset. Note that since the reconstructions are close to the original images, the transformed images also maintain much of their structure. This contrasts with supplemental Figure 10, where a distance-penalty-free CGAN actor produces transformations that share attributes with the original but shift identity. We could preserve identity by introducing a distance penalty, but find that it is much easier to find the correct weighting of realism cost, attribute cost, and distance penalty through optimization, as each combination does not require retraining the network. ## 6 RULE-BASED CONSTRAINTS: ZERO-SHOT CONDITIONAL GENERATION So far, we have assumed access to labeled data to train attribute classifiers. We can remove the need to provide labeled examples by leveraging the structure learned by our pre-trained model, using it to generate exemplars that are scored by a user-supplied reward function. If we constrain the reward <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 6: Transformations from a prior sample for the Melody VAE model. In each 16-bar pianoroll, time is in the horizontal direction and pitch in the vertical direction. In the prior sample, notes falling outside of the C Major scale are shown in red. After transformation by \(G_{\mathcal{P} = C_{\mathrm{Maj}}, d = 0}\), all sampled notes fall within the scale, without a significant change in note density. After transformation of the original \(z\) by \(G_{\mathcal{P} = C_{\mathrm{Maj}}, d = 192}\), all sampled notes lie within the scale and the density increases beyond 192. Synthesized audio of these samples can be heard at https://goo.gl/ouULT9. </center> function to be bounded, \(c(x):\mathbb{R}^{N}\to [0,1]\), the problem becomes very similar to previous GAN settings, but now the actor, \(G\), and critic, \(D\), are working together. \(D\) aims to best approximate the true value of each latent state, \(\mathbb{E}_{x\sim p(x|z)}[c(x)]\), and \(G\) aims to shift samples from the prior to high-value states. The critic loss is the cross-entropy from \(c(x)\), and the actor loss is the same as \(\mathcal{L}_{G}\) in equation 3, where we again have a distance penalty to promote diversity of outputs. Note that the reward function and VAE decoder need not be differentiable, as the critic learns a value function to approximate the reward, which the actor uses for training. To highlight this, we demonstrate that the output of a recurrent VAE model can be constrained to satisfy hard-coded rule-based constraints. We first train an LSTM VAE (details in the Appendix) on melodic fragments. Each melody, \(m\), is represented as a sequence of categorical variables.
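Since the decoder and reward need not be differentiable, the critic update described above reduces to fitting \(D\) against soft targets obtained by decoding and scoring latents. A minimal sketch follows, with `decode`, `reward` and `D` as placeholder callables (illustrative names, not the paper's code).

```python
# Sketch of the zero-shot critic update: score decoded latents with the
# bounded reward c(x) in [0, 1] and fit D by cross-entropy to those targets.
import numpy as np

def critic_targets(z_batch, decode, reward):
    """Soft labels via a (possibly non-differentiable) decode-and-score loop."""
    return np.array([reward(decode(z)) for z in z_batch])

def critic_cross_entropy(d_out, c, eps=1e-8):
    """Cross-entropy of critic outputs d_out against soft targets c."""
    return -np.mean(c * np.log(d_out + eps)
                    + (1 - c) * np.log(1.0 - d_out + eps))
```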
![](images/7_0.jpg)
<center>Figure 6: Transformations from a prior sample for the Melody VAE model. In each 16-bar pianoroll, time is in the horizontal direction and pitch in the vertical direction. In the prior sample, notes falling outside of the C Major scale are shown in red. After transformation by \(G_{\mathcal{P}=\mathrm{C}_{\mathrm{Maj}}, d=0}\), all sampled notes fall within the scale, without a significant change to note density. After transformation of the original \(z\) by \(G_{\mathcal{P}=\mathrm{C}_{\mathrm{Maj}}, d=192}\), all sampled notes lie within the scale and the number of notes increases beyond 192. Synthesized audio of these samples can be heard at https://goo.gl/ouULT9.</center>

Figure 6 gives an example of controlling the pitch class and note density of generated outputs, which is quantitatively supported by the results in Table 2.

<table><tr><td>Actor</td><td>\(c_{\mathrm{pitch}}(m,\mathcal{P}=\mathrm{C}_{\mathrm{Maj}})\)</td><td>\(c_{\mathrm{density}}(m,d=192)\)</td><td>\(z_{\mathrm{MSE}}\)</td></tr><tr><td>Prior</td><td>0.579 (0.43%)</td><td>0.417 (0.04%)</td><td>-</td></tr><tr><td>\(G_{\mathcal{P}=\mathrm{C}_{\mathrm{Maj}},d=0}\)</td><td>0.991 (70.8%)</td><td>0.459 (0.01%)</td><td>0.015</td></tr><tr><td>\(G_{\mathcal{P}=\mathrm{C}_{\mathrm{Maj}},d=192}\)</td><td>0.982 (62.4%)</td><td>0.985 (84.9%)</td><td>0.039</td></tr></table>

Table 2: Average rewards and constraint satisfaction rates (in parentheses) for unconditional (Prior) and conditional generation. Samples from the prior receive low rewards, on average, and near-zero satisfaction rates for both the pitch class (C Major) and note density (≥ 192 notes) constraints. After applying an actor optimized only for the C Major scale (\(G_{\mathcal{P}=\mathrm{C}_{\mathrm{Maj}},d=0}\)), the pitch class constraint is fully satisfied 70.8% of the time with only a minor effect on density. The average reward close to 1 also indicates that when the constraint is not satisfied, it is typically off by only a few notes. Applying an actor optimized for both the C Major scale and high density (\(G_{\mathcal{P}=\mathrm{C}_{\mathrm{Maj}},d=192}\)) causes both constraints to be satisfied at high rates, with a slightly larger shift in latent space.

During training, the actor goes through several phases of exploration and exploitation, oscillating between expanding to find new modes with high reward and then contracting to find the nearest locations of those modes, eventually settling into high-value states that require only small movements in the latent space (supplemental Figure 11).

## 7 RELATED WORK

Conditional GANs (Mirza & Osindero, 2014) and VAEs (Sohn et al., 2015) introduce conditioning variables at training time. Sohn et al. (2015) allow these variables to affect the distribution in latent \(z\) space, but still require that \(p(z \mid y)\) be a tractable distribution. Perarnau et al. (2016) use CGANs to adjust images, but because CGANs cannot usually reconstruct arbitrary inputs accurately, they must resort to image-space processing techniques to transfer effects to the original input. White (2016) proposes adding "attribute vectors" to samples from \(p(z)\) as a simple and effective heuristic to perform transformations, which relies heavily on the linearity of the latent space.

Some recent work has focused on applying more expressive prior constraints to VAEs (Rezende et al., 2014; Sønderby et al., 2016; Chen et al., 2017; Tomczak & Welling, 2017). The prior that maximizes the ELBO is \(p^{*}(z)=q(z)\) (Hoffman & Johnson, 2016); one can interpret our realism constraint as trying to find an implicit distribution that is indistinguishable from \(q(z)\).
Like the adversarial autoencoder of Makhzani et al. (2016), our realism constraint relies on a discriminative model, but instead of trying to force \(q(z)\) to equal some simple \(p(z)\), we only weakly constrain \(q(z)\) and then use a classifier to "clean up" our results. Like this work, the recently proposed adversarially regularized autoencoder (Junbo et al., 2017) uses adversarial training to generate latent codes in a latent space discovered by an autoencoder; that work focuses on unconditional generation. Gómez-Bombarelli et al. (2016) train classifiers in the latent space of a VAE to predict which latent variables map to molecules with various properties, and then use iterative gradient-based optimization in the latent space to find molecules that have a desired set of properties. On molecule data, their procedure generates invalid molecules rarely enough that they can simply reject these samples, which are detected using off-the-shelf software. By contrast, the probability of generating realistic images under our pretrained VAE is astronomically small, and no simple criterion for detecting valid images exists. Jaques et al. (2017) also use a classifier to constrain generation; they use a Deep Q-network as an auxiliary loss for training an LSTM. Closest to Section 6, Nguyen et al. (2016a;b) generate very high quality conditional images by optimizing a sample from the latent space of a generative network to create an image that maximizes the class activations of a pretrained ImageNet classifier. Our work differs in that we learn an amortized generator/discriminator directly in the latent space, and we achieve diversity by regularizing with the natural scale of the latent space rather than through a modified Langevin sampling algorithm.

## 8 DISCUSSION AND FUTURE WORK

We have demonstrated a new approach to conditional generation by constraining the latent space of an unconditional generative model. This approach could be extended in a number of ways. One possibility would be to plug in different architectures, including powerful autoregressive decoders or adversarial decoder costs, as we make no assumptions specific to independent likelihoods. While we have considered constraints based on implicit density estimation, we could also estimate the constrained distribution directly with an explicit autoregressive model or another variational autoencoder. The efficacy of autoregressive priors in VAEs is promising for this approach (Kingma et al., 2016). Conditional samples could then be obtained by ancestral sampling, and transformations by using gradient ascent to increase the likelihood under the model. Active or semi-supervised learning approaches could reduce the sample complexity of learning constraints. Real-time constraint learning would also enable new applications; it might be fruitful to extend the reward approximation of Section 6 to incorporate user preferences as in Christiano et al. (2017).

## ACKNOWLEDGMENTS

Many thanks to Jascha Sohl-Dickstein, Colin Raffel, and Doug Eck for their helpful brainstorming and encouragement.

## REFERENCES

Bernardo, Zbyszynski, Fiebrink, and Grierson. Interactive machine learning for end-user innovation. In Proceedings of the AAAI Symposium Series: Designing the User Experience of Machine Learning Systems, 2017. URL http://research.gold.ac.uk/19767/1/BernardoZbyszynskiFiebrinkGrierson_UxML_2017.pdf.

Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
Andrew Brock, Theodore Lim, J. M. Ritchie, and Nick Weston. Neural photo editing with introspective adversarial networks. arXiv preprint, 2016. URL https://arxiv.org/abs/1609.07093.

Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. URL http://arxiv.org/abs/1611.02731.

Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. arXiv preprint, 2017. URL https://arxiv.org/abs/1706.03741.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. In Proceedings of the International Conference on Learning Representations (ICLR), 2016. URL https://arxiv.org/abs/1606.00704.

R. Gómez-Bombarelli, J. N. Wei, D. Duvenaud, J. M. Hernández-Lobato, B. Sánchez-Lengeling, D. Sheberla, J. Aguilera-Iparraguirre, T. D. Hirzel, R. P. Adams, and A. Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. arXiv e-prints, October 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. arXiv preprint, 2017. URL http://arxiv.org/abs/1704.00028.

Matthew D. Hoffman and Matthew J. Johnson. ELBO surgery: yet another way to carve up the variational evidence lower bound. In Workshop in Advances in Approximate Bayesian Inference, NIPS, 2016. URL http://approximateinference.org/accepted/HoffmanJohnson2016.pdf.

Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, and Douglas Eck. Sequence tutor: Conservative fine-tuning of sequence generation models with KL-control. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. URL https://arxiv.org/abs/1611.02796.

Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694-711. Springer, 2016.

Junbo Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun. Adversarially regularized autoencoders for generating discrete structures. arXiv preprint, 2017. URL http://arxiv.org/abs/1706.04223.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), 2015. URL http://arxiv.org/abs/1412.6980.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2013. URL http://arxiv.org/abs/1312.6114.

Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems (NIPS), 2016. URL http://arxiv.org/abs/1606.04934.

Yann LeCun and Corinna Cortes.
MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/.

Chuan Li and Michael Wand. Precomputed real-time texture synthesis with Markovian generative adversarial networks. In European Conference on Computer Vision, pp. 702-716. Springer, 2016.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), 2015. URL https://arxiv.org/abs/1411.7766.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. In Proceedings of the International Conference on Learning Representations (ICLR), 2016. URL http://arxiv.org/abs/1511.05644.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint, 2014. URL http://arxiv.org/abs/1411.1784.

Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Advances in Neural Information Processing Systems (NIPS), 2016a. URL https://arxiv.org/abs/1605.09304.

Anh Nguyen, Jason Yosinski, Yoshua Bengio, Alexey Dosovitskiy, and Jeff Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016b.

Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and Jose M. Alvarez. Invertible conditional GANs for image editing. In Workshop on Adversarial Training, NIPS, 2016. URL http://arxiv.org/abs/1611.06355; http://www.cvc.uab.es/LAMP/wp-content/uploads/Projects/pdfs/presentationNIPS.pdf.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015. URL http://arxiv.org/abs/1511.06434.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29, 2016. URL http://papers.nips.cc/paper/6125-improved-techniques-for-training-gans.pdf.

Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems (NIPS), 2015. URL http://papers.nips.cc/paper/5775-learning-structured-output-representation-using-deep-conditional-generative-models.pdf.

Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In Advances in Neural Information Processing Systems, pp. 3738-3746, 2016.

Jakub M. Tomczak and Max Welling. VAE with a VampPrior. CoRR, abs/1705.07120, 2017. URL http://arxiv.org/abs/1705.07120.

Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor S. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016. URL http://arxiv.org/abs/1603.03417.

Tom White. Sampling generative networks: Notes on a few effective techniques. arXiv preprint, 2016. URL https://arxiv.org/abs/1609.04468.
## 9 APPENDIX

### 9.1 EXPERIMENTAL DETAILS

For images, we use the MNIST digits dataset (LeCun & Cortes, 2010) and the Large-scale CelebFaces Attributes (CelebA) dataset (Liu et al., 2015). MNIST images are \(28\times 28\)-pixel greyscale images, scaled to [0, 1]. For attributes, we use the digit class label of each image. CelebA images are center-cropped to \(128\times 128\) pixels, downsampled to \(64\times 64\) RGB pixels, and scaled to [0, 1]. We find that many of the attribute labels are not strongly correlated with changes in the images, so we narrow the original 40 attributes to the 10 most visually salient: blond hair, black hair, brown hair, bald, eyeglasses, facial hair, hat, smiling, gender, and age.

For melodies, we scraped the web to collect over 1.5 million publicly available MIDI files. We then extracted 16-bar melodies by sliding a window with a single-bar stride over each non-percussion instrument with a \(4/4\) time signature, keeping only the note with the highest pitch when multiple notes overlap. This produced over 3 million unique melodies. We represent each melody as a sequence of 256 categorical variables (16 per bar), taking one of 130 discrete states at each sixteenth note: 128 note-on pitches, a hold state, and a rest state.

### 9.2 MODEL ARCHITECTURES

All encoders, decoders, and classifiers are trained with the Adam optimizer (Kingma & Ba, 2015), with learning rate \(= 3\mathrm{e}{-4}\), \(\beta_{1} = 0.9\), and \(\beta_{2} = 0.999\). To train \(D_{\mathrm{real}}(z)\), \(D_{\mathrm{attr}}(z)\), and \(G(z)\) we follow the training procedure of Gulrajani et al. (2017), applying a gradient penalty of 10, training \(D\) and \(G\) in a 10:1 step ratio, and using the Adam optimizer with learning rate \(= 3\mathrm{e}{-4}\), \(\beta_{1} = 0.0\), and \(\beta_{2} = 0.9\). While not necessary for convergence, we find that this improves the stability of optimization. We do not apply any of the other tricks of GAN training such as batch normalization, minibatch discrimination, or one-sided label smoothing (Radford et al., 2015; Salimans et al., 2016). As samples from \(p(z)\) are easier to discriminate than samples from \(G(p(z))\), we train \(D\) by sampling from \(p(z)\) at a rate 10 times less than from \(G(p(z))\). For actors with inner-loop optimization, \(G_{\mathrm{opt}}\), 100 iterations of Adam are used with learning rate \(= 1\mathrm{e}{-1}\), \(\beta_{1} = 0.9\), and \(\beta_{2} = 0.999\).

#### 9.2.1 MNIST FEED-FORWARD VAE

To model the MNIST data, we use a deep feed-forward neural network (Figure 13a). The encoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 2048 outputs. Half of the outputs are used as the \(\mu\) and the softplus of the other half is used as the \(\sigma\) to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for \(z\). The decoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce \(28\times 28\) outputs. These outputs are then passed through a sigmoid to generate the output image.
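A sketch of this architecture in PyTorch follows (the module structure and names are our own; the VAE training loss is omitted):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MnistVAE(nn.Module):
    """Feed-forward VAE of Section 9.2.1 (sketch)."""

    def __init__(self, z_dim=1024, hidden=1024):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),   # -> [mu, pre-softplus sigma]
        )
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, x):
        mu, s = self.encoder(x.flatten(1)).chunk(2, dim=1)
        sigma = F.softplus(s)
        z = mu + sigma * torch.randn_like(sigma)  # reparameterization trick
        return self.decoder(z), mu, sigma
```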
#### 9.2.2 CELEBA CONVOLUTIONAL VAE

To model the CelebA data, we use a deep convolutional neural network (Figure 13b). The encoder is a series of 4 2D convolutional layers, each followed by a ReLU. The convolution kernels are of size \(3\times 3\), \(3\times 3\), \(5\times 5\), and \(5\times 5\), with 2048, 1024, 512, and 256 output channels, respectively. All convolutional layers have a stride of 2. After the final ReLU, a linear layer is used to produce 2048 outputs. Half of the outputs are used as the \(\mu\) and the softplus of the other half is used as the \(\sigma\) to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for \(z\). The decoder passes \(z\) through a linear layer with \(4\times 4\times 2048\) outputs, and then a series of 4 2D transposed convolutional layers, all but the last of which are followed by a ReLU. The deconvolution kernels are of size \(5\times 5\), \(5\times 5\), \(3\times 3\), and \(3\times 3\), with 1024, 512, 256, and 3 output channels, respectively. All deconvolution layers have a stride of 2. The output from the final deconvolution is passed through a sigmoid to generate the output image.

The classifiers trained to predict labels from images are identical to the VAE encoders, except that they end with a sigmoid cross-entropy loss.

#### 9.2.3 MELODY SEQUENCE VAE

Music is fundamentally sequential, so we use an LSTM-based sequence VAE for modelling monophonic melodies (Figure 13c). The encoder is made up of a single-layer bidirectional LSTM, with 2048 units per cell. The final output in each direction is concatenated and passed through a linear layer to produce 1024 outputs. Half of the outputs are used as the \(\mu\) and the softplus of the other half is used as a \(\sigma\) to parameterize a 512-dimension multivariate Gaussian distribution with a diagonal covariance matrix for \(z\).

Since musical sequences often have structure at the bar level, we use a hierarchical decoder to model long melodies. First, \(z\) goes through a linear layer to initialize the state of a 2-layer LSTM with 1024 units per layer, which outputs 16 embeddings of size 512 each, one per bar. Each of these embeddings is passed through a linear layer to produce 16 initial states for another 2-layer LSTM with 1024 units per layer. This bar-level LSTM autoregressively produces individual sixteenth-note events, passing its output through a linear layer and softmax to create a distribution over the 130 classes. This categorical distribution is used to compute a cross-entropy loss during training, or to draw samples at inference time. In addition to generating the initial state at the start of each bar, the embedding for the current bar is concatenated with the previous output as the input at each time step.

#### 9.2.4 ACTOR FEED-FORWARD NETWORK

For \(G(z)\), we use a deep feed-forward neural network (Figure 12a) in all of our experiments. The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce \(2 \cdot \dim(z)\) outputs. Half of the outputs are used as \(\delta z\) and the sigmoid of the other half is used as gates, \(g\). The transformed \(z'\) is then computed as \((1 - g) \cdot z + g \cdot \delta z\). This aids training, as the network only has to predict shifts in \(z\). When conditioning on attribute labels, \(y\), to compute \(G(z,y)\), the labels are passed through a linear layer producing 2048 outputs, which are concatenated with \(z\) as the model input.
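The corresponding sketch (our own naming; dimensions follow the text, with \(\dim(z)=1024\) for CelebA and 10 attribute labels):

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Gated feed-forward actor G(z, y) of Section 9.2.4 (sketch)."""

    def __init__(self, z_dim=1024, y_dim=10, hidden=2048, n_layers=4):
        super().__init__()
        self.embed_y = nn.Linear(y_dim, hidden)   # attribute conditioning
        layers, in_dim = [], z_dim + hidden
        for _ in range(n_layers):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(hidden, 2 * z_dim))  # -> [dz, pre-sigmoid gates]
        self.net = nn.Sequential(*layers)

    def forward(self, z, y):
        h = self.net(torch.cat([z, self.embed_y(y)], dim=1))
        dz, gates = h.chunk(2, dim=1)
        g = torch.sigmoid(gates)
        # Gated residual update: the network only has to predict shifts in z.
        return (1 - g) * z + g * dz
```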
#### 9.2.5 CRITIC FEED-FORWARD NETWORK

For \(D(z)\), we use a deep feed-forward neural network (Figure 12b) in all of our experiments. The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce a single output. This output is passed through a sigmoid to compute \(D(z)\). When conditioning on attribute labels, \(y\), to compute \(D(z,y)\), the labels are passed through a linear layer producing 2048 outputs, which are concatenated with \(z\) as the model input.
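To tie Sections 9.2.4 and 9.2.5 together, the sketch below illustrates the critic and actor losses of equations 2 and 3 under the training procedure of Section 9.2. It is a simplified illustration: conditioning on \(y\) is omitted, `sigma_z_bar` is the per-dimension average encoder scale \(\bar{\sigma}_z\), and, following Gulrajani et al. (2017), the gradient penalty is here evaluated on interpolates between "real" and "fake" latent vectors (the text above states only that a penalty of 10 is applied).

```python
import torch
import torch.nn.functional as F

GP_WEIGHT = 10.0         # gradient penalty weight (Section 9.2)
D_STEPS_PER_G_STEP = 10  # critic:actor update ratio (Section 9.2)

def gradient_penalty(critic, z_real, z_fake):
    """Gradient penalty of Gulrajani et al. (2017), on latent interpolates."""
    eps = torch.rand(z_real.size(0), 1)
    z_hat = (eps * z_real + (1 - eps) * z_fake).detach().requires_grad_(True)
    grad, = torch.autograd.grad(critic(z_hat).sum(), z_hat, create_graph=True)
    return ((grad.norm(2, dim=1) - 1.0) ** 2).mean()

def critic_loss(critic, z_data, z_fake):
    """Equation 2: c=1 for encodings of data, c=0 for (shifted) prior samples."""
    p_real, p_fake = critic(z_data), critic(z_fake)
    bce = (F.binary_cross_entropy(p_real, torch.ones_like(p_real))
           + F.binary_cross_entropy(p_fake, torch.zeros_like(p_fake)))
    return bce + GP_WEIGHT * gradient_penalty(critic, z_data, z_fake)

def actor_loss(critic, actor, z_prior, sigma_z_bar, lambda_dist=0.1):
    """Equation 3: fool the critic while staying near the original sample."""
    z_shift = actor(z_prior)
    p = critic(z_shift)
    realism = F.binary_cross_entropy(p, torch.ones_like(p))  # L_{c=1}(G(z))
    dist = (torch.log1p((z_shift - z_prior) ** 2)
            / sigma_z_bar ** 2).sum(dim=1).mean()
    return realism + lambda_dist * dist
```

During training, `z_fake` mixes shifted samples \(G(p(z))\) with raw prior samples \(p(z)\) at roughly a 10:1 ratio, and the critic takes ten steps for every actor step, as described in Section 9.2.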
### 9.3 SUPPLEMENTAL FIGURES

![](images/14_0.jpg)
<center>Figure 7: Additional generated CelebA faces by \(G_{\mathrm{CGAN}}\) with \(\lambda_{\mathrm{dist}} = 0\). Full attribute labels are given in supplemental Table 3.</center>

![](images/15_0.jpg)
<center>Figure 8: Additional generated CelebA faces by \(G_{\mathrm{CGAN}}\) with \(\lambda_{\mathrm{dist}} = 0.1\). Full attribute labels are given in supplemental Table 3.</center>

![](images/16_0.jpg)
<center>Figure 9: Optimization of samples drawn from the prior to satisfy both the realism constraint and attribute constraints (drawn from the test set). The optimization takes 100 steps, and images are shown at 0, 10, 30, 50, and 100 steps. \(D\) is trained with inner-loop optimization, \(G_{\mathrm{opt}}\), as described in Section 9.2.</center>

![](images/17_0.jpg)
<center>Figure 10: Identity-distorting transformations with a CGAN actor-critic pair. Without a penalty to encourage small moves in latent space, the actor maps the latent vectors of the original data points to generated images that have the correct attributes, but a different identity. Panels are black for attributes of the original image, as the procedure just returns the same image as the reconstruction.</center>

![](images/17_1.jpg)
<center>Figure 11: Training curves for the melody actor \((G)\) and critic \((D)\) pair for the pitch class constraint \(c_{\mathrm{pitch}}(m, \mathcal{P} = \mathrm{C}_{\mathrm{Maj}})\).</center>

![](images/18_0.jpg)
<center>Figure 12: Architecture for the (a) actors and (b) critics used in all experiments.</center>

Table 3: Complete list of attributes for label names in Figures 4, 7, and 8

<table><tr><td>Figure Label</td><td>Black Bald</td><td>Blond Hair</td><td>Brown Hair</td><td>Eye-glasses</td><td>Male</td><td>Beard</td><td>Smiling</td><td>Hat</td><td>Young</td></tr><tr><td>Blond Hair</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td><td>1</td></tr><tr><td>Brown Hair</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td></tr><tr><td>Black Hair</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td><td>1</td></tr><tr><td>Male</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td><td>1</td><td>0</td></tr><tr><td>Facial Hair</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>1</td><td>0</td></tr><tr><td>Eyeglasses</td><td>0</td><td>1</td><td>0</td><td>0</td><td>1</td><td>1</td><td>1</td><td>1</td><td>0</td></tr><tr><td>Bald</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>1</td></tr><tr><td>Aged</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>0</td></tr></table>

<table><tr><td>\(\sigma_{x}^{2}\)</td><td>LL</td><td>KL</td><td>ELBO</td></tr><tr><td>1e0</td><td>-11360</td><td>30</td><td>-11390</td></tr><tr><td>1e-1</td><td>-11325</td><td>150</td><td>-11475</td></tr><tr><td>1e-2</td><td>15680</td><td>600</td><td>15080</td></tr><tr><td>1e-3</td><td>16090</td><td>1950</td><td>14140</td></tr><tr><td>1e-4</td><td>16150</td><td>3650</td><td>12500</td></tr></table>

Table 4: Selection of \(\sigma_{x} = 0.1\) for the CelebA VAEs by ELBO maximization. All results are given in nats.

![](images/19_0.jpg)
<center>Figure 13: Architectures for the (a) feed-forward MNIST, (b) convolutional CelebA, and (c) hierarchical LSTM melody VAEs. In (b), all convolutions have a stride of 2. In (c), LSTM cells shown in the same color share weights, and linear layers between levels are omitted.</center>

![](images/20_0.jpg)
<center>Figure 14: Samples generated with smaller \(G\) and \(D\) models (3 ReLU layers of 256 units each) are of comparable quality despite having 85x fewer parameters, \(\lambda_{\mathrm{dist}} = 0.0\). Full attribute labels are given in supplemental Table 3.</center>

![](images/20_1.jpg)
<center>Figure 15: Latent constraints applied to a vanilla autoencoder with no latent prior. Samples are of similar quality to VAEs with \(\sigma_{x} = 0.1\), but with less diversity and more high-frequency visual artifacts. Full attribute labels are given in supplemental Table 3.</center>

![](images/21_0.jpg)
<center>Figure 16: Smaller decoder standard deviations, \(\sigma_{x}\), lead to lower-variance posteriors \(\sigma_{z}(x)\) of the encoder \(q(z \mid x)\), averaged over the training set per dimension. The x-axis is sorted from lowest to highest variance. Tighter posteriors correspond to greater utilization of the latent dimension, and we scale our distance regularization by the inverse square of \(\sigma_{z}\) on a per-dimension basis.</center>
The actor with no penalty generates samples that are more accurately classified than the actor with a penalty but also shifts the samples much farther in latent space. <--- Page Split ---> <table><tr><td>CelebA</td><td>Accuracy</td><td>Precision</td><td>Recall</td><td>F1 Score</td><td>zMSE</td></tr><tr><td>(This Work) 10 Attributes</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Test Data</td><td>0.936</td><td>0.901</td><td>0.893</td><td>0.895</td><td></td></tr><tr><td>\(G_{CGAN}(\lambda _{\mathrm{dist}}=0)\)</td><td>0.942</td><td>0.914</td><td>0.904</td><td>0.906</td><td>80.7</td></tr><tr><td>\(G_{CGAN}(\lambda _{\mathrm{dist}}=0)\) (Small Model)</td><td>0.926</td><td>0.898</td><td>0.860</td><td>0.870</td><td>58.9</td></tr><tr><td>\(G_{CGAN}(\lambda _{\mathrm{dist}}=0.1)\)</td><td>0.928</td><td>0.903</td><td>0.863</td><td>0.874</td><td>17.0</td></tr><tr><td>(Perarnau et al., 2016) 18 Attributes</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Test Data</td><td>0.928</td><td></td><td></td><td>0.715</td><td></td></tr><tr><td>ICGAN</td><td>0.860</td><td></td><td></td><td>0.524</td><td></td></tr></table> Table 1: Accuracy of a separate model trained to classify attributes from images, evaluated on test data and generated images. We condition and evaluate the generated images on the same labels as the test data. For comparison, the results of a similar task using invertible CGANs for generation (Perarnau et al., 2016) are provided. However, since the full list of salient attributes was not given in the paper, we emphasize that they are not directly comparable as the two experiments use a slightly different set of attribute labels. We also measure the distance in latent space that prior samples are shifted, weighted by \(1 / \sigma_{z}^{2}\) . Actors trained with a latent distance penalty \(\lambda_{\mathrm{dist}}\) have slightly worse accuracy, but find latent points much closer to the prior samples and produce a greater diversity of images (see supplemental Figures 7 and 8). Interestingly, an actor trained without a distance penalty achieves higher classification accuracy than the test set itself, possibly by generating images with more exaggerated and distinctive features than real data. A "small model" CGAN with 85x fewer parameters (3 fully connected layers of 256 units) generates images (supplemental Figure 14) of comparable quality. Due to the smaller capacity, the model finds more local solutions (smaller \(z_{MSE}\) ) that have slightly less attribute accuracy, but are more visually similar to the prior sample without an explicit regularization term. Although we used a VAE as the base generative model, our approach could also be used to generate high- quality conditional samples from pretrained classical autoencoders. We show in supplemental Figure 15 that we obtain reasonably good conditional samples (albeit with high- frequency spatial artifacts) as \(\sigma_{x} \to 0\) (equivalent to a classical autoencoder). Learning the decoder using VAE training encourages q(z) to fill up as much of the latent space as possible (without sacrificing reconstruction quality), which in turn encourages the decoder to map more of the latent space to reasonable- looking images. The prior \(p(z) = \mathcal{N}(0, I)\) also imposes a natural scale on the latent variables. ## 5 IDENTITY-PRESERVING TRANSFORMATIONS If we have a VAE that can produce good reconstructions of held- out data, we can transform the attributes of the output by gradient- based optimization. 
We simply need to train a critic, \(D_{attr}(z)\) , to predict the attribute labels \(p(y \mid z)\) of the data embeddings \(z \sim q(z \mid x)\) , and use a cross- entropy loss to train. Then, starting from a data point, \(z \sim q(z \mid x)\) , we can perform gradient descent on the the realism constraint and attribute constraint jointly, \(\mathcal{L}_{D_{\mathrm{real}}}(z) + \lambda_{\mathrm{attr}}\mathcal{L}_{D_{\mathrm{attr}}}(z)\) . Note that it is helpful to maintain the realism constraint to keep the image from distorting unrealistically. Using the same procedure, we can also conditionally generate new samples (supplemental Figure 9) by starting from \(z \sim p(z)\) . Figure 5 demonstrates transformations applied to samples from the held- out evaluation dataset. Note that since the reconstructions are close to the original images, the transformed images also maintain much of their structure. This contrasts with supplemental Figure 10, where a distance- penalty- free CGAN actor produces transformations that share attributes with the original but shift identity. We could preserve identity by introducing a distance penalty, but find that it is much easier to find the correct weighting of realism cost, attribute cost, and distance penalty through optimization, as each combination does not require retraining the network. ## 6 RULE-BASED CONSTRAINTS: ZERO-SHOT CONDITIONAL GENERATION So far, we have assumed access to labeled data to train attribute classifiers. We can remove the need to provide labeled examples by leveraging the structure learned by our pre- trained model, using it to generate exemplars that are scored by a user- supplied reward function. If we constrain the reward <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 6: Transformations from a prior sample for the Melody VAE model. In each 16-bar pianoroll, time is in the horizontal direction and pitch in the vertical direction. In the prior sample, notes falling outside of the C Major scale are shown in red. After transformation by \(G_{\mathcal{P} = \mathcal{C}_{\mathrm{Maj}}},d = 0\) , all sampled notes fall within the scale, without a significant change to note density. After transformation of the original \(z\) by \(G_{\mathcal{P} = \mathcal{C}_{\mathrm{Maj}}},d = 192\) , all sampled notes lay within the scale and the density increases beyond 192. Synthesized audio of these samples can be heard at https://goo.gl/ouULT9. </center> function to be bounded, \(c(x):\mathbb{R}^{N}\to [0,1]\) , the problem becomes very similar to previous GAN settings, but now the actor, \(G\) , and critic, \(D\) , are working together. \(D\) aims to best approximate the true value of each latent state, \(\mathbb{E}_{x\sim p(x|z)}c(x)\) , and \(G\) aims to shift samples from the prior to high- value states. The critic loss is the cross- entropy from \(c(x)\) , and the actor loss is the same as \(\mathcal{L}_{G}\) in equation 3, where we again have a distance penalty to promote diversity of outputs. Note that the reward function and VAE decoder need not necessarily be differentiable, as the critic learns a value function to approximate the reward, which the actor uses for training. To highlight this, we demonstrate that the output of a recurrent VAE model can be constrained to satisfy hard- coded rule- based constraints. We first train an LSTM VAE (details in the Appendix) on melodic fragments. Each melody, \(m\) , is represented as a sequence of categorical variables. 
In order to examine our ability to constrain the pitch classes and note density of the outputs, we define two reward functions, one that encourages notes from a set of pitches \(\mathcal{P}\) , and another for that encourages melodies to have at least \(d\) notes: \[c_{\mathrm{pitch}}(m,\mathcal{P}) = \sum_{p\in m}\mathbb{1}(p\in \mathcal{P}) / |m|\qquad c_{\mathrm{density}}(m,d) = \min (1,|m| / d) \quad (4)\] Figure 6 gives an example of controlling the pitch class and note density of generated outputs, which is quantitatively supported by the results in Table 2. During training, the actor goes through several phases of exploration and exploitation, oscillating between expanding to find new modes with high reward and then contracting to find the nearest locations of those modes, eventually settling into high value states that require only small movements in the latent space (supplemental Figure 11). ## 7 RELATED WORK Conditional GANs (Mirza & Osindero, 2014) and VAEs (Sohn et al., 2015) introduce conditioning variables at training time. Sohn et al. (2015) allow these variables to affect the distribution in latent \(z\) space, but still require that \(p(z \mid y)\) be a tractable distribution. Perarnau et al. (2016) use CGANs to adjust images, but because CGANs cannot usually reconstruct arbitrary inputs accurately, they must resort to image- space processing techniques to transfer effects to the original input. White (2016) propose adding "attribute vectors" to samples from \(p(z)\) as a simple and effective heuristic to perform transformations, which relies heavily on the linearity of the latent space. <--- Page Split ---> <table><tr><td>Actor</td><td>\(c_{\mathrm{pitch}}(m,\mathcal {P}=C_{\mathrm{Maj}})\)</td><td>\(c_{\mathrm{density}}(m,d=192)\)</td><td>\(z_{\mathrm{MSE}}\)</td></tr><tr><td>Prior</td><td>0.579 (0.43%)</td><td>0.417 (0.04%)</td><td>-</td></tr><tr><td>\(G_{\mathcal {P}=C_{\mathrm{Maj}},d=0}\)</td><td>0.991 (70.8%)</td><td>0.459 (0.01%)</td><td>0.015</td></tr><tr><td>\(G_{\mathcal {P}=C_{\mathrm{Maj}},d=192}\)</td><td>0.982 (62.4%)</td><td>0.985 (84.9%)</td><td>0.039</td></tr></table> Table 2: Average rewards and constraint satisfaction rates (in parentheses) for unconditional (Prior)and conditional generation. Samples from the prior receive low rewards, on average, and near zero satisfaction rates from both the pitch class (C Major) and note density (≥ 192 notes) constraints. After applying an actor optimized only for the C Major scale ( $G_{\mathcal {P}=C_{\mathrm{Maj}},d=0}$ ), the pitch class constraint is fully satisfied 70.8% of the time with only a minor effect on density. The average value close to 1 also indicates that when the constraint is not satisfied, it is typically off by only a few notes. Applying an actor function optimized for the C Major scale and high density ( $G_{\mathcal {P}=C_{\mathrm{Maj}},d=192}$ ) causes both constraints to be satisfied at high rates, with a slightly larger shift in latent space. Some recent work has focused on applying more expressive prior constraints to VAEs (Rezende et al., 2014; Sonderby et al., 2016; Chen et al., 2017; Tomczak & Welling, 2017). The prior that maximizes the ELBO is \(p^{*}(z)=q(z)\) (Hoffman & Johnson, 2016); one can interpret our realism constraint as trying to find an implicit distribution that is indistinguishable from \(q(z)\) . Like the adversarial autoencoder of Makhzani et al. 
(2016), our realism constraint relies on a discriminative model, but instead of trying to force \(q(z)\) to equal some simple \(p(z)\), we only weakly constrain \(q(z)\) and then use a classifier to “clean up” our results. Like this work, the recently proposed adversarially regularized autoencoder (Junbo et al., 2017) uses adversarial training to generate latent codes in a latent space discovered by an autoencoder; that work focuses on unconditional generation. Gómez-Bombarelli et al. (2016) train classifiers in the latent space of a VAE to predict what latent variables map to molecules with various properties, and then use iterative gradient-based optimization in the latent space to find molecules that have a desired set of properties. On molecule data, their procedure generates invalid molecules rarely enough that they can simply reject these samples, which are detected using off-the-shelf software. By contrast, the probability of generating realistic images under our pretrained VAE is astronomically small, and no simple criterion for detecting valid images exists. Jaques et al. (2017) also use a classifier to constrain generation; they use a Deep Q-network as an auxiliary loss for training an LSTM. Closest to Section 6, Nguyen et al. (2016a;b) generate very high quality conditional images by optimizing a sample from the latent space of a generative network to create an image that maximizes the class activations of a pretrained ImageNet classifier. Our work differs in that we learn an amortized generator/discriminator directly in the latent space and we achieve diversity through regularizing by the natural scale of the latent space rather than through a modified Langevin sampling algorithm.

## 8 DISCUSSION AND FUTURE WORK

We have demonstrated a new approach to conditional generation by constraining the latent space of an unconditional generative model. This approach could be extended in a number of ways. One possibility would be to plug in different architectures, including powerful autoregressive decoders or adversarial decoder costs, as we make no assumptions specific to independent likelihoods. While we have considered constraints based on implicit density estimation, we could also estimate the constrained distribution directly with an explicit autoregressive model or another variational autoencoder. The efficacy of autoregressive priors in VAEs is promising for this approach (Kingma et al., 2016). Conditional samples could then be obtained by ancestral sampling, and transformations by using gradient ascent to increase the likelihood under the model. Active or semi-supervised learning approaches could reduce the sample complexity of learning constraints. Real-time constraint learning would also enable new applications; it might be fruitful to extend the reward approximation of Section 6 to incorporate user preferences as in (Christiano et al., 2017). <--- Page Split ---> ## ACKNOWLEDGMENTS Many thanks to Jascha Sohl-Dickstein, Colin Raffel, and Doug Eck for their helpful brainstorming and encouragement. ## 9 APPENDIX ### 9.1 EXPERIMENTAL DETAILS For images, we use the MNIST digits dataset (LeCun & Cortes, 2010) and the Large-scale CelebFaces Attributes (CelebA) dataset (Liu et al., 2015). MNIST images are \(28\times 28\) pixels and greyscale, scaled to [0, 1]. For attributes, we use the number class label of each digit. CelebA images are center-cropped to \(128\times 128\) pixels and then downsampled to \(64\times 64\) RGB pixels and scaled to [0, 1].
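In code, this image preprocessing could look like the following torchvision sketch (our illustration of the crop/resize/scale pipeline described above, not the authors' input code):

```python
# Minimal torchvision sketch (ours) of the CelebA preprocessing described above.
from torchvision import transforms

celeba_transform = transforms.Compose([
    transforms.CenterCrop(128),  # center-crop to 128x128 pixels
    transforms.Resize(64),       # downsample to 64x64
    transforms.ToTensor(),       # RGB tensor scaled to [0, 1]
])
```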
We find that many of the attribute labels are not strongly correlated with changes in the images, so we narrow the original 40 attributes to the 10 most visually salient: blond hair, black hair, brown hair, bald, eyeglasses, facial hair, hat, smiling, gender, and age. For melodies, we scraped the web to collect over 1.5 million publicly available MIDI files. We then extracted 16-bar melodies by sliding a window with a single bar stride over each non-percussion instrument with a 4/4 time signature, keeping only the note with the highest pitch when multiple overlap. This produced over 3 million unique melodies. We represent each melody as a sequence of 256 (16 per bar) categorical variables taking one of 130 discrete states at each sixteenth note: 128 note-on pitches, a hold state, and a rest state.

### 9.2 MODEL ARCHITECTURES

All encoders, decoders, and classifiers are trained with the Adam optimizer (Kingma & Ba, 2015), with learning rate \(= 3\mathrm{e-4}\), \(\beta_{1} = 0.9\), and \(\beta_{2} = 0.999\). To train \(D_{real}(z)\), \(D_{attr}(z)\), and \(G(z)\), we follow the training procedure of Gulrajani et al. (2017), applying a gradient penalty of 10, training \(D\) and \(G\) in a 10:1 step ratio, and using the Adam optimizer with learning rate \(= 3\mathrm{e-4}\), \(\beta_{1} = 0.0\), and \(\beta_{2} = 0.9\). While not necessary for convergence, we find this improves the stability of optimization. We do not apply any of the other tricks of GAN training such as batch normalization, minibatch discrimination, or one-sided label smoothing (Radford et al., 2015; Salimans et al., 2016). As samples from \(p(z)\) are easier to discriminate than samples from \(G(p(z))\), we train \(D\) by sampling from \(p(z)\) at a rate 10 times less than \(G(p(z))\). For actors with inner-loop optimization, \(G_{\mathrm{opt}}\), 100 iterations of Adam are used with learning rate \(= 1\mathrm{e-1}\), \(\beta_{1} = 0.9\), and \(\beta_{2} = 0.999\).

#### 9.2.1 MNIST FEED-FORWARD VAE

To model the MNIST data, we use a deep feed-forward neural network (Figure 13a). The encoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 2048 outputs. Half of the outputs are used as the \(\mu\) and the softplus of the other half are used as the \(\sigma\) to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for \(z\). The decoder is a series of 3 linear layers with 1024 outputs, each followed by a ReLU, after which an additional linear layer is used to produce 28x28 outputs. These outputs are then passed through a sigmoid to generate the output image.

#### 9.2.2 CELEBA CONVOLUTIONAL VAE

To model the CelebA data, we use a deep convolutional neural network (Figure 13b). The encoder is a series of 4 2D convolutional layers, each followed by a ReLU. The convolution kernels are of size \(3\times 3\), \(3\times 3\), \(5\times 5\), and \(5\times 5\), with 2048, 1024, 512, and 256 output channels, respectively. All convolutional layers have a stride of 2. After the final ReLU, a linear layer is used to produce 2048 outputs. Half of the outputs are used as the \(\mu\) and the softplus of the other half are used as the \(\sigma\) to parameterize a 1024-dimension multivariate Gaussian distribution with a diagonal covariance matrix for \(z\).
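For reference, here is a minimal PyTorch sketch of the convolutional encoder just described, using the kernel sizes and channel counts listed above; the padding and flattening choices are our own assumptions, and this is a reconstruction rather than the authors' code:

```python
# Minimal PyTorch sketch (ours) of the CelebA convolutional encoder above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CelebAEncoder(nn.Module):
    def __init__(self, z_dim=1024):
        super().__init__()
        # Kernels 3,3,5,5 with 2048,1024,512,256 output channels, stride 2.
        self.convs = nn.ModuleList([
            nn.Conv2d(3,    2048, 3, stride=2, padding=1),
            nn.Conv2d(2048, 1024, 3, stride=2, padding=1),
            nn.Conv2d(1024, 512,  5, stride=2, padding=2),
            nn.Conv2d(512,  256,  5, stride=2, padding=2),
        ])
        self.fc = nn.Linear(256 * 4 * 4, 2 * z_dim)  # 64px input -> 4x4 maps

    def forward(self, x):
        for conv in self.convs:
            x = F.relu(conv(x))
        h = self.fc(x.flatten(1))
        mu, raw = h.chunk(2, dim=1)
        return mu, F.softplus(raw)  # mean and scale of diagonal-Gaussian q(z|x)

mu, sigma = CelebAEncoder()(torch.rand(2, 3, 64, 64))
```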
The decoder passes the \(z\) through a 4x4x2048 linear layer, and then a series of 4 2D transposed convolutional layers, all but the last of which are followed by a ReLU. The deconvolution kernels are of size \(5\times 5\), \(5\times 5\), \(3\times 3\), and \(3\times 3\), with 1024, 512, 256, and 3 output channels, respectively. All <--- Page Split ---> deconvolution layers have a stride of 2. The output from the final deconvolution is passed through a sigmoid to generate the output image. The classifiers that are trained to predict labels from images are identical to the VAE encoders, except that they end with a sigmoid cross-entropy loss.

#### 9.2.3 MELODY SEQUENCE VAE

Music is fundamentally sequential, so we use an LSTM-based sequence VAE for modelling monophonic melodies (Figure 13c). The encoder is made up of a single-layer bidirectional LSTM, with 2048 units per cell. The final output in each direction is concatenated and passed through a linear layer to produce 1024 outputs. Half of the outputs are used as the \(\mu\) and the softplus of the other half are used as a \(\sigma\) to parameterize a 512-dimension multivariate Gaussian distribution with a diagonal covariance matrix for \(z\). Since musical sequences often have structure at the bar level, we use a hierarchical decoder to model long melodies. First, the \(z\) goes through a linear layer to initialize the state of a 2-layer LSTM with 1024 units per layer, which outputs 16 embeddings of size 512 each, one per bar. Each of these embeddings is passed through a linear layer to produce 16 initial states for another 2-layer LSTM with 1024 units per layer. This bar-level LSTM autoregressively produces individual sixteenth-note events, passing its output through a linear layer and softmax to create a distribution over the 130 classes. This categorical distribution is used to compute a cross-entropy loss during training, or to draw samples at inference time. In addition to generating the initial state at the start of each bar, the embedding for the current bar is concatenated with the previous output as the input at each time step.

#### 9.2.4 ACTOR FEED-FORWARD NETWORK

For \(G(z)\), we use a deep feed-forward neural network (Figure 12a) in all of our experiments. The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce \(2 * dim(z)\) outputs. Half of the outputs are used as the \(\delta z\) and the sigmoid of the other half are used as gates. The transformed \(z'\) is then computed as \((1 - gates)*z + gates*\delta z\). This aids training, as the network then only has to predict shifts in \(z\). When conditioning on attribute labels, \(y\), to compute \(G(z,y)\), the labels are passed through a linear layer producing 2048 outputs, which are concatenated with \(z\) as the model input.

#### 9.2.5 CRITIC FEED-FORWARD NETWORK

For \(D(z)\), we use a deep feed-forward neural network (Figure 12b) in all of our experiments. The network is a series of 4 linear layers with 2048 outputs, each followed by a ReLU, after which an additional linear layer is used to produce a single output. This output is passed through a sigmoid to compute \(D(z)\). When conditioning on attribute labels, \(y\), to compute \(D(z,y)\), the labels are passed through a linear layer producing 2048 outputs, which are concatenated with \(z\) as the model input.
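As a concrete illustration of the gated actor of Section 9.2.4, here is a minimal PyTorch sketch; it is our own reconstruction from the description above (names are ours), not the authors' code:

```python
# Minimal PyTorch sketch (ours) of the gated actor G(z) from Section 9.2.4.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, z_dim=1024, hidden=2048, n_layers=4):
        super().__init__()
        layers, d = [], z_dim
        for _ in range(n_layers):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, 2 * z_dim))  # [delta_z, gate logits]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        dz, gate_logits = self.net(z).chunk(2, dim=1)
        gates = torch.sigmoid(gate_logits)
        # z' = (1 - gates) * z + gates * dz: the network only predicts shifts.
        return (1.0 - gates) * z + gates * dz

z_prime = Actor()(torch.randn(8, 1024))
```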
<--- Page Split ---> ### 9.3 SUPPLEMENTAL FIGURES ![](images/14_0.jpg) <center>Figure 7: Additional generated CelebA faces by \(G_{\mathrm{CGAN}}\) with \(\lambda_{\mathrm{dist}} = 0\) . Full attribute labels are given in supplementary Table 3 </center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 8: Additional generated CelebA faces by \(G_{\mathrm{CGAN}}\) with \(\lambda_{\mathrm{dist}} = 0.1\) . Full attribute labels are given in supplementary Table 3 </center> <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 9: Optimization of samples drawn from the prior to satisfy both the realism constraint and attribute constraints (drawn from the test set). The optimization takes 100 steps, and images are shown at 0, 10, 30, 50 and 100 steps. \(D\) is trained with inner-loop optimization, \(G_{\mathrm{opt}}\) , as described in Section 9.2 </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 10: Identity-distorting transformations with CGAN actor-critic. Without a penalty to encourage small moves in latent space, the actor maps the latent vectors of the original data points to generated images that have the correct attributes, but a different identity. Panels are black for attributes of the original image, as the procedure just returns the same image as the reconstruction. </center> ![](images/17_1.jpg) <center>Figure 11: Training curves for melody actor \((G)\) and critic \((D)\) pair for pitch class constraint \(c_{\mathrm{pitch}}(m, \mathcal{P} = \mathrm{C}_{\mathrm{Maj}})\) . </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 12: Architecture for the (a) actors and (b) critics used in all experiments. </center> Table 3: Complete list of attributes for label names in Figures 4, 7, and 8 <table><tr><td>Figure Label</td><td>Black Bald</td><td>Blond Hair</td><td>Brown Hair</td><td>Eye-glasses</td><td>Male</td><td>Beard</td><td>Smiling</td><td>Hat</td><td>Young</td></tr><tr><td>Blond Hair</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td><td>1</td></tr><tr><td>Brown Hair</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td></tr><tr><td>Black Hair</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td><td>1</td></tr><tr><td>Male</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td><td>1</td><td>0</td></tr><tr><td>Facial Hair</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>1</td><td>0</td></tr><tr><td>Eyeglasses</td><td>0</td><td>1</td><td>0</td><td>0</td><td>1</td><td>1</td><td>1</td><td>1</td><td>0</td></tr><tr><td>Bald</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>1</td></tr><tr><td>Aged</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>0</td></tr></table> <table><tr><td>σx2</td><td>LL</td><td>KL</td><td>ELBO</td></tr><tr><td>1e-1</td><td>-11360</td><td>30</td><td>-11390</td></tr><tr><td>1e-1</td><td>-11325</td><td>150</td><td>-11475</td></tr><tr><td>1e-2</td><td>15680</td><td>600</td><td>15080</td></tr><tr><td>1e-3</td><td>16090</td><td>1950</td><td>14140</td></tr><tr><td>1e-4</td><td>16150</td><td>3650</td><td>12500</td></tr></table> Table 4: Selection of \(\sigma_{x} = 0.1\) for the CelebA VAEs by ELBO maximization. All results are given in Nats. <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 13: Architectures for the (a) feed-forward MNIST, (b) convolutional CelebA, and (c) hierarchical LSTM melody VAEs. 
In (b), all convolutions have a stride of 2. In (c), LSTM cells shown in the same color share weights and linear layers between levels are omitted. </center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 14: Samples generated with smaller (3 ReLU layers of 256 units each) \(G\) and \(D\) models are comparable quality despite having 85x fewer parameters, \(\lambda_{\mathrm{dist}} = 0.0\) . Full attribute labels are given in supplementary Table 3. </center> ![](images/20_1.jpg) <center>Figure 15: Latent constraints applied to a vanilla autoencoder with no latent prior. Samples are similar quality to VAEs with \(\sigma_{x} = 0.1\) , but with less diversity and more high-frequency visual artifacts. Full attribute labels are given in supplementary Table 3. </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 16: Smaller decoder standard deviations, \(\sigma_{x}\) , lead to lower-variance posteriors, \(\sigma_{z}(x)\) of the encoder \(q(z \mid x)\) , averaged over the training set per a dimension. The x-axis is sorted from lowest to highest variance. Tighter posteriors correspond to more utilization of the latent dimension, and we scale our distance regularization the square inverse on a per-dimension basis. </center> <--- Page Split --->
accept
Accept (Poster)
7
ICLR_2018_paper_0320
iclr
2,018
# STABILIZING ADVERSARIAL NETS WITH PREDICTION METHODS Abhay Yadav\*, Sohil Shah\*, Zheng Xu, David Jacobs, & Tom Goldstein University of Maryland, College Park, MD 20740, USA. {jaiabhay, xuzh, tomg}@cs.umd.edu, sohilas@umd.edu, djacobs@umiacs.umd.edu ## ABSTRACT Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points, and is stable with a wider range of training parameters than a non- prediction method. This makes adversarial networks less likely to "collapse," and enables faster training with larger learning rates. ## 1 INTRODUCTION Adversarial networks play an important role in a variety of applications, including image generation (Zhang et al., 2017; Wang & Gupta, 2016), style transfer (Brock et al., 2017; Taigman et al., 2017; Wang & Gupta, 2016; Isola et al., 2017), domain adaptation (Taigman et al., 2017; Tzeng et al., 2017; Ganin & Lempitsky, 2015), imitation learning (Ho et al., 2016), privacy (Edwards & Storkey, 2016; Abadi & Andersen, 2016), fair representation (Mathieu et al., 2016; Edwards & Storkey, 2016), etc. One particularly motivating application of adversarial nets is their ability to form generative models, as opposed to the classical discriminative models (Goodfellow et al., 2014; Radford et al., 2016; Denton et al., 2015; Mirza & Osindero, 2014). While adversarial networks have the power to attack a wide range of previously unsolved problems, they suffer from a major flaw: they are difficult to train. This is because adversarial nets try to accomplish two objectives simultaneously; weights are adjusted to maximize performance on one task while minimizing performance on another. Mathematically, this corresponds to finding a saddle point of a loss function - a point that is minimal with respect to one set of weights, and maximal with respect to another. Conventional neural networks are trained by marching down a loss function until a minimizer is reached (Figure 1a). In contrast, adversarial training methods search for saddle points rather than a minimizer, which introduces the possibility that the training path "slides off" the objective functions and the loss goes to \(- \infty\) (Figure 1b), resulting in "collapse" of the adversarial network. As a result, many authors suggest using early stopping, gradients/weight clipping (Arjovsky et al., 2017), or specialized objective functions (Goodfellow et al., 2014; Zhao et al., 2017; Arjovsky et al., 2017) to maintain stability. In this paper, we present a simple "prediction" step that is easily added to many training algorithms for adversarial nets. We present theoretical analysis showing that the proposed prediction method is asymptotically stable for a class of saddle point problems. Finally, we use a wide range of experiments to show that prediction enables faster training of adversarial networks using large learning rates without the instability problems that plague conventional training schemes. 
<--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: A schematic depiction of gradient methods. (a) Classical networks are trained by marching down the loss function until a minimizer is reached. Because classical loss functions are bounded from below, the solution path gets stopped when a minimizer is reached, and the gradient method remains stable. (b) Adversarial net loss functions may be unbounded from below, and training alternates between minimization and maximization steps. If minimization (or, conversely, maximization) is more powerful, the solution path "slides off" the loss surface and the algorithm becomes unstable, resulting in a sudden "collapse" of the network. </center>

## 2 PROPOSED METHOD

Saddle-point optimization problems have the general form \[\min_{u}\max_{v}\mathcal{L}(u,v) \quad (1)\] for some loss function \(\mathcal{L}\) and variables \(u\) and \(v\). Most authors use the alternating stochastic gradient method to solve saddle-point problems involving neural networks. This method alternates between updating \(u\) with a stochastic gradient descent step, and then updating \(v\) with a stochastic gradient ascent step. When simple/classical SGD updates are used, the steps of this method can be written

\[u^{k+1} = u^{k} - \alpha_{k}\mathcal{L}_{u}^{\prime}(u^{k},v^{k}) \qquad \text{gradient descent in } u \text{, starting at } (u^{k},v^{k})\]
\[v^{k+1} = v^{k} + \beta_{k}\mathcal{L}_{v}^{\prime}(u^{k+1},v^{k}) \qquad \text{gradient ascent in } v \text{, starting at } (u^{k+1},v^{k}) \quad (2)\]

Here, \(\{\alpha_{k}\}\) and \(\{\beta_{k}\}\) are learning rate schedules for the minimization and maximization steps, respectively. The vectors \(\mathcal{L}_{u}^{\prime}(u,v)\) and \(\mathcal{L}_{v}^{\prime}(u,v)\) denote (possibly stochastic) gradients of \(\mathcal{L}\) with respect to \(u\) and \(v\). In practice, the gradient updates are often performed by an automated solver, such as the Adam optimizer (Kingma & Ba, 2015), and include momentum updates. We propose to stabilize the training of adversarial networks by adding a prediction step. Rather than calculating \(v^{k+1}\) using \(u^{k+1}\), we first make a prediction, \(\bar{u}^{k+1}\), about where the \(u\) iterates will be in the future, and use this predicted value to obtain \(v^{k+1}\).

## Prediction Method

\[u^{k+1} = u^{k} - \alpha_{k}\mathcal{L}_{u}^{\prime}(u^{k},v^{k}) \qquad \text{gradient descent in } u \text{, starting at } (u^{k},v^{k})\]
\[\bar{u}^{k+1} = u^{k+1} + (u^{k+1} - u^{k}) \qquad \text{predict the future value of } u\]
\[v^{k+1} = v^{k} + \beta_{k}\mathcal{L}_{v}^{\prime}(\bar{u}^{k+1},v^{k}) \qquad \text{gradient ascent in } v \text{, starting at } (\bar{u}^{k+1},v^{k}) \quad (3)\]

The prediction step in (3) tries to estimate where \(u\) is going to be in the future by assuming its trajectory remains the same as in the current iteration.

## 3 BACKGROUND

### 3.1 ADVERSARIAL NETWORKS AS A SADDLE-POINT PROBLEM

We now discuss a few common adversarial network problems and their saddle-point formulations. Generative Adversarial Networks (GANs) fit a generative model to a dataset using a game in which a generative model competes against a discriminator (Goodfellow et al., 2014). The generator, <--- Page Split ---> \(\mathbf{G}(\mathbf{z};\theta_{g})\), takes random noise vectors \(\mathbf{z}\) as inputs, and maps them onto points in the target data distribution.
The discriminator, \(\mathbf{D}(\mathbf{x};\theta_{d})\) , accepts a candidate point \(\mathbf{x}\) and tries to determine whether it is really drawn from the empirical distribution (in which case it outputs 1), or fabricated by the generator (output 0). During a training iteration, noise vectors from a Gaussian distribution \(\mathcal{G}\) are pushed through the generator network \(\mathbf{G}\) to form a batch of generated data samples denoted by \(\mathcal{D}_{fake}\) . A batch of empirical samples, \(\mathcal{D}_{real}\) , is also prepared. One then tries to adjust the weights of each network to solve a saddle point problem, which is popularly formulated as, \[\min_{\theta_{g}}\max_{\theta_{d}}\quad \mathbb{E}_{x\sim \mathcal{D}_{real}}f(\mathbf{D}(\mathbf{x};\theta_{d})) + \mathbb{E}_{z\sim \mathcal{G}}f(1 - \mathbf{D}(\mathbf{G}(\mathbf{z};\theta_{g});\theta_{d})). \quad (4)\] Here \(f(.)\) is any monotonically increasing function. Initially, (Goodfellow et al., 2014) proposed using \(f(x) = \log (x)\) . Domain Adversarial Networks (DANs) (Makhzani et al., 2016; Ganin & Lempitsky, 2015; Edwards & Storkey, 2016) take data collected from a "source" domain, and extract a feature representation that can be used to train models that generalize to another "target" domain. For example, in the domain adversarial neural network (DANN (Ganin & Lempitsky, 2015)), a set of feature layers maps data points into an embedded feature space, and a classifier is trained on these embedded features. Meanwhile, the adversarial discriminator tries to determine, using only the embedded features, whether the data points belong to the source or target domain. A good embedding yields a better task- specific objective on the target domain while fooling the discriminator, and is found by solving \[\min_{\theta_{f},\theta_{y_{k}}}\max_{\theta_{d}}\quad \sum_{k}\alpha_{k}\mathcal{L}_{y^{k}}\left(\mathbf{x}_{s};\theta_{f},\theta_{y^{k}}\right) - \lambda \mathcal{L}_{d}\left(\mathbf{x}_{s},\mathbf{x}_{t};\theta_{f},\theta_{d}\right). \quad (5)\] Here \(\mathcal{L}_{d}\) is any adversarial discriminator loss function and \(\mathcal{L}_{y^{k}}\) denotes the task specific loss. \(\theta_{f},\theta_{d}\) , and \(\theta_{y^{k}}\) are network parameter of feature mapping, discriminator, and classification layers. ### 3.2 STABILIZING SADDLE POINT SOLVERS It is well known that alternating stochastic gradient methods are unstable when using simple logarithmic losses. This led researchers to explore multiple directions for stabilizing GANs; either by adding regularization terms (Arjovsky et al., 2017; Li et al., 2015; Che et al., 2017; Zhao et al., 2017), a myriad of training "hacks" (Salimans et al., 2016; Gulrajani et al., 2017), re- engineering network architectures (Zhao et al., 2017), and designing different solvers (Metz et al., 2017). Specifically, the Wasserstein GAN (WGAN) (Arjovsky et al., 2017) approach modifies the original objective by replacing \(f(x) = \log (x)\) with \(f(x) = x\) . This led to a training scheme in which the discriminator weights are "clipped." However, as discussed in Arjovsky et al. (2017), the WGAN training is unstable at high learning rates, or when used with popular momentum based solvers such as Adam. Currently, it is known to work well only with RMSProp (Arjovsky et al., 2017). The unrolled GAN (Metz et al., 2017) is a new solver that can stabilize training at the cost of more expensive gradient computations. 
Each generator update requires the computation of multiple extra discriminator updates, which are then discarded when the generator update is complete. While avoiding GAN collapse, this method requires increased computation and memory. In the convex optimization literature, saddle point problems are more well studied. One popular solver is the primal- dual hybrid gradient (PDHG) method (Zhu & Chan, 2008; Esser et al., 2009), which has been popularized by Chambolle and Pock (Chambolle & Pock, 2011), and has been successfully applied to a range of machine learning and statistical estimation problems (Goldstein et al., 2015). PDHG relates closely to the method proposed here - it achieves stability using the same prediction step, although it uses a different type of gradient update and is only applicable to bi- linear problems. Stochastic methods for convex saddle- point problems can be roughly divided into two categories: stochastic coordinate descent (Dang & Lan, 2014; Lan & Zhou, 2015; Zhang & Lin, 2015; Zhu & Storkey, 2015; 2016; Wang & Xiao, 2017; Shibagaki & Takeuchi, 2017) and stochastic gradient descent (Chen et al., 2014; Qiao et al., 2016). Similar optimization algorithms have been studied for reinforcement learning (Wang & Chen, 2016; Du et al., 2017). Recently, a "doubly" stochastic method that randomizes both primal and dual updates was proposed for strongly convex bilinear saddle point problems (Yu et al., 2015). For general saddle point problems, "doubly" stochastic gradient descent methods are discussed in Nemirovski et al. (2009), Palaniappan & Bach (2016), in <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: A schematic depiction of the prediction method. When the minimization step is powerful and moves the iterates a long distance, the prediction step (dotted black arrow) causes the maximization update to be calculated further down the loss surface, resulting in a more dramatic maximization update. In this way, prediction methods prevent the maximization step from getting overpowered by the minimization update. </center> which primal and dual variables are updated simultaneously based on the previous iterates and the current gradients. ## 4 INTERPRETATIONS OF THE PREDICTION STEP We present three ways to explain the effect of prediction: an intuitive, non- mathematical perspective, a more analytical viewpoint involving dynamical systems, and finally a rigorous proof- based approach. ### 4.1 AN INTUITIVE VIEWPOINT The standard alternating SGD switches between minimization and maximization steps. In this algorithm, there is a risk that the minimization step can overpower the maximization step, in which case the iterates will "slide off" the edge of saddle, leading to instability (Figure 1b). Conversely, an overpowering maximization step will dominate the minimization step, and drive the iterates to extreme values as well. The effect of prediction is visualized in Figure 2. Suppose that a maximization step takes place starting at the red dot. Without prediction, the maximization step has no knowledge of the algorithm history, and will be the same regardless of whether the previous minimization update was weak (Figure 2a) or strong (Figure 2b). Prediction allows the maximization step to exploit information about the minimization step. If the previous minimizations step was weak (Figure 2a), the prediction step (dotted black arrow) stays close to the red dot, resulting in a weak predictive maximization step (white arrow). 
But if we arrived at the red dot using a strong minimization step (Figure 2b), the prediction moves a long way down the loss surface, resulting in a stronger maximization step (white arrows) to compensate.

### 4.2 A MORE MATHEMATICAL PERSPECTIVE

To get stronger intuition about prediction methods, let's look at the behavior of Algorithm (3) on a simple bi-linear saddle of the form \[\mathcal{L}(u,v) = v^{T}Ku \quad (6)\] where \(K\) is a matrix. When exact (non-stochastic) gradient updates are used, the iterates follow the path of a simple dynamical system with closed-form solutions. We give here a sketch of this argument; a detailed derivation is provided in the Supplementary Material. When the (non-predictive) gradient method (2) is applied to the linear problem (6), the resulting iterations can be written \[\frac{u^{k + 1} - u^{k}}{\alpha} = -K^{T}v^{k},\qquad \frac{v^{k + 1} - v^{k}}{\alpha} = (\beta /\alpha)Ku^{k + 1}.\] When the stepsize \(\alpha\) gets small, this behaves like a discretization of the system of differential equations \[\dot{u} = -K^{T}v,\qquad \dot{v} = (\beta /\alpha)Ku\] <--- Page Split ---> where \(\dot{u}\) and \(\dot{v}\) denote the derivatives of \(u\) and \(v\) with respect to time. These equations describe a simple harmonic oscillator, and the closed-form solution for \(u\) is \[u(t) = C\cos (\Sigma^{1 / 2}t + \phi)\] where \(\Sigma\) is a diagonal matrix, and the matrix \(C\) and vector \(\phi\) depend on the initialization. We can see that, for small values of \(\alpha\) and \(\beta\), the non-predictive algorithm (2) approximates an undamped harmonic motion, and the solutions orbit around the saddle without converging. The prediction step (3) improves convergence because it produces damped harmonic motion that sinks into the saddle point. When applied to the linearized problem (6), we get the dynamical system \[\dot{u} = -K^{T}v,\qquad \dot{v} = (\beta /\alpha)K(u + \alpha \dot{u}) \quad (7)\] which has solution \[u(t) = UA\exp (-\frac{t\alpha}{2}\sqrt{\Sigma})\sin (t\sqrt{(1 - \alpha^{2} / 4)\Sigma} +\phi).\] From this analysis, we see that the damping caused by the prediction step causes the orbits to converge into the saddle point, and the error decays exponentially fast.

### 4.3 A RIGOROUS PERSPECTIVE

While the arguments above are intuitive, they are also informal and do not address issues like stochastic gradients, non-constant stepsize sequences, and more complex loss functions. We now provide a rigorous convergence analysis that handles these issues. We assume that the function \(\mathcal{L}(u,v)\) is convex in \(u\) and concave in \(v\). We can then measure convergence using the "primal-dual" gap, \(P(u,v) = \mathcal{L}(u,v^{*}) - \mathcal{L}(u^{*},v)\) where \((u^{*},v^{*})\) is a saddle. Note that \(P(u,v) > 0\) for non-optimal \((u,v)\), and \(P(u,v) = 0\) if \((u,v)\) is a saddle. Using these definitions, we formulate the following convergence result. The proof is in the supplementary material. Theorem 1. Suppose the function \(\mathcal{L}(u,v)\) is convex in \(u\), concave in \(v\), and that the partial gradient \(\mathcal{L}_{v}^{\prime}\) is uniformly Lipschitz smooth in \(u\) (\(\| \mathcal{L}_{v}^{\prime}(u_{1},v) - \mathcal{L}_{v}^{\prime}(u_{2},v)\| \leq L_{v}\| u_{1} - u_{2}\|\)).
Suppose further that the stochastic gradient approximations satisfy \(\mathbb{E}\| \mathcal{L}_{u}^{\prime}(u,v)\|^{2}\leq G_{u}^{2}\), \(\mathbb{E}\| \mathcal{L}_{v}^{\prime}(u,v)\|^{2}\leq G_{v}^{2}\) for scalars \(G_{u}\) and \(G_{v}\), and that \(\mathbb{E}\| u^{k} - u^{*}\|^{2}\leq D_{u}^{2}\), and \(\mathbb{E}\| v^{k} - v^{*}\|^{2}\leq D_{v}^{2}\) for scalars \(D_{u}\) and \(D_{v}\). If we choose decreasing learning rate parameters of the form \(\alpha_{k} = \frac{C_{\alpha}}{\sqrt{k}}\) and \(\beta_{k} = \frac{C_{\beta}}{\sqrt{k}}\), then the SGD method with prediction converges in expectation, and we have the error bound \[\mathbb{E}[P(\hat{u}^{l},\hat{v}^{l})]\leq \frac{1}{2\sqrt{l}}\left(\frac{D_{u}^{2}}{C_{\alpha}} +\frac{D_{v}^{2}}{C_{\beta}}\right) + \frac{\sqrt{l + 1}}{l}\left(\frac{C_{\alpha}G_{u}^{2}}{2} +C_{\alpha}L_{v}G_{u}^{2} + C_{\alpha}L_{v}D_{v}^{2} + \frac{C_{\beta}G_{v}^{2}}{2}\right)\] where \(\hat{u}^{l} = \frac{1}{l}\sum_{k = 1}^{l}u^{k}\), \(\hat{v}^{l} = \frac{1}{l}\sum_{k = 1}^{l}v^{k}\).

## 5 EXPERIMENTS

We present a wide range of experiments to demonstrate the benefits of the proposed prediction step for adversarial nets. We consider a saddle point problem on a toy dataset constructed using MNIST images, and then move on to consider state-of-the-art models for three tasks: GANs, domain adaptation, and learning of fair classifiers. Additional results, and additional experiments involving mixtures of Gaussians, are presented in the Appendix. The code is available at https://github.com/jaiabhayk/stableGAN.

### 5.1 MNIST TOY PROBLEM

We consider the task of classifying MNIST digits as being even or odd. To make the problem interesting, we corrupt \(70\%\) of odd digits with salt-and-pepper noise, while we corrupt only \(30\%\) of even digits. When we train a LeNet network (LeCun et al., 1998) on this problem, we find that the network encodes and uses information about the noise; when a noise vs. no-noise classifier is trained <--- Page Split ---> on the deep features generated by LeNet, it gets \(100\%\) accuracy. The goal of this task is to force LeNet to ignore the noise when making decisions. We create an adversarial model of the form (5) in which \(\mathcal{L}_{y}\) is a softmax loss for the even vs. odd classifier. We make \(\mathcal{L}_{d}\) a softmax loss for the task of discriminating whether the input sample is noisy or not. The classifier and discriminator were both pre-trained using the default LeNet implementation in Caffe (Jia et al., 2014). Then the combined adversarial net was jointly trained both with and without prediction. For implementation details, see the Supplementary Material. Figure 3 summarizes our findings. In this experiment, we considered applying prediction to both the classifier and discriminator. We note that our task is to retain good classification accuracy while preventing the discriminator from doing better than the trivial strategy of classifying odd digits as noisy and even digits as non-noisy. This means that the discriminator accuracy should ideally be \(\sim 0.7\). As shown in Figure 3a, the prediction step hardly makes any difference when evaluated at the small learning rate of \(10^{-4}\). However, when evaluated at higher rates, Figures 3b and 3c show that the prediction solvers are very stable while the one without prediction collapses (blue solid line is flat) very early. Figure 3c shows that the default learning rate \((10^{-3})\) of the Adam solver is unstable unless prediction is used.
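To emphasize how little code the prediction step adds to a training loop, here is a minimal PyTorch sketch of one iteration of update (3); this is our own illustration (the function and variable names are ours), not the released implementation:

```python
# Minimal PyTorch sketch (ours) of one iteration of the prediction update (3).
import torch

def prediction_iteration(loss_fn, u, v, alpha, beta):
    """u, v: tensors with requires_grad=True; loss_fn returns a scalar L(u, v)."""
    # Gradient descent in u, starting at (u^k, v^k).
    g_u, = torch.autograd.grad(loss_fn(u, v), u)
    u_new = (u - alpha * g_u).detach().requires_grad_()
    # Prediction: u_bar^{k+1} = u^{k+1} + (u^{k+1} - u^k).
    u_bar = (2 * u_new - u).detach()
    # Gradient ascent in v, starting at (u_bar^{k+1}, v^k).
    g_v, = torch.autograd.grad(loss_fn(u_bar, v), v)
    v_new = (v + beta * g_v).detach().requires_grad_()
    return u_new, v_new

# Toy saddle L(u, v) = u * v: plain alternating steps orbit the saddle,
# while the predicted steps settle toward the saddle at the origin.
u = torch.tensor(1.0, requires_grad=True)
v = torch.tensor(1.0, requires_grad=True)
for _ in range(500):
    u, v = prediction_iteration(lambda a, b: a * b, u, v, 0.1, 0.1)
print(u.item(), v.item())  # both decay toward 0
```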
![](images/5_0.jpg) <center>Figure 3: Comparison of the classification accuracy (digit parity) and discriminator (noisy vs. no-noise) accuracy using SGD and Adam solver with and without prediction steps. \(\theta_{f}\) and \(\theta_{d}\) refers to variables in eq. (5). (a) Using SGD with learning rate \(lr = 10^{-4}\) . Note that the solid lines of red, blue and green are overlapped. (b) SGD solver with higher learning rate of \(lr = 10^{-3}\) , and (c) using Adam solver with its default parameter. </center> ### 5.2 GENERATIVE ADVERSARIAL NETWORKS Next, we test the efficacy and stability of our proposed predictive step on generative adversarial networks (GAN), which are formulated as saddle point problems (4) and are popularly solved using a heuristic approach (Goodfellow et al., 2014). We consider an image modeling task using CIFAR- 10 (Krizhevsky, 2009) on the recently popular convolutional GAN architecture, DCGAN (Radford et al., 2016). We compare our predictive method with that of DCGAN and the unrolled GAN (Metz et al., 2017) using the training protocol described in Radford et al. (2016). Note that we compared against the unrolled GAN with stop gradient switch<sup>1</sup> and \(K = 5\) unrolling steps. All the approaches were trained for five random seeds and 100 epochs each. We start with comparing all three methods using the default solver for DCGAN (the Adam optimizer) with learning rate=0.0002 and \(\beta_{1} = 0.5\) . Figure 4 compares the generated sample images (at the \(100^{th}\) epoch) and the training loss curve for all approaches. The discriminator and generator loss curves in Figure 4e show that without prediction, the DCGAN collapses at the \(45^{th}\) and \(57^{th}\) epochs. Similarly, Figure 4f shows that the training for unrolled GAN collapses in at least three instances. The training procedure using predictive steps never collapsed during any epochs. Qualitatively, the images generated using prediction are more diverse than the DCGAN and unrolled GAN images. Figure 5 compares all approaches when trained with \(5\times\) higher learning rate (0.001) (the default for the Adam solver). As observed in Radford et al. (2016), the standard and unrolled solvers are very unstable and collapse at this higher rate. However, as shown in Figure 5d, & 5a, training remains stable when a predictive step is used, and generates images of reasonable quality. The training procedure for both DCGAN and unrolled GAN collapsed on all five random seeds. The results on various additional intermediate learning rates as well as on high resolution Imagenet dataset are in the Supplementary Material. <--- Page Split ---> In the Supplementary Material, we present one additional comparison showing results on a higher momentum of \(\beta_{1} = 0.9\) (learning rate=0.0002). We observe that all the training approaches are stable. However, the quality of images generated using DCGAN is inferior to that of the predictive and unrolled methods. Overall, of the 25 training settings we ran on (each of five learning rates for five random seeds), the DCGAN training procedure collapsed in 20 such instances while unrolled GAN collapsed in 14 experiments (not counting the multiple collapse in each training setting). On the contrary, we find that our simple predictive step method collapsed only once. Note that prediction adds trivial cost to the training algorithm. Using a single TitanX Pascal, a training epoch of DCGAN takes 35 secs. With prediction, an epoch takes 38 secs. 
The unrolled GAN method, which requires extra gradient steps, takes 139 secs/epoch. Finally, we draw quantitative comparisons based on the inception score (Salimans et al., 2016), which is a widely used metric for the visual quality of generated images. For this purpose, we consider the current state-of-the-art Stacked GAN (Huang et al., 2017) architecture. Table 1 lists the inception scores computed on the generated samples from Stacked GAN trained for 200 epochs with and without prediction at different learning rates. The joint training of Stacked GAN collapses when trained at the default learning rate of the Adam solver (i.e., 0.001). However, reasonably good samples are generated if the same model is trained with prediction on both of the generator networks. The right end of Table 1 also lists the inception scores measured after fewer epochs for the higher learning rates. This suggests that models trained with the prediction method are not only stable but also converge faster when using higher learning rates. For reference, the inception score on real images of the CIFAR-10 dataset is \(11.51 \pm 0.17\). Table 1: Comparison of inception scores for the Stacked GAN network with and without generator prediction; parenthesized column headers give the number of training epochs. <table><tr><td>Learning rate</td><td>0.0001</td><td>0.0005</td><td>0.001</td><td>0.0005 (40)</td><td>0.001 (20)</td></tr><tr><td>Stacked GAN (joint)</td><td>8.44 ± 0.11</td><td>7.90 ± 0.08</td><td>1.52 ± 0.01</td><td>5.80 ± 0.15</td><td>1.42 ± 0.01</td></tr><tr><td>Stacked GAN (joint) + prediction</td><td>8.55 ± 0.12</td><td>8.13 ± 0.09</td><td>7.96 ± 0.11</td><td>8.10 ± 0.10</td><td>7.79 ± 0.07</td></tr></table> ![](images/6_0.jpg) <center>Figure 4: Comparison of GAN training algorithms for the DCGAN architecture on the CIFAR-10 image dataset, using the default parameters of DCGAN: \(lr = 0.0002\), \(\beta_{1} = 0.5\). </center> <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 5: Comparison of GAN training algorithms for the DCGAN architecture on the CIFAR-10 image dataset with a higher learning rate, \(lr = 0.001\), \(\beta_{1} = 0.5\). </center>

### 5.3 DOMAIN ADAPTATION

We consider the domain adaptation task (Saenko et al., 2010; Ganin & Lempitsky, 2015; Tzeng et al., 2017) wherein the representation learned using the source domain samples is altered so that it can also generalize to samples from the target distribution. We use the problem setup and hyper-parameters as described in Ganin & Lempitsky (2015) using the OFFICE dataset (Saenko et al., 2010); experimental details are shared in the Supplementary Material. In Table 2, comparisons are drawn with respect to target domain accuracy on six pairs of source-target domain tasks. We observe that the prediction step has mild benefits on the "easy" adaptation tasks with very similar source and target domain samples. However, on the transfer learning tasks of AMAZON-to-WEBCAM, WEBCAM-to-AMAZON, and DSLR-to-AMAZON, which have noticeably distinct data samples, the extra prediction step gives an absolute improvement of \(1.3 - 6.9\%\) in predicting target domain labels. Table 2: Comparison of target domain accuracy on the OFFICE dataset.
<table><tr><td>Method</td><td>AMAZON → WEBCAM</td><td>WEBCAM → AMAZON</td><td>DSLR → WEBCAM</td><td>WEBCAM → DSLR</td><td>AMAZON → DSLR</td><td>DSLR → AMAZON</td></tr><tr><td>DANN (Ganin &amp; Lempitsky, 2015)</td><td>73.4</td><td>51.6</td><td>95.5</td><td>99.4</td><td>76.5</td><td>51.7</td></tr><tr><td>DANN + prediction</td><td>74.7</td><td>58.5</td><td>96.1</td><td>99.0</td><td>73.5</td><td>57.6</td></tr></table>

### 5.4 FAIR CLASSIFIER

Finally, we consider the task of learning fair feature representations (Mathieu et al., 2016; Edwards & Storkey, 2016; Louizos et al., 2016) such that the final learned classifier does not discriminate with respect to a sensitive variable. As proposed in Edwards & Storkey (2016), one way to measure fairness is using discrimination, \[y_{disc} = \left|\frac{1}{N_0}\sum_{i:s_i = 0}\eta (x_i) - \frac{1}{N_1}\sum_{i:s_i = 1}\eta (x_i)\right|. \quad (8)\] Here \(s_i\) is a binary sensitive variable for the \(i^{th}\) data sample and \(N_k\) denotes the total number of samples belonging to the \(k^{th}\) sensitive class. Similar to the domain adaptation task, the learning of each classifier can be formulated as a minimax problem in (5) (Edwards & Storkey, 2016; Mathieu <--- Page Split ---> et al., 2016). Unlike the previous examples though, this task has a model selection component. From a pool of hundreds of randomly generated adversarial deep nets, for each value of \(t\), one selects the model that maximizes the difference \[y_{t,\Delta} = y_{acc} - t \cdot y_{disc}. \quad (9)\] The "Adult" dataset from the UCI machine learning repository is used. The task \((y_{acc})\) is to classify whether a person earns \(\geq \$ 50\mathrm{k}\)/year. The person's gender is chosen to be the sensitive variable. Details are in the supplementary material. To demonstrate the advantage of using prediction for model selection, we follow the protocol developed in Edwards & Storkey (2016). In this work, the search space is restricted to a class of models that consist of a fully connected autoencoder, one task-specific discriminator, and one adversarial discriminator. The encoder output from the autoencoder acts as input to both discriminators. In our experiment, 100 models are randomly selected. During the training of each adversarial model, \(\mathcal{L}_{d}\) is a cross-entropy loss while \(\mathcal{L}_{y}\) is a linear combination of reconstruction and cross-entropy losses. Once all the models are trained, the best model for each value of \(t\) is selected by evaluating (9) on the validation set. Figure 6a plots the results on the test set for the AFLR approach with and without prediction steps in its default Adam solver. For each value of \(t\), Figures 6b and 6c also compare the number of layers in the selected encoder and discriminator networks. When using prediction for training, relatively stronger encoder models are produced and selected during validation, and hence the prediction results generalize better on the test set. ![](images/8_0.jpg) <center>Figure 6: Model selection for learning a fair classifier. (a) Comparison of \(y_{t,\Delta}\) (higher is better), and also \(y_{disc}\) (lower is better) and \(y_{acc}\) on the test set using AFLR with and without predictive steps. (b) Number of encoder layers in the selected model. (c) Number of discriminator layers (both adversarial and task-specific) in the selected model.
</center> ## 6 CONCLUSION We present a simple modification to the alternating SGD method, called a prediction step, that improves the stability of adversarial networks. We present theoretical results showing that the prediction step is asymptotically stable for solving saddle point problems. We show, using a variety of test problems, that prediction steps prevent network collapse and enable training with a wider range of learning rates than plain SGD methods. ## ACKNOWLEDGMENTS The work of T. Goldstein was supported by the US Office of Naval Research under grant N00014- 17- 1- 2078, the US National Science Foundation (NSF) under grant CCF- 1535902, and by the Sloan Foundation. A. Yadav and D. Jacobs were supported by the National Science Foundation under grant no. IIS- 1526234 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014- 14071600012. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of <--- Page Split ---> the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. ## REFERENCES Martín Abadi and David G Andersen. Learning to protect communications with adversarial neural cryptography. arXiv preprint arXiv:1610.06918, 2016. Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In ICML, 2017. Andrew Brock, Theodore Lim, JM Ritchie, and Nick Weston. Neural photo editing with introspective adversarial networks. In ICLR, 2017. Antonin Chambolle and Thomas Pock. A first- order primal- dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120- 145, 2011. Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. In ICLR, 2017. Yunmei Chen, Guanghui Lan, and Yuyuan Ouyang. Optimal primal- dual methods for a class of saddle point problems. SIAM Journal on Optimization, 24(4):1779- 1814, 2014. Cong Dang and Guanghui Lan. Randomized first- order methods for saddle point optimization. arXiv preprint arXiv:1409.8625, 2014. Emily Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, 2015. Simon S Du, Jianshu Chen, Lihong Li, Lin Xiao, and Dengyong Zhou. Stochastic variance reduction methods for policy evaluation. ICML, 2017. Harrison Edwards and Amos Storkey. Censoring representations with an adversary. In ICLR, 2016. Ernie Esser, Xiaoqun Zhang, and Tony Chan. A general framework for a class of first order primal- dual algorithms for tv minimization. UCLA CAM Report, pp. 09- 67, 2009. Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1180- 1189, 2015. Tom Goldstein, Min Li, and Xiaoming Yuan. Adaptive primal- dual splitting methods for statistical learning and image processing. In NIPS, pp. 2089- 2097, 2015. Ian Goodfellow, Jean Pouget- Abadie, Mehdi Mirza, Bing Xu, David Warde- Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pp. 2672- 2680, 2014. 
Ihsaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028, 2017. Jonathan Ho, Jayesh Gupta, and Stefano Ermon. Model- free imitation learning with policy optimization. In International Conference on Machine Learning, pp. 2760- 2769, 2016. Xun Huang, Yixuan Li, Omid Poursaeed, John Hopcroft, and Serge Belongie. Stacked generative adversarial networks. In CVPR, 2017. Phillip Isola, Jun- Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image- to- image translation with conditional adversarial networks. In CVPR, 2017. Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM Multimedia, pp. 675- 678, 2014. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009. <--- Page Split ---> Guanghui Lan and Yi Zhou. An optimal randomized incremental gradient method. arXiv preprint arXiv:1507.02000, 2015. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient- based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278- 2324, 1998. Yujia Li, Kevin Swersky, and Richard S Zemel. Generative moment matching networks. In ICML, pp. 1718- 1727, 2015. Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. In ICLR, 2016. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. In ICLR, 2016. Michael F Mathieu, Junbo Jake Zhao, Junbo Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann LeCun. Disentangling factors of variation in deep representation using adversarial training. In NIPS, pp. 5041- 5049, 2016. Luke Metz, Ben Poole, David Pfau, and Jascha Sohl- Dickstein. Unrolled generative adversarial networks. In ICLR, 2017. Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on optimization, 19(4):1574- 1609, 2009. Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier gans. In ICLR, 2017. Balamurugan Palaniappan and Francis Bach. Stochastic variance reduction methods for saddle- point problems. In NIPS, pp. 1408- 1416, 2016. Linbo Qiao, Tianyi Lin, Yu- Gang Jiang, Fan Yang, Wei Liu, and Xicheng Lu. On stochastic primal- dual hybrid gradient approach for compositely regularized minimization. In ECAI, 2016. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016. Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. ECCV, pp. 213- 226, 2010. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, pp. 2234- 2242, 2016. Atsushi Shibagaki and Ichiro Takeuchi. Stochastic primal dual coordinate method with non- uniform sampling based on optimality violations. arXiv preprint arXiv:1703.07056, 2017. Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross- domain image generation. In ICLR, 2017. 
Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In ICLR Workshop, 2017. Jialei Wang and Lin Xiao. Exploiting strong convexity from data with primal- dual first- order algorithms. ICML, 2017. Mengdi Wang and Yichen Chen. An online primal- dual method for discounted markov decision processes. In Decision and Control (CDC), 2016 IEEE 55th Conference on, pp. 4516- 4521. IEEE, 2016. Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, pp. 318- 335, 2016. <--- Page Split ---> Adams Wei Yu, Qihang Lin, and Tianbao Yang. Doubly stochastic primal- dual coordinate method for empirical risk minimization and bilinear saddle- point problem. arXiv preprint arXiv:1508.03390, 2015. Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris Metaxas. Stackgan: Text to photo- realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017. Yuchen Zhang and Xiao Lin. Stochastic primal- dual coordinate method for regularized empirical risk minimization. In ICML, pp. 353- 361, 2015. Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy- based generative adversarial network. In ICLR, 2017. Mingqiang Zhu and Tony Chan. An efficient primal- dual hybrid gradient algorithm for total variation image restoration. UCLA CAM Report, pp. 08- 34, 2008. Zhanxing Zhu and Amos J Storkey. Adaptive stochastic primal- dual coordinate descent for separable saddle point problems. In ECML- PKDD, pp. 645- 658, 2015. Zhanxing Zhu and Amos J Storkey. Stochastic parallel block coordinate descent for large- scale saddle point problems. In AAAI, 2016. <--- Page Split ---> ## APPENDIX ## A DETAILED DERIVATION OF THE HARMONIC OSCILLATOR EQUATION Here, we provide a detailed derivation of the harmonic oscillator behavior of Algorithm (3) on the simple bi- linear saddle of the form \[\mathcal{L}(x,y) = y^{T}Kx\] where \(K\) is a matrix. Note that, within a small neighborhood of a saddle, all smooth weakly convex objective functions behave like (6). To see why, consider a smooth objective function \(\mathcal{L}\) with a saddle point at \(x^{*} = 0\) , \(y^{*} = 0\) . Within a small neighborhood of the saddle, we can approximate the function \(\mathcal{L}\) to high accuracy using its Taylor approximation \[\mathcal{L}(x,y)\approx \mathcal{L}(x^{*},y^{*}) + y^{T}\mathcal{L}_{x y}^{\prime}x + O(\| x\|^{3} + \| y\|^{3})\] where \(\mathcal{L}_{x y}^{\prime}\) denotes the matrix of mixed- partial derivatives with respect to \(x\) and \(y\) . Note that the first- order terms have vanished from this Taylor approximation because the gradients are zero at a saddle point. The \(O(\| x\|^{2})\) and \(O(\| y\|^{2})\) terms vanish as well because the problem is assumed to be weakly convex around the saddle. Up to third- order error (which vanishes quickly near the saddle), this Taylor expansion has the form (6). For this reason, stability on saddles of the form (6) is a necessary condition for convergence of (3), and the analysis here describes the asymptotic behavior of the prediction method on any smooth problem for which the method converges. We will show that, as the learning rate gets small, the iterates of the non- prediction method (2) rotate in orbits around the saddle without converging. In contrast, the iterates of the prediction method fall into the saddle and converge. 
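Before walking through the derivation, the orbit-versus-damping contrast is easy to check numerically. The short NumPy sketch below (our illustration, not the authors' code) iterates both update rules on \(\mathcal{L}(x,y) = y^{T}Kx\) and reports the distance from the saddle at the origin:

```python
# Numerically compare method (2) (orbits) with method (3) (damped convergence)
# on the bilinear saddle L(x, y) = y^T K x. Our illustration.
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((4, 4))
alpha = beta = 0.05

def run(predict, steps=2000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    for _ in range(steps):
        x_new = x - alpha * (K.T @ y)                 # gradient descent in x
        x_bar = 2 * x_new - x if predict else x_new   # prediction step (3)
        y = y + beta * (K @ x_bar)                    # gradient ascent in y
        x = x_new
    return np.linalg.norm(x) + np.linalg.norm(y)

print("without prediction:", run(False))  # orbits near its starting distance
print("with prediction:   ", run(True))   # decays toward the saddle at 0
```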
When the conventional gradient method (2) is applied to the linear problem (6), the resulting iterations can be written \[\frac{x^{k + 1} - x^{k}}{\alpha} = -K^{T}y^{k},\qquad \frac{y^{k + 1} - y^{k}}{\alpha} = (\beta /\alpha)Kx^{k + 1}.\] When the stepsize \(\alpha\) gets small, this behaves like a discretization of the differential equations \[\dot{x} = -K^{T}y \quad (10)\] \[\dot{y} = (\beta /\alpha)Kx \quad (11)\] where \(\dot{x}\) and \(\dot{y}\) denote the derivatives of \(x\) and \(y\) with respect to time. The differential equations (10, 11) describe a harmonic oscillator. To see why, differentiate (10) and plug (11) into the result to get a differential equation in \(x\) alone \[\ddot{x} = -K^{T}\dot{y} = -(\beta /\alpha)K^{T}Kx. \quad (12)\] We can decompose this into a system of independent single-variable problems by considering the eigenvalue decomposition \((\beta /\alpha)K^{T}K = U\Sigma U^{T}\). We now multiply both sides of (12) by \(U^{T}\), and make the change of variables \(z \leftarrow U^{T}x\) to get \[\ddot{z} = -\Sigma z,\] where \(\Sigma\) is diagonal. This is the standard equation for undamped harmonic motion, and its solution is \(z = A\cos (\Sigma^{1 / 2}t + \phi)\), where \(\cos\) acts entry-wise, and the diagonal matrix \(A\) and vector \(\phi\) are constants that depend only on the initialization. Changing back into the variable \(x\), we get the solution \[x = UA\cos (\Sigma^{1 / 2}t + \phi).\] We can see that, for small values of \(\alpha\) and \(\beta\), the non-predictive algorithm (2) approximates an undamped harmonic motion, and the solutions orbit around the saddle without converging. The prediction step (3) improves convergence because it produces damped harmonic motion that sinks into the saddle point. When applied to the linearized problem (6), the iterates of the predictive method (3) satisfy \[\frac{x^{k + 1} - x^{k}}{\alpha} = -K^{T}y^{k}\] \[\frac{y^{k + 1} - y^{k}}{\alpha} = (\beta /\alpha)K(x^{k + 1} + x^{k + 1} - x^{k}) = (\beta /\alpha)Kx^{k + 1} + \beta K\frac{x^{k + 1} - x^{k}}{\alpha}.\] <--- Page Split ---> For small \(\alpha\), this approximates the dynamical system \[\dot{x} = -K^{T}y \quad (13)\] \[\dot{y} = (\beta /\alpha)K(x + \alpha \dot{x}). \quad (14)\] Like before, we differentiate (13) and use (14) to obtain \[\ddot{x} = -K^{T}\dot{y} = -(\beta /\alpha)K^{T}(Kx + \alpha \dot{x}) = -(\beta /\alpha)K^{T}Kx - \beta K^{T}K\dot{x}. \quad (15)\] Finally, multiply both sides by \(U^{T}\) and perform the change of variables \(z \leftarrow U^{T}x\) to get \[\ddot{z} = -\Sigma z - \alpha \Sigma \dot{z}.\] This equation describes a damped harmonic motion. The solutions have the form \(z(t) = A\exp (-\frac{t\alpha}{2}\sqrt{\Sigma})\sin (t\sqrt{(1 - \alpha^{2} / 4)\Sigma} +\phi)\). Changing back to the variable \(x\), we see that the iterates of the original method satisfy \[x(t) = UA\exp (-\frac{t\alpha}{2}\sqrt{\Sigma})\sin (t\sqrt{(1 - \alpha^{2} / 4)\Sigma} +\phi),\] where \(A\) and \(\phi\) depend on the initialization. From this analysis, we see that for small constant \(\alpha\) the orbits of the lookahead method converge into the saddle point, and the error decays exponentially fast. ## B PROOF OF THEOREM 1 Assume the optimal solution \((u^{*},v^{*})\) exists; then \(\mathcal{L}_{u}^{\prime}(u^{*},v) = \mathcal{L}_{v}^{\prime}(u,v^{*}) = 0\).
## B PROOF OF THEOREM 1

Assume the optimal solution \((u^{*},v^{*})\) exists; at the saddle point, \(\mathcal{L}_{u}^{\prime}(u^{*},v^{*}) = \mathcal{L}_{v}^{\prime}(u^{*},v^{*}) = 0\). In the following proofs, we use \(g_{u}(u,v)\), \(g_{v}(u,v)\) to represent the stochastic approximations of the gradients, where \(\mathbb{E}[g_{u}(u,v)] = \mathcal{L}_{u}^{\prime}(u,v)\) and \(\mathbb{E}[g_{v}(u,v)] = \mathcal{L}_{v}^{\prime}(u,v)\). We show the convergence of the proposed stochastic primal-dual gradients for the primal-dual gap \(P(u^{k},v^{k}) = \mathcal{L}(u^{k},v^{*}) - \mathcal{L}(u^{*},v^{k})\). We prove the \(O(1/\sqrt{k})\) convergence rate in Theorem 1 by using Lemma 1 and Lemma 2, which present the contraction of the primal and dual updates, respectively.

Lemma 1. Suppose \(\mathcal{L}(u,v)\) is convex in \(u\) and \(\mathbb{E}[\| g_{u}(u,v)\|^{2}]\leq G_{u}^{2}\). Then

\[\mathbb{E}[\mathcal{L}(u^{k},v^{k})] - \mathbb{E}[\mathcal{L}(u^{*},v^{k})]\leq \frac{1}{2\alpha_{k}}\left(\mathbb{E}[\| u^{k} - u^{*}\|^{2}] - \mathbb{E}[\| u^{k + 1} - u^{*}\|^{2}]\right) + \frac{\alpha_{k}}{2} G_{u}^{2}. \quad (16)\]

Proof. Using the primal update in (3), we have

\[\| u^{k + 1} - u^{*}\|^{2} = \| u^{k} - \alpha_{k}g_{u}(u^{k},v^{k}) - u^{*}\|^{2} \quad (17)\]
\[= \| u^{k} - u^{*}\|^{2} - 2\alpha_{k}\left\langle g_{u}(u^{k},v^{k}),u^{k} - u^{*}\right\rangle +\alpha_{k}^{2}\| g_{u}(u^{k},v^{k})\|^{2}. \quad (18)\]

Take expectations on both sides of the equation, substitute \(\mathbb{E}[g_{u}(u,v)] = \mathcal{L}_{u}^{\prime}(u,v)\), and apply \(\mathbb{E}[\| g_{u}(u,v)\|^{2}]\leq G_{u}^{2}\) to get

\[\mathbb{E}[\| u^{k + 1} - u^{*}\|^{2}]\leq \mathbb{E}[\| u^{k} - u^{*}\|^{2}] - 2\alpha_{k}\mathbb{E}[\langle \mathcal{L}_{u}^{\prime}(u^{k},v^{k}),u^{k} - u^{*}\rangle ] + \alpha_{k}^{2}G_{u}^{2}. \quad (19)\]

Since \(\mathcal{L}(u,v)\) is convex in \(u\), we have

\[\langle \mathcal{L}_{u}^{\prime}(u^{k},v^{k}),u^{k} - u^{*}\rangle \geq \mathcal{L}(u^{k},v^{k}) - \mathcal{L}(u^{*},v^{k}). \quad (20)\]

(16) is proved by combining (19) and (20).

Lemma 2. Suppose \(\mathcal{L}(u,v)\) is concave in \(v\); that the partial gradient \(\mathcal{L}_{v}^{\prime}\) is Lipschitz in \(u\), \(\| \mathcal{L}_{v}^{\prime}(u_{1},v) - \mathcal{L}_{v}^{\prime}(u_{2},v)\| \leq L_{v}\| u_{1} - u_{2}\|\); that the stochastic gradients have bounded second moments, \(\mathbb{E}[\| g_{u}(u,v)\|^{2}]\leq G_{u}^{2}\), \(\mathbb{E}[\| g_{v}(u,v)\|^{2}]\leq G_{v}^{2}\); and that \(\mathbb{E}[\| v^{k} - v^{*}\|^{2}]\leq D_{v}^{2}\). Then

\[\mathbb{E}[\mathcal{L}(u^{k},v^{*})] - \mathbb{E}[\mathcal{L}(u^{k},v^{k})]\leq \frac{1}{2\beta_{k}}\left(\mathbb{E}[\| v^{k} - v^{*}\|^{2}] - \mathbb{E}[\| v^{k + 1} - v^{*}\|^{2}]\right) + \frac{\beta_{k}}{2} G_{v}^{2} + \alpha_{k}L_{v}\left(G_{u}^{2} + D_{v}^{2}\right). \quad (21)\]

Proof. From the dual update in (3), we have

\[\| v^{k + 1} - v^{*}\|^{2} = \| v^{k} + \beta_{k}g_{v}(\bar{u}^{k + 1},v^{k}) - v^{*}\|^{2} \quad (22)\]
\[= \| v^{k} - v^{*}\|^{2} + 2\beta_{k}\left\langle g_{v}(\bar{u}^{k + 1},v^{k}),v^{k} - v^{*}\right\rangle +\beta_{k}^{2}\| g_{v}(\bar{u}^{k + 1},v^{k})\|^{2}. \quad (23)\]

Take expectations on both sides of the equation, substitute \(\mathbb{E}[g_{v}(u,v)] = \mathcal{L}_{v}^{\prime}(u,v)\), and apply \(\mathbb{E}[\| g_{v}(u,v)\|^{2}]\leq G_{v}^{2}\) to get

\[\mathbb{E}[\| v^{k + 1} - v^{*}\|^{2}]\leq \mathbb{E}[\| v^{k} - v^{*}\|^{2}] + 2\beta_{k}\mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}),v^{k} - v^{*}\rangle ] + \beta_{k}^{2}G_{v}^{2}. \quad (24)\]
Reorganize (24) to get

\[\mathbb{E}[\| v^{k + 1} - v^{*}\|^{2}] - \mathbb{E}[\| v^{k} - v^{*}\|^{2}] - \beta_{k}^{2}G_{v}^{2}\leq 2\beta_{k}\mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}),v^{k} - v^{*}\rangle ]. \quad (25)\]

The right-hand side of (25) can be rewritten as

\[\mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}),v^{k} - v^{*}\rangle ] = \mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}) - \mathcal{L}_{v}^{\prime}(u^{k},v^{k}) + \mathcal{L}_{v}^{\prime}(u^{k},v^{k}),v^{k} - v^{*}\rangle ] \quad (26)\]
\[= \mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}) - \mathcal{L}_{v}^{\prime}(u^{k},v^{k}),v^{k} - v^{*}\rangle ] + \mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(u^{k},v^{k}),v^{k} - v^{*}\rangle ], \quad (27)\]

where

\[\mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}) - \mathcal{L}_{v}^{\prime}(u^{k},v^{k}),v^{k} - v^{*}\rangle ] \leq \mathbb{E}[\| \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}) - \mathcal{L}_{v}^{\prime}(u^{k},v^{k})\| \, \| v^{k} - v^{*}\| ] \quad (28)\]
\[\leq \mathbb{E}[L_{v}\| \bar{u}^{k + 1} - u^{k}\| \, \| v^{k} - v^{*}\| ] \quad (29)\]
\[= \mathbb{E}[2L_{v}\| u^{k + 1} - u^{k}\| \, \| v^{k} - v^{*}\| ] \quad (30)\]
\[= \mathbb{E}[2L_{v}\| \alpha_{k}g_{u}(u^{k},v^{k})\| \, \| v^{k} - v^{*}\| ] \quad (31)\]
\[\leq L_{v}\alpha_{k}\mathbb{E}[\| g_{u}(u^{k},v^{k})\|^{2} + \| v^{k} - v^{*}\|^{2}] \quad (32)\]
\[\leq L_{v}\alpha_{k}(G_{u}^{2} + D_{v}^{2}). \quad (33)\]

The Cauchy-Schwarz inequality gives (28); Lipschitz smoothness gives (29); the prediction step in (3), which implies \(\bar{u}^{k + 1} - u^{k} = 2(u^{k + 1} - u^{k})\), gives (30); the primal update in (3) gives (31); the inequality \(2ab \leq a^{2} + b^{2}\) gives (32); and the boundedness assumptions give (33). Since \(\mathcal{L}(u,v)\) is concave in \(v\), we have

\[\langle \mathcal{L}_{v}^{\prime}(u^{k},v^{k}),v^{k} - v^{*}\rangle \leq \mathcal{L}(u^{k},v^{k}) - \mathcal{L}(u^{k},v^{*}). \quad (34)\]

Combine (25), (27), (33), and (34) to get

\[\frac{1}{2\beta_{k}}\left(\mathbb{E}[\| v^{k + 1} - v^{*}\|^{2}] - \mathbb{E}[\| v^{k} - v^{*}\|^{2}]\right) - \frac{\beta_{k}}{2} G_{v}^{2} \leq L_{v}\alpha_{k}\left(G_{u}^{2} + D_{v}^{2}\right) + \mathbb{E}[\mathcal{L}(u^{k},v^{k})] - \mathbb{E}[\mathcal{L}(u^{k},v^{*})]. \quad (35)\]

Rearranging (35) yields (21).

We now present the proof of Theorem 1.

Proof.
Combining (16) and (21) from the Lemmas, the primal-dual gap \(P(u^{k},v^{k}) = \mathcal{L}(u^{k},v^{*}) - \mathcal{L}(u^{*},v^{k})\) satisfies

\[\mathbb{E}[P(u^{k},v^{k})]\leq \frac{1}{2\alpha_{k}}\left(\mathbb{E}[\| u^{k} - u^{*}\|^{2}] - \mathbb{E}[\| u^{k + 1} - u^{*}\|^{2}]\right) + \frac{\alpha_{k}}{2} G_{u}^{2} +\frac{1}{2\beta_{k}}\left(\mathbb{E}[\| v^{k} - v^{*}\|^{2}] - \mathbb{E}[\| v^{k + 1} - v^{*}\|^{2}]\right) + \frac{\beta_{k}}{2} G_{v}^{2} + \alpha_{k}L_{v}\left(G_{u}^{2} + D_{v}^{2}\right). \quad (36)\]

Accumulate (36) from \(k = 1,\ldots ,l\) to obtain

\[\sum_{k = 1}^{l}\mathbb{E}[P(u^{k},v^{k})]\leq \frac{1}{2\alpha_{1}}\mathbb{E}[\| u^{1} - u^{*}\|^{2}] + \sum_{k = 2}^{l}\left(\frac{1}{2\alpha_{k}} -\frac{1}{2\alpha_{k - 1}}\right)\mathbb{E}[\| u^{k} - u^{*}\|^{2}] + \sum_{k = 1}^{l}\alpha_{k}\left(\frac{G_{u}^{2}}{2} +L_{v}G_{u}^{2} + L_{v}D_{v}^{2}\right) +\frac{1}{2\beta_{1}}\mathbb{E}[\| v^{1} - v^{*}\|^{2}] + \sum_{k = 2}^{l}\left(\frac{1}{2\beta_{k}} -\frac{1}{2\beta_{k - 1}}\right)\mathbb{E}[\| v^{k} - v^{*}\|^{2}] + \sum_{k = 1}^{l}\beta_{k}\frac{G_{v}^{2}}{2}. \quad (37)\]

Assuming \(\mathbb{E}[\|u^{k} - u^{*}\|^{2}]\leq D_{u}^{2}\) and \(\mathbb{E}[\|v^{k} - v^{*}\|^{2}]\leq D_{v}^{2}\) are bounded, we have

\[\sum_{k = 1}^{l}\mathbb{E}[P(u^{k},v^{k})]\leq \frac{1}{2\alpha_{1}} D_{u}^{2} + \sum_{k = 2}^{l}\left(\frac{1}{2\alpha_{k}} -\frac{1}{2\alpha_{k - 1}}\right)D_{u}^{2} + \sum_{k = 1}^{l}\alpha_{k}\left(\frac{G_{u}^{2}}{2} +L_{v}G_{u}^{2} + L_{v}D_{v}^{2}\right) +\frac{1}{2\beta_{1}} D_{v}^{2} + \sum_{k = 2}^{l}\left(\frac{1}{2\beta_{k}} -\frac{1}{2\beta_{k - 1}}\right)D_{v}^{2} + \sum_{k = 1}^{l}\beta_{k}\frac{G_{v}^{2}}{2}. \quad (38)\]

Since \(\alpha_{k},\beta_{k}\) are decreasing, \(\sum_{k = 1}^{l}\alpha_{k}\leq C_{\alpha}\sqrt{l + 1}\), and \(\sum_{k = 1}^{l}\beta_{k}\leq C_{\beta}\sqrt{l + 1}\), we have

\[\sum_{k = 1}^{l}\mathbb{E}[P(u^{k},v^{k})]\leq \frac{\sqrt{l}}{2}\left(\frac{D_{u}^{2}}{C_{\alpha}} +\frac{D_{v}^{2}}{C_{\beta}}\right) + \sqrt{l + 1}\left(\frac{C_{\alpha}G_{u}^{2}}{2} +C_{\alpha}L_{v}G_{u}^{2} + C_{\alpha}L_{v}D_{v}^{2} + \frac{C_{\beta}G_{v}^{2}}{2}\right). \quad (39)\]

For \(\hat{u}^{l} = \frac{1}{l}\sum_{k = 1}^{l}u^{k}\) and \(\hat{v}^{l} = \frac{1}{l}\sum_{k = 1}^{l}v^{k}\), because \(\mathcal{L}(u,v)\) is convex-concave, we have

\[\mathbb{E}[P(\hat{u}^{l},\hat{v}^{l})] = \mathbb{E}[\mathcal{L}(\hat{u}^{l},v^{*}) - \mathcal{L}(u^{*},\hat{v}^{l})] \quad (40)\]
\[\leq \mathbb{E}\left[\frac{1}{l}\sum_{k = 1}^{l}\left(\mathcal{L}(u^{k},v^{*}) - \mathcal{L}(u^{*},v^{k})\right)\right] \quad (41)\]
\[= \frac{1}{l}\sum_{k = 1}^{l}\mathbb{E}[\mathcal{L}(u^{k},v^{*}) - \mathcal{L}(u^{*},v^{k})] \quad (42)\]
\[= \frac{1}{l}\sum_{k = 1}^{l}\mathbb{E}[P(u^{k},v^{k})]. \quad (43)\]

Combine (39) and (43) to prove

\[\mathbb{E}[P(\hat{u}^{l},\hat{v}^{l})]\leq \frac{1}{2\sqrt{l}}\left(\frac{D_{u}^{2}}{C_{\alpha}} +\frac{D_{v}^{2}}{C_{\beta}}\right) + \frac{\sqrt{l + 1}}{l}\left(\frac{C_{\alpha}G_{u}^{2}}{2} +C_{\alpha}L_{v}G_{u}^{2} + C_{\alpha}L_{v}D_{v}^{2} + \frac{C_{\beta}G_{v}^{2}}{2}\right). \quad (44)\]
## C MNIST TOY EXAMPLE

Experimental details: We consider the classic MNIST digits dataset (LeCun et al., 1998), consisting of 60,000 training images and 10,000 testing images, each of size \(28\times 28\). For simplicity, we consider the task (T1) of classifying images into odd and even digits. Roughly \(50\%\) of the data instances were corrupted using salt-and-pepper noise of probability 0.2, and this distortion process was biased: only \(30\%\) of even-numbered images were distorted, as against \(70\%\) of odd-numbered images. We observed that any feature representation network \(\theta_{f}\) trained using the binary classification loss for task T1 also encodes this noise bias. This was verified by training an independent noise classifier on the learned features. This led us to design a simple adversarial network to "unlearn" the noise bias from the feature learning pipeline.

We formulate this using the minimax objective in (5). In our model, \(\mathcal{L}_{d}\) is a softmax loss for the task (T2) of classifying whether the input sample is noisy or not. \(\mathcal{L}_{y}\) is a softmax loss for task T1, and \(\lambda = 1\). A LeNet network (LeCun et al., 1998) is trained on task T1, while a two-layer MLP is trained on task T2. LeNet consists of two convolutional (conv) layers followed by two fully connected (FC) layers at the top. The parameters of the conv layers form \(\theta_{f}\), while those of the FC and MLP layers form \(\theta_{y}\) and \(\theta_{d}\), respectively.

We train the network in three stages. Following the training on task T1, \(\theta_{f}\) was fixed and the MLP was trained independently on task T2. The default learning schedule of the LeNet implementation in Caffe (Jia et al., 2014) was followed for both tasks, with the total number of training iterations on each task set to 10,000. After pretraining, the whole network is jointly finetuned using the adversarial approach: (5) is alternately minimized w.r.t. \(\theta_{f}\), \(\theta_{y}\) and maximized w.r.t. \(\theta_{d}\). The predictive steps were only used during the finetuning phase.

Our findings are summarized in Figure 3. In addition, Figure 7 provides a head-to-head comparison of two popular solvers, Adam and SGD, using the predictive step. Not surprisingly, the Adam solver shows relatively better performance and convergence even with an additional predictive step. This also suggests that the default hyper-parameters of the Adam solver can be retained and utilized for training these networks without resorting to any further hyper-parameter tuning (as is currently the practice).

![](images/16_0.jpg)
<center>Figure 7: Comparison of the classification accuracy of parity classification and noise discrimination using the SGD and Adam solvers with and without prediction step. </center>
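For readers who prefer code, the following PyTorch-style sketch shows one finetuning iteration of the minimax objective (5) with a prediction step applied to the feature weights \(\theta_{f}\) (the experiments above also explore predicting \(\theta_{d}\)). The module names (`features`, `classifier`, `discriminator`) and the optimizers are illustrative assumptions; the experiments use Caffe, so this is a sketch of the update rule, not the actual implementation.

```python
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def adversarial_step(features, classifier, discriminator,
                     opt_min, opt_max, x, y_parity, y_noise, lam=1.0):
    """One alternating step of (5): min over (theta_f, theta_y), max over theta_d,
    with the prediction step (3) applied to theta_f."""
    ce = torch.nn.functional.cross_entropy

    # Minimization step in (theta_f, theta_y): L_y - lambda * L_d.
    theta_f_old = parameters_to_vector(features.parameters()).detach().clone()
    feats = features(x)
    loss_min = ce(classifier(feats), y_parity) - lam * ce(discriminator(feats), y_noise)
    opt_min.zero_grad(); loss_min.backward(); opt_min.step()

    # Prediction step: theta_f_bar = theta_f_new + (theta_f_new - theta_f_old).
    theta_f_new = parameters_to_vector(features.parameters()).detach().clone()
    vector_to_parameters(2 * theta_f_new - theta_f_old, features.parameters())

    # Maximization step in theta_d, evaluated at the predicted theta_f_bar
    # (ascent on -lambda * L_d is descent on L_d).
    loss_d = ce(discriminator(features(x).detach()), y_noise)
    opt_max.zero_grad(); loss_d.backward(); opt_max.step()

    # Restore the true (non-predicted) feature weights.
    vector_to_parameters(theta_f_new, features.parameters())
```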
## D DOMAIN ADAPTATION

Experimental details: To evaluate a domain adaptation task, we consider the OFFICE dataset (Saenko et al., 2010). OFFICE is a small-scale dataset consisting of images collected from three distinct domains: AMAZON, DSLR, and WEBCAM. For such a small-scale dataset, it is non-trivial to learn features from images of a single domain. For instance, consider the largest subset, AMAZON, which contains only 2,817 labeled images spread across 31 different categories. However, one can leverage the power of domain adaptation to improve cross-domain accuracy. We follow the protocol listed in Ganin & Lempitsky (2015), and the same network architecture is used. Caffe (Jia et al., 2014) is used for the implementation. The training procedure from Ganin & Lempitsky (2015) is kept intact except for the additional prediction step. In Table 2, comparisons are drawn with respect to target domain accuracy on six pairs of source-target domain tasks. The test accuracy is reported at the end of 50,000 training iterations.

## E FAIR CLASSIFIER

Experimental details: The "Adult" dataset from the UCI machine learning repository is used, which consists of census data from \(\sim 45{,}000\) people. The task is to classify whether a person earns \(\geq \$50\)k/year. The person's gender is chosen to be the sensitive variable. We binarize all the category attributes, giving us a total of 102 input features per sample. We randomly split the data into 35,000 samples for training, 5,000 for validation, and 5,000 for testing. The results reported here are averages over five such random splits.

## F GENERATIVE ADVERSARIAL NETWORKS

Toy Dataset: To illustrate the advantage of the prediction method, we experiment on a simple GAN architecture with fully connected layers using a toy dataset. The constructed toy example and its architecture are inspired by the one presented in Metz et al. (2017). The two-dimensional data is sampled from a mixture of eight Gaussians with their means equally spaced around the unit circle centered at \((0,0)\). The standard deviation of each Gaussian is set at 0.01. The two-dimensional latent vector \(\mathbf{z}\) is sampled from a multivariate Gaussian distribution. The generator and discriminator networks consist of two fully connected hidden layers, each with 128 hidden units and tanh activations. The final layer of the generator has a linear activation, while that of the discriminator has a sigmoid activation. The solver optimizes both the discriminator and the generator network using the objective in (4). We use the Adam solver with its default parameters (i.e., learning rate \(= 0.001\), \(\beta_{1} = 0.9\), \(\beta_{2} = 0.999\)) and an input batch size of 512. The generated two-dimensional samples are plotted in Figure 8. The straightforward utilization of the Adam solver fails to construct all the modes of the underlying dataset, while both the unrolled GAN and our method are able to produce all the modes.

![](images/17_0.jpg)
<center>Figure 8: Comparison of GAN training algorithms on toy dataset. Results on, from top to bottom, GAN, GAN with G prediction, and unrolled GAN. </center>

We further investigate the performance of GAN training algorithms on data sampled from a mixture of a large number of Gaussians. We use 100 Gaussian modes which are equally spaced around a circle of radius 24 centered at \((0,0)\). We retain the same experimental settings as described above and train the GAN with two different input batch sizes, a small (64) and a large (6144) setting. Figure 9 plots the generated sample output of the GAN trained (for a fixed number of epochs) under the above setting using different training algorithms. Note that for the small batch size, the default as well as the unrolled training for GAN fails to construct the actual modes of the underlying dataset. We hypothesize that this is perhaps due to the batch size, 64, being smaller than the number of input modes (100). When trained with a small batch, the GAN observes samples from only a few input modes at every iteration. This causes instability, leading to the failure of the training algorithms. This scenario is pertinent to real datasets, wherein the number of modes is relatively high compared to the input batch size.

![](images/17_1.jpg)
<center>Figure 9: Comparison of GAN training algorithms on toy dataset of mixture of 100 Gaussians. Results on, from top to bottom, batch size of 64 and 6144. </center>
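The toy distribution itself is easy to reproduce. Below is a minimal NumPy sketch of the ring-of-Gaussians sampler described above (the function name and seed handling are our own):

```python
import numpy as np

def sample_ring_mixture(n, num_modes=8, radius=1.0, std=0.01, seed=None):
    """Sample n 2-D points from a mixture of `num_modes` Gaussians whose means
    are equally spaced on a circle of radius `radius` centered at (0, 0)."""
    rng = np.random.default_rng(seed)
    angles = 2.0 * np.pi * rng.integers(num_modes, size=n) / num_modes
    means = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return means + std * rng.standard_normal((n, 2))

batch = sample_ring_mixture(512)                                  # 8-mode setting above
batch_100 = sample_ring_mixture(512, num_modes=100, radius=24.0)  # 100-mode variant
```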
DCGAN Architecture details: For our experiments, we use the publicly available code for DCGAN (Radford et al., 2016) and their implementation for the Cifar-10 dataset. The random noise vector is 100-dimensional, and the output of the generator network is a \(64\times 64\) image with 3 channels.

## Additional DCGAN Results

![](images/18_0.jpg)
<center>Figure 10: Comparison of GAN training algorithms for DCGAN architecture on Cifar-10 image datasets. Using higher momentum, \(lr = 0.0002\), \(\beta_{1} = 0.9\). </center>

![](images/18_1.jpg)
<center>Figure 11: Comparison of GAN training algorithms for DCGAN architecture on Cifar-10 image datasets. \(lr = 0.0004\), \(\beta_{1} = 0.5\). </center>

![](images/19_0.jpg)
<center>Figure 12: Comparison of GAN training algorithms for DCGAN architecture on Cifar-10 image datasets. \(lr = 0.0006\), \(\beta_{1} = 0.5\). </center>

![](images/19_1.jpg)
<center>Figure 13: Comparison of GAN training algorithms for DCGAN architecture on Cifar-10 image datasets. \(lr = 0.0008\), \(\beta_{1} = 0.5\). </center>

Experiments on Imagenet: In this section, we demonstrate the advantage of prediction methods for generating higher-resolution images of size \(128\times 128\). For this purpose, the state-of-the-art AC-GAN (Odena et al., 2017) architecture is considered and conditionally learned using images of all 1000 classes from the Imagenet dataset. We used the publicly available code for AC-GAN, and all parameters were set to their defaults as in Odena et al. (2017). Figure 14 plots the inception score measured at every training epoch of the AC-GAN model with and without prediction. The score is averaged over five independent runs. From the figure, it is clear that even at higher resolution with a large number of classes, the prediction method is stable and aids in speeding up the training.

![](images/20_0.jpg)
<center>Figure 14: Comparison of Inception scores on high resolution Imagenet datasets measured at each training epoch of ACGAN model with and without prediction. </center>
## ABSTRACT

Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points, and is stable with a wider range of training parameters than a non-prediction method. This makes adversarial networks less likely to "collapse," and enables faster training with larger learning rates.

## 1 INTRODUCTION

Adversarial networks play an important role in a variety of applications, including image generation (Zhang et al., 2017; Wang & Gupta, 2016), style transfer (Brock et al., 2017; Taigman et al., 2017; Wang & Gupta, 2016; Isola et al., 2017), domain adaptation (Taigman et al., 2017; Tzeng et al., 2017; Ganin & Lempitsky, 2015), imitation learning (Ho et al., 2016), privacy (Edwards & Storkey, 2016; Abadi & Andersen, 2016), fair representation (Mathieu et al., 2016; Edwards & Storkey, 2016), etc. One particularly motivating application of adversarial nets is their ability to form generative models, as opposed to the classical discriminative models (Goodfellow et al., 2014; Radford et al., 2016; Denton et al., 2015; Mirza & Osindero, 2014).

While adversarial networks have the power to attack a wide range of previously unsolved problems, they suffer from a major flaw: they are difficult to train. This is because adversarial nets try to accomplish two objectives simultaneously; weights are adjusted to maximize performance on one task while minimizing performance on another. Mathematically, this corresponds to finding a saddle point of a loss function - a point that is minimal with respect to one set of weights, and maximal with respect to another.

Conventional neural networks are trained by marching down a loss function until a minimizer is reached (Figure 1a). In contrast, adversarial training methods search for saddle points rather than a minimizer, which introduces the possibility that the training path "slides off" the objective function and the loss goes to \(-\infty\) (Figure 1b), resulting in "collapse" of the adversarial network. As a result, many authors suggest using early stopping, gradient/weight clipping (Arjovsky et al., 2017), or specialized objective functions (Goodfellow et al., 2014; Zhao et al., 2017; Arjovsky et al., 2017) to maintain stability.

In this paper, we present a simple "prediction" step that is easily added to many training algorithms for adversarial nets. We present theoretical analysis showing that the proposed prediction method is asymptotically stable for a class of saddle point problems. Finally, we use a wide range of experiments to show that prediction enables faster training of adversarial networks using large learning rates without the instability problems that plague conventional training schemes.

![](images/1_0.jpg)
<center>Figure 1: A schematic depiction of gradient methods. (a) Classical networks are trained by marching down the loss function until a minimizer is reached.
Because classical loss functions are bounded from below, the solution path gets stopped when a minimizer is reached, and the gradient method remains stable. (b) Adversarial net loss functions may be unbounded from below, and training alternates between minimization and maximization steps. If minimization (or, conversely, maximization) is more powerful, the solution path "slides off" the loss surface and the algorithm becomes unstable, resulting in a sudden "collapse" of the network. </center>

## 2 PROPOSED METHOD

Saddle-point optimization problems have the general form

\[\min_{u}\max_{v}\mathcal{L}(u,v) \quad (1)\]

for some loss function \(\mathcal{L}\) and variables \(u\) and \(v\). Most authors use the alternating stochastic gradient method to solve saddle-point problems involving neural networks. This method alternates between updating \(u\) with a stochastic gradient descent step, and then updating \(v\) with a stochastic gradient ascent step. When simple/classical SGD updates are used, the steps of this method can be written

\[\begin{array}{r l r}{{u^{k+1}=u^{k}-\alpha_{k}\mathcal{L}_{u}^{\prime}(u^{k},v^{k})\quad|}}&{{}}&{\mathrm{gradient~descent~in~}u,\mathrm{~starting~at~}(u^{k},v^{k})}\\ {{v^{k+1}=v^{k}+\beta_{k}\mathcal{L}_{v}^{\prime}(u^{k+1},v^{k})\quad|}}&{{}}&{\mathrm{gradient~ascent~in~}v,\mathrm{~starting~at~}(u^{k+1},v^{k})}\end{array} \quad (2)\]

Here, \(\{\alpha_{k}\}\) and \(\{\beta_{k}\}\) are learning rate schedules for the minimization and maximization steps, respectively. The vectors \(\mathcal{L}_{u}^{\prime}(u,v)\) and \(\mathcal{L}_{v}^{\prime}(u,v)\) denote (possibly stochastic) gradients of \(\mathcal{L}\) with respect to \(u\) and \(v\). In practice, the gradient updates are often performed by an automated solver, such as the Adam optimizer (Kingma & Ba, 2015), and include momentum updates.

We propose to stabilize the training of adversarial networks by adding a prediction step. Rather than calculating \(v^{k + 1}\) using \(u^{k + 1}\), we first make a prediction, \(\bar{u}^{k + 1}\), about where the \(u\) iterates will be in the future, and use this predicted value to obtain \(v^{k + 1}\).

## Prediction Method

\[\begin{array}{r l r}{{u^{k+1}=u^{k}-\alpha_{k}\mathcal{L}_{u}^{\prime}(u^{k},v^{k})\quad|}}&{{}}&{\mathrm{gradient~descent~in~}u,\mathrm{~starting~at~}(u^{k},v^{k})}\\ {{\bar{u}^{k+1}=u^{k+1}+(u^{k+1}-u^{k})\quad|}}&{{}}&{\mathrm{predict~future~value~of~}u}\\ {{v^{k+1}=v^{k}+\beta_{k}\mathcal{L}_{v}^{\prime}(\bar{u}^{k+1},v^{k})\quad|}}&{{}}&{\mathrm{gradient~ascent~in~}v,\mathrm{~starting~at~}(\bar{u}^{k+1},v^{k}).}\end{array} \quad (3)\]

The prediction step (3) tries to estimate where \(u\) is going to be in the future by assuming its trajectory remains the same as in the current iteration.
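In code, the prediction step is a two-line change to the alternating loop. The sketch below assumes generic (possibly stochastic) gradient oracles `grad_u` and `grad_v` operating on NumPy-like arrays; the names are ours, and plain SGD updates are used as in (2) and (3).

```python
def alternating_sgd_with_prediction(u, v, grad_u, grad_v, alphas, betas):
    """Alternating SGD for min_u max_v L(u, v) with the prediction step (3).

    grad_u(u, v) and grad_v(u, v) return gradients of L with respect to u and v;
    alphas and betas are the step-size schedules {alpha_k} and {beta_k}.
    """
    for alpha, beta in zip(alphas, betas):
        u_new = u - alpha * grad_u(u, v)  # gradient descent in u
        u_bar = u_new + (u_new - u)       # predict the future value of u
        v = v + beta * grad_v(u_bar, v)   # gradient ascent in v at the prediction
        u = u_new
    return u, v
```

Replacing `u_bar` with `u_new` in the ascent step recovers the non-predictive method (2).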
## 3 BACKGROUND

### 3.1 ADVERSARIAL NETWORKS AS A SADDLE-POINT PROBLEM

We now discuss a few common adversarial network problems and their saddle-point formulations.

Generative Adversarial Networks (GANs) fit a generative model to a dataset using a game in which a generative model competes against a discriminator (Goodfellow et al., 2014). The generator, \(\mathbf{G}(\mathbf{z};\theta_{g})\), takes random noise vectors \(\mathbf{z}\) as inputs, and maps them onto points in the target data distribution. The discriminator, \(\mathbf{D}(\mathbf{x};\theta_{d})\), accepts a candidate point \(\mathbf{x}\) and tries to determine whether it is really drawn from the empirical distribution (in which case it outputs 1), or fabricated by the generator (output 0). During a training iteration, noise vectors from a Gaussian distribution \(\mathcal{G}\) are pushed through the generator network \(\mathbf{G}\) to form a batch of generated data samples, denoted by \(\mathcal{D}_{fake}\). A batch of empirical samples, \(\mathcal{D}_{real}\), is also prepared. One then tries to adjust the weights of each network to solve a saddle point problem, which is popularly formulated as

\[\min_{\theta_{g}}\max_{\theta_{d}}\quad \mathbb{E}_{x\sim \mathcal{D}_{real}}f(\mathbf{D}(\mathbf{x};\theta_{d})) + \mathbb{E}_{z\sim \mathcal{G}}f(1 - \mathbf{D}(\mathbf{G}(\mathbf{z};\theta_{g});\theta_{d})). \quad (4)\]

Here \(f(\cdot)\) is any monotonically increasing function. Initially, Goodfellow et al. (2014) proposed using \(f(x) = \log (x)\).

Domain Adversarial Networks (DANs) (Makhzani et al., 2016; Ganin & Lempitsky, 2015; Edwards & Storkey, 2016) take data collected from a "source" domain, and extract a feature representation that can be used to train models that generalize to another "target" domain. For example, in the domain adversarial neural network (DANN; Ganin & Lempitsky, 2015), a set of feature layers maps data points into an embedded feature space, and a classifier is trained on these embedded features. Meanwhile, the adversarial discriminator tries to determine, using only the embedded features, whether the data points belong to the source or target domain. A good embedding yields a better task-specific objective on the target domain while fooling the discriminator, and is found by solving

\[\min_{\theta_{f},\theta_{y^{k}}}\max_{\theta_{d}}\quad \sum_{k}\alpha_{k}\mathcal{L}_{y^{k}}\left(\mathbf{x}_{s};\theta_{f},\theta_{y^{k}}\right) - \lambda \mathcal{L}_{d}\left(\mathbf{x}_{s},\mathbf{x}_{t};\theta_{f},\theta_{d}\right). \quad (5)\]

Here \(\mathcal{L}_{d}\) is any adversarial discriminator loss function and \(\mathcal{L}_{y^{k}}\) denotes the task-specific loss. \(\theta_{f}\), \(\theta_{d}\), and \(\theta_{y^{k}}\) are the network parameters of the feature mapping, discriminator, and classification layers.

### 3.2 STABILIZING SADDLE POINT SOLVERS

It is well known that alternating stochastic gradient methods are unstable when using simple logarithmic losses. This led researchers to explore multiple directions for stabilizing GANs, either by adding regularization terms (Arjovsky et al., 2017; Li et al., 2015; Che et al., 2017; Zhao et al., 2017), applying a myriad of training "hacks" (Salimans et al., 2016; Gulrajani et al., 2017), re-engineering network architectures (Zhao et al., 2017), or designing different solvers (Metz et al., 2017).

Specifically, the Wasserstein GAN (WGAN) (Arjovsky et al., 2017) approach modifies the original objective by replacing \(f(x) = \log (x)\) with \(f(x) = x\). This led to a training scheme in which the discriminator weights are "clipped." However, as discussed in Arjovsky et al. (2017), WGAN training is unstable at high learning rates, or when used with popular momentum-based solvers such as Adam. Currently, it is known to work well only with RMSProp (Arjovsky et al., 2017).

The unrolled GAN (Metz et al., 2017) is a new solver that can stabilize training at the cost of more expensive gradient computations.
Each generator update requires the computation of multiple extra discriminator updates, which are then discarded when the generator update is complete. While avoiding GAN collapse, this method requires increased computation and memory.

In the convex optimization literature, saddle point problems are better studied. One popular solver is the primal-dual hybrid gradient (PDHG) method (Zhu & Chan, 2008; Esser et al., 2009), which has been popularized by Chambolle and Pock (Chambolle & Pock, 2011), and has been successfully applied to a range of machine learning and statistical estimation problems (Goldstein et al., 2015). PDHG relates closely to the method proposed here - it achieves stability using the same prediction step, although it uses a different type of gradient update and is only applicable to bi-linear problems.

Stochastic methods for convex saddle-point problems can be roughly divided into two categories: stochastic coordinate descent (Dang & Lan, 2014; Lan & Zhou, 2015; Zhang & Lin, 2015; Zhu & Storkey, 2015; 2016; Wang & Xiao, 2017; Shibagaki & Takeuchi, 2017) and stochastic gradient descent (Chen et al., 2014; Qiao et al., 2016). Similar optimization algorithms have been studied for reinforcement learning (Wang & Chen, 2016; Du et al., 2017). Recently, a "doubly" stochastic method that randomizes both primal and dual updates was proposed for strongly convex bilinear saddle point problems (Yu et al., 2015). For general saddle point problems, "doubly" stochastic gradient descent methods are discussed in Nemirovski et al. (2009) and Palaniappan & Bach (2016), in which primal and dual variables are updated simultaneously based on the previous iterates and the current gradients.

![](images/3_0.jpg)
<center>Figure 2: A schematic depiction of the prediction method. When the minimization step is powerful and moves the iterates a long distance, the prediction step (dotted black arrow) causes the maximization update to be calculated further down the loss surface, resulting in a more dramatic maximization update. In this way, prediction methods prevent the maximization step from getting overpowered by the minimization update. </center>

## 4 INTERPRETATIONS OF THE PREDICTION STEP

We present three ways to explain the effect of prediction: an intuitive, non-mathematical perspective; a more analytical viewpoint involving dynamical systems; and finally a rigorous proof-based approach.

### 4.1 AN INTUITIVE VIEWPOINT

The standard alternating SGD switches between minimization and maximization steps. In this algorithm, there is a risk that the minimization step can overpower the maximization step, in which case the iterates will "slide off" the edge of the saddle, leading to instability (Figure 1b). Conversely, an overpowering maximization step will dominate the minimization step, and drive the iterates to extreme values as well.

The effect of prediction is visualized in Figure 2. Suppose that a maximization step takes place starting at the red dot. Without prediction, the maximization step has no knowledge of the algorithm history, and will be the same regardless of whether the previous minimization update was weak (Figure 2a) or strong (Figure 2b). Prediction allows the maximization step to exploit information about the minimization step. If the previous minimization step was weak (Figure 2a), the prediction step (dotted black arrow) stays close to the red dot, resulting in a weak predictive maximization step (white arrow).
But if we arrived at the red dot using a strong minimization step (Figure 2b), the prediction moves a long way down the loss surface, resulting in a stronger maximization step (white arrows) to compensate.

### 4.2 A MORE MATHEMATICAL PERSPECTIVE

To get stronger intuition about prediction methods, let's look at the behavior of Algorithm (3) on a simple bi-linear saddle of the form

\[\mathcal{L}(u,v) = v^{T}Ku \quad (6)\]

where \(K\) is a matrix. When exact (non-stochastic) gradient updates are used, the iterates follow the path of a simple dynamical system with closed-form solutions. We give here a sketch of this argument; a detailed derivation is provided in the Supplementary Material.

When the (non-predictive) gradient method (2) is applied to the linear problem (6), the resulting iterations can be written

\[\frac{u^{k + 1} - u^{k}}{\alpha} = -K^{T}v^{k},\qquad \frac{v^{k + 1} - v^{k}}{\alpha} = (\beta /\alpha)Ku^{k + 1}.\]

When the stepsize \(\alpha\) gets small, this behaves like a discretization of the system of differential equations

\[\dot{u} = -K^{T}v,\qquad \dot{v} = (\beta /\alpha)Ku\]

where \(\dot{u}\) and \(\dot{v}\) denote the derivatives of \(u\) and \(v\) with respect to time. These equations describe a simple harmonic oscillator, and the closed-form solution for \(u\) is

\[u(t) = C\cos (\Sigma^{1/2}t + \phi)\]

where \(\Sigma\) is a diagonal matrix, and the matrix \(C\) and vector \(\phi\) depend on the initialization. We can see that, for small values of \(\alpha\) and \(\beta\), the non-predictive algorithm (2) approximates an undamped harmonic motion, and the solutions orbit around the saddle without converging.

The prediction step (3) improves convergence because it produces damped harmonic motion that sinks into the saddle point. When applied to the linearized problem (6), we get the dynamical system

\[\dot{u} = -K^{T}v,\qquad \dot{v} = (\beta /\alpha)K(u + \alpha \dot{u}) \quad (7)\]

which has solution

\[u(t) = UA\exp (-\frac{t\alpha}{2}\sqrt{\Sigma})\sin (t\sqrt{(1 - \alpha^{2}/4)\Sigma} +\phi).\]

From this analysis, we see that the damping caused by the prediction step causes the orbits to converge into the saddle point, and the error decays exponentially fast.

### 4.3 A RIGOROUS PERSPECTIVE

While the arguments above are intuitive, they are also informal and do not address issues like stochastic gradients, non-constant stepsize sequences, and more complex loss functions. We now provide a rigorous convergence analysis that handles these issues. We assume that the function \(\mathcal{L}(u,v)\) is convex in \(u\) and concave in \(v\). We can then measure convergence using the "primal-dual" gap, \(P(u,v) = \mathcal{L}(u,v^{*}) - \mathcal{L}(u^{*},v)\), where \((u^{*},v^{*})\) is a saddle. Note that \(P(u,v) > 0\) for non-optimal \((u,v)\), and \(P(u,v) = 0\) if \((u,v)\) is a saddle. Using these definitions, we formulate the following convergence result. The proof is in the supplementary material.

Theorem 1. Suppose the function \(\mathcal{L}(u,v)\) is convex in \(u\), concave in \(v\), and that the partial gradient \(\mathcal{L}_{v}^{\prime}\) is uniformly Lipschitz smooth in \(u\) (\(\| \mathcal{L}_{v}^{\prime}(u_{1},v) - \mathcal{L}_{v}^{\prime}(u_{2},v)\| \leq L_{v}\| u_{1} - u_{2}\|\)).
Suppose further that the stochastic gradient approximations satisfy \(\mathbb{E}\| \mathcal{L}_{u}^{\prime}(u,v)\|^{2}\leq G_{u}^{2}\) and \(\mathbb{E}\| \mathcal{L}_{v}^{\prime}(u,v)\|^{2}\leq G_{v}^{2}\) for scalars \(G_{u}\) and \(G_{v}\), and that \(\mathbb{E}\| u^{k} - u^{*}\|^{2}\leq D_{u}^{2}\) and \(\mathbb{E}\| v^{k} - v^{*}\|^{2}\leq D_{v}^{2}\) for scalars \(D_{u}\) and \(D_{v}\). If we choose decreasing learning rate parameters of the form \(\alpha_{k} = \frac{C_{\alpha}}{\sqrt{k}}\) and \(\beta_{k} = \frac{C_{\beta}}{\sqrt{k}}\), then the SGD method with prediction converges in expectation, and we have the error bound

\[\mathbb{E}[P(\hat{u}^{l},\hat{v}^{l})]\leq \frac{1}{2\sqrt{l}}\left(\frac{D_{u}^{2}}{C_{\alpha}} +\frac{D_{v}^{2}}{C_{\beta}}\right) + \frac{\sqrt{l + 1}}{l}\left(\frac{C_{\alpha}G_{u}^{2}}{2} +C_{\alpha}L_{v}G_{u}^{2} + C_{\alpha}L_{v}D_{v}^{2} + \frac{C_{\beta}G_{v}^{2}}{2}\right)\]

where \(\hat{u}^{l} = \frac{1}{l}\sum_{k = 1}^{l}u^{k}\) and \(\hat{v}^{l} = \frac{1}{l}\sum_{k = 1}^{l}v^{k}\).

## 5 EXPERIMENTS

We present a wide range of experiments to demonstrate the benefits of the proposed prediction step for adversarial nets. We consider a saddle point problem on a toy dataset constructed using MNIST images, and then move on to state-of-the-art models for three tasks: GANs, domain adaptation, and learning of fair classifiers. Additional results, and additional experiments involving mixtures of Gaussians, are presented in the Appendix. The code is available at https://github.com/jaiabhayk/stableGAN.

### 5.1 MNIST TOY PROBLEM

We consider the task of classifying MNIST digits as being even or odd. To make the problem interesting, we corrupt \(70\%\) of odd digits with salt-and-pepper noise, while we corrupt only \(30\%\) of even digits. When we train a LeNet network (LeCun et al., 1998) on this problem, we find that the network encodes and uses information about the noise; when a noise vs. no-noise classifier is trained on the deep features generated by LeNet, it gets \(100\%\) accuracy. The goal of this task is to force LeNet to ignore the noise when making decisions.

We create an adversarial model of the form (5) in which \(\mathcal{L}_{y}\) is a softmax loss for the even vs. odd classifier. We make \(\mathcal{L}_{d}\) a softmax loss for the task of discriminating whether the input sample is noisy or not. The classifier and discriminator were both pre-trained using the default LeNet implementation in Caffe (Jia et al., 2014). Then the combined adversarial net was jointly trained both with and without prediction. For implementation details, see the Supplementary Material.

Figure 3 summarizes our findings. In this experiment, we considered applying prediction to both the classifier and the discriminator. We note that our task is to retain good classification accuracy while preventing the discriminator from doing better than the trivial strategy of classifying odd digits as noisy and even digits as non-noisy. This means that the discriminator accuracy should ideally be \(\sim 0.7\). As shown in Figure 3a, the prediction step hardly makes any difference when evaluated at the small learning rate of \(10^{-4}\). However, when evaluated at higher rates, Figures 3b and 3c show that the prediction solvers are very stable while the one without prediction collapses (blue solid line is flat) very early. Figure 3c shows that the default learning rate \((10^{-3})\) of the Adam solver is unstable unless prediction is used.
![](images/5_0.jpg)
<center>Figure 3: Comparison of the classification accuracy (digit parity) and discriminator (noisy vs. no-noise) accuracy using the SGD and Adam solvers with and without prediction steps. \(\theta_{f}\) and \(\theta_{d}\) refer to variables in eq. (5). (a) Using SGD with learning rate \(lr = 10^{-4}\). Note that the solid red, blue, and green lines overlap. (b) SGD solver with a higher learning rate of \(lr = 10^{-3}\), and (c) using the Adam solver with its default parameters. </center>

### 5.2 GENERATIVE ADVERSARIAL NETWORKS

Next, we test the efficacy and stability of our proposed predictive step on generative adversarial networks (GANs), which are formulated as saddle point problems (4) and are popularly solved using a heuristic approach (Goodfellow et al., 2014). We consider an image modeling task using CIFAR-10 (Krizhevsky, 2009) on the recently popular convolutional GAN architecture, DCGAN (Radford et al., 2016). We compare our predictive method with DCGAN and the unrolled GAN (Metz et al., 2017) using the training protocol described in Radford et al. (2016). Note that we compared against the unrolled GAN with stop gradient switch<sup>1</sup> and \(K = 5\) unrolling steps. All the approaches were trained for five random seeds and 100 epochs each.

We start by comparing all three methods using the default solver for DCGAN (the Adam optimizer) with learning rate = 0.0002 and \(\beta_{1} = 0.5\). Figure 4 compares the generated sample images (at the \(100^{th}\) epoch) and the training loss curves for all approaches. The discriminator and generator loss curves in Figure 4e show that without prediction, the DCGAN collapses at the \(45^{th}\) and \(57^{th}\) epochs. Similarly, Figure 4f shows that the training for the unrolled GAN collapses in at least three instances. The training procedure using predictive steps never collapsed during any epoch. Qualitatively, the images generated using prediction are more diverse than the DCGAN and unrolled GAN images.

Figure 5 compares all approaches when trained with a \(5\times\) higher learning rate (0.001, the default for the Adam solver). As observed in Radford et al. (2016), the standard and unrolled solvers are very unstable and collapse at this higher rate. However, as shown in Figures 5a and 5d, training remains stable when a predictive step is used, and generates images of reasonable quality. The training procedure for both DCGAN and unrolled GAN collapsed on all five random seeds. The results on various additional intermediate learning rates, as well as on the high-resolution Imagenet dataset, are in the Supplementary Material.

In the Supplementary Material, we present one additional comparison showing results at a higher momentum of \(\beta_{1} = 0.9\) (learning rate = 0.0002). We observe that all the training approaches are stable. However, the quality of images generated using DCGAN is inferior to that of the predictive and unrolled methods. Overall, of the 25 training settings we ran (each of five learning rates for five random seeds), the DCGAN training procedure collapsed in 20 instances, while the unrolled GAN collapsed in 14 experiments (not counting multiple collapses within a single training setting). By contrast, we find that our simple predictive-step method collapsed only once.

Note that prediction adds trivial cost to the training algorithm. Using a single TitanX Pascal, a training epoch of DCGAN takes 35 secs. With prediction, an epoch takes 38 secs. The unrolled GAN method, which requires extra gradient steps, takes 139 secs/epoch.
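To make the recipe concrete, here is a PyTorch-style sketch of one GAN training iteration with prediction applied to the generator weights. The network and optimizer objects, label shapes, and the non-saturating generator loss are our assumptions; see the repository linked above for the authors' actual implementation.

```python
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

bce = torch.nn.functional.binary_cross_entropy

def gan_step_with_prediction(G, D, opt_G, opt_D, real, z):
    """One GAN iteration with generator prediction, cf. update (3).
    Assumes D outputs probabilities of shape (batch, 1); device handling omitted."""
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # Generator (minimization) step.
    g_old = parameters_to_vector(G.parameters()).detach().clone()
    loss_G = bce(D(G(z)), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

    # Prediction step: g_bar = g_new + (g_new - g_old).
    g_new = parameters_to_vector(G.parameters()).detach().clone()
    vector_to_parameters(2 * g_new - g_old, G.parameters())

    # Discriminator (maximization) step against the predicted generator.
    loss_D = bce(D(real), ones) + bce(D(G(z).detach()), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Restore the true generator weights before the next iteration.
    vector_to_parameters(g_new, G.parameters())
```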
Finally, we draw quantitative comparisons based on the inception score (Salimans et al., 2016), which is a widely used metric for the visual quality of generated images. For this purpose, we consider the current state-of-the-art Stacked GAN (Huang et al., 2017) architecture. Table 1 lists the inception scores computed on the generated samples from Stacked GAN trained (200 epochs) with and without prediction at different learning rates. The joint training of Stacked GAN collapses when trained at the default learning rate of the Adam solver (i.e., 0.001). However, reasonably good samples are generated if the same model is trained with prediction on both generator networks. The right end of Table 1 also lists the inception scores measured after a smaller number of training epochs (shown in parentheses) for the higher learning rates. This suggests that models trained with the prediction method are not only stable but also converge faster when using higher learning rates. For reference, the inception score on real images of the CIFAR-10 dataset is \(11.51 \pm 0.17\).

Table 1: Comparison of Inception Score on Stacked GAN network with and w/o G prediction.

<table><tr><td>Learning rate</td><td>0.0001</td><td>0.0005</td><td>0.001</td><td>0.0005 (40)</td><td>0.001 (20)</td></tr><tr><td>Stacked GAN (joint)</td><td>8.44 ± 0.11</td><td>7.90 ± 0.08</td><td>1.52 ± 0.01</td><td>5.80 ± 0.15</td><td>1.42 ± 0.01</td></tr><tr><td>Stacked GAN (joint) + prediction</td><td>8.55 ± 0.12</td><td>8.13 ± 0.09</td><td>7.96 ± 0.11</td><td>8.10 ± 0.10</td><td>7.79 ± 0.07</td></tr></table>

![](images/6_0.jpg)
<center>Figure 4: Comparison of GAN training algorithms for DCGAN architecture on Cifar-10 image datasets. Using default parameters of DCGAN; \(lr = 0.0002\), \(\beta_{1} = 0.5\). </center>

![](images/7_0.jpg)
<center>Figure 5: Comparison of GAN training algorithms for DCGAN architecture on Cifar-10 image datasets with higher learning rate, \(lr = 0.001\), \(\beta_{1} = 0.5\). </center>

### 5.3 DOMAIN ADAPTATION

We consider the domain adaptation task (Saenko et al., 2010; Ganin & Lempitsky, 2015; Tzeng et al., 2017), wherein the representation learned using the source domain samples is altered so that it can also generalize to samples from the target distribution. We use the problem setup and hyper-parameters described in Ganin & Lempitsky (2015) using the OFFICE dataset (Saenko et al., 2010) (experimental details are shared in the Supplementary Material). In Table 2, comparisons are drawn with respect to target domain accuracy on six pairs of source-target domain tasks. We observe that the prediction step has mild benefits on the "easy" adaptation tasks with very similar source and target domain samples. However, on the transfer learning tasks of AMAZON-to-WEBCAM, WEBCAM-to-AMAZON, and DSLR-to-AMAZON, which have noticeably distinct data samples, an extra prediction step gives an absolute improvement of \(1.3 - 6.9\%\) in predicting target domain labels.

Table 2: Comparison of target domain accuracy on OFFICE dataset.
<table><tr><td>Method</td><td>AMAZON → WEBCAM</td><td>WEBCAM → AMAZON</td><td>DSLR → WEBCAM</td><td>WEBCAM → DSLR</td><td>AMAZON → DSLR</td><td>DSLR → AMAZON</td></tr><tr><td>DANN (Ganin & Lempitsky, 2015)</td><td>73.4</td><td>51.6</td><td>95.5</td><td>99.4</td><td>76.5</td><td>51.7</td></tr><tr><td>DANN + prediction</td><td>74.7</td><td>58.5</td><td>96.1</td><td>99.0</td><td>73.5</td><td>57.6</td></tr></table>

### 5.4 FAIR CLASSIFIER

Finally, we consider the task of learning fair feature representations (Mathieu et al., 2016; Edwards & Storkey, 2016; Louizos et al., 2016) such that the final learned classifier does not discriminate with respect to a sensitive variable. As proposed in Edwards & Storkey (2016), one way to measure fairness is using discrimination,

\[y_{disc} = \left|\frac{1}{N_0}\sum_{i:s_i = 0}\eta (x_i) - \frac{1}{N_1}\sum_{i:s_i = 1}\eta (x_i)\right|. \quad (8)\]

Here \(s_i\) is a binary sensitive variable for the \(i^{th}\) data sample and \(N_k\) denotes the total number of samples belonging to the \(k^{th}\) sensitive class. Similar to the domain adaptation task, the learning of each classifier can be formulated as a minimax problem of the form (5) (Edwards & Storkey, 2016; Mathieu et al., 2016). Unlike the previous example, though, this task has a model selection component. From a pool of hundreds of randomly generated adversarial deep nets, for each value of \(t\), one selects the model that maximizes the difference

\[y_{t,\Delta} = y_{acc} - t\cdot y_{disc}. \quad (9)\]

The "Adult" dataset from the UCI machine learning repository is used. The task (\(y_{acc}\)) is to classify whether a person earns \(\geq \$50\)k/year. The person's gender is chosen to be the sensitive variable. Details are in the supplementary material.

To demonstrate the advantage of using prediction for model selection, we follow the protocol developed in Edwards & Storkey (2016). In this work, the search space is restricted to a class of models that consist of a fully connected autoencoder, one task-specific discriminator, and one adversarial discriminator. The encoder output from the autoencoder acts as input to both discriminators. In our experiment, 100 models are randomly selected. During the training of each adversarial model, \(\mathcal{L}_{d}\) is a cross-entropy loss, while \(\mathcal{L}_{y}\) is a linear combination of reconstruction and cross-entropy losses. Once all the models are trained, the best model for each value of \(t\) is selected by evaluating (9) on the validation set.

Figure 6a plots the results on the test set for the AFLR approach with and without prediction steps in its default Adam solver. For each value of \(t\), Figures 6b and 6c also compare the number of layers in the selected encoder and discriminator networks. When using prediction for training, relatively stronger encoder models are produced and selected during validation, and hence the prediction results generalize better on the test set.
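As a concrete reading of the discrimination metric (8), the following small NumPy helper (an illustrative sketch; `eta` stands for the classifier output \(\eta(x)\)) computes \(y_{disc}\) from predictions and the binary sensitive variable:

```python
import numpy as np

def discrimination(eta, s):
    """Discrimination metric (8): absolute gap between the mean classifier
    output on the two groups defined by the binary sensitive variable s."""
    eta, s = np.asarray(eta, dtype=float), np.asarray(s)
    return abs(eta[s == 0].mean() - eta[s == 1].mean())

# Illustrative usage with random values:
rng = np.random.default_rng(0)
print(discrimination(rng.random(100), rng.integers(0, 2, size=100)))
```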
![](images/8_0.jpg)
<center>Figure 6: Model selection for learning a fair classifier. (a) Comparison of \(y_{t,\Delta}\) (higher is better), and also \(y_{disc}\) (lower is better) and \(y_{acc}\) on the test set using AFLR with and without predictive steps. (b) Number of encoder layers in the selected model. (c) Number of discriminator layers (both adversarial and task-specific) in the selected model. </center>

## 6 CONCLUSION

We present a simple modification to the alternating SGD method, called a prediction step, that improves the stability of adversarial networks. We present theoretical results showing that the prediction step is asymptotically stable for solving saddle point problems. We show, using a variety of test problems, that prediction steps prevent network collapse and enable training with a wider range of learning rates than plain SGD methods.

## ACKNOWLEDGMENTS

The work of T. Goldstein was supported by the US Office of Naval Research under grant N00014-17-1-2078, the US National Science Foundation (NSF) under grant CCF-1535902, and by the Sloan Foundation. A. Yadav and D. Jacobs were supported by the National Science Foundation under grant no. IIS-1526234 and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
When the conventional gradient method (2) is applied to the linear problem (6), the resulting iterations can be written \[\frac{x^{k + 1} - x^{k}}{\alpha} = -K^{T}y^{k},\qquad \frac{y^{k + 1} - y^{k}}{\alpha} = (\beta /\alpha)Kx^{k + 1}.\] When the stepsize \(\alpha\) gets small, this behaves like a discretization of the differential equation \[\begin{array}{l}{\dot{x} = -K^{T}y}\\ {\dot{y} = \beta /\alpha Kx} \end{array} \quad (10)\] where \(\dot{x}\) and \(\dot{y}\) denote the derivatives of \(x\) and \(y\) with respect to time. The differential equations (10,11) describe a harmonic oscillator. To see why, differentiate (10) and plug (11) into the result to get a differential equation in \(x\) alone \[\ddot{x} = -K^{T}\dot{y} = -\beta /\alpha K^{T}Kx. \quad (12)\] We can decompose this into a system of independent single- variable problems by considering the eigenvalue decomposition \(\beta /\alpha K^{T}K = U\Sigma U^{T}\) . We now multiply both sides of (12) by \(U^{T}\) , and make the change of variables \(z \leftarrow U^{T}x\) to get \[\ddot{z} = -\Sigma z.\] where \(\Sigma\) is diagonal. This is the standard equation for undamped harmonic motion, and its solution is \(z = A\cos (\Sigma^{1 / 2}t + \phi)\) , where \(\cos\) acts entry- wise, and the diagonal matrix \(A\) and vector \(\phi\) are constants that depend only on the initialization. Changing back into the variable \(x\) , we get the solution \[x = UA\cos (\Sigma^{1 / 2}t + \phi).\] We can see that, for small values of \(\alpha\) and \(\beta\) , the non- predictive algorithm (2) approximates an undamped harmonic motion, and the solutions orbit around the saddle without converging. The prediction step (3) improves convergence because it produces damped harmonic motion that sinks into the saddle point. When applied to the linearized problem (6), the iterates of the predictive method (3) satisfy \[\frac{x^{k + 1} - x^{k}}{\alpha} = -K^{T}y^{k}\] \[\frac{y^{k + 1} - y^{k}}{\alpha} = \beta /\alpha K(x^{k + 1} + x^{k + 1} - x^{k}) = \beta /\alpha Kx^{k + 1} + \beta K\frac{x^{k + 1} - x^{k}}{\alpha}.\] <--- Page Split ---> For small \(\alpha\) , this approximates the dynamical system \[\begin{array}{r l} & {\dot{x} = -K^{T}y}\\ & {\dot{y} = \beta /\alpha K(x + \alpha \dot{x}).} \end{array} \quad (14)\] Like before, we differentiate (13) and use (14) to obtain \[\ddot{x} = -K^{T}\dot{y} = -\beta /\alpha K^{T}(Kx + \alpha A\dot{x}) = -\beta /\alpha K^{T}Kx - \beta /K^{T}K\dot{x}. \quad (15)\] Finally, multiply both sides by \(U^{T}\) and perform the change of variables \(z \leftarrow U^{T}x\) to get \[\dot{z} = -\Sigma z - \alpha \Sigma \dot{z}.\] This equation describes a damped harmonic motion. The solutions have the form \(z(t) = A\exp (-\frac{t\alpha}{2}\sqrt{\Sigma})\sin (t\sqrt{(1 - \alpha^{2} / 4)\Sigma} +\phi)\) . Changing back to the variable \(x\) , we see that the iterates of the original method satisfy \[x(t) = UA\exp (-\frac{t\alpha}{2}\sqrt{\Sigma})\sin (t\sqrt{(1 - \alpha^{2} / 4)\Sigma} +\phi).\] where \(A\) and \(\phi\) depend on the initialization. From this analysis, we see that for small constant \(\alpha\) the orbits of the lookahead method converge into the saddle point, and the error decays exponentially fast. ## A PROOF OF THEOREM 1 Assume the optimal solution \((u^{*},v^{*})\) exists, then \(\mathcal{L}_{u}^{\prime}(u^{*},v) = \mathcal{L}_{v}^{\prime}(u,v^{*}) = 0\) . 
## A PROOF OF THEOREM 1

Assume the optimal solution \((u^{*},v^{*})\) exists, then \(\mathcal{L}_{u}^{\prime}(u^{*},v) = \mathcal{L}_{v}^{\prime}(u,v^{*}) = 0\). In the following proofs, we use \(g_{u}(u,v)\), \(g_{v}(u,v)\) to denote stochastic approximations of the gradients, where \(\mathbb{E}[g_{u}(u,v)] = \mathcal{L}_{u}^{\prime}(u,v)\) and \(\mathbb{E}[g_{v}(u,v)] = \mathcal{L}_{v}^{\prime}(u,v)\). We show the convergence of the proposed stochastic primal-dual gradients for the primal-dual gap \(P(u^{k},v^{k}) = \mathcal{L}(u^{k},v^{*}) - \mathcal{L}(u^{*},v^{k})\). We prove the \(O(1 / \sqrt{k})\) convergence rate in Theorem 1 by using Lemma 1 and Lemma 2, which present the contraction of the primal and dual updates, respectively.

Lemma 1. Suppose \(\mathcal{L}(u,v)\) is convex in \(u\) and \(\mathbb{E}[\| g_{u}(u,v)\|^{2}]\leq G_{u}^{2}\). Then \[\mathbb{E}[\mathcal{L}(u^{k},v^{k})] - \mathbb{E}[\mathcal{L}(u^{*},v^{k})]\leq \frac{1}{2\alpha_{k}}\left(\mathbb{E}[\| u^{k} - u^{*}\|^{2}] - \mathbb{E}[\| u^{k + 1} - u^{*}\|^{2}]\right) + \frac{\alpha_{k}}{2} G_{u}^{2}. \quad (16)\]

Proof. Using the primal update in (3), we have \[\begin{array}{r l} & {\| u^{k + 1} - u^{*}\|^{2} = \| u^{k} - \alpha_{k}g_{u}(u^{k},v^{k}) - u^{*}\|^{2}}\\ & {\qquad = \| u^{k} - u^{*}\|^{2} - 2\alpha_{k}\left\langle g_{u}(u^{k},v^{k}),u^{k} - u^{*}\right\rangle +\alpha_{k}^{2}\| g_{u}(u^{k},v^{k})\|^{2}.} \end{array} \quad (17)\] Taking expectations on both sides, substituting \(\mathbb{E}[g_{u}(u,v)] = \mathcal{L}_{u}^{\prime}(u,v)\), and applying \(\mathbb{E}[\| g_{u}(u,v)\|^{2}]\leq G_{u}^{2}\), we get \[\mathbb{E}[\| u^{k + 1} - u^{*}\|^{2}]\leq \mathbb{E}[\| u^{k} - u^{*}\|^{2}] - 2\alpha_{k}\mathbb{E}[\langle \mathcal{L}_{u}^{\prime}(u^{k},v^{k}),u^{k} - u^{*}\rangle ] + \alpha_{k}^{2}G_{u}^{2}. \quad (19)\] Since \(\mathcal{L}(u,v)\) is convex in \(u\), we have \[\langle \mathcal{L}_{u}^{\prime}(u^{k},v^{k}),u^{k} - u^{*}\rangle \geq \mathcal{L}(u^{k},v^{k}) - \mathcal{L}(u^{*},v^{k}). \quad (20)\] (16) is proved by combining (19) and (20).

Lemma 2. Suppose \(\mathcal{L}(u,v)\) is concave in \(v\) and has Lipschitz gradients, \(\| \mathcal{L}_{v}^{\prime}(u_{1},v) - \mathcal{L}_{v}^{\prime}(u_{2},v)\| \leq L_{v}\| u_{1} - u_{2}\|\); bounded gradients, \(\mathbb{E}[\| g_{u}(u,v)\|^{2}]\leq G_{u}^{2}\) and \(\mathbb{E}[\| g_{v}(u,v)\|^{2}]\leq G_{v}^{2}\); and bounded dual iterates, \(\mathbb{E}[\| v^{k} - v^{*}\|^{2}]\leq D_{v}^{2}\). Then \[\begin{array}{r l} & {\mathbb{E}[\mathcal{L}(u^{k},v^{*})] - \mathbb{E}[\mathcal{L}(u^{k},v^{k})]\leq}\\ & {\qquad \frac{1}{2\beta_{k}}\left(\mathbb{E}[\| v^{k} - v^{*}\|^{2}] - \mathbb{E}[\| v^{k + 1} - v^{*}\|^{2}]\right) + \frac{\beta_{k}}{2} G_{v}^{2} + \alpha_{k}L_{v}\left(G_{u}^{2} + D_{v}^{2}\right).} \end{array} \quad (21)\]

<--- Page Split --->

Proof. From the dual update in (3), we have \[\begin{array}{r l} & {\| v^{k + 1} - v^{*}\|^{2} = \| v^{k} + \beta_{k}g_{v}(\bar{u}^{k + 1},v^{k}) - v^{*}\|^{2}}\\ & {\qquad = \| v^{k} - v^{*}\|^{2} + 2\beta_{k}\left\langle g_{v}(\bar{u}^{k + 1},v^{k}),v^{k} - v^{*}\right\rangle +\beta_{k}^{2}\| g_{v}(\bar{u}^{k + 1},v^{k})\|^{2}.} \end{array} \quad (22)\] Taking expectations on both sides, substituting \(\mathbb{E}[g_{v}(u,v)] = \mathcal{L}_{v}^{\prime}(u,v)\), and applying \(\mathbb{E}[\| g_{v}(u,v)\|^{2}]\leq G_{v}^{2}\), we get \[\mathbb{E}[\| v^{k + 1} - v^{*}\|^{2}]\leq \mathbb{E}[\| v^{k} - v^{*}\|^{2}] + 2\beta_{k}\mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}),v^{k} - v^{*}\rangle ] + \beta_{k}^{2}G_{v}^{2}. \quad (24)\] Rearranging (24) gives \[\mathbb{E}[\| v^{k + 1} - v^{*}\|^{2}] - \mathbb{E}[\| v^{k} - v^{*}\|^{2}] - \beta_{k}^{2}G_{v}^{2}\leq 2\beta_{k}\mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}),v^{k} - v^{*}\rangle ]. \quad (25)\] The right-hand side of (25) can be written as \[\begin{array}{r l r} & {\mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}),v^{k} - v^{*}\rangle ]} & (26)\\ & {= \mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}) - \mathcal{L}_{v}^{\prime}(u^{k},v^{k}) + \mathcal{L}_{v}^{\prime}(u^{k},v^{k}),v^{k} - v^{*}\rangle ]} & (27)\\ & {= \mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}) - \mathcal{L}_{v}^{\prime}(u^{k},v^{k}),v^{k} - v^{*}\rangle ] + \mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(u^{k},v^{k}),v^{k} - v^{*}\rangle ],} & (28) \end{array}\] where \[\begin{array}{r l r} & {\mathbb{E}[\langle \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}) - \mathcal{L}_{v}^{\prime}(u^{k},v^{k}),v^{k} - v^{*}\rangle ]} & (29)\\ & {\leq \mathbb{E}[\| \mathcal{L}_{v}^{\prime}(\bar{u}^{k + 1},v^{k}) - \mathcal{L}_{v}^{\prime}(u^{k},v^{k})\| \,\| v^{k} - v^{*}\| ]} & (30)\\ & {\leq \mathbb{E}[L_{v}\| \bar{u}^{k + 1} - u^{k}\| \,\| v^{k} - v^{*}\| ]} & (31)\\ & {= \mathbb{E}[2L_{v}\| u^{k + 1} - u^{k}\| \,\| v^{k} - v^{*}\| ]} & (32)\\ & {= \mathbb{E}[2L_{v}\| \alpha_{k}g_{u}(u^{k},v^{k})\| \,\| v^{k} - v^{*}\| ]} & (33)\\ & {\leq L_{v}\alpha_{k}\mathbb{E}[\| g_{u}(u^{k},v^{k})\|^{2} + \| v^{k} - v^{*}\|^{2}]} & (34)\\ & {\leq L_{v}\alpha_{k}(G_{u}^{2} + D_{v}^{2}).} & (35) \end{array}\] Lipschitz smoothness is used for (31); the prediction step in (3) is used for (32); the primal update in (3) is used for (33); the boundedness assumptions are used for (35). Since \(\mathcal{L}(u,v)\) is concave in \(v\), we have \[\langle \mathcal{L}_{v}^{\prime}(u^{k},v^{k}),v^{k} - v^{*}\rangle \leq \mathcal{L}(u^{k},v^{k}) - \mathcal{L}(u^{k},v^{*}). \quad (36)\] Combining (25), (28), (35), and (36), we get \[\begin{array}{r l} & {\frac{1}{2\beta_{k}}\left(\mathbb{E}[\| v^{k + 1} - v^{*}\|^{2}] - \mathbb{E}[\| v^{k} - v^{*}\|^{2}]\right) - \frac{\beta_{k}}{2} G_{v}^{2}}\\ & {\leq L_{v}\alpha_{k}\left(G_{u}^{2} + D_{v}^{2}\right) + \mathbb{E}[\mathcal{L}(u^{k},v^{k})] - \mathbb{E}[\mathcal{L}(u^{k},v^{*})].} \end{array} \quad (37)\] Rearranging (37) yields (21).

We now present the proof of Theorem 1.

Proof.
Combining (16) and (21) from the lemmas, the primal-dual gap \(P(u^{k},v^{k}) = \mathcal{L}(u^{k},v^{*}) - \mathcal{L}(u^{*},v^{k})\) satisfies \[\begin{array}{r l} & {\mathbb{E}[P(u^{k},v^{k})]\leq \frac{1}{2\alpha_{k}}\left(\mathbb{E}[\| u^{k} - u^{*}\|^{2}] - \mathbb{E}[\| u^{k + 1} - u^{*}\|^{2}]\right) + \frac{\alpha_{k}}{2} G_{u}^{2}}\\ & {\qquad +\frac{1}{2\beta_{k}}\left(\mathbb{E}[\| v^{k} - v^{*}\|^{2}] - \mathbb{E}[\| v^{k + 1} - v^{*}\|^{2}]\right) + \frac{\beta_{k}}{2} G_{v}^{2} + \alpha_{k}L_{v}\left(G_{u}^{2} + D_{v}^{2}\right).} \end{array} \quad (38)\] Summing (38) over \(k = 1,\ldots ,l\) gives \[\begin{array}{r l} & {\sum_{k = 1}^{l}\mathbb{E}[P(u^{k},v^{k})]\leq}\\ & {\frac{1}{2\alpha_{1}}\mathbb{E}[\| u^{1} - u^{*}\|^{2}] + \sum_{k = 2}^{l}(\frac{1}{2\alpha_{k}} -\frac{1}{2\alpha_{k - 1}})\mathbb{E}[\| u^{k} - u^{*}\|^{2}] + \sum_{k = 1}^{l}\alpha_{k}(\frac{G_{u}^{2}}{2} +L_{v}G_{u}^{2} + L_{v}D_{v}^{2})}\\ & {+\frac{1}{2\beta_{1}}\mathbb{E}[\| v^{1} - v^{*}\|^{2}] + \sum_{k = 2}^{l}(\frac{1}{2\beta_{k}} -\frac{1}{2\beta_{k - 1}})\mathbb{E}[\| v^{k} - v^{*}\|^{2}] + \sum_{k = 1}^{l}\beta_{k}\frac{G_{v}^{2}}{2}.} \end{array} \quad (39)\]

<--- Page Split --->

Assuming the iterates are bounded, i.e., \(\mathbb{E}[||u^{k} - u^{*}||^{2}]\leq D_{u}^{2}\) and \(\mathbb{E}[||v^{k} - v^{*}||^{2}]\leq D_{v}^{2}\), we have \[\begin{array}{l}{\sum_{k = 1}^{l}\mathbb{E}[P(u^{k},v^{k})]\leq \frac{1}{2\alpha_{1}} D_{u}^{2} + \sum_{k = 2}^{l}(\frac{1}{2\alpha_{k}} -\frac{1}{2\alpha_{k - 1}})D_{u}^{2} + \sum_{k = 1}^{l}\alpha_{k}(\frac{G_{u}^{2}}{2} +L_{v}G_{u}^{2} + L_{v}D_{v}^{2})}\\ {+\frac{1}{2\beta_{1}} D_{v}^{2} + \sum_{k = 2}^{l}(\frac{1}{2\beta_{k}} -\frac{1}{2\beta_{k - 1}})D_{v}^{2} + \sum_{k = 1}^{l}\beta_{k}\frac{G_{v}^{2}}{2}.} \end{array} \quad (40)\] Since \(\alpha_{k}\) and \(\beta_{k}\) are decreasing, with \(\textstyle \sum_{k = 1}^{l}\alpha_{k}\leq C_{\alpha}\sqrt{l + 1}\) and \(\textstyle \sum_{k = 1}^{l}\beta_{k}\leq C_{\beta}\sqrt{l + 1}\), we have \[\sum_{k = 1}^{l}\mathbb{E}[P(u^{k},v^{k})]\leq \frac{\sqrt{l}}{2}\left(\frac{D_{u}^{2}}{C_{\alpha}} +\frac{D_{v}^{2}}{C_{\beta}}\right) + \sqrt{l + 1}\left(\frac{C_{\alpha}G_{u}^{2}}{2} +C_{\alpha}L_{v}G_{u}^{2} + C_{\alpha}L_{v}D_{v}^{2} + \frac{C_{\beta}G_{v}^{2}}{2}\right). \quad (41)\] For \(\hat{u}^{l} = \frac{1}{l}\sum_{k = 1}^{l}u^{k}\) and \(\hat{v}^{l} = \frac{1}{l}\sum_{k = 1}^{l}v^{k}\), because \(\mathcal{L}(u,v)\) is convex-concave, we have \[\begin{array}{r l r} & {\mathbb{E}[P(\hat{u}^{l},\hat{v}^{l})] = \mathbb{E}[\mathcal{L}(\hat{u}^{l},v^{*}) - \mathcal{L}(u^{*},\hat{v}^{l})]} & (42)\\ & {\qquad \leq \mathbb{E}[\frac{1}{l}\sum_{k = 1}^{l}(\mathcal{L}(u^{k},v^{*}) - \mathcal{L}(u^{*},v^{k}))]} & (43)\\ & {\qquad = \frac{1}{l}\sum_{k = 1}^{l}\mathbb{E}[\mathcal{L}(u^{k},v^{*}) - \mathcal{L}(u^{*},v^{k})]} & (44)\\ & {\qquad = \frac{1}{l}\sum_{k = 1}^{l}\mathbb{E}[P(u^{k},v^{k})].} & (45) \end{array}\] Combining (41) and (45) proves \[\mathbb{E}[P(\hat{u}^{l},\hat{v}^{l})]\leq \frac{1}{2\sqrt{l}}\left(\frac{D_{u}^{2}}{C_{\alpha}} +\frac{D_{v}^{2}}{C_{\beta}}\right) + \frac{\sqrt{l + 1}}{l}\left(\frac{C_{\alpha}G_{u}^{2}}{2} +C_{\alpha}L_{v}G_{u}^{2} + C_{\alpha}L_{v}D_{v}^{2} + \frac{C_{\beta}G_{v}^{2}}{2}\right). \quad (46)\]
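As a sanity check on the rate, the following toy sketch (our own, with assumed ingredients: the convex-concave \(\mathcal{L}(u,v) = \frac{1}{2}\|u\|^{2} + v^{T}Ku - \frac{1}{2}\|v\|^{2}\) whose saddle is \((0,0)\), additive gradient noise, stepsizes \(C/\sqrt{k}\), and the prediction step) tracks the primal-dual gap of the averaged iterates, which for this \(\mathcal{L}\) equals \(\frac{1}{2}\|\hat{u}\|^{2} + \frac{1}{2}\|\hat{v}\|^{2}\):

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((3, 3))
u, v = rng.standard_normal(3), rng.standard_normal(3)
u_sum, v_sum = np.zeros(3), np.zeros(3)
for k in range(1, 100001):
    step = 0.5 / np.sqrt(k)                             # alpha_k = beta_k = C/sqrt(k)
    g_u = u + K.T @ v + 0.1 * rng.standard_normal(3)    # noisy primal gradient
    u_new = u - step * g_u                              # primal descent
    u_bar = 2 * u_new - u                               # prediction step
    g_v = K @ u_bar - v + 0.1 * rng.standard_normal(3)  # noisy dual gradient
    v = v + step * g_v                                  # dual ascent
    u = u_new
    u_sum += u
    v_sum += v
    if k in (100, 10000, 100000):
        uh, vh = u_sum / k, v_sum / k
        # gap at the averaged iterates; Theorem 1 guarantees O(1/sqrt(k))
        print(k, 0.5 * (uh @ uh) + 0.5 * (vh @ vh))
```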
## B MNIST TOY EXAMPLE

Experimental details: We consider the classic MNIST digits dataset (LeCun et al., 1998), consisting of 60,000 training images and 10,000 testing images, each of size \(28\times 28\). For simplicity, we consider the task (T1) of classifying images as odd- or even-numbered digits. Roughly \(50\%\) of the data instances were corrupted using salt-and-pepper noise of probability 0.2, and this distortion process was biased: only \(30\%\) of even-numbered images were distorted, as against \(70\%\) of odd-numbered images. We observed that any feature representation network \(\theta_{f}\) trained using the binary classification loss for task T1 also encodes this noise bias. This was verified by training an independent noise classifier on the learned features. This led us to design a simple adversarial network to "unlearn" the noise bias from the feature learning pipeline. We formulate this using the minimax objective in (5). In our model, \(\mathcal{L}_{d}\) is a softmax loss for the task (T2) of classifying whether the input sample is noisy or not, \(\mathcal{L}_{y}\) is a softmax loss for task T1, and \(\lambda = 1\).

A LeNet network (LeCun et al., 1998) is used for task T1, while a two-layer MLP is used for task T2. LeNet consists of two convolutional (conv) layers followed by two fully connected (FC) layers at the top. The parameters of the conv layers form \(\theta_{f}\), while those of the FC and MLP layers form \(\theta_{y}\) and \(\theta_{d}\), respectively. We train the network in three stages. Following the training on task T1, \(\theta_{f}\) was fixed and the MLP was trained independently on task T2. The default learning schedule of the LeNet implementation in Caffe (Jia et al., 2014) was followed for both tasks, and the total number of training iterations on each task was set to 10,000. After pretraining, the whole network is jointly finetuned using the adversarial approach: (5) is alternately minimized w.r.t. \(\theta_{\mathrm{f}}\), \(\theta_{\mathrm{y}}\) and maximized w.r.t. \(\theta_{\mathrm{d}}\). The predictive steps were used only during the finetuning phase.

<--- Page Split --->

Our findings are summarized in Figure 3. In addition, Figure 7 provides a head-to-head comparison of two popular solvers, Adam and SGD, using the predictive step. Not surprisingly, the Adam solver shows relatively better performance and convergence even with an additional predictive step. This also suggests that the default hyper-parameters of the Adam solver can be retained for training these networks, without resorting to any further hyper-parameter tuning (as is currently the practice).

![](images/16_0.jpg) <center>Figure 7: Comparison of the classification accuracy of parity classification and noise discrimination using the SGD and Adam solvers with and without prediction step. </center>

## C DOMAIN ADAPTATION

Experimental details: To evaluate a domain adaptation task, we consider the OFFICE dataset (Saenko et al., 2010). OFFICE is a small-scale dataset consisting of images collected from three distinct domains: AMAZON, DSLR and WEBCAM. For such a small-scale dataset, it is non-trivial to learn features from images of a single domain. For instance, the largest subset, AMAZON, contains only 2,817 labeled images spread across 31 different categories. However, one can leverage domain adaptation to improve cross-domain accuracy. We follow the protocol listed in Ganin & Lempitsky (2015), and the same network architecture is used. Caffe (Jia et al., 2014) is used for implementation.
The training procedure from Ganin & Lempitsky (2015) is kept intact except for the additional prediction step. In Table 2, comparisons are drawn with respect to target-domain accuracy on three pairs of source-target domain tasks. The test accuracy is reported at the end of 50,000 training iterations.

## D FAIR CLASSIFIER

Experimental details: The "Adult" dataset from the UCI machine learning repository is used, which consists of census data from \(\sim 45,000\) people. The task is to classify whether a person earns \(\geq \$ 50k\)/year. The person's gender is chosen to be the sensitive variable. We binarize all the category attributes, giving us a total of 102 input features per sample. We randomly split the data into 35,000 samples for training, 5,000 for validation, and 5,000 for testing. The result reported here is an average over five such random splits.

## E GENERATIVE ADVERSARIAL NETWORKS

Toy Dataset: To illustrate the advantage of the prediction method, we experiment on a simple GAN architecture with fully connected layers using a toy dataset. The constructed toy example and its architecture are inspired by the one presented in Metz et al. (2017). The two-dimensional data is sampled from a mixture of eight Gaussians with their means equally spaced around the unit circle

<--- Page Split --->

centered at \((0,0)\). The standard deviation of each Gaussian is set to 0.01. The two-dimensional latent vector \(\mathbf{z}\) is sampled from a multivariate Gaussian distribution. The generator and discriminator networks consist of two fully connected hidden layers, each with 128 hidden units and tanh activations. The final layer of the generator has a linear activation, while that of the discriminator has a sigmoid activation. The solver optimizes both the discriminator and the generator network using the objective in (4). We use the Adam solver with its default parameters (i.e., learning rate \(= 0.001\), \(\beta_{1} = 0.9\), \(\beta_{2} = 0.999\)) and an input batch size of 512. The generated two-dimensional samples are plotted in Figure 8. The straightforward use of the Adam solver fails to recover all the modes of the underlying dataset, while both unrolled GAN and our method are able to produce all the modes.

![](images/17_0.jpg) <center>Figure 8: Comparison of GAN training algorithms on toy dataset. Results on, from top to bottom, GAN, GAN with G prediction, and unrolled GAN. </center>

We further investigate the performance of GAN training algorithms on data sampled from a mixture of a large number of Gaussians. We use 100 Gaussian modes, equally spaced around a circle of radius 24 centered at \((0,0)\). We retain the same experimental settings as described above and train the GAN with two different input batch sizes, a small (64) and a large (6144) setting. Figure 9 plots the generated sample output of GANs trained (for a fixed number of epochs) under the above settings using different training algorithms. Note that for the small batch size, the default as well as the unrolled training for GAN fails to recover the actual modes of the underlying dataset. We hypothesize that this is perhaps due to the batch size, 64, being smaller than the number of input modes (100). When trained with a small batch, the GAN observes samples from only a few input modes at every iteration; this causes instability and leads to the failure of these training algorithms. This scenario is pertinent to real datasets, where the number of modes is relatively high compared to the input batch size.
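For reference, a minimal sketch of the two ring-of-Gaussians target distributions described above (our own helper; the function name and defaults are assumptions, not the paper's code):

```python
import numpy as np

def sample_ring(n, n_modes=8, radius=1.0, std=0.01, seed=0):
    # Mixture of n_modes Gaussians with means equally spaced on a
    # circle centered at the origin, as described in the text.
    rng = np.random.default_rng(seed)
    angles = 2.0 * np.pi * rng.integers(0, n_modes, n) / n_modes
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return centers + std * rng.standard_normal((n, 2))

batch = sample_ring(512)                             # the 8-mode, radius-1 setting
big = sample_ring(6144, n_modes=100, radius=24.0)    # the 100-mode, radius-24 setting
```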
![](images/17_1.jpg) <center>Figure 9: Comparison of GAN training algorithms on toy dataset of mixture of 100 Gaussians. Results on, from top to bottom, batch size of 64 and 6144. </center>

<--- Page Split --->

DCGAN Architecture details: For our experiments, we use the publicly available code for DCGAN (Radford et al., 2016) and its implementation for the Cifar-10 dataset. The random noise vector is 100-dimensional, and the output of the generator network is a \(64\times 64\) image with 3 channels.

Additional DCGAN Results:

![](images/18_0.jpg) <center>Figure 10: Comparison of GAN training algorithms for DCGAN architecture on Cifar-10 image datasets. Using higher momentum, \(lr = 0.0002\), \(\beta_{1} = 0.9\). </center>

![](images/18_1.jpg) <center>Figure 11: Comparison of GAN training algorithms for DCGAN architecture on Cifar-10 image datasets. \(lr = 0.0004\), \(\beta_{1} = 0.5\). </center>

<--- Page Split --->

![](images/19_0.jpg) <center>Figure 12: Comparison of GAN training algorithms for DCGAN architecture on Cifar-10 image datasets. \(lr = 0.0006\), \(\beta_{1} = 0.5\). </center>

![](images/19_1.jpg) <center>Figure 13: Comparison of GAN training algorithms for DCGAN architecture on Cifar-10 image datasets. \(lr = 0.0008\), \(\beta_{1} = 0.5\). </center>

<--- Page Split --->

Experiments on Imagenet: In this section we demonstrate the advantage of prediction methods for generating higher-resolution images of size \(128 \times 128\). For this purpose, the state-of-the-art AC-GAN (Odena et al., 2017) architecture is considered and conditionally trained using images of all 1000 classes from the Imagenet dataset. We used the publicly available code for AC-GAN, and all the parameters were set to their defaults as in Odena et al. (2017). Figure 14 plots the Inception Score measured at every training epoch of the AC-GAN model with and without prediction. The score is averaged over five independent runs. From the figure, it is clear that even at higher resolution, with a large number of classes, the prediction method is stable and aids in speeding up training.

![](images/20_0.jpg) <center>Figure 14: Comparison of Inception scores on high resolution Imagenet datasets measured at each training epoch of ACGAN model with and without prediction. </center>

<--- Page Split --->
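To close these appendices with the recipe used throughout them, here is a distilled, framework-agnostic sketch of how the prediction step slots into an alternating two-player loop (a toy smooth quadratic game of our own construction, standing in for the GAN objective (4); note that either player can be predicted, e.g. Figure 8 uses G prediction):

```python
import numpy as np

# Toy game L(d, g) = d.g - 0.05||d||^2 + 0.05||g||^2, with saddle (0, 0):
# the "discriminator" d ascends L, the "generator" g descends L.
def d_grad(d, g):
    return g - 0.1 * d      # gradient of L w.r.t. d (ascent direction)

def g_grad(d, g):
    return d + 0.1 * g      # gradient of L w.r.t. g (descent direction)

rng = np.random.default_rng(0)
d, g = rng.standard_normal(4), rng.standard_normal(4)
lr = 0.05
for _ in range(2000):
    d_new = d + lr * d_grad(d, g)   # discriminator update
    d_bar = 2 * d_new - d           # prediction step on D's parameters
    g = g - lr * g_grad(d_bar, g)   # generator responds to the predicted D
    d = d_new
print(np.linalg.norm(d), np.linalg.norm(g))  # both shrink toward the saddle
```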
accept
Accept (Poster)
6.666667
ICLR_2018_paper_0400
iclr
2,018
# QUANTITATIVELY EVALUATING GANs WITH DIVERGENCES PROPOSED FOR TRAINING

Daniel Jiwoong \(\mathbf{Im}^{1,2}\) , He Ma \(^{3,4}\) , Graham Taylor \(^{3,4}\) , & Kristin Branson \(^{1}\) \(^{1}\) Janelia Research Campus, HHMI, \(^{2}\) AIFounded Inc. \(^{3}\) University of Guelph, \(^{4}\) Vector Institute {imd, bransonk}@janelia.hhmi.org {hma02, gtaylor}@uoguelph.ca

## ABSTRACT

Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. However, we currently lack quantitative methods for model assessment. Because of this, while many GAN variants are being proposed, we have relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics and, interestingly, the test-time metrics do not favour networks that use the same training-time criterion. We also compare the proposed metrics to human perceptual scores.

## 1 INTRODUCTION

Generative adversarial networks (GANs) aim to approximate a data distribution \(P\) using a parameterized model distribution \(Q\). They achieve this by jointly optimizing generative and discriminative networks (Goodfellow et al., 2014). GANs are end-to-end differentiable: samples from the generative network are propagated forward to a discriminative network, and error signals are then propagated backwards from the discriminative network to the generative network. The discriminative network is often viewed as a learned, adaptive loss function for the generative network. GANs have achieved state-of-the-art results for a number of applications (Goodfellow, 2016), producing more realistic, sharper samples than other popular generative models, such as variational autoencoders (Kingma & Welling, 2014).

Because of their success, many GAN frameworks have been proposed. However, it has been difficult to compare these algorithms and understand their strengths and weaknesses, because we currently lack quantitative methods for assessing the learned generators.

In this work, we propose new metrics for measuring how realistic the samples generated by GANs are. These criteria are based on a formulation of the divergence between the distributions \(P\) and \(Q\) (Nowozin et al., 2016; Sriperumbudur et al., 2009): \[\inf_{Q}J(Q) = \inf_{Q}\sup_{f\in \mathcal{F}}\mathbb{E}_{P(x)}\left[\mu \left(f(x)\right)\right] - \mathbb{E}_{Q(x)}\left[v\left(f(x)\right)\right] \quad (1)\] Here, different choices of \(\mu\), \(v\), and \(\mathcal{F}\) correspond to different \(f\)-divergences (Nowozin et al., 2016) or different integral probability metrics (IPMs) (Sriperumbudur et al., 2009). Importantly, \(J(Q)\) can be estimated using samples from \(P\) and \(Q\), and does not require us to be able to estimate \(P(x)\) or \(Q(x)\) for samples \(x\). Instead, evaluating \(J(Q)\) involves finding the function \(f \in \mathcal{F}\) that is maximally different with respect to \(P\) and \(Q\).
This measure of divergence between the distributions \(P\) and \(Q\) is related to the GAN criterion if we restrict the function class \(\mathcal{F}\) to neural network functions parameterized by the vector \(\phi\), and the class of approximating distributions to those of neural network generators \(G_{\theta}\) parameterized by the vector \(\theta\), allowing formulation as a min-max problem: \[\min_{\theta}J(\theta) = \min_{\theta}\max_{\phi}\mathbb{E}_{P(x)}\left[\mu \left(D_{\phi}(x)\right)\right] - \mathbb{E}_{Q_{\theta}(x)}\left[v\left(D_{\phi}(x)\right)\right], \quad (2)\]

<--- Page Split --->

Table 1: The \(\mu\) and \(\upsilon\) functions defining the GAN metrics proposed in this paper. \(M\) is some real number. \(\mathcal{H}\) is a Reproducing Kernel Hilbert Space (RKHS) and \(\| \cdot \|_{L}\) is the Lipschitz constant. For the LS-DCGAN, we used \(b = 1\) and \(a = 0\) (Mao et al., 2017).

<table><tr><td>Metric</td><td>μ</td><td>v</td><td>Function Class</td></tr><tr><td>GAN (GC)</td><td>log f</td><td>-log(1-f)</td><td>X→R+, ∃M∈R: |f(x)|≤M</td></tr><tr><td>Least-Squares GAN (LS)</td><td>-(f-b)²</td><td>(f-a)²</td><td>X→R, ∃M∈R: |f(x)|≤M</td></tr><tr><td>MMD</td><td>f</td><td>f</td><td>f: ||f||ℋ≤1</td></tr><tr><td>Wasserstein (IW)</td><td>f</td><td>f</td><td>f: ||f||L≤1</td></tr></table>

In this formulation, \(Q_{\theta}\) corresponds to the generator network's distribution and \(D_{\phi}\) corresponds to the discriminator network (see (Nowozin et al., 2016) for details). We propose using \(J(\theta)\) to evaluate the performance of the generator network \(G_{\theta}\) for various choices of \(\mu\) and \(\upsilon\), corresponding to different \(f\)-divergences or IPMs between the distributions \(P\) and \(Q_{\theta}\), that have been successfully used for GAN training. Our proposed metrics differ from most existing metrics in that they are adaptive, and involve finding the maximum over discriminative networks. We compare four metrics, corresponding to the original GAN (GC) (Goodfellow, 2016), the Least-Squares GAN (LS) (Mao et al., 2017), the Wasserstein GAN (IW) (Arjovsky et al., 2017), and the Maximum Mean Discrepancy GAN (MMD) (Li et al., 2017) criteria. Choices for \(\mu\), \(\upsilon\), and \(\mathcal{F}\) for these metrics are shown in Table 1. Our method can easily be extended to other \(f\)-divergences or IPMs.

To compare these and previous metrics for evaluating GANs, we performed many experiments, training and comparing multiple types of GANs with multiple architectures on multiple data sets. We qualitatively and quantitatively compared these metrics to human perception, and found that our proposed metrics better reflected human perception. We also show that the rankings produced using our proposed metrics are consistent across metrics, and are thus robust to the exact choices of the functions \(\mu\) and \(\upsilon\) in Equation 2.

We used the proposed metrics to quantitatively analyze three different families of GANs: Deep Convolutional Generative Adversarial Networks (DCGAN) (Radford et al., 2015), Least-Squares GANs (LS-DCGAN), and Wasserstein GANs (W-DCGAN), each of which corresponds to one of the proposed metrics. Interestingly, we found that the different proposed metrics still agreed on the best GAN framework for each dataset. Thus, even though, e.g., the W-DCGAN was trained with the IW criterion, on MNIST the LS-DCGAN still outperformed it under the IW criterion.
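The choices in Table 1 are compact enough to state as code. The sketch below (our own illustration; the function-class constraints in the last column are enforced by how the critic \(D_{\phi}\) is trained, not by these formulas) turns a trained critic's outputs on held-out real and generated samples into a Monte Carlo estimate of the objective in Equation 2:

```python
import math

# Table 1 as code: the (mu, v) pairs of the four metrics.
# LS uses b = 1, a = 0 as in the paper.
MU_V = {
    "GC":  (lambda f: math.log(f),     lambda f: -math.log(1.0 - f)),
    "LS":  (lambda f: -(f - 1.0) ** 2, lambda f: f ** 2),
    "MMD": (lambda f: f,               lambda f: f),
    "IW":  (lambda f: f,               lambda f: f),
}

def j_estimate(metric, critic_real, critic_fake):
    # Monte Carlo estimate of E_P[mu(D(x))] - E_Q[v(D(G(z)))] from the
    # critic's outputs on held-out real and generated samples.
    mu, v = MU_V[metric]
    return (sum(mu(f) for f in critic_real) / len(critic_real)
            - sum(v(f) for f in critic_fake) / len(critic_fake))

print(j_estimate("LS", [0.9, 0.8], [0.2, 0.1]))  # toy critic outputs
```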
Our analysis also included a sensitivity analysis with respect to various factors, such as architecture size, noise dimension, the update ratio between discriminator and generator, and the number of data points. Our empirical results show that: i) the larger the GAN architecture, the better the results; ii) having a generator network larger than the discriminator network does not yield good results; iii) the best ratio between discriminator and generator updates depends on the data set; and iv) W-DCGAN and LS-DCGAN performance increases much faster than DCGAN's as the number of training examples grows. These metrics thus allow us to tune the hyper-parameters and architectures of GANs.

## 2 RELATED WORK

GANs can be evaluated using manual annotations, but this is time-consuming and difficult to reproduce. Several automatically computable metrics have been proposed for evaluating the performance of probabilistic generative models, and GANs in particular. We review some of these here, and compare our proposed metrics to them in our experiments.

Many previous probabilistic generative models were evaluated based on the pointwise likelihood of the test data, the criterion also used during training. While GANs can be used to generate samples from the approximate distribution, their likelihood on test samples cannot be evaluated without simplifying assumptions. As discussed in (Theis et al., 2015), likelihood often does not provide good rankings of how realistic samples look, the main goal of GANs. We evaluated the efficacy of the log-likelihood of the test data, as estimated using Annealed Importance Sampling (AIS) (Wu et al., 2016). AIS

<--- Page Split --->

has been used to estimate the likelihood of a test sample \(x\) by considering many intermediate distributions that are defined by taking a weighted geometric mean between the prior (input) distribution, \(p(z)\), and an approximation of the joint distribution \(p_{\sigma}(x,z) = p_{\sigma}(x|z)p(z)\). Here, \(p_{\sigma}(x|z)\) is a Gaussian kernel with fixed standard deviation \(\sigma\) around mean \(G_{\theta}(z)\). The final estimate depends critically on the accuracy of this approximation. In Section 4, we demonstrate that the AIS estimate of \(p(x)\) is highly dependent on the choice of this hyperparameter.

The Generative Adversarial Metric (Im et al., 2016a) measures the relative performance of two GANs by measuring the likelihood ratio of the two models. Consider two GANs with their respective trained partners, \(M_{1} = (D_{1},G_{1})\) and \(M_{2} = (D_{2},G_{2})\), where \(G_{1}\) and \(G_{2}\) are the generators and \(D_{1}\) and \(D_{2}\) are the discriminators. The hypothesis \(\mathcal{H}_{1}\) is that \(M_{1}\) is better than \(M_{2}\) if \(G_{1}\) fools \(D_{2}\) more than \(G_{2}\) fools \(D_{1}\), and vice versa for the hypothesis \(\mathcal{H}_{0}\). The likelihood ratio is defined as: \[\frac{p(x|y = 1;M_{1}^{\prime})}{p(x|y = 1;M_{2}^{\prime})} = \frac{p(y = 1|x;D_{1})p(x;G_{2})}{p(y = 1|x;D_{2})p(x;G_{1})}, \quad (3)\] where \(M_{1}^{\prime}\) and \(M_{2}^{\prime}\) are the swapped pairs \((D_{1},G_{2})\) and \((D_{2},G_{1})\), \(p(x|y = 1;M)\) is the likelihood of \(x\) being generated from the data distribution \(p(x)\) under model \(M\), and \(p(y = 1|x;D)\) indicates that discriminator \(D\) thinks \(x\) is a real sample.
To evaluate this, we measure the ratio of how frequently \(G_{1}\), the generator from model 1, fools \(D_{2}\), the discriminator from model 2, and vice versa: \(\frac{D_{1}(x_{2})}{D_{2}(x_{1})}\), where \(x_{1} \sim G_{1}\) and \(x_{2} \sim G_{2}\). There are two main caveats to the Generative Adversarial Metric. First, the measurement only provides comparisons between pairs of models. Second, the metric has a constraint that the two discriminators must have approximately similar performance on a calibration dataset, which can be difficult to satisfy in practice.

The Inception Score (IS) (Salimans et al., 2016) measures the performance of a model using a third-party neural network trained on a supervised classification task, e.g. Imagenet. The IS computes the expected divergence between the distribution of class predictions for samples from the GAN and the distribution of class labels used to train the third-party network, \[\exp \left(\mathbb{E}_{x\sim Q_{\theta}}K L(p(y|x)\| p(y))\right). \quad (4)\] Here, the class prediction given a sample \(x\) is computed using the third-party neural network. In (Salimans et al., 2016), Google's Inception Network (Szegedy et al., 2015) trained on Imagenet was the third-party neural network. IS is the most widely used metric to measure GAN performance. However, summarizing samples as the class prediction from a network trained for a different task discards much of the important information in the sample. In addition, it requires another neural network that is trained separately via supervised learning. We demonstrate an example of a failure case of IS in the Experiments section.

The Fréchet Inception Distance (FID) (Heusel et al., 2017) extends IS. Instead of using the final classification outputs from the third-party network as representations of samples, it uses a representation computed from a late layer of the third-party network. It compares the mean \(m_{Q}\) and covariance \(C_{Q}\) of the Inception-based representation of samples generated by the GAN to the mean \(m_{P}\) and covariance \(C_{P}\) of the same representation for training samples: \[D^{2}\left((m_{P},C_{P}),(m_{Q},C_{Q})\right) = \| m_{P} - m_{Q}\|_{2}^{2} + \mathrm{Tr}\left(C_{P} + C_{Q} - 2(C_{P}C_{Q})^{\frac{1}{2}}\right). \quad (5)\] This method relies on the Inception-based representation of the samples capturing all important information, and on the first two moments of the distributions being descriptive of the distribution.

Classifier Two-Sample Tests (C2ST) (Lopez-Paz & Oquab, 2016) propose training a classifier, similar to a discriminator, that can distinguish real samples from \(P\) from generated samples from \(Q\), and using the error rate of this classifier as a measure of GAN performance. In their work, they used single-layer and \(k\)-nearest neighbor (KNN) classifiers trained on a representation of the samples computed from a late layer of a third-party network (in this case, ResNet (He et al., 2015)). C2ST is an IPM (Sriperumbudur et al., 2009), like the MMD and Wasserstein metrics we propose, with \(\mu (f) = f\) and \(\upsilon (f) = f\), but with a different function class \(\mathcal{F}\), corresponding to the family of classifiers chosen (in this case, single-layer networks or KNN; see our detailed explanation in Appendix 5). The accuracy of a classifier trained to distinguish samples from distributions \(P\) and \(Q\) is just one way to measure the distance between these distributions, and, in this work, we propose a general family.
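Since FID (Eq. 5) recurs throughout the comparisons below, a short reference sketch may help (our own; a real FID computation uses Inception pool features in place of the random stand-ins here):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_p, feats_q):
    # Eq. (5) applied to two feature matrices (rows = samples):
    # fit a Gaussian to each set, then compare means and covariances.
    m_p, m_q = feats_p.mean(axis=0), feats_q.mean(axis=0)
    c_p = np.cov(feats_p, rowvar=False)
    c_q = np.cov(feats_q, rowvar=False)
    covmean = linalg.sqrtm(c_p @ c_q).real   # drop tiny imaginary parts
    return np.sum((m_p - m_q) ** 2) + np.trace(c_p + c_q - 2.0 * covmean)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (500, 8))   # stand-ins for Inception features
fake = rng.normal(0.5, 1.0, (500, 8))
print(frechet_distance(real, fake))     # ~ 8 * 0.5^2 = 2 for this toy
```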
<--- Page Split --->

## 3 EVALUATION METRICS

Given a generator \(G_{\theta}\) with parameters \(\theta\), which generates samples from the distribution \(Q_{\theta}\), we propose to measure the quality of \(G_{\theta}\) by estimating the divergence between the true data distribution \(P\) and \(Q_{\theta}\) for different choices of divergence measure. We train both \(G_{\theta}\) and \(D_{\phi}\) on a training data set, and measure performance on a separate test set. See Algorithm 1 for details.

We consider metrics from two widely studied families of divergence and distance measures, the \(f\)-divergences (Nguyen et al., 2008) and the Integral Probability Metrics (IPMs) (Muller, 1997). In our experiments, we consider the following four metrics, which are commonly used to train GANs. Below, \(\phi\) represents the parameters of the discriminator network and \(\theta\) represents the parameters of the generator network.

Original GAN Criterion (GC) Training a standard GAN corresponds to minimizing the following (Goodfellow et al., 2014): \[\max_{\phi} \mathbb{E}_{x\sim p(x)}[\log (D_{\phi}(x))] + \mathbb{E}_{z\sim p(z)}[\log (1 - D_{\phi}(G_{\theta}(z)))], \quad (6)\] where \(p(z)\) is the prior distribution of the generative network and \(G_{\theta}(z)\) is a differentiable function from \(z\) to the data space, represented by a neural network with parameters \(\theta\). \(D_{\phi}\) is trained with a sigmoid activation function, so its output is guaranteed to be positive.

Least-Squares GAN Criterion (LS) A Least-Squares GAN corresponds to training with a Pearson \(\chi^{2}\) divergence (Mao et al., 2017): \[\max_{\phi} -\mathbb{E}_{x\sim p(x)}[(D_{\phi}(x) - b)^{2}] - \mathbb{E}_{z\sim p(z)}[(D_{\phi}(G_{\theta}(z)) - a)^{2}]. \quad (7)\] Following (Mao et al., 2017), we set \(a = 0\) and \(b = 1\) when training \(D_{\phi}\).

Maximum Mean Discrepancy (MMD) The maximum mean discrepancy metric considers the largest difference in the expectations over a unit ball of an RKHS \(\mathcal{H}\), \[\begin{array}{r l} & {\underset {\phi :\| f\|_{\mathcal{H}}\leq 1}{\max}\mathbb{E}_{x\sim p(x)}[D_{\phi}(x)] - \mathbb{E}_{z\sim p(z)}[D_{\phi}(G_{\theta}(z))]}\\ & {\qquad = \mathbb{E}_{x,x^{\prime}\sim P}\left[k(x,x^{\prime})\right] + \mathbb{E}_{z,z^{\prime}\sim p(z)}\left[k(G_{\theta}(z),G_{\theta}(z^{\prime}))\right] - 2\mathbb{E}_{x\sim P,z\sim p(z)}\left[k(x,G_{\theta}(z))\right],} \end{array} \quad (8)\] where \(\mathcal{H}\) is the RKHS with kernel \(k(\cdot ,\cdot)\) (Gretton et al., 2012). In this case, we do not need to train a critic \(D_{\phi}\) to evaluate our metric.

Improved Wasserstein Distance (IW) Arjovsky & Bottou (2017) proposed the use of the dual representation of the Wasserstein distance (Villani, 2009) for training GANs. The Wasserstein distance is an IPM over the 1-Lipschitz function class \(\{\phi :\| \phi \|_{L}\leq 1\}\): \[\max_{\phi :\| \phi \|_{L}\leq 1}\left[\mathbb{E}_{x\sim p(X)}\left[D_{\phi}(x)\right] - \mathbb{E}_{z\sim p(Z)}\left[D_{\phi}(G_{\theta}(z))\right]\right]. \quad (10)\] Note that IW (Danihelka et al., 2017) and MMD (Sutherland et al., 2017) were recently proposed for evaluating GANs, but have not been compared before.

Algorithm 1 Compute the divergence/distance.
1: procedure DIVERGENCECOMPUTATION(dataset \(\{X_{tr},X_{te}\}\), generator \(G_{\theta}\), learning rate \(\eta\), evaluation criterion \(J(\phi ,X,Y)\))
2:   Initialize the critic network parameters \(\phi\)
3:   for \(i = 1\dots N\) do
4:     Sample data points \(\{x_{m}\} \sim X_{tr}\)
5:     Sample points from the generative model, \(\{s_{m}\} \sim G_{\theta}\)
6:     \(\phi \leftarrow \phi +\eta \nabla_{\phi}J(\{x_{m}\} ,\{s_{m}\} ;\phi)\)
7:   Sample points from the generative model, \(\{s_{m}\} \sim G_{\theta}\)
8:   return \(J(\phi ,X_{te},\{s_{m}\})\)
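A runnable miniature of Algorithm 1 (our own sketch under toy assumptions: 1-D data, a linear critic, and the LS criterion of Eq. (7) with \(a = 0\), \(b = 1\); the paper trains CNN critics the same way):

```python
import numpy as np

rng = np.random.default_rng(0)
x_tr = rng.normal(0.0, 1.0, 2000)         # "real" training split X_tr
x_te = rng.normal(0.0, 1.0, 2000)         # held-out split X_te
gen = lambda n: rng.normal(0.5, 1.0, n)   # stand-in generator G_theta

w, b, eta = 0.0, 0.0, 0.05                # linear critic D_phi(x) = w*x + b

def critic(x):
    return w * x + b

def ls_j(real, fake):
    # Eq. (7) with b = 1, a = 0: -E[(D(x) - 1)^2] - E[D(G(z))^2]
    return -np.mean((critic(real) - 1.0) ** 2) - np.mean(critic(fake) ** 2)

for _ in range(500):                      # Algorithm 1, steps 3-6
    xm, sm = rng.choice(x_tr, 128), gen(128)
    gw = -2.0 * np.mean((critic(xm) - 1.0) * xm) - 2.0 * np.mean(critic(sm) * sm)
    gb = -2.0 * np.mean(critic(xm) - 1.0) - 2.0 * np.mean(critic(sm))
    w, b = w + eta * gw, b + eta * gb     # gradient ascent on J w.r.t. phi
print(ls_j(x_te, gen(2000)))              # Algorithm 1, step 8
```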
<--- Page Split --->

## 4 EXPERIMENTS

The goals in our experiments are two-fold. First, we wanted to evaluate the metrics we proposed for evaluating GANs. Second, we wanted to use these metrics to evaluate GAN frameworks and architectures. In particular, we evaluated how the sizes of the discriminator and generator networks affect performance, and the sensitivity of each algorithm to training data set size.

GAN frameworks. We conducted our experiments on three types of GANs: Deep Convolutional Generative Adversarial Networks (DCGAN), Least-Squares GANs (LS-DCGAN), and Wasserstein GANs (W-DCGAN). Note that, to avoid confusing the test metric names with the GAN frameworks we evaluated, we use different abbreviations: GC is the original GAN criterion, which is used to train DCGANs; the LS criterion is used to train the LS-DCGAN; and the IW criterion is used to train the W-DCGAN.

Evaluation criteria. We evaluated these three families of GANs with six metrics. We compared our four proposed metrics to the two most commonly used metrics for evaluating GANs, the IS and FID. Because the optimization of a discriminator is required both during training and at test time, we call the discriminator learned for evaluating our metrics the critic, in order not to confuse the two discriminators. We also compared these metrics to human perception, and had three volunteers evaluate and compare sets of images, either from the training data set or generated from different GAN frameworks during training.

Data sets. In our experiments, we considered the MNIST (LeCun et al., 1998), CIFAR10, LSUN Bedroom, and Fashion MNIST datasets. MNIST consists of 60,000 training and 10,000 test images of size \(28 \times 28\) pixels, containing handwritten digits from the classes 0 to 9. From the 60,000 training examples, we set aside 10,000 as validation examples to tune various hyper-parameters. FashionMNIST consists of exactly the same numbers of training and test examples; each example is a \(28 \times 28\) grayscale image, associated with a label from 10 classes. The CIFAR10 dataset consists of images of size \(32 \times 32 \times 3\) pixels, with ten different classes of objects. We used 45,000, 5,000, and 10,000 examples as training, validation, and test data, respectively. The LSUN Bedroom dataset consists of images of size \(64 \times 64\) pixels, depicting various bedrooms. From the 3,033,342 images, we used 90,000 images as training data and 90,000 images as validation data. The learning rate was selected from discrete ranges and chosen based on a held-out validation set.

Hyperparameters. Table 10 in the Appendix shows the learning rates and the convolutional kernel sizes that were used for each experiment. The architecture of each network is presented in Figure 10 in the Appendix. Additionally, we used squared-exponential (RBF) kernels with several different sigma values for MMD. A pre-trained logistic regression and a pre-trained residual network were used for IS and FID on the MNIST and CIFAR10 datasets, respectively. For every experiment, we retrained 10 times with different random seeds, and report the mean and standard deviation.

### 4.1 QUALITATIVE OBSERVATIONS ABOUT EXISTING METRICS

The log-likelihood measurement is the most commonly used metric for generative models. The behavior of the log-likelihood measured using AIS on GANs is strange, as shown in Figure 1. We measured the log-likelihood of the DCGAN on MNIST with three different variances, \(\sigma^2 = 0.01\), 0.025, and 0.05. The figure illustrates that the log-likelihood curve over the training epochs varies substantially depending on the variance, which indicates that the fixed Gaussian observable model might not be the ideal assumption for GANs. Moreover, we observe a high log-likelihood at the beginning of training, followed by a drop in likelihood, which then returns to the high value.

The IS and MMD metrics do not require training a critic. It was easy to find samples for which IS and MMD scores did not match their visual quality. For example, Figure 2 shows samples generated by a DCGAN when it failed to train properly. Even though the failed DCGAN samples are much darker than the samples on the right, the IS for the left samples is higher/better than for the right samples.

<--- Page Split --->

![](images/5_0.jpg) <center>Figure 1: Log-likelihood estimated using AIS for generators learned using DCGAN at various points during training, MNIST data set. </center>

![](images/5_1.jpg) <center>Figure 2: Misleading examples of Inception Scores. </center>

As the Imagenet-trained network is likely trained to be somewhat invariant to overall intensity, this issue is to be expected. A failure case for MMD is shown in Figure 5. The samples on the right are dark, like the previous examples, but still texturally recognizable, whereas the samples on the left are totally meaningless. However, MMD gives lower/better distances to the left samples. The average intensity of the pixels of the left samples is closer to that of the training data, suggesting that MMD is overly sensitive to image intensity. Thus, IS is under-sensitive to image intensity, while MMD is oversensitive to it. In Section 4.2.1, we conduct more systematic experiments by measuring the correlation between these metrics and human perceptual scores.

### 4.2 METRIC COMPARISON

To compare both the metrics and the different GAN frameworks, we evaluated the six metrics on the different GAN frameworks. Tables 2, 3, and 4 present the results on MNIST, CIFAR10, and LSUN, respectively. As each type of GAN was trained using one of our proposed metrics, we investigated whether each metric favors samples from the model trained using the same metric. Interestingly, we do not see this behavior, and our proposed metrics agree on which GAN framework produces samples closest to the test data set. Every metric, except for MMD, showed that LS-DCGAN performed best for MNIST and CIFAR10, while W-DCGAN performed best for LSUN. As discussed below, we found DCGAN to be unstable to train, and thus excluded GC as a metric for all experiments except for this first data set. For Fashion-MNIST, FID's ranking disagreed with IW and LS. We observed similar results for a range of different critic CNN architectures (number of feature maps in each convolutional layer): [3, 64, 128, 256], [3, 128, 256, 512], [3, 256, 512, 1024], and [3, 320, 640, 1280] (see Supp. Fig. 12 and 13).
We evaluated a larger variety of GAN frameworks using pre-trained GANs downloaded from (pyt). In particular, we evaluated EBGAN (Zhao et al., 2016), BEGAN (Berthelot et al., 2017), W-DCGAN GP (Gulrajani et al., 2017), and DRAGAN (Kodali et al., 2017). Table 5 presents the evaluation results. Critic architectures were selected to match those of these pre-trained GANs. For both MNIST and FashionMNIST, the three metrics are consistent: they rank DRAGAN the highest, followed by LS-DCGAN and DCGAN. The standard deviations for the IW distance are higher than for the LS divergence. We computed the Wilcoxon rank-sum test in order to test whether the medians of the distributions of distances are the same for DCGAN, LS-DCGAN, and W-DCGAN. We found that the different GAN frameworks have significantly different performance according to the LS criterion, but not according to the IW criterion (\(p < .05\), Wilcoxon rank-sum test). Thus LS is more sensitive than IW.

We evaluated the consistency of the metrics with respect to the size of the validation set. We trained our three GAN frameworks for 100 epochs with 90,000 training examples from the LSUN Bedroom dataset.

<--- Page Split --->

Table 2: GAN scores for various metrics trained on MNIST. Lower values are better for MMD, IW, LS, GC, and FID; higher values are better for IS. Lighter color indicates better performance.

<table><tr><td>Model</td><td>MMD</td><td>IW</td><td>GC</td><td>LS</td><td>IS <br>(Logistic Reg.)</td></tr><tr><td>DCGAN</td><td>0.028 ± 0.0066</td><td>7.01 ± 1.63</td><td>-2.2e-3 ± 3e-4</td><td>-0.12 ± 0.013</td><td>5.76 ± 0.10</td></tr><tr><td>W-DCGAN</td><td>0.006 ± 0.0009</td><td>7.71 ± 1.89</td><td>-4e-4 ± 4e-4</td><td>-0.05 ± 0.008</td><td>5.17 ± 0.11</td></tr><tr><td>LS-DCGAN</td><td>0.012 ± 0.0036</td><td>4.50 ± 1.94</td><td>-3e-3 ± 6e-4</td><td>-0.13 ± 0.022</td><td>6.07 ± 0.08</td></tr></table>

Table 3: GAN scores for various metrics trained on CIFAR10.

<table><tr><td>Model</td><td>MMD</td><td>IW</td><td>LS</td><td>IS <br>(ResNet)</td><td>FID</td></tr><tr><td>DCGAN</td><td>0.0538 ± 0.014</td><td>8.844 ± 2.87</td><td>-0.0408 ± 0.0039</td><td>6.649 ± 0.068</td><td>0.112 ± 0.010</td></tr><tr><td>W-DCGAN</td><td>0.0060 ± 0.001</td><td>9.875 ± 3.42</td><td>-0.0421 ± 0.0054</td><td>6.524 ± 0.078</td><td>0.095 ± 0.003</td></tr><tr><td>LS-DCGAN</td><td>0.0072 ± 0.0024</td><td>7.10 ± 2.05</td><td>-0.0535 ± 0.0031</td><td>6.761 ± 0.069</td><td>0.088 ± 0.008</td></tr></table>

Table 4: GAN scores for various metrics trained on LSUN Bedroom dataset.

<table><tr><td>Model</td><td>MMD</td><td>IW</td><td>LS</td></tr><tr><td>DCGAN</td><td>0.00708</td><td>3.79097</td><td>-0.14614</td></tr><tr><td>W-DCGAN</td><td>0.00584</td><td>2.91787</td><td>-0.20572</td></tr><tr><td>LS-DCGAN</td><td>0.00973</td><td>3.36779</td><td>-0.17307</td></tr></table>

Table 5: Evaluation of GANs on MNIST and Fashion-MNIST datasets.
<table><tr><td rowspan="2">Model</td><td colspan="3">MNIST</td><td colspan="3">Fashion-MNIST</td></tr><tr><td>IW</td><td>LS</td><td>FID</td><td>IW</td><td>LS</td><td>FID</td></tr><tr><td>DCGAN</td><td>0.4814 ± 0.0083</td><td>-0.111 ± 0.0074</td><td>1.84 ± 0.15</td><td>0.69 ± 0.0057</td><td>-0.0202 ± 0.00242</td><td>3.23 ± 0.34</td></tr><tr><td>EBGAN</td><td>0.7277 ± 0.0159</td><td>-0.029 ± 0.0026</td><td>5.36 ± 0.32</td><td>0.99 ± 0.0001</td><td>-2.2e-5 ± 5.3e-5</td><td>104.08 ± 0.56</td></tr><tr><td>W-DCGAN GP</td><td>0.7314 ± 0.0194</td><td>-0.035 ± 0.0059</td><td>2.67 ± 0.15</td><td>0.89 ± 0.0086</td><td>-0.0005 ± 0.00037</td><td>2.56 ± 0.25</td></tr><tr><td>LS-DCGAN</td><td>0.5058 ± 0.0117</td><td>-0.115 ± 0.0070</td><td>2.20 ± 0.27</td><td>0.68 ± 0.0086</td><td>-0.0208 ± 0.00290</td><td>0.62 ± 0.13</td></tr><tr><td>BEGAN</td><td></td><td>-0.009 ± 0.0063</td><td>15.9 ± 0.48</td><td>0.90 ± 0.0159</td><td>-0.0016 ± 0.00047</td><td>1.51 ± 0.16</td></tr><tr><td>DRAGAN</td><td>0.4632 ± 0.0247</td><td>-0.116 ± 0.0116</td><td>1.09 ± 0.13</td><td>0.66 ± 0.0108</td><td>-0.0219 ± 0.00232</td><td>0.97 ± 0.14</td></tr></table>

We then trained LS and IW critics using both 300 and 90,000 validation examples, and looked at how often the critic trained with 300 examples agreed with the one trained with 90,000 examples. The LS critics agreed \(88\%\) of the time, while the IW critics agreed only \(55\%\) of the time (slightly better than chance). Thus, LS is more robust to the validation set size. Another advantage is that measuring the LS distance is faster than measuring the IW distance, as estimating IW involves regularizing with a gradient penalty (Gulrajani et al., 2017); computing the gradient penalty term and tuning its regularization coefficient requires extra computational time.

As mentioned above, we found training a critic using the GC criterion (corresponding to a DCGAN) to be unstable. It has previously been speculated that this is because the supports of the data and model distributions may become disjoint (Arjovsky & Bottou, 2017), and because the Hessian of the GAN objective is non-Hermitian (Mescheder et al., 2017). LS-DCGAN and W-DCGAN were proposed to address this by providing non-saturating gradients. We also found DCGAN difficult to train, and thus only report results using the corresponding criterion GC for MNIST. Note that this is different from training a discriminator as part of standard GAN training, because we are training from a random initialization, not from the previous version of the discriminator. Our experience was that the LS-DCGAN was the simplest and most stable model to train.

We visualized a 2D subspace of the loss surface of the GANs in Supp. Fig. 29. Here, we took the parameters of three trained models (corresponding to the red vertices in the figure) and applied barycentric interpolation with respect to the three parameter vectors (see (Im et al., 2016c) for details). DCGAN surfaces have much sharper slopes when compared to the LS-DCGAN and W-DCGAN, and LS-DCGAN has the gentlest surfaces. This geometric view is consistent with our finding that LS-DCGAN is the easiest and most stable to train.

<--- Page Split --->

Table 6: The fraction of pairs for which each metric agrees with human scores. We use colored asterisks to represent significant differences (two-sided Fisher's test, \(p < .05\)). E.g., \* in the IW row indicates that IW and IS are significantly different.
<table><tr><td>Metric</td><td>Fraction</td><td>[Agreed/Total] samples</td><td>p &lt; .05?</td></tr><tr><td>IW</td><td>0.977</td><td>128 / 131</td><td>*</td></tr><tr><td>LS</td><td>0.931</td><td>122 / 131</td><td>*</td></tr><tr><td>IS</td><td>0.863</td><td>113 / 131</td><td>*</td></tr><tr><td>MMD</td><td>0.832</td><td>109 / 131</td><td>*</td></tr></table>

#### 4.2.1 COMPARISON TO HUMAN PERCEPTION

We compared the LS, IW, MMD, and IS metrics to human perception for the CIFAR10 dataset. To accomplish this, we asked five volunteers to choose which of two sets of 100 samples, each generated using a different generator, looked more realistic. Before surveying, the volunteers were trained to choose between real samples from CIFAR10 and samples generated by a GAN. Supp. Fig. 14 displays the user interface shown to the participants, and Supp. Fig. 15 shows the fraction of labels that the volunteers agreed upon. Table 6 presents the fraction of pairs for which each metric agrees with the humans (higher is better). IW has a slight edge over LS, and both outperform IS and MMD. In Figure 3, we show examples on which all humans agree but a metric disagrees with human perception. All such examples are shown in Supp. Fig. 21-24.

![](images/7_0.jpg) <center>Figure 3: Pairs of generated image sets for which human perception and metrics disagree. Here, we selected one such example for each metric for which the difference in that metric's scores was high. For each pair, humans perceived the set of images on the left to be more realistic than those on the right, while the metric predicted the opposite. Below each pair of images, we indicate the metric's score for the left and right image sets. </center>
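The significance comparisons reported in Table 6 can be reproduced from the agreement counts alone; a minimal sketch (our own, using the counts from the table):

```python
from scipy.stats import fisher_exact

# Each metric summarized as (agreed, disagreed) counts out of 131 pairs.
counts = {"IW": (128, 3), "LS": (122, 9), "IS": (113, 18), "MMD": (109, 22)}

def compare(m1, m2):
    # Two-sided Fisher's exact test on the 2x2 contingency table.
    _, p = fisher_exact([counts[m1], counts[m2]], alternative="two-sided")
    return p

print("IW vs IS :", compare("IW", "IS"))   # significant (p < .05)
print("IW vs LS :", compare("IW", "LS"))
```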
### 4.3 SENSITIVITY ANALYSIS

#### 4.3.1 PERFORMANCE CHANGE WITH RESPECT TO THE SIZE OF THE NETWORK

Several works have demonstrated an improvement in performance from enlarging deep network architectures (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015; Huang et al., 2017). Here, we investigate performance changes with respect to the width and depth of the networks. First, we trained three GANs with varying numbers of feature maps, as shown in Table 7 (a-d). Note that we double the number of feature maps in Table 7 for both the discriminators and generators.

<--- Page Split --->

![](images/8_0.jpg) <center>Figure 4: LS score evaluation of W-DCGAN & LS-DCGAN w.r.t number of feature maps. </center>

![](images/8_1.jpg) <center>Figure 5: W-DCGAN trained with different numbers of feature maps. </center>

Table 8: LS-DCGAN and W-DCGAN scores on CIFAR10 with respect to different generator and discriminator capacity.

<table><tr><td>Model</td><td>Architecture (Table 7)</td><td>MMD (Test vs. Samples)</td><td>IW</td><td>LS</td><td>IS (ResNet)</td></tr><tr><td rowspan="2">W-DCGAN</td><td>(e)</td><td>0.1057 ± 0.0798</td><td>450.17 ± 25.74</td><td>-0.0079 ± 0.0009</td><td>6.403 ± 0.839</td></tr><tr><td>(f)</td><td>0.2176 ± 0.2706</td><td>16.52 ± 15.63</td><td>-0.0636 ± 0.0101</td><td>6.266 ± 0.055</td></tr><tr><td rowspan="2">LS-DCGAN</td><td>(e)</td><td>0.1390 ± 0.1525</td><td>343.23 ± 47.55</td><td>-0.0092 ± 0.0007</td><td>5.751 ± 0.511</td></tr><tr><td>(f)</td><td>0.0054 ± 0.0022</td><td>12.75 ± 4.29</td><td>-0.0372 ± 0.0068</td><td>6.600 ± 0.061</td></tr></table>

In Figure 4, the LS score improves logarithmically as the number of feature maps is doubled. Similar behaviour is observed for the other metrics as well (see S.M. Figure 16).

We then analyzed the relative importance of size in the discriminative and generative networks. We considered two extreme settings, choosing a small number of feature maps for the generator and a large number for the discriminator, and vice versa (see labels (e) and (f) in Table 7); results are shown in Table 8. For LS-DCGAN, a large number of feature maps in the discriminator gives a better score than a large number of feature maps in the generator. This can also be qualitatively verified by looking at the samples from architectures (a), (e), (f), and (d) in Figure 6. For W-DCGAN, we observe agreement between the LS and IW metrics and conflict with MMD and IS. When we look at the samples from the W-DCGAN in Figure 5, it is clear that the model with a larger number of feature maps in the discriminator should get a better score; this is another example of false intuition propagated by MMD and IS. One interesting observation is that when we compare the scores and samples from architectures (a) and (e) in Table 7, architecture (a) is much better than (e) (see Figure 6). This demonstrates that having a large generator and small discriminator is worse than having a small architecture for both networks. Overall, we found that having a larger generator than discriminator does not give good results, and that it is more desirable to have a larger discriminator than generator. Similar results were also observed for MNIST, as shown in S.M. Figure 20. This result somewhat supports the theoretical result of Arora et al. (2017), where the generator capacity needs to be modulated in order for an approximately pure equilibrium to exist for GANs.

Table 7: Reference for the different architectures explored in the experiments.

<table><tr><td rowspan="2">Label</td><td colspan="2">Feature Maps</td></tr><tr><td>Discriminator</td><td>Generator</td></tr><tr><td>(a)</td><td>[3, 16, 32, 64]</td><td>[128, 64, 32, 3]</td></tr><tr><td>(b)</td><td>[3, 32, 64, 128]</td><td>[256, 128, 64, 3]</td></tr><tr><td>(c)</td><td>[3, 64, 128, 256]</td><td>[512, 256, 128, 3]</td></tr><tr><td>(d)</td><td>[3, 128, 256, 512]</td><td>[1024, 512, 256, 3]</td></tr><tr><td>(e)</td><td>[3, 16, 32, 64]</td><td>[1024, 512, 256, 3]</td></tr><tr><td>(f)</td><td>[3, 128, 256, 512]</td><td>[128, 64, 32, 3]</td></tr></table>

Lastly, we experimented with how performance changes with respect to the dimension of the noise vector. Sample generation starts by transforming a noise vector into a meaningful image.
It is unclear how the noise dimension affects the ability of the generator to produce meaningful images. Che et al. (2017) observed that a 100-d noise vector preserves modes better than a 200-d noise vector for DCGAN.

<--- Page Split --->

![](images/9_0.jpg) <center>Figure 6: Samples from different LS-DCGAN architectures. </center>

Our experiments show that this depends on the model. Given the fixed-size architecture (d) from Table 7, we observed the performance of LS-DCGAN and W-DCGAN while varying the size of the noise vector \(z\). Table 9 illustrates that LS-DCGAN gives its best score with a noise dimension of 50, and W-DCGAN gives its best score with a noise dimension of 150, for both IW and LS. The outcome for LS-DCGAN is consistent with the result in (Che et al., 2017). It is possible that this occurs because both DCGAN and LS-DCGAN fall into the category of \(f\)-divergences, whereas W-DCGAN behaves differently because its metric falls under a different category, the Integral Probability Metrics.

Table 9: LS-DCGAN and W-DCGAN scores on CIFAR10 with respect to the dimensionality of the noise vector.

<table><tr><td rowspan="2">|z|</td><td colspan="2">LS-DCGAN</td><td colspan="2">W-DCGAN</td></tr><tr><td>IW</td><td>LS</td><td>IW</td><td>LS</td></tr><tr><td>50</td><td>3.9010 ± 0.60</td><td>-0.0547 ± 0.0059</td><td>6.0948 ± 3.21</td><td>-0.0532 ± 0.0069</td></tr><tr><td>100</td><td>5.6588 ± 1.47</td><td>-0.0511 ± 0.0065</td><td>5.7358 ± 3.25</td><td>-0.0528 ± 0.0051</td></tr><tr><td>150</td><td>5.8350 ± 0.80</td><td>-0.0434 ± 0.0036</td><td>3.6945 ± 1.33</td><td>-0.0521 ± 0.0050</td></tr></table>

![](images/9_1.jpg) <center>Figure 7: LS score evaluation with respect to a varying number of discriminator and generator updates on DCGAN, W-DCGAN, and LS-DCGAN. </center>

#### 4.3.2 PERFORMANCE CHANGE WITH RESPECT TO THE RATIO OF UPDATES BETWEEN THE GENERATOR AND DISCRIMINATOR

In practice, we alternate between updating the discriminator and the generator, yet this is not guaranteed to give the same result as the solution to the min-max problem in Equation 2. Hence, the update ratio can influence the performance of GANs. We experimented with three different update ratios, \(5:1\), \(1:1\), and \(1:5\), with respect to the discriminator and generator updates. We applied these ratios to both the MNIST and CIFAR10 datasets on all models.

<--- Page Split --->

Figure 7 presents the LS scores on both MNIST and CIFAR10; this result is consistent with the IW metric as well (see S.M. Figure 25). However, we did not find any one update ratio superior to the others across the two datasets. For CIFAR10, the \(1:1\) update ratio worked best for all models, while for MNIST, different ratios worked better for different models. Hence, we conclude that the update ratio needs to be tuned for each model and dataset. The corresponding samples from the models trained with different update ratios are shown in S.M. Figure 27.

#### 4.3.3 PERFORMANCE WITH RESPECT TO THE AMOUNT OF AVAILABLE TRAINING DATA

In practice, DCGANs are known to be unstable, and the generator tends to suffer as the discriminator gets better, due to disjoint support between the data and generator distributions (Goodfellow et al., 2014; Arjovsky & Bottou, 2017). W-DCGAN and LS-DCGAN offer alternative ways of solving this problem.
If the model suffers from disjoint support, having more training examples will not help; conversely, if the model does not suffer from this problem, having more training examples could potentially help. Here, we explore the sensitivity of the three different kinds of GANs with respect to the number of training examples. We trained GANs with 10,000, 20,000, 30,000, 40,000, and 45,000 examples on CIFAR10. Figure 8 shows that the LS score curve of DCGAN grows quite slowly when compared to W-DCGAN and LS-DCGAN.

![](images/10_0.jpg) <center>Figure 8: LS score evaluation on W-DCGAN & LS-DCGAN w.r.t number of data points. </center>

The three GANs have relatively similar losses when trained with 10,000 training examples. However, DCGAN gained only \(0.0124 \pm 0.00127\) when going from 10,000 to 40,000 training examples, whereas the performance of W-DCGAN and LS-DCGAN improved by \(0.03016 \pm 0.00469\) and \(0.0444 \pm 0.0033\), respectively. Thus, we empirically observe that W-DCGAN and LS-DCGAN improve faster than DCGAN as the number of training examples grows.

## 5 CONCLUSION

In this paper, we proposed using four well-known distance functions as evaluation metrics, and empirically investigated the DCGAN, W-DCGAN, and LS-DCGAN families under these metrics. Previously, these models were compared based on visual assessment of sample quality and difficulty of training. In our experiments, we showed that there are performance differences between the families in terms of average scores, but that some differences are not statistically significant. Moreover, we thoroughly analyzed the performance of GANs under different hyper-parameter settings. There are still several types of GANs that need to be evaluated, such as GRAN (Im et al., 2016a), W-DCGAN GP (Gulrajani et al., 2017), BEGAN (Berthelot et al., 2017), MMD GAN (Li et al., 2017), and CramerGAN (Bellemare et al., 2017). We hope to evaluate all of these models under this framework and thoroughly analyze them in the future. Moreover, there has been an investigation into ensemble approaches to GANs, such as Generative Adversarial Parallelization (Im et al., 2016b). Ensemble approaches have been empirically shown to work well in many domains of research, so it would be interesting to find out whether ensembles can also help in min-max problems. Alternatively, we can also try to evaluate other log-likelihood-based models like NVIL (Mnih & Gregor, 2014), VAE (Kingma & Welling, 2014), DVAE (Im et al., 2015), DRAW (Gregor et al., 2015), RBMs (Hinton et al., 2006; Salakhutdinov & Hinton, 2009), and NICE (Dinh et al., 2014).

Model evaluation is an important and complex topic. Model selection, model design, and even research direction can change depending on the evaluation metric. Thus, we need to continuously explore different metrics and rigorously evaluate new models.

<--- Page Split --->

## REFERENCES

pyTorch-generative-model-collections. https://github.com/znxlwm/pytorch-generative-model-collections. Accessed: 2018-01-30.

Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. arXiv preprint arXiv:1701.04862, 2017.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.

Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets. arXiv preprint arXiv:1703.00573, 2017.
Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, and Rémi Munos. The Cramér distance as a solution to biased Wasserstein gradients. arXiv preprint arXiv:1705.10743, 2017.

David Berthelot, Thomas Schumm, and Luke Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.

Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. arXiv preprint arXiv:1612.02136, 2017.

Ivo Danihelka, Balaji Lakshminarayanan, Benigno Uria, Daan Wierstra, and Peter Dayan. Comparison of maximum likelihood and GAN-based training of Real NVPs. arXiv preprint arXiv:1705.05263, 2017.

Laurent Dinh, David Krueger, and Yoshua Bengio. NICE: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.

Ian Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016.

Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of the Neural Information Processing Systems (NIPS), 2014.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the International Conference on Machine Learning (ICML), 2015.

Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723-773, 2012.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. arXiv preprint arXiv:1706.08500, 2017.

Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527-1554, 2006.

Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), 2017.

Daniel Jiwoong Im, Sungjin Ahn, Roland Memisevic, and Yoshua Bengio. Denoising criterion for variational auto-encoding framework. arXiv preprint arXiv:1509.00519, 2015.

Daniel Jiwoong Im, Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016a.

Daniel Jiwoong Im, He Ma, Dongjoo Kim, and Graham Taylor. Generative adversarial parallelization. arXiv preprint arXiv:1612.04021, 2016b.

Daniel Jiwoong Im, Michael Tao, and Kristin Branson. An empirical analysis of the optimization of deep network loss surfaces. arXiv preprint arXiv:1612.04010, 2016c.

Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the Neural Information Processing Systems (NIPS), 2014.

Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. On convergence and stability of GANs.
arXiv preprint arXiv:1705.07215, 2017.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Proceedings of the Neural Information Processing Systems (NIPS), 2012.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabás Póczos. MMD GAN: Towards deeper understanding of moment matching network. arXiv preprint arXiv:1705.08584, 2017.

David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests. arXiv preprint arXiv:1610.06545, 2016.

Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. arXiv preprint arXiv:1611.04076, 2017.

Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. The numerics of GANs. In Proceedings of the Neural Information Processing Systems (NIPS), 2017.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In Proceedings of the International Conference on Machine Learning (ICML), 2014.

Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429-443, 1997.

XuanLong Nguyen, Martin J. Wainwright, and Michael I. Jordan. Estimating divergence functionals and the likelihood ratio by penalized convex risk minimization. In Proceedings of the Neural Information Processing Systems (NIPS), 2008.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. arXiv preprint arXiv:1606.00709, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In Proceedings of the International Conference on Machine Learning (ICML), 2009.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Bharath Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert Lanckriet. On integral probability metrics, phi-divergences and binary classification. arXiv preprint arXiv:0901.2698, 2009.

Dougal J. Sutherland, Hsiao-Yu Tung, Heiko Strathmann, Soumyajit De, Aaditya Ramdas, Alex Smola, and Arthur Gretton. Generative models and model criticism via optimized maximum mean discrepancy. arXiv preprint arXiv:1611.04488, 2017.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.

Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.

Cédric Villani. Optimal Transport: Old and New. Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 2009.

Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger Grosse. On the quantitative analysis of decoder-based generative models. In International Conference on Learning Representations, 2016.
## APPENDIX

## RELATIONSHIP BETWEEN METRICS AND BINARY CLASSIFICATION

In this paper, we considered four distance metrics belonging to two classes of metrics, \(\phi\)-divergences and IPMs. Sriperumbudur et al. (2009) have shown that, when the discriminant function is restricted to a suitable class \(F\), the optimal risk of a binary classification problem whose class-conditional distributions are \(P\) and \(Q\) coincides with an IPM (Theorem 17 of (Sriperumbudur et al., 2009)). Let the optimal risk function be:

\[R(L,F) = \inf_{f\in F}\int L(y,f(x))\,d\mu(x,y), \quad (11)\]

where \(F\) is the set of discriminant functions (classifiers), \(y \in \{0, 1\}\), and \(L\) is the loss function. By the following derivation, we can see that the optimal risk function becomes an IPM:

\[\begin{array}{rl} R(L,F) &= \inf_{f\in F}\int L(y,f(x))\,d\mu(x,y)\\ &= \inf_{f\in F}\left[\epsilon\int L(1,f(x))\,dp(x) + (1-\epsilon)\int L(0,f(x))\,dq(x)\right]\\ &= \inf_{f\in F}\left[-\int f\,dp(x) + \int f\,dq(x)\right]\\ &= -\sup_{f\in F}\left[\int f\,dp(x) - \int f\,dq(x)\right] = -\mathrm{IPM}(P,Q;F), \end{array} \quad (14)\]

where \(\epsilon\) is the prior probability of class 1, and we choose \(L(1, f(x)) = -f(x)/\epsilon\) and \(L(0, f(x)) = f(x)/(1-\epsilon)\). The second equality is derived by separating the loss for class 1 and class 0. The third equality follows from the way we chose \(L(1,f(x))\) and \(L(0,f(x))\). The last equality follows from the fact that \(F\) is symmetric around zero (\(f \in F \Rightarrow -f \in F\)).

Hence, by appropriately choosing \(L\), the MMD and Wasserstein distances can be understood as the optimal \(L\)-risk associated with a binary classifier over a specific function class \(F\): the Wasserstein and MMD distances are equivalent to the optimal risk with 1-Lipschitz classifiers and RKHS classifiers of unit norm, respectively.

![](images/15_0.jpg) <center>Figure 9: GAN Topology for MNIST. </center>

![](images/15_1.jpg) <center>Figure 10: GAN Topology for CIFAR10. </center>

![](images/15_2.jpg) <center>Figure 11: MNIST & FashionMNIST Samples </center>

Table 10: Hyper-parameters used for different experiments.

<table><tr><td rowspan="2"></td><td colspan="5">GAN training</td><td colspan="3">Critic Training (test time)</td></tr><tr><td>Model</td><td>Disc. Lr.</td><td>Gen. Lr.</td><td>Ratio</td><td>Cr. Lr.</td><td>Cr.
Kern</td><td>Num Epoch</td><td></td></tr><tr><td rowspan="3">Table 2</td><td>DCGAN</td><td>0.0002</td><td>0.0004</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>W-DCGAN</td><td>0.0004</td><td>0.0008</td><td>1:1</td><td>0.0001</td><td>[1, 128, 32]</td><td>25</td><td></td></tr><tr><td>LS-DCGAN</td><td>0.0004</td><td>0.0008</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="3">Table 3</td><td>DCGAN</td><td>0.0002</td><td>0.0001</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>W-DCGAN</td><td>0.0008</td><td>0.0004</td><td>1:1</td><td>0.0002</td><td>[3, 128, 256, 512]</td><td>11</td><td></td></tr><tr><td>LS-DCGAN</td><td>0.0008</td><td>0.0004</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="3">Table 4</td><td>DCGAN</td><td>0.00005</td><td>0.0001</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>W-DCGAN</td><td>0.0002</td><td>0.0004</td><td>1:2</td><td>0.0002</td><td>[3, 128, 256, 512, 1024]</td><td>4</td><td></td></tr><tr><td>LS-DCGAN</td><td>0.0002</td><td>0.0004</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>Table 5</td><td>ALL GANs</td><td>0.0002</td><td>0.0002</td><td>1:1</td><td>0.0002</td><td>[1, 64, 128]</td><td>25</td><td></td></tr><tr><td rowspan="3">Table 8</td><td>DCGAN</td><td>0.0002</td><td>0.0001</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>W-DCGAN</td><td>0.0002</td><td>0.0001</td><td>1:1</td><td>0.0002</td><td>[3, 128, 256, 512]</td><td>11</td><td></td></tr><tr><td>LS-DCGAN</td><td>0.0008</td><td>0.0004</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>Table 11</td><td>ALL GANs</td><td>0.0002</td><td>0.0002</td><td>1:1</td><td>0.0002</td><td>[1, 64, 128]</td><td>25</td><td></td></tr><tr><td>Table 12</td><td>ALL GANs</td><td>0.0002</td><td>0.0002</td><td>1:1</td><td>0.0002</td><td>[1, 64, 128]</td><td>25</td><td></td></tr><tr><td rowspan="5">Figure 7b</td><td rowspan="5">DCGAN</td><td rowspan="5">0.0001</td><td rowspan="5">0.00005</td><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:5</td><td></td><td></td><td></td><td></td></tr><tr><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="5">Figure 26</td><td rowspan="5">DCGAN</td><td rowspan="5">0.0001</td><td rowspan="5">0.00005</td><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:5</td><td></td><td></td><td></td><td></td></tr><tr><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="5">Figure 16</td><td rowspan="5">DCGAN</td><td rowspan="5">0.0002</td><td rowspan="5">0.0001</td><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:5</td><td></td><td></td><td></td><td></td></tr><tr><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="3">Figure 28</td><td rowspan="3">DCGAN</td><td rowspan="3">0.0002</td><td rowspan="3">0.0001</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:2</td><td></td><td></td><td></td><td></td></tr></table> ## MORE EXPERIMENTS We trained two critics on training data and validation data, respectively, and evaluated on test data from both critics. 
We trained six GANs (DCGAN, LS-DCGAN, W-DCGAN GP, DRAGAN, BEGAN, EBGAN) on MNIST and FashionMNIST, using 50,000 training examples each. At test time, we used 10,000 training and 10,000 validation examples for training the critics, and evaluated on 10,000 test examples. Here, we present the test scores from the critics trained on the training and validation data; the results are shown in Tables 11 and 12. Note that the IW and FID evaluations of these models are also reported in the paper.

For FashionMNIST, we find that the test scores from critics trained on the training data and on the validation data are very close. Hence, we do not see any indication of overfitting. On the other hand, for the MNIST dataset there are gaps between the two scores, and the critics trained on the validation set give better performance than the ones trained on the training set.

![](images/17_0.jpg) <center>Filter #: [3, 64, 128, 256] Filter #: [3, 128, 256, 512] Filter #: [3, 256, 512, 1024] Filter #: [3, 320, 640, 1280] </center>

![](images/17_1.jpg) <center>Figure 12: GAN evaluation using different architectures for the critic (number of feature maps in each layer of the CNN critic). The figures above are evaluated under the negative least-squares loss; Figure 13 is evaluated under the Wasserstein distance. </center>

Figure 13: GAN evaluation using different critic architectures (number of filters in the critic's convolutional network). Panels (a, b, c, d) are evaluated under the Wasserstein distance.

Table 11: Evaluation of GANs on the MNIST dataset. Test score comparison between the two critics trained on the training and validation datasets.

<table><tr><td rowspan="2">Model</td><td colspan="2">LS Score</td><td colspan="2">IW Score</td></tr><tr><td>Trained on training data</td><td>Trained on validation data</td><td>Trained on training data</td><td>Trained on validation data</td></tr><tr><td>DCGAN</td><td>-0.312 ± 0.010</td><td>-0.4408 ± 0.0201</td><td>0.300 ± 0.0103</td><td>0.259 ± 0.0083</td></tr><tr><td>EBGAN</td><td>-3.386e-7 ± 0.186e-7</td><td>-3.826e-7 ± 0.282e-7</td><td>0.999 ± 0.0001</td><td>0.999 ± 0.0001</td></tr><tr><td>WGAN GP</td><td>-0.196 ± 0.006</td><td>-0.307 ± 0.0381</td><td>0.705 ± 0.0202</td><td>0.635 ± 0.0270</td></tr><tr><td>LSGAN</td><td>-0.232 ± 0.0104</td><td>-0.352 ± 0.0143</td><td>0.232 ± 0.0156</td><td>0.195 ± 0.0103</td></tr><tr><td>BEGAN</td><td>-0.081 ± 0.016</td><td>-0.140 ± 0.0329</td><td>0.888 ± 0.0097</td><td>0.858 ± 0.0131</td></tr><tr><td>DRAGAN</td><td>-0.318 ± 0.012</td><td>-0.384 ± 0.0139</td><td>0.266 ± 0.0060</td><td>0.235 ± 0.0079</td></tr></table>

Table 12: Evaluation of GANs on the Fashion-MNIST dataset. Test score comparison between the two critics trained on the training and validation datasets.
<table><tr><td rowspan="2">Model</td><td colspan="2">LS Score</td><td colspan="2">IW Score</td></tr><tr><td>Trained on training data</td><td>Trained on validation data</td><td>Trained on training data</td><td>Trained on validation data</td></tr><tr><td>DCGAN</td><td>-0.1638 ± 0.010</td><td>-0.1635 ± 0.0006</td><td>0.408 ± 0.0135</td><td>0.4118 ± 0.0107</td></tr><tr><td>EBGAN</td><td>-0.0037 ± 0.0009</td><td>-0.0048 ± 0.0023</td><td>0.415 ± 0.0067</td><td>0.4247 ± 0.0098</td></tr><tr><td>WGAN GP</td><td>-0.00175 ± 0.000876</td><td>-0.00438 ± 0.0000862</td><td>0.921 ± 0.0061</td><td>0.9924 ± 0.00089</td></tr><tr><td>LSGAN</td><td>-0.135 ± 0.0046</td><td>-0.136 ± 0.0074</td><td>0.631 ± 0.0106</td><td>0.6236 ± 0.0200</td></tr><tr><td>BEGAN</td><td>-0.1133 ± 0.042</td><td>-0.0893 ± 0.0095</td><td>0.429 ± 0.0148</td><td>0.4293 ± 0.0213</td></tr><tr><td>DRAGAN</td><td>-0.1638 ± 0.015</td><td>-0.1645 ± 0.0151</td><td>0.641 ± 0.0304</td><td>0.6311 ± 0.0547</td></tr></table>

![](images/18_0.jpg) <center>Figure 14: Participants were trained by choosing between random samples generated by GANs and samples from the data distribution. They received a positive reward if they selected the data samples and a negative reward if they selected the samples from the model. After enough training, they chose the better group of samples out of two randomly selected sets of samples. </center>

![](images/18_1.jpg) <center>Figure 15: The fraction of labels that agree for each pair, depending on the number of labels for each pair, presented as a histogram. By definition, if there is only one participant, that participant must agree with themselves. </center>

![](images/19_0.jpg) <center>Figure 16: Performance of W-DCGAN & LS-DCGAN with respect to the number of filters. </center>

![](images/19_1.jpg) <center>Figure 17: W-DCGAN trained with different numbers of filters. </center>

![](images/20_0.jpg) <center>Figure 18: Samples from different architectures of LS-DCGAN. </center>

![](images/20_1.jpg) <center>Figure 19: Samples from different architectures of W-DCGAN. </center>

![](images/20_2.jpg) <center>Figure 20: The performance of GANs trained with different numbers of feature maps. </center>

![](images/21_0.jpg) <center>Figure 21: All pairs of generated image sets for which human perception and IW disagree, as in Figure 3. </center>

![](images/22_0.jpg) <center>Figure 22: All pairs of generated image sets for which human perception and LS disagree, as in Figure 3. </center>

![](images/23_0.jpg) <center>Figure 23: All pairs of generated image sets for which human perception and IS disagree, as in Figure 3. </center>

![](images/24_0.jpg) <center>Figure 24: All pairs of generated image sets for which human perception and MMD disagree, as in Figure 3. </center>

![](images/25_0.jpg) <center>Figure 25: Performance of DCGAN, W-DCGAN, and LS-DCGAN trained with varying numbers of discriminator and generator updates. These models were trained on the CIFAR10 dataset and evaluated with the IW and LS metrics. </center>

![](images/25_1.jpg) <center>Figure 26: Performance of DCGAN, W-DCGAN, and LS-DCGAN trained with varying numbers of discriminator and generator updates. These models were trained on the MNIST dataset and evaluated with the GC, LS, and IW metrics. </center>

![](images/25_2.jpg) <center>Figure 27: Samples at varying update ratios. </center>
![](images/26_0.jpg) <center>Figure 28: Performance of W-DCGAN & LS-DCGAN with respect to the number of data points. </center>

![](images/26_1.jpg) <center>Figure 29: Interpolation between the three final GAN parameters trained using different random seeds on CIFAR10. Loss surface values are amplified 10 times in order to illustrate the separation of the terrains. Local zig-zag patterns are minor rendering artifacts. </center>

![](images/26_2.jpg) <center>Figure 30: Scores from training GANs on the LSUN Bedroom dataset. </center>

![](images/26_3.jpg) <center>Figure 31: Training curves of the critics, showing that they converge. The IW distance curves in (a) increase because we used a linear output unit for the critic network (a design choice); they can be bounded simply by adding a sigmoid at the output of the critic network. </center>

![](images/27_0.jpg) <center>(a) GAN samples from the LSUN Bedroom dataset. </center>

![](images/28_0.jpg) <center>(a) LS-DCGAN samples from the LSUN Bedroom dataset. </center>

![](images/29_0.jpg) <center>(a) W-DCGAN samples from the LSUN Bedroom dataset. </center>
## ABSTRACT

Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional input data, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. However, we currently lack quantitative methods for model assessment; because of this, while many GAN variants are being proposed, we have relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics and, interestingly, the test-time metrics do not favour networks that use the same training-time criterion. We also compare the proposed metrics to human perceptual scores.

## 1 INTRODUCTION

Generative adversarial networks (GANs) aim to approximate a data distribution \(P\) using a parameterized model distribution \(Q\). They achieve this by jointly optimizing generative and discriminative networks (Goodfellow et al., 2014). GANs are end-to-end differentiable: samples from the generative network are propagated forward to a discriminative network, and error signals are then propagated backwards from the discriminative network to the generative network. The discriminative network is often viewed as a learned, adaptive loss function for the generative network. GANs have achieved state-of-the-art results for a number of applications (Goodfellow, 2016), producing more realistic, sharper samples than other popular generative models, such as variational autoencoders (Kingma & Welling, 2014).

Because of their success, many GAN frameworks have been proposed. However, it has been difficult to compare these algorithms and understand their strengths and weaknesses, because we currently lack quantitative methods for assessing the learned generators. In this work, we propose new metrics for measuring how realistic the samples generated by GANs are. These criteria are based on a formulation of divergence between the distributions \(P\) and \(Q\) (Nowozin et al., 2016; Sriperumbudur et al., 2009):

\[\inf_{Q}J(Q) = \inf_{Q}\sup_{f\in \mathcal{F}}\mathbb{E}_{P(x)}\left[\mu \left(f(x)\right)\right] - \mathbb{E}_{Q(x)}\left[\nu\left(f(x)\right)\right] \quad (1)\]

Here, different choices of \(\mu\), \(\nu\), and \(\mathcal{F}\) correspond to different \(f\)-divergences (Nowozin et al., 2016) or different integral probability metrics (IPMs) (Sriperumbudur et al., 2009). Importantly, \(J(Q)\) can be estimated using samples from \(P\) and \(Q\), and does not require us to be able to evaluate \(P(x)\) or \(Q(x)\) for samples \(x\). Instead, evaluating \(J(Q)\) involves finding the function \(f \in \mathcal{F}\) that is maximally different with respect to \(P\) and \(Q\).
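As a concrete instance of Equation 1 (a standard identity from Goodfellow et al. (2014), restated here for illustration), taking \(\mu(f) = \log f\) and \(\nu(f) = -\log(1-f)\), with \(f\) ranging over functions into \((0,1)\), the inner supremum is attained at \(f^{*}(x) = P(x)/(P(x)+Q(x))\), and

\[\sup_{f}\ \mathbb{E}_{P(x)}\left[\log f(x)\right] + \mathbb{E}_{Q(x)}\left[\log \left(1 - f(x)\right)\right] = 2\,\mathrm{JSD}(P\,\|\,Q) - \log 4,\]

so this choice of \(J(Q)\) is minimized exactly when \(Q = P\).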
This measure of divergence between the distributions \(P\) and \(Q\) is related to the GAN criterion if we restrict the function class \(\mathcal{F}\) to neural network functions parameterized by the vector \(\phi\), and the class of approximating distributions to those of neural network generators \(G_{\theta}\) parameterized by the vector \(\theta\), allowing formulation as a min-max problem:

\[\min_{\theta}J(\theta) = \min_{\theta}\max_{\phi}\mathbb{E}_{P(x)}\left[\mu \left(D_{\phi}(x)\right)\right] - \mathbb{E}_{Q_{\theta}(x)}\left[\nu\left(D_{\phi}(x)\right)\right], \quad (2)\]

Table 1: The \(\mu\) and \(\nu\) functions defining the GAN metrics proposed in this paper. \(M\) is some real number, \(\mathcal{H}\) is a Reproducing Kernel Hilbert Space (RKHS), and \(\| \cdot \|_{L}\) is the Lipschitz constant. For the LS-DCGAN, we used \(b = 1\) and \(a = 0\) (Mao et al., 2017).

<table><tr><td>Metric</td><td>μ</td><td>ν</td><td>Function Class</td></tr><tr><td>GAN (GC)</td><td>log f</td><td>-log(1-f)</td><td>X→R+, ∃M∈R: |f(x)|≤M</td></tr><tr><td>Least-Squares GAN (LS)</td><td>-(f-b)²</td><td>(f-a)²</td><td>X→R, ∃M∈R: |f(x)|≤M</td></tr><tr><td>MMD</td><td>f</td><td>f</td><td>f: ||f||ℋ≤1</td></tr><tr><td>Wasserstein (IW)</td><td>f</td><td>f</td><td>f: ||f||L≤1</td></tr></table>

In this formulation, \(Q_{\theta}\) corresponds to the generator network's distribution and \(D_{\phi}\) corresponds to the discriminator network (see (Nowozin et al., 2016) for details). We propose using \(J(\theta)\) to evaluate the performance of the generator network \(G_{\theta}\) for various choices of \(\mu\) and \(\nu\), corresponding to different \(f\)-divergences or IPMs between the distributions \(P\) and \(Q_{\theta}\) that have been successfully used for GAN training. Our proposed metrics differ from most existing metrics in that they are adaptive, involving a maximization over discriminative networks. We compare four metrics, corresponding to the criteria of the original GAN (GC) (Goodfellow, 2016), the Least-Squares GAN (LS) (Mao et al., 2017), the Wasserstein GAN (IW) (Arjovsky et al., 2017), and the Maximum Mean Discrepancy GAN (MMD) (Li et al., 2017). The choices of \(\mu\), \(\nu\), and \(\mathcal{F}\) for these metrics are shown in Table 1. Our method can easily be extended to other \(f\)-divergences or IPMs.

To compare these and previous metrics for evaluating GANs, we performed many experiments, training and comparing multiple types of GANs with multiple architectures on multiple data sets. We qualitatively and quantitatively compared these metrics to human perception, and found that our proposed metrics better reflect human perception. We also show that the rankings produced using our proposed metrics are consistent across metrics, and thus robust to the exact choices of the functions \(\mu\) and \(\nu\) in Equation 2.

We used the proposed metrics to quantitatively analyze three different families of GANs, each of which corresponds to one of the proposed metrics: Deep Convolutional Generative Adversarial Networks (DCGAN) (Radford et al., 2015), Least-Squares GANs (LS-DCGAN), and Wasserstein GANs (W-DCGAN). Interestingly, we found that the different proposed metrics still agreed on the best GAN framework for each dataset. Thus, even though the W-DCGAN was trained with the IW criterion, on MNIST the LS-DCGAN still outperformed it under the IW criterion.
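For illustration only (this mapping is not taken from the paper's released code), the \((\mu, \nu)\) pairs of Table 1 that use a trained critic can be written as plain functions feeding a single Monte Carlo estimator of \(\mathbb{E}_{P}[\mu(f(x))] - \mathbb{E}_{Q}[\nu(f(x))]\); for LS we use \(b = 1\) and \(a = 0\) as in the text:

```python
import math

# Hedged sketch: (mu, nu) pairs from Table 1. For GC the critic output f
# is assumed to lie in (0, 1) (e.g., via a sigmoid); for IW the critic is
# assumed to be constrained to be 1-Lipschitz. MMD is omitted here since
# it is kernel-based and needs no critic.
MU_NU = {
    "GC": (lambda f: math.log(f),     lambda f: -math.log(1.0 - f)),
    "LS": (lambda f: -(f - 1.0) ** 2, lambda f: f ** 2),  # -(f-b)^2, (f-a)^2
    "IW": (lambda f: f,               lambda f: f),
}

def objective(mu, nu, f_real, f_fake):
    """Monte Carlo estimate of E_P[mu(f(x))] - E_Q[nu(f(x))] from lists of
    critic outputs on real and generated samples."""
    return (sum(mu(v) for v in f_real) / len(f_real)
            - sum(nu(v) for v in f_fake) / len(f_fake))
```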
Our analysis also included a sensitivity analysis with respect to various factors, such as the architecture size, noise dimension, update ratio between discriminator and generator, and number of data points. Our empirical results show that: i) the larger the GAN architecture, the better the results; ii) having a generator network larger than the discriminator network does not yield good results; iii) the best ratio between discriminator and generator updates depends on the data set; and iv) the performance of W-DCGAN and LS-DCGAN increases much faster than that of DCGAN as the number of training examples grows. These metrics thus allow the hyper-parameters and architectures of GANs to be tuned in a principled, quantitative way.

## 2 RELATED WORK

GANs can be evaluated using manual annotations, but this is time-consuming and difficult to reproduce. Several automatically computable metrics have been proposed for evaluating the performance of general probabilistic models, and GANs in particular. We review some of them here, and compare our proposed metrics to them in our experiments.

Many previous probabilistic generative models were evaluated based on the pointwise likelihood of the test data, the criterion also used during training. While GANs can be used to generate samples from the approximate distribution, their likelihood on test samples cannot be evaluated without simplifying assumptions. As discussed in (Theis et al., 2015), likelihood often does not provide good rankings of how realistic samples look, which is the main goal of GANs. We evaluated the efficacy of the log-likelihood of the test data, as estimated using Annealed Importance Sampling (AIS) (Wu et al., 2016). AIS estimates the likelihood of a test sample \(x\) by considering many intermediate distributions, defined by taking a weighted geometric mean between the prior (input) distribution \(p(z)\) and an approximation of the joint distribution \(p_{\sigma}(x,z) = p_{\sigma}(x|z)p(z)\). Here, \(p_{\sigma}(x|z)\) is a Gaussian kernel with fixed standard deviation \(\sigma\) around the mean \(G_{\theta}(z)\). The final estimate depends critically on the accuracy of this approximation. In Section 4, we demonstrate that the AIS estimate of \(p(x)\) is highly dependent on the choice of this hyperparameter.

The Generative Adversarial Metric (Im et al., 2016a) measures the relative performance of two GANs via the likelihood ratio of the two models. Consider two GANs with their respective trained partners, \(M_{1} = (D_{1},G_{1})\) and \(M_{2} = (D_{2},G_{2})\), where \(G_{1}\) and \(G_{2}\) are the generators and \(D_{1}\) and \(D_{2}\) are the discriminators. The hypothesis \(\mathcal{H}_{1}\) is that \(M_{1}\) is better than \(M_{2}\) if \(G_{1}\) fools \(D_{2}\) more than \(G_{2}\) fools \(D_{1}\), and vice versa for the hypothesis \(\mathcal{H}_{0}\). The likelihood ratio is defined as:

\[p(x|y = 1;M_{1}^{\prime}) = \frac{p(y = 1|x;D_{1})p(x;G_{2})}{p(y = 1|x;D_{2})p(x;G_{1})}, \quad (3)\]

where \(M_{1}^{\prime}\) and \(M_{2}^{\prime}\) are the swapped pairs \((D_{1},G_{2})\) and \((D_{2},G_{1})\), \(p(x|y = 1;M)\) is the likelihood of \(x\) generated from the data distribution \(p(x)\) by model \(M\), and \(p(y = 1|x;D)\) indicates that discriminator \(D\) considers \(x\) a real sample.
To evaluate this, we measure the ratio of how frequently \(G_{1}\), the generator from model 1, fools \(D_{2}\), the discriminator from model 2, and vice versa: \(\frac{D_{1}(x_{2})}{D_{2}(x_{1})}\), where \(x_{1} \sim G_{1}\) and \(x_{2} \sim G_{2}\). There are two main caveats to the Generative Adversarial Metric. First, the measurement only provides comparisons between pairs of models. Second, the metric requires the two discriminators to have approximately similar performance on a calibration dataset, which can be difficult to satisfy in practice.

The Inception Score (IS) (Salimans et al., 2016) measures the performance of a model using a third-party neural network trained on a supervised classification task, e.g., Imagenet. The IS computes the expected divergence between the distribution of class predictions for individual samples from the GAN and the marginal distribution of class predictions,

\[\exp \left(\mathbb{E}_{x\sim Q_{\theta}}KL(p(y|x)\| p(y))\right). \quad (4)\]

Here, the class prediction for a sample \(x\) is computed using the third-party neural network. In (Salimans et al., 2016), Google's Inception Network (Szegedy et al., 2015) trained on Imagenet was the third-party neural network. IS is the most widely used metric for measuring GAN performance. However, summarizing samples by the class prediction of a network trained for a different task discards much of the important information in the samples. In addition, it requires another neural network that is trained separately via supervised learning. We demonstrate an example of a failure case of IS in the Experiments section.

The Fréchet Inception Distance (FID) (Heusel et al., 2017) extends IS. Instead of using the final classification outputs of the third-party network as representations of samples, it uses a representation computed from a late layer of the third-party network. It compares the mean \(m_{Q}\) and covariance \(C_{Q}\) of the Inception-based representation of samples generated by the GAN to the mean \(m_{P}\) and covariance \(C_{P}\) of the same representation for training samples:

\[D^{2}\left((m_{P},C_{P}),(m_{Q},C_{Q})\right) = \| m_{P} - m_{Q}\|_{2}^{2} + \mathrm{Tr}\left(C_{P} + C_{Q} - 2(C_{P}C_{Q})^{\frac{1}{2}}\right). \quad (5)\]

This method relies on the Inception-based representation of the samples capturing all important information, and on the first two moments of the distributions being descriptive of the distributions.

Classifier Two-Sample Tests (C2ST) (Lopez-Paz & Oquab, 2016) propose training a classifier, similar to a discriminator, to distinguish real samples from \(P\) from generated samples from \(Q\), and using the error rate of this classifier as a measure of GAN performance. In their work, they used single-layer and \(k\)-nearest-neighbor (KNN) classifiers trained on a representation of the samples computed from a late layer of a third-party network (in this case, ResNet (He et al., 2015)). C2ST is an IPM (Sriperumbudur et al., 2009), like the MMD and Wasserstein metrics we propose, with \(\mu(f) = f\) and \(\nu(f) = f\), but with a different function class \(\mathcal{F}\), corresponding to the family of classifiers chosen (in this case, single-layer networks or KNN; see our detailed explanation in the Appendix).
The accuracy of a classifier trained to distinguish samples from distributions \(P\) and \(Q\) is just one way to measure the distance between these distributions; in this work, we propose a general family.

## 3 EVALUATION METRICS

Given a generator \(G_{\theta}\) with parameters \(\theta\) which generates samples from the distribution \(Q_{\theta}\), we propose to measure the quality of \(G_{\theta}\) by estimating the divergence between the true data distribution \(P\) and \(Q_{\theta}\) for different choices of divergence measure. We train both \(G_{\theta}\) and \(D_{\phi}\) on a training data set, and measure performance on a separate test set. See Algorithm 1 for details. We consider metrics from two widely studied families of divergence and distance measures, the \(f\)-divergences (Nguyen et al., 2008) and the Integral Probability Metrics (IPMs) (Müller, 1997). In our experiments, we consider the following four metrics, which are commonly used to train GANs. Below, \(\phi\) represents the parameters of the discriminator network and \(\theta\) represents the parameters of the generator network.

Original GAN Criterion (GC) Training a standard GAN corresponds to minimizing the following (Goodfellow et al., 2014):

\[\max_{\phi} \mathbb{E}_{x\sim p(x)}[\log (D_{\phi}(x))] + \mathbb{E}_{z\sim p(z)}[\log (1 - D_{\phi}(G_{\theta}(z)))], \quad (6)\]

where \(p(z)\) is the prior distribution of the generative network and \(G_{\theta}(z)\) is a differentiable function from \(z\) to the data space, represented by a neural network with parameters \(\theta\). \(D_{\phi}\) is trained with a sigmoid activation function, so its output is guaranteed to be positive.

Least-Squares GAN Criterion (LS) A Least-Squares GAN corresponds to training with a Pearson \(\chi^{2}\) divergence (Mao et al., 2017):

\[\max_{\phi} -\mathbb{E}_{x\sim p(x)}[(D_{\phi}(x) - b)^{2}] - \mathbb{E}_{z\sim p(z)}[(D_{\phi}(G_{\theta}(z)) - a)^{2}]. \quad (7)\]

Following (Mao et al., 2017), we set \(a = 0\) and \(b = 1\) when training \(D_{\phi}\).

Maximum Mean Discrepancy (MMD) The maximum mean discrepancy metric considers the largest difference in expectations over the unit ball of an RKHS \(\mathcal{H}\):

\[\begin{array}{rl} & \underset{f:\| f\|_{\mathcal{H}}\leq 1}{\max}\ \mathbb{E}_{x\sim p(x)}[f(x)] - \mathbb{E}_{z\sim p(z)}[f(G_{\theta}(z))] \quad (8)\\ & \quad = \mathbb{E}_{x,x^{\prime}\sim P}\left[k(x,x^{\prime})\right] + \mathbb{E}_{z,z^{\prime}\sim p(z)}\left[k(G_{\theta}(z),G_{\theta}(z^{\prime}))\right] - 2\,\mathbb{E}_{x\sim P,z\sim p(z)}\left[k(x,G_{\theta}(z))\right], \quad (9) \end{array}\]

where \(\mathcal{H}\) is the RKHS with kernel \(k(\cdot ,\cdot)\) (Gretton et al., 2012). In this case, we do not need to train a critic \(D_{\phi}\) to evaluate the metric.

Improved Wasserstein Distance (IW) Arjovsky & Bottou (2017) proposed the use of the dual representation of the Wasserstein distance (Villani, 2009) for training GANs. The Wasserstein distance is an IPM over the 1-Lipschitz function class \(\{D_{\phi}: \| D_{\phi}\|_{L}\leq 1\}\):

\[\max_{\phi :\| D_{\phi}\|_{L}\leq 1}\left[\mathbb{E}_{x\sim p(x)}\left[D_{\phi}(x)\right] - \mathbb{E}_{z\sim p(z)}\left[D_{\phi}(G_{\theta}(z))\right]\right]. \quad (10)\]

Note that IW (Danihelka et al., 2017) and MMD (Sutherland et al., 2017) were recently proposed for evaluating GANs, but have not been compared before.
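Since MMD requires no trained critic, it can be estimated directly from samples. The following is a minimal NumPy sketch (for illustration only, not the paper's released code) of the biased empirical estimate of the kernel expansion in Equation 9, using an RBF kernel:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """RBF kernel matrix k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for
    row-wise sample matrices A (n, d) and B (m, d)."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD between samples X ~ P and Y ~ Q,
    following the expansion in Equation 9."""
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())
```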
Algorithm 1 Compute the divergence/distance.

1: procedure DIVERGENCECOMPUTATION(dataset \(\{X_{tr},X_{te}\}\), generator \(G_{\theta}\), learning rate \(\eta\), evaluation criterion \(J(\phi ,X,Y)\))
2: Initialize the critic network parameters \(\phi\)
3: for \(i = 1\dots N\) do
4: Sample data points \(\{x_{m}\} \sim X_{tr}\)
5: Sample points from the generative model, \(\{s_{m}\} \sim G_{\theta}\)
6: \(\phi \leftarrow \phi +\eta \nabla_{\phi}J(\{x_{m}\} ,\{s_{m}\} ;\phi)\)
7: Sample points from the generative model, \(\{s_{m}\} \sim G_{\theta}\)
8: return \(J(\phi ,X_{te},\{s_{m}\})\)

(A code sketch of this procedure is given at the end of this section.)

## 4 EXPERIMENTS

The goals of our experiments are two-fold. First, we wanted to evaluate the metrics we proposed for evaluating GANs. Second, we wanted to use these metrics to evaluate GAN frameworks and architectures. In particular, we evaluated how the sizes of the discriminator and generator networks affect performance, and the sensitivity of each algorithm to the training data set size.

GAN frameworks. We conducted our experiments on three types of GANs: Deep Convolutional Generative Adversarial Networks (DCGAN), Least-Squares GANs (LS-DCGAN), and Wasserstein GANs (W-DCGAN). Note that, to avoid confusing the test metric names with the GAN frameworks we evaluated, we use different abbreviations: GC is the original GAN criterion, used to train DCGANs; the LS criterion is used to train the LS-DCGAN; and the IW criterion is used to train the W-DCGAN.

Evaluation criteria. We evaluated these three families of GANs with six metrics, comparing our four proposed metrics to the two most commonly used metrics for evaluating GANs, IS and FID. Because the optimization of a discriminator is required both during training and at test time, we call the discriminator learned for evaluating our metrics the critic, in order not to confuse the two discriminators. We also compared these metrics to human perception, and had three volunteers evaluate and compare sets of images, either from the training data set or generated from different GAN frameworks during training.

Data sets. In our experiments, we considered the MNIST (LeCun et al., 1998), CIFAR10, LSUN Bedroom, and FashionMNIST datasets. MNIST consists of 60,000 training and 10,000 test images with a size of \(28 \times 28\) pixels, containing handwritten digits from the classes 0 to 9. From the 60,000 training examples, we set aside 10,000 as validation examples to tune various hyper-parameters. FashionMNIST consists of exactly the same numbers of training and test examples; each example is a \(28 \times 28\) grayscale image, associated with a label from 10 classes. The CIFAR10 dataset consists of images with a size of \(32 \times 32 \times 3\) pixels, with ten different classes of objects. We used 45,000, 5,000, and 10,000 examples as training, validation, and test data, respectively. The LSUN Bedroom dataset consists of images with a size of \(64 \times 64\) pixels, depicting various bedrooms. From the 3,033,342 images, we used 90,000 images as training data and 90,000 images as validation data. The learning rate was selected from discrete ranges and chosen based on a held-out validation set.

Hyperparameters. Table 10 in the Appendix shows the learning rates and the convolutional kernel sizes used for each experiment. The architecture of each network is presented in the Appendix in Figure 10. Additionally, we used exponential mean-square kernels with several different sigma values for MMD.
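As a point of reference, the critic-training loop of Algorithm 1 can be condensed into the following PyTorch-style sketch. This is a minimal illustration under assumptions, not released code: `critic`, `generator`, and the criterion `J` (mapping critic outputs on real and generated samples to a scalar tensor) are assumed objects, and `z_dim` is an assumed attribute of the generator:

```python
import torch

def divergence(X_tr, X_te, generator, critic, J, eta=2e-4, n_steps=1000, m=64):
    """Train a fresh critic to maximize J on training data vs. generator
    samples (lines 3-6 of Algorithm 1), then report J on held-out test data
    (lines 7-8)."""
    opt = torch.optim.Adam(critic.parameters(), lr=eta)
    for _ in range(n_steps):
        x = X_tr[torch.randint(len(X_tr), (m,))]        # {x_m} ~ X_tr
        s = generator(torch.randn(m, generator.z_dim))  # {s_m} ~ G_theta
        loss = -J(critic(x), critic(s.detach()))        # gradient ascent on J
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        s = generator(torch.randn(len(X_te), generator.z_dim))
        return J(critic(X_te), critic(s)).item()        # test-time score
```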
A pre-trained logistic regression model and a pre-trained residual network were used for IS and FID on the MNIST and CIFAR10 datasets, respectively. For every experiment, we retrained 10 times with different random seeds, and report the mean and standard deviation.

### 4.1 QUALITATIVE OBSERVATIONS ABOUT EXISTING METRICS

The log-likelihood measurement is the most commonly used metric for generative models, yet the log-likelihood measured using AIS on GANs behaves strangely, as shown in Figure 1. We measured the log-likelihood of the DCGAN on MNIST with three different variances, \(\sigma^2 = 0.01\), 0.025, and 0.05. The figure illustrates that the log-likelihood curve over the training epochs varies substantially depending on the variance, which indicates that a fixed Gaussian observation model might not be the ideal assumption for GANs. Moreover, we observe a high log-likelihood at the beginning of training, followed by a drop in likelihood, which then returns to the high value.

The IS and MMD metrics do not require training a critic. It was easy to find samples for which the IS and MMD scores did not match their visual quality. For example, Figure 2 shows samples generated by a DCGAN that failed to train properly. Even though the failed DCGAN samples are much darker than the samples on the right, the IS for the left samples is higher (better) than for the right samples.

![](images/5_0.jpg) <center>Figure 1: Log-likelihood estimated using AIS for generators learned using DCGAN at various points during training, MNIST data set. </center>

![](images/5_1.jpg) <center>Figure 2: Misleading examples of Inception Scores. </center>

As the Imagenet-trained network is likely trained to be somewhat invariant to overall intensity, this issue is to be expected. A failure case for MMD is shown in Figure 5. The samples on the right are dark, like the previous examples, but still texturally recognizable, whereas the samples on the left are totally meaningless. However, MMD gives lower (better) distances to the left samples. The average intensity of the pixels of the left samples is closer to that of the training data, suggesting that MMD is overly sensitive to image intensity. Thus, IS is under-sensitive to image intensity, while MMD is over-sensitive to it. In Section 4.2.1, we conduct more systematic experiments by measuring the correlation between these metrics and human perceptual scores.

### 4.2 METRIC COMPARISON

To compare both the metrics and the different GAN frameworks, we evaluated the six metrics on different GAN frameworks. Tables 2, 3, and 4 present the results on MNIST, CIFAR10, and LSUN, respectively. As each type of GAN was trained using one of our proposed metrics, we investigated whether each metric favors samples from the model trained using the same metric. Interestingly, we do not see this behavior: our proposed metrics agree on which GAN framework produces samples closest to the test data set. Every metric except MMD showed that LS-DCGAN performed best for MNIST and CIFAR10, while W-DCGAN performed best for LSUN. As discussed below, we found DCGAN to be unstable to train, and thus excluded GC as a metric for all experiments except on this first data set. For Fashion-MNIST, FID's ranking disagreed with IW and LS.

We observed similar results for a range of different critic CNN architectures (number of feature maps in each convolutional layer): [3, 64, 128, 256], [3, 128, 256, 512], [3, 256, 512, 1024], and [3, 320, 640, 1280] (see Supp. Figs. 12 and 13).
We evaluated a larger variety of GAN frameworks using pre-trained GANs downloaded from the pytorch-generative-model-collections repository. In particular, we evaluated EBGAN (Zhao et al., 2016), BEGAN (Berthelot et al., 2017), W-DCGAN GP (Gulrajani et al., 2017), and DRAGAN (Kodali et al., 2017). Table 5 presents the evaluation results. Critic architectures were selected to match those of these pre-trained GANs. For both MNIST and FashionMNIST, the three metrics are consistent: they rank DRAGAN the highest, followed by LS-DCGAN and DCGAN. The standard deviations of the IW distance are higher than those of the LS divergence. We computed the Wilcoxon rank-sum test in order to test whether the medians of the distributions of distances are the same for DCGAN, LS-DCGAN, and W-DCGAN. We found that the different GAN frameworks have significantly different performance according to the LS criterion, but not according to the IW criterion (\(p < .05\), Wilcoxon rank-sum test). Thus, LS is more sensitive than IW.

We evaluated the consistency of the metrics with respect to the size of the validation set. We trained our three GAN frameworks for 100 epochs with 90,000 training examples from the LSUN Bedroom dataset.

Table 2: GAN scores for various metrics trained on MNIST. Lower values are better for MMD, IW, LS, GC, and FID; higher values are better for IS. Lighter color indicates better performance.

<table><tr><td>Model</td><td>MMD</td><td>IW</td><td>GC</td><td>LS</td><td>IS <br>(Logistic Reg.)</td></tr><tr><td>DCGAN</td><td>0.028 ± 0.0066</td><td>7.01 ± 1.63</td><td>-2.2e-3 ± 3e-4</td><td>-0.12 ± 0.013</td><td>5.76 ± 0.10</td></tr><tr><td>W-DCGAN</td><td>0.006 ± 0.0009</td><td>7.71 ± 1.89</td><td>-4e-4 ± 4e-4</td><td>-0.05 ± 0.008</td><td>5.17 ± 0.11</td></tr><tr><td>LS-DCGAN</td><td>0.012 ± 0.0036</td><td>4.50 ± 1.94</td><td>-3e-3 ± 6e-4</td><td>-0.13 ± 0.022</td><td>6.07 ± 0.08</td></tr></table>

Table 3: GAN scores for various metrics trained on CIFAR10.

<table><tr><td>Model</td><td>MMD</td><td>IW</td><td>LS</td><td>IS <br>(ResNet)</td><td>FID</td></tr><tr><td>DCGAN</td><td>0.0538 ± 0.014</td><td>8.844 ± 2.87</td><td>-0.0408 ± 0.0039</td><td>6.649 ± 0.068</td><td>0.112 ± 0.010</td></tr><tr><td>W-DCGAN</td><td>0.0060 ± 0.001</td><td>9.875 ± 3.42</td><td>-0.0421 ± 0.0054</td><td>6.524 ± 0.078</td><td>0.095 ± 0.003</td></tr><tr><td>LS-DCGAN</td><td>0.0072 ± 0.0024</td><td>7.10 ± 2.05</td><td>-0.0535 ± 0.0031</td><td>6.761 ± 0.069</td><td>0.088 ± 0.008</td></tr></table>

Table 4: GAN scores for various metrics trained on the LSUN Bedroom dataset.

<table><tr><td>Model</td><td>MMD</td><td>IW</td><td>LS</td></tr><tr><td>DCGAN</td><td>0.00708</td><td>3.79097</td><td>-0.14614</td></tr><tr><td>W-DCGAN</td><td>0.00584</td><td>2.91787</td><td>-0.20572</td></tr><tr><td>LS-DCGAN</td><td>0.00973</td><td>3.36779</td><td>-0.17307</td></tr></table>

Table 5: Evaluation of GANs on the MNIST and Fashion-MNIST datasets.
<table><tr><td rowspan="2">Model</td><td colspan="3">MNIST</td><td colspan="3">Fashion-MNIST</td></tr><tr><td>IW</td><td>LS</td><td>FID</td><td>IW</td><td>LS</td><td>FID</td></tr><tr><td>DCGAN</td><td>0.4814 ± 0.0083</td><td>-0.111 ± 0.0074</td><td>1.84 ± 0.15</td><td>0.69 ± 0.0057</td><td>-0.0202 ± 0.00242</td><td>3.23 ± 0.34</td></tr><tr><td>EBGAN</td><td>0.7277 ± 0.0159</td><td>-0.029 ± 0.0026</td><td>5.36 ± 0.32</td><td>0.99 ± 0.0001</td><td>-2.2e-5 ± 5.3e-5</td><td>104.08 ± 0.56</td></tr><tr><td>W-DCGAN GP</td><td>0.7314 ± 0.0194</td><td>-0.035 ± 0.0059</td><td>2.67 ± 0.15</td><td>0.89 ± 0.0086</td><td>-0.0005 ± 0.00037</td><td>2.56 ± 0.25</td></tr><tr><td>LS-DCGAN</td><td>0.5058 ± 0.0117</td><td>-0.115 ± 0.0070</td><td>2.20 ± 0.27</td><td>0.68 ± 0.0086</td><td>-0.0208 ± 0.00290</td><td>0.62 ± 0.13</td></tr><tr><td>BEGAN</td><td></td><td>-0.009 ± 0.0063</td><td>15.9 ± 0.48</td><td>0.90 ± 0.0159</td><td>-0.0016 ± 0.00047</td><td>1.51 ± 0.16</td></tr><tr><td>DRAGAN</td><td>0.4632 ± 0.0247</td><td>-0.116 ± 0.0116</td><td>1.09 ± 0.13</td><td>0.66 ± 0.0108</td><td>-0.0219 ± 0.00232</td><td>0.97 ± 0.14</td></tr></table>

We then trained LS and IW critics using both 300 and 90,000 validation examples, and looked at how often the critic trained with 300 examples agreed with the one trained with 90,000 examples. The LS critics agreed \(88\%\) of the time, while the IW critics agreed only \(55\%\) of the time (slightly better than chance). Thus, LS is more robust to the size of the validation data set. Another advantage is that measuring the LS distance is faster than measuring the IW distance, as estimating IW involves regularizing with a gradient penalty (Gulrajani et al., 2017); computing the gradient penalty term and tuning its regularization coefficient requires extra computational time.

As mentioned above, we found training a critic using the GC criterion (corresponding to a DCGAN) to be unstable. It has previously been speculated that this is because the supports of the data and model distributions possibly become disjoint (Arjovsky & Bottou, 2017), and because the Hessian of the GAN objective is non-Hermitian (Mescheder et al., 2017). LS-DCGAN and W-DCGAN were proposed to address this by providing non-saturating gradients. We also found DCGAN difficult to train, and thus only report results using the corresponding criterion GC for MNIST. Note that this is different from training a discriminator as part of standard GAN training, because we train from a random initialization, not from the previous version of the discriminator. Our experience was that LS-DCGAN was the simplest and most stable model to train.

We visualized a 2D subspace of the loss surface of the GANs in Supp. Fig. 29. Here, we took the parameters of three trained models (corresponding to the red vertices in the figure) and applied barycentric interpolation with respect to the three parameter vectors (see details in (Im et al., 2016c)); a minimal sketch of this interpolation is given below. DCGAN surfaces have much sharper slopes than those of LS-DCGAN and W-DCGAN, and LS-DCGAN has the gentlest surfaces. In what follows, we show that this geometric view is consistent with our finding that LS-DCGAN is the easiest and most stable to train.
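For illustration only (not the visualization code of Im et al. (2016c)), the barycentric interpolation over three flattened parameter vectors, assumed here to be NumPy arrays of equal length, can be sketched as:

```python
import numpy as np

def barycentric_interpolation(theta1, theta2, theta3, resolution=25):
    """Yield (lam, theta) pairs where lam = (l1, l2, l3) ranges over a grid
    on the probability simplex and theta = l1*theta1 + l2*theta2 + l3*theta3.
    Evaluating the GAN loss at each theta traces out the 2D loss surface."""
    for i in range(resolution + 1):
        for j in range(resolution + 1 - i):
            l1, l2 = i / resolution, j / resolution
            l3 = 1.0 - l1 - l2
            yield (l1, l2, l3), l1 * theta1 + l2 * theta2 + l3 * theta3
```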
Table 6: The fraction of pairs on which each metric agrees with human scores. We use colored asterisks to represent significant differences (two-sided Fisher's test, p &lt; .05). E.g., * in the IW row indicates that IW and IS are significantly different.

<table><tr><td>Metric</td><td>Fraction</td><td>[Agreed/Total] samples</td><td>p &lt; .05?</td></tr><tr><td>IW</td><td>0.977</td><td>128 / 131</td><td>*</td></tr><tr><td>LS</td><td>0.931</td><td>122 / 131</td><td>*</td></tr><tr><td>IS</td><td>0.863</td><td>113 / 131</td><td>*</td></tr><tr><td>MMD</td><td>0.832</td><td>109 / 131</td><td>*</td></tr></table>

#### 4.2.1 COMPARISON TO HUMAN PERCEPTION

We compared the LS, IW, MMD, and IS metrics to human perception on the CIFAR10 dataset. To accomplish this, we asked five volunteers to choose which of two sets of 100 samples, each generated using a different generator, looked more realistic. Before the survey, the volunteers were trained to choose between real samples from CIFAR10 and samples generated by a GAN. Supp. Fig. 14 displays the user interface shown to participants, and Supp. Fig. 15 shows the fraction of labels that the volunteers agreed upon. Table 6 presents the fraction of pairs for which each metric agrees with humans (higher is better). IW has a slight edge over LS, and both outperform IS and MMD. In Figure 3, we show examples in which all humans agree and the metric disagrees with human perception. All such examples are shown in Supp. Figs. 21-24.

![](images/7_0.jpg) <center>Figure 3: Pairs of generated image sets for which human perception and metrics disagree. Here, we selected one such example for each metric for which the difference in that metric's scores was high. For each pair, humans perceived the set of images on the left to be more realistic than those on the right, while the metric predicted the opposite. Below each pair of images, we indicate the metric's score for the left and right image sets. </center>

### 4.3 SENSITIVITY ANALYSIS

#### 4.3.1 PERFORMANCE CHANGE WITH RESPECT TO THE SIZE OF THE NETWORK

Several works have demonstrated improved performance from enlarging deep network architectures (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015; Huang et al., 2017). Here, we investigate performance changes with respect to the width and depth of the networks. First, we trained three GANs with varying numbers of feature maps, as shown in Table 7 (a-d). Note that we double the number of feature maps in Table 7 for both the discriminators and generators.

![](images/8_0.jpg) <center>Figure 4: LS score evaluation of W-DCGAN & LS-DCGAN w.r.t number of feature maps. </center>

![](images/8_1.jpg) <center>Figure 5: W-DCGAN trained with different numbers of feature maps. </center>

Table 8: LS-DCGAN and W-DCGAN scores on CIFAR10 with respect to different generator and discriminator capacities.

<table><tr><td>Model</td><td>Architecture (Table 7)</td><td>MMD Test vs. Samples</td><td>IW</td><td>LS</td><td>IS (ResNet)</td></tr><tr><td rowspan="2">W-DCGAN</td><td>(e)</td><td>0.1057 ± 0.0798</td><td>450.17 ± 25.74</td><td>-0.0079 ± 0.0009</td><td>6.403 ± 0.839</td></tr><tr><td>(f)</td><td>0.2176 ± 0.2706</td><td>16.52 ± 15.63</td><td>-0.0636 ± 0.0101</td><td>6.266 ± 0.055</td></tr><tr><td rowspan="2">LS-DCGAN</td><td>(e)</td><td>0.1390 ± 0.1525</td><td>343.23 ± 47.55</td><td>-0.0092 ± 0.0007</td><td>5.751 ± 0.511</td></tr><tr><td>(f)</td><td>0.0054 ± 0.0022</td><td>12.75 ± 4.29</td><td>-0.0372 ± 0.0068</td><td>6.600 ± 0.061</td></tr></table>

In Figure 4, the performance under the LS score increases logarithmically as the number of feature maps is doubled. A similar behaviour is observed with the other metrics as well (see S.M. Figure 16). A sketch of how such variable-width critics can be constructed follows.
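For concreteness, a DCGAN-style critic whose per-layer feature maps follow the bracket notation of Table 7 (e.g., [3, 64, 128, 256]) could be built as in the following sketch. The architectural details here (kernel size 4, stride 2, LeakyReLU) are typical DCGAN choices assumed for illustration, not taken from the paper:

```python
import torch.nn as nn

def make_critic(fmaps=(3, 64, 128, 256)):
    """Build a critic whose i-th convolution maps fmaps[i] -> fmaps[i+1]
    channels, halving the spatial resolution at each layer."""
    layers = []
    for c_in, c_out in zip(fmaps[:-1], fmaps[1:]):
        layers += [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                   nn.LeakyReLU(0.2)]
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(fmaps[-1], 1)]
    return nn.Sequential(*layers)

# Doubling the width as in Table 7 (a)-(d) then amounts to, e.g.:
# make_critic((3, 16, 32, 64)), make_critic((3, 32, 64, 128)), ...
```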
We then analyzed the importance of the sizes of the discriminative and generative networks. We considered two extreme settings, choosing a small number of feature maps for the generator and a large number for the discriminator, and vice versa (see labels (e) and (f) in Table 7); the results are shown in Table 8. For LS-DCGAN, a large number of feature maps for the discriminator gives a better score than a large number of feature maps for the generator. This can also be qualitatively verified by looking at the samples from architectures (a), (e), (f), and (d) in Figure 6. For W-DCGAN, we observe agreement between the LS and IW metrics, and conflict with MMD and IS. When we look at the samples from the W-DCGAN in Figure 5, it is clear that the model with a larger number of feature maps in the discriminator should get a better score; this is another example of false intuition propagated by MMD and IS. One interesting observation is that when we compare the scores and samples from architectures (a) and (e) in Table 7, architecture (a) is much better than (e) (see Figure 6). This demonstrates that having a large generator and a small discriminator is worse than having a small architecture for both networks. Overall, we found that having a larger generator than discriminator does not give good results, and that it is more desirable to have a larger discriminator than generator. Similar results were observed for MNIST, as shown in S.M. Figure 20. This result somewhat supports the theoretical result of Arora et al. (2017), where the generator capacity needs to be modulated in order for an approximately pure equilibrium to exist for GANs.

Table 7: Reference for the different architectures explored in the experiments.

<table><tr><td rowspan="2">Label</td><td colspan="2">Feature Maps</td></tr><tr><td>Discriminator</td><td>Generator</td></tr><tr><td>(a)</td><td>[3, 16, 32, 64]</td><td>[128, 64, 32, 3]</td></tr><tr><td>(b)</td><td>[3, 32, 64, 128]</td><td>[256, 128, 64, 3]</td></tr><tr><td>(c)</td><td>[3, 64, 128, 256]</td><td>[512, 256, 128, 3]</td></tr><tr><td>(d)</td><td>[3, 128, 256, 512]</td><td>[1024, 512, 256, 3]</td></tr><tr><td>(e)</td><td>[3, 16, 32, 64]</td><td>[1024, 512, 256, 3]</td></tr><tr><td>(f)</td><td>[3, 128, 256, 512]</td><td>[128, 64, 32, 3]</td></tr></table>

Lastly, we experimented with how performance changes with respect to the dimension of the noise vector. Generating a sample starts by transforming a noise vector into a meaningful image.
It is unclear how the size of the noise vector affects the ability of the generator to generate a meaningful image. Che et al. (2017) have observed that a 100-d noise vector preserves modes better than a 200-d noise vector <--- Page Split ---> ![](images/9_0.jpg) <center>Figure 6: Samples from different LS-DCGAN architectures. </center> for DCGAN. Our experiments show that this depends on the model. Given the fixed architecture (d) from Table 7, we observed the performance of LS-DCGAN and W-DCGAN while varying the size of the noise vector \(z\). Table 9 illustrates that LS-DCGAN gives the best score with a noise dimension of 50, and W-DCGAN gives the best score with a noise dimension of 150, for both IW and LS. The outcome for LS-DCGAN is consistent with the result in (Che et al., 2017). It is possible that this occurs because both models fall into the category of \(f\)-divergences, whereas the W-DCGAN behaves differently because its metric falls under a different category, the Integral Probability Metrics. Table 9: LS-DCGAN and W-DCGAN scores on CIFAR10 with respect to the dimensionality of the noise vector. <table><tr><td rowspan="2">|z|</td><td colspan="2">LS-DCGAN</td><td colspan="2">W-DCGAN</td></tr><tr><td>IW</td><td>LS</td><td>IW</td><td>LS</td></tr><tr><td>50</td><td>3.9010 ± 0.60</td><td>-0.0547 ± 0.0059</td><td>6.0948 ± 3.21</td><td>-0.0532 ± 0.0069</td></tr><tr><td>100</td><td>5.6588 ± 1.47</td><td>-0.0511 ± 0.0065</td><td>5.7358 ± 3.25</td><td>-0.0528 ± 0.0051</td></tr><tr><td>150</td><td>5.8350 ± 0.80</td><td>-0.0434 ± 0.0036</td><td>3.6945 ± 1.33</td><td>-0.0521 ± 0.0050</td></tr></table> ![](images/9_1.jpg) <center>Figure 7: LS score evaluation with respect to a varying number of discriminator and generator updates on DCGAN, W-DCGAN, and LS-DCGAN. </center> #### 4.3.2 PERFORMANCE CHANGE WITH RESPECT TO THE RATIO OF NUMBER OF UPDATES BETWEEN THE GENERATOR AND DISCRIMINATOR In practice, we alternate between updating the discriminator and generator, and yet this is not guaranteed to give the same result as the solution to the min-max problem in Equation 2. Hence, the update ratio can influence the performance of GANs. We experimented with three different update ratios, \(5:1\), \(1:1\), and \(1:5\), with respect to the discriminator and generator updates. We applied these ratios to both the MNIST and CIFAR10 datasets on all models. <--- Page Split ---> Figure 7 presents the LS scores on both MNIST and CIFAR10, and this result is consistent with the IW metric as well (see S.M. Figure 25). However, we did not find any one update ratio that was superior to the others across the two datasets. For CIFAR10, the \(1:1\) update ratio worked best for all models, and for MNIST, different ratios worked better for different models. Hence, we conclude that the update ratio needs to be tuned for each model. The corresponding samples from the models trained with different update ratios are shown in S.M. Figure 27. #### 4.3.3 PERFORMANCE WITH RESPECT TO THE AMOUNT OF AVAILABLE TRAINING DATA In practice, DCGANs are known to be unstable, and the generator tends to suffer as the discriminator gets better, due to disjoint support between the data and generator distributions (Goodfellow et al., 2014; Arjovsky & Bottou, 2017). W-DCGAN and LS-DCGAN offer alternative ways of addressing this problem.
If the model is suffering from disjoint support, having more training examples will not help; conversely, if the model does not suffer from this problem, more training examples could potentially help. Here, we explore the sensitivity of three different kinds of GANs with respect to the number of training examples. We trained GANs with 10,000, 20,000, 30,000, 40,000, and 45,000 examples on CIFAR10. Figure 8 shows that the LS score curve of DCGAN grows quite slowly when compared to W-DCGAN and LS-DCGAN. The three ![](images/10_0.jpg) <center>Figure 8: LS score evaluation on W-DCGAN & LS-DCGAN w.r.t. the number of data points. </center> GANs have a relatively similar loss when they are trained with 10,000 training examples. However, DCGAN only gained \(0.0124 \pm 0.00127\) when increasing from 10,000 to 40,000 training examples, whereas the performance of W-DCGAN and LS-DCGAN improved by \(0.03016 \pm 0.00469\) and \(0.0444 \pm 0.0033\), respectively. Thus, we empirically observe that the performance of W-DCGAN and LS-DCGAN increases faster than that of DCGAN as the number of training examples grows. ## 5 CONCLUSION In this paper, we proposed to use four well-known distance functions as evaluation metrics, and empirically investigated the DCGAN, W-DCGAN, and LS-DCGAN families under these metrics. Previously, these models were compared based on visual assessment of sample quality and difficulty of training. In our experiments, we showed that there are performance differences between the models on average, but that some of these differences are not statistically significant. Moreover, we thoroughly analyzed the performance of GANs under different hyper-parameter settings. There are still several types of GANs that need to be evaluated, such as GRAN (Im et al., 2016a), WGAN-GP (Gulrajani et al., 2017), BEGAN (Berthelot et al., 2017), MMDGAN (Li et al., 2017), and CramerGAN (Bellemare et al., 2017). We hope to evaluate all of these models under this framework and thoroughly analyze them in the future. Moreover, there has been an investigation into taking ensemble approaches to GANs, such as Generative Adversarial Parallelization (Im et al., 2016b). Ensemble approaches have been empirically shown to work well in many domains of research, so it would be interesting to find out whether ensembles can also help in min-max problems. Alternatively, we can also try to evaluate other log-likelihood-based models such as NVIL (Mnih & Gregor, 2014), VAE (Kingma & Welling, 2014), DVAE (Im et al., 2015), DRAW (Gregor et al., 2015), RBMs (Hinton et al., 2006; Salakhutdinov & Hinton, 2009), NICE (Dinh et al., 2014), etc. Model evaluation is an important and complex topic. Model selection, model design, and even research direction can change depending on the evaluation metric. Thus, we need to continuously explore different metrics and rigorously evaluate new models. <--- Page Split ---> ## APPENDIX ## RELATIONSHIP BETWEEN METRICS AND BINARY CLASSIFICATION In this paper, we considered four distance metrics that belong to two classes of metrics, \(\phi\)-divergences and IPMs. Sriperumbudur et al. (2009) have shown that the optimal risk function is associated with a binary classifier between \(P\) and \(Q\), the distributions conditioned on each class, when the discriminant function is restricted to a certain class \(F\) (Theorem 17 of Sriperumbudur et al. (2009)).
Let the optimal risk function be: \[R(L,F) = \inf_{f\in F}\int L(y,f(x))\,d\mu(x,y), \quad (11)\] where \(F\) is the set of discriminant functions (classifiers), \(y \in \{-1, 1\}\), \(\mu\) is the joint distribution over \((x,y)\) with \(\epsilon = \mu(y = 1)\), and \(L\) is the loss function. By the following derivation, we can see that the optimal risk function becomes an IPM: \[\begin{aligned}R(L,F) &= \inf_{f\in F}\int L(y,f(x))\,d\mu(x,y)\\ &= \inf_{f\in F}\left[\epsilon\int L(1,f(x))\,dp(x) + (1-\epsilon)\int L(-1,f(x))\,dq(x)\right]\\ &= \inf_{f\in F}\left[-\int f\,dp(x) + \int f\,dq(x)\right]\\ &= -\sup_{f\in F}\left[\int f\,dp(x) - \int f\,dq(x)\right] = -\mathrm{IPM}(P,Q), \quad (14)\end{aligned}\] where we choose \(L(1, f(x)) = -f(x)/\epsilon\) and \(L(-1, f(x)) = f(x)/(1 - \epsilon)\). The second equality is derived by separating the loss for class 1 and class \(-1\). The third equality follows from the way we chose \(L(1,f(x))\) and \(L(-1,f(x))\). The last equality is derived from the fact that \(F\) is symmetric around zero (\(f \in F \Rightarrow -f \in F\)). Hence, this shows that by appropriately choosing \(L\), the MMD and Wasserstein distances can be understood as the optimal \(L\)-risk associated with a binary classifier over a specific set of functions \(F\). For example, the Wasserstein and MMD distances are equivalent to the optimal risk function with 1-Lipschitz classifiers and with classifiers in the unit ball of an RKHS, respectively. <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 9: GAN Topology for MNIST. </center> ![](images/15_1.jpg) <center>Figure 10: GAN Topology for CIFAR10. </center> ![](images/15_2.jpg) <center>Figure 11: MNIST & FashionMNIST Samples </center> <--- Page Split ---> Table 10: Hyper-parameters used for different experiments. <table><tr><td rowspan="2"></td><td colspan="5">GAN training</td><td colspan="3">Critic Training (test time)</td></tr><tr><td>Model</td><td>Disc. Lr.</td><td>Gen. Lr.</td><td>Ratio</td><td>Cr. Lr.</td><td>Cr.
Kern</td><td>Num Epoch</td><td></td></tr><tr><td rowspan="3">Table 2</td><td>DCGAN</td><td>0.0002</td><td>0.0004</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>W-DCGAN</td><td>0.0004</td><td>0.0008</td><td>1:1</td><td>0.0001</td><td>[1, 128, 32]</td><td>25</td><td></td></tr><tr><td>LS-DCGAN</td><td>0.0004</td><td>0.0008</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="3">Table 3</td><td>DCGAN</td><td>0.0002</td><td>0.0001</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>W-DCGAN</td><td>0.0008</td><td>0.0004</td><td>1:1</td><td>0.0002</td><td>[3, 128, 256, 512]</td><td>11</td><td></td></tr><tr><td>LS-DCGAN</td><td>0.0008</td><td>0.0004</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="3">Table 4</td><td>DCGAN</td><td>0.00005</td><td>0.0001</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>W-DCGAN</td><td>0.0002</td><td>0.0004</td><td>1:2</td><td>0.0002</td><td>[3, 128, 256, 512, 1024]</td><td>4</td><td></td></tr><tr><td>LS-DCGAN</td><td>0.0002</td><td>0.0004</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>Table 5</td><td>ALL GANs</td><td>0.0002</td><td>0.0002</td><td>1:1</td><td>0.0002</td><td>[1, 64, 128]</td><td>25</td><td></td></tr><tr><td rowspan="3">Table 8</td><td>DCGAN</td><td>0.0002</td><td>0.0001</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>W-DCGAN</td><td>0.0002</td><td>0.0001</td><td>1:1</td><td>0.0002</td><td>[3, 128, 256, 512]</td><td>11</td><td></td></tr><tr><td>LS-DCGAN</td><td>0.0008</td><td>0.0004</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>Table 11</td><td>ALL GANs</td><td>0.0002</td><td>0.0002</td><td>1:1</td><td>0.0002</td><td>[1, 64, 128]</td><td>25</td><td></td></tr><tr><td>Table 12</td><td>ALL GANs</td><td>0.0002</td><td>0.0002</td><td>1:1</td><td>0.0002</td><td>[1, 64, 128]</td><td>25</td><td></td></tr><tr><td rowspan="5">Figure 7b</td><td rowspan="5">DCGAN</td><td rowspan="5">0.0001</td><td rowspan="5">0.00005</td><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:5</td><td></td><td></td><td></td><td></td></tr><tr><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="5">Figure 26</td><td rowspan="5">DCGAN</td><td rowspan="5">0.0001</td><td rowspan="5">0.00005</td><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:5</td><td></td><td></td><td></td><td></td></tr><tr><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="5">Figure 16</td><td rowspan="5">DCGAN</td><td rowspan="5">0.0002</td><td rowspan="5">0.0001</td><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:5</td><td></td><td></td><td></td><td></td></tr><tr><td>5:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="3">Figure 28</td><td rowspan="3">DCGAN</td><td rowspan="3">0.0002</td><td rowspan="3">0.0001</td><td>1:2</td><td></td><td></td><td></td><td></td></tr><tr><td>1:1</td><td></td><td></td><td></td><td></td></tr><tr><td>1:2</td><td></td><td></td><td></td><td></td></tr></table> ## MORE EXPERIMENTS We trained two critics on training data and validation data, respectively, and evaluated on test data from both critics. 
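As a concrete illustration of this protocol, the sketch below trains a fresh critic on held-out real data against generator samples and reports its negated least-squares loss on the test set. It is PyTorch-style pseudocode under our reading of the setup: the critic architecture, optimizer settings, and least-squares targets (1 for real, 0 for generated) are illustrative assumptions, not the exact configuration of Table 10.

```python
import torch

def train_critic(critic, real_loader, generator, z_dim, epochs, lr):
    # Fit a fresh critic to separate held-out real images from generator samples.
    # Least-squares targets of 1 (real) and 0 (generated) are illustrative.
    opt = torch.optim.Adam(critic.parameters(), lr=lr)
    for _ in range(epochs):
        for real, _ in real_loader:
            z = torch.randn(real.size(0), z_dim, 1, 1)
            fake = generator(z).detach()
            loss = ((critic(real) - 1) ** 2).mean() + (critic(fake) ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
    return critic

@torch.no_grad()
def ls_score(critic, test_loader, generator, z_dim):
    # Report the negated least-squares loss on the test set: the harder it is
    # for the trained critic to separate samples from data, the lower (more
    # negative) the score, matching the LS numbers reported in the tables.
    losses = []
    for real, _ in test_loader:
        z = torch.randn(real.size(0), z_dim, 1, 1)
        fake = generator(z)
        losses.append((((critic(real) - 1) ** 2).mean()
                       + (critic(fake) ** 2).mean()).item())
    return -sum(losses) / len(losses)
```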
We trained six GANs (GAN, LS-DCGAN, W-DCGAN GP, DRAGAN, BEGAN, EBGAN) on MNIST and FashionMNIST, using 50,000 training examples for each. At test time, we used 10,000 training and 10,000 validation examples for training the critics, and evaluated on 10,000 test examples. Here, we present the test scores from the critics trained on training and validation data. The results are shown in Tables 11 and 12. Note that we also have the IW and FID evaluations of these models in the paper. For FashionMNIST, we find that the test scores from critics trained on training data and on validation data are very close; hence, we see no indication of overfitting. For MNIST, on the other hand, there are gaps between the two: critics trained on the validation set give better scores than those trained on the training set. <--- Page Split ---> ![](images/17_0.jpg) <center>Filter #: [3, 64, 128, 256] Filter #: [3, 128, 256, 512] Filter #: [3, 256, 512, 1024] Filter #: [3, 320, 640, 1280] </center> ![](images/17_1.jpg) <center>Figure 12: GAN evaluation using different architectures for the critic (number of feature maps in each layer of the CNN critic). The figures above are evaluated under negative least-squares loss; those in Figure 13 are evaluated under the Wasserstein distance. </center> Figure 13: GAN evaluation using different critic architectures (number of filters in the critic's convolutional network). Panels (a-d) are evaluated under the Wasserstein distance. Table 11: Evaluation of GANs on the MNIST dataset. Test score comparison between critics trained on the training and validation sets. <table><tr><td rowspan="2">Model</td><td colspan="2">LS Score</td><td colspan="2">IW Score</td></tr><tr><td>Trained on training data</td><td>Trained on validation data</td><td>Trained on training data</td><td>Trained on validation data</td></tr><tr><td>DCGAN</td><td>-0.312 ± 0.010</td><td>-0.4408 ± 0.0201</td><td>0.300 ± 0.0103</td><td>0.259 ± 0.0083</td></tr><tr><td>EBGAN</td><td>(-3.386 ± 0.186)e-7</td><td>(-3.826 ± 0.282)e-7</td><td>0.999 ± 0.0001</td><td>0.999 ± 0.0001</td></tr><tr><td>WGAN GP</td><td>-0.196 ± 0.006</td><td>-0.307 ± 0.0381</td><td>0.705 ± 0.0202</td><td>0.635 ± 0.0270</td></tr><tr><td>LSGAN</td><td>-0.232 ± 0.0104</td><td>-0.352 ± 0.0143</td><td>0.232 ± 0.0156</td><td>0.195 ± 0.0103</td></tr><tr><td>BEGAN</td><td>-0.081 ± 0.016</td><td>-0.140 ± 0.0329</td><td>0.888 ± 0.0097</td><td>0.858 ± 0.0131</td></tr><tr><td>DRAGAN</td><td>-0.318 ± 0.012</td><td>-0.384 ± 0.0139</td><td>0.266 ± 0.0060</td><td>0.235 ± 0.0079</td></tr></table> Table 12: Evaluation of GANs on the Fashion-MNIST dataset. Test score comparison between critics trained on the training and validation sets.
<table><tr><td rowspan="2">Model</td><td colspan="2">LS Score</td><td colspan="2">IW Score</td></tr><tr><td>Trained on training data</td><td>Trained on validation data</td><td>Trained on training data</td><td>Trained on validation data</td></tr><tr><td>DCGAN</td><td>-0.1638 ± 0.010</td><td>-0.1635 ± 0.0006</td><td>0.408 ± 0.0135</td><td>0.4118 ± 0.0107</td></tr><tr><td>EBGAN</td><td>-0.0037 ± 0.0009</td><td>-0.0048 ± 0.0023</td><td>0.415 ± 0.0067</td><td>0.4247 ± 0.0098</td></tr><tr><td>WGAN GP</td><td>-0.00175 ± 0.000876</td><td>-0.00438 ± 0.0000862</td><td>0.921 ± 0.0061</td><td>0.9924 ± 0.00089</td></tr><tr><td>LSGAN</td><td>-0.135 ± 0.0046</td><td>-0.136 ± 0.0074</td><td>0.631 ± 0.0106</td><td>0.6236 ± 0.0200</td></tr><tr><td>BEGAN</td><td>-0.1133 ± 0.042</td><td>-0.0893 ± 0.0095</td><td>0.429 ± 0.0148</td><td>0.4293 ± 0.0213</td></tr><tr><td>DRAGAN</td><td>-0.1638 ± 0.015</td><td>-0.1645 ± 0.0151</td><td>0.641 ± 0.0304</td><td>0.6311 ± 0.0547</td></tr></table> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 14: The participants are trained by selecting between random samples generated by GANs versus samples from data distribution. They get a positive reward if they selected the data samples and a negative reward if they select the samples from the model. After enough training, they choose the better group of samples among two randomly select set of samples. </center> ![](images/18_1.jpg) <center>Figure 15: The fraction of labels that agree for each pair, depending on the number of labels for each pair, presented as a histogram. By definition, if there is only one participant, that participant must agree with themselves. </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 16: Performance of W-DCGAN & LS-DCGAN with respect to number of filters. </center> ![](images/19_1.jpg) <center>Figure 17: W-DCGAN trained with different number of filters. </center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 18: Samples from different architectures of LS-DCGAN </center> ![](images/20_1.jpg) <center>Figure 19: Samples from different architectures of W-DCGAN. </center> ![](images/20_2.jpg) <center>Figure 20: The performance of GANs trained with different numbers of feature maps. </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 21: All pairs of generated image sets for which human perception and IW disagree, as in Figure 3. </center> <--- Page Split ---> ![](images/22_0.jpg) <center>Figure 22: All pairs of generated image sets for which human perception and LS disagree, as in Figure 3. </center> <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 23: All pairs of generated image sets for which human perception and IS disagree, as in Figure 3. </center> <--- Page Split ---> ![](images/24_0.jpg) <center>Figure 24: All pairs of generated image sets for which human perception and MMD disagree, as in Figure 3. </center> <--- Page Split ---> ![](images/25_0.jpg) <center>Figure 25: Performance of DCGAN, W-DCGAN, and LS-DCGAN trained with varying numbers of discriminator and generator updates. These models were trained on CIFAR10 dataset and evaluated with IW and LS metrics. </center> ![](images/25_1.jpg) <center>Figure 26: Performance of DCGAN, W-DCGAN, and LS-DCGAN trained with varying numbers of discriminator and generator updates. These models were trained on the MNIST dataset and evaluated with GC, LS, and IW metrics. </center> ![](images/25_2.jpg) <center>Figure 27: Samples at varying update ratios. 
</center> <--- Page Split ---> ![](images/26_0.jpg) <center>Figure 28: Performance of W-DCGAN & LS-DCGAN with respect to the number of data points. </center> ![](images/26_1.jpg) <center>Figure 29: Interpolation between the three final GAN parameters trained using different random seeds on CIFAR10. Loss surface values are amplified 10× in order to illustrate the separation of the terrains. Local zig-zag patterns are minor artifacts from rendering. </center> ![](images/26_2.jpg) <center>Figure 30: Scores from training GANs on the LSUN Bedroom dataset. </center> ![](images/26_3.jpg) <center>Figure 31: Training curves of the critics, showing that they converge. The IW distance curves in (a) increase because we used a linear output unit for the critic network (a design choice); this can be bounded simply by adding a sigmoid at the output of the critic network. </center> <--- Page Split ---> ![](images/27_0.jpg) <center>(a) GAN samples of the LSUN Bedroom dataset. </center> <--- Page Split ---> ![](images/28_0.jpg) <center>(a) LS-DCGAN samples of the LSUN Bedroom dataset. </center> <--- Page Split ---> ![](images/29_0.jpg) <center>(a) W-DCGAN samples of the LSUN Bedroom dataset. </center> <--- Page Split --->
accept
Accept (Poster)
6
ICLR_2018_paper_0588
iclr
2,018
# PROGRESSIVE GROWING OF GANS FOR IMPROVED QUALITY, STABILITY, AND VARIATION

Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
NVIDIA (Jaakko Lehtinen also with Aalto University)
{tkarras,taila,slaine,jlehtinen}@nvidia.com

## ABSTRACT We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CELEBA images at \(1024^{2}\) . We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CELEBA dataset. ## 1 INTRODUCTION Generative methods that produce novel samples from high-dimensional data distributions, such as images, are finding widespread use, for example in speech synthesis (van den Oord et al., 2016a), image-to-image translation (Zhu et al., 2017; Liu et al., 2017; Wang et al., 2017), and image inpainting (Iizuka et al., 2017). Currently the most prominent approaches are autoregressive models (van den Oord et al., 2016b;c), variational autoencoders (VAE) (Kingma & Welling, 2014), and generative adversarial networks (GAN) (Goodfellow et al., 2014). They all have significant strengths and weaknesses. Autoregressive models – such as PixelCNN – produce sharp images but are slow to evaluate and do not have a latent representation as they directly model the conditional distribution over pixels, potentially limiting their applicability. VAEs are easy to train but tend to produce blurry results due to restrictions in the model, although recent work is improving this (Kingma et al., 2016). GANs produce sharp images, albeit only in fairly small resolutions and with somewhat limited variation, and the training continues to be unstable despite recent progress (Salimans et al., 2016; Gulrajani et al., 2017; Berthelot et al., 2017; Kodali et al., 2017). Hybrid methods combine various strengths of the three, but so far lag behind GANs in image quality (Makhzani & Frey, 2017; Ulyanov et al., 2017; Dumoulin et al., 2016). Typically, a GAN consists of two networks: a generator and a discriminator (a.k.a. critic). The generator produces a sample, e.g., an image, from a latent code, and the distribution of these images should ideally be indistinguishable from the training distribution. Since it is generally infeasible to engineer a function that tells whether that is the case, a discriminator network is trained to do the assessment, and since networks are differentiable, we also get a gradient we can use to steer both networks in the right direction. Typically, the generator is of main interest – the discriminator is an adaptive loss function that gets discarded once the generator has been trained. There are multiple potential problems with this formulation.
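For reference, a minimal sketch of one alternating update in this standard formulation; it assumes PyTorch, a discriminator `D` that outputs one logit per image, and the commonly used non-saturating generator loss, all of which are illustrative choices rather than a fixed part of the formulation:

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, real, z_dim):
    # One alternating update of the GAN game: the discriminator learns to tell
    # real images from generated ones, and the generator learns to fool it.
    n = real.size(0)
    z = torch.randn(n, z_dim, 1, 1)
    fake = G(z)

    # Discriminator update (binary cross-entropy on real vs. generated).
    d_loss = (F.binary_cross_entropy_with_logits(D(real), torch.ones(n, 1))
              + F.binary_cross_entropy_with_logits(D(fake.detach()), torch.zeros(n, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update (non-saturating variant of the generator loss).
    g_loss = F.binary_cross_entropy_with_logits(D(fake), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```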
When we measure the distance between the training distribution and the generated distribution, the gradients can point to more or less random directions if the distributions do not have substantial overlap, i.e., are too easy to tell apart (Arjovsky & Bottou, 2017). Originally, Jensen-Shannon divergence was used as a distance metric (Goodfellow et al., 2014), and recently that formulation has been improved (Hjelm et al., 2017) and a number of more stable alternatives have been proposed, including least squares (Mao et al., 2016b), absolute deviation with margin (Zhao et al., 2017), and Wasserstein distance (Arjovsky et al., 2017; Gulrajani <--- Page Split ---> et al., 2017). Our contributions are largely orthogonal to this ongoing discussion, and we primarily use the improved Wasserstein loss, but also experiment with least-squares loss. The generation of high-resolution images is difficult because higher resolution makes it easier to tell the generated images apart from training images (Odena et al., 2017), thus drastically amplifying the gradient problem. Large resolutions also necessitate using smaller minibatches due to memory constraints, further compromising training stability. Our key insight is that we can grow both the generator and discriminator progressively, starting from easier low-resolution images, and add new layers that introduce higher-resolution details as the training progresses. This greatly speeds up training and improves stability in high resolutions, as we will discuss in Section 2. The GAN formulation does not explicitly require the entire training data distribution to be represented by the resulting generative model. The conventional wisdom has been that there is a tradeoff between image quality and variation, but that view has been recently challenged (Odena et al., 2017). The degree of preserved variation is currently receiving attention and various methods have been suggested for measuring it, including inception score (Salimans et al., 2016), multi-scale structural similarity (MS-SSIM) (Odena et al., 2017; Wang et al., 2003), birthday paradox (Arora & Zhang, 2017), and explicit tests for the number of discrete modes discovered (Metz et al., 2016). We will describe our method for encouraging variation in Section 3, and propose a new metric for evaluating the quality and variation in Section 5. Section 4.1 discusses a subtle modification to the initialization of networks, leading to a more balanced learning speed for different layers. Furthermore, we observe that mode collapses traditionally plaguing GANs tend to happen very quickly, over the course of a dozen minibatches. Commonly they start when the discriminator overshoots, leading to exaggerated gradients, and an unhealthy competition follows where the signal magnitudes escalate in both networks. We propose a mechanism to stop the generator from participating in such escalation, overcoming the issue (Section 4.2). We evaluate our contributions using the CELEBA, LSUN, and CIFAR10 datasets. We improve the best published inception score for CIFAR10. Since the datasets commonly used in benchmarking generative methods are limited to a fairly low resolution, we have also created a higher quality version of the CELEBA dataset that allows experimentation with output resolutions up to \(1024 \times 1024\) pixels.
This dataset and our full implementation are available at https://github.com/tkarras/progressive_growing_of_gans, trained networks can be found at https://drive.google.com/open?id=0B4qLCyYJmi2ONHFULTdyC051X0U along with result images, and a supplementary video illustrating the datasets, additional results, and latent space interpolations is at https://youtu.be/G06dEcZ-QTg. ## 2 PROGRESSIVE GROWING OF GANS Our primary contribution is a training methodology for GANs where we start with low-resolution images, and then progressively increase the resolution by adding layers to the networks as visualized in Figure 1. This incremental nature allows the training to first discover the large-scale structure of the image distribution and then shift attention to increasingly finer scale detail, instead of having to learn all scales simultaneously. We use generator and discriminator networks that are mirror images of each other and always grow in synchrony. All existing layers in both networks remain trainable throughout the training process. When new layers are added to the networks, we fade them in smoothly, as illustrated in Figure 2. This avoids sudden shocks to the already well-trained, smaller-resolution layers. Appendix A describes the structure of the generator and discriminator in detail, along with other training parameters. We observe that the progressive training has several benefits. Early on, the generation of smaller images is substantially more stable because there is less class information and fewer modes (Odena et al., 2017). By increasing the resolution little by little we are continuously asking a much simpler question compared to the end goal of discovering a mapping from latent vectors to e.g. \(1024^{2}\) images. This approach has conceptual similarity to recent work by Chen & Koltun (2017). In practice it stabilizes the training sufficiently for us to reliably synthesize megapixel-scale images using WGAN-GP loss (Gulrajani et al., 2017) and even LSGAN loss (Mao et al., 2016b). <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 1: Our training starts with both the generator (G) and discriminator (D) having a low spatial resolution of \(4 \times 4\) pixels. As the training advances, we incrementally add layers to G and D, thus increasing the spatial resolution of the generated images. All existing layers remain trainable throughout the process. Here \(\boxed {N\times N}\) refers to convolutional layers operating on \(N\times N\) spatial resolution. This allows stable synthesis in high resolutions and also speeds up training considerably. On the right we show six example images generated using progressive growing at \(1024\times 1024\) . </center> Another benefit is the reduced training time. With progressively growing GANs most of the iterations are done at lower resolutions, and comparable result quality is often obtained up to 2-6 times faster, depending on the final output resolution. The idea of growing GANs progressively is related to the work of Wang et al. (2017), who use multiple discriminators that operate on different spatial resolutions. That work in turn is motivated by Durugkar et al. (2016) who use one generator and multiple discriminators concurrently, and Ghosh et al. (2017) who do the opposite with multiple generators and one discriminator. Hierarchical GANs (Denton et al., 2015; Huang et al., 2016; Zhang et al., 2017) define a generator and discriminator for each level of an image pyramid.
These methods build on the same observation as our work – that the complex mapping from latents to high-resolution images is easier to learn in steps – but the crucial difference is that we have only a single GAN instead of a hierarchy of them. In contrast to early work on adaptively growing networks, e.g., growing neural gas (Fritzke, 1995) and neuroevolution of augmenting topologies (Stanley & Miikkulainen, 2002) that grow networks greedily, we simply defer the introduction of pre-configured layers. In that sense our approach resembles layer-wise training of autoencoders (Bengio et al., 2007). ## 3 INCREASING VARIATION USING MINIBATCH STANDARD DEVIATION GANs have a tendency to capture only a subset of the variation found in training data, and Salimans et al. (2016) suggest "minibatch discrimination" as a solution. They compute feature statistics not only from individual images but also across the minibatch, thus encouraging the minibatches of generated and training images to show similar statistics. This is implemented by adding a minibatch layer towards the end of the discriminator, where the layer learns a large tensor that projects the input activation to an array of statistics. A separate set of statistics is produced for each example in a minibatch and it is concatenated to the layer's output, so that the discriminator can use the statistics internally. We simplify this approach drastically while also improving the variation. Our simplified solution has neither learnable parameters nor new hyperparameters. We first compute the standard deviation for each feature in each spatial location over the minibatch. We then average these estimates over all features and spatial locations to arrive at a single value. We replicate the value and concatenate it to all spatial locations and over the minibatch, yielding one additional (constant) feature map (a minimal sketch of this layer is given at the end of Section 4). This layer could be inserted anywhere in the discriminator, but we have found it best to insert it towards the end (see Appendix A.1 for details). We experimented with a richer set of statistics, but were not able to improve the variation further. In parallel work, Lin et al. (2017) provide theoretical insights about the benefits of showing multiple images to the discriminator. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: When doubling the resolution of the generator (G) and discriminator (D) we fade in the new layers smoothly. This example illustrates the transition from \(16 \times 16\) images (a) to \(32 \times 32\) images (c). During the transition (b) we treat the layers that operate on the higher resolution like a residual block, whose weight \(\alpha\) increases linearly from 0 to 1. Here \(\boxed{2\times}\) and \(\boxed{0.5\times}\) refer to doubling and halving the image resolution using nearest neighbor filtering and average pooling, respectively. The \(\boxed{\mathrm{toRGB}}\) represents a layer that projects feature vectors to RGB colors and \(\boxed{\mathrm{fromRGB}}\) does the reverse; both use \(1 \times 1\) convolutions. When training the discriminator, we feed in real images that are downscaled to match the current resolution of the network. During a resolution transition, we interpolate between two resolutions of the real images, similarly to how the generator output combines two resolutions.
</center> Alternative solutions to the variation problem include unrolling the discriminator (Metz et al., 2016) to regularize its updates, and a "repelling regularizer" (Zhao et al., 2017) that adds a new loss term to the generator, trying to encourage it to orthogonalize the feature vectors in a minibatch. The multiple generators of Ghosh et al. (2017) also serve a similar goal. We acknowledge that these solutions may increase the variation even more than our solution – or possibly be orthogonal to it – but leave a detailed comparison to a later time. ## 4 NORMALIZATION IN GENERATOR AND DISCRIMINATOR GANs are prone to the escalation of signal magnitudes as a result of unhealthy competition between the two networks. Most if not all earlier solutions discourage this by using a variant of batch normalization (Ioffe & Szegedy, 2015; Salimans & Kingma, 2016; Ba et al., 2016) in the generator, and often also in the discriminator. These normalization methods were originally introduced to eliminate covariate shift. However, we have not observed that to be an issue in GANs, and thus believe that the actual need in GANs is constraining signal magnitudes and competition. We use a different approach that consists of two ingredients, neither of which include learnable parameters. ### 4.1 EQUALIZED LEARNING RATE We deviate from the current trend of careful weight initialization, and instead use a trivial \(\mathcal{N}(0,1)\) initialization and then explicitly scale the weights at runtime. To be precise, we set \(\hat{w}_i = w_i / c\) , where \(w_i\) are the weights and \(c\) is the per-layer normalization constant from He's initializer (He et al., 2015). The benefit of doing this dynamically instead of during initialization is somewhat subtle, and relates to the scale-invariance in commonly used adaptive stochastic gradient descent methods such as RMSProp (Tieleman & Hinton, 2012) and Adam (Kingma & Ba, 2015). These methods normalize a gradient update by its estimated standard deviation, thus making the update independent of the scale of the parameter. As a result, if some parameters have a larger dynamic range than others, they will take longer to adjust. This is a scenario modern initializers cause, and thus it is possible that a learning rate is both too large and too small at the same time. Our approach ensures that the dynamic range, and thus the learning speed, is the same for all weights. A similar reasoning was independently used by van Laarhoven (2017). <--- Page Split ---> ### 4.2 PIXELWISE FEATURE VECTOR NORMALIZATION IN GENERATOR To disallow the scenario where the magnitudes in the generator and discriminator spiral out of control as a result of competition, we normalize the feature vector in each pixel to unit length in the generator after each convolutional layer. We do this using a variant of "local response normalization" (Krizhevsky et al., 2012), configured as \(b_{x,y} = a_{x,y} / \sqrt{\frac{1}{N}\sum_{j = 0}^{N - 1}(a_{x,y}^{j})^{2} + \epsilon}\) , where \(\epsilon = 10^{-8}\) , \(N\) is the number of feature maps, and \(a_{x,y}\) and \(b_{x,y}\) are the original and normalized feature vector in pixel \((x,y)\) , respectively. We find it surprising that this heavy-handed constraint does not seem to harm the generator in any way, and indeed with most datasets it does not change the results much, but it prevents the escalation of signal magnitudes very effectively when needed.
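To make these parameter-free ingredients concrete, below is a minimal sketch of the minibatch standard deviation layer of Section 3, the runtime-scaled ("equalized") convolution of Section 4.1, and the pixelwise normalization of Section 4.2. It assumes PyTorch; shapes and names are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def minibatch_stddev(x):
    # Section 3: std of each feature at each spatial location over the minibatch,
    # averaged over all features and locations into one value, then replicated
    # and concatenated as a single extra (constant) feature map.
    n, _, h, w = x.shape
    mean_std = x.std(dim=0).mean()
    return torch.cat([x, mean_std.expand(n, 1, h, w)], dim=1)

class EqualizedConv2d(nn.Module):
    # Section 4.1: trivial N(0,1) weight init; He's per-layer constant is
    # applied at runtime, so the effective dynamic range (and thus learning
    # speed under Adam/RMSProp) is the same for all layers.
    def __init__(self, c_in, c_out, k, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(c_out, c_in, k, k))
        self.bias = nn.Parameter(torch.zeros(c_out))
        self.scale = (2.0 / (c_in * k * k)) ** 0.5  # He initializer constant
        self.padding = padding

    def forward(self, x):
        return F.conv2d(x, self.weight * self.scale, self.bias,
                        padding=self.padding)

def pixel_norm(x, eps=1e-8):
    # Section 4.2: normalize the feature vector in each pixel to unit length,
    # b = a / sqrt(mean_j(a_j^2) + eps), applied after each generator conv.
    return x / torch.sqrt((x ** 2).mean(dim=1, keepdim=True) + eps)
```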
## 5 MULTI-SCALE STATISTICAL SIMILARITY FOR ASSESSING GAN RESULTS In order to compare the results of one GAN to another, one needs to investigate a large number of images, which can be tedious, difficult, and subjective. Thus it is desirable to rely on automated methods that compute some indicative metric from large image collections. We noticed that existing methods such as MS-SSIM (Odena et al., 2017) find large-scale mode collapses reliably but fail to react to smaller effects such as loss of variation in colors or textures, and they also do not directly assess image quality in terms of similarity to the training set. We build on the intuition that a successful generator will produce samples whose local image structure is similar to the training set over all scales. We propose to study this by considering the multi-scale statistical similarity between distributions of local image patches drawn from Laplacian pyramid (Burt & Adelson, 1987) representations of generated and target images, starting at a low-pass resolution of \(16 \times 16\) pixels. As per standard practice, the pyramid progressively doubles until the full resolution is reached, each successive level encoding the difference to an up-sampled version of the previous level. A single Laplacian pyramid level corresponds to a specific spatial frequency band. We randomly sample 16384 images and extract 128 descriptors from each level in the Laplacian pyramid, giving us \(2^{21}\) (2.1M) descriptors per level. Each descriptor is a \(7 \times 7\) pixel neighborhood with 3 color channels, denoted by \(\mathbf{x} \in \mathbb{R}^{7 \times 7 \times 3} = \mathbb{R}^{147}\) . We denote the patches from level \(l\) of the training set and generated set as \(\{\mathbf{x}_i^l\}_{i = 1}^{2^{21}}\) and \(\{\mathbf{y}_i^l\}_{i = 1}^{2^{21}}\) , respectively. We first normalize \(\{\mathbf{x}_i^l\}\) and \(\{\mathbf{y}_i^l\}\) w.r.t. the mean and standard deviation of each color channel, and then estimate the statistical similarity by computing their sliced Wasserstein distance \(\mathrm{SWD}(\{\mathbf{x}_i^l\} ,\{\mathbf{y}_i^l\})\) , an efficiently computable randomized approximation to earth mover's distance, using 512 projections (Rabin et al., 2011). Intuitively a small Wasserstein distance indicates that the distribution of the patches is similar, meaning that the training images and generator samples appear similar in both appearance and variation at this spatial resolution. In particular, the distance between the patch sets extracted from the lowest-resolution \(16 \times 16\) images indicates similarity in large-scale image structures, while the finest-level patches encode information about pixel-level attributes such as sharpness of edges and noise. ## 6 EXPERIMENTS In this section we discuss a set of experiments that we conducted to evaluate the quality of our results. Please refer to Appendix A for a detailed description of our network structures and training configurations. We also invite the reader to consult the accompanying video (https://youtu.be/G06dEcZ-QTg) for additional result images and latent space interpolations. In this section we will distinguish between the network structure (e.g., convolutional layers, resizing), training configuration (various normalization layers, minibatch-related operations), and training loss (WGAN-GP, LSGAN).
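Before proceeding, a minimal sketch of the per-level SWD estimate described in Section 5; it assumes NumPy, equally sized descriptor sets, and patches that have already been extracted from the Laplacian pyramid and normalized per color channel:

```python
import numpy as np

def sliced_wasserstein(a, b, n_proj=512, seed=0):
    # Randomized approximation to the earth mover's distance between two sets
    # of patch descriptors a, b of shape [N, 147] (flattened 7x7x3 patches).
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_proj):
        d = rng.standard_normal(a.shape[1])
        d /= np.linalg.norm(d)                   # random unit direction
        pa, pb = np.sort(a @ d), np.sort(b @ d)  # sorting matches the 1-D CDFs
        dists.append(np.abs(pa - pb).mean())     # 1-D Wasserstein-1 distance
    return float(np.mean(dists))
```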
### 6.1 IMPORTANCE OF INDIVIDUAL CONTRIBUTIONS IN TERMS OF STATISTICAL SIMILARITY We will first use the sliced Wasserstein distance (SWD) and multi-scale structural similarity (MS-SSIM) (Odena et al., 2017) to evaluate the importance of our individual contributions, and also to perceptually validate the metrics themselves. We will do this by building on top of a previous state-of-the-art loss function (WGAN-GP) and training configuration (Gulrajani et al., 2017) in an unsupervised setting using the CELEBA (Liu et al., 2015) and LSUN BEDROOM (Yu et al., 2015) datasets in \(128^{2}\) <--- Page Split ---> <table><tr><td rowspan="3">Training configuration</td><td colspan="6">CELEBA</td><td colspan="6">LSUN BEDROOM</td></tr><tr><td colspan="5">Sliced Wasserstein distance × 10⁻³</td><td rowspan="2">MS-SSIM</td><td colspan="5">Sliced Wasserstein distance × 10⁻³</td><td rowspan="2">MS-SSIM</td></tr><tr><td>128</td><td>64</td><td>32</td><td>16</td><td>Avg</td><td>128</td><td>64</td><td>32</td><td>16</td><td>Avg</td></tr><tr><td>(a) Gulrajani et al. (2017)</td><td>12.99</td><td>7.79</td><td>7.62</td><td>8.73</td><td>9.28</td><td>0.2854</td><td>11.97</td><td>10.51</td><td>8.03</td><td>14.48</td><td>11.25</td><td>0.0587</td></tr><tr><td>(b) + Progressive growing</td><td>4.62</td><td>2.64</td><td>3.78</td><td>6.06</td><td>4.28</td><td>0.2838</td><td>7.09</td><td>6.27</td><td>7.40</td><td>9.64</td><td>7.60</td><td>0.0615</td></tr><tr><td>(c) + Small minibatch</td><td>75.42</td><td>41.33</td><td>41.62</td><td>26.57</td><td>46.23</td><td>0.4065</td><td>72.73</td><td>40.16</td><td>42.75</td><td>42.46</td><td>49.52</td><td>0.1061</td></tr><tr><td>(d) + Revised training parameters</td><td>9.20</td><td>6.53</td><td>4.71</td><td>11.84</td><td>8.07</td><td>0.3027</td><td>7.39</td><td>5.51</td><td>3.65</td><td>9.63</td><td>6.54</td><td>0.0662</td></tr><tr><td>(e*) Minibatch discrimination</td><td>10.76</td><td>6.28</td><td>6.04</td><td>16.29</td><td>9.84</td><td>0.3057</td><td>10.29</td><td>6.22</td><td>5.32</td><td>11.88</td><td>8.43</td><td>0.0648</td></tr><tr><td>(e) Minibatch stddev</td><td>13.94</td><td>5.67</td><td>2.82</td><td>5.71</td><td>7.04</td><td>0.2950</td><td>7.77</td><td>5.23</td><td>3.27</td><td>9.64</td><td>6.48</td><td>0.0671</td></tr><tr><td>(f) + Equalized learning rate</td><td>4.42</td><td>3.28</td><td>2.32</td><td>7.52</td><td>4.39</td><td>0.2902</td><td>3.61</td><td>3.32</td><td>2.71</td><td>6.44</td><td>4.02</td><td>0.0668</td></tr><tr><td>(g) + Pixelwise normalization</td><td>4.06</td><td>3.04</td><td>2.02</td><td>5.13</td><td>3.56</td><td>0.2845</td><td>3.89</td><td>3.05</td><td>3.24</td><td>5.87</td><td>4.01</td><td>0.0640</td></tr><tr><td>(h) Converged</td><td>2.42</td><td>2.17</td><td>2.24</td><td>4.99</td><td>2.96</td><td>0.2828</td><td>3.47</td><td>2.60</td><td>2.30</td><td>4.87</td><td>3.31</td><td>0.0636</td></tr></table> Table 1: Sliced Wasserstein distance (SWD) between the generated and training images (Section 5) and multi-scale structural similarity (MS-SSIM) among the generated images for several training setups at \(128 \times 128\) . For SWD, each column represents one level of the Laplacian pyramid, and the last one gives an average of the four distances. ![](images/5_0.jpg) <center>Figure 3: (a) – (g) CELEBA examples corresponding to rows in Table 1. These are intentionally non-converged. (h) Our converged result.
Notice that some images show aliasing and some are not sharp – this is a flaw of the dataset, which the model learns to replicate faithfully. </center> resolution. CELEBA is particularly well suited for such comparison because the training images contain noticeable artifacts (aliasing, compression, blur) that are difficult for the generator to reproduce faithfully. In this test we amplify the differences between training configurations by choosing a relatively low-capacity network structure (Appendix A.2) and terminating the training once the discriminator has been shown a total of 10M real images. As such the results are not fully converged. Table 1 lists the numerical values for SWD and MS-SSIM in several training configurations, where our individual contributions are cumulatively enabled one by one on top of the baseline (Gulrajani et al., 2017). The MS-SSIM numbers were averaged from 10000 pairs of generated images, and SWD was calculated as described in Section 5. Generated CELEBA images from these configurations are shown in Figure 3. Due to space constraints, the figure shows only a small number of examples for each row of the table, but a significantly broader set is available in Appendix H. Intuitively, a good evaluation metric should reward plausible images that exhibit plenty of variation in colors, textures, and viewpoints. However, this is not captured by MS-SSIM: we can immediately see that configuration (h) generates significantly better images than configuration (a), but MS-SSIM remains approximately unchanged because it measures only the variation between outputs, not similarity to the training set. SWD, on the other hand, does indicate a clear improvement. The first training configuration (a) corresponds to Gulrajani et al. (2017), featuring batch normalization in the generator, layer normalization in the discriminator, and a minibatch size of 64. (b) enables progressive growing of the networks, which results in sharper and more believable output images. SWD correctly finds the distribution of generated images to be more similar to the training set. Our primary goal is to enable high output resolutions, and this requires reducing the size of minibatches in order to stay within the available memory budget. We illustrate the ensuing challenges in (c), where we decrease the minibatch size from 64 to 16. The generated images are unnatural, which is clearly visible in both metrics. In (d), we stabilize the training process by adjusting the hyperparameters as well as by removing batch normalization and layer normalization (Appendix A.2). As an intermediate test (e*), we enable minibatch discrimination (Salimans et al., 2016), which somewhat surprisingly fails to improve any of the metrics, including MS-SSIM, which measures output variation. In contrast, our minibatch standard deviation (e) improves the average SWD scores and images. We then enable our remaining contributions in (f) and (g), leading to an overall improvement in SWD <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 4: Effect of progressive growing on training speed and convergence. The timings were measured on a single-GPU setup using NVIDIA Tesla P100. (a) Statistical similarity with respect to wall clock time for Gulrajani et al. (2017) using CELEBA at \(128 \times 128\) resolution. Each graph represents sliced Wasserstein distance on one level of the Laplacian pyramid, and the vertical line indicates the point where we stop the training in Table 1. (b) Same graph with progressive growing enabled.
The dashed vertical lines indicate points where we double the resolution of G and D. (c) Effect of progressive growing on the raw training speed in \(1024 \times 1024\) resolution. </center> and subjective visual quality. Finally, in (h) we use a non-crippled network and longer training – we feel the quality of the generated images is at least comparable to the best published results so far. ### 6.2 CONVERGENCE AND TRAINING SPEED Figure 4 illustrates the effect of progressive growing in terms of the SWD metric and raw image throughput. The first two plots correspond to the training configuration of Gulrajani et al. (2017) without and with progressive growing. We observe that the progressive variant offers two main benefits: it converges to a considerably better optimum and also reduces the total training time by about a factor of two. The improved convergence is explained by an implicit form of curriculum learning that is imposed by the gradually increasing network capacity. Without progressive growing, all layers of the generator and discriminator are tasked with simultaneously finding succinct intermediate representations for both the large-scale variation and the small-scale detail. With progressive growing, however, the existing low-resolution layers are likely to have already converged early on, so the networks are only tasked with refining the representations by increasingly smaller-scale effects as new layers are introduced. Indeed, we see in Figure 4(b) that the largest-scale statistical similarity curve (16) reaches its optimal value very quickly and remains consistent throughout the rest of the training. The smaller-scale curves (32, 64, 128) level off one by one as the resolution is increased, but the convergence of each curve is equally consistent. With non-progressive training in Figure 4(a), each scale of the SWD metric converges roughly in unison, as could be expected. The speedup from progressive growing increases as the output resolution grows. Figure 4(c) shows training progress, measured in number of real images shown to the discriminator, as a function of training time when the training progresses all the way to \(1024^{2}\) resolution. We see that progressive growing gains a significant head start because the networks are shallow and quick to evaluate at the beginning. Once the full resolution is reached, the image throughput is equal between the two methods. The plot shows that the progressive variant reaches approximately 6.4 million images in 96 hours, whereas it can be extrapolated that the non-progressive variant would take about 520 hours to reach the same point. In this case, progressive growing offers roughly a \(5.4 \times\) speedup. ### 6.3 HIGH-RESOLUTION IMAGE GENERATION USING CELEBA-HQ DATASET To meaningfully demonstrate our results at high output resolutions, we need a sufficiently varied high-quality dataset. However, virtually all publicly available datasets previously used in GAN literature are limited to relatively low resolutions ranging from \(32^{2}\) to \(480^{2}\) . To this end, we created a high-quality version of the CELEBA dataset consisting of 30000 of the images at \(1024 \times 1024\) resolution. We refer to Appendix C for further details about the generation of this dataset. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 5: \(1024 \times 1024\) images generated using the CELEBA-HQ dataset. See Appendix F for a larger set of results, and the accompanying video for latent space interpolations.
</center> ![](images/7_1.jpg) <center>Figure 6: Visual quality comparison in LSUN BEDROOM; pictures copied from the cited articles. </center> Our contributions allow us to deal with high output resolutions in a robust and efficient fashion. Figure 5 shows selected \(1024 \times 1024\) images produced by our network. While megapixel GAN results have been shown before on another dataset (Marchesi, 2017), our results are vastly more varied and of higher perceptual quality. Please refer to Appendix F for a larger set of result images as well as the nearest neighbors found from the training data. The accompanying video shows latent space interpolations and visualizes the progressive training. The interpolation works so that we first randomize a latent code for each frame (512 components sampled individually from \(\mathcal{N}(0,1)\) ), then blur the latents across time with a Gaussian ( \(\sigma = 45\) frames @ \(60\mathrm{Hz}\) ), and finally normalize each vector to lie on a hypersphere. We trained the network on 8 Tesla V100 GPUs for 4 days, after which we no longer observed qualitative differences between the results of consecutive training iterations. Our implementation used an adaptive minibatch size depending on the current output resolution so that the available memory budget was optimally utilized. In order to demonstrate that our contributions are largely orthogonal to the choice of a loss function, we also trained the same network using LSGAN loss instead of WGAN-GP loss. Figure 1 shows six examples of \(1024^{2}\) images produced using our method with LSGAN. Further details of this setup are given in Appendix B. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 7: Selection of \(256 \times 256\) images generated from different LSUN categories. </center> ### 6.4 LSUN RESULTS Figure 6 shows a purely visual comparison between our solution and earlier results in LSUN BEDROOM. Figure 7 gives selected examples from seven very different LSUN categories at \(256^{2}\) . A larger, non-curated set of results from all 30 LSUN categories is available in Appendix G, and the video demonstrates interpolations. We are not aware of earlier results in most of these categories, and while some categories work better than others, we feel that the overall quality is high. ### 6.5 CIFAR10 INCEPTION SCORES The best inception scores for CIFAR10 (10 categories of \(32 \times 32\) RGB images) we are aware of are 7.90 for unsupervised and 8.87 for label-conditioned setups (Grinblat et al., 2017). The large difference between the two numbers is primarily caused by "ghosts" that necessarily appear between classes in the unsupervised setting, while label conditioning can remove many such transitions. When all of our contributions are enabled, we get 8.80 in the unsupervised setting. Appendix D shows a representative set of generated images along with a more comprehensive list of results from earlier methods. The network and training setup were the same as for CELEBA, with progression limited to \(32 \times 32\) , of course. The only customization was to WGAN-GP's regularization term \(\mathbb{E}_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}\left[\left(\left\| \nabla_{\hat{\mathbf{x}}} D(\hat{\mathbf{x}}) \right\|_{2} - \gamma \right)^{2} / \gamma^{2}\right]\) . Gulrajani et al. (2017) used \(\gamma = 1.0\) , which corresponds to 1-Lipschitz, but we noticed that it is in fact significantly better to prefer fast transitions ( \(\gamma = 750\) ) to minimize the ghosts.
We have not tried this trick with other datasets. ## 7 DISCUSSION While the quality of our results is generally high compared to earlier work on GANs, and the training is stable in large resolutions, there is still a long way to go to true photorealism. Semantic sensibility and understanding of dataset-dependent constraints, such as certain objects being straight rather than curved, leave a lot to be desired. There is also room for improvement in the micro-structure of the images. That said, we feel that convincing realism may now be within reach, especially in CELEBA-HQ. <--- Page Split ---> ## 8 ACKNOWLEDGEMENTS We would like to thank Mikael Honkavaara, Tero Kuosmanen, and Timi Hietanen for the compute infrastructure, Dmitry Korobchenko and Richard Calderwood for efforts related to the CELEBA-HQ dataset, and Oskar Elek, Jacob Munkberg, and Jon Hasselgren for useful comments. ## REFERENCES Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017. Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017. Sanjeev Arora and Yi Zhang. Do GANs actually learn the distribution? An empirical study. CoRR, abs/1706.08224, 2017. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In B. Schölkopf, J. C. Platt, and T. Hoffman (eds.), NIPS, pp. 153-160, 2007. David Berthelot, Tom Schumm, and Luke Metz. BEGAN: Boundary equilibrium generative adversarial networks. CoRR, abs/1703.10717, 2017. Peter J. Burt and Edward H. Adelson. The Laplacian pyramid as a compact image code. In Readings in Computer Vision: Issues, Problems, Principles, and Paradigms, pp. 671-679, 1987. Qifeng Chen and Vladlen Koltun. Photographic image synthesis with cascaded refinement networks. CoRR, abs/1707.09405, 2017. Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard H. Hovy, and Aaron C. Courville. Calibrating energy-based generative adversarial networks. In ICLR, 2017. Emily L. Denton, Soumith Chintala, Arthur Szlam, and Robert Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. CoRR, abs/1506.05751, 2015. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. CoRR, abs/1606.00704, 2016. Ishan P. Durugkar, Ian Gemp, and Sridhar Mahadevan. Generative multi-adversarial networks. CoRR, abs/1611.01673, 2016. Bernd Fritzke. A growing neural gas network learns topologies. In G. Tesauro, D. S. Touretzky, and T. K. Leen (eds.), Advances in Neural Information Processing Systems 7, pp. 625-632, 1995. Arnab Ghosh, Viveka Kulharia, Vinay P. Namboodiri, Philip H. S. Torr, and Puneet Kumar Dokania. Multi-agent diverse generative adversarial networks. CoRR, abs/1704.02906, 2017. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Networks. In NIPS, 2014. Guillermo L. Grinblat, Lucas C. Uzal, and Pablo M. Granitto. Class-splitting generative adversarial networks. CoRR, abs/1709.07359, 2017. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. CoRR, abs/1704.00028, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. CoRR, abs/1502.01852, 2015.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NIPS, pp. 6626-6637, 2017. <--- Page Split --->
R. Devon Hjelm, Athul Paul Jacob, Tong Che, Kyunghyun Cho, and Yoshua Bengio. Boundary-seeking generative adversarial networks. CoRR, abs/1702.08431, 2017.
Xun Huang, Yixuan Li, Omid Poursaeed, John E. Hopcroft, and Serge J. Belongie. Stacked generative adversarial networks. CoRR, abs/1612.04357, 2016.
Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Trans. Graph., 36(4):107:1-107:14, 2017.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In NIPS, volume 29, pp. 4743-4751, 2016.
Naveen Kodali, Jacob D. Abernethy, James Hays, and Zsolt Kira. How to train your DRAGAN. CoRR, abs/1705.07215, 2017.
Dmitry Korobchenko and Marco Foco. Single image super-resolution using deep learning, 2017. URL https://gwt.nvidia.com/super-res/about. Machines Can See summit.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. CoRR, abs/1609.04802, 2016.
Zinan Lin, Ashish Khetan, Giulia Fanti, and Sewoong Oh. PacGAN: The power of two samples in generative adversarial networks. CoRR, abs/1712.04086, 2017.
Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. CoRR, abs/1703.00848, 2017.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
Alireza Makhzani and Brendan J. Frey. PixelGAN autoencoders. CoRR, abs/1706.00531, 2017.
Xiao-Jiao Mao, Chunhua Shen, and Yu-Bin Yang. Image restoration using convolutional auto-encoders with symmetric skip connections. CoRR, abs/1606.08921, 2016a.
Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Least squares generative adversarial networks. CoRR, abs/1611.04076, 2016b.
Marco Marchesi. Megapixel size image creation using generative adversarial networks. CoRR, abs/1706.00082, 2017.
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. CoRR, abs/1611.02163, 2016.
Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In ICML, 2017.
Julien Rabin, Gabriel Peyré, Julie Delon, and Marc Bernot. Wasserstein barycenter and its application to texture mixing. In Scale Space and Variational Methods in Computer Vision (SSVM), pp. 435-446, 2011.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks.
CoRR, abs/1511.06434, 2015. <--- Page Split --->
Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. CoRR, abs/1602.07868, 2016.
Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.
Kenneth O. Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99-127, 2002.
Tijmen Tieleman and Geoffrey E. Hinton. Lecture 6.5 - RMSProp. Technical report, COURSERA: Neural Networks for Machine Learning, 2012.
Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Adversarial generator-encoder networks. CoRR, abs/1704.02304, 2017.
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. CoRR, abs/1609.03499, 2016a.
Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML, pp. 1747-1756, 2016b.
Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with PixelCNN decoders. CoRR, abs/1606.05328, 2016c.
Twan van Laarhoven. L2 regularization versus batch and weight normalization. CoRR, abs/1706.05350, 2017.
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. CoRR, abs/1711.11585, 2017.
Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik. Multi-scale structural similarity for image quality assessment. In Proc. IEEE Asilomar Conf. on Signals, Systems, and Computers, pp. 1398-1402, 2003.
David Warde-Farley and Yoshua Bengio. Improving generative adversarial networks with denoising feature matching. In ICLR, 2017.
Jianwei Yang, Anitha Kannan, Dhruv Batra, and Devi Parikh. LR-GAN: Layered recursive generative adversarial networks for image generation. In ICLR, 2017.
Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. CoRR, abs/1506.03365, 2015.
Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, and Dimitris N. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017.
Junbo Jake Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. In ICLR, 2017.
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. CoRR, abs/1703.10593, 2017. <--- Page Split ---> Table 2: Generator and discriminator that we use with CELEBA-HQ to generate \(1024\times 1024\) images.
<table><tr><td>Generator</td><td>Act.</td><td>Output shape</td><td>Params</td></tr>
<tr><td>Latent vector</td><td>-</td><td>512 × 1 × 1</td><td>-</td></tr>
<tr><td>Conv 4 × 4</td><td>LReLU</td><td>512 × 4 × 4</td><td>4.2M</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 4 × 4</td><td>2.4M</td></tr>
<tr><td>Upsample</td><td>-</td><td>512 × 8 × 8</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 8 × 8</td><td>2.4M</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 8 × 8</td><td>2.4M</td></tr>
<tr><td>Upsample</td><td>-</td><td>512 × 16 × 16</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 16 × 16</td><td>2.4M</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 16 × 16</td><td>2.4M</td></tr>
<tr><td>Upsample</td><td>-</td><td>512 × 32 × 32</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 32 × 32</td><td>2.4M</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 32 × 32</td><td>2.4M</td></tr>
<tr><td>Upsample</td><td>-</td><td>512 × 64 × 64</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>256 × 64 × 64</td><td>1.2M</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>256 × 64 × 64</td><td>590k</td></tr>
<tr><td>Upsample</td><td>-</td><td>256 × 128 × 128</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>128 × 128 × 128</td><td>295k</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>128 × 128 × 128</td><td>148k</td></tr>
<tr><td>Upsample</td><td>-</td><td>128 × 256 × 256</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>64 × 256 × 256</td><td>74k</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>64 × 256 × 256</td><td>37k</td></tr>
<tr><td>Upsample</td><td>-</td><td>64 × 512 × 512</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>32 × 512 × 512</td><td>18k</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>32 × 512 × 512</td><td>9.2k</td></tr>
<tr><td>Upsample</td><td>-</td><td>32 × 1024 × 1024</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>16 × 1024 × 1024</td><td>4.6k</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>16 × 1024 × 1024</td><td>2.3k</td></tr>
<tr><td>Conv 1 × 1</td><td>linear</td><td>3 × 1024 × 1024</td><td>51</td></tr>
<tr><td>Total trainable parameters</td><td></td><td></td><td>23.1M</td></tr></table>
<table><tr><td>Discriminator</td><td>Act.</td><td>Output shape</td><td>Params</td></tr>
<tr><td>Input image</td><td>-</td><td>3 × 1024 × 1024</td><td>-</td></tr>
<tr><td>Conv 1 × 1</td><td>LReLU</td><td>16 × 1024 × 1024</td><td>64</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>16 × 1024 × 1024</td><td>2.3k</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>32 × 1024 × 1024</td><td>4.6k</td></tr>
<tr><td>Downsample</td><td>-</td><td>32 × 512 × 512</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>32 × 512 × 512</td><td>9.2k</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>64 × 512 × 512</td><td>18k</td></tr>
<tr><td>Downsample</td><td>-</td><td>64 × 256 × 256</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>64 × 256 × 256</td><td>37k</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>128 × 256 × 256</td><td>74k</td></tr>
<tr><td>Downsample</td><td>-</td><td>128 × 128 × 128</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>128 × 128 × 128</td><td>148k</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>256 × 128 × 128</td><td>295k</td></tr>
<tr><td>Downsample</td><td>-</td><td>256 × 64 × 64</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>256 × 64 × 64</td><td>590k</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 64 × 64</td><td>1.2M</td></tr>
<tr><td>Downsample</td><td>-</td><td>512 × 32 × 32</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 32 × 32</td><td>2.4M</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 32 × 32</td><td>2.4M</td></tr>
<tr><td>Downsample</td><td>-</td><td>512 × 16 × 16</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 16 × 16</td><td>2.4M</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 16 × 16</td><td>2.4M</td></tr>
<tr><td>Downsample</td><td>-</td><td>512 × 8 × 8</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 8 × 8</td><td>2.4M</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 8 × 8</td><td>2.4M</td></tr>
<tr><td>Downsample</td><td>-</td><td>512 × 4 × 4</td><td>-</td></tr>
<tr><td>Minibatch stddev</td><td>-</td><td>513 × 4 × 4</td><td>-</td></tr>
<tr><td>Conv 3 × 3</td><td>LReLU</td><td>512 × 4 × 4</td><td>2.4M</td></tr>
<tr><td>Conv 4 × 4</td><td>LReLU</td><td>512 × 1 × 1</td><td>4.2M</td></tr>
<tr><td>Fully-connected</td><td>linear</td><td>1 × 1 × 1</td><td>513</td></tr>
<tr><td>Total trainable parameters</td><td></td><td></td><td>23.1M</td></tr></table> ## A NETWORK STRUCTURE AND TRAINING CONFIGURATION ## A.1 \(1024\times 1024\) NETWORKS USED FOR CELEBA-HQ Table 2 shows network architectures of the full-resolution generator and discriminator that we use with the CELEBA-HQ dataset. Both networks consist mainly of replicated 3-layer blocks that we introduce one by one during the course of the training. The last Conv \(1\times 1\) layer of the generator corresponds to the toRGB block in Figure 2, and the first Conv \(1\times 1\) layer of the discriminator similarly corresponds to fromRGB. We start with \(4\times 4\) resolution and train the networks until we have shown the discriminator 800k real images in total. We then alternate between two phases: fade in the first 3-layer block during the next 800k images, stabilize the networks for 800k images, fade in the next 3-layer block during 800k images, etc. Our latent vectors correspond to random points on a 512-dimensional hypersphere, and we represent training and generated images in \([-1,1]\). We use leaky ReLU with leakiness 0.2 in all layers of both networks, except for the last layer, which uses linear activation. We do not employ batch normalization, layer normalization, or weight normalization in either network, but we perform pixelwise normalization of the feature vectors after each Conv \(3\times 3\) layer in the generator as described in Section 4.2. We initialize all bias parameters to zero and all weights according to the normal distribution with unit variance. However, we scale the weights with a layer-specific constant at runtime as described in Section 4.1. We inject the across-minibatch standard deviation as an additional feature map at \(4\times 4\) resolution toward the end of the discriminator as described in Section 3. The upsampling and downsampling operations in Table 2 correspond to \(2\times 2\) element replication and average pooling, respectively. We train the networks using Adam (Kingma & Ba, 2015) with \(\alpha = 0.001\), \(\beta_{1} = 0\), \(\beta_{2} = 0.99\), and \(\epsilon = 10^{-8}\). We do not use any learning rate decay or rampdown, but for visualizing generator output at any given point during the training, we use an exponential running average for the weights of the generator with decay 0.999.
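To make the above concrete, the following numpy sketch illustrates three of these ingredients: the runtime weight scaling of Section 4.1, the pixelwise feature normalization of Section 4.2, and the exponential running average of generator weights. All function names, the NCHW tensor layout, and the multiply-by-constant convention for weight scaling are our own assumptions rather than the authors' code.

```python
import numpy as np

def he_scale(weight_shape):
    # Per-layer constant from He's initializer: sqrt(2 / fan_in),
    # with fan_in = in_channels * kernel_h * kernel_w.
    fan_in = int(np.prod(weight_shape[1:]))
    return np.sqrt(2.0 / fan_in)

def scaled_weights(w):
    # Weights are stored as N(0,1) draws and scaled at runtime, so every
    # layer sees the same effective dynamic range under Adam/RMSProp.
    return w * he_scale(w.shape)

def pixel_norm(a, eps=1e-8):
    # Normalize the feature vector in each pixel to unit length;
    # a has shape (batch, channels, height, width).
    return a / np.sqrt(np.mean(a ** 2, axis=1, keepdims=True) + eps)

def ema_update(avg_params, params, decay=0.999):
    # Exponential running average of generator weights, used only for
    # visualizing generator output during training.
    return [decay * a + (1.0 - decay) * p for a, p in zip(avg_params, params)]
```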
We use a minibatch size of 16 for resolutions \(4^{2} - 128^{2}\) and then gradually decrease the size according to \(256^{2} \to 14\), \(512^{2} \to 6\), and \(1024^{2} \to 3\) to avoid exceeding the available memory budget. We use the WGAN-GP loss, but unlike Gulrajani et al. (2017), we alternate between optimizing the generator and discriminator on a per-minibatch basis, i.e., we set \(n_{\mathrm{critic}} = 1\). Additionally, we introduce a fourth term into the discriminator loss with an extremely <--- Page Split ---> small weight to keep the discriminator output from drifting too far away from zero. To be precise, we set \(L' = L + \epsilon_{\mathrm{drift}}\mathbb{E}_{x\sim \mathbb{P}_{r}}[D(x)^{2}]\), where \(\epsilon_{\mathrm{drift}} = 0.001\). ## A.2 OTHER NETWORKS Whenever we need to operate on a spatial resolution lower than \(1024 \times 1024\), we do that by leaving out an appropriate number of copies of the replicated 3-layer block in both networks. Furthermore, Section 6.1 uses a slightly lower-capacity version, where we halve the number of feature maps in Conv \(3 \times 3\) layers at the \(16 \times 16\) resolution, and divide by 4 in the subsequent resolutions. This leaves 32 feature maps in the last Conv \(3 \times 3\) layers. In Table 1 and Figure 4 we train each resolution for a total of 600k images instead of 800k, and also fade in new layers for the duration of 600k images. For the "Gulrajani et al. (2017)" case in Table 1, we follow their training configuration as closely as possible. In particular, we set \(\alpha = 0.0001\), \(\beta_{2} = 0.9\), \(n_{\mathrm{critic}} = 5\), \(\epsilon_{\mathrm{drift}} = 0\), and minibatch size 64. We disable progressive resolution, minibatch stddev, as well as weight scaling at runtime, and initialize all weights using He's initializer (He et al., 2015). Furthermore, we modify the generator by replacing LReLU with ReLU, linear activation with tanh in the last layer, and pixelwise normalization with batch normalization. In the discriminator, we add layer normalization to all Conv \(3 \times 3\) and Conv \(4 \times 4\) layers. For the latent vectors, we use 128 components sampled independently from the normal distribution. ## B LEAST-SQUARES GAN (LSGAN) AT \(1024 \times 1024\) We find that LSGAN is generally a less stable loss function than WGAN-GP, and it also has a tendency to lose some of the variation towards the end of long runs. Thus we prefer WGAN-GP, but have also produced high-resolution images by building on top of LSGAN. For example, the \(1024^{2}\) images in Figure 1 are LSGAN-based. On top of the techniques described in Sections 2-4, we need one additional hack with LSGAN that prevents the training from spiraling out of control when the dataset is too easy for the discriminator, and the discriminator gradients are at risk of becoming meaningless as a result. We adaptively increase the magnitude of multiplicative Gaussian noise in the discriminator as a function of the discriminator's output. The noise is applied to the input of each Conv \(3 \times 3\) and Conv \(4 \times 4\) layer. There is a long history of adding noise to the discriminator, and it is generally detrimental to image quality (Arjovsky et al., 2017); ideally one would never have to do it, which according to our tests is the case for WGAN-GP (Gulrajani et al., 2017).
The magnitude of noise is determined as \(0.2 \cdot \max (0, \hat{d}_{t} - 0.5)^{2}\), where \(\hat{d}_{t} = 0.1d + 0.9\hat{d}_{t - 1}\) is an exponential moving average of the discriminator output \(d\). The motivation behind this hack is that LSGAN becomes seriously unstable when \(d\) approaches (or exceeds) 1.0. ## C CELEBA-HQ DATASET In this section we describe the process we used to create the high-quality version of the CELEBA dataset, consisting of 30000 images in \(1024 \times 1024\) resolution. As a starting point, we took the collection of in-the-wild images included as a part of the original CELEBA dataset. These images are extremely varied in terms of resolution and visual quality, ranging all the way from \(43 \times 55\) to \(6732 \times 8984\) pixels. Some of them show crowds of several people whereas others focus on the face of a single person – often only a part of the face. Thus, we found it necessary to apply several image processing steps to ensure consistent quality and to center the images on the facial region. Our processing pipeline is illustrated in Figure 8. To improve the overall image quality, we preprocess each JPEG image using two pre-trained neural networks: a convolutional autoencoder trained to remove JPEG artifacts in natural images, similar in structure to the one proposed by Mao et al. (2016a), and an adversarially-trained 4x super-resolution network (Korobchenko & Foco, 2017) similar to Ledig et al. (2016). To handle cases where the facial region extends outside the image, we employ padding and filtering to extend the dimensions of the image as illustrated in Fig. 8(c-d). We then select an oriented crop rectangle based on the facial landmark annotations included in the <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 8: Creating the CELEBA-HQ dataset. We start with a JPEG image (a) from the CelebA in-the-wild dataset. We improve the visual quality (b,top) through JPEG artifact removal (b,middle) and 4x super-resolution (b,bottom). We then extend the image through mirror padding (c) and Gaussian filtering (d) to produce a visually pleasing depth-of-field effect. Finally, we use the facial landmark locations to select an appropriate crop region (e) and perform high-quality resampling to obtain the final image at \(1024\times 1024\) resolution (f). </center> original CELEBA dataset as follows: \[x' = e_{1} - e_{0}\] \[y' = \frac{1}{2} (e_{0} + e_{1}) - \frac{1}{2} (m_{0} + m_{1})\] \[c = \frac{1}{2} (e_{0} + e_{1}) - 0.1\cdot y'\] \[s = \max (4.0\cdot |x'|,\, 3.6\cdot |y'|)\] \[x = \mathrm{Normalize}(x' - \mathrm{Rotate90}(y'))\] \[y = \mathrm{Rotate90}(x)\] \(e_{0}\), \(e_{1}\), \(m_{0}\), and \(m_{1}\) represent the 2D pixel locations of the two eye landmarks and two mouth landmarks, respectively, \(c\) and \(s\) indicate the center and size of the desired crop rectangle, and \(x\) and \(y\) indicate its orientation. We constructed the above formulas empirically to ensure that the crop rectangle stays consistent in cases where the face is viewed from different angles. Once we have calculated the crop rectangle, we transform the rectangle to \(4096\times 4096\) pixels using bilinear filtering, and then scale it to \(1024\times 1024\) resolution using a box filter. We perform the above processing for all 202599 images in the dataset, analyze the resulting \(1024\times 1024\) images further to estimate the final image quality, sort the images accordingly, and discard all but the best 30000 images.
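A small numpy sketch of the crop formulas above follows; the function names and the 90-degree rotation convention are our assumptions, and \(|\cdot|\) is read as the Euclidean norm.

```python
import numpy as np

def rotate90(v):
    # Rotate a 2D vector by 90 degrees (assumed convention).
    return np.array([-v[1], v[0]])

def crop_rectangle(e0, e1, m0, m1):
    # e0, e1: eye landmarks; m0, m1: mouth landmarks (2D pixel coords).
    xp = e1 - e0
    yp = 0.5 * (e0 + e1) - 0.5 * (m0 + m1)
    c = 0.5 * (e0 + e1) - 0.1 * yp              # crop center
    s = max(4.0 * np.linalg.norm(xp),           # crop size
            3.6 * np.linalg.norm(yp))
    x = xp - rotate90(yp)
    x = x / np.linalg.norm(x)                   # orientation, unit length
    y = rotate90(x)
    return c, s, x, y
```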
We use a frequency-based quality metric that favors images whose power spectrum contains a broad range of frequencies and is approximately radially symmetric. This penalizes blurry images as well as images that have conspicuous directional features due to, e.g., visible halftoning patterns. We selected the cutoff point of 30000 images as a practical sweet spot between variation and image quality. ## D CIFAR10 RESULTS Figure 9 shows non-curated images generated in the unsupervised setting, and Table 3 compares against prior art in terms of inception scores. We report our scores in two different ways: 1) the highest score observed during training runs (here \(\pm\) refers to the standard deviation returned by the inception score calculator) and 2) the mean and standard deviation computed from the highest scores seen during training, starting from ten random initializations. Arguably the latter methodology is much more meaningful, as one can be lucky with individual runs (as we were). We did not use any kind of augmentation with this dataset. ## E MNIST-1K DISCRETE MODE TEST WITH CRIPPLED DISCRIMINATOR Metz et al. (2016) describe a setup where a generator synthesizes MNIST digits simultaneously to 3 color channels, the digits are classified using a pre-trained classifier (0.4% error rate in our case), and concatenated to form a number in \([0,999]\). They generate a total of 25,600 images and count how many of the discrete modes are covered. They also compute KL divergence as KL(histogram || uniform). <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 9: CIFAR10 images generated using a network that was trained unsupervised (no label conditioning), and achieves a record 8.80 inception score. </center> Table 3: CIFAR10 inception scores, higher is better.
<table><tr><td colspan="2">UNSUPERVISED</td><td colspan="2">LABEL CONDITIONED</td></tr>
<tr><td>Method</td><td>Inception score</td><td>Method</td><td>Inception score</td></tr>
<tr><td>ALI (Dumoulin et al., 2016)</td><td>—</td><td>DCGAN (Radford et al., 2015)</td><td>—</td></tr>
<tr><td>GMAN (Durugkar et al., 2016)</td><td>—</td><td>Improved GAN (Salimans et al., 2016)</td><td>—</td></tr>
<tr><td>Improved GAN (Salimans et al., 2016)</td><td>—</td><td>AC-GAN (Odena et al., 2017)</td><td>—</td></tr>
<tr><td>CEGAN-Ent-VI (Dai et al., 2017)</td><td>—</td><td>SGAN (Huang et al., 2016)</td><td>—</td></tr>
<tr><td>LR-GAN (Yang et al., 2017)</td><td>—</td><td>WGAN-GP (Gulrajani et al., 2017)</td><td>—</td></tr>
<tr><td>DFM (Warde-Farley &amp; Bengio, 2017)</td><td>—</td><td>Splitting GAN (Grinblat et al., 2017)</td><td>8.87</td></tr>
<tr><td>WGAN-GP (Gulrajani et al., 2017)</td><td>—</td><td></td><td></td></tr>
<tr><td>Splitting GAN (Grinblat et al., 2017)</td><td>7.90</td><td></td><td></td></tr>
<tr><td>Ours (best run)</td><td>8.80 ± 0.05</td><td></td><td></td></tr>
<tr><td>Ours (computed from 10 runs)</td><td>8.56 ± 0.06</td><td></td><td></td></tr></table>
Modern GAN implementations can trivially cover all modes at very low divergence (0.05 in our case), and thus Metz et al. specify a fairly low-capacity generator and two severely crippled discriminators ("K/2" has \(\sim 2000\) params and "K/4" only \(\sim 500\)) to tease out differences between training methodologies. Both of these networks use batch normalization. As shown in Table 4, using WGAN-GP loss with the networks specified by Metz et al.
covers many more modes than the original GAN loss, and even more than the unrolled original GAN with the smaller (K/4) discriminator. The KL divergence, which is arguably a more accurate metric than the raw count, behaves even more favorably. Replacing batch normalization with our normalization (equalized learning rate, pixelwise normalization) improves the result considerably, while also removing a few trainable parameters from the discriminators. The addition of a minibatch stddev layer further improves the scores, while restoring the discriminator capacity to within \(0.5\%\) of the original. Progression does not help much with these tiny images, but it does not hurt either. <--- Page Split --->
<table><tr><td colspan="2">Arch</td><td>GAN</td><td>+ unrolling</td><td>WGAN-GP</td><td>+ our norm</td><td>+ mb stddev</td><td>+ progression</td></tr>
<tr><td>K/4</td><td>#<br>KL</td><td>30.6 ± 20.7<br>5.99 ± 0.04</td><td>372.2 ± 20.7<br>4.66 ± 0.46</td><td>640.1 ± 136.3<br>1.97 ± 0.70</td><td>856.7 ± 50.4<br>1.10 ± 0.19</td><td>881.3 ± 39.2<br>1.09 ± 0.16</td><td>859.5 ± 36.2<br>1.05 ± 0.09</td></tr>
<tr><td>K/2</td><td>#<br>KL</td><td>628.0 ± 140.9<br>2.58 ± 0.75</td><td>817.4 ± 39.9<br>1.43 ± 0.12</td><td>772.4 ± 146.5<br>1.35 ± 0.55</td><td>886.6 ± 58.5<br>0.98 ± 0.33</td><td>918.3 ± 30.2<br>0.89 ± 0.21</td><td>919.8 ± 35.1<br>0.82 ± 0.13</td></tr></table>
Table 4: Results for MNIST discrete mode test using two tiny discriminators (K/4, K/2) defined by Metz et al. (2016). The number of covered modes (#) and KL divergence from a uniform distribution are given as an average \(\pm\) standard deviation over 8 random initializations. Higher is better for the number of modes, and lower is better for KL divergence. ## F ADDITIONAL CELEBA-HQ RESULTS Figure 10 shows the nearest neighbors found for our generated images. Figure 11 gives additional generated examples from CELEBA-HQ. We enabled mirror augmentation for all tests using CELEBA and CELEBA-HQ. In addition to the sliced Wasserstein distance (SWD), we also quote the recently introduced Fréchet Inception Distance (FID) (Heusel et al., 2017) computed from 50K images. ## G LSUN RESULTS Figures 12-17 show representative images generated for all 30 LSUN categories. A separate network was trained for each category using identical parameters. All categories were trained using 100k images, except for BEDROOM and DOG, which used all the available data. Since 100k images is a very limited amount of training data for most categories, we enabled mirror augmentation in these tests (but not for BEDROOM or DOG). ## H ADDITIONAL IMAGES FOR TABLE 1 Figure 18 shows larger collections of images corresponding to the non-converged setups in Table 1. The training time was intentionally limited to make the differences between various methods more visible. <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 10: Top: Our CELEBA-HQ results. Next five rows: Nearest neighbors found from the training data, based on feature-space distance. We used activations from five VGG layers, as suggested by Chen & Koltun (2017). Only the crop highlighted in the bottom-right image was used for comparison in order to exclude image background and focus the search on matching facial features. </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 11: Additional \(1024 \times 1024\) images generated using the CELEBA-HQ dataset. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) for levels 1024, ..., 16: 7.48, 7.24, 6.08, 3.51, 3.55, 3.02, 7.22, for which the average is 5.44.
Fréchet Inception Distance (FID) computed from 50K images was 7.30. See the video for latent space interpolations. </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 12: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 13: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 14: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/22_0.jpg) <center>Figure 15: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 16: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/24_0.jpg) <center>Figure 17: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/25_0.jpg) <center>Figure 18: A larger set of generated images corresponding to the non-converged setups in Table 1. </center> <--- Page Split --->
## ABSTRACT We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CELEBA images at \(1024^{2}\). We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CELEBA dataset. ## 1 INTRODUCTION Generative methods that produce novel samples from high-dimensional data distributions, such as images, are finding widespread use, for example in speech synthesis (van den Oord et al., 2016a), image-to-image translation (Zhu et al., 2017; Liu et al., 2017; Wang et al., 2017), and image inpainting (Iizuka et al., 2017). Currently the most prominent approaches are autoregressive models (van den Oord et al., 2016b;c), variational autoencoders (VAE) (Kingma & Welling, 2014), and generative adversarial networks (GAN) (Goodfellow et al., 2014). They all have significant strengths and weaknesses. Autoregressive models – such as PixelCNN – produce sharp images but are slow to evaluate and do not have a latent representation, as they directly model the conditional distribution over pixels, potentially limiting their applicability. VAEs are easy to train but tend to produce blurry results due to restrictions in the model, although recent work is improving this (Kingma et al., 2016). GANs produce sharp images, albeit only in fairly small resolutions and with somewhat limited variation, and the training continues to be unstable despite recent progress (Salimans et al., 2016; Gulrajani et al., 2017; Berthelot et al., 2017; Kodali et al., 2017). Hybrid methods combine various strengths of the three, but so far lag behind GANs in image quality (Makhzani & Frey, 2017; Ulyanov et al., 2017; Dumoulin et al., 2016). Typically, a GAN consists of two networks: a generator and a discriminator (a.k.a. critic). The generator produces a sample, e.g., an image, from a latent code, and the distribution of these images should ideally be indistinguishable from the training distribution. Since it is generally infeasible to engineer a function that tells whether that is the case, a discriminator network is trained to do the assessment, and since networks are differentiable, we also get a gradient we can use to steer both networks in the right direction. Typically, the generator is of main interest – the discriminator is an adaptive loss function that gets discarded once the generator has been trained. There are multiple potential problems with this formulation. When we measure the distance between the training distribution and the generated distribution, the gradients can point in more or less random directions if the distributions do not have substantial overlap, i.e., are too easy to tell apart (Arjovsky & Bottou, 2017).
Originally, Jensen-Shannon divergence was used as a distance metric (Goodfellow et al., 2014), and recently that formulation has been improved (Hjelm et al., 2017) and a number of more stable alternatives have been proposed, including least squares (Mao et al., 2016b), absolute deviation with margin (Zhao et al., 2017), and Wasserstein distance (Arjovsky et al., 2017; Gulrajani <--- Page Split ---> et al., 2017). Our contributions are largely orthogonal to this ongoing discussion, and we primarily use the improved Wasserstein loss, but also experiment with least-squares loss. The generation of high-resolution images is difficult because higher resolution makes it easier to tell the generated images apart from training images (Odena et al., 2017), thus drastically amplifying the gradient problem. Large resolutions also necessitate using smaller minibatches due to memory constraints, further compromising training stability. Our key insight is that we can grow both the generator and discriminator progressively, starting from easier low-resolution images, and add new layers that introduce higher-resolution details as the training progresses. This greatly speeds up training and improves stability in high resolutions, as we will discuss in Section 2. The GAN formulation does not explicitly require the entire training data distribution to be represented by the resulting generative model. The conventional wisdom has been that there is a tradeoff between image quality and variation, but that view has been recently challenged (Odena et al., 2017). The degree of preserved variation is currently receiving attention and various methods have been suggested for measuring it, including inception score (Salimans et al., 2016), multi-scale structural similarity (MS-SSIM) (Odena et al., 2017; Wang et al., 2003), birthday paradox (Arora & Zhang, 2017), and explicit tests for the number of discrete modes discovered (Metz et al., 2016). We will describe our method for encouraging variation in Section 3, and propose a new metric for evaluating the quality and variation in Section 5. Section 4.1 discusses a subtle modification to the initialization of networks, leading to a more balanced learning speed for different layers. Furthermore, we observe that mode collapses traditionally plaguing GANs tend to happen very quickly, over the course of a dozen minibatches. Commonly they start when the discriminator overshoots, leading to exaggerated gradients, and an unhealthy competition follows where the signal magnitudes escalate in both networks. We propose a mechanism to stop the generator from participating in such escalation, overcoming the issue (Section 4.2). We evaluate our contributions using the CELEBA, LSUN, and CIFAR10 datasets. We improve the best published inception score for CIFAR10. Since the datasets commonly used in benchmarking generative methods are limited to a fairly low resolution, we have also created a higher quality version of the CELEBA dataset that allows experimentation with output resolutions up to \(1024 \times 1024\) pixels. This dataset and our full implementation are available at https://github.com/tkarras/progressive_growing_of_gans, trained networks can be found at https://drive.google.com/open?id=0B4qLCyYJmi2ONHFULTdyC051X0U along with result images, and a supplementary video illustrating the datasets, additional results, and latent space interpolations is at https://youtu.be/G06dEcZ-QTg.
## 2 PROGRESSIVE GROWING OF GANs Our primary contribution is a training methodology for GANs where we start with low-resolution images, and then progressively increase the resolution by adding layers to the networks as visualized in Figure 1. This incremental nature allows the training to first discover the large-scale structure of the image distribution and then shift attention to increasingly finer-scale detail, instead of having to learn all scales simultaneously. We use generator and discriminator networks that are mirror images of each other and always grow in synchrony. All existing layers in both networks remain trainable throughout the training process. When new layers are added to the networks, we fade them in smoothly, as illustrated in Figure 2. This avoids sudden shocks to the already well-trained, smaller-resolution layers. Appendix A describes the structure of the generator and discriminator in detail, along with other training parameters. We observe that the progressive training has several benefits. Early on, the generation of smaller images is substantially more stable because there is less class information and fewer modes (Odena et al., 2017). By increasing the resolution little by little we are continuously asking a much simpler question compared to the end goal of discovering a mapping from latent vectors to e.g. \(1024^{2}\) images. This approach has conceptual similarity to recent work by Chen & Koltun (2017). In practice it stabilizes the training sufficiently for us to reliably synthesize megapixel-scale images using WGAN-GP loss (Gulrajani et al., 2017) and even LSGAN loss (Mao et al., 2016b). <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 1: Our training starts with both the generator (G) and discriminator (D) having a low spatial resolution of \(4 \times 4\) pixels. As the training advances, we incrementally add layers to G and D, thus increasing the spatial resolution of the generated images. All existing layers remain trainable throughout the process. Here \(\boxed {N\times N}\) refers to convolutional layers operating on \(N\times N\) spatial resolution. This allows stable synthesis in high resolutions and also speeds up training considerably. On the right we show six example images generated using progressive growing at \(1024\times 1024\). </center> Another benefit is the reduced training time. With progressively growing GANs most of the iterations are done at lower resolutions, and comparable result quality is often obtained up to 2-6 times faster, depending on the final output resolution. The idea of growing GANs progressively is related to the work of Wang et al. (2017), who use multiple discriminators that operate on different spatial resolutions. That work in turn is motivated by Durugkar et al. (2016), who use one generator and multiple discriminators concurrently, and Ghosh et al. (2017), who do the opposite with multiple generators and one discriminator. Hierarchical GANs (Denton et al., 2015; Huang et al., 2016; Zhang et al., 2017) define a generator and discriminator for each level of an image pyramid. These methods build on the same observation as our work – that the complex mapping from latents to high-resolution images is easier to learn in steps – but the crucial difference is that we have only a single GAN instead of a hierarchy of them.
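To illustrate the smooth fade-in, the generator-side blend of Figure 2 can be sketched in numpy as follows; the function signatures are our own, and the sketch assumes the nearest-neighbor upsampling convention described in the caption of Figure 2.

```python
import numpy as np

def upsample2x(img):
    # Nearest-neighbor 2x upsampling; img has shape (batch, ch, h, w).
    return img.repeat(2, axis=2).repeat(2, axis=3)

def faded_output(low_res_rgb, high_res_rgb, alpha):
    # During a resolution transition the new high-resolution branch is
    # treated like a residual block whose weight alpha grows linearly
    # from 0 to 1 over the course of the transition.
    return (1.0 - alpha) * upsample2x(low_res_rgb) + alpha * high_res_rgb
```

The discriminator applies the mirror-image blend: real images at the two resolutions are interpolated with the same \(\alpha\) so that both networks see consistent inputs during the transition.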
In contrast to early work on adaptively growing networks, e.g., growing neural gas (Fritzke, 1995) and neuroevolution of augmenting topologies (Stanley & Miikkulainen, 2002) that grow networks greedily, we simply defer the introduction of pre-configured layers. In that sense our approach resembles layer-wise training of autoencoders (Bengio et al., 2007). ## 3 INCREASING VARIATION USING MINIBATCH STANDARD DEVIATION GANs have a tendency to capture only a subset of the variation found in training data, and Salimans et al. (2016) suggest "minibatch discrimination" as a solution. They compute feature statistics not only from individual images but also across the minibatch, thus encouraging the minibatches of generated and training images to show similar statistics. This is implemented by adding a minibatch layer towards the end of the discriminator, where the layer learns a large tensor that projects the input activation to an array of statistics. A separate set of statistics is produced for each example in a minibatch and it is concatenated to the layer's output, so that the discriminator can use the statistics internally. We simplify this approach drastically while also improving the variation. Our simplified solution has neither learnable parameters nor new hyperparameters. We first compute the standard deviation for each feature in each spatial location over the minibatch. We then average these estimates over all features and spatial locations to arrive at a single value. We replicate the value and concatenate it to all spatial locations and over the minibatch, yielding one additional (constant) feature map. This layer could be inserted anywhere in the discriminator, but we have found it best to insert it towards the end (see Appendix A.1 for details). We experimented with a richer set of statistics, but were not able to improve the variation further. In parallel work, Lin et al. (2017) provide theoretical insights about the benefits of showing multiple images to the discriminator. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: When doubling the resolution of the generator (G) and discriminator (D) we fade in the new layers smoothly. This example illustrates the transition from \(16 \times 16\) images (a) to \(32 \times 32\) images (c). During the transition (b) we treat the layers that operate on the higher resolution like a residual block, whose weight \(\alpha\) increases linearly from 0 to 1. Here \(\boxed{2\times}\) and \(\boxed{0.5\times}\) refer to doubling and halving the image resolution using nearest neighbor filtering and average pooling, respectively. The \(\boxed{\mathrm{toRGB}}\) represents a layer that projects feature vectors to RGB colors and \(\boxed{\mathrm{fromRGB}}\) does the reverse; both use \(1 \times 1\) convolutions. When training the discriminator, we feed in real images that are downscaled to match the current resolution of the network. During a resolution transition, we interpolate between two resolutions of the real images, similarly to how the generator output combines two resolutions. </center> Alternative solutions to the variation problem include unrolling the discriminator (Metz et al., 2016) to regularize its updates, and a "repelling regularizer" (Zhao et al., 2017) that adds a new loss term to the generator, trying to encourage it to orthogonalize the feature vectors in a minibatch. The multiple generators of Ghosh et al. (2017) also serve a similar goal. A minimal sketch of our minibatch standard deviation layer is given below.
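The sketch uses numpy with hypothetical names and an NCHW layout; it operates on raw activations for illustration, whereas the real layer sits inside the discriminator and processes learned features.

```python
import numpy as np

def minibatch_stddev(x, eps=1e-8):
    # x: (batch, channels, height, width) activations.
    # 1) stddev of each feature at each spatial location over the batch,
    # 2) average over all features and spatial locations,
    # 3) replicate the scalar as one extra constant feature map.
    std = np.sqrt(x.var(axis=0) + eps)          # (C, H, W)
    mean_std = std.mean()                       # single scalar
    n, _, h, w = x.shape
    extra = np.full((n, 1, h, w), mean_std, dtype=x.dtype)
    return np.concatenate([x, extra], axis=1)
```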
We acknowledge that these alternative solutions may increase the variation even more than our solution – or possibly be orthogonal to it – but leave a detailed comparison to a later time. ## 4 NORMALIZATION IN GENERATOR AND DISCRIMINATOR GANs are prone to the escalation of signal magnitudes as a result of unhealthy competition between the two networks. Most if not all earlier solutions discourage this by using a variant of batch normalization (Ioffe & Szegedy, 2015; Salimans & Kingma, 2016; Ba et al., 2016) in the generator, and often also in the discriminator. These normalization methods were originally introduced to eliminate covariate shift. However, we have not observed that to be an issue in GANs, and thus believe that the actual need in GANs is constraining signal magnitudes and competition. We use a different approach that consists of two ingredients, neither of which includes learnable parameters. ### 4.1 EQUALIZED LEARNING RATE We deviate from the current trend of careful weight initialization, and instead use a trivial \(\mathcal{N}(0,1)\) initialization and then explicitly scale the weights at runtime. To be precise, we set \(\hat{w}_i = w_i / c\), where \(w_i\) are the weights and \(c\) is the per-layer normalization constant from He's initializer (He et al., 2015). The benefit of doing this dynamically instead of during initialization is somewhat subtle, and relates to the scale-invariance in commonly used adaptive stochastic gradient descent methods such as RMSProp (Tieleman & Hinton, 2012) and Adam (Kingma & Ba, 2015). These methods normalize a gradient update by its estimated standard deviation, thus making the update independent of the scale of the parameter. As a result, if some parameters have a larger dynamic range than others, they will take longer to adjust. This is a scenario that modern initializers cause, and thus it is possible that a learning rate is both too large and too small at the same time. Our approach ensures that the dynamic range, and thus the learning speed, is the same for all weights. A similar reasoning was independently used by van Laarhoven (2017). <--- Page Split ---> ### 4.2 PIXELWISE FEATURE VECTOR NORMALIZATION IN GENERATOR To disallow the scenario where the magnitudes in the generator and discriminator spiral out of control as a result of competition, we normalize the feature vector in each pixel to unit length in the generator after each convolutional layer. We do this using a variant of "local response normalization" (Krizhevsky et al., 2012), configured as \(b_{x,y} = a_{x,y} / \sqrt{\frac{1}{N}\sum_{j = 0}^{N - 1}(a_{x,y}^{j})^{2} + \epsilon}\), where \(\epsilon = 10^{- 8}\), \(N\) is the number of feature maps, and \(a_{x,y}\) and \(b_{x,y}\) are the original and normalized feature vector in pixel \((x,y)\), respectively. We find it surprising that this heavy-handed constraint does not seem to harm the generator in any way, and indeed with most datasets it does not change the results much, but it prevents the escalation of signal magnitudes very effectively when needed. ## 5 MULTI-SCALE STATISTICAL SIMILARITY FOR ASSESSING GAN RESULTS In order to compare the results of one GAN to another, one needs to investigate a large number of images, which can be tedious, difficult, and subjective. Thus it is desirable to rely on automated methods that compute some indicative metric from large image collections.
We noticed that existing methods such as MS-SSIM (Odena et al., 2017) find large-scale mode collapses reliably but fail to react to smaller effects such as loss of variation in colors or textures, and they also do not directly assess image quality in terms of similarity to the training set. We build on the intuition that a successful generator will produce samples whose local image structure is similar to the training set over all scales. We propose to study this by considering the multi-scale statistical similarity between distributions of local image patches drawn from Laplacian pyramid (Burt & Adelson, 1987) representations of generated and target images, starting at a low-pass resolution of \(16 \times 16\) pixels. As per standard practice, the pyramid progressively doubles until the full resolution is reached, each successive level encoding the difference to an up-sampled version of the previous level. A single Laplacian pyramid level corresponds to a specific spatial frequency band. We randomly sample 16384 images and extract 128 descriptors from each level in the Laplacian pyramid, giving us \(2^{21}\) (2.1M) descriptors per level. Each descriptor is a \(7 \times 7\) pixel neighborhood with 3 color channels, denoted by \(\mathbf{x} \in \mathbb{R}^{7 \times 7 \times 3} = \mathbb{R}^{147}\). We denote the patches from level \(l\) of the training set and generated set as \(\{\mathbf{x}_i^l\}_{i = 1}^{2^{21}}\) and \(\{\mathbf{y}_i^l\}_{i = 1}^{2^{21}}\), respectively. We first normalize \(\{\mathbf{x}_i^l\}\) and \(\{\mathbf{y}_i^l\}\) w.r.t. the mean and standard deviation of each color channel, and then estimate the statistical similarity by computing their sliced Wasserstein distance \(\mathrm{SWD}(\{\mathbf{x}_i^l\} ,\{\mathbf{y}_i^l\})\), an efficiently computable randomized approximation to the earth mover's distance, using 512 projections (Rabin et al., 2011). Intuitively, a small Wasserstein distance indicates that the distribution of the patches is similar, meaning that the training images and generator samples appear similar in both appearance and variation at this spatial resolution. In particular, the distance between the patch sets extracted from the lowest-resolution \(16 \times 16\) images indicates similarity in large-scale image structures, while the finest-level patches encode information about pixel-level attributes such as sharpness of edges and noise. ## 6 EXPERIMENTS In this section we discuss a set of experiments that we conducted to evaluate the quality of our results. Please refer to Appendix A for a detailed description of our network structures and training configurations. We also invite the reader to consult the accompanying video (https://youtu.be/G06dEcZ-QTg) for additional result images and latent space interpolations. In this section we will distinguish between the network structure (e.g., convolutional layers, resizing), training configuration (various normalization layers, minibatch-related operations), and training loss (WGAN-GP, LSGAN). ### 6.1 IMPORTANCE OF INDIVIDUAL CONTRIBUTIONS IN TERMS OF STATISTICAL SIMILARITY We will first use the sliced Wasserstein distance (SWD) and multi-scale structural similarity (MS-SSIM) (Odena et al., 2017) to evaluate the importance of our individual contributions, and also perceptually validate the metrics themselves.
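As a concrete illustration of the SWD computation described in Section 5, a minimal numpy sketch over two descriptor sets follows; the names are hypothetical, and the loop over projections is written for clarity rather than speed.

```python
import numpy as np

def sliced_wasserstein(a, b, n_projections=512, seed=0):
    # a, b: (n_patches, dim) descriptor sets with equally many patches,
    # e.g. dim = 147 for 7x7 patches with 3 color channels, already
    # normalized per color channel.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_projections):
        d = rng.standard_normal(a.shape[1])
        d /= np.linalg.norm(d)
        # The 1D Wasserstein distance between two equally sized point
        # sets is the mean absolute difference of the sorted projections.
        total += np.abs(np.sort(a @ d) - np.sort(b @ d)).mean()
    return total / n_projections
```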
We will do this by building on top of a previous state-of-the-art loss function (WGAN-GP) and training configuration (Gulrajani et al., 2017) in an unsupervised setting, using the CELEBA (Liu et al., 2015) and LSUN BEDROOM (Yu et al., 2015) datasets in \(128^{2}\) resolution. <--- Page Split --->
<table><tr><td rowspan="3">Training configuration</td><td colspan="6">CELEBA</td><td colspan="6">LSUN BEDROOM</td></tr>
<tr><td colspan="5">Sliced Wasserstein distance \(\times 10^{3}\)</td><td rowspan="2">MS-SSIM</td><td colspan="5">Sliced Wasserstein distance \(\times 10^{3}\)</td><td rowspan="2">MS-SSIM</td></tr>
<tr><td>128</td><td>64</td><td>32</td><td>16</td><td>Avg</td><td>128</td><td>64</td><td>32</td><td>16</td><td>Avg</td></tr>
<tr><td>(a) Gulrajani et al. (2017)</td><td>12.99</td><td>7.79</td><td>7.62</td><td>8.73</td><td>9.28</td><td>0.2854</td><td>11.97</td><td>10.51</td><td>8.03</td><td>14.48</td><td>11.25</td><td>0.0587</td></tr>
<tr><td>(b) + Progressive growing</td><td>4.62</td><td>2.64</td><td>3.78</td><td>6.06</td><td>4.28</td><td>0.2838</td><td>7.09</td><td>6.27</td><td>7.40</td><td>9.64</td><td>7.60</td><td>0.0615</td></tr>
<tr><td>(c) + Small minibatch</td><td>75.42</td><td>41.33</td><td>41.62</td><td>26.57</td><td>46.23</td><td>0.4065</td><td>72.73</td><td>40.16</td><td>42.75</td><td>42.46</td><td>49.52</td><td>0.1061</td></tr>
<tr><td>(d) + Revised training parameters</td><td>9.20</td><td>6.53</td><td>4.71</td><td>11.84</td><td>8.07</td><td>0.3027</td><td>7.39</td><td>5.51</td><td>3.65</td><td>9.63</td><td>6.54</td><td>0.0662</td></tr>
<tr><td>(e*) + Minibatch discrimination</td><td>10.76</td><td>6.28</td><td>6.04</td><td>16.29</td><td>9.84</td><td>0.3057</td><td>10.29</td><td>6.22</td><td>5.32</td><td>11.88</td><td>8.43</td><td>0.0648</td></tr>
<tr><td>(e) Minibatch stddev</td><td>13.94</td><td>5.67</td><td>2.82</td><td>5.71</td><td>7.04</td><td>0.2950</td><td>7.77</td><td>5.23</td><td>3.27</td><td>9.64</td><td>6.48</td><td>0.0671</td></tr>
<tr><td>(f) + Equalized learning rate</td><td>4.42</td><td>3.28</td><td>2.32</td><td>7.52</td><td>4.39</td><td>0.2902</td><td>3.61</td><td>3.32</td><td>2.71</td><td>6.44</td><td>4.02</td><td>0.0668</td></tr>
<tr><td>(g) + Pixelwise normalization</td><td>4.06</td><td>3.04</td><td>2.02</td><td>5.13</td><td>3.56</td><td>0.2845</td><td>3.89</td><td>3.05</td><td>3.24</td><td>5.87</td><td>4.01</td><td>0.0640</td></tr>
<tr><td>(h) Converged</td><td>2.42</td><td>2.17</td><td>2.24</td><td>4.99</td><td>2.96</td><td>0.2828</td><td>3.47</td><td>2.60</td><td>2.30</td><td>4.87</td><td>3.31</td><td>0.0636</td></tr></table>
Table 1: Sliced Wasserstein distance (SWD) between the generated and training images (Section 5) and multi-scale structural similarity (MS-SSIM) among the generated images for several training setups at \(128 \times 128\). For SWD, each column represents one level of the Laplacian pyramid, and the last one gives an average of the four distances. ![](images/5_0.jpg) <center>Figure 3: (a) – (g) CELEBA examples corresponding to rows in Table 1. These are intentionally non-converged. (h) Our converged result. Notice that some images show aliasing and some are not sharp – this is a flaw of the dataset, which the model learns to replicate faithfully. </center> CELEBA is particularly well suited for such comparison because the training images contain noticeable artifacts (aliasing, compression, blur) that are difficult for the generator to reproduce faithfully.
In this test we amplify the differences between training configurations by choosing a relatively low-capacity network structure (Appendix A.2) and terminating the training once the discriminator has been shown a total of 10M real images. As such the results are not fully converged. Table 1 lists the numerical values for SWD and MS-SSIM in several training configurations, where our individual contributions are cumulatively enabled one by one on top of the baseline (Gulrajani et al., 2017). The MS-SSIM numbers were averaged from 10000 pairs of generated images, and SWD was calculated as described in Section 5. Generated CELEBA images from these configurations are shown in Figure 3. Due to space constraints, the figure shows only a small number of examples for each row of the table, but a significantly broader set is available in Appendix H. Intuitively, a good evaluation metric should reward plausible images that exhibit plenty of variation in colors, textures, and viewpoints. However, this is not captured by MS-SSIM: we can immediately see that configuration (h) generates significantly better images than configuration (a), but MS-SSIM remains approximately unchanged because it measures only the variation between outputs, not similarity to the training set. SWD, on the other hand, does indicate a clear improvement. The first training configuration (a) corresponds to Gulrajani et al. (2017), featuring batch normalization in the generator, layer normalization in the discriminator, and a minibatch size of 64. (b) enables progressive growing of the networks, which results in sharper and more believable output images. SWD correctly finds the distribution of generated images to be more similar to the training set. Our primary goal is to enable high output resolutions, and this requires reducing the size of minibatches in order to stay within the available memory budget. We illustrate the ensuing challenges in (c), where we decrease the minibatch size from 64 to 16. The generated images are unnatural, which is clearly visible in both metrics. In (d), we stabilize the training process by adjusting the hyperparameters as well as by removing batch normalization and layer normalization (Appendix A.2). As an intermediate test (e*), we enable minibatch discrimination (Salimans et al., 2016), which somewhat surprisingly fails to improve any of the metrics, including MS-SSIM that measures output variation. In contrast, our minibatch standard deviation (e) improves the average SWD scores and images. We then enable our remaining contributions in (f) and (g), leading to an overall improvement in SWD and subjective visual quality. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 4: Effect of progressive growing on training speed and convergence. The timings were measured on a single-GPU setup using NVIDIA Tesla P100. (a) Statistical similarity with respect to wall clock time for Gulrajani et al. (2017) using CELEBA at \(128 \times 128\) resolution. Each graph represents sliced Wasserstein distance on one level of the Laplacian pyramid, and the vertical line indicates the point where we stop the training in Table 1. (b) Same graph with progressive growing enabled. The dashed vertical lines indicate points where we double the resolution of G and D. (c) Effect of progressive growing on the raw training speed in \(1024 \times 1024\) resolution. </center>
Finally, in (h) we use a non-crippled network and longer training; we feel the quality of the generated images is at least comparable to the best published results so far.

### 6.2 CONVERGENCE AND TRAINING SPEED

Figure 4 illustrates the effect of progressive growing in terms of the SWD metric and raw image throughput. The first two plots correspond to the training configuration of Gulrajani et al. (2017) without and with progressive growing. We observe that the progressive variant offers two main benefits: it converges to a considerably better optimum and also reduces the total training time by about a factor of two. The improved convergence is explained by an implicit form of curriculum learning that is imposed by the gradually increasing network capacity. Without progressive growing, all layers of the generator and discriminator are tasked with simultaneously finding succinct intermediate representations for both the large-scale variation and the small-scale detail. With progressive growing, however, the existing low-resolution layers are likely to have already converged early on, so the networks are only tasked with refining the representations by increasingly smaller-scale effects as new layers are introduced. Indeed, we see in Figure 4(b) that the largest-scale statistical similarity curve (16) reaches its optimal value very quickly and remains consistent throughout the rest of the training. The smaller-scale curves (32, 64, 128) level off one by one as the resolution is increased, but the convergence of each curve is equally consistent. With non-progressive training in Figure 4(a), each scale of the SWD metric converges roughly in unison, as could be expected.

The speedup from progressive growing increases as the output resolution grows. Figure 4(c) shows training progress, measured in the number of real images shown to the discriminator, as a function of training time when the training progresses all the way to \(1024^{2}\) resolution. We see that progressive growing gains a significant head start because the networks are shallow and quick to evaluate at the beginning. Once the full resolution is reached, the image throughput is equal between the two methods. The plot shows that the progressive variant reaches approximately 6.4 million images in 96 hours, whereas it can be extrapolated that the non-progressive variant would take about 520 hours to reach the same point. In this case, progressive growing offers roughly a \(5.4 \times\) speedup.

### 6.3 HIGH-RESOLUTION IMAGE GENERATION USING CELEBA-HQ DATASET

To meaningfully demonstrate our results at high output resolutions, we need a sufficiently varied high-quality dataset. However, virtually all publicly available datasets previously used in the GAN literature are limited to relatively low resolutions, ranging from \(32^{2}\) to \(480^{2}\). To this end, we created a high-quality version of the CELEBA dataset consisting of 30000 images at \(1024 \times 1024\) resolution. We refer to Appendix C for further details about the generation of this dataset.

<--- Page Split --->

![](images/7_0.jpg) <center>Figure 5: \(1024 \times 1024\) images generated using the CELEBA-HQ dataset. See Appendix F for a larger set of results, and the accompanying video for latent space interpolations.</center>

![](images/7_1.jpg) <center>Figure 6: Visual quality comparison in LSUN BEDROOM; pictures copied from the cited articles.</center>

Our contributions allow us to deal with high output resolutions in a robust and efficient fashion. Figure 5 shows selected \(1024 \times 1024\) images produced by our network. While megapixel GAN results have been shown before on another dataset (Marchesi, 2017), our results are vastly more varied and of higher perceptual quality. Please refer to Appendix F for a larger set of result images as well as the nearest neighbors found from the training data. The accompanying video shows latent space interpolations and visualizes the progressive training. The interpolation works as follows: we first randomize a latent code for each frame (512 components sampled individually from \(\mathcal{N}(0,1)\)), then blur the latents across time with a Gaussian filter (\(\sigma = 45\) frames @ \(60\mathrm{Hz}\)), and finally normalize each vector to lie on a hypersphere.
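A minimal sketch of that interpolation procedure; the use of SciPy's 1D Gaussian filter and the choice of \(\sqrt{512}\) as the hypersphere radius are our assumptions, as the text does not pin down either detail:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def video_latents(n_frames, dim=512, sigma=45.0, seed=0):
    """Smoothed latent trajectory for the interpolation video (Section 6.3).

    One latent per frame, Gaussian-blurred across time (sigma = 45 frames
    at 60 Hz), then renormalized so each vector lies on a hypersphere.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_frames, dim))
    z = gaussian_filter1d(z, sigma=sigma, axis=0)   # blur across time
    z *= np.sqrt(dim) / np.linalg.norm(z, axis=1, keepdims=True)
    return z
```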
We trained the network on 8 Tesla V100 GPUs for 4 days, after which we no longer observed qualitative differences between the results of consecutive training iterations. Our implementation used an adaptive minibatch size depending on the current output resolution so that the available memory budget was optimally utilized.

In order to demonstrate that our contributions are largely orthogonal to the choice of a loss function, we also trained the same network using the LSGAN loss instead of the WGAN-GP loss. Figure 1 shows six examples of \(1024^{2}\) images produced with LSGAN; further details of this setup are given in Appendix B.

<--- Page Split --->

![](images/8_0.jpg) <center>Figure 7: Selection of \(256 \times 256\) images generated from different LSUN categories.</center>

### 6.4 LSUN RESULTS

Figure 6 shows a purely visual comparison between our solution and earlier results in LSUN BEDROOM. Figure 7 gives selected examples from seven very different LSUN categories at \(256^{2}\). A larger, non-curated set of results from all 30 LSUN categories is available in Appendix G, and the video demonstrates interpolations. We are not aware of earlier results in most of these categories, and while some categories work better than others, we feel that the overall quality is high.

### 6.5 CIFAR10 INCEPTION SCORES

The best inception scores for CIFAR10 (10 categories of \(32 \times 32\) RGB images) we are aware of are 7.90 for unsupervised and 8.87 for label conditioned setups (Grinblat et al., 2017). The large difference between the two numbers is primarily caused by "ghosts" that necessarily appear between classes in the unsupervised setting, while label conditioning can remove many such transitions.

When all of our contributions are enabled, we get 8.80 in the unsupervised setting. Appendix D shows a representative set of generated images along with a more comprehensive list of results from earlier methods. The network and training setup were the same as for CELEBA, with progression limited to \(32 \times 32\), of course. The only customization was to the WGAN-GP regularization term \(\mathbb{E}_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}\left[\left(\| \nabla_{\hat{\mathbf{x}}} D(\hat{\mathbf{x}}) \|_{2} - \gamma\right)^{2} / \gamma^{2}\right]\). Gulrajani et al. (2017) used \(\gamma = 1.0\), which corresponds to 1-Lipschitz, but we noticed that it is in fact significantly better to prefer fast transitions (\(\gamma = 750\)) to minimize the ghosts. We have not tried this trick with other datasets.
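A minimal sketch of this generalized gradient penalty, assuming a PyTorch discriminator `D` over 4D image batches; the random interpolation between real and generated samples follows the usual WGAN-GP recipe, which this excerpt does not restate:

```python
import torch

def gradient_penalty(D, real, fake, gamma=750.0):
    """WGAN-GP regularizer with target slope gamma (Section 6.5).

    gamma = 1 recovers the 1-Lipschitz penalty of Gulrajani et al. (2017);
    larger gamma (e.g. 750 on CIFAR10) prefers fast transitions.
    """
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)
    slopes = grad.flatten(1).norm(2, dim=1)          # per-sample gradient norm
    return ((slopes - gamma) ** 2 / gamma ** 2).mean()
```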
## 7 DISCUSSION

While the quality of our results is generally high compared to earlier work on GANs, and the training is stable in large resolutions, there is still a long way to true photorealism. Semantic sensibility and understanding of dataset-dependent constraints, such as certain objects being straight rather than curved, leave a lot to be desired. There is also room for improvement in the micro-structure of the images. That said, we feel that convincing realism may now be within reach, especially in CELEBA-HQ.

<--- Page Split --->

## 8 ACKNOWLEDGEMENTS

We would like to thank Mikael Honkavaara, Tero Kuosmanen, and Timi Hietanen for the compute infrastructure; Dmitry Korobchenko and Richard Calderwood for efforts related to the CELEBA-HQ dataset; and Oskar Elek, Jacob Munkberg, and Jon Hasselgren for useful comments.

## A NETWORK STRUCTURE AND TRAINING CONFIGURATION

## A.1 \(1024 \times 1024\) NETWORKS USED FOR CELEBA-HQ

Table 2 shows the network architectures of the full-resolution generator and discriminator that we use with the CELEBA-HQ dataset. Both networks consist mainly of replicated 3-layer blocks that we introduce one by one during the course of the training. The last Conv \(1 \times 1\) layer of the generator corresponds to the toRGB block in Figure 2, and the first Conv \(1 \times 1\) layer of the discriminator similarly corresponds to fromRGB. We start with \(4 \times 4\) resolution and train the networks until we have shown the discriminator 800k real images in total. We then alternate between two phases: fade in the first 3-layer block during the next 800k images, stabilize the networks for 800k images, fade in the next 3-layer block during 800k images, etc.

Our latent vectors correspond to random points on a 512-dimensional hypersphere, and we represent training and generated images in [-1,1]. We use leaky ReLU with leakiness 0.2 in all layers of both networks, except for the last layer, which uses linear activation. We do not employ batch normalization, layer normalization, or weight normalization in either network, but we perform pixelwise normalization of the feature vectors after each Conv \(3 \times 3\) layer in the generator as described in Section 4.2. We initialize all bias parameters to zero and all weights according to the normal distribution with unit variance. However, we scale the weights with a layer-specific constant at runtime as described in Section 4.1. We inject the across-minibatch standard deviation as an additional feature map at \(4 \times 4\) resolution toward the end of the discriminator as described in Section 3. The upsampling and downsampling operations in Table 2 correspond to \(2 \times 2\) element replication and average pooling, respectively.

We train the networks using Adam (Kingma & Ba, 2015) with \(\alpha = 0.001\), \(\beta_{1} = 0\), \(\beta_{2} = 0.99\), and \(\epsilon = 10^{-8}\). We do not use any learning rate decay or rampdown, but for visualizing generator output at any given point during the training, we use an exponential running average for the weights of the generator with decay 0.999. We use a minibatch size of 16 for resolutions \(4^{2} - 128^{2}\) and then gradually decrease the size according to \(256^{2} \to 14\), \(512^{2} \to 6\), and \(1024^{2} \to 3\) to avoid exceeding the available memory budget. We use the WGAN-GP loss, but unlike Gulrajani et al. (2017), we alternate between optimizing the generator and discriminator on a per-minibatch basis, i.e., we set \(n_{\mathrm{critic}} = 1\). Additionally, we introduce a fourth term into the discriminator loss with an extremely small weight to keep the discriminator output from drifting too far away from zero. To be precise, we set \(L' = L + \epsilon_{\mathrm{drift}} \mathbb{E}_{x \sim \mathbb{P}_{r}}[D(x)^{2}]\), where \(\epsilon_{\mathrm{drift}} = 0.001\).
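A minimal sketch of how the resulting discriminator loss can be assembled; the gradient-penalty weight of 10 is our assumption, since this appendix does not restate it:

```python
def discriminator_loss(d_real, d_fake, gp, eps_drift=0.001, lambda_gp=10.0):
    """WGAN-GP critic loss with the fourth (drift) term of Appendix A.1.

    d_real, d_fake: discriminator outputs on real/generated minibatches
    (e.g. PyTorch tensors); gp: the gradient penalty term. The drift term
    eps_drift * E[D(x)^2] keeps the output from wandering far from zero.
    """
    wgan = d_fake.mean() - d_real.mean()        # the base critic loss L
    drift = eps_drift * (d_real ** 2).mean()    # epsilon_drift * E[D(x)^2]
    return wgan + lambda_gp * gp + drift
```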
## A.2 OTHER NETWORKS

Whenever we need to operate on a spatial resolution lower than \(1024 \times 1024\), we do that by leaving out an appropriate number of copies of the replicated 3-layer block in both networks. Furthermore, Section 6.1 uses a slightly lower-capacity version, where we halve the number of feature maps in Conv \(3 \times 3\) layers at the \(16 \times 16\) resolution, and divide by 4 in the subsequent resolutions. This leaves 32 feature maps for the last Conv \(3 \times 3\) layers. In Table 1 and Figure 4 we train each resolution for a total of 600k images instead of 800k, and also fade in new layers for the duration of 600k images.

For the "Gulrajani et al. (2017)" case in Table 1, we follow their training configuration as closely as possible. In particular, we set \(\alpha = 0.0001\), \(\beta_{2} = 0.9\), \(n_{\mathrm{critic}} = 5\), \(\epsilon_{\mathrm{drift}} = 0\), and minibatch size 64. We disable progressive resolution, minibatch stddev, as well as weight scaling at runtime, and initialize all weights using He's initializer (He et al., 2015). Furthermore, we modify the generator by replacing LReLU with ReLU, linear activation with tanh in the last layer, and pixelwise normalization with batch normalization. In the discriminator, we add layer normalization to all Conv \(3 \times 3\) and Conv \(4 \times 4\) layers. For the latent vectors, we use 128 components sampled independently from the normal distribution.

## B LEAST-SQUARES GAN (LSGAN) AT \(1024 \times 1024\)

We find that LSGAN is generally a less stable loss function than WGAN-GP, and it also has a tendency to lose some of the variation towards the end of long runs. Thus we prefer WGAN-GP, but have also produced high-resolution images by building on top of LSGAN. For example, the \(1024^{2}\) images in Figure 1 are LSGAN-based.

On top of the techniques described in Sections 2-4, LSGAN requires one additional hack that prevents the training from spiraling out of control when the dataset is too easy for the discriminator and the discriminator gradients are consequently at risk of becoming meaningless. We adaptively increase the magnitude of multiplicative Gaussian noise in the discriminator as a function of the discriminator's output. The noise is applied to the input of each Conv \(3 \times 3\) and Conv \(4 \times 4\) layer. There is a long history of adding noise to the discriminator, but it is generally detrimental to image quality (Arjovsky et al., 2017), and ideally one would never have to do it; according to our tests, this is the case for WGAN-GP (Gulrajani et al., 2017). The magnitude of the noise is determined as \(0.2 \cdot \max(0, \hat{d}_{t} - 0.5)^{2}\), where \(\hat{d}_{t} = 0.1 d + 0.9 \hat{d}_{t - 1}\) is an exponential moving average of the discriminator output \(d\). The motivation behind this hack is that LSGAN becomes seriously unstable when \(d\) approaches (or exceeds) 1.0.
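A minimal sketch of this adaptive noise magnitude; how the multiplicative noise is then drawn and applied to each layer input is left abstract here:

```python
class AdaptiveNoise:
    """Adaptive discriminator input noise for LSGAN training (Appendix B).

    Tracks d_hat = 0.1*d + 0.9*d_hat, an exponential moving average of the
    discriminator output, and returns the multiplicative Gaussian noise
    magnitude 0.2 * max(0, d_hat - 0.5)**2.
    """
    def __init__(self):
        self.d_hat = 0.0

    def magnitude(self, d):
        self.d_hat = 0.1 * d + 0.9 * self.d_hat
        return 0.2 * max(0.0, self.d_hat - 0.5) ** 2
```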
## C CELEBA-HQ DATASET

In this section we describe the process we used to create the high-quality version of the CELEBA dataset, consisting of 30000 images in \(1024 \times 1024\) resolution. As a starting point, we took the collection of in-the-wild images included as a part of the original CELEBA dataset. These images are extremely varied in terms of resolution and visual quality, ranging all the way from \(43 \times 55\) to \(6732 \times 8984\). Some of them show crowds of several people whereas others focus on the face of a single person, often only a part of the face. Thus, we found it necessary to apply several image processing steps to ensure consistent quality and to center the images on the facial region.

Our processing pipeline is illustrated in Figure 8. To improve the overall image quality, we preprocess each JPEG image using two pre-trained neural networks: a convolutional autoencoder trained to remove JPEG artifacts in natural images, similar in structure to the one proposed by Mao et al. (2016a), and an adversarially trained 4x super-resolution network (Korobchenko & Foco, 2017) similar to Ledig et al. (2016). To handle cases where the facial region extends outside the image, we employ padding and filtering to extend the dimensions of the image as illustrated in Fig. 8(c-d).

<--- Page Split --->

![](images/14_0.jpg) <center>Figure 8: Creating the CELEBA-HQ dataset. We start with a JPEG image (a) from the CelebA in-the-wild dataset. We improve the visual quality (b,top) through JPEG artifact removal (b,middle) and 4x super-resolution (b,bottom). We then extend the image through mirror padding (c) and Gaussian filtering (d) to produce a visually pleasing depth-of-field effect. Finally, we use the facial landmark locations to select an appropriate crop region (e) and perform high-quality resampling to obtain the final image at \(1024 \times 1024\) resolution (f).</center>

We then select an oriented crop rectangle based on the facial landmark annotations included in the original CELEBA dataset as follows:

\[
\begin{aligned}
x' &= e_{1} - e_{0} \\
y' &= \tfrac{1}{2}(e_{0} + e_{1}) - \tfrac{1}{2}(m_{0} + m_{1}) \\
c &= \tfrac{1}{2}(e_{0} + e_{1}) - 0.1 \cdot y' \\
s &= \max(4.0 \cdot |x'|,\; 3.6 \cdot |y'|) \\
x &= \mathrm{Normalize}(x' - \mathrm{Rotate90}(y')) \\
y &= \mathrm{Rotate90}(x)
\end{aligned}
\]

Here \(e_{0}\), \(e_{1}\), \(m_{0}\), and \(m_{1}\) represent the 2D pixel locations of the two eye landmarks and the two mouth landmarks, respectively, \(c\) and \(s\) indicate the center and size of the desired crop rectangle, and \(x\) and \(y\) indicate its orientation. We constructed the above formulas empirically to ensure that the crop rectangle stays consistent in cases where the face is viewed from different angles. Once we have calculated the crop rectangle, we transform it to \(4096 \times 4096\) pixels using bilinear filtering, and then scale it to \(1024 \times 1024\) resolution using a box filter.

We perform the above processing for all 202599 images in the dataset, analyze the resulting \(1024 \times 1024\) images further to estimate the final image quality, sort the images accordingly, and discard all but the best 30000 images. We use a frequency-based quality metric that favors images whose power spectrum contains a broad range of frequencies and is approximately radially symmetric. This penalizes blurry images as well as images that have conspicuous directional features due to, e.g., visible halftoning patterns. We selected the cutoff point of 30000 images as a practical sweet spot between variation and image quality, because it appeared to yield the best results.
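A minimal sketch of these crop-rectangle formulas; reading \(|\cdot|\) as Euclidean length and the direction of the 90-degree rotation are our assumptions:

```python
import numpy as np

def crop_rectangle(e0, e1, m0, m1):
    """Oriented crop rectangle from CELEBA facial landmarks (Appendix C).

    e0, e1: 2D pixel locations of the two eyes; m0, m1: mouth landmarks.
    Returns the center c, size s, and orientation axes x, y.
    """
    rotate90 = lambda v: np.array([-v[1], v[0]])     # counter-clockwise, assumed
    xp = e1 - e0
    yp = 0.5 * (e0 + e1) - 0.5 * (m0 + m1)
    c = 0.5 * (e0 + e1) - 0.1 * yp
    s = max(4.0 * np.linalg.norm(xp), 3.6 * np.linalg.norm(yp))
    x = xp - rotate90(yp)
    x = x / np.linalg.norm(x)                        # Normalize(.)
    y = rotate90(x)
    return c, s, x, y
```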
## D CIFAR10 RESULTS

Figure 9 shows non-curated images generated in the unsupervised setting, and Table 3 compares against prior art in terms of inception scores. We report our scores in two different ways: 1) the highest score observed during training runs (here \(\pm\) refers to the standard deviation returned by the inception score calculator) and 2) the mean and standard deviation computed from the highest scores seen during training, starting from ten random initializations. Arguably the latter methodology is much more meaningful, as one can be lucky with individual runs (as we were). We did not use any kind of augmentation with this dataset.

<--- Page Split --->

![](images/15_0.jpg) <center>Figure 9: CIFAR10 images generated using a network that was trained unsupervised (no label conditioning), and achieves a record 8.80 inception score.</center>

Table 3: CIFAR10 inception scores, higher is better.

<table><tr><td colspan="2">UNSUPERVISED</td><td colspan="2">LABEL CONDITIONED</td></tr><tr><td>Method</td><td>Inception score</td><td>Method</td><td>Inception score</td></tr><tr><td>ALI (Dumoulin et al., 2016)</td><td></td><td>DCGAN (Radford et al., 2015)</td><td></td></tr><tr><td>GMAN (Durugkar et al., 2016)</td><td></td><td>Improved GAN (Salimans et al., 2016)</td><td></td></tr><tr><td>Improved GAN (Salimans et al., 2016)</td><td></td><td>AC-GAN (Odena et al., 2017)</td><td></td></tr><tr><td>CEGAN-Ent-VI (Dai et al., 2017)</td><td></td><td>SGAN (Huang et al., 2016)</td><td></td></tr><tr><td>LR-GAN (Yang et al., 2017)</td><td></td><td>WGAN-GP (Gulrajani et al., 2017)</td><td></td></tr><tr><td>DFM (Warde-Farley &amp; Bengio, 2017)</td><td></td><td>Splitting GAN (Grinblat et al., 2017)</td><td>8.87</td></tr><tr><td>WGAN-GP (Gulrajani et al., 2017)</td><td></td><td></td><td></td></tr><tr><td>Splitting GAN (Grinblat et al., 2017)</td><td>7.90</td><td></td><td></td></tr><tr><td>Our (best run)</td><td>8.80 ± 0.05</td><td></td><td></td></tr><tr><td>Our (computed from 10 runs)</td><td>8.56 ± 0.06</td><td></td><td></td></tr></table>

## E MNIST-1K DISCRETE MODE TEST WITH CRIPPLED DISCRIMINATOR

Metz et al. (2016) describe a setup where a generator synthesizes MNIST digits simultaneously to 3 color channels, the digits are classified using a pre-trained classifier (0.4% error rate in our case), and concatenated to form a number in \([0, 999]\). They generate a total of 25,600 images and count how many of the discrete modes are covered. They also compute the KL divergence as KL(histogram || uniform). Modern GAN implementations can trivially cover all modes at very low divergence (0.05 in our case), and thus Metz et al. specify a fairly low-capacity generator and two severely crippled discriminators ("K/2" has \(\sim 2000\) parameters and "K/4" only about 500) to tease out differences between training methodologies. Both of these networks use batch normalization.

As shown in Table 4, using the WGAN-GP loss with the networks specified by Metz et al. covers many more modes than the original GAN loss, and even more than the unrolled original GAN with the smaller (K/4) discriminator. The KL divergence, which is arguably a more accurate metric than the raw count, acts even more favorably. Replacing batch normalization with our normalization (equalized learning rate, pixelwise normalization) improves the result considerably, while also removing a few trainable parameters from the discriminators. The addition of a minibatch stddev layer further improves the scores, while restoring the discriminator capacity to within \(0.5\%\) of the original. Progression does not help much with these tiny images, but it does not hurt either.
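A minimal sketch of the two metrics reported in Table 4, assuming the per-channel classifier outputs are available as integers:

```python
import numpy as np

def mode_coverage(digits):
    """MNIST-1K mode test metrics (Metz et al., 2016).

    digits: (N, 3) integer array of classified digits for the 3 color
    channels; each row is read as a number in [0, 999]. Returns the
    number of covered modes and KL(histogram || uniform).
    """
    numbers = digits[:, 0] * 100 + digits[:, 1] * 10 + digits[:, 2]
    counts = np.bincount(numbers, minlength=1000).astype(float)
    p = counts / counts.sum()
    covered = int((counts > 0).sum())
    nz = p > 0
    kl = float(np.sum(p[nz] * np.log(p[nz] * 1000.0)))  # vs. uniform 1/1000
    return covered, kl
```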
<--- Page Split --->

<table><tr><td>Arch</td><td></td><td>GAN</td><td>+ unrolling</td><td>WGAN-GP</td><td>+ our norm</td><td>+ mb stddev</td><td>+ progression</td></tr><tr><td rowspan="2">K/4</td><td>#</td><td>30.6 ± 20.7</td><td>372.2 ± 20.7</td><td>640.1 ± 136.3</td><td>856.7 ± 50.4</td><td>881.3 ± 39.2</td><td>859.5 ± 36.2</td></tr><tr><td>KL</td><td>5.99 ± 0.04</td><td>4.66 ± 0.46</td><td>1.97 ± 0.70</td><td>1.10 ± 0.19</td><td>1.09 ± 0.16</td><td>1.05 ± 0.09</td></tr><tr><td rowspan="2">K/2</td><td>#</td><td>628.0 ± 140.9</td><td>817.4 ± 39.9</td><td>772.4 ± 146.5</td><td>886.6 ± 58.5</td><td>918.3 ± 30.2</td><td>919.8 ± 35.1</td></tr><tr><td>KL</td><td>2.58 ± 0.75</td><td>1.43 ± 0.12</td><td>1.35 ± 0.55</td><td>0.98 ± 0.33</td><td>0.89 ± 0.21</td><td>0.82 ± 0.13</td></tr></table>

Table 4: Results for the MNIST discrete mode test using two tiny discriminators (K/4, K/2) defined by Metz et al. (2016). The number of covered modes (#) and the KL divergence from a uniform distribution are given as an average \(\pm\) standard deviation over 8 random initializations. Higher is better for the number of modes, and lower is better for KL divergence.

## F ADDITIONAL CELEBA-HQ RESULTS

Figure 10 shows the nearest neighbors found for our generated images. Figure 11 gives additional generated examples from CELEBA-HQ. We enabled mirror augmentation for all tests using CELEBA and CELEBA-HQ. In addition to the sliced Wasserstein distance (SWD), we also quote the recently introduced Fréchet Inception Distance (FID) (Heusel et al., 2017) computed from 50K images.

## G LSUN RESULTS

Figures 12–17 show representative images generated for all 30 LSUN categories. A separate network was trained for each category using identical parameters. All categories were trained using 100k images, except for BEDROOM and DOG, which used all the available data. Since 100k images is a very limited amount of training data for most categories, we enabled mirror augmentation in these tests (but not for BEDROOM or DOG).

## H ADDITIONAL IMAGES FOR TABLE 1

Figure 18 shows larger collections of images corresponding to the non-converged setups in Table 1. The training time was intentionally limited to make the differences between various methods more visible.

<--- Page Split --->

![](images/17_0.jpg) <center>Figure 10: Top: Our CELEBA-HQ results. Next five rows: Nearest neighbors found from the training data, based on feature-space distance. We used activations from five VGG layers, as suggested by Chen & Koltun (2017). Only the crop highlighted in the bottom-right image was used for comparison in order to exclude image background and focus the search on matching facial features.</center>

<--- Page Split --->

![](images/18_0.jpg) <center>Figure 11: Additional \(1024 \times 1024\) images generated using the CELEBA-HQ dataset. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) for levels 1024, ..., 16: 7.48, 7.24, 6.08, 3.51, 3.55, 3.02, 7.22, for which the average is 5.44. Fréchet Inception Distance (FID) computed from 50K images was 7.30. See the video for latent space interpolations.</center>

<--- Page Split --->

![](images/19_0.jpg) <center>Figure 12: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images.</center>

<--- Page Split --->

![](images/20_0.jpg) <center>Figure 13: Example images generated at \(256 \times 256\) from LSUN categories.
Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 14: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/22_0.jpg) <center>Figure 15: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 16: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/24_0.jpg) <center>Figure 17: Example images generated at \(256 \times 256\) from LSUN categories. Sliced Wasserstein Distance (SWD) \(\times 10^{3}\) is given for levels 256, 128, 64, 32 and 16, and the average is bolded. We also quote the Fréchet Inception Distance (FID) computed from 50K images. </center> <--- Page Split ---> ![](images/25_0.jpg) <center>Figure 18: A larger set of generated images corresponding to the non-converged setups in Table 1. </center> <--- Page Split --->
accept
Accept (Oral)
5.666667
ICLR_2018_paper_0627
iclr
2,018
# REINFORCEMENT AND IMITATION LEARNING FOR DIVERSE VISUOMOTOR SKILLS

Anonymous authors Paper under double-blind review

## ABSTRACT

We propose a general deep reinforcement learning method and apply it to robot manipulation tasks. Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, most of which were previously unsolved. We train visuomotor policies end-to-end to learn a direct mapping from RGB camera inputs to joint velocities. Our experiments indicate that our reinforcement and imitation approach can solve contact-rich robot manipulation tasks that neither the state-of-the-art reinforcement nor imitation learning method can solve alone. We also illustrate that these policies achieve zero-shot sim2real transfer when trained with large visual and dynamics variations.

## 1 INTRODUCTION

Recent deep reinforcement learning (RL) methods have performed very well in several challenging domains such as video games (Mnih et al., 2015) and Go (Silver et al., 2016). For robotics, RL in combination with powerful function approximators provides a general framework for designing sophisticated controllers that would be hard to handcraft otherwise. Yet, despite significant leaps in other domains, the application of deep RL to control and robotic manipulation has proven challenging. While there have been successful demonstrations of deep RL for manipulation (e.g. Nair et al. 2017; Popov et al. 2017) and also noteworthy applications on real robotic hardware (e.g. Levine et al. 2015; Yahya et al. 2016), there have been very few examples of learned controllers for sophisticated tasks even in simulation.

Robotics exhibits several unique challenges. These include the need to rely on multi-modal and partial observations from noisy sensors, such as cameras. At the same time, realistic tasks often come with a large degree of variation (visual appearance, position, shapes, etc.), posing significant generalization challenges. Training on real robotics hardware can be daunting due to constraints on the amount of training data that can be collected in reasonable time, which is typically much less than the millions of frames needed by modern algorithms. Safety considerations also play an important role, as does the difficulty of accessing information about the state of the environment (like the position of an object), e.g. to define a reward. Even in simulation, when perfect state information and large amounts of training data are available, exploration can be a significant challenge. This is partly due to the often high-dimensional and continuous action space, but also due to the difficulty of designing suitable reward functions.

In this paper, we present a general deep reinforcement learning method that addresses these issues and that can solve a wide range of robot arm manipulation tasks directly from pixels, most of which have not been solved previously. Our key insights are 1) to reduce the difficulty of exploration in continuous domains by leveraging a handful of human demonstrations; 2) to stabilize the learning of complex manipulation policies from vision with several new techniques; and 3) to improve generalization by increasing the diversity of the training conditions. As a result, the trained policies work well under significant variations of system dynamics, object appearances, task lengths, etc. We ground these policies in the real world, demonstrating zero-shot transfer from simulation to real hardware.
We develop a new method to combine imitation learning with reinforcement learning. Our method requires only a small number of human demonstrations to dramatically simplify the exploration problem. It uses demonstration data in two ways: first, it uses a hybrid reward that combines the sparse environment reward with an imitation reward based on Generative Adversarial Imitation Learning (Ho & Ermon, 2016), which produces more robust controllers; second, it uses demonstrations as a curriculum to initiate training episodes along demonstration trajectories, which helps the agent reach new states and solve longer tasks. As a result, it solves dexterous manipulation tasks that neither the state-of-the-art reinforcement learning nor imitation learning method can solve alone.

<--- Page Split --->

![](images/1_0.jpg) <center>Figure 1: Our proposal of a principled robot learning pipeline. We used 3D motion controllers to collect human demonstrations of a task. Our reinforcement and imitation learning model leveraged these demonstrations to facilitate learning in a simulated physics engine. We then performed sim2real transfer to deploy the learned visuomotor policy to a real robot.</center>

Previous RL-based robot manipulation policies (Nair et al., 2017; Popov et al., 2017) largely rely on low-level states as input, or use severely limited action spaces that ignore the arm and instead learn Cartesian control of a simple gripper. This limits the ability of these methods to represent and solve more complex tasks (e.g., manipulating arbitrary 3D objects) and to deploy in real environments where the privileged state information is unavailable. Our method learns an end-to-end visuomotor policy that maps RGB camera observations to joint space control over the full 9-DoF arm (6 arm joints plus 3 actuated fingers).

To sidestep the constraints of training on real hardware, we embrace the sim2real paradigm, which has recently shown promising results (James et al., 2017; Rusu et al., 2016a). Through the use of a physics engine and high-throughput RL algorithms, we can simulate parallel copies of a robot arm to perform millions of complex physical interactions in a contact-rich environment while eliminating the practical concerns of robot safety and system reset. Furthermore, during training we can exploit privileged information about the true system state with several new techniques, including learning policy and value in separate modalities, an object-centric GAIL discriminator, and auxiliary tasks for visual modules. These techniques stabilize and speed up policy learning from pixels. Finally, we diversify training conditions such as visual appearance as well as, e.g., the size and shape of objects. This improves both generalization with respect to different task conditions and transfer from simulation to reality.

To demonstrate our method, we use the same model and the same algorithm for visuomotor control of six diverse robot arm manipulation tasks. Combining reinforcement and imitation, our policies solve tasks that neither state-of-the-art reinforcement nor imitation learning alone can solve, and they outperform human demonstrations. Our approach sheds light on a principled deep visuomotor learning pipeline, illustrated in Fig. 1, from collecting real-world human demonstrations to learning in simulation, and back to real-world deployment via sim2real policy transfer.
## 2 RELATED WORK

Reinforcement learning methods have been extensively used with low-dimensional policy representations, such as movement primitives, to solve a variety of control problems both in simulation and in reality. Three classes of RL algorithms are currently dominant for continuous control problems: guided policy search methods (GPS; Levine & Koltun 2013), value-based methods such as the deterministic policy gradient (DPG; Silver et al. 2014; Lillicrap et al. 2016; Heess et al. 2015) or the normalized advantage function (NAF; Gu et al. 2016b) algorithm, and trust-region based policy gradient algorithms such as trust region policy optimization (TRPO) and proximal policy optimization (PPO).

<--- Page Split --->

TRPO (Schulman et al., 2015) and PPO (Schulman et al., 2017) hold appeal due to their robustness to hyperparameter settings as well as their scalability (Heess et al., 2017), but their lack of sample efficiency makes them unsuitable for training directly on robotics hardware. GPS (Levine & Koltun, 2013) has been used, e.g., by Levine et al. (2015) and Yahya et al. (2016) to learn visuomotor policies directly on real robotics hardware after a network pretraining phase. Gupta et al. (2016) and Kumar et al. (2016) use GPS for learning controllers for robotic hand models. Value-based methods have been employed, e.g., by Gu et al. (2016a), who use NAF to learn a door opening task directly on a robot, while Popov et al. (2017) demonstrate how to solve a stacking problem efficiently using a distributed variant of DPG.

The idea of using large-scale data collection for training visuomotor controllers has been the focus of Levine et al. (2016) and Pinto & Gupta (2015), who train a convolutional network to predict grasp success for diverse sets of objects using a large dataset with tens or hundreds of thousands of grasp attempts collected from multiple robots in a self-supervised setting. An alternative strategy for dealing with the data demand is to train in simulation and transfer the learned controller to real hardware, or to augment real-world training with synthetic data. Rusu et al. (2016b) learn simple visuomotor policies for a Jaco robot arm and transfer to reality using progressive networks (Rusu et al., 2016a). Viereck et al. (2017) minimize the reality gap by relying on depth. Tobin et al. (2017) use visual variations to learn robust object detectors that can transfer to reality; James et al. (2017) combine randomization with supervised learning. Bousmalis et al. (2017) augment the training with simulated data to learn grasp prediction of diverse shapes.

Suitable cost functions and exploration strategies for control problems are challenging to design, so demonstrations have long played an important role. Demonstrations can be used to initialize policies, design cost functions, guide exploration, augment the training data, or a combination of these. Cost functions can be derived from demonstrations either via tracking objectives (e.g. Gupta et al. 2016), via inverse RL (e.g. Boularias et al. 2011; Finn et al. 2016), or, as in our case, via adversarial learning (Ho & Ermon, 2016). When expert actions or expert policies are available, behavioral cloning or DAgger can be used (Rahmatizadeh et al. 2017; James et al. 2017; Duan et al. 2017). Alternatively, expert trajectories can be used as additional training data for off-policy algorithms such as DPG (e.g. Vecerik et al. 2017).
Most of these methods require observation and/or action spaces to be aligned between the robot and the demonstrations. Recently, methods for third-person imitation have been proposed (e.g. Sermanet et al. 2017; Liu et al. 2017; Finn et al. 2017).

Concurrently with our work, several papers have presented results on manipulation tasks. Rajeswaran et al. (2017) and Nair et al. (2017) both use human demonstrations to aid exploration. Nair et al. (2017) extend the DDPGfD algorithm (Vecerik et al., 2017) to learn a block stacking task on a position-controlled arm in simulation. Rajeswaran et al. (2017) use the demonstrations with a form of behavioral cloning and data augmentation to learn several complex manipulation tasks. In both cases, controllers observe a low-dimensional state space representation, and the methods inherently require aligned state and action spaces with the demonstrations. Pinto et al. (2017) and Peng et al. (2017) address the transfer from simulation to reality, focusing on randomizing visual appearance and robot dynamics, respectively. Peng et al. transfer a block-pushing policy operating from state features to a 7-DoF position-controlled Fetch robotics arm. Pinto et al. consider different tasks using visual input with end-effector position control.

## 3 MODEL

Our goal is to learn a deep visuomotor policy for robot manipulation tasks. The policy takes as input both an RGB camera observation and a proprioceptive feature that describes the joint positions and angular velocities. These two sensory modalities are also available on the real robot, enabling us to perform zero-shot policy transfer once trained in simulation. Fig. 2 provides an overview of our model. The deep visuomotor policy encodes the pixel observation with a convolutional network (CNN) and the proprioceptive feature with a multilayer perceptron (MLP). The features from these two modules are concatenated and passed to a recurrent LSTM layer before producing the joint velocities. The whole network is trained end-to-end. We start with a brief review of the basics of generative adversarial imitation learning (GAIL) and proximal policy optimization (PPO); our model builds upon these two methods for visuomotor skills.

<--- Page Split --->

![](images/3_0.jpg) <center>Figure 2: Model overview. The core of our model is the deep visuomotor policy, which takes the camera observation and the proprioceptive feature as input and produces the next joint velocities.</center>

### 3.1 BACKGROUND: GAIL AND PPO

Imitation learning (IL) is the problem of learning a behavior policy by mimicking a set of demonstrations. Here we assume that human demonstration is provided as a dataset of state-action pairs \(\mathcal{D} = \{(s_{i}, a_{i})\}_{i = 1 \dots N}\). Traditional IL methods cast it as a supervised learning problem, i.e., behavior cloning. These methods use maximum likelihood to train a parameterized policy \(\pi_{\theta}: \mathcal{S} \to \mathcal{A}\), where \(\mathcal{S}\) is the state space and \(\mathcal{A}\) is the action space, such that \(\theta^{*} = \arg\max_{\theta} \sum_{i = 1}^{N} \log \pi_{\theta}(a_{i} | s_{i})\). The behavior cloning approach works effectively in cases where demonstrations abound (Ross et al., 2011). However, as robot demonstrations can be costly and time-consuming to collect, we favor a method that can learn from a handful of demonstrations. GAIL (Ho & Ermon, 2016) makes more efficient use of demonstration data by allowing the agent to interact with the environment and learn from its own experiences.
Similar to Generative Adversarial Networks (Goodfellow et al., 2014), GAIL has two networks, a policy network \(\pi_{\theta}: \mathcal{S} \to \mathcal{A}\) and a discriminator network \(D_{\psi}: \mathcal{S} \times \mathcal{A} \to [0, 1]\). GAIL uses a min-max objective function similar to that of GANs:

\[\min_{\theta} \max_{\psi} \mathbb{E}_{\pi_{E}}[\log D_{\psi}(s, a)] + \mathbb{E}_{\pi_{\theta}}[\log(1 - D_{\psi}(s, a))], \quad (1)\]

where \(\pi_{E}\) denotes the expert policy that generated the demonstration trajectories. This learning objective encourages the policy \(\pi_{\theta}\) to have an occupancy measure close to that of the expert policy \(\pi_{E}\). In practice, we train \(\pi_{\theta}\) with policy gradient methods to maximize the discounted sum of the reward function \(r_{gail}(s_{t}, a_{t}) = -\log(1 - D_{\psi}(s_{t}, a_{t}))\), clipped at a maximum value of 10.

In continuous domains, trust region methods greatly stabilize policy training. The original GAIL model uses TRPO (Schulman et al., 2015) in the policy update steps. Recently, PPO (Schulman et al., 2017) was proposed as a simple and scalable approximation to TRPO. PPO relies only on first-order gradients and can be easily implemented with recurrent networks in a distributed setting (Heess et al., 2017). The key idea of PPO is to use the Kullback-Leibler (KL) divergence to dynamically change the coefficient of a regularization term, where the coefficient is adapted based on whether the previous policy update step violated the KL constraint. We use distributed PPO to perform data collection and synchronous gradient updates across many workers in parallel. We trained all our policies with 256 CPU workers, which brings a significant speedup in wall-clock time.

### 3.2 REINFORCEMENT AND IMITATION LEARNING MODEL

#### 3.2.1 HYBRID IL/RL REWARD

A common approach to guiding exploration is to engineer a shaping reward. Although reward shaping sometimes provides informative guidance for policy search, it has the well-known drawback of producing suboptimal behaviors (Ng et al., 1999). Hence, we use sparse piecewise constant rewards in this work. Training agents in continuous domains under sparse rewards is particularly challenging. Inspired by the reward augmentation introduced in Li et al. 2017 and Merel et al. 2017, we design a hybrid reward function that mixes the imitation reward \(r_{gail}\) with the sparse task reward \(r_{task}\):

\[r(s_{t}, a_{t}) = \lambda r_{gail}(s_{t}, a_{t}) + (1 - \lambda) r_{task}(s_{t}, a_{t}), \quad \lambda \in [0, 1]. \quad (2)\]

Maximizing this hybrid reward can be interpreted as simultaneous reinforcement and imitation learning, where the imitation reward encourages the policy to generate trajectories closer to the demonstration trajectories, and the task reward encourages the policy to achieve high returns on the task. Setting \(\lambda\) to either 0 or 1 reduces this method to the standard RL or GAIL setups. Our experiments suggest that with a balanced contribution of these two rewards, the agents can solve tasks that neither GAIL nor RL can solve alone. Further, the final agents achieved higher returns than the human demonstrations, owing to the exposure to task rewards.
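A minimal sketch of Eq. (2) together with the clipped GAIL reward from Section 3.1; the numerical floor inside the logarithm is our assumption:

```python
import math

def hybrid_reward(d_out, r_task, lam=0.5, clip=10.0):
    """Hybrid IL/RL reward of Eq. (2).

    d_out: discriminator output D_psi(s, a) in (0, 1);
    r_gail = -log(1 - D) is clipped at a maximum value of 10.
    lam = 0 reduces to pure RL, lam = 1 to pure GAIL.
    """
    r_gail = min(clip, -math.log(max(1e-8, 1.0 - d_out)))
    return lam * r_gail + (1.0 - lam) * r_task
```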
#### 3.2.2 LEVERAGING PHYSICAL STATES IN SIMULATION

The use of a simulated system provides us access to the underlying physical states. Even though such privileged information is unavailable on a real system, we can take advantage of it when training the policy in simulation. We propose four techniques that leverage the physical states in simulation to stabilize and accelerate learning: a demonstration curriculum, learning value from states, an object-centric discriminator, and auxiliary tasks.

Demonstration as a curriculum. The problem of exploration in continuous domains is exacerbated by the long duration of realistic tasks. Previous work indicates that shaping the distribution of start states towards states that the optimal policy tends to visit can greatly improve policy learning (Kakade & Langford, 2002; Popov et al., 2017). We alter the start state distribution with demonstration states. We build a curriculum that contains clusters of states from different stages of a task. For instance, we define three clusters for the pouring task: reaching the mug, grasping the mug, and pouring. For a training episode, with probability \(\epsilon\) we start it from a random initial state, and with probability \(1 - \epsilon\) we uniformly select a cluster and reset the episode to a demonstration state from that cluster (a code sketch follows at the end of this subsection). This is possible because our simulated system is fully characterized by the physical states.

Learning value functions from states. PPO uses a learnable value function \(V_{\phi}\) to estimate the advantage for the policy gradient. During training, each PPO worker executes the policy for \(K\) steps and uses the discounted sum of rewards and the value as an advantage function estimator \(\hat{A}_{t} = \sum_{i = 1}^{K} \gamma^{i - 1} r_{t + i} + \gamma^{K - 1} V_{\phi}(s_{t + K}) - V_{\phi}(s_{t})\), where \(\gamma\) is the discount factor. As the policy gradient relies on the value function to reduce variance, it is beneficial to accelerate learning of the value function. Rather than using pixel inputs as in the policy network, we take advantage of the low-level physical states (e.g., the positions and velocities of the 3D objects and the robot arm) to train the value function \(V_{\phi}\) with a smaller multilayer perceptron. We find that training the policy and the value function in two different modalities stabilizes training and reduces oscillation in the agent's performance. This technique has also been adopted by the concurrent work of Pinto et al. 2017.

Object-centric discriminator. Like the value function, the GAIL discriminator leverages the physical states to construct task-specific features as its input. In manipulation tasks, we find that object-centric representations (e.g., absolute and relative positions of the objects) provide the salient and relevant signals to the discriminator. The states of the robot arm, on the contrary, tend to make the discriminator too strong and stall the training of the policy. Inspired by the information hiding strategies used in locomotion domains (Heess et al., 2016; Merel et al., 2017), our discriminator takes only the object-centric features as input while masking out arm-related information.

State prediction auxiliary tasks. Auxiliary tasks have been shown to be effective in improving the learning efficiency and final performance of deep RL methods (Jaderberg et al., 2016). To facilitate learning visuomotor policies, we add a state prediction layer on top of the CNN module to predict the locations of objects from the camera observation. We use a fully-connected layer to regress the 3D coordinates of objects in the task. We train this auxiliary task by minimizing the \(\ell_{2}\) loss between the predicted and ground-truth object locations.
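A minimal sketch of the demonstration-curriculum reset rule referenced above; the value of \(\epsilon\) and the data layout are our assumptions:

```python
import random

def sample_start_state(demo_clusters, sample_random_state, eps=0.1):
    """Start-state sampling for a training episode (Section 3.2.2).

    demo_clusters: list of clusters, each a list of demonstration states
    from one stage of the task (e.g. reach / grasp / pour for pouring).
    With probability eps, start from a random initial state; otherwise
    pick a cluster uniformly and reset to one of its demonstration states.
    """
    if random.random() < eps:
        return sample_random_state()
    cluster = random.choice(demo_clusters)
    return random.choice(cluster)
```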
#### 3.2.3 SIM2REAL POLICY TRANSFER

We demonstrate policy transfer on a real-world Kinova Jaco robot arm. The simulation was manually aligned to generally match the visuals and dynamics: a Kinect camera was visually calibrated to match the position and orientation of the simulated camera, and the simulation's dynamics parameters were manually adjusted to match the dynamics of the real arm. Beyond this, rather than relying on professional calibration equipment, our approach to sim2real policy transfer relies on domain randomization of camera position and orientation (Tobin et al., 2017; James et al., 2017). In contrast to prior work, we do not create intermediate position goals using object position information in reality, but rather train an end-to-end, pixels-to-velocities, feedback control policy. In addition, to alleviate the issues caused by latency on the real robot, we also fine-tune our policies while subjecting them to action dropping. Detailed descriptions are available in Appendix B.

<--- Page Split --->

## 4 EXPERIMENTS

Here we demonstrate that our proposed approach offers a general framework for visuomotor policy learning. We evaluate the performance of our model on six manipulation tasks illustrated in Fig. 3. We provide more qualitative results in this video.

### 4.1 ENVIRONMENT SETUP

We use a Kinova Jaco arm that has 9 degrees of freedom: six arm joints and three actuated fingers. The robot arm interacts with a diverse set of objects on a tabletop. The visuomotor policy controls the robot using joint velocity commands, producing 9-dimensional continuous velocities in the range of \([-1, 1]\) at \(20\mathrm{Hz}\). The proprioceptive features consist of the positions and angular velocities of the arm joints and the fingers. We use a positioned camera to collect real-time RGB observations. The proprioceptive features and the camera observations are available in both the simulated and the real environments, which enables policy transfer.

We use the MuJoCo physics simulator (Todorov et al., 2012) as our training platform. We use a large variety of objects, from basic geometric shapes to procedurally generated 3D objects built as ensembles of primitive shapes. We increase the diversity of objects by randomizing various physical properties, including dimension, color, mass, friction, etc. We used a 3D motion controller called SpaceNavigator, which allows us to operate the robot arm with a position controller, to collect 30 episodes of demonstration for each task, recording the observations, actions, and physical states into a dataset. As each episode takes less than a minute to complete, demonstrating each task can be done within half an hour.

### 4.2 ROBOT ARM MANIPULATION TASKS

Fig. 3 shows visualizations of the six manipulation tasks in our experiments. The first column shows the six tasks in simulated environments, and the second column shows the real-world setup of the block lifting and stacking tasks. There are obvious visual discrepancies between the same task in simulation and reality. These six tasks exhibit learning challenges to varying degrees. The first three tasks use simple colored blocks, which allows us to easily construct the tasks for a real robot. We study sim2real policy transfer with the block lifting and stacking tasks in Sec. 4.4.

Block lifting. The goal is to grasp and lift a randomized block, allowing us to evaluate the model's robustness.
We vary several random factors, including the robot arm dynamics (friction and armature), lighting conditions, camera poses, background colors, as well as the properties of the block. Each episode starts with a new configuration, with these random factors uniformly drawn from a preset range.

Block stacking. The goal is to stack one block on top of the other block. Together with the block lifting task, this is evaluated in the sim2real transfer experiments.

Clearing blocks. This task aims at clearing a tabletop that has two blocks. One strategy to do this task using a single arm is to stack the blocks and pick up both together. This task requires more time and a more dexterous controller, introducing a significant challenge for exploration.

The next three tasks involve a large variety of procedurally generated 3D shapes, making them difficult to recreate in real environments. We use them to examine the model's ability to generalize across object variations in long and complex tasks.

<--- Page Split --->

![](images/6_0.jpg) <center>Figure 3: Visualizations of the six manipulation tasks in our experiments. The left column shows RGB images of all six tasks in the simulated environments. These images correspond to the actual pixel observations as input to the visuomotor policies. The right column shows the two tasks with color blocks on the real robot.</center>

Clearing tabletop. In this task, the goal is to clear a tabletop that has a box and a toy car. One strategy is to grasp the toy, put it into the box, and lift the box. Both the box and the toy car are randomly generated for each episode.

Pouring liquid. Modeling and reasoning about deformable objects and fluids is a long-standing challenge in the robotics community (Schenck & Fox, 2017). We design a pouring task where we use many small spheres to simulate liquid. The goal is to pour the "liquid" from one mug to the other container. This task is particularly challenging due to the dexterity required; even trained humans struggled to demonstrate the task with our 3D motion controller.

Order fulfillment. In this task, we randomly place a variable number of procedurally generated toy planes and cars on the table. The goal is to place all the planes into the green box and all the cars into the red box. This task requires the policy to generalize at an abstract level: it needs to recognize the object categories, perform successful grasps on diverse shapes, and handle tasks of variable length.

### 4.3 QUANTITATIVE EVALUATION

Our full model can solve all six tasks, with only occasional failures, using the same policy network, the same training algorithm, and a fixed set of hyperparameters. In contrast, neither reinforcement nor imitation alone can solve all tasks. We compare the full model with three baseline methods that correspond to degenerated versions of our model: RL, GAIL, and RL without the demonstration curriculum. These baselines use the same setup as the full model, except that we set \(\lambda = 0\) for RL and \(\lambda = 1\) for GAIL, while our model uses a balanced contribution of the hybrid reward, with \(\lambda = 0.5\). In the third baseline, all the training episodes start from random initial states rather than resetting to demonstration states, which corresponds to a standard RL setup. We report the mean episode returns as a function of the number of training iterations in Fig. 4. Our full model achieves the highest returns in all six tasks.
The only case where a baseline is on par with the full model is the block lifting task, where both the RL baseline and the full model achieve similar levels of performance. We hypothesize that this is due to the short length of the lifting task, where random exploration in RL is likely to reach the goal states without the aid of GAIL.

<--- Page Split --->

![](images/7_0.jpg) <center>Figure 4: Learning efficiency of our reinforcement and imitation model against baselines. The plots are averaged over 5 runs with different random seeds. All the policies use the same network architecture and the same hyperparameters (except \(\lambda\)).</center>

![](images/7_1.jpg) <center>Figure 5: Model analysis in the stacking task. On the left we investigate the impact on performance of removing each individual component from the full model. On the right we investigate the model's sensitivity to the hyperparameter \(\lambda\) that moderates the contribution of reinforcement and imitation.</center>

In the other five tasks, the full model outperforms both the reinforcement learning and imitation learning baselines by a large margin, demonstrating the effectiveness of combining reinforcement and imitation for learning complex tasks. Comparing the two variants of RL with and without using demonstrations as a curriculum, we see a pronounced effect of altering the start state distribution: RL from scratch leads to very slow learning progress, while initiating episodes along demonstration trajectories enables the agent to train on states from different stages of a task. As a result, the curriculum greatly reduces the burden of exploration and improves the learning efficiency. We also report the mean episode returns of human demonstrations in these figures. While demonstrations using the 3D motion controller are imperfect, especially for pouring (see video), the trained agents can surpass them by interacting with the environment.

Two findings are noteworthy. First, the RL agent learns faster than the full model in the table clearing task, but the full model eventually outperforms it. This is because the full model discovers a novel strategy, different from the strategy demonstrated by human operators (see video). In this case, imitation gave contradictory signals, but reinforcement learning eventually guided the policy towards a better strategy. Second, pouring liquid is the only task where GAIL outperforms its RL counterpart. Imitation can effectively shape the agent's behaviors towards the demonstration trajectories (Wang et al., 2017). This is a viable solution for the pouring task, where a controller that generates similar-looking behaviors can complete the task. In contact-rich domains, however, a controller learned solely from dozens of demonstrations would struggle to handle complex object dynamics and to infer the true task goal. We hypothesize that this is why the baseline RL agent outperforms the GAIL agent in the other five tasks.

We further perform an ablation study in the block stacking task to understand the impact of different components of our model. In Fig. 5a, we trained our agents with a number of configurations, each with a single modification to the full model. We see that the final performances of the experiments cluster into two groups: agents that learn to stack (with average returns greater than 400) and agents that only learn to lift (with average returns between 200 and 300).
These results indicate that the hybrid RL/IL reward, learning the value function from states, and the object-centric discriminator play an integral role in learning good policies: using the RL or GAIL reward alone, learning the value function from pixels, or removing the information hiding from the discriminator input (no discriminator mask) all result in inferior performance. In contrast, the recurrent policy core (LSTM), the state prediction auxiliary tasks, and the inclusion of actions in the discriminator input prove optional; agents with these components modified still learn to stack. We then examine the model's sensitivity to the \(\lambda\) value in Eq. 2. We see in Fig. 5b that our model works well with a broad range of \(\lambda\) values from 0.3 to 0.7, which provide a balanced mix of the RL and GAIL rewards. ### 4.4 SIM2REAL POLICY TRANSFER RESULTS To assess the robustness of the simulation-trained policy, we evaluate zero-shot transfer (no additional training) on a real Jaco arm. Given a real-world setup that largely mirrored the simulated domain, including camera positions, robot kinematics, and approximate object size and color, we ran the trained network policy and counted the number of successful trials for both the lifting and stacking tasks. Although the sim and real domains were similar, there was still a sizable reality gap that made zero-shot transfer challenging. For example, the objects were non-rigid foam blocks which deformed and bounced unpredictably. The arm position was randomly initialized and the target block(s) placed in a number of repeatable start configurations for each task. The zero-shot transfer of the lifting policy had a success rate of \(64\%\) over 25 trials (split between 5 block configurations). The stacking policy had a success rate of \(35\%\) over 20 trials (split between 2 block configurations). However, \(80\%\) of the stacking trajectories contain successful lifting behavior. Qualitatively, the policies are notably robust even on failed attempts: rather than exhibiting "open-loop" behaviors such as attempting to stack a non-existent block, the policy repeatedly chases the block to get a successful grasp before trying to stack (see video). For more detailed descriptions of the sim2real results, refer to Appendix B. ## 5 CONCLUSION We have shown that combining reinforcement and imitation learning considerably improves the agents' ability to solve challenging dexterous manipulation tasks from pixels. Our proposed method sheds light on the three stages of a principled pipeline for robot skill learning: first, we collected a small amount of demonstration data to simplify the exploration problem; second, we relied on physical simulation to perform large-scale distributed robot training; and third, we performed sim2real transfer for real-world deployment. In future work, we seek to improve the sample efficiency of the learning method and to leverage real-world experience to close the reality gap for policy transfer. ## REFERENCES Abdeslam Boularias, Jens Kober, and Jan Peters. Relative entropy inverse reinforcement learning. In JMLR Workshop and Conference Proceedings Volume 15: AISTATS 2011, pp. 182–189, Cambridge, MA, USA, April 2011. MIT Press. Konstantinos Bousmalis, Alex Irpan, Paul Wohlhart, Yunfei Bai, Matthew Kelcey, Mrinal Kalakrishnan, Laura Downs, Julian Ibarz, Peter Pastor, Kurt Konolige, Sergey Levine, and Vincent Vanhoucke. Using simulation and domain adaptation to improve efficiency of deep robotic grasping. arXiv e-prints, 2017.
Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. CoRR, abs/1703.07326, 2017. URL http://arxiv.org/abs/1703.07326. Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19–24, 2016, pp. 49–58, 2016. URL http://jmlr.org/proceedings/papers/v48/finn16.html. Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot visual imitation learning via meta-learning. arXiv e-prints, 2017. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014. Shixiang Gu, Ethan Holly, Timothy P. Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation. CoRR, abs/1610.00633, 2016a. Shixiang Gu, Tim Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning (ICML), 2016b. Abhishek Gupta, Clemens Eppner, Sergey Levine, and Pieter Abbeel. Learning dexterous manipulation for a soft robotic hand from human demonstration. CoRR, abs/1603.06348, 2016. Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems (NIPS), pp. 2926–2934, 2015. Nicolas Heess, Greg Wayne, Yuval Tassa, Timothy Lillicrap, Martin Riedmiller, and David Silver. Learning and transfer of modulated locomotor controllers. arXiv preprint arXiv:1610.05182, 2016. Nicolas Heess, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, Ali Eslami, Martin Riedmiller, et al. Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286, 2017. Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pp. 4565–4573, 2016. Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016. Stephen James, Andrew J. Davison, and Edward Johns. Transferring end-to-end visuomotor control from simulation to real world for a multi-stage task. CoRR, abs/1707.02267, 2017. Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In ICML, 2002. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Vikash Kumar, Abhishek Gupta, Emanuel Todorov, and Sergey Levine. Learning dexterous manipulation policies from experience and imitation. CoRR, abs/1611.05095, 2016. URL http://arxiv.org/abs/1611.05095. Sergey Levine and Vladlen Koltun. Guided policy search. In International Conference on Machine Learning (ICML), pp. 1–9, 2013. Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. CoRR, abs/1504.00702, 2015. Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection.
CoRR, abs/1603.02199, 2016. Yunzhu Li, Jiaming Song, and Stefano Ermon. Inferring the latent structure of human decision-making from raw visual inputs. arXiv preprint arXiv:1703.08840, 2017. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. International Conference on Learning Representations (ICLR), 2016. Yuxuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. CoRR, abs/1707.03374, 2017. Josh Merel, Yuval Tassa, Dhruva TB, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, and Nicolas Heess. Learning human behaviors from motion capture by adversarial imitation. CoRR, abs/1707.02201, 2017. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Overcoming exploration in reinforcement learning with demonstrations. CoRR, abs/1709.10089, 2017. Andrew Y Ng, Daishi Harada, and Stuart J Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, pp. 278–287, 1999. Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. arXiv e-prints, October 2017. Lerrel Pinto and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. CoRR, abs/1509.06825, 2015. URL http://arxiv.org/abs/1509.06825. Lerrel Pinto, Marcin Andrychowicz, Peter Welinder, Wojciech Zaremba, and Pieter Abbeel. Asymmetric actor critic for image-based robot learning. arXiv e-prints, 2017. Ivaylo Popov, Nicolas Heess, Timothy P. Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, and Martin A. Riedmiller. Data-efficient deep reinforcement learning for dexterous manipulation. CoRR, abs/1704.03073, 2017. Rouhollah Rahmatizadeh, Pooya Abolghasemi, Ladislau Bölöni, and Sergey Levine. Vision-based multi-task manipulation for inexpensive robots using end-to-end learning from demonstration. CoRR, abs/1707.02920, 2017. URL http://arxiv.org/abs/1707.02920. Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, John Schulman, Emanuel Todorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. CoRR, abs/1709.10087, 2017. Stéphane Ross, Geoffrey J Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In International Conference on Artificial Intelligence and Statistics, pp. 627–635, 2011. Andrei Rusu, Neil Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016a. Andrei Rusu, Matej Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. Sim-to-real robot learning from pixels with progressive nets. CoRR, abs/1610.04286, 2016b. Connor Schenck and Dieter Fox. Reasoning about liquids via closed-loop simulation. arXiv preprint arXiv:1703.01656, 2017.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1889–1897, 2015. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Pierre Sermanet, Corey Lynch, Jasmine Hsu, and Sergey Levine. Time-contrastive networks: Self-supervised learning from multi-view observation. CoRR, abs/1704.06888, 2017. David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning (ICML), 2014. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016. Joshua Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. CoRR, abs/1703.06907, 2017. Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. IEEE, 2012. Matej Vecerik, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothorl, Thomas Lampe, and Martin A. Riedmiller. Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards. CoRR, abs/1707.08817, 2017. Ulrich Viereck, Andreas ten Pas, Kate Saenko, and Robert Platt. Learning a visuomotor controller for real world robotic grasping using easily simulated depth images. CoRR, abs/1706.04652, 2017. Ziyu Wang, Josh Merel, Scott E. Reed, Greg Wayne, Nando de Freitas, and Nicolas Heess. Robust imitation of diverse behaviors. In Advances in Neural Information Processing Systems (NIPS), 2017. Ali Yahya, Adrian Li, Mrinal Kalakrishnan, Yevgen Chebotar, and Sergey Levine. Collective robot reinforcement learning with distributed asynchronous guided policy search. CoRR, abs/1610.00673, 2016. URL http://arxiv.org/abs/1610.00673. ## A EXPERIMENT DETAILS The policy network takes the pixel observation and the proprioceptive feature as input. The pixel observation is an RGB image of size \(64\times 64\times 3\) . We used the Kinect for Xbox One camera in the real environment. The proprioceptive feature describes the joint positions and velocities of the Kinova Jaco arm. Each joint position is represented as the sine and cosine of the joint angle, and each joint velocity as the scalar angular velocity. This results in a 24-dimensional proprioceptive feature that contains the positions (12-d) and velocities (6-d) of the six arm joints and the positions (6-d) of the three fingers. We exclude the finger velocities due to the noisy sensory readings on the real robot. A sketch of this encoding follows below.
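As an illustration, the following sketch (ours, not the authors' code) assembles the 24-dimensional proprioceptive feature from raw joint readings; we assume the finger positions use the same sine/cosine encoding as the arm joints to reach 6 dimensions.

```python
import numpy as np

def proprioceptive_feature(arm_angles, arm_velocities, finger_angles):
    """Encode joint readings into the 24-d proprioceptive feature.

    arm_angles:     (6,) arm joint angles in radians
    arm_velocities: (6,) arm joint angular velocities
    finger_angles:  (3,) finger joint angles in radians
    """
    assert arm_angles.shape == (6,) and arm_velocities.shape == (6,)
    assert finger_angles.shape == (3,)
    return np.concatenate([
        np.sin(arm_angles), np.cos(arm_angles),       # 12-d arm positions
        arm_velocities,                                # 6-d arm velocities
        np.sin(finger_angles), np.cos(finger_angles),  # 6-d finger positions
    ])  # 24-d total; finger velocities are excluded (noisy on the real robot)
```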
We used Adam (Kingma & Ba, 2014) to train the neural network parameters. We set the learning rates of the policy and the value function to \(10^{-4}\) and \(10^{-3}\) respectively, and to \(10^{-4}\) for both the discriminator and the auxiliary tasks. The pixel observation is encoded by a two-layer convolutional network followed by a fully-connected layer with 128 hidden units. The first convolutional layer has 16 \(8\times 8\) filters with stride 4 and the second has 32 \(4\times 4\) filters with stride 2. We add a recurrent layer of 100 LSTM units before the policy and value outputs. The policy output is the mean and the standard deviation of a conditional Gaussian distribution over the 9-dimensional joint velocities. The initial policy standard deviation is set to \(\exp(-3)\) for the clearing table with blocks task and \(\exp(-1)\) for the other five tasks. The auxiliary head of the policy contains a separate 3-layer MLP sitting on top of the convolutional network. The first two layers of the MLP have 200 and 100 hidden units respectively, while the third layer predicts the auxiliary outputs. Finally, the discriminator is a simple three-layer MLP with 100 and 64 hidden units in the first two layers and a third layer producing log probabilities. The networks use tanh nonlinearities. We trained the visuomotor policies using the distributed PPO algorithm (Heess et al., 2017) with synchronous gradient updates from 256 CPU workers. Each worker runs the policy to complete an entire episode before the parameter updates are computed. We set a constant episode length for each task based on its difficulty, with the longest being 1000 time steps (50 seconds) for the clearing table with blocks and order fulfillment tasks. We set \(K = 50\) as the number of time steps for computing the \(K\)-step returns and for truncated backpropagation through time when training the LSTM units. After a worker collects a batch of data points, it performs 50 parameter updates for the policy and value networks, 5 for the discriminator, and 5 for the auxiliary prediction network. ## B SIM2REAL DETAILS To better facilitate sim2real transfer, we lower the frequency at which we sample the observations. Pixel observations are only sampled at a rate of 5Hz, even though our controller runs at 20Hz. Similarly, the proprioceptive features are sampled at a rate of 10Hz. In addition to observation delays, we also apply domain variations. Gaussian noise (with standard deviation 0.01) is added to the proprioceptive features. Uniform integer noise in the range \([-5,5]\) is added to each pixel independently, and pixel values outside the range \([0,255]\) are clipped. We also randomly vary the shade of grey of the Jaco arm, the color of the tabletop, and the location and orientation of the light source (see Fig. 6). In the case of block lifting, we additionally vary the dynamics of the arm. Specifically, we dynamically change the friction, damping, armature, and gain parameters of the robot arm in simulation to further improve the robustness of the agent. ## B.1 ACTION DROPPING Our analysis indicates that, on the real robot, there is often a delay in the execution of actions, and the amount of delay varies significantly. This has an adverse effect on the performance of our agents on the physical robot, since their performance depends on the timely execution of their actions. To better facilitate the transfer to the real robot, we fine-tune our trained agents in simulation while subjecting them to a random chance of dropped actions. Specifically, each action emitted by the agent has a \(50\%\) chance of being executed immediately, in which case it is flagged as the last executed action. If the current action is not executed, the last executed action is executed instead. A minimal sketch of this scheme follows below.
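Here is a minimal sketch of this action-dropping scheme as an environment wrapper, assuming a Gym-style `step` interface; the class name and interface are illustrative assumptions, not the authors' implementation.

```python
import random

class ActionDroppingEnv:
    """Wraps an environment so each action has a 50% chance of being dropped.

    When an action is dropped, the last successfully executed action is
    repeated, mimicking the variable execution delays on the real robot.
    """

    def __init__(self, env, drop_prob=0.5):
        self.env = env
        self.drop_prob = drop_prob
        self.last_executed = None  # no action has been executed yet

    def step(self, action):
        if self.last_executed is None or random.random() >= self.drop_prob:
            # Execute the current action and flag it as the last executed one.
            self.last_executed = action
        # Otherwise the current action is dropped and the last executed
        # action is re-executed.
        return self.env.step(self.last_executed)
```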
![](images/13_0.jpg) <center>Figure 6: Tiles show the representative range of diversity seen in the domain-randomized variations of the colors, lighting, background, etc. </center> Table 1: Block lifting success rate from different positions. (LL, LR, UL, UR, and C represent the lower left, lower right, upper left, upper right, and center positions respectively.) <table><tr><td></td><td>LL</td><td>LR</td><td>UL</td><td>UR</td><td>C</td><td>All</td></tr><tr><td>No Action Dropping</td><td>2/5</td><td>2/5</td><td>1/5</td><td>3/5</td><td>4/5</td><td>12/25</td></tr><tr><td>Action Dropping</td><td>4/5</td><td>4/5</td><td>4/5</td><td>0/5</td><td>4/5</td><td>16/25</td></tr></table> Table 2: Success rate of the block stacking agent from different starting positions. Left and Right indicate the position of the support block upon initialization. <table><tr><td></td><td>Left</td><td>Right</td><td>All</td></tr><tr><td>Stacking Success Rate</td><td>5/10</td><td>2/10</td><td>7/20</td></tr><tr><td>Lifting Success Rate</td><td>9/10</td><td>7/10</td><td>16/20</td></tr></table> Using the above procedure, we fine-tune our agents on both block lifting and block stacking for a further 2 million iterations. To demonstrate the effectiveness of action dropping, we compare our agents on the real robot on the block lifting task. Without action dropping, the baseline agent lifts \(48\%\) of the time. After fine-tuning with action dropping, our agent succeeds \(64\%\) of the time. The complete set of results is given in Table 1 and Table 2. ## C TASK DETAILS We use a fixed episode length for each task, determined by the amount of time a skilled human demonstrator needs to complete the task. An episode terminates when the maximum number of agent steps has been performed. The robot arm operates at a control frequency of \(20\mathrm{Hz}\) , which means each time step takes 0.05 seconds. We segment each task into a sequence of stages that represent an agent's progress in the task. For instance, the block stacking task can be characterized by three stages: reaching the block, lifting the block, and stacking the block. We define functions on the underlying physical state to determine the stage of a state. This way, we can cluster demonstration states according to their corresponding stages. These clusters are used to reset training episodes in our demonstration-as-a-curriculum technique proposed in Sec. 3.2.2. The definition of stages also gives rise to a convenient way of specifying reward functions without hand-engineering a shaping reward. We define a piecewise constant reward function for each task, assigning the same constant reward to all states that belong to the same stage; the sketch below illustrates this for block stacking.
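As an illustration, the piecewise constant reward can be implemented as a stage classifier over the physical state followed by a lookup table. The sketch below uses the block stacking stages and reward values listed in the task details that follow; the stage predicates on `state` are hypothetical placeholders for functions of the underlying physical state.

```python
# Stage rewards for block stacking (values from the task details below).
STACKING_REWARDS = {
    "initial": 0.0,
    "reaching": 0.125,
    "lifting": 0.25,
    "stacking": 1.0,
}

def stacking_stage(state):
    """Classify a physical state into a stage of the block stacking task.

    The boolean attributes below are hypothetical predicates computed from
    the underlying physical state (object and gripper poses).
    """
    if state.block_stacked:
        return "stacking"
    if state.block_lifted:
        return "lifting"
    if state.gripper_near_block:
        return "reaching"
    return "initial"

def staged_reward(state):
    # Piecewise constant: every state in a stage receives the same reward.
    return STACKING_REWARDS[stacking_stage(state)]
```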
We detail the stages, reward functions, auxiliary tasks, and object-centric features for the six tasks in our experiments. Block lifting. Each episode lasts 100 time steps. We define three stages and their rewards (in parentheses) to be initial (0), reaching the block (0.125), and lifting the block (1.0). The auxiliary task is to predict the 3D coordinates of the color block. The object-centric feature consists of the relative position between the gripper and the block. Block stacking. Each episode lasts 500 time steps. We define four stages and their rewards to be initial (0), reaching the orange block (0.125), lifting the orange block (0.25), and stacking the orange block onto the pink block (1.0). The auxiliary task is to predict the 3D coordinates of the two blocks. The object-centric feature consists of the relative positions between the gripper and the two blocks respectively. Clearing table with blocks. Each episode lasts 1000 time steps. We define five stages and their rewards to be initial (0), reaching the orange block (0.125), lifting the orange block (0.25), stacking the orange block onto the pink block (1.0), and lifting both blocks off the ground (2.0). The auxiliary task is to predict the 3D coordinates of the two blocks. The object-centric feature consists of the 3D positions of the two blocks as well as the relative positions between the gripper and the two blocks respectively. Clearing table with a box. Each episode lasts 500 time steps. We define five stages and their rewards to be initial (0), reaching the toy (0.125), grasping the toy (0.25), putting the toy into the box (1.0), and lifting the box (2.0). The auxiliary task is to predict the 3D coordinates of the toy and the box. The object-centric feature consists of the 3D positions of the toy and the box as well as the relative positions between the gripper and these two objects respectively. Pouring liquid. Each episode lasts 500 time steps. We define three stages and their rewards to be initial (0), grasping the mug (0.05), and pouring (\(0.1N\)), where \(N\) is the number of small spheres in the other container. The auxiliary task is to predict the 3D coordinates of the mug. The object-centric feature consists of the 3D position of the mug, the relative position between the gripper and the mug, and the relative position between the mug and the container. Order fulfillment. Each episode lasts 1000 time steps. The number of objects varies from 1 to 4 across episodes. We define five stages that correspond to the number of toys in the boxes. The immediate reward corresponds to the number of toys placed in the correct boxes (toy planes in the green box and toy cars in the red box). To handle the variable number of objects, we only represent the objects nearest to the gripper for the auxiliary task and the object-centric feature. The auxiliary task is to predict the 3D coordinates of the nearest plane and the nearest car to the gripper. The object-centric feature consists of the relative positions from the gripper to these two nearest objects.
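For this variable-object case, here is a minimal sketch of the object-centric feature construction, assuming at least one plane and one car are present in the scene; all function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def nearest(gripper_pos, object_positions):
    """Return the object position closest to the gripper."""
    dists = [np.linalg.norm(p - gripper_pos) for p in object_positions]
    return object_positions[int(np.argmin(dists))]

def order_fulfillment_feature(gripper_pos, plane_positions, car_positions):
    """Relative positions from the gripper to the nearest plane and car."""
    nearest_plane = nearest(gripper_pos, plane_positions)
    nearest_car = nearest(gripper_pos, car_positions)
    return np.concatenate([nearest_plane - gripper_pos,
                           nearest_car - gripper_pos])  # 6-d feature
```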
We vary several random factors, including the robot arm dynamics (friction and armature), lighting conditions, camera poses, background colors, as well as the properties of the block. Each episode starts with a new configuration with these random factors uniformly drawn from a preset range. Block stacking. The goal is to stack one block on top of the other block. Together with the block lifting task, this is evaluated in sim2real transfer experiments. Clearing blocks. This task aims at clearing the tabletop that has two blocks. One strategy to do this task using a single arm is to stack the blocks and pick up both together. This task requires longer time and a more dexterous controller, introducing a significant challenge for exploration. The next three tasks involve a large variety of procedurally generated 3D shapes, making them difficult to recreate in real environments. We use them to examine the model's ability to generalize across object variations in long and complex tasks. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 3: Visualizations of the six manipulation tasks in our experiments. The left column shows RGB images of all six tasks in the simulated environments. These images correspond to the actual pixel observations as input to the visuomotor policies. The right column shows the two tasks with color blocks on the real robot. </center> Clearing tabletop. In this task, the goal is to clear the tabletop that has a box and a toy car. One strategy is to grasp the toy, put it into the box, and lift the box. Both the box and the toy car are randomly generated for each episode. Pouring liquid. Modeling and reasoning about deformable objects and fluids is a long- standing challenge in the robotics community (Schenck & Fox, 2017). We design a pouring task where we use many small spheres to simulate liquid. The goal is to pour the "liquid" from one mug to the other container. This task is particularly challenging due to the dexterity required. Even trained humans struggled to demonstrate the task with our 3D motion controller. Order fulfillment. In this task, we randomly place a variable number of procedurally generated toy planes and cars on the table. The goal is to place all the planes into the green box and all the cars into the red box. This task requires the policy to generalize at an abstract level. It needs to recognize the object categories, perform successful grasps on diverse shapes, and handle tasks with variable lengths. ### 4.3 QUANTITATIVE EVALUATION Our full model can solve all six tasks, with only occasional failures, using the same policy network, the same training algorithm, and a fixed set of hyperparameters. On the contrary, neither reinforcement nor imitation alone can solve all tasks. We compare the full model with three baseline methods, where we evaluate degenerated versions of our model, which correspond to RL, GAIL, and RL w/o demonstration curriculum. These baselines use the same setup as the full model, except that we set \(\lambda = 0\) for RL and \(\lambda = 1\) for GAIL, while our model uses a balanced contribution of the hybrid reward, where \(\lambda = 0.5\) . In the third baseline, all the training episodes start from random initial states rather than resetting to demonstration states. This corresponds to a standard RL setup. We report the mean episode returns as a function of the number of training iterations in Fig. 4. Our full model achieves the highest returns in all six tasks. 
The only case where the baseline model is on par with the full model is the block lifting task, in which both the RL baseline and the full model achieved similar levels of performance. We hypothesize that this is due to the short length of the lifting task, where random exploration in RL is likely to reach the goal states without the aid of GAIL. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 4: Learning efficiency of our reinforcement and imitation model against baselines. The plots are averaged over 5 runs with different random seeds. All the policies use the same network architecture and the same hyperparameters (except \(\lambda\) ). </center> ![](images/7_1.jpg) <center>Figure 5: Model analysis in the stacking task. On the left we investigate the impact on performance by removing each individual component from the full model. On the right we investigate the model's sensitivity to the hyperparameter \(\lambda\) that moderates the contribution of reinforcement and imitation. </center> In the other five tasks, the full model outperforms both the reinforcement learning and imitation learning baselines by a large margin, demonstrating the effectiveness of combining reinforcement and imitation for learning complex tasks. Comparing the two variants of RL with and without using demonstration as a curriculum, we see a pronounced effect of altering the start state distribution. We see that RL from scratch leads to a very slow learning progress; while initiating episodes along demonstration trajectories enables the agent to train on states from different stages of a task. As a result, it greatly reduces the burden of exploration and improves the learning efficiency. We also report the mean episode returns of human demonstrations in these figures. While demonstrations using the 3D motion controller are imperfect, especially for pouring (see video), the trained agents can surpass them via interacting with the environment. Two findings are noteworthy. First, the RL agent learns faster than the full model in the table clearing task, but the full model eventually outperforms. This is because the full model discovers <--- Page Split ---> a novel strategy, different from the strategy demonstrated by human operators (see video). In this case, imitation gave contradictory signals but eventually, reinforcement learning guided the policy towards a better strategy. Second, pouring liquid is the only task where GAIL outperforms its RL counterpart. Imitation can effectively shape the agent's behaviors towards the demonstration trajectories (Wang et al., 2017). This is a viable solution for the pouring task, where a controller that generates similar- looking behaviors can complete the task. In contact- rich domains, however, a controller learned solely from dozens of demonstrations would struggle to handle complex object dynamics and to infer the true task goal. We hypothesize that this is why the baseline RL agent outperforms the GAIL agent in the other five tasks. We further perform an ablation study in the block stacking task to understand the impacts of different components of our model. In Fig. 5a, we trained our agents with a number of configurations, each with a single modification to the full model. We see that the final performances of the experiments cluster into two groups: agents that learn to stack (with average returns greater than 400) and agents that only learn to lift (with average returns between 200 and 300). 
These results indicate that the hybrid RL/IL reward, learning value function from states, and object- centric discriminator play an integral role in learning good policies. Using sole RL or GAIL reward, learning value function on pixels, or no information hiding for discriminator input (no discriminator mask) all result in inferior performances. In contrast, the optional components include the recurrent policy core (LSTM), the use of state prediction auxiliary tasks, and whether to include actions in discriminator input. We then examine the model's sensitivity to the \(\lambda\) values in Eq. 2. We see in Fig. 5b that, our model works well with a broad range of \(\lambda\) values from 0.3 to 0.7 that provide a balanced mix of the RL and GAIL rewards. ### 4.4 SIM2REAL POLICY TRANSFER RESULTS To assess the robustness of the simulation- trained policy, we evaluate zero- shot transfer (no additional training) on a real Jaco arm. Given a real- world set up that mirrored the simulated domain to a large extent, including camera positions and robot kinematics and approximate object size and color, we ran the trained network policy and counted the number of successful trials for both the lifting and stacking tasks. Although the sim and real domains were similar, there was still a sizable reality gap that made zero- shot transfer challenging. For example, the objects were non- rigid foam blocks which deformed and bounced unpredictably. The arm position was randomly initialized and the target block(s) placed in a number of repeatable start configurations for each task. The zero- shot transfer of the lifting policy had a success rate of \(64\%\) over 25 trials (split between 5 block configurations). The stacking policy had a success rate of \(35\%\) over 20 trials (split between 2 block configurations). \(80\%\) of the stacking trajectories, however, contain successful lifting behavior. Qualitatively, the policies are notably robust even on failed attempts — rather than exhibiting "open- loop" behaviors such as attempting to stack a non- existent block, the policy repeatedly chases the block to get a successful grasp before trying to stack (see video). For more detailed descriptions of the sim2real results, refer to Appendix B. ## 5 CONCLUSION We have shown that combining reinforcement and imitation learning considerably improves the agents' ability to solve challenging dexterous manipulation tasks from pixels. Our proposed method sheds light on the three stages of a principled pipeline for robot skill learning: first, we collected a small amount of demonstration data to simplify the exploration problem; second, we relied on physical simulation to perform large- scale distributed robot training; and third, we performed sim2real transfer for real- world deployment. In future work, we seek to improve the sample efficiency of the learning method and to leverage real- world experience to close the reality gap for policy transfer. ## A EXPERIMENT DETAILS The policy network takes the pixel observation and the proprioceptive feature as input. The pixel observation is an RGB image of size \(64\times 64\times 3\) . We used the Kinect for Xbox One camera in the real environment. The proprioceptive feature describes the joint positions and velocities of the Kinova Jaco arm. Each joint position is represented as the sin and cos of the angle of the joint in joint coordinates. Each joint velocity is represented as the scalar angular velocity. 
This results in a 24-dimensional proprioceptive feature that contains the positions (12-d) and velocities (6-d) of the six arm joints and the positions (6-d) of the three fingers. We exclude the finger velocities due to the noisy sensory readings on the real robot. We used Adam (Kingma & Ba, 2014) to train the neural network parameters. We set the learning rates of the policy and the value function to \(10^{-4}\) and \(10^{-3}\) respectively, and to \(10^{-4}\) for both the discriminator and the auxiliary tasks. The pixel observation is encoded by two convolutional layers followed by a fully-connected layer with 128 hidden units. The first convolutional layer has 16 \(8\times 8\) filters with stride 4 and the second has 32 \(4\times 4\) filters with stride 2. We add a recurrent layer of 100 LSTM units before the policy and value outputs. The policy output is the mean and the standard deviation of a conditional Gaussian distribution over the 9-dimensional joint velocities. The initial policy standard deviation is set to \(\exp(-3)\) for the clearing table with blocks task and \(\exp(-1)\) for the other five tasks. The auxiliary head of the policy contains a separate 3-layer MLP sitting on top of the convolutional network. The first two layers of the MLP have 200 and 100 hidden units respectively, while the third layer predicts the auxiliary outputs. Finally, the discriminator is a simple three-layer MLP with 100 and 64 hidden units in the first two layers and a third layer producing log probabilities. The networks use tanh nonlinearities. We trained the visuomotor policies using the distributed PPO algorithm (Heess et al., 2017) with synchronous gradient updates from 256 CPU workers. Each worker runs the policy to complete an entire episode before the parameter updates are computed. We set a constant episode length for each task based on its difficulty, with the longest being 1000 time steps (50 seconds) for the clearing table with blocks and order fulfillment tasks. We set \(K = 50\) as the number of time steps for computing \(K\)-step returns and truncated backpropagation through time to train the LSTM units. After a worker collects a batch of data points, it performs 50 parameter updates for the policy and value networks, 5 for the discriminator and 5 for the auxiliary prediction network. ## B SIM2REAL DETAILS To better facilitate sim2real transfer, we lower the frequency at which we sample the observations. Pixel observations are only observed at a rate of 5Hz, despite the fact that our controller runs at 20Hz. Similarly, the proprioceptive features are observed at a rate of 10Hz. In addition to observation delays, we also apply domain variations. Gaussian noise (of standard deviation 0.01) is added to the proprioceptive features. Uniform integer noise in the range \([-5,5]\) is added to each pixel independently. Pixel values outside the range \([0,255]\) are clipped. We also randomly vary the shade of grey on the Jaco arm, the color of the table top, and the location and orientation of the light source (see Fig. 6). In the case of block lifting, we additionally vary the dynamics of the arm. Specifically, we dynamically change the friction, damping, armature, and gain parameters of the robot arm in simulation to further robustify the agent's performance. ## B.1 ACTION DROPPING Our analysis indicates that, on the real robot, there is often a delay in the execution of actions, and the amount of delay varies significantly.
This has an adverse effect on performance, since our agents depend on the timely execution of their actions. To better facilitate the transfer to the real robot, we fine-tune our trained agents in simulation while subjecting them to a random chance of dropping actions. Specifically, each action emitted by the agent has a \(50\%\) chance of being executed immediately, in which case the action is flagged as the last executed action. If the current action is not executed, the last executed action will be executed instead. <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 6: Tiles show the representative range of diversity seen in the domain-randomized variations of the colors, lighting, background, etc. </center> Table 1: Block lifting success rate from different positions. (LL, LR, UL, UR, and C represent the positions of lower left, lower right, upper left, upper right, and center respectively.) <table><tr><td></td><td>LL</td><td>LR</td><td>UL</td><td>UR</td><td>C</td><td>All</td></tr><tr><td>No Action Dropping</td><td>2/5</td><td>2/5</td><td>1/5</td><td>3/5</td><td>4/5</td><td>12/25</td></tr><tr><td>Action Dropping</td><td>4/5</td><td>4/5</td><td>4/5</td><td>0/5</td><td>4/5</td><td>16/25</td></tr></table> Using the above procedure, we fine-tune our agents on both block lifting and block stacking for a further 2 million iterations. To demonstrate the effectiveness of action dropping, we compare our agents on the real robot on the block lifting task. Without action dropping, the baseline agent lifts \(48\%\) of the time. After fine-tuning with action dropping, our agent succeeds \(64\%\) of the time. For the complete set of results, please see Table 1 and Table 2. ## C TASK DETAILS We use a fixed episode length for each task, which is determined by the amount of time a skilled human demonstrator needs to complete the task. An episode terminates when a maximum number of agent steps have been performed. The robot arm operates at a control frequency of \(20\mathrm{Hz}\), which means each time step takes 0.05 second. We segment each task into a sequence of stages that represent an agent's progress. We define functions on the underlying physical state to determine the stage of a state. This way, we can cluster demonstration states according to their corresponding stages. These clusters are used to reset training episodes in the demonstration-as-a-curriculum technique proposed in Sec. 3.2.2. The definition of stages also gives rise to a convenient way of specifying the reward functions without hand-engineering a shaping reward. We define a piecewise constant reward function for each task, where we assign the same constant reward to all the states that belong to the same stage. We detail the stages, reward functions, auxiliary tasks, and object-centric features for the six tasks in our experiments. Block lifting. Each episode lasts 100 time steps. We define three stages and their rewards (in parentheses) to be initial (0), reaching the block (0.125), and lifting the block (1.0). The auxiliary task is to predict the 3D coordinates of the color block. The object-centric feature consists of the relative position between the gripper and the block.
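To make the stage-based reward concrete, below is a minimal sketch of the piecewise constant reward for block lifting. The state fields and thresholds are hypothetical stand-ins for the stage functions computed on the underlying physical state; only the reward values (0, 0.125, 1.0) come from the description above.

```python
# Minimal sketch of a piecewise constant stage reward for block lifting.
# The predicates below are hypothetical stand-ins for the stage functions
# defined on the underlying physical state; the thresholds are made up.

def lifting_stage(state) -> int:
    """Map a physical state to a stage index: 0 initial, 1 reaching, 2 lifting."""
    if state["block_height"] > 0.05:           # assumed lifting threshold (m)
        return 2
    if state["gripper_to_block_dist"] < 0.03:  # assumed reaching threshold (m)
        return 1
    return 0

STAGE_REWARDS = [0.0, 0.125, 1.0]  # rewards for the three stages, as in the text

def reward(state) -> float:
    # Every state within a stage receives the same constant reward,
    # so no hand-engineered shaping term is needed.
    return STAGE_REWARDS[lifting_stage(state)]
```

The same stage functions also serve to cluster demonstration states for resetting training episodes.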
Block stacking. Each episode lasts 500 time steps. We define four stages and their rewards to be initial (0), reaching the orange block (0.125), lifting the orange block (0.25), and stacking the orange block onto the pink block (1.0). The auxiliary task is to predict the 3D coordinates of the two blocks. <--- Page Split ---> Table 2: Success rate of the block stacking agent from different starting positions. Left and Right indicate the position of the support block upon initialization. <table><tr><td></td><td>Left</td><td>Right</td><td>All</td></tr><tr><td>Stacking Success Rate</td><td>5/10</td><td>2/10</td><td>7/20</td></tr><tr><td>Lifting Success Rate</td><td>9/10</td><td>7/10</td><td>16/20</td></tr></table> The object-centric feature consists of the relative positions between the gripper and the two blocks respectively. Clearing table with blocks. Each episode lasts 1000 time steps. We define five stages and their rewards to be initial (0), reaching the orange block (0.125), lifting the orange block (0.25), stacking the orange block onto the pink block (1.0), and lifting both blocks off the ground (2.0). The auxiliary task is to predict the 3D coordinates of the two blocks. The object-centric feature consists of the 3D positions of the two blocks as well as the relative positions between the gripper and the two blocks respectively. Clearing table with a box. Each episode lasts 500 time steps. We define five stages and their rewards to be initial (0), reaching the toy (0.125), grasping the toy (0.25), putting the toy into the box (1.0), and lifting the box (2.0). The auxiliary task is to predict the 3D coordinates of the toy and the box. The object-centric feature consists of the 3D positions of the toy and the box as well as the relative positions between the gripper and these two objects respectively. Pouring liquid. Each episode lasts 500 time steps. We define three stages and their rewards to be initial (0), grasping the mug (0.05), and pouring (\(0.1N\)), where \(N\) is the number of small spheres in the other container. The auxiliary task is to predict the 3D coordinates of the mug. The object-centric feature consists of the 3D position of the mug, the relative position between the gripper and the mug, and the relative position between the mug and the container. Order fulfillment. Each episode lasts 1000 time steps. The number of objects varies from 1 to 4 across episodes. We define five stages that correspond to the number of toys in the boxes. The immediate reward corresponds to the number of toys placed in the correct boxes (the number of toy planes in the green box and toy cars in the red box). To handle the variable number of objects, we only represent the objects nearest to the gripper for the auxiliary task and the object-centric feature. The auxiliary task is to predict the 3D coordinates of the nearest plane and the nearest car to the gripper. The object-centric feature consists of the relative positions from the gripper to these two nearest objects. <--- Page Split --->
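As a final illustration, the action-dropping perturbation of Appendix B.1 reduces to a few lines. This sketch assumes a Gym-style `env.step` interface and is illustrative rather than the authors' implementation.

```python
import random

def step_with_action_dropping(env, action, memory, p_execute=0.5):
    """With probability p_execute, execute the agent's current action and flag
    it as the last executed action; otherwise repeat the last executed action,
    mimicking the variable execution delay observed on the real robot."""
    if memory.get("last_action") is None or random.random() < p_execute:
        memory["last_action"] = action   # current action is executed and flagged
    executed = memory["last_action"]     # either the new or the repeated action
    return env.step(executed)
```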
reject
Reject
4.666667
ICLR_2018_paper_0724
iclr
2,018
# MINIMAX CURRICULUM LEARNING: MACHINE TEACHING WITH DESIRABLE DIFFICULTIES AND SCHEDULED DIVERSITY Tianyi Zhou & Jeff Bilmes University of Washington, Seattle {tianyizh,bilmes}@uw.edu ## ABSTRACT We introduce and study minimax curriculum learning (MCL), a new method for adaptively selecting a sequence of training subsets for a succession of stages in machine learning. The subsets are encouraged to be small and diverse early on, and then larger, harder, and allowably more homogeneous in later stages. At each stage, model weights and training sets are chosen by solving a joint continuous-discrete minimax optimization, whose objective is composed of a continuous loss (reflecting training set hardness) and a discrete submodular promoter of diversity for the chosen subset. MCL repeatedly solves a sequence of such optimizations with a schedule of increasing training set size and decreasing pressure on diversity encouragement. We reduce MCL to the minimization of a surrogate function handled by submodular maximization and continuous gradient methods. We show that MCL achieves better performance and, with a clustering trick, uses fewer labeled samples for both shallow and deep models. Our method involves repeatedly solving constrained submodular maximization of an only slowly varying function on the same ground set. Therefore, we develop a heuristic method that utilizes the previous submodular maximization solution as a warm start for the current submodular maximization process to reduce computation while still yielding a guarantee. ## 1 INTRODUCTION Inspired by the human interaction between teacher and student, recent studies (Khan et al., 2011; Basu & Christensen, 2013; Spitkovsky et al., 2009) support the idea that learning algorithms can be improved by updating a model on a designed sequence of training sets, i.e., a curriculum. This problem is addressed in curriculum learning (CL) (Bengio et al., 2009), where the sequence is designed by a human expert or heuristic before training begins. Instead of relying on a teacher to provide the curriculum, self-paced learning (SPL) (Kumar et al., 2010; Tang et al., 2012a; Supancic III & Ramanan, 2013; Tang et al., 2012b) chooses the curriculum during the training process. It does so by letting the student (i.e., the algorithm) determine which samples to learn from based on their hardness. Given a training set \(\mathcal{D} = \{(x_{1},y_{1}),\ldots ,(x_{n},y_{n})\}\) of \(n\) samples and a loss function \(L(y_{i},f(x_{i},w))\), where \(x_{i}\) represents the feature vector for the \(i^{th}\) sample, \(y_{i}\) is its label, and \(f(x_{i},w)\) is the predicted label provided by a model with weights \(w\), SPL performs the following: \[\min_{w\in \mathbb{R}^{m}}\min_{\nu \in [0,1]^{n}}\left[\sum_{i = 1}^{n}\nu_{i}L\left(y_{i},f(x_{i},w)\right) - \lambda \sum_{i = 1}^{n}\nu_{i}\right]. \quad (1)\] SPL jointly learns the model weights \(w\) and sample weights \(\nu\), which end up being 0-1 indicators of selected samples, and it does so via alternating minimization. Fixing \(w\), minimization w.r.t. \(\nu\) selects samples with loss \(L(y_{i},f(x_{i},w))< \lambda\), where \(\lambda\) is a "hardness parameter" as it corresponds to the hardness as measured by the current loss (since with large \(\lambda\), samples with greater loss are allowed in).
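To make the alternating minimization concrete: with \(w\) fixed, the inner minimization over \(\nu\) in Eq. (1) has a closed form, setting \(\nu_i = 1\) exactly when \(L(y_i, f(x_i, w)) < \lambda\). A minimal sketch of this \(\nu\)-step follows, where the array of per-sample losses is assumed to come from the current model:

```python
import numpy as np

def spl_select(losses, lam):
    """nu-step of SPL (Eq. 1): with w fixed, the objective is linear in each
    nu_i, so the minimizer sets nu_i = 1 iff the sample's loss is below lam."""
    return (losses < lam).astype(float)

losses = np.array([0.2, 1.5, 0.7, 3.0])   # per-sample losses under the current w
nu = spl_select(losses, lam=1.0)          # -> [1., 0., 1., 0.]
# The w-step then minimizes sum(nu * losses) over w (e.g., by SGD), after
# which lam is increased so that harder samples enter the next round.
```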
Self-paced curriculum learning (Jiang et al., 2015) introduces a blending of the "teacher mode" of CL and the "student mode" of SPL, where the teacher can define a region of \(\nu\) by attaching a linear constraint \(a^{T}\nu \leq c\) to Eq. (1). SPL with diversity (SPLD) (Jiang et al., 2014) adds to Eq. (1) a negative group sparse regularization term \(- \gamma \| \nu \|_{2,1}\triangleq - \gamma \sum_{j = 1}^{b}\| \nu^{(j)}\|_{2}\), where the samples are <--- Page Split ---> divided into \(b\) groups beforehand and \(\nu^{(j)}\) is the weight vector for the \(j^{th}\) group. Samples coming from different groups are thus preferred, to the extent that \(\gamma >0\) is large. CL, SPL, and SPLD can be seen as forms of continuation scheme (Allgower & Georg, 2003) that handle a hard task by solving a sequence of tasks moving from easy to hard; the solution to each task is the warm start for the next slightly harder task. That is, each task, in the present case, is determined by the training data subset and other training hyperparameters, and the resulting parameters at the end of a training round are used as the initial parameters for the next training round. Such continuation schemes can reduce the impact of local minima within neural networks (Bengio et al., 2013; Bengio, 2014). With SPL, after each round of alternating minimization to optimize Eq. (1), \(\lambda\) is increased so that the next round selects samples that have a larger loss, a process (Khan et al., 2011; Tang et al., 2012b; Basu & Christensen, 2013) that can both help avoid local minima and reduce generalization error. In SPLD, \(\gamma\) is also increased between training rounds, increasingly preferring diversity. In each case, each round results in a fully trained model for the currently selected training samples. Selection of training samples has been studied in other settings as well, often with a different motivation. In active learning (AL) (Settles, 2010) and experimental design (Montgomery, 2006), the learner can actively query labels of samples from an unlabeled pool during the training process, and the goal is to reduce annotation costs. The aim is to achieve the same or better performance using fewer labeled samples by ruling out uninformative ones. Diversity modeling was introduced to AL by Wei et al. (2015), who use submodular maximization to select diverse training batches from the most uncertain samples. However, changing the diversity during the learning process has not, as far as we know, been investigated. In boosting (Schapire, 1990; Freund & Schapire, 1997), the goal is to learn an ensemble of weak classifiers sequentially; this is done by assigning weights to all samples, with larger weights given to samples having larger loss as measured by an aggregation of previously trained models. Both active learning and boosting favor samples that are difficult to predict, since they are the most informative to learn. For example, uncertainty sampling (Culotta & McCallum, 2005; Scheffer et al., 2001; Dagan & Engelson, 1995; Dasgupta & Hsu, 2008) selects samples that are most uncertain, while query by committee (Seung et al., 1992; Dagan & Engelson, 1995; Abe & Mamitsuka, 1998) selects the ones that multiple models most disagree on. In machine teaching (Khan et al., 2011; Zhu, 2015; Patil et al., 2014; Zhu et al., 2018), a separate teacher helps the training procedure find a good model.
The SPL approach starts with a smaller set of easy samples and gradually increases the difficulty of the chosen samples, as measured by the sample loss of the model produced by the previous round's training. One of the difficulties of this approach is the following: since for any given value of \(\lambda\) the relatively easiest samples are chosen, there is a good chance that the process repeatedly selects a similar training set over multiple rounds and therefore learns slowly. This is precisely the problem that SPLD addresses: by concomitantly increasing the desired diversity over rounds, the sample selection procedure chooses from an increasingly diverse set of different groups, as measured by \(\| \nu \|_{2,1}\). Therefore, in SPLD, early stages train on easier, not necessarily diverse, samples and later stages train on harder, more diverse samples. There are several challenges remaining with SPLD, however. One is that in early stages, it is still possible to repeatedly select a similar training set over multiple rounds, since diversity might not increase dramatically between successive rounds. Potentially more problematically, it is not clear that having a large diversity selection weight in late stages is desirable. For example, with a reasonably trained model, it might be best to select primarily the hardest samples in the part of the space near the difficult regions of the decision boundaries. With a high diversity weight, samples in these difficult decision boundary regions might be avoided in favor of other samples that are perhaps already well learnt and have a large margin, only because they are diverse, thereby leading to wasted effort. At that point, it would be beneficial to choose points having small margin from the same region even if they do not have the greatest diversity, especially when using only a simple notion of diversity such as the group sparse norm \(\| \nu \|_{2,1}\). Also, it is possible that late stages of learning select outliers only because they are both hard and diverse. Lastly, the SPL/SPLD min-min optimization involves minimizing a lower bound of the loss, while normally one would, if anything, wish to minimize the loss directly or at least an upper bound. Motivated by these issues, we introduce a new form of CL that chooses the hardest diverse samples in early rounds of training and then actually decreases, rather than increases, diversity as training rounds proceed. Our contention is that diversity is more important during the early phases of training, when <--- Page Split ---> only relatively few samples are selected. Later rounds of training will naturally have more diversity opportunity simply because the set of selected samples is much larger. Also, to avoid successive rounds selecting similar sets of samples, our approach selects the hardest, rather than the easiest, samples at each round. Hence, if a set of samples is learnt well during one training round, those samples will tend to be ill-favored in the next round because they have become easier. We also measure hardness via the loss function, but the selection is always based on the hardest and most diverse samples of a given size \(k\), where the degree of diversity is controlled by a parameter \(\lambda\), and where diversity is measured by an arbitrary non-monotone submodular function.
In fact, for binary variables the group sparse norm is also submodular: \(\| \nu \|_{2,1} = \sum_{j = 1}^{b}\sqrt{|C_{j}\cap A|} = F(A)\), where \(A\) is the set for which \(\nu\) is the characteristic vector, and \(C_{j}\) is the set of samples in the \(j^{\mathrm{th}}\) group. Our approach allows the full expressive class of submodular functions to be used to measure diversity, since the selection phase is based on submodular optimization. Evidence for the naturalness of such hardness and diversity adjustment in a curriculum can also be found in human education. For example, courses in primary school usually cover a broad, small, and relatively easy range of topics, in order to expose the young learner to a diversity of knowledge early on. In college and graduate school, by contrast, students focus on advanced, deeper knowledge within their majors. As another example, studies of bilingualism (Bialystok et al., 2012; Li et al., 2014; Mechelli et al., 2004; Kovács & Mehler, 2009) show that learning multiple languages in childhood is beneficial for future brain development, but early-age multi-lingual learning is usually not advanced or concentrated linguistically for any of the languages involved. Still other studies argue that difficulty can be desirable at early human learning stages (Bjork & Bjork, 1992; McDaniel & Butler, 2011). ### 1.1 OUR APPROACH: MINIMAX CURRICULUM LEARNING We introduce a new form of curriculum learning called minimax curriculum learning (MCL). MCL increases desired hardness and reduces diversity encouragement over rounds of training. This is accomplished by solving a sequence of minimax optimizations, each of which has the form: \[\min_{w\in \mathbb{R}^{m}}\max_{A\subseteq V,|A|\leq k}\sum_{i\in A}L\left(y_{i},f(x_{i},w)\right) + \lambda F(A). \quad (2)\] The objective is composed of the loss on a subset \(A\) of samples, evaluating their hardness, and a normalized monotone non-decreasing submodular function \(F:2^{V}\to \mathbb{R}_{+}\) measuring \(A\)'s diversity, where \(V\) is the ground set of all available samples. A larger loss implies that the subset \(A\) has been found harder to learn, while a larger \(F(A)\) indicates greater diversity. The weight \(\lambda\) controls the trade-off between hardness and diversity, while \(k\), the size of the resulting \(A\), determines the number of samples to simultaneously learn and hence is a hardness parameter. It is important to realize that \(F(A)\) is not a parameter regularizer (e.g., \(\ell_{1}\) or \(\ell_{2}\) regularization on the parameters \(w\)) but rather an expression of preference for a diversity of training samples. In practice, one would add to Eq. (2) an appropriate parameter regularizer, as we do in our experiments (Section 3). Like SPL/SPLD, learning rounds are scheduled, here each round with increasing \(k\) and decreasing \(\lambda\). Unlike SPL/SPLD, we explicitly schedule the number of selected samples via \(k\) rather than indirectly via a hardness parameter. This makes sense since we are always choosing the hardest \(k\) samples at a given \(\lambda\) diversity preference, so there is no need for an explicit real-valued hardness parameter as in SPL/SPLD. Also, the MCL optimization minimizes an upper bound of the loss on any size-\(k\) subset of training samples.
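For concreteness, the inner objective of Eq. (2) is easy to evaluate for a fixed candidate set \(A\). The sketch below pairs it with a feature-based submodular \(F\) (the form used, with unit weights, in the experiments of Section 3); this is one illustrative choice among the many submodular functions the framework admits.

```python
import numpy as np

def diversity_F(A, features):
    """Feature-based submodular diversity: sum over feature dimensions of the
    square root of the total (nonnegative) feature mass covered by A."""
    return np.sqrt(features[list(A)].sum(axis=0)).sum()

def mcl_inner_objective(A, losses, features, lam):
    """Inner objective of Eq. (2): hardness (sum of losses over A) plus
    lam times the diversity F(A)."""
    A = list(A)
    return losses[A].sum() + lam * diversity_F(A, features)
```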
The function \(F(\cdot)\) may be chosen from the large, expressive family of submodular functions, all of which are natural for measuring diversity, and all having the following diminishing returns property: given a finite ground set \(V\), any \(A\subseteq B\subseteq V\), and a \(v\notin B\), \[F(v\cup A) - F(A)\geq F(v\cup B) - F(B). \quad (3)\] This implies \(v\) is no less valuable to the smaller set \(A\) than to the larger set \(B\). The marginal gain of \(v\) conditioned on \(A\) is denoted \(F(v|A)\triangleq F(v\cup A) - F(A)\) and reflects the importance of \(v\) to \(A\). Submodular functions (Fujishige, 2005) have been widely used as diversity models (Lin et al., 2009; Lin & Bilmes, 2011; Batra et al., 2012; Prasad et al., 2014; Gillenwater et al., 2012; Iyer & Bilmes, 2015; Bilmes & Bai, 2017). <--- Page Split ---> Although Eq. (2) is a hybrid optimization involving both continuous variables \(w\) and discrete variables \(A\), it can be reduced to the minimization of a piecewise function, where each piece is defined by a subset \(A\) achieving the maximum in a region around \(w\). Each piece is convex when the loss is convex, so various off-the-shelf algorithms can be applied once \(A\) has been computed. However, the number of possible sets \(A\) is \(\binom{n}{k}\), and enumerating them all to find the maximum is intractable. Thanks to submodularity, fast approximate algorithms (Nemhauser et al., 1978; Minoux, 1978; Mirzasoleiman et al., 2015) exist to find an approximately optimal \(A\). Therefore, the outer optimization over \(w\) will need to minimize an approximation of the piecewise function defined by an approximate \(A\) computed via submodular maximization. ## 2 MINIMAX CURRICULUM LEARNING AND MACHINE TEACHING The minimax problem in Eq. (2) can be seen as a two-person zero-sum game between a teacher (the maximizer) and a student (the minimizer): the teacher chooses the training set \(A\) based on the student's feedback about its hardness (i.e., the loss achieved by the current model \(w\)) and on how diverse it is according to the teacher \((\lambda F(A))\), while the student updates \(w\) to reduce the loss on the training set \(A\) (i.e., learns \(A\)) given by the teacher. Similar teacher-student interactions also exist in real life. A teacher usually introduces concepts at the beginning, asks a small number of easy questions from a diverse range of topics, receives feedback from the student, and then further trains the student on the topics the student finds difficult while eschewing topics the student has mastered. MCL's minimax formulation is different from the min-min formulation used in SPL/SPLD. For certain losses and models, \(L(y_{i}, f(x_{i}, w))\) is convex in \(w\). The min-min formulation, however, is only bi-convex and requires procedures such as alternative convex search (ACS) (Bazaraa et al., 1993). Furthermore, the diversity regularization in SPLD destroys bi-convexity altogether. Minimizing the worst-case loss, as in MCL, is a widely used strategy in machine learning (Lanckriet et al., 2003; Farnia & Tse, 2016; Shalev-Shwartz & Wexler, 2016) to achieve better generalization performance and model robustness, especially when strong assumptions cannot be made about the data distribution. Compared to SPL/SPLD, MCL is also better in that the outer minimization over \(w\) in Eq. (2) is a convex program, and corresponds to minimizing the objective \(g(w)\) in Eq. (4).
On the other hand, querying \(g(w)\) requires submodular maximization, which can only be solved approximately. The goal of this section, therefore, is to address the minimax problem in Eq. (2), i.e., the minimization \(\min_{w \in \mathbb{R}^{m}} g(w)\) of the following objective \(g(w)\): \[g(w) \triangleq \max_{A \subseteq V, |A| \leq k} \sum_{i \in A} L(y_{i}, f(x_{i}, w)) + \lambda F(A). \quad (4)\] If the loss function \(L(y_{i}, f(x_{i}, w))\) is convex w.r.t. \(w\), then \(g(w)\) is convex but, as mentioned above, enumerating all subsets is intractable. Defining the discrete objective \(G_{w}: 2^{V} \to \mathbb{R}_{+}\), \[G_{w}(A) \triangleq \sum_{i \in A} L(y_{i}, f(x_{i}, w)) + \lambda F(A), \quad (5)\] shows that computing \(g(w)\) involves a discrete optimization over \(G_{w}(A)\), a problem that is submodular since \(G_{w}(A)\) is a weighted sum of a non-negative (since the loss is non-negative) modular function and a submodular function, and thus \(G_{w}\) is monotone non-decreasing submodular. Thus, the fast greedy procedure mentioned earlier can be used to approximately optimize \(G_{w}(A)\) for any \(w\). Let \(\hat{A}_{w} \subseteq V\) be the \(k\)-constrained greedy approximation to maximizing \(G_{w}(A)\). We define the following approximate objective: \[\hat{g}(w) \triangleq \sum_{i \in \hat{A}_{w}} L(y_{i}, f(x_{i}, w)) + \lambda F(\hat{A}_{w}), \quad (6)\] and note that it satisfies \(\alpha g(w) \leq \hat{g}(w) \leq g(w)\), where \(\alpha\) is the approximation factor of the submodular optimization. For \(\hat{w}\) within a region around \(w\), \(\hat{g}(\hat{w})\) will utilize the same set \(\hat{A}_{w}\). Therefore, \(\hat{g}(w)\) is piecewise convex if the loss function \(L(y_{i},f(x_{i},w))\) is convex w.r.t. \(w\), and different regions within \(\mathbb{R}^{m}\) are associated with different \(\hat{A}\), although not necessarily the same regions or sets that define \(g(w)\). We show in Section 2.2 that minimizing \(\hat{g}(w)\) offers an approximate solution to Eq. (2). <--- Page Split ---> With \(\hat{g}(w)\) given, our algorithm is simply gradient descent for minimizing \(\hat{g}(w)\), where many off-the-shelf methods can be invoked, e.g., SGD, momentum methods, Nesterov's accelerated gradient (Nesterov, 2005), Adagrad (Duchi et al., 2011), etc. The key problem is how to obtain \(\hat{g}(w)\), which depends on suboptimal solutions in different regions of \(w\). It is not necessary, however, to run submodular maximization for every region of \(w\). Since we use gradient descent, we only need to know \(\hat{g}(w)\) for \(w\) on the optimization path. At the beginning of each iteration, we fix \(w\) and use submodular maximization to obtain the \(\hat{A}_{w}\) that defines \(\hat{g}(w)\). Then a gradient update step is applied to \(\hat{g}(w)\). Let \(A_{w}^{*}\) represent the optimal solution to Eq. (5); then \(\hat{A}_{w}\) satisfies \(G(\hat{A}_{w}) \geq \alpha G(A_{w}^{*})\).
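In code, the surrogate of Eq. (6) and its gradient in Eq. (7) take the following shape; `per_sample_loss` and `per_sample_grad` are hypothetical callables for the model's loss and its gradient, and \(\hat{A}_w\) is whatever set the (approximate) submodular maximizer returned at the current \(w\). This is a sketch of the idea, not the authors' implementation.

```python
def surrogate_and_gradient(w, A_hat, per_sample_loss, per_sample_grad, F, lam):
    """Evaluate g_hat(w) (Eq. 6) and its gradient (Eq. 7). A_hat is held
    fixed, so the lam * F(A_hat) term is constant in w and contributes
    nothing to the gradient."""
    g_hat = sum(per_sample_loss(i, w) for i in A_hat) + lam * F(A_hat)
    grad = sum(per_sample_grad(i, w) for i in A_hat)
    return g_hat, grad

# A plain gradient step on the surrogate is then: w = w - eta * grad.
```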
# Algorithm 1 Minimax Curriculum Learning (MCL)
1: input: \(\pi(\cdot ,\eta), \gamma, p, \Delta, \bar{\alpha}\)
2: output: \(w_{T}^{0}\)
3: initialize: \(\tau \gets 1\); \(w_{\tau}^{0}, \lambda, k\)
4: while not "converged" do
5: for \(t\in \{0,\cdots ,p\}\) do
6: \(G(A)\leftarrow \sum_{i\in A}L\left(y_{i},f(x_{i},w_{\tau}^{t})\right) + \lambda F(A);\)
7: \(\hat{A}\gets \text{WS-SUBMODULARMAX}(G,k,\hat{A},\bar{\alpha});\)
8: \(\nabla \hat{g}(w_{\tau}^{t}) = \frac{\partial}{\partial w}\sum_{i\in \hat{A}}L\left(y_{i},f(x_{i},w_{\tau}^{t})\right);\)
9: \(w_{\tau}^{t + 1}\gets w_{\tau}^{t} + \pi \left(\{w_{\tau}^{1:t}\} ,\{\nabla \hat{g}(w_{\tau}^{1:t})\} ,\eta \right);\)
10: end for
11: \(w_{\tau +1}^{0}\gets w_{\tau}^{p},\ \lambda \leftarrow (1 - \gamma)\cdot \lambda,\ k\gets k + \Delta,\ \tau \gets \tau +1;\)
12: end while
Algorithm 1 details MCL. Lines 5-10 solve the optimization in Eq. (2), with \(\lambda\) and \(k\) scheduled in line 11. Lines 6-7 find an approximate \(\hat{A}\) via submodular maximization, discussed further in Section 2.1. Lines 8-9 update \(w\) for the current \(\hat{A}\) by gradient descent \(\pi(\cdot ,\eta)\) with learning rate \(\eta\). The inner optimization stops after \(p\) steps, and then \(\lambda\) is reduced by a factor of \(1 - \gamma\), where \(\gamma \in [0,1]\), and \(k\) is increased by \(\Delta\). The outer optimization stops after \(T\) steps, when a form of "convergence", described below, is achieved. Given \(\hat{A}_{w}\), \(\hat{g}(w)\) has gradient \[\nabla \hat{g}(w) = \frac{\partial}{\partial w}\sum_{i\in \hat{A}_{w}}L\left(y_{i},f(x_{i},w)\right), \quad (7)\] and thus a gradient descent method can update \(w\). For example, we can treat \(\hat{A}\) as a batch if \(k\) is small, and update \(w\) by \(w\gets w - \eta \nabla \hat{g}(w)\) with learning rate \(\eta\). For large \(\hat{A}_{w}\), we can use SGD, which applies an update rule to mini-batches within \(\hat{A}_{w}\). More complex gradient descent rules \(\pi(\cdot ,\eta)\) can take historical gradients and \(w_{\tau}^{t}\)'s into account, leading to \(w^{t + 1}\gets w^{t} + \pi \left(\{w^{1:t}\} ,\{\nabla \hat{g}(w^{1:t})\} ,\eta \right)\). Considering the outer loop as well, the algorithm approximately solves a sequence of instances of Eq. (2) with decreasing \(\lambda\) and increasing \(k\), where the previous solutions act as a warm start for the next iterations. This corresponds to repeatedly updating the model \(w\) on a sequence of training sets \(\hat{A}\) that starts small, diverse, and hard and grows larger in later rounds. ### 2.1 SUBMODULAR MAXIMIZATION Although solving Eq. (5) exactly is NP-hard, a near-optimal solution can be achieved by the greedy algorithm, which offers a worst-case approximation factor of \(\alpha = 1 - e^{- 1}\) (Nemhauser et al., 1978). The algorithm starts with \(A\leftarrow \emptyset\) and selects next the element with the largest marginal gain \(G(v|A)\) from \(V\backslash A\), i.e., \(A\leftarrow A\cup \{v^{*}\}\) where \(v^{*}\in \mathrm{argmax}_{v\in V\backslash A}G(v|A)\), and this repeats until \(|A| = k\). It is simple to implement, fast, and usually outperforms other methods, e.g., those based on integer linear programming. It requires \(\mathcal{O}(nk)\) function evaluations for ground set size \(|V| = n\). Since Algorithm 1 runs greedy \(Tp\) times, it is useful for the greedy procedure to be as fast as possible.
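A minimal sketch of this standard greedy procedure is given below; `gain(v, A)` stands for the marginal gain \(G(v|A)\), which for \(G_w\) in Eq. (5) decomposes into the element's loss plus \(\lambda\) times its marginal diversity gain.

```python
def greedy_max(gain, V, k):
    """Cardinality-constrained greedy maximization (Nemhauser et al., 1978):
    start from the empty set and repeatedly add the element with the largest
    marginal gain, stopping at |A| = k (assumes |V| >= k)."""
    A = set()
    for _ in range(k):
        v_star = max((v for v in V if v not in A), key=lambda v: gain(v, A))
        A.add(v_star)
    return A

# For Eq. (5): gain(v, A) = losses[v] + lam * (F(A | {v}) - F(A)),
# where A | {v} denotes the union of A and {v}.
```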
The accelerated, or lazy, greedy algorithm (Minoux, 1978) reduces the number of evaluations per step by maintaining a priority queue of marginal gains; it has the same output and guarantee as the original (thanks to submodularity) and offers significant speedups. Still faster variants are also available (Mirzasoleiman et al., 2015; 2016). Our own implementation takes advantage of the fact that line 7 of Algorithm 1 repeatedly solves submodular maximization over a sequence of submodular functions that change only slowly, and hence the previous set solution can be used as a warm start for the current submodular maximization process, a procedure we call WS-SUBMODULARMAX and outline in Algorithm 2. The greedy procedure offers much better approximation factors than \(1 - e^{- 1}\) when the objective \(G(A)\) is close to modular. Specifically, the approximation factor becomes \(\alpha = (1 - e^{- \kappa_{G}}) / \kappa_{G}\) (Conforti & Cornuejols, 1984), which depends on the curvature \(\kappa_{G}\in [0,1]\) of \(G(A)\), defined as \[\kappa_{G}\triangleq 1 - \min_{j\in V}\frac{G(j|V\backslash j)}{G(j)}. \quad (8)\] <--- Page Split ---> When \(\kappa_{G} = 0\), \(G\) is modular, and when \(\kappa_{G} = 1\), \(G\) is fully curved and the above bound recovers \(1 - e^{- 1}\). \(G(A)\) becomes more modular as the outer loop proceeds since \(\lambda\) decreases. Therefore, the approximation improves with the number of outer loops. In fact, we have: Lemma 1. Let \(G(A) = L(A) + \lambda F(A)\) where \(F\) is a monotone non-decreasing submodular function with curvature \(\kappa_{F}\), \(L\) is a non-negative modular function, and \(\lambda \geq 0\). Then \(\kappa_{G} \leq \kappa_{F} / (c_{1} / \lambda + 1)\) where \(c_{1} = \min_{j \in V} L(j) / F(j)\). The proof is given in Appendix 4.1. In MCL, therefore, the submodular approximation improves \((\alpha \to 1)\) as \(\lambda\) shrinks, and the surrogate function \(\hat{g}(w)\) correspondingly approaches the true convex objective \(g(w)\). ### 2.2 CONDITIONS AT CONVERGENCE In this section, we study how close the solution \(\hat{w}\), obtained by applying gradient descent to \(\hat{g}(w)\), is to the optimum, where we assume \(p\) is large enough so that a form of convergence occurs. Specifically, in Theorem 1, we analyze an upper bound on \(\| \hat{w} - w^{*}\|_{2}^{2}\) based on two assumptions: 1) the loss \(L\left(y_{i},f(x_{i},w)\right)\) is \(\beta\)-strongly convex w.r.t. \(w\); and 2) \(\hat{w}\) is achieved by running gradient descent in lines 6-9 of Algorithm 1 until convergence, defined as the gradient reaching zero. In case the loss \(L\left(y_{i},f(x_{i},w)\right)\) is convex but not \(\beta\)-strongly convex, a commonly used trick to make it \(\beta\)-strongly convex is to add an \(\ell_{2}\) regularization term \((\beta /2)\| w\|_{2}^{2}\). In addition, for non-convex \(L\left(y_{i},f(x_{i},w)\right)\), it is possible to prove that, with high probability, a noise-perturbed SGD on \(\hat{g}(w)\) can hit an \(\epsilon\)-optimal local solution of \(g(w)\) in polynomial time; we leave this for future work. In our empirical study (Section 3), MCL achieves good performance even when applied to non-convex deep neural networks. The following theorem relies on the fact that the maximum of multiple \(\beta\)-strongly convex functions is also \(\beta\)-strongly convex, shown in Appendix 4.2. Theorem 1 (Inner-loop convergence). For the minimax problem in Eq.
(2) with ground set of samples \(V\) and \(\lambda \geq 0\), if the loss function \(L\left(y_{i},f(x_{i},w)\right)\) is \(\beta\)-strongly convex and \(|V| \geq k\), then running lines 6-9 of Algorithm 1 until convergence (defined as the gradient reaching zero) yields a solution \(\hat{w}\) satisfying \[\| \hat{w} -w^{*}\|_{2}^{2} \leq \frac{2}{k\beta} \left(\frac{1}{\alpha} -1\right) \cdot g(w^{*}), \quad (9)\] where \(\hat{w}\) is the solution achieved at convergence, \(w^{*}\) is the optimal solution of the minimax problem in Eq. (2), \(g(w^{*})\) is the objective value achieved at \(w^{*}\), and \(\alpha\) is the approximation factor that submodular maximization can guarantee for \(G(A)\). The proof is given in Appendix 4.3. It is interesting to note that the bound depends both on the strong convexity parameter \(\beta\) and on the submodular maximization approximation \(\alpha\). As mentioned in Lemma 1, as \(\lambda\) gets smaller, the approximation factor \(\alpha\) approaches 1, meaning that the bound in Equation (9) improves. A remark on the convergence criterion of the gradient reaching zero: while it is possible, in theory, for lines 6-9 of Algorithm 1 to oscillate amongst the non-differentiable boundaries between the convex pieces, with most damped learning rates this will eventually subside and the algorithm will remain within one convex piece. The reason is that line 7 of the algorithm always chooses one \(\hat{A}\), thereby selecting one convex piece associated with the region around \(w_{\tau}^{t}\), and with only small subsequent adjustments to \(w_{\tau}^{t}\), the same \(\hat{A}\) will continue to be selected. Hence, the algorithm will, in that case, reach the minimum of that convex piece where the gradient is zero. We can restate and then simplify the above bound in terms of the resulting parameters, and the corresponding \(\lambda, k\) values, used at a particular iteration \(\tau\) of the outer loop. In the following, \(\hat{w}_{\tau}\) is the solution achieved by Algorithm 1 at iteration \(\tau\) of the outer loop, and the optimal solution of the minimax problem in Eq. (2) with \(\lambda, k\) set as in iteration \(\tau\) is denoted \(w_{\tau}^{*}\). Corollary 1. If the loss function \(L\left(y_{i},f(x_{i},w)\right)\) is \(\beta\)-strongly convex, the submodular function \(F(\cdot)\) has curvature \(\kappa_{F}\), and each inner loop in Algorithm 1 runs until convergence, then the solution \(\hat{w}_{\tau}\) at the end of the \(\tau^{th}\) iteration of the outer loop fulfills: \[\| \hat{w}_{\tau} - w_{\tau}^{*}\|_{2}^{2} \leq \frac{2\kappa_{F}}{k\beta(c_{1} / \lambda + 1)} g(w_{\tau}^{*}) \leq \frac{2\kappa_{F}}{\beta c_{1}} \times \frac{\lambda}{k} \times g(w_{\tau}^{*}), \quad (10)\] <--- Page Split ---> where \(w_{\tau}^{*}\) is the optimal solution of the minimax problem in Eq. (2) with \(\lambda\) set as in the \(\tau^{th}\) outer loop iteration.
Thus, if \(k\) starts from \(k_{0}\) and increases linearly via \(k\gets k + \Delta\) (as in line 11 of Algorithm 1), then \[\| \hat{w}_{\tau} - w_{\tau}^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}\lambda_{0}}{\beta c_{1}}\times \frac{(1 - \gamma)^{\tau}}{(k_{0} + \tau\Delta)}\times [g(w_{\infty}^{*}) + \lambda_{0}c_{2}(1 - \gamma)^{\tau}]; \quad (11)\] otherwise, if \(k\) increases exponentially, i.e., \(k\gets (1 + \Delta)\cdot k\), then \[\| \hat{w}_{\tau} - w_{\tau}^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}\lambda_{0}}{\beta c_{1}k_{0}}\times \left(\frac{1 - \gamma}{1 + \Delta}\right)^{\tau}\times [g(w_{\infty}^{*}) + \lambda_{0}c_{2}(1 - \gamma)^{\tau}]. \quad (12)\] In the above, \(\lambda_{0}\) and \(k_{0}\) are the initial values for \(\lambda\) and \(k\), \(c_{1} = \min_{j\in V,t\in [1,\tau ]}[L(y_{j},f(x_{j},\hat{w}_{\tau}^{t})) / F(j)]\), \(c_{2} = \max_{A\subseteq V,|A|\leq k}F(A)\), and \(g(w_{\infty}^{*}) = \min_{w\in \mathbb{R}^{m}}\max_{A\subseteq V,|A|\leq k}\sum_{i\in A}L(y_{i},f(x_{i},w))\). The proof can be found in Appendix 4.5. On the one hand, the upper bound above is in terms of the ratio \(\lambda /k\), which improves with larger subset sizes. On the other hand, submodular maximization becomes more expensive with \(k\). Hence, Algorithm 1 chooses a schedule that decreases \(\lambda\) exponentially and increases \(k\) only linearly. Also, we see that the bound depends on the submodular curvature \(\kappa_{F}\), the strong-convexity constant \(\beta\), and \(c_{1}\), which relates the submodular and modular terms (similarly to Lemma 1). These quantities (\(\kappa_{F} / \beta\) and \(c_{1}\)) might be relevant for other convex-submodular optimization schemes. ### 2.3 HEURISTIC IMPROVEMENTS There are several heuristic improvements we employ, described next. Algorithm 1 stops gradient descent after \(p\) steps. A reason for doing this is that \(\hat{w}^{p}\) can be sufficient as a warm start for the next iteration if \(p\) is large enough. We also have not observed any benefit from larger \(p\), although we do eventually observe convergence empirically, when the average loss no longer changes appreciably between stages. Also, lines 6-7 of Algorithm 1 require computing the loss on all the samples, and each step of the greedy algorithm needs to, in the worst case, evaluate the marginal gains of all of the unselected samples. Moreover, this is done repeatedly in the innermost block of two nested loops. Therefore, we use two heuristic tricks to improve efficiency. First, rather than selecting individual samples, we cluster the data and then select clusters, thereby reducing the ground set size from the number of samples to the number of clusters. We replace the per-sample loss \(L\left(y_{i},f(x_{i},w)\right)\) with a per-cluster loss \(L\left(Y^{(i)},f(X^{(i)},w)\right)\) that we approximate by the loss of the sample closest to the centroid within each cluster: \[L\left(Y^{(i)},f(X^{(i)},w)\right)\triangleq \sum_{j\in C^{(i)}}L\left(y_{j},f(x_{j},w)\right)\approx |C^{(i)}|L\left(y^{(i)},f(x^{(i)},w)\right), \quad (13)\] where \(C^{(i)}\) is the set of indices of the samples in the \(i^{\mathrm{th}}\) cluster, and \(x^{(i)}\) with label \(y^{(i)}\) is the sample closest to the cluster centroid. We find that the loss on \(x^{(i)}\) is sufficiently representative to approximately indicate the hardness of the cluster.
The set \(V\) becomes the set of clusters and \(A\subseteq V\) is a set of clusters; the ground set size is thus reduced, speeding up the greedy algorithm. When computing \(F(A)\), the diversity of the selected clusters, cluster centroids again represent their clusters. In line 8, the gradient is computed on all the samples in the selected clusters rather than on only \(x^{(i)}\), at which point the labels of all the samples in the selected clusters are used. Otherwise, when selecting clusters via submodular maximization, the labels of only the centroid samples are needed. Thus, we need only annotate and compute the loss for samples in the selected clusters and the representative centroid samples \(x^{(i)}\) of the other clusters. This also reduces the need to label all samples up front, as only the labels of the selected clusters, and the centroid samples of each cluster, are used (i.e., the clustering process itself does not use the labels). We can further reduce the ground set to save computation during submodular maximization via prefiltering methods that lead either to no (Wei et al., 2014a) or little (Zhou et al., 2017; Mirzasoleiman et al., 2015) reduction in approximation quality. Moreover, as \(\lambda\) decreases in the MCL objective and \(G(A)\) becomes more modular, such pruning methods become more effective. More details are given in Section 4.6. <--- Page Split ---> ## 3 EXPERIMENTS <table><tr><td>Dataset</td><td>News20</td><td>MNIST</td><td>CIFAR10</td><td>STL10</td><td>SVHN</td><td>Fashion</td></tr><tr><td>Method</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>SGD(random)</td><td>14.27</td><td>0.88</td><td>18.52</td><td>21.76</td><td>5.20</td><td>7.79</td></tr><tr><td>SPL</td><td>15.43</td><td>1.25</td><td>21.14</td><td>20.63</td><td>5.67</td><td>7.46</td></tr><tr><td>SPLD</td><td>16.23</td><td>1.18</td><td>20.79</td><td>21.25</td><td>5.40</td><td>7.80</td></tr><tr><td>MCL(Δ = 0, λ = 0, γ = 0)</td><td>15.99</td><td>1.23</td><td>18.04</td><td>20.50</td><td>5.37</td><td>7.95</td></tr><tr><td>MCL(Δ = 0, λ &gt; 0, γ &gt; 0)</td><td>16.54</td><td>0.95</td><td>17.33</td><td>19.70</td><td>4.95</td><td>7.29</td></tr><tr><td>MCL(Δ &gt; 0, λ &gt; 0, γ = 0)</td><td>15.45</td><td>0.82</td><td>16.93</td><td>20.40</td><td>5.29</td><td>7.07</td></tr><tr><td>MCL-RAND</td><td>16.23</td><td>0.80</td><td>17.12</td><td>20.42</td><td>5.18</td><td>6.92</td></tr><tr><td>MCL(Δ &gt; 0, λ &gt; 0, γ &gt; 0)</td><td>14.12</td><td>0.75</td><td>12.87</td><td>17.83</td><td>4.19</td><td>6.36</td></tr></table> Table 1: Test error \((\%)\) for different methods (SGD shows the lowest error out of 10 random trials). In this section, we apply different curriculum learning methods to train logistic regression models on 20newsgroups (Lang, 1995), LeNet5 models on MNIST (Lecun et al., 1998), convolutional neural nets (CNNs) with three convolutional layers on CIFAR10 (Krizhevsky & Hinton, 2009), CNNs with two convolutional layers on Fashion-MNIST ("Fashion" in all tables) (Xiao et al., 2017), CNNs with six convolutional layers on STL10 (Coates et al., 2011), and CNNs with seven convolutional layers on SVHN (Netzer et al., 2011). Details on the datasets can be found in Table 3 of the appendix. In all cases, we also use \(\ell_{2}\) parameter regularization on \(w\) with weight \(10^{-4}\) (i.e., the weight decay factor of the optimizer).
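Since all the MCL variants compared below rely on the clustering trick of Eq. (13), a minimal sketch may help; `clusters` and `centroid_idx` are assumed to come from mini-batch \(k\)-means, and `per_sample_loss` is a hypothetical callable evaluating the current model on one sample.

```python
import numpy as np

def cluster_losses(per_sample_loss, w, clusters, centroid_idx):
    """Approximate each per-cluster loss (Eq. 13) by the loss of the sample
    nearest to the cluster centroid, scaled by the cluster size; only these
    centroid-nearest samples need labels at selection time."""
    return np.array([len(C) * per_sample_loss(centroid_idx[i], w)
                     for i, C in enumerate(clusters)])
```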
We compare MCL and its variants to SPL (Kumar et al., 2010), SPLD (Jiang et al., 2014), and SGD with a random curriculum (i.e., with random batches). Each method uses mini-batch SGD for \(\pi(\cdot ,\eta)\) with the same learning rate strategy to update \(w\). The methods, therefore, differ only in the curriculum (i.e., the sequence of training sets). For SGD, in each iteration, we randomly select 4000 samples (20newsgroups) or 5000 samples (other datasets) and apply mini-batch SGD to the selected samples. In SPL and SPLD, the training set starts from a fixed size \(k\) (4000 samples for 20newsgroups, 5000 samples for the other datasets), and increases by a factor of \(1 + \mu\) (where \(\mu = 0.1\)) per round of alternating minimization (i.e., per iteration of the outer loop). We use \(\rho\) to denote the number of iterations of the inner loop, which aims to minimize the loss w.r.t. the model \(w\) on the selected training set. In SPLD, we also have a weight for the negative group sparsity: it starts from \(\xi\) and increases by a factor of 1.1 at each round of alternating minimization (i.e., per iteration of the outer loop). We test five different combinations of \(\{\rho ,\mu \}\) and \(\{\rho ,\xi \}\) for SPL and SPLD respectively, and report the combination with the smallest test error rate. Neither SPL nor SPLD uses the clustering trick we applied to MCL: they compute the exact loss on each sample in each iteration. Hence, they have a more accurate estimate of the hardness of each sample, but they require knowing the labels of all samples (selected and unselected) and cannot reduce annotation costs. Note that SPLD still needs to run clustering and use the resulting clusters as groups in the group sparsity (which measures diversity in SPLD). We did not select samples with SPL/SPLD as we do with MCL since we wanted to test SPL/SPLD as originally presented; intuitively, SPL/SPLD should, if anything, only do better without such clustering due to the more accurate sample-specific hardness estimation. The actual clustering used for SPLD's diversity term, however, is the same as that used to form MCL's clusters. We apply the mini-batch \(k\)-means algorithm to the features detailed in the next paragraph to get the clusters used in MCL and SPLD. Although both SPL and SPLD reduce to SGD when \(\lambda \to \infty\) (i.e., all samples always selected), we do not include this special case because SGD is already a baseline. For SGD with a random curriculum, results of 10 independent trials are reported. In our MCL experiments, we use a simple "feature-based" submodular function (Wei et al., 2014b), \(F(A) = \sum_{u\in \mathcal{U}}\omega_{u}\sqrt{c_{u}(A)}\), where \(\mathcal{U}\) is a set of features. For a subset \(A\) of clusters, <--- Page Split --->
For the other datasets, we train a corresponding neural networks on a small random subset of training data (e.g., hundreds of samples) for one epoch, and use the inputs to the last fully connected layer (whose outputs are processed by softmax to generate class probabilities) as features. Because we always use ReLU activations between layers, the features are all nonnegative and the submodularity of \(F(A)\) follows as a consequence. These features are also used by mini- batch \(k\) - means to generate clusters for MCL and SPLD. For MCL, we set the number of inner loop iterations to \(p\leq 50\) . For each dataset, we choose \(p\) as the number among \(\{10,20,50\}\) that reduces the training loss the most in the first few iterations of the outer loop, and then use that \(p\) for the remaining iterations. As shown in Table 4, we use \(p = 50\) for 20newsgroups, MNIST and Fashion- MNIST, and \(p = 20\) for the other three datasets. We consider five variants of MCL: 1) MCL( \(\Delta = 0\) , \(\lambda = 0\) , \(\gamma = 0\) ) having neither submodular regularization that promotes diversity nor scheduling of \(k\) that increases hardness; 2) MCL( \(\Delta = 0\) , \(\lambda >0\) , \(\gamma >0\) ), which decreases diversity by exponentially reducing the weight \(\lambda\) of the submodular regularization, but does not have any scheduling of \(k\) , i.e., \(k\) is fixed during the algorithm; 3) MCL( \(\Delta >0\) , \(\lambda >0\) , \(\gamma = 0\) ), which only uses the scheduling of \(k\) shown in Algorithm 1, but the diversity weight \(\lambda\) is positive and fixed during the algorithm, i.e., with \(\gamma = 0\) ; 4) MCL- RAND( \(r,q\) ), which randomly samples \(r\) clusters as a training set \(A\) after every \(q\) rounds of the outer loop in Algorithm 1, and thus combines both MCL and SGD; 5) MCL( \(\Delta >0\) , \(\lambda >0\) , \(\gamma >0\) ), which uses the scheduling of both \(\lambda\) and \(k\) shown in Algorithm 1. We tried five different combinations of \(\{q,r\}\) for MCL- RAND( \(r,q\) ) and five different \(\Delta\) values for MCL( \(\Delta >0\) , \(\lambda >0\) , \(\gamma >0\) ), and report the one with the smallest test error. Other parameters, such as the initial values for \(\lambda\) and \(k\) , the values for \(\gamma\) and \(p\) , and the total number of clusters are the same for different variants (the exact values of these quantities are given in Table 4 of the Appendix). ![](images/8_0.jpg) <center>Figure 1: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on 20newsgroups (grey curves represents 10 random trials of SGD). </center> In MCL, running greedy is the only extra computation comparing to normal SGD. To show that in our implementation (see Section 4.6) its additional time cost is negligible, we report in Table 2 the total time cost for MCL( \(\Delta >0\) , \(\lambda >0\) , \(\gamma >0\) ) and the time spent on our implementation WS- SUBMODULARMAX. We summarize the main results in Figure 1- 8. More results are given at the end of the appendix (Section 4.7). In all figures, grey curves correspond to the ten trials of SGD under a random curriculum. The legend in all figures gives the parameters used for the different methods using the following labels: 1) SPL \((\rho ,\mu)\) ; 2) SPLD \((\rho ,\xi)\) ; and 3) MCL- RAND \((q,r)\) . 
Figures 1-6 show how the test error changes with (on the left) the number of distinct labeled samples ever needing a loss gradient calculation and (on the right) the number of training batches, corresponding to training time. <--- Page Split ---> ![](images/9_0.jpg) <center>Figure 2: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on CIFAR10 (grey curves represent 10 random trials of SGD). </center> Note that only MCL and its variants use the clustering trick, while SPL/SPLD need to compute the loss on every sample and thus require knowledge of the labels of all samples. The left plots show only the number of loss gradient calculations needed: 1) in MCL, for those clusters never selected in the curriculum, the loss (and hence the label) of only the centroid sample is needed; 2) in SPL/SPLD, for those samples never selected in the curriculum, their labels are needed only to compute the loss but not the gradient, so they are not reflected in the left plots of all figures because their labels are not used to compute a gradient. Therefore, thanks to the clustering trick, MCL and its variants can train without needing all labels, similar to semi-supervised learning methods. This can help to reduce annotation costs if an MCL process is done in tandem with a labeling procedure analogous to active learning. The right plots very roughly indicate convergence rate, namely how the test error decreases as a function of the amount of training. ![](images/9_1.jpg) <center>Figure 3: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on Fashion-MNIST (grey curves represent 10 random trials of SGD). </center> ![](images/9_2.jpg) <center>Figure 4: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on STL10 (grey curves represent 10 random trials of SGD). </center> <--- Page Split ---> ![](images/10_0.jpg) <center>Figure 5: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on SVHN (grey curves represent 10 random trials of SGD). </center> On all datasets, MCL and most of its variants outperform SPL and SPLD in terms of final test accuracy (shown in Table 1) with comparable efficiency (shown in the right plots of all figures). MCL is slightly slower than SGD to converge in early stages, but it can achieve a much smaller error when using the same number of labeled samples for loss gradients. Moreover, when using the same learning rate strategy, MCL and its variants can be more robust to overfitting, as shown in Figure 2. Comparing Figure 1 with Figures 2-6, MCL has the advantage when applied to deep models. ![](images/10_1.jpg) <center>Figure 6: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on MNIST (grey curves represent 10 random trials of SGD). </center> ![](images/10_2.jpg) <center>Figure 7: Number of distinct labeled samples ever needing loss gradient calculation vs. number of training batches for News20 (left), CIFAR10 (middle) and MNIST (right) (grey curves represent 10 random trials of SGD).
Among the five variants of MCL, MCL(\(\lambda >0\), \(\gamma >0\), \(\Delta >0\)) achieves the fastest convergence in later stages and the smallest final test error, while MCL(\(\Delta = 0\), \(\lambda = 0\), \(\gamma = 0\)) usually achieves the worst performance (the only exception is on News20). The comparison between MCL(\(\Delta = 0\), \(\lambda >0\), \(\gamma >0\)) and MCL(\(\lambda = 0\), \(\gamma = 0\), \(\Delta = 0\)) shows that the decaying diversity regularization improves performance. MCL(\(\lambda >0\), \(\gamma >0\), \(\Delta >0\)) always outperforms MCL(\(\Delta = 0\), \(\lambda >0\), \(\gamma >0\)); this indicates that increasing \(k\) can bring advantages, e.g., more improvement in later stages.

![](images/11_0.jpg) <center>Figure 8: Number of distinct labeled samples ever needing loss gradient calculation vs. number of training batches for Fashion-MNIST (left), STL10 (middle), and SVHN (right) (grey curves represent 10 random trials of SGD). </center>

MCL(\(\lambda >0\), \(\gamma >0\), \(\Delta >0\)) always outperforms MCL(\(\Delta >0\), \(\lambda >0\), \(\gamma = 0\)), which supports our claim that it is better to decrease the diversity as training proceeds rather than to keep it fixed. In particular, MCL(\(\Delta >0\), \(\lambda >0\), \(\gamma = 0\)) shows slower convergence than the other MCL variants in later stages. In our experiments with MCL(\(\Delta >0\), \(\lambda >0\), \(\gamma = 0\)), we needed to choose \(\lambda\) carefully and use a relatively large \(\Delta\) for it to work at all, as otherwise it would repeatedly choose the same subset (with small \(\Delta\), the loss term decreases as training proceeds, so with fixed \(\lambda\) the diversity term comes to dominate the objective). This suggests that a large diversity encouragement is neither necessary nor beneficial once the model matures, possibly because \(k\) is large at that point, so a diverse set of samples is selected simply by virtue of its size, and also because encouraging too much loss-unspecific diversity at that point might only select outliers.

The combination of MCL and a random curriculum (MCL-RAND) speeds up convergence, and sometimes (e.g., on MNIST, SVHN, and Fashion-MNIST) leads to good final test accuracy, but it requires more labeled samples for gradient computation and still cannot outperform MCL(\(\lambda >0\), \(\gamma >0\), \(\Delta >0\)). These results indicate that the diversity introduced by submodular regularization does yield improvements, and that changing both hardness and diversity improves performance.

Figures 7 and 8 show how the "number of distinct labeled samples ever needing loss gradient calculation" changes as training proceeds. They show how the different methods trade off between "training on more new samples" and "training on fewer distinct samples more often." Thanks to the clustering trick, MCL and its variants usually require fewer labeled samples for model training than SGD but more than SPL and SPLD.

Acknowledgments

This work was done in part while author Bilmes was visiting the Simons Institute for the Theory of Computing in Berkeley, CA. This material is based upon work supported by the National Science Foundation under Grant No. IIS-1162606, the National Institutes of Health under award R01GM103544, and by a Google, a Microsoft, a Facebook, and an Intel research award.
This work was supported in part by TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA.

## REFERENCES

Naoki Abe and Hiroshi Mamitsuka. Query learning strategies using boosting and bagging. In ICML, pp. 1-9, 1998.
Eugene L. Allgower and Kurt Georg. Introduction to Numerical Continuation Methods. Society for Industrial and Applied Mathematics, 2003.
Sumit Basu and Janara Christensen. Teaching classification boundaries to humans. In AAAI, pp. 109-115, 2013.
Dhruv Batra, Payman Yadollahpour, Abner Guzman-Rivera, and Gregory Shakhnarovich. Diverse m-best solutions in Markov random fields. In ECCV, pp. 1-16, 2012.
Mokhtar S. Bazaraa, Hanif D. Sherali, and C. M. Shetty. Nonlinear programming: theory and algorithms (2nd ed.). Wiley, 1993.
Yoshua Bengio. Evolving Culture Versus Local Minima, pp. 109-138. Springer Berlin Heidelberg, 2014.
Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In ICML, pp. 41-48, 2009.
Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.
Ellen Bialystok, Fergus I. M. Craik, and Gigi Luk. Bilingualism: consequences for mind and brain. Trends in Cognitive Sciences, 16(4):240-250, 2012.
Jeffrey A. Bilmes and Wenruo Bai. Deep submodular functions. CoRR, abs/1701.08939, 2017. URL http://arxiv.org/abs/1701.08939.
Robert A. Bjork and Elizabeth Ligon Bjork. A new theory of disuse and an old theory of stimulus fluctuation. From Learning Processes to Cognitive Processes: Essays in Honor of William K. Estes, 2:35-67, 1992.
Adam Coates, Honglak Lee, and Andrew Y. Ng. An analysis of single-layer networks in unsupervised feature learning. In AISTATS, pp. 215-223, 2011.
Michele Conforti and Gerard Cornuejols. Submodular set functions, matroids and the greedy algorithm: Tight worst-case bounds and some generalizations of the Rado-Edmonds theorem. Discrete Applied Mathematics, 7(3):251-274, 1984.
Aron Culotta and Andrew McCallum. Reducing labeling effort for structured prediction tasks. In AAAI, pp. 746-751, 2005.
Ido Dagan and Sean P. Engelson. Committee-based sampling for training probabilistic classifiers. In ICML, pp. 150-157, 1995.
Sanjoy Dasgupta and Daniel Hsu. Hierarchical sampling for active learning. In ICML, pp. 208-215, 2008.
John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2011.
Farzan Farnia and David Tse. A minimax approach to supervised learning. In NIPS, pp. 4240-4248, 2016.
U. Feige. A threshold of \(\ln n\) for approximating set cover. Journal of the ACM (JACM), 1998.
Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
Satoru Fujishige. Submodular Functions and Optimization. Annals of Discrete Mathematics. Elsevier, 2005.
Jennifer Gillenwater, Alex Kulesza, and Ben Taskar. Near-optimal MAP inference for determinantal point processes. In NIPS, pp. 2735-2743, 2012.
R. Iyer and J. Bilmes. Algorithms for approximate minimization of the difference between submodular functions, with applications. In UAI, 2012.
Rishabh Iyer and Jeff A. Bilmes. Submodular point processes with applications in machine learning. In AISTATS, May 2015.
Rishabh Iyer, Stefanie Jegelka, and Jeff A. Bilmes. Fast semidifferential-based submodular function optimization. In ICML, 2013.
S. Jegelka and J. Bilmes. Submodularity beyond submodular energies: coupling edges in graph cuts. In Computer Vision and Pattern Recognition (CVPR), 2011.
Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, and Alexander G. Hauptmann. Self-paced learning with diversity. In NIPS, pp. 2078-2086, 2014.
Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G. Hauptmann. Self-paced curriculum learning. In AAAI, pp. 2694-2700, 2015.
Faisal Khan, Xiaojin (Jerry) Zhu, and Bilge Mutlu. How do humans teach: On curriculum learning and teaching dimension. In NIPS, pp. 1449-1457, 2011.
Ágnes Melinda Kovács and Jacques Mehler. Flexible learning of multiple speech structures in bilingual infants. Science, 325(5940):611-612, 2009.
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
M. Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In NIPS, pp. 1189-1197, 2010.
Gert R. G. Lanckriet, Laurent El Ghaoui, Chiranjib Bhattacharyya, and Michael I. Jordan. A robust minimax approach to classification. Journal of Machine Learning Research, 3:555-582, 2003.
Ken Lang. Newsweeder: Learning to filter netnews. In ICML, pp. 331-339, 1995.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Ping Li, Jennifer Legault, and Kaitlyn A. Litcofsky. Neuroplasticity as a function of second language learning: Anatomical changes in the human brain. Cortex, 58:301-324, 2014.
Hui Lin and Jeff A. Bilmes. A class of submodular functions for document summarization. In ACL, pp. 510-520, 2011.
Hui Lin, Jeff A. Bilmes, and Shasha Xie. Graph-based submodular selection for extractive summarization. In Proc. IEEE Automatic Speech Recognition and Understanding (ASRU), Merano, Italy, December 2009.
Mark A. McDaniel and Andrew C. Butler. A contextual framework for understanding when difficulties are desirable. Successful Remembering and Successful Forgetting: A Festschrift in Honor of Robert A. Bjork, pp. 175-198, 2011.
Andrea Mechelli, Jenny T. Crinion, Uta Noppeney, John O'Doherty, John Ashburner, Richard S. Frackowiak, and Cathy J. Price. Neurolinguistics: Structural plasticity in the bilingual brain. Nature, 431(7010):757, 2004.
Michel Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Optimization Techniques, volume 7 of Lecture Notes in Control and Information Sciences, chapter 27, pp. 234-243. Springer Berlin Heidelberg, 1978.
Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrák, and Andreas Krause. Lazier than lazy greedy. In AAAI, pp. 1812-1818, 2015.
Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause. Distributed submodular maximization. Journal of Machine Learning Research, 17(238):1-44, 2016.
Douglas C. Montgomery. Design and Analysis of Experiments. John Wiley & Sons, 2006.
M. Narasimhan and J. Bilmes. A submodular-supermodular procedure with applications to discriminative structure learning. In UAI, 2005.
G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions-I. Mathematical Programming, 14(1):265-294, 1978.
Yurii Nesterov.
Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.
Yurii Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
Kaustubh R. Patil, Xiaojin (Jerry) Zhu, Łukasz Kopec, and Bradley C. Love. Optimal teaching for limited-capacity human learners. In NIPS, pp. 2465-2473, 2014.
Adarsh Prasad, Stefanie Jegelka, and Dhruv Batra. Submodular meets structured: Finding diverse subsets in exponentially-large structured item sets. In NIPS, pp. 2645-2653, 2014.
Robert E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197-227, 1990.
Tobias Scheffer, Christian Decomain, and Stefan Wrobel. Active hidden Markov models for information extraction. In CAIDA, pp. 309-318, 2001.
Burr Settles. Active learning literature survey. Technical report, University of Wisconsin, Madison, 2010.
H. S. Seung, M. Opper, and H. Sompolinsky. Query by committee. In COLT, pp. 287-294, 1992.
Shai Shalev-Shwartz and Yonatan Wexler. Minimizing the maximal loss: How and why. In International Conference on Machine Learning (ICML), pp. 793-801, 2016.
Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. Baby Steps: How "Less is More" in unsupervised dependency parsing. In NIPS 2009 Workshop on Grammar Induction, Representation of Language and Language Learning, 2009.
James Steven Supancic III and Deva Ramanan. Self-paced learning for long-term tracking. In CVPR, pp. 2379-2386, 2013.
Kevin Tang, Vignesh Ramanathan, Li Fei-Fei, and Daphne Koller. Shifting weights: Adapting object detectors from image to video. In NIPS, pp. 638-646, 2012a.
Ye Tang, Yu-Bin Yang, and Yang Gao. Self-paced dictionary learning for image classification. In MM, pp. 833-836, 2012b.
Kai Wei, Rishabh Iyer, and Jeff A. Bilmes. Fast multi-stage submodular maximization. In ICML, 2014a.
Kai Wei, Yuzong Liu, Katrin Kirchhoff, Chris D. Bartels, and Jeff A. Bilmes. Submodular subset selection for large-scale speech training data. In ICASSP, pp. 3311-3315, 2014b.
Kai Wei, Rishabh Iyer, and Jeff A. Bilmes. Submodularity in data subset selection and active learning. In ICML, 2015.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms, 2017.
Tianyi Zhou, Hua Ouyang, Jeff A. Bilmes, Yi Chang, and Carlos Guestrin. Scaling submodular maximization via pruned submodularity graphs. In AISTATS, 2017.
X. Zhu, A. Singla, S. Zilles, and A. N. Rafferty. An Overview of Machine Teaching. ArXiv e-prints, January 2018.
Xiaojin (Jerry) Zhu. Machine teaching: An inverse problem to machine learning and an approach toward optimal education. In AAAI, pp. 4083-4087, 2015.

## 4 APPENDIX

### 4.1 PROOF OF LEMMA 1

Proof. We have
\[\kappa_{G} = 1 - \min_{j\in V}\frac{L(j) + \lambda F(j|V\backslash j)}{L(j) + \lambda F(j)} = \lambda \cdot \max_{j\in V}\frac{F(j) - F(j|V\backslash j)}{L(j) + \lambda F(j)}\]
\[\qquad = \lambda \cdot \max_{j\in V}\frac{1 - \frac{F(j|V\backslash j)}{F(j)}}{\frac{L(j)}{F(j)} + \lambda}\leq \frac{\lambda \cdot \kappa_{F}}{\min_{j\in V}\frac{L(j)}{F(j)} + \lambda} = \frac{\kappa_{F}}{c_{1} / \lambda + 1},\]
where \(c_{1} \triangleq \min_{j \in V} \frac{L(j)}{F(j)}\).

### 4.2 PROOF OF PROPOSITION 1

Proposition 1.
The maximum of multiple \(\beta\)-strongly convex functions is \(\beta\)-strongly convex as well.

Proof. Let \(g(x) = \max_{i} g_{i}(x)\), where each \(g_{i}(x)\) is \(\beta\)-strongly convex. According to a definition of strongly convex functions given in Theorem 2.1.9 (page 64) of (Nesterov, 2004), \(\forall \lambda \in [0,1]\) we have
\[g_{i}(\lambda x + (1 - \lambda)y)\leq \lambda g_{i}(x) + (1 - \lambda)g_{i}(y) - \frac{\beta}{2}\lambda (1 - \lambda)\| x - y\|_{2}^{2},\forall i.\]
The following proves that \(g(x)\) is also \(\beta\)-strongly convex:
\[g(\lambda x + (1 - \lambda)y) = \max_{i}g_{i}(\lambda x + (1 - \lambda)y)\]
\[\leq \max_{i}[\lambda g_{i}(x) + (1 - \lambda)g_{i}(y)] - \frac{\beta}{2}\lambda (1 - \lambda)\| x - y\|_{2}^{2}\]
\[\leq \max_{i}\lambda g_{i}(x) + \max_{i}(1 - \lambda)g_{i}(y) - \frac{\beta}{2}\lambda (1 - \lambda)\| x - y\|_{2}^{2}\]
\[= \lambda g(x) + (1 - \lambda)g(y) - \frac{\beta}{2}\lambda (1 - \lambda)\| x - y\|_{2}^{2}.\]

### 4.3 PROOF OF THEOREM 1

Proof. The objective \(g(w)\) of the minimax problem in Eq. (2), after eliminating \(A\), is given in Eq. (4). Since \(G(A)\) in Eq. (5) is monotone non-decreasing submodular, the optimal subset \(A\) when defining \(g(w)\) in Eq. (4) always has size \(k\) if \(|V| \geq k\). In addition, because the loss function \(L(y_{i}, f(x_{i}, w))\) is \(\beta\)-strongly convex, \(g(w)\) in Eq. (4) is the maximum over multiple \(k \beta\)-strongly convex functions associated with different sets \(A\). According to Proposition 1, \(g(w)\) is also \(k \beta\)-strongly convex, i.e.,
\[g(\hat{w})\geq g(w^{*}) + \nabla g(w^{*})^{T}(\hat{w} -w^{*}) + \frac{k\beta}{2}\| \hat{w} -w^{*}\|_{2}^{2},\forall \nabla g(w^{*})\in \partial g(w^{*}). \quad (14)\]
Since the convex function \(g(w)\) achieves its minimum at \(w^{*}\), it is valid to substitute \(\nabla g(w^{*}) = 0 \in \partial g(w^{*})\) into Eq. (14). After rearrangement, we have
\[\| \hat{w} -w^{*}\|_{2}^{2}\leq \frac{2}{k\beta}\left[g(\hat{w}) - g(w^{*})\right]. \quad (15)\]
In the following, we prove \(g(w^{*}) \geq \alpha \cdot g(\hat{w})\), which together with Eq. (15) leads to the final bound showing how close \(\hat{w}\) is to \(w^{*}\). Note that \(\hat{g} (w)\) (Eq. (6)) is a piecewise function, each piece of which is convex and associated with a different \(\hat{A}\) achieved by a submodular maximization algorithm with approximation factor \(\alpha\). Since \(\hat{A}\) is not guaranteed to be a global maximum, unlike \(g(w)\), the whole of \(\hat{g} (w)\) cannot be written as the maximum of multiple convex functions and thus can be non-convex. Therefore, gradient descent in lines 6-9 of Algorithm 1 can lead to either: 1) \(\hat{w}\) being a global minimum of \(\hat{g} (w)\); or 2) \(\hat{w}\) being a local minimum of \(\hat{g} (w)\). Saddle points do not exist on \(\hat{g} (w)\) because each piece of it is convex. We also assume that other issues associated with the boundaries between convex pieces do not repeatedly occur.

1) When \(\hat{w}\) is a global minimum of \(\hat{g} (w)\), we have
\[g(w^{*})\geq \hat{g} (w^{*})\geq \hat{g} (\hat{w})\geq \alpha \cdot g(\hat{w}). \quad (16)\]
The first inequality is due to \(g(\cdot)\geq \hat{g} (\cdot)\). The second inequality is due to the global optimality of \(\hat{w}\). The third inequality is due to the approximation bound \(\hat{g} (\cdot)\geq \alpha \cdot g(\cdot)\) guaranteed by the submodular maximization in Step 7 of Algorithm 1.
2) When \(\hat{w}\) is a local minimum of \(\hat{g} (w)\), we have \(\nabla \hat{g} (\hat{w}) = 0\). Let \(h(w)\) be the piece of \(\hat{g} (w)\) on which \(\hat{w}\) is located; then \(\hat{w}\) has to be a global minimum of \(h(w)\) due to the convexity of \(h(w)\). Letting \(\mathcal{A}\) denote the collection of the sets \(\hat{A}\) associated with all pieces of \(\hat{g} (w)\), we define an auxiliary convex function \(\tilde{g} (w)\) as
\[\tilde{g} (w)\triangleq \max_{A\in \mathcal{A}}\sum_{i\in A}L\left(y_{i},f(x_{i},w)\right) + \lambda F(A). \quad (17)\]
It is convex because it is defined as the maximum of multiple convex functions. So we have
\[\hat{g} (w)\leq \tilde{g} (w)\leq g(w),\forall w\in \mathbb{R}^{m}. \quad (18)\]
The first inequality is due to the definition of \(\mathcal{A}\), and the second inequality is a result of \(\mathcal{A}\subseteq 2^{V}\), by comparing \(g(w)\) in Eq. (4) with \(\tilde{g} (w)\) in Eq. (17). Letting \(\tilde{w}\) denote a global minimum of \(\tilde{g} (w)\), we have
\[g(w^{*})\geq \tilde{g} (w^{*})\geq \tilde{g} (\tilde{w})\geq h(\tilde{w})\geq h(\hat{w}) = \hat{g} (\hat{w})\geq \alpha \cdot g(\hat{w}). \quad (19)\]
The first inequality is due to Eq. (18), the second inequality is due to the global optimality of \(\tilde{w}\) on \(\tilde{g} (w)\), the third inequality is due to the definition of \(\tilde{g} (w)\) in Eq. (17) (\(\tilde{g} (w)\) is the maximum of all pieces of \(\hat{g} (w)\), and \(h(w)\) is one of those pieces), the fourth inequality is due to the global optimality of \(\hat{w}\) on \(h(w)\), and the last inequality is due to the approximation bound \(\hat{g} (\cdot)\geq \alpha \cdot g(\cdot)\) guaranteed by the submodular maximization in Step 7 of Algorithm 1.

Therefore, in both cases we have \(g(w^{*})\geq \alpha \cdot g(\hat{w})\). Applying this to Eq. (15) results in
\[\| \hat{w} -w^{*}\|_{2}^{2}\leq \frac{2}{k\beta}\left(\frac{1}{\alpha} -1\right)\cdot g(w^{*}). \quad (20)\]

### 4.4 PROPOSITION 2

Proposition 2. If \(x\in [0,1]\), the following inequality holds:
\[\frac{x}{1 - e^{-x}} -1\leq x. \quad (21)\]

Proof. By the two inequalities \(e^{x}\leq 1 + x + x^{2} / 2\) for \(x\leq 0\) and \(1 - e^{- x}\geq x / 2\) for \(x\in [0,1]\),
\[\frac{x}{1 - e^{-x}} -1 = \frac{x - 1 + e^{-x}}{1 - e^{-x}}\leq \frac{x - 1 + (1 - x + x^{2} / 2)}{x / 2} = x. \quad (22)\]

### 4.5 PROOF OF COROLLARY 1

Proof. Applying the inequality in Proposition 2 and the approximation factor of lazy greedy, \(\alpha = (1 - e^{- \kappa_{G}})\big / \kappa_{G}\), to the right-hand side of Eq. (9) from Theorem 1 yields
\[\begin{array}{l}{\| \hat{w} -w^{*}\|_{2}^{2}\leq \frac{2}{k\beta}\left(\frac{1}{\alpha} -1\right)\cdot g(w^{*})}\\ {= \frac{2}{k\beta}\left(\frac{\kappa_{G}}{1 - e^{-\kappa_{G}}} -1\right)\cdot g(w^{*})\leq \frac{2\kappa_{G}}{k\beta}\cdot g(w^{*}),} \end{array} \quad (23)\]
where \(\kappa_{G}\) is the curvature of the submodular function \(G(\cdot)\) defined in Eq. (5). Substituting the inequality for \(\kappa_{G}\) from Lemma 1 into Eq. (23) results in
\[\| \hat{w} -w^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}}{k\beta(c_{1} / \lambda +1)}\cdot g(w^{*})\leq \frac{2\kappa_{F}}{\beta c_{1}}\times \frac{\lambda}{k}\times g(w^{*}). \quad (24)\]
We use subscripts to index the iterations of the outer loop; e.g., \(\hat{w}_{T}\) denotes the model weights \(w\) after the \(T^{th}\) iteration of the outer loop.
If we decrease \(\lambda\) exponentially from \(\lambda = \lambda_{0}\) and increase \(k\) linearly from \(k = k_{0}\), as in Step 11 of Algorithm 1, we have
\[\| \hat{w}_{T} -w_{T}^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}\lambda_{0}}{\beta c_{1}}\times \frac{(1 - \gamma)^{T}}{(k_{0} + T\Delta)}\times g(w_{T}^{*}). \quad (25)\]
According to the definition of \(g(\cdot)\) in Eq. (4), for \(g(w_{T}^{*})\) we have
\[\begin{array}{r l} & {g(w_{T}^{*}) = \underset {w\in \mathbb{R}^{m}}{\min}\underset {A\subseteq V,|A|\leq k}{\max}\underset {i\in A}{\sum}L\left(y_{i},f(x_{i},w)\right) + \lambda F(A)}\\ & {\qquad \leq \underset {w\in \mathbb{R}^{m}}{\min}\underset {A\subseteq V,|A|\leq k}{\max}\underset {i\in A}{\sum}L\left(y_{i},f(x_{i},w)\right) + \lambda_{0}(1 - \gamma)^{T}\underset {A\subseteq V,|A|\leq k}{\max}F(A)}\\ & {\qquad = g(w_{\infty}^{*}) + \lambda_{0}(1 - \gamma)^{T}c_{2},} \end{array} \quad (26)\]
where
\[g(w_{\infty}^{*})\triangleq \min_{w\in \mathbb{R}^{m}}\max_{A\subseteq V,|A|\leq k}\sum_{i\in A}L\left(y_{i},f(x_{i},w)\right),\quad c_{2}\triangleq \max_{A\subseteq V,|A|\leq k}F(A). \quad (27)\]
Substituting Eq. (26) into Eq. (25) yields
\[\| \hat{w}_{T} - w_{T}^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}\lambda_{0}}{\beta c_{1}}\times \frac{(1 - \gamma)^{T}}{(k_{0} + T\Delta)}\times \left[g(w_{\infty}^{*}) + \lambda_{0}c_{2}(1 - \gamma)^{T}\right]. \quad (28)\]
If we can tolerate the more expensive computation of running submodular maximization with a larger budget \(k\), and increase \(k\) exponentially, i.e., \(k\gets (1 + \Delta)\cdot k\), we have
\[\| \hat{w}_{T} - w_{T}^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}\lambda_{0}}{\beta c_{1}k_{0}}\times \left(\frac{1 - \gamma}{1 + \Delta}\right)^{T}\times \left[g(w_{\infty}^{*}) + \lambda_{0}c_{2}(1 - \gamma)^{T}\right]. \quad (29)\]
This completes the proof.

### 4.6 SUBMODULAR MAXIMIZATION STARTING FROM A PREVIOUS "WARM" SOLUTION

Algorithm 1 repeatedly runs a greedy procedure to solve submodular maximization, and this occurs two nested loops deep. In this section we describe how we speed this process up.

Our first strategy reduces the size of the ground set before starting a more expensive submodular maximization procedure. We use a method described in (Wei et al., 2014a): we sort the elements of \(V\) non-increasingly by \(G(i|V\setminus i)\) and then remove any element \(i\) from \(V\) having \(G(i)< G(\delta (k)|V\setminus \delta (k))\), where \(\delta (k)\) is the \(k^{\mathrm{th}}\) element in the sorted permutation. Any such element will never be chosen by the \(k\)-cardinality constrained greedy procedure because, for any \(\ell \in \{1,2,\ldots ,k\}\) and any set \(A\), we have \(G(\delta (\ell)|A)\geq G(\delta (\ell)|V\setminus \delta (\ell))\geq G(\delta (k)|V\setminus \delta (k)) > G(i)\geq G(i|A)\), and thus greedy would always be able to choose an element better than \(i\). This method results in no reduction in approximation quality, although it might not yield any speedup at all. However, with a decreasing \(\lambda\), \(G(A)\) becomes more modular, and the filtering method can become more effective. Other methods we could employ, such as those of (Zhou et al., 2017; Mirzasoleiman et al., 2015), result in a small reduction in approximation quality, but we do not describe these further. The key contribution of this section is a method exploiting a potential warm start set that might already achieve a sufficient approximation quality.
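Before turning to the warm start, the pruning filter just described can be sketched as follows (a toy illustration only; the gain arrays and random data are placeholders, and in Algorithm 1 the gains would come from the current \(G\)):

```python
import numpy as np

def prune_ground_set(G_singleton, G_cond, k):
    """Ground-set filter of Wei et al. (2014a), as described above (sketch).

    G_singleton[i] = G(i): gain of element i on the empty set.
    G_cond[i]      = G(i | V \\ i): gain of i conditioned on everything else.
    Returns the indices that may still be chosen by k-constrained greedy.
    """
    order = np.argsort(-G_cond)          # sort non-increasingly by G(i | V \ i)
    threshold = G_cond[order[k - 1]]     # G(delta(k) | V \ delta(k))
    return np.where(G_singleton >= threshold)[0]

# Illustrative usage with random gains for |V| = 1000 and k = 20:
rng = np.random.default_rng(0)
G_sing = rng.random(1000)
G_cnd = G_sing * rng.random(1000)        # conditional gains never exceed singleton gains
V_reduced = prune_ground_set(G_sing, G_cnd, k=20)
```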
Normally, the greedy procedure starts with the empty set and adds elements greedily until a set of size \(k\) is reached. In Algorithm 1, by contrast, a previous iteration has already solved a size-\(k\) constrained submodular maximization problem for the previous submodular function, and its solution could very nearly already satisfy the desired approximation bound for the current submodular function. The reason is that, depending on the weight update method in line 9 of Algorithm 1 between inner loop iterations, and on the changes to the parameters \(\Delta\) and \(\gamma\) between outer iterations, the succession of submodular functions might not change very quickly. For example, when the learning rate \(\eta\) is small, the \(\hat{A}\) from the previous iteration could still be valued highly by the current iteration's function, so running a greedy procedure from scratch is unnecessary.

Our method warm-starts a submodular maximization process with a previously computed set, and offers a bound that trades off speed and approximation quality. The approach is given in Algorithm 2, which (after the aforementioned filtering in line 3 (Wei et al., 2014a)) tests in linear time whether the warm start set \(\hat{A}\) already achieves a sufficient approximation quality, and if so, possibly improves it further with an additional linear or quasilinear time computation. To test the approximation quality of \(\hat{A}\), our approach uses a simple modular upper bound, in line 4, to compute an upper bound on the global maximum value. For the subsequent improvement of \(\hat{A}\), our approach utilizes a submodular semigradient approach (Iyer et al., 2013) (specifically subgradients (Fujishige, 2005) in this case). If the warm-start set \(\hat{A}\) does not achieve sufficient approximation quality in line 5, the algorithm backs off to standard submodular maximization in line 11 (we use the accelerated/lazy greedy procedure (Minoux, 1978) here, although other methods, e.g., (Mirzasoleiman et al., 2015), can be used as well).

Algorithm 2 Warm Start (WS) WS-SUBMODULARMAX\((G,k,\hat{A},\bar{\alpha}\in [0,1))\)
1: Input: \(G(\cdot)\), \(k\), \(\hat{A}\), \(\bar{\alpha}\)
2: Output: \(\hat{A}\)
3: Reduce ground set size: arrange \(V\) non-increasingly in terms of \(G(i|V\backslash i)\) in a permutation \(\delta\), where \(\delta (k)\) is the \(k^{th}\) element; set \(V\leftarrow \{i\in V\mid G(i)\geq G(\delta (k)|V\backslash \delta (k))\}\);
4: Compute an upper bound on the maximum of Eq. (5): \[\tau = \max_{A\subseteq V,|A|\leq k}\sum_{i\in A}\left[L\left(y_{i},f(x_{i},w^{t})\right) + \lambda F(i)\right]\]
5: if \(G(\hat{A})\geq \bar{\alpha}\cdot \tau\) then
6: Permutation \(\sigma\) of \(V\): the first \(k\) elements satisfy \(S_{k}^{\sigma} = \hat{A}\) and are ordered non-increasingly by \(\kappa_{G}(v)\); the remaining \(n - k\) elements \(V\setminus \hat{A}\) are then ordered non-increasingly by \(\kappa_{G}(v)\).
7: Define the modular function \(h_{\hat{A}}(A)\triangleq \sum_{i\in A}h_{\hat{A}}(i)\) with \(h_{\hat{A}}(\sigma (i)) = G(S_{i}^{\sigma}) - G(S_{i - 1}^{\sigma})\);
8: Compute a lower bound \(L(A)\) of \(G(A)\), tight at \(\hat{A}\): \[L(A)\triangleq G(\hat{A}) + h_{\hat{A}}(A) - h_{\hat{A}}(\hat{A})\leq G(A)\]
9: \(\hat{A}\leftarrow \arg \max_{A\subseteq V,|A|\leq k}L(A)\);
10: else
11: \(\hat{A}\leftarrow \mathrm{LAZYGREEDY}(G,V,k)\);
12: end if

Line 4 computes the upper bound \(\tau \geq \max_{A\subseteq V,|A|\leq k}G(A)\), which holds due to submodularity and requires only a modular maximization problem (selecting the top \(k\) elements, which can be done in \(O(|V|)\) time, independent of \(k\)). Line 5 checks whether an \(\bar{\alpha}\) approximation to this upper bound is achieved by the warm-start set \(\hat{A}\); if not, we back off to a standard submodular maximization procedure in line 11. If \(\hat{A}\) is an \(\bar{\alpha}\) approximation to the upper bound \(\tau\), then lines 6-9 run a subgradient optimization procedure, a process that can potentially improve it further. The approach selects a subgradient defined by a permutation \(\sigma = (\sigma (1),\sigma (2),\ldots ,\sigma (n))\) of the elements. The algorithm then defines a modular function \(L(A)\), tight at \(\hat{A}\) and a lower bound everywhere else, i.e., \(L(\hat{A}) = G(\hat{A})\) and \(\forall A, L(A)\leq G(A)\). Any permutation will achieve this as long as \(\hat{A} = \{\sigma (1),\sigma (2),\ldots ,\sigma (k)\}\); the specific permutation we use is described below. Once we have the modular lower bound, we can do simple and fast modular maximization. Lines 6-9 of Algorithm 2 offer a heuristic that can only improve the objective: letting \(\hat{A}^{+}\) be the solution after line 9, we have
\[G(\hat{A}^{+})\geq L(\hat{A}^{+})\geq L(\hat{A}) = G(\hat{A}). \quad (30)\]
The first inequality follows since \(L(\cdot)\) is a lower bound of \(G(\cdot)\); the second inequality follows from the optimality of \(\hat{A}^{+}\); the equality follows since \(L\) is tight at \(\hat{A}\).

The approximation factor \(\bar{\alpha}\) is distinct from the submodular maximization approximation factor \(\alpha\) achieved by the greedy algorithm. Setting, for example, \(\bar{\alpha} = 1 - 1/e\) would ask for the previous solution to be this good relative to \(\tau\), the upper bound on the global maximum, and the algorithm would almost always immediately jump to line 11, since achieving such approximation quality might not even be possible in polynomial time (Feige, 1998). With \(\bar{\alpha}\) large, we recover the approximation factor of the greedy algorithm but ignore the warm start. If \(\bar{\alpha}\) is small, many iterations might use the warm start from the previous iteration, updating it only via one step of subgradient optimization, but with a worse approximation factor. In practice, therefore, we use a more lenient bound (often we set \(\bar{\alpha} = 1/2\)), which is a good practical tradeoff between approximation accuracy and speed: lines 6-9 execute a reasonable fraction of the time, leading to a good speedup (in our experiments, the time cost for WS-SUBMODULARMAX increases by a factor ranging from about 3 to 5 if \(\bar{\alpha} = 1\)). In general, we have the following final bound based on the smaller of \(\bar{\alpha}\) and \(\alpha\).

Lemma 2.
Algorithm 2 outputs a solution \(\hat{A}\) such that \(G(\hat{A}) \geq \min \{\bar{\alpha}, \alpha \} \times \max_{A \subseteq V, |A| \leq k} G(A)\), where \(\alpha\) is the approximation factor of the greedy procedure (typically \(\alpha = (1 - e^{-\kappa_G}) / \kappa_G\)).

Proof. Let \(A^{*}\) denote an optimal solution to Eq. (5):
\[A^{*} \in \underset {A \subseteq V, |A| \leq k}{\arg \max} \sum_{i \in A} L \left(y_{i}, f(x_{i}, w^{t}) \right) + \lambda F(A). \quad (31)\]
The \(\tau\) computed in line 4 is an upper bound on \(G(A^{*})\) since
\[\begin{array}{r l} & {\tau \geq \sum_{i\in A^{*}}[L\left(y_{i},f(x_{i},w^{t})\right) + \lambda F(i)]}\\ & {\qquad = \sum_{i\in A^{*}}L\left(y_{i},f(x_{i},w^{t})\right) + \lambda \sum_{i\in A^{*}}F(i)}\\ & {\qquad \geq \sum_{i\in A^{*}}L\left(y_{i},f(x_{i},w^{t})\right) + \lambda F(A^{*}).} \end{array} \quad (32)\]
The first inequality follows from the definition of \(\tau\); the last inequality is due to submodularity, which guarantees \(F(i) \geq F(i|B)\) for any \(B \subseteq V\). When \(G(\hat{A}) \geq \bar{\alpha} \cdot \tau\) (line 5), the subgradient ascent can only improve the objective; thus, we have \(G(\hat{A}) \geq \bar{\alpha} \cdot \max_{A \subseteq V, |A| \leq k} G(A)\) for the \(\hat{A}\) obtained in line 9. Otherwise, we run the greedy algorithm on the reduced ground set \(V\); thus, we have \(G(\hat{A}) \geq \alpha \cdot \max_{A \subseteq V, |A| \leq k} G(A)\) for the \(\hat{A}\) obtained in line 11. \(\square\)

The heuristic in lines 6-9 is identical to one step of the semigradient-based minorization-maximization (MM) scheme used in, for example, (Narasimhan & Bilmes, 2005; Jegelka & Bilmes, 2011; Iyer & Bilmes, 2012; Iyer et al., 2013). Which permutation to use for the subgradient in order to tighten the gap has been discussed as far back as (Narasimhan & Bilmes, 2005). In the present work, we offer a new heuristic for this problem. Let the first \(i\) elements in the permutation \(\sigma\) be denoted \(S_{i}^{\sigma} = \{\sigma (1), \sigma (2), \ldots , \sigma (i)\}\), and let \(A_{i - 1}^{\sigma} \triangleq \{\sigma (j) \in A \mid j < i\} = S_{i - 1}^{\sigma} \cap A \subseteq S_{i - 1}^{\sigma}\) for any \(i \in A\). The gap we wish to reduce is
\[\begin{array}{r l} & {0\leq G(A) - L(A) = \sum_{\sigma (i)\in A}\left[G(\sigma (i)|A_{i - 1}^{\sigma}) - G(\sigma (i)|S_{i - 1}^{\sigma})\right]}\\ & {\qquad = \sum_{\sigma (i)\in A}G(\sigma (i))\cdot \left[\frac{G(\sigma (i)|A_{i - 1}^{\sigma})}{G(\sigma (i))} -\frac{G(\sigma (i)|S_{i - 1}^{\sigma})}{G(\sigma (i))}\right]}\\ & {\qquad \leq \sum_{\sigma (i)\in A}G(\sigma (i))\cdot \left[\frac{G(\sigma (i)|A_{i - 1}^{\sigma})}{G(\sigma (i))} -(1 - \kappa_{G}(\sigma (i)))\right],} \end{array} \quad (34)\]
which follows since \(h_{\hat{A}}(\sigma (i)) = G(\sigma (i)|S_{i - 1}^{\sigma})\) by definition. Line 6 chooses a particular permutation in an attempt to reduce this gap. Define an element-wise form of curvature as \(\kappa_{G}(v) = 1 - G(v|V\backslash v) / G(v) \in [0, 1]\) for all \(v \in V\); note that \(\kappa_{G} = \max_{v \in V} \kappa_{G}(v)\). If \(\kappa_{G}(v) \approx 0\), then \(G\) is practically modular at \(v\), and so \(G(v|A) \approx G(v)\) for any set \(A\); in other words, \(G(v|A)\) is close to \(v\)'s maximum possible gain even if \(v\) is ranked very late in the permutation \(\sigma\), where \(A\) is very large.
If \(\kappa_{G}(v) \approx 1\), on the other hand, then there is some set \(A \subseteq V \setminus \{v\}\) that can appreciably reduce \(G(v|A)\) relative to the maximum possible gain \(G(v)\), and so it is best to rank \(v\) very early in the order \(\sigma\), where \(A\) must be a small set. One heuristic to achieve these goals is to choose a permutation \(\sigma\) that arranges the elements non-increasingly by \(\kappa_{G}(v)\), meaning \(\kappa_{G}(\sigma (1)) \geq \kappa_{G}(\sigma (2)) \geq \ldots\). Choosing this order is therefore an attempt to keep each of the conditional gains \(G(\sigma (i)|S_{i - 1}^{\sigma})\) as close as possible to \(\sigma (i)\)'s maximum possible gain, \(G(\sigma (i))\). This corresponds to an attempt to reduce Eq. (34) (and correspondingly close the \(G(A) - L(A)\) gap) as much as possible. Line 6 of Algorithm 2 does this, subject to the requirement that the first \(k\) elements of the permutation must correspond to \(\hat{A}\) in order to yield a subgradient. These tricks all help lines 6-9 produce a better updated approximate maximizer while being appreciably faster.

### 4.7 ADDITIONAL RESULTS

This section concludes by providing, in the form of tables and plots, more information about our experiments and experimental results for the algorithms mentioned above.

Table 3: Details regarding the datasets. <table><tr><td>Dataset</td><td>News20</td><td>MNIST</td><td>CIFAR10</td><td>STL10</td><td>SVHN</td><td>Fashion</td></tr><tr><td>#Training</td><td>11314</td><td>50000</td><td>50000</td><td>5000</td><td>73257</td><td>50000</td></tr><tr><td>#Test</td><td>7532</td><td>10000</td><td>10000</td><td>8000</td><td>26032</td><td>10000</td></tr><tr><td>#Feature</td><td>129791</td><td>28 × 28</td><td>32 × 32 × 3</td><td>96 × 96 × 3</td><td>32 × 32</td><td>28 × 28</td></tr><tr><td>#Class</td><td>20</td><td>10</td><td>10</td><td>10</td><td>10</td><td>10</td></tr></table>

Table 4: Parameters of MCL (Algorithm 1) and its variants for different datasets. <table><tr><td>Dataset</td><td>News20</td><td>MNIST</td><td>CIFAR10</td><td>STL10</td><td>SVHN</td><td>Fashion</td></tr><tr><td>p</td><td>50</td><td>50</td><td>20</td><td>20</td><td>20</td><td>50</td></tr><tr><td>#cluster</td><td>200</td><td>1000</td><td>1000</td><td>400</td><td>800</td><td>1000</td></tr><tr><td>γ</td><td>0.05</td><td>0.05</td><td>0.05</td><td>0.2</td><td>0.2</td><td>0.05</td></tr><tr><td>initial k</td><td>4</td><td>4</td><td>4</td><td>20</td><td>30</td><td>10</td></tr><tr><td>initial λ</td><td>6 × 10−6</td><td>1 × 10−6</td><td>8 × 10−7</td><td>1 × 10−7</td><td>1 × 10−6</td><td>1 × 10−6</td></tr><tr><td>initial η</td><td>3.5</td><td>0.02</td><td>0.01</td><td>0.02</td><td>0.01</td><td>0.02</td></tr></table>

![](images/20_0.jpg) <center>Figure 9: Training error rate (\%) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on 20newsgroups (grey curves represent 10 random trials of SGD). </center>

![](images/20_1.jpg) <center>Figure 10: Training loss vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on 20newsgroups (grey curves represent 10 random trials of SGD). </center>

![](images/21_0.jpg) <center>Figure 11: Test loss vs.
number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on 20newsgroups (grey curves represent 10 random trials of SGD). </center>

![](images/21_1.jpg) <center>Figure 12: Training loss vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on CIFAR10 (grey curves represent 10 random trials of SGD). </center>

![](images/21_2.jpg) <center>Figure 13: Test loss vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on CIFAR10 (grey curves represent 10 random trials of SGD). </center>

![](images/21_3.jpg) <center>Figure 14: Training error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on MNIST (grey curves represent 10 random trials of SGD). </center>

![](images/22_0.jpg) <center>Figure 15: Training error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on Fashion-MNIST (grey curves represent 10 random trials of SGD). </center>

![](images/22_1.jpg) <center>Figure 16: Training error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on STL10 (grey curves represent 10 random trials of SGD). </center>

![](images/22_2.jpg) <center>Figure 17: Training error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on SVHN (grey curves represent 10 random trials of SGD). </center>
## ABSTRACT

We introduce and study minimax curriculum learning (MCL), a new method for adaptively selecting a sequence of training subsets for a succession of stages in machine learning. The subsets are encouraged to be small and diverse early on, and then larger, harder, and allowably more homogeneous in later stages. At each stage, model weights and training sets are chosen by solving a joint continuous-discrete minimax optimization, whose objective is composed of a continuous loss (reflecting training set hardness) and a discrete submodular promoter of diversity for the chosen subset. MCL repeatedly solves a sequence of such optimizations with a schedule of increasing training set size and decreasing pressure on diversity encouragement. We reduce MCL to the minimization of a surrogate function handled by submodular maximization and continuous gradient methods. We show that MCL achieves better performance and, with a clustering trick, uses fewer labeled samples for both shallow and deep models. Our method involves repeatedly solving constrained submodular maximization of an only slowly varying function on the same ground set. Therefore, we develop a heuristic method that utilizes the previous submodular maximization solution as a warm start for the current submodular maximization process to reduce computation while still yielding a guarantee.

## 1 INTRODUCTION

Inspired by the human interaction between teacher and student, recent studies (Khan et al., 2011; Basu & Christensen, 2013; Spitkovsky et al., 2009) support the idea that learning algorithms can be improved by updating a model on a designed sequence of training sets, i.e., a curriculum. This problem is addressed in curriculum learning (CL) (Bengio et al., 2009), where the sequence is designed by a human expert or heuristic before training begins. Instead of relying on a teacher to provide the curriculum, self-paced learning (SPL) (Kumar et al., 2010; Tang et al., 2012a; Supancic III & Ramanan, 2013; Tang et al., 2012b) chooses the curriculum during the training process. It does so by letting the student (i.e., the algorithm) determine which samples to learn from based on their hardness. Given a training set \(\mathcal{D} = \{(x_{1},y_{1}),\ldots ,(x_{n},y_{n})\}\) of \(n\) samples and a loss function \(L(y_{i},f(x_{i},w))\), where \(x_{i}\in \mathbb{R}^{n}\) represents the feature vector for the \(i^{th}\) sample, \(y_{i}\) is its label, and \(f(x_{i},w)\) is the predicted label provided by a model with weights \(w\), SPL performs the following:
\[\min_{w\in \mathbb{R}^{m}}\min_{\nu \in [0,1]^{n}}\left[\sum_{i = 1}^{n}\nu_{i}L\left(y_{i},f(x_{i},w)\right) - \lambda \sum_{i = 1}^{n}\nu_{i}\right]. \quad (1)\]
SPL jointly learns the model weights \(w\) and sample weights \(\nu\), which end up being 0-1 indicators of selected samples, and it does so via alternating minimization. Fixing \(w\), minimization w.r.t. \(\nu\) selects samples with loss \(L(y_{i},f(x_{i},w))< \lambda\), where \(\lambda\) is a "hardness parameter", as it corresponds to the hardness as measured by the current loss (with large \(\lambda\), samples with greater loss are allowed in). Self-paced curriculum learning (Jiang et al., 2015) introduces a blending of the "teacher mode" in CL and the "student mode" in SPL, where the teacher can define a region of \(\nu\) by attaching a linear constraint \(a^{T}\nu \leq c\) to Eq. (1). SPL with diversity (SPLD) (Jiang et al., 2014) adds to Eq.
(1) a negative group sparse regularization term \(- \gamma \| \nu \|_{2,1}\triangleq - \gamma \sum_{j = 1}^{b}\| \nu^{(j)}\|_{2}\), where the samples are divided into \(b\) groups beforehand and \(\nu^{(j)}\) is the weight vector for the \(j^{th}\) group. Samples coming from different groups are thus preferred, to the extent that \(\gamma >0\) is large.

CL, SPL, and SPLD can be seen as forms of a continuation scheme (Allgower & Georg, 2003) that handles a hard task by solving a sequence of tasks moving from easy to hard; the solution to each task is the warm start for the next, slightly harder task. That is, each task, in the present case, is determined by the training data subset and other training hyperparameters, and the resulting parameters at the end of a training round are used as the initial parameters for the next training round. Such continuation schemes can reduce the impact of local minima within neural networks (Bengio et al., 2013; Bengio, 2014). With SPL, after each round of alternating minimization to optimize Eq. (1), \(\lambda\) is increased so that the next round selects samples that have a larger loss, a process (Khan et al., 2011; Tang et al., 2012b; Basu & Christensen, 2013) that can both help avoid local minima and reduce generalization error. In SPLD, \(\gamma\) is also increased between training rounds, increasingly preferring diversity. In each case, each round results in a fully trained model for the currently selected training samples.

Selection of training samples has been studied in other settings as well, often with a different motivation. In active learning (AL) (Settles, 2010) and experimental design (Montgomery, 2006), the learner can actively query labels of samples from an unlabeled pool during the training process, and the goal is to reduce annotation costs. The aim is to achieve the same or better performance using fewer labeled samples by ruling out uninformative ones. Diversity modeling was introduced to AL in (Wei et al., 2015), which uses submodular maximization to select diverse training batches from the most uncertain samples. However, changing the diversity during the learning process has not, as far as we know, been investigated. In boosting (Schapire, 1990; Freund & Schapire, 1997), the goal is to learn an ensemble of weak classifiers sequentially; this is done by assigning weights to all samples, with larger weights given to samples having larger loss as measured by an aggregation of previously trained models. Both active learning and boosting favor samples that are difficult to predict, since they are the most informative to learn. For example, uncertainty sampling (Culotta & McCallum, 2005; Scheffer et al., 2001; Dagan & Engelson, 1995; Dasgupta & Hsu, 2008) selects samples that are most uncertain, while query by committee (Seung et al., 1992; Dagan & Engelson, 1995; Abe & Mamitsuka, 1998) selects the ones that multiple models most disagree on. With machine teaching (Khan et al., 2011; Zhu, 2015; Patil et al., 2014; Zhu et al., 2018), a separate teacher helps the training procedure find a good model.

The SPL approach starts with a small set of easy samples and gradually increases the difficulty of the chosen samples, as measured by the sample loss of the model produced by the previous round's training.
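To make the SPL recipe concrete, here is a minimal sketch of its alternating minimization (Eq. (1)) for a linear model with squared loss; the learning rate, round count, and hardness growth factor are all illustrative choices, not values from any particular prior work:

```python
import numpy as np

def spl_round(X, y, w, lam, lr=0.1, steps=100):
    """One SPL round: fix w and pick nu (select samples with loss < lam),
    then fix nu and take gradient steps on the selected samples only."""
    losses = 0.5 * (X @ w - y) ** 2
    nu = (losses < lam).astype(float)       # 0-1 indicators of the "easy" samples
    for _ in range(steps):
        residual = (X @ w - y) * nu         # only selected samples contribute
        w = w - lr * X.T @ residual / max(nu.sum(), 1.0)
    return w, nu

rng = np.random.default_rng(0)
X, w_true = rng.normal(size=(200, 5)), rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)
w, lam = np.zeros(5), 0.5
for _ in range(10):                          # outer rounds
    w, nu = spl_round(X, y, w, lam)
    lam *= 1.3                               # admit harder (higher-loss) samples next round
```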
One of the difficulties of this approach is the following: since for any given value of \(\lambda\) the relatively easiest samples are chosen, there is a good chance that the process repeatedly selects a similar training set over multiple rounds and therefore learns slowly. This is precisely the problem that SPLD addresses: by concomitantly increasing the desired diversity over rounds, the sample selection procedure chooses from an increasingly diverse set of different groups, as measured by \(\| \nu \|_{2,1}\). Therefore, in SPLD, early stages train on easier, not necessarily diverse, samples and later stages train on harder, more diverse samples.

There are several challenges remaining with SPLD, however. One is that in early stages, it is still possible to repeatedly select a similar training set over multiple rounds, since diversity might not increase dramatically between successive rounds. Potentially more problematically, it is not clear that having a large diversity selection weight in late stages is desirable. For example, with a reasonably trained model, it might be best to select primarily the hardest samples in the part of the space near the difficult regions of the decision boundaries. With a high diversity weight, samples in these difficult decision boundary regions might be avoided, in favor of samples that are perhaps already well learnt and have a large margin, only because they are diverse, thereby leading to wasted effort. At that point, it would be beneficial to choose points having small margin from the same region, even if they do not have the greatest diversity, especially when using only a simple notion of diversity such as the group sparse norm \(\| \nu \|_{2,1}\). Also, it is possible that late stages of learning select outliers only because they are both hard and diverse. Lastly, the SPL/SPLD min-min optimization involves minimizing a lower bound of the loss, while normally one would, if anything, wish to minimize the loss directly or at least an upper bound.

Motivated by these issues, we introduce a new form of CL that chooses the hardest diverse samples in early rounds of training and then actually decreases, rather than increases, diversity as training rounds proceed. Our contention is that diversity is more important during the early phases of training, when only relatively few samples are selected. Later rounds of training will naturally have more opportunity for diversity simply because the set of selected samples is much larger. Also, to avoid successive rounds selecting similar sets of samples, our approach selects the hardest, rather than the easiest, samples at each round. Hence, if a set of samples is learnt well during one training round, those samples will tend to be disfavored in the next round because they have become easier. We also measure hardness via the loss function, but the selection is always based on the hardest and most diverse samples of a given size \(k\), where the degree of diversity is controlled by a parameter \(\lambda\), and where diversity is measured by an arbitrary submodular function. In fact, for binary variables the group sparse norm is also submodular: \(\| \nu \|_{2,1} = \sum_{j = 1}^{b}\sqrt{|C_{j}\cap A|} = F(A)\), where \(A\) is the set for which \(\nu\) is the characteristic vector, and \(C_{j}\) is the set of samples in the \(j^{\mathrm{th}}\) group; a small sketch of this set function follows.
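The toy groups below are purely for illustration:

```python
import numpy as np

def group_sparse_diversity(A, groups):
    """F(A) = sum_j sqrt(|C_j intersect A|): the group sparse norm viewed as a
    set function (submodular for 0-1 nu). groups[i] is the group id of sample i."""
    A = set(A)
    return sum(np.sqrt(sum(1 for i in A if groups[i] == j)) for j in set(groups))

groups = [0, 0, 0, 1, 1, 2]
print(group_sparse_diversity({0, 1, 3}, groups))   # sqrt(2) + sqrt(1) ~= 2.414
# Diminishing returns: adding sample 2 gains more for a smaller set.
print(group_sparse_diversity({0, 2}, groups) - group_sparse_diversity({0}, groups))
print(group_sparse_diversity({0, 1, 2}, groups) - group_sparse_diversity({0, 1}, groups))
```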
Our approach allows the full expressive class of submodular functions to be used to measure diversity, since the selection phase is based on submodular optimization. Evidence for the naturalness of such hardness and diversity adjustment in a curriculum can also be found in human education. For example, courses in primary school usually cover a broad but small and relatively easy range of topics, in order to expose the young learner to a diversity of knowledge early on. In college and graduate school, by contrast, students focus on advanced, deeper knowledge within their majors. As another example, studies of bilingualism (Bialystok et al., 2012; Li et al., 2014; Mechelli et al., 2004; Kovács & Mehler, 2009) show that learning multiple languages in childhood is beneficial for future brain development, but early-age multi-lingual learning is usually not advanced or concentrated linguistically for any of the languages involved. Still other studies argue that difficulty can be desirable at early human learning stages (Bjork & Bjork, 1992; McDaniel & Butler, 2011).

### 1.1 OUR APPROACH: MINIMAX CURRICULUM LEARNING

We introduce a new form of curriculum learning called minimax curriculum learning (MCL). MCL increases desired hardness and reduces diversity encouragement over rounds of training. This is accomplished by solving a sequence of minimax optimizations, each of which has the form:
\[\min_{w\in \mathbb{R}^{m}}\max_{A\subseteq V,|A|\leq k}\sum_{i\in A}L\left(y_{i},f(x_{i},w)\right) + \lambda F(A). \quad (2)\]
The objective is composed of the loss on a subset \(A\) of samples, evaluating their hardness, and a normalized monotone non-decreasing submodular function \(F:2^{V}\to \mathbb{R}_{+}\) measuring \(A\)'s diversity, where \(V\) is the ground set of all available samples. A larger loss implies that the subset \(A\) has been found harder to learn, while a larger \(F(A)\) indicates greater diversity. The weight \(\lambda\) controls the trade-off between hardness and diversity, while \(k\), the size of the resulting \(A\), determines the number of samples to simultaneously learn and hence is also a hardness parameter. It is important to realize that \(F(A)\) is not a parameter regularizer (e.g., \(\ell_{1}\) or \(\ell_{2}\) regularization on the parameters \(w\)) but rather an expression of preference for a diversity of training samples. In practice, one would add to Eq. (2) an appropriate parameter regularizer, as we do in our experiments (Section 3). Like SPL/SPLD, learning rounds are scheduled, here each round with increasing \(k\) and decreasing \(\lambda\). Unlike SPL/SPLD, we explicitly schedule the number of selected samples via \(k\) rather than indirectly via a hardness parameter. This makes sense since we are always choosing the hardest \(k\) samples at a given \(\lambda\) diversity preference, so there is no need for an explicit real-valued hardness parameter as in SPL/SPLD. Also, the MCL optimization minimizes an upper bound of the loss on any size-\(k\) subset of training samples.

The function \(F(\cdot)\) may be chosen from the large expressive family of submodular functions, all of which are natural for measuring diversity, and all of which have the following diminishing returns property: given a finite ground set \(V\), any \(A\subseteq B\subseteq V\), and a \(v\notin B\),
\[F(v\cup A) - F(A)\geq F(v\cup B) - F(B). \quad (3)\]
This implies that \(v\) is no less valuable to the smaller set \(A\) than to the larger set \(B\).
The marginal gain of \(v\) conditioned on \(A\) is denoted \(f(v|A)\triangleq f(v\cup A) - f(A)\) and reflects the importance of \(v\) to \(A\). Submodular functions (Fujishige, 2005) have been widely used as diversity models (Lin et al., 2009; Lin & Bilmes, 2011; Batra et al., 2012; Prasad et al., 2014; Gillenwater et al., 2012; Iyer & Bilmes, 2015; Bilmes & Bai, 2017).

Although Eq. (2) is a hybrid optimization involving both continuous variables \(w\) and discrete variables \(A\), it can be reduced to the minimization of a piecewise function, where each piece is defined by a subset \(A\) achieving the maximum in a region around \(w\). Each piece is convex when the loss is convex, so various off-the-shelf algorithms can be applied once \(A\) has been computed. However, the number of possible sets \(A\) is \(\binom{n}{k}\), and enumerating them all to find the maximum is intractable. Thanks to submodularity, fast approximate algorithms (Nemhauser et al., 1978; Minoux, 1978; Mirzasoleiman et al., 2015) exist to find an approximately optimal \(A\). Therefore, the outer optimization over \(w\) will need to minimize an approximation of the piecewise function defined by an approximate \(A\) computed via submodular maximization.

## 2 MINIMAX CURRICULUM LEARNING AND MACHINE TEACHING

The minimax problem in Eq. (2) can be seen as a two-person zero-sum game between a teacher (the maximizer) and a student (the minimizer): the teacher chooses the training set \(A\) based on the student's feedback about hardness (i.e., the loss achieved by the current model \(w\)) and on how diverse the set is according to the teacher (\(\lambda F(A)\)), while the student updates \(w\) to reduce the loss on the training set \(A\) (i.e., to learn \(A\)) given by the teacher. Similar teacher-student interactions also exist in real life: a teacher usually introduces concepts at the beginning, asks a small number of easy questions from a diverse range of topics, receives feedback from the student, and then further trains the student on the topics the student finds difficult while eschewing topics the student has mastered.

MCL's minimax formulation is different from the min-min formulation used in SPL/SPLD. For certain losses and models, \(L(y_{i}, f(x_{i}, w))\) is convex in \(w\). The min-min formulation, however, is only bi-convex and requires procedures such as alternate convex search (ACS) as in (Bazaraa et al., 1993). Furthermore, the diversity regularization in SPLD leads to the loss of bi-convexity altogether. Minimizing the worst-case loss, as in MCL, is a widely used strategy in machine learning (Lanckriet et al., 2003; Farnia & Tse, 2016; Shalev-Shwartz & Wexler, 2016) to achieve better generalization performance and model robustness, especially when strong assumptions cannot be made about the data distribution. Compared to SPL/SPLD, MCL is also better in that the outer minimization over \(w\) in Eq. (2) is a convex program and corresponds to minimizing the objective \(g(w)\) in Eq. (4). On the other hand, querying \(g(w)\) requires submodular maximization, which can only be solved approximately. The goal of this section, therefore, is to address the minimax problem in Eq. (2), i.e., the minimization \(\min_{w \in \mathbb{R}^{m}} g(w)\) of the following objective \(g(w)\):
\[g(w) \triangleq \max_{A \subseteq V, |A| \leq k} \sum_{i \in A} L(y_{i}, f(x_{i}, w)) + \lambda F(A) \quad (4)\]
If the loss function \(L(y_{i}, f(x_{i}, w))\) is convex w.r.t.
\(w\), then \(g(w)\) is convex but, as mentioned above, enumerating all subsets is intractable. Defining the discrete objective \(G_{w}: 2^{V} \to \mathbb{R}_{+}\), where
\[G_{w}(A) \triangleq \sum_{i \in A} L(y_{i}, f(x_{i}, w)) + \lambda F(A), \quad (5)\]
shows that computing \(g(w)\) involves a discrete optimization over \(G_{w}(A)\), a problem that is submodular since \(G_{w}(A)\) is a weighted sum of a non-negative (since the loss is non-negative) modular function and a submodular function; thus, \(G_{w}\) is monotone non-decreasing submodular. Hence, the fast greedy procedure mentioned earlier can be used to approximately optimize \(G_{w}(A)\) for any \(w\). Let \(\hat{A}_{w} \subseteq V\) be the \(k\)-constrained greedy approximation to maximizing \(G_{w}(A)\). We define the following approximate objective:
\[\hat{g} (w) \triangleq \sum_{i \in \hat{A}_{w}} L(y_{i}, f(x_{i}, w)) + \lambda F(\hat{A}_{w}), \quad (6)\]
and note that it satisfies \(\alpha g(w) \leq \hat{g} (w) \leq g(w)\), where \(\alpha\) is the approximation factor of the submodular optimization. For \(\hat{w}\) within a region around \(w\), \(\hat{g} (\hat{w})\) will utilize the same set \(\hat{A}_{w}\). Therefore, \(\hat{g} (w)\) is piecewise convex if the loss function \(L(y_{i}, f(x_{i}, w))\) is convex w.r.t. \(w\), and different regions within \(\mathbb{R}^{m}\) are associated with different \(\hat{A}\), although not necessarily the same regions or sets that define \(g(w)\). We show in Section 2.2 that minimizing \(\hat{g} (w)\) offers an approximate solution to Eq. (2).

With \(\hat{g} (w)\) given, our algorithm is simply gradient descent for minimizing \(\hat{g} (w)\), where many off-the-shelf methods can be invoked, e.g., SGD, momentum methods, Nesterov's accelerated gradient (Nesterov, 2005), Adagrad (Duchi et al., 2011), etc. The key problem is how to obtain \(\hat{g} (w)\), which depends on suboptimal solutions in different regions of \(w\). It is not necessary, however, to run submodular maximization for every region of \(w\). Since we use gradient descent, we only need to know \(\hat{g} (w)\) for \(w\) on the optimization path. At the beginning of each iteration, we fix \(w\) and use submodular maximization to obtain the \(\hat{A}_{w}\) that defines \(\hat{g} (w)\); then a gradient update step is applied to \(\hat{g} (w)\). Letting \(A_{w}^{*}\) represent the optimal solution to Eq. (5), \(\hat{A}_{w}\) satisfies \(G(\hat{A}_{w}) \geq \alpha G(A_{w}^{*})\).

# Algorithm 1 Minimax Curriculum Learning (MCL)
1: input: \(\pi (\cdot ,\eta)\), \(\gamma\), \(p\), \(\Delta\), \(\bar{\alpha}\)
2: output: \(w_{T}^{0}\)
3: initialize: \(\tau \gets 1\), \(w_{\tau}^{0}\), \(\lambda\), \(k\)
4: while not "converged" do
5: for \(t\in \{0,\cdot \cdot \cdot ,p\}\) do
6: \(G(A)\leftarrow \sum_{i\in A}L\left(y_{i},f(x_{i},w_{\tau}^{t})\right) + \lambda F(A);\)
7: \(\hat{A}\gets \mathrm{WS\hbox{-}SUBMODULARMAX}(G,k,\hat{A},\bar{\alpha});\)
8: \(\nabla \hat{g} (w_{\tau}^{t}) = \frac{\partial}{\partial w}\sum_{i\in \hat{A}}L\left(y_{i},f(x_{i},w_{\tau}^{t})\right);\)
9: \(w_{\tau}^{t + 1}\gets w_{\tau}^{t} + \pi \left(\{w_{\tau}^{1:t}\} ,\{\nabla \hat{g} (w_{\tau}^{1:t})\} ,\eta \right);\)
10: end for
11: \(w_{\tau +1}^{0}\gets w_{\tau}^{p}\), \(\lambda \leftarrow (1 - \gamma)\cdot \lambda\), \(k\gets k + \Delta\), \(\tau \gets \tau +1;\)
12: end while
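Before walking through the steps, the following is a minimal Python sketch of Algorithm 1's control flow; `loss_fn`, `F`, `greedy_max`, and `grad_step` are illustrative stand-ins supplied by the caller, not our implementation:

```python
# greedy_max(G, V, k, warm_start) approximately maximizes G(A) subject to
# |A| <= k (Section 2.1); grad_step applies one pass of the rule pi(., eta)
# to the loss restricted to the selected set A_hat.

def mcl(w, V, loss_fn, F, greedy_max, grad_step,
        lam, k, gamma=0.1, delta=5, p=20, T=50):
    A_hat = []
    for _ in range(T):                                    # outer loop (line 4)
        for _ in range(p):                                # inner loop (lines 5-10)
            def G(A):                                     # line 6: current objective
                return sum(loss_fn(i, w) for i in A) + lam * F(A)
            A_hat = greedy_max(G, V, k, warm_start=A_hat)   # line 7
            w = grad_step(w, A_hat)                       # lines 8-9
        lam *= 1.0 - gamma                                # line 11: decay diversity weight
        k += delta                                        # line 11: grow the subset size
    return w
```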
Lines 6- 7 finds an approximate \(\hat{A}\) via submodular maximization, discussed further in Section 2.1. Lines 8- 9 update \(w\) for the current \(\hat{A}\) by gradient descent \(\pi (\cdot ,\eta)\) with learning rate \(\eta\) . The inner optimization stops after \(p\) steps and then \(\lambda\) is reduced by factor \(1 - \gamma\) where \(\gamma \in [0,1]\) and \(k\) is increased by \(\Delta\) . The outer optimization stops after \(T\) steps when a form of "convergence", described below, is achieved. Given \(\hat{A}_{w},\hat{g} (w)\) has gradient \[\nabla \hat{g} (w) = \frac{\partial}{\partial w}\sum_{i\in \hat{A}_{w}}L\left(y_{i},f(x_{i},w)\right), \quad (7)\] and thus gradient descent method can update \(w\) . For example, we can treat \(\hat{A}\) as a batch if \(k\) is small, and update \(w\) by \(w\gets w - \eta \nabla \hat{g} (w)\) with learning rate \(\eta\) . For large \(\hat{A}_{w}\) , we can use SGD that applies an update rule to mini- batches within \(\hat{A}_{w}\) . More complex gradient descent rules \(\pi (\cdot ,\eta)\) can take historical gradients and \(w_{\tau}^{t}\) 's into account leading to \(w^{t + 1}\gets w^{t} + \pi \left(\{w^{1:t}\} ,\{\nabla \hat{g} (w^{1:t})\} ,\eta \right)\) . Considering the outer loop as well, the algorithm approximately solves a sequence of Eq. (2)s with decreasing \(\lambda\) and increasing \(k\) , where the previous solutions act as a warm start for the next iterations. This corresponds to repeatedly updating the model \(w\) on a sequence of training sets \(\hat{A}\) that changes from small, diverse, and hard to large. ### 2.1 SUBMODULAR MAXIMIZATION Although solving Eq. (5) exactly is NP- hard, a near- optimal solution can be achieved by the greedy algorithm, which offers a worst- case approximation factor of \(\alpha = 1 - e^{- 1}\) (Nemhauser et al., 1978). The algorithm starts with \(A\leftarrow \emptyset\) , and selects next the element with the largest marginal gain \(f(v|A)\) from \(V\backslash A\) , i.e., \(A\leftarrow A\cup \{v^{*}\}\) where \(v^{*}\in \mathrm{argmax}_{v\in V\backslash A}f(v|A)\) , and this repeats until \(|A| = k\) . It is simple to implement, fast, and usually outperforms other methods, e.g., those based on integer linear programming. It requires \(\mathcal{O}(nk)\) function evaluations for ground set size \(|V| = n\) . Since Algorithm 1 runs greedy \(Tp\) times, it is useful for the greedy procedure to be as fast as possible. The accelerated, or lazy, greedy algorithm (Minoux, 1978) reduces the number of evaluations per step by updating a priority queue of marginal gains, while having the same output and guarantee as the original (thanks to submodularity) and offers significant speedups. Still faster variants are also available Mirzasoleiman et al. (2015; 2016). Our own implementation takes advantage of the fact that line 7 of Algorithm 1 repeatedly solves submodular maximization over a sequence of submodular functions that are changing only slowly, and hence the previous set solution can be used as a warm start for the current algorithm, a process we call WS- SUBMODULARMAX outlined in Algorithm 2. The greedy procedure offers much better approximation factors than \(1 - e^{- 1}\) when the objective \(G(A)\) is close to modular. Specifically, the approximation factor becomes \(\alpha = (1 - e^{- \kappa_{G}}) / \kappa_{G}\) (Conforti & Cornuejols, 1984), which depends on the curvature \(\kappa_{G}\in [0,1]\) of \(G(A)\) defined as \[\kappa_{G}\triangleq 1 - \min_{j\in V}\frac{G(j|V\backslash j)}{G(j)}. 
\quad (8)\] <--- Page Split ---> When \(\kappa_{G} = 0\) , \(G\) is modular, and when \(\kappa_{G} = 1\) , \(G\) is fully curved and the above bound recovers \(1 - e^{- 1}\) . \(G(A)\) becomes more modular as the outer loop proceeds since \(\lambda\) decreases. Therefore, the approximation improves with the number of outer loops. In fact, we have: Lemma 1. Let \(G(A) = L(A) + \lambda F(A)\) where \(F\) is a monotone non- decreasing submodular function with curvature \(\kappa_{F}\) , \(L\) is a non- negative modular function, and \(\lambda \geq 0\) . Then \(\kappa_{G} \leq \kappa_{F} / (c_{1} / \lambda + 1)\) where \(c_{1} = \min_{j \in V} L(j) / F(j)\) . The proof is given in Appendix 4.1. In MCL, therefore, the submodular approximation improves \((\alpha \to 1)\) as \(\lambda\) grows, and the surrogate function \(\hat{g} (w)\) correspondingly approaches the true convex objective \(g(w)\) . ### 2.2 CONDITIONS AT CONVERGENCE In this section, we study how close the solution \(\hat{w}\) is of applying gradient descent to \(\hat{g} (w)\) , where we assume \(p\) is large enough so that a form of convergence occurs. Specifically, in Theorem 1, we analyze the upper bound on \(\| \hat{w} - w^{*}\|_{2}^{2}\) based on two assumptions: 1) the loss \(L\left(y_{i},f(x_{i},w)\right)\) being \(\beta\) - strongly convex w.r.t. \(w\) ; and 2) \(\hat{w}\) is achieved by running gradient descent in lines 6- 9 of Algorithm 1 until convergence, defined as the gradient reaching zero. In case the loss \(L\left(y_{i},f(x_{i},w)\right)\) is convex but not \(\beta\) - strongly convex, a commonly used trick to modify it to \(\beta\) - strongly convex is to add an \(\ell_{2}\) regularization \((\beta /2)\| w\|_{2}^{2}\) . In addition, for non- convex \(L\left(y_{i},f(x_{i},w)\right)\) , it is possible to prove that with high probability, a noise perturbed SGD on \(\hat{g} (w)\) can hit an \(\epsilon\) - optimal local solution of \(g(w)\) in polynomial time — we leave this for future work. In our empirical study (Section 3), MCL achieves good performance even when applied to non- convex deep neural networks. The following theorem relies on the fact that the maximum of multiple \(\beta\) - strongly convex functions is also \(\beta\) - strongly convex, shown in Appendix 4.2. Theorem 1 (Inner- loop convergence). For the minimax problem in Eq. (2) with ground set of samples \(V\) and \(\lambda \geq 0\) , if the loss function \(L\left(y_{i},f(x_{i},w)\right)\) is \(\beta\) - strongly convex and \(|V| \geq k\) , running lines 6- 9 of Algorithm 1 until convergence (defined as the gradient reaching zero) yields a solution \(\hat{w}\) satisfying \[\| \hat{w} -w^{*}\|_{2}^{2} \leq \frac{2}{k\beta} \left(\frac{1}{\alpha} -1\right) \cdot g(w^{*}), \quad (9)\] \(\hat{w}\) is the solution achieved at convergence, \(w^{*}\) is the optimal solution of the minimax problem in Eq.(2), \(g(w^{*})\) is the objective value achieved on \(w^{*}\) , and \(\alpha\) is the approximation factor that submodular maximization can guarantee for \(G(A)\) . The proof is given in Appendix 4.3. It is interesting to note that the bound depends both on the strong convexity parameter \(\beta\) and on the submodular maximization approximation \(\alpha\) . As mentioned in Lemma 1, as \(\lambda\) gets smaller, the approximation factor \(\alpha\) approaches 1 meaning that the bound in Equation (9) improves. We mention the convergence criteria where the gradient reaches zero. 
While it is possible, in theory, for lines 6-9 of Algorithm 1 to oscillate amongst the non-differentiable boundaries between the convex pieces, with most damped learning rates this will eventually subside and the algorithm will remain within one convex piece. The reason is that line 7 of the algorithm always chooses one \(\hat{A}\) , thereby selecting one convex piece associated with the region around \(w_{\tau}^{t}\) , and with only small subsequent adjustments to \(w_{\tau}^{t}\) , the same \(\hat{A}\) will continue to be selected. Hence, the algorithm will, in such a case, reach the minimum of that convex piece, where the gradient is zero. We can restate and then simplify the above bound in terms of the resulting parameters, and corresponding \(\lambda , k\) values, used at a particular iteration \(\tau\) of the outer loop. In the following, \(\hat{w}_{\tau}\) is the solution achieved by Algorithm 1 at iteration \(\tau\) of the outer loop, and the optimal solution of the minimax problem in Eq. (2) with \(\lambda , k\) set as in iteration \(\tau\) is denoted \(w_{\tau}^{*}\) . Corollary 1. If the loss function \(L\left(y_{i},f(x_{i},w)\right)\) is \(\beta\) -strongly convex, the submodular function \(F(\cdot)\) has curvature \(\kappa_{F}\) , and if each inner loop in Algorithm 1 runs until convergence, then the solution \(\hat{w}_{\tau}\) at the end of the \(\tau^{th}\) iteration of the outer loop fulfills: \[\| \hat{w}_{\tau} - w_{\tau}^{*}\|_{2}^{2} \leq \frac{2\kappa_{F}}{k\beta(c_{1} / \lambda + 1)} g(w_{\tau}^{*}) \leq \frac{2\kappa_{F}}{\beta c_{1}} \times \frac{\lambda}{k} \times g(w_{\tau}^{*}), \quad (10)\] <--- Page Split ---> where \(w_{\tau}^{*}\) is the optimal solution of the minimax problem in Eq. (2) with \(\lambda\) set as in the \(\tau^{th}\) outer loop iteration. Thus, if \(k\) starts from \(k_{0}\) and increases linearly via \(k\gets k + \Delta\) (as in line 11 of Algorithm 1), \[\| \hat{w}_{\tau} - w_{\tau}^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}\lambda_{0}}{\beta c_{1}}\times \frac{(1 - \gamma)^{\tau}}{(k_{0} + \tau\Delta)}\times [g(w_{\infty}^{*}) + \lambda_{0}c_{2}(1 - \gamma)^{\tau}]. \quad (11)\] Otherwise, if \(k\) increases exponentially, i.e., \(k\gets (1 + \Delta)\cdot k\) , then \[\| \hat{w}_{\tau} - w_{\tau}^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}\lambda_{0}}{\beta c_{1}k_{0}}\times \left(\frac{1 - \gamma}{1 + \Delta}\right)^{\tau}\times [g(w_{\infty}^{*}) + \lambda_{0}c_{2}(1 - \gamma)^{\tau}]. \quad (12)\] In the above, \(\lambda_{0}\) and \(k_{0}\) are the initial values for \(\lambda\) and \(k\) , \(c_{1} = \min_{j\in V,t\in [1,\tau ]}[L(y_{j},f(x_{j},\hat{w}_{\tau}^{t})) / F(j)]\) , \(c_{2} = \max_{A\subseteq V,|A|\leq k}F(A)\) , and \(g(w_{\infty}^{*}) = \min_{w\in \mathbb{R}^{m}}\max_{A\subseteq V,|A|\leq k}\sum_{i\in A}L(y_{i},f(x_{i},w))\) . The proof can be found in Appendix 4.5. On the one hand, the upper bound above is in terms of the ratio \(\lambda /k\) , which improves with larger subset sizes. On the other hand, submodular maximization becomes more expensive as \(k\) grows. Hence, Algorithm 1 chooses a schedule that decreases \(\lambda\) exponentially and increases \(k\) only linearly. Also, we see that the bound depends on the submodular curvature \(\kappa_{F}\) , the strong-convexity constant \(\beta\) , and \(c_{1}\) , which relates the submodular and modular terms (as in Lemma 1). These quantities ( \(\kappa_{F} / \beta\) and \(c_{1}\) ) might be relevant for other convex-submodular optimization schemes.
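To make the structure of Algorithm 1 concrete before we discuss practical improvements, the following is a minimal NumPy sketch of the outer and inner loops under simplifying assumptions: a linear least-squares model, a plain greedy maximizer standing in for WS-SUBMODULARMAX, and an externally supplied marginal-gain oracle `F_gain` for the diversity term. All names and default values here are illustrative, not our released implementation.

```python
import numpy as np

def greedy_max(gain, n, k):
    """Plain greedy for max_{|A| <= k} G(A); gain(i, A) must return G(i | A)."""
    A = []
    for _ in range(min(k, n)):
        best = max((i for i in range(n) if i not in A), key=lambda i: gain(i, A))
        A.append(best)
    return A

def mcl(X, y, F_gain, k0=4, lam0=1e-3, gamma=0.05, delta=2, p=20, T=10, eta=0.01):
    """Toy version of Algorithm 1: linear model, squared loss, vanilla gradient steps."""
    n, m = X.shape
    w, lam, k = np.zeros(m), lam0, k0
    for _ in range(T):                                   # outer loop
        for _ in range(p):                               # inner loop (lines 5-10)
            losses = 0.5 * (X @ w - y) ** 2              # L(y_i, f(x_i, w)) per sample
            # G(i|A) = loss_i + lam * F(i|A): modular hardness + submodular diversity
            A = greedy_max(lambda i, A: losses[i] + lam * F_gain(i, A), n, k)
            w -= eta * X[A].T @ (X[A] @ w - y[A])        # gradient step on ĝ (lines 8-9)
        lam *= 1.0 - gamma                               # line 11: shrink diversity weight
        k = min(n, k + delta)                            # line 11: grow the budget k
    return w
```

The continuous and discrete parts interact only through the per-sample losses, which enter \(G_{w}\) as its modular term; in our actual runs, WS-SUBMODULARMAX (Section 2.1 and Appendix 4.6) replaces the plain `greedy_max` used here.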
### 2.3 HEURISTIC IMPROVEMENTS There are several heuristic improvements we employ, described next. Algorithm 1 stops gradient descent after \(p\) steps. A reason for doing this is that \(\hat{w}^{p}\) can be sufficient as a warm start for the next iteration if \(p\) is large enough. We also have not observed any benefit from larger \(p\) , although we do eventually observe convergence empirically, when the average loss no longer changes appreciably between stages. Also, lines 6-7 of Algorithm 1 require computing the loss on all the samples, and each step of the greedy algorithm needs, in the worst case, to evaluate the marginal gains of all of the unselected samples. Moreover, this is done repeatedly in the inner-most block of two nested loops. Therefore, we use two heuristic tricks to improve efficiency. First, rather than selecting individual samples, we first cluster the data and then select clusters, thereby reducing the ground set size from the number of samples to the number of clusters. We replace the per-sample loss \(L\left(y_{i},f(x_{i},w)\right)\) with a per-cluster loss \(L\left(Y^{(i)},f(X^{(i)},w)\right)\) that we approximate by the loss of the sample closest to the centroid within each cluster: \[L\left(Y^{(i)},f(X^{(i)},w)\right)\triangleq \sum_{j\in C^{(i)}}L\left(y_{j},f(x_{j},w)\right)\approx |C^{(i)}|L\left(y^{(i)},f(x^{(i)},w)\right), \quad (13)\] where \(C^{(i)}\) is the set of indices of the samples in the \(i^{\mathrm{th}}\) cluster, and \(x^{(i)}\) with label \(y^{(i)}\) is the sample closest to the cluster centroid. We find that the loss on \(x^{(i)}\) is sufficiently representative to approximately indicate the hardness of the cluster. The set \(V\) becomes the set of clusters and \(A\subseteq V\) is a set of clusters; hence the ground set size is reduced, speeding up the greedy algorithm. When computing \(F(A)\) , the diversity of the selected clusters, the cluster centroids again represent their clusters. In line 8, the gradient is computed on all the samples in the selected clusters rather than on only \(x^{(i)}\) , at which point the labels of all the samples in the selected clusters are used. Otherwise, when selecting clusters via submodular maximization, only the labels of the centroid samples are needed. Thus, we need only annotate and compute the loss for samples in the selected clusters and for the representative centroid samples \(x^{(i)}\) of the other clusters. This also reduces the need to label all samples up front, as only the labels of the selected clusters, and of the centroid sample of each cluster, are used (i.e., the clustering process itself does not use the labels). We can further reduce the ground set to save computation during submodular maximization via prefiltering methods that lead either to no (Wei et al., 2014a) or little (Zhou et al., 2017; Mirzasoleiman et al., 2015) reduction in approximation quality. Moreover, as \(\lambda\) decreases in the MCL objective and \(G(A)\) becomes more modular, such pruning methods become more effective. More details are given in Section 4.6.
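As a concrete illustration of the clustering trick in Eq. (13), the following sketch (our own, assuming scikit-learn is available; `per_sample_loss` is an assumed callable mapping sample indices to their current losses) builds the clusters, picks the representative nearest each centroid, and forms the per-cluster loss approximation.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_clusters(features, n_clusters, seed=0):
    """Cluster samples; the sample nearest each centroid represents its cluster."""
    km = MiniBatchKMeans(n_clusters=n_clusters, random_state=seed).fit(features)
    members = [np.flatnonzero(km.labels_ == c) for c in range(n_clusters)]
    reps = np.array([m[np.argmin(np.linalg.norm(features[m] - km.cluster_centers_[c],
                                                axis=1))]
                     for c, m in enumerate(members)])
    return members, reps

def cluster_losses(per_sample_loss, members, reps):
    """Eq. (13): approximate a cluster's loss by |C^(i)| times its representative's
    loss, so only the representatives' labels are needed at selection time."""
    return np.array([len(m) for m in members]) * per_sample_loss(reps)
```

The resulting cluster-level losses then play the role of the modular term of \(G\) over the reduced ground set of clusters.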
<--- Page Split ---> ## 3 EXPERIMENTS <table><tr><td>Dataset</td><td>News20</td><td>MNIST</td><td>CIFAR10</td><td>STL10</td><td>SVHN</td><td>Fashion</td></tr><tr><td>Method</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>SGD(random)</td><td>14.27</td><td>0.88</td><td>18.52</td><td>21.76</td><td>5.20</td><td>7.79</td></tr><tr><td>SPL</td><td>15.43</td><td>1.25</td><td>21.14</td><td>20.63</td><td>5.67</td><td>7.46</td></tr><tr><td>SPLD</td><td>16.23</td><td>1.18</td><td>20.79</td><td>21.25</td><td>5.40</td><td>7.80</td></tr><tr><td>MCL(Δ = 0, λ = 0, γ = 0)</td><td>15.99</td><td>1.23</td><td>18.04</td><td>20.50</td><td>5.37</td><td>7.95</td></tr><tr><td>MCL(Δ = 0, λ > 0, γ > 0)</td><td>16.54</td><td>0.95</td><td>17.33</td><td>19.70</td><td>4.95</td><td>7.29</td></tr><tr><td>MCL(Δ > 0, λ > 0, γ = 0)</td><td>15.45</td><td>0.82</td><td>16.93</td><td>20.40</td><td>5.29</td><td>7.07</td></tr><tr><td>MCL-RAND</td><td>16.23</td><td>0.80</td><td>17.12</td><td>20.42</td><td>5.18</td><td>6.92</td></tr><tr><td>MCL(Δ > 0, λ > 0, γ > 0)</td><td>14.12</td><td>0.75</td><td>12.87</td><td>17.83</td><td>4.19</td><td>6.36</td></tr></table> Table 1: Test error \((\%)\) for different methods (SGD shows the lowest error out of 10 random trials). In this section, we apply different curriculum learning methods to train logistic regression models on 20newsgroups (Lang, 1995), LeNet5 models on MNIST (Lecun et al., 1998), convolutional neural nets (CNNs) with three convolutional layers on CIFAR10 (Krizhevsky & Hinton, 2009), CNNs with two convolutional layers on Fashion-MNIST ("Fashion" in all tables) (Xiao et al., 2017), CNNs with six convolutional layers on STL10 (Coates et al., 2011), and CNNs with seven convolutional layers on SVHN (Netzer et al., 2011). Details on the datasets can be found in Table 3 of the appendix. In all cases, we also use \(\ell_{2}\) parameter regularization on \(w\) with weight \(10^{-4}\) (i.e., the weight decay factor of the optimizer). We compare MCL and its variants to SPL (Kumar et al., 2010), SPLD (Jiang et al., 2014) and SGD with a random curriculum (i.e., with random batches). Each method uses mini-batch SGD for \(\pi (\cdot ,\eta)\) with the same learning rate strategy to update \(w\) . The methods, therefore, differ only in the curriculum (i.e., the sequence of training sets). For SGD, in each iteration, we randomly select 4000 samples (20newsgroups) or 5000 samples (other datasets) and apply mini-batch SGD to the selected samples. In SPL and SPLD, the training set starts from a fixed size \(k\) (4000 samples for 20newsgroups, 5000 samples for other datasets), and increases by a factor of \(1 + \mu\) (where \(\mu = 0.1\) ) per round of alternating minimization (i.e., per iteration of the outer loop). We use \(\rho\) to denote the number of iterations of the inner loop, which aims to minimize the loss w.r.t. the model \(w\) on the selected training set. In SPLD, we also have a weight for the negative group sparsity: it starts from \(\xi\) and increases by a factor of 1.1 at each round of alternating minimization (i.e., per iteration of the outer loop). We test five different combinations of \(\{\rho ,\mu \}\) and \(\{\rho ,\xi \}\) for SPL and SPLD respectively, and report the combination with the smallest test error rate. Neither SPL nor SPLD uses the clustering trick we applied to MCL: they compute the exact loss on each sample in each iteration.
Hence, they have more accurate estimation of the hardness on each sample, and require knowing the labels of all samples (selected and unselected) and cannot reduce annotation costs. Note SPLD still needs to run clustering and use the resulted clusters as groups in the group sparsity (which measures diversity in SPLD). We did not select samples with SPL/SPLD as we do with MCL since we wanted to test SPL/SPLD as originally presented — intuitively, SPL/SPLD should if anything only do better without such clustering due to the more accurate sample- specific hardness estimation. The actual clustering, however, used for SPLD's diversity term is the same as that used for MCL's cluster samples. We apply the mini- batch k- means algorithm to the features detailed in the next paragraph to get the clusters used in MCL and SPLD. Although both SPL and SPLD can be reduced to SGD when \(\lambda \to \infty\) (i.e., all samples always selected), we do not include this special case because SGD is already a baseline. For SGD with a random curriculum, results of 10 independent trials are reported. In our MCL experiments, we use a simple "feature based" submodular function (Wei et al., 2014b) where \(F(A) = \sum_{u\in \mathcal{U}}\omega_{u}\sqrt{c_{u}(A)}\) and where \(\mathcal{U}\) is a set of features. For a subset \(A\) of clusters, <--- Page Split ---> <table><tr><td>Dataset</td><td>News20</td><td>MNIST</td><td>CIFAR10</td><td>STL10</td><td>SVHN</td><td>Fashion</td></tr><tr><td>Total time</td><td>2649.19s</td><td>3418.97s</td><td>3677.73s</td><td>2953.47s</td><td>34153.81s</td><td>2927.18s</td></tr><tr><td>WS-SUBMODULARMAX</td><td>62.44s</td><td>35.33s</td><td>127.36s</td><td>206.70s</td><td>1892.62s</td><td>167.55s</td></tr></table> Table 2: Total time (secs.) of MCL( \(\Delta >0,\lambda >0,\gamma >0)\) and time only on WS-SUBMODULARMAX. \(c_{u}(A) = \sum_{i\in A}c_{u}(i)\) , where \(c_{u}(i)\) is the nonnegative feature \(u\) of the centroid for cluster \(i\) , and can be interpreted as a nonnegative score for cluster \(i\) . We use TF- IDF features for 20newsgroup. For the other datasets, we train a corresponding neural networks on a small random subset of training data (e.g., hundreds of samples) for one epoch, and use the inputs to the last fully connected layer (whose outputs are processed by softmax to generate class probabilities) as features. Because we always use ReLU activations between layers, the features are all nonnegative and the submodularity of \(F(A)\) follows as a consequence. These features are also used by mini- batch \(k\) - means to generate clusters for MCL and SPLD. For MCL, we set the number of inner loop iterations to \(p\leq 50\) . For each dataset, we choose \(p\) as the number among \(\{10,20,50\}\) that reduces the training loss the most in the first few iterations of the outer loop, and then use that \(p\) for the remaining iterations. As shown in Table 4, we use \(p = 50\) for 20newsgroups, MNIST and Fashion- MNIST, and \(p = 20\) for the other three datasets. 
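Returning to the diversity term, a minimal sketch of the feature-based submodular function \(F(A)\) above (ours, not the exact experimental code) stores the nonnegative centroid features row-wise in a matrix `C` and exposes the value and marginal gain needed by the greedy algorithm:

```python
import numpy as np

class FeatureBasedDiversity:
    """F(A) = sum_u w_u * sqrt(sum_{i in A} c_u(i)), with c_u(i) >= 0.

    Concave-over-modular, hence monotone non-decreasing submodular."""

    def __init__(self, C, weights=None):
        self.C = np.asarray(C, dtype=float)          # C[i, u]: feature u of centroid i
        assert (self.C >= 0).all(), "features must be nonnegative for submodularity"
        self.w = np.ones(self.C.shape[1]) if weights is None else np.asarray(weights)

    def value(self, A):
        if not A:
            return 0.0
        return float(self.w @ np.sqrt(self.C[list(A)].sum(axis=0)))

    def gain(self, i, A):
        """Marginal gain F(i | A) = F(A + i) - F(A)."""
        base = self.C[list(A)].sum(axis=0) if A else np.zeros(self.C.shape[1])
        return float(self.w @ (np.sqrt(base + self.C[i]) - np.sqrt(base)))
```

An instance's `gain` method could serve as the `F_gain` oracle in the sketch of Algorithm 1 above.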
We consider five variants of MCL: 1) MCL( \(\Delta = 0\) , \(\lambda = 0\) , \(\gamma = 0\) ) having neither submodular regularization that promotes diversity nor scheduling of \(k\) that increases hardness; 2) MCL( \(\Delta = 0\) , \(\lambda >0\) , \(\gamma >0\) ), which decreases diversity by exponentially reducing the weight \(\lambda\) of the submodular regularization, but does not have any scheduling of \(k\) , i.e., \(k\) is fixed during the algorithm; 3) MCL( \(\Delta >0\) , \(\lambda >0\) , \(\gamma = 0\) ), which only uses the scheduling of \(k\) shown in Algorithm 1, but the diversity weight \(\lambda\) is positive and fixed during the algorithm, i.e., with \(\gamma = 0\) ; 4) MCL- RAND( \(r,q\) ), which randomly samples \(r\) clusters as a training set \(A\) after every \(q\) rounds of the outer loop in Algorithm 1, and thus combines both MCL and SGD; 5) MCL( \(\Delta >0\) , \(\lambda >0\) , \(\gamma >0\) ), which uses the scheduling of both \(\lambda\) and \(k\) shown in Algorithm 1. We tried five different combinations of \(\{q,r\}\) for MCL- RAND( \(r,q\) ) and five different \(\Delta\) values for MCL( \(\Delta >0\) , \(\lambda >0\) , \(\gamma >0\) ), and report the one with the smallest test error. Other parameters, such as the initial values for \(\lambda\) and \(k\) , the values for \(\gamma\) and \(p\) , and the total number of clusters are the same for different variants (the exact values of these quantities are given in Table 4 of the Appendix). ![](images/8_0.jpg) <center>Figure 1: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on 20newsgroups (grey curves represents 10 random trials of SGD). </center> In MCL, running greedy is the only extra computation comparing to normal SGD. To show that in our implementation (see Section 4.6) its additional time cost is negligible, we report in Table 2 the total time cost for MCL( \(\Delta >0\) , \(\lambda >0\) , \(\gamma >0\) ) and the time spent on our implementation WS- SUBMODULARMAX. We summarize the main results in Figure 1- 8. More results are given at the end of the appendix (Section 4.7). In all figures, grey curves correspond to the ten trials of SGD under a random curriculum. The legend in all figures gives the parameters used for the different methods using the following labels: 1) SPL \((\rho ,\mu)\) ; 2) SPLD \((\rho ,\xi)\) ; and 3) MCL- RAND \((q,r)\) . Figures 1- 6 show how the test error changes with (on the left) the number of distinct labeled samples ever needing a loss gradient calculation, and (on the right) the number of training batches, <--- Page Split ---> ![](images/9_0.jpg) <center>Figure 2: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on CIFAR10 (grey curves represents 10 random trials of SGD). </center> corresponding to training time. Note only MCL and its variants use the clustering trick, while SPL/SPLD need to compute loss on every sample and thus require knowledge of the labels of all samples. 
The left plots show only the number of loss gradient calculations needed: 1) in MCL, for those clusters never selected in the curriculum, the loss (and hence the label) of only the centroid sample is needed; 2) in SPL/SPLD, for those samples never selected in the curriculum, their labels are needed only to compute the loss but not the gradient, so they are not reflected in the left plots of the figures, because their labels are not used to compute a gradient. Therefore, thanks to the clustering trick, MCL and its variants can train without needing all labels, similar to semi-supervised learning methods. This can help to reduce annotation costs if an MCL process is done in tandem with a labeling procedure, analogous to active learning. The right plots very roughly indicate convergence rate, namely how the test error decreases as a function of the amount of training. ![](images/9_1.jpg) <center>Figure 3: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on Fashion-MNIST (grey curves represent 10 random trials of SGD). </center> ![](images/9_2.jpg) <center>Figure 4: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on STL10 (grey curves represent 10 random trials of SGD). </center> <--- Page Split ---> ![](images/10_0.jpg) <center>Figure 5: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on SVHN (grey curves represent 10 random trials of SGD). </center> On all datasets, MCL and most of its variants outperform SPL and SPLD in terms of final test accuracy (shown in Table 1) with comparable efficiency (shown in the right plots of all figures). MCL is slightly slower than SGD to converge in early stages, but it can achieve a much smaller error when using the same number of labeled samples for loss gradients. Moreover, when using the same learning rate strategy, MCL and its variants can be more robust to overfitting, as shown in Figure 2. Comparing Figure 1 with Figures 2-6, MCL has a larger advantage when applied to deep models. ![](images/10_1.jpg) <center>Figure 6: Test error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on MNIST (grey curves represent 10 random trials of SGD). </center> ![](images/10_2.jpg) <center>Figure 7: Number of distinct labeled samples ever needing loss gradient calculation vs. number of training batches for News20 (left), CIFAR10 (middle) and MNIST (right) (grey curves represent 10 random trials of SGD). </center> Among the five variants of MCL, \(\mathrm{MCL}(\Delta >0, \lambda >0, \gamma >0)\) achieves the fastest convergence speed in later stages and the smallest final test error, while \(\mathrm{MCL}(\Delta = 0, \lambda = 0, \gamma = 0)\) usually achieves the worst performance (the only exception is on News20). Comparison between \(\mathrm{MCL}(\Delta = 0, \lambda >0, \gamma >0)\) and \(\mathrm{MCL}(\Delta = 0, \lambda = 0, \gamma = 0)\) shows that decreasing diversity improves the performance. \(\mathrm{MCL}(\Delta >0, \lambda >0, \gamma >0)\) always outperforms \(\mathrm{MCL}(\Delta = 0, \lambda >0, \gamma >0)\) . This indicates that increasing \(k\) brings advantages, e.g., more improvement in later stages.
<--- Page Split ---> ![](images/11_0.jpg) <center>Figure 8: Number of distinct labeled samples ever needing loss gradient calculation vs. number of training batches for Fashion-MNIST (left), STL10 (middle) and SVHN (right) (grey curves represents 10 random trials of SGD). </center> \(\mathrm{MCL}(\lambda >0,\gamma >0,\Delta >0)\) always outperforms \(\mathrm{MCL}(\Delta >0,\lambda >0,\gamma = 0)\) , which supports our claim that it is better to decrease the diversity as training proceeds rather than keeping it fixed. In particular, \(\mathrm{MCL}(\Delta >0,\lambda >0,\gamma = 0)\) shows slower convergence than other MCL variants in later stages. In our experiments in the \(\mathrm{MCL}(\Delta >0,\lambda >0,\gamma = 0)\) case, we needed to carefully choose \(\lambda\) and use a relatively large \(\Delta\) for it to work at all, as otherwise it would repeatedly choose the same subset (with small \(\Delta\) , the loss term decreases as training proceeds, so with fixed \(\lambda\) the diversity term comes to dominate the objective). This suggests that a large diversity encouragement is neither necessary nor beneficial when the model matures, possibly since \(k\) is large at that point and there is ample opportunity for a diversity of samples to be selected just because \(k\) is large, and also since encouraging too much loss- unspecific diversity at that point might only select outliers. The combination of MCL and random curriculum (MCL- RAND) speeds up convergence, and sometimes (e.g., on MNIST, SVHN and Fashion- MNIST) leads to a good final test accuracy, but requires more labeled samples for gradient computation and still cannot outperform \(\mathrm{MCL}(\lambda >0\) \(\gamma >0,\Delta >0)\) . These results indicate that the diversity introduced by submodular regularization does yield improvements, and changing both hardness and diversity improves performance. Figure 7 and Figure 8 shows how the "number of distinct labeled samples ever needing loss gradient calculation" changes as training proceeds. It shows how the different methods trade- off between "training on more new samples" vs. "training on fewer distinct samples more often." Thanks to the clustering trick, MCL and its variants usually require fewer labeled samples for model training than SGD but more than SPL and SPLD. Acknowledgments This work was done in part while author Bilmes was visiting the Simons Institute for the Theory of Computing in Berkeley, CA. This material is based upon work supported by the National Science Foundation under Grant No. IIS- 1162606, the National Institutes of Health under award R01GM103544, and by a Google, a Microsoft, a Facebook, and an Intel research award. This work was supported in part by TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA. ## 4 APPENDIX ### 4.1 PROOF OF LEMMA 1 Proof. We have \[\kappa_{G} = 1 - \min_{j\in V}\frac{L(j) + \lambda F(j|V\backslash j)}{L(j) + \lambda F(j)} = \lambda \cdot \max_{j\in V}\frac{F(j) - F(j|V\backslash j)}{L(j) + \lambda F(j)}\] \[\qquad = \lambda \cdot \max_{j\in V}\frac{1 - \frac{F(j|V\backslash j)}{F(j)}}{L(j) + \lambda}\leq \frac{\lambda \cdot \kappa_{F}}{\min_{j\in V}\frac{L(j)}{F(j)} + \lambda} = \frac{\kappa_{F}}{c_{1} / \lambda + 1}\] Where \(c_{1} \triangleq \min_{j \in V} \frac{L(j)}{F(j)}\) . ### 4.2 PROOF OF PROPOSITION 1 Proposition 1. The maximum of multiple \(\beta\) - strongly convex functions is \(\beta\) - strongly convex as well. Proof. 
Let \(g(x) = \max_{i} g_{i}(x)\) , where \(g_{i}(x)\) is \(\beta\) - strongly convex for any \(i\) . According to a definition of strongly convex function given in Theorem 2.1.9 (page 64) of (Nesterov, 2004), \(\forall \lambda \in [0,1]\) , we have \[g_{i}(\lambda x + (1 - \lambda)y)\leq \lambda g_{i}(x) + (1 - \lambda)g_{i}(y) - \frac{\beta}{2}\lambda (1 - \lambda)\| x - y\|_{2}^{2},\forall i.\] The following proves that \(g(x)\) is also \(\beta\) - strongly convex: \[g(\lambda x + (1 - \lambda)y) = \max_{i}g_{i}(\lambda x + (1 - \lambda)y)\] \[\leq \max_{i}[\lambda g_{i}(x) + (1 - \lambda)g_{i}(y)] - \frac{\beta}{2}\lambda (1 - \lambda)\| x - y\|_{2}^{2}\] \[\leq \max_{i}\lambda g_{i}(x) + \max_{i}(1 - \lambda)g_{i}(y) - \frac{\beta}{2}\lambda (1 - \lambda)\| x - y\|_{2}^{2}\] \[= \lambda g(x) + (1 - \lambda)g(y) - \frac{\beta}{2}\lambda (1 - \lambda)\| x - y\|_{2}^{2}.\] ### 4.3 PROOF OF THEOREM 1 Proof. The objective \(g(w)\) of the minimax problem in Eq. (2) after eliminating \(A\) is given in Eq. (4). Since \(G(A)\) in Eq. (5) is monotone non- decreasing submodular, the optimal subset \(A\) when defining \(g(w)\) in Eq. (4) always has size \(k\) if \(|V| \geq k\) . In addition, because the loss function \(L(y_{i}, f(x_{i}, w))\) is \(\beta\) - strongly convex, \(g(w)\) in Eq. (4) is the maximum over multiple \(k \beta\) - strongly convex functions with different \(A\) . According to Proposition 1, \(g(w)\) is also \(k \beta\) - strongly convex, i.e., \[g(\hat{w})\geq g(w^{*}) + \nabla g(w^{*})^{T}(\hat{w} -w^{*}) + \frac{k\beta}{2}\| \hat{w} -w^{*}\|_{2}^{2},\forall \nabla g(w^{*})\in \partial g(w^{*}). \quad (14)\] Since the convex function \(g(w)\) achieves minimum on \(w^{*}\) , it is valid to substitute \(\nabla g(w^{*}) = 0 \in \partial g(w^{*})\) into Eq. (14). After rearrangement, we have \[\| \hat{w} -w^{*}\|_{2}^{2}\leq \frac{2}{k\beta}\left[g(\hat{w}) - g(w^{*})\right]. \quad (15)\] In the following, we will prove \(g(w^{*}) \geq \alpha \cdot g(\hat{w})\) , which together with Eq. (15) will lead to the final bound showing how close \(\hat{w}\) is to \(w^{*}\) . Note \(\hat{g} (w)\) (Eq. (6)) is a piecewise function, each piece of which is convex and associated with different \(\hat{A}\) achieved by a submodular maximization algorithm of approximation factor \(\alpha\) . Since \(\hat{A}\) is not guaranteed to be a global maxima, unlike \(g(w)\) , the whole \(\hat{g} (w)\) cannot be written as the maximum of multiple convex functions and thus can be non- convex. Therefore, gradient descent in lines 6- 9 of Algorithm 1 can lead to either: 1) \(\hat{w}\) is a global minima of \(\hat{g} (w)\) ; or 2) \(\hat{w}\) is a local minima of \(\hat{g} (w)\) . Saddle points do not exist on \(\hat{g} (w)\) because each piece of it is convex. We are also assuming other issues associated with the boundaries between convex pieces do not repeatedly occur. <--- Page Split ---> 1) When \(\hat{w}\) is a global minima of \(\hat{g} (w)\) , we have \[g(w^{*})\geq \hat{g} (w^{*})\geq \hat{g} (\hat{w})\geq \alpha \cdot g(\hat{w}). \quad (16)\] The first inequality is due to \(g(\cdot)\geq \hat{g} (\cdot)\) . The second inequality is due to the global optimality of \(\hat{w}\) . The third inequality is due to the approximation bound \(\hat{g} (\cdot)\geq \alpha \cdot g(\cdot)\) guaranteed by the submodular maximization in Step 7 of Algorithm 1. 2) When \(\hat{w}\) is a local minima of \(\hat{g} (w)\) , we have \(\nabla \hat{g} (\hat{w}) = 0\) . 
Let \(h(w)\) be the piece of \(\hat{g} (w)\) in which \(\hat{w}\) is located; then \(\hat{w}\) must be a global minimum of \(h(w)\) due to the convexity of \(h(w)\) . Let \(\mathcal{A}\) denote the collection of the sets \(\hat{A}\) over all pieces of \(\hat{g} (w)\) ; we define an auxiliary convex function \(\tilde{g} (w)\) as \[\tilde{g} (w)\triangleq \max_{A\in \mathcal{A}}\sum_{i\in A}L\left(y_{i},f(x_{i},w)\right) + \lambda F(A). \quad (17)\] It is convex because it is defined as the maximum of multiple convex functions. So we have \[\hat{g} (w)\leq \tilde{g} (w)\leq g(w),\forall w\in \mathbb{R}^{m}. \quad (18)\] The first inequality is due to the definition of \(\mathcal{A}\) , and the second inequality is a result of \(\mathcal{A}\subseteq 2^{V}\) , by comparing \(g(w)\) in Eq. (4) with \(\tilde{g} (w)\) in Eq. (17). Let \(\tilde{w}\) denote a global minimum of \(\tilde{g} (w)\) ; we have \[g(w^{*})\geq \tilde{g} (w^{*})\geq \tilde{g} (\tilde{w})\geq h(\tilde{w})\geq h(\hat{w}) = \hat{g} (\hat{w})\geq \alpha \cdot g(\hat{w}). \quad (19)\] The first inequality is due to Eq. (18), the second inequality is due to the global optimality of \(\tilde{w}\) on \(\tilde{g} (w)\) , the third inequality is due to the definition of \(\tilde{g} (w)\) in Eq. (17) ( \(\tilde{g} (w)\) is the maximum of all pieces of \(\hat{g} (w)\) and \(h(w)\) is one of them), the fourth inequality is due to the global optimality of \(\hat{w}\) on \(h(w)\) , and the last inequality is due to the approximation bound \(\hat{g} (\cdot)\geq \alpha \cdot g(\cdot)\) guaranteed by the submodular maximization in Step 7 of Algorithm 1. Therefore, in both cases we have \(g(w^{*})\geq \alpha \cdot g(\hat{w})\) . Applying it to Eq. (15) results in \[\| \hat{w} -w^{*}\|_{2}^{2}\leq \frac{2}{k\beta}\left(\frac{1}{\alpha} -1\right)\cdot g(w^{*}). \quad (20)\] ### 4.4 PROPOSITION 2 Proposition 2. If \(x\in [0,1]\) , the following inequality holds: \[\frac{x}{1 - e^{-x}} -1\leq x. \quad (21)\] Proof. Due to the two inequalities \(e^{x}\leq 1 + x + x^{2} / 2\) for \(x\leq 0\) and \(1 - e^{- x}\geq x / 2\) for \(x\in [0,1]\) , \[\frac{x}{1 - e^{-x}} -1 = \frac{x - 1 + e^{-x}}{1 - e^{-x}}\leq \frac{x - 1 + (1 - x + x^{2} / 2)}{x / 2} = x. \quad (22)\] ### 4.5 PROOF OF COROLLARY 1 Proof. Applying the inequality in Proposition 2 and the approximation factor of lazy greedy \(\alpha = (1 - e^{- \kappa_{G}})\big / \kappa_{G}\) to the right hand side of Eq. (9) from Theorem 1 yields \[\begin{array}{l}{\| \hat{w} -w^{*}\|_{2}^{2}\leq \frac{2}{k\beta}\left(\frac{1}{\alpha} -1\right)\cdot g(w^{*})}\\ {= \frac{2}{k\beta}\left(\frac{\kappa_{G}}{1 - e^{-\kappa_{G}}} -1\right)\cdot g(w^{*})\leq \frac{2\kappa_{G}}{k\beta}\cdot g(w^{*}),} \end{array} \quad (23)\] where \(\kappa_{G}\) is the curvature of the submodular function \(G(\cdot)\) defined in Eq. (5). Substituting the inequality for \(\kappa_{G}\) from Lemma 1 into Eq. (23) results in \[\| \hat{w} -w^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}}{k\beta(c_{1} / \lambda +1)}\times g(w^{*})\leq \frac{2\kappa_{F}}{\beta c_{1}}\times \frac{\lambda}{k}\times g(w^{*}). \quad (24)\] We use a subscript as the index for iterations of the outer loop, e.g., \(\hat{w}_{T}\) is the model weight vector \(w\) after the \(T^{th}\) iteration of the outer loop.
If we decrease \(\lambda\) exponentially from \(\lambda = \lambda_{0}\) and increase \(k\) linearly from \(k = k_{0}\) , as Step 11 in Algorithm 1, we have \[\| \hat{w}_{T} -w_{T}^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}\lambda_{0}}{\beta c_{1}}\times \frac{(1 - \gamma)^{T}}{(k_{0} + T\Delta)}\times g(w_{T}^{*}), \quad (25)\] <--- Page Split ---> According to the definition of \(g(\cdot)\) in Eq. (4), for \(g(w_{T}^{*})\) we have \[\begin{array}{r l} & {g(w_{T}^{*}) = \underset {w\in \mathbb{R}^{m}}{\min}\underset {A\subseteq V,|A|\leq k}{\max}\underset {i\in A}{\sum}L\left(y_{i},f(x_{i},w)\right) + \lambda F(A)}\\ & {\qquad \leq \underset {w\in \mathbb{R}^{m}}{\min}\underset {A\subseteq V,|A|\leq k}{\max}\underset {i\in A}{\sum}L\left(y_{i},f(x_{i},w)\right) + \lambda_{0}(1 - \gamma)^{T}\underset {A\subseteq V,|A|\leq k}{\max}F(A)}\\ & {\qquad = g(w_{\infty}^{*}) + \lambda_{0}(1 - \gamma)^{T}c_{2},} \end{array} \quad (26)\] where \[g(w_{\infty}^{*})\triangleq \min_{w\in \mathbb{R}^{m}}\max_{A\subseteq V,|A|\leq k}\sum_{i\in A}L\left(y_{i},f(x_{i},w)\right),c_{2}\triangleq \max_{A\subseteq V,|A|\leq k}F(A). \quad (27)\] Substituting Eq. (26) to Eq. (25) yields \[\| \hat{w}_{T} - w_{T}^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}\lambda_{0}}{\beta c_{1}}\times \frac{(1 - \gamma)^{T}}{(k_{0} + T\Delta)}\times \left[g(w_{\infty}^{*}) + \lambda_{0}c_{2}(1 - \gamma)^{T}\right], \quad (28)\] If we can tolerate more expensive computational cost for running submodular maximization with larger budget \(k\) , and increase \(k\) exponentially, i.e., \(k\gets (1 + \Delta)\cdot k\) , we have \[\| \hat{w}_{T} - w_{T}^{*}\|_{2}^{2}\leq \frac{2\kappa_{F}\lambda_{0}}{\beta c_{1}k_{0}}\times \left(\frac{1 - \gamma}{1 + \Delta}\right)^{T}\times \left[g(w_{\infty}^{*}) + \lambda_{0}c_{2}(1 - \gamma)^{T}\right]. \quad (29)\] This completes the proof. ### 4.6 SUBMODULAR MAXIMIZATION STARTING FROM A PREVIOUS "WARM" SOLUTION Algorithm 1 repeatedly runs a greedy procedure to solve submodular maximization, and this occurs two nested loops deep. In this section we describe how we speed this process up. Our first strategy reduces the size of the ground set before starting a more expensive submodular maximization procedure. We use a method described in (Wei et al., 2014a) where we sort the elements of \(V\) non- increasingly by \(G(i|V\setminus i)\) and then remove any element \(i\) from \(V\) having \(G(i)< G(\delta (k)|V\setminus \delta (k))\) where \(\delta (k)\) is \(k^{\mathrm{th}}\) element in the sorted permutation. Any such element will never be chosen by the \(k\) - cardinality constrained greedy procedure because for any \(\ell \in \{1,2,\ldots ,k\}\) and any set \(A\) , we have \(G(\delta (\ell)|A)\geq G(\delta (\ell)|V\setminus \delta (\ell))\geq G(\delta (k)|V\setminus \delta (k)) > G(i)\geq G(i|A)\) and thus greedy would always be able to choose an element better than \(i\) . This method results in no reduction in approximation quality, although it might not yield any speedup at all. But with a decreasing \(\lambda\) , \(G(A)\) becomes more modular, and the filtering method can become more effective. Other methods we can employ are those such as (Zhou et al., 2017; Mirzasoleiman et al., 2015), resulting in small reduction in approximation quality, but we do not describe these further. The key contribution of this section is a method exploiting a potential warm start set that might already achieve a sufficient approximation quality. 
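Before turning to the warm start itself, the lossless pruning rule described above can be sketched as follows (our own illustration; `G_singleton(i)` and `G_tail_gain(i)` are assumed accessors for \(G(i)\) and \(G(i|V\setminus i)\)):

```python
import numpy as np

def prune_ground_set(G_singleton, G_tail_gain, V, k):
    """Lossless filtering (Wei et al., 2014a): drop any i with
    G(i) < G(delta(k) | V \\ delta(k)), where delta sorts V non-increasingly
    by G(i | V \\ i). A dropped element can never be chosen by the
    k-constrained greedy, so approximation quality is unchanged."""
    V = list(V)
    tail = np.array([G_tail_gain(i) for i in V])
    delta = [V[j] for j in np.argsort(-tail)]    # non-increasing in G(i | V \ i)
    threshold = G_tail_gain(delta[k - 1])        # G(delta(k) | V \ delta(k))
    return [i for i in V if G_singleton(i) >= threshold]
```

As noted above, the filtering becomes more aggressive as \(\lambda\) shrinks and \(G\) approaches a modular function.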
Normally, the greedy procedure starts with the empty set and adds elements greedily until a set of size \(k\) is reached. In Algorithm 1, by contrast, a previous iteration has already solved a size- \(k\) constrained submodular maximization problem for the previous submodular function, the solution to which is one that could very nearly already satisfy a desired approximation bound for the current submodular function. The reason for this is that, depending on the weight update method in line 9 of Algorithm 1 between inner loop iterations, and the changes to parameters \(\Delta\) and \(\gamma\) between outer iterations, the succession of submodular functions might not change very quickly. For example, when the learning rate \(\eta\) is small, the \(\hat{A}\) from the previous iteration could still be valued highly by the current iteration's function, so running a greedy procedure from scratch is unnecessary. Our method warm- starts a submodular maximization process with a previously computed set, and offers a bound that trades off speed and approximation quality. The approach is given in Algorithm 2, which (after the aforementioned filtering in line 3 (Wei et al., 2014a)) tests in linear time if the warm start set \(\hat{A}\) already achieves a sufficient approximation quality, and if so, possibly improves it further with an additional linear or quasilinear time computation. To test approximation quality of \(\hat{A}\) , our approach uses a simple modular function upper bound, in line 4, to compute an upper bound on the global maximum value. For the subsequent improvement of \(\hat{A}\) , our approach utilizes a submodular semigradient approach (Iyer et al., 2013) (specifically subgradients (Fujishige, 2005) in this case). If the warm- start set \(\hat{A}\) does not achieve sufficient approximation quality in line 5, the algorithm backs off to standard submodular maximization in line <--- Page Split ---> 11 (we use the accelerated/lazy greedy procedure (Minoux, 1978) here although other methods, e.g., (Mirzasoleiman et al., 2015), can be used as well). Algorithm 2 Warm Start (WS) WS- SUBMODULARMAX \((G,k,\hat{A},\bar{\alpha}\in [0,1))\) 1: Input: \(G(\cdot)\) , \(k\) , \(\hat{A}\) , \(\bar{\alpha}\) 2: Output: \(\hat{A}\) 3: Reduce ground set size: arrange \(V\) non- increasingly in terms of \(G(i|V\backslash i)\) in a permutation \(\delta\) where \(\delta (k)\) is the \(k^{t h}\) element, set \(V\leftarrow \{i\in V|G(i)\geq G(\delta (k)|V\backslash \delta (k))\};\) 4: Compute upper bound to maximum of Eq. (5): \[\tau = \max_{A\in V,|A|\leq k}\sum_{i\in A}\left[L\left(y_{i},f(x_{i},w^{t})\right) + \lambda F(i)\right]\] 5: if \(G(\hat{A})\geq \bar{\alpha}\cdot \tau\) then 6: Permutation \(\sigma\) of \(V\) : the first \(k\) elements have \(S_{k}^{\sigma} = \hat{A}\) and are chosen ordered non- increasing by \(\kappa_{G}(v)\) ; the remaining \(n - k\) elements \(V\setminus \hat{A}\) for \(\sigma\) are chosen non- increasing by \(\kappa_{G}(v)\) . 
7: Define modular function \(h_{\hat{A}}(A)\triangleq \sum_{i\in A}h_{\hat{A}}(i)\) with \(h_{\hat{A}}(\sigma (i)) = G(S_{i}^{\sigma}) - G(S_{i - 1}^{\sigma})\) ; 8: Compute tight, at \(\hat{A}\) , lower bound \(L(A)\) of \(G(A)\) : \[L(A)\triangleq G(\hat{A}) + h_{\hat{A}}(A) - h_{\hat{A}}(\hat{A})\leq G(A)\] 9: \(\hat{A}\leftarrow \arg \max_{A\in V,|A|\leq k}L(A)\) ; 10: else 11: \(\hat{A}\leftarrow \mathrm{LAZYGREEDY}(G,V,k)\) ; 12: end if Line 4 computes the upper bound \(\tau \geq \max_{A\in V,|A|\leq k}G(A)\) , which holds due to submodularity, requiring only a modular maximization problem (which can be done in \(\mathcal{O}(|V|)\) time, independent of \(k\) , to select the top \(k\) elements). Line 5 checks if an \(\bar{\alpha}\) approximation to this upper bound is achieved by the warm-start set \(\hat{A}\) , and if not, we back off to a standard submodular maximization procedure in line 11. If \(\hat{A}\) is an \(\bar{\alpha}\) approximation to the upper bound \(\tau\) , then lines 6-9 run a subgradient optimization procedure, a process that can potentially improve it further. The approach selects a subgradient defined by a permutation \(\sigma = (\sigma (1),\sigma (2),\ldots ,\sigma (n))\) of the elements. The algorithm then defines a modular function \(L(A)\) , tight at \(\hat{A}\) and a lower bound everywhere else, i.e., \(L(\hat{A}) = G(\hat{A})\) , and \(\forall A,L(A)\leq G(A)\) . Any permutation will achieve this as long as \(\hat{A} = \{\sigma (1),\sigma (2),\ldots ,\sigma (k)\}\) . The specific permutation we use is described below. Once we have the modular lower bound, we can do simple and fast modular maximization. Lines 6-9 of Algorithm 2 offer a heuristic that can only improve the objective: letting \(\hat{A}^{+}\) be the solution after line 9, we have \[G(\hat{A}^{+})\geq L(\hat{A}^{+})\geq L(\hat{A}) = G(\hat{A}). \quad (30)\] The first inequality follows since \(L(\cdot)\) is a lower bound of \(G(\cdot)\) ; the second inequality follows from the optimality of \(\hat{A}^{+}\) for the modular maximization in line 9; the equality follows since \(L\) is tight at \(\hat{A}\) . The approximation factor \(\bar{\alpha}\) is distinct from the submodular maximization approximation factor \(\alpha\) achieved by the greedy algorithm. Setting, for example, \(\bar{\alpha} = 1 - 1 / e\) would ask for the previous solution to be this good relative to \(\tau\) , the upper bound on the global maximum, and the algorithm would almost always immediately jump to line 11, since achieving such approximation quality might not even be possible in polynomial time (Feige, 1998). With \(\bar{\alpha}\) large, we recover the approximation factor of the greedy algorithm but ignore the warm start. If \(\bar{\alpha}\) is small, many iterations might use the warm start from the previous iteration, updating it only via one step of subgradient optimization, but with a worse approximation factor. In practice, therefore, we use a more lenient bound (often we set \(\bar{\alpha} = 1 / 2\) ), which is a good practical tradeoff between approximation accuracy and speed (meaning lines 6-9 execute a reasonable fraction of the time, leading to a good speedup; in our experiments, the time cost of WS-SUBMODULARMAX increases by a factor ranging from about 3 to 5 if \(\bar{\alpha} = 1\) ). In general, we have the following final bound based on the smaller of \(\bar{\alpha}\) and \(\alpha\) . <--- Page Split ---> Lemma 2.
Algorithm 2 outputs a solution \(\hat{A}\) such that \(G(\hat{A}) \geq \min \{\bar{\alpha}, \alpha \} \times \max_{A \in V, |A| \leq k} G(A)\) , where \(\alpha\) is the approximation factor of the greedy procedure (typically \(\alpha = (1 - e^{-\kappa_G}) / \kappa_G\) ). Proof. Let \(A^{*}\) denote an optimal solution to Eq. (5): \[A^{*} \in \underset {A \in V, |A| \leq k}{\arg \max} \sum_{i \in A} L \left(y_{i}, f(x_{i}, w^{t}) \right) + \lambda F(A). \quad (31)\] \(\tau\) computed in line 4 is an upper bound to \(G(A^{*})\) since: \[\begin{array}{r l} & {\tau \geq \sum_{i\in A^{*}}[L\left(y_{i},f(x_{i},w^{t})\right) + \lambda F(i)]}\\ & {\qquad = \sum_{i\in A^{*}}L\left(y_{i},f(x_{i},w^{t})\right) + \lambda \sum_{i\in A^{*}}F(i)}\\ & {\qquad \geq \sum_{i\in A^{*}}L\left(y_{i},f(x_{i},w^{t})\right) + \lambda F(A^{*}).} \end{array} \quad (32)\] The first inequality follows by the definition of \(\tau\) ; the last inequality is due to submodularity, guaranteeing \(F(i) \geq F(i|B)\) for any \(B \subseteq V\) . When \(G(\hat{A}) \geq \bar{\alpha} \cdot \tau\) (line 5), the subgradient ascent can only improve the objective. Thus, we have \(G(\hat{A}) \geq \bar{\alpha} \cdot \max_{A \in V, |A| \leq k} G(A)\) for \(\hat{A}\) obtained in line 9. Otherwise, we run the greedy algorithm on the reduced ground set \(V\) . Thus, we have \(G(\hat{A}) \geq \alpha \cdot \max_{A \in V, |A| \leq k} G(A)\) for \(\hat{A}\) obtained in line 11. \(\square\) The heuristic in lines 6- 9 is identical to one step of the semigradient- based minorization- maximization (MM) scheme used in, for example, (Narasimhan & Bilmes, 2005; Jegelka & Bilmes, 2011; Iyer & Bilmes, 2012; Iyer et al., 2013). Which permutation to use for the subgradient in order to tighten the gap has been an issue discussed as far back as (Narasimhan & Bilmes, 2005). In the present work, we offer a new heuristic for this problem. Let the first \(i\) elements in the permutation \(\sigma\) be denoted \(S_{i}^{\sigma} = \{\sigma (1), \sigma (2), \ldots , \sigma (i)\}\) , and let \(A_{i - 1}^{\sigma} \triangleq \{\sigma (j) \in A | j < i\} = S_{i - 1}^{\sigma} \cap A \subseteq S_{i - 1}^{\sigma}\) for any \(i \in A\) . The gap we wish to reduce is \[\begin{array}{r l} & {0\leq G(A) - L(A) = \sum_{\sigma (i)\in A}\left[G(\sigma (i)|A_{i - 1}^{\sigma}) - G(\sigma (i)|S_{i - 1}^{\sigma})\right]}\\ & {\qquad = \sum_{\sigma (i)\in A}G(\sigma (i))\cdot \left[\frac{G(\sigma (i)|A_{i - 1}^{\sigma})}{G(\sigma (i))} -\frac{G(\sigma (i)|S_{i - 1}^{\sigma})}{G(\sigma (i))}\right]}\\ & {\qquad \leq \sum_{\sigma (i)\in A}G(\sigma (i))\cdot \left[\frac{G(\sigma (i)|A_{i - 1}^{\sigma})}{G(\sigma (i))} -(1 - \kappa_{G}(\sigma (i)))\right]} \end{array} \quad (34)\] Which follows since \(h_{\hat{A}}(\sigma (i)) = G(\sigma (i)|S_{i - 1}^{\sigma})\) by definition. Line 6 chooses a particular permutation in an attempt to reduce this gap. Define an element- wise form of curvature as \(\kappa_{G}(v) = 1 - G(v|V\backslash v) / G(v) \in [0, 1]\) for all \(v \in V\) . Note that \(\kappa_{G} = \max_{v \in V} \kappa_{G}(v)\) . If \(\kappa_{G}(v) \approx 0\) then \(G\) is practically modular at \(v\) and so \(G(v|A) \approx G(v)\) for any set \(A\) ; in other words, \(G(v|A)\) is close to \(v\) 's maximum possible gain even if \(v\) is ranked with index very late in the permutation \(\sigma\) where \(A\) is very large. 
If \(\kappa_{G}(v) \approx 1\) , on the other hand, then there is some set \(A \subseteq V \setminus \{v\}\) that can appreciably reduce \(G(v|A)\) relative to the maximum possible gain \(G(v)\) , and so it is best to rank \(v\) very early in the order \(\sigma\) where \(A\) must be a small set. One heuristic to achieve these goals is to choose a permutation \(\sigma\) that arranges the elements in an order non- increasing according to \(\kappa_{G}(v)\) , meaning \(\kappa_{G}(\sigma (1)) \geq \kappa_{G}(\sigma (2)) \geq \ldots\) . Choosing this order is therefore an attempt to keep each of the conditional gains \(G(\sigma (i)|S_{i - 1}^{\sigma})\) as close as possible to \(\sigma (i)\) 's maximum possible gain, \(G(\sigma (i))\) . This corresponds to an attempt to reduce Eq. (34) (and correspondingly close the \(G(A) - L(A)\) gap) as much as possible. Line 6 of Algorithm 2 does this, subject to the requirement that the first \(k\) elements of the permutation must correspond to \(\hat{A}\) in order to be a subgradient. These tricks all help lines 6- 9 produce a better updated approximate maximizer but at appreciably increased speed. ### 4.7 ADDITIONAL RESULTS This section concludes by, in the form of tables and plots, providing more information about our experiments and experimental results for the algorithms mentioned above. <--- Page Split ---> Table 3: Details regarding the datasets. <table><tr><td>Dataset</td><td>News20</td><td>MNIST</td><td>CIFAR10</td><td>STL10</td><td>SVHN</td><td>Fashion</td></tr><tr><td>#Training</td><td>11314</td><td>50000</td><td>50000</td><td>5000</td><td>73257</td><td>50000</td></tr><tr><td>#Test</td><td>7532</td><td>10000</td><td>10000</td><td>8000</td><td>26032</td><td>10000</td></tr><tr><td>#Feature</td><td>129791</td><td>28 × 28</td><td>32 × 32 × 3</td><td>96 × 96 × 3</td><td>32 × 32</td><td>28 × 28</td></tr><tr><td>#Class</td><td>20</td><td>10</td><td>10</td><td>10</td><td>10</td><td>10</td></tr></table> Table 4: Parameters of MCL (Algorithm 1) and its variants for different datasets. <table><tr><td>Dataset</td><td>News20</td><td>MNIST</td><td>CIFAR10</td><td>STL10</td><td>SVHN</td><td>Fashion</td></tr><tr><td>p</td><td>50</td><td>50</td><td>20</td><td>20</td><td>20</td><td>50</td></tr><tr><td>#cluster</td><td>200</td><td>1000</td><td>1000</td><td>400</td><td>800</td><td>1000</td></tr><tr><td>γ</td><td>0.05</td><td>0.05</td><td>0.05</td><td>0.2</td><td>0.2</td><td>0.05</td></tr><tr><td>initial k</td><td>4</td><td>4</td><td>4</td><td>20</td><td>30</td><td>10</td></tr><tr><td>initial λ</td><td>6 × 10−6</td><td>1 × 10−6</td><td>8 × 10−7</td><td>1 × 10−7</td><td>1 × 10−6</td><td>1 × 10−6</td></tr><tr><td>initial η</td><td>3.5</td><td>0.02</td><td>0.01</td><td>0.02</td><td>0.01</td><td>0.02</td></tr></table> ![](images/20_0.jpg) <center>Figure 9: Training error rate (\%) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on 20newsgroups (grey curves represents 10 random trials of SGD). </center> ![](images/20_1.jpg) <center>Figure 10: Training loss vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on 20newsgroups (grey curves represents 10 random trials of SGD). </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 11: Test loss vs. 
number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on 20newsgroups (grey curves represents 10 random trials of SGD). </center> ![](images/21_1.jpg) <center>Figure 12: Training loss vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on CIFAR10 (grey curves represents 10 random trials of SGD). </center> ![](images/21_2.jpg) <center>Figure 13: Test loss vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on CIFAR10 (grey curves represents 10 random trials of SGD). </center> ![](images/21_3.jpg) <center>Figure 14: Training error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on MNIST (grey curves represents 10 random trials of SGD). </center> <--- Page Split ---> ![](images/22_0.jpg) <center>Figure 15: Training error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on Fashion-MNIST (grey curves represents 10 random trials of SGD). </center> ![](images/22_1.jpg) <center>Figure 16: Training error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on STL10 (grey curves represents 10 random trials of SGD). </center> ![](images/22_2.jpg) <center>Figure 17: Training error rate \((\%)\) vs. number of distinct labeled samples ever needing loss gradient calculation (left) and number of training batches (right) on SVHN (grey curves represents 10 random trials of SGD). </center> <--- Page Split --->
accept
Accept (Poster)
5.666667
ICLR_2018_paper_0730
iclr
2,018
# ROBUSTNESS OF CLASSIFIERS TO UNIVERSAL PERTURBATIONS: A GEOMETRIC PERSPECTIVE Seyed Mohsen Moosavi Dezfooli\* École Polytechnique Fédérale de Lausanne seyed.moosavi@epfl.ch Alhussein Fawzi† University of California, Los Angeles fawzi@cs.ucla.edu Omar Fawzi École Normale Supérieure de Lyon omar.fawzi@ens- lyon.fr Pascal Frossard École Polytechnique Fédérale de Lausanne pascal.frossard@epfl.ch Stefano Soatto University of California, Los Angeles soatto@ucla.edu ## ABSTRACT Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image- agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we provide a quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations, and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved models). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exist shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties. ## 1 INTRODUCTION Despite the success of deep neural networks in solving complex visual tasks He et al. (2016); Krizhevsky et al. (2012), these classifiers have recently been shown to be highly vulnerable to perturbations in the input space. In Moosavi- Dezfooli et al. (2017), state- of- the- art classifiers are empirically shown to be vulnerable to universal perturbations: there exist very small image- agnostic perturbations that cause most natural images to be misclassified. The existence of universal perturbation is further shown in Hendrik Metzen et al. (2017) to extend to other visual tasks, such as semantic segmentation. Universal perturbations fundamentally differ from the random noise regime, and exploit essential properties of deep networks to misclassify most natural images with perturbations of very small magnitude. Why are state- of- the- art classifiers highly vulnerable to these specific directions in the input space? What do these directions represent? To answer these questions, we follow a theoretical approach and find the causes of this vulnerability in the geometry of the decision boundaries induced by deep neural networks. For deep networks, we show that the key to answering these questions lies in the existence of shared directions (across different datapoints) along which the decision boundary is highly curved. This establishes fundamental connections between geometry and robustness to universal perturbations, and thereby reveals new properties of the decision boundaries induced by deep networks. <--- Page Split ---> Our aim here is to derive an analysis of the vulnerability to universal perturbations in terms of the geometric properties of the boundary. 
To this end, we introduce two decision boundary models: 1) the locally flat model assumes that the first-order linear approximation of the decision boundary holds locally in the vicinity of the natural images, and 2) the locally curved model provides a second-order local description of the decision boundary, and takes the curvature information into account. We summarize our contributions as follows:

- Under the locally flat decision boundary model, we show that classifiers are vulnerable to universal directions as long as the normals to the decision boundaries in the vicinity of natural images are correlated (i.e., they approximately span a low dimensional space). This result formalizes and proves some of the empirical observations made in Moosavi-Dezfooli et al. (2017).
- Under the locally curved decision boundary model, the robustness to universal perturbations is instead driven by the curvature of the decision boundary; we show that the existence of shared directions along which the decision boundary is positively<sup>1</sup> curved implies the existence of very small universal perturbations.
- We show that state-of-the-art deep nets remarkably satisfy the assumption of our theorem derived for the locally curved model: there actually exist shared directions along which the decision boundary of deep neural networks is positively curved. Our theoretical result consequently captures the large vulnerability of state-of-the-art deep networks to universal perturbations.
- We finally show that the developed theoretical framework provides a novel (geometric) method for computing universal perturbations, and further explains some of the properties observed in Moosavi-Dezfooli et al. (2017) (e.g., diversity, transferability) regarding the robustness to universal perturbations.

## 2 DEFINITIONS AND NOTATIONS

Consider an \(L\)-class classifier \(f:\mathbb{R}^{d}\to \mathbb{R}^{L}\). Given a datapoint \(\pmb {x}\in \mathbb{R}^{d}\), we define the estimated label \(\hat{k} (\pmb {x}) = \mathrm{argmax}_{k}f_{k}(\pmb {x})\), where \(f_{k}(\pmb {x})\) is the \(k\)-th component of \(f(\pmb {x})\), corresponding to the \(k^{\mathrm{th}}\) class. We denote by \(\mu\) a distribution over natural images in \(\mathbb{R}^{d}\). The main focus of this paper is to analyze the robustness of classifiers to universal (image-agnostic) noise. Specifically, we define \(\pmb{v}\) to be a universal noise vector if \(\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\) for "most" \(\pmb {x}\sim \mu\). Formally, a perturbation \(\pmb{v}\) is \((\xi ,\delta)\)-universal if the following two constraints are satisfied: \[\| \pmb {v}\|_{2}\leq \xi ,\] \[\underset{\pmb{x}\sim\mu}{\mathbb{P}}\left(\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\right)\geq 1 - \delta .\] This perturbation image \(\pmb{v}\) is coined "universal", as it represents a fixed image-agnostic perturbation that causes a label change for a large fraction of images sampled from the data distribution \(\mu\). In Moosavi-Dezfooli et al. (2017), state-of-the-art classifiers have been shown to be surprisingly vulnerable to this simple perturbation regime. It should be noted that universal perturbations are different from adversarial perturbations Szegedy et al. (2014); Biggio et al. (2013), which are datapoint-specific perturbations designed to fool a specific image.
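To make the definition above concrete, the following is a minimal sketch of how one might empirically check \((\xi ,\delta)\)-universality of a candidate perturbation on a sample of images. The vectorized `predict` function and the array shapes are illustrative assumptions, not part of the paper:

```python
import numpy as np

def is_universal(v, images, predict, xi, delta):
    """Empirically check whether perturbation v is (xi, delta)-universal.

    v:       candidate perturbation, shape (d,)
    images:  sample of natural images drawn from mu, shape (n, d)
    predict: hypothetical function mapping a batch of images to label indices
    """
    if np.linalg.norm(v) > xi:                    # constraint ||v||_2 <= xi
        return False
    fooled = predict(images + v) != predict(images)
    return float(fooled.mean()) >= 1 - delta      # fooling rate >= 1 - delta
```

Note that the estimate is taken over a finite sample of \(\mu\), so it only approximates the probability in the definition.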
An adversarial perturbation is a solution to the following optimization problem \[\pmb {r}(\pmb {x}) = \arg \min_{\pmb {r}\in \mathbb{R}^{d}}\| \pmb {r}\|_{2}\mathrm{~subject~to~}\hat{k} (\pmb {x} + \pmb {r})\neq \hat{k} (\pmb {x}), \quad (1)\] which corresponds to the smallest additive perturbation that is necessary to change the label of the classifier \(\hat{k}\) for \(\pmb{x}\). From a geometric perspective, \(\pmb {r}(\pmb {x})\) quantifies the distance from \(\pmb{x}\) to the decision boundary (see Fig. 1a). In addition, due to the optimality conditions of Eq. (1), \(\pmb {r}(\pmb {x})\) is orthogonal to the decision boundary at \(\pmb {x} + \pmb {r}(\pmb {x})\), as illustrated in Fig. 1a. In the remainder of the paper, we analyze the robustness of classifiers to universal noise, with respect to the geometry of the decision boundary of the classifier \(f\). Formally, the pairwise decision boundary, <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 1: (a) Local geometry of the decision boundary. </center> ![](images/2_1.jpg) <center>(b) Universal direction \(\pmb{v}\) of a linear binary classifier. </center> when restricting the classifier to classes \(i\) and \(j\), is defined by \(\mathcal{B} = \{\mathbf{z} \in \mathbb{R}^{d}: f_{i}(\mathbf{z}) - f_{j}(\mathbf{z}) = 0\}\) (we omit the dependence of \(\mathcal{B}\) on \(i, j\) for simplicity). The decision boundary of the classifier hence corresponds to points in the input space that are equally likely to be classified as \(i\) or \(j\). In the following sections, we introduce two models on the decision boundary, and quantify in each case the robustness of such classifiers to universal perturbations. We then show that the locally curved model better explains the vulnerability of deep networks to such perturbations.

## 3 ROBUSTNESS OF CLASSIFIERS WITH FLAT DECISION BOUNDARIES

We begin our analysis by assuming a locally flat decision boundary model, and analyze the robustness of classifiers to universal perturbations under this model. We specifically study the existence of a universal direction \(\pmb{v}\), such that \[\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\mathrm{~or~}\hat{k} (\pmb {x} - \pmb {v})\neq \hat{k} (\pmb {x}), \quad (2)\] where \(\pmb{v}\) is a vector of sufficiently small norm. It should be noted that a universal direction (as opposed to a universal vector) is sought in Eq. (2), as this definition is better suited to the analysis of classifiers with locally flat decision boundaries. For example, while a binary linear classifier has a universal direction that fools all the data points, only half of the data points can be fooled with a universal vector (provided the classes are balanced) (see Fig. 1b). We therefore consider this slightly modified definition in the remainder of this section. We start our analysis by introducing our local decision boundary model. For \(\pmb {x}\in \mathbb{R}^{d}\), note that \(\pmb {x} + \pmb {r}(\pmb {x})\) belongs to the decision boundary and \(\pmb {r}(\pmb {x})\) is normal to the decision boundary at \(\pmb {x} + \pmb {r}(\pmb {x})\) (see Fig. 1a). A linear approximation of the decision boundary of the classifier at \(\pmb {x} + \pmb {r}(\pmb {x})\) is therefore given by \(\pmb {x} + \{\pmb {v}:\pmb {r}(\pmb {x})^{T}\pmb {v} = \| \pmb {r}(\pmb {x})\|_{2}^{2}\}\). Under this approximation, the vector \(\pmb {r}(\pmb {x})\) hence captures the local geometry of the decision boundary in the vicinity of datapoint \(\pmb{x}\).
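For intuition about Eq. (1), the minimal perturbation \(\pmb{r}(\pmb{x})\) has a closed form in the simplest case of a binary linear classifier \(f(\pmb{x}) = \pmb{w}^{T}\pmb{x} + b\): it is the orthogonal projection of \(\pmb{x}\) onto the hyperplane \(\{\pmb{z}: \pmb{w}^{T}\pmb{z} + b = 0\}\), which also illustrates the orthogonality property of Fig. 1a. A minimal sketch (hypothetical names, NumPy):

```python
import numpy as np

def minimal_perturbation_linear(x, w, b):
    """Closed-form solution of Eq. (1) for a binary linear classifier
    f(x) = w^T x + b: the smallest l2 perturbation moving x (orthogonally)
    onto the decision hyperplane {z : w^T z + b = 0}."""
    return -(w @ x + b) / (w @ w) * w

# usage: r(x) is normal to the boundary, and ||r(x)||_2 = |f(x)| / ||w||_2
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.3
x = rng.normal(size=5)
r = minimal_perturbation_linear(x, w, b)
assert np.isclose(w @ (x + r) + b, 0.0)
assert np.isclose(np.linalg.norm(r), abs(w @ x + b) / np.linalg.norm(w))
```

(Strictly speaking, an arbitrarily small overshoot beyond the hyperplane is needed to actually change the predicted label.)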
We assume a local decision boundary model in the vicinity of datapoints \(\pmb {x}\sim \mu\), where the local classification region of \(\pmb{x}\) occurs in the half-space \(\pmb {r}(\pmb {x})^{T}\pmb {v}\leq \| \pmb {r}(\pmb {x})\|_{2}^{2}\). Equivalently, we assume that outside of this half-space, the classifier outputs a different label than \(\hat{k} (\pmb {x})\). However, since we are analyzing the robustness to universal directions (and not vectors), we consider the following condition, given by \[\mathcal{L}_{s}(\pmb {x},\rho):\forall \pmb {v}\in B(\rho),\;|\pmb {r}(\pmb {x})^{T}\pmb {v}|\geq \| \pmb {r}(\pmb {x})\|_{2}^{2}\Longrightarrow \hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\mathrm{~or~}\hat{k} (\pmb {x} - \pmb {v})\neq \hat{k} (\pmb {x}), \quad (3)\] where \(B(\rho)\) is a ball of radius \(\rho\) centered at 0. An illustration of this decision boundary model is provided in Fig. 2a. It should be noted that linear classifiers satisfy this decision boundary model, as their decision boundaries are globally flat. This local decision boundary model is however more general, as we do not assume that the decision boundary is linear, but rather that the classification region in the vicinity of \(\pmb{x}\) is included in \(\pmb {x} + \{\pmb {v}:|\pmb {r}(\pmb {x})^{T}\pmb {v}|\leq \| \pmb {r}(\pmb {x})\|_{2}^{2}\}\). Moreover, it should be noted that the model being assumed here is on the decision boundary of the classifier, and not an assumption on the classification function \(f\).<sup>2</sup> Fig. 2a provides an example of a nonlinear decision boundary that satisfies this model. <--- Page Split ---> In all the theoretical results of this paper, we assume that \(\| \boldsymbol {r}(\boldsymbol {x})\|_{2} = 1\), for all \(\boldsymbol {x}\sim \mu\), for simplicity of the exposition. The results can be extended in a straightforward way to the case where \(\| \boldsymbol {r}(\boldsymbol {x})\|_{2}\) takes different values for points sampled from \(\mu\). The following result shows that classifiers following the locally flat decision boundary model are not robust to small universal perturbations, provided the normals to the decision boundary (in the vicinity of datapoints) approximately belong to a low dimensional subspace of dimension \(m\ll d\). Theorem 1. Let \(\xi \geq 0,\delta \geq 0\). Let \(S\) be an \(m\)-dimensional subspace such that \(\| P_{S}\boldsymbol {r}(\boldsymbol {x})\|_{2}\geq 1 - \xi\) for almost all \(\boldsymbol {x}\sim \mu\), where \(P_{S}\) is the projection operator on the subspace. Assume moreover that \(\mathcal{L}_{s}\left(\boldsymbol {x},\rho\right)\) holds for almost all \(\boldsymbol {x}\sim \mu\), with \(\rho = \frac{\sqrt{em}}{\delta(1 - \xi)}\). Then, there exists a universal noise vector \(\pmb{v}\), such that \(\| \pmb {v}\|_{2}\leq \rho\) and \(\underset {\pmb{x}\sim \mu}{\mathbb{P}}\left(\hat{k} (\boldsymbol {x} + \boldsymbol {v})\neq \hat{k} (\boldsymbol {x})\text{ or }\hat{k} (\boldsymbol {x} - \boldsymbol {v})\neq \hat{k} (\boldsymbol {x})\right)\geq 1 - \delta .\) The proof can be found in the supplementary material, and relies on the construction of a universal perturbation through random sampling from \(S\). The vulnerability of classifiers to universal perturbations can be attributed to the shared geometric properties of the classifier's decision boundary in the vicinity of different data points.
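The random-sampling construction used in the proof can be sketched as follows: estimate the subspace \(S\) capturing most of the normals \(\pmb{r}(\pmb{x}_i)\) at a set of training points via an SVD, then draw a random direction in \(S\) and scale it to norm \(\rho\). This is a minimal illustration under the paper's assumptions, not the authors' released code; all names are hypothetical:

```python
import numpy as np

def sample_universal_direction(normals, m, rho, seed=0):
    """Candidate universal perturbation as in the proof of Theorem 1:
    sample a random direction of norm rho from an m-dimensional subspace
    S fitted to the boundary normals.

    normals: array (n, d) whose rows are the (unit) normal vectors r(x_i)
    """
    rng = np.random.default_rng(seed)
    # the top-m right singular vectors give the m-dim subspace that best
    # captures the normals (the subspace called S_f in Section 5)
    _, _, vt = np.linalg.svd(normals, full_matrices=False)
    basis = vt[:m].T                      # (d, m), orthonormal columns
    v = basis @ rng.normal(size=m)        # random direction inside S
    return rho * v / np.linalg.norm(v)
```

The experiments of Section 5 reuse the same recipe with a different subspace: the span \(\mathcal{S}_{c}\) of the top eigenvectors of the averaged tangent-projected Hessian \(\overline{H}\), which takes the curvature into account.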
In the above theorem, this shared geometric property across different data points is expressed in terms of the normal vectors \(\boldsymbol {r}(\boldsymbol {x})\). The main assumption of the above theorem is specifically that normal vectors \(\boldsymbol {r}(\boldsymbol {x})\) to the decision boundary in the neighborhood of data points approximately live in a subspace \(S\) of low dimension \(m < d\). Under this assumption, the above result shows the existence of universal perturbations of \(\ell_{2}\) norm of order \(\sqrt{m}\). When \(m\ll d\), Theorem 1 hence shows that very small (compared to random noise, which scales as \(\sqrt{d}\) Fawzi et al. (2016)) universal perturbations misclassifying most data points can be found. Remark 1. Theorem 1 can be readily applied to assess the robustness of multiclass linear classifiers to universal perturbations. In fact, when \(f(\boldsymbol {x}) = W^{T}\boldsymbol {x}\), with \(W = [\boldsymbol{w}_{1},\ldots ,\boldsymbol{w}_{L}]\), the normal vectors are equal to \(\boldsymbol {w}_{i} - \boldsymbol {w}_{j}\), for \(1\leq i,j\leq L,i\neq j\). These normal vectors exactly span a subspace of dimension \(L - 1\). Hence, by applying the result with \(\xi = 0\) and \(m = L - 1\), we obtain that linear classifiers are vulnerable to universal noise, with magnitude proportional to \(\sqrt{L - 1}\). In typical problems, we have \(L\ll d\), which leads to very small universal directions. Remark 2. Theorem 1 provides a partial explanation of the vulnerability of deep networks, provided a locally flat decision boundary is assumed. Evidence in favor of this assumption was given through visualization of randomly chosen cross-sections in Warde-Farley et al. (2016); Fawzi et al. (2016). In addition, normal vectors to the decision boundary of deep nets (near data points) have been observed to approximately span a subspace \(S\) of sufficiently small dimension in Moosavi-Dezfooli et al. (2017). However, unlike linear classifiers, the dimensionality \(m\) of this subspace is typically larger than the number of classes \(L\), leading to large upper bounds on the norm of the universal noise under the flat decision boundary model. This simplified model of the decision boundary hence fails to exhaustively explain the large vulnerability of state-of-the-art deep neural networks to universal perturbations. We show in the next section that the second-order information of the decision boundary contains crucial information (curvature) that captures the high vulnerability to universal perturbations.

## 4 ROBUSTNESS OF CLASSIFIERS WITH CURVED DECISION BOUNDARIES

We now consider a model of the decision boundary in the vicinity of the data points that allows us to leverage the curvature of nonlinear classifiers. Under this decision boundary model, we study the existence of universal perturbations satisfying \(\hat{k} (\boldsymbol {x} + \boldsymbol {v})\neq \hat{k} (\boldsymbol {x})\) for most \(\boldsymbol {x}\sim \mu\).<sup>3</sup> We start by establishing an informal link between the curvature of the decision boundary and robustness to universal perturbations, which will be made precise later in this section. As illustrated in Fig. 3, the norm of the perturbation required to change the label of the classifier along a specific direction \(\boldsymbol{v}\) is smaller if the decision boundary is positively curved than if the decision boundary is flat (or negatively curved). It therefore appears from Fig.
3 that the existence of universal perturbations (when the decision boundary is curved) can be attributed to the existence of common directions where <--- Page Split ---> ![](images/4_0.jpg) <center>(a) Flat decision boundary model \(\mathcal{L}_{s}(\boldsymbol {x},\rho)\). </center> ![](images/4_1.jpg) <center>(b) Curved decision boundary model \(\mathcal{Q}(\boldsymbol {x},\rho)\). </center> Figure 2: Illustration of the decision boundary models considered in this paper. (a): For the flat decision boundary model, the set \(\{\boldsymbol {v}:|\boldsymbol {r}(\boldsymbol {x})^{T}\boldsymbol {v}|\leq \| \boldsymbol {r}(\boldsymbol {x})\|_{2}^{2}\}\) is illustrated (stripe). Note that for \(\boldsymbol{v}\) taken outside the stripe (i.e., in the grayed area), we have \(\hat{k} (\boldsymbol {x} + \boldsymbol {v})\neq \hat{k} (\boldsymbol {x})\) or \(\hat{k} (\boldsymbol {x} - \boldsymbol {v})\neq \hat{k} (\boldsymbol {x})\) in the \(\rho\) neighborhood. (b): For the curved decision boundary model, any vector \(\boldsymbol{v}\) chosen in the grayed area is classified differently from \(\hat{k} (\boldsymbol {x})\). ![](images/4_2.jpg) <center>Figure 3: Link between robustness and curvature of the decision boundary. When the decision boundary is positively curved (left), small universal perturbations are more likely to fool the classifier. </center> the decision boundary is positively curved for many data points. In the remainder of this section, we formally prove the existence of universal perturbations when there exist common directions along which the decision boundary is positively curved. Recalling the definitions of Sec. 2, a quadratic approximation of the decision boundary at \(\boldsymbol {z} = \boldsymbol {x} + \boldsymbol {r}(\boldsymbol {x})\) gives \(\boldsymbol {x} + \{\boldsymbol {v}:(\boldsymbol {v} - \boldsymbol {r}(\boldsymbol {x}))^{T}H_{\boldsymbol{z}}(\boldsymbol {v} - \boldsymbol {r}(\boldsymbol {x})) + \alpha_{x}\boldsymbol {r}(\boldsymbol {x})^{T}(\boldsymbol {v} - \boldsymbol {r}(\boldsymbol {x})) = 0\}\), where \(H_{\boldsymbol{z}}\) denotes the Hessian of \(F\) at \(\boldsymbol{z}\), and \(\alpha_{x} = \frac{\| \nabla F(\boldsymbol {z})\|_{2}}{\| \boldsymbol {r}(\boldsymbol {x})\|_{2}}\), with \(F = f_{i} - f_{j}\). In this model, the second-order information (encoded in the Hessian matrix \(H_{\boldsymbol{z}}\)) captures the curvature of the decision boundary. We assume a local decision boundary model in the vicinity of datapoints \(\boldsymbol {x}\sim \mu\), where the local classification region of \(\boldsymbol {x}\) is bounded by a quadratic form. Formally, we assume that there exists \(\rho >0\) where the following condition holds for almost all \(\boldsymbol {x}\sim \mu\): \[\mathcal{Q}(\boldsymbol {x},\rho):\forall \boldsymbol {v}\in B(\rho),\;(\boldsymbol {v} - \boldsymbol {r}(\boldsymbol {x}))^{T}H_{\boldsymbol{z}}(\boldsymbol {v} - \boldsymbol {r}(\boldsymbol {x})) + \alpha_{x}\boldsymbol {r}(\boldsymbol {x})^{T}(\boldsymbol {v} - \boldsymbol {r}(\boldsymbol {x}))\geq 0\Longrightarrow \hat{k} (\boldsymbol {x} + \boldsymbol {v})\neq \hat{k} (\boldsymbol {x}).\] An illustration of this quadratic decision boundary model is shown in Fig. 2b. The following result shows the existence of universal perturbations, provided a subspace \(\mathcal{S}\) exists where the decision boundary has positive curvature along most directions of \(\mathcal{S}\): Theorem 2. Let \(\kappa >0,\delta >0\) and \(m\in \mathbb{N}\).
Assume that the quadratic decision boundary model \(\mathcal{Q}\left(\boldsymbol {x},\rho\right)\) holds for almost all \(\boldsymbol {x}\sim \mu\), with \(\rho = \sqrt{\frac{2\log(2 / \delta)}{m}}\,\kappa^{- 1} + \kappa^{- 1 / 2}\). Let \(\mathcal{S}\) be an \(m\)-dimensional subspace such that \[\underset {\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\forall \pmb {u}\in \mathbb{R}^{2},\;\alpha_{x}^{-1}\pmb {u}^{T}H_{\mathbf{z}}^{\pmb{r}(\pmb{x}),\pmb{v}}\pmb {u}\geq \kappa \| \pmb {u}\|_{2}^{2}\right)\geq 1 - \beta \text{ for almost all } \boldsymbol {x}\sim \mu ,\] where \(H_{\mathbf{z}}^{\pmb{r}(\pmb{x}),\pmb{v}} = \Pi^{T}H_{\mathbf{z}}\Pi\) with \(\Pi\) an orthonormal basis of \(\operatorname {span}(\boldsymbol {r}(\boldsymbol {x}),\boldsymbol {v})\), and \(\mathbb{S}\) denotes the unit sphere in \(\mathcal{S}\). Then, there is a universal perturbation vector \(\boldsymbol{v}\) such that \(\| \boldsymbol {v}\|_{2}\leq \rho\) and \(\underset {\boldsymbol {x}\sim \mu}{\mathbb{P}}\left(\hat{k} (\boldsymbol {x} + \boldsymbol {v})\neq \hat{k} (\boldsymbol {x})\right)\geq 1 - \delta - \beta\). The above theorem quantifies the robustness of classifiers to universal perturbations in terms of the curvature \(\kappa\) of the decision boundary, along normal sections spanned by \(\boldsymbol {r}(\boldsymbol {x})\) and vectors \(\boldsymbol {v}\in \mathcal{S}\) (see <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 4: Left: Normal section \(\mathcal{U}\) of the decision boundary, along the plane spanned by the normal vector \(\pmb {r}(\pmb {x})\) and \(\pmb{v}\). Right: Geometric interpretation of the assumption in Theorem 2. Theorem 2 assumes that the decision boundary along normal sections \((\pmb {r}(\pmb {x}),\pmb {v})\) is locally (in a \(\rho\) neighborhood) located inside a disk of radius \(1 / \kappa\). Note the difference with respect to traditional notions of curvature, which express the curvature in terms of the osculating circle at \(\pmb {x} + \pmb {r}(\pmb {x})\). The assumption we use here is more "global". </center> Fig. 4 (left) for an illustration of a normal section). Fig. 4 (right) provides a geometric illustration of the assumption of Theorem 2. Provided a subspace \(\mathcal{S}\) exists where the curvature of the decision boundary in the vicinity of datapoints \(\boldsymbol{x}\) is positive (along directions in \(\mathcal{S}\)), Theorem 2 shows that universal perturbations can be found with a norm of approximately \(\frac{\kappa^{-1}}{\sqrt{m}} + \kappa^{- 1 / 2}\). Hence, when the curvature \(\kappa\) is sufficiently large, the existence of small universal perturbations is guaranteed by Theorem 2.<sup>4</sup> Remark 1. We stress that Theorem 2 does not assume that the decision boundary is curved in the direction of all vectors in \(\mathbb{R}^{d}\); we rather assume the existence of a subspace \(\mathcal{S}\) where the decision boundary is positively curved (in the vicinity of natural images \(\boldsymbol{x}\)) along most directions in \(\mathcal{S}\). Moreover, it should be noted that, unlike Theorem 1, where the normals to the decision boundary are assumed to belong to a low dimensional subspace, no assumption is imposed here on the normal vectors. Instead, we assume the existence of a subspace \(\mathcal{S}\) leading to positive curvature, for points on the decision boundary in the vicinity of natural images. Remark 2. Theorem 2 does not only predict the vulnerability of classifiers, but it also provides a constructive way to find such universal perturbations.
In fact, random vectors sampled from the subspace \(\mathcal{S}\) are predicted to be universal perturbations (see supp. material for more details). In Section 5, we will show that this new construction works remarkably well for deep networks, as predicted by our analysis.

## 5 EXPERIMENTAL RESULTS: UNIVERSAL PERTURBATIONS FOR DEEP NETS

We first evaluate the validity of the assumption of Theorem 2 for deep neural networks, that is, the existence of a low dimensional subspace where the decision boundary is positively curved along most directions sampled from the subspace. To construct the subspace, we find the directions that lead to large positive curvature in the vicinity of a given set of training points \(\{\boldsymbol{x}_{1},\ldots ,\boldsymbol{x}_{n}\}\). We recall that the principal directions \(\boldsymbol{v}_{1},\ldots ,\boldsymbol{v}_{d - 1}\) at a point \(\boldsymbol{z}\) on the decision boundary correspond to the eigenvectors (with nonzero eigenvalue) of the matrix \(H_{\boldsymbol{z}}^{t}\), given by \(H_{\boldsymbol{z}}^{t} = P H_{\boldsymbol{z}}P\), where \(P\) denotes the projection operator on the tangent to the decision boundary at \(\boldsymbol{z}\), and \(H_{\boldsymbol{z}}\) denotes the Hessian of the decision boundary function evaluated at \(\boldsymbol{z}\) Lee (2009). Common directions with large average curvature at \(\boldsymbol{z}_{i} = \boldsymbol{x}_{i} + \boldsymbol {r}(\boldsymbol{x}_{i})\) (where \(\boldsymbol {r}(\boldsymbol{x}_{i})\) is the minimal perturbation defined in Eq. (1)) hence correspond to the eigenvectors of the average Hessian matrix \(\overline{H} = n^{- 1}\sum_{i = 1}^{n}H_{\boldsymbol{z}_{i}}^{t}\). We therefore set our subspace, \(\mathcal{S}_{c}\), to be the span of the first \(m\) eigenvectors of \(\overline{H}\), and show that the subspace constructed in this way satisfies the assumption of Theorem 2. To determine whether the decision boundary is positively curved in most directions of \(\mathcal{S}_{c}\) (for unseen datapoints from the validation set), we compute the average curvature across random directions in \(\mathcal{S}_{c}\) for points on the decision boundary, <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 5: Visualization of normal cross-sections of the decision boundary, for CIFAR-10 (Left: LeNet, Right: ResNet-18). Top: Normal cross-sections along \((r(\boldsymbol {x}),\boldsymbol {v})\), where \(\boldsymbol{v}\) is the universal perturbation computed using the algorithm in Moosavi-Dezfooli et al. (2017). Bottom: Normal cross-sections along \((\boldsymbol {r}(\boldsymbol {x}),\boldsymbol {v})\), where \(\boldsymbol{v}\) is a random vector uniformly sampled from the unit sphere in \(\mathbb{R}^{d}\). </center> i.e., \(\mathbf{z} = \mathbf{x} + \mathbf{r}(\mathbf{x})\); the average curvature is formally given by \[\bar{\kappa}_{\mathcal{S}}(\boldsymbol {x}) = \underset {\boldsymbol {v}\sim \mathbb{S}}{\mathbb{E}}\left(\frac{(P\boldsymbol{v})^{T}H_{\boldsymbol{z}}(P\boldsymbol{v})}{\|P\boldsymbol{v}\|_{2}^{2}}\right),\] where \(\mathbb{S}\) denotes the unit sphere in \(\mathcal{S}_{c}\). In Fig. 7 (a), the average of \(\bar{\kappa}_{\mathcal{S}}(\boldsymbol {x})\) across points sampled from the validation set is shown (as well as the standard deviation) as a function of the subspace dimension \(m\), for a LeNet architecture LeCun et al.
(1998) trained on the CIFAR-10 dataset.<sup>5</sup> Observe that when the dimension of the subspace is sufficiently small, the average curvature is strongly oriented towards positive curvature, which empirically shows the existence of this subspace \(\mathcal{S}_{c}\) where the decision boundary is positively curved for most data points in the validation set. This empirical evidence hence suggests that the assumption of Theorem 2 is satisfied, and that universal perturbations represent random vectors sampled from this subspace \(\mathcal{S}_{c}\). To show this strong link between the vulnerability to universal perturbations and the positive curvature of the decision boundary, we now visualize normal sections of the decision boundary of deep networks trained on ImageNet (CaffeNet (Jia et al., 2014) and ResNet-152 (He et al., 2016)) and CIFAR-10 (LeNet (LeCun et al., 1998) and ResNet-18 (He et al., 2016)) in the direction of their respective universal perturbations.<sup>6</sup> Specifically, we visualize normal sections of the decision boundary in the plane \((\boldsymbol {r}(\boldsymbol {x}),\boldsymbol {v})\), where \(\boldsymbol{v}\) is a universal perturbation computed using the universal perturbations algorithm of Moosavi-Dezfooli et al. (2017). The visualizations are shown in Fig. 5 and 6. Interestingly, the universal perturbations belong to highly positively curved directions of the decision boundary, despite the absence of any geometric constraint in the algorithm to compute universal perturbations. To fool most data points, universal perturbations hence naturally seek common directions of the embedding space, where the decision boundary is positively curved. These directions lead to very small universal perturbations, as highlighted by our analysis in Theorem 2. It should be noted that such highly curved directions of the decision boundary are rare, as random normal sections are comparatively flat (see Fig. 5 and 6, second row). This is due to the fact that most principal curvatures are approximately zero for points sampled on the decision boundary in the vicinity of data points. Recall that Theorem 2 suggests a novel procedure to generate universal perturbations; in fact, random perturbations from \(\mathcal{S}_{c}\) are predicted to be universal perturbations. To assess the validity of this result, Fig. 7 (b) illustrates the fooling rate of universal perturbations (for the LeNet network on CIFAR-10) sampled uniformly at random from the unit sphere in subspace \(\mathcal{S}_{c}\), and scaled to have a fixed norm (1/5th of the norm of the random noise required to fool most data points). We assess the quality of such perturbations by further indicating in Fig. 7 (b) the fooling rate of the universal <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 6: Visualization of normal cross-sections of the decision boundary, for ImageNet (Left: ResNet-152, and Right: CaffeNet). Top: Normal cross-sections along \((r(x),v)\), where \(v\) is the universal perturbation computed using the algorithm in Moosavi-Dezfooli et al. (2017). Bottom: Normal cross-sections along \((r(x),v)\), where \(v\) is a random vector uniformly sampled from the unit sphere in \(\mathbb{R}^d\). </center> ![](images/7_1.jpg) <center>Figure 7: (a) Average curvature \(\bar{\kappa}_{S}\), averaged over 1000 validation datapoints, as a function of the subspace dimension.
(b) Fooling rate of universal perturbations (on an unseen validation set) computed using random perturbations in 1) \(S_{c}\): the subspace of positively curved directions, and 2) \(S_{f}\): the subspace collecting normal vectors \(r(x)\). The dotted line corresponds to the fooling rate using the algorithm in Moosavi-Dezfooli et al. (2017). \(S_{f}\) is spanned by the largest singular vectors of the matrix gathering the normal vectors \(r(x)\) in the training set (similar to the approach in Moosavi-Dezfooli et al. (2017)). </center> perturbation computed using the original algorithm in Moosavi-Dezfooli et al. (2017). Observe that random perturbations sampled from \(S_{c}\) (with \(m\) small) provide very powerful universal perturbations, fooling nearly \(85\%\) of data points from the validation set. This rate is comparable to that of the algorithm in Moosavi-Dezfooli et al. (2017), while using far fewer training points (only \(n = 100\), while at least \(2,000\) training points are required by Moosavi-Dezfooli et al. (2017)). The very large fooling rates achieved with such a simple procedure (random generation in \(S_{c}\)) confirm that the curvature is the governing factor that controls the robustness of classifiers to universal perturbations, as analyzed in Section 4. In fact, such high fooling rates cannot be achieved by only using the model of Section 3 (neglecting the curvature information), as illustrated in Fig. 7 (b). Specifically, by generating random perturbations from the subspace \(S_{f}\) collecting normal vectors \(r(x)\) (which is the procedure suggested by Theorem 1 to compute universal perturbations, without taking into account second-order information), the best universal perturbation achieves a fooling rate of \(65\%\), which is significantly worse than when the curvature is used to craft the perturbation. We further perform in Appendix C the same experiment on other architectures (VGG-16 and ResNet-18) to verify the consistency of the results across networks. It can be seen that, similarly to Fig. 7 (b), the proposed approach of generating universal perturbations through random sampling from the subspace \(S_{c}\) achieves high fooling rates (comparable to the algorithm in Moosavi-Dezfooli et al. (2017), and significantly higher than by using \(S_{f}\)). <--- Page Split ---> Fig. 8 illustrates a universal perturbation for ImageNet, corresponding to the maximally curved shared direction (in other words, the eigenvector associated with the maximum eigenvalue of \(\overline{H}\), computed using \(n = 200\) random samples).<sup>7</sup> The CaffeNet architecture is used, and Fig. 8 also shows sample perturbed images that fool the classifier. Just like the universal perturbation in Moosavi-Dezfooli et al. (2017), the perturbations are not very perceptible, and lead to misclassification of most unseen images in the validation set. For this example on ImageNet, the fooling rate of this perturbation is \(67.2\%\) on the validation set. This is significantly larger than the fooling rate of the perturbation computed using \(\mathcal{S}_{f}\) only (38%), but lower than that of the original algorithm (85.4%) proposed in (Moosavi-Dezfooli et al., 2017). We hypothesize that this gap for ImageNet is partially due to the small number of samples, a choice made due to computational restrictions. ![](images/8_0.jpg) <center>Figure 8: Left column: Universal perturbation computed through random sampling from \(\mathcal{S}_{c}\).
Second column to end: All images are (incorrectly) classified as "bubble". The CaffeNet architecture is used. Similarly to Moosavi-Dezfooli et al. (2017), the perturbation is constrained to have \(\ell_{2}\) norm of 2,000. </center> The existence of this subspace \(\mathcal{S}_{c}\) (and the fact that universal perturbations are random vectors in \(\mathcal{S}_{c}\)) further explains the high diversity of universal perturbations. Fig. 9 illustrates different universal perturbations for CIFAR-10 computed by sampling random directions from \(\mathcal{S}_{c}\). The diversity of such perturbations justifies why re-training with perturbed images (as in Moosavi-Dezfooli et al. (2017)) does not significantly improve the robustness of such networks, as other directions in \(\mathcal{S}_{c}\) can still lead to universal perturbations, even if the network becomes robust to some directions. Finally, it is interesting to note that this subspace \(\mathcal{S}_{c}\) is likely to be shared not only across datapoints, but also across different networks (to some extent). To support this claim, Fig. 10 shows the cosine of the principal angles between subspaces \(\mathcal{S}_{c}^{\mathrm{LeNet}}\) and \(\mathcal{S}_{c}^{\mathrm{NiN}}\), computed for LeNet and NiN Lin et al. (2014) models. Note that the first principal angles between the two subspaces are very small, leading to shared directions between the two subspaces. A similar observation is made for networks trained on ImageNet in the supp. material. The sharing of \(\mathcal{S}_{c}\) across different networks explains the transferability of universal perturbations observed in Moosavi-Dezfooli et al. (2017).

## 6 DISCUSSION AND RELATED WORK

In this paper, we analyzed the robustness of classifiers to universal perturbations under two decision boundary models: locally flat and locally curved. We showed that the former are not robust to universal directions, provided the normal vectors in the vicinity of natural images are correlated. While this model explains the vulnerability of, e.g., linear classifiers, it discards the curvature information, which is essential to fully analyze the robustness of deep nets to universal perturbations. The latter, classifiers with curved decision boundaries, are instead not robust to universal perturbations, provided the existence of a shared subspace along which the decision boundary is positively curved (for most directions). ![](images/8_1.jpg) <center>Figure 9: Diversity of universal perturbations randomly sampled from the subspace \(\mathcal{S}_{c}\). The normalized inner product between two perturbations is less than 0.1. </center> <--- Page Split ---> We empirically verify this assumption for deep nets. Our analysis hence explains the existence of universal perturbations, and further provides a purely geometric approach for computing such perturbations, in addition to explaining properties of perturbations, such as their diversity. Other authors have focused on the analysis of the robustness properties of SVM classifiers (e.g., Xu et al. (2009)) and new approaches for constructing robust classifiers (based on robust optimization) Caramanis et al. (2012); Lanckriet et al. (2003). More recently, some have assessed the robustness of deep neural networks to different regimes such as adversarial perturbations Szegedy et al. (2014); Biggio et al. (2013), random noise Fawzi et al. (2016), and occlusions Sharif et al. (2016); Evtimov et al. (2017).
The robustness of classifiers to adversarial perturbations has been specifically studied in Szegedy et al. (2014); Goodfellow et al. (2015); Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017); Baluja & Fischer (2017), followed by works to improve the robustness Madry et al. (2017); Gu & Rigazio (2014); Papernot et al. (2015); Cisse et al. (2017), and attempts at explaining the phenomenon in Goodfellow et al. (2015); Fawzi et al. (2015); Tabacof & Valle (2016); Tanay & Griffin (2016). This paper however differs from these previous works, as we study universal (image-agnostic) perturbations that can fool every image in a dataset, as opposed to image-specific adversarial perturbations that are not universal across datapoints (as shown in Moosavi-Dezfooli et al. (2017)). Moreover, explanations that hinge on the output of a deep network being well approximated by a linear function of the inputs \(f(\pmb {x}) = W\pmb {x} + b\) are inconclusive, as the assumption is violated even for relatively small networks. We show here that it is precisely the large curvature of the decision boundary that causes vulnerability to universal perturbations. Our bounds indeed show an increasing vulnerability with respect to the curvature of the decision boundary, and represent, to our knowledge, the first quantitative result showing tight links between robustness and curvature. In addition, we show empirically that the first-order approximation of the decision boundary is not sufficient to explain the high vulnerability to universal perturbations (Fig. 7 (b)). Recent works have further proposed new methods for computing universal perturbations Mopuri et al. (2017); Khrulkov & Oseledets (2017); instead, we focus here on an analysis of the phenomenon of vulnerability to universal perturbations, while also providing a constructive approach to compute universal perturbations leveraging our curvature analysis. Finally, it should be noted that recent works have studied properties of deep networks from a geometric perspective (such as their expressivity Poole et al. (2016); Montufar et al. (2014)); our focus is different in this paper, as we relate robustness to the geometry of the decision boundary. Our analysis hence shows that to construct classifiers that are robust to universal perturbations, it is key to suppress this subspace of shared positively curved directions, which can possibly be done through regularization of the objective function. This will be the subject of future work.

## A PROOF OF THEOREM 1

We start by recalling a result from Fawzi et al. (2016), which is based on Dasgupta & Gupta (2003). Lemma 1. Let \(\mathbf{v}\) be a random vector uniformly drawn from the unit sphere \(\mathbb{S}^{d - 1}\), and \(\mathbf{P}_{m}\) be the projection matrix onto the first \(m\) coordinates. Then, \[\mathbb{P}\left(\beta_{1}(\delta ,m)\frac{m}{d}\leq \| \mathbf{P}_{m}\pmb {v}\|_{2}^{2}\leq \beta_{2}(\delta ,m)\frac{m}{d}\right)\geq 1 - 2\delta , \quad (4)\] with \(\beta_{1}(\delta ,m) = \max \left((1 / e)\delta^{2 / m},\,1 - \sqrt{2(1 - \delta^{2 / m})}\right)\), and \(\beta_{2}(\delta ,m) = 1 + 2\sqrt{\frac{\ln(1 / \delta)}{m}} +\frac{2\ln(1 / \delta)}{m}\). We use the above lemma to prove our result, which we recall as follows: Theorem 1. Let \(\xi \geq 0,\delta \geq 0\). Let \(S\) be an \(m\)-dimensional subspace such that \(\| P_{S}r(\pmb {x})\|_{2}\geq 1 - \xi\) for almost all \(\pmb {x}\sim \mu\), where \(P_{S}\) is the projection operator on the subspace.
Assume moreover that \(\mathcal{L}_{s}\left(\pmb {x},\rho\right)\) holds for almost all \(\pmb {x}\sim \mu\), with \(\rho = \frac{\sqrt{em}}{\delta(1 - \xi)}\). Then, there exists a universal noise vector \(\pmb{v}\), such that \(\| \pmb {v}\|_{2}\leq \rho\) and \(\underset {\pmb{x}\sim \mu}{\mathbb{P}}\left(\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\text{ or }\hat{k} (\pmb {x} - \pmb {v})\neq \hat{k} (\pmb {x})\right)\geq 1 - \delta\). <--- Page Split ---> Proof. Define \(\mathbb{S}\) to be the unit sphere centered at 0 in the subspace \(S\). Let \(\rho = \frac{\sqrt{em}}{\delta(1 - \xi)}\), and denote by \(\rho \mathbb{S}\) the sphere scaled by \(\rho\). We have \[\quad \mathbb{E}_{\pmb{v} \sim \rho \mathbb{S}}\left(\mathbb{P}_{\pmb{x} \sim \mu}\left(\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\mathrm{~or~}\hat{k} (\pmb {x} - \pmb {v})\neq \hat{k} (\pmb {x})\right)\right)\] \[\quad = \mathbb{E}_{\pmb{x} \sim \mu}\left(\mathbb{P}_{\pmb{v} \sim \rho \mathbb{S}}\left(\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\mathrm{~or~}\hat{k} (\pmb {x} - \pmb {v})\neq \hat{k} (\pmb {x})\right)\right)\] \[\quad \geq \mathbb{E}_{\pmb{x} \sim \mu}\left(\mathbb{P}_{\pmb{v} \sim \rho \mathbb{S}}\left(\left|\pmb {r} (\pmb {x})^{T}\pmb {v}\right| - \left\|\pmb {r} (\pmb {x})\right\|_{2}^{2}\geq 0\right)\right)\] \[\quad = \mathbb{E}_{\pmb{x} \sim \mu}\left(\mathbb{P}_{\pmb{v} \sim \rho \mathbb{S}}\left(\left|\left(P_{S}\pmb {r} (\pmb {x}) + P_{S^{\perp}}\pmb {r} (\pmb {x})\right)^{T}\pmb {v}\right| - \left\|\pmb {r} (\pmb {x})\right\|_{2}^{2}\geq 0\right)\right),\] where \(P_{S^{\perp}}\) denotes the projection operator on the orthogonal complement of \(S\). Observe that \((P_{S^{\perp}}\pmb {r}(\pmb {x}))^{T}\pmb {v} = 0\). Note moreover that \(\| \pmb {r}(\pmb {x})\|_{2}^{2} = 1\) by assumption. Hence, the above expression simplifies to \[\quad \mathbb{E}_{\pmb{x} \sim \mu}\left(\mathbb{P}_{\pmb{v} \sim \rho \mathbb{S}}\left(\left|\left(P_{S}\pmb {r}(\pmb {x})\right)^{T}\pmb {v}\right| - 1\geq 0\right)\right)\] \[\quad = \mathbb{E}_{\pmb{x} \sim \mu}\left(\mathbb{P}_{\pmb{v} \sim \mathbb{S}}\left(\left|\left(P_{S}\pmb {r}(\pmb {x})\right)^{T}\pmb {v}\right|\geq \rho^{-1}\right)\right)\] \[\quad \geq \mathbb{E}_{\pmb{x} \sim \mu}\left(\mathbb{P}_{\pmb{v} \sim \mathbb{S}}\left(\left|\left(P_{S}\pmb {r}(\pmb {x})\right)^{T}\pmb {v}\right|\geq \frac{\delta}{\sqrt{em}}\right)\right),\] where we have used the assumption on the projection of \(\pmb {r}(\pmb {x})\) onto the subspace \(S\). Hence, it follows from Lemma 1 that \[\mathbb{E}_{\pmb{v} \sim \rho \mathbb{S}}\left(\mathbb{P}_{\pmb{x} \sim \mu}\left(\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\mathrm{~or~}\hat{k} (\pmb {x} - \pmb {v})\neq \hat{k} (\pmb {x})\right)\right)\geq 1 - \delta .\] Hence, there exists a universal vector \(\pmb{v}\) of \(\ell_{2}\) norm \(\rho\) such that \(\mathbb{P}_{\pmb{x} \sim \mu}\left(\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\mathrm{~or~}\hat{k} (\pmb {x} - \pmb {v})\neq \hat{k} (\pmb {x})\right)\geq 1 - \delta\). \(\square\)

## B PROOF OF THEOREM 2

Theorem 2. Let \(\kappa >0,\delta >0\) and \(m\in \mathbb{N}\). Assume that the quadratic decision boundary model \(\mathcal{Q}\left(\pmb {x},\rho\right)\) holds for almost all \(\pmb {x}\sim \mu\), with \(\rho = \sqrt{\frac{2\log(2 / \delta)}{m}}\kappa^{- 1} + \kappa^{- 1 / 2}\).
Let \(S\) be an \(m\)-dimensional subspace such that \[\mathbb{P}_{\pmb{v} \sim \mathbb{S}}\left(\forall \pmb {u}\in \mathbb{R}^{2},\;\alpha_{x}^{-1}\pmb {u}^{T}H_{\pmb{z}}^{\pmb{r}(\pmb{x}),\pmb{v}}\pmb {u}\geq \kappa \| \pmb {u}\|_{2}^{2}\right)\geq 1 - \beta \mathrm{~for~almost~all~}\pmb {x}\sim \mu ,\] where \(H_{\pmb{z}}^{\pmb{r}(\pmb{x}),\pmb{v}} = \Pi^{T}H_{\pmb{z}}\Pi\) with \(\Pi\) an orthonormal basis of \(\operatorname {span}(\pmb {r}(\pmb {x}),\pmb {v})\), and \(\mathbb{S}\) denotes the unit sphere in \(S\). Then, there is a universal perturbation vector \(\pmb{v}\) such that \(\| \pmb {v}\|_{2}\leq \rho\) and \(\mathbb{P}_{\pmb{x} \sim \mu}\left(\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\right)\geq 1 - \delta - \beta\). Proof. Let \(\pmb {x}\sim \mu\). We have \[\quad \mathbb{E}_{\pmb{v} \sim \rho \mathbb{S}}\left(\mathbb{P}_{\pmb{x} \sim \mu}\left(\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\right)\right)\] \[\quad = \mathbb{E}_{\pmb{x} \sim \mu}\left(\mathbb{P}_{\pmb{v} \sim \rho \mathbb{S}}\left(\hat{k} (\pmb {x} + \pmb {v})\neq \hat{k} (\pmb {x})\right)\right)\] \[\quad \geq \mathbb{E}_{\pmb{x} \sim \mu}\left(\mathbb{P}_{\pmb{v} \sim \rho \mathbb{S}}\left(\alpha_{x}^{-1}(\pmb {v} - \pmb {r})^{T}H_{z}(\pmb {v} - \pmb {r}) + \pmb{r}^{T}(\pmb {v} - \pmb {r})\geq 0\right)\right)\] \[\quad = \mathbb{E}_{\pmb{x} \sim \mu}\left(\mathbb{P}_{\pmb{v} \sim \mathbb{S}}\left(\alpha_{x}^{-1}(\rho\pmb {v} - \pmb {r})^{T}H_{z}(\rho\pmb {v} - \pmb {r}) + \pmb{r}^{T}(\rho\pmb {v} - \pmb {r})\geq 0\right)\right)\] <--- Page Split ---> Using the assumptions of the theorem, we have \[\begin{array}{r l} & {\underset {\mathbf{v}\sim \mathbb{S}}{\mathbb{P}}\left(\alpha_{x}^{-1}(\rho \mathbf{v} - \mathbf{r})^{T}H_{z}(\rho \mathbf{v} - \mathbf{r}) + \mathbf{r}^{T}(\rho \mathbf{v} - \mathbf{r})\leq 0\right)}\\ & {\leq \underset {\mathbf{v}\sim \mathbb{S}}{\mathbb{P}}\left(\kappa \| \rho \mathbf{v} - \mathbf{r}\|_{2}^{2} + \mathbf{r}^{T}(\rho \mathbf{v} - \mathbf{r})\leq 0\right) + \beta}\\ & {\leq \underset {\mathbf{v}\sim \mathbb{S}}{\mathbb{P}}\left(\rho (1 - 2\kappa)\mathbf{v}^{T}\mathbf{r} + \kappa \rho^{2} + (\kappa -1)\leq 0\right) + \beta}\\ & {\leq \underset {\mathbf{v}\sim \mathbb{S}}{\mathbb{P}}\left(\rho (1 - 2\kappa)\mathbf{v}^{T}\mathbf{r}\leq -\epsilon\right) + \underset {\mathbf{v}\sim \mathbb{S}}{\mathbb{P}}\left(\kappa \rho^{2} + (\kappa -1)\leq \epsilon\right) + \beta ,} \end{array} \quad (5)\] for \(\epsilon >0\). The goal is therefore to find \(\rho\) such that \(\kappa \rho^{2} + (\kappa -1)\geq \epsilon\), together with \(\mathbb{P}_{\mathbf{v}\sim \mathbb{S}}\left(\rho (1 - 2\kappa)\mathbf{v}^{T}\mathbf{r}\leq -\epsilon\right)\leq \delta\). Let \(\rho^{2} = \frac{\epsilon + 1}{\kappa}\). Using the concentration of measure on the sphere Matousek (2002), we have \[\underset {\mathbf{v}\sim \mathbb{S}}{\mathbb{P}}\left(\mathbf{v}^{T}\mathbf{r}\leq \frac{-\epsilon}{\rho(1 - 2\kappa)}\right)\leq 2\exp \left(-\frac{m\epsilon^{2}}{2\rho^{2}(1 - 2\kappa)^{2}}\right).\] To bound the above probability by \(\delta\), we set \(\epsilon = C\frac{\rho}{\sqrt{m}}\), where \(C = \sqrt{2\log(2 / \delta)}\).
We therefore choose \(\rho\) such that \[\rho^{2} = \kappa^{-1}\left(C\rho m^{-1 / 2} + 1\right).\] The solution of this second order equation gives \[\rho = \frac{C\kappa^{-1}m^{-1 / 2} + \sqrt{\kappa^{-2}C^{2}m^{-1} + 4\kappa^{-1}}}{2}\leq C\kappa^{-1}m^{-1 / 2} + \kappa^{-1 / 2}.\] Hence, for this choice of \(\rho\), we have by construction \[\underset {\mathbf{v}\sim \mathbb{S}}{\mathbb{P}}\left(\alpha_{x}^{-1}(\rho \mathbf{v} - \mathbf{r})^{T}H_{z}(\rho \mathbf{v} - \mathbf{r}) + \mathbf{r}^{T}(\rho \mathbf{v} - \mathbf{r})\leq 0\right)\leq \delta +\beta .\] We therefore conclude that \(\underset {\mathbf{v}\sim \rho \mathbb{S}}{\mathbb{E}}\left(\underset {\mathbf{x}\sim \mu}{\mathbb{P}}\left(\hat{k} (\mathbf{x} + \mathbf{v})\neq \hat{k} (\mathbf{x})\right)\right)\geq 1 - \delta - \beta\). This shows the existence of a universal noise vector \(\mathbf{v}\in \rho \mathbb{S}\) such that \(\hat{k} (\mathbf{x} + \mathbf{v})\neq \hat{k} (\mathbf{x})\) with probability larger than \(1 - \delta - \beta\). \(\square\)

## C COMPLEMENTARY EXPERIMENTAL RESULTS

## C.1 EXPERIMENT IN FIG 7 (B)

We perform here the same experiment as in Fig. 7 (b) on the VGG-16 and ResNet-18 architectures. It can be seen that, similarly to Fig. 7 (b), the proposed approach of generating universal perturbations through random sampling from the subspace \(\mathcal{S}_{c}\) achieves high fooling rates (comparable to the algorithm in Moosavi-Dezfooli et al. (2017), and significantly higher than by using \(\mathcal{S}_{f}\)).

## C.2 TRANSFERABILITY OF UNIVERSAL PERTURBATIONS

Fig. 13 shows examples of normal cross-sections of the decision boundary across a fixed direction in \(\mathcal{S}_{c}\), for the VGG-16 architecture (but where \(\mathcal{S}_{c}\) is computed for CaffeNet). Note that the decision boundary across this fixed direction is positively curved for both networks, even though this subspace was computed for a distinct network. The sharing of \(\mathcal{S}_{c}\) across different nets explains the transferability of universal perturbations observed in Moosavi-Dezfooli et al. (2017).

## REFERENCES

Shumeet Baluja and Ian Fischer. Adversarial transformation networks: Learning to generate adversarial examples. arXiv preprint arXiv:1703.09387, 2017. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 387-402, 2013. <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 11: Same experiment as Fig. 7 (b) performed on VGG-16 architecture (CIFAR-10 dataset). </center> ![](images/12_1.jpg) <center>Figure 12: Same experiment as Fig. 7 (b) performed on ResNet-18 architecture (CIFAR-10 dataset). </center> Constantine Caramanis, Shie Mannor, and Huan Xu. Robust optimization in machine learning. In Suvrit Sra, Sebastian Nowozin, and Stephen J Wright (eds.), Optimization for machine learning, chapter 14. MIT Press, 2012. Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pp. 39-57. IEEE, 2017. Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. Parseval networks: Improving robustness to adversarial examples. In International Conference on Machine Learning (ICML), 2017. Sanjoy Dasgupta and Anupam Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss.
Random Structures & Algorithms, 22(1):60-65, 2003. Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song. Robust physical-world attacks on machine learning models. arXiv preprint arXiv:1707.08945, 2017. Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Analysis of classifiers' robustness to adversarial perturbations. arXiv preprint arXiv:1502.02590, 2015. Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. In Neural Information Processing Systems (NIPS), 2016. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015. <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 13: Transferability of the subspace \(\mathcal{S}_{c}\) across different networks. The first row shows normal cross sections along a fixed direction in \(\mathcal{S}_{c}\) for VGG-16, with a subspace \(\mathcal{S}_{c}\) computed with CaffeNet. Note the positive curvature in most cases. To provide a baseline for comparison, the second row illustrates normal sections along random directions. </center> Shixiang Gu and Luca Rigazio. Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068, 2014. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Computer Vision and Pattern Recognition (CVPR), 2016. Jan Hendrik Metzen, Mummadi Chaithanya Kumar, Thomas Brox, and Volker Fischer. Universal adversarial perturbations against semantic image segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2755-2764, 2017. Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM International Conference on Multimedia (MM), pp. 675-678, 2014. Valentin Khrulkov and Ivan Oseledets. Art of singular vectors and universal adversarial perturbations. arXiv preprint arXiv:1709.03582, 2017. Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pp. 1106-1114, 2012. Gert Lanckriet, Laurent Ghaoui, Chiranjib Bhattacharyya, and Michael Jordan. A robust minimax approach to classification. The Journal of Machine Learning Research, 3:555-582, 2003. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. Jeffrey M Lee. Manifolds and differential geometry, volume 107. American Mathematical Society, Providence, 2009. Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In International Conference on Learning Representations (ICLR), 2014. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. Jiri Matousek. Lectures on discrete geometry, volume 108. Springer, New York, 2002. Guido F Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks. In Advances in Neural Information Processing Systems, pp. 2924-2932, 2014. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard.
Deepfool: a simple and accurate method to fool deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. <--- Page Split ---> Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. Konda Reddy Mopuri, Utsav Garg, and R Venkatesh Babu. Fast feature fool: A data independent approach to universal adversarial perturbations. In British Machine Vision Conference (BMVC), 2017. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. arXiv preprint arXiv:1511.04508, 2015. Ben Poole, Subhaneil Lahiri, Maithreyi Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In Advances in Neural Information Processing Systems, pp. 3360-3368, 2016. Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528-1540. ACM, 2016. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014. Pedro Tabacof and Eduardo Valle. Exploring the space of adversarial images. IEEE International Joint Conference on Neural Networks, 2016. Thomas Tanay and Lewis Griffin. A boundary tilting perspective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690, 2016. David Warde-Farley, Ian Goodfellow, T Hazan, G Papandreou, and D Tarlow. Adversarial perturbations of deep neural networks. Perturbations, Optimization, and Statistics, 2016. Huan Xu, Constantine Caramanis, and Shie Mannor. Robustness and regularization of support vector machines. The Journal of Machine Learning Research, 10:1485-1510, 2009. <--- Page Split --->
## ABSTRACT Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image- agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we provide a quantitative analysis of the robustness of classifiers to universal perturbations, and draw a formal link between the robustness to universal perturbations, and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved models). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exist shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties. ## 1 INTRODUCTION Despite the success of deep neural networks in solving complex visual tasks He et al. (2016); Krizhevsky et al. (2012), these classifiers have recently been shown to be highly vulnerable to perturbations in the input space. In Moosavi- Dezfooli et al. (2017), state- of- the- art classifiers are empirically shown to be vulnerable to universal perturbations: there exist very small image- agnostic perturbations that cause most natural images to be misclassified. The existence of universal perturbation is further shown in Hendrik Metzen et al. (2017) to extend to other visual tasks, such as semantic segmentation. Universal perturbations fundamentally differ from the random noise regime, and exploit essential properties of deep networks to misclassify most natural images with perturbations of very small magnitude. Why are state- of- the- art classifiers highly vulnerable to these specific directions in the input space? What do these directions represent? To answer these questions, we follow a theoretical approach and find the causes of this vulnerability in the geometry of the decision boundaries induced by deep neural networks. For deep networks, we show that the key to answering these questions lies in the existence of shared directions (across different datapoints) along which the decision boundary is highly curved. This establishes fundamental connections between geometry and robustness to universal perturbations, and thereby reveals new properties of the decision boundaries induced by deep networks. <--- Page Split ---> Our aim here is to derive an analysis of the vulnerability to universal perturbations in terms of the geometric properties of the boundary. To this end, we introduce two decision boundary models: 1) the locally flat model assumes that the first order linear approximation of the decision boundary holds locally in the vicinity of the natural images, and 2) the locally curved model provides a second order local description of the decision boundary, and takes into account the curvature information. We summarize our contributions as follows: - Under the locally flat decision boundary model, we show that classifiers are vulnerable to universal directions as long as the normals to the decision boundaries in the vicinity of natural images are correlated (i.e., they approximately span a low dimensional space). This result formalizes and proves some of the empirical observations made in Moosavi-Dezfooli et al. 
- Under the locally curved decision boundary model, the robustness to universal perturbations is instead driven by the curvature of the decision boundary; we show that the existence of shared directions along which the decision boundary is positively<sup>1</sup> curved implies the existence of very small universal perturbations.
- We show that state-of-the-art deep nets remarkably satisfy the assumption of our theorem derived for the locally curved model: there actually exist shared directions along which the decision boundary of deep neural networks is positively curved. Our theoretical result consequently captures the large vulnerability of state-of-the-art deep networks to universal perturbations.
- We finally show that the developed theoretical framework provides a novel (geometric) method for computing universal perturbations, and further explains some of the properties observed in Moosavi-Dezfooli et al. (2017) (e.g., diversity, transferability) regarding the robustness to universal perturbations.

## 2 DEFINITIONS AND NOTATIONS

Consider an \(L\)-class classifier \(f:\mathbb{R}^{d}\to \mathbb{R}^{L}\). Given a datapoint \(\pmb{x}\in \mathbb{R}^{d}\), we define the estimated label \(\hat{k}(\pmb{x}) = \mathrm{argmax}_{k}f_{k}(\pmb{x})\), where \(f_{k}(\pmb{x})\) is the \(k\)th component of \(f(\pmb{x})\) that corresponds to the \(k\)th class. We define by \(\mu\) a distribution over natural images in \(\mathbb{R}^{d}\). The main focus of this paper is to analyze the robustness of classifiers to universal (image-agnostic) noise. Specifically, we define \(\pmb{v}\) to be a universal noise vector if \(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\) for "most" \(\pmb{x}\sim \mu\). Formally, a perturbation \(\pmb{v}\) is \((\xi ,\delta)\)-universal if the following two constraints are satisfied:

\[\| \pmb{v}\|_{2}\leq \xi , \qquad \underset{\pmb{x}\sim \mu}{\mathbb{P}}\left(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\right)\geq 1 - \delta .\]

This perturbation image \(\pmb{v}\) is coined "universal", as it represents a fixed image-agnostic perturbation that causes label change for a large fraction of images sampled from the data distribution \(\mu\). In Moosavi-Dezfooli et al. (2017), state-of-the-art classifiers have been shown to be surprisingly vulnerable to this simple perturbation regime. It should be noted that universal perturbations are different from adversarial perturbations Szegedy et al. (2014); Biggio et al. (2013), which are datapoint-specific perturbations that are sought to fool a specific image. An adversarial perturbation is a solution to the following optimization problem

\[\pmb{r}(\pmb{x}) = \arg \min_{\pmb{r}\in \mathbb{R}^{d}}\| \pmb{r}\|_{2}\mathrm{~subject~to~}\hat{k}(\pmb{x} + \pmb{r})\neq \hat{k}(\pmb{x}), \quad (1)\]

which corresponds to the smallest additive perturbation that is necessary to change the label of the classifier \(\hat{k}\) for \(\pmb{x}\). From a geometric perspective, \(\pmb{r}(\pmb{x})\) quantifies the distance from \(\pmb{x}\) to the decision boundary (see Fig. 1a). In addition, due to the optimality conditions of Eq. (1), \(\pmb{r}(\pmb{x})\) is orthogonal to the decision boundary at \(\pmb{x} + \pmb{r}(\pmb{x})\), as illustrated in Fig. 1a. In the remainder of the paper, we analyze the robustness of classifiers to universal noise, with respect to the geometry of the decision boundary of the classifier \(f\).
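To make the \((\xi ,\delta)\)-universality criterion concrete, the following sketch estimates the fooling probability \(\mathbb{P}(\hat{k}(\pmb{x}+\pmb{v})\neq \hat{k}(\pmb{x}))\) of a candidate perturbation on held-out images. It is a minimal illustration under stated assumptions, not part of the original paper: the PyTorch classifier `model` and the data `loader` are placeholder names.

```python
import torch

@torch.no_grad()
def fooling_rate(model, loader, v, device="cpu"):
    """Estimate P(k_hat(x + v) != k_hat(x)) over samples from `loader`.

    A perturbation v is (xi, delta)-universal when ||v||_2 <= xi and this
    estimate is at least 1 - delta (see the two constraints above).
    """
    model.eval()
    fooled, total = 0, 0
    for x, _ in loader:
        x = x.to(device)
        pred_clean = model(x).argmax(dim=1)                 # k_hat(x)
        pred_pert = model(x + v.to(device)).argmax(dim=1)   # k_hat(x + v)
        fooled += (pred_clean != pred_pert).sum().item()
        total += x.size(0)
    return fooled / total
```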
![](images/2_0.jpg)
<center>(a) Local geometry of the decision boundary. </center>

![](images/2_1.jpg)
<center>(b) Universal direction \(\pmb{v}\) of a linear binary classifier. </center>

Formally, the pairwise decision boundary, when restricting the classifier to classes \(i\) and \(j\), is defined by \(\mathcal{B} = \{\mathbf{z} \in \mathbb{R}^{d}: f_{i}(\mathbf{z}) - f_{j}(\mathbf{z}) = 0\}\) (we omit the dependence of \(\mathcal{B}\) on \(i, j\) for simplicity). The decision boundary of the classifier hence corresponds to points in the input space that are equally likely to be classified as \(i\) or \(j\). In the following sections, we introduce two models on the decision boundary, and quantify in each case the robustness of such classifiers to universal perturbations. We then show that the locally curved model better explains the vulnerability of deep networks to such perturbations.

## 3 ROBUSTNESS OF CLASSIFIERS WITH FLAT DECISION BOUNDARIES

We begin our analysis by assuming a locally flat decision boundary model, and analyze the robustness of classifiers to universal perturbations under this decision boundary model. We specifically study the existence of a universal direction \(\pmb{v}\), such that

\[\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\mathrm{~or~}\hat{k}(\pmb{x} - \pmb{v})\neq \hat{k}(\pmb{x}), \quad (2)\]

where \(\pmb{v}\) is a vector of sufficiently small norm. It should be noted that a universal direction (as opposed to a universal vector) is sought in Eq. (2), as this definition is more adapted to the analysis of classifiers with locally flat decision boundaries. For example, while a binary linear classifier has a universal direction that fools all the data points, only half of the data points can be fooled with a universal vector (provided the classes are balanced) (see Fig. 1b). We therefore consider this slightly modified definition in the remainder of this section. We now introduce our local decision boundary model. For \(\pmb{x}\in \mathbb{R}^{d}\), note that \(\pmb{x} + \pmb{r}(\pmb{x})\) belongs to the decision boundary and \(\pmb{r}(\pmb{x})\) is normal to the decision boundary at \(\pmb{x} + \pmb{r}(\pmb{x})\) (see Fig. 1a). A linear approximation of the decision boundary of the classifier at \(\pmb{x} + \pmb{r}(\pmb{x})\) is therefore given by \(\pmb{x} + \{\pmb{v}:\pmb{r}(\pmb{x})^{T}\pmb{v} = \| \pmb{r}(\pmb{x})\|_{2}^{2}\}\). Under this approximation, the vector \(\pmb{r}(\pmb{x})\) hence captures the local geometry of the decision boundary in the vicinity of datapoint \(\pmb{x}\). We assume a local decision boundary model in the vicinity of datapoints \(\pmb{x}\sim \mu\), where the local classification region of \(\pmb{x}\) occurs in the half-space \(\pmb{r}(\pmb{x})^{T}\pmb{v}\leq \| \pmb{r}(\pmb{x})\|_{2}^{2}\). Equivalently, we assume that outside of this half-space, the classifier outputs a different label than \(\hat{k}(\pmb{x})\). However, since we are analyzing the robustness to universal directions (and not vectors), we consider the following condition, given by

\[\mathcal{L}_{s}(\pmb{x},\rho):\forall \pmb{v}\in B(\rho),\ |\pmb{r}(\pmb{x})^{T}\pmb{v}|\geq \| \pmb{r}(\pmb{x})\|_{2}^{2}\Longrightarrow \hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\mathrm{~or~}\hat{k}(\pmb{x} - \pmb{v})\neq \hat{k}(\pmb{x}), \quad (3)\]

where \(B(\rho)\) is a ball of radius \(\rho\) centered at 0.
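Since the normal vectors \(\pmb{r}(\pmb{x})\) anchor everything that follows, it may help to see how they can be estimated in practice. The sketch below is a DeepFool-style iteration for a binary discriminant \(F = f_i - f_j\), written in PyTorch; the function name and the assumption that `F` maps an image tensor to a scalar tensor are ours, not the paper's.

```python
import torch

def minimal_perturbation(F, x, max_iter=50):
    """Estimate r(x) of Eq. (1) for a binary discriminant F = f_i - f_j.

    Repeatedly steps onto the zero level set of the local linear
    approximation of F; the returned vector approximates the smallest
    perturbation reaching the decision boundary, and is normal to the
    boundary at x + r(x).
    """
    r = torch.zeros_like(x)
    initial_sign = torch.sign(F(x)).item()
    for _ in range(max_iter):
        x_adv = (x + r).detach().requires_grad_(True)
        out = F(x_adv)
        if torch.sign(out).item() != initial_sign:  # crossed the boundary B
            break
        grad, = torch.autograd.grad(out, x_adv)
        # linearized step: move by |F| / ||grad|| along the normal direction
        r = r - out.detach() * grad / grad.norm().pow(2)
    return r
```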
An illustration of this decision boundary model is provided in Fig. 2a. It should be noted that linear classifiers satisfy this decision boundary model, as their decision boundaries are globally flat. This local decision boundary model is however more general, as we do not assume that the decision boundary is linear, but rather that the classification region in the vicinity of \(\pmb{x}\) is included in \(\pmb{x} + \{\pmb{v}:|\pmb{r}(\pmb{x})^{T}\pmb{v}|\leq \| \pmb{r}(\pmb{x})\|_{2}^{2}\}\). Moreover, it should be noted that the model being assumed here is on the decision boundary of the classifier, and not an assumption on the classification function \(f\).<sup>2</sup> Fig. 2a provides an example of a nonlinear decision boundary that satisfies this model.

In all the theoretical results of this paper, we assume that \(\| \boldsymbol{r}(\boldsymbol{x})\|_{2} = 1\), for all \(\boldsymbol{x}\sim \mu\), for simplicity of the exposition. The results can be extended in a straightforward way to the case where \(\| \boldsymbol{r}(\boldsymbol{x})\|_{2}\) takes different values for points sampled from \(\mu\). The following result shows that classifiers following the locally flat decision boundary model are not robust to small universal perturbations, provided the normals to the decision boundary (in the vicinity of datapoints) approximately belong to a low dimensional subspace of dimension \(m\ll d\).

Theorem 1. Let \(\xi \geq 0\), \(\delta \geq 0\). Let \(S\) be an \(m\)-dimensional subspace such that \(\| P_{S}\boldsymbol{r}(\boldsymbol{x})\|_{2}\geq 1 - \xi\) for almost all \(\boldsymbol{x}\sim \mu\), where \(P_{S}\) is the projection operator on the subspace. Assume moreover that \(\mathcal{L}_{s}(\boldsymbol{x},\rho)\) holds for almost all \(\boldsymbol{x}\sim \mu\), with \(\rho = \frac{\sqrt{em}}{\delta(1 - \xi)}\). Then, there exists a universal noise vector \(\pmb{v}\), such that \(\| \pmb{v}\|_{2}\leq \rho\) and

\[\underset{\boldsymbol{x}\sim \mu}{\mathbb{P}}\left(\hat{k}(\boldsymbol{x} + \boldsymbol{v})\neq \hat{k}(\boldsymbol{x})\text{ or }\hat{k}(\boldsymbol{x} - \boldsymbol{v})\neq \hat{k}(\boldsymbol{x})\right)\geq 1 - \delta .\]

The proof can be found in the supplementary material, and relies on the construction of a universal perturbation through random sampling from \(S\). The vulnerability of classifiers to universal perturbations can be attributed to the shared geometric properties of the classifier's decision boundary in the vicinity of different data points. In the above theorem, this shared geometric property across different data points is expressed in terms of the normal vectors \(\boldsymbol{r}(\boldsymbol{x})\). The main assumption of the above theorem is specifically that normal vectors \(\boldsymbol{r}(\boldsymbol{x})\) to the decision boundary in the neighborhood of data points approximately live in a subspace \(S\) of low dimension \(m< d\). Under this assumption, the above result shows the existence of universal perturbations of \(\ell_{2}\) norm of order \(\sqrt{m}\). When \(m\ll d\), Theorem 1 hence shows that very small (compared to random noise, which scales as \(\sqrt{d}\) Fawzi et al. (2016)) universal perturbations misclassifying most data points can be found.

Remark 1. Theorem 1 can be readily applied to assess the robustness of multiclass linear classifiers to universal perturbations.
In fact, when \(f(\boldsymbol{x}) = W^{T}\boldsymbol{x}\), with \(W = [\boldsymbol{w}_{1},\ldots ,\boldsymbol{w}_{L}]\), the normal vectors are equal to \(\boldsymbol{w}_{i} - \boldsymbol{w}_{j}\), for \(1\leq i,j\leq L\), \(i\neq j\). These normal vectors exactly span a subspace of dimension \(L - 1\). Hence, by applying the result with \(\xi = 0\) and \(m = L - 1\), we obtain that linear classifiers are vulnerable to universal noise, with magnitude proportional to \(\sqrt{L - 1}\). In typical problems, we have \(L\ll d\), which leads to very small universal directions.

Remark 2. Theorem 1 provides a partial explanation of the vulnerability of deep networks, provided a locally flat decision boundary is assumed. Evidence in favor of this assumption was given through visualization of randomly chosen cross-sections in Warde-Farley et al. (2016); Fawzi et al. (2016). In addition, normal vectors to the decision boundary of deep nets (near data points) have been observed to approximately span a subspace \(S\) of sufficiently small dimension in Moosavi-Dezfooli et al. (2017). However, unlike linear classifiers, the dimensionality of this subspace \(m\) is typically larger than the number of classes \(L\), leading to large upper bounds on the norm of the universal noise under the flat decision boundary model. This simplified model of the decision boundary hence fails to exhaustively explain the large vulnerability of state-of-the-art deep neural networks to universal perturbations. We show in the next section that the second order information of the decision boundary contains a crucial quantity (the curvature) that captures the high vulnerability to universal perturbations.

## 4 ROBUSTNESS OF CLASSIFIERS WITH CURVED DECISION BOUNDARIES

We now consider a model of the decision boundary in the vicinity of the data points that allows us to leverage the curvature of nonlinear classifiers. Under this decision boundary model, we study the existence of universal perturbations satisfying \(\hat{k}(\boldsymbol{x} + \boldsymbol{v})\neq \hat{k}(\boldsymbol{x})\) for most \(\boldsymbol{x}\sim \mu\).<sup>3</sup> We start by establishing an informal link between the curvature of the decision boundary and the robustness to universal perturbations, which will be made precise later in this section. As illustrated in Fig. 3, the norm of the perturbation required to change the label of the classifier along a specific direction \(\boldsymbol{v}\) is smaller if the decision boundary is positively curved than if the decision boundary is flat (or negatively curved). It therefore appears from Fig. 3 that the existence of universal perturbations (when the decision boundary is curved) can be attributed to the existence of common directions where the decision boundary is positively curved for many data points.

![](images/4_0.jpg)
<center>(a) Flat decision boundary model \(\mathcal{L}_{s}(\boldsymbol{x},\rho)\). </center>

![](images/4_1.jpg)
<center>(b) Curved decision boundary model \(\mathcal{Q}(\boldsymbol{x},\rho)\). </center>

<center>Figure 2: Illustration of the decision boundary models considered in this paper. (a): For the flat decision boundary model, the set \(\{\boldsymbol{v}:|\boldsymbol{r}(\boldsymbol{x})^{T}\boldsymbol{v}|\leq \| \boldsymbol{r}(\boldsymbol{x})\|_{2}^{2}\}\) is illustrated (stripe). Note that for \(\boldsymbol{v}\) taken outside the stripe (i.e., in the grayed area), we have \(\hat{k}(\boldsymbol{x} + \boldsymbol{v})\neq \hat{k}(\boldsymbol{x})\) or \(\hat{k}(\boldsymbol{x} - \boldsymbol{v})\neq \hat{k}(\boldsymbol{x})\) in the \(\rho\) neighborhood. (b): For the curved decision boundary model, any vector \(\boldsymbol{v}\) chosen in the grayed area is classified differently from \(\hat{k}(\boldsymbol{x})\). </center>
![](images/4_2.jpg)
<center>Figure 3: Link between robustness and curvature of the decision boundary. When the decision boundary is positively curved (left), small universal perturbations are more likely to fool the classifier. </center>

In the remainder of this section, we formally prove the existence of universal perturbations when there exist common positively curved directions of the decision boundary. Recalling the definitions of Sec. 2, a quadratic approximation of the decision boundary at \(\boldsymbol{z} = \boldsymbol{x} + \boldsymbol{r}(\boldsymbol{x})\) gives \(\boldsymbol{x} + \{\boldsymbol{v}:(\boldsymbol{v} - \boldsymbol{r}(\boldsymbol{x}))^{T}H_{\boldsymbol{z}}(\boldsymbol{v} - \boldsymbol{r}(\boldsymbol{x})) + \alpha_{x}\boldsymbol{r}(\boldsymbol{x})^{T}(\boldsymbol{v} - \boldsymbol{r}(\boldsymbol{x})) = 0\}\), where \(H_{\boldsymbol{z}}\) denotes the Hessian of \(F\) at \(\boldsymbol{z}\), and \(\alpha_{x} = \frac{\| \nabla F(\boldsymbol{z})\|_{2}}{\| \boldsymbol{r}(\boldsymbol{x})\|_{2}}\), with \(F = f_{i} - f_{j}\). In this model, the second order information (encoded in the Hessian matrix \(H_{\boldsymbol{z}}\)) captures the curvature of the decision boundary. We assume a local decision boundary model in the vicinity of datapoints \(\boldsymbol{x}\sim \mu\), where the local classification region of \(\boldsymbol{x}\) is bounded by a quadratic form. Formally, we assume that there exists \(\rho >0\) such that the following condition holds for almost all \(\boldsymbol{x}\sim \mu\):

\[\mathcal{Q}(\boldsymbol{x},\rho):\forall \boldsymbol{v}\in B(\rho),\ (\boldsymbol{v} - \boldsymbol{r}(\boldsymbol{x}))^{T}H_{\boldsymbol{z}}(\boldsymbol{v} - \boldsymbol{r}(\boldsymbol{x})) + \alpha_{x}\boldsymbol{r}(\boldsymbol{x})^{T}(\boldsymbol{v} - \boldsymbol{r}(\boldsymbol{x}))\geq 0\Longrightarrow \hat{k}(\boldsymbol{x} + \boldsymbol{v})\neq \hat{k}(\boldsymbol{x}).\]

An illustration of this quadratic decision boundary model is shown in Fig. 2b. The following result shows the existence of universal perturbations, provided a subspace \(\mathcal{S}\) exists where the decision boundary has positive curvature along most directions of \(\mathcal{S}\):

Theorem 2. Let \(\kappa >0\), \(\delta >0\) and \(m\in \mathbb{N}\). Assume that the quadratic decision boundary model \(\mathcal{Q}(\boldsymbol{x},\rho)\) holds for almost all \(\boldsymbol{x}\sim \mu\), with \(\rho = \sqrt{\frac{2\log(2/\delta)}{m}}\,\kappa^{-1} + \kappa^{-1/2}\).
Let \(\mathcal{S}\) be an \(m\)-dimensional subspace such that

\[\underset{\boldsymbol{v}\sim \mathbb{S}}{\mathbb{P}}\left(\forall \pmb{u}\in \mathbb{R}^{2},\ \alpha_{x}^{-1}\pmb{u}^{T}H_{\boldsymbol{z}}^{\boldsymbol{r}(\boldsymbol{x}),\boldsymbol{v}}\pmb{u}\geq \kappa \| \pmb{u}\|_{2}^{2}\right)\geq 1 - \beta \ \text{for almost all}\ \boldsymbol{x}\sim \mu ,\]

where \(H_{\boldsymbol{z}}^{\boldsymbol{r}(\boldsymbol{x}),\boldsymbol{v}} = \Pi^{T}H_{\boldsymbol{z}}\Pi\) with \(\Pi\) an orthonormal basis of \(\operatorname{span}(\boldsymbol{r}(\boldsymbol{x}),\boldsymbol{v})\), and \(\mathbb{S}\) denotes the unit sphere in \(\mathcal{S}\). Then, there is a universal perturbation vector \(\boldsymbol{v}\) such that \(\| \boldsymbol{v}\|_{2}\leq \rho\) and \(\underset{\boldsymbol{x}\sim \mu}{\mathbb{P}}\left(\hat{k}(\boldsymbol{x} + \boldsymbol{v})\neq \hat{k}(\boldsymbol{x})\right)\geq 1 - \delta - \beta\).

The above theorem quantifies the robustness of classifiers to universal perturbations in terms of the curvature \(\kappa\) of the decision boundary along normal sections spanned by \(\boldsymbol{r}(\boldsymbol{x})\) and vectors \(\boldsymbol{v}\in \mathcal{S}\) (see Fig. 4 (left) for an illustration of a normal section). Fig. 4 (right) provides a geometric illustration of the assumption of Theorem 2. Provided a subspace \(\mathcal{S}\) exists where the curvature of the decision boundary in the vicinity of datapoints \(\boldsymbol{x}\) is positive (along directions in \(\mathcal{S}\)), Theorem 2 shows that universal perturbations can be found with a norm of approximately \(\frac{\kappa^{-1}}{\sqrt{m}} + \kappa^{-1/2}\). Hence, when the curvature \(\kappa\) is sufficiently large, the existence of small universal perturbations is guaranteed by Theorem 2.<sup>4</sup>

![](images/5_0.jpg)
<center>Figure 4: Left: Normal section \(\mathcal{U}\) of the decision boundary, along the plane spanned by the normal vector \(\pmb{r}(\pmb{x})\) and \(\pmb{v}\). Right: Geometric interpretation of the assumption in Theorem 2. Theorem 2 assumes that the decision boundary along normal sections \((\pmb{r}(\pmb{x}),\pmb{v})\) is locally (in a \(\rho\) neighborhood) located inside a disk of radius \(1/\kappa\). Note the difference with respect to traditional notions of curvature, which express the curvature in terms of the osculating circle at \(\pmb{x} + \pmb{r}(\pmb{x})\). The assumption we use here is more "global". </center>

Remark 1. We stress that Theorem 2 does not assume that the decision boundary is curved in the direction of all vectors in \(\mathbb{R}^{d}\); we rather assume the existence of a subspace \(\mathcal{S}\) where the decision boundary is positively curved (in the vicinity of natural images \(\boldsymbol{x}\)) along most directions in \(\mathcal{S}\). Moreover, it should be noted that, unlike Theorem 1, where the normals to the decision boundary are assumed to belong to a low dimensional subspace, no assumption is imposed here on the normal vectors. Instead, we assume the existence of a subspace \(\mathcal{S}\) leading to positive curvature for points on the decision boundary in the vicinity of natural images.

Remark 2. Theorem 2 not only predicts the vulnerability of classifiers, but also provides a constructive way to find such universal perturbations. In fact, random vectors sampled from the subspace \(\mathcal{S}\) are predicted to be universal perturbations (see supp. material for more details).
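The constructive side of Theorem 2 is easy to state in code: given an orthonormal basis of the subspace \(\mathcal{S}\), a random direction drawn from its unit sphere, scaled to norm \(\rho\), is a candidate universal perturbation. A minimal sketch follows; the basis matrix `S_basis` is assumed to be precomputed (e.g., as described in Section 5), and the function name is illustrative. Sampling Gaussian coefficients and normalizing yields the uniform distribution on the unit sphere of \(\mathcal{S}\), since the Gaussian is rotation-invariant.

```python
import torch

def random_universal(S_basis, rho):
    """Sample a candidate universal perturbation of norm `rho`.

    S_basis: (d, m) matrix whose orthonormal columns span the subspace S
    of Theorem 2. Gaussian coefficients pushed through the basis and
    normalized give a uniform sample from the unit sphere in S.
    """
    m = S_basis.shape[1]
    coeffs = torch.randn(m, dtype=S_basis.dtype, device=S_basis.device)
    v = S_basis @ coeffs
    return rho * v / v.norm()
```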
In Section 5, we will show that this new construction works remarkably well for deep networks, as predicted by our analysis.

## 5 EXPERIMENTAL RESULTS: UNIVERSAL PERTURBATIONS FOR DEEP NETS

We first evaluate the validity of the assumption of Theorem 2 for deep neural networks, that is, the existence of a low dimensional subspace where the decision boundary is positively curved along most directions sampled from the subspace. To construct the subspace, we find the directions that lead to large positive curvature in the vicinity of a given set of training points \(\{\boldsymbol{x}_{1},\ldots ,\boldsymbol{x}_{n}\}\). We recall that the principal directions \(\boldsymbol{v}_{1},\ldots ,\boldsymbol{v}_{d - 1}\) at a point \(\boldsymbol{z}\) on the decision boundary correspond to the eigenvectors (with nonzero eigenvalue) of the matrix \(H_{\boldsymbol{z}}^{t}\), given by \(H_{\boldsymbol{z}}^{t} = P H_{\boldsymbol{z}}P\), where \(P\) denotes the projection operator on the tangent space of the decision boundary at \(\boldsymbol{z}\), and \(H_{\boldsymbol{z}}\) denotes the Hessian of the decision boundary function evaluated at \(\boldsymbol{z}\) Lee (2009). Common directions with large average curvature at \(\boldsymbol{z}_{i} = \boldsymbol{x}_{i} + \boldsymbol{r}(\boldsymbol{x}_{i})\) (where \(\boldsymbol{r}(\boldsymbol{x}_{i})\) is the minimal perturbation defined in Eq. (1)) hence correspond to the eigenvectors of the average Hessian matrix \(\overline{H} = n^{- 1}\sum_{i = 1}^{n}H_{\boldsymbol{z}_{i}}^{t}\). We therefore set our subspace, \(\mathcal{S}_{c}\), to be the span of the first \(m\) eigenvectors of \(\overline{H}\), and show that the subspace constructed in this way satisfies the assumption of Theorem 2. To determine whether the decision boundary is positively curved in most directions of \(\mathcal{S}_{c}\) (for unseen datapoints from the validation set), we compute the average curvature across random directions in \(\mathcal{S}_{c}\) for points on the decision boundary, i.e., \(\mathbf{z} = \mathbf{x} + \mathbf{r}(\mathbf{x})\); the average curvature is formally given by

\[\bar{\kappa}_{\mathcal{S}}(\boldsymbol{x}) = \underset{\boldsymbol{v}\sim \mathbb{S}}{\mathbb{E}}\left(\frac{(P\boldsymbol{v})^{T}H_{\boldsymbol{z}}(P\boldsymbol{v})}{\|P\boldsymbol{v}\|_{2}^{2}}\right),\]

where \(\mathbb{S}\) denotes the unit sphere in \(\mathcal{S}_{c}\).

![](images/6_0.jpg)
<center>Figure 5: Visualization of normal cross-sections of the decision boundary, for CIFAR-10 (Left: LeNet, Right: ResNet-18). Top: Normal cross-sections along \((\boldsymbol{r}(\boldsymbol{x}),\boldsymbol{v})\), where \(\boldsymbol{v}\) is the universal perturbation computed using the algorithm in Moosavi-Dezfooli et al. (2017). Bottom: Normal cross-sections along \((\boldsymbol{r}(\boldsymbol{x}),\boldsymbol{v})\), where \(\boldsymbol{v}\) is a random vector uniformly sampled from the unit sphere in \(\mathbb{R}^{d}\). </center>

In Fig. 7 (a), the average of \(\bar{\kappa}_{\mathcal{S}}(\boldsymbol{x})\) across points sampled from the validation set is shown (as well as the standard deviation) as a function of the subspace dimension \(m\), for a LeNet architecture LeCun et al.
(1998) trained on the CIFAR-10 dataset.<sup>5</sup> Observe that when the dimension of the subspace is sufficiently small, the average curvature is strongly oriented towards positive curvature, which empirically shows the existence of this subspace \(\mathcal{S}_{c}\) where the decision boundary is positively curved for most data points in the validation set. This empirical evidence suggests that the assumption of Theorem 2 is satisfied, and hence that universal perturbations can be obtained as random vectors sampled from this subspace \(\mathcal{S}_{c}\). To show this strong link between the vulnerability to universal perturbations and the positive curvature of the decision boundary, we now visualize normal sections of the decision boundary of deep networks trained on ImageNet (CaffeNet (Jia et al., 2014) and ResNet-152 (He et al., 2016)) and CIFAR-10 (LeNet (LeCun et al., 1998) and ResNet-18 (He et al., 2016)) in the direction of their respective universal perturbations.<sup>6</sup> Specifically, we visualize normal sections of the decision boundary in the plane \((\boldsymbol{r}(\boldsymbol{x}),\boldsymbol{v})\), where \(\boldsymbol{v}\) is a universal perturbation computed using the algorithm of Moosavi-Dezfooli et al. (2017). The visualizations are shown in Figs. 5 and 6. Interestingly, the universal perturbations belong to highly positively curved directions of the decision boundary, despite the absence of any geometric constraint in the algorithm used to compute universal perturbations. To fool most data points, universal perturbations hence naturally seek common directions of the embedding space where the decision boundary is positively curved. These directions lead to very small universal perturbations, as highlighted by our analysis in Theorem 2. It should be noted that such highly curved directions of the decision boundary are rare, as random normal sections are comparatively flat (see Figs. 5 and 6, second row). This is due to the fact that most principal curvatures are approximately zero for points sampled on the decision boundary in the vicinity of data points. Recall that Theorem 2 suggests a novel procedure to generate universal perturbations; in fact, random perturbations from \(\mathcal{S}_{c}\) are predicted to be universal perturbations. To assess the validity of this result, Fig. 7 (b) illustrates the fooling rate of universal perturbations (for the LeNet network on CIFAR-10) sampled uniformly at random from the unit sphere in the subspace \(\mathcal{S}_{c}\), and scaled to have a fixed norm (1/5th of the norm of the random noise required to fool most data points). We assess the quality of such perturbations by further indicating in Fig. 7 (b) the fooling rate of the universal
perturbation computed using the original algorithm in Moosavi-Dezfooli et al. (2017).

![](images/7_0.jpg)
<center>Figure 6: Visualization of normal cross-sections of the decision boundary, for ImageNet (Left: ResNet-152, and Right: CaffeNet). Top: Normal cross-sections along \((\boldsymbol{r}(\boldsymbol{x}),\boldsymbol{v})\), where \(\boldsymbol{v}\) is the universal perturbation computed using the algorithm in Moosavi-Dezfooli et al. (2017). Bottom: Normal cross-sections along \((\boldsymbol{r}(\boldsymbol{x}),\boldsymbol{v})\), where \(\boldsymbol{v}\) is a random vector uniformly sampled from the unit sphere in \(\mathbb{R}^d\). </center>

![](images/7_1.jpg)
<center>Figure 7: (a) Average curvature \(\bar{\kappa}_{\mathcal{S}}\), averaged over 1000 validation datapoints, as a function of the subspace dimension. (b) Fooling rate of universal perturbations (on an unseen validation set) computed using random perturbations in 1) \(\mathcal{S}_{c}\): the subspace of positively curved directions, and 2) \(\mathcal{S}_{f}\): the subspace collecting the normal vectors \(\boldsymbol{r}(\boldsymbol{x})\). The dotted line corresponds to the fooling rate using the algorithm in Moosavi-Dezfooli et al. (2017). \(\mathcal{S}_{f}\) corresponds to the largest singular vectors of the matrix gathering the normal vectors \(\boldsymbol{r}(\boldsymbol{x})\) in the training set (similar to the approach in Moosavi-Dezfooli et al. (2017)). </center>

Observe that random perturbations sampled from \(\mathcal{S}_{c}\) (with \(m\) small) provide very powerful universal perturbations, fooling nearly \(85\%\) of data points from the validation set. This rate is comparable to that of the algorithm in Moosavi-Dezfooli et al. (2017), while using far fewer training points (only \(n = 100\), while at least \(2{,}000\) training points are required by Moosavi-Dezfooli et al. (2017)). The very large fooling rates achieved with such a simple procedure (random generation in \(\mathcal{S}_{c}\)) confirm that the curvature is the governing factor that controls the robustness of classifiers to universal perturbations, as analyzed in Section 4. In fact, such high fooling rates cannot be achieved by only using the model of Section 3 (neglecting the curvature information), as illustrated in Fig. 7 (b). Specifically, by generating random perturbations from the subspace \(\mathcal{S}_{f}\) collecting the normal vectors \(\boldsymbol{r}(\boldsymbol{x})\) (which is the procedure suggested by Theorem 1 to compute universal perturbations, without taking into account second order information), the best universal perturbation achieves a fooling rate of \(65\%\), which is significantly worse than when the curvature is used to craft the perturbation. We further perform in Appendix C the same experiment on other architectures (VGG-16 and ResNet-18) to verify the consistency of the results across networks. It can be seen that, similarly to Fig. 7 (b), the proposed approach of generating universal perturbations through random sampling from the subspace \(\mathcal{S}_{c}\) achieves high fooling rates (comparable to the algorithm in Moosavi-Dezfooli et al. (2017), and significantly higher than by using \(\mathcal{S}_{f}\)).
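For completeness, here is a hedged sketch of how the two subspaces compared in Fig. 7 (b) could be constructed. It assumes a small input dimension so that the projected Hessians \(H_{\boldsymbol{z}_i}^{t}\) can be formed explicitly; at ImageNet scale one would instead rely on Hessian-vector products. The function names are ours, not the paper's.

```python
import torch

def subspace_Sf(normals, m):
    """S_f: span of the m largest left-singular vectors of the matrix
    stacking the (unit) normal vectors r(x_i) as columns, as described
    in the caption of Fig. 7."""
    N = torch.stack(normals, dim=1)            # (d, n)
    U, _, _ = torch.linalg.svd(N, full_matrices=False)
    return U[:, :m]                            # (d, m) orthonormal basis

def subspace_Sc(hessians, m):
    """S_c: span of the m leading eigenvectors of the average Hessian
    H_bar = n^{-1} sum_i H^t_{z_i}. The projected Hessians are assumed
    to be given explicitly, which is only feasible for small d."""
    H_bar = torch.stack(hessians).mean(dim=0)  # (d, d)
    eigvals, eigvecs = torch.linalg.eigh(H_bar)
    return eigvecs[:, -m:]                     # most positively curved directions
```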
Fig. 8 illustrates a universal perturbation for ImageNet, corresponding to the maximally curved shared direction (in other words, the maximum eigenvalue of \(\overline{H}\) computed using \(n = 200\) random samples).<sup>7</sup> The CaffeNet architecture is used, and Fig. 8 also shows sample perturbed images that fool the classifier. Just like the universal perturbations in Moosavi-Dezfooli et al. (2017), the perturbations are not very perceptible, and lead to misclassification of most unseen images in the validation set. For this example on ImageNet, the fooling rate of this perturbation is \(67.2\%\) on the validation set. This is significantly larger than the fooling rate of the perturbation computed using \(\mathcal{S}_{f}\) only (38%), but lower than that of the original algorithm (85.4%) proposed in Moosavi-Dezfooli et al. (2017). We hypothesize that this gap for ImageNet is partially due to the small number of samples, a restriction imposed by computational constraints.

![](images/8_0.jpg)
<center>Figure 8: Left column: Universal perturbation computed through random sampling from \(\mathcal{S}_{c}\). Second column to end: All images are (incorrectly) classified as "bubble". The CaffeNet architecture is used. Similarly to Moosavi-Dezfooli et al. (2017), the perturbation is constrained to have \(\ell_{2}\) norm of 2,000. </center>

The existence of this subspace \(\mathcal{S}_{c}\) (and the fact that universal perturbations are random vectors in \(\mathcal{S}_{c}\)) further explains the high diversity of universal perturbations. Fig. 9 illustrates different universal perturbations for CIFAR-10 computed by sampling random directions from \(\mathcal{S}_{c}\). The diversity of such perturbations justifies why re-training with perturbed images (as in Moosavi-Dezfooli et al. (2017)) does not significantly improve the robustness of such networks, as other directions in \(\mathcal{S}_{c}\) can still lead to universal perturbations, even if the network becomes robust to some directions.

![](images/8_1.jpg)
<center>Figure 9: Diversity of universal perturbations randomly sampled from the subspace \(\mathcal{S}_{c}\). The normalized inner product between two perturbations is less than 0.1. </center>

Finally, it is interesting to note that this subspace \(\mathcal{S}_{c}\) is likely to be shared not only across datapoints, but also (to some extent) across different networks. To support this claim, Fig. 10 shows the cosine of the principal angles between the subspaces \(\mathcal{S}_{c}^{\mathrm{LeNet}}\) and \(\mathcal{S}_{c}^{\mathrm{NiN}}\), computed for LeNet and NiN Lin et al. (2014) models. Note that the first principal angles between the two subspaces are very small, leading to shared directions between the two subspaces. A similar observation is made for networks trained on ImageNet in the supp. material. The sharing of \(\mathcal{S}_{c}\) across different networks explains the transferability of universal perturbations observed in Moosavi-Dezfooli et al. (2017).

## 6 DISCUSSION AND RELATED WORK

In this paper, we analyzed the robustness of classifiers to universal perturbations under two decision boundary models: locally flat and locally curved. We showed that the former are not robust to universal directions, provided the normal vectors in the vicinity of natural images are correlated. While this model explains the vulnerability of, e.g., linear classifiers, it discards the curvature information, which is essential to fully analyze the robustness of deep nets to universal perturbations. The latter, classifiers with curved decision boundaries, are instead not robust to universal perturbations, provided there exists a shared subspace along which the decision boundary is positively curved (for most directions). We empirically verify this assumption for deep nets. Our analysis hence explains the existence of universal perturbations, and further provides a purely geometric approach for computing such perturbations, in addition to explaining their properties, such as their diversity. Other authors have focused on the analysis of the robustness properties of SVM classifiers (e.g., Xu et al. (2009)) and on new approaches for constructing robust classifiers (based on robust optimization) Caramanis et al. (2012); Lanckriet et al. (2003). More recently, some have assessed the robustness of deep neural networks to different regimes such as adversarial perturbations Szegedy et al. (2014); Biggio et al. (2013), random noise Fawzi et al. (2016), and occlusions Sharif et al. (2016); Evtimov et al. (2017).
The robustness of classifiers to adversarial perturbations has been specifically studied in Szegedy et al. (2014); Goodfellow et al. (2015); Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017); Baluja & Fischer (2017), followed by works to improve the robustness Madry et al. (2017); Gu & Rigazio (2014); Papernot et al. (2015); Cisse et al. (2017), and attempts at explaining the phenomenon in Goodfellow et al. (2015); Fawzi et al. (2015); Tabacof & Valle (2016); Tanay & Griffin (2016). This paper, however, differs from these previous works, as we study universal (image-agnostic) perturbations that can fool every image in a dataset, as opposed to image-specific adversarial perturbations that are not universal across datapoints (as shown in Moosavi-Dezfooli et al. (2017)). Moreover, explanations that hinge on the output of a deep network being well approximated by a linear function of the inputs \(f(\pmb{x}) = W\pmb{x} + b\) are inconclusive, as the assumption is violated even for relatively small networks. We show here that it is precisely the large curvature of the decision boundary that causes the vulnerability to universal perturbations. Our bounds indeed show an increasing vulnerability with respect to the curvature of the decision boundary, and represent, to our knowledge, the first quantitative result showing tight links between robustness and curvature. In addition, we show empirically that the first-order approximation of the decision boundary is not sufficient to explain the high vulnerability to universal perturbations (Fig. 7 (b)). Recent works have further proposed new methods for computing universal perturbations Mopuri et al. (2017); Khrulkov & Oseledets (2017); instead, we focus here on an analysis of the phenomenon of vulnerability to universal perturbations, while also providing a constructive approach to compute universal perturbations leveraging our curvature analysis. Finally, it should be noted that recent works have studied properties of deep networks from a geometric perspective (such as their expressivity Poole et al. (2016); Montufar et al. (2014)); our focus in this paper is different, as we analyze robustness through the geometry of the decision boundary. Our analysis hence shows that, to construct classifiers that are robust to universal perturbations, it is key to suppress this subspace of shared positive directions, which can possibly be done through regularization of the objective function. This will be the subject of future work.

## A PROOF OF THEOREM 1

We start by recalling a result from Fawzi et al. (2016), which is based on Dasgupta & Gupta (2003).

Lemma 1. Let \(\mathbf{v}\) be a random vector uniformly drawn from the unit sphere \(\mathbb{S}^{d - 1}\), and \(\mathbf{P}_{m}\) be the projection matrix onto the first \(m\) coordinates. Then,

\[\mathbb{P}\left(\beta_{1}(\delta ,m)\frac{m}{d}\leq \| \mathbf{P}_{m}\pmb{v}\|_{2}^{2}\leq \beta_{2}(\delta ,m)\frac{m}{d}\right)\geq 1 - 2\delta , \quad (4)\]

with \(\beta_{1}(\delta ,m) = \max \left((1/e)\delta^{2/m},\ 1 - \sqrt{2(1 - \delta^{2/m})}\right)\) and \(\beta_{2}(\delta ,m) = 1 + 2\sqrt{\frac{\ln(1/\delta)}{m}} + \frac{2\ln(1/\delta)}{m}\).

We use the above lemma to prove our result, which we recall as follows:

Theorem 1. Let \(\xi \geq 0\), \(\delta \geq 0\). Let \(S\) be an \(m\)-dimensional subspace such that \(\| P_{S}\pmb{r}(\pmb{x})\|_{2}\geq 1 - \xi\) for almost all \(\pmb{x}\sim \mu\), where \(P_{S}\) is the projection operator on the subspace.
Assume moreover that \(\mathcal{L}_{s}(\pmb{x},\rho)\) holds for almost all \(\pmb{x}\sim \mu\), with \(\rho = \frac{\sqrt{em}}{\delta(1 - \xi)}\). Then, there exists a universal noise vector \(\pmb{v}\), such that \(\| \pmb{v}\|_{2}\leq \rho\) and \(\underset{\pmb{x}\sim \mu}{\mathbb{P}}\left(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\text{ or }\hat{k}(\pmb{x} - \pmb{v})\neq \hat{k}(\pmb{x})\right)\geq 1 - \delta\).

Proof. Define \(\mathbb{S}\) to be the unit sphere centered at 0 in the subspace \(S\). Let \(\rho = \frac{\sqrt{em}}{\delta(1 - \xi)}\), and denote by \(\rho \mathbb{S}\) the sphere scaled by \(\rho\). We have

\[\underset{\pmb{v}\sim \rho \mathbb{S}}{\mathbb{E}}\left(\underset{\pmb{x}\sim \mu}{\mathbb{P}}\left(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\ \mathrm{or}\ \hat{k}(\pmb{x} - \pmb{v})\neq \hat{k}(\pmb{x})\right)\right)\]
\[= \underset{\pmb{x}\sim \mu}{\mathbb{E}}\left(\underset{\pmb{v}\sim \rho \mathbb{S}}{\mathbb{P}}\left(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\ \mathrm{or}\ \hat{k}(\pmb{x} - \pmb{v})\neq \hat{k}(\pmb{x})\right)\right)\]
\[\geq \underset{\pmb{x}\sim \mu}{\mathbb{E}}\left(\underset{\pmb{v}\sim \rho \mathbb{S}}{\mathbb{P}}\left(|\pmb{r}(\pmb{x})^{T}\pmb{v}| - \| \pmb{r}(\pmb{x})\|_{2}^{2}\geq 0\right)\right)\]
\[= \underset{\pmb{x}\sim \mu}{\mathbb{E}}\left(\underset{\pmb{v}\sim \rho \mathbb{S}}{\mathbb{P}}\left(\left|\left(P_{S}\pmb{r}(\pmb{x}) + P_{S^{\perp}}\pmb{r}(\pmb{x})\right)^{T}\pmb{v}\right| - \| \pmb{r}(\pmb{x})\|_{2}^{2}\geq 0\right)\right),\]

where \(P_{S^{\perp}}\) denotes the projection operator on the orthogonal complement of \(S\). Observe that \((P_{S^{\perp}}\pmb{r}(\pmb{x}))^{T}\pmb{v} = 0\). Note moreover that \(\| \pmb{r}(\pmb{x})\|_{2}^{2} = 1\) by assumption. Hence, the above expression simplifies to

\[\underset{\pmb{x}\sim \mu}{\mathbb{E}}\left(\underset{\pmb{v}\sim \rho \mathbb{S}}{\mathbb{P}}\left(\left|\left(P_{S}\pmb{r}(\pmb{x})\right)^{T}\pmb{v}\right| - 1\geq 0\right)\right) = \underset{\pmb{x}\sim \mu}{\mathbb{E}}\left(\underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\left|\left(P_{S}\pmb{r}(\pmb{x})\right)^{T}\pmb{v}\right|\geq \rho^{-1}\right)\right)\geq \underset{\pmb{x}\sim \mu}{\mathbb{E}}\left(\underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\left|\left(P_{S}\pmb{r}(\pmb{x})\right)^{T}\pmb{v}\right|\geq \frac{\delta}{\sqrt{em}}\right)\right),\]

where we have used the assumption on the projection of \(\pmb{r}(\pmb{x})\) onto the subspace \(S\). Hence, it follows from Lemma 1 that

\[\underset{\pmb{v}\sim \rho \mathbb{S}}{\mathbb{E}}\left(\underset{\pmb{x}\sim \mu}{\mathbb{P}}\left(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\ \mathrm{or}\ \hat{k}(\pmb{x} - \pmb{v})\neq \hat{k}(\pmb{x})\right)\right)\geq 1 - \delta .\]

Hence, there exists a universal vector \(\pmb{v}\) of \(\ell_{2}\) norm \(\rho\) such that \(\underset{\pmb{x}\sim \mu}{\mathbb{P}}\left(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\ \mathrm{or}\ \hat{k}(\pmb{x} - \pmb{v})\neq \hat{k}(\pmb{x})\right)\geq 1 - \delta\). \(\square\)

## B PROOF OF THEOREM 2

Theorem 2. Let \(\kappa >0\), \(\delta >0\) and \(m\in \mathbb{N}\). Assume that the quadratic decision boundary model \(\mathcal{Q}(\pmb{x},\rho)\) holds for almost all \(\pmb{x}\sim \mu\), with \(\rho = \sqrt{\frac{2\log(2/\delta)}{m}}\,\kappa^{-1} + \kappa^{-1/2}\).
Let \(S\) be an \(m\)-dimensional subspace such that

\[\underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\forall \pmb{u}\in \mathbb{R}^{2},\ \alpha_{x}^{-1}\pmb{u}^{T}H_{\pmb{z}}^{\pmb{r}(\pmb{x}),\pmb{v}}\pmb{u}\geq \kappa \| \pmb{u}\|_{2}^{2}\right)\geq 1 - \beta \ \mathrm{for~almost~all~}\pmb{x}\sim \mu ,\]

where \(H_{\pmb{z}}^{\pmb{r}(\pmb{x}),\pmb{v}} = \Pi^{T}H_{\pmb{z}}\Pi\) with \(\Pi\) an orthonormal basis of \(\operatorname{span}(\pmb{r}(\pmb{x}),\pmb{v})\), and \(\mathbb{S}\) denotes the unit sphere in \(S\). Then, there is a universal perturbation vector \(\pmb{v}\) such that \(\| \pmb{v}\|_{2}\leq \rho\) and \(\underset{\pmb{x}\sim \mu}{\mathbb{P}}\left(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\right)\geq 1 - \delta - \beta\).

Proof. Let \(\pmb{x}\sim \mu\). We have

\[\underset{\pmb{v}\sim \rho \mathbb{S}}{\mathbb{E}}\left(\underset{\pmb{x}\sim \mu}{\mathbb{P}}\left(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\right)\right) = \underset{\pmb{x}\sim \mu}{\mathbb{E}}\left(\underset{\pmb{v}\sim \rho \mathbb{S}}{\mathbb{P}}\left(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\right)\right)\]
\[\geq \underset{\pmb{x}\sim \mu}{\mathbb{E}}\left(\underset{\pmb{v}\sim \rho \mathbb{S}}{\mathbb{P}}\left(\alpha_{x}^{-1}(\pmb{v} - \pmb{r})^{T}H_{\pmb{z}}(\pmb{v} - \pmb{r}) + \pmb{r}^{T}(\pmb{v} - \pmb{r})\geq 0\right)\right)\]
\[= \underset{\pmb{x}\sim \mu}{\mathbb{E}}\left(\underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\alpha_{x}^{-1}(\rho \pmb{v} - \pmb{r})^{T}H_{\pmb{z}}(\rho \pmb{v} - \pmb{r}) + \pmb{r}^{T}(\rho \pmb{v} - \pmb{r})\geq 0\right)\right).\]

Using the assumptions of the theorem, we have

\[\begin{array}{rl} & \underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\alpha_{x}^{-1}(\rho \pmb{v} - \pmb{r})^{T}H_{\pmb{z}}(\rho \pmb{v} - \pmb{r}) + \pmb{r}^{T}(\rho \pmb{v} - \pmb{r})\leq 0\right)\\ & \leq \underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\kappa \| \rho \pmb{v} - \pmb{r}\|_{2}^{2} + \pmb{r}^{T}(\rho \pmb{v} - \pmb{r})\leq 0\right) + \beta\\ & \leq \underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\rho (1 - 2\kappa)\pmb{v}^{T}\pmb{r} + \kappa \rho^{2} + (\kappa -1)\leq 0\right) + \beta\\ & \leq \underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\rho (1 - 2\kappa)\pmb{v}^{T}\pmb{r}\leq -\epsilon\right) + \underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\kappa \rho^{2} + (\kappa -1)\leq \epsilon\right) + \beta , \end{array} \quad (5)\]

for \(\epsilon >0\). The goal is therefore to find \(\rho\) such that \(\kappa \rho^{2} + (\kappa -1)\geq \epsilon\), together with \(\underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\rho (1 - 2\kappa)\pmb{v}^{T}\pmb{r}\leq -\epsilon\right)\leq \delta\). Let \(\rho^{2} = \frac{\epsilon + 1}{\kappa}\). Using the concentration of measure on the sphere Matousek (2002), we have

\[\underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\pmb{v}^{T}\pmb{r}\leq \frac{-\epsilon}{\rho(1 - 2\kappa)}\right)\leq 2\exp \left(-\frac{m\epsilon^{2}}{2\rho^{2}(1 - 2\kappa)^{2}}\right).\]

To bound the above probability by \(\delta\), we set \(\epsilon = C\frac{\rho}{\sqrt{m}}\), where \(C = \sqrt{2\log(2/\delta)}\).
We therefore choose \(\rho\) such that

\[\rho^{2} = \kappa^{-1}\left(C\rho m^{-1/2} + 1\right).\]

The solution of this second-order equation gives

\[\rho = \frac{C\kappa^{-1}m^{-1/2} + \sqrt{\kappa^{-2}C^{2}m^{-1} + 4\kappa^{-1}}}{2}\leq C\kappa^{-1}m^{-1/2} + \kappa^{-1/2}.\]

Hence, for this choice of \(\rho\), we have by construction

\[\underset{\pmb{v}\sim \mathbb{S}}{\mathbb{P}}\left(\alpha_{x}^{-1}(\rho \pmb{v} - \pmb{r})^{T}H_{\pmb{z}}(\rho \pmb{v} - \pmb{r}) + \pmb{r}^{T}(\rho \pmb{v} - \pmb{r})\leq 0\right)\leq \delta +\beta .\]

We therefore conclude that \(\underset{\pmb{v}\sim \rho \mathbb{S}}{\mathbb{E}}\left(\underset{\pmb{x}\sim \mu}{\mathbb{P}}\left(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\right)\right)\geq 1 - \delta - \beta\). This shows the existence of a universal noise vector \(\pmb{v}\in \rho \mathbb{S}\) such that \(\hat{k}(\pmb{x} + \pmb{v})\neq \hat{k}(\pmb{x})\) with probability larger than \(1 - \delta - \beta\). \(\square\)

## C COMPLEMENTARY EXPERIMENTAL RESULTS

### C.1 EXPERIMENT IN FIG. 7 (B)

We here perform an experiment similar to that of Fig. 7 (b) on the VGG-16 and ResNet-18 architectures. It can be seen that, similarly to Fig. 7 (b), the proposed approach of generating universal perturbations through random sampling from the subspace \(\mathcal{S}_{c}\) achieves high fooling rates (comparable to the algorithm in Moosavi-Dezfooli et al. (2017), and significantly higher than by using \(\mathcal{S}_{f}\)).

### C.2 TRANSFERABILITY OF UNIVERSAL PERTURBATIONS

Fig. 13 shows examples of normal cross-sections of the decision boundary across a fixed direction in \(\mathcal{S}_{c}\), for the VGG-16 architecture (but where \(\mathcal{S}_{c}\) is computed for CaffeNet). Note that the decision boundary across this fixed direction is positively curved for both networks, despite this subspace being computed for a distinct network. The sharing of \(\mathcal{S}_{c}\) across different nets explains the transferability of universal perturbations observed in Moosavi-Dezfooli et al. (2017).
accept
Accept (Poster)
6
ICLR_2019_paper_0007
iclr
2,019
# INSTAGAN: INSTANCE-AWARE IMAGE-TO-IMAGE TRANSLATION

Sangwoo Mo\*, Minsu Cho†, Jinwoo Shin\*†
\*Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
†Pohang University of Science and Technology (POSTECH), Pohang, Korea
†Altrics, Seoul, Korea
\*{swmo, jinwoos}@kaist.ac.kr, †mscho@postech.ac.kr

## ABSTRACT

Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs). However, previous methods often fail in challenging cases, in particular, when an image has multiple target instances and a translation task involves significant changes in shape, e.g., translating pants to skirts in fashion images. To tackle these issues, we propose a novel method, coined instance-aware GAN (InstaGAN), that incorporates instance information (e.g., object segmentation masks) and improves multi-instance transfiguration. The proposed method translates both an image and the corresponding set of instance attributes while maintaining the permutation invariance property of the instances. To this end, we introduce a context preserving loss that encourages the network to learn the identity function outside of target instances. We also propose a sequential mini-batch inference/training technique that handles multiple instances with a limited GPU memory and enhances the network to generalize better for multiple instances. Our comparative evaluation demonstrates the effectiveness of the proposed method on different image datasets, in particular, in the aforementioned challenging cases. Code and results are available at https://github.com/sangwoomo/instagan.

## 1 INTRODUCTION

Cross-domain generation arises in many machine learning tasks, including neural machine translation (Artetxe et al., 2017; Lample et al., 2017), image synthesis (Reed et al., 2016; Zhu et al., 2016), text style transfer (Shen et al., 2017), and video generation (Bansal et al., 2018; Wang et al., 2018a; Chan et al., 2018). In particular, unpaired (or unsupervised) image-to-image translation has achieved impressive progress based on variants of generative adversarial networks (GANs) (Zhu et al., 2017; Liu et al., 2017; Choi et al., 2017; Almahairi et al., 2018; Huang et al., 2018; Lee et al., 2018), and has also drawn considerable attention due to its practical applications including colorization (Zhang et al., 2016), super-resolution (Ledig et al., 2017), semantic manipulation (Wang et al., 2018b), and domain adaptation (Bousmalis et al., 2017; Shrivastava et al., 2017; Hoffman et al., 2017). Previous methods on this line of research, however, often fail on challenging tasks, in particular, when the translation task involves significant changes in the shape of instances (Zhu et al., 2017) or the images to translate contain multiple target instances (Gokaslan et al., 2018). Our goal is to extend image-to-image translation towards such challenging tasks, which can strengthen its applicability up to the next level, e.g., changing pants to skirts in fashion images for a customer to decide which one is better to buy. To this end, we propose a novel method that incorporates the instance information of multiple target objects in the framework of generative adversarial networks (GANs); hence, we call it instance-aware GAN (InstaGAN).
In this work, we use object segmentation masks for instance information, which may be a good representation for instance shapes, as they contain object boundaries while ignoring other details such as color. Using this information, our method shows impressive results for multi-instance transfiguration tasks, as shown in Figure 1.

![](images/1_0.jpg)
<center>Figure 1: Translation results of the prior work (CycleGAN, Zhu et al. (2017)), and our proposed method, InstaGAN. Our method shows better results for multi-instance transfiguration problems. </center>

Our main contribution is three-fold: an instance-augmented neural architecture, a context preserving loss, and a sequential mini-batch inference/training technique. First, we propose a neural network architecture that translates both an image and the corresponding set of instance attributes. Our architecture can translate an arbitrary number of instance attributes conditioned on the input, and is designed to be permutation-invariant to the order of instances. Second, we propose a context preserving loss that encourages the network to focus on target instances in translation and learn an identity function outside of them. Namely, it aims at preserving the background context while transforming the target instances. Finally, we propose a sequential mini-batch inference/training technique, i.e., translating the mini-batches of instance attributes sequentially, instead of the entire set at once. It allows handling a large number of instance attributes with a limited GPU memory, and thus enhances the network to generalize better for images with many instances. Furthermore, it improves the translation quality of images with even a few instances, because it acts as data augmentation during training by producing multiple intermediate samples. All the aforementioned contributions are dedicated to how to incorporate the instance information (e.g., segmentation masks) for image-to-image translation. However, we believe that our approach is applicable to numerous other cross-domain generation tasks where set-structured side information is available. To the best of our knowledge, we are the first to report image-to-image translation results for multi-instance transfiguration tasks. A few recent methods (Kim et al., 2017; Liu et al., 2017; Gokaslan et al., 2018) show some transfiguration results, but only for images with a single instance, often in a clear background. Unlike the previous results in a simple setting, our focus is on the harmony of instances naturally rendered with the background. On the other hand, CycleGAN (Zhu et al., 2017) shows some results for multi-instance cases, but reports only a limited performance for transfiguration tasks. At a high level, the significance of our work is also in discovering that instance information is effective for shape-transforming image-to-image translation, which we think would be influential to other related research in the future. Mask contrast-GAN (Liang et al., 2017) and Attention-GAN (Mejiati et al., 2018) use segmentation masks or predicted attentions, but only to attach the background to the (translated) cropped instances. They do not allow transforming the shapes of the instances. On the contrary, our method learns how to preserve the background by optimizing the context preserving loss, thus facilitating shape transformation.
## 2 INSTAGAN: INSTANCE-AWARE IMAGE-TO-IMAGE TRANSLATION

Given two image domains \(\mathcal{X}\) and \(\mathcal{Y}\), the problem of image-to-image translation aims to learn mappings across different image domains, \(G_{\mathrm{XY}}:\mathcal{X}\to \mathcal{Y}\) and/or \(G_{\mathrm{YX}}:\mathcal{Y}\to \mathcal{X}\), i.e., transforming target scene elements while preserving the original contexts. This can also be formulated as a conditional generative modeling task where we estimate the conditionals \(p(y|x)\) and/or \(p(x|y)\). The goal of the unsupervised translation we tackle is to recover such mappings only using unpaired samples from the marginal distributions \(p_{\mathrm{data}}(x)\) and \(p_{\mathrm{data}}(y)\) of the two image domains. The main and unique idea of our approach is to incorporate additional instance information, i.e., augment a space of sets of instance attributes \(\mathcal{A}\) to the original image space \(\mathcal{X}\), to improve the image-to-image translation. The set of instance attributes \(\pmb{a}\in \mathcal{A}\) comprises all individual attributes of \(N\) target instances: \(\pmb{a} = \{a_{i}\}_{i = 1}^{N}\). In this work, we use an instance segmentation mask only, but we remark that any useful type of instance information can be incorporated for the attributes. Our approach then can be described as learning joint mappings between the attribute-augmented spaces \(\mathcal{X}\times \mathcal{A}\) and \(\mathcal{Y}\times \mathcal{B}\). This leads to disentangling different instances in the image and allows the generator to perform an accurate and detailed translation. We learn our attribute-augmented mapping in the framework of generative adversarial networks (GANs) (Goodfellow et al., 2014); hence, we call it instance-aware GAN (InstaGAN). We present details of our approach in the following subsections.

### 2.1 INSTAGAN ARCHITECTURE

Recent GAN-based methods (Zhu et al., 2017; Liu et al., 2017) have achieved impressive performance in unsupervised translation by jointly training two coupled mappings \(G_{\mathrm{XY}}\) and \(G_{\mathrm{YX}}\) with
We train two coupled generators \(G_{\mathrm{XY}}: \mathcal{X} \times \mathcal{A} \to \mathcal{Y} \times \mathcal{B}\) and \(G_{\mathrm{YX}}: \mathcal{Y} \times \mathcal{B} \to \mathcal{X} \times \mathcal{A}\) , where \(G_{\mathrm{XY}}\) translates the original data \((x, a)\) to the target domain data \((y', b')\) (and vice versa for \(G_{\mathrm{YX}}\) ), with adversarial discriminators \(D_{\mathrm{X}}: \mathcal{X} \times \mathcal{A} \to \{\mathrm{X}', \mathrm{not} \mathrm{X} \}\) and \(D_{\mathrm{Y}}: \mathcal{Y} \times \mathcal{B} \to \{\mathrm{Y}', \mathrm{not} \mathrm{Y} \}\) , where \(D_{\mathrm{X}}\) determines if the data (original \((x, a)\) or translated \((x', a')\) ) is in the target domain \(\mathcal{X} \times \mathcal{A}\) or not (and vice versa for \(D_{\mathrm{Y}}\) ). Our generator \(G\) encodes both \(x\) and \(a\) , and translates them into \(y'\) and \(b'\) . Notably, the order of the instance attributes in the set \(a\) should not affect the translated image \(y'\) , and each instance attribute in the set \(a\) should be translated to the corresponding one in \(b'\) . In other words, \(y'\) is permutation- invariant with respect to the instances in \(a\) , and \(b'\) is permutation- equivariant with respect to them. These properties can be implemented by introducing proper operators in feature encoding (Zaheer et al., 2017). We first extract individual features from image and attributes using image feature extractor \(f_{\mathrm{GA}}\) and attribute feature extractor \(f_{\mathrm{OA}}\) , respectively. The attribute features individually extracted using \(f_{\mathrm{OA}}\) are then aggregated into a permutation- invariant set feature via summation: \(\sum_{i = 1}^{N} f_{\mathrm{OA}}(a_i)\) . As illustrated in Figure 2b, we concatenate some of image and attribute features with the set feature, and feed them to image and attribute generators. Formally, the image representation \(h_{\mathrm{GA}}\) and the \(n\) - th attribute representation \(h_{\mathrm{OA}}^n\) in generator \(G\) can be formulated as: \[h_{\mathrm{GA}}(x,a) = \left[f_{\mathrm{GA}}(x);\sum_{i = 1}^{N}f_{\mathrm{GA}}(a_i)\right],\quad h_{\mathrm{OA}}^n (x,a) = \left[f_{\mathrm{GA}}(x);\sum_{i = 1}^{N}f_{\mathrm{OA}}(a_i);f_{\mathrm{GA}}(a_n)\right], \quad (1)\] where each attribute encoding \(h_{\mathrm{OA}}^n\) process features of all attributes as a contextual feature. Finally, \(h_{\mathrm{GA}}\) is fed to the image generator \(g_{\mathrm{GA}}\) , and \(h_{\mathrm{OA}}^n (n = 1, \ldots , N)\) are to the attribute generator \(g_{\mathrm{OA}}\) . On the other hand, our discriminator \(D\) encodes both \(x\) and \(a\) (or \(x'\) and \(a'\) ), and determines whether the pair is from the domain or not. Here, the order of the instance attributes in the set \(a\) should not affect the output. In a similar manner above, our representation in discriminator \(D\) , which is permutation- invariant to the instances, is formulated as: \[h_{\mathrm{DX}}(x,a) = \left[f_{\mathrm{DX}}(x);\sum_{i = 1}^{N}f_{\mathrm{DA}}(a_i)\right], \quad (2)\] which is fed to an adversarial discriminator \(g_{\mathrm{DX}}\) . We emphasize that the joint encoding of both image \(x\) and instance attributes \(a\) for each neural component is crucial because it allows the network to learn the relation between \(x\) and \(a\) . For <--- Page Split ---> example, if two separate encodings and discriminators are used for \(x\) and \(\pmb{a}\) , the generator may be misled to produce image and instance masks that do not match with each other. 
### 2.2 TRAINING LOSS

Recall that an image-to-image translation model aims to translate the domain while keeping the original contexts (e.g., the background, or instances' domain-independent characteristics such as the looking direction). To this end, we consider both the domain loss, which encourages the generated outputs to follow the style of the target domain, and the content loss, which encourages the outputs to keep the original contents. Following our baseline model, CycleGAN (Zhu et al., 2017), we use the GAN loss as the domain loss, and consider both the cycle-consistency loss (Kim et al., 2017; Yi et al., 2017) and the identity mapping loss (Taigman et al., 2016) as the content losses. In addition, we also propose a new content loss, coined the context preserving loss, using the original and predicted segmentation information. In what follows, we formally define our training loss in detail. For simplicity, we denote our loss function as a function of a single training sample \((x,\pmb {a})\in \mathcal{X}\times \mathcal{A}\) and \((y,\pmb {b})\in \mathcal{Y}\times \mathcal{B}\), while one has to minimize its empirical mean in training.

The GAN loss was originally proposed by Goodfellow et al. (2014) for generative modeling via alternately training the generator \(G\) and the discriminator \(D\). Here, \(D\) determines whether the data is a real one or a fake/generated/translated one made by \(G\). There are numerous variants of the GAN loss (Nowozin et al., 2016; Arjovsky et al., 2017; Li et al., 2017; Mroueh et al., 2017), and we follow the LSGAN scheme (Mao et al., 2017), which is empirically known to show stably good performance:
\[\mathcal{L}_{\mathrm{LSGAN}} = (D_{X}(x,\pmb {a}) - 1)^{2} + D_{X}(G_{YX}(y,\pmb {b}))^{2} + (D_{Y}(y,\pmb {b}) - 1)^{2} + D_{Y}(G_{XY}(x,\pmb {a}))^{2}. \quad (3)\]

For keeping the original content, the cycle-consistency loss \(\mathcal{L}_{\mathrm{cyc}}\) and the identity mapping loss \(\mathcal{L}_{\mathrm{idt}}\) enforce samples not to lose the original information after translating twice and once, respectively:
\[\mathcal{L}_{\mathrm{cyc}} = \| G_{YX}(G_{XY}(x,\pmb {a})) - (x,\pmb {a})\|_{1} + \| G_{XY}(G_{YX}(y,\pmb {b})) - (y,\pmb {b})\|_{1}, \quad (4)\]
\[\mathcal{L}_{\mathrm{idt}} = \| G_{XY}(y,\pmb {b}) - (y,\pmb {b})\|_{1} + \| G_{YX}(x,\pmb {a}) - (x,\pmb {a})\|_{1}. \quad (5)\]
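As a concrete reference, the losses of Eqs. (3)-(5) can be written in a few lines. This is a hedged sketch, not the official implementation: it assumes a generator returns an (image, masks) pair, a discriminator returns a score map, and the masks are stacked into a single tensor.

```python
import torch.nn.functional as F

def lsgan_loss(D, real, fake):
    # One domain's terms of Eq. (3): push D toward 1 on real pairs, 0 on translated pairs.
    return (D(*real) - 1).pow(2).mean() + D(*fake).pow(2).mean()

def pair_l1(p, q):
    # L1 distance between (image, masks) pairs.
    return F.l1_loss(p[0], q[0]) + F.l1_loss(p[1], q[1])

def cycle_loss(G_XY, G_YX, xa, yb):
    # Eq. (4): translating twice should recover the original pair.
    return pair_l1(G_YX(*G_XY(*xa)), xa) + pair_l1(G_XY(*G_YX(*yb)), yb)

def identity_loss(G_XY, G_YX, xa, yb):
    # Eq. (5): translating a sample already in its target domain should be a no-op.
    return pair_l1(G_XY(*yb), yb) + pair_l1(G_YX(*xa), xa)
```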
Finally, our newly proposed context preserving loss \(\mathcal{L}_{\mathrm{ctx}}\) enforces the model to translate instances only, while keeping the region outside of them, i.e., the background. Formally, it is a pixel-wise weighted \(\ell_{1}\)-loss, where the weight is 1 for background and 0 for instances. Note that the backgrounds of the two domains become different in transfiguration-type translation involving significant shape changes. Hence, we assign a non-zero weight only if a pixel is background in both the original and translated samples. Namely, for the original samples \((x,\pmb {a})\), \((y,\pmb {b})\) and the translated ones \((y^{\prime},\pmb {b}^{\prime})\), \((x^{\prime},\pmb {a}^{\prime})\), we let the weights \(w(\pmb {a},\pmb {b}^{\prime})\), \(w(\pmb {b},\pmb {a}^{\prime})\) be one minus the element-wise minimum of the binary-represented instance masks, and we propose
\[\mathcal{L}_{\mathrm{ctx}} = \| w(\pmb {a},\pmb {b}^{\prime})\odot (x - y^{\prime})\|_{1} + \| w(\pmb {b},\pmb {a}^{\prime})\odot (y - x^{\prime})\|_{1}, \quad (6)\]
where \(\odot\) is the element-wise product. In our experiments, we found that the context preserving loss not only keeps the background better, but also improves the quality of the generated instance segmentations. Finally, the total loss of InstaGAN is
\[\mathcal{L}_{\mathrm{InstaGAN}} = \underbrace{\mathcal{L}_{\mathrm{LSGAN}}}_{\text{domain loss}} + \underbrace{\lambda_{\mathrm{cyc}}\mathcal{L}_{\mathrm{cyc}} + \lambda_{\mathrm{idt}}\mathcal{L}_{\mathrm{idt}} + \lambda_{\mathrm{ctx}}\mathcal{L}_{\mathrm{ctx}}}_{\text{content loss}}, \quad (7)\]
where \(\lambda_{\mathrm{cyc}}, \lambda_{\mathrm{idt}}, \lambda_{\mathrm{ctx}} > 0\) are hyper-parameters balancing the losses.
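A short sketch of Eqs. (6)-(7) follows. It is our reading rather than the official code: assuming instance pixels are coded as 1, the weight below is non-zero only where a pixel is background in both the original and translated mask sets (for binary masks this equals one minus the element-wise maximum of the merged masks); the helper names and the \(\lambda = 10\) defaults (from Appendix B) are the only concrete values assumed.

```python
def ctx_weight(a, b_t):
    # a, b_t: original / translated instance masks, (B, N, H, W), values in {0, 1}.
    bg_a = 1 - a.amax(dim=1, keepdim=True)    # 1 where no original instance
    bg_b = 1 - b_t.amax(dim=1, keepdim=True)  # 1 where no translated instance
    return bg_a * bg_b                        # background in *both*, else 0

def context_preserving_loss(x, y_t, a, b_t):
    # One direction of Eq. (6): weighted L1 between the input and translated image.
    return (ctx_weight(a, b_t) * (x - y_t).abs()).mean()

def instagan_loss(lsgan, cyc, idt, ctx, lam_cyc=10.0, lam_idt=10.0, lam_ctx=10.0):
    # Eq. (7); the lambda values of 10 are those reported in Appendix B.
    return lsgan + lam_cyc * cyc + lam_idt * idt + lam_ctx * ctx
```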
### 2.3 SEQUENTIAL MINI-BATCH TRANSLATION

While the proposed architecture can in principle translate an arbitrary number of instances, the required GPU memory increases linearly with the number of instances. For example, in our experiments, a machine was able to forward only a small number (say, 2) of instance attributes during training, and thus the learned model suffered from poor generalization to images with a larger number of instances. To address this issue, we propose a new inference/training technique, which allows training with an arbitrary number of instances without increasing the GPU memory. We first describe the sequential inference scheme, which translates subsets of instances sequentially, and then describe the corresponding mini-batch training technique.

<--- Page Split --->

![](images/4_0.jpg) <center>Figure 3: Overview of the sequential mini-batch training with instance subsets (mini-batches) of sizes 1, 2, and 1, as shown on the top right. The content loss is applied to the intermediate samples of the current mini-batch, and the GAN loss is applied to the samples of aggregated mini-batches. We detach at every iteration in training: solid lines indicate backpropagated paths, and dashed lines indicate detached paths. See text for details. </center>

Given an input \((x,\pmb{a})\), we first divide the set of instance masks \(\pmb{a}\) into mini-batches \(\pmb{a}_{1},\ldots,\pmb{a}_{M}\), i.e., \(\pmb{a} = \bigcup_{i}\pmb{a}_{i}\) and \(\pmb{a}_{i}\cap \pmb{a}_{j} = \emptyset\) for \(i\neq j\). Then, at the \(m\)-th iteration for \(m = 1,2,\ldots,M\), we translate the image-mask pair \((x_{m},\pmb{a}_{m})\), where \(x_{m}\) is the translated image \(y_{m-1}^{\prime}\) from the previous iteration, and \(x_{1} = x\). In this sequential scheme, at each iteration, the generator \(G\) outputs an intermediate translated image \(y_{m}^{\prime}\), which accumulates all mini-batch translations up to the current iteration, and a translated mini-batch of instance masks \(\pmb{b}_{m}^{\prime}\):
\[(y_{m}^{\prime},\pmb{b}_{m}^{\prime}) = G(x_{m},\pmb{a}_{m}) = G(y_{m-1}^{\prime},\pmb{a}_{m}). \quad (8)\]
In order to align the translated image with the mini-batches of instance masks, we aggregate all the translated mini-batches and produce a translated sample:
\[(y_{m}^{\prime},\pmb{b}_{1:m}^{\prime}) = (y_{m}^{\prime},\cup_{i = 1}^{m}\pmb{b}_{i}^{\prime}). \quad (9)\]
The final output of the proposed sequential inference scheme is \((y_{M}^{\prime},\pmb{b}_{1:M}^{\prime})\).

We also propose the corresponding sequential training algorithm, as illustrated in Figure 3. We apply the content loss (4)-(6) to the intermediate samples \((y_{m}^{\prime},\pmb{b}_{m}^{\prime})\) of the current mini-batch \(\pmb{a}_{m}\), as it is just a function of the inputs and outputs of the generator \(G\). In contrast, we apply the GAN loss (3) to the samples of aggregated mini-batches \((y_{m}^{\prime},\pmb{b}_{1:m}^{\prime})\), because the network fails to align images and masks when using only a partial subset of instance masks. We used the real/original samples \((x,\pmb{a})\) with the full set of instance masks only. Formally, the sequential version of the training loss of InstaGAN is
\[\mathcal{L}_{\mathrm{InstaGAN\text{-}SM}} = \sum_{m = 1}^{M}\mathcal{L}_{\mathrm{LSGAN}}((x,\pmb{a}),(y_{m}^{\prime},\pmb{b}_{1:m}^{\prime})) + \mathcal{L}_{\mathrm{content}}((x_{m},\pmb{a}_{m}),(y_{m}^{\prime},\pmb{b}_{m}^{\prime})), \quad (10)\]
where \(\mathcal{L}_{\mathrm{content}} = \lambda_{\mathrm{cyc}}\mathcal{L}_{\mathrm{cyc}} + \lambda_{\mathrm{idt}}\mathcal{L}_{\mathrm{idt}} + \lambda_{\mathrm{ctx}}\mathcal{L}_{\mathrm{ctx}}\).

We detach the computation graph at every \(m\)-th iteration of training, i.e., we backpropagate only within the current mini-batch \(\pmb{a}_{m}\), so that only a fixed amount of GPU memory is required, regardless of the number of training instances. Hence, the sequential training allows for training with samples containing many instances, and thus improves the generalization performance. Furthermore, it also improves the translation of images even with a few instances, compared to the one-step approach, due to its data augmentation effect using the intermediate samples \((x_{m}, \pmb{a}_{m})\). In our experiments, we divided the instances into mini-batches \(\pmb{a}_{1}, \ldots, \pmb{a}_{M}\) in decreasing order of the spatial sizes of the instances. Interestingly, the decreasing order showed better performance than a random order. We believe that this is because small instances tend to be occluded by other instances in images, thus often losing their intrinsic shape information.

<--- Page Split --->

![](images/5_0.jpg) <center>Figure 4: Translation results on the clothing co-parsing (CCP) (Yang et al., 2014) dataset. </center>

![](images/5_1.jpg) <center>Figure 5: Translation results on the multi-human parsing (MHP) (Zhao et al., 2018) dataset. </center>

![](images/5_2.jpg) <center>Figure 6: Translation results on the COCO (Lin et al., 2014) dataset. </center>
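In code, the sequential scheme of Eqs. (8)-(9) is a short loop. The sketch below is ours, under the assumption that the generator returns the intermediate image together with the translated masks of the current mini-batch; detaching the intermediate image between iterations is what keeps the memory footprint constant during training.

```python
import torch

def sequential_translate(G, x, mask_batches, detach=False):
    """Eqs. (8)-(9): translate mini-batches of instance masks one at a time,
    feeding the intermediate image back into the generator.

    mask_batches: a partition a_1, ..., a_M of the instance masks, e.g. ordered
    by decreasing instance size as in the paper; each a_m has shape (B, n_m, H, W).
    """
    y, translated = x, []
    for a_m in mask_batches:
        y, b_m = G(y, a_m)      # Eq. (8): translate the current subset
        translated.append(b_m)  # collect b'_m
        if detach:              # training: cut the graph so memory stays fixed
            y = y.detach()
    return y, torch.cat(translated, dim=1)  # Eq. (9): (y'_M, b'_{1:M})
```

At training time, one would additionally evaluate the content loss on each intermediate pair \((y_{m}^{\prime},\pmb{b}_{m}^{\prime})\) and the GAN loss on the aggregated pair \((y_{m}^{\prime},\pmb{b}_{1:m}^{\prime})\) inside the loop, as in Eq. (10).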
## 3 EXPERIMENTAL RESULTS

### 3.1 IMAGE-TO-IMAGE TRANSLATION RESULTS

We first qualitatively evaluate our method on various datasets. We compare our model, InstaGAN, with the baseline model, CycleGAN (Zhu et al., 2017). For fair comparison, we doubled the number of parameters of CycleGAN, as InstaGAN uses two networks, for the image and the masks, respectively. We sample two classes from various datasets, including the clothing co-parsing (CCP) (Yang et al., 2014), multi-human parsing (MHP) (Zhao et al., 2018), and MS COCO (Lin et al., 2014) datasets, and use them as the two domains for translation. In visualizations, we merge all instance masks into one for the sake of compactness. See Appendix B for the detailed settings of our experiments.

<--- Page Split --->

![](images/6_0.jpg) <center>Figure 7: Results of InstaGAN with varying input masks. </center>

![](images/6_1.jpg) <center>Figure 8: Translation results on the CCP dataset, using predicted masks for inference. </center>

The translation results for the three datasets are presented in Figures 4, 5, and 6, respectively. While CycleGAN mostly fails, our method generates reasonable shapes of the target instances and keeps the original contexts by focusing on the instances via the context preserving loss. For example, see the results on sheep \(\leftrightarrow\) giraffe in Figure 6. CycleGAN often generates sheep-like instances but loses the original background. InstaGAN not only generates better sheep or giraffes, but also preserves the layout of the original instances, i.e., the looking directions (left, right, front) of the sheep and giraffes are consistent after translation. More experimental results are presented in Appendix E. Code and results are available at https://github.com/sangwoomo/instagan.

On the other hand, our method can control which instances to translate by conditioning the input, as shown in Figure 7. Such control is impossible under CycleGAN. We also note that we focus on complex (multi-instance transfiguration) tasks to emphasize the advantages of our method. Nevertheless, our method is also attractive even for simple tasks (e.g., horse \(\leftrightarrow\) zebra), as it reduces false positives/negatives via the context preserving loss and enables control over the translation. We finally emphasize that our method showed good results even when we used predicted segmentations for inference, as shown in Figure 8, which can reduce the cost of collecting mask labels in practice.

Finally, we also quantitatively evaluate the translation performance of our method. We measure the classification score, i.e., the ratio of images predicted as the target class by a pretrained classifier. Specifically, we fine-tune the final layers of the ImageNet (Deng et al., 2009) pretrained VGG-16 (Simonyan & Zisserman, 2014) network as a binary classifier for each domain. Table 1 and Table 2 in Appendix D show the classification scores for the CCP and COCO datasets, respectively. Our method outperforms CycleGAN in all classification experiments, e.g., ours achieves \(23.2\%\) accuracy for the pants \(\rightarrow\) shorts task, while CycleGAN obtains only \(8.5\%\).
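For concreteness, the classifier behind this metric can be set up as below. This is our sketch of the described procedure (freeze the convolutional features of an ImageNet-pretrained VGG-16 and retrain the final layers as a binary domain classifier), not the authors' exact code; `pretrained=True` is the older torchvision interface.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_domain_classifier():
    net = models.vgg16(pretrained=True)
    for p in net.features.parameters():
        p.requires_grad = False              # keep the ImageNet conv features fixed
    net.classifier[6] = nn.Linear(4096, 2)   # new head: domain A vs. domain B
    return net

@torch.no_grad()
def classification_score(classifier, images, target_class):
    # Fraction of translated images the classifier assigns to the target domain.
    classifier.eval()
    preds = classifier(images).argmax(dim=1)
    return (preds == target_class).float().mean().item()
```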
### 3.2 ABLATION STUDY

We now investigate the effects of each component of our proposed method in Figure 9. Our method is composed of the InstaGAN architecture, the context preserving loss \(\mathcal{L}_{\mathrm{ctx}}\), and the sequential mini-batch inference/training technique. We progressively add each component to the baseline model, CycleGAN (with doubled parameters). First, we study the effect of our architecture. For fair comparison, we train a CycleGAN model with an additional input channel, which translates the mask-augmented image; hence we call it CycleGAN+Seg. Unlike our architecture, which translates the set of instance masks, CycleGAN+Seg translates the union of all masks at once. Due to this, CycleGAN+Seg fails to translate some instances and often merges them. On the other hand, our architecture keeps every instance and disentangles them better. Second, we study the effect of the context preserving loss: it not only preserves the background better (row 2), but also improves the translation results, as it regularizes the mapping (row 3). Third, we study the effect of our sequential translation: it not only improves the generalization performance (rows 2, 3) but also improves the translation results on few instances, via data augmentation (row 1).

<--- Page Split --->

![](images/7_0.jpg) <center>Figure 9: Ablation study on the effect of each component of our method: the InstaGAN architecture, the context preserving loss, and the sequential mini-batch inference/training algorithm, denoted as InstaGAN, \(\mathcal{L}_{\mathrm{ctx}}\), and Sequential, respectively. </center>

![](images/7_1.jpg) <center>Figure 10: Ablation study on the effects of the sequential mini-batch inference/training technique. The left and right sides of each title indicate the methods used for training and inference, respectively, where "One" and "Seq" denote the one-step and sequential schemes. </center>

Finally, Figure 10 reports how effective the sequential translation, denoted by "Seq", is for training and inference, compared to the one-step approach, denoted by "One". For the one-step training, we consider only two instances, as it is the maximum number affordable on our machines. On the other hand, for the sequential training, we sequentially train two instances twice, i.e., images of four instances. For the one-step inference, we translate the entire set at once, and for the sequential inference, we sequentially translate two instances at each iteration. We find that our sequential algorithm is effective for both training and inference: (a) training/inference = One/Seq shows blurry results, as the intermediate data are not seen during training, and noise accumulates as the iterations proceed; and (b) Seq/One shows poor generalization performance for multiple instances, as one-step inference over many instances is not seen during training (due to limited GPU memory).

## 4 CONCLUSION

We have proposed a novel method incorporating the set of instance attributes for image-to-image translation. The experiments on different datasets have shown successful image-to-image translation on the challenging tasks of multi-instance transfiguration, including new tasks, e.g., translating jeans to skirts in fashion images. We remark that our ideas utilizing set-structured side information have the potential to be applied to other cross-domain generation tasks, e.g., neural machine translation or video generation. Investigating new tasks and new information could be an interesting research direction in the future.

<--- Page Split --->

## ACKNOWLEDGMENTS

This work was supported by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-15-05-ETRI), by the ICT R&D program of MSIT/IITP [2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion], and also by the Basic Science Research Program (NRF-2017R1E1A1A01077999) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT.

## REFERENCES

Amjad Almahairi, Sai Rajeswar, Alessandro Sordoni, Philip Bachman, and Aaron Courville. Augmented cyclegan: Learning many-to-many mappings from unpaired data. arXiv preprint arXiv:1802.10151, 2018.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041, 2017.

Aayush Bansal, Shugao Ma, Deva Ramanan, and Yaser Sheikh. Recycle-gan: Unsupervised video retargeting. arXiv preprint arXiv:1808.05174, 2018.

Sagie Benaim and Lior Wolf. One-sided unsupervised domain mapping. In Advances in Neural Information Processing Systems, pp. 752-762, 2017.

Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pp. 7, 2017.

Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A Efros. Everybody dance now. arXiv preprint arXiv:1808.07371, 2018.

Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. arXiv preprint arXiv:1711.09020, 2017.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255. IEEE, 2009.

Tomer Galanti, Lior Wolf, and Sagie Benaim. The role of minimal complexity functions in unsupervised learning of semantic mappings. 2018.

Aaron Gokaslan, Vivek Ramanujan, Daniel Ritchie, Kwang In Kim, and James Tompkin. Improving shape deformation in unsupervised image-to-image translation. arXiv preprint arXiv:1808.04325, 2018.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pp. 5767-5777, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.

Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 2980-2988. IEEE, 2017.

<--- Page Split --->

Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A Efros, and Trevor Darrell. Cycada: Cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213, 2017.

Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. arXiv preprint arXiv:1804.04732, 2018.

Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint, 2017.

Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694-711. Springer, 2016.

Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jungkwon Lee, and Jiwon Kim. Learning to discover cross-domain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192, 2017.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043, 2017.
Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew P Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, volume 2, pp. 4, 2017.

Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Diverse image-to-image translation via disentangled representations. arXiv preprint arXiv:1808.00948, 2018.

Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabas Póczos. Mmd gan: Towards deeper understanding of moment matching network. In Advances in Neural Information Processing Systems, pp. 2203-2213, 2017.

Xiaodan Liang, Hao Zhang, and Eric P Xing. Generative semantic manipulation with contrasting gan. arXiv preprint arXiv:1708.00315, 2017.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740-755. Springer, 2014.

Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, pp. 700-708, 2017.

Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2813-2821. IEEE, 2017.

Youssef A Mejjati, Christian Richardt, James Tompkin, Darren Cosker, and Kwang In Kim. Unsupervised attention-guided image to image translation. arXiv preprint arXiv:1806.02311, 2018.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.

Youssef Mroueh, Chun-Liang Li, Tom Sercu, Anant Raj, and Yu Cheng. Sobolev gan. arXiv preprint arXiv:1711.04894, 2017.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pp. 271-279, 2016.

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.

<--- Page Split --->

Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems, pp. 6830-6841, 2017.

Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russ Webb. Learning from simulated and unsupervised images through adversarial training. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 3, pp. 6, 2017.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016.

Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.

Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. Video-to-video synthesis. arXiv preprint arXiv:1808.06601, 2018a.
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pp. 5, 2018b.

Wei Yang, Ping Luo, and Liang Lin. Clothing co-parsing by joint image segmentation and labeling. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3182-3189, 2014.

Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. Dualgan: Unsupervised dual learning for image-to-image translation. arXiv preprint, 2017.

Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in Neural Information Processing Systems, pp. 3391-3401, 2017.

Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318, 2018.

Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European Conference on Computer Vision, pp. 649-666. Springer, 2016.

Jian Zhao, Jianshu Li, Yu Cheng, Li Zhou, Terence Sim, Shuicheng Yan, and Jiashi Feng. Understanding humans in crowded scenes: Deep nested adversarial learning and a new benchmark for multi-human parsing. arXiv preprint arXiv:1804.03287, 2018.

Yanzhao Zhou, Yi Zhu, Qixiang Ye, Qiang Qiu, and Jianbin Jiao. Weakly supervised instance segmentation using class peak response. arXiv preprint arXiv:1804.00880, 2018.

Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision, pp. 597-613. Springer, 2016.

Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.

<--- Page Split --->

## A ARCHITECTURE DETAILS

We adopted the network architectures of CycleGAN (Zhu et al., 2017) as the building blocks of our proposed model. Specifically, we adopted the ResNet 9-blocks generator (Johnson et al., 2016; He et al., 2016) and the PatchGAN (Isola et al., 2017) discriminator. The ResNet generator is composed of downsampling blocks, residual blocks, and upsampling blocks. We used the downsampling and residual blocks for the encoders, and the upsampling blocks for the generators. On the other hand, the PatchGAN discriminator is composed of 5 convolutional layers, including normalization and non-linearity layers. We used the first 3 convolutional layers for the feature extractors, and the last 2 convolutional layers for the classifier. We preprocessed each instance segmentation as a binary foreground/background mask, and hence simply used it as a 1-channel binary image. Also, since we concatenate two or three features to generate the final outputs, we doubled or tripled the input dimension of those architectures. Similar to prior works (Johnson et al., 2016; Zhu et al., 2017), we applied Instance Normalization (IN) (Ulyanov & Lempitsky, 2016) to both generators and discriminators. In addition, we observed that applying Spectral Normalization (SN) (Miyato et al., 2018) to the discriminators significantly improves the performance, even though we used LSGAN (Mao et al., 2017), whereas the original motivation of SN was to enforce a Lipschitz condition to match the theory of WGAN (Arjovsky et al., 2017; Gulrajani et al., 2017). We also applied SN to the generators, as suggested in Self-Attention GAN (Zhang et al., 2018), but did not observe a gain in our setting.
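As a concrete illustration of the normalization setup above, discriminator convolutions can be wrapped with PyTorch's built-in spectral normalization; the 4x4 stride-2 shape below is a typical PatchGAN-style value and is illustrative, not the exact configuration.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def sn_conv(in_ch, out_ch):
    # Convolution wrapped with spectral normalization, as applied to the
    # discriminator layers (kernel/stride values are illustrative).
    return spectral_norm(nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1))

# e.g., a first block of the image feature extractor f_DX:
block = nn.Sequential(sn_conv(3, 64), nn.LeakyReLU(0.2, inplace=True))
```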
## B TRAINING DETAILS

For all the experiments, we simply set \(\lambda_{\mathrm{cyc}} = 10\), \(\lambda_{\mathrm{idt}} = 10\), and \(\lambda_{\mathrm{ctx}} = 10\) for our loss (7). We used the Adam (Kingma & Ba, 2014) optimizer with batch size 4, training with 4 GPUs in parallel. All networks were trained from scratch, with a learning rate of 0.0002 for \(G\) and 0.0001 for \(D\), and \(\beta_{1} = 0.5\), \(\beta_{2} = 0.999\) for the optimizer. Similar to CycleGAN (Zhu et al., 2017), we kept the learning rate fixed for the first 100 epochs and linearly decayed it to zero over the next 100 epochs for the multi-human parsing (MHP) (Zhao et al., 2018) and COCO (Lin et al., 2014) datasets, and kept it fixed for the first 400 epochs and linearly decayed it over the next 200 epochs for the clothing co-parsing (CCP) (Yang et al., 2014) dataset, as it contains a smaller number of samples. We sampled two classes from the datasets above and used them as the two domains for translation. We resized images to \(300 \times 200\) (height \(\times\) width) for the CCP dataset, \(240 \times 160\) for the MHP dataset, and \(200 \times 200\) for the COCO dataset, respectively.
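The optimization setup above translates directly into code; the sketch below reflects the reported values, with placeholder modules standing in for the actual generators and discriminators (any \(G\)/\(D\) modules would do).

```python
import itertools
import torch
import torch.nn as nn

# Placeholders for the actual networks; only the optimizer settings matter here.
G_XY, G_YX, D_X, D_Y = (nn.Conv2d(3, 3, 1) for _ in range(4))

opt_G = torch.optim.Adam(itertools.chain(G_XY.parameters(), G_YX.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(itertools.chain(D_X.parameters(), D_Y.parameters()),
                         lr=1e-4, betas=(0.5, 0.999))

# Constant learning rate for the first half of training, then linear decay to
# zero (100 + 100 epochs for MHP/COCO; 400 + 200 for CCP).
def linear_decay(epoch, n_keep=100, n_decay=100):
    return 1.0 if epoch < n_keep else max(0.0, 1.0 - (epoch - n_keep) / n_decay)

sched_G = torch.optim.lr_scheduler.LambdaLR(opt_G, lr_lambda=linear_decay)
sched_D = torch.optim.lr_scheduler.LambdaLR(opt_D, lr_lambda=linear_decay)
```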
## C TREND OF TRANSLATION RESULTS

We tracked the trend of the translation results as the training epochs increase, as shown in Figure 11. Both the image and the mask smoothly adapt to the target instances. For example, the remaining parts of the legs slowly disappear, and the skirt slowly forms its triangular shape.

![](images/11_0.jpg) <center>Figure 11: Trend of the translation results of our method over increasing epochs. </center>

<--- Page Split --->

## D QUANTITATIVE RESULTS

We evaluated the classification score for the CCP and COCO datasets. Unlike the CCP dataset, the COCO dataset suffers from a false positive problem, in that the classifier fails to determine whether the generator produced target instances in the right place. To overcome this issue, we measured the masked classification score, where the input images are masked by the corresponding segmentations. We note that CycleGAN and our method showed comparable results for the naive classification score, but ours outperformed for the masked classification score, as it reduces the false positive problem.

Table 1: Classification score for CCP dataset.

<table><tr><td rowspan="2"></td><td colspan="2">jeans→skirt</td><td colspan="2">skirt→jeans</td><td colspan="2">shorts→pants</td><td colspan="2">pants→shorts</td></tr><tr><td>train</td><td>test</td><td>train</td><td>test</td><td>train</td><td>test</td><td>train</td><td>test</td></tr><tr><td>Real</td><td>0.970</td><td>0.888</td><td>0.982</td><td>0.946</td><td>1.000</td><td>0.984</td><td>0.990</td><td>0.720</td></tr><tr><td>CycleGAN</td><td>0.465</td><td>0.371</td><td>0.561</td><td>0.483</td><td>0.845</td><td>0.524</td><td>0.305</td><td>0.085</td></tr><tr><td>InstaGAN (ours)</td><td>0.665</td><td>0.600</td><td>0.658</td><td>0.540</td><td>0.898</td><td>0.768</td><td>0.373</td><td>0.232</td></tr></table>

Table 2: Classification score (masked) for COCO dataset.

<table><tr><td rowspan="2"></td><td colspan="2">sheep→giraffe</td><td colspan="2">giraffe→sheep</td><td colspan="2">cup→bottle</td><td colspan="2">bottle→cup</td></tr><tr><td>train</td><td>test</td><td>train</td><td>test</td><td>train</td><td>test</td><td>train</td><td>test</td></tr><tr><td>Real</td><td>0.891</td><td>0.911</td><td>0.925</td><td>0.930</td><td>0.746</td><td>0.723</td><td>0.622</td><td>0.566</td></tr><tr><td>CycleGAN</td><td>0.313</td><td>0.594</td><td>0.291</td><td>0.512</td><td>0.368</td><td>0.403</td><td>0.290</td><td>0.275</td></tr><tr><td>InstaGAN (ours)</td><td>0.406</td><td>0.781</td><td>0.355</td><td>0.642</td><td>0.443</td><td>0.465</td><td>0.322</td><td>0.333</td></tr></table>

## E MORE TRANSLATION RESULTS

We present more qualitative results in high-resolution images.

![](images/12_0.jpg) <center>Figure 12: Translation results for images searched from Google to test the generalization performance of our model. We used a pix2pix (Isola et al., 2017) model to predict the segmentation. </center>

<--- Page Split --->

![](images/13_0.jpg) <center>Figure 13: More translation results on MHP dataset (pants \(\rightarrow\) skirt). </center>

<--- Page Split --->

![](images/14_0.jpg) <center>Figure 14: More translation results on MHP dataset (skirt \(\rightarrow\) pants). </center>

<--- Page Split --->

![](images/15_0.jpg) <center>Figure 15: More translation results on COCO dataset (sheep \(\rightarrow\) giraffe). </center>

<--- Page Split --->

![](images/16_0.jpg) <center>Figure 16: More translation results on COCO dataset (giraffe \(\rightarrow\) sheep). </center>

<--- Page Split --->

![](images/17_0.jpg) <center>Figure 17: More translation results on COCO dataset (zebra \(\rightarrow\) elephant). </center>

![](images/17_1.jpg) <center>Figure 18: More translation results on COCO dataset (elephant \(\rightarrow\) zebra). </center>

![](images/17_2.jpg) <center>Figure 19: More translation results on COCO dataset (bird \(\rightarrow\) zebra). </center>

<--- Page Split --->

![](images/18_0.jpg) <center>Figure 20: More translation results on COCO dataset (zebra \(\rightarrow\) bird). </center>

![](images/18_1.jpg) <center>Figure 21: More translation results on COCO dataset (horse \(\rightarrow\) car). </center>

![](images/18_2.jpg) <center>Figure 22: More translation results on COCO dataset (car \(\rightarrow\) horse). </center>

<--- Page Split --->

## F MORE COMPARISONS WITH CYCLEGAN+SEG

To demonstrate the effectiveness of our method further, we provide more comparison results with CycleGAN+Seg. Since CycleGAN+Seg translates all instances at once, it often (a) fails to translate instances, (b) merges multiple instances (see Figures 23 and 25), or (c) generates multiple instances from one instance (see Figures 24 and 26). On the other hand, our method does not have such issues due to its instance-aware nature. In addition, since the unioned mask loses the original shape information, our instance-aware method produces better shape results (e.g., see row 1 of Figure 25).

![](images/19_0.jpg) <center>Figure 23: Comparisons with CycleGAN+Seg on MHP dataset (pants \(\rightarrow\) skirt). </center>

<--- Page Split --->

![](images/20_0.jpg) <center>Figure 24: Comparisons with CycleGAN+Seg on MHP dataset (skirt \(\rightarrow\) pants). </center>

<--- Page Split --->

![](images/21_0.jpg) <center>Figure 25: Comparisons with CycleGAN+Seg on COCO dataset (sheep \(\rightarrow\) giraffe). </center>
<--- Page Split --->

![](images/22_0.jpg) <center>Figure 26: Comparisons with CycleGAN+Seg on COCO dataset (giraffe \(\rightarrow\) sheep). </center>

<--- Page Split --->

## G GENERALIZATION OF TRANSLATED MASKS

To show that our model generalizes well, we searched for the nearest training neighbors (in \(L_{2}\)-norm) of the translated target masks. As reported in Figure 27, we observe that the translated masks (columns 3 and 4) are often quite different from the nearest neighbors (columns 5 and 6). This confirms that our model does not simply memorize training instance masks, but learns a mapping that generalizes to target instances.

![](images/23_0.jpg) <center>Figure 27: Nearest training neighbors of translated masks. </center>

## H TRANSLATION RESULTS OF CROP & ATTACH BASELINE

For interested readers, we also present the translation results of a simple crop & attach baseline in Figure 28, which finds the nearest neighbors of the original masks among the target masks, and crops & attaches the corresponding image regions onto the original image. Here, since the distance in pixel space (e.g., \(L_{2}\)-norm) obviously does not capture semantics, the cropped instances do not fit well with the original contexts.

![](images/23_1.jpg) <center>Figure 28: Translation results of the crop & attach baseline. </center>

<--- Page Split --->

## I VIDEO TRANSLATION RESULTS

For interested readers, we also present video translation results in Figure 29. Here, we use a predicted segmentation (generated by a pix2pix (Isola et al., 2017) model, as in Figure 8 and Figure 12) for each frame. Similar to CycleGAN, our method shows temporally coherent results, even though we did not use any explicit regularization. One might design a more advanced version of our model utilizing temporal patterns, e.g., using the idea of Recycle-GAN (Bansal et al., 2018) for video-to-video translation, which we think is an interesting future direction to explore.

![](images/24_0.jpg) <center>Figure 29: Original images (row 1) and translated results of our method (row 2) on a video searched from YouTube. We present translation results on eight successive frames for visualization. </center>

<--- Page Split --->

## J RECONSTRUCTION RESULTS

For interested readers, we also report the translation and reconstruction results of our method in Figure 30. One can observe that our method achieves good reconstruction while also producing good translations. This implies that our translated results preserve the original context well.

![](images/25_0.jpg) <center>Figure 30: Translation and reconstruction results of our method. </center>

<--- Page Split --->
Our generator \(G\) encodes both \(x\) and \(a\) , and translates them into \(y'\) and \(b'\) . Notably, the order of the instance attributes in the set \(a\) should not affect the translated image \(y'\) , and each instance attribute in the set \(a\) should be translated to the corresponding one in \(b'\) . In other words, \(y'\) is permutation- invariant with respect to the instances in \(a\) , and \(b'\) is permutation- equivariant with respect to them. These properties can be implemented by introducing proper operators in feature encoding (Zaheer et al., 2017). We first extract individual features from image and attributes using image feature extractor \(f_{\mathrm{GA}}\) and attribute feature extractor \(f_{\mathrm{OA}}\) , respectively. The attribute features individually extracted using \(f_{\mathrm{OA}}\) are then aggregated into a permutation- invariant set feature via summation: \(\sum_{i = 1}^{N} f_{\mathrm{OA}}(a_i)\) . As illustrated in Figure 2b, we concatenate some of image and attribute features with the set feature, and feed them to image and attribute generators. Formally, the image representation \(h_{\mathrm{GA}}\) and the \(n\) - th attribute representation \(h_{\mathrm{OA}}^n\) in generator \(G\) can be formulated as: \[h_{\mathrm{GA}}(x,a) = \left[f_{\mathrm{GA}}(x);\sum_{i = 1}^{N}f_{\mathrm{GA}}(a_i)\right],\quad h_{\mathrm{OA}}^n (x,a) = \left[f_{\mathrm{GA}}(x);\sum_{i = 1}^{N}f_{\mathrm{OA}}(a_i);f_{\mathrm{GA}}(a_n)\right], \quad (1)\] where each attribute encoding \(h_{\mathrm{OA}}^n\) process features of all attributes as a contextual feature. Finally, \(h_{\mathrm{GA}}\) is fed to the image generator \(g_{\mathrm{GA}}\) , and \(h_{\mathrm{OA}}^n (n = 1, \ldots , N)\) are to the attribute generator \(g_{\mathrm{OA}}\) . On the other hand, our discriminator \(D\) encodes both \(x\) and \(a\) (or \(x'\) and \(a'\) ), and determines whether the pair is from the domain or not. Here, the order of the instance attributes in the set \(a\) should not affect the output. In a similar manner above, our representation in discriminator \(D\) , which is permutation- invariant to the instances, is formulated as: \[h_{\mathrm{DX}}(x,a) = \left[f_{\mathrm{DX}}(x);\sum_{i = 1}^{N}f_{\mathrm{DA}}(a_i)\right], \quad (2)\] which is fed to an adversarial discriminator \(g_{\mathrm{DX}}\) . We emphasize that the joint encoding of both image \(x\) and instance attributes \(a\) for each neural component is crucial because it allows the network to learn the relation between \(x\) and \(a\) . For <--- Page Split ---> example, if two separate encodings and discriminators are used for \(x\) and \(\pmb{a}\) , the generator may be misled to produce image and instance masks that do not match with each other. By using the joint encoding and discriminator, our generator can produce an image of instances properly depicted on the area consistent with its segmentation masks. As will be seen in Section 3, our approach can disentangle output instances considering their original layouts. Note that any types of neural networks may be used for sub- network architectures mentioned above such as \(f_{\alpha x}\) , \(f_{\alpha \mathbf{A}}\) , \(f_{\beta \mathbf{X}}\) , \(f_{\beta \mathbf{A}}\) , \(g_{\alpha \mathbf{X}}\) , \(g_{\alpha \mathbf{A}}\) , and \(g_{\beta \mathbf{X}}\) . We describe the detailed architectures used in our experiments in Appendix A. 
### 2.2 TRAINING LOSS Remind that an image- to- image translation model aims to translate a domain while keeping the original contexts (e.g., background or instances' domain- independent characteristics such as the looking direction). To this end, we both consider the domain loss, which makes the generated outputs to follow the style of a target domain, and the content loss, which makes the outputs to keep the original contents. Following our baseline model, CycleGAN (Zhu et al., 2017), we use the GAN loss for the domain loss, and consider both the cycle- consistency loss (Kim et al., 2017; Yi et al., 2017) and the identity mapping loss (Taigman et al., 2016) for the content losses. In addition, we also propose a new content loss, coined context preserving loss, using the original and predicted segmentation information. In what follows, we formally define our training loss in detail. For simplicity, we denote our loss function as a function of a single training sample \((x,\pmb {a})\in \mathcal{X}\times \mathcal{A}\) and \((y,\pmb {b})\in \mathcal{Y}\times \mathcal{B}\) , while one has to minimize its empirical means in training. The GAN loss is originally proposed by Goodfellow et al. (2014) for generative modeling via alternately training generator \(G\) and discriminator \(D\) . Here, \(D\) determines if the data is a real one of a fake/generated/translated one made by \(G\) . There are numerous variants of the GAN loss (Nowozin et al., 2016; Arjovsky et al., 2017; Li et al., 2017; Mroueh et al., 2017), and we follow the LSGAN scheme (Mao et al., 2017), which is empirically known to show a stably good performance: \[\mathcal{L}_{\mathrm{LSGAN}} = (D_{X}(x,\pmb {a}) - 1)^{2} + D_{X}(G_{Y X}(y,\pmb {b}))^{2} + (D_{Y}(y,\pmb {b}) - 1)^{2} + D_{Y}(G_{X Y}(x,\pmb {a}))^{2}. \quad (3)\] For keeping the original content, the cycle- consistency loss \(\mathcal{L}_{cyc}\) and the identity mapping loss \(\mathcal{L}_{\mathrm{id}}\) enforce samples not to lose the original information after translating twice and once, respectively: \[\begin{array}{r l} & {\mathcal{L}_{c y c} = \| G_{Y X}(G_{X Y}(x,\pmb {a})) - (x,\pmb {a})\|_{1} + \| G_{X Y}(G_{Y X}(y,\pmb {b})) - (y,\pmb {b})\|_{1},}\\ & {\mathcal{L}_{\mathrm{i a t}} = \| G_{X Y}(y,\pmb {b}) - (y,\pmb {b})\|_{1} + \| G_{Y X}(x,\pmb {a}) - (x,\pmb {a})\|_{1}.} \end{array} \quad (4)\] Finally, our newly proposed context preserving loss \(\mathcal{L}_{\mathrm{ctx}}\) enforces to translate instances only, while keeping outside of them, i.e., background. Formally, it is a pixel- wise weighted \(\ell_{1}\) - loss where the weight is 1 for background and 0 for instances. Here, note that backgrounds for two domains become different in transfiguration- type translation involving significant shape changes. Hence, we consider the non- zero weight only if a pixel is in background in both original and translated ones. Namely, for the original samples \((x,\pmb {a})\) , \((y,\pmb {b})\) and the translated one \((y^{\prime},\pmb {b}^{\prime})\) , \((x^{\prime},\pmb {a}^{\prime})\) , we let the weight \(w(\pmb {a},\pmb {b}^{\prime})\) , \(w(\pmb {b},\pmb {a}^{\prime})\) be one minus the element- wise minimum of binary represented instance masks, and we propose \[\mathcal{L}_{\mathrm{ctx}} = \| w(\pmb {a},\pmb {b}^{\prime})\odot (x - y^{\prime})\|_{1} + \| w(\pmb {b},\pmb {a}^{\prime})\odot (y - x^{\prime})\|_{1} \quad (6)\] where \(\odot\) is the element- wise product. 
In our experiments, we found that the context preserving loss not only keeps the background better, but also improves the quality of generated instance segmentations. Finally, the total loss of InstaGAN is \[\mathcal{L}_{\mathrm{InstaGAN}} = \underbrace{\mathcal{L}_{\mathrm{LSGAN}}}_{\mathrm{GAN~}(domain~loss)} + \underbrace{\lambda_{cyc}\mathcal{L}_{cyc} + \lambda_{\mathrm{id}t}\mathcal{L}_{\mathrm{id}t} + \lambda_{\mathrm{ctx}}\mathcal{L}_{\mathrm{ctx}}}_{\mathrm{content~loss}}, \quad (7)\] where \(\lambda_{cyc}, \lambda_{\mathrm{id}t}, \lambda_{\mathrm{ctx}} > 0\) are some hyper- parameters balancing the losses. ### 2.3 SEQUENTIAL MINI-BATCH TRANSLATION While the proposed architecture is able to translate an arbitrary number of instances in principle, the GPU memory required linearly increases with the number of instances. For example, in our experiments, a machine was able to forward only a small number (say, 2) of instance attributes <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 3: Overview of the sequential mini-batch training with instance subsets (mini-batches) of size 1,2, and 1, as shown in the top right side. The content loss is applied to the intermediate samples of current mini-batch, and GAN loss is applied to the samples of aggregated mini-batches. We detach every iteration in training, in that the real line indicates the backpropagated paths and dashed lines indicates the detached paths. See text for details. </center> during training, and thus the learned model suffered from poor generalization to images with a larger number of instances. To address this issue, we propose a new inference/training technique, which allows to train an arbitrary number of instances without increasing the GPU memory. We first describe the sequential inference scheme that translates the subset of instances sequentially, and then describe the corresponding mini- batch training technique. Given an input \((x,\alpha)\) , we first divide the set of instance masks \(\alpha\) into mini- batches \(\alpha_{1},\ldots ,\alpha_{M}\) , i.e., \(\alpha = \bigcup_{i}\alpha_{i}\) and \(\alpha_{i}\cap \alpha_{j} = \emptyset\) for \(i\neq j\) . Then, at the \(m\) - th iteration for \(m = 1,2,\ldots ,M\) , we translate the image- mask pair \((x_{m},\alpha_{m})\) , where \(x_{m}\) is the translated image \(y_{m - 1}^{\prime}\) from the previous iteration, and \(x_{1} = x\) . In this sequential scheme, at each iteration, the generator \(G\) outputs an intermediate translated image \(y_{m}^{\prime}\) , which accumulates all mini- batch translations up to the current iteration, and a translated mini- batch of instance masks \(b_{m}^{\prime}\) : \[(y_{m}^{\prime},b_{m}^{\prime}) = G(x_{m},\alpha_{m}) = G(y_{m - 1}^{\prime},\alpha_{m}). \quad (8)\] In order to align the translated image with mini- batches of instance masks, we aggregate all the translated mini- batch and produce a translated sample: \[(y_{m}^{\prime},b_{1:m}^{\prime}) = (y_{m}^{\prime},\cup_{i = 1}^{m}b_{i}^{\prime}). \quad (9)\] The final output of the proposed sequential inference scheme is \((y_{M}^{\prime},b_{1:M}^{\prime})\) . We also propose the corresponding sequential training algorithm, as illustrated in Figure 3. We apply content loss (4- 6) to the intermediate samples \((y_{m}^{\prime},b_{m}^{\prime})\) of current mini- batch \(\alpha_{m}\) , as it is just a function of inputs and outputs of the generator \(G\) . 
We apply the content loss (4-6) to the intermediate samples \((y_{m}^{\prime},\pmb{b}_{m}^{\prime})\) of the current mini-batch \(\pmb{a}_{m}\), as it is just a function of the inputs and outputs of the generator \(G\). In contrast, we apply the GAN loss (3) to the samples of aggregated mini-batches \((y_{m}^{\prime},\pmb{b}_{1:m}^{\prime})\), because the network fails to align images and masks when using only a partial subset of instance masks. We use the real/original samples \(\{x\}\) only with the full set of instance masks. Formally, the sequential version of the training loss of InstaGAN is

\[\mathcal{L}_{\mathrm{InstaGAN\text{-}SM}} = \sum_{m = 1}^{M}\mathcal{L}_{\mathrm{LSGAN}}((x,\pmb{a}),(y_{m}^{\prime},\pmb{b}_{1:m}^{\prime})) + \mathcal{L}_{\mathrm{content}}((x_{m},\pmb{a}_{m}),(y_{m}^{\prime},\pmb{b}_{m}^{\prime})) \quad (10)\]

where \(\mathcal{L}_{\mathrm{content}} = \lambda_{\mathrm{cyc}}\mathcal{L}_{\mathrm{cyc}} + \lambda_{\mathrm{idt}}\mathcal{L}_{\mathrm{idt}} + \lambda_{\mathrm{ctx}}\mathcal{L}_{\mathrm{ctx}}\). We detach at every \(m\)-th iteration of training, i.e., we backpropagate only within the mini-batch \(\pmb{a}_{m}\), so that only a fixed amount of GPU memory is required, regardless of the number of training instances. Hence, the

<--- Page Split --->

![](images/5_0.jpg) <center>Figure 4: Translation results on clothing co-parsing (CCP) (Yang et al., 2014) dataset. </center>

![](images/5_1.jpg) <center>Figure 5: Translation results on multi-human parsing (MHP) (Zhao et al., 2018) dataset. </center>

![](images/5_2.jpg) <center>Figure 6: Translation results on COCO (Lin et al., 2014) dataset. </center>

sequential training allows for training with samples containing many instances, and thus improves the generalization performance. Furthermore, it also improves the translation of an image even with a few instances, compared to the one-step approach, due to its data augmentation effect using the intermediate samples \((x_{m}, \pmb{a}_{m})\). In our experiments, we divided the instances into mini-batches \(\pmb{a}_{1}, \ldots , \pmb{a}_{M}\) in decreasing order of the spatial sizes of the instances. Interestingly, the decreasing order showed better performance than a random order. We believe that this is because small instances tend to be occluded by other instances in images, thus often losing their intrinsic shape information.

## 3 EXPERIMENTAL RESULTS

### 3.1 IMAGE-TO-IMAGE TRANSLATION RESULTS

We first qualitatively evaluate our method on various datasets. We compare our model, InstaGAN, with the baseline model, CycleGAN (Zhu et al., 2017). For fair comparison, we doubled the number of parameters of CycleGAN, as InstaGAN uses two networks, for the image and the masks, respectively. We sample two classes from various datasets, including clothing co-parsing (CCP) (Yang et al.,

<--- Page Split --->

![](images/6_0.jpg) <center>Figure 7: Results of InstaGAN varying over different input masks. </center>

![](images/6_1.jpg) <center>Figure 8: Translation results on CCP dataset, using predicted mask for inference. </center>

2014), multi-human parsing (MHP) (Zhao et al., 2018), and MS COCO (Lin et al., 2014) datasets, and use them as the two domains for translation. In visualizations, we merge all instance masks into one for the sake of compactness. See Appendix B for the detailed settings of our experiments.

The translation results for the three datasets are presented in Figures 4, 5, and 6, respectively. While CycleGAN mostly fails, our method generates reasonable shapes for the target instances and keeps the original contexts by focusing on the instances via the context preserving loss. For example, see the results on sheep \(\leftrightarrow\) giraffe in Figure 6. CycleGAN often generates sheep-like instances but loses the original background.
InstaGAN not only generates better sheep or giraffes, but also preserves the layout of the original instances, i.e., the looking directions (left, right, front) of the sheep and giraffes are consistent after translation. More experimental results are presented in Appendix E. Code and results are available at https://github.com/sangwoomo/instagan.

In addition, our method can control which instances to translate by conditioning the input, as shown in Figure 7. Such control is impossible under CycleGAN. We also note that we focus on complex (multi-instance transfiguration) tasks to emphasize the advantages of our method. Nevertheless, our method is attractive even for simple tasks (e.g., horse \(\leftrightarrow\) zebra), as it reduces false positives/negatives via the context preserving loss and enables control over the translation. We further emphasize that our method showed good results even when we used predicted segmentations for inference, as shown in Figure 8, which can reduce the cost of collecting mask labels in practice.

Finally, we also quantitatively evaluate the translation performance of our method. We measure the classification score, i.e., the ratio of images predicted as the target class by a pretrained classifier. Specifically, we fine-tune the final layers of the ImageNet (Deng et al., 2009) pretrained VGG-16 (Simonyan & Zisserman, 2014) network as a binary classifier for each domain. Table 1 and Table 2 in Appendix D show the classification scores for the CCP and COCO datasets, respectively. Our method outperforms CycleGAN in all classification experiments, e.g., ours achieves \(23.2\%\) accuracy for the pants \(\rightarrow\) shorts task, while CycleGAN obtains only \(8.5\%\).

### 3.2 ABLATION STUDY

We now investigate the effect of each component of our proposed method in Figure 9. Our method is composed of the InstaGAN architecture, the context preserving loss \(\mathcal{L}_{\mathrm{ctx}}\), and the sequential mini-batch inference/training technique. We progressively add each component to the baseline model, CycleGAN (with doubled parameters). First, we study the effect of our architecture. For a fair comparison, we train a CycleGAN model with an additional input channel, which translates the mask-augmented image; hence we call it CycleGAN+Seg. Unlike our architecture, which translates the set of instance masks, CycleGAN+Seg translates the union of all masks at once. As a result, CycleGAN+Seg fails to translate some instances and often merges them. In contrast, our architecture keeps every instance and disentangles them better. Second, we study the effect of the context

<--- Page Split --->

![](images/7_0.jpg) <center>Figure 9: Ablation study on the effect of each component of our method: the InstaGAN architecture, the context preserving loss, and the sequential mini-batch inference/training algorithm, which are denoted as InstaGAN, \(\mathcal{L}_{\mathrm{ctx}}\), and Sequential, respectively. </center>

![](images/7_1.jpg) <center>Figure 10: Ablation study on the effects of the sequential mini-batch inference/training technique. The left and right sides of each title indicate the method used for training and inference, respectively, where "One" and "Seq" indicate the one-step and sequential schemes. </center>

preserving loss: it not only preserves the background better (row 2), but also improves the translation results, as it regularizes the mapping (row 3).
Third, we study the effect of our sequential translation: it not only improves the generalization performance (rows 2, 3) but also improves the translation results on images with few instances, via its data augmentation effect (row 1). Finally, Figure 10 reports how effective the sequential translation, denoted by "Seq", is in inference and training, compared to the one-step approach, denoted by "One". For one-step training, we consider only two instances, as this is the maximum number affordable on our machines. For sequential training, we sequentially train on two instances twice, i.e., on images of four instances. For one-step inference, we translate the entire set at once, and for sequential inference, we sequentially translate two instances at each iteration. We find that our sequential algorithm is effective for both training and inference: (a) training/inference = One/Seq shows blurry results, since intermediate samples were not seen during training and noise accumulates over the iterations, and (b) Seq/One shows poor generalization for multiple instances, since one-step inference over many instances never occurs during training (due to the limited GPU memory).

## 4 CONCLUSION

We have proposed a novel method incorporating the set of instance attributes for image-to-image translation. Experiments on different datasets have shown successful image-to-image translation on the challenging tasks of multi-instance transfiguration, including new tasks, e.g., translating jeans to skirts in fashion images. We remark that our ideas of utilizing set-structured side information have the potential to be applied to other cross-domain generation tasks, e.g., neural machine translation or video generation. Investigating new tasks and new types of side information could be an interesting research direction in the future.

<--- Page Split --->

## ACKNOWLEDGMENTS

This work was supported by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-15-05-ETRI), by the ICT R&D program of MSIT/IITP [2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion], and also by the Basic Science Research Program (NRF-2017R1E1A1A01077999) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT.

## A ARCHITECTURE DETAILS

We adopted the network architectures of CycleGAN (Zhu et al., 2017) as the building blocks of our proposed model. Specifically, we adopted the ResNet 9-blocks generator (Johnson et al., 2016; He et al., 2016) and the PatchGAN (Isola et al., 2017) discriminator. The ResNet generator is composed of downsampling blocks, residual blocks, and upsampling blocks. We used the downsampling and residual blocks for the encoders, and the upsampling blocks for the generators. On the other hand, the PatchGAN discriminator is composed of 5 convolutional layers, including normalization and non-linearity layers. We used the first 3 convolutional layers for the feature extractors, and the last 2 convolutional layers for the classifiers. We preprocessed each instance segmentation as a binary foreground/background mask, and hence simply used it as a 1-channel binary image. Also, since we concatenate two or three features to generate the final outputs, we doubled or tripled the input dimensions of those architectures (see the sketch below). Similar to prior work (Johnson et al., 2016; Zhu et al., 2017), we applied Instance Normalization (IN) (Ulyanov & Lempitsky, 2016) for both generators and discriminators.
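For illustration, here is a schematic sketch of how these building blocks could be wired together; the layer configuration is a placeholder (the residual blocks are omitted), and all class and variable names are ours rather than the released code.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Downsampling blocks of the ResNet generator (residual blocks omitted)."""
    def __init__(self, in_ch, feat=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 7, padding=3), nn.InstanceNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, feat, 3, stride=2, padding=1), nn.InstanceNorm2d(feat), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Upsampling blocks; the input dimension is doubled for two concatenated features."""
    def __init__(self, feat=256, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(2 * feat, 64, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, out_ch, 7, padding=3), nn.Tanh(),
        )
    def forward(self, h):
        return self.net(h)

# Usage sketch: encode the image and the 1-channel binary mask separately,
# concatenate their features, and decode the translated image.
enc_x, enc_a, gen_y = Encoder(3), Encoder(1), Generator()
x = torch.randn(1, 3, 240, 160)             # image
a = torch.rand(1, 1, 240, 160).round()      # binary instance mask
y = gen_y(torch.cat([enc_x(x), enc_a(a)], dim=1))
```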
In addition, we observed that applying Spectral Normalization (SN) (Miyato et al., 2018) to the discriminators significantly improves the performance, even though we used LSGAN (Mao et al., 2017), while the original motivation of SN was to enforce the Lipschitz condition required by the theory of WGAN (Arjovsky et al., 2017; Gulrajani et al., 2017). We also applied SN to the generators, as suggested in Self-Attention GAN (Zhang et al., 2018), but did not observe a gain in our setting.

## B TRAINING DETAILS

For all experiments, we simply set \(\lambda_{\mathrm{cyc}} = 10\), \(\lambda_{\mathrm{idt}} = 10\), and \(\lambda_{\mathrm{ctx}} = 10\) in our loss (7). We used the Adam (Kingma & Ba, 2014) optimizer with batch size 4, training on 4 GPUs in parallel. All networks were trained from scratch, with a learning rate of 0.0002 for \(G\) and 0.0001 for \(D\), and \(\beta_{1} = 0.5\), \(\beta_{2} = 0.999\) for the optimizer. Similar to CycleGAN (Zhu et al., 2017), we kept the learning rate fixed for the first 100 epochs and linearly decayed it to zero over the next 100 epochs for the multi-human parsing (MHP) (Zhao et al., 2018) and COCO (Lin et al., 2014) datasets, and kept it fixed for the first 400 epochs and linearly decayed it over the next 200 epochs for the clothing co-parsing (CCP) (Yang et al., 2014) dataset, as it contains a smaller number of samples. We sampled two classes from the datasets above and used them as the two domains for translation. We resized images to \(300 \times 200\) (height \(\times\) width) for the CCP dataset, \(240 \times 160\) for the MHP dataset, and \(200 \times 200\) for the COCO dataset.

## C TREND OF TRANSLATION RESULTS

We tracked the trend of the translation results as the number of epochs increases, as shown in Figure 11. Both the image and the mask smoothly adapt to the target instances. For example, the remaining parts of the legs slowly disappear, and the skirt gradually forms its triangular shape.

![](images/11_0.jpg) <center>Figure 11: Trend of the translation results of our method over epoch increases. </center>

<--- Page Split --->

## D QUANTITATIVE RESULTS

We evaluated the classification score for the CCP and COCO datasets. Unlike the CCP dataset, the COCO dataset suffers from a false positive problem: the classifier cannot determine whether the generator produced target instances in the right place. To overcome this issue, we measured the masked classification score, where the input images are masked by the corresponding segmentations. We note that CycleGAN and our method showed comparable results for the naive classification score, but ours outperformed for the masked classification score, as it reduces the false positive problem.

Table 1: Classification score for CCP dataset.

<table><tr><td rowspan="2"></td><td colspan="2">jeans→skirt</td><td colspan="2">skirt→jeans</td><td colspan="2">shorts→pants</td><td colspan="2">pants→shorts</td></tr><tr><td>train</td><td>test</td><td>train</td><td>test</td><td>train</td><td>test</td><td>train</td><td>test</td></tr><tr><td>Real</td><td>0.970</td><td>0.888</td><td>0.982</td><td>0.946</td><td>1.000</td><td>0.984</td><td>0.990</td><td>0.720</td></tr><tr><td>CycleGAN</td><td>0.465</td><td>0.371</td><td>0.561</td><td>0.483</td><td>0.845</td><td>0.524</td><td>0.305</td><td>0.085</td></tr><tr><td>InstaGAN (ours)</td><td>0.665</td><td>0.600</td><td>0.658</td><td>0.540</td><td>0.898</td><td>0.768</td><td>0.373</td><td>0.232</td></tr></table>

Table 2: Classification score (masked) for COCO dataset.
<table><tr><td rowspan="2"></td><td colspan="2">sheep→giraffe</td><td colspan="2">giraffe→sheep</td><td colspan="2">cup→bottle</td><td colspan="2">bottle→cup</td></tr><tr><td>train</td><td>test</td><td>train</td><td>test</td><td>train</td><td>test</td><td>train</td><td>test</td></tr><tr><td>Real</td><td>0.891</td><td>0.911</td><td>0.925</td><td>0.930</td><td>0.746</td><td>0.723</td><td>0.622</td><td>0.566</td></tr><tr><td>CycleGAN</td><td>0.313</td><td>0.594</td><td>0.291</td><td>0.512</td><td>0.368</td><td>0.403</td><td>0.290</td><td>0.275</td></tr><tr><td>InstaGAN (ours)</td><td>0.406</td><td>0.781</td><td>0.355</td><td>0.642</td><td>0.443</td><td>0.465</td><td>0.322</td><td>0.333</td></tr></table>

## E MORE TRANSLATION RESULTS

We present more qualitative results in high-resolution images.

![](images/12_0.jpg) <center>Figure 12: Translation results for images searched from Google to test the generalization performance of our model. We used a pix2pix (Isola et al., 2017) model to predict the segmentation. </center>

<--- Page Split --->

![](images/13_0.jpg) <center>Figure 13: More translation results on MHP dataset (pants \(\rightarrow\) skirt). </center>

<--- Page Split --->

![](images/14_0.jpg) <center>Figure 14: More translation results on MHP dataset (skirt \(\rightarrow\) pants). </center>

<--- Page Split --->

![](images/15_0.jpg) <center>Figure 15: More translation results on COCO dataset (sheep→giraffe). </center>

<--- Page Split --->

![](images/16_0.jpg) <center>Figure 16: More translation results on COCO dataset (giraffe \(\rightarrow\) sheep). </center>

<--- Page Split --->

![](images/17_0.jpg) <center>Figure 17: More translation results on COCO dataset (zebra \(\rightarrow\) elephant). </center>

![](images/17_1.jpg) <center>Figure 18: More translation results on COCO dataset (elephant \(\rightarrow\) zebra). </center>

![](images/17_2.jpg) <center>Figure 19: More translation results on COCO dataset (bird \(\rightarrow\) zebra). </center>

<--- Page Split --->

![](images/18_0.jpg) <center>Figure 20: More translation results on COCO dataset (zebra \(\rightarrow\) bird). </center>

![](images/18_1.jpg) <center>Figure 21: More translation results on COCO dataset (horse \(\rightarrow\) car). </center>

![](images/18_2.jpg) <center>Figure 22: More translation results on COCO dataset (car \(\rightarrow\) horse). </center>

<--- Page Split --->

## F MORE COMPARISONS WITH CYCLEGAN+SEG

To demonstrate the effectiveness of our method further, we provide more comparison results with CycleGAN+Seg. Since CycleGAN+Seg translates all instances at once, it often (a) fails to translate instances, (b) merges multiple instances (see Figures 23 and 25), or (c) generates multiple instances from one instance (see Figures 24 and 26). In contrast, our method does not have such issues due to its instance-aware nature. In addition, since the unioned mask loses the original shape information, our instance-aware method produces better shape results (e.g., see row 1 of Figure 25).

![](images/19_0.jpg) <center>Figure 23: Comparisons with CycleGAN+Seg on MHP dataset (pants \(\rightarrow\) skirt). </center>

<--- Page Split --->

![](images/20_0.jpg) <center>Figure 24: Comparisons with CycleGAN+Seg on MHP dataset (skirt \(\rightarrow\) pants). </center>

<--- Page Split --->

![](images/21_0.jpg) <center>Figure 25: Comparisons with CycleGAN+Seg on COCO dataset (sheep→giraffe).
</center>

<--- Page Split --->

![](images/22_0.jpg) <center>Figure 26: Comparisons with CycleGAN+Seg on COCO dataset (giraffe \(\rightarrow\) sheep). </center>

<--- Page Split --->

## G GENERALIZATION OF TRANSLATED MASKS

To show that our model generalizes well, we searched for the nearest training neighbors (in \(L_{2}\)-norm) of the translated target masks. As reported in Figure 27, we observe that the translated masks (columns 3, 4) are often quite different from their nearest neighbors (columns 5, 6). This confirms that our model does not simply memorize training instance masks, but learns a mapping that generalizes to target instances.

![](images/23_0.jpg) <center>Figure 27: Nearest training neighbors of translated masks. </center>

## H TRANSLATION RESULTS OF CROP & ATTACH BASELINE

For interested readers, we also present the translation results of a simple crop & attach baseline in Figure 28, which finds the nearest neighbor of the original mask among the target masks, and crops & attaches the corresponding image region to the original image. Here, since distance in pixel space (e.g., \(L_{2}\)-norm) obviously does not capture semantics, the cropped instances do not fit well with the original contexts.

![](images/23_1.jpg) <center>Figure 28: Translation results of crop & attach baseline. </center>

<--- Page Split --->

## I VIDEO TRANSLATION RESULTS

For interested readers, we also present video translation results in Figure 29. Here, we use a predicted segmentation (generated by a pix2pix (Isola et al., 2017) model, as in Figure 8 and Figure 12) for each frame. Similar to CycleGAN, our method shows temporally coherent results, even though we did not use any explicit regularization. One might design a more advanced version of our model utilizing temporal patterns, e.g., using the idea of Recycle-GAN (Bansal et al., 2018) for video-to-video translation, which we think is an interesting future direction to explore.

![](images/24_0.jpg) <center>Figure 29: Original images (row 1) and translated results of our method (row 2) on a video searched from YouTube. We present translation results on eight successive frames for visualization. </center>

<--- Page Split --->

## J RECONSTRUCTION RESULTS

For interested readers, we also report the translation and reconstruction results of our method in Figure 30. One can observe that our method shows good reconstruction results while also showing good translation results. This implies that our translated results preserve the original context well.

![](images/25_0.jpg) <center>Figure 30: Translation and reconstruction results of our method. </center>

<--- Page Split --->
accept
Accept (Poster)
7.333333
ICLR_2019_paper_0049
iclr
2,019
# THE VARIATIONAL DEFICIENCY BOTTLENECK

Anonymous authors Paper under double-blind review

## ABSTRACT

We introduce a bottleneck method for learning data representations based on channel deficiency, rather than the more traditional information sufficiency. A variational upper bound allows us to implement this method efficiently. The bound itself is bounded above by the variational information bottleneck objective, and the two methods coincide in the regime of single-shot Monte Carlo approximations. The notion of deficiency provides a principled way of approximating complicated channels by relatively simpler ones. The deficiency of one channel w.r.t. another has an operational interpretation in terms of the optimal risk gap of decision problems, capturing classification as a special case. Unsupervised generalizations are possible, such as the deficiency autoencoder, which can also be formulated in a variational form. Experiments demonstrate that the deficiency bottleneck can provide advantages in terms of minimal sufficiency as measured by information bottleneck curves, while retaining a good test performance in classification and reconstruction tasks.

Keywords: Variational Information Bottleneck, Blackwell Sufficiency, Le Cam Deficiency, Information Channel

## 1 INTRODUCTION

The information bottleneck (IB) is an approach to learning data representations based on a notion of minimal sufficiency. The general idea is to map an input source into a representation that retains as little information as possible about the input (minimality), but as much information as possible in relation to a target variable of interest (sufficiency). See Figure 1. For example, in a classification problem, the target variable could be the class label of the input data. In a reconstruction problem, the target variable could be a denoised reconstruction of the input. Intuitively, a representation which is minimal in relation to a given task will discard nuisances in the inputs that are irrelevant to the task, and hence distill more meaningful information and allow for better generalization.

In a typical bottleneck paradigm, an input variable \(X\) is first mapped to an intermediate representation variable \(Z\), and then \(Z\) is mapped to an output variable of interest \(Y\). We call the mappings, resp., a representation model (encoder) and an inference model (decoder). The channel \(\kappa\) models the true relation between the input \(X\) and the output \(Y\). In general, the channel \(\kappa\) is unknown, and only accessible through a set of examples \((x^{(i)}, y^{(i)})_{i = 1}^{N}\). We would like to obtain an approximation of \(\kappa\) using a probabilistic model that comprises the encoder-decoder pair.

The IB methods (Witsenhausen & Wyner, 1975; Tishby et al., 1999; Harremoës & Tishby, 2007; Hsu et al., 2018) have found numerous applications, e.g., in representation learning, clustering, classification, generative modeling, model selection, and the analysis of deep neural networks, among others (see, e.g., Shamir et al., 2008; Gondek & Hofmann, 2003; Higgins et al., 2017; Alemi et al., 2018; Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017). In the traditional IB, minimality and sufficiency are measured in terms of the mutual information. Computing the mutual information can be challenging in practice.
Various recent works have formulated more tractable objectives by way of variational bounds on the mutual information (Chalk et al., 2016; Alemi et al., 2016; Kolchinsky et al., 2017), sandwiching the objective function of interest. Instead of maximizing the sufficiency term of the IB, we formulate a new bottleneck method that minimizes deficiency. Deficiencies provide a principled way of approximating complex channels

<--- Page Split --->

![](images/1_0.jpg) <center>Figure 1: The bottleneck paradigm: The general idea of a bottleneck method is to first map an input \(X\) to an intermediate representation \(Z\), and then map \(Z\) to an output \(Y\). We call the mappings, resp., an encoder (e) and a decoder (d). In general, the true channel \(\kappa\) is unknown, and only accessible through a set of training examples. We would like to obtain an approximation of \(\kappa\). </center>

by relatively simpler ones. The deficiency of a decoder with respect to the true channel between input and output variables quantifies how well any stochastic encoding at the decoder input can be used to approximate the true channel. Deficiencies have a rich heritage in the theory of comparison of statistical experiments (Blackwell, 1953; Le Cam, 1964; Torgersen, 1991). From this angle, the formalism of deficiencies has been used to obtain bounds on the optimal risk gaps of statistical decision problems. As we show, the deficiency bottleneck minimizes a regularized risk gap. Moreover, the proposed method has an immediate variational formulation that can be easily implemented as a modification of the Variational Information Bottleneck (VIB) (Alemi et al., 2016). In fact, both methods coincide in the limit of single-shot Monte Carlo approximations. We call our method the Variational Deficiency Bottleneck (VDB).

Perfect maximization of the IB sufficiency corresponds to perfect minimization of the DB deficiency. However, when working over a parametrized model and adding the bottleneck regularizer, the two methods have different preferences, with the DB being closer to the optimal risk gap. Experiments on basic data sets show that the VDB is able to obtain more compressed representations than the VIB while performing equally well or better in terms of test accuracy.

We describe the details of our method in Section 2. We elaborate on the theory of deficiencies in Section 3. Experimental results with the VDB are presented in Section 4.

## 2 THE VARIATIONAL DEFICIENCY BOTTLENECK (VDB)

Let \(X\) denote an observation or input variable and \(Y\) an output variable of interest. Let \(p(x,y) = \pi (x)\kappa (y|x)\) be the true joint distribution, where the conditional distribution or channel \(\kappa (y|x)\) describes how the output depends on the input. We consider the situation where the true channel is unknown, but we are given a set of \(N\) independent and identically distributed (i.i.d.) samples \((x^{(i)},y^{(i)})_{i = 1}^{N}\) from \(p\). Our goal is to use this data to learn a more structured version of the channel \(\kappa\), by first "compressing" the input \(X\) to an intermediate representation variable \(Z\) and subsequently mapping the representation back to the output \(Y\). The presence of an intermediate representation can be regarded as a bottleneck, a model selection problem, or a regularization strategy. We define a representation model and an inference model using two parameterized families of channels \(e(z|x)\) and \(d(y|z)\). We will refer to \(e(z|x)\) and \(d(y|z)\) as an encoder and a decoder.
The encoder-decoder pair induces a model \(\widehat{\kappa} (y|x) = \int d(y|z)e(z|x)dz\). Equivalently, we write \(\widehat{\kappa} = d\circ e\). Given a representation, we want the decoder to be as powerful as the original channel \(\kappa\) in terms of its ability to recover the output. The deficiency of a decoder \(d\) w.r.t. \(\kappa\) quantifies the extent to which any pre-processing of \(d\) (by way of randomized encodings) can be used to approximate \(\kappa\) (in the KL-distance sense). Let \(\mathsf{M}(\mathcal{X};\mathcal{Y})\) denote the space of all channels from \(\mathcal{X}\) to \(\mathcal{Y}\). We define the deficiency of \(d\) w.r.t. \(\kappa\) as follows.

Definition 1. Given the channel \(\kappa \in \mathsf{M}(\mathcal{X};\mathcal{Y})\) from \(X\) to \(Y\), and a decoder \(d\in \mathsf{M}(\mathcal{Z};\mathcal{Y})\) from some \(Z\) to \(Y\), the deficiency of \(d\) w.r.t. \(\kappa\) is defined as

\[\delta^{\pi}(d,\kappa) = \min_{e\in \mathsf{M}(\mathcal{X};\mathcal{Z})}D_{\mathrm{KL}}(\kappa \| d\circ e|\pi). \quad (1)\]

Here \(D_{\mathrm{KL}}(\cdot \| \cdot |\cdot)\) is the conditional KL divergence (Csiszár & Körner, 2011), and \(\pi\) is an input distribution over \(\mathcal{X}\). The definition is similar in spirit to Lucien Le Cam's notion of weighted deficiencies

<--- Page Split --->

of one channel w.r.t. another (Le Cam, 1964; Torgersen, 1991, Section 6.2) and its recent generalization by Raginsky (2011).

We propose to train the model by minimizing the deficiency of \(d\) w.r.t. \(\kappa\) subject to a regularization that penalizes complex representations. The regularization is achieved by limiting the rate \(I(Z;X)\), the mutual information between the representation and the raw inputs. We call our method the Deficiency Bottleneck (DB). The DB minimizes the following objective over all tuples \((e \in \mathsf{M}(\mathcal{X}; \mathcal{Z}), d \in \mathsf{M}(\mathcal{Z}; \mathcal{Y}))\):

\[\mathcal{L}_{DB}(e,d):= \delta^{\pi}(d,\kappa) + \beta I(Z;X). \quad (2)\]

The parameter \(\beta \geq 0\) allows us to adjust the level of regularization. For any distribution \(r(z)\), the rate term admits a simple variational upper bound (Csiszár & Körner, 2011, Eq. (8.7)):

\[I(Z;X)\leq \int p(x,z)\log \frac{e(z|x)}{r(z)} dxdz. \quad (3)\]

Let \(\hat{p}_{\mathrm{data}}\) be the empirical distribution of the data (input-output pairs). By noting that \(\delta^{\pi}(d,\kappa) \leq D_{\mathrm{KL}}(\kappa \| d\circ e|\pi)\) for any \(e \in \mathsf{M}(\mathcal{X}; \mathcal{Z})\), and ignoring (unknown) data-dependent constants, we obtain the following optimization objective, which we call the Variational Deficiency Bottleneck (VDB) objective:

\[\mathcal{L}_{VDB}(e,d):= \mathbb{E}_{(x,y)\sim \hat{p}_{\mathrm{data}}}\left[-\log ((d\circ e)(y|x)) + \beta D_{\mathrm{KL}}(e(Z|x)\| r(Z))\right]. \quad (4)\]

The computation is simplified by defining \(r(z)\) to be a standard multivariate Gaussian distribution \(\mathcal{N}(0, I)\), and using an encoder of the form \(e(z|x) = \mathcal{N}(z|f_{\phi}(x))\), where \(f_{\phi}\) is a neural network that outputs the parameters of a Gaussian distribution. Using the reparametrization trick (Kingma & Welling, 2013; Rezende et al., 2014), we then write \(e(z|x)dz = p(\epsilon)d\epsilon\), where \(z = f(x, \epsilon)\) is a function of \(x\) and the realization \(\epsilon\) of a standard normal distribution. This allows us to do stochastic backpropagation through a single sample \(z\).
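For concreteness, the following is a minimal sketch of this construction; the interfaces of \(f_{\phi}\) (returning means and log-variances) and of the decoder (returning the probabilities \(d(y|z)\)) are our assumptions, not a reference implementation.

```python
import torch

def encode_and_kl(f_phi, x, M):
    """Reparametrized Gaussian encoder with analytic KL to the prior r(z) = N(0, I)."""
    mu, logvar = f_phi(x).chunk(2, dim=-1)   # split network output into mean/log-variance
    std = torch.exp(0.5 * logvar)
    eps = torch.randn(M, *mu.shape)          # M draws of eps ~ N(0, I)
    z = mu + std * eps                       # z = f(x, eps), differentiable in phi
    # Closed-form KL divergence between a diagonal Gaussian and the standard normal:
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1)
    return z, kl

def vdb_loss(f_phi, decoder, x, y, beta, M=6):
    """Monte Carlo estimate of the VDB objective in Eq. (4)."""
    z, kl = encode_and_kl(f_phi, x, M)
    p_y = decoder(z, y)                      # assumed to return d(y | z_j), shape (M, batch)
    # Log of the M-sample average of d(y|z); moving the average outside the
    # log would instead give the (larger) VIB loss.
    nll = -torch.log(p_y.mean(dim=0))
    return (nll + beta * kl).mean()
```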
The KL term admits an analytic expression for the choice of a Gaussian \(r(z)\) and Gaussian encoders. We train the model by minimizing the following empirical objective:

\[\frac{1}{N}\sum_{i = 1}^{N}\left[-\log \Big(\frac{1}{M}\sum_{j = 1}^{M}d\big(y^{(i)}\big|f(x^{(i)},\epsilon^{(j)})\big)\Big) + \beta D_{\mathrm{KL}}(e(Z|x^{(i)})\| r(Z))\right]. \quad (5)\]

For training, we choose a mini-batch size of 100. For Monte Carlo estimates of the expectation inside the log, we choose \(M = 3, 6, 12\) samples from the encoding distribution. We note that the Variational Information Bottleneck (VIB) (Alemi et al., 2016) leads to a similar-looking objective function, with the only difference that the sum over \(j\) is outside of the log. By Jensen's inequality, the VIB loss is an upper bound on our loss. If one uses a single sample from the encoding distribution (i.e., \(M = 1\)), the VDB and the VIB objective functions coincide.

The average log-loss and the rate term in the VDB objective (4) are the two fundamental quantities that govern the probability of error when the model is a classifier. For a discussion of these relations, see Appendix A.

## 3 BLACKWELL SUFFICIENCY AND CHANNEL DEFICIENCY

In this section, we discuss an intuitive geometric interpretation of the deficiency in the space of probability distributions over the output variable. We also give an operational interpretation of the deficiency as a deviation from Blackwell sufficiency (in the KL-distance sense). Finally, we discuss its relation to the log-loss.

### 3.1 DEFICIENCY AND DECISION GEOMETRY

We first formulate the learning task as a decision problem. We show that \(\delta^{\pi}(d, \kappa)\) quantifies the gap in the optimal risks of decision problems when using the channel \(d\) rather than \(\kappa\).

<--- Page Split --->

Let \(\mathcal{X}\), \(\mathcal{Y}\) denote the spaces of possible inputs and outputs. In the following, we assume that \(\mathcal{X}\) and \(\mathcal{Y}\) are finite. Let \(\mathbb{P}_{\mathcal{Y}}\) be the set of all distributions on \(\mathcal{Y}\). For every \(x\in \mathcal{X}\), define \(\kappa_{x}\in \mathbb{P}_{\mathcal{Y}}\) as \(\kappa_{x}(y) = \kappa (y|x)\), \(\forall y\in \mathcal{Y}\). Nature draws \(x\sim \pi\) and \(y\sim \kappa_{x}\). The learner observes \(x\) and quotes a distribution \(q_{x}\in \mathbb{P}_{\mathcal{Y}}\) that expresses her uncertainty about the true value \(y\). The quality of a quote \(q_{x}\) in relation to \(y\) is measured by an extended real-valued loss function called the score, \(\ell \colon \mathcal{Y}\times \mathbb{P}_{\mathcal{Y}}\to \mathbb{R}\). For background on such special kinds of loss functions see, e.g., Grünwald et al. (2004); Gneiting & Raftery (2007); Parry et al. (2012). Ideally, the quote \(q_{x}\) should be as close as possible to the true conditional distribution \(\kappa_{x}\). This is achieved by minimizing the expected loss \(L(\kappa_{x},q_{x}):= \mathbb{E}_{y\sim \kappa_{x}}\ell (y,q_{x})\), for all \(x\in \mathcal{X}\). The score is called proper if \(\kappa_{x}\in \arg \min_{q_{x}\in \mathbb{P}_{\mathcal{Y}}}L(\kappa_{x},q_{x})\). Define the Bayes act against \(\kappa_{x}\) as the optimal quote

\[q_{x}^{*}:= \arg \min_{q_{x}\in \mathbb{P}_{\mathcal{Y}}}L(\kappa_{x},q_{x}).\]

If multiple Bayes acts exist, then select one arbitrarily. Define the Bayes risk for the distribution \(p_{X Y}(x,y) = \pi (x)\kappa (y|x)\) as \(R(p_{X Y},\ell):= \mathbb{E}_{x\sim \pi}L(\kappa_{x},q_{x}^{*})\).
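To make these definitions concrete, here is a small numerical sketch (with a toy channel of our own choosing) verifying that, under the log-loss score introduced next, quoting \(q_x = \kappa_x\) is a Bayes act and the Bayes risk equals the conditional entropy \(H(Y|X)\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy channel kappa with 3 inputs and 4 outputs; row x is the distribution kappa_x.
kappa = rng.dirichlet(np.ones(4), size=3)
pi = np.array([0.5, 0.3, 0.2])                 # input distribution

def expected_log_loss(kappa_x, q):
    """L(kappa_x, q) = E_{y ~ kappa_x}[-log q(y)] under the log-loss score."""
    return -(kappa_x * np.log(q)).sum()

# Quoting q_x = kappa_x attains the Bayes risk, which is H(Y|X).
bayes_risk = sum(p * expected_log_loss(kx, kx) for p, kx in zip(pi, kappa))

# Any other quote does at least as badly (the log-loss is strictly proper).
for kx in kappa:
    q = rng.dirichlet(np.ones(4))
    assert expected_log_loss(kx, q) >= expected_log_loss(kx, kx)

print("Bayes risk = H(Y|X) =", bayes_risk)
```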
A score is strictly proper if the Bayes act is unique. An example of a strictly proper score is the log-loss function, defined as \(\ell_{L}(y,q):= -\log q(y)\). For the log-loss, the Bayes act is \(q_{x}^{*} = \kappa_{x}\) and the Bayes risk is just the conditional entropy:

\[R(p_{X Y},\ell_{L}) = \mathbb{E}_{x\sim \pi}\mathbb{E}_{y\sim \kappa_{x}}\big[-\log q_{x}^{*}(y)\big] = \mathbb{E}_{x\sim \pi}\mathbb{E}_{y\sim \kappa_{x}}\big[-\log \kappa_{x}(y)\big] = H(Y|X). \quad (6)\]

Given a representation \(z\in \mathcal{Z}\) (output by some encoder), when using the decoder \(d\), the learner is constrained to quote a distribution from a subset of \(\mathbb{P}_{\mathcal{Y}}\). Let \(C = \mathrm{conv}\{d_{z}:z\in \mathcal{Z}\} \subset \mathbb{P}_{\mathcal{Y}}\) be the convex hull of the points \(\{d_{z}\}_{z\in \mathcal{Z}}\subset \mathbb{P}_{\mathcal{Y}}\). The Bayes act when restricted to \(C\) is

\[q_{x_{Z}}^{*}:= \arg \min_{q_{x}\in C}\mathbb{E}_{y\sim \kappa_{x}}\big[-\log q_{x}(y)\big]. \quad (7)\]

\(q_{x_{Z}}^{*}\) has an interpretation as the reverse \(I\)-projection of \(\kappa_{x}\) onto the convex set of probability measures \(C\subset \mathbb{P}_{\mathcal{Y}}\) (Csiszár & Matuš, 2003). We call the associated Bayes risk the projected Bayes risk \(R_{Z}(p_{X Y},\ell_{L})\) and the associated conditional entropy the projected conditional entropy \(H_{Z}(Y|X)\):

\[R_{Z}(p_{X Y},\ell_{L}) = \mathbb{E}_{x\sim \pi}\mathbb{E}_{y\sim \kappa_{x}}\big[-\log q_{x_{Z}}^{*}(y)\big] = H_{Z}(Y|X). \quad (8)\]

The gap in the optimal risks, \(\Delta R:= R_{Z}(p_{X Y},\ell_{L}) - R(p_{X Y},\ell_{L})\), between making a decision based on an intermediate representation and one based on the input data is just the deficiency. This follows from noting that

\[\begin{array}{r l} & {\Delta R = H_{Z}(Y|X) - H(Y|X) = \sum_{x\in \mathcal{X}}\pi (x)\min_{q_{x}\in C\subset \mathbb{P}_{\mathcal{Y}}}D_{\mathrm{KL}}(\kappa_{x}\| q_{x})}\\ & {\qquad = \min_{e\in \mathsf{M}(\mathcal{X};\mathcal{Z})}\sum_{x\in \mathcal{X}}\pi (x)D_{\mathrm{KL}}(\kappa_{x}\| d\circ e_{x})}\\ & {\qquad = \min_{e\in \mathsf{M}(\mathcal{X};\mathcal{Z})}D_{\mathrm{KL}}(\kappa \| d\circ e|\pi) = \delta^{\pi}(d,\kappa).} \end{array} \quad (9)\]

\(\Delta R\) vanishes if and only if the optimal quote against \(d_{z}\), \(q_{x_{Z}}^{*}\), matches \(\kappa_{x}\) for all \(x\). This gives an intuitive geometric interpretation of a vanishing deficiency in the space of distributions over \(\mathcal{Y}\).

Given a decoder channel \(d\), since \(\delta^{\pi}(d,\kappa)\leq D_{\mathrm{KL}}(\kappa \| d\circ e|\pi)\) for any \(e\in \mathsf{M}(\mathcal{X};\mathcal{Z})\), the loss term in the VDB objective is a variational upper bound on the projected conditional entropy \(H_{Z}(Y|X)\). However, this loss is still a lower bound on the standard cross-entropy loss in the VIB objective (Alemi et al., 2016), i.e.,

\[\mathbb{E}_{(x,y)\sim \hat{p}_{\mathrm{data}}}\big[-\log d\circ e(y|x)\big]\leq \mathbb{E}_{(x,y)\sim \hat{p}_{\mathrm{data}}}\left[\int -e(z|x)\log d(y|z)d z\right]. \quad (10)\]

This follows simply from the convexity of the negative logarithm.

<--- Page Split --->

### 3.2 DEFICIENCY AS A KL-DISTANCE FROM INPUT-BLACKWELL SUFFICIENCY

In a seminal paper, David Blackwell (1953) asked the following question: if a learner wishes to make an optimal decision about some target variable of interest and she can choose between two channels with a common input alphabet, which one should she prefer?
She can rank the channels by comparing her optimal risks: she will always prefer one channel over the other if her optimal risk when using the former is at most that when using the latter for any decision problem. She can also rank the variables purely probabilistically: she will always prefer the former if the latter is an output-degraded version of the former, in the sense that she can simulate a single use of the latter by randomizing at the output of the former. Blackwell showed that these two criteria are equivalent.

Very recently, Nasser (2017) asked the same question, only now the learner has to choose between two channels with a common output alphabet. Given two channels, \(\kappa \in \mathsf{M}(\mathcal{X};\mathcal{Y})\) and \(d\in \mathsf{M}(\mathcal{Z};\mathcal{Y})\), we say that \(\kappa\) is input-degraded from \(d\), and write \(d\succ_{y}\kappa\), if \(\kappa = d\circ e\) for some \(e\in \mathsf{M}(\mathcal{X};\mathcal{Z})\). Stated another way, \(d\) can be reduced to \(\kappa\) by applying a randomization at its input. Nasser (2017) gave a characterization of input-degradedness that is similar to Blackwell's theorem (Blackwell, 1953). We say that \(d\) is input-Blackwell sufficient for \(\kappa\) if \(d\succ_{y}\kappa\). Input-Blackwell sufficiency induces a preorder on the set of all channels with the same output alphabet. In practice, most channels are incomparable, i.e., one cannot be reduced to another by a randomization. When this is the case, the deficiency quantifies how far the true channel \(\kappa\) is from being a randomization (by way of all input encodings) of the decoder \(d\). See Appendix B for a brief summary of the Blackwell-Le Cam theory.

### 3.3 DEFICIENCY AND THE LOG-LOSS

When \(Y - X - Z\) is a Markov chain, the conditional mutual information \(I(Y;X|Z)\) is the Bayes risk gap for the log-loss. This is apparent from noting that \(I(Y;X|Z) = H(Y|Z) - H(Y|XZ) = H(Y|Z) - H(Y|X) = R(p_{Z Y},\ell_{L}) - R(p_{X Y},\ell_{L})\). This risk gap is closely related to Blackwell's original notion of sufficiency. Since the log-loss is strictly proper, a vanishing \(I(Y;X|Z)\) implies that the risk gap is zero for all loss functions. This suggests that minimizing the log-loss risk gap under a suitable regularization constraint is a potential recipe for constructing representations \(Z\) that are approximately sufficient for \(X\) w.r.t. \(Y\), since in the limit when \(I(Y;X|Z) = 0\) one achieves \(I(Y;Z) = I(Y;X)\). This is indeed the basis for the IB algorithm (Tishby et al., 1999) and its generalization, clustering with Bregman divergences (Banerjee et al., 2005; van Rooyen & Williamson, 2015; 2014).

One can also approximate a sufficient statistic by minimizing deficiencies instead. This is motivated by the following proposition.

Proposition 2. When \(Y - X - Z\) is a Markov chain, \(\delta^{\pi}(d,\kappa) = 0 \iff I(Y;X|Z) = 0\).

In general, for the bottleneck paradigms involving the conditional mutual information (IB) and the deficiency (DB), we have the following relationship:

\[\min_{e(z|x):I(Y;X|Z)\leq \epsilon}I(X;Z)\geq \min_{e(z|x):\delta^{\pi}(d,\kappa)\leq \epsilon}I(X;Z). \quad (11)\]

Our experiments corroborate that, for achieving the same level of sufficiency, one needs to store less information about the input \(X\) when minimizing deficiencies than when minimizing the conditional mutual information.

## 4 EXPERIMENTS

We present experiments on the MNIST dataset (LeCun & Cortes, 2010).
Classification on MNIST is a very well studied problem. The main objective of our experiments is to evaluate the information-theoretic properties of the representations learned by the VDB, and whether it can match the classification accuracy provided by other bottleneck methods.

For the encoder, we use a fully connected feedforward network with 784 input units, two hidden layers of 1024 ReLUs each, and 512 linear output units. The deterministic output of this network is interpreted as the vector of means and variances of a 256-dimensional Gaussian distribution. The decoder is a

<--- Page Split --->

![](images/5_0.jpg) <center>Figure 2: Effect of the regularization parameter \(\beta\). The upper left panel shows the accuracy on train and test data after training the VDB for different values of \(M\). Here, \(M\) is the number of encoder output samples used in the training objective. \(L\) is the number of encoder output samples used for evaluating the classifier. The upper right panel traces the deficiency bottleneck curve for different values of \(\beta\) (see text). The curves are averages over 5 repetitions of the experiment. Each curve corresponds to one value of \(M = 1, 3, 6, 12\). Notice the generalization gap for small values of \(\beta\) (towards the right of the plot). The lower right panel plots the corresponding information bottleneck curve. The lower left panel plots the minimality term vs. \(\beta\). Evidently, the levels of compression vary depending on \(M\). Higher values of \(M\) (our method) lead to a more compressed representation. For \(M = 1\), the VDB and the VIB models coincide. </center>

Table 1: Comparison of test accuracy values for different values of \(\beta\) and \(M\). \(K\) is the size of the bottleneck and \(L = 12\). We see a slight improvement in the test accuracies for higher values of \(M\).

<table><tr><td rowspan="2">β</td><td rowspan="2">K</td><td colspan="4">M</td></tr><tr><td>1</td><td>3</td><td>6</td><td>12</td></tr><tr><td rowspan="2">10^-5</td><td>256</td><td>0.9869</td><td>0.9873</td><td>0.9885</td><td>0.9878</td></tr><tr><td>2</td><td>0.9575</td><td>0.9678</td><td>0.9696</td><td>0.9687</td></tr><tr><td rowspan="2">10^-3</td><td>256</td><td>0.9872</td><td>0.9879</td><td>0.9875</td><td>0.9882</td></tr><tr><td>2</td><td>0.9632</td><td>0.9726</td><td>0.9790</td><td>0.9702</td></tr></table>

simple logistic regression model with a softmax layer. These are the same settings as in the model used by Alemi et al. (2016). We implement the algorithm in TensorFlow and train for 200 epochs using the Adam optimizer.

As can be seen from the upper left panel in Figure 2, the test accuracy is stable with increasing \(M\). Here, \(M\) is the number of encoder output samples used in the training objective. We note that \(M = 1\) is just the VIB model (Alemi et al., 2016). \(L\) is the number of encoder output samples used for evaluating the classifier (i.e., we use \(\frac{1}{L} \sum_{j = 1}^{L} d(y|z^{(j)})\) where \(z^{(j)} \sim e(z|x)\)). Numerical values of the
</center> test accuracies are provided in Table 1 for different values of \(\beta\) and \(M\). We see a slight improvement in the test accuracies for higher values of \(M\). See Figure 5 in Appendix D for the train and test accuracies for \(L = 3\) and \(L = 12\).

The traditional IB paradigm traces the mutual information \(I(Z; Y)\) between representation and output (sufficiency) vs. the mutual information \(I(Z; X)\) between representation and input (minimality), for different values of the regularization parameter \(\beta\). This curve is called the information bottleneck curve (Tishby et al., 1999). In the case of the VDB, we define the corresponding sufficiency term as \(J(Z; Y) := H(Y) - \mathbb{E}_{(x, y) \sim \hat{p}_{\mathrm{data}}}[-\log (\int d(y|z) e(z|x) dz)]\). Here \(H(Y) = \log_{2}(10)\) is the entropy of the output, which has 10 classes. In our method, "more informative" means "less deficient". The upper right panel in Figure 2 shows the deficiency bottleneck curve, which traces \(J(Z; Y)\) vs. \(I(Z; X)\) for different values of \(\beta\) at the end of training. For orientation, lower values of \(\beta\) have higher values of \(I(Z; X)\) (towards the right of the plot). For small values of \(\beta\), when the effect of the regularization is negligible, the bottleneck allows more information from the input through the representation. In this case, \(J(Z; Y)\) increases on the training set, but not necessarily on the test set. This is manifest in the gap between the train and test curves, indicative of a degradation in generalization. For intermediate values of \(\beta\), the gap is smaller for larger values of \(M\) (our method). The lower right panel plots the corresponding information bottleneck curve. The lower left panel in Figure 2 plots the minimality term \(I(Z; X)\) vs. \(\beta\). We see that, for \(\beta\) in the range between \(10^{-8}\) and \(10^{-4}\), for the same level of sufficiency, setting \(M = 12\) consistently achieves more compression of the input than the setting \(M = 1\). The dynamics of the information quantities during training are also interesting; we provide figures on these in Appendix D.

In order to visualize the representations, we also train the VDB on MNIST with a 2-dimensional representation. We use the same settings as before, with the only difference that the dimension of

<--- Page Split --->

![](images/7_0.jpg) <center>Figure 4: Learning curves for MNIST, where the encoder is an MLP of size \(784 - 1024 - 1024 - 2K\), the last layer being a \(K = 2\) dimensional diagonal Gaussian. The decoder is simply a softmax with 10 classes. The left panel plots the mutual information between the representation and the class label, \(I(Z;Y)\), against the mutual information between the representation at the last layer of the encoder and the input, \(I(Z;X)\), as training progresses. The former increases monotonically, while the latter increases and then decreases. The right panel shows the test accuracy as training progresses. </center>

the output layer of the encoder is 4, with two coordinates representing the mean and two a diagonal covariance matrix. The results are shown in Figure 3. For \(\beta = 10^{-5}\), the representations are well separated depending on the class. For related figures in the setting of unsupervised learning, see Appendix E. The learning dynamics of the mutual information and classification accuracy are shown in Figure 4.
The left panel has an interpretation in terms of a phase where the model is mainly fitting the input-output relationship, and hence increasing the mutual information \(I(Z;Y)\), followed by a compression phase, where training mainly reduces \(I(Z;X)\), leading to better generalization. The right panel shows the test accuracy as training progresses. Higher values of \(M\) (our method) usually lead to better accuracy. An exception is when the number \(L\) of posterior samples for classification is large.

## 5 DISCUSSION

We have formulated a bottleneck method based on channel deficiencies. The deficiency of a decoder with respect to the true channel between input and output quantifies how well a randomization at the decoder input (by way of stochastic encodings) can be used to simulate the true channel. The VDB has a natural variational formulation which recovers the VIB in the limit of a single sample of the encoder output. Experiments demonstrate that the VDB can learn more compressed representations while retaining the same discriminative capacity. The method has a statistical decision-theoretic appeal. Moreover, the resulting variational objective of the DB can be implemented as an easy modification of the VIB, with little to no computational overhead.

Given two channels that convey information about a target variable of interest, two different notions of deficiencies arise, depending on whether the target resides at the common input or the common output of the given channels. When the target is at the common output of the two channels, as in a typical bottleneck setting (see Figure 1), our Definition 1 has a natural interpretation as a KL-divergence from input-Blackwell sufficiency (Nasser, 2017). Here sufficiency is achieved by applying a randomization at the input of the decoder with the goal of simulating the true channel. The notion of input-Blackwell sufficiency contrasts with Blackwell's original notion of sufficiency (Blackwell, 1953) in the sense that Blackwell's theory compares two channels with a common input. One can again define a notion of deficiency in this setting (see Appendix B for a discussion of deficiencies in the classical Blackwell setup). The associated channels (one from \(\mathcal{Y}\) to \(\mathcal{Z}\) and the other from \(\mathcal{Y}\) to \(\mathcal{X}\)) do not, however, have a natural interpretation in a typical bottleneck setting. In contrast, the input-Blackwell setup appears to be much more intuitive in this context.

<--- Page Split --->

The more detailed view of information emerging from this analysis explains various effects and opens the door to multiple generalizations. In the spirit of the VDB, one can formulate a deficiency autoencoder as well (see the sketch in Appendix E). On a related note, we mention that the deficiency is a lower bound on a quantity called the unique information (Bertschinger et al., 2014; Banerjee et al., 2018a) (see details in Appendix C). An alternating minimization algorithm similar in spirit to the classical Blahut-Arimoto algorithm (Blahut, 1972) has been proposed to compute this quantity (Banerjee et al., 2018b). A deep neural network implementation of such an algorithm remains a challenge. In the limit \(\beta \rightarrow 0\), the VDB is a step towards estimating the unique information. This might be of independent interest in improving the practicality of the theory of information decompositions.

## REFERENCES

Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy.
Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016. ICLR 2017.

Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A. Saurous, and Kevin Murphy. Fixing a broken ELBO. In International Conference on Machine Learning, pp. 159-168, 2018.

Philip Bachman and Doina Precup. Training deep generative models: Variations on a theme. In NIPS Approximate Inference Workshop, 2015.

Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, and Joydeep Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6(Oct):1705-1749, 2005.

Pradeep Kr. Banerjee, Eckehard Olbrich, Jürgen Jost, and Johannes Rauh. Unique informations and deficiencies. arXiv preprint arXiv:1807.05103, 2018a. Allerton 2018.

Pradeep Kr. Banerjee, Johannes Rauh, and Guido Montufar. Computing the unique information. In Proc. IEEE ISIT, pp. 141-145. IEEE, 2018b.

Nils Bertschinger and Johannes Rauh. The Blackwell relation defines no lattice. In Proc. IEEE ISIT, pp. 2479-2483. IEEE, 2014.

Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, Jürgen Jost, and Nihat Ay. Quantifying unique information. Entropy, 16(4):2161-2183, 2014.

David Blackwell. Equivalent comparisons of experiments. The Annals of Mathematical Statistics, 24(2):265-272, 1953.

Richard Blahut. Computation of channel capacity and rate-distortion functions. IEEE Transactions on Information Theory, 18(4):460-473, 1972.

Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: A survey of some recent advances. ESAIM: Probability and Statistics, 9:323-375, 2005.

Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015. ICLR 2016.

Christopher P. Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in \(\beta\)-VAE. arXiv preprint arXiv:1804.03599, 2018.

Matthew Chalk, Olivier Marre, and Gasper Tkacik. Relevant sparse codes with variational information bottleneck. In Advances in Neural Information Processing Systems, pp. 1957-1965, 2016.

Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731, 2016. ICLR 2017.

Chris Cremer, Quaid Morris, and David Duvenaud. Reinterpreting importance-weighted autoencoders. arXiv preprint arXiv:1704.02916, 2017. Workshop track - ICLR 2017.

<--- Page Split --->

Imre Csiszár. A class of measures of informativity of observation channels. Periodica Mathematica Hungarica, 2(1-4):191-213, 1972.

Imre Csiszár and János Körner. Information theory: coding theorems for discrete memoryless systems. Cambridge University Press, 2011.

Imre Csiszár and František Matuš. Information projections revisited. IEEE Transactions on Information Theory, 49(6):1474-1490, 2003.

Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359-378, 2007.

David Gondek and Thomas Hofmann. Conditional information bottleneck clustering. In 3rd IEEE International Conference on Data Mining, Workshop on Clustering Large Data Sets, 2003.

Peter D. Grünwald, A. Philip Dawid, et al. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. The Annals of Statistics, 32(4):1367-1433, 2004.

Malte Harder, Christoph Salge, and Daniel Polani. A bivariate measure of redundant information.
Physical Review E, 87:012130, 2013.

Peter Harremoës and Naftali Tishby. The information bottleneck revisited or how to choose a good distortion measure. In Proc. IEEE ISIT, pp. 566-570. IEEE, 2007.

Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. \(\beta\)-VAE: Learning basic visual concepts with a constrained variational framework. 2017. ICLR 2017.

Hsiang Hsu, Shahab Asoodeh, Salman Salamatian, and Flavio P. Calmon. Generalizing bottleneck problems. In Proc. IEEE ISIT, pp. 531-535. IEEE, 2018.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013. ICLR 2013.

Artemy Kolchinsky, Brendan D. Tracey, and David H. Wolpert. Nonlinear information bottleneck. arXiv preprint arXiv:1705.02436, 2017.

János Körner and Katalin Marton. Comparison of two noisy channels. In Topics in information theory, volume 16, pp. 411-423. Colloquia Mathematica Societatis János Bolyai, Keszthely (Hungary), 1975.

Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, and Frank Wood. Auto-encoding sequential Monte Carlo. arXiv preprint arXiv:1705.10306, 2017. ICLR 2018.

Lucien Le Cam. Sufficiency and approximate sufficiency. The Annals of Mathematical Statistics, pp. 1419-1455, 1964.

Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/.

Friedrich Liese and Igor Vajda. On divergences and informations in statistics and information theory. IEEE Transactions on Information Theory, 52(10):4394-4412, 2006.

Christian Naesseth, Scott Linderman, Rajesh Ranganath, and David Blei. Variational sequential Monte Carlo. In International Conference on Artificial Intelligence and Statistics, pp. 968-977, 2018.

Rajai Nasser. On the input-degradedness and input-equivalence between channels. In Proc. IEEE ISIT, pp. 2453-2457. IEEE, 2017.

Matthew Parry, A. Philip Dawid, Steffen Lauritzen, et al. Proper local scoring rules. The Annals of Statistics, 40(1):561-592, 2012.

Maxim Raginsky. Shannon meets Blackwell and Le Cam: Channels, codes, and statistical experiments. In Proc. IEEE ISIT, pp. 1220-1224. IEEE, 2011.

<--- Page Split --->

Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, and Yee Whye Teh. Tighter variational bounds are not necessarily better. arXiv preprint arXiv:1802.04537, 2018. ICML 2018.

Danilo Jimenez Rezende and Fabio Viola. Taming VAEs. arXiv preprint arXiv:1810.00597, 2018.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pp. 1278-1286, 2014.

Ohad Shamir, Sivan Sabato, and Naftali Tishby. Learning and generalization with the information bottleneck. In International Conference on Algorithmic Learning Theory, pp. 92-107. Springer, 2008.

Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810, 2017.

Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. In Information Theory Workshop (ITW), 2015 IEEE, pp. 1-5. IEEE, 2015.

Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, pp. 368-377, 1999.

Erik Torgersen. Comparison of statistical experiments, volume 36. Cambridge University Press, 1991.
Brendan van Rooyen and Robert C. Williamson. Le Cam meets LeCun: Deficiency and generic feature learning. arXiv preprint arXiv:1402.4884, 2014.
Brendan van Rooyen and Robert C. Williamson. A theory of feature learning. arXiv preprint arXiv:1504.00083, 2015.
Matías Vera, Pablo Piantanida, and Leonardo Rey Vega. The role of information complexity and randomization in representation learning. arXiv preprint arXiv:1802.05355, 2018.
Hans S. Witsenhausen and Aaron D. Wyner. A conditional entropy bound for a pair of discrete random variables. IEEE Transactions on Information Theory, 21(5):493-501, 1975.
Shengjia Zhao, Jiaming Song, and Stefano Ermon. Towards deeper understanding of variational autoencoding models. arXiv preprint arXiv:1702.08658, 2017.

## APPENDIX

## A MISCLASSIFICATION ERROR AND THE AVERAGE LOG-LOSS

In a classification task, the goal is to use the training dataset to learn a classifier \(\widehat{\kappa}(y|x)\) that minimizes the probability of error under the true data distribution, defined as follows.

\[P_{\mathcal{E}}(\widehat{\kappa}) := 1 - \mathbb{E}_{(x,y)\sim p}\left[\widehat{\kappa}(y|x)\right]. \quad (12)\]

It is well known that the optimal classifier achieving the smallest probability of error is the Bayes classifier (Boucheron et al., 2005). Since we do not know the true data distribution, we learn based on the empirical error instead. Directly minimizing the empirical probability of error over the training dataset is in general an NP-hard problem. In practice, one minimizes a surrogate loss function that is a convex upper bound on \(P_{\mathcal{E}}\). A natural surrogate is the average log-loss function \(\mathbb{E}_{(x,y)\sim p}\left[-\log \widehat{\kappa}(y|x)\right]\). When the model is \(\widehat{\kappa} = d\circ e\), the following upper bounds are immediate from Jensen's inequality.

\[\begin{array}{rl} & P_{\mathcal{E}}(\widehat{\kappa}) \leq 1 - \exp\big(-\mathbb{E}_{(x,y)\sim p}\big[-\log d\circ e(y|x)\big]\big)\\ & \qquad \leq 1 - \exp\big(-\mathbb{E}_{(x,y)\sim p}\mathbb{E}_{z\sim e(z|x)}\big[-\log d(y|z)\big]\big) \end{array} \quad (13)\]

The bound obtained from the standard cross-entropy loss is evidently weaker than the one obtained from the average log-loss. A lower bound on the probability of error is controlled by a convex functional of the mutual information \(I(Z;X)\) between the representation and the raw inputs (Vera et al., 2018, see, e.g., Lemma 4). The average log-loss and the rate term in the VDB objective equation 4 are two fundamental quantities that govern the probability of error.
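To make the comparison in equation 13 concrete, here is a minimal NumPy sketch that evaluates both upper bounds on synthetic per-sample decoder probabilities; the arrays and sizes are invented for illustration and merely stand in for a trained encoder-decoder pair.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 1000, 8  # number of examples, encoder samples per example

# Synthetic decoder probabilities d(y_i | z_ij) assigned to the true label,
# standing in for samples from a trained encoder/decoder pair.
d_probs = rng.uniform(0.05, 1.0, size=(N, M))

# Average log-loss of the composite model k_hat = d o e (expectation inside the log).
avg_log_loss = -np.mean(np.log(d_probs.mean(axis=1)))

# Standard cross-entropy loss (expectation outside the log).
cross_entropy = -np.mean(np.log(d_probs))

# The two upper bounds on the probability of error from equation 13.
bound_log_loss = 1.0 - np.exp(-avg_log_loss)
bound_cross_entropy = 1.0 - np.exp(-cross_entropy)

# Jensen's inequality: the cross-entropy bound is never tighter.
assert avg_log_loss <= cross_entropy + 1e-12
print(f"log-loss bound: {bound_log_loss:.4f} <= cross-entropy bound: {bound_cross_entropy:.4f}")
```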
## B CLASSICAL THEORY OF COMPARISON OF CHANNELS

In this section, we discuss the classical theory of comparison of channels due to Blackwell (1953) and its extension by Le Cam (1964); Torgersen (1991), and more recently by Raginsky (2011).

Suppose that a learner wishes to predict the value of a random variable \(Y\) that takes values in a set \(\mathcal{Y}\). She has a set of actions \(\mathcal{A}\). Each action incurs a loss \(\ell(y,a)\) that depends on the true state \(y\) of \(Y\) and the chosen action \(a\). Let \(\pi_{Y}\) encode the learner's uncertainty about the true state \(y\). The tuple \((\pi_{Y},\mathcal{A},\ell)\) is called a decision problem. Before choosing her action, the learner observes a random variable \(X\) through a channel \(\kappa\in\mathsf{M}(\mathcal{Y};\mathcal{X})\). An ideal learner chooses a strategy \(\rho\in\mathsf{M}(\mathcal{X};\mathcal{A})\) that minimizes her expected loss or risk \(R(\pi_{Y},\kappa,\rho,\ell) := \mathbb{E}_{y\sim\pi_{Y}}\mathbb{E}_{a\sim\rho\circ\kappa_{y}}\ell(y,a)\). The optimal risk when using the channel \(\kappa\) is \(R(\pi_{Y},\kappa,\ell) := \min_{\rho\in\mathsf{M}(\mathcal{X};\mathcal{A})} R(\pi_{Y},\kappa,\rho,\ell)\).

Suppose now that the learner has to choose between \(X\) and another random variable \(Z\) that she observes through a second channel \(\mu\in\mathsf{M}(\mathcal{Y};\mathcal{Z})\) with common input \(Y\). She can always discard \(X\) in favor of \(Z\) if, knowing \(Z\), she can simulate a single use of \(X\) by randomly sampling an \(x'\in\mathcal{X}\) after each observation \(z\in\mathcal{Z}\).

Definition 3. We say that \(X\) is output-degraded from \(Z\) w.r.t. \(Y\), denoted \(Z\sqsupseteq_{Y}^{I}X\), if there exists a random variable \(X'\) such that the pairs \((Y,X)\) and \((Y,X')\) are stochastically indistinguishable, and \(Y - Z - X'\) is a Markov chain.

She can also discard \(X\) if her optimal risk when using \(Z\) is at most that when using \(X\) for any decision problem. Write \(Z\sqsupseteq_{Y}X\) if \(R(\pi_{Y},\kappa,\ell)\geq R(\pi_{Y},\mu,\ell)\) for any decision problem. Blackwell (1953) showed the equivalence of these two relations.

Theorem 4 (Blackwell's Theorem). \(Z\sqsupseteq_{Y}^{I}X \iff Z\sqsupseteq_{Y}X\).

Write \(\mu\sqsupseteq_{Y}\kappa\) if \(\kappa = \lambda\circ\mu\) for some \(\lambda\in\mathsf{M}(\mathcal{Z};\mathcal{X})\). If \(\pi_{Y}\) has full support, then it is easy to check that \(\mu\sqsupseteq_{Y}\kappa \iff Z\sqsupseteq_{Y}^{I}X\) (Bertschinger & Rauh, 2014, Theorem 4). The learner can also compare \(\kappa\) and \(\mu\) by comparing the mutual informations \(I(Y;X)\) and \(I(Y;Z)\) between the common input \(Y\) and the channel outputs \(X\) and \(Z\).

Definition 5. \(\mu\) is said to be more capable than \(\kappa\), denoted \(\mu\sqsupseteq_{Y}^{mc}\kappa\), if \(I(Y;Z)\geq I(Y;X)\) for every probability distribution on \(\mathcal{Y}\).

It follows from the data processing inequality that \(\mu\sqsupseteq_{Y}\kappa \implies \mu\sqsupseteq_{Y}^{mc}\kappa\). However, the converse implication is not true in general (Körner & Marton, 1975). The converse to Blackwell's theorem states that if the relation \(Z\sqsupseteq_{Y}^{I}X\) does not hold, then there exists a set of actions \(\mathcal{A}\) and a loss function \(\ell(y,a)\in\mathbb{R}^{\mathcal{Y}\times\mathcal{A}}\) such that \(R(\pi_{Y},\kappa,\ell) < R(\pi_{Y},\mu,\ell)\). Le Cam introduced the concept of a deficiency of \(\mu\) w.r.t. \(\kappa\) to express this deficit in optimal risks (Le Cam, 1964) in terms of an approximation of \(\kappa\) from \(\mu\) via Markov kernels.

Definition 6. The Le Cam deficiency of \(\mu\) w.r.t. \(\kappa\) is
\[\delta(\mu,\kappa) := \inf_{\lambda\in\mathsf{M}(\mathcal{Z};\mathcal{X})}\sup_{y\in\mathcal{Y}}\|\lambda\circ\mu_{y} - \kappa_{y}\|_{\mathrm{TV}}, \quad (14)\]
where \(\|\cdot\|_{\mathrm{TV}}\) denotes the total variation distance.

When the distribution of the common input to the channels is fixed, one can define a weighted deficiency (Torgersen, 1991, Section 6.2).
Definition 7. Given \(Y\sim\pi_{Y}\), the weighted Le Cam deficiency of \(\mu\) w.r.t. \(\kappa\) is
\[\delta^{\pi}(\mu,\kappa) := \inf_{\lambda\in\mathsf{M}(\mathcal{Z};\mathcal{X})}\mathbb{E}_{y\sim\pi_{Y}}\|\lambda\circ\mu_{y} - \kappa_{y}\|_{\mathrm{TV}}. \quad (15)\]

Le Cam's randomization criterion (Le Cam, 1964) shows that deficiencies quantify the maximal gap in the optimal risks of decision problems when using the channel \(\mu\) rather than \(\kappa\).

Theorem 8 (Le Cam (1964)). Fix \(\mu\in\mathsf{M}(\mathcal{Y};\mathcal{Z})\), \(\kappa\in\mathsf{M}(\mathcal{Y};\mathcal{X})\) and a probability distribution \(\pi_{Y}\) on \(\mathcal{Y}\), and write \(\|\ell\|_{\infty} = \max_{y,a}\ell(y,a)\). For every \(\epsilon > 0\), \(\delta^{\pi}(\mu,\kappa) < \epsilon\) if and only if \(R(\pi_{Y},\mu,\ell) - R(\pi_{Y},\kappa,\ell) < \epsilon\|\ell\|_{\infty}\) for any set of actions \(\mathcal{A}\) and any bounded loss function \(\ell\).

Raginsky (2011) introduced a broad class of deficiency-like quantities using the notion of a generalized divergence between probability distributions that satisfies a monotonicity property w.r.t. data processing. The family of \(f\)-divergences due to Csiszár belongs to this class (Liese & Vajda, 2006).

Definition 9. The \(f\)-deficiency of \(\mu\) w.r.t. \(\kappa\) is
\[\delta_{f}(\mu,\kappa) := \inf_{\lambda\in\mathsf{M}(\mathcal{Z};\mathcal{X})}\sup_{y\in\mathcal{Y}} D_{f}(\kappa_{y}\,\|\,\lambda\circ\mu_{y}). \quad (16)\]

Many common divergences, such as the KL divergence, the reverse-KL divergence, and the total variation distance, are \(f\)-divergences. When the channel \(\mu\) is such that its output is constant, no matter what the input, the corresponding \(f\)-deficiency is called the \(f\)-informativity (Csiszár, 1972). The \(f\)-informativity associated with the KL divergence is just the channel capacity, which has a geometric interpretation as an "information radius" (Csiszár & Körner, 2011). We can also define a weighted \(f\)-deficiency of \(\mu\) w.r.t. \(\kappa\).

Definition 10. The weighted \(f\)-deficiency of \(\mu\) w.r.t. \(\kappa\) is
\[\delta_{f}^{\pi}(\mu,\kappa) := \inf_{\lambda\in\mathsf{M}(\mathcal{Z};\mathcal{X})} D_{f}(\kappa\,\|\,\lambda\circ\mu\,|\,\pi_{Y}). \quad (17)\]

Specializing to the KL divergence, we have the following definition.

Definition 11. The weighted output deficiency of \(\mu\) w.r.t. \(\kappa\) is
\[\delta_{o}^{\pi}(\mu,\kappa) := \inf_{\lambda\in\mathsf{M}(\mathcal{Z};\mathcal{X})} D_{\mathrm{KL}}(\kappa\,\|\,\lambda\circ\mu\,|\,\pi_{Y}), \quad (18)\]
where the subscript \(o\) in \(\delta_{o}^{\pi}\) emphasizes the fact that the randomization is at the output of the channel \(\mu\).

Note that \(\delta_{o}^{\pi}(\mu,\kappa) = 0\) if and only if \(Z\sqsupseteq_{Y}^{I}X\), which captures the intuition that if \(\delta_{o}^{\pi}(\mu,\kappa)\) is small, then \(X\) is approximately output-degraded from \(Z\) w.r.t. \(Y\). Using Pinsker's inequality, we have
\[\delta^{\pi}(\mu,\kappa) \leq \sqrt{\frac{\ln(2)}{2}\,\delta_{o}^{\pi}(\mu,\kappa)}. \quad (19)\]
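For finite alphabets, the weighted deficiency in equation 15 can be estimated numerically. The sketch below is only a crude illustration (random toy channels, a generic optimizer over softmax-parameterized kernels \(\lambda\)); in practice the inner minimization over \(\lambda\) is a linear program and would be solved as such.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def row_stochastic(a):
    """Map an unconstrained matrix to a row-stochastic one via a row-wise softmax."""
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy channels with common input alphabet Y: kappa in M(Y;X), mu in M(Y;Z).
nY, nX, nZ = 3, 4, 4
kappa = row_stochastic(rng.normal(size=(nY, nX)))   # kappa[y, x]
mu    = row_stochastic(rng.normal(size=(nY, nZ)))   # mu[y, z]
pi    = np.full(nY, 1.0 / nY)                       # input distribution pi_Y

def weighted_tv_deficiency(a):
    lam = row_stochastic(a.reshape(nZ, nX))         # candidate lambda in M(Z;X)
    approx = mu @ lam                               # (lambda o mu)[y, x]
    return float(pi @ (0.5 * np.abs(approx - kappa).sum(axis=1)))

res = minimize(weighted_tv_deficiency, rng.normal(size=nZ * nX), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-10})
print(f"estimated weighted deficiency delta^pi(mu, kappa) ~ {res.fun:.4f}")
```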
## C THE UNIQUE INFORMATION BOTTLENECK

In this section, we give a new perspective on the Information Bottleneck paradigm using nonnegative mutual information decompositions. The quantity we are interested in is the notion of Unique information proposed in (Bertschinger et al., 2014). Work in a similar vein includes (Harder et al., 2013) and, more recently, (Banerjee et al., 2018a), which gives an operationalization of the unique information.

Consider three jointly distributed random variables \(Y\), \(X\), and \(Z\), where \(Y\) is the target variable of interest. The mutual information between \(Y\) and \(X\) can be decomposed into information that \(X\) has about \(Y\) that is unknown to \(Z\) (we call this the unique information of \(X\) w.r.t. \(Z\)) and information that \(X\) has about \(Y\) that is known to \(Z\) (we call this the shared information).

\[I(Y;X) = \underbrace{\widetilde{U}I(Y;X\backslash Z)}_{\text{unique (}X\text{ w.r.t. }Z\text{)}} + \underbrace{\widetilde{S}I(Y;X,Z)}_{\text{shared (redundant)}}. \quad (20)\]

Conditioning on \(Z\) destroys the shared information but creates complementary or synergistic information from the interaction of \(X\) and \(Z\).

\[I(Y;X|Z) = \underbrace{\widetilde{U}I(Y;X\backslash Z)}_{\text{unique (}X\text{ w.r.t. }Z\text{)}} + \underbrace{\widetilde{C}I(Y;X,Z)}_{\text{complementary (synergistic)}}. \quad (21)\]

Using the chain rule, the total information that the pair \((X,Z)\) conveys about the target \(Y\) can be decomposed into four terms.

\[\begin{array}{rl} I(Y;XZ) &= I(Y;X) + I(Y;Z|X)\\ &= \widetilde{U}I(Y;X\backslash Z) + \widetilde{S}I(Y;X,Z) + \widetilde{U}I(Y;Z\backslash X) + \widetilde{C}I(Y;X,Z). \end{array} \quad (22)\]

\(\widetilde{U}I\), \(\widetilde{S}I\), and \(\widetilde{C}I\) are nonnegative functions that depend continuously on the joint distribution of \((Y,X,Z)\). For completeness, we rewrite the information decomposition equations below.

\[\begin{array}{ll} I(Y;X) = \widetilde{U}I(Y;X\backslash Z) + \widetilde{S}I(Y;X,Z), & (24\text{a})\\ I(Y;Z) = \widetilde{U}I(Y;Z\backslash X) + \widetilde{S}I(Y;X,Z), & (24\text{b})\\ I(Y;X|Z) = \widetilde{U}I(Y;X\backslash Z) + \widetilde{C}I(Y;X,Z), & (24\text{c})\\ I(Y;Z|X) = \widetilde{U}I(Y;Z\backslash X) + \widetilde{C}I(Y;X,Z). & (24\text{d}) \end{array}\]

The unique information can be interpreted either as the conditional mutual information without the synergy, or as the mutual information without the redundancy. When \(Y - X - Z\) is a Markov chain, the information decomposition is

\[\begin{array}{ll} \widetilde{U}I(Y;Z\backslash X) = 0, & (25\text{a})\\ \widetilde{U}I(Y;X\backslash Z) = I(Y;X|Z) = I(Y;X) - I(Y;Z), & (25\text{b})\\ \widetilde{S}I(Y;X,Z) = I(Y;Z), & (25\text{c})\\ \widetilde{C}I(Y;X,Z) = 0. & (25\text{d}) \end{array}\]

The Information bottleneck (Tishby et al., 1999) minimizes the following objective
\[\mathcal{L}_{IB}(e) = I(Y;X|Z) + \beta I(X;Z), \quad (26)\]
over all encoders \(e\in\mathsf{M}(\mathcal{X};\mathcal{Z})\) such that \(Y - X - Z\) is a Markov chain. Since \(Y - X - Z\) is a Markov chain, the sufficiency term in the IB objective depends on the pairwise marginals \((Y,X)\) and \((Y,Z)\), while the minimality term depends on the \((X,Z)\)-marginal. From equation 25b, it follows that one can equivalently write the IB objective function as
\[\mathcal{L}_{IB}(e) = \widetilde{U}I(Y;X\backslash Z) + \beta I(X;Z). \quad (27)\]

From an information decomposition perspective, the original IB is actually minimizing just the unique information subject to a regularization constraint. This is a simple consequence of the fact that the synergistic information \(\widetilde{C}I(Y;X,Z) = 0\) (see equation 25d) when we have the Markov chain condition \(Y - X - Z\). Hence, one might equivalently call the original IB the Unique information bottleneck.
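The identities in equation 25 are easy to check numerically. The following sketch builds a toy joint distribution with the Markov structure \(Y - X - Z\) and verifies equation 25b from the entropies; the alphabet sizes and the random joint are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
nY, nX, nZ = 2, 3, 2

# Build a joint with the Markov structure Y - X - Z: p(y,x,z) = p(y,x) e(z|x).
p_yx = rng.dirichlet(np.ones(nY * nX)).reshape(nY, nX)
e_zx = rng.dirichlet(np.ones(nZ), size=nX)              # e[x, z], rows sum to 1
p = p_yx[:, :, None] * e_zx[None, :, :]                 # p[y, x, z]

def entropy(q):
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

H_Y, H_X, H_Z = (entropy(p.sum(axis=(1, 2))), entropy(p.sum(axis=(0, 2))),
                 entropy(p.sum(axis=(0, 1))))
H_YX, H_YZ, H_XZ, H_YXZ = (entropy(p.sum(axis=2)), entropy(p.sum(axis=1)),
                           entropy(p.sum(axis=0)), entropy(p))

I_YX = H_Y + H_X - H_YX
I_YZ = H_Y + H_Z - H_YZ
I_YX_given_Z = H_YZ + H_XZ - H_Z - H_YXZ                # I(Y;X|Z)

# Equation 25b: under the Markov chain, UI(Y;X\Z) = I(Y;X|Z) = I(Y;X) - I(Y;Z).
assert np.isclose(I_YX_given_Z, I_YX - I_YZ)
print(f"I(Y;X|Z) = {I_YX_given_Z:.4f} = I(Y;X) - I(Y;Z) = {I_YX - I_YZ:.4f} bits")
```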
Appealing to classical Blackwell theory, Bertschinger et al. (2014) defined a nonnegative decomposition of the mutual information \(I(Y;XZ)\) based on the idea that the unique and shared information should depend only on the pairwise marginals \((Y,X)\) and \((Y,Z)\).

Definition 12. Let \((Y,X,Z)\sim P\), \(Y\sim\pi_{Y}\), and let \(\kappa\in\mathsf{M}(\mathcal{Y};\mathcal{X})\) and \(\mu\in\mathsf{M}(\mathcal{Y};\mathcal{Z})\) be two channels with the same input alphabet such that \(P_{YX}(y,x) = \pi_{Y}(y)\kappa_{y}(x)\) and \(P_{YZ}(y,z) = \pi_{Y}(y)\mu_{y}(z)\). Define

\[\begin{array}{ll} \Delta_{P} = \big\{Q\in\mathbb{P}_{\mathcal{Y}\times\mathcal{X}\times\mathcal{Z}} : Q_{YX}(y,x) = \pi_{Y}(y)\kappa_{y}(x),\ Q_{YZ}(y,z) = \pi_{Y}(y)\mu_{y}(z)\big\}, & (28\text{a})\\ UI(Y;X\backslash Z) = \min_{Q\in\Delta_{P}} I_{Q}(Y;X|Z), & (28\text{b})\\ UI(Y;Z\backslash X) = \min_{Q\in\Delta_{P}} I_{Q}(Y;Z|X), & (28\text{c})\\ SI(Y;X,Z) = I(Y;X) - UI(Y;X\backslash Z), & (28\text{d})\\ CI(Y;X,Z) = I(Y;X|Z) - UI(Y;X\backslash Z), & (28\text{e}) \end{array}\]

where the subscript \(Q\) in \(I_{Q}\) denotes the joint distribution on which the quantities are computed.

The functions \(UI\), \(SI\), and \(CI\) are nonnegative and satisfy equation 24. Furthermore, \(UI\) and \(SI\) depend only on the marginal distributions of the pairs \((Y,X)\) and \((Y,Z)\). Only the function \(CI\) depends on the full joint distribution \(P\). \(UI\) satisfies the following intuitive property in relation to Blackwell's theorem 4.

Proposition 13 (Bertschinger et al., 2014, Lemma 6). \(UI(Y;X\backslash Z) = 0 \iff Z\sqsupseteq_{Y}^{I}X\).

Proposition 2 follows from noting that \(\delta^{\pi}(d,\kappa) = 0 \iff UI(Y;X\backslash Z) = 0\) (Bertschinger et al., 2014, Theorem 22) and the fact that \(UI(Y;X\backslash Z) = I(Y;X|Z)\) when \(Y - X - Z\) is a Markov chain.
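For tiny alphabets, the minimization in equation 28b can be handed to a generic constrained solver. The sketch below is only illustrative: it parameterizes \(Q\) directly, fixes the two pairwise marginals as equality constraints, and minimizes \(I_Q(Y;X|Z)\) (in nats) with SLSQP; Banerjee et al. (2018b) describe a proper alternating minimization algorithm for this convex problem.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
nY, nX, nZ = 2, 2, 2

P = rng.dirichlet(np.ones(nY * nX * nZ)).reshape(nY, nX, nZ)  # a toy joint P(y,x,z)
P_yx, P_yz = P.sum(axis=2), P.sum(axis=1)                     # fixed pairwise marginals

def cond_mi_YX_given_Z(q_flat):
    """I_Q(Y;X|Z) for the candidate joint Q (small epsilon avoids log(0))."""
    Q = q_flat.reshape(nY, nX, nZ) + 1e-12
    Qz, Qyz, Qxz = Q.sum(axis=(0, 1)), Q.sum(axis=1), Q.sum(axis=0)
    return float(np.sum(Q * np.log(Q * Qz[None, None, :] /
                                   (Qyz[:, None, :] * Qxz[None, :, :]))))

constraints = [
    {"type": "eq", "fun": lambda q: q.reshape(nY, nX, nZ).sum(axis=2).ravel() - P_yx.ravel()},
    {"type": "eq", "fun": lambda q: q.reshape(nY, nX, nZ).sum(axis=1).ravel() - P_yz.ravel()},
]
# Starting from P itself satisfies the marginal constraints.
res = minimize(cond_mi_YX_given_Z, P.ravel(), bounds=[(0, 1)] * P.size,
               constraints=constraints, method="SLSQP")
print(f"UI(Y;X \\ Z) ~ {res.fun:.4f} nats; I_P(Y;X|Z) = {cond_mi_YX_given_Z(P.ravel()):.4f} nats")
```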
## D ADDITIONAL FIGURES ON VDB EXPERIMENTS

![](images/14_0.jpg)
<center>Figure 5: Train and test accuracy of the VDB for \(L = 3\) and \(L = 12\). Similar to Figure 2.</center>

![](images/15_0.jpg)
<center>Figure 6: Evolution of the mutual information between representation and output vs. representation and input (values farther up and to the left are better) over 200 training epochs (dark to light color) on MNIST. The curves are averages over 20 repetitions of the experiment. At early epochs, training mainly effects fitting of the input-output relationship and an increase of \(I(Z;Y)\). At later epochs, training mainly effects a decrease of \(I(Z;X)\), which corresponds to the representation increasingly discarding information about the input. An exception is when the regularization parameter \(\beta\) is very small. In this case the representation captures more information about the input, and longer training decreases \(I(Z;Y)\), which is indicative of overfitting to the training data. Higher values of \(M\) lead to the representation capturing more information about the target, while at the same time discarding more information about the input. \(M = 1\) corresponds to the Variational Information Bottleneck.</center>

## E UNSUPERVISED REPRESENTATION LEARNING OBJECTIVES

Recent work on variational autoencoders (VAEs) has shown that optimizing the standard evidence lower bound (ELBO) is not sufficient in itself for learning useful representations (Chen et al., 2016; Alemi et al., 2018; Zhao et al., 2017). Generalizing the ELBO by incorporating different bottleneck constraints (Zhao et al., 2017; Higgins et al., 2017; Rezende & Viola, 2018) has shown promise in learning better latent codes. In this section, we discuss some preliminary results on an unsupervised version of the VDB objective. We discuss its relation to VAE objectives such as the \(\beta\)-VAE (Higgins et al., 2017) and the importance weighted autoencoder (IWAE) (Burda et al., 2015).

## E.1 \(\beta\)-VAE AND IWAE

A variational autoencoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) is a directed probabilistic model that defines a joint density \(p_{\theta}(x,z) = p_{\theta}(x|z)p(z)\) between some continuous latent variable \(z\) and observed data \(x\). \(p(z)\) is chosen to be a simple prior distribution over the latents (e.g., an isotropic unit Gaussian), and \(p_{\theta}(x|z)\) is a decoder network that models the data generative process. Maximum likelihood estimation of the model parameters \(\theta\) is in general intractable. VAEs instead maximize the evidence lower bound (ELBO) by jointly training the decoder with an auxiliary encoder network \(q_{\phi}(z|x)\), parameterized by \(\phi\). The ELBO objective is

\[\mathcal{L}_{ELBO}(x) := \mathbb{E}_{z\sim q_{\phi}(z|x)}\left[\log\frac{p_{\theta}(x|z)p(z)}{q_{\phi}(z|x)}\right] \quad (29)\]
\[\qquad = \log p_{\theta}(x) - D_{\mathrm{KL}}(q_{\phi}(z|x)\,\|\,p_{\theta}(z|x)) \leq \log p_{\theta}(x). \quad (30)\]

The ELBO is optimized by sampling from \(q_{\phi}(z|x)\) using the reparameterization trick to obtain low-variance Monte Carlo estimates of the gradient. The ELBO is tight when \(q_{\phi}(z|x)\) matches the true posterior \(p_{\theta}(z|x)\); the tightness of the bound is thus coupled to the expressiveness of the encoder distribution. When \(q_{\phi}(z|x)\) is chosen as a simple diagonal Gaussian, minimizing \(D_{\mathrm{KL}}(q_{\phi}(z|x)\,\|\,p_{\theta}(z|x))\) encourages the model's posterior to be approximately factorial, which limits the capacity of the model.

The importance weighted autoencoder (IWAE) (Burda et al., 2015) addresses this issue. The key observation is that the unified ELBO objective in equation 29 is the log of a single (unnormalized) importance weight \(\frac{p_{\theta}(x,z)}{q_{\phi}(z|x)}\) with a proposal density defined by \(q_{\phi}(z|x)\). Using more samples from the proposal can only tighten the bound. The \(M\)-sample IWAE bound is

\[\mathcal{L}_{M}(x) := \mathbb{E}_{z^{1:M}\sim\prod_{i=1}^{M} q_{\phi}(z^{(i)}|x)}\left[\log\left(\frac{1}{M}\sum_{i=1}^{M}\frac{p_{\theta}(x,z^{(i)})}{q_{\phi}(z^{(i)}|x)}\right)\right]. \quad (31)\]

\(\mathcal{L}_{1}\) is just the ELBO, and \(\lim_{M\to\infty}\mathcal{L}_{M} = \log p_{\theta}(x)\).
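The tightening of the bound with \(M\) is easy to see on a toy model where \(\log p_{\theta}(x)\) is available in closed form. The following sketch uses a one-dimensional Gaussian model with a deliberately mismatched proposal; all numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
x = 1.5                                    # a single observation

# Toy model: z ~ N(0,1), x|z ~ N(z,1)  =>  log p(x) = log N(x; 0, 2).
log_px = norm.logpdf(x, loc=0.0, scale=np.sqrt(2.0))

# Deliberately mismatched proposal q(z|x) = N(x/2, 0.8^2); true posterior is N(x/2, 0.5).
q_mu, q_sigma = x / 2.0, 0.8

def iwae_bound(M, n_runs=20000):
    z = rng.normal(q_mu, q_sigma, size=(n_runs, M))
    log_w = (norm.logpdf(z, 0.0, 1.0) + norm.logpdf(x, z, 1.0)
             - norm.logpdf(z, q_mu, q_sigma))
    # log of the average importance weight, computed stably per run
    m = log_w.max(axis=1, keepdims=True)
    return np.mean(m.squeeze() + np.log(np.exp(log_w - m).mean(axis=1)))

for M in [1, 4, 16, 64]:               # L_1 is the ELBO; L_M increases toward log p(x)
    print(f"L_{M:<3d} = {iwae_bound(M):+.4f}   (log p(x) = {log_px:+.4f})")
```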
The \(M\)-sample bound can alternatively be written as (Le et al., 2017)
\[\mathcal{L}_{M}(x) = \log p_{\theta}(x) - D_{\mathrm{KL}}(Q_{IS}\,\|\,P_{IS}) \leq \log p_{\theta}(x), \quad (32)\]
where \(Q_{IS}\) and \(P_{IS}\) are, resp., proposal and target densities defined on an extended sample space:
\[Q_{IS}(z^{1:M}) = \prod_{i=1}^{M} q_{\phi}(z^{(i)}|x), \qquad P_{IS}(z^{1:M}) = \frac{1}{M}\sum_{i=1}^{M}\frac{Q_{IS}(z^{1:M})}{q_{\phi}(z^{(i)}|x)}\, p_{\theta}(z^{(i)}|x). \quad (33)\]

Optimizing over this extended sample space allows for more flexible decoders and gives the IWAE additional degrees of freedom to model complex posteriors. The IWAE bound is in fact equivalent to the ELBO in expectation with a more complex proposal density \(\tilde{q}_{IW}\) defined by importance reweighting (Bachman & Precup, 2015; Cremer et al., 2017; Naesseth et al., 2018):
\[\mathcal{L}_{M}(x) = \mathbb{E}_{z^{2:M}\sim\prod_{i=2}^{M} q_{\phi}(z^{(i)}|x)}\,\mathbb{E}_{z\sim\tilde{q}_{IW}(z|x,z^{2:M})}\left[\log\left(\frac{p_{\theta}(x,z)}{\tilde{q}_{IW}(z|x,z^{2:M})}\right)\right], \quad (34)\]
where the inner expectation is w.r.t. the unnormalized distribution \(\tilde{q}_{IW}\) defined as follows:
\[\tilde{q}_{IW}(z|x,z^{2:M}) := M\tilde{w}\, q_{\phi}(z|x), \quad \text{where } \tilde{w} = \frac{p_{\theta}(x,z)/q_{\phi}(z|x)}{\sum_{j=1}^{M} p_{\theta}(x,z^{(j)})/q_{\phi}(z^{(j)}|x)}. \quad (35)\]

For \(M = 1\), \(\tilde{q}_{IW}(z|x) = q_{\phi}(z|x)\), and the unified ELBO objective admits the following decomposition:
\[\mathcal{L}_{ELBO}(x) = \mathcal{L}_{1}(x) = \mathbb{E}_{z\sim q_{\phi}(z|x)}[\log p_{\theta}(x|z)] - D_{\mathrm{KL}}(q_{\phi}(z|x)\,\|\,p(z)). \quad (36)\]
The first term can be interpreted as an expected reconstruction cost and the second term as a regularizer (Kingma & Welling, 2013). For \(M > 1\), however, the IWAE bound admits no such decomposition. As \(M\to\infty\), \(\tilde{q}_{IW}\) approaches the true posterior \(p_{\theta}(z|x)\). However, the magnitude of the gradient w.r.t. the encoder parameters also decays to zero as more samples are used (Rainforth et al., 2018). This potentially limits the IWAE's ability to learn useful representations.

The \(\beta\)-VAE (Higgins et al., 2017; Burgess et al., 2018) augments the ELBO by incorporating a hyperparameter \(\beta\) for the regularization term:
\[\mathcal{L}_{\beta}(x) = \mathbb{E}_{z\sim q_{\phi}(z|x)}[\log p_{\theta}(x|z)] - \beta D_{\mathrm{KL}}(q_{\phi}(z|x)\,\|\,p(z)). \quad (37)\]
For \(\beta = 1\), we recover the standard ELBO. Higgins et al. (2017) showed that when the prior \(p(z)\) and the encoder \(q_{\phi}(z|x)\) are chosen as diagonal Gaussians, reducing the capacity of the latent bottleneck by choosing \(\beta > 1\) incentivizes the latent representations to be more disentangled. Higher values of \(\beta\) also result in a more coherent latent space, so that the reconstructions interpolate smoothly on latent traversals.

## E.2 UNSUPERVISED DEFICIENCY BOTTLENECK OBJECTIVE

We now discuss some preliminary results on an unsupervised version of the VDB objective on the MNIST and Fashion-MNIST datasets. We consider a standard VAE model with the prior \(p(z)\) and the encoder \(q_{\phi}(z|x)\) parameterized by diagonal Gaussians. The encoder has two hidden layers with 200 units each. The decoder is a factorized Bernoulli parameterized by MLPs with two hidden layers with 200 units each. Using factorial decoder densities constrains the model to use the latent code to attain a high likelihood (Chen et al., 2016; Alemi et al., 2018). This is a simple way to achieve a nonvanishing mutual information between the latent variable and the input, which is important in our setting since we are interested in learning a useful representation. We train the model by minimizing the following unsupervised version of the VDB objective:
\[\frac{1}{N}\sum_{i=1}^{N}\left[-\log\left(\frac{1}{M}\sum_{j=1}^{M} p_{\theta}\big(x^{(i)}\,\big|\,f(x^{(i)},\epsilon^{(j)})\big)\right) + \beta\, D_{\mathrm{KL}}\big(q_{\phi}(Z|x^{(i)})\,\|\,p(Z)\big)\right]. \quad (38)\]

We note that the \(\beta\)-VAE has a similar-looking training objective, with the only difference that the averaging w.r.t. the posterior samples is outside the log. In particular, if \(M = 1\), this is just the \(\beta\)-VAE objective. The objective in equation 38 also shares some superficial similarities with the IWAE objective for \(\beta = 1\). Note, however, that as discussed in Section E.1, we cannot decompose the IWAE objective for \(M > 1\). In particular, this implies we cannot trade off reconstruction fidelity for learning more meaningful representations by incorporating bottleneck constraints. We have not explored whether using more complex posteriors such as \(\tilde{q}_{IW}\) is possible in the bottleneck formulation.

For training, we choose a mini-batch size of \(N = 100\) and draw \(M = 1, 3, 6\) samples from the approximate posterior by using the reparameterization trick (Kingma & Welling, 2013). For our choice of Gaussian prior and encoder, the KL term can be computed and differentiated without estimation. We estimate the expected loss term using Monte Carlo sampling. Since the expectation is inside the log, higher values of \(M\) increase the variance of the gradient estimates. Numerically handling the log also requires some care; we used the log-sum-exp trick to compute the expectation. For values of \(M\) beyond 12, we observe some degradation in the visualizations.
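A minimal sketch of the training objective in equation 38 is given below, assuming a PyTorch-style encoder that returns the mean and log-variance of \(q_{\phi}(z|x)\) and a decoder that returns Bernoulli logits over the pixels; the function names and tensor layout are our own illustration, not the original implementation.

```python
import math
import torch
import torch.nn.functional as F

def unsupervised_vdb_loss(x, encoder, decoder, beta, M):
    """Sketch of equation 38. Assumes x has shape (B, D) with values in [0, 1],
    encoder(x) -> (mu, logvar) of shape (B, K), decoder(z) -> pixel logits (., D)."""
    mu, logvar = encoder(x)
    eps = torch.randn(M, *mu.shape, device=x.device)          # (M, B, K)
    z = mu + torch.exp(0.5 * logvar) * eps                    # reparameterization

    logits = decoder(z.reshape(-1, mu.shape[-1]))             # (M*B, D)
    target = x.unsqueeze(0).expand(M, -1, -1).reshape(-1, x.shape[-1])
    log_px_z = -F.binary_cross_entropy_with_logits(
        logits, target, reduction="none").sum(-1).view(M, -1)  # (M, B)

    # Expectation inside the log, computed with the log-sum-exp trick.
    recon = -(torch.logsumexp(log_px_z, dim=0) - math.log(M))

    # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
    return (recon + beta * kl).mean()
```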
## E.3 VISUALIZATIONS

![](images/18_0.jpg)
<center>Figure 7: Sampling grids in latent space for \(M = 6\) and different values of \(\beta\) on MNIST. Higher values of \(\beta\) result in a more coherent latent space.</center>

![](images/18_1.jpg)
<center>Figure 8: The latent space (mean values of the posterior for 5000 test examples) for Fashion-MNIST for different values of \(M\) and \(\beta\). \(M = 1\) corresponds to the \(\beta\)-VAE.</center>

![](images/18_2.jpg)
<center>Figure 9: Fashion-MNIST reconstructions for different values of \(M\) and \(\beta\). At low values of \(\beta\), we have good reconstructions. \(M = 1\) corresponds to the \(\beta\)-VAE.</center>
## ABSTRACT

We introduce a bottleneck method for learning data representations based on channel deficiency, rather than the more traditional information sufficiency. A variational upper bound allows us to implement this method efficiently. The bound itself is bounded above by the variational information bottleneck objective, and the two methods coincide in the regime of single-shot Monte Carlo approximations. The notion of deficiency provides a principled way of approximating complicated channels by relatively simpler ones. The deficiency of one channel w.r.t. another has an operational interpretation in terms of the optimal risk gap of decision problems, capturing classification as a special case. Unsupervised generalizations are possible, such as the deficiency autoencoder, which can also be formulated in a variational form. Experiments demonstrate that the deficiency bottleneck can provide advantages in terms of minimal sufficiency as measured by information bottleneck curves, while retaining a good test performance in classification and reconstruction tasks.

Keywords: Variational Information Bottleneck, Blackwell Sufficiency, Le Cam Deficiency, Information Channel

## 1 INTRODUCTION

The information bottleneck (IB) is an approach to learning data representations based on a notion of minimal sufficiency. The general idea is to map an input source into a representation that retains as little information as possible about the input (minimality), but retains as much information as possible in relation to a target variable of interest (sufficiency). See Figure 1. For example, in a classification problem, the target variable could be the class label of the input data. In a reconstruction problem, the target variable could be a denoised reconstruction of the input. Intuitively, a representation that is minimal in relation to a given task will discard nuisances in the inputs that are irrelevant to the task, and hence distill more meaningful information and allow for better generalization.

In a typical bottleneck paradigm, an input variable \(X\) is first mapped to an intermediate representation variable \(Z\), and then \(Z\) is mapped to an output variable of interest \(Y\). We call the mappings, resp., a representation model (encoder) and an inference model (decoder). The channel \(\kappa\) models the true relation between the input \(X\) and the output \(Y\). In general, the channel \(\kappa\) is unknown, and only accessible through a set of examples \((x^{(i)}, y^{(i)})_{i=1}^{N}\). We would like to obtain an approximation of \(\kappa\) using a probabilistic model that comprises the encoder-decoder pair.

The IB methods (Witsenhausen & Wyner, 1975; Tishby et al., 1999; Harremoës & Tishby, 2007; Hsu et al., 2018) have found numerous applications, e.g., in representation learning, clustering, classification, generative modeling, model selection, and the analysis of deep neural networks, among others (see, e.g., Shamir et al., 2008; Gondek & Hofmann, 2003; Higgins et al., 2017; Alemi et al., 2018; Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017). In the traditional IB, minimality and sufficiency are measured in terms of the mutual information. Computing the mutual information can be challenging in practice. Various recent works have formulated more tractable objective functions by way of variational bounds on the mutual information (Chalk et al., 2016; Alemi et al., 2016; Kolchinsky et al., 2017), sandwiching the objective function of interest.
Instead of maximizing the sufficiency term of the IB, we formulate a new bottleneck method that minimizes deficiency. Deficiencies provide a principled way of approximating complex channels by relatively simpler ones.

![](images/1_0.jpg)
<center>Figure 1: The bottleneck paradigm: The general idea of a bottleneck method is to first map an input \(X\) to an intermediate representation \(Z\), and then map \(Z\) to an output \(Y\). We call the mappings, resp., an encoder (e) and a decoder (d). In general, the true channel \(\kappa\) is unknown, and only accessible through a set of training examples. We would like to obtain an approximation of \(\kappa\).</center>

The deficiency of a decoder with respect to the true channel between input and output variables quantifies how well any stochastic encoding at the decoder input can be used to approximate the true channel. Deficiencies have a rich heritage in the theory of comparison of statistical experiments (Blackwell, 1953; Le Cam, 1964; Torgersen, 1991). From this angle, the formalism of deficiencies has been used to obtain bounds on optimal risk gaps of statistical decision problems. As we show, the deficiency bottleneck minimizes a regularized risk gap. Moreover, the proposed method has an immediate variational formulation that can be easily implemented as a modification of the Variational Information Bottleneck (VIB) (Alemi et al., 2016). In fact, both methods coincide in the limit of single-shot Monte Carlo approximations. We call our method the Variational Deficiency Bottleneck (VDB).

Perfect maximization of the IB sufficiency corresponds to perfect minimization of the DB deficiency. However, when working over a parametrized model and adding the bottleneck regularizer, the two methods have different preferences, with the DB being closer to the optimal risk gap. Experiments on basic datasets show that the VDB is able to obtain more compressed representations than the VIB while performing equally well or better in terms of test accuracy.

We describe the details of our method in Section 2. We elaborate on the theory of deficiencies in Section 3. Experimental results with the VDB are presented in Section 4.

## 2 THE VARIATIONAL DEFICIENCY BOTTLENECK (VDB)

Let \(X\) denote an observation or input variable and \(Y\) an output variable of interest. Let \(p(x,y) = \pi(x)\kappa(y|x)\) be the true joint distribution, where the conditional distribution or channel \(\kappa(y|x)\) describes how the output depends on the input. We consider the situation where the true channel is unknown, but we are given a set of \(N\) independent and identically distributed (i.i.d.) samples \((x^{(i)},y^{(i)})_{i=1}^{N}\) from \(p\). Our goal is to use this data to learn a more structured version of the channel \(\kappa\), by first "compressing" the input \(X\) to an intermediate representation variable \(Z\) and subsequently mapping the representation back to the output \(Y\). The presence of an intermediate representation can be regarded as a bottleneck, a model selection problem, or a regularization strategy.

We define a representation model and an inference model using two parameterized families of channels \(e(z|x)\) and \(d(y|z)\). We will refer to \(e(z|x)\) and \(d(y|z)\) as an encoder and a decoder. The encoder-decoder pair induces a model \(\widehat{\kappa}(y|x) = \int d(y|z)e(z|x)dz\). Equivalently, we write \(\widehat{\kappa} = d\circ e\).
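For finite alphabets, the composition \(d\circ e\) is just a product of row-stochastic matrices, as the following short sketch (with arbitrary toy sizes) illustrates.

```python
import numpy as np

# For finite alphabets, channels are row-stochastic matrices, and composition
# is matrix multiplication: kappa_hat[x, y] = sum_z e[x, z] * d[z, y].
rng = np.random.default_rng(5)
nX, nZ, nY = 4, 2, 3
e = rng.dirichlet(np.ones(nZ), size=nX)    # encoder e(z|x), one row per x
d = rng.dirichlet(np.ones(nY), size=nZ)    # decoder d(y|z), one row per z
kappa_hat = e @ d                          # induced model (d o e)(y|x)
assert np.allclose(kappa_hat.sum(axis=1), 1.0)   # rows are distributions over Y
```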
Given a representation, we want the decoder to be as powerful as the original channel \(\kappa\) in terms of its ability to recover the output. The deficiency of a decoder \(d\) w.r.t. \(\kappa\) quantifies the extent to which any pre-processing of \(d\) (by way of randomized encodings) can be used to approximate \(\kappa\) (in the KL-distance sense). Let \(\mathsf{M}(\mathcal{X};\mathcal{Y})\) denote the space of all channels from \(\mathcal{X}\) to \(\mathcal{Y}\). We define the deficiency of \(d\) w.r.t. \(\kappa\) as follows.

Definition 1. Given the channel \(\kappa\in\mathsf{M}(\mathcal{X};\mathcal{Y})\) from \(X\) to \(Y\), and a decoder \(d\in\mathsf{M}(\mathcal{Z};\mathcal{Y})\) from some \(Z\) to \(Y\), the deficiency of \(d\) w.r.t. \(\kappa\) is defined as
\[\delta^{\pi}(d,\kappa) = \min_{e\in\mathsf{M}(\mathcal{X};\mathcal{Z})} D_{\mathrm{KL}}(\kappa\,\|\,d\circ e\,|\,\pi). \quad (1)\]

Here \(D_{\mathrm{KL}}(\cdot\|\cdot|\cdot)\) is the conditional KL divergence (Csiszár & Körner, 2011), and \(\pi\) is an input distribution over \(\mathcal{X}\). The definition is similar in spirit to Lucien Le Cam's notion of weighted deficiencies of one channel w.r.t. another (Le Cam, 1964; Torgersen, 1991, Section 6.2) and its recent generalization by Raginsky (2011).

We propose to train the model by minimizing the deficiency of \(d\) w.r.t. \(\kappa\) subject to a regularization that penalizes complex representations. The regularization is achieved by limiting the rate \(I(Z;X)\), the mutual information between the representation and the raw inputs. We call our method the Deficiency Bottleneck (DB). The DB minimizes the following objective over all tuples \((e\in\mathsf{M}(\mathcal{X};\mathcal{Z}),\, d\in\mathsf{M}(\mathcal{Z};\mathcal{Y}))\):
\[\mathcal{L}_{DB}(e,d) := \delta^{\pi}(d,\kappa) + \beta I(Z;X). \quad (2)\]
The parameter \(\beta\geq 0\) allows us to adjust the level of regularization. For any distribution \(r(z)\), the rate term admits a simple variational upper bound (Csiszár & Körner, 2011, Eq. (8.7)):
\[I(Z;X) \leq \int p(x,z)\log\frac{e(z|x)}{r(z)}\, dx\, dz. \quad (3)\]

Let \(\hat{p}_{\mathrm{data}}\) be the empirical distribution of the data (input-output pairs). By noting that \(\delta^{\pi}(d,\kappa)\leq D_{\mathrm{KL}}(\kappa\,\|\,d\circ e\,|\,\pi)\) for any \(e\in\mathsf{M}(\mathcal{X};\mathcal{Z})\), and ignoring (unknown) data-dependent constants, we obtain the following optimization objective, which we call the Variational Deficiency Bottleneck (VDB) objective:
\[\mathcal{L}_{VDB}(e,d) := \mathbb{E}_{(x,y)\sim\hat{p}_{\mathrm{data}}}\left[-\log((d\circ e)(y|x)) + \beta D_{\mathrm{KL}}(e(Z|x)\,\|\,r(Z))\right]. \quad (4)\]

The computation is simplified by defining \(r(z)\) to be a standard multivariate Gaussian distribution \(\mathcal{N}(0,I)\), and using an encoder of the form \(e(z|x) = \mathcal{N}(z|f_{\phi}(x))\), where \(f_{\phi}\) is a neural network that outputs the parameters of a Gaussian distribution. Using the reparametrization trick (Kingma & Welling, 2013; Rezende et al., 2014), we then write \(e(z|x)dz = p(\epsilon)d\epsilon\), where \(z = f(x,\epsilon)\) is a function of \(x\) and the realization \(\epsilon\) of a standard normal distribution. This allows us to do stochastic backpropagation through a single sample \(z\). The KL term admits an analytic expression for our choice of Gaussian \(r(z)\) and encoders.
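The following is a minimal sketch of this reparameterized sampling step together with the closed-form KL term, assuming a network f_phi that returns the mean and log-variance of the encoder; the helper is illustrative, not the paper's implementation.

```python
import torch

def gaussian_encoder_sample(f_phi, x):
    """Sample z ~ e(z|x) = N(mu(x), diag(sigma(x)^2)) via z = f(x, eps).

    f_phi is assumed to return (mu, logvar); the closed-form KL below is
    against the standard normal prior r(z) = N(0, I)."""
    mu, logvar = f_phi(x)
    eps = torch.randn_like(mu)                      # eps ~ N(0, I)
    z = mu + torch.exp(0.5 * logvar) * eps          # reparameterization trick
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
    return z, kl
```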
We train the model by minimizing the following empirical objective:
\[\frac{1}{N}\sum_{i=1}^{N}\left[-\log\left(\frac{1}{M}\sum_{j=1}^{M} d\big(y^{(i)}\,\big|\,f(x^{(i)},\epsilon^{(j)})\big)\right) + \beta\, D_{\mathrm{KL}}\big(e(Z|x^{(i)})\,\|\,r(Z)\big)\right]. \quad (5)\]

For training, we choose a mini-batch size of \(N = 100\). For Monte Carlo estimates of the expectation inside the log, we choose \(M = 3, 6, 12\) samples from the encoding distribution. We note that the Variational Information Bottleneck (VIB) (Alemi et al., 2016) leads to a similar-looking objective function, with the only difference that the sum over \(j\) is outside of the log. By Jensen's inequality, the VIB loss is an upper bound on our loss. If one uses a single sample from the encoding distribution (i.e., \(M = 1\)), the VDB and the VIB objective functions coincide. The average log-loss and the rate term in the VDB objective equation 4 are the two fundamental quantities that govern the probability of error when the model is a classifier. For a discussion of these relations, see Appendix A.
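The sketch below spells out equation 5 for a classifier, again assuming an encoder network returning Gaussian parameters and a decoder returning class logits; the commented-out line shows the single change (averaging outside the log) that would turn it into the VIB objective. Names and shapes are our own illustration.

```python
import math
import torch
import torch.nn.functional as F

def vdb_loss(x, y, f_phi, decoder, beta, M):
    """Sketch of equation 5. f_phi(x) -> (mu, logvar) of shape (B, K);
    decoder(z) -> class logits; y -> integer labels of shape (B,)."""
    mu, logvar = f_phi(x)
    eps = torch.randn(M, *mu.shape, device=x.device)
    z = mu + torch.exp(0.5 * logvar) * eps                    # (M, B, K)
    logits = decoder(z.reshape(-1, mu.shape[-1]))             # (M*B, C)
    log_d = -F.cross_entropy(logits, y.repeat(M), reduction="none").view(M, -1)

    vdb_term = -(torch.logsumexp(log_d, dim=0) - math.log(M))  # -log (1/M) sum_j d(y|z_j)
    # vib_term = -log_d.mean(dim=0)                            # VIB: average outside the log
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
    return (vdb_term + beta * kl).mean()
```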
## 3 BLACKWELL SUFFICIENCY AND CHANNEL DEFICIENCY

In this section, we discuss an intuitive geometric interpretation of the deficiency in the space of probability distributions over the output variable. We also give an operational interpretation of the deficiency as a deviation from Blackwell sufficiency (in the KL-distance sense). Finally, we discuss its relation to the log-loss.

### 3.1 DEFICIENCY AND DECISION GEOMETRY

We first formulate the learning task as a decision problem. We show that \(\delta^{\pi}(d,\kappa)\) quantifies the gap in the optimal risks of decision problems when using the channel \(d\) rather than \(\kappa\).

Let \(\mathcal{X}\), \(\mathcal{Y}\) denote the spaces of possible inputs and outputs. In the following, we assume that \(\mathcal{X}\) and \(\mathcal{Y}\) are finite. Let \(\mathbb{P}_{\mathcal{Y}}\) be the set of all distributions on \(\mathcal{Y}\). For every \(x\in\mathcal{X}\), define \(\kappa_{x}\in\mathbb{P}_{\mathcal{Y}}\) as \(\kappa_{x}(y) = \kappa(y|x)\), \(\forall y\in\mathcal{Y}\). Nature draws \(x\sim\pi\) and \(y\sim\kappa_{x}\). The learner observes \(x\) and quotes a distribution \(q_{x}\in\mathbb{P}_{\mathcal{Y}}\) that expresses her uncertainty about the true value \(y\). The quality of a quote \(q_{x}\) in relation to \(y\) is measured by an extended real-valued loss function called a score, \(\ell\colon\mathcal{Y}\times\mathbb{P}_{\mathcal{Y}}\to\mathbb{R}\). For background on this special kind of loss function, see, e.g., Grünwald et al. (2004); Gneiting & Raftery (2007); Parry et al. (2012). Ideally, the quote \(q_{x}\) should be as close as possible to the true conditional distribution \(\kappa_{x}\). This is achieved by minimizing the expected loss \(L(\kappa_{x},q_{x}) := \mathbb{E}_{y\sim\kappa_{x}}\ell(y,q_{x})\), for all \(x\in\mathcal{X}\). The score is called proper if \(\kappa_{x}\in\arg\min_{q_{x}\in\mathbb{P}_{\mathcal{Y}}} L(\kappa_{x},q_{x})\). Define the Bayes act against \(\kappa_{x}\) as the optimal quote
\[q_{x}^{*} := \arg\min_{q_{x}\in\mathbb{P}_{\mathcal{Y}}} L(\kappa_{x},q_{x}).\]
If multiple Bayes acts exist, then select one arbitrarily. Define the Bayes risk for the distribution \(p_{XY}(x,y) = \pi(x)\kappa(y|x)\) as \(R(p_{XY},\ell) := \mathbb{E}_{x\sim\pi} L(\kappa_{x},q_{x}^{*})\). A score is strictly proper if the Bayes act is unique.

An example of a strictly proper score is the log-loss function defined as \(\ell_{L}(y,q) := -\log q(y)\). For the log-loss, the Bayes act is \(q_{x}^{*} = \kappa_{x}\) and the Bayes risk is just the conditional entropy
\[R(p_{XY},\ell_{L}) = \mathbb{E}_{x\sim\pi}\mathbb{E}_{y\sim\kappa_{x}}\big[-\log q_{x}^{*}(y)\big] = \mathbb{E}_{x\sim\pi}\mathbb{E}_{y\sim\kappa_{x}}\big[-\log\kappa_{x}(y)\big] = H(Y|X). \quad (6)\]

Given a representation \(z\in\mathcal{Z}\) (output by some encoder), when using the decoder \(d\), the learner is constrained to quote a distribution from a subset of \(\mathbb{P}_{\mathcal{Y}}\). Let \(C = \mathrm{conv}\{d_{z} : z\in\mathcal{Z}\}\subset\mathbb{P}_{\mathcal{Y}}\) be the convex hull of the points \(\{d_{z}\}_{z\in\mathcal{Z}}\in\mathbb{P}_{\mathcal{Y}}\). The Bayes act constrained to \(C\) is
\[q_{x,Z}^{*} := \arg\min_{q_{x}\in C}\mathbb{E}_{y\sim\kappa_{x}}\big[-\log q_{x}(y)\big]. \quad (7)\]
\(q_{x,Z}^{*}\) has an interpretation as the reverse \(I\)-projection of \(\kappa_{x}\) onto the convex set of probability measures \(C\subset\mathbb{P}_{\mathcal{Y}}\) (Csiszár & Matuš, 2003). We call the associated Bayes risk the projected Bayes risk \(R_{Z}(p_{XY},\ell_{L})\) and the associated conditional entropy the projected conditional entropy \(H_{Z}(Y|X)\):
\[R_{Z}(p_{XY},\ell_{L}) = \mathbb{E}_{x\sim\pi}\mathbb{E}_{y\sim\kappa_{x}}\big[-\log q_{x,Z}^{*}(y)\big] = H_{Z}(Y|X). \quad (8)\]

The gap in the optimal risks, \(\Delta R := R_{Z}(p_{XY},\ell_{L}) - R(p_{XY},\ell_{L})\), when making a decision based on an intermediate representation rather than on the input data, is just the deficiency. This follows from noting that
\[\begin{array}{rl} \Delta R &= H_{Z}(Y|X) - H(Y|X) = \sum_{x\in\mathcal{X}}\pi(x)\min_{q_{x}\in C\subset\mathbb{P}_{\mathcal{Y}}} D_{\mathrm{KL}}(\kappa_{x}\,\|\,q_{x})\\ &= \min_{e\in\mathsf{M}(\mathcal{X};\mathcal{Z})}\sum_{x\in\mathcal{X}}\pi(x) D_{\mathrm{KL}}(\kappa_{x}\,\|\,d\circ e_{x})\\ &= \min_{e\in\mathsf{M}(\mathcal{X};\mathcal{Z})} D_{\mathrm{KL}}(\kappa\,\|\,d\circ e\,|\,\pi) = \delta^{\pi}(d,\kappa). \end{array} \quad (9)\]

\(\Delta R\) vanishes if and only if the optimal quote \(q_{x,Z}^{*}\) matches \(\kappa_{x}\) for every \(x\). This gives an intuitive geometric interpretation of a vanishing deficiency in the space of distributions over \(\mathcal{Y}\). Given a decoder channel \(d\), since \(\delta^{\pi}(d,\kappa)\leq D_{\mathrm{KL}}(\kappa\,\|\,d\circ e\,|\,\pi)\) for any \(e\in\mathsf{M}(\mathcal{X};\mathcal{Z})\), the loss term in the VDB objective is a variational upper bound on the projected conditional entropy \(H_{Z}(Y|X)\). However, this loss is still a lower bound to the standard cross-entropy loss in the VIB objective (Alemi et al., 2016), i.e.,
\[\mathbb{E}_{(x,y)\sim\hat{p}_{\mathrm{data}}}\big[-\log d\circ e(y|x)\big] \leq \mathbb{E}_{(x,y)\sim\hat{p}_{\mathrm{data}}}\left[\int -e(z|x)\log d(y|z)\, dz\right]. \quad (10)\]
This follows simply from the convexity of the negative logarithm function.
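Equation 9 suggests a direct numerical check: for small finite alphabets, one can minimize the conditional KL divergence over encoders to recover \(\Delta R = \delta^{\pi}(d,\kappa)\). The sketch below does this with random toy channels and a generic optimizer; it illustrates the definition and is not an efficient algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
nX, nZ, nY = 3, 2, 3
kappa = rng.dirichlet(np.ones(nY), size=nX)     # true channel kappa(y|x)
d     = rng.dirichlet(np.ones(nY), size=nZ)     # a fixed decoder d(y|z)
pi    = np.full(nX, 1.0 / nX)                   # input distribution pi

def softmax_rows(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def kl_risk_gap(a):
    e_xz = softmax_rows(a.reshape(nX, nZ))      # candidate encoder e(z|x)
    model = e_xz @ d                            # (d o e)(y|x)
    return float(np.sum(pi[:, None] * kappa * np.log(kappa / model)))

res = minimize(kl_risk_gap, rng.normal(size=nX * nZ), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-12})
print(f"Delta R = delta^pi(d, kappa) ~ {res.fun:.4f} nats")
```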
### 3.2 DEFICIENCY AS A KL-DISTANCE FROM INPUT-BLACKWELL SUFFICIENCY

In a seminal paper, David Blackwell (1953) asked the following question: if a learner wishes to make an optimal decision about some target variable of interest and she can choose between two channels with a common input alphabet, which one should she prefer? She can rank the channels by comparing her optimal risks: she will always prefer one channel over another if her optimal risk when using the former is at most that when using the latter, for any decision problem. She can also rank the variables purely probabilistically: she will always prefer the former if the latter is an output-degraded version of the former, in the sense that she can simulate a single use of the latter by randomizing at the output of the former. Blackwell showed that these two criteria are equivalent.

Very recently, Nasser (2017) asked the same question, only now the learner has to choose between two channels with a common output alphabet. Given two channels \(\kappa\in\mathsf{M}(\mathcal{X};\mathcal{Y})\) and \(d\in\mathsf{M}(\mathcal{Z};\mathcal{Y})\), we say that \(\kappa\) is input-degraded from \(d\), and write \(d\succ_{y}\kappa\), if \(\kappa = d\circ e\) for some \(e\in\mathsf{M}(\mathcal{X};\mathcal{Z})\). Stated another way, \(d\) can be reduced to \(\kappa\) by applying a randomization at its input. Nasser (2017) gave a characterization of input-degradedness that is similar to Blackwell's theorem (Blackwell, 1953). We say \(d\) is input-Blackwell sufficient for \(\kappa\) if \(d\succ_{y}\kappa\). Input-Blackwell sufficiency induces a preorder on the set of all channels with the same output alphabet. In practice, most channels are incomparable, i.e., one cannot be reduced to another by a randomization. When such is the case, the deficiency quantifies how far the true channel \(\kappa\) is from being a randomization (by way of all input encodings) of the decoder \(d\). See Appendix B for a brief summary of Blackwell-Le Cam theory.

### 3.3 DEFICIENCY AND THE LOG-LOSS

When \(Y - X - Z\) is a Markov chain, the conditional mutual information \(I(Y;X|Z)\) is the Bayes risk gap for the log-loss. This is apparent from noting that \(I(Y;X|Z) = H(Y|Z) - H(Y|XZ) = H(Y|Z) - H(Y|X) = R(p_{ZY},\ell_{L}) - R(p_{XY},\ell_{L})\). This risk gap is closely related to Blackwell's original notion of sufficiency. Since the log-loss is strictly proper, a vanishing \(I(Y;X|Z)\) implies that the risk gap is zero for all loss functions. This suggests that minimizing the log-loss risk gap under a suitable regularization constraint is a potential recipe for constructing representations \(Z\) that are approximately sufficient for \(X\) w.r.t. \(Y\), since in the limit when \(I(Y;X|Z) = 0\) one would achieve \(I(Y;Z) = I(Y;X)\). This is indeed the basis for the IB algorithm (Tishby et al., 1999) and its generalization, clustering with Bregman divergences (Banerjee et al., 2005; van Rooyen & Williamson, 2015; 2014).

One can also approximate a sufficient statistic by minimizing deficiencies instead. This is motivated by the following proposition.

Proposition 2. When \(Y - X - Z\) is a Markov chain, \(\delta^{\pi}(d,\kappa) = 0 \iff I(Y;X|Z) = 0\).

In general, for the bottleneck paradigms involving the conditional mutual information (IB) and the deficiency (DB), we have the following relationship:
\[\min_{e(z|x):\, I(Y;X|Z)\leq\epsilon} I(X;Z) \;\geq\; \min_{e(z|x):\, \delta^{\pi}(d,\kappa)\leq\epsilon} I(X;Z). \quad (11)\]
Our experiments corroborate that, for achieving the same level of sufficiency, one needs to store less information about the input \(X\) when minimizing the deficiencies than when minimizing the conditional mutual information.

## 4 EXPERIMENTS

We present some experiments on the MNIST dataset (LeCun & Cortes, 2010).
Classification on MNIST is a very well-studied problem. The main objective of our experiments is to evaluate the information-theoretic properties of the representations learned by the VDB and whether it can match the classification accuracy provided by other bottleneck methods.

For the encoder, we use a fully connected feedforward network with 784 input units, two hidden layers of 1024 ReLUs each, and 512 linear output units. The deterministic output of this network is interpreted as the vector of means and variances of a 256-dimensional Gaussian distribution. The decoder is a simple logistic regression model with a softmax layer. These are the same settings of the model used by Alemi et al. (2016). We implement the algorithm in TensorFlow and train for 200 epochs using the Adam optimizer.
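The paper's implementation is in TensorFlow; the following PyTorch sketch merely mirrors the stated sizes (a 784-1024-1024 MLP whose 512 outputs parameterize a 256-dimensional diagonal Gaussian, plus a softmax decoder) and is not the original code.

```python
import torch
import torch.nn as nn

class VDBEncoder(nn.Module):
    """784-1024-1024 MLP; the 2*K = 512 linear outputs parameterize a
    K = 256 dimensional diagonal Gaussian over the representation z."""
    def __init__(self, K=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(784, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 2 * K),
        )

    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, logvar

# The decoder is logistic regression on z: a single linear layer into a softmax.
decoder = nn.Linear(256, 10)
```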
![](images/5_0.jpg)
<center>Figure 2: Effect of the regularization parameter \(\beta\). The upper left panel shows the accuracy on train and test data after training the VDB for different values of \(M\). Here, \(M\) is the number of encoder output samples used in the training objective. \(L\) is the number of encoder output samples used for evaluating the classifier. The upper right panel traces the deficiency bottleneck curve for different values of \(\beta\) (see text). The curves are averages over 5 repetitions of the experiment. Each curve corresponds to one value of \(M = 1, 3, 6, 12\). Notice the generalization gap for small values of \(\beta\) (towards the right of the plot). The lower right panel plots the corresponding information bottleneck curve. The lower left panel plots the minimality term vs. \(\beta\). Evidently, the levels of compression vary depending on \(M\). Higher values of \(M\) (our method) lead to a more compressed representation. For \(M = 1\), the VDB and the VIB models coincide.</center>

Table 1: Comparison of test accuracy values for different values of \(\beta\) and \(M\). \(K\) is the size of the bottleneck and \(L = 12\). We see a slight improvement in the test accuracies for higher values of \(M\).

<table><tr><td rowspan="2">\(\beta\)</td><td rowspan="2">\(K\)</td><td colspan="4">\(M\)</td></tr><tr><td>1</td><td>3</td><td>6</td><td>12</td></tr><tr><td rowspan="2">10<sup>-5</sup></td><td>256</td><td>0.9869</td><td>0.9873</td><td>0.9885</td><td>0.9878</td></tr><tr><td>2</td><td>0.9575</td><td>0.9678</td><td>0.9696</td><td>0.9687</td></tr><tr><td rowspan="2">10<sup>-3</sup></td><td>256</td><td>0.9872</td><td>0.9879</td><td>0.9875</td><td>0.9882</td></tr><tr><td>2</td><td>0.9632</td><td>0.9726</td><td>0.9790</td><td>0.9702</td></tr></table>

As can be seen from the upper left panel in Figure 2, the test accuracy is stable with increasing \(M\). Here, \(M\) is the number of encoder output samples used in the training objective. We note that \(M = 1\) is just the VIB model (Alemi et al., 2016). \(L\) is the number of encoder output samples used for evaluating the classifier (i.e., we use \(\frac{1}{L}\sum_{j=1}^{L} d(y|z^{(j)})\) where \(z^{(j)}\sim e(z|x)\)). Numerical values of the test accuracies are provided in Table 1 for different values of \(\beta\) and \(M\). We see a slight improvement in the test accuracies for higher values of \(M\). See Figure 5 in Appendix D for train and test accuracies for \(L = 3\) and \(L = 12\).

The traditional IB paradigm traces the mutual information \(I(Z;Y)\) between representation and output (sufficiency) vs. the mutual information \(I(Z;X)\) between representation and input (minimality), for different values of the regularization parameter \(\beta\). This curve is called the information bottleneck curve (Tishby et al., 1999). In the case of the VDB, we define the corresponding sufficiency term as \(J(Z;Y) := H(Y) - \mathbb{E}_{(x,y)\sim\hat{p}_{\mathrm{data}}}[-\log(\int d(y|z)e(z|x)dz)]\). Here \(H(Y) = \log_{2}(10)\) is the entropy of the output, which has 10 classes. In our method, "more informative" means "less deficient". The upper right panel in Figure 2 shows the deficiency bottleneck curve, which traces \(J(Z;Y)\) vs. \(I(Z;X)\) for different values of \(\beta\) at the end of training. For orientation, lower values of \(\beta\) have higher values of \(I(Z;X)\) (towards the right of the plot). For small values of \(\beta\), when the effect of the regularization is negligible, the bottleneck allows more information from the input through the representation. In this case, \(J(Z;Y)\) increases on the training set, but not necessarily on the test set. This is manifest in the gap between the train and test curves, indicative of a degradation in generalization. For intermediate values of \(\beta\), the gap is smaller for larger values of \(M\) (our method). The lower right panel plots the corresponding information bottleneck curve. The lower left panel in Figure 2 plots the minimality term \(I(Z;X)\) vs. \(\beta\). We see that, for \(\beta\) in the range between \(10^{-8}\) and \(10^{-4}\), for the same level of sufficiency, setting \(M = 12\) consistently achieves more compression of the input compared to the setting \(M = 1\). The dynamics of the information quantities during training are also interesting; we provide figures on these in Appendix D.

In order to visualize the representations, we also train the VDB on MNIST with a 2-dimensional representation. We use the same settings as before, with the only difference that the dimension of the output layer of the encoder is 4, with two coordinates representing the mean and two a diagonal covariance matrix. The results are shown in Figure 3. For \(\beta = 10^{-5}\), the representations are well separated, depending on the class. For related figures in the setting of unsupervised learning, see Appendix E.

![](images/6_0.jpg)
<center>Figure 3: We trained the VDB on MNIST with the basic encoder given by a fully connected network with two hidden layers of ReLUs generating the means and variances of a 2D independent Gaussian latent representation. Ellipses represent the posterior distributions of 1000 input images in latent space after training with \(\beta = 10^{0}, 10^{-1}, 10^{-3}, 10^{-5}\) and \(M = 1, 3, 6, 12\). Color corresponds to the class label.</center>

![](images/7_0.jpg)
<center>Figure 4: Learning curves for MNIST, where the encoder is an MLP of size \(784 - 1024 - 1024 - 2K\), the last layer being a \(K = 2\) dimensional diagonal Gaussian. The decoder is simply a softmax with 10 classes. The left panel plots the mutual information between the representation and the class label, \(I(Z;Y)\), against the mutual information between the representation at the last layer of the encoder and the input, \(I(Z;X)\), as training progresses. The former increases monotonically, while the latter increases and then decreases. The right panel shows the test accuracy as training progresses.</center>
The left panel has an interpretation in terms of a phase where the model is mainly fitting the input- output relationship and hence increasing the mutual information \(I(Z;Y)\) , followed by a compression phase, where training is mainly reducing \(I(Z;X)\) , leading to a better generalization. The right panel shows the test accuracy as training progresses. Higher values of \(M\) (our method) usually lead to better accuracy. An exception is when the number \(L\) of posterior samples for classification is large. ## 5 DISCUSSION We have formulated a bottleneck method based on channel deficiencies. The deficiency of a decoder with respect to the true channel between input and output quantifies how well a randomization at the decoder input (by way of stochastic encodings) can be used to simulate the true channel. The VDB has a natural variational formulation which recovers the VIB in the limit of a single sample of the encoder output. Experiments demonstrate that the VDB can learn more compressed representations while retaining the same discriminative capacity. The method has a statistical decision- theoretic appeal. Moreover, the resulting variational objective of the DB can be implemented as an easy modification of the VIB, with little to no computational overhead. Given two channels that convey information about a target variable of interest, two different notions of deficiencies arise, depending on whether the target resides at the common input or the common output of the given channels. When the target is at the common output of the two channels, as is in a typical bottleneck setting (see Figure 1), our Definition 1 has a natural interpretation as a KL- divergence from input- Blackwell sufficiency (Nasser, 2017). Here sufficiency is achieved by applying a randomization at the input of the decoder with the goal of simulating the true channel. The notion of input- Blackwell sufficiency contrasts with Blackwell's original notion of sufficiency (Blackwell, 1953) in the sense that Blackwell's theory compares two channels with a common input. One can again define a notion of deficiency in this setting (see Appendix B for a discussion on deficiencies in the classical Blackwell setup). The associated channels (one from \(\mathcal{V}\) to \(\mathcal{Z}\) and the other from \(\mathcal{V}\) to \(\mathcal{X}\) ) do not however have a natural interpretation in a typical bottleneck setting. In contrast, the input- Blackwell setup appears to be much more intuitive in this context. <--- Page Split ---> The more detailed view of information emerging from this analysis explains various effects and opens the door to multiple generalizations. In the spirit of the VDB, one can formulate a deficiency autoencoder as well (see sketch in Appendix E). On a related note, we mention that the deficiency is a lower bound to a quantity called the Unique information (Bertschinger et al., 2014; Banerjee et al., 2018a) (see details in Appendix C). An alternating minimization algorithm similar in spirit to the classical Blahut- Arimoto algorithm (Blahut, 1972) has been proposed to compute this quantity (Banerjee et al., 2018b). A deep neural network implementation of such an algorithm remains a challenge. In the limit \(\beta \rightarrow 0\) , the VDB is a step forward towards estimating the unique information. This might be of independent interest in improving the practicality of the theory of information decompositions. 
## APPENDIX

## A MISCLASSIFICATION ERROR AND THE AVERAGE LOG-LOSS

In a classification task, the goal is to use the training dataset to learn a classifier \(\widehat{\kappa} (y|x)\) that minimizes the probability of error under the true data distribution, defined as follows.

\[P_{\mathcal{E}}(\widehat{\kappa}):= 1 - \mathbb{E}_{(x,y)\sim p}\left[\widehat{\kappa} (y|x)\right]. \quad (12)\]

It is well known that the optimal classifier that gives the smallest probability of error is the Bayes classifier (Boucheron et al., 2005). Since we do not know the true data distribution, we learn from the empirical error. Directly minimizing the empirical probability of error over the training dataset is in general an NP-hard problem. In practice, one minimizes a surrogate loss function that is a convex upper bound on \(P_{\mathcal{E}}\). A natural surrogate is the average log-loss function \(\mathbb{E}_{(x,y)\sim p}\left[-\log \widehat{\kappa} (y|x)\right]\). When the model is \(\widehat{\kappa} = d\circ e\), the following upper bounds are immediate from Jensen's inequality.

\[\begin{array}{r l} & {P_{\mathcal{E}}(\widehat{\kappa})\leq 1 - \exp \big(-\mathbb{E}_{(x,y)\sim p}\big[-\log d\circ e(y|x)\big]\big)}\\ & {\qquad \leq 1 - \exp \big(-\mathbb{E}_{(x,y)\sim p}\mathbb{E}_{z\sim e(z|x)}\big[-\log d(y|z)\big]\big)} \end{array} \quad (13)\]

The bound using the standard cross-entropy loss is evidently weaker than the one using the average log-loss. A lower bound on the probability of error is controlled by a convex functional of the mutual information \(I(Z;X)\) between the representation and the raw inputs (see, e.g., Vera et al., 2018, Lemma 4). The average log-loss and the rate term in the VDB objective (equation 4) are two fundamental quantities that govern the probability of error.

## B CLASSICAL THEORY OF COMPARISON OF CHANNELS

In this section, we discuss the classical theory of comparison of channels due to Blackwell (1953) and its extensions by Le Cam (1964); Torgersen (1991) and, more recently, Raginsky (2011). Suppose that a learner wishes to predict the value of a random variable \(Y\) that takes values in a set \(\mathcal{Y}\). She has a set of actions \(\mathcal{A}\). Each action incurs a loss \(\ell (y,a)\) that depends on the true state \(y\) of \(Y\) and the chosen action \(a\). Let \(\pi_{Y}\) encode the learner's uncertainty about the true state \(y\). The tuple \((\pi_{Y},\mathcal{A},\ell)\) is called a decision problem. Before choosing her action, the learner observes a random variable \(X\) through a channel \(\kappa \in \mathsf{M}(\mathcal{Y};\mathcal{X})\). An ideal learner chooses a strategy \(\rho \in \mathsf{M}(\mathcal{X};\mathcal{A})\) that minimizes her expected loss or risk \(R(\pi_{Y},\kappa ,\rho ,\ell):= \mathbb{E}_{y\sim \pi_{Y}}\mathbb{E}_{a\sim \rho \circ \kappa_{y}}\ell (y,a)\). The optimal risk when using the channel \(\kappa\) is \(R(\pi_{Y},\kappa ,\ell):= \min_{\rho \in \mathsf{M}(\mathcal{X};\mathcal{A})}R(\pi_{Y},\kappa ,\rho ,\ell)\). Suppose now that the learner has to choose between \(X\) and another random variable \(Z\) that she observes through a second channel \(\mu \in \mathsf{M}(\mathcal{Y};\mathcal{Z})\) with common input \(Y\). She can always discard \(X\) in favor of \(Z\) if, knowing \(Z\), she can simulate a single use of \(X\) by randomly sampling an \(x^{\prime}\in \mathcal{X}\) after each observation \(z\in \mathcal{Z}\).

Definition 3. We say that \(X\) is output-degraded from \(Z\) w.r.t.
\(Y\), denoted \(Z\sqsupseteq_{Y}^{I}X\), if there exists a random variable \(X^{\prime}\) such that the pairs \((Y,X)\) and \((Y,X^{\prime})\) are stochastically indistinguishable, and \(Y - Z - X^{\prime}\) is a Markov chain.

She can also discard \(X\) if her optimal risk when using \(Z\) is at most that when using \(X\) for any decision problem. Write \(Z\sqsupseteq_{Y}X\) if \(R(\pi_{Y},\kappa ,\ell)\geq R(\pi_{Y},\mu ,\ell)\) for any decision problem. Blackwell (1953) showed the equivalence of these two relations.

Theorem 4. (Blackwell's Theorem) \(Z\sqsupseteq_{Y}^{I}X\iff Z\sqsupseteq_{Y}X\)

Write \(\mu \sqsupseteq_{Y}\kappa\) if \(\kappa = \lambda \circ \mu\) for some \(\lambda \in \mathsf{M}(\mathcal{Z};\mathcal{X})\). If \(\pi_{Y}\) has full support, then it is easy to check that \(\mu \sqsupseteq_{Y}\kappa \iff Z\sqsupseteq_{Y}^{I}X\) (Bertschinger & Rauh, 2014, Theorem 4). The learner can also compare \(\kappa\) and \(\mu\) by comparing the mutual informations \(I(Y;X)\) and \(I(Y;Z)\) between the common input \(Y\) and the channel outputs \(X\) and \(Z\).

Definition 5. \(\mu\) is said to be more capable than \(\kappa\), denoted \(\mu \sqsupseteq_{Y}^{mc}\kappa\), if \(I(Y;Z)\geq I(Y;X)\) for all probability distributions on \(\mathcal{Y}\).

<--- Page Split --->

It follows from the data processing inequality that \(\mu \sqsupseteq_{Y}\kappa \implies \mu \sqsupseteq_{Y}^{mc}\kappa\). However, the converse implication is not true in general (Körner & Marton, 1975). The converse to Blackwell's theorem states that if the relation \(Z\sqsupseteq_{Y}X\) does not hold, then there exists a set of actions \(\mathcal{A}\) and a loss function \(\ell (y, a) \in \mathbb{R}^{\mathcal{Y} \times \mathcal{A}}\) such that \(R(\pi_Y, \kappa , \ell) < R(\pi_Y, \mu , \ell)\). Le Cam introduced the concept of a deficiency of \(\mu\) w.r.t. \(\kappa\) to express this deficit in optimal risks (Le Cam, 1964) in terms of an approximation of \(\kappa\) from \(\mu\) via Markov kernels.

Definition 6. The Le Cam deficiency of \(\mu\) w.r.t. \(\kappa\) is

\[\delta (\mu , \kappa) := \inf_{\lambda \in \mathsf{M}(\mathcal{Z}; \mathcal{X})} \sup_{y \in \mathcal{Y}} \| \lambda \circ \mu_y - \kappa_y \|_{\mathrm{TV}}, \quad (14)\]

where \(\| \cdot \|_{\mathrm{TV}}\) denotes the total variation distance. When the distribution of the common input to the channels is fixed, one can define a weighted deficiency (Torgersen, 1991, Section 6.2).

Definition 7. Given \(Y \sim \pi_Y\), the weighted Le Cam deficiency of \(\mu\) w.r.t. \(\kappa\) is

\[\delta^{\pi}(\mu , \kappa) := \inf_{\lambda \in \mathsf{M}(\mathcal{Z}; \mathcal{X})} \mathbb{E}_{y \sim \pi_Y} \| \lambda \circ \mu_y - \kappa_y \|_{\mathrm{TV}}. \quad (15)\]

Le Cam's randomization criterion (Le Cam, 1964) shows that deficiencies quantify the maximal gap in the optimal risks of decision problems when using the channel \(\mu\) rather than \(\kappa\).

Theorem 8 (Le Cam (1964)). Fix \(\mu \in \mathsf{M}(\mathcal{Y}; \mathcal{Z})\), \(\kappa \in \mathsf{M}(\mathcal{Y}; \mathcal{X})\) and a probability distribution \(\pi_Y\) on \(\mathcal{Y}\) and write \(\| \ell \|_{\infty} = \max_{y, a} \ell (y, a)\). For every \(\epsilon > 0\), \(\delta^{\pi}(\mu , \kappa) < \epsilon\) if and only if \(R(\pi_Y, \mu , \ell) - R(\pi_Y, \kappa , \ell) < \epsilon \| \ell \|_{\infty}\) for any set of actions \(\mathcal{A}\) and any bounded loss function \(\ell\).
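For small finite alphabets, the weighted deficiency of Definition 7 can be estimated numerically by optimizing over the Markov kernel \(\lambda\). The following self-contained NumPy/SciPy sketch does exactly this; the softmax parameterization, the random restarts, and the half-\(\ell_1\) convention for total variation are choices of convenience for illustration, not part of the definition.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_deficiency(mu, kappa, pi, restarts=5):
    """Numerically estimate the weighted Le Cam deficiency (equation 15).

    mu:    (|Y|, |Z|) row-stochastic channel from Y to Z,
    kappa: (|Y|, |X|) row-stochastic channel from Y to X,
    pi:    (|Y|,) distribution of the common input Y.
    """
    nz, nx = mu.shape[1], kappa.shape[1]

    def objective(theta):
        # A softmax on each row keeps lambda in M(Z;X) (row-stochastic).
        lam = np.exp(theta.reshape(nz, nx))
        lam /= lam.sum(axis=1, keepdims=True)
        composed = mu @ lam                              # rows are (lambda o mu)_y
        tv = 0.5 * np.abs(composed - kappa).sum(axis=1)  # TV as half the L1 norm
        return float(pi @ tv)

    # The objective is non-smooth, so use a derivative-free method with restarts.
    results = [minimize(objective, np.random.randn(nz * nx), method="Nelder-Mead")
               for _ in range(restarts)]
    return min(r.fun for r in results)

# Toy check: kappa is an exact garbling of mu, so the deficiency should be ~0,
# consistent with the characterization in Theorem 4.
mu = np.array([[0.9, 0.1], [0.2, 0.8]])
kappa = mu @ np.array([[0.7, 0.3], [0.1, 0.9]])
print(weighted_deficiency(mu, kappa, np.array([0.5, 0.5])))
```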
Raginsky (2011) introduced a broad class of deficiency-like quantities using the notion of a generalized divergence between probability distributions that satisfies a monotonicity property w.r.t. data processing. The family of \(f\)-divergences due to Csiszár belongs to this class (Liese & Vajda, 2006).

Definition 9. The \(f\)-deficiency of \(\mu\) w.r.t. \(\kappa\) is

\[\delta_{f}(\mu , \kappa) := \inf_{\lambda \in \mathsf{M}(\mathcal{Z}; \mathcal{X})} \sup_{y \in \mathcal{Y}} D_{f}(\kappa_y \| \lambda \circ \mu_y). \quad (16)\]

Many common divergences, such as the KL divergence, the reverse-KL divergence, and the total variation distance, are \(f\)-divergences. When the channel \(\mu\) is such that its output is constant, no matter what the input, the corresponding \(f\)-deficiency is called the \(f\)-informativity (Csiszár, 1972). The \(f\)-informativity associated with the KL divergence is just the channel capacity, which has a geometric interpretation as an "information radius" (Csiszár & Körner, 2011). We can also define a weighted \(f\)-deficiency of \(\mu\) w.r.t. \(\kappa\).

Definition 10. The weighted \(f\)-deficiency of \(\mu\) w.r.t. \(\kappa\) is

\[\delta_{f}^{\pi}(\mu , \kappa) := \inf_{\lambda \in \mathsf{M}(\mathcal{Z}; \mathcal{X})} D_{f}(\kappa_y \| \lambda \circ \mu_y | \pi_Y). \quad (17)\]

Specializing to the KL divergence, we have the following definition.

Definition 11. The weighted output deficiency of \(\mu\) w.r.t. \(\kappa\) is

\[\delta_{o}^{\pi}(\mu , \kappa) := \inf_{\lambda \in \mathsf{M}(\mathcal{Z}; \mathcal{X})} D_{\mathrm{KL}}(\kappa \| \lambda \circ \mu | \pi_Y), \quad (18)\]

where the subscript \(o\) in \(\delta_{o}^{\pi}\) emphasizes the fact that the randomization is at the output of the channel \(\mu\). Note that \(\delta_{o}^{\pi}(\mu , \kappa) = 0\) if and only if \(Z\sqsupseteq_{Y}^{I}X\), which captures the intuition that if \(\delta_{o}^{\pi}(\mu , \kappa)\) is small, then \(X\) is approximately output-degraded from \(Z\) w.r.t. \(Y\). Using Pinsker's inequality, we have

\[\delta^{\pi}(\mu , \kappa) \leq \sqrt{\frac{\ln(2)}{2} \delta_{o}^{\pi}(\mu , \kappa)}. \quad (19)\]

<--- Page Split --->

## C THE UNIQUE INFORMATION BOTTLENECK

In this section, we give a new perspective on the Information Bottleneck paradigm using nonnegative mutual information decompositions. The quantity we are interested in is the notion of unique information proposed in (Bertschinger et al., 2014). Work in a similar vein includes (Harder et al., 2013) and, more recently, (Banerjee et al., 2018a), which gives an operationalization of the unique information. Consider three jointly distributed random variables \(Y\), \(X\), and \(Z\), where \(Y\) is the target variable of interest. The mutual information between \(Y\) and \(X\) can be decomposed into information that \(X\) has about \(Y\) that is unknown to \(Z\) (we call this the unique information of \(X\) w.r.t. \(Z\)) and information that \(X\) has about \(Y\) that is known to \(Z\) (we call this the shared information).

\[I(Y;X) = \underbrace{\widetilde{UI}(Y;X\backslash Z)}_{\text{unique }X\text{ w.r.t. }Z} + \underbrace{\widetilde{SI}(Y;X,Z)}_{\text{shared (redundant)}}. \quad (20)\]

Conditioning on \(Z\) destroys the shared information but creates complementary or synergistic information from the interaction of \(X\) and \(Z\).
\[I(Y;X|Z) = \underbrace{\widetilde{UI}(Y;X\backslash Z)}_{\text{unique }X\text{ w.r.t. }Z} + \underbrace{\widetilde{CI}(Y;X,Z)}_{\text{complementary (synergistic)}}. \quad (21)\]

Using the chain rule, the total information that the pair \((X,Z)\) conveys about the target \(Y\) can be decomposed into four terms.

\[\begin{array}{r l} & {I(Y;XZ) = I(Y;X) + I(Y;Z|X)}\\ & {\qquad = \widetilde{UI}(Y;X\backslash Z) + \widetilde{SI}(Y;X,Z) + \widetilde{UI}(Y;Z\backslash X) + \widetilde{CI}(Y;X,Z).} \end{array} \quad (22)\]

\(\widetilde{UI}\), \(\widetilde{SI}\), and \(\widetilde{CI}\) are nonnegative functions that depend continuously on the joint distribution of \((Y,X,Z)\). For completeness, we rewrite the information decomposition equations below.

\[\begin{array}{r l} & {I(Y;X) = \widetilde{UI}(Y;X\backslash Z) + \widetilde{SI}(Y;X,Z), \quad (24\mathrm{a})}\\ & {I(Y;Z) = \widetilde{UI}(Y;Z\backslash X) + \widetilde{SI}(Y;X,Z), \quad (24\mathrm{b})}\\ & {I(Y;X|Z) = \widetilde{UI}(Y;X\backslash Z) + \widetilde{CI}(Y;X,Z), \quad (24\mathrm{c})}\\ & {I(Y;Z|X) = \widetilde{UI}(Y;Z\backslash X) + \widetilde{CI}(Y;X,Z). \quad (24\mathrm{d})} \end{array}\]

The unique information can be interpreted either as the conditional mutual information without the synergy, or as the mutual information without the redundancy. When \(Y - X - Z\) is a Markov chain, the information decomposition is

\[\begin{array}{r l} & {\widetilde{UI}(Y;Z\backslash X) = 0, \quad (25\mathrm{a})}\\ & {\widetilde{UI}(Y;X\backslash Z) = I(Y;X|Z) = I(Y;X) - I(Y;Z), \quad (25\mathrm{b})}\\ & {\widetilde{SI}(Y;X,Z) = I(Y;Z), \quad (25\mathrm{c})}\\ & {\widetilde{CI}(Y;X,Z) = 0. \quad (25\mathrm{d})} \end{array}\]

The Information Bottleneck (Tishby et al., 1999) minimizes the objective

\[\mathcal{L}_{IB}(e) = I(Y;X|Z) + \beta I(X;Z) \quad (26)\]

over all encoders \(e\in \mathsf{M}(\mathcal{X};\mathcal{Z})\) such that \(Y - X - Z\) is a Markov chain. Since \(Y - X - Z\) is a Markov chain, the sufficiency term in the IB objective depends on the pairwise marginals \((Y,X)\) and \((Y,Z)\), while the minimality term depends on the \((X,Z)\)-marginal. From equation 25b, it follows that one can equivalently write the IB objective function as

\[\mathcal{L}_{IB}(e) = \widetilde{UI}(Y;X\backslash Z) + \beta I(X;Z). \quad (27)\]

<--- Page Split --->

From an information decomposition perspective, the original IB is actually minimizing just the unique information subject to a regularization constraint. This is a simple consequence of the fact that the synergistic information \(\widetilde{CI}(Y;X,Z) = 0\) (see equation 25d) when we have the Markov chain condition \(Y - X - Z\). Hence, one might equivalently call the original IB the unique information bottleneck. Appealing to classical Blackwell theory, Bertschinger et al. (2014) defined a nonnegative decomposition of the mutual information \(I(Y;XZ)\) based on the idea that the unique and shared information should depend only on the pairwise marginals \((Y,X)\) and \((Y,Z)\).

Definition 12.
Let \((Y,X,Z)\sim P\), \(Y\sim \pi_{Y}\), and let \(\kappa \in \mathsf{M}(\mathcal{Y};\mathcal{X})\) and \(\mu \in \mathsf{M}(\mathcal{Y};\mathcal{Z})\) be two channels with the same input alphabet such that \(P_{YX}(y,x) = \pi_{Y}(y)\kappa_{y}(x)\) and \(P_{YZ}(y,z) = \pi_{Y}(y)\mu_{y}(z)\). Define

\[\begin{array}{r l} & {\Delta_{P} = \big\{Q\in \mathbb{P}_{\mathcal{Y}\times \mathcal{X}\times \mathcal{Z}}: Q_{YX}(y,x) = \pi_{Y}(y)\kappa_{y}(x),\; Q_{YZ}(y,z) = \pi_{Y}(y)\mu_{y}(z)\big\},}\\ & {UI(Y;X\backslash Z) = \underset{Q\in \Delta_{P}}{\min} I_{Q}(Y;X|Z),}\\ & {UI(Y;Z\backslash X) = \underset{Q\in \Delta_{P}}{\min} I_{Q}(Y;Z|X),}\\ & {SI(Y;X,Z) = I(Y;X) - UI(Y;X\backslash Z),}\\ & {CI(Y;X,Z) = I(Y;X|Z) - UI(Y;X\backslash Z),} \end{array} \quad (28)\]

where the subscript \(Q\) in \(I_{Q}\) denotes the joint distribution on which the quantities are computed. The functions \(UI\), \(SI\), and \(CI\) are nonnegative and satisfy equation 24. Furthermore, \(UI\) and \(SI\) depend only on the marginal distributions of the pairs \((Y,X)\) and \((Y,Z)\). Only the function \(CI\) depends on the full joint distribution \(P\). \(UI\) satisfies the following intuitive property in relation to Blackwell's Theorem 4.

Proposition 13. (Bertschinger et al., 2014, Lemma 6) \(UI(Y;X\backslash Z) = 0\iff Z\sqsupseteq_{Y}^{I}X\).

Proposition 2 follows from noting that \(\delta^{\pi}(d,\kappa) = 0\iff UI(Y;X\backslash Z) = 0\) (Bertschinger et al., 2014, Theorem 22) and the fact that \(UI(Y;X\backslash Z) = I(Y;X|Z)\) when \(Y - X - Z\) is a Markov chain.

## D ADDITIONAL FIGURES ON VDB EXPERIMENTS

![](images/14_0.jpg) <center>Figure 5: Train and test accuracy of the VDB for \(L = 3\) and \(L = 12\). Similar to Figure 2. </center>

<--- Page Split --->

![](images/15_0.jpg) <center>Figure 6: Evolution of the mutual information between representation and output vs. representation and input (values farther up and to the left are better) over 200 training epochs (dark to light color) on MNIST. The curves are averages over 20 repetitions of the experiment. At early epochs, training mainly effects fitting of the input-output relationship and an increase of \(I(Z;Y)\). At later epochs, training mainly effects a decrease of \(I(Z;X)\), which corresponds to the representation increasingly discarding information about the input. An exception is when the regularization parameter \(\beta\) is very small. In this case the representation captures more information about the input, and longer training decreases \(I(Z;Y)\), which is indicative of overfitting to the training data. Higher values of \(M\) lead to the representation capturing more information about the target, while at the same time discarding more information about the input. \(M = 1\) corresponds to the Variational Information Bottleneck. </center>

<--- Page Split --->

## E UNSUPERVISED REPRESENTATION LEARNING OBJECTIVES

Recent work on variational autoencoders (VAEs) has shown that optimizing the standard evidence lower bound (ELBO) is not sufficient in itself for learning useful representations (Chen et al., 2016; Alemi et al., 2018; Zhao et al., 2017). Generalizing the ELBO by incorporating different bottleneck constraints (Zhao et al., 2017; Higgins et al., 2017; Rezende & Viola, 2018) has shown promise in learning better latent codes. In this section, we discuss some preliminary results on an unsupervised version of the VDB objective.
We discuss its relation to VAE objectives such as the \(\beta\)-VAE (Higgins et al., 2017) and the importance weighted autoencoder (IWAE) (Burda et al., 2015).

## E.1 \(\beta\)-VAE AND IWAE

A variational autoencoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) is a directed probabilistic model that defines a joint density \(p_{\theta}(x,z) = p_{\theta}(x|z)p(z)\) between some continuous latent variable \(z\) and observed data \(x\). \(p(z)\) is chosen to be a simple prior distribution over the latents (e.g., an isotropic unit Gaussian) and \(p_{\theta}(x|z)\) is a decoder network that models the data generative process. Maximum likelihood estimation of the model parameters \(\theta\) is in general intractable. VAEs instead maximize the evidence lower bound (ELBO) by jointly training the decoder with an auxiliary encoder network \(q_{\phi}(z|x)\), parameterized by \(\phi\). The ELBO objective is

\[\begin{array}{r l} & {\mathcal{L}_{\mathrm{ELBO}}(x):= \mathbb{E}_{z\sim q_{\phi}(z|x)}\left[\log \frac{p_{\theta}(x|z)p(z)}{q_{\phi}(z|x)}\right] \quad (29)}\\ & {\qquad = \log p_{\theta}(x) - D_{\mathrm{KL}}(q_{\phi}(z|x)||p_{\theta}(z|x))\leq \log p_{\theta}(x). \quad (30)} \end{array}\]

The ELBO is optimized by sampling from \(q_{\phi}(z|x)\) using the reparameterization trick to obtain low-variance Monte Carlo estimates of the gradient. The ELBO is tight when \(q_{\phi}(z|x)\) matches the true posterior \(p_{\theta}(z|x)\); the tightness of the bound is thus coupled to the expressiveness of the encoder distribution. When \(q_{\phi}(z|x)\) is chosen as a simple diagonal Gaussian, minimizing \(D_{\mathrm{KL}}(q_{\phi}(z|x)||p_{\theta}(z|x))\) encourages the model's posterior to be approximately factorial, which limits the capacity of the model. The importance weighted autoencoder (IWAE) (Burda et al., 2015) addresses this issue. The key observation is that the ELBO objective in equation 29 is the log of a single (unnormalized) importance weight \(\frac{p_{\theta}(x,z)}{q_{\phi}(z|x)}\) with a proposal density defined by \(q_{\phi}(z|x)\). Using more samples from the proposal can only tighten the bound. The \(M\)-sample IWAE bound is

\[\mathcal{L}_{M}(x):= \mathbb{E}_{z^{1:M}\sim \prod_{i = 1}^{M}q_{\phi}(z^{(i)}|x)}\left[\log \left(\frac{1}{M}\sum_{i = 1}^{M}\frac{p_{\theta}(x,z^{(i)})}{q_{\phi}(z^{(i)}|x)}\right)\right]. \quad (31)\]

\(\mathcal{L}_{1}\) is just the ELBO and \(\lim_{M\to \infty}\mathcal{L}_{M} = \log p_{\theta}(x)\). The \(M\)-sample bound can alternatively be written as (Le et al., 2017)

\[\mathcal{L}_{M}(x) = \log p_{\theta}(x) - D_{\mathrm{KL}}(Q_{IS}\| P_{IS})\leq \log p_{\theta}(x), \quad (32)\]

where \(Q_{IS}\) and \(P_{IS}\) are, respectively, proposal and target densities defined on an extended sample space.

\[Q_{IS}(z^{1:M}) = \prod_{i = 1}^{M}q_{\phi}(z^{(i)}|x),\qquad P_{IS}(z^{1:M}) = \frac{1}{M}\sum_{i = 1}^{M}\frac{Q_{IS}(z^{1:M})}{q_{\phi}(z^{(i)}|x)}\, p_{\theta}(z^{(i)}|x). \quad (33)\]

Optimizing over this extended sample space allows for more flexible decoders and gives the IWAE additional degrees of freedom to model complex posteriors. The IWAE bound is in fact equivalent in expectation to the ELBO with a more complex proposal density \(\tilde{q}_{IW}\) defined by importance reweighting (Bachman & Precup, 2015; Cremer et al., 2017; Naesseth et al., 2018).
\[\mathcal{L}_{M}(x) = \mathbb{E}_{z^{2:M}\sim \prod_{i = 2}^{M}q_{\phi}(z^{(i)}|x)}\mathbb{E}_{z\sim \tilde{q}_{IW}(z|x,z^{2:M})}\left[\log \left(\frac{p_{\theta}(x,z)}{\tilde{q}_{IW}(z|x,z^{2:M})}\right)\right], \quad (34)\]

where the inner expectation is w.r.t. the unnormalized distribution \(\tilde{q}_{IW}\) defined as follows.

\[\tilde{q}_{IW}(z|x,z^{2:M}):= M\tilde{w}\, q_{\phi}(z|x),\quad \text{where } \tilde{w} = \frac{\frac{p_{\theta}(x,z)}{q_{\phi}(z|x)}}{\sum_{j = 1}^{M}\frac{p_{\theta}(x,z^{(j)})}{q_{\phi}(z^{(j)}|x)}}. \quad (35)\]

<--- Page Split --->

For \(M = 1\), \(\tilde{q}_{IW}(z|x) = q_{\phi}(z|x)\), and the ELBO objective admits the following decomposition.

\[\mathcal{L}_{\mathrm{ELBO}}(x) = \mathcal{L}_1(x) = \mathbb{E}_{z\sim q_{\phi}(z|x)}[\log p_{\theta}(x|z)] - D_{\mathrm{KL}}(q_{\phi}(z|x)\| p(z)). \quad (36)\]

The first term can be interpreted as an expected reconstruction cost and the second term as a regularizer (Kingma & Welling, 2013). For \(M > 1\), however, the IWAE bound admits no such decomposition. As \(M \to \infty\), \(\tilde{q}_{IW}\) approaches the true posterior \(p_{\theta}(z|x)\). However, the magnitude of the gradient w.r.t. the encoder parameters also decays to zero as more samples are used (Rainforth et al., 2018). This potentially limits the IWAE's ability to learn useful representations. The \(\beta\)-VAE (Higgins et al., 2017; Burgess et al., 2018) augments the ELBO by incorporating a hyperparameter \(\beta\) for the regularization term.

\[\mathcal{L}_{\beta}(x) = \mathbb{E}_{z\sim q_{\phi}(z|x)}[\log p_{\theta}(x|z)] - \beta D_{\mathrm{KL}}(q_{\phi}(z|x)\| p(z)) \quad (37)\]

For \(\beta = 1\), we recover the standard ELBO. Higgins et al. (2017) showed that when the prior \(p(z)\) and the encoder \(q_{\phi}(z|x)\) are chosen as diagonal Gaussians, reducing the capacity of the latent bottleneck by choosing \(\beta > 1\) incentivizes the latent representations to be more disentangled. Higher values of \(\beta\) also result in a more coherent latent space, so that the reconstructions interpolate smoothly on latent traversals.

## E.2 UNSUPERVISED DEFICIENCY BOTTLENECK OBJECTIVE

We now discuss some preliminary results on an unsupervised version of the VDB objective on the MNIST and Fashion-MNIST datasets. We consider a standard VAE model with the prior \(p(z)\) and the encoder \(q_{\phi}(z|x)\) parameterized by diagonal Gaussians. The encoder has two hidden layers with 200 units each. The decoder is a factorized Bernoulli parameterized by MLPs with two hidden layers with 200 units each. Using factorial decoder densities constrains the model to use the latent code to attain a high likelihood (Chen et al., 2016; Alemi et al., 2018). This is a simple way to achieve a nonvanishing mutual information between the latent variable and the input, which is important in our setting since we are interested in learning a useful representation. We train the model by minimizing the following unsupervised version of the VDB objective.

\[\frac{1}{N}\sum_{i = 1}^{N}\left[-\log \left(\frac{1}{M}\sum_{j = 1}^{M}p_{\theta}\big(x^{(i)}|f(x^{(i)},\epsilon^{(j)})\big)\right) + \beta D_{\mathrm{KL}}\big(q_{\phi}(Z|x^{(i)})||p(Z)\big)\right]. \quad (38)\]

We note that the \(\beta\)-VAE has a similar-looking training objective, with the only difference that the averaging w.r.t. the posterior samples is outside the log. In particular, if \(M = 1\), this is just the \(\beta\)-VAE objective.
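As a concrete reference for equation 38, the following is a minimal PyTorch-style sketch, assuming a diagonal Gaussian encoder and a factorized Bernoulli decoder with the interfaces noted in the comments (the function and argument names are illustrative assumptions). It evaluates the average inside the logarithm with a log-sum-exp, anticipating the numerical remark below.

```python
import math
import torch
import torch.nn.functional as F

def unsupervised_vdb_loss(x, encoder, decoder, M, beta):
    """Sketch of equation 38. x: (N, D) inputs in [0, 1]; encoder(x) returns
    (mu, logvar), each (N, K); decoder maps (M, N, K) latents to (M, N, D)
    Bernoulli logits (torch.nn.Linear layers broadcast over leading dims)."""
    mu, logvar = encoder(x)
    std = (0.5 * logvar).exp()
    eps = torch.randn(M, *mu.shape)          # M reparameterization samples
    z = mu + std * eps                       # (M, N, K): f(x, eps_j)
    logits = decoder(z)                      # (M, N, D)
    # log p(x | z_j), summed over pixels: (M, N)
    log_px = -F.binary_cross_entropy_with_logits(
        logits, x.expand(M, *x.shape), reduction="none").sum(-1)
    # -log((1/M) sum_j p(x|z_j)) via a numerically stable log-sum-exp.
    rec = -(torch.logsumexp(log_px, dim=0) - math.log(M))
    # Closed-form KL between the Gaussian posterior and the N(0, I) prior.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(-1)
    return (rec + beta * kl).mean()
```

For \(M = 1\), the log-sum-exp term reduces to the ordinary reconstruction cost and the sketch computes the \(\beta\)-VAE objective, matching the observation above.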
The objective in equation 38 also shares some superficial similarities with the IWAE objective for \(\beta = 1\). Note, however, that as discussed in Section E.1, we cannot decompose the IWAE objective for \(M > 1\). In particular, this implies that we cannot trade off reconstruction fidelity for learning more meaningful representations by incorporating bottleneck constraints. We have not explored whether using more complex posteriors such as \(\tilde{q}_{IW}\) is possible in the bottleneck formulation. For training, we choose a mini-batch size of \(N = 100\) and draw \(M = 1,3,6\) samples from the approximate posterior by using the reparameterization trick (Kingma & Welling, 2013). For our choice of Gaussian prior and encoder, the KL term can be computed and differentiated without estimation. We estimate the expected loss term using Monte Carlo sampling. Since the expectation is inside the log, higher values of \(M\) increase the variance of the gradient estimates. Numerically handling the log also requires some care; we used the log-sum-exp trick to compute the expectation. For values of \(M\) beyond 12, we observe some degradation in the visualizations.

<--- Page Split --->

## E.3 VISUALIZATIONS

![](images/18_0.jpg) <center>Figure 7: Sampling grids in latent space for \(M = 6\) for different values of \(\beta\) for MNIST. Higher values of \(\beta\) result in a more coherent latent space. </center>

![](images/18_1.jpg) <center>Figure 8: The latent space (mean values of the posterior for 5000 test examples) for FMNIST for different values of \(M\) and \(\beta\). \(M = 1\) corresponds to the \(\beta\)-VAE. </center>

![](images/18_2.jpg) <center>Figure 9: FMNIST reconstructions for different values of \(M\) and \(\beta\). At low values of \(\beta\), we have good reconstructions. \(M = 1\) corresponds to the \(\beta\)-VAE. </center>

<--- Page Split --->
reject
Reject
6
ICLR_2019_paper_0094
iclr
2,019
# FAST OBJECT LOCALIZATION VIA SENSITIVITY ANALYSIS Anonymous authors Paper under double-blind review ## ABSTRACT Deep Convolutional Neural Networks (CNNs) have been repeatedly shown to perform well on image classification tasks, successfully recognizing a broad array of objects when given sufficient training data. Methods for object localization, however, are still in need of substantial improvement. In this paper, we offer a fundamentally different approach to the localization of recognized objects in images. Our method is predicated on the idea that a deep CNN capable of recognizing an object must implicitly contain knowledge about object location in its connection weights. We provide a simple method to interpret classifier weights in the context of individual classified images. This method involves the calculation of the derivative of network-generated activation patterns, such as the activation of output class label units, with regard to each input pixel, performing a sensitivity analysis that identifies the pixels that, in a local sense, have the greatest influence on internal representations and object recognition. These derivatives can be efficiently computed using a single backward pass through the deep CNN classifier, producing a sensitivity map of the image. We demonstrate that a simple linear mapping can be learned from sensitivity maps to bounding box coordinates, localizing the recognized object. Our experimental results, using real-world data sets for which ground truth localization information is known, reveal competitive accuracy from our fast technique. ## 1 INTRODUCTION Deep Convolutional Neural Networks (CNNs) have been shown to be effective at image classification, accurately performing object recognition even with thousands of object classes when trained on a sufficiently rich data set of labeled images Krizhevsky et al. (2012). One advantage of CNNs is their ability to learn complete functional mappings from image pixels to object categories, without any need for the extraction of hand-engineered image features Sermanet et al. (2013). To facilitate learning through stochastic gradient descent, CNNs are (at least approximately) differentiable with regard to connection weight parameters. Image classification, however, is only one of the problems of computer vision. In the task of image classification, each image has a single label, associated with the class identity of the main object in the image, and the goal is to assign correct labels in a manner that generalizes to novel images. This can be accomplished by training a machine learning classifier, such as a CNN, on a large data set of labeled images Deng et al. (2009). In the object localization task, in comparison, the output for a given image is not a class label but the locations of a specified number of objects in the image, usually encoded as bounding boxes. Evaluation of an object localization system generally requires ground truth bounding boxes to compare to the system's output. The detection task is more difficult than the localization task, as the number of objects is not predetermined Sermanet et al. (2013). In this paper, we focus on object localization, identifying the position in the image of a recognized object. As is common in the localization literature, position information is output in the form of a bounding box.
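Since accuracy is judged against ground truth boxes throughout, it is worth fixing the standard criterion up front: a predicted box counts as correct when its intersection over union (IOU) with the ground truth box exceeds one half, the PASCAL criterion used in Section 3.1. A small self-contained sketch, assuming the common [x1, y1, x2, y2] corner encoding for boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes encoded as [x1, y1, x2, y2]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def corloc(predicted_boxes, ground_truth_boxes, threshold=0.5):
    """Fraction of images whose predicted box matches ground truth under
    the PASCAL criterion (IOU greater than the threshold)."""
    hits = sum(iou(p, g) > threshold
               for p, g in zip(predicted_boxes, ground_truth_boxes))
    return hits / len(predicted_boxes)
```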
Previously developed techniques for accomplishing this task generally involve searching the image for the object, considering many candidate bounding boxes with different sizes and locations, sometimes guided by an auxiliary algorithm for heuristically identifying regions of interest Sermanet et al. (2013); Girshick (2015); He et al. (2017). For each candidate location, the sub- image captured by the bounding box is classified for object category, with the final output <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Examples of sensitivity maps, displaying the sensitivity of network internal representations to individual pixels, providing information about the locations of the main objects in the source images. </center> bounding box either being the specific candidate region classified as the target object with the highest level of certainty or some heuristic combination of neighboring or overlapping candidate regions with high classification certainty. These approaches tend to be time consuming, often requiring deep CNN classification calculations of many candidate regions at multiple scales. Efforts to speed these methods mostly focus on reducing the number of regions considered, typically by using some adjunct heuristic region proposal algorithm Girshick (2015); Ren et al. (2015); He et al. (2017). Still, the number of considered regions is often reported to be roughly 2,000 per image. While these approaches can be fairly accurate, their slowness limits their usefulness, particularly for online applications. A noteworthy alternative approach is to directly train a deep CNN to produce outputs that match ground truth localization bounding boxes, using a large image data set that provides both category and localization information for each image. It appears as if some form of this method was used with AlexNet Krizhevsky et al. (2012), though details concerning localization, rather than image classification, are difficult to discern from the published literature. A natural approach would be to cast the learning of bounding boxes as a simple regression problem, with targets being the four coordinates that specify a bounding box (e.g., coordinates of upper- left and lower- right corners, or region center coordinates along with region width and height). It is reasonable to consider sharing early layers of a deep CNN, such as those performing convolution and max pooling, between both an image classification network and an object localization network. Indeed, taking such a multitask learning approach Caruana (1997) can allow for both object category and object location training data to shape connection weights throughout the network. Thus, the deep CNN would have "two heads", one for image classification, using a classification cross- entropy loss function, and one for object localization, reducing the \(\ell_2\) norm between ground truth and predicted bounding box coordinates Krizhevsky et al. (2012). While this approach can produce a network that quickly outputs location information, extensive training on large data sets containing ground truth bounding box information is necessary to produce good generalization. In this paper, we introduce an approach to object localization that is both very fast and robust in the face of limited ground truth bounding box training data. This approach is rooted in the assertion that any deep CNN for image classification must contain, implicit in its connection weights, knowledge about the location of recognized objects Selvaraju et al. (2016). 
The goal, then, is to interpret the flow of activation in an object recognition network when it is performing image classification so as to extract information about object location. Furthermore, the goal is to do this quickly. Thus, this approach aims to leverage location knowledge that is already latent in extensively trained and tuned image classification networks, without requiring a separate learning process for localization. Our method makes use of the notion of a sensitivity analysis Sobol (1993). We propose estimating the sensitivity of the category outputs, or activation patterns at internal network layers, of an image classification CNN to variance in each input pixel, given a specific input image. The result is a numeric value for each pixel in the input image that captures the degree to which small changes in that pixel (locally, around its current value) give rise to large changes in the output category. Together, these numeric values form a sensitivity map of the image, encoding image regions that are important for the current classification. Our proposed measure of sensitivity is the partial derivative of activity with regard to each pixel value, evaluated for the current image. For a deep CNN that formally embodies a differentiable mapping (at least approximately) from image pixels to output categories, this partial derivative can be quickly calculated. While many tools currently exist for <--- Page Split ---> efficiently calculating such derivatives, we provide a simple algorithm that computes these values through a single backward pass through the image classification network, similar to that used to calculate unit error (delta) values in the backpropagation of error learning algorithm Rumelhart et al. (1986). Thus, we can generate a sensitivity map for an image in about the same amount of time as it takes the employed image classification network to produce an output. Some example sensitivity maps are shown in Figure 1. The idea of using sensitivity information, like that in our sensitivity maps, for a variety of tasks, including localization, has previously appeared in the literature Simonyan et al. (2013); Zhou et al. (2016); Selvaraju et al. (2016). Indeed, some of these past efforts have used more sophisticated measures of sensitivity. In this paper, we show that even our very simple sensitivity measure can produce strong localization performance, and it can do so quickly, without any modifications to the classification network, and even for object categories on which the classification network was not trained. The relationship of the results reported here to previously reported work is discussed further in Section 4. As previously mentioned, object localization methods typically encode object location as a bounding box. Since our sensitivity maps encode location differently, in terms of pixels, we propose learning a simple linear mapping from sensitivity maps to bounding box coordinates, allowing our method to output a bounding box for each classified image. We suggest that this linear mapping can be robustly learned from a relatively small training set of images with ground truth bounding boxes, since the sensitivity maps form a much simpler input than the original images.
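A minimal sketch of this mapping is given below; the function names are illustrative assumptions, and the closed-form least-squares solver is one convenient choice, whereas the experiments in Section 3.3 fit the same linear model with stochastic gradient descent.

```python
import numpy as np

def fit_box_regressor(S, B):
    """Fit a linear map from flattened sensitivity maps S (N, M) to ground
    truth bounding boxes B (N, 4). Returns a (M + 1, 4) weight matrix whose
    last row is the bias; this is four independent least-squares problems."""
    S1 = np.hstack([S, np.ones((S.shape[0], 1))])   # append a bias column
    W, *_ = np.linalg.lstsq(S1, B, rcond=None)
    return W

def predict_boxes(W, S):
    """Map sensitivity maps to predicted bounding box coordinates (N, 4)."""
    S1 = np.hstack([S, np.ones((S.shape[0], 1))])
    return S1 @ W
```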
The primary contributions of this paper may be summarized as follows:

- We propose a new general approach to performing object localization, interpreting a previously trained image classification network by performing a sensitivity analysis, identifying pixels to which the category output, or a more general internal representation, is particularly sensitive.
- We demonstrate how a linear function from the resulting sensitivity maps to object location bounding box coordinates may be learned from training images containing ground truth location information.
- We provide a preliminary assessment of our approach, measuring object localization performance on the ImageNet and PASCAL VOC data sets using the VGG16 image classification CNN, showing strong accuracy while maintaining short computation times.

## 2 METHOD

### 2.1 CALCULATING PIXEL SENSITIVITIES IN A TRAINED CNN

Calculating derivatives of a function of network output with regard to network parameters, such as connection weights, is a standard part of CNN training. It is common for learning in a deep CNN to involve stochastic gradient descent, which involves such derivatives. In that case, the derivatives are of an objective function with regard to connection weight values. In image classification networks, the objective function is designed to have optima where training images are correctly classified. In the case of object localization, a similar objective function could be designed to minimize differences between output bounding box coordinates and provided ground truth bounding box coordinates, for all images in an appropriately labeled training set. For example, given \(N\) training images, stored in the matrix \(\mathbf{X}\), with the ground truth 4-dimensional bounding box vector for image \(x_{i}\) being \(y_{i}\), and \(G(x_{i};\mathbf{w})\) being the CNN output vector for image \(x_{i}\) given connection weights \(\mathbf{w}\), an appropriate loss function would be:

\[\ell (\mathbf{X},\mathbf{w}) = \frac{1}{N}\sum_{i = 1}^{N}\| y_{i} - G(x_{i};\mathbf{w})\|_{2}^{2} \quad (1)\]

The CNN will produce good estimates of the training image bounding boxes when this loss function is minimized with regard to \(\mathbf{w}\). Network weight parameters that minimize this loss, \(\mathbf{w}^{*}\), may be sought through stochastic gradient descent, incrementally updating \(\mathbf{w}\) according to the gradient of \(\ell (\mathbf{X},\mathbf{w})\) with regard to \(\mathbf{w}\). A primary drawback of this approach is that it requires a large and representative sample of images with ground truth bounding box information.

<--- Page Split --->

Consider that, once weights are found, the gradient of \(\ell (\mathbf{X},\mathbf{w}^{*})\) with regard to \(\mathbf{X}\) would provide information about the sensitivity of the bounding box loss function with regard to the pixels in the images. This gradient can be calculated as efficiently as the gradient of the loss with regard to the weights, with both depending on the gradient of \(G(x_{i};\mathbf{w})\) with regard to a subset of its arguments. This means that the gradient of \(G(x_{i};\mathbf{w}^{*})\) with regard to \(x_{i}\) can be efficiently computed, and that gradient would capture the sensitivity of bounding box coordinates with regard to the specific pixels in image \(x_{i}\). Note that this gradient can be calculated for images beyond those in the training set.
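The following PyTorch-style sketch shows how such an input gradient is obtained with a single backward pass; `model` stands for any differentiable network, and the function and argument names are illustrative assumptions. The same routine applies unchanged when, as proposed next, the scalar being differentiated is a class score of a trained image classification network.

```python
import torch

def input_gradient(model, image, output_index=0):
    """Derivative of one scalar network output w.r.t. every input pixel,
    computed with one forward and one backward pass.

    image: (1, C, H, W) tensor. Returns a (C, H, W) tensor of derivatives."""
    x = image.clone().requires_grad_(True)
    out = model(x).flatten()       # e.g. box coordinates or class scores
    out[output_index].backward()   # single backward pass
    return x.grad.detach().squeeze(0)
```

Taking absolute values of the returned derivatives and aggregating across the color channels (Section 2.3) yields the sensitivity map.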
Knowing which pixels in a novel image play an important role in determining the bounding box provides useful information for object localization. Using this calculation to address the object localization task makes little sense, however, as \(G(x_{i};\mathbf{w}^{*})\) provides an estimate of object location without a need to consider pixel sensitivity. Rather than training a deep CNN to output bounding boxes, requiring extensive labeled data, we propose calculating the same gradient for a different network – one successfully trained to perform image classification. If we now see \(G(x_{i};\mathbf{w}^{*})\) as the output of such an image classification network, its gradient with regard to \(x_{i}\) would provide information about the sensitivity of the assigned category to individual pixels. Pixels with the largest absolute values of this derivative will, around the input \(x_{i}\), produce the largest changes in the classification decision of the CNN. This can be seen as one measure of how important pixels are for classifying the object in the image. Consider that the object class output is not immediately affected by changes to pixels with a derivative of zero. The calculation of this gradient can be performed as efficiently as a single "backward pass" through the classification network. This is well illustrated by considering the case of a simple layered backpropagation network Rumelhart et al. (1986) in which the "net input" of unit \(i\), \(\eta_{i}\), is a weighted sum of the activations of units in the previous layer, and the activation of unit \(i\) is \(g(\eta_{i})\), where \(g(\cdot)\) is the unit activation function. In this case, we can define a sensitivity value for each unit, \(s_{i}\), as the derivative of the network output with regard to \(\eta_{i}\). Using the chain rule of calculus, it is easy to show that the sensitivity of an output unit is \(g^{\prime}(\eta_{i})\), and, for units in earlier layers, the sensitivities are computed as follows:

\[s_{i} = g^{\prime}(\eta_{i})\sum_{k}w_{k i}s_{k} \quad (2)\]

where \(k\) iterates over all units in the layer immediately downstream from unit \(i\) and \(w_{k i}\) is the connection weight from unit \(i\) to unit \(k\). This calculation may be performed, layer by layer, from outputs to inputs, until \(s_{i}\) values for each pixel input unit are available. This demonstrates how efficiently pixel sensitivity values can be calculated for a given classified image. Of course, there are currently a variety of software packages that include tools for calculating gradients. In the evaluation of our approach in Section 3, we report results using the tools provided by TensorFlow Abadi et al. (2015).

### 2.2 SENSITIVITY OF THE ATTENTION MAP

We have proposed using a previously trained image classification network as a source of information about object location, focusing on the gradient of the network output with regard to image pixels. It is interesting to note that it might not be necessary to perform the sensitivity calculation using the full classification network. There is a growing body of research that suggests that, in a well trained image classification CNN, the features that are extracted at the "attention map" layer (i.e., the output of the last convolutional layer) tend to be generally useful for learning a variety of image analysis tasks Razavian et al. (2014); Donahue et al. (2014).
Inspired by these results, we have investigated the possibility of substituting the gradient of the classifier output with regard to pixels with the gradient of the attention map with regard to pixels. This avoids calculations involving the final fully connected layers and any classification softmax layer. Generating image sensitivity maps from the attention map layer is slightly faster than our original proposal, but, more importantly, it is possible that general knowledge about object location might be found in the attention map, and using the attention map as the basis of the sensitivity map might actually generalize beyond the categories on which the image classification CNN was trained. We have not yet done a formal comparison of these two approaches to constructing the sensitivity map, but example results using both approaches are reported in Section 3. In this variant, the gradient with respect to the input pixels is taken of an aggregate of the values of the last convolutional layer; we call this aggregate the Gestalt total (GT), which can be

<--- Page Split --->

computed as follows.

\[GT = \frac{1}{H\times W\times C}\sum_{i,j,k}A_{n}(i,j,k) \quad (3)\]

where \(A_{n}\) is the activation map of the last convolutional layer, and \(H\), \(W\), and \(C\) are the height, width, and number of channels of that layer.

### 2.3 AGGREGATING ACROSS COLOR CHANNELS

The sensitivity map calculations that have been described, so far, provide a scalar sensitivity value for each input to the image classification deep CNN. Color images, however, are regularly provided to such networks using multiple inputs per image pixel, often encoding each pixel over three color channels. Thus, the gradient calculation will actually produce three sensitivity values for each pixel. Since we hope to produce a sensitivity map that focuses in a general way on location information, it seems reasonable to aggregate the three sensitivity values into one. Since the direction of the sensitivity relationship with the class output is irrelevant, a good first step is to take the absolute value of each derivative. Given that dependence on even a single color channel suggests that a pixel is important for identifying the object, an argument can be made that a pixel should be labeled with the maximum of the three absolute derivatives. Alternatively, it could be argued that all color channels should be taken into account when producing the sensitivity map, in which case it might be better to average the three absolute derivatives. We have explored both of these aggregation methods, with results appearing in Section 3.

### 2.4 LEARNING TO PRODUCE BOUNDING BOXES

Object localization algorithms typically output the four coordinates of a bounding box to communicate the location of the target object. Such a bounding box is not intrinsic to a sensitivity map, however. Heuristic techniques could be used to identify a rectangular region that captures the majority of the high sensitivity pixels, while avoiding low sensitivity pixels, but we have taken a different approach. We have opted to learn a linear mapping from sensitivity maps to bounding box coordinates, using training images with ground truth location information. It is important to note that learning this mapping is not the same as learning to map from the original images to bounding box coordinates, as has been done in some other object localization systems.
Sensitivity maps contain much less information than the original images, so using the sensitivity maps as inputs both reduces the dimensionality of the input to this mapping and makes for a simpler functional relationship between pixels and bounding box coordinates. We expect that this simplification will allow the mapping to bounding box coordinates to be successfully learned using a far smaller set of training images labeled with ground truth object locations. Indeed, we expect that a simple linear mapping could perform well. Formally, we define the parameters of the linear mapping to the four bounding box coordinates as a \(4\times M\) matrix, \(\hat{W}\) (where \(M\) is the number of pixels in an image), and a 4-dimensional vector of "bias weights", \(\hat{w}\). Given a sensitivity map, \(s\), the output is \((\hat{W} s + \hat{w})\). Given a training set of \(N\) images, the mapping is found by minimizing the following objective function with regard to \(\hat{W}\) and \(\hat{w}\):

\[\frac{1}{N}\sum_{i = 1}^{N}\frac{1}{4}\sum_{j = 1}^{4}\big(B_{i,j} - (\hat{W} s_{i} + \hat{w})_{j}\big)^{2} \quad (4)\]

where \(s_{i}\) is the sensitivity map for the \(i^{th}\) image, and \(B_{i,j}\) is the \(j^{th}\) coordinate of the bounding box for the \(i^{th}\) image. This learning process amounts to four independent linear regression problems, which can be solved efficiently. Once learned, mapping from sensitivity maps to bounding box coordinates can be done very quickly. With sensitivity map formation requiring only a single backward pass through the image classification network, the whole process - from image, to classification, to sensitivity map, to bounding box - can be performed in little more than twice the time it takes for the network to do object recognition.

<--- Page Split --->

![](images/5_0.jpg) <center>Figure 2: A schematic illustration of the proposed method for sensitivity analysis on the pre-trained VGG16 network. </center>

## 3 RESULTS

The code and the sensitivity maps for the ImageNet and PASCAL VOC data sets will be made publicly available.

### 3.1 DATA SETS & PERFORMANCE MEASURES

We evaluated our proposed method for object localization on two challenging data sets: the PASCAL VOC 2007 Everingham et al. (2008) data set and the ImageNet 2012 Deng et al. (2009) data set. The PASCAL VOC 2007 data set was selected due to its use in the existing object localization literature. The ImageNet data set is one of the largest publicly available data sets. It also contains many images annotated with ground truth bounding boxes. We followed the literature with regard to the evaluation criterion applied to our method, using CorLoc, which has been used for weakly supervised localization. The CorLoc metric is defined as the percentage of images in a data set that are correctly localized based on the PASCAL criterion, in which a given localization is considered correct if and only if the intersection over union (IOU) of the predicted and ground truth bounding boxes is greater than one half:

\[IOU = \frac{area(\beta_{p}\cap \beta_{gt})}{area(\beta_{p}\cup \beta_{gt})} > 0.5 \quad (5)\]

where \(\beta_{p}\) is the predicted bounding box and \(\beta_{gt}\) is the ground truth bounding box Tang et al. (2014).

### 3.2 PRE-TRAINED IMAGE CLASSIFICATION DEEP CNN

To demonstrate that our approach works with an image classification deep CNN that was in no way specialized for our localization method, we opted to use a publicly available network.
We used the VGG16 network, shown in Figure 2, fully trained Simonyan & Zisserman (2014). This network provides ImageNet object classes as output, allowing us to calculate sensitivity maps based on the network classification when examining ImageNet data. For the PASCAL VOC 2007 data set, we used the previously described method of calculating derivatives based on the attention map of VGG16, since there is no consistent class correspondence between the PASCAL VOC 2007 classes and the classes on which VGG16 was trained. To produce sensitivity maps for the PASCAL VOC 2007 data set, we aggregated across color channels by using the maximum absolute derivative across the three inputs for each pixel. For the ImageNet data set, we averaged the absolute derivatives across the three inputs in order to produce pixel sensitivity values.

<--- Page Split --->

![](images/6_0.jpg) <center>Figure 3: Results of the proposed method on the first 10 classes of PASCAL VOC 2007. Each column shows three examples in one class. The green boxes are the ground truth, and the red ones are the predicted bounding boxes. </center>

### 3.3 EXPERIMENTAL SETUP

For generating sensitivity maps, we used a pretrained VGG16 network. We used the whole network architecture when experimenting on the ImageNet data set; otherwise, we removed the last 3 fully connected layers and computed the Gestalt total from the last convolutional layer. In either case, the derivatives were computed using just one backward pass to the original pixels. For learning bounding boxes, we used the aggregated sensitivity maps as input. To learn the mapping from sensitivity maps to bounding box coordinates, we performed linear regression using stochastic gradient descent. Updates were performed in batches of 2,048. The learning rate was initialized to 0.1 and decayed by a factor of 10 every 10,000 iterations. The experiment ran on one GPU for four days.

### 3.4 PERFORMANCE ON PASCAL VOC 2007

The full PASCAL VOC 2007 data set includes 12,608 training set images and an equal number of testing set images Everingham et al. (2008). Each image contains an object of 1 of 20 different categories. We applied our object localization method to this full data set. However, we were unable to find published localization performance data for other methods applied to the full data set, which we might use for comparison to our approach. Work reported in Tang et al. (2014) provides performance data on 6 of the classes: aeroplane, bicycle, boat, bus, horse, and motorbike. Performance on these same classes has also been reported by others Russell et al. (2006); Chum & Zisserman (2007); Deselaers et al. (2012). Table 1 compares the localization performance of our method with that of other approaches. Note that our method, while being very fast, outperforms the comparison algorithms.

Table 1: CorLoc Performance on PASCAL VOC 2007 (6 Classes)

<table><tr><td>Method</td><td>Average CorLoc</td></tr><tr><td>Russell et al.</td><td>22%</td></tr><tr><td>Chum &amp; Zisserman</td><td>32%</td></tr><tr><td>Deselaers et al.</td><td>37%</td></tr><tr><td>Tang et al.</td><td>39%</td></tr><tr><td>Sensitivity Maps</td><td>55%</td></tr></table>

Examples of the bounding boxes selected by our method, compared to ground truth, for all 20 classes in the PASCAL VOC 2007 data set are shown in Figure 3. Qualitatively, it appears as if our approach is most accurate when there is a single target object with little crowding.
However, if the target object is small and in a crowded region of the image, performance is less reliable. While speed is an important property of our method, as is the reuse of classification training for localization, we compared our approach to data from some slower state-of-the-art deep learning techniques for localization that do not necessarily have these properties. We compared our method to R-CNN Girshick et al. (2014), DPM Felzenszwalb et al. (2013), and Poselets Bourdev & Malik (2009). These were chosen due to the ready availability of published localization results for these alternative methods on the PASCAL VOC 2007 data set, with the measure of performance being Average CorLoc (or mean Average Precision, mAP). The comparison results are given in Table 2. Several of the comparison methods display better localization performance than our approach, but it is important to keep in mind that the comparison cases had some important advantages, including taking the time to use a sliding window and access to the class labels on which the network was trained. Recall that our sensitivity maps were produced, in this case, by calculating the sensitivity of the network attention map activity to pixel values. Thus, this comparison illustrates trade-offs between speed, performance, and generalization.

<--- Page Split --->

Table 2: PASCAL VOC 2007 Test Detection Results. The proposed method performed favorably against the state-of-the-art methods.

<table><tr><td>Method</td><td>Mean CorLoc</td><td>aero</td><td>bike</td><td>bird</td><td>boat</td><td>bottle</td><td>bus</td><td>car</td><td>cat</td><td>chair</td><td>cow</td><td>table</td><td>dog</td><td>horse</td><td>mbike</td><td>person</td><td>plant</td><td>sheep</td><td>sofa</td><td>train</td><td>tv</td></tr><tr><td>Siva et al. (2012)</td><td>30.2</td><td>45.8</td><td>21.8</td><td>30.9</td><td>20.4</td><td>5.3</td><td>37.6</td><td>40.8</td><td>51.6</td><td>7.0</td><td>29.8</td><td>27.5</td><td>41.3</td><td>41.8</td><td>47.3</td><td>24.1</td><td>12.2</td><td>28.1</td><td>32.8</td><td>48.7</td><td>9.4</td></tr><tr><td>Shi et al. (2013)</td><td>36.2</td><td>67.3</td><td>54.4</td><td>34.3</td><td>21.7</td><td>1.3</td><td>46.6</td><td>60.7</td><td>68.9</td><td>2.5</td><td>32.4</td><td>16.2</td><td>58.9</td><td>51.5</td><td>64.6</td><td>18.2</td><td>3.1</td><td>20.9</td><td>34.7</td><td>63.4</td><td>5.9</td></tr><tr><td>Goblock (2013)</td><td>38.8</td><td>56.6</td><td>58.3</td><td>34.9</td><td>20.7</td><td>6.8</td><td>54.9</td><td>60.8</td><td>20.8</td><td>6.2</td><td>30.5</td><td>10.2</td><td>29.8</td><td>58.0</td><td>56.0</td><td>46.9</td><td>18.7</td><td>18.5</td><td>56.5</td><td>34.7</td><td>32.4</td><td>58.4</td></tr><tr><td>OM Li et al. (2016)</td><td>31.8</td><td>50.4</td><td>30</td><td>34.6</td><td>18.2</td><td>6.2</td><td>39.3</td><td>42.2</td><td>57.3</td><td>10.8</td><td>29.8</td><td>20.5</td><td>41.8</td><td>43.2</td><td>51.8</td><td>24.3</td><td>20.8</td><td>29.2</td><td>26.6</td><td>45.6</td><td>12.5</td></tr><tr><td>SP-VGGNet Zhu et al. (2016)</td><td>60.6</td><td>85.3</td><td>64.2</td><td>67.0</td><td>42.0</td><td>16.4</td><td>71.0</td><td>64.7</td><td>83.7</td><td>20.7</td><td>63.8</td><td>58.0</td><td>24.1</td><td>84.3</td><td>80.0</td><td>60.0</td><td>29.4</td><td>36.3</td><td>68.1</td><td>77.4</td><td>30.5</td></tr><tr><td>PIRM Chu et al. (2015)</td><td>36.6</td><td>50.3</td><td>42.8</td><td>30.8</td><td>18.5</td><td>4.0</td><td>52.3</td><td>64.5</td><td>74.5</td><td>8.3</td><td>49.3</td><td>12.2</td><td>44.0</td><td>64.1</td><td>57.2</td><td>15.8</td><td>30.9</td><td>34.0</td><td>61.6</td><td>31.5</td><td></td></tr><tr><td>Sensitivity Map</td><td>40.1</td><td>63.8</td><td>55.1</td><td>41.2</td><td>23.3</td><td>34.2</td><td>58.6</td><td>72.7</td><td>36.9</td><td>23.3</td><td>49.7</td><td>11.5</td><td>29.6</td><td>50.1</td><td>65.9</td><td>11.8</td><td>42.2</td><td>39.7</td><td>18.1</td><td>51.0</td><td>41.2</td></tr></table>

Note that, as one of the reviewers suggested, it is worth examining the results when only the sensitivity maps and heuristics are used to draw bounding boxes around objects. For this experiment, we used a Gaussian smoothing filter to smooth the sensitivity maps, then picked the top 20% of pixels and drew the bounding box around those pixels, as other researchers have done before Zhou et al. (2016); Selvaraju et al. (2016). Based on our observations, this degraded the mean CorLoc by 3% in the best case. However, this process depends heavily on the smoothing parameter \(\sigma\). The results obtained for different \(\sigma\) values are reported in Table 3.

Table 3: Average CorLoc Performance on PASCAL VOC 2007 based on heuristic bounding boxes

<table><tr><td>σ</td><td>CorLoc</td></tr><tr><td>10</td><td>27.2%</td></tr><tr><td>20</td><td>38.4%</td></tr><tr><td>30</td><td>32.5%</td></tr></table>

### 3.5 PERFORMANCE ON IMAGENET

ImageNet is a large image data set that has been systematically organized by object category Deng et al. (2009). We executed a large-scale evaluation of our approach by using all images in ImageNet that are annotated with ground truth localization information. This subset contains 300,916 images involving 478 object classes. We divided this data set into a training set, a test set, and a validation set by sampling without replacement (i.e., the intersection between each pair of the three sets was empty). There were 225,687 images (75%) in the training set, and there were 45,137 images in each of the other two sets. We compared the performance of our approach with two methods discussed in Tang et al. (2014) for which ImageNet results are explicitly reported: Top Objectiveness Box and Co-Localization. Also, we noted that many images in this data set presented the target object in the middle of the image, providing a bias that could be leveraged by learned localization systems. Thus, as a baseline of performance, we calculated the CorLoc performance for a system that blindly offered the same bounding box in the middle of the image, with average size, for every input. The results are shown in Table 4. Once again, note the relatively high accuracy performance of our efficient method. Also note that the baseline was comfortingly low.
As might be expected, performance varies with class. Table 4: CorLoc Performance on ImageNet (478 Classes) <table><tr><td>Method</td><td>Average CorLoc</td></tr><tr><td>Constant Center Box Baseline</td><td>12.34%</td></tr><tr><td>Top Objectiveness Box</td><td>37.42%</td></tr><tr><td>Co-Localization</td><td>53.20%</td></tr><tr><td>Sensitivity Maps</td><td>68.76%</td></tr></table> Our algorithm appears to do well on some objects, such as balls and dogs. One might suspect <--- Page Split ---> that failures arise in the linear mapping from sensitivity maps to bounding box coordinates, but a perusal of the sensitivity maps themselves suggests that the pixel sensitivity values vary in utility across different object categories. Still, our method performs fairly well across the classes. Note that the IOU does not fall below 0.62 for any class. This suggests that, while some individual images may be problematic, the overall performance for each class is quite good. This universally strong class-specific performance is also displayed in Table 4. ![](images/8_0.jpg) <center>Figure 4: Results of the proposed method on different object categories from the ImageNet data set. Each row shows 9 examples in one class. The green boxes are the ground truth, and the red ones are the predicted bounding boxes. </center> ### 3.6 AGGREGATION METHODS ANALYSIS The sensitivity analysis yields a sensitivity value for every pixel in each channel of an RGB image, and since we need locations, we must aggregate across channels. We proposed two methods: an average function and a maximum function. The first takes the average across the channels, and the second picks the maximum value across the channels. We did not notice a significant difference between these two methods in localization performance; the only notable difference is that sensitivity maps generated with the average function look somewhat smoother than those generated with the maximum function. On the ImageNet data set, the CorLoc for the average and maximum aggregation functions is 68.7 and 67.9, respectively; on the PASCAL VOC data set, the corresponding results are 39.2 and 40.1. <--- Page Split ---> ### 3.7 SPEED ANALYSIS Because running time depends heavily on hardware parameters and network architecture, we analyze the speed of our object localization approach in terms of forward and backward passes. Our approach needs only two passes: one forward pass for the classification and one backward pass for localization. If each forward or backward pass takes \(n\) operations, the total cost is \(O(n)\): one forward pass, one backward pass, and one inference step through the linear model. ## 4 CONCLUSION We have presented an approach to object localization based on performing a sensitivity analysis of a previously trained image classification deep CNN. Our method is fast enough to be used in online applications, and it demonstrates accuracy that is superior to some methods that are much slower. It is likely that even better accuracy could be had by incorporating sensitivity analysis information into a more sophisticated bounding box estimator. As previously noted, the idea of using sensitivity information has appeared in previously published work. There are ways in which the results reported in this paper are distinct, however. We have moved beyond visualization of network function using sensitivity (or saliency) Simonyan et al.
(2013) to performing direct comparisons between different methods on the localization task. We have shown that using a fast and simple measure of sensitivity can produce comparable performance to that of much slower methods. Our approach produces good generalization without modifying the classification network, as is done in Class Activation Mapping (CAM) Zhou et al. (2016). With our PASCAL VOC 2007 results, we have shown that our approach can successfully be applied to attention maps, even when the image contains objects belonging to a class on which the classification network was not trained, distinguishing it from Grad- CAM Selvaraju et al. (2016). In short, we have demonstrated the power of a simple sensitivity measure for performing localization. Note that our approach may be used with image classifiers other than CNNs. The proposed sensitivity analysis can be conducted on any differentiable classifier, though performance will likely depend on classifier specifics. Indeed, at a substantial time cost, even a black box classifier could be approximately analyzed by making small changes to pixels and observing the effects on activation patterns. The proposed approach is quite general. Indeed, we are currently working on applying sensitivity analysis to deep networks trained on other tasks, with the goal of interpreting network performance on the current input in a useful way. Thus, we see a potentially large range of uses for sensitivity analysis in neural network applications. ## REFERENCES Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large- scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. Lubomir Bourdev and Jitendra Malik. Poselets: Body part detectors trained using 3d human pose annotations. In CVPR, 2009. Rich Caruana. Multitask learning. Machine Learning, 28:41- 75, 1997. Minsu Cho, Suha Kwak, Cordelia Schmid, and Jean Ponce. Unsupervised object discovery and localization in the wild: Part- based matching with bottom- up region proposals. In CVPR, 2015. Ondrej Chum and Andrew Zisserman. An exemplar model for learning object classes. In CVPR, 2007. <--- Page Split ---> Jia Deng, Wei Dong, Richard Socher, Li- Jia Li, Kai Li, and Li Fei- Fei. ImageNet: A large- scale hierarchical image database. In CVPR. IEEE, 2009. Thomas Deselaers, Bogdan Alexe, and Vittorio Ferrari. Weakly supervised localization and learning with generic knowledge. International Journal of Computer Vision, 100(3):275- 293, 2012. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014. Mark Everingham, L Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes challenge 2007 (VOC 2007) results, 2008. Pedro Felzenszwalb, Ross Girshick, David McAllester, and Deva Ramanan. Visual object detection with deformable part models. 
Communications of the ACM, 56(9):97-105, 2013. Ross Girshick. Fast R-CNN. In ICCV, 2015. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014. Ramazan Gokberk Cinbis, Jakob Verbeek, and Cordelia Schmid. Multi-fold MIL training for weakly supervised object localization. In CVPR, 2014. Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask R-CNN. In ICCV, 2017. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. Dong Li, Jia-Bin Huang, Yali Li, Shengjin Wang, and Ming-Hsuan Yang. Weakly supervised object localization with progressive domain adaptation. In CVPR, 2016. Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In CVPR, 2014. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015. David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986. Bryan C. Russell, William T. Freeman, Alexei A. Efros, Josef Sivic, and Andrew Zisserman. Using multiple segmentations to discover objects and their extent in image collections. In CVPR, 2006. Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. arXiv:1610.02391, 2016. Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, and Yann LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv:1312.6229, 2013. Zhiyuan Shi, Timothy M. Hospedales, and Tao Xiang. Bayesian joint topic modelling for weakly supervised object localisation. In CVPR, 2013. K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013. Parthipan Siva, Chris Russell, and Tao Xiang. In defence of negative mining for annotating weakly labelled data. In ECCV. Springer, 2012. <--- Page Split ---> I. M. Sobol. Sensitivity estimates for nonlinear mathematical models. Mathematical Modelling and Computational Experiments, 1(4):407-414, 1993. Kevin Tang, Armand Joulin, Li-Jia Li, and Li Fei-Fei. Co-localization in real-world images. In CVPR, 2014. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In CVPR, 2016. Yi Zhu, Yanzhao Zhou, Qixiang Ye, Qiang Qiu, and Jianbin Jiao. Soft proposal networks for weakly supervised object localization. In ICCV, 2017. <--- Page Split --->
## ABSTRACT Deep Convolutional Neural Networks (CNNs) have been repeatedly shown to perform well on image classification tasks, successfully recognizing a broad array of objects when given sufficient training data. Methods for object localization, however, are still in need of substantial improvement. In this paper, we offer a fundamentally different approach to the localization of recognized objects in images. Our method is predicated on the idea that a deep CNN capable of recognizing an object must implicitly contain knowledge about object location in its connection weights. We provide a simple method to interpret classifier weights in the context of individual classified images. This method involves the calculation of the derivative of network-generated activation patterns, such as the activation of output class label units, with regard to each input pixel, performing a sensitivity analysis that identifies the pixels that, in a local sense, have the greatest influence on internal representations and object recognition. These derivatives can be efficiently computed using a single backward pass through the deep CNN classifier, producing a sensitivity map of the image. We demonstrate that a simple linear mapping can be learned from sensitivity maps to bounding box coordinates, localizing the recognized object. Our experimental results, using real-world data sets for which ground truth localization information is known, reveal competitive accuracy from our fast technique. ## 1 INTRODUCTION Deep Convolutional Neural Networks (CNNs) have been shown to be effective at image classification, accurately performing object recognition even with thousands of object classes when trained on a sufficiently rich data set of labeled images Krizhevsky et al. (2012). One advantage of CNNs is their ability to learn complete functional mappings from image pixels to object categories, without any need for the extraction of hand-engineered image features Sermanet et al. (2013). To facilitate learning through stochastic gradient descent, CNNs are (at least approximately) differentiable with regard to connection weight parameters. Image classification, however, is only one of the problems of computer vision. In the task of image classification, each image has a single label, associated with the class identity of the main object in the image, and the goal is to assign correct labels in a manner that generalizes to novel images. This can be accomplished by training a machine learning classifier, such as a CNN, on a large data set of labeled images Deng et al. (2009). In the object localization task, in comparison, the output for a given image is not a class label but the locations of a specified number of objects in the image, usually encoded as bounding boxes. Evaluation of an object localization system generally requires ground truth bounding boxes to compare to the system's output. The detection task is more difficult than the localization task, as the number of objects is not predetermined Sermanet et al. (2013). In this paper, we focus on object localization, identifying the position in the image of a recognized object. As is common in the localization literature, position information is output in the form of a bounding box.
Previously developed techniques for accomplishing this task generally involve searching the image for the object, considering many candidate bounding boxes with different sizes and locations, sometimes guided by an auxiliary algorithm for heuristically identifying regions of interest Sermanet et al. (2013); Girshick (2015); He et al. (2017). For each candidate location, the sub- image captured by the bounding box is classified for object category, with the final output <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Examples of sensitivity maps, displaying the sensitivity of network internal representations to individual pixels, providing information about the locations of the main objects in the source images. </center> bounding box either being the specific candidate region classified as the target object with the highest level of certainty or some heuristic combination of neighboring or overlapping candidate regions with high classification certainty. These approaches tend to be time consuming, often requiring deep CNN classification calculations of many candidate regions at multiple scales. Efforts to speed these methods mostly focus on reducing the number of regions considered, typically by using some adjunct heuristic region proposal algorithm Girshick (2015); Ren et al. (2015); He et al. (2017). Still, the number of considered regions is often reported to be roughly 2,000 per image. While these approaches can be fairly accurate, their slowness limits their usefulness, particularly for online applications. A noteworthy alternative approach is to directly train a deep CNN to produce outputs that match ground truth localization bounding boxes, using a large image data set that provides both category and localization information for each image. It appears as if some form of this method was used with AlexNet Krizhevsky et al. (2012), though details concerning localization, rather than image classification, are difficult to discern from the published literature. A natural approach would be to cast the learning of bounding boxes as a simple regression problem, with targets being the four coordinates that specify a bounding box (e.g., coordinates of upper- left and lower- right corners, or region center coordinates along with region width and height). It is reasonable to consider sharing early layers of a deep CNN, such as those performing convolution and max pooling, between both an image classification network and an object localization network. Indeed, taking such a multitask learning approach Caruana (1997) can allow for both object category and object location training data to shape connection weights throughout the network. Thus, the deep CNN would have "two heads", one for image classification, using a classification cross- entropy loss function, and one for object localization, reducing the \(\ell_2\) norm between ground truth and predicted bounding box coordinates Krizhevsky et al. (2012). While this approach can produce a network that quickly outputs location information, extensive training on large data sets containing ground truth bounding box information is necessary to produce good generalization. In this paper, we introduce an approach to object localization that is both very fast and robust in the face of limited ground truth bounding box training data. This approach is rooted in the assertion that any deep CNN for image classification must contain, implicit in its connection weights, knowledge about the location of recognized objects Selvaraju et al. (2016). 
The goal, then, is to interpret the flow of activation in an object recognition network when it is performing image classification so as to extract information about object location. Furthermore, the goal is to do this quickly. Thus, this approach aims to leverage location knowledge that is already latent in extensively trained and tuned image classification networks, without requiring a separate learning process for localization. Our method makes use of the notion of a sensitivity analysis Sobol (1993). We propose estimating the sensitivity of the category outputs, or activation patterns at internal network layers, of an image classification CNN to variance in each input pixel, given a specific input image. The result is a numeric value for each pixel in the input image that captures the degree to which small changes in that pixel (locally, around its current value) give rise to large changes in the output category. Together, these numeric values form a sensitivity map of the image, encoding image regions that are important for the current classification. Our proposed measure of sensitivity is the partial derivative of activity with regard to each pixel value, evaluated for the current image. For a deep CNN that formally embodies a differentiable mapping (at least approximately) from image pixels to output categories, this partial derivative can be quickly calculated. While many tools currently exist for <--- Page Split ---> efficiently calculating such derivatives, we provide a simple algorithm that computes these values through a single backward pass through the image classification network, similar to that used to calculate unit error (delta) values in the backpropagation of error learning algorithm Rumelhart et al. (1986). Thus, we can generate a sensitivity map for an image in about the same amount of time as it takes the employed image classification network to produce an output. Some example sensitivity maps are shown in Figure 1. The idea of using sensitivity information, like that in our sensitivity maps, for a variety of tasks, including localization, has previously appeared in the literature Simonyan et al. (2013); Zhou et al. (2016); Selvaraju et al. (2016). Indeed, some of these past efforts have used more sophisticated measures of sensitivity. In this paper, we show that even our very simple sensitivity measure can produce strong localization performance, and it can do so quickly, without any modifications to the classification network, and even for object categories on which the classification network was not trained. The relationship of the results reported here to previously reported work is discussed further in Section 4. As previously mentioned, object localization methods typically encode object location as a bounding box. Since our sensitivity maps encode location differently, in terms of pixels, we propose learning a simple linear mapping from sensitivity maps to bounding box coordinates, allowing our method to output a bounding box for each classified image. We suggest that this linear mapping can be robustly learned from a relatively small training set of images with ground truth bounding boxes, since the sensitivity maps form a much more simple input than the original images. 
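To make the pipeline just described concrete, the following is a minimal TensorFlow 2 sketch (a hedged illustration, not the authors' released code); `classifier`, the image array, and the class index are placeholder assumptions:

```python
import tensorflow as tf

def sensitivity_map(classifier, image, class_index):
    """One forward pass + one backward pass -> per-pixel sensitivity."""
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)  # (1, H, W, 3)
    with tf.GradientTape() as tape:
        tape.watch(x)                       # treat the pixels as differentiable inputs
        logits = classifier(x)              # forward pass (classification)
        score = logits[:, class_index]      # scalar output for the chosen class
    grads = tape.gradient(score, x)         # backward pass: d(score)/d(pixels)
    grads = tf.abs(grads)                   # the sign of the sensitivity is irrelevant
    return tf.reduce_max(grads, axis=-1)[0] # aggregate across the RGB channels

# The flattened map can then be fed to the learned linear bounding-box
# regressor: box = W_hat @ tf.reshape(s_map, [-1]) + w_hat.
```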
The primary contributions of this paper may be summarized as follows:

- We propose a new general approach to performing object localization, interpreting a previously trained image classification network by performing a sensitivity analysis, identifying pixels to which the category output, or a more general internal representation, is particularly sensitive.
- We demonstrate how a linear function from the resulting sensitivity maps to object location bounding box coordinates may be learned from training images containing ground truth location information.
- We provide a preliminary assessment of our approach, measuring object localization performance on the ImageNet and PASCAL VOC data sets using the VGG16 image classification CNN, showing strong accuracy while maintaining short computation times.

## 2 METHOD ### 2.1 CALCULATING PIXEL SENSITIVITIES IN A TRAINED CNN Calculating derivatives of a function of network output with regard to network parameters, such as connection weights, is a standard part of CNN training. It is common for learning in a deep CNN to involve stochastic gradient descent, which involves such derivatives. In that case, the derivatives are of an objective function with regard to connection weight values. In image classification networks, the objective function is designed to have optima where training images are correctly classified. In the case of object localization, a similar objective function could be designed to minimize differences between output bounding box coordinates and provided ground truth bounding box coordinates, for all images in an appropriately labeled training set. For example, given \(N\) training images, stored in the matrix \(\mathbf{X}\), with the ground truth 4-dimensional bounding box vector for image \(x_{i}\) being \(y_{i}\), and \(G(x_{i};\mathbf{w})\) being the CNN output vector for image \(x_{i}\) given connection weights \(\mathbf{w}\), an appropriate loss function would be: \[\ell (\mathbf{X},\mathbf{w}) = \frac{1}{N}\sum_{i = 1}^{N}\| y_{i} - G(x_{i};\mathbf{w})\|_{2}^{2} \quad (1)\] The CNN will produce good estimates of the training image bounding boxes when this loss function is minimized with regard to \(\mathbf{w}\). Network weight parameters that minimize this loss, \(\mathbf{w}^{*}\), may be sought through stochastic gradient descent, incrementally updating \(\mathbf{w}\) according to the gradient of \(\ell (\mathbf{X},\mathbf{w})\) with regard to \(\mathbf{w}\). A primary drawback of this approach is that it requires a large and representative sample of images with ground truth bounding box information. <--- Page Split ---> Consider that, once weights are found, the gradient of \(\ell (\mathbf{X},\mathbf{w}^{*})\) with regard to \(\mathbf{X}\) would provide information about the sensitivity of the bounding box loss function with regard to the pixels in the images. This gradient can be calculated as efficiently as the gradient of the loss with regard to the weights, with both depending on the gradient of \(G(x_{i};\mathbf{w})\) with regard to a subset of its arguments. This means that the gradient of \(G(x_{i};\mathbf{w}^{*})\) with regard to \(x_{i}\) can be efficiently computed, and that gradient would capture the sensitivity of bounding box coordinates with regard to the specific pixels in image \(x_{i}\). Note that this gradient can be calculated for images beyond those in the training set.
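As a concrete illustration of the two gradients just discussed, here is a hedged NumPy sketch that replaces the deep CNN with a linear stand-in \(G(x;\mathbf{w}) = Wx\) (a toy model used purely to make the algebra of Equation (1) explicit): the gradient with regard to the weights drives training, while the gradient with regard to the pixels captures sensitivity, and both come from the same residual term.

```python
import numpy as np

def bbox_loss(W, X, Y):
    """Equation (1) for a linear stand-in predictor G(x; W) = W @ x.
    X: (N, M) flattened images, Y: (N, 4) ground-truth boxes, W: (4, M)."""
    preds = X @ W.T                                # (N, 4) predicted boxes
    return np.mean(np.sum((Y - preds) ** 2, axis=1))

def gradients(W, X, Y):
    """Both gradients share the residual (G(x_i; W) - y_i)."""
    resid = (X @ W.T) - Y                          # (N, 4)
    grad_W = 2.0 / X.shape[0] * resid.T @ X        # d loss / d weights, (4, M)
    grad_X = 2.0 / X.shape[0] * resid @ W          # d loss / d pixels,  (N, M)
    return grad_W, grad_X
```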
Knowing which pixels in a novel image play an important role in determining the bounding box provides useful information for object localization. Using this calculation to address the object localization task makes little sense, however, as \(G(x_{i};\mathbf{w}^{*})\) provides an estimate of object location without a need to consider pixel sensitivity. Rather than training a deep CNN to output bounding boxes, requiring extensive labeled data, we propose calculating the same gradient for a different network – one successfully trained to perform image classification. If we now see \(G(x_{i};\mathbf{w}^{*})\) as the output of such an image classification network, its gradient with regard to \(x_{i}\) would provide information about the sensitivity of the assigned category to individual pixels. Pixels with the largest absolute values of this derivative will, around the input \(x_{i}\), produce the largest changes in the classification decision of the CNN. This can be seen as one measure of how important pixels are for classifying the object in the image. Consider that the object class output is not immediately affected by changes to pixels with a derivative of zero. The calculation of this gradient can be performed as efficiently as a single "backward pass" through the classification network. This is well illustrated by considering the case of a simple layered backpropagation network Rumelhart et al. (1986) in which the "net input" of unit \(i\), \(\eta_{i}\), is a weighted sum of the activations of units in the previous layer, and the activation of unit \(i\) is \(g(\eta_{i})\), where \(g(\cdot)\) is the unit activation function. In this case, we can define a sensitivity value for each unit, \(s_{i}\), as the derivative of the network output with regard to \(\eta_{i}\). Using the chain rule of calculus, it is easy to show that the sensitivity of an output unit is \(g'(\eta_{i})\), and, for units in earlier layers, the gradients are computed as follows: \[s_{i} = g^{\prime}(\eta_{i})\sum_{k}w_{k i}s_{k} \quad (2)\] where \(k\) iterates over all units in the immediately downstream layer from unit \(i\) and \(w_{k i}\) is the connection weight from unit \(i\) to unit \(k\). This calculation may be performed, layer by layer, from outputs to inputs, until \(s_{i}\) values for each pixel input unit are available. This demonstrates how efficiently pixel sensitivity values can be calculated for a given classified image. Of course, there are currently a variety of software packages that include tools for calculating gradients. In the evaluation of our approach in Section 3, we report results using the tools provided by TensorFlow Abadi et al. (2015). ### 2.2 SENSITIVITY OF THE ATTENTION MAP We have proposed using a previously trained image classification network as a source of information about object location, focusing on the gradient of the network output with regard to image pixels. It is interesting to note that it might not be necessary to perform the sensitivity calculation using the full classification network. There is a growing body of research that suggests that, in a well-trained image classification CNN, the features that are extracted at the "attention map" layer (i.e., the output of the last convolutional layer) tend to be generally useful for learning a variety of image analysis tasks Razavian et al. (2014); Donahue et al. (2014).
Inspired by these results, we have investigated the possibility of substituting the gradient of the classifier output with regard to pixels with the gradient of the attention map with regard to pixels. This avoids calculations involving final fully connected layers and any classification softmax layer. Generating image sensitivity maps from the attention map layer is slightly faster than our original proposal, but, more importantly, it is possible that general knowledge about object location might be found in the attention map, and using the attention map as the basis of the sensitivity map might actually generalize beyond the categories on which the image classification CNN was trained. We have not yet done a formal comparison of these two approaches to constructing the sensitivity map, but example results using both approaches are reported in Section 3. Note that, in this case, we compute the gradient, with respect to the input pixels, of the aggregated activation values of the last convolutional layer; this aggregate is referred to as the Gestalt Total (GT) and can be <--- Page Split ---> computed as follows. \[GT = \frac{1}{H\times W\times C}\sum_{i,j,k}A_{n}(i,j,k) \quad (3)\] where \(A_{n}\) is the activation map of the last convolutional layer, and \(H\), \(W\), and \(C\) are the height, width, and number of channels of that layer. ### 2.3 AGGREGATING ACROSS COLOR CHANNELS The sensitivity map calculations that have been described so far provide a scalar sensitivity value for each input to the image classification deep CNN. Color images, however, are regularly provided to such networks using multiple inputs per image pixel, often encoding each pixel over three color channels. Thus, the gradient calculation will actually produce three sensitivity values for each pixel. Since we hope to produce a sensitivity map that focuses in a general way on location information, it seems reasonable to aggregate the three sensitivity values into one. Since the direction of the sensitivity relationship with the class output is irrelevant, a good first step is to take the absolute value of each derivative. Given that dependence on even a single color channel suggests that a pixel is important for identifying the object, an argument can be made that a pixel should be labeled with the maximum of the three absolute derivatives. Alternatively, it could be argued that all color channels should be taken into account when producing the sensitivity map, in which case it might be better to average the three absolute derivatives. We have explored both of these aggregation methods, with results appearing in Section 3. ### 2.4 LEARNING TO PRODUCE BOUNDING BOXES Object localization algorithms typically output the four coordinates of a bounding box to communicate the location of the target object. Such a bounding box is not intrinsic to a sensitivity map, however. Heuristic techniques could be used to identify a rectangular region that captures the majority of the high sensitivity pixels, while avoiding low sensitivity pixels, but we have taken a different approach. We have opted to learn a linear mapping from sensitivity maps to bounding box coordinates, using training images with ground truth location information. It is important to note that learning this mapping is not the same as learning to map from the original images to bounding box coordinates, as has been done in some other object localization systems.
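Before formalizing this mapping, here is a hedged NumPy sketch of how it can be fit by ordinary least squares, i.e., four independent regressions, one per box coordinate, as formalized in Equation (4) below; the array shapes and names are illustrative assumptions:

```python
import numpy as np

def fit_box_regressor(S, B):
    """S: (N, M) flattened sensitivity maps, B: (N, 4) ground-truth boxes.
    Returns W_hat (4, M) and bias w_hat (4,) minimizing the squared error."""
    S1 = np.hstack([S, np.ones((S.shape[0], 1))])   # append a bias column
    sol, *_ = np.linalg.lstsq(S1, B, rcond=None)    # (M + 1, 4) solution
    return sol[:-1].T, sol[-1]

def predict_box(W_hat, w_hat, s):
    """Linear map from one flattened sensitivity map to 4 box coordinates."""
    return W_hat @ s + w_hat
```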
Sensitivity maps contain much less information than the original images, so using the sensitivity maps as inputs both reduces the dimensionality of the input to this mapping and makes for a simpler functional relationship between pixels and bounding box coordinates. We expect that this simplification will allow the mapping to bounding box coordinates to be successfully learned using a far smaller set of training images labeled with ground truth object locations. Indeed, we expect that a simple linear mapping could perform well. Formally, we define the parameters of the linear mapping to the four bounding box coordinates as a \(4\times M\) matrix, \(\hat{W}\), (where \(M\) is the number of pixels in an image) and a 4-dimensional vector of "bias weights", \(\hat{w}\). Given a sensitivity map, \(s\), the output is \((\hat{W} s + \hat{w})\). Given a training set of \(N\) images, the mapping is found by minimizing the following objective function with regard to \(\hat{W}\) and \(\hat{w}\): \[\frac{1}{N}\sum_{i = 1}^{N}\frac{1}{4}\sum_{j = 1}^{4}\left( B_{i,j} - (\hat{W} s_{i} + \hat{w})_{j}\right)^{2} \quad (4)\] where \(s_{i}\) is the sensitivity map for the \(i^{th}\) image, and \(B_{i,j}\) is the \(j^{th}\) coordinate of the bounding box for the \(i^{th}\) image. This learning process amounts to four independent linear regression problems, which can be solved efficiently. Once learned, mapping from sensitivity maps to bounding box coordinates can be done very quickly. With sensitivity map formation requiring only a single backward pass through the image classification network, the whole process - from image, to classification, to sensitivity map, to bounding box - can be performed in little more than twice the time it takes for the network to do object recognition. <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 2: A schematic illustration of the proposed method for sensitivity analysis on the pre-trained VGG16 network. </center> ## 3 RESULTS The code and the sensitivity maps for the ImageNet and PASCAL VOC data sets will be made publicly available. ### 3.1 DATA SETS & PERFORMANCE MEASURES We evaluated our proposed method for object localization on two challenging data sets: the PASCAL VOC 2007 Everingham et al. (2008) data set and the ImageNet 2012 Deng et al. (2009) data set. The PASCAL VOC 2007 data set was selected due to its use in the existing object localization literature. The ImageNet data set is one of the largest publicly available data sets. It also contains many images annotated with ground truth bounding boxes. We followed the literature with regard to the evaluation criterion applied to our method, using CorLoc, which has been used for weakly supervised localization. The CorLoc metric is defined as the percentage of images in a data set that are correctly localized based on the PASCAL criterion, in which a given localization is considered correct if and only if the intersection over union (IOU) area of the predicted and ground truth bounding boxes is greater than one half: \[IOU = \frac{area(\beta_{p}\cap\beta_{gt})}{area(\beta_{p}\cup\beta_{gt})} >0.5 \quad (5)\] where \(\beta_{p}\) is the predicted bounding box and \(\beta_{gt}\) is the ground truth bounding box Tang et al. (2014). ### 3.2 PRE-TRAINED IMAGE CLASSIFICATION DEEP CNN To demonstrate that our approach works with an image classification deep CNN that was in no way specialized for our localization method, we opted to use a publicly available network.
We used the VGG16 network, shown in Figure 2, fully trained Simonyan & Zisserman (2014). This network provides ImageNet object classes as output, allowing us to calculate sensitivity maps based on the network classification when examining ImageNet data. For the PASCAL VOC 2007 data set, we used the previously described method of calculating derivatives based on the attention map of VGG16, since there is no consistent class correspondence between the PASCAL VOC 2007 classes and the classes on which VGG16 was trained. To produce sensitivity maps for the PASCAL VOC 2007 data set, we aggregated across color channels by using the maximum absolute derivative across the three inputs for each pixel. For the ImageNet data set, we averaged the absolute derivatives across the three inputs in order to produce pixel sensitivity values. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 3: Results of the proposed method on the first 10 classes of PASCAL VOC 2007. Each column shows three examples in one class. The green boxes are the ground truth, and the red ones are the predicted bounding boxes. </center> ### 3.3 EXPERIMENTAL SETUP To generate sensitivity maps, we used a pretrained VGG16 network. We used the whole network architecture when experimenting on the ImageNet data set; otherwise, we removed the last 3 fully connected layers and computed the Gestalt Total from the last convolutional layer. In either case, the derivatives were computed using just one backward pass to the original pixels. For learning bounding boxes, we used the aggregated sensitivity maps as input. To learn the mapping from sensitivity maps to bounding box coordinates, we performed linear regression using stochastic gradient descent. Updates were performed in batches of 2,048. The learning rate was initialized to 0.1 and decayed by a factor of 10 every 10,000 iterations. The experiment ran on 1 GPU for 4 days. ### 3.4 PERFORMANCE ON PASCAL VOC 2007 The full PASCAL VOC 2007 data set includes 12,608 training set images and an equal number of testing set images Everingham et al. (2008). Each image contains an object of 1 of 20 different categories. We applied our object localization method to this full data set. However, we were unable to find published localization performance data for other methods applied to the full data set, which we might use for comparison to our approach. Work reported in Tang et al. (2014) provides performance data on 6 of the classes: aeroplane, bicycle, boat, bus, horse, and motorbike. Performance on these same classes has also been reported by others Russell et al. (2006); Chum & Zisserman (2007); Deselaers et al. (2012). Table 1 compares the localization performance of our method with that of other approaches. Note that our method, while being very fast, outperforms the comparison algorithms. Table 1: CorLoc Performance on PASCAL VOC 2007 (6 Classes) <table><tr><td>Method</td><td>Average CorLoc</td></tr><tr><td>Russell et al.</td><td>22%</td></tr><tr><td>Chum &amp; Zisserman</td><td>32%</td></tr><tr><td>Deselaers et al.</td><td>37%</td></tr><tr><td>Tang et al.</td><td>39%</td></tr><tr><td>Sensitivity Maps</td><td>55%</td></tr></table> Examples of the bounding boxes selected by our method, compared to ground truth, for all 20 classes in the PASCAL VOC 2007 data set are shown in Figure 3. Qualitatively, it appears as if our approach is most accurate when there is a single target object with little crowding.
However, if the target object is small and in a crowded region of the image, performance is less reliable. While speed is an important property of our method, as is the reuse of classification training for localization, we compared our approach to data from some slower state-of-the-art deep learning techniques for localization that do not necessarily have these properties. We compared our method to R-CNN Girshick et al. (2014), DPM Felzenszwalb et al. (2013), and Poselets Bourdev & Malik (2009). These were chosen due to the ready availability of published localization results for these alternative methods on the PASCAL VOC 2007 data set, with the measure of performance being Average CorLoc (or mean Average Precision, mAP). The comparison results are given in Table 2. Several of the comparison methods display better localization performance than our approach, but <--- Page Split ---> Table 2: PASCAL VOC 2007 Test Detection Results. The proposed method performed favorably against the state-of-the-art methods. <table><tr><td>Method</td><td>Mean CorLoc</td><td>aero</td><td>bike</td><td>bird</td><td>boat</td><td>bottle</td><td>bus</td><td>car</td><td>cat</td><td>chair</td><td>cow</td><td>table</td><td>dog</td><td>horse</td><td>mbike</td><td>person</td><td>plant</td><td>sheep</td><td>sofa</td><td>train</td><td>tv</td></tr><tr><td>Siva et al. (2012)</td><td>30.2</td><td>45.8</td><td>21.8</td><td>30.9</td><td>20.4</td><td>5.3</td><td>37.6</td><td>40.8</td><td>51.6</td><td>7.0</td><td>29.8</td><td>27.5</td><td>41.3</td><td>41.8</td><td>47.3</td><td>24.1</td><td>12.2</td><td>28.1</td><td>32.8</td><td>48.7</td><td>9.4</td></tr><tr><td>Shi et al. (2013)</td><td>36.2</td><td>67.3</td><td>54.4</td><td>34.3</td><td>21.7</td><td>1.3</td><td>46.6</td><td>60.7</td><td>68.9</td><td>2.5</td><td>32.4</td><td>16.2</td><td>58.9</td><td>51.5</td><td>64.6</td><td>18.2</td><td>3.1</td><td>20.9</td><td>34.7</td><td>63.4</td><td>5.9</td></tr><tr><td>Cinbis et al. (2014)</td><td>38.8</td><td>56.6</td><td>58.3</td><td>34.9</td><td>20.7</td><td>6.8</td><td>54.9</td><td>60.8</td><td>20.8</td><td>6.2</td><td>30.5</td><td>10.2</td><td>29.8</td><td>58.0</td><td>56.0</td><td>46.9</td><td>18.7</td><td>18.5</td><td>56.5</td><td>34.7</td><td>32.4</td><td>58.4</td></tr><tr><td>OM Li et al. (2016)</td><td>31.8</td><td>50.4</td><td>30</td><td>34.6</td><td>18.2</td><td>6.2</td><td>39.3</td><td>42.2</td><td>57.3</td><td>10.8</td><td>29.8</td><td>20.5</td><td>41.8</td><td>43.2</td><td>51.8</td><td>24.3</td><td>20.8</td><td>29.2</td><td>26.6</td><td>45.6</td><td>12.5</td></tr><tr><td>SP-VGGNet Zhu et al. (2017)</td><td>60.6</td><td>85.3</td><td>64.2</td><td>67.0</td><td>42.0</td><td>16.4</td><td>71.0</td><td>64.7</td><td>83.7</td><td>20.7</td><td>63.8</td><td>58.0</td><td>24.1</td><td>84.3</td><td>80.0</td><td>60.0</td><td>29.4</td><td>36.3</td><td>68.1</td><td>77.4</td><td>30.5</td></tr><tr><td>PIRM Cho et al. (2015)</td><td>36.6</td><td>50.3</td><td>42.8</td><td>30.8</td><td>18.5</td><td>4.0</td><td>52.3</td><td>64.5</td><td>74.5</td><td>8.3</td><td>49.3</td><td>12.2</td><td>44.0</td><td>64.1</td><td>57.2</td><td>15.8</td><td>30.9</td><td>34.0</td><td>61.6</td><td>31.5</td><td></td></tr><tr><td>Sensitivity Map</td><td>40.1</td><td>63.8</td><td>55.1</td><td>41.2</td><td>23.3</td><td>34.2</td><td>58.6</td><td>72.7</td><td>36.9</td><td>23.3</td><td>49.7</td><td>11.5</td><td>29.6</td><td>50.1</td><td>65.9</td><td>11.8</td><td>42.2</td><td>39.7</td><td>18.1</td><td>51.0</td><td>41.2</td></tr></table> it is important to keep in mind that the comparison cases had some important advantages, including taking the time to use a sliding window and access to the class labels on which the network was trained. Recall that our sensitivity maps were produced, in this case, by calculating the sensitivity of the network attention map activity to pixel values. Thus, this comparison illustrates trade-offs between speed, performance, and generalization. It is also worth examining the results when bounding boxes are drawn directly from the sensitivity maps using simple heuristics. For this experiment, we applied a Gaussian smoothing filter to the sensitivity maps, selected the top 20% of pixels, and drew the bounding box around those pixels, following the procedure used in prior work Zhou et al. (2016); Selvaraju et al. (2016). In our best runs, this heuristic reduced the mean CorLoc by about 3 points, and the outcome depends strongly on the smoothing parameter \(\sigma\). The results obtained with different \(\sigma\) values are reported in Table 3. Table 3: Average CorLoc Performance on PASCAL VOC 2007 based on heuristic bounding boxes <table><tr><td>σ</td><td>CorLoc</td></tr><tr><td>10</td><td>27.2%</td></tr><tr><td>20</td><td>38.4%</td></tr><tr><td>30</td><td>32.5%</td></tr></table> ### 3.5 PERFORMANCE ON IMAGENET ImageNet is a large image data set that has been systematically organized by object category Deng et al. (2009). We executed a large-scale evaluation of our approach by using all images in ImageNet that are annotated with ground truth localization information. This subset contains 300,916 images involving 478 object classes. We divided this data set into a training set, a test set, and a validation set by sampling without replacement (i.e., the intersection between each pair of the three sets was empty). There were 225,687 images (75%) in the training set, and there were 45,137 images in each of the other two sets. We compared the performance of our approach with two methods discussed in Tang et al. (2014) for which ImageNet results are explicitly reported: Top Objectiveness Box and Co-Localization. Also, we noted that many images in this data set presented the target object in the middle of the image, providing a bias that could be leveraged by learned localization systems. Thus, as a baseline of performance, we calculated the CorLoc performance for a system that blindly offered the same bounding box in the middle of the image, with average size, for every input. The results are shown in Table 4. Once again, note the relatively high accuracy of our efficient method, and that the baseline was reassuringly low.
As might be expected, performance varies with class. Table 4: CorLoc Performance on ImageNet (478 Classes) <table><tr><td>Method</td><td>Average CorLoc</td></tr><tr><td>Constant Center Box Baseline</td><td>12.34%</td></tr><tr><td>Top Objectiveness Box</td><td>37.42%</td></tr><tr><td>Co-Localization</td><td>53.20%</td></tr><tr><td>Sensitivity Maps</td><td>68.76%</td></tr></table> Our algorithm appears to do well on some objects, such as balls and dogs. One might suspect <--- Page Split ---> that failures arise in the linear mapping from sensitivity maps to bounding box coordinates, but a perusal of the sensitivity maps themselves suggests that the pixel sensitivity values vary in utility across different object categories. Still, our method performs fairly well across the classes. Note that the IOU does not fall below 0.62 for any class. This suggests that, while some individual images may be problematic, the overall performance for each class is quite good. This universally strong class-specific performance is also displayed in Table 4. ![](images/8_0.jpg) <center>Figure 4: Results of the proposed method on different object categories from the ImageNet data set. Each row shows 9 examples in one class. The green boxes are the ground truth, and the red ones are the predicted bounding boxes. </center> ### 3.6 AGGREGATION METHODS ANALYSIS The sensitivity analysis yields a sensitivity value for every pixel in each channel of an RGB image, and since we need locations, we must aggregate across channels. We proposed two methods: an average function and a maximum function. The first takes the average across the channels, and the second picks the maximum value across the channels. We did not notice a significant difference between these two methods in localization performance; the only notable difference is that sensitivity maps generated with the average function look somewhat smoother than those generated with the maximum function. On the ImageNet data set, the CorLoc for the average and maximum aggregation functions is 68.7 and 67.9, respectively; on the PASCAL VOC data set, the corresponding results are 39.2 and 40.1. <--- Page Split ---> ### 3.7 SPEED ANALYSIS Because running time depends heavily on hardware parameters and network architecture, we analyze the speed of our object localization approach in terms of forward and backward passes. Our approach needs only two passes: one forward pass for the classification and one backward pass for localization. If each forward or backward pass takes \(n\) operations, the total cost is \(O(n)\): one forward pass, one backward pass, and one inference step through the linear model. ## 4 CONCLUSION We have presented an approach to object localization based on performing a sensitivity analysis of a previously trained image classification deep CNN. Our method is fast enough to be used in online applications, and it demonstrates accuracy that is superior to some methods that are much slower. It is likely that even better accuracy could be had by incorporating sensitivity analysis information into a more sophisticated bounding box estimator. As previously noted, the idea of using sensitivity information has appeared in previously published work. There are ways in which the results reported in this paper are distinct, however. We have moved beyond visualization of network function using sensitivity (or saliency) Simonyan et al.
(2013) to performing direct comparisons between different methods on the localization task. We have shown that using a fast and simple measure of sensitivity can produce comparable performance to that of much slower methods. Our approach produces good generalization without modifying the classification network, as is done in Class Activation Mapping (CAM) Zhou et al. (2016). With our PASCAL VOC 2007 results, we have shown that our approach can successfully be applied to attention maps, even when the image contains objects belonging to a class on which the classification network was not trained, distinguishing it from Grad- CAM Selvaraju et al. (2016). In short, we have demonstrated the power of a simple sensitivity measure for performing localization. Note that our approach may be used with image classifiers other than CNNs. The proposed sensitivity analysis can be conducted on any differentiable classifier, though performance will likely depend on classifier specifics. Indeed, at a substantial time cost, even a black box classifier could be approximately analyzed by making small changes to pixels and observing the effects on activation patterns. The proposed approach is quite general. Indeed, we are currently working on applying sensitivity analysis to deep networks trained on other tasks, with the goal of interpreting network performance on the current input in a useful way. Thus, we see a potentially large range of uses for sensitivity analysis in neural network applications.
reject
Reject
4.333333
ICLR_2019_paper_0132
iclr
2,019
# DIRICHLET VARIATIONAL AUTOENCODER Anonymous authors Paper under double-blind review ## ABSTRACT This paper proposes the Dirichlet Variational Autoencoder (DirVAE), which uses a Dirichlet prior for a continuous latent variable that exhibits the characteristics of categorical probabilities. To infer the parameters of DirVAE, we utilize the stochastic gradient method by approximating the Gamma distribution, which is a component of the Dirichlet distribution, with the inverse Gamma CDF approximation. Additionally, we revisit the component collapsing issue by investigating two problem sources, decoder weight collapsing and latent value collapsing, and we show that DirVAE exhibits no component collapsing, while Gaussian VAE exhibits decoder weight collapsing and Stick-Breaking VAE shows latent value collapsing. The experimental results show that 1) DirVAE models the latent representation with the best log-likelihood compared to the baselines; and 2) DirVAE produces more interpretable latent values, free of the collapsing issues that the baseline models suffer from. Also, we show that the latent representation learned by DirVAE achieves the best classification accuracy in the semi-supervised and the supervised classification tasks on MNIST, OMNIGLOT, and SVHN compared to the baseline VAEs. Finally, we demonstrate that DirVAE-augmented topic models show better performance in most cases. ## 1 INTRODUCTION The Variational Autoencoder (VAE) (Kingma & Welling, 2014c) brought success to deep generative models (DGMs) with a Gaussian distribution as a prior distribution (Jiang et al., 2017; Miao et al., 2016; 2017; Srivastava & Sutton, 2017). The VAE assumes the prior distribution to be \(\mathcal{N}(\mathbf{0}, \mathbf{I})\) and learns the approximated posterior parameters \(\hat{\mu}\) and \(\hat{\Sigma}\). Also, the Stick-Breaking VAE (SBVAE) (Nalisnick & Smyth, 2017) is a nonparametric version of the VAE, which models the latent dimension as infinite using a stick-breaking process (Ishwaran & James, 2001). While these VAEs assume the prior distribution of the latent variables to be continuous, recent studies introduce approximations of discrete priors with continuous random variables (Jang et al., 2017; Maddison et al., 2017; Rolfe, 2017). The key to these approximations is enabling backpropagation with the reparametrization technique, or the stochastic gradient variational Bayes (SGVB) estimator, while the modeled prior follows a discrete distribution. Applications of these approximations to discrete priors include modeling the prior of a multinomial distribution, which is frequently used in probabilistic graphical models (PGMs). Inherently, the multinomial distribution takes the Dirichlet distribution as its conjugate prior, and the demand for such a prior has motivated works like Jang et al. (2017); Maddison et al. (2017); Rolfe (2017) that support the multinomial distribution posterior without explicit modeling of a Dirichlet prior. Surveying work with explicit modeling of the Dirichlet prior, we found that a frequent approach is to utilize a softmax Laplace approximation (Srivastava & Sutton, 2017). We argue that this approach has a limitation from the multi-modality perspective. The Dirichlet distribution can exhibit a multi-modal distribution under some parameter settings, see Figure 1, which is infeasible to generate with a Gaussian distribution passed through a softmax function.
Therefore, the previous continuous-domain VAEs cannot be a perfect substitute for a direct approximation of the Dirichlet distribution. Utilizing a Dirichlet distribution as a conjugate prior to a multinomial distribution has an advantage compared to applying a softmax function to a Gaussian distribution. For instance, Figure 1 illustrates the potential difficulties in utilizing the softmax function with the Gaussian distribution. <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Illustrated probability simplex with Gaussian-Softmax, GEM, and Dirichlet distributions. Unlike the Gaussian-Softmax or the GEM distribution, the Dirichlet distribution is able to capture the multi-modality that manifests as multiple peaks at the vertices of the probability simplex. </center> Given the three-dimensional probability simplex, the Gaussian-Softmax distribution cannot generate the illustrated case of the Dirichlet distribution with a high probability measure at the vertices of the simplex, i.e., the multi-modality whose necessity was emphasized in Hoffman & Johnson (2016). Additionally, the Griffiths-Engen-McCloskey (GEM) distribution (Pitman, 2002), which is the prior distribution of the SBVAE, has difficulty modeling multi-modality because its sampling procedure is affected by the rich-get-richer phenomenon, so a few components tend to dominate the weight of the samples. The Dirichlet distribution does not exhibit this phenomenon: it can distribute the weights fairly across components, and it is more likely to capture multi-modality through control of the prior hyper-parameter (Blei et al., 2003). We therefore conjecture that enhanced modeling of the Dirichlet prior is still needed 1) because there are cases where the Gaussian-Softmax approach, or the softmax Laplace approximation, cannot imitate the Dirichlet distribution; and 2) because the nonparametric approaches can be influenced by biases from which the Dirichlet distribution does not suffer. Given these motivations for modeling the Dirichlet distribution with the SGVB estimator, this paper introduces the Dirichlet Variational Autoencoder (DirVAE), which shows the same characteristics as the Dirichlet distribution. The DirVAE is able to model the multi-modal distribution, which was not possible with the Gaussian-Softmax and the GEM approaches. These characteristics allow the DirVAE to serve as the prior of a discrete latent distribution, as the original Dirichlet distribution does. Introducing the DirVAE requires the configuration of the SGVB estimator on the Dirichlet distribution. Specifically, the Dirichlet distribution is a composition of Gamma random variables, so we approximate the inverse Gamma cumulative distribution function (CDF) with an asymptotic approximation; this approximation of the inverse Gamma CDF becomes the building block of the Dirichlet approximation. We compared this approach to the previously suggested approximations, i.e., approaches with the Weibull distribution and with the softmax Gaussian distribution, and our approximation shows the best log-likelihood among the compared approximations. Moreover, we report on an investigation of component collapsing conducted in the course of this research. It has been claimed that the component collapsing issue is resolved by the SBVAE because of the meaningful decoder weights from the latent layer to the next layer.
However, we found that SBVAE has a latent value collapsing issue, producing many near-zero values on the latent dimensions and leading to incomplete utilization of the latent space. Hence, we argue that Gaussian VAE (GVAE) suffers from decoder weight collapsing, which is what component collapsing was previously narrowly defined to mean, while SBVAE suffers from latent value collapsing. Finally, we suggest that the definition of component collapsing should be expanded to cover both decoder weight and latent value collapsing. The proposed DirVAE shows neither near-zero decoder weights nor near-zero latent values, so the reconstruction uses the full latent dimension information in most cases. We investigated this issue because our performance gain comes from resolving the expanded version of component collapsing. Due to the component collapsing issues, the existing VAEs have less meaningful latent values or cannot effectively use their latent representations. Meanwhile, DirVAE avoids component collapsing thanks to its multi-modal prior, which likely leads to its superior qualitative and quantitative performance. We experimentally show that DirVAE has a more meaningful and disentangled latent representation, via image generation and latent value visualizations. <--- Page Split ---> Technically, the new approximation provides the closed-form loss function derived from the evidence lower bound (ELBO) of the DirVAE. The optimization of the ELBO enables representation learning with the DirVAE, and we test the learned representation from the DirVAE in two ways. Firstly, we test the representation learning quality by performing the supervised and the semi-supervised classification tasks on MNIST, OMNIGLOT, and SVHN. These classification tasks conclude that DirVAE has the best classification performance with its learned representation. Secondly, we test the applicability of DirVAE to existing models, such as topic models with DirVAE priors on 20Newsgroup and RCV1-v2. This experiment shows that augmenting the existing neural variational topic models with DirVAE improves the perplexity and the topic coherence, and most of the best performers were DirVAE-augmented. ## 2 PRELIMINARIES ### 2.1 VARIATIONAL AUTOENCODERS A VAE is composed of two parts: a generative sub-model and an inference sub-model. In the generative part, a probabilistic decoder reproduces \(\hat{\mathbf{x}}\) close to an observation \(\mathbf{x}\) from a latent variable \(\mathbf{z} \sim p(\mathbf{z})\), i.e. \(\mathbf{x} \sim p_{\theta}(\mathbf{x}|\mathbf{z}) = p_{\theta}(\mathbf{x}|\hat{\mathbf{x}})\) where \(\hat{\mathbf{x}} = \mathrm{MLP}(\mathbf{z})\) is obtained from a latent variable \(\mathbf{z}\) by a multilayer perceptron (MLP). In the inference part, a probabilistic encoder outputs a latent variable \(\mathbf{z} \sim q_{\phi}(\mathbf{z}|\mathbf{x}) = q_{\phi}(\mathbf{z}|\eta)\) where \(\eta = \mathrm{MLP}(\mathbf{x})\) is computed from the observation \(\mathbf{x}\) by a MLP. Model parameters, \(\theta\) and \(\phi\), are jointly learned by optimizing the ELBO below with the stochastic gradient method, backpropagating as in ordinary neural networks by using the SGVB estimators at the random nodes.
\[\log p(\mathbf{x}) \geq \mathcal{L}(\mathbf{x}) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z})) \quad (1)\]

In the GVAE (Kingma & Welling, 2014c), the prior distribution \(p(\mathbf{z})\) is assumed to be a standard Gaussian distribution. In the SBVAE (Nalisnick & Smyth, 2017), the prior distribution becomes a GEM distribution, which produces samples with a Beta distribution and a stick-breaking algorithm.

### 2.2 DIRICHLET DISTRIBUTION AS A COMPOSITION OF GAMMA RANDOM VARIABLES

The Dirichlet distribution is a composition of multiple Gamma random variables. Note that the probability density functions (PDFs) of the Dirichlet and Gamma distributions are as follows:

\[\mathrm{Dirichlet}(\mathbf{x}; \boldsymbol{\alpha}) = \frac{\Gamma(\sum \alpha_{k})}{\prod \Gamma(\alpha_{k})} \prod x_{k}^{\alpha_{k} - 1}, \quad \mathrm{Gamma}(x; \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x} \quad (2)\]

where \(\alpha_{k}, \alpha, \beta > 0\). In detail, if there are \(K\) independent random variables following the Gamma distributions, \(X_{k} \sim \mathrm{Gamma}(\alpha_{k}, \beta)\), or \(\mathbf{X} \sim \mathrm{MultiGamma}(\boldsymbol{\alpha}, \beta \cdot \mathbf{1}_{K})\), where \(\alpha_{k}, \beta > 0\) for \(k = 1, \dots, K\), then we have \(\mathbf{Y} \sim \mathrm{Dirichlet}(\boldsymbol{\alpha})\) where \(Y_{k} = X_{k} / \sum X_{i}\). It should be noted that the rate parameter \(\beta\) must be the same for every Gamma distribution in the composition. Then, the KL divergence can be derived as the following:

\[\mathrm{KL}(Q||P) = \sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\psi (\hat{\alpha}_{k}) \quad (3)\]

for \(P = \mathrm{MultiGamma}(\boldsymbol{\alpha}, \beta \cdot \mathbf{1}_{K})\) and \(Q = \mathrm{MultiGamma}(\hat{\boldsymbol{\alpha}}, \beta \cdot \mathbf{1}_{K})\), where \(\psi\) is the digamma function. The detailed derivation is provided in Appendix B.

### 2.3 SGVB FOR GAMMA RANDOM VARIABLES AND APPROXIMATION OF THE DIRICHLET DISTRIBUTION

This section discusses several ways of approximating the Dirichlet random variable, i.e. SGVB estimators for the Gamma random variables that compose a Dirichlet distribution. Utilizing SGVB requires a differentiable non-centered parametrization (DNCP) of the distribution (Kingma & Welling, 2014d). The main SGVB estimator for Gamma random variables, used in the DirVAE, relies on the inverse Gamma CDF approximation explained in the next section. Prior works include two approaches, using the Weibull distribution and the softmax Gaussian distribution, both explained below.

Approximation with the Weibull distribution. Because the PDFs of the Weibull and Gamma distributions are similar, some prior works used the Weibull distribution as a posterior distribution for the prior Gamma distribution (Zhang et al., 2018):

\[\mathrm{Weibull}(x; k, \lambda) = \frac{k}{\lambda}\Big(\frac{x}{\lambda}\Big)^{k - 1} e^{-(x / \lambda)^{k}} \mathrm{~where~} k, \lambda > 0. \quad (4)\]

Zhang et al. (2018) pointed out two useful characteristics of approximating the Gamma distribution with the Weibull distribution. One is that the KL divergence is expressed in a closed form, and the other is the simple reparametrization trick enabled by the closed-form inverse CDF of the Weibull distribution.
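To make the second property concrete, the following minimal NumPy sketch (our illustration, not code from Zhang et al. (2018); the function name is ours) draws a reparameterized Weibull sample through the closed-form inverse CDF:

```python
import numpy as np

def weibull_rsample(k, lam, rng=None):
    """Reparameterized Weibull(k, lam) draw via the closed-form inverse CDF.
    Since F(x; k, lam) = 1 - exp(-(x / lam)^k), solving u = F(x) gives
    x = lam * (-log(1 - u))^(1 / k), so all randomness sits in u ~ U(0, 1)
    and x is a deterministic, differentiable function of (k, lam)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=np.shape(k))
    return lam * (-np.log1p(-u)) ** (1.0 / k)
```

No analogous exact closed form exists for the Gamma inverse CDF, which is why the approximations discussed in this paper are needed.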
However, we note that the Weibull density contains the factor \(e^{-(x / \lambda)^{k}}\), whereas the Gamma density carries no such additional power of \(k\) in its exponential factor. Since \(k\) is placed inside the exponential component, small changes in \(k\) can cause significant differences, which limits the optimization.

Approximation with the softmax Gaussian distribution. As in MacKay (1998); Srivastava & Sutton (2017), a Dirichlet distribution can be approximated by a softmax Gaussian distribution by using the softmax Laplace approximation. The relation between the Dirichlet parameter \(\boldsymbol{\alpha}\) and the Gaussian parameters \(\mu\), \(\Sigma\) is the following:

\[\mu_{k} = \log \alpha_{k} - \frac{1}{K}\sum_{i}\log \alpha_{i}, \quad \Sigma_{k} = \frac{1}{\alpha_{k}}\Big(1 - \frac{2}{K}\Big) + \frac{1}{K^{2}}\sum_{i}\frac{1}{\alpha_{i}}, \quad (5)\]

where \(\Sigma\) is assumed to be a diagonal matrix, and the reparametrization trick of the usual GVAE serves as the SGVB estimator.

## 3 MODEL DESCRIPTION

Along with the inverse Gamma CDF approximation, we describe the two sub-models in this section: the generative sub-model and the inference sub-model. Figure 2 shows the graphical notations of various VAEs and the neural network view of our model.

![](images/3_0.jpg)

<center>Figure 2: Sub-figures 2a, 2b, and 2c are the graphical notations of the VAEs as latent variable models. The solid lines indicate the generative sub-models, where the waved lines denote a prior distribution of the latent variables. The dotted lines indicate the inference sub-models. Sub-figure 2d denotes the neural network structure corresponding to Sub-figure 2c. Red nodes denote the random nodes, which allow the backpropagation flows to the input. </center>

Generative sub-model. The key difference between the generative models of the DirVAE and the GVAE is the prior distribution assumed on the latent variable \(\mathbf{z}\). Instead of using the standard Gaussian distribution, we use the Dirichlet distribution, which is a conjugate prior distribution of the multinomial distribution.

\[\mathbf{z} \sim p(\mathbf{z}) = \mathrm{Dirichlet}(\boldsymbol{\alpha}), \quad \mathbf{x} \sim p_{\theta}(\mathbf{x}|\mathbf{z}) \quad (6)\]

Inference sub-model. The probabilistic encoder with an approximating posterior distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\) is designed to be \(\mathrm{Dirichlet}(\hat{\boldsymbol{\alpha}})\). The approximated posterior parameter \(\hat{\boldsymbol{\alpha}}\) is derived by an MLP from the observation \(\mathbf{x}\) with a softplus output function, so the outputs are the positive values required by the Dirichlet distribution. Here, we do not sample \(\mathbf{z}\) from the Dirichlet distribution directly. Instead, we use the Gamma composition method described in Section 2.2: we first draw \(\mathbf{v} \sim \mathrm{MultiGamma}(\hat{\boldsymbol{\alpha}}, \beta \cdot \mathbf{1}_{K})\), and afterwards we normalize \(\mathbf{v}\) by its summation \(\sum v_{i}\).

The objective function to optimize the model parameters \(\theta\) and \(\phi\) is composed of Equations (1) and (3); Equation (7) is the resulting loss function after the composition. The inverse Gamma CDF method explained in the next paragraph enables the backpropagation flow to the input with the stochastic gradient method.
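As an illustration, the following PyTorch sketch (our rendering; the authors' implementation is in TensorFlow, and the function names are ours) implements the Gamma-composition sampling and the closed-form KL term of Equation (3), using the inverse Gamma CDF approximation \(F^{-1}(u; \alpha, \beta) \approx \beta^{-1}(u\alpha\Gamma(\alpha))^{1/\alpha}\) detailed in the next paragraph:

```python
import torch

def dirichlet_rsample(alpha_hat, beta=1.0, eps=1e-10):
    """Differentiable Dirichlet sample via the Gamma composition.
    Each Gamma draw uses the approximate inverse CDF
    F^{-1}(u; a, b) ~ b^{-1} * (u * a * Gamma(a))^{1/a} (Knowles, 2015),
    so the randomness sits in u ~ Uniform(0, 1) and gradients flow
    through alpha_hat."""
    u = torch.rand_like(alpha_hat).clamp(eps, 1.0 - eps)
    # evaluate (u * a * Gamma(a))^{1/a} / beta in log space for stability
    log_v = (torch.log(u) + torch.log(alpha_hat)
             + torch.lgamma(alpha_hat)) / alpha_hat
    v = torch.exp(log_v) / beta
    return v / v.sum(dim=-1, keepdim=True)  # normalize onto the simplex

def multigamma_kl(alpha_hat, alpha_prior):
    """Closed-form KL(Q||P) of Equation (3) for Q = MultiGamma(alpha_hat)
    and P = MultiGamma(alpha_prior), sharing the same rate beta."""
    return (torch.lgamma(alpha_prior) - torch.lgamma(alpha_hat)
            + (alpha_hat - alpha_prior) * torch.digamma(alpha_hat)).sum(-1)
```

In a full model, `alpha_hat` would be the softplus output of the encoder MLP, and `alpha_prior` the fixed prior vector, e.g. \(0.98 \cdot \mathbf{1}_{50}\).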
Here, for a fair comparison between the inverse Gamma CDF approximation method and the softmax Gaussian method in expressing the Dirichlet distribution, we set \(\alpha_{k} = 1 - 1/K\), which corresponds to \(\mu_{k} = 0\) and \(\Sigma_{k} = 1\) by Equation (5), and \(\beta = 1\).

\[\mathcal{L}(\mathbf{x}) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \Big(\sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\psi (\hat{\alpha}_{k})\Big) \quad (7)\]

Approximation with the inverse Gamma CDF. A previous work, Knowles (2015), suggested that if \(X \sim \mathrm{Gamma}(\alpha, \beta)\) and \(F(x; \alpha, \beta)\) is the CDF of the random variable \(X\), the inverse CDF can be approximated as \(F^{-1}(u; \alpha, \beta) \approx \beta^{-1}(u\alpha \Gamma (\alpha))^{1/\alpha}\). Hence, we can introduce an auxiliary variable \(u \sim \mathrm{Uniform}(0, 1)\) that takes over all the randomness of \(X\), and we treat the Gamma sample \(X\) as a deterministic function of \(\alpha\) and \(\beta\).

It should be noted that combining the decomposition of a Dirichlet distribution with the inverse Gamma CDF approximation of each Gamma component has been practiced before. However, such practices have not been examined in terms of their learning properties and applicability. The following section shows a new aspect of component collapsing that can be remedied by this combination for a Dirichlet prior in a VAE, and illustrates the performance gains in a particular set of applications, i.e. topic modeling.

## 4 EXPERIMENTAL RESULTS

This section reports the experimental results with the following experiment settings: 1) a pure VAE model; 2) a semi-supervised classification task with VAEs; 3) a supervised classification task with VAEs; and 4) topic models with DirVAE augmentations.

### 4.1 EXPERIMENTS FOR REPRESENTATION LEARNING OF VAES

Baseline models. We select the following models as baseline alternatives to the DirVAE: 1) the standard GVAE; 2) the GVAE with softmax (GVAE-Softmax), approximating the Dirichlet distribution with the softmax Gaussian distribution; 3) the SBVAE with the Kumaraswamy distribution (SBVAE-Kuma) and with the Gamma composition (SBVAE-Gamma), described in Nalisnick & Smyth (2017); and 4) the DirVAE with the Weibull distribution (DirVAE-Weibull), approximating the Gamma distribution with the Weibull distribution as described in Zhang et al. (2018). We use the following benchmark datasets for the experiments: 1) MNIST; 2) MNIST with rotations (MNIST+rot); 3) OMNIGLOT; and 4) SVHN with PCA transformation. We provide the details of the datasets in Appendix D.1.

Experimental setting. As a pure VAE model, we compare the DirVAE with the following models: GVAE, GVAE-Softmax, SBVAE-Kuma, SBVAE-Gamma, and DirVAE-Weibull. We use 50-dimensional and 100-dimensional latent variables for MNIST and OMNIGLOT, respectively. We provide the details of the network structure and optimization in Appendix D.2. We set \(\boldsymbol{\alpha} = 0.98 \cdot \mathbf{1}_{50}\) for MNIST and \(\boldsymbol{\alpha} = 0.99 \cdot \mathbf{1}_{100}\) for OMNIGLOT for the fair comparison to the GVAEs by using Equation (5). All experiments use the Adam optimizer (Kingma & Ba, 2014a) for the parameter learning. Finally, we note that the hyper-parameter can also be updated as described in Appendix C; the experimental result with this update is reported separately in Appendix D.2.
Quantitative result. For the quantitative comparison among the VAEs, we calculated the Monte-Carlo estimate of the marginal negative log-likelihood, the negative ELBO, and the reconstruction loss. The marginal likelihood is approximated as \(p(\mathbf{x}) \approx \frac{1}{N}\sum_{i} \frac{p(\mathbf{x}|\mathbf{z}_{i})p(\mathbf{z}_{i})}{q(\mathbf{z}_{i})}\) for a single instance \(\mathbf{x}\), where the \(\mathbf{z}_{i} \sim q(\mathbf{z})\) are \(N\) samples from the approximate posterior \(q(\mathbf{z})\) corresponding to the prior \(p(\mathbf{z})\); the derivation is given in Appendix A.

Table 1 shows the overall performance of the alternative VAEs. The DirVAE outperforms all baselines on both datasets from the log-likelihood perspective. The value of the DirVAE comes from the better encoding of the latent variables, which can be used for classification tasks, as we examine in the next experiments. While the DirVAE-Weibull also follows the prior modeling with the Dirichlet distribution, its Weibull-based approximation is improved upon by adopting the proposed approach with the inverse Gamma CDF.

Table 1: Negative log-likelihood, negative ELBO, and reconstruction loss of the VAEs for the MNIST and OMNIGLOT datasets. Lower values are better for all measures.

<table><tr><td rowspan="2"></td><td colspan="3">MNIST (K = 50)</td><td colspan="3">OMNIGLOT (K = 100)</td></tr><tr><td>Neg. LL</td><td>Neg. ELBO</td><td>Reconst. Loss</td><td>Neg. LL</td><td>Neg. ELBO</td><td>Reconst. Loss</td></tr><tr><td>GVAE (Nalisnick &amp; Smyth, 2017)</td><td>96.80</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE-Kuma (Nalisnick &amp; Smyth, 2017)</td><td>98.01</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE-Gamma (Nalisnick &amp; Smyth, 2017)</td><td>100.74</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GVAE</td><td>94.54±0.79</td><td>98.58±0.04</td><td>74.31±0.13</td><td>119.29±0.44</td><td>126.42±0.24</td><td>98.90±0.36</td></tr><tr><td>GVAE-Softmax</td><td>98.18±0.61</td><td>103.49±0.16</td><td>79.36±0.82</td><td>130.01±1.16</td><td>139.73±0.81</td><td>123.34±1.45</td></tr><tr><td>SBVAE-Kuma</td><td>99.27±0.48</td><td>102.60±1.81</td><td>83.90±0.82</td><td>130.73±2.17</td><td>132.86±3.03</td><td>119.25±1.00</td></tr><tr><td>SBVAE-Gamma</td><td>102.14±0.60</td><td>135.30±0.24</td><td>113.89±0.25</td><td>128.82±1.82</td><td>149.30±0.82</td><td>136.36±1.53</td></tr><tr><td>DirVAE-Weibull</td><td>114.59±1.15</td><td>183.33±2.96</td><td>150.92±3.70</td><td>140.89±3.21</td><td>198.01±2.46</td><td>145.52±3.15</td></tr><tr><td>DirVAE</td><td>87.64±0.64</td><td>100.47±0.35</td><td>81.50±0.27</td><td>108.24±0.42</td><td>120.06±0.35</td><td>99.78±0.36</td></tr></table>

Qualitative result. As a qualitative result, we report the latent dimension-wise reconstructions, which are decoder outputs for each one-hot vector in the latent dimension. Figure 3a shows 50 reconstructed images corresponding to each latent dimension from GVAE-Softmax, SBVAE, and DirVAE. We manually ordered the digit-like figures in ascending order for GVAE-Softmax and DirVAE. We can see that the GVAE-Softmax and the SBVAE have components without significant semantic information, which we discuss further in Section 4.2, while the DirVAE has interpretable latent dimensions in most of the latent dimensions. Figure 3b also supports the quality of the latent values from the DirVAE by visualizing the learned latent values through t-SNE (Maaten & Hinton, 2008).
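As a side note, the estimator above is a standard importance-sampling estimate, and it is numerically safest in log space. A minimal sketch (our illustration; the per-sample log densities are assumed to be precomputed):

```python
import numpy as np
from scipy.special import logsumexp

def log_marginal_likelihood(log_px_given_z, log_pz, log_qz):
    """log p(x) ~ logsumexp_i[log p(x|z_i) + log p(z_i) - log q(z_i)] - log N
    for z_i ~ q(z); each argument is a length-N array of log densities."""
    log_w = log_px_given_z + log_pz - log_qz   # log importance weights
    return logsumexp(log_w) - np.log(len(log_w))
```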
![](images/5_0.jpg)

<center>Figure 3: Latent dimension visualization with reconstruction images and t-SNE latent embeddings. </center>

### 4.2 DISCUSSION ON COMPONENT COLLAPSING

Decoder weight collapsing, a.k.a. component collapsing. One main issue of the GVAE is component collapsing: a significant number of decoder weights from the latent neurons to the next decoder neurons become near-zero. If these weights become near-zero, the values of the latent dimensions lose their influence on the decoder, which means inefficient learning for a given neural network structure. The same issue occurs when we use the GVAE-Softmax. We rename this component collapsing phenomenon decoder weight collapsing, to make the source of the collapse explicit.

Latent value collapsing. The SBVAE work claims to have solved the decoder weight collapsing by learning meaningful weights, as shown in Figure 4a. However, we notice that the SBVAE produces output values, not weight parameters, that are near-zero in many latent dimensions when averaged over many samples from the test dataset. Figure 4b shows the properties of the DirVAE and the SBVAE from the perspective of latent value collapsing: the SBVAE shows many near-zero average means and near-zero average variances, while the DirVAE does not. The average Fisher kurtosis and average skewness of the DirVAE over the dataset are 5.76 and 2.03, respectively, while the SBVAE has 20.85 and 4.35, which indicates that the latent output distribution of the SBVAE is far more skewed than that of the DirVAE. We found that these near-zero latent values prevent the decoder weights from learning, and we introduce this as another type of collapsing problem, latent value collapsing, which is distinct from decoder weight collapsing. These results mean that the SBVAE distributes its non-near-zero latent values sparsely over a few dimensions, while the DirVAE samples relatively dense latent values. In other words, the DirVAE utilizes the full spectrum of latent dimensions compared to the SBVAE, and the DirVAE has a better learning capability in the decoder network. Figure 3a supports the argument on latent value collapsing by activating each single latent dimension with a one-hot vector through the decoder: the unchanging latent dimension-wise images of the SBVAE show that there are no generation differences between two differently activated one-hot latent values.

![](images/6_0.jpg)

<center>Figure 4: Sub-figure 4a shows that GVAE and GVAE-Softmax have the component collapsing issue, while SBVAE and DirVAE do not. Sub-figure 4b shows that SBVAE has many near-zero output values in the latent dimensions. </center>

### 4.3 APPLICATION 1. EXPERIMENTS OF (SEMI-)SUPERVISED CLASSIFICATION WITH VAES

Semi-supervised classification task with VAEs. A previous work demonstrated that the SBVAE outperforms the GVAE in the semi-supervised classification task (Nalisnick & Smyth, 2017). The overall model structure for this semi-supervised classification task uses a VAE with separate random variables \(\mathbf{z}\) and \(\mathbf{y}\), introduced as the \(M2\) model by Kingma et al. (2014b). The detailed settings of the semi-supervised classification tasks are enumerated in Appendix D.3. Fundamentally, we applied the same experimental settings to the GVAE, the SBVAE, and the DirVAE in this experiment, as specified by the authors in Nalisnick & Smyth (2017).
Table 2 enumerates the performance of the GVAE, the SBVAE, and the DirVAE, reporting the classification error rates using \(10\%\), \(5\%\), and \(1\%\) of labeled data for each dataset. In general, the experiment shows that the DirVAE has the best performance of the three alternative VAEs. Also, it should be noted that the improvement of the DirVAE is largest on the most complex task, the SVHN dataset.

Table 2: The error rate of the semi-supervised classification task using VAEs.

<table><tr><td rowspan="2"></td><td colspan="3">MNIST (K = 50)</td><td colspan="3">MNIST+rot (K = 50)</td><td colspan="3">SVHN (K = 50)</td></tr><tr><td>10%</td><td>5%</td><td>1%</td><td>10%</td><td>5%</td><td>1%</td><td>10%</td><td>5%</td><td>1%</td></tr><tr><td>GVAE (Nalisnick &amp; Smyth, 2017)</td><td>3.95±0.15</td><td>4.74±0.43</td><td>11.55±2.28</td><td>21.78±0.71</td><td>27.72±0.69</td><td>38.13±0.95</td><td>36.08±1.49</td><td>48.75±1.47</td><td>69.58±1.64</td></tr><tr><td>SBVAE (Nalisnick &amp; Smyth, 2017)</td><td>4.86±0.14</td><td>5.29±0.39</td><td>7.34±0.47</td><td>11.78±0.39</td><td>14.27±0.58</td><td>27.67±1.39</td><td>32.08±4.00</td><td>37.07±5.22</td><td>61.37±3.60</td></tr><tr><td>DirVAE</td><td>4.60±0.07</td><td>5.05±0.18</td><td>7.00±0.17</td><td>11.18±0.32</td><td>13.53±0.46</td><td>26.20±0.66</td><td>24.81±1.13</td><td>28.45±1.14</td><td>55.99±3.30</td></tr></table>

Supervised classification task with latent values of VAEs. We also tested the performance of the supervised classification task with the latent representations learned by the VAEs. We applied the vanilla version of the VAEs to the datasets, and we classified the latent representations of the instances with \(k\)-Nearest Neighbors (\(k\)NN), one of the simplest classification algorithms. Hence, this experiment can better isolate the contribution of representation learning to the classification task. Further experimental details can be found in Appendix D.4.

Table 3 enumerates the performances of the experimented VAEs on the MNIST and OMNIGLOT datasets. On both datasets, the DirVAE shows the best performance in reducing the classification error, which we conjecture comes from the better representation learning. It should be noted that, to our knowledge, this is the first reported comparison of latent representation learning of VAEs with \(k\)NN in supervised classification on the OMNIGLOT dataset. We identified that classification on OMNIGLOT is difficult, given that the \(k\)NN error rates with the raw original data are as high as \(69.94\%\), \(69.41\%\), and \(70.10\%\). This high error rate mainly originates from the number of classification categories, which is 50 in our test setting of OMNIGLOT, compared to 10 in MNIST.

Table 3: The error rate of \(k\)NN with the latent representations of VAEs.
<table><tr><td rowspan="2"></td><td colspan="3">MNIST (K = 50)</td><td colspan="3">OMNIGLOT (K = 100)</td></tr><tr><td>k = 3</td><td>k = 5</td><td>k = 10</td><td>k = 3</td><td>k = 5</td><td>k = 10</td></tr><tr><td>GVAE (Nalisnick et al., 2016)</td><td>28.40</td><td>20.96</td><td>15.33</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE (Nalisnick et al., 2016)</td><td>9.34</td><td>8.65</td><td>8.90</td><td>-</td><td>-</td><td>-</td></tr><tr><td>DLGMM (Nalisnick et al., 2016)</td><td>9.14</td><td>8.38</td><td>8.42</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GVAE</td><td>27.16±0.48</td><td>20.20±0.93</td><td>14.89±0.40</td><td>92.34±0.25</td><td>91.21±0.18</td><td>88.79±0.35</td></tr><tr><td>GVAE-Softmax</td><td>25.08±2.64</td><td>21.79±2.17</td><td>18.75±2.06</td><td>94.76±0.20</td><td>94.22±0.37</td><td>92.98±0.42</td></tr><tr><td>SBVAE</td><td>10.01±0.52</td><td>9.58±0.47</td><td>9.39±0.54</td><td>86.90±0.82</td><td>85.10±0.89</td><td>82.96±0.64</td></tr><tr><td>DirVAE</td><td>5.98±0.06</td><td>5.29±0.06</td><td>5.06±0.06</td><td>76.55±0.23</td><td>73.81±0.29</td><td>70.95±0.29</td></tr><tr><td>Raw Data</td><td>3.00</td><td>3.21</td><td>3.44</td><td>69.94</td><td>69.41</td><td>70.10</td></tr></table>

### 4.4 APPLICATION 2. EXPERIMENTS OF TOPIC MODEL AUGMENTATION WITH DIRVAE

One useful property of the Dirichlet distribution is that it is a conjugate prior of the multinomial distribution, so it has been widely used in the field of topic modeling, e.g. in Latent Dirichlet Allocation (LDA) (Blei et al., 2003). Recently, several neural variational topic (or document) models have been suggested, for example ProdLDA (Srivastava & Sutton, 2017), NVDM (Miao et al., 2016), and GSM (Miao et al., 2017). NVDM uses the GVAE, and GSM uses the GVAE-Softmax to produce the sum-to-one positive topic vectors. Meanwhile, ProdLDA assumes the prior distribution to be the Dirichlet distribution with the softmax Laplace approximation. To verify the usefulness of the DirVAE, we replace the probabilistic encoder of each model with that of the DirVAE. Two popular performance measures in the topic modeling field, perplexity and topic coherence via normalized pointwise mutual information (NPMI) (Lau et al., 2014), are used on the 20Newsgroups and RCV1-v2 datasets. Further details of the experiments can be found in Appendix D.5. Table 4 indicates that the DirVAE augmentation improves the performance in general. Additionally, the best performer on each of the two measures is always an experimental cell with DirVAE augmentation, except for the perplexity on RCV1-v2, which still remains competitive.

Table 4: Topic modeling performances of perplexity and NPMI with DirVAE augmentations.
<table><tr><td rowspan="2"></td><td rowspan="2"></td><td colspan="4">20Newsgroups (K = 50)</td><td colspan="4">RCV1-v2 (K = 100)</td></tr><tr><td>ProdLDA</td><td>NVDM</td><td>GSM</td><td>LDA (Gibbs)</td><td>ProdLDA</td><td>NVDM</td><td>GSM</td><td>LDA (Gibbs)</td></tr><tr><td rowspan="4">Perplexity</td><td>Reported</td><td>1172</td><td>837</td><td>822</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Reproduced</td><td>1219±8.87</td><td>810±2.60</td><td>954±1.22</td><td>1314±18.50</td><td>1190±45.24</td><td>796±6.24</td><td>1386±21.06</td><td>1126±12.66</td></tr><tr><td>Add SBVAE</td><td>1164±2.55</td><td>878±14.21</td><td>980±13.50</td><td>-</td><td>1077±22.57</td><td>1050±12.19</td><td>1670±4.78</td><td>-</td></tr><tr><td>Add DirVAE</td><td>1114±2.30</td><td>752±12.17</td><td>916±1.64</td><td>-</td><td>992±2.19</td><td>809±12.60</td><td>1526±6.11</td><td>-</td></tr><tr><td rowspan="4">NPMI</td><td>Reported</td><td>0.240</td><td>0.186</td><td>0.121</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Reproduced</td><td>0.273±0.019</td><td>0.119±0.003</td><td>0.199±0.006</td><td>0.225±0.002</td><td>0.194±0.005</td><td>0.023±0.002</td><td>0.267±0.019</td><td>0.260±0.006</td></tr><tr><td>Add SBVAE</td><td>0.247±0.015</td><td>0.162±0.007</td><td>0.162±0.006</td><td>-</td><td>0.190±0.006</td><td>0.116±0.016</td><td>0.207±0.004</td><td>-</td></tr><tr><td>Add DirVAE</td><td>0.359±0.026</td><td>0.247±0.010</td><td>0.201±0.003</td><td>-</td><td>0.193±0.004</td><td>0.131±0.015</td><td>0.308±0.005</td><td>-</td></tr></table>

## 5 CONCLUSION

Recent advances in VAEs have made them one of the cornerstones of the field of DGMs. VAEs infer the parameters of explicitly described latent variables, so they are easily embedded in conventional PGMs. While this merit has motivated diverse cases of merging VAEs into graphical models, we question the fundamental suitability of the GVAE for the many models whose latent values are categorical probabilities. The softmax function cannot reproduce the multi-modal distributions that the Dirichlet distribution can. Recognizing this problem, some previous works approximated the Dirichlet distribution in the VAE setting by utilizing the Weibull distribution or the softmax Gaussian distribution, but the DirVAE with the inverse Gamma CDF shows better learning performance in our experiments on representation: the semi-supervised and supervised classifications, and the topic models. Moreover, the DirVAE shows no component collapsing, which leads to better latent representations and performance gains. Given the popularity of the conjugate relation between the multinomial and Dirichlet distributions, the proposed DirVAE can be widely used as a building block in the construction of complex probabilistic models with neural networks.

## REFERENCES

D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 2003.

X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. International Conference on Artificial Intelligence and Statistics, 2010.

M. Hoffman and M. Johnson. ELBO surgery: yet another way to carve up the variational evidence lower bound. Neural Information Processing Systems Workshop on Advances in Approximate Bayesian Inference, 2016.

H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 2001.
E. Jang, S. Gu, and B. Poole. Categorical reparameterization with Gumbel-Softmax. International Conference on Learning Representations, 2017.

Z. Jiang, Y. Zheng, H. Tan, B. Tang, and H. Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. International Joint Conference on Artificial Intelligence, 2017.

D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014a.

D. P. Kingma and M. Welling. Auto-encoding variational Bayes. International Conference on Learning Representations, 2014c.

D. P. Kingma and M. Welling. Efficient gradient-based inference through transformations between Bayes nets and neural nets. International Conference on Machine Learning, 2014d.

D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. Neural Information Processing Systems, 2014b.

D. A. Knowles. Stochastic gradient variational Bayes for gamma approximating distributions. arXiv preprint arXiv:1509.01631, 2015.

B. M. Lake, R. R. Salakhutdinov, and J. Tenenbaum. One-shot learning by inverting a compositional causal process. Neural Information Processing Systems, 2013.

J. H. Lau, D. Newman, and T. Baldwin. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. European Chapter of the Association for Computational Linguistics, 2014.

Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.

L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008.

D. J. C. MacKay. Choice of basis for Laplace approximation. Machine Learning, 1998.

C. J. Maddison, A. Mnih, and Y. W. Teh. The concrete distribution: A continuous relaxation of discrete random variables. International Conference on Learning Representations, 2017.

Y. Miao, L. Yu, and P. Blunsom. Neural variational inference for text processing. International Conference on Machine Learning, 2016.

Y. Miao, E. Grefenstette, and P. Blunsom. Discovering discrete latent topics with neural variational inference. International Conference on Machine Learning, 2017.

T. Minka. Estimating a Dirichlet distribution. Technical report, M.I.T., 2000.

V. Nair and G. Hinton. Rectified linear units improve restricted Boltzmann machines. International Conference on Machine Learning, 2010.

E. Nalisnick and P. Smyth. Stick-breaking variational autoencoders. International Conference on Learning Representations, 2017.

E. Nalisnick, L. Hertel, and P. Smyth. Approximate inference for deep latent Gaussian mixtures. Neural Information Processing Systems Workshop on Bayesian Deep Learning, 2016.

J. Pitman. Combinatorial stochastic processes. Technical report, UC Berkeley, 2002.

D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. International Conference on Machine Learning, 2015.

J. T. Rolfe. Discrete variational autoencoders. International Conference on Learning Representations, 2017.

A. Srivastava and C. Sutton. Autoencoding variational inference for topic models. International Conference on Learning Representations, 2017.

C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther. Ladder variational autoencoders. Neural Information Processing Systems, 2016.

H. Zhang, B. Chen, D. Guo, and M. Zhou. WHAI: Weibull hybrid autoencoding inference for deep topic modeling. International Conference on Learning Representations, 2018.
## APPENDIX

This is the appendix for the Dirichlet Variational Autoencoder. Here, we describe the derivations of the key equations and the detailed experimental settings used in the body of the paper. Detailed information such as model names, parameter names, and experimental assumptions follows the main paper.

## A MONTE-CARLO ESTIMATION OF THE MARGINAL LIKELIHOOD

Proposition A.1. The marginal likelihood is approximated as \(p(\mathbf{x}) \approx \frac{1}{N}\sum_{i=1}^{N} \frac{p(\mathbf{x}|\mathbf{z}_{i}) p(\mathbf{z}_{i})}{q(\mathbf{z}_{i})}\) with \(\mathbf{z}_{i} \sim q(\mathbf{z})\), where \(q(\mathbf{z})\) is the approximate posterior corresponding to the prior \(p(\mathbf{z})\).

Proof.

\[p(\mathbf{x}) = \int_{\mathbf{z}} p(\mathbf{x},\mathbf{z}) d\mathbf{z} = \int_{\mathbf{z}} p(\mathbf{x},\mathbf{z}) \frac{q(\mathbf{z})}{q(\mathbf{z})} d\mathbf{z}\]

\[= \int_{\mathbf{z}} p(\mathbf{x}|\mathbf{z}) p(\mathbf{z}) \frac{q(\mathbf{z})}{q(\mathbf{z})} d\mathbf{z} = \int_{\mathbf{z}} \frac{p(\mathbf{x}|\mathbf{z}) p(\mathbf{z})}{q(\mathbf{z})} q(\mathbf{z}) d\mathbf{z}\]

\[\approx \frac{1}{N}\sum_{i=1}^{N} \frac{p(\mathbf{x}|\mathbf{z}_{i}) p(\mathbf{z}_{i})}{q(\mathbf{z}_{i})} \mathrm{~where~} \mathbf{z}_{i} \sim q(\mathbf{z})\]

## B KL DIVERGENCE OF TWO MULTI-GAMMA DISTRIBUTIONS

Proposition B.1. Define \(\mathbf{X} = (X_{1}, \dots, X_{K}) \sim \mathrm{MultiGamma}(\boldsymbol{\alpha}, \beta \cdot \mathbf{1}_{K})\) as a vector of \(K\) independent Gamma random variables \(X_{k} \sim \mathrm{Gamma}(\alpha_{k}, \beta)\), where \(\alpha_{k}, \beta > 0\) for \(k = 1, \dots, K\). The KL divergence between two MultiGamma distributions \(P = \mathrm{MultiGamma}(\boldsymbol{\alpha}, \beta \cdot \mathbf{1}_{K})\) and \(Q = \mathrm{MultiGamma}(\hat{\boldsymbol{\alpha}}, \beta \cdot \mathbf{1}_{K})\) can be derived as the following:

\[\mathrm{KL}(Q||P) = \sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\psi (\hat{\alpha}_{k}), \quad (8)\]

where \(\psi\) is the digamma function.

Proof. Note that the derivative of the Gamma-like function \(\frac{\Gamma(\alpha)}{\beta^{\alpha}}\) can be derived as follows:

\[\frac{d}{d\alpha} \frac{\Gamma(\alpha)}{\beta^{\alpha}} = \beta^{-\alpha} (\Gamma'(\alpha) - \Gamma(\alpha) \log \beta) = \int_{0}^{\infty} x^{\alpha -1} e^{-\beta x} \log x \, dx.\]

Then, we have the following.
\[\mathrm{KL}(Q||P) = \int q(\mathbf{x})\log \frac{q(\mathbf{x})}{p(\mathbf{x})} \, d\mathbf{x}\]

\[= \int_{0}^{\infty}\cdots \int_{0}^{\infty}\prod_{k}\mathrm{Gamma}(x_{k}; \hat{\alpha}_{k},\beta)\,\log \frac{\beta^{\sum \hat{\alpha}_{k}}\prod \Gamma^{-1}(\hat{\alpha}_{k})\, e^{-\beta \sum x_{k}}\prod x_{k}^{\hat{\alpha}_{k} - 1}}{\beta^{\sum \alpha_{k}}\prod \Gamma^{-1}(\alpha_{k})\, e^{-\beta \sum x_{k}}\prod x_{k}^{\alpha_{k} - 1}} \, d\mathbf{x}\]

\[= \int_{0}^{\infty}\cdots \int_{0}^{\infty}\prod_{k}\mathrm{Gamma}(x_{k}; \hat{\alpha}_{k},\beta)\left[\sum (\hat{\alpha}_{k} - \alpha_{k})\log \beta + \sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\log x_{k}\right] d\mathbf{x}\]

\[= \left[\sum (\hat{\alpha}_{k} - \alpha_{k})\log \beta + \sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k})\right] + \int_{0}^{\infty}\cdots \int_{0}^{\infty}\frac{\beta^{\sum \hat{\alpha}_{k}}}{\prod \Gamma (\hat{\alpha}_{k})}\, e^{-\beta \sum x_{k}}\prod x_{k}^{\hat{\alpha}_{k} - 1}\Big(\sum (\hat{\alpha}_{k} - \alpha_{k})\log x_{k}\Big) \, d\mathbf{x}\]

\[= \left[\sum (\hat{\alpha}_{k} - \alpha_{k})\log \beta + \sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k})\right] + \sum (\hat{\alpha}_{k} - \alpha_{k})\,\beta^{\hat{\alpha}_{k}}\,\Gamma^{-1}(\hat{\alpha}_{k})\,\beta^{-\hat{\alpha}_{k}}\big(\Gamma^{\prime}(\hat{\alpha}_{k}) - \Gamma (\hat{\alpha}_{k})\log \beta \big)\]

\[= \sum (\hat{\alpha}_{k} - \alpha_{k})\log \beta + \sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\big(\psi (\hat{\alpha}_{k}) - \log \beta\big)\]

\[= \sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\psi (\hat{\alpha}_{k})\]

## C HYPER-PARAMETER \(\alpha\) LEARNING STRATEGY

In this section, we introduce the method of moments estimator (MME) to update the Dirichlet prior parameter \(\boldsymbol{\alpha}\). Suppose we have a set of sum-to-one proportions \(\mathcal{D} = \{\mathbf{p}_{1},\dots ,\mathbf{p}_{N}\}\) sampled from Dirichlet(\(\boldsymbol{\alpha}\)); then the MME update rule is the following:

\[\alpha_{k}\leftarrow \frac{S}{N}\sum_{n}p_{n,k}\mathrm{~where~}S = \frac{1}{K}\sum_{k}\frac{\tilde{\mu}_{1,k} - \tilde{\mu}_{2,k}}{\tilde{\mu}_{2,k} - \tilde{\mu}_{1,k}^{2}}\mathrm{~for~}\tilde{\mu}_{j,k} = \frac{1}{N}\sum_{n}p_{n,k}^{j}. \quad (9)\]

After a burn-in period for stabilizing the neural network parameters, we use the MME for hyper-parameter learning with the latent values sampled during training. We alternately update the neural network parameters and the hyper-parameter \(\boldsymbol{\alpha}\). We choose this estimator because of its closed-form nature and consistency (Minka, 2000). The usefulness of the hyper-parameter update is shown in Appendix D.2.

Proposition C.1. Given a proportion set \(\mathcal{D} = \{\mathbf{p}_{1},\dots ,\mathbf{p}_{N}\}\) sampled from Dirichlet(\(\boldsymbol{\alpha}\)), the MME of the hyper-parameter \(\boldsymbol{\alpha}\) is the following:

\[\alpha_{k}\leftarrow \frac{S}{N}\sum_{n}p_{n,k}\mathrm{~where~}S = \frac{1}{K}\sum_{k}\frac{\tilde{\mu}_{1,k} - \tilde{\mu}_{2,k}}{\tilde{\mu}_{2,k} - \tilde{\mu}_{1,k}^{2}}\mathrm{~for~}\tilde{\mu}_{j,k} = \frac{1}{N}\sum_{n}p_{n,k}^{j}.\]

Proof. Define \(\mu_{j,k} = \mathbb{E}[p_{k}^{j}]\) as the \(j^{\mathrm{th}}\) moment of the \(k^{\mathrm{th}}\) dimension of the Dirichlet distribution with prior \(\boldsymbol{\alpha}\).
Then, by the law of large numbers, \(\mu_{j,k}\approx \tilde{\mu}_{j,k}\). It can easily be shown that \(\mu_{1,k} = \frac{\alpha_{k}}{\sum_{i}\alpha_{i}}\) and \(\mu_{2,k} = \frac{\alpha_{k}}{\sum_{i}\alpha_{i}}\frac{1 + \alpha_{k}}{1 + \sum_{i}\alpha_{i}} = \mu_{1,k}\frac{1 + \alpha_{k}}{1 + \sum_{i}\alpha_{i}}\), so that

\[\mathrm{numerator}\Big(\frac{\mu_{1,k} - \mu_{2,k}}{\mu_{2,k} - \mu_{1,k}^{2}}\Big) = \frac{\alpha_{k}}{\sum_{i}\alpha_{i}} - \frac{\alpha_{k}}{\sum_{i}\alpha_{i}}\frac{1 + \alpha_{k}}{1 + \sum_{i}\alpha_{i}} = \frac{\alpha_{k}(\sum_{i\neq k}\alpha_{i})}{(\sum_{i}\alpha_{i})(1 + \sum_{i}\alpha_{i})}\]

\[\mathrm{denominator}\Big(\frac{\mu_{1,k} - \mu_{2,k}}{\mu_{2,k} - \mu_{1,k}^{2}}\Big) = \frac{\alpha_{k}}{\sum_{i}\alpha_{i}}\frac{1 + \alpha_{k}}{1 + \sum_{i}\alpha_{i}} - \Big(\frac{\alpha_{k}}{\sum_{i}\alpha_{i}}\Big)^{2} = \frac{\alpha_{k}(\sum_{i\neq k}\alpha_{i})}{(\sum_{i}\alpha_{i})^{2}(1 + \sum_{i}\alpha_{i})}\]

holds for each \(k = 1,\dots ,K\). Therefore,

\[\sum_{i}\alpha_{i} = \frac{\mu_{1,k} - \mu_{2,k}}{\mu_{2,k} - \mu_{1,k}^{2}}\approx \frac{1}{K}\sum_{k}\frac{\mu_{1,k} - \mu_{2,k}}{\mu_{2,k} - \mu_{1,k}^{2}}\approx \frac{1}{K}\sum_{k}\frac{\tilde{\mu}_{1,k} - \tilde{\mu}_{2,k}}{\tilde{\mu}_{2,k} - \tilde{\mu}_{1,k}^{2}} = S,\]

and hence,

\[\hat{\alpha}_{k} = \Big(\sum_{i}\alpha_{i}\Big)\tilde{\mu}_{1,k} = \frac{S}{N}\sum_{n}p_{n,k}.\]

## D EXPERIMENTAL SETTINGS

In this section, we support Section 4 of the main paper with more detailed experimental settings. Our TensorFlow implementation is available at https://TO_BE_RELEASED.

## D.1 DATASET DESCRIPTION

We use the following benchmark datasets for the experiments in the main paper: 1) MNIST; 2) MNIST with rotations (MNIST+rot); 3) OMNIGLOT; and 4) SVHN with PCA transformation. MNIST (LeCun et al., 1998) is a hand-written digit image dataset of size \(28 \times 28\) with 10 labels, consisting of 60,000 training data and 10,000 testing data. The MNIST+rot data, reproduced by the authors of Nalisnick & Smyth (2017), consists of MNIST and rotated MNIST. OMNIGLOT (Lake et al., 2013; Sønderby et al., 2016) is another hand-written image dataset of characters of size \(28 \times 28\) with 50 labels, consisting of 24,345 training data and 8,070 testing data. SVHN is a Street View House Numbers image dataset, dimension-reduced by PCA into 500 dimensions (Nalisnick & Smyth, 2017).

## D.2 REPRESENTATION LEARNING OF VAES

We divided the datasets into {train : valid : test} as follows: MNIST = {45,000 : 5,000 : 10,000} and OMNIGLOT = {22,095 : 2,250 : 8,070}. For MNIST, we use 50-dimensional latent variables with two 500-dimensional hidden layers in the encoder and one 500-dimensional hidden layer in the decoder. We set \(\boldsymbol{\alpha} = 0.98\cdot \mathbf{1}_{50}\) for the fair comparison to the GVAEs using Equation (5). The batch size was 100. For OMNIGLOT, we use 100-dimensional latent variables with the same encoder and decoder structure. We assume \(\boldsymbol{\alpha} = 0.99\cdot \mathbf{1}_{100}\) for the fair comparison to the GVAEs using Equation (5). The batch size was 15.
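Before the remaining optimization details, we note that the MME update of Equation (9) above admits a direct implementation; a minimal NumPy sketch (our illustration, with the function name ours):

```python
import numpy as np

def dirichlet_mme(P):
    """Method-of-moments estimate of the Dirichlet prior from sampled
    proportions P of shape (N, K), following Equation (9) of Appendix C."""
    mu1 = P.mean(axis=0)            # first moments per dimension
    mu2 = (P ** 2).mean(axis=0)     # second moments per dimension
    S = np.mean((mu1 - mu2) / (mu2 - mu1 ** 2))  # estimate of sum_k alpha_k
    return S * mu1                  # alpha_k <- S * mean_n p_{n,k}
```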
For both datasets, gradient clipping is used; the ReLU function (Nair & Hinton, 2010) is used as the activation function in the hidden layers; Xavier initialization (Glorot & Bengio, 2010) is used for the neural network parameter initialization; and the Adam optimizer (Kingma & Ba, 2014a) is used with learning rate 5e-4 for all VAEs except 3e-4 for the SBVAEs. The prior assumptions for each VAE are the following: 1) \(\mathcal{N}(0,\mathbf{I})\) for the GVAE and the GVAE-Softmax; 2) GEM(5) for the SBVAEs; and 3) Dirichlet(\(0.98 \cdot \mathbf{1}_{50}\)) (MNIST) and Dirichlet(\(0.99 \cdot \mathbf{1}_{100}\)) (OMNIGLOT) for the DirVAE-Weibull. Finally, to compute the marginal log-likelihood, we used 100 samples for each of 1,000 instances randomly selected from the test data.

We add the result of the VAE with 20 normalizing flows (GVAE-NF20) (Rezende & Mohamed, 2015) as a baseline in Table 5. Also, the latent dimension-wise decoder weight norms and the t-SNE visualization of the latent embeddings of MNIST are given in Figures 5a and 5b, which correspond to Figures 4a and 3, respectively. Additionally, DirVAE-Learning uses the same \(\boldsymbol{\alpha}\) as its initial value, but optimizes the hyper-parameter \(\boldsymbol{\alpha}\) through the learning iterations using the MME method of Appendix C in the following stages: 1) a burn-in period for stabilizing the neural network parameters; 2) an alternating update period for the neural network parameters and \(\boldsymbol{\alpha}\); and 3) an update period for the neural network parameters with the learned hyper-parameter \(\boldsymbol{\alpha}\) fixed. Table 5 shows that DirVAE-Learning improves the marginal log-likelihood, the ELBO, and the reconstruction loss on both datasets. We also give the learned hyper-parameter \(\boldsymbol{\alpha}\) in Figure 6.

Table 5: Negative log-likelihood, negative ELBO, and reconstruction loss of the VAEs for the MNIST and OMNIGLOT datasets. Lower values are better for all measures.

<table><tr><td rowspan="2"></td><td colspan="3">MNIST (K = 50)</td><td colspan="3">OMNIGLOT (K = 100)</td></tr><tr><td>Neg. LL</td><td>Neg. ELBO</td><td>Reconst. Loss</td><td>Neg. LL</td><td>Neg. ELBO</td><td>Reconst. Loss</td></tr>
<tr><td>GVAE (Nalisnick &amp; Smyth, 2017)</td><td>96.80</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE-Kuma (Nalisnick &amp; Smyth, 2017)</td><td>98.01</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE-Gamma (Nalisnick &amp; Smyth, 2017)</td><td>100.74</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GVAE</td><td>94.54±0.79</td><td>98.58±0.04</td><td>74.31±0.13</td><td>119.29±0.44</td><td>126.42±0.24</td><td>98.90±0.36</td></tr><tr><td>GVAE-Softmax</td><td>98.18±0.61</td><td>103.49±0.16</td><td>79.36±0.82</td><td>130.01±1.16</td><td>139.73±0.81</td><td>123.34±1.43</td></tr><tr><td>GVAE-NF20</td><td>95.87±0.64</td><td>113.14±0.47</td><td>90.09±1.19</td><td>113.51±1.29</td><td>129.82±0.64</td><td>108.86±1.19</td></tr><tr><td>SBVAE-Kuma</td><td>99.27±0.48</td><td>102.60±1.81</td><td>85.90±0.82</td><td>130.73±2.17</td><td>132.80±0.63</td><td>110.25±1.00</td></tr><tr><td>SBVAE-Gamma</td><td>102.14±0.69</td><td>135.30±0.24</td><td>113.89±0.25</td><td>128.82±1.82</td><td>149.30±0.82</td><td>136.36±1.53</td></tr><tr><td>DirVAE-Weibull</td><td>114.94±1.13</td><td>153.32±2.96</td><td>150.92±3.70</td><td>140.89±3.21</td><td>198.01±2.46</td><td>145.52±3.13</td></tr><tr><td>DirVAE</td><td>87.64±0.64</td><td>100.47±0.35</td><td>81.50±0.27</td><td>108.24±0.42</td><td>120.06±0.35</td><td>99.78±0.36</td></tr><tr><td>DirVAE-Learning</td><td>84.42±0.53</td><td>99.88±0.40</td><td>80.73±0.31</td><td>100.01±0.52</td><td>119.73±0.31</td><td>99.55±0.32</td></tr></table>

![](images/13_0.jpg)

<center>Figure 5: Decoder weight collapsing and t-SNE latent embeddings visualization of GVAE-NF20 on MNIST. </center>

## D.3 SEMI-SUPERVISED CLASSIFICATION TASK WITH VAES

The overall model structure for this semi-supervised classification task uses a VAE with separate random variables \(\mathbf{z}\) and \(\mathbf{y}\), introduced as the \(M2\) model by Kingma et al. (2014b). However, the same task with the SBVAE uses a modified model that ignores the relation between the class label variable \(\mathbf{y}\) and the latent variable \(\mathbf{z}\), while they still share the same parent node: \(q_{\phi}(\mathbf{z},\mathbf{y}|\mathbf{x}) = q_{\phi}(\mathbf{z}|\mathbf{x})q_{\phi}(\mathbf{y}|\mathbf{x})\), where \(q_{\phi}(\mathbf{y}|\mathbf{x})\) is a discriminative network for the unseen labels. We follow the structure of the SBVAE. Finally, the objective functions for the labeled and the unlabeled instances of the semi-supervised classification task are, respectively:

\[\log p(\mathbf{x},\mathbf{y})\geq \mathcal{L}_{\mathrm{labeled}}(\mathbf{x},\mathbf{y}) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z},\mathbf{y})] - \mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z})) + \log q_{\phi}(\mathbf{y}|\mathbf{x}), \quad (10)\]

\[\log p(\mathbf{x})\geq \mathcal{L}_{\mathrm{unlabeled}}(\mathbf{x}) = \mathbb{E}_{q_{\phi}(\mathbf{z},\mathbf{y}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z},\mathbf{y}) + \mathbb{H}(q_{\phi}(\mathbf{y}|\mathbf{x}))] - \mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z})). \quad (11)\]

In the above, \(\mathbb{H}\) is the entropy function. The actual training for the semi-supervised learning optimizes the weighted sum of Equations (10) and (11) with a ratio hyper-parameter \(0< \lambda < 1\); a sketch of one such combination is given below. The datasets are divided into {train : valid : test} as follows: MNIST = {45,000 : 5,000 : 10,000}, MNIST+rot = {70,000 : 10,000 : 20,000}, and SVHN = {65,000 : 8,257 : 26,032}.
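For concreteness, here is a PyTorch sketch of the unlabeled objective of Equation (11) together with one plausible weighted combination of the two objectives. The exact weighting convention is not spelled out in the text, so the combination below is an assumption of this sketch, and all names are ours:

```python
import torch

def unlabeled_elbo(log_px_given_zy, qy_probs, kl_z):
    """Eq. (11): E_{q(y|x)}[log p(x|z,y)] + H(q(y|x)) - KL(q(z|x)||p(z)),
    with z drawn once via the reparameterized Dirichlet sampler.
    log_px_given_zy: (B, C) log-likelihoods, one column per class y;
    qy_probs: (B, C) classifier outputs q(y|x); kl_z: (B,) KL terms."""
    expected_recon = (qy_probs * log_px_given_zy).sum(dim=-1)
    entropy = -(qy_probs * torch.log(qy_probs.clamp_min(1e-10))).sum(dim=-1)
    return expected_recon + entropy - kl_z

def semi_supervised_loss(elbo_labeled, log_qy_x, elbo_unlabeled, lam=0.375):
    """One plausible weighted sum of Eq. (10) and Eq. (11) with ratio lam;
    elbo_labeled excludes the classifier term log q(y|x), added here.
    (The exact weighting used by the authors is an assumption here.)"""
    labeled = (elbo_labeled + log_qy_x).mean()
    return -(lam * labeled + (1.0 - lam) * elbo_unlabeled.mean())
```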
For SVHN, dimension reduction into 500 dimensions by PCA is applied as preprocessing. Fundamentally, we applied the same experimental settings to the GVAE, the SBVAE, and the DirVAE in this experiment, as specified by the authors in Nalisnick & Smyth (2017). Specifically, the three VAEs used the same network structures of 1) one 500-dimensional hidden layer for MNIST; and 2) four 500-dimensional hidden layers for MNIST+rot and SVHN, with a residual network for the last three hidden layers. The latent variables have 50 dimensions in all settings. The ratio parameter \(\lambda\) is set to 0.375 for both MNIST datasets and 0.45 for SVHN. The ReLU function is used as the activation function in the hidden layers, and the neural network parameters were initialized by sampling from \(\mathcal{N}(0, 0.001)\). The Adam optimizer is used with learning rate 3e-4, and the batch size was 100. Finally, the DirVAE sets \(\boldsymbol{\alpha} = 0.98 \cdot \mathbf{1}_{50}\) by using Equation (5).

![](images/14_0.jpg)

<center>Figure 6: The optimized dimension-wise \(\alpha\) values from DirVAE-Learning with MNIST. </center>

## D.4 SUPERVISED CLASSIFICATION TASK WITH LATENT VALUES OF VAES

For the supervised classification task on the latent representations of the VAEs, we used exactly the same experimental settings as in Appendix D.2. Since the DLGMM is basically a Gaussian mixture model built on the SBVAE, it is a more complex model than the VAE alternatives; we only report the authors' results from Nalisnick et al. (2016) for comparison purposes. Additionally, we omit the comparison with VaDE (Jiang et al., 2017) because VaDE is customized as a clustering model rather than an ordinary VAE of the kind we choose as baselines.

Table 6: The error rate of \(k\)NN with the latent representations of VAEs.

<table><tr><td rowspan="2"></td><td colspan="3">MNIST (K = 50)</td><td colspan="3">OMNIGLOT (K = 100)</td></tr><tr><td>k = 3</td><td>k = 5</td><td>k = 10</td><td>k = 3</td><td>k = 5</td><td>k = 10</td></tr><tr><td>GVAE (Nalisnick et al., 2016)</td><td>28.40</td><td>20.96</td><td>15.33</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE (Nalisnick et al., 2016)</td><td>9.34</td><td>8.65</td><td>8.90</td><td>-</td><td>-</td><td>-</td></tr><tr><td>DLGMM (Nalisnick et al., 2016)</td><td>9.14</td><td>8.38</td><td>8.42</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GVAE</td><td>27.16±0.48</td><td>20.20±0.93</td><td>14.89±0.40</td><td>92.34±0.25</td><td>91.21±0.18</td><td>88.79±0.35</td></tr><tr><td>GVAE-Softmax</td><td>25.68±2.64</td><td>21.79±2.17</td><td>18.75±2.06</td><td>94.76±0.20</td><td>94.22±0.37</td><td>92.98±0.42</td></tr><tr><td>GVAE-NF20</td><td>25.72±1.58</td><td>20.13±1.25</td><td>15.87±0.74</td><td>91.25±0.12</td><td>90.03±0.20</td><td>87.73±0.38</td></tr><tr><td>SBVAE</td><td>10.01±0.52</td><td>9.58±0.47</td><td>9.39±0.54</td><td>86.90±0.82</td><td>85.10±0.89</td><td>82.96±0.64</td></tr><tr><td>DirVAE</td><td>5.98±0.06</td><td>5.29±0.06</td><td>5.06±0.06</td><td>76.55±0.23</td><td>73.81±0.29</td><td>70.95±0.29</td></tr><tr><td>Raw Data</td><td>3.00</td><td>3.21</td><td>3.44</td><td>69.94</td><td>69.41</td><td>70.10</td></tr></table>

## D.5 TOPIC MODEL AUGMENTATION WITH DIRVAE

For the topic model augmentation experiment, two popular performance measures in the topic modeling field, perplexity and topic coherence via normalized pointwise mutual information (NPMI) (Lau et al., 2014), have been used with the 20Newsgroups and RCV1-v2 datasets.
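For reference, NPMI-based coherence scores a topic's top-\(N\) words by their pairwise co-occurrence in a reference corpus (Lau et al., 2014); a minimal sketch (our illustration; the document-set input format and the function name are assumptions of this sketch):

```python
import itertools
import numpy as np

def topic_npmi(top_words, doc_sets, eps=1e-12):
    """Average NPMI over word pairs of one topic. doc_sets is a list of
    sets, each holding the word types of one reference document."""
    n_docs = len(doc_sets)
    def p(*words):  # fraction of documents containing all given words
        return sum(all(w in d for w in words) for d in doc_sets) / n_docs
    scores = []
    for wi, wj in itertools.combinations(top_words, 2):
        p_ij = min(p(wi, wj), 1.0 - eps)   # keep -log p_ij away from 0
        if p_ij <= 0.0:
            scores.append(-1.0)            # convention for unseen pairs
            continue
        pmi = np.log(p_ij / max(p(wi) * p(wj), eps))
        scores.append(pmi / -np.log(p_ij))
    return float(np.mean(scores))
```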
20Newsgroups has 11,258 training documents and 7,487 test documents with a vocabulary size of 1,995. For the RCV1-v2 dataset, due to the massive size of the whole data, we randomly sampled 20,000 training documents and 10,000 test documents with a vocabulary size of 10,000. Lower is better for perplexity; higher is better for NPMI. The specific model structures can be found in the original papers, Srivastava & Sutton (2017); Miao et al. (2016; 2017). We replace each model's prior with that of the DirVAE, and we search the hyper-parameter as in Table 7 using 1,000 randomly selected test data. We use 500-dimensional hidden layers and 50 topics for 20Newsgroups, and 1,000-dimensional hidden layers and 100 topics for RCV1-v2. Table 8 shows the top-10 high-probability words per topic, obtained by activating single latent dimensions, in the case of 20Newsgroups. Also, we visualize the latent embeddings of documents by t-SNE in Figures 7, 8, and 9.

Table 7: Hyper-parameter selections for DirVAE augmentations.

<table><tr><td rowspan="2"></td><td colspan="3">20Newsgroups (K = 50)</td><td colspan="3">RCV1-v2 (K = 100)</td></tr><tr><td>ProdLDA</td><td>NVDM</td><td>GSM</td><td>ProdLDA</td><td>NVDM</td><td>GSM</td></tr><tr><td>Add DirVAE</td><td>\(0.98 \cdot \mathbf{1}_{50}\)</td><td>\(0.95 \cdot \mathbf{1}_{50}\)</td><td>\(0.20 \cdot \mathbf{1}_{50}\)</td><td>\(0.99 \cdot \mathbf{1}_{100}\)</td><td>\(0.90 \cdot \mathbf{1}_{100}\)</td><td>\(0.01 \cdot \mathbf{1}_{100}\)</td></tr></table>

Table 8: Sample of learned per-topic top-10 high-probability words from 20Newsgroups with DirVAE augmentation, obtained by activating single latent dimensions.

<table><tr><td colspan="2">(a) DirVAE augmentation to ProdLDA</td></tr><tr><td>Topic 1</td><td>turks turkish armenian genocide village armenia armenians muslims turkey greece</td></tr><tr><td>Topic 2</td><td>doctrine jesus god faith christ scripture belief eternal holy bible</td></tr><tr><td>Topic 3</td><td>season defensive puck playoff coach score flyers nhl team ice</td></tr><tr><td>Topic 4</td><td>pitcher braves hitter coach pen defensive injury roger pitch player</td></tr><tr><td>Topic 5</td><td>ide scsi scsus controller motherboard isa cache mb floppy ram</td></tr><tr><td>Topic 6</td><td>toolkit widget workstation xlib jpeg xt vendor colormap interface pixel</td></tr><tr><td>Topic 7</td><td>spacecraft satellite solar shuttle nasa mission professor lunar orbit rocket</td></tr><tr><td>Topic 8</td><td>knife handgun assault homicide batf criminal gun firearm police apartment</td></tr><tr><td>Topic 9</td><td>enforcement privacy encrypt encryption ripem wiretap rsa cipher cryptography escrow</td></tr><tr><td>Topic 10</td><td>min detroit tor det calgary rangers leafs montreal philadelphia cal</td></tr><tr><td colspan="2">(b) DirVAE augmentation to NVDM</td></tr><tr><td>Topic 1</td><td>armenian azerbaijan armenia genocide armenians turkish militia massacre village turks</td></tr><tr><td>Topic 2</td><td>arab arab sirsali palestinian jews soldier turks nazi massacre jew</td></tr><tr><td>Topic 3</td><td>resurrection bible christianity doctrine scripture eternal belief christian faith jesus</td></tr><tr><td>Topic 4</td><td>hitter season braves pitcher baseball pitch game player defensive team</td></tr><tr><td>Topic 5</td><td>directory file compile variable update ftp version site copy host</td></tr><tr><td>Topic 6</td><td>performance speed faster mhz rate clock processor average twice fast</td></tr><tr><td>Topic 7</td><td>windows microsoft driver dos nt graphic vga card virtual upgrade</td></tr><tr><td>Topic 8</td><td>seat gear rear tire honda oil front mile wheel engine</td></tr><tr><td>Topic 9</td><td>patient disease doctor treatment symptom medical health hospital pain medicine</td></tr>
<tr><td>Topic 10</td><td>pt la det tor pit pp vs van cal nj</td></tr><tr><td colspan="2">(c) DirVAE augmentation to GSM</td></tr><tr><td>Topic 1</td><td>turkish armenian armenians people one turkey armenia turks greek history</td></tr><tr><td>Topic 2</td><td>israel israeli jews attack world jewish article arab peace land</td></tr><tr><td>Topic 3</td><td>god jesus christian religion truth believe bible church christ belief</td></tr><tr><td>Topic 4</td><td>team play game hockey nhl score first division go win</td></tr><tr><td>Topic 5</td><td>drive video mac card port pc system modem memory speed</td></tr><tr><td>Topic 6</td><td>image software file version server program system ftp package support</td></tr><tr><td>Topic 7</td><td>space launch orbit earth nasa moon satellite mission project center</td></tr><tr><td>Topic 8</td><td>law state gun government right rights case court police crime</td></tr><tr><td>Topic 9</td><td>price sell new sale offer pay buy good condition money</td></tr><tr><td>Topic 10</td><td>internet mail computer send list fax phone email address information</td></tr></table>

![](images/16_0.jpg)

<center>Figure 7: 20Newsgroups latent document embedding visualization with t-SNE when replacing the model prior with the Dirichlet. (Left) ProdLDA+DirVAE, (Middle) NVDM+DirVAE, (Right) GSM+DirVAE. </center>

![](images/16_1.jpg)

<center>Figure 8: 20Newsgroups latent document embedding visualization with t-SNE when replacing the model prior with the Stick-Breaking. (Left) ProdLDA+SBVAE, (Middle) NVDM+SBVAE, (Right) GSM+SBVAE. </center>

![](images/16_2.jpg)

<center>Figure 9: 20Newsgroups latent document embedding visualization with t-SNE of the original models. (Left) ProdLDA, (Middle) NVDM, (Right) GSM. </center>
## ABSTRACT This paper proposes Dirichlet Variational Autoencoder (DirVAE) using a Dirichlet prior for a continuous latent variable that exhibits the characteristic of the categorical probabilities. To infer the parameters of DirVAE, we utilize the stochastic gradient method by approximating the Gamma distribution, which is a component of the Dirichlet distribution, with the inverse Gamma CDF approximation. Additionally, we reshape the component collapsing issue by investigating two problem sources, which are decoder weight collapsing and latent value collapsing, and we show that DirVAE has no component collapsing; while Gaussian VAE exhibits the decoder weight collapsing and Stick- Breaking VAE shows the latent value collapsing. The experimental results show that 1) DirVAE models the latent representation result with the best log- likelihood compared to the baselines; and 2) DirVAE produces more interpretable latent values with no collapsing issues which the baseline models suffer from. Also, we show that the learned latent representation from the DirVAE achieves the best classification accuracy in the semi- supervised and the supervised classification tasks on MNIST, OMNIGLOT, and SVHN compared to the baseline VAEs. Finally, we demonstrated that the DirVAE augmented topic models show better performances in most cases. ## 1 INTRODUCTION A Variational Autoencoder (VAE) (Kingma & Welling, 2014c) brought success in deep generative models (DGMs) with a Gaussian distribution as a prior distribution (Jiang et al., 2017; Miao et al., 2016; 2017; Srivastava & Sutton, 2017). If we focus on the VAE, the VAE assumes the prior distribution to be \(\mathcal{N}(\mathbf{0}, \mathbf{I})\) with the learning on the approximated \(\hat{\mu}\) and \(\hat{\Sigma}\) . Also, Stick- Breaking VAE (SBVAE) (Nalisnick & Smyth, 2017) is a nonparametric version of the VAE, which modeled the latent dimension to be infinite using a stick- breaking process (Ishwaran & James, 2001). While these VAEs assume that the prior distribution of the latent variables to be continuous random variables, recent studies introduce the approximations on discrete priors with continuous random variables (Jang et al., 2017; Maddison et al., 2017; Rolfe, 2017). The key of these approximations is enabling the backpropagation with the reparametrization technique, or the stochastic gradient variational Bayes (SGVB) estimator, while the modeled prior follows a discrete distribution. The applications of these approximations on discrete priors include the prior modeling of a multinomial distribution which is frequently used in the probabilistic graphical models (PGMs). Inherently, the multinomial distributions can take a Dirichlet distribution as a conjugate prior, and the demands on such prior have motivated the works like Jang et al. (2017); Maddison et al. (2017); Rolfe (2017) that support the multinomial distribution posterior without explicit modeling on a Dirichlet prior. When we survey the work with a explicit modeling on the Dirichlet prior, we found a frequent approach such as utilizing a softmax Laplace approximation (Srivastava & Sutton, 2017). We argue that this approach has a limitation from the multi- modality perspective. The Dirichlet distribution can exhibit a multi- modal distribution with parameter settings, see Figure 1, which is infeasible to generate with the Gaussian distribution with a softmax function. 
Therefore, the previous continuous-domain VAEs cannot be a perfect substitute for a direct approximation of the Dirichlet distribution. Utilizing a Dirichlet distribution as a conjugate prior to a multinomial distribution has an advantage over applying a softmax function to a Gaussian distribution. For instance, Figure 1 illustrates the potential difficulties in utilizing the softmax function with the Gaussian distribution. <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Illustrated probability simplex with Gaussian-Softmax, GEM, and Dirichlet distributions. Unlike the Gaussian-Softmax or the GEM distribution, the Dirichlet distribution is able to capture the multi-modality that exhibits multiple peaks at the vertices of the probability simplex. </center> Given the three-dimensional probability simplex, the Gaussian-Softmax distribution cannot generate the illustrated case of the Dirichlet distribution with a high probability measure at the vertices of the simplex, i.e. the multi-modality whose necessity was emphasized in Hoffman & Johnson (2016). Additionally, the Griffiths-Engen-McCloskey (GEM) distribution (Pitman, 2002), which is the prior distribution of the SBVAE, has difficulty modeling the multi-modality because its sampling procedure is affected by the rich-get-richer phenomenon, so a few components tend to dominate the weight of the samples. The Dirichlet distribution does not exhibit this phenomenon: it can distribute the weights fairly across the components, and it is more likely to capture the multi-modality by controlling the prior hyper-parameter (Blei et al., 2003). We therefore conjecture that an enhanced modeling of the Dirichlet prior is still needed 1) because there are cases where the Gaussian-Softmax approaches, or the softmax Laplace approximation, cannot imitate the Dirichlet distribution; and 2) because the nonparametric approaches can be influenced by biases that the Dirichlet distribution does not suffer from. Given these motivations for modeling the Dirichlet distribution with the SGVB estimator, this paper introduces the Dirichlet Variational Autoencoder (DirVAE), which exhibits the same characteristics as the Dirichlet distribution. The DirVAE is able to model the multi-modal distributions that are not possible with the Gaussian-Softmax and the GEM approaches. These characteristics allow the DirVAE to serve as the prior of a discrete latent distribution, as the original Dirichlet distribution does. Introducing the DirVAE requires configuring the SGVB estimator for the Dirichlet distribution. Specifically, the Dirichlet distribution is a composition of Gamma random variables, so we approximate the inverse Gamma cumulative distribution function (CDF) with an asymptotic approximation. This approximation of the inverse Gamma CDF becomes the building block for approximating the Dirichlet distribution. We compared this approach to the previously suggested approximations, i.e. approaches with the Weibull distribution and with the softmax Gaussian distribution, and our approximation shows the best log-likelihood among them. Moreover, our research on DirVAE led us to investigate component collapsing. It has been reported that the component collapsing issue is resolved by the SBVAE because of the meaningful decoder weights from the latent layer to the next layer.
However, we found that the SBVAE has a latent value collapsing issue: many near-zero values on the latent dimensions lead to incomplete utilization of the latent space. Hence, we argue that the Gaussian VAE (GVAE) suffers from decoder weight collapsing, which was previously, and too narrowly, defined as component collapsing; and that the SBVAE suffers from latent value collapsing. Finally, we suggest that the definition of component collapsing should be expanded to cover both decoder weight and latent value collapsing. The proposed DirVAE shows neither near-zero decoder weights nor near-zero latent values, so the reconstruction uses the full latent dimension information in most cases. We investigated this issue because our performance gain comes from resolving this expanded version of component collapsing. Due to the component collapsing issues, the existing VAEs have less meaningful latent values or cannot effectively use their latent representations. Meanwhile, DirVAE exhibits no component collapsing thanks to its multi-modal prior, which plausibly leads to its superior qualitative and quantitative performances. We show experimentally that the DirVAE has a more meaningful, disentangled latent representation through image generation and latent value visualizations. <--- Page Split ---> Technically, the new approximation provides a closed-form loss function derived from the evidence lower bound (ELBO) of the DirVAE. Optimizing the ELBO enables representation learning with the DirVAE, and we test the learned representation in two ways. Firstly, we test the representation learning quality by performing supervised and semi-supervised classification tasks on MNIST, OMNIGLOT, and SVHN. These classification tasks show that DirVAE achieves the best classification performance with its learned representation. Secondly, we test the applicability of DirVAE to existing models, such as topic models with DirVAE priors on 20Newsgroups and RCV1-v2. This experiment shows that augmenting existing neural variational topic models with DirVAE improves both perplexity and topic coherence, and most of the best performers are DirVAE-augmented. ## 2 PRELIMINARIES ### 2.1 VARIATIONAL AUTOENCODERS A VAE is composed of two parts: a generative sub-model and an inference sub-model. In the generative part, a probabilistic decoder reproduces \(\hat{\mathbf{x}}\) close to an observation \(\mathbf{x}\) from a latent variable \(\mathbf{z} \sim p(\mathbf{z})\) , i.e. \(\mathbf{x} \sim p_{\theta}(\mathbf{x}|\mathbf{z}) = p_{\theta}(\mathbf{x}|\hat{\mathbf{x}})\) where \(\hat{\mathbf{x}} = \mathrm{MLP}(\mathbf{z})\) is obtained from the latent variable \(\mathbf{z}\) by a multilayer perceptron (MLP). In the inference part, a probabilistic encoder outputs a latent variable \(\mathbf{z} \sim q_{\phi}(\mathbf{z}|\mathbf{x}) = q_{\phi}(\mathbf{z}|\eta)\) where \(\eta = \mathrm{MLP}(\mathbf{x})\) is computed from the observation \(\mathbf{x}\) by an MLP. The model parameters, \(\theta\) and \(\phi\) , are jointly learned by optimizing the ELBO below with the stochastic gradient method, backpropagating as in ordinary neural networks by applying SGVB estimators to the random nodes.
\[\log p(\mathbf{x}) \geq \mathcal{L}(\mathbf{x}) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z})) \quad (1)\] In the GVAE (Kingma & Welling, 2014c), the prior distribution \(p(\mathbf{z})\) is assumed to be a standard Gaussian distribution. In the SBVAE (Nalisnick & Smyth, 2017), the prior distribution becomes a GEM distribution that produces samples with a Beta distribution and a stick-breaking algorithm. ### 2.2 DIRICHLET DISTRIBUTION AS A COMPOSITION OF GAMMA RANDOM VARIABLES The Dirichlet distribution is a composition of multiple Gamma random variables. Note that the probability density functions (PDFs) of the Dirichlet and Gamma distributions are as follows: \[\mathrm{Dirichlet}(\mathbf{x}; \boldsymbol {\alpha}) = \frac{\Gamma(\sum \alpha_{k})}{\prod \Gamma(\alpha_{k})} \prod x_{k}^{\alpha_{k} - 1}, \mathrm{Gamma}(x; \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x} \quad (2)\] where \(\alpha_{k}, \alpha , \beta > 0\) . In detail, if there are \(K\) independent random variables following the Gamma distributions \(X_{k} \sim \mathrm{Gamma}(\alpha_{k}, \beta)\) , or \(\mathbf{X} \sim \mathrm{MultiGamma}(\boldsymbol{\alpha}, \beta \cdot \mathbf{1}_{K})\) , where \(\alpha_{k}, \beta > 0\) for \(k = 1, \dots , K\) , then we have \(\mathbf{Y} \sim \mathrm{Dirichlet}(\boldsymbol {\alpha})\) where \(Y_{k} = X_{k} / \sum X_{i}\) . It should be noted that the rate parameter, \(\beta\) , must be the same for every Gamma distribution in the composition. The KL divergence can then be derived as the following: \[\mathrm{KL}(Q||P) = \sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\psi (\hat{\alpha}_{k}) \quad (3)\] for \(P = \mathrm{MultiGamma}(\boldsymbol {\alpha}, \beta \cdot \mathbf{1}_{K})\) and \(Q = \mathrm{MultiGamma}(\hat{\boldsymbol{\alpha}}, \beta \cdot \mathbf{1}_{K})\) where \(\psi\) is the digamma function. The detailed derivation is provided in Appendix B. ### 2.3 SGVB FOR GAMMA RANDOM VARIABLE AND APPROXIMATION ON DIRICHLET DISTRIBUTION This section discusses several ways of approximating the Dirichlet random variable, or equivalently, SGVB estimators for the Gamma random variables that compose a Dirichlet distribution. Utilizing SGVB requires a differentiable non-centered parametrization (DNCP) for the distribution (Kingma & Welling, 2014d). The main SGVB estimator for Gamma random variables used in DirVAE relies on the inverse Gamma CDF approximation explained in the next section. Prior works include two approaches, the use of the Weibull distribution and of the softmax Gaussian distribution, both explained in this section. <--- Page Split ---> Approximation with Weibull distribution. Because of the similarity between the PDFs of the Weibull distribution and the Gamma distribution, some prior works used the Weibull distribution as a posterior distribution for the prior Gamma distribution (Zhang et al., 2018): \[\mathrm{Weibull}(x;k,\lambda) = \frac{k}{\lambda}\Big(\frac{x}{\lambda}\Big)^{k - 1}e^{-(x / \lambda)^{k}}\mathrm{~where~}k,\lambda >0. \quad (4)\] Zhang et al. (2018) pointed out two useful characteristics of approximating the Gamma distribution with the Weibull distribution. One useful property is that the KL divergence is expressed in closed form, and the other is the simple reparametrization trick via the closed-form inverse CDF of the Weibull distribution.
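As a minimal sketch of this second property (the function name is ours), the Weibull inverse CDF gives a one-line reparametrized sample:

```python
import numpy as np

def sample_weibull(k, lam, rng=None):
    """Reparametrized Weibull(k, lam) draw via the closed-form inverse CDF
    F^{-1}(u) = lam * (-log(1 - u))**(1/k); given u ~ Uniform(0, 1), the
    sample is a deterministic, differentiable function of (k, lam)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=np.shape(lam))
    return lam * (-np.log1p(-u)) ** (1.0 / k)
```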
However, we note that the Weibull density carries the factor \(e^{- (x / \lambda)^{k}}\) , whereas the Gamma density has no such additional power of \(k\) in its exponential term. Since \(k\) appears inside the exponential, small changes in \(k\) can cause significant differences that hinder the optimization. Approximation with softmax Gaussian distribution. As in MacKay (1998); Srivastava & Sutton (2017), a Dirichlet distribution can be approximated by a softmax Gaussian distribution via the softmax Laplace approximation. The relation between the Dirichlet parameter \(\alpha\) and the Gaussian parameters \(\mu\) , \(\Sigma\) is as follows: \[\mu_{k} = \log \alpha_{k} - \frac{1}{K}\sum_{i}\log \alpha_{i}, \Sigma_{k} = \frac{1}{\alpha_{k}}\Big(1 - \frac{2}{K}\Big) + \frac{1}{K^{2}}\sum_{i}\frac{1}{\alpha_{i}}, \quad (5)\] where \(\Sigma\) is assumed to be a diagonal matrix, and the reparametrization trick of the usual GVAE serves as the SGVB estimator. ## 3 MODEL DESCRIPTION Along with the inverse Gamma CDF approximation, we describe two sub-models in this section: the generative sub-model and the inference sub-model. Figure 2 shows the graphical notations of various VAEs and the neural network view of our model. ![](images/3_0.jpg) <center>Figure 2: Sub-figures 2a, 2b, and 2c are the graphical notations of the VAEs as latent variable models. The solid lines indicate the generative sub-models, and the wavy lines denote a prior distribution of the latent variables. The dotted lines indicate the inference sub-models. Sub-figure 2d denotes a neural network structure corresponding to Sub-figure 2c. Red nodes denote the random nodes, which allow backpropagation to flow to the input. </center> Generative sub-model. The key difference between the generative models of the DirVAE and the GVAE is the prior distribution assumed for the latent variable \(\mathbf{z}\) . Instead of the standard Gaussian distribution, we use the Dirichlet distribution, which is the conjugate prior of the multinomial distribution. \[\mathbf{z}\sim p(\mathbf{z}) = \mathrm{Dirichlet}(\alpha),\mathbf{x}\sim p_{\theta}(\mathbf{x}|\mathbf{z}) \quad (6)\] Inference sub-model. The probabilistic encoder with the approximate posterior distribution \(q_{\phi}(\mathbf{z}|\mathbf{x})\) is designed to be \(\mathrm{Dirichlet}(\hat{\alpha})\) . The approximate posterior parameter \(\hat{\alpha}\) is produced by an MLP from the observation \(\mathbf{x}\) with a softplus output function, so that the outputs are positive, as the Dirichlet distribution requires. Here, we do not sample \(\mathbf{z}\) directly from the Dirichlet distribution. Instead, we use the Gamma composition method described in Section 2.2: we first draw \(\mathbf{v} \sim \mathrm{MultiGamma}(\hat{\alpha}, \beta \cdot \mathbf{1}_{K})\) , and then normalize \(\mathbf{v}\) by its sum \(\sum v_{i}\) . <--- Page Split ---> The objective function for optimizing the model parameters, \(\theta\) and \(\phi\) , is composed of Equations (1) and (3); Equation (7) is the resulting loss function after the composition. The inverse Gamma CDF method explained in the next paragraph enables the backpropagation flow to the input with the stochastic gradient method.
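To make this sampling pipeline concrete, here is a minimal numpy/scipy sketch (function names are ours) of the reparametrized Dirichlet draw, using the inverse Gamma CDF approximation detailed in the next paragraph, together with the closed-form KL term of Equation (3):

```python
import numpy as np
from scipy.special import gammaln, digamma

def sample_dirichlet(alpha_hat, beta=1.0, rng=None):
    """Draw z ~ Dirichlet(alpha_hat) via the Gamma composition of Sec. 2.2,
    with each Gamma sampled through the approximate inverse CDF
    F^{-1}(u; a, b) ~ b^{-1} (u * a * Gamma(a))^{1/a}, u ~ Uniform(0, 1)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=alpha_hat.shape)
    # (u * a * Gamma(a))^{1/a}, computed in the log domain for stability.
    v = np.exp((np.log(u) + np.log(alpha_hat) + gammaln(alpha_hat)) / alpha_hat) / beta
    return v / v.sum(axis=-1, keepdims=True)

def kl_multigamma(alpha_hat, alpha):
    """Closed-form KL(Q||P) of Eq. (3) between MultiGamma(alpha_hat, beta)
    and MultiGamma(alpha, beta); the shared rate beta cancels out."""
    return (gammaln(alpha).sum(-1) - gammaln(alpha_hat).sum(-1)
            + ((alpha_hat - alpha) * digamma(alpha_hat)).sum(-1))
```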
Here, for a fair comparison between the inverse Gamma CDF approximation method and the softmax Gaussian method in expressing the Dirichlet distribution, we set \(\alpha_{k} = 1 - 1 / K\) , which corresponds to \(\mu_{k} = 0\) and \(\Sigma_{k} = 1\) by Equation (5); and \(\beta = 1\) . \[\mathcal{L}(\mathbf{x}) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - \Big(\sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\psi (\hat{\alpha}_{k})\Big) \quad (7)\] Approximation with inverse Gamma CDF. A previous work, Knowles (2015), suggested that if \(X \sim \mathrm{Gamma}(\alpha , \beta)\) and \(F(x; \alpha , \beta)\) is the CDF of the random variable \(X\) , the inverse CDF can be approximated as \(F^{- 1}(u; \alpha , \beta) \approx \beta^{- 1}(u\alpha \Gamma (\alpha))^{1 / \alpha}\) . Hence, we can introduce an auxiliary variable \(u \sim \mathrm{Uniform}(0, 1)\) to take over all the randomness of \(X\) , and we treat the Gamma sample \(X\) as a deterministic function of \(\alpha\) and \(\beta\) . It should be noted that the combination of decomposing a Dirichlet distribution and approximating each Gamma component with the inverse Gamma CDF has been used before. However, such practices have not been examined in terms of their learning properties and applicability. The following section shows a new aspect of component collapsing that can be remedied by applying this combination to the Dirichlet prior in a VAE, and it illustrates the performance gains in a certain set of applications, i.e. topic modeling. ## 4 EXPERIMENTAL RESULTS This section reports the experimental results with the following experiment settings: 1) a pure VAE model; 2) a semi-supervised classification task with VAEs; 3) a supervised classification task with VAEs; and 4) topic models with DirVAE augmentations. ### 4.1 EXPERIMENTS FOR REPRESENTATION LEARNING OF VAES Baseline models. We select the following models as baseline alternatives to the DirVAE: 1) the standard GVAE; 2) the GVAE with softmax (GVAE-Softmax), approximating the Dirichlet distribution with the softmax Gaussian distribution; 3) the SBVAE with the Kumaraswamy distribution (SBVAE-Kuma) and with the Gamma composition (SBVAE-Gamma), described in Nalisnick & Smyth (2017); and 4) the DirVAE with the Weibull distribution (DirVAE-Weibull), approximating the Gamma distribution with the Weibull distribution as described in Zhang et al. (2018). We use the following benchmark datasets for the experiments: 1) MNIST; 2) MNIST with rotations (MNIST+rot); 3) OMNIGLOT; and 4) SVHN with PCA transformation. We provide the details on the datasets in Appendix D.1. Experimental setting. As a pure VAE model, we compare the DirVAE with the following models: GVAE, GVAE-Softmax, SBVAE-Kuma, SBVAE-Gamma, and DirVAE-Weibull. We use 50-dimensional and 100-dimensional latent variables for MNIST and OMNIGLOT, respectively. We provide the details of the network structure and optimization in Appendix D.2. We set \(\alpha = 0.98 \cdot 1_{50}\) for MNIST and \(\alpha = 0.99 \cdot 1_{100}\) for OMNIGLOT for the fair comparison to GVAEs by using Equation (5). All experiments use the Adam optimizer (Kingma & Ba, 2014a) for the parameter learning. Finally, we acknowledge that the hyper-parameter could be updated as described in Appendix C, and the experiment result with this update is separately reported in Appendix D.2. Quantitative result.
For the quantitative comparison among the VAEs, we calculated the Monte- Carlo estimation on the marginal negative log- likelihood, the negative ELBO, and the reconstruction loss. The marginal log- likelihood is approximated as \(p(\mathbf{x}) \approx \sum_{i} \frac{p(\mathbf{x}|\mathbf{z}_{i})p(\mathbf{z}_{i})}{q(\mathbf{z}_{i})}\) for single instance \(\mathbf{x}\) where \(q(\mathbf{z})\) is a posterior distribution of a prior distribution \(p(\mathbf{z})\) , which is further derived in Appendix A. Table 1 shows the overall performance of the alternative VAEs. The DirVAE outperforms all baselines in both datasets from the log- likelihood perspective. The value of DirVAE comes from the better encoding of the latent variables that can be used for classification tasks which we examine in the next experiments. While the DirVAE- Weibull follows the prior modeling with the Dirichlet distribution, the Weibull based approximation can be improved by adopting the proposed approach with the inverse Gamma CDF. <--- Page Split ---> Table 1: Negative log-likelihood, negative ELBO, and reconstruction loss of the VAEs for MNIST and OMNIGLOT dataset. The lower values are the better for all measures. <table><tr><td rowspan="2"></td><td colspan="3">MNIST (K = 50)</td><td colspan="3">OMNIGLOT (K = 100)</td></tr><tr><td>Neg. LL</td><td>Neg. ELBO</td><td>Reconst. Loss</td><td>Neg. LL</td><td>Neg. ELBO</td><td>Reconst. Loss</td></tr><tr><td>GVAE (Nalisnick &amp;amp; Smyth, 2017)</td><td>96.80</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE-Kuma (Nalisnick &amp;amp; Smyth, 2017)</td><td>98.01</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE-Gamma (Nalisnick &amp;amp; Smyth, 2017)</td><td>100.74</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GVAE</td><td>94.54±0.79</td><td>98.58±0.04</td><td>74.31±0.13</td><td>119.29±0.44</td><td>126.42±0.24</td><td>98.90±0.36</td></tr><tr><td>GVAE-Softmax</td><td>98.18±0.61</td><td>103.49±0.16</td><td>79.36±0.82</td><td>130.01±1.16</td><td>139.73±0.81</td><td>123.34±1.45</td></tr><tr><td>SBVAE-Kuma</td><td>99.27±0.48</td><td>102.60±1.81</td><td>83.90±0.82</td><td>130.73±2.17</td><td>132.86±3.03</td><td>119.25±1.00</td></tr><tr><td>SBVAE-Gamma</td><td>102.14±0.60</td><td>135.30±0.24</td><td>113.89±0.25</td><td>128.82±1.82</td><td>149.30±0.82</td><td>136.36±1.53</td></tr><tr><td>DirVAE-Weibull</td><td>114.59±1.15</td><td>183.33±2.96</td><td>150.92±3.70</td><td>140.89±3.21</td><td>198.01±2.46</td><td>145.52±3.15</td></tr><tr><td>DirVAE</td><td>87.64±0.64</td><td>100.47±0.35</td><td>81.50±0.27</td><td>108.24±0.42</td><td>120.06±0.35</td><td>99.78±0.36</td></tr></table> Qualitative result. As a qualitative result, we report the latent dimension- wise reconstructions which are decoder outputs with each one- hot vector in the latent dimension. Figure 3a shows 50 reconstructed images corresponding to each latent dimension from GVAE- Softmax, SBVAE, and DirVAE. We manually ordered the digit- like figures in the ascending order for GVAE- Softmax and DirVAE. We can see that the GVAE- Softmax and the SBVAE have components without significant semantic information, which we will discuss further in Section 4.2, and the DirVAE has interpretable latent dimensions in most of the latent dimensions. Figure 3b also supports the quality of the latent values from DirVAE by visualizing learned latent values through t- SNE (Maaten & Hinton, 2008). 
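For reference, the marginal likelihood estimate used in Table 1 can be sketched as follows (a log-domain version of the Appendix A estimator; the explicit 1/S normalization and the placeholder callables are our own additions):

```python
import numpy as np
from scipy.special import logsumexp

def log_marginal_likelihood(x, sample_z, log_p_x_given_z, log_p_z, log_q_z, S=100):
    """Importance-sampling estimate of log p(x): draw z_i ~ q(z|x) and
    average the ratios p(x|z_i) p(z_i) / q(z_i) in the log domain."""
    zs = [sample_z(x) for _ in range(S)]
    log_w = np.array([log_p_x_given_z(x, z) + log_p_z(z) - log_q_z(z, x) for z in zs])
    return logsumexp(log_w) - np.log(S)
```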
![](images/5_0.jpg) <center>Figure 3: Latent dimension visualization with reconstruction images and t-SNE latent embeddings. </center> ### 4.2 DISCUSSION ON COMPONENT COLLAPSING Decoder weight collapsing, a.k.a. component collapsing. One main issue of the GVAE is component collapsing: a significant number of decoder weights from the latent neurons to the next decoder layer become near-zero. When these weights are near-zero, the values of the latent dimensions lose influence on the decoder, which amounts to inefficient learning for a given network structure. The same issue occurs with the GVAE-Softmax. We rename this component collapsing phenomenon as decoder weight collapsing to specifically identify the source of the collapse. Latent value collapsing. The SBVAE authors claim that it solves the decoder weight collapsing by learning meaningful weights, as shown in Figure 4a. However, we notice that the SBVAE produces near-zero output values, not weight parameters, in many latent dimensions when averaged over many samples from the test dataset. Figure 4b shows the properties of DirVAE and SBVAE from the perspective of latent value collapsing: SBVAE shows many near-zero average means and near-zero average variances, while DirVAE does not. The average Fisher kurtosis and average skewness of DirVAE over the dataset are 5.76 and 2.03, respectively, while SBVAE has 20.85 and 4.35, which indicates that the latent output distribution of SBVAE is more skewed than that of DirVAE. We found that these near-zero latent values prevent learning on the decoder weights, and we introduce this as another type of collapsing problem, latent value collapsing, distinct from decoder weight collapsing. These results mean that SBVAE distributes its non-near-zero latent values sparsely over a few dimensions, while DirVAE samples relatively dense latent values. In other words, DirVAE utilizes the full spectrum of latent dimensions compared to SBVAE, and DirVAE has a better learning capability in the decoder network. Figure 3a supports the argument on latent value collapsing by activating each single latent dimension with a one-hot vector through the decoder. The non-changing latent dimension-wise images of SBVAE show that there were no generation differences between the two differently activated one-hot latent values. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 4: Sub-figure 4a shows that GVAE and GVAE-Softmax have the component collapsing issue, while SBVAE and DirVAE do not. Sub-figure 4b shows that SBVAE has many near-zero output values in the latent dimensions. </center> ### 4.3 APPLICATION 1. EXPERIMENTS OF (SEMI-)SUPERVISED CLASSIFICATION WITH VAES Semi-supervised classification task with VAEs. A previous work demonstrated that the SBVAE outperforms the GVAE in the semi-supervised classification task (Nalisnick & Smyth, 2017). The overall model structure for this semi-supervised classification task uses a VAE with separate random variables \(\mathbf{z}\) and \(\mathbf{y}\) , introduced as the \(M2\) model in the original VAE work (Kingma et al., 2014b). The detailed settings of the semi-supervised classification tasks are enumerated in Appendix D.3. Fundamentally, we applied the same experimental settings to GVAE, SBVAE, and DirVAE in this experiment, as specified by the authors in Nalisnick & Smyth (2017).
Table 2 enumerates the performances of the GVAE, the SBVAE, and the DirVAE; it shows the classification error rates when using \(10\%\) , \(5\%\) , and \(1\%\) of the labeled data for each dataset. In general, the experiment shows that the DirVAE has the best performance of the three alternative VAEs. It should also be noted that the DirVAE improves most on the most complex task, the SVHN dataset. Table 2: The error rate of the semi-supervised classification task using VAEs. <table><tr><td rowspan="2"></td><td colspan="3">MNIST (K = 50)</td><td colspan="3">MNIST+rot (K = 50)</td><td colspan="3">SVHN (K = 50)</td></tr><tr><td>10%</td><td>5%</td><td>1%</td><td>10%</td><td>5%</td><td>1%</td><td>10%</td><td>5%</td><td>1%</td></tr><tr><td>GVAE (Nalisnick & Smyth, 2017)</td><td>3.95±0.15</td><td>4.74±0.43</td><td>11.55±2.28</td><td>21.78±0.71</td><td>27.72±0.69</td><td>38.13±0.95</td><td>36.08±1.49</td><td>48.75±1.47</td><td>69.58±1.64</td></tr><tr><td>SBVAE (Nalisnick & Smyth, 2017)</td><td>4.86±0.14</td><td>5.29±0.39</td><td>7.34±0.47</td><td>11.78±0.39</td><td>14.27±0.58</td><td>27.67±1.39</td><td>32.08±4.00</td><td>37.07±5.22</td><td>61.37±3.60</td></tr><tr><td>DirVAE</td><td>4.60±0.07</td><td>5.05±0.18</td><td>7.00±0.17</td><td>11.18±0.32</td><td>13.53±0.46</td><td>26.20±0.66</td><td>24.81±1.13</td><td>28.45±1.14</td><td>55.99±3.30</td></tr></table> Supervised classification task with latent values of VAEs. We also tested the performance of the supervised classification task with the latent representations learned by the VAEs. We applied the vanilla version of the VAEs to the datasets and classified the latent representations of instances with \(k\) -Nearest Neighbor ( \(k\) NN), one of the simplest classification algorithms. Hence, this experiment better isolates the quality of the representation learning in the classification task. Further experimental details can be found in Appendix D.4. Table 3 enumerates the performances of the experimented VAEs on the MNIST and OMNIGLOT datasets. On both datasets, the DirVAE shows the best performance in reducing the classification error, which we attribute to better representation learning. It should be noted that, to our knowledge, this is the first reported comparison of latent representation learning on VAEs with \(k\) NN for supervised classification on the OMNIGLOT dataset. We identified that classification on OMNIGLOT is difficult, given that the \(k\) NN error rates with the raw original data are as high as \(69.94\%\) , \(69.41\%\) , and \(70.10\%\) . <--- Page Split ---> This high error rate mainly originates from the number of classification categories, which is 50 in our test setting of OMNIGLOT, compared to 10 categories in MNIST. Table 3: The error rate of \(k\) NN with the latent representations of VAEs.
<table><tr><td rowspan="2"></td><td colspan="3">MNIST (K = 50)</td><td colspan="3">OMNIGLOT (K = 100)</td></tr><tr><td>k = 3</td><td>k = 5</td><td>k = 10</td><td>k = 3</td><td>k = 5</td><td>k = 10</td></tr><tr><td>GVAE (Nalisnick et al., 2016)</td><td>28.40</td><td>20.96</td><td>15.33</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE (Nalisnick et al., 2016)</td><td>9.34</td><td>8.65</td><td>8.90</td><td>-</td><td>-</td><td>-</td></tr><tr><td>DLGMM (Nalisnick et al., 2016)</td><td>9.14</td><td>8.38</td><td>8.42</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GVAE</td><td>27.16±0.48</td><td>20.20±0.93</td><td>14.89±0.40</td><td>92.34±0.25</td><td>91.21±0.18</td><td>88.79±0.35</td></tr><tr><td>GVAE-Softmax</td><td>25.08±2.64</td><td>21.79±2.17</td><td>18.75±2.06</td><td>94.76±0.20</td><td>94.22±0.37</td><td>92.98±0.42</td></tr><tr><td>SBVAE</td><td>10.01±0.52</td><td>9.58±0.47</td><td>9.39±0.54</td><td>86.90±0.82</td><td>85.10±0.89</td><td>82.96±0.64</td></tr><tr><td>DirVAE</td><td>5.98±0.06</td><td>5.29±0.06</td><td>5.06±0.06</td><td>76.55±0.23</td><td>73.81±0.29</td><td>70.95±0.29</td></tr><tr><td>Raw Data</td><td>3.00</td><td>3.21</td><td>3.44</td><td>69.94</td><td>69.41</td><td>70.10</td></tr></table> ### 4.4 APPLICATION 2. EXPERIMENTS OF TOPIC MODEL AUGMENTATION WITH DIRVAE One strength of the Dirichlet distribution is that it is a conjugate prior to the multinomial distribution, so it has been widely used in topic modeling, e.g., Latent Dirichlet Allocation (LDA) (Blei et al., 2003). Recently, neural variational topic (or document) models have been suggested, for example, ProdLDA (Srivastava & Sutton, 2017), NVDM (Miao et al., 2016), and GSM (Miao et al., 2017). NVDM used the GVAE, and GSM used the GVAE-Softmax to obtain sum-to-one positive topic vectors. Meanwhile, ProdLDA assumes the prior distribution to be the Dirichlet distribution with the softmax Laplace approximation. To verify the usefulness of the DirVAE, we replace each model's probabilistic encoder with that of the DirVAE. Two popular performance measures in the topic modeling field, perplexity and topic coherence via normalized pointwise mutual information (NPMI) (Lau et al., 2014), are used on the 20Newsgroups and RCV1-v2 datasets. Further details of the experiments can be found in Appendix D.5. Table 4 indicates that the augmentation with DirVAE improves the performance in general. Additionally, the best performer on each measure is always an experiment cell with DirVAE augmentation, except for the perplexity on RCV1-v2, which still remains competitive. Table 4: Topic modeling performances of perplexity and NPMI with DirVAE augmentations.
<table><tr><td rowspan="2"></td><td rowspan="2"></td><td colspan="4">20Newsgroups (K = 50)</td><td colspan="4">RCV1-v2 (K = 100)</td></tr><tr><td>ProdLDA</td><td>NVDM</td><td>GSM</td><td>LDA (Gibbs)</td><td>ProdLDA</td><td>NVDM</td><td>GSM</td><td>LDA (Gibbs)</td></tr><tr><td rowspan="4">Perplexity</td><td>Reported</td><td>1172</td><td>837</td><td>822</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Reproduced</td><td>1219±8.87</td><td>810±2.60</td><td>954±1.22</td><td>1314±18.50</td><td>1190±45.24</td><td>796±6.24</td><td>1386±21.06</td><td>1126±12.66</td></tr><tr><td>Add SBVAE</td><td>1164±2.55</td><td>878±14.21</td><td>980±13.50</td><td>-</td><td>1077±22.57</td><td>1050±12.19</td><td>1670±4.78</td><td>-</td></tr><tr><td>Add DirVAE</td><td>1114±2.30</td><td>752±12.17</td><td>916±1.64</td><td>-</td><td>992±2.19</td><td>809±12.60</td><td>1526±6.11</td><td>-</td></tr><tr><td rowspan="4">NPMI</td><td>Reported</td><td>0.240</td><td>0.186</td><td>0.121</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Reproduced</td><td>0.273±0.019</td><td>0.119±0.003</td><td>0.199±0.006</td><td>0.225±0.002</td><td>0.194±0.005</td><td>0.023±0.002</td><td>0.267±0.019</td><td>0.260±0.006</td></tr><tr><td>Add SBVAE</td><td>0.247±0.015</td><td>0.162±0.007</td><td>0.162±0.006</td><td>-</td><td>0.190±0.006</td><td>0.116±0.016</td><td>0.207±0.004</td><td>-</td></tr><tr><td>Add DirVAE</td><td>0.359±0.026</td><td>0.247±0.010</td><td>0.201±0.003</td><td>-</td><td>0.193±0.004</td><td>0.131±0.015</td><td>0.308±0.005</td><td>-</td></tr></table> ## 5 CONCLUSION Recent advances in VAEs have made them one of the cornerstones of DGMs. VAEs infer the parameters of explicitly specified latent variables, so they are easily embedded in conventional PGMs. While this merit has motivated diverse combinations of VAEs and graphical models, we question the suitability of the GVAE for the many models whose latent values are categorical probabilities. The softmax function cannot reproduce the multi-modal distributions that the Dirichlet distribution can. Recognizing this problem, some previous works approximated the Dirichlet distribution in the VAE setting by utilizing the Weibull distribution or the softmax Gaussian distribution, but the DirVAE with the inverse Gamma CDF shows better learning performance in our representation experiments: semi-supervised and supervised classification, and topic modeling. Moreover, DirVAE shows no component collapsing, which leads to better latent representations and performance gains. Given the ubiquity of the conjugate relation between the multinomial and the Dirichlet distributions, the proposed DirVAE can serve as a building block for constructing complex probabilistic models with neural networks. <--- Page Split ---> ## APPENDIX This is an appendix for the Dirichlet Variational Autoencoder. Here, we describe the derivations of key equations and the experimental setting details used in the body of the paper. Detailed information such as model names, parameter names, and experiment assumptions follows the main paper. ## A MONTE-CARLO ESTIMATION ON THE MARGINAL LIKELIHOOD Proposition A.1.
The marginal log- likelihood is approximated as \(p(\mathbf{x}) \approx \sum_{i} \frac{p(\mathbf{x}|\mathbf{z}_{i}) p(\mathbf{z}_{i})}{q(\mathbf{z}_{i})}\) , where \(q(\mathbf{z})\) is a posterior distribution of a prior distribution \(p(\mathbf{z})\) . Proof. \[p(\mathbf{x}) = \int_{\mathbf{z}} p(\mathbf{x},\mathbf{z}) d\mathbf{z} = \int_{\mathbf{z}} p(\mathbf{x},\mathbf{z}) \frac{q(\mathbf{z})}{q(\mathbf{z})} d\mathbf{z}\] \[= \int_{\mathbf{z}} p(\mathbf{x}|\mathbf{z}) p(\mathbf{z}) \frac{q(\mathbf{z})}{q(\mathbf{z})} d\mathbf{z} = \int_{\mathbf{z}} \frac{p(\mathbf{x}|\mathbf{z}) p(\mathbf{z})}{q(\mathbf{z})} q(\mathbf{z}) d\mathbf{z}\] \[\approx \sum_{i} \frac{p(\mathbf{x}|\mathbf{z}_{i}) p(\mathbf{z}_{i})}{q(\mathbf{z}_{i})} \mathrm{where} \mathbf{z}_{i} \sim q(\mathbf{z})\] ## B KL DIVERGENCE OF TWO MULTI-GAMMA DISTRIBUTIONS Proposition B.1. Define \(\mathbf{X} = (X_{1}, \dots , X_{K}) \sim \text{MultiGamma} (\alpha , \beta \cdot \mathbf{1}_{K})\) as a vector of \(K\) independent Gamma random variables \(X_{k} \sim \text{Gamma} (\alpha_{k}, \beta)\) where \(\alpha_{k}, \beta > 0\) for \(k = 1, \dots , K\) . The KL divergence between two MultiGamma distributions \(P = \text{MultiGamma} (\alpha , \beta \cdot \mathbf{1}_{K})\) and \(Q = \text{MultiGamma} (\hat{\alpha}, \beta \cdot \mathbf{1}_{K})\) can be derived as the following: \[\begin{array}{r}{K L(Q||P) = \sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\psi (\hat{\alpha}_{k}),} \end{array} \quad (8)\] where \(\psi\) is a digamma function. Proof. Note that the derivative of a Gamma- like function \(\frac{\Gamma(\alpha)}{\beta^{\alpha}}\) can be derived as follows: \[\frac{d}{d\alpha} \frac{\Gamma(\alpha)}{\beta^{\alpha}} = \beta^{-\alpha} (\Gamma'(\alpha) - \Gamma(\alpha) \log \beta) = \int_{0}^{\infty} x^{\alpha -1} e^{-\beta x} \log x dx.\] Then, we have the following. 
\[\mathrm{KL}(Q||P) = \int_{\mathcal{D}}q(\mathbf{x})\log \frac{q(\mathbf{x})}{p(\mathbf{x})} d\mathbf{x}\] \[= \int_{0}^{\infty}\dots \int_{0}^{\infty}\prod_{k}\mathrm{Gamma}(\hat{\alpha}_{k},\beta)\log \frac{\beta^{\sum\hat{\alpha}_{k}}\prod \Gamma^{-1}(\hat{\alpha}_{k})e^{-\beta\sum x_{k}}\prod x_{k}^{\hat{\alpha}_{k} - 1}}{\beta^{\sum\alpha_{k}}\prod \Gamma^{-1}(\alpha_{k})e^{-\beta\sum x_{k}}\prod x_{k}^{\alpha_{k} - 1}} d\mathbf{x}\] \[= \int_{0}^{\infty}\dots \int_{0}^{\infty}\prod_{k}\mathrm{Gamma}(\hat{\alpha}_{k},\beta)\] \[\qquad \times \left[\sum (\hat{\alpha}_{k} - \alpha_{k})\log \beta +\sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\log x_{k}\right]d\mathbf{x}\] \[= \left[\sum (\hat{\alpha}_{k} - \alpha_{k})\log \beta +\sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k})\right]\] \[\qquad +\int_{0}^{\infty}\dots \int_{0}^{\infty}\frac{\beta^{\sum\hat{\alpha}_{k}}}{\prod \Gamma (\hat{\alpha}_{k})} e^{-\beta \sum x_{k}}\prod x_{k}^{\hat{\alpha}_{k} - 1}(\sum (\hat{\alpha}_{k} - \alpha_{k})\log x_{k}) d\mathbf{x}\] <--- Page Split ---> \[\begin{array}{r l} & {= \Big[\sum (\hat{\alpha}_{k} - \alpha_{k})\log \beta +\sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k})\Big]}\\ & {\quad +\sum (\hat{\alpha}_{k} - \alpha_{k})\beta^{\hat{\alpha}_{k}}\Gamma^{-1}(\hat{\alpha}_{k})\beta^{-\hat{\alpha}_{k}}\big(\Gamma^{\prime}(\hat{\alpha}_{k}) - \Gamma (\hat{\alpha}_{k})\log \beta \big)}\\ & {= \sum (\hat{\alpha}_{k} - \alpha_{k})\log \beta +\sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})(\psi (\hat{\alpha}_{k}) - \log \beta)}\\ & {= \sum \log \Gamma (\alpha_{k}) - \sum \log \Gamma (\hat{\alpha}_{k}) + \sum (\hat{\alpha}_{k} - \alpha_{k})\psi (\hat{\alpha}_{k})} \end{array}\] ## C HYPER-PARAMETER \(\alpha\) LEARNING STRATEGY In this section, we introduce the method of moments estimator (MME) to update the Dirichlet prior parameter \(\alpha\) . Suppose we have a set of sum-to-one proportions \(\mathcal{D} = \{\mathbf{p}_{1},\dots ,\mathbf{p}_{N}\}\) sampled from Dirichlet( \(\alpha\) ); then the MME update rule is the following: \[\alpha_{k}\leftarrow \frac{S}{N}\sum_{n}p_{n,k}\mathrm{~where~}S = \frac{1}{K}\sum_{k}\frac{\tilde{\mu}_{1,k} - \tilde{\mu}_{2,k}}{\tilde{\mu}_{2,k} - \tilde{\mu}_{1,k}^{2}}\mathrm{~for~}\tilde{\mu}_{j,k} = \frac{1}{N}\sum_{n}p_{n,k}^{j}. \quad (9)\] After the burn-in period for stabilizing the neural network parameters, we use the MME for the hyper-parameter learning using the sampled latent values during training. We alternately update the neural network parameters and the hyper-parameter \(\alpha\) . We choose this estimator because of its closed-form nature and consistency (Minka, 2000). The usefulness of the hyper-parameter update can be found in Appendix D.2. Proposition C.1. Given a proportion set \(\mathcal{D} = \{\mathbf{p}_{1},\dots ,\mathbf{p}_{N}\}\) sampled from Dirichlet( \(\alpha\) ), the MME of the hyper-parameter \(\alpha\) is the following: \[\alpha_{k}\leftarrow \frac{S}{N}\sum_{n}p_{n,k}\mathrm{~where~}S = \frac{1}{K}\sum_{k}\frac{\tilde{\mu}_{1,k} - \tilde{\mu}_{2,k}}{\tilde{\mu}_{2,k} - \tilde{\mu}_{1,k}^{2}}\mathrm{~for~}\tilde{\mu}_{j,k} = \frac{1}{N}\sum_{n}p_{n,k}^{j}.\] Proof. Define \(\mu_{j,k} = \mathbb{E}[p_{k}^{j}]\) as the \(j^{\mathrm{th}}\) moment of the \(k^{\mathrm{th}}\) dimension of the Dirichlet distribution with prior \(\alpha\) .
Then, by the law of large numbers, \(\mu_{j,k}\approx \tilde{\mu}_{j,k}\) . It can be easily shown that \(\mu_{1,k} = \frac{\alpha_{k}}{\sum_{i}\alpha_{i}}\) and \(\mu_{2,k} = \frac{\alpha_{k}}{\sum_{i}\alpha_{i}}\frac{1 + \alpha_{k}}{1 + \sum_{i}\alpha_{i}} = \mu_{1,k}\frac{1 + \alpha_{k}}{1 + \sum_{i}\alpha_{i}}\) so that \[\mathrm{numerator}\Big(\frac{\mu_{1,k} - \mu_{2,k}}{\mu_{2,k} - \mu_{1,k}^{2}}\Big) = \frac{\alpha_{k}}{\sum_{i}\alpha_{i}} -\frac{\alpha_{k}}{\sum_{i}\alpha_{i}}\frac{1 + \alpha_{k}}{1 + \sum_{i}\alpha_{i}}\] \[\qquad = \frac{\alpha_{k}(\sum_{i\neq k}\alpha_{i})}{(\sum_{i}\alpha_{i})(1 + \sum_{i}\alpha_{i})}\] \[\mathrm{denominator}\Big(\frac{\mu_{1,k} - \mu_{2,k}}{\mu_{2,k} - \mu_{1,k}^{2}}\Big) = \frac{\alpha_{k}}{\sum_{i}\alpha_{i}}\frac{1 + \alpha_{k}}{1 + \sum_{i}\alpha_{i}} -\Big(\frac{\alpha_{k}}{\sum_{i}\alpha_{i}}\Big)^{2}\] \[\qquad = \frac{\alpha_{k}(\sum_{i\neq k}\alpha_{i})}{(\sum_{i}\alpha_{i})^{2}(1 + \sum_{i}\alpha_{i})}\] holds for each \(k = 1,\dots ,K\) . Therefore, \[\sum_{i}\alpha_{i} = \frac{\mu_{1,k} - \mu_{2,k}}{\mu_{2,k} - \mu_{1,k}^{2}}\approx \frac{1}{K}\sum_{k}\frac{\mu_{1,k} - \mu_{2,k}}{\mu_{2,k} - \mu_{1,k}^{2}}\approx \frac{1}{K}\sum_{k}\frac{\tilde{\mu}_{1,k} - \tilde{\mu}_{2,k}}{\tilde{\mu}_{2,k} - \tilde{\mu}_{1,k}^{2}}\] and hence, \[\hat{\alpha}_{k} = (\sum_{i}\alpha_{i})\tilde{\mu}_{1,k} = \frac{S}{N}\sum_{n}p_{n,k}.\] <--- Page Split ---> ## D EXPERIMENTAL SETTINGS In this section, we support Section 4 of the original paper with more detailed experimental settings. Our Tensorflow implementation is available at https://TO_BE_RELEASED. ## D.1 DATASET DESCRIPTION We use the following benchmark datasets for the experiments in the original paper: 1) MNIST; 2) MNIST with rotations (MNIST+rot); 3) OMNIGLOT; and 4) SVHN with PCA transformation. MNIST (LeCun et al., 1998) is a hand-written digit image dataset of size \(28 \times 28\) with 10 labels, consisting of 60,000 training data and 10,000 testing data. MNIST+rot, reproduced by the authors of Nalisnick & Smyth (2017), consists of MNIST and rotated MNIST<sup>1</sup>. OMNIGLOT<sup>2</sup> (Lake et al., 2013; Sinderby et al., 2016) is another hand-written image dataset of characters with \(28 \times 28\) size and 50 labels, consisting of 24,345 training data and 8,070 testing data. SVHN<sup>3</sup> is a Street View House Numbers image dataset with dimension reduction by PCA into 500 dimensions (Nalisnick & Smyth, 2017). ## D.2 REPRESENTATION LEARNING OF VAES We divided the datasets into {train, valid, test} as the following: MNIST = {45,000 : 5,000 : 10,000} and OMNIGLOT = {22,095 : 2,250 : 8,070}. For MNIST, we use 50-dimensional latent variables with two 500-dimensional hidden layers in the encoder and one 500-dimensional hidden layer in the decoder. We set \(\alpha = 0.98\cdot \mathbf{1}_{50}\) for the fair comparison to GVAEs using Equation (5). The batch size was set to 100. For OMNIGLOT, we use 100-dimensional latent variables with two 500-dimensional hidden layers in the encoder and one 500-dimensional hidden layer in the decoder. We assume \(\alpha = 0.99\cdot \mathbf{1}_{100}\) for the fair comparison to the GVAEs using Equation (5). The batch size was set to 15.
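Putting the layer sizes above together, a minimal tf.keras sketch of the MNIST configuration follows (the sigmoid Bernoulli-style output head and the flattened 784-dimensional input are our assumptions; Keras's default glorot_uniform initializer matches the Xavier initialization mentioned next):

```python
import tensorflow as tf

K = 50  # MNIST latent dimension

# Encoder: two 500-unit ReLU layers; softplus keeps alpha_hat positive.
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(500, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(500, activation="relu"),
    tf.keras.layers.Dense(K, activation="softplus"),  # posterior alpha_hat
])

# Decoder: one 500-unit ReLU layer mapping a simplex-valued z back to pixels.
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(500, activation="relu", input_shape=(K,)),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
```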
For both datasets, the gradient clipping is used; ReLU function (Nair & Hinton, 2010) is used as an activation function in hidden layers; Xavier initialization (Glorot & Bengio, 2010) is used for the neural network parameter initialization; and the Adam optimizer (Kingma & Ba, 2014a) is used as an optimizer with learning rate \(5\mathrm{e} - 4\) for all VAEs except \(3\mathrm{e} - 4\) for the SBVAEs. The prior assumptions for each VAE is the following: 1) \(\mathcal{N}(0,\mathbf{I})\) for the GVAE and the GVAE- Softmax; 2) GEM(5) for the SBVAEs; and 3) Dirichlet(0.98 \(\cdot \mathbf{1}_{50}\) ) (MNIST) and Dirichlet(0.99 \(\cdot \mathbf{1}_{100}\) ) (OMNIGLOT) for the DirVAE- Weibull. Finally, to compute the marginal log- likelihood, we used 100 samples for each 1,000 randomly selected from the test data. We add the result of VAE with 20 normalizing flows (GVAE- NF20) (Rezende & Mohamed, 2015) as a baseline in Table 5. Also, latent dimension- wise decoder weight norm and t- SNE visualization on latent embeddings of MNIST is given in Figure 5a and 5b which correspond to Figure 4a and 3, respectively. Additionally, DirVAE- Learning use the same \(\alpha\) for the initial value, but the DirVAE- Learning optimizes hyper- parameter \(\alpha\) by the following stages through the learning iterations using the MME method in Appendix C: 1) the burn- in period for stabilizing the neural network parameters; 2) the alternative update period for the neural network parameters and \(\alpha\) ; and 3) the update period for the neural network parameters with the fixed learned hyper- parameter \(\alpha\) . Table 5 shows that there are improvements in the marginal log- likelihood, ELBO, and reconstruction loss with DirVAE- Learning in both datasets. We also give the learned hyper- parameter \(\alpha\) in Figure 6. ## D.3 SEMI-SUPERVISED CLASSIFICATION TASK WITH VAES The overall model structure for this semi- supervised classification task uses a VAE with a separate random variable of \(\mathbf{z}\) and \(\mathbf{y}\) , which is introduced as the \(M2\) model in the original VAE work (Kingma et al., 2014b). However, the same task with the SBVAE uses a different model modified to ignore <--- Page Split ---> Table 5: Negative log-likelihood, negative ELBO, and reconstruction loss of the VAEs for MNIST and OMNIGLOT dataset. The lower values are the better for all measures. <table><tr><td rowspan="2"></td><td colspan="4">MNIST (K = 50)</td><td colspan="4">OMNIGLOT (K = 100)</td></tr><tr><td>Neg. LL</td><td>Neg. ELBO</td><td>Reconst. Loss</td><td>Neg. LL</td><td>Neg. ELBO</td><td>Reconst. 
Loss</td><td></td><td></td></tr><tr><td>GVAE (Nalisnick &amp;amp; Smyth, 2017)</td><td>96.80</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE-Kuma (Nalisnick &amp;amp; Smyth, 2017)</td><td>98.01</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE-Gamma (Nalisnick &amp;amp; Smyth, 2017)</td><td>100.74</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GVAE</td><td>94.54±0.79</td><td>98.58±0.04</td><td>74.31±0.13</td><td>119.29±0.44</td><td>126.42±0.24</td><td>98.90±0.36</td><td></td><td></td></tr><tr><td>GVAE-Softmax</td><td>98.18±0.61</td><td>103.49±0.16</td><td>79.36±0.82</td><td>130.01±1.16</td><td>139.73±0.81</td><td>123.34±1.43</td><td></td><td></td></tr><tr><td>GVAE-NFv2</td><td>95.87±0.64</td><td>113.14±0.47</td><td>90.09±1.19</td><td>113.51±1.29</td><td>129.82±0.64</td><td>108.86±1.19</td><td></td><td></td></tr><tr><td>SBVAE-Kuma</td><td>99.27±0.48</td><td>102.60±1.81</td><td>85.90±0.82</td><td>130.73±2.17</td><td>132.80±0.63</td><td>110.25±1.00</td><td></td><td></td></tr><tr><td>SBVAE-Gamma</td><td>102.14±0.69</td><td>135.30±0.24</td><td>113.89±0.25</td><td>128.82±1.82</td><td>149.30±0.82</td><td>136.36±1.53</td><td></td><td></td></tr><tr><td>DirVAE-Weibull</td><td>114.94±1.13</td><td>153.32±2.96</td><td>150.92±3.70</td><td>140.89±3.21</td><td>198.01±2.46</td><td>145.52±3.13</td><td></td><td></td></tr><tr><td>DirVAE</td><td>87.64±0.64</td><td>100.47±0.35</td><td>81.50±0.27</td><td>108.24±0.42</td><td>120.06±0.35</td><td>99.78±0.36</td><td></td><td></td></tr><tr><td>DirVAE-Learning</td><td>84.42±0.53</td><td>99.88±0.40</td><td>80.73±0.31</td><td>100.01±0.52</td><td>119.73±0.31</td><td>99.55±0.32</td><td></td><td></td></tr></table> ![](images/13_0.jpg) <center>Figure 5: Decoder weight collapsing and t-SNE latent embeddings visualization of GVAE-NF20 on MNIST. </center> the relation between the class label variable \(\mathbf{y}\) and the latent variable \(\mathbf{z}\) , but they still share the same parent nodes: \(q_{\phi}(\mathbf{z},\mathbf{y}|\mathbf{x}) = q_{\phi}(\mathbf{z}|\mathbf{x})q_{\phi}(\mathbf{y}|\mathbf{x})\) where \(q_{\phi}(\mathbf{y}|\mathbf{x})\) is a discriminative network for the unseen labels. We follow the structure of SBVAE. Finally, the below are the objective functions to optimize for the labeled and the unlabeled instances of the semi- supervised classification task, respectively: \[\log p(\mathbf{x},\mathbf{y})\geq \mathcal{L}_{\mathrm{labeled}}(\mathbf{x},\mathbf{y}) = \mathbb{E}_{q_{\phi (\mathbf{z}|\mathbf{x})}}[\log p_{\theta}(\mathbf{x}|\mathbf{z},\mathbf{y})] - \mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z})) + \log q_{\phi}(\mathbf{y}|\mathbf{x}), \quad (10)\] \[\log p(\mathbf{x})\geq \mathcal{L}_{\mathrm{unlabeled}}(\mathbf{x}) = \mathbb{E}_{q_{\phi (\mathbf{z},\mathbf{y}|\mathbf{x})}}[\log p_{\theta}(\mathbf{x}|\mathbf{z},\mathbf{y}) + \mathbb{H}(q_{\phi}(\mathbf{y}|\mathbf{x}))] - \mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z}))~. \quad (11)\] In the above, \(\mathbb{H}\) is an entropy function. The actual training on the semi- supervised learning optimizes the weighted sum of Equation (10) and (11) with a ratio hyper- parameter \(0< \lambda < 1\) . The datasets are divided into {train, valid, test} as the following: \(\mathrm{MNIST} = \{45,000:5,000:10,000\}\) , \(\mathrm{MNIST} + \mathrm{rot} = \{70,000:10,000:20,000\}\) , and \(\mathrm{SVHN} = \{65,000:8,257:26,032\}\) . 
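Returning to the objective, the exact weighting convention of the "weighted sum" above is not spelled out in the text; the convex combination below is one plausible reading (the callables are placeholders of ours for the model's two bounds):

```python
def semi_supervised_objective(labeled, unlabeled, elbo_labeled, elbo_unlabeled,
                              lam=0.375):
    """Weighted mix of the labeled bound (Eq. 10) and the unlabeled bound
    (Eq. 11) with ratio hyper-parameter 0 < lam < 1; maximize this quantity."""
    bound_l = sum(elbo_labeled(x, y) for x, y in labeled)
    bound_u = sum(elbo_unlabeled(x) for x in unlabeled)
    return lam * bound_l + (1.0 - lam) * bound_u
```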
For SVHN, dimension reduction into 500 dimensions by PCA is applied as preprocessing. Fundamentally, we applied the same experimental settings to GVAE, SBVAE and DirVAE in this experiment, as specified by the authors in Nalisnick & Smyth (2017). Specifically, the three VAEs used the same network structures of 1) a hidden layer of 500 dimension for MNIST; and 2) four hidden layers of 500 dimensions for MNIST+rot and SVHN with the residual network for the last three hidden layers. The latent variables have 50 dimensions for all settings. The ratio parameter \(\lambda\) is set to be 0.375 for the MNISTS, and 0.45 for SVHN. ReLU function is used as an activation function in hidden layers, and the neural network parameters were initialized by sampling from \(\mathcal{N}(0,0.001)\) . The Adam optimizer is used with learning rate \(3e - 4\) and the batch size was set to be 100. Finally, the DirVAE sets \(\alpha = 0.98 \cdot 1_{50}\) by using Equation (5). <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 6: The optimized dimension-wise \(\alpha\) values from DirVAE-Learning with MNIST. </center> ## D.4 SUPERVISED CLASSIFICATION TASK WITH LATENT VALUES OF VAES For the supervised classification task on the latent representation of the VAEs, we used exactly the same experimental settings as in D.2. Since DLGMM is basically a Gaussian mixture model with the SBVAE, DLGMM is a more complex model than the VAE alternatives. We only report the authors' result from Nalisnick et al. (2016) for the comparison purposes. Additionally, we omit the comparison with the VaDE (Jiang et al., 2017) because the VaDE is more customized to be a clustering model rather than the ordinary VAEs that we choose as baselines. Table 6: The error rate of \(k\) NN with the latent representations of VAEs. <table><tr><td rowspan="2"></td><td colspan="3">MNIST (K = 50)</td><td colspan="3">OMNIGLOT (K = 100)</td></tr><tr><td>k = 3</td><td>k = 5</td><td>k = 10</td><td>k = 3</td><td>k = 5</td><td>k = 10</td></tr><tr><td>GVAE (Nalisnick et al., 2016)</td><td>28.40</td><td>20.96</td><td>15.33</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBVAE (Nalisnick et al., 2016)</td><td>9.34</td><td>8.65</td><td>8.90</td><td>-</td><td>-</td><td>-</td></tr><tr><td>DLGMM (Nalisnick et al., 2016)</td><td>9.14</td><td>8.38</td><td>8.42</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GVAE</td><td>27.16±0.48</td><td>20.20±0.93</td><td>14.89±0.40</td><td>92.34±0.25</td><td>91.21±0.18</td><td>88.79±0.35</td></tr><tr><td>GVAE-Softmax</td><td>25.68±2.64</td><td>21.79±2.17</td><td>18.75±2.06</td><td>94.76±0.20</td><td>94.22±0.37</td><td>92.98±0.42</td></tr><tr><td>GVAE-NF20</td><td>25.72±1.58</td><td>20.13±1.25</td><td>15.87±0.74</td><td>91.25±0.12</td><td>90.03±0.20</td><td>87.73±0.38</td></tr><tr><td>SBVAE</td><td>10.01±0.52</td><td>9.58±0.47</td><td>9.39±0.54</td><td>86.90±0.82</td><td>85.10±0.89</td><td>82.96±0.64</td></tr><tr><td>DirVAE</td><td>5.98±0.06</td><td>5.29±0.06</td><td>5.06±0.06</td><td>76.55±0.23</td><td>73.81±0.29</td><td>70.95±0.29</td></tr><tr><td>Raw Data</td><td>3.00</td><td>3.21</td><td>3.44</td><td>69.94</td><td>69.41</td><td>70.10</td></tr></table> ## D.5 TOPIC MODEL AUGMENTATION WITH DIRVAE For the topic model augmentation experiment, two popular performance measures in the topic model fields, which are perplexity and topic coherence via normalized pointwise mutual information (NPMI) (Lau et al., 2014), have been used with 20Newsgroups and \(RCV1 - v2\) datasets. 
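For readers unfamiliar with the coherence measure, a small sketch of document-level NPMI follows (our simplified implementation of the standard definition; Lau et al. (2014) estimate the probabilities with a sliding window over an external reference corpus):

```python
import numpy as np

def npmi_coherence(topic_top_words, docs, eps=1e-12):
    """Average NPMI over word pairs from each topic's top words, with
    probabilities estimated as document-level co-occurrence frequencies."""
    vocab = {w for topic in topic_top_words for w in topic}
    doc_sets = [set(d) & vocab for d in docs]   # docs are token lists
    n = len(doc_sets)
    def p(*words):
        return sum(all(w in d for w in words) for d in doc_sets) / n
    scores = []
    for topic in topic_top_words:
        for i in range(len(topic)):
            for j in range(i + 1, len(topic)):
                pij = p(topic[i], topic[j])
                if pij <= 0:
                    scores.append(-1.0)  # convention: never co-occur -> -1
                    continue
                pmi = np.log(pij / (p(topic[i]) * p(topic[j]) + eps))
                denom = -np.log(pij)
                scores.append(pmi / denom if denom > 0 else 0.0)
    return float(np.mean(scores))
```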
20Newsgroups has 11, 258 train data and 7, 487 test data with vocabulary size 1, 995. For the RCV1- v2 dataset, due to the massive size of the whole data, we randomly sampled 20, 000 train data and 10, 000 test data with vocabulary size 10, 000. The lower is better for the perplexity, and the higher is better for the NPMI. The specific model structures can be found in the original papers, Srivastava & Sutton (2017); Miao et al. (2016; 2017). We replace the model prior to that of DirVAE to each model and search the hyper- parameter as Table 7 with 1, 000 randomly selected test data. We use 500- dimension hidden layers and 50 topics for 20Newsgroups, and 1, 000- dimension hidden layers and 100 topics for RCV1- v2. Table 8 shows top- 10 high probability words per topic by activating single latent dimensions in the case of 20Newsgroups. Also, we visualized the latent embeddings of documents by t- SNE in Figure 7.8, and 9. <--- Page Split ---> Table 7: Hyper-parameter selections for DirVAE augmentations. <table><tr><td rowspan="2"></td><td colspan="3">20Newsgroups (K = 50)</td><td colspan="3">RCV1-v2 (K = 100)</td></tr><tr><td>ProdLDA</td><td>NVDM</td><td>GSM</td><td>ProdLDA</td><td>NVDM</td><td>GSM</td></tr><tr><td>Add DirVAE</td><td>0.98 · 1.50</td><td>0.95 · 1.50</td><td>0.20 · 1.50</td><td>0.99 · 1.100</td><td>0.90 · 1.100</td><td>0.01 · 1.100</td></tr></table> Table 8: Sample of learned per topic top-10 high probability words from 20Newsgroups with DirVAE augmentation by activating single latent dimensions. <table><tr><td>Topic 1</td><td>turks turkish armenian genocide village armenia armenians muslims turkey greece</td></tr><tr><td>Topic 2</td><td>doctrine jesus god faith christ scripture belief eternal holy bible</td></tr><tr><td>Topic 3</td><td>season defensive puck playoff coach score flyers nhl team ice</td></tr><tr><td>Topic 4</td><td>pitcher braves hitter coach pen defensive injury roger pitch player</td></tr><tr><td>Topic 5</td><td>ide scsi scsus controller motherboard isa cache mb floppy ram</td></tr><tr><td>Topic 6</td><td>toolkit widget workstation xlib jpeg xt vendor colormap interface pixel</td></tr><tr><td>Topic 7</td><td>spacecraft satellite solar shuttle nasa mission professor lunar orbit rocket</td></tr><tr><td>Topic 8</td><td>knife handgun assault homicide batf criminal gun firearm police apartment</td></tr><tr><td>Topic 9</td><td>enforcement privacy encrypt encryption ripem wiretap rsa cipher cryptography escrow</td></tr><tr><td>Topic 10</td><td>min detroit tor det calgary rangers leafs montreal philadelphia cal<br>(a) DirVAE augmentation to ProdLDA</td></tr><tr><td></td><td>NVDM+DirVAE</td></tr><tr><td>Topic 1</td><td>armenian azerbaijan armenia genocide armenians turkish militia massacre village turks</td></tr><tr><td>Topic 2</td><td>arab arab sirsali palestinian jews soldier turks nazi massacre jew</td></tr><tr><td>Topic 3</td><td>resurrection bible christianity doctrine scripture eternal belief christian faith jesus</td></tr><tr><td>Topic 4</td><td>hitter season braves pitcher baseball pitch game player defensive team</td></tr><tr><td>Topic 5</td><td>directory file compile variable update ftp version site copy host</td></tr><tr><td>Topic 6</td><td>performance speed faster mhz rate clock processor average twice fast</td></tr><tr><td>Topic 7</td><td>windows microsoft driver dos nt graphic vga card virtual upgrade</td></tr><tr><td>Topic 8</td><td>seat gear rear tire honda oil front mile wheel engine</td></tr><tr><td>Topic 9</td><td>patient disease doctor treatment 
symptom medical health hospital pain medicine</td></tr><tr><td>Topic 10</td><td>pt la det tor pit pp vs van cal nj<br>(b) DirVAE augmentation to NVDM</td></tr><tr><td></td><td>GSM+DirVAE</td></tr><tr><td>Topic 1</td><td>turkish armenian armenians people one turkey armenia turks greek history</td></tr><tr><td>Topic 2</td><td>israel israeli jews attack world jewish article arab peace land</td></tr><tr><td>Topic 3</td><td>god jesus christian religion truth believe bible church christ belief</td></tr><tr><td>Topic 4</td><td>team play game hockey nhl score first division go win</td></tr><tr><td>Topic 5</td><td>drive video mac card port pc system modem memory speed</td></tr><tr><td>Topic 6</td><td>image software file version server program system ftp package support</td></tr><tr><td>Topic 7</td><td>space launch orbit earth nasa moon satellite mission project center</td></tr><tr><td>Topic 8</td><td>law state gun government right rights case court police crime</td></tr><tr><td>Topic 9</td><td>price sell new sale offer pay buy good condition money</td></tr><tr><td>Topic 10</td><td>internet mail computer send list fax phone email address information<br>(c) DirVAE augmentation to GSM</td></tr></table> <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 7: 20Newsgroups latent document embedding visualization with t-SNE by replacing the model prior to the Dirichlet. (Left) ProdLDA+DirVAE, (Middle) NVDM+DirVAE, (Right) GSM+DirVAE. </center> ![](images/16_1.jpg) <center>Figure 8: 20Newsgroups latent document embedding visualization with t-SNE by replacing the model prior to the Stick-Breaking. (Left) ProdLDA+SBAVE, (Middle) NVDM+SBAVE, (Right) GSM+SBAVE. </center> ![](images/16_2.jpg) <center>Figure 9: 20Newsgroups latent document embedding visualization with t-SNE of original models. (Left) ProdLDA, (Middle) NVDM, (Right) GSM. </center> <--- Page Split --->
reject
Reject
6
ICLR_2019_paper_0141
iclr
2,019
# LEARNING TO REFER TO 3D OBJECTS WITH NATURAL LANGUAGE

Anonymous authors
Paper under double-blind review

## ABSTRACT

Human world knowledge is both structured and flexible. When people see an object, they represent it not as a pixel array but as a meaningful arrangement of semantic parts. Moreover, when people refer to an object, they provide descriptions that are not merely true but also relevant in the current context. Here, we combine these two observations in order to learn fine-grained correspondences between language and contextually relevant geometric properties of 3D objects. To do this, we employed an interactive communication task with human participants to construct a large dataset containing natural utterances referring to 3D objects from ShapeNet in a wide variety of contexts. Using this dataset, we developed neural listener and speaker models with strong capacity for generalization. By performing targeted lesions of visual and linguistic input, we discovered that the neural listener depends heavily on part-related words and associates these words correctly with the corresponding geometric properties of objects, suggesting that it has learned task-relevant structure linking the two input modalities. We further show that a neural speaker that is 'listener-aware' — that plans its utterances according to how an imagined listener would interpret its words in context — produces more discriminative referring expressions than a 'listener-unaware' speaker, as measured by human performance in identifying the correct object.

## 1 INTRODUCTION

Human world knowledge is both structured and flexible. For example, when people see a chair, they represent it not as a pixel array but as a semantically meaningful combination of parts, such as arms, legs, seat, and back. How to obtain and flexibly deploy such structured knowledge remains an outstanding problem in machine learning (Lake et al., 2017). One promising approach is to harness the rich conceptual and relational structure latent in language (Andreas et al., 2017). Natural languages have been optimized across human history to solve the problem of efficiently communicating those aspects of the world most relevant to current goals (Kirby et al., 2015; Gibson et al., 2017). Consequently, language reflects the structured nature of our world knowledge: we not only conceive of a chair in terms of its semantic parts, but can combine multiple words to refer to its 'curved back' or 'cushioned seat', and provide more informative descriptions if the context requires it, e.g., refer to a different distinguishing part if all the chairs have a cushioned seat.

Our goal is to leverage these insights to develop systems that can make fine-grained distinctions between complex object geometries across a wide variety of contexts. Our approach is to leverage natural language produced by people in an interactive communication task to develop neural network models of the speaker and listener roles in this task. We find that the resulting representations learned by these models exhibit structure that is crucial for robust communication: first, they capture task-relevant correspondences between individual parts of objects and individual tokens of language, and second, they have strong capacity to generalize to novel contexts, objects, utterances, and other related object classes.
We make the following contributions: <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Constructing "close" and "far" contexts by exploiting the latent neighborhood structure of 3D chairs. Orange is a high in-degree seed chair, dark gray its selected distractors in each context. </center>

- We introduce a new multimodal dataset (Chairs In Context) comprised of 4,511 chairs from ShapeNet, organized into 4,054 sets of size 3 (called communication contexts), with 78,789 natural utterances, each utterance intended to distinguish a chair in context.<sup>1</sup>
- By training on this dataset we develop neural listeners and speakers with strong generalization capacity, even on out-of-training classes such as tables.
- We demonstrate that the neural listener learns to prioritize the same geometric information in objects (i.e., properties of individual chair parts) that humans do in solving the communication task, despite never being provided with an explicit decomposition of these objects into parts.
- We show how our listeners can be used to search large collections of unseen objects to retrieve models based on natural language queries, e.g., curved back and fat legs.
- Lastly, we find that a neural speaker that is 'listener-aware' — that plans its utterances according to how an imagined listener would interpret its words in context — produces more discriminative utterances than a 'listener-unaware' speaker, as measured by human performance in identifying the correct object.

## 2 DATASET AND TASK

Our dataset consists of triplets of 3D objects coupled with referential utterances that aim to distinguish one object (the "target") from the remaining two (the "distractors"). To obtain such utterances, we paired participants from Amazon Mechanical Turk to play an online reference game (Hawkins, 2015). On each round of the game, the two players were shown a triplet of objects. The designated target object was privately highlighted for one player (the "speaker"), who was asked to send a message through a chat box such that their partner (the "listener") could successfully select it from the context (see Appendix Fig. 16). To ensure speakers used geometric information rather than color, texture, orientation, or position on the screen, we scrambled the positions of the objects for each participant and used textureless, colorless renders of 3D objects taken from the same viewpoint. Additionally, to ensure communicative interaction was natural, no constraints were placed on the chat box: referring expressions from the speaker were occasionally followed by clarification questions from the listener or other discourse.

A crucial decision in building our dataset concerned the construction of useful contexts that would reliably elicit fine-grained contrastive language. Perceptually identical objects cannot be distinguished with language at all, while wildly different objects (a chair and a car) can be easily distinguished with a single word ("it's a chair"). To solve this problem, we considered three objectives. First, the set of objects must be familiar so we can tap existing visual and linguistic representations. Second, the objects should be complex and variable to provide wide coverage of interesting geometries. Third, different contexts must contain diverse combinations of objects to ensure variation in the relevant distinctions required. To satisfy the first two objectives, we utilize the collection of about 7,000 chairs from ShapeNet (Chang et al., 2015).
This class is geometrically complex, densely sampled, highly diverse, and abundant in the real world. To satisfy the third objective in a scalable and unsupervised manner, we estimated object similarity between different chairs using the Point Cloud-AutoEncoder (PC-AE) from Achlioptas et al. (2018). This representation allowed us to leverage the fact that point clouds extracted from a 3D surface provide an intrinsic 3D representation of an object, oblivious to color or texture. To deal with the inhomogeneity of data in repositories like ShapeNet, we used a sampling strategy to construct our triplets. <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 2: Proposed listener architecture. </center> First, we computed the 2-nearest-neighbor graph of all ShapeNet chairs based on their PC-AE embedding distances. On this graph, we selected chairs of highest in-degree as seeds, and for each seed chair we generated two kinds of triplets: Close contexts sampled nearby chairs, while Far contexts sampled highly dissimilar chairs (see Fig. 1). Additional details of triplet construction are provided in the Appendix (Section 9.1).

In total, we collected a corpus containing 78,789 referring expressions for 4,054 triplets, containing 4,511 unique chairs. In doing so we recruited 2,124 unique participants. Human performance on the reference game was high in general, but listeners made significantly more errors in the close triplets (94.2% vs. 97.2%, \(z = 13.54\), \(p < 0.001\)). Also, significantly longer utterances were used on average to describe targets in close triplets (approximately 8.4 words vs. 6.1, \(t = -35\), \(p < 0.001\)). A wide spectrum of descriptions was elicited, ranging from the more holistic/categorical, common for far triplets (e.g., "the rocking chair"), to more complex, geometric language, common for close triplets (e.g., "thinner legs but without armrests"). 78% of utterances used at least one part word: "back", "legs", "seat", "arms", or closely related synonyms (e.g., "armrests").

## 3 NEURAL LISTENERS

Constructing neural listeners that reason about geometric relationships is a key contribution of our work. It lays the foundation for creating speakers that utter discriminative utterances and enables the creation of an object retrieval system that operates with linguistic queries. Given its importance, below we conduct a detailed comparison between three distinct architectures, highlight the effect of different regularization techniques, and investigate the merits of two different representations of 3D objects for the listening task, namely, images and point clouds. In what follows, we denote the three objects of a communication context as \(O = \{o_{1}, o_{2}, o_{3}\}\), the corresponding word-tokenized utterance, which has at most \(K\) tokens, as \(U = u_{1}, u_{2}, \ldots\), and as \(t \in O\) the referential target.

Our proposed listener is inspired by Monroe et al. (2017). It takes as input a (latent code) vector for each of the three objects in \(O\) and a (latent code) vector for each token of \(U\), and outputs an object-utterance compatibility score \(\mathcal{L}(o_{i}, U) \in [0, 1]\) for each of the three objects. At its core lies a multi-modal LSTM (Hochreiter & Schmidhuber, 1997) that takes as input ("is grounded" with) the vector of a single object, processes the word-sequence \(U\), and is read out by a final MLP to yield a single number (the compatibility score). This is repeated for each \(o_{i}\), sharing all network parameters across the objects.
We then apply a soft-max to the three compatibility scores to yield a distribution over the three objects, and compute a cross-entropy loss between this distribution and the ground-truth indicator vector of the target.

Object encoders We experimented with three object representations to capture the underlying geometries: (a) the bottleneck representation of a pretrained Point Cloud-AutoEncoder (PC-AE), (b) the embedding provided by a convolutional network operating on single-view images of non-textured 3D objects, or (c) a combination of (a) and (b). Specifically, for (a) we use the PC-AE architecture of Achlioptas et al. (2018) trained with single-class point clouds extracted from the surfaces of 3D CAD models, while for (b) we use the activations of the penultimate layer of a VGG-16 (Simonyan & Zisserman, 2014) neural network, pre-trained on ImageNet (Deng et al., 2009) and fine-tuned on an 8-way classification task with images of objects from ShapeNet. For each <--- Page Split ---> representation we project the corresponding code vector to the input space of the LSTM using a fully connected (FC) layer with \(L_{2}\)-norm weight regularization. The addition of these projection-like layers improves the training and convergence of our system. While there are many ways to simultaneously incorporate the two modalities in the LSTM, we found that the best performance resulted when we ground the LSTM with the image code, concatenate the LSTM's final output (after processing \(U\)) with the point cloud code, and finally feed this result into a shallow MLP to produce the final compatibility (see Figure 2 for an overview of this architecture). We note that grounding the LSTM with point clouds and using images towards the end of the pipeline resulted in a significant performance drop (\(\sim 4.8\%\) on average). Also, adding dropout at the input layer of the LSTM, together with \(L_{2}\) weight regularization and dropout at and before the FC projecting layers, was crucial (giving improvements of more than \(10\%\)). The token codes of each sentence were initialized with the GloVe embedding (Pennington et al., 2014) and fine-tuned for the listening task.

Incorporating context information Critically, our proposed listener architecture first scores each object separately and then applies softmax normalization to yield a score distribution over the three objects. In order to evaluate the importance of this design choice, we consider two alternative architectures that incorporate context earlier, at encoding. The first alternative (Separate-Augment) is identical to the proposed architecture, except that it uses a convolutional layer to augment each object's grounding vector with information about the other two objects in context before yielding its score. Specifically, if \(v_{i}\) is the image code vector of the i-th object \((o_{i} \in O)\), to produce the grounding vector for \(o_{i}\), the convolutional layer receives \(f(v_{j}, v_{k})||g(v_{j}, v_{k})||v_{i}\), where \(f, g\) are order-invariant functions, such as average or max-pooling, and \(||\) denotes feature-wise concatenation. The second alternative architecture (At-Once) first feeds the image vectors for all three objects sequentially to the LSTM and then proceeds to process the tokens of \(U\) once, to yield the entire score distribution. Similarly to the proposed architecture, point clouds are incorporated in both alternatives via a separate MLP after the LSTM.
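To make the data flow of the proposed (Separate) listener concrete, here is a minimal PyTorch sketch written against the description above. Layer sizes follow the appendix where known, but the regularization (dropout, \(L_{2}\), label smoothing) is omitted and all names are illustrative; the bilinear word attention it includes is the variant introduced in the next paragraph. This is a sketch under stated assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class Listener(nn.Module):
    """Scores one object against an utterance; the same weights are applied
    to each of the three objects of a context (illustrative sketch)."""

    def __init__(self, img_dim=4096, pc_dim=128, word_dim=100, hidden=100):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, word_dim)   # grounding input step
        self.pc_proj = nn.Linear(pc_dim, hidden)       # concatenated post-LSTM
        self.lstm = nn.LSTM(word_dim, hidden, batch_first=True)
        self.w_att = nn.Parameter(torch.ones(hidden))  # diagonal W_att
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, 100), nn.ReLU(),
                                 nn.Linear(100, 50), nn.ReLU(),
                                 nn.Linear(50, 1))

    def score(self, img_code, pc_code, tokens):
        # tokens: (B, K, word_dim) word embeddings (e.g., fine-tuned GloVe).
        ground = self.img_proj(img_code).unsqueeze(1)   # (B, 1, word_dim)
        out, (h, _) = self.lstm(torch.cat([ground, tokens], dim=1))
        r, h = out[:, 1:], h.squeeze(0)                 # drop grounding step
        # Bilinear word attention: a_i = r_i^T diag(w_att) h.
        a = torch.einsum('bkh,h,bh->bk', r, self.w_att, h).softmax(dim=1)
        summary = (r * a.unsqueeze(-1)).sum(dim=1)      # (B, hidden)
        return self.mlp(torch.cat([summary, self.pc_proj(pc_code)], dim=-1))

def context_logits(listener, objects, tokens):
    # objects: three (img_code, pc_code) pairs. A softmax over the returned
    # (B, 3) logits plus cross-entropy against the target index gives the
    # training loss described above.
    return torch.cat([listener.score(i, p, tokens) for i, p in objects], dim=-1)
```

Because `score` is evaluated independently per object, the same module extends to contexts of any size, which is the flexibility Section 5.2 exploits for retrieval.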
Attention mechanism over words We hypothesize that a listener forced to prioritize a few words in each utterance would learn to prioritize words that express properties that distinguish the target from the distractors (and, thus, perform better). To test this hypothesis, we augment the listener models with a bilinear attention mechanism over words. Specifically, to estimate the "importance" of each text token \(u_{i}\), we compare the output of the LSTM for \(u_{i}\) (denoted as \(r_{i}\)) with the hidden state after the entire utterance has been processed (denoted as \(h\)). The idea is that the hidden state acts as a summary of the grounded sentence (Shen & Lee, 2016) that can be used to assess the relative importance of each word as \(a_{i} \triangleq r_{i}^{T} \times W_{\mathrm{att}} \times h\), where \(W_{\mathrm{att}}\) is a trainable diagonal matrix. With the attention mechanism in place, the final output of the LSTM is defined as \(\sum_{i = 1}^{|U|} r_{i} \odot \hat{a}_{i}\), where \(\hat{a}_{i} = \frac{\exp(a_{i})}{\sum_{j=1}^{|U|} \exp(a_{j})}\) and \(\odot\) is the point-wise product. The optimal parameters of each listener (and speaker), the hyper-parameter search strategy, and the exact details of training are provided in Appendix Sections 9.2 and 9.3.

## 4 NEURAL SPEAKERS

Architecture Our speaker models are inspired by the show-and-tell model (Vinyals et al., 2015) developed for image captioning. Specifically, a speaker is a neural network that receives an image-based code vector per object in \(O\) and learns to generate an utterance \(U\) that refers to the target and which distinguishes it from the distractors. Similarly to the listener model, the main components of the speaker's architecture are an LSTM and a convolutional image network (we do not include point clouds when speaking, to allow for a more easily deployable model). During the first three time steps, the speaker receives sequentially the three image code vectors of a context (projected via an \(L_{2}\)-norm weight-regularized FC) and outputs a vector which is transformed into a logit prediction over our vocabulary via an FC. The softmax-normalized version of the output is compared against the first ground-truth token \((u_{1})\) under the cross-entropy loss. For each remaining token \(u_{i} \in u_{2}, \ldots\), the LSTM is conditioned on the previous \((u_{i - 1})\) ground-truth token and the cross-entropy comparison is repeated (i.e., we do teacher-forcing (Williams & Zipser, 1989)). In all speakers the target vector is fed third, <--- Page Split ---> Table 1: Performance of variants of the proposed listener architecture (image-modality, attention, and context-incorporation alternatives) on different generalization tasks and subpopulations of the test set (chance is \(33\%\); mean accuracy \(\pm 1\) standard error). The bottom table uses the best-performing model from the top table. Averages taken over five random seeds that controlled the data splits and neural-net initializations.
<table><tr><td></td><td>Input-Modality</td><td>Language-Task</td><td>Object-Task</td></tr><tr><td rowspan="3">No Attention</td><td>Point Cloud</td><td>67.6 ± 0.3%</td><td>66.4 ± 0.7%</td></tr><tr><td>Image</td><td>81.2 ± 0.5%</td><td>77.4 ± 0.7%</td></tr><tr><td>Image &amp; Point Cloud</td><td>83.1 ± 0.2%</td><td>78.9 ± 1.0%</td></tr><tr><td rowspan="3">With Word-level Attention</td><td>Point Cloud</td><td>67.4 ± 0.3%</td><td>65.6 ± 1.4%</td></tr><tr><td>Image</td><td>81.7 ± 0.5%</td><td>77.6 ± 0.8%</td></tr><tr><td>Image &amp; Point Cloud</td><td>83.7 ± 0.2%</td><td>79.6 ± 0.8%</td></tr></table> <table><tr><td rowspan="2">Architecture</td><td colspan="4">Subpopulations</td></tr><tr><td>Overall</td><td>Close</td><td>Far</td><td>Sup-Comp</td></tr><tr><td>At-Once</td><td>75.9 ± 0.5%</td><td>67.4 ± 1.0%</td><td>83.8 ± 0.6%</td><td>74.4 ± 1.3%</td></tr><tr><td>Separate-Augment</td><td>79.4 ± 0.8%</td><td>70.1 ± 1.3%</td><td>88.1 ± 0.6%</td><td>75.2 ± 2.1%</td></tr><tr><td>Separate (proposed)</td><td>79.6 ± 0.8%</td><td>69.9 ± 1.3%</td><td>88.1 ± 0.4%</td><td>76.0 ± 1.6%</td></tr></table>

thereby minimizing the length of dependence between the most important input object and the output (Sutskever et al., 2014) and eliminating the need to represent the index of the target separately. To find the best hyper-parameters (\(L_{2}\) weights, dropout rate, and number of LSTM neurons) and the optimal (per validation) epoch, during training we sample synthetic utterances of each model and use a pretrained listener to select the combination with the highest listener accuracy. We found this approach to produce model parameters with a stronger correlation between the training 'progress' and the quality of produced utterances than using listening-unaware metrics like BLEU (Papineni et al., 2002).

Variations In principle, the above speaker can learn to generate language that follows the discriminative characteristics of the referential ground truth. To test the degree to which the distractors are taken into account for this purpose, we experiment with a speaker that is "context-unaware" by construction. This speaker at both training and test time uses the image encoding of the target object only, and is otherwise identical to the above model. Then, motivated by the recursive social reasoning formalized in the Rational Speech Act framework (Goodman & Frank, 2016), we create a listener-aware speaker that plans synthetic utterances according to their capacity to be discriminative, as judged by an "internal" listener. In this case, a speaker's sampled utterance \(U\) is scored as:

\[\mathrm{score}(U) = \beta \log (P_{L}(t|U)) + (1 - \beta) \sum_{i = 1}^{|U|}\frac{\log(P_{S}(u_{i}|O))}{|U|^{\alpha}}, \quad (1)\]

where \(P_{L}\) is the listener's probability of predicting the target (\(t\)) given \(U\), \(u_{i}\) is a token of \(U\), and \(P_{S}\) is the likelihood of the speaker for generating \(U\) given the objects in \(O\). The parameter \(\alpha\) controls a length-penalty term to discourage short sentences (Wu et al., 2016), while \(\beta\) controls the relative importance of the speaker's vs. the listener's opinions.

## 5 LISTENER EXPERIMENTS

We evaluated our listener's generalization performance using two tasks based on different data splits. In the language generalization task, we test on target objects that were seen as targets in at least one context during training, but ensured that all utterances in the test split are from unseen speakers.
In the more challenging object generalization task, we restricted the set of objects that appeared as targets in the test set to be disjoint from those in training, such that all objects and utterances in the test split are unseen. For each of these tasks, we evaluated choices of input modality and word attention, using \([80\%, 10\%, 10\%]\) of the data for training, validation, and test in all experiments. Listener accuracies are shown in Table 1 (top). <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 3: (A) The listener places more attention on adjectives in close (orange) triplets than far (blue) ones. (B) Lesioning words from highest attention to lowest worsens performance more than lesioning random words or lesioning from lowest attention to highest. </center>

First, as expected, all architectures have higher accuracy on the language generalization task (3.2% on average). Second, the attention mechanism on words yields a mild performance boost, as long as images are part of the input. Third, images provide a much better input than point clouds when only one modality is used. In other words, despite being an intrinsic 3D representation, point clouds alone seem to provide a weaker input signal, perhaps due to their relative lack of high-frequency details. Finally, we find large gains in accuracy (4.1% on average) from exploiting the two modalities simultaneously, potentially implying a complementarity between the two representations that the network can exploit.

Next, we evaluate how the different approaches to incorporating context information described in Section 3 affect listener performance. We focus on the more challenging object generalization task, using models that include attention and both object modalities. See Table 1 (bottom) for results. We find that the At-Once architecture, which consumes the entire context with a single (non-weight-shared replica) LSTM, performs significantly worse than both the Separate and Separate-Augment architectures (which use explicit weight sharing across objects), and that the latter two achieve similar performance to each other. It is plausible, however, that our alternative strategies for incorporating context information would yield an advantage in close contexts, where finer distinctions must be made. However, we did not observe differences between the Separate and Separate-Augment variants in either the far or the close subpopulations; we do find that far contexts were easier for all models than close contexts. Surprisingly, we found that the Separate architecture remains competitive against the Separate-Augment architecture even in the subpopulation that includes utterances with superlatives and/or comparatives ("skinniest"/"skinnier"), which made up \(\sim 16\%\) of the test set and make explicit reference to context. Since the Separate architecture is the most flexible (see Section 5.2 for a demonstration of this), and is also simpler than Separate-Augment while performing equally well, we focus on it in the following sections.

### 5.1 EXPLORING LEARNED REPRESENTATIONS

Which aspects of a sentence are more critical for our listener's performance? To inspect the properties of words receiving the most attention, we ran a part-of-speech tagger on our corpus. We found that the highest attention weight is placed on nouns, controlling for the length of the utterance.
However, adjectives that modify nouns received more attention in close contexts (controlling for the average occurrence in each context), where nouns are often not sufficient to disambiguate (see Fig. 3A). To more systematically evaluate the role of higher-attention tokens in listener performance, we conducted an utterance-lesioning experiment. For each utterance in our dataset, we successively replaced words with the <UNK> token according to three schemes: (1) from highest attention to lowest, (2) from lowest attention to highest, and (3) in random order. We then fed these through an equivalent listener trained without attention. We found large differential performance from random in both directions (see Fig. 3B). This ablation result was found across a wide range of utterance lengths. Our word-attentive listener thus appears to rely on context-appropriate content words to successfully disambiguate the referent. Examples demonstrating where the attention is being placed on utterances produced by humans are given in Appendix Fig. 7.

To test the extent to which our listener is relying on the same semantic parts of the object as humans, we conducted a lesion experiment on the visual input rather than the linguistic one. We took the subset of our test set where (1) all chairs had complete part annotations available (Yi et al., 2016) and (2) the corresponding utterance mentioned a single part (17.5% of our test set). We then rendered lesioned versions of all three objects on each trial by removing pixels corresponding to parts,<sup>2</sup> according to two schemes: removing a single part or keeping a single part. We did this either for the mentioned part, or for another part chosen at random. We report listener accuracies on these lesioned contexts in Table 2. <--- Page Split ---> Table 2: Testing the part-awareness of the neural listener by lesioning different parts of the objects. Reported is the average accuracy of the listener under different lesions. <table><tr><td></td><td>Intact Object</td><td>Single Part Lesioned</td><td>Single Part Present</td></tr><tr><td>Mentioned Part</td><td rowspan="2">78.5%</td><td>41.8% ± 0.1</td><td>66.6% ± 0.1</td></tr><tr><td>Random Part</td><td>67.0% ± 0.2</td><td>37.4% ± 0.1</td></tr></table> ![](images/6_0.jpg) <center>Figure 4: Top-scoring retrieved results in collections of unseen objects with natural-language queries. Bottom two rows include out-of-class examples from collections of lamps, sofas and tables. </center>

We found that removing random parts hurts the accuracy by \(11\%\) on average, but removing the mentioned part dropped accuracy more than three times as much, nearly to chance. Conversely, keeping only the mentioned part while lesioning the rest of the image merely drops accuracy by \(11.9\%\), while keeping a non-mentioned (random) part alone brings accuracy down to \(37.4\%\) on average. In other words, on trials when participants depended on information about a part to communicate the object to their partner, we found that localized information about that part was both necessary and sufficient for the performance of our listener model.

### 5.2 USING LISTENER FOR RETRIEVAL IN NOVEL OBJECT COLLECTIONS

Finally, as a demonstration of the broader applicability of our listener, we consider the problem of searching a large database of 3D objects using natural language queries. A key advantage of the proposed listener is its flexibility to be applied on arbitrarily sized contexts.
We exploit this flexibility by using a pre-trained listener to measure the compatibility \(\mathcal{L}(o_i, U)\) between every object of a test collection \(O = \{o_1, \ldots , o_N\}\) and the query \(U\). In Figure 4 (top) we show the chairs of the held-out splits (a set of 900 chairs) with the highest compatibility for a range of utterance queries. Additionally, we show the results of applying this model (trained on chairs) to the entire out-of-training (ShapeNet) classes of sofas, tables, and lamps (object sets of size 3.2K, 8.5K, and 2.3K, respectively). We see surprisingly good results on searching these transfer categories. This further supports the part-awareness of the learned embedding, since the commonality between a chair and a table can be primarily expressed through their shared parts. In the Appendix we include additional queries for chairs (Fig. 10) and non-chairs (Fig. 11).

## 6 SPEAKER EXPERIMENTS

Having established that our neural listener learns useful representations with surprisingly structured properties, we now proceed to evaluate our neural speakers (see Fig. 5 for examples).<sup>3</sup> We evaluate them by measuring their success in referential games with two different kinds of partners: with an independently trained listener model and with human listeners on Amazon Mechanical Turk (see Table 3). <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 5: Top-scoring synthetic utterances generated from listener-aware and context-aware speakers for unseen targets. Proportions correspond to classification scores of our independent evaluating listener. </center> ![](images/7_1.jpg) <center>Figure 6: Our listener-aware speaker can produce informative referring expressions for out-of-class objects in context. Here, we apply our search technique to the collection of ShapeNet Tables to produce triplets of well-separated objects. We use the queries 'no legs' (left), 'modern' (center), and 'x' (right) to construct each triplet. Notice that the target of each triplet (selected from the highest-ranked matches) reflects the semantics of the used query, as opposed to the distractors (selected from the lowest-ranked matches). </center> Critically, to conduct a fair evaluation using a neural listener, we split our training data in half. The evaluating listener was trained using one half, while the scoring (or "internal") listener used by the speaker to choose utterances was trained on the remaining half. For our human evaluation, we used the context- and listener-aware speakers to generate synthetic referring expressions on the test set. To avoid data spillage, we use all training data to train the internal listeners here. We then showed these referring expressions to participants and asked them to select the object from context that the speaker was referring to. We collected approximately 2.58 responses for each triplet. For all speaking experiments, we used the same object-generalization splits used in the listening experiments. The synthetic utterances were the best-scoring sentences according to each model, with the optimal \(\alpha\) and a subset of \(\beta\) values (see Appendix Section 12 for the effect of these hyper-parameters over a wider range). We found that our listener-aware speaker, which uses an internal listener model to produce informative utterances, performs significantly better in reference games.
While its success with the evaluating listener model may be unsurprising, given the architectural similarity of the internal listener and the evaluating listener, human listeners were 10.4 percentage points better at picking out the target on utterances produced by the listener-aware speaker. While for listeners we found it was sufficient to bring context into the model only at the final stage (the soft-max over objects), we found that for speakers it was helpful to bring context into play earlier: the context-unaware speaker does significantly worse than the context-aware one (64.0% vs. 76.6%). Qualitatively, we note that both context/listener-aware speakers produce succinct descriptions (average sentence length 4.21 vs. 4.97), but the listener-aware speaker uses a much richer vocabulary (14% more unique nouns and 33% more unique adjectives, after controlling for the average length discrepancy).

Table 3: Speaker evaluations. For the neural listeners, five random seeds controlling the weight initialization and speaker-listener data splits were used. <table><tr><td>Speaker-Architecture</td><td>Listener Model</td><td>Human Listeners</td></tr><tr><td>Context-unaware</td><td>64.0 ± 1.7%</td><td>-</td></tr><tr><td>Context-aware (β = 0.0)</td><td>76.6 ± 1.0%</td><td>68.3</td></tr><tr><td>Listener-aware (β = 0.5)</td><td>85.9 ± 0.4%</td><td>-</td></tr><tr><td>Listener-aware (β = 1.0)</td><td>92.2 ± 0.5%</td><td>78.7</td></tr></table> <--- Page Split --->

As a final qualitative examination of our speakers' generalization ability, we ran a simple out-of-class speaking experiment. We constructed well-separated contexts from the search results presented in Section 5.2, taking as the target the highest-ranked exemplar and choosing distractors from among the lowest-ranked. Our best speaker model produced promising results (see Fig. 6).

## 7 RELATED WORK

Image labeling and captioning Our work builds on recent progress in the development of vision models that involve some amount of language data, including object categorization (Simonyan & Zisserman, 2014; Zhang et al., 2014) and image captioning (Karpathy & Fei-Fei, 2015; Vinyals et al., 2015; Xu et al., 2016). Unlike object categorization, which pre-specifies a fixed set of class labels to which all images must project, our system uses open-ended natural language. Similarly to other recent works in image captioning (Luo & Shakhnarovich, 2017; Monroe et al., 2017; Vedantam et al., 2017), instead of captioning a single image in isolation, our systems learn how to communicate across diverse semantic contexts. More importantly, using 'clean' images of separate articulated objects enables the generation of very fine-grained, part-based descriptions.

Reference games In our work we use reference games in order to operationalize the demand to be relevant in context. The basic arrangement of such games can be traced back to the language games explored by Wittgenstein (Wittgenstein, 1953) and Lewis (Lewis, 1969). For decades, such games have been a valuable tool in cognitive science to quantitatively measure inferences about language use and the behavioral consequences of those inferences (Rosenberg & Cohen, 1964; Krauss & Weinheimer, 1964; Clark & Wilkes-Gibbs, 1986; van Deemter, 2016). Recently, these approaches have also been adopted as a benchmark for discriminative or context-aware NLP (Paetzel et al., 2014; Andreas & Klein, 2016; Cohn-Gordon et al., 2018; Vedantam et al., 2017; Su et al., 2017; Lazaridou et al., 2018).
Rational Speech Acts framework Rational Speech Act (RSA) models provide a probabilistic framework for deriving linguistic behavior from general principles of social cognition (Goodman & Frank, 2016). At the core of the RSA framework is the Gricean proposal (Grice, 1975) that speakers are decision-theoretic agents who select utterances \(u\) that are parsimonious yet informative about the state of the world \(w\). RSA formalizes this notion of informativity as the expected reduction in the uncertainty of an (internally simulated) listener \(L\): \(S(u|w) \propto \exp \{\log L(w|u)\}\), \(L(w|u) \propto \mathcal{L}(u, w)\). This speaker \(S\) is pragmatic because it considers informativity in terms of a rational listener agent \((L)\) who updates their beliefs about the world according to the literal semantics of the language \((\mathcal{L})\). Previous work has shown that RSA models account for context sensitivity in human speakers (Graf et al., 2016; Monroe et al., 2017; Yu et al., 2017; Fried et al., 2017). Our speaking results add evidence for the effectiveness of this approach.

## 8 CONCLUSION AND FUTURE DIRECTIONS

Taken together, our results show that natural language, derived from communication in context, provides a strong objective for learning to make fine-grained distinctions between objects with an emphasis on their shared part-structure. An exciting future application of this work would be to leverage these techniques for improving unsupervised part segmentation and 3D shape retrieval, as well as context-aware shape synthesis, providing an advance over existing context-unaware synthesis techniques (Chen et al., 2018).

## ACKNOWLEDGMENTS

## REFERENCES

Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas J Guibas. Learning representations and generative models for 3d point clouds. Proceedings of the 35th International Conference on Machine Learning, 2018. <--- Page Split ---> Jacob Andreas and Dan Klein. Reasoning about pragmatics with neural listeners and speakers. arXiv preprint arXiv:1604.00562, 2016.

Jacob Andreas, Dan Klein, and Sergey Levine. Learning with latent language. CoRR, abs/1711.00482, 2017.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016.

Angel X. Chang, Thomas A. Funkhouser, Leonidas J. Guibas, Pat Hanrahan, Qi-Xing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. Shapenet: An information-rich 3d model repository. CoRR, abs/1512.03012, 2015.

Kevin Chen, Christopher B Choy, Manolis Savva, Angel X Chang, Thomas Funkhouser, and Silvio Savarese. Text2shape: Generating shapes from natural language by learning joint embeddings. arXiv preprint arXiv:1803.08495, 2018.

Herbert H Clark and Deanna Wilkes-Gibbs. Referring as a collaborative process. Cognition, 22(1):1-39, 1986.

Reuben Cohn-Gordon, Noah Goodman, and Chris Potts. Pragmatically informative image captioning with character-level reference. arXiv preprint arXiv:1804.05417, 2018.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255. IEEE, 2009.

Daniel Fried, Jacob Andreas, and Dan Klein. Unified pragmatic models for generating and following instructions. CoRR, abs/1711.04987, 2017.
Edward Gibson, Richard Futrell, Julian Jara-Ettinger, Kyle Mahowald, Leon Bergen, Sivalogeswaran Ratnasingam, Mitchell Gibson, Steven T. Piantadosi, and Bevil R. Conway. Color naming across languages reflects color use. Proceedings of the National Academy of Sciences, 114(40):10785-10790, 2017.

Noah D. Goodman and Michael C Frank. Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20(11):818-829, 2016.

Caroline Graf, Judith Degen, Robert X. D. Hawkins, and Noah D. Goodman. Animal, dog, or dalmatian? level of abstraction in nominal referring expressions. In Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016.

H. P. Grice. Logic and conversation. In P. Cole and J. Morgan (eds.), Syntax and Semantics, pp. 43-58. Academic Press, New York, 1975.

Robert X. D. Hawkins. Conducting real-time multiplayer experiments on the web. Behavior Research Methods, 47(4):966-976, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3128-3137, 2015.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.

Simon Kirby, Monica Tamariz, Hannah Cornish, and Kenny Smith. Compression and communication in the cultural evolution of linguistic structure. Cognition, 141:87-102, 2015.

Robert M Krauss and Sidney Weinheimer. Changes in reference phrases as a function of frequency of usage in social interaction: A preliminary study. Psychonomic Science, 1964. <--- Page Split ---> Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, 40, 2017.

Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. Emergence of linguistic communication from referential games with symbolic and pixel input. arXiv preprint arXiv:1804.03984, 2018.

David Lewis. Convention: A philosophical study. Harvard University Press, 1969.

Ruotian Luo and Gregory Shakhnarovich. Comprehension-guided referring expressions. In Computer Vision and Pattern Recognition (CVPR), volume 2, 2017.

Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), 2013.

Takeru Miyato, Andrew M. Dai, and Ian Goodfellow. Adversarial training methods for semi-supervised text classification. International Conference on Learning Representations, 2017.

Will Monroe, Robert X.D. Hawkins, Noah D. Goodman, and Christopher Potts. Colors in context: A pragmatic neural model for grounded language understanding. arXiv preprint arXiv:1703.10186, 2017.

Maike Paetzel, David Nicolas Racca, and David DeVault. A multimodal corpus of rapid dialogue games. In Language Resources and Evaluation Conference (LREC), 2014.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pp. 311-318, Stroudsburg, PA, USA, 2002. Association for Computational Linguistics. doi: 10.3115/1073083.1073135. URL https://doi.org/10.3115/1073083.1073135.

Jeffrey Pennington, Richard Socher, and Christopher Manning.
Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532-1543, 2014.

Seymour Rosenberg and Bertram D Cohen. Speakers' and listeners' processes in a word-communication task. Science, 1964.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016.

Sheng-syun Shen and Hung-yi Lee. Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection. CoRR, abs/1604.00077, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1), 2014.

Jong-Chyi Su, Chenyun Wu, Huaizu Jiang, and Subhransu Maji. Reasoning about fine-grained attribute phrases using reference games. arXiv preprint arXiv:1708.08874, 2017.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. NIPS, 2014.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.

Kees van Deemter. Computational models of referring: a study in cognitive science. MIT Press, 2016. <--- Page Split ---> Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, and Gal Chechik. Context-aware captions from context-agnostic supervision. In Computer Vision and Pattern Recognition (CVPR), 2017.

Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. CoRR, abs/1411.4555, 2015.

Ronald J. Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Comput., 1989.

Ludwig Wittgenstein. Philosophical investigations. Macmillan, 1953.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016.

Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. CoRR, abs/1502.03044, 2016.

Li Yi, Hao Su, Xingwen Guo, and Leonidas J. Guibas. Syncspeccnn: Synchronized spectral CNN for 3d shape segmentation. CoRR, abs/1612.00606, 2016.

Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L. Berg. A joint speaker-listener-reinforcer model for referring expressions. CoRR, abs/1612.09542, 2017.

Ning Zhang, Jeff Donahue, Ross Girshick, and Trevor Darrell. Part-based r-cnns for fine-grained category detection. In European conference on computer vision, pp. 834-849. Springer, 2014.
<--- Page Split ---> ## 9 APPENDIX

### 9.1 DETAILS ON BUILDING THE CONTEXTS

To build our contrastive triplets, we first compute the 2-nearest-neighbor graph of all ShapeNet chairs based on their Euclidean latent distances, and use a subset of 1K chairs (those with the highest in-degree on this graph) to "seed" the triplet generation. Given a node of this graph, we select its two nearest neighbors from the entire shape collection to form a triplet of highly similar ("close") objects, and also select the two objects that are closest to it but which are more distant from it than the median of all pairwise distances, to form a triplet of relatively "far" objects. Having two types of contexts (close and far) allowed us to collect contrastive language with various degrees of specificity. To counterbalance the dataset we ensured that each object of a triplet alternated roles (with the remaining two) as a distractor or target, and that each resulting combination was annotated by at least 4 humans. The AE used to make the embedding was of a relatively small size (64D) to promote meaningful Euclidean comparisons. Also, for the close triplets we applied a manually tuned threshold to semi-automatically reject triplets that contained two indistinguishable geometric objects, e.g., two that were listed in ShapeNet but varied only in their texture.

### 9.2 LISTENERS DETAILS

Table 4: Optimal hyper-parameters for neural listener architectures using both images and point clouds and word attention. <table><tr><td>Param/Architecture</td><td>Proposed</td><td>At-Once</td><td>Separate-Augment</td></tr><tr><td>Learning-rate</td><td>0.0005</td><td>0.001</td><td>0.001</td></tr><tr><td>Label-smoothing</td><td>0.9</td><td>0.9</td><td>0.9</td></tr><tr><td>L2-reg.</td><td>0.3</td><td>0.05</td><td>0.09</td></tr><tr><td>RNN-dropout</td><td>0.5</td><td>0.7</td><td>0.45</td></tr></table>

The optimal values for the hyper-parameters used by each listener model (using both point clouds and images, and word attention) are given in Table 4. All listeners use an MLP with [100, 50] hidden neurons (FC-ReLU (Maas et al., 2013)) with batch normalization after each layer, and an LSTM with 100 hidden units. The GloVe embedding was also 100-dimensional and was fine-tuned during training. For the point-cloud latent bottleneck codes (128D) and the VGG image features (4096D), we use dropout with 0.5 keep probability to zero half their entries before applying the FC projecting layers. The same dropout mask was applied to the codes of a given triplet. The ground-truth indicator vectors were label-smoothed (Szegedy et al., 2015). Assigning a probability of 0.933 to the ground-truth target and 0.0333 to the distractors (smoothing of 0.9, second row of Table 4) yielded a performance boost of \(\sim 2\%\). Label-smoothing has also been found in previous work to improve generalization (Szegedy et al., 2015) or reduce mode-collapse in GANs (Salimans et al., 2016). We note that we did not manage to improve the best attained accuracies by applying layer normalization (Ba et al., 2016) in the LSTM, or adversarial regularization (Miyato et al., 2017) on the word embeddings. Dropout (Srivastava et al., 2014) was by far the most effective form of regularization for the listener (\(\sim [8 - 9]\%\)), followed by \(L_{2}\) weight-regularization on the projected layers (\(\sim [2 - 3]\%\)).

Hyper-parameter Search We did a grid search over the space of hyper-parameters associated with each listener type separately.
To circumvent the exponential growth of this space, we searched it in two phases. First, we optimized the learning rate (in the regime of [0.0001, 0.0005, 0.001, 0.002, 0.004, 0.005]) in conjunction with the dropout (keep probability) applied at the RNN's input, in the range [0.4-0.7] with increments of 0.05. Given the acquired optimal values, we conducted the second stage of the search over the \(L_{2}\) weight-regularization (in the range of [0.005, 0.01, 0.05, 0.1, 0.3, 0.9]), label-smoothing ([0.8, 0.9, 1.0]), and dropout after the VGG/PC-AE projected vectors ([0.4, 0.5, 0.7, 1.0]). In this search, we used a single random seed to control for the data split, which was based on the object-generalization task.

Details on the ablated listeners For the "Separate-Augment", a convolutional layer for aggregating the three encodings showed better performance than an FC. Also, the order-invariant max/mean poolings \((f,g)\) produced better results than other alternatives (e.g. using the identity <--- Page Split ---> function in their place). Using a separate MLP to process the point cloud data (via concatenation with the output of the RNN) was slightly better than feeding them directly into the recurrent net (after the tokens of each utterance were processed). However, conditioning the recurrent net with point clouds and using the images at the end of the pipeline significantly deteriorated all attained results. We hypothesize that the gradient flow is better when the (inferior in quality) point cloud data are processed closer to the loss.

Training details We trained the "Proposed" and the "At-Once" for 500 epochs and the "Separate-Augment" for 350. This was sufficient, as more training only resulted in more overfitting without improving the achieved test/val accuracies. We halved the learning rate every 50 epochs if the validation error did not improve during them. Every 5 epochs we evaluated the model on the validation split in order to select the epoch/parameters with the highest attained accuracy. Because the "At-Once" is sensitive to the input order of the geometric codes, we randomly permute them during training. We use the ADAM (Kingma & Ba, 2014) \((\beta_{1} = 0.9)\) optimizer for all experiments.

### 9.3 SPEAKER DETAILS

Table 5: Optimal hyper-parameters for the context-aware neural speaker. <table><tr><td>LSTM Size</td><td>Learning-rate</td><td>L2-reg.</td><td>Word-dropout</td><td>Image-dropout</td><td>RNN-out dropout</td></tr><tr><td>200</td><td>0.003</td><td>0.005</td><td>0.8</td><td>0.5</td><td>0.9</td></tr></table>

Hyper-parameter Search To optimize the hyper-parameters of a speaker, we conducted a two-stage grid search, as we did for the listeners. First, we optimized (a context-aware speaker with one seed under the object generalization task) with respect to: the number of hidden neurons of the LSTM (100 and 200), the learning rate ([0.0005, 0.001, 0.003]), the dropout keep probability for the word vectors ([0.8, 0.9, 1.0]), and the dropout applied at the RNN's output, before the linear transformation/word-to-logits matrix (with keep probabilities of [0.8, 0.9, 1.0]). The two best models were further optimized by introducing a dropout layer after the image-projection layer (with keep probabilities in [0.8, 0.9, 1.0]) and \(L_{2}\)-weight regularization applied at the same projection layer (with values in [0, 0.005, 0.01]). The optimal parameters of this search are reported in Table 5.
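To make the utterance selection of Eq. 1 concrete, the following is a minimal sketch of the re-ranking step applied to the candidate utterances sampled for a triplet; the sampling interface and function names are hypothetical, and the listener can be any model that returns \(P_{L}(t|U)\):

```python
import math

def rerank_utterances(candidates, listener_prob, alpha=0.6, beta=1.0):
    """Order sampled utterances by the Eq. 1 score, best first.

    candidates:    list of (tokens, token_logprobs) pairs sampled from the
                   speaker for one context (hypothetical interface).
    listener_prob: function mapping a token list to P_L(target | U) > 0.
    """
    def eq1_score(tokens, token_logprobs):
        # Length-penalized speaker term: sum_i log P_S(u_i|O) / |U|^alpha.
        speaker_term = sum(token_logprobs) / (len(tokens) ** alpha)
        return beta * math.log(listener_prob(tokens)) + (1 - beta) * speaker_term

    return sorted(candidates, key=lambda c: eq1_score(*c), reverse=True)
```

Note that with \(\beta = 1.0\) the ranking depends only on the internal listener (the best-performing setting in Table 3), while \(\beta = 0.0\) recovers the plain context-aware speaker.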
Model Selection To do model selection for a speaker, we used a pre-trained listener (with the same train/test/val splits) which evaluated the synthetic utterances produced by the speaker. For this purpose, the speaker generated 1 utterance for each validation triplet via greedy (arg-max) sampling every 10 epochs of training, and the listener reported the accuracy of predicting the target given the synthetic utterance. At the end of training (300 epochs), the epoch/model with the highest accuracy was selected.

Other details We initially used GloVe to provide our speaker pretrained word embeddings, as in the listener, but found that it was sufficient to train the word embedding from uniformly random initialized weights (in range [-0.1, 0.1]). We initialized the bias terms of the linear word-encoding layer with the log probability of the frequency of each word in the training data (Karpathy & Fei-Fei, 2015), which provided faster convergence. We train with SGD and ADAM \((\beta_{1} = 0.9)\) and apply norm-wise gradient clipping with a cut-off threshold of 5.0. The sampled and training utterances have a maximal length of 33 tokens (99th percentile of the dataset), and for each speaker we sample and score 50 utterances per triplet at test time (via Eq. 1). The optimal length penalty \((\alpha)\) is 0.7 for the context-unaware speaker, and is set to 0.6 for the rest.

## 10 PRE-TRAINED IMAGES AND POINT CLOUDS

We train the PC-AE under the Chamfer loss with a bottleneck of 128 dimensions, using point clouds of 2048 points extracted uniformly area-wise from each 3D CAD model. For the VGG-16 encoding, we use the 4096-dimensional output activations of its second fully-connected layer \((fc7)\). To fine-tune the VGG-16, we optimized it under the cross-entropy loss for an 8-way classification, which included photo-realistic rendered images of textureless meshes in the 8 largest object classes of ShapeNet (cars, chairs, aeroplanes, ...). The total number of shapes was 36,632. The fine-tuning took 30 epochs of training: for the first 15 we optimized only the weights of the last \((fc8)\) layer, and <--- Page Split ---> for the last 15 the weights of all layers. On the test split of a \([90\%, 5\%, 5\%]\) (train/test/val) partition, the network achieves a 96.9% classification accuracy.

## 11 FURTHER QUALITATIVE RESULTS

### 11.1 SPLIT UTTERANCES

While taking entire triplets as input to the listener LSTM did not improve listener performance on utterances containing comparatives and superlatives (which in theory should be difficult to evaluate for isolated objects), we also anecdotally considered another subpopulation of utterances that perhaps even more strongly rely on context. These utterances distinguish the target by associating it explicitly with one distractor (e.g. "from the two that have thin legs, the one..."). We used an ad hoc set of search queries to find such utterances among the test set (\(\approx 1.5\%\) of utterances) and found that both context-aware architectures do perform noticeably better on these utterances (67.4 \(\pm 3.0\%\) for "Separate-Augment" and 65.8 \(\pm 5.2\%\) for "At-Once", compared to only 62.5 \(\pm 3.7\%\) for the proposed model). However, given the low occurrence of such cases, these effects were not significant, and we decided that the negligible gains of the "Separate-Augment" architecture were not worth the increase in model complexity and rigidity with respect to context size (see Section 5.2 for a demonstration of this shortcoming).
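As a point of reference for the PC-AE training objective mentioned in Section 10, the symmetric Chamfer loss between an input point cloud and its reconstruction can be written in a few lines. This is a generic sketch assuming batched (B, N, 3) tensors, not the authors' implementation:

```python
import torch

def chamfer_loss(pc_a, pc_b):
    """Symmetric Chamfer distance between batched point clouds.

    pc_a: (B, N, 3) and pc_b: (B, M, 3). For every point, take the squared
    distance to its nearest neighbor in the other cloud, and average over
    both directions.
    """
    d = torch.cdist(pc_a, pc_b) ** 2          # pairwise distances: (B, N, M)
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
```

During PC-AE training this would be applied between each 2048-point input cloud and its decoded reconstruction.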
<--- Page Split ---> Figure 7: Examples of attention weights on human utterances. The listener LSTM learns attention weights that emphasize more informative words when forming its linguistic representation. For these speaker utterances drawn from our corpus, we colored each word according to the weight assigned by the attention mechanism, with low-attention words in blue and high-attention words in red. ![](images/15_0.jpg) <--- Page Split ---> Figure 8: Examples of errors in the listener model. Our top-performing listener model appeared to struggle to interpret referential language that relied on metaphors, negations, precise counting of parts, ambiguous modifiers, or descriptions of the object's texture or material. All examples are drawn from the test set and were correctly classified by human listeners in the original task. ![](images/16_0.jpg) <--- Page Split ---> Figure 9: Examples of errors in speaker models. Sometimes even the pragmatic (listener-aware) speaker produces insufficiently specific utterances that mention only undiagnostic features, or produces utterances that are literally false of the target (e.g. there technically is a hole in the back) while still succeeding in distinguishing the objects. ![](images/17_0.jpg) <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 10: Search results for chairs. Gallery of retrieved exemplars of held-out chairs for different queries. Only the top five are shown. </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 11: Search results for out-of-class objects. Gallery of retrieved exemplars from other ShapeNet furniture categories for different queries. Top five and bottom five are shown, demonstrating intuitive contrasts between the highest- and lowest-ranked. Note that there are some mislabeled objects in ShapeNet. </center> <--- Page Split ---> Figure 12: Effect of context on production: Synthetic utterances generated by a literal (context-aware) and pragmatic (listener-aware) speaker. The top and bottom rows show utterances produced for the same target in a far and close context, respectively. The best-performing listener's prediction confidence for each object is displayed above: while both speaker models produce similarly effective utterances in far contexts, the literal speaker fails to produce effective utterances in close contexts. ![](images/20_0.jpg) <--- Page Split ---> ## 12 LENGTH PENALTY AND SPEAKER-AWARENESS

Figure 13: Measuring the effect of using different \(\alpha\), \(\beta\) values to select the top-1 scoring sentence for context-aware and context-unaware speakers when creating utterances for the objects/contexts of the validation split. The y-axis in each subplot denotes the performance of a listener that is used to rank and evaluate the sentences. Averages are with respect to 5 random seeds controlling the data splits and the initializations of the neural networks. ![](images/21_0.jpg) <center>Figure 14: Effect of using a different fraction of the training data for the evaluating listener when using two separate listeners (for evaluating and for scoring a speaker's utterances). On the x-axis is the fraction \(f\) of the entire training data (80% of the dataset) that is used by the evaluator; \(1 - f\) is used by the utterance-scoring listener. Averages are with respect to 5 random seeds controlling the data splits and the initializations of the neural networks.
</center> ![](images/21_1.jpg) It is interesting to see that even a context-unaware speaker can generate sentences that a listener can re-rank to find a top-scoring sentence that is very discriminative. The context-unaware (listener-aware) examples on our website demonstrate this improvement. ## 12.1 MORE ABLATIONS ## 12.2 GAME INTERFACE AND CORPUS Each game consisted of 69 rounds and participants swapped speaker and listener roles after each round. The game's interface is depicted in Figure 16. Participants were allowed to play multiple games, but most participants in our dataset played exactly one game (81% of participants). The most distinctive words in each triplet type (as measured by point-wise mutual information) are shown in Table 7. <--- Page Split ---> Table 6: Performance of different listeners in specific subpopulations in the earlier language generalization task. Averages over five random seeds that controlled the data splits and the neural-net initializations. <table><tr><td rowspan="2">Architecture</td><td colspan="5">Subpopulations</td></tr><tr><td>Overall</td><td>Close</td><td>Far</td><td>Sup-Comp</td><td>Split</td></tr><tr><td>Separate (Proposed)</td><td>83.7 ± 0.2%</td><td>77.0 ± 0.8%</td><td>90.3 ± 0.3%</td><td>80.7 ± 0.3%</td><td>64.6 ± 3.7%</td></tr><tr><td>Separate-Augment</td><td>84.4 ± 0.5%</td><td>78.5 ± 0.8%</td><td>90.2 ± 0.7%</td><td>80.9 ± 0.4%</td><td>68.9 ± 2.3%</td></tr><tr><td>Aggregate</td><td>78.4 ± 0.2%</td><td>71.5 ± 0.6%</td><td>85.2 ± 0.3%</td><td>76.0 ± 0.8%</td><td>61.8 ± 3.0%</td></tr></table> Figure 15: Listener's accuracy for different sizes of training data, under the object generalization task. The original split includes \([80\% , 10\% , 10\% ]\) for training/test/val purposes; thus the maximum size of the training data is 0.8 of the entire dataset when the fraction is 1.0 (x-axis). The listener model uses the main architecture with attention, images, and point clouds, and its accuracy is always measured on the original \((10\%)\) test split. Results with five random seeds controlling the original data split and the neural net's initialization. ![](images/22_0.jpg) <center>Table 7: Most distinctive words in each triplet type according to point-wise mutual information (excluding tokens that appeared fewer than 30 times in the dataset). Lower numbers are more distinctive of far and higher numbers are more distinctive of close. </center> <table><tr><td rowspan="2">far</td><td>word</td><td>office</td><td>sofa</td><td>regular</td><td>folding</td><td>wooden</td><td>stool</td><td>wheels</td><td>metal</td><td>normal</td><td>rocking</td></tr><tr><td>pmi</td><td>-1.70</td><td>-0.94</td><td>-0.88</td><td>-0.84</td><td>-0.83</td><td>-0.79</td><td>-0.78</td><td>-0.71</td><td>-0.67</td><td>-0.66</td></tr><tr><td rowspan="2">close</td><td>word</td><td>alike</td><td>identical</td><td>thickness</td><td>texture</td><td>darker</td><td>skinnier</td><td>thicker</td><td>perfect</td><td>similar</td><td>larger</td></tr><tr><td>pmi</td><td>0.69</td><td>0.67</td><td>0.67</td><td>0.66</td><td>0.65</td><td>0.64</td><td>0.63</td><td>0.62</td><td>0.62</td><td>0.61</td></tr></table> <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 16: Reference game interface. </center> <--- Page Split --->
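The point-wise mutual information ranking behind Table 7 is straightforward to reproduce. The following is a minimal sketch under our own preprocessing assumptions (tokenized utterances, each word counted once per utterance, and the same 30-occurrence cut-off); it is not the authors' exact script.

```python
import math
from collections import Counter

def distinctive_words(utterances, labels, min_count=30):
    """Rank words by PMI with the 'close' triplet type, as in Table 7.
    `utterances` is a list of token lists; `labels` marks each utterance
    as 'close' or 'far'. Negative scores are distinctive of far triplets,
    positive scores of close ones."""
    word_total, word_close = Counter(), Counter()
    for tokens, label in zip(utterances, labels):
        for w in set(tokens):              # count each word once per utterance
            word_total[w] += 1
            if label == "close":
                word_close[w] += 1
    p_close = sum(1 for l in labels if l == "close") / len(labels)
    pmi = {}
    for w, c in word_total.items():
        if c >= min_count and word_close[w] > 0:   # Table 7 drops rare tokens
            pmi[w] = math.log((word_close[w] / c) / p_close)  # log p(close|w)/p(close)
    return sorted(pmi.items(), key=lambda kv: kv[1])
```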
## ABSTRACT Human world knowledge is both structured and flexible. When people see an object, they represent it not as a pixel array but as a meaningful arrangement of semantic parts. Moreover, when people refer to an object, they provide descriptions that are not merely true but also relevant in the current context. Here, we combine these two observations in order to learn fine-grained correspondences between language and contextually relevant geometric properties of 3D objects. To do this, we employed an interactive communication task with human participants to construct a large dataset containing natural utterances referring to 3D objects from ShapeNet in a wide variety of contexts. Using this dataset, we developed neural listener and speaker models with strong capacity for generalization. By performing targeted lesions of visual and linguistic input, we discovered that the neural listener depends heavily on part-related words and associates these words correctly with the corresponding geometric properties of objects, suggesting that it has learned task-relevant structure linking the two input modalities. We further show that a neural speaker that is 'listener-aware' — that plans its utterances according to how an imagined listener would interpret its words in context — produces more discriminative referring expressions than a 'listener-unaware' speaker, as measured by human performance in identifying the correct object. ## 1 INTRODUCTION Human world knowledge is both structured and flexible. For example, when people see a chair, they represent it not as a pixel array but as a semantically meaningful combination of parts, such as arms, legs, seat, and back. How to obtain and flexibly deploy such structured knowledge remains an outstanding problem in machine learning (Lake et al., 2017). One promising approach is to harness the rich conceptual and relational structure latent in language (Andreas et al., 2017). Natural languages have been optimized across human history to solve the problem of efficiently communicating those aspects of the world most relevant to current goals (Kirby et al., 2015; Gibson et al., 2017). Consequently, language reflects the structured nature of our world knowledge: we not only conceive of a chair in terms of its semantic parts, but can combine multiple words to refer to its 'curved back' or 'cushioned seat', and provide more informative descriptions if the context requires it, e.g., refer to a different distinguishing part if all the chairs have a cushioned seat. Our goal is to leverage these insights to develop systems that can make fine-grained distinctions between complex object geometries across a wide variety of contexts. Our approach is to leverage natural language produced by people in an interactive communication task to develop neural network models of the speaker and listener roles in this task. We find that the resulting representations learned by these models exhibit structure that is crucial for robust communication: first, they capture task-relevant correspondences between individual parts of objects and individual tokens of language, and second, they have strong capacity to generalize to novel contexts, objects, utterances, and other related object classes. We make the following contributions: <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Constructing "close" and "far" contexts by exploiting the latent neighborhood structure of 3D chairs.
Orange is a high in-degree seed chair; dark gray are its selected distractors in each context. </center>
- We introduce a new multimodal dataset (Chairs In Context) comprising 4,511 chairs from ShapeNet, organized into 4,054 sets of size 3 (called communication contexts), with 78,789 natural utterances, each utterance intended to distinguish a chair in context.
- By training on this dataset we develop neural listeners and speakers with strong generalization capacity, even in out-of-training classes such as tables.
- We demonstrate that the neural listener learns to prioritize the same geometric information in objects (i.e., properties of individual chair parts) that humans do in solving the communication task, despite never being provided with an explicit decomposition of these objects into parts.
- We show how our listeners can be used to search large collections of unseen objects to retrieve models based on natural language queries, e.g., curved back and fat legs.
- Lastly, we find that a neural speaker that is 'listener-aware' — that plans its utterances according to how an imagined listener would interpret its words in context — produces more discriminative utterances than a 'listener-unaware' speaker, as measured by human performance in identifying the correct object.

## 2 DATASET AND TASK Our dataset consists of triplets of 3D objects coupled with referential utterances that aim to distinguish one object (the "target") from the remaining two (the "distractors"). To obtain such utterances, we paired participants from Amazon Mechanical Turk to play an online reference game (Hawkins, 2015). On each round of the game, the two players were shown a triplet of objects. The designated target object was privately highlighted for one player (the "speaker"), who was asked to send a message through a chat box such that their partner (the "listener") could successfully select it from the context (see Appendix Fig. 16). To ensure speakers used geometric information rather than color, texture, orientation, or position on the screen, we scrambled the positions of the objects for each participant and used textureless, colorless renders of 3D objects taken from the same viewpoint. Additionally, to ensure communicative interaction was natural, no constraints were placed on the chat box: referring expressions from the speaker were occasionally followed by clarification questions from the listener or other discourse. A crucial decision in building our dataset concerned the construction of useful contexts that would reliably elicit fine-grained contrastive language. Perceptually identical objects cannot be distinguished with language at all, while wildly different objects (a chair and a car) can be easily distinguished with a single word ("it's a chair"). To solve this problem, we considered three objectives. First, the set of objects must be familiar so we can tap existing visual and linguistic representations. Second, the objects should be complex and variable to provide wide coverage of interesting geometries. Third, different contexts must contain diverse combinations of objects to ensure variation in the relevant distinctions required. To satisfy the first two objectives, we utilize the collection of about 7,000 chairs from ShapeNet (Chang et al., 2015). This class is geometrically complex, densely sampled, highly diverse, and abundant in the real world.
To satisfy the third objective in a scalable and unsupervised manner, we estimated object similarity between different chairs using the Point Cloud-AutoEncoder (PC-AE) from Achlioptas et al. (2018). This representation allowed us to leverage the fact that point clouds extracted from a 3D surface provide an intrinsic 3D representation of an object, agnostic to color or texture. To deal with the inhomogeneity of data in repositories like ShapeNet, we used a sampling strategy to construct our triplets. <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 2: Proposed listener architecture. </center> First, we computed the 2-nearest-neighbor graph of all ShapeNet chairs based on their PC-AE embedding distances. On this graph, we selected chairs of highest in-degree as seeds, and for each seed chair we generated two kinds of triplets. Close contexts sampled nearby chairs, while Far contexts sampled highly dissimilar chairs (see Fig. 1). Additional details of triplet construction are provided in the Appendix (Section 9.1). In total, we collected a corpus containing 78,789 referring expressions for 4,054 triplets, containing 4,511 unique chairs. In doing so we recruited 2,124 unique participants. Human performance on the reference game was high in general, but listeners made significantly more errors in the close triplets (94.2% vs. 97.2%, \(z = 13.54\) , \(p < 0.001\) ). Also, significantly longer utterances were used on average to describe targets in close triplets (approximately 8.4 words vs. 6.1, \(t = - 35\) , \(p < 0.001\) ). A wide spectrum of descriptions was elicited, ranging from the more holistic/categorical descriptions common for far triplets (e.g., "the rocking chair") to the more complex, geometric language common for close triplets (e.g., "thinner legs but without armrests"). 78% of utterances used at least one part word: "back", "legs", "seat", "arms", or closely related synonyms (e.g., "armrests"). ## 3 NEURAL LISTENERS Constructing neural listeners that reason about geometric relationships is a key contribution of our work. It lays the foundation for creating speakers that utter discriminative utterances and enables the creation of an object retrieval system that operates with linguistic queries. Given its importance, below we conduct a detailed comparison between three distinct architectures, highlight the effect of different regularization techniques, and investigate the merits of two different representations of 3D objects for the listening task, namely, images and point clouds. In what follows, we denote the three objects of a communication context as \(O = \{o_{1}, o_{2}, o_{3}\}\) , the corresponding word-tokenized utterance, which has at most \(K\) tokens, as \(U = u_{1}, u_{2}, \ldots\) , and as \(t \in O\) the referential target. Our proposed listener is inspired by Monroe et al. (2017). It takes as input a (latent code) vector for each of the three objects in \(O\) and a (latent code) vector for each token of \(U\) , and outputs an object-utterance compatibility score \(\mathcal{L}(o_{i}, U) \in [0, 1]\) for each of the three objects. At its core lies a multi-modal LSTM (Hochreiter & Schmidhuber, 1997) that takes as input ("is grounded" with) the vector of a single object, processes the word-sequence \(U\) , and is read out by a final MLP to yield a single number (the compatibility score). This is repeated for each \(o_{i}\) , sharing all network parameters across the objects.
We then apply a soft-max to the three compatibility scores to yield a distribution over the three objects, and compute a cross-entropy loss between this distribution and the ground-truth indicator vector of the target. Object encoders We experimented with three object representations to capture the underlying geometries: (a) the bottleneck representation of a pretrained Point Cloud-AutoEncoder (PC-AE), (b) the embedding provided by a convolutional network operating on single-view images of non-textured 3D objects, or (c) a combination of (a) and (b). Specifically, for (a) we use the PC-AE architecture of Achlioptas et al. (2018) trained with single-class point clouds extracted from the surfaces of 3D CAD models, while for (b) we use the activations of the penultimate layer of a VGG-16 (Simonyan & Zisserman, 2014) neural network, pre-trained on ImageNet (Deng et al., 2009) and fine-tuned on an 8-way classification task with images of objects from ShapeNet. For each <--- Page Split ---> representation, we project the corresponding code vector to the input space of the LSTM using a fully connected (FC) layer with \(L_{2}\) -norm weight regularization. The addition of these projection-like layers improves the training and convergence of our system. While there are many ways to simultaneously incorporate the two modalities in the LSTM, we found that the best performance resulted when we ground the LSTM with the image code, concatenate the LSTM's final output (after processing \(U\) ) with the point cloud code, and finally feed this result into a shallow MLP to produce the final compatibility score (see Figure 2 for an overview of this architecture). We note that grounding the LSTM with point clouds and using images towards the end of the pipeline resulted in a significant performance drop ( \(\sim 4.8\%\) on average). Also, adding dropout at the input layer of the LSTM, and \(L_{2}\) weight regularization and dropout at and before the FC projection layers, was crucial (giving improvements of more than \(10\%\) ). The token codes of each sentence were initialized with the GloVe embedding (Pennington et al., 2014) and fine-tuned for the listening task. Incorporating context information Critically, our proposed listener architecture first scores each object separately and then applies soft-max normalization to yield a score distribution over the three objects. In order to evaluate the importance of this design choice, we consider two alternative architectures that incorporate context earlier, at encoding. The first alternative (Separate-Augment) is identical to the proposed architecture, except that it uses a convolutional layer to augment each object's grounding vector with information about the other two objects in context before yielding its score. Specifically, if \(v_{i}\) is the image code vector of the i-th object \((o_{i} \in O)\) , to produce the grounding vector for \(o_{i}\) , the convolutional layer receives \(f(v_{j}, v_{k})||g(v_{j}, v_{k})||v_{i}\) , where \(f, g\) are order-invariant functions, such as average or max-pooling, and \(||\) denotes feature-wise concatenation. The second alternative architecture (At-Once) first feeds the image vectors for all three objects sequentially to the LSTM and then proceeds to process the tokens of \(U\) once, to yield the entire score distribution. Similarly to the proposed architecture, point clouds are incorporated in both alternatives via a separate MLP after the LSTM.
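To make the proposed listener concrete, the following is a minimal PyTorch sketch of the per-object scoring branch described above: ground the LSTM with the image code, process the utterance, concatenate the point-cloud code, and read out a compatibility score. Dimensions follow the appendix where stated (100-d embeddings, 100 LSTM units, [100, 50] MLP); the class and variable names are our own illustrative choices, not released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Listener(nn.Module):
    """Sketch of the proposed listener (Fig. 2). The same branch is applied
    to each of the three objects (shared weights); a soft-max over the three
    scores is trained with cross-entropy against the target indicator."""
    def __init__(self, vocab_size, d_word=100, d_h=100, d_img=4096, d_pc=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_word)   # GloVe-initialized in the paper
        self.img_proj = nn.Linear(d_img, d_word)        # FC projection layers
        self.pc_proj = nn.Linear(d_pc, d_h)
        self.lstm = nn.LSTM(d_word, d_h, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * d_h, 100), nn.ReLU(),
                                 nn.Linear(100, 50), nn.ReLU(),
                                 nn.Linear(50, 1))

    def forward(self, img_codes, pc_codes, tokens):
        # img_codes: (B, 3, d_img); pc_codes: (B, 3, d_pc); tokens: (B, K)
        words = self.embed(tokens)                      # (B, K, d_word)
        scores = []
        for i in range(3):                              # shared weights across objects
            ground = self.img_proj(img_codes[:, i]).unsqueeze(1)
            out, _ = self.lstm(torch.cat([ground, words], dim=1))
            h = torch.cat([out[:, -1], self.pc_proj(pc_codes[:, i])], dim=-1)
            scores.append(self.mlp(h))
        return torch.cat(scores, dim=-1)                # (B, 3) logits

# Training step: loss = F.cross_entropy(listener(imgs, pcs, toks), target_indices)
```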
Attention mechanism over words We hypothesize that a listener forced to prioritize a few words in each utterance would learn to prioritize words that express properties that distinguish the target from the distractors (and, thus, perform better). To test this hypothesis, we augment the listener models with a bilinear attention mechanism over words. Specifically, to estimate the "importance" of each text-token \(u_{i}\) , we compare the output of the LSTM for \(u_{i}\) (denoted as \(r_{i}\) ) with the hidden state after the entire utterance has been processed (denoted as \(h\) ). The idea is that the hidden state acts as a summary of the grounded sentence (Shen & Lee, 2016) that can be used to assess the relative importance of each word as \(a_{i} \triangleq r_{i}^{T} \times W_{\mathrm{att}} \times h\) , where \(W_{\mathrm{att}}\) is a trainable diagonal matrix. With the attention mechanism in place, the final output of the LSTM is defined as \(\sum_{i = 1}^{|U|} r_{i} \odot \hat{a}_{i}\) , where \(\hat{a}_{i} = \frac{\exp(a_{i})}{\sum_{j = 1}^{|U|} \exp(a_{j})}\) and \(\odot\) is the point-wise product. The optimal parameters of each listener (and speaker), the hyper-parameter search strategy, and the exact details of training are provided in Appendix Sections 9.2 and 9.3. ## 4 NEURAL SPEAKERS Architecture Our speaker models are inspired by the show-and-tell model (Vinyals et al., 2015) developed for image captioning. Specifically, a speaker is a neural network that receives an image-based code vector per object in \(O\) and learns to generate an utterance \(U\) that refers to the target and which distinguishes it from the distractors. Similarly to the listener model, the main components of the speaker's architecture are an LSTM and a convolutional image network (we do not include point clouds when speaking, to allow for a more easily deployable model). During the first three time steps, the speaker receives sequentially the three image code vectors of a context (projected via an \(L_{2}\) -norm weight-regularized FC) and outputs a vector which is transformed into a logit prediction over our vocabulary via an FC. The soft-normalized version of the output is compared against the first ground-truth token \((u_{1})\) under the cross-entropy loss. For each remaining token \(u_{i} \in u_{2}, \ldots\) , the LSTM is conditioned on the previous ground-truth token \((u_{i - 1})\) and the cross-entropy comparison is repeated (i.e., we do teacher-forcing (Williams & Zipser, 1989)). In all speakers the target vector is fed third, thereby minimizing the length of dependence between the most important input object and the output (Sutskever et al., 2014) and eliminating the need to represent the index of the target separately. <--- Page Split --->
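As a companion to the speaker description above, here is a minimal PyTorch sketch of one teacher-forced training step: the three projected image codes are consumed first (target fed third), then each ground-truth token conditions the prediction of the next. Shapes and names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Speaker(nn.Module):
    """Show-and-tell style speaker: consume the three projected image codes
    (target fed third), then emit a logit over the vocabulary per time step."""
    def __init__(self, vocab_size, d_word=100, d_h=200, d_img=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_word)
        self.img_proj = nn.Linear(d_img, d_word)
        self.lstm = nn.LSTM(d_word, d_h, batch_first=True)
        self.to_logits = nn.Linear(d_h, vocab_size)

    def forward(self, img_codes, tokens):
        # img_codes: (B, 3, d_img) with the target last; tokens: (B, K) ground truth
        prefix = self.img_proj(img_codes)               # (B, 3, d_word)
        inputs = torch.cat([prefix, self.embed(tokens[:, :-1])], dim=1)
        out, _ = self.lstm(inputs)
        return self.to_logits(out[:, 2:])               # predictions for u_1 .. u_K

# Teacher-forced loss for one batch (padding handling omitted for brevity):
# logits = speaker(img_codes, tokens)                   # (B, K, vocab_size)
# loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), tokens.reshape(-1))
```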
Table 1: Performance of variants of the proposed listener architecture (image-modality, attention, and context-incorporation alternatives) on different generalization tasks and subpopulations of the test set (chance is \(33\%\) ; mean accuracy \(\pm 1\) standard error). The bottom table uses the best-performing model from the top table. Averages taken over five random seeds that controlled the data splits and neural-net initializations. <table><tr><td></td><td>Input-Modality</td><td>Language-Task</td><td>Object-Task</td></tr><tr><td rowspan="3">No Attention</td><td>Point Cloud</td><td>67.6 ± 0.3%</td><td>66.4 ± 0.7%</td></tr><tr><td>Image</td><td>81.2 ± 0.5%</td><td>77.4 ± 0.7%</td></tr><tr><td>Image & Point Cloud</td><td>83.1 ± 0.2%</td><td>78.9 ± 1.0%</td></tr><tr><td rowspan="3">With Word-level Attention</td><td>Point Cloud</td><td>67.4 ± 0.3%</td><td>65.6 ± 1.4%</td></tr><tr><td>Image</td><td>81.7 ± 0.5%</td><td>77.6 ± 0.8%</td></tr><tr><td>Image & Point Cloud</td><td>83.7 ± 0.2%</td><td>79.6 ± 0.8%</td></tr></table> <table><tr><td rowspan="2">Architecture</td><td colspan="4">Subpopulations</td></tr><tr><td>Overall</td><td>Close</td><td>Far</td><td>Sup-Comp</td></tr><tr><td>At-Once</td><td>75.9 ± 0.5%</td><td>67.4 ± 1.0%</td><td>83.8 ± 0.6%</td><td>74.4 ± 1.3%</td></tr><tr><td>Separate-Augment</td><td>79.4 ± 0.8%</td><td>70.1 ± 1.3%</td><td>88.1 ± 0.6%</td><td>75.2 ± 2.1%</td></tr><tr><td>Separate (proposed)</td><td>79.6 ± 0.8%</td><td>69.9 ± 1.3%</td><td>88.1 ± 0.4%</td><td>76.0 ± 1.6%</td></tr></table> To find the best hyper-parameters ( \(L_{2}\) weights, dropout rate, and number of LSTM neurons) and the optimal (per validation) epoch, during training we sample synthetic utterances of each model and use a pretrained listener to select the combination with the highest listener accuracy. We found this approach to produce model parameters with a stronger correlation between the training 'progress' and the quality of produced utterances than using listening-unaware metrics like BLEU (Papineni et al., 2002). Variations In principle, the above speaker can learn to generate language that follows the discriminative characteristics of the referential ground truth. To test the degree to which the distractors are taken into account for this purpose, we experiment with a speaker that is "context-unaware" by construction. This speaker uses the image encoding of the target object only, at both training and test time, and is otherwise identical to the above model. Then, motivated by the recursive social reasoning formalized in the Rational Speech Act framework (Goodman & Frank, 2016), we create a listener-aware speaker that plans synthetic utterances according to their capacity to be discriminative, as judged by an "internal" listener. In this case, a speaker's sampled utterance \(U\) is scored as: \[\mathrm{score}(U) = \beta \log (P_{L}(t|U)) + (1 - \beta) \sum_{i = 1}^{|U|}\frac{\log(P_{S}(u_{i}|O))}{|U|^{\alpha}}, \quad (1)\] where \(P_{L}\) is the listener's probability of predicting the target \((t)\) given \(U\) , \(u_{i}\) is a token of \(U\) , and \(P_{S}\) is the likelihood of the speaker generating \(U\) given the objects in \(O\) . The parameter \(\alpha\) controls a length-penalty term to discourage short sentences (Wu et al., 2016), while \(\beta\) controls the relative importance of the speaker's vs. the listener's opinions. ## 5 LISTENER EXPERIMENTS We evaluated our listener's generalization performance using two tasks based on different data splits. In the language generalization task, we test on target objects that were seen as targets in at least one context during training, but ensured that all utterances in the test split are from unseen speakers.
In the more challenging object generalization task, we restricted the set of objects that appeared as targets in the test set to be disjoint from those in training, such that all objects and utterances in the test split are unseen. For each of these tasks, we evaluated choices of input modality and word attention, using \([80\%, 10\%, 10\% ]\) of the data for training, validation, and test in all experiments. Listener accuracies are shown in Table 1 (top). <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 3: (A) The listener places more attention on adjectives in close (orange) triplets than far (blue) ones. (B) Lesioning words from highest attention to lowest worsens performance more than lesioning random words or lesioning lowest-attention words. </center> First, as expected, all architectures have higher accuracy on the language generalization task (by 3.2% on average). Second, the attention mechanism on words yields a mild performance boost, as long as images are part of the input. Third, images provide a much better input than point clouds when only one modality is used. In other words, despite being an intrinsic 3D representation, point clouds alone seem to provide a weaker input signal, perhaps due to their relative lack of high-frequency details. Finally, we find large gains in accuracy (4.1% on average) from exploiting the two modalities simultaneously, potentially implying a complementarity between the two representations that the network can exploit. Next, we evaluate how the different approaches to incorporating context information described in Section 3 affect listener performance. We focus on the more challenging object generalization task, using models that include attention and both object modalities. See Table 1 (bottom) for results. We find that the At-Once architecture, which consumes the entire context with a single (non-weight-shared) LSTM, performs significantly worse than both the Separate and Separate-Augment architectures (which use an explicit weight-sharing mechanism), and that the latter two achieve similar performance to each other. It is plausible that our alternative strategies for incorporating context information would yield an advantage in close contexts, where finer distinctions must be made. However, we did not observe differences between the Separate and Separate-Augment variants in either the far or the close subpopulations; we do find that far contexts were easier for all models than close contexts. Surprisingly, we found that the Separate architecture remains competitive against the Separate-Augment architecture even in the subpopulation that includes utterances with superlatives and/or comparatives ("skinnier"/"skinniest"), which made up \(\sim 16\%\) of the test set and which make explicit reference to context. Since the Separate architecture is the most flexible (see Section 5.2 for a demonstration of this), and is also simpler than Separate-Augment while performing equally well, we focus on it in the following sections. ### 5.1 EXPLORING LEARNED REPRESENTATIONS Which aspects of a sentence are most critical for our listener's performance? To inspect the properties of words receiving the most attention, we ran a part-of-speech tagger on our corpus. We found that the highest attention weight is placed on nouns, controlling for the length of the utterance.
However, adjectives that modify nouns received more attention in close contexts (controlling for the average occurrence in each context), where nouns are often not sufficient to disambiguate (see Fig. 3A). To more systematically evaluate the role of higher-attention tokens in listener performance, we conducted an utterance-lesioning experiment. For each utterance in our dataset, we successively replaced words with the <UNK> token according to three schemes: (1) from highest attention to lowest, (2) from lowest attention to highest, and (3) in random order. We then fed these through an equivalent listener trained without attention. We found large differential performance from random in both directions (see Fig. 3B). This ablation result was found across a wide range of utterance lengths. Our word-attentive listener thus appears to rely on context-appropriate content words to successfully disambiguate the referent. Examples demonstrating where the attention is being placed on utterances produced by humans are given in Appendix Fig. 7. To test the extent to which our listener is relying on the same semantic parts of the object as humans, we conducted a lesion experiment on the visual input rather than the linguistic one. We took the subset of our test set where (1) all chairs had complete part annotations available (Yi et al., 2016) and (2) the corresponding utterance mentioned a single part (17.5% of our test set). We then rendered lesioned versions of all three objects on each trial by removing pixels corresponding to parts, according to two schemes: removing a single part or keeping a single part. We did this either for the mentioned part, or for another part chosen at random. We report listener accuracies on these lesioned contexts in Table 2. <--- Page Split ---> Table 2: Testing the part-awareness of the neural listener by lesioning different parts of the objects. Reported is the average accuracy of the listener under different lesions. <table><tr><td></td><td>Intact Object</td><td>Single Part Lesioned</td><td>Single Part Present</td></tr><tr><td>Mentioned Part</td><td rowspan="2">78.5%</td><td>41.8% ± 0.1</td><td>66.6% ± 0.1</td></tr><tr><td>Random Part</td><td>67.0% ± 0.2</td><td>37.4% ± 0.1</td></tr></table> ![](images/6_0.jpg) <center>Figure 4: Top-scoring retrieved results in collections of unseen objects with natural-language queries. Bottom two rows include out-of-class examples from collections of lamps, sofas and tables. </center> We found that removing random parts hurts the accuracy by \(11\%\) on average, but removing the mentioned part dropped accuracy more than three times as much, nearly to chance. Conversely, keeping only the mentioned part while lesioning the rest of the image merely drops accuracy by \(11.9\%\) , while keeping a non-mentioned (random) part alone brings accuracy down to \(37.4\%\) on average. In other words, on trials when participants depended on information about a part to communicate the object to their partner, we found that localized information about that part was both necessary and sufficient for the performance of our listener model. ### 5.2 USING LISTENER FOR RETRIEVAL IN NOVEL OBJECT COLLECTIONS Finally, as a demonstration of the broader applicability of our listener, we consider the problem of searching a large database of 3D objects using natural language queries. A key advantage of the proposed listener is its flexibility to be applied to arbitrarily sized contexts. We exploit this flexibility by using a pre-trained listener to measure the compatibility \(\mathcal{L}(o_i, U)\) between every object of a test collection \(O = \{o_1, \ldots , o_N\}\) and the query \(U\) .
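A sketch of this retrieval procedure follows; `compatibility` is a hypothetical wrapper around the trained listener's single-object branch (the score \(\mathcal{L}(o_i, U)\) before any soft-max), and the tensor shapes are our own assumptions.

```python
import torch

def retrieve(listener, img_codes, pc_codes, query_tokens, k=5):
    """Score every object in a collection against a natural-language query
    and return the indices of the top-k matches. img_codes: (N, d_img),
    pc_codes: (N, d_pc), query_tokens: (K,) token ids of the query."""
    scores = []
    with torch.no_grad():
        for i in range(img_codes.size(0)):
            scores.append(listener.compatibility(img_codes[i], pc_codes[i],
                                                 query_tokens))
    scores = torch.stack(scores)
    return torch.topk(scores, k).indices   # e.g., query "curved back and fat legs"
```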
In Figure 4 (top) we show the chairs of the held-out splits (a set of 900 chairs) with the highest compatibility for a range of utterance queries. Additionally, we show the results of applying this model (trained on chairs) to the entire out-of-training (ShapeNet) classes of sofas, tables, and lamps (object sets of size 3.2K, 8.5K, and 2.3K, respectively). We see surprisingly good results on searching these transfer categories. This further supports the part-awareness of the learned embedding, since the commonality between a chair and a table can be primarily expressed through their shared parts. In the Appendix we include additional queries for chairs (Fig. 10) and non-chairs (Fig. 11). ## 6 SPEAKER EXPERIMENTS Having established that our neural listener learns useful representations with surprisingly structured properties, we now proceed to evaluate our neural speakers (see Fig. 5 for examples). We evaluate them by measuring their success in referential games with two different kinds of partners: with an independently trained listener model and with human listeners on Amazon Mechanical Turk (see Table 3). <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 5: Top-scoring synthetic utterances generated from listener-aware and context-aware speakers for unseen targets. Proportions correspond to the classification scores of our independent evaluating listener. </center> ![](images/7_1.jpg) <center>Figure 6: Our listener-aware speaker can produce informative referring expressions for out-of-class objects in context. Here, we apply our search technique to the collection of ShapeNet tables to produce triplets of well-separated objects. We use the queries 'no legs' (left), 'modern' (center), and 'x' (right) to construct each triplet. Notice that the target of each triplet (selected from the highest-ranked matches) reflects the semantics of the used query, as opposed to the distractors (selected from the lowest-ranked matches). </center> Critically, to conduct a fair evaluation using a neural listener, we split our training data in half. The evaluating listener was trained using one half, while the scoring (or "internal") listener used by the speaker to choose utterances was trained on the remaining half. For our human evaluation, we used the context- and listener-aware speakers to generate synthetic referring expressions on the test set. To avoid data spillage, we use all training data to train the internal listeners here. We then showed these referring expressions to participants and asked them to select the object from context that the speaker was referring to. We collected approximately 2.58 responses for each triplet. For all speaking experiments, we used the same object-generalization splits used in the listening experiments. The synthetic utterances were the best-scoring sentences according to each model, with optimal \(\alpha\) and a subset of \(\beta\) values (see Appendix Section 12 for the effect of these hyper-parameters over a wider range). We found that our listener-aware speaker, which uses an internal listener model to produce informative utterances, performs significantly better in reference games.
While its success with the evaluating listener model may be unsurprising, given the architectural similarity of the internal listener and the evaluating listener, human listeners were 10.4 percentage points better at picking out the target on utterances produced by the listener-aware speaker. While for listeners we found it was sufficient to bring context into the model only at the final stage (the soft-max over objects), we found that for speakers it was helpful to bring context into play earlier: the context-unaware speaker does significantly worse than the context-aware one (64.0% vs. 76.6%). Qualitatively, we note that both context/listener-aware speakers produce succinct descriptions (average sentence length 4.21 vs. 4.97), but the listener-aware speaker uses a much richer vocabulary (14% more unique nouns and 33% more unique adjectives, after controlling for the average length discrepancy). Table 3: Speaker evaluations. For the neural listeners, five random seeds controlling the weight initialization and speaker-listener data splits were used. <table><tr><td>Speaker-Architecture</td><td>Listener Model</td><td>Human Listeners</td></tr><tr><td>Context-unaware</td><td>64.0 ± 1.7%</td><td>-</td></tr><tr><td>Context-aware (β = 0.0)</td><td>76.6 ± 1.0%</td><td>68.3</td></tr><tr><td>Listener-aware (β = 0.5)</td><td>85.9 ± 0.4%</td><td>-</td></tr><tr><td>Listener-aware (β = 1.0)</td><td>92.2 ± 0.5%</td><td>78.7</td></tr></table> <--- Page Split ---> As a final qualitative examination of our speakers' generalization ability, we ran a simple out-of-class speaking experiment. We constructed well-separated contexts from the search results presented in Section 5.2, taking as the target the highest-ranked exemplar and choosing distractors from among the lowest-ranked. Our best speaker model produced promising results (see Fig. 6). ## 7 RELATED WORK Image labeling and captioning Our work builds on recent progress in the development of vision models that involve some amount of language data, including object categorization (Simonyan & Zisserman, 2014; Zhang et al., 2014) and image captioning (Karpathy & Fei-Fei, 2015; Vinyals et al., 2015; Xu et al., 2016). Unlike object categorization, which pre-specifies a fixed set of class labels to which all images must project, our system uses open-ended, natural language. Similarly to other recent works in image captioning (Luo & Shakhnarovich, 2017; Monroe et al., 2017; Vedantam et al., 2017), instead of captioning a single image in isolation, our systems learn how to communicate across diverse semantic contexts. More importantly, using 'clean' images of separate articulated objects enables the generation of very fine-grained, part-based descriptions. Reference games In our work we use reference games in order to operationalize the demand to be relevant in context. The basic arrangement of such games can be traced back to the language games explored by Wittgenstein (Wittgenstein, 1953) and Lewis (Lewis, 1969). For decades, such games have been a valuable tool in cognitive science to quantitatively measure inferences about language use and the behavioral consequences of those inferences (Rosenberg & Cohen, 1964; Krauss & Weinheimer, 1964; Clark & Wilkes-Gibbs, 1986; van Deemter, 2016). Recently, these approaches have also been adopted as a benchmark for discriminative or context-aware NLP (Paetzel et al., 2014; Andreas & Klein, 2016; Cohn-Gordon et al., 2018; Vedantam et al., 2017; Su et al., 2017; Lazaridou et al., 2018).
Rational Speech Acts framework Rational Speech Act (RSA) models provide a probabilistic framework for deriving linguistic behavior from general principles of social cognition (Goodman & Frank, 2016). At the core of the RSA framework is the Gricean proposal (Grice, 1975) that speakers are decision-theoretic agents who select utterances \(u\) that are parsimonious yet informative about the state of the world \(w\) . RSA formalizes this notion of informativity as the expected reduction in the uncertainty of an (internally simulated) listener \(L\) : \(S(u|w) \propto \exp \{\log L(w|u)\}\) , \(L(w|u) \propto \mathcal{L}(u, w)\) . This speaker \(S\) is pragmatic because it considers informativity in terms of a rational listener agent \((L)\) who updates their beliefs about the world according to the literal semantics of the language \((\mathcal{L})\) . Previous work has shown that RSA models account for context sensitivity in human speakers (Graf et al., 2016; Monroe et al., 2017; Yu et al., 2017; Fried et al., 2017). Our speaking results add evidence for the effectiveness of this approach. ## 8 CONCLUSION AND FUTURE DIRECTIONS Taken together, our results show that natural language, derived from communication in context, provides a strong objective for learning to make fine-grained distinctions between objects, with an emphasis on their shared part-structure. An exciting future application of this work would be to leverage these techniques for improving unsupervised part segmentation and 3D shape retrieval, as well as context-aware shape synthesis, providing an advance over existing context-unaware synthesis techniques (Chen et al., 2018). ## ACKNOWLEDGMENTS ## 9 APPENDIX ### 9.1 DETAILS ON BUILDING THE CONTEXTS To build our contrastive triplets, we first compute the 2-nearest-neighbor graph of all ShapeNet chairs based on their Euclidean latent distances and use a subset of 1K chairs, those with the highest in-degree on this graph, to "seed" the triplet generation. Given a node of this graph, we select its two nearest neighbors from the entire shape collection to form a triplet of highly similar ("close") objects, and also select the two objects that are closest to it but which are also more distant from it than the median of all pairwise distances, to form a triplet of relatively "far" objects. Having two types of contexts (close and far) allowed us to collect contrastive language with various degrees of specificity. To counterbalance the dataset, we ensured that each object of a triplet alternated roles (with the remaining two) as a distractor or target, and that each resulting combination was annotated by at least 4 humans. The AE used to make the embedding was of a relatively small size (64D) to promote meaningful Euclidean comparisons. Also, for the close triplets we applied a manually tuned threshold to semi-automatically reject triplets that contained two indistinguishable geometric objects, e.g., two that were listed in ShapeNet but varied only in their texture. ### 9.2 LISTENER DETAILS Table 4: Optimal hyper-parameters for neural listener architectures using both images and point clouds and word attention.
<table><tr><td>Param/Architecture</td><td>Proposed</td><td>At-Once</td><td>Separate-Augment</td></tr><tr><td>Learning-rate</td><td>0.0005</td><td>0.001</td><td>0.001</td></tr><tr><td>Label-smoothing</td><td>0.9</td><td>0.9</td><td>0.9</td></tr><tr><td>L2-reg.</td><td>0.3</td><td>0.05</td><td>0.09</td></tr><tr><td>RNN-dropout</td><td>0.5</td><td>0.7</td><td>0.45</td></tr></table> The optimal values for the hyper-parameters used by each listener model (using both point clouds and images, and word attention) are given in Table 4. All listeners use an MLP with [100, 50] hidden neurons (FC-ReLU (Maas et al., 2013)) with batch normalization after each layer, and an LSTM with 100 hidden units. The GloVe embedding was also 100-dimensional and was fine-tuned during training. For the point-cloud latent bottleneck codes (128D) and the VGG image features (4096D), we use dropout with 0.5 keep probability to zero half their entries before using the FC projection layers. The same dropout mask was applied to the codes of a given triplet. The ground-truth indicator vectors were label-smoothed (Szegedy et al., 2015). Assigning a probability of 0.9333 to the ground-truth target and 0.0333 to the distractors (smoothing of 0.9, second row of Table 4) yielded a performance boost of \(\sim 2\%\) . Label smoothing has also been found in previous work to improve generalization (Szegedy et al., 2015) or to reduce mode-collapse in GANs (Salimans et al., 2016). We note that we did not manage to improve the best attained accuracies by applying layer normalization (Ba et al., 2016) in the LSTM, or adversarial regularization (Miyato et al., 2017) on the word embeddings. Dropout (Srivastava et al., 2014) was by far the most effective form of regularization for the listener ( \(\sim [8 - 9]\%\) ), followed by \(L_{2}\) weight regularization on the projection layers ( \(\sim [2 - 3]\%\) ). Hyper-parameter Search We did a grid search over the space of hyper-parameters associated with each listener type separately. To circumvent the exponential growth of this space, we searched it in two phases. First, we optimized the learning rate (over [0.0001, 0.0005, 0.001, 0.002, 0.004, 0.005]) in conjunction with the dropout (keep probability) applied at the RNN's input, in the range [0.4-0.7] with increments of 0.05. Given the acquired optimal values, we conducted the second stage of the search over the \(L_{2}\) weight regularization (over [0.005, 0.01, 0.05, 0.1, 0.3, 0.9]), label smoothing ([0.8, 0.9, 1.0]), and dropout after the VGG/PC-AE projected vectors ([0.4, 0.5, 0.7, 1.0]). In this search, we used a single random seed to control for the data split, which was based on the object-generalization task. Details on the ablated listeners For the "Separate-Augment", a convolutional layer for aggregating the three encodings showed better performance than an FC. Also, the order-invariant max/mean poolings \((f,g)\) produced better results than other alternatives (e.g., using the identity <--- Page Split ---> function in their place). Using a separate MLP to process the point cloud data (via concatenation with the output of the RNN) was slightly better than feeding it directly into the recurrent net (after the tokens of each utterance were processed). However, conditioning the recurrent net with point clouds and using the images at the end of the pipeline significantly deteriorated all attained results. We hypothesize that the gradient flow is better when the (lower-quality) point cloud data is processed closer to the loss.
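The label-smoothing scheme described above is easy to reproduce. A minimal sketch follows; the exact split of probability mass (smoothing \(s\) gives the target \(s + (1-s)/3\) and each distractor \((1-s)/3\)) is inferred from the reported 0.9333/0.0333 numbers and is our reading, not a quoted implementation.

```python
import torch

def smoothed_targets(target_idx, smoothing=0.9, n_objects=3):
    """Label-smoothed indicator vectors over the objects of each triplet.
    target_idx: (B,) long tensor of ground-truth object indices.
    With smoothing=0.9: target gets 0.9 + 0.1/3 = 0.9333, distractors 0.0333."""
    batch = target_idx.size(0)
    off = (1.0 - smoothing) / n_objects
    t = torch.full((batch, n_objects), off)
    t[torch.arange(batch), target_idx] = smoothing + off
    return t

# Soft-target cross-entropy against the listener's (B, 3) logits:
# loss = -(smoothed_targets(y) * torch.log_softmax(logits, -1)).sum(-1).mean()
```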
Training details We trained the "Proposed" and the "At-Once" listeners for 500 epochs and the "Separate-Augment" for 350. This was sufficient, as more training only resulted in more overfitting without improving the achieved test/val accuracies. We halved the learning rate every 50 epochs if the validation error had not improved during that interval. Every 5 epochs we evaluated the model on the validation split in order to select the epoch/parameters with the highest attained accuracy. Because the "At-Once" is sensitive to the input order of the geometric codes, we randomly permute them during training. We use the ADAM (Kingma & Ba, 2014) \((\beta_{1} = 0.9)\) optimizer for all experiments. ### 9.3 SPEAKER DETAILS Table 5: Optimal hyper-parameters for the context-aware neural speaker. <table><tr><td>LSTM Size</td><td>Learning-rate</td><td>L2-reg.</td><td>Word-dropout</td><td>Image-dropout</td><td>RNN-out dropout</td></tr><tr><td>200</td><td>0.003</td><td>0.005</td><td>0.8</td><td>0.5</td><td>0.9</td></tr></table> Hyper-parameter Search To optimize the hyper-parameters of a speaker, we conducted a two-stage grid search, as we did for the listeners. First, we optimized a context-aware speaker (with one seed, under the object generalization task) with respect to: the number of hidden neurons of the LSTM (100 and 200), the learning rate ([0.0005, 0.001, 0.003]), the dropout keep probability for the word vectors ([0.8, 0.9, 1.0]), and the dropout applied at the RNN's output, before the linear word-to-logits transformation (with keep probabilities of [0.8, 0.9, 1.0]). The two best models were further optimized by introducing a dropout layer after the image-projection layer (with keep probabilities in [0.8, 0.9, 1.0]) and \(L_{2}\) weight regularization applied at the same projection layer (with values in [0, 0.005, 0.01]). The optimal parameters of this search are reported in Table 5. Model Selection To do model selection for a speaker, we used a pre-trained listener (with the same train/test/val splits) to evaluate the synthetic utterances produced by the speaker. To this purpose, the speaker generated one utterance for each validation triplet via greedy (arg-max) sampling every 10 epochs of training, and the listener reported the accuracy of predicting the target given the synthetic utterance. At the end of training (300 epochs), the epoch/model with the highest accuracy was selected. Other details We initially used GloVe to provide our speaker with pretrained word embeddings, as in the listener, but found that it was sufficient to train the word embedding from uniformly random initialized weights (in the range [-0.1, 0.1]). We initialized the bias terms of the linear word-encoding layer with the log probability of the frequency of each word in the training data (Karpathy & Fei-Fei, 2015), which provided faster convergence. We train with SGD using ADAM \((\beta_{1} = 0.9)\) and apply norm-wise gradient clipping with a cut-off threshold of 5.0. The sampled and training utterances have a maximal length of 33 tokens (99th percentile of the dataset), and for each speaker we sample and score 50 utterances per triplet at test time (via Eq. 1). The optimal length penalty \((\alpha)\) is 0.7 for the context-unaware speaker and 0.6 for the rest.
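The test-time scoring via Eq. (1) amounts to a small re-ranking routine. Below is a minimal sketch under our own interface assumptions (the internal listener's log-probability of the target and the speaker's per-token log-likelihoods are precomputed for each sampled candidate).

```python
def eq1_score(tokens, log_p_listener, token_logps, alpha=0.6, beta=1.0):
    """Eq. (1): beta * log P_L(t|U) + (1-beta) * sum_i log P_S(u_i|O) / |U|^alpha.
    `log_p_listener` is the internal listener's log-probability of the target
    given the utterance; `token_logps` are the speaker's per-token log-likelihoods."""
    speaker_term = sum(token_logps) / (len(tokens) ** alpha)
    return beta * log_p_listener + (1.0 - beta) * speaker_term

def best_utterance(candidates, alpha=0.6, beta=1.0):
    """Pick the top-scoring utterance among (tokens, log_p_listener, token_logps)
    triples, e.g., the 50 samples drawn per test triplet."""
    return max(candidates, key=lambda c: eq1_score(c[0], c[1], c[2], alpha, beta))
```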
## 10 PRE-TRAINED IMAGES AND POINT CLOUDS We train the PC-AE under the Chamfer loss with a bottleneck of 128 dimensions, using point clouds of 2048 points extracted uniformly area-wise from each 3D CAD model. For the VGG-16 encoding, we use the 4096-dimensional output activations of its second fully-connected layer \((fc7)\) . To fine-tune the VGG-16, we optimized it under the cross-entropy loss for an 8-way classification task, which included photo-realistic rendered images of textureless meshes in the 8 largest object classes of ShapeNet (cars, chairs, aeroplanes, ...). The total number of shapes was 36,632. The fine-tuning took 30 epochs of training: for the first 15 we optimized only the weights of the last \((fc8)\) layer, and <--- Page Split ---> for the last 15 the weights of all layers. On the test split of a \([90\% , 5\% , 5\% ]\) (train/test/val) partition, the network achieves 96.9% classification accuracy. ## 11 FURTHER QUALITATIVE RESULTS ## 11.1 SPLIT UTTERANCES While taking entire triplets as input to the listener LSTM did not improve listener performance on utterances containing comparatives and superlatives (which in theory should be difficult to evaluate for isolated objects), we also anecdotally considered another subpopulation of utterances that perhaps even more strongly relies on context. These utterances distinguish the target by associating it explicitly with one distractor (e.g., "from the two that have thin legs, the one..."). We used an ad hoc set of search queries to find such utterances among the test set \((\approx 1.5\%\) of utterances) and found that both context-aware architectures do perform noticeably better on these utterances (67.4 \(\pm 3.0\%\) for "Separate-Augment" and 65.8 \(\pm 5.2\%\) for "At-Once", compared to only 62.5 \(\pm 3.7\%\) for the proposed model). However, given the low occurrence of such cases, these effects were not significant, and we decided that the negligible gains of the "Separate-Augment" architecture were not worth the increase in model complexity and rigidity with respect to context size (see Section 5.2 for a demonstration of this shortcoming). <--- Page Split ---> Figure 7: Examples of attention weights on human utterances. The listener LSTM learns attention weights that emphasize more informative words when forming its linguistic representation. For these speaker utterances drawn from our corpus, we colored each word according to the weight assigned by the attention mechanism, with low-attention words in blue and high-attention words in red. ![](images/15_0.jpg) <--- Page Split ---> Figure 8: Examples of errors in the listener model. Our top-performing listener model appeared to struggle to interpret referential language that relied on metaphors, negations, precisely counting parts, ambiguous modifiers, or descriptions of the object's texture or material. All examples are drawn from the test set and were correctly classified by human listeners in the original task. ![](images/16_0.jpg) <--- Page Split ---> Figure 9: Examples of errors in speaker models. Sometimes even the pragmatic (listener-aware) speaker produces insufficiently specific utterances that mention only undiagnostic features, or produces utterances that are literally false of the target (e.g., there technically is a hole in the back) while still succeeding in distinguishing the objects. ![](images/17_0.jpg) <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 10: Search results for chairs. Gallery of retrieved exemplars of held-out chairs for different queries.
Only the top five are shown. </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 11: Search results for out-of-class objects. Gallery of retrieved exemplars from other ShapeNet furniture categories for different queries. Top five and bottom five are shown, demonstrating intuitive contrasts with the highest-ranked ones. Note that there are some mislabeled objects in ShapeNet. </center> <--- Page Split ---> Figure 12: Effect of context on production: Synthetic utterances generated by a literal (context-aware) and pragmatic (listener-aware) speaker. The top and bottom rows show utterances produced for the same target in a far and close context, respectively. The best-performing listener's prediction confidence for each object is displayed above: while both speaker models produce similarly effective utterances in far contexts, the literal speaker fails to produce effective utterances in close contexts. ![](images/20_0.jpg) <--- Page Split ---> ## 12 LENGTH PENALTY AND SPEAKER-AWARENESS Figure 13: Measuring the effect of using different \(\alpha\) , \(\beta\) values to select the top-1 scoring sentence for context-aware and context-unaware speakers when creating utterances for the objects/contexts of the validation split. The y-axis in each subplot denotes the performance of a listener who is used to rank and evaluate the sentences. Averages are with respect to 5 random seeds controlling the data splits and the initializations of the neural networks. ![](images/21_0.jpg) <center>Figure 14: Effect of using a different fraction of the training data for the evaluating listener when using two separate listeners (for evaluating and for scoring a speaker's results). On the x-axis is the fraction \(f\) of the entire training data (80% of the dataset) that is used by the evaluator. \(1 - f\) is used by the utterance-scoring listener. Averages are with respect to 5 random seeds controlling the data splits and the initializations of the neural networks. </center> ![](images/21_1.jpg) It is interesting to see that even a context-unaware speaker can generate sentences that a listener can re-rank to find a top-scoring sentence that is very discriminative. The context-unaware (listener-aware) examples on our website demonstrate this improvement. ## 12.1 MORE ABLATIONS ## 12.2 GAME INTERFACE AND CORPUS Each game consisted of 69 rounds and participants swapped speaker and listener roles after each round. The game's interface is depicted in Figure 16. Participants were allowed to play multiple games, but most participants in our dataset played exactly one game (81% of participants). The most distinctive words in each triplet type (as measured by point-wise mutual information) are shown in Table 7. <--- Page Split ---> Table 6: Performance of different listeners in specific subpopulations in the earlier language generalization task. Averages over five random seeds that controlled the data splits and the neural-net initializations.
<table><tr><td rowspan="2">Architecture</td><td colspan="5">Subpopulations</td></tr><tr><td>Overall</td><td>Close</td><td>Far</td><td>Sup-Comp</td><td>Split</td></tr><tr><td>Separate (Proposed)</td><td>83.7 ± 0.2%</td><td>77.0 ± 0.8%</td><td>90.3 ± 0.3%</td><td>80.7 ± 0.3%</td><td>64.6 ± 3.7%</td></tr><tr><td>Separate-Augment</td><td>84.4 ± 0.5%</td><td>78.5 ± 0.8%</td><td>90.2 ± 0.7%</td><td>80.9 ± 0.4%</td><td>68.9 ± 2.3%</td></tr><tr><td>Aggregate</td><td>78.4 ± 0.2%</td><td>71.5 ± 0.6%</td><td>85.2 ± 0.3%</td><td>76.0 ± 0.8%</td><td>61.8 ± 3.0%</td></tr></table> Figure 15: Listener's accuracy for different sizes of training data, under the object generalization task. The original split includes \([80\% , 10\% , 10\% ]\) for training/test/val purposes; thus the maximum size of the training data is 0.8 of the entire dataset when the fraction is 1.0 (x-axis). The listener model uses the main architecture with attention, images, and point clouds, and its accuracy is always measured on the original \((10\%)\) test split. Results with five random seeds controlling the original data split and the neural net's initialization. ![](images/22_0.jpg) <center>Table 7: Most distinctive words in each triplet type according to point-wise mutual information (excluding tokens that appeared fewer than 30 times in the dataset). Lower numbers are more distinctive of far and higher numbers are more distinctive of close. </center> <table><tr><td rowspan="2">far</td><td>word</td><td>office</td><td>sofa</td><td>regular</td><td>folding</td><td>wooden</td><td>stool</td><td>wheels</td><td>metal</td><td>normal</td><td>rocking</td></tr><tr><td>pmi</td><td>-1.70</td><td>-0.94</td><td>-0.88</td><td>-0.84</td><td>-0.83</td><td>-0.79</td><td>-0.78</td><td>-0.71</td><td>-0.67</td><td>-0.66</td></tr><tr><td rowspan="2">close</td><td>word</td><td>alike</td><td>identical</td><td>thickness</td><td>texture</td><td>darker</td><td>skinnier</td><td>thicker</td><td>perfect</td><td>similar</td><td>larger</td></tr><tr><td>pmi</td><td>0.69</td><td>0.67</td><td>0.67</td><td>0.66</td><td>0.65</td><td>0.64</td><td>0.63</td><td>0.62</td><td>0.62</td><td>0.61</td></tr></table> <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 16: Reference game interface. </center> <--- Page Split --->
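Returning to the context-construction procedure of Appendix 9.1, the close/far triplet generation reduces to a few nearest-neighbor operations on the latent codes. A minimal numpy sketch follows; the seed count and all variable names are illustrative, and the manually tuned rejection thresholds mentioned in Appendix 9.1 are omitted.

```python
import numpy as np

def build_triplets(Z, n_seeds=1000):
    """Close/far triplet construction on PC-AE latent codes Z of shape (N, d).
    Seeds are the chairs with highest in-degree on the 2-NN graph; a close
    triplet adds the seed's two nearest neighbors, a far triplet the two
    closest objects that are still farther than the median pairwise distance."""
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    nn2 = np.argsort(D, axis=1)[:, :2]                 # 2-nearest-neighbor graph
    in_degree = np.bincount(nn2.ravel(), minlength=len(Z))
    seeds = np.argsort(-in_degree)[:n_seeds]
    median_d = np.median(D[np.isfinite(D)])
    close, far = [], []
    for s in seeds:
        close.append((s, *np.argsort(D[s])[:2]))
        cands = np.where(D[s] > median_d)[0]           # farther than the median...
        far.append((s, *cands[np.argsort(D[s][cands])[:2]]))  # ...but closest such
    return close, far
```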
reject
Reject
5.333333
ICLR_2019_paper_0163
iclr
2,019
# THE CONDITIONAL ENTROPY BOTTLENECK Anonymous authors Paper under double-blind review ## ABSTRACT We present a new family of objective functions, which we term the Conditional Entropy Bottleneck (CEB). These objectives are motivated by the Minimum Necessary Information (MNI) criterion. We demonstrate the application of CEB to classification tasks. We show that CEB gives: well-calibrated predictions; strong detection of challenging out-of-distribution examples and powerful whitebox adversarial examples; and substantial robustness to those adversaries. Finally, we report that CEB fails to learn from information-free datasets, providing a possible resolution to the problem of generalization observed in Zhang et al. (2016). ## 1 INTRODUCTION The field of Machine Learning has suffered from the following well-known problems in recent years:
- Vulnerability to adversarial examples. Essentially all machine-learned systems are currently believed by default to be highly vulnerable to adversarial examples. Many defenses have been proposed, but very few have demonstrated robustness against a powerful, general-purpose adversary. Lacking a clear theoretical framework for adversarial attacks, most proposed defenses are ad-hoc and fail in the presence of a concerted attacker (Carlini & Wagner, 2017a; Athalye et al., 2018).
- Poor out-of-distribution detection. Classifiers do a poor job of signaling that they have received data that is substantially different from the data they were trained on. Ideally, a trained classifier would give less confident predictions for data that was far from the training distribution (as well as for adversarial examples). Barring that, there would be a clear, principled statistic that could be extracted from the model to tell whether the model should have made a low-confidence prediction. Many different approaches to providing such a statistic have been proposed (Guo et al., 2017; Lakshminarayanan et al., 2016; Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2017; DeVries & Taylor, 2018), but most seem to do poorly on what humans intuitively view as obviously different data.
- Miscalibrated predictions. Related to the issues above, classifiers tend to be very overconfident in their predictions (Guo et al., 2017). This may be a symptom, rather than a cause, but miscalibration does not give practitioners confidence in their models.
- Overfitting to the training data. Zhang et al. (2016) demonstrated that classifiers can memorize fixed random labelings of training data, which means that it is possible to learn a classifier with perfect inability to generalize. This critical observation makes it clear that a fundamental test of generalization is that the model should fail to learn when given what we call information-free datasets.

This paper does not set out to solve any of these problems. Instead, our sole interest is the learning of optimal representations. In pursuit of that goal, we attempt to be as general as possible, considering only how to define optimal representations, what objective function might be capable of learning them, and what requirements such an objective function places on the form of the model. Given an optimal (according to our criterion) objective function, however, it is natural to explore the problems listed above, to see if such an objective function can ameliorate some of the core issues in the field of machine learning.
We make those explorations in this paper, and find that our objective function, the Conditional Entropy Bottleneck (CEB) appears to impact all of the issues listed above. <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: (Left): Information Venn diagram showing the joint distribution over \(X, Y\) . (Right): The joint distribution \(Z_{X} \leftarrow X \leftrightarrow Y\) . \(Z_{X}\) is carefully positioned to indicate its conditional independence from \(Y\) given \(X\) . </center> ## 2 OPTIMAL REPRESENTATIONS Consider a joint distribution, \(p(x, y)\) , represented by the graphical model: \[X\leftrightarrow Y\] This joint distribution is our data, and may take any form. We don't presume to know how the data factors. It may factor as \(p(x, y) = p(x)p(y|x)\) , \(p(x, y) = p(y)p(x|y)\) , or even \(p(x, y) = p(x)p(y)\) . The first two factorings are depicted in Figure 1 in a standard information diagram showing the various entropies and the mutual information. We can ask: given this generic setting, what is the optimal representation? It seems there are only two options: capture all of the information in both \(X\) and \(Y\) (measured by the joint entropy, \(H(X, Y)\) ), or capture only the information shared between \(X\) and \(Y\) (measured by the mutual information, \(I(X; Y)\) ). The field of lossless compression is concerned with representations that perfectly maintain all of the information in both \(X\) and \(Y\) , as are the closely related studies of Kolmogorov Complexity (Kolmogorov, 1965) and Minimum Description Length (MDL) (Grunwald, 2007), all three of which are concerned with perfect reconstruction of inputs or messages. In contrast, we think that the field of machine learning is primarily concerned with making optimal predictions on unseen data. The requirements of perfect reconstruction from a compressed representation may result in the retention of much more information in the model than may be needed for prediction or stochastic generation tasks. For most such machine learning tasks, this points towards learning representations that capture only the information shared between \(X\) and \(Y\) , which is measured by the mutual information, \(I(X; Y)\) . The mutual information is defined in a variety of ways; we will use two (Cover & Thomas, 2006): \[I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) \quad (1)\] \(I(X; Y)\) measures the amount of information necessary to define the relationship between \(X\) and \(Y\) . For some fixed dataset \(X, Y\) , any information less than \(I(X; Y)\) must be insufficient to predict \(Y\) from \(X\) or vice- versa with minimal error. Equivalently, any information more than \(I(X; Y)\) must contain some superfluous information for those two tasks. For example, consider a labeled dataset, where \(X\) is high- dimensional and information- rich, and \(Y\) is a single integer. All of the information in \(X\) that is not needed to correctly predict the single value \(Y = y\) is useless for the prediction task defined by the dataset, and may be harmful to the performance of a machine learning system if retained in the learned representation, as we will show empirically below. Next, we formalize this intuition about the information required for an optimal representation. ## 3 MINIMUM NECESSARY INFORMATION We propose the Minimum Necessary Information (MNI) criterion for a learned representation. We can define MNI in three parts. First is Information: we would like a representation that captures semantically meaningful information. 
In order to measure how successfully we capture meaningful <--- Page Split ---> information, we must first know how to measure information. Thus, the criterion prefers informationtheoretic approaches, given the uniqueness of entropy as a measure of information (Shannon, 1948). The semantic value of information is given by a task, which is specified by the set of variables in the dataset. I.e., the dataset \(X\) , \(Y\) defines two tasks: predict \(Y\) given \(X\) , or predict \(X\) given \(Y\) . This brings us to Necessity: the information we capture in our representations must be necessary to solve the task. \(^2\) Finally, Minimality: this simply refers to the amount of information – given that we learn a representation that can solve the task, we require that the representation we learn retain the smallest amount of information about the task out of the set of all representations that solve the task. This part of the criterion restricts us from incorporating “non- semantic” information into our representation, such as noise or spurious correlation. More formally, in the case of two observed variables, \(X\) and \(Y\) , a necessary set of conditions for a representation \(Z\) to satisfy the MNI criterion is the following: \[I(X;Y) = I(X;Z) = I(Y;Z) \quad (2)\] This fully constrains the amount of information. To constrain the necessity of the information in the representation \(Z\) , the following conditions must be satisfied: \[p(y|x) = \int dz p(y|z)p(z|x)\qquad p(x|y) = \int dz p(x|z)p(z|y) \quad (3)\] These four distributions of \(z\) correspond to the two tasks: predict \(Y\) given \(X\) and predict \(X\) given \(Y\) . \(^3\) ## 4 THE CONDITIONAL ENTROPY BOTTLENECK One way to satisfy Equation (2) is to learn a representation \(Z_{X}\) of \(X\) only, indicated by the Markov chain \(Z_{X}\gets X\leftrightarrow Y\) . We show this Markov chain as an information diagram in Figure 1 (Right). The placement of \(H(Z_{X})\) in that diagram carefully maintains the conditional independence between \(Y\) and \(Z_{X}\) given \(X\) , but is otherwise fully general. Some of the entropy of \(Z_{X}\) is unassociated with any other variable; some is only associated with \(X\) , and some is associated with \(X\) and \(Y\) together. Figure 1 (Right), then, shows diagrammatically the state of the learned representation early in training. At the end of training, we would like \(Z_{X}\) to satisfy the equalities in Equation (2), which corresponds to Figure 1 (Left), where the gray region labeled \(I(X;Y)\) also corresponds to \(I(X;Z_{X})\) and \(I(Y;Z_{X})\) . Given the conditional independence \(Z_{X}\perp Y|X\) in our Markov chain, \(I(Y;Z_{X})\) is maximal at \(I(X;Y)\) , by the data processing inequality. However, \(I(X;Z_{X})\) does not clearly have a constraint that targets \(I(X;Y)\) . We cannot maximize \(I(X;Z_{X})\) in general while being compatible with the MNI criterion, as that is only constrained from above by \(H(X)\geq I(X;Y)\) . Instead, we could use the Information Bottleneck objective (Tishby et al., 2000) which starts from the same Markov chain and minimizes \(\beta I(X;Z_{X}) - I(Y;Z_{X})\) , but it is not immediately clear what value of \(\beta\) will achieve the MNI. Thus, we need a different approach to hit the MNI. 
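Before developing that approach, here is a small NumPy sketch (the joint distribution and the encoder below are illustrative toy values, not from the paper) that computes the three mutual informations for a discrete chain \(Z \leftarrow X \leftrightarrow Y\) and checks them against the data processing inequality and the constraints of Equation (2):

```python
import numpy as np

def mi(pab):
    """Mutual information (in nats) of a discrete joint distribution p(a, b)."""
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    mask = pab > 0
    return float((pab[mask] * np.log(pab[mask] / (pa @ pb)[mask])).sum())

# Illustrative joint p(x, y): X has 4 states, Y has 2 states.
pxy = np.array([[0.20, 0.05],
                [0.05, 0.20],
                [0.20, 0.05],
                [0.05, 0.20]])

# An illustrative stochastic encoder p(z|x), giving the chain Z <- X <-> Y.
pz_x = np.array([[0.9, 0.1],
                 [0.1, 0.9],
                 [0.9, 0.1],
                 [0.1, 0.9]])

px = pxy.sum(axis=1)
pxz = px[:, None] * pz_x        # p(x, z) = p(x) p(z|x)
pyz = pxy.T @ pz_x              # p(y, z), valid because Z is independent of Y given X

print(f"I(X;Y) = {mi(pxy):.3f} nats")
print(f"I(X;Z) = {mi(pxz):.3f} nats")
print(f"I(Y;Z) = {mi(pyz):.3f} nats")   # never exceeds I(X;Z), by the DPI
```

With these toy numbers the encoder captures more information about \(X\) than the task needs (\(I(X;Z) > I(X;Y)\)), so \(I(X;Z|Y) = I(X;Z) - I(Y;Z)\) is strictly positive; an MNI-optimal representation would instead drive all three printed values to equality.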
Considering the information diagram in Figure 1 (Left), we can notice the following identities when we have achieved the MNI: \[I(X;Y|Z_{X}) = I(X;Z_{X}|Y) = I(Y;Z_{X}|X) = 0 \quad (4)\] With our Markov chain and the chain rule of mutual information (Cover & Thomas, 2006), we have: \[I(X;Z_{X}|Y) = I(X,Y;Z_{X}) - I(Y;Z_{X}) = I(X;Z_{X}) - I(Y;Z_{X}) \quad (5)\] This conditional information is guaranteed to be non-negative, as both terms are mutual informations, and the Markov chain guarantees that \(I(Y;Z_{X})\) is no larger than \(I(X;Z_{X})\), by the data processing inequality. From an optimization perspective, this is ideal – we have a term that we can minimize, and we can directly know how far we are from the optimal value of 0 (measured in nats, so it is interpretable), when we are done (when it's close enough to 0 that we are satisfied), and when our model is insufficient for the task (i.e., when this term isn't close enough to 0). This leads us to the general Conditional Entropy Bottleneck objective: \[\mathrm{CEB}\equiv I(X;Z_{X}|Y) - I(Y;Z_{X}) \quad (6)\] Typically we would add a Lagrange multiplier on one of the two terms. In Appendix A, we present some geometric arguments to prefer leaving the two terms balanced. It is straightforward to turn this into a variational objective function that we can minimize. Taking the terms in turn:\(^4\) \[\begin{aligned} I(X;Z_{X}|Y) &= I(X;Z_{X}) - I(Y;Z_{X}) = H(Z_{X}) - H(Z_{X}|X) - H(Z_{X}) + H(Z_{X}|Y)\\ &= -H(Z_{X}|X) + H(Z_{X}|Y) = \langle \log e(z_{X}|x)\rangle - \langle \log p(z_{X}|y)\rangle \quad (7)\\ &\leq \langle \log e(z_{X}|x)\rangle - \langle \log p(z_{X}|y)\rangle + \mathrm{KL}[p(z_{X}|y)\,\|\,b(z_{X}|y)] \quad (8)\\ &= \langle \log e(z_{X}|x)\rangle - \langle \log b(z_{X}|y)\rangle \quad (9)\end{aligned}\] \(e(z_{X}|x)\) is our encoder. It is not a variational approximation, even though it has learned parameters. \(b(z_{X}|y)\) is the backward encoder, a variational approximation of \(p(z_{X}|y)\). In the second term, \(H(Y)\) can be dropped because it is constant with respect to the model: \[\begin{aligned} I(Y;Z_{X}) = H(Y) - H(Y|Z_{X}) \Rightarrow -H(Y|Z_{X}) &= \langle \log p(y|z_{X})\rangle \quad (10)\\ &\geq \langle \log p(y|z_{X})\rangle - \mathrm{KL}[p(y|z_{X})\,\|\,c(y|z_{X})] \quad (11)\\ &= \langle \log c(y|z_{X})\rangle \quad (12)\end{aligned}\] \(c(y|z_{X})\) is the classifier (although that name is arbitrary, given that \(Y\) may not be labels), which variationally approximates \(p(y|z_{X})\). The variational bounds derived above give us a fully tractable objective function that works on large-scale problems and supports amortized inference, which we call the Variational Conditional Entropy Bottleneck (VCEB): \[\mathrm{CEB}\equiv I(X;Z_{X}|Y) - I(Y;Z_{X})\Rightarrow \langle \log e(z_{X}|x)\rangle -\langle \log b(z_{X}|y)\rangle -\langle \log c(y|z_{X})\rangle \equiv \mathrm{VCEB} \quad (14)\] The distributions with letters other than \(p\) are assumed to have learned parameters, which we otherwise omit in the notation. In other words, all three of \(e(\cdot)\), \(b(\cdot)\), and \(c(\cdot)\) have learned parameters, just as in the encoder and decoder of a normal VAE (Kingma & Welling, 2014), or the encoder, classifier, and marginal in a VIB model.
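As a concrete illustration of Equation (14), the following PyTorch sketch implements the three distributions with Gaussian encoders and a softmax classifier (the layer sizes and architectures here are placeholder assumptions for illustration; the models actually used in our experiments are described in Appendix D):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributions as D

class VCEB(nn.Module):
    """Minimal variational CEB objective (Equation (14)) for classification."""
    def __init__(self, x_dim=784, n_classes=10, z_dim=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ELU(),
                                 nn.Linear(128, 2 * z_dim))  # e(z|x)
        self.back = nn.Linear(n_classes, 2 * z_dim)          # b(z|y)
        self.clf = nn.Linear(z_dim, n_classes)               # c(y|z)
        self.n_classes = n_classes

    @staticmethod
    def gaussian(params):
        # Split the network output into a mean and a log-scale.
        mu, log_sigma = params.chunk(2, dim=-1)
        return D.Independent(D.Normal(mu, log_sigma.exp()), 1)

    def loss(self, x, y):
        e = self.gaussian(self.enc(x))                 # e(z_X|x)
        z = e.rsample()                                # reparameterized sample
        b = self.gaussian(self.back(F.one_hot(y, self.n_classes).float()))
        log_c = -F.cross_entropy(self.clf(z), y, reduction="none")   # <log c(y|z_X)>
        residual = e.log_prob(z) - b.log_prob(z)       # variational bound on I(X;Z_X|Y)
        return (residual - log_c).mean()
```

Minimizing `loss` directly implements \(\langle \log e(z_{X}|x)\rangle - \langle \log b(z_{X}|y)\rangle - \langle \log c(y|z_{X})\rangle\); the `residual` term is exactly the quantity named next.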
We will name the \(I(X;Z_{X}|Y)\) term the Residual Information – this is the excess information in our representation beyond the information shared between \(X\) and \(Y\): \[Re_{X/Y}\equiv \langle \log e(z_{X}|x)\rangle -\langle \log b(z_{X}|y)\rangle \geq -H(Z_{X}|X) + H(Z_{X}|Y) = I(X;Z_{X}|Y) \quad (15)\] There are a number of natural variations on this objective. We describe a few of them in Appendix E.

## 5 THE INFORMATION BOTTLENECK

The Information Bottleneck (IB) (Tishby et al., 2000) learns a representation of \(X\) and \(Y\) subject to a soft information constraint: \[\mathrm{IB}\equiv \min \beta I(Z;X) - I(Z;Y) \quad (16)\] where \(\beta\) controls the size of the constraint. In Figure 2 we show the optimal surfaces for CEB and IB, labeling the MNI point on both. In Figure 4 we show the same surfaces for finite models, and show that adjusting \(\beta\) determines a unique point in these information planes relative to \(I(X;Y)\). As described in Tishby et al. (2000), IB is a tabular method, so it is not usable for amortized inference.\(^5\) Two recent works have extended IB for amortized inference. Both of these approaches <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 2: Geometry of the optimal surfaces for IB and CEB, with all points labeled. CEB rectifies IB's parallelogram by subtracting \(I(Y;Z)\) at every point. </center> rely on sweeping \(\beta\), and do not propose a way to set \(\beta\) directly to train models where \(I(X;Z) = I(Y;Z) = I(X;Y)\). Achille & Soatto (2018) presents InfoDropout, which uses IB to motivate a variation on Dropout (Srivastava et al., 2014). A variational version of IB is presented in Alemi et al. (2017). That objective is the Variational Information Bottleneck (VIB): \[\mathrm{VIB}\equiv \beta \left(\langle \log e(z_{X}|x)\rangle - \langle \log m(z_{X})\rangle\right) - \langle \log c(y|z_{X})\rangle \quad (17)\] Instead of the backward encoder, VIB has a marginal posterior, \(m(z_{X})\), which is a variational approximation to \(e(z_{X}) = \int dx\, p(x)e(z_{X}|x)\). Additionally, it has a hyperparameter, \(\beta\). We show in Appendix A that the optimal value is \(\beta = \frac{1}{2}\) when attempting to adhere to the MNI criterion. Following Alemi et al. (2018), we define the Rate (R): \[R\equiv \langle \log e(z_{X}|x)\rangle - \langle \log m(z_{X})\rangle \geq I(X;Z_{X}) \quad (18)\] We can compare variational CEB with VIB by taking their difference at \(\beta = \frac{1}{2}\). Note that both objectives have an elided dependence on \(\langle \log p(y)\rangle\) from the \(I(Y;Z_{X})\) term that we must track: \[\mathrm{CEB} - \mathrm{VIB}_{\beta = \frac{1}{2}} = \langle \log b(z_{X}|y)\rangle -\langle \log m(z_{X})\rangle -\langle \log c(y|z_{X})\rangle +\langle \log p(y)\rangle \quad (19)\] Solving for \(m(z_{X})\) when that difference is 0: \[m(z_{X}) = \frac{b(z_{X}|y)p(y)}{c(y|z_{X})} \quad (20)\] Since the optimal \(m^{*}(z_{X})\) is the marginalization of \(e(z_{X}|x)\), at convergence we must have: \[m^{*}(z_{X}) = \int dx\, p(x)e(z_{X}|x) = \frac{p(z_{X}|y)p(y)}{p(y|z_{X})} \quad (21)\] Depending on the distributional families and the parameterizations, this point may be difficult to find, particularly given that \(m(z_{X})\) only gets information about \(y\) indirectly through \(e(z_{X}|x)\). Consequently, for otherwise equivalent models, we may expect \(\mathrm{VIB}_{\frac{1}{2}}\) to converge to a looser approximation of \(I(X;Z) = I(Y;Z) = I(X;Y)\) than CEB.
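For reference, the identity that makes \(\beta = \frac{1}{2}\) the natural comparison point follows directly from Equation (5); this is a restatement of Equation (22) in Appendix A, not a new result: \[\mathrm{CEB} = I(X;Z_{X}|Y) - I(Y;Z_{X}) = I(X;Z_{X}) - 2I(Y;Z_{X}) = 2\left(\tfrac{1}{2}I(X;Z_{X}) - I(Y;Z_{X})\right)\] That is, CEB is (up to an overall factor of 2, which does not change the optimum) the IB objective of Equation (16) with \(\beta = \frac{1}{2}\) on the \(I(Z;X)\) term.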
Since VIB optimizes an upper bound on \(I(X; Z)\) , that means that \(V I B_{\frac{1}{2}}\) will report \(R\) converging to \(I(X; Y)\) , but will capture less than the MNI. In contrast, if \(R e_{X / Y}\) converges to 0, the variational tightness of \(b(z_{X}|y)\) to the optimal \(p(z_{X}|y)\) depends only on the tightness of \(c(y|z_{X})\) to the optimal \(p(y|z_{X})\) . <--- Page Split ---> Table 1: Accuracy and rates \((R)\) for each model. Bold indicates the best score in that column. Determ doesn't have a rate, since it doesn't have an explicit encoder distribution. The final rate for the other four models is reported, as well as the peak rate achieved during training. The true mutual information for Fashion MNIST is \(I(X;Y) = 2.3\) nats, so achieving \(R = 2.3\) is optimal according to MNI. <table><tr><td>Model</td><td>Accuracy</td><td>Train R final (peak)</td></tr><tr><td>Determ</td><td>92.7</td><td>n/a</td></tr><tr><td>VIB0.01</td><td>93.0</td><td>2.6 (11.6)</td></tr><tr><td>VIB0.1</td><td>92.7</td><td>2.3 (3.2)</td></tr><tr><td>VIB0.5</td><td>90.0</td><td>2.3 (2.4)</td></tr><tr><td>CEB</td><td>92.9</td><td>2.3 (2.3)</td></tr></table> ## 6 MNI OPTIMALITY OF CEB In this work we do not attempt to give a formal proof that CEB representations learn the optimal information about the observed data (and certainly the variational form of the objective will prevent that from happening in general cases). However, CEB's targeting of the MNI is motivated by the following simple observations: If \(I(X;Z)< I(X;Y)\) , then we have thrown out relevant information in \(X\) for predicting \(Y\) . If \(I(X;Z) > I(X;Y)\) , then we are including information in \(X\) that is not useful for predicting \(Y\) . Thus \(I(X;Z) = I(X;Y)\) is the "correct" amount of information, which is one of the equalities required in order to satisfy the MNI criterion. Only models that successfully learn that amount of information can possibly be MNI- optimal. The second condition of MNI (Equation (3)) is only fully satisfied when optimizing the bidirectional CEB objective, described in Appendix E.2, as \(\langle \log e(z_{X}|x)\rangle - \langle \log b(z_{X}|y)\rangle\) and \(\langle \log b(z_{Y}|y)\rangle - \langle \log e(z_{Y}|x)\rangle\) are both 0 only when \(b(z|y) = p(z|y)\) and \(e(z|x) = p(z|x)\) and the corresponding decoder terms are both maximal. We leave such models for future work. ## 7 CLASSIFICATION EXPERIMENTS Our primary experiments are focused on comparing the performance of otherwise identical models when we change only the objective function. Consequently, we aren't interested in demonstrating state- of- the- art results for a particular classification task. Instead, we are interested in relative differences in performance that can be directly attributed to the difference in objective. With that in mind, we present results for classification of Fashion MNIST (Xiao et al., 2017) for five different models. The five models are: a deterministic model (Determ); three VIB models, with \(\beta \in \{\frac{1}{2},10^{- 1},10^{- 2}\}\) ( \(\mathrm{VIB}_{0.5}\) , \(\mathrm{VIB}_{0.1}\) , \(\mathrm{VIB}_{0.01}\) ); and a CEB model. These same models are used in the calibration, out- of- distribution, and adversarial experiments (Sections 8 to 10). Critically, all five models share the same inference architecture mapping \(X\) to \(Y\) . See Appendices C and D for details on training and the architectures. 
Since Fashion MNIST doesn't have a prespecified validation set, it offers an opportunity to test training algorithms that only look at training results, rather than relying on cross validation. To that end, the five models presented here are the first models with these hyperparameters that we trained on Fashion MNIST. The learning rate for the CEB model was lowered according to the training algorithm described in Appendix C. The other four models followed the same algorithm, but instead of tracking \(R e_{X / Y}\) , they simply tracked their training loss. All five models were required to retain the initial learning rate of 0.001 for 40 epochs before they could begin lowering the learning rate. At no point during training did any of the models exhibit non- monotonic test accuracy, so we do not believe that this approach harmed any performance - all five models converged essentially smoothly to their final, reported performance. In spite of the dynamic learning rate schedule, all five models took approximately the same number of epochs to reach the minimum learning rate. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 3: Calibration plots with \(90\%\) confidence intervals for four of the models after 2,000 steps, 20,000 steps, and 40,000 steps (left, center, and right of each trio, respectively): a is CEB, b is \(\mathrm{VIB}_{0.5}\) , c is \(\mathrm{VIB}_{0.1}\) , d is Determ. Perfect calibration corresponds to the dashed diagonal lines. Underconfidence occurs when the points are above the diagonal. Overconfidence occurs when the points are below the diagonal. </center> In the case of a simple classification problem with a uniform distribution over classes in the training set, we can directly compute \(I(X;Y)\) as \(\log C\) , where \(C\) is the number of classes.7 See Table 1 for a comparison of the rates between the four variational models, as well as their accuracies. All but \(\mathrm{VIB}_{0.5}\) achieve the same accuracy. All four stochastic models get close to the ideal rate of 2.3 nats, but they get there by different paths. For the VIB models, the lower \(\beta\) is, the higher the rate goes early in training, before converging down to (close to) 2.3 nats. CEB never goes above 2.3 nats. ## 8 CALIBRATION In Figure 3, we show calibration plots at various points during training for the four models. Calibration curves help analyze whether models are underconfident or overconfident. Each point in the plots corresponds to a \(5\%\) confidence range. Accuracy is averaged for each bin. A well- calibrated model is correct half of the time it gives a confidence of \(50\%\) for its prediction. All of the networks move from under- to overconfidence during training. However, CEB and \(\mathrm{VIB}_{0.5}\) are only barely overconfident, while \(\beta = 0.1\) is sufficient to make it nearly as overconfident as the deterministic model. This overconfidence is one of the issues that is correlated with exceeding the MNI during training (Table 1). See Appendix A for a geometric explanation for how this can occur. ## 9 OUT-OF-DISTRIBUTION DETECTION We test the ability of the five models to detect three different out- of- distribution (OoD) detection datasets. \(U(0,1)\) is uniform noise in the image domain. MNIST uses the MNIST test set. Vertical Flip is the most challenging, using vertically flipped Fashion MNIST test images, as originally proposed in Alemi et al. (2018). We use three different metrics for thresholding. The first two, \(H\) and \(R\) , were proposed in Alemi et al. (2018). 
\(H\) is the classifier entropy. \(R\) is the rate, defined in Section 5. The third metric is specific to CEB: \(Re_{X / \hat{Y}}\) . This is the predicted residual information – since we don’t have access to the true value of \(Y\) at test time, we use \(\hat{y} \sim c(y|z_X)\) to calculate \(H(Z_X|\hat{Y})\) . This is no longer a valid bound on \(Re_{X / Y}\) , as \(\hat{y}\) may not be from the true distribution \(p(x,y,z_X)\) . However, the better the classifier, the closer the estimate should be. These three threshold scores are used with the standard suite of proper scoring rules: False Positive Rate at \(95\%\) True Positive Rate (FPR \(95\%\) TPR), Area Under the ROC Curve (AUROC), and Area Under the Precision- Recall Curve (AUPR). See Lee et al. (2018) for definitions. <--- Page Split ---> Table 2: Results for out-of-distribution detection (OoD). Thrsh. is the threshold score used: \(H\) is the entropy of the classifier; \(R\) and \(R e_{X / \hat{Y}}\) are defined in Section 9. Arrows denote whether higher or lower scores are better. Bold indicates the best score in that column for a particular OoD dataset. <table><tr><td>OoD</td><td>Method</td><td>Thrsh.</td><td>FPR @ 95% TPR ↓</td><td>AUROC ↑</td><td>AUPR In ↑</td></tr><tr><td rowspan="6">U(0,1)</td><td>Determ</td><td>H</td><td>35.8</td><td>93.5</td><td>97.1</td></tr><tr><td>VIB0.01</td><td>H<br>R</td><td>41.1<br>0.0</td><td>92.5<br>100.0</td><td>96.0<br>100.0</td></tr><tr><td>VIB0.1</td><td>H<br>R</td><td>43.5<br>0.0</td><td>94.5<br>100.0</td><td>96.2<br>100.0</td></tr><tr><td>VIB0.5</td><td>H<br>R</td><td>73.2<br>80.6</td><td>87.0<br>57.1</td><td>90.5<br>51.4</td></tr><tr><td>CEB</td><td>H<br>R<br>ReX/ˆY</td><td>63.4<br>0.0<br>0.0</td><td>92.8<br>100.0<br>100.0</td><td>95.1<br>100.0<br>100.0</td></tr><tr><td rowspan="6">MNIST</td><td>Determ</td><td>H</td><td>59.0</td><td>88.4</td><td>90.0</td></tr><tr><td>VIB0.01</td><td>H<br>R</td><td>42.3<br>0.0</td><td>91.6<br>100.0</td><td>95.9<br>100.0</td></tr><tr><td>VIB0.1</td><td>H<br>R</td><td>60.3<br>0.5</td><td>84.7<br>86.8</td><td>89.7<br>99.8</td></tr><tr><td>VIB0.5</td><td>H<br>R</td><td>70.2<br>12.3</td><td>79.6<br>66.7</td><td>86.8<br>91.1</td></tr><tr><td>CEB</td><td>H<br>R<br>ReX/ˆY</td><td>70.6<br>0.1<br>0.2</td><td>77.8<br>94.4<br>92.0</td><td>73.0<br>99.9<br>99.9</td></tr><tr><td rowspan="6">Vertical Flip</td><td>Determ</td><td>H</td><td>66.8</td><td>88.6</td><td>90.2</td></tr><tr><td>VIB0.01</td><td>H<br>R</td><td>57.6<br>0.0</td><td>82.6<br>100.0</td><td>80.3<br>100.0</td></tr><tr><td>VIB0.1</td><td>H<br>R</td><td>65.3<br>0.0</td><td>84.5<br>99.2</td><td>85.2<br>100.0</td></tr><tr><td>VIB0.5</td><td>H<br>R</td><td>79.7<br>17.3</td><td>79.8<br>52.7</td><td>81.4<br>91.3</td></tr><tr><td>CEB</td><td>H<br>R<br>ReX/ˆY</td><td>68.0<br>0.0<br>0.0</td><td>84.9<br>90.7<br>92.6</td><td>85.5<br>100.0<br>100.0</td></tr></table> The core result is that \(\mathrm{VIB}_{0.5}\) performs much less well at the OoD tasks than the other two VIB models and CEB. We believe that this is another result of \(\mathrm{VIB}_{0.5}\) learning the right amount of information, but not learning all of the right information, thereby demonstrating that it is not a valid MNI objective, as explored in Appendix A. On the other hand, the other two VIB objectives seem to perform extremely well, which is the benefit they get from capturing a bit more information about the training set. We will see below that there is a price for that information, however. 
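As a sketch of how a per-example threshold score (e.g., \(H\), \(R\), or \(Re_{X/\hat{Y}}\)) turns into the reported metrics, assuming scikit-learn and treating out-of-distribution examples as the positive class (the paper's exact conventions follow Lee et al. (2018)):

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

def ood_metrics(scores_in, scores_ood):
    """FPR @ 95% TPR, AUROC, and AUPR for a higher-means-OoD threshold score."""
    y_true = np.concatenate([np.zeros_like(scores_in), np.ones_like(scores_ood)])
    y_score = np.concatenate([scores_in, scores_ood])
    fpr, tpr, _ = roc_curve(y_true, y_score)
    idx = min(int(np.searchsorted(tpr, 0.95)), len(fpr) - 1)  # first point with TPR >= 95%
    return fpr[idx], roc_auc_score(y_true, y_score), average_precision_score(y_true, y_score)

# Example with illustrative rate values: in-distribution rates near 2.3 nats,
# OoD rates drifting higher.
print(ood_metrics(np.random.normal(2.3, 0.1, 1000), np.random.normal(6.0, 1.0, 1000)))
```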
## 10 ADVERSARIAL EXAMPLE ROBUSTNESS AND DETECTION

Adversarial examples were first noted in Szegedy et al. (2013). The first practical attack, the Fast Gradient Method (FGM), was introduced shortly after (Goodfellow et al., 2015). Since then, many new attacks have been proposed. Most relevant to us is the Carlini-Wagner (CW) attack (Carlini & Wagner, 2017b), which was the first practical attack to directly use a blackbox optimizer to find minimal perturbations.\(^8\) Many defenses have also been proposed, but almost all of them are broken (Carlini & Wagner, 2017a; Athalye et al., 2018). This work may be seen as a natural continuation of the adversarial analysis of Alemi et al. (2017), which showed that VIB naturally had robustness to whitebox adversaries, including CW. In that work, the authors did not train any VIB models with a learned \(m(z_{X})\), which results in much weaker models, as shown in Alemi et al. (2018). We believe this is the first work that trains a VIB model with a learned marginal and uses it in an adversarial setting. <--- Page Split ---> Table 3: Results for adversarial example detection (Attack). All attacks are targeting the "trousers" class in Fashion MNIST. CW is Carlini & Wagner (2017b). CW \((C = 1)\) is CW with an additional confidence penalty set to 1. CW \((C = 1)\) Det. is a custom CW attack targeting CEB's detection mechanism, \(Re_{X/\hat{Y}}\). \(L_{0},L_{1},L_{2},L_{\infty}\) report the corresponding norm (mean \(\pm 1\) std.) of successful adversarial perturbations. Higher norms on CW indicate that the attack had a harder time finding adversarial perturbations, since it starts by looking for the smallest possible perturbation. The remaining columns are as in Table 2. Arrows denote whether higher or lower scores are better. Bold indicates the best score in that column for a particular adversarial attack.
<table><tr><td>Attack</td><td>Model</td><td>Attack Success ↓</td><td>L0 ↑</td><td>L1 ↑</td><td>L2 ↑</td><td>L∞ ↑</td><td>Thrsh.</td><td>FPR @ 95% TPR ↓</td><td>AUROC ↑</td><td>AUPR In ↑</td></tr><tr><td rowspan="5">CW</td><td>Determ</td><td>100.0%</td><td>377.1 ± 100.3</td><td>16.2 ± 10.2</td><td>1.4 ± 1.7</td><td>0.2 ± 0.1</td><td>H</td><td>15.4</td><td>90.7</td><td>86.0</td></tr><tr><td>VIB0.01</td><td>55.2%</td><td>389.6 ± 100.9</td><td>17.1 ± 10.3</td><td>1.5 ± 1.8</td><td>0.2 ± 0.1</td><td>H</td><td>11.2</td><td>59.9</td><td>90.0</td></tr><tr><td>VIB0.1</td><td>68.8%</td><td>392.1 ± 101.6</td><td>29.2 ± 18.1</td><td>5.1 ± 7.5</td><td>0.4 ± 0.2</td><td>H</td><td>16.5</td><td>77.4</td><td>80.0</td></tr><tr><td>VIB0.5</td><td>35.8%</td><td>432.0 ± 99.6</td><td>40.1 ± 32.1</td><td>9.4 ± 14.4</td><td>0.5 ± 0.3</td><td>H</td><td>64.2</td><td>62.5</td><td>55.3</td></tr><tr><td>CEB</td><td>35.8%</td><td>416.4</td><td>33.6</td><td>7.4</td><td>0.3</td><td>H</td><td>62.2</td><td>65.2</td><td>57.1</td></tr><tr><td rowspan="5">CW (C = 1)</td><td>Determ</td><td>100.0%</td><td>378.7 ± 100.3</td><td>16.6 ± 10.4</td><td>1.4 ± 1.9</td><td>0.2 ± 0.1</td><td>H</td><td>17.9</td><td>90.9</td><td>85.7</td></tr><tr><td>VIB0.01</td><td>96.7%</td><td>381.3 ± 101.5</td><td>17.4 ± 10.5</td><td>1.6 ± 1.9</td><td>0.2 ± 0.1</td><td>H</td><td>19.6</td><td>72.1</td><td>89.6</td></tr><tr><td>VIB0.1</td><td>97.3%</td><td>382.8 ± 100.4</td><td>28.2 ± 17.2</td><td>4.8 ± 7.4</td><td>0.4 ± 0.2</td><td>H</td><td>28.7</td><td>86.0</td><td>79.1</td></tr><tr><td>VIB0.5</td><td>50.4%</td><td>422.0 ± 101.3</td><td>36.4 ± 28.6</td><td>7.8 ± 12.3</td><td>0.4 ± 0.2</td><td>H</td><td>86.5</td><td>59.8</td><td>54.1</td></tr><tr><td>CEB</td><td>48.0%</td><td>417.6 ± 95.5</td><td>33.3 ± 29.8</td><td>7.3 ± 15.4</td><td>0.4 ± 0.2</td><td>H</td><td>77.4</td><td>63.5</td><td>56.4</td></tr><tr><td>CW (C = 1) Det.</td><td>CEB</td><td>25.1%</td><td>416.4 ± 92.2</td><td>84.1 ± 44.0</td><td>34.4 ± 22.8</td><td>0.9 ± 0.1</td><td>H</td><td>95.1</td><td>56.4</td><td>45.0</td></tr></table> We consider CW in the whitebox setting to be the current gold standard attack, even though it is more expensive than
FGM or the various iterative attacks like DeepFool (Moosavi-Dezfooli et al., 2016) or iterative variants of FGM (Kurakin et al., 2016). Running an optimizer directly on the model to find the perturbation that can fool that model tells us much more about the robustness of the model than approaches that focus on attack efficiency. CW searches over the space of perturbation magnitudes, which makes the attack hard to defend against, and thus a strong option for testing robustness. Here, we explore three variants of the CW \(L_{2}\) targeted attack. The implementations of the first two CW attacks are from Papernot et al. (2018). CW and CW \((C = 1)\) are the baseline CW attack and CW with a confidence adjustment of 1. Note that in order for these attacks to succeed at all on CEB, we had to increase the default CW learning rate to \(5 \times 10^{-1}\). Without that increase, CW found almost no adversaries in our early experiments. All other parameters are left at their defaults for CW, apart from setting the clip ranges to [0, 1]. The final attack, CW \((C = 1)\) Det., is a modified version of CW \((C = 1)\) that additionally incorporates a detection tensor into the loss that CW minimizes. For CEB, we had it target minimizing \(Re_{X/\hat{Y}}\) in order to break the network's ability to detect the attack. All of the attacks target the trouser class of Fashion MNIST, as that is the most distinctive class. Targeting a less distinctive class, such as one of the shirt classes, would conflate the difficulty of classifying the different shirts with the robustness of the model to adversaries. We run each of the first three attacks on the entire Fashion MNIST test set (all 10,000 images). For the stochastic networks, we permit 32 encoder samples and take the mean classification result (the same number of samples is also used for gradient generation in the attacks, to be fair to the attacker). CW is expensive, but we are able to run these on a single GPU in about 30 minutes. However, CW \((C = 1)\) Det. ends up being
On the other hand, \(\mathrm{VIB}_{0.5}\) has a hard time with detection, which indicates that, while it has learned a highly compressed representation, it has not learned the optimal set of bits. Thus, as we discuss in Appendix A, VIB trades off between learning the necessary information, which allows it to detect attacks perfectly, and learning the minimum information, which allows it to be robust to attacks. The CEB model permits both – it maintains the necessary information for detecting powerful whitebox attacks, but also retains the minimum information, providing robustness. This is again visible in the CW \((C = 1)\) Det. attack, which directly targets CEB's detection mechanism. Even though it no longer does well detecting the attack, the model becomes more robust to the attack, as indicated both by the much lower attack success rate and the much larger perturbation magnitudes. ## 11 INFORMATION-FREE GENERALIZATION EXPERIMENTS We replicate the basic experiment from Zhang et al. (2016): we use the images from Fashion MNIST, but replace the training labels with fixed random labels. This dataset is information- free in the sense that \(I(X;Y) = 0\) . We use that dataset to train multiple deterministic models, CEB models, and a range of VIB models. We find that the CEB model never learns (even after 100 epochs of training), the deterministic model always learns (after about 40 epochs of training it begins to memorize the random labels), and the VIB models only learn with \(\beta \leq 0.001\) . The fact that CEB and VIB with \(\beta\) near \(\frac{1}{2}\) manage to resist memorizing random labels is our final empirical demonstration that MNI is a powerful criterion for objective functions. ## 12 CONCLUSION We have presented the basic form of the Conditional Entropy Bottleneck (CEB), motivated by the Minimum Necessary Information (MNI) criterion for optimal representations. We have shown through careful experimentation that simply by switching to CEB, you can expect substantial improvements in OoD detection, adversarial example detection and robustness, calibration, and generalization. Additionally, we have shown that it is possible to get all of these advantages without using any additional form of regularization, and without any new hyperparameters. We have argued empirically that objective hyperparameters can lead to hard- to- predict suboptimal behavior, such as memorizing random labels, or reducing robustness to adversarial examples. In Appendix E and in future work, we will show how to generalize CEB beyond the simple case of two observed variables. It is our perspective that all of the issues explored here – miscalibration, failure at OoD tasks, vulnerability to adversarial examples, and dataset memorization – stem from the same underlying issue, which is retaining too much information about the training data in the learned representation. We believe that the MNI criterion and CEB show a path forward for many tasks in machine learning, permitting fast, amortized inference while ameliorating major problems. ## ACKNOWLEDGMENTS REDACTED <--- Page Split ---> ## REFERENCES Alessandro Achille and Stefano Soatto. Information dropout: Learning optimal representations through noisy computation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018. A. A. Alemi, B. Poole, I. Fischer, J. V. Dillon, R. A. Saurous, and K. Murphy. Fixing a Broken ELBO. ICML2018, 2018. URL http://arxiv.org/abs/1711.00464. Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. 
Deep Variational Information Bottleneck. In International Conference on Learning Representations, 2017. URL http://arxiv.org/abs/1612.00410. Alexander A Alemi, Ian Fischer, and Joshua V Dillon. Uncertainty in the variational information bottleneck. arXiv preprint arXiv:1807.00906, 2018. Rana Ali Amjad and Bernhard C. Geiger. How (Not) To Train Your Neural Network Using the Information Bottleneck Principle. CoRR, 2018. Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013. William Bialek and Naftali Tishby. Predictive information. arXiv preprint cond-mat/9902341, 1999. Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3-14. ACM, 2017a. Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57. IEEE, 2017b. Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2172-2180, 2016. Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015. Thomas M Cover and Joy A Thomas. Elements of Information Theory, 2nd edition. John Wiley & Sons, 2006. T. DeVries and G. W. Taylor. Learning Confidence for Out-of-Distribution Detection in Neural Networks. arXiv:1802.04865, 2018. URL https://arxiv.org/abs/1802.04865. Michael Figurnov, Shakir Mohamed, and Andriy Mnih. Implicit reparameterization gradients. arXiv preprint arXiv:1805.08498, 2018. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In CoRR, 2015. Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. Peter D Grünwald. The Minimum Description Length Principle. MIT Press, 2007. C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On Calibration of Modern Neural Networks. arXiv:1706.04599, 2017. URL https://arxiv.org/abs/1706.04599. D. Hendrycks and K. Gimpel. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. arXiv:1610.02136, 2016. URL https://arxiv.org/abs/1610.02136. <--- Page Split ---> R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018. Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, and Masashi Sugiyama. Learning discrete representations via information maximizing self-augmented training. arXiv preprint arXiv:1702.08720, 2017. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. Diederik Kingma and Jimmy Ba.
Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. URL https://arxiv.org/abs/1412.6980. Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014. Andrei N Kolmogorov. Three approaches to the quantitative definition of information. Problems of Information Transmission, 1(1):1-7, 1965. Andreas Krause, Pietro Perona, and Ryan G Gomes. Discriminative clustering by regularized information maximization. In Advances in Neural Information Processing Systems, pp. 775-783, 2010. Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016. B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. arXiv:1612.01474, 2016. URL https://arxiv.org/abs/1612.01474. Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. K. Lee, H. Lee, K. Lee, and J. Shin. Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples. arXiv:1711.09325, 2017. URL https://arxiv.org/abs/1711.09325. Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. arXiv preprint arXiv:1807.03888, 2018. S. Liang, Y. Li, and R. Srikant. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks. arXiv:1706.02690, 2017. URL https://arxiv.org/abs/1706.02690. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574-2582, 2016. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. Nicolas Papernot, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin, Cihang Xie, Yash Sharma, Tom Brown, Aurko Roy, Alexander Matyasko, Vahid Behzadan, Karen Hambardzumyan, Zhishuai Zhang, Yi-Lin Juang, Zhi Li, Ryan Sheatsley, Abhibhav Garg, Jonathan Uesato, Willi Gierke, Yinpeng Dong, David Berthelot, Paul Hendricks, Jonas Rauber, and Rujun Long. Technical report on the cleverhans v2.1.0 adversarial examples library. arXiv preprint arXiv:1610.00768, 2018. Claude Elwood Shannon. A Mathematical Theory of Communication. The Bell System Technical Journal, 27:379-423, 1948. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014. <--- Page Split ---> DJ Strouse and David J Schwab. The deterministic information bottleneck. Neural Computation, 29(6):1611-1630, 2017. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv:1312.6199, 2013. URL https://arxiv.org/abs/1312.6199. Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method.
arXiv preprint physics/0004057, 2000. Aäron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W Senior, and Koray Kavukcuoglu. WaveNet: A generative model for raw audio. In SSW, pp. 125, 2016. Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, and Kevin Murphy. Generative models of visually grounded imagination. In International Conference on Learning Representations, 2018. URL https://arxiv.org/abs/1705.10762. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096-1103. ACM, 2008. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992. A. C. Wilson, R. Roelofs, M. Stern, N. Srebro, and B. Recht. The Marginal Value of Adaptive Gradient Methods in Machine Learning. arXiv:1705.08292, 2017. URL https://arxiv.org/abs/1705.08292. Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017. S. Zagoruyko and N. Komodakis. Wide Residual Networks. arXiv:1605.07146, 2016. URL https://arxiv.org/abs/1605.07146. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016. <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 4: Geometry of the optimal surfaces for both CEB (purple) and IB (green) for models that can only come within \(\epsilon\) of the optimal surface (a: \(\epsilon = 0.1I(X;Y)\); b: \(\epsilon = 0.01I(X;Y)\)). The tangent lines have the slope of the corresponding \(\beta\) – the tangent point on the \(\epsilon\) ball corresponds to the point on the Pareto-optimal frontier for the corresponding model. Note that \(\beta\) determines the “exchange rate” between bits of \(I(X;Z)\) and \(I(Y;Z)\), which is how we determine the coordinate of the center of the \(\epsilon\) ball. For IB to achieve the MNI point, 2 bits of \(I(Y;Z)\) are needed for every bit of \(I(X;Z)\). Consequently, even for an infinitely powerful model (corresponding to \(\epsilon = 0\)), the only value of \(\beta\) that hits the MNI point is \(\beta = 2\). Thus, knowing the function \(\epsilon(\beta)\) for a given model and dataset completely determines the model’s Pareto-optimal frontier. </center> Here we collect a number of results that are not critical to the core of the paper, but may be of interest to particular audiences.

## A ANALYSIS OF CEB AND IB

From Equation (5) and the definition of CEB in Equation (6), the following equivalence between CEB and IB is obvious: \[\mathrm{CEB}\equiv I(X;Z|Y) - I(Y;Z) = I(X;Z) - 2I(Y;Z)\equiv \mathrm{IB}_{2} \quad (22)\] where we are parameterizing IB with \(\beta\) on the \(I(Y;Z)\) term for convenience. This equivalence generalizes as follows: \[\mathrm{IB} = I(X;Z) - \beta I(Y;Z) \quad (23)\] \[\mathrm{CEB} = I(X;Z|Y) - \frac{\beta}{2} I(Y;Z) \quad (24)\] In Figure 4, we show the combined information planes for CEB and IB given the above parameterization. The figures show the simple geometry that determines a point on the Pareto-optimal frontier for both objectives. Every such point is fully determined by the function \(\epsilon(\beta)\) for a given model and dataset, where \(\epsilon\) is the closest the model can approach the true optimal surface.
\(\epsilon(\beta) = 0\) corresponds to the “infinite” model family that exactly traces out the boundaries of the feasible region. The full feasible regions can be seen in Figure 2. From this geometry we can immediately conclude that if an IB model and a CEB model have the same value of \(\epsilon > 0\) at equivalent \(\beta\), the CEB model will always yield a value of \(I(Y;Z)\) closer to \(I(X;Y)\). This is because the slope of the tangent lines for CEB is always lower, putting the tangent points higher on the \(\epsilon\) ball. This gives part of a theoretical justification for the empirical observations above that \(VIB_{0.5}\) (equivalent to \(IB_{2}\) in the parameterization we are describing here) fails to capture <--- Page Split ---> as much of the necessary information as the CEB model. Even at the Pareto-optimal frontier, \(VIB_{0.5}\) cannot get \(I(Y;Z)\) as close to \(I(X;Y)\) as CEB can. Of course, we do not want to claim that this effect accounts for the fairly substantial difference in performance – that is likely to be due to a combination of other factors, including the fact that it is often easier to train continuous conditional distributions (like \(b(z|y)\)) than it is to train continuous marginal distributions (like \(m(z)\)). We also think that this analysis of the geometry of IB and CEB supports our preference for targeting the MNI point and treating CEB as an objective without hyperparameters. First, there are only a maximum of 4 points of interest in both the IB and CEB information planes (all 4 are visible in Figure 2): the origin, where there is no information in the representation; the MNI point; the point at \((I(Y;Z) = I(X;Y), I(X;Z) = H(X))\) (which is an MDL-compatible representation (Grünwald, 2007)); and the point at \((I(Y;Z) = 0, I(X;Z) = H(X|Y))\) (which would be the optimal decoder for an MNI representation). These are the only points naturally identified by the dataset – selecting a point on one of the edges between those four points seems to need additional justification. Second, if you do agree with the MNI criterion, for a given model it is impossible to get any closer to the MNI point than by using the balanced objective (\(\frac{\beta}{2} = 1\) in Equation (24)), due to the convexity of the Pareto-optimal frontier. Much more useful is making changes to the model, architecture, dataset, etc., in order to make \(\epsilon\) smaller. One possibility in that direction that IB and CEB models offer is inspecting training examples with high rate or residual information to check for label noise, leading to a natural human-in-the-loop model improvement algorithm. Another is using CEB’s residual information as a measure of the quality of the trained model, as mentioned in Appendix C. A final point of interest is what happens when \(I(X;Y) = H(X)\). In this case, the feasible region for CEB collapses to the line segment \(I(X;Z|Y) = 0\) with \(0 \leq I(Y;Z) \leq I(X;Y)\). Similarly, the corresponding IB feasible region is the diagonal line \(I(X;Z) = I(Y;Z)\). This case happens if we choose as our task to predict images given labels, for example. We should expect such label-conditional generative models to be particularly easy to train, since the search space is so simple. Additionally, it is never possible to learn a representation that exceeds the MNI, \(I(X;Z) \leq H(X) = I(X;Y)\).

## B MUTUAL INFORMATION OPTIMIZATION

As an objective function, CEB is independent of the methods used to optimize it.
Here we focus on variational objectives because they are simple, tractable, and well-understood, but any approach to optimizing mutual information terms can work, so long as it respects the side of the bounds required by the objective. For example, both Oord et al. (2018); Hjelm et al. (2018) could be used to maximize the \(I(Y;Z)\) term. There are many approaches in the literature that attempt to optimize mutual information terms in some form, including Krause et al. (2010); Chen et al. (2016); Hu et al. (2017); Hjelm et al. (2018); Oord et al. (2018). It is worth noting that none of those approaches by themselves are compatible with the MNI criterion. Some of them explicitly maximize \(I(X;Z_X)\), while others maximize \(I(Y;Z_X)\) but leave \(I(X;Z_X)\) unconstrained. We expect all of these approaches to capture more than the MNI in general.

## C TRAINING

Because of the properties of \(Re_{X/Y}\), we can consider training algorithms that don't rely on observing validation set performance in order to decide when to lower the learning rate. The closer we can get \(Re_{X/Y}\) to 0 on the training set, the better we expect to generalize to data drawn from the same distribution. One simple approach to training is to set a high initial learning rate (possibly with reverse annealing of the learning rate (Goyal et al., 2017)), and then lower the learning rate after any epoch of training that doesn't result in a new lowest mean residual information on the training data. This is equivalent to the logic of the dev-decay training algorithm of Wilson et al. (2017), but does not require the use of a validation set. Additionally, since the training set is typically much larger than a validation set would be, the average loss over the epoch is much more stable, so the learning rate is less likely to be lowered spuriously. The intuition for this algorithm is that \(Re_{X/Y}\) directly measures how far from optimal our learned representation is for a given \(c(y|z_X)\). At the end of training, \(Re_{X/Y}\) indicates whether we could improve performance by increasing the capacity of our architecture or by considering ways in which our model may be misspecified. See Algorithm 1 for pseudocode. We do not claim that this algorithm is optimal.

<--- Page Split --->

Algorithm 1: Training algorithm that lowers the learning rate when the mean residual information of the previous epoch (mean_Re) is not less than the lowest mean residual information seen so far (best_Re). The same idea can be applied to training VIB and deterministic models by tracking that the training loss is always going down. For the experiments in Section 7, we set the values specified in the Input line.

Input: learning_rate \(= 10^{-3}\), min_learning_rate \(= 10^{-6}\), lowering_scale \(= 1 - \frac{1}{e}\), first_epoch_when_lowering_learning_rate_is_permitted \(= 40\)

1: epoch = 0
2: best_Re = ∞
3: progress = true
4: while learning_rate > min_learning_rate do
5:   while progress do
6:     // Train and get the mean residual information.
7:     mean_Re = train_model_for_1_epoch()
8:     epoch = epoch + 1
9:     if mean_Re ≥ best_Re then
10:      progress = false
11:    else
12:      best_Re = mean_Re
13:  if epoch ≥ first_epoch_when_lowering_learning_rate_is_permitted then
14:    learning_rate = learning_rate * lowering_scale
15:  else
16:    best_Re = ∞
17:  progress = true
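A direct Python transcription of Algorithm 1 (a sketch under the stated defaults; `train_model_for_1_epoch` is a placeholder that trains at the current learning rate and returns the epoch's mean \(Re_{X/Y}\)):

```python
import math

def fit(train_model_for_1_epoch, learning_rate=1e-3, min_learning_rate=1e-6,
        lowering_scale=1 - 1 / math.e, first_lowering_epoch=40):
    """Lower the learning rate whenever an epoch fails to set a new lowest
    mean residual information; no validation set is consulted."""
    epoch, best_re = 0, float("inf")
    while learning_rate > min_learning_rate:
        progress = True
        while progress:
            mean_re = train_model_for_1_epoch(learning_rate)
            epoch += 1
            if mean_re >= best_re:
                progress = False          # no improvement this epoch
            else:
                best_re = mean_re
        if epoch >= first_lowering_epoch:
            learning_rate *= lowering_scale
        else:
            best_re = float("inf")        # too early to lower: reset and keep training
    return best_re
```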
## D MODEL DETAILS

All of the models in our experiments have the same core architecture: a \(7\times 2\) Wide ResNet (Zagoruyko & Komodakis, 2016) for the encoder, with a final layer of \(D = 4\) dimensions for the latent representation, followed by a two-layer MLP classifier using ELU (Clevert et al., 2015) activations with a final categorical distribution over the 10 classes. The stochastic models parameterize the mean and variance of a \(D = 4\) fully covariate multivariate Normal distribution with the output of the encoder. Samples from that distribution are passed into the classifier MLP. Apart from that difference, the stochastic models don't differ from Determ during evaluation. None of the five models uses any form of regularization (e.g., \(L_{1}\), \(L_{2}\), DropOut (Srivastava et al., 2014), BatchNorm (Ioffe & Szegedy, 2015)). The VIB models have an additional learned marginal, \(m(z_{X})\), which is a mixture of 240 fully covariate \(D = 4\) multivariate Normal distributions. The CEB model instead has the backward encoder, \(b(z_{X}|y)\), which is a \(D = 4\) fully covariate multivariate Normal distribution parameterized by a one-layer MLP mapping the label, \(Y = y\), to the mean and variance. In order to simplify comparisons, for CEB we additionally train a marginal \(m(z_{X})\) identical in form to that used by the VIB models. However, for CEB, \(m(z_{X})\) is trained using a separate optimizer so that it doesn't impact training of the CEB objective in any way. Having \(m(z_{X})\) for both CEB and VIB allows us to compare the rate, \(R\), of each model except Determ.

## D.1 DISTRIBUTIONAL FAMILIES

Any distributional family may be used for the encoder. Reparameterizable distributions (Kingma & Welling, 2014; Figurnov et al., 2018) are convenient, but it is also possible to use the score function trick (Williams, 1992) to get a high-variance estimate of the gradient for distributions that have no explicit or implicit reparameterization. In general, a good choice for \(b(z|y)\) is the same distributional
In our experiments using continuous representations, we did not encounter mutual information terms that diverged to infinity, although it is possible to make modeling and data choices that make numerical instabilities more likely. This is not a flaw specific to CEB or VIB, however, and we found numerical instability to be almost non-existent across a wide variety of modeling and architectural choices for both variational objectives.

## E ADDITIONAL CEB OBJECTIVES

Here we describe a few of the more obvious variants of the CEB objective.

## E.1 CONDITIONAL GENERATION

In the above presentation of CEB, we derived the objective for what may be termed "classification" tasks (although there is nothing in the derivation that restricts the form of either \(X\) or \(Y\)). However, CEB is fully symmetric, so it is natural to consider the second task defined by our choice of dataset: conditional generation of \(X\) given \(Y = y\). In this case, we can augment our graphical model with a new variable, \(Z_{Y}\), and derive the same CEB objective for that variable:

\[\min I(Y;Z_{Y}|X) = \min I(Y;Z_{Y}) - I(X;Z_{Y}) \quad (25)\]
\[\Rightarrow \min -H(Z_{Y}|Y) + H(Z_{Y}|X) \quad (26)\]
\[\max I(X;Z_{Y}) = \max H(X) - H(X|Z_{Y}) \quad (27)\]
\[\Rightarrow \max -H(X|Z_{Y}) \quad (28)\]

In the same manner as above, we can derive variational bounds on \(H(Z_{Y}|X)\) and \(H(X|Z_{Y})\). In particular, we can variationally bound \(p(z_{Y}|x)\) with \(e(z_{Y}|x)\). Additionally, we can bound \(p(x|z_{Y})\) with a decoder distribution of our choice, \(d(x|z_{Y})\). Because the decoder is maximizing a lower bound on the mutual information between \(Z_{Y}\) and \(X\), it can never memorize \(X\). It is directly limited during training to use exactly \(H(Y)\) nats of information from \(Z_{Y}\) to decode \(X\). For a mean-field decoder, this means that the decoder will only output a canonical member of each class. For a powerful decoder, such as an autoregressive decoder, it will learn to select a random member of the class. For discrete \(Y\), this model can trivially be turned into an unconditional generative model by first sampling \(Y\) from the training data or using any other appropriate procedure, such as sampling \(Y\) uniformly at random.

![](images/17_0.jpg)
<center>Figure 5: Information diagram for the basic hierarchical CEB model, \(Z_{2} \leftarrow Z_{1} \leftarrow X \leftrightarrow Y\).</center>

## E.2 BIDIRECTIONAL GENERATION

Given the presentation of conditional generation above, it is natural to consider that both \(c(y|z)\) and \(d(x|z)\) are conditional generative models of \(Y\) and \(X\), respectively, and to learn a \(Z\) that can handle both tasks. This can be done easily with the following bidirectional CEB model: \(Z_{X} \leftarrow X \leftrightarrow Y \rightarrow Z_{Y}\). This corresponds to the following factorization: \(p(x, y, z_{X}, z_{Y}) \equiv p(x, y) e(z_{X}|x) b(z_{Y}|y)\). The two objectives from above then become the following single objective:

\[\begin{array}{r l} & {\min -H(Z_{X}|X) + H(Z_{X}|Y) + H(Y|Z_{X})}\\ & {\qquad -H(Z_{Y}|Y) + H(Z_{Y}|X) + H(X|Z_{Y})} \end{array} \quad (30)\]

A natural question is how to ensure that \(Z_{X}\) and \(Z_{Y}\) are consistent with each other. Fortunately, that consistency is trivial to encourage by making the natural variational approximations: \(p(z_{Y}|x) \rightarrow e(z_{Y}|x)\) and \(p(z_{X}|y) \rightarrow b(z_{X}|y)\).
The full bidirectional variational CEB objective then becomes:

\[\begin{array}{r l} & {\min \langle \log e(z_{X}|x)\rangle -\langle \log b(z_{X}|y)\rangle -\langle \log c(y|z_{X})\rangle}\\ & {\qquad +\langle \log b(z_{Y}|y)\rangle -\langle \log e(z_{Y}|x)\rangle -\langle \log d(x|z_{Y})\rangle} \end{array} \quad (32)\]

At convergence, we learn a unified \(Z\) that is consistent with both \(Z_{X}\) and \(Z_{Y}\), permitting generation of either output given either input in the trained model, in the same spirit as Vedantam et al. (2018), but without any objective function hyperparameter tuning.

## E.3 HIERARCHICAL CEB

Thus far, we have focused on learning a single latent representation (possibly composed of multiple latent variables at the same level). Here, we consider how to learn a hierarchical model with CEB. Consider the graphical model \(Z_{2} \leftarrow Z_{1} \leftarrow X \leftrightarrow Y\). This is the simplest hierarchical supervised representation learning model. The general form of its information diagram is given in Figure 5. The key observation for generalizing CEB to hierarchical models is that the target mutual information doesn't change. By this, we mean that all of the \(Z_{i}\) in the hierarchy should cover \(I(X; Y)\) at convergence, which means maximizing \(I(Y; Z_{i})\). It is reasonable to ask why we would want to train such a model, given that the final set of representations are presumably all effectively identical in terms of information content. The answer is simple: doing so allows us to train deep models in a principled manner such that all layers of the network are consistent with each other and with the data. We need to be more careful when considering the residual information terms, though: it is not the case that we want to minimize \(I(X; Z_{i}|Y)\), which is not consistent with the graphical model. Instead, we want to minimize \(I(Z_{i - 1}; Z_{i}|Y)\), defining \(Z_{0} = X\). This gives the following simple Hierarchical CEB objective:

\[\begin{array}{r l} & {C E B_{\mathrm{hier}}\equiv \min \sum_{i}I(Z_{i - 1};Z_{i}|Y) - I(Y;Z_{i})}\\ & {\qquad \Leftrightarrow \min \sum_{i} - H(Z_{i}|Z_{i - 1}) + H(Z_{i}|Y) + H(Y|Z_{i})} \end{array} \quad (34)\]

Because all of the \(Z_{i}\) are targeting \(Y\), this objective is as stable as regular CEB. Note that if all of the \(Z_{i}\) have the same dimensionality, in principle they may all use the same networks for \(b(z_{i}|y)\) and/or \(c(y|z_{i})\), which may substantially reduce the number of parameters in the model. All of the individual loss terms in the objective must still appear, of course. There is no requirement, however, that the \(Z_{i}\) have the same latent dimensionality, although using the same dimensionality may give a unified hierarchical representation.

## E.4 SEQUENCE LEARNING

Many of the richest problems in machine learning vary over time. In Bialek & Tishby (1999), the authors define the Predictive Information:

\[I(X_{past};X_{future}) = \left\langle \log \frac{p(X_{past},X_{future})}{p(X_{past})p(X_{future})}\right\rangle\]

This is of course just the mutual information between the past and the future.
However, under an assumption of temporal invariance (any window of fixed length is expected to have the same entropy), they are able to characterize the predictive information, and show that it is a subextensive quantity: \(\lim_{T \to \infty} I(T) / T \to 0\), where \(I(T)\) is the predictive information over a time window of length \(2T\) (\(T\) steps of the past predicting \(T\) steps into the future). This concise statement tells us that the predictive information grows sublinearly in the window size, so that, per time step, past observations contain vanishingly little information about the future as the time window increases.

The application of CEB to extracting the predictive information is straightforward. Given the Markov chain \(X_{< t} \to X_{\geq t}\), we learn a representation \(Z_{t}\) that optimally covers \(I(X_{< t}; X_{\geq t})\) in Predictive CEB:

\[\begin{array}{r l} & {C E B_{\mathrm{pred}}\equiv \min I(X_{< t};Z_{t}|X_{\geq t}) - I(X_{\geq t};Z_{t})}\\ & {\qquad \Rightarrow \min -H(Z_{t}|X_{< t}) + H(Z_{t}|X_{\geq t}) + H(X_{\geq t}|Z_{t})} \end{array} \quad (35)\]

Note that the model entailed by this objective function does not rely on \(Z_{< t}\) when predicting \(X_{\geq t}\). A single \(Z_{t}\) captures all of the information in \(X_{< t}\) and is to be used to predict as far forward as is desired. "Rolling out" \(Z_{t}\) to make predictions is a modeling error according to the predictive information. Also note that, given a dataset of sequences, \(C E B_{\mathrm{pred}}\) may be extended to a bidirectional model, as in Appendix E.2. In this case, two representations are learned, \(Z_{< t}\) and \(Z_{\geq t}\). Both representations are for timestep \(t\): the first represents the observations before \(t\), and the second represents the observations from \(t\) onwards. As in the normal bidirectional model, using the same encoder and backward encoder for both parts of the bidirectional CEB objective ties the two representations together.

Modeling and architectural choices. As with all of the variants of CEB, whatever entropy remains in the data after capturing the entropy of the mutual information in the representation must be modeled by the decoder. In this case, a natural modeling choice would be a probabilistic RNN with powerful decoders per time step to be predicted. However, it is worth noting that such a decoder would need to sample at each future step to decode the subsequent step. An alternative, if the prediction horizon is short or the predicted data are small, is to decode the entire sequence from \(Z_{t}\) in a single feed-forward network (possibly as a single autoregression over all outputs in some natural sequence). Given the subextensivity of the predictive information, that may be a reasonable choice in stochastic environments, as the useful prediction window may be small.

Multi-scale sequence learning. As in WaveNet (Van Den Oord et al., 2016), it is natural to consider sequence learning at multiple different temporal scales. Combining an architecture like time-dilated WaveNet with CEB is as simple as combining \(C E B_{\mathrm{pred}}\) with \(C E B_{\mathrm{hier}}\) (Appendix E.3). In this case, each of the \(Z_{t}\) would represent a wider time dilation conditioned on the aggregate \(Z_{t - 1}\). The advantage of such an objective over that used in WaveNet is avoiding unnecessary memorization of earlier timesteps.

## E.5 UNSUPERVISED CEB

Pure unsupervised learning is fundamentally an ill-posed problem. Without knowing what the task is, it is impossible to define an optimal representation directly.
We think that this core issue is what led the authors of Bengio et al. (2013) to prefer barely compressed representations. But by that line of reasoning, it seems that unsupervised learning devolves to lossless compression: perhaps the correct representation is the one that allows you to answer the question, "What is the color of the fourth pixel in the second row?" On the other hand, it also seems challenging to put the decision about what information should be kept into objective function hyperparameters, as in the \(\beta\)-VAE and penalty VAE (Alemi et al., 2018) objectives. That work showed that it is possible to constrain the amount of information in the learned representation, but it is unclear how those objective functions keep only the "correct" bits of information for the downstream tasks you might care about. This is in contrast to all of the preceding discussion, where the task clearly defines both the correct amount of information and which bits are likely to be important.

However, unsupervised representation learning is still an interesting problem, even if it is ill-posed. Our perspective on the importance of defining a task in order to constrain the information in the representation suggests that we can turn the problem into a data modeling problem, in which the practitioner who selects the dataset also "models" the likely form of the useful bits in the dataset for the downstream task of interest. In particular, given a dataset \(X\), we propose selecting a function \(f(X) \to X'\) that transforms \(X\) into a new random variable \(X'\). This defines a paired dataset, \(P(X, X')\), on which we can use CEB as normal. Note that choosing the identity function for \(f\) results in maximal mutual information between \(X\) and \(X'\) (\(H(X)\) nats), which will result in a representation that is far from the MNI for normal downstream tasks. In other words, representations learned by true autoencoders are unlikely to be any better than simply using the raw \(X\). It may seem that we have not proposed anything useful, as the selection of \(f(\cdot)\) is unconstrained, and seems much more daunting than selecting \(\beta\) in a \(\beta\)-VAE or \(\sigma\) in a penalty VAE. However, there is a very powerful class of functions that makes this problem much simpler, and that also makes it clear that using CEB will only select bits from \(X\) that are useful. That class of functions is the noise functions.

## E.5.1 DENOISING CEB AUTOENCODER

Given a dataset \(X\) without labels or other targets, and some set of tasks in mind to be solved by a learned representation, we may select a random noise variable \(U\), and a function \(X' = f(X, U)\) that we believe will destroy the irrelevant information in \(X\). We may then add representation variables \(Z_{X}, Z_{X'}\) to the model, giving the joint distribution \(p(x, x', u, z_{X}, z_{X'}) \equiv p(x)p(u)p(x'|f(x, u))e(z_{X}|x)b(z_{X'}|x')\). This joint distribution is represented in Figure 6. Denoising Autoencoders were originally proposed in Vincent et al. (2008). In that work, the authors argue informally that reconstruction of corrupted inputs is a desirable property of learned representations. In this paper's notation, we could describe their proposed objective as \(\min H(X|Z_{X'})\), or equivalently \(\min -\langle \log d(x|z_{X'})\rangle_{x' = f(x,\eta),\ x,\eta \sim p(x)p(\eta)}\).
Here we make this idea somewhat more formal through the MNI criterion and the derivation of CEB as the optimal objective for that criterion. We also note that, practically speaking, we would like to learn a representation that is consistent with uncorrupted inputs as well. Consequently, we are going to use a bidirectional model.

\[\begin{array}{r l} & {C E B_{\mathrm{denoise}}\equiv \min I(X;Z_{X}|X^{\prime}) - I(X^{\prime};Z_{X}) + I(X^{\prime};Z_{X^{\prime}}|X) - I(X;Z_{X^{\prime}})}\\ & {\qquad \Rightarrow \min -H(Z_{X}|X) + H(Z_{X}|X^{\prime}) + H(X^{\prime}|Z_{X}) - H(Z_{X^{\prime}}|X^{\prime}) + H(Z_{X^{\prime}}|X) + H(X|Z_{X^{\prime}})} \end{array} \quad (37)\]

This requires two encoders and two decoders, which may seem expensive, but it permits a consistent learned representation that can be used cleanly for downstream tasks. Using a single encoder/decoder pair would result in either an encoder that does not work well with uncorrupted inputs, or a decoder that only generates noisy outputs. If you are only interested in the learned representation and not in generating good reconstructions, the objective simplifies to the first three terms. In that case, the objective is properly called a Noising CEB Autoencoder, as the model predicts the noisy \(X'\) from \(X\):

\[\begin{array}{r l} & {C E B_{\mathrm{noise}}\equiv \min I(X;Z_{X}|X^{\prime}) - I(X^{\prime};Z_{X})}\\ & {\qquad \Rightarrow \min -H(Z_{X}|X) + H(Z_{X}|X^{\prime}) + H(X^{\prime}|Z_{X})} \end{array} \quad (40)\]

![](images/20_0.jpg)
<center>Figure 6: Information diagram and graphical model for the Denoising CEB Autoencoder. </center>

In these models, the noise function \(X^{\prime} = f(X,U)\) must encode the practitioner's assumptions about the structure of information in the data. This obviously will vary per type of data, and even per desired downstream task. However, we don't need to work too hard to find the perfect noise function initially. A natural first choice for \(f\) is:

\[\begin{array}{c}{f(x,\eta) = \mathrm{clip}(x + \eta ,\mathcal{D})}\\ {\eta \sim \lambda\, U(-1,1)\cdot \mathcal{D}}\\ {\mathcal{D} = \mathrm{domain}(X)} \end{array} \quad (42)\]

In other words, add uniform noise scaled to the domain of \(X\) and by a hyperparameter \(\lambda\), and clip the result to the domain of \(X\) (a code sketch of this noise function appears at the end of this subsection). When \(\lambda = 1\), \(X^{\prime}\) is indistinguishable from uniform noise. As \(\lambda \to 0\), this maintains more and more of the original information from \(X\) in \(X^{\prime}\). For some value of \(\lambda > 0\), most of the irrelevant information is destroyed and most of the relevant information is maintained, if we assume that higher-frequency content in the domain of \(X\) is less likely to contain the desired information. That information is what will be retained in the learned representation.

Theoretical optimality of noise functions. Above we claimed that this learning procedure will only select bits that are useful for the downstream task, given that we select the proper noise function. Here we prove that claim constructively. Imagine an oracle that knows which bits of information should be destroyed, and which retained, in order to solve the future task of interest. Further imagine, for simplicity, that the task of interest is classification. What noise function must that oracle implement in order to ensure that \(CEB_{denoise}\) can only learn exactly the bits needed for classification?
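Here is the promised sketch of the noise function in Equation (42), assuming inputs already scaled to \([0, 1]\) (so \(\mathcal{D} = [0, 1]\), as for normalized images); the function name and the NumPy dependency are our choices, not the paper's.

```python
import numpy as np

def uniform_noise(x: np.ndarray, lam: float, rng=None) -> np.ndarray:
    """f(x, eta) = clip(x + eta, D) with eta ~ lam * U(-1, 1) * D,
    for data whose domain D is [0, 1] (e.g., normalized images)."""
    rng = np.random.default_rng() if rng is None else rng
    eta = lam * rng.uniform(-1.0, 1.0, size=x.shape)
    return np.clip(x + eta, 0.0, 1.0)
```

With `lam = 1` the output is (nearly) indistinguishable from uniform noise, and as `lam` shrinks toward 0, more of \(X\) survives, matching the description above. With that concrete picture in mind, we return to the oracle question.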
The answer is simple: for every \(X = x_{i}\), select \(X^{\prime} = x_{i}^{\prime}\) uniformly at random from among all of the \(X = x_{j}\) that should have the same class label as \(X = x_{i}\). Now, the only way for CEB to maximize \(I(X;Z_{X^{\prime}})\) and minimize \(I(X^{\prime};Z_{X^{\prime}})\) is by learning a representation that is isomorphic to classification, and that encodes exactly \(I(X;Y)\) nats of information, even though it was only trained "unsupervisedly" on \(X,X^{\prime}\) pairs. Thus, if we can choose the correct noise function that destroys only the bits we don't care about, \(CEB_{denoise}\) will learn the desired representation and nothing else (caveated by model, architecture, and optimizer selection, as usual).

## E.6 SEMI-SUPERVISED CEB

Any amount of paired data \(X,Y\) immediately improves our ability to learn a semantic representation. Fortunately, it is easy to reincorporate paired data in combination with the noising and denoising CEB objectives introduced above. We present the assumed graphical model in Figure 7, and give the corresponding Semi-Supervised CEB objective directly:

![](images/21_0.jpg)
<center>Figure 7: Graphical model for Semi-Supervised CEB. </center>

\[\begin{array}{r l} & {C E B_{\mathrm{semi}}\equiv \min I(X;Z_{X}|X^{\prime}) - I(X^{\prime};Z_{X}) + I(X^{\prime};Z_{X^{\prime}}|X) - I(X;Z_{X^{\prime}})}\\ & {\qquad +{\bf 1}_{Y\in (X,Y)}[I(X^{\prime};Z_{X^{\prime}}|Y) - I(Y;Z_{X^{\prime}})]}\\ & {\qquad \Rightarrow \min -H(Z_{X}|X) + H(Z_{X}|X^{\prime}) + H(X^{\prime}|Z_{X}) - H(Z_{X^{\prime}}|X^{\prime}) + H(Z_{X^{\prime}}|X) + H(X|Z_{X^{\prime}})}\\ & {\qquad +{\bf 1}_{Y\in (X,Y)}[ - H(Z_{X^{\prime}}|X^{\prime}) + H(Z_{X^{\prime}}|Y) + H(Y|Z_{X^{\prime}})]} \end{array} \quad (47)\]

\({\bf 1}_{Y\in (X,Y)}\) is the indicator function, equal to 1 when a \(Y\) is part of the paired data, and equal to 0 otherwise. In other words, if we have \(Y = y\) paired with a given \(X = x\), we can include those terms in the objective. If we do not, we can simply leave them out. Note that it is straightforward to generalize this to semi-supervised learning with two or more observations that are each being learned unsupervisedly, but that also have some amount of paired data. For example, images and natural language, assuming we have a reasonable noise model for unsupervisedly learning natural language.

## F VISUALIZATIONS

Here we provide some visualizations of the Fashion MNIST tasks. In Figure 8, we show a trained 2D CEB latent representation of Fashion MNIST. The model learned to locate closely related concepts together, including the cluster of "shirt" classes near the center, and the cluster of "shoe" classes toward the lower right. In spite of the restriction to 2 dimensions, this model achieves \(\sim 92\%\) accuracy on the test set. In Figure 9, the 10,000 test images and their 10,000 adversaries are shown for four of the models. It is easy to see at a glance that the CEB model organizes all of the adversaries into the "trousers" class, with a crisp division between the true examples and the adversaries. In contrast, the two VIB models have adversaries mixed throughout. However, all three models are clearly preferable to the deterministic model, which has all of the adversaries mixed into the "trousers" class with no ability to distinguish between adversaries and true examples.

![](images/22_0.jpg)
<center>Figure 8: A trained 2D latent space for a Fashion MNIST CEB model.</center>
![](images/23_0.jpg)
<center>Figure 9: All adversarial images sorted by predicted class, showing the difference between robust and non-robust models. Each predicted class is sorted by the model's rate, \(R\) (\(H\) is used for \(\mathbf{d}\)), from low to high. Images with a red bar along their top are adversarial. \(\mathbf{a}\) is CEB, \(\mathbf{b}\) is \(\mathrm{VIB}_{0.5}\), \(\mathbf{c}\) is \(\mathrm{VIB}_{0.01}\), \(\mathbf{d}\) is Determ.</center>
## ABSTRACT

We present a new family of objective functions, which we term the Conditional Entropy Bottleneck (CEB). These objectives are motivated by the Minimum Necessary Information (MNI) criterion. We demonstrate the application of CEB to classification tasks. We show that CEB gives: well-calibrated predictions; strong detection of challenging out-of-distribution examples and powerful whitebox adversarial examples; and substantial robustness to those adversaries. Finally, we report that CEB fails to learn from information-free datasets, providing a possible resolution to the problem of generalization observed in Zhang et al. (2016).

## 1 INTRODUCTION

The field of Machine Learning has suffered from the following well-known problems in recent years:

- Vulnerability to adversarial examples. Essentially all machine-learned systems are currently believed by default to be highly vulnerable to adversarial examples. Many defenses have been proposed, but very few have demonstrated robustness against a powerful, general-purpose adversary. Lacking a clear theoretical framework for adversarial attacks, most proposed defenses are ad-hoc and fail in the presence of a concerted attacker (Carlini & Wagner, 2017a; Athalye et al., 2018).
- Poor out-of-distribution detection. Classifiers do a poor job of signaling that they have received data that is substantially different from the data they were trained on. Ideally, a trained classifier would give less confident predictions for data that was far from the training distribution (as well as for adversarial examples). Barring that, there would be a clear, principled statistic that could be extracted from the model to tell whether the model should have made a low-confidence prediction. Many different approaches to providing such a statistic have been proposed (Guo et al., 2017; Lakshminarayanan et al., 2016; Hendrycks & Gimpel, 2016; Liang et al., 2017; Lee et al., 2017; DeVries & Taylor, 2018), but most seem to do poorly on what humans intuitively view as obviously different data.
- Miscalibrated predictions. Related to the issues above, classifiers tend to be very overconfident in their predictions (Guo et al., 2017). This may be a symptom, rather than a cause, but miscalibration does not give practitioners confidence in their models.
- Overfitting to the training data. Zhang et al. (2016) demonstrated that classifiers can memorize fixed random labelings of training data, which means that it is possible to learn a classifier with a perfect inability to generalize. This critical observation makes it clear that a fundamental test of generalization is that the model should fail to learn when given what we call information-free datasets.

This paper does not set out to solve any of these problems. Instead, our sole interest is the learning of optimal representations. In pursuit of that goal, we attempt to be as general as possible, considering only how to define optimal representations, what objective function might be capable of learning them, and what requirements such an objective function places on the form of the model. Given an optimal (according to our criterion) objective function, however, it is natural to explore the problems listed above, to see if such an objective function can ameliorate some of the core issues in the field of machine learning. We make those explorations in this paper, and find that our objective function, the Conditional Entropy Bottleneck (CEB), appears to impact all of the issues listed above.
![](images/1_0.jpg)
<center>Figure 1: (Left): Information Venn diagram showing the joint distribution over \(X, Y\). (Right): The joint distribution \(Z_{X} \leftarrow X \leftrightarrow Y\). \(Z_{X}\) is carefully positioned to indicate its conditional independence from \(Y\) given \(X\).</center>

## 2 OPTIMAL REPRESENTATIONS

Consider a joint distribution, \(p(x, y)\), represented by the graphical model:

\[X\leftrightarrow Y\]

This joint distribution is our data, and may take any form. We don't presume to know how the data factors. It may factor as \(p(x, y) = p(x)p(y|x)\), \(p(x, y) = p(y)p(x|y)\), or even \(p(x, y) = p(x)p(y)\). The first two factorings are depicted in Figure 1 in a standard information diagram showing the various entropies and the mutual information. We can ask: given this generic setting, what is the optimal representation? It seems there are only two options: capture all of the information in both \(X\) and \(Y\) (measured by the joint entropy, \(H(X, Y)\)), or capture only the information shared between \(X\) and \(Y\) (measured by the mutual information, \(I(X; Y)\)). The field of lossless compression is concerned with representations that perfectly maintain all of the information in both \(X\) and \(Y\), as are the closely related studies of Kolmogorov Complexity (Kolmogorov, 1965) and Minimum Description Length (MDL) (Grunwald, 2007), all three of which are concerned with perfect reconstruction of inputs or messages. In contrast, we think that the field of machine learning is primarily concerned with making optimal predictions on unseen data. The requirements of perfect reconstruction from a compressed representation may result in the retention of much more information in the model than may be needed for prediction or stochastic generation tasks. For most such machine learning tasks, this points towards learning representations that capture only the information shared between \(X\) and \(Y\), which is measured by the mutual information, \(I(X; Y)\). The mutual information is defined in a variety of ways; we will use two (Cover & Thomas, 2006):

\[I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) \quad (1)\]

\(I(X; Y)\) measures the amount of information necessary to define the relationship between \(X\) and \(Y\). For some fixed dataset \(X, Y\), any information less than \(I(X; Y)\) must be insufficient to predict \(Y\) from \(X\) or vice versa with minimal error. Equivalently, any information more than \(I(X; Y)\) must contain some superfluous information for those two tasks. For example, consider a labeled dataset, where \(X\) is high-dimensional and information-rich, and \(Y\) is a single integer. All of the information in \(X\) that is not needed to correctly predict the single value \(Y = y\) is useless for the prediction task defined by the dataset, and may be harmful to the performance of a machine learning system if retained in the learned representation, as we will show empirically below. Next, we formalize this intuition about the information required for an optimal representation.

## 3 MINIMUM NECESSARY INFORMATION

We propose the Minimum Necessary Information (MNI) criterion for a learned representation. We can define MNI in three parts. First is Information: we would like a representation that captures semantically meaningful information. In order to measure how successfully we capture meaningful information, we must first know how to measure information.
Thus, the criterion prefers information-theoretic approaches, given the uniqueness of entropy as a measure of information (Shannon, 1948). The semantic value of information is given by a task, which is specified by the set of variables in the dataset. That is, the dataset \(X\), \(Y\) defines two tasks: predict \(Y\) given \(X\), or predict \(X\) given \(Y\). This brings us to Necessity: the information we capture in our representations must be necessary to solve the task. Finally, Minimality: this simply refers to the amount of information. Given that we learn a representation that can solve the task, we require that the representation we learn retain the smallest amount of information about the task out of the set of all representations that solve the task. This part of the criterion restricts us from incorporating "non-semantic" information into our representation, such as noise or spurious correlation. More formally, in the case of two observed variables, \(X\) and \(Y\), a necessary set of conditions for a representation \(Z\) to satisfy the MNI criterion is the following:

\[I(X;Y) = I(X;Z) = I(Y;Z) \quad (2)\]

This fully constrains the amount of information. To constrain the necessity of the information in the representation \(Z\), the following conditions must be satisfied:

\[p(y|x) = \int dz\, p(y|z)p(z|x)\qquad p(x|y) = \int dz\, p(x|z)p(z|y) \quad (3)\]

These four distributions of \(z\) correspond to the two tasks: predict \(Y\) given \(X\) and predict \(X\) given \(Y\).

## 4 THE CONDITIONAL ENTROPY BOTTLENECK

One way to satisfy Equation (2) is to learn a representation \(Z_{X}\) of \(X\) only, indicated by the Markov chain \(Z_{X}\gets X\leftrightarrow Y\). We show this Markov chain as an information diagram in Figure 1 (Right). The placement of \(H(Z_{X})\) in that diagram carefully maintains the conditional independence between \(Y\) and \(Z_{X}\) given \(X\), but is otherwise fully general. Some of the entropy of \(Z_{X}\) is unassociated with any other variable; some is only associated with \(X\), and some is associated with \(X\) and \(Y\) together. Figure 1 (Right), then, shows diagrammatically the state of the learned representation early in training. At the end of training, we would like \(Z_{X}\) to satisfy the equalities in Equation (2), which corresponds to Figure 1 (Left), where the gray region labeled \(I(X;Y)\) also corresponds to \(I(X;Z_{X})\) and \(I(Y;Z_{X})\). Given the conditional independence \(Z_{X}\perp Y|X\) in our Markov chain, \(I(Y;Z_{X})\) is maximal at \(I(X;Y)\), by the data processing inequality. However, \(I(X;Z_{X})\) does not clearly have a constraint that targets \(I(X;Y)\). We cannot maximize \(I(X;Z_{X})\) in general while being compatible with the MNI criterion, as \(I(X;Z_{X})\) is only constrained from above by \(H(X)\geq I(X;Y)\). Instead, we could use the Information Bottleneck objective (Tishby et al., 2000), which starts from the same Markov chain and minimizes \(\beta I(X;Z_{X}) - I(Y;Z_{X})\), but it is not immediately clear what value of \(\beta\) will achieve the MNI. Thus, we need a different approach to hit the MNI.
Considering the information diagram in Figure 1 (Left), we can notice the following identities when we have achieved the MNI:

\[I(X;Y|Z_{X}) = I(X;Z_{X}|Y) = I(Y;Z_{X}|X) = 0 \quad (4)\]

With our Markov chain and the chain rule of mutual information (Cover & Thomas, 2006), we have:

\[I(X;Z_{X}|Y) = I(X,Y;Z_{X}) - I(Y;Z_{X}) = I(X;Z_{X}) - I(Y;Z_{X}) \quad (5)\]

This conditional information is guaranteed to be non-negative, as both terms are mutual informations, and the Markov chain guarantees that \(I(Y;Z_{X})\) is no larger than \(I(X;Z_{X})\), by the data processing inequality. From an optimization perspective, this is ideal: we have a term that we can minimize, and we can directly know how far we are from the optimal value of 0 (measured in nats, so it is interpretable), when we are done (when it's close enough to 0 that we are satisfied), and when our model is insufficient for the task (i.e., when this term isn't close enough to 0). This leads us to the general Conditional Entropy Bottleneck objective:

\[\mathrm{CEB}\equiv I(X;Z_{X}|Y) - I(Y;Z_{X}) \quad (6)\]

Typically we would add a Lagrange multiplier on one of the two terms. In Appendix A, we present some geometric arguments to prefer leaving the two terms balanced. It is straightforward to turn this into a variational objective function that we can minimize. Taking the terms in turn:

\[\begin{array}{r l} & {I(X;Z_{X}|Y) = I(X;Z_{X}) - I(Y;Z_{X}) = H(Z_{X}) - H(Z_{X}|X) - H(Z_{X}) + H(Z_{X}|Y)}\\ & {\qquad = -H(Z_{X}|X) + H(Z_{X}|Y) = \langle \log e(z_{X}|x)\rangle -\langle \log p(z_{X}|y)\rangle}\\ & {\qquad \leq \langle \log e(z_{X}|x)\rangle -\langle \log p(z_{X}|y)\rangle +\mathrm{KL}[p(z_{X}|y)||b(z_{X}|y)]}\\ & {\qquad = \langle \log e(z_{X}|x)\rangle -\langle \log b(z_{X}|y)\rangle} \end{array} \quad (9)\]

\(e(z_{X}|x)\) is our encoder. It is not a variational approximation, even though it has learned parameters. \(b(z_{X}|y)\) is the backward encoder, a variational approximation of \(p(z_{X}|y)\). In the second term, \(H(Y)\) can be dropped because it is constant with respect to the model:

\[\begin{array}{r l} & {I(Y;Z_{X}) = H(Y) - H(Y|Z_{X})\Rightarrow -H(Y|Z_{X}) = \langle \log p(y|z_{X})\rangle}\\ & {\qquad \geq \langle \log p(y|z_{X})\rangle -\mathrm{KL}[p(y|z_{X})||c(y|z_{X})]}\\ & {\qquad = \langle \log c(y|z_{X})\rangle} \end{array} \quad (12)\]

\(c(y|z_{X})\) is the classifier (although that name is arbitrary, given that \(Y\) may not be labels), which variationally approximates \(p(y|z_{X})\). The variational bounds derived above give us a fully tractable objective function that works on large-scale problems and supports amortized inference, the Variational Conditional Entropy Bottleneck (VCEB):

\[\mathrm{CEB}\equiv I(X;Z_{X}|Y) - I(Y;Z_{X})\Rightarrow \langle \log e(z_{X}|x)\rangle -\langle \log b(z_{X}|y)\rangle -\langle \log c(y|z_{X})\rangle \equiv \mathrm{VCEB} \quad (14)\]

The distributions with letters other than \(p\) are assumed to have learned parameters, which we otherwise omit in the notation. In other words, all three of \(e(\cdot)\), \(b(\cdot)\), and \(c(\cdot)\) have learned parameters, just as in the encoder and decoder of a normal VAE (Kingma & Welling, 2014), or the encoder, classifier, and marginal in a VIB model.
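To ground Equation (14), here is a minimal sketch of the variational CEB loss for one batch. It assumes `encoder`, `backward_encoder`, and `classifier` are modules returning `torch.distributions` objects (e.g., a multivariate Normal encoder and a Categorical classifier); the function and argument names are ours, not the paper's.

```python
import torch

def vceb_loss(encoder, backward_encoder, classifier,
              x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """<log e(z|x)> - <log b(z|y)> - <log c(y|z)>, estimated with one
    reparameterized sample of z per example."""
    e_dist = encoder(x)                  # e(z_X | x)
    z = e_dist.rsample()                 # differentiable sample
    b_dist = backward_encoder(y)         # b(z_X | y)
    c_dist = classifier(z)               # c(y | z_X)
    residual = e_dist.log_prob(z) - b_dist.log_prob(z)  # bounds I(X;Z_X|Y)
    return (residual - c_dist.log_prob(y)).mean()
```

The `residual` term is exactly the quantity named \(Re_{X/Y}\) below, so the same computation doubles as the training signal for the learning-rate schedule of Appendix C.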
We will name the \(I(X;Z_{X}|Y)\) term the Residual Information: this is the excess information in our representation beyond the information shared between \(X\) and \(Y\):

\[Re_{X/Y}\equiv \langle \log e(z_{X}|x)\rangle -\langle \log b(z_{X}|y)\rangle \geq -H(Z_{X}|X) + H(Z_{X}|Y) = I(X;Z_{X}|Y) \quad (15)\]

There are a number of natural variations on this objective. We describe a few of them in Appendix E.

## 5 THE INFORMATION BOTTLENECK

The Information Bottleneck (IB) (Tishby et al., 2000) learns a representation of \(X\) and \(Y\) subject to a soft information constraint:

\[IB\equiv \min \beta I(Z;X) - I(Z;Y) \quad (16)\]

where \(\beta\) controls the size of the constraint. In Figure 2 we show the optimal surfaces for CEB and IB, labeling the MNI point on both. In Figure 4 we show the same surfaces for finite models, and show that adjusting \(\beta\) determines a unique point in these information planes relative to \(I(X;Y)\). As described in Tishby et al. (2000), IB is a tabular method, so it is not usable for amortized inference. Two recent works have extended IB for amortized inference. Both of these approaches rely on sweeping \(\beta\), and do not propose a way to set \(\beta\) directly to train models where \(I(X;Z) = I(Y;Z) = I(X;Y)\).

![](images/4_0.jpg)
<center>Figure 2: Geometry of the optimal surfaces for IB and CEB, with all points labeled. CEB rectifies IB's parallelogram by subtracting \(I(Y; Z)\) at every point.</center>

Achille & Soatto (2018) presents InfoDropout, which uses IB to motivate a variation on Dropout (Srivastava et al., 2014). A variational version of IB is presented in Alemi et al. (2017). That objective is the Variational Information Bottleneck (VIB):

\[VIB\equiv \beta (\langle \log e(z_{X}|x)\rangle - \langle \log m(z_{X})\rangle) - \langle \log c(y|z_{X})\rangle \quad (17)\]

Instead of the backward encoder, VIB has a marginal posterior, \(m(z_{X})\), which is a variational approximation to \(e(z_{X}) = \int dx\, p(x)e(z_{X}|x)\). Additionally, it has a hyperparameter, \(\beta\). We show in Appendix A that the optimal value is \(\beta = \frac{1}{2}\) when attempting to adhere to the MNI criterion. Following Alemi et al. (2018), we define the Rate (R):

\[R\equiv \langle \log e(z_{X}|x)\rangle - \langle \log m(z_{X})\rangle \geq I(X; Z_{X}) \quad (18)\]

We can compare variational CEB with VIB by taking their difference at \(\beta = \frac{1}{2}\). Note that both objectives have an elided dependence on \(\langle \log p(y)\rangle\) from the \(I(Y; Z_{X})\) term that we must track:

\[CEB - VIB_{\beta = \frac{1}{2}} = \langle \log b(z_{X}|y)\rangle -\langle \log m(z_{X})\rangle -\langle \log c(y|z_{X})\rangle +\langle \log p(y)\rangle \quad (19)\]

Solving for \(m(z_{X})\) when that difference is 0:

\[m(z_{X}) = \frac{b(z_{X}|y)p(y)}{c(y|z_{X})} \quad (20)\]

Since the optimal \(m^{*}(z_{X})\) is the marginalization of \(e(z_{X}|x)\), at convergence we must have:

\[m^{*}(z_{X}) = \int dx\, p(x)e(z_{X}|x) = \frac{p(z_{X}|y)p(y)}{p(y|z_{X})} \quad (21)\]

Depending on the distributional families and the parameterizations, this point may be difficult to find, particularly given that \(m(z_{X})\) only gets information about \(y\) indirectly through \(e(z_{X}|x)\). Consequently, for otherwise equivalent models, we may expect \(VIB_{\frac{1}{2}}\) to converge to a looser approximation of \(I(X; Z) = I(Y; Z) = I(X; Y)\) than CEB.
Since VIB optimizes an upper bound on \(I(X; Z)\), that means that \(VIB_{\frac{1}{2}}\) will report \(R\) converging to \(I(X; Y)\), but will capture less than the MNI. In contrast, if \(Re_{X/Y}\) converges to 0, the variational tightness of \(b(z_{X}|y)\) to the optimal \(p(z_{X}|y)\) depends only on the tightness of \(c(y|z_{X})\) to the optimal \(p(y|z_{X})\).

Table 1: Accuracy and rates \((R)\) for each model. Bold indicates the best score in that column. Determ doesn't have a rate, since it doesn't have an explicit encoder distribution. The final rate for the other four models is reported, as well as the peak rate achieved during training. The true mutual information for Fashion MNIST is \(I(X;Y) = 2.3\) nats, so achieving \(R = 2.3\) is optimal according to MNI.

<table><tr><td>Model</td><td>Accuracy</td><td>Train R final (peak)</td></tr><tr><td>Determ</td><td>92.7</td><td>n/a</td></tr><tr><td>VIB0.01</td><td>93.0</td><td>2.6 (11.6)</td></tr><tr><td>VIB0.1</td><td>92.7</td><td>2.3 (3.2)</td></tr><tr><td>VIB0.5</td><td>90.0</td><td>2.3 (2.4)</td></tr><tr><td>CEB</td><td>92.9</td><td>2.3 (2.3)</td></tr></table>

## 6 MNI OPTIMALITY OF CEB

In this work we do not attempt to give a formal proof that CEB representations learn the optimal information about the observed data (and certainly the variational form of the objective will prevent that from happening in general cases). However, CEB's targeting of the MNI is motivated by the following simple observations: If \(I(X;Z)< I(X;Y)\), then we have thrown out relevant information in \(X\) for predicting \(Y\). If \(I(X;Z) > I(X;Y)\), then we are including information in \(X\) that is not useful for predicting \(Y\). Thus \(I(X;Z) = I(X;Y)\) is the "correct" amount of information, which is one of the equalities required in order to satisfy the MNI criterion. Only models that successfully learn that amount of information can possibly be MNI-optimal. The second condition of MNI (Equation (3)) is only fully satisfied when optimizing the bidirectional CEB objective, described in Appendix E.2, as \(\langle \log e(z_{X}|x)\rangle - \langle \log b(z_{X}|y)\rangle\) and \(\langle \log b(z_{Y}|y)\rangle - \langle \log e(z_{Y}|x)\rangle\) are both 0 only when \(b(z|y) = p(z|y)\) and \(e(z|x) = p(z|x)\) and the corresponding decoder terms are both maximal. We leave such models for future work.

## 7 CLASSIFICATION EXPERIMENTS

Our primary experiments are focused on comparing the performance of otherwise identical models when we change only the objective function. Consequently, we aren't interested in demonstrating state-of-the-art results for a particular classification task. Instead, we are interested in relative differences in performance that can be directly attributed to the difference in objective. With that in mind, we present results for classification of Fashion MNIST (Xiao et al., 2017) for five different models. The five models are: a deterministic model (Determ); three VIB models, with \(\beta \in \{\frac{1}{2},10^{-1},10^{-2}\}\) (\(\mathrm{VIB}_{0.5}\), \(\mathrm{VIB}_{0.1}\), \(\mathrm{VIB}_{0.01}\)); and a CEB model. These same models are used in the calibration, out-of-distribution, and adversarial experiments (Sections 8 to 10). Critically, all five models share the same inference architecture mapping \(X\) to \(Y\). See Appendices C and D for details on training and the architectures.
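For comparison with the CEB sketch given after Equation (14), the VIB objective of Equation (17) has the same one-sample form; here `marginal` is assumed to be a learned mixture density (e.g., a `torch.distributions.MixtureSameFamily`), and the names are again ours.

```python
import torch

def vib_loss(encoder, marginal, classifier,
             x: torch.Tensor, y: torch.Tensor, beta: float = 0.5) -> torch.Tensor:
    """beta * (<log e(z|x)> - <log m(z)>) - <log c(y|z)>, one sample of z."""
    e_dist = encoder(x)
    z = e_dist.rsample()
    rate = e_dist.log_prob(z) - marginal.log_prob(z)  # upper-bounds I(X;Z_X)
    return (beta * rate - classifier(z).log_prob(y)).mean()
```

The only structural difference from the CEB loss is the replacement of the label-conditional backward encoder \(b(z_X|y)\) with the unconditional marginal \(m(z_X)\), plus the hyperparameter \(\beta\).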
Since Fashion MNIST doesn't have a prespecified validation set, it offers an opportunity to test training algorithms that only look at training results, rather than relying on cross validation. To that end, the five models presented here are the first models with these hyperparameters that we trained on Fashion MNIST. The learning rate for the CEB model was lowered according to the training algorithm described in Appendix C. The other four models followed the same algorithm, but instead of tracking \(Re_{X/Y}\), they simply tracked their training loss. All five models were required to retain the initial learning rate of 0.001 for 40 epochs before they could begin lowering the learning rate. At no point during training did any of the models exhibit non-monotonic test accuracy, so we do not believe that this approach harmed any performance: all five models converged essentially smoothly to their final, reported performance. In spite of the dynamic learning rate schedule, all five models took approximately the same number of epochs to reach the minimum learning rate.

![](images/6_0.jpg)
<center>Figure 3: Calibration plots with \(90\%\) confidence intervals for four of the models after 2,000 steps, 20,000 steps, and 40,000 steps (left, center, and right of each trio, respectively): a is CEB, b is \(\mathrm{VIB}_{0.5}\), c is \(\mathrm{VIB}_{0.1}\), d is Determ. Perfect calibration corresponds to the dashed diagonal lines. Underconfidence occurs when the points are above the diagonal. Overconfidence occurs when the points are below the diagonal.</center>

In the case of a simple classification problem with a uniform distribution over classes in the training set, we can directly compute \(I(X;Y)\) as \(\log C\), where \(C\) is the number of classes (here, \(\ln 10 \approx 2.3\) nats). See Table 1 for a comparison of the rates between the four variational models, as well as their accuracies. All but \(\mathrm{VIB}_{0.5}\) achieve the same accuracy. All four stochastic models get close to the ideal rate of 2.3 nats, but they get there by different paths. For the VIB models, the lower \(\beta\) is, the higher the rate goes early in training, before converging down to (close to) 2.3 nats. CEB never goes above 2.3 nats.

## 8 CALIBRATION

In Figure 3, we show calibration plots at various points during training for four of the models. Calibration curves help analyze whether models are underconfident or overconfident. Each point in the plots corresponds to a \(5\%\) confidence range. Accuracy is averaged for each bin. A well-calibrated model is correct half of the time when it gives a confidence of \(50\%\) for its prediction. All of the networks move from under- to overconfidence during training. However, CEB and \(\mathrm{VIB}_{0.5}\) are only barely overconfident, while \(\beta = 0.1\) is sufficient to make VIB nearly as overconfident as the deterministic model. This overconfidence is one of the issues that is correlated with exceeding the MNI during training (Table 1). See Appendix A for a geometric explanation of how this can occur.

## 9 OUT-OF-DISTRIBUTION DETECTION

We test the ability of the five models to detect three different out-of-distribution (OoD) datasets. \(U(0,1)\) is uniform noise in the image domain. MNIST uses the MNIST test set. Vertical Flip is the most challenging, using vertically flipped Fashion MNIST test images, as originally proposed in Alemi et al. (2018). We use three different metrics for thresholding. The first two, \(H\) and \(R\), were proposed in Alemi et al. (2018).
\(H\) is the classifier entropy. \(R\) is the rate, defined in Section 5. The third metric is specific to CEB: \(Re_{X/\hat{Y}}\), the predicted residual information. Since we don't have access to the true value of \(Y\) at test time, we use \(\hat{y} \sim c(y|z_X)\) to calculate \(H(Z_X|\hat{Y})\). This is no longer a valid bound on \(Re_{X/Y}\), as \(\hat{y}\) may not be from the true distribution \(p(x,y,z_X)\). However, the better the classifier, the closer the estimate should be. These three threshold scores are used with the standard suite of detection metrics: False Positive Rate at \(95\%\) True Positive Rate (FPR @ 95% TPR), Area Under the ROC Curve (AUROC), and Area Under the Precision-Recall Curve (AUPR). See Lee et al. (2018) for definitions.

Table 2: Results for out-of-distribution detection (OoD). Thrsh. is the threshold score used: \(H\) is the entropy of the classifier; \(R\) and \(Re_{X/\hat{Y}}\) are defined in Section 9. Arrows denote whether higher or lower scores are better. Bold indicates the best score in that column for a particular OoD dataset.

<table><tr><td>OoD</td><td>Method</td><td>Thrsh.</td><td>FPR @ 95% TPR ↓</td><td>AUROC ↑</td><td>AUPR In ↑</td></tr><tr><td rowspan="5">U(0,1)</td><td>Determ</td><td>H</td><td>35.8</td><td>93.5</td><td>97.1</td></tr><tr><td>VIB0.01</td><td>H<br>R</td><td>41.1<br>0.0</td><td>92.5<br>100.0</td><td>96.0<br>100.0</td></tr><tr><td>VIB0.1</td><td>H<br>R</td><td>43.5<br>0.0</td><td>94.5<br>100.0</td><td>96.2<br>100.0</td></tr><tr><td>VIB0.5</td><td>H<br>R</td><td>73.2<br>80.6</td><td>87.0<br>57.1</td><td>90.5<br>51.4</td></tr><tr><td>CEB</td><td>H<br>R<br>Re<sub>X/Ŷ</sub></td><td>63.4<br>0.0<br>0.0</td><td>92.8<br>100.0<br>100.0</td><td>95.1<br>100.0<br>100.0</td></tr><tr><td rowspan="5">MNIST</td><td>Determ</td><td>H</td><td>59.0</td><td>88.4</td><td>90.0</td></tr><tr><td>VIB0.01</td><td>H<br>R</td><td>42.3<br>0.0</td><td>91.6<br>100.0</td><td>95.9<br>100.0</td></tr><tr><td>VIB0.1</td><td>H<br>R</td><td>60.3<br>0.5</td><td>84.7<br>86.8</td><td>89.7<br>99.8</td></tr><tr><td>VIB0.5</td><td>H<br>R</td><td>70.2<br>12.3</td><td>79.6<br>66.7</td><td>86.8<br>91.1</td></tr><tr><td>CEB</td><td>H<br>R<br>Re<sub>X/Ŷ</sub></td><td>70.6<br>0.1<br>0.2</td><td>77.8<br>94.4<br>92.0</td><td>73.0<br>99.9<br>99.9</td></tr><tr><td rowspan="5">Vertical Flip</td><td>Determ</td><td>H</td><td>66.8</td><td>88.6</td><td>90.2</td></tr><tr><td>VIB0.01</td><td>H<br>R</td><td>57.6<br>0.0</td><td>82.6<br>100.0</td><td>80.3<br>100.0</td></tr><tr><td>VIB0.1</td><td>H<br>R</td><td>65.3<br>0.0</td><td>84.5<br>99.2</td><td>85.2<br>100.0</td></tr><tr><td>VIB0.5</td><td>H<br>R</td><td>79.7<br>17.3</td><td>79.8<br>52.7</td><td>81.4<br>91.3</td></tr><tr><td>CEB</td><td>H<br>R<br>Re<sub>X/Ŷ</sub></td><td>68.0<br>0.0<br>0.0</td><td>84.9<br>90.7<br>92.6</td><td>85.5<br>100.0<br>100.0</td></tr></table>

The core result is that \(\mathrm{VIB}_{0.5}\) performs much less well at the OoD tasks than the other two VIB models and CEB. We believe that this is another result of \(\mathrm{VIB}_{0.5}\) learning the right amount of information, but not learning all of the right information, thereby demonstrating that it is not a valid MNI objective, as explored in Appendix A. On the other hand, the other two VIB objectives seem to perform extremely well, which is the benefit they get from capturing a bit more information about the training set. We will see below that there is a price for that information, however.
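As a concrete reference for how threshold scores like \(H\), \(R\), and \(Re_{X/\hat{Y}}\) turn into the reported metrics, here is a minimal sketch using scikit-learn; the convention that higher scores mean "more likely out-of-distribution" and the function name are our assumptions.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

def ood_metrics(scores_in: np.ndarray, scores_out: np.ndarray):
    """Compute FPR @ 95% TPR, AUROC, and AUPR-In from threshold scores,
    where higher scores are taken to mean 'more likely out-of-distribution'."""
    y_true = np.concatenate([np.zeros_like(scores_in), np.ones_like(scores_out)])
    y_score = np.concatenate([scores_in, scores_out])
    fpr, tpr, _ = roc_curve(y_true, y_score)
    fpr_at_95_tpr = fpr[np.searchsorted(tpr, 0.95)]
    auroc = roc_auc_score(y_true, y_score)
    # AUPR-In treats in-distribution as the positive class, so flip signs.
    aupr_in = average_precision_score(1 - y_true, -y_score)
    return fpr_at_95_tpr, auroc, aupr_in
```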
## 10 ADVERSARIAL EXAMPLE ROBUSTNESS AND DETECTION

Adversarial examples were first noted in Szegedy et al. (2013). The first practical attack, the Fast Gradient Method (FGM), was introduced shortly after (Goodfellow et al., 2015). Since then, many new attacks have been proposed. Most relevant to us is the Carlini-Wagner (CW) attack (Carlini & Wagner, 2017b), which was the first practical attack to directly use a blackbox optimizer to find minimal perturbations. Many defenses have also been proposed, but almost all of them are broken (Carlini & Wagner, 2017a; Athalye et al., 2018). This work may be seen as a natural continuation of the adversarial analysis of Alemi et al. (2017), which showed that VIB naturally had robustness to whitebox adversaries, including CW. In that work, the authors did not train any VIB models with a learned \(m(z_{X})\), which results in much weaker models, as shown in Alemi et al. (2018). We believe this is the first work that trains a VIB model with a learned marginal and uses it in an adversarial setting.

Table 3: Results for adversarial example detection (Attack). All attacks are targeting the "trousers" class in Fashion MNIST. CW is Carlini & Wagner (2017b). CW \((C = 1)\) is CW with an additional confidence penalty set to 1. CW \((C = 1)\) Det. is a custom CW attack targeting CEB's detection mechanism, \(Re_{X/\hat{Y}}\). \(L_{0},L_{1},L_{2},L_{\infty}\) report the corresponding norm (mean \(\pm 1\) std.) of successful adversarial perturbations. Higher norms on CW indicate that the attack had a harder time finding adversarial perturbations, since it starts by looking for the smallest possible perturbation. The remaining columns are as in Table 2. Arrows denote whether higher or lower scores are better. Bold indicates the best score in that column for a particular adversarial attack.
<table><tr><td>Attack</td><td>Model</td><td>Attack Success ↓</td><td>L0 ↑</td><td>L1 ↑</td><td>L2 ↑</td><td>L∞ ↑</td><td>Thresh.</td><td>FPR @ 95% TPR ↓</td><td>AUROC ↑</td><td>AUPR In ↑</td></tr><tr><td>CW</td><td>Determ</td><td>100.0%</td><td>377.1 ± 100.3</td><td>16.2 ± 10.2</td><td>1.4 ± 1.7</td><td>0.2 ± 0.1</td><td>H</td><td>15.4</td><td>90.7</td><td>86.0</td></tr><tr><td>CW</td><td>VIB0.01</td><td>55.2%</td><td>389.6 ± 100.9</td><td>17.1 ± 10.3</td><td>1.5 ± 1.8</td><td>0.2 ± 0.1</td><td>H</td><td>11.2</td><td>59.9</td><td>90.0</td></tr><tr><td>CW</td><td>VIB0.1</td><td>68.8%</td><td>392.1 ± 101.6</td><td>29.2 ± 18.1</td><td>5.1 ± 7.5</td><td>0.4 ± 0.2</td><td>H</td><td>16.5</td><td>77.4</td><td>80.0</td></tr><tr><td>CW</td><td>VIB0.5</td><td>35.8%</td><td>432.0 ± 99.6</td><td>40.1 ± 32.1</td><td>9.4 ± 14.4</td><td>0.5 ± 0.3</td><td>H</td><td>64.2</td><td>62.5</td><td>55.3</td></tr><tr><td>CW</td><td>CEB</td><td>35.8%</td><td>416.4</td><td>33.6</td><td>7.4</td><td>0.3</td><td>H</td><td>62.2</td><td>65.2</td><td>57.1</td></tr><tr><td>CW (C = 1)</td><td>Determ</td><td>100.0%</td><td>378.7 ± 100.3</td><td>16.6 ± 10.4</td><td>1.4 ± 1.9</td><td>0.2 ± 0.1</td><td>H</td><td>17.9</td><td>90.9</td><td>85.7</td></tr><tr><td>CW (C = 1)</td><td>VIB0.01</td><td>96.7%</td><td>381.3 ± 101.5</td><td>17.4 ± 10.5</td><td>1.6 ± 1.9</td><td>0.2 ± 0.1</td><td>H</td><td>19.6</td><td>72.1</td><td>89.6</td></tr><tr><td>CW (C = 1)</td><td>VIB0.1</td><td>97.3%</td><td>382.8 ± 100.4</td><td>28.2 ± 17.2</td><td>4.8 ± 7.4</td><td>0.4 ± 0.2</td><td>H</td><td>28.7</td><td>86.0</td><td>79.1</td></tr><tr><td>CW (C = 1)</td><td>VIB0.5</td><td>50.4%</td><td>422.0 ± 101.3</td><td>36.4 ± 28.6</td><td>7.8 ± 12.3</td><td>0.4 ± 0.2</td><td>H</td><td>86.5</td><td>59.8</td><td>54.1</td></tr><tr><td>CW (C = 1)</td><td>CEB</td><td>48.0%</td><td>417.6 ± 95.5</td><td>33.3 ± 29.8</td><td>7.3 ± 15.4</td><td>0.4 ± 0.2</td><td>H</td><td>77.4</td><td>63.5</td><td>56.4</td></tr><tr><td>CW (C = 1) Det.</td><td>CEB</td><td>25.1%</td><td>416.4 ± 92.2</td><td>84.1 ± 44.0</td><td>34.4 ± 22.8</td><td>0.9 ± 0.1</td><td>H</td><td>95.1</td><td>56.4</td><td>45.0</td></tr></table>

We consider CW in the whitebox setting to be the current gold standard attack, even though it is more expensive than
FGM or the various iterative attacks like DeepFool (Moosavi-Dezfooli et al., 2016) or iterative variants of FGM (Kurakin et al., 2016). Running an optimizer directly on the model to find the perturbation that can fool that model tells us much more about the robustness of the model than approaches that focus on attack efficiency. CW searches over the space of perturbation magnitudes, which makes the attack hard to defend against, and thus a strong option for testing robustness. Here, we explore three variants of the CW \(L_{2}\) targeted attack. The implementations of the first two CW attacks are from Papernot et al. (2018). CW and CW \((C = 1)\) are the baseline CW attack and CW with a confidence adjustment of 1, respectively. Note that in order for these attacks to succeed at all on CEB, we had to increase the default CW learning rate to \(5 \times 10^{-1}\). Without that increase, CW found almost no adversaries in our early experiments. All other parameters are left at their defaults for CW, apart from setting the clip ranges to [0, 1]. The final attack, CW \((C = 1)\) Det., is a modified version of CW \((C = 1)\) that additionally incorporates a detection tensor into the loss that CW minimizes. For CEB, we had it target minimizing \(Re_{X/\hat{Y}}\) in order to break the network's ability to detect the attack. All of the attacks target the "trousers" class of Fashion MNIST, as that is the most distinctive class. Targeting a less distinctive class, such as one of the shirt classes, would confuse the difficulty of classifying the different shirts and the robustness of the model to adversaries. We run each of the first three attacks on the entire Fashion MNIST test set (all 10,000 images). For the stochastic networks, we permit 32 encoder samples and take the mean classification result (the same number of samples is also used for gradient generation in the attacks, to be fair to the attacker). CW is expensive, but we are able to run these attacks on a single GPU in about 30 minutes. However, CW \((C = 1)\) Det. ends up being
On the other hand, \(\mathrm{VIB}_{0.5}\) has a hard time with detection, which indicates that, while it has learned a highly compressed representation, it has not learned the optimal set of bits. Thus, as we discuss in Appendix A, VIB trades off between learning the necessary information, which allows it to detect attacks perfectly, and learning the minimum information, which allows it to be robust to attacks. The CEB model permits both – it maintains the necessary information for detecting powerful whitebox attacks, but also retains the minimum information, providing robustness. This is again visible in the CW \((C = 1)\) Det. attack, which directly targets CEB's detection mechanism. Even though the model no longer does well at detecting the attack, it becomes more robust to the attack, as indicated both by the much lower attack success rate and the much larger perturbation magnitudes.

## 11 INFORMATION-FREE GENERALIZATION EXPERIMENTS

We replicate the basic experiment from Zhang et al. (2016): we use the images from Fashion MNIST, but replace the training labels with fixed random labels. This dataset is information-free in the sense that \(I(X;Y) = 0\). We use that dataset to train multiple deterministic models, CEB models, and a range of VIB models. We find that the CEB model never learns (even after 100 epochs of training), the deterministic model always learns (after about 40 epochs of training it begins to memorize the random labels), and the VIB models only learn with \(\beta \leq 0.001\). The fact that CEB and VIB with \(\beta\) near \(\frac{1}{2}\) manage to resist memorizing random labels is our final empirical demonstration that MNI is a powerful criterion for objective functions.
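For reference, a minimal sketch of the information-free dataset construction described above (loading of the Fashion MNIST images is assumed; the function name is ours):

```python
import numpy as np

def make_information_free_labels(num_examples, num_classes=10, seed=0):
    """Replace the training labels with *fixed* random labels, so that
    I(X;Y) = 0 while the label marginal stays roughly uniform. The same
    random label is reused for a given image across all epochs."""
    rng = np.random.RandomState(seed)
    return rng.randint(num_classes, size=num_examples)

# Usage sketch (x_train is assumed to be the Fashion MNIST training images):
# y_random = make_information_free_labels(len(x_train))
# train(model, x_train, y_random)  # CEB should fail to fit these labels.
```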
## 12 CONCLUSION

We have presented the basic form of the Conditional Entropy Bottleneck (CEB), motivated by the Minimum Necessary Information (MNI) criterion for optimal representations. We have shown through careful experimentation that simply by switching to CEB, you can expect substantial improvements in OoD detection, adversarial example detection and robustness, calibration, and generalization. Additionally, we have shown that it is possible to get all of these advantages without using any additional form of regularization, and without any new hyperparameters. We have argued empirically that objective hyperparameters can lead to hard-to-predict suboptimal behavior, such as memorizing random labels or reduced robustness to adversarial examples. In Appendix E and in future work, we show how to generalize CEB beyond the simple case of two observed variables. It is our perspective that all of the issues explored here – miscalibration, failure at OoD tasks, vulnerability to adversarial examples, and dataset memorization – stem from the same underlying issue: retaining too much information about the training data in the learned representation. We believe that the MNI criterion and CEB show a path forward for many tasks in machine learning, permitting fast, amortized inference while ameliorating major problems.

## ACKNOWLEDGMENTS

REDACTED <--- Page Split --->

## A ANALYSIS OF CEB AND IB

From Equation (5) and the definition of CEB in Equation (6), the following equivalence between CEB and IB is immediate:

\[CEB \equiv I(X;Z|Y) - I(Y;Z) = I(X;Z) - 2I(Y;Z) \equiv IB_{2} \quad (22)\]

where we parameterize IB with \(\beta\) on the \(I(Y;Z)\) term for convenience. This equivalence generalizes as follows:

\[IB = I(X;Z) - \beta I(Y;Z) \quad (23)\]
\[CEB = I(X;Z|Y) - \frac{\beta}{2} I(Y;Z) \quad (24)\]

In Figure 4, we show the combined information planes for CEB and IB given the above parameterization. The figures show the simple geometry that determines a point on the Pareto-optimal frontier for both objectives. Every such point is fully determined by the function \(\epsilon(\beta)\) for a given model and dataset, where \(\epsilon\) is the closest the model can approach the true optimal surface. \(\epsilon(\beta) = 0\) corresponds to the "infinite" model family that exactly traces out the boundaries of the feasible region. The full feasible regions can be seen in Figure 2.

From this geometry we can immediately conclude that if an IB model and a CEB model have the same value of \(\epsilon > 0\) at equivalent \(\beta\), the CEB model will always yield a value of \(I(Y;Z)\) closer to \(I(X;Y)\). This is because the slopes of the tangent lines for CEB are always lower, putting the tangent points higher on the \(\epsilon\) ball. This gives part of a theoretical justification for the empirical observations above that \(\mathrm{VIB}_{0.5}\) (equivalent to \(IB_{2}\) in the parameterization we are describing here) fails to capture <--- Page Split ---> as much of the necessary information as the CEB model. Even at the Pareto-optimal frontier, \(\mathrm{VIB}_{0.5}\) cannot get \(I(Y;Z)\) as close to \(I(X;Y)\) as CEB can. Of course, we do not want to claim that this effect accounts for the fairly substantial difference in performance – that is likely due to a combination of other factors, including the fact that it is often easier to train continuous conditional distributions (like \(b(z|y)\)) than continuous marginal distributions (like \(m(z)\)).

We also think that this analysis of the geometry of IB and CEB supports our preference for targeting the MNI point and treating CEB as an objective without hyperparameters. First, there are at most four points of interest in both the IB and CEB information planes (all four are visible in Figure 2): the origin, where there is no information in the representation; the MNI point; the point at \((I(Y;Z) = I(X;Y), I(X;Z) = H(X))\) (which is an MDL-compatible representation (Grünwald, 2007)); and the point at \((I(Y;Z) = 0, I(X;Z) = H(X|Y))\) (which would be the optimal decoder for an MNI representation). These are the only points naturally identified by the dataset – selecting a point on one of the edges between those four points seems to need additional justification. Second, if you do agree with the MNI criterion, for a given model it is impossible to get any closer to the MNI point than by using the base CEB objective of Equation (22), due to the convexity of the Pareto-optimal frontier. Much more useful is making changes to the model, architecture, dataset, etc. in order to make \(\epsilon\) smaller. One possibility in that direction that IB and CEB models offer is inspecting training examples with high rate or residual information to check for label noise, leading to a natural human-in-the-loop model improvement algorithm. Another is using CEB's residual information as a measure of the quality of the trained model, as mentioned in Appendix C.

A final point of interest is what happens when \(I(X;Y) = H(X)\). In this case, the feasible region for CEB collapses to the line segment \(I(X;Z|Y) = 0\) with \(0 \leq I(Y;Z) \leq I(X;Y)\). Similarly, the corresponding IB feasible region is the diagonal line \(I(X;Z) = I(Y;Z)\). This case occurs if we choose as our task to predict images given labels, for example. We should expect such label-conditional generative models to be particularly easy to train, since the search space is so simple. Additionally, it is never possible to learn a representation that exceeds the MNI: \(I(X;Z) \leq H(X) = I(X;Y)\).
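As a consistency check on Equations (23) and (24), the decomposition \(I(X;Z|Y) = I(X;Z) - I(Y;Z)\) used for Equation (22) makes the tangent-slope comparison explicit (our algebra, not part of the original derivation):

\[CEB = I(X;Z|Y) - \frac{\beta}{2} I(Y;Z) = I(X;Z) - \left(1 + \frac{\beta}{2}\right) I(Y;Z) \equiv IB_{1 + \beta / 2}\]

so in the \((I(Y;Z), I(X;Z))\) information plane, the IB and CEB objectives at equivalent \(\beta\) have tangent lines of slope \(\beta\) and \(1 + \beta / 2\), respectively.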
## B MUTUAL INFORMATION OPTIMIZATION

As an objective function, CEB is independent of the methods used to optimize it. Here we focus on variational objectives because they are simple, tractable, and well-understood, but any approach to optimizing mutual information terms can work, so long as it respects the side of the bounds required by the objective. For example, both Oord et al. (2018) and Hjelm et al. (2018) could be used to maximize the \(I(Y;Z)\) term. There are many approaches in the literature that attempt to optimize mutual information terms in some form, including Krause et al. (2010); Chen et al. (2016); Hu et al. (2017); Hjelm et al. (2018); Oord et al. (2018). It is worth noting that none of those approaches by themselves are compatible with the MNI criterion. Some of them explicitly maximize \(I(X;Z_X)\), while others maximize \(I(Y;Z_X)\) but leave \(I(X;Z_X)\) unconstrained. We expect all of these approaches to capture more than the MNI in general.

## C TRAINING

Because of the properties of \(Re_{X/Y}\), we can consider training algorithms that don't rely on observing validation set performance in order to decide when to lower the learning rate. The closer we can get \(Re_{X/Y}\) to 0 on the training set, the better we expect to generalize to data drawn from the same distribution. One simple approach to training is to set a high initial learning rate (possibly with reverse annealing of the learning rate (Goyal et al., 2017)), and then lower the learning rate after any epoch of training that doesn't result in a new lowest mean residual information on the training data. This is equivalent to the logic of the dev-decay training algorithm of Wilson et al. (2017), but does not require the use of a validation set. Additionally, since the training set is typically much larger than a validation set would be, the average loss over the epoch is much more stable, so the learning rate is less likely to be lowered spuriously. The intuition for this algorithm is that \(Re_{X/Y}\) directly measures how far from optimal our learned representation is for a given \(c(y|z_X)\). At the end of training, a large \(Re_{X/Y}\) indicates that we could improve performance by increasing the capacity of our architecture or considering ways in which our model may be misspecified. See Algorithm 1 for pseudocode. We do not claim that this algorithm is optimal.

<--- Page Split --->

Algorithm 1: Training algorithm that lowers the learning rate when the mean \(\overline{Re_{X/Y}}\) of the previous epoch is not less than the lowest \(\overline{Re_{X/Y}}\) seen so far. The same idea can be applied to training VIB and deterministic models by tracking that the training loss is always going down. For the experiments in Section 7, we set the values specified in the Input section.

Input: learning_rate \(= 10^{-3}\), min_learning_rate \(= 10^{-6}\), lowering_scale \(= 1 - \frac{1}{e}\), first_epoch_when_lowering_learning_rate_is_permitted \(= 40\)

1. epoch = 0
2. best_Re = \(\infty\)
3. while learning_rate > min_learning_rate do
4. &nbsp;&nbsp;progress = true
5. &nbsp;&nbsp;while progress do
6. &nbsp;&nbsp;&nbsp;&nbsp;// Train and get the mean residual information.
7. &nbsp;&nbsp;&nbsp;&nbsp;mean_Re = train_model_for_1_epoch()
8. &nbsp;&nbsp;&nbsp;&nbsp;epoch = epoch + 1
9. &nbsp;&nbsp;&nbsp;&nbsp;if mean_Re \(\geq\) best_Re then progress = false
10. &nbsp;&nbsp;&nbsp;&nbsp;else best_Re = mean_Re
11. &nbsp;&nbsp;if epoch \(\geq\) first_epoch_when_lowering_learning_rate_is_permitted then
12. &nbsp;&nbsp;&nbsp;&nbsp;learning_rate = learning_rate \(\times\) lowering_scale
13. &nbsp;&nbsp;else best_Re = \(\infty\)
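A runnable Python rendering of Algorithm 1 (a sketch; `train_model_for_1_epoch` is an assumed callback that trains for one epoch at the given learning rate and returns that epoch's mean residual information):

```python
import math

def residual_information_schedule(train_model_for_1_epoch,
                                  learning_rate=1e-3,
                                  min_learning_rate=1e-6,
                                  lowering_scale=1 - 1 / math.e,
                                  first_lowering_epoch=40):
    """Lower the learning rate whenever an epoch fails to produce a new
    lowest mean residual information on the training set (Algorithm 1)."""
    epoch = 0
    best_re = math.inf
    while learning_rate > min_learning_rate:
        progress = True
        while progress:
            mean_re = train_model_for_1_epoch(learning_rate)
            epoch += 1
            if mean_re >= best_re:
                progress = False   # no new best this epoch
            else:
                best_re = mean_re
        if epoch >= first_lowering_epoch:
            learning_rate *= lowering_scale
        else:
            best_re = math.inf     # too early to lower the rate; reset the best
    return best_re
```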
## D MODEL DETAILS

All of the models in our experiments have the same core architecture: a \(7\times 2\) Wide ResNet (Zagoruyko & Komodakis, 2016) for the encoder, with a final layer of \(D = 4\) dimensions for the latent representation, followed by a two-layer MLP classifier using ELU (Clevert et al., 2015) activations with a final categorical distribution over the 10 classes. The stochastic models parameterize the mean and variance of a \(D = 4\) fully covariate multivariate Normal distribution with the output of the encoder. Samples from that distribution are passed into the classifier MLP. Apart from that difference, the stochastic models don't differ from Determ during evaluation. None of the five models uses any form of regularization (e.g., \(L_{1}\), \(L_{2}\), Dropout (Srivastava et al., 2014), BatchNorm (Ioffe & Szegedy, 2015)). The VIB models have an additional learned marginal, \(m(z_{X})\), which is a mixture of 240 \(D = 4\) fully covariate multivariate Normal distributions. The CEB model instead has the backward encoder, \(b(z_{X}|y)\), which is a \(D = 4\) fully covariate multivariate Normal distribution parameterized by a 1-layer MLP mapping the label, \(Y = y\), to the mean and variance. In order to simplify comparisons, for CEB we additionally train a marginal \(m(z_{X})\) identical in form to that used by the VIB models. However, for CEB, \(m(z_{X})\) is trained using a separate optimizer so that it doesn't impact training of the CEB objective in any way. Having \(m(z_{X})\) for both CEB and VIB allows us to compare the rate, \(R\), of each model except Determ.

## D.1 DISTRIBUTIONAL FAMILIES

Any distributional family may be used for the encoder. Reparameterizable distributions (Kingma & Welling, 2014; Figurnov et al., 2018) are convenient, but it is also possible to use the score function trick (Williams, 1992) to get a high-variance estimate of the gradient for distributions that have no explicit or implicit reparameterization. In general, a good choice for \(b(z|y)\) is the same distributional family as \(e(z|x)\), or a mixture thereof. <--- Page Split ---> These are modeling choices that need to be made by the practitioner, as they depend on the dataset. In this work, we chose Normal distributions because they are easy to work with and will be the common choice for many problems, particularly when parameterized with neural networks, but that choice is incidental rather than fundamental.

## D.2 REGULARIZATION

Note that we did not use additional regularization on the deterministic model, but all models have a 4-dimensional bottleneck, which is likely to have acted as a strong regularizer for the deterministic model. Additionally, standard forms of regularization, including stochastic regularization, did not prevent the CW attack from being successful \(100\%\) of the time in the original work (Carlini & Wagner, 2017b). Nor did regularization cause the deterministic networks in Zhang et al. (2016) to avoid memorizing the training set. Thus, we don't think that our deterministic baseline is disadvantaged on the tasks we considered in Sections 7 and 11.

## D.3 FINITENESS OF THE MUTUAL INFORMATION

It is worth noting that the conditions for infinite mutual information given in Amjad & Geiger (2018) do not apply to either CEB or VIB, as they both use stochastic encoders \(e(z_{X}|x)\). In our experiments using continuous representations, we did not encounter mutual information terms that diverged to infinity, although it is possible to make modeling and data choices that make numerical instabilities more likely. This is not a flaw specific to CEB or VIB, however, and we found numerical instability to be almost non-existent across a wide variety of modeling and architectural choices for both variational objectives.
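To make the distributional choices in this appendix concrete, here is a minimal PyTorch sketch (our illustrative code; it uses a diagonal rather than fully covariate Normal for brevity, and the `in_dim` values are placeholders) of the encoder head \(e(z_X|x)\) and the backward encoder \(b(z_X|y)\):

```python
import torch.nn as nn
import torch.nn.functional as F
import torch.distributions as td

D = 4  # latent dimensionality used throughout the experiments

class NormalHead(nn.Module):
    """Maps features to a D-dimensional Normal distribution."""
    def __init__(self, in_dim):
        super().__init__()
        self.loc = nn.Linear(in_dim, D)
        self.raw_scale = nn.Linear(in_dim, D)

    def forward(self, h):
        scale = F.softplus(self.raw_scale(h)) + 1e-6  # keep the scale positive
        return td.Independent(td.Normal(self.loc(h), scale), 1)

encoder_head = NormalHead(in_dim=256)   # e(z|x), applied to the trunk's features
label_layer = nn.Linear(10, 32)         # 1-layer MLP from the one-hot label
backward_head = NormalHead(in_dim=32)   # b(z|y)

def b_z_given_y(y_onehot):
    return backward_head(F.elu(label_layer(y_onehot)))
```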
## E ADDITIONAL CEB OBJECTIVES

Here we describe a few of the more obvious variants of the CEB objective.

## E.1 CONDITIONAL GENERATION

In the presentation of CEB above, we derived the objective for what may be termed "classification" tasks (although there is nothing in the derivation that restricts the form of either \(X\) or \(Y\)). However, CEB is fully symmetric, so it is natural to consider the second task defined by our choice of dataset: conditional generation of \(X\) given \(Y = y\). In this case, we can augment our graphical model with a new variable, \(Z_{Y}\), and derive the same CEB objective for that variable:

\[\begin{array}{r}{\min I(Y;Z_{Y}|X) = \min I(Y;Z_{Y}) - I(X;Z_{Y})}\\ {\Rightarrow \min -H(Z_{Y}|Y) + H(Z_{Y}|X)} \end{array} \quad (25\text{--}26)\]

\[\begin{array}{r}{\max I(X;Z_{Y}) = \max H(X) - H(X|Z_{Y})}\\ {\Rightarrow \max -H(X|Z_{Y})} \end{array} \quad (27\text{--}28)\]

In the same manner as above, we can derive variational bounds on \(H(Z_{Y}|X)\) and \(H(X|Z_{Y})\). In particular, we can variationally bound \(p(z_{Y}|x)\) with \(e(z_{Y}|x)\). Additionally, we can bound \(p(x|z_{Y})\) with a decoder distribution of our choice, \(d(x|z_{Y})\). Because the decoder is maximizing a lower bound on the mutual information between \(Z_{Y}\) and \(X\), it can never memorize \(X\). It is directly limited during training to use exactly \(H(Y)\) nats of information from \(Z_{Y}\) to decode \(X\). For a mean-field decoder, this means that the decoder will only output a canonical member of each class. For a powerful decoder, such as an autoregressive decoder, it will learn to select a random member of the class. For discrete \(Y\), this model can trivially be turned into an unconditional generative model by first sampling \(Y\) from the training data or using any other appropriate procedure, such as sampling \(Y\) uniformly at random.

<--- Page Split ---> ![](images/17_0.jpg) <center>Figure 5: Information diagram for the basic hierarchical CEB model, \(Z_{2} \leftarrow Z_{1} \leftarrow X \leftrightarrow Y\).</center>

## E.2 BIDIRECTIONAL GENERATION

Given the presentation of conditional generation above, it is natural to consider that both \(c(y|z)\) and \(d(x|z)\) are conditional generative models of \(Y\) and \(X\), respectively, and to learn a \(Z\) that can handle both tasks. This can be done easily with the following bidirectional CEB model: \(Z_{X} \leftarrow X \leftrightarrow Y \rightarrow Z_{Y}\). This corresponds to the following factorization: \(p(x, y, z_{X}, z_{Y}) \equiv p(x, y)\, e(z_{X}|x)\, b(z_{Y}|y)\). The two objectives from above then become the following single objective:

\[\begin{array}{r l} & {\min -H(Z_{X}|X) + H(Z_{X}|Y) + H(Y|Z_{X})}\\ & {\qquad -H(Z_{Y}|Y) + H(Z_{Y}|X) + H(X|Z_{Y})} \end{array} \quad (30)\]

A natural question is how to ensure that \(Z_{X}\) and \(Z_{Y}\) are consistent with each other. Fortunately, that consistency is trivial to encourage by making the natural variational approximations: \(p(z_{Y}|x) \rightarrow e(z_{Y}|x)\) and \(p(z_{X}|y) \rightarrow b(z_{X}|y)\). The full bidirectional variational CEB objective then becomes:

\[\begin{array}{r l} & {\min \langle \log e(z_{X}|x)\rangle -\langle \log b(z_{X}|y)\rangle -\langle \log c(y|z_{X})\rangle}\\ & {\qquad +\langle \log b(z_{Y}|y)\rangle -\langle \log e(z_{Y}|x)\rangle -\langle \log d(x|z_{Y})\rangle} \end{array} \quad (32)\]

At convergence, we learn a unified \(Z\) that is consistent with both \(Z_{X}\) and \(Z_{Y}\), permitting generation of either output given either input in the trained model, in the same spirit as Vedantam et al. (2018), but without any objective function hyperparameter tuning.

## E.3 HIERARCHICAL CEB

Thus far, we have focused on learning a single latent representation (possibly composed of multiple latent variables at the same level). Here, we consider how to learn a hierarchical model with CEB. Consider the graphical model \(Z_{2} \leftarrow Z_{1} \leftarrow X \leftrightarrow Y\). This is the simplest hierarchical supervised representation learning model. The general form of its information diagram is given in Figure 5.

The key observation for generalizing CEB to hierarchical models is that the target mutual information doesn't change. By this, we mean that all of the \(Z_{i}\) in the hierarchy should cover \(I(X;Y)\) at convergence, which means maximizing \(I(Y;Z_{i})\). It is reasonable to ask why we would want to train such a model, given that the final set of representations are presumably all effectively identical in terms of information content. The answer is simple: doing so allows us to train deep models in a principled manner such that all layers of the network are consistent with each other and with the data. We need to be more careful when considering the residual information terms, though – it is not the case that we want to minimize \(I(X;Z_{i}|Y)\), which is not consistent with the graphical model. Instead, we want to minimize \(I(Z_{i-1};Z_{i}|Y)\), defining \(Z_{0} = X\). This gives the following simple Hierarchical CEB objective:

\[\begin{array}{r l} & {CEB_{\mathrm{hier}}\equiv \min \sum_{i}I(Z_{i - 1};Z_{i}|Y) - I(Y;Z_{i})}\\ & {\qquad \Leftrightarrow \min \sum_{i} - H(Z_{i}|Z_{i - 1}) + H(Z_{i}|Y) + H(Y|Z_{i})} \end{array} \quad (34)\]

<--- Page Split ---> Because all of the \(Z_{i}\) are targeting \(Y\), this objective is as stable as regular CEB. Note that if all of the \(Z_{i}\) have the same dimensionality, in principle they may all use the same networks for \(b(z_{i}|y)\) and/or \(c(y|z_{i})\), which may substantially reduce the number of parameters in the model. All of the individual loss terms in the objective must still appear, of course. There is no requirement, however, that the \(Z_{i}\) have the same latent dimensionality, although doing so may give a unified hierarchical representation.
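By analogy with the variational bounds in Equation (32), the hierarchical objective admits a variational form (a sketch of ours, not given in the original text; \(e_i\), \(b_i\), and \(c_i\) denote per-level encoder, backward encoder, and classifier distributions):

\[CEB_{\mathrm{hier}} \Rightarrow \min \sum_i \left\langle \log e_i(z_i|z_{i-1}) \right\rangle - \left\langle \log b_i(z_i|y) \right\rangle - \left\langle \log c_i(y|z_i) \right\rangle\]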
## E.4 SEQUENCE LEARNING

Many of the richest problems in machine learning vary over time. In Bialek & Tishby (1999), the authors define the Predictive Information:

\[I(X_{past};X_{future}) = \left\langle \log \frac{p(X_{past},X_{future})}{p(X_{past})\,p(X_{future})}\right\rangle\]

This is of course just the mutual information between the past and the future. However, under an assumption of temporal invariance (any time window of fixed length is expected to have the same entropy), they are able to characterize the predictive information, and show that it is a subextensive quantity: \(\lim_{T \to \infty} I(T)/T \to 0\), where \(I(T)\) is the predictive information over a time window of length \(2T\) (\(T\) steps of the past predicting \(T\) steps into the future). This concise statement tells us that past observations contain vanishingly small information about the future as the time window increases.

The application of CEB to extracting the predictive information is straightforward. Given the Markov chain \(X_{< t} \to X_{\geq t}\), we learn a representation \(Z_{t}\) that optimally covers \(I(X_{< t}; X_{\geq t})\) in Predictive CEB:

\[\begin{array}{r l} & {CEB_{\mathrm{pred}}\equiv \min I(X_{< t};Z_{t}|X_{\geq t}) - I(X_{\geq t};Z_{t})}\\ & {\qquad \Rightarrow \min -H(Z_{t}|X_{< t}) + H(Z_{t}|X_{\geq t}) + H(X_{\geq t}|Z_{t})} \end{array} \quad (35)\]

Note that the model entailed by this objective function does not rely on \(Z_{< t}\) when predicting \(X_{\geq t}\). A single \(Z_{t}\) captures all of the information in \(X_{< t}\) and is to be used to predict as far forward as is desired. "Rolling out" \(Z_{t}\) to make predictions is a modeling error according to the predictive information. Also note that, given a dataset of sequences, \(CEB_{\mathrm{pred}}\) may be extended to a bidirectional model, as in Appendix E.2. In this case, two representations are learned, \(Z_{< t}\) and \(Z_{\geq t}\). Both representations are for timestep \(t\): the first represents the observations before \(t\), and the second represents the observations from \(t\) onwards. As in the normal bidirectional model, using the same encoder and backward encoder for both parts of the bidirectional CEB objective ties the two representations together.

Modeling and architectural choices. As with all of the variants of CEB, whatever entropy remains in the data after capturing the entropy of the mutual information in the representation must be modeled by the decoder. In this case, a natural modeling choice would be a probabilistic RNN with powerful decoders per time-step to be predicted. However, it is worth noting that such a decoder would need to sample at each future step to decode the subsequent step. An alternative, if the prediction horizon is short or the predicted data are small, is to decode the entire sequence from \(Z_{t}\) in a single feed-forward network (possibly as a single autoregression over all outputs in some natural sequence). Given the subextensivity of the predictive information, that may be a reasonable choice in stochastic environments, as the useful prediction window may be small.

Multi-scale sequence learning. As in WaveNet (Van Den Oord et al., 2016), it is natural to consider sequence learning at multiple different temporal scales. Combining an architecture like time-dilated WaveNet with CEB is as simple as combining \(CEB_{\mathrm{pred}}\) with \(CEB_{\mathrm{hier}}\) (Appendix E.3). In this case, each of the \(Z_{t}\) would represent a wider time dilation conditioned on the aggregate \(Z_{t-1}\). The advantage of such an objective over that used in WaveNet is avoiding unnecessary memorization of earlier timesteps.
## E.5 UNSUPERVISED CEB

Pure unsupervised learning is fundamentally an ill-posed problem. Without knowing what the task is, it is impossible to define an optimal representation directly. We think that this core issue is what led the authors of Bengio et al. (2013) to prefer barely compressed representations. But by that line of <--- Page Split ---> reasoning, it seems that unsupervised learning devolves to lossless compression – perhaps the correct representation is the one that allows you to answer the question: "What is the color of the fourth pixel in the second row?" On the other hand, it also seems challenging to put the decision about what information should be kept into objective function hyperparameters, as in the \(\beta\)-VAE and penalty VAE (Alemi et al., 2018) objectives. That work showed that it is possible to constrain the amount of information in the learned representation, but it is unclear how those objective functions keep only the "correct" bits of information for the downstream tasks you might care about. This is in contrast to all of the preceding discussion, where the task clearly defines both the correct amount of information and which bits are likely to be important.

However, unsupervised representation learning is still an interesting problem, even if it is ill-posed. Our perspective on the importance of defining a task in order to constrain the information in the representation suggests that we can turn the problem into a data modeling problem, in which the practitioner who selects the dataset also "models" the likely form of the useful bits in the dataset for the downstream task of interest. In particular, given a dataset \(X\), we propose selecting a function \(f(X) \to X'\) that transforms \(X\) into a new random variable \(X'\). This defines a paired dataset, \(P(X, X')\), on which we can use CEB as normal. Note that choosing the identity function for \(f\) results in maximal mutual information between \(X\) and \(X'\) (\(H(X)\) nats), which will result in a representation that is far from the MNI for normal downstream tasks. In other words, representations learned by true autoencoders are unlikely to be any better than simply using the raw \(X\).

It may seem that we have not proposed anything useful, as the selection of \(f(\cdot)\) is unconstrained, and seems much more daunting than selecting \(\beta\) in a \(\beta\)-VAE or \(\sigma\) in a penalty VAE. However, there is a very powerful class of functions that makes this problem much simpler, and that also makes it clear that using CEB will only select bits from \(X\) that are useful. That class of functions is the noise functions.

## E.5.1 DENOISING CEB AUTOENCODER

Given a dataset \(X\) without labels or other targets, and some set of tasks in mind to be solved by a learned representation, we may select a random noise variable \(U\) and a function \(X' = f(X, U)\) that we believe will destroy the irrelevant information in \(X\). We may then add representation variables \(Z_{X}, Z_{X'}\) to the model, giving the joint distribution \(p(x, x', u, z_{X}, z_{X'}) \equiv p(x)p(u)p(x'|f(x, u))e(z_{X}|x)b(z_{X'}|x')\). This joint distribution is represented in Figure 6.

Denoising Autoencoders were originally proposed in Vincent et al. (2008). In that work, the authors argue informally that reconstruction of corrupted inputs is a desirable property of learned representations.
In this paper's notation, we could describe their proposed objective as \(\min H(X|Z_{X'})\), or equivalently \(\min -\langle \log d(x|z_{X'} = f(x, \eta))\rangle_{x, \eta \sim p(x)p(\eta)}\). Here we make this idea somewhat more formal through the MNI criterion and the derivation of CEB as the optimal objective for that criterion. We also note that, practically speaking, we would like to learn a representation that is consistent with uncorrupted inputs as well. Consequently, we are going to use a bidirectional model:

\[\begin{array}{r l} & {CEB_{\mathrm{denoise}}\equiv \min I(X;Z_{X}|X^{\prime}) - I(X^{\prime};Z_{X}) + I(X^{\prime};Z_{X^{\prime}}|X) - I(X;Z_{X^{\prime}})}\\ & {\qquad \Rightarrow \min -H(Z_{X}|X) + H(Z_{X}|X^{\prime}) + H(X^{\prime}|Z_{X}) - H(Z_{X^{\prime}}|X^{\prime}) + H(Z_{X^{\prime}}|X) + H(X|Z_{X^{\prime}})} \end{array} \quad (37)\]

This requires two encoders and two decoders, which may seem expensive, but it permits a consistent learned representation that can be used cleanly for downstream tasks. Using a single encoder/decoder pair would result in either an encoder that does not work well with uncorrupted inputs, or a decoder that only generates noisy outputs. If you are only interested in the learned representation and not in generating good reconstructions, the objective simplifies to the first three terms. In that case, the objective is properly called a Noising CEB Autoencoder, as the model predicts the noisy \(X'\) from \(X\):

\[\begin{array}{r l} & {CEB_{\mathrm{noise}}\equiv \min I(X;Z_{X}|X^{\prime}) - I(X^{\prime};Z_{X})}\\ & {\qquad \Rightarrow \min -H(Z_{X}|X) + H(Z_{X}|X^{\prime}) + H(X^{\prime}|Z_{X})} \end{array} \quad (40)\]

<--- Page Split ---> ![](images/20_0.jpg) <center>Figure 6: Information diagram and graphical model for the Denoising CEB Autoencoder.</center>

In these models, the noise function \(X^{\prime} = f(X,U)\) must encode the practitioner's assumptions about the structure of information in the data. This obviously will vary per type of data, and even per desired downstream task. However, we don't need to work too hard to find the perfect noise function initially. A natural first choice for \(f\) is:

\[\begin{array}{c}{f(x,\eta) = \mathrm{clip}(x + \eta ,\mathcal{D})}\\ {\eta \sim \lambda U(-1,1)*\mathcal{D}}\\ {\mathcal{D} = \mathrm{domain}(X)} \end{array} \quad (42)\]

In other words, add uniform noise scaled to the domain of \(X\) and by a hyperparameter \(\lambda\), and clip the result to the domain of \(X\). When \(\lambda = 1\), \(X'\) is indistinguishable from uniform noise. As \(\lambda \to 0\), this maintains more and more of the original information from \(X\) in \(X'\). For some value of \(\lambda > 0\), most of the irrelevant information is destroyed and most of the relevant information is maintained, if we assume that higher-frequency content in the domain of \(X\) is less likely to contain the desired information. That information is what will be retained in the learned representation.
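A minimal sketch of this first-choice noise function (our illustrative code, assuming images with domain [0, 1]):

```python
import numpy as np

def uniform_noise_corruption(x, lam, rng=np.random):
    """Equation (42): add uniform noise scaled by the domain of X and by
    lambda, then clip back to the domain. The [0, 1] domain is an assumption."""
    lo, hi = 0.0, 1.0                                   # domain(X), assumed
    eta = lam * rng.uniform(-1.0, 1.0, size=x.shape) * (hi - lo)
    return np.clip(x + eta, lo, hi)
```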
Theoretical optimality of noise functions. Above we claimed that this learning procedure will only select bits that are useful for the downstream task, given that we select the proper noise function. Here we prove that claim constructively. Imagine an oracle that knows which bits of information should be destroyed and which retained in order to solve the future task of interest. Further imagine, for simplicity, that the task of interest is classification. What noise function must that oracle implement in order to ensure that \(CEB_{\mathrm{denoise}}\) can only learn exactly the bits needed for classification? The answer is simple: for every \(X = x_{i}\), select \(X^{\prime} = x_{i}^{\prime}\) uniformly at random from among all of the \(X = x_{j}\) that should have the same class label as \(X = x_{i}\). Now, the only way for CEB to maximize \(I(X;Z_{X^{\prime}})\) and minimize \(I(X^{\prime};Z_{X^{\prime}})\) is by learning a representation that is isomorphic to classification and that encodes exactly \(I(X;Y)\) nats of information, even though it was only trained "unsupervisedly" on \((X,X^{\prime})\) pairs. Thus, if we can choose the correct noise function that destroys only the bits we don't care about, \(CEB_{\mathrm{denoise}}\) will learn the desired representation and nothing else (caveated by model, architecture, and optimizer selection, as usual).

## E.6 SEMI-SUPERVISED CEB

Having any amount of paired data \((X,Y)\) immediately improves our ability to learn a semantic representation. Fortunately, it is easy to reincorporate paired data in combination with the noising and denoising CEB objectives introduced above. <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 7: Graphical model for Semi-Supervised CEB.</center> We present the assumed graphical model in Figure 7. We give the corresponding Semi-Supervised CEB directly:

\[\begin{array}{r l} & {CEB_{\mathrm{semi}}\equiv \min I(X;Z_{X}|X^{\prime}) - I(X^{\prime};Z_{X}) + I(X^{\prime};Z_{X^{\prime}}|X) - I(X;Z_{X^{\prime}})}\\ & {\qquad +\mathbf{1}_{Y\in (X,Y)}\left[I(X^{\prime};Z_{X^{\prime}}|Y) - I(Y;Z_{X^{\prime}})\right]}\\ & {\qquad \Rightarrow \min -H(Z_{X}|X) + H(Z_{X}|X^{\prime}) + H(X^{\prime}|Z_{X}) - H(Z_{X^{\prime}}|X^{\prime}) + H(Z_{X^{\prime}}|X) + H(X|Z_{X^{\prime}})}\\ & {\qquad +\mathbf{1}_{Y\in (X,Y)}\left[-H(Z_{X^{\prime}}|X^{\prime}) + H(Z_{X^{\prime}}|Y) + H(Y|Z_{X^{\prime}})\right]} \end{array} \quad (47)\]

\(\mathbf{1}_{Y\in (X,Y)}\) is the indicator function, equal to 1 when a \(Y\) is part of the paired data, and equal to 0 otherwise. In other words, if we have \(Y = y\) paired with a given \(X = x\), we can include those terms in the objective. If we do not have that, we can simply leave them out. Note that it is straightforward to generalize this to semi-supervised learning with two or more observations that are each being learned unsupervisedly but also have some amount of paired data – for example, images and natural language, assuming we have a reasonable noise model for unsupervisedly learning natural language.
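In an implementation, the indicator simply becomes a per-example 0/1 mask on the supervised terms – a minimal sketch (names are ours):

```python
def semi_supervised_ceb_loss(unsup_loss, sup_loss, has_label_mask):
    """unsup_loss, sup_loss: per-example loss terms from Equation (47);
    has_label_mask: 1.0 where a paired Y exists for the example, else 0.0."""
    return (unsup_loss + has_label_mask * sup_loss).mean()
```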
## F VISUALIZATIONS

Here we provide some visualizations of the Fashion MNIST tasks. In Figure 8, we show a trained 2D CEB latent representation of Fashion MNIST. The model learned to locate closely related concepts together, including the cluster of "shirt" classes near the center, and the cluster of "shoe" classes toward the lower right. In spite of the restriction to 2 dimensions, this model achieves \(\sim 92\%\) on the test set.

In Figure 9, the 10,000 test images and their 10,000 adversaries are shown for four of the models. It is easy to see at a glance that the CEB model organizes all of the adversaries into the "trousers" class, with a crisp division between the true examples and the adversaries. In contrast, the two VIB models have adversaries mixed throughout. However, all three models are clearly preferable to the deterministic model, which has all of the adversaries mixed into the "trousers" class with no ability to distinguish between adversaries and true examples.

<--- Page Split ---> ![](images/22_0.jpg) <center>Figure 8: A trained 2D latent space for a Fashion MNIST CEB model.</center> <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 9: All adversarial images sorted by predicted class, showing the difference between robust and non-robust models. Each predicted class is sorted by the model's rate, \(R\) (\(H\) is used for \(\mathbf{d}\)), from low to high. Images with a red bar along their top are adversarial. \(\mathbf{a}\) is CEB, \(\mathbf{b}\) is \(\mathrm{VIB}_{0.5}\), \(\mathbf{c}\) is \(\mathrm{VIB}_{0.01}\), \(\mathbf{d}\) is Determ.</center> <--- Page Split --->
reject
Reject
4.666667
ICLR_2019_paper_0244
iclr
2,019
# LEARNING ROBUST REPRESENTATIONS BY PROJECTING SUPERFICIAL STATISTICS OUT Haohan Wang Carnegie Mellon University Pittsburgh, PA USA haohanw@cs.cmu.edu Zexue He Beijing Normal University Beijing, China zexueh@mail.bnu.edu.cn Zachary C. Lipton Carnegie Mellon University Pittsburgh, PA, USA zlipton@cmu.edu Eric P. Xing Carnegie Mellon University Pittsburgh, PA, USA epxing@cs.cmu.edu

## ABSTRACT

Despite impressive performance as evaluated on i.i.d. holdout data, deep neural networks depend heavily on superficial statistics of the training data and are liable to break under distribution shift. For example, subtle changes to the background or texture of an image can break a seemingly powerful classifier. Building on previous work on domain generalization, we hope to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training. This setting is challenging because the model may extract many distribution-specific (superficial) signals together with distribution-agnostic (semantic) signals. To overcome this challenge, we incorporate the gray-level co-occurrence matrix (GLCM) to extract patterns that our prior knowledge suggests are superficial: they are sensitive to texture but unable to capture the gestalt of an image. Then we introduce two techniques for improving our networks' out-of-sample performance. The first method is built on the reverse gradient method, which pushes our model to learn representations from which the GLCM representation is not predictable. The second method is built on the independence introduced by projecting the model's representation onto the subspace orthogonal to the GLCM representation's. We test our method on a battery of standard domain generalization datasets and, interestingly, achieve comparable or better performance compared to other domain generalization methods that explicitly require samples from the target distribution for training.

## 1 INTRODUCTION

Imagine training an image classifier to recognize facial expressions. In the training data, while all images labeled "smile" may actually depict smiling people, the "smile" label might also be correlated with other aspects of the image. For example, people might tend to smile more often while outdoors, and to frown more in airports. In the future, we might encounter photographs with previously unseen backgrounds, and thus we prefer models that rely as little as possible on the superficial signal.

The problem of learning classifiers robust to distribution shift, commonly called Domain Adaptation (DA), has a rich history. Under restrictive assumptions, such as covariate shift (Shimodaira, 2000; Gretton et al., 2009) and label shift (also known as target shift or prior probability shift) (Storkey, 2009; Schölkopf et al., 2012; Zhang et al., 2013; Lipton et al., 2018), principled methods exist for estimating the shifts and retraining under the importance-weighted ERM framework. Other papers bound worst-case performance under bounded shifts as measured by divergence measures on the train vs. test distributions (Ben-David et al., 2010a; Mansour et al., 2009; Hu et al., 2016).

While many impossibility results for DA have been proven (Ben-David et al., 2010b), humans nevertheless exhibit a remarkable ability to function out-of-sample, even when confronting dramatic distribution shift. <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Example illustration of train/validation/test data.
The first row is the "happiness" sentiment and the second row is the "sadness" sentiment. The background and sentiment labels are correlated in the training and validation sets, but independent in the testing set.</center>

Few would doubt that given photographs of smiling and frowning astronauts on the Martian plains, we could (mostly) agree upon the correct labels. While we lack a mathematical description of how precisely humans are able to generalize so easily out-of-sample, we can often point to certain classes of perturbations that should not affect the semantics of an image. For example, for many tasks, we know that the background should not influence the predictions made about an image. Similarly, other superficial statistics of the data, such as textures or subtle coloring changes, should not matter. The essential assumption of this paper is that by making our model depend less on known superficial aspects, we can push the model to rely more on the difference that makes a difference. This paper focuses on visual applications, and we focus on high-frequency textural information as the relevant notion of superficial statistics that we do not want our model to depend upon.

The contribution of this paper can be summarized as follows.

- We propose a new differentiable neural network building block (neural gray-level co-occurrence matrix) that captures textural information only from images, without modeling the lower-frequency semantic information that we care about (Section 3.1).
- We propose an architecture-agnostic, parameter-free method that is designed to discard this superficial information (Section 3.2).
- We introduce two synthetic datasets for DA/DG studies that are more challenging than the regular DA/DG scenario, in the sense that the domain-specific information is correlated with the semantic information. Figure 1 is a toy example (Section 4).

## 2 RELATED WORK IN DOMAIN ADAPTATION AND DOMAIN GENERALIZATION

Domain generalization (DG) (Muandet et al., 2013) is a variation on DA, where samples from the target domain are not available during training. In reality, datasets may contain data cobbled together from many sources, but where those sources are not labeled. For example, a common assumption used to be that there is one and only one distribution for each dataset collected, but Wang et al. (2016) noticed that in video sentiment analysis, the data sources varied considerably even within the same dataset due to heterogeneous data sources and collection practices.

Domain adaptation (Bridle & Cox, 1991; Ben-David et al., 2010a), and (more broadly) transfer learning, have been studied for decades, with antecedents in the classic econometrics work on sample selection bias (Heckman, 1977) and choice models (Manski & Lerman, 1977). For a general primer, we refer the reader to these extensive reviews (Weiss et al., 2016; Csurka, 2017).
Domain generalization (Muandet et al., 2013) is relatively new, but has also been studied extensively, covering a wide spectrum of techniques from kernel methods (Muandet et al., 2013; Niu et al., 2015; Erfani et al., 2016; Li et al., 2017c) to more recent end-to-end deep learning methods, where the methods mostly fall into two categories: reducing the inter-domain differences of representations through adversarial (or similar) techniques (Ghifary et al., 2015; Wang et al., 2016; Motiian et al., 2017; Li et al., 2018; Carlucci et al., 2018), or building an ensemble of one-for-each-domain deep models and then fusing the representations together (Ding & Fu, 2018; Mancini et al., 2018). Meta-learning techniques have also been explored (Li et al., 2017b). Related studies have also been conducted under the name "zero shot domain adaptation", e.g., Kumagai & Iwata (2018).

<--- Page Split ---> ![](images/2_0.jpg) <center>Figure 2: Introduction of the Neural Gray-Level Co-occurrence Matrix (NGLCM) and HEX.</center>

## 3 METHOD

In this section, we introduce our main technical contributions. We first introduce our new differentiable neural building block, NGLCM, which is designed to capture textural but not semantic information from images, and then introduce our technique for excluding this textural information.

### 3.1 NEURAL GRAY-LEVEL CO-OCCURRENCE MATRIX FOR SUPERFICIAL INFORMATION

Our goal is to design a neural building block that 1) has enough capacity to extract the textural information from an image, and 2) is not capable of extracting semantic information. We consulted classic computer vision techniques for inspiration, and extensive experimental evidence (Appendix A1) suggested that the gray-level co-occurrence matrix (GLCM) (Haralick et al., 1973; Lam, 1996) may suit our goal. The idea of GLCM is to count the number of pixel pairs along a certain direction (common direction choices are \(0^{\circ}\), \(45^{\circ}\), \(90^{\circ}\), and \(135^{\circ}\)). For example, consider an image \(A \in \mathcal{M}^{m \times m}\), where \(\mathcal{M}\) denotes the set of all possible pixel values. The GLCM of \(A\) under the \(0^{\circ}\) direction (horizontally right) will be a \(|\mathcal{M}| \times |\mathcal{M}|\) matrix (denoted by \(G\)) defined as follows:

\[G_{k,l} = \sum_{i = 0}^{m - 1}\sum_{j = 0}^{m - 2}I(A_{i,j} = k)\,I(A_{i,j + 1} = l) \quad (1)\]

where \(|\mathcal{M}|\) stands for the cardinality of \(\mathcal{M}\), \(I(\cdot)\) is the indicator function, \(i,j\) are indices of \(A\), and \(k,l\) are pixel values of \(A\) as well as indices of \(G\).
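For concreteness, a minimal sketch of the classic (non-differentiable) counting version of Equation 1 for the \(0^{\circ}\) direction (our illustrative code, assuming integer pixel levels):

```python
import numpy as np

def glcm_0deg(img, num_levels=16):
    """Count horizontal-right pixel pairs: G[k, l] += 1 whenever
    A[i, j] == k and A[i, j+1] == l. img holds integer levels in
    [0, num_levels); returns the |M| x |M| co-occurrence matrix."""
    g = np.zeros((num_levels, num_levels), dtype=np.int64)
    left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
    np.add.at(g, (left, right), 1)
    return g

# Mapping 256 pixel levels down to 16, as done for NGLCM in the experiments:
# img16 = (img256 // 16).astype(np.int64)
```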
We design a new neural network building block that resembles GLCM but whose parameters are differentiable, having (sub)gradients everywhere, and thus tunable through backpropagation. We first flatten \(A\) into a row vector \(a \in \mathcal{M}^{1 \times m^{2}}\). The first observation we made is that counting the pixel pairs \((p_{k}, p_{l})\) in Equation 1 is equivalent to counting the pairs \((p_{k}, \Delta p)\), where \(\Delta p = p_{k} - p_{l}\). Therefore, we first generate a vector \(d\) by multiplying \(a\) with a matrix \(D\), where \(D\) is designed according to the direction of the GLCM. For example, \(D\) in the \(0^{\circ}\) case will be an \(m^{2} \times m^{2}\) matrix such that \(D_{i,i} = 1\), \(D_{i,i + 1} = -1\), and 0 elsewhere.

<--- Page Split ---> To count the elements in \(a\) and \(d\) with a differentiable operation, we introduce two sets of parameters \(\phi_{a} \in \mathcal{B}^{|\mathcal{N}| \times 1}\) and \(\phi_{b} \in \mathcal{B}^{|\mathcal{N}| \times 1}\) as the tunable parameters for this building block, so that:

\[G = s(a;\phi_{a})s^{T}(d;\phi_{b}) \quad (2)\]

where \(s(\cdot)\) is a thresholding function defined as:

\[s(a;\phi_{a}) = \min (\max (a\odot \phi_{a},0),1)\]

where \(\odot\) denotes subtraction with the broadcasting mechanism, yielding both \(s(a;\phi_{a})\) and \(s(d;\phi_{b})\) as \(|\mathcal{N}| \times m^{2}\) matrices. As a result, \(G\) is an \(|\mathcal{N}| \times |\mathcal{N}|\) matrix. The design rationale is that, with an extra constraint requiring \(\phi\) to take only the unique values in the set \(\{n - \epsilon \mid n\in \mathcal{N}\}\), where \(\epsilon\) is a small number, \(G\) in Equation 2 will be equivalent to the GLCM extracted with the old counting technique, subject to permutation and scale. Also, all of the operations used in the construction of \(G\) have (sub)gradients, and therefore all the parameters are tunable with backpropagation. In practice, we drop the extra constraint on \(\phi\) for simplicity of computation. Our preliminary experiments suggested that for our purposes it is sufficient to first map standard images with 256 pixel levels to images with 16 pixel levels, which reduces the number of parameters of NGLCM (\(|\mathcal{N}| = 16\)).

### 3.2 HEX

We first introduce notation to represent the neural network. We use \(\langle X,y\rangle\) to denote a dataset of inputs \(X\) and corresponding labels \(y\). We use \(h(\cdot ;\theta)\) and \(f(\cdot ;\xi)\) to denote the bottom and top components of a neural network. A conventional neural network architecture will use \(f(h(X;\theta);\xi)\) to generate an output \(F\) and then calculate the argmax to yield the prediction label. Besides the conventional \(f(h(X;\theta);\xi)\), we introduce another component:

\[g(X;\phi) = \sigma_{m}((s(a;\phi_{a})s^{T}(d;\phi_{b}))W_{m} + b_{m})\]

where \(\phi = \{\phi_{a},\phi_{b},W_{m},b_{m}\}\), \(s(a;\phi_{a})s^{T}(d;\phi_{b})\) is introduced in the previous section, and \(\{W_{m},b_{m},\sigma_{m}\}\) (weights, biases, and activation function) form a standard MLP. With the introduction of \(g(\cdot ;\phi)\), the final classification layer turns into \(f([h(X;\theta),g(X;\phi)];\xi)\) (where we use \([\cdot ,\cdot ]\) to denote concatenation).

Now, with the representation learned from the raw data by \(h(\cdot ;\theta)\) and the textural representation learned by \(g(\cdot ;\phi)\), the next question is how to force \(f(\cdot ;\xi)\) to predict with a transformed representation from \(h(\cdot ;\theta)\) that is, in some sense, independent of the superficial representation captured by \(g(\cdot ;\phi)\). To illustrate the following ideas, we first introduce three different outputs from the final layer:

\[\begin{array}{r l} & {F_{A} = f([h(X;\theta),g(X;\phi)];\xi)}\\ & {F_{G} = f([\mathbf{0},g(X;\phi)];\xi)}\\ & {F_{P} = f([h(X;\theta),\mathbf{0}];\xi)} \end{array} \quad (3)\]

where \(F_{A}\), \(F_{G}\), and \(F_{P}\) stand for the results from both representations (concatenated), from only the textural information (the raw-data representation replaced by the zero vector), and from only the raw data (the textural representation replaced by the zero vector), respectively. \(\mathbf{0}\) stands for a padding matrix of all zeros, whose shape can be inferred from context.
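A minimal forward-pass sketch of NGLCM under these definitions (our illustrative NumPy code; in the paper \(\phi_a\) and \(\phi_b\) are trained by backpropagation, and the wrap-around at the image edge introduced by `np.roll` is ignored for brevity):

```python
import numpy as np

def s(v, phi):
    # s(v; phi) = min(max(v - phi, 0), 1), broadcast to an (N, m^2) matrix.
    return np.clip(v - phi, 0.0, 1.0)

def nglcm_forward(img, phi_a, phi_b):
    """Equation (2) for the 0-degree direction. img: (m, m) pixel levels;
    phi_a, phi_b: (N, 1) parameters. Returns G, an (N, N) matrix."""
    a = img.reshape(1, -1).astype(np.float64)   # flattened image, (1, m^2)
    d = a - np.roll(a, -1, axis=1)              # a D: horizontal differences p_k - p_l
    return s(a, phi_a) @ s(d, phi_b).T
```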
Several heuristics have been proposed to force a network to "forget" some part of a representation, such as adversarial training (Ganin et al., 2016) or information-theoretic regularization (Moyer et al., 2018). In a similar spirit, our first proposed solution is to adopt the reverse gradient idea (Ganin et al., 2016) to train \(F_{P}\) to be predictive of the semantic labels \(y\) while forcing \(F_{P}\) to be invariant to \(F_{G}\). Later, we refer to this method as ADV. When we use a multilayer perceptron (MLP) to try to predict \(g(X;\phi)\) from \(h(X;\theta)\) and update the primary model to fool the MLP via the reverse gradient, we refer to the model as ADVE.

Additionally, we introduce a simple alternative. Our idea lies in the fact that, in an affine space, to find a transformation of representation \(A\) that is least explainable by some other representation \(B\), a straightforward method is to project \(A\) with a projection matrix constructed from \(B\) (sometimes referred to as the residual maker matrix). To utilize this linear property, we choose to work in the space of \(F\) generated by \(f(\cdot ;\xi)\) right before the final argmax function. Projecting \(F_{A}\) with

\[F_{L} = (I - F_{G}(F_{G}^{T}F_{G})^{-1}F_{G}^{T})F_{A} \quad (4)\]

will yield \(F_{L}\) for parameter tuning. All the parameters \(\xi ,\phi ,\theta\) can be trained simultaneously (more relevant discussion in Section 5). At testing time, \(F_{P}\) is used. Due to limited space, we leave the following topics to the Appendix: 1) the rationale for this approach (A2.1); 2) what to do in cases when \(F_{G}^{T}F_{G}\) is not invertible (A2.2). This method is referred to as HEX.

Two alternative forms of our algorithm are also worth mentioning: 1) During training, one can tune an extra hyperparameter \(\lambda\) through

\[l(\arg \max F_{L},y) + \lambda l(\arg \max F_{G},y)\]

to ensure that the NGLCM component is learning superficial representations that are related to the present task, where \(l(\cdot ,\cdot)\) is a generic loss function. 2) During testing, one can use \(F_{L}\), although this requires evaluating the NGLCM component at prediction time and thus is slightly slower. We experimented with these three forms on our synthetic datasets and did not observe significant differences in performance, so we adopt the fastest form as the main one. Empirically, we also notice that it is helpful to make sure the textural representation \(g(X;\phi)\) and the raw-data representation \(h(X;\theta)\) are of the same scale for HEX to work, so we column-wise normalize these two representations in every minibatch.
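A minimal sketch of the HEX projection in Equation 4 (our illustrative code; we use `np.linalg.pinv` so the sketch also covers the non-invertible case deferred to Appendix A2.2):

```python
import numpy as np

def hex_projection(F_A, F_G):
    """F_L = (I - F_G (F_G^T F_G)^{-1} F_G^T) F_A: project the joint logits
    F_A onto the subspace orthogonal to the column space of the texture-only
    logits F_G (the 'residual maker' matrix from linear regression)."""
    P = F_G @ np.linalg.pinv(F_G.T @ F_G) @ F_G.T
    return F_A - P @ F_A
```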
## 4 EXPERIMENTS

To show the effectiveness of our proposed method, we conduct a range of experiments, evaluating HEX's resilience against dataset shift. To form intuition, we first examine NGLCM and HEX separately with two basic tests; then we evaluate on two synthetic datasets, one in which dataset shift is introduced at the semantic level and another at the raw-feature level. We finally evaluate on two other standard domain generalization datasets to compare with the state-of-the-art. All these models are trained with ADAM (Kingma & Ba, 2014).

We conducted ablation tests on our two synthetic datasets with two cases: 1) replacing NGLCM with a one-layer MLP (denoted as M); 2) not using HEX/ADV, i.e., training the network with \(F_{A}\) (Equation 3) instead of \(F_{L}\) (Equation 4) (denoted as N). We also experimented with the two alternative forms of HEX: 1) with \(F_{G}\) in the loss and \(\lambda = 1\) (referred to as HEX-ADV); 2) predicting with \(F_{L}\) (referred to as HEX-ALL). We also compare with a popular DG method (DANN (Ganin et al., 2016)) and another method called information dropout (Achille & Soatto, 2018).

### 4.1 SYNTHETIC EXPERIMENTS FOR BASIC PERFORMANCE TESTS

#### 4.1.1 NGLCM ONLY EXTRACTS TEXTURAL INFORMATION

To show that NGLCM only extracts textural information, we trained the network with a mixture of four digit recognition datasets: MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), MNIST-M (Ganin & Lempitsky, 2014), and USPS (Denker et al., 1989). We compared NGLCM with a single-layer MLP. The parameters are trained to minimize the prediction risk of digits (instead of domain). We extracted the representations of NGLCM and the MLP and used these representations as features to test the five-fold cross-validated Naïve Bayes classifier's accuracy of predicting digit and domain. With two choices of learning rates, we repeated this for every epoch through 100 epochs of training and report the mean and standard deviation over the 100 epochs in Table 1: while the MLP and NGLCM perform comparably well in extracting textural information, NGLCM is significantly less useful for recognizing the semantic label.

#### 4.1.2 HEX PROJECTION

To test the effectiveness of HEX, we used extracted SURF (Bay et al., 2006) features (800 dimensions) and GLCM (Lam, 1996) features (256 dimensions) from the Office dataset (Saenko et al., 2010) (31 classes).

<--- Page Split ---> Table 1: Accuracy of domain classification and digit classification <table><tr><td></td><td>Random</td><td>MLP (1e-2)</td><td>NGLCM (1e-2)</td><td>MLP (1e-4)</td><td>NGLCM (1e-4)</td></tr><tr><td>Domain</td><td>0.25</td><td>0.686±0.020</td><td>0.738±0.018</td><td>0.750±0.054</td><td>0.687±0.029</td></tr><tr><td>Label</td><td>0.1</td><td>0.447±0.039</td><td>0.161±0.008</td><td>0.534±0.022</td><td>0.142±0.023</td></tr></table> <table><tr><td>Train</td><td>Test</td><td>Baseline</td><td>HEX</td><td>HEX-ADV</td><td>HEX-ALL</td></tr><tr><td>A, W</td><td>D</td><td>0.405±0.016</td><td>0.343±0.030</td><td>0.343±0.030</td><td>0.216±0.119</td></tr><tr><td>D, W</td><td>A</td><td>0.112±0.008</td><td>0.147±0.004</td><td>0.147±0.004</td><td>0.055±0.004</td></tr><tr><td>A, D</td><td>W</td><td>0.400±0.016</td><td>0.378±0.034</td><td>0.378±0.034</td><td>0.151±0.008</td></tr></table> Table 2: Accuracy on the Office dataset with extracted features. The Baseline refers to the MLP with SURF features. The HEX methods refer to adding another MLP with features extracted by the traditional GLCM method. Because \(D\) and \(W\) are similar domains (the same objects even share the same background), we believe these results favor the HEX method (see Section 4.1.2 for discussion).

We built a two-layer MLP (\(800 \times 256\) and \(256 \times 31\)) as a baseline that only predicts with SURF features. This architecture and the corresponding learning rate are picked to make sure the baseline can converge to a relatively high prediction performance. Then we plugged in the GLCM part with an extra first-layer network of size \(256 \times 32\), and the second layer of the baseline is extended to \(288 \times 31\) to take in the information from GLCM. Then we train the network again with HEX, using the same learning rate. The Office dataset has three different subsets: Webcam (W), Amazon (A), and DSLR (D).
We trained and validated the model on a mixture of two and tested on the third one. We ran five experiments and report the averaged accuracy with standard deviation in Table 2. These performances are not comparable to the state-of-the-art because they are based on extracted features.

At first glance, one may frown upon the performance of HEX because, out of three configurations, HEX only outperforms the baseline in the setting \(\{W, D\} \to A\). However, a closer look into the datasets gives some promising indications for HEX: we notice that \(W\) and \(D\) are distributed similarly in the sense that objects have similar backgrounds, while \(A\) is distributed distinctly (Appendix A3.1). Therefore, assume that there are two classifiers \(C_1\) and \(C_2\): \(C_1\) can classify objects based on object features and background features, while \(C_2\) can only classify objects based on object features, ignoring background features. \(C_2\) will only perform better than \(C_1\) in the \(\{W, D\} \to A\) case, and will perform worse than \(C_1\) in the other two cases, which is exactly what we observe with HEX.

### 4.2 FACIAL EXPRESSION CLASSIFICATION WITH NUISANCE BACKGROUND

We generated a synthetic dataset extending the Facial Expression Research Group Database (Aneja et al., 2016), which is a dataset of six animated individuals expressing seven different sentiments. For each pair of individual and sentiment, there are over 1000 images. To introduce the data shift, we attach seven different backgrounds to these images. In the training set (50% of the data) and validation set (30% of the data), the background is correlated with the sentiment label with a correlation of \(\rho\); in the testing set (the remaining 20% of the data), the background is independent of the sentiment label. A simpler toy example of the dataset is shown in Figure 1.

In the experiments, we format the resulting images to \(28 \times 28\) grayscale images. We first run the baseline CNN (two convolutional layers and two fully connected layers) to tune the hyperparameters. We chose to run 100 epochs with learning rate 5e-4 because this is when the CNN can converge for all ten synthetic datasets. We then tested the other methods with the same learning rate. The results are shown in Figure 3, with testing accuracy and standard deviation from five repeated experiments. Testing accuracy is reported by the model with the highest validation score. In the figure, we compare the baseline CNN (B), ablation tests (M and N), ADV (A), HEX (H), DANN (G), and InfoDropout (I). Most of these methods perform well when \(\rho\) is small (when the testing distribution is relatively similar to the training distribution). As \(\rho\) increases, most methods' performances decrease, but ADV and HEX behave relatively stably across these ten correlation settings. We also notice that, as the correlation becomes stronger, M deteriorates at a faster pace than the other methods. Intuitively, we believe this is because the MLP learns from both the semantic signal and the superficial signal, leading to inferior performance when HEX projects this signal out. We also notice that ADV and HEX improve the speed of convergence (Appendix A3.2).

<--- Page Split ---> ![](images/6_0.jpg) <center>Figure 3: Averaged testing accuracy and standard deviation of five repeated experiments with different correlation levels on the sentiment-with-nuisance-background data. Notations: baseline CNN (B), ablation tests (M (replacing NGLCM with MLP) and N (training without HEX projection)), ADVE (E), ADV (A), HEX (H), HEX-ADV (V), HEX-ALL (L), DANN (G), and InfoDropout (I).</center>
![](images/6_1.jpg) <center>Figure 4: Averaged testing accuracy and standard deviation of five repeated experiments with different strategies of attaching patterns to MNIST data. Notations: baseline CNN (B), Ablation Tests (M (replacing NGLCM with MLP) and N (training without HEX projection)), ADVE (E), ADV (A), HEX (H), HEX-ADV (V), HEX-ALL (L), DANN (G), and InfoDropout (I).</center>

### 4.3 MITIGATING THE TENDENCY TO LEARN SURFACE STATISTICAL REGULARITIES IN MNIST

As Jo & Bengio (2017) observed, CNNs have a tendency to learn surface statistical regularities: the generalization of CNNs is partially due to the abstraction of high-level semantics of an image, and partially due to surface statistical regularities. Here, we demonstrate the ability of HEX to overcome such tendencies. We followed the radial and random Fourier filtering introduced in Jo & Bengio (2017) to attach surface statistical regularities to the images in MNIST (a code sketch of this filtering step is given at the end of this subsection). There are three different regularities altogether (radial kernel, random kernel, and original image). We attached two of these to the training and validation images and the remaining one to the testing images. We also adopted two strategies for attaching surface patterns to training/validation images: 1) independently: the pattern is independent of the digit, and 2) dependently: images of digits 0-4 have one pattern while images of digits 5-9 have the other pattern. Some examples of this synthetic data are shown in Appendix A3.3.

We used the same learning rate scheduling strategy as in the previous experiment. The results are shown in Figure 4. Figure legends are the same as before. Interestingly, NGLCM and HEX contribute differently across these cases. When the patterns are attached independently, M performs the best overall, but when the patterns are attached dependently, N and HEX perform the best overall. In the most challenging case of these experiments (random-kerneled images as testing, patterns attached dependently), HEX shows a clear advantage. Also, HEX behaves relatively more stably overall.
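For concreteness, here is a minimal NumPy sketch of Fourier-domain filtering in the spirit of Jo & Bengio (2017); the mask shapes, cutoff, normalization, and function names are our own illustrative assumptions rather than the original recipe.

```python
import numpy as np

def radial_mask(size=28, cutoff=8):
    """Radial (low-pass) Fourier kernel: keep frequencies inside a disk."""
    yy, xx = np.mgrid[:size, :size]
    r = np.hypot(yy - size / 2, xx - size / 2)
    return (r <= cutoff).astype(float)

def random_mask(size=28, keep=0.5, seed=0):
    """Random Fourier kernel: keep a random subset of frequencies."""
    rng = np.random.default_rng(seed)
    return (rng.random((size, size)) < keep).astype(float)

def attach_pattern(img, mask):
    """Filter a [0, 1] grayscale image through a Fourier-domain mask."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.clip(np.abs(filtered), 0.0, 1.0)

# "Dependent" strategy: digits 0-4 get one kernel, digits 5-9 the other, e.g.
# masked = attach_pattern(img, radial_mask() if label <= 4 else random_mask())
```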
<--- Page Split ---> Table 3: Accuracy on the MNIST-Rotation data set <table><tr><td>Test</td><td>CAE</td><td>MTAE</td><td>CCSA</td><td>DANN</td><td>Fusion</td><td>LabelGrad</td><td>CrossGrad</td><td>HEX</td><td>ADV</td></tr><tr><td>M0°</td><td>72.1</td><td>82.5</td><td>84.6</td><td>86.7</td><td>85.6</td><td>89.7</td><td>88.3</td><td>90.1</td><td>89.9</td></tr><tr><td>M15°</td><td>95.3</td><td>96.3</td><td>95.6</td><td>98.0</td><td>95.0</td><td>97.8</td><td>98.6</td><td>98.9</td><td>98.6</td></tr><tr><td>M30°</td><td>92.6</td><td>93.4</td><td>94.6</td><td>97.8</td><td>95.6</td><td>98.0</td><td>98.0</td><td>98.9</td><td>98.8</td></tr><tr><td>M45°</td><td>81.5</td><td>78.6</td><td>82.9</td><td>97.4</td><td>95.5</td><td>97.1</td><td>97.7</td><td>98.8</td><td>98.7</td></tr><tr><td>M60°</td><td>92.7</td><td>94.2</td><td>94.8</td><td>96.9</td><td>95.9</td><td>96.6</td><td>97.7</td><td>98.3</td><td>98.6</td></tr><tr><td>M75°</td><td>79.3</td><td>80.5</td><td>82.1</td><td>89.1</td><td>84.3</td><td>92.1</td><td>91.4</td><td>90.0</td><td>90.4</td></tr><tr><td>Avg</td><td>85.6</td><td>87.6</td><td>89.1</td><td>94.3</td><td>92.0</td><td>95.2</td><td>95.3</td><td>95.8</td><td>95.2</td></tr></table>

### 4.4 MNIST WITH ROTATION AS DOMAIN

We continue by comparing HEX with other state-of-the-art DG methods (which use distribution labels) on popular DG data sets. We experimented with the MNIST-rotation data set, on which many DG methods have been tested. The images are rotated by different degrees to create different domains. We followed the approach introduced by Ghifary et al. (2015). To reiterate: we randomly sampled a set \(\mathcal{M}\) of 1000 images out of MNIST (100 for each label). Then we rotated the images in \(\mathcal{M}\) counter-clockwise by different degrees to create data in other domains, denoted by \(\mathcal{M}_{15^{\circ}}\), \(\mathcal{M}_{30^{\circ}}\), \(\mathcal{M}_{45^{\circ}}\), \(\mathcal{M}_{60^{\circ}}\), \(\mathcal{M}_{75^{\circ}}\). With the original set, denoted by \(\mathcal{M}_{0^{\circ}}\), there are six domains altogether. We compared the performance of HEX/ADV with several methods tested on this data, including CAE (Rifai et al., 2011), MTAE (Ghifary et al., 2015), CCSA (Motiian et al., 2017), DANN (Ganin et al., 2016), Fusion (Mancini et al., 2018), LabelGrad, and CrossGrad (Shankar et al., 2018). The results are shown in Table 3: HEX is only inferior to previous methods in one case and leads in average performance overall.

### 4.5 PACS: GENERALIZATION IN PHOTO, ART, CARTOON, AND SKETCH

Finally, we tested on the PACS data set (Li et al., 2017a), which consists of collections of images of seven different objects over four domains: photo, art painting, cartoon, and sketch. Following Li et al. (2017a), we used AlexNet as the baseline method and built HEX upon it. We met some optimization difficulties in directly training AlexNet on the PACS data set with HEX, so we used a heuristic training approach: we first fine-tuned an AlexNet pretrained on ImageNet with PACS data of the training domains, without plugging in NGLCM and HEX; then we used HEX and NGLCM to further train the top classifier of AlexNet while the weights of the bottom layers were kept fixed. This heuristic training procedure allows us to tune the AlexNet for only 10 epochs and train the top-layer classifier for 100 epochs (roughly 600 seconds on our server for each testing case).
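A minimal PyTorch sketch of this two-stage procedure is given below, assuming a torchvision AlexNet; the `nglcm(.)` textural block, the `pacs_loader`, the 16-level NGLCM output size, and the branch widths are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torchvision

# Stage 1: fine-tune a pretrained AlexNet on the PACS training domains with a
# plain cross-entropy loss (no NGLCM, no HEX), for roughly 10 epochs.
alexnet = torchvision.models.alexnet(weights="IMAGENET1K_V1")
alexnet.classifier[-1] = nn.Linear(4096, 7)        # 7 PACS object classes
# ... standard fine-tuning loop omitted ...

# Stage 2: freeze the fine-tuned trunk and train only a HEX head on top.
for p in alexnet.parameters():
    p.requires_grad = False
trunk = nn.Sequential(alexnet.features, alexnet.avgpool, nn.Flatten(),
                      *alexnet.classifier[:-1])    # frozen 4096-d features

glcm_branch = nn.Linear(16 * 16, 32)               # g(X; phi) over the NGLCM output
top = nn.Linear(4096 + 32, 7)                      # f(.; xi)
opt = torch.optim.Adam(list(glcm_branch.parameters()) + list(top.parameters()),
                       lr=5e-4)

for images, labels in pacs_loader:                 # pacs_loader, nglcm(.) are assumed
    with torch.no_grad():
        h = trunk(images)
    g = glcm_branch(nglcm(images))
    f_a = top(torch.cat([h, g], dim=1))
    f_g = top(torch.cat([torch.zeros_like(h), g], dim=1))
    f_l = f_a - f_g @ torch.linalg.solve(f_g.t() @ f_g, f_g.t() @ f_a)  # Equation 4
    loss = nn.functional.cross_entropy(f_l, labels)
    opt.zero_grad(); loss.backward(); opt.step()
```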
We compared HEX/ADV with the following methods that have been tested on PACS: AlexNet (directly fine-tuning the pretrained AlexNet on PACS training data (Li et al., 2017a)), DSN (Bousmalis et al., 2016), L-CNN (Li et al., 2017a), MLDG (Li et al., 2017b), and Fusion (Mancini et al., 2018). Notice that most of the competing methods (DSN, L-CNN, MLDG, and Fusion) have explicit knowledge about the domain identification of the training images. The results are shown in Table 4. Impressively, HEX is only slightly shy of Fusion in terms of overall performance. Fusion is a method that involves three different AlexNets, one for each training domain, and a fusion layer to combine the representations for prediction. The Fusion model is roughly three times bigger than HEX, since the extra NGLCM component used by HEX is negligible in comparison to AlexNet in terms of model complexity. Interestingly, HEX achieves impressively high performance when the testing domain is Art painting or Cartoon, while Fusion is good at prediction for Photo and Sketch.

<--- Page Split ---> Table 4: Testing Accuracy on PACS <table><tr><td>Test Domain</td><td>AlexNet</td><td>DSN</td><td>L-CNN</td><td>MLDG</td><td>Fusion</td><td>HEX</td><td>ADV</td></tr><tr><td>Art</td><td>63.3</td><td>61.1</td><td>62.8</td><td>63.6</td><td>64.1</td><td>66.8</td><td>64.9</td></tr><tr><td>Cartoon</td><td>63.1</td><td>66.5</td><td>66.9</td><td>63.4</td><td>66.8</td><td>69.7</td><td>69.6</td></tr><tr><td>Photo</td><td>87.7</td><td>83.2</td><td>89.5</td><td>87.8</td><td>90.2</td><td>87.9</td><td>88.2</td></tr><tr><td>Sketch</td><td>54.0</td><td>58.5</td><td>57.5</td><td>54.9</td><td>60.1</td><td>56.3</td><td>55.5</td></tr><tr><td>Average</td><td>67.0</td><td>67.3</td><td>69.2</td><td>67.4</td><td>70.3</td><td>70.2</td><td>69.5</td></tr></table>

## 5 DISCUSSION AND CONCLUSION

We introduced two novel components: NGLCM, which only extracts textural information from an image, and HEX, which projects the textural information out and forces the model to focus on semantic information. Limitations still exist. For example, NGLCM cannot be completely free of the semantic information of an image. As a result, if we apply our method on the standard MNIST data set, we will see a slight drop in performance because NGLCM also learns some semantic information, which is then projected out. Also, training all the model parameters simultaneously may lead to a trivial solution where \(F_{G}\) (in Equation 3) learns garbage information and HEX degenerates to the baseline model. To overcome these limitations, we invented several training heuristics, such as optimizing \(F_{P}\) and \(F_{G}\) sequentially and then fixing some weights. However, we did not report results with training heuristics (except for the PACS experiment) because we hope to keep the method simple. Another limitation we observe is that the training performance of HEX sometimes fluctuates dramatically during training; fortunately, the model picked by the highest validation accuracy generally performs better than competing methods. Despite these limitations, we still achieved impressive performance on both synthetic and popular DG data sets.

## REFERENCES

Alessandro Achille and Stefano Soatto. Information dropout: Learning optimal representations through noisy computation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.

Deepali Aneja, Alex Colburn, Gary Faigin, Linda Shapiro, and Barbara Mones. Modeling stylized character expressions via deep learning. In Asian Conference on Computer Vision, pp. 136-153.
Springer, 2016.

Herbert Bay, Tinne Tuytelaars, and Luc Van Gool. SURF: Speeded up robust features. In European Conference on Computer Vision, pp. 404-417. Springer, 2006.

Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine Learning, 79(1):151-175, 2010a.

Shai Ben-David, Tyler Lu, Teresa Luu, and Dávid Pál. Impossibility theorems for domain adaptation. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2010b.

Christopher M. Bishop. Neural networks for pattern recognition. Oxford University Press, 1995.

Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. Domain separation networks. In Advances in Neural Information Processing Systems, pp. 343-351, 2016.

John S Bridle and Stephen J Cox. RecNorm: Simultaneous normalisation and classification applied to speech recognition. In Advances in Neural Information Processing Systems, pp. 234-240, 1991.

Fabio M Carlucci, Paolo Russo, Tatiana Tommasi, and Barbara Caputo. Agnostic domain generalization. arXiv preprint arXiv:1808.01102, 2018.

Gabriela Csurka. Domain adaptation for visual applications: A comprehensive survey. arXiv preprint arXiv:1702.05374, 2017.

John S Denker, WR Gardner, Hans Peter Graf, Donnie Henderson, Richard E Howard, W Hubbard, Lawrence D Jackel, Henry S Baird, and Isabelle Guyon. Neural network recognizer for handwritten zip code digits. In Advances in Neural Information Processing Systems, pp. 323-331, 1989.

<--- Page Split ---> Zhengming Ding and Yun Fu. Deep domain generalization with structured low-rank constraint. IEEE Transactions on Image Processing, 27(1):304-313, 2018.

Sarah Erfani, Mahsa Baktashmotlagh, Masoud Moshtaghi, Vinh Nguyen, Christopher Leckie, James Bailey, and Ramamohanarao Kotagiri. Robust domain generalisation by enforcing distribution invariance. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pp. 1455-1461. AAAI Press/International Joint Conferences on Artificial Intelligence, 2016.

Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495, 2014.

Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030, 2016.

Muhammad Ghifary, W Bastiaan Kleijn, Mengjie Zhang, and David Balduzzi. Domain generalization for object recognition with multi-task autoencoders. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2551-2559, 2015.

Arthur Gretton, Alexander J Smola, Jiayuan Huang, Marcel Schmittfull, Karsten M Borgwardt, and Bernhard Schölkopf. Covariate shift by kernel mean matching. Journal of Machine Learning Research, 2009.

Robert M Haralick, Karthikeyan Shanmugam, et al. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 1973.

Dong-Chen He and Li Wang. Texture unit, texture spectrum, and texture analysis. IEEE Transactions on Geoscience and Remote Sensing, 28(4):509-512, 1990.

James J Heckman. Sample selection bias as a specification error (with an application to the estimation of labor supply functions), 1977.

Weihua Hu, Gang Niu, Issei Sato, and Masashi Sugiyama. Does distributionally robust supervised learning give robust classifiers?
arXiv preprint arXiv:1611.02041, 2016.

Jason Jo and Yoshua Bengio. Measuring the tendency of CNNs to learn surface statistical regularities. arXiv preprint arXiv:1711.11561, 2017.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Atsutoshi Kumagai and Tomoharu Iwata. Zero-shot domain adaptation without domain semantic descriptors. arXiv preprint arXiv:1807.02927, 2018.

SW-C Lam. Texture feature extraction using gray level gradient based co-occurrence matrices. In Systems, Man, and Cybernetics, 1996, IEEE International Conference on, volume 1, pp. 267-271. IEEE, 1996.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 5543-5551. IEEE, 2017a.

Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Learning to generalize: Meta-learning for domain generalization. arXiv preprint arXiv:1710.03463, 2017b.

Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2018.

Wen Li, Zheng Xu, Dong Xu, Dengxin Dai, and Luc Van Gool. Domain generalization and adaptation using low rank exemplar SVMs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017c.

<--- Page Split ---> Zachary C Lipton, Yu-Xiang Wang, and Alex Smola. Detecting and correcting for label shift with black box predictors. In International Conference on Machine Learning (ICML), 2018.

Massimiliano Mancini, Samuel Rota Bulò, Barbara Caputo, and Elisa Ricci. Best sources forward: domain generalization through source-specific nets. arXiv preprint arXiv:1806.05810, 2018.

Charles F Manski and Steven R Lerman. The estimation of choice probabilities from choice based samples. Econometrica: Journal of the Econometric Society, 1977.

Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learning bounds and algorithms. arXiv preprint arXiv:0902.3430, 2009.

Saeid Motiian, Marco Piccirilli, Donald A Adjeroh, and Gianfranco Doretto. Unified deep supervised domain adaptation and generalization. In The IEEE International Conference on Computer Vision (ICCV), volume 2, pp. 3, 2017.

Daniel Moyer, Shuyang Gao, Rob Brekelmans, Greg Ver Steeg, and Aram Galstyan. Evading the adversary in invariant representation. arXiv preprint arXiv:1805.09458, 2018.

Krikamol Muandet, David Balduzzi, and Bernhard Schölkopf. Domain generalization via invariant feature representation. In International Conference on Machine Learning, pp. 10-18, 2013.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.

Li Niu, Wen Li, and Dong Xu. Multi-view domain generalization for visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4193-4201, 2015.

Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on Machine Learning, pp. 833-840. Omnipress, 2011.
Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European Conference on Computer Vision, pp. 213-226. Springer, 2010.

Bernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. On causal and anticausal learning. In International Conference on Machine Learning (ICML-12), pp. 459-466. Omnipress, 2012.

Shiv Shankar, Vihari Piratla, Soumen Chakrabarti, Siddhartha Chaudhuri, Preethi Jyothi, and Sunita Sarawagi. Generalizing across domains via cross-gradient training. arXiv preprint arXiv:1804.10745, 2018.

Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 2000.

Amos Storkey. When training and test sets are different: characterizing learning transfer. Dataset Shift in Machine Learning, 2009.

Haohan Wang, Aaksha Meghawat, Louis-Philippe Morency, and Eric P Xing. Select-additive learning: Improving generalization in multimodal sentiment analysis. arXiv preprint arXiv:1609.05244, 2016.

Haohan Wang, Bryon Aragam, and Eric P. Xing. Variable selection in heterogeneous datasets: A truncated-rank sparse linear mixed model with applications to genome-wide association studies. In Bioinformatics and Biomedicine (BIBM), 2017 IEEE International Conference on, 2017.

Karl Weiss, Taghi M Khoshgoftaar, and DingDing Wang. A survey of transfer learning. Journal of Big Data, 3(1):9, 2016.

Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, and Zhikun Wang. Domain adaptation under target and conditional shift. In International Conference on Machine Learning, 2013.

<--- Page Split --->

## APPENDIX

## A1 REASONS TO CHOOSE GLCM

To search the classical computer vision literature for a method that extracts more textural information and less semantic information, we experimented with three classical computer vision techniques, SURF (Bay et al., 2006), LBP (He & Wang, 1990), and GLCM (Haralick et al., 1973), on several different data sets: 1) a mixture of four digit data sets (MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), MNIST-M (Ganin & Lempitsky, 2014), and USPS (Denker et al., 1989)), where the semantic task is to recognize the digit and the textural task is to classify which data set the image is from; 2) a rotated MNIST data set with 10 different rotations, where the semantic task is to recognize the digit and the textural task is to classify the degree of rotation; 3) an MNIST data set in which each image is randomly attached one of 10 different types of radial kernels, for which the semantic task is to recognize the digit and the textural task is to classify the kernel. The comparison protocol behind Table A1 is sketched below.
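A minimal sketch of this protocol, assuming recent scikit-image and scikit-learn versions; SURF is omitted here because it requires OpenCV's non-free module, and the quantization, histogram, and normalization choices are our own illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def glcm_features(img):
    """Flattened 16-level GLCM at offset 1, direction 0 degrees."""
    img16 = img.astype(np.uint8) // 16            # quantize 256 -> 16 gray levels
    return graycomatrix(img16, distances=[1], angles=[0], levels=16).ravel()

def lbp_features(img, p=8, r=1):
    """Histogram of uniform local binary patterns."""
    codes = local_binary_pattern(img, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def cv_accuracy(images, labels, extractor):
    """Five-fold cross-validated Naive Bayes accuracy on extracted features."""
    feats = np.stack([extractor(im) for im in images])
    return cross_val_score(GaussianNB(), feats, labels, cv=5).mean()

# cv_accuracy(images, digit_labels, glcm_features) vs.
# cv_accuracy(images, domain_labels, glcm_features) gives the semantic/textural contrast.
```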
Table A1: Accuracy in classifying semantic and superficial information <table><tr><td></td><td></td><td>LBP</td><td>SURF</td><td>GLCM</td></tr><tr><td rowspan="2">Digits</td><td>Semantic</td><td>0.179</td><td>0.563</td><td>0.164</td></tr><tr><td>Textural</td><td>0.527</td><td>0.809</td><td>0.952</td></tr><tr><td rowspan="2">Rotated Digit</td><td>Semantic</td><td>0.155</td><td>0.707</td><td>0.214</td></tr><tr><td>Textural</td><td>0.121</td><td>0.231</td><td>0.267</td></tr><tr><td rowspan="2">Kernelled</td><td>Semantic</td><td>0.710</td><td>0.620</td><td>0.220</td></tr><tr><td>Textural</td><td>0.550</td><td>0.200</td><td>0.490</td></tr></table>

From the results in Table A1, we can see that GLCM suits our goal very well: GLCM outperforms the other methods in most cases in classifying textural patterns, while it predicts least well on the semantic tasks.

## A2 EXPLANATION OF HEX

## A2.1 MATHEMATICAL RATIONALE

With \(F_{A}\) and \(F_{G}\) calculated as in Equation 3, we need to transform the representation \(F_{A}\) so that it is least explainable by \(F_{G}\). Directly adopting subtraction may be problematic because \(F_{A} - F_{G}\) can still be correlated with \(F_{G}\). A straightforward way is to regress the information of \(F_{G}\) out of \(F_{A}\). Since both \(F_{A}\) and \(F_{G}\) lie in the same space and the only operation left in the network is the final argmax, we can safely use linear operations. To form a standard linear regression problem, we first consider column \(k\) of \(F_{A}\), denoted by \(F_{A}^{(k)}\). The standard linear regression problem is

\[\beta^{(k)} = \arg \min_{\beta^{(k)}}\|F_{A}^{(k)} - F_{G}\beta^{(k)}\|_{2}^{2}\]

This problem has a closed-form solution when the minibatch size is greater than the number of classes (i.e., when the number of rows of \(F_{G}\) is greater than the number of columns of \(F_{G}\)):

\[\beta^{(k)} = (F_{G}^{T}F_{G})^{-1}F_{G}^{T}F_{A}^{(k)}\]

Therefore, for the \(k^{\mathrm{th}}\) column of \(F_{A}\), the part that cannot be explained by \(F_{G}\) (denoted by \(F_{L}^{(k)}\)) is

\[F_{L}^{(k)} = F_{A}^{(k)} - F_{G}(F_{G}^{T}F_{G})^{-1}F_{G}^{T}F_{A}^{(k)} = (I - F_{G}(F_{G}^{T}F_{G})^{-1}F_{G}^{T})F_{A}^{(k)}\]

Repeating this for every column of \(F_{A}\) leads to

\[F_{L} = (I - F_{G}(F_{G}^{T}F_{G})^{-1}F_{G}^{T})F_{A}\]

which is Equation 4.

<--- Page Split --->

## A2.2 WHEN \(F_{G}^{T}F_{G}\) IS NOT INVERTIBLE

As mentioned above, Equation 4 can only be derived when the minibatch size is greater than the number of classes to predict, because \(F_{G}^{T}F_{G}\) is only non-singular (invertible) when this condition is met. Therefore, a simple technique that always guarantees a solution with HEX is to use a minibatch size greater than the number of classes. We believe this is a realistic requirement because in real-world applications we always know the number of classes to classify, and it is usually much smaller than the maximum minibatch size a modern computer can handle. However, to make the paper complete, we also introduce a more robust method that is applicable regardless of the choice of minibatch size.
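As a quick sanity check of the derivation above, the following NumPy snippet (ours, not part of the original implementation) verifies that the \(F_{L}\) of Equation 4 is orthogonal to the columns of \(F_{G}\), using a minibatch larger than the number of classes as required:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, classes = 64, 7                 # minibatch size deliberately > number of classes
F_A = rng.normal(size=(batch, classes))
F_G = rng.normal(size=(batch, classes))

# Residual-maker projection of Equation 4
F_L = (np.eye(batch) - F_G @ np.linalg.inv(F_G.T @ F_G) @ F_G.T) @ F_A

print(np.abs(F_G.T @ F_L).max())       # ~1e-13: no column of F_L is explainable by F_G
```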
We start with the simple intuition that to make sure \(F_{G}^{T}F_{G}\) is always invertible, the simplest remedy is to add a small number to the diagonal, leading to \(F_{G}^{T}F_{G} + \lambda I\), where we could end the discussion by simply treating \(\lambda\) as a tunable hyperparameter. However, we prefer that our algorithm not require tuning additional hyperparameters. Substituting \(F_{G}^{T}F_{G} + \lambda I\) into the previous equation gives

\[F_{L}^{(k)} = (I - F_{G}(F_{G}^{T}F_{G} + \lambda I)^{-1}F_{G}^{T})F_{A}^{(k)}\]

With the Kailath variant (push-through identity) (Bishop, 1995), \((F_{G}^{T}F_{G} + \lambda I)^{-1}F_{G}^{T} = F_{G}^{T}(F_{G}F_{G}^{T} + \lambda I)^{-1}\), we have

\[F_{L}^{(k)} = F_{A}^{(k)} - F_{G}F_{G}^{T}(F_{G}F_{G}^{T} + \lambda I)^{-1}F_{A}^{(k)} = F_{A}^{(k)} - F_{G}\beta_{\lambda}^{(k)}\]

where \(\beta_{\lambda}^{(k)} = F_{G}^{T}(F_{G}F_{G}^{T} + \lambda I)^{-1}F_{A}^{(k)}\) is the solution of a heteroscedastic regression method in which \(\lambda\) can be estimated through maximum likelihood estimation (MLE) (Wang et al., 2017). This completes a hyperparameter-free method even when \(F_{G}^{T}F_{G}\) is not invertible. However, in practice, we notice that the MLE procedure is very slow and the estimate is usually sensitive to noise. As a result, we recommend that users simply choose a larger minibatch size to avoid the problem. Nonetheless, we still include these steps here to 1) make the paper more complete, and 2) offer a solution for the rare cases in which a model must predict over hundreds or thousands of classes. Also, we name our main method "HEX" as shorthand for heteroscedastic regression. (A numerical check of the push-through identity appears at the end of this appendix.)

## A3 EXTRA EXPERIMENT RESULTS

## A3.1 A CLOSER LOOK INTO THE OFFICE DATA SET

We visualize some images of the Office data set in Figure A1, where we can see that the backgrounds of images from DSLR and Webcam are very similar, while the backgrounds of images from Amazon are distinctly different from these two.

## A3.2 HEX CONVERGES MUCH FASTER

We plot the testing accuracy of each method in the facial expression classification in Figure A2. From the figure, we can see that HEX and the related ablation methods converge significantly faster than the baseline methods.

## A3.3 EXAMPLES OF THE PATTERN-ATTACHED MNIST DATA SET

Examples of MNIST images with different kernel patterns attached, following Jo & Bengio (2017), are shown in Figure A3.

<--- Page Split ---> ![](images/13_0.jpg) <center>Figure A1: A closer look at the Office data set: we visualize the first 10 images of each subset. We show 12 of the 31 labels; the remaining labels tell a similar story. From the images, we can clearly see that many images from DSLR and Webcam share similar backgrounds, while the images from Amazon have a distinct background. Top row: Amazon, middle row: DSLR, bottom row: Webcam.</center> <--- Page Split ---> ![](images/14_0.jpg) <center>Figure A2: Testing accuracy curves of the facial expression classification experiment.</center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure A3: Synthetic MNIST data sets with Fourier transform patterns. The leftmost image represents the kernel.</center> <--- Page Split --->
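As a closing numeric check for Appendix A2.2, the snippet below (ours) verifies the push-through (Kailath) identity behind the \(\lambda\)-regularized projection, with a minibatch deliberately smaller than the number of classes and \(\lambda\) fixed to an arbitrary value rather than estimated by MLE:

```python
import numpy as np

rng = np.random.default_rng(1)
batch, classes, lam = 4, 7, 1e-2       # batch < classes, so F_G^T F_G alone is singular
F_A = rng.normal(size=(batch, classes))
F_G = rng.normal(size=(batch, classes))

# (F_G^T F_G + lam I)^{-1} F_G^T F_A  ==  F_G^T (F_G F_G^T + lam I)^{-1} F_A
left = np.linalg.solve(F_G.T @ F_G + lam * np.eye(classes), F_G.T @ F_A)
right = F_G.T @ np.linalg.solve(F_G @ F_G.T + lam * np.eye(batch), F_A)
print(np.abs(left - right).max())      # ~1e-15: the two forms agree

F_L = F_A - F_G @ right                # regularized projection, valid for small batches
```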
## ABSTRACT Despite impressive performance as evaluated on i.i.d. holdout data, deep neural networks depend heavily on superficial statistics of the training data and are liable to break under distribution shift. For example, subtle changes to the background or texture of an image can break a seemingly powerful classifier. Building on previous work on domain generalization, we hope to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training. This setting is challenging because the model may extract many distribution- specific (superficial) signals together with distribution- agnostic (semantic) signals. To overcome this challenge, we incorporate the gray- level co- occurrence matrix (GLCM) to extract patterns that our prior knowledge suggests are superficial: they are sensitive to texture but unable to capture the gestalt of an image. Then we introduce two techniques for improving our networks' out- of- sample performance. The first method is built on the reverse gradient method that pushes our model to learn representations from which the GLCM representation is not predictable. The second method is built on the independence introduced by projecting the model's representation onto the subspace orthogonal to GLCM representation's. We test our method on battery of standard domain generalization data sets and, interestingly, achieve comparable or better performance as compared to other domain generalization methods that explicitly require samples from the target distribution for training. ## 1 INTRODUCTION Imagine training an image classifier to recognize facial expressions. In the training data, while all images labeled "smile" may actually depict smiling people, the "smile" label might also be correlated with other aspects of the image. For example, people might tend to smile more often while outdoors, and to frown more in airports. In the future, we might encounter photographs with previously unseen backgrounds, and thus we prefer models that rely as little as possible on the superficial signal. The problem of learning classifiers robust to distribution shift, commonly called Domain Adaptation (DA), has a rich history. Under restrictive assumptions, such as covariate shift (Shimodaira, 2000; Gretton et al., 2009), and label shift (also known as target shift or prior probability shift) (Storkey, 2009; Schölkopf et al., 2012; Zhang et al., 2013; Lipton et al., 2018), principled methods exist for estimating the shifts and retraining under the importance- weighted ERM framework. Other papers bound worst- case performance under bounded shifts as measured by divergence measures on the train v.s. test distributions (Ben- David et al., 2010a; Mansour et al., 2009; Hu et al., 2016). While many impossibility results for DA have been proven (Ben- David et al., 2010b), humans nevertheless exhibit a remarkable ability to function out- of- sample, even when confronting dramatic <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Example illustration of train/validation/test data. The first row is "happiness" sentiment and the second row is "sadness" sentiment. The background and sentiment labels are correlated in training and validation set, but independent in testing set. </center> distribution shift. Few would doubt that given photographs of smiling and frowning astronauts on the Martian plains, we could (mostly) agree upon the correct labels. 
While we lack a mathematical description of how precisely humans are able to generalize so easily out- of- sample, we can often point to certain classes of perturbations that should not effect the semantics of an image. For example for many tasks, we know that the background should not influence the predictions made about an image. Similarly, other superficial statistics of the data, such as textures or subtle coloring changes should not matter. The essential assumption of this paper is that by making our model depend less on known superficial aspects, we can push the model to rely more on the difference that makes a difference. This paper focuses on visual applications, and we focus on high- frequency textural information as the relevant notion of superficial statistics that we do not want our model to depend upon. The contribution of this paper can be summarized as follows. - We propose a new differentiable neural network building block (neural gray-level co-occurrence matrix) that captures textural information only from images without modeling the lower-frequency semantic information that we care about (Section 3.1).- We propose an architecture-agnostic, parameter-free method that is designed to discard this superficial information, (Section 3.2).- We introduce two synthetic datasets for DA/DG studies that are more challenging than regular DA/DG scenario in the sense that the domain-specific information is correlated with semantic information. Figure 1 is a toy example (Section 4). ## 2 RELATED WORK IN DOMAIN ADAPTATION AND DOMAIN GENERALIZATION Domain generalization (DG) (Muandet et al., 2013) is a variation on DA, where samples from the target domain are not available during training. In reality, data- sets may contain data cobbled together from many sources but where those sources are not labeled. For example, a common assumption used to be that there is one and only one distribution for each dataset collected, but Wang et al. (2016) noticed that in video sentiment analysis, the data sources varied considerably even within the same dataset due to heterogeneous data sources and collection practices. Domain adaptation (Bridle & Cox, 1991; Ben- David et al., 2010a), and (more broadly) transfer learning have been studied for decades, with antecedents in the classic econometrics work on sample selection bias Heckman (1977) and choice models Manski & Lerman (1977). For a general primer, we refer the reader to these extensive reviews (Weiss et al., 2016; Csurka, 2017). Domain generalization (Muandet et al., 2013) is relatively new, but has also been studied extensively: covering a wide spectrum of techniques from kernel methods (Muandet et al., 2013; Niu et al., 2015; Erfani et al., 2016; Li et al., 2017c) to more recent deep learning end- to- end methods, where the methods mostly fall into two categories: reducing the inter- domain differences of representations through adversarial (or similar) techniques (Ghifary et al., 2015; Wang et al., 2016; Motiian et al., 2017; Li et al., 2018; Carlucci et al., 2018), or building an ensemble of one- for- each- domain deep <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 2: Introduction of Neural Gray-level Co-occurrence Matrix (NGLCM) and HEX. </center> models and then fusing representations together (Ding & Fu, 2018; Mancini et al., 2018). Meta- learning techniques are also explored (Li et al., 2017b). Related studies are also conducted under the name "zero shot domain adaptation" e.g. (Kumagai & Iwata, 2018). 
## 3 METHOD In this section, we introduce our main technical contributions. We will first introduce the our new differentiable neural building block, NGLCM that is designed to capture textural but not semantic information from images, and then introduce our technique for excluding the textural information. ### 3.1 NEURAL GRAY-LEVEL CO-OCCURRENCE MATRIX FOR SUPERFICIAL INFORMATION Our goal is to design a neural building block that 1) has enough capacity to extract the textural information from an image, 2) is not capable of extracting semantic information. We consulted some classic computer vision techniques for inspiration and extensive experimental evidence (Appendix A1), suggested that gray- level co- occurrence matrix (GLCM) (Haralick et al., 1973; Lam, 1996) may suit our goal. The idea of GLCM is to count the number of pixel pairs under a certain direction (common direction choices are \(0^{\circ}\) , \(45^{\circ}\) , \(90^{\circ}\) , and \(135^{\circ}\) ). For example, for an image \(A \in \mathcal{M}^{m \times m}\) , where \(\mathcal{M}\) denotes the set of all possible pixel values. The GLCM of \(A\) under the direction to \(0^{\circ}\) (horizontally right) will be a \(|\mathcal{M}| \times |\mathcal{M}|\) matrix (denoted by \(G\) ) defined as following: \[G_{k,l} = \sum_{i = 0}^{m - 1}\sum_{j = 0}^{m}I(A_{i,j} = k)I(A_{i + 1,j} = l) \quad (1)\] where \(|\mathcal{M}|\) stands for the cardinality of \(\mathcal{M}\) , \(I(\cdot)\) is an identity function, \(i,j\) are indices of \(A\) , and \(k,l\) are pixel values of \(A\) as well as indices of \(G\) . We design a new neural network building block that resembles GLCM but whose parameters are differentiable, having (sub)gradient everywhere, and thus are tunable through backpropagation. We first flatten \(A\) into a row vector \(a \in \mathcal{M}^{1 \times m^{2}}\) . The first observation we made is that the counting of pixel pairs \((p_{k}, p_{l})\) in Equation 1 is equivalent to counting the pairs \((p_{k}, \Delta p)\) , where \(\Delta p = p_{k} - p_{l}\) . Therefore, we first generate a vector \(d\) by multiplying \(a\) with a matrix \(D\) , where \(D\) is designed according to the direction of GLCM. For example, \(D\) in the \(0^{\circ}\) case will be a \(m^{2} \times m^{2}\) matrix \(D\) such that \(D_{i,i} = 1\) , \(D_{i,i + 1} = - 1\) , and 0 elsewhere. <--- Page Split ---> To count the elements in \(a\) and \(d\) with a differentiable operation, we introduce two sets of parameters \(\phi_{a} \in \mathcal{B}^{|\mathcal{N}| \times 1}\) and \(\phi_{b} \in \mathcal{B}^{|\mathcal{N}| \times 1}\) as the tunable parameter for this building block, so that: \[G = s(a;\phi_{a})s^{T}(d;\phi_{b}) \quad (2)\] where \(s()\) is a thresholding function defined as: \[s(a;\phi_{a}) = \min (\max (a\odot \phi_{a},0),1)\] where \(\odot\) denotes the minus operation with the broadcasting mechanism, yielding both \(s(a;\phi_{a})\) and \(s(d;\phi_{b})\) as \(|\mathcal{N}| \times m^{2}\) matrices. As a result, \(G\) is a \(|\mathcal{N}| \times |\mathcal{N}|\) matrix. The design rationale is that, with an extra constrain that requires \(\phi\) to have only unique values in the set of \(\{n - \epsilon |n\in \mathcal{N}\}\) , where \(\epsilon\) is a small number, \(G\) in Equation 2 will be equivalent to the GLCM extracted with old counting techniques, subject to permutation and scale. Also, all the operations used in the construction of \(G\) have (sub)gradient and therefore all the parameters are tunable with backpropagation. 
In practice, we drop the extra constraint on \(\phi\) for simplicity in computation. Our preliminary experiments suggested that for our purposes it is sufficient to first map standard images with 256 pixel levels to images with 16 pixel levels, which can reduce to the number of parameters of NGLCM \((|\mathcal{N}| = 16)\) . ### 3.2 HEX We first introduce notation to represent the neural network. We use \(\langle X,y\rangle\) to denote a dataset of inputs \(X\) and corresponding labels \(y\) . We use \(h(\cdot ;\theta)\) and \(f(\cdot ;\xi)\) to the bottom and top components of a neural network. A conventional neural network architecture will use \(f(h(X;\theta);\xi)\) to generate a corresponding result \(F_{i}\) and then calculate the argmax to yield the prediction label. Besides conventional \(f(h(X;\theta);\xi)\) , we introduce another architecture \[g(X;\phi) = \sigma_{m}((s(a;\phi_{a})s^{T}(d;\phi_{b}))W_{m} + b_{m})\] where \(\phi = \{\phi_{a},\phi_{b},W_{m},b_{m}\}\) , \(s(a;\phi_{a})s^{T}(d;\phi_{b})\) is introduced in previous section, \(\{W_{m},b_{m},\sigma_{m}\}\) (weights, biases, and activation function) form a standard MLP. With the introduction of \(g(\cdot ;\phi)\) , the final classification layer turns into \(f[h(X;\theta),g(X;\phi)];\xi)\) (where we use \([\cdot ,\cdot ]\) to denote concatenation). Now, with the representation learned through raw data by \(h(\cdot ;\theta)\) and textural representation learned by \(g(\cdot ;\phi)\) , the next question is to force \(f(\cdot ;\xi)\) to predict with transformed representation from \(h(\cdot ;\theta)\) that in some sense independent of the superficial representation captured by \(g(\cdot ;\phi)\) . To illustrate following ideas, we first introduce three different outputs from the final layer: \[\begin{array}{r l} & {F_{A} = f([h(X;\theta),g(X;\phi)];\xi)}\\ & {F_{G} = f([\mathbf{0},g(X;\phi)];\xi)}\\ & {F_{P} = f([h(X;\theta),\mathbf{0}];\xi)} \end{array} \quad (3)\] where \(F_{A}\) , \(F_{G}\) , and \(F_{P}\) stands for the results from both representations (concatenated), only the textural information (prepended with the 0 vector), and only the raw data (concatenated with the 0 vector), respectively. 0 stands for a padding matrix with all the zeros, whose shape can be inferred by context. Several heuristics have been proposed to force a network to "forget" some part of a representation, such as adversarial training (Ganin et al., 2016) or information- theoretic regularization (Moyer et al., 2018). In similar spirit, our first proposed solution is to adopt the reverse gradient idea (Ganin et al., 2016) to train \(F_{P}\) to be predictive for the semantic labels \(y\) while forcing the \(F_{P}\) to be invariant to \(F_{G}\) . Later, we refer to this method as \(ADV\) . When we use a multilayer perceptron (MLP) to try to predict \(g(X;\phi)\) from \(h(X;\theta)\) and update the primary model to fool the MLP via reverse gradient, we refer to the model as \(ADVE\) . Additionally, we introduce a simple alternative. Our idea lies in the fact that, in an affine space, to find a transformation of representation \(A\) that is least explainable by some other representation \(B\) , a straightforward method will be projecting \(A\) with a projection matrix constructed by \(B\) (sometimes <--- Page Split ---> referred as residual maker matrix.). To utilize this linear property, we choose to work on the space of \(F\) generated by \(f(\cdot ;\xi)\) right before the final argmax function. 
Projecting \(F_{A}\) with \[F_{L} = (I - F_{G}(F_{G}^{T}F_{G})^{-1}F_{G}^{T})F_{A} \quad (4)\] will yield \(F_{L}\) for parameter tuning. All the parameters \(\xi ,\phi ,\theta\) can be trained simultaneously (more relevant discussions in Section 5). In testing time, \(F_{P}\) is used. Due to limited space, we leave the following topics to the Appendix: 1) rationales of this approach (A2.1) 2) what to do in cases when \(F_{G}^{T}F_{G}\) is not invertible (A2.2). This method is referred as HEX. Two alternative forms of our algorithm are also worth mentioning: 1) During training, one can tune an extra hyperparameter \((\lambda)\) through \[l(\arg \max F_{L},y) + \lambda l(\arg \max F_{G},y)\] to ensure that the NGLCM component is learning superficial representations that are related to the present task where \(l(\cdot ,\cdot)\) is a generic loss function. 2) During testing, one can use \(F_{L}\) , although this requires evaluating the NGLCM component at prediction time and thus is slightly slower. We experimented with these three forms with our synthetic datasets and did not observe significant differences in performance and thus we adopt the fastest method as the main one. Empirically, we also notice that it is helpful to make sure the textural representation \(g(X;\phi)\) and raw data representation \(h(X;\theta)\) are of the same scale for HEX to work, so we column- wise normalize these two representations in every minibatch. ## 4 EXPERIMENTS To show the effectiveness of our proposed method, we conduct range of experiments, evaluating HEX's resilience against dataset shift. To form intuition, we first examine the NGLCM and HEX separately with two basic testings, then we evaluate on two synthetic datasets, on in which dataset shift is introduced at the semantic level and another at the raw feature level, respectively. We finally evaluate other two standard domain generalization datasets to compare with the state- of- the- art. All these models are trained with ADAM (Kingma & Ba, 2014). We conducted ablation tests on our two synthetic datasets with two cases 1) replacing NGLCM with one- layer MLP (denoted as M), 2) not using HEX/ADV (training the network with \(F_{A}\) (Equation 3) instead of \(F_{L}\) (Equation 4)) (denoted as N). We also experimented with the two alternative forms of HEX: 1) with \(F_{G}\) in the loss and \(\lambda = 1\) (referred as HEX- ADV), 2) predicting with \(F_{L}\) (referred as HEX- ALL). We also compare with the popular DG methods (DANN (Ganin et al., 2016)) and another method called information- dropout (Achille & Soatto, 2018). ### 4.1 SYNTHETIC EXPERIMENTS FOR BASIC PERFORMANCE TESTS #### 4.1.1 NGLCM ONLY EXTRACTS TEXTURAL INFORMATION To show that the NGLCM only extracts textural information, we trained the network with a mixture of four digit recognition data sets: MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), MNIST- M (Ganin & Lempitsky, 2014), and USPS (Denker et al., 1989). We compared NGLCM with a single layer of MLP. The parameters are trained to minimize prediction risk of digits (instead of domain). We extracted the representations of NGLCM and MLP and used these representations as features to test the five- fold cross- validated Naïve Bayes classifier's accuracy of predicting digit and domain. 
With two choices of learning rates, we repeated this for every epoch through 100 epochs of training and reported the mean and standard deviation over 100 epochs in Table 1: while MLP and NGLCM perform comparably well in extracting textural information, NGLCM is significantly less useful for recognizing the semantic label. #### 4.1.2 HEX PROJECTION To test the effectiveness of HEX, we used the extracted SURF (Bay et al., 2006) features (800 dimension) and GLCM (Lam, 1996) features (256 dimension) from office data set (Saenko et al., <--- Page Split ---> Table 1: Accuracy of domain classification and digit classification <table><tr><td></td><td>Random</td><td>MLP (1e-2)</td><td>NGLCM (1e-2)</td><td>MLP (1e-4)</td><td>NGLCM (1e-4)</td></tr><tr><td>Domain</td><td>0.25</td><td>0.686±0.020</td><td>0.738±0.018</td><td>0.750±0.054</td><td>0.687±0.029</td></tr><tr><td>Label</td><td>0.1</td><td>0.447±0.039</td><td>0.161±0.008</td><td>0.534±0.022</td><td>0.142±0.023</td></tr></table> <table><tr><td>Train</td><td>Test</td><td>Baseline</td><td>HEX</td><td>HEX-ADV</td><td>HEX-ALL</td></tr><tr><td>A, W</td><td>D</td><td>0.405±0.016</td><td>0.343±0.030</td><td>0.343±0.030</td><td>0.216±0.119</td></tr><tr><td>D, W</td><td>A</td><td>0.112±0.008</td><td>0.147±0.004</td><td>0.147±0.004</td><td>0.055±0.004</td></tr><tr><td>A, D</td><td>W</td><td>0.400±0.016</td><td>0.378±0.034</td><td>0.378±0.034</td><td>0.151±0.008</td></tr></table> Table 2: Accuracy on Office data set with extracted features. The Baseline refers to MLP with SURF features. The HEX methods refer to adding another MLP with features extracted by traditional GLCM methods. Because \(D\) and \(W\) are similar domains (same objects even share the same background), we believe these results favor the HEX method (see Section 4.1.2) for discussion). 2010) (31 classes). We built a two- layer MLP ( \(800 \times 256\) , and \(256 \times 31\) ) as baseline that only predicts with SURF features. This architecture and corresponding learning rate are picked to make sure the baseline can converge to a relatively high prediction performance. Then we plugged in the GLCM part with an extra first- layer network \(256 \times 32\) and the second layer of the baseline is extended to \(288 \times 31\) to take in the information from GLCM. Then we train the network again with HEX with the same learning rate. The Office data set has three different subsets: Webcam (W), Amazon (A), and DSLR (D). We trained and validated the model on a mixture of two and tested on the third one. We ran five experiments and reported the averaged accuracy with standard deviation in Table 2. These performances are not comparable to the state- of- the- art because they are based on features. At first glance, one may frown upon on the performance of HEX because out of three configurations, HEX only outperforms the baseline in the setting \(\{W, D\} \to A\) . However, a closer look into the datasets gives some promising indications for HEX: we notice \(W\) and \(D\) are distributed similarly in the sense that objects have similar backgrounds, while \(A\) is distributed distinctly (Appendix A3.1). Therefore, if we assume that there are two classifiers \(C_1\) and \(C_2\) : \(C_1\) can classify objects based on object feature and background feature while \(C_2\) can only classify objects based on object feature ignoring background feature. 
\(C_2\) will only perform better than \(C_1\) in \(\{W, D\} \to A\) case, and will perform worse than \(C_2\) in the other two cases, which is exactly what we observe with HEX. ### 4.2 FACIAL EXPRESSION CLASSIFICATION WITH NUISANCE BACKGROUND We generated a synthetic data set extending the Facial Expression Research Group Database (Aneja et al., 2016), which is a dataset of six animated individuals expressing seven different sentiments. For each pair of individual and sentiment, there are over 1000 images. To introduce the data shift, we attach seven different backgrounds to these images. In the training set (50% of the data) and validation set (30% of the data), the background is correlated with the sentiment label with a correlation of \(\rho\) ; in testing set (the rest 20% of the data), the background is independent of the sentiment label. A simpler toy example of the data set is shown in Figure 1. In the experiment, we format the resulting images to \(28 \times 28\) grayscale images. We run the experiments first with the baseline CNN (two convolutional layers and two fully connected layers) to tune for hyperparameters. We chose to run 100 epochs with learning rate 5e- 4 because this is when the CNN can converge for all these 10 synthetic datasets. We then tested other methods with the same learning rate. The results are shown in Figure 3 with testing accuracy and standard deviation from five repeated experiments. Testing accuracy is reported by the model with the highest validation score. In the figure, we compare baseline CNN (B), Ablation Tests (M and N), ADV (A), HEX (H), DANN (G), and InfoDropout (I). Most these methods perform well when \(\rho\) is small (when testing distributions are relatively similar to training distribution). As \(\rho\) increases, most methods' performances decrease, but Adv and HEX behave relatively stable across these ten correlation settings. We also notice that, as the correlation becomes stronger, M deteriorates at a faster pace than other methods. Intuitively, we believe this is because the MLP learns both from the seman <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 3: Averaged testing accuracy and standard deviation of five repeated experiments with different correlation level on sentiment with nuisance background data. Notations: baseline CNN (B), Ablation Tests (M (replacing NGLCM with MLP) and N (training without HEX projection)), ADVE (E), ADV (A), HEX (H), HEX-ADV (V), HEX-ALL (L), DANN (G), and InfoDropout (I). </center> ![](images/6_1.jpg) <center>Figure 4: Averaged testing accuracy and standard deviation of five repeated experiments with different strategies of attaching patterns to MNIST data. Notations: baseline CNN (B), Ablation Tests (M (replacing NGLCM with MLP) and N (training without HEX projection)), ADVE (E), ADV (A), HEX (H), HEX-ADV (V), HEX-ALL (L), DANN (G), and InfoDropout (I). </center> tic signal together with superficial signal, leading to inferior performance when HEX projects this signal out. We also notice that ADV and HEX improve the speed of convergence (Appendix A3.2). ### 4.3 MITIGATING THE TENDENCY OF SURFACE STATISTICAL REGULARITIES IN MNIST As Jo & Bengio (2017) observed, CNNs have a tendency to learn the surface statistical regularities: the generalization of CNNs is partially due to the abstraction of high level semantics of an image, and partially due to surface statistical regularities. Here, we demonstrate the ability of HEX to overcome such tendencies. 
We followed the radial and random Fourier filtering introduced in (Jo & Bengio, 2017) to attach the surface statistical regularities into the images in MNIST. There are three different regularities altogether (radial kernel, random kernel, and original image). We attached two of these into training and validation images and the remaining one into testing images. We also adopted two strategies in attaching surface patterns to training/validation images: 1) independently: the pattern is independent of the digit, and 2) dependently: images of digit 0- 4 have one pattern while images of digit 5- 9 have the other pattern. Some examples of this synthetic data are shown in Appendix A3.3. We used the same learning rate scheduling strategy as in the previous experiment. The results are shown in Figure 4. Figure legends are the same as previous. Interestingly, NGLCM and HEX contribute differently across these cases. When the patterns are attached independently, M performs the best overall, but when the patterns are attached dependently, N and HEX perform the best overall. In the most challenging case of these experiments (random kerneled as testing, pattern attached dependently), HEX shows a clear advantage. Also, HEX behaves relatively more stable overall. <--- Page Split ---> Table 3: Accuracy on MNIST-Rotation data set <table><tr><td>Test</td><td>CAE</td><td>MTAE</td><td>CCSA</td><td>DANN</td><td>Fusion</td><td>LabelGrad</td><td>CrossGrad</td><td>HEX</td><td>ADV</td></tr><tr><td>M0°</td><td>72.1</td><td>82.5</td><td>84.6</td><td>86.7</td><td>85.6</td><td>89.7</td><td>88.3</td><td>90.1</td><td>89.9</td></tr><tr><td>M15°</td><td>95.3</td><td>96.3</td><td>95.6</td><td>98</td><td>95.0</td><td>97.8</td><td>98.6</td><td>98.9</td><td>98.6</td></tr><tr><td>M30°</td><td>92.6</td><td>93.4</td><td>94.6</td><td>97.8</td><td>95.6</td><td>98.0</td><td>98.0</td><td>98.9</td><td>98.8</td></tr><tr><td>M45°</td><td>81.5</td><td>78.6</td><td>82.9</td><td>97.4</td><td>95.5</td><td>97.1</td><td>97.7</td><td>98.8</td><td>98.7</td></tr><tr><td>M60°</td><td>92.7</td><td>94.2</td><td>94.8</td><td>96.9</td><td>95.9</td><td>96.6</td><td>97.7</td><td>98.3</td><td>98.6</td></tr><tr><td>M75°</td><td>79.3</td><td>80.5</td><td>82.1</td><td>89.1</td><td>84.3</td><td>92.1</td><td>91.4</td><td>90.0</td><td>90.4</td></tr><tr><td>Avg</td><td>85.6</td><td>87.6</td><td>89.1</td><td>94.3</td><td>92.0</td><td>95.2</td><td>95.3</td><td>95.8</td><td>95.2</td></tr></table> ### 4.4 MNIST WITH ROTATION AS DOMAIN We continue to compare HEX with other state- of- the- art DG methods (that use distribution labels) on popular DG data sets. We experimented with the MNIST- rotation data set, on which many DG methods have been tested. The images are rotated with different degrees to create different domains. We followed the approach introduced by Ghifary et al. (2015). To reiterate: we randomly sampled a set \(\mathcal{M}\) of 1000 images out of MNIST (100 for each label). Then we rotated the images in \(\mathcal{M}\) counter- clockwise with different degrees to create data in other domains, denoted by \(\mathcal{M}_{15^{\circ}}\) , \(\mathcal{M}_{30^{\circ}}\) , \(\mathcal{M}_{45^{\circ}}\) , \(\mathcal{M}_{60^{\circ}}\) , \(\mathcal{M}_{75^{\circ}}\) . With the original set, denoted by \(\mathcal{M}_{0^{\circ}}\) , there are six domains altogether. 
We compared the performance of HEX/ADV with several methods tested on this data including CAE (Rifai et al., 2011), MTAE (Ghifary et al., 2015), CCSA (Motiian et al., 2017), DANN (Ganin et al., 2016), Fusion (Mancini et al., 2018), LabelGrad, and CrossGrad (Shankar et al., 2018). The results are shown in Table 3: HEX is only inferior to previous methods in one case and leads the average performance overall. ### 4.5 PACS: GENERALIZATION IN PHOTO, ART, CARTOON, AND SKETCH Finally, we tested on the PACS data set (Li et al., 2017a), which consists of collections of images of seven different objects over four domains, including photo, art painting, cartoon, and sketch. Following (Li et al., 2017a), we used AlexNet as baseline method and built HEX upon it. We met some optimization difficulties in directly training AlexNet on PACS data set with HEX, so we used a heuristic training approach: we first fine- tuned the AlexNet pretrained on ImageNet with PACS data of training domains without plugging in NGLCM and HEX, then we used HEX and NGLCM to further train the top classifier of AlexNet while the weights of the bottom layer are fixed. Our heuristic training procedure allows us to tune the AlexNet with only 10 epoches and train the top- layer classifier 100 epochs (roughly only 600 seconds on our server for each testing case). We compared HEX/ADV with the following methods that have been tested on PACS: AlexNet (directly fine- tuning pretrained AlexNet on PACS training data (Li et al., 2017a)), DSN (Bousmalis et al., 2016), L- CNN (Li et al., 2017a), MLDG (Li et al., 2017b), Fusion (Mancini et al., 2018). Notice that most of the competing methods (DSN, L- CNN, MLDG, and Fusion) have explicit knowledge about the domain identification of the training images. The results are shown in Table 4. Impressively, HEX is only slightly shy of Fusion in terms of overall performance. Fusion is a method that involves three different AlexNets, one for each training domain, and a fusion layer to combine the representation for prediction. The Fusion model is roughly three times bigger than HEX since the extra NGLCM component used by HEX is negligible in comparison to AlexNet in terms of model complexity. Interestingly, HEX achieves impressively high performance when the testing domain is Art painting and Cartoon, while Fusion is good at prediction for Photo and Sketch. ## 5 DISCUSSION AND CONCLUSION We introduced two novel components: NGLCM that only extracts textural information from an image, and HEX that projects the textural information out and forces the model to focus on semantic information. Limitations still exist. For example, NGLCM cannot be completely free of semantic information of an image. 
As a result, if we apply our method on standard MNIST data set, we will <--- Page Split ---> Table 4: Testing Accuracy on PACS <table><tr><td>Test Domain</td><td>AlexNet</td><td>DSN</td><td>L-CNN</td><td>MLDG</td><td>Fusion</td><td>HEX</td><td>ADV</td></tr><tr><td>Art</td><td>63.3</td><td>61.1</td><td>62.8</td><td>63.6</td><td>64.1</td><td>66.8</td><td>64.9</td></tr><tr><td>Cartoon</td><td>63.1</td><td>66.5</td><td>66.9</td><td>63.4</td><td>66.8</td><td>69.7</td><td>69.6</td></tr><tr><td>Photo</td><td>87.7</td><td>83.2</td><td>89.5</td><td>87.8</td><td>90.2</td><td>87.9</td><td>88.2</td></tr><tr><td>Sketch</td><td>54</td><td>58.5</td><td>57.5</td><td>54.9</td><td>60.1</td><td>56.3</td><td>55.5</td></tr><tr><td>Average</td><td>67.0</td><td>67.3</td><td>69.2</td><td>67.4</td><td>70.3</td><td>70.2</td><td>69.5</td></tr></table> see slight drop of performance because NGLCM also learns some semantic information, which is then projected out. Also, training all the model parameters simultaneously may lead into a trivial solution where \(F_{G}\) (in Equation 3) learns garbage information and HEX degenerates to the baseline model. To overcome these limitations, we invented several training heuristics, such as optimizing \(F_{P}\) and \(F_{G}\) sequentially and then fix some weights. However, we did not report results with training heuristics (expect for PACS experiment) because we hope to simplify the methods. Another limitation we observe is that sometimes the training performance of HEX fluctuates dramatically during training, but fortunately, the model picked up by highest validation accuracy generally performs better than competing methods. Despite these limitations, we still achieved impressive performance on both synthetic and popular DG data sets. ## APPENDIX ## A1 REASONS TO CHOOSE GLCM In order to search the old computer vision techniques for a method that can extract more textural information and less semantic information, we experimented with three classical computer vision techniques: SURF (Bay et al., 2006), LBP (He & Wang, 1990), and GLCM (Haralick et al., 1973) on several different data sets: 1) a mixture of four digit data sets (MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), MNIST- M (Ganin & Lempitsky, 2014), and USPS (Denker et al., 1989)) where the semantic task is to recognize the digit and the textural task is to classify the which data set the image is from; 2) a rotated MNIST data set with 10 different rotations where the semantic task is to recognize the digit and the textural task is to classify the degrees of rotation; 3) a MNIST data randomly attached one of 10 different types of radial kernel, for which the semantic task is to recognize digits and the textural task is to classify the different kernels. 
Table A1: Accuracy in classifying semantic and superficial information

<table><tr><td></td><td></td><td>LBP</td><td>SURF</td><td>GLCM</td></tr><tr><td rowspan="2">Digits</td><td>Semantic</td><td>0.179</td><td>0.563</td><td>0.164</td></tr><tr><td>Textural</td><td>0.527</td><td>0.809</td><td>0.952</td></tr><tr><td rowspan="2">Rotated Digit</td><td>Semantic</td><td>0.155</td><td>0.707</td><td>0.214</td></tr><tr><td>Textural</td><td>0.121</td><td>0.231</td><td>0.267</td></tr><tr><td rowspan="2">Kernelled</td><td>Semantic</td><td>0.710</td><td>0.620</td><td>0.220</td></tr><tr><td>Textural</td><td>0.550</td><td>0.200</td><td>0.490</td></tr></table>

From the results in Table A1, we can see that GLCM suits our goal well: it outperforms the other methods in most cases at classifying textural patterns, while predicting least well on the semantic tasks.

## A2 EXPLANATION OF HEX

## A2.1 MATHEMATICAL RATIONALE

With \(F_{A}\) and \(F_{G}\) calculated as in Equation 3, we need to transform the representation \(F_{A}\) so that it is least explainable by \(F_{G}\). Directly adopting subtraction may be problematic because \(F_{A} - F_{G}\) can still be correlated with \(F_{G}\). A straightforward alternative is to regress the information of \(F_{G}\) out of \(F_{A}\). Since both \(F_{A}\) and \(F_{G}\) lie in the same space and the only operation left in the network is the argmax operation, which is linear, we can safely use linear operations. To form a standard linear regression problem, we first consider column \(k\) of \(F_{A}\), denoted by \(F_{A}^{(k)}\). Solving the standard linear regression problem means solving:

\[\beta^{(k)} = \arg \min_{\beta^{(k)}}||F_{A}^{(k)} - F_{G}\beta^{(k)}||_{2}^{2}\]

This problem has a closed-form solution when the minibatch size is greater than the number of classes (i.e., when the number of rows of \(F_{G}\) is greater than the number of columns of \(F_{G}\)):

\[\beta^{(k)} = (F_{G}^{T}F_{G})^{-1}F_{G}^{T}F_{A}^{(k)}\]

Therefore, for the \(k^{\mathrm{th}}\) column of \(F_{A}\), what cannot be explained by \(F_{G}\) is (denoted by \(F_{L}^{(k)}\)):

\[F_{L}^{(k)} = F_{A}^{(k)} - F_{G}(F_{G}^{T}F_{G})^{-1}F_{G}^{T}F_{A}^{(k)} = (I - F_{G}(F_{G}^{T}F_{G})^{-1}F_{G}^{T})F_{A}^{(k)}\]

Repeating this for every column of \(F_{A}\) leads to:

\[F_{L} = (I - F_{G}(F_{G}^{T}F_{G})^{-1}F_{G}^{T})F_{A}\]

which is Equation 4.
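As a numerical illustration, the following NumPy sketch implements Equation 4; the `lam` argument anticipates the ridge-regularized variant derived in Appendix A2.2 below (`lam = 0` recovers Equation 4), and the shapes are illustrative.

```python
import numpy as np

def hex_projection(F_a, F_g, lam=0.0):
    """Return the part of F_a not linearly explainable by F_g (Equation 4)."""
    k = F_g.shape[1]
    # beta = (F_g^T F_g + lam * I)^{-1} F_g^T F_a, the least-squares coefficients
    beta = np.linalg.solve(F_g.T @ F_g + lam * np.eye(k), F_g.T @ F_a)
    return F_a - F_g @ beta

rng = np.random.default_rng(0)
F_a, F_g = rng.normal(size=(64, 10)), rng.normal(size=(64, 10))
F_l = hex_projection(F_a, F_g)
print(np.abs(F_g.T @ F_l).max())  # ~0: the residual is orthogonal to F_g
```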
## A2.2 WHEN \(F_{G}^{T}F_{G}\) IS NOT INVERTIBLE

As mentioned above, Equation 4 can only be derived when the minibatch size is greater than the number of classes to predict, because \(F_{G}^{T}F_{G}\) is non-singular (invertible) only when this condition is met. Therefore, a simple technique that always guarantees a solution with HEX is to use a minibatch size greater than the number of classes. We believe this is a realistic requirement: in real-world applications, we always know the number of classes to classify, and it is usually much smaller than the maximum minibatch size a modern computer can handle. However, for completeness, we also introduce a more robust method that is applicable independent of the choice of minibatch size.

We start with the simple intuition that to ensure \(F_{G}^{T}F_{G}\) is always invertible, the simplest approach is to add a small number to the diagonal, yielding \(F_{G}^{T}F_{G} + \lambda I\); we could end the discussion by simply treating \(\lambda\) as a tunable hyperparameter, but we prefer that our algorithm not require tuning additional hyperparameters. Substituting \(F_{G}^{T}F_{G} + \lambda I\) into the previous equation gives

\[F_{L}^{(k)} = (I - F_{G}(F_{G}^{T}F_{G} + \lambda I)^{-1}F_{G}^{T})F_{A}^{(k)}\]

With the Kailath variant of the matrix inversion lemma (Bishop et al., 1995), we have:

\[F_{L}^{(k)} = F_{A}^{(k)} - F_{G}F_{G}^{T}(F_{G}F_{G}^{T} + \lambda I)^{-1}F_{A}^{(k)} = F_{A}^{(k)} - F_{G}\beta_{\lambda}^{(k)}\]

where \(\beta_{\lambda}^{(k)} = F_{G}^{T}(F_{G}F_{G}^{T} + \lambda I)^{-1}F_{A}^{(k)}\) is the solution of a heteroscedastic regression in which \(\lambda\) can be estimated through maximum likelihood estimation (MLE) (Wang et al., 2017). This completes a hyperparameter-free method even when \(F_{G}^{T}F_{G}\) is not invertible. However, in practice we notice that the MLE procedure is very slow and the estimate is usually sensitive to noise. As a result, we recommend simply choosing a larger minibatch size to avoid the problem. Nonetheless, we still include these steps here to 1) make the paper more complete, and 2) offer a solution for the rare cases in which a model must predict over hundreds or thousands of classes. We also name our main method "HEX" as a shorthand for heteroscedastic regression.

## A3 EXTRA EXPERIMENT RESULTS

## A3.1 A CLOSER LOOK INTO THE OFFICE DATA SET

We visualize some images of the Office data set in Figure A1, where we can see that the backgrounds of the DSLR and Webcam images are very similar, while the backgrounds of the Amazon images are distinctly different from these two.

## A3.2 HEX CONVERGES MUCH FASTER

We plot the testing accuracy of each method in the facial expression classification experiment in Figure A2. From the figure, we can see that HEX and the related ablation methods converge significantly faster than the baseline methods.

## A3.3 EXAMPLES OF PATTERN-ATTACHED MNIST DATA SET

Figure A3 shows examples of MNIST images attached with different kernelled patterns, following Jo & Bengio (2017).

![](images/13_0.jpg)
<center>Figure A1: A closer look at the Office data set: we visualize the first 10 images of each domain, for 12 of the 31 labels; the story for the remaining labels is similar. From the images, we can clearly see that many images of DSLR and Webcam share a similar background, while the images of Amazon have a distinct background. Top row: Amazon; middle row: DSLR; bottom row: Webcam.</center>

![](images/14_0.jpg)
<center>Figure A2: Testing accuracy curve of the facial expression classification experiment.</center>

![](images/15_0.jpg)
<center>Figure A3: Synthetic MNIST data sets with Fourier transform patterns. The leftmost image represents the kernel.</center>
accept
Accept (Oral)
7.666667
ICLR_2019_paper_0265
iclr
2,019
# MODELING UNCERTAINTY WITH HEDGED INSTANCE EMBEDDING

Seong Joon Oh\* (LINE Corporation) coallaoh@linecorp.com
Kevin Murphy (Google Research) kpmurphy@google.com
Jiyan Pan (Google Research) jiyanpan@google.com
Joseph Roth (Google Research) josephroth@google.com
Florian Schroff (Google Research) fschroff@google.com
Andrew Gallagher (Google Research) agallagher@google.com

## ABSTRACT

Instance embeddings are an efficient and versatile image representation that facilitates applications like recognition, verification, retrieval, and clustering. Many metric learning methods represent the input as a single point in the embedding space. Often the distance between points is used as a proxy for match confidence. However, this can fail to represent uncertainty which can arise when the input is ambiguous, e.g., due to occlusion or blurriness. This work addresses this issue and explicitly models the uncertainty by "hedging" the location of each input in the embedding space. We introduce the hedged instance embedding (HIB) in which embeddings are modeled as random variables and the model is trained under the variational information bottleneck principle (Alemi et al., 2016; Achille & Soatto, 2018). Empirical results on our new N-digit MNIST dataset show that our method leads to the desired behavior of "hedging its bets" across the embedding space upon encountering ambiguous inputs. This results in improved performance for image matching and classification tasks, more structure in the learned embedding space, and an ability to compute a per-exemplar uncertainty measure which is correlated with downstream performance.

## 1 INTRODUCTION

An instance embedding is a mapping \(f\) from an input \(x\), such as an image, to a vector representation, \(z \in \mathbb{R}^{D}\), such that "similar" inputs are mapped to nearby points in space. Embeddings are a versatile representation that support various downstream tasks, including image retrieval (Babenko et al., 2014) and face recognition (Schroff et al., 2015).

Instance embeddings are often treated deterministically, i.e., \(z = f(x)\) is a point in \(\mathbb{R}^{D}\). We refer to this approach as a point embedding. One drawback of this representation is the difficulty of modeling aleatoric uncertainty (Kendall & Gal, 2017), i.e. uncertainty induced by the input. In the case of images this can be caused by occlusion, blurriness, low contrast, and other factors. To illustrate this, consider the example in Figure 1a. On the left, we show an image composed of two adjacent MNIST digits, the first of which is highly occluded. The right digit is clearly a 7, but the left digit could be a 1 or a 4.

One way to express this uncertainty about which choice to make is to map the input to a region of space, representing the inherent uncertainty of "where it belongs". We propose a new method, called hedged instance embedding (HIB), which achieves this goal. Each embedding is represented as a random variable, \(Z \sim p(z|x) \in \mathbb{R}^{D}\). The embedding effectively spreads probability mass across locations in space, depending on the level of uncertainty. For example in Figure 1b, the corrupted image is mapped to a two-component mixture of Gaussians covering both the "17" and "47" clusters. We propose a training scheme for the HIB with a learnable-margin contrastive loss and the variational information bottleneck (VIB) principle (Alemi et al., 2016; Achille & Soatto, 2018).
![](images/1_0.jpg)
<center>Figure 1: Unlike point embeddings, stochastic embeddings may hedge their bets across the space. When both “17” and “47” are plausible, our 2-component Gaussian mixture embedding has the power to spread probability mass on clusters with clean “17” and “47” images. By contrast, the point embedding will choose to be close to one or the other of these points (or somewhere between).</center>

To evaluate our method, we propose a novel dataset, N-digit MNIST, which we will open source. Using this dataset, we show that HIB exhibits several desirable properties compared to point embeddings: (1) downstream task performance (e.g. recognition and verification) improves for uncertain inputs; (2) the embedding space exhibits enhanced structural regularity; and (3) a per-exemplar uncertainty measure predicts when the output of the system is reliable.

## 2 METHODS

In this section, we describe our method in detail.

### 2.1 POINT EMBEDDINGS

Standard point embedding methods try to compute embeddings such that \(z_{1} = f(x_{1})\) and \(z_{2} = f(x_{2})\) are “close” in the embedding space if \(x_{1}\) and \(x_{2}\) are “similar” in the ambient space. To obtain such a mapping, we must decide on the definition of “closeness” as well as a training objective, as we explain below.

Contrastive loss. Contrastive loss (Hadsell et al., 2006) is designed to encourage a small Euclidean distance for a similar pair, and a distance of at least the margin \(M > 0\) for a dissimilar pair. The loss is

\[\mathcal{L}_{\mathrm{con}} = \left\{ \begin{array}{ll}||z_{1} - z_{2}||_{2}^{2} & \mathrm{if~match}\\ \max (M - ||z_{1} - z_{2}||_{2},0)^{2} & \mathrm{if~non\text{-}match} \end{array} \right. \quad (1)\]

where \(z_{i} = f(x_{i})\). The hyperparameter \(M\) is usually set heuristically or based on validation-set performance.

Soft contrastive loss. A probabilistic alternative to contrastive loss, which we will use in our experiments, is defined here. It represents the probability that a pair of points is matching:

\[p(m|z_{1},z_{2}):= \sigma (-a||z_{1} - z_{2}||_{2} + b) \quad (2)\]

with scalar parameters \(a > 0\) and \(b\in \mathbb{R}\), and the sigmoid function \(\sigma (t) = \frac{1}{1 + e^{- t}}\). This formulation calibrates Euclidean distances into a probabilistic expression for similarity. Instead of setting a hard threshold like \(M\), \(a\) and \(b\) together comprise a soft threshold on the Euclidean distance. We will later let \(a\) and \(b\) be trained from data.

Having defined the match probability \(p(m|z_{1},z_{2})\), we formulate the contrastive loss as a binary classification loss based on the softmax cross-entropy (negative log-likelihood loss). More precisely, for an embedding pair \((z_{1},z_{2})\) the loss is defined as

\[\mathcal{L}_{\mathrm{softcon}} = -\log p(m = \hat{m} |z_{1},z_{2}) = \left\{ \begin{array}{ll} - \log p(m|z_{1},z_{2}) & \mathrm{if~}\hat{m} = 1,\\ - \log (1 - p(m|z_{1},z_{2})) & \mathrm{if~}\hat{m} = 0, \end{array} \right. \quad (3)\]

where \(\hat{m}\) indicates the ground truth: \(\hat{m} = 1\) for a match and \(\hat{m} = 0\) otherwise.

![](images/2_0.jpg)
<center>Figure 2: Computational graph for computing \(p(m|x_{1},x_{2})\) using HIB with Gaussian embeddings.</center>

Although some prior work has explored this soft contrastive loss (e.g. Bertinetto et al. (2016); Orekondy et al. (2018)), it does not seem to be widely used. However, in our experiments, it performs strictly better than the hard-margin version, as explained in Appendix B.
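To make the loss concrete, here is a minimal PyTorch sketch of Equations 2-3 with learnable \(a\) and \(b\); the exponential parametrization enforcing \(a > 0\) is an implementation choice on our part, not a detail specified above.

```python
import torch

# a > 0 is enforced by parametrizing a = exp(log_a); both scalars are learned.
log_a = torch.nn.Parameter(torch.zeros(()))
b = torch.nn.Parameter(torch.zeros(()))

def soft_contrastive_loss(z1, z2, match):
    """z1, z2: (batch, D) embeddings; match: (batch,) floats in {0., 1.}."""
    d = torch.norm(z1 - z2, dim=-1)            # Euclidean distance
    logit = -torch.exp(log_a) * d + b          # argument of sigma in Equation 2
    # Equation 3: -log p(m) for matches, -log(1 - p(m)) for non-matches
    return torch.nn.functional.binary_cross_entropy_with_logits(logit, match)

z1, z2 = torch.randn(4, 2), torch.randn(4, 2)
print(soft_contrastive_loss(z1, z2, torch.tensor([1., 0., 1., 0.])))
```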
### 2.2 STOCHASTIC EMBEDDINGS

In HIB, we treat embeddings as stochastic mappings \(x \mapsto Z\), and write the distribution as \(Z \sim p(z|x)\). In the sections below, we show how to learn and use this mapping.

Match probability for probabilistic embeddings. The probability of two inputs matching, given in Equation 2, can easily be extended to stochastic embeddings as follows:

\[p(m|x_{1},x_{2}) = \int p(m|z_{1},z_{2})p(z_{1}|x_{1})p(z_{2}|x_{2})\mathrm{d}z_{1}\mathrm{d}z_{2}. \quad (4)\]

We approximate this integral via Monte Carlo sampling from \(z_{1}^{(k_{1})} \sim p(z_{1}|x_{1})\) and \(z_{2}^{(k_{2})} \sim p(z_{2}|x_{2})\):

\[p(m|x_{1},x_{2}) \approx \frac{1}{K^{2}} \sum_{k_{1} = 1}^{K} \sum_{k_{2} = 1}^{K} p\left(m|z_{1}^{(k_{1})}, z_{2}^{(k_{2})}\right). \quad (5)\]

In practice, we get good results using \(K = 8\) samples per input image. Now we discuss the computation of \(p(z|x)\).

Single Gaussian embedding. The simplest setting is to let \(p(z|x)\) be a \(D\)-dimensional Gaussian with mean \(\mu (x)\) and diagonal covariance \(\Sigma (x)\), where \(\mu\) and \(\Sigma\) are computed via a deep neural network with a shared "body" and \(2D\) total outputs. Given a Gaussian representation, we can draw \(K\) samples \(z^{(1)}, \dots , z^{(K)} \stackrel{\mathrm{iid}}{\sim} p(z|x)\), which we can use to approximate the match probability. Furthermore, we can use the reparametrization trick (Kingma & Welling, 2013) to rewrite the samples as \(z^{(k)} = \mathrm{diag}\left(\sqrt{\Sigma(x)}\right) \cdot \epsilon^{(k)} + \mu (x)\), where \(\epsilon^{(1)}, \dots , \epsilon^{(K)} \stackrel{\mathrm{iid}}{\sim} \mathcal{N}(0, I)\). This enables easy backpropagation during training.

Mixture of Gaussians (MoG) embedding. We can obtain a more flexible representation of uncertainty by using a mixture of \(C\) Gaussians to represent our embeddings, i.e. \(p(z|x) = \sum_{c = 1}^{C} \mathcal{N}(z; \mu (x, c), \Sigma (x, c))\). To enhance computational efficiency, the \(2C\) mappings \(\left\{(\mu (x, c), \Sigma (x, c))\right\}_{c = 1}^{C}\) share a common CNN stump and are branched with one linear layer per branch. When approximating Equation 5, we use stratified sampling, i.e. we draw the same number of samples from each Gaussian component.

Computational considerations. The overall pipeline for computing the match probability is shown in Figure 2. If we use a single Gaussian embedding, the cost (time complexity) of computing the stochastic representation is essentially the same as for point embedding methods, thanks to the shared network for computing \(\mu (x)\) and \(\Sigma (x)\), and the space requirement is only \(2\times\) larger. (This is an important consideration for many embedding-based methods.)
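The following is a minimal PyTorch sketch of the reparametrized sampling and the Monte Carlo match probability of Equation 5 for single Gaussian embeddings; the calibration scalars \(a\) and \(b\) are fixed here purely for brevity, whereas the model learns them.

```python
import torch

def sample_embeddings(mu, log_var, K=8):
    """Reparametrized samples from N(mu, diag(exp(log_var))): (K, batch, D)."""
    eps = torch.randn(K, *mu.shape)
    return mu + torch.exp(0.5 * log_var) * eps

def match_probability(mu1, lv1, mu2, lv2, K=8, a=1.0, b=0.0):
    """Monte Carlo estimate of Equation 5 over all K^2 sample pairs."""
    z1 = sample_embeddings(mu1, lv1, K)                 # (K, batch, D)
    z2 = sample_embeddings(mu2, lv2, K)
    d = torch.norm(z1[:, None] - z2[None, :], dim=-1)   # (K, K, batch)
    return torch.sigmoid(-a * d + b).mean(dim=(0, 1))   # (batch,)

mu, lv = torch.zeros(4, 2), torch.zeros(4, 2)           # a batch of 4, D = 2
print(match_probability(mu, lv, mu, lv))
```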
### 2.3 VIB TRAINING OBJECTIVE

For training our stochastic embedding, we combine two ingredients: the soft contrastive loss of Equation 3 and the VIB principle (Alemi et al., 2016; Achille & Soatto, 2018). We start with a summary of the original VIB formulation, and then describe its extension to our setting.

Variational Information Bottleneck (VIB). A discriminative model \(p(y|x)\) is trained under the information bottleneck principle (Tishby et al., 1999) by maximizing the following objective:

\[I(z,y) - \beta I(z,x) \quad (6)\]

where \(I\) is the mutual information, and \(\beta >0\) is a hyperparameter which controls the tradeoff between the sufficiency of \(z\) for predicting \(y\) and the minimality (size) of the representation. Intuitively, this objective lets the latent encoding \(z\) capture the parts of \(x\) that are salient for predicting \(y\), while preventing it from "memorising" other parts of the input that are irrelevant.

Computing the mutual information is generally computationally intractable, but it is possible to use a tractable variational approximation, as shown in Alemi et al. (2016); Achille & Soatto (2018). In particular, under the Markov assumption that \(p(z|x,y) = p(z|x)\), we arrive at a lower bound on Equation 6 for every training data point \((x,y)\):

\[- \mathcal{L}_{\mathrm{VIB}}:= \mathbb{E}_{z\sim p(z|x)}[\log q(y|z)] - \beta \cdot \mathrm{KL}(p(z|x)||r(z)) \quad (7)\]

where \(p(z|x)\) is the latent distribution for \(x\), \(q(y|z)\) is the decoder (classifier), and \(r(z)\) is an approximate marginal, typically set to the unit Gaussian \(\mathcal{N}(0,I)\). In Alemi et al. (2016), this approach was shown (experimentally) to be more robust to adversarial image perturbations than deterministic classifiers. It has also been shown to provide a useful way to detect out-of-domain inputs (Alemi et al., 2018). Hence we use it as the foundation for our approach.

VIB for learning stochastic embeddings. We now apply the above method to learn our stochastic embedding. In particular, we train a discriminative model based on matching or mismatching pairs of inputs \((x_{1},x_{2})\) by minimizing the following loss:

\[\begin{array}{r l} & {\mathcal{L}_{\mathrm{VIBEmb}}:= -\mathbb{E}_{z_{1}\sim p(z_{1}|x_{1}),z_{2}\sim p(z_{2}|x_{2})}[\log p(m = \hat{m} |z_{1},z_{2})]}\\ & {\qquad +\beta \cdot [\mathrm{KL}(p(z_{1}|x_{1})||r(z_{1})) + \mathrm{KL}(p(z_{2}|x_{2})||r(z_{2}))]} \end{array} \quad (8)\]

where the first term is the negative log-likelihood with respect to the ground-truth match \(\hat{m}\) (identical to Equation 3, the soft contrastive loss), and the second term is the KL regularization term with \(r(z) = \mathcal{N}(z;0,I)\). The full derivation is in Appendix F. We optimize this loss with respect to the embedding function \((\mu (x),\Sigma (x))\), as well as the \(a\) and \(b\) terms in the match probability of Equation 2.

Note that most pairs are not matching, so the \(m = 1\) class is rare. To handle this, we encourage a balance of \(m = 0\) and \(m = 1\) pair samples within each SGD minibatch by using two streams of input sample images: one samples images from the training set at random, and the other selects images from specific class labels; these are then randomly shuffled to produce the final batch. As a result, each minibatch has plenty of positive pairs even when there are a large number of classes.
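A minimal PyTorch sketch of Equation 8 follows, reusing the `sample_embeddings` helper sketched in Section 2.2; the diagonal-Gaussian KL term is written in closed form, and \(a\), \(b\), and \(\beta\) are fixed constants here purely for brevity (in the model, \(a\) and \(b\) are learned).

```python
import torch

def kl_to_unit_gaussian(mu, log_var):
    """Closed-form KL(N(mu, diag(exp(log_var))) || N(0, I)) per example."""
    return 0.5 * torch.sum(log_var.exp() + mu ** 2 - 1.0 - log_var, dim=-1)

def hib_loss(mu1, lv1, mu2, lv2, match, K=8, a=1.0, b=0.0, beta=1e-4):
    z1 = sample_embeddings(mu1, lv1, K)                  # (K, batch, D)
    z2 = sample_embeddings(mu2, lv2, K)
    logit = -a * torch.norm(z1[:, None] - z2[None, :], dim=-1) + b
    # First term of Equation 8: MC average of -log p(m = match | z1, z2)
    nll = torch.nn.functional.binary_cross_entropy_with_logits(
        logit, match.expand_as(logit))
    kl = kl_to_unit_gaussian(mu1, lv1) + kl_to_unit_gaussian(mu2, lv2)
    return nll + beta * kl.mean()
```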
### 2.4 UNCERTAINTY MEASURE

One useful property of our method is that the embedding is a distribution that encodes the level of uncertainty for a given input. As a scalar uncertainty measure, we propose the self-mismatch probability:

\[\eta (x):= 1 - p(m|x,x)\geq 0 \quad (9)\]

Intuitively, the embedding for an ambiguous input will span diverse semantic classes (as in Figure 1b). \(\eta (x)\) quantifies this by measuring the chance that two samples of the embedding, \(z_{1},z_{2}\stackrel {\mathrm{iid}}{\sim}p(z|x)\), belong to different semantic classes (i.e., that the event \(m = 0\) happens). We compute \(\eta (x)\) using the Monte Carlo estimate of Equation 5.

Prior works (Vilnis & McCallum, 2014; Bojchevski & Gunnemann, 2018) have computed uncertainty for Gaussian embeddings based on volumetric measures such as the trace or determinant of the covariance matrix. Unlike those measures, \(\eta (x)\) can be computed for any distribution from which one can sample, including multi-modal distributions like mixtures of Gaussians.

## 3 RELATED WORK

In this section, we mention the most closely related work from the fields of deep learning and probabilistic modeling.

Probabilistic DNNs. Several works have considered the problem of estimating the uncertainty of a regression or classification model, \(p(y|x)\), when given ambiguous inputs. One of the simplest and most widely used techniques is Monte Carlo dropout (Gal & Ghahramani, 2016), in which different random components of the hidden activations are "masked out" and a distribution over the outputs \(f(x)\) is computed. By contrast, we compute a parametric representation of the uncertainty and use Monte Carlo only to approximate the probability of two points matching. Monte Carlo dropout is not directly applicable in our setting: its randomness is attached to the model parameters and is independent of the input, so it is designed to measure model (epistemic) uncertainty. Our embedding distribution, on the other hand, is conditioned on the input, and our model is designed to measure input (aleatoric) uncertainty.

VAEs and VIB. A variational autoencoder (VAE, Kingma & Welling (2013)) is a latent variable model of the form \(p(x,z) = p(z)p(x|z)\), in which the generative decoder \(p(x|z)\) and an encoder network \(q(z|x)\) are trained jointly so as to maximize the evidence lower bound. By contrast, we compute a discriminative model on pairs of inputs to maximize a lower bound on the match probability. The variational information bottleneck (VIB) method (Alemi et al., 2016; Achille & Soatto, 2018) uses a variational approximation similar to the VAE to approximate the information bottleneck objective (Tishby et al., 1999). We build on this as explained in Section 2.3.

Point embeddings. Instance embeddings are often trained with metric learning objectives, such as contrastive (Hadsell et al., 2006) and triplet (Schroff et al., 2015) losses. Although these methods work well, they require careful sampling schemes (Wu et al., 2017; Movshovitz-Attias et al., 2017). Many alternatives have attempted to remove the dependency on sampling, including the softmax cross-entropy loss coupled with the centre loss (Wan et al., 2018) and a clustering-based loss (Song et al., 2017), and have improved embedding quality. In HIB, we use a soft contrastive loss, as explained in Section 2.1.

Probabilistic embeddings. The idea of probabilistic embeddings is not new. For example, Vilnis & McCallum (2014) proposed Gaussian embeddings to represent levels of specificity of word embeddings (e.g. "Bach" is more specific than "composer"). The closeness of two Gaussians is based on their KL-divergence, and uncertainty is computed from the spread of the Gaussian (the determinant of the covariance matrix). See also Karaletsos et al. (2015); Bojchevski & Gunnemann (2018) for related work.
Neelakantan et al. (2014) proposed to represent each word using multiple prototypes, using a "best of \(K\)" loss during training. HIB, by contrast, measures closeness based on a quantity related to the expected Euclidean distance, and measures uncertainty using the self-mismatch probability.

## 4 EXPERIMENTS

In this section, we report our experimental results, comparing our stochastic embeddings to point embeddings. We consider two main tasks: the verification task (determining whether two input images correspond to the same class) and the identification task (predicting the label for an input image). For the latter, we use a K-nearest neighbors approach with \(K = 5\). We compare the performance of three methods: a baseline deterministic embedding method, our stochastic embedding method with a Gaussian embedding, and our stochastic embedding method with a mixture-of-Gaussians embedding. We also conduct a qualitative comparison of the embeddings of each method. In Appendix C, we describe additional experiments, including one in which HIB is applied to a dataset of cat and dog images covering over 100K distinct animals, with the embedding directed towards identifying specific animals.

### 4.1 EXPERIMENTAL DETAILS

We conduct all our experiments on a new dataset we created called N-digit MNIST, which consists of images composed of \(N\) adjacent MNIST digits that may be randomly occluded (partially or fully). See Appendix A for details. During training, we occlude \(20\%\) of the digits independently; a single image can have multiple corrupted digits. During testing, we consider both clean (unoccluded) and corrupted (occluded) images, and report results separately. We use images with \(N = 2\) and \(N = 3\) digits. We will open source the data to ensure reproducibility.

Since our dataset is fairly simple, we use a shallow CNN model to compute the embedding function. Specifically, it consists of 2 convolutional layers with \(5\times 5\) filters, each followed by max pooling, and then a fully connected layer mapping to \(D\) dimensions. We focus on the cases where \(D = 2\) or \(D = 3\), and present additional results for \(D = 4\) in the Appendix. When we use more dimensions, all methods (both stochastic and deterministic) perform almost perfectly (above \(90\%\)), so there are no interesting differences to report. We leave the exploration of more challenging datasets and higher-dimensional embeddings to future work.

Our networks are built with TensorFlow (Abadi et al., 2015). For each task, the input is an image of size \(28\times (N\times 28)\), where \(N\) is the number of concatenated digit images. We use a batch size of 128 and 500k training iterations. Each model is trained from scratch with random weight initialization. The KL-divergence hyperparameter \(\beta\) is set to \(10^{- 4}\) throughout the experiments. We report the effects of changing \(\beta\) in Appendix E.

### 4.2 QUALITATIVE EVALUATION OF THE REPRESENTATION

Figure 3 shows HIB 2D Gaussian embeddings for the clean and corrupt subsets of the test set. We can easily see that the corrupt images generally have larger (i.e., less certain) embeddings. In the Appendix, Figure 7 shows a similar result when using a 2D MoG representation, and Figure 8 shows a similar result for 3D Gaussian embeddings. Figure 4 illustrates the embeddings of several test set images, overlaid with an indication of each class's centroid.
Hedged embeddings capture uncertainty that may exist across complex subsets of the class label space by learning a layout of the embedding space in which classes that may be confused with one another can receive density from the underlying hedged embedding distribution. We observe enhanced spatial regularity when using HIB: classes with a common least or most significant digit roughly align parallel to the \(x\) or \(y\) axis. This is a consequence of the diagonal structure of the embedding covariance matrix; by controlling the parametrization of the covariance matrix, one may impose varying degrees and types of structure on the embedding space (e.g. diagonally aligned embeddings). See Appendix D for more analysis of the learned latent space.

### 4.3 QUANTITATIVE EVALUATION OF THE BENEFITS OF STOCHASTIC EMBEDDING

We first measure performance on the verification task, where the network is used to compute \(p(m|x_{1},x_{2})\) for 10k pairs of test images, half of which are matches and half of which are not. The average precision (AP) for this task is reported in the top half of Table 1. HIB shows improved performance, especially for corrupted test images. For example, in the \(N = 2\) digit case with \(D = 2\) dimensions, point embeddings achieve \(88.0\%\) AP on corrupt test images, while hedged instance embeddings improve this to \(90.7\%\) with \(C = 1\) Gaussian and \(91.2\%\) with \(C = 2\) Gaussians.

We next measure performance on the KNN identification task; the bottom half of Table 1 reports the results. Again, the proposed stochastic embeddings generally outperform point embeddings, with the greatest advantage on corrupted input samples. For example, in the \(N = 2\) digit case with \(D = 2\) dimensions, point embeddings achieve \(58.3\%\) accuracy on corrupt test images, while HIB improves this to \(76.0\%\) with \(C = 1\) Gaussian and \(75.7\%\) with \(C = 2\) Gaussians.

### 4.4 KNOWN UNKNOWNS

In this section, we address the task of estimating when an input can be reliably recognized or not, which has important practical applications. To do this, we use the measure of uncertainty \(\eta (x)\) defined in Equation 9.
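For reference, \(\eta(x)\) is a one-liner on top of the `match_probability` function sketched in Section 2.2; this is a minimal illustration rather than the exact evaluation code.

```python
import torch

def self_mismatch(mu, log_var, K=8, a=1.0, b=0.0):
    """eta(x) = 1 - p(m | x, x), Equation 9, via the MC estimate of Equation 5."""
    return 1.0 - match_probability(mu, log_var, mu, log_var, K=K, a=a, b=b)
```

Sorting the test examples by this score is exactly the binning criterion used below.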
![](images/6_0.jpg)
<center>Figure 3: 2D Gaussian embeddings from 2-digit MNIST. Corrupted test images (right) generally have embeddings that spread density across larger regions of the space than clean images (left). See Appendix Figure 8 for a 3D version.</center>

![](images/6_1.jpg)
<center>Figure 4: Point vs. hedged image embeddings. Square markers indicate the class centroids of embeddings for 2-digit MNIST. Left: Centroids learned with point embeddings; occluded examples map into the embedding space with no special indication of corruption or confidence. Right: Hedged embeddings comprising a single Gaussian (rendered as \(3\sigma\) ellipses); a clean input such as the “03” is embedded into a tight Gaussian, while a corrupted image such as the “52” occupies an elongated ellipse that places bets that the least significant digit is a “2” but the most significant digit could be any of a number of choices.</center>

We measure the utility of \(\eta (x)\) for the identification task as follows. For the test set, we sort all test input examples according to \(\eta (x)\) and divide them into 20 bins, from lowest to highest uncertainty. We then measure the KNN classification accuracy for the examples falling in each bin.

To measure the utility of \(\eta (x)\) for the verification task, we take random pairs of samples \((x_{1}, x_{2})\) and compute the mean of their uncertainties, \(\eta (x_{1}, x_{2}) = \frac{1}{2} (\eta (x_{1}) + \eta (x_{2}))\). We then distribute the test pairs into 20 equal-sized bins according to their uncertainty levels and compute the probability of a match for each pair. To cope with the severe class imbalance (most pairs do not match), we measure performance in each bin using average precision (AP). Kendall's tau is then again applied to measure the uncertainty-performance correlation.

Table 1: Accuracy of the pairwise verification and KNN identification tasks for point embeddings and our hedged embeddings with a single Gaussian component (MoG-1) and two components (MoG-2). We report results for images with \(N\) digits and \(D\) embedding dimensions.

<table><tr><td rowspan="2"></td><td colspan="3">N = 2, D = 2</td><td colspan="3">N = 2, D = 3</td><td colspan="3">N = 3, D = 2</td><td colspan="3">N = 3, D = 3</td></tr><tr><td>point</td><td>MoG-1</td><td>MoG-2</td><td>point</td><td>MoG-1</td><td>MoG-2</td><td>point</td><td>MoG-1</td><td>MoG-2</td><td>point</td><td>MoG-1</td><td>MoG-2</td></tr><tr><td>Verification AP, clean</td><td>0.987</td><td>0.989</td><td>0.990</td><td>0.996</td><td>0.996</td><td>0.978</td><td>0.981</td><td>0.976</td><td></td><td></td><td></td><td></td></tr><tr><td>Verification AP, corrupt</td><td>0.880</td><td>0.907</td><td>0.912</td><td>0.913</td><td>0.926</td><td>0.932</td><td>0.886</td><td>0.899</td><td></td><td></td><td></td><td></td></tr><tr><td>KNN accuracy, clean</td><td>0.871</td><td>0.879</td><td>0.888</td><td>0.942</td><td>0.953</td><td>0.939</td><td>0.554</td><td>0.591</td><td></td><td></td><td></td><td></td></tr><tr><td>KNN accuracy, corrupt</td><td>0.583</td><td>0.760</td><td>0.757</td><td>0.874</td><td>0.909</td><td>0.885</td><td>0.274</td><td>0.350</td><td></td><td></td><td></td><td></td></tr></table>

Table 2: Correlations between each input image's measure of uncertainty, \(\eta (x)\), and AP and KNN performance. High correlation coefficients suggest a close relationship.

<table><tr><td rowspan="2"></td><td colspan="2">N = 2, D = 2</td><td colspan="2">N = 2, D = 3</td><td colspan="2">N = 3, D = 2</td><td colspan="2">N = 3, D = 3</td></tr><tr><td>MoG-1</td><td>MoG-2</td><td>MoG-1</td><td>MoG-2</td><td>MoG-1</td><td>MoG-2</td><td>MoG-1</td><td>MoG-2</td></tr><tr><td>AP correlation, clean</td><td>0.74</td><td>0.43</td><td>0.68</td><td>0.48</td><td>0.63</td><td>0.28</td><td>0.51</td><td>0.39</td></tr><tr><td>AP correlation, corrupt</td><td>0.81</td><td>0.79</td><td>0.86</td><td>0.87</td><td>0.82</td><td>0.76</td><td>0.85</td><td>0.79</td></tr><tr><td>KNN correlation, clean</td><td>0.71</td><td>0.57</td><td>0.72</td><td>0.47</td><td>0.76</td><td>0.29</td><td>0.74</td><td>0.54</td></tr><tr><td>KNN correlation, corrupt</td><td>0.47</td><td>0.43</td><td>0.55</td><td>0.52</td><td>0.49</td><td>0.50</td><td>0.67</td><td>0.34</td></tr></table>

![](images/7_0.jpg)
<center>Figure 5: Correlations between the uncertainty measure \(\eta (x)\) and AP and KNN accuracy on the test set for the \(N = 3\), \(D = 3\) case using single Gaussian embeddings. Uncertainty increases along the horizontal axis. We observe that accuracy generally decreases as uncertainty increases.</center>
Figure 5 plots the AP and KNN accuracy versus the uncertainty bin index, for both clean and corrupted inputs. We see that when the performance drops off, the model's uncertainty measure increases, as desired. To quantify this, we compute the correlation between the performance metric and the uncertainty metric. Instead of the standard linear correlation (Pearson's correlation coefficient), we use Kendall's tau correlation (Kendall, 1938), which measures the degree of monotonicity between the performance and the uncertainty level (bin index), inverting the sign so that positive correlation aligns with our goal. The results for different models are shown in Table 2.

In general, the measure \(\eta (x)\) correlates with task performance. As a baseline for point embeddings in KNN, we explored using the distance to the nearest neighbor as a proxy for uncertainty, but found that it performed poorly. The HIB uncertainty metric correlates with task accuracy even within the subset of clean (uncorrupted) input images, indicating that HIB's understanding of uncertainty goes beyond simply detecting which images are corrupted.
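As a concrete illustration of this analysis, the following is a minimal NumPy/SciPy sketch of the binning-and-correlation procedure for the KNN task; the data at the end are synthetic toy values that only illustrate the expected monotone relation.

```python
import numpy as np
from scipy.stats import kendalltau

def binned_accuracy_correlation(eta, correct, n_bins=20):
    """eta: (N,) uncertainties; correct: (N,) 0/1 KNN hits per example."""
    order = np.argsort(eta)                        # sort examples by uncertainty
    bins = np.array_split(correct[order], n_bins)  # 20 (near-)equal-sized bins
    acc = np.array([b.mean() for b in bins])       # per-bin KNN accuracy
    tau, _ = kendalltau(np.arange(n_bins), acc)
    return acc, -tau                               # sign inverted as in the text

rng = np.random.default_rng(0)
eta = rng.random(10000)
correct = (rng.random(10000) > eta).astype(float)  # toy: accuracy decays with eta
acc, tau = binned_accuracy_correlation(eta, correct)
print(tau)                                         # close to +1 in this toy case
```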
## 5 DISCUSSION AND FUTURE WORK

Hedged instance embedding is a stochastic embedding that captures the uncertainty of the mapping from an image to a latent embedding space by spreading density across plausible locations. This results in improved performance on various tasks, such as verification and identification, especially for ambiguous corrupted inputs. It also allows for a simple way to estimate the uncertainty of the embedding that is correlated with performance on downstream tasks.

There are many possible directions for future work, including experimenting with higher-dimensional embeddings and harder datasets. As an early look at these tasks, in Appendix C.3 we apply HIB to cat and dog instance embedding directed towards identifying specific animals with 20D embeddings. It would also be interesting to consider the "open world" (or "unknown unknowns") scenario, in which the test set may contain examples of novel classes, such as digit combinations that were not in the training set (see e.g. Lakkaraju et al. (2017); Günther et al. (2017)). This is likely to result in uncertainty about where to embed the input that differs from the uncertainty induced by occlusion: open-world uncertainty is epistemic (due to lack of knowledge of a class), whereas uncertainty due to occlusion is aleatoric (intrinsic, due to lack of information in the input), as explained in Kendall & Gal (2017). Preliminary experiments suggest that \(\eta (x)\) correlates well with detecting occluded inputs, but does not work as well for novel classes. We leave more detailed modeling of epistemic uncertainty as future work.

## ACKNOWLEDGMENTS

We are grateful to Alex Alemi and Josh Dillon for helpful discussions related to VAEs and the variational information bottleneck, and to Lucy Gao for the pet dataset of cats and dogs.

## REFERENCES

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/.

Alessandro Achille and Stefano Soatto. On the emergence of invariance and disentangling in deep representations. JMLR, 18:1-34, 2018.

Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. In ICLR, 2016.

Alexander A. Alemi, Ian Fischer, and Joshua V. Dillon. Uncertainty in the variational information bottleneck. In UAI Workshop on Uncertainty in Deep Learning, 2018.

Artem Babenko, Anton Slesarev, Alexandr Chigorin, and Victor Lempitsky. Neural codes for image retrieval. In ECCV, 2014.

Luca Bertinetto, Jack Valmadre, João F. Henriques, Andrea Vedaldi, and Philip H. S. Torr. Fully-convolutional siamese networks for object tracking. arXiv preprint arXiv:1606.09549, 2016.

Aleksandar Bojchevski and Stephan Günnemann. Deep Gaussian embedding of attributed graphs: Unsupervised inductive learning via ranking. In ICLR, 2018.

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.

Manuel Günther, Steve Cruz, Ethan M. Rudd, and Terrance E. Boult. Toward open-set face recognition. In CVPR Biometrics Workshop, 2017.

R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.

Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.

Theofanis Karaletsos, Serge Belongie, and Gunnar Rätsch. Bayesian representation learning with oracle constraints. In ICLR, 2015.

Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? In NIPS, 2017.

Maurice G. Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81-93, 1938.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2013.

H. Lakkaraju, E. Kamar, R. Caruana, and E. Horvitz. Identifying unknown unknowns in the open world: Representations and policies for guided exploration, 2017.

Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.

Yair Movshovitz-Attias, Alexander Toshev, Thomas K. Leung, Sergey Ioffe, and Saurabh Singh. No fuss distance metric learning using proxies. In ICCV, 2017.

Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. Efficient non-parametric estimation of multiple embeddings per word in vector space. In EMNLP, 2014.

Tribhuvanesh Orekondy, Seong Joon Oh, Bernt Schiele, and Mario Fritz. Understanding and controlling user linkability in decentralized learning. arXiv preprint arXiv:1805.05838, 2018.

Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, 2015.
Hyun Oh Song, Stefanie Jegelka, Vivek Rathod, and Kevin Murphy. Deep metric learning via facility location. In CVPR, 2017.

N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In The 37th Annual Allerton Conference on Communication, Control, and Computing, 1999.

Luke Vilnis and Andrew McCallum. Word representations via Gaussian embedding. In ICLR, 2014.

Weitao Wan, Yuanyi Zhong, Tianpeng Li, and Jiansheng Chen. Rethinking feature distribution for loss functions in image classification. In CVPR, 2018.

Chao-Yuan Wu, R. Manmatha, Alexander J. Smola, and Philipp Krähenbühl. Sampling matters in deep embedding learning. In ICCV, 2017.

## Appendix

## A N-DIGIT MNIST DATASET

We present N-digit MNIST, a new dataset based on MNIST (LeCun, 1998) whose number of classes grows exponentially in the number of digits \(N\), and for which embedding-style classification methods are well suited. The dataset is created by horizontally concatenating \(N\) MNIST digit images. While constructing the new classes, we respect the training and test splits: for example, a test image of a "54" from 2-digit MNIST does not share its "5" or its "4" with any image from the training set (at any position).

Table 3: Summary of the N-digit MNIST dataset for \(N = 2,3\)

<table><tr><td>Number of Digits</td><td>Total Classes</td><td>Training Classes</td><td>Unseen Test Classes</td><td>Seen Test Classes</td><td>Training Images</td><td>Test Images</td></tr><tr><td>2</td><td>100</td><td>70</td><td>30</td><td>70</td><td>100 000</td><td>10 000</td></tr><tr><td>3</td><td>1000</td><td>700</td><td>300</td><td>700</td><td>100 000</td><td>10 000</td></tr></table>

We employ 2- and 3-digit MNIST (Table 3) in our experiments. This dataset is meant to provide a test bed for easier and more efficient evaluation of embedding algorithms than larger and more realistic datasets. N-digit MNIST is more suitable for embedding evaluation than other synthetic datasets due to its exponentially increasing number of classes as well as its factorizability: each digit position corresponds to, e.g., a face attribute in face datasets.

We inject uncertainty into the underlying tasks by randomly occluding (with black fill-in) regions of images at training and test time. Specifically, the corruption operation is applied independently to each digit of every number sample in the dataset. A random-sized square patch is identified at a random location of each \(28 \times 28\) digit image: the patch side length is first sampled as \(L \sim \mathrm{Unif}(0, 28)\), and then the top-left patch corner coordinates are sampled as \((TL_{x}, TL_{y}) \stackrel{\mathrm{iid}}{\sim} \mathrm{Unif}(0, 28 - L)\), so that the occluded square always has area \(L^{2}\). During training, we set an independent binary random flag for every digit determining whether to perform the occlusion at all; the occlusion chance is set to \(20\%\). For testing, we prepare twin datasets, clean and corrupt, whose digit images are respectively never and always occluded.
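For concreteness, the following is a minimal NumPy sketch of the per-digit occlusion just described; the sampling is discretized to integer pixel coordinates here, an implementation choice rather than a detail specified above.

```python
import numpy as np

def occlude_digit(digit, rng, p=0.2):
    """digit: (28, 28) array; black out a random square patch with probability p."""
    out = digit.copy()
    if rng.random() < p:
        L = int(rng.integers(0, 29))              # side length ~ Unif{0..28}
        tx, ty = rng.integers(0, 29 - L, size=2)  # top-left corner in [0, 28 - L]
        out[ty:ty + L, tx:tx + L] = 0             # black fill-in
    return out

rng = np.random.default_rng(0)
occluded = occlude_digit(np.ones((28, 28)), rng, p=1.0)
print(occluded.mean())                            # fraction of pixels kept
```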
## B SOFT CONTRASTIVE LOSS VERSUS CONTRASTIVE LOSS

![](images/10_0.jpg)
<center>Figure 6: Soft contrastive versus vanilla contrastive loss for point embeddings.</center>

As a building block for HIB, the soft contrastive loss was proposed in §2.1. Soft contrastive loss has a conceptual advantage over the vanilla contrastive loss: the margin hyperparameter \(M\) does not have to be hand-tuned. Here we verify that the soft contrastive loss outperforms the vanilla version over a range of \(M\) values. Figure 6 shows the verification (average precision) and identification (KNN accuracy) performance when embedding 2-digit MNIST samples. In both evaluations, the soft contrastive loss upper-bounds the vanilla contrastive performance. The new formulation thus removes one hyperparameter from the learning process without sacrificing performance.

![](images/11_0.jpg)
<center>Figure 7: MoG embeddings of 2-digit numbers. Components corresponding to a common input are joined with a thin line. It is qualitatively apparent that corrupted examples (right) tend to have more dispersed components than clean ones (left) as a result of the “bet hedging” behavior.</center>

![](images/11_1.jpg)
<center>Figure 8: 3D embeddings of 3-digit MNIST. Corrupted test images (right) generally have embeddings that spread density across larger regions of the space than those of clean images (left).</center>

## C ADDITIONAL RESULTS

In this section, we include some extra results which did not fit in the main paper.

Table 4: Task performance and uncertainty measure correlations with task performance for 4D embeddings with 3-digit MNIST.

(a) Task accuracies for pairwise verification and KNN identification.

<table><tr><td></td><td></td><td>point</td><td>MoG-1</td><td>MoG-2</td></tr><tr><td rowspan="2">Verification AP</td><td>clean</td><td>0.996</td><td></td><td></td></tr><tr><td>corrupt</td><td>0.942</td><td>0.944</td><td></td></tr><tr><td rowspan="2">KNN Accuracy</td><td>clean</td><td>0.914</td><td></td><td></td></tr><tr><td>corrupt</td><td>0.803</td><td>0.816</td><td></td></tr></table>

(b) Correlations between uncertainty, \(\eta (x)\), and task performances.

<table><tr><td></td><td></td><td>MoG-1</td><td>MoG-2</td></tr><tr><td rowspan="2">AP Correlation</td><td>clean</td><td>0.35</td><td></td></tr><tr><td>corrupt</td><td>0.79</td><td>0.68</td></tr><tr><td rowspan="2">KNN Correlation</td><td>clean</td><td>0.31</td><td></td></tr><tr><td>corrupt</td><td>0.42</td><td>0.35</td></tr></table>

## C.1 VISUAL REPRESENTATIONS OF MNIST MoG EMBEDDINGS

Figure 7 shows some 2D embeddings of 2-digit images using a MoG representation, with \(C = 2\) or \(C = 4\) clusters per embedding. Figure 8 shows some 3D embeddings of 3-digit images using a single Gaussian.

## C.2 4D MNIST EMBEDDINGS

In Table 4, we report the results of 4D embeddings with 3-digit MNIST. Here, task performance begins to saturate, with verification accuracy on clean input images above 0.99. However, we again observe that HIB provides a slight performance improvement over point embeddings for corrupt images, and that task performance correlates with uncertainty.

![](images/12_0.jpg)
<center>Figure 9: Correlation between the uncertainty measure \(\eta (x)\) and balanced accuracy on the pet test set for 20D single Gaussian embeddings. Uncertainty increases along the horizontal axis. (b) and (c) show match score distributions of pairs of the same and different pets for the lowest and highest uncertainty bins; there is clearly more confusion among the highly uncertain samples.</center>

## C.3 HIB EMBEDDINGS FOR PET IDENTIFICATION

We applied HIB to learn instance embeddings for identifying pets, using an internal dataset of nearly 1M cat and dog images with known identity, covering 117913 different pets.
We trained an HIB with 20D embeddings and a single Gaussian component (diagonal covariance). The CNN portion of the model is a MobileNet (Howard et al., 2017) with a width multiplier of 0.25. No artificial corruption is applied to the images, as there is sufficient uncertainty from sources such as occlusion, natural variations in lighting, and the pose of the animals.

We evaluate the verification task on a held-out test set of 8576 pet images, from which all pairs were analyzed. On this set, point embeddings achieve 0.777 balanced accuracy. Binning the uncertainty measures and measuring the correlation with verification performance (as with N-digit MNIST), we find the correlation between the uncertainty measure and task accuracy to be 0.995. This experiment shows that HIB scales to real-world problems with real-world sources of corruption, and provides evidence that HIB understands which embeddings are uncertain.

![](images/13_0.jpg)
<center>Figure 10: Uncertain embeddings self-organize. Class centroids for 2D point embeddings of 2-digit MNIST are shown, colored by ones digit (left) and tens digit (right). The hedged instance embeddings have a structure where the classes self-organize in an axis-aligned manner because of the diagonal covariance matrix used in the model.</center>

![](images/13_1.jpg)
<center>Figure 11: Embedding 2-digit MNIST onto the number line. Top: An ordering of class centroids produced with hedged instance embedding with Gaussian embeddings. Bottom: An ordering produced with point embedding. Centroids are colored according to the least significant (ones) digit. The hedged instance embedding more often places classes that share an attribute (here, a ones or tens digit) next to one another.</center>

## D ORGANIZATION OF THE LATENT EMBEDDING SPACE

As hedged instance embedding training progresses, it is advantageous for any subset of classes that may be confused with one another to be situated in the embedding space such that a given input image's embedding can strategically place probability mass. We observe that this impacts the organization of the underlying space. For example, in the 2D embeddings shown in Figure 10, the centers of mass of the classes are roughly axis-aligned, so that classes sharing a tens digit vary along the x-coordinate, and classes sharing a least significant (ones) digit vary along the y-coordinate.

To further explore this idea, we embed 2-digit MNIST into a single dimension, to see how the classes are placed along the number line. For hedged instance embedding, a single Gaussian embedding was chosen as the representation. We conjectured that because hedged instance embedding reduces the objective loss by placing groups of confusable categories near one another, the resulting embedding space would be organized to encourage classes that share a tens or ones digit to be nearby. Figure 11 shows an example embedding learned by each of the two methods.

We assess the embedding space as follows. First, the centroids for each of the 100 classes are derived from the test set embeddings. After sorting the classes, we count the adjacent class pairs that share a ones or tens digit, with the maximum possible count being 99. The hedged embeddings outscored the point embeddings in each of the four trials, with scores ranging from 76 to 80 versus scores of 42 to 74. Similarly, consider a run as a series of consecutive class pairs that share a ones or tens digit. The average run contains 4.6 classes for hedged embeddings and only 3.0 for point embeddings, as illustrated in Figure 11.
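For reference, the adjacency count just described is simple to compute; here is a minimal NumPy sketch with synthetic inputs.

```python
import numpy as np

def adjacency_score(centroids_1d, labels):
    """centroids_1d: (100,) positions on the number line; labels: ints 0..99."""
    order = np.argsort(centroids_1d)              # sort classes along the line
    s = labels[order]
    a, b = s[:-1], s[1:]                          # the 99 adjacent class pairs
    shares = (a % 10 == b % 10) | (a // 10 == b // 10)
    return int(shares.sum())                      # maximum possible is 99

labels = np.arange(100)
print(adjacency_score(labels.astype(float), labels))  # sorted 0..99 scores 90
```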
![](images/14_0.jpg)
<center>Figure 12: Impact of the weight on the KL divergence. Each plot shows ellipsoids corresponding to hedged embeddings of corrupted test images from models mapping 3-digit MNIST to 3 dimensions. From left to right, the KL divergence weight increases by a factor of ten with each plot. When the weight \(\beta\) is too small \((10^{-6})\), the embeddings approach points; when it is too large \((10^{-2})\), the embeddings approach the unit Gaussian.</center>

Table 5: Results for single Gaussian embeddings with and without the KL divergence term. We report results for images with \(N\) digits and \(D\) embedding dimensions.

(a) Task performances.

<table><tr><td></td><td colspan="2">N = 2, D = 2</td><td colspan="2">N = 3, D = 3</td></tr><tr><td>β</td><td>0</td><td>\(10^{-4}\)</td><td>0</td><td>\(10^{-4}\)</td></tr><tr><td colspan="5">Verification AP</td></tr><tr><td>clean</td><td>0.986</td><td>0.988</td><td>0.993</td><td>0.992</td></tr><tr><td>corrupt</td><td>0.900</td><td>0.906</td><td>0.932</td><td>0.932</td></tr><tr><td colspan="5">KNN Accuracy</td></tr><tr><td>clean</td><td>0.867</td><td>0.891</td><td>0.858</td><td>0.861</td></tr><tr><td>corrupt</td><td>0.729</td><td>0.781</td><td>0.685</td><td>0.730</td></tr></table>

(b) Uncertainty correlations.

<table><tr><td></td><td colspan="2">N = 2, D = 2</td><td colspan="2">N = 3, D = 3</td></tr><tr><td>β</td><td>0</td><td>\(10^{-4}\)</td><td>0</td><td>\(10^{-4}\)</td></tr><tr><td colspan="5">AP Correlation</td></tr><tr><td>clean</td><td>0.680</td><td>0.766</td><td>0.425</td><td>0.630</td></tr><tr><td>corrupt</td><td>0.864</td><td>0.836</td><td>0.677</td><td>0.764</td></tr><tr><td colspan="5">KNN Correlation</td></tr><tr><td>clean</td><td>0.748</td><td>0.800</td><td>-0.080</td><td>0.685</td></tr><tr><td>corrupt</td><td>0.636</td><td>0.609</td><td>0.183</td><td>0.549</td></tr></table>

## E KL DIVERGENCE REGULARIZATION

The training objective (Equation 8) contains a regularization hyperparameter \(\beta \geq 0\) controlling the weight of the KL divergence term, which, from an information-theoretic perspective, controls the number of bits of information encoded in the latent representation. In the main experiments, we consistently use \(\beta = 10^{- 4}\). In this Appendix, we explore the effect of this regularization by considering other values of \(\beta\).

See Figure 12 for a visualization of the embeddings as a function of \(\beta\). We observe that increasing the KL term weight induces an overall increase in the variances of the embeddings. For a quantitative analysis of the impact of \(\beta\) on main-task performance and uncertainty quality, see Table 5, where we compare the \(\beta = 0\) and \(\beta = 10^{- 4}\) cases. We observe mild improvements in main-task performance when the KL divergence regularization is used; for example, KNN accuracy improves from 0.685 to 0.730 by including the KL term in the \(N = 3\), \(D = 3\) case. The improvement in uncertainty quality, measured in terms of Kendall's tau correlation, is more pronounced: KNN accuracy and uncertainty are nearly uncorrelated when \(\beta = 0\) under the \(N = 3\), \(D = 3\) setting (-0.080 for clean and 0.183 for corrupt inputs), while they are well correlated when \(\beta = 10^{- 4}\) (0.685 for clean and 0.549 for corrupt inputs).
KL divergence regularization helps generalization, as seen from the main-task performance boost, and improves the uncertainty measure by increasing the overall variances of the embeddings, as seen in Figure 12.

## F DERIVATION OF VIB OBJECTIVE FOR STOCHASTIC EMBEDDING

Our goal is to train a discriminative model for match prediction on a pair of variables, \(p(m|f(x_{1}),f(x_{2}))\), as opposed to predicting a class label, \(p(y|x)\). Our VIB loss (Equation 8) follows straightforwardly from the original VIB, with two additional independence assumptions. In particular, we assume that the samples in the pair are independent, so \(p(x_{1},x_{2}) = p(x_{1})p(x_{2})\). We also assume the embeddings do not depend on the other input in the pair, \(p(z_{1},z_{2}|x_{1},x_{2}) = p(z_{1}|x_{1})p(z_{2}|x_{2})\). With these two assumptions, the VIB objective is given by the following:

\[I((z_{1},z_{2}),m) - \beta I((z_{1},z_{2}),(x_{1},x_{2})). \quad (10)\]

We variationally bound the first term using the approximation \(q(m|z_{1},z_{2})\) of \(p(m|z_{1},z_{2})\) as follows:

\[\begin{array}{r l} & {I((z_{1},z_{2}),m) = \int p(m,z_{1},z_{2})\log \frac{p(m,z_{1},z_{2})}{p(m)p(z_{1},z_{2})}\mathrm{d}m\,\mathrm{d}z_{1}\,\mathrm{d}z_{2}}\\ & {\qquad = \int p(m,z_{1},z_{2})\log \frac{p(m|z_{1},z_{2})}{p(m)}\mathrm{d}m\,\mathrm{d}z_{1}\,\mathrm{d}z_{2}}\\ & {\qquad = \int p(m,z_{1},z_{2})\log \frac{q(m|z_{1},z_{2})}{p(m)}\mathrm{d}m\,\mathrm{d}z_{1}\,\mathrm{d}z_{2} + \mathrm{KL}(p(m|z_{1},z_{2})||q(m|z_{1},z_{2}))}\\ & {\qquad \geq \int p(m,z_{1},z_{2})\log \frac{q(m|z_{1},z_{2})}{p(m)}\mathrm{d}m\,\mathrm{d}z_{1}\,\mathrm{d}z_{2}}\\ & {\qquad = \int p(m,z_{1},z_{2})\log q(m|z_{1},z_{2})\mathrm{d}m\,\mathrm{d}z_{1}\,\mathrm{d}z_{2} + H(m)}\\ & {\qquad \geq \int p(m,z_{1},z_{2})\log q(m|z_{1},z_{2})\mathrm{d}m\,\mathrm{d}z_{1}\,\mathrm{d}z_{2}}\\ & {\qquad = \int p(m|x_{1},x_{2})p(z_{1}|x_{1})p(z_{2}|x_{2})p(x_{1})p(x_{2})\log q(m|z_{1},z_{2})\mathrm{d}m\,\mathrm{d}z_{1}\,\mathrm{d}z_{2}\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}.} \end{array} \quad (14)\]

The inequalities follow from the non-negativity of the KL-divergence \(\mathrm{KL}(\cdot)\) and the entropy \(H(\cdot)\). The final equality follows from our assumptions above. The second term is variationally bounded using the approximation \(r(z_{i})\) of the marginal \(p(z_{i})\) as follows:

\[\begin{array}{r l} & {I((z_{1},z_{2}),(x_{1},x_{2})) = \int p(z_{1},z_{2},x_{1},x_{2})\log \frac{p(z_{1},z_{2}|x_{1},x_{2})}{p(z_{1},z_{2})}\mathrm{d}z_{1}\,\mathrm{d}z_{2}\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}}\\ & {\qquad = \int p(z_{1},z_{2},x_{1},x_{2})\log \frac{p(z_{1},z_{2}|x_{1},x_{2})}{r(z_{1})r(z_{2})}\mathrm{d}z_{1}\,\mathrm{d}z_{2}\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}}\\ & {\qquad \quad -\mathrm{KL}(p(z_{1})||r(z_{1})) - \mathrm{KL}(p(z_{2})||r(z_{2}))}\\ & {\qquad \leq \int p(z_{1},z_{2},x_{1},x_{2})\log \frac{p(z_{1},z_{2}|x_{1},x_{2})}{r(z_{1})r(z_{2})}\mathrm{d}z_{1}\,\mathrm{d}z_{2}\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}}\\ & {\qquad = \int p(z_{1},z_{2}|x_{1},x_{2})p(x_{1},x_{2})\log \frac{p(z_{1},z_{2}|x_{1},x_{2})}{r(z_{1})r(z_{2})}\mathrm{d}z_{1}\,\mathrm{d}z_{2}\,\mathrm{d}x_{1}\,\mathrm{d}x_{2}}\\ & {\qquad = \int p(z_{1}|x_{1})p(x_{1})\log \frac{p(z_{1}|x_{1})}{r(z_{1})}\mathrm{d}z_{1}\,\mathrm{d}x_{1} + \int p(z_{2}|x_{2})p(x_{2})\log \frac{p(z_{2}|x_{2})}{r(z_{2})}\mathrm{d}z_{2}\,\mathrm{d}x_{2}.} \end{array} \quad (21)\]

The inequality, again, follows from the non-negativity of the KL-divergence, and the last equality follows from our additional independence assumptions.
Combining the two bounds, the VIB objective (Equation 10) for a fixed input pair \((x_{1},x_{2})\) is bounded from below by \[\begin{array}{r l} & {\int p(z_{1}|x_{1})p(z_{2}|x_{2})\log q(m|z_{1},z_{2})\mathrm{d}z_{1}\mathrm{d}z_{2}}\\ & {\qquad -\beta \left(\mathrm{KL}(p(z_{1}|x_{1})||r(z_{1})) + \mathrm{KL}(p(z_{2}|x_{2})||r(z_{2}))\right).} \end{array} \quad (24)\] The negative of the above expression is identical to the loss \(\mathcal{L}_{\mathrm{VIBEmb}}\) in Equation 8. <--- Page Split --->
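As a quick numerical companion to this derivation (not part of the paper), the sketch below checks the closed-form KL divergence between a diagonal Gaussian and the unit-Gaussian prior \(r(z)\), the quantity appearing in the \(\beta\) term above, against a Monte-Carlo estimate; the specific \(\mu\) and variance values are arbitrary illustrations.

```python
import numpy as np

# Sanity check: the closed-form KL( N(mu, diag(var)) || N(0, I) ) used in
# the beta term above should agree with the Monte-Carlo estimate
# E_{z ~ p}[ log p(z) - log r(z) ].
rng = np.random.default_rng(0)
mu, var = np.array([0.5, -1.0]), np.array([0.3, 2.0])  # arbitrary example

closed_form = 0.5 * np.sum(var + mu**2 - 1.0 - np.log(var))

z = mu + np.sqrt(var) * rng.standard_normal((200_000, 2))
log_p = -0.5 * np.sum((z - mu) ** 2 / var + np.log(2 * np.pi * var), axis=1)
log_r = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=1)
mc_estimate = np.mean(log_p - log_r)

print(closed_form, mc_estimate)  # the two numbers should be close
```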
## ABSTRACT Instance embeddings are an efficient and versatile image representation that facilitates applications like recognition, verification, retrieval, and clustering. Many metric learning methods represent the input as a single point in the embedding space, and the distance between points is often used as a proxy for match confidence. However, this can fail to represent uncertainty, which can arise when the input is ambiguous, e.g., due to occlusion or blurriness. This work addresses this issue and explicitly models the uncertainty by "hedging" the location of each input in the embedding space. We introduce the hedged instance embedding (HIB), in which embeddings are modeled as random variables and the model is trained under the variational information bottleneck principle (Alemi et al., 2016; Achille & Soatto, 2018). Empirical results on our new N-digit MNIST dataset show that our method leads to the desired behavior of "hedging its bets" across the embedding space upon encountering ambiguous inputs. This results in improved performance for image matching and classification tasks, more structure in the learned embedding space, and an ability to compute a per-exemplar uncertainty measure which is correlated with downstream performance. ## 1 INTRODUCTION An instance embedding is a mapping \(f\) from an input \(x\), such as an image, to a vector representation, \(z \in \mathbb{R}^{D}\), such that "similar" inputs are mapped to nearby points in space. Embeddings are a versatile representation that supports various downstream tasks, including image retrieval (Babenko et al., 2014) and face recognition (Schroff et al., 2015). Instance embeddings are often treated deterministically, i.e., \(z = f(x)\) is a point in \(\mathbb{R}^{D}\). We refer to this approach as a point embedding. One drawback of this representation is the difficulty of modeling aleatoric uncertainty (Kendall & Gal, 2017), i.e. uncertainty induced by the input. In the case of images this can be caused by occlusion, blurriness, low contrast, and other factors. To illustrate this, consider the example in Figure 1a. On the left, we show an image composed of two adjacent MNIST digits, the first of which is highly occluded. The right digit is clearly a 7, but the left digit could be a 1 or a 4. One way to express this uncertainty about which choice to make is to map the input to a region of space, representing the inherent uncertainty of "where it belongs". We propose a new method, called hedged instance embedding (HIB), which achieves this goal. Each embedding is represented as a random variable, \(Z \sim p(z|x) \in \mathbb{R}^{D}\). The embedding effectively spreads probability mass across locations in space, depending on the level of uncertainty. For example, in Figure 1b, the corrupted image is mapped to a two-component mixture of Gaussians covering both the "17" and "47" clusters. We propose a training scheme for HIB with a learnable-margin contrastive loss and the variational information bottleneck (VIB) principle (Alemi et al., 2016; Achille & Soatto, 2018). <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Unlike point embeddings, stochastic embeddings may hedge their bets across the space. When both "17" and "47" are plausible, our 2-component Gaussian mixture embedding has the power to spread probability mass on clusters with clean "17" and "47" images. By contrast, the point embedding will choose to be close to one or the other of these points (or somewhere between).
</center> To evaluate our method, we propose a novel dataset, N-digit MNIST, which we will open source. Using this dataset, we show that HIB exhibits several desirable properties compared to point embeddings: (1) downstream task performance (e.g. recognition and verification) improves for uncertain inputs; (2) the embedding space exhibits enhanced structural regularity; and (3) it yields a per-exemplar uncertainty measure that predicts when the output of the system is reliable. ## 2 METHODS In this section, we describe our method in detail. ### 2.1 POINT EMBEDDINGS Standard point embedding methods try to compute embeddings such that \(z_{1} = f(x_{1})\) and \(z_{2} = f(x_{2})\) are "close" in the embedding space if \(x_{1}\) and \(x_{2}\) are "similar" in the ambient space. To obtain such a mapping, we must decide on the definition of "closeness" as well as a training objective, as we explain below. Contrastive loss Contrastive loss (Hadsell et al., 2006) is designed to encourage a small Euclidean distance between a similar pair, and a distance of at least the margin \(M > 0\) for a dissimilar pair. The loss is \[\mathcal{L}_{\mathrm{con}} = \left\{ \begin{array}{ll}||z_{1} - z_{2}||_{2}^{2} & \text{if match}\\ \max (M - ||z_{1} - z_{2}||_{2},0)^{2} & \text{if non-match} \end{array} \right. \quad (1)\] where \(z_{i} = f(x_{i})\). The hyperparameter \(M\) is usually set heuristically or based on validation-set performance. Soft contrastive loss A probabilistic alternative to contrastive loss, which we will use in our experiments, is defined here. It represents the probability that a pair of points is matching: \[p(m|z_{1},z_{2}):= \sigma (-a||z_{1} - z_{2}||_{2} + b) \quad (2)\] with scalar parameters \(a > 0\) and \(b\in \mathbb{R}\), and the sigmoid function \(\sigma (t) = \frac{1}{1 + e^{- t}}\). This formulation calibrates Euclidean distances into a probabilistic expression for similarity. Instead of setting a hard threshold like \(M\), \(a\) and \(b\) together comprise a soft threshold on the Euclidean distance. We will later let \(a\) and \(b\) be trained from data. Having defined the match probability \(p(m|z_{1},z_{2})\), we formulate the contrastive loss as a binary classification loss based on the softmax cross-entropy (negative log-likelihood loss). More precisely, for an embedding pair \((z_{1},z_{2})\) the loss is defined as \[\mathcal{L}_{\mathrm{softcon}} = -\log p(m = \hat{m} |z_{1},z_{2}) = \left\{ \begin{array}{ll} - \log p(m|z_{1},z_{2}) & \text{if } \hat{m} = 1,\\ - \log (1 - p(m|z_{1},z_{2})) & \text{if } \hat{m} = 0, \end{array} \right. \quad (3)\] where \(\hat{m}\) is an indicator variable with value 1 for a ground-truth match and 0 otherwise. <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 2: Computational graph for computing \(p(m|x_{1},x_{2})\) using HIB with Gaussian embeddings. </center> Although some prior work has explored this soft contrastive loss (e.g. Bertinetto et al. (2016); Orekondy et al. (2018)), it does not seem to be widely used. However, in our experiments, it performs strictly better than the hard-margin version, as explained in Appendix B. ### 2.2 STOCHASTIC EMBEDDINGS In HIB, we treat embeddings as stochastic mappings \(x \mapsto Z\), and write the distribution as \(Z \sim p(z|x)\). In the sections below, we show how to learn and use this mapping.
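To make the soft contrastive loss of Equations 2-3 concrete before extending it to stochastic embeddings, here is a minimal Python sketch; the values a = 2.0 and b = 1.0 are illustrative placeholders, since the paper trains a and b from data.

```python
import numpy as np

def match_probability(z1, z2, a=2.0, b=1.0):
    # Equation 2: sigmoid(-a * ||z1 - z2|| + b) calibrates the Euclidean
    # distance into a probability that the pair is a match.
    d = np.linalg.norm(z1 - z2)
    return 1.0 / (1.0 + np.exp(a * d - b))

def soft_contrastive_loss(z1, z2, is_match, a=2.0, b=1.0):
    # Equation 3: binary cross-entropy on the match probability.
    p = match_probability(z1, z2, a, b)
    return -np.log(p) if is_match else -np.log(1.0 - p)

# Example: a close pair labeled as a match incurs a small loss.
print(soft_contrastive_loss(np.zeros(2), np.array([0.1, 0.0]), is_match=True))
```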
Match probability for probabilistic embeddings The probability of two inputs matching, given in Equation 2, can easily be extended to stochastic embeddings, as follows: \[p(m|x_{1},x_{2}) = \int p(m|z_{1},z_{2})p(z_{1}|x_{1})p(z_{2}|x_{2})\mathrm{d}z_{1}\mathrm{d}z_{2}. \quad (4)\] We approximate this integral via Monte Carlo sampling from \(z_{1}^{(k_{1})} \sim p(z_{1}|x_{1})\) and \(z_{2}^{(k_{2})} \sim p(z_{2}|x_{2})\): \[p(m|x_{1},x_{2}) \approx \frac{1}{K^{2}} \sum_{k_{1} = 1}^{K} \sum_{k_{2} = 1}^{K} p\left(m|z_{1}^{(k_{1})}, z_{2}^{(k_{2})}\right). \quad (5)\] In practice, we get good results using \(K = 8\) samples per input image. Now we discuss the computation of \(p(z|x)\). Single Gaussian embedding The simplest setting is to let \(p(z|x)\) be a \(D\)-dimensional Gaussian with mean \(\mu (x)\) and diagonal covariance \(\Sigma (x)\), where \(\mu\) and \(\Sigma\) are computed via a deep neural network with a shared "body" and \(2D\) total outputs. Given a Gaussian representation, we can draw \(K\) samples \(z^{(1)}, \dots , z^{(K)} \stackrel{\mathrm{iid}}{\sim} p(z|x)\), which we can use to approximate the match probability. Furthermore, we can use the reparametrization trick (Kingma & Welling, 2013) to rewrite the samples as \(z^{(k)} = \mathrm{diag}\left(\sqrt{\Sigma(x)}\right) \cdot \epsilon^{(k)} + \mu (x)\), where \(\epsilon^{(1)}, \dots , \epsilon^{(K)} \stackrel{\mathrm{iid}}{\sim} N(0, I)\). This enables easy backpropagation during training. Mixture of Gaussians (MoG) embedding We can obtain a more flexible representation of uncertainty by using a mixture of \(C\) Gaussians with uniform weights to represent our embeddings, i.e. \(p(z|x) = \frac{1}{C}\sum_{c = 1}^{C} \mathcal{N}(z; \mu (x, c), \Sigma (x, c))\). To enhance computational efficiency, the \(2C\) mappings \(\left\{(\mu (x, c), \Sigma (x, c))\right\}_{c = 1}^{C}\) share a common CNN stump and are branched with one linear layer per branch. When approximating Equation 5, we use stratified sampling, i.e. we draw the same number of samples from each Gaussian component. Computational considerations The overall pipeline for computing the match probability is shown in Figure 2. If we use a single Gaussian embedding, the cost (time complexity) of computing the stochastic representation is essentially the same as for point embedding methods, due to the use of a shared network for computing \(\mu (x)\) and \(\Sigma (x)\). Also, the space requirement is only \(2 \times\) larger. (This is an important consideration for many embedding-based methods.) <--- Page Split ---> ### 2.3 VIB TRAINING OBJECTIVE For training our stochastic embedding, we combine two ingredients: the soft contrastive loss in Equation 3 and the VIB principle (Alemi et al., 2016; Achille & Soatto, 2018). We start with a summary of the original VIB formulation, and then describe its extension to our setting. Variational Information Bottleneck (VIB) A discriminative model \(p(y|x)\) is trained under the information bottleneck principle (Tishby et al., 1999) by maximizing the following objective: \[I(z,y) - \beta I(z,x) \quad (6)\] where \(I\) is the mutual information, and \(\beta >0\) is a hyperparameter which controls the tradeoff between the sufficiency of \(z\) for predicting \(y\), and the minimality (size) of the representation. Intuitively, this objective lets the latent encoding \(z\) capture the salient parts of \(x\) (salient for predicting \(y\)), while disallowing it to "memorise" other parts of the input which are irrelevant.
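As an aside before completing the VIB objective, the following sketch makes the Monte-Carlo match probability of Equations 4-5 concrete for diagonal-Gaussian embeddings, using the reparameterization described above; a and b are again placeholder values standing in for the learned parameters.

```python
import numpy as np

def sample_embedding(mu, var, K=8, rng=None):
    # Reparameterization: z = mu + sqrt(var) * eps with eps ~ N(0, I),
    # for a diagonal-covariance Gaussian embedding p(z|x).
    rng = np.random.default_rng() if rng is None else rng
    return mu + np.sqrt(var) * rng.standard_normal((K, mu.size))

def mc_match_probability(mu1, var1, mu2, var2, K=8, a=2.0, b=1.0, rng=None):
    # Equation 5: average p(m | z1, z2) over all K x K sample pairs.
    z1 = sample_embedding(mu1, var1, K, rng)
    z2 = sample_embedding(mu2, var2, K, rng)
    d = np.linalg.norm(z1[:, None, :] - z2[None, :, :], axis=-1)
    return np.mean(1.0 / (1.0 + np.exp(a * d - b)))
```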
Computing the mutual information is generally computationally intractable, but it is possible to use a tractable variational approximation, as shown in Alemi et al. (2016); Achille & Soatto (2018). In particular, under the Markov assumption that \(p(z|x,y) = p(z|x)\), we arrive at a lower bound on Equation 6 for every training data point \((x,y)\) as follows: \[- \mathcal{L}_{\mathrm{VIB}}:= \mathbb{E}_{z\sim p(z|x)}[\log q(y|z)] - \beta \cdot \mathrm{KL}(p(z|x)||r(z)) \quad (7)\] where \(p(z|x)\) is the latent distribution for \(x\), \(q(y|z)\) is the decoder (classifier), and \(r(z)\) is an approximate marginal term that is typically set to the unit Gaussian \(\mathcal{N}(0,I)\). In Alemi et al. (2016), this approach was shown (experimentally) to be more robust to adversarial image perturbations than deterministic classifiers. It has also been shown to provide a useful way to detect out-of-domain inputs (Alemi et al., 2018). Hence we use it as the foundation for our approach. VIB for learning stochastic embeddings We now apply the above method to learn our stochastic embedding. In particular, we train a discriminative model based on matching or mismatching pairs of inputs \((x_{1},x_{2})\), by minimizing the following loss: \[\begin{array}{r l} & {\mathcal{L}_{\mathrm{VIBEmb}}:= -\mathbb{E}_{z_{1}\sim p(z_{1}|x_{1}),z_{2}\sim p(z_{2}|x_{2})}[\log p(m = \hat{m} |z_{1},z_{2})]}\\ & {\qquad +\beta \cdot [\mathrm{KL}(p(z_{1}|x_{1})||r(z_{1})) + \mathrm{KL}(p(z_{2}|x_{2})||r(z_{2}))]} \end{array} \quad (8)\] where the first term is the negative log-likelihood loss with respect to the ground-truth match \(\hat{m}\) (this is identical to Equation 3, the soft contrastive loss), and the second term is the KL regularization term, with \(r(z) = \mathcal{N}(z;0,I)\). The full derivation is in Appendix F. We optimize this loss with respect to the embedding function \((\mu (x),\Sigma (x))\), as well as with respect to the \(a\) and \(b\) terms in the match probability in Equation 2. Note that most pairs are not matching, so the \(m = 1\) class is rare. To handle this, we encourage a balance of \(m = 0\) and \(m = 1\) pair samples within each SGD minibatch by using two streams of input sample images. One stream samples images from the training set at random and the other selects images from specific class labels; these are then randomly shuffled to produce the final batch. As a result, each minibatch has plenty of positive pairs even when there are a large number of classes. ### 2.4 UNCERTAINTY MEASURE One useful property of our method is that the embedding is a distribution and encodes the level of uncertainty for given inputs. As a scalar uncertainty measure, we propose the self-mismatch probability: \[\eta (x):= 1 - p(m|x,x)\geq 0 \quad (9)\] Intuitively, the embedding for an ambiguous input will span diverse semantic classes (as in Figure 1b). \(\eta (x)\) quantifies this by measuring the chance that two samples of the embedding, \(z_{1},z_{2}\stackrel{\mathrm{iid}}{\sim}p(z|x)\), belong to different semantic classes (i.e., that the event \(m = 0\) happens). We compute \(\eta (x)\) using the Monte Carlo estimation in Equation 5. Prior works (Vilnis & McCallum, 2014; Bojchevski & Gunnemann, 2018) have computed uncertainty for Gaussian embeddings based on volumetric measures like the trace or determinant of the covariance matrix. Unlike those measures, \(\eta (x)\) can be computed for any distribution from which one can sample, including multi-modal distributions like mixtures of Gaussians.
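For reference, here is a minimal sketch of the self-mismatch probability of Equation 9 for a single diagonal-Gaussian embedding, following the same Monte-Carlo scheme as Equation 5 (a and b are placeholders for the learned parameters):

```python
import numpy as np

def self_mismatch(mu, var, K=8, a=2.0, b=1.0, rng=None):
    # Equation 9: eta(x) = 1 - p(m | x, x), estimated from two
    # independent sets of K samples of the same embedding p(z|x).
    rng = np.random.default_rng() if rng is None else rng
    z1 = mu + np.sqrt(var) * rng.standard_normal((K, mu.size))
    z2 = mu + np.sqrt(var) * rng.standard_normal((K, mu.size))
    d = np.linalg.norm(z1[:, None, :] - z2[None, :, :], axis=-1)
    return 1.0 - np.mean(1.0 / (1.0 + np.exp(a * d - b)))

# A wide (uncertain) embedding has a larger self-mismatch than a tight one.
mu = np.zeros(3)
print(self_mismatch(mu, np.full(3, 4.0)), self_mismatch(mu, np.full(3, 0.01)))
```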
<--- Page Split ---> ## 3 RELATED WORK In this section, we mention the most closely related work from the fields of deep learning and probabilistic modeling. Probabilistic DNNs Several works have considered the problem of estimating the uncertainty of a regression or classification model, \(p(y|x)\), when given ambiguous inputs. One of the simplest and most widely used techniques is known as Monte Carlo dropout (Gal & Ghahramani, 2016). In this approach, different random components of the hidden activations are "masked out" and a distribution over the outputs \(f(x)\) is computed. By contrast, we compute a parametric representation of the uncertainty and use Monte Carlo to approximate the probability of two points matching. Monte Carlo dropout is not directly applicable in our setting, as its randomness is attached to the model parameters and is independent of the input; it is designed to measure model uncertainty (epistemic uncertainty). Our embedding distribution, on the other hand, is conditioned on the input, so our model is designed to measure input uncertainty (aleatoric uncertainty). VAEs and VIB A variational autoencoder (VAE, Kingma & Welling (2013)) is a latent variable model of the form \(p(x,z) = p(z)p(x|z)\), in which the generative decoder \(p(x|z)\) and an encoder network, \(q(z|x)\), are trained jointly so as to maximize the evidence lower bound. By contrast, we compute a discriminative model on pairs of inputs to maximize a lower bound on the match probability. The variational information bottleneck (VIB) method (Alemi et al., 2016; Achille & Soatto, 2018) uses a variational approximation similar to the VAE to approximate the information bottleneck objective (Tishby et al., 1999). We build on this as explained in Section 2.3. Point embeddings Instance embeddings are often trained with metric learning objectives, such as contrastive (Hadsell et al., 2006) and triplet (Schroff et al., 2015) losses. Although these methods work well, they require careful sampling schemes (Wu et al., 2017; Movshovitz-Attias et al., 2017). Many alternatives have attempted to decouple the dependency on sampling, including the softmax cross-entropy loss coupled with the centre loss (Wan et al., 2018), or a clustering-based loss (Song et al., 2017), and have improved the embedding quality. In HIB, we use a soft contrastive loss, as explained in Section 2.1. Probabilistic embeddings The idea of probabilistic embeddings is not new. For example, Vilnis & McCallum (2014) proposed Gaussian embeddings to represent levels of specificity of word embeddings (e.g. "Bach" is more specific than "composer"). The closeness of two Gaussians is based on their KL divergence, and uncertainty is computed from the spread of the Gaussian (the determinant of the covariance matrix). See also Karaletsos et al. (2015); Bojchevski & Gunnemann (2018) for related work. Neelakantan et al. (2014) proposed to represent each word using multiple prototypes, using a "best of \(K\)" loss during training. HIB, on the other hand, measures closeness based on a quantity related to the expected Euclidean distance, and measures uncertainty using the self-mismatch probability. ## 4 EXPERIMENTS In this section, we report our experimental results, where we compare our stochastic embeddings to point embeddings. We consider two main tasks: the verification task (i.e., determining whether two input images correspond to the same class or not), and the identification task (i.e., predicting the label for an input image).
For the latter, we use a K-nearest neighbors approach with \(K = 5\). We compare the performance of three methods: a baseline deterministic embedding method, our stochastic embedding method with a Gaussian embedding, and our stochastic embedding method with a mixture of Gaussians embedding. We also conduct a qualitative comparison of the embeddings of each method. In the Appendix, Section C, we describe additional experiments, including one in which HIB is applied to a dataset of cat and dog images of over 100K distinct animals, with the embedding directed towards identifying specific animals. <--- Page Split ---> ### 4.1 EXPERIMENTAL DETAILS We conduct all our experiments on a new dataset we created, called N-digit MNIST, which consists of images composed of \(N\) adjacent MNIST digits, which may be randomly occluded (partially or fully). See Appendix A for details. During training, we occlude \(20\%\) of the digits independently; a single image can have multiple corrupted digits. During testing, we consider both clean (unoccluded) and corrupted (occluded) images, and report results separately. We use images with \(N = 2\) and \(N = 3\) digits. We will open source the data to ensure reproducibility. Since our dataset is fairly simple, we use a shallow CNN model to compute the embedding function. Specifically, it consists of 2 convolutional layers with \(5\times 5\) filters, each followed by max pooling, and then a fully connected layer mapping to \(D\) dimensions. We focus on the cases where \(D = 2\) or \(D = 3\), and present additional results for \(D = 4\) in the Appendix. When we use more dimensions, we find that all methods (both stochastic and deterministic) perform almost perfectly (above \(90\%\)), so there are no interesting differences to report. We leave exploration of more challenging datasets, and higher-dimensional embeddings, to future work. Our networks are built with TensorFlow (Abadi et al., 2015). For each task, the input is an image of size \(28\times (N\times 28)\), where \(N\) is the number of concatenated digit images. We use a batch size of 128 and 500k training iterations. Each model is trained from scratch with random weight initialization. The KL-divergence hyperparameter \(\beta\) is set to \(10^{- 4}\) throughout the experiments. We report the effects of changing \(\beta\) in Appendix E. ### 4.2 QUALITATIVE EVALUATION OF THE REPRESENTATION Figure 3 shows HIB 2D Gaussian embeddings for the clean and corrupt subsets of the test set. We can easily see that the corrupt images generally have larger (i.e., less certain) embeddings. In the Appendix, Figure 7 shows a similar result when using a 2D MoG representation, and Figure 8 shows a similar result for 3D Gaussian embeddings. Figure 4 illustrates the embeddings for several test set images, overlaid with an indication of each class' centroid. Hedged embeddings capture the uncertainty that may exist across complex subsets of the class label space by learning a layout of the embedding space in which classes that may be confused are able to receive density from the underlying hedged embedding distribution. We observe enhanced spatial regularity when using HIB. Classes with a common least or most significant digit roughly align parallel to the \(x\) or \(y\) axis. This is a consequence of the diagonal structure of the embedding covariance matrix. By controlling the parametrization of the covariance matrix, one may impose varying degrees and types of structure on the embedding space (e.g. diagonally aligned embeddings).
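As a concrete reference for how the shared CNN features of §4.1 can be mapped to the diagonal \((\mu(x), \Sigma(x))\) parametrization discussed here, a minimal sketch follows; the softplus used to keep the variances positive is an implementation assumption, since the paper only specifies \(2D\) total outputs.

```python
import numpy as np

def gaussian_embedding_head(features, W_mu, b_mu, W_var, b_var):
    # Map shared CNN features to 2D outputs: a D-dim mean and a D-dim
    # diagonal variance. The softplus that keeps variances positive is
    # an assumption; the paper only specifies the 2D output size.
    mu = features @ W_mu + b_mu
    var = np.logaddexp(0.0, features @ W_var + b_var)  # softplus
    return mu, var

# Example with random weights: 64 CNN features -> a D = 2 embedding.
rng = np.random.default_rng(0)
f = rng.standard_normal(64)
mu, var = gaussian_embedding_head(f,
                                  rng.standard_normal((64, 2)), np.zeros(2),
                                  rng.standard_normal((64, 2)), np.zeros(2))
```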
See Appendix D for more analysis of the learned latent space. ### 4.3 QUANTITATIVE EVALUATION OF THE BENEFITS OF STOCHASTIC EMBEDDING We first measure performance on the verification task, where the network is used to compute \(p(m|x_{1},x_{2})\) for 10k pairs of test images, half of which are matches and half of which are not. The average precision (AP) of this task is reported in the top half of Table 1. HIB shows improved performance, especially for corrupted test images. For example, in the \(N = 2\) digit case, when using \(D = 2\) dimensions, point embeddings achieve \(88.0\%\) AP on corrupt test images, while hedged instance embedding improves this to \(90.7\%\) with \(C = 1\) Gaussian, and \(91.2\%\) with \(C = 2\) Gaussians. We next measure performance on the KNN identification task. The bottom half of Table 1 reports the results. Again, the proposed stochastic embeddings generally outperform point embeddings, with the greatest advantage on the corrupted input samples. For example, in the \(N = 2\) digit case, when using \(D = 2\) dimensions, point embeddings achieve \(58.3\%\) accuracy on corrupt test images, while HIB improves this to \(76.0\%\) with \(C = 1\) Gaussian, and \(75.7\%\) with \(C = 2\) Gaussians. ### 4.4 KNOWN UNKNOWNS In this section, we address the task of estimating when an input can be reliably recognized or not, which has important practical applications. To do this, we use the measure of uncertainty \(\eta (x)\) defined in Equation 9. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 3: 2D Gaussian embeddings from 2-digit MNIST. Corrupted test images (right) generally have embeddings that spread density across larger regions of the space than clean images (left). See Appendix Figure 8 for a 3D version. </center> ![](images/6_1.jpg) <center>Figure 4: Point vs. hedged image embeddings. Square markers indicate the class centroids of embeddings for 2-digit MNIST. Left: Centroids learned with point embeddings. Occluded examples map into the embedding space with no special indication of corruption or confidence. Right: Hedged embeddings comprising a single Gaussian (rendered as \(3\sigma\) ellipses). A clean input such as the "03" is embedded into a tight Gaussian, while a corrupted image such as the "52" occupies an elongated ellipse that places bets that the least significant digit is a "2" but the most significant digit could be any of a number of choices. </center> We measure the utility of \(\eta (x)\) for the identification task as follows. For the test set, we sort all test input examples according to \(\eta (x)\), and bin the examples into 20 bins ranging from the lowest to the highest range of uncertainty. We then measure the KNN classification accuracy for the examples falling in each bin. To measure the utility of \(\eta (x)\) for the verification task, we take random pairs of samples, \((x_{1}, x_{2})\), and compute the mean of their uncertainties, \(\eta (x_{1}, x_{2}) = \frac{1}{2} (\eta (x_{1}) + \eta (x_{2}))\). We then distribute the test pairs into 20 equal-sized bins according to their uncertainty levels, and compute the probability of a match for each pair. To cope with the severe class imbalance (most pairs don't match), we measure performance for each bin using average precision (AP). Then, again, Kendall's tau is applied to measure the uncertainty-performance correlation.
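A minimal sketch of this binning procedure for the identification task is given below; eta and correct are assumed to be per-example arrays of uncertainties and KNN correctness flags, and the sign convention follows the sign-inversion described below.

```python
import numpy as np
from scipy.stats import kendalltau

def uncertainty_vs_accuracy(eta, correct, n_bins=20):
    # Sort test examples by eta(x), split into 20 equal-sized bins from
    # lowest to highest uncertainty, and compute per-bin KNN accuracy.
    order = np.argsort(eta)
    bins = np.array_split(order, n_bins)
    acc = np.array([np.mean(correct[b]) for b in bins])
    # Kendall's tau between bin index and accuracy, with the sign
    # inverted so that a positive value means "accuracy falls as
    # uncertainty rises", matching the tables in this paper.
    tau, _ = kendalltau(np.arange(n_bins), acc)
    return acc, -tau
```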
<--- Page Split ---> Table 1: Accuracy of the pairwise verification and KNN identification tasks for point embeddings, and our hedged embeddings with a single Gaussian component (MoG-1) and two components (MoG-2). We report results for images with \(N\) digits and using \(D\) embedding dimensions. <table><tr><td></td><td colspan="3">N = 2, D = 2</td><td colspan="3">N = 2, D = 3</td><td colspan="3">N = 3, D = 2</td><td colspan="3">N = 3, D = 3</td></tr><tr><td></td><td>point</td><td>MoG-1</td><td>MoG-2</td><td>point</td><td>MoG-1</td><td>MoG-2</td><td>point</td><td>MoG-1</td><td>MoG-2</td><td>point</td><td>MoG-1</td><td>MoG-2</td></tr><tr><td>Verification AP</td><td colspan="12"></td></tr><tr><td>clean</td><td>0.987</td><td>0.989</td><td>0.990</td><td>0.996</td><td>0.996</td><td>0.978</td><td>0.981</td><td>0.976</td><td></td><td></td><td></td><td></td></tr><tr><td>corrupt</td><td>0.880</td><td>0.907</td><td>0.912</td><td>0.913</td><td>0.926</td><td>0.932</td><td>0.886</td><td>0.899</td><td></td><td></td><td></td><td></td></tr><tr><td>KNN Accuracy</td><td colspan="12"></td></tr><tr><td>clean</td><td>0.871</td><td>0.879</td><td>0.888</td><td>0.942</td><td>0.953</td><td>0.939</td><td>0.554</td><td>0.591</td><td></td><td></td><td></td><td></td></tr><tr><td>corrupt</td><td>0.583</td><td>0.760</td><td>0.757</td><td>0.874</td><td>0.909</td><td>0.885</td><td>0.274</td><td>0.350</td><td></td><td></td><td></td><td></td></tr></table> Table 2: Correlations between each input image's measure of uncertainty, \(\eta (x)\), and AP and KNN performances. High correlation coefficients suggest a close relationship. <table><tr><td></td><td colspan="2">N = 2, D = 2</td><td colspan="2">N = 2, D = 3</td><td colspan="2">N = 3, D = 2</td><td colspan="2">N = 3, D = 3</td></tr><tr><td></td><td>MoG-1</td><td>MoG-2</td><td>MoG-1</td><td>MoG-2</td><td>MoG-1</td><td>MoG-2</td><td>MoG-1</td><td>MoG-2</td></tr><tr><td>AP Correlation</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>clean</td><td>0.74</td><td>0.43</td><td>0.68</td><td>0.48</td><td>0.63</td><td>0.28</td><td>0.51</td><td>0.39</td></tr><tr><td>corrupt</td><td>0.81</td><td>0.79</td><td>0.86</td><td>0.87</td><td>0.82</td><td>0.76</td><td>0.85</td><td>0.79</td></tr><tr><td>KNN Correlation</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>clean</td><td>0.71</td><td>0.57</td><td>0.72</td><td>0.47</td><td>0.76</td><td>0.29</td><td>0.74</td><td>0.54</td></tr><tr><td>corrupt</td><td>0.47</td><td>0.43</td><td>0.55</td><td>0.52</td><td>0.49</td><td>0.50</td><td>0.67</td><td>0.34</td></tr></table> ![](images/7_0.jpg) <center>Figure 5: Correlations between the uncertainty measure \(\eta (x)\) and AP and KNN accuracy on the test set for the \(N = 3\), \(D = 3\) case using single Gaussian embeddings. Uncertainty increases along the horizontal axis. We observe that accuracy generally decreases as uncertainty increases. </center> Figure 5 plots the AP and KNN accuracy vs the uncertainty bin index, for both clean and corrupted inputs. We see that when the performance drops off, the model's uncertainty measure increases, as desired. To quantify this, we compute the correlation between the performance metric and the uncertainty metric.
Instead of the standard linear correlation (the Pearson correlation coefficient), we use Kendall's tau correlation (Kendall, 1938), which measures the degree of monotonicity between the performance and the uncertainty level (bin index), inverting the sign so that positive correlation aligns with our goal. The results of different models are shown in Table 2. In general, the measure \(\eta (x)\) correlates with the task performance. As a baseline for point embeddings in KNN, we explored using the distance to the nearest neighbor as a proxy for uncertainty, but found that it performed poorly. The HIB uncertainty metric correlates with task accuracy even within the subset of clean (uncorrupted) input images, indicating that HIB's understanding of uncertainty goes beyond simply detecting which images are corrupted. ## 5 DISCUSSION AND FUTURE WORK Hedged instance embedding is a stochastic embedding that captures the uncertainty of the mapping of an image to a latent embedding space, by spreading density across plausible locations. This <--- Page Split ---> results in improved performance on various tasks, such as verification and identification, especially for ambiguous corrupted input. It also allows for a simple way to estimate the uncertainty of the embedding that is correlated with performance on downstream tasks. There are many possible directions for future work, including experimenting with higher-dimensional embeddings, and harder datasets. As an early look at these tasks, in the Appendix, Section C.3, we apply HIB to cat and dog instance embedding, directed towards identifying specific animals with 20D embeddings. It would also be interesting to consider the "open world" (or "unknown unknowns") scenario, in which the test set may contain examples of novel classes, such as digit combinations that were not in the training set (see e.g., Lakkaraju et al. (2017); Günther et al. (2017)). This is likely to result in uncertainty about where to embed the input that is different from the uncertainty induced by occlusion, since uncertainty due to the open world is epistemic (due to lack of knowledge of a class), whereas uncertainty due to occlusion is aleatoric (intrinsic, due to lack of information in the input), as explained in Kendall & Gal (2017). Preliminary experiments suggest that \(\eta (x)\) correlates well with detecting occluded inputs, but does not work as well for novel classes. We leave more detailed modeling of epistemic uncertainty as future work. ## ACKNOWLEDGMENTS We are grateful to Alex Alemi and Josh Dillon for helpful discussions related to VAEs and the variational information bottleneck, and to Lucy Gao for the pet dataset of cats and dogs. ## Appendix ## A N-DIGIT MNIST DATASET We present N-digit MNIST, a new dataset based upon MNIST (LeCun, 1998) that has an exponentially large number of classes in the number of digits \(N\), for which embedding-style classification methods are well-suited. The dataset is created by horizontally concatenating \(N\) MNIST digit images. While constructing new classes, we respect the training and test splits. For example, a test image from 2-digit MNIST of a "54" does not have its "5" or its "4" shared with any image from the training set (in all positions).
Table 3: Summary of the N-digit MNIST dataset for \(N = 2,3\) <table><tr><td>Number Digits</td><td>Total Classes</td><td>Training Classes</td><td>Unseen Test Classes</td><td>Seen Test Classes</td><td>Training Images</td><td>Test Images</td></tr><tr><td>2</td><td>100</td><td>70</td><td>30</td><td>70</td><td>100 000</td><td>10 000</td></tr><tr><td>3</td><td>1000</td><td>700</td><td>300</td><td>700</td><td>100 000</td><td>10 000</td></tr></table> We employ 2- and 3-digit MNIST (Table 3) in our experiments. This dataset is meant to provide a test bed for easier and more efficient evaluation of embedding algorithms than larger and more realistic datasets. N-digit MNIST is more suitable for embedding evaluation than other synthetic datasets due to its exponentially increasing number of classes as well as its factorizability: each digit position corresponds to a latent attribute, analogous to e.g. a face attribute in face datasets. We inject uncertainty into the underlying tasks by randomly occluding (with black fill-in) regions of images at training and test time. Specifically, the corruption operation is applied independently to each digit of every sample in the dataset. A random-sized square patch is identified at a random location of each \(28 \times 28\) digit image. The patch side length is first sampled, \(L \sim \mathrm{Unif}(0, 28)\), and then the top-left patch corner coordinates are sampled, \((TL_{x}, TL_{y}) \stackrel{\mathrm{iid}}{\sim} \mathrm{Unif}(0, 28 - L)\), so that the occluded square always has size \(L^{2}\). During training, we set independent binary random flags \(B\) for every digit, determining whether to perform the occlusion at all; the occlusion chance is set to \(20\%\). For testing, we prepare twin datasets, clean and corrupt, whose digit images are either never occluded or always occluded, respectively. ## B SOFT CONTRASTIVE LOSS VERSUS CONTRASTIVE LOSS ![](images/10_0.jpg) <center>Figure 6: Soft contrastive versus vanilla contrastive loss for point embeddings. </center> As a building block for HIB, the soft contrastive loss was proposed in §2.1. The soft contrastive loss has a conceptual advantage over the vanilla contrastive loss in that the margin hyperparameter \(M\) does not have to be hand-tuned. Here we verify that the soft contrastive loss outperforms the vanilla version over a range of \(M\) values. Figure 6 shows the verification (average precision) and identification (KNN accuracy) performance when embedding 2-digit MNIST samples. In both evaluations, the soft contrastive loss performance upper-bounds the vanilla contrastive case. This new formulation removes one hyperparameter from the learning process, while not sacrificing performance. <--- Page Split ---> ![](images/11_0.jpg) <center>Figure 7: MoG embeddings of 2-digit numbers. Components corresponding to a common input are joined with a thin line. It is qualitatively apparent that corrupted examples (right) tend to have more dispersed components than clean ones (left), as a result of the "bet hedging" behavior. </center> ![](images/11_1.jpg) <center>Figure 8: 3D embeddings of 3-digit MNIST. Corrupted test images (right) generally have embeddings that spread density across larger regions of the space than those of clean images (left). </center> ## C ADDITIONAL RESULTS In this section, we include some extra results which would not fit in the main paper. <--- Page Split ---> Table 4: Task performance, and correlations between the uncertainty measure and task performance, for 4D embeddings with 3-digit MNIST.
<table><tr><td></td><td></td><td colspan="3">N = 3, D = 4</td></tr><tr><td></td><td></td><td>point</td><td>MoG-1</td><td>MoG-2</td></tr><tr><td rowspan="2">Verification AP</td><td>clean</td><td>0.996</td><td></td><td></td></tr><tr><td>corrupt</td><td>0.942</td><td>0.944</td><td></td></tr><tr><td rowspan="2">KNN Accuracy</td><td>clean</td><td>0.914</td><td></td><td></td></tr><tr><td>corrupt</td><td>0.803</td><td>0.816</td><td></td></tr></table> <table><tr><td></td><td></td><td colspan="2">N = 3, D = 4</td></tr><tr><td></td><td></td><td>MoG-1</td><td>MoG-2</td></tr><tr><td rowspan="2">AP Correlation</td><td>clean</td><td>0.35</td><td></td></tr><tr><td>corrupt</td><td>0.79</td><td>0.68</td></tr><tr><td rowspan="2">KNN Correlation</td><td>clean</td><td>0.31</td><td></td></tr><tr><td>corrupt</td><td>0.42</td><td>0.35</td></tr></table> (a) Task accuracies for pairwise verification and KNN identification. (b) Correlations between the uncertainty measure, \(\eta (x)\), and task performances. ## C.1 VISUAL REPRESENTATIONS OF MNIST MoG EMBEDDINGS Figure 7 shows some 2D embeddings of 2-digit images using a MoG representation, with \(C = 2\) or \(C = 4\) clusters per embedding. Figure 8 shows some 3D embeddings of 3-digit images using a single Gaussian. ## C.2 4D MNIST EMBEDDINGS In Table 4, we report the results of 4D embeddings with 3-digit MNIST. Here, the task performance begins to saturate, with verification accuracy on clean input images above 0.99. However, we again observe that HIB provides a slight performance improvement over point embeddings for corrupt images, and that task performance correlates with uncertainty. ![](images/12_0.jpg) <center>Figure 9: Correlation between the uncertainty measure \(\eta (x)\) and balanced accuracy on the pet test set for 20D single Gaussian embeddings. Uncertainty increases along the horizontal axis. (b) and (c) show match score distributions of pairs of the same and different pets for the lowest and highest uncertainty bins. There is clearly more confusion for the highly uncertain samples. </center> ## C.3 HIB EMBEDDINGS FOR PET IDENTIFICATION We applied HIB to learn instance embeddings for identifying pets, using an internal dataset of nearly 1M cat and dog images with known identity, covering 117,913 different pets. We trained an HIB with 20D embeddings and a single Gaussian component (diagonal covariance). The CNN portion of the model is a MobileNet (Howard et al., 2017) with a width multiplier of 0.25. No artificial corruption is applied to the images, as there is sufficient uncertainty from sources such as occlusion, natural variations in lighting, and the pose of the animals. We evaluate the verification task on a held-out test set of 8,576 pet images, from which all pairs were analyzed. On this set, point embeddings achieve 0.777 balanced accuracy. Binning the uncertainty measures and measuring correlation with the verification task (as with N-digit MNIST), we find the correlation between uncertainty and task accuracy to be 0.995. This experiment shows <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 10: Uncertain embeddings self-organize. Class centroids for 2D point embeddings of 2-digit MNIST are shown, colored here by the ones digit (left) and the tens digit (right). The hedged instance embeddings have a structure where the classes self-organize in an axis-aligned manner because of the diagonal covariance matrix used in the model. </center> ![](images/13_1.jpg) <center>Figure 11: Embedding 2-digit MNIST onto the number line. Top: An ordering of class centroids produced with hedged instance embedding with Gaussian embeddings. Bottom: An ordering produced with point embedding.
Centroids are colored according to the least significant (ones) digit. The hedged instance embedding more often places classes that share attributes (in this case, a ones or tens digit) next to one another. </center> that HIB scales to real-world problems with real-world sources of corruption, and provides evidence that HIB understands which embeddings are uncertain. ## D ORGANIZATION OF THE LATENT EMBEDDING SPACE As hedged instance embedding training progresses, it is advantageous for any subset of classes that may be confused with one another to be situated in the embedding space such that a given input image's embedding can strategically place probability mass. We observe that this impacts the organization of the underlying space. For example, in the 2D embeddings shown in Figure 10, the centers of mass for each class are roughly axis-aligned, so that classes that share a tens digit vary by x-coordinate, and classes that share a least significant (ones) digit vary by y-coordinate. To further explore this idea, we embed 2-digit MNIST into a single dimension, to see how the classes get embedded along the number line. For hedged instance embedding, a single Gaussian embedding was chosen as the representation. We conjectured that because hedged instance embedding reduces the objective loss by placing groups of confusable categories near one another, the resulting embedding space would be organized to encourage classes that share a tens or ones digit to be nearby. Figure 11 shows an example embedding learned by the two methods. We assess the embedding space as follows. First, the centroids for each of the 100 classes are derived from the test set embeddings. After sorting the classes, a count is made of adjacent class pairs that share a ones or tens digit, with the maximum possible being 99. The hedged embeddings outscored the point embeddings on each of the four trials, with scores ranging from 76 to 80, versus scores of 42 to 74. Similarly, consider a run as a series of consecutive class pairs that share a ones or tens digit. The average run contains 4.6 classes for hedged embeddings, and only 3.0 for point embeddings, as illustrated in Figure 11. <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 12: Impact of the KL divergence weight. Each plot shows ellipsoids corresponding to hedged embeddings of corrupted test images from models mapping 3-digit MNIST to 3 dimensions. From left to right, the KL divergence weight increases by a factor of ten with each plot. When the weight \(\beta\) is too small \((10^{-6})\), the embeddings approach points, and when it is too large \((10^{-2})\), the embeddings approach the unit Gaussian. </center> (a) Task performances.
<table><tr><td></td><td colspan="2">N = 2, D = 2</td><td colspan="2">N = 3, D = 3</td></tr><tr><td>β</td><td>0</td><td>10^-4</td><td>0</td><td>10^-4</td></tr><tr><td>Verification AP</td><td></td><td></td><td></td><td></td></tr><tr><td>clean</td><td>0.986</td><td>0.988</td><td>0.993</td><td>0.992</td></tr><tr><td>corrupt</td><td>0.900</td><td>0.906</td><td>0.932</td><td>0.932</td></tr><tr><td>KNN Accuracy</td><td></td><td></td><td></td><td></td></tr><tr><td>clean</td><td>0.867</td><td>0.891</td><td>0.858</td><td>0.861</td></tr><tr><td>corrupt</td><td>0.729</td><td>0.781</td><td>0.685</td><td>0.730</td></tr></table> <table><tr><td></td><td colspan="2">N = 2, D = 2</td><td colspan="2">N = 3, D = 3</td></tr><tr><td>β</td><td>0</td><td>10^-4</td><td>0</td><td>10^-4</td></tr><tr><td>AP Correlation</td><td></td><td></td><td></td><td></td></tr><tr><td>clean</td><td>0.680</td><td>0.766</td><td>0.425</td><td>0.630</td></tr><tr><td>corrupt</td><td>0.864</td><td>0.836</td><td>0.677</td><td>0.764</td></tr><tr><td>KNN Correlation</td><td></td><td></td><td></td><td></td></tr><tr><td>clean</td><td>0.748</td><td>0.800</td><td>-0.080</td><td>0.685</td></tr><tr><td>corrupt</td><td>0.636</td><td>0.609</td><td>0.183</td><td>0.549</td></tr></table> (b) Uncertainty correlations. Table 5: Results for single Gaussian embeddings with and without the KL divergence term. We report results for images with \(N\) digits and using \(D\) embedding dimensions. ## E KL DIVERGENCE REGULARIZATION The training objective (Equation 8) contains a regularization hyperparameter \(\beta \geq 0\) that weights the KL divergence regularization, which, from an information-theoretic perspective, limits the number of bits of information encoded in the latent representation. In the main experiments, we consistently use \(\beta = 10^{- 4}\). In this Appendix, we explore the effect of this regularization by considering other values of \(\beta\). See Figure 12 for a visualization of the embeddings for different values of \(\beta\). We observe that increasing the weight of the KL term induces an overall increase in the variances of the embeddings. For a quantitative analysis of the impact of \(\beta\) on main task performance and uncertainty quality, see Table 5, where we compare the \(\beta = 0\) and \(\beta = 10^{- 4}\) cases. We observe mild improvements in main task performance when the KL divergence regularization is used. For example, KNN accuracy improves from 0.685 to 0.730 by including the KL term in the \(N = 3\), \(D = 3\) case. The improvement in uncertainty quality, measured in terms of Kendall's tau correlation, is more pronounced. For example, KNN accuracy and uncertainty are nearly uncorrelated when \(\beta = 0\) under the \(N = 3\), \(D = 3\) setting (-0.080 for clean and 0.183 for corrupt inputs), while they are well-correlated when \(\beta = 10^{- 4}\) (0.685 for clean and 0.549 for corrupt inputs). The KL divergence regularization thus helps generalization, as seen in the boost in main task performance, and improves the uncertainty measure by increasing the overall variances of the embeddings, as seen in Figure 12. ## F DERIVATION OF VIB OBJECTIVE FOR STOCHASTIC EMBEDDING Our goal is to train a discriminative model for match prediction on a pair of variables, \(p(m|f(x_{1}),f(x_{2}))\), as opposed to predicting a class label, \(p(y|x)\). Our VIB loss (Equation 8) follows straightforwardly from the original VIB, with two additional independence assumptions. In particular, we assume that the samples in the pair are independent, so \(p(x_{1},x_{2}) = p(x_{1})p(x_{2})\).
We also assume the embeddings do not depend on the other input in the pair, \(p(z_{1},z_{2}|x_{1},x_{2}) = p(z_{1}|x_{1})p(z_{2}|x_{2})\). <--- Page Split ---> With these two assumptions, the VIB objective is given by the following: \[I((z_{1},z_{2}),m) - \beta I((z_{1},z_{2}),(x_{1},x_{2})). \quad (10)\] We variationally bound the first term using the approximation \(q(m|z_{1},z_{2})\) of \(p(m|z_{1},z_{2})\) as follows: \[\begin{array}{r l} & {I((z_{1},z_{2}),m) = \int p(m,z_{1},z_{2})\log \frac{p(m,z_{1},z_{2})}{p(m)p(z_{1},z_{2})}\mathrm{d}m\mathrm{d}z_{1}\mathrm{d}z_{2}}\\ & {\qquad = \int p(m,z_{1},z_{2})\log \frac{p(m|z_{1},z_{2})}{p(m)}\mathrm{d}m\mathrm{d}z_{1}\mathrm{d}z_{2}}\\ & {\qquad = \int p(m,z_{1},z_{2})\log \frac{q(m|z_{1},z_{2})}{p(m)}\mathrm{d}m\mathrm{d}z_{1}\mathrm{d}z_{2} + \mathrm{KL}(p(m|z_{1},z_{2})||q(m|z_{1},z_{2}))}\\ & {\qquad \geq \int p(m,z_{1},z_{2})\log \frac{q(m|z_{1},z_{2})}{p(m)}\mathrm{d}m\mathrm{d}z_{1}\mathrm{d}z_{2}}\\ & {\qquad = \int p(m,z_{1},z_{2})\log q(m|z_{1},z_{2})\mathrm{d}m\mathrm{d}z_{1}\mathrm{d}z_{2} + H(m)}\\ & {\qquad \geq \int p(m,z_{1},z_{2})\log q(m|z_{1},z_{2})\mathrm{d}m\mathrm{d}z_{1}\mathrm{d}z_{2}}\\ & {\qquad = \int p(m|x_{1},x_{2})p(z_{1}|x_{1})p(z_{2}|x_{2})p(x_{1})p(x_{2})\log q(m|z_{1},z_{2})\mathrm{d}m\mathrm{d}z_{1}\mathrm{d}z_{2}\mathrm{d}x_{1}\mathrm{d}x_{2}.} \end{array} \quad (14)\] The inequalities follow from the non-negativity of the KL divergence \(\mathrm{KL}(\cdot)\) and of the entropy \(H(\cdot)\). The final equality follows from our assumptions above. The second term is variationally bounded using the approximation \(r(z_{i})\) of the marginal \(p(z_{i})\) as follows: \[\begin{array}{r l} & {I((z_{1},z_{2}),(x_{1},x_{2})) = \int p(z_{1},z_{2},x_{1},x_{2})\log \frac{p(z_{1},z_{2}|x_{1},x_{2})}{p(z_{1},z_{2})}\mathrm{d}z_{1}\mathrm{d}z_{2}\mathrm{d}x_{1}\mathrm{d}x_{2}}\\ & {\qquad = \int p(z_{1},z_{2},x_{1},x_{2})\log \frac{p(z_{1},z_{2}|x_{1},x_{2})}{r(z_{1})r(z_{2})}\mathrm{d}z_{1}\mathrm{d}z_{2}\mathrm{d}x_{1}\mathrm{d}x_{2}}\\ & {\qquad -\mathrm{KL}(p(z_{1})||r(z_{1})) - \mathrm{KL}(p(z_{2})||r(z_{2}))}\\ & {\qquad \leq \int p(z_{1},z_{2},x_{1},x_{2})\log \frac{p(z_{1},z_{2}|x_{1},x_{2})}{r(z_{1})r(z_{2})}\mathrm{d}z_{1}\mathrm{d}z_{2}\mathrm{d}x_{1}\mathrm{d}x_{2}}\\ & {\qquad = \int p(z_{1},z_{2}|x_{1},x_{2})p(x_{1},x_{2})\log \frac{p(z_{1},z_{2}|x_{1},x_{2})}{r(z_{1})r(z_{2})}\mathrm{d}z_{1}\mathrm{d}z_{2}\mathrm{d}x_{1}\mathrm{d}x_{2}}\\ & {\qquad = \int p(z_{1}|x_{1})p(x_{1})\log \frac{p(z_{1}|x_{1})}{r(z_{1})}\mathrm{d}z_{1}\mathrm{d}x_{1}}\\ & {\qquad +\int p(z_{2}|x_{2})p(x_{2})\log \frac{p(z_{2}|x_{2})}{r(z_{2})}\mathrm{d}z_{2}\mathrm{d}x_{2}.} \end{array} \quad (21)\] The inequality, again, follows from the non-negativity of the KL divergence, and the last equality follows from our additional independence assumptions. Combining the two bounds, the VIB objective (Equation 10) for a fixed input pair \((x_{1},x_{2})\) is bounded from below by \[\begin{array}{r l} & {\int p(z_{1}|x_{1})p(z_{2}|x_{2})\log q(m|z_{1},z_{2})\mathrm{d}z_{1}\mathrm{d}z_{2}}\\ & {\qquad -\beta \left(\mathrm{KL}(p(z_{1}|x_{1})||r(z_{1})) + \mathrm{KL}(p(z_{2}|x_{2})||r(z_{2}))\right).} \end{array} \quad (24)\] The negative of the above expression is identical to the loss \(\mathcal{L}_{\mathrm{VIBEmb}}\) in Equation 8. <--- Page Split --->
accept
Accept (Poster)
7
ICLR_2019_paper_0322
iclr
2,019
# INTEGRATED STEGANOGRAPHY AND STEGANALYSIS WITH GENERATIVE ADVERSARIAL NETWORKS Anonymous authors Paper under double-blind review ## ABSTRACT Recently, generative adversarial networks have become a hotspot in both research and industrial application areas, where their most common usage is data generation in computer vision. This paper extends their application to the data hiding and security area. We propose a novel framework that integrates the steganography and steganalysis processes, with generative adversarial networks as the core structure. The discriminative model simulates the steganalysis process, which can help us understand the sensitivity of cover images to semantic changes. The steganography generative model generates a stego image that is aligned with the original cover image and attempts to confuse the steganalysis discriminative model. The introduction of a cycle discriminative model and an inconsistent loss helps to enhance the quality and security of the generated stego images during iterative training. The training dataset mixes intact images with intentionally attacked images; this mixed training process further improves the robustness and security of the new framework. Through qualitative and quantitative experiments and analysis, the novel framework shows compelling performance and advantages over current state-of-the-art methods on steganography and steganalysis benchmarks. ## 1 INTRODUCTION Steganography literally means "covered writing" and is usually interpreted as hiding information in other information. As its counterpart, the main idea of steganalysis is to analyze whether received information contains any hidden information, and to recover the hidden information if possible (Volkhonskiy et al., 2017). Since their birth, steganography and steganalysis have progressed in tandem. Steganography is widely used in applications such as secret information transmission (Shi et al., 2017), watermarking (Yu, 2016), copyright certification (Mun et al., 2017), and forgery detection (Wolfgang & Delp, 1996). In this paper, we propose an integrated steganography and steganalysis framework based on generative adversarial networks, and use ISS-GAN to denote the method in this paper. (ISS is the acronym of integrated steganography and steganalysis.) ISS-GAN combines steganalysis's evaluation metrics for secure steganography with the advantages of the latest GAN principles, and integrates the two counterparts into a single framework. Firstly, we simulate the steganalysis process with a discriminative model. This helps us to dynamically change the capacity of cover images, and to understand their sensitivity to semantic change. Then, through the fine-tuning adversarial training of the steganography generative model and the steganalysis discriminative model, ISS-GAN iteratively reduces the consistency loss between the original cover images and the generated stego images. Finally, when ISS-GAN reaches the minimal consistency difference, the generated stego images can hardly be distinguished from the original cover images. In the training process, we also include some intentional attacks (noise, compression, etc.) in the dataset. This mixture in the training dataset further improves the security of ISS-GAN. By comparing ISS-GAN with state-of-the-art steganography methods on benchmark datasets, we conclude that ISS-GAN has advantages in improving the quality and security of generated stego images.
In Figure 1, can you differentiate between Van Gogh's paintings in (a) and (b)? Or Monet's paintings in (e) and (f)? Actually, the images in (a) and (e) are the original versions of the masters' works, while the images in (b) and (f) are the stego versions produced with the ISS-GAN framework. The embedded <--- Page Split ---> secret images are the emblems of the painters' nations: the Netherlands and France. The embedded information is kept imperceptible, so that there is no influence on the audience's appreciation of the paintings from the fidelity aspect. ![](images/1_0.jpg) <center>Figure 1: Illustration of the ISS-GAN framework's steganographic performance on world-renowned art paintings. (a) Original version of The Starry Night, painted by Van Gogh. (b) Stego version of The Starry Night. (c) Emblem of the Netherlands, used as the embedded secret image. (e) Original version of The rose arches, painted by Monet. (f) Stego version of The rose arches. (g) Emblem of France, used as the embedded secret image. (d,h) Residual difference between the original and stego versions. (We invert the colors to emphasize the difference.) </center> ## 2 RELATED WORK State-of-the-art steganography approaches can be categorized into three types. Least Significant Bit Steganography The main strength of this category is that the algorithms are theoretically simple and have low computational complexity. Secret information is embedded into the cover image with operations like shifting or replacing pixels. In the typical Least Significant Bit (LSB) algorithm, the pixel values of the cover image and the secret messages are represented in binary form. The stego image generation process is implemented by replacing the least significant bits of the cover image with the most significant bits of the secret information. In (Das et al., 2018), the authors proposed to generate an LSB-based hash function for the image authentication process, which can provide good imperceptibility between the original image and the stego image with hash bits. Moreover, it can successfully identify tampering through a tamper localization process. Content Adaptive Steganography In this category, sophisticated steganographic algorithms design a hand-crafted distortion function which is used for selecting the embedding localization in the image. These algorithms, such as Wavelet Obtained Weights (WOW), Highly Undetectable Steganography (HUGO), and S-UNIWARD, are the most secure image steganography methods in the spatial domain. WOW (Holub & Fridrich, 2012) embeds information into the cover image according to the textural complexity of its regions. In the WOW algorithm, the more texturally complex an image region is, the more pixel values will be modified in that region. HUGO (Pevný et al., 2010) defines a distortion function domain by assigning costs to pixels based on the effect of embedding some information within a pixel. It uses a weighted norm function to represent the feature space. S-UNIWARD (Holub et al., 2014) proposes a universal distortion function that is independent of the embedding domain. Despite the diverse implementation details, the ultimate goals in this category are identical: they are all devoted to minimizing distortion functions, embedding the secret into noisy areas or complex textures, and avoiding the smooth regions of the cover images. Deep Learning based Steganography As deep learning has a brilliant capability in image processing and generation, researchers have also attempted to utilize it in steganography.
(Volkhonskiy et al., 2017) <--- Page Split ---> introduces a new model for generating more steganalysis-secure cover images based on deep convolutional generative adversarial networks. (Dong et al., 2018) proposes a steganography model which can conceal a gray secret image in a color cover image of the same size, and generate a stego image which appears quite similar to the cover image in semantics and color. (Shi et al., 2017) aims to generate more secure covers for steganography. Based on Wasserstein GAN (Arjovsky et al., 2017), the proposed algorithm efficiently generates cover images with higher visual quality. ## 3 FRAMEWORK OF ISS-GAN ### 3.1 PRINCIPLE OF ISS-GAN In this proposal, ISS-GAN is a steganography framework that embeds a secret message into a source cover image. There are therefore two essential metrics for evaluating the steganographic algorithm. - The secret information should remain imperceptible until it is extracted by the specific authorized receiver. - The stego image should be secure and intact, resisting tampering and attacks. In traditional state-of-the-art frameworks, imperceptibility is achieved by carefully choosing the least significant bits in the pixel domain, or relies on a hand-crafted distortion function. The features and algorithms therefore need meticulous manual design. Moreover, these designs heavily rely on the characteristics of the target images, so it is very hard for these schemes to become general solutions for various applications. The manually designed features are also vulnerable to intentional and hybrid attacks. For deep learning based steganography, the main focus has been to generate steganalysis-secure cover images. But in many real applications, the cover images are given; how to fully utilize the given images to hide the secret, and how to improve the security of the generated stego images, remain unanswered. After analysing the drawbacks of state-of-the-art algorithms, we find that a GAN is very suitable for an integrated steganography and steganalysis framework. Instead of manual design, the generative network can learn from training samples and generate suitable imperceptible features by itself. The discriminative network can simulate the function of steganalysis. The iterative adversarial training process can strengthen the capability of the steganalysis model as well as of the steganography model: a stronger steganalysis model will stimulate the improvement of the steganography model, and vice versa. Moreover, how to resist tampering can also be learned from attacked training samples. For the first evaluation metric, let's imagine the following situation. An eavesdropper wants to check whether an image he obtained from public media contains secret information, so he needs to discriminate between the original cover image and the received stego image. If these two images are perceptually the same, then the eavesdropper can hardly differentiate the stego image from the cover image. For the purpose of steganography, we can accumulate the visual and statistical differences between the cover and stego images. If the difference for each evaluation metric is small enough, we can regard the stego image as a high-quality steganography result. This aligns with the imperceptibility criterion of steganography. For the second evaluation metric, let's imagine the following situation. The eavesdropper wants to destroy the secret communication, so he makes intentional changes to the stego image, such as rotation, clipping, added noise, and JPEG compression.
He assumes that, even if the image he obtained contains a secret, these intentional changes will disable the secret extraction method. If the steganography framework is secure and the steganalysis algorithm is robust enough, the intentional changes are in vain. This aligns with the security criterion of steganography.

A GAN (Goodfellow et al., 2014) consists of a generative model and a discriminative model. The purpose of the generative model is to generate new samples that are very similar to the real samples and attempt to confuse the discriminator, while the purpose of the discriminative model is to separate samples synthesized by the generative model from the real ones. The discriminative model also estimates the probability that a specific sample comes from the generative model rather than from the real data. When the whole GAN reaches a Nash equilibrium, the generative model generates samples that exactly match the character and distribution of the real samples, and at the same time the discriminative model returns a classification probability of 0.5 for each pair of generated and real samples. The GAN is then well trained and converged.

![](images/3_0.jpg)

<center>Figure 2: Framework and workflow chart of ISS-GAN </center>

To combine the purposes of steganography, steganalysis, and the GAN model, we propose the novel ISS-GAN framework. ISS-GAN likewise consists of a steganography generative model and a steganalysis discriminative model. The purpose of the steganography generative model is to generate a stego image that is aligned with the original cover image and attempts to confuse the steganalysis discriminative model, while the purpose of the discriminative model is to distinguish the generated stego image from the cover image. When ISS-GAN reaches a Nash equilibrium, the generative model can generate a stego image that exactly aligns with the character and distribution of the cover image, and at the same time the discriminative model returns a classification probability of 0.5 for each pair of stego and cover images. This also aligns with the evaluation criteria of steganography and steganalysis. In conclusion, designing the steganography and steganalysis framework is equivalent to training the ISS-GAN model until it converges.

The overall framework of ISS-GAN is shown in Figure 2. ISS-GAN contains two generative models and two discriminative models. Because a steganography and steganalysis framework must include both the secret-info embedding and extraction processes, it needs to learn the bijective mapping between two image collections: for ISS-GAN, one collection contains the original cover images and the other contains the secret images to be embedded. In the left part of Figure 2, the original cover image \((CI)\) and the original secret image \((STI)\) go through the stego image generative model \(G_{SOI}\) to produce the stego image \((SOI)\). This is the secret embedding and stego image generation process, which can be expressed as follows.

\[SOI = G_{SOI}(CI,STI) \quad (1)\]

In the right part of Figure 2, the stego image \((SOI)\) goes through the secret image generative model \(G_{STI}\) to yield the extracted secret image \((ESTI)\). This is the secret image extraction process, which can be expressed as follows.
\[ESTI = G_{STI}(SOI) \quad (2)\]

The cover image discriminative model \(D_{CI}\) ensures, via an adversarial loss, that the distribution of images from \(CI\) is indistinguishable from the distribution of \(SOI\). This is the guarantee of the imperceptibility criterion in steganography.

To refine the secret extraction, we introduce the secret image cycle discriminative model \(D_{STI}\). A generative model is learned to transform a source image domain into a target image domain; taking the secret image generative model \(G_{STI}\) as an example, the learned mapping is highly under-constrained and cannot ensure that the generated \(ESTI\) is indistinguishable from the original \(STI\) (Zhu et al., 2017). So we couple this mapping with its inverse mapping \(G_{SOI}\) and introduce a cycle adversarial loss:

\[D_{STI}(STI,ESTI)\rightarrow 0 \quad (3)\]

which is equivalent to

\[G_{STI}(SOI) = G_{STI}(G_{SOI}(CI,STI))\approx STI \quad (4)\]

Its goal is to ensure, via the cycle adversarial loss \(D_{STI}\), that the distribution of images from \(ESTI\) is indistinguishable from the distribution of \(STI\). This is the guarantee of the secure and robust extraction criterion in steganography and steganalysis.

To refine the steganalysis scheme, we introduce an extra inconsistent loss. To make the whole ISS-GAN framework usable, we must ensure the secret can only be extracted from \(SOI\): if the secret extraction process is applied to \(CI\), the secret image should not be recovered. The inconsistent loss can be expressed as follows:

\[\max_{G_{STI}}|G_{STI}(CI) - G_{STI}(SOI)| \quad (5)\]

### 3.2 LOSS FUNCTION DEFINITION

The overall loss function of ISS-GAN consists of three parts: the adversarial loss \(L_{GAN}(G_{SOI},D_{CI})\), the cycle adversarial loss \(L_{GAN}(G_{STI},D_{STI})\), and the inconsistent loss \(L_{IC}\). The loss function is written as follows:

\[L_{Overall} = L_{GAN}(G_{SOI},D_{CI}) + L_{GAN}(G_{STI},D_{STI}) + \lambda L_{IC}[G_{STI}(CI),G_{STI}(SOI)], \quad (6)\]

where \(\lambda\) balances the adversarial losses against the inconsistent loss. For joint minimization, the inconsistent loss of Eq. (5) is rewritten in the following minimization form.

\[\min_{G_{STI}}\frac{1}{|G_{STI}(CI) - G_{STI}(SOI)|} \quad (7)\]

In the ISS-GAN framework, the quality of the generated stego image \(SOI\) and of the extracted secret image \(ESTI\) is judged by their differences from the original cover image \(CI\) and the original secret image \(STI\), respectively. In this paper, two quantitative image quality indicators are applied to measure these differences (Yu, 2017). The Peak Signal to Noise Ratio (PSNR) indicator assesses the difference from the gray-level fidelity aspect. The Structural Similarity (SSIM) indicator (Wang et al., 2004), an image quality assessment indicator based on the human vision system, assesses the difference from the structure-level fidelity aspect. The definitions of these two indicators are as follows.

\[PSNR(x,y) = 10\log_{10}\left(\frac{(MAX_{I})^{2}}{MSE(x,y)}\right), \quad (8)\]

where \(MAX_{I}\) is the maximum possible pixel value of the images \(x\) and \(y\), and \(MSE(x,y)\) is the Mean Squared Error (MSE) between \(x\) and \(y\).
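For concreteness, Eq. (8) reduces to a few lines of NumPy. The sketch below is our illustration rather than code from the paper; \(MAX_I = 255\) assumes 8-bit images.

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, max_i: float = 255.0) -> float:
    """PSNR per Eq. (8); max_i = 255 assumes 8-bit images."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images: no distortion at all
    return 10.0 * np.log10(max_i ** 2 / mse)
```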
\[SSIM(x,y) = \frac{(2\mu_{x}\mu_{y} + C_{1})\left(2\sigma_{xy} + C_{2}\right)}{\left(\mu_{x}^{2} + \mu_{y}^{2} + C_{1}\right)\left(\sigma_{x}^{2} + \sigma_{y}^{2} + C_{2}\right)}, \quad (9)\]

where \(\mu_{x}\) and \(\mu_{y}\) are the average grey values of the images, \(\sigma_{x}^{2}\) and \(\sigma_{y}^{2}\) are their variances, and \(\sigma_{xy}\) is the covariance between them. \(C_{1}\) and \(C_{2}\) are two constants used to prevent unstable results when either \(\mu_{x}^{2} + \mu_{y}^{2}\) or \(\sigma_{x}^{2} + \sigma_{y}^{2}\) is very close to 0.

### 3.3 ISS-GAN NETWORK STRUCTURE

For ISS-GAN, the resolution of the cover image \(CI\) and the secret image \(STI\) is \(256 \times 256\). The network structure of the stego image generative model \(G_{SOI}\) includes a convolution layer (kernel size = 7, stride = 1, pad = 0), two convolution layers (kernel size = 3, stride = 2, pad = 1), nine residual blocks (He et al., 2016), two deconvolution layers (kernel size = 3, stride = 2, pad = 1, output pad = 1), and a convolution layer (kernel size = 7, stride = 1, pad = 0). Each convolution and deconvolution layer is followed by an instance normalization layer and a ReLU layer. The structure of the secret image generative model \(G_{STI}\) is identical to that of \(G_{SOI}\).

The network structure of the cover image discriminative model \(D_{CI}\) is similar to the PatchGAN model (Isola et al., 2017). Each time, it operates on an image patch of size \(70 \times 70\) and classifies whether this patch is real or fake; the model runs across the whole image and averages the results over all \(70 \times 70\) overlapping patches to provide the ensemble output. Such a patch-level discriminative model requires fewer parameters and runs faster than a full-image discriminator (Yi et al., 2017), and it places no constraints on the size of the input image. \(D_{CI}\) contains a convolution layer (kernel size = 4, stride = 2, pad = 1) followed by a leaky ReLU layer; three convolution layers (kernel size = 4, stride = 2, pad = 1), each followed by an instance normalization layer and a leaky ReLU layer; a convolution layer (kernel size = 4, stride = 1, pad = 1) followed by an instance normalization layer and a leaky ReLU layer; and a convolution layer (kernel size = 4, stride = 1, pad = 1) followed by a sigmoid layer that outputs a scalar in [0, 1]. The structure of the secret image cycle discriminative model \(D_{STI}\) is identical to that of \(D_{CI}\).

Moreover, to improve convergence, we use the Adam optimizer (Kingma & Ba, 2015) instead of the stochastic gradient descent (SGD) optimizer. In practice, the Adam optimizer adapts well to the training of ISS-GAN: it is computationally efficient and has low memory requirements. The hyper-parameters of the Adam optimizer are \(\beta_{1} = 0.5\) and \(\beta_{2} = 0.999\), and the base learning rate is 0.0002.

## 4 EXPERIMENTAL RESULTS

### 4.1 STEGANOGRAPHY PERFORMANCE EXPERIMENTS

In the secret embedding and stego image generation experiments, we adopt the benchmark images shown in the first row of Figure 3 as the cover images \(CI\) to test the performance of the proposed ISS-GAN framework. The embedded secret image is the Barbara benchmark image. We use PyTorch as the framework and train ISS-GAN for 150 epochs. The generated stego images \(SOI\) are shown in the second row of Figure 3.
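Both fidelity indicators reported in this section are easy to reproduce. Complementing the PSNR sketch above, the following is a minimal, global-statistics version of the SSIM of Eq. (9); it is our illustration, not the paper's code. Note that the reference implementation of Wang et al. (2004) averages SSIM over local windows, and the constants \(C_1 = (0.01\,MAX_I)^2\) and \(C_2 = (0.03\,MAX_I)^2\) are the common choice, assumed here since the paper does not state them.

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, max_i: float = 255.0) -> float:
    """SSIM per Eq. (9) with global image statistics (reference SSIM uses local windows)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * max_i) ** 2  # assumed stabilizing constants, as in Wang et al. (2004)
    c2 = (0.03 * max_i) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
```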
For illustration purposes, the residual differences between the cover and stego images are shown in the third row of Figure 3. The PSNR and SSIM metrics for the generated stego images \(SOI\) versus the cover images \(CI\) are shown in Table 1. (\(SOI\) is used as image \(x\) and \(CI\) as image \(y\) in the PSNR and SSIM calculations of Eqs. (8) and (9).) The results in Figure 3 and Table 1 demonstrate the high quality of \(SOI\) and the imperceptibility of its differences in both qualitative and quantitative aspects.

![](images/5_0.jpg)

<center>Figure 3: Stego image \(SOI\) generation performance of ISS-GAN. Row 1: Original cover images (Lena, Airplane, Baboon, Fruits, Peppers). Row 2: Corresponding generated stego images. Row 3: Residual difference between original and stego images. (We invert the colors to emphasize the differences. Because the differences are inconspicuous, please magnify to see them; they lie mainly on the marginal parts of objects.) </center>

Table 1: Evaluation metrics of generated stego images \(SOI\)

<table>
<tr><td>Metrics/Images</td><td>Lena</td><td>Airplane</td><td>Baboon</td><td>Fruits</td><td>Peppers</td></tr>
<tr><td>PSNR</td><td>33.0170</td><td>33.0065</td><td>29.1163</td><td>33.9085</td><td>30.5124</td></tr>
<tr><td>SSIM</td><td>0.9390</td><td>0.9589</td><td>0.9335</td><td>0.9510</td><td>0.9034</td></tr>
</table>

Let us analyse the obtained results further. If we magnify Figure 3 to inspect the residual differences, we find that they lie mainly on the marginal and textural parts of objects: for example, the hat of Lena, the edges of the F16 plane, the skin and whiskers of the baboon, and the profiles of the fruits and peppers. This means ISS-GAN tends to hide the secret info in the marginal parts of the objects in the original cover images. In information-theoretic terms, textures and edges are the high-frequency parts of an image, while smooth regions are the low-frequency parts. Changes to the low-frequency parts are easily detected by steganalysis methods, which is why many state-of-the-art steganography algorithms transform the cover image from the spatial domain to the frequency domain, make tiny changes in the high-frequency components, and transform it back to the spatial domain. Moreover, as discussed above, the ultimate goal of the state-of-the-art content adaptive steganography algorithms is to embed the secret image into parts with complex edges and textures while avoiding the smooth regions of the cover image. The behavior of ISS-GAN is very similar to these state-of-the-art algorithms; but whereas they need a hand-crafted distortion function to achieve this goal, ISS-GAN learns it from the discriminative network that simulates the behavior of steganalysis. Through the learning process, the generative network in ISS-GAN finds that steganalysis methods are very sensitive to the low-frequency parts and much less sensitive to the high-frequency parts, so the stego images it generates hide the secret info mainly in the marginal and textural parts to ensure the best imperceptibility.

### 4.2 STEGANALYSIS QUALITATIVE PERFORMANCE EXPERIMENTS

In the steganalysis qualitative experiments, we adopt the world-renowned art paintings shown in Figure 1 to test the performance of the proposed ISS-GAN and its robustness to different patterns of noise attacks. Several noise patterns are adopted to imitate real-world attacks, as sketched below.
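As a concrete illustration of such attacks, the following sketch shows one plausible way to generate them with NumPy and Pillow. It is our own approximation: the noise strengths (`sigma`, `amount`, `quality`) are illustrative assumptions, since the paper does not specify the attack settings.

```python
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(seed=0)

def gaussian_noise(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Additive Gaussian white noise."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255).astype(np.uint8)

def salt_pepper_noise(img: np.ndarray, amount: float = 0.02) -> np.ndarray:
    """Flip a fraction `amount` of pixels to pure black or white."""
    out = img.copy()
    mask = rng.random(img.shape[:2])
    out[mask < amount / 2] = 0          # pepper
    out[mask > 1.0 - amount / 2] = 255  # salt
    return out

def poisson_noise(img: np.ndarray) -> np.ndarray:
    """Signal-dependent Poisson (shot) noise."""
    return np.clip(rng.poisson(img.astype(np.float64)), 0, 255).astype(np.uint8)

def speckle_noise(img: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Multiplicative speckle noise."""
    return np.clip(img * (1.0 + rng.normal(0.0, sigma, img.shape)), 0, 255).astype(np.uint8)

def jpeg_compress(img: np.ndarray, quality: int = 75) -> np.ndarray:
    """Lossy JPEG round-trip to simulate coding/decoding in transmission."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))
```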
We use PyTorch as the framework and train ISS-GAN for 200 epochs. The extracted secret images \(ESTI\) are shown in Figure 4 and Appendix Figure 8. The results in Figures 4 and 8 demonstrate the high quality of \(ESTI\) in the qualitative aspect.

![](images/6_0.jpg)

<center>Figure 4: Extracted secret images when stego images are attacked by noise. Column 1: Multiplicative noise, Column 2: Salt and pepper noise, Column 3: Gaussian white noise, Column 4: Poisson noise. Rows 2 and 4: Residual difference between embedded secret images and extracted secret images. (We invert the colors to emphasize the differences.) </center>

### 4.3 STEGANALYSIS QUANTITATIVE COMPARATIVE EXPERIMENTS

We compare ISS-GAN with state-of-the-art steganography methods on various image benchmarks. Here we use the steganalysis process to extract secret images from stego images and evaluate the security criteria of the steganography algorithms. For LSB steganography, we choose LSB-TLH (Das et al., 2018). For content adaptive steganography, we choose WOW (Holub & Fridrich, 2012), HUGO (Pevný et al., 2010), and S-UNIWARD (Holub et al., 2014). For deep learning based steganography, we choose ISGAN (Dong et al., 2018) and SSGAN (Shi et al., 2017) for comparison. We run two groups of experiments. In the first group, we use Lena as the original cover image and the trolleybus image as the secret image; the results are shown in Figure 5. In the second group, we use Lena as the original cover image and the headline of the ICLR conference as the secret image; the results are shown in Figure 6. In the quantitative experiments, we add JPEG compression to simulate the coding and decoding processes in a real secure information transmission system, since we need to ensure ISS-GAN works well against coding and decoding algorithms.

![](images/7_0.jpg)

<center>Figure 5: Extracted secret image of the trolleybus after adding certain attacks. Row 1: Gaussian white noise, Row 2: Poisson noise, Row 3: Salt and pepper noise, Row 4: Speckle noise, Row 5: JPEG compression. (Results for separate Salt noise and Pepper noise are in the Appendix.) Columns 1-7 are \(ESTI\) obtained by the LSB-TLH, WOW, HUGO, S-UNIWARD, ISGAN, SSGAN, and proposed ISS-GAN algorithms.
</center>

Table 2: PSNR metric for extracted secret images of the trolleybus

<table>
<tr><td>Images/Algorithms</td><td>LSB-TLH</td><td>WOW</td><td>HUGO</td><td>S-UNIWARD</td><td>ISGAN</td><td>SSGAN</td><td>ISS-GAN</td></tr>
<tr><td>Gaussian noise</td><td>21.9184</td><td>22.5196</td><td>25.4038</td><td>23.6964</td><td>21.0874</td><td>22.0219</td><td>28.5659</td></tr>
<tr><td>Poisson noise</td><td>28.0956</td><td>28.1163</td><td>28.1097</td><td>28.1035</td><td>28.0996</td><td>28.1182</td><td>28.1181</td></tr>
<tr><td>Salt & Pepper noise</td><td>20.6125</td><td>22.3013</td><td>22.3739</td><td>22.1683</td><td>20.2038</td><td>21.0407</td><td>23.3684</td></tr>
<tr><td>Salt noise</td><td>19.7074</td><td>20.2493</td><td>22.0935</td><td>20.4663</td><td>19.6455</td><td>19.7446</td><td>22.8699</td></tr>
<tr><td>Pepper noise</td><td>21.8021</td><td>22.4331</td><td>22.5555</td><td>24.2972</td><td>21.7754</td><td>21.8409</td><td>25.0980</td></tr>
<tr><td>Speckle noise</td><td>27.9778</td><td>33.8242</td><td>32.7279</td><td>30.5234</td><td>28.8135</td><td>29.6587</td><td>37.5805</td></tr>
<tr><td>JPEG Compression</td><td>26.6882</td><td>30.4574</td><td>31.0763</td><td>29.7196</td><td>27.9319</td><td>28.8819</td><td>31.6285</td></tr>
</table>

Table 3: SSIM metric for extracted secret images of the trolleybus

<table>
<tr><td>Images/Algorithms</td><td>LSB-TLH</td><td>WOW</td><td>HUGO</td><td>S-UNIWARD</td><td>ISGAN</td><td>SSGAN</td><td>ISS-GAN</td></tr>
<tr><td>Gaussian noise</td><td>0.6548</td><td>0.6802</td><td>0.7885</td><td>0.7271</td><td>0.6182</td><td>0.6589</td><td>0.8772</td></tr>
<tr><td>Poisson noise</td><td>0.8876</td><td>0.8878</td><td>0.8878</td><td>0.8877</td><td>0.8874</td><td>0.8880</td><td>0.8880</td></tr>
<tr><td>Salt & Pepper noise</td><td>0.7338</td><td>0.8224</td><td>0.8220</td><td>0.8110</td><td>0.7190</td><td>0.7584</td><td>0.8464</td></tr>
<tr><td>Salt noise</td><td>0.7147</td><td>0.7384</td><td>0.8096</td><td>0.7476</td><td>0.7127</td><td>0.7156</td><td>0.8352</td></tr>
<tr><td>Pepper noise</td><td>0.8119</td><td>0.8295</td><td>0.8333</td><td>0.8780</td><td>0.8100</td><td>0.8124</td><td>0.8941</td></tr>
<tr><td>Speckle noise</td><td>0.8955</td><td>0.9635</td><td>0.9542</td><td>0.9304</td><td>0.9062</td><td>0.9206</td><td>0.9836</td></tr>
<tr><td>JPEG Compression</td><td>0.8909</td><td>0.9425</td><td>0.9487</td><td>0.9343</td><td>0.9304</td><td>0.9245</td><td>0.9538</td></tr>
</table>

The PSNR and SSIM metrics for the extracted secret images of the trolleybus and of the ICLR conference headline are shown in Tables 2-5. In Tables 2-5, the extracted secret image is used as image \(x\) and the original secret image as image \(y\) in the PSNR and SSIM calculations of Eqs. (8) and (9). According to these metrics, the security of ISS-GAN quantitatively outperforms all other state-of-the-art steganography algorithms. In both groups of experiments and under both metrics, the closest competitors are the content adaptive steganography algorithms: WOW, HUGO, and S-UNIWARD have quite good performance on steganography security.

![](images/8_0.jpg)

<center>Figure 6: Extracted secret image of the ICLR conference headline after adding certain attacks.
Row 1: Gaussian white noise, Row 2: Poisson noise, Row 3: Salt and pepper noise, Row 4: Speckle noise, Row 5: JPEG compression. (Results for separate Salt noise and Pepper noise are in the Appendix.) Columns 1-7 are \(ESTI\) obtained by the LSB-TLH, WOW, HUGO, S-UNIWARD, ISGAN, SSGAN, and proposed ISS-GAN algorithms. </center>

Table 4: PSNR metric for extracted secret images of the ICLR conference headline

<table>
<tr><td>Images/Algorithms</td><td>LSB-TLH</td><td>WOW</td><td>HUGO</td><td>S-UNIWARD</td><td>ISGAN</td><td>SSGAN</td><td>ISS-GAN</td></tr>
<tr><td>Gaussian noise</td><td>25.6366</td><td>24.5133</td><td>27.9945</td><td>25.8188</td><td>22.8196</td><td>23.6139</td><td>30.0088</td></tr>
<tr><td>Poisson noise</td><td>27.2302</td><td>27.2400</td><td>27.2332</td><td>27.2281</td><td>27.2352</td><td>27.2242</td><td>27.2497</td></tr>
<tr><td>Salt & Pepper noise</td><td>19.4886</td><td>20.8146</td><td>24.3997</td><td>21.5674</td><td>19.5006</td><td>20.7629</td><td>36.4338</td></tr>
<tr><td>Salt noise</td><td>30.3170</td><td>31.8986</td><td>34.0030</td><td>32.3590</td><td>31.3167</td><td>31.7378</td><td>49.0843</td></tr>
<tr><td>Pepper noise</td><td>17.3986</td><td>17.8621</td><td>18.3596</td><td>19.8744</td><td>16.4737</td><td>17.7671</td><td>35.9698</td></tr>
<tr><td>Speckle noise</td><td>26.8375</td><td>27.6863</td><td>27.8516</td><td>34.0109</td><td>24.5629</td><td>25.3830</td><td>42.8481</td></tr>
<tr><td>JPEG Compression</td><td>30.1590</td><td>32.1941</td><td>32.6464</td><td>31.8712</td><td>30.9326</td><td>31.1552</td><td>32.7724</td></tr>
</table>

For the PSNR metric, ISS-GAN still achieves a \(1.01\mathrm{X}\sim 1.12\mathrm{X}\) relative improvement over the second-highest PSNR algorithm with the trolleybus secret image embedded, and a \(1.01\mathrm{X}\sim 1.81\mathrm{X}\) relative improvement over the second-highest PSNR algorithm with the ICLR conference headline secret image embedded. For the SSIM metric, ISS-GAN achieves a \(1.01\mathrm{X}\sim 1.11\mathrm{X}\) relative improvement over the second-highest SSIM algorithm with the trolleybus secret image embedded, and a \(1.01\mathrm{X}\sim 1.46\mathrm{X}\) relative improvement over the second-highest SSIM algorithm with the ICLR conference headline secret image embedded.

Analysing the results further, we find that the performance of the state-of-the-art deep learning steganography algorithms is not as good as expected. The main reason is that their authors focus more on generating new cover images that are steganalysis-secure, whereas in our experiments the cover images are fixed. Consequently, the state-of-the-art deep learning steganography algorithms perform only at the level of the LSB algorithms, and worse than the content adaptive algorithms. Comparing the results of the first and second groups of experiments, we find that ISS-GAN performs better with the ICLR conference headline secret image: the amount of meaningful pixels and semantic info in the ICLR headline image is much smaller than in the trolleybus image. This aligns with a basic principle of steganography, i.e., the more pixels concealed in the cover image, the worse the security of the stego image.
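As a quick sanity check, the relative improvements reported above can be recomputed directly from the PSNR values in Table 2. The short script below is ours; it simply divides the ISS-GAN score by the best competing score for a few attacks.

```python
# Recomputing the "relative improvement over the second highest" ratios from Table 2.
table2_psnr = {  # attack -> PSNR per algorithm (values copied from Table 2)
    "Gaussian noise": {"LSB-TLH": 21.9184, "WOW": 22.5196, "HUGO": 25.4038,
                       "S-UNIWARD": 23.6964, "ISGAN": 21.0874, "SSGAN": 22.0219,
                       "ISS-GAN": 28.5659},
    "Speckle noise": {"LSB-TLH": 27.9778, "WOW": 33.8242, "HUGO": 32.7279,
                      "S-UNIWARD": 30.5234, "ISGAN": 28.8135, "SSGAN": 29.6587,
                      "ISS-GAN": 37.5805},
    "JPEG Compression": {"LSB-TLH": 26.6882, "WOW": 30.4574, "HUGO": 31.0763,
                         "S-UNIWARD": 29.7196, "ISGAN": 27.9319, "SSGAN": 28.8819,
                         "ISS-GAN": 31.6285},
}
for attack, scores in table2_psnr.items():
    ours = scores.pop("ISS-GAN")
    runner_up, best_other = max(scores.items(), key=lambda kv: kv[1])
    print(f"{attack}: {ours / best_other:.2f}X over {runner_up}")
# Gaussian noise: 1.12X over HUGO
# Speckle noise: 1.11X over WOW
# JPEG Compression: 1.02X over HUGO
```

The resulting ratios fall inside the \(1.01\mathrm{X}\sim 1.12\mathrm{X}\) range claimed for the trolleybus experiment.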
Table 5: SSIM metric for extracted secret images of the ICLR conference headline

<table>
<tr><td>Images/Algorithms</td><td>LSB-TLH</td><td>WOW</td><td>HUGO</td><td>S-UNIWARD</td><td>ISGAN</td><td>SSGAN</td><td>ISS-GAN</td></tr>
<tr><td>Gaussian noise</td><td>0.7163</td><td>0.6647</td><td>0.8084</td><td>0.7228</td><td>0.5808</td><td>0.6200</td><td>0.8682</td></tr>
<tr><td>Poisson noise</td><td>0.7706</td><td>0.7707</td><td>0.7710</td><td>0.7709</td><td>0.7709</td><td>0.7703</td><td>0.7716</td></tr>
<tr><td>Salt & Pepper noise</td><td>0.6566</td><td>0.7352</td><td>0.8839</td><td>0.7664</td><td>0.6569</td><td>0.7362</td><td>0.9912</td></tr>
<tr><td>Salt noise</td><td>0.9900</td><td>0.9927</td><td>0.9951</td><td>0.9931</td><td>0.9918</td><td>0.9924</td><td>0.9998</td></tr>
<tr><td>Pepper noise</td><td>0.5160</td><td>0.5455</td><td>0.5834</td><td>0.6775</td><td>0.4464</td><td>0.5428</td><td>0.9900</td></tr>
<tr><td>Speckle noise</td><td>0.7422</td><td>0.7718</td><td>0.7790</td><td>0.9340</td><td>0.6461</td><td>0.6752</td><td>0.9904</td></tr>
<tr><td>JPEG Compression</td><td>0.9841</td><td>0.9888</td><td>0.9892</td><td>0.9880</td><td>0.9863</td><td>0.9877</td><td>0.9897</td></tr>
</table>

To further illustrate the effect of the amount of embedded secret info on the security and imperceptibility of the stego image, we plot the quantitative results for the generated stego images \(SOI\) versus the cover images \(CI\) under different pixel-ratios and attacks, as shown in Figure 7. Here, the pixel-ratio controls the amount of embedded secret info; it is defined as the ratio of the number of valid pixels in the secret image to the number of valid pixels in the cover image. For example, if a \(256 \times 256\) cover image contains 63000 valid pixels and the secret image contains 15000 valid pixels, the pixel-ratio is 0.2381.

![](images/9_0.jpg)

<center>Figure 7: PSNR and SSIM metrics for generated stego images \(SOI\) versus cover images \(CI\) with different pixel-ratios and attacks. </center>

From the curve plots, we can see that the PSNR and SSIM metrics decline as the pixel-ratio increases. Under noise attacks or image compression, the PSNR and SSIM metrics are worse than without attacks, and they likewise decline as the pixel-ratio increases. These results further confirm the inherent trade-off between the amount of embedded secret and the security of the stego image. In real applications, ISS-GAN should therefore balance embedding capacity, imperceptibility, and security according to the actual requirements, just like all state-of-the-art steganography methods. The curves tell the user the largest embedding capacity achievable at a given imperceptibility and security level, which helps the user choose the most suitable secret image in a real secure information transmission system. For example, if the user wants to generate a stego image with at least 25 dB PSNR and 0.97 SSIM versus the cover image, then, considering possible noise attacks and image compression, the pixel-ratio of the embedded secret should be kept below 0.5.

## 5 CONCLUSION AND FUTURE WORKS

In this paper, we integrate steganography and steganalysis into a single framework. The good performance of ISS-GAN derives from the following factors.

- The discriminative network simulates the features of steganalysis,
which helps to understand the sensitivity of cover images to semantic changes.
- The introduction of the cycle discriminative model and the inconsistent loss helps to enhance the quality and security of the generated stego image.
- The mixed training dataset further improves the robustness and security of the ISS-GAN framework: how to resist tampering can be learned from the attacked training samples.
- The iterative adversarial training process strengthens the capabilities of the steganalysis model and the steganography model at the same time: the stronger steganalysis model stimulates the improvement of the steganography model, and vice versa.

In the future, we will study the influence of color versus gray cover/secret images on the proposed ISS-GAN framework.

## REFERENCES

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.

Ujjal Kumar Das, Shefalika Ghosh Samaddar, and Pankaj Kumar Keserwani. Digital forensic enabled image authentication using least significant bit (LSB) with tamper localization based hash function. In Intelligent Communication and Computational Technologies, pp. 141-155. Springer, 2018.

Shiqi Dong, Ru Zhang, and Jianyi Liu. Invisible steganography via generative adversarial network. arXiv preprint arXiv:1807.08571, 2018.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Jamie Hayes and George Danezis. Generating steganographic images via adversarial training. In Advances in Neural Information Processing Systems, pp. 1954-1963, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.

Vojtěch Holub and Jessica Fridrich. Designing steganographic distortion using directional filters. In 2012 IEEE International Workshop on Information Forensics and Security (WIFS), pp. 234-239. IEEE, 2012.

Vojtěch Holub, Jessica Fridrich, and Tomáš Denemark. Universal distortion function for steganography in an arbitrary domain. EURASIP Journal on Information Security, 2014(1):1, 2014.

Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint, 2017.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.

Seung-Min Mun, Seung-Hun Nam, Han-Ul Jang, Dongkyu Kim, and Heung-Kyu Lee. A robust blind watermarking using convolutional neural network. arXiv preprint arXiv:1704.03248, 2017.

Ali Hamzah Obaid. Information hiding techniques for steganography and digital watermarking. In Interactive Systems: Problems of Human-Computer Interaction, collection of scientific papers, pp. 63. USTU, Ulyanovsk, 2015.

Tomáš Pevný, Tomáš Filler, and Patrick Bas. Using high-dimensional image models to perform highly undetectable steganography. In International Workshop on Information Hiding, pp. 161-177. Springer, 2010.

Haichao Shi, Jing Dong, Wei Wang, Yinlong Qian, and Xiaoyu Zhang. SSGAN: Secure steganography based on generative adversarial networks. In Pacific Rim Conference on Multimedia, pp. 534-544. Springer, 2017.

Denis Volkhonskiy, Ivan Nazarov, Boris Borisenko, and Evgeny Burnaev.
Steganographic generative adversarial networks. arXiv preprint arXiv:1703.05502, 2017.

Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, 2004.

Raymond B Wolfgang and Edward J Delp. A watermark for digital images. In Proceedings of the International Conference on Image Processing, volume 3, pp. 219-222. IEEE, 1996.

Zili Yi, Hao (Richard) Zhang, Ping Tan, and Minglun Gong. DualGAN: Unsupervised dual learning for image-to-image translation. In ICCV, pp. 2868-2876, 2017.

Chong Yu. Steganography of digital watermark based on artificial neural networks in image communication and intellectual property protection. Neural Processing Letters, 44(2):307-316, 2016.

Chong Yu. Steganography of digital watermark by Arnold scrambling transform with blind source separation morphological component analysis. Multimedia Tools and Applications, 76(5):6821-6842, 2017.

Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint, 2017.

## 6 APPENDIX

### 6.1 INTRODUCTION OF STEGANOGRAPHY AND STEGANALYSIS

Imagine that you are an enthusiastic shutterbug. You are on a trip to New Orleans and take a nice photo of St. Louis Cathedral. You want to send the beautiful photo and convey your romantic feelings to your girlfriend. As your girlfriend is a Ph.D. candidate in computer vision, you want to share the romantic words in her professional manner: you embed the words by changing the least significant bits of the photograph, because this method hides the romantic words in a nearly invisible way. As a social media fan, you share the photo on Facebook. Many friends leave messages expressing their love of the photo, and your girlfriend writes the following sentence under it: "Wonderful photo. By the way, I like the clever idea. I think I am the first one to have read the hidden words. I love you, too." You will be in a cheerful mood and appreciate the clever communication method by which only you and your girlfriend can "see" the secret information inside the photograph.

This is a simple scenario showing the basic workflow of steganography and steganalysis. Steganography is defined as the art and science of hiding information in ways that prevent the detection of hidden messages (Obaid, 2015). Steganography literally means "covered writing" and usually refers to hiding information inside other information. In the scenario above, you apply steganography to hide the romantic information in the photo of New Orleans; the romantic information is called the secret message, while the original photo of New Orleans is called the cover image. As the counterpart, the main idea of steganalysis is to analyze whether received information contains any hidden information, and to recover the hidden information if possible (Volkhonskiy et al., 2017). In the scenario above, the social network plays the role of the public channel (Hayes & Danezis, 2017), and the posted photo is called the stego image (Dong et al., 2018), which contains the secret message. Your girlfriend applies steganalysis to discover and recover the secret information you embedded. Since their birth, steganography and steganalysis have made complementary progress.
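To make the LSB trick in this story concrete, here is a minimal sketch of classic LSB embedding and extraction. It illustrates the baseline technique only, not the ISS-GAN method; the random cover and 8-bit message are placeholders.

```python
import numpy as np

def lsb_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one message bit into the least significant bit of each cover pixel."""
    stego = cover.flatten()  # flatten() returns a copy, so the cover is untouched
    stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the message bits back out of the least significant bits."""
    return stego.flatten()[:n_bits] & 1

cover = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in photo
message = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)        # 8 secret bits
stego = lsb_embed(cover, message)
assert (lsb_extract(stego, message.size) == message).all()
assert int(np.abs(stego.astype(int) - cover.astype(int)).max()) <= 1  # change per pixel <= 1
```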
### 6.2 EXPERIMENT RESULTS

![](images/12_0.jpg)

<center>Figure 8: Extracted secret images when stego images are attacked by noise. Column 1: Multiplicative noise, Column 2: Salt and pepper noise, Column 3: Gaussian white noise, Column 4: Poisson noise. Rows 2 and 4: Residual difference between embedded secret images and extracted secret images. (We invert the colors to emphasize the differences.) </center>

![](images/13_0.jpg)

<center>Figure 9: Extracted secret image of the trolleybus after adding certain attacks. Row 1: Salt noise, Row 2: Pepper noise. Columns 1-7 are \(ESTI\) obtained by the LSB-TLH, WOW, HUGO, S-UNIWARD, ISGAN, SSGAN, and proposed ISS-GAN algorithms. </center>

![](images/13_1.jpg)

<center>Figure 10: Extracted secret image of the ICLR conference headline after adding certain attacks. Row 1: Salt noise, Row 2: Pepper noise. Columns 1-7 are \(ESTI\) obtained by the LSB-TLH, WOW, HUGO, S-UNIWARD, ISGAN, SSGAN, and proposed ISS-GAN algorithms. </center>
Actually, the images in (a) and (e) are the original version of drawing masters' works. The images in (b) and (f) are the stego version with ISS- GAN framework. The embedded <--- Page Split ---> secret images are emblems of painters' nations: Netherland and France. The embedded info is kept imperceptible to ensure there is no influence on audience to appreciate paintings from fidelity aspect. ![](images/1_0.jpg) <center>Figure 1: Illustration of ISS-GAN framework's steganographic experimental performance on the world-renowned art paintings. (a) Original version of The Starry Night painted by Van Gogh. (b) Stego version of The Starry Night. (c) Emblem of Netherland as the embedded secret image. (e) Original version of The rose arches painted by Monet. (f) Stego version of The rose arches. (g) Emblem of France as the embedded secret image. (d,h) Residual difference between original and stego versions. (We inverse the color to emphasize the difference.) </center> ## 2 RELATED WORK State- of- the- art steganography approaches can be categorized into three types. Least Significant Bit Steganography The main strength of this category is that algorithms are theoretically simple and have low computational complexities. Secret information is embedded into cover image with the operations like shifting or replacing of pixels. In typical Least Significant Bit (LSB) algorithm, pixel values of cover image and secret messages are represented by binary form. Stego image generation process is implemented by replacing the least significant bits of cover image with the most significant bits of secret information. In (Das et al., 2018), authors proposed to generate a LSB based hash function for image authentication process, which can provide good imperceptibility between original image and stego image with hash bits. Moreover, it can successfully identify tamper by a process of tamper localization. Content Adaptive Steganography In this category, some sophisticated steganographic algorithms design a hand- crafted distortion function which is used for selecting the embedding localization of the image. These algorithms are the most secure image steganography in spatial domain, such as Wavelet Obtained Weights (WOW), Highly Undetectable Steganography (HUGO), S- UNIWARD, etc. WOW (Holub & Fridrich, 2012) embeds information into the cover image according to textural complexity of regions. In WOW algorithm, the more texturally complex the image region is, the more pixel values will be modified in this region. HUGO (Pevný et al., 2010) defines a distortion function domain by assigning costs to pixels based on the effect of embedding some information within a pixel. It uses a weighted norm function to represent the feature space. S- UNIWARD (Holub et al., 2014) proposes a universal distortion function that is independent of the embedded domain. Despite the diverse implementation details, the ultimate goals are identical in this category. They are all devoted to minimize distortion functions, to embed the secret into the noisy area or complex textures, and to avoid the smooth regions of the cover images. Deep Learning based Steganography As deep learning has brilliant capability in image processing and generation, researchers also attempt to utilize it in steganography. (Volkhonskiy et al., 2017) <--- Page Split ---> introduces a new model for generating more steganalysis- secure cover images based on deep convolutional generative adversarial networks. 
(Dong et al., 2018) proposes a steganography model which can conceal a gray secret image into a color cover image with the same size, and generate stego image which seems quite similar to cover image in semantics and color. (Shi et al., 2017) wants to generate more secure covers for steganography. Based on Wasserstein GAN (Arjovsky et al., 2017), the proposed algorithm is efficient to generate cover images with higher visual quality. ## 3 FRAMEWORK OF ISS-GAN ### 3.1 PRINCIPLE OF ISS-GAN In the proposal, ISS- GAN is a steganography framework to embed secret message into the source cover image. So here are two essential metrics to evaluate the steganographic algorithm. - Secret info should remain imperceptible until it is extracted by specific authorized receiver.- Stego image should be secure and intact to resist tampering and attacks. In traditional state- of- the- art frameworks, the imperceptibility is achieved by carefully choosing the LSB in pixel domain, or relied on hand- crafted distortion function in traditional steganography. So the features and algorithms need meticulous artificial design. Moreover, these designs heavily rely on the characteristics of target images. So it is very hard for these schemes to become general solutions in various applications. The artificial designed features are also vulnerable to intentional and hybrid attacks. For deep learning based steganography, the main focus is to generate the steganalysis- secure cover images. But in many real applications, the cover images are given. So how to fully utilize the given images to hide secret, and to improve the security of generated stego images are not answered. After analysing the drawbacks of state- of- the- art algorithms, we find GAN is very suitable for integrated steganography and steganalysis framework. Instead of artificial design, the generative network can learn from training samples and generate the suitable imperceptible features by itself. The discriminative network can simulate the function of steganalysis. The iterative adversarial training process can strength the capability if steganalysis model as well as steganography model. The stronger steganalysis model will stimulate the boost of steganography model, and vice versa. Moreover, how to resist the tampering can also be learned from attacked training samples. For the first evaluation metric, let's imagine the following situation. An eavesdropper wants to check whether the image he obtained from public media contains secret info. So he needs to discriminate the original cover image and received stego image. If these two images are perceptibly same, then the eavesdropper can hardly differentiate the stego image from the cover image. For the purpose of steganography, we can accumulate the visual and statistic differences between cover and stego images. If the difference for each evaluation metric is small enough, we can regard this stego image as a high- quality steganography result. This aligns with the imperceptible evaluation criterion of steganography. For the second evaluation metric, let's imagine the following situation. The eavesdropper wants to destroy the secret communication. So he makes intentional changes to the stego image, like rotate, clip, add noises and JPEG compression. Because he assume that even the image he obtained contains secret, these intentional changes will make the secret extraction method disabled. 
If the steganography framework is secure, and steganalysis algorithm is robust enough, the intentional changes are in vain. This aligns with the secure evaluation criterion of steganography. GAN (Goodfellow et al., 2014) consists of the generative model and the discriminative model. The purpose of the generative model is to generate new samples which are very similar to the real samples, and attempts to confuse the discriminator. While the purpose of the discriminative model is to classify samples synthesized by the generative model and the real ones. The discriminative model will also estimate the probability that a specific sample comes from the generative model rather than the real ones. When the whole GAN model achieves Nash Equilibrium, that is to say, the generative model can generate the samples which exactly align with the character and distribution of real samples. And at the same time, the discriminative model returns the classification probability 0.5 for each pair of generated and real samples. Then this GAN model is well- trained and converged. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: Framework and workflow chart of ISS-GAN </center> To combine the purpose of steganography, steganalysis and GAN model, we propose novel ISS- GAN framework. ISS- GAN also consists of the steganography generative and steganalysis discriminative model. The purpose of steganography generative model is to generate stego image which is aligned with the original cover image, and attempts to confuse steganalysis discriminative model. While the purpose of the discriminative model is to distinguish generated stego image from the cover image. When ISS- GAN achieves Nash Equilibrium, i.e., the generative model can generate stego image which exactly aligns with the character and distribution of cover image. And at the same time, the discriminative model returns the classification probability 0.5 for each pair of stego and cover image. This also aligns with the evaluation criterion of steganography and steganalysis. In conclusion, designing steganography and steganalysis framework is equal to make the ISS- GAN model well- trained and converged. The overall framework of ISS- GAN is shown in Figure. 2. In ISS- GAN, there are two generative models and two discriminative models. Because steganography and steganalysis framework should contain secret info embedding and extraction processes, so it needs to learn the bijective mapping relationship between two image collections. For ISS- GAN, one image collection contains the original cover images, the other collection contains the secret images for embedding. In the left part of Figure 2, the original cover image \((CI)\) and the original secret image \((STI)\) go through the stego image generative model \(G_{SOI}\) , to produce the stego image \((SOI)\) . This is the secret embedding and stego image generation process, which can be expressed as follows. \[S O I = G_{S O I}(C I,S T I) \quad (1)\] In the right part of Figure 2, the stego image \((SOI)\) go through the secret image generative model \(G_{STI}\) , to get the extracted secret image \((ESTI)\) . This is the secret image extraction process, which can be expressed as follows. \[ESTI = G_{STI}(SOI) \quad (2)\] The cover image discriminative model \(D_{CI}\) ensures that the distribution of images from \(CI\) is indistinguishable from the distribution \(SOI\) using an adversarial loss. This is the guarantee of the imperceptible evaluation criterion in steganography. 
For the purpose of refining secret extraction, we introduce the secret image cycle discriminative model \(D_{STI}\) . Because generative model is learned to transform from a source image domain to a target image domain. Take the secret image generative model \(G_{STI}\) as an example, the learned mapping relation is highly under- constrained, and cannot ensure the generated \(ESTI\) is indistinguishable from original \(STI\) (Zhu et al., 2017). So we couple this mapping relation with its inverse mapping \(G_{SOI}\) , and introduce a cycle adversarial loss: \[D_{STI}(STI,ESTI)\rightarrow 0 \quad (3)\] That is equal to \[G_{STI}(SOI) = G_{STI}(G_{SOI}(CI,STI))\approx STI \quad (4)\] Its goal is to ensure that the distribution of images from \(ESTI\) is indistinguishable from the distribution \(STI\) using cycle adversarial loss \(D_{STI}\) . This is the guarantee of the secure and robust extraction criterion in steganography and steganalysis. <--- Page Split ---> To refine steganalysis scheme, we introduce the extra inconsistent loss. To make the whole ISSGAN framework useable, we should ensure the secret can only be extracted from \(SOI\) . If we apply the secret extraction process to \(CI\) , secret image should not be recovered. The inconsistent loss can be expressed as follows: \[\max_{G_{S T I}}|G_{S T I}(C I) - G_{S T I}(S O I)| \quad (5)\] ### 3.2 LOSS FUNCTION DEFINITION The overall loss function of ISS- GAN consists of three parts: the adversarial loss \(L_{GAN}(G_{SOI},\) \(D_{CI})\) , the cycle adversarial loss \(L_{GAN}(G_{STI},D_{STI})\) and the inconsistent loss \(L_{IC}\) . So the loss function is written as follows: \[L_{O v e r a l l} = L_{G A N}(G_{S O I},D_{C I}) + L_{G A N}(G_{S T I},D_{S T I}) + \lambda L_{I C}[G_{S T I}(C I),G_{S T I}(S O I)], \quad (6)\] where \(\lambda\) is the parameter to adjust the percentages between adversarial loss and inconsistent loss. The inconsistent loss needs to change to the minimization format as follows. \[\min_{G_{S T I}}\frac{1}{|G_{S T I}(C I) - G_{S T I}(S O I)|} \quad (7)\] In ISS- GAN framework, the quality of generated stego image \(SOI\) and extracted secret image \(ESTI\) is judged by the difference from original cover image \(CI\) and original secret image \(STI\) , respectively. In this paper, two quantitative image effect indicators are applied to measure the differences (Yu, 2017). Peak Signal to Noise Ratio (PSNR) indicator is applied to assess the effect difference from the gray- level fidelity aspect. Structural Similarity (SSIM) (Wang et al., 2004) indicator which is an image quality assessment indicator based on the human vision system is applied to assess the effect difference from the structure- level fidelity aspect. The definitions of these two evaluation indicators are as follows. \[PSNR(x,y) = 10\log_{10}\left(\frac{(MAX_{I})^{2}}{MSE(x,y)}\right), \quad (8)\] where \(MAX_{I}\) is the maximum possible pixel value of images: \(x\) and \(y\) . \(MSE(x,y)\) represents the Mean Squared Error (MSE) between images: \(x\) and \(y\) . \[S S I M(x,y) = \frac{(2\mu_{x}\mu_{y} + C_{1})\left(2\sigma_{x y} + C_{2}\right)}{\left(\mu_{x}^{2} + \mu_{y}^{2} + C_{1}\right)\left(\sigma_{x}^{2} + \sigma_{y}^{2} + C_{2}\right)}, \quad (9)\] where \(\mu_{x}\) and \(\mu_{y}\) represent the average grey values of images. Symbol \(\sigma_{x}\) and \(\sigma_{y}\) represent the variances of images. Symbol \(\sigma_{xy}\) represents covariance between images. 
\(C_{1}\) and \(C_{1}\) are two constants which are used to prevent unstable results when either \(\mu_{x}^{2} + \mu_{y}^{2}\) or \(\sigma_{x}^{2} + \sigma_{y}^{2}\) is very close to 0. ### 3.3 ISS-GAN NETWORK STRUCTURE For ISS- GAN, the resolution of cover image \(CI\) and secret image \(STI\) is \(256 \times 256\) . The network structure of stego image generative model \(G_{SOI}\) includes a convolution layer (kernel size = 7, stride = 0, pad = 0), two convolution layers (kernel size = 3, stride = 2, pad = 1), nine residual blocks (He et al., 2016), and two deconvolution layers (kernel size = 3, stride = 2, pad = 1, outside pad = 1), and a convolution layer (kernel size = 7, stride = 0, pad = 0). Each convolution and deconvolution layer follows with an instance normalization layer and a ReLU layer. The structure of secret image generative model \(G_{STI}\) is identical with \(G_{SOI}\) . The network structure of cover image discriminative model \(D_{CI}\) is similar with PatchGAN model (Isola et al., 2017). Each time, it operates a image patch with \(70 \times 70\) size, and classifies whether this patch is real or fake. The model will run across the whole image, and average all results in the \(70 \times 70\) overlapping patches to provide the ensemble output. The architecture of such a patch- level discriminative model requires fewer parameters and runs faster than a full- image discriminator (Yi et al., 2017). Moreover, it has no constraints over the size of the input image. \(D_{C}\) contains a convolution layer (kernel size = 4, stride = 2, pad = 1) follows with a leaky ReLU layer, three convolution layers (kernel size = 4, stride = 2, pad = 1) follows with an instance normalization layer and a leaky ReLU layer, a convolution layer (kernel size = 4, stride = 1, pad = 1) follows with an instance normalization layer and a leaky ReLU layer, a convolution layer (kernel size = 4, stride = 1, pad = 1) <--- Page Split ---> follows with a sigmoid layer to output a scalar output between [0, 1]. The structure of secret image cycle discriminative model \(D_{STI}\) is identical with \(D_{CI}\) . Moreover, to improve the convergence performance, we use Adam optimizer (Kinga & Adam, 2015) instead of stochastic gradient descent (SGD) optimizer. In practice, Adam optimizer can be adaptive to the training of ISS- GAN. It is computationally efficient and has little memory requirements. The hyper- parameters of Adam optimizer are: \(\beta_{1} = 0.5\) , \(\beta_{2} = 0.999\) . The base learning rate is 0.0002. ## 4 EXPERIMENTAL RESULTS ### 4.1 STEGANOGRAPHY PERFORMANCE EXPERIMENTS In the secret embedding and stego image generation performance experiments, we adopt the benchmark images as the cover images \(CI\) shown in the first row of Figure 3 to test the performance of proposed ISS- GAN framework. The embedded secret image used is the Barbara benchmark image. We use PyTorch as the framework and train ISS- GAN with 150 epochs. The generated stego images \(SOI\) are shown in the second row of Figure 3. For illustration purpose, the residual differences between cover and stego images are shown in the third row of Figure 3. The PSNR and SSIM metrics for generated stego images \(SOI\) versus cover images \(CI\) are shown in Table 1. (SOI is used as image \(x\) , and \(CI\) is used as image \(y\) for PSNR and SSIM metrics calculation equations (8) and (9).) The results shown in Figure 3 and Table. 
1 can prove the high quality and difference imperceptibility of \(SOI\) in qualitative and quantitative aspects. ![](images/5_0.jpg) <center>Figure 3: Stego images \(SOI\) generation performance of ISS-GAN. Row 1: Original cover images: Lena, Airplane, Baboon, Fruits and Peppers, Row 2: Corresponding generated stego images, Row 3: Residual difference between original and stego images. (We inverse the color to emphasize the difference. Because the differences are inconspicuous. Please magnify to see the differences which mainly on the marginal parts of objects.) </center> Table 1: Evaluation metrics of generated stego images \(SOI\) <table><tr><td>Metrics/Images</td><td>Lena</td><td>Airplane</td><td>Baboon</td><td>Fruits</td><td>Peppers</td></tr><tr><td>PSNR</td><td>33.0170</td><td>33.0065</td><td>29.1163</td><td>33.9085</td><td>30.5124</td></tr><tr><td>SSIM</td><td>0.9390</td><td>0.9589</td><td>0.9335</td><td>0.9510</td><td>0.9034</td></tr></table> Let's have a further analysis of the obtained results. If we magnify Figure 3 to see the residual differences, we can find they are mainly on the marginal and textural parts of objects. For example, the hat of Lena, the edges of F16 plane, the skin and whiskers of baboon, the profile of fruits and peppers, etc. It means ISS- GAN tends to hide the secret info into marginal parts of the object in <--- Page Split ---> original cover images. In information theory, textures and edges represent the high frequency parts of the image, while smooth regions represent the low frequency parts of the image. If we change the low frequency parts, it is easy to be detected by steganalysis method. So many state- of- the- art steganography algorithms transform the cover image from spatial domain to frequency domain. Change the tiny part in high frequency parts, and transform it back to spatial domain. Moreover, when we discuss the state- of- the- art content adaptive steganography algorithms, we find the ultimate goal is trying to embed the secret image into the parts with complex edges and textures, and avoiding the smooth regions of the cover images. The behavior of ISS- GAN is very similar to the state- of- the- art steganography algorithms. But the state- of- the- art algorithms need to design a hand- crafted distortion function to achieve the goal, while ISS- GAN learns from the discriminative network which simulates the behaviors of steganalysis. From the learning process, the generative network in ISS- GAN finds steganalysis method are very sensitive to the low frequency parts, and not so sensitive to the high frequency parts. So the stego images generated by ISS- GAN generative network mainly hide their secret info into marginal and textural parts to ensure the best imperceptibility. ### 4.2 STEGANALYSIS QUALITATIVE PERFORMANCE EXPERIMENTS In the steganalysis qualitative experiments, we adopt the world- renowned art paintings shown in Figure 1 to test the performance of proposed ISS- GAN and its robustness to different patterns of noise attack. Several patterns of noises are adopted respectively to imitate real- world noise attacks. We use PyTorch as the framework and train ISS- GAN with 200 epochs. The extracted secret image ESTI are shown in Figure 4 and Appendix Figure 8. The results shown in Figure 4 and 8 can prove the high quality of ESTI in qualitative aspect. ![](images/6_0.jpg) <center>Figure 4: Extracted secret images if stego images are attacked by noise. 
Column 1: Multiplicative noise, Column 2: Salt and pepper noise, Column 3: Gaussian white noise, Column 4: Poisson noise. Rows 2 and 4: Residual difference between embedded secret images and extracted secret images. (The colors are inverted to emphasize the differences.) </center>

### 4.3 STEGANALYSIS QUANTITATIVE COMPARATIVE EXPERIMENTS

We compare ISS-GAN with state-of-the-art steganography methods on various image benchmarks. Here we use the steganalysis process to extract secret images from stego images and evaluate the security criteria of the steganography algorithms. For LSB steganography algorithms, we choose LSB-TLH (Das et al., 2018). For content adaptive steganography algorithms, we choose WOW (Holub & Fridrich, 2012), HUGO (Pevný et al., 2010) and S-UNIWARD (Holub et al., 2014). For deep learning based steganography algorithms, we choose ISGAN (Dong et al., 2018) and SSGAN (Shi et al., 2017) for comparison.

We conduct two groups of experiments. In the first group, we use Lena as the original cover image and the trolleybus image as the secret image; the results are shown in Figure 5. In the second group, we use Lena as the original cover image and the headline of the ICLR conference as the secret image; the results are shown in Figure 6. In the quantitative experiments, we add JPEG compression to simulate the coding and decoding processes in a real <--- Page Split ---> secure information transmission system, since ISS-GAN needs to work well against coding and decoding algorithms.

![](images/7_0.jpg) <center>Figure 5: Extracted secret image of the trolleybus after adding certain attacks. Row 1: Gaussian white noise, Row 2: Poisson noise, Row 3: Salt and pepper noise, Row 4: Speckle noise, Row 5: JPEG compression. (Results for separate Salt noise and Pepper noise are in the Appendix.) Columns 1-7 are the \(ESTI\) obtained by LSB-TLH, WOW, HUGO, S-UNIWARD, ISGAN, SSGAN and the proposed ISS-GAN algorithms. </center>
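The noise attacks used in Figures 4-6 are standard and can be reproduced with a few lines of NumPy. The sketch below is an illustration only: the noise parameters (Gaussian standard deviation, salt/pepper ratio, speckle variance) are assumptions, as the text does not specify them; JPEG compression can similarly be simulated by a save/load round-trip with an image library such as Pillow.

```python
import numpy as np

def attack(img, kind, rng=None):
    """Apply one noise attack to an image with values in [0, 1] (H, W[, C])."""
    if rng is None:
        rng = np.random.default_rng(0)
    if kind == "gaussian":           # additive Gaussian white noise
        out = img + rng.normal(0.0, 0.05, img.shape)
    elif kind == "poisson":          # Poisson (shot) noise on 8-bit intensities
        out = rng.poisson(img * 255.0) / 255.0
    elif kind == "salt_pepper":      # salt-and-pepper noise, ~5% of pixels flipped
        out = img.copy()
        mask = rng.random(img.shape[:2])
        out[mask < 0.025] = 0.0      # pepper
        out[mask > 0.975] = 1.0      # salt
    elif kind == "speckle":          # multiplicative (speckle) noise
        out = img * (1.0 + rng.normal(0.0, 0.05, img.shape))
    else:
        raise ValueError(kind)
    return np.clip(out, 0.0, 1.0)
```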
Table 2: PSNR metric for extracted secret images of the trolleybus

<table><tr><td>Images/Algorithms</td><td>LSB-TLH</td><td>WOW</td><td>HUGO</td><td>S-UNIWARD</td><td>ISGAN</td><td>SSGAN</td><td>ISS-GAN</td></tr><tr><td>Gaussian noise</td><td>21.9184</td><td>22.5196</td><td>25.4038</td><td>23.6964</td><td>21.0874</td><td>22.0219</td><td>28.5659</td></tr><tr><td>Poisson noise</td><td>28.0956</td><td>28.1163</td><td>28.1097</td><td>28.1035</td><td>28.0996</td><td>28.1182</td><td>28.1181</td></tr><tr><td>Salt &amp; Pepper noise</td><td>20.6125</td><td>22.3013</td><td>22.3739</td><td>22.1683</td><td>20.2038</td><td>21.0407</td><td>23.3684</td></tr><tr><td>Salt noise</td><td>19.7074</td><td>20.2493</td><td>22.0935</td><td>20.4663</td><td>19.6455</td><td>19.7446</td><td>22.8699</td></tr><tr><td>Pepper noise</td><td>21.8021</td><td>22.4331</td><td>22.5555</td><td>24.2972</td><td>21.7754</td><td>21.8409</td><td>25.0980</td></tr><tr><td>Speckle noise</td><td>27.9778</td><td>33.8242</td><td>32.7279</td><td>30.5234</td><td>28.8135</td><td>29.6587</td><td>37.5805</td></tr><tr><td>JPEG Compression</td><td>26.6882</td><td>30.4574</td><td>31.0763</td><td>29.7196</td><td>27.9319</td><td>28.8819</td><td>31.6285</td></tr></table>

Table 3: SSIM metric for extracted secret images of the trolleybus

<table><tr><td>Images/Algorithms</td><td>LSB-TLH</td><td>WOW</td><td>HUGO</td><td>S-UNIWARD</td><td>ISGAN</td><td>SSGAN</td><td>ISS-GAN</td></tr><tr><td>Gaussian noise</td><td>0.6548</td><td>0.6802</td><td>0.7885</td><td>0.7271</td><td>0.6182</td><td>0.6589</td><td>0.8772</td></tr><tr><td>Poisson noise</td><td>0.8876</td><td>0.8878</td><td>0.8878</td><td>0.8877</td><td>0.8874</td><td>0.8880</td><td>0.8880</td></tr><tr><td>Salt &amp; Pepper noise</td><td>0.7338</td><td>0.8224</td><td>0.8220</td><td>0.8110</td><td>0.7190</td><td>0.7584</td><td>0.8464</td></tr><tr><td>Salt noise</td><td>0.7147</td><td>0.7384</td><td>0.8096</td><td>0.7476</td><td>0.7127</td><td>0.7156</td><td>0.8352</td></tr><tr><td>Pepper noise</td><td>0.8119</td><td>0.8295</td><td>0.8333</td><td>0.8780</td><td>0.8100</td><td>0.8124</td><td>0.8941</td></tr><tr><td>Speckle noise</td><td>0.8955</td><td>0.9635</td><td>0.9542</td><td>0.9304</td><td>0.9062</td><td>0.9206</td><td>0.9836</td></tr><tr><td>JPEG Compression</td><td>0.8909</td><td>0.9425</td><td>0.9487</td><td>0.9343</td><td>0.9304</td><td>0.9245</td><td>0.9538</td></tr></table>

The PSNR and SSIM metrics for the extracted secret images of the trolleybus and of the ICLR conference headline are shown in Tables \(2\sim 5\). In Tables \(2\sim 5\), the extracted secret image is used as image \(x\) and the original secret image as image \(y\) in the PSNR and SSIM calculation equations (8) and (9). According to these metrics, the security of ISS-GAN outperforms all other state-of-the-art steganography algorithms in the quantitative aspect. In these two groups of experiments, the closest competitors on the PSNR and SSIM metrics are the content adaptive steganography algorithms: WOW, HUGO and S-UNIWARD achieve quite good steganography security. Even so, on the PSNR metric, ISS-GAN still achieves a \(1.01\mathrm{X}\sim 1.12\mathrm{X}\) relative improvement over the second-highest PSNR algorithm with the trolleybus secret image embedded, and <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 6: Extracted secret image of the ICLR conference headline after adding certain attacks.
Row 1: Gaussian white noise, Row 2: Poisson noise, Row 3: Salt and pepper noise, Row 4: Speckle noise, Row 5: JPEG compression. (Results for separate Salt noise and Pepper noise are in the Appendix.) Columns 1-7 are the \(ESTI\) obtained by LSB-TLH, WOW, HUGO, S-UNIWARD, ISGAN, SSGAN and the proposed ISS-GAN algorithms. </center>

Table 4: PSNR metric for extracted secret images of the ICLR conference headline

<table><tr><td>Images/Algorithms</td><td>LSB-TLH</td><td>WOW</td><td>HUGO</td><td>S-UNIWARD</td><td>ISGAN</td><td>SSGAN</td><td>ISS-GAN</td></tr><tr><td>Gaussian noise</td><td>25.6366</td><td>24.5133</td><td>27.9945</td><td>25.8188</td><td>22.8196</td><td>23.6139</td><td>30.0088</td></tr><tr><td>Poisson noise</td><td>27.2302</td><td>27.2400</td><td>27.2332</td><td>27.2281</td><td>27.2352</td><td>27.2242</td><td>27.2497</td></tr><tr><td>Salt &amp; Pepper noise</td><td>19.4886</td><td>20.8146</td><td>24.3997</td><td>21.5674</td><td>19.5006</td><td>20.7629</td><td>36.4338</td></tr><tr><td>Salt noise</td><td>30.3170</td><td>31.8986</td><td>34.0030</td><td>32.3590</td><td>31.3167</td><td>31.7378</td><td>49.0843</td></tr><tr><td>Pepper noise</td><td>17.3986</td><td>17.8621</td><td>18.3596</td><td>19.8744</td><td>16.4737</td><td>17.7671</td><td>35.9698</td></tr><tr><td>Speckle noise</td><td>26.8375</td><td>27.6863</td><td>27.8516</td><td>34.0109</td><td>24.5629</td><td>25.3830</td><td>42.8481</td></tr><tr><td>JPEG Compression</td><td>30.1590</td><td>32.1941</td><td>32.6464</td><td>31.8712</td><td>30.9326</td><td>31.1552</td><td>32.7724</td></tr></table>

achieves a \(1.01\mathrm{X} \sim 1.81\mathrm{X}\) relative improvement over the second-highest PSNR algorithm with the ICLR conference headline secret image embedded. On the SSIM metric, ISS-GAN achieves a \(1.01\mathrm{X} \sim 1.11\mathrm{X}\) relative improvement over the second-highest SSIM algorithm with the trolleybus secret image embedded, and a \(1.01\mathrm{X} \sim 1.46\mathrm{X}\) relative improvement with the ICLR conference headline secret image embedded.

Analyzing the obtained results further, we find that the performance of the state-of-the-art deep learning steganography algorithms is not as good as expected. The main reason is that their authors focus more on generating new cover images that are steganalysis-secure, whereas in our experiments the cover images are fixed. Consequently, the state-of-the-art deep learning steganography algorithms perform only at the level of the LSB steganography algorithms and worse than the content adaptive steganography algorithms. Comparing the results of the first and second groups of experiments, we find that ISS-GAN performs better with the ICLR conference headline secret image: the amount of meaningful pixels and semantic information in the ICLR conference headline image is much smaller than in the trolleybus image. This aligns with the principle of steganography: the more pixels are concealed in the cover image, the worse the security of the stego image.
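Both metrics above are straightforward to compute. The sketch below shows PSNR and the global (single-window) form of SSIM for 8-bit arrays, following the roles of \(x\) and \(y\) used in Tables \(2\sim 5\); the SSIM constants follow the common convention \(C_{1}=(0.01L)^{2}\), \(C_{2}=(0.03L)^{2}\) with dynamic range \(L=255\), which the text does not state explicitly, and production code would average SSIM over local windows.

```python
import numpy as np

def psnr(x, y, L=255.0):
    """Peak signal-to-noise ratio between two same-sized images (cf. Eq. (8))."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)

def ssim_global(x, y, L=255.0):
    """Single-window SSIM (cf. Eq. (9)); C1, C2 stabilize near-zero denominators."""
    x = x.astype(np.float64); y = y.astype(np.float64)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
```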
To further illustrate the effect of the amount of embedded secret information on the security and imperceptibility of the stego image, we plot curves of the quantitative results for the generated stego images \(SOI\) versus the cover images \(CI\), as shown in Figure 7.

<--- Page Split --->

Table 5: SSIM metric for extracted secret images of the ICLR conference headline

<table><tr><td>Images/Algorithms</td><td>LSB-TLH</td><td>WOW</td><td>HUGO</td><td>S-UNIWARD</td><td>ISGAN</td><td>SSGAN</td><td>ISS-GAN</td></tr><tr><td>Gaussian noise</td><td>0.7163</td><td>0.6647</td><td>0.8084</td><td>0.7228</td><td>0.5808</td><td>0.6200</td><td>0.8682</td></tr><tr><td>Poisson noise</td><td>0.7706</td><td>0.7707</td><td>0.7710</td><td>0.7709</td><td>0.7709</td><td>0.7703</td><td>0.7716</td></tr><tr><td>Salt &amp; Pepper noise</td><td>0.6566</td><td>0.7352</td><td>0.8839</td><td>0.7664</td><td>0.6569</td><td>0.7362</td><td>0.9912</td></tr><tr><td>Salt noise</td><td>0.9900</td><td>0.9927</td><td>0.9951</td><td>0.9931</td><td>0.9918</td><td>0.9924</td><td>0.9998</td></tr><tr><td>Pepper noise</td><td>0.5160</td><td>0.5455</td><td>0.5834</td><td>0.6775</td><td>0.4464</td><td>0.5428</td><td>0.9900</td></tr><tr><td>Speckle noise</td><td>0.7422</td><td>0.7718</td><td>0.7790</td><td>0.9340</td><td>0.6461</td><td>0.6752</td><td>0.9904</td></tr><tr><td>JPEG Compression</td><td>0.9841</td><td>0.9888</td><td>0.9892</td><td>0.9880</td><td>0.9863</td><td>0.9877</td><td>0.9897</td></tr></table>

Here we use the pixel-ratio to control the amount of embedded secret information. It is defined as the ratio of the number of valid pixels in the secret image to that in the cover image. For example, if a \(256 \times 256\) cover image has 63000 valid pixels and the secret image has 15000 valid pixels, then the pixel-ratio is 0.2381.

![](images/9_0.jpg) <center>Figure 7: PSNR and SSIM metrics for generated stego images \(SOI\) versus cover images \(CI\) with different pixel-ratios and attacks. </center>

From the curve plots above, we can see that the PSNR and SSIM metrics decline as the pixel-ratio increases. Under noise attack or image compression, the PSNR and SSIM metrics are worse than without attack, and they likewise decline as the pixel-ratio increases. These detailed results further demonstrate the inherent trade-off between the amount of embedded secret information and the security of the stego image. In real applications, ISS-GAN should therefore trade off embedding capacity, imperceptibility and security according to the actual requirements, just like all state-of-the-art steganography methods. These curves tell the user the largest embedded secret capacity at a given imperceptibility and security level, which helps the user choose the most suitable embedded secret image for real secure information transmission systems. For example, if the user wants to generate a stego image with at least 25dB PSNR and 0.97 SSIM versus the cover image, then, considering the possibility of noise attacks and image compression, the embedded secret pixel-ratio should be kept below 0.5.

## 5 CONCLUSION AND FUTURE WORKS

In this paper, we integrate steganography and steganalysis into a single framework. The good performance of ISS-GAN derives from the following factors.

- The discriminative network simulates the features of steganalysis.
It helps ISS-GAN understand the sensitivity of cover images to semantic changes.
- The introduction of the cycle discriminative model and the inconsistent loss helps enhance the quality and security of the generated stego image.
- The mixed training dataset further improves the robustness and security of the ISS-GAN framework: how to resist tampering can be learned from attacked training samples.
- The iterative adversarial training process strengthens the capability of the steganalysis model and the steganography model at the same time. A stronger steganalysis model stimulates the improvement of the steganography model, and vice versa.

In the future, we will study the influence of color cover/secret images and gray cover/secret images on the proposed ISS-GAN framework.

<--- Page Split --->

## 6 APPENDIX

### 6.1 INTRODUCTION TO STEGANOGRAPHY AND STEGANALYSIS

Imagine that you are an enthusiastic shutterbug. You are on a trip to New Orleans and took a nice photo of St. Louis Cathedral. You want to send the beautiful photo and reveal your romantic feelings to your girlfriend. Since your girlfriend is a Ph.D. candidate in computer vision, you want to share the romantic words in her professional manner. You embed the words by changing the least significant bits of the photograph, because this method hides the romantic words in a nearly invisible way. As a social media fan, you share this photo on Facebook. Many friends leave messages expressing their love of this photo, and your girlfriend writes the following sentence under it: "Wonderful photo. By the way, I like the clever idea. I think I am the first audience who has read the hidden words. I love you, too." You will be in a cheerful mood and appreciate the clever communication method by which only you and your girlfriend can "see" the secret information inside the photograph. This simple scenario shows the basic workflow of steganography and steganalysis.

Steganography is defined as the art and science of hiding information in ways that prevent the detection of hidden messages (Obaid, 2015). Steganography literally means "covered writing" and is usually interpreted as hiding information within other information. In the scenario above, you apply steganography to hide the romantic information in the photo of New Orleans. The romantic information is called the secret message, while the original photo of New Orleans is called the cover image. As its counterpart, the main idea of steganalysis is to analyze whether the received information contains any hidden information, and to recover the hidden information if possible (Volkhonskiy et al., 2017). In the scenario above, the social network plays the role of the public channel (Hayes & Danezis, 2017), and the posted photo is called the stego image (Dong et al., 2018), which contains the secret message. Your girlfriend applies steganalysis to discover and recover the secret information you embedded. Since their birth, steganography and steganalysis have made complementary progress.

### 6.2 EXPERIMENT RESULTS

![](images/12_0.jpg) <center>Figure 8: Extracted secret images when stego images are attacked by noise. Column 1: Multiplicative noise, Column 2: Salt and pepper noise, Column 3: Gaussian white noise, Column 4: Poisson noise. Rows 2 and 4: Residual difference between embedded secret images and extracted secret images. (The colors are inverted to emphasize the differences.)
</center> <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 9: Extracted secret image of trolleybus after adding certain attacks. Row 1: Salt noise, Row 2: Pepper noise. Column 1-7 are ESTI obtained by LSB-TLH, WOW, HUGO, S-UNIWARD, ISGAN, SSGAN and proposed ISS-GAN algorithms. </center> ![](images/13_1.jpg) <center>Figure 10: Extracted secret image of ICLR conference headline after adding certain attacks. Row 1: Salt noise, Row 2: Pepper noise. Column 1-7 are ESTI obtained by LSB-TLH, WOW, HUGO, S-UNIWARD, ISGAN, SSGAN and proposed ISS-GAN algorithms. </center> <--- Page Split --->
reject
Reject
5.333333
ICLR_2019_paper_0339
iclr
2,019
# COCO-GAN: CONDITIONAL COORDINATE GENERATIVE ADVERSARIAL NETWORK

Anonymous authors. Paper under double-blind review.

## ABSTRACT

Recent advancements in Generative Adversarial Networks (GANs) have inspired a wide range of works that generate synthetic images. However, current processes have to generate an entire image at once, and resolutions are therefore limited by memory or computational constraints. In this work, we propose COnditional COordinate GAN (COCO-GAN), which generates a specific patch of an image conditioned on a spatial position rather than the entire image at a time. The generated patches are later combined to form a globally coherent full-image. With this process, we show that the generated images achieve quality competitive with the state of the art and that the generated patches are locally smooth between consecutive neighbors. One direct implication of COCO-GAN is that it can be applied to any coordinate system, including cylindrical systems, which makes it feasible to generate panorama images. The fact that the patches are generated independently of one another inspires a wide range of new applications. Firstly, "Patch-Inspired Image Generation" enables us to generate an entire image based on a single patch. Secondly, "Partial-Scene Generation" allows us to generate images within a customized target region. Finally, COCO-GAN's patch generation and massive parallelism enable combining patches to generate full-images at higher resolutions than the state of the art.

## 1 INTRODUCTION

This paper explores the idea of enforcing both the generator and the discriminator of generative adversarial networks (GANs) (Goodfellow et al., 2014) to deal with only partial views via conditional coordinating. By training and inferring with partial views only, the minimum memory requirement can be largely reduced. However, as shown in Section 3.3, naive approaches fail to generate high-quality images, either producing clear seams or failing entirely to generate reasonable structures. We investigate this problem and propose a new GAN architecture: COnditional COordinate GAN (COCO-GAN). Given a latent vector and multiple spatial positions, the generator of COCO-GAN learns to produce image patches independently according to the spatial positions. The discriminator, on the other hand, learns to judge whether adjacently generated patches are structurally sound and visually homogeneous. During the inference phase, the generated patches can be used directly to compose a complete full-image without further post-processing. Owing to the adversarial loss provided by the discriminator, the composed full-image is locally smooth and globally convincing. We show several randomly selected full-images generated by COCO-GAN in Figure 2a. In Section 3.2, we visualize the interpolation between two spatial positions, showing that COCO-GAN has inter-class continuity as common conditional GANs do. Quantitative evaluations with the "Fréchet Inception Distance" (FID) (Heusel et al., 2017) score are presented in Table 1. Without additional hyper-parameter tuning, the evaluations on the CelebA (Liu et al., 2015) and LSUN (Yu et al., 2015) datasets suggest that COCO-GAN is competitive with other state-of-the-art GANs. To further demonstrate the effectiveness of COCO-GAN, we perform an ablation study in Section 3.3. In Section 3.4, we demonstrate that COCO-GAN is a flexible coordinate-system-aware framework suitable for different spatial coordinate systems.
Common learning frameworks are confined to the Cartesian coordinate system, which restricts them from learning the characteristics of certain image formats.

<--- Page Split --->

![](images/1_0.jpg) <center>Figure 1: An overview of COCO-GAN (training phase). The full-images are only generated during the testing phase (Figure 9 in the Appendix). </center>

For instance, panoramas should be trained with a cylindrical coordinate system, which has a "cyclic topology" in the horizontal direction. In comparison, COCO-GAN directly learns with a cylindrical coordinate system: in Figure 6 and Figure 14, the generated full scenes are naturally cylindrical and continuous across the left and right borders.

Besides, we find that COCO-GAN has multiple interesting applications and characteristics. We select three representative new applications as case studies:

Patch-Inspired Image Generation takes a real image patch as input. COCO-GAN can be inspired by the given patch and generate a full-image proposal. This proposal is partially similar to the given patch, while globally realistic and reasonable. This setting is different from conventional image completion/reconstruction, since the information loss during cropping is extreme and the spatial position information is not provided to the model. Therefore, COCO-GAN has to infer the position of the given patch before generating the whole image. Further experiments are presented in Section 3.5.

Partial-Scene Generation shows that COCO-GAN can generate partial scenes without spending additional computation outside the designated regions. This capability is exclusively beneficial to applications that are interested in only part of the full scene. For instance, virtual reality (VR) is only interested in the user's viewport direction, and COCO-GAN can seamlessly adapt to such viewport-aware settings.

Computation-Friendly Generation demonstrates the computational merits. First, since the patches are produced independently, the generator can generate patches with high parallelism. Second, as full-image generation is decomposed into patch generation, the minimum memory requirement for generating images is reduced. This characteristic enables COCO-GAN to generate images of very high resolution or with much more complex structures.

## 2 COCO-GAN

Overview. COCO-GAN consists of three networks: a generator \(G\), a discriminator \(D\), and an auxiliary head \(Q\). These three networks are trained with four loss terms: the patch Wasserstein loss \(L_{W}\), the patch gradient penalty loss \(L_{GP}\), the spatial consistency loss \(L_{S}\), and the content consistency loss \(L_{C}\). Compared to conventional GANs that use full images as input for both \(G\) and \(D\), COCO-GAN only uses micro patches for \(G\) and macro patches for \(D\). The details of micro and macro patches are described in the paragraph "Spatial coordinate system" below. \(D\) has two auxiliary prediction heads: the content vector prediction head (\(Q\)) and the spatial condition prediction head. \(Q\) is trained
with an extra optimizer, independent of \(G\) and \(D\); it aims to minimize \(L_{C}\) and is used to estimate the original latent vector of a given sample. The spatial condition prediction head, on the other hand, is jointly trained with the discriminator; it aims to minimize \(L_{S}\) and is used to estimate the macro spatial position of a given sample. Both auxiliary prediction heads are simple feed-forward networks that take one of the high-level feature maps of the discriminator as input. COCO-GAN considers two coordinate systems: a micro coordinate system on the generator's side, which describes the finer spatial positions of micro patches, and a macro coordinate system on the discriminator's side, which describes the coarser spatial positions of macro patches. The discriminator learns to distinguish between the generated macro patch \(\tilde{p}^{M}\) and the real macro patch \(p^{M}\). The discriminator also learns to predict two auxiliary outputs: a predicted macro coordinate \(\tilde{c}^{M}\) and a predicted latent vector \(\tilde{z}\). These two outputs are then used to compute the two auxiliary losses: the spatial consistency loss \((L_{S})\) and the content consistency loss \((L_{C})\). The objective of the discriminator \(D\) is \(L_{W} + L_{GP} + L_{S} + L_{C}\), that of the generator \(G\) is \(-L_{W} + L_{S} + L_{C}\), and that of the content vector prediction head \(Q\) is \(-L_{W} + L_{C}\).

Spatial coordinate system. Before presenting the details of these four loss terms, we first introduce our notation. We design two spatial coordinate systems: a micro coordinate system \(C^{m}\) for the generator \(G\) and a macro coordinate system \(C^{M}\) for the discriminator \(D\). Let \(S^{N}\) be a space of spatial position sequences; each spatial position sequence \(s = \langle c_{i}^{m}\rangle_{i = 1}^{N}\in S^{N}\) is an ordered sequence with \(c_{i}^{m}\in C^{m}\). During COCO-GAN training, \(R\) is a set of predefined spatial constraints for sampling \(s\) from a uniform distribution \(\mathcal{U}(S^{N},R)\). The generator \(G\) is conditioned on each spatial position \(c_{i}^{m}\) and learns to produce the corresponding micro patch \(\tilde{p}_{i}^{m} = G(z|c_{i}^{m})\). The sequence \(\langle \tilde{p}_{i}^{m}\rangle_{i = 1}^{N} = \langle G(z|c_{i}^{m})\rangle_{i = 1}^{N}\) of micro patches is produced independently while sharing the same latent vector \(z\) across the spatial position sequence \(s\). The design of \(R\) may need to change slightly with the selection of \(C^{M}\) and \(C^{m}\); the design principle is that the generated micro patches \(\langle \tilde{p}_{i}^{m}\rangle_{i = 1}^{N}\) should be spatially close to each other. The micro patches are then merged by a merging function \(T_{(m\to M)}\) to form a complete macro patch \(\tilde{p}^{M} = T_{(m\to M)}(\langle \tilde{p}_{i}^{m}\rangle_{i = 1}^{N})\) as a coarser partial view of the full scene \(\tilde{x}\). Meanwhile, we assign \(\tilde{p}^{M}\) a spatial position \(c^{M}\) under the macro coordinate system. In Figure 1, we illustrate one of the simplest designs for these functions, which we adopt throughout our experiments: the four micro patches are always neighbors of each other and can be directly combined into a square macro patch with \(T_{(m\to M)}\), which is simple concatenation. Figure 3 shows some examples of micro and macro patch generation. On the real-sample side, we also sample \(s\sim \mathcal{U}(S^{N},R)\), but we directly map \(s\) to a macro position \(c^{M}\). We then design a transformation \(T_{(X\to M)}\), matching \(T_{(m\to M)}\), that transforms a real full-image \(x\) into a real macro patch \(p^{M} = T_{(X\to M)}(x|c^{M})\).
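Under the simplest 2×2 design just described, \(T_{(m\to M)}\) and \(T_{(X\to M)}\) can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation; the NCHW tensor layout, the patch ordering, and the square crop are the only assumptions.

```python
import torch

def merge_micro_to_macro(patches):
    """T_(m->M): concatenate four neighboring micro patches (each NCHW), ordered
    [top-left, top-right, bottom-left, bottom-right], into one macro patch."""
    tl, tr, bl, br = patches
    top = torch.cat([tl, tr], dim=3)        # concatenate along width
    bottom = torch.cat([bl, br], dim=3)
    return torch.cat([top, bottom], dim=2)  # concatenate along height

def crop_full_to_macro(x, top, left, size):
    """T_(X->M): crop a real full-image batch to a macro patch of the same shape
    as the merged generated patch; (top, left) is derived from the macro position c^M."""
    return x[:, :, top:top + size, left:left + size]
```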
The shape of \(p^{M}\) is the same as that of \(\tilde{p}^{M}\); in our simplest experiment setting, \(T_{(X\to M)}\) is a simple cropping function that crops \(x\) into \(p^{M}\). During the testing phase, depending on the design of \(C^{m}\), we can directly enumerate a corresponding spatial position sequence \(\langle c_{j}^{m}\rangle_{j = 1}^{K}\), which is used to independently produce the spatially disentangled patches that constitute the full-image. Figure 9 demonstrates how the full-image is generated during the testing phase, and Figure 2a shows some examples of full-image generation.

Loss functions. The patch Wasserstein loss \(L_{W}\) is a patch-level Wasserstein distance loss similar to the Wasserstein-GAN (Arjovsky et al., 2017) loss. It forces the discriminator to discriminate between the real macro patch \(p^{M}\) and the generated macro patch \(\tilde{p}^{M}\), and, on the other hand, encourages the generator to confuse the discriminator with seemingly realistic \(\tilde{p}^{M}\). Its complete form is

\[L_{W} = \mathbb{E}\left[D(T_{(X\to M)}(x|c^{M}))\right] - \mathbb{E}\left[D(T_{(m\to M)}(\langle G(z|c_{i}^{m})\rangle_{i = 1}^{N}))\right], \quad (1)\]

where \(x\sim \mathbb{P}_{r}\) and \(\langle c_{i}^{m}\rangle_{i = 1}^{N}\sim \mathcal{U}(S^{N},R)\). Note that \(\mathbb{P}_{r}\) is the real data distribution. We also apply a Gradient Penalty (Gulrajani et al., 2017) to the patch generation:

\[L_{G P} = \mathbb{E}\left[(\| \nabla_{\tilde{p}^{M}}D(\tilde{p}^{M})\|_{2} - 1)^{2}\right], \quad (2)\]

where \(\tilde{p}^{M}\sim \mathbb{P}_{g}\). Note that \(\mathbb{P}_{g}\) is the generator distribution.
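A sketch of these two adversarial terms, assuming a critic `D` that returns one scalar score per sample; note that Eq. (2) as written evaluates the gradient norm at generated macro patches, whereas the original WGAN-GP of Gulrajani et al. (2017) evaluates it at random interpolates between real and fake samples.

```python
import torch

def patch_wasserstein_loss(D, real_macro, fake_macro):
    # L_W (Eq. 1): critic score gap between real and generated macro patches
    return D(real_macro).mean() - D(fake_macro).mean()

def gradient_penalty(D, fake_macro, weight=1.0):
    # L_GP (Eq. 2): (||grad_p D(p)||_2 - 1)^2 at generated macro patches.
    # weight is an assumed coefficient; Eq. (2) has none (WGAN-GP commonly uses 10).
    fake_macro = fake_macro.detach().requires_grad_(True)
    scores = D(fake_macro)
    grads, = torch.autograd.grad(scores.sum(), fake_macro, create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return weight * ((grad_norm - 1.0) ** 2).mean()
```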
<--- Page Split --->

The spatial consistency loss \(L_{S}\) is similar to the ACGAN loss (Odena et al., 2017). A slight difference is that \(c_{i}^{m}\) takes relatively continuous values compared to the discrete setting of ACGAN. As a result, we use a distance measurement for \(L_{S}\), namely an \(L_{2}\)-loss between \(\tilde{c}^{M}\) and \(c^{M}\). It trains the generator of COCO-GAN to generate the corresponding micro patch \(G(z|c_{i}^{m})\) with respect to the given spatial condition \(c_{i}^{m}\). The spatial consistency loss is

\[L_{S} = \mathbb{E}\left[\| c^{M} - \tilde{c}^{M}\|_{2}\right]. \quad (3)\]

On the other hand, the content consistency loss \(L_{C}\) is similar to a hybrid of the infoGAN loss (Chen et al., 2016) and the latent space constraint loss (Chang et al., 2018). The former uses a separate optimizer to optimize an auxiliary network \(Q\), which aims to reconstruct the original latent vector. The latter suggests that the latent space consistency loss can be a distance measurement instead of minimizing the KL-divergence as the original infoGAN does. In our experiments, we train the extra \(Q\) network with a separate optimizer and directly minimize an \(L_{1}\)-loss between \(\tilde{z}\) and \(z\). \(L_{C}\) forces the generator to produce shared context between patches that share the same latent vector but are located at different micro coordinate positions. The content consistency loss is defined by

\[L_{C} = \mathbb{E}\left[\| z - \tilde{z}\|_{1}\right]. \quad (4)\]

Since \(z\), \(c^{m}\), and \(c^{M}\) are directly concatenated and fed to \(G\), we ensure that \(Z\), \(C^{m}\), and \(C^{M}\) share a similar scale: we evaluate the maximum possible pixel position of \(C^{m}\) and \(C^{M}\), then normalize the range into \([-1,1]\). For the latent space \(Z\), although uniform sampling in \([-1,1]\) should be numerically more compatible with the normalized spatial condition space, we empirically observe no significant difference even when switching to sampling from a zero-mean, unit-variance normal distribution. For simplicity, we adopt the uniform sampling strategy throughout our experiments.

Training details. Our generator and discriminator architectures follow the idea of the projection discriminator (Miyato & Koyama, 2018), both with ResNet-based (He et al., 2016) architectures and with class projection added to the discriminator. All convolutional and feed-forward layers of the generator and the discriminator use the spectral normalization scheme (Miyato et al., 2018), as suggested in (Zhang et al., 2018). A more detailed architecture diagram is illustrated in Appendix B. We also add conditional batch normalization (CBN) (Dumoulin et al., 2016) to the generator. In our design, CBN is conditioned on the given spatial positions and the input latent vector; it learns to normalize the feature maps with respect to the given conditions. However, our implementation has a crucial difference from the one described in (Miyato & Koyama, 2018): our spatial positions are real values rather than discrete classes. We therefore adopt a strategy similar to (de Vries et al., 2017) with a slight modification: instead of using MLPs to produce \(\Delta \gamma\) and \(\Delta \beta\), we make the MLPs directly output \(\gamma\) and \(\beta\). For a \(K\)-channel input feature map \(i_{K}\) with recorded mean \(\mu_{K}\) and variance \(\sigma_{K}\), we create two learnable MLP layers, \(\mathrm{MLP}_{\gamma}\) and \(\mathrm{MLP}_{\beta}\), and compute the output feature map as \(o_{K} = \left((i_{K} - \mu_{K}) / \sigma_{K}\right) \cdot \mathrm{MLP}_{\gamma}(\cdot) + \mathrm{MLP}_{\beta}(\cdot)\), where both MLPs take the conditioning input. We use the Adam (Kingma & Ba, 2014) optimizer with \(\beta_{1} = 0\) and \(\beta_{2} = 0.999\) for both the generator and the discriminator. The learning rates follow the Two Time-scale Update Rule (TTUR) (Heusel et al., 2017): 0.0001 for the generator and 0.0004 for the discriminator. We do not specifically balance the generator and the discriminator by manually setting how many iterations to update the generator once, as described in the WGAN paper (Arjovsky et al., 2017).
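The CBN variant above can be sketched as follows: two MLPs directly output a per-channel \(\gamma\) and \(\beta\) from the conditioning input (the concatenated latent vector and spatial position, per the description above). This is a minimal sketch; the single-linear-layer MLPs and the unnormalized affine output are assumptions.

```python
import torch.nn as nn

class ConditionalBN(nn.Module):
    """Batch normalization whose scale/shift are predicted from a condition
    (e.g. the concatenated latent vector z and spatial position c)."""
    def __init__(self, num_channels, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)  # (i_K - mu_K) / sigma_K
        self.mlp_gamma = nn.Linear(cond_dim, num_channels)    # outputs gamma directly
        self.mlp_beta = nn.Linear(cond_dim, num_channels)     # outputs beta directly

    def forward(self, x, cond):
        gamma = self.mlp_gamma(cond).unsqueeze(-1).unsqueeze(-1)  # broadcast over H, W
        beta = self.mlp_beta(cond).unsqueeze(-1).unsqueeze(-1)
        return self.bn(x) * gamma + beta
```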
## 3 EXPERIMENTS

Although the COCO-GAN framework supports spatial positions that are uniformly and continuously sampled within the normalized range \([-1,1]\), we empirically find that sampling only the discrete spatial positions used at inference time results in better generation quality. For instance, if the full-image is formed by concatenating four micro patches along each axis, the model uniformly samples spatial positions only from the set of four discrete points \(\{-1, -1/3, 1/3, 1\}\) on each axis of the coordinate system. We adopt this uniform, discrete sampling strategy throughout the experiments. The root cause of the quality degradation is still unclear; one possible hypothesis is that the task difficulty increases dramatically when sampling continuous spatial positions. We flag further analysis of, and solutions to, this phenomenon as an important future research direction. Some comparisons between continuous and discrete sampling are shown in Appendix F.

<--- Page Split --->

![](images/4_0.jpg) <center>(a) CelebA (128×128). (b) LSUN (256×256). </center>

![](images/4_1.jpg) <center>Figure 3: Generated samples of micro/macro patches. Each micro patch, macro patch, and full-image in Figure 2a at the same relative position uses the same latent vector. </center>

Figure 2: Without any post-processing, the generated full-images of COCO-GAN are visually smooth and globally coherent. More full-image generation results are shown in Figure 12.

Table 1: The FID scores suggest that COCO-GAN is competitive with other state-of-the-art GANs. FID scores are measured between 50,000 real and generated samples based on the original implementation provided at https://github.com/bioinf-jku/TTUR.

<table><tr><td>Dataset</td><td>DCGAN + TTUR</td><td>PGGAN</td><td>WGAN-GP + TTUR</td><td>COCO-GAN</td></tr><tr><td>CelebA (64×64)</td><td>12.5</td><td>-</td><td>-</td><td>4.99</td></tr><tr><td>CelebA (128×128)</td><td>-</td><td>7.30</td><td>-</td><td>8.35</td></tr><tr><td>LSUN - Bedroom (64×64)</td><td>57.5</td><td>-</td><td>9.5</td><td>-</td></tr><tr><td>LSUN - Bedroom (128×128)</td><td>-</td><td>-</td><td>-</td><td>3.06</td></tr><tr><td>LSUN - Bedroom (256×256)</td><td>-</td><td>8.34</td><td>-</td><td>16.59</td></tr></table>

### 3.1 QUALITY OF GENERATED IMAGES

We start by validating COCO-GAN on CelebA (Liu et al., 2015) and the bedroom category of LSUN (Yu et al., 2015). For the CelebA dataset, the resolutions of the full-image, micro patch, and macro patch are \(128 \times 128\), \(32 \times 32\), and \(64 \times 64\), respectively. We choose \(32 \times 32\) for the micro patches in this experiment since a smaller patch would be too small for the model (or even for a human) to observe useful information, while a larger patch would make the macro patch size too similar to the full-image size, making it hard to demonstrate that COCO-GAN can learn without access to the full-image. For the LSUN dataset, the full-image is of \(256 \times 256\) resolution, with \(64 \times 64\) micro patches and \(128 \times 128\) macro patches. We choose \(64 \times 64\) for the micro patch size so that the micro-patch-to-full-image ratio is the same as in the CelebA experiment. We report the Fréchet Inception Distance (FID) (Heusel et al., 2017) in Table 1 as a quantitative comparison with state-of-the-art GANs. Since many other state-of-the-art models do not use the full resolution of the datasets, we accordingly run COCO-GAN at different resolutions without changing any hyper-parameters other than the input size and the micro/macro patch sizes. Throughout these experiments, we always keep the micro patch size at 1/16 (1/4 in height and 1/4 in width) of the full-image size and the macro patch size at 1/4 (1/2 in height and 1/2 in width) of the full-image size. Without additional hyper-parameter tuning, the results suggest that COCO-GAN is both qualitatively and quantitatively competitive with other state-of-the-art GANs. In Appendix H, we also provide the Wasserstein distance and the FID score over time as training indicators; the curves suggest that COCO-GAN is stable during training.

### 3.2 LATENT SPACE CONTINUITY

To demonstrate the latent space continuity more precisely, we perform interpolation experiments in three directions: micro patch interpolation, spatial position interpolation, and full-image interpolation.

<--- Page Split --->

![](images/5_0.jpg) <center>Figure 5: (a) Micro patches interpolation with fixed spatial position. Note that each left-right pair of interpolation samples uses the same latent vectors.
(b) Full-images interpolation between two latent vectors. More interpolation results are shown in Appendix D. </center>

Micro Patch Interpolation. The simplest interpolation experiment is in-class (i.e., fixed spatial condition) interpolation between latent vectors. With a fixed spatial position \(c_{i}^{m}\), we randomly sample two latent vectors \(z_{1}\) and \(z_{2}\), then interpolate between \(z_{1}\) and \(z_{2}\) along a slerp path (White, 2016). The results in Figure 5a suggest that, for each spatial position, the latent space \(Z\) is continuous.

Spatial Position Interpolation. Another simple interpolation experiment is inter-class (i.e., between classes) interpolation with a fixed latent vector. We linearly interpolate the spatial position within \([-1,1]\) while the latent vector \(z\) is fixed. The results in Figure 4 show that, although we only uniformly sample spatial positions from a discrete set, the spatial position interpolation is still continuous. An interesting observation concerns the interpolation at the position between the eyebrows: in this example, COCO-GAN does not know about the smooth area (the glabella) between the two eyes, due to the discrete and sparse spatial position sampling strategy. Instead, it learns to deform the shape of one eye directly to switch to the other. This phenomenon raises an interesting discussion: even when the model learns to produce high-quality face images, it may still learn wrong relationships between the objects behind the scene.

![](images/5_1.jpg) <center>Figure 4: An example of spatial position interpolation showing the spatial continuity of the micro patches. The spatial conditions are interpolated within the range \([-1,1]\) of the micro coordinate with a fixed latent vector. More examples are shown in Appendix I. </center>

Full-Image Interpolation. The hardest interpolation is to directly interpolate full-images between two latent vectors: all micro patches generated at different spatial positions must change synchronously to make the full-image interpolation smooth. We randomly sample two latent vectors \(z_{1}\) and \(z_{2}\). For any interpolation point \(z'\) on the slerp path between \(z_{1}\) and \(z_{2}\), the generator uses the full spatial position sequence \(\langle c_{j}^{m}\rangle_{j = 1}^{K}\) to generate all corresponding patches. We then merge all generated micro patches with \(T_{(m \rightarrow M)}\) to form a full-image \(x'\). The interpolation results in Figures 5a and 5b show that all micro patches interpolate smoothly and synchronously. This suggests that COCO-GAN learns the main latent space \(Z\) as well as the correlation between micro patches, and that the spatial conditions \(C^{m}\) are disentangled.
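The slerp path referenced above (White, 2016) is standard spherical linear interpolation; a minimal NumPy sketch, not code from the paper:

```python
import numpy as np

def slerp(z1, z2, t):
    """Spherical linear interpolation between latent vectors z1 and z2, t in [0, 1]."""
    z1n, z2n = z1 / np.linalg.norm(z1), z2 / np.linalg.norm(z2)
    omega = np.arccos(np.clip(np.dot(z1n, z2n), -1.0, 1.0))  # angle between the vectors
    if np.isclose(omega, 0.0):
        return (1.0 - t) * z1 + t * z2  # nearly parallel: fall back to linear interpolation
    return (np.sin((1.0 - t) * omega) * z1 + np.sin(t * omega) * z2) / np.sin(omega)

# e.g. an 8-frame interpolation path:
# frames = [slerp(z1, z2, t) for t in np.linspace(0.0, 1.0, 8)]
```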
Despite \(\mathcal{M}\) still implicitly uses conditional coordinating in feature map selection, we observe that it fails to generate high- quality samples in comparison with COCO- GAN. We show the FID score results in Table 2. More experimental details are shown in Figure 15 and some generated samples in Figure 16. The trade- offs of each component. We perform ablation study in CelebA \(64\times 64\) setting with five configurations: "continuous sampling" demonstrates that using continuous uniform sampling strategy for spatial positions during training will result in moderate generation quality drop; "optimal \(D\) " lets the discriminator directly discriminate the full image while the generator still generates micro patches; "optimal \(G\) " lets the generator directly generate the full image while the discriminator still discriminates macro patches; "without \(Q\) " removes the \(Q\) network from COCO- GAN; "multiple \(G\) " trains an individual generator for each spatial position. The results in Table 2 suggest \(Q\) network is not a necessary component if not considering the "Patch- Inspired Image Generation" application. Surprisingly, despite the convergence speed is different, "optimal discrimi Table 2: Best FID scores in the first 150 epochs. COCO-GAN usually converges well in CelebA \(64\times 64\) setting. <table><tr><td>Model</td><td>FID</td></tr><tr><td>M</td><td>72.82</td></tr><tr><td>M + PD (100 epochs)</td><td>90.87</td></tr><tr><td>M + PD + macro D</td><td>60.36</td></tr><tr><td>COCO-GAN (cont. sampling)</td><td>6.13</td></tr><tr><td>COCO-GAN + optimal D</td><td>4.05</td></tr><tr><td>COCO-GAN + optimal G</td><td>6.12</td></tr><tr><td>COCO-GAN + without Q</td><td>4.87</td></tr><tr><td>Multiple G</td><td>7.26</td></tr><tr><td>COCO-GAN (ours)</td><td>4.99</td></tr></table> nator", COCO- GAN, and "optimal generator" (ordered by convergence speed from fast to slow) can all achieve similar FID scores if with sufficient training time. The difference in convergence speed is expected, since "optimal discriminator" provides the generator with more accurate and global adversarial loss. In contrast, the "optimal generator" has relatively more parameters and layers to optimize, which causes the convergence speed slower than COCO- GAN. Lastly, the "multiple generators" setting cannot converge well. Although it can also concatenate micro patches without obvious seams as COCO- GAN does, the full- image results often cannot agree and are not globally coherent. More experimental details and generated samples are shown in Figure 17 and Figure 18. ### 3.4 PANORAMA GENERATION AND PARTIAL SCENE GENERATION Generating panoramas using GANs is an interesting problem but has never been carefully investigated. Different from simple image generation, panoramas are expected to be cylindrical and cyclic in the horizontal direction. However, normal GANs do not have built- in ability to handle such cyclic characteristic if without special types of padding mechanism support (Cheng et al., 2018). In contrast, COCO- GAN is a coordinate- system- aware learning framework. We can easily adapt a cylindrical coordinate system, and generate panoramas that are cyclic in the horizontal direction as shown in Figure 6 and Figure 14. To train COCO- GAN with a panorama dataset under a cylindrical coordinate system, the spatial position sampling strategy needs to be slightly modified. 
In the horizontal direction, the sampled value within the normalized range \([-1,1]\) is treated as an angular value \(\theta\), which is then projected with \(\cos(\theta)\) and \(\sin(\theta)\) to form a unit circle on a 2D plane. Together with the normal sampling on the vertical axis, this forms a cylindrical coordinate system.

We first take the sky-box format of the Matterport3D (Chang et al., 2017) dataset to obtain panoramas for training and testing. The sky-boxes consist of the six faces of a 3D cube. We preprocess and project each sky-box to a cylinder using the Mercator projection; the resulting cylindrical image size is \(768\times 512\). Since the Mercator projection creates extreme sparsity near the northern and southern poles, which carry little information, we directly remove the upper and lower quarters of the image. The panoramas used for training are therefore \(768\times 256\) pixels.

We also find that COCO-GAN has an interesting connection to virtual reality (VR). VR is known to have a tight computational budget due to its high frame-rate requirements and high resolution demands, and it is hard to generate a full scene for VR in real time using standard generative models. Some recent VR studies on omnidirectional view rendering and streaming (Corbillon et al., 2017b; Ozcinar et al., 2017; Corbillon et al., 2017a) focus on reducing computational cost or network bandwidth by adapting to the user's viewport. COCO-GAN can easily inherit the same strategy and achieve user-viewport-aware partial-scene generation, thanks to its effectiveness in spatial disentanglement and panorama generation. This can largely reduce unnecessary computational cost outside the region of interest, making image generation in VR more applicable.

<--- Page Split --->

![](images/7_0.jpg) <center>Figure 6: The generated panorama is cyclic in the horizontal direction since COCO-GAN is trained with a cylindrical coordinate system in this experiment. Here we paste the same generated panorama twice (from \(360^{\circ}\) to \(720^{\circ}\)) to illustrate that it indeed has the cyclic property. </center>
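The cylindrical spatial condition amounts to replacing the raw horizontal coordinate with its projection onto a unit circle. A minimal sketch; the function name and the assumption that \([-1,1]\) spans the full \(360^{\circ}\) are ours, not the paper's:

```python
import numpy as np

def cylindrical_condition(h, v):
    """Map a horizontal sample h in [-1, 1] (treated as an angle) and a vertical
    sample v in [-1, 1] to a cylindrical spatial condition (cos(theta), sin(theta), v)."""
    theta = h * np.pi  # assumed scaling: [-1, 1] covers one full revolution
    return np.array([np.cos(theta), np.sin(theta), v])
```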
First, the spatial position of the macro patch is not explicitly given. Existing image inpainting frameworks assume that the remaining parts of the image are already at their optimal positions, whereas COCO- GAN can infer by itself. Take human face for instance, if only given a cropped patch of a face image without further providing the position of the patch in the original image, COCO- GAN can still infer the position of the patch and reconstruct a full face image, while common inpainting frameworks may not reconstruct a correctly structured and well centered human face. Second, most inpainting frameworks do not assume the image is extremely damaged, like loosing \(75\%\) of information in our examples. In Figure 8, we accordingly compare COCO- GAN with partial convolution (Liu et al., 2018a), which is one of the state- of- the- art image inpainting methods. For the partial convolution method, we simply place the macro patch at the center since the spatial position of the macro patch is unknown. The results show that, unlike COCO- GAN, the partial convolution method (Liu et al., 2018a) cannot handle this situation well. ### 3.6 COMPUTATION-FRIENDLY GENERATION Recent studies in high- resolution image generation (Karras et al., 2017; Mescheder et al., 2018) have gained lots of success. We observe a shared conundrum of these existing works is the memory requirement. They usually require some workarounds to improve memory consumption during training, such as decreasing the batch size (Karras et al., 2017) or cutting down the number of feature maps (Mescheder et al., 2018). The memory requirement problem cannot be easily resolved without specific hardware support, which makes the generation of over \(1024 \times 1024\) resolution images hard to achieve. These types of high- resolution images are commonly seen in panoramas, street views, and medical images. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 7: Patch-inspired image generation can loosely retain some local structure or global characteristic of the original image. The red boxes are the sampled spatial positions that crop the full-images in (a) into the macro patches in (b). (c) & (d) show the patch-inspired generated macro patches and full-images based on \(\bar{z}\) . The blue boxes visualize the predicted spatial positions \(\bar{c}^{m}\) . Since the information loss of the cropping process (from (a) to (b)) is critical, we do not expect (a) and (d) to be identical. Instead, (b) and (c) should be visually similar and (d) should be globally coherent. More examples are shown in Appendix G. </center> In contrast, COCO- GAN only requires partial views of the full- image for both training and inference. This characteristic largely reduces the minimum memory requirement while dealing with high- resolution images. In Section 3.1, we show that COCO- GAN can generate and compose a \(128 \times 128\) image with patches of \(32 \times 32\) resolution. In other words, we can train a CIFAR- 10- sized model to generate \(128 \times 128\) resolution images while the results are still high quality. The characteristic of spatial disentanglement is the first step toward super- high- resolution image generation with limited memory resource. 
### 3.6 COMPUTATION-FRIENDLY GENERATION

Recent studies in high-resolution image generation (Karras et al., 2017; Mescheder et al., 2018) have seen great success. A shared conundrum of these existing works is the memory requirement: they usually need workarounds to reduce memory consumption during training, such as decreasing the batch size (Karras et al., 2017) or cutting down the number of feature maps (Mescheder et al., 2018). The memory requirement problem cannot easily be resolved without specific hardware support, which makes the generation of images beyond \(1024 \times 1024\) resolution hard to achieve. Such high-resolution images are common among panoramas, street views, and medical images.

<--- Page Split --->

![](images/8_0.jpg) <center>Figure 7: Patch-inspired image generation can loosely retain some local structure or global characteristic of the original image. The red boxes are the sampled spatial positions that crop the full-images in (a) into the macro patches in (b). (c) & (d) show the patch-inspired generated macro patches and full-images based on \(\tilde{z}\). The blue boxes visualize the predicted spatial positions \(\tilde{c}^{M}\). Since the information loss of the cropping process (from (a) to (b)) is critical, we do not expect (a) and (d) to be identical. Instead, (b) and (c) should be visually similar and (d) should be globally coherent. More examples are shown in Appendix G. </center>

In contrast, COCO-GAN only requires partial views of the full-image for both training and inference. This characteristic largely reduces the minimum memory requirement when dealing with high-resolution images. In Section 3.1, we show that COCO-GAN can generate and compose a \(128 \times 128\) image from patches of \(32 \times 32\) resolution; in other words, we can train a CIFAR-10-sized model to generate \(128 \times 128\) resolution images while the results remain of high quality. This spatial disentanglement is a first step toward super-high-resolution image generation with limited memory resources. Moreover, some state-of-the-art structures for deep models tend to require more parameters (if the number of channels is not reduced), such as the skip-connection structure (He et al., 2016) used in the projection discriminator (Miyato & Koyama, 2018), inception (Szegedy et al., 2015), and self-attention (Zhang et al., 2018). COCO-GAN can incorporate many of these complex structures since the COCO-GAN model is relatively light-weight, with a smaller receptive field and a shallower structure. Furthermore, the spatial disentanglement makes the generation of micro patches independent once the latent vector and the spatial positions are decided. This enables micro patch generation with high parallelism that takes advantage of modern computation architectures.

## 4 RELATED WORK

The Generative Adversarial Network (GAN) (Goodfellow et al., 2014) has shown its potential and flexibility on many different tasks. Recent studies on GANs focus on generating high-resolution and high-quality synthetic images in different settings: for instance, generating images of \(1024 \times 1024\) resolution (Karras et al., 2017; Mescheder et al., 2018), generating images with low-quality synthetic images as conditions (Shrivastava et al., 2017), and applying segmentation maps as conditions (Wang et al., 2017). However, these prior works

![](images/8_1.jpg) <center>Figure 8: Patch-inspired image generation is globally more coherent than the partial convolution method. </center>

share a similar assumption: the model must access and generate the full-image in a single shot. This assumption consumes an unavoidable and significant amount of memory when the targeted image is relatively large, making it difficult to satisfy memory requirements for both training and inference. Searching for a solution to this problem was one of the initial motivations of this work.

<--- Page Split --->

COCO-GAN shares some similarities with PixelRNN (van den Oord et al., 2016), which is a pixel-level generation framework, whereas COCO-GAN is a patch-level generation framework. PixelRNN transforms the image generation task into a sequence generation task and maximizes the log-likelihood directly. In contrast, COCO-GAN aims at disentangling the spatial dependencies between micro patches, and it utilizes the adversarial loss to ensure smoothness between adjacent micro patches. CoordConv (Liu et al., 2018b) is another similar work, but with fundamental differences: CoordConv provides spatial positioning information directly to the convolutional kernels in order to address the coordinate transform problem, and shows improvements on multiple tasks. In contrast, COCO-GAN uses spatial conditions as an input condition of the generator and an auxiliary output of the discriminator. This setting enforces both the generator and the discriminator to learn the coordinates of, and the correlations between, the generated micro patches. We have also considered incorporating CoordConv into COCO-GAN; however, empirical results show little visual improvement. Group convolution (Krizhevsky et al., 2012) is another work highly related to COCO-GAN. While group convolution aims at reducing computational cost by disentangling the channels inside a convolution layer, our model learns to disentangle at the spatial level and is highly parallelizable. However, the micro-patch generation of COCO-GAN uses padding in all feature maps when applying convolutions.
This causes a large number of FLOPs for each image generation. We are particularly interested in this phenomenon and flag utilizing the spatial disentanglement to reduce the total number of FLOPs as important future work.

## 5 CONCLUSION AND DISCUSSION

In this work, we propose COCO-GAN, a new generative model that divides full-image generation into the generation of non-overlapping patches. Through the experiments, we show that COCO-GAN can learn and perform inference with limited partial views. Although the model is restrained from accessing the full scene, it can still generate high-quality samples without extra hyper-parameter tuning. We also demonstrate that COCO-GAN is a coordinate-system-aware framework, taking panorama generation within a cylindrical coordinate system as a case study. Furthermore, we highlight the advantages of COCO-GAN by showcasing three applications: "Patch-Inspired Image Generation", "User-Viewport-Aware Partial Scene Generation", and "Computation-Friendly Generation".

Although the generation quality of COCO-GAN is competitive with other state-of-the-art GANs without any post-processing, we sometimes still observe that local structures of the generated samples may be discontinuous or mottled. This indicates that extra refinement and blending methods remain important for COCO-GAN to generate more stable and reliable samples. We adopt a discrete uniform sampling strategy over spatial positions, since we observe a large drop in generation quality with continuous uniform sampling. Although in practice COCO-GAN successfully learns spatial continuity using discrete sampling, continuous sampling should in theory be preferred, since the spatial domain is continuous; achieving this goal would require deeper understanding of, and insight into, the root cause of the generation quality drop.

We demonstrate that COCO-GAN can generate panoramas under a cylindrical coordinate system. However, another commonly used panorama format is the sky-sphere under a hyperbolic coordinate system. Considering that image patches in the hyperbolic coordinate are not square-shaped, further studies on incorporating special convolution schemes such as Spherical CNNs (Cohen et al., 2018) would be required to implement COCO-GAN under a hyperbolic coordinate system. Furthermore, to allow a more flexible and general coordinate system, some learnable coordinating methods (Balakrishnan et al., 2018) might be correlated with COCO-GAN and could further enhance its flexibility.

The size of the micro patches is crucial to the results of COCO-GAN. A rule of thumb is that the size should be large enough to cover sufficient information; the precise lower bound requires experiments to examine whether COCO-GAN learns undesired spatial patterns. In "Spatial Position Interpolation" in Section 3.2, we mention that the model can be misled into learning reasonable but incorrect spatial relationships. An effective evaluation of the lower bound on the patch size needs future investigation.

<--- Page Split --->

## REFERENCES

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 214-223, 2017.

Guha Balakrishnan, Amy Zhao, Adrian V. Dalca, Frédo Durand, and John V. Guttag. Synthesizing images of humans in unseen poses.
In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 8340-8348, 2018.

Angel X. Chang, Angela Dai, Thomas A. Funkhouser, Maciej Halber, Matthias Nießner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3D: Learning from RGB-D data in indoor environments. In 2017 International Conference on 3D Vision, 3DV 2017, Qingdao, China, October 10-12, 2017, pp. 667-676, 2017.

Chia-Che Chang, Chieh Hubert Lin, Che-Rung Lee, Da-Cheng Juan, Wei Wei, and Hwann-Tzong Chen. Escaping from collapsing modes in a constrained space. In The European Conference on Computer Vision (ECCV), September 2018.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 2172-2180, 2016.

Hsien-Tzu Cheng, Chun-Hung Chao, Jin-Dong Dong, Hao-Kai Wen, Tyng-Luh Liu, and Min Sun. Cube padding for weakly-supervised saliency prediction in 360 videos. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.

Taco S. Cohen, Mario Geiger, Jonas Kohler, and Max Welling. Spherical CNNs. In International Conference on Learning Representations, 2018.

Xavier Corbillon, Alisa Devlic, Gwendal Simon, and Jacob Chakareski. Optimal set of 360-degree videos for viewport-adaptive streaming. In Proceedings of the 2017 ACM on Multimedia Conference, MM 2017, Mountain View, CA, USA, October 23-27, 2017, pp. 943-951, 2017a.

Xavier Corbillon, Gwendal Simon, Alisa Devlic, and Jacob Chakareski. Viewport-adaptive navigable 360-degree video delivery. In IEEE International Conference on Communications, ICC 2017, Paris, France, May 21-25, 2017, pp. 1-7, 2017b.

Harm de Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C. Courville. Modulating early visual processing by language. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 6597-6607, 2017.

Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. CoRR, abs/1610.07629, 2016.

Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13, 2014, Montreal, Quebec, Canada, pp. 2672-2680, 2014.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 5769-5779, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 770-778, 2016.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 6629-6640, 2017.

<--- Page Split --->

Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. CoRR, abs/1710.10196, 2017.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, December 3-6, 2012, Lake Tahoe, Nevada, United States, pp. 1106-1114, 2012.

Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. CoRR, abs/1804.07723, 2018a.

Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. CoRR, abs/1807.03247, 2018b.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.

Lars M. Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pp. 3478-3487, 2018.

Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. CoRR, abs/1802.05637, 2018.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. CoRR, abs/1802.05957, 2018.

Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 2642-2651, 2017.

Cagri Ozcinar, Ana De Abreu, and Aljosa Smolic. Viewport-aware adaptive \(360^{\circ}\) video streaming using tiles for virtual reality. In 2017 IEEE International Conference on Image Processing, ICIP 2017, Beijing, China, September 17-20, 2017, pp. 2174-2178, 2017.

Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Joshua Susskind, Wenda Wang, and Russell Webb. Learning from simulated and unsupervised images through adversarial training. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 2242-2251, 2017.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pp. 1-9, 2015.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 1747-1756, 2016.

Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. CoRR, abs/1711.11585, 2017.
Tom White. Sampling generative networks: Notes on a few effective techniques. CoRR, abs/1609.04468, 2016.

Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, and Hao Li. High-resolution image inpainting using multi-scale neural patch synthesis. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 4076-4084, 2017.

<--- Page Split --->

Raymond A. Yeh, Chen Chen, Teck-Yian Lim, Alexander G. Schwing, Mark Hasegawa-Johnson, and Minh N. Do. Semantic image inpainting with deep generative models. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 6882-6890, 2017.

Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. CoRR, abs/1506.03365, 2015.

Han Zhang, Ian J. Goodfellow, Dimitris N. Metaxas, and Augustus Odena. Self-attention generative adversarial networks. CoRR, abs/1805.08318, 2018.

<--- Page Split --->

![](images/13_0.jpg)
<center>Figure 9: An overview of COCO-GAN during the testing phase. The micro patches generated by \(G\) are directly combined into a full-image as the final output. </center>

## APPENDIX B MODEL ARCHITECTURE DETAILS

![](images/13_1.jpg)
<center>Figure 10: The detailed generator architecture of COCO-GAN for generating micro patches with a size of \(32 \times 32\) pixels. We directly duplicate/remove the last residual block if we need to enlarge/reduce the size of the output patch. </center>

<--- Page Split --->

![](images/14_0.jpg)
<center>Figure 11: The detailed discriminator architecture of COCO-GAN for discriminating macro patches with a size of \(64 \times 64\) pixels. We directly duplicate/remove the first residual block if we need to enlarge/reduce the input patch size. Both the content vector prediction head \((Q)\) and the spatial condition prediction head use the same structure shown in (c). </center>

<--- Page Split --->

![](images/15_0.jpg)
<center>Figure 12: More full-image generation examples of COCO-GAN. More results across epochs are provided in the following anonymous links: https://goo.gl/A88ewn and https://goo.gl/hgCAeE. </center>

<--- Page Split --->

## APPENDIX D MORE INTERPOLATION EXAMPLES

![](images/16_0.jpg)
<center>Figure 13: More interpolation examples. Given two latent vectors, COCO-GAN generates the micro patches and full-images that correspond to the interpolated latent vectors. </center>

<--- Page Split --->

## APPENDIX E MORE PANORAMA GENERATION SAMPLES

![](images/17_0.jpg)
<center>Figure 14: More examples of generated panoramas. All samples possess the cyclic property along the horizontal direction. Each sample is generated with a resolution of \(768 \times 256\) pixels and a micro patch size of \(64 \times 64\) pixels. </center>

## APPENDIX F ABLATION STUDY

![](images/17_1.jpg)
<center>Figure 15: Comparison with the \(\mathcal{M}\) method mentioned in Section 3.3 in the CelebA \(64 \times 64\) setting shows that the \(\mathcal{M}\) method is not competitive with COCO-GAN. Note that PD refers to "projection discriminator" and "macro" indicates that the discriminator operates on macro-patch-sized inputs. </center>

![](images/17_2.jpg)
<center>Figure 16: Some samples generated by different variants of \(\mathcal{M}\) . Note that each set of samples is extracted at the epoch when each \(\mathcal{M}\) variant reaches its lowest FID score. We also provide more samples at different epochs: https://goo.gl/ChQhCx. </center>
<--- Page Split --->

![](images/18_0.jpg)
<center>Figure 17: FID score curves of different variants of COCO-GAN in the CelebA \(64 \times 64\) setting. Combined with Figure 18, the results do not show significant differences in quality between COCO-GAN variants. Therefore, COCO-GAN does not pay a significant trade-off for the conditional coordinate property. </center>

![](images/18_1.jpg)
<center>Figure 18: Some samples generated by different variants of COCO-GAN. Note that each set of samples is extracted at the epoch when each variant reaches its lowest FID score. We also provide more samples for each of the variants at different epochs: https://goo.gl/Wnrppf. </center>

<--- Page Split --->

![](images/19_0.jpg)
<center>Figure 19: Patch-inspired image generation can loosely retain some local structure or global characteristics of the original image. The red boxes are the sampled spatial positions that crop the full-images in (a) into the macro patches in (b). (c) & (d) show the patch-inspired generated macro patches and full-images based on \(\bar{z}\) . The blue boxes visualize the predicted spatial position \(\bar{c}^{m}\) . Since the information loss of the cropping process (from (a) to (b)) is critical, we do not expect (a) and (c) to be identical. Instead, (b) and (d) should be visually similar and (d) should be globally coherent. </center>

<--- Page Split --->

![](images/20_0.jpg)
<center>Figure 20: Patch-inspired image generation can loosely retain some local structure or global characteristics of the original image. The red boxes are the sampled spatial positions that crop the full-images in (a) into the macro patches in (b). (c) & (d) show the patch-inspired generated macro patches and full-images based on \(\bar{z}\) . The blue boxes visualize the predicted spatial position \(\bar{c}^{m}\) . Since the information loss of the cropping process (from (a) to (b)) is critical, we do not expect (a) and (c) to be identical. Instead, (b) and (d) should be visually similar and (d) should be globally coherent. Note that the diversity and difficulty of LSUN (bedroom category) are higher than those of CelebA. The \(Q\) network can only capture the structure, shape, and orientation of the room and the bed, but it fails to capture the detailed textures of objects. </center>

<--- Page Split --->

## APPENDIX H TRAINING INDICATORS

![](images/21_0.jpg)
<center>Figure 21: Both the Wasserstein distance and the FID through time show that the training of COCO-GAN is stable. Both figures are logged while training on CelebA at \(128 \times 128\) resolution. </center>

## APPENDIX I SPATIAL POSITIONS INTERPOLATION

![](images/21_1.jpg)
<center>Figure 22: Spatial interpolation shows the spatial continuity of the micro patches. The spatial conditions are interpolated within the range \([-1, 1]\) of the micro coordinate with a fixed latent vector. </center>

<--- Page Split --->
## ABSTRACT

Recent advancements in Generative Adversarial Networks (GANs) have inspired a wide range of works that generate synthetic images. However, current processes have to generate an entire image at once, so resolutions are limited by memory or computational constraints. In this work, we propose COnditional COordinate GAN (COCO-GAN), which generates a specific patch of an image conditioned on a spatial position, rather than the entire image at a time. The generated patches are later combined to form a globally coherent full-image. With this process, we show that the generated images achieve quality competitive with the state of the art and that the generated patches are locally smooth between consecutive neighbors. One direct implication of COCO-GAN is that it can be applied to any coordinate system, including cylindrical systems, which makes it feasible to generate panorama images. The fact that the patches are generated independently of each other inspires a wide range of new applications: first, "Patch-Inspired Image Generation" enables us to generate an entire image based on a single patch. Second, "Partial-Scene Generation" allows us to generate images within a customized target region. Finally, COCO-GAN's patch generation and massive parallelism enable combining patches to generate full-images at higher resolutions than prior state-of-the-art models.

## 1 INTRODUCTION

This paper explores the idea of enforcing both the generator and the discriminator of generative adversarial networks (GANs) (Goodfellow et al., 2014) to deal with only partial views via conditional coordinating. By training and inferring with partial views only, the minimum memory requirement can be largely reduced. However, as shown in Section 3.3, naive approaches fail to generate high-quality images, either having clear seams or totally failing to generate reasonable structures. We investigate this problem and propose a new GAN architecture: COnditional COordinate GAN (COCO-GAN). Given a latent vector and multiple spatial positions, the generator of COCO-GAN learns to produce image patches independently according to the spatial positions. On the other hand, the discriminator learns to judge whether adjacently generated patches are structurally sound and visually homogeneous. During the inference phase, the generated patches can directly be used to compose a complete full-image without further post-processing. Owing to the adversarial loss provided by the discriminator, the composed full-image is locally smooth and globally convincing. We show several randomly selected full-images generated by COCO-GAN in Figure 2a. In Section 3.2, we visualize the interpolation between two spatial positions, showing COCO-GAN has inter-class continuity as common conditional GANs do. Further quantitative evaluations with the "Fréchet Inception Distance" (FID) (Heusel et al., 2017) score are presented in Table 1. Without additional hyper-parameter tuning, the evaluations on the CelebA (Liu et al., 2015) and LSUN (Yu et al., 2015) datasets suggest that COCO-GAN is competitive with other state-of-the-art GANs. To further demonstrate the effectiveness of COCO-GAN, we perform an ablation study in Section 3.3. In Section 3.4, we demonstrate that COCO-GAN is a flexible coordinate-system-aware framework that is suitable for different spatial coordinate systems.
<--- Page Split --->

![](images/1_0.jpg)
<center>Figure 1: An overview of COCO-GAN (training phase). The full-images are only generated during the testing phase (Figure 9 in the Appendix). </center>

Common learning frameworks are confined to the Cartesian coordinate system, which restricts them from learning the characteristics of certain image formats. For instance, panoramas should be trained with a cylindrical coordinate system, which has a "cyclic topology" in the horizontal direction. In comparison, COCO-GAN directly learns with a cylindrical coordinate system. In Figure 6 and Figure 14, the generated full scenes are naturally cylindrical and continuous across the left and right borders.

Besides, we find COCO-GAN has multiple interesting applications and characteristics. We select three representative new applications as case studies:

Patch-Inspired Image Generation takes a real image patch as input. COCO-GAN can be inspired by the given patch and further generate a full-image proposal. This proposal is partially similar to the given patch, while being globally realistic and reasonable. This setting is different from conventional image completion/reconstruction, since the information loss during cropping is extreme and the spatial position information is not provided to the model. Therefore, COCO-GAN has to infer the position of the given patch before generating the whole image. Further experiments are presented in Section 3.5.

Partial-Scene Generation shows that COCO-GAN can generate partial scenes without spending additional computation outside certain designated regions. This capability is particularly beneficial to applications that are only interested in partial information of the full scene. For instance, virtual reality (VR) applications are only interested in the user's viewport direction, and COCO-GAN can seamlessly adapt to such viewport-aware settings.

Computation-Friendly Generation demonstrates the computation-related merits. First, since the patches are produced independently, the generator is able to generate patches with high parallelism. Second, as full-image generation is decomposed into patch generation, the minimum memory requirement of generating images is reduced. This characteristic enables COCO-GAN to generate images of very high resolution or containing much more complex structures.

## 2 COCO-GAN

Overview. COCO-GAN consists of three networks: a generator \(G\), a discriminator \(D\), and an auxiliary head \(Q\). These three networks are trained with four loss terms: the patch Wasserstein loss \(L_{W}\), the patch gradient penalty loss \(L_{GP}\), the spatial consistency loss \(L_{S}\), and the content consistency loss \(L_{C}\). Compared to conventional GANs that use full images as input for both \(G\) and \(D\), COCO-GAN only uses micro patches for \(G\) and macro patches for \(D\). The details of micro and macro patches are described in the paragraph "Spatial coordinate system" below. \(D\) has two auxiliary prediction heads: the content vector prediction head \((Q)\) and the spatial condition prediction head.

<--- Page Split --->

\(Q\) is trained with an extra optimizer independent of \(G\) and \(D\). It aims to minimize \(L_{C}\) and is used to estimate the original latent vector of a given sample. The spatial condition prediction head, on the other hand, is jointly trained with the discriminator. It aims at minimizing \(L_{S}\) and is used to estimate the macro spatial position of a given sample.
Both auxiliary prediction heads are simple feed-forward networks that take as input one of the high-level feature maps of the discriminator.

COCO-GAN considers two coordinate systems: a micro coordinate system (superscript \(m\)), the finer coordinate system on the generator's side, and a macro coordinate system (superscript \(M\)), the coarser coordinate system on the discriminator's side. The discriminator learns to distinguish between the generated macro patch \(\tilde{p}^{M}\) and the real macro patch \(p^{M}\). The discriminator also learns to predict two auxiliary outputs: a predicted macro coordinate \(\tilde{c}^{M}\) and a predicted latent vector \(\tilde{z}^{M}\). Outputs from these two heads are then used to compute two auxiliary losses: the spatial consistency loss \((L_{S})\) and the content consistency loss \((L_{C})\). The objective function of the discriminator \(D\) is \(L_{W} + L_{GP} + L_{S} + L_{C}\), that of the generator \(G\) is \(- L_{W} + L_{S} + L_{C}\), and that of the content vector prediction head \(Q\) is \(- L_{W} + L_{C}\).

Spatial coordinate system. Before presenting the details of these four loss terms, we first introduce our notation. We start by designing two spatial coordinate systems, a micro coordinate system \(C^{m}\) for the generator \(G\) and a macro coordinate system \(C^{M}\) for the discriminator \(D\). Let \(S^{N}\) be a space of spatial position sequences; each spatial position sequence \(s = \langle c_{i}^{m}\rangle_{i = 1}^{N}\in S^{N}\) is an ordered sequence, where \(c_{i}^{m}\in C^{m}\). During COCO-GAN training, \(R\) is a set of predefined spatial constraints for sampling \(s\) from a uniform distribution \(\mathcal{U}(S^{N},R)\). The generator \(G\) is conditioned on each spatial position \(c_{i}^{m}\) and learns to accordingly produce micro patches \(\tilde{p}_{i}^{m} = G(z|c_{i}^{m})\). The sequence \(\langle \tilde{p}_{i}^{m}\rangle_{i = 1}^{N} = \langle G(z|c_{i}^{m})\rangle_{i = 1}^{N}\) consists of micro patches produced independently while sharing the same latent vector \(z\) across the spatial position sequence \(s\). The design of \(R\) may need to be slightly changed with respect to the selection of \(C^{M}\) and \(C^{m}\); the design principle of \(R\) is that the accordingly generated micro patches \(\langle \tilde{p}_{i}^{m}\rangle_{i = 1}^{N}\) should be spatially close to each other. The micro patches are then merged by a merging function \(T_{(m\to M)}\) to form a complete macro patch \(\tilde{p}^{M} = T_{(m\to M)}(\langle \tilde{p}_{i}^{m}\rangle_{i = 1}^{N})\) as a coarser partial view of the full scene \(\tilde{x}\). Meanwhile, we assign \(\tilde{p}^{M}\) a new spatial position \(c^{M}\) under the macro coordinate system. In Figure 1, we illustrate one of the simplest designs for the above heuristic functions, which we adopt throughout our experiments: the four micro patches are always neighbors of each other and can be directly combined into a square macro patch with \(T_{(m\to M)}\), which is simple concatenation. Figure 3 shows some examples of micro and macro patch generation. On the real-sample side, we also sample \(s\sim S\), but we directly map \(s\) to a macro position \(c^{M}\). We then design a transformation \(T_{(X\to M)}\), with respect to \(T_{(m\to M)}\), that transforms a real full-image \(x\) into a real macro patch \(p^{M}\) by \(T_{(X\to M)}(x|c^{M})\).
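To make the patch bookkeeping concrete, the following is a minimal NumPy sketch of the simplest design described above: `merge_micro_to_macro` plays the role of \(T_{(m\to M)}\) (plain concatenation of a 2×2 neighborhood) and `crop_macro_from_full` plays the role of \(T_{(X\to M)}\). The function names and the 32-pixel micro patch size are illustrative assumptions, not the authors' released code.

```python
import numpy as np

MICRO = 32  # assumed micro patch size in pixels (the CelebA setting)

def merge_micro_to_macro(patches):
    """T_(m->M): concatenate a 2x2 neighborhood of micro patches, ordered
    [top-left, top-right, bottom-left, bottom-right], into one macro patch."""
    tl, tr, bl, br = patches
    top = np.concatenate([tl, tr], axis=1)        # stitch along width
    bottom = np.concatenate([bl, br], axis=1)
    return np.concatenate([top, bottom], axis=0)  # stitch along height

def crop_macro_from_full(full_image, top, left, size=2 * MICRO):
    """T_(X->M): crop a real macro patch from a real full-image at the pixel
    position corresponding to the sampled macro coordinate c^M."""
    return full_image[top:top + size, left:left + size]

# Four fake 32x32 RGB micro patches -> one 64x64 macro patch.
micros = [np.random.rand(MICRO, MICRO, 3) for _ in range(4)]
assert merge_micro_to_macro(micros).shape == (2 * MICRO, 2 * MICRO, 3)
```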
The shape of \(p^{M}\) should be the same as that of \(\tilde{p}^{M}\). In our simplest experiment setting, \(T_{(X\to M)}\) is a simple cropping function that crops \(x\) into \(p^{M}\), which has the same shape as \(\tilde{p}^{M}\).

During the testing phase, depending on the design of \(C^{m}\), we can directly infer a corresponding spatial position sequence \(\langle c_{j}^{m}\rangle_{j = 1}^{K}\). It is used to independently produce the spatially disentangled patches that constitute the full-image. Figure 9 demonstrates how the full-image is generated during the testing phase, and Figure 2a shows some examples of full-image generation.

Loss functions. The patch Wasserstein loss \(L_{W}\) is a patch-level Wasserstein distance loss similar to the Wasserstein-GAN (Arjovsky et al., 2017) loss. It forces the discriminator to discriminate between the real macro patch \(p^{M}\) and the generated macro patch \(\tilde{p}^{M}\), and, on the other hand, encourages the generator to confuse the discriminator with seemingly realistic \(\tilde{p}^{M}\). Its complete form is
\[L_{W} = \mathbb{E}\left[D(T_{(X\to M)}(x|c^{M}))\right] - \mathbb{E}\left[D(T_{(m\to M)}(\langle G(z|c_{i}^{m})\rangle_{i = 1}^{N}))\right], \quad (1)\]
where \(x\sim \mathbb{P}_{r}\) and \(\langle c_{i}^{m}\rangle_{i = 1}^{N}\sim \mathcal{U}(S^{N},R)\). Note that \(\mathbb{P}_{r}\) is the real data distribution. We also apply the Gradient Penalty (Gulrajani et al., 2017) to the patch generation:
\[L_{GP} = \mathbb{E}\left[(\| \nabla_{\tilde{p}^{M}}D(\tilde{p}^{M})\|_{2} - 1)^{2}\right], \quad (2)\]
where \(\tilde{p}^{M}\sim \mathbb{P}_{g}\). Note that \(\mathbb{P}_{g}\) is the generator distribution.

<--- Page Split --->

The spatial consistency loss \(L_{S}\) is similar to the ACGAN loss (Odena et al., 2017). A slight difference is that \(c_{i}^{m}\) takes relatively more continuous values than the discrete setting of ACGAN. As a result, we apply a distance measurement loss for \(L_{S}\), which is an \(L_{2}\)-loss between \(\tilde{c}^{M}\) and \(c^{M}\). It aims to train the generator of COCO-GAN to generate the corresponding micro patch \(G(z|c_{i}^{m})\) with respect to the given spatial condition \(c_{i}^{m}\). The spatial consistency loss is
\[L_{S} = \mathbb{E}\left[\| c^{M} - \tilde{c}^{M}\|_{2}\right]. \quad (3)\]
On the other hand, the content consistency loss \(L_{C}\) is similar to a hybrid of the InfoGAN loss (Chen et al., 2016) and the latent space constraint loss (Chang et al., 2018). The former uses a separate optimizer to optimize an auxiliary network \(Q\), which aims to reconstruct the original latent vector. The latter suggests that the latent space consistency loss can be a distance measurement instead of minimizing the KL-divergence as the original InfoGAN does. In our experiments, we train the extra \(Q\) network with a separate optimizer and directly minimize the \(L_{1}\)-loss between \(\tilde{z}\) and \(z\). \(L_{C}\) aims to force the generator to produce shared context between patches that share the same latent vector but are located at different micro coordinate positions. The content consistency loss is defined by
\[L_{C} = \mathbb{E}\left[\| z - \tilde{z}\|_{1}\right]. \quad (4)\]
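As a rough illustration of how these terms combine, the sketch below evaluates Eqs. (1), (3), and (4) for one batch of precomputed network outputs; the gradient penalty of Eq. (2) needs autograd on the discriminator input, so it is only indicated by a comment. All names are our own assumptions, and this is an outline under those assumptions rather than the authors' implementation.

```python
import numpy as np

def coco_gan_losses(d_real, d_fake, c_true, c_pred, z_true, z_pred):
    """Combine COCO-GAN's loss terms for one batch.
    d_real / d_fake: discriminator scores on real / generated macro patches.
    c_true / c_pred: macro spatial positions and their predictions (Eq. 3).
    z_true / z_pred: latent vectors and the Q head's predictions (Eq. 4)."""
    L_W = d_real.mean() - d_fake.mean()                    # Eq. (1)
    L_S = np.linalg.norm(c_true - c_pred, axis=1).mean()   # Eq. (3), L2 loss
    L_C = np.abs(z_true - z_pred).sum(axis=1).mean()       # Eq. (4), L1 loss
    # Eq. (2), L_GP, penalizes (||grad_{p^M} D(p^M)||_2 - 1)^2 and is omitted
    # here because it requires automatic differentiation.
    return L_W, L_S, L_C

# Toy batch: 8 samples, 2-D macro coordinates, 128-D latent vectors.
rng = np.random.default_rng(0)
L_W, L_S, L_C = coco_gan_losses(
    rng.standard_normal(8), rng.standard_normal(8),
    rng.standard_normal((8, 2)), rng.standard_normal((8, 2)),
    rng.standard_normal((8, 128)), rng.standard_normal((8, 128)))
loss_D = L_W + L_S + L_C   # plus L_GP, per the objectives stated above
loss_G = -L_W + L_S + L_C
```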
To ensure that \(Z\), \(C^{m}\), and \(C^{M}\), which are directly concatenated and fed to \(G\), share a similar scale, we evaluate the maximum possible pixel position of \(C^{m}\) and \(C^{M}\), then normalize the range into \([- 1,1]\). For the latent space \(Z\), although uniform sampling between \([- 1,1]\) should be numerically more compatible with the normalized spatial condition space, we empirically do not observe significant differences even if we switch to random sampling from a zero-mean, unit-variance normal distribution. For simplicity, we adopt the uniform sampling strategy throughout our experiments.

Training details. Our generator and discriminator architectures follow the idea of the projection discriminator (Miyato & Koyama, 2018), both with ResNet-based (He et al., 2016) architectures and with class projection added to the discriminator. All convolutional and feed-forward layers of the generator and discriminator use the spectral normalization scheme (Miyato et al., 2018), as suggested in (Zhang et al., 2018). A more detailed architecture diagram is illustrated in Appendix B. We also add conditional batch normalization (CBN) (Dumoulin et al., 2016) to the generator. In our design, CBN is conditioned on the given spatial positions and the input latent vector, and it learns to normalize the feature maps with respect to the given conditions. However, our implementation has a crucial difference from the one described in (Miyato & Koyama, 2018): our spatial positions are real values rather than discrete classes. We instead adopt a strategy similar to (de Vries et al., 2017), with a slight modification: instead of using MLPs to produce \(\Delta \gamma\) and \(\Delta \beta\), we make the MLPs directly output \(\gamma\) and \(\beta\). For a \(K\)-channel input feature map \(i_{K}\) with recorded mean \(\mu_{K}\) and recorded variance \(\sigma_{K}\), we create two learnable MLP layers, \(\mathrm{MLP}_{\gamma}\) and \(\mathrm{MLP}_{\beta}\). The output feature map \(o_{K}\) is calculated as \(o_{K} = \left((i_{K} - \mu_{K}) / \sigma_{K}\right)* \mathrm{MLP}_{\gamma}(i_{K}) + \mathrm{MLP}_{\beta}(i_{K})\).

We use the Adam (Kingma & Ba, 2014) optimizer with \(\beta_{1} = 0\) and \(\beta_{2} = 0.999\) for both the generator and the discriminator. The learning rates are based on the Two Time-scale Update Rule (TTUR) (Heusel et al., 2017): 0.0001 for the generator and 0.0004 for the discriminator. We do not specifically balance the generator and the discriminator by manually setting the number of discriminator updates per generator update, as described in the WGAN paper (Arjovsky et al., 2017).

## 3 EXPERIMENTS

Although the COCO-GAN framework supports spatial positions that are uniformly and continuously sampled within the normalized range \([- 1,1]\), we empirically find that sampling only the discrete spatial positions used at inference time results in better generation quality. For instance, if the full-image is formed by concatenating four micro patches along each axis, the model only uniformly samples spatial positions from the set of four discrete points \(\{- 1, - 1 / 3,1 / 3,1\}\) on each axis of the coordinate system. We adopt this uniform, discrete sampling strategy throughout the experiments. The root cause of the quality degradation is still unclear; one possible hypothesis is that the task difficulty increases dramatically when sampling continuous spatial positions. We flag further analysis of, and a solution to, this phenomenon as an important future research direction. Some comparisons between continuous sampling and discrete sampling are shown in Appendix F.
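For concreteness, the discrete sampling described above could be implemented as follows; the grid construction is inferred from the text, and the function name is our own.

```python
import numpy as np

def sample_spatial_positions(batch_size, points_per_axis=4, rng=None):
    """Uniformly sample micro-patch coordinates from the discrete grid
    {-1, -1/3, 1/3, 1} x {-1, -1/3, 1/3, 1} used at inference time."""
    rng = rng or np.random.default_rng()
    axis = np.linspace(-1.0, 1.0, points_per_axis)  # [-1, -1/3, 1/3, 1]
    idx = rng.integers(points_per_axis, size=(batch_size, 2))
    return axis[idx]  # shape (batch_size, 2): one (y, x) position per sample

coords = sample_spatial_positions(16)
print(coords[:2])  # e.g. [[-1. 0.333...], [0.333... 1.]]
```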
<--- Page Split --->

![](images/4_0.jpg)
<center>(a) CelebA (128×128). (b) LSUN (256×256). </center>
<center>Figure 2: Without any post-processing, the generated full-images of COCO-GAN are visually smooth and globally coherent. More full-image generation results are shown in Figure 12. </center>

![](images/4_1.jpg)
<center>Figure 3: Generated samples of micro/macro patches. Each micro patch, macro patch, and full-image in Figure 2a at the same relative position uses the same latent vector. </center>

Table 1: The FID scores suggest that COCO-GAN is competitive with other state-of-the-art GANs. FID scores are measured between 50,000 real and generated samples based on the original implementation provided at https://github.com/bioinf-jku/TTUR.

<table><tr><td>Dataset</td><td>DCGAN + TTUR</td><td>PGGAN</td><td>WGAN-GP + TTUR</td><td>COCO-GAN</td></tr><tr><td>CelebA (64×64)</td><td>12.5</td><td>-</td><td>-</td><td>4.99</td></tr><tr><td>CelebA (128×128)</td><td>-</td><td>7.30</td><td>-</td><td>8.35</td></tr><tr><td>LSUN - Bedroom (64×64)</td><td>57.5</td><td>-</td><td>9.5</td><td>-</td></tr><tr><td>LSUN - Bedroom (128×128)</td><td>-</td><td>-</td><td>-</td><td>3.06</td></tr><tr><td>LSUN - Bedroom (256×256)</td><td>-</td><td>8.34</td><td>-</td><td>16.59</td></tr></table>

### 3.1 QUALITY OF GENERATED IMAGES

We start by validating COCO-GAN on CelebA (Liu et al., 2015) and the bedroom category of LSUN (Yu et al., 2015). For the CelebA dataset, the resolutions of the full-image, micro patch, and macro patch are \(128 \times 128\), \(32 \times 32\), and \(64 \times 64\), respectively. We choose \(32 \times 32\) for micro patches in this experiment since a smaller patch size would be too small for the model (or a human) to observe useful information. On the other hand, a larger patch size makes the macro patch too similar to the full-image size, which makes it hard to demonstrate that COCO-GAN can learn without access to the full-image. For the LSUN dataset, the full-image has \(256 \times 256\) resolution, micro patches have \(64 \times 64\) resolution, and macro patches have \(128 \times 128\) resolution. We choose \(64 \times 64\) for the micro patch size since the micro-patch-to-full-image ratio is then the same as in the CelebA experiment.

We report the Fréchet Inception Distance (FID) (Heusel et al., 2017) in Table 1 as a quantitative comparison with state-of-the-art GANs. Since many other state-of-the-art models do not use the full resolution of the datasets, we accordingly run COCO-GAN at different resolutions without changing hyper-parameters other than the input size and the micro/macro patch sizes. Throughout these experiments, we always keep the micro patch size at 1/16 (1/4 for height and 1/4 for width) of the full-image size and the macro patch size at 1/4 (1/2 for height and 1/2 for width) of the full-image size. Without additional hyper-parameter tuning, the results suggest COCO-GAN is both qualitatively and quantitatively competitive with other state-of-the-art GANs. In Appendix H, we also provide the Wasserstein distance and the FID score through time as training indicators. The curves suggest that COCO-GAN is stable during training.

### 3.2 LATENT SPACE CONTINUITY

To demonstrate space continuity more precisely, we perform interpolation experiments in three directions: micro patches interpolation, spatial positions interpolation, and full-images interpolation.
<--- Page Split --->

![](images/5_0.jpg)
<center>Figure 5: (a) Micro patches interpolation with a fixed spatial position. Note that each left-right pair of interpolation samples uses the same latent vectors. (b) Full-images interpolation between two latent vectors. More interpolation results are shown in Appendix D. </center>

Micro Patches Interpolation. The simplest interpolation experiment is the in-class (i.e., fixed spatial condition) interpolation between latent vectors. With a fixed spatial position \(c_{i}^{m}\), we randomly sample two latent vectors \(z_{1}\) and \(z_{2}\), then interpolate between \(z_{1}\) and \(z_{2}\) along a slerp path (White, 2016). The results in Figure 5a suggest that, for each spatial position, the latent space \(Z\) is continuous.

Spatial Positions Interpolation. Another simple interpolation experiment is the inter-class (i.e., between classes) interpolation with a fixed latent vector. We directly linearly interpolate the spatial position within \([- 1,1]\) while the latent vector \(z\) is fixed. The results in Figure 4 show that, although we only uniformly sample spatial positions from a discrete spatial position set, the spatial position interpolation is still continuous. An interesting observation concerns the interpolation at the position between the eyebrows. In this example, COCO-GAN does not know of the existence of the smooth area (glabella) between the two eyes, due to the discrete and sparse spatial position sampling strategy. Instead, it learns to directly deform the shape of the eye to switch from one eye to the other. This phenomenon raises an interesting point: even though the model learns to produce high-quality face images, it may still learn incorrect relationships between objects behind the scenes.

![](images/5_1.jpg)
<center>Figure 4: An example of spatial positions interpolation showing the spatial continuity of the micro patches. The spatial conditions are interpolated within the range \([- 1,1]\) of the micro coordinate with a fixed latent vector. More examples are shown in Appendix I. </center>

Full-Images Interpolation. The hardest interpolation is to directly interpolate full-images between two latent vectors: all micro patches generated with different spatial positions must change synchronously to make the full-image interpolation smooth. We randomly sample two latent vectors \(z_{1}\) and \(z_{2}\). For any given interpolation point \(z'\) on the slerp path between \(z_{1}\) and \(z_{2}\), the generator uses the full spatial position sequence \(\langle c_{j}^{m}\rangle_{j = 1}^{K}\) to generate all corresponding patches. We then merge all generated micro patches with \(T_{(m \rightarrow M)}\) to form a full-image \(x'\). The interpolation results in Figure 5a and Figure 5b show that all micro patches interpolate smoothly and synchronously. This result suggests that COCO-GAN learns the main latent space \(Z\) as well as the correlation between micro patches, and that the spatial conditions \(C^{m}\) are disentangled.
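For reference, a minimal slerp implementation in the spirit of White (2016) is given below; the normalization and degenerate-angle handling are our own choices.

```python
import numpy as np

def slerp(z1, z2, t):
    """Spherical interpolation between latent vectors z1 and z2, t in [0, 1]."""
    u1, u2 = z1 / np.linalg.norm(z1), z2 / np.linalg.norm(z2)
    omega = np.arccos(np.clip(np.dot(u1, u2), -1.0, 1.0))
    if np.isclose(omega, 0.0):  # nearly parallel: fall back to lerp
        return (1.0 - t) * z1 + t * z2
    return (np.sin((1.0 - t) * omega) * z1
            + np.sin(t * omega) * z2) / np.sin(omega)

# Ten points along the slerp path between two random latent vectors.
z1, z2 = np.random.randn(128), np.random.randn(128)
path = [slerp(z1, z2, t) for t in np.linspace(0.0, 1.0, 10)]
```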
### 3.3 ABLATION STUDY

The ablation study is conducted in two parts: we first show that a straightforward approach fails in the COCO-GAN setting, then we study the trade-offs of each component of COCO-GAN.

A straightforward approach. One straightforward approach to learning and inference with partial views is to use a full-sized generator and a patch discriminator, and then pre-calculate the locations of the feature maps that are associated with a specific partial view. We refer to this method as \(\mathcal{M}\) afterward.

<--- Page Split --->

Although \(\mathcal{M}\) still implicitly uses conditional coordinating in its feature map selection, we observe that it fails to generate high-quality samples in comparison with COCO-GAN. We show the FID scores in Table 2. More experimental details are shown in Figure 15, and some generated samples in Figure 16.

Table 2: Best FID scores in the first 150 epochs. COCO-GAN usually converges well in the CelebA \(64\times 64\) setting.

<table><tr><td>Model</td><td>FID</td></tr><tr><td>M</td><td>72.82</td></tr><tr><td>M + PD (100 epochs)</td><td>90.87</td></tr><tr><td>M + PD + macro D</td><td>60.36</td></tr><tr><td>COCO-GAN (cont. sampling)</td><td>6.13</td></tr><tr><td>COCO-GAN + optimal D</td><td>4.05</td></tr><tr><td>COCO-GAN + optimal G</td><td>6.12</td></tr><tr><td>COCO-GAN + without Q</td><td>4.87</td></tr><tr><td>Multiple G</td><td>7.26</td></tr><tr><td>COCO-GAN (ours)</td><td>4.99</td></tr></table>

The trade-offs of each component. We perform an ablation study in the CelebA \(64\times 64\) setting with five configurations: "continuous sampling" demonstrates that using a continuous uniform sampling strategy for spatial positions during training results in a moderate generation quality drop; "optimal \(D\)" lets the discriminator directly discriminate the full image while the generator still generates micro patches; "optimal \(G\)" lets the generator directly generate the full image while the discriminator still discriminates macro patches; "without \(Q\)" removes the \(Q\) network from COCO-GAN; "multiple \(G\)" trains an individual generator for each spatial position. The results in Table 2 suggest that the \(Q\) network is not a necessary component if the "Patch-Inspired Image Generation" application is not needed. Surprisingly, despite their different convergence speeds, "optimal discriminator", COCO-GAN, and "optimal generator" (ordered by convergence speed from fast to slow) can all achieve similar FID scores given sufficient training time. The difference in convergence speed is expected, since the "optimal discriminator" provides the generator with a more accurate and global adversarial loss. In contrast, the "optimal generator" has relatively more parameters and layers to optimize, which makes its convergence slower than COCO-GAN's. Lastly, the "multiple generators" setting does not converge well: although it can also concatenate micro patches without obvious seams as COCO-GAN does, the full-image results often do not agree and are not globally coherent. More experimental details and generated samples are shown in Figure 17 and Figure 18.

### 3.4 PANORAMA GENERATION AND PARTIAL SCENE GENERATION

Generating panoramas using GANs is an interesting problem that has never been carefully investigated. Different from simple image generation, panoramas are expected to be cylindrical and cyclic in the horizontal direction. However, normal GANs have no built-in ability to handle such a cyclic characteristic without support from special padding mechanisms (Cheng et al., 2018). In contrast, COCO-GAN is a coordinate-system-aware learning framework: we can easily adopt a cylindrical coordinate system and generate panoramas that are cyclic in the horizontal direction, as shown in Figure 6 and Figure 14.

To train COCO-GAN with a panorama dataset under a cylindrical coordinate system, the spatial position sampling strategy needs to be slightly modified. In the horizontal direction, the sampled value within the normalized range \([- 1,1]\) is treated as an angular value \(\theta\) and is projected to \(\cos (\theta)\) and \(\sin (\theta)\) to form a unit circle on a 2D plane. Together with the usual sampling on the vertical axis, a cylindrical coordinate system is formed.
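A small sketch of this projection, under our own naming and with the assumption that the horizontal sample maps linearly to \(\theta \in [-\pi, \pi]\), is shown below; the endpoint identity is exactly what yields the cyclic property.

```python
import numpy as np

def cylindrical_condition(h, v):
    """Map a horizontal sample h in [-1, 1] (treated as an angle) and a
    vertical sample v in [-1, 1] to a cylindrical spatial condition."""
    theta = h * np.pi  # assumed linear map from [-1, 1] to [-pi, pi]
    return np.array([np.cos(theta), np.sin(theta), v])

# h = -1 and h = +1 yield the same condition: the cyclic property.
assert np.allclose(cylindrical_condition(-1.0, 0.3),
                   cylindrical_condition(1.0, 0.3))
```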
We take the sky-box format of the Matterport3D (Chang et al., 2017) dataset to obtain panoramas for training and testing. The sky-boxes consist of the six faces of a 3D cube. We preprocess and project each sky-box to a cylinder using the Mercator projection; the resulting cylindrical image size is \(768\times 512\). Since the Mercator projection creates extreme sparsity near the northern and southern poles, which carry little information, we directly remove the upper and lower quarters of the image. Eventually, the panoramas used for training are \(768\times 256\) pixels.

We also find that COCO-GAN has an interesting connection with virtual reality (VR). VR is known to have a tight computational budget due to its high frame-rate and resolution demands, and it is hard to generate full scenes for VR in real time using standard generative models. Some recent VR studies on omnidirectional view rendering and streaming (Corbillon et al., 2017b; Ozcinar et al., 2017; Corbillon et al., 2017a) focus on reducing computational cost or network bandwidth by adapting to the user's viewport. COCO-GAN can easily inherit the same strategy and achieve user-viewport-aware partial-scene generation, based on its effectiveness in spatial disentanglement and panorama generation. This can largely reduce unnecessary computational cost outside the region of interest, thus making image generation in VR more applicable.

<--- Page Split --->

![](images/7_0.jpg)
<center>Figure 6: The generated panorama is cyclic in the horizontal direction since COCO-GAN is trained with a cylindrical coordinate system in this experiment. Here we paste the same generated panorama twice (from \(360^{\circ}\) to \(720^{\circ}\)) to illustrate that it indeed has the cyclic property. </center>

### 3.5 PATCH-INSPIRED IMAGE GENERATION

The content consistency loss equips the discriminator with the ability to approximate the original latent vector of a generated macro patch \(\tilde{p}^{M}\) using the discriminator's auxiliary content vector prediction \(\tilde{z}\). This property also generalizes to any real macro patch \(p^{M}\). The approximation process does not require a spatial position; the spatial position is implicitly inferred by the spatial position prediction \(\tilde{c}^{M}\). The generator can accordingly generate a full-image \(x'\) with respect to \(\tilde{z}\). The generated \(x'\) should be partially similar to the given macro patch while being globally coherent; such an \(x'\) can be seen as a generated sample inspired by the given macro patch. One important footnote is that most of the information of the real full-image is lost while retrieving \(p^{M}\); as a result, the produced full-image guess is not guaranteed to be identical to the original real image. We provide some examples of patch-inspired image generation in Figure 7. The results show that \(x'\) can loosely retain some local structure or global characteristics of the original image, such as gender, face direction, and facial expression.
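Procedurally, the pipeline described above amounts to the following sketch, where `Q`, `G`, and the stand-in networks are placeholders of our own, not the trained models:

```python
import numpy as np

def patch_inspired_generation(macro_patch, Q, G, coords):
    """Estimate the latent vector of a real macro patch with Q, then let G
    regenerate every micro patch of a full image from that estimate."""
    z_hat = Q(macro_patch)                # auxiliary content prediction
    return [G(z_hat, c) for c in coords]  # micro patches to be merged into x'

# Toy usage with stand-in networks:
Q = lambda p: p.mean(axis=(0, 1))         # fake 3-dim "latent vector"
G = lambda z, c: np.tile(z, (32, 32, 1))  # fake 32x32 micro patch
coords = [(y, x) for y in (-1, 1) for x in (-1, 1)]
patches = patch_inspired_generation(np.random.rand(64, 64, 3), Q, G, coords)
assert patches[0].shape == (32, 32, 3)
```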
This process is also similar to image inpainting (Liu et al., 2018a; Yeh et al., 2017; Yang et al., 2017), except for two key differences. First, the spatial position of the macro patch is not explicitly given. Existing image inpainting frameworks assume that the remaining parts of the image are already at their correct positions, whereas COCO-GAN infers the position by itself. Take human faces for instance: given only a cropped patch of a face image, without the position of the patch in the original image, COCO-GAN can still infer the position of the patch and reconstruct a full face image, while common inpainting frameworks may not reconstruct a correctly structured and well-centered human face. Second, most inpainting frameworks do not assume the image is extremely damaged, such as losing \(75\%\) of the information as in our examples. In Figure 8, we accordingly compare COCO-GAN with partial convolution (Liu et al., 2018a), one of the state-of-the-art image inpainting methods. For the partial convolution method, we simply place the macro patch at the center, since the spatial position of the macro patch is unknown. The results show that, unlike COCO-GAN, the partial convolution method (Liu et al., 2018a) cannot handle this situation well.

### 3.6 COMPUTATION-FRIENDLY GENERATION

Recent studies in high-resolution image generation (Karras et al., 2017; Mescheder et al., 2018) have achieved a lot of success. We observe that a shared conundrum of these existing works is the memory requirement. They usually require workarounds to reduce memory consumption during training, such as decreasing the batch size (Karras et al., 2017) or cutting down the number of feature maps (Mescheder et al., 2018). The memory requirement problem cannot be easily resolved without specific hardware support, which makes the generation of images over \(1024 \times 1024\) resolution hard to achieve. Such high-resolution images are commonly seen in panoramas, street views, and medical images.

<--- Page Split --->

![](images/8_0.jpg)
<center>Figure 7: Patch-inspired image generation can loosely retain some local structure or global characteristics of the original image. The red boxes are the sampled spatial positions that crop the full-images in (a) into the macro patches in (b). (c) & (d) show the patch-inspired generated macro patches and full-images based on \(\bar{z}\) . The blue boxes visualize the predicted spatial positions \(\bar{c}^{m}\) . Since the information loss of the cropping process (from (a) to (b)) is critical, we do not expect (a) and (d) to be identical. Instead, (b) and (c) should be visually similar and (d) should be globally coherent. More examples are shown in Appendix G. </center>

In contrast, COCO-GAN only requires partial views of the full-image for both training and inference. This characteristic largely reduces the minimum memory requirement when dealing with high-resolution images. In Section 3.1, we show that COCO-GAN can generate and compose a \(128 \times 128\) image with patches of \(32 \times 32\) resolution. In other words, we can train a CIFAR-10-sized model to generate \(128 \times 128\) resolution images while the results are still of high quality. This characteristic of spatial disentanglement is a first step toward super-high-resolution image generation with limited memory resources.
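To see why this reduces the memory footprint, consider the sketch below: the full image is assembled tile by tile, so peak memory is bounded by one micro patch at a time rather than by a full-resolution generator activation stack. `generate_patch` is a placeholder of ours standing in for a trained \(G\).

```python
import numpy as np

MICRO, GRID = 32, 4  # 4x4 micro patches -> one 128x128 full image

def generate_patch(z, coord):
    """Placeholder standing in for a trained generator G(z | coord)."""
    return np.random.rand(MICRO, MICRO, 3)

def generate_full_image(z):
    canvas = np.zeros((GRID * MICRO, GRID * MICRO, 3))
    axis = np.linspace(-1.0, 1.0, GRID)  # discrete spatial grid
    for i, cy in enumerate(axis):        # only one micro patch is
        for j, cx in enumerate(axis):    # materialized at a time
            canvas[i * MICRO:(i + 1) * MICRO,
                   j * MICRO:(j + 1) * MICRO] = generate_patch(z, (cy, cx))
    return canvas

full = generate_full_image(np.random.randn(128))
assert full.shape == (128, 128, 3)
```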
Moreover, some state- of- the- art structures of deep models tend to require more parameters (if without reducing number of channels), such as the skip- connection structure (He et al., 2016) used in projection discriminator (Miyato & Koyama, 2018), inception (Szegedy et al., 2015), and self- attention (Zhang et al., 2018). COCO- GAN is able to equip many of these complex structures since our COCO- GAN model is relatively light- weight with a smaller receptive field and a shallower structure. Furthermore, the spatial disentanglement makes the generation of micro patches independent after the latent vector and the spatial positions are decided. This characteristic enables the generation of micro patches to have high parallelism and take advantage of modern computation architectures. ## 4 RELATED WORK Generative Adversarial Network (GAN) (Goodfellow et al., 2014) has shown its potential and flexibility to many different tasks. Recent studies on GANs are focusing on generating high- resolution and high- quality synthetic images in different settings. For instance, generating images with \(1024 \times 1024\) resolution (Karras et al., 2017; Mescheder et al., 2018), generating images with low- quality synthetic images as condition (Shrivastava et al., 2017), and by applying segmentation map as conditions (Wang et al., 2017). However, these prior work ![](images/8_1.jpg) <center>Figure 8: Patch-inspired image generation is globally more coherent than the partial convolution method. </center> share the similar assumptions: the model must access and generate the full- image in a single shot. This assumption consumes an unavoidable and significant amount of memory when the size of the targeting image is relatively large, and therefore making it difficult to satisfy memory requirements for both training and inference. Searching for a solution towards this problem is one of the initial motivations of this work. <--- Page Split ---> COCO- GAN shares some similarities to Pixel- RNN (van den Oord et al., 2016), which is a pixel- level generation framework while COCO- GAN is a patch- level generation framework. Pixel- RNN transforms the image generation task into a sequence generation task, maximizes the log- likelihood directly. In contrast, COCO- GAN aims at disentangling the spatial dependencies between micro patches. It also utilizes the adversarial loss to ensure smoothness between adjacent micro patches. CoordConv (Liu et al., 2018b) is another similar work but with fundamental differences. CoordConv provides spatial positioning information directly to the convolutional kernels in order to the coordinate transform problem and shows multiple improvements in different tasks. In contrast, COCO- GAN uses spatial conditions as an input condition of the generator and an auxiliary output of the discriminator. This setting enforces both the generator and the discriminator to learn coordinating and correlations between the generated micro patches. We have also considered incorporating CoordConv into COCO- GAN. However, empirical results show little visual improvement. Group convolution (Krizhevsky et al., 2012) is another work that is highly related to COCO- GAN. While group convolution aims at reducing computational costs by disentangling channels inside a convolution layer, our model learns to disentangle on the spatial level and is highly parallelizable. However, the micro- patch generation of COCO- GAN uses padding in all feature maps while applying convolution. 
This problem causes a large number number of FLOPs for each image generation. We are particularly interested in this phenomenon and flag utilizing the spatial disentanglement to reduce the total number of FLOPs as an important future work. ## 5 CONCLUSION AND DISCUSSION In this work, we propose COCO- GAN, a new generative model toward dividing full image generation into non- overlapping patches generation. Through the experiments, we show that COCO- GAN can learn and inference with limited partial views. Although the model is restrained from accessing the full scene, it can still generate high- quality samples without extra hyper- parameter tuning. We also demonstrate COCO- GAN is a coordinate- system- aware framework, and take panorama generation within a cylindrical coordinate system as case study. Furthermore, we highlight the advantages of COCO- GAN by showcasing three applications, including "Patch- Inspired image generation", "User- Viewport- Aware Partial Scene Generation", and "Computation- Friendly Generation". Despite the generation quality of COCO- GAN being competitive with other state- of- the- art GANs without any post- processing, sometimes we still observe that local structures of generated samples may be discontinued or mottled. This indicates that extra refinements and blending methods are still important for COCO- GAN to generate more stable and reliable samples. We adopt a discrete uniform sampling strategy over spatial positions since we observe a huge drop in generation quality with continuous uniform sampling. Although in practice COCO- GAN successfully learns spatial continuity using discrete sampling, continuous sampling, in theory, should still be preferred since the spatial domain is continuous. Achieving such a goal would require deeper understanding and insights about the root cause of the generation quality drop. We demonstrate that COCO- GAN can generate panoramas under a cylindrical coordinate system. However, another commonly used panorama format is sky- sphere under a hyperbolic coordinate system. Considering that the image patches with the hyperbolic coordinate is not square- shaped, further studies on incorporating special convolution schemes like Spherical- CNN (Cohen et al., 2018) and implementing COCO- GAN under a hyperbolic coordinate system would be required. Furthermore, to allow a more flexible and general coordinate system, some learnable coordinating methods (Balakrishnan et al., 2018) might be correlated to COCO- GAN and could further enhance the flexibility. The size of micro patches is crucial to the results of COCO- GAN. A rule- of- thumb is that the size should be large enough to cover sufficient information. The precise lower bound requires experiments to examine if COCO- GAN learns undesired spatial patterns. In "Spatial Positions Interpolations" of Section 3.2, we mention the model can be misled to learn reasonable but incorrect spatial relationship. An effective evaluation for the lower bound of the patch size needs future investigation. <--- Page Split ---> ## APPENDIX B MODEL ARCHITECTURE DETAILS ![](images/13_1.jpg) <center>Figure 10: The detailed generator architecture of COCO-GAN for generating micro patches with a size of \(32 \times 32\) pixels. We directly duplicate/remove the last residual block if we need to enlarge/reduce the size of output patch. 
</center> <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 11: The detailed discriminator architecture of COCO-GAN for discriminate macro patches with a size of \(64 \times 64\) pixels. We directly duplicate/remove the first residual block if we need to enlarge/reduce the input patch size. Both the content vector prediction head \((Q)\) and the spatial condition prediction head use the same structure shown in (c). </center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 12: More full-image generation examples of COCO-GAN. More results across epochs are provided in following anonymous link: https://goo.gl/A88ewn and https://goo.gl/hgCAeE. </center> <--- Page Split ---> ## APPENDIX D MORE INTERPOLATION EXAMPLES ![](images/16_0.jpg) <center>Figure 13: More interpolation examples. Given two latent vectors, COCO-GAN generates the micro patches and full-images that correspond to the interpolated latent vectors. </center> <--- Page Split ---> ## APPENDIX E MORE PANORAMA GENERATION SAMPLES ![](images/17_0.jpg) <center>Figure 14: More examples of generated panoramas. All samples possess the cyclic property along the horizontal direction. Each sample is generated with a resolution of \(768 \times 256\) pixels, and micro patch size \(64 \times 64\) pixels. </center> ## APPENDIX F ABLATION STUDY ![](images/17_1.jpg) <center>Figure 15: Comparison with the \(\mathcal{M}\) method mentioned in Section 3.3 in CelebA \(64 \times 64\) setting shows that the \(\mathcal{M}\) method is not competitive to COCO-GAN. Note that PD refers to "projection discriminator" and macro indicates the discriminator is in macro patch sized. </center> ![](images/17_2.jpg) <center>Figure 16: Some samples generated by different variants of \(\mathcal{M}\) . Note that each set of samples is extracted at the epoch when each \(\mathcal{M}\) variant reaches its lowest FID score. We also provide more samples at different epochs: https://goo.gl/ChQhCx. </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 17: FID score curves of different variants of COCO-GAN in CelebA \(64 \times 64\) setting. Combined with Figure 18, the results do not show significant differences in quality between COCO-GAN variants. Therefore, COCO-GAN does not pay significant trade-off for the conditional coordinate property. </center> ![](images/18_1.jpg) <center>Figure 18: Some samples generated by different variants of COCO-GAN. Note that each set of samples is extracted at the epoch when each \(\mathcal{M}\) variant reaches its lowest FID score. We also provide more samples for each of the variants at different epochs: https://goo.gl/Wnrppf. </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 19: Patch-inspired image generation can loosely retain some local structure or global characteristic of the original image. The red boxes are the sampled spatial positions that crop the full-images in (a) into the macro patches in (b). (c) & (d) show the patch-inspired generated macro patches and full-images based on \(\bar{z}\) . The blue boxes visualize the predicted spatial position \(\bar{c}^{m}\) . Since the information loss of the cropping process (from (a) to (b)) is critical, we do not expect (a) and (c) to be identical. Instead, (b) and (d) should be visually similar and (d) should be globally coherent. </center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 20: Patch-inspired image generation can loosely retain some local structure or global characteristic of the original image. 
The red boxes are the sampled spatial positions that crop the full-images in (a) into the macro patches in (b). (c) & (d) show the patch-inspired generated macro patches and full-images based on \(\bar{z}\). The blue boxes visualize the predicted spatial position \(\bar{c}^{m}\). Since the information loss of the cropping process (from (a) to (b)) is critical, we do not expect (a) and (c) to be identical. Instead, (b) and (d) should be visually similar and (d) should be globally coherent. Note that the diversity and difficulty of LSUN (bedroom category) are higher than those of CelebA. The \(Q\) network can only capture the structure, shape, and orientation of the room and the bed, but it fails to capture the detailed textures of objects.</center>

## APPENDIX H TRAINING INDICATORS

![](images/21_0.jpg)

<center>Figure 21: Both the Wasserstein distance and FID through time show that the training of COCO-GAN is stable. Both curves are logged while training on CelebA at \(128 \times 128\) resolution.</center>

## APPENDIX I SPATIAL POSITIONS INTERPOLATION

![](images/21_1.jpg)

<center>Figure 22: Spatial interpolation shows the spatial continuity of the micro patches. The spatial conditions are interpolated over the range \([-1, 1]\) of the micro coordinate system with a fixed latent vector.</center>
reject
Reject
5.333333
ICLR_2019_paper_0354
iclr
2,019
# ADVERSARIAL REPROGRAMMING OF NEURAL NETWORKS

Gamaleldin F. Elsayed\*
Google Brain
gamaleldin.elsayed@gmail.com

Ian Goodfellow
Google Brain
goodfellow@google.com

Jascha Sohl-Dickstein
Google Brain
jaschasd@google.com

## ABSTRACT

Deep neural networks are susceptible to adversarial attacks. In computer vision, well-crafted perturbations to images can cause neural networks to make mistakes such as confusing a cat with a computer. Previous adversarial attacks have been designed to degrade the performance of models or to cause machine learning models to produce specific outputs chosen ahead of time by the attacker. We introduce attacks that instead reprogram the target model to perform a task chosen by the attacker, without the attacker needing to specify or compute the desired output for each test-time input. This attack finds a single adversarial perturbation that can be added to all test-time inputs to a machine learning model in order to cause the model to perform a task chosen by the adversary, even if the model was not trained to do this task. These perturbations can thus be considered a program for the new task. We demonstrate adversarial reprogramming on six ImageNet classification models, repurposing these models to perform a counting task, as well as classification tasks: classification of MNIST and CIFAR-10 examples presented as inputs to the ImageNet model.

## 1 INTRODUCTION

The study of adversarial examples is often motivated in terms of the danger posed by an attacker whose goal is to cause model prediction errors with a small change to the model's input. Such an attacker could make a self-driving car react to a phantom stop sign (Evtimov et al., 2017) by means of a sticker (a small \(L_{0}\) perturbation), or cause an insurance company's damage model to overestimate the claim value from the resulting accident by subtly doctoring photos of the damage (a small \(L_{\infty}\) perturbation). With this context, various methods have been proposed both to construct (Szegedy et al., 2013; Papernot et al., 2015; 2017; 2016; Brown et al., 2017; Liu et al., 2016) and defend against (Goodfellow et al., 2014; Kurakin et al., 2016; Madry et al., 2017; Tramer et al., 2017; Kolter & Wong, 2017; Kannan et al., 2018) this style of adversarial attack.

Thus far, the majority of adversarial attacks have consisted of untargeted attacks that aim to degrade the performance of a model without necessarily requiring it to produce a specific output, or targeted attacks in which the attacker designs an adversarial perturbation to produce a specific output for that input. For example, an attack against a classifier might target a specific desired output class for each input image, or an attack against a reinforcement learning agent might induce that agent to enter a specific state (Lin et al., 2017). In practice, there is no requirement that adversarial attacks will adhere to this framework. Thus, it is crucial to proactively anticipate other unexplored adversarial goals in order to make machine learning systems more secure.

In this work, we consider a novel and more challenging adversarial goal: reprogramming the model to perform a task chosen by the attacker, without the attacker needing to compute the specific desired output. Consider a model trained to perform some original task: for inputs \(x\) it produces outputs \(f(x)\).
Consider an adversary who wishes to perform an adversarial task: for inputs \(\tilde{x}\) (not necessarily in the same domain as \(x\)) the adversary wishes to compute a function \(g(\tilde{x})\). We show that an adversary can accomplish this by learning adversarial reprogramming functions \(h_{f}(\cdot ;\theta)\) and \(h_{g}(\cdot ;\theta)\) that map between the two tasks. Here, \(h_{f}\) converts inputs from the domain of \(\tilde{x}\) into the domain of \(x\) (i.e., \(h_{f}(\tilde{x};\theta)\) is a valid input to the function \(f\)), while \(h_{g}\) maps the output of \(f(h_{f}(\tilde{x};\theta))\) back to outputs of \(g(\tilde{x})\). The parameters \(\theta\) of the adversarial program are then adjusted to achieve \(h_{g}\left(f\left(h_{f}\left(\tilde{x}\right)\right)\right) = g\left(\tilde{x}\right)\).

In our work, for simplicity, we define \(\tilde{x}\) to be a small image, \(g\) a function that processes small images, \(x\) a large image, and \(f\) a function that processes large images. Our function \(h_{f}\) then just consists of drawing \(\tilde{x}\) in the center of the large image and \(\theta\) in the borders (though see Section 4.5 for other schemes), and \(h_{g}\) is simply a hard-coded mapping between output class labels. However, the idea is more general; \(h_{f}\) (\(h_{g}\)) could be any consistent transformation that converts between the input (output) formats of the two tasks and causes the model to perform the adversarial task.

We refer to the class of attacks where a model is repurposed to perform a new task as adversarial reprogramming, and to \(\theta\) as an adversarial program. In contrast to most previous adversarial work, the attack does not need to be imperceptible to humans, or even subtle, in order to be considered a success. However, we note that it is still possible to construct reprogramming attacks that are imperceptible. Potential consequences of adversarial reprogramming include theft of computational resources from public-facing services, repurposing of AI-driven assistants into spies or spam bots, and abuse of machine learning services for tasks violating the ethical principles of system providers. Risks stemming from this type of attack are discussed in more detail in Section 5.2.

It may seem unlikely that an additive offset to a neural network's input would be sufficient on its own to repurpose the network to a new task. However, this flexibility stemming only from changes to a network's inputs is consistent with results on the expressive power of deep neural networks. For instance, Raghu et al. (2016) show that, depending on network hyperparameters, the number of unique output patterns achievable by moving along a one-dimensional trajectory in input space increases exponentially with network depth. Further, Li et al. (2018) show that networks can often be trained to high accuracy even if parameter updates are restricted to occur only in a low-dimensional subspace. An additive offset to a neural network's input is equivalent to a modification of its first-layer biases (for a convolutional network with biases shared across space, this operation effectively introduces new parameters because the additive input is not shared across space), and therefore an adversarial program corresponds to an update in a low-dimensional parameter subspace.
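The first-layer-bias equivalence is easy to verify numerically. The sketch below (an illustrative check of our own, not from the paper; all variable names are ours) confirms that adding a fixed offset \(\delta\) to the input of an affine layer \(Wx + b\) is the same as shifting that layer's bias by \(W\delta\):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # first-layer weights
b = rng.normal(size=4)        # first-layer biases
x = rng.normal(size=8)        # an ordinary input
delta = rng.normal(size=8)    # a fixed additive "program"

# Feeding the offset input through the original layer...
out_offset_input = W @ (x + delta) + b
# ...matches feeding the original input through a layer with a shifted bias.
out_shifted_bias = W @ x + (b + W @ delta)

assert np.allclose(out_offset_input, out_shifted_bias)
```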
In this paper, we present the first instances of adversarial reprogramming. In Section 2, we discuss related work. In Section 3, we present a training procedure for crafting adversarial programs, which cause a neural network to perform a new task. In Section 4, we experimentally demonstrate adversarial programs that target several convolutional neural networks designed to classify ImageNet data. These adversarial programs alter the network function from ImageNet classification to: counting squares in an image, classifying MNIST digits, and classifying CIFAR-10 images. Next, we examine the susceptibility of trained and untrained networks to adversarial reprogramming. We then demonstrate the possibility of reprogramming using adversarial data that bears no resemblance to the original data, showing that results from transfer learning do not fully explain adversarial reprogramming. Further, we demonstrate the possibility of concealing adversarial programs and data. Finally, we end in Sections 5 and 6 by discussing and summarizing our results.

## 2 BACKGROUND AND RELATED WORK

### 2.1 ADVERSARIAL EXAMPLES

One definition of adversarial examples is that they are "inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake" (Goodfellow et al., 2017). They are often formed by starting with a naturally occurring image and using a gradient-based optimizer to search for a nearby image that causes a mistake (Biggio et al., 2013; Szegedy et al., 2013; Carlini & Wagner, 2017). These attacks can be either untargeted (the adversary succeeds when causing any mistake at all) or targeted (the adversary succeeds when causing the model to predict a specific incorrect class). Adversarial attacks have also been proposed for other domains, such as malware detection (Grosse et al., 2017), generative models (Kos et al., 2017), network policies for reinforcement learning tasks (Huang et al., 2017), and network interpretation (Ghorbani et al., 2017).

![](images/2_0.jpg)

<center>Figure 1: Illustration of adversarial reprogramming. (a) Mapping of ImageNet labels to adversarial task labels (count of squares in an image). (b) Two examples of images from the adversarial task (left) are embedded at the center of an adversarial program (middle), yielding adversarial images (right). The adversarial program shown repurposes an Inception V3 network to count squares in images. (c) Illustration of inference with adversarial images. The network, when presented with adversarial images, will predict ImageNet labels that map to the adversarial task labels.</center>

In these domains, the attack remains either untargeted (generally degrading performance) or targeted (producing a specific output). We extend this line of work by developing reprogramming methods that aim to produce specific functionality rather than a specific hardcoded output.

Several authors have observed that the same modification can be applied to many different inputs in order to form adversarial examples (Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2017). For example, Brown et al. (2017) designed an "adversarial patch" that can switch the prediction of many models to one specific class (e.g., toaster) when it is placed physically in their field of view. We continue this line of work by finding a single adversarial program that can be presented with many input images to cause the model to process each image according to the adversarial program.
### 2.2 PARASITIC COMPUTING AND WEIRD MACHINES

Parasitic computing involves forcing a target system to solve a complex computational task it was not originally designed to perform, by taking advantage of peculiarities in network communication protocols (Barabasi et al., 2001; Peresini & Kostic, 2013). Weird machines, on the other hand, are a class of computational exploits where carefully crafted inputs can be used to run arbitrary code on a targeted computer (Bratus et al., 2011). Adversarial reprogramming can be seen as a form of parasitic computing, though without the focus on leveraging the communication protocol itself to perform the computation. Similarly, adversarial reprogramming can be seen as an example of neural networks behaving like weird machines, though adversarial reprogramming functions only within the neural network paradigm; we do not gain access to the host computer.

### 2.3 TRANSFER LEARNING

Transfer learning (Raina et al., 2007; Mesnil et al., 2011) and adversarial reprogramming share the goal of repurposing networks to perform a new task. Transfer learning methods use the knowledge obtained from one task as a base for learning to perform another. Neural networks possess properties that can be useful for many tasks (Yosinski et al., 2014). For example, neural networks trained on images develop early-layer features that resemble Gabor filters, even when trained on different datasets or with different training objectives such as supervised image classification (Krizhevsky et al., 2012), unsupervised density learning (Lee et al., 2009), or unsupervised learning of sparse representations (Le et al., 2011). Empirical work has demonstrated that it is possible to take a convolutional neural network trained to perform one task and simply train a linear SVM classifier on its features to make the network work for other tasks (Razavian et al., 2014; Donahue et al., 2014). However, transfer learning is very different from adversarial reprogramming in that it allows the model parameters to be changed for the new task. In typical adversarial settings, an attacker is unable to alter the model and instead must achieve their goals solely through manipulation of the input. Further, one may wish to adversarially reprogram across tasks with very different datasets. This makes adversarial reprogramming more challenging than transfer learning.

## 3 METHODS

In this work, we consider an adversary with access to the parameters of a neural network that is performing a specific task. The objective of the adversary is to reprogram the model to perform a new task by crafting an adversarial program to be included within the network input. Here, the network was originally designed to perform ImageNet classification, but the methods discussed here can be directly extended to other settings.

Our adversarial program is formulated as an additive contribution to the network input. Note that, unlike most adversarial perturbations, the adversarial program is not specific to a single image: the same adversarial program is applied to all images. We define the adversarial program as:

\[P = \tanh \left(W\odot M\right) \quad (1)\]

where \(W\in \mathbb{R}^{n\times n\times 3}\) are the adversarial program parameters to be learned, \(n\) is the ImageNet image width, and \(M\) is a masking matrix that is 0 at image locations corresponding to the adversarial data for the new task and 1 otherwise.
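As a concrete illustration of Equation 1, the minimal sketch below builds the mask \(M\) and the bounded program \(P\) for an \(n = 299\) ImageNet-sized canvas with a \(k = 28\) central hole (sizes chosen to match the MNIST experiments described later; the variable names and the centered placement are our assumptions for illustration):

```python
import torch

n, k = 299, 28            # ImageNet image width; width of the adversarial data
W = torch.randn(n, n, 3, requires_grad=True)   # program parameters to be learned

# M is 0 over the central k-by-k region reserved for the adversarial data,
# and 1 everywhere else.
M = torch.ones(n, n, 3)
lo = (n - k) // 2
M[lo:lo + k, lo:lo + k, :] = 0

P = torch.tanh(W * M)     # Equation 1: the program, bounded to (-1, 1)
```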
Note that the mask \(M\) is not required; we mask out the central region of the adversarial program purely to improve visualization of the action of the adversarial program. Also, note that we use \(\tanh (\cdot)\) to bound the adversarial perturbation to \((-1,1)\), the same range as the (rescaled) ImageNet images the target networks are trained to classify.

Let \(\tilde{x}\in \mathbb{R}^{k\times k\times 3}\) be a sample from the dataset to which we wish to apply the adversarial task, where \(k< n\), and let \(\tilde{X}\in \mathbb{R}^{n\times n\times 3}\) be the equivalent ImageNet-sized image with \(\tilde{x}\) placed in the proper area, defined by the mask \(M\). The corresponding adversarial image is then:

\[X_{adv} = h_{f}\left(\tilde{x};W\right) = \tilde{X} +P \quad (2)\]

Let \(P(y|X)\) be the probability that an ImageNet classifier gives to ImageNet label \(y\in \{1,\ldots ,1000\}\) given an input image \(X\). We define a hard-coded mapping function \(h_{g}(y_{adv})\) that maps a label from an adversarial task \(y_{adv}\) to a set of ImageNet labels. For example, if an adversarial task has 10 different classes \((y_{adv}\in \{1,\ldots ,10\})\), \(h_{g}(\cdot)\) may be defined to assign the adversarial labels to the first 10 ImageNet classes, to any other 10 classes, or to multiple ImageNet classes each. Our adversarial goal is thus to maximize the probability \(P(h_{g}(y_{adv})|X_{adv})\). We set up our optimization problem as

\[\begin{array}{r}{\hat{W} = \underset {W}{\mathrm{argmin}}\left(-\log P(h_{g}(y_{adv})|X_{adv}) + \lambda ||W||_{F}^{2}\right),} \end{array} \quad (3)\]

where \(\lambda\) is the coefficient of a weight norm penalty used to reduce overfitting. We optimize this loss with Adam while exponentially decaying the learning rate. Hyperparameters are given in Appendix A. Note that after the optimization the adversarial program has minimal computation cost for the adversary, as it only requires computing \(X_{adv}\) (Equation 2) and mapping the resulting ImageNet label to the correct class. In other words, during inference the adversary need only store the program and add it to the data, thus leaving the majority of the computation to the target network.

One interesting property of adversarial reprogramming is that it must exploit nonlinear behavior of the target model. This is in contrast to traditional adversarial examples, where attack algorithms based on linear approximations of deep neural networks are sufficient to cause a high error rate (Goodfellow et al., 2014). Consider a linear model that receives an input \(\tilde{x}\) and a program \(\theta\) concatenated into a single vector: \(x = [\tilde{x},\theta ]^{\top}\). Suppose that the weights of the linear model are partitioned into two sets, \(v = [v_{\tilde{x}},v_{\theta }]^{\top}\). The output of the model is \(v^{\top}x = v_{\tilde{x}}^{\top}\tilde{x} +v_{\theta}^{\top}\theta\). The adversarial program \(\theta\) adapts the effective biases \(v_{\theta}^{\top}\theta\) but cannot adapt the weights applied to the input \(\tilde{x}\). The adversarial program \(\theta\) can thus bias the model toward consistently outputting one class or another, but cannot change the way the input is processed. For adversarial reprogramming to work, the model must include nonlinear interactions between \(\tilde{x}\) and \(\theta\). A nonlinear deep network satisfies this requirement.
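Putting Equations 1-3 together, a single training step might look like the minimal sketch below. We stand in torchvision's pretrained Inception V3 for the TF-Slim models the paper actually uses, map adversarial label \(i\) directly to ImageNet label \(i\) for \(h_{g}\), and use a channel-first layout; the hyperparameter values, batch content, and all names are illustrative assumptions, not the authors' exact setup:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in target network (the paper uses TF-Slim weights; any frozen
# ImageNet classifier taking 299x299 inputs plays the same role here).
model = models.inception_v3(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

n, k, lam = 299, 28, 0.01                      # canvas width, data width, penalty
W = torch.zeros(3, n, n, requires_grad=True)   # adversarial program parameters
M = torch.ones(3, n, n)
lo = (n - k) // 2
M[:, lo:lo + k, lo:lo + k] = 0
opt = torch.optim.Adam([W], lr=0.05)

def reprogram_loss(x_small, y_adv):
    """x_small: (B, 3, k, k) in [-1, 1]; y_adv: (B,) adversarial labels in {0..9}."""
    X = torch.zeros(x_small.size(0), 3, n, n)
    X[:, :, lo:lo + k, lo:lo + k] = x_small    # X~: data placed in the masked hole
    X_adv = X + torch.tanh(W * M)              # Equation 2
    log_p = F.log_softmax(model(X_adv), dim=1)
    # h_g: adversarial label i is hard-coded to ImageNet label i.
    nll = -log_p.gather(1, y_adv.unsqueeze(1)).mean()
    return nll + lam * (W ** 2).sum()          # Equation 3

# One optimization step on a toy minibatch:
x_small = torch.rand(8, 3, k, k) * 2 - 1
y_adv = torch.randint(0, 10, (8,))
loss = reprogram_loss(x_small, y_adv)
opt.zero_grad()
loss.backward()
opt.step()
```

Note how this matches the inference-time economy described above: once \(W\) is trained, the adversary only adds the program to each input and reads off the mapped label.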
![](images/4_0.jpg)

<center>Figure 2: Examples of adversarial programs for MNIST classification. Adversarial programs that cause six ImageNet models to function as MNIST classifiers. Each program is shown being applied to an MNIST digit.</center>

## 4 RESULTS

To demonstrate the feasibility of adversarial reprogramming, we conducted experiments on six architectures trained on ImageNet. In each case, we reprogrammed the network to perform three different adversarial tasks: counting squares, MNIST classification, and CIFAR-10 classification. The weights of all trained models were obtained from TensorFlow-Slim, and top-1 ImageNet precisions are shown in Table Supp. 1. We additionally examined whether adversarial training conferred resistance to adversarial reprogramming, and compared the susceptibility of trained networks to that of random networks. Further, we investigated the possibility of reprogramming the networks when the adversarial data bear no resemblance to the original data. Finally, we demonstrated the possibility of concealing the adversarial program and the adversarial data.

### 4.1 COUNTING SQUARES

To illustrate the adversarial reprogramming procedure, we start with a simple adversarial task: counting the number of squares in an image. We generated images \((\tilde{x})\) of size \(36 \times 36 \times 3\) that include \(9 \times 9\) white squares with black frames. Each square could appear at 16 different positions in the image, and the number of squares ranged from 1 to 10. The squares were placed randomly on grid points (Figure 1b, left). We embedded these images in an adversarial program (Figure 1b, middle). The resulting images \((X_{adv})\) are of size \(299 \times 299 \times 3\), with the \(36 \times 36 \times 3\) squares images at the center (Figure 1b, right). Thus, the adversarial program is simply a frame around the counting-task images. We trained one adversarial program per ImageNet model, such that the first 10 ImageNet labels represent the number of squares in each image (Figure 1c). Note that the labels we used from ImageNet have no relation to the labels of the new adversarial task. For example, a 'White Shark' has nothing to do with counting 3 squares in an image, and an 'Ostrich' does not at all resemble 10 squares. We then evaluated the accuracy on the task by sampling 100,000 images and comparing the network prediction to the number of squares in the image. Despite the dissimilarity of ImageNet labels and adversarial labels, and the fact that the adversarial program is equivalent simply to a first-layer bias, the adversarial program masters this counting task for all networks (Table 1). These results demonstrate the vulnerability of neural networks to reprogramming on this simple task using only additive contributions to the input.
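The counting-task images are simple to synthesize. The sketch below is our reconstruction from the description above (9x9 white squares with black frames placed on a 4x4 grid of a 36x36 image); details such as the frame width and the background value are assumptions, since the text does not specify them:

```python
import numpy as np

def counting_image(num_squares, rng):
    """36x36x3 image with `num_squares` 9x9 framed white squares on a 4x4 grid."""
    img = 0.5 * np.ones((36, 36, 3))                 # assumed grey background
    cells = rng.choice(16, size=num_squares, replace=False)
    for c in cells:
        row, col = divmod(c, 4)
        y, x = row * 9, col * 9
        img[y:y + 9, x:x + 9] = 0.0                  # black frame
        img[y + 1:y + 8, x + 1:x + 8] = 1.0          # white interior
    return img

rng = np.random.default_rng(0)
x_tilde = counting_image(num_squares=3, rng=rng)     # label: 3 squares
```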
### 4.2 MNIST CLASSIFICATION

In this section, we demonstrate adversarial reprogramming on the somewhat more complex task of classifying MNIST digits. We measure both train and test accuracy, so the adversarial program cannot succeed simply by memorizing the training examples. As in the counting task, we embedded MNIST digits of size \(28 \times 28 \times 3\) inside a frame representing the adversarial program, assigned the first 10 ImageNet labels to the MNIST digits, and trained an adversarial program for each ImageNet model. Figure 2 shows examples of the adversarial program for each network being applied. Our results show that ImageNet networks can be successfully reprogrammed to function as MNIST classifiers by presenting an additive adversarial program. The adversarial program additionally generalized well from the training to the test set, suggesting that the reprogramming does not function purely by memorizing training examples and is not brittle to small changes in the input.

Table 1: Neural networks adversarially reprogrammed to perform a variety of tasks. The table gives the accuracy of reprogrammed networks on the counting task and the MNIST and CIFAR-10 classification tasks for networks pretrained on ImageNet, and on MNIST classification for untrained (randomly initialized) networks.

<table><tr><td>Model</td><td>Counting</td><td>MNIST train</td><td>MNIST test</td><td>CIFAR-10 train</td><td>CIFAR-10 test</td><td>Untrained: MNIST test</td></tr><tr><td>Incep. V3</td><td>0.9993</td><td>0.9781</td><td>0.9753</td><td>0.7311</td><td>0.6911</td><td>0.4539</td></tr><tr><td>Incep. V4</td><td>0.9999</td><td>0.9638</td><td>0.9646</td><td>0.6948</td><td>0.6683</td><td>0.1861</td></tr><tr><td>Incep. Res. V2</td><td>0.9994</td><td>0.9773</td><td>0.9744</td><td>0.6985</td><td>0.6719</td><td>0.1135</td></tr><tr><td>Res. V2 152</td><td>0.9763</td><td>0.9478</td><td>0.9534</td><td>0.6410</td><td>0.6210</td><td>0.1032</td></tr><tr><td>Res. V2 101</td><td>0.9843</td><td>0.9650</td><td>0.9664</td><td>0.6435</td><td>0.6301</td><td>0.1756</td></tr><tr><td>Res. V2 50</td><td>0.9966</td><td>0.9506</td><td>0.9496</td><td>0.6</td><td>0.5858</td><td>0.9325</td></tr><tr><td>Incep. V3 adv.</td><td></td><td>0.9761</td><td>0.9752</td><td></td><td></td><td></td></tr></table>

### 4.3 CIFAR-10 CLASSIFICATION

Here we implement a more challenging adversarial task: crafting adversarial programs to repurpose ImageNet models to instead classify CIFAR-10 images. Some examples of the resulting adversarial images are given in Figure Supp. 1. Our results show that the adversarial program was able to raise the accuracy on CIFAR-10 from chance to a moderate level (Table 1). This accuracy is near what is expected from typical fully connected networks (Lin et al., 2015), but with minimal computation cost on the adversary's side at inference time. One observation is that although adversarial programs trained to classify CIFAR-10 differ from those that classify MNIST or perform the counting task, the programs show some visual similarities; e.g., ResNet architecture adversarial programs seem to possess some low-spatial-frequency texture (Figure Supp. 2a).

### 4.4 INVESTIGATION OF THE EFFECT OF THE TRAINED MODEL DETAILS AND ORIGINAL DATA

One important question is to what degree susceptibility to adversarial reprogramming depends on the details of the model being attacked. To address this question, we examined attack success on an Inception V3 model that was trained on ImageNet data using adversarial training (Tramèr et al., 2017). Adversarial training augments the data with adversarial examples during training and is one of the most common methods for guarding against adversarial examples. As in Section 4.2, we adversarially reprogrammed this network to classify MNIST digits. Our results (Table 1) indicate that the model trained with adversarial training is still vulnerable to reprogramming, with only a slight reduction in attack success. This finding shows that standard approaches to adversarial defense have little efficacy against adversarial reprogramming. This is likely explained by the differences between adversarial reprogramming and standard adversarial attacks.
First, the goal is to repurpose the network rather than cause it to make a specific mistake; second, the magnitude of adversarial programs can be large, whereas traditional adversarial attacks use perturbations of small magnitude; and third, adversarial defense methods may be specific to the original data and may not generalize to data from the adversarial task.

To further explore dependence on the details of the model, we performed adversarial reprogramming attacks on models with random weights. We used the same experimental setup and MNIST reprogramming task as in Section 4.2; we simply used the ImageNet models with randomly initialized rather than trained weights. The MNIST classification task was easy for networks pretrained on ImageNet (Table 1). However, for random networks, training was very challenging and generally converged to a much lower accuracy (only ResNet V2 50 could train to an accuracy similar to that of the trained ImageNet models; see Table 1). Moreover, the appearance of the adversarial programs was qualitatively distinct from that of the adversarial programs obtained with networks pretrained on ImageNet (see Figure Supp. 2b). This finding demonstrates that the original task the neural networks perform is important for adversarial reprogramming. This result may seem surprising, as random networks have rich structure that adversarial programs might be expected to take advantage of. For example, theoretical results have shown that wide neural networks become identical to Gaussian processes, where training specific weights in intermediate layers is not necessary to perform tasks (Matthews et al., 2018; Lee et al., 2017). Other work has demonstrated that it is possible to use random networks as generative models for images (Ustyuzhaninov et al., 2016; He et al., 2016), further supporting their potential richness. One explanation may be that randomly initialized networks perform poorly for simple reasons, such as poor scaling of network weights at initialization, whereas the trained weights are better conditioned.

![](images/6_0.jpg)

<center>Figure 3: Adversarial programs may be limited in size or concealed. In all panels, an Inception V3 model pretrained on ImageNet is reprogrammed to classify MNIST digits. Example images (a) with adversarial programs of different sizes, and (b) with adversarial programs of different perturbation scales. In (c), the adversarial data + program (right) are hidden inside a normal image from ImageNet (left), yielding an adversarial image (center) that is able to reprogram the network to function as an MNIST classifier. The pixels of the adversarial data are shuffled to conceal its structure.</center>

One explanation of adversarial reprogramming, motivated by transfer learning (Yosinski et al., 2014), is that the network may be relying on some similarities between the original and adversarial data. To address this hypothesis, we randomized the pixels of the MNIST digits so that any resemblance between the adversarial data (MNIST) and images in the original data (ImageNet) is removed (see Figure Supp. 3). We then attempted to reprogram pretrained ImageNet networks to classify the shuffled MNIST digits. Despite shuffled MNIST not sharing any spatial structure with natural images, we managed to reprogram the ImageNet networks for this task (Table Supp. 7) with almost the same accuracy as on standard MNIST (in some cases, shuffled MNIST even achieved higher accuracy). We also investigated the possibility of reprogramming with shuffled CIFAR-10 images.
Our results show that it is possible to reprogram neural networks to classify shuffled CIFAR-10 images (Table Supp. 7). The accuracy for shuffled CIFAR-10 decreased, as the convolutional structure of the network is not useful for classifying the shuffled images. However, the accuracy was comparable to that expected from fully connected networks (Novak et al., 2019). These results thus suggest that transferring knowledge between the original and adversarial data does not completely explain the susceptibility to adversarial reprogramming. Even more interestingly, these results suggest the possibility of reprogramming across tasks with unrelated datasets and across domains.
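The shuffled-data control is straightforward to reproduce: a single fixed random permutation of pixel positions is applied to every image, destroying the spatial structure while remaining deterministic and invertible. A minimal sketch (our own illustration, with assumed names):

```python
import numpy as np

rng = np.random.default_rng(0)
perm = rng.permutation(28 * 28)      # one fixed permutation shared by all images

def shuffle_pixels(img):
    """img: (28, 28) or (28, 28, C) array; same pixels, permuted positions."""
    flat = img.reshape(28 * 28, -1)
    return flat[perm].reshape(img.shape)
```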
### 4.5 CONCEALING ADVERSARIAL PROGRAMS

In the previous experiments, there were no constraints on the size (number of program pixels) or scale (magnitude of perturbations) of the adversarial program. Here, we demonstrate the possibility of limiting the visibility of the adversarial perturbations by limiting the program size or scale, or even concealing the whole adversarial task. In these experiments, we used an Inception V3 model pretrained to classify ImageNet.

In our first experiment, we adversarially reprogrammed the network to classify MNIST digits while limiting the size of the program (Figure 3a). Our results show that adversarial reprogramming is still successful, albeit with lower accuracy, even with a very small adversarial program. In our next experiment, we made the adversarial program nearly imperceptible by limiting the \(L_{\infty}\) norm of the adversarial perturbation to a small percentage of the possible pixel values. Our results show that adversarial reprogramming is still successful (Figure 3b) even with nearly imperceptible programs.

Further, we tested the possibility of concealing the whole adversarial task by hiding both the adversarial data and the program within a normal image from ImageNet. To do this, we shuffled the pixels of the adversarial data (here MNIST), so that the structure of the adversarial data is hidden. Then, we limited the scale of both the adversarial program and the data to a small fraction of the possible pixel values, and added the resulting image to a random image from ImageNet. Formally, we extended our reprogramming method as follows:

\[\begin{array}{r l} & {P_{X} = \alpha \tanh \left(\mathrm{shuffle}_{ix}(\tilde{X}) + (W\odot \mathrm{shuffle}_{ix}(M))\right)}\\ & {X_{adv} = \mathrm{clip}\left(X_{ImageNet} + P_{X},\left[-1,1\right]\right),} \end{array} \quad (4)\]

where \(\tilde{X}\), \(M\) and \(W\) are as described in Section 3, \(P_{X}\) is the adversarial data combined with the adversarial program, \(ix\) is the shuffling sequence (the same for \(M\) and for every \(\tilde{X}\)), \(\alpha\) is a scalar used to limit the perturbation scale, and \(X_{ImageNet}\) is an image chosen randomly from ImageNet, which is the same for all MNIST examples. We then optimized the adversarial program for the network to classify MNIST digits (see Equation 3). The resulting adversarial images are very similar to normal images from ImageNet (see Figure 3c), yet the network is successfully reprogrammed to classify MNIST digits, though with lower accuracy (see Figure 3c). This result demonstrates the possibility of hiding the adversarial task.
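Equation 4 can be sketched directly in code. The version below reuses the canvas width \(n\), program \(W\), and mask \(M\) from the earlier sketches, and applies one fixed permutation to both the embedded data \(\tilde{X}\) and the mask, as the equation requires; the value of \(\alpha\) and all names are illustrative assumptions:

```python
import torch

n, alpha = 299, 0.1                 # canvas width; assumed perturbation scale
W = torch.randn(3, n, n)            # trained program parameters (as above)
M = torch.ones(3, n, n)             # mask with a central hole (as above)
perm = torch.randperm(n * n)        # one fixed pixel permutation ("ix")

def shuffle_ix(T):
    """Apply the same fixed pixel shuffle to a (C, n, n) tensor."""
    flat = T.reshape(T.size(0), n * n)
    return flat[:, perm].reshape(T.shape)

def conceal(X_tilde, X_imagenet):
    """Equation 4: hide the shuffled data plus program inside a host image."""
    P_X = alpha * torch.tanh(shuffle_ix(X_tilde) + W * shuffle_ix(M))
    return torch.clamp(X_imagenet + P_X, -1.0, 1.0)
```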
Here, we used a simple shuffling technique and picked one image from ImageNet to hide the adversarial task, but one could go further, using more complex schemes for hiding the adversarial task and optimizing the choice of ImageNet image, which may make adversarial reprogramming more effective and harder to detect.

## 5 DISCUSSION

### 5.1 FLEXIBILITY OF TRAINED NEURAL NETWORKS

We found that trained neural networks were more susceptible to adversarial reprogramming than random networks. Further, we found that reprogramming is still successful even when the data structure is very different from the structure of the data in the original task. This demonstrates a large flexibility in repurposing trained weights for a new task. Our results suggest that dynamic reuse of neural circuits should be practical in modern artificial neural networks. This holds the promise of enabling machine learning systems that are easier to repurpose, more flexible, and more efficient due to shared compute. Indeed, recent work in machine learning has focused on building large dynamically connected networks with reusable components (Shazeer et al., 2017).

It is unclear whether the reduced performance when targeting random networks, and when reprogramming to perform CIFAR-10 classification, was due to limitations in the expressivity of the adversarial perturbation, or due to the optimization task in Equation 3 being more difficult in these situations. Disentangling limitations in expressivity and trainability will be an interesting future direction.

### 5.2 ADVERSARIAL GOALS BEYOND THE IMAGE DOMAIN

We demonstrated adversarial reprogramming on classification tasks in the image domain. It is an interesting area for future research whether similar attacks might succeed for audio, video, text, or other domains and tasks. Our finding that trained networks can be reprogrammed to classify shuffled images, which do not retain any of the original spatial structure, suggests that reprogramming across domains is likely possible.

Adversarial reprogramming of recurrent neural networks (RNNs) would be particularly interesting, since RNNs (especially those with attention or memory) can be Turing complete (Neelakantan et al., 2015). An attacker would thus only need to find inputs that induce the RNN to perform a number of simple operations, such as incrementing a counter, decrementing a counter, and changing the input attention location if the counter is zero (Minsky, 1961). If adversarial programs can be found for these simple operations, they could be composed to reprogram the RNN to perform any computational task.

A variety of nefarious ends may be achievable if machine learning systems can be reprogrammed by a specially crafted input. The most direct of these is the theft of computational resources. For instance, an attacker might develop an adversarial program that causes the computer vision classifier in a cloud-hosted photo service to solve image captchas and enable the creation of spam accounts. If RNNs can be flexibly reprogrammed as described above, this computational theft might extend to more arbitrary tasks. A major danger beyond computational theft is that an adversary may repurpose computational resources to perform a task that violates the code of ethics of system providers. This is particularly important as ML service providers are invested in protecting the ethical principles and guidelines that govern the use of their services.

## 6 CONCLUSION

In this work, we proposed a new class of adversarial attacks that aim to reprogram neural networks to perform novel adversarial tasks. Our results demonstrate for the first time the possibility of such attacks. They are also illustrative of both surprising flexibility and surprising vulnerability in deep neural networks. Future investigation should address the properties and limitations of adversarial reprogramming, and possible ways to mitigate or defend against it.

## REFERENCES

Albert-Laszlo Barabasi, Vincent W Freeh, Hawoong Jeong, and Jay B Brockman. Parasitic computing. Nature, 412(6850):894, 2001.

Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2013, Prague, Czech Republic, September 23-27, 2013, Proceedings, Part III, pp. 387-402, 2013. doi: 10.1007/978-3-642-40994-3_25.

Sergey Bratus, Michael Locasto, Meredith Patterson, Len Sassaman, and Anna Shubina. Exploit programming: From buffer overflows to weird machines and theory of computation. USENIX ;login:, 2011.

Tom B Brown, Dandelion Mané, Aurko Roy, Martin Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017.

N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57, May 2017. doi: 10.1109/SP.2017.49.

Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, pp. 647-655, 2014.

Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song. Robust physical-world attacks on deep learning models. arXiv preprint arXiv:1707.08945, 2017.

Amirata Ghorbani, Abubakar Abid, and James Zou. Interpretation of neural networks is fragile. arXiv preprint arXiv:1710.10547, 2017.

Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel, and Jack Clark. Attacking machine learning with adversarial examples, 2017. URL https://blog.openai.com/adversarial-example-research/.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick D. McDaniel. Adversarial examples for malware detection. In ESORICS 2017, pp. 62-79, 2017. doi: 10.1007/978-3-319-66399-9_4. URL https://doi.org/10.1007/978-3-319-66399-9_4.

Kun He, Yan Wang, and John Hopcroft. A powerful generative model using random weights for the deep image representation. In Advances in Neural Information Processing Systems, pp. 631-639, 2016.

Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284, 2017.

Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018.

J Zico Kolter and Eric Wong. Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv preprint arXiv:1711.00851, 2017.

Jernej Kos, Ian Fischer, and Dawn Song. Adversarial examples for generative models. arXiv preprint arXiv:1702.06832, 2017.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial machine learning at scale. arXiv e-prints, November 2016.

Quoc V Le, Alexandre Karpenko, Jiquan Ngiam, and Andrew Y Ng. ICA with reconstruction cost for efficient overcomplete feature learning. In Advances in Neural Information Processing Systems, pp. 1017-1025, 2011.

Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 609-616. ACM, 2009.

Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as Gaussian processes. arXiv preprint arXiv:1711.00165, 2017.

Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. Measuring the intrinsic dimension of objective landscapes. arXiv preprint arXiv:1804.08838, 2018.

Yen-Chen Lin, Zhang-Wei Hong, Yuan-Hong Liao, Meng-Li Shih, Ming-Yu Liu, and Min Sun. Tactics of adversarial attack on deep reinforcement learning agents. arXiv preprint arXiv:1703.06748, 2017.

Zhouhan Lin, Roland Memisevic, and Kishore Konda. How far can we go without convolution: Improving fully-connected networks. arXiv preprint arXiv:1511.02580, 2015.

Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.

Alexander G de G Matthews, Mark Rowland, Jiri Hron, Richard E Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. arXiv preprint arXiv:1804.11271, 2018.

Grégoire Mesnil, Yann Dauphin, Xavier Glorot, Salah Rifai, Yoshua Bengio, Ian Goodfellow, Erick Lavoie, Xavier Muller, Guillaume Desjardins, David Warde-Farley, et al. Unsupervised and transfer learning challenge: a deep learning approach. In Proceedings of the 2011 International Conference on Unsupervised and Transfer Learning Workshop - Volume 27, pp. 97-111. JMLR.org, 2011.

Marvin L Minsky. Recursive unsolvability of Post's problem of "tag" and other topics in theory of Turing machines. Annals of Mathematics, pp. 437-455, 1961.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 86-94. IEEE, 2017.

Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.

Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Dan Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. Bayesian deep convolutional networks with many channels are Gaussian processes. In ICLR, 2019. URL https://openreview.net/forum?id=B1g30j0qF7.

Nicolas Papernot, Patrick D. McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. CoRR, abs/1511.07528, 2015.

Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow.
Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016.

Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506-519. ACM, 2017.

Peter Peresini and Dejan Kostic. Is the network capable of computation? In Network Protocols (ICNP), 2013 21st IEEE International Conference on, pp. 1-6. IEEE, 2013.

Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl-Dickstein. On the expressive power of deep neural networks. arXiv preprint arXiv:1606.05336, 2016.

Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y Ng. Self-taught learning: transfer learning from unlabeled data. In Proceedings of the 24th International Conference on Machine Learning, pp. 759-766. ACM, 2007.

Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2014 IEEE Conference on, pp. 512-519. IEEE, 2014.

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

TensorFlow-Slim. TensorFlow-Slim image classification model library. https://github.com/tensorflow/models/tree/master/research/slim. Accessed: 2018-05-01.

F. Tramer, A. Kurakin, N. Papernot, D. Boneh, and P. McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv e-prints, May 2017.

Ivan Ustyuzhaninov, Wieland Brendel, Leon A Gatys, and Matthias Bethge. Texture synthesis using shallow convolutional networks with random filters. arXiv preprint arXiv:1606.00021, 2016.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320-3328, 2014.

## Supplemental material

## A SUPPLEMENTARY TABLES

Table Supp. 1: Top-1 precision of models on ImageNet data

<table><tr><td>Model</td><td>Accuracy</td></tr><tr><td>Inception V3</td><td>0.78</td></tr><tr><td>Inception V4</td><td>0.802</td></tr><tr><td>Inception Resnet V2</td><td>0.804</td></tr><tr><td>Resnet V2 152</td><td>0.778</td></tr><tr><td>Resnet V2 101</td><td>0.77</td></tr><tr><td>Resnet V2 50</td><td>0.756</td></tr><tr><td>Inception V3 adv.</td><td>0.776</td></tr></table>

Table Supp. 2: Hyper-parameters for adversarial program training for the square counting adversarial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receives 'batch' data samples). We then performed synchronized updates of the adversarial program parameters.
<table><tr><td>ImageNet Model</td><td>λ</td><td>batch</td><td>GPUs</td><td>learn rate</td><td>decay</td><td>epochs/decay</td><td>steps</td></tr><tr><td>Inception V3</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Inception V4</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Inception Resnet V2</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Resnet V2 152</td><td>0.01</td><td>20</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Resnet V2 101</td><td>0.01</td><td>20</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 50</td><td>0.01</td><td>20</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr></table>

Table Supp. 3: Hyper-parameters for adversarial program training for the MNIST classification adversarial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receives 'batch' data samples). We then performed synchronized updates of the adversarial program parameters. (The model Inception V3 adv. is pretrained on ImageNet data using adversarial training.)

<table><tr><td>ImageNet Model</td><td>λ</td><td>batch</td><td>GPUs</td><td>learn rate</td><td>decay</td><td>epochs/decay</td><td>steps</td></tr><tr><td>Inception V3</td><td>0.05</td><td>100</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Inception V4</td><td>0.05</td><td>100</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Inception Resnet V2</td><td>0.05</td><td>50</td><td>8</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 152</td><td>0.05</td><td>50</td><td>8</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 101</td><td>0.05</td><td>50</td><td>8</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 50</td><td>0.05</td><td>100</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Inception V3 adv.</td><td>0.01</td><td>50</td><td>6</td><td>0.05</td><td>0.98</td><td>4</td><td>100000</td></tr></table>

Table Supp. 4: Hyper-parameters for adversarial program training for the CIFAR-10 classification adversarial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receives 'batch' data samples). We then performed synchronized updates of the adversarial program parameters.

<table><tr><td>ImageNet Model</td><td>λ</td><td>batch</td><td>GPUs</td><td>learn rate</td><td>decay</td><td>epochs/decay</td><td>steps</td></tr><tr><td>Inception V3</td><td>0.01</td><td>50</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr><tr><td>Inception V4</td><td>0.01</td><td>50</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr><tr><td>Inception Resnet V2</td><td>0.01</td><td>50</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr><tr><td>Resnet V2 152</td><td>0.01</td><td>30</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr><tr><td>Resnet V2 101</td><td>0.01</td><td>30</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr><tr><td>Resnet V2 50</td><td>0.01</td><td>30</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr></table>

Table Supp. 5: Hyper-parameters for adversarial program training for the MNIST classification adversarial task with randomly initialized (untrained) networks. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receives 'batch' data samples). We then performed synchronized updates of the adversarial program parameters.

<table><tr><td>Random Model</td><td>λ</td><td>batch</td><td>GPUs</td><td>learn rate</td><td>decay</td><td>epochs/decay</td><td>steps</td></tr><tr><td>Inception V3</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Inception V4</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Inception Resnet V2</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 152</td><td>0.01</td><td>20</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 101</td><td>0.01</td><td>20</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 50</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr></table>

Table Supp. 6: Hyper-parameters for adversarial program training for the Shuffled CIFAR-10 classification task. For all models, we used 1 GPU and a batch size of 100. We decayed the learning rate with a cosine schedule over 200 epochs. We performed hyperparameter tuning and picked the best parameters based on a held-out validation set. The best parameters are shown below.

<table><tr><td>ImageNet Model</td><td>λ</td><td>optimizer</td><td>learn rate</td></tr><tr><td>Inception V3</td><td>0</td><td>Momentum</td><td>0.1</td></tr><tr><td>Inception V4</td><td>0.1</td><td>Momentum</td><td>0.001</td></tr><tr><td>Inception Resnet V2</td><td>0</td><td>ADAM</td><td>0.001</td></tr><tr><td>Resnet V2 152</td><td>0.1</td><td>Momentum</td><td>0.1</td></tr><tr><td>Resnet V2 101</td><td>0.01</td><td>Momentum</td><td>0.1</td></tr><tr><td>Resnet V2 50</td><td>0.01</td><td>Momentum</td><td>0.1</td></tr></table>

Table Supp. 7: Neural networks adversarially reprogrammed to perform classification tasks with shuffled pixels. The table gives the accuracy of networks pretrained on ImageNet and reprogrammed to classify Shuffled MNIST and Shuffled CIFAR-10; shuffling is performed across pixels. The optimization parameters for Shuffled MNIST are the same as in Table Supp. 3. The optimization parameters for Shuffled CIFAR-10 are shown in Table Supp. 6.

<table><tr><td>Model</td><td>Shuffled MNIST</td><td>Shuffled CIFAR-10</td></tr><tr><td>Incep. V3</td><td>0.9709</td><td>0.5578</td></tr><tr><td>Incep. V4</td><td>0.9715</td><td>0.5618</td></tr><tr><td>Incep. Res. V2</td><td>0.9683</td><td>0.5507</td></tr><tr><td>Res. V2 152</td><td>0.9691</td><td>0.5624</td></tr><tr><td>Res. V2 101</td><td>0.9678</td><td>0.5612</td></tr><tr><td>Res. V2 50</td><td>0.9717</td><td>0.5614</td></tr></table>

![](images/13_0.jpg)

<center>Figure Supp. 1: Examples of adversarial images for CIFAR-10 classification. An adversarial program repurposing an Inception V3 model to classify CIFAR-10 images, applied to four CIFAR-10 images.</center>

![](images/14_0.jpg)

<center>Figure Supp. 2: Adversarial programs exhibit qualitative similarities and differences across both network and task. (a) Top: adversarial programs targeted to repurpose networks pretrained on ImageNet to count squares in images. Middle: adversarial programs targeted to repurpose networks pretrained on ImageNet to function as MNIST classifiers. Bottom: adversarial programs to cause the same networks to function as CIFAR-10 classifiers. (b) Adversarial programs targeted to repurpose networks with randomly initialized parameters to function as MNIST classifiers.</center>

![](images/14_1.jpg)

<center>Figure Supp. 3: Neural networks are susceptible to adversarial reprogramming even in cases when the adversarial data and original task data are unrelated. The pixels in the MNIST digits are shuffled so that the resulting image bears no resemblance to any natural image. The shuffled image is then combined with the adversarial program to create a reprogramming image. This image successfully reprograms an Inception V3 model to classify the shuffled digits, despite the adversarial data (i.e., shuffled MNIST digits) being unrelated to the original data (i.e., ImageNet).</center>
## ABSTRACT Deep neural networks are susceptible to adversarial attacks. In computer vision, well- crafted perturbations to images can cause neural networks to make mistakes such as confusing a cat with a computer. Previous adversarial attacks have been designed to degrade performance of models or cause machine learning models to produce specific outputs chosen ahead of time by the attacker. We introduce attacks that instead reprogram the target model to perform a task chosen by the attacker—without the attacker needing to specify or compute the desired output for each test- time input. This attack finds a single adversarial perturbation, that can be added to all test- time inputs to a machine learning model in order to cause the model to perform a task chosen by the adversary—even if the model was not trained to do this task. These perturbations can thus be considered a program for the new task. We demonstrate adversarial reprogramming on six ImageNet classification models, repurposing these models to perform a counting task, as well as classification tasks: classification of MNIST and CIFAR- 10 examples presented as inputs to the ImageNet model. ## 1 INTRODUCTION The study of adversarial examples is often motivated in terms of the danger posed by an attacker whose goal is to cause model prediction errors with a small change to the model's input. Such an attacker could make a self- driving car react to a phantom stop sign (Evtimov et al., 2017) by means of a sticker (a small \(L_{0}\) perturbation), or cause an insurance company's damage model to overestimate the claim value from the resulting accident by subtly doctoring photos of the damage (a small \(L_{\infty}\) perturbation). With this context, various methods have been proposed both to construct (Szegedy et al., 2013; Papernot et al., 2015; 2017; 2016; Brown et al., 2017; Liu et al., 2016) and defend against (Goodfellow et al., 2014; Kurakin et al., 2016; Madry et al., 2017; Tramer et al., 2017; Kolter & Wong, 2017; Kannan et al., 2018) this style of adversarial attack. Thus far, the majority of adversarial attacks have consisted of untargeted attacks that aim to degrade the performance of a model without necessarily requiring it to produce a specific output, or targeted attacks in which the attacker designs an adversarial perturbation to produce a specific output for that input. For example, an attack against a classifier might target a specific desired output class for each input image, or an attack against a reinforcement learning agent might induce that agent to enter a specific state (Lin et al., 2017). In practice, there is no requirement that adversarial attacks will adhere to this framework. Thus, it is crucial to proactively anticipate other unexplored adversarial goals in order to make machine learning systems more secure. In this work, we consider a novel and more challenging adversarial goal: reprogramming the model to perform a task chosen by the attacker, without the attacker needing to compute the specific desired output. Consider a model trained to perform some original task: for inputs \(x\) it produces outputs \(f(x)\) . Consider an adversary who wishes to perform an adversarial task: <--- Page Split ---> for inputs \(\tilde{x}\) (not necessarily in the same domain as \(x\) ) the adversary wishes to compute a function \(g(\tilde{x})\) . We show that an adversary can accomplish this by learning adversarial reprogramming functions \(h_{f}(\cdot ;\theta)\) and \(h_{g}(\cdot ;\theta)\) that map between the two tasks. 
Here, \(h_{f}\) converts inputs from the domain of \(\tilde{x}\) into the domain of \(x\) (i.e., \(h_{f}(\tilde{x};\theta)\) is a valid input to the function \(f\) ), while \(h_{g}\) maps the output of \(f(h_{f}(\tilde{x};\theta))\) back to an output of \(g(\tilde{x})\) . The parameters \(\theta\) of the adversarial program are then adjusted to achieve \(h_{g}\left(f\left(h_{f}\left(\tilde{x}\right)\right)\right) = g\left(\tilde{x}\right)\) . In our work, for simplicity, we define \(\tilde{x}\) to be a small image, \(g\) a function that processes small images, \(x\) a large image, and \(f\) a function that processes large images. Our function \(h_{f}\) then just consists of drawing \(\tilde{x}\) in the center of the large image and \(\theta\) in the borders (though see Section 4.5 for other schemes), and \(h_{g}\) is simply a hard-coded mapping between output class labels. However, the idea is more general; \(h_{f}\left(h_{g}\right)\) could be any consistent transformation that converts between the input (output) formats for the two tasks and causes the model to perform the adversarial task. We refer to the class of attacks where a model is repurposed to perform a new task as adversarial reprogramming. We refer to \(\theta\) as an adversarial program. In contrast to most previous adversarial work, the attack does not need to be imperceptible to humans, or even subtle, in order to be considered a success. However, we note that it is still possible to construct reprogramming attacks that are imperceptible. Potential consequences of adversarial reprogramming include theft of computational resources from public-facing services, repurposing of AI-driven assistants into spies or spam bots, and abusing machine learning services for tasks violating the ethical principles of system providers. Risks stemming from this type of attack are discussed in more detail in Section 5.2. It may seem unlikely that an additive offset to a neural network's input would be sufficient on its own to repurpose the network to a new task. However, this flexibility stemming only from changes to a network's inputs is consistent with results on the expressive power of deep neural networks. For instance, Raghu et al. (2016) show that, depending on network hyperparameters, the number of unique output patterns achievable by moving along a one-dimensional trajectory in input space increases exponentially with network depth. Further, Li et al. (2018) show that networks can often be trained to high accuracy even if parameter updates are restricted to occur only in a low-dimensional subspace. An additive offset to a neural network's input is equivalent to a modification of its first-layer biases (for a convolutional network with biases shared across space, this operation effectively introduces new parameters because the additive input is not shared across space), and therefore an adversarial program corresponds to an update in a low-dimensional parameter subspace. In this paper, we present the first instances of adversarial reprogramming. In Section 2, we discuss related work. In Section 3, we present a training procedure for crafting adversarial programs, which cause a neural network to perform a new task. In Section 4, we experimentally demonstrate adversarial programs that target several convolutional neural networks designed to classify ImageNet data. These adversarial programs alter the network function from ImageNet classification to: counting squares in an image, classifying MNIST digits, and classifying CIFAR-10 images.
Next, we examine the susceptibility of trained and untrained networks to adversarial reprogramming. We then demonstrate the possibility of reprogramming adversarial tasks with adversarial data that has no resemblance to the original data, demonstrating that results from transfer learning do not fully explain adversarial reprogramming. Further, we demonstrate the possibility of concealing adversarial programs and data. Finally, we end in Sections 5 and 6 by discussing and summarizing our results. ## 2 BACKGROUND AND RELATED WORK ### 2.1 ADVERSARIAL EXAMPLES One definition of adversarial examples is that they are "inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake" (Goodfellow et al., 2017). They are often formed by starting with a naturally occurring image and using a gradient-based optimizer to search for a nearby image that causes a mistake (Biggio et al., 2013; Szegedy et al., 2013; Carlini & Wagner, 2017). These attacks can be either untargeted (the adversary succeeds when causing any mistake at all) or targeted (the adversary succeeds when causing the model to predict a specific incorrect class). Adversarial attacks have also been proposed for other domains like malware detection (Grosse et al., 2017), generative models (Kos et al., 2017), network policies for reinforcement learning tasks (Huang et al., 2017), and network interpretation (Ghorbani et al., 2017). <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 1: Illustration of adversarial reprogramming. (a) Mapping of ImageNet labels to adversarial task labels (the count of squares in an image). (b) Two examples of images from the adversarial task (left) are embedded at the center of an adversarial program (middle), yielding adversarial images (right). The adversarial program shown repurposes an Inception V3 network to count squares in images. (c) Illustration of inference with adversarial images. The network, when presented with adversarial images, will predict ImageNet labels that map to the adversarial task labels. </center> In these domains, the attack remains either untargeted (generally degrading the performance) or targeted (producing a specific output). We extend this line of work by developing reprogramming methods that aim to produce specific functionality rather than a specific hardcoded output. Several authors have observed that the same modification can be applied to many different inputs in order to form adversarial examples (Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2017). For example, Brown et al. (2017) designed an "adversarial patch" that can switch the prediction of many models to one specific class (e.g. toaster) when it is placed physically in their field of view. We continue this line of work by finding a single adversarial program that can be presented with many input images to cause the model to process each image according to the adversarial program. ### 2.2 PARASITIC COMPUTING AND WEIRD MACHINES Parasitic computing involves forcing a target system to solve a complex computational task it wasn't originally designed to perform, by taking advantage of peculiarities in network communication protocols (Barabasi et al., 2001; Peresini & Kostic, 2013). Weird machines, on the other hand, are a class of computational exploits where carefully crafted inputs can be used to run arbitrary code on a targeted computer (Bratus et al., 2011).
Adversarial reprogramming can be seen as a form of parasitic computing, though without the focus on leveraging the communication protocol itself to perform the computation. Similarly, adversarial reprogramming can be seen as an example of neural networks behaving like weird machines, though adversarial reprogramming functions only within the neural network paradigm – we do not gain access to the host computer. ### 2.3 TRANSFER LEARNING Transfer learning (Raina et al., 2007; Mesnil et al., 2011) and adversarial reprogramming share the goal of repurposing networks to perform a new task. Transfer learning methods use the knowledge obtained from one task as a base to learn how to perform another. Neural networks possess properties that can be useful for many tasks (Yosinski et al., 2014). For example, neural networks trained on images develop features that resemble Gabor filters in early layers, even if they are trained with different datasets or different training objectives such as supervised image classification (Krizhevsky et al., 2012), unsupervised density learning (Lee et al., 2009), or unsupervised learning of sparse representations (Le et al., 2011). Empirical work has demonstrated that it is possible to take a <--- Page Split ---> convolutional neural network trained to perform one task, and simply train a linear SVM classifier to make the network work for other tasks (Razavian et al., 2014; Donahue et al., 2014). However, transfer learning is very different from the adversarial reprogramming task in that it allows model parameters to be changed for the new task. In typical adversarial settings, an attacker is unable to alter the model, and instead must achieve their goals solely through manipulation of the input. Further, one may wish to adversarially reprogram across tasks with very different datasets. This makes the task of adversarial reprogramming more challenging than transfer learning. ## 3 METHODS In this work, we consider an adversary with access to the parameters of a neural network that is performing a specific task. The objective of the adversary is to reprogram the model to perform a new task by crafting an adversarial program to be included within the network input. Here, the network was originally designed to perform ImageNet classification, but the methods discussed here can be directly extended to other settings. Our adversarial program is formulated as an additive contribution to the network input. Note that unlike most adversarial perturbations, the adversarial program is not specific to a single image. The same adversarial program will be applied to all images. We define the adversarial program as: \[P = \tanh \left(W\odot M\right) \quad (1)\] where \(W\in \mathbb{R}^{n\times n\times 3}\) contains the adversarial program parameters to be learned, \(n\) is the ImageNet image width, and \(M\) is a masking matrix that is 0 for image locations that correspond to the adversarial data for the new task, and 1 otherwise. Note that the mask \(M\) is not required - we mask out the central region of the adversarial program purely to improve visualization of the action of the adversarial program. Also, note that we use \(\tanh (\cdot)\) to bound the adversarial perturbation to be in \((-1,1)\) - the same range as the (rescaled) ImageNet images the target networks are trained to classify. Let \(\tilde{x}\in \mathbb{R}^{k\times k\times 3}\) be a sample from the dataset to which we wish to apply the adversarial task, where \(k< n\) .
\(\tilde{X}\in \mathbb{R}^{n\times n\times 3}\) is the equivalent ImageNet-size image with \(\tilde{x}\) placed in the proper area, defined by the mask \(M\) . The corresponding adversarial image is then: \[X_{adv} = h_{f}\left(\tilde{x};W\right) = \tilde{X} +P \quad (2)\] Let \(P(y|X)\) be the probability that an ImageNet classifier gives to ImageNet label \(y\in \{1,\ldots ,1000\}\) given an input image \(X\) . We define a hard-coded mapping function \(h_{g}(y_{adv})\) that maps a label from an adversarial task \(y_{adv}\) to a set of ImageNet labels. For example, if an adversarial task has 10 different classes \((y_{adv}\in \{1,\ldots ,10\})\) , \(h_{g}(\cdot)\) may be defined to assign the first 10 classes of ImageNet, any other 10 classes, or multiple ImageNet classes to the adversarial labels. Our adversarial goal is thus to maximize the probability \(P(h_{g}(y_{adv})|X_{adv})\) . We set up our optimization problem as \[\hat{W} = \underset{W}{\mathrm{argmin}}\left(-\log P(h_{g}(y_{adv})|X_{adv}) + \lambda \|W\|_{F}^{2}\right), \quad (3)\] where \(\lambda\) is the coefficient for a weight norm penalty, to reduce overfitting. We optimize this loss with Adam while exponentially decaying the learning rate. Hyperparameters are given in Appendix A. Note that after the optimization the adversarial program has minimal computation cost for the adversary, as it only requires computing \(X_{adv}\) (Equation 2), and mapping the resulting ImageNet label to the correct class. In other words, during inference the adversary needs only store the program and add it to the data, thus leaving the majority of computation to the target network. One interesting property of adversarial reprogramming is that it must exploit nonlinear behavior of the target model. This is in contrast to traditional adversarial examples, where attack algorithms based on linear approximations of deep neural networks are sufficient to cause a high error rate (Goodfellow et al., 2014). Consider a linear model that receives an input \(\tilde{x}\) and a program \(\theta\) concatenated into a single vector: \(x = [\tilde{x},\theta ]^{\top}\) . Suppose that the weights of the linear model are partitioned into two sets, \(v = [v_{\tilde{x}},v_{\theta }]^{\top}\) . The output of the model is \(v^{\top}x = v_{\tilde{x}}^{\top}\tilde{x} +v_{\theta}^{\top}\theta\) . The adversarial program \(\theta\) adapts the effective biases \(v_{\theta}^{\top}\theta\) but cannot adapt the weights applied to the input \(\tilde{x}\) . The adversarial program \(\theta\) can thus bias the model toward consistently outputting one class or the other but cannot change the way the input is processed. For adversarial reprogramming to work, the model must include nonlinear interactions between \(\tilde{x}\) and \(\theta\) . A nonlinear deep network satisfies this requirement. <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 2: Examples of adversarial programs for MNIST classification. Adversarial programs which cause six ImageNet models to function as MNIST classifiers. Each program is shown being applied to an MNIST digit. </center>
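To make the training procedure concrete, the following is a minimal numpy sketch of Equations 1–3 under stated assumptions: `model_logits` is only a stand-in for a frozen pretrained ImageNet classifier (a fixed random nonlinear map, purely a placeholder), and all names, shapes, and the first-10-labels choice of \(h_g\) are illustrative rather than the experimental code. In practice \(W\) is optimized with Adam on this loss using automatic differentiation; the sketch stops at the loss value.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 299, 28                                # ImageNet width and adversarial-data width (assumed)
lo, hi = (n - k) // 2, (n - k) // 2 + k

# Mask M: 0 over the central k-by-k region reserved for the adversarial data, 1 elsewhere.
M = np.ones((n, n, 3))
M[lo:hi, lo:hi, :] = 0.0

W = 0.01 * rng.standard_normal((n, n, 3))     # program parameters (to be learned)

def embed(x_small, W):
    """h_f: draw the small image at the center of a large canvas and add the
    program P = tanh(W * M) (Equations 1-2)."""
    X = np.zeros((n, n, 3))
    X[lo:hi, lo:hi, :] = x_small
    return X + np.tanh(W * M)

# Stand-in for a frozen pretrained ImageNet classifier; NOT a real network,
# just a fixed random nonlinear map with 1000 outputs.
A = rng.standard_normal((1000, 32)) / np.sqrt(32)
B = rng.standard_normal((32, n * n * 3)) / np.sqrt(n * n * 3)
def model_logits(X):
    return A @ np.tanh(B @ X.ravel())

def loss(W, x_small, y_adv, lam=0.01):
    """Equation 3, with h_g taken to be the identity on the first 10 ImageNet
    labels (adversarial class i is read off ImageNet class i)."""
    z = model_logits(embed(x_small, W))
    logp = z - z.max() - np.log(np.exp(z - z.max()).sum())   # log-softmax
    return -logp[y_adv] + lam * np.sum(W ** 2)

x_small = rng.random((k, k, 3)) * 2 - 1       # placeholder adversarial datum in [-1, 1]
print(loss(W, x_small, y_adv=3))
```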
## 4 RESULTS To demonstrate the feasibility of adversarial reprogramming, we conducted experiments on six architectures trained on ImageNet. In each case, we reprogrammed the network to perform three different adversarial tasks: counting squares, MNIST classification, and CIFAR-10 classification. The weights of all trained models were obtained from TensorFlow-Slim, and top-1 ImageNet precisions are shown in Table Supp. 1. We additionally examined whether adversarial training conferred resistance to adversarial reprogramming, and compared the susceptibility of trained networks to random networks. Further, we investigated the possibility of reprogramming the networks when the adversarial data has no resemblance to the original data. Finally, we demonstrated the possibility of concealing the adversarial program and the adversarial data. ### 4.1 COUNTING SQUARES To illustrate the adversarial reprogramming procedure, we start with a simple adversarial task: counting the number of squares in an image. We generated images \((\tilde{x})\) of size \(36 \times 36 \times 3\) that include \(9 \times 9\) white squares with black frames. Each square could appear at 16 different positions in the image, and the number of squares ranged from 1 to 10. The squares were placed randomly on gridpoints (Figure 1b left). We embedded these images in an adversarial program (Figure 1b middle). The resulting images \((X_{adv})\) are of size \(299 \times 299 \times 3\) with the \(36 \times 36 \times 3\) images of the squares at the center (Figure 1b right). Thus, the adversarial program is simply a frame around the counting task images. We trained one adversarial program per ImageNet model, such that the first 10 ImageNet labels represent the number of squares in each image (Figure 1c). Note that the labels we used from ImageNet have no relation to the labels of the new adversarial task. For example, a 'White Shark' has nothing to do with counting 3 squares in an image, and an 'Ostrich' does not at all resemble 10 squares. We then evaluated the accuracy on the task by sampling 100,000 images and comparing the network prediction to the number of squares in the image. Despite the dissimilarity of ImageNet labels and adversarial labels, and despite the adversarial program being equivalent simply to a first-layer bias, the adversarial program masters this counting task for all networks (Table 1). These results demonstrate the vulnerability of neural networks to reprogramming on this simple task using only additive contributions to the input. ### 4.2 MNIST CLASSIFICATION In this section, we demonstrate adversarial reprogramming on the somewhat more complex task of classifying MNIST digits. We measure both train and test accuracy, so high test accuracy cannot be explained by the adversarial program simply memorizing the training examples. Similar to the counting task, we embedded MNIST digits of size \(28 \times 28 \times 3\) inside a frame representing the adversarial program, we assigned the first 10 ImageNet labels to the MNIST digits, and we trained an adversarial program for each ImageNet model. Figure 2 shows examples of the adversarial program for each network being applied. Our results show that ImageNet networks can be successfully reprogrammed to function as an MNIST classifier by presenting an additive adversarial program. The adversarial program additionally generalized well from the training to the test set, suggesting that the reprogramming does not function purely by memorizing train examples, and is not brittle to small changes in the input.
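For reference, the counting-task data described in Section 4.1 is easy to reproduce. The sketch below is our reconstruction from the description above; the exact 4x4 arrangement of the 16 candidate positions and the black background are assumptions where the text leaves details open.

```python
import numpy as np

def counting_image(num_squares, rng):
    """One 36x36x3 counting-task image: `num_squares` 9x9 white squares with
    black frames, placed without overlap on a 4x4 grid of candidate positions
    (grid layout and background color are our assumptions)."""
    img = np.zeros((36, 36, 3))                  # black background (assumed)
    for c in rng.choice(16, size=num_squares, replace=False):
        r, s = 9 * (c // 4), 9 * (c % 4)
        img[r + 1:r + 8, s + 1:s + 8, :] = 1.0   # white interior, 1-pixel black frame
    return img

rng = np.random.default_rng(0)
label = int(rng.integers(1, 11))                 # 1..10 squares
x_tilde = counting_image(label, rng)
```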
<table><tr><td rowspan="3">Model</td><td colspan="4">Pretrained on ImageNet</td><td colspan="2">Untrained</td></tr><tr><td colspan="2">Counting</td><td colspan="2">MNIST</td><td colspan="2">MNIST</td></tr><tr><td>train</td><td>test</td><td>train</td><td>test</td><td>test</td><td></td></tr><tr><td>Incep. V3</td><td>0.9993</td><td>0.9781</td><td>0.9753</td><td>0.7311</td><td>0.6911</td><td>0.4539</td></tr><tr><td>Incep. V4</td><td>0.9999</td><td>0.9638</td><td>0.9646</td><td>0.6948</td><td>0.6683</td><td>0.1861</td></tr><tr><td>Incep. Res. V2</td><td>0.9994</td><td>0.9773</td><td>0.9744</td><td>0.6985</td><td>0.6719</td><td>0.1135</td></tr><tr><td>Res. V2 152</td><td>0.9763</td><td>0.9478</td><td>0.9534</td><td>0.6410</td><td>0.6210</td><td>0.1032</td></tr><tr><td>Res. V2 101</td><td>0.9843</td><td>0.9650</td><td>0.9664</td><td>0.6435</td><td>0.6301</td><td>0.1756</td></tr><tr><td>Res. V2 50</td><td>0.9966</td><td>0.9506</td><td>0.9496</td><td>0.6</td><td>0.5858</td><td>0.9325</td></tr><tr><td>Incep. V3 adv.</td><td></td><td>0.9761</td><td>0.9752</td><td></td><td></td><td></td></tr></table> ### 4.3 CIFAR-10 CLASSIFICATION Here we implement a more challenging adversarial task. That is, crafting adversarial programs to repurpose ImageNet models to instead classify CIFAR- 10 images. Some examples of the resulting adversarial images are given in Figure Supp. 1. Our results show that our adversarial program was able to increase the accuracy on CIFAR- 10 from chance to a moderate accuracy (Table 1). This accuracy is near what is expected from typical fully connected networks (Lin et al., 2015) but with minimal computation cost from the adversary side at inference time. One observation is that although adversarial programs trained to classify CIFAR- 10 are different from those that classify MNIST or perform the counting task, the programs show some visual similarities, e.g. ResNet architecture adversarial programs seem to possess some low spatial frequency texture (Figure Supp. 2a). ### 4.4 INVESTIGATION OF THE EFFECT OF THE TRAINED MODEL DETAILS AND ORIGINAL DATA One important question is what is the degree to which susceptibility to adversarial reprogramming depends on the details of the model being attacked. To address this question, we examined attack success on an Inception V3 model that was trained on ImageNet data using adversarial training (Tramèr et al., 2017). Adversarial training augments data with adversarial examples during training, and is one of the most common methods for guarding against adversarial examples. As in Section 4.2, we adversarially reprogrammed this network to classify MNIST digits. Our results (Table 1) indicate that the model trained with adversarial training is still vulnerable to reprogramming, with only a slight reduction in attack success. This finding shows that standard approaches to adversarial defense has little efficacy against adversarial reprogramming. This is likely explained by the differences between adversarial reprogramming and standard adversarial attacks. First, that the goal is to repurpose the network rather than cause it to make a specific mistake, second that the magnitude of adversarial programs can be large, while traditional adversarial attacks are of a small perturbations magnitude, and third adversarial defense methods may be specific to original data and may not generalize to data from the adversarial task. 
To further explore dependence on the details of the model, we performed adversarial reprogramming attacks on models with random weights. We used the same experimental setup and MNIST reprogramming task as in Section 4.2; we simply used the ImageNet models with randomly initialized rather than trained weights. The MNIST classification task was easy for networks pretrained on ImageNet (Table 1). However, for random networks, training was very challenging and generally converged to a much lower accuracy (only ResNet V2 50 could train to a similar accuracy as trained ImageNet models; see Table 1). Moreover, the appearance of the adversarial programs was qualitatively distinct from the adversarial programs obtained with networks pretrained on ImageNet (see Figure Supp. 2b). This finding demonstrates that the original task the neural networks perform is important for adversarial reprogramming. This result may seem surprising, as random networks have rich structure that adversarial programs might be expected to take advantage of. For example, theoretical results have shown that wide neural networks become identical to Gaussian processes, where training specific weights in intermediate layers is not necessary to perform tasks (Matthews et al., 2018; Lee et al., 2017). <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 3: Adversarial programs may be limited in size or concealed. In all panels, an Inception V3 model pretrained on ImageNet is reprogrammed to classify MNIST digits. Example images (a) with adversarial programs of different sizes, and (b) with adversarial programs of different perturbation scales. In (c), the adversarial data + program (right) are hidden inside a normal image from ImageNet (left), yielding an adversarial image (center) that is able to reprogram the network to function as an MNIST classifier. The pixels of the adversarial data are shuffled to conceal its structure. </center> Other work has demonstrated that it is possible to use random networks as generative models for images (Ustyuzhaninov et al., 2016; He et al., 2016), further supporting their potential richness. One explanation may be that randomly initialized networks perform poorly for simple reasons, such as poor scaling of network weights at initialization, whereas the trained weights are better conditioned. One explanation of adversarial reprogramming that is motivated by transfer learning (Yosinski et al., 2014) is that the network may be relying on some similarities between the original and adversarial data. To test this hypothesis, we randomized the pixels of MNIST digits such that any resemblance between the adversarial data (MNIST) and images in the original data (ImageNet) is removed (see Figure Supp. 3). We then attempted to reprogram pretrained ImageNet networks to classify the shuffled MNIST digits. Despite shuffled MNIST not sharing any spatial structure with natural images, we managed to reprogram the ImageNet networks for this task (Table Supp. 7) with almost the same accuracy as on standard MNIST (in some cases shuffled MNIST even achieved higher accuracy). We also investigated the possibility of reprogramming for shuffled CIFAR-10 images. Our results show that it is possible to reprogram neural networks to classify shuffled CIFAR-10 images (Table Supp. 7). The accuracy for shuffled CIFAR-10 decreased, as the convolutional structure of the network is not useful for classifying the shuffled images. However, the accuracy was comparable to that expected from fully connected networks (Novak et al., 2019).
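For concreteness, the shuffling used in these experiments is a single fixed permutation applied identically to every image; a minimal sketch (array shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
ix = rng.permutation(28 * 28)        # one fixed permutation, shared by every image

def shuffle_pixels(batch):
    """Apply the same fixed pixel permutation to a batch of 28x28 digits."""
    flat = batch.reshape(len(batch), -1)
    return flat[:, ix].reshape(batch.shape)

shuffled = shuffle_pixels(rng.random((5, 28, 28)))   # example usage on a toy batch
```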
These results thus suggest that transferring knowledge between the original and adversarial data does not completely explain the susceptibility to adversarial reprogramming. Even more interestingly, these results suggest the possibility of reprogramming across tasks with unrelated datasets and across domains. ### 4.5 CONCEALING ADVERSARIAL PROGRAMS In our previous experiments, there were no constraints on the size (number of program pixels) or scale (magnitude of perturbations) of the adversarial program. Here, we demonstrate the possibility of limiting the visibility of the adversarial perturbations by limiting the program size, scale, or even concealing the whole adversarial task. In these experiments, we used an Inception V3 model pretrained to classify ImageNet. <--- Page Split ---> In our first experiment, we adversarially reprogrammed the network to classify MNIST digits while limiting the size of the program (Figure 3a). Our results show that adversarial reprogramming is still successful, yet with lower accuracy, even if we use a very small adversarial program. In our next experiment, we made the adversarial program nearly imperceptible by limiting the \(L_{\infty}\) norm of the adversarial perturbation to a small percentage of the pixel values. Our results show that adversarial reprogramming is still successful (Figure 3b) even with nearly imperceptible programs. Further, we tested the possibility of concealing the whole adversarial task by hiding both the adversarial data and program within a normal image from ImageNet. To do this, we shuffled the pixels of the adversarial data (here MNIST), so that the adversarial data structure is hidden. Then, we limited the scale of both the adversarial program and data to a small fraction of the possible pixel values. We added the resulting image to a random image from ImageNet. Formally, we extended our reprogramming method as follows: \[\begin{array}{r l} & {P_{X} = \alpha \tanh \left(\mathrm{shuffle}_{ix}(\tilde{X}) + (W\odot \mathrm{shuffle}_{ix}(M))\right)}\\ & {X_{adv} = \mathrm{clip}\left(X_{ImageNet} + P_{X},\left[-1,1\right]\right),} \end{array} \quad (4)\] where \(\tilde{X}\) , \(M\) and \(W\) are as described in Section 3, \(P_{X}\) is the adversarial data combined with the adversarial program, \(ix\) is the shuffling sequence (the same for \(M\) and for all \(\tilde{X}\) ), \(\alpha\) is a scalar used to limit the perturbation scale, and \(X_{ImageNet}\) is an image chosen randomly from ImageNet, which is the same for all MNIST examples. We then optimized the adversarial program for the network to classify MNIST digits (see Equation 3). The resulting adversarial images are very similar to normal images from ImageNet (see Figure 3c), yet the network is successfully reprogrammed to classify MNIST digits, though with lower accuracy (see Figure 3c). This result demonstrates the possibility of hiding the adversarial task. Here, we used a simple shuffling technique and picked an image from ImageNet to hide the adversarial task, but one could go further and use more complex schemes for hiding the adversarial task and optimize the choice of the image from ImageNet, which may make adversarial reprogramming more effective and harder to detect.
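Equation 4 translates directly into code. In the sketch below the scale `alpha`, the host image, and all shapes are placeholders rather than the values used in our experiments:

```python
import numpy as np

n = 299
rng = np.random.default_rng(0)
ix = rng.permutation(n * n)          # fixed shuffling sequence, shared by M and all images

def shuf(A):
    """Permute the pixels of an (n, n, 3) array with the fixed sequence ix."""
    return A.reshape(n * n, 3)[ix].reshape(n, n, 3)

def conceal(X_tilde, W, M, X_imagenet, alpha=0.1):
    """Equation 4: bound the shuffled data-plus-program with alpha * tanh and
    hide it inside a host ImageNet image."""
    P_X = alpha * np.tanh(shuf(X_tilde) + W * shuf(M))
    return np.clip(X_imagenet + P_X, -1.0, 1.0)

# Smoke test with placeholder arrays (an all-ones mask and a black host image).
X_adv = conceal(np.zeros((n, n, 3)), 0.01 * rng.standard_normal((n, n, 3)),
                np.ones((n, n, 3)), np.zeros((n, n, 3)))
```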
## 5 DISCUSSION ### 5.1 FLEXIBILITY OF TRAINED NEURAL NETWORKS We found that trained neural networks were more susceptible to adversarial reprogramming than random networks. Further, we found that reprogramming is still successful even when the data structure is very different from the structure of the data in the original task. This demonstrates considerable flexibility in repurposing trained weights for a new task. Our results suggest that dynamic reuse of neural circuits should be practical in modern artificial neural networks. This holds the promise of enabling machine learning systems which are easier to repurpose, more flexible, and more efficient due to shared compute. Indeed, recent work in machine learning has focused on building large dynamically connected networks with reusable components (Shazeer et al., 2017). It is unclear whether the reduced performance when targeting random networks, and when reprogramming to perform CIFAR-10 classification, was due to limitations in the expressivity of the adversarial perturbation, or due to the optimization task in Equation 3 being more difficult in these situations. Disentangling limitations in expressivity and trainability will be an interesting future direction. ### 5.2 ADVERSARIAL GOALS BEYOND THE IMAGE DOMAIN We demonstrated adversarial reprogramming on classification tasks in the image domain. Whether similar attacks might succeed for audio, video, text, or other domains and tasks is an interesting area for future research. Our finding that trained networks can be reprogrammed to classify shuffled images, which do not retain any of the original spatial structure, suggests that reprogramming across domains is likely possible. Adversarial reprogramming of recurrent neural networks (RNNs) would be particularly interesting, since RNNs (especially those with attention or memory) can be Turing complete (Neelakantan et al., 2015). An attacker would thus only need to find inputs which induced the RNN to perform a number of simple operations, such as increment counter, decrement counter, and change input attention <--- Page Split ---> location if counter is zero (Minsky, 1961). If adversarial programs can be found for these simple operations, then they could be composed to reprogram the RNN to perform any computational task. A variety of nefarious ends may be achievable if machine learning systems can be reprogrammed by a specially crafted input. The most direct of these is the theft of computational resources. For instance, an attacker might develop an adversarial program which causes the computer vision classifier in a cloud-hosted photos service to solve image captchas and enable the creation of spam accounts. If RNNs can be flexibly reprogrammed as mentioned above, this computational theft might extend to more arbitrary tasks. A major danger beyond computational theft is that an adversary may repurpose computational resources to perform a task which violates the code of ethics of system providers. This is particularly important as ML service providers are invested in protecting the ethical principles and guidelines that govern the use of their services. ## 6 CONCLUSION In this work, we proposed a new class of adversarial attacks that aim to reprogram neural networks to perform novel adversarial tasks. Our results demonstrate for the first time the possibility of such attacks. They are also illustrative of both surprising flexibility and surprising vulnerability in deep neural networks. Future investigation should address the properties and limitations of adversarial reprogramming, and possible ways to mitigate or defend against it. ## Supplemental material ## A SUPPLEMENTARY TABLES
Table Supp. 1: Top-1 precision of models on ImageNet data <table><tr><td>Model</td><td>Accuracy</td></tr><tr><td>Inception V3</td><td>0.78</td></tr><tr><td>Inception V4</td><td>0.802</td></tr><tr><td>Inception Resnet V2</td><td>0.804</td></tr><tr><td>Resnet V2 152</td><td>0.778</td></tr><tr><td>Resnet V2 101</td><td>0.77</td></tr><tr><td>Resnet V2 50</td><td>0.756</td></tr><tr><td>Inception V3 adv.</td><td>0.776</td></tr></table> Table Supp. 2: Hyper-parameters for adversarial program training for the square counting adversarial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receives 'batch' data samples). We then performed synchronized updates of the adversarial program parameters. <table><tr><td>ImageNet Model</td><td>λ</td><td>batch</td><td>GPUs</td><td>learn rate</td><td>decay</td><td>epochs/decay</td><td>steps</td></tr><tr><td>Inception V3</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Inception V4</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Inception Resnet V2</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Resnet V2 152</td><td>0.01</td><td>20</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Resnet V2 101</td><td>0.01</td><td>20</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 50</td><td>0.01</td><td>20</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr></table> <--- Page Split ---> Table Supp. 3: Hyper-parameters for adversarial program training for the MNIST classification adversarial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receives 'batch' data samples). We then performed synchronized updates of the adversarial program parameters. (The model Inception V3 adv. is pretrained on ImageNet data using the adversarial training method.) Table Supp. 4: Hyper-parameters for adversarial program training for the CIFAR-10 classification adversarial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receives 'batch' data samples). We then performed synchronized updates of the adversarial program parameters.
<table><tr><td>ImageNet Model</td><td>λ</td><td>batch</td><td>GPUs</td><td>learn rate</td><td>decay</td><td>epochs/decay</td><td>steps</td></tr><tr><td>Inception V3</td><td>0.05</td><td>100</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Inception V4</td><td>0.05</td><td>100</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Inception Resnet V2</td><td>0.05</td><td>50</td><td>8</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 152</td><td>0.05</td><td>50</td><td>8</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 101</td><td>0.05</td><td>50</td><td>8</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 50</td><td>0.05</td><td>100</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Inception V3 adv.</td><td>0.01</td><td>50</td><td>6</td><td>0.05</td><td>0.98</td><td>4</td><td>100000</td></tr></table> Table Supp. 5: Hyper-parameters for adversarial program training for the MNIST classification adversarial task. For all models, we used the Adam optimizer with its default parameters while decaying the learning rate exponentially during training. We distributed training data across a number of GPUs (each GPU receives 'batch' data samples). We then performed synchronized updates of the adversarial program parameters. <table><tr><td>ImageNet Model</td><td>λ</td><td>batch</td><td>GPUs</td><td>learn rate</td><td>decay</td><td>epochs/decay</td><td>steps</td></tr><tr><td>Inception V3</td><td>0.01</td><td>50</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr><tr><td>Inception V4</td><td>0.01</td><td>50</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr><tr><td>Inception Resnet V2</td><td>0.01</td><td>50</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr><tr><td>Resnet V2 152</td><td>0.01</td><td>30</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr><tr><td>Resnet V2 101</td><td>0.01</td><td>30</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr><tr><td>Resnet V2 50</td><td>0.01</td><td>30</td><td>6</td><td>0.05</td><td>0.99</td><td>4</td><td>300000</td></tr></table> <table><tr><td>Random Model</td><td>λ</td><td>batch</td><td>GPUs</td><td>learn rate</td><td>decay</td><td>epochs/decay</td><td>steps</td></tr><tr><td>Inception V3</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Inception V4</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>100000</td></tr><tr><td>Inception Resnet V2</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 152</td><td>0.01</td><td>20</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 101</td><td>0.01</td><td>20</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr><tr><td>Resnet V2 50</td><td>0.01</td><td>50</td><td>4</td><td>0.05</td><td>0.96</td><td>2</td><td>60000</td></tr></table> Table Supp. 6: Hyper-parameters for adversarial program training for the Shuffled CIFAR-10 classification task. For all models, we used 1 GPU and a batch size of 100. We decayed the learning rate with a cosine schedule over 200 epochs. We performed hyperparameter tuning and picked the best parameters based on a held-out validation set. The best parameters are shown below.
<table><tr><td>ImageNet Model</td><td>λ</td><td>optimizer</td><td>learn rate</td></tr><tr><td>Inception V3</td><td>0</td><td>Momentum</td><td>0.1</td></tr><tr><td>Inception V4</td><td>0.1</td><td>Momentum</td><td>0.001</td></tr><tr><td>Inception Resnet V2</td><td>0</td><td>Adam</td><td>0.001</td></tr><tr><td>Resnet V2 152</td><td>0.1</td><td>Momentum</td><td>0.1</td></tr><tr><td>Resnet V2 101</td><td>0.01</td><td>Momentum</td><td>0.1</td></tr><tr><td>Resnet V2 50</td><td>0.01</td><td>Momentum</td><td>0.1</td></tr></table> <--- Page Split ---> Table Supp. 7: Neural networks adversarially reprogrammed to perform classification tasks with shuffled pixels. The table gives the accuracy of reprogrammed networks performing classification on Shuffled MNIST and Shuffled CIFAR-10. Shuffling is performed across pixels. The optimization parameters for Shuffled MNIST are the same as in Table Supp. 3. The optimization parameters for Shuffled CIFAR-10 are shown in Table Supp. 6. <table><tr><td rowspan="2">Model</td><td colspan="2">Pretrained on ImageNet</td></tr><tr><td>Shuffled MNIST</td><td>Shuffled CIFAR-10</td></tr><tr><td>Incep. V3</td><td>0.9709</td><td>0.5578</td></tr><tr><td>Incep. V4</td><td>0.9715</td><td>0.5618</td></tr><tr><td>Incep. Res. V2</td><td>0.9683</td><td>0.5507</td></tr><tr><td>Res. V2 152</td><td>0.9691</td><td>0.5624</td></tr><tr><td>Res. V2 101</td><td>0.9678</td><td>0.5612</td></tr><tr><td>Res. V2 50</td><td>0.9717</td><td>0.5614</td></tr></table> ![](images/13_0.jpg) <center>Figure Supp. 1: Examples of adversarial images for CIFAR-10 classification. An adversarial program repurposing an Inception V3 model to classify CIFAR-10 images, applied to four CIFAR-10 images. </center> <--- Page Split ---> ![](images/14_0.jpg) <center>Figure Supp. 2: Adversarial programs exhibit qualitative similarities and differences across both network and task. (a) Top: adversarial programs targeted to repurpose networks pre-trained on ImageNet to count squares in images. Middle: adversarial programs targeted to repurpose networks pre-trained on ImageNet to function as MNIST classifiers. Bottom: adversarial programs to cause the same networks to function as CIFAR-10 classifiers. (b) Adversarial programs targeted to repurpose networks with randomly initialized parameters to function as MNIST classifiers. </center> ![](images/14_1.jpg) <center>Figure Supp. 3: Neural networks are susceptible to adversarial reprogramming even when the adversarial data and the original task data are unrelated. The pixels in MNIST digits are shuffled so that the resulting image bears no resemblance to any natural image. The shuffled image is then combined with the adversarial program to create a reprogramming image. This image successfully reprograms an Inception V3 model to classify the shuffled digits, despite the adversarial data (i.e., shuffled MNIST digits) being unrelated to the original data (i.e., ImageNet). </center> <--- Page Split --->
accept
Accept (Poster)
6
ICLR_2019_paper_0366
iclr
2019
# LEARNING GRID CELLS AS VECTOR REPRESENTATION OF SELF-POSITION COUPLED WITH MATRIX REPRESENTATION OF SELF-MOTION Ruiqi Gao \(^{1}\) , Jianwen Xie \(^{2}\) , Song-Chun Zhu \(^{1}\) & Ying Nian Wu \(^{1}\) \(^{1}\) University of California, Los Angeles, USA \(^{2}\) Hikvision Research Institute, Santa Clara, USA {ruiqigao, jianwen}@ucla.edu, {sczhu, ywu}@stat.ucla.edu ## ABSTRACT This paper proposes a representational model for grid cells. In this model, the 2D self-position of the agent is represented by a high-dimensional vector, and the 2D self-motion or displacement of the agent is represented by a matrix that transforms the vector. Each component of the vector is a unit or a cell. The model consists of the following three sub-models. (1) Vector-matrix multiplication. The movement from the current position to the next position is modeled by matrix-vector multiplication, i.e., the vector of the next position is obtained by multiplying the matrix of the motion to the vector of the current position. (2) Magnified local isometry. The angle between two nearby vectors equals the Euclidean distance between the two corresponding positions multiplied by a magnifying factor. (3) Global adjacency kernel. The inner product between two vectors measures the adjacency between the two corresponding positions, which is defined by a kernel function of the Euclidean distance between the two positions. Our representational model has explicit algebra and geometry. It can learn hexagon patterns of grid cells, and it is capable of error correction, path integral and path planning. ## 1 INTRODUCTION Imagine you are walking in your living room in the dark at night without any visual cues. Purely based on your self-motion, you know where you are while you are walking simply by integrating your self-motion. This is called path integral (Hafting et al. (2005); Fiete et al. (2008); McNaughton et al. (2006)). You can also plan your path to the light switch or to the door. This is called path planning (Fiete et al. (2008); Erdem & Hasselmo (2012); Bush et al. (2015)). You need to thank your grid cells for performing such navigation tasks. ![](images/0_0.jpg) <center>Figure 1: Place cells and grid cells. (a) The rat is moving within a square region. (b) The activity of a neuron is recorded. (c) When the rat moves around (the curve is the trajectory), each place cell fires at a particular location, but each grid cell fires at multiple locations that form a hexagon grid. (d) The place cells and grid cells exist in the brains of both rat and human. (Source of pictures: internet) </center> Figure 1(a) shows Dr. May-Britt Moser, who, together with Dr. Edvard Moser, won the 2014 Nobel Prize in Physiology or Medicine for their discovery of the grid cells (Hafting et al. (2005); Fyhn et al. (2008); Yartsev et al. (2011); Killian et al. (2012); Jacobs et al. (2013); Doeller et al. (2010)) <--- Page Split ---> in 2005. Their thesis advisor, Dr. John O'Keefe, shared the prize for his discovery of the place cells (O'Keefe (1979)). Both the place cells and grid cells are used for navigation. The discoveries of these cells were made by recording the activities of the neurons of the rat when it moves within a square region. See Figure 1(b). Some neurons in the hippocampus area are place cells. Each place cell fires when the rat moves to a particular location, and different place cells fire at different locations. The whole collection of place cells covers the whole square region.
The discovery of grid cells was much more surprising and unexpected. The grid cells exist in the entorhinal cortex. Each grid cell fires at multiple locations, and these locations form a regular hexagon grid. See Figure 1(c). Grid cells have been discovered across mammalian species, including humans. See Figure 1(d). In this paper, we propose a representational model to explain the hexagon patterns of the grid cells, and to explain how the grid cells perform path integral and path planning. We shall show that the grid cells are capable of error correction, which provides a justification for the grid cells. ![](images/1_0.jpg) <center>Figure 2: Grid cells form a high-dimensional vector representation of 2D self-position. Three sub-models: (1) Local motion is modeled by vector-matrix multiplication. (2) Angle between two nearby vectors magnifies the Euclidean distance. (3) Inner product between any two vectors measures the adjacency, which is a kernel function of the Euclidean distance. </center> Figure 2 illustrates our model. The 2D self-position of the agent is represented by a high-dimensional vector, and the 2D self-motion or displacement of the agent is represented by a matrix that acts on the vector. Each component of the vector is a unit or a cell. In Figure 2, \(x\) denotes the 2D self-position, and \(v(x)\) is the high-dimensional vector representation of \(x\) . \(\Delta x\) is the self-motion or one-step displacement. \(M(\Delta x)\) is the matrix representation of \(\Delta x\) . The model consists of the following three sub-models. (1) Vector-matrix multiplication. The movement from the current position \(x\) to the next position \(x + \Delta x\) is modeled by matrix-vector multiplication, i.e., the vector of the next position \(v(x + \Delta x)\) is obtained by multiplying the matrix of the motion, \(M(\Delta x)\) , to the vector of the current position \(v(x)\) . (2) Magnified local isometry. The angle between two nearby vectors equals the Euclidean distance \(|\Delta x|\) between the two corresponding positions multiplied by a magnifying factor. (3) Global adjacency kernel. The inner product between two vectors \(\langle v(x), v(y) \rangle\) measures the adjacency between the two corresponding positions, which is defined by a kernel function \(f\) of the Euclidean distance \(|x - y|\) between the two positions \(x\) and \(y\) . One additional feature is that the whole vector \(v(x)\) is partitioned into multiple sub-vectors, and each sub-vector is driven by an associated sub-matrix. The whole system is like a multi-arm clock, with each arm rotating at a magnified speed and spanning a 2D sub-manifold on a high-dimensional sphere. Our experiments show that sub-models (1) and (2) are sufficient for the emergence of the hexagon grid patterns of the grid cells. Sub-model (2) makes the vector representation robust to noise or errors due to the magnification of the distance. Sub-model (3) enables unique decoding of the position from the vector, because \(f\) is unimodal and peaked at 0, and it may serve as the link between the grid cells and place cells. Together with sub-model (1), sub-model (3) enables path integral and path planning, because the adjacency \(\langle v(x), v(y) \rangle\) informs the Euclidean distance \(|x - y|\) . All three sub-models can be implemented by one-layer neural networks. ## 2 CONTRIBUTIONS AND RELATED WORK The following are the contributions of our work.
(1) We propose a representational model for grid cells, where the self-position is represented by a vector and the self-motion is represented by a matrix that acts on the vector. (2) We show that our model can learn hexagon grid patterns. (3) We show that our model is capable of path integral, path planning, and error correction. <--- Page Split ---> Many mathematical and computational models (Burak & Fiete (2009); Sreenivasan & Fiete (2011); Blair et al. (2007); de Almeida et al. (2009)) have been proposed to explain the formation and function of grid cells. Compared to previous computational models of grid cells, our model only makes very generic assumptions about the algebra and geometry of the representational scheme, without assuming Fourier plane waves or clock arithmetics. Recently, deep learning models (Banino et al. (2018); Cueva & Wei (2018)) have been proposed to learn grid-like units for navigation. Our work was inspired by them. Compared with these models, our model has explicit algebra and geometry. In terms of algebra, our model has an explicit matrix representation of self-motion, and the change of self-position is modeled by vector-matrix multiplication. In terms of geometry, our model assumes that the vector rotates while the agent moves, and our model assumes magnified local isometry and a global adjacency kernel based on the angles between the vectors. Expressing the adjacency kernel as the inner product between the vectors is related to the kernel trick (Cortes & Vapnik (1995)), random Fourier basis (Ng et al. (2002)), and spectral clustering (Ng et al. (2002)). ## 3 REPRESENTATIONAL MODEL OF GRID CELLS Consider an agent navigating within a domain \(D = [0,1]\times [0,1]\) (actually the shape does not matter, and it can be \(\mathbb{R}^{2}\) ). We discretize \(D\) into an \(N\times N\) lattice; \(N = 40\) in our experiments. Let \(x = (x_{1},x_{2})\in D\) be the self-position of the agent. \(x\) is 2D. Suppose the agent wants to represent its self-position by a \(d\) -dimensional hidden vector \(v(x)\) . We introduce the following three sub-models. ### 3.1 SUB-MODEL 1 ABOUT MOTION ALGEBRA: VECTOR-MATRIX MULTIPLICATION Suppose at a position \(x\) , the self-motion or one-step displacement is \(\Delta x\) , so that the agent moves to \(x + \Delta x\) after one step. We assume that \[v(x + \Delta x) = M(\Delta x)v(x), \quad (1)\] where \(M(\Delta x)\) is a \(d\times d\) matrix that depends on \(\Delta x\) . While \(v(x)\) is the vector representation of the self-position \(x\) , \(M(\Delta x)\) is the matrix representation of the self-motion \(\Delta x\) . We can illustrate the motion model by the following diagram: \[\mathrm{Motion:}\quad \begin{array}{ccc} x_{t} & \xrightarrow{+\Delta x} & x_{t + 1}\\ \downarrow & & \downarrow \\ v(x_{t}) & \xrightarrow{M(\Delta x)\times} & v(x_{t + 1}) \end{array} \quad (2)\] See Figure 2(1). Both \(v(x)\) and \(M(\Delta x)\) are to be learned. We can discretize \(\Delta x\) , and learn a motion matrix \(M\) for each \(\Delta x\) . We can also learn a parametric model for \(M\) .
To this end, we can further parametrize \(M = I + \bar{M} (\Delta x)\) such that each element of \(\bar{M} (\Delta x)\) is a quadratic (or polynomial) function of \(\Delta x = (\Delta x_{1},\Delta x_{2})\) : \[\bar{M}_{ij}(\Delta x) = \beta_{ij}^{(1)}\Delta x_{1} + \beta_{ij}^{(2)}\Delta x_{2} + \beta_{ij}^{(11)}\Delta x_{1}^{2} + \beta_{ij}^{(22)}\Delta x_{2}^{2} + \beta_{ij}^{(12)}\Delta x_{1}\Delta x_{2}, \quad (3)\] where the coefficients \(\beta\) are to be learned. The above may be considered a second-order Taylor expansion, which is expected to be accurate for small \(\Delta x\in \Delta\) , where \(\Delta\) is the allowed collection of one-step displacements, e.g., \(\pm 3\) grid points in each direction on the \(40\times 40\) grid. The motion model can be considered a linear recurrent neural network (RNN). However, if we are to interpret \(M\) as the weight matrix, then the weight matrix is dynamic because it depends on the motion \(\Delta x\) . One may implement it by discretizing \(\Delta x\) , so that we have a finite set of \(\Delta x\) , and thus a finite set of \(M(\Delta x)\) . Then at each time step, the RNN switches between the finite set of motion matrices. This is like the gearing operation of a multi-speed bicycle. ### 3.2 DISENTANGLED BLOCKS OR MODULES For the sake of estimation accuracy and computational efficiency, we further assume that \(M(\Delta x)\) is block diagonal, i.e., we can divide \(v\) into \(K\) blocks of sub-vectors, \(v = (v^{(k)},k = 1,\dots,K)\) , and <--- Page Split ---> \(v^{(k)}(x + \Delta x) = M^{(k)}(\Delta x)v^{(k)}(x)\) . That is, the vector \(v\) consists of sub-vectors, each of which rotates in its own subspace, so that the dynamics of the sub-vectors are disentangled. The assumption of disentangled blocks in our model is related to the modular organization of grid cells in neuroscience (Stensola et al. (2012)), where a module refers to a region of grid cells where all cells share similar grid scales and orientations.
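To make sub-model (1), the parametrization of Equation 3, and the block structure concrete, here is a minimal numpy sketch. The coefficient tensor `beta` would be learned in our model; here it is random, and all sizes are illustrative only.

```python
import numpy as np

def motion_matrix(beta, dx):
    """M(dx) = I + Mbar(dx), with each entry of Mbar quadratic in dx (Equation 3).
    beta has shape (d, d, 5), one coefficient per monomial; it would be learned."""
    dx1, dx2 = dx
    feats = np.array([dx1, dx2, dx1 ** 2, dx2 ** 2, dx1 * dx2])
    return np.eye(beta.shape[0]) + np.tensordot(beta, feats, axes=([2], [0]))

# Disentangled blocks: the full v is K sub-vectors, each driven by its own matrix.
rng = np.random.default_rng(0)
d, K = 6, 4                                          # illustrative sizes
betas = [0.1 * rng.standard_normal((d, d, 5)) for _ in range(K)]
v = [rng.standard_normal(d) for _ in range(K)]       # v(x), block by block
dx = (0.02, -0.01)
v_next = [motion_matrix(b, dx) @ vk for b, vk in zip(betas, v)]   # v(x + dx)
```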
### 3.3 SUB-MODEL 2 ABOUT LOCAL GEOMETRY: MAGNIFIED LOCAL ISOMETRY The above motion algebra alone is not sufficient for learning the vector and matrix representations, because a trivial solution is that all the \(v(x)\) are the same, and the matrix \(M(\Delta x)\) is always the identity. We need to properly displace the vectors \(v(x)\) . So we shall model \(\langle v(x),v(y)\rangle\) both locally and globally. Let \(d\) be the dimensionality of \(v^{(k)}\) , i.e., the number of grid cells within the \(k\) -th block. For the local geometry, we assume that for each block, \[\langle v^{(k)}(x),v^{(k)}(x + \Delta x)\rangle = d(1 - \alpha_{k}|\Delta x|^{2}), \quad (4)\] for all \(x\) and \(\Delta x\) such that \(\alpha_{k}|\Delta x|^{2}\leq c\) , i.e., \(\Delta x\in \Delta (\alpha_{k}) = \{\Delta x:\alpha_{k}|\Delta x|^{2}\leq c\}\) . In our experiments, we take \(c = 1.5\) . \(\alpha_{k}\) can be either designed or learned. Based on sub-model (4), \(\| v^{(k)}(x)\|^{2} = d\) for every \(x\) . The inner product on the left hand side is \(d\cos (\Delta \theta)\) , where \(\Delta \theta\) is the angle between \(v^{(k)}(x)\) and \(v^{(k)}(x + \Delta x)\) . \(1 - \alpha_{k}|\Delta x|^{2}\) on the right hand side may be considered a second-order Taylor expansion of a function \(f(r)\) such that \(f(0) = 1\) , \(f^{\prime}(0) = 0\) , i.e., 0 is the maximum, and \(f^{\prime \prime}(0) = - 2\alpha_{k}\) . It is also an approximation to \(\cos (\sqrt{2\alpha_{k}} |\Delta x|)\) . Let \(\omega_{k} = \sqrt{2\alpha_{k}}\) ; then we have \[\mathrm{Magnified~local~isometry}:\Delta \theta = \omega_{k}|\Delta x|, \quad (5)\] i.e., the angle between \(v(x)\) and \(v(x + \Delta x)\) magnifies the distance \(|\Delta x|\) by a factor of \(\omega_{k}\) uniformly for all \(x\) . See Figure 2(2). The factor \(\omega_{k}\) defines the metric of block \(k\) . ### 3.4 ROTATION AND PROJECTION Since \(\| v^{(k)}(x)\|\) is a constant for all \(x\) , \(M^{(k)}(\Delta x)\) is an orthogonal matrix, and the self-motion is represented by a rotation in the \(d\) -dimensional space. \(\omega_{k}|\Delta x|\) is the angular speed of rotation of the sub-vector \(k\) . \((v^{(k)}(x),\forall x)\) forms a 2D sub-manifold on the sphere in the \(d\) -dimensional space. \(v^{(k)}\) is like an arm of a clock except that the clock is not a 1D circle, but a 2D sub-manifold. This 2D sub-manifold becomes a local codebook for the 2D positions within a local neighborhood. For a vector \(v^{(k)}\) , we can decode its position by projecting it onto the 2D sub-manifold, to get \(\hat{x} = \arg \max_{x}\langle v^{(k)},v^{(k)}(x)\rangle\) , where the maximization is within the local neighborhood. ### 3.5 ERROR CORRECTION The neurons are intrinsically noisy and error-prone. For \(\omega_{k} = \sqrt{2\alpha_{k}}\gg 1\) , the magnification offers error correction because \(v^{(k)}(x)\) and \(v^{(k)}(x + \Delta x)\) are far apart, which is resistant to noise or corruption. That is, projection onto the 2D sub-manifold codebook removes the noise. Specifically, suppose we have a vector \(u = v^{(k)}(x) + \epsilon\) , where \(\epsilon \sim \mathrm{N}(0,s^{2}I_{d})\) , and \(I_{d}\) is the \(d\) -dimensional identity matrix. We can decode \(x\) from \(u\) based on the codebook by maximizing \(\langle u,v^{(k)}(y)\rangle = \langle v^{(k)}(x),v^{(k)}(y)\rangle +\langle \epsilon ,v^{(k)}(y)\rangle = d(1 - \alpha_{k}|y - x|^{2}) + \sqrt{d} sZ\) over \(y\) that is within a local neighborhood of \(x\) , where \(Z\sim \mathrm{N}(0,1)\) . The optimal \(y\) will be close to \(x\) because if \(y\) deviates from \(x\) by \(\Delta x\) , then the first term will drop by \(d\alpha_{k}|\Delta x|^{2}\) , which cannot be made up by the second term \(\sqrt{d} sZ\) due to noise, unless the noise level \(s\) is extremely large. From the above analysis, we can also see that the larger \(d\) is, the more resistant the system is to noise, because the first term is of order \(d\) and the second term is of order \(\sqrt{d}\) . In addition to additive noise, the system is also resistant to multiplicative noise, including dropout errors, i.e., multiplicative Bernoulli 0/1 errors. The dropout errors may occur due to noise, aging, or disease. They may also be related to the asynchronous nature of the neuron activities in computing. <--- Page Split --->
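The decoding-by-projection argument above is easy to check numerically. In the sketch below the codebook is a random placeholder (a learned codebook would instead lie on the 2D sub-manifold just described), and the sizes are chosen only so that the toy decoding is reliable:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N2 = 96, 1600                       # units per codebook, 40x40 lattice positions
# Placeholder codebook: random columns rescaled so that ||v(x)||^2 = d for all x.
V = rng.standard_normal((d, N2))
V *= np.sqrt(d) / np.linalg.norm(V, axis=0)

def decode(u):
    """Project a (possibly corrupted) vector onto the codebook:
    argmax_x <u, v(x)>."""
    return int(np.argmax(V.T @ u))

x = 123
u_add = V[:, x] + 0.5 * rng.standard_normal(d)   # additive Gaussian noise
u_drop = V[:, x] * (rng.random(d) > 0.2)         # multiplicative dropout (0/1) noise
print(decode(u_add), decode(u_drop), x)          # both typically recover x
```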
Let \(e(x) = (e^{i\langle a_{j},x\rangle},j = 1,2,3)^{\top}\), where \(a_{1}\), \(a_{2}\), \(a_{3}\) are three 2D vectors such that the angle between \(a_{i}\) and \(a_{j}\) is \(2\pi /3\) for all \(i\neq j\) and \(|a_{j}| = 2\sqrt{\alpha}\) for all \(j\). Let \(C\) be a random \(3\times 3\) complex matrix such that \(C^{*}C = I\). Then \(v(x) = Ce(x)\), \(M(\Delta x) = C\,\mathrm{diag}(e(\Delta x))\,C^{*}\) satisfy equation 1 and equation 4 approximately for all \(x\) and small \(\Delta x\).

\(v(x)\) amounts to a 6-dimensional real vector. Since the angle between \(a_{i}\) and \(a_{j}\) is \(2\pi /3\) for all \(i\neq j\), the patterns of \(v(x)\) over \(x\) have hexagon periodicity. Moreover, the scale of the patterns is controlled by the length of \(a_{j}\), i.e., the scaling parameter \(\alpha\). We want to emphasize that sub-models (1) and (2) are about local \(\Delta x\); we do not make any assumptions about global patterns, such as a Fourier basis. In contrast, the solution in the above theorem is global. That is, our model assumes much less than the solution in the theorem. Our experiments show that the hexagon patterns emerge as long as the number of units is greater than or equal to 6.

### 3.7 SUB-MODEL 3 ABOUT GLOBAL GEOMETRY: ADJACENCY KERNEL

Because of the periodicity, each block \((v^{(k)}(x),\forall x)\) does not form a global codebook of 2D positions, i.e., there can be \(x\neq y\) with \(v^{(k)}(x) = v^{(k)}(y)\); that is, \(v^{(k)}\) does not encode \(x\) uniquely. We can combine multiple blocks to resolve the global ambiguity. Specifically, let \(v(x) = (v^{(k)}(x),k = 1,\ldots ,K)\) be the whole vector. We assume the following global adjacency sub-model for the whole vector: \[\langle v(x),v(y)\rangle = \sum_{k = 1}^{K}\langle v^{(k)}(x),v^{(k)}(y)\rangle = (Kd)f(|x - y|), \quad (6)\] where \((v(x),M(\Delta x),\alpha_{k},\forall x,\Delta x,k)\) are to be learned. Recall that \(d\) is the number of grid cells in each block, and \(K\) is the number of blocks. \(f(r)\) is the adjacency kernel, which decreases monotonically as the Euclidean distance \(r = |x - y|\) increases. One example of \(f\) is the Gaussian kernel \(f(r) = \exp \left(- r^{2} / 2\sigma^{2}\right)\). Another example is the exponential kernel \(f(r) = \exp \left(- r / \sigma\right)\). As a matter of normalization, we assume \(f(0) = 1\), which is the maximum of \(f(r)\). Since \(f(0) = 1\), \(\| v(x)\|^{2} = Kd\) for any \(x\), and \(\langle v(x),v(y)\rangle = (Kd)\cos \theta\), where \(\theta\) is the angle between \(v(x)\) and \(v(y)\), and we have \[\mathrm{Global~adjacency}:\cos \theta = f(|x - y|). \quad (7)\] The angle between any two vectors is always less than \(\pi /2\). See Figure 2(3). By fitting the multiple sub-vectors together, we retain the error-correction capacity due to magnified local isometry, and meanwhile we eliminate the ambiguity by letting \(\langle v^{(k)}(x),v^{(k)}(y)\rangle\) for different \(k\) cancel each other out by destructive interference as \(y\) moves away from \(x\), so that we obtain unique decoding of positions. Let \(C = \{v(x),x\in D\}\) be the codebook sub-manifold; error correction of a vector \(u\) is obtained by projection onto \(C\): \(\arg \max_{v\in C}\langle u,v\rangle\). The whole vector \(v\) is like a \(K\)-arm clock, with each \(v^{(k)}\) being an arm rotating at a speed \(\omega_{k}|\Delta x| = \sqrt{2\alpha_{k}}|\Delta x|\).
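To make Theorem 1 concrete, here is a minimal numpy sketch (ours, not the authors' released code) that constructs the analytic solution and numerically checks sub-models (1) and (2); the value of \(\alpha\), the random seed, and the test points are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 36.0

# Three 2D wave vectors a_1, a_2, a_3 at mutual angles 2*pi/3, each of length 2*sqrt(alpha).
phis = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
A = 2 * np.sqrt(alpha) * np.stack([np.cos(phis), np.sin(phis)], axis=1)  # shape (3, 2)

def e(x):
    return np.exp(1j * (A @ x))  # e(x) of Theorem 1, shape (3,) complex

# Random unitary C with C* C = I, e.g., from a QR decomposition of a random complex matrix.
C, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

def v(x):
    return C @ e(x)

def M(dx):
    return C @ np.diag(e(dx)) @ C.conj().T

x = np.array([0.37, 0.62])
dx = np.array([0.01, -0.02])

# Sub-model (1): v(x + dx) = M(dx) v(x) holds exactly for this construction.
assert np.allclose(v(x + dx), M(dx) @ v(x))

# Sub-model (2): Re <v(x), v(x + dx)> = 3 (1 - alpha |dx|^2) up to O(|dx|^4),
# as in the proof in Appendix A (np.vdot conjugates its first argument).
lhs = np.real(np.vdot(v(x), v(x + dx)))
rhs = 3 * (1 - alpha * np.sum(dx ** 2))
print(lhs, rhs)  # the two values agree to second order in |dx|
```

Running the check with larger \(\Delta x\) shows the quadratic approximation degrading, which is consistent with the locality of sub-model (2).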
### 3.8 LOCALIZATION AND HEAT MAP

\((v(x),\forall x)\) forms a global codebook for \(x\). It is a 2D sub-manifold on the sphere in the \((Kd)\)-dimensional space. For a vector \(v\), we can decode its position by its projection on the codebook manifold. Since \(f(r)\) is monotonically decreasing, \(h(x) = \langle v,v(x)\rangle\) gives us the heat map to decode the position of \(v\) uniquely. Let the decoded position be \(\hat{x}\); then \(\hat{x} = \arg \max_{x}\langle v,v(x)\rangle\). We can obtain the one-hot representation \(\delta_{\hat{x}}\) of \(\hat{x}\) by non-maximum suppression on the heat map \(h(x)\).

Let \(V = (v(x),\forall x)\) be the \((Kd)\times N^{2}\) matrix (recall that the domain \(D = [0,1]^{2}\) is discretized into an \(N\times N\) lattice), where each column is a \(v(x)\). We can write the heat map \(h(x) = \langle v,v(x)\rangle\) as an \(N^{2}\)-dimensional vector \(h = V^{\top}v\), which serves to decode the position \(x\) encoded by \(v\). Conversely, for a one-hot representation of a position \(x\), i.e., \(\delta_{x}\), which is a one-hot \(N^{2}\)-dimensional vector, we can encode it by \(v = V\delta_{x}\). Both the encoder and decoder can be implemented by a linear neural network with connection weights \(V\) and \(V^{\top}\) respectively, as illustrated by the following diagram: \[\begin{array}{rlrll}{\mathrm{Localization:}} & {v} & {\xrightarrow{V^{\top}\times}} & {h} & {\mathrm{(heat~map~and~decoding~to~}\delta_{x})}\\ & {\delta_{x}} & {\xrightarrow{V\times}} & {v(x)} & {\mathrm{(encoding)}} \end{array} \quad (8)\] Note that in decoding \(v\to h(x)\to \delta_{x}\) and encoding \(\delta_{x}\to v\), we do not represent or operate on the 2D coordinate \(x\) explicitly, i.e., \(x\) itself is never explicitly represented, although we may use the notation \(x\) in the description of the experiments.

### 3.9 PATH INTEGRAL

Path integral (also referred to as dead reckoning) is the task of inferring the self-position based on self-motion (e.g., imagine walking in a dark room). Specifically, the input to path integral is a previously determined initial position \(x_{0}\) and a motion sequence \(\{\Delta x_{1},\dots,\Delta x_{T}\}\), and the output is the prediction of one's current position \(x_{T}\). We first encode the initial position \(x_{0}\) as \(v(x_{0})\). Then, by the motion model, the hidden vector \(v(x_{T})\) at time \(T\) can be predicted as: \[v(x_{T}) = \prod_{t = T}^{1}M(\Delta x_{t})v(x_{0}). \quad (9)\] We can then decode \(x_{T}\) from \(v(x_{T})\).

### 3.10 PATH PLANNING

Our representation system can plan a direct path from the starting position \(x_{0}\) to the target position \(y\) by steepest ascent on the inner product \(\langle v(x),v(y)\rangle\): let \(v_{0} = v(x_{0})\); the algorithm iterates \[\begin{array}{l}{\Delta x_{t} = \arg \max_{\Delta x\in \Delta}\langle v(y),M(\Delta x)v_{t - 1}\rangle ,}\\ {v_{t} = M(\Delta x_{t})v_{t - 1},} \end{array} \quad (10)\] where \(\Delta\) is the set of allowable displacements \(\Delta x\). When a rat does path planning, its grid cells are expected to be active even if it is not moving. This may be explained by the above algorithm. In path planning, the rat can also fantasize about bigger step sizes that are beyond its physical capacity, by letting \(\Delta\) include physically impossible large steps. In general, our representation scheme \((v(x),M(\Delta x),f(|x - y|))\) mirrors \((x,\Delta x,|x - y|)\).
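Equations 8 and 9 translate directly into a few lines of code. The following numpy sketch is a hedged reading of them, assuming a learned codebook matrix \(V\) of shape \((Kd)\times N^{2}\) and a dictionary of learned motion matrices; the names `V`, `motion_mats`, and the lattice indexing are our assumptions rather than the paper's implementation.

```python
import numpy as np

N = 40  # lattice size, so positions are indexed 0 .. N*N - 1

def encode(pos_index, V):
    """Encoding of eq. 8: v = V delta_x, i.e., pick the column at the lattice index."""
    return V[:, pos_index]

def decode(v, V):
    """Decoding of eq. 8: heat map h = V^T v, then take the argmax position."""
    h = V.T @ v
    return int(np.argmax(h))

def path_integral(start_index, motions, V, motion_mats):
    """Path integral of eq. 9: start from v(x_0) and apply one motion matrix per step."""
    v = encode(start_index, V)
    for dx in motions:           # dx is a discrete displacement key such as (1, -2)
        v = motion_mats[dx] @ v  # v_t = M(dx_t) v_{t-1}
    return decode(v, V)          # predicted lattice index of x_T
```

The planning iteration of equation 10 uses the same ingredients: at each step, score every \(M(\Delta x)v_{t-1}\) against \(v(y)\) by the inner product and take the argmax.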
Thus the learned representation is capable of implementing existing path planning algorithms in robotics (Siegwart et al. (2011)), even though our system does not have explicit coordinates in 2D (i.e., the 2D coordinates \(x\) are never explicitly represented by two neurons).

### 3.11 PLACE CELLS AND SCALE

For each \(x\), we may interpret \(\langle v(x),v(y)\rangle /(Kd) = f(|x - y|)\) as a place cell whose response is \(f(|x - y|)\) if the agent is at \(y\); or, at least, we may consider \(f(|x - y|)\) an internal input to the place cell based on self-motion, in addition to external visual cues.

For the Gaussian kernel \(f(r) = \exp (- r^{2} / 2\sigma^{2})\), the choice of \(\sigma\) determines the scale. In path integral, for accurate decoding of the position \(x\) from the vector \(v\) in the presence of noise, we want \(\sigma\) to be small so that \(f(r)\) drops to zero quickly. However, in path planning, for a vector \(v\), we also want to know the Euclidean distance between its position \(x\) and a target position \(y\), which may be far away from \(x\). The distance is \(|x - y| = f^{-1}(\langle v,v(y)\rangle /(Kd))\). For accurate estimation of long distances in the presence of noise, we need \(\sigma\) to be large, so that the slope of \(f^{-1}\) is not too steep. Perhaps we need multiple \(f(r)\) with different \(\sigma\), and for each \(f(r)\) we have \((Kd)f(r) = \sum_{k = 1}^{K}\gamma_{k}\langle v^{(k)}(x),v^{(k)}(y)\rangle\), where different \(f(r)\) have different coefficients \((\gamma_{k})\) while sharing the same \((v^{(k)}(x))\). We shall study this issue in future work.

### 3.12 GROUP REPRESENTATION IN MOTOR CORTEX

Where does the self-motion \(\Delta x\) come from? It comes from the movements of the head and legs; i.e., each step of navigation involves a whole process of path integral and path planning for the movements of the head and legs (as well as the arms). In general, we can use the same system we have developed for the movements of the head, legs, arms, fingers, etc. Their movements form groups of actions. In navigation, the movements belong to the 2D Euclidean group. In body movements, the movements belong to various Lie groups. Our method can be used to learn the representational systems of these groups, and such representations may exist in the motor cortex. We leave this problem to future investigation.

## 4 LEARNING REPRESENTATION

The square domain \(D\) is discretized into a \(40\times 40\) lattice, and the agent is only allowed to move on the lattice. We can learn \((v(x),\forall x)\) and \((M(\Delta x),\forall \Delta x)\) (or the \(\beta\) coefficients that parametrize \(M\)) by minimizing the following loss functions. For sub-model (1) on vector-matrix multiplication, the loss function is \[L_{1} = \mathbb{E}_{x,\Delta x}\left[\| v(x + \Delta x) - M(\Delta x)v(x)\|^{2}\right], \quad (12)\] where \(x\) is sampled from the uniform distribution on \(D = [0,1]^{2}\), and \(\Delta x\) is sampled from the uniform distribution within a certain range \(\Delta_{1}\) (3 grid points in each direction in our experiments). The above motion loss is a single-step loss.
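As a concrete illustration, here is a hedged PyTorch sketch of the quadratic parametrization in equation 3 together with the single-step loss \(L_{1}\) of equation 12; it is a minimal reconstruction under our own naming (`codebook`, `Beta`), with the block-diagonal structure of Section 3.2 omitted, and \(L_{2}\), \(L_{3}\) would be added analogously.

```python
import torch

n_units = 96   # dimension of v(x)
N = 40         # lattice size; lattice site (i, j) is flattened to index i * N + j

# Learnable parameters: one code per lattice site, and the five beta coefficient
# matrices of eq. 3 (for dx1, dx2, dx1^2, dx2^2, dx1*dx2).
codebook = torch.nn.Parameter(0.01 * torch.randn(N * N, n_units))
Beta = torch.nn.Parameter(0.01 * torch.randn(5, n_units, n_units))

def motion_matrix(dx):
    """M(dx) = I + sum_f Beta[f] * monomial_f(dx), the residual form of eq. 3."""
    d1, d2 = dx[0], dx[1]
    feats = torch.stack([d1, d2, d1 * d1, d2 * d2, d1 * d2])
    return torch.eye(n_units) + torch.einsum('f,fij->ij', feats, Beta)

def loss_L1(x_idx, x_next_idx, dx):
    """Single-step motion loss of eq. 12 for one sampled pair (batching omitted)."""
    v, v_next = codebook[x_idx], codebook[x_next_idx]
    return ((v_next - motion_matrix(dx) @ v) ** 2).sum()

# Example: site (10, 10) moving by (+2, +1) grid points, i.e., (0.05, 0.025) in [0,1]^2.
loss = loss_L1(10 * N + 10, 12 * N + 11, torch.tensor([0.05, 0.025]))
loss.backward()
```

In the full objective, batches of such terms are averaged and combined with the weighted \(L_{2}\) and \(L_{3}\) losses, and all parameters are updated with Adam.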
The single-step motion loss can be generalized to a multi-step loss \[L_{1,T} = \mathbb{E}_{x,\Delta x_{1},\ldots,\Delta x_{T}}\left[\| v(x + \Delta x_{1} + \ldots +\Delta x_{T}) - M(\Delta x_{T})\cdots M(\Delta x_{1})v(x)\|^{2}\right], \quad (13)\] where \((\Delta x_{t},t = 1,\dots,T)\) is a sequence of \(T\) steps of displacements, i.e., a simulated trajectory.

For sub-model (2) on magnified local isometry, the loss function is \[L_{2,k} = \mathbb{E}_{x,\Delta x}\left[\left(\langle v^{(k)}(x),v^{(k)}(x + \Delta x)\rangle - d(1 - \alpha_{k}|\Delta x|^{2})\right)^{2}\right], \quad (14)\] where for fixed \(\alpha_{k}\), \(\Delta x\) is sampled uniformly within the range \(\Delta_{2}(\alpha_{k}) = \{\Delta x:\alpha_{k}|\Delta x|^{2}\leq c\}\) (\(c = 1.5\) in our experiments). We can define \(L_{2} = \sum_{k = 1}^{K}L_{2,k}\).

For sub-model (3) on the global adjacency kernel, the loss function is \[L_{3} = \mathbb{E}_{x,y}\left[\left((Kd)f(|x - y|) - \langle v(x),v(y)\rangle\right)^{2}\right], \quad (15)\] where both \(x\) and \(y\) are sampled uniformly from \(D\).

Let \(v(x) = (v_{i}(x),i = 1,\dots,Kd)\). We also impose a regularization loss \(L_{0} = \sum_{i = 1}^{Kd}(\mathbb{E}_{x}[v_{i}(x)^{2}] - 1)^{2}\) to enforce uniform energy among the grid cells. It also helps to break the symmetry caused by the fact that the loss function is invariant under the transformation \(v(x)\to Qv(x),\forall x\), where \(Q\) is an arbitrary orthogonal matrix. This loss is not crucial, though, so we will keep it implicit for the rest of the paper. Fixing the magnitude of \(\mathbb{E}_{x}[v_{i}(x)^{2}]\) within a certain range is biologically plausible, because \(v_{i}(x)\) is a single cell. For mathematical and computational convenience, we can also normalize \(v(x)\) so that \(\| v(x)\|^{2} = 1\), \(\langle v(x),v(y)\rangle = f(|x - y|)\), \(\langle v^{(k)}(x),v^{(k)}(x + \Delta x)\rangle = (1 - \alpha_{k}|\Delta x|^{2})/K\), and \(\mathbb{E}_{x}[v_{i}(x)^{2}] = 1/(Kd)\). When learning a single block, we can normalize \(v^{(k)}(x)\) so that \(\| v^{(k)}(x)\|^{2} = 1\), \(\langle v^{(k)}(x),v^{(k)}(x + \Delta x)\rangle = 1 - \alpha_{k}|\Delta x|^{2}\), and \(\mathbb{E}_{x}[v_{i}(x)^{2}] = 1/d\). The total loss function is a linear combination of the above losses, where the weights for combining the losses are chosen so that the weighted losses are of the same order of magnitude.

The loss function is minimized by the Adam optimizer (Kingma & Ba (2014)) (lr = 0.03) for 6,000 iterations. A batch of 30,000 examples, i.e., \((x,\Delta x_{t},t = 1,\dots,T)\) for \(L_{1}\), \((x,\Delta x)\) for \(L_{2}\), and \((x,y)\) for \(L_{3}\), is sampled at each learning iteration as the input to the loss function. In the later stage of learning (\(\geq 4{,}000\) iterations), \(\| v(x)\| = 1\) is enforced by projected gradient descent, i.e., normalizing each \(v(x)\) after the gradient descent step.

## 5 EXPERIMENTS

### 5.1 LEARNING SINGLE BLOCKS: HEXAGON PATTERNS AND METRICS

![](images/7_0.jpg)
<center>Figure 3: Learned units of a single block with fixed \(\alpha\). (a) Learned single block with 6 units. Every row shows the learned units with a given \(\alpha\). (b) Learned single block with 100 units and \(\alpha = 72\).</center>

We first learn a single block with fixed \(\alpha_{k}\) by minimizing \(L_{1} + \lambda_{2}L_{2,k}\) (we shall drop the subscript \(k\) in this subsection for simplicity). Figure 3 shows the learned units over the \(40\times 40\) lattice of \(x\).
Figure 3(a) shows the learned results with 6 units and different values of \(\alpha\). The scale or metric of the lattice is controlled by \(\alpha\). The units within a block have patterns with similar scales and arrangements, yet different phases. Figure 3(b) shows the learned results with 100 units and \(\alpha = 72\), indicating that the grid-like patterns are stable and easy to learn even when the number of units is large.

### 5.2 LEARNING MULTIPLE HEXAGON BLOCKS AND METRICS

![](images/7_1.jpg)
<center>Figure 4: (a) Response maps of learned units of the vector representation and learned scaling parameters \(\alpha_{k}\). Block size equals 6 and each row shows the units belonging to the same block. (b) Illustration of block-wise activities of the units (where the activities are rectified to be positive).</center>

We learn multiple blocks by minimizing \(L_{1} + \lambda_{2}L_{2} + \lambda_{3}L_{3}\). Instead of manually assigning \(\alpha_{k}\), we learn \(\alpha_{k}\) by gradient descent, simultaneously with \(v\) and \(M\). In Figure 4(a), we show the learned units \(v(x)\) over the \(40 \times 40\) lattice of \(x\) and the learned metrics \(\alpha_{k}\). A Gaussian kernel with \(\sigma = 0.08\) is used for the global adjacency measure \(f(|x - y|)\). The block size is set to 6 and each row shows the learned units belonging to the same block. The scales of the firing fields are controlled by the learned \(\alpha_{k}\).

Figure 4(b) illustrates the combination of multiple blocks. For the localization model, given a vector \(v\), the heat map of a single block \(\langle v^{(k)}, v^{(k)}(x) \rangle\) has periodic firing fields and cannot determine a location uniquely. However, the ambiguity disappears by combining the heat maps of multiple blocks, which have firing fields of multiple scales and phases that add up to a Gaussian kernel \(\langle v, v(x) \rangle\). The Gaussian kernel informs the place cell, by which the location is determined uniquely. For a motion \(\Delta x\), every block rotates in its own subspace with motion matrix \(M^{(k)}(\Delta x)\), resulting in a phase shift in the heat map of each single block. In the subsequent experiments, we shall learn the grid cells by minimizing \(L_{1} + \lambda_{3}L_{3}\) for simplicity.

### 5.3 PATH INTEGRAL

![](images/8_0.jpg)
<center>Figure 5: (a) Path integral prediction. The black line depicts the real path while the red dotted line is the path predicted by the learned model. (b) Mean square error over time steps. The error is averaged over 1,000 episodes. The curves correspond to different numbers of steps used in the multi-step motion loss. (c) Mean square error for models with different block sizes and different kernel types. Error is measured in numbers of grid points.</center>

Figure 5(a) shows an example of a path integral result (time duration \(\tilde{T} = 40\)), where we use the single-step motion loss \(L_{1,T = 1}\) in learning. A Gaussian kernel with \(\sigma = 0.08\) is used as the adjacency measure in \(L_{3}\). We find that the single-step loss is sufficient for performing path integral. The mean square error remains small (\(\sim 1.2\) grid points) even after 400 steps of motion (Figure 5(b)). The error is averaged over 1,000 episodes. The motion loss can be generalized to the multi-step loss \(L_{1,T}\), as shown by equation 13. In Figure 5(b), we show that the multi-step loss can improve the performance slightly. In Figure 5(c) we compare learned models with a fixed number of units (96) but different block sizes.
We also compare the performance of models using the Gaussian kernel (\(\sigma = 0.08\)) and the exponential kernel (\(\sigma = 0.3\)) as the adjacency measure in the localization model. The results show that models with the Gaussian kernel and block size \(\geq 3\), and with the exponential kernel and block size \(\geq 4\), have performance comparable to the model learned without the block-diagonal assumption (block size \(= 96\)).

### 5.4 PATH PLANNING

For path planning, we assume continuous \(x\) and \(\Delta x\). First, we design the set of allowable motions \(\Delta\). For a small length \(r\), we evenly divide \([0, 2\pi]\) into \(n\) directions \(\{\theta_{i}, i = 1, \ldots, n\}\), resulting in \(n\) candidate motions \(\Delta = \{\Delta x_{i} = (r \cos (\theta_{i}), r \sin (\theta_{i})), i = 1, \ldots, n\}\). These \(n\) small motions serve as a motion basis. Larger motions can be further added to \(\Delta\) by estimating the motion matrix \(M(k \Delta x_{i}) = M^{k}(\Delta x_{i})\). The starting position and destination can also take any continuous values, where the encoding to the latent vector is approximated by bilinear interpolation of the nearest neighbors on the lattice.

In the experiments, we choose \(r = 0.05\) and \(n = 100\), and add another set of motions with length 0.025 to enable accurate planning. The system is learned with the exponential kernel (\(\sigma = 0.3\)) as the global adjacency to encourage long-distance connections. Figure 6(a) shows planning examples with six settings of motion ranges \(\Delta\). Including larger motions accelerates the planning process so that it finishes in fewer steps. We define an episode to be a success if the distance between the end position and the destination is less than 0.025. We achieve a success rate of \(>99\%\) over 1,000 episodes for all six settings.

![](images/9_0.jpg)
<center>Figure 6: (a) Planning examples with different motion ranges. The red star represents the destination \(y\) and the green dots represent the planned positions \(\{x_{0} + \sum_{i = 1}^{t}\Delta x_{i}\}\). (b) Planning examples with a dot obstacle. The left figure shows the effect of changing the scaling parameter \(a\), while the right figure shows the effect of changing the annealing parameter \(b\). (c) Planning examples with obstacles mimicking walls, large objects, and simple mazes.</center>

The learned system can also perform planning with obstacles, where the global adjacency between the agent's current position and an obstacle serves as a repulsive force. Specifically, suppose \(z\) is an obstacle to avoid; the agent can choose the motion \(\Delta x_{t}\) at time \(t\) by \[\Delta x_{t} = \arg \max_{\Delta x\in \Delta}\left[\langle v(y),M(\Delta x)v_{t}\rangle - a\langle v(z),M(\Delta x)v_{t}\rangle^{b}\right], \quad (16)\] where \(a\) and \(b\) are the scaling and annealing parameters. Figure 6(b) shows the planning results with a dot obstacle laid on the direct path between the starting position and the destination, with tuning of \(a\) and \(b\). We choose \(a = 0.5\) and \(b = 6\) in subsequent experiments. Now suppose we have more complicated obstacles \(\{z_{i}\}_{i = 1}^{m}\). They can be included by summing the kernels over all obstacles, \(\{a\langle v(z_{i}),M(\Delta x)v_{t}\rangle^{b}\}_{i = 1}^{m}\), and choosing \(\Delta x_{t}\) at time \(t\) by \(\Delta x_{t} = \arg \max_{\Delta x\in \Delta}\left[\langle v(y),M(\Delta x)v_{t}\rangle - \sum_{i = 1}^{m}a\langle v(z_{i}),M(\Delta x)v_{t}\rangle^{b}\right]\).
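Equation 16 and its multi-obstacle extension amount to a one-step argmax over the motion pool at each iteration. The following numpy sketch is our reading of this rule; `v_target` \(= v(y)\), the obstacle codes, and the dictionary of motion matrices are assumed to be given, and \(a = 0.5\), \(b = 6\) follow the text.

```python
import numpy as np

def plan_step(v_t, v_target, v_obstacles, motion_mats, a=0.5, b=6):
    """One planning step: maximize attraction to the target minus obstacle repulsion."""
    best_score, best_dx = -np.inf, None
    for dx, M in motion_mats.items():
        v_next = M @ v_t
        score = np.dot(v_target, v_next)  # <v(y), M(dx) v_t>
        score -= sum(a * np.dot(v_z, v_next) ** b for v_z in v_obstacles)
        if score > best_score:
            best_score, best_dx = score, dx
    return best_dx, motion_mats[best_dx] @ v_t  # chosen dx_t and the updated vector
```

Since \(b\) is even, the repulsion term stays non-negative, and a larger \(b\) makes it more sharply peaked around the obstacle.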
Figure 6(c) shows some examples, where the obstacles mimic walls, large objects, and simple mazes. The above method is related to the potential field method in robotics (Siegwart et al. (2011)).

## 6 DISCUSSION: ROTATIONIST-CONNECTIONIST MODEL?

In terms of general modeling methodology, a typical recurrent network is of the form \(v_{t} = \tanh (W(v_{t - 1},\delta_{t}))\), where \(v_{t}\) is the latent vector and \(\delta_{t}\) is the input change or action. \(v_{t - 1}\) and \(\delta_{t}\) are concatenated and linearly mixed by \(W\), followed by a coordinate-wise tanh non-linearity. We replace it by a model of the form \(v_{t} = M(\delta_{t})v_{t - 1}\), where \(M(\delta_{t})\) is a matrix that is non-linear in \(\delta_{t}\). \(M(\delta_{t})\) is a matrix representation of \(\delta_{t}\). \(v_{t}\) can be interpreted as neuron activities. We can discretize the value of \(\delta_{t}\) into a finite set \(\{\delta\}\), and each \(M(\delta)\) can be stored as synaptic connection weights that drive the neuron activities. In prediction, the input \(\delta_{t}\) activates \(M(\delta_{t})\). In planning or control, all the \(\{M(\delta)\}\) are activated, and among them the optimal \(\delta\) is chosen.

The matrix representation in the above model is inspired by group representation theory, where the group elements are represented by matrices acting on vectors (Fulton & Harris (2013)). It underlies much of modern mathematics and holds the key to quantum theory (Zee (2016)). Perhaps it also underlies the visual and motor cortex, where neurons form rotating sub-vectors driven by matrices representing groups of transformations. One may call it a rotationist-connectionist model.

## PROJECT PAGE

http://www.stat.ucla.edu/~ruiqigao/gridcell/main.html

## ACKNOWLEDGEMENT

We thank the three reviewers for their insightful comments and suggestions. Part of the work was done while Ruiqi Gao was an intern at Hikvision Research Institute during the summer of 2018. She thanks Director Jane Chen for her help and guidance. We also thank Jiayu Wu for her help with experiments and Zilong Zheng for his help with visualization. The work is supported by DARPA XAI project N66001-17-2-4029; ARO project W911NF1810296; ONR MURI project N00014-16-1-2007; and a Hikvision gift to UCLA. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.

## REFERENCES

Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J Chadwick, Thomas Degris, Joseph Modayil, et al. Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705):429, 2018.

Hugh T Blair, Adam C Welday, and Kechen Zhang. Scale-invariant memory representations emerge from moiré interference between grid fields that produce theta oscillations: a computational model. Journal of Neuroscience, 27(12):3211-3229, 2007.

Yoram Burak and Ila R Fiete. Accurate path integration in continuous attractor network models of grid cells. PLoS computational biology, 5(2):e1000291, 2009.

Daniel Bush, Caswell Barry, Daniel Manson, and Neil Burgess. Using grid cells for navigation. Neuron, 87(3):507-520, 2015.

Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine learning, 20(3):273-297, 1995.

Christopher J Cueva and Xue-Xin Wei. Emergence of grid-like representations by training recurrent neural networks to perform spatial localization.
arXiv preprint arXiv:1803.07770, 2018.

Licurgo de Almeida, Marco Idiart, and John E Lisman. The input-output transformation of the hippocampal granule cells: from grid cells to place fields. Journal of Neuroscience, 29(23):7504-7512, 2009.

Christian F Doeller, Caswell Barry, and Neil Burgess. Evidence for grid cells in a human memory network. Nature, 463(7281):657, 2010.

Ugur M Erdem and Michael Hasselmo. A goal-directed spatial navigation model using forward trajectory planning based on grid cells. European Journal of Neuroscience, 35(6):916-931, 2012.

Ila R Fiete, Yoram Burak, and Ted Brookings. What grid cells convey about rat location. Journal of Neuroscience, 28(27):6858-6871, 2008.

William Fulton and Joe Harris. Representation theory: a first course, volume 129. Springer Science & Business Media, 2013.

Marianne Fyhn, Torkel Hafting, Menno P Witter, Edvard I Moser, and May-Britt Moser. Grid cells in mice. Hippocampus, 18(12):1230-1238, 2008.

Torkel Hafting, Marianne Fyhn, Sturla Molden, May-Britt Moser, and Edvard I Moser. Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052):801, 2005.

Joshua Jacobs, Christoph T Weidemann, Jonathan F Miller, Alec Solway, John F Burke, Xue-Xin Wei, Nanthia Suthana, Michael R Sperling, Ashwini D Sharan, Itzhak Fried, et al. Direct recordings of grid-like neuronal activity in human spatial navigation. Nature neuroscience, 16(9):1188, 2013.

Nathaniel J Killian, Michael J Jutras, and Elizabeth A Buffalo. A map of visual space in the primate entorhinal cortex. Nature, 491(7426):761, 2012.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Rosamund F Langston, James A Ainge, Jonathan J Couey, Cathrin B Canto, Tale L Bjerknes, Menno P Witter, Edvard I Moser, and May-Britt Moser. Development of the spatial representation system in the rat. Science, 328(5985):1576-1580, 2010.

Bruce L McNaughton, Francesco P Battaglia, Ole Jensen, Edvard I Moser, and May-Britt Moser. Path integration and the neural basis of the 'cognitive map'. Nature Reviews Neuroscience, 7(8):663, 2006.

Andrew Y Ng, Michael I Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In Advances in neural information processing systems, pp. 849-856, 2002.

John O'Keefe. A review of the hippocampal place cells. Progress in neurobiology, 13(4):419-439, 1979.

Francesca Sargolini, Marianne Fyhn, Torkel Hafting, Bruce L McNaughton, Menno P Witter, May-Britt Moser, and Edvard I Moser. Conjunctive representation of position, direction, and velocity in entorhinal cortex. Science, 312(5774):758-762, 2006.

Roland Siegwart, Illah Reza Nourbakhsh, Davide Scaramuzza, and Ronald C Arkin. Introduction to autonomous mobile robots. MIT press, 2011.

Sameet Sreenivasan and Ila Fiete. Grid cells generate an analog error-correcting code for singularly precise neural computation. Nature neuroscience, 14(10):1330, 2011.

Hanne Stensola, Tor Stensola, Trygve Solstad, Kristian Frøland, May-Britt Moser, and Edvard I Moser. The entorhinal grid map is discretized. Nature, 492(7427):72, 2012.

Albert Tsao, Jörgen Sugar, Li Lu, Cheng Wang, James J Knierim, May-Britt Moser, and Edvard I Moser. Integrating time from experience in the lateral entorhinal cortex. Nature, 561(7721):57, 2018.

Michael M Yartsev, Menno P Witter, and Nachum Ulanovsky. Grid cells without theta oscillations in the entorhinal cortex of bats. Nature, 479(7371):103, 2011.

Anthony Zee.
Group theory in a nutshell for physicists. Princeton University Press, 2016.

## A PROOF FOR SECTION 3.6

Proof. \((a_{j},j = 1,2,3)\) forms a tight frame in the 2D space, in that for any vector \(x\) in 2D, \(\sum_{j = 1}^{3}\langle x,a_{j}\rangle^{2}\propto |x|^{2}\). Since \(C^{*}C = I\), we have \[\begin{array}{rcl}{\langle v(x),v(y)\rangle} & = & {v(x)^{*}v(y)}\\ & = & {e(x)^{*}C^{*}Ce(y)}\\ & = & {\sum_{j=1}^{3}e^{i\langle a_{j},y-x\rangle}.}\end{array} \quad (19)\] Then we have \[\begin{array}{rcl}{\mathrm{Re}(\langle v(x),v(x+\Delta x)\rangle)} & = & {\sum_{j=1}^{3}\cos(\langle a_{j},\Delta x\rangle)}\\ & \approx & {\sum_{j=1}^{3}(1-\langle a_{j},\Delta x\rangle^{2}/2)}\\ & = & {3-\sum_{j=1}^{3}\langle a_{j},\Delta x\rangle^{2}/2}\\ & = & {3(1-\alpha|\Delta x|^{2}),}\end{array} \quad (22)\] where \(|a_{j}| = 2\sqrt{\alpha}\). The self-motion from \(v(x)\) to \(v(x + \Delta x)\) is \[\begin{array}{rcl}{v(x+\Delta x)} & = & {Ce(x+\Delta x)}\\ & = & {CD(\Delta x)e(x)}\\ & = & {CD(\Delta x)C^{*}v(x)}\\ & = & {M(\Delta x)v(x),}\end{array} \quad (24)\] where \(D(\Delta x) = \mathrm{diag}(e^{i\langle a_{j},\Delta x\rangle},j = 1,2,3)\). If \(K > 1\) and the block size \(= 6\), we can fit the multiple blocks together by a Fourier expansion of the kernel function \[\begin{array}{rcl}{f(|x-y|)} & = & {\langle v(x),v(y)\rangle}\\ & = & {\sum_{k=1}^{K}\sum_{j=1}^{3}e^{i\langle a_{kj},y-x\rangle}}\\ & = & {\sum_{k=1}^{K}\langle v^{(k)}(x),v^{(k)}(y)\rangle.}\end{array} \quad (30)\]

## B HEXAGON GRID PATTERNS AND METRICS

## B.1 SIMULATED INPUT DATA

We obtain input data for learning the model by simulating agent trajectories with the number of steps equal to \(\tilde{T}\) in the square domain \(D\). \(D\) is discretized into a \(40 \times 40\) lattice and the agent is only allowed to move on the lattice. The agent starts at a random location \(x_{0}\). At each time step \(t\), a small motion \(\Delta x_{t}\) (\(\leq 3\) grid points in each direction) is randomly sampled with the restriction of not leading the agent outside the boundary, resulting in a simulated trajectory \(\{x_{0} + \sum_{i = 1}^{t}\Delta x_{i}\}_{t = 1}^{\tilde{T}}\). \(\tilde{T}\) is set to \(1{,}000\) to obtain trajectories that are uniformly distributed over the whole area. Although the trajectories used for training are restricted to the lattice and to small motions, in Section 5.4 we show that the learned model can be easily generalized to handle continuous positions and large motions. In training, pairs of locations \((x,y)\) are randomly sampled from each trajectory as the input to the adjacency loss \(L_{3}\), while consecutive position sequences \((x,\Delta x_{t},t = 1,\dots,T)\) are randomly sampled as the input to the motion loss \(L_{1}\), with length specified by \(T\) (which is usually much smaller than the whole trajectory length \(\tilde{T}\)).

## B.2 LEARNED SINGLE BLOCK UNITS WITH DIFFERENT BLOCK SIZES

In Blair et al. (2007), the grid cell response is modeled by three cosine gratings with different orientations and phases. In our model, we learn such patterns of different scales without inserting artificial assumptions.

![](images/13_0.jpg)
<center>Figure 7: Response maps of learned single block units with different block sizes</center>

Figure 7 displays the response maps of the learned single block units with different block sizes.
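As an aside, the trajectory simulation of Appendix B.1 can be made concrete with a short sketch; the rejection-sampling loop below is one natural way to keep the agent inside the boundary (the function and variable names are ours, not the paper's code).

```python
import numpy as np

rng = np.random.default_rng(0)
N, T_total, max_step = 40, 1000, 3   # lattice size, trajectory length, max move per axis

def simulate_trajectory():
    """Random walk on the N x N lattice with moves of at most max_step grid points
    in each direction, resampling any move that would leave the boundary."""
    traj = [rng.integers(0, N, size=2)]
    for _ in range(T_total):
        while True:
            dx = rng.integers(-max_step, max_step + 1, size=2)
            nxt = traj[-1] + dx
            if (nxt >= 0).all() and (nxt < N).all():
                break
        traj.append(nxt)
    return np.array(traj)   # shape (T_total + 1, 2) lattice positions
```

Pairs \((x, y)\) for \(L_{3}\) and short sub-sequences for \(L_{1}\) would then be sampled from such trajectories.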
For block sizes 4 and 5, the learned maps in Figure 7 show square lattice patterns. For block sizes greater than or equal to 6, the learned maps show hexagon lattice patterns.

## B.3 LEARNED MULTIPLE BLOCK UNITS AND METRICS

## B.3.1 WITH DIFFERENT BLOCK SIZES

Figure 8 displays the response maps of the multiple block units with different block sizes. The metrics of the multiple blocks are learned automatically.

## B.3.2 WITH DIFFERENT SHAPES OF AREA

Figure 9 displays the response maps of the multiple block units with different shapes of the area, such as a circle and a triangle.

## B.4 QUANTITATIVE ANALYSIS OF SPATIAL ACTIVITY

We assess the spatial activity of the learned units quantitatively using measures adopted from the neuroscience literature. Specifically, we quantify the hexagonal regularity of the grid-like patterns using the gridness score (Langston et al. (2010); Sargolini et al. (2006)). The measure is derived from the spatial autocorrelogram of each unit's response map. A unit is classified as a grid cell if its gridness score is larger than 0. For those units that are classified as grid cells, the grid scale and orientation can be further derived from the autocorrelogram following Sargolini et al. (2006).

Figure 10(a) summarizes the results. 76 out of 96 learned units are classified as grid cells. Most units with large learned \(\alpha_{k}\) are classified as grid cells, while those units with small \(\alpha_{k}\) are not, due to the lack of a full period of hexagonal patterns. The grid scales and orientations vary among units, while remaining similar within each block. We show the histograms of the grid orientations and scales in Figures 10(b) and 10(c) respectively. Moreover, we average the grid scales within each block and make a scatter plot of the averaged grid scales against the learned \(1/\sqrt{\alpha_{k}}\) in Figure 10(d). Interestingly, the grid scale is nicely proportional to the learned \(1/\sqrt{\alpha_{k}}\).

## B.5 ABLATION STUDIES

We conduct ablation studies to assess various assumptions in our model.

## B.5.1 LOSS TERMS

We learn \(v\) and \(M\) with multiple blocks by minimizing \(L_{1} + \lambda_{2}L_{2} + \lambda_{3}L_{3}\), which consists of (1) a motion loss \(L_{1}\), (2) a local isometry loss \(L_{2}\), and (3) a global adjacency loss \(L_{3}\). Figure 11 shows the learned units when using only some of the components. Grid-like patterns do not emerge if using only the adjacency loss \(L_{3}\), or only the isometry loss \(L_{2}\). If using only the motion loss, the motion equation is approximately satisfied at the beginning of training, since both \(v\) and \(M\) are initialized with small values that are close to 0. The system cannot be learned without an extra term to push the \(v(x)\) apart from each other. Figure 11(c) shows the learned units using the adjacency loss and the motion loss, leaving out the isometry loss. Grid-like patterns still emerge (along with some strip-like patterns), although less obvious than the ones learned using the full loss.

## B.5.2 ASSUMPTIONS OF MOTION MATRIX

In another ablation study, we drop the quadratic parametrization by the \(\beta\) coefficients and the block-diagonal assumption of the motion matrix. A separate motion matrix \(M(\Delta x)\) is learned for each displacement \(\Delta x\), and \(v(x)\) is assumed to be a single block. We use either the local isometry loss \(L_{2}\) or the global adjacency loss \(L_{3}\) in addition to the motion loss \(L_{1}\). The results are shown in Figure 12.
With the isometry loss, the setting is similar to that of learning a single block, except that the parametrization of \(M(\Delta x)\) is dropped. As shown in Figure 12(a), the learned units resemble the ones learned with the parametrized \(M(\Delta x)\), but when the block size is large, the grid-like patterns are less obvious. With the adjacency loss, grid-like patterns do not emerge any more.

![](images/15_0.jpg)
<center>Figure 8: Learned multiple block units and metrics with different block sizes</center>

## C MODELING EGOCENTRIC MOTION

The model can be generalized to handle egocentric motion that involves head direction.

![](images/16_0.jpg)
<center>Figure 9: Learned multiple block units and metrics with different shapes of the area</center>

## C.1 COUPLING TWO GRID SYSTEMS

The egocentric motion consists of the angular velocity in the change of head direction and the spatial velocity along the current head direction. We couple two grid systems, one for the head direction and the other for the spatial position. For notational simplicity, we put the "hat" notation on top of the \(v\) and \(M\) notation to denote the vector and matrix for the head direction. Specifically, the agent has a head direction \(\theta\), which may change over time. The agent moves along its head direction with a scalar motion. We can discretize the range of \(\theta\), \([0, 2\pi]\), into \(\hat{N}\) equally spaced values \(\{\theta_{i}, i = 1, \ldots, \hat{N}\}\), and introduce a vector representation of self-direction. That is, the agent represents its self-direction by a \(\hat{d}\)-dimensional hidden vector \(\hat{v}(\theta)\).

## C.1.1 MOTION SUB-MODEL

Suppose at a position \(x\), the agent has a head direction \(\theta\). The self-motion is decomposed into (1) a scalar motion \(\delta\) along the head direction \(\theta\) and then (2) a head direction rotation \(\Delta \theta\). The self-motion is \(\Delta x = (\delta \cos \theta, \delta \sin \theta)\). We assume \[\begin{array}{rcl}{\hat{v}(\theta +\Delta \theta)} & = & {\hat{M}(\Delta \theta)\hat{v}(\theta),}\\ {v(x + \Delta x)} & = & {M(\delta ,\hat{v}(\theta))v(x),} \end{array} \quad (32)\] where \(\hat{M}(\Delta \theta)\) is a \(\hat{d} \times \hat{d}\) matrix that depends on \(\Delta \theta\), and \(M(\delta, \hat{v}(\theta))\) is a \(d \times d\) matrix that depends on \(\delta\) and \(\hat{v}(\theta)\). \(\hat{M}(\Delta \theta)\) is the matrix representation of the head direction rotation \(\Delta \theta\), while \(M(\delta, \hat{v}(\theta))\) is the matrix representation of the scalar motion \(\delta\) along the direction \(\theta\).

![](images/17_0.jpg)
<center>Figure 10: (a) Autocorrelograms of the learned units' response maps. Gridness scores are calculated based on the autocorrelograms. A unit is classified as a grid cell if the gridness score is larger than 0. The gridness score is shown in red if a unit fails to be classified as a grid cell. For those units that are classified as grid cells, the gridness score, scale, and orientation are listed sequentially in black. Orientation is computed with respect to a camera-fixed reference line \((0^{\circ})\), in the counterclockwise direction. (b) Histogram of grid orientations. (c) Histogram of grid scales. (d) Scatter plot of averaged grid scales within each block versus the corresponding learned \(1 / \sqrt{\alpha_{k}}\).
</center>

We can model \(M(\delta, \hat{v}(\theta))\) by an attention (or selection) mechanism: \[\begin{array}{l}{p_{i} = \frac{\langle\hat{v}(\theta),\hat{v}(\theta_{i})\rangle^{b}}{\sum_{i = 1}^{\hat{N}}\langle\hat{v}(\theta),\hat{v}(\theta_{i})\rangle^{b}},}\\ {M(\delta ,\hat{v}(\theta)) = \sum_{i = 1}^{\hat{N}}p_{i}M^{(i)}(\delta),} \end{array} \quad (34)\] where \(M^{(i)}(\delta)\) is the matrix representation of the scalar motion \(\delta\) given the head direction \(\theta_{i}\). The inner product \(\langle \hat{v}(\theta), \hat{v}(\theta_{i})\rangle\), which informs the angular distance between \(\theta\) and \(\theta_{i}\), serves as the attention weight. \(b\) is an annealing (inverse temperature) parameter. If \(b \to \infty\), \(p = (p_{i}, i = 1, \ldots, \hat{N})\) becomes a one-hot vector for selection. We can further assume that \(\hat{M}(\Delta \theta)\) and \(M^{(i)}(\delta)\) are block diagonal, and learn a parametric model for each of them by a second-order Taylor expansion in \(\Delta \theta\) and \(\delta\) respectively.

## C.1.2 LOCALIZATION SUB-MODEL

For the localization sub-model, we define the adjacency measures of self-direction and self-position separately. Let \(\hat{f}(|\theta_{1} - \theta_{2}|)\) be the adjacency measure between two directions \(\theta_{1}\) and \(\theta_{2}\). We use the von Mises kernel \(\hat{f}(r) = \exp ((\cos (r) - 1) / \sigma^{2})\), for which \(\hat{f}(0) = 1\). For the adjacency measure between self-positions \(x\) and \(y\), we keep it the same as described in Section 3.7.

![](images/18_0.jpg)
<center>Figure 11: Ablation study of the components in the training loss. (a) Learn the model using only the localization loss with global adjacency. (b) Learn the model using only the localization loss with local adjacency. (c) Using global adjacency and motion loss, leaving out local adjacency.</center>

## C.1.3 LOSS FUNCTION FOR LEARNING

For learning the system, we can first learn \(\hat{v}\) and \(\hat{M}\) (or the coefficients that parametrize \(\hat{M}\)) by minimizing \[\mathbb{E}_{\theta_{1},\theta_{2}}\left[\left(\hat{f}(|\theta_{1} - \theta_{2}|) - \langle \hat{v}(\theta_{1}),\hat{v}(\theta_{2})\rangle\right)^{2}\right] + \lambda \mathbb{E}_{\theta ,\Delta \theta}\left[\left\Vert \hat{v}(\theta +\Delta \theta) - \hat{M}(\Delta \theta)\hat{v}(\theta)\right\Vert^{2}\right], \quad (35)\] and then we learn \(v\) and \(M\) (or the coefficients that parametrize \(M\)) as before.

## C.2 LEARNED UNITS FOR SELF-DIRECTION AND SELF-POSITION

Figure 13 shows a result of learning such an egocentric motion model, by displaying the response curves of the learned units in the head direction system, \(\hat{v}(\theta)\) for \(\theta \in [0,2\pi]\), as well as the response maps of the learned multiple block units in the self-position system, \(v(x)\) for \(x \in [0,1]^2\).

## C.3 CLOCK AND TIMESTAMP

We may re-purpose the head direction system as a clock, by interpreting \(\theta \in [0,2\pi]\) as the time on a clock and \(\hat{v}(\theta)\) as a timestamp for events happening over time. This may be related to recent neuroscience observations in Tsao et al. (2018).

## D ERRORS

## D.1 ERROR CORRECTION

Unlike commonly used embeddings in machine learning, here we embed a 2D position into a high-dimensional space, and the embedding is a highly distributed representation or population code.
![](images/19_0.jpg)
<center>Figure 12: Learned units by dropping the parametrization and the block-diagonal assumption of the motion matrix \(M(\Delta x)\).</center>

The advantage of such a redundant code lies in its tolerance to errors. We show that the learned system is tolerant to various sources of errors. Specifically, in both the path integral and path planning tasks, at every time step \(t\) we randomly add (1) Gaussian noise or (2) dropout masks to the hidden units and see if the system can still perform the tasks well. We find that the decoding-encoding process (DE) is important for error correction. That is, at each time step \(t\), given the noisy hidden vector \(v_{t}\), we decode it to \(x_{t} = \arg \max_{x}\langle v_{t},v(x)\rangle\) and then re-encode it to the hidden vector \(v(x_{t})\). Actually, the whole process can be accomplished by projection onto the codebook sub-manifold without explicit decoding, by obtaining the vector \(\arg \max_{v\in C}\langle v_{t},v\rangle\), where \(C = \{v(x),x\in [0,1]^{2}\}\).

Table 1 shows the error correction results tested on the path integral and path planning tasks. Each number is averaged over 1,000 episodes. We compute the overall standard deviation of \(\{v(x)\}\) for all \(x\) and treat it as the reference standard deviation (\(s\)) for the Gaussian noise. For dropout noise, we set a percentage of units to drop at each time step. With the decoding-encoding process, the system is quite robust to Gaussian noise and dropout errors, and the system still works even if \(70\%\) of the units are silenced at each step.

## D.2 NOISY SELF-MOTION INPUT

Besides adding noise to the hidden units, we also experiment with adding noise to the self-motion \(\Delta x\), and compare the performance of path integral with that obtained in the original 2D coordinates. Specifically, at each time step, we add Gaussian noise to the self-motion \(\Delta x\). For path integral, we compute the mean square error between the predicted locations and the ground truth locations. Besides, we also compute the predicted locations using the 2D coordinates with the noisy self-motions. Its mean square error serves as the reference error. Table 2 shows the results, indicating that the error of the learned system is close to the error in the original 2D coordinates, i.e., our system does not amplify the noise.

![](images/20_0.jpg)
<center>Figure 13: (a) Response curves of learned units in the head direction model, \(\hat{v}(\theta)\). Each block shows the units belonging to the same sub-vector in the model. The horizontal axis represents the angle within a range of \([0, 2\pi]\), while the vertical axis indicates the values of the responses. (b) Response maps of the learned multiple block units of the self-position model, \(v(x)\), for \(x \in [0, 1]^2\).
</center>

<table><tr><td colspan="5">Path integral: MSE</td><td colspan="5">Path planning: success rate</td></tr><tr><td>Noise type</td><td>1s</td><td>0.75s</td><td>0.5s</td><td>0.25s</td><td>0.1s</td><td>1s</td><td>0.75s</td><td>0.5s</td><td>0.25s</td></tr><tr><td>Gaussian (DE)</td><td>1.687</td><td>1.135</td><td>0.384</td><td>0.017</td><td>0</td><td>0.928</td><td>0.959</td><td>0.961</td><td>0.977</td></tr><tr><td>Gaussian</td><td>6.578</td><td>2.999</td><td>1.603</td><td>0.549</td><td>0.250</td><td>0.503</td><td>0.791</td><td>0.934</td><td>0.966</td></tr><tr><td>Noise type</td><td>70%</td><td>50%</td><td>30%</td><td>10%</td><td>5%</td><td>70%</td><td>50%</td><td>30%</td><td>10%</td></tr><tr><td>Dropout (DE)</td><td>2.837</td><td>1.920</td><td>1.102</td><td>0.109</td><td>0.013</td><td>0.810</td><td>0.916</td><td>0.961</td><td>0.970</td></tr><tr><td>Dropout</td><td>19.611</td><td>16.883</td><td>14.137</td><td>3.416</td><td>0.602</td><td>0.067</td><td>0.186</td><td>0.603</td><td>0.952</td></tr></table>

Table 1: Error correction results on the vector representation. The performance of path integral is measured by the mean square error between predicted locations and ground truth locations, while for path planning the performance is measured by the success rate. Experiments are conducted at several noise levels: Gaussian noise with different standard deviations in terms of the reference standard deviation \(s\), and dropout masks with different percentages. DE means implementing the decoding-encoding process when performing the tasks.

## E 3D ENVIRONMENT AND 1D TIME

The system can be generalized to 3D environments. Specifically, we assume the agent navigates within a domain \(D = [0,1] \times [0,1] \times [0,1]\), which is discretized into a \(40 \times 40 \times 40\) lattice. We learn a parametric model for the motion matrix \(M\) in a residual form \(M = I + \bar{M}(\Delta x)\), where \(\bar{M}(\Delta x)\) is approximated by a second-order Taylor expansion in \(\Delta x\). We first learn a single block with fixed \(\alpha_{k}\) by minimizing \(L_{1} + \lambda_{2}L_{2,k}\). A batch of 500,000 examples of \((x, y)\) and \((x, \Delta x_{t}, t = 1, \ldots, T)\) is sampled online at every iteration for training. Figure 14 shows the learned units.

<table><tr><td>Standard deviation</td><td>1.2</td><td>0.9</td><td>0.6</td><td>0.3</td></tr><tr><td>Learned system</td><td>6.020</td><td>4.382</td><td>3.000</td><td>1.422</td></tr><tr><td>Reference</td><td>5.852</td><td>4.185</td><td>2.873</td><td>1.315</td></tr></table>

Table 2: Path integral results with noise in the self-motion. Performance is measured by the mean square error (MSE) between the predicted locations and the ground truth locations in the path integral task. Gaussian noise with different standard deviations (in numbers of grid points) is added to the self-motions \(\Delta x\). The reference MSE is computed by path integral in the 2D coordinates.

![](images/21_0.jpg)
<center>Figure 14: Learned units of a single block with fixed \(\alpha\) in the 3D environment. Every row shows the learned units with a given \(\alpha\).</center>

Next we learn multiple blocks by minimizing \(L_{1} + \lambda_{3}L_{3}\), and use the learned models to perform 3D path integral and path planning. For simplicity, we remove \(L_{2}\). We use 8 blocks of units with block size 8 and the exponential kernel (\(\sigma = 0.3\)) for path planning. A batch of 200,000 examples is sampled online at every iteration for training.
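Before turning to the 3D results, the motion pool used for 3D path planning (Appendix E.2 below) can be sketched as follows; E.2 specifies \(m = 2\) radii and \(n = 90\) directions per angle, while the particular radius values here are our assumption, chosen to mirror the 2D experiments.

```python
import numpy as np

def motion_pool_3d(radii=(0.05, 0.025), n=90):
    """Build the m * n^2 candidate 3D motions of Appendix E.2: for each radius r,
    inclination theta in [0, pi] and azimuth alpha in [0, 2*pi] are divided into n values."""
    thetas = np.linspace(0.0, np.pi, n)
    alphas = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    pool = [(r * np.sin(t) * np.cos(a), r * np.sin(t) * np.sin(a), r * np.cos(t))
            for r in radii for t in thetas for a in alphas]
    return np.array(pool)   # shape (len(radii) * n * n, 3)
```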
## E.1 3D PATH INTEGRAL

Figure 15 shows some results of 3D path integral with duration \(\tilde{T} = 30\). The Gaussian kernel (\(\sigma = 0.08\)) is used as the adjacency measure.

## E.2 3D PATH PLANNING

Figure 16 shows some results of simple 3D path planning. The exponential kernel (\(\sigma = 0.3\)) is used as the adjacency measure. We design a set of allowable motions \(\Delta\): \(m\) lengths of radius \(r\) are used, and for each \(r\), we evenly divide the inclination \(\theta \in [0, \pi]\) and the azimuth \(\alpha \in [0, 2\pi]\) into \(n\) directions, which results in a motion pool with \(mn^{2}\) candidate motions \(\Delta = \{(r \sin (\theta) \cos (\alpha), r \sin (\theta) \sin (\alpha), r \cos (\theta))\}\). We use \(m = 2\), \(n = 90\) in this experiment.

## E.3 3D PATH PLANNING WITH OBSTACLES

Figure 17 shows some examples of 3D path planning with a cuboid obstacle. We use \(a = 38\) and \(b = 24\) in equation 16.

![](images/22_0.jpg)
<center>Figure 15: Examples of 3D path integral with duration \(\tilde{T} = 30\).</center>

![](images/22_1.jpg)
<center>Figure 16: Examples of simple 3D path planning, where the agent is capable of planning a direct trajectory.</center>

## E.4 LEARNING IN 1D

The system can also be applied to 1D. Inspired by Tsao et al. (2018), the learned system in 1D may serve as a timestamp of events. We assume the domain \(D = [0,1]\) and discretize it into 100 time points. A parametric \(M\) is learned in the residual form \(M = I + \bar{M}(\Delta t)\), where each element of \(\bar{M}(\Delta t)\) is parametrized as a function of \((\Delta t, \Delta t^2)\). 16 blocks of hidden units with block size 6 are used. Figure 18 visualizes the learned units over the 100 time points. The response wave of every unit shows strong periodicity at a specific scale or frequency. Within each block, the response waves of the units have similar patterns, with different phases.

![](images/23_0.jpg)
<center>Figure 17: Examples of 3D path planning with a cuboid obstacle.</center>

![](images/23_1.jpg)
<center>Figure 18: Response curves of learned units in 1D. Each block shows the units belonging to the same sub-vector in the motion model. The horizontal axis represents the time points, while the vertical axis indicates the value of the responses.</center>
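As a closing illustration, the decoding-encoding (DE) error correction of Appendix D.1 is a single projection onto the codebook. The sketch below assumes the same codebook matrix \(V\) as in the localization model; the name `de_step` is ours.

```python
import numpy as np

def de_step(v_noisy, V):
    """DE error correction (Appendix D.1): decode the noisy vector to the best
    lattice position via the heat map, then re-encode that position's clean code."""
    x_hat = int(np.argmax(V.T @ v_noisy))   # decode: arg max_x <v_t, v(x)>
    return V[:, x_hat]                      # re-encode: replace v_t by v(x_hat)
```

In the noisy path integral experiments of Table 1, such a step would be applied to \(v_{t}\) at every time step before the next motion matrix is applied.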
## ABSTRACT

This paper proposes a representational model for grid cells. In this model, the 2D self-position of the agent is represented by a high-dimensional vector, and the 2D self-motion or displacement of the agent is represented by a matrix that transforms the vector. Each component of the vector is a unit or a cell. The model consists of the following three sub-models. (1) Vector-matrix multiplication. The movement from the current position to the next position is modeled by matrix-vector multiplication, i.e., the vector of the next position is obtained by multiplying the matrix of the motion to the vector of the current position. (2) Magnified local isometry. The angle between two nearby vectors equals the Euclidean distance between the two corresponding positions multiplied by a magnifying factor. (3) Global adjacency kernel. The inner product between two vectors measures the adjacency between the two corresponding positions, which is defined by a kernel function of the Euclidean distance between the two positions. Our representational model has explicit algebra and geometry. It can learn hexagon patterns of grid cells, and it is capable of error correction, path integral, and path planning.

## 1 INTRODUCTION

Imagine you are walking in your living room in the dark at night without any visual cues. Purely based on your self-motion, you know where you are while you are walking, simply by integrating your self-motion. This is called path integral (Hafting et al. (2005); Fiete et al. (2008); McNaughton et al. (2006)). You can also plan your path to the light switch or to the door. This is called path planning (Fiete et al. (2008); Erdem & Hasselmo (2012); Bush et al. (2015)). You need to thank your grid cells for performing such navigation tasks.

![](images/0_0.jpg)
<center>Figure 1: Place cells and grid cells. (a) The rat is moving within a square region. (b) The activity of a neuron is recorded. (c) When the rat moves around (the curve is the trajectory), each place cell fires at a particular location, but each grid cell fires at multiple locations that form a hexagon grid. (d) The place cells and grid cells exist in the brains of both rats and humans. (Source of pictures: internet)</center>

Figure 1(a) shows Dr. May-Britt Moser, who, together with Dr. Edvard Moser, won the 2014 Nobel Prize in Physiology or Medicine for their discovery of the grid cells (Hafting et al. (2005); Fyhn et al. (2008); Yartsev et al. (2011); Killian et al. (2012); Jacobs et al. (2013); Doeller et al. (2010)) in 2005. Their thesis advisor, Dr. John O'Keefe, shared the prize for his discovery of the place cells (O'Keefe (1979)). Both the place cells and grid cells are used for navigation. The discoveries of these cells were made by recording the activities of the neurons of the rat as it moves within a square region. See Figure 1(b). Some neurons in the hippocampus are place cells. Each place cell fires when the rat moves to a particular location, and different place cells fire at different locations. The whole collection of place cells covers the whole square region. The discovery of grid cells was much more surprising and unexpected. The grid cells exist in the entorhinal cortex. Each grid cell fires at multiple locations, and these locations form a regular hexagon grid. See Figure 1(c). The grid cells have been discovered across mammalian species, including humans. See Figure 1(d).
In this paper, we propose a representational model to explain the hexagon patterns of the grid cells, and to explain how the grid cells perform path integral and path planning. We shall show that the grid cells are capable of error correction, which provides a justification for the grid cells. ![](images/1_0.jpg) <center>Figure 2: Grid cells form a high-dimensional vector representation of 2D self-position. Three submodels: (1) Local motion is modeled by vector-matrix multiplication. (2) Angle between two nearby vectors magnifies the Euclidean distance. (3) Inner product between any two vectors measures the adjacency which is a kernel function of the Euclidean distance. </center> Figure 2 illustrates our model. The 2D self- position of the agent is represented by a high- dimensional vector, and the 2D self- motion or displacement of the agent is represented by a matrix that acts on the vector. Each component of the vector is a unit or a cell. In Figure 2, \(x\) denotes the 2D self- position, \(v(x)\) is the high- dimensional vector representation of the 2D \(x\) . \(\Delta x\) is the self- motion or one- step displacement. \(M(\Delta x)\) is the matrix representation of \(\Delta x\) . The model consists of the following three sub- models. (1) Vector- matrix multiplication. The movement from the current position \(x\) to the next position \(x + \Delta x\) is modeled by matrix- vector multiplication, i.e., the vector of the next position \(v(x + \Delta x)\) is obtained by multiplying the matrix of the motion, \(M(\Delta x)\) , to the vector of the current position \(v(x)\) . (2) Magnified local isometry. The angle between two nearby vectors equals the Euclidean distance \(|\Delta x|\) between the two corresponding positions multiplied by a magnifying factor. (3) Global adjacency kernel. The inner product between two vectors \(\langle v(x), v(y) \rangle\) measures the adjacency between the two corresponding positions, which is defined by a kernel function \(f\) of the Euclidean distance \(|x - y|\) between the two positions \(x\) and \(y\) . One additional feature is that the whole vector \(v(x)\) is partitioned into multiple sub- vectors, and each sub- vector is driven by an associated sub- matrix. The whole system is like a multi- arm clock, with each arm rotating at a magnified speed and spanning a 2D sub- manifold on a high- dimensional sphere. Our experiments show that sub- models (1) and (2) are sufficient for the emergence of the hexagon grid patterns of the grid cells. Sub- model (2) makes the vector representation robust to noises or errors due to the magnification of the distance. Sub- model (3) enables unique decoding of the position from the vector, because \(f\) is unimodal and peaked at 0, and it may serve as the link between the grid cells and place cells. Together with sub- model (1), sub- model (3) enables path integral and path planning, because the adjacency \(\langle v(x), v(y) \rangle\) informs the Euclidean distance \(|x - y|\) . All the three sub- models can be implemented by one- layer neural networks. ## 2 CONTRIBUTIONS AND RELATED WORK The following are the contributions of our work. (1) We propose a representational model for grid cells, where the self- position is represented by a vector and the self- motion is represented by a matrix that acts on the vector. (2) We show that our model can learn hexagon grid patterns. (3) We show our model is capable of path integral, path planning, and error correction. 
<--- Page Split ---> Many mathematical and computational models (Burak & Fiete (2009); Sreenivasan & Fiete (2011); Blair et al. (2007); de Almeida et al. (2009)) have been proposed to explain the formation and function of grid cells. Compared to previous computational models of grid cells, our model only makes very generic assumptions about the algebra and geometry of the representational scheme, without assuming Fourier plane waves or clock arithmetic. Recently, deep learning models (Banino et al. (2018); Cueva & Wei (2018)) have been proposed to learn grid-like units for navigation. Our work was inspired by them. Compared with these models, our model has explicit algebra and geometry. In terms of algebra, our model has an explicit matrix representation of self-motion, and the change of self-position is modeled by vector-matrix multiplication. In terms of geometry, our model assumes that the vector rotates while the agent moves, and it assumes magnified local isometry and a global adjacency kernel based on the angles between the vectors. Expressing the adjacency kernel as the inner product between the vectors is related to the kernel trick (Cortes & Vapnik (1995)), random Fourier basis (Ng et al. (2002)), and spectral clustering (Ng et al. (2002)).

## 3 REPRESENTATIONAL MODEL OF GRID CELLS

Consider an agent navigating within a domain \(D = [0,1]\times [0,1]\) (actually the shape does not matter, and it can be \(\mathbb{R}^{2}\)). We can discretize \(D\) into an \(N\times N\) lattice; \(N = 40\) in our experiments. Let \(x = (x_{1},x_{2})\in D\) be the self-position of the agent. \(x\) is 2D. Suppose the agent wants to represent its self-position by a \(d\)-dimensional hidden vector \(v(x)\). We introduce the following three sub-models.

### 3.1 SUB-MODEL 1 ABOUT MOTION ALGEBRA: VECTOR-MATRIX MULTIPLICATION

Suppose at a position \(x\), the self-motion or one-step displacement is \(\Delta x\), so that the agent moves to \(x + \Delta x\) after one step. We assume that

\[v(x + \Delta x) = M(\Delta x)v(x), \quad (1)\]

where \(M(\Delta x)\) is a \(d\times d\) matrix that depends on \(\Delta x\). While \(v(x)\) is the vector representation of the self-position \(x\), \(M(\Delta x)\) is the matrix representation of the self-motion \(\Delta x\). We can illustrate the motion model by the following diagram:

\[\begin{array}{rccc} \mathrm{Motion:} & x_{t} & \xrightarrow{+\Delta x} & x_{t+1}\\ & \downarrow & & \downarrow\\ & v(x_{t}) & \xrightarrow{M(\Delta x)\times} & v(x_{t+1}) \end{array} \quad (2)\]

See Figure 2(1). Both \(v(x)\) and \(M(\Delta x)\) are to be learned. We can discretize \(\Delta x\) and learn a motion matrix \(M\) for each \(\Delta x\). We can also learn a parametric model for \(M\). To this end, we can further parametrize \(M = I + \bar{M}(\Delta x)\) such that each element of \(\bar{M}(\Delta x)\) is a quadratic (or polynomial) function of \(\Delta x = (\Delta x_{1},\Delta x_{2})\):

\[\bar{M}_{ij}(\Delta x) = \beta_{ij}^{(1)}\Delta x_{1} + \beta_{ij}^{(2)}\Delta x_{2} + \beta_{ij}^{(11)}\Delta x_{1}^{2} + \beta_{ij}^{(22)}\Delta x_{2}^{2} + \beta_{ij}^{(12)}\Delta x_{1}\Delta x_{2}, \quad (3)\]

where the coefficients \(\beta\) are to be learned.
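To make the parametrization of equation 3 concrete, here is a minimal NumPy sketch of how learned coefficients could be assembled into a motion matrix; the array layout of `beta` is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

def motion_matrix(beta, dx):
    """Quadratic parametrization M = I + M_bar(dx) of equation 3.

    beta: array of shape (5, d, d) holding the coefficient matrices
          (beta^(1), beta^(2), beta^(11), beta^(22), beta^(12)).
    dx:   length-2 displacement (dx1, dx2).
    """
    d = beta.shape[1]
    dx1, dx2 = dx
    # Monomial basis of the second-order expansion in (dx1, dx2).
    basis = np.array([dx1, dx2, dx1**2, dx2**2, dx1 * dx2])
    M_bar = np.tensordot(basis, beta, axes=1)  # sum_m basis[m] * beta[m]
    return np.eye(d) + M_bar
```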
The above may be considered a second- order Taylor expansion which is expected to be accurate for small \(\Delta x\in \Delta\) , where \(\Delta\) is the allowed collection of one- step displacements, e.g., \(\pm 3\) grid points in each direction on the \(40\times 40\) grid. The motion model can be considered a linear recurrent neural network (RNN). However, if we are to interpret \(M\) as the weight matrix, then the weight matrix is dynamic because it depends on the motion \(\Delta x\) . One may implement it by discretizing \(\Delta x\) , so that we have a finite set of \(\Delta x\) , and thus a finite set of \(M(\Delta x)\) . Then at each time step, the RNN switches between the finite set of motion matrices. This is like the gearing operation of a multi- speed bicycle. ### 3.2 DISENTANGLED BLOCKS OR MODULES For the sake of estimation accuracy and computational efficiency, we further assume that \(M(\Delta x)\) is block diagonal, i.e., we can divide \(v\) into \(K\) blocks of sub- vectors, \(v = (v^{(k)},k = 1,\dots,K)\) , and <--- Page Split ---> \(v^{(k)}(x + \Delta x) = M^{(k)}(\Delta x)v^{(k)}(x)\) . That is, the vector \(v\) consists of sub- vectors, each of which rotates in its own subspace, so that the dynamics of the sub- vectors are disentangled. The assumption of disentangled blocks in our model is related to the modular organization of grid cells in neuroscience (Stensola et al. (2012)), where a module refers to a region of grid cells where all cells share similar grid scales and orientations. ### 3.3 SUB-MODEL 2 ABOUT LOCAL GEOMETRY: MAGNIFIED LOCAL ISOMETRY The above motion algebra alone is not sufficient for learning the vector matrix representations, because a trivial solution is that all the \(v(x)\) are the same, and the matrix \(M(\Delta x)\) is always identity. We need to properly displace the vectors \(v(x)\) . So we shall model \(\langle v(x),v(y)\rangle\) both locally and globally. Let \(d\) be the dimensionality of \(v^{(k)}\) , i.e., the number of grid cells within the \(k\) - th block. For the local geometry, we assume that for each block, \[\langle v^{(k)}(x),v^{(k)}(x + \Delta x)\rangle = d(1 - \alpha_{k}|\Delta x|^{2}), \quad (4)\] for all \(x\) and \(\Delta x\) such that \(\alpha_{k}|\Delta x|^{2}\leq c\) , i.e., \(\Delta x\in \Delta (\alpha_{k}) = \{\Delta x:\alpha_{k}|\Delta x|^{2}\leq c\}\) . In our experiments, we take \(c = 1.5\) . \(\alpha_{k}\) can be either designed or learned. Based on sub- model (4), \(\| v^{(k)}(x)\|^{2} = d\) for every \(x\) . The inner product on the left hand side is \(d\cos (\Delta \theta)\) where \(\Delta \theta\) is the angle between \(v^{(k)}(x)\) and \(v^{(k)}(x + \Delta x)\) . \(1 - \alpha_{k}|\Delta x|^{2}\) on the right hand side may be considered a second order Taylor expansion of a function \(f(r)\) such that \(f(0) = 1\) , \(f^{\prime}(0) = 0\) , i.e., 0 is the maximum, and \(f^{\prime \prime}(0) = - 2\alpha_{k}\) . It is also an approximation to \(\cos (\sqrt{2\alpha_{k}} |\Delta x|)\) . Let \(\omega_{k} = \sqrt{2\alpha_{k}}\) , we have \[\mathrm{Magnified~local~isometry}:\Delta \theta = \omega_{k}|\Delta x|, \quad (5)\] i.e., the angle between \(v(x)\) and \(v(x + \Delta x)\) magnifies the distance \(|\Delta x|\) by a factor of \(\omega_{k}\) uniformly for all \(x\) . See Figure 2(2). The factor \(\omega_{k}\) defines the metric of block \(k\) . 
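The block-diagonal motion of Section 3.2 and the angle condition of equation 5 are easy to state in code. The following sketch is illustrative only: `v_blocks` and `M_blocks` are placeholders for learned sub-vectors \(v^{(k)}(x)\) and sub-matrices \(M^{(k)}(\Delta x)\).

```python
import numpy as np

def apply_motion_blockwise(v_blocks, M_blocks):
    """Sub-model (1) under the block-diagonal assumption of Section 3.2:
    each sub-vector v^(k) is rotated by its own sub-matrix M^(k)."""
    return [M @ v for M, v in zip(M_blocks, v_blocks)]

def angle(v_a, v_b):
    """Angle between two codes; by equation 5 it should equal
    omega_k * |dx| when v_a = v^(k)(x) and v_b = v^(k)(x + dx)."""
    c = np.dot(v_a, v_b) / (np.linalg.norm(v_a) * np.linalg.norm(v_b))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))
```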
### 3.4 ROTATION AND PROJECTION Since \(\| v^{(k)}(x)\|\) is a constant for all \(x\) , \(M^{(k)}(\Delta x)\) is an orthogonal matrix, and the self- motion is represented by a rotation in the \(d\) - dimensional space. \(\omega_{k}|\Delta x|\) is the angular speed of rotation of the sub- vector \(k\) . \((v^{(k)}(x),\forall x)\) forms a 2D sub- manifold on the sphere in the \(d\) - dimensional space. \(v^{(k)}\) is like an arm of a clock except that the clock is not a 1D circle, but a 2D sub- manifold. This 2D sub- manifold becomes a local codebook for the 2D positions within a local neighborhood. For a vector \(v^{(k)}\) , we can decode its position by projecting it onto the 2D sub- manifold, to get \(\hat{x} = \arg \max_{x}\langle v^{(k)},v^{(k)}(x)\rangle\) where the maximization is within the local neighborhood. ### 3.5 ERROR CORRECTION The neurons are intrinsically noisy and error prone. For \(\omega_{k} = \sqrt{2\alpha_{k}}\gg 1\) , the magnification offers error correction because \(v^{(k)}(x)\) and \(v^{(k)}(x + \Delta x)\) are far apart, which is resistant to noises or corruptions. That is, projection to the 2D sub- manifold codebook removes the noises. Specifically, suppose we have a vector \(u = v^{(k)}(x) + \epsilon\) , where \(\epsilon \sim \mathrm{N}(0,s^{2}I_{d})\) , and \(I_{d}\) is the \(d\) - dimensional identity matrix. We can decode \(x\) from \(u\) based on the codebook by maximizing \(\langle u,v^{(k)}(y)\rangle = \langle v^{(k)}(x),v^{(k)}(y)\rangle +\langle \epsilon ,v^{(k)}(y)\rangle = d(1 - \alpha_{k}|y - x|^{2}) + \sqrt{d} sZ\) over \(y\) that is within a local neighborhood of \(x\) , where \(Z\sim \mathrm{N}(0,1)\) . The optimal \(y\) will be close to \(x\) because if \(y\) deviates from \(x\) by \(\Delta x\) , then the first term will drop by \(d\alpha_{k}|\Delta x|^{2}\) , which cannot be made up by the second term \(\sqrt{d} sZ\) due to noises, unless the noise level \(s\) is extremely large. From the above analysis, we can also see that the larger \(d\) is, the more resistant the system is to noise, because the first term is of order \(d\) and the second term is of order \(\sqrt{d}\) . In addition to additive noises, the system is also resistant to multiplicative noises including dropout errors, i.e., multiplicative Bernoulli 0/1 errors. The dropout errors may occur due to noises, aging, or diseases. They may also be related to the asynchronous nature of the neuron activities in computing. <--- Page Split ---> ### 3.6 HEXAGON GRID PATTERNS While the magnified local isometry enables error correction, it also causes global periodicity. Because the angle between nearby \(v^{(k)}(x)\) and \(v^{(k)}(x + \Delta x)\) is magnified, when we vary \(x\) , the vector \(v^{(k)}(x)\) will rotate at a magnified speed \(\omega_{k}|\Delta x|\) , so that it quickly rotates back to itself, like an arm of a clock. Thus each unit of \(v^{(k)}(x)\) is periodic with \(\omega_{k}\) determining the periodicity. Our experiments show that sub- models (1) and (2) are sufficient for the emergence of hexagon grid patterns. In fact, we have the following analytical solution (see Appendix for a proof): Theorem 1. Let \(e(x) = (e^{i\langle a_{j},x\rangle},j = 1,2,3)^{\top}\) , and \(a_{1}\) , \(a_{2}\) , \(a_{3}\) are three 2D vectors so that the angle between \(a_{i}\) and \(a_{j}\) is \(2\pi /3\) for \(\forall i\neq j\) and \(|a_{j}| = 2\sqrt{\alpha}\) for \(\forall j\) . Let \(C\) be a random \(3\times 3\) complex matrix such that \(C^{*}C = I\) . 
Then \(v(x) = C e(x)\) , \(M(\Delta x) = C\mathrm{diag}(e(\Delta x))C^{*}\) satisfy equation 1 and equation 4 approximately for all \(x\) and small \(\Delta x\) . \(v(x)\) amounts to a 6- dimensional real vector. Since the angle between \(a_{i}\) and \(a_{j}\) is \(2\pi /3\) for \(\forall i\neq j\) , patterns of \(v(x)\) over \(x\) have hexagon periodicity. Moreover, the scale of the patterns is controlled by the length of \(a_{j}\) , i.e., the scaling parameter \(\alpha\) . We want to emphasize that sub- models (1) and (2) are about local \(\Delta x\) , where we do not make any assumptions about global patterns, such as Fourier basis. In contrast, the solution in the above theorem is global. That is, our model assumes much less than the solution in the theorem. Our experiments show that the hexagon patterns will emerge as long as the number of units is greater than or equal to 6. ### 3.7 SUB-MODEL 3 ABOUT GLOBAL GEOMETRY: ADJACENCY KERNEL Because of the periodicity, each block \((v^{(k)}(x),\forall x)\) does not form a global codebook of 2D positions, i.e., there can be \(x\neq y\) , but \(v^{(k)}(x) = v^{(k)}(y)\) , i.e., \(v^{(k)}\) does not encode \(x\) uniquely. We can combine multiple blocks to resolve the global ambiguity. Specifically, let \(v(x) = (v^{(k)}(x),k =\) \(1,\ldots ,K)\) be the whole vector, we assume the following global adjacency sub- model for the whole vector: \[\langle v(x),v(y)\rangle = \sum_{k = 1}^{K}\langle v^{(k)}(x),v^{(k)}(y)\rangle = (Kd)f(|x - y|), \quad (6)\] where \((v(x),M(\Delta x),\alpha_{k},\forall x,\Delta x,k)\) are to be learned. Recall \(d\) is the number of grid cells in each block, and \(K\) is the number of blocks. \(f(r)\) is the adjacency kernel that decreases monotonically as the Euclidean distance \(r = |x - y|\) increases. One example of \(f\) is the Gaussian kernel \(f(r) = \exp \left(- r^{2} / 2\sigma^{2}\right)\) . Another example is the exponential kernel \(f(r) = \exp \left(- r / \sigma\right)\) . As a matter of normalization, we assume \(f(0) = 1\) , which is the maximum of \(f(r)\) . Since \(f(0) = 1\) , \(\| v(x)\|^{2} = Kd\) for any \(x\) , and \(\langle v(x),v(y)\rangle = (Kd)\cos \theta\) , where \(\theta\) is the angle between \(v(x)\) and \(v(y)\) , and we have \[\mathrm{Global~adjacency}:\cos \theta = f(|x - y|). \quad (7)\] The angle between any two vectors is always less than \(\pi /2\) . See Figure 2(3). By fitting the multiple sub- vectors together, we still retain the error correction capacity due to magnified local isometry, meanwhile we eliminate the ambiguity by letting \(\langle v^{(k)}(x),v^{(k)}(y)\rangle\) for different \(k\) cancel each other out by destructive interference as \(y\) moves away from \(x\) , so that we obtain unique decoding of positions. Let \(C = \{v(x),x\in D\}\) be the codebook sub- manifold, error correction of a vector \(u\) is obtained by projection onto \(C\) : \(\arg \max_{v\in C}\langle u,v\rangle\) . The whole vector \(v\) is like a \(K\) - arm clock, with each \(v^{(k)}\) being an arm rotating at a speed \(\omega_{k}|\Delta x| = \sqrt{2\alpha_{k}} |\Delta x|\) . ### 3.8 LOCALIZATION AND HEAT MAP \((v(x),\forall x)\) forms a global codebook for \(x\) . It is a 2D sub- manifold on the sphere in the \((Kd)\) - dimensional space. For a vector \(v\) , we can decode its position by its projection on the codebook <--- Page Split ---> manifold. Since \(f(r)\) is monotonically decreasing, \(h(x) = \langle v,v(x)\rangle\) gives us the heat map to decode the position of \(v\) uniquely. 
Let the decoded position be \(\hat{x}\); then \(\hat{x} = \arg \max_{x}\langle v,v(x)\rangle\). We can obtain the one-hot representation \(\delta_{\hat{x}}\) of \(\hat{x}\) by non-maximum suppression on the heat map \(h(x)\).

Let \(V = (v(x),\forall x)\) be the \((Kd)\times N^{2}\) matrix (recall the domain \(D = [0,1]^{2}\) is discretized into an \(N\times N\) lattice), where each column is a \(v(x)\). We can write the heat map \(h(x) = \langle v,v(x)\rangle\) as an \(N^{2}\)-dimensional vector \(h = V^{\top}v\), which serves to decode the position \(x\) encoded by \(v\). Conversely, for a one-hot representation of a position \(x\), i.e., \(\delta_{x}\), which is a one-hot \(N^{2}\)-dimensional vector, we can encode it by \(v = V\delta_{x}\). Both the encoder and decoder can be implemented by a linear neural network with connection weights \(V\) and \(V^{\top}\) respectively, as illustrated by the following diagram:

\[\begin{array}{r l r l}{\mathrm{Localization:}} & {v} & {\xrightarrow{V^{\top}\times}} & {h} & {\mathrm{(heat~map~and~decoding~to~}\delta_{x})}\\ & {\delta_{x}} & {\xrightarrow{V\times}} & {v(x)} & {\mathrm{(encoding)}} \end{array} \quad (8)\]

Note that in decoding \(v\to h(x)\to \delta_{x}\) and encoding \(\delta_{x}\to v\), we do not represent or operate on the 2D coordinate \(x\) explicitly, i.e., \(x\) itself is never explicitly represented, although we may use the notation \(x\) in the description of the experiments.

### 3.9 PATH INTEGRAL

Path integral (also referred to as dead reckoning) is the task of inferring the self-position based on self-motion (e.g., imagine walking in a dark room). Specifically, the input to path integral is a previously determined initial position \(x_{0}\) and a motion sequence \(\{\Delta x_{1},\dots,\Delta x_{T}\}\), and the output is the prediction of one's current position \(x_{T}\). We first encode the initial position \(x_{0}\) as \(v(x_{0})\). Then, by the motion model, the hidden vector \(v(x_{T})\) at time \(T\) can be predicted as:

\[v(x_{T}) = \prod_{t = T}^{1}M(\Delta x_{t})v(x_{0}). \quad (9)\]

We can then decode \(x_{T}\) from \(v(x_{T})\).

### 3.10 PATH PLANNING

Our representation system can plan a direct path from the starting position \(x_{0}\) to the target position \(y\) by steepest ascent on the inner product \(\langle v(x),v(y)\rangle\), i.e., letting \(v_{0} = v(x_{0})\), the algorithm iterates

\[\begin{array}{l}{\Delta x_{t} = \arg \max_{\Delta x\in \Delta}\langle v(y),M(\Delta x)v_{t - 1}\rangle ,}\\ {v_{t} = M(\Delta x_{t})v_{t - 1},} \end{array} \quad (10)\]

where \(\Delta\) is the set of allowable displacements \(\Delta x\). When a rat does path planning, even if it is not moving, its grid cells are expected to be active. This may be explained by the above algorithm. In path planning, the rat can also fantasize bigger step sizes beyond its physical capacity, by letting \(\Delta\) include physically impossible large steps.

In general, our representation scheme \((v(x),M(\Delta x),f(|x - y|))\) mirrors \((x,\Delta x,|x - y|)\). Thus the learned representation is capable of implementing existing path planning algorithms in robotics (Siegwart et al. (2011)) even though our system does not have explicit coordinates in 2D (i.e., the 2D coordinates \(x\) are never explicitly represented by two neurons).
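A minimal sketch of equations 8-10, assuming a learned codebook matrix `V` (one column \(v(x)\) per lattice site) and a lookup `M_of` from a displacement to its learned motion matrix; both names are placeholders, not the authors' code:

```python
import numpy as np

def path_integral(v0, motions, M_of):
    """Equation 9: propagate the code of the initial position through
    the motion matrices, one step at a time."""
    v = v0
    for dx in motions:
        v = M_of(dx) @ v
    return v

def decode(v, V):
    """Equation 8: heat map h = V^T v, decoded by taking its argmax
    (the lattice site with the largest inner product)."""
    return int(np.argmax(V.T @ v))

def plan_step(v_t, v_goal, candidates, M_of):
    """Equation 10: greedy steepest ascent on the adjacency <v(y), M(dx) v_t>."""
    scores = [float(v_goal @ (M_of(dx) @ v_t)) for dx in candidates]
    dx = candidates[int(np.argmax(scores))]
    return dx, M_of(dx) @ v_t
```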
### 3.11 PLACE CELLS AND SCALE

For each \(x\), we may interpret \(\langle v(x),v(y)\rangle /(Kd) = f(|x - y|)\) as a place cell whose response is \(f(|x - y|)\) if the agent is at \(y\); or, at least, we may consider \(f(|x - y|)\) as an internal input to the place cell based on self-motion, in addition to external visual cues.

For the Gaussian kernel \(f(r) = \exp (- r^{2} / 2\sigma^{2})\), the choice of \(\sigma\) determines the scale. In path integral, for accurate decoding of the position \(x\) from the vector \(v\) in the presence of noise, we want \(\sigma\) to be small so that \(f(r)\) drops to zero quickly. However, in path planning, for a vector \(v\), we also want to know the Euclidean distance between its position \(x\) and a target position \(y\), which <--- Page Split ---> may be far away from \(x\). The distance is \(|x - y| = f^{-1}(\langle v,v(y)\rangle /(Kd))\). For accurate estimation of long distances in the presence of noise, we need \(\sigma\) to be large, so that the slope of \(f^{-1}\) is not too big. Perhaps we need multiple \(f(r)\) with different \(\sigma\), and for each \(f(r)\) we have \((Kd)f(r) = \sum_{k = 1}^{K}\gamma_{k}\langle v^{(k)}(x),v^{(k)}(y)\rangle\), where different \(f(r)\) have different coefficients \((\gamma_{k})\) while sharing the same \((v^{(k)}(x))\). We shall study this issue in future work.

### 3.12 GROUP REPRESENTATION IN MOTOR CORTEX

Where does the self-motion \(\Delta x\) come from? It comes from the movements of the head and legs, i.e., each step of navigation involves a whole process of path integral and path planning of the movements of the head and legs (as well as arms). In general, we can use the same system we have developed for the movements of the head, legs, arms, fingers, etc. Their movements form groups of actions. In navigation, the movements belong to the 2D Euclidean group. In body movements, the movements belong to various Lie groups. Our method can be used to learn the representational systems of these groups, and such representations may exist in the motor cortex. We leave this problem to future investigation.

## 4 LEARNING REPRESENTATION

The square domain \(D\) is discretized into a \(40\times 40\) lattice, and the agent is only allowed to move on the lattice. We can learn \((v(x),\forall x)\) and \((M(\Delta x),\forall \Delta x)\) (or the \(\beta\) coefficients that parametrize \(M\)) by minimizing the following loss functions.

For sub-model (1) on vector-matrix multiplication, the loss function is

\[L_{1} = \mathbb{E}_{x,\Delta x}\left[\| v(x + \Delta x) - M(\Delta x)v(x)\|^{2}\right], \quad (12)\]

where \(x\) is sampled from the uniform distribution on \(D = [0,1]^{2}\), and \(\Delta x\) is sampled from the uniform distribution within a certain range \(\Delta_{1}\) (3 grid points in each direction in our experiments). The above motion loss is a single-step loss. It can be generalized to the multi-step loss

\[L_{1,T} = \mathbb{E}_{x,\Delta x_{1},\ldots,\Delta x_{T}}\left[\| v(x + \Delta x_{1} + \ldots +\Delta x_{T}) - M(\Delta x_{T})\cdots M(\Delta x_{1})v(x)\|^{2}\right], \quad (13)\]

where \((\Delta x_{t},t = 1,\dots,T)\) is a sequence of \(T\) steps of displacements, i.e., a simulated trajectory.
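As a concrete reading of equations 12 and 13, the loss for one sampled trajectory could look like the following minimal sketch; `v_of` and `M_of` are placeholder callables for the learned maps \(x \mapsto v(x)\) and \(\Delta x \mapsto M(\Delta x)\), and in training this term would be averaged over a large sampled batch:

```python
import numpy as np

def motion_loss(v_of, M_of, x, dxs):
    """Multi-step motion loss (equation 13) for one sampled trajectory;
    with a single displacement it reduces to the single-step loss of
    equation 12."""
    v = v_of(x)
    for dx in dxs:                      # apply M(dx_1), ..., M(dx_T) in order
        v = M_of(dx) @ v
    target = v_of(x + np.sum(dxs, axis=0))
    return float(np.sum((target - v) ** 2))
```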
For sub-model (2) on magnified local isometry, the loss function is

\[L_{2,k} = \mathbb{E}_{x,\Delta x}\left[\left(\langle v^{(k)}(x),v^{(k)}(x + \Delta x)\rangle - d(1 - \alpha_{k}|\Delta x|^{2})\right)^{2}\right], \quad (14)\]

where for fixed \(\alpha_{k}\), \(\Delta x\) is sampled uniformly within the range \(\Delta_{2}(\alpha_{k}) = \{\Delta x:\alpha_{k}|\Delta x|^{2}\leq c\}\) (\(c = 1.5\) in our experiments). We can define \(L_{2} = \sum_{k = 1}^{K}L_{2,k}\).

For sub-model (3) on the global adjacency kernel, the loss function is

\[L_{3} = \mathbb{E}_{x,y}\left[\left((Kd)f(|x - y|) - \langle v(x),v(y)\rangle\right)^{2}\right], \quad (15)\]

where both \(x\) and \(y\) are sampled uniformly from \(D\).

Let \(v(x) = (v_{i}(x),i = 1,\dots,Kd)\); we also impose a regularization loss \(L_{0} = \sum_{i = 1}^{Kd}(\mathbb{E}_{x}[v_{i}(x)^{2}] - 1)^{2}\) to enforce uniform energy among the grid cells. It also helps to break the symmetry caused by the fact that the loss function is invariant under the transformation \(v(x)\to Qv(x),\forall x\), where \(Q\) is an arbitrary orthogonal matrix. This loss is not crucial though, so we will make it implicit for the rest of the paper. Fixing the magnitude of \(\mathbb{E}_{x}[v_{i}(x)^{2}]\) within a certain range is biologically plausible, because \(v_{i}(x)\) is a single cell.

For mathematical and computational convenience, we can also normalize \(v(x)\) so that \(\| v(x)\|^{2} = 1\), \(\langle v(x),v(y)\rangle = f(|x - y|)\), \(\langle v^{(k)}(x),v^{(k)}(x + \Delta x)\rangle = (1 - \alpha_{k}|\Delta x|^{2}) / K\), and \(\mathbb{E}_{x}[v_{i}(x)^{2}] = 1 / (Kd)\). When learning a single block, we can normalize \(v^{(k)}(x)\) so that \(\| v^{(k)}(x)\|^{2} = 1\), \(\langle v^{(k)}(x),v^{(k)}(x + \Delta x)\rangle = 1 - \alpha_{k}|\Delta x|^{2}\), and \(\mathbb{E}_{x}[v_{i}(x)^{2}] = 1 / d\).

The total loss function is a linear combination of the above losses, where the weights for combining the losses are chosen so that the weighted losses are of the same order of magnitude. <--- Page Split ---> The loss function is minimized by the Adam optimizer (Kingma & Ba (2014)) (lr = 0.03) for 6,000 iterations. A batch of 30,000 examples, i.e., \((x,\Delta x_{t},t = 1,\dots,T)\) for \(L_{1}\), \((x,\Delta x)\) for \(L_{2}\), and \((x,y)\) for \(L_{3}\), is sampled at each learning iteration as the input to the loss function. In the later stage of learning (\(\geq 4,000\) iterations), \(\| v(x)\| = 1\) is enforced by projected gradient descent, i.e., normalizing each \(v(x)\) after the gradient descent step.

## 5 EXPERIMENTS

### 5.1 LEARNING SINGLE BLOCKS: HEXAGON PATTERNS AND METRICS

![](images/7_0.jpg) <center>Figure 3: Learned units of a single block with fixed \(\alpha\). (a) Learned single block with 6 units. Every row shows the learned units with a given \(\alpha\). (b) Learned single block with 100 units and \(\alpha = 72\). </center>

We first learn a single block with fixed \(\alpha_{k}\) by minimizing \(L_{1} + \lambda_{2}L_{2,k}\) (we shall drop the subscript \(k\) in this subsection for simplicity). Figure 3 shows the learned units over the \(40\times 40\) lattice of \(x\). Figure 3(a) shows the learned results with 6 units and different values of \(\alpha\). The scale or metric of the learned pattern is controlled by \(\alpha\). The units within a block have patterns with similar scale and arrangement, yet different phases.
Figure 3(b) shows the learned results with 100 units and \(\alpha = 72\), indicating that the grid-like patterns are stable and easy to learn even when the number of units is large.

### 5.2 LEARNING MULTIPLE HEXAGON BLOCKS AND METRICS

![](images/7_1.jpg) <center>Figure 4: (a) Response maps of learned units of the vector representation and learned scaling parameters \(\alpha_k\). Block size equals 6 and each row shows the units belonging to the same block. (b) Illustration of block-wise activities of the units (where the activities are rectified to be positive). </center>

<--- Page Split ---> We learn multiple blocks by minimizing \(L_{1} + \lambda_{2}L_{2} + \lambda_{3}L_{3}\). Instead of manually assigning \(\alpha_{k}\), we learn \(\alpha_{k}\) by gradient descent, simultaneously with \(v\) and \(M\). In Figure 4(a), we show the learned units \(v(x)\) over the \(40 \times 40\) lattice of \(x\) and the learned metrics \(\alpha_{k}\). A Gaussian kernel with \(\sigma = 0.08\) is used for the global adjacency measure \(f(|x - y|)\). The block size is set to 6 and each row shows the learned units belonging to the same block. The scales of the firing fields are controlled by the learned \(\alpha_{k}\).

Figure 4(b) illustrates the combination of multiple blocks. For the localization model, given a vector \(v\), the heat map of a single block \(\langle v^{(k)}, v^{(k)}(x) \rangle\) has periodic firing fields and cannot determine a location uniquely. However, the ambiguity disappears by combining the heat maps of multiple blocks, which have firing fields of multiple scales and phases that add up to a Gaussian kernel \(\langle v, v(x) \rangle\). The Gaussian kernel informs the place cell, by which the location is determined uniquely. For a motion \(\Delta x\), every block rotates in its own subspace with motion matrix \(M^{(k)}(\Delta x)\), resulting in a phase shift in the heat map of each single block.

In the subsequent experiments, we shall learn the grid cells by minimizing \(L_{1} + \lambda_{3}L_{3}\) for simplicity.

### 5.3 PATH INTEGRAL

![](images/8_0.jpg) <center>Figure 5: (a) Path integral prediction. The black line depicts the real path while the red dotted line is the path predicted by the learned model. (b) Mean square error over time steps. The error is averaged over 1,000 episodes. The curves correspond to different numbers of steps used in the multi-step motion loss. (c) Mean square error of models with different block sizes and different kernel types. Error is measured in grid units. </center>

Figure 5(a) shows an example of a path integral result (time duration \(\tilde{T} = 40\)), where we use the single-step motion loss \(L_{1,T = 1}\) in learning. A Gaussian kernel with \(\sigma = 0.08\) is used as the adjacency measure in \(L_{3}\). We find that the single-step loss is sufficient for performing path integral. The mean square error remains small (\(\sim 1.2\) grids) even after 400 steps of motion (Figure 5(b)). The error is averaged over 1,000 episodes. The motion loss can be generalized to the multi-step \(L_{1,T}\), as shown by equation 13. In Figure 5(b), we show that the multi-step loss can improve the performance slightly. In Figure 5(c) we compare the learned models with a fixed number of units (96) but different block sizes. We also compare the performance of models using a Gaussian kernel (\(\sigma = 0.08\)) and an exponential kernel (\(\sigma = 0.3\)) as the adjacency measure in the localization model.
The result shows that models with a Gaussian kernel and block size \(\geq 3\), and with an exponential kernel and block size \(\geq 4\), have performance comparable to the model learned without the block-diagonal assumption (block size \(= 96\)).

### 5.4 PATH PLANNING

For path planning, we assume continuous \(x\) and \(\Delta x\). First, we design the set of allowable motions \(\Delta\). For a small length \(r\), we evenly divide \([0, 2\pi]\) into \(n\) directions \(\{\theta_{i}, i = 1, \ldots, n\}\), resulting in \(n\) candidate motions \(\Delta = \{\Delta x_{i} = (r \cos (\theta_{i}), r \sin (\theta_{i})), i = 1, \ldots, n\}\). These \(n\) small motions serve as a motion basis. Larger motions can be further added to \(\Delta\) by estimating the motion matrix \(M(k \Delta x_{i}) = M^{k}(\Delta x_{i})\). The starting position and destination can also take any continuous values, where the encoding to the latent vector is approximated by bilinear interpolation of the nearest neighbors on the lattice. <--- Page Split ---> In the experiments, we choose \(r = 0.05\) and \(n = 100\), and add another set of motions with length 0.025 to enable accurate planning. The system is learned with an exponential kernel (\(\sigma = 0.3\)) as the global adjacency to encourage long-distance connections. Figure 6(a) shows planning examples with six settings of motion ranges \(\Delta\). Including larger motions accelerates the planning process so that it finishes in fewer steps. We define an episode to be a success if the distance between the end position and the destination is less than 0.025. We achieve a success rate of \(>99\%\) over 1,000 episodes for all six settings.

![](images/9_0.jpg) <center>Figure 6: (a) Planning examples with different motion ranges. The red star represents the destination \(y\) and the green dots represent the planned positions \(\{x_{0} + \sum_{i = 1}^{t}\Delta x_{i}\}\). (b) Planning examples with a dot obstacle. The left figure shows the effect of changing the scaling parameter \(a\), while the right figure shows the effect of changing the annealing parameter \(b\). (c) Planning examples with obstacles mimicking walls, large objects and simple mazes. </center>

The learned system can also perform planning with obstacles, where the global adjacency between the agent's current position and an obstacle serves as a repulsive force. Specifically, suppose \(z\) is an obstacle to avoid; the agent can choose the motion \(\Delta x_{t}\) at time \(t\) by

\[\Delta x_{t} = \arg \max_{\Delta x\in \Delta}\left[\langle v(y),M(\Delta x)v_{t}\rangle - a\langle v(z),M(\Delta x)v_{t}\rangle^{b}\right], \quad (16)\]

where \(a\) and \(b\) are the scaling and annealing parameters. Figure 6(b) shows the planning results with a dot obstacle laid on the direct path between the starting position and the destination, with tuning of \(a\) and \(b\). We choose \(a = 0.5\) and \(b = 6\) in subsequent experiments. Now suppose we have more complicated obstacles \(\{z_{i}\}_{i = 1}^{m}\). They can be included by summing over the kernels of every obstacle, \(\{a\langle v(z_{i}),M(\Delta x)v_{t}\rangle^{b}\}_{i = 1}^{m}\), and choosing \(\Delta x_{t}\) at time \(t\) by \(\Delta x_{t} = \arg \max_{\Delta x\in \Delta}\left[\langle v(y),M(\Delta x)v_{t}\rangle - \sum_{i = 1}^{m}a\langle v(z_{i}),M(\Delta x)v_{t}\rangle^{b}\right]\). Figure 6(c) shows some examples, where the obstacles mimic walls, large objects and simple mazes. The above method is related to the potential field method in robotics (Siegwart et al. (2011)).
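A minimal sketch of one obstacle-aware planning step (equation 16 generalized to several obstacles); `M_of` and the pre-computed codes `v_goal`, `v_obstacles` are placeholders for learned quantities, and the defaults for `a` and `b` follow the values chosen above:

```python
import numpy as np

def plan_step_with_obstacles(v_t, v_goal, v_obstacles, candidates, M_of,
                             a=0.5, b=6):
    """One step of equation 16: the goal adjacency attracts, while the
    summed, annealed obstacle adjacencies repel."""
    def score(dx):
        v_next = M_of(dx) @ v_t
        attract = float(v_goal @ v_next)
        repel = sum(a * float(v_z @ v_next) ** b for v_z in v_obstacles)
        return attract - repel
    dx = max(candidates, key=score)
    return dx, M_of(dx) @ v_t
```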
## 6 DISCUSSION: ROTATIONIST-CONNECTIONIST MODEL? In terms of general modeling methodology, a typical recurrent network is of the form \(v_{t} = \tanh (W(v_{t - 1},\delta_{t}))\) , where \(v_{t}\) is the latent vector, \(\delta_{t}\) is the input change or action. \(v_{t - 1}\) and \(\delta_{t}\) are concatenated and linearly mixed by \(W\) , followed by coordinate- wise tanh non- linearity. We replace it by a model of the form \(v_{t} = M(\delta_{t})v_{t - 1}\) , where \(M(\delta_{t})\) is a matrix that is non- linear in \(\delta_{t}\) . \(M(\delta_{t})\) is a matrix representation of \(\delta_{t}\) . \(v_{t}\) can be interpreted as neuron activities. We can discretize the value of \(\delta_{t}\) into a finite set \(\{\delta \}\) , and each \(M(\delta)\) can be stored as synaptic connection weights that drive the neuron activities. In prediction, the input \(\delta_{t}\) activates \(M(\delta_{t})\) . In planning or control, all the \(\{M(\delta)\}\) are activated, and among them the optimal \(\delta\) is chosen. The matrix representation of the above model is inspired by the group representation theory, where the group elements are represented by matrices acting on the vectors (Fulton & Harris (2013)). It underlies much of modern mathematics and holds the key to the quantum theory (Zee (2016)). Perhaps it also underlies the visual and motor cortex, where neurons form rotating sub- vectors driven by matrices representing groups of transformations. One may call it a rotationist- connectionist model. <--- Page Split ---> ## PROJECT PAGE http://www.stat.ucla.edu/\~ruiqigao/gridcell/main.html ## ACKNOWLEDGEMENT We thank the three reviewers for their insightful comments and suggestions. Part of the work was done while Ruiqi Gao was an intern at Hikvision Research Institute during the summer of 2018. She thanks Director Jane Chen for her help and guidance. We also thank Jiayu Wu for her help with experiments and Zilong Zheng for his help with visualization. The work is supported by DARPA XAI project N66001- 17- 2- 4029; ARO project W911NF1810296; ONR MURI project N00014- 16- 1- 2007; and a Hikvision gift to UCLA. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. ## A PROOF FOR SECTION 3.6 Proof. \((a_{j},j = 1,2,3)\) forms a tight frame in the 2D space, in that for any vector \(x\) in 2D, \(\sum_{j = 1}^{3}\langle x,a_{j}\rangle^{2}\propto |x|^{2}\) . Since \(C^{*}C = I\) , we have \[\begin{array}{r c l}{{\langle v(x),v(y)\rangle}}&{{=}}&{{v(x)^{*}v(y)}}\\ {{}}&{{=}}&{{e(x)^{*}C^{*}C e(y)}}\\ {{}}&{{=}}&{{\sum_{j=1}^{3}e^{i\langle a_{j},y-x\rangle}.}}\end{array} \quad (19)\] Then we have \[\begin{array}{r c l}{{\mathrm{RE}(\langle v(x),v(x+\Delta x)\rangle)}}&{{=}}&{{\sum_{j=1}^{3}\cos(\langle a_{j},\Delta x\rangle)}}\\ {{}}&{{}}&{{\approx\sum_{j=1}^{3}(1-\langle a_{j},\Delta x\rangle^{2}/2)}}\\ {{}}&{{}}&{{\approx3-\sum_{j=1}^{3}\langle a_{j},\Delta x\rangle^{2}/2}}\\ {{}}&{{}}&{{\approx3(1-\alpha|\Delta x|^{2}),}}\end{array} \quad (22)\] where \(|a_{j}| = 2\sqrt{\alpha}\) . The self motion from \(v(x)\) to \(v(x + \Delta x)\) is \[\begin{array}{r c l}{{v(x+\Delta x)}}&{{=}}&{{C e(x+\Delta x)}}\\ {{}}&{{=}}&{{C D(\Delta x)e(x)}}\\ {{}}&{{=}}&{{C D(\Delta x)C^{*}v(x)}}\\ {{}}&{{=}}&{{M(\Delta x)v(x),}}\end{array} \quad (24)\] where \(D(\Delta x) = \mathrm{diag}(e^{i\langle a_{j},\Delta x\rangle},j = 1,2,3)\) . 
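The single-block solution above is easy to check numerically. The following sketch verifies the two identities of the proof for the tight frame \((a_{j})\); the unitary mixing matrix \(C\) is omitted because it cancels in all the inner products.

```python
import numpy as np

# Three 2D frequency vectors at mutual angles 2*pi/3 with |a_j| = 2*sqrt(alpha).
alpha = 72.0
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
A = 2 * np.sqrt(alpha) * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def e(x):
    """e(x) = (exp(i <a_j, x>), j = 1, 2, 3)."""
    return np.exp(1j * (A @ x))

rng = np.random.default_rng(0)
x = rng.uniform(size=2)
dx = 1e-2 * rng.standard_normal(2)

# Motion identity: e(x + dx) = D(dx) e(x) holds exactly (equation 24).
assert np.allclose(e(x + dx), e(dx) * e(x))

# Local isometry: Re<e(x), e(x + dx)> ~ 3(1 - alpha |dx|^2) (equation 22).
lhs = np.real(np.vdot(e(x), e(x + dx)))
rhs = 3 * (1 - alpha * float(dx @ dx))
print(abs(lhs - rhs))  # small for small |dx|
```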
If \(K > 1\) and block size \(= 6\) , we can fit the multiple blocks together by a Fourier expansion of the kernel function \[\begin{array}{r c l}{{f(|x-y|)}}&{{=}}&{{\langle v(x),v(y)\rangle}}\\ {{}}&{{=}}&{{\sum_{k=1}^{K}\sum_{j=1}^{3}e^{i\langle a_{k j},y-x\rangle}}}\\ {{}}&{{=}}&{{\sum_{k=1}^{K}\langle v^{(k)}(x),v^{(k)}(y)\rangle.}}\end{array} \quad (30)\] ## B HEXAGON GRID PATTERNS AND METRICS ## B.1 SIMULATED INPUT DATA We obtain input data for learning the model by simulating agent trajectories with the number of steps equal to \(\tilde{T}\) in the square domain \(D\) . \(D\) is discretized into a \(40 \times 40\) lattice and the agent is only allowed to move on the lattice. The agent starts at a random location \(x_{0}\) . At each time step \(t\) , a small motion \(\Delta x_{t} (\leq 3\) grids in each direction) is randomly sampled with the restriction of not leading the agent outside the boundary, resulting in a simulated trajectory \(\{x_{0} + \sum_{i = 1}^{t}\Delta x_{i}\}_{t = 1}^{T}\) . \(\tilde{T}\) is set to \(1,000\) to obtain trajectories that are uniformly distributed over the whole area. Although the trajectories used for training are restricted to the lattice and with small motions, in Section 5.4 <--- Page Split ---> we show that the learned model can be easily generalized to handle continuous positions and large motions. In training, pairs of locations \((x,y)\) are randomly sampled from each trajectory as the input to the adjacency loss \(L_{3}\) , while consecutive position sequences \((x,\Delta x_{t},t = 1,\dots,T)\) are randomly sampled as the input to the motion loss \(L_{1}\) , with length specified by \(T\) (which is usually much smaller than the whole length of the trajectory \(T\) ). ## B.2 LEARNED SINGLE BLOCK UNITS WITH DIFFERENT BLOCK SIZES In (Blair et al. (2007)), the grid cell response is modeled by three cosine gratings with different orientations and phases. In our model, we learn such patterns of different scales without inserting artificial assumptions. ![](images/13_0.jpg) <center>Figure 7: Response maps of learned single block units with different block sizes </center> Figure 7 displays the response maps of the learned single block units with different block sizes. For block sizes 4 and 5, the learned maps show square lattice patterns. For block sizes greater than or equal to 6, the learned maps show hexagon lattice patterns. ## B.3 LEARNED MULTIPLE BLOCK UNITS AND METRICS ## B.3.1 WITH DIFFERENT BLOCK SIZES Figure 8 displays the response maps of the multiple block units with different block sizes. The metrics of the multiple blocks are learned automatically. ## B.3.2 WITH DIFFERENT SHAPES OF AREA Figure 9 displays the response maps of the multiple block units with different shapes of the area, such as circle and triangle. ## B.4 QUANTITATIVE ANALYSIS OF SPATIAL ACTIVITY We assess the spatial activity of the learned units quantitatively using measures adopted from the neuroscience literature. Specifically, we quantify the hexagonal regularity of the grid- like patterns using the gridness score (Langston et al. (2010); Sargolini et al. (2006)). The measure is derived from the spatial autocorrelogram of each unit's response map. A unit is classified as a grid cell if its gridness score is larger than 0. For those units that are classified as grid cells, grid scale and orientation can be further derived from the autocorrelogram following Sargolini et al. (2006). Figure 10(a) summarizes the results. 
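For reference, a simplified version of the gridness computation described above could look as follows. This is a sketch only: the full procedure of Sargolini et al. (2006) additionally searches over annular masks around the center of the autocorrelogram.

```python
import numpy as np
from scipy.ndimage import rotate

def gridness_score(autocorr):
    """Correlate the spatial autocorrelogram with rotated copies of itself:
    hexagonal patterns correlate highly at 60/120 degrees and poorly at
    30/90/150 degrees; a unit counts as a grid cell if the score is > 0."""
    def corr_at(deg):
        r = rotate(autocorr, deg, reshape=False)
        return float(np.corrcoef(autocorr.ravel(), r.ravel())[0, 1])
    return (min(corr_at(60), corr_at(120))
            - max(corr_at(30), corr_at(90), corr_at(150)))
```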
76 out of 96 learned units are classified as grid cells. Most units with large learned \(\alpha_{k}\) are classified as grid cells, while those units with small \(\alpha_{k}\) are not due to the lack of a full period of hexagonal patterns. The grid scales and orientations vary among units, while <--- Page Split ---> ![](images/14_0.jpg) remaining similar within each block. We show the histograms of the grid orientations and scales in Figure 10(b) and 10(c) respectively. Moreover, we average the grid scales within each block and make the scatter plot of the averaged grid scales and the learned \(1 / \sqrt{\alpha_{k}}\) in figure 10(d). Interestingly, the grid scale is nicely proportional to the learned \(1 / \sqrt{\alpha_{k}}\) . ## B.5 ABLATION STUDIES We conduct ablation studies to assess various assumptions in our model. ## B.5.1 LOSS TERMS We learn \(v\) and \(M\) with multiple blocks by minimizing \(L_{1} + \lambda_{2}L_{2} + \lambda_{3}L_{3}\) , which consists of (1) a motion loss \(L_{1}\) , (2) a local isometry loss \(L_{2}\) , and (3) a global adjacency loss \(L_{3}\) . Figure 11 shows the learned units when using only some of the components. Grid- like patterns do not emerge if using only the adjacency loss \(L_{3}\) , or only the isometry loss \(L_{2}\) . If using only the motion loss, the motion equation is approximately satisfied at the beginning of training, since both \(v\) and \(M\) are initialized from small values that are close to 0. The system cannot be learned without an extra term to push \(v(x)\) apart from each other. Figure 11(c) shows the learned units using the adjacency loss and the motion loss, leaving out the isometry loss. Grid- like patterns still emerge (also some strip- like patterns), although less obvious than the ones learned using the full loss. ## B.5.2 ASSUMPTIONS OF MOTION MATRIX In another ablation study, we drop the quadratic parametrization by \(\beta\) coefficients and the block diagonal assumption of the motion matrix. A separate motion matrix \(M(\Delta x)\) is learned for each displacement \(\Delta x\) , and \(v(x)\) is assumed to be a single block. We use either the local isometry loss \(L_{2}\) or the global adjacency loss \(L_{3}\) in addition to the motion loss \(L_{1}\) . The results are shown in Figure 12. With the isometry loss, the setting is similar to the one of learning single block, except that the parametrization of \(M(\Delta x)\) is dropped. As shown in figure 12(a), the learned units resemble the ones learned with parametrized \(M(\Delta x)\) , but when the block size is large, the grid- like patterns are less obvious. With the adjacency loss, grid- like patterns do not emerge any more. <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 8: Learned multiple block units and metrics with different block sizes </center> ## C MODELING EGOCENTRIC MOTION The model can be generalized to handle egocentric motion that involves head direction. <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 9: Learned multiple block units and metrics with different shapes of the area </center> ## C.1 COUPLING TWO GRID SYSTEMS The egocentric motion consists of angular velocity in the change of head direction and the spatial velocity along the current head direction. We couple two grid systems, one for head direction and the other for the spatial position. For notational simplicity, we put the "hat" notation on top of \(v\) and \(M\) notation to denote the vector and matrix for the head direction. 
Specifically, the agent has a head direction \(\theta\), which may change over time. The agent moves along its head direction with a scalar motion. We can discretize the range of \(\theta\), \([0, 2\pi]\), into \(\hat{N}\) equally spaced values \(\{\theta_{i}, i = 1, \ldots, \hat{N}\}\), and introduce a vector representation of self-direction. That is, the agent represents its self-direction by a \(\hat{d}\)-dimensional hidden vector \(\hat{v}(\theta)\).

## C.1.1 MOTION SUB-MODEL

Suppose at a position \(x\), the agent has a head direction \(\theta\). The self-motion is decomposed into (1) a scalar motion \(\delta\) along the head direction \(\theta\) and then (2) a head direction rotation \(\Delta \theta\). The self-motion is \(\Delta x = (\delta \cos \theta, \delta \sin \theta)\). We assume

\[\begin{array}{rcl}{\hat{v}(\theta +\Delta \theta)} & = & {\hat{M}(\Delta \theta)\hat{v}(\theta),}\\ {v(x + \Delta x)} & = & {M(\delta ,\hat{v}(\theta))v(x),} \end{array} \quad (32)\]

where \(\hat{M}(\Delta \theta)\) is a \(\hat{d} \times \hat{d}\) matrix that depends on \(\Delta \theta\), and \(M(\delta, \hat{v}(\theta))\) is a \(d \times d\) matrix that depends on \(\delta\) and \(\hat{v}(\theta)\). \(\hat{M}(\Delta \theta)\) is the matrix representation of the head direction rotation \(\Delta \theta\), while \(M(\delta, \hat{v}(\theta))\) is the matrix representation of the scalar motion \(\delta\) along the direction \(\theta\). <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 10: (a) Autocorrelograms of the learned units' response maps. Gridness scores are calculated based on the autocorrelograms. A unit is classified as a grid cell if the gridness score is larger than 0. The gridness score is shown in red color if a unit fails to be classified as a grid cell. For those units that are classified as grid cells, gridness score, scale and orientation are listed sequentially in black color. Orientation is computed using a camera-fixed reference line \((0^{\circ})\) and in the counterclockwise direction. (b) Histogram of grid orientations. (c) Histogram of grid scales. (d) Scatter plot of averaged grid scales within each block versus the corresponding learned \(1 / \sqrt{\alpha_{k}}\). </center>

We can model \(M(\delta, \hat{v}(\theta))\) by an attention (or selection) mechanism:

\[\begin{array}{l}{p_{i} = \frac{\langle\hat{v}(\theta),\hat{v}(\theta_{i})\rangle^{b}}{\sum_{j = 1}^{\hat{N}}\langle\hat{v}(\theta),\hat{v}(\theta_{j})\rangle^{b}},}\\ {M(\delta ,\hat{v}(\theta)) = \sum_{i = 1}^{\hat{N}}p_{i}M^{(i)}(\delta),} \end{array} \quad (34)\]

where \(M^{(i)}(\delta)\) is the matrix representation of the scalar motion \(\delta\) given the head direction \(\theta_{i}\). The inner product \(\langle \hat{v}(\theta), \hat{v}(\theta_{i})\rangle\), which informs the angular distance between \(\theta\) and \(\theta_{i}\), serves as the attention weight. \(b\) is an annealing (inverse temperature) parameter. If \(b \to \infty\), \(p = (p_{i}, i = 1, \ldots, \hat{N})\) becomes a one-hot vector for selection. We can further assume that \(\hat{M}(\Delta \theta)\) and \(M^{(i)}(\delta)\) are block diagonal, and learn a parametric model for each of them by a second-order Taylor expansion in \(\Delta \theta\) and \(\delta\) respectively.

## C.1.2 LOCALIZATION SUB-MODEL

For the localization sub-model, we define the adjacency measures of self-direction and self-position separately.
Let \(\hat{f} (|\theta_{1} - \theta_{2}|)\) be the adjacency measure between two directions \(\theta_{1}\) and \(\theta_{2}\) . We use von Mises kernel \(\hat{f} (r) = \exp ((\cos (r) - 1) / \sigma^{2})\) , where \(\hat{f} (0) = 1\) . For adjacency measure between self- positions \(x\) and \(y\) , we keep it the same as described in section 3.7. <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 11: Ablation study of the components in the training loss. (a) Learn the model using only the localization loss with global adjacency. (b) Learn the model using only the localization loss with local adjacency. (c) Using global adjacency and motion loss, leaving out local adjacency. </center> ## C.1.3 LOSS FUNCTION FOR LEARNING For learning the system, we can first learn \(\hat{v}\) and \(\hat{M}\) (or the coefficients that parametrize \(\hat{M}\) ) by minimizing \[\mathbb{E}_{\theta_{1},\theta_{2}}\left[\left(\hat{f} (|\theta_{1} - \theta_{2}|) - \langle \hat{v} (\theta_{1}),\hat{v} (\theta_{2})\rangle\right)^{2}\right] + \lambda \mathbb{E}_{\theta ,\Delta \theta}\left[\left\Vert \hat{v} (\theta +\Delta \theta) - \hat{M} (\Delta \theta)\hat{v} (\theta)\right\Vert^{2}\right], \quad (35)\] and then we learn \(v\) and \(M\) (or the coefficients that parametrize \(M\) ) as before. ## C.2 LEARNED UNITS FOR SELF-DIRECTION AND SELF-POSITION Figure 13 shows a result of learning such an egocentric motion model by displaying the response curves of the learned units in the head direction system, \(\hat{v} (\theta)\) , for \(\theta \in [0,2\pi ]\) , as well as the response maps of the learned multiple block units in the self- position system, \(v(x)\) , for \(x \in [0,1]^2\) . ## C.3 CLOCK AND TIMESTAMP We may re- purpose the head direction system as a clock, by interpreting \(\theta \in [0,2\pi ]\) as the time on a clock, and \(\hat{v} (\theta)\) as a timestamp for events happening over time. This may be related to the recent neuroscience observations in Tsao et al. (2018). ## D ERRORS ## D.1 ERROR CORRECTION Unlike commonly used embedding in machine learning, here we embed a 2D position into a high- dimensional space, and the embedding is a highly distributed representation or population code. The <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 12: Learned units by dropping the parametrization and the block diagonal assumption of the motion matrix \(M(\Delta x)\) . </center> advantage of such a redundant code lies in its tolerance to errors. We show that the learned system is tolerant to various sources of errors. Specifically, in both path integral and path planning tasks, at every time step \(t\) , we randomly add (1) Gaussian noises or (2) dropout masks to the hidden units and see if the system can still perform the tasks well. We find that the decoding- encoding process (DE) is important for error correction. That is, at each time step \(t\) , given the noisy hidden vector \(v_{t}\) , we decode it to \(x_{t} = \arg \max_{x}\langle v_{t},v(x)\rangle\) and then re- encode it to the hidden vector \(v(x_{t})\) . Actually the whole process can be accomplished by projection to the codebook sub- manifold without explicit decoding, by obtaining the vector: \(\arg \max_{v\in C}\langle v_{t},v\rangle\) , where \(C = \{v(x),x\in [0,1]^{2}\}\) . Table 1 shows the error correction results tested on path integral and path planning tasks. Each number is averaged over 1,000 episodes. 
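A minimal sketch of the decoding-encoding (DE) step used in Table 1, where `V` is a placeholder for the codebook matrix with one column \(v(x)\) per lattice site; the commented noise models mirror the two noise types tested:

```python
import numpy as np

def decode_encode(v_noisy, V):
    """Project a noisy code onto the codebook sub-manifold C = {v(x)} by
    taking the codebook column with the largest inner product; this is the
    DE step that removes most of the injected error."""
    return V[:, int(np.argmax(V.T @ v_noisy))]

# Illustrative noise models corresponding to the two rows of Table 1:
# v_noisy = v + s * np.random.standard_normal(v.shape)           # Gaussian
# v_noisy = v * (np.random.uniform(size=v.shape) > drop_rate)    # dropout
```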
We compute the overall standard deviation of \(\{v(x)\}\) for all \(x\) and treat it as the reference standard deviation (s) for the Gaussian noise. For dropout noise, we set a percentage to drop at each time step. With the decoding- encoding process, the system is quite robust to the Gaussian noise and dropout error, and the system still works even if \(70\%\) units are silenced at each step. ## D.2 NOISY SELF-MOTION INPUT Besides adding noises to the hidden units, we also experiment with adding noises to the self- motion \(\Delta x\) , and compare the performance of path integral with the one performed in the original 2D coordinates. Specifically, at each time step, we add Gaussian noises to the self- motion \(\Delta x\) . For path integral, we compute the mean square error between the predicted locations and ground truth locations. Besides, we also compute the predicted locations using the 2D coordinates with the noisy self- motions. Its mean square error serves as the reference error. Table 2 shows the result, indicating that the error of the learned system is close to the error in the original 2D coordinates, i.e., our system does not blow up the noise. <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 13: (a) Response curves of learned units in the head direction model, \(\hat{v} (\theta)\) . Each block shows the units belonging to the same sub-vector in the model. The horizontal axis represents the angle within a range of [0, \(2\pi ]\) , while the vertical axis indicates the values of responses. (b) Response maps of the learned multiple block units of the self-position model, \(v(x)\) , for \(x \in [0, 1]^2\) . </center> <table><tr><td colspan="5">Path integral: MSE</td><td colspan="5">Path planning: success rate</td></tr><tr><td>Noise type</td><td>1s</td><td>0.75s</td><td>0.5s</td><td>0.25s</td><td>0.1s</td><td>1s</td><td>0.75s</td><td>0.5s</td><td>0,25s</td></tr><tr><td>Gaussian (DE)</td><td>1.687</td><td>1.135</td><td>0.384</td><td>0.017</td><td>0</td><td>0.928</td><td>0.959</td><td>0.961</td><td>0.977</td></tr><tr><td>Gaussian</td><td>6.578</td><td>2.999</td><td>1.603</td><td>0.549</td><td>0.250</td><td>0.503</td><td>0.791</td><td>0.934</td><td>0.966</td></tr><tr><td>Noise type</td><td>70%</td><td>50%</td><td>30%</td><td>10%</td><td>5%</td><td>70%</td><td>50%</td><td>30%</td><td>10%</td></tr><tr><td>Dropout (DE)</td><td>2.837</td><td>1.920</td><td>1.102</td><td>0.109</td><td>0.013</td><td>0.810</td><td>0.916</td><td>0.961</td><td>0.970</td></tr><tr><td>Dropout</td><td>19.611</td><td>16.883</td><td>14.137</td><td>3.416</td><td>0.602</td><td>0.067</td><td>0.186</td><td>0.603</td><td>0.952</td></tr></table> Table 1: Error correction results on the vector representation. The performance of path integral is measured by mean square error between predicted locations and ground truth locations; while for path planning, the performance is measured by success rate. Experiments are conducted using several noise levels: Gaussian noise with different standard deviations in terms of the reference standard deviation \(s\) and dropout mask with different percentages. DE means implementing decoding- encoding process when performing the tasks. ## E 3D ENVIRONMENT AND 1D TIME The system can be generalized to 3D environments. Specifically, we assume the agent navigates within a domain \(D = [0,1] \times [0,1] \times [0,1]\) , which is discretized into a \(40 \times 40 \times 40\) lattice. 
We learn a parametric model for motion matrix \(M\) in a residual form \(M = I + \tilde{M} (\Delta x)\) , where \(\tilde{M} (\Delta x)\) is approximated by the second order Taylor expansion of \(\Delta x\) . We first learn a single block with fixed \(\alpha_{k}\) by minimizing \(L_{1} + \lambda_{2}L_{2,k}\) . A batch of 500,000 examples of \((x, y)\) and \((x, \Delta x_{t}, t = 1, \ldots , T)\) is sampled online at every iteration for training. Figure 14 shows the learned units. <--- Page Split ---> <table><tr><td>Standard deviation</td><td>1.2</td><td>0.9</td><td>0.6</td><td>0.3</td></tr><tr><td>Learned system</td><td>6.020</td><td>4.382</td><td>3.000</td><td>1.422</td></tr><tr><td>Reference</td><td>5.852</td><td>4.185</td><td>2.873</td><td>1.315</td></tr></table> Table 2: Path integral results with noises in the self- motion. Performance is measured by mean square error (MSE) between the predicted locations and ground truth locations in the path integral task. Noises are added to self- motions \(\Delta x\) by several noise levels: Gaussian noise with different standard deviations in terms of number of grids. Reference MSE is computed by path integral in 2D coordinates. ![](images/21_0.jpg) <center>Figure 14: Learned units of a single block with fixed \(\alpha\) in 3D environment. Every row shows the learned units with a given \(\alpha\) . </center> Next we learn multiple blocks by minimizing \(L_{1} + \lambda_{3}L_{3}\) , and use the learned models to perform 3D path integral and path planning. For simplicity, we remove \(L_{2}\) . We use 8 blocks of units with block size 8 and exponential kernel ( \(\sigma = 0.3\) ) for path planning. A batch of 200,000 examples is sampled online at every iteration for training. ## E.1 3D PATH INTEGRAL Figure 15 shows some results of 3D path integral with duration \(\tilde{T} = 30\) . Gaussian Kernel ( \(\sigma = 0.08\) ) is used as the adjacency measure. ## E.2 3D PATH PLANNING Figure 16 shows some results of 3D simple path planning. Exponential kernel ( \(\sigma = 0.3\) ) is used as the adjacency measure. We design a set of allowable motions \(\Delta\) : \(m\) lengths of radius \(r\) are used, and for each \(r\) , we evenly divide the inclination \(\theta \in [0, \pi ]\) and the azimuth \(\alpha \in [0, 2\pi ]\) into \(n\) directions, which results in a motion pool with \(mn^{2}\) candidate motions \(\Delta = \{(r \sin (\theta) \cos (\alpha), r \sin (\theta) \sin (\alpha), r \cos (\theta))\}\) . We use \(m = 2, n = 90\) in this experiment. ## E.3 3D PATH PLANNING WITH OBSTACLES Figure 17 shows some examples of 3D path planning with a cuboid obstacle. \(a = 38\) and \(b = 24\) in equation 16. <--- Page Split ---> ![](images/22_0.jpg) <center>Figure 15: Examples of 3D path integral with duration \(\hat{T} = 30\) . </center> ![](images/22_1.jpg) <center>Figure 16: Examples of 3D simple path planning, where the agent is capable of planning a direct trajectory. </center> ## E.4 LEARNING IN 1D The system can also be applied to 1D. Inspired by Tsao et al. (2018), the learned system in 1D may serve as a timestamp of events. We assume domain \(D = [0,1]\) and discretize it into 100 time points. A parametric \(M\) is learned by a residual form \(M = I + \hat{M} (\Delta t)\) , where each element of \(\hat{M} (\Delta t)\) is parametrized as a function of \((\Delta t, \Delta t^2)\) . 16 blocks of hidden units with block size 6 are used. Figure 18 visualizes the learned units over the 100 time points. 
The response wave of every unit <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 17: Examples of 3D path planning with a cuboid obstacle. </center> shows strong periodicity of a specific scale or frequency. Within each block, the response waves of units have similar patterns, with different phases. ![](images/23_1.jpg) <center>Figure 18: Response curves of learned units in 1D. Each block shows the units belonging to the same sub-vector in the motion model. The horizontal axis represents the time points, while the vertical axis indicates the value of responses. </center> <--- Page Split --->
accept
Accept (Poster)
7.333333
ICLR_2019_paper_0399
iclr
2019
# STEP-WISE SENSITIVITY ANALYSIS: IDENTIFYING PARTIALLY DISTRIBUTED REPRESENTATIONS FOR INTERPRETABLE DEEP LEARNING

Anonymous authors Paper under double-blind review

## ABSTRACT

In this paper, we introduce a novel method, called step-wise sensitivity analysis, which makes three contributions towards increasing the interpretability of Deep Neural Networks (DNNs). First, we are the first to suggest a methodology that aggregates results across input stimuli to gain model-centric results. Second, we linearly approximate the neuron activation and propose to use the outlier weights to identify distributed code. Third, our method constructs a dependency graph of the relevant neurons across the network to gain a fine-grained understanding of the nature and interactions of the DNN's internal features. The dependency graph illustrates shared subgraphs that generalise across 10 classes and can be clustered into semantically related groups. This is the first step towards building decision trees as an interpretation of learned representations.

## 1 INTRODUCTION

Deep Neural Networks (DNNs) have impressed the scientific community because of their performance and the variety of domains to which they can be applied. However, DNNs are difficult to interpret because of their highly complex non-linear and interconnected nature. The lack of transparency is a threefold problem. First, it inhibits adoption, especially in industries under heavy regulation and with a high cost of errors. Second, it makes it difficult to debug existing models and hampers development progress. Third, it prevents us from utilising the insights gained from the models for further knowledge discovery.

DNN interpretability is loosely defined, and it is also referred to as Explanatory AI, Understandable Machine Learning, and Deep Visualisation. DNN interpretability can be gained from a human-interpretable explanation of the reasons behind the network's choice of output (Ribeiro et al., 2016; Doshi-Velez & Kim, 2017). In a DNN the basis for a decision is encoded in features either as one neuron (a local representation) or as a set of neurons (a partially-distributed representation, PDR) (Li et al., 2016; Fong & Vedaldi, 2018). The identification of PDRs and their interactions remains the main hindrance to end-to-end interpretability systems (Olah et al., 2018). Once identified, PDRs will enable us to give much finer-grained explanations (e.g., an image is a shark because the network detected a sea, sharp teeth, a long fin, etc.).

In this paper, we introduce our novel technique, step-wise sensitivity analysis (SSA), with which we make three contributions towards this vision: SSA 1) identifies PDRs; 2) illustrates interactions between related PDRs; 3) applies a novel perspective for interpretability: statistics across multiple input stimuli (instance-specific) to gain a general understanding of the DNN operation (model-centric). The method produces statistical topological interpretability. That is, we analyse the network's properties as a directed graph over various inputs to produce a dependency graph between neurons. The dependency graph highlights the relationships between adjacent layers that are pertinent to the decision, and how these relationships are formed in each layer and across all layers to form a feature representation.
The remainder of this paper is organised as follows: Section 2 reviews related work; in Section 3 we introduce our technique for building the cross-input layer-wise dependency graph; Section 4 <--- Page Split ---> illustrates three novel neuron relevance metrics stemming from the dependency graph and shows how they can be used to determine and interpret neurons of interest; Section 5 makes concluding remarks and suggests how our work can be applied and extended. ## 2 BACKGROUND In this section, we organise the existing effort dedicated to DNN interpretability around its three main limitations: functional versus topological, instance-specific versus model-centric, and single-neuron versus layer analysis. We then compare step-wise sensitivity analysis with other works that address the same limitations. First, recent attempts focus primarily on the input-output relationship, considering the network as a black-box function. These methods are classified as functional since they treat the entire network as a black-box function with an input, output and parameters, while the topological approach considers the structure of the network. One of the most investigated areas in the functional vein is sensitivity analysis, which produces a heatmap illustrating the input parts relevant to the output decision. Examples include deconvolution (Zeiler & Fergus, 2014), image-specific class saliency visualisations (Simonyan et al., 2013), guided backpropagation (Springenberg et al., 2014), and prediction difference analysis (Zintgraf et al., 2017). In contrast to these functional methods, Section 3 demonstrates how the sensitivity analysis technique can be modified to gain more granular information across layers. The second limitation is that there are two completely opposite types of visualisation approaches for interpretability - model-centric or instance-specific. The model-centric approaches, such as activation maximisation (Erhan et al., 2009) and inversion (Mahendran & Vedaldi, 2015), synthesise a prototype image to explain a neuron. This can be generalised to every data point, but does not contain enough detail to reason about particular mistakes or edge cases. On the other hand, instance-specific methods operate on the level of a single instance, but this fine granularity cannot be used to elicit principles applicable across a wider set of instances (e.g., the relevance of regions in a single image). Our methodology iterates over instance-specific results, and we compare the results for different classes to illustrate similar "thought patterns", that is, the relevance of filters is shared across classes. Thus, our approach can be viewed as both instance-specific and model-centric. Third, current approaches either explore a single neuron in isolation, such as sensitivity analysis and activation maximisation, or the entire layer, such as inversion (Mahendran & Vedaldi, 2015). The former approach assumes purely local representations, while the latter assumes fully-distributed representations. However, recent findings (Li et al., 2016; Fong & Vedaldi, 2018; Agrawal et al., 2014; Bau et al., 2017) suggest that every layer consists of a mixture of local and partially-distributed representations. Our approach addresses the last two limitations; in particular, it interprets the internal DNN structure across layers of the network to identify the different types of representations in each layer.
### 2.1 RELATED WORK #### 2.1.1 RELEVANCE SCORE TECHNIQUES Net2Vec (Fong & Vedaldi, 2018) builds on network dissection (Bau et al., 2017) to propose a method for selecting and combining relevant filters, and providing an explanation for filters in conjunction with each other, thus identifying and interpreting PDRs. This method determines the relevance of neurons by optimising the combinations of filters for classification and segmentation on proxy ad hoc tasks. In contrast, our method can ascertain the neuron relevance using the original data set, which makes it more generalisable as it is not necessary to compile explanatory datasets for various problems. On the other hand, similarly to our approach, Landecker et al. (2013) argue that computing the relevance in a layer-by-layer fashion yields more fine-grained results, and draw the distinction between functional and topological approaches. In Bach et al. (2015) this idea is incorporated into a sensitivity analysis technique - Layer-wise Relevance Propagation (LRP). Deep Taylor Decomposition (Montavon et al., 2017) generalises the output of such sensitivity analysis techniques to an importance score, which is computed in a topological layer-wise fashion. DeepLift (Shrikumar et al., 2017) proposes an alternative functional approximation method for computing the relevance score by comparing the activation of a neuron to a reference one. Our work is similar to the relevance <--- Page Split ---> score techniques in that it uses the original sensitivity analysis approach as an importance score metric. In addition, and similarly to the topological approaches, we suggest computing the importance score at each layer. However, we propose that the network's decision is driven by a partially-distributed representation rather than the entire layer. Hence, instead of distributing the relevance across all neurons, we only redistribute the relevance to a small number of outliers. #### 2.1.2 CONSTRAINED REDISTRIBUTION Our work is comparable to excitation backpropagation (Zhang et al., 2016), which distributes the relevance to a subset of neurons. There are two important differences. First, excitation backpropagation focuses on improving the heatmap quality, while we investigate how to discover PDRs. Second, while excitation backpropagation uses a probabilistic winner-take-all sampling approach that is limited to neurons with ReLU activations and positive weight connections between adjacent layers, we deploy a more generalisable linear Taylor approximation and statistical analysis over multiple inputs to restrict the relevant neurons. #### 2.1.3 VISUALISATION Our method resembles the approach in Liu et al. (2017) in that we also generate a DAG and augment it with additional visualisations to produce an explanation. The main difference resides in that they use clustering to reduce the visual clutter and group neurons together. In contrast, we propose a novel method to select only the relevant paths for investigation. ## 3 STEP-WISE SENSITIVITY ANALYSIS Our interpretability method is based on the sensitivity analysis technique by Baehrens et al. (2010) that was first applied to DCNNs by Simonyan et al. (2013) to produce image-specific class saliency visualisations. Formally, given an image \(\mathbf{I}_{i}\), a representation function \(\Phi :\mathbb{R}^{H\times W\times C}\to \mathbb{R}^{d}\) such that \(\Phi (\mathbf{I}) = \mathbf{o}\), and a neuron \(n\), we approximate the activation \(\mathbf{o}_{n}\) with a linear function.
In the neighbourhood of \(\mathbf{I}_{i}\), this is achieved by computing the first-order Taylor expansion: \[\mathbf{o}_{n} = \Phi_{n}(\mathbf{I})\approx \omega^{T}\mathbf{I} + b \quad (1)\] where \(\omega\) is the gradient of \(\Phi_{n}\) with respect to the image \(\mathbf{I}\), evaluated at the image \(\mathbf{I}_{i}\): \[\omega = \frac{\partial\Phi_{n}}{\partial\mathbf{I}}\Big|_{\mathbf{I}_{i}} \quad (2)\] This formulation allows us to interpret the magnitude of the values of \(\omega\) as an importance metric corresponding to each pixel. In other words, these values indicate which pixels need to be changed the least to change \(\Phi (\mathbf{I})\) such that \(\mathbf{o}_{n}\) (corresponding to a classification decision) is increased the most. ### 3.1 SINGLE BACK-PROPAGATION STEP Sensitivity analysis performs a complete back-propagation pass to compute \(\omega\) in Equation 2. The end result is a class saliency map, which is particularly useful to identify the image regions most pertinent to the network's decision. However, this is a very coarse-grained explanation since it only analyses the input-output relationship. There is no information regarding the effect of particular layers and neurons on the selection of the relevant image regions. We propose a much more fine-grained analysis based on the simple hypothesis that sensitivity analysis can be used in an analogous way to determine the relevance between adjacent layers. Instead of trying to approximate \(\mathbf{o}_{n}\) directly, we consider \(\Phi\) to be defined as the successive composition of smaller functions that represent the transformations of data between layers: \[\Phi (\mathbf{I}) = f^{l}(\Phi^{l - 1}(\mathbf{I})) = f^{L}\circ f^{L - 1}\circ f^{L - 2}\circ \dots \circ f^{1}(\mathbf{I}) \quad (3)\] where \(l = 1\dots L\), \(L\) is the network's depth, and each layer denoted as \(f^{l}:\mathbb{R}^{d^{\prime}}\to \mathbb{R}^{d}\) represents the operation applied by layer \(l\), where \(d^{\prime}\) is the output dimensionality of the preceding layer \(f^{l - 1}\) and \(d\) is <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 1: (a) A sketch of how step-wise sensitivity analysis can be used to provide an interpretation for a shark prediction (for actual output examples, see Fig. 3). Each step identifies PDRs of relevant neurons, and for each PDR recursively traverses downwards. (b) Schematic representation of step-wise sensitivity analysis showing the two novelties in the single step between adjacent layers. First, the relevant neurons are the ones with positive outlier values in the distribution of \(\omega\), represented with a boxplot. Second, the analysis is aggregated across instance-specific inputs to gain model-centric results. </center> the output dimensionality of layer \(l\). Starting with neuron \(n\) from the top layer \(f_{n}^{l}\), we conduct only the last step of back-propagation to compute the \(\omega^{l - 1} \in \mathbb{R}^{d'}\) values, where the node of the network under consideration is \(f^{l - 1} \in \mathbb{R}^{d'}\). We call our approach step-wise sensitivity analysis (SSA) since it iteratively applies sensitivity analysis through the computation of a single back-propagation step.
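For concreteness, here is a minimal sketch of the full-pass sensitivity analysis of Equation 2 in TensorFlow/Keras (the framework used in the experiments below). The function name is ours, and this is the vanilla-gradient formulation; the experiments in Section 4 rely on the guided-backpropagation variant instead.

```python
import tensorflow as tf

def sensitivity_map(model, image, n):
    """Full-pass sensitivity analysis (Equation 2): gradient of output
    neuron n with respect to every input pixel."""
    x = tf.convert_to_tensor(image[None, ...])  # add a batch dimension
    with tf.GradientTape() as tape:
        tape.watch(x)                # x is a constant tensor, so watch it explicitly
        o_n = model(x)[0, n]         # activation of the chosen output neuron
    return tape.gradient(o_n, x)[0]  # omega, with the same shape as the image
```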
The result of a single back-propagation step can be seen as performing one step of step-wise sensitivity analysis between a higher layer \(l\) and a lower one \(j\): \[\omega_{n,i,:}^{l} = \frac{\partial f_{n}^{l}\left(\Phi^{j}(\mathbf{I})\right)}{\partial \Phi^{j}(\mathbf{I})}\bigg|_{\mathbf{I}_{i}} \quad (4)\] The \(\omega\) values now represent the neurons which have to be changed the least to affect the upper layer's neuron activation the most, and as such they can be treated as the relevance scores for \(f_{n}^{l}\) of all lower-layer neurons \(f^{l - 1}\). A large positive \(\omega_{n,i,k}^{l}\) value means the neuron \(f_{k}^{l - 1}\) contributes substantially to the activation of \(f_{n}^{l}\), while a large negative \(\omega_{n,i,k}^{l}\) value inhibits the activation. The ability to identify the positively and negatively contributing neurons in a layer-by-layer fashion is the first step towards understanding the internal representation structure and identifying partially-distributed representations. This can be accomplished by using step-wise sensitivity analysis and the resulting \(\omega^{l}\) to guide a hierarchical traversal of the network layers. Next, we formally describe our method. ### 3.2 THE METHOD The main contribution of our algorithm is the granularity with which it can illuminate the internal workings of a DNN. We generate a directed acyclic graph that spans the entire network. We call it a dependency graph because it increases the understanding of exactly how the activation of higher output layers depends on lower input layers. It illustrates how the input is transformed in each of the network's layers and highlights "paths" that, when followed, identify the building blocks of higher-level features. For example, our method can take us much closer to the ability to say that the concept of a shark is encoded as a combination of a fin, body, and tail feature (see Fig. 1a). Our step-wise sensitivity analysis method is separated into two parts. The first (wrapper) part traverses the network and maintains the input to iteratively apply the second part of the method. The second (step) part applies step-wise sensitivity analysis to identify relevant neurons, as specified in Algorithm 1. <--- Page Split ---> Algorithm 1 Step-wise sensitivity analysis: Identifying partially-distributed representations INPUT: DNN classifier \(\Phi\), a layer \(f^{l} \in \mathbb{R}^{d}\) from \(\Phi\), a set of relevant neurons \(n \in \mathbb{S}\), and a set of images \(\mathbf{I}_{i} \in \mathbb{I}\). STEP 1: Compute the relevance of neurons in layer \(f^{l - 1}\) for each \(n\) and \(\mathbf{I}_{i}\), so that if \(f^{l - 1}\) is a: 1. Fully-connected layer: stack the results into a relevance tensor \(\omega^{l} \in \mathbb{R}^{|\mathbb{S}| \times |\mathbb{I}| \times |f^{l-1}|}\); 2. Convolutional layer: spatially average the output volume tensors \(\omega_{n,i,:}^{l}\) into a relevance tensor \(\omega^{l} \in \mathbb{R}^{|\mathbb{S}| \times |\mathbb{I}| \times K}\); 3. Pooling layer: directly compute for \(l - 2\): \(\omega^{l} = \nabla_{f^{l - 2}} f^{l}|_{\mathbf{I}_{i}}\) STEP 2: Select outliers as relevant neurons using \(1.5 \times\) the Inter-Quartile Range. STEP 3: Rank the relevant neurons based on their relevance frequency across images. STEP 4: Select the top \(b\) relevant neurons, where \(b\) is a branching factor. OUTPUT: \(b\) relevant neurons for each distinct \(n\) in \(\mathbb{S}\). The basic idea of our step-wise sensitivity analysis is illustrated in Fig. 1a; a code sketch of the single step in Equation 4 follows below.
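As a hedged illustration of Equation 4, one could re-wire a Keras model so that both layers are exposed and differentiate one with respect to the other; `step_wise_omega`, the layer-name arguments, and the assumption of a fully-connected upper layer are our own simplifications, not part of the paper.

```python
import tensorflow as tf

def step_wise_omega(model, image, lower_name, upper_name, n):
    """One SSA step (Equation 4): gradient of neuron n in the upper layer
    with respect to the activations of the lower layer."""
    # Expose both layers' outputs through an auxiliary model.
    probe = tf.keras.Model(
        inputs=model.input,
        outputs=[model.get_layer(lower_name).output,
                 model.get_layer(upper_name).output])
    x = tf.convert_to_tensor(image[None, ...])
    with tf.GradientTape() as tape:
        lower_act, upper_act = probe(x)
        f_n = upper_act[0, n]  # assumes a fully-connected upper layer
    # Relevance of every lower-layer neuron to f_n.
    return tape.gradient(f_n, lower_act)[0]
```

For a convolutional lower layer the returned tensor has shape \(H \times W \times K\) and would then be spatially averaged as in Step 1 of Algorithm 1.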
Given a DNN classifier \(\Phi\) and a set of relevant neurons \(n \in \mathbb{S}\), start from the top layer and follow Algorithm 1 to produce a set of \(b\) relevant neurons \(\mathbb{B}^{n}\) for each distinct \(n\) in \(\mathbb{S}\). Next, set \(\mathbb{S}\) to the union of all relevant neurons \(\mathbb{B}^{n}\) (\(\mathbb{S} \leftarrow \bigcup_{n} \mathbb{B}^{n}\)) for the lower layer, and repeat until the input layer is reached. For computational efficiency, the magnitude of \(b\) is a threshold for the cardinality of each \(\mathbb{B}^{n}\), thus discarding a proportion of potentially relevant neurons. We believe that \(b\) is an important hyper-parameter since it limits the size of potential PDRs, which recent studies indicate are typically between 8 and 50 neurons (Fong & Vedaldi, 2018). We detail each of the steps in Algorithm 1 next. ## STEP 1: COMPUTE RELEVANCE TENSOR Input: This step requires a network \(\Phi\), a layer \(f^{l}\), a neuron \(n \in f^{l}\), and an image \(\mathbf{I}_{i}\). Output: The relevance score of neurons in layer \(f^{l-1}\) with respect to a neuron \(n\) in layer \(f^{l}\), computed as a gradient at \(\mathbf{I}_{i}\) using Equation 4. Essentially, this produces the relevance of all neurons in layer \(f^{l-1}\) to the activation of neuron \(n\). Method: The relevance for a DCNN is computed differently depending on the type of layer \(f^{l-1}\). If \(f^{l-1}\) is fully-connected, the result is a relevance vector \(\omega_{n,i,:}^{l} \in \mathbb{R}^{|f^{l - 1}|}\). Repeating this process for all images and neurons in \(\mathbb{S}\) yields a relevance tensor \(\omega^{l}\). If \(f^{l-1}\) is a convolutional layer, the result of Equation 4 for each neuron and image is a 3D tensor \(\omega_{n,i,:}^{l} \in \mathbb{R}^{H \times W \times K}\) (a 5D relevance tensor once stacked over all neurons and images), where \(H\), \(W\), \(K\) are respectively the height, width, and number of activation maps in \(l - 1\). Since every activation map \(k\) is produced by convolving identical weights onto a lower-layer activation map \(p\), \(k\) represents the existence of an identical feature across \(p\). Hence, the vector \(\omega_{n,i,h,w,:}^{l}\) represents the relevance of all lower-level activation maps (features) at a location \((h, w)\) to the activation of \(n\). Since we are interested in the relative importance of a feature, we perform spatial averaging over all locations \((h, w)\) to convert \(\omega_{n,i,:}^{l}\) into a relevance vector in \(\mathbb{R}^{K}\), where each dimension indicates the relative importance of an activation map across locations. This formulation enables us to repeat the process for all images and neurons and again obtain a 3D relevance tensor \(\omega^{l}\). The pooling layers can be seen as a filter of their predecessors since \(\frac{d f^{l}}{d \mathbf{I}_{i}} = c \times \frac{d f^{l - 1}}{d \mathbf{I}_{i}}\), where \(c \in \{0,1\}\). Hence, if \(f^{l - 1}\) is a pooling layer we compute the relevance tensor directly w.r.t. \(l - 2\): \(\omega^{l} = \nabla_{f^{l - 2}} f^{l}|_{\mathbf{I}_{i}}\). ## STEP 2: OUTLIER DETECTION Input: This step requires a relevance tensor \(\omega^{l}\). Output: The neurons in layer \(l - 1\) that are relevant for neuron \(n\). Method: Our preliminary experiments indicated that each row \(\omega_{n,i,:}^{l}\) follows a normal distribution, and consistently exhibits a small number of outliers across \(i\) (see Figure 1b). Therefore, we make two simplifying assumptions.
First, we assume these outliers are the only relevant neurons. Second, we choose to focus only on the positive \(\omega\) outlier values and leave the analysis of negative outliers <--- Page Split ---> for future work. Finally, we use Tukey's fences (\(1.5 \times\) the Inter-Quartile Range) outlier detection method (Tukey, 1977) to select relevant neurons from each row \(\omega_{n,i,:}^{l}\). Figure 1b illustrates this step. ## STEP 3: RANKING Input: Outliers in \(\omega_{n,i,:}^{l}\). Output: A relevance ranking of each lower-layer neuron. Method: We use the outlier detection procedure to detect relevant neurons in every row. We rank the neurons based on the number of images \(i\) in which they appear as relevant, resulting in a relevance ranking for each \(k \in f^{l - 1}\). ## STEP 4: SELECT Input: The relevance ranking for each \(k \in f^{l - 1}\). Output: \(b\) relevant neurons for each distinct \(n\) in \(\mathbb{S}\). Method: Select the top \(b\) most frequent neurons as the relevant set \(\mathbb{B} \subset f^{l-1}\), where \(b = |\mathbb{B}|\) is a branching factor (see the sketch below). ## 4 RESULTS AND DISCUSSION We now demonstrate how our method can be applied to the ImageNet dataset (Russakovsky et al., 2015) for the 16-layer VGG network (VGG16) (Simonyan & Zisserman, 2014). We use the publicly available pre-trained model implemented in the deep learning framework Keras (Chollet et al., 2015) and modify the keras-vis (Kotikalapudi & contributors, 2017) implementation of sensitivity analysis using the guided-backpropagation algorithm (Springenberg et al., 2014). We perform experiments with 10 classes and select 100 images per class from the training set for which VGG16's probability mass on the correct class is above \(99\%\). The execution takes approximately 12 hours on an NVIDIA Tesla K80 GPU to traverse the entire network for one class with a branching factor of 3. However, most of the time is spent within the convolutional layers, which have a larger number of unique neurons. Since our approach's time complexity is worst-case \(O(b^{d})\), where \(b\) is the branching factor and \(d\) is the depth (\(d = 22\) in the case of VGG16), we constrain \(b\) due to computational limitations. The approach is still practical since it is not designed to be executed every time that an explanation is necessary, just as a network is not retrained every time before a prediction. In Section 4.1 we show the particular occurrence of outliers within the \(\omega^{l}\) values to demonstrate how our method chooses relevant neurons. Furthermore, we provide in Section 4.1 a quantitative justification behind the choice of heuristics to detect relevant outliers, while in Section 4.2 we demonstrate our methodology on two classes, demonstrate that the results generalise across 10 classes, and illustrate the importance of statistical interpretability. The novelty of our approach is that it generates a dependency graph of relevant neurons. In contrast to other approaches, we study the process of feature formation and the interdependence of relevant features by following a path through the graph. This allows us to selectively visualise relevant activation maps through sensitivity analysis. We demonstrate not only an increase in the granularity of the state-of-the-art interpretability capabilities, but also that an analysis of a single image could lead to false assumptions about the operation of the network, as demonstrated in Section 4.2.
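Before turning to the quantitative justification, the following NumPy sketch shows one way Steps 2-4 and the surrounding traversal could be implemented; the function names and the `relevance_of` callback are our own placeholders rather than the paper's code.

```python
import numpy as np

def select_relevant(omega, b):
    """Steps 2-4 on a relevance matrix omega of shape (num_images, K)."""
    # Step 2: Tukey's fences: positive outliers above Q3 + 1.5 * IQR.
    q1, q3 = np.percentile(omega, [25, 75], axis=1, keepdims=True)
    outliers = omega > q3 + 1.5 * (q3 - q1)
    # Step 3: rank neurons by how often they are outliers across images.
    freq = outliers.sum(axis=0)
    # Step 4: keep the top-b most frequently relevant neurons.
    return list(np.argsort(freq)[::-1][:b])

def traverse(layers, top_neurons, relevance_of, b=3):
    """Wrapper: walk from the top layer downwards, expanding relevant sets.
    relevance_of(layer, n) must return the (num_images, K) matrix of
    Equation 4 relevances of `layer` for neuron n of the layer above."""
    graph, frontier = {}, set(top_neurons)
    for layer in layers:  # ordered from top to bottom
        next_frontier = set()
        for n in frontier:
            children = select_relevant(relevance_of(layer, n), b)
            graph[(layer, n)] = children
            next_frontier.update(children)
        frontier = next_frontier  # S <- union of all B^n
    return graph
```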
We use \(f^{fc}\) to refer to a fully-connected layer, and \(f^{b_{i}c_{j}}\) to refer to the \(block_{i}conv_{j}\) convolutional layer. ### 4.1 QUANTITATIVE JUSTIFICATION OF OUTLIERS AS RELEVANT NEURONS According to the \(\omega\) values, there is a consistent presence of a small number of relevant outlier neurons (less than \(6\%\)) - see Figure 2b. The outlier neurons are not identical for different input stimuli of the same class, as can be seen in Figure 2a. This indicates that our approach is not identical to simply selecting the neurons with the highest weights, which would yield constant results across images. On the contrary, the frequency of relevance follows a power-law distribution. This suggests that the most frequently occurring neurons could be the main "drivers" (the most pertinent) for the class activation, while the other relevant neurons pick up nuances or modulate the main drivers. In other words, the most relevant neurons form a basis, which is transformed by the less relevant neurons. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 2: (a) Barplot representing the frequency of occurrence of the positive outliers in layer \(f^{fc2}\) for the Hammerhead shark class - the y-axis represents the number of images in which a neuron was a positive outlier. There are 189 unique outliers (4.6% of the total 4096 neurons). Notice that the first 3 outliers occur in almost all images and that the relevance follows a power-law distribution. (b) An amplified heatmap visualisation of outlier neurons: the \(\omega_{n,i,k}^{fc2}\) values for 4 Hammerhead shark images are cubed. The x-, y-, z-axes represent respectively \(k\), \(n\), and the number of images where \(k\) is relevant for \(n\). Observe that the images share exactly the same small number of positive and negative outliers with varying degrees of intensity. </center> In some sense, we are finding the inverse of Szegedy et al. (2013)'s adversarial examples - high-probability low-dimensional "pockets" in the manifold. What is surprising about Figure 2b is not only that it is consistent with the power-law distribution result, but also that different images of the same class share peaks and troughs at exactly the same neurons. We hypothesise that Figure 2b is a visualisation of part of the PDR for a Hammerhead shark in layer \(f^{fc2}\). To investigate this hypothesis, we generate dependency graphs of relevant neurons for 10 different classes, which we describe next. ### 4.2 DEPENDENCY GRAPH Figure 3 illustrates two examples of the output of step-wise sensitivity analysis, with two important observations. First, the graphs in Figures 3a & 3b for different classes may share significant similarities. For example, the subgraphs of both Hammerhead shark and Egyptian cat starting from \(f_{1820}^{fc2}\) reveal similar most relevant neurons (e.g., \(f_{1820}^{fc2}, f_{3116}^{fc1}, f_{2053}^{fc1}\)), identical subgraph structure (e.g., \(f_{3116}^{fc1}\) and its top 3 most relevant neurons) and similar subgraph structure (e.g., \(f_{2053}^{fc1}\), where only one relevant child neuron is shared - \(f_{2053}^{b5c3}\)). Interestingly, the two classes share 6 out of the 8 most relevant activation maps in block_5_conv3: 41, 49, 155, 256, 313, 335. This implies that the dependency graphs open the frontier for pattern matching and analysis of network motifs (Milo et al., 2002) across classes.
We hypothesise that the emerging network motifs will give a strong indication of the positions of the various PDRs within a layer and of how upper PDRs leverage them. In Section 4.2.1 we present further evidence for the existence of network motifs. Second, Figures 3a & 3b show that both dependency graphs share multiple incoming connections to the very same neuron (\(f_{155}^{b5c3}\)). It is surprising that this is the first point at which a neuron is shared within a class. Consequently, the interpretation of the dependency graph enables us to infer an additional relevance metric for a neuron - its inter-connectedness, measured by the number of incoming edges. Therefore, step-wise sensitivity analysis allows researchers to focus analysis and interpretation efforts on the most pertinent regions of a DNN. For example, Figures 4b & 4c display targeted visualisation through sensitivity analysis of the especially relevant neuron \(f_{155}^{b5c3}\). Had we relied on a single visualisation, we would have erroneously presumed that the neuron perfectly encodes either the idea of a shark or of a cat. However, step-wise sensitivity analysis exposes that the neuron is equally important for both classes, and forms a part of a shared sub-structure. Therefore, it must encode a more abstract concept. Exploring a neuron or activation map in isolation is simplistic. In reality, the semantics are expressed within the combination of neurons within the PDR. In future work, we will apply activation maximisation (Erhan et al., 2009) to an entire PDR to investigate its semantic properties. <--- Page Split ---> In order to investigate the generalisability of the results in Figure 3, we transformed the 10 dependency graphs into bag-of-nodes feature representations. Then we performed Ward's method (Murtagh & Legendre, 2011) for hierarchical agglomerative clustering with cosine distance similarity to group the dependency graphs. Interestingly, closer inspection of Figure 4a reveals three clusters of most similar dependency graphs: 1) hammerhead and tiger shark; 2) African and Indian elephant; 3) German Shepherd and great white shark. Naturally, the first two clusters consist of the most semantically and visually similar classes - this supports the validity of our approach. Surprisingly, cluster 3) suggests an unnatural similarity between animals. One possible explanation could be that both of these classes share a PDR encoding sharp teeth. Another interesting observation is that the graphs are separated into two general clusters - one consisting of all the sharks, the German Shepherd, and the Persian cat; and another consisting of the elephants, the Labrador Retriever and the Egyptian cat. One natural semantic separation between the two groups could be the degree of "danger". Finally, as expected, most of the lower-layer activation maps are shared across all classes since they encode very abstract features. At the same time, while there is a large proportion of abstract neurons shared across all dependency graphs, the upper dendrogram in Figure 4a depicts other, more specialised neuron clusters, which are idiosyncratic to their semantic groupings. These three observations support the hypothesis that the dependency graphs reveal semantically meaningful groups of neurons across classes that form PDRs. Identifying and analysing such specific sub-graphs is a non-trivial graph theory problem, which we leave for future work.
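For reference, the bag-of-nodes clustering described above could be sketched as follows. The input format is our own assumption, and since SciPy's Ward linkage presumes Euclidean distances, this sketch substitutes average linkage over cosine distances for the paper's pairing of Ward's method with cosine similarity.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_dependency_graphs(graphs, num_clusters=3):
    """Cluster dependency graphs given as {class_name: set_of_neuron_ids}."""
    classes = sorted(graphs)
    vocab = sorted(set().union(*graphs.values()))
    # Binary bag-of-nodes matrix: one row per class, one column per neuron.
    X = np.array([[n in graphs[c] for n in vocab] for c in classes], float)
    Z = linkage(pdist(X, metric="cosine"), method="average")
    labels = fcluster(Z, num_clusters, criterion="maxclust")
    return dict(zip(classes, labels))
```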
Once we are able to accurately extract the shared sub-graphs, we will be able to provide a hierarchical explanation behind a particular decision. For example, the classification was a shark because the network detected a sea, sharp teeth, a fish tail and a long fin. ## 5 CONCLUSIONS & FUTURE WORK In this paper we make three contributions to the area of interpreting learned representations. First, we are the first to propose a statistical DNN interpretability method that aggregates results of an instance-specific method to gain model-centric results. Second, we build a dependency graph of the relevant neurons to gain a finer-grained understanding of the nature and interactions of a DNN's internal features. Third, we propose three new relevance metrics to identify salient neurons: 1) the outlier weights of a linear approximation of the neuron activation; 2) the ranking score of a neuron based on its frequency as an outlier across multiple input stimuli; 3) the inter-connectedness of a neuron within the dependency graph. Our method modifies sensitivity analysis into step-wise sensitivity analysis, which applies the same linear approximation, but on a layer-by-layer basis as opposed to the usual output-input basis. We demonstrate that the results generalise by illuminating shared subgraphs across 10 classes. These subgraphs can be grouped into semantic clusters since they contain the quintessential neurons of a class. Step-wise sensitivity analysis opens an opportunity for further and more focused explorations of the internal operations of DNNs. Although it can still be applied to a single image to gain instance-specific interpretability, we argue that to gain a statistically viable result it is important to conduct the analysis both across classes and across images. In the future, we will apply the approach to facilitate error explanation and decision justification on a much lower level by providing semantic interpretation of the discovered PDRs through visualisation approaches. We will demonstrate the features that make the difference between semantically similar classes and quantify the interpretability of the resulting PDRs using concept segmentation as in (Fong & Vedaldi, 2018; Bau et al., 2017). Further, we will investigate the suitability of our approach for defending against adversarial attacks. Finally, we will explore the possibility of using the dependency graphs to prune the network in order to perform network compression, or to extract binary classifiers for particular classes in the form of dependency graphs to distil the network. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 3: Dependency graphs for the hammerhead shark and Egyptian cat classes showing the relevant neurons for the penultimate 4 layers, excluding the pooling layer. The graphs expose the links only between the relevant neurons, with a branching factor of 3. Notice the multiple connections to \(f_{155}^{b5c3}\) (red circle). Notice also the similarities in the subgraphs of \(f_{1820}^{fc2}\) (blue rectangle) for both the shark and cat classes. </center> ![](images/8_1.jpg) <center>Figure 4: a) A clustered heatmap (clustermap) of each of the 10 dependency graphs (spanning the entire network) in a bag-of-nodes features representation. The x-, y-, z-axes respectively represent neuron, class, and presence of the neuron in the dependency graph - red present, blue absent. The dendrograms on the side indicate the relative distance between points and clusters. Notice the three small clusters of semantically similar classes on the side.
b) & c) Heatmaps representing standard sensitivity analysis of activation map \(f_{155}^{b5c3}\), indicating the relevant regions of the image from the corresponding class. Red and blue respectively correspond to positive and negative contributions to the activation. </center> ## REFERENCES Pulkit Agrawal, Ross Girshick, and Jitendra Malik. Analyzing the performance of multilayer neural networks for object recognition. In European Conference on Computer Vision, pp. 329-344. Springer, 2014. Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7):e0130140, 2015. David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. Journal of Machine Learning Research, 11(Jun):1803-1831, 2010. <--- Page Split ---> David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 3319-3327. IEEE, 2017. François Chollet et al. Keras. https://github.com/fchollet/keras, 2015. F. Doshi-Velez and B. Kim. Towards a rigorous science of interpretable machine learning. ArXiv e-prints, February 2017. Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. University of Montreal, 1341:3, 2009. Ruth Fong and Andrea Vedaldi. Net2Vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks. arXiv preprint arXiv:1801.03454, 2018. Raghavendra Kotikalapudi and contributors. keras-vis. https://github.com/raghakot/keras-vis, 2017. Will Landecker, Michael D Thomure, Luís MA Bettencourt, Melanie Mitchell, Garrett T Kenyon, and Steven P Brumby. Interpreting individual classifications of hierarchical networks. In Computational Intelligence and Data Mining (CIDM), 2013 IEEE Symposium on, pp. 32-38. IEEE, 2013. Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John Hopcroft. Convergent learning: Do different neural networks learn the same representations? In Proceedings of the International Conference on Learning Representations (ICLR), 2016. Mengchen Liu, Jiaxin Shi, Zhen Li, Chongxuan Li, Jun Zhu, and Shixia Liu. Towards better analysis of deep convolutional neural networks. IEEE Transactions on Visualization and Computer Graphics, 23(1):91-100, 2017. Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5188-5196, 2015. Ron Milo, Shai Shen-Orr, Shalev Itzkovitz, Nadav Kashtan, Dmitri Chklovskii, and Uri Alon. Network motifs: simple building blocks of complex networks. Science, 298(5594):824-827, 2002. Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition, 65:211-222, 2017. Fionn Murtagh and Pierre Legendre. Ward's hierarchical clustering method: clustering criterion and agglomerative algorithm. arXiv preprint arXiv:1111.6285, 2011. Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev. The building blocks of interpretability. Distill, 2018.
doi: 10.23915/distill.00010. https://distill.pub/2018/building-blocks. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144. ACM, 2016. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y. Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. arXiv preprint arXiv:1704.02685, 2017. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556. <--- Page Split ---> Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013. Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. John W. Tukey. Exploratory Data Analysis. Addison-Wesley, Reading, Mass., 1977. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818-833. Springer, 2014. Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. Top-down neural attention by excitation backprop. In European Conference on Computer Vision, pp. 543-559. Springer, 2016. Luisa M. Zintgraf, Taco S. Cohen, Tameem Adel, and Max Welling. Visualizing deep neural network decisions: Prediction difference analysis. CoRR, abs/1702.04595, 2017. URL http://arxiv.org/abs/1702.04595. <--- Page Split ---> ![](images/11_0.jpg) <center>Figure 5: A heatmap of all activation maps at layer \(f^{b5c3}\) relevant to neuron \(f_{1820}^{fc2}\) for the respective classes. The red heatmaps indicate the absence of pixels relevant to a particular activation map (best viewed digitally). </center> ## 6 APPENDIX ### 6.1 INVESTIGATION OF THE SHARED FEATURES An important claim of our paper is that a comparison of activation maps across images can lead to a better understanding of the effect of a feature. For instance, rows 1 & 5, column 256 in Figure 5a could lead to the erroneous conclusion that \(f_{256}^{b5c3}\) detects shark tails, whereas it also activates for the shark head and for the front and rear parts of cats. Figure 5b suggests that neurons \(f_{49}^{b5c3}\) and \(f_{335}^{b5c3}\) are complementary. They activate for similar regions; however, they each capture different parts (e.g., a cat's ear - row 1; the region of \(f_{335}^{b5c3}\) is contained within that of \(f_{49}^{b5c3}\), but has a much sharper boundary). In future work we will explore the exact relationship between such neurons. <--- Page Split --->
## ABSTRACT In this paper, we introduce a novel method, called step- wise sensitivity analysis, which makes three contributions towards increasing the interpretability of Deep Neural Networks (DNNs). First, we are the first to suggest a methodology that aggregates results across input stimuli to gain model- centric results. Second, we linearly approximate the neuron activation and propose to use the outlier weights to identify distributed code. Third, our method constructs a dependency graph of the relevant neurons across the network to gain fine- grained understanding of the nature and interactions of DNN's internal features. The dependency graph illustrates shared subgraphs that generalise across 10 classes and can be clustered into semantically related groups. This is the first step towards building decision trees as an interpretation of learned representations. ## 1 INTRODUCTION Deep Neural Networks (DNNs) have impressed the scientific community because of their performance and the variety of domains to which they can be applied. However, DNNs are difficult to interpret because of their highly complex non- linear and interconnected nature. The lack of transparency is a threefold problem. First, it inhibits adoption, especially in industries under heavy regulation and with a high cost of errors. Second, it makes it difficult to debug existing models and hampers development progress. Third, it prevents us from utilising the insights gained from the models for further knowledge discovery. DNN interpretability is loosely defined, and it is also referred to as Explanatory AI, Understandable Machine Learning, and Deep Visualisation. DNN interpretability can be gained from a humaninterpretable explanation of the reasons behind the network's choice of output (Ribeiro et al., 2016; Doshi- Velez & Kim, 2017). In a DNN the basis for a decision is encoded in features either as one neuron - local representation; or as a set of neurons - partially- distributed representation (PDR) (Li et al., 2016; Fong & Vedaldi, 2018). The identification of PDRs and their interactions remains the main hindrance to end- to- end interpretability systems (Olah et al., 2018). Once identified, PDRs will enable us to give much finer- grained explanations (e.g., an image is a shark because the network detected a sea, sharp teeth, a long fin, etc.). In this paper, we introduce our novel technique, step- wise sensitivity analysis (SSA), with which we make 3 contributions towards this vision: SSA 1) identifies PDRs; 2) illustrates interactions between related PDRs; 3) applies a novel perspective for interpretability - statistics across multiple input stimuli (instance- specific) to gain a general understanding of the DNN operation (model- centric). The method produces statistical topological interpretability. That is, we analyse the network's properties as a directed graph over various inputs to produce a dependency graph between neurons. The dependency graph highlights the relationships between adjacent layers that are pertinent to the decision, and how these relationships are formed in each layer and across all layers to form a feature representation. 
The remainder of this paper is organised as follows: Section 2 reviews related work; in Section 3 we introduce our technique for building the cross- input layer- wise dependency graph; Section 4 <--- Page Split ---> illustrates three novel neuron relevance metrics stemming from the dependency graph and how they can be used to determine and interpret neurons of interest; Section 5 makes concluding remarks and suggests how our work can be applied and extended. ## 2 BACKGROUND In this section, we organise the existing effort dedicated to DNN interpretability, based on its three main limitations: functional vs. topological, instance- specific versus model- centric, single neuron versus layer analysis. We then compare step- wise sensitivity analysis with other works that address the same limitations. First, recent attempts focus primarily on the input- output relationship, considering the network as a black- box function. These methods are classified as functional since they treat the entire network as a black- box function with an input, output and parameters, while the topological approach considers the structure of the network. One of the most investigated areas in the functional vein is sensitivity analysis, which produces a heatmap, illustrating the input parts relevant to the output decision. Examples include deconvolution (Zeiler & Fergus, 2014), sensitivity image- specific class saliency visualisations (Simonyan et al., 2013), guided- back propagation (Springenberg et al., 2014), and predictive difference analysis (Zintgraf et al., 2017). In contrast to these functional methods, Section 3 demonstrates how the sensitivity analysis technique can be modified to gain more granular information across layers. The second limitation is that there are two completely opposite types of visualisation approaches for interpretability - model- centric or instance- specific. The model- centric approaches, such as activation maximisation (Erhan et al., 2009) and inversion (Mahendran & Vedaldi, 2015), synthesise a prototype image to explain a neuron. This can be generalised to every data point, but does not contain enough details to reason about particular mistakes or edge- cases. On the other hand, instance- specific methods operate on the level of a single instance, but this fine- granularity cannot be used to elicit principles applicable across a wider set of instances (e.g., the relevance of regions in a single image). Our methodology iterates over instance- specific results, and we compare the results for different classes to illustrate similar "thought patterns", that is, the relevance of filters is shared across classes. Thus, our approach can be viewed as both, instance- specific and model- centric. Third, current approaches either explore a single neuron in isolation, such as sensitivity analysis and activation maximisation, or the entire layer, such as inversion (Mahendran & Vedaldi, 2015). The former approach assumes purely local representations, while the latter assumes fully- distributed representations. However, recent findings (Li et al., 2016; Fong & Vedaldi, 2018; Agrawal et al., 2014; Bau et al., 2017) suggest that every layer consists of a mixture of local and partially- distributed representations. Our approach addresses the last two limitations, in particular, it interprets the internal DNN structure across layers of the network to identify the different types of representations in each layer. 
### 2.1 RELATED WORK #### 2.1.1 RELEVANCE SCORE TECHNIQUES Net2Vec (Fong & Vedaldi, 2018) builds on network dissection (Bau et al., 2017) to propose a method for selecting and combining relevant filters, and providing an explanation for filters in conjunction with each other, thus identifying and interpreting PDRs. This method determines the relevance of neurons by optimising the combinations of filters for classification and segmentation on proxy ad hoc tasks. In contrast, our method can ascertain the neuron relevance using the original data set, which makes it more generalisable as it is not necessary to compile explanatory datasets for various problems. On the other hand, similarly to our approach, in Landecker et al. (2013) it is argued that computing the relevance in a layer- by- layer fashion yields more fine- grained results, and draws the distinction between functional and topological approaches. In Bach et al. (2015) this idea is incorporated into a sensitivity analysis technique - Layer- wise- relevance propagation (LRP). Deep Taylor Decomposition (Montavon et al., 2017) generalises the output of such sensitivity analysis techniques to an importance score, which is computed in a topological layer- wise fashion. DeepLift (Shrikumar et al., 2017) proposes an alternative functional approximation method for computing the relevance score by comparing the activation of a neuron to a reference one. Our work is similar to the rel <--- Page Split ---> evance score techniques in that it uses the original sensitivity analysis approach as an importance score metric. In addition and similarly to the topological approaches, we suggest computing the importance score at each layer. However, we propose that the network's decision is driven by a partially- distributed representation rather than the entire layer. Hence, instead of distributing the relevance across all neurons, we only redistribute the relevance to a small number of outliers. #### 2.1.2 CONSTRAINED REDISTRIBUTION Our work is comparable to excitation backpropagation (Zhang et al., 2016), which distributes the relevance to a subset of neurons. There are two important differences. First, excitation backpropagation focuses on improving the heatmap quality, while we investigate how to discover PDRs. Second, while excitation backpropagation uses a probabilistic winner- take- all sampling approach that is limited to neurons with ReLU activations and positive weight connections between adjacent layers, we deploy a more generalisable linear Taylor approximation and statistical analysis over multiple inputs to restrict the relevant neurons. #### 2.1.3 VISUALISATION Our method resembles the approach in (Liu et al., 2017) in that we also generate a DAG and augment it with additional visualisations to produce an explanation. The main difference resides in that they use clustering to reduce the visual clutter, and group neurons together. In contrast, we propose a novel method to select only the relevant paths for investigation. ## 3 STEP-WISE SENSITIVITY ANALYSIS Our interpretability method is based on the sensitivity analysis technique by Baehrens et al. (2010) that was first applied to DCNNs by Simonyan et al. (2013) to produce image- specific class saliency visualisations. Formally, given an image \(\mathbf{I}_{0}\) , a representation function \(\Phi :\mathbb{R}^{H\times W\times C}\to \mathbb{R}^{d}\) such that \(\Phi (\mathbf{I}) = \mathbf{o}\) , and a neuron \(n\) - approximate the activation of \(\mathbf{o}_{n}\) with a linear function. 
In the neighbourhood of \(\mathbf{I}_{i}\) , this is achieved by computing the first- order Taylor expansion: \[\begin{array}{r}{\pmb{o}_{n} = \pmb{\Phi}_{n}(\mathbf{I})\approx \pmb{\omega}^{T}\mathbf{I} + b} \end{array} \quad (1)\] where \(\omega\) is the gradient of \(\Phi_{n}\) with respect to an image \(\mathbf{I}\) . The function is evaluated at image \(\mathbf{I}_{i}\) : \[\omega = \frac{\partial\Phi_{n}}{\partial\mathbf{I}}\Big|_{\mathbf{I}_{i}} \quad (2)\] This formulation allows us to interpret the magnitude of the values of \(\omega\) as an importance metric corresponding to each pixel. In other words, these values indicate which pixels need to be changed the least to change \(\Phi (\mathbf{I})\) such that \(\mathbf{o}_{n}\) (corresponding to a classification decision) is increased the most. ### 3.1 SINGLE BACK-PROPAGATION STEP Sensitivity analysis performs a complete back- propagation pass to compute \(\omega\) in equation 2. The end result is a class saliency map, which is particularly useful to identify the image regions most pertinent to the network's decision. However, this is a very coarse- grained explanation since it only analyses the input- output relationship. There is no information regarding the effect of particular layers and neurons on the selection of the relevant image regions. We propose a much more fine- grained analysis based on the simple hypothesis that sensitivity analysis can be used in an analogous way to determine the relevance between adjacent layers. Instead of trying to approximate \(\mathbf{o}_{n}\) directly, we consider \(\Phi\) to be defined as the successive composition of smaller functions that represent the transformations of data between layers: \[\begin{array}{r l} & {\Phi (\mathbf{I}) = f^{l}(\Phi^{l - 1}(\mathbf{I}))}\\ & {\qquad = f^{l}\circ f^{l - 1}\circ f^{l - 2}\dots \circ f^{1}(\mathbf{I})} \end{array} \quad (3)\] where \(l = 1\dots L\) , \(L\) is the network's depth, and each layer denoted as \(f^{l}:\mathbb{R}^{d^{\prime}}\to \mathbb{R}^{d}\) represents the operation applied by layer \(l\) , when \(d^{\prime}\) is the output dimensionality of the input layer \(f^{l - 1}\) and \(d\) is <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 1: (a) A sketch of how Step-wise Sensitivity Analysis can be used to provide interpretation for a shark prediction (for actual output examples, see Fig. 3). Each step identifies PDRs of relevant neurons, and for each PDR recursively traverses downwards. (b) Schematic Representation of Step-wise sensitivity analysis representing the two novelties in the single step between adjacent layers. First, the relevant neurons are the ones with positive outlier values in the distribution of \(\omega\) , represented with a boxplot \((\omega \sim)\) . Second, the analysis is aggregated across instance-specific inputs to gain model-centric results. </center> the output dimensionality of layer \(l\) . Starting with neuron \(n\) from the top layer \(f_{n}^{l}\) , we conduct only the last step of back- propagation to compute the \(\omega^{l - 1} \in \mathbb{R}^{d'}\) values, where the node of the network under consideration is \(f^{l - 1} \in \mathbb{R}^{d'}\) . We call our approach Step- wise sensitivity analysis (SSA) since it iteratively applies sensitivity analysis through the computation of a single back- propagation step. 
The result of a single back- propagation step can be seen as performing one step of step- wise sensitivity analysis between a higher layer \(l\) and a lower one \(j\) : \[\omega_{n,i,:}^{l} = \frac{\partial f_{n}^{l}\left(\Phi^{j}(\mathbf{l})\right)}{\partial\mathbf{l}}\bigg|_{\mathbf{l}_{i}} \quad (4)\] The \(\omega\) values now represent the neurons which have to be changed the least to affect the upper layer's neuron activation the most, and as such they can be treated as the relevance scores for \(f_{n}^{l}\) of all lower layer neurons \(f^{l - 1}\) . A large positive \(\omega_{n,i,k}^{l}\) value means the neuron \(f_{k}^{l - 1}\) contributes substantially to the activation of \(f_{n}^{l}\) , while a large negative \(\omega_{n,i,k}^{l}\) value inhibits the activation. The ability to identify the positively and negatively contributing neurons in layer- by- layer fashion is the first step towards understanding the internal representation structure and identifying partially- distributed representations. This can be accomplished by using Step- wise sensitivity analysis and the resulting \(\omega^{l}\) to guide a hierarchical traversal of the network layers. Next, we formally describe our method. ### 3.2 THE METHOD The main contribution of our algorithm is the granularity with which it can illuminate the internal workings of a DNN. We generate a directed acyclic graph that spans the entire network. We call it a dependency graph because it increases the understanding of exactly how the activation of higher output layers depends on lower input layers. It illustrates how the input is transformed in each of the network's layers and highlights "paths" that, when followed, identify the building blocks of higher level features. For example, our method can take us much closer to the ability to say that the concept of a shark is encoded as combination of a fin, body, and tail feature (see Fig. 1a). Our step- wise sensitivity analysis method is separated into two parts. The first (wrapper) part traverses the network and maintains the input to iteratively apply the second part of the method. The second (step) part applies step- wise sensitivity analysis to identify relevant neurons, as specified in Algorithm 1. <--- Page Split ---> Algorithm 1 Step- wise sensitivity analysis: Identifying partially- distributed representations INPUT: DNN classifier \(\Phi\) , a layer \(f^{l} \in \mathbb{R}^{d}\) from \(\Phi\) , a set of relevant neurons \(n \in \mathbb{S}\) , and a set of images \(\mathbf{I}_{i} \in \mathbb{I}\) . STEP 1: Compute relevance of neurons in layer \(f^{l - 1}\) for each \(n\) and \(\mathbf{I}_{i}\) so that if \(f^{l - 1}\) is a: 1. Fully-connected layer: stack results into a relevance tensor \(\omega_{n,i,:}^{l} \in \mathbb{R}^{|\mathbb{S}| \times K}\) ; 2. Convolutional layer: spatially average the output volume tensor \(\omega_{n,i,:}^{l}\) into a relevance tensor \(\omega_{n,i,:}^{l} \in \mathbb{R}^{|\mathbb{S}| \times |\mathbb{I}| \times K}\) ; 3. Pooling-layers: directly compute for \(l - 2\) : \(\omega^{l} = \nabla_{f^{l - 2}} f^{l}|_{\mathbf{I}_{i}}\) STEP 2: Select outliers as relevant neurons using \(1.5 \times\) Inter Quartile Range STEP 3: Rank relevant neurons based on their relevance frequency across images \((\omega_{n,i,:}^{l})\) . STEP 4: Select top \(b\) relevant neurons, where \(b\) is a branching factor. OUTPUT: \(b\) relevant neurons for each distinct \(n\) in \(\mathbb{S}\) . The basic idea of our step- wise sensitivity analysis is illustrated in Fig. 1a. 
Given a DNN classifier \(\Phi\) and a set of relevant neurons \(n \in \mathbb{S}\) , start from the top layer and follow Algorithm 1 to produce a set of \(b\) relevant neurons \(-\mathbb{B}^{n}\) , for each distinct \(n\) in \(\mathbb{S}\) . Next, set \(\mathbb{S}\) to the union of all relevant neurons \(\mathbb{B}^{n}\) ( \(\mathbb{S} \leftarrow \bigcup_{n} \mathbb{B}^{n}\) ) for the lower layer, and repeat until the input layer. For computational efficiency, the magnitude of \(b\) is a threshold for the cardinality of each \(\mathbb{B}^{n}\) , thus discarding a proportion of potential relevant neurons. We believe that \(b\) is an important hyper- parameter since it limits the size of potential PDRs, which recent studies indicate to be typically between 8 and 50 neurons (Fong & Vedaldi, 2018). We detail each of the steps in Algorithm 1 next. ## STEP 1: COMPUTE RELEVANCE TENSOR Input: This step requires a network \((\Phi)\) , a layer \(f^{l}\) , a neuron \(n \in f^{l}\) , and an image \(\mathbf{I}_{i}\) . Output: Computes the relevance score of neurons in layer \(f^{l} - 1\) with respect to a neuron \(n\) in layer \(f^{l}\) as a gradient at \(\mathbf{I}_{i}\) using Equation equation 4. Essentially, this produces the relevance of all neurons in layer \(f^{l} - 1\) to the activation of neuron \(n\) . Method: The relevance for DCNN is computed differently depending on the type of layer \(f^{l}\) . If \(f^{l}\) is fully- connected, the result is a relevance vector \(\omega_{n,i,:}^{l} \in \mathbb{R}^{|f^{l - 1}|}\) . Repeating this process for all images and neurons in \(\mathbb{S}\) yields a relevance tensor \(\omega^{l}\) . If \(l\) is a convolutional layer, the result of Equation equation 4 is a 5D relevance tensor \(\omega_{n,i,:}^{l} \in \mathbb{R}^{H \times W \times K}\) , where \(H\) , \(W\) , \(K\) are respectively the height, width, and number of activation maps in \(l - 1\) . Since every activation map \(k\) is produced by convolving identical weights onto a lower layer activation map \(p\) , \(k\) represents the existence of an identical feature across \(p\) . Hence, the vector \(\omega_{n,i,h,w,:}^{l}\) represents the relevance of all lower level activation maps (features) at a location \((h, w)\) to the activation of \(n\) . Since we are interested in the relative importance of a feature, we perform spatial- averaging over all locations \((h, w)\) to convert \(\omega_{i}^{f^{l}}\) into a relevance vector \(\omega_{i}^{f^{l}} \in \mathbb{R}^{K}\) , where each dimension indicates the relative importance of an activation map across locations. This formulation enables us to repeat the process for all images, neurons and again obtain a 3D relevance tensor \(\omega^{l}\) . The pooling layers can be seen as a filter of their predecessors since \(\frac{d f^{l}}{d \mathbf{I}_{i}} = c \times \frac{d f^{l - 1}}{d \mathbf{I}_{i}}\) , where \(c \in \{0,1\}\) . Hence, if \(f^{l - 1}\) is a pooling layer we compute the relevance tensor directly w.r.t \(l - 2\) : \(\omega^{l} = \nabla_{f^{l - 2}} f^{l}|_{\mathbf{I}_{i}}\) . ## STEP 2: OUTLIER DETECTION Input: This steps requires a relevance tensor \(\omega^{l}\) . Output: The result identifies neurons relevant for neuron \(n\) in layer \(l - 1\) . Method: Our preliminary experiments indicated that each row \(\omega_{n,i,:}^{l}\) follows a normal distribution, and consistently exhibits a small number of outliers across \(i\) (see Figure 1b). Therefore, we make two simplifying assumptions. 
First, we assume these outliers are the only relevant neurons. Second, we choose to focus only on the positive \(\omega\) outlier values and leave the analysis of negative outliers for future work. Finally, we use Tukey's fences (the \(1.5 \times\) Inter-Quartile Range rule) for outlier detection (Tukey, 1977) to select relevant neurons from each row \(\omega_{n,i,:}^{l}\). Figure 1b illustrates this step.

## STEP 3: RANKING

Input: Outliers in \(\omega_{n,i,:}^{l}\).

Output: A relevance ranking of each lower-layer neuron.

Method: We use the outlier-detection procedure to detect relevant neurons in every row. We rank the neurons by the number of image columns \(i\) of \(\omega_{n,i,k}^{l}\) in which they appear as relevant, resulting in a relevance ranking for each \(k \in f^{l-1}\).

## STEP 4: SELECT

Input: The relevance ranking for each \(k \in f^{l-1}\).

Output: \(b\) relevant neurons for each distinct \(n\) in \(\mathbb{S}\).

Method: Select the top \(b\) most frequently relevant neurons \(\mathbb{B} \subseteq f^{l-1}\), where \(b = |\mathbb{B}|\) is the branching factor.
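Steps 2-4 reduce to a few lines of numpy. The sketch below is a simplified illustration that considers positive outliers only (as in our experiments); the function and variable names are our own.

```python
import numpy as np

def top_b_relevant(omega, b=3):
    """Steps 2-4: Tukey's fences per image, frequency ranking, top-b selection.

    omega: relevance matrix for one upper neuron n, shape (num_images, K).
    Returns indices of the b lower-layer neurons most frequently relevant.
    """
    counts = np.zeros(omega.shape[1], dtype=int)
    for row in omega:                       # one row per image I_i
        q1, q3 = np.percentile(row, [25, 75])
        fence = q3 + 1.5 * (q3 - q1)        # upper Tukey fence (positive outliers)
        counts += (row > fence).astype(int)
    return np.argsort(counts)[::-1][:b]     # rank by relevance frequency
```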
## 4 RESULTS AND DISCUSSION

We now demonstrate how our method can be applied to the ImageNet dataset (Russakovsky et al., 2015) and the 16-layer VGG network (VGG16) (Simonyan & Zisserman, 2014). We use the publicly available pre-trained model implemented in the deep learning framework keras (Chollet et al., 2015) and modify the keras-vis (Kotikalapudi & contributors, 2017) implementation of sensitivity analysis, which uses the Guided-Backpropagation algorithm (Springenberg et al., 2014). We perform experiments with 10 classes and select 100 images per class from the training set for which VGG16 assigns more than \(99\%\) of the probability mass to the correct class. The execution takes approximately 12 hours on an NVIDIA Tesla K80 GPU to traverse the entire network for one class with a branching factor of 3; most of this time is spent in the convolutional layers, which have a larger number of unique neurons. Since our approach's worst-case time complexity is \(O(b^{d})\), where \(b\) is the branching factor and \(d\) is the depth (\(d = 22\) in the case of VGG16), we constrain \(b\) due to computational limitations. The approach is still practical since it is not designed to be executed every time an explanation is necessary, just as a network is not retrained every time before a prediction.

In Section 4.1 we show the occurrence of outliers within the \(\omega^{l}\) values to demonstrate how our method chooses relevant neurons, and we provide a quantitative justification for the choice of heuristics used to detect relevant outliers. In Section 4.2 we demonstrate our methodology on two classes, show that the results generalise across 10 classes, and illustrate the importance of statistical interpretability. The novelty of our approach is that it generates a dependency graph of relevant neurons. In contrast to other approaches, we study the process of feature formation and the interdependence of relevant features by following paths through the graph. This allows us to selectively visualise relevant activation maps through sensitivity analysis. We demonstrate not only an increase in the granularity of state-of-the-art interpretability capabilities, but also that an analysis of a single image can lead to false assumptions about the operation of the network, as demonstrated in Section 4.2.

We use \(f^{fc}\) to refer to a fully-connected layer, and \(f^{b_{i}c_{j}}\) to refer to the \(block_{i}conv_{j}\) convolutional layer.

### 4.1 QUANTITATIVE JUSTIFICATION OF OUTLIERS AS RELEVANT NEURONS

According to the \(\omega\) values, there is a consistent presence of a small number of relevant outlier neurons (less than \(6\%\)); see Figure 2b. The outlier neurons are not identical for different input stimuli of the same class, as can be seen in Figure 2a. This indicates that our approach is not equivalent to simply selecting the neurons with the highest weights, which would yield constant results across images. On the contrary, the frequency of relevance follows a power-law distribution. This suggests that the most frequently occurring neurons could be the main "drivers" (the most pertinent) of the class activation, while the other relevant neurons pick up nuances or modulate the main drivers. In other words, the most relevant neurons form a basis, which is transformed by the less relevant neurons. In some sense, we are finding the inverse of Szegedy et al. (2013)'s adversarial examples: high-probability low-dimensional "pockets" in the manifold.

![](images/6_0.jpg)
<center>Figure 2: (a) Barplot representing the frequency of occurrence of the positive outliers in layer \(f^{fc2}\) for the Hammerhead shark class; the y-axis represents the number of images in which a neuron was a positive outlier. There are 189 unique outliers (4.6% of the total 4096 neurons). Notice that the first 3 outliers occur in almost all images and that the relevance follows a power-law distribution. (b) A heatmap-amplified visualisation of outlier neurons: the \(\omega_{n,i,k}^{fc2}\) values for 4 Hammerhead shark images are cubed. The x-, y-, z-axes represent respectively \(k\), \(n\), and the number of images where \(k\) is relevant for \(n\). Observe that the images share exactly the same small number of positive and negative outliers, with varying degrees of intensity.</center>

What is surprising about Figure 2b is not only that it is consistent with the power-law distribution result, but also that different images of the same class share peaks and troughs at exactly the same neurons. We hypothesise that Figure 2b is a visualisation of part of the PDR for a Hammerhead shark in layer \(f^{fc2}\). To investigate this hypothesis, we generate dependency graphs of relevant neurons for 10 different classes, which we describe next.

### 4.2 DEPENDENCY GRAPH

Figure 3 illustrates two examples of the output of step-wise sensitivity analysis, with two important observations. First, the graphs in Figures 3a & 3b for different classes may share significant similarities. For example, the subgraphs of both Hammerhead shark and Egyptian cat starting from \(f_{1820}^{fc2}\) reveal similar most-relevant neurons (e.g., \(f_{1820}^{fc2}, f_{3116}^{fc1}, f_{2053}^{fc1}\)), identical subgraph structure (e.g., \(f_{3116}^{fc1}\) and its top 3 most relevant neurons), and similar subgraph structure (e.g., \(f_{2053}^{fc1}\), where only one relevant child neuron, \(f_{2053}^{b5c3}\), is shared). Interestingly, the two classes share 6 of the 8 most relevant activation maps in block_5_conv3: 41, 49, 155, 256, 313, and 335. This implies that the dependency graphs open the frontier for pattern matching and analysis of network motifs (Milo et al., 2002) across classes. We hypothesise that the emerging network motifs will give a strong indication of the positions of the various PDRs within a layer and of how upper PDRs leverage them. In Appendix 6.1 we present further evidence for the existence of network motifs.
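With the per-layer selection in place, the wrapper traversal that assembles the dependency graph can be sketched as follows; the networkx representation, the layer naming, and the `top_b_relevant` helper from the previous sketch are illustrative assumptions.

```python
import networkx as nx

def build_dependency_graph(layers, relevance_fn, top_neurons, b=3):
    """Wrapper traversal: expand top-layer neurons down to the input layer.

    layers: layer names ordered top to bottom, e.g. ["fc2", "fc1", "b5c3", ...]
    relevance_fn(layer, n): relevance matrix (num_images, K) for neuron n.
    top_neurons: initial set S of relevant neurons in layers[0].
    """
    graph = nx.DiGraph()
    frontier = set(top_neurons)
    for upper, lower in zip(layers[:-1], layers[1:]):
        next_frontier = set()
        for n in frontier:
            for k in top_b_relevant(relevance_fn(upper, n), b):
                graph.add_edge(f"{upper}_{n}", f"{lower}_{int(k)}")
                next_frontier.add(int(k))
        frontier = next_frontier            # S <- union of all B^n
    return graph
```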
Second, Figures 3a & 3b show that both dependency graphs contain multiple incoming connections to the very same neuron (\(f_{155}^{b5c3}\)); surprisingly, this is the first point at which a neuron is shared within a single class's graph. Consequently, interpreting the dependency graph enables us to infer an additional relevance metric for a neuron: its inter-connectedness, measured by the number of incoming edges. Step-wise sensitivity analysis therefore allows researchers to focus analysis and interpretation efforts on the most pertinent regions of a DNN. For example, Figures 4b & 4c display targeted visualisation, through sensitivity analysis, of the especially relevant neuron \(f_{155}^{b5c3}\). Had we relied on a single visualisation, we would have erroneously presumed that the neuron perfectly encodes either the idea of a shark or of a cat. However, step-wise sensitivity analysis exposes that the neuron is equally important for both classes and forms part of a shared sub-structure; therefore, it must encode a more abstract concept. Exploring a neuron or activation map in isolation is simplistic: in reality, the semantics are expressed by the combination of neurons within the PDR. In future work, we will apply activation maximisation (Erhan et al., 2009) to an entire PDR to investigate its semantic properties.

In order to investigate the generalisability of the results in Figure 3, we transformed the 10 dependency graphs into bag-of-nodes feature representations. We then performed hierarchical agglomerative clustering with the Ward method (Murtagh & Legendre, 2011) and cosine distance to group the dependency graphs. Closer inspection of Figure 4a reveals three clusters of most-similar dependency graphs: 1) hammerhead and tiger shark; 2) African and Indian elephant; 3) German Shepherd and great white shark. Naturally, the first two clusters consist of the most semantically and visually similar classes, which supports the validity of our approach. Surprisingly, cluster 3) suggests an unnatural similarity between animals; one possible explanation could be that both of these classes share a PDR encoding sharp teeth. Another interesting observation is that the graphs are separated into two general clusters: one consisting of all the sharks, the German Shepherd, and the Persian cat; and another consisting of the elephants, the Labrador Retriever, and the Egyptian cat. One natural semantic separation between the two groups could be the degree of "danger". Finally, as expected, most of the lower-layer activation maps are shared across all classes, since they encode very abstract features. At the same time, while a large proportion of abstract neurons is shared across all dependency graphs, the upper dendrogram in Figure 4a depicts other, more specialised neuron clusters that are idiosyncratic to their semantic groupings. These three observations support the hypothesis that the dependency graphs reveal semantically meaningful groups of neurons across classes that form PDRs. Identifying and analysing such specific sub-graphs is a non-trivial graph-theory problem, which we leave for future work.
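A minimal sketch of this clustering analysis is given below. Note one caveat: SciPy's ward linkage formally assumes Euclidean distances, so feeding it a precomputed cosine distance matrix, as we do here, should be read as a pragmatic approximation of the procedure described above; the `graphs` dictionary is hypothetical example data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

def bag_of_nodes(graphs):
    """Encode each dependency graph as a binary vector over all observed nodes.

    graphs: dict mapping class name -> networkx.DiGraph, e.g. the output of
    build_dependency_graph above for each class (hypothetical example data).
    """
    vocab = sorted(set().union(*[set(g.nodes) for g in graphs.values()]))
    index = {node: j for j, node in enumerate(vocab)}
    X = np.zeros((len(graphs), len(vocab)))
    for i, g in enumerate(graphs.values()):
        for node in g.nodes:
            X[i, index[node]] = 1.0
    return X, list(graphs.keys())

# X, labels = bag_of_nodes(graphs)
# Z = linkage(pdist(X, metric="cosine"), method="ward")  # see caveat above
# dendrogram(Z, labels=labels)
```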
Once we are able to accurately extract the shared sub-graphs, we will be able to provide a hierarchical explanation behind a particular decision; for example, the classification was a shark because the network detected a sea, sharp teeth, a fish tail, and a long fin.

## 5 CONCLUSIONS & FUTURE WORK

In this paper we make three contributions to the area of interpreting learned representations. First, we are the first to propose a statistical DNN interpretability method that aggregates the results of an instance-specific method to gain model-centric results. Second, we build a dependency graph of the relevant neurons to gain a finer-grained understanding of the nature and interactions of a DNN's internal features. Third, we propose three new relevance metrics to identify salient neurons: 1) the outlier weights of a linear approximation of the neuron activation; 2) the ranking score of a neuron based on its frequency as an outlier across multiple input stimuli; 3) the inter-connectedness of a neuron within the dependency graph. Our method modifies sensitivity analysis into step-wise sensitivity analysis, which applies the same linear approximation, but on a layer-by-layer basis as opposed to the usual output-input basis. We demonstrate that the results generalise by illuminating shared subgraphs across 10 classes. These subgraphs can be grouped into semantic clusters, since they contain the quintessential neurons of a class.

Step-wise sensitivity analysis opens an opportunity for further and more focused explorations of the internal operations of DNNs. Although it can still be applied to a single image to gain instance-specific interpretability, we argue that to gain a statistically viable result it is important to conduct the analysis both across classes and across images. In the future, we will apply the approach to facilitate error explanation and decision justification at a much lower level by providing semantic interpretation of the discovered PDRs through visualisation approaches. We will demonstrate the features that make the difference between semantically similar classes and quantify the interpretability of the resulting PDRs using concept segmentation as in (Fong & Vedaldi, 2018; Bau et al., 2017). Further, we will investigate the suitability of our approach for defending against adversarial attacks. Finally, we will explore the possibility of using the dependency graphs to prune the network in order to perform network compression, or to extract binary classifiers for particular classes in the form of dependency graphs to distil the network.

![](images/8_0.jpg)
<center>Figure 3: Dependency graphs for the hammerhead shark and Egyptian cat classes showing the relevant neurons in the penultimate 4 layers, excluding the pooling layer. The graphs expose the links only between the relevant neurons, with a branching factor of 3. Notice the multiple connections to \(f_{155}^{b5c3}\) (red circle). Notice also the similarities in the subgraphs of \(f_{1820}^{fc2}\) (blue rectangle) for both the shark and cat classes.</center>

![](images/8_1.jpg)
<center>Figure 4: a) A clustered heatmap (clustermap) of the 10 dependency graphs (spanning the entire network) in a bag-of-nodes feature representation. The x-, y-, z-axes respectively represent neuron, class, and presence of the neuron in the dependency graph (red: present, blue: absent). The dendrograms on the sides indicate the relative distance between points and clusters. Notice the three small clusters of semantically similar classes on the side.
b) & c) Heatmaps showing standard sensitivity analysis of activation map \(f_{155}^{b5c3}\), indicating the responding regions of an image from each corresponding class. Red and blue respectively correspond to positive and negative contributions to the activation.</center>

## 6 APPENDIX

### 6.1 INVESTIGATION OF THE SHARED FEATURES

An important claim of our paper is that comparing activation maps across images can lead to a better understanding of the effect of a feature. For instance, rows 1 & 5, column 256 in Figure 5a could lead to the erroneous conclusion that \(f_{256}^{b5c3}\) detects shark tails, while it also activates for the shark head and for the front and rear parts of cats. Figure 5b suggests that neurons \(f_{49}^{b5c3}\) and \(f_{335}^{b5c3}\) are complementary: they activate for similar regions, yet each captures different parts (e.g., a cat's ear in row 1; the region of \(f_{335}^{b5c3}\) is contained within that of \(f_{49}^{b5c3}\) but has a much sharper boundary). In future work we will explore the exact relationship between such neurons.
reject
Reject
3.333333
ICLR_2019_paper_0407
iclr
2019
# VISUAL IMITATION LEARNING WITH RECURRENT SIAMESE NETWORKS Anonymous authors Paper under double-blind review ## ABSTRACT People are incredibly skilled at imitating others by simply observing them. They achieve this even in the presence of significant differences in morphology and capability. Further, people are able to do this from raw perceptions of the actions of others, without direct access to the abstracted demonstration actions and with only partial state information. People therefore solve a difficult problem of understanding the salient features of both observations of others and the relationship to their own state when learning to imitate specific tasks. We can attempt to reproduce a similar demonstration via trial and error, and through this gain more understanding of the task space. To reproduce this ability, an agent would need to both learn how to recognize the differences between itself and some demonstration and, at the same time, learn to minimize the distance between its own performance and that of the demonstration. In this paper we propose an approach that uses only visual information to learn a distance metric between agent behaviour and a given video demonstration. We train a Recurrent Neural Network (RNN)-based Siamese model to compute distances in space and time between motion clips while training a Reinforcement Learning (RL) policy to minimize this distance. Furthermore, we examine a particularly challenging form of this problem where the agent must learn an imitation-based task given a single demonstration. We demonstrate our approach in the setting of deep-learning-based control for physical simulation of humanoid walking, both in 2D with 10 degrees of freedom (DoF) and in 3D with 38 DoF. ## 1 INTRODUCTION Often in RL the designer formulates a reward function to elicit some desired behaviour from the policy. However, people often modify or refine their objectives as they learn. For example, a gymnast learning how to perform a flip can understand the overall motion from a few demonstrations; over time, however, the gymnast, drawing on their previous experience, will learn to understand the less obvious but significant state features that determine a good flipping motion. In this same vein, we want to gradually learn a distance function where, as the agent explores and gets more skilled, the agent refines its state-space understanding and the distance metric can therefore further refine its accuracy. Robots and people may plan using an internal pose-space understanding; however, when people observe others performing tasks, typically only visual information is available. Using distances in pose space is often ill-suited for imitation, as changing some features will result in drastically different visual appearance. In order to understand how to perform tasks from visual observation, some mapping/transformation pose \(= \phi (image)\) is used, which allows for the minimization of \(\phi (image) - agent_{pose}\). Even with a method to transform observations to a similar pose, every person has different capabilities. Because of this, people must learn how to transform demonstrations into a representation in which they can reproduce the behaviour to the best of their ability. In our work here we construct a distance metric derived from the agent's visual perceptions, without the need for an intermediate pose representation, by allowing the agent to observe itself externally and compare that perception with a demonstration.
Searching for a distance function has been an active topic of research (Abbeel & Ng, 2004; Argall et al., 2009). Given some vector of features, the goal is to find an optimal transformation of these features such that, when differences are computed in the transformed space, they carry strong contextual meaning. For example, if we wanted a transformation that computed the distance between an agent's standing pose and its current pose, a good scheme might prioritize the joint angles of the legs and torso while ignoring momentum. With a meaningful transformation function \(\phi (\cdot)\), a distance can be computed between an agent and a demonstration. Previous work has explored the area of state-based distance functions, but many rely on pose-based metrics (Ho & Ermon, 2016; Merel et al., 2017a) that come from an expert. Few use image-based inputs, and none consider the importance of learning a distance function in time as well as in space (Sermanet et al., 2017; Finn et al., 2017; Liu et al., 2017; Dwibedi et al., 2018). In this work we use a recurrent siamese network to learn the distance metric (Chopra et al., 2005).

An important aspect of imitating demonstrations is their sequential and causal nature: there is both an ordering and a speed at which the demonstration is performed. It is important to match the demonstration's state distribution. However, rewarding similarity between states may force the agent to imitate the same timing as the demonstration. This can be highly effective and lead to learning smooth motions, but it also constrains the result to have timing similar to the demonstration's; when the agent's motion becomes desynchronized with the demonstration, the agent will receive low reward. Consider the case where a robot has learned to stand before it can walk. This pose exists inside the demonstration and should be encouraged. Therefore, we learn an RNN-based distance function that can give reward for out-of-sync but similar behaviour. The work in Liu et al. (2017); Dwibedi et al. (2018); Sermanet et al. (2017) also performs imitation from video observation, but each assumes some form of time alignment between the agent and the demonstration. Considering the data sparsity of the problem, we include data from other tasks in order to learn a more robust distance function in visual-sequence space.

Our method has similarities to both Inverse Reinforcement Learning (IRL) (Abbeel & Ng, 2004) and Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016). The process of learning a cost function that understands the space of policies in order to find an optimal policy given a demonstration is fundamentally IRL, while using positive examples from the expert and negative examples from the policy is similar to the method GAIL uses to train a discriminator that models the desired distribution. In this work we build upon these techniques by constructing a method that can learn policies using only noisy, partially observable visual data. We also construct a cost function that takes into account the demonstration timing as well as the pose by using a recurrent Siamese network. Our contribution rests on proposing and exploring this form of recurrent Siamese network as a way to address the key problem of defining the reward structure for imitation learning for deep RL agents.

## 2 PRELIMINARIES

In this section we outline some key details of the general RL framework and the specialized formulations we rely upon when developing our method in Section 3.
### 2.1 REINFORCEMENT LEARNING

We use the RL framework formulated as a Markov Decision Process (MDP): at every time step \(t\), the world (including the agent) exists in a state \(s_{t} \in S\), wherein the agent is able to perform actions \(a_{t} \in A\), sampled from a policy \(\pi (s_{t}, a_{t})\), which results in a new state \(s_{t + 1} \in S\) according to the transition probability function \(T(s_{t}, a_{t}, s_{t + 1})\). Performing action \(a_{t}\) from state \(s_{t}\) produces a reward \(r_{t}\) from the environment; the expected future discounted reward from executing a policy \(\pi\) is:

\[J(\pi) = \mathbb{E}_{r_{0},\dots ,r_{T}}\left[\sum_{t = 0}^{T}\gamma^{t}r_{t}\right] \quad (1)\]

where \(T\) is the max time horizon and \(\gamma\) is the discount factor, indicating the planning-horizon length. The agent's goal is to optimize its policy, \(\pi\), by maximizing \(J(\pi)\). Given policy parameters \(\theta_{\pi}\), the goal is reformulated as identifying the optimal parameters \(\theta_{\pi}^{*}\):

\[\theta_{\pi}^{*} = \arg \max_{\theta_{\pi}}J(\pi (\cdot |\theta_{\pi})) \quad (2)\]

We use a Gaussian distribution with mean \(\mu(s_{t}\mid\theta_{\pi})\) to model the stochastic policy:

\[a_{t}\sim \pi (a_{t}\mid s_{t},\theta_{\pi}) = \mathcal{N}(\mu (s_{t}\mid \theta_{\pi}),\Sigma),\qquad \Sigma = diag\{\sigma_{i}^{2}\} \quad (3)\]

where \(\Sigma\) is a diagonal covariance matrix with entries \(\sigma_{i}^{2}\) on the diagonal, similar to (Peng et al., 2017). For policy optimization we employ stochastic policy gradient methods (Sutton et al., 2000). The gradient of the expected future discounted reward with respect to the policy parameters, \(\nabla_{\theta_{\pi}}J(\pi (\cdot |\theta_{\pi}))\), is given by:

\[\nabla_{\theta_{\pi}}J(\pi (\cdot |\theta_{\pi})) = \int_{S}d_{\pi}(s)\int_{A}\nabla_{\theta_{\pi}}\log (\pi (a,s|\theta_{\pi}))A_{\pi}(s,a)\,da\,ds \quad (4)\]

where \(d_{\pi}(s) = \int_{S}\sum_{t = 0}^{T}\gamma^{t}p_{0}(s_{0})\,p(s_{0}\to s\mid t,\pi)\,ds_{0}\) is the discounted state distribution, \(p_{0}(s)\) represents the initial state distribution, and \(p(s_{0}\to s\mid t,\pi)\) models the likelihood of reaching state \(s\) by starting at state \(s_{0}\) and following the policy \(\pi (a,s|\theta_{\pi})\) for \(t\) steps (Silver et al., 2014). \(A_{\pi}(s,a)\) represents an advantage function (Schulman et al., 2016).
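As a concrete reference for Equations 3 and 4, the following is a minimal numpy sketch of sampling from the Gaussian policy and forming a single-sample score-function gradient term; the linear mean \(\mu(s) = \theta s\) and the externally supplied advantage estimate are illustrative assumptions.

```python
import numpy as np

def sample_action(theta, sigma, s):
    """Eq. 3: a ~ N(mu(s | theta), diag(sigma^2)), with an assumed linear mean."""
    mu = theta @ s
    return mu + sigma * np.random.randn(len(mu))

def policy_gradient_term(theta, sigma, s, a, advantage):
    """One-sample estimate of grad_theta log pi(a|s) * A(s, a) from Eq. 4."""
    mu = theta @ s
    # Gradient of log N(a; mu, diag(sigma^2)) w.r.t. theta for mu = theta @ s.
    grad_log_pi = np.outer((a - mu) / sigma**2, s)
    return grad_log_pi * advantage
```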
### 2.2 IMITATION LEARNING

Imitation learning is the process of training a new policy to reproduce the behaviour of some expert policy. Behavioural Cloning (BC) is a fundamental method for imitation learning. Given an expert policy \(\pi_{E}\), possibly represented as a collection of trajectories \(\tau = \langle (s_{0},a_{0}),\ldots ,(s_{T},a_{T})\rangle\), a new policy \(\pi\) can be learned to match these trajectories using supervised learning:

\[\max_{\theta}\mathbb{E}_{\pi_{E}}\left[\sum_{t = 0}^{T}\log \pi (a_{t}|s_{t},\theta_{\pi})\right] \quad (5)\]

While this simple method can work well, it often suffers from distribution-mismatch issues, leading to compounding errors as the learned policy deviates from the expert's behaviour. Similar to BC, IRL also learns to replicate some desired behaviour; however, IRL makes use of the environment (the RL environment without a defined reward function). Here we describe maximum-entropy IRL (Ziebart et al., 2008). Given an expert trajectory \(\tau = \langle (s_{0},a_{0}),\ldots ,(s_{T},a_{T})\rangle\), a policy \(\pi\) can be trained to produce similar trajectories by discovering a distance metric between the expert trajectory and trajectories produced by the policy \(\pi\):

\[\max_{c\in C}\min_{\pi}\left(\mathbb{E}_{\pi}[c(s,a)] - H(\pi)\right) - \mathbb{E}_{\pi_{E}}[c(s,a)] \quad (6)\]

where \(c\) is some learned cost function and \(H(\pi)\) is a causal entropy term. \(\pi_{E}\) is the expert policy, represented by a collection of trajectories. IRL searches for a cost function \(c\) that is low for the expert \(\pi_{E}\) and high for other policies. A policy can then be optimized by maximizing the reward function \(r_{t} = -c(s_{t},a_{t})\).

GAIL (Ho & Ermon, 2016) uses a Generative Adversarial Network (GAN)-based (Goodfellow et al., 2014) framework in which the discriminator is trained with positive examples from the expert trajectories and negative examples from the policy. The generator is a combination of the environment and the current state-visitation probability induced by the policy, \(p_{\pi}(s)\):

\[\min_{\theta_{\pi}}\max_{\theta_{\phi}}\mathbb{E}_{\pi_{E}}[\log (D(s,a|\theta_{\phi}))] + \mathbb{E}_{\pi_{\theta_{\pi}}}[\log (1 - D(s,a|\theta_{\phi}))] \quad (7)\]

In this framework the discriminator provides rewards for the RL policy to optimize, as the probability of a state generated by the policy being in the expert distribution: \(r_{t} = D(s_{t},a_{t}|\theta_{\phi})\).

## 3 OUR APPROACH

In this section we describe our method for recurrent vision-based imitation learning.
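For reference, a minimal Keras sketch of the discriminator in Eq. 7 and its use as a reward is shown below; the network size and the binary cross-entropy update over concatenated state-action vectors are illustrative assumptions rather than the exact GAIL implementation.

```python
import numpy as np
import tensorflow as tf

def make_discriminator(state_dim, action_dim):
    """D(s, a | theta_phi): probability that a state-action pair is expert data."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(state_dim + action_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# D = make_discriminator(state_dim=10, action_dim=4)
# D.compile(optimizer="adam", loss="binary_crossentropy")
# One update of Eq. 7: expert pairs labelled 1, policy pairs labelled 0.
# D.train_on_batch(np.vstack([expert_sa, policy_sa]),
#                  np.concatenate([np.ones(len(expert_sa)),
#                                  np.zeros(len(policy_sa))]))
# Reward for the RL policy: r_t = D(s_t, a_t | theta_phi)
```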
### 3.2 DISTANCE-BASED REINFORCEMENT LEARNING Given a distance function \(d(s)\) that indicates how far away the agent is from some desired behaviour a reward function over states can be constructed \(r(s) = - d(s)\) . In this framework there is no reward signal coming from the environment instead fixed rewards produced by the environment are replaced by the agent's learned model that is used to compare itself to some desired behaviour. \[J(\pi) = \mathbb{E}_{d(s_{0}),\ldots ,d(s_{T})}\left[\sum_{t = 0}^{T}\gamma^{t}(-d(s_{t}))\right] \quad (8)\] Here \(d(s)\) can take many forms. In IRL it is some learned cost function, in GAIL it is the discriminator probability. In this framework this function can be considered more general and it can be interpreted as distance from desired behaviour, Specifically, in our work \(d(s)\) is learning a distance between video clips. Many different methods can be used to learn a distance function in state- space. We use a standard triplet loss over time and task data Chopra et al. (2005). The triplet loss is used to minimize the distance between two examples that are positive, very similar or from the same class, and maximize the distance between pairs of examples that are known to be un- related. Data used to train the siamese network is a combination of trajectories \(\tau = \langle s_{0},\ldots ,s_{T}\rangle\) generated from simulating the agent in the environment as well as the demonstration. \[\mathcal{L}(s_{i},s_{p},s_{n}) = y*||f(s_{i}) - f(s_{p})|| + ((1 - y)*(\max (\rho - (||f(s_{i}) - f(s_{n})||),0))) \quad (9)\] Where \(y = 1\) is a positive example \(s_{p}\) , pair where the distance should be minimal and \(y = 0\) is a negative example \(s_{n}\) , pair where the distance should be maximal. The margin \(\rho\) is used as an attractor or anchor to pull the negative example output away from \(s_{i}\) and push values towards a 0 to 1 range. \(f(\cdot)\) computes the output from the underlying network. A diagram of this image- based training process and design is shown in Figure 1a. The distance between two states is calculated as \(d(s,s^{\prime}) = ||f(s) - f(s^{\prime})||\) and the reward as \(r(s,s^{\prime}) = - d(s,s^{\prime})\) . For recurrent models we use the same loss however, the states \(s_{p},s_{n},s_{i}\) are sequences. The sequence is fed into the RNN and a single output encoding is produced for that sequence. During RL training we compute a distance given the sequence of states observed so far in the episode. This is a very flexible framework that allows us to train a distance function in state space where all we need to provide is labels that denote if two states, or sequences, are similar or not. ### 3.3 SEQUENCE IMITATION Using a distance function in the space of states can allow us to find advantageous policies. The hazard with using a state only distance metric when you are given demonstrations as sequences to <--- Page Split ---> imitate is that the RL agent can suffer from phase- mismatch. In this situation the agent may be performing the desired behaviour but at a different speed. As the demonstration timing and agent diverge the agent receives less reward, even though it is visiting states that exist elsewhere in the demonstration. If instead we consider the current state conditioned on the previous states, we can learn to give reward for visiting states that are only out of sync with the demonstration motion. 
### 3.3 SEQUENCE IMITATION

Using a distance function in the space of states can allow us to find advantageous policies. The hazard with using a state-only distance metric, when given demonstrations as sequences to imitate, is that the RL agent can suffer from phase-mismatch: the agent may be performing the desired behaviour, but at a different speed. As the demonstration timing and the agent diverge, the agent receives less reward, even though it is visiting states that exist elsewhere in the demonstration. If instead we consider the current state conditioned on the previous states, we can learn to give reward for visiting states that are merely out of sync with the demonstration motion.

This distance metric is formulated in a recurrent style, where the distance is computed from the current state conditioned on all previous states: \(d(s_{t}|s_{t - 1},\ldots ,s_{0})\). The loss function remains the same as in Eq. 9, but the overall learning process changes to use an RNN-based model. A diagram of the method is shown in Figure 1b. This model uses a time-distributed RNN. A single convolutional network \(conv^{a}\) is first used to transform images of the agent and of the demonstration into encoding vectors \(e_{t}^{a}\). After the sequence of images is passed through \(conv^{a}\), the encoded sequence \(\langle e_{0}^{a},\ldots ,e_{t}^{a}\rangle\) is fed into the RNN \(lstm^{a}\) until a final encoding \(h_{t}^{a}\) is produced. The same process is applied with a copy of the RNN, \(lstm^{b}\), producing \(h_{t}^{b}\) for the demonstration. The loss is computed in a similar fashion to (Mueller & Thyagarajan, 2016), using one sequence of images from the agent and another from the demonstration. The reward at each timestep is computed as \(r_{t} = -\| h_{t}^{a} - h_{t}^{b}\|\). At the beginning of every episode the RNN's internal state is reset. The policy and value function use 2-layer neural networks with 512 and 256 units, respectively.

![](images/4_0.jpg)
<center>Figure 1: Siamese network structure. The convolutional portion of the network includes 3 convolution layers: 16 filters of size \(10 \times 10\) with stride \(5 \times 5\), 32 filters of size \(5 \times 5\) with stride \(2 \times 2\), and 32 filters of size \(3 \times 3\) with stride \(1 \times 1\). The features are then flattened and followed by two dense layers of 256 and 128 units. The majority of the network uses ReLU activations, except the last layer, which uses a sigmoid activation. Dropout is used between the convolutional layers. The RNN-based model uses a GRU layer with 128 hidden units, followed by a dense layer of 128 units.</center>
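The architecture in Figure 1 can be written compactly in Keras. The layer sizes below follow the caption; the input resolution matches Section 4.1, while the exact dropout rate and padding are assumed details.

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_encoder():
    """Convolutional image encoder shared by both branches (Figure 1)."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(128, 128, 1)),
        layers.Conv2D(16, 10, strides=5, activation="relu"),
        layers.Dropout(0.2),                        # rate is an assumed detail
        layers.Conv2D(32, 5, strides=2, activation="relu"),
        layers.Dropout(0.2),
        layers.Conv2D(32, 3, strides=1, activation="relu"),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(128, activation="sigmoid"),    # sigmoid last layer (caption)
    ])

def make_sequence_encoder(conv):
    """Apply the conv encoder per frame, then summarise the clip into h_t."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(None, 128, 128, 1)),  # variable-length clips
        layers.TimeDistributed(conv),
        layers.GRU(128),
        layers.Dense(128),
    ])

# conv = make_encoder()
# seq_enc = make_sequence_encoder(conv)   # same weights used for both branches
# Reward: r_t = -||seq_enc(agent_clip) - seq_enc(demo_clip)||
```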
### 3.4 THE RL SIMULATION ENVIRONMENT

Our simulation environment is similar to the OpenAI Gym robotics environments (Plappert et al., 2018). The demonstration \(M\) that the agent is learning to imitate is generated from a clip of mocap data. The mocap data is used to animate a second robot in the simulation. Frames from the simulation are captured and used as the video input to train the distance metric. The policy is trained on pose data, given as link distances and velocities relative to the simulated robot's Centre of Mass (COM). This is a new simulation environment created to take motion-capture data and produce multi-view video data that can be used for training RL agents or for generating data for computer vision tasks. The environment also includes new challenging and dynamic tasks for humanoid robots. The simulation and rendering have been optimized, and efficient EGL-based off-screen rendering is used to capture video data.

### 3.5 DATA AUGMENTATION

In a manner similar to how a person may learn to understand and reproduce a behaviour (Council et al., 2000; Gentner & Stevens, 2014), we apply a number of data augmentation methods to produce additional data for training the distance metric. Using methods analogous to the cropping and warping methods popular in computer vision (He et al., 2015), we randomly crop sequences and randomly warp the demonstration timing. The cropping is performed by both initializing the agent to random poses from the demonstration motion and terminating episodes when the agent's head, hands or torso contact the ground. The motion warping is done by replaying the demonstration motion at different speeds. We also make use of time information in a similar way to (Sermanet et al., 2017): observations at similar times in the same sequence are often correlated, while observations at different times may have little similarity. To this end we generate more training samples using randomly cropped sequences from the same trajectory, as well as reversed and out-of-sync versions. Imitation data from other tasks is also used to help condition the distance-metric learning process: motion clips for running, backflips and frontflips are used along with the desired walking motion. See the Appendix for more details on how positive and negative pairs are created from this data.

The algorithm used to train the distance metric and the policy is outlined in Algorithm 1. Importantly, the RL environment generates two different state representations for the agent. The first state, \(s_{t + 1}\), is the internal robot pose. The second state, \(s_{t + 1}^{v}\), is the agent's rendered view. The rendered view is used with the distance metric to compute the similarity between the agent and the demonstration. We also attempted to use the visual features as the state input for the policy, but this resulted in poor policy quality.

# Algorithm 1 Visual Imitation Learning Algorithm

1: Randomly initialize model parameters \(\theta_{\pi}\) and \(\theta_{d}\)
2: Create experience memory \(D\leftarrow \{\}\)
3: Given a demonstration \(M = \langle m_{0},\ldots ,m_{T}\rangle\)
4: while not done do
5:   for \(i\in \{0,\ldots ,N\}\) do
6:     \(\tau_{i}\leftarrow \{\}\)
7:     for \(t\in \{0,\ldots ,T\}\) do
8:       \(a_{t}\sim \pi (\cdot |s_{t},\theta_{\pi})\)
9:       \(s_{t + 1},s_{t + 1}^{v}\gets env(a_{t})\)
10:      \(r_{t}\gets -d(s_{t + 1}^{v},m_{t + 1}|\theta_{d})\)
11:      \(\tau_{i,t}\gets \langle s_{t},a_{t},r_{t}\rangle\)
12:      \(s_{t}\gets s_{t + 1}\)
13:    end for
14:  end for
15: \(D\gets D\bigcup \{\tau_{0},\ldots ,\tau_{N}\}\)
16: Update the distance metric parameters \(\theta_{d}\) using \(D\)
17: Update the policy parameters \(\theta_{\pi}\) using \(\{\tau_{0},\ldots ,\tau_{N}\}\)
18: end while
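A condensed Python rendering of Algorithm 1 is given below; the `env`, `policy`, and `distance` interfaces are assumptions standing in for the components described above, and the reward uses the normalized form discussed in Section 4.1 (the raw form is \(r_t = -d\)).

```python
import numpy as np

def train(env, policy, distance, demo, num_iters, N, T, w_d=-5.0):
    """Sketch of Algorithm 1: collect rollouts, reward by the learned distance."""
    D = []                                       # experience memory
    for _ in range(num_iters):
        batch = []
        for _ in range(N):
            tau, s = [], env.reset()             # assumed Gym-style interface
            for t in range(T):
                a = policy.sample(s)
                s_next, s_vis = env.step(a)      # pose state and rendered view
                d = distance(s_vis, demo[t + 1])
                r = np.exp(w_d * d ** 2)         # normalized reward (Sec. 4.1)
                tau.append((s, a, r))
                s = s_next
            batch.append(tau)
        D.extend(batch)
        distance.update(D)                       # train the siamese metric
        policy.update(batch)                     # RL update on fresh rollouts
    return policy
```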
## 4 EXPERIMENTS AND RESULTS

This section contains a collection of experiments and results produced to investigate the capabilities of the method.

### 4.1 2D HUMANOID IMITATION

The first experiment uses the method to learn a cyclic walking gait for a simulated humanoid walking robot given a single motion example, similar to (Peng & van de Panne, 2017). In this simulated robotics environment the agent learns to imitate a given reference motion of a walk. The agent's goal is to learn how to actuate Proportional Derivative (PD) controllers at \(30\,\mathrm{fps}\) to mimic the desired motion. The simulation environment provides a hard-coded reward function based on the robot's pose that is used to evaluate policy quality. The images captured from the simulation are converted to grey-scale at \(128\times 128\) pixels. Between control timesteps the simulation collects images of the agent and of the rendered demonstration motion. The agent is able to learn a robust walking gait even though it is only given noisy, partial observations of a demonstration.

We find it extremely helpful to normalize the distance metric outputs using \(r = \exp(w_{d}\cdot d^{2})\), where \(d\) is the computed distance and \(w_{d} = -5.0\) scales the filtering width (Peng & van de Panne, 2017). Early in training the distance metric often produces noisy, large values; moreover, the RL method constantly updates its reward-scaling statistics, and this initial high-variance data reduces the significance of the better distance values produced later by scaling them to very small numbers. The improvement from using this normalized reward is shown in Figure 3a. Example motion of the agent after learning is shown in Figure 2 and in the supplemental video.

![](images/6_0.jpg)
<center>Figure 2: Still frame shots from the trained policy in the humanoid2d environment.</center>

![](images/6_1.jpg)
<center>Figure 3: Ablation analysis of the method. We find that training RL policies is sensitive to the size and distribution of rewards. The siamese network benefits from a number of training adjustments that make it more suitable for RL.</center>

### 4.2 ALGORITHM ANALYSIS

We compare the method to two other methods that can be considered as learning a distance function in state space: GAIL, and a Variational Auto Encoder (VAE) trained to produce encodings whose distances are computed in the same way as for the siamese network (Figure 4a). We find that the VAE method does not appear to capture the important distances between states, possibly due to the complexity of the decoding transformation. Similarly, we try a GAIL-type baseline and find that it produces either very jerky motion or standing still, both of which are contained in the imitation data. Our method, which considers the temporal structure of the data, produces higher-value policies. Additionally, we create a multi-modal version of the method using the same learning framework: here we replace the bottom conv net with a dense network and learn a distance metric between agent poses and imitation video. The results of these models, along with the default manual reward function, are shown in Figure 4b. The multi-modal version appears to perform about equally to the vision-only model. In Figure 4b and Figure 8c we compare our method to a non-sequence-based model that is equivalent to TCN. On average our method achieves higher-value policies.

We conduct an additional ablation analysis in Figure 3c to compare the effects of particular methods used to assist in training the recurrent siamese network. We find it very helpful to reduce Reference State Initialization (RSI): when more episodes start in the same state, the temporal alignment of training batches for the RNN increases. We believe it is very important that the distance metric be most accurate for the earlier states in an episode, so we use Early Episode Sequence Priority (EESP), meaning we increase the probability that the window used to crop RNN training batches starts near the beginning of the episode. We also give higher probability to shorter windows; as the agent gets better, the average episode length increases, and so too will the average size of the cropped window. Last, we tried pretraining the distance function. This leads to mixed results; see Figure 7a and Figure 7b. Often, pretraining overfits the initial data collected, leading to poor early RL training. However, in the long run pretraining does appear to improve over its non-pretrained version.
### 4.3 3D HUMANOID ROBOT IMITATION

In these simulated robotics environments the agent learns to imitate a given reference motion of a walk, run, frontflip or backflip. A single imitation motion demonstration is provided by the simulation environment as a cyclic motion, similar to (Peng et al., 2018). The agent controls and observes frames at \(30\,\mathrm{fps}\). During learning, additional data from other tasks is included to train the distance metric (for the walking task this comprises walking-dynamic-speed, running, frontflips and backflips). We also include data from a modified version of the task, walking-dynamic-speed, that has a randomly generated speed modifier \(\omega \in [0.5,2.0]\) warping the demonstration timing. This additional data provides a richer understanding of distances in space and time. The input to the distance metric is a pair of frames from the simulation per control step; see Algorithm 1. We find that using the RNN-based distance metric makes the learning process more gradual and smoother. This can be seen in Figure 4b, where the original manually created reward can still provide sparse feedback after the agent has sufficiently diverged from the desired behaviour. Some example trajectories from the learned policy are shown in Figure 5, Figure 6 and in the supplemental video; Figure 6 shows a rendered version of the running task.

![](images/7_0.jpg)
<center>Figure 4: Baseline comparisons between our sequence-based method, GAIL and TCN. In 4a we compare our method to GAIL and to a VAE, using the Euclidean distance between encodings. We perform two additional baseline comparisons between our method and TCN in 4b and 8c; both show that on average our method performs similar to TCN or better over time. In these plots the large solid lines are the average performance over a collection of policy training runs. The dotted lines of the same colour are the performance values of each individual policy training run. The filled-in areas around the average performance show the variance over the collection of policy training runs.</center>

![](images/7_1.jpg)
<center>Figure 5: Still frame shots of the agent's motion after training on humanoid3d walking.</center>

Sequence Encoding: Using the learned sequence encoder, a collection of motions from different classes is processed to create a t-SNE embedding of the encodings (Maaten & Hinton, 2008). In Figure 4c we plot motions generated both by the learned policy \(\pi\) and by the expert trajectories \(\pi_{E}\). Interestingly, there are clear overlaps in specific areas of the space for similar classes across learned \(\pi\) and expert \(\pi_{E}\) data. There is also a separation between motion classes in the data.

![](images/8_0.jpg)
<center>Figure 6: Still frame shots of the agent's motion after training on humanoid3d running.</center>

## 5 DISCUSSION AND CONCLUSION

Learning a distance metric is imperfect: the distance metric can compute inaccurate distances in areas of the state space it has not yet been trained on. This implies that when the agent explores and finds truly new and promising trajectories, the distance metric may produce bad results. We attempt to mitigate this effect by including training data from different tasks while training the distance metric. We believe the method would benefit greatly from a larger collection of multitask data and increased variation within each task.
Additionally, if the distance metric's confidence were modelled, this information could be used to reduce variance and overconfidence during policy optimization. Deep Deterministic Policy Gradient (DDPG) appears to work well for this type of problem. Our hypothesis is that, because the learned reward function changes between data-collection phases, it may be better to view this data as off-policy. Learning a reward function while training also adds variance to the policy gradient. This may indicate that the bias of off-policy methods is preferable to the added variance of on-policy methods. We also find it important to use a small learning rate for the distance metric (Figure 7c); this reduces the reward variance between data-collection phases and allows learning a more accurate value function. Another approach may be to use partially observable RL, which can learn a better model of the value function given a changing RNN-based reward function. Training the distance metric could benefit from additional regularization, such as constraining the KL-divergence between updates to reduce variance. Another option is to learn a sequence-based policy as well, given that the rewards no longer depend on a single state observation. We tried GAIL, but we found it has limited temporal consistency, which led to learning very jerky and overactive policies; the use of a recurrent discriminator for GAIL may mitigate some of these issues.

It is challenging to produce results better than the carefully hand-crafted reward functions used by some RL simulation environments that include motion-phase information in the observations (Peng et al., 2018; 2017). Still, as environments become increasingly realistic and grow in complexity, we will need more complex reward functions to describe the desired behaviour we want from the agent. Training the distance metric is a complex balancing game. One might expect that the model should be trained early and fast so that it quickly understands the difference between a good and a bad demonstration. However, learning too quickly confuses the agent: rewards can change rapidly, which can cause the agent to diverge into an unrecoverable part of policy space. Slower has been better; the distance metric may not be globally accurate, but it can be locally or relatively reasonable, which is enough to learn a good policy. As learning continues, these two optimizations converge together.

## REFERENCES

Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-first International Conference on Machine Learning, ICML '04, New York, NY, USA, 2004. ACM. ISBN 1-58113-838-5. doi: 10.1145/1015330.1015430. URL http://doi.acm.org/10.1145/1015330.1015430.

Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469-483, 2009. ISSN 0921-8890. doi: https://doi.org/10.1016/j.robot.2008.10.024. URL http://www.sciencedirect.com/science/article/pii/S0921889008001772.

Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pp. 539-546. IEEE, 2005.

National Research Council et al. How people learn: Brain, mind, experience, and school: Expanded edition. National Academies Press, 2000.
D. Dwibedi, J. Tompson, C. Lynch, and P. Sermanet. Learning actionable representations from visual observations. ArXiv e-prints, August 2018.

Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot visual imitation learning via meta-learning. CoRR, abs/1709.04905, 2017. URL http://arxiv.org/abs/1709.04905.

Dedre Gentner and Albert L. Stevens. Mental models. Psychology Press, 2014.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.

K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1904-1916, Sept 2015. ISSN 0162-8828. doi: 10.1109/TPAMI.2015.2389824.

Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 4565-4573. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning.pdf.

Yunzhu Li, Jiaming Song, and Stefano Ermon. InfoGAIL: Interpretable imitation learning from visual demonstrations. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 3812-3822. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/6971-infogail-interpretable-imitation-learning-from-visual-demonstrations.pdf.

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015. URL http://arxiv.org/abs/1509.02971.

Yuxuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. CoRR, abs/1707.03374, 2017. URL http://arxiv.org/abs/1707.03374.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.

Josh Merel, Yuval Tassa, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, and Nicolas Heess. Learning human behaviors from motion capture by adversarial imitation. arXiv preprint arXiv:1707.02201, 2017a.

Josh Merel, Yuval Tassa, Dhruva TB, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, and Nicolas Heess. Learning human behaviors from motion capture by adversarial imitation. CoRR, abs/1707.02201, 2017b. URL http://arxiv.org/abs/1707.02201.

Jonas Mueller and Aditya Thyagarajan. Siamese recurrent architectures for learning sentence similarity. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pp. 2786-2792. AAAI Press, 2016. URL http://dl.acm.org/citation.cfm?id=3016100.3016291.

Deepak Pathak, Parsa Mahmoudieh, Guanghao Luo, Pulkit Agrawal, Dian Chen, Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A. Efros, and Trevor Darrell. Zero-shot visual imitation. CoRR, abs/1804.08606, 2018. URL http://arxiv.org/abs/1804.08606.
Xue Bin Peng and Michiel van de Panne. Learning locomotion skills using DeepRL: Does the choice of action space matter? In Proceedings of the ACM SIGGRAPH / Eurographics Symposium on Computer Animation, SCA '17, pp. 12:1-12:13, New York, NY, USA, 2017. ACM. ISBN 978-1-4503-5091-4. doi: 10.1145/3099564.3099567. URL http://doi.acm.org/10.1145/3099564.3099567.

Xue Bin Peng, Glen Berseth, Kangkang Yin, and Michiel van de Panne. DeepLoco: Dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Trans. Graph., 36(4):41:1-41:13, July 2017. ISSN 0730-0301. doi: 10.1145/3072959.3073602. URL http://doi.acm.org/10.1145/3072959.3073602.

Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel van de Panne. DeepMimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Trans. Graph., 37(4):143:1-143:14, July 2018. ISSN 0730-0301. doi: 10.1145/3197517.3201311. URL http://doi.acm.org/10.1145/3197517.3201311.

Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Powell, Jonas Schneider, Josh Tobin, Maciek Chociej, Peter Welinder, Vikash Kumar, and Wojciech Zaremba. Multi-goal reinforcement learning: Challenging robotics environments and request for research. CoRR, abs/1802.09464, 2018. URL http://arxiv.org/abs/1802.09464.

Stephane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Geoffrey Gordon, David Dunson, and Miroslav Dudík (eds.), Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings of Machine Learning Research, pp. 627-635, Fort Lauderdale, FL, USA, 11-13 Apr 2011. PMLR. URL http://proceedings.mlr.press/v15/ross11a.html.

J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. ArXiv e-prints, July 2017.

John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015. URL http://arxiv.org/abs/1502.05477.

John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In International Conference on Learning Representations (ICLR 2016), 2016.

Pierre Sermanet, Corey Lynch, Jasmine Hsu, and Sergey Levine. Time-contrastive networks: Self-supervised learning from multi-view observation. CoRR, abs/1704.06888, 2017. URL http://arxiv.org/abs/1704.06888.

David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proc. ICML, 2014.

Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, pp. 1057-1063. MIT Press, 2000.

Faraz Torabi, Garrett Warnell, and Peter Stone. Behavioral cloning from observation. July 2018. URL http://arxiv.org/abs/1805.01954.

Hado Van Hasselt. Reinforcement learning in continuous state and action spaces. In Reinforcement Learning, pp. 207-251. Springer, 2012.

Ziyu Wang, Josh S. Merel, Scott E. Reed, Nando de Freitas, Gregory Wayne, and Nicolas Heess. Robust imitation of diverse behaviors. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 5320-5329. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7116-robust-imitation-of-diverse-behaviors.pdf.
Tianhe Yu, Chelsea Finn, Annie Xie, Sudeep Dasari, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot imitation from observing humans via domain-adaptive meta-learning. CoRR, abs/1802.01557, 2018. URL http://arxiv.org/abs/1802.01557.

Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 3, AAAI'08, pp. 1433-1438. AAAI Press, 2008. ISBN 978-1-57735-368-3. URL http://dl.acm.org/citation.cfm?id=1620270.1620297.

## 6 APPENDIX

### 6.1 DATASETS

The mocap clips used in the created environment come from the CMU mocap database and the SFU mocap database.

### 6.2 TRAINING DETAILS

The learning simulations are trained using GPUs. The simulation not only simulates the interaction physics of the world but also renders the simulation scene in order to capture video observations. On average it takes 3 days to execute a single training run. The process of rendering the images and copying them from the GPU is one of the most expensive operations in the method.

### 6.3 DISTANCE FUNCTION TRAINING

In Figure 7b we show the training curve for the recurrent siamese network. The model learns smoothly, considering that the training data is constantly changing as the RL agent explores. In Figure 7a the learning curve for the siamese RNN is shown after pretraining. We can see the overfitting that occurs during RL training; this overfitting can lead to poor reward prediction during the early phase of training.

![](images/11_0.jpg)
<center>Figure 7: Training losses for the siamese distance metric. Higher is better, as it indicates that sequences from the same class are closer together.</center>

It can be difficult to train a sequence-based distance function. One particular challenge is training the distance function to be accurate across the space of possible states. We found a good strategy was to focus on data from the beginning of episodes: if the model is not accurate on states seen early in an episode, the agent may never learn how to reach the good later states that the distance function understands better. Therefore, when constructing batches to train the RNN, we give higher probability to windows starting earlier in episodes. We also give higher probability to shorter sequences. As the agent improves, the average episode length increases, and so too will the randomly selected sequence windows.

## 7 POSITIVE AND NEGATIVE EXAMPLES

We use two methods to generate positive and negative examples. The first method is similar to TCN, where we can assume that sequences that overlap more in time are more similar. For each episode two sequences are generated, one for the agent and one for the imitation motion. We compute positive pairs by altering one of these sequences and comparing the altered version to its original. We alter sequences for positive pairs by:

1. Adding Gaussian noise to each state in the sequence (mean \(= 0\) and variance \(= 0.02\))
2. Creating out-of-sync versions, where the first state is removed from the first sequence and the last state from the second sequence
3. Duplicating the first state in either sequence
4. Duplicating the last state in either sequence

We alter sequences for negative pairs by:

1. Reversing the ordering of the second sequence in the pair
2. Randomly picking a state from the second sequence and replicating it to be as long as the first sequence
3. Randomly shuffling one sequence
4. Randomly shuffling both sequences
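A few of the alteration schemes above can be sketched as simple sequence transforms; the functions below are minimal numpy illustrations (a state is assumed to be a numpy array), with the noise scale taken from item 1 of the positive list.

```python
import numpy as np

def positive_pairs(seq):
    """A few of the positive-pair alterations listed above."""
    noisy = [s + np.random.normal(0.0, np.sqrt(0.02), s.shape) for s in seq]
    out_of_sync = (seq[1:], seq[:-1])        # drop first / last state
    first_dup = [seq[0]] + list(seq)         # duplicate the first state
    return [(seq, noisy), out_of_sync, (seq, first_dup)]

def negative_pairs(seq_a, seq_b):
    """A few of the negative-pair alterations listed above."""
    reversed_b = list(reversed(seq_b))
    frozen = [seq_b[np.random.randint(len(seq_b))]] * len(seq_a)
    order = np.random.permutation(len(seq_b))
    shuffled = [seq_b[j] for j in order]
    return [(seq_a, reversed_b), (seq_a, frozen), (seq_a, shuffled)]
```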
Reversing the ordering of the second sequence in the pair. 2. Randomly picking a state out of the second sequence and replicating it to be as long as the first sequence. 3. Randomly shuffling one sequence. 4. Randomly shuffling both sequences. The second method we use to create positive and negative examples is to include data for additional classes of motion. These classes denote different task types. For the humanoid3d environment we generate data for walking-dynamic-speed, running, backflipping and frontflipping. Pairs from the same task are labelled as positive and pairs from different classes as negative. ### 7.1 RL ALGORITHM ANALYSIS It is not clear which RL algorithm may work best for this type of imitation problem. A number of RL algorithms were evaluated on the humanoid2d environment (Figure 8a). Surprisingly, Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) did not work well in this framework; given its controlled policy-gradient step, we expected it to reduce the overall variance. We found that DDPG (Lillicrap et al., 2015) worked rather well. This could be related to having a changing reward function, in that if the changing rewards are treated as off-policy data they can be easier to learn from. This can be seen in Figure 8b, where DDPG is best at estimating the future discounted rewards in the environment. We also tried Continuous Actor Critic Learning Automaton (CACLA) (Van Hasselt, 2012) and Proximal Policy Optimization (PPO) (Schulman et al., 2017); we found that PPO did not work particularly well on this task, which could also be related to the added variance. <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 8: RL algorithm comparison on humanoid2d environment. </center> <--- Page Split --->
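For concreteness, the sequence alterations listed in Section 7 can be sketched in a few lines of numpy. This is a minimal sketch under our own naming and random-choice scheme, not code from the paper; `seq` is assumed to be a `(T, D)` array of states.

```python
import numpy as np

rng = np.random.default_rng(0)

def positive_pair(seq):
    """Return an altered copy of `seq` that should stay close to the original."""
    choice = rng.integers(4)
    if choice == 0:  # 1. additive Gaussian noise, mean 0, variance 0.02
        return seq + rng.normal(0.0, np.sqrt(0.02), size=seq.shape)
    if choice == 1:  # 2. out-of-sync: drop the first state (its pair drops the last)
        return seq[1:]
    if choice == 2:  # 3. duplicate the first state
        return np.concatenate([seq[:1], seq], axis=0)
    return np.concatenate([seq, seq[-1:]], axis=0)  # 4. duplicate the last state

def negative_pair(seq):
    """Return an altered copy of `seq` that should be far from the original."""
    choice = rng.integers(3)
    if choice == 0:  # 1. reverse the time ordering
        return seq[::-1]
    if choice == 1:  # 2. replicate one random state over the whole length
        idx = int(rng.integers(len(seq)))
        return np.repeat(seq[idx:idx + 1], len(seq), axis=0)
    return rng.permutation(seq)  # 3./4. randomly shuffle along time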
## ABSTRACT People are incredibly skilled at imitating others by simply observing them. They achieve this even in the presence of significant differences in morphology and capability. Further, people are able to do this from raw perceptions of the actions of others, without direct access to the abstracted demonstration actions and with only partial state information. People therefore solve the difficult problem of understanding the salient features of both observations of others and the relationship to their own state when learning to imitate specific tasks. They can attempt to reproduce a similar demonstration via trial and error, and through this gain more understanding of the task space. To reproduce this ability, an agent would need to both learn how to recognize the differences between itself and some demonstration and at the same time learn to minimize the distance between its own performance and that of the demonstration. In this paper we propose an approach using only visual information to learn a distance metric between agent behaviour and a given video demonstration. We train a Recurrent Neural Network (RNN)-based Siamese model to compute distances in space and time between motion clips while training a Reinforcement Learning (RL) policy to minimize this distance. Furthermore, we examine a particularly challenging form of this problem where the agent must learn an imitation-based task given a single demonstration. We demonstrate our approach in the setting of deep learning based control for physical simulation of humanoid walking in both 2D with 10 degrees of freedom (DoF) and 3D with 38 DoF. ## 1 INTRODUCTION Often in RL the designer formulates a reward function to elicit some desired behaviour in the policy. However, people often modify or refine their objectives as they learn. For example, a gymnast who is learning how to perform a flip can understand the overall motion from a few demonstrations. However, over time the gymnast, drawing on their previous experience, will learn to understand the less obvious but significant state features that determine a good flipping motion. In this same vein we want to gradually learn a distance function where, as the agent explores and gets more skilled, the agent refines its state-space understanding and therefore the distance metric can further refine its accuracy. Robots and people may plan using an internal pose-space understanding; however, typically when people observe others performing tasks only visual information is available. Often, using distances in pose-space is ill-suited for imitation, as changing some features can result in a drastically different visual appearance. In order to understand how to perform tasks from visual observation, some mapping/transformation pose \(= \phi (image)\) is used, which allows for the minimization of \(\phi (image) - agent_{pose}\). Even with a method to transform observations to a similar pose, every person has different capabilities. Because of this, people must learn how to transform demonstrations into a representation where they can reproduce the behaviour to the best of their ability. In our work here we construct a distance metric derived from the agent's visual perceptions, without the need for an intermediate pose representation, by allowing the agent to observe itself externally and compare that perception with a demonstration. Searching for a distance function has been an active topic of research (Abbeel & Ng, 2004; Argall et al., 2009).
Given some vector of features, the goal is to find an optimal transformation of these <--- Page Split ---> features such that when differences are computed in this transformed space there exists a strong contextual meaning. For example, if we wanted a transformation that computed the distance between an agent's standing pose and its current pose, a good schema may prioritize the joint angles of the legs and torso while ignoring momentum. With a meaningful transformation function \(\phi (\cdot)\) a distance can be computed between an agent and a demonstration. Previous work has explored the area of state-based distance functions, but many rely on pose-based metrics (Ho & Ermon, 2016; Merel et al., 2017a) that come from an expert. Few use image-based inputs, and none consider the importance of learning a distance function in time as well as space (Sermanet et al., 2017; Finn et al., 2017; Liu et al., 2017; Dwibedi et al., 2018). In this work we use a recurrent siamese network to learn the distance metric (Chopra et al., 2005). An important detail of imitating demonstrations is their sequential and causal nature. There is both an ordering and a speed in which the demonstration is performed. It is important to match the demonstration's state distribution. However, similarity between states may force the agent to imitate the same timing as the demonstration. This can be highly effective and lead to learning smooth motions, but it also constrains the result to follow the demonstration's timing. When the agent's motion becomes desynchronized with the demonstration, the agent will receive low reward. Consider the case when a robot has learned to stand before it can walk. This pose exists inside the demonstration and should be encouraged. Therefore we learn an RNN-based distance function that can give reward for out-of-sync but similar behaviour. The work in Liu et al. (2017); Dwibedi et al. (2018); Sermanet et al. (2017) also performs imitation from video observation, but each assumes some sort of time alignment between the agent and demonstration. Considering the data sparsity of the problem, we include data from other tasks in order to learn a more robust distance function in visual sequence space. Our method has similarities to both Inverse Reinforcement Learning (IRL) (Abbeel & Ng, 2004) and Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016). The process of learning a cost function that understands the space of policies in order to find an optimal policy given a demonstration is fundamentally IRL, while using positive examples from the expert and negative examples from the policy is similar to the method GAIL uses to train a discriminator to understand the desired distribution. In this work we build upon these techniques by constructing a method that can learn policies using only noisy, partially observable visual data. We also construct a cost function that takes into account the demonstration timing as well as pose by using a recurrent Siamese network. Our contribution rests on proposing and exploring this form of recurrent Siamese network as a way to address the key problem of defining the reward structure for imitation learning for deep RL agents. ## 2 PRELIMINARIES In this section we outline some key details of the general RL framework and the specialized formulations for RL that we rely upon when developing our method in Section 3.
### 2.1 REINFORCEMENT LEARNING We use the RL framework formulated as a Markov Decision Process (MDP): at every time step \(t\), the world (including the agent) exists in a state \(s_{t} \in S\), wherein the agent is able to perform actions \(a_{t} \in A\), sampled from a policy \(\pi (s_{t}, a_{t})\), which results in a new state \(s_{t + 1} \in S\) according to the transition probability function \(T(s_{t}, a_{t}, s_{t + 1})\). Performing action \(a_{t}\) from state \(s_{t}\) produces a reward \(r_{t}\) from the environment; the expected future discounted reward from executing a policy \(\pi\) is: \[J(\pi) = \mathbb{E}_{r_{0},\dots ,r_{T}}\left[\sum_{t = 0}^{T}\gamma^{t}r_{t}\right] \quad (1)\] where \(T\) is the max time horizon, and \(\gamma\) is the discount factor, indicating the planning horizon length. The agent's goal is to optimize its policy, \(\pi\), by maximizing \(J(\pi)\). Given policy parameters \(\theta_{\pi}\), the goal is reformulated to identify the optimal parameters \(\theta_{\pi}^{*}\): \[\theta_{\pi}^{*} = \arg \max_{\theta_{\pi}}J(\pi (\cdot |\theta_{\pi})) \quad (2)\] <--- Page Split ---> We use a Gaussian distribution with mean \(\mu (s_{t}\mid \theta_{\pi})\) to model the stochastic policy, formulated as follows: \[a_{t}\sim \pi (a_{t}\mid s_{t},\theta_{\pi}) = \mathcal{N}(\mu (s_{t}\mid \theta_{\pi}),\Sigma)\qquad \Sigma = diag\{\sigma_{i}^{2}\} \quad (3)\] where \(\Sigma\) is a diagonal covariance matrix with entries \(\sigma_{i}^{2}\) on the diagonal, similar to (Peng et al., 2017). For policy optimization we employ stochastic policy gradient methods (Sutton et al., 2000). The gradient of the expected future discounted reward with respect to the policy parameters, \(\nabla_{\theta_{\pi}}J(\pi (\cdot |\theta_{\pi}))\), is given by: \[\nabla_{\theta_{\pi}}J(\pi (\cdot |\theta_{\pi})) = \int_{S}d_{\pi}(s)\int_{A}\nabla_{\theta_{\pi}}\log (\pi (a,s|\theta_{\pi}))A_{\pi}(s,a)\,da\,ds \quad (4)\] where \(d_{\pi}(s) = \int_{S}\sum_{t = 0}^{T}\gamma^{t}p_{0}(s_{0})\,p(s_{0}\to s\mid t,\pi)\,ds_{0}\) is the discounted state distribution, \(p_{0}(s)\) represents the initial state distribution, and \(p(s_{0}\to s\mid t,\pi)\) models the likelihood of reaching state \(s\) by starting at state \(s_{0}\) and following the policy \(\pi (a,s|\theta_{\pi})\) for \(t\) steps (Silver et al., 2014). \(A_{\pi}(s,a)\) represents an advantage function (Schulman et al., 2016). ### 2.2 IMITATION LEARNING Imitation learning is the process of training a new policy to reproduce the behaviour of some expert policy. Behavioural Cloning (BC) is a fundamental method for imitation learning. Given an expert policy \(\pi_{E}\), possibly represented as a collection of trajectories \(\tau = \langle (s_{0},a_{0}),\ldots ,(s_{T},a_{T})\rangle\), a new policy \(\pi\) can be learned to match this trajectory using supervised learning: \[\max_{\theta_{\pi}}\mathbb{E}_{\pi_{E}}\left[\sum_{t = 0}^{T}\log \pi (a_{t}|s_{t},\theta_{\pi})\right] \quad (5)\] While this simple method can work well, it often suffers from distribution mismatch issues, leading to compounding errors as the learned policy deviates from the expert's behaviour. Similar to BC, IRL also learns to replicate some desired behaviour. However, IRL makes use of the environment: it operates in the RL setting but without a defined reward function. Here we describe maximum entropy IRL (Ziebart et al., 2008).
Given an expert trajectory \(\tau = \langle (s_{0},a_{0}),\ldots ,(s_{T},a_{T})\rangle\), a policy \(\pi\) can be trained to produce similar trajectories by discovering a distance metric between the expert trajectory and trajectories produced by the policy \(\pi\): \[\max_{c\in C}\min_{\pi}\left(\mathbb{E}_{\pi}[c(s,a)] - H(\pi)\right) - \mathbb{E}_{\pi_{E}}[c(s,a)] \quad (6)\] where \(c\) is some learned cost function and \(H(\pi)\) is a causal entropy term. \(\pi_{E}\) is the expert policy that is represented by a collection of trajectories. IRL searches for a cost function \(c\) that is low for the expert \(\pi_{E}\) and high for other policies. Then, a policy can be optimized by maximizing the reward function \(r_{t} = - c(s_{t},a_{t})\). GAIL (Ho & Ermon, 2016) uses a Generative Adversarial Network (GAN)-based (Goodfellow et al., 2014) framework where the discriminator is trained with positive examples from the expert trajectories and negative examples from the policy. The generator is a combination of the environment and the current state visitation probability induced by the policy \(p_{\pi}(s)\): \[\min_{\theta_{\pi}}\max_{\theta_{\phi}}\mathbb{E}_{\pi_{E}}[\log (D(s,a|\theta_{\phi}))] + \mathbb{E}_{\pi_{\theta_{\pi}}}[\log (1 - D(s,a|\theta_{\phi}))] \quad (7)\] In this framework the discriminator provides the reward for the RL policy to optimize, as the probability of a state generated by the policy belonging to the expert distribution: \(r_{t} = D(s_{t},a_{t}|\theta_{\phi})\). ## 3 OUR APPROACH In this section we describe our method to perform recurrent vision-based imitation learning. <--- Page Split ---> ### 3.1 PARTIALLY OBSERVABLE IMITATION LEARNING WITHOUT ACTIONS For many problems we want to learn how to replicate the behaviour of some expert \(\pi_{E}\) without access to the expert's actions. Instead, we may only have access to an actionless, noisy observation of the expert that we call a demonstration. Recent work uses BC to learn a new model that estimates the actions used, via maximum-likelihood estimation (Torabi et al., 2018). Still, BC often needs many expert examples and tends to suffer from state distribution mismatch issues between the expert policy and the student (Ross et al., 2011). Work in (Merel et al., 2017b) proposes a system based on GAIL that can learn a policy from a partial observation of the demonstration. In that work the state input to the discriminator is a customized version of the expert's pose and does not take into account the demonstration's sequential nature. The work in (Wang et al., 2017) provides a more robust GAIL framework along with a new model to encode motions for few-shot imitation. This model uses an RNN to encode a demonstration but uses expert state and action observations. In our work we limit the agent to only a partial visual observation as a demonstration. Additional works learn implicit models of distance (Yu et al., 2018; Pathak et al., 2018; Finn et al., 2017; Sermanet et al., 2017), but none of these explicitly learns a sequential model that uses the demonstration timing. Another version of GAIL, infoGAIL (Li et al., 2017), was used on some pixel-based inputs. In contrast, here we train a recurrent siamese model that can be used to enable curriculum learning and allow for reasonable distances to be computed even when the agent and demonstration are out of sync or have different capabilities.
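As a reference point for the GAIL-style baselines compared against later, a minimal numpy sketch of the discriminator objective in Eq. 7 and the reward it induces might look as follows. The callable `D`, which maps (state, action) inputs to logits, is a placeholder of ours, not code from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(D, expert_batch, policy_batch):
    """Eq. 7: the discriminator ascends E_expert[log D] + E_policy[log(1 - D)].
    We return the negated objective so it can be minimized."""
    eps = 1e-8
    d_expert = sigmoid(D(expert_batch))
    d_policy = sigmoid(D(policy_batch))
    objective = np.mean(np.log(d_expert + eps)) + np.mean(np.log(1.0 - d_policy + eps))
    return -objective

def gail_reward(D, state_action):
    """The policy is rewarded for states the discriminator deems expert-like."""
    return sigmoid(D(state_action))
```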
### 3.2 DISTANCE-BASED REINFORCEMENT LEARNING Given a distance function \(d(s)\) that indicates how far away the agent is from some desired behaviour, a reward function over states can be constructed as \(r(s) = - d(s)\). In this framework there is no reward signal coming from the environment; instead, the fixed rewards produced by the environment are replaced by the agent's learned model, which the agent uses to compare itself to some desired behaviour. \[J(\pi) = \mathbb{E}_{d(s_{0}),\ldots ,d(s_{T})}\left[\sum_{t = 0}^{T}\gamma^{t}(-d(s_{t}))\right] \quad (8)\] Here \(d(s)\) can take many forms. In IRL it is some learned cost function; in GAIL it is the discriminator probability. In this framework the function can be considered more general, and it can be interpreted as the distance from desired behaviour. Specifically, in our work \(d(s)\) learns a distance between video clips. Many different methods can be used to learn a distance function in state space. We use a standard triplet loss over time and task data (Chopra et al., 2005). The triplet loss is used to minimize the distance between two examples that are positive, i.e. very similar or from the same class, and maximize the distance between pairs of examples that are known to be unrelated. Data used to train the siamese network is a combination of trajectories \(\tau = \langle s_{0},\ldots ,s_{T}\rangle\) generated from simulating the agent in the environment as well as the demonstration. \[\mathcal{L}(s_{i},s_{p},s_{n}) = y\,\|f(s_{i}) - f(s_{p})\| + (1 - y)\max \left(\rho - \|f(s_{i}) - f(s_{n})\| ,0\right) \quad (9)\] where \(y = 1\) for a positive pair \((s_{i}, s_{p})\), whose distance should be minimal, and \(y = 0\) for a negative pair \((s_{i}, s_{n})\), whose distance should be maximal. The margin \(\rho\) acts as an anchor to push the negative example's output away from \(s_{i}\) and keep values in roughly a 0 to 1 range. \(f(\cdot)\) computes the output of the underlying network. A diagram of this image-based training process and design is shown in Figure 1a. The distance between two states is calculated as \(d(s,s^{\prime}) = ||f(s) - f(s^{\prime})||\) and the reward as \(r(s,s^{\prime}) = - d(s,s^{\prime})\). For recurrent models we use the same loss; however, the states \(s_{p},s_{n},s_{i}\) are sequences. The sequence is fed into the RNN and a single output encoding is produced for that sequence. During RL training we compute a distance given the sequence of states observed so far in the episode. This is a very flexible framework that allows us to train a distance function in state space where all we need to provide is labels that denote whether two states, or sequences, are similar or not. ### 3.3 SEQUENCE IMITATION Using a distance function in the space of states can allow us to find advantageous policies. The hazard of using a state-only distance metric when given demonstrations as sequences to <--- Page Split ---> imitate is that the RL agent can suffer from phase-mismatch. In this situation the agent may be performing the desired behaviour but at a different speed. As the demonstration timing and the agent diverge, the agent receives less reward, even though it is visiting states that exist elsewhere in the demonstration. If instead we consider the current state conditioned on the previous states, we can learn to give reward for visiting states that are merely out of sync with the demonstration motion.
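The loss in Eq. 9 and the induced reward are compact enough to transcribe directly. Below is a minimal numpy sketch, assuming `f_i` and `f_other` are embeddings already produced by the encoder \(f(\cdot)\); the function and argument names are ours.

```python
import numpy as np

def pair_loss(f_i, f_other, y, rho=1.0):
    """Eq. 9: y = 1 for a positive pair (pull the embeddings together),
    y = 0 for a negative pair (push the distance past the margin rho)."""
    d = np.linalg.norm(f_i - f_other)
    return y * d + (1.0 - y) * max(rho - d, 0.0)

def reward(f_agent, f_demo):
    """Distance between agent and demonstration embeddings, negated
    to form the RL reward r(s, s') = -d(s, s')."""
    return -np.linalg.norm(f_agent - f_demo)
```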
Our distance metric is formulated in a recurrent style, where the distance is computed from the current state and conditioned on all previous states: \(d(s_{t}|s_{t - 1},\ldots ,s_{0})\). The loss function remains the same as in Eq. 9, but the overall learning process changes to use an RNN-based model. A diagram of the method is shown in Figure 1b. This model uses a time-distributed RNN. A single convolutional network \(conv^{a}\) is first used to transform images of the agent and demonstration to encoding vectors \(e_{t}^{a}\). After the sequence of images is distributed through \(conv^{a}\), there is an encoded sequence \(\langle e_{0}^{a},\ldots ,e_{t}^{a}\rangle\); this sequence is fed into the RNN \(lstm^{a}\) until a final encoding \(h_{t}^{a}\) is produced. This same process is done for a copy of the RNN, \(lstm^{b}\), producing \(h_{t}^{b}\) for the demonstration. The loss is computed in a similar fashion to (Mueller & Thyagarajan, 2016) using the sequence outputs of images from the agent and another from the demonstration. The reward at each timestep is computed as \(r_{t} = -||h_{t}^{a} - h_{t}^{b}||\). At the beginning of every episode the RNN's internal state is reset. The policy and value function use a 2-layer neural network with 512 and 256 units respectively. ![](images/4_0.jpg) <center>Figure 1: Siamese network structure. The convolutional portion of the network includes 3 convolution layers: 16 filters of size \(10 \times 10\) and stride \(5 \times 5\), 32 filters of size \(5 \times 5\) and stride \(2 \times 2\), and 32 filters of size \(3 \times 3\) and stride \(1 \times 1\). The features are then flattened and followed by two dense layers of 256 and 128 units. The majority of the network uses ReLU activations except the last layer, which uses a sigmoid activation. Dropout is used between the convolutional layers. The RNN-based model uses a GRU layer with 128 hidden units, followed by a dense layer of 128 units. </center> ### 3.4 THE RL SIMULATION ENVIRONMENT Our simulation environment is similar to the OpenAI Gym robotics environments (Plappert et al., 2018). The demonstration \(M\) the agent is learning to imitate is generated from a clip of mocap data. The mocap data is used to animate a second robot in the simulation. Frames from the simulation are captured and used as video input to train the distance metric. The policy is trained on pose data, given as link distances and velocities relative to the simulated robot's Centre of Mass (COM). This is a new simulation environment that has been created to take motion capture data and produce multi-view video data that can be used for training RL agents or generating data for computer vision tasks. The environment also includes new challenging and dynamic tasks for humanoid robots. The simulation and rendering have been optimized, and efficient EGL-based off-screen rendering is used to capture video data. ### 3.5 DATA AUGMENTATION In a manner similar to how a person may learn to understand and reproduce a behaviour (Council et al., 2000; Gentner & Stevens, 2014), we apply a number of data augmentation methods to produce additional data used to train the distance metric. Using methods analogous to the cropping and warping methods popular in computer vision (He et al., 2015), we randomly crop sequences and randomly warp the demonstration timing.
The cropping is performed by both initializing the agent to random poses from the demonstration motion and terminating episodes when the agent's head, hands or torso contact the ground. The motion warping is done by replaying the demonstration <--- Page Split ---> motion at different speeds. We also make use of time information in a similar way to (Sermanet et al., 2017), where observations at similar times in the same sequence are often correlated and observations at different times may have little similarity. To this end we generate more training samples using randomly cropped sequences from the same trajectory as well as reversed and out-of-sync versions. Imitation data for other tasks is also used to help condition the distance metric learning process. Motion clips for running, backflips and frontflips are used along with the desired walking motion. See the Appendix for more details on how positive and negative pairs are created from this data. The algorithm used to train the distance metric and policy is outlined in Algorithm 1. Importantly, the RL environment generates two different state representations for the agent. The first state \(s_{t + 1}\) is the internal robot pose. The second state \(s_{t + 1}^{v}\) is the agent's rendered view. The rendered view is used with the distance metric to compute the similarity between the agent and the demonstration. We attempted to use the visual features as the state input for the policy as well; this resulted in poor policy quality. # Algorithm 1 Visual Imitation Learning Algorithm 1: Randomly initialize model parameters \(\theta_{\pi}\) and \(\theta_{d}\) 2: Create experience memory \(D\leftarrow \{\}\) 3: Given a demonstration \(M\gets \langle m_{0},\ldots ,m_{T}\rangle\) 4: while not done do 5: for \(i\in \{0,\ldots N\}\) do 6: \(\tau_{i}\leftarrow \{\}\) 7: for \(t\in \{0,\ldots ,T\}\) do 8: \(a_{t}\leftarrow \pi (\cdot |s_{t},\theta_{\pi})\) 9: \(s_{t + 1},s_{t + 1}^{v}\gets env(a_{t})\) 10: \(r_{t}\gets - d(s_{t + 1}^{v},m_{t + 1}|\theta_{d})\) 11: \(\tau_{i,t}\gets < s_{t},a_{t},r_{t}>\) 12: \(s_{t}\gets s_{t + 1}\) 13: end for 14: end for 15: \(D\gets D\bigcup \{\tau_{0},\ldots ,\tau_{N}\}\) 16: Update the distance metric \(d(\cdot)\) parameters \(\theta_{d}\) using \(D\) 17: Update the policy parameters \(\theta_{\pi}\) using \(\{\tau_{0},\ldots ,\tau_{N}\}\) 18: end while ## 4 EXPERIMENTS AND RESULTS This section contains a collection of experiments and results produced to investigate the capabilities of the method. ### 4.1 2D HUMANOID IMITATION The first experiment uses the method to learn a cyclic walking gait for a simulated humanoid walking robot given a single motion example, similar to (Peng & van de Panne, 2017). In this simulated robotics environment the agent is learning to imitate a given reference motion of a walk. The agent's goal is to learn how to actuate Proportional Derivative (PD) controllers at \(30\mathrm{fps}\) to mimic the desired motion. The simulation environment provides a hard-coded reward function based on the robot's pose that is used to evaluate the policy's quality. The images captured from the simulation are converted to grey-scale with \(128\times 128\) pixels. In between control timesteps the simulation collects images of the agent and the rendered demonstration motion. The agent is able to learn a robust walking gait even though it is only given noisy partial observations of a demonstration.
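For concreteness, the outer loop of Algorithm 1 can be sketched in Python. Here `env`, `policy`, and `distance` are stand-ins for the simulator, the Gaussian policy, and the siamese metric; the interface names are assumptions of ours rather than the paper's code.

```python
def visual_imitation_iteration(env, policy, distance, demo_frames, memory,
                               episodes=8, horizon=512):
    """One outer iteration of Algorithm 1 (a sketch)."""
    trajectories = []
    for _ in range(episodes):
        tau, state = [], env.reset()
        for t in range(horizon):
            action = policy.sample(state)
            # The environment returns both the pose state and the rendered view.
            state_next, visual_next, done = env.step(action)
            # Line 10 of Algorithm 1: reward is the negated visual distance
            # between the agent's rendered view and the demonstration frame.
            r = -distance(visual_next, demo_frames[t + 1])
            tau.append((state, action, r))
            state = state_next
            if done:
                break
        trajectories.append(tau)
    memory.extend(trajectories)       # line 15: grow the experience memory
    distance.update(memory)           # line 16: train the siamese metric
    policy.update(trajectories)       # line 17: RL update on fresh rollouts
    return trajectories
```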
We find it extremely helpful to normalize the distance metric outputs using \(r = \exp(r^{2}*w_{d})\), where \(w_{d} = - 5.0\) scales the filtering width (Peng & van de Panne, 2017). Early in training the distance metric often produces large, noisy values; because the RL method constantly updates its reward-scaling statistics, this initial high-variance data reduces the significance of the better distance metric values produced later on by scaling them to very small numbers. The improvement from using this normalized <--- Page Split ---> reward is shown in Figure 3a. Example motion of the agent after learning is shown in Figure 2 and in the supplemental video. ![](images/6_0.jpg) <center>Figure 2: Still frame shots from trained policy in the humanoid2d environment. </center> ![](images/6_1.jpg) <center>Figure 3: Ablation analysis of the method. We find that training RL policies is sensitive to the size and distribution of rewards. The siamese network benefits from a number of training adjustments that make it more suitable for RL. </center> ### 4.2 ALGORITHM ANALYSIS We compare the method, in Figure 4a, to two other methods that can be considered as learning a distance function in state space: GAIL, and a Variational Auto Encoder (VAE) used to train an encoding, with distances between those encodings computed using the same method as the siamese network. We find that the VAE method does not appear to capture the important distances between states, possibly due to the complexity of the decoding transformation. Similarly, we try a GAIL-type baseline and find that it either produces very jerky motion or stands still, both of which are contained in the imitation data. Our method, which considers the temporal structure of the data, produces higher-value policies. Additionally, we create a multi-modal version of the method where the same learning framework is used. Here we replace the bottom conv net with a dense network and learn a distance metric between agent poses and imitation video. The results of these models, along with using the default manual reward function, are shown in Figure 4b. The multi-modal version appears to perform about equal to the vision-only model. In Figure 4b and Figure 8c we compare our method to a non-sequence-based model that is equivalent to TCN. On average our method achieves higher-value policies. We conduct an additional ablation analysis in Figure 3c to compare the effects of particular methods used to assist in training the recurrent siamese network. We find it very helpful to reduce Reference State Initialization (RSI): if more episodes start in the same state, it increases the temporal alignment of training batches for the RNN. We believe it is very important that the distance metric be most accurate for the earlier states in an episode, so we use Early Episode Sequence Priority (EESP), meaning we bias the window used to crop RNN training batches towards the beginning of the episode. Also, we give higher probability to shorter windows. As the agent gets better, the average length of episodes increases, and so too will the average size of the cropped window. Last, we tried pretraining the distance function. This leads to mixed results; see Figure 7a and Figure 7b. Often, pretraining overfits the initial data collected, leading to poor early RL training. However, in the long run pretraining does appear to improve over the non-pretrained version.
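A minimal sketch of the two training aids described above: the reward shaping \(r = \exp(w_{d}\,r^{2})\) and EESP-style window sampling. The paper does not specify the exact sampling distribution, so the exponential bias below is an assumption of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def shaped_reward(raw_reward, w_d=-5.0):
    """Map the (negative-distance) reward into (0, 1] via exp(w_d * r^2)."""
    return np.exp(w_d * raw_reward ** 2)

def eesp_window(episode_len, min_len=4):
    """Early Episode Sequence Priority: bias training windows towards the
    start of the episode and towards shorter lengths (assumed distribution)."""
    start = min(int(rng.exponential(episode_len / 4)),
                max(episode_len - min_len, 0))
    length = min_len + int(rng.exponential(episode_len / 8))
    end = min(start + length, episode_len)
    return start, end
```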
### 4.3 3D HUMANOID ROBOT IMITATION In these simulated robotics environments the agent is learning to imitate a given reference motion of a walk, run, frontflip or backflip. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 4: Baseline comparisons between our sequence-based method, GAIL and TCN. In 4a we compare our method to GAIL and a VAE, where we use the Euclidean distance between encodings. We perform two additional baseline comparisons between our method and TCN in 4b and 8c. These both show that on average our method performs similarly to TCN or better over time. In these plots the large solid lines are the average performance over a collection of policy training runs. The dotted lines of the same colour are the specific performance values for each policy training run. The filled-in areas around the average policy performance are the variance over the collection of policy training runs. </center> A single imitation motion demonstration is provided by the simulation environment as a cyclic motion, similar to (Peng et al., 2018). The agent controls and observes frames at \(30\mathrm{fps}\). During learning, additional data from other tasks is included to train the distance metric (for the walking task this would be walking-dynamic-speed, running, frontflips and backflips). We also include data from a modified version of the task, walking-dynamic-speed, that has a randomly generated speed modifier \(\omega \in [0.5,2.0]\) that warps the demonstration timing. This additional data is used to provide a richer understanding of distances in space and time. The input to the distance metric is a pair of frames from the simulation per control step; see Algorithm 1. We find that using the RNN-based distance metric makes the learning process more gradual and smoother. This can be seen in Figure 4b, where the original manually crafted reward provides only sparse feedback after the agent has sufficiently diverged from the desired behaviour. Some example trajectories using the policy learned with the method are shown in Figure 5, Figure 6 and in the supplemental video. We include a rendered version of the running task in Figure 6. ![](images/7_1.jpg) <center>Figure 5: Still frame shots of the agent's motion after training on humanoid3d walking. </center> Sequence Encoding Using the learned sequence encoder, a collection of motions from different classes is processed to create a t-SNE embedding of the encodings (Maaten & Hinton, 2008). In Figure 4c we plot motions both generated from the learned policy \(\pi\) and from the expert trajectories \(\pi_{E}\). Interestingly, there are clear overlaps in specific areas of the space for similar classes across learned \(\pi\) and expert \(\pi_{E}\) data. There is also a separation between motion classes in the data. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 6: Still frame shots of the agent's motion after training on humanoid3d running. </center> ## 5 DISCUSSION AND CONCLUSION Learning a distance metric is imperfect: the metric can compute inaccurate distances in areas of the state space it has not yet been trained on. This could imply that when the agent explores and finds truly new and promising trajectories, the distance metric will produce bad results. We attempt to mitigate this effect by including training data from different tasks while training the distance metric. We believe the method will benefit greatly from a larger collection of multitask data and increased variation of each task.
Additionally, if the distance metric's confidence were modelled, this information could be used to reduce variance and overconfidence during policy optimization. It appears Deep Deterministic Policy Gradient (DDPG) works well for this type of problem. Our hypothesis is that, because the learned reward function changes between data collection phases, it may be better to view this as off-policy data. Learning a reward function while training also adds additional variance to the policy gradient. This may indicate that the bias of off-policy methods could be preferred over the added variance of on-policy methods. We also find it important to have a small learning rate for the distance metric (Figure 7c). This reduces the reward variance between data collection phases and allows learning a more accurate value function. Another approach may be to use partially observable RL, which has the ability to learn a better model of the value function given a changing RNN-based reward function. Training the distance metric could benefit from additional regularization, such as constraining the KL-divergence between updates to reduce variance. Another option is to learn a sequence-based policy as well, given that the rewards are no longer dependent on a single state observation. We tried using GAIL but found it has limited temporal consistency, which led to learning very jerky and overactive policies. The use of a recurrent discriminator for GAIL may mitigate some of these issues. It is challenging to produce results better than the carefully manually crafted reward functions used by some of the RL simulation environments that include motion phase information in the observations (Peng et al., 2018; 2017). Still, as environments become increasingly realistic and grow in complexity, we will need more complex reward functions to describe the desired behaviour we want from the agent. Training the distance metric is a complex balancing game. One might expect that the model should be trained early and fast so that it quickly understands the difference between a good and bad demonstration. However, learning quickly confuses the agent: rewards can change quickly, which can cause the agent to diverge toward an unrecoverable policy space. Slower has been better; the distance metric may not be accurate, but it may be locally or relatively reasonable, which is enough to learn a good policy. As learning continues these two optimizations can converge together. <--- Page Split ---> ## 6 APPENDIX ### 6.1 DATASETS The mocap clips used in the created environment come from the CMU mocap database and the SFU mocap database. ### 6.2 TRAINING DETAILS The learning simulations are trained using Graphics Processing Units (GPUs). The simulation is not only simulating the interaction physics of the world but also rendering the simulation scene in order to capture video observations. On average it takes 3 days to execute a single training simulation. The process of rendering and copying the images from the GPU is one of the most expensive operations of the method. ### 6.3 DISTANCE FUNCTION TRAINING In Figure 7b we show the training curve for the recurrent siamese network. The model learns smoothly, considering that the training data used is constantly changing as the RL agent explores. In Figure 7a the learning curve for the siamese RNN is shown after performing pretraining. We can see the overfitting portion that occurs during RL training. This overfitting can lead to poor reward prediction during the early phase of training.
![](images/11_0.jpg) <center>Figure 7: Training losses for the siamese distance metric. Higher is better as it indicates that the distances between sequences from the same class are closer. </center> <--- Page Split ---> It can be difficult to train a sequence-based distance function. One particular challenge is training the distance function to be accurate across the space of possible states. We found a good strategy was to focus on the beginning-of-episode data. If the model is not accurate on states seen earlier in the episode, the agent may never learn how to get into good states later in the episode that the distance function understands better. Therefore, when constructing batches to train the RNN we give higher probability to starting earlier in episodes. We also give a higher probability to shorter sequences. As the agent gets better, average episode length increases, and so too will the randomly selected sequence windows. ## 7 POSITIVE AND NEGATIVE EXAMPLES We use two methods to generate positive and negative examples. The first method is similar to TCN, where we make the assumption that sequences that overlap more in time are more similar. For each episode two sequences are generated, one for the agent and one for the imitation motion. We compute positive pairs by altering one of these sequences and comparing this altered version to its original version. Here we list the ways we alter sequences for positive pairs: 1. Adding Gaussian noise to each state in the sequence (mean \(= 0\) and variance \(= 0.02\)) 2. Out-of-sync versions where the first state is removed from the first sequence and the last state from the second sequence 3. Duplicating the first state in either sequence 4. Duplicating the last state in either sequence We alter sequences for negative pairs by: 1. Reversing the ordering of the second sequence in the pair. 2. Randomly picking a state out of the second sequence and replicating it to be as long as the first sequence. 3. Randomly shuffling one sequence. 4. Randomly shuffling both sequences. The second method we use to create positive and negative examples is to include data for additional classes of motion. These classes denote different task types. For the humanoid3d environment we generate data for walking-dynamic-speed, running, backflipping and frontflipping. Pairs from the same task are labelled as positive and pairs from different classes as negative. ### 7.1 RL ALGORITHM ANALYSIS It is not clear which RL algorithm may work best for this type of imitation problem. A number of RL algorithms were evaluated on the humanoid2d environment (Figure 8a). Surprisingly, Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) did not work well in this framework; given its controlled policy-gradient step, we expected it to reduce the overall variance. We found that DDPG (Lillicrap et al., 2015) worked rather well. This could be related to having a changing reward function, in that if the changing rewards are treated as off-policy data they can be easier to learn from. This can be seen in Figure 8b, where DDPG is best at estimating the future discounted rewards in the environment. We also tried Continuous Actor Critic Learning Automaton (CACLA) (Van Hasselt, 2012) and Proximal Policy Optimization (PPO) (Schulman et al., 2017); we found that PPO did not work particularly well on this task, which could also be related to the added variance. <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 8: RL algorithm comparison on humanoid2d environment.
</center> <--- Page Split --->
reject
Reject
4.333333
ICLR_2019_paper_0432
iclr
2,019
# SELECTIVITY METRICS CAN OVERESTIMATE THE SELECTIVITY OF UNITS: A CASE STUDY ON ALEXNET Anonymous authors Paper under double-blind review ## ABSTRACT Various methods of measuring unit selectivity have been developed in order to understand the representations learned by neural networks (NNs). Here we undertake a comparison of four such measures on AlexNet, namely, localist selectivity (Bowers et al., 2014), precision (Zhou et al., 2015), class-conditional mean activity selectivity (CCMAS; Morcos et al., 2018), and a new measure called top-class selectivity. In contrast with previous work on recurrent neural networks (RNNs), we fail to find any \(100\%\) selective 'localist units' in AlexNet, and demonstrate that the precision and CCMAS measures suggest a much higher level of selectivity than is warranted, with the most selective hidden units only responding strongly to a small minority of images from within a category. We also generated activation maximization (AM) images that maximally activated individual units and found that under \(5\%\) of units in fc6 and conv5 produced interpretable images of objects, whereas fc8 produced over \(50\%\) interpretable images. Furthermore, the interpretable images in the hidden layers were not associated with highly selective units. These findings highlight the problem with current selectivity measures and show that new measures are required in order to provide a better assessment of learned representations in NNs. We also consider why localist representations are learned in RNNs and not in AlexNet. ## 1 INTRODUCTION Although NNs were previously seen as black boxes, there have been recent attempts to understand how they work by analyzing hidden units one at a time using various measures such as localist selectivity (Bowers et al., 2014), class-conditional mean activity selectivity (CCMAS) (Morcos et al., 2018), precision (Zhou et al., 2015), and activation maximization (AM) (Erhan et al., 2009b). These measures are defined below, and they all provide evidence that some units respond selectively to categories under some conditions. For example, Bowers et al. (2014; 2016) found localist letter and word representations in recurrent networks (RNNs) trained on short-term memory tests, and Zhou et al. (2015; 2018) reported object detectors in convolutional neural networks (CNNs) trained on ImageNet. Our goal here is to directly compare different measures of object selectivity on a common network trained on a single task. We chose AlexNet (Krizhevsky et al., 2012) because it is a well-studied CNN and many authors have reported high levels of selectivity in its hidden layers via both quantitative (Zhou et al., 2018; 2015) and qualitative methods using Activation Maximization (AM) images (Nguyen et al., 2017; Yosinski et al., 2015; Simonyan et al., 2013). Our main findings are: 1. The precision and CCMAS are misleading measures that overestimate selectivity. 2. There are no localist 'grandmother cell' representations in AlexNet, in contrast with the localist representations learned in some RNNs. 3. Units with interpretable AM images do not necessarily correspond to highly selective representations. 4. New selectivity measures are required to provide a better assessment of the learned hidden representations in NNs. <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Examples of selectivity measures used.
(a) Jitterplot of unit 113 in an RNN under the superposition constraint, selective for the letter 'j'. (b) Jitterplot of a non-selective unit 160 found when the RNN was trained on words one-at-a-time; from Bowers et al. (2016). (c) Activation maximization (AM) image of a unit in conv5 of AlexNet that looks like a lighthouse; from Nguyen et al. (2016). (d) Highest activation images for a 'lamp' detector with 84% precision in layer pool5 of AlexNet; from Zhou et al. (2015). </center> Bowers et al. (2014; 2016) assessed the selectivity of hidden units in recurrent NNs using networks similar to those developed by Botvinick & Plaut (2006), designed to explain human performance on short-term memory tests. They reported many 'localist' units that are \(100\%\) selective for specific letters or words, where all members of the selective category were more active than, and disjoint from, all non-members, as can be shown in jitterplots (Berkeley et al., 1995) (see Fig. 1a for an example of a unit selective to the letter 'j'). These localist representations were compared to 'grandmother cells' as discussed in neuroscience (Bowers, 2017a). Bowers et al. (2014) argued that the network learned these representations in order to co-activate multiple letters or words at the same time in short-term memory without producing ambiguous blends of overlapping distributed patterns (the so-called 'superposition catastrophe'). Consistent with this hypothesis, localist units did not emerge when the model was trained on letters or words one-at-a-time, a condition in which the model did not need to overcome the superposition catastrophe (Bowers et al., 2014) (see Fig. 1b for an example of a non-selective unit trained under this latter condition). In parallel, researchers (Zhou et al., 2015; Morcos et al., 2018; Zeiler & Fergus, 2014; Erhan et al., 2009a; for a review see Bowers, 2017a) reported selective units in the hidden layers of various CNNs, including AlexNet (Krizhevsky et al., 2012), trained to classify images into one of multiple categories. For example, Zhou et al. (2015) assessed the selectivity of units in the pool5 layer of two CNNs trained to classify images into 1000 object and 205 scene categories, respectively. They reported multiple 'object detectors' (as defined in the method section) in both networks. Similarly, Morcos et al. (2018) reported that CNNs trained on CIFAR-10 and ImageNet learned many highly selective hidden units, with CCMAS scores often approaching the maximum of 1.0. Again, these results suggest high levels of selectivity in CNNs. Note that these later studies show that selective representations develop in CNNs trained to classify images one-at-a-time. This appears to be inconsistent with Bowers et al. (2016), who failed to obtain selective representations for letters or words under these conditions (see Fig. 1b), and it suggests that there are additional pressures for CNNs to learn selective representations above and beyond the challenge of overcoming the superposition catastrophe. However, the measures of selectivity that have been applied across studies are different, and accordingly, it is difficult to directly compare results. In order to directly compare and better understand the different selectivity measures, we assessed (1) localist, (2) precision, and (3) CCMAS selectivity on the prob, fc8, fc7, fc6, and conv5 layers of AlexNet.
We also introduce a new measure called top-class selectivity, and show that the precision and CCMAS measures provide much higher estimates of object selectivity compared to other measures. Importantly, we do not find any localist 'grandmother cell' representations in the hidden layers of AlexNet, consistent with the hypothesis that the superposition catastrophe provides a pressure to learn more selective representations (Bowers et al., 2014; 2016). <--- Page Split ---> In addition, we compared these selectivity measures to a state-of-the-art activation maximization (AM) method for visualizing single-unit representations in CNNs (Nguyen et al., 2017). AM images are generated to strongly activate individual units, and some of them are interpretable by humans (e.g., a generated image that looks like a lighthouse; see Fig. 1c). For the first time, we systematically evaluated the interpretability of the AM images in an on-line experiment and compared these ratings with the selectivity measures for the corresponding units. We show that hidden units with interpretable AM images are not highly selective. ## 2 METHODS Networks and Datasets All \(\sim 1.2\mathrm{M}\) photos from ImageNet2010 (Deng et al., 2009) were cropped to \(277\times 277\) pixels and classified by the pre-trained AlexNet CNN (Krizhevsky et al., 2012) shipped with Caffe (Jia et al., 2014), resulting in 721,536 correctly classified images. Once classified, the images are not re-cropped nor subject to any changes. In Caffe, the softmax operation (Denker & LeCun, 1991) is applied at the 'prob'(ability) output layer that contains 1000 units (one for each class). We analyzed these prob units; the fully connected (fc) layers: fc8 (1000 units), which encodes the outputs prior to the softmax operation, and fc6 and fc7 (4096 units); and the top convolutional layer conv5, which has 256 filters. We only recorded the activations of correctly classified images, and saved them in an activation table so the activations could be probed without re-evaluating the images each time. The activation files are stored in .h5 format and can be retrieved at http://anonymizedForReview. We selected 233 conv5, 2738 fc6, 2239 fc7, 911 fc8, and 954 prob units for analysis. Localist selectivity Here we define a unit to be localist for class \(A\) if the set of activations for class \(A\) is disjoint from those of \(\neg A\). A unit is selectively 'on' if \(\{A\} >\{\neg A\}\) (i.e. all images in \(A\) have higher activations than those not in \(A\)) and selectively 'off' if \(\{A\} < \{\neg A\}\). Localist selectivity is easily depicted with jitterplots, in which a scatter plot for each unit is generated (see Figs. 3a and 4a, b). Each point in a plot corresponds to a unit's activation in response to a single image, and only correctly classified images are plotted (if an image has been misclassified we cannot use its label to elucidate what the unit responds to). The level of activation is coded along the \(x\)-axis, and an arbitrary value is assigned to each point on the \(y\)-axis (they are jittered). When generating jitterplots for the conv5 layer we plotted the highest level of activation across each filter for each image. Top-Class selectivity Top-class selectivity is closely related to localist selectivity except that it provides a continuous rather than a discrete measure.
We counted the number of images from the same class that were more active than all images from all other classes (what we call the top cluster size) and divided this cluster size by the total number of correctly identified images from the class. \(100\%\) top-class selectivity is equivalent to a localist representation. Precision The precision method of finding object detectors (Zhou et al., 2015; 2018) involves identifying a small subset of images that most strongly activate a unit (the number of images in the most strongly activated subset differs across papers) and then identifying the critical part of these images that is responsible for driving the unit. Zhou et al. (2015) took the 60 images that activated a unit the most strongly and asked independent raters to interpret the critical image patches. Zhou et al. (2015) developed a precision metric that assessed the percentage of the 60 images that raters judged to depict the same class of object (e.g., if 50 of the 60 images were labeled as 'lamp', the unit would have a precision index of \(50 / 60\) or \(83\%\); see Fig. 1d). Object detectors were defined as units with a precision \(>75\%\); they reported multiple such detectors. Here we approximate this approach by considering the 100 images that most strongly activate a given unit and assessing the highest percentage of images from a given output class (e.g., if 75 of the top 100 images are all examples of a class 'lighthouse' then we consider the unit to be a 'lighthouse' object detector with a precision of \(75\%\)). CCMAS Morcos et al. (2018) introduced a selectivity index based on the 'class-conditional mean activation' selectivity (CCMAS). The CCMAS for class \(A\) compares the mean activation of all images in class \(A\), \(\mu_{A}\), with the mean activation of all images not in class \(A\), \(\mu_{\neg A}\), and is given by: \((\mu_{A} - \mu_{\neg A}) / (\mu_{A} + \mu_{\neg A})\). Morcos et al. (2018) state that this metric should vary within [0,1], with 0 meaning that a unit's average activity is identical for all classes, and 1 meaning that a unit is only active for inputs of a single class. Here, we assessed class selectivity for the class with the highest mean activation (CCMAS) as well as for the class with the second highest mean activation (what <--- Page Split ---> we call CCMAS_2) in order to assess the extent to which CCMAS reflects selectivity to one class. Activation Maximization We harnessed an activation maximization method called Plug & Play Generative Networks (Nguyen et al., 2017) in which an image generator network is used to generate images (hereafter, AM images) that highly activate a unit. We generated 100 separate images that maximally activated each unit in the conv5, fc6 and fc8 layers of AlexNet and displayed them in a grid format (see Appendix Figs. A5, A6 & A7). We then asked 333 participants to judge whether they could identify any repeating objects, animals, or places in the images after receiving some practice trials (see Appendix Fig. A1 for an example). Participants were recruited using Prolific (pro; Palan & Schitter, 2018), with the experiment run online using Gorilla (gor). More details of the experiment can be found in Appendix A1 and an example experiment for readers to try is at: exp. ## 3 RESULTS ### 3.1 COMPARISON OF SELECTIVITY MEASURES The mean top-class, precision, and CCMAS selectivities across the conv5, fc6, fc7, fc8, and prob layers are displayed in Fig. 2a-c.
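To make the three quantitative measures concrete, below is a minimal numpy sketch of how each can be computed per unit from the stored activation table. Here `acts` holds one unit's activations over the correctly classified images and `labels` the corresponding class indices; the function names are ours.

```python
import numpy as np

def top_class_selectivity(acts, labels, cls):
    """Fraction of a class's images more active than every other-class image.
    A value of 1.0 corresponds to a localist representation."""
    in_cls, out_cls = acts[labels == cls], acts[labels != cls]
    return np.sum(in_cls > out_cls.max()) / len(in_cls)

def precision_at_100(acts, labels):
    """Share of the 100 most-activating images drawn from the modal class."""
    top_labels = labels[np.argsort(acts)[-100:]]
    _, counts = np.unique(top_labels, return_counts=True)
    return counts.max() / 100.0

def ccmas(acts, labels, cls):
    """(mu_A - mu_notA) / (mu_A + mu_notA) for class A = cls."""
    mu_a = acts[labels == cls].mean()
    mu_not = acts[labels != cls].mean()
    return (mu_a - mu_not) / (mu_a + mu_not)
```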
We did not plot localist selectivity as there were no localist 'grandmother units' at any level, including the prob layer. The first point to note is that the top-class, precision, and CCMAS measures all increased in the higher layers, showing that they capture degrees of selectivity ignored by the localist measure. The second and perhaps most striking finding is that top-class selectivity was extremely low across the hidden layers, with means below \(0.25\%\) in the conv5, fc6, and fc7 layers. Third, the different measures provided very different estimates of selectivity. In contrast with top-class selectivity, the mean precision scores are over an order of magnitude larger in the hidden layers of the network, with average precision scores of \(9.6\%\), \(12.1\%\), and \(15.4\%\) in layers conv5, fc6, and fc7, respectively. Similarly, the CCMAS measure suggests a much higher level of selectivity than top-class selectivity, with mean scores of .49, .84, and .85 in the conv5, fc6, and fc7 layers, respectively. ![](images/3_0.jpg) <center>Figure 2: Selectivity measures across different layers of AlexNet. Left: top-class selectivity. Middle: precision 100 (the percentage of the top 100 images which are members of the top class). Right: Class-conditional mean activity selectivity (CCMAS). </center> The discrepancy between precision and CCMAS on the one hand, and top-class selectivity on the other, is even more striking when precision and CCMAS scores are high, as shown in Table 1. For example, we find \(4.5\%\) of units in layer fc7 have a precision of over \(50\%\) (that is, 184 units) and over \(92\%\) of units in fc7 have a CCMAS measure of over 0.8 (Table 1). At the same time, only \(0.1\%\) of units have a top-class selectivity over \(5\%\). Indeed, there are no units with a top-class selectivity over \(5\%\) in fc6 or conv5. Based on the precision and CCMAS measures it appears that AlexNet has learned some highly selective representations for objects, but according to localist and top-class selectivity, there is no evidence for this conclusion. This discrepancy becomes more striking still when you consider the units with the highest precision and CCMAS scores (see Table A1 in the Appendix for examples across multiple layers of AlexNet). To highlight one example, in Fig. 3 we illustrate why the unit fc6.1199 with the highest precision (95%) in fc6 should not be considered a Monarch butterfly detector. Fig. 3a depicts a jitterplot of activations to all accurately identified images, with Monarch butterfly images found across the range of activations. Fig. 3b shows a histogram that plots the distribution of activations for Monarch <--- Page Split ---> Table 1: The percentage of precision, CCMAS and top-class units in each layer at different threshold cut-offs.
<table><tr><td rowspan="2">Layer</td><td colspan="6">% of units at various thresholds</td></tr><tr><td>Precision</td><td></td><td>CCMAS</td><td></td><td>Top-Class selectivity</td><td></td></tr><tr><td></td><td>over 50%</td><td>over 75%</td><td>over 0.7</td><td>over 0.8</td><td>over 0.9</td><td>over 5%</td></tr><tr><td>prob</td><td>99.7%</td><td>99.6%</td><td>100%</td><td>100%</td><td>100%</td><td>100%</td></tr><tr><td>fc8</td><td>75.4%</td><td>32.8%</td><td>99.9%</td><td>97.2%</td><td>84.5%</td><td>23.6%</td></tr><tr><td>fc7</td><td>4.5%</td><td>0.3%</td><td>99.9%</td><td>92.1%</td><td>5.0%</td><td>0.1%</td></tr><tr><td>fc6</td><td>3.0%</td><td>0.1%</td><td>99.7%</td><td>86.7%</td><td>6.7%</td><td>0%</td></tr><tr><td>conv5</td><td>4.7%</td><td>0%</td><td>3.4%</td><td>0%</td><td>0%</td><td>0%</td></tr></table> butterflies. By far the most common activation to correctly identified Monarch butterflies is 0 and the mean is \(39.2 \pm 0.6\) . Figures 3c- e displays example images with 0 (c), mean (d) and maximal (e) activations, and all are identifiable as Monarch butterflies. Thus, classifying this unit as a Monarch butterfly detector is misleading. ![](images/4_0.jpg) <center>Figure 3: Data for unit fc6.1199. (a) activation jitterplot black squares: Monarch butterfly images; grey circles: all other classes. (b) histogram of activations of Monarch butterflies, c-e example ImageNet images with activations of 0, the mean, and the maximum of the range. Unit fc6.1199 has a precision of \(95\%\) over the top 100 images ( \(98.3\%\) over the top 60) and is thus classified as a butterfly detector, yet there are Monarch butterfly images covering the whole range of values, with 72 images ( \(5.8\%\) of the total) having an activation of 0.0. </center> Another surprising result is that we did not obtain any \(100\%\) top- class selectivity units (localist units) in the prob layer of AlexNet. Rather, the mean top- class selectivity was \(\sim 80\%\) in the prob layer, and only \(\sim 5\%\) in fc8 (prior to the softmax being applied). Figs. 4a,b depict the pattern of activation for units fc8.11 and prob.11 that are examples of the most top- class selective units in these layers (responding to images of 'goldfinch' birds with top- class selectivity measures of \(8.4\%\) and \(95.2\%\) , respectively). Clearly these units do respond much more selectively than the most selective units in earlier layers (as in Fig. 3), and at the same time, the jitterplots show why we did not observe any localist units (a few 'goldfinch' images were less active than a few images from other categories). These jitterplots also show that top- class and localist selectivity provide somewhat misleading estimates of selectivity as well. Consider Fig. 4a that depicts a substantial overlap between goldfinch and non- goldfinch activations on unit fc8.11. The \(8.4\%\) top- class selectivity score captures the se <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 4: Example data from the fc8 and prob layers. (a) jitterplot activations for unit fc8.11 that has a top-class selectivity of \(8.4\%\) . (b) jitterplot activations for prob.11 (i.e. post-softmax) that has top-class selectivity of \(95.2\%\) . Activations for the 'ground truth' class 'goldfinch' are shown as black squares, all other classes are shown as colored circles. </center> lectivity for the most highly active goldfinch images, but it is blind to the fact that almost goldfinch images have a reasonably high level of activation (more than most non- goldfinch images). 
The problem with localist selectivity is highlighted in Fig. 4b, which shows that the measure misses all but the most extreme version of selectivity. Together, these findings suggest that additional selectivity measures are required to better characterize the learned representations in NNs: the precision and CCMAS measures strongly overestimate the degree of selectivity, while localist and top-class selectivity provide either too strict or too narrow a measure of selectivity.

### 3.2 ADDITIONAL PROBLEMS WITH THE CCMAS MEASURE

The main problem with the precision and CCMAS measures is that they provide misleadingly high estimates of selectivity, but the CCMAS measure has some additional limitations. The first point to note is that, contrary to Morcos et al. (2018), the CCMAS measure can go above 1 if \(\mu_{-A}\) is negative, and this happens in the fc8 layer of AlexNet (see Fig. 4a) because there are no ReLU transformations in layer fc8 (as opposed to layers fc6 and fc7). This is a minor issue that can be fixed by normalizing the range of activations, but it explains why some of our CCMAS scores are above 1.

More importantly, if the CCMAS provided a good measure of a unit's class selectivity, then a very high selectivity for one class should imply that the unit is not highly selective for any other class. This is not the case, as shown in Fig. 5a-c, where CCMAS is compared to CCMAS_2, which assesses a unit's selectivity to the category with the second highest mean activation. For example, unit fc7.0 has a CCMAS of .813 for the class 'maypole', but the class with the second highest mean activation, 'chainsaw', has a CCMAS of .808 (and neither of these is the top class, which is 'orangutan' with a precision of \(14\%\)).

![](images/5_1.jpg) <center>Figure 5: Example of where the CCMAS does not match intuitive understandings of selectivity. Experimental data: (a-c) the CCMAS scores for the class with the highest (CCMAS) and second highest mean activation (CCMAS_2) for all units across layers conv5 (a), fc6 (b), and fc7 (c). </center>

It is also important to note that the CCMAS measure of selectivity is measuring something quite different from the alternative measures. For example, the percentage of conv5, fc6, and fc8 units in which top-class and CCMAS are selective to the same class is as follows: \(0\%\), \(9.8\%\), and \(83.5\%\) (\(0.1\%\) being chance). To highlight how discrepant the different measures can be, we have generated some artificial datasets, depicted in Fig. A4 in the Appendix, that show that CCMAS scores can be much higher or lower than top-class or localist selectivity scores. Indeed, the figure shows that the CCMAS measure can give a low selectivity score to a localist representation. Part of the problem is that the CCMAS measure compares the mean of the selected class with the mean of the unselected classes, but these distributions are not normal (in fact the activations of all classes follow an exponential distribution), and thus a comparison between means is not an appropriate measure (see Figures A2 and A3 in Appendix Section A3).
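For comparison, a sketch of the CCMAS calculation is given below (again our own illustrative code, with the same assumed `acts`/`labels` layout as the earlier sketch); the `rank` argument selects the class with the rank-th highest class-conditional mean, so `rank=2` gives the CCMAS_2 variant used above.

```python
import numpy as np

def ccmas(acts, labels, rank=1):
    """(mu_A - mu_notA) / (mu_A + mu_notA), where A is the class with the
    rank-th highest class-conditional mean activation."""
    classes = np.unique(labels)
    class_means = np.array([acts[labels == c].mean() for c in classes])
    a = classes[np.argsort(class_means)[::-1][rank - 1]]
    mu_a = acts[labels == a].mean()
    mu_not_a = acts[labels != a].mean()
    # Nothing here bounds the score to [0, 1]: if mu_not_a is negative
    # (possible in fc8, which has no ReLU), the score can exceed 1, which
    # is why some values in Table A1 are above 1.
    return (mu_a - mu_not_a) / (mu_a + mu_not_a)
```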
### 3.3 HUMAN INTERPRETATION OF GENERATED IMAGES FROM ACROSS ALEXNET

The results of the behavioral experiment in which humans rated AM images are reported in Table 2. Consistent with past research, the generated images in the output fc8 layer were often interpreted as objects, and when they were given a consistent interpretation, they almost always (\(95.4\%\)) corresponded to the trained category. By contrast, less than \(5\%\) of units in conv5 or fc6 were associated with consistently interpretable images, and as can be seen in Table 2, the interpretations only weakly matched the category with the highest top-class or CCMAS selectivity. The frequency with which objects were seen by the participants was similar in the conv5 and fc6 layers and increased in fc8, consistent with the top-class and precision measures of selectivity.

Table 2: Interpretability judgements for AM images. The number of judgments for conv5, fc6, and fc8 were 1,332, 10,656, and 5,181, respectively.

<table><tr><td>LAYER</td><td>% YES RESPONSES</td><td>% OF UNITS WITH &gt;80% YES RESPONSES</td><td>% AGREEMENT AMONG HUMANS</td><td>% OVERLAP BETWEEN HUMANS AND TOP CLASS</td><td>% OVERLAP BETWEEN HUMANS AND CCMAS CLASS</td></tr><tr><td>conv5</td><td>21.7% ±1.1%</td><td>4.3% ±1.3%</td><td>89.5% ±5.7%</td><td>34.1% ±14.4%</td><td>0%</td></tr><tr><td>fc6</td><td>21.0% ±0.4%</td><td>3.1% ±0.4%</td><td>80.4% ±4.1%</td><td>23.3% ±5.9%</td><td>18.9% ±5.9%</td></tr><tr><td>fc8</td><td>71.2% ±0.6%</td><td>59.3% ±1.6%</td><td>96.5% ±0.4%</td><td>95.4% ±0.6%</td><td>94.6% ±0.7%</td></tr></table>

Apart from showing that there are few interpretable units in the hidden layers of AlexNet, our findings show that the interpretability of images does not imply a high level of selectivity. That is, we know from Sec. 3.1 that the maximum top-class selectivity for the hidden units is well under \(10\%\) (Fig. 2), and accordingly, all the consistently interpretable units had selectivities less than this. Indeed, in most cases, the top-class selectivity of the interpretable units is well under \(1\%\). To briefly illustrate the types of images that participants rated as objects or non-objects, Fig. 6a-c depicts three AM images from units in the conv5, fc6, and fc8 layers of AlexNet that were interpreted consistently with the top-class selectivity measure, and Fig. 6d-e depicts corresponding uninterpretable images. Additional images can be found in the Appendix Figs. A5 and A6.

![](images/6_0.jpg) <center>Figure 6: Example AM images that were either judged by all participants to contain objects (a-c) or judged by all participants to be uninterpretable as objects (d-e). The human judgement for conv5.183 was 'dogs' and the top class was 'flat-coated retriever'. For fc6.319 subjects reported 'green peppers' or 'apples' (all classified as the same broad class in our analysis), and the CCMAS and top class was 'Granny Smith apples'. For fc8.969 humans suggested 'beverage' or 'drink'; the ground-truth class for this unit is 'eggnog'. The ground truth for fc8.865 is 'toy-store'. </center>

## 4 DISCUSSION AND CONCLUSIONS

Our central finding is that different measures of activation selectivity support very different conclusions when applied to the same units in AlexNet. In contrast with the precision (Zhou et al., 2015) and CCMAS (Morcos et al., 2018) measures that revealed some highly selective units for objects in layers conv5, fc6, and fc8, we found no localist representations, and the mean top-class selectivity in these layers was well under \(1\%\). These findings are in stark contrast with the many localist 'grandmother cell' representations learned in RNNs (Bowers et al., 2014; 2016; Bowers, 2017b).

Not only did the different measures provide very different assessments of selectivity, we found that the precision and CCMAS measures provided highly misleading estimates.
For example, a unit with over a \(75\%\) precision score for Monarch butterflies had a top-class selectivity of under \(5\%\). Although Zhou et al. (2015) used \(75\%\) precision scores as the criterion for 'object detectors', it is inappropriate to call this unit a Monarch butterfly detector given that it did not respond strongly to the majority of Monarch butterfly images (and indeed, the modal response was 0; see Fig. 3). This discrepancy between precision and top-class selectivity was widespread. Similarly, we found that extremely high CCMAS measures did not indicate that a unit was selective exclusively to one category, as might be expected. To take an extreme example, we found a unit in the output prob layer that had selectivities of .999 for the category 'bridegroom' and .995 for the category 'flowerpot'. Top-class selectivity for this unit was different once again, responding most strongly to the category 'ringlet butterfly'.

At the same time, we identified problems with the localist, top-class, and activation maximization (AM) methods as well. The localist selectivity measure failed to obtain any localist representations, even at the output prob layer of AlexNet. This measure is so extreme that it misses highly selective representations that are of theoretical interest. Top-class selectivity does provide a graded measure of selectivity (with \(100\%\) top-class selectivity equivalent to a localist grandmother cell), but it can underestimate selectivity when a few members from outside the top class are highly activated (see Fig. 4b for an example). At the same time, the human interpretation of AM images provides a poor measure of hidden-unit selectivity given that interpretable AM images were associated with low top-class selectivity scores. These findings highlight the need for better measures of selectivity in order to better characterize the learned representations in NNs.

What should be made of the contrasting findings that localist representations are found in RNNs, but not in AlexNet? The failure to observe localist units in the hidden layers of AlexNet is consistent with the Bowers et al. (2014) claim that these units only emerge in order to support the co-activation of multiple items at the same time in short-term memory. That is, highly selective representations may be the solution to the superposition catastrophe, and AlexNet only has to identify one image at a time. This may help explain the many reports of highly selective neurons in cortex given that the cortex needs to co-activate multiple items at the same time in order to support short-term memory (Bowers et al., 2016). However, it should be noted that the RNNs that learned localist units were very small in scale compared to AlexNet, and accordingly, it will be interesting to assess unit selectivity in larger RNNs that have much larger memory capacity. Relevant to this issue, Karpathy et al. (2016) reported some striking examples of selective representations in an RNN with long short-term memory (LSTM) trained to predict text. Although they did not systematically assess the degree of selectivity, they reported examples that are consistent with \(100\%\) selective units. If in fact the superposition constraint provides a pressure to learn more selective representations, then we should observe more highly selective representations, perhaps localist units, in large RNNs with LSTMs as well. We will be testing this hypothesis in future work.

## REFERENCES
Experiment trial. https://research.sc/participant/login/dynamic/63907FB2-3CB9-45A9-B4AC-EFFD4C4A95D5. Accessed: 2018-09-24.

Gorilla experiment builder. www.gorilla.sc. Accessed: 2018-09-24.

Attrition. http://Prolific.ac. Accessed: 2018-09-24.

Istvan SN Berkeley, Michael RW Dawson, David A Medler, Don P Schopflocher, and Lorraine Hornsby. Density plots of hidden value unit activations reveal interpretable bands. Connection Science, 7(2):167-187, 1995.

Matthew M Botvinick and David C Plaut. Short-term memory for serial order: A recurrent neural network model. Psychological Review, 113(2):201, 2006.

Jeffrey S Bowers. Grandmother cells and localist representations: A review of current thinking. Language, Cognition and Neuroscience, pp. 257-273, 2017a.

Jeffrey S Bowers. Parallel distributed processing theory in the age of deep networks. Trends in Cognitive Sciences, 21(12):950-961, 2017b.

Jeffrey S Bowers, Ivan I Vankov, Markus F Damian, and Colin J Davis. Neural networks learn highly selective representations in order to overcome the superposition catastrophe. Psychological Review, 121(2):248-261, 2014.

Jeffrey S Bowers, Ivan I Vankov, Markus F Damian, and Colin J Davis. Why do some neurons in cortex respond to information in a selective manner? Insights from artificial neural networks. Cognition, 148:47-63, 2016.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255. IEEE, 2009.

John S Denker and Yann LeCun. Transforming neural-net output levels to probability distributions. In Advances in Neural Information Processing Systems, pp. 853-859, 1991.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. University of Montreal, 1341(3):1, 2009a.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. University of Montreal, 1341(3):1, 2009b.

Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.

Andrej Karpathy, Justin Johnson, and Li Fei-Fei. Visualizing and understanding recurrent networks. Workshop Track at International Conference on Learning Representations, 2016.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Ari S Morcos, David GT Barrett, Neil C Rabinowitz, and Matthew Botvinick. On the importance of single directions for generalization. arXiv preprint arXiv:1803.06959, 2018.

Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Advances in Neural Information Processing Systems, pp. 3387-3395, 2016.

Anh Nguyen, Jeff Clune, Yoshua Bengio, Alexey Dosovitskiy, and Jason Yosinski. Plug & play generative networks: Conditional iterative generation of images in latent space. In CVPR, volume 2, pp. 7, 2017.

Stefan Palan and Christian Schitter. Prolific.ac: A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17:22-27, 2018.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015.

Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818-833. Springer, 2014.

Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge in deep scene CNNs. In International Conference on Learning Representations, 2015.

Bolei Zhou, David Bau, Aude Oliva, and Antonio Torralba. Interpreting deep visual representations via network dissection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.

## APPENDIX FOR SELECTIVITY METRICS CAN OVERESTIMATE THE SELECTIVITY OF UNITS: A CASE STUDY ON ALEXNET

## A1 METHODOLOGICAL DETAILS FOR THE BEHAVIORAL EXPERIMENT

One hundred generated images were made for every unit in layers conv5, fc6, and fc8 in AlexNet, as in Nguyen et al. (2017), and displayed as 10x10 image panels (Figures A5, A6, and A7). A total of 3,299 image panels were used in the experiment (995 fc8, 256 conv5, and 2,048 randomly selected fc6 image panels) and were divided into 64 counterbalanced lists of 51 or 52 panels (4 conv5, 15 or 16 fc8, and 32 fc6). 51 of the lists were assigned 5 participants and 13 lists were assigned 6 participants. To test the interpretability of these units as object detectors, paid volunteers were asked to look at the image panels and say whether the images had an object, animal, or place in common. If the answer was 'yes', they were asked to name that object simply (e.g., 'fish' rather than 'goldfish'). Analyses of common responses were conducted for any unit where over \(80\%\) of humans agreed there was an object present, by reading the human responses and comparing them both to each other and to the output classes. Agreement was recorded if the object was of the same rough class. For example, 'beer', 'glass', and 'drink' were all considered to be in agreement on the general object 'drink', and in agreement with both the classes 'wine glass' and 'beer', as these classes were also general drink classes (this is an actual example; most responses were more obvious and required far less interpretation than that). Participants were given six practice trials, each with panels of 20 images, before starting the main experiment. Practice trials included images that varied in their interpretability.

![](images/10_0.jpg) <center>Figure A1: Example screen from the identification task shown to participants as part of the instructions. The images included on this practice trial are ImageNet2010 images, not AM images. </center>

## A2 FURTHER DATA ON THE SELECTIVITY MEASURES

This section provides further data on the selectivity measures across AlexNet. Table A1 demonstrates some extreme values of CCMAS, precision, and top-class selectivity, as well as the similarity between CCMAS and CCMAS_2, and Table A2 gives the mean values across the layers.

Table A1: Selectivity measures for extreme example units. Across all the tested units, prob.322, fc8.393, fc7.31, fc6.582, and conv5.78 were the units with the highest CCMAS measures. Units fc6.1199, fc8.11, and prob.11 were displayed in Figs. 3 & 4.
<table><tr><td>LAYER.UNIT</td><td>CCMAS</td><td>CCMAS_2</td><td>Precision</td><td>Top Cluster Size</td><td>% Top-Class Selectivity</td></tr><tr><td colspan="6">Top CCMAS units</td></tr><tr><td>prob.322</td><td>0.999921</td><td>0.995384</td><td>100%</td><td>963</td><td>97.1%</td></tr><tr><td>fc8.393</td><td>1.24357</td><td>1.39617</td><td>95%</td><td>29</td><td>3.3%</td></tr><tr><td>fc7.31</td><td>0.936767</td><td>0.865305</td><td>11%</td><td>1</td><td>0.15%</td></tr><tr><td>fc6.582</td><td>0.933832</td><td>0.919425</td><td>1%</td><td>1</td><td>0.14%</td></tr><tr><td>conv5.78</td><td>0.746763</td><td>0.746346</td><td>5%</td><td>1</td><td>0.10%</td></tr><tr><td colspan="6">Top precision units</td></tr><tr><td>prob.0</td><td>0.9996680</td><td>0.99316330</td><td>100%</td><td>1000</td><td>92.2%</td></tr><tr><td>fc8.1</td><td>1.049710</td><td>1.110110</td><td>100%</td><td>172</td><td>16.1%</td></tr><tr><td>fc7.255</td><td>0.8961654</td><td>0.84108156</td><td>97%</td><td>94</td><td>7.6%</td></tr><tr><td>fc6.1199</td><td>0.92323</td><td>0.818260</td><td>95%</td><td>43</td><td>3.5%</td></tr><tr><td>conv5.0</td><td>0.554049430</td><td>0.528534300</td><td>77%</td><td>1</td><td>0.1%</td></tr><tr><td colspan="6">Top class selectivity units</td></tr><tr><td>prob.322</td><td>0.99964017</td><td>0.99538374</td><td>100%</td><td>963</td><td>97.1%</td></tr><tr><td>fc8.985</td><td>1.0908700</td><td>1.1929800</td><td>100%</td><td>574</td><td>50%</td></tr><tr><td>fc7.255</td><td>0.8961654</td><td>0.84108156</td><td>97%</td><td>94</td><td>7.6%</td></tr><tr><td>fc6.1199</td><td>0.92323</td><td>0.818260</td><td>95%</td><td>43</td><td>3.5%</td></tr><tr><td>conv5.100</td><td>0.68313890</td><td>0.6831976</td><td>56%</td><td>9</td><td>1.1%</td></tr><tr><td colspan="6">Example units</td></tr><tr><td>prob.11</td><td>0.999876</td><td>0.973243</td><td>100%</td><td>1000</td><td>95.2%</td></tr><tr><td>fc8.11</td><td>1.10469</td><td>1.1946</td><td>99%</td><td>88</td><td>8.4%</td></tr></table>

Table A2: Average selectivity measures across layers (note that these values are not normally distributed; see Figures A2 and A3).

<table><tr><td>LAYER</td><td>CCMAS</td><td>CCMAS_2</td><td>Precision</td><td>Top Cluster Size</td><td>% Top-Class Selectivity</td></tr><tr><td>Mean[prob]</td><td>0.999309</td><td>0.984169</td><td>99.7%</td><td>592.5</td><td>82.1%</td></tr><tr><td>Mean[fc8]</td><td>0.995644</td><td>1.002905</td><td>70.1%</td><td>108.4</td><td>5.1%</td></tr><tr><td>Mean[fc7]</td><td>0.847799</td><td>0.830854</td><td>15.4%</td><td>20.3</td><td>0.23%</td></tr><tr><td>Mean[fc6]</td><td>0.8439226</td><td>0.821118</td><td>12.1%</td><td>17.4</td><td>0.19%</td></tr><tr><td>Mean[conv5]</td><td>0.491029</td><td>0.464662</td><td>9.6%</td><td>17.3</td><td>0.17%</td></tr></table>

## A3 FURTHER ISSUES WITH THE CCMAS MEASURE

## A3.1 HISTOGRAMS AND DISTRIBUTION FITS FOR ACTIVATIONS IN UNIT fc6.1199

The CCMAS measure is based on comparing the mean activations of categories, and this is problematic for a few reasons. First, the majority of images do not activate a unit at all. For instance, our butterfly 'detector' unit fc6.1199 has \(82.8\%\) of images with an activation of 0.0 (see Figure A2). This means that the CCMAS selectivity is largely determined by the distribution of images that have 0 activation rather than by the distribution of images that strongly activate a unit (a small synthetic illustration is given in the sketch following Figure A2). This leads to very different estimates than the precision, top-class, and localist measures, which are concerned with the distribution of the most strongly activating images. Note that this issue of activations of 0.0 moving the mean around could be addressed by taking only the non-zero activations (which is not that dissimilar from what neuroscientists do); however, there are problems with the non-zero activations as well.

![](images/12_0.jpg) <center>Figure A2: The distribution of activations on unit fc6.1199 for all images (left). \(82.8\%\) of activations are zero across all classes, but only \(5.8\%\) of the butterfly class are zero. Unit fc6.1199 is a butterfly detector under Zhou et al.'s precision measure. Bins are 1.0 wide. </center>
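The following synthetic sketch illustrates this point. The parameters (zero rates and exponential scales) are illustrative assumptions loosely matched to the fc6.1199 statistics above, not the measured AlexNet data: a unit that is silent for the large majority of images in every class can still receive a near-maximal CCMAS, because the zeros drag the not-A mean toward 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_other_classes = 700, 999

# Target class A: ~6% zero activations, non-zero responses roughly exponential.
a = np.where(rng.random(n_per_class) < 0.06, 0.0,
             rng.exponential(scale=40.0, size=n_per_class))

# Every other class: ~83% zeros, with occasional weaker non-zero responses.
n_rest = n_per_class * n_other_classes
not_a = np.where(rng.random(n_rest) < 0.83, 0.0,
                 rng.exponential(scale=10.0, size=n_rest))

ccmas = (a.mean() - not_a.mean()) / (a.mean() + not_a.mean())
print(round(ccmas, 2))  # ~0.9, despite heavy overlap of the two sets near zero
```

Restricting the average to the non-zero activations would change the score substantially, but as discussed next, the non-zero activations bring problems of their own.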
The problem with the CCMAS measure when applied to the non-zero activations is that they are not normally distributed. As Figure A3 demonstrates (for our example unit fc6.1199), the non-zero activations follow an exponential curve, and thus the mean is not a useful summary. Fitting a normal distribution to these data gives the blue normal distribution curve. Although the butterfly class can be roughly fit by a normal distribution, because the activations as a whole follow an exponential, the non-butterfly classes are best fit by an exponential rather than a normal distribution. As the CCMAS requires a comparison of means, and the not-A classes follow an exponential rather than a normal distribution, it follows that the CCMAS will give misleading results.

Computing CCMAS on the basis of mean activations can also produce highly non-intuitive results, as illustrated in Figure A4, which plots three distributions of generated data from 10 classes of 100 points. We can see that the CCMAS gives the same (and maximal) score for the case where the unit is off to everything but a single point (Figure A4a) as it does for a 'grandmother unit' (Figure A4b). And the CCMAS gives an incredibly low score when the means of A and not-A are similar, even if the two sets are perfectly separated and disjoint (Figure A4c); a sketch reproducing these three cases is given below, after the figure captions.

![](images/13_0.jpg) <center>Figure A3: Fitting the non-zero activations for all classes (red) and the maximum activation class (black). For the superset of all the classes, the distribution is well described by exponential-derived fits, while normal-derived fits are poor. For the maximum activating class (butterfly), the distribution has a mean and can be well described by a normal distribution. </center>

![](images/13_1.jpg) <center>Figure A4: Example of where the CCMAS does not match intuitive understandings of selectivity. Generated example data: (a) If a unit is off to all but a single image from a large class of objects, the CCMAS for that class is 1 (the maximum possible selectivity). (b) If a unit is strongly activated by all members of one class and off to everything else (an archetypal 'grandmother' cell), the CCMAS is the same as for (a), although the precision and top-class selectivity are vastly different. (c) If a unit has high activations for all classes, but one class (black squares) is 0.1 higher than all others (coloured circles), the CCMAS is very low (0.06) despite the unit being \(100\%\) top-class selective. The generated examples are for 10 classes of 100 items. </center>
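This sketch regenerates the three cases from Figure A4 and prints the CCMAS each one receives. The exact activation values are our own assumptions (the figure's underlying numbers are not reported); for case (c) we pick values that reproduce the quoted score of roughly 0.06.

```python
import numpy as np

def ccmas(a, not_a):
    return (a.mean() - not_a.mean()) / (a.mean() + not_a.mean())

off = np.zeros(100)       # one class of 100 images, all with activation 0
others = np.tile(off, 9)  # nine further classes, also silent

# (a) On for a single image of class A, off everywhere else: CCMAS = 1.0.
single = off.copy()
single[0] = 10.0
print(ccmas(single, others))              # 1.0

# (b) Archetypal grandmother cell, on for every image of class A: also 1.0.
print(ccmas(np.full(100, 10.0), others))  # 1.0

# (c) High activations for every class, class A just 0.1 higher: CCMAS is
# tiny even though class A is perfectly separated (100% top-class selective).
print(ccmas(np.full(100, 0.88), np.full(900, 0.78)))  # ~0.06
```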
## A4 HUMAN INTERPRETATION OF ACTIVATION MAXIMIZATION IMAGES

Some examples of the 10x10 grids of activation maximisation images that were presented to participants are shown in Figures A5, A6, and A7. Figure A5 shows an example from conv5 that human participants agreed had no obvious object in common (although there are repeated shape motifs, the participants were specifically asked for objects, and not for abstract concepts like shape or color). Figure A6 is also from conv5 and was judged by participants to contain some interpretable images, in this case of 'dogs'. Figure A7 shows the AM images for the supposed 'butterfly detector' unit discussed in the paper.

![](images/14_0.jpg) <center>Figure A5: Example activation maximisation images for unit conv5.65. These images were judged by humans not to contain any interpretable objects in common (although the reader may agree that there are some shape and colour similarities across the images). </center>

![](images/15_0.jpg) <center>Figure A6: Example activation maximisation images for unit conv5.183. These images were judged by humans to contain some interpretable images, in this case of the type 'dogs'. </center>

![](images/16_0.jpg) <center>Figure A7: Example activation maximisation images for unit fc6.1199. Whilst there are some butterfly wing shapes in these images, there are no obvious butterflies. N.B. the second highest activating class for this unit is ladybirds, and there are some orange round shapes that could conceivably be ladybird-alikes. </center>
## ABSTRACT Various methods of measuring unit selectivity have been developed in order to understand the representations learned by neural networks (NNs). Here we undertake a comparison of four such measures on AlexNet, namely, localist selectivity Bowers et al. (2014), precision (Zhou et al., 2015), class- conditional mean activity selectivity CCMAS; Morcos et al. (2018), and a new measure called top- class selectivity. In contrast with previous work on recurrent neural networks (RNNs), we fail to find any \(100\%\) selective 'localist units' in AlexNet, and demonstrate that the precision and CCMAS measures provide a much higher level of selectivity than is warranted, with the most selective hidden units only responding strongly to a small minority of images from within a category. We also generated activation maximization (AM) images that maximally activated individual units and found that under \(5\%\) of units in fc6 and conv5 produced interpretable images of objects, whereas fc8 produced over \(50\%\) interpretable images. Furthermore, the interpretable images in the hidden layers were not associated with highly selective units. These findings highlight the problem with current selectivity measures and show that new measures are required in order to provide a better assessment of learned representations in NNs. We also consider why localist representations are learned in RNNs and not AlexNet. ## 1 INTRODUCTION Although previously seen as black boxes, there have been recent attempts to understand how neural networks (NNs) work by analyzing hidden units one at a time using various measures such as localist selectivity (Bowers et al., 2014), class- conditional mean activity selectivity (CCMAS) (Morcos et al., 2018), precision (Zhou et al., 2015), and activation maximization (AM) (Erhan et al., 2009b). These measures are defined below, and they all provide evidence that some units respond selectively to categories under some conditions. For example, Bowers et al. (2014; 2016) found localist letter and word representations in recurrent networks (RNNs) trained on short- term memory tests, and (Zhou et al., 2015; 2018) reported object detectors in convolutional neural networks (CNNs) trained on ImageNet. Our goal here is to directly compare different measures of object selectivity on a common network trained on a single task. We chose AlexNet (Krizhevsky et al. (2012)) because it is a well- studied CNN and many authors have reported high levels of selectivity in its hidden layers via both quantitative (Zhou et al., 2018; 2015) and qualitative methods using Activation Maximization (AM) images (Nguyen et al., 2017; Yosinski et al., 2015; Simonyan et al., 2013). Our main findings are: 1. The precision and CCMAS are misleading measures that overestimate selectivity. 2. There are no localist 'grandmother cell' representations in AlexNet, in contrast with the localist representations learned in some RNNs. 3. Units with interpretable AM images do not necessarily correspond to highly selective representations. 4. New selectivity measures are required to provide a better assessment of the learned hidden representations in NNs. <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Examples of selectivity measures used. (a) Jitterplot of unit 113 in an RNN under the superposition constraint selective the letter 'j' (b) Jitterplot of a non-selective unit 160 found when RNN trained on words one-at-a-time; from Bowers et al. (2016). 
(c) Activation maximization (AM) image of a unit in conv5 of AlexNet that looks like a lighthouse; from Nguyen et al. (2016). (d) Highest activation images for a 'lamp' detector with 84% precision in layer pool5 of AlexNet; from Zhou et al. (2015). </center> Bowers et al. (2014; 2016) assessed the selectivity of hidden units in recurrent NNs using networks similar to those developed by Botvinick & Plaut (2006) designed to explain human performance on short- term memory tests. They reported many 'localist' units that are \(100\%\) selective for specific letters or words, where all members of the selective category were more active than and disjoint from all non- members, as can be shown in jitterplots (Berkeley et al., 1995) (see Fig. 1a for an example of a unit selective to the letter 'j'). These localist representations were compared to 'grandmother cells' as discussed in neuroscience (Bowers, 2017a). Bowers et al. (2014) argued that the network learned these representations in order to co- activate multiple letters or words at the same time in short- term memory without producing ambiguous blends of overlapping distributed patterns (the so- called 'superposition catastrophe'). Consistent with this hypothesis, localist units did not emerge when the model was trained on letters or words one- at- a- time (a condition in which the model did not need to overcome the superposition catastrophe (Bowers et al., 2014)). (see Fig. 1b for an example of a non- selective unit trained under this latter condition) In parallel, researchers (Zhou et al. 2015; Morcos et al. 2018; Zeiler & Fergus 2014; Erhan et al. 2009a, for a review see Bowers (2017a)) reported selective units in the hidden layers of various CNNs, including AlexNet (Krizhevsky et al., 2012), trained to classify images into one of multiple categories. For example, Zhou et al. (2015) assessed the selectivity of units in the pool5 layer of two CNNs trained to classify images into 1000 objects and 205 scene categories, respectively. They reported multiple 'object detectors' (as defined in the method section) in both networks, Similarly, Morcos et al. (2018) reported that CNNs trained on CIFAR- 10 and ImageNet learned many highly selective hidden units, with CCMAS scores often approaching the maximum of 1.0. Again, these results suggest high- levels of selectivity in CNNs. Note that these later studies show that selective representations develop in CNNs trained to classify images one- at- a- time. This appears to be inconsistent with the Bowers et al. (2016) who (a) failed to obtain selective representations for letters or words under these conditions (see Fig. 1b); and (b) it suggests that there are additional pressures for CNNs to learn selective representations above and beyond the challenge of overcoming the superposition catastrophe. However, the measures of selectivity that have been applied across studies are different, and accordingly, it is difficult to directly compare results. In order to directly compare and have a better understanding of the different selectivity measures we assessed (1) localist, (2) precision, and (3) CCMAS selectivity on the prob, fc8, fc7, fc6, and conv5 layers of AlexNet. We also introduce a new measure called top- class selectivity, and show that the precision and CCMAS measures provide much higher estimates of object selectivity compared to other measures. 
Importantly, we do not find any localist 'grandmother cell' representations in the hidden layers of AlexNet, consistent with the hypothesis that the superposition catastrophe provides a pressure to learn more selective representations Bowers et al. (2014; 2016). <--- Page Split ---> In addition, we compared these selectivity measures to a state- of- the- art activation maximization (AM) method for visualizing single- unit representations in CNNs (Nguyen et al., 2017). AM images are generated to strongly activate individual units, and some of them are interpretable by humans (e.g., a generated image that looks like a lighthouse, see Fig. 1c). For the first time, we systematically evaluated the interpretability of the AM images in an on- line experiment and compare these ratings with the selectivity measures for corresponding units. We show that hidden units with interpretable AM images are not highly selective. ## 2 METHODS Networks and Datasets All \(\sim 1.2\mathrm{M}\) photos from ImageNet2010 (Deng et al. 2009) were cropped to \(277\times 277\) pixels and classified by the pre- trained AlexNet CNN (Krizhevsky et al. 2012) shipped with Caffe (Jia et al. 2014), resulting in 721,536 correctly classified images. Once classified, the images are not re- cropped nor subject to any changes. In Caffe, the softmax operation (Denker & leCun 1991) is applied at the 'prob' (ability) output layer that contains 1000 units (one for each class). We analyzed these prob units, the fully connected (fc) layers: fc8 (1000 units) that encodes the outputs prior to the softmax operation, fc6 and fc7 (4096 units), and the top convolutional layer conv5 which has 256 filters. We only recorded the activations of correctly classified images, and saved them in an activation table so the activations could be probed without re- evaluating the images each time. The activation files are stored in .h5 format and can be retrieved at http://anonymizedForReview. We selected 233 conv5, 2738 fc6, 2239 fc7, 911 fc8, and 954 prob units for analysis. Localist selectivity Here we define a unit to be localist for class \(A\) if the set of activations for class \(A\) was disjoint with those of \(\neg A\) . A unit is selectively 'on' if \(\{A\} >\{\neg A\}\) (i.e. all images in \(A\) have higher activations than those not in \(A\) ) and selectively 'off' if \(\{A\} < \{\neg A\}\) . Localist selectivity is easily depicted with jitterplots in which a scatter plot for each unit is generated (see Figs. 3a and 4a, b). Each point in a plot corresponds to a unit's activation in response to a single image, and only correctly classified images are plotted (if an image has been misclassified we cannot use its label to elucidate what the unit responds to). The level of activations is coded along the \(x\) - axis, and an arbitrary value is assigned to each point on the \(y\) - axis (they are jittered). When generating jitterplots for the conv5 layer we plotted the highest level of activation across each filter for each image. Top- Class selectivity Top- class selectivity is closely related to localist selectivity except that it provides a continuous rather than discrete measure. We counted the number of images from the same class that were more active than all images from all other classes (what we called the top cluster size) and divided the cluster size by the total number of correctly identified images from this class. \(100\%\) top- class selectivity is equivalent to a localist representation. 
Precision The precision method of finding object detectors (Zhou et al., 2015; 2018) involves identifying a small subset of images that most strongly activate a unit (the number of images in the most strongly activated subset differ across papers) and then identifying the critical part of these images that are responsible for driving the unit. Zhou et al. 2015 took the 60 images that activated a unit the most strongly and asked independent raters to interpret the critical image patches. Zhou et al. (2015) developed a precision metric that assessed the percentage of the 60 images that raters judged to depict the same class of object (e.g., if 50 of the 60 images were labeled as 'lamp', the unit would have a precision index of \(50 / 60\) or \(83\%\) ; see Fig. 1d). Object detectors were defined as units with a precision \(>75\%\) : they reported multiple such detectors. Here we approximate this approach by considering the 100 images that most strongly activate a given unit and assess the highest percentage of images from a given output class (e.g., if 75 of the top 100 images are all examples of a class 'lighthouse' then we consider the unit to be a 'lighthouse' object detector with a precision of \(75\%\) ). CCMAS Morcos et al. (2018) introduced a selectivity index based on the 'class- conditional mean activation' selectivity (CCMAS). The CCMAS for class \(A\) compares the mean activation of all images in class \(A\) , \(\mu_{A}\) , with the mean activation of all images not in class \(A\) , \(\mu_{- A}\) , and is given by: \((\mu_{A} - \mu_{- A}) / (\mu_{A} + \mu_{- A})\) . Morcos et al. (2018) states that this metric should vary within [0,1], with 0 meaning that a unit's average activity was identical for all classes, and 1 meaning that a unit was only active for inputs of a single class. Here, we assessed class selectivity for the highest mean activation class (CCMAS) as well as for the class with the second highest mean activation \(\mu_{A}\) (what <--- Page Split ---> we call CCMAS.2) in order to assess the extent to which CCMAS reflects the selectivity to one class. Activation Maximization We harnessed an activation maximization method called Plug & Play Generative Networks (Nguyen et al., 2017) in which an image generator network was used to generate images (hereafter, AM images) that highly activate a unit. We generated 100 separate images that maximally activated each unit in the conv5, fc6 and fc8 layers of AlexNet and displayed them in a grid format (see Appendix Figs. A5, A6 & A7). We then asked 333 participants to judge whether they could identify any repeating objects, animals, or places in images after receiving some practice trials (see Appendix Fig. A1 for an example). Participants were recruited using Prolific (pro; Palan & Schitter, 2018), with the experiment run online using Gorilla (gor). More details of the experiment can be found in the Appendix A1 and an example experiment for readers to try is at: exp. ## 3 RESULTS ### 3.1 COMPARISON OF SELECTIVITY MEASURES. The mean top- class, precision, and CCMAS selectivities across the conv5, fc6, fc7, fc8, and prob layers are displayed in Fig. 2a- c. We did not plot localist selectivity as there were no localist 'grandmother units' at any level, including the prob layer. The first point to note is that the top- class, precision, and CCMAS measures all increased in the higher layers, showing that they capture degrees of selectivity ignored by the localist measure. 
The second and perhaps the most striking finding is that top- class selectivity was extremely low across the hidden layers, with means below \(0.25\%\) in the the conv5, fc6, and fc7 layers. Third, the different measures provided very different estimates of selectivity. In contrast with top- class selectivity, the mean precision scores are over an order of magnitude larger in the hidden layers of network, with average precision scores of \(9.6\%\) , \(12.1\%\) , and \(15.4\%\) in layers conv5, fc6, and fc7, respectively. Similarly, the CCMAS measure suggests a much higher level of selectivity than top- class selectivity, with mean scores of .49, .84, and .85 in the conv5, fc6, and fc7 layers, respectively. ![](images/3_0.jpg) <center>Figure 2: Selectivity measures across different layers of AlexNet. Left: top-class selectivity. Middle: precision 100 (the percentage of the top 100 images which are members of the top class). Right: Class-conditional mean activity selectivity (CCMAS). </center> The discrepancy between precision and CCMAS on the one hand, and top- class selectivity on the other, is even more striking when precision and CCMAS scores are high, as shown in Table 1. For example, we find \(4.5\%\) of units in layer fc7 have a precision of over \(50\%\) (that is 184 units) and over \(80\%\) of units in fc7 have a CCMAS measure of over .92. At the same time, only \(0.1\%\) of units have a top- class selectivity over \(5\%\) . Indeed, there are no units with a top- class selectivity over \(5\%\) in fc6 or conv5. Based on the precision and CCMAS measures it appears that AlexNet has learned some highly selective representations for objects, but according to localist and top- class selectivity, there is no evidence for this conclusion. This discrepancy becomes more striking still when you consider the units with the highest precision and CCMAS scores (see Table A1 in the Appendix for examples across multiple layers of AlexNet). To highlight one example, in Fig. 3 we illustrate why the unit fc6.1199 with the highest precision (95%) in fc6 should not be considered a Monarch butterfly detector. Fig. 3a depicts a jitterplot of activations to all accurately identified images, with Monarch butterfly images found across the range of activations. Fig. 3b shows a histogram that plots the distribution of activations for Monarch <--- Page Split ---> Table 1: The percentage of precision, CCMAS and top-class units in each layer at different threshold cut-offs. <table><tr><td rowspan="2">Layer</td><td colspan="6">% of units at various thresholds</td></tr><tr><td>Precision</td><td></td><td>CCMAS</td><td></td><td>Top-Class selectivity</td><td></td></tr><tr><td></td><td>over 50%</td><td>over 75%</td><td>over 0.7</td><td>over 0.8</td><td>over 0.9</td><td>over 5%</td></tr><tr><td>prob</td><td>99.7%</td><td>99.6%</td><td>100%</td><td>100%</td><td>100%</td><td>100%</td></tr><tr><td>fc8</td><td>75.4%</td><td>32.8%</td><td>99.9%</td><td>97.2%</td><td>84.5%</td><td>23.6%</td></tr><tr><td>fc7</td><td>4.5%</td><td>0.3%</td><td>99.9%</td><td>92.1%</td><td>5.0%</td><td>0.1%</td></tr><tr><td>fc6</td><td>3.0%</td><td>0.1%</td><td>99.7%</td><td>86.7%</td><td>6.7%</td><td>0%</td></tr><tr><td>conv5</td><td>4.7%</td><td>0%</td><td>3.4%</td><td>0%</td><td>0%</td><td>0%</td></tr></table> butterflies. By far the most common activation to correctly identified Monarch butterflies is 0 and the mean is \(39.2 \pm 0.6\) . 
Figures 3c- e displays example images with 0 (c), mean (d) and maximal (e) activations, and all are identifiable as Monarch butterflies. Thus, classifying this unit as a Monarch butterfly detector is misleading. ![](images/4_0.jpg) <center>Figure 3: Data for unit fc6.1199. (a) activation jitterplot black squares: Monarch butterfly images; grey circles: all other classes. (b) histogram of activations of Monarch butterflies, c-e example ImageNet images with activations of 0, the mean, and the maximum of the range. Unit fc6.1199 has a precision of \(95\%\) over the top 100 images ( \(98.3\%\) over the top 60) and is thus classified as a butterfly detector, yet there are Monarch butterfly images covering the whole range of values, with 72 images ( \(5.8\%\) of the total) having an activation of 0.0. </center> Another surprising result is that we did not obtain any \(100\%\) top- class selectivity units (localist units) in the prob layer of AlexNet. Rather, the mean top- class selectivity was \(\sim 80\%\) in the prob layer, and only \(\sim 5\%\) in fc8 (prior to the softmax being applied). Figs. 4a,b depict the pattern of activation for units fc8.11 and prob.11 that are examples of the most top- class selective units in these layers (responding to images of 'goldfinch' birds with top- class selectivity measures of \(8.4\%\) and \(95.2\%\) , respectively). Clearly these units do respond much more selectively than the most selective units in earlier layers (as in Fig. 3), and at the same time, the jitterplots show why we did not observe any localist units (a few 'goldfinch' images were less active than a few images from other categories). These jitterplots also show that top- class and localist selectivity provide somewhat misleading estimates of selectivity as well. Consider Fig. 4a that depicts a substantial overlap between goldfinch and non- goldfinch activations on unit fc8.11. The \(8.4\%\) top- class selectivity score captures the se <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 4: Example data from the fc8 and prob layers. (a) jitterplot activations for unit fc8.11 that has a top-class selectivity of \(8.4\%\) . (b) jitterplot activations for prob.11 (i.e. post-softmax) that has top-class selectivity of \(95.2\%\) . Activations for the 'ground truth' class 'goldfinch' are shown as black squares, all other classes are shown as colored circles. </center> lectivity for the most highly active goldfinch images, but it is blind to the fact that almost goldfinch images have a reasonably high level of activation (more than most non- goldfinch images). The problem with localist selectivity is highlighted in Fig. 4b that shows that the measure misses all but the most extreme version of selectivity. Together, these findings suggest that additional selectivity measures are required to better characterize the learned representations in NNs: precision and CC- MAS measures strongly overestimate the degree of selectivity, and localist and top- class selectivity provide either a too strict or too narrow a measure of selectivity. ### 3.2 ADDITIONAL PROBLEMS WITH THE CCMAS MEASURE. The main problem with the precision and CCMAS measures is that they provide misleadingly high estimates of selectivity, but the CCMAS measure has some additional limitations. The first point to note is that contrary to Morcos et al. (2018), the CCMAS measure can go above 1 if \(\mu_{- A}\) is negative, and this happens in the fc8 layer of AlexNet (see Fig. 
4a) because there is no ReLU transformations in layer fc8 (as opposed to layers fc6 & fc7). This is a minor issue that can be fixed by normalizing the range of activations, but it explains why some of our CCMAS scores are above 1. More importantly, if the CCMAS provided a good measure of a unit's class selectivity then one should expect that a very high measure of selectivity for one class would imply that the unit is not highly selective for other classes. This is not the case as shown in Fig. 5a- c where CCMAS are compared to the CCMAS_2 that assesses unit selectivity to the category with the second highest mean activation. For example, unit fc7.0 has a CCMAS of .813 for the class 'maypole', but the class with the second highest mean activation, 'chainsaw' has a CCMAS of .808 (and neither of these is the top- class which is 'orangutan' and has a precision of \(14\%\) ). ![](images/5_1.jpg) <center>Figure 5: Example of where the CCMAS does not match intuitive understandings of selectivity. Experimental data: (a-c) the CCMAS scores for the class with the highest (CCMAS) and second highest mean activation (CCMAS_2) for all units across layer conv5 (a), fc6 (b), and fc7 (c). </center> It is also important to note that the CCMAS measure of selectivity is measuring something quite different to alternative measures. For example, the percentage of conv5, fc6 and fc8 units in which <--- Page Split ---> top- class and CCMAS are selective to the same class as follows: \(0\%\) , \(9.8\%\) , and \(83.5\%\) ( \(0.1\%\) being chance). To highlight how discrepant the different measures can be, we have generated some artificial datasets depicted in Fig. A4 in the Appendix that show that CCMAS scores can be much higher or lower than top- class or localist selectivity scores. Indeed, the figure shows that the CCMAS measure can give a low selectivity score to a localist representation. Part of the problem is that the CCMAS measure compares the mean of the selected class with the mean of the unselected classes, but these distributions are not normal (in fact activations of all classes follows an exponential distribution), and thus a comparison between means is not an appropriate measure: see Figures A2 and A3 in Appendix section A3). ### 3.3 HUMAN INTERPRETATION OF GENERATED IMAGES FROM ACROSS ALEXNET The results of the behavioral experiment in which humans rated AM images are reported in Table 2. Consistent with past research, the generated images in the output fc8 layer were often interpreted as objects, and when they were given a consistent interpretation, they almost always ( \(95.4\%\) ) correspond to the trained category. By contrast, less than \(5\%\) of units in conv5 or fc6 were associated with consistently interpretable images, and as can be seen in Table 2, the interpretations only weakly matched the category with the highest top- class or CCMAS selectivity. The frequency with which objects were seen by the participants was similar in layers conv5 and fc6 layers and increased in fc8, consistent with the top- class and and precision measures of selectivity. Apart from showing that there are few interpretable units in the hidden layers of AlexNet, our findings show that the interpretability of images does not imply a high level of selectivity. That is, we know from Sec. 3.1 that the maximum top- class selectivity for the hidden units is well under \(10\%\) (Fig. 2), and accordingly, all the consistently interpretable units had selectivities less than this. 
Indeed, in most cases, the top- class selectivity of the interpretable units is well under \(1\%\) . To briefly illustrate the types of images that participants rated as objects or non- objects, Fig. 6a- c depicts three AM images from units in the conv5, fc6, and fc8 layers of AlexNet that were interpreted consistently with the top- class selectivity measure, and Fig. 6d- e depicts corresponding uninterpretable images. Additional images can be found in the Appendix Figs. A5 and A6. ![](images/6_0.jpg) <center>Figure 6: Example AM images that were either judged by all participants to contain objects (a-c) or judged by all participants to be uninterpretable as objects (d-e). The human judgement for conv5.183 was 'dogs' and the top-class was 'flat-coated retriever'. For fc6.319 subjects reported 'green peppers' or 'apples' (all classified as the same broad class in our analysis), and the CCMAS and top-class was 'Granny Smith apples'. For fc8.969 humans suggested 'beverage' or 'drink': ground truth class for this unit is 'eggnog'. The ground-truth for fc8.865 is 'toy-store'. </center> ## 4 DISCUSSIONS AND CONCLUSIONS Our central finding is that different measures of activation selectivity support very different conclusions when applied to the same units in AlexNet. In contrast with the precision (Zhou et al. 2015) and CCMAS (Morcos et al. 2018) measures that revealed some highly selective units for objects in layers conv5, fc6, and fc8, we found no localist representations, and the mean top- class selectivity in these layers was well under \(1\%\) . These findings are in stark contrast with the many localist 'grandmother cell' representations learned in RNNs Bowers et al. (2014; 2016); Bowers (2017b). <--- Page Split ---> Table 2: Interpretability judgements for AM images. The number of judgments for conv5, fc6 and fc8 were 1332, 10,656 and 5,181, respectively. <table><tr><td>LAYER</td><td>% YES RESPONSES</td><td>% OF UNITS WITH &amp;gt; 80% YES RESPONSE</td><td>AMONG HUMANS</td><td>% OVERLAP BETWEEN HUMANS and TOP CLASS</td><td>CCMAS CLASS</td></tr><tr><td>conv5</td><td>21.7% ±1.1%</td><td>4.3% ± 1.3%</td><td>89.5%±5.7%</td><td>34.1%±14.4%</td><td>0%</td></tr><tr><td>fc6</td><td>21.0% ±0.4%</td><td>3.1% ± 0.4%</td><td>80.4%±4.1%</td><td>23.3%±5.9%</td><td>18.9% ±5.9%</td></tr><tr><td>fc8</td><td>71.2% ±0.6%</td><td>59.3% ±1.6%</td><td>96.5%±0.4%</td><td>95.4%±0.6%</td><td>94.6% ±0.7%</td></tr></table> Not only did the different measures provide very different assessments of selectivity, we found that the precision and CCMAS measures provided highly misleading estimates. For example, a unit with over a \(75\%\) precision score for Monarch butterflies had a top- class selectivity of under \(5\%\) . Although Zhou et al. (2015) used \(75\%\) precision scores as the criterion for 'object detectors', it is inappropriate to call this unit a Monarch butterfly detector given that it did not respond strongly to the majority of Monarch butterfly images (and indeed, the modal response was 0; see Fig. 3). This discrepancy between precision and top- class selectivity was widespread. Similarly, we found that extremely high CCMAS measures did not indicate the item was selective exclusively to one category as might be expected. To take an extreme example, we found a unit in the output prob layer that had selectivities of .999 for category 'bridgeroom' and .995 for category 'flowerpot'. Top- class selectivity for this unit was different once again, responding most strongly to the category 'ringlet butterfly'. 
At the same time, we identified problems with the localist, top- class, and activation maximization (AM) methods as well. The localist selectivity measure failed to obtain any localist representations, even at the output prob layer of AlexNet. This measure is so extreme that it misses highly selective representations that are of theoretical interest. The top- class selectivity does provide a graded measure of selectivity (with \(100\%\) top- class selectivity equivalent to a localist grandmother cell), but it can underestimate selectivity when a few member from outside the top- class are highly activated (see Fig. 4b for an example). At the same time, the human interpretation of AM images provides a poor measure of hidden- unit selectivity given that interpretable AM images were associated with low top- class selectivity scores. These findings highlight the need to provide better measures of selectivity in order to better characterize the learned representations in NNs. What should be made of the contrasting findings that localist representations are found in RNNs, but not in AlexNet? The failure to observe localist units in the hidden layers of AlexNet is consistent with the Bowers et al. 2014 claim that these units only emerge in order to support the co- activation of multiple items at the same time in short- term memory. That is, highly selective representations may be the solution to the superposition catastrophe, and AlexNet only has to identify one image at a time. This may help explain the many reports of highly selective neurons in cortex given that the cortex needs to co- activate multiple items at the same time in order to support short- term memory (Bowers et al., 2016). However, it should be noted that the RNNs that learned localist units were very small in scale compared to AlexNet, and accordingly, it will be interesting to assess unit selectivity in larger RNNs that have much larger memory capacity. Relevant to this issue, Karpathy et al. (2016) reported some striking examples of selective representations in a RNN with long- short term memory (LSTM) trained to predict text. Although they did not systematically assess the degree of selectivity, they reported examples that are consistent with \(100\%\) selective units. If in fact the superposition constraint provides a pressure to learn more selective representations, then we should observe more highly selective representations, perhaps localist units, in large RNNs with LSTMs as well. We will be testing this hypothesis in future work. ## APPENDIX FOR SELECTIVITY METRICS CAN OVERESTIMATE THE SELECTIVITY OF UNITS: A CASE STUDY ON ALEXNET ## A1 METHODOLOGICAL DETAILS FOR THE BEHAVIORAL EXPERIMENT One hundred generated images were made for every unit in layers conv5, fc6 and fc8 in AlexNet, as in Nguyen et al. (2017), and displayed as 10x10 image panels (figures A7 and Figures A5 and A6). A total of 3,299 image panels were used in the experiment (995 fc8, 256 conv5, and 2048 randomly selected fc6 image panels) and were divided into 64 counterbalanced lists of 51 or 52 (4 conv5, 15 or 16 fc8 and 32 fc6). 51 of the lists were assigned 5 participants and 13 lists were assigned 6 participants. To test the interpretability for these units as object detectors, paid volunteers were asked to look at image panels and asked if the images had an object / animal or place in common. If the answer was 'yes', they were asked to name that object simply (i.e. fish rather than goldfish). 
Analyses of common responses were carried out for any unit where over \(80\%\) of humans agreed there was an object present, by reading the human responses and comparing them both to each other and to the output classes. Agreement was recorded if the responses referred to the same rough class. For example, 'beer', 'glass', and 'drink' were all considered to be in agreement on the general object 'drink', and in agreement with both the classes 'wine glass' and 'beer', as these classes are also general drink classes (this is an actual example; most responses were more obvious and required far less interpretation). Participants were given six practice trials, each with panels of 20 images, before starting the main experiment. Practice trials included images that varied in their interpretability.

![](images/10_0.jpg)

<center>Figure A1: Example screen from the identification task shown to participants as part of the instructions. The images included on this practice trial are ImageNet2010 images, not AM images. </center>

<--- Page Split --->

## A2 FURTHER DATA ON THE SELECTIVITY MEASURES

This appendix provides further data on the selectivity measures across AlexNet. Table A1 shows some extreme values of CCMAS, precision, and top-class selectivity, as well as the similarity between CCMAS and CCMAS_2, and Table A2 gives the mean values across the layers.

Table A1: CCMAS selectivity measures for extreme example units. Across all the tested units, prob.322, fc8.393, fc7.31, fc6.582 and conv5.78 were the units with the highest CCMAS measures. Units fc6.1199, fc8.11 and prob.11 were displayed in Figs. 3 & 4.

<table><tr><td>LAYER.UNIT</td><td>CCMAS</td><td>CCMAS_2</td><td>Precision</td><td>Top Cluster Size</td><td>% Top Class Selectivity</td></tr><tr><td>prob.322</td><td>0.999921</td><td>0.995384</td><td>100%</td><td>963</td><td>97.1%</td></tr><tr><td>fc8.393</td><td>1.24357</td><td>1.39617</td><td>95%</td><td>29</td><td>3.3%</td></tr><tr><td>fc7.31</td><td>0.936767</td><td>0.865305</td><td>11%</td><td>1</td><td>0.15%</td></tr><tr><td>fc6.582</td><td>0.933832</td><td>0.919425</td><td>1%</td><td>1</td><td>0.14%</td></tr><tr><td>conv5.78</td><td>0.746763</td><td>0.746346</td><td>5%</td><td>1</td><td>0.10%</td></tr><tr><td></td><td></td><td></td><td>Top precision units</td><td></td><td></td></tr><tr><td>prob.0</td><td>0.9996680</td><td>0.99316330</td><td>100%</td><td>1000</td><td>92.2%</td></tr><tr><td>fc8.1</td><td>1.049710</td><td>1.110110</td><td>100%</td><td>172</td><td>16.1%</td></tr><tr><td>fc7.255</td><td>0.8961654</td><td>0.84108156</td><td>97%</td><td>94</td><td>7.6%</td></tr><tr><td>fc6.1199</td><td>0.92323</td><td>0.818260</td><td>95%</td><td>43</td><td>3.5%</td></tr><tr><td>conv5.0</td><td>0.554049430</td><td>0.528534300</td><td>77%</td><td>1</td><td>0.1%</td></tr><tr><td></td><td></td><td></td><td>Top class selectivity units</td><td></td><td></td></tr><tr><td>prob.322</td><td>0.99964017</td><td>0.99538374</td><td>100%</td><td>963</td><td>97.1%</td></tr><tr><td>fc8.985</td><td>1.0908700</td><td>1.1929800</td><td>100%</td><td>574</td><td>50%</td></tr><tr><td>fc7.255</td><td>0.8961654</td><td>0.84108156</td><td>97%</td><td>94</td><td>7.6%</td></tr><tr><td>fc6.1199</td><td>0.92323</td><td>0.818260</td><td>95%</td><td>43</td><td>3.5%</td></tr><tr><td>conv5.100</td><td>0.68313890</td><td>0.6831976</td><td>56%</td><td>9</td><td>1.1%</td></tr><tr><td></td><td></td><td></td><td>Example units</td><td></td><td></td></tr><tr><td>prob.11</td><td>0.999876</td><td>0.973243</td><td>100%</td><td>1000</td><td>95.2%</td></tr><tr><td>fc8.11</td><td>1.10469</td><td>1.1946</td><td>99%</td><td>88</td><td>8.4%</td></tr></table>

Table A2: Average CCMAS measures across layers (note that these values are not normally distributed; see figures A2 and A3).

<table><tr><td>LAYER</td><td>CCMAS</td><td>CCMAS_2</td><td>Precision</td><td>Top Cluster Size</td><td>% Top Class Selectivity</td></tr><tr><td>Mean[prob]</td><td>0.999309</td><td>0.984169</td><td>99.7%</td><td>592.5</td><td>82.1%</td></tr><tr><td>Mean[fc8]</td><td>0.995644</td><td>1.002905</td><td>70.1%</td><td>108.4</td><td>5.1%</td></tr><tr><td>Mean[fc7]</td><td>0.847799</td><td>0.830854</td><td>15.4%</td><td>20.3</td><td>0.23%</td></tr><tr><td>Mean[fc6]</td><td>0.8439226</td><td>0.821118</td><td>12.1%</td><td>17.4</td><td>0.19%</td></tr><tr><td>Mean[conv5]</td><td>0.491029</td><td>0.464662</td><td>9.6%</td><td>17.3</td><td>0.17%</td></tr></table>

<--- Page Split --->

## A3 FURTHER ISSUES WITH THE CCMAS MEASURE

## A3.1 HISTOGRAMS AND DISTRIBUTION FITS FOR ACTIVATIONS IN UNIT fc6.1199

The CCMAS measure is based on comparing the mean activation of categories, and this is problematic for a few reasons. First, the majority of images do not activate a unit at all. For instance, our butterfly 'detector' unit fc6.1199 has \(82.8\%\) of images with an activation of 0.0 (see figure A2). This means that the CCMAS selectivity is largely determined by the distribution of images that have 0 activation rather than by the distribution of images that strongly activate a unit. This leads to very different estimates from the precision, top-class and localist selectivity measures, which are concerned with the distribution of highly activating images. Note that this issue of 0.0 activations moving the mean around could be addressed by taking only the non-zero activations (which is not that dissimilar from what neuroscientists do); however, there are problems with the non-zero activations as well.

![](images/12_0.jpg)

<center>Figure A2: The distribution of activations on unit fc6.1199 for all images (left). \(82.8\%\) of activations are zero across all classes; only \(5.8\%\) of the butterfly class are zero. Unit fc6.1199 is a butterfly detector under Zhou et al.'s precision measure. Bins are 1.0 wide. </center>

The problem with applying the CCMAS measure to the non-zero activations is that they are not normally distributed. As figure A3 demonstrates (for our example unit fc6.1199), the non-zero activations follow an exponential curve, and thus the mean is not a useful summary. Fitting a normal distribution to these data gives the blue normal distribution curve. Although the butterfly class can be roughly fit by a normal distribution, since the activations as a whole follow an exponential, the non-butterfly classes are best fit by an exponential rather than a normal distribution. As the CCMAS requires a comparison of means, and the not-A classes follow an exponential rather than a normal distribution, it follows that the CCMAS will give misleading results.

Computing CCMAS on the basis of mean activations can also produce highly non-intuitive results, as illustrated in figure A4, which plots three distributions of generated data from 10 classes of 100 points. We can see that the CCMAS gives the same (and maximal) score for the case where the unit is off to everything but a single point (figure A4a) as it does for a 'grandmother' unit (see figure A4c).
And the CCMAS gives an incredibly low score when the means of A and not-A are similar, even if they perfectly separate the disjoint sets (figure A4b).

<--- Page Split --->

![](images/13_0.jpg)

<center>Figure A3: Fitting the non-zero activations for all classes (red) and the maximum activation class (black). For the superset of all the classes, the distribution is well-described by exponential-derived fits; normal-derived fits are poor. For the maximum activating class (butterfly), the distribution has a well-defined mean and can be reasonably described by a normal distribution. </center>

![](images/13_1.jpg)

<center>Figure A4: Examples of where the CCMAS does not match intuitive understandings of selectivity. Generated example data: (a) If a unit is off to all but a single image from a large class of objects, the CCMAS for that class is 1 (the maximum possible selectivity). (b) If a unit is strongly activated by all members of one class and off to everything else (an archetypal 'grandmother' cell), the CCMAS is the same as for (a), although the precision and top-class selectivity are vastly different. (c) If a unit has high activations for all classes, but one class (black squares) is 0.1 higher than all others (coloured circles), the CCMAS is very low (0.06) despite the unit being \(100\%\) top-class selective. The generated examples are for 10 classes of 100 items. </center>

<--- Page Split --->

## A4 HUMAN INTERPRETATION OF ACTIVATION MAXIMIZATION IMAGES

Some examples of the 10x10 grids of activation maximisation images that were presented to participants are shown in Figures A5, A6 and A7. Figure A5 shows an example from conv5 that human participants agreed had no obvious object in common (although there are repeated shape motifs, the participants were specifically asked for objects, and not abstract concepts like shape or color). Figure A6 is also from conv5 and was judged by participants to contain some interpretable images, in this case of the type 'dogs'. Figure A7 shows the AM images for the supposed 'butterfly detector' unit discussed in the paper.

![](images/14_0.jpg)

<center>Figure A5: Example activation maximisation images for unit conv5.65. These images were judged by humans not to contain any interpretable objects in common (although the reader may agree that there are some shape and colour similarities in the images). </center>

<--- Page Split --->

![](images/15_0.jpg)

<center>Figure A6: Example activation maximisation images for unit conv5.183. These images were judged by humans to contain some interpretable images, in this case of the type 'dogs'. </center>

<--- Page Split --->

![](images/16_0.jpg)

<center>Figure A7: Example activation maximisation images for unit fc6.1199. Whilst there are some butterfly wing shapes in these images, there are no obvious butterflies. N.B. the second highest activating class for this unit is ladybirds, and there are some orange round shapes that could conceivably be ladybird-alikes. </center>

<--- Page Split --->
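The figure A4 scenarios are straightforward to reproduce. The following sketch generates data of the kind described in the caption (10 classes of 100 items; the exact values are our own choice, so the third score differs slightly from the 0.06 reported for the caption's particular draw) and recovers the same qualitative pattern:

```python
import numpy as np

def ccmas(acts, labels, cls):
    mu_a = acts[labels == cls].mean()
    mu_not_a = acts[labels != cls].mean()
    return (mu_a - mu_not_a) / (mu_a + mu_not_a)

labels = np.repeat(np.arange(10), 100)        # 10 classes of 100 items

# (a) unit responds to a single class-0 image and is off otherwise
single = np.zeros(1000)
single[0] = 1.0
# (b) archetypal 'grandmother' cell: on for all of class 0, off elsewhere
grandma = (labels == 0).astype(float)
# (c) high baseline everywhere; class 0 is just 0.1 higher than the rest
offset = np.full(1000, 5.0)
offset[labels == 0] += 0.1

for name, acts in [("(a) single image", single),
                   ("(b) grandmother", grandma),
                   ("(c) small offset", offset)]:
    print(name, round(ccmas(acts, labels, 0), 3))
# (a) and (b) both score the maximal 1.0, while (c) scores roughly 0.01
# here despite the unit being 100% top-class selective.
```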
reject
Reject
4.666667
ICLR_2019_paper_0437
iclr
2,019
# LEMONADE: LEARNED MOTIF AND NEURONAL ASSEMBLY DETECTION IN CALCIUM IMAGING VIDEOS

Elke Kirschbaum<sup>1</sup> Manuel Haußmann<sup>1</sup> Steffen Wolf<sup>1</sup> {elke.kirschbaum,manuel.haussmann,steffen.wolf}@iwr.uni-heidelberg.de

Hannah Sonntag<sup>2</sup> hannah.sonntag@mpimf-heidelberg.mpg.de

Justus Schneider<sup>3</sup> Shehabeldin Elzheiry<sup>3</sup> {justus.schneider,shehab.elzheiry}@physiologie.uni-heidelberg.de

Oliver Kann<sup>3</sup> oliver.kann@physiologie.uni-heidelberg.de

Daniel Durstewitz<sup>4</sup> Fred A. Hamprecht<sup>1</sup> daniel.durstewitz@zi-mannheim.de fred.hamprecht@iwr.uni-heidelberg.de

<sup>1</sup>Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Germany <sup>2</sup>Institute for Anatomy and Cell Biology, Heidelberg University, Germany <sup>3</sup>Institute of Physiology and Pathophysiology, Heidelberg University, Germany <sup>4</sup>Dept. Theoretical Neuroscience, Central Institute of Mental Health, Mannheim, Germany

## ABSTRACT

Neuronal assemblies, loosely defined as subsets of neurons with reoccurring spatio-temporally coordinated activation patterns, or "motifs", are thought to be building blocks of neural representations and information processing. We here propose LeMoNADe, a new exploratory data analysis method that facilitates hunting for motifs in calcium imaging videos, the dominant microscopic functional imaging modality in neurophysiology. Our nonparametric method extracts motifs directly from videos, bypassing the difficult intermediate step of spike extraction. Our technique augments variational autoencoders with a discrete stochastic node, and we show in detail how a differentiable reparametrization and relaxation can be used. An evaluation on simulated data, with available ground truth, reveals excellent quantitative performance. In real video data acquired from brain slices, with no ground truth available, LeMoNADe uncovers nontrivial candidate motifs that can help generate hypotheses for more focused biological investigations.

## 1 INTRODUCTION

Seventy years after being postulated by Hebb (1949), the existence and importance of reoccurring spatio-temporally coordinated neuronal activation patterns (motifs), also known as neuronal assemblies, is still fiercely debated (Marr et al., 1991; Singer, 1993; Nicolelis et al., 1997; Ikegaya et al., 2004; Cossart & Sansonetti, 2004; Buzsáki, 2004; Mokeichev et al., 2007; Pastalkova et al., 2008; Stevenson & Kording, 2011; Ahrens et al., 2013; Carrillo-Reid et al., 2015). Calcium imaging, a microscopic video technique that enables the concurrent observation of hundreds of neurons in vitro and in vivo (Denk et al., 1990; Helmchen & Denk, 2005; Flusberg et al., 2008), is best suited to witness such motifs if they indeed exist.

<--- Page Split --->

![](images/1_0.jpg)

<center>Figure 1: We present LeMoNADe, a novel approach to identify neuronal assemblies directly from calcium imaging data. In contrast to previous methods, LeMoNADe does not need pre-processing steps such as cell identification and spike time extraction for unravelling assemblies. </center>

In recent years, a variety of methods have been developed to identify neuronal assemblies. These methods range from approaches for the detection of synchronous spiking up to more advanced methods for the detection of arbitrary spatio-temporal firing patterns (Comon, 1994; Nicolelis et al., 1995; Grün et al., 2002a;b; Lopes-dos Santos et al., 2013; Russo & Durstewitz, 2017; Peter et al., 2017).
All of these methods, however, require a spike time matrix as input. Generating such a spike time matrix from calcium imaging data requires the extraction of individual cells and discrete spike times. Again, many methods have been proposed for these tasks (Mukamel et al., 2009; Pnevmatikakis & Paninski, 2013; Pnevmatikakis et al., 2013; Diego et al., 2013; Diego & Hamprecht, 2013; Pachitariu et al., 2013; Pnevmatikakis et al., 2014; Diego & Hamprecht, 2014; Kaifosh et al., 2014; Pnevmatikakis et al., 2016; Apthorpe et al., 2016; Inan et al., 2017; Spaen et al., 2017; Klibisz et al., 2017; Speiser et al., 2017; Zhou et al., 2018). Given the low signal-to-noise ratios (SNR), large background fluctuations, non-linearities, and strong temporal smoothing due to the calcium dynamics itself as well as that of calcium indicators, it is impressive how well some of these methods perform, thanks to modern recording technologies and state-of-the-art regularization and inference (Pnevmatikakis et al., 2016; Zhou et al., 2018). Still, given the difficulty of this data, errors in segmentation and spike extraction are unavoidable and adversely affect downstream processing steps that do not have access to the raw data. Hence, properly annotating data and correcting the output of automatic segmentation can still take up a huge amount of time.

In this paper, we propose LeMoNADe (Learned Motif and Neuronal Assembly Detection), a variational autoencoder (VAE) based framework specifically designed to identify repeating firing motifs with arbitrary temporal structure directly in calcium imaging data (see figure 1). The encoding and decoding networks are set up such that motifs can be extracted directly from the decoding filters, and their activation times from the latent space (see sec. 3). Motivated by the sparse nature of neuronal activity, we replace the Gaussian priors used in standard VAEs: we place Bernoulli priors on the latent variables to yield sparse and sharply peaked motif activations (sec. 3.1). The choice of discrete Bernoulli distributions makes it necessary to use a BinConcrete relaxation and the Gumbel-softmax reparametrization trick (Maddison et al., 2016; Jang et al., 2017) to enable gradient descent techniques with low variance (sec. 3.3). We add a \(\beta\)-coefficient (Higgins et al., 2017) to the loss function in order to adapt the regularization to the properties of the data (sec. 3.3). Furthermore, we propose a training scheme which allows us to process videos of arbitrary length in a computationally efficient way (sec. 3.4). On synthetically generated datasets the proposed method performs as well as a state-of-the-art motif detection method that requires the extraction of individual cells (sec. 4.1). Finally, we detect possible repeating motifs in two fluorescent microscopy datasets from hippocampal slice cultures (sec. 4.2). A PyTorch implementation of the proposed method is released at https://github.com/EKirschbaum/LeMoNADe.

## 2 RELATED WORK

Autoencoder and variational autoencoder Variational autoencoders (VAEs) were introduced by Kingma & Welling (2014) and have become a popular method for unsupervised generative deep learning. They consist of an encoder, mapping a data point into a latent representation, and a decoder whose task is to restore the original data and to generate samples from this latent space. However, the <--- Page Split ---> original VAE lacks an interpretable latent space.
Recent suggestions for solving this problem include modifications of the loss term (Higgins et al., 2017) or a more structured latent space (Johnson et al., 2016; Deng et al., 2017). VAEs have also been successfully used on video sequences. Li & Mandt (2018) learn a disentangled representation to manipulate content in cartoon video clips, while Goyal et al. (2017) combine VAEs with nested Chinese Restaurant Processes to learn a hierarchical representation of video data. Johnson et al. (2016) use a latent switching linear dynamical system (SLDS) model combined with a structured variational autoencoder to segment and categorize mouse behavior from raw depth videos. Unfortunately, this model is not directly applicable to the task of identifying motifs with temporal structure from calcium imaging data, for the following reasons. Firstly, neuronal assemblies are expected to extend over multiple frames. Since in the model by Johnson et al. (2016) the underlying latent process is a relatively simple first-order Markovian (switching) linear process, representing longer-term temporal dependencies will be very hard to achieve due to the usually exponential forgetting in such systems. Secondly, in the model of Johnson et al. (2016) each frame is generated from exactly one of \(M\) latent states. For calcium imaging, however, most frames are not generated by one of the \(M\) motifs but by noise, and different motifs can also overlap temporally, which is likewise not possible in the model by Johnson et al. (2016).

Closest to our goal of detecting motifs in video data is the work described in Bascol et al. (2016). In this approach, a convolutional autoencoder is combined with a number of functions and regularization terms to enforce interpretability both in the convolutional filters and in the latent space. This method was successfully used to detect patterns in data with document structure, including optical flow features of videos. However, as the cells observed in calcium imaging are spatially stationary and have varying luminosity, the extraction of optical flow features is not meaningful here. Hence this method is not applicable to the task of detecting neuronal assemblies in calcium imaging data.

Cell segmentation and spike time extraction from calcium imaging data Various methods have been proposed for automated segmentation and signal extraction from calcium imaging data. Most of them are based on non-negative matrix factorization (Mukamel et al., 2009; Pnevmatikakis & Paninski, 2013; Pnevmatikakis et al., 2013; 2014; Diego & Hamprecht, 2014; Pnevmatikakis et al., 2016; Inan et al., 2017; Zhou et al., 2018), clustering (Kaifosh et al., 2014; Spaen et al., 2017), and dictionary learning (Diego et al., 2013; Diego & Hamprecht, 2013; Pachitariu et al., 2013). Recent approaches started to use deep learning for the analysis of calcium imaging data: Apthorpe et al. (2016) and Klibisz et al. (2017) use convolutional neural networks (CNNs) to identify neuron locations, and Speiser et al. (2017) use a VAE combined with different models of calcium dynamics to extract spike times from the calcium transients. Although many sophisticated methods have been proposed, the extraction of cells and spike times from calcium imaging data can still be prohibitively laborious and require manual annotation and correction, with the accuracy of these methods being limited by the quality of the calcium recordings.
Furthermore, some of the mentioned methods are specifically designed for two-photon microscopy, whereas only a few methods are capable of dealing with the low SNR and large background fluctuations in single-photon and microendoscopic imaging (Flusberg et al., 2008; Ghosh et al., 2011). Additional challenges for these methods are factors such as non-Gaussian noise, non-cell background activity, and seemingly overlapping cells which are out of focus (Inan et al., 2017).

Neuronal assembly detection The identification of neuronal assemblies in spike time matrices has been studied from different perspectives. For the detection of joint (strictly synchronous) spike events across multiple neurons, rather simple methods based on PCA or ICA have been proposed (Comon, 1994; Nicolelis et al., 1995; Lopes-dos Santos et al., 2013), as well as more sophisticated statistical methods such as unitary event analysis (Grün et al., 2002a;b). Higher-order correlations among neurons and sequential spiking motifs such as synfire chains can be identified using more advanced statistical tests (Staude et al., 2010a;b; Gerstein et al., 2012). The identification of cell assemblies with arbitrary spatio-temporal structure has been addressed only quite recently. One approach recursively merges sets of units into larger groups based on their joint spike count probabilities evaluated across multiple different time lags (Russo & Durstewitz, 2017). Another method uses sparse convolutional coding (SCC) to reconstruct the spike matrix as a convolution of spatio-temporal motifs and their activations in time (Peter et al., 2017). An extension of this method uses a group sparsity regularization to identify the correct number of motifs (Mackevicius et al., 2018).

<--- Page Split --->

![](images/3_0.jpg)

<center>Figure 2: Schematic sketch of the proposed method. In this toy example, the input video \(\mathbf{x}\) is an additive mixture of two motifs (highlighted in red and blue) plus noise, as shown in (a). To learn the motifs and activations, the loss between input video \(\mathbf{x}\) and reconstructed video \(\mathbf{x}^{\prime}\) is minimized. (b) shows the generation of the reconstructed video through the proposed VAE framework. </center>

To the authors' knowledge, only Diego & Hamprecht (2013) address the detection of neuronal assemblies directly from calcium imaging data. This method, however, only aims at identifying synchronously firing neurons, whereas the method proposed in this paper can also identify assemblies with more complex temporal firing patterns.

## 3 METHOD

LeMoNADe is a VAE-based latent variable method, specifically designed for the unsupervised detection of repeating motifs with temporal structure in video data. The data \(\mathbf{x}\) is reconstructed as a convolution of motifs and their activation time points, as displayed in figure 2a. The VAE is set up such that the latent variables \(\mathbf{z}\) contain the activations of the motifs, while the decoder encapsulates the firing motifs of the cells, as indicated in figure 2b. The proposed generative model is displayed in figure 3. The great benefit of this generative model in combination with the proposed VAE is that the temporal motifs and their activations can be extracted directly, while the sparse nature of neuronal assemblies is taken into account; a minimal sketch of this additive-convolution view is given below.
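The following numpy sketch renders a video as the additive mixture of motifs convolved in time with their activations, i.e. the decoding step of figure 2. The render function, the array shapes, and the toy parameters are our own illustrative choices, not the released implementation.

```python
import numpy as np

def render(motifs, z):
    # motifs: (M, F, P, P2) spatio-temporal filters (M motifs of F frames)
    # z:      (M, T) binary (or relaxed) activations
    # returns a (T, P, P2) video; the temporal overhang is cropped
    M, F, P, P2 = motifs.shape
    T = z.shape[1]
    video = np.zeros((T + F - 1, P, P2))
    for m in range(M):
        for t in np.flatnonzero(z[m]):
            video[t:t + F] += z[m, t] * motifs[m]
    return video[:T]

# toy usage: two 5-frame motifs on a 100-frame 8x8 video, plus noise
rng = np.random.default_rng(0)
motifs = rng.random((2, 5, 8, 8))
z = (rng.random((2, 100)) < 0.03).astype(float)
x = render(motifs, z) + 0.01 * rng.standard_normal((100, 8, 8))
```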
### 3.1 THE LEMONADE MODEL In the proposed model the dataset consists of a single video \(\mathbf{x} \in \mathbb{R}^{T \times P \times P'}\) with \(T\) frames of \(P \times P'\) pixels each. We assume this video to be an additive mixture of \(M\) repeating motifs of maximum temporal length \(F\) . At each time frame \(t = 1, \ldots , T\) , and for each motif \(m = 1, \ldots , M\) , a latent random variable \(z_{t}^{m} \in \{0, 1\}\) is drawn from a prior distribution \(p_{\alpha}(\mathbf{z})\) . The variable \(z_{t}^{m}\) indicates <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 3: Plate diagram and proposed generative and recognition model. We show the plate diagram of the proposed model (left), where red (solid) lines correspond to the generative/decoding process and blue (dashed) lines correspond to the recognition/encoding model. On the right the equations for the generative as well as the recognition model are given. </center> whether motif \(m\) is activated in frame \(t\) or not. The video \(\mathbf{x}\) is then generated from the conditional distribution \(p_{\theta}(\mathbf{x}\mid \mathbf{z})\) with parameters \(\theta\) . In order to infer the latent activations \(\mathbf{z}\) the posterior \(p_{\theta}(\mathbf{z}\mid \mathbf{x})\) is needed. However, the true posterior \(p_{\theta}(\mathbf{z}\mid \mathbf{x})\) is intractable, but it can be approximated by introducing the recognition model (or approximate posterior) \(q_{\phi}(\mathbf{z}\mid \mathbf{x})\) . We assume that the recognition model \(q_{\phi}(\mathbf{z}\mid \mathbf{x})\) factorizes into the \(M\) motifs and \(T\) time steps of the video. In contrast to most VAE, we further assume that each latent variable \(z_{t}^{m}\) is Bernoulli- distributed with parameter \(\alpha_{t}^{m}(\mathbf{x};\phi)\) \[q_{\phi}(\mathbf{z}\mid \mathbf{x}) = \prod_{m = 1}^{M}\prod_{t = 1}^{T}q_{\phi}(z_{t}^{m}\mid \mathbf{x}) = \prod_{m = 1}^{M}\prod_{t = 1}^{T}\mathrm{Bernoulli}\big(z_{t}^{m}\mid \alpha_{t}^{m}(\mathbf{x};\phi)\big). \quad (1)\] We sample the activations \(\mathbf{z}\) in the latent space from the Bernoulli distributions to enforce sparse, sharply peaked activations. The parameters \(\alpha_{t}^{m}(\mathbf{x};\phi)\) are given by a CNN with parameters \(\phi\) . The corresponding plate diagram and proposed generative and recognition model are shown in figure 3. ### 3.2 THE VAE OBJECTIVE In order to learn the variational parameters, the KL- divergence between approximate and true posterior \(\mathrm{KL}(q_{\phi}(\mathbf{z}\mid \mathbf{x})\| p_{\theta}(\mathbf{z}\mid \mathbf{x}))\) is minimized. Instead of minimizing this KL- divergence, we can also maximize the variational lower bound \(\mathcal{L}(\theta ,\phi ;\mathbf{x})\) (ELBO) (see e.g. Blei et al. (2017)) \[\mathcal{L}(\theta ,\phi ;\mathbf{x}) = \mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}\mid \mathbf{x})}\big[\log p_{\theta}(\mathbf{x}\mid \mathbf{z})\big] - \mathrm{KL}\big(q_{\phi}(\mathbf{z}\mid \mathbf{x})\| p_{a}(\mathbf{z})\big). \quad (2)\] In order to optimize the ELBO, the gradients w.r.t. the variational parameters \(\phi\) and the generative parameters \(\theta\) have to be computed. The gradient w.r.t. \(\phi\) , however, cannot be computed easily, since the expectation in eq. (2) depends on \(\phi\) . 
A reparameterization trick (Kingma et al., 2015) is used to overcome this problem: the random variable \(\mathbf{z} \sim q_{\phi}(\mathbf{z}\mid \mathbf{x})\) is reparameterized using a differentiable transformation \(h_{\phi}(\epsilon , \mathbf{x})\) of a noise variable \(\epsilon\) such that \[\mathbf{z} = h_{\phi}(\epsilon ,\mathbf{x})\quad \mathrm{with}\quad \epsilon \sim p(\epsilon). \quad (3)\] The reparameterized ELBO, for which the expectation can be computed, e.g. using Monte Carlo sampling, is then given by \[\mathcal{L}(\theta ,\phi ;\mathbf{x}) = \mathbb{E}_{\epsilon \sim p(\epsilon)}\big[\log p_{\theta}(\mathbf{x}\mid \mathbf{z} = h_{\phi}(\epsilon ,\mathbf{x}))\big] - \mathrm{KL}\big(q_{\phi}(\mathbf{z}\mid \mathbf{x})\| p_{a}(\mathbf{z})\big)~. \quad (4)\] More details on VAEs as introduced by Kingma & Welling (2014) are given in appendix A.

### 3.3 LEMONADE REPARAMETRIZATION TRICK AND LOSS FUNCTION

In our case, however, by sampling from Bernoulli distributions we have added discrete stochastic nodes to our computational graph, and we need to find differentiable reparameterizations of these nodes. The Bernoulli distribution can be reparameterized using the Gumbel-max trick (Luce, 1959; Yellott, 1977; Papandreou & Yuille, 2011; Hazan & Jaakkola, 2012; Maddison et al., 2014). This, <--- Page Split ---> however, is not differentiable. For this reason we use the BinConcrete distribution (Maddison et al., 2016), which is a continuous relaxation of the Bernoulli distribution with temperature parameter \(\lambda\). For \(\lambda \to 0\) the BinConcrete distribution smoothly anneals to the Bernoulli distribution. The BinConcrete distribution can be reparameterized using the Gumbel-softmax trick (Maddison et al., 2016; Jang et al., 2017), which is differentiable. Maddison et al. (2016) show that for a discrete random variable \(\mathbf{z} \sim \operatorname{Bernoulli}(\alpha)\), the reparameterization of the BinConcrete relaxation of this discrete distribution is \[\tilde{\mathbf{z}} = \sigma (\mathbf{y}) = \frac{1}{1 + \exp(-\mathbf{y})}\quad \mathrm{with}\quad \mathbf{y} = \frac{\log(\tilde{\alpha}) + \log(U) - \log(1 - U)}{\lambda} \quad (5)\] where \(U \sim \operatorname{Uni}(0,1)\) and \(\tilde{\alpha} = \alpha /(1 - \alpha)\). Hence the relaxed and reparameterized lower bound \(\tilde{\mathcal{L}} (\theta , \tilde{\alpha}; \mathbf{x}) \approx \mathcal{L}(\theta , \phi ; \mathbf{x})\) can be written as \[\tilde{\mathcal{L}} (\theta , \tilde{\alpha}; \mathbf{x}) = \mathbb{E}_{\mathbf{y} \sim g_{\tilde{\alpha}, \lambda_{1}}(\mathbf{y} \mid \mathbf{x})} \left[ \log p_{\theta}(\mathbf{x} \mid \sigma (\mathbf{y})) \right] - \operatorname{KL}\left(g_{\tilde{\alpha}, \lambda_{1}}(\mathbf{y} \mid \mathbf{x}) \,\big\| \, f_{\tilde{a}, \lambda_{2}}(\mathbf{y})\right) \quad (6)\] where \(g_{\tilde{\alpha}, \lambda_{1}}(\mathbf{y} \mid \mathbf{x})\) is the reparameterized BinConcrete relaxation of the variational posterior \(q_{\phi}(\mathbf{z} \mid \mathbf{x})\) and \(f_{\tilde{a}, \lambda_{2}}(\mathbf{y})\) the reparameterized relaxation of the prior \(p_{a}(\mathbf{z})\). \(\lambda_{1}\) and \(\lambda_{2}\) are the respective temperatures, and \(\tilde{\alpha}\) and \(\tilde{a}\) the respective locations, of the relaxed and reparameterized variational posterior and prior distributions. The first term on the RHS of eq.
(6) is a negative reconstruction error, showing the connection to traditional autoencoders, while the KL-divergence acts as a regularizer on the approximate posterior \(q_{\phi}(\mathbf{z} \mid \mathbf{x})\). As shown in Higgins et al. (2017), we can add a \(\beta\)-coefficient to this KL-term, which makes it possible to vary the strength of the constraint on the latent space. Instead of maximizing the lower bound, we will minimize the corresponding loss function \[\begin{array}{r l} & {\ell (\mathbf{x},\mathbf{x}^{\prime},\tilde{\alpha},\lambda_{1},\tilde{a},\lambda_{2},\beta_{\mathrm{KL}}) = \mathrm{MSE}(\mathbf{x},\mathbf{x}^{\prime}) + \beta_{\mathrm{KL}}\cdot \mathrm{KL}\big(g_{\tilde{\alpha},\lambda_{1}}(\mathbf{y}\mid \mathbf{x})\,\big\|\, f_{\tilde{a},\lambda_{2}}(\mathbf{y})\big)}\\ & {\qquad = \mathrm{MSE}(\mathbf{x},\mathbf{x}^{\prime}) - \beta_{\mathrm{KL}}\cdot \mathbb{E}_{U\sim \mathrm{Uni}(0,1)}\left[\log \frac{f_{\tilde{a},\lambda_{2}}\big(\mathbf{y}(U,\tilde{\alpha},\lambda_{1})\big)}{g_{\tilde{\alpha},\lambda_{1}}\big(\mathbf{y}(U,\tilde{\alpha},\lambda_{1})\mid \mathbf{x}\big)}\right]} \end{array} \quad (7)\] with \(\mathrm{MSE}(\mathbf{x}, \mathbf{x}^{\prime})\) being the mean-squared error between \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\), and \(\beta_{\mathrm{KL}}\) the \(\beta\)-coefficient. Datasets with low SNR and large background fluctuations will need a stronger regularization on the activations, and hence a larger \(\beta_{\mathrm{KL}}\), than higher-quality recordings. Adding the \(\beta\)-coefficient to the loss function thus enables our method to adapt better to the properties of specific datasets and recording methods.

### 3.4 LEMONADE NETWORK ARCHITECTURE

The encoder network starts with a few convolutional layers with small 2D filters operating on each frame of the video separately, inspired by the architecture used in Apthorpe et al. (2016) to extract cells from calcium imaging data. Afterwards, the feature maps of the whole video are passed through a final convolutional layer with 3D filters. These filters have the size of the feature maps obtained from the single frames times a temporal component of length \(F\), which is the expected maximum temporal length of the motifs. We apply padding in the temporal domain so that motifs which are cut off at the beginning or end of the analyzed image sequence are also captured correctly. The output of the encoder is the set of parameters \(\tilde{\alpha}\) needed for the reparametrization in eq. (5). From the reparametrization we obtain the activations \(\mathbf{z}\), which are then passed to the decoder.

The decoder consists of a single deconvolution layer with \(M\) filters of the original frame size times the expected motif length \(F\), enforcing the reconstructed data \(\mathbf{x}^{\prime}\) to be an additive mixture of the decoder filters. Hence, after minimizing the loss, the filters of the decoder contain the detected motifs.

Performing these steps on the whole video would be computationally very costly. For this reason, we perform each training epoch only on a small subset of the video. The subset consists of a few hundred consecutive frames, where the starting point of this short sequence is randomly chosen in each epoch. We found that doing so did not negatively affect the performance of the algorithm. Using this strategy, we are able to analyse videos of arbitrary length in a computationally efficient way. More implementation details can be found in appendix B.
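As a concrete illustration of eqs. (5) and (7), here is a minimal PyTorch sketch of the reparameterized BinConcrete sampling step and the resulting single-sample Monte Carlo loss. The function names, the mean reduction, the numerical clamping, and the argument log_a_prior (the log of the prior location \(\tilde{a}\)) are our own choices; the released implementation may differ.

```python
import math
import torch
from torch.nn.functional import softplus

def binconcrete_rsample(log_alpha, lam):
    # Eq. (5): y = (log(alpha~) + log U - log(1 - U)) / lambda, z~ = sigmoid(y)
    u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
    y = (log_alpha + torch.log(u) - torch.log(1 - u)) / lam
    return torch.sigmoid(y), y

def logistic_log_pdf(y, log_alpha, lam):
    # Log-density of the pre-sigmoid BinConcrete variable y with location
    # log_alpha and temperature lam (Maddison et al., 2016)
    l = lam * y - log_alpha
    return math.log(lam) - l - 2.0 * softplus(-l)

def lemonade_loss(x, x_rec, y, log_alpha, log_a_prior, lam1, lam2, beta_kl):
    # Eq. (7): MSE reconstruction plus the beta-weighted single-sample Monte
    # Carlo estimate of KL(g_{alpha~,lam1}(y|x) || f_{a~,lam2}(y)) at sample y
    mse = ((x - x_rec) ** 2).mean()
    kl = (logistic_log_pdf(y, log_alpha, lam1)
          - logistic_log_pdf(y, log_a_prior, lam2)).mean()
    return mse + beta_kl * kl
```

In a training step, log_alpha would come from the encoder, the relaxed sample \(\tilde{\mathbf{z}}\) from binconcrete_rsample would drive the deconvolutional decoder, and the loss would be backpropagated through both networks.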
<--- Page Split --->

## 4 EXPERIMENTS AND RESULTS

### 4.1 SYNTHETIC DATA

The existence of neuronal assemblies is still fiercely debated, and their detection would only be possible with automated, specifically tailored tools like the one proposed in this paper. For this reason, no ground truth exists for the identification of spatio-temporal motifs in real neurophysiological spike data. In order to nevertheless report quantitative accuracies, we test the algorithm on synthetically generated datasets for which ground truth is available. For the data generation we used a procedure analogous to the one used in Diego et al. (2013) and Diego & Hamprecht (2013) for testing automated pipelines for the analysis and identification of neuronal activity from calcium imaging data. In contrast to them, we include neuronal assemblies with temporal firing structure. The cells within an assembly can have multiple spikes in a randomly chosen but fixed motif of temporal length up to 30 frames. We used 3 different assemblies in each sequence. Additionally, spurious spikes of single neurons were added to simulate noise. The ratio of spurious spikes to all spikes in the dataset was varied from \(0\%\) up to \(90\%\) in ten steps. The details of the synthetic data generation can be found in appendix C.1.

To the best of our knowledge, the proposed method is the first ever to detect video motifs with temporal structure directly in calcium imaging data. As a consequence, there are no existing baselines to compare to. Hence, we here adopt the SCC method presented in Peter et al. (2017) as a baseline. The SCC algorithm is able to identify motifs with temporal structure in spike trains or calcium transients. To apply it to our datasets, we first have to extract the calcium transients of the individual cells. For the synthetically generated data we know the location of each cell by construction, so this is possible with arbitrary accuracy. The output of the SCC algorithm is a matrix that contains, for each cell, the firing behavior over time within the motif. For a fair comparison we brought the motifs found with LeMoNADe, which are short video sequences, into the same format.

The performance of the algorithms is measured by computing the cosine similarity (Singhal, 2001) between ground truth motifs and detected motifs. The cosine similarity is one for identical and zero for orthogonal patterns. Not all ground truth motifs extend across all 30 frames, and some have almost vanishing luminosity in the last frames. Hence, a discovered motif can be shifted by a few frames and still capture all relevant parts of the motif. For this reason we computed the similarity for all possible temporal shifts of the motifs and took the maximum (a sketch of this measure is given below). More details on the computation of the similarity measure can be found in appendix C.2.

We ran both methods on 200 synthetically generated datasets with the parameters shown in table 3 in the appendix. We here show the results with the correct number of motifs (\(M = 3\)) used in both methods. In appendix E.1 we show that if the number of motifs is overestimated (here \(M > 3\)), LeMoNADe still identifies the correct motifs, but repeats them multiple times in the surplus filters. Hence this does not reduce the performance of the algorithm. The temporal extent of the motifs was set to \(F = 31\) to give the algorithms the chance to also capture the longer patterns.
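The shift-maximized cosine similarity can be sketched as follows; the helper names and the zero-padding convention for shifted frames are our assumptions, with the exact procedure given in appendix C.2.

```python
import numpy as np

def shift(motif, s):
    # Shift a (cells, frames) motif by s frames, zero-padding the overhang
    out = np.zeros_like(motif)
    if s >= 0:
        out[:, s:] = motif[:, :motif.shape[1] - s]
    else:
        out[:, :s] = motif[:, -s:]
    return out

def motif_similarity(found, truth, max_shift):
    # Cosine similarity between two motifs, maximized over temporal shifts;
    # 1.0 for identical patterns, 0.0 for orthogonal ones
    best = 0.0
    for s in range(-max_shift, max_shift + 1):
        a, b = shift(found, s).ravel(), truth.ravel()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0:
            best = max(best, float(a @ b) / denom)
    return best
```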
The cosine similarity of the found motifs to the set of ground truth motifs, averaged over all found motifs and all experiments for each of the ten noise levels, is shown in figure 4. The results in figure 4 show that LeMoNADe performs as well as SCC in detecting motifs and shows a similar stability in the presence of noise. This is surprising, since LeMoNADe does not need the previous extraction of individual cells and hence has to solve a much harder problem than SCC.

In order to verify that the results achieved by LeMoNADe and SCC lie significantly above chance, we performed a bootstrap (BS) test. For this, multiple datasets were created with spike distributions similar to those above, but with no reoccurring motif-like firing patterns. We compiled a distribution of similarities between patterns suggested by the proposed method and randomly sampled segments of the same length and general statistics from that same BS dataset. The full BS distributions are shown in appendix C.3. The \(95\%\)-tiles of the BS distributions for each noise level are also shown in figure 4.

Figure 5 shows an exemplary result from one of the analysed synthetic datasets with \(10\%\) noise and a maximum temporal extent of the ground truth motifs of 28 frames. All three motifs were correctly identified (see figure 5a), with a small temporal shift. This shift does not reduce the performance, as it is compensated by a corresponding shift in the activations of the motifs (see figure 5b). In order to show that the temporal structure of the found motifs matches the ground truth, in figure 5a we corrected the shift of one and two frames for motifs 1 and 2, respectively. We also show the results after extracting the individual cells from the motifs, together with the results from SCC, in figure 5c. One can see that the results are almost identical, again except for small temporal shifts.

<--- Page Split --->

![](images/7_0.jpg)

<center>Figure 4: Similarities between found motifs and ground truth for different noise levels. We show for LeMoNADe (lime green) and SCC (blue) the average similarities between found motifs and ground truth for ten different noise levels ranging from \(0\%\) up to \(90\%\) spurious spikes. Error bars indicate the standard deviation. For each noise level 20 different datasets were analyzed. For both LeMoNADe and SCC, the similarities between found and ground truth motifs are significantly above the \(95\%\)-tile of the corresponding bootstrap distribution (red) up to a noise level of \(70\%\) spurious spikes. Although LeMoNADe does not need the previous extraction of individual cells, it performs as well as SCC in detecting motifs and also shows a similar stability in the presence of noise. </center>

![](images/7_1.jpg)

<center>Figure 5: Exemplary result from one synthetic dataset. (a) shows a single plot containing all three motifs as additive RGB values for the ground truth motifs (top) and discovered motifs (bottom). The found motifs were ordered manually and temporally aligned to match the ground truth, for better readability. The complete motif sequences can be found in figure 10 in appendix E.1. In (b) the activations \(\mathbf{z}\) of the found motifs are shown in red for the complete video (left) and a small excerpt of the sequence (right). The ground truth activations are marked with blue crosses. (c) shows the firing of the extracted cells in the ground truth motifs (top), the motifs identified by SCC (middle) and the motifs found with LeMoNADe (bottom). </center>
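The chance-level threshold from the BS test can be sketched in a few lines, reusing motif_similarity from the sketch above; the function name and the percentile-based decision rule are our reading of appendix C.3, not the released evaluation code.

```python
import numpy as np

def bootstrap_threshold(found_motifs, null_segments, max_shift, q=95):
    # Null distribution: similarities between each discovered motif and
    # randomly sampled, equally long segments of a motif-free BS dataset;
    # the q-th percentile serves as the chance-level threshold.
    sims = [motif_similarity(m, seg, max_shift)
            for m in found_motifs for seg in null_segments]
    return np.percentile(sims, q)
```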
### 4.2 REAL DATA

We applied the proposed method to two datasets obtained from organotypic hippocampal slice cultures. The cultures were prepared from 7-9-day-old Wistar rats as described in Kann et al. (2003) and Schneider et al. (2015). The fluorescent \(\mathrm{Ca^{2+}}\) sensor, GCaMP6f (Chen et al., 2013), was delivered to the neurons by an adeno-associated virus (AAV). Neurons in stratum pyramidale of CA3 were imaged for 6.5 minutes (dataset 1) and 5 minutes (dataset 2) in the presence of the cholinergic receptor agonist carbachol. For more details on the generation of these datasets see appendix D.1.

<--- Page Split --->

![](images/8_0.jpg)

<center>Figure 6: Result from hippocampal slice culture datasets 1 (top) and 2 (bottom). The colors in (a) are inverted compared to the standard visualization of calcium imaging data for better visibility. In (c) activations are thresholded to \(70\%\) of the maximum activation for each motif. In (d) the manually selected frames of motif 0 highlight the temporal structure of the motif. </center>

<--- Page Split --->

The proposed method was run on these datasets with the parameter settings shown in table 3 in appendix E, where we also provide additional comments on the parameter settings. The analysis of the datasets took less than two hours on a 1080 Ti GPU. Before running the analysis we computed \(\Delta F / F\) for the datasets. We looked for up to three motifs with a maximum extent of \(F = 21\) frames. The results are shown in figure 6. For both datasets, one motif in figure 6a consists of multiple cells, shows repeated activation over the recording period (see figures 6b and 6c), and contains temporal structure (see figure 6d). The other two "motifs" can easily be identified as artefacts and background fluctuations. Like SCC and many other motif detection methods, LeMoNADe suffers from the fact that such artefacts, especially single events with extremely high neuronal activation, potentially explain a large part of the data and hence can be falsely detected as motifs. Nevertheless, these events can easily be identified by simply looking at the motif videos or by thresholding the activations, as done in figure 6c. Although the found motifs also include neuropil activation, this does not imply that it was used by the VAE as a defining feature of the motifs, just that it was also present in the images. Dendritic/axonal structures are part of the activated neurons and are therefore also visible in the motif videos. If necessary, these structures can be removed by post-processing steps. As LeMoNADe reduces the problem to the short motif videos instead of the whole calcium imaging video, neuropil subtraction becomes much more feasible.

## 5 CONCLUSION

We have presented a novel approach for the detection of neuronal assemblies that operates directly on the calcium imaging data, making the cumbersome extraction of individual cells and discrete spike times from the raw data dispensable. The motifs are extracted as short, repeating image sequences. This presents them in a very intuitive way and additionally returns information about the spatial distribution of the cells within an assembly. The proposed method's performance in identifying motifs is equivalent to that of a state-of-the-art method that requires the previous extraction of individual cells. Moreover, we were able to identify repeating firing patterns in two datasets from hippocampal slice cultures, demonstrating that the method is capable of handling real calcium imaging conditions.
For future work, a post- processing step as used in Peter et al. (2017) or a group sparsity regularization similar to the ones used in Bascol et al. (2016) or Mackevicius et al. (2018) could be added to determine a plausible number of motifs automatically. Moreover, additional latent dimensions could be introduced to capture artefacts and background fluctuations and hence automatically separate them from the actual motifs. The method is expected to, in principle, also work on other functional imaging modalities. We will investigate the possibility of detecting motifs using LeMoNADe on recordings from human fMRI or voltage- sensitive dyes in the future. ## ACKNOWLEDGMENTS EK thanks Ferran Diego for sharing his knowledge on generating synthetic data and for his scientific advice. DD acknowledges partial financial support by DFG Du 354/8- 1. EK, HS, JS, SE, OK, DD and FAH gratefully acknowledge partial financial support by DFG SFB 1134. ## REFERENCES Misha B Ahrens, Michael B Orger, Drew N Robson, Jennifer M Li, and Philipp J Keller. Whole- brain functional imaging at cellular resolution using light- sheet microscopy. Nature methods, 2013. Noah J. Apthorpe, Alexander J. Riordan, Rob E. Aguilar, Jan Homann, Yi Gu, David W. Tank, and H. Sebastian Seung. Automatic neuron detection in calcium imaging data using convolutional networks. In NIPS, 2016. Kevin Bascol, Remi Emonet, Elisa Fromont, and Jean- Marc Odobez. Unsupervised interpretable pattern discovery in time series using autoencoders. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR). Springer, 2016. David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 2017. György Buzsáki. Large- scale recording of neuronal ensembles. Nature neuroscience, 2004. <--- Page Split ---> Luis Carrillo- Reid, Jae- eun Kang Miller, Jordan P. Hamm, Jesse Jackson, and Rafael Yuste. Endogenous sequential cortical activity evoked by visual stimuli. Journal of Neuroscience, 2015. Tsai- Wen Chen, Trevor J. Wardill, Yi Sun, Stefan R. Pulver, Sabine L. Renninger, Amy Baohan, Eric R. Schreiter, Rex A. Kerr, Michael B. Orger, Vivek Jayaraman, Loren L. Looger, Karel Svoboda, and Douglas S. Kim. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature, 2013. Pierre Comon. Independent component analysis, a new concept? Signal processing, 1994. Pascale Cossart and Philippe J. Sansonetti. Bacterial invasion: The paradigms of enteroinvasive pathogens. Science, 2004. Anthony Christopher Davison, David Victor Hinkley, et al. Bootstrap methods and their application. Cambridge university press, 1997. Zhiwei Deng, Rajitha Navarathna, Peter Carr, Stephan Mandt, Yisong Yue, Iain Matthews, and Greg Mori. Factorized variational autoencoders for modeling audience reactions to movies. In CVPR, 2017. Winfried Denk, James H. Strickler, and Watt W. Webb. Two- photon laser scanning fluorescence microscopy. Science, 1990. Ferran Diego and Fred A Hamprecht. Learning multi- level sparse representations. In NIPS. 2013. Ferran Diego and Fred A Hamprecht. Sparse space- time deconvolution for calcium image analysis. In NIPS. 2014. Ferran Diego, Susanne Reichinnek, Martin Both, and Fred A. Hamprecht. Automated identification of neuronal activity from calcium imaging by sparse dictionary learning. ISBI, 2013. Benjamin A. Flusberg, Axel Nimmerjahn, Eric D. Cocker, Eran A. Mukamel, Robert P. 
J. Barretto, Tony H. Ko, Laurie D. Burns, Juergen C. Jung, and Mark J. Schnitzer. High- speed, miniaturized fluorescence microscopy in freely moving mice. Nature Methods, 2008. George L. Gerstein, Elizabeth R. Williams, Markus Diesmann, Sonja Grün, and Chris Trengove. Detecting synfire chains in parallel spike data. Journal of Neuroscience Methods, 2012. Kunal Ghosh, Laurie Burns, Eric D. Cocker, Axel Nimmerjahn, Yaniv Ziv, Abbas El Gamal, and Mark J. Schnitzer. Miniaturized integration of a fluorescence microscope. In Nature Methods, 2011. Prasoon Goyal, Zhiting Hu, Xiaodan Liang, Chenyu Wang, and Eric P. Xing. Nonparametric variational auto- encoders for hierarchical representation learning. In ICCV, 2017. Sonja Grün. Data- driven significance estimation for precise spike correlation. Journal of Neurophysiology, 2009. Sonja Grün, Markus Diesmann, and Ad Aertsen. Unitary events in multiple single- neuron spiking activity: I. detection and significance. Neural Computation, 2002a. Sonja Grün, Markus Diesmann, and Ad Aertsen. Unitary events in multiple single- neuron spiking activity: II. nonstationary data. Neural Computation, 2002b. Tamir Hazan and Tommi Jaakkola. On the partition function and random maximum a- posteriori perturbations. In ICML, 2012. Donald O. Hebb. The Organization of Behaviour: A Neuropsychological Theory. Wiley, 1949. Fritjof Helmchen and Winfried Denk. Deep tissue two- photon microscopy. Nature Methods, 2005. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta- vae: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017. Yuji Ikegaya, Gloster Aaron, Rosa Cossart, Dmitriy Aronov, Ilan Lampl, David Ferster, and Rafael Yuste. Synfire chains and cortical songs: temporal modules of cortical activity. Science, 2004. Hakan Inan, Murat A. Erdogdu, and Mark Schnitzer. Robust estimation of neural signals in calcium imaging. In NIPS. 2017. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel- softmax. In ICLR, 2017. Matthew Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta. Composing graphical models with neural networks for structured representations and fast inference. In NIPS, 2016. Patrick Kaifosh, Jeffrey Zaremba, Nathan B. Danielson, and Attila Losonczy. Sima: Python software for analysis of dynamic fluorescence imaging data. In Front. Neuroinform., 2014. O. Kann, S. Schuchmann, K. Buchheim, and U. Heinemann. Coupling of neuronal activity and mitochondrial metabolism as revealed by nad(p)h fluorescence signals in organotypic hippocampal slice cultures of the rat. Neuroscience, 2003. Diederik P. Kingma and Max Welling. Auto- encoding variational bayes. In ICLR, 2014. <--- Page Split ---> Diederik P. Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In NIPS, 2015. Aleksander Klibisz, Derek Rose, Matthew Eicholtz, Jay Blundon, and Stanislav Zakharenko. Fast, simple calcium imaging segmentation with fully convolutional networks. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - Third International Workshop, DLMIA 2017, and 7th International Workshop, ML- CDS 2017, Held in Conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, 2017, Proceedings, 2017. Yingzhen Li and Stephan Mandt. A deep generative model for disentangled representations of sequential data. arXiv preprint arXiv:1803.02991, 2018. 
Vitor Lopes- dos Santos, Sidarta Ribeiro, and Adriano BL Tort. Detecting cell assemblies in large neuronal populations. Journal of neuroscience methods, 2013. R. Duncan Luce. Individual Choice Behavior: A theoretical analysis. Wiley, 1959. Emily L. Mackevicius, Andrew H. Bahle, Alex H. Williams, Shijie Gu, Natalia I. Denissenko, Mark S. Goldman, and Michale S. Fee. Unsupervised discovery of temporal sequences in high- dimensional datasets, with applications to neuroscience. bioRxiv, 2018. Chris J. Maddison, Daniel Tarlow, and Tom Minka. A\* sampling. In NIPS, 2014. Christopher Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. In ICLR, 2016. David Marr, David Willshaw, and Bruce McNaughton. Simple memory: a theory for archicortex. Springer, 1991. Alik Mokeichev, Michael Okun, Omri Barak, Yonatan Katz, Ohad Ben- Shahar, and Ilan Lampl. Stochastic emergence of repeating cortical motifs in spontaneous membrane potential fluctuations in vivo. Neuron, 2007. Eran A. Mukamel, Axel Nimmerjahn, and Mark J. Schnitzer. Automated analysis of cellular signals from large- scale calcium imaging data. Neuron, 2009. W. Müller, U. Misgeld, and U. Heinemann. Carbachol effects on hippocampal neurons in vitro: dependence on the rate of rise of carbachol tissue concentration. Experimental brain research, 1988. Miguel A Nicolelis, Luiz A Baccala, RC Lin, and John K Chapin. Sensorimotor encoding by synchronous neural ensemble activity at multiple levels of the somatosensory system. Science, 1995. Miguel AL Nicolelis, Erika E Fanselow, and Asif A Ghazanfar. Hebb's dream: the resurgence of cell assemblies. Neuron, 1997. Marius Pachitariu, Adam M Packer, Noah Pettit, Henry Dalgleish, Michael Hausser, and Maneesh Sahani. Extracting regions of interest from biological images with convolutional sparse block coding. In NIPS. 2013. George Papandreou and Alan L. Yuille. Perturb- and- map random fields: Using discrete optimization to learn and sample from energy models. In ICCV, 2011. Eva Pastalkova, Vladimir Itskov, Asohan Amarasingham, and Gyorgy Buzsaki. Internally generated cell assembly sequences in the rat hippocampus. Science, 2008. Sven Peter, Elke Kirschbaum, Martin Both, Lee Campbell, Brandon Harvey, Conor Heins, Daniel Durstewitz, Ferran Diego, and Fred A Hamprecht. Sparse convolutional coding for neuronal assembly detection. In NIPS. 2017. Efthychios A Pnevmatikakis and Liam Paninski. Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions. In NIPS. 2013. Efthychios A Pnevmatikakis, Timothy A Machado, Logan Grosenick, Ben Poole, Joshua T Vogelstein, and Liam Paninski. Rank- penalized nonnegative spatiotemporal deconvolution and demixing of calcium imaging data. In Computational and Systems Neuroscience (Cosyne), 2013. Efthychios A. Pnevmatikakis, Yuanjun Gao, Daniel Soudry, David Pfau, Clay Lacefield, Kira Poskanzer, Randy Bruno, Rafael Yuste, and Liam Paninski. A structured matrix factorization framework for large scale calcium imaging data analysis. arXiv:1409.2903, 2014. Efthychios A. Pnevmatikakis, Daniel Soudry, Yuanjun Gao, Timothy A. Machado, Josh Merel, David Pfau, Thomas Reardon, Yu Mu, Clay Lacefield, Weijian Yang, Misha Ahrens, Randy Bruno, Thomas M. Jessell, Darcy S. Peterka, Rafael Yuste, and Liam Paninski. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron, 2016. Eleonora Russo and Daniel Durstewitz. 
Cell assemblies at multiple time scales with arbitrary lag constellations. eLife, 2017. <--- Page Split ---> Justus Schneider, Andrea Lewen, Thuy- Truc Ta, Lukas V. Galow, Raffaella Isola, Ismini E. Papageorgiou, and Oliver Kann. A reliable model for gamma oscillations in hippocampal tissue. Journal of Neuroscience Research, 2015. Wolf Singer. Synchronization of cortical activity and its putative role in information processing and learning. Annual review of physiology, 1993. Amit Singhal. Modern information retrieval: A brief overview. IEEE Data Eng. Bull., 2001. Paris Smaragdis. Non- negative matrix factor deconvolution; extraction of multiple sound sources from monophonic inputs. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2004. Quico Spaen, Dorit S Hochbaum, and Roberto Asin- Achá. Hnccorr: A novel combinatorial approach for cell identification in calcium- imaging movies. arXiv:1703.01999, 2017. Artur Speiser, Jinyao Yan, Evan W Archer, Lars Buesing, Srinivas C Turaga, and Jakob H Macke. Fast amortized inference of neural activity from calcium imaging data with variational autoencoders. In NIPS. 2017. Benjamin Staude, Sonja Grün, and Stefan Rotter. Higher- order correlations in non- stationary parallel spike trains: statistical modeling and inference. Frontiers in Computational Neuroscience, 2010a. Benjamin Staude, Stefan Rotter, and Sonja Grün. Cubic: cumulant based inference of higher- order correlations in massively parallel spike trains. Journal of Computational Neuroscience, 2010b. Ian H Stevenson and Konrad P Kording. How advances in neural recording affect data analysis. Nature neuroscience, 2011. John I. Yellott. The relationship between luce's choice axiom, thurstone's theory of comparative judgment, and the double exponential distribution. Journal of Mathematical Psychology, 1977. Pengcheng Zhou, Shanna L Resendez, Jose Rodriguez- Romaguera, Jessica C Jimenez, Shay Q Neufeld, Andrea Giovannucci, Johannes Friedrich, Eftychios A Pnevmatikakis, Garret D Stuber, Rene Hen, Mazen A Kheirbek, Bernardo L Sabatini, Robert E Kass, and Liam Paninski. Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data. eLife, 2018. <--- Page Split ---> ## APPENDIX ## A VARIATIONAL AUTOENCODER Variational autoencoder (VAE) are generative latent variable models which were first described in Kingma & Welling (2014). The data \(\mathbf{x} = \left\{\mathbf{x}^{(i)}\right\}_{i = 1}^{N}\) , consisting of \(N\) samples of some random variable \(\mathbf{x}\) , is generated by first drawing a latent variable \(\mathbf{z}^{(i)}\) from a prior distribution \(p(\mathbf{z})\) and then sampling from the conditional distribution \(p_{\theta^{*}}(\mathbf{x}\mid \mathbf{z})\) with parameters \(\theta^{*}\) . The distribution \(p_{\theta^{*}}(\mathbf{x}\mid \mathbf{z})\) belongs to the parametric family \(p_{\theta}(\mathbf{x}\mid \mathbf{z})\) with differentiable PDFs w.r.t. \(\theta\) and \(\mathbf{z}\) . Both the true parameters \(\theta^{*}\) as well as the latent variables \(\mathbf{z}^{(i)}\) are unknown. We are interested in an approximate posterior inference of the latent variables \(\mathbf{z}\) given some data \(\mathbf{x}\) . The true posterior \(p_{\theta}(\mathbf{z}\mid \mathbf{x})\) , however, is usually intractable. But it can be approximated by introducing the recognition model (or approximate posterior) \(q_{\phi}(\mathbf{z}\mid \mathbf{x})\) . 
We want to learn both the recognition model parameters \(\phi\) and the generative model parameters \(\theta\). The recognition model is usually referred to as the probabilistic encoder, and \(p_{\theta}(\mathbf{x}\mid \mathbf{z})\) is called the probabilistic decoder.

In order to learn the variational parameters \(\phi\) we want to minimize the KL-divergence between the approximate and the true posterior, \(\mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})\| p_{\theta}(\mathbf{z}|\mathbf{x}))\). To this end we use the fact that the marginal likelihood \(p_{\theta}(\mathbf{x})\) can be written as

\[\log p_{\theta}(\mathbf{x}) = \mathcal{L}(p,q;\mathbf{x}) + \mathrm{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x})\| p_{\theta}(\mathbf{z}|\mathbf{x})\big) \quad (8)\]

As the KL-divergence is non-negative, we can minimize \(\mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})\| p_{\theta}(\mathbf{z}|\mathbf{x}))\) by maximizing the (variational) lower bound \(\mathcal{L}(p,q;\mathbf{x})\) with

\[\mathcal{L}(p,q;\mathbf{x}) = \mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] - \mathrm{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x})\| p(\mathbf{z})\big) \quad (9)\]

In order to optimize the lower bound \(\mathcal{L}(p,q;\mathbf{x})\) w.r.t. both the variational parameters \(\phi\) and the generative parameters \(\theta\), we need to compute the gradients

\[\nabla_{\phi ,\theta}\mathcal{L}(p,q;\mathbf{x}) = \nabla_{\phi ,\theta}\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] - \nabla_{\phi ,\theta}\mathrm{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x})\| p(\mathbf{z})\big) \quad (10)\]

For the first part of the lower bound, the gradient w.r.t. \(\theta\) can easily be computed using Monte Carlo sampling,

\[\nabla_{\theta}\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] = \mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\nabla_{\theta}\log p_{\theta}(\mathbf{x}|\mathbf{z})\right]\approx \frac{1}{S}\sum_{s = 1}^{S}\nabla_{\theta}\log p_{\theta}(\mathbf{x}|\mathbf{z}^{s}) \quad (11)\]

with \(\mathbf{z}^{s}\sim q_{\phi}(\mathbf{z}|\mathbf{x})\). The gradient w.r.t. \(\phi\), however, does not take the form of an expectation in \(\mathbf{z}\) and can therefore not be estimated as easily by sampling:

\[\nabla_{\phi}\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] = \nabla_{\phi}\int q_{\phi}(\mathbf{z}|\mathbf{x})\log p_{\theta}(\mathbf{x}|\mathbf{z})d\mathbf{z} = \int \log p_{\theta}(\mathbf{x}|\mathbf{z})\nabla_{\phi}q_{\phi}(\mathbf{z}|\mathbf{x})d\mathbf{z} \quad (12)\]

However, in most cases we can use the reparameterization trick to overcome this problem: the random variable \(\tilde{\mathbf{z}}\sim q_{\phi}(\mathbf{z}\mid \mathbf{x})\) can be reparameterized using a differentiable transformation \(h_{\phi}(\epsilon ,\mathbf{x})\) of a noise variable \(\epsilon\) such that

\[\tilde{\mathbf{z}} = h_{\phi}(\epsilon ,\mathbf{x})\quad \mathrm{with}\quad \epsilon \sim p(\epsilon) \quad (13)\]

We can now compute the gradient w.r.t. \(\phi\) using Monte Carlo sampling as well:

\[\nabla_{\phi}\mathbb{E}_{\epsilon \sim p(\epsilon)}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z} = h_{\phi}(\epsilon ,\mathbf{x}))\right] = \mathbb{E}_{\epsilon \sim p(\epsilon)}\left[\nabla_{\phi}\log p_{\theta}(\mathbf{x}|\mathbf{z} = h_{\phi}(\epsilon ,\mathbf{x}))\right] \approx \frac{1}{S}\sum_{s = 1}^{S}\nabla_{\phi}\log p_{\theta}(\mathbf{x}|\mathbf{z}^{s} = h_{\phi}(\epsilon^{s},\mathbf{x})) \quad (14)\]

with \(\epsilon^{s}\sim p(\epsilon)\). Hence, the reparameterized lower bound \(\tilde{\mathcal{L}}(p,q;\mathbf{x})\approx \mathcal{L}(p,q;\mathbf{x})\) can be written as

\[\tilde{\mathcal{L}}(p,q;\mathbf{x}) = \frac{1}{S}\sum_{s = 1}^{S}\log p_{\theta}(\mathbf{x}|\mathbf{z}^{s}) - \mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})\| p(\mathbf{z})) \quad (15)\]

with \(\mathbf{z}^{s} = h_{\phi}(\epsilon^{s},\mathbf{x}),\ \epsilon^{s} \sim p(\epsilon)\). The first term on the RHS of eq. (15) is a negative reconstruction error, showing the connection to traditional autoencoders, while the KL-divergence acts as a regularizer on the approximate posterior \(q_{\phi}(\mathbf{z}\mid \mathbf{x})\).
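For concreteness, the following PyTorch-style sketch shows how eqs. (13)-(15) are typically realized for the common Gaussian case \(q_{\phi}(\mathbf{z}\mid\mathbf{x}) = \mathcal{N}(\mu(\mathbf{x}), \sigma(\mathbf{x})^2)\) with \(p(\mathbf{z}) = \mathcal{N}(0, I)\). This special case is for illustration only (LeMoNADe itself uses Bernoulli latents); `encoder` and `decoder` are hypothetical networks, and a unit-variance Gaussian likelihood is assumed so that \(\log p_{\theta}(\mathbf{x}\mid\mathbf{z})\) reduces to a squared error up to constants.

```python
import torch

def elbo(x, encoder, decoder, S=1):
    """One-sample (or S-sample) Monte Carlo estimate of the reparameterized ELBO."""
    mu, log_var = encoder(x)                       # parameters of q_phi(z|x)
    recon = 0.0
    for _ in range(S):
        eps = torch.randn_like(mu)                 # eps ~ p(eps) = N(0, I)
        z = mu + torch.exp(0.5 * log_var) * eps    # z = h_phi(eps, x), eq. (13)
        recon = recon - 0.5 * ((x - decoder(z)) ** 2).sum()  # log p_theta(x|z) up to consts
    recon = recon / S                              # Monte Carlo estimate, eq. (14)
    # KL(q_phi(z|x) || N(0, I)) is available in closed form for Gaussians
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon - kl                              # reparameterized bound, eq. (15)
```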
## B LEMONADE NETWORK ARCHITECTURE AND IMPLEMENTATION DETAILS

## B.1 ENCODER

The encoder network starts with a few convolutional layers with small 2D filters operating on each frame of the video separately, inspired by the architecture used in Apthorpe et al. (2016) to extract cells from calcium imaging data. The details of this network are shown in table 1. Afterwards, the feature maps of the whole video are passed through a final convolutional layer with 3D filters. These filters have the size of the feature maps obtained from the single images, times a temporal component of length \(F\), which is the expected maximum temporal extent of a motif. We use \(2M\) filters and apply padding in the temporal domain to avoid edge effects; this way, motifs that are cut off at the beginning or the end of the sequence can also be captured properly. The output of the encoder is \(2M\) feature maps of size \((T + F - 1)\times 1\times 1\).

## B.2 REPARAMETERIZATION

Instead of reparameterizing the Bernoulli distributions, we reparameterize their BinConcrete relaxations. The BinConcrete relaxation of a Bernoulli distribution with parameter \(\alpha\) takes as input parameter \(\tilde{\alpha} = \alpha /(1 - \alpha)\). Maddison et al. (2016) showed that instead of using the normalized probabilities \(\alpha\), we can also perform the reparametrization with unnormalized parameters \(\alpha^{1}\) and \(\alpha^{2}\), where \(\alpha^{1}\) is the probability to sample a one, \(\alpha^{2}\) is the probability to sample a zero, and \(\tilde{\alpha} = \alpha^{1} / \alpha^{2}\). The first \(M\) feature maps output by the encoder are assigned to contain the unnormalized probabilities \(\alpha_{m,t}^{1}\) that the activation of motif \(m\) in frame \(t\) is one. The second \(M\) feature maps contain the unnormalized probabilities \(\alpha_{m,t}^{2}\) that the activation of motif \(m\) in frame \(t\) is zero. The parameter \(\tilde{\alpha}\) needed for the reparameterized BinConcrete distribution is obtained by dividing the two stacks elementwise: \(\tilde{\alpha}_{t}^{m} = \alpha_{m,t}^{1} / \alpha_{m,t}^{2}\).

We use the reparameterization trick to sample from \(\mathrm{BinConcrete}(\tilde{\alpha}_{t}^{m})\) as follows: First we sample \(\left\{\{U_{t}^{m}\}_{t = 1}^{T + F - 1}\right\}_{m = 1}^{M}\) from a uniform distribution \(\mathrm{Uni}(0,1)\). Next, we compute \(\mathbf{y}\) with

\[y_{t}^{m} = \left(\frac{\tilde{\alpha}_{t}^{m}\cdot U_{t}^{m}}{1 - U_{t}^{m}}\right)^{1 / \lambda_{1}}. \quad (16)\]

Finally, we obtain \(\mathbf{z}\) according to

\[z_{t}^{m} = \frac{y_{t}^{m}}{1 + y_{t}^{m}}\cdot \alpha_{m,t}^{1} \quad (17)\]

for all \(m = 1,\ldots ,M\) and \(t = 1,\ldots ,T + F - 1\). The multiplication by \(\alpha_{m,t}^{1}\) in eq. (17) is not part of the original reparametrization trick (Maddison et al., 2016; Jang et al., 2017), but we found that the results of the algorithm improved dramatically when we scaled the activations with the \(\alpha^{1}\)-values originally predicted by the encoder network.
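As a sketch, the sampling step in eqs. (16)-(17) can be written in a few lines of PyTorch. Here `alpha1` and `alpha2` are assumed to be the two stacks of \(M\) encoder feature maps, flattened to shape \((M, T + F - 1)\), and `lambda1` is the posterior temperature; this is an illustration, not the released implementation.

```python
import torch

def sample_activations(alpha1, alpha2, lambda1):
    """Relaxed, reparameterized sample of the motif activations z."""
    alpha_tilde = alpha1 / alpha2                            # location parameter
    U = torch.rand_like(alpha_tilde)                         # U ~ Uni(0, 1)
    y = (alpha_tilde * U / (1.0 - U)) ** (1.0 / lambda1)     # eq. (16)
    z = y / (1.0 + y) * alpha1                               # eq. (17), scaled by alpha1
    return z
```

Note that a practical implementation would evaluate eq. (16) in log space to avoid numerical overflow for \(U\) close to one.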
## B.3 DECODER

The input to the decoder is the activations \(\mathbf{z}\). The decoder consists of a single deconvolution layer with \(M\) filters of the original frame size times the expected motif length \(F\). These deconvolution filters contain the motifs we are looking for. The details of the used networks, together with the sizes of the inputs and outputs of the different steps, are shown in table 1. Algorithm 1 summarizes the reparametrization and updates.

Table 1: LeMoNADe network architecture details

| Operation | Kernel | Feature maps | Padding | Stride | Nonlinearity |
|---|---|---|---|---|---|
| *Input: \(T\) images, \(P \times P'\)* | | | | | |
| 2D convolution | 3 × 3 | 24 | 0 × 0 | 1 | ELU |
| 2D convolution | 3 × 3 | 48 | 0 × 0 | 1 | ELU |
| Max pooling | 2 × 2 | - | 0 × 0 | 2 | - |
| 2D convolution | 3 × 3 | 72 | 0 × 0 | 1 | ELU |
| 2D convolution | 3 × 3 | 96 | 0 × 0 | 1 | ELU |
| Max pooling | 2 × 2 | - | 0 × 0 | 2 | - |
| 2D convolution | 3 × 3 | 120 | 0 × 0 | 1 | ELU |
| 2D convolution | 1 × 1 | 48 | 0 × 0 | 1 | ELU |
| *Output: \(T\) images, \(\tilde{P} \times \tilde{P}'\), with \(\tilde{P} = ((P - 4)/2 - 4)/2 - 2\) and \(\tilde{P}' = ((P' - 4)/2 - 4)/2 - 2\)* | | | | | |
| *Input: 1 video, \(T \times \tilde{P} \times \tilde{P}'\)* | | | | | |
| 3D convolution | \(F \times \tilde{P} \times \tilde{P}'\) | \(2M\) | \((F - 1)\) × 0 × 0 | 1 | SoftPlus |
| *Output: \(2M\) feature maps, \((T + F - 1)\) × 1 × 1* | | | | | |
| *Input: \(2M\) feature maps, \((T + F - 1)\) × 1 × 1* | | | | | |
| Reparametrization | - | - | - | - | - |
| *Output: \(M\) activations, \((T + F - 1)\) × 1 × 1* | | | | | |
| *Input: \(M\) activations, \((T + F - 1)\) × 1 × 1* | | | | | |
| 3D transposed convolution | \(F \times P \times P'\) | \(M\) | \((F - 1)\) × 0 × 0 | 1 | ReLU |
| *Output: 1 video, \(T \times P \times P'\)* | | | | | |

Algorithm 1: The LeMoNADe algorithm

Input: raw video \(\mathbf{x}\), normalized to zero mean and unit variance; architectures \(f_{\theta}\), \(\alpha_{\phi}\); hyperparameters \(\lambda_{1}, \lambda_{2}, \tilde{a}, \beta_{\mathrm{KL}}\)
Result: trained \(f_{\theta}\), \(\alpha_{\phi}\)
- \(\theta , \phi \leftarrow\) initialize network parameters
- repeat
    - // Sample subset of video: \(\mathbf{x}_{\mathrm{sub}}\leftarrow\) randomly chosen sequence of consecutive frames from \(\mathbf{x}\)
    - // Encoding step: encode \(\mathbf{x}_{\mathrm{sub}}\) to obtain \(\tilde{\alpha}\) as described in sections B.1 and B.2
    - // Latent step: sample noise \(U\sim \mathrm{Uni}(0,1)\); compute \(\mathbf{y}\) following eq. (16); compute \(\mathbf{z}\) following eq. (17)
    - // Decoding step: \(\mathbf{x}_{\mathrm{sub}}^{\prime}\leftarrow\) decode via \(f_{\theta}(\mathbf{z})\)
    - // Update parameters: compute gradients of the loss; \(\phi ,\theta \leftarrow\) update via \(\nabla_{\phi ,\theta}\ell (\mathbf{x}_{\mathrm{sub}},\mathbf{x}_{\mathrm{sub}}^{\prime},\tilde{\alpha},\lambda_{1},\tilde{a},\lambda_{2},\beta_{\mathrm{KL}})\) (see eq. (7) in the main paper)
- until convergence of \(\theta ,\phi\)
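A minimal PyTorch-style training loop corresponding to Algorithm 1 might look as follows. The `encoder`, `decoder`, `sample_activations`, and `loss_fn` names are hypothetical stand-ins for the steps in sections B.1-B.3 and eq. (7), and Adam is an illustrative optimizer choice (the paper specifies only the learning rate).

```python
import torch

def train(x, encoder, decoder, loss_fn, lambda1, n_epochs=5000, b=500, lr=1e-5):
    """Sketch of Algorithm 1: stochastic training on random video subsequences."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    T = x.shape[0]
    for _ in range(n_epochs):
        t0 = torch.randint(0, T - b + 1, (1,)).item()    # random starting frame
        x_sub = x[t0:t0 + b]                             # b consecutive frames
        alpha1, alpha2 = encoder(x_sub)                  # encoding step (B.1, B.2)
        z = sample_activations(alpha1, alpha2, lambda1)  # latent step, eqs. (16)-(17)
        x_rec = decoder(z)                               # decoding step (B.3)
        loss = loss_fn(x_sub, x_rec, alpha1, alpha2)     # loss from eq. (7)
        opt.zero_grad()
        loss.backward()
        opt.step()
```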
## C EXPERIMENTS AND RESULTS ON SYNTHETIC DATA

## C.1 SYNTHETIC DATA GENERATION

We created 200 artificial sequences of length \(60\,\mathrm{s}\) with a frame rate of \(30\,\mathrm{fps}\) and \(128\times 128\) pixels per image. The number of cells was varied, and cells were located randomly in the image plane with an overlap of up to \(30\%\). The cell shapes were selected randomly from 36 shapes extracted from real data. The transients were modelled as two-sided exponential decays with scales of \(50\,\mathrm{ms}\) and \(400\,\mathrm{ms}\), respectively. In contrast to Diego & Hamprecht (2013), we included neuronal assemblies with temporal firing structure: cells within an assembly can fire multiple spikes in a randomly chosen but fixed motif of temporal length up to 30 frames. We used 3 different assemblies in each sequence. The assembly activity itself was modelled as a Poisson process (Lopes-dos Santos et al., 2013) with a mean of 0.15 spikes/second and a refractory period of at least the length of the motif itself. By construction, the cell locations as well as the firing motifs are known for these datasets. In order to simulate the conditions in real calcium imaging videos as faithfully as possible, we added Gaussian background noise with a relative amplitude (max intensity minus mean intensity, divided by \(\sigma_{\mathrm{noise}}\)) between 10 and 20. Additionally, spurious spikes not belonging to any motif were added. The amount of spurious spikes was varied from \(0\%\) up to \(90\%\) of all spikes in the dataset. For each of the ten noise levels, 20 datasets were generated.

## C.2 SIMILARITY MEASURE

The performance of the algorithms is measured by computing the cosine similarity (Singhal, 2001) between ground truth motifs and found motifs. The found motifs appear in an arbitrary order, not necessarily corresponding to the order of the ground truth motifs. Additionally, the found motifs can be shifted in time compared to the ground truth. To account for this, we compute the similarity between each found motif and each ground truth motif under all possible temporal shifts and take the maximum. Hence, the similarity between the \(m\)-th found motif and the set of ground truth motifs \(\mathcal{G}\) is defined by

\[\mathrm{Sim}(\mathcal{M}^{m},\mathcal{G}) = \max \left\{\frac{\langle \mathrm{vec}(\mathcal{M}^{m}),\mathrm{vec}(\overset{s}{G})\rangle}{\|\mathrm{vec}(\mathcal{M}^{m})\|_{2}\cdot \|\mathrm{vec}(\overset{s}{G})\|_{2}}\ \middle|\ G\in \mathcal{G},\ s\in \{-F,\ldots ,F\}\right\} \quad (18)\]

where \(\mathcal{M}^{m}\) is the \(m\)-th found motif, \(\langle \cdot ,\cdot \rangle\) is the dot product, and \(\mathrm{vec}(\cdot)\) vectorizes a motif with dimensions \(F\times N\) into a vector of length \(F\cdot N\), where \(N\) is the number of cells. The shift operator \(\overset{s}{(\cdot)}\) moves a motif \(s\) frames forward in time while keeping the same size and filling missing values with zeros (Smaragdis, 2004).

The cosine similarity of the found motifs to the set of ground truth motifs was averaged over all found motifs and all experiments for each noise level. The average similarities achieved with LeMoNADe and SCC, as well as the \(5\%\) significance threshold of the BS distribution, are given for each noise level in table 2.

Table 2: Average cosine similarity between ground truth and discovered motifs. The average similarity and its standard deviation were computed over 20 different datasets for each noise level, for LeMoNADe (operating on video data) and SCC (after cell extraction). A bootstrap distribution of similarities was computed (see section C.3); BS-95 gives the \(5\%\) significance threshold of this distribution.

| Noise level | LeMoNADe (on video data) | BS-95 (on video data) | SCC (after cell extraction) |
|---|---|---|---|
| 0% | 0.838 ± 0.066 | 0.400 | 0.837 ± 0.088 |
| 10% | 0.826 ± 0.061 | 0.387 | 0.826 ± 0.116 |
| 20% | 0.804 ± 0.080 | 0.402 | 0.818 ± 0.120 |
| 30% | 0.770 ± 0.130 | 0.413 | 0.830 ± 0.125 |
| 40% | 0.775 ± 0.107 | 0.426 | 0.822 ± 0.093 |
| 50% | 0.756 ± 0.079 | 0.477 | 0.791 ± 0.126 |
| 60% | 0.730 ± 0.098 | 0.492 | 0.731 ± 0.169 |
| 70% | 0.639 ± 0.142 | 0.516 | 0.636 ± 0.163 |
| 80% | 0.462 ± 0.103 | 0.553 | 0.454 ± 0.135 |
| 90% | 0.357 ± 0.034 | 0.656 | 0.351 ± 0.067 |
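A direct NumPy sketch of the similarity measure in eq. (18) could look as follows; motifs are assumed to be arrays of shape \((F, N)\) (frames by cells), and the helper names are illustrative.

```python
import numpy as np

def shift(motif, s):
    """Shift a motif by s frames, zero-padding to keep the same shape."""
    out = np.zeros_like(motif)
    if s >= 0:
        out[s:] = motif[:motif.shape[0] - s]
    else:
        out[:s] = motif[-s:]
    return out

def similarity(found, ground_truth_set):
    """Eq. (18): best cosine similarity over all ground truth motifs and shifts."""
    F = found.shape[0]
    v = found.ravel()
    best = 0.0
    for g in ground_truth_set:
        for s in range(-F, F + 1):
            w = shift(g, s).ravel()
            denom = np.linalg.norm(v) * np.linalg.norm(w)
            if denom > 0:
                best = max(best, float(v @ w) / denom)
    return best
```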
## C.3 BOOTSTRAP-BASED SIGNIFICANCE TEST

Statistical methods for testing for cell assemblies (or spatio-temporal patterns more generally) have advanced tremendously in recent years, addressing many of the issues that plagued older approaches (Grün, 2009; Staude et al., 2010a;b; Russo & Durstewitz, 2017). Simple shuffle bootstraps are not necessarily the best methods if they destroy too much of the auto-correlative structure, and they can severely underestimate the distributional tails (Davison et al., 1997). We therefore use parametric, model-based bootstraps which retain the full statistical structure of the original data, except for the crucial feature of repeating motifs.

In order to provide a null hypothesis (H0) reference for the motif similarities returned by LeMoNADe (or other methods), we used the following bootstrap (BS) based test procedure: We generated 20 datasets analogous to those described in section C.1, i.e. with the same spiking statistics and temporal convolution with calcium transients, but without repeating motifs. These motif-less H0 datasets were then processed by LeMoNADe in the very same way as the motif-containing datasets, i.e. with the parameter settings shown in table 3. From each of these BS datasets, 150 random samples of the same temporal length as that of the 'detected' motifs were drawn. For each BS dataset, the similarities between each of the found motifs and all of the 150 random samples were computed as described in section C.2. As datasets with higher noise levels have different spiking statistics, we repeated this procedure for each of the ten noise levels.

Figure 7 shows the BS distributions (top). We also show the distribution of similarities between motifs found with LeMoNADe on the datasets which contained motifs (bottom). The \(95\%\)-tile of the BS distribution (corresponding to a \(5\%\) alpha level) is displayed as a vertical red line. Up to a noise level of \(70\%\), the average of the similarities found on the datasets that contained motifs is much higher than the \(95\%\)-tile of the BS distribution.

Figure 7: Top: bootstrap distribution for similarity between random patterns; shown is a sample from the BS distribution (blue) and the \(95\%\) significance threshold (red). Bottom: distribution for similarity between patterns found on data which contained repeating motifs; shown are the similarities between motifs found with LeMoNADe (lime green) and the ground truth motifs for the synthetic datasets discussed in the paper, which contained repeating motifs. The \(95\%\) significance threshold of the corresponding BS distribution is indicated as a vertical red line.
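The threshold computation itself is simple; a sketch is given below, reusing the `similarity` helper from section C.2 above. The `draw_random_sample(bs_video, F)` function is a hypothetical helper that cuts a random length-\(F\) window from the motif-less H0 dataset in the same \((F, N)\) format as the motifs.

```python
import numpy as np

def bs_threshold(found_motifs, bs_video, F, n_samples=150, alpha=0.05):
    """Estimate the (1 - alpha) similarity threshold under the H0 bootstrap."""
    sims = []
    for motif in found_motifs:
        for _ in range(n_samples):
            sample = draw_random_sample(bs_video, F)     # hypothetical helper
            sims.append(similarity(motif, [sample]))     # eq. (18)
    return np.quantile(sims, 1.0 - alpha)                # e.g. the 95%-tile
```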
## D EXPERIMENTS AND RESULTS ON REAL DATA

## D.1 DATA GENERATION

Organotypic hippocampal slice cultures were prepared from 7-9-day-old Wistar rats (Charles River Laboratories, Sulzfeld, Germany) as described by Kann et al. (2003) and Schneider et al. (2015). Animals were taken care of and handled in accordance with the European directive 2010/63/EU and with consent of the animal welfare officers at Heidelberg University (license T96/15). Slices were infected with adeno-associated virus (AAV) obtained from Penn Vector Core (PA, USA), encoding GCaMP6f under the control of the CamKII promoter (AAV5.CamKII.GCaMP6f.WPRE.SV40, Lot # V5392MI-S). AAV transduction was achieved, under sterile conditions, by applying \(0.5\,\mu l\) of the viral particle solution (qTiter: 1.55e13 GC/ml) on top of the slices. Slices were maintained on Biopore membranes (Millicell standing inserts; Merck Millipore, Schwalbach, Germany) between culture medium and humidified atmosphere. The medium consisted of \(50\%\) minimal essential medium, \(25\%\) Hank's balanced salt solution (Sigma-Aldrich, Taufkirchen, Germany), \(25\%\) horse serum (Life Technologies, Darmstadt, Germany), and \(2\,\mathrm{mM}\) L-glutamine (Life Technologies) at pH 7.3, stored in an incubator (Heracell; Thermo Scientific, Dreieich, Germany) with humidified normal atmosphere (\(5\%\) CO2, \(36.5\,^{\circ}C\)). The culture medium (1 ml) was replaced three times per week. The artificial cerebrospinal fluid used for imaging was composed of \(129\,\mathrm{mM}\) NaCl, \(3\,\mathrm{mM}\) KCl, \(1.25\,\mathrm{mM}\) NaH2PO4, \(1.8\,\mathrm{mM}\) MgSO4, \(1.6\,\mathrm{mM}\) CaCl2, \(21\,\mathrm{mM}\) NaHCO3, and \(10\,\mathrm{mM}\) glucose (Sigma-Aldrich, Taufkirchen, Germany). The pH of the recording solution was 7.3 when saturated with the gas mixture (\(95\%\) O2, \(5\%\) CO2). The recording temperature was \(32\pm 1\,^{\circ}C\). A constant bath wash of \(20\,\mu M\) (dataset 1) or \(10\,\mu M\) (dataset 2) carbachol (Sigma-Aldrich) was applied to enhance neuronal activity and increase firing probability during imaging (Müller et al., 1988).

Imaging of the CA3 region of the hippocampus was performed with 20x magnification on day 29 in vitro (dataset 1) and with 10x magnification on day 30 in vitro (dataset 2), 23 days post viral infection, from slices maintained in the submerged chamber of an Olympus BX51WI microscope. GCaMP6f was excited at \(485 \pm 10\,nm\). Fluorescence images (emission at \(521 \pm 10\,nm\)) were recorded at \(6.4\,Hz\) (dataset 1) and \(4\,Hz\) (dataset 2) using a CCD camera (ORCA-ER; Hamamatsu Photonics, Hamamatsu City, Japan). Before running the analysis we computed \(\Delta F / F\) for the datasets. In order to perform the computations more efficiently, we cropped the outer parts of the images containing no interesting neuronal activity and downsampled dataset 2 by a factor of 0.4.

## D.2 TEMPORAL STRUCTURE PLOTS

In order to show that the motifs 0 found in the two real datasets contain temporal structure, we compare them to what the synchronous activity of the participating cells with modulated amplitude would look like. The synchronous firing pattern was constructed as follows: First, for each motif \(\mathcal{M}^{m}\) with \(m = 1,\ldots ,M\), the maximum projection \(\mathcal{P}^{m}\) over time was computed at each pixel \(p = 1,\ldots ,P\cdot P^{\prime}\) by

\[\mathcal{P}_{p}^{m} = \max_{f}\mathcal{M}_{f,p}^{m}\quad \mathrm{with~}f = 1,\ldots ,F \quad (19)\]

and normalized,

\[\hat{\mathcal{P}}_{p}^{m} = \frac{\mathcal{P}_{p}^{m}}{\max_{p^{\prime}}\mathcal{P}_{p^{\prime}}^{m}}. \quad (20)\]

Finally, the synchronous firing pattern \(\mathcal{S}^{m}\) for motif \(m\) is obtained by multiplying this normalized maximum projection at each time frame \(f\) with the maximum intensity of motif \(m\) at that frame:

\[\mathcal{S}_{f}^{m} = \hat{\mathcal{P}}^{m}\cdot \max_{p}\mathcal{M}_{f,p}^{m}\quad \mathrm{for}~f = 1,\ldots ,F \quad (21)\]

Figure 8 shows the difference between the found motifs and the constructed synchronous firing patterns for the motifs found on the two real datasets.

Figure 8: Color-coded difference between discovered motifs and intensity-modulated synchronous firing. Red indicates negative differences, blue positive differences, and white zero difference. The fact that for both datasets some cells in motif 0 are displayed in red over multiple frames shows that these motifs contain temporal structure beyond mere spiking synchrony.
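Eqs. (19)-(21) amount to an outer product of a per-frame maximum with a normalized spatial maximum projection; a short NumPy sketch (with `motif` assumed to have shape \((F, P\cdot P')\), frames by flattened pixels) is given below.

```python
import numpy as np

def synchronous_pattern(motif):
    """Construct the amplitude-modulated synchronous pattern of eqs. (19)-(21)."""
    proj = motif.max(axis=0)                   # eq. (19): max projection over time
    proj_hat = proj / proj.max()               # eq. (20): normalize over pixels
    peak = motif.max(axis=1)                   # per-frame maximum intensity
    return peak[:, None] * proj_hat[None, :]   # eq. (21)

# The color-coded difference shown in figure 8 is then
# motif - synchronous_pattern(motif).
```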
## D.3 COMPARISON TO RESULTS OBTAINED WITH SCC

In order to show that LeMoNADe performs similarly to SCC not only on synthetically generated data but also on real data, we ran both methods on real dataset 2. A well-trained neuroscientist manually extracted the individual cells and calcium traces from the original calcium imaging video. Figure 9a shows the result obtained with SCC on these traces. In the same manner, calcium traces were extracted from the motif found with LeMoNADe (see figure 9b). The two results in figure 9 are highly similar.

Figure 9: Result obtained on real dataset 2 after manual cell extraction with SCC (a) and the traces manually extracted from the motif found with LeMoNADe on the original video data (b).

## E PARAMETER SETTINGS

LeMoNADe is no more difficult to apply than other motif detection methods for neuronal spike data. In our experiments, the default settings worked well for most parameters across different datasets, and only three parameters need to be adjusted: the maximum number of motifs \(M\), the maximum motif length \(F\), and one of the sparsity parameters (e.g. \(\tilde{a}\) or \(\beta_{\mathrm{KL}}\)). For SCC the user also has to specify three similar parameters. In addition, SCC requires the previous extraction of a spike matrix, which implies many additional parameters. Table 3 shows the parameter settings used for the experiments shown in the paper.

Table 3: Parameters used for the shown experiments. \(M\) is the number of motifs, \(F\) the maximum temporal extent of a motif, \(\lambda_{1}\) and \(\lambda_{2}\) are the temperatures for the relaxed approximate posterior and prior distributions, \(\tilde{a}\) is the location of the BinConcrete prior, \(b\) is the number of consecutive frames analysed in each epoch, and \(\beta_{\mathrm{KL}}\) is the weight of the KL-regularization term in the loss function. \(\beta_{c}\) is the ensemble penalty used in SCC.

| | \(M\) | \(F\) | \(\tilde{a}\) | \(\lambda_1\) | \(\lambda_2\) | #epochs | learning rate | \(b\) | \(\beta_{\mathrm{KL}}\) |
|---|---|---|---|---|---|---|---|---|---|
| LeMoNADe on synth. datasets with noise level < 50% | 3 | 31 | 0.05 | 0.6 | 0.5 | 5000 | \(10^{-5}\) | 500 | 0.10 |
| LeMoNADe on synth. datasets with noise level ≥ 50% | 3 | 31 | 0.10 | 0.6 | 0.5 | 5000 | \(10^{-5}\) | 500 | 0.10 |
| LeMoNADe on real dataset 1 | 3 | 21 | 0.05 | 0.4 | 0.3 | 5000 | \(10^{-5}\) | 150 | 0.01 |
| LeMoNADe on real dataset 2 | 3 | 21 | 0.01 | 0.6 | 0.5 | 5000 | \(10^{-5}\) | 500 | 0.10 |

| | \(M\) | \(F\) | \(\beta_c\) | #epochs | #inits |
|---|---|---|---|---|---|
| SCC on synth. datasets | 3 | 31 | \(10^{-4}\) | 10 | 1 |

## E.1 OVER- AND UNDER-ESTIMATION OF THE MAXIMUM NUMBER OF MOTIFS \(M\)

In order to show the effects of over- and underestimating the number of motifs, we first use our synthetic data with existing ground truth and 3 true motifs, and run LeMoNADe with an underestimated \((M = 1)\), the correct \((M = 3)\), and an overestimated \((M = 5)\) number of expected motifs. Figure 10 shows the complete ground truth (figure 10a) and the found motifs for the exemplary synthetic dataset discussed in the paper.
Besides the results for \(M = 3\) (figure 10c), we also show the found motifs for \(M = 1\) (figure 10b) and \(M = 5\) (figure 10d). If the number of motifs is underestimated \((M = 1)\), only one of the true motifs is captured. When the number of motifs is overestimated \((M = 5)\), the correct motifs are identified and the surplus filters are filled with (shifted) copies of the true motifs and background noise.

We also investigated the influence of different numbers of motifs on the results on real datasets. Figure 11 shows the motifs found on dataset 1 for \(M = 1, 2, 3, 5\). When the number is limited (as for \(M = 1\)), the model is expected to first learn those motifs which best explain the data. The motif shown in figure 11a also appears when \(M\) is increased, which shows that this motif is strongly present in the data. However, as long as only one filter is available, the motif also contains a lot of background noise. The second filter in figure 11b contains a high-luminosity artefact of the data; with its high luminosity and large spatial extent, it explains a lot of the dataset, but it can easily be identified as not being a neuronal assembly. If the number of motifs is further increased to \(M = 3\) (see figure 11c), more background noise is captured in the additional filter and the motif becomes cleaner. When the number of motifs is increased further to \(M = 5\), no new motifs appear, and the surplus two filters are filled with parts of the structures already present in figure 11c. Hence, when the correct number of motifs is unknown (as expected for real datasets), we recommend slightly overestimating the expected number of motifs. The result will capture the true motifs plus some copies of them. In future work, a post-processing step as in Peter et al. (2017) or a group sparsity regularization as in Bascol et al. (2016) and Mackevicius et al. (2018) could be introduced to eliminate these additional copies automatically. Background noise can easily be identified as not being a motif, either by looking at the motif videos or by thresholding the found activations. In future extensions of the model we will study the effect of additional latent dimensions for background noise, to separate it from actual motifs automatically.

## E.2 OVER- AND UNDER-ESTIMATION OF THE MAXIMUM MOTIF LENGTH \(F\)

If the maximum motif length \(F\) is underestimated, the found motifs are expected to contain just the part of the motif that reduces the reconstruction error most. Hence, in most cases the most interesting parts of the motifs will be captured, but details at either end of the motifs can be lost. If the motif length is overestimated, the motifs can be captured completely but might be shifted in time. This shift, however, is compensated by the motif activations and hence has no negative effect on the results. In our experiments we achieved good results with a generously chosen motif length, so we recommend overestimating the motif length. Figure 12 shows the motifs found on real dataset 1 with \(M = 3\) for the motif lengths \(F = 21\) and \(F = 31\). The results are highly similar. In both cases, the interesting pattern (motif 0 in figure 12a and motif 1 in figure 12b, respectively) is captured.

## E.3 SPARSITY PARAMETER

The parameter \(\tilde{a}\) influences the sparsity of the found activations. Smaller values of \(\tilde{a}\) penalize activations harder and hence often result in cleaner and more meaningful motifs.
However, if \(\tilde{a}\) is too small it will suppress the activations completely. For this reason we recommend running experiments with different values of \(\tilde{a}\) for each new dataset. Changing the value of \(\beta_{\mathrm{KL}}\) is another option to regulate the sparsity of the activations. However, in our experiments we found that the default value \(\beta_{\mathrm{KL}} = 0.1\) worked well for many different datasets, and varying \(\tilde{a}\) was effective enough. For the temperature parameters, the default values \(\lambda_{1} = 0.6\) and \(\lambda_{2} = 0.5\) worked well in most cases, and changing them is usually not necessary.

In order to show how the method reacts to the choice of \(\tilde{a}\) and \(\beta_{\mathrm{KL}}\), we performed multiple experiments on real dataset 2 with different parameter settings. We fixed all parameters as shown in table 3 except for \(\tilde{a}\) (figures 13 and 14) and \(\beta_{\mathrm{KL}}\) (figures 15 and 16). When \(\tilde{a}\) is varied within one order of magnitude (see figure 13), the motifs look quite similar, except for temporal shifts of the motifs and shuffling of their order. For smaller values of \(\tilde{a}\), surplus filters are filled with background noise (see figures 13a to 13d), whereas for somewhat larger values of \(\tilde{a}\) the surplus filters are filled with copies of (parts of) the motif (see figures 13e to 13g). Note that the motif which was also highlighted in the paper (figure 6d) appears at least once in all results from figure 13b to 13g. Only if \(\tilde{a}\) is changed by more than one order of magnitude do the results become significantly different, and the motif is no longer detected (see figure 14). This indicates that it is sufficient to vary only the order of magnitude of \(\tilde{a}\) in order to find a regime where motifs appear in the results; fine-tuning \(\tilde{a}\) is not necessary. This is also the recommended strategy for finding an appropriate sparsity parameter in SCC. A similar behavior can be observed when \(\beta_{\mathrm{KL}}\) is varied (see figure 15 for changes within an order of magnitude and figure 16 for larger changes). One sees effects similar to those for the variation of \(\tilde{a}\), but in the opposite direction: for smaller \(\beta_{\mathrm{KL}}\), surplus filters tend to be filled with copies of the motif, whereas for larger values of \(\beta_{\mathrm{KL}}\) the surplus filters are filled with background noise. This shows that it is usually sufficient to tune only one of the two, either \(\tilde{a}\) or \(\beta_{\mathrm{KL}}\), in order to achieve good results.

Figure 13: Motifs found on real dataset 2 for small changes of \(\tilde{a}\). The parameter \(\tilde{a}\) was increased in steps of 0.003 from \(\tilde{a} = 0.001\) (a) to \(\tilde{a} = 0.010\) (d) and in steps of 0.030 from \(\tilde{a} = 0.010\) (d) to \(\tilde{a} = 0.100\) (g).

Figure 14: Motifs found on real dataset 2 for large changes of \(\tilde{a}\). The parameter \(\tilde{a}\) was increased by two orders of magnitude in each step from \(\tilde{a} = 10^{-4}\) (a) to \(\tilde{a} = 1\) (c).
Figure 15: Motifs found on real dataset 2 for small changes of \(\beta_{\mathrm{KL}}\). The parameter \(\beta_{\mathrm{KL}}\) was increased in steps of 0.03 from \(\beta_{\mathrm{KL}} = 0.01\) (a) to \(\beta_{\mathrm{KL}} = 0.19\) (g).

Figure 16: Motifs found on real dataset 2 for large changes of \(\beta_{\mathrm{KL}}\). The parameter \(\beta_{\mathrm{KL}}\) was increased by two orders of magnitude in each step from \(\beta_{\mathrm{KL}} = 10^{-3}\) (a) to \(\beta_{\mathrm{KL}} = 10\) (c).

## F MOTIF VIDEOS

In order to give the reader a better impression of what the used data and the extracted motifs look like as short video sequences, we provide a few video files containing extracted motifs, analyzed data, and reconstructed videos at https://drive.google.com/drive/folders/19F76JLn490RzZ4d7GxbWZog6RdF2nt3w?usp=sharing. The reconstructed videos are obtained by convolving the found motifs with the corresponding found activations. The videos are provided either in TIFF or MP4 format. Table 4 lists the files together with short descriptions of what each video shows. The videos corresponding to the synthetic datasets were generated with a frame rate of 30 fps and those corresponding to the real datasets with 10 fps.

Table 4: Attached video files and descriptions. The parameters used for the analysis are the same as given in table 3 unless mentioned otherwise. The three video types are: motif, showing a single motif; parallel video, showing the original video from the dataset (upper left corner) and reconstructions from the found motifs; and RGB video, showing a superposition of RGB values of the reconstructed videos from the three motifs found on the dataset. In addition to the synthetic data example discussed in the paper (with \(10\%\) noise spikes), we also provide videos from a synthetic dataset with \(50\%\) spurious spikes.

| File name | Dataset | Video type | Number of motifs \(M\) | Motif length \(F\) |
|---|---|---|---|---|
| real_1_e1_121_recon.mp4 | real dataset 1 | parallel video | 1 | 21 |
| real_1_e3_121_motif_0.tiff | real dataset 1 | motif | 3 | 21 |
| real_1_e3_121_motif_1.tiff | real dataset 1 | motif | 3 | 21 |
| real_1_e3_121_motif_2.tiff | real dataset 1 | motif | 3 | 21 |
| real_1_e3_121_recon.mp4 | real dataset 1 | parallel video | 3 | 21 |
| real_1_e3_121_rgb.mp4 | real dataset 1 | RGB video | 3 | 21 |
| real_1_e3_121_reon.mp4 | real dataset 1 | parallel video | 5 | 21 |
| real_2_e3_121_motif_0.tiff | real dataset 2 | motif | 3 | 21 |
| real_2_e3_121_motif_1.tiff | real dataset 2 | motif | 3 | 21 |
| real_2_e3_121_motif_2.tiff | real dataset 2 | motif | 3 | 21 |
| real_2_e3_121_recon.mp4 | real dataset 2 | parallel video | 3 | 21 |
| real_2_e3_121_rgb.mp4 | real dataset 2 | RGB video | 3 | 21 |
| synth_example_e3_121_motif_0.tiff | synth. example | motif | 3 | 21 |
| synth_example_e3_121_motif_1.tiff | synth. example | motif | 3 | 21 |
| synth_example_e3_121_motif_2.tiff | synth. example | motif | 3 | 21 |
| synth_example_e3_121_recon.mp4 | synth. example | parallel video | 3 | 21 |
| synth_example_e3_121_rgb.mp4 | synth. example | RGB video | 3 | 21 |
| synth_50noise_e3_121_recon.mp4 | synth. with 50% noise | parallel video | 3 | 21 |
| synth_50noise_e3_121_rgb.mp4 | synth. with 50% noise | RGB video | 3 | 21 |
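The reconstruction used for the parallel and RGB videos is the convolution of motifs and activations described above. A minimal NumPy sketch is given below; the exact temporal alignment follows the padding conventions of appendix B, so the cropping offset here is an assumption.

```python
import numpy as np

def reconstruct(motifs, z, T):
    """Sum over motifs of each motif convolved in time with its activations.

    motifs: array (M, F, P, P2); z: activations (M, T + F - 1); returns (T, P, P2).
    """
    M, F, P, P2 = motifs.shape
    out = np.zeros((T + 2 * (F - 1), P, P2))
    for m in range(M):
        for t in range(z.shape[1]):
            if z[m, t] > 0:
                out[t:t + F] += z[m, t] * motifs[m]
    return out[F - 1:F - 1 + T]   # crop the temporal padding back to T frames
```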
## ABSTRACT

Neuronal assemblies, loosely defined as subsets of neurons with reoccurring spatio-temporally coordinated activation patterns, or "motifs", are thought to be building blocks of neural representations and information processing. We here propose LeMoNADe, a new exploratory data analysis method that facilitates hunting for motifs in calcium imaging videos, the dominant microscopic functional imaging modality in neurophysiology. Our nonparametric method extracts motifs directly from videos, bypassing the difficult intermediate step of spike extraction. Our technique augments variational autoencoders with a discrete stochastic node, and we show in detail how a differentiable reparametrization and relaxation can be used. An evaluation on simulated data, with available ground truth, reveals excellent quantitative performance. In real video data acquired from brain slices, with no ground truth available, LeMoNADe uncovers nontrivial candidate motifs that can help generate hypotheses for more focused biological investigations.

## 1 INTRODUCTION

Seventy years after being postulated by Hebb (1949), the existence and importance of reoccurring spatio-temporally coordinated neuronal activation patterns (motifs), also known as neuronal assemblies, is still fiercely debated (Marr et al., 1991; Singer, 1993; Nicolelis et al., 1997; Ikegaya et al., 2004; Cossart & Sansonetti, 2004; Buzsáki, 2004; Mokeichev et al., 2007; Pastalkova et al., 2008; Stevenson & Kording, 2011; Ahrens et al., 2013; Carrillo-Reid et al., 2015). Calcium imaging, a microscopic video technique that enables the concurrent observation of hundreds of neurons in vitro and in vivo (Denk et al., 1990; Helmchen & Denk, 2005; Flusberg et al., 2008), is best suited to witness such motifs if they indeed exist.

Figure 1: We present LeMoNADe, a novel approach to identify neuronal assemblies directly from calcium imaging data. In contrast to previous methods, LeMoNADe does not need pre-processing steps such as cell identification and spike time extraction for unravelling assemblies.

In recent years, a variety of methods have been developed to identify neuronal assemblies. These methods range from approaches for the detection of synchronous spiking up to more advanced methods for the detection of arbitrary spatio-temporal firing patterns (Comon, 1994; Nicolelis et al., 1995; Grün et al., 2002a;b; Lopes-dos Santos et al., 2013; Russo & Durstewitz, 2017; Peter et al., 2017). All of these methods, however, require a spike time matrix as input. Generating such a spike time matrix from calcium imaging data requires the extraction of individual cells and discrete spike times. Again, many methods have been proposed for these tasks (Mukamel et al., 2009; Pnevmatikakis & Paninski, 2013; Pnevmatikakis et al., 2013; Diego et al., 2013; Diego & Hamprecht, 2013; Pachitariu et al., 2013; Pnevmatikakis et al., 2014; Diego & Hamprecht, 2014; Kaifosh et al., 2014; Pnevmatikakis et al., 2016; Apthorpe et al., 2016; Inan et al., 2017; Spaen et al., 2017; Klibisz et al., 2017; Speiser et al., 2017; Zhou et al., 2018).
Given the low signal-to-noise ratios (SNR), large background fluctuations, non-linearities, and strong temporal smoothing due to the calcium dynamics itself as well as that of calcium indicators, it is impressive how well some of these methods perform, thanks to modern recording technologies and state-of-the-art regularization and inference (Pnevmatikakis et al., 2016; Zhou et al., 2018). Still, given the difficulty of this data, errors in segmentation and spike extraction are unavoidable and adversely affect downstream processing steps that do not have access to the raw data. Hence, properly annotating data and correcting the output from automatic segmentation can still take up a huge amount of time.

In this paper, we propose LeMoNADe (Learned Motif and Neuronal Assembly Detection), a variational autoencoder (VAE) based framework specifically designed to identify repeating firing motifs with arbitrary temporal structure directly in calcium imaging data (see figure 1). The encoding and decoding networks are set up such that motifs can be extracted directly from the decoding filters, and their activation times from the latent space (see sec. 3). Motivated by the sparse nature of neuronal activity, we replace the Gaussian priors used in standard VAEs with Bernoulli priors on the latent variables to yield sparse and sharply peaked motif activations (sec. 3.1). The choice of discrete Bernoulli distributions makes it necessary to use a BinConcrete relaxation and the Gumbel-softmax reparametrization trick (Maddison et al., 2016; Jang et al., 2017) to enable gradient descent techniques with low variance (sec. 3.3). We add a \(\beta\)-coefficient (Higgins et al., 2017) to the loss function in order to adapt the regularization to the properties of the data (sec. 3.3). Furthermore, we propose a training scheme which allows us to process videos of arbitrary length in a computationally efficient way (sec. 3.4). On synthetically generated datasets the proposed method performs as well as a state-of-the-art motif detection method that requires the extraction of individual cells (sec. 4.1). Finally, we detect possible repeating motifs in two fluorescence microscopy datasets from hippocampal slice cultures (sec. 4.2). A PyTorch implementation of the proposed method is released at https://github.com/EKirschbaum/LeMoNADe.

## 2 RELATED WORK

**Autoencoders and variational autoencoders** Variational autoencoders (VAEs) were introduced by Kingma & Welling (2014) and have become a popular method for unsupervised generative deep learning. They consist of an encoder, mapping a data point into a latent representation, and a decoder whose task is to restore the original data and to generate samples from this latent space. However, the original VAE lacks an interpretable latent space. Recent suggestions for solving this problem have been modifications of the loss term (Higgins et al., 2017) or a more structured latent space (Johnson et al., 2016; Deng et al., 2017). VAEs have also been used successfully on video sequences: Li & Mandt (2018) learn a disentangled representation to manipulate content in cartoon video clips, while Goyal et al. (2017) combine VAEs with nested Chinese Restaurant Processes to learn a hierarchical representation of video data. Johnson et al. (2016) use a latent switching linear dynamical system (SLDS) model combined with a structured variational autoencoder to segment and categorize mouse behavior from raw depth videos.
Unfortunately, this model is not directly applicable to the task of identifying motifs with temporal structure from calcium imaging data, for the following reasons. Firstly, neuronal assemblies are expected to extend over multiple frames. Since in the model by Johnson et al. (2016) the underlying latent process is a relatively simple first-order Markovian (switching) linear process, representing longer-term temporal dependencies will be very hard to achieve due to the usually exponential forgetting in such systems. Secondly, in the model of Johnson et al. (2016) each frame is generated from exactly one of \(M\) latent states. For calcium imaging, however, most frames are not generated by one of the \(M\) motifs but by noise, and different motifs can also overlap temporally, which is likewise not possible in the model by Johnson et al. (2016).

Closest to our goal of detecting motifs in video data is the work described in Bascol et al. (2016). In this approach, a convolutional autoencoder is combined with a number of functions and regularization terms to enforce interpretability both in the convolutional filters and in the latent space. This method was successfully used to detect patterns in data with document structure, including optical flow features of videos. However, as the cells observed in calcium imaging are spatially stationary and have varying luminosity, the extraction of optical flow features is not meaningful here. Hence this method is not applicable to the task of detecting neuronal assemblies in calcium imaging data.

**Cell segmentation and spike time extraction from calcium imaging data** Various methods have been proposed for automated segmentation and signal extraction from calcium imaging data. Most of them are based on non-negative matrix factorization (Mukamel et al., 2009; Pnevmatikakis & Paninski, 2013; Pnevmatikakis et al., 2013; 2014; Diego & Hamprecht, 2014; Pnevmatikakis et al., 2016; Inan et al., 2017; Zhou et al., 2018), clustering (Kaifosh et al., 2014; Spaen et al., 2017), or dictionary learning (Diego et al., 2013; Diego & Hamprecht, 2013; Pachitariu et al., 2013). Recent approaches have started to use deep learning for the analysis of calcium imaging data: Apthorpe et al. (2016) and Klibisz et al. (2017) use convolutional neural networks (CNNs) to identify neuron locations, and Speiser et al. (2017) use a VAE combined with different models of calcium dynamics to extract spike times from the calcium transients. Although many sophisticated methods have been proposed, the extraction of cells and spike times from calcium imaging data can still be prohibitively laborious and require manual annotation and correction, with the accuracy of these methods being limited by the quality of the calcium recordings. Furthermore, some of the mentioned methods are specially designed for two-photon microscopy, whereas only few methods are able to deal with the low SNR and large background fluctuations in single-photon and microendoscopic imaging (Flusberg et al., 2008; Ghosh et al., 2011). Additional challenges for these methods are factors such as non-Gaussian noise, non-cell background activity, and seemingly overlapping cells which are out of focus (Inan et al., 2017).

**Neuronal assembly detection** The identification of neuronal assemblies in spike time matrices has been studied from different perspectives.
For the detection of joint (strictly synchronous) spike events across multiple neurons, rather simple methods based on PCA or ICA have been proposed (Comon, 1994; Nicolelis et al., 1995; Lopes-dos Santos et al., 2013), as well as more sophisticated statistical methods such as unitary event analysis (Grün et al., 2002a;b). Higher-order correlations among neurons and sequential spiking motifs such as synfire chains can be identified using more advanced statistical tests (Staude et al., 2010a;b; Gerstein et al., 2012). The identification of cell assemblies with arbitrary spatio-temporal structure has been addressed only quite recently. One approach recursively merges sets of units into larger groups based on their joint spike count probabilities evaluated across multiple different time lags (Russo & Durstewitz, 2017). Another method uses sparse convolutional coding (SCC) to reconstruct the spike matrix as a convolution of spatio-temporal motifs and their activations in time (Peter et al., 2017). An extension of this method uses a group sparsity regularization to identify the correct number of motifs (Mackevicius et al., 2018).

To the authors' knowledge, only Diego & Hamprecht (2013) address the detection of neuronal assemblies directly from calcium imaging data. This method, however, only aims at identifying synchronously firing neurons, whereas the method proposed in this paper can also identify assemblies with more complex temporal firing patterns.

## 3 METHOD

LeMoNADe is a VAE-based latent variable method, specifically designed for the unsupervised detection of repeating motifs with temporal structure in video data. The data \(\mathbf{x}\) is reconstructed as a convolution of motifs and their activation time points, as displayed in figure 2a. The VAE is set up such that the latent variables \(\mathbf{z}\) contain the activations of the motifs, while the decoder encapsulates the firing motifs of the cells, as indicated in figure 2b. The proposed generative model is displayed in figure 3. The great benefit of this generative model in combination with the proposed VAE is the possibility to extract the temporal motifs and their activations directly, while at the same time taking into account the sparse nature of neuronal assemblies.

Figure 2: Schematic sketch of the proposed method. In this toy example, the input video \(\mathbf{x}\) is an additive mixture of two motifs (highlighted in red and blue) plus noise, as shown in (a). To learn the motifs and activations, the loss between the input video \(\mathbf{x}\) and the reconstructed video \(\mathbf{x}^{\prime}\) is minimized. (b) shows the generation of the reconstructed video through the proposed VAE framework.

### 3.1 THE LEMONADE MODEL

In the proposed model the dataset consists of a single video \(\mathbf{x} \in \mathbb{R}^{T \times P \times P'}\) with \(T\) frames of \(P \times P'\) pixels each. We assume this video to be an additive mixture of \(M\) repeating motifs of maximum temporal length \(F\). At each time frame \(t = 1, \ldots , T\), and for each motif \(m = 1, \ldots , M\), a latent random variable \(z_{t}^{m} \in \{0, 1\}\) is drawn from a prior distribution \(p_{a}(\mathbf{z})\). The variable \(z_{t}^{m}\) indicates whether motif \(m\) is activated in frame \(t\) or not.
The video \(\mathbf{x}\) is then generated from the conditional distribution \(p_{\theta}(\mathbf{x}\mid \mathbf{z})\) with parameters \(\theta\). In order to infer the latent activations \(\mathbf{z}\), the posterior \(p_{\theta}(\mathbf{z}\mid \mathbf{x})\) is needed. The true posterior \(p_{\theta}(\mathbf{z}\mid \mathbf{x})\) is intractable, but it can be approximated by introducing the recognition model (or approximate posterior) \(q_{\phi}(\mathbf{z}\mid \mathbf{x})\).

We assume that the recognition model \(q_{\phi}(\mathbf{z}\mid \mathbf{x})\) factorizes over the \(M\) motifs and the \(T\) time steps of the video. In contrast to most VAEs, we further assume that each latent variable \(z_{t}^{m}\) is Bernoulli-distributed with parameter \(\alpha_{t}^{m}(\mathbf{x};\phi)\):

\[q_{\phi}(\mathbf{z}\mid \mathbf{x}) = \prod_{m = 1}^{M}\prod_{t = 1}^{T}q_{\phi}(z_{t}^{m}\mid \mathbf{x}) = \prod_{m = 1}^{M}\prod_{t = 1}^{T}\mathrm{Bernoulli}\big(z_{t}^{m}\mid \alpha_{t}^{m}(\mathbf{x};\phi)\big). \quad (1)\]

We sample the activations \(\mathbf{z}\) in the latent space from the Bernoulli distributions to enforce sparse, sharply peaked activations. The parameters \(\alpha_{t}^{m}(\mathbf{x};\phi)\) are given by a CNN with parameters \(\phi\). The corresponding plate diagram and the proposed generative and recognition models are shown in figure 3.

Figure 3: Plate diagram and proposed generative and recognition model. The plate diagram of the proposed model is shown on the left, where red (solid) lines correspond to the generative/decoding process and blue (dashed) lines correspond to the recognition/encoding model. On the right, the equations for the generative as well as the recognition model are given.

### 3.2 THE VAE OBJECTIVE

In order to learn the variational parameters, the KL-divergence between the approximate and the true posterior, \(\mathrm{KL}(q_{\phi}(\mathbf{z}\mid \mathbf{x})\| p_{\theta}(\mathbf{z}\mid \mathbf{x}))\), is minimized. Instead of minimizing this KL-divergence, we can equivalently maximize the variational lower bound (ELBO) \(\mathcal{L}(\theta ,\phi ;\mathbf{x})\) (see e.g. Blei et al. (2017)):

\[\mathcal{L}(\theta ,\phi ;\mathbf{x}) = \mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}\mid \mathbf{x})}\big[\log p_{\theta}(\mathbf{x}\mid \mathbf{z})\big] - \mathrm{KL}\big(q_{\phi}(\mathbf{z}\mid \mathbf{x})\| p_{a}(\mathbf{z})\big). \quad (2)\]

In order to optimize the ELBO, the gradients w.r.t. the variational parameters \(\phi\) and the generative parameters \(\theta\) have to be computed. The gradient w.r.t. \(\phi\), however, cannot be computed easily, since the expectation in eq. (2) depends on \(\phi\). A reparameterization trick (Kingma et al., 2015) is used to overcome this problem: the random variable \(\mathbf{z} \sim q_{\phi}(\mathbf{z}\mid \mathbf{x})\) is reparameterized using a differentiable transformation \(h_{\phi}(\epsilon , \mathbf{x})\) of a noise variable \(\epsilon\) such that

\[\mathbf{z} = h_{\phi}(\epsilon ,\mathbf{x})\quad \mathrm{with}\quad \epsilon \sim p(\epsilon). \quad (3)\]

The reparameterized ELBO, for which the expectation can be computed, e.g. using Monte Carlo sampling, is then given by

\[\mathcal{L}(\theta ,\phi ;\mathbf{x}) = \mathbb{E}_{\epsilon \sim p(\epsilon)}\big[\log p_{\theta}(\mathbf{x}\mid \mathbf{z} = h_{\phi}(\epsilon ,\mathbf{x}))\big] - \mathrm{KL}\big(q_{\phi}(\mathbf{z}\mid \mathbf{x})\| p_{a}(\mathbf{z})\big)~. \quad (4)\]

More details on VAEs as introduced by Kingma & Welling (2014) are given in appendix A.
### 3.3 LEMONADE REPARAMETRIZATION TRICK AND LOSS FUNCTION

In our case, however, by sampling from Bernoulli distributions we have added discrete stochastic nodes to our computational graph, and we need to find differentiable reparameterizations of these nodes. The Bernoulli distribution can be reparameterized using the Gumbel-max trick (Luce, 1959; Yellott, 1977; Papandreou & Yuille, 2011; Hazan & Jaakkola, 2012; Maddison et al., 2014). This, however, is not differentiable. For this reason we use the BinConcrete distribution (Maddison et al., 2016), which is a continuous relaxation of the Bernoulli distribution with temperature parameter \(\lambda\). For \(\lambda \to 0\) the BinConcrete distribution smoothly anneals to the Bernoulli distribution. The BinConcrete distribution can be reparameterized using the Gumbel-softmax trick (Maddison et al., 2016; Jang et al., 2017), which is differentiable. Maddison et al. (2016) show that for a discrete random variable \(\mathbf{z} \sim \operatorname{Bernoulli}(\alpha)\), the reparameterization of the BinConcrete relaxation of this discrete distribution is

\[\tilde{\mathbf{z}} = \sigma (\mathbf{y}) = \frac{1}{1 + \exp(-\mathbf{y})}\quad \mathrm{with}\quad \mathbf{y} = \frac{\log(\tilde{\alpha}) + \log(U) - \log(1 - U)}{\lambda} \quad (5)\]

where \(U \sim \operatorname {Uni}(0,1)\) and \(\tilde{\alpha} = \alpha /(1 - \alpha)\). Hence the relaxed and reparameterized lower bound \(\tilde{\mathcal{L}} (\theta , \tilde{\alpha}; \mathbf{x}) \approx \mathcal{L}(\theta , \phi ; \mathbf{x})\) can be written as

\[\tilde{\mathcal{L}} (\theta , \tilde{\alpha}; \mathbf{x}) = \mathbb{E}_{\mathbf{y} \sim g_{\tilde{\alpha}, \lambda_{1}}(\mathbf{y} \mid \mathbf{x})} \left[ \log p_{\theta}(\mathbf{x} \mid \sigma (\mathbf{y})) \right] - \operatorname {KL}\left(g_{\tilde{\alpha}, \lambda_{1}}(\mathbf{y} \mid \mathbf{x}) \,\|\, f_{\tilde{a}, \lambda_{2}}(\mathbf{y})\right) \quad (6)\]

where \(g_{\tilde{\alpha}, \lambda_{1}}(\mathbf{y} \mid \mathbf{x})\) is the reparameterized BinConcrete relaxation of the variational posterior \(q_{\phi}(\mathbf{z} \mid \mathbf{x})\) and \(f_{\tilde{a}, \lambda_{2}}(\mathbf{y})\) the reparameterized relaxation of the prior \(p_{a}(\mathbf{z})\). Here \(\lambda_{1}\) and \(\lambda_{2}\) are the respective temperatures, and \(\tilde{\alpha}\) and \(\tilde{a}\) the respective locations of the relaxed and reparameterized variational posterior and prior distributions. The first term on the RHS of eq. (6) is a negative reconstruction error, showing the connection to traditional autoencoders, while the KL-divergence acts as a regularizer on the approximate posterior \(q_{\phi}(\mathbf{z} \mid \mathbf{x})\). As shown in Higgins et al. (2017), we can add a \(\beta\)-coefficient to this KL-term, which allows varying the strength of the constraint on the latent space.
Instead of maximizing the lower bound, we will minimize the corresponding loss function \[\begin{array}{r l} & {\ell (\mathbf{x},\mathbf{x}^{\prime},\tilde{\alpha},\lambda_{1},\tilde{a},\lambda_{2},\beta_{\mathrm{KL}}) = \mathrm{MSE}(\mathbf{x},\mathbf{x}^{\prime}) + \beta_{\mathrm{KL}}\cdot \mathrm{KL}\big(g_{\tilde{\alpha},\lambda_{1}}(\mathbf{y}\mid \mathbf{x})\| f_{\tilde{a},\lambda_{2}}(\mathbf{y})\big)}\\ & {\qquad = \mathrm{MSE}(\mathbf{x},\mathbf{x}^{\prime}) - \beta_{\mathrm{KL}}\cdot \mathbb{E}_{U\sim \mathrm{Uni}(0,1)}\left[\log \frac{f_{\tilde{a},\lambda_{2}}\big(\mathbf{y}(U,\tilde{\alpha},\lambda_{1})\big)}{g_{\tilde{\alpha},\lambda_{1}}\big(\mathbf{y}(U,\tilde{\alpha},\lambda_{1})\mid \mathbf{x}\big)}\right]} \end{array} \quad (7)\] with \(\mathrm{MSE}(\mathbf{x}, \mathbf{x}^{\prime})\) being the mean-squared error between \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\), and the \(\beta\)-coefficient \(\beta_{\mathrm{KL}}\). Datasets with low SNR and large background fluctuations will need a stronger regularization on the activations and hence a larger \(\beta_{\mathrm{KL}}\) than higher quality recordings. Hence, adding the \(\beta\)-coefficient to the loss function enables our method to adapt better to the properties of specific datasets and recording methods. ### 3.4 LEMONADE NETWORK ARCHITECTURE The encoder network starts with a few convolutional layers with small 2D filters operating on each frame of the video separately, inspired by the architecture used in Apthorpe et al. (2016) to extract cells from calcium imaging data. Afterwards the feature maps of the whole video are passed through a final convolutional layer with 3D filters. These filters have the size of the feature maps obtained from the single images times a temporal component of length \(F\), which is the expected maximum temporal length of the motifs. We apply padding in the temporal domain to correctly capture motifs that are cut off at the beginning or end of the analyzed image sequence. The output of the encoder consists of the parameters \(\tilde{\alpha}\) which we need for the reparametrization in eq. (5). From the reparametrization we obtain the activations \(\mathbf{z}\) which are then passed to the decoder. The decoder consists of a single deconvolution layer with \(M\) filters of the original frame size times the expected motif length \(F\), enforcing the reconstructed data \(\mathbf{x}^{\prime}\) to be an additive mixture of the decoder filters. Hence, after minimizing the loss the filters of the decoder contain the detected motifs. Performing these steps on the whole video would be computationally very costly. For this reason, we perform each training epoch only on a small subset of the video. The subset consists of a few hundred consecutive frames, where the starting point of this short sequence is randomly chosen in each epoch. We found that doing so did not negatively affect the performance of the algorithm. By using this strategy we are able to analyse videos of arbitrary length in a computationally efficient way. More implementation details can be found in appendix B. <--- Page Split ---> ## 4 EXPERIMENTS AND RESULTS ### 4.1 SYNTHETIC DATA The existence of neuronal assemblies is still fiercely debated, and their detection would only be possible with automated, specifically tailored tools like the one proposed in this paper. For this reason, no ground truth exists for the identification of spatio-temporal motifs in real neurophysiological spike data.
In order to nevertheless report quantitative accuracies, we test the algorithm on synthetically generated datasets for which ground truth is available. For the data generation we used a procedure analogous to the one used in Diego et al. (2013) and Diego & Hamprecht (2013) for testing automated pipelines for the analysis and identification of neuronal activity from calcium imaging data. In contrast to them, we include neuronal assemblies with temporal firing structure. The cells within an assembly can have multiple spikes in a randomly chosen but fixed motif of temporal length up to 30 frames. We used 3 different assemblies in each sequence. Additionally, spurious spikes of single neurons were added to simulate noise. The ratio of spurious spikes to all spikes in the dataset was varied from \(0\%\) up to \(90\%\) in ten steps. The details of the synthetic data generation can be found in appendix C.1. To the best of our knowledge, the proposed method is the first ever to detect video motifs with temporal structure directly in calcium imaging data. As a consequence, there are no existing baselines to compare to. Hence we here use the SCC method presented in Peter et al. (2017) as a baseline. The SCC algorithm is able to identify motifs with temporal structure in spike trains or calcium transients. To apply it to our datasets, we first have to extract the calcium transients of the individual cells. For the synthetically generated data we know the location of each cell by construction, so this is possible with arbitrary accuracy. The output of the SCC algorithm is a matrix that contains for each cell the firing behavior over time within the motif. For a fair comparison we brought the motifs found with LeMoNADe, which are short video sequences, into the same format. The performance of the algorithms is measured by computing the cosine similarity (Singhal, 2001) between ground truth motifs and detected motifs. The cosine similarity is one for identical and zero for orthogonal patterns. Not all ground truth motifs extend across all 30 frames; some have almost vanishing luminosity in the last frames. Hence, the discovered motifs can be shifted by a few frames and still capture all relevant parts of the motifs. For this reason we computed the similarity for the motifs with all possible temporal shifts and took the maximum. More details on the computation of the similarity measure can be found in appendix C.2. We ran both methods on 200 synthetically generated datasets with the parameters shown in table 3 in the appendix. We here show the results with the correct number of motifs ( \(M = 3\) ) used in both methods. In appendix E.1 we show that if the number of motifs is overestimated (here \(M > 3\) ), LeMoNADe still identifies the correct motifs, but they are repeated multiple times in the surplus filters. Hence this does not reduce the performance of the algorithm. The temporal extent of the motifs was set to \(F = 31\) to give the algorithms the chance to also capture the longer patterns. The cosine similarity of the found motifs to the set of ground truth motifs, averaged over all found motifs and all experiments for each of the ten noise levels, is shown in figure 4. The results in figure 4 show that LeMoNADe performs as well as SCC in detecting motifs and also shows a similar stability in the presence of noise as SCC. This is surprising since LeMoNADe does not need the previous extraction of individual cells and hence has to solve a much harder problem than SCC.
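For concreteness, the following NumPy sketch implements this evaluation metric, the cosine similarity maximized over temporal shifts (formalized in appendix C.2); the explicit zero-padded shift and the motif array layout (frames x cells) are illustrative choices, not the paper's actual evaluation code.

```python
# Minimal sketch of the shifted cosine similarity between a found motif and
# one ground truth motif, both given as arrays of shape (F, N); zero padding
# implements the shift operator.
import numpy as np

def shifted_cosine_similarity(found, truth, max_shift):
    """Max over shifts s in [-max_shift, max_shift] of the cosine similarity."""
    F, N = truth.shape
    best = 0.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.zeros_like(truth)
        if s >= 0:
            shifted[s:] = truth[:F - s]        # move truth s frames forward
        else:
            shifted[:F + s] = truth[-s:]       # move truth |s| frames backward
        num = np.dot(found.ravel(), shifted.ravel())
        den = np.linalg.norm(found) * np.linalg.norm(shifted) + 1e-12
        best = max(best, num / den)
    return best
```

In the full metric this value is additionally maximized over all ground truth motifs, as in eq. (18) of appendix C.2.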
In order to verify that the results achieved by LeMoNADe and SCC range significantly above chance, we performed a bootstrap (BS) test. For this, multiple datasets were created with similar spike distributions as before, but with no reoccurring motif-like firing patterns. We compiled a distribution of similarities between patterns suggested by the proposed method and randomly sampled segments of the same length and general statistics from that same BS dataset. The full BS distributions are shown in appendix C.3. The 95th percentiles of the BS distributions for each noise level are also shown in figure 4. Figure 5 shows an exemplary result from one of the analysed synthetic datasets with \(10\%\) noise and a maximum temporal extent of the ground truth motifs of 28 frames. All three motifs were correctly identified (see figure 5a) with a small temporal shift. This shift does not reduce the performance as it is compensated by a corresponding shift in the activations of the motifs (see figure 5b). In order to show that the temporal structure of the found motifs matches the ground truth, in figure 5a for motif 1 and 2 we corrected the shift of one and two frames, respectively. We also show the results after <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 4: Similarities between found motifs and ground truth for different noise levels. We show for LeMoNADe (lime green) and SCC (blue) the average similarities between found motifs and ground truth for ten different noise levels ranging from \(0\%\) up to \(90\%\) spurious spikes. Error bars indicate the standard deviation. For each noise level 20 different datasets were analyzed. For both LeMoNADe and SCC, the similarities between found and ground truth motifs are significantly above the 95th percentile of the corresponding bootstrap distribution (red) up to a noise level of \(70\%\) spurious spikes. Although LeMoNADe does not need the previous extraction of individual cells, it performs as well as SCC in detecting motifs and also shows a similar stability in the presence of noise. </center> ![](images/7_1.jpg) <center>Figure 5: Exemplary result from one synthetic dataset. (a) shows a single plot containing all three motifs as additive RGB values for the ground truth motifs (top) and discovered motifs (bottom). The found motifs were ordered manually and temporally aligned to match the ground truth, for better readability. The complete motif sequences can be found in figure 10 in appendix E.1. In (b) the activations \(\mathbf{z}\) of the found motifs are shown in red for the complete video (left) and a small excerpt of the sequence (right). The ground truth activations are marked with blue crosses. (c) shows the firing of the extracted cells in the ground truth motifs (top), the motifs identified by SCC (middle) and the motifs found with LeMoNADe (bottom). </center> extracting the individual cells from the motifs and the results from SCC in figure 5c. One can see that the results are almost identical, again except for small temporal shifts. ### 4.2 REAL DATA We applied the proposed method on two datasets obtained from organotypic hippocampal slice cultures. The cultures were prepared from 7-9-day-old Wistar rats as described in Kann et al. (2003) and Schneider et al. (2015). The fluorescent \(\mathrm{Ca^{2 + }}\) sensor, GCaMP6f (Chen et al., 2013), was delivered to the neurons by an adeno-associated virus (AAV).
Neurons in stratum pyramidale of CA3 were imaged for 6.5 minutes (dataset 1) and 5 minutes (dataset 2) in the presence of the cholinergic receptor agonist carbachol. For more details on the generation of these datasets see appendix D.1. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 6: Result from hippocampal slice culture datasets 1 (top) and 2 (bottom). The colors in (a) are inverted compared to the standard visualization of calcium imaging data for better visibility. In (c) activations are thresholded to \(70\%\) of the maximum activation for each motif. In (d) the manually selected frames of motif 0 highlight the temporal structure of the motif. </center> <--- Page Split ---> The proposed method was run on these datasets with the parameter settings shown in table 3 in appendix E, where we also provide additional comments on the parameter settings. The analysis of the datasets took less than two hours on a 1080 Ti GPU. Before running the analysis we computed \(\Delta F / F\) for the datasets. We looked for up to three motifs with a maximum extent of \(F = 21\) frames. The results are shown in figure 6. For both datasets, one motif in figure 6a consists of multiple cells, shows repeated activation over the recording period (see figures 6b and 6c), and contains temporal structure (see figure 6d). The other two "motifs" can easily be identified as artefacts and background fluctuations. Like SCC and many other motif detection methods, LeMoNADe suffers from the fact that such artefacts, especially single events with extremely high neuronal activation, potentially explain a large part of the data and hence can be falsely detected as motifs. Nevertheless, these events can be easily identified by simply looking at the motif videos or thresholding the activations as done in figure 6c. Although the found motifs also include neuropil activation, this does not imply that it was indeed used by the VAE as a defining feature of the motifs, just that it was also present in the images. Dendritic/axonal structures are part of the activated neurons and therefore also visible in the motif videos. If necessary, these structures can be removed by post-processing steps. As LeMoNADe reduces the problem to the short motif videos instead of the whole calcium imaging video, the neuropil subtraction becomes much more feasible. ## 5 CONCLUSION We have presented a novel approach for the detection of neuronal assemblies that directly operates on the calcium imaging data, making the cumbersome extraction of individual cells and discrete spike times from the raw data dispensable. The motifs are extracted as short, repeating image sequences. This presents them in a very intuitive way and additionally returns information about the spatial distribution of the cells within an assembly. The proposed method's performance in identifying motifs is equivalent to that of a state-of-the-art method that requires the previous extraction of individual cells. Moreover, we were able to identify repeating firing patterns in two datasets from hippocampal slice cultures, proving that the method is capable of handling real calcium imaging conditions. For future work, a post-processing step as used in Peter et al. (2017) or a group sparsity regularization similar to the ones used in Bascol et al. (2016) or Mackevicius et al. (2018) could be added to determine a plausible number of motifs automatically.
Moreover, additional latent dimensions could be introduced to capture artefacts and background fluctuations and hence automatically separate them from the actual motifs. The method is expected to, in principle, also work on other functional imaging modalities. We will investigate the possibility of detecting motifs using LeMoNADe on recordings from human fMRI or voltage-sensitive dyes in the future. ## ACKNOWLEDGMENTS EK thanks Ferran Diego for sharing his knowledge on generating synthetic data and for his scientific advice. DD acknowledges partial financial support by DFG Du 354/8-1. EK, HS, JS, SE, OK, DD and FAH gratefully acknowledge partial financial support by DFG SFB 1134. ## APPENDIX ## A VARIATIONAL AUTOENCODER Variational autoencoders (VAEs) are generative latent variable models which were first described in Kingma & Welling (2014). The data \(\mathbf{x} = \left\{\mathbf{x}^{(i)}\right\}_{i = 1}^{N}\), consisting of \(N\) samples of some random variable \(\mathbf{x}\), is generated by first drawing a latent variable \(\mathbf{z}^{(i)}\) from a prior distribution \(p(\mathbf{z})\) and then sampling from the conditional distribution \(p_{\theta^{*}}(\mathbf{x}\mid \mathbf{z})\) with parameters \(\theta^{*}\). The distribution \(p_{\theta^{*}}(\mathbf{x}\mid \mathbf{z})\) belongs to the parametric family \(p_{\theta}(\mathbf{x}\mid \mathbf{z})\) with differentiable PDFs w.r.t. \(\theta\) and \(\mathbf{z}\). Both the true parameters \(\theta^{*}\) as well as the latent variables \(\mathbf{z}^{(i)}\) are unknown. We are interested in an approximate posterior inference of the latent variables \(\mathbf{z}\) given some data \(\mathbf{x}\). The true posterior \(p_{\theta}(\mathbf{z}\mid \mathbf{x})\), however, is usually intractable. But it can be approximated by introducing the recognition model (or approximate posterior) \(q_{\phi}(\mathbf{z}\mid \mathbf{x})\). We want to learn both the recognition model parameters \(\phi\) as well as the generative model parameters \(\theta\). The recognition model is usually referred to as the probabilistic encoder and \(p_{\theta}(\mathbf{x}\mid \mathbf{z})\) is called the probabilistic decoder. In order to learn the variational parameters \(\phi\) we want to minimise the KL-divergence between approximate and true posterior \(\mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})\| p_{\theta}(\mathbf{z}|\mathbf{x}))\). Therefore we use the fact that the marginal likelihood \(p_{\theta}(\mathbf{x})\) can be written as \[\log p_{\theta}(\mathbf{x}) = \mathcal{L}(p,q;\mathbf{x}) + \mathrm{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x})\| p_{\theta}(\mathbf{z}|\mathbf{x})\big) \quad (8)\] As the KL-divergence is non-negative, we can minimize \(\mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})\| p_{\theta}(\mathbf{z}|\mathbf{x}))\) by maximizing the (variational) lower bound \(\mathcal{L}(p,q;\mathbf{x})\) with \[\mathcal{L}(p,q;\mathbf{x}) = \mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] - \mathrm{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x})\| p(\mathbf{z})\big) \quad (9)\] In order to optimise the lower bound \(\mathcal{L}(p,q;\mathbf{x})\) w.r.t.
both the variational parameters \(\phi\) and the generative parameters \(\theta\), we need to compute the gradients \[\nabla_{\phi ,\theta}\mathcal{L}(p,q;\mathbf{x}) = \nabla_{\phi ,\theta}\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] - \nabla_{\phi ,\theta}\mathrm{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x})\| p(\mathbf{z})\big) \quad (10)\] For the first part of the lower bound the gradient w.r.t. \(\theta\) can be easily computed using Monte Carlo sampling \[\nabla_{\theta}\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] = \mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\nabla_{\theta}\log p_{\theta}(\mathbf{x}|\mathbf{z})\right]\approx \frac{1}{S}\sum_{s = 1}^{S}\nabla_{\theta}\log p_{\theta}(\mathbf{x}|\mathbf{z}^{s}) \quad (11)\] with \(\mathbf{z}^{s}\sim q_{\phi}(\mathbf{z}|\mathbf{x})\). The gradient w.r.t. \(\phi\), however, does not take the form of an expectation in \(\mathbf{z}\) and can therefore not be sampled that easily: \[\nabla_{\phi}\mathbb{E}_{\mathbf{z}\sim q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] = \nabla_{\phi}\int q_{\phi}(\mathbf{z}|\mathbf{x})\log p_{\theta}(\mathbf{x}|\mathbf{z})d\mathbf{z} = \int \log p_{\theta}(\mathbf{x}|\mathbf{z})\nabla_{\phi}q_{\phi}(\mathbf{z}|\mathbf{x})d\mathbf{z} \quad (12)\] However, in most cases we can use the reparameterization trick to overcome this problem: the random variable \(\tilde{\mathbf{z}}\sim q_{\phi}(\mathbf{z}\mid \mathbf{x})\) can be reparameterised using a differentiable transformation \(h_{\phi}(\epsilon ,\mathbf{x})\) of a noise variable \(\epsilon\) such that \[\tilde{\mathbf{z}} = h_{\phi}(\epsilon ,\mathbf{x})\quad \mathrm{with}\quad \epsilon \sim p(\epsilon) \quad (13)\] We can now compute the gradient w.r.t. \(\phi\) again using Monte Carlo sampling \[\begin{array}{r l} & {\nabla_{\phi}\mathbb{E}_{\epsilon \sim p(\epsilon)}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z} = h_{\phi}(\epsilon ,\mathbf{x}))\right] = \mathbb{E}_{\epsilon \sim p(\epsilon)}\left[\nabla_{\phi}\log p_{\theta}(\mathbf{x}|\mathbf{z} = h_{\phi}(\epsilon ,\mathbf{x}))\right]}\\ & {\qquad \approx \frac{1}{S}\sum_{s = 1}^{S}\nabla_{\phi}\log p_{\theta}(\mathbf{x}|\mathbf{z}^{s} = h_{\phi}(\epsilon^{s},\mathbf{x}))} \end{array} \quad (14)\] with \(\epsilon^{s}\sim p(\epsilon)\). Hence, the reparameterized lower bound \(\tilde{\mathcal{L}}(p,q;\mathbf{x})\approx \mathcal{L}(p,q;\mathbf{x})\) can be written as \[\tilde{\mathcal{L}}(p,q;\mathbf{x}) = \frac{1}{S}\sum_{s = 1}^{S}\log p_{\theta}(\mathbf{x}|\mathbf{z}^{s}) - \mathrm{KL}(q_{\phi}(\mathbf{z}|\mathbf{x})\| p(\mathbf{z})) \quad (15)\] with \(\mathbf{z}^{s} = h_{\phi}(\epsilon^{s},\mathbf{x}),\epsilon^{s} \sim p(\epsilon)\). The first term on the RHS of eq. (15) is a negative reconstruction error, showing the connection to traditional autoencoders, while the KL-divergence acts as a regularizer on the approximate posterior \(q_{\phi}(\mathbf{z}\mid \mathbf{x})\). <--- Page Split ---> ## B LEMONADE NETWORK ARCHITECTURE AND IMPLEMENTATION DETAILS ## B.1 ENCODER The encoder network starts with a few convolutional layers with small 2D filters operating on each frame of the video separately, inspired by the architecture used in Apthorpe et al. (2016) to extract cells from calcium imaging data. The details of this network are shown in table 1.
Afterwards the feature maps of the whole video are passed through a final convolutional layer with 3D filters. These filters have the size of the feature maps obtained from the single images times a temporal component of length \(F\), which is the expected maximum temporal extent of a motif. We use \(2\cdot M\) filters and apply padding in the temporal domain to avoid edge effects. In this way, motifs that are cut off at the beginning or the end of the sequence can also be captured properly. The output of the encoder consists of \(2\cdot M\) feature maps of size \((T + F - 1)\times 1\times 1\). ## B.2 REPARAMETERIZATION Instead of reparameterizing the Bernoulli distributions, we will reparameterize their BinConcrete relaxations. The BinConcrete relaxation of a Bernoulli distribution with parameter \(\alpha\) takes as input parameter \(\tilde{\alpha} = \alpha /(1 - \alpha)\). Maddison et al. (2016) showed that instead of using the normalized probabilities \(\alpha\), we can also perform the reparametrization with unnormalized parameters \(\alpha^{1}\) and \(\alpha^{2}\), where \(\alpha^{1}\) is the probability of sampling a one, \(\alpha^{2}\) is the probability of sampling a zero, and \(\tilde{\alpha} = \alpha^{1} / \alpha^{2}\). The first \(M\) feature maps output by the encoder are assigned to contain the unnormalized probabilities \(\alpha_{m,t}^{1}\) for the activation of motif \(m\) in frame \(t\) to be one. The second \(M\) feature maps contain the unnormalized probabilities \(\alpha_{m,t}^{2}\) for the activation of motif \(m\) in frame \(t\) to be zero. The parameter \(\tilde{\alpha}\) that is needed for the reparameterized BinConcrete distribution is obtained by dividing the two vectors elementwise: \(\tilde{\alpha}_{t}^{m} = \alpha_{m,t}^{1} / \alpha_{m,t}^{2}\). We use the reparameterization trick to sample from BinConcrete \((\tilde{\alpha}_{t}^{m})\) as follows: First we sample \(\left\{\{U_{t}^{m}\}_{t = 1}^{T + F - 1}\right\}_{m = 1}^{M}\) from a uniform distribution \(\mathrm{Uni}(0,1)\). Next, we compute \(\mathbf{y}\) with \[y_{t}^{m} = \left(\frac{\tilde{\alpha}_{t}^{m}\cdot U_{t}^{m}}{1 - U_{t}^{m}}\right)^{1 / \lambda_{1}}. \quad (16)\] Finally, we obtain \(\mathbf{z}\) according to \[z_{t}^{m} = \frac{y_{t}^{m}}{1 + y_{t}^{m}}\cdot \alpha_{m,t}^{1} \quad (17)\] for all \(m = 1,\ldots ,M\) and \(t = 1,\ldots ,T + F - 1\). The multiplication by \(\alpha_{m,t}^{1}\) in eq. (17) is not part of the original reparametrization trick (Maddison et al., 2016; Jang et al., 2017). But we found that the results of the algorithm improved dramatically when we scaled the activations with the \(\alpha^{1}\)-values that were originally predicted by the encoder network. ## B.3 DECODER The input to the decoder is now the activations \(\mathbf{z}\). The decoder consists of a single deconvolution layer with \(M\) filters of the original frame size times the expected motif length \(F\). These deconvolution filters contain the motifs we are looking for. The details of the used networks as well as the sizes of the inputs and outputs of the different steps are shown in table 1. Algorithm 1 summarizes the reparametrization and updates. ## C EXPERIMENTS AND RESULTS ON SYNTHETIC DATA ## C.1 SYNTHETIC DATA GENERATION We created 200 artificial sequences of length \(60\mathrm{s}\) with a frame rate of \(30\mathrm{fps}\) and \(128\times 128\) pixels per image. The number of cells was varied, and they were located randomly in the image plane with an overlap of up to \(30\%\).
The cell shapes were selected randomly from 36 shapes extracted from <--- Page Split ---> Table 1: LeMoNADe network architecture details <table><tr><td>Operation</td><td>Kernel</td><td>Feature maps</td><td>Padding</td><td>Stride</td><td>Nonlinearity</td></tr><tr><td colspan="6">Input: T images, P × P'</td></tr><tr><td>2D Convolution</td><td>3 × 3</td><td>24</td><td>0 × 0</td><td>1</td><td>ELU</td></tr><tr><td>2D Convolution</td><td>3 × 3</td><td>48</td><td>0 × 0</td><td>1</td><td>ELU</td></tr><tr><td>Max-Pooling</td><td>2 × 2</td><td>-</td><td>0 × 0</td><td>2</td><td>-</td></tr><tr><td>2D Convolution</td><td>3 × 3</td><td>72</td><td>0 × 0</td><td>1</td><td>ELU</td></tr><tr><td>2D Convolution</td><td>3 × 3</td><td>96</td><td>0 × 0</td><td>1</td><td>ELU</td></tr><tr><td>Max-Pooling</td><td>2 × 2</td><td>-</td><td>0 × 0</td><td>2</td><td>-</td></tr><tr><td>2D Convolution</td><td>3 × 3</td><td>120</td><td>0 × 0</td><td>1</td><td>ELU</td></tr><tr><td>2D Convolution</td><td>1 × 1</td><td>48</td><td>0 × 0</td><td>1</td><td>ELU</td></tr><tr><td colspan="6">Output: T images, P̃ × P̃', with P̃ = ((P − 4)/2 − 4)/2 − 2 and P̃' = ((P' − 4)/2 − 4)/2 − 2</td></tr><tr><td colspan="6">Input: 1 video, T × P̃ × P̃'</td></tr><tr><td>3D Convolution</td><td>F × P̃ × P̃'</td><td>2M</td><td>(F − 1) × 0 × 0</td><td>1</td><td>SoftPlus</td></tr><tr><td colspan="6">Output: 2M feature maps, (T + F − 1) × 1 × 1</td></tr><tr><td colspan="6">Input: 2M feature maps, (T + F − 1) × 1 × 1</td></tr><tr><td>Reparametrization</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan="6">Output: M activations, (T + F − 1) × 1 × 1</td></tr><tr><td colspan="6">Input: M activations, (T + F − 1) × 1 × 1</td></tr><tr><td>3D TransposedConvolution</td><td>F × P × P'</td><td>M</td><td>(F − 1) × 0 × 0</td><td>1</td><td>ReLU</td></tr><tr><td colspan="6">Output: 1 video, T × P × P'</td></tr></table> Algorithm 1: The LeMoNADe algorithm Input: raw video \(\mathbf{x}\), normalized to zero mean and unit variance, architectures \(f_{\theta}, \alpha_{\phi}\), hyperparameters \(\lambda_{1}, \lambda_{2}, \tilde{a}, \beta_{\mathrm{KL}}\) Result: trained \(f_{\theta}, \alpha_{\phi}\) \(\theta , \phi \leftarrow\) Initialize network parameters repeat // Sample subset of video \(\mathbf{x}_{\mathrm{sub}}\leftarrow\) Randomly chosen sequence of consecutive frames from \(\mathbf{x}\) // Encoding step Encode \(\mathbf{x}_{\mathrm{sub}}\) to get \(\tilde{\alpha}\) as described in sections B.1 and B.2 // Latent Step Sample noise \(U\sim \mathrm{Uni}(0,1)\) Compute y following eq. (16) Compute z following eq. (17) // Decoding Step \(\mathbf{x}_{\mathrm{sub}}^{\prime}\leftarrow\) decode via \(f_{\theta}(\mathbf{z})\) // Update Parameters Compute gradients of loss \(\phi ,\theta \leftarrow\) update via \(\nabla_{\phi ,\theta}\ell (\mathbf{x}_{\mathrm{sub}},\mathbf{x}_{\mathrm{sub}}^{\prime},\tilde{\alpha},\lambda_{1},\tilde{a},\lambda_{2},\beta_{\mathrm{KL}})\) (see eq. (7) in the main paper) until convergence of \(\theta ,\phi\) real data. The transients were modelled as two-sided exponential decay with scales of \(50\mathrm{ms}\) and \(400\mathrm{ms}\), respectively.
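As an aside on reproducing such transients: one common reading of a two-sided exponential transient is a difference of exponentials with a fast rise (50 ms) and a slow decay (400 ms). The NumPy sketch below generates a toy calcium trace along these lines; the kernel support, unit-peak normalization, and spike times are assumptions for illustration, not the exact procedure used here.

```python
# Sketch of a two-sided exponential calcium transient kernel at 30 fps,
# convolved with a toy spike train to obtain a single cell's trace.
import numpy as np

fps = 30.0
t = np.arange(0, 2.0, 1.0 / fps)            # 2 s kernel support (assumption)
tau_rise, tau_decay = 0.050, 0.400          # scales in seconds
kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
kernel /= kernel.max()                      # unit peak amplitude (assumption)

spikes = np.zeros(1800)                     # 60 s at 30 fps
spikes[[100, 400, 401, 950]] = 1.0          # toy spike times
trace = np.convolve(spikes, kernel)[:spikes.size]   # calcium trace of one cell
```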
In contrast to Diego & Hamprecht (2013), we included neuronal assemblies with temporal firing structure. That means cells within an assembly can perform multiple spikes in a randomly chosen but fixed motif of temporal length up to 30 frames. We used 3 different assemblies in each sequence. The assembly activity itself was modelled as a Poisson process (Lopes- dos Santos et al., 2013) with a mean of 0.15 spikes/second and a refractory period of at least the length of the motif itself. By construction the cell locations as well as the firing motifs are known for these datasets. In order to simulate the conditions in real calcium imaging videos as good as possible, we added Gaussian background noise with a relative amplitude (max intensity - mean intensity) \(\sigma_{\mathrm{noise}}\) between 10 and 20. Additionally, spurious spikes not belonging to any motif were added. The amount <--- Page Split ---> Table 2: Average cosine similarity between ground truth and discovered motifs. The average similarity together with the standard deviation were computed over 20 different datasets for each noise level, both for LeMoNADe and SCC. A bootstrap distribution of similarities was computed (see section C.3). BS-95 gives the \(5\%\) significance threshold of this distribution. <table><tr><td rowspan="2">NOISE LEVEL</td><td colspan="2">on video data</td><td colspan="2">after cell extraction</td></tr><tr><td>LeMoNADe</td><td>BS-95</td><td>SCC</td><td></td></tr><tr><td>0%</td><td>0.838 ± 0.066</td><td>0.400</td><td>0.837 ± 0.088</td><td></td></tr><tr><td>10%</td><td>0.826 ± 0.061</td><td>0.387</td><td>0.826 ± 0.116</td><td></td></tr><tr><td>20%</td><td>0.804 ± 0.080</td><td>0.402</td><td>0.818 ± 0.120</td><td></td></tr><tr><td>30%</td><td>0.770 ± 0.130</td><td>0.413</td><td>0.830 ± 0.125</td><td></td></tr><tr><td>40%</td><td>0.775 ± 0.107</td><td>0.426</td><td>0.822 ± 0.093</td><td></td></tr><tr><td>50%</td><td>0.756 ± 0.079</td><td>0.477</td><td>0.791 ± 0.126</td><td></td></tr><tr><td>60%</td><td>0.730 ± 0.098</td><td>0.492</td><td>0.731 ± 0.169</td><td></td></tr><tr><td>70%</td><td>0.639 ± 0.142</td><td>0.516</td><td>0.636 ± 0.163</td><td></td></tr><tr><td>80%</td><td>0.462 ± 0.103</td><td>0.553</td><td>0.454 ± 0.135</td><td></td></tr><tr><td>90%</td><td>0.357 ± 0.034</td><td>0.656</td><td>0.351 ± 0.067</td><td></td></tr></table> of spurious spikes was varied from \(0\%\) up to \(90\%\) of all spikes in the dataset. For each of the 10 noise levels 20 datasets were generated. ## C.2 SIMILARITY MEASURE The performance of the algorithms is measured by computing the cosine similarity (Singhal, 2001) between ground truth motifs and found motifs. The found motifs are in an arbitrary order, not necessarily corresponding to the order of the ground truth motifs. Additionally, the found motifs can be shifted in time compared to the ground truth. To account for this fact, we compute the similarity between the found motifs and each of the ground truth motifs with all possible temporal shifts and take the maximum. 
Hence, the similarity between the \(m\)-th found motif and the set of ground truth motifs \(\mathcal{G}\) is defined by \[Sim(\mathcal{M}^{m},\mathcal{G}) = \max \left\{\frac{\langle \mathrm{vec}(\mathcal{M}^{m}),\, \mathrm{vec}(\overset{s\to}{G})\rangle}{\| \mathrm{vec}(\mathcal{M}^{m})\|_{2}\cdot \| \mathrm{vec}(\overset{s\to}{G})\|_{2}}\;\middle|\; G\in \mathcal{G},\, s\in \{-F,\ldots ,F\} \right\} \quad (18)\] where \(\mathcal{M}^{m}\) is the \(m\)-th found motif, \(\langle \cdot ,\cdot \rangle\) is the dot product and \(\mathrm{vec}(\cdot)\) vectorizes the motifs with dimensions \(F\times N\) into a vector of length \(F\cdot N\), where \(N\) is the number of cells. The shift operator \(\overset{s\to}{(\cdot)}\) moves a motif \(s\) frames forward in time while keeping the same size and filling missing values appropriately with zeros (Smaragdis, 2004). The cosine similarity of the found motifs to the set of ground truth motifs was averaged over all found motifs and all experiments for each noise level. The average similarities achieved with LeMoNADe and SCC as well as the \(5\%\) significance threshold of the BS distribution for each noise level can be found in table 2. ## C.3 BOOTSTRAP-BASED SIGNIFICANCE TEST Statistical methods for testing for cell assemblies (or spatio-temporal patterns more generally) have been advanced tremendously in recent years, addressing many of the issues that have plagued older approaches (Grün, 2009; Staude et al., 2010a;b; Russo & Durstewitz, 2017). Simple shuffle bootstraps are not necessarily the best methods if they destroy too much of the auto-correlative structure, and they can severely underestimate the distributional tails (Davison et al., 1997). Therefore we use sophisticated parametric, model-based bootstraps which retain the full statistical structure of the original data, except for the crucial feature of repeating motifs. In order to provide a 'null hypothesis (H0)' reference for the motif similarities returned by LeMoNADe (or other methods), we used the following bootstrap (BS) based test procedure: We generated 20 datasets analogous to those described in section C.1, i.e. with the same spiking statistics and temporal
We also show the distribution of similarities between motifs found with LeMoNADe on the datasets which contained motifs (bottom). The \(95\%\) - tile (corresponding to a \(5\%\) alpha level) of the BS distribution is displayed as vertical red line. Up to a noise level of \(70\%\) the average of the similarities found on the datasets that contained motifs is much higher than the \(95\%\) - tile of the BS distribution. ## D EXPERIMENTS AND RESULTS ON REAL DATA ## D.1 DATA GENERATION Organotypic hippocampal slice cultures were prepared from 7- 9- day- old Wistar rats (Charles River Laboratories, Sulzfeld, Germany) as described by Kann et al. (2003) and Schneider et al. (2015). Animals were taken care of and handled in accordance with the European directive 2010/63/EU and with consent of the animal welfare officers at Heidelberg University (license, T96/15). Slices were infected with adeno- associated virus (AAV) obtained from Penn Vector Core (PA, USA) encoding GCaMP6f under the control of the CamKII promoter AAV5.CamKII.GCaMPf.WPRE.SV40, Lot # V5392MI- S). AAV transduction was achieved, under sterile conditions, by applying \(0.5\mu l\) of the viral particles solution (qTiter: 1.55e13 GC/ml) on top of the slices. Slices were maintained on Biopore membranes (Millicell standing inserts; Merck Millipore, Schwalbach, Germany) between culture medium. The medium consisted of \(50\%\) minimal essential medium, \(25\%\) Hank's balanced salt solution (Sigma- Aldrich, Taufkirchen, Germany), \(25\%\) horse serum (Life Technologies, Darmstadt, Germany), and \(2mM\) L- glutamine (Life Technologie) at pH 7.3, stored in an incubator (Heracell; Thermoscientific, Dreieich, Germany) with humidified normal atmosphere ( \(5\%\) CO2, \(36.5^{\circ}C\) ). The culture medium (1 ml) was replaced three times per week. Artificial cerebrospinal fluid used for imaging was composed of \(129\mathrm{mMNaCl}\) , \(3\mathrm{mM}\) KCl, \(1.25\mathrm{mM}\) NaH2PO4, \(1.8\mathrm{mM}\) MgSO4, \(1.6\mathrm{mM}\) CaCl2, \(21\mathrm{mM}\) NaHCO3, and \(10\mathrm{mM}\) glucose (Sigma- Aldrich, Taufkirchen, Germany). The pH of the recording solution was 7.3 when it was saturated with the gas mixture ( \(95\%\) O2, \(5\%\) CO2). Recording temperature was \(32\pm 1^{\circ}C\) . Constant bath wash of \(20\mu M\) (dataset 1) and \(10\mu M\) (dataset 2) carbachol (Sigma- Aldrich) was performed to enhance neuronal activity and increase firing probability during imaging (Müller et al., 1988). <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 8: Color-coded difference between discovered motifs and intensity modulated synchronous firing. Red color indicates negative differences, blue positive differences and white zero difference. The fact that for both datasets in motif 0 some cells are displayed in red over multiple frames shows that these motifs contain temporal structure beyond mere spiking synchrony. </center> Imaging of CA3 region of the hippocampus was performed on day 29 with 20x magnification (dataset 1) and on day 30 with 10x magnification (dataset 2) in vitro (23 days post viral infection) from slices maintained in submerged chamber of Olympus BX51WI microscope. GCaMP6f was excited at \(485 \pm 10nm\) . Fluorescence images (emission at \(521 \pm 10nm\) ) were recorded at \(6.4Hz\) (dataset 1) and \(4Hz\) (dataset 2) using a CCD camera (ORCA- ER; Hamamatsu Photonics, Hamamatsu City, Japan). Before running the analysis we computed \(\Delta F / F\) for the datasets. 
In order to perform the computations more efficiently, we cropped the outer parts of the images containing no interesting neuronal activity and downsampled dataset 2 by a factor of 0.4. ## D.2 TEMPORAL STRUCTURE PLOTS In order to show that the motifs 0 found in the two real datasets contain temporal structure, we compare them to what the synchronous activity of the participating cells with modulated amplitude would look like. The synchronous firing pattern was constructed as follows: First, for the motif \(\mathcal{M}^{m}\) with \(m = 1,\ldots ,M\) the maximum projection \(\mathcal{P}^{m}\) at each pixel \(p = 1,\ldots ,P\cdot P^{\prime}\) over time was computed by \[\mathcal{P}_{p}^{m} = \max_{f}\mathcal{M}_{p}^{m}\quad \mathrm{with~}f = 1,\ldots ,F \quad (19)\] and normalized \[\hat{\mathcal{P}}_{p}^{m} = \frac{\mathcal{P}_{p}^{m}}{\max_{p}^{\prime}\mathcal{P}^{m}} \quad (20)\] Finally, the synchronous firing pattern \(S^{m}\) for motif \(m\) is gained by multiplying this normalized maximum projection at each time frame \(f\) with the maximum intensity of motif \(m\) at that frame: \[\mathcal{S}_{f}^{m} = \hat{\mathcal{P}}^{m}\cdot \max_{p}\mathcal{M}_{f}^{m}\quad \mathrm{for}~f = 1,\ldots ,F \quad (21)\] Figures 8 shows the difference between the found motif and the constructed synchronous firing patterns for the motifs found on the two real datasets. ## D.3 COMPARISON TO RESULTS OBTAINED WITH SCC In order to show that LeMoNADe performs similar to SCC not only on synthetically generated data but also on real data, we ran both methods on real dataset 2. A well trained neuroscientist manually extracted the individual cells and calcium traces from the original calcium imaging video. Figure 9a shows the result obtained with SCC on these traces. In the same manner calcium traces were extracted from the motif found with LeMoNADe (see figure 9b). Both results in figure 9 are highly similar. ## E PARAMETER SETTINGS LeMoNADe is not more difficult to apply than other motif detection methods for neuronal spike data. In our experiments, for most of the parameters the default settings worked well on different datasets and only three parameters need to be adjusted: the maximum number of motifs \(M\) , the maximum <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 9: Result obtained on the real dataset 2 after manual cell extraction with SCC (a) and the traces manually extracted from the motif found with LeMoNADe on the original video data (b). </center> Table 3: Parameters used for the shown experiments. \(M\) is the number of motifs, \(F\) the maximum temporal extent of a motif, \(\lambda_{1}\) and \(\lambda_{2}\) are the temperatures for the relaxed approximate posterior and prior distributions, \(\tilde{a}\) is the location of the BinConcrete prior, \(b\) is the number of consecutive frames analysed in each epoch, and \(\beta_{\mathrm{KL}}\) is the weight of the KL- regularization term in the loss function. \(\beta_{c}\) is the ensemble- penalty used in SCC. <table><tr><td></td><td>M</td><td>F</td><td>a</td><td>λ1</td><td>λ2</td><td>#epochs</td><td>learning rate</td><td>b</td><td>βKL</td></tr><tr><td>LeMoNADe on synth. datasets with noise level &amp;lt; 50%</td><td>3</td><td>31</td><td>0.05</td><td>0.6</td><td>0.5</td><td>5000</td><td>10−5</td><td>500</td><td>0.10</td></tr><tr><td>LeMoNADe on synth. 
datasets with noise level ≥ 50%</td><td>3</td><td>31</td><td>0.10</td><td>0.6</td><td>0.5</td><td>5000</td><td>10^-5</td><td>500</td><td>0.10</td></tr><tr><td>LeMoNADe on real dataset 1</td><td>3</td><td>21</td><td>0.05</td><td>0.4</td><td>0.3</td><td>5000</td><td>10^-5</td><td>150</td><td>0.01</td></tr><tr><td>LeMoNADe on real dataset 2</td><td>3</td><td>21</td><td>0.01</td><td>0.6</td><td>0.5</td><td>5000</td><td>10^-5</td><td>500</td><td>0.10</td></tr><tr><td></td><td>M</td><td>F</td><td>βc</td><td></td><td></td><td>#epochs</td><td>#nits</td><td></td><td></td></tr><tr><td>SCC on synth. datasets</td><td>3</td><td>31</td><td>10^-4</td><td></td><td></td><td>10</td><td>1</td><td></td><td></td></tr></table> motif length \(F\), and one of the sparsity parameters (e.g. \(\tilde{a}\) or \(\beta_{\mathrm{KL}}\)). For SCC the user also has to specify three similar parameters. In addition, SCC requires the previous extraction of a spike matrix which implies many additional parameters. Table 3 shows the parameter settings used for the experiments shown in the paper. ## E.1 OVER- AND UNDER-ESTIMATION OF THE MAXIMUM NUMBER OF MOTIFS \(M\) In order to show the effects of over- and underestimating the number of motifs, we first use our synthetic data with existing ground truth and 3 true motifs and run LeMoNADe with underestimated \((M = 1)\), correct \((M = 3)\) and overestimated \((M = 5)\) numbers of expected motifs. Figure 10 shows the complete ground truth (figure 10a) and found motifs for the exemplary synthetic dataset discussed in the paper. Besides the results for \(M = 3\) (figure 10c) we also show the found motifs for \(M = 1\) (figure 10b) and \(M = 5\) (figure 10d). If the number of motifs is underestimated \((M = 1)\) only one of the true motifs is captured. When the number of motifs is overestimated \((M = 5)\) the correct motifs are identified and the surplus filters are filled with (shifted) copies of the true motifs and background noise. We also investigated the influence of different numbers of motifs on the results on real datasets. Figure 11 shows the found motifs on dataset 1 for the different numbers of motifs \(M = 1,2,3,5\). When the number is limited (as for \(M = 1\)), the model is expected to learn those motifs first which best explain the data. The motif shown in figure 11a also appears if \(M\) is increased. This shows that this motif is highly present in the data. However, as long as only one filter is available the motif also contains a lot of background noise. The second filter in figure 11b contains a high luminosity artefact of the data. With its high luminosity and large spatial extent, it explains a lot of the dataset. However, it can easily be identified as not being a neuronal assembly. If the number of motifs is further increased to \(M = 3\) (see figure 11c), more background noise is captured in the additional filter and the motif becomes cleaner. When the number of motifs is further increased to \(M = 5\), no new motifs appear <--- Page Split ---> and the surplus two filters seem to be filled up with parts of the structures which were already present in 11c. Hence, when the correct number of motifs is unknown (as expected for real datasets) we recommend slightly overestimating the expected number of motifs. The result will capture the true motifs plus some copies of them. In future work, a post-processing step as in Peter et al. (2017) or a group sparsity regularization as in Bascol et al. (2016) and Mackevicius et al.
(2018) could be introduced to eliminate these additional copies automatically. Background noise could easily be identified as not being a motif by either looking at the motif videos or thresholding the found activations. In future extensions of the model we will study the effect of additional latent dimensions for background noise to automatically separate it from actual motifs. ## E.2 OVER- AND UNDER-ESTIMATION OF THE MAXIMUM MOTIF LENGTH \(F\) If the maximum motif length \(F\) is underestimated, the found motifs are expected to contain just the part of the motif that reduces the reconstruction error most. Hence in most cases the most interesting parts of the motifs will be captured, but details at either end of the motifs could be lost. If the motif length is overestimated, the motifs can be captured completely but might be shifted in time. This shift, however, will be compensated by the motif activations and hence has no negative effect on the results. In our experiments we achieved good results with a generously chosen motif length. For this reason we recommend overestimating the motif length. Figure 12 shows the found motifs on real dataset 1 with \(M = 3\) and for the different motif lengths \(F = 21\) and \(F = 31\). The results are highly similar. In both cases, the interesting pattern (motif 0 in figure 12a and motif 1 in figure 12b, respectively) is captured. ## E.3 SPARSITY PARAMETER The parameter \(\tilde{a}\) influences the sparsity of the found activations. Smaller values of \(\tilde{a}\) will penalize activations more strongly and hence often result in cleaner and more meaningful motifs. However, if \(\tilde{a}\) is too small it will suppress the activations completely. For this reason we recommend performing experiments with different values of \(\tilde{a}\) for each new dataset. Changing the value of \(\beta_{\mathrm{KL}}\) is another option to regulate the sparsity of the activations. However, in our experiments we found that the default value of \(\beta_{\mathrm{KL}} = 0.1\) worked well for many different datasets and varying \(\tilde{a}\) was effective enough. For the temperature parameters the default values \(\lambda_{1} = 0.6\) and \(\lambda_{2} = 0.5\) worked well in most cases and changing them is usually not necessary. In order to show the reaction of the method to the choice of \(\tilde{a}\) and \(\beta_{\mathrm{KL}}\) we performed multiple experiments on the real dataset 2 with different parameter settings. We fixed all parameters as shown in table 3 except for \(\tilde{a}\) (figures 13 and 14) and \(\beta_{\mathrm{KL}}\) (figures 15 and 16). When \(\tilde{a}\) is varied within one order of magnitude (see figure 13) the motifs look quite similar, except for temporal shifts of the motifs and shuffling of the order of the motifs. For smaller values of \(\tilde{a}\) surplus filters are filled with background noise (see figures 13a to 13d), whereas for slightly larger values of \(\tilde{a}\) the surplus filters are filled with copies of (parts of) the motif (see figures 13e to 13g). Note that the motif which was also highlighted in the paper (figure 6d) appears in all results from figure 13b to 13g at least once. Only if \(\tilde{a}\) is changed by more than one order of magnitude do the results become significantly different and the motif is no longer detected (see figure 14). This indicates that it is sufficient to vary only the order of magnitude of \(\tilde{a}\) in order to find a regime where motifs appear in the results, and fine-tuning \(\tilde{a}\) is not necessary.
This is also the recommended strategy for finding an appropriate sparsity parameter in SCC. A similar behavior can be observed when \(\beta_{\mathrm{KL}}\) is varied (see figure 15 for changes within an order of magnitude and figure 16 for larger changes). One can see similar effects as for the variation of \(\tilde{a}\), but in the opposite direction: for smaller \(\beta_{\mathrm{KL}}\) surplus filters are rather filled with copies of the motif, whereas for larger values of \(\beta_{\mathrm{KL}}\) the surplus filters are filled with background noise. This shows that it is usually sufficient to tune only one of the two, either \(\tilde{a}\) or \(\beta_{\mathrm{KL}}\), in order to achieve good results. <--- Page Split ---> ![](images/21_0.jpg) <--- Page Split ---> ![](images/22_0.jpg) <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 13: Motifs found on real dataset 2 for small changes of \(\tilde{a}\). The parameter \(\tilde{a}\) was increased in steps of 0.003 from \(\tilde{a} = 0.001\) (a) to \(\tilde{a} = 0.010\) (d) and in steps of 0.030 from \(\tilde{a} = 0.010\) (d) to \(\tilde{a} = 0.100\) (g). </center> ## F MOTIF VIDEOS In order to give the reader a better impression of what the used data and the motifs extracted as short video sequences look like, we provide a few video files containing extracted motifs, <--- Page Split ---> ![](images/24_0.jpg) <center>Figure 14: Motifs found on real dataset 2 for large changes of \(\tilde{a}\). The parameter \(\tilde{a}\) was increased by two orders of magnitude in each step from \(\tilde{a} = 10^{-4}\) (a) to \(\tilde{a} = 1\) (c). </center> analyzed data and reconstructed videos at https://drive.google.com/drive/folders/19F76JLn490RzZ4d7GxbWZog6RdF2nt3w?usp=sharing. The reconstructed videos are obtained by convolving the found motifs with the corresponding found activations. The videos are provided either in TIFF or MP4 format. Table 4 shows the names of the files together with short descriptions of what each video shows. The videos corresponding to the synthetic dataset were generated with a frame rate of 30 fps and those corresponding to the real dataset with 10 fps. <--- Page Split ---> ![](images/25_0.jpg) <center>Figure 15: Motifs found on real dataset 2 for small changes of \(\beta_{\mathrm{KL}}\). The parameter \(\beta_{\mathrm{KL}}\) was increased in steps of 0.03 from \(\beta_{\mathrm{KL}} = 0.01\) (a) to \(\beta_{\mathrm{KL}} = 0.19\) (g). </center> <--- Page Split ---> ![](images/26_0.jpg) <center>Figure 16: Motifs found on real dataset 2 for large changes of \(\beta_{\mathrm{KL}}\). The parameter \(\beta_{\mathrm{KL}}\) was increased by two orders of magnitude in each step from \(\beta_{\mathrm{KL}} = 10^{-3}\) (a) to \(\beta_{\mathrm{KL}} = 10\) (c). </center> Table 4: Attached video files and descriptions. The parameters used for the analysis are the same as given in table 3 if not mentioned differently. The three different types of video are: motif, showing a single motif; parallel video, showing the original video from the dataset (upper left corner) and reconstructions from the found motifs; and RGB video, showing a superposition of RGB values of the reconstructed videos from the three motifs found on the dataset. In addition to the synthetic data example discussed in the paper (with \(10\%\) noise spikes), we also provide videos from a synthetic dataset with \(50\%\) spurious spikes.
<table><tr><td>File name</td><td>dataset</td><td>video type</td><td>number of motifs M</td><td>motif length F</td></tr><tr><td>real_1_e1_121_recon.mp4</td><td>real dataset 1</td><td>parallel video</td><td>1</td><td>21</td></tr><tr><td>real_1_e3_121_motif_0.tiff</td><td>real dataset 1</td><td>motif</td><td>3</td><td>21</td></tr><tr><td>real_1_e3_121_motif_1.tiff</td><td>real dataset 1</td><td>motif</td><td>3</td><td>21</td></tr><tr><td>real_1_e3_121_motif_2.tiff</td><td>real dataset 1</td><td>motif</td><td>3</td><td>21</td></tr><tr><td>real_1_e3_121_recon.mp4</td><td>real dataset 1</td><td>parallel video</td><td>3</td><td>21</td></tr><tr><td>real_1_e3_121_rgb.mp4</td><td>real dataset 1</td><td>RGB video</td><td>3</td><td>21</td></tr><tr><td>real_1_e3_121_reon.mp4</td><td>real dataset 1</td><td>parallel video</td><td>5</td><td>21</td></tr><tr><td>real_2_e3_121_motif_0.tiff</td><td>real dataset 2</td><td>motif</td><td>3</td><td>21</td></tr><tr><td>real_2_e3_121_motif_1.tiff</td><td>real dataset 2</td><td>motif</td><td>3</td><td>21</td></tr><tr><td>real_2_e3_121_motif_2.tiff</td><td>real dataset 2</td><td>motif</td><td>3</td><td>21</td></tr><tr><td>real_2_e3_121_recon.mp4</td><td>real dataset 2</td><td>parallel video</td><td>3</td><td>21</td></tr><tr><td>real_2_e3_121_rgb.mp4</td><td>real dataset 2</td><td>RGB video</td><td>3</td><td>21</td></tr><tr><td>synth_example_e3_121_motif_0.tiff</td><td>synth. example</td><td>motif</td><td>3</td><td>21</td></tr><tr><td>synth_example_e3_121_motif_1.tiff</td><td>synth. example</td><td>motif</td><td>3</td><td>21</td></tr><tr><td>synth_example_e3_121_motif_2.tiff</td><td>synth. example</td><td>motif</td><td>3</td><td>21</td></tr><tr><td>synth_example_e3_121_recon.mp4</td><td>synth. example</td><td>parallel video</td><td>3</td><td>21</td></tr><tr><td>synth_example_e3_121_rgb.mp4</td><td>synth. example</td><td>RGB video</td><td>3</td><td>21</td></tr><tr><td>synth_50noise_e3_121_recon.mp4</td><td>synth. with 50% noise</td><td>parallel video</td><td>3</td><td>21</td></tr><tr><td>synth_50noise_e3_121_rgb.mp4</td><td>synth. with 50% noise</td><td>RGB video</td><td>3</td><td>21</td></tr></table> <--- Page Split --->
accept
Accept (Poster)
7
ICLR_2019_paper_0589
iclr
2,019
# ADVERSARIAL EXAMPLES ARE A NATURAL CONSEQUENCE OF TEST ERROR IN NOISE Anonymous authors Paper under double-blind review ## ABSTRACT Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to making small modifications of a correctly handled input. At the same time, less surprisingly, image classifiers lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this work, we show that these are two manifestations of the same underlying phenomenon. We establish this connection in several ways. First, we find that adversarial examples exist at the same distance scales we would expect from a linear model with the same performance on corrupted images. Next, we show that Gaussian data augmentation during training improves robustness to small adversarial perturbations and that adversarial training improves robustness to several types of image corruptions. Finally, we present a model-independent upper bound on the distance from a corrupted image to its nearest error given test performance and show that in practice we already come close to achieving the bound, so that improving robustness further for the corrupted image distribution requires significantly reducing test error. All of this suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. This yields a computationally tractable evaluation metric for defenses to consider: test error in noisy image distributions. ## 1 INTRODUCTION State-of-the-art computer vision models can achieve superhuman performance on many image classification tasks. Despite this, these same models still lack the robustness of the human visual system to various forms of image corruptions. For example, they are distinctly subhuman when classifying images distorted with additive Gaussian noise (Dodge & Karam, 2017b), they lack robustness to different types of blur, pixelation, and changes in brightness (Hendrycks & Dietterich, 2018), lack robustness to random translations of the input (Azulay & Weiss, 2018), and even make errors when foreign objects are inserted into the field of view (Rosenfeld et al., 2018). At the same time, they also are sensitive to small, worst-case perturbations of the input, so-called "adversarial examples" (Szegedy et al., 2014). This latter phenomenon has struck many in the machine learning community as surprising and has attracted a great deal of research interest, while the former seems to inspire less surprise and has received considerably less attention. Our classification models make errors on two different sorts of inputs: those found by randomly sampling from some predetermined distribution, and those found by an adversary deliberately searching for the closest error to a given point. In this work, we ask what, if anything, is the difference between these two types of error. Given that our classifiers make errors in these corrupted image distributions, there must be a closest such error; do we find that this closest error appears at the distance we would expect from the model's performance in noise, or is it in fact "surprisingly" close? The answer to this question has strong implications for the way we approach the task of eliminating these two types of errors.
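The first of these two quantities is cheap to estimate. As a point of reference, here is a minimal PyTorch sketch of the evaluation metric advocated in the abstract, test error under additive Gaussian noise; the `model` and `loader` objects, the [0, 1] pixel scaling, and the assumption that the noise leaves labels unchanged are illustrative assumptions rather than details from the paper.

```python
# Sketch: estimate test error under additive Gaussian noise of scale sigma.
import torch

def error_rate_in_noise(model, loader, sigma, device="cuda"):
    model.eval()
    errors, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
            pred = model(x_noisy).argmax(dim=1)
            errors += (pred != y).sum().item()
            total += y.numel()
    return errors / total
```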
An assumption underlying most of the work on adversarial examples is that solving this problem requires a different set of methods than the ones being developed to improve model generalization. The adversarial defense literature focuses primarily on improving robustness to small perturbations of the input and rarely reports improved generalization in any distribution.
<--- Page Split --->
We claim that, on the contrary, adversarial examples are found at the same distance scales that one should expect given the performance on noise that we see in practice.

We explore the connection between small-perturbation adversarial examples and test error in noise in two different ways. First, in Sections 4 and 5, we provide empirical evidence of a close relationship between test performance in Gaussian noise and adversarial perturbations. We show that the errors we find close to the clean image and the errors we sample under Gaussian noise are part of the same large set, and we show some visualizations that illustrate this relationship. (This analysis builds upon prior work (Fawzi et al., 2018; 2016) which makes smoothness assumptions on the decision boundary to relate these two quantities.) This suggests that training procedures designed to improve adversarial robustness might reduce test error in noise and vice versa. We provide results from experiments which show that this is indeed the case: for every model we examined, either both quantities improved or neither did. In particular, a model trained on Gaussian noise shows significant improvements in adversarial robustness, comparable to (but not quite as strong as) a model trained on adversarial examples. We also found that an adversarially trained model on CIFAR-10 shows improved robustness to random image corruptions.

Finally, in Section 6, we establish a relationship between the error rate of an image classification model in the presence of Gaussian noise and the existence of adversarial examples for noisy versions of test set images. In this setting we can actually prove a rigorous, model-independent bound relating these two quantities that is achieved when the error set is a half space, and we see that the models we tested are already quite close to this optimum. Therefore, for these noisy image distributions, our models are already almost as adversarially robust as they can be given the error rates we see, so the only way to defend against adversarial examples is to reduce test error.

In this work we investigate several different models trained on the MNIST, CIFAR-10, and ImageNet datasets. For MNIST and CIFAR-10 we look at the naturally trained and adversarially trained models which have been open-sourced by Madry et al. (2017). We also trained the same model on CIFAR-10 with Gaussian data augmentation. For ImageNet, we investigate a ResNet-50 trained with Gaussian data augmentation. We were unable to study the effects of adversarial training on ImageNet because no robust open-sourced model exists (we considered the models released in Tramer et al. (2017) but found that they only minimally improve robustness to the white-box PGD adversaries we consider here). Additional training details can be found in Appendix A.

## 2 RELATED WORK

The broader field of adversarial machine learning studies general ways in which an adversary may interact with an ML system, and dates back to 2004 (Dalvi et al., 2004; Biggio & Roli, 2018). Since the work of Szegedy et al.
(2014), a subfield has focused specifically on the phenomenon of small adversarial perturbations of the input, or "adversarial examples." In Szegedy et al. (2014) it was proposed that these adversarial examples occupy a dense, measure-zero subset of image space. However, more recent work has provided evidence that this is not true. For example, Fawzi et al. (2016) and Franceschi et al. (2018) show that, under linearity assumptions on the decision boundary, small adversarial perturbations exist whenever test error in noise is non-zero. Gilmer et al. (2018b) showed for a specific data distribution that there is a fundamental upper bound on adversarial robustness in terms of test error. Mahloujifar et al. (2018) generalized these results to a much broader class of distributions.

Recent work has proven for a synthetic data distribution that adversarially robust generalization requires more data (Schmidt et al., 2018). The distribution they consider when proving this result is a mixture of high-dimensional Gaussians. As we will soon discuss, every set \(E\) of small measure in the high-dimensional Gaussian distribution has large boundary measure. Therefore, at least for the data distribution considered, the main conclusion of this work, "adversarially robust generalization requires more data", is a direct corollary of the statement "generalization requires more data."

## 3 TEST ERROR AND ADVERSARIAL ROBUSTNESS

Understanding the relationship between nearby errors and model generalization requires understanding the geometry of the error set of a statistical classifier, that is, the set of points in the input space
<--- Page Split --->
on which the classifier makes an incorrect prediction. In particular, the assertion that these adversarial examples are a distinct phenomenon from test error is equivalent to stating that the error set is in some sense poorly behaved.

We study two functions of a model's error set \(E\). The first quantity, test error under a given distribution of inputs \(q(x)\), is the probability that a random sample from the distribution \(q\) is in \(E\). We will denote this \(\mathbb{P}_{x\sim q}[x\in E]\); reducing this quantity when \(q\) is the natural data distribution is the goal of supervised learning. While one usually takes \(q\) to be the distribution from which the training set was sampled, we will also consider other distributions over the course of this paper. When \(q\) includes points from outside the natural data distribution, a decision needs to be made about the labels in order to define \(E\). The only such cases we will consider in this paper are noisy perturbations of training or test points, and we will always assume that the noise is at a scale which is small enough not to change the label. This assumption is commonly made in works which study model robustness to random corruptions of the input (Hendrycks & Dietterich, 2018; Dodge & Karam, 2017b). Some example noisy images can be found in Figure 7 in the appendix.

The second quantity is called adversarial robustness. For an input \(x\) and a metric on the input space \(d\), let \(d(x, E)\) denote the distance from \(x\) to the nearest point of \(E\). For any \(\epsilon\), let \(E_{\epsilon}\) denote the set \(\{x: d(x, E) < \epsilon \}\), the set of points within \(\epsilon\) of an error. The adversarial robustness of the model is then \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}]\), the probability that a random sample from \(q\) is within distance \(\epsilon\) of some point in the error set.
Reducing this quantity is the goal of much of the adversarial defense literature. When we refer to "adversarial examples" in this paper, we will always mean these nearby errors.

In geometric terms, we can think of \(\mathbb{P}_{x\sim q}[x\in E]\) as a sort of volume of the error set, while \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}]\) is related to its surface area. More directly, \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}]\) is what we will call the \(\epsilon\)-boundary measure: the volume under \(q\) of the region within \(\epsilon\) of the surface or in the interior. The adversarial example phenomenon is then simply that, for small \(\epsilon\), \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}]\) can be large even when \(\mathbb{P}_{x\sim q}[x\in E]\) is small. In other words, most correctly classified inputs are very close to a misclassified point, even though the model is very accurate.

In high-dimensional spaces this phenomenon is not isolated to the error sets of statistical classifiers. In fact, almost every nonempty set of small volume has large \(\epsilon\)-boundary measure, even sets that seem very well-behaved. As a simple example, consider the measure of the set \(E = \{x\in \mathbb{R}^{n}:\|x\|_{2}< 1\}\) under the Gaussian distribution \(q = \mathcal{N}(0,\sigma^{2}I)\). For \(n = 1000\), \(\sigma = 1.05 / \sqrt{n}\), and \(\epsilon = 0.1\), we have \(\mathbb{P}_{x\sim q}[x\in E]\approx 0.02\) and \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}]\approx 0.98\), so most samples from \(q\) will be close to \(E\) despite the fact that \(E\) has relatively little measure under the Gaussian distribution. If we relied only on our low-dimensional spatial intuition, we might be surprised to find how consistently small adversarial perturbations could be found: 98% of our test points would have an error at distance 0.1 or less even though only 2% are misclassified. In high dimensions, it is much easier for most points to be close to some set even if that set itself has a small volume. Contrary to what one might expect from our low-dimensional intuition, this does not require the set in question to be somehow pathological; in our example, it was just a ball. Therefore, when we see that some image classifier has errors in some noise distribution \(q\) (so that \(\mathbb{P}_{x\sim q}[x\in E]\) is appreciably bigger than zero), it is possible that \(E_{\epsilon}\) is much larger even if \(E\) is quite simple, so the existence of small worst-case perturbations should be expected given imperfect robustness to large average-case corruptions. In the sections that follow we will make this precise.

## 4 ERRORS IN NOISE SUGGEST ADVERSARIAL EXAMPLES FOR CLEAN IMAGES

The Linear Case. For linear models, the relationship between errors in Gaussian noise and small perturbations of a clean image is exact. For an image \(x\), let \(d(x)\) be the distance from \(x\) to the decision boundary, and let \(\sigma (x,\mu)\) be the standard deviation for which the error rate under Gaussian perturbations of \(x\) is some fixed value \(\mu\). (As we mentioned in the introduction, we assume that \(\sigma\) is small enough that adding this noise does not change the "correct" label.) Then we have \(d(x) = -\sigma (x,\mu)\Phi^{- 1}(\mu)\), where \[\Phi (t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t}\exp (-x^{2} / 2)\,dx\] is the cdf of the univariate standard normal distribution.
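The ball example from Section 3 is easy to verify numerically. The following is a minimal Monte Carlo sketch (ours, with illustrative sample sizes; it is not part of the paper's experiments):

```python
import numpy as np

# Monte Carlo check of the ball example: E = {x : ||x||_2 < 1} under
# q = N(0, sigma^2 I) with n = 1000, sigma = 1.05 / sqrt(n), eps = 0.1.
rng = np.random.default_rng(0)
n = 1000
sigma = 1.05 / np.sqrt(n)
eps = 0.1

# For this E, a sample x lies in E_eps exactly when ||x||_2 < 1 + eps.
norms = np.linalg.norm(rng.normal(0.0, sigma, size=(20_000, n)), axis=1)
print("P[x in E]     ~", (norms < 1.0).mean())        # ~0.02
print("P[x in E_eps] ~", (norms < 1.0 + eps).mean())  # ~0.98
```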
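The linear-case identity \(d(x) = -\sigma(x,\mu)\Phi^{-1}(\mu)\) can be checked the same way. A small sketch, again ours, using a toy half-space error set; since only the noise component along the normal direction matters, it suffices to sample in one dimension:

```python
import numpy as np
from scipy.stats import norm

# Toy linear model: error set E = {x : x_1 >= t}, clean point at the origin,
# so the true distance to the decision boundary is d = t.
rng = np.random.default_rng(1)
t = 0.25
for sigma in (0.10, 0.15, 0.20):
    # Empirical error rate mu under N(0, sigma^2 I); only x_1 matters.
    mu = (rng.normal(0.0, sigma, size=1_000_000) >= t).mean()
    # Each printed value should be close to d = t = 0.25.
    print(f"sigma={sigma}: -sigma * Phi^-1(mu) = {-sigma * norm.ppf(mu):.3f}")
```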
<--- Page Split --->
![](images/3_0.jpg)
<center>Figure 1: Comparing the distance to the decision boundary with the \(\sigma\) for which the error rate in Gaussian noise is 1%. Each point represents 50 images from the test set, and the median values for each coordinate are shown. (The PGD attack was run with \(\epsilon = 1\), so the distances to the decision boundary reported here are cut off at 1.) We also show histograms of the \(x\)-coordinates. (A misclassified point is assigned \(\sigma = 0\).) </center>

Note that this equality depends only on the error rate \(\mu\) and the standard deviation \(\sigma\) of a single component, and not directly on the dimension. This might seem at odds with the emphasis on high-dimensional geometry in Section 3. The dimension does appear if we consider the norm of a typical sample from \(\mathcal{N}(0, \sigma^2 I)\), which is \(\sigma \sqrt{n}\). As the dimension increases, so does the ratio between the distance to a noisy image and the distance to the decision boundary.

The decision boundary of a neural network is, of course, not linear. However, by computing the ratio between \(d(x)\) and \(\sigma (x, \mu)\) for neural networks and comparing it to what it would be for a linear model, we can investigate the question posed in the introduction: do we see adversarial examples at the distances we do because of pathologies in the shape of the error set, or do we find them at about the distances we would expect given the error rates we see in noise? We ran experiments on the error sets of several neural image classifiers and found evidence that is much more consistent with the second of these two possibilities. This relationship was also explored in Fawzi et al. (2016; 2018); here we additionally measure how data augmentation affects this relationship.

We examined this relationship for neural networks when \(\mu = 0.01\). For each test point, we compared \(\sigma (x, \mu)\) to an estimate of \(d(x)\). It is not actually possible to compute \(d(x)\) precisely for the error set of a neural network. In fact, finding the distance to the nearest error is NP-hard (Katz et al., 2017). Instead, the best we can do is to search for an error using a method like PGD (Madry et al., 2017) and report the nearest error we can find.

Figure 1 shows the results for several CIFAR-10 and ImageNet models, including ordinarily trained models, models trained on noise with \(\sigma = 0.4\), and an adversarially trained CIFAR-10 model. We also included a line representing how these quantities would be related for a linear model. We can see that none of the models we examined have nearby errors at a scale much smaller than we would expect from a linear model. Indeed, while the adversarially trained model does deviate from the linear case to a greater extent than the others, it does so in the direction of greater distances to the decision boundary. Moreover, we can see from the histograms that both of the interventions that increase \(d(x)\) also increase \(\sigma (x, \mu)\). So, to explain the distances to the errors we can find using PGD, it is not necessary to rely on any great complexity in the shape of the error set; a linear model with the same error rates in noise would have errors just as close.
<--- Page Split --->
![](images/4_0.jpg)
<center>Figure 2: Two-dimensional slices of image space through different triples of points, together with the classes assigned by a trained model.
The black circle in both images has radius 31.4, corresponding to noise with \(\sigma = 31.4 / \sqrt{n} = 0.08\). </center>

Left: An image from the test set (black), a random misclassified Gaussian perturbation at standard deviation 0.08 (blue), and an error found using PGD (red). The estimated measure of the cyan region ("miniature poodle") in the Gaussian distribution is about 0.1%. The small diamond-shaped region in the center of the image is the \(l_{\infty}\) ball of radius 8/255. Right: A slice at a larger scale with the same black point, together with an error from the clean set (blue) and an adversarially constructed error (red), which are both assigned to the same class ("elephant").

Visualizing the Decision Boundary. In Figure 2 we drew some pictures of two-dimensional slices of image space through several different triples of points. (Similar visualizations have previously appeared in Fawzi et al. (2018), and are called "church window plots.") We see some common themes. In the figure on the left, we see that an error found in Gaussian noise lies in the same connected component of the error set as an error found using PGD, and that at this scale that component visually resembles a half space. This figure also illustrates the relationship between test error and adversarial robustness. To measure adversarial robustness is to ask whether or not there are any errors in the \(l_{\infty}\) ball (the small diamond-shaped region in the center of the image), and to measure test error in noise is to measure the volume of the error set in the defined noise distribution. At least in this slice, nothing distinguishes the PGD error from any other point in the error set apart from its proximity to the center point.

The figure on the right shows a different slice through the same test point but at a larger scale. This slice includes an ordinary test error along with an adversarial perturbation of the center image constructed with the goal of maintaining visual similarity while having a large \(l_{2}\) distance. The two errors are both classified (incorrectly) by the model as "elephant." This adversarial error is actually farther from the center than the test error, but they still clearly belong to the same connected component. This suggests that defending against worst-case content-preserving perturbations (Gilmer et al., 2018a) requires removing all errors at a scale comparable to the distance between unrelated pairs of images. Many more church window plots can be found in Appendix G.

## 5 COMPARING ADVERSARIAL TRAINING TO TRAINING ON NOISE

For a linear model, improving generalization in the presence of noise is equivalent to increasing the distance to the decision boundary. The results from the previous section suggest that a similar relationship should hold for other statistical classifiers, including neural networks. That is, augmenting the training data distribution with noisy images ought to increase the distance to the decision boundary, and augmenting the training distribution with small-perturbation adversarial examples should improve performance in noise. Here we present evidence that this is the case.

We analyzed the performance of the models described in Section 1 on four different noise distributions: two types of Gaussian noise, pepper noise (Hendrycks & Dietterich, 2018), and a randomized variant of the stAdv adversarial attack introduced in Xiao et al. (2018).
We used both ordinary, spherical Gaussian noise and what we call "PCA noise," which is Gaussian noise supported only on the
<--- Page Split --->
<table><tr><td colspan="2">Dataset</td><td colspan="3">CIFAR-10</td><td colspan="3">ImageNet</td></tr><tr><td>Training</td><td>Vanilla</td><td>Noise<br>σ=0.1</td><td>Noise<br>σ=0.4</td><td>Adv</td><td>Vanilla</td><td>Noise<br>σ=0.4</td><td>Noise<br>σ=0.8</td></tr><tr><td>Noise Type</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Clean</td><td>95.0%</td><td>93.5%</td><td>84.0%</td><td>87.3%</td><td>76.0%</td><td>74.4%</td><td>72.6%</td></tr><tr><td>PCA100, σ=0.2</td><td>93.2%</td><td>92.3%</td><td>83.6%</td><td>86.5%</td><td>45.5%</td><td>56.5%</td><td>59.7%</td></tr><tr><td>PCA100, σ=0.4</td><td>82.6%</td><td>83.1%</td><td>81.0%</td><td>80.6%</td><td>13.5%</td><td>17.7%</td><td>19.7%</td></tr><tr><td>Pepper, p=0.1</td><td>20.2%</td><td>53.3%</td><td>81.2%</td><td>38.4%</td><td>31.3%</td><td>70.0%</td><td>69.1%</td></tr><tr><td>Pepper, p=0.3</td><td>12.3%</td><td>18.9%</td><td>58.0%</td><td>21.1%</td><td>5.4%</td><td>56.0%</td><td>61.5%</td></tr><tr><td>Gaussian, σ=0.1</td><td>29.1%</td><td>89.0%</td><td>85.1%</td><td>77.8%</td><td>60.7%</td><td>73.3%</td><td>71.7%</td></tr><tr><td>Gaussian, σ=0.2</td><td>13.5%</td><td>38.8%</td><td>83.5%</td><td>42.1%</td><td>27.9%</td><td>70.5%</td><td>69.3%</td></tr><tr><td>stAdv, σ=0.5</td><td>52.3%</td><td>84.4%</td><td>77.9%</td><td>81.7%</td><td>57.3%</td><td>67.3%</td><td>69.0%</td></tr><tr><td>stAdv, σ=2.0</td><td>17.4%</td><td>30.6%</td><td>52.1%</td><td>27.0%</td><td>11.4%</td><td>27.2%</td><td>31.3%</td></tr><tr><td>lp robustness</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>l2, ε=0.5</td><td>0.3%</td><td>39.2%</td><td>54.5%</td><td>58.3%</td><td>7.9%</td><td>43.8%</td><td>47.7%</td></tr><tr><td>l2, ε=1.0</td><td>0.0%</td><td>9.5%</td><td>25.1%</td><td>29.7%</td><td>0.5%</td><td>16.8%</td><td>22.5%</td></tr><tr><td>l∞, ε=1/255</td><td>26.2%</td><td>84.4%</td><td>76.6%</td><td>83.5%</td><td>0.8%</td><td>20.1%</td><td>25.0%</td></tr><tr><td>l∞, ε=4/255</td><td>0.4%</td><td>39.8%</td><td>49.6%</td><td>68.3%</td><td>0.0%</td><td>0.1%</td><td>0.1%</td></tr><tr><td>l∞, ε=8/255</td><td>0.0%</td><td>10.3%</td><td>20.0%</td><td>45.4%</td><td>0.0%</td><td>0.0%</td><td>0.0%</td></tr></table>
Table 1: The performance of the models we considered under various noise distributions, together with our measurements of those models' robustness to small \(l_{p}\) perturbations. For all the robustness tests we used PGD with 100 steps and a step size of \(\epsilon /25\). The adversarially trained CIFAR-10 model is the open-sourced model from Madry et al. (2017).

subspace spanned by the first 100 principal components of the training set. Pepper noise randomly assigns channels of the image to 1 with some fixed probability. Details of the stAdv attack can be found in Appendix B, but it is visually similar to Gaussian blurring, where \(\sigma\) controls the severity of the blurring. Example images that have undergone each of the noise transformations we used can be found in Appendix I. Each model was also tested for \(l_{p}\) robustness with a variety of norms and values of \(\epsilon\), using the same PGD attack as in Section 4.

For CIFAR-10, standard Gaussian data augmentation yields comparable (but slightly worse) results to adversarial training on all considered metrics.
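For concreteness, the random corruptions just described can be sketched in a few lines of NumPy. These are our illustrative reading of the descriptions in the text (the clipping to [0, 1] and the exact value ranges are assumptions; the randomized stAdv variant is omitted since it relies on the flow-field attack code of Xiao et al. (2018)):

```python
import numpy as np

def gaussian_noise(x, sigma, rng):
    # Additive spherical Gaussian noise; x is a float image in [0, 1].
    return np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)

def pepper_noise(x, p, rng):
    # Independently set each channel value of the image to 1 with probability p.
    out = x.copy()
    out[rng.random(x.shape) < p] = 1.0
    return out

def pca_noise(x, components, sigma, rng):
    # Gaussian noise supported on the span of the top-k PCA directions of the
    # training data; `components` is a (k, d) matrix with orthonormal rows
    # (k = 100 for the "PCA100" rows of Table 1).
    z = rng.normal(0.0, sigma, size=x.size)
    z = components.T @ (components @ z)  # project the noise onto the subspace
    return np.clip(x + z.reshape(x.shape), 0.0, 1.0)
```

Usage is e.g. `noisy = gaussian_noise(image, 0.2, np.random.default_rng(0))` for the "Gaussian, σ=0.2" row of Table 1.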
For ImageNet, we found that Gaussian data augmentation improves robustness to small \(l_{2}\) perturbations as well as robustness to other noise corruptions. The results are shown in Table 1; the improvements hold both for generalization in all of the noise distributions considered and for robustness to small perturbations. We found that performing data augmentation with heavy Gaussian noise (\(\sigma = 0.4\) for CIFAR-10 and \(\sigma = 0.8\) for ImageNet) worked best. The adversarially trained CIFAR-10 models were trained in the \(l_{\infty}\) metric, and they performed especially well on worst-case perturbations in this metric. Prior work has observed that Gaussian data augmentation helps small-perturbation robustness on MNIST (Kannan et al., 2018), but to our knowledge we are the first to measure this on CIFAR-10 and ImageNet.

Neither augmentation method shows much improved generalization in PCA noise. We hypothesize that adversarially trained models learn to project away the high-frequency information in the input, which would do little to improve performance in PCA noise, which is supported in the low-frequency subspace of the data distribution. Further work would be required to establish this.

We also considered the MNIST adversarially trained model from Madry et al. (2017), and found it to be a special case where, although robustness to small perturbations was increased, generalization in noise was not improved. This is because this model violates the linearity assumption discussed in Section 4. This overfitting to the \(l_{\infty}\) metric has been observed in prior work (Sharma & Chen, 2017). More details can be found in Appendix D. Although no \(l_{p}\)-robust open-sourced ImageNet model exists, recent work has found that the adversarially trained models on Tiny ImageNet from Kannan et al. (2018) generalize very well on a large suite of common image corruptions (Hendrycks & Dietterich, 2018).

Failed Adversarial Defenses Do Not Improve Generalization in Noise. We performed a similar analysis on seven previously published adversarial defense strategies. These methods have already been shown to result in gradient masking, which causes standard optimization procedures to fail to find errors, rather than actually improving small-perturbation robustness (Athalye et al., 2018). We find
<--- Page Split --->
![](images/6_0.jpg)
<center>Figure 3: The performance in Gaussian noise of several previously published defenses for ImageNet, along with a model trained on Gaussian noise at \(\sigma = 0.4\) for comparison. For each point we ran ten trials; the error bars show one standard deviation. All of these defenses are now known not to improve adversarial robustness (Athalye et al., 2018). The defense strategies include bit-depth reduction (Guo et al., 2017), JPEG compression (Guo et al., 2017; Dziugaite et al., 2016; Liu et al., 2018; Aydemir et al., 2018; Das et al., 2018; 2017), Pixel Deflection (Prakash et al., 2018), total variance minimization (Guo et al., 2017), representation-guided denoising (Liao et al., 2018), and random resizing and random padding of the input image (Xie et al., 2017). </center>
that these methods also show no improved generalization in Gaussian noise. The results are shown in Figure 3. Given how easy it is for a method to show improved robustness to standard optimization procedures without changing the decision boundary in any meaningful way, we strongly recommend that future defense efforts evaluate on out-of-distribution inputs such as the noise distributions we consider here.
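For reference, the gradient-based evaluations referred to throughout (and used for the robustness rows of Table 1) are iterated projected-gradient attacks. A minimal \(l_{\infty}\) PGD sketch is given below, using the settings from Table 1 (100 steps, step size \(\epsilon/25\)) on a toy logistic model whose input gradient is available in closed form; attacking a real network would instead obtain the gradient by backpropagation:

```python
import numpy as np

def pgd_linf(x, y, w, b, eps, steps=100, rng=None):
    # l_inf PGD against the toy model p(y=1 | x) = sigmoid(w @ x + b).
    # For this model the cross-entropy gradient w.r.t. x is (p - y) * w,
    # so no automatic differentiation is needed.
    rng = rng if rng is not None else np.random.default_rng(0)
    alpha = eps / 25.0                                  # step size from Table 1
    x_adv = np.clip(x + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        grad = (p - y) * w                              # ascend the loss
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)        # project to the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                # stay a valid image
    return x_adv
```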
The current standard practice of evaluating solely on gradient-based attack algorithms is making progress more difficult to measure.

Obtaining Zero Test Error in Noise is Nontrivial. It is important to note that applying Gaussian data augmentation does not reduce error rates in Gaussian noise to zero. For example, we performed Gaussian data augmentation on CIFAR-10 at \(\sigma = 0.15\) and obtained 99.9% training accuracy but 77.5% test accuracy in the same noise distribution. (For comparison, the naturally trained model obtains 95% clean test accuracy.) Previous work (Dodge & Karam, 2017b) has also observed that obtaining perfect generalization in large Gaussian noise is nontrivial. This mirrors Schmidt et al. (2018), which found that small-perturbation robustness did not generalize to the test set. This is perhaps not surprising given that error rates on the clean test set are also non-zero. Although the model is in some sense "superhuman" with respect to clean test accuracy, it still makes many mistakes on the clean test set that a human would never make. We collected some examples in Appendix I. More detailed results on training and testing in noise can be found in Appendices C and H.

## 6 ERRORS IN NOISE IMPLY ADVERSARIAL EXAMPLES FOR NOISY IMAGES

The Gaussian Isoperimetric Inequality. Let \(x\) be a correctly classified image and consider the distribution \(q\) of Gaussian perturbations of \(x\) with some fixed variance \(\sigma^2 I\). For this distribution, there is a precise sense in which small adversarial perturbations exist only because test error is nonzero. That is, given the error rates we actually observe on noisy images, most noisy images must be close to the error set. This result holds completely independently of any assumptions about the model and follows from a fundamental geometric property of the high-dimensional Gaussian distribution, which we will now make precise.

For an image \(x\) and the corresponding noisy image distribution \(q\), let \(\epsilon_{q}^{*}(E)\) be the median distance from one of these noisy images to the nearest error. (In other words, it is the \(\epsilon\) for which \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}] = \frac{1}{2}\).) As before, let \(\mathbb{P}_{x\sim q}[x\in E]\) be the probability that a random Gaussian perturbation
<--- Page Split --->
![](images/7_0.jpg)
<center>Figure 4: The adversarial example phenomenon occurs for noisy images as well as clean ones. Starting with a noisy image that is correctly classified, one can apply carefully crafted imperceptible noise to it which causes the model to output an incorrect answer. This occurs even though the error rate among random Gaussian perturbations of this image is small (less than 0.1% for the ImageNet panda shown above). In fact, we prove that the presence of errors in Gaussian noise logically implies that small adversarial perturbations exist around noisy images. The only way to "defend" against such adversarial perturbations is to reduce the error rate in Gaussian noise. </center>
of \(x\) lies in \(E\). It is possible to deduce a bound relating these two quantities from the Gaussian isoperimetric inequality (Borell, 1975). The form we will use is:

Theorem (Gaussian Isoperimetric Inequality). Let \(q = \mathcal{N}(0,\sigma^{2}I)\) be the Gaussian distribution on \(\mathbb{R}^{n}\) with variance \(\sigma^{2}I\), and let \(\mu = \mathbb{P}_{x\sim q}[x\in E]\).
Write \(\Phi (t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t}\exp (-x^{2} / 2)\,dx\), the cdf of the univariate standard normal distribution. If \(\mu \geq \frac{1}{2}\), then \(\epsilon_{q}^{*}(E) = 0\). Otherwise, \(\epsilon_{q}^{*}(E)\leq - \sigma \Phi^{- 1}(\mu)\), with equality when \(E\) is a half space.

In particular, for any machine learning model for which the error rate in the distribution \(q\) is at least \(\mu\), the median distance to the nearest error is at most \(- \sigma \Phi^{- 1}(\mu)\). (Note that \(\Phi^{- 1}(\mu)\) is negative when \(\mu < \frac{1}{2}\).) Because each coordinate of a multivariate normal is a univariate normal, \(- \Phi^{- 1}(\mu)\) is the distance to a half space for which the error rate is \(\mu\) when \(\sigma = 1\). (We have the same indirect dependence on dimension here as we saw in Section 4: the distance to a typical sample from the Gaussian is \(\sigma \sqrt{n}\).) In Appendix E we give the more common statement of the Gaussian isoperimetric inequality along with a proof of the version presented here.

In geometric terms, we can say that a half space is the set \(E\) of a fixed volume that minimizes the surface area under the Gaussian measure, similar to how a circle is the set of fixed area that minimizes the perimeter. So among models with some fixed test error \(\mathbb{P}_{x\sim q}[x\in E]\), the most robust on this distribution are the ones whose error set is a half space.

Comparing Neural Networks to the Isoperimetric Bound. We evaluated these quantities for several models and many images from the CIFAR-10 and ImageNet test sets. Just like for clean images, we found that most noisy images are both correctly classified and very close to a visually similar image which is not. (See Figure 4.) As we mentioned in Section 4, it is not actually possible to compute \(\epsilon_{q}^{*}\) precisely for the error set of a neural network, so we again report an estimate. For each test image, we took 1,000 samples from the corresponding Gaussian, estimated \(\epsilon_{q}^{*}\) using PGD with 200 steps on each sample, and reported the median.

We find that for the five models we considered on CIFAR-10 and ImageNet, the relationship between our estimate of \(\epsilon_{q}^{*}(E)\) and \(\mathbb{P}_{x\sim q}[x\in E]\) is already close to optimal. This is visualized in Figure 5. Note that in both cases, adversarial training does improve robustness to small perturbations, but the gains come primarily because error rates in Gaussian noise were dramatically improved, and less because the surface area of the error set was decreased. In particular, many test points do not appear on these graphs because error rates in noise were so low that we did not find any errors among the 100,000 samples we used. For example, for the naturally trained CIFAR model, about 1% of the points lie off the left edge of the plot, compared to about 59% for the adversarially trained model and 70% for the model trained on noise. This shows that adversarial training on small perturbations improved generalization to large random perturbations, as the isoperimetric inequality says it must.
<--- Page Split --->
![](images/8_0.jpg)
<center>Figure 5: These plots give two ways to visualize the relationship between the error rate in noise and the distance from noisy points to the decision boundary (found using PGD). Each point on each plot represents one image from the test set.
On the left, we compare the error rate of the model on Gaussian perturbations at \(\sigma = 0.1\) to the distance from the median noisy point to its nearest error. On the right, we compare the \(\sigma\) at which the error rate is 0.01 to this same median distance. (The plots on the right are therefore similar to the plots in Figure 1.) The thick black line at the top of each plot is the upper bound provided by the Gaussian isoperimetric inequality. We include data from a model trained on clean images, an adversarially trained model, and a model trained on Gaussian noise \((\sigma = 0.4)\). As mentioned in Section 1, we were unable to run this experiment on an adversarially robust ImageNet model. </center>

Not all models or functions will be this close to optimal. As a simple example, if we took one of the CIFAR models shown in Figure 5 and modified it so that the model outputs an error whenever each coordinate of the input is an integer multiple of \(10^{- 6}\), the resulting model would have an error within \(\sqrt{\frac{1}{2} \cdot 10^{- 6} \cdot \dim(\text{CIFAR})} \approx 0.039\) of every point. In this case, adversarial examples would be a phenomenon distinct from test error, since \(\epsilon_{q}^{*}(E)\) would be far from optimal.

The contrast between these two settings is important for adversarial defense design. If adversarial examples arose from a badly behaved decision boundary (as in the latter case), then it would make sense to design defenses which attempt to smooth out the decision boundary in some way. However, because we observe that image models are already close to the optimal bound on robustness for a fixed error rate in noise, future defense design should attempt to improve generalization in noise. Currently there is a considerable subset of the adversarial defense literature which develops methods that would remove any small "pockets" of errors but which do not improve model generalization. One example is Xie et al. (2017), which proposes randomly resizing the input to the network as a defense strategy. Unfortunately, this defense, like many others, has been shown to be ineffective against stronger adversaries (Carlini & Wagner, 2017a;b; Athalye et al., 2018).

## 7 CONCLUSION

We proved a fundamental relationship between generalization in noisy image distributions and the existence of small adversarial perturbations. By appealing to the Gaussian isoperimetric inequality, we formalized the notion of what it means for a decision boundary to be badly behaved. We showed that, for noisy images, there is very little room to improve robustness without also decreasing the volume of the error set, and we provided evidence that small perturbations of clean images can also be explained in a similar way. These results show that small-perturbation adversarial robustness is closely related to generalization in the presence of noise and that future defense efforts can measure progress by measuring test error in different noise distributions.
<--- Page Split --->
Indeed, several such noise distributions have already been proposed, and other researchers have developed methods which improve generalization in these distributions (Hendrycks & Dietterich, 2018; Dodge & Karam, 2017b;a; Vasiljevic et al., 2016; Zheng et al., 2016).
Our work suggests that adversarial defense and improving generalization in noise involve attacking the same set of errors in two different ways: the first community tries to remove the errors on the boundary of the error set, while the second community tries to reduce the volume of the error set. The isoperimetric inequality connects these two perspectives and suggests that improvements in adversarial robustness should result in improved generalization in noise and vice versa. Adversarial training on small perturbations on CIFAR-10 also improved generalization in noise, and training on noise improved robustness to small perturbations.

In the introduction we referred to a question from Szegedy et al. (2014) about why we find errors so close to our test points while the test error itself is so low. We can now suggest an answer: despite what our low-dimensional visual intuition may lead us to believe, these errors are not in fact unnaturally close given the error rates we observe in noise. There is a sense, then, in which we simply haven't reduced the test error enough to expect to have removed most nearby errors.

While we focused on the Gaussian distribution, similar conclusions can be drawn about other distributions. In general, in high dimensions, the \(\epsilon\)-boundary measure of a typical set is large even when its volume is small, and this observation does not depend on anything specific about the Gaussian distribution. The Gaussian distribution is special in that we can easily prove that all sets will have large \(\epsilon\)-boundary measure. Mahloujifar et al. (2018) proved a similar theorem for a larger class of distributions. For other data distributions not every set has large \(\epsilon\)-boundary measure, but under some additional assumptions it still holds that most sets do. An investigation of this relationship on the MNIST distribution can be found in Gilmer et al. (2018b, Appendix G).

We believe it would be beneficial for the adversarial defense literature to start reporting generalization in noisy image distributions, such as the common corruption benchmark introduced in Hendrycks & Dietterich (2018), rather than the current practice of only reporting empirical estimates of adversarial robustness. There are several reasons for this recommendation.

1. Measuring test error in noise is significantly easier than measuring adversarial robustness: computing adversarial robustness perfectly requires solving an NP-hard problem for every point in the test set (Katz et al., 2017). Since Szegedy et al. (2014), hundreds of adversarial defense papers have been published. To our knowledge, only one (Madry et al., 2017) has reported robustness numbers which were confirmed by a third party. We believe the difficulty of measuring robustness under the usual definition has contributed to this unproductive situation.
2. Measuring test error in noise would also allow us to determine whether or not these methods improve robustness in a trivial way, such as how the robust MNIST model learned to threshold the input, or whether they have actually succeeded in improving generalization outside the natural data distribution.
3. All of the failed defense strategies we examined failed to improve generalization in noise. For this reason, we should be highly skeptical of defense strategies that only claim improved \(l_{p}\)-robustness but do not demonstrate robustness in more general settings.
4. Finally, if the goal is improving the security of our models in adversarial settings, errors in the presence of noise are already indicative that our models are not secure. Until our models are perfectly robust in the presence of average-case corruptions, they will not be robust in worst-case settings. The usefulness of \(l_{p}\)-robustness in realistic threat models is limited when attackers are not constrained to making small modifications.

The interest in measuring \(l_{p}\) robustness arose from a sense of surprise that errors could be found so close to correctly classified points. But from the perspective described in this paper, the phenomenon is less surprising. Statistical classifiers make a large number of errors outside the data on which they were trained, and small adversarial perturbations are simply the nearest ones.

## REFERENCES

Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
<--- Page Split --->
Ayse Elvan Aydemir, Alptekin Temizel, and Tugba Taskaya Temizel. The effects of JPEG and JPEG2000 compression on attacks using adversarial examples. arXiv preprint arXiv:1803.10418, 2018.
Aharon Azulay and Yair Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? arXiv preprint arXiv:1805.12177, 2018.
Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317-331, 2018.
Christer Borell. The Brunn-Minkowski inequality in Gauss space. Inventiones mathematicae, 30(2):207-216, 1975.
Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. arXiv preprint arXiv:1705.07263, 2017a.
Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pp. 39-57. IEEE, 2017b.
Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. AutoAugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.
Nilesh Dalvi, Pedro Domingos, Sumit Sanghai, Deepak Verma, et al. Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 99-108. ACM, 2004.
Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E Kounavis, and Duen Horng Chau. Keeping the bad guys out: Protecting and vaccinating deep learning with JPEG compression. arXiv preprint arXiv:1705.02900, 2017.
Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Siwei Li, Li Chen, Michael E Kounavis, and Duen Horng Chau. Shield: Fast, practical defense and vaccination for deep learning using JPEG compression. arXiv preprint arXiv:1802.06816, 2018.
Samuel Dodge and Lina Karam. Quality resilient deep neural networks. arXiv preprint arXiv:1703.08119, 2017a.
Samuel Dodge and Lina Karam. A study and comparison of human and deep learning recognition performance under visual distortions. In Computer Communication and Networks (ICCCN), 2017 26th International Conference on, pp. 1-7. IEEE, 2017b.
Gintare Karolina Dziugaite, Zoubin Ghahramani, and Daniel M Roy. A study of the effect of JPG compression on adversarial images. arXiv preprint arXiv:1608.00853, 2016.
Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. In Advances in Neural Information Processing Systems, pp. 1632-1640, 2016.
Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, and Stefano Soatto. Empirical study of the topology and geometry of deep networks. In IEEE CVPR, 2018.
Jean-Yves Franceschi, Alhussein Fawzi, and Omar Fawzi. Robustness of classifiers to uniform \(\ell_{p}\) and Gaussian noise. arXiv preprint arXiv:1802.07971, 2018.
Justin Gilmer, Ryan P Adams, Ian Goodfellow, David Andersen, and George E Dahl. Motivating the rules of the game for adversarial example research. arXiv preprint arXiv:1807.06732, 2018a.
Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S Schoenholz, Maithra Raghu, Martin Wattenberg, and Ian Goodfellow. Adversarial spheres. arXiv preprint arXiv:1801.02774, 2018b.
Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117, 2017.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
<--- Page Split --->
Dan Hendrycks and Thomas G Dietterich. Benchmarking neural network robustness to common corruptions and surface variations. arXiv preprint arXiv:1807.01697, 2018.
Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018.
Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pp. 97-117. Springer, 2017.
Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Jun Zhu, and Xiaolin Hu. Defense against adversarial attacks using high-level representation guided denoiser. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1778-1787, 2018.
Zihao Liu, Qi Liu, Tao Liu, Yanzhi Wang, and Wujie Wen. Feature distillation: DNN-oriented JPEG compression against adversarial examples. arXiv preprint arXiv:1803.05787, 2018.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial examples. arXiv preprint arXiv:1706.06083, 2017.
Saeed Mahloujifar, Dimitrios I Diochnos, and Mohammad Mahmoody. The curse of concentration in robust learning: Evasion and poisoning attacks from concentration of measure. arXiv preprint arXiv:1809.03063, 2018.
Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo, and James Storer. Deflecting adversarial attacks with pixel deflection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8571-8580, 2018.
Amir Rosenfeld, Richard Zemel, and John K Tsotsos. The elephant in the room. arXiv preprint arXiv:1808.03305, 2018.
Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. arXiv preprint arXiv:1804.11285, 2018.
Yash Sharma and Pin-Yu Chen. Breaking the Madry defense model with L1-based adversarial examples. arXiv preprint arXiv:1710.10733, 2017.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014. URL http://arxiv.org/abs/1312.6199.
Florian Tramer, Alexey Kurakin, Nicolas Papernot, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
Igor Vasiljevic, Ayan Chakrabarti, and Gregory Shakhnarovich. Examining the impact of blur on recognition by convolutional networks. arXiv preprint arXiv:1611.05760, 2016.
Z. Wang and A. C. Bovik. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Processing Magazine, 26(1):98-117, 2009.
Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612, 2018.
Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991, 2017.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
Stephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. Improving the robustness of deep neural networks via stability training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4480-4488, 2016.
<--- Page Split --->
Table 2: Wide ResNet-28-10 (Zagoruyko & Komodakis, 2016) trained and tested on CIFAR-10 with Gaussian noise with standard deviation \(\sigma\)
<table><tr><td>σ</td><td>0.00625</td><td>0.0125</td><td>0.025</td><td>0.075</td><td>0.15</td><td>0.25</td></tr><tr><td>Training Accuracy</td><td>100%</td><td>100%</td><td>100%</td><td>100%</td><td>99.9%</td><td>99.4%</td></tr><tr><td>Test Accuracy</td><td>96.0%</td><td>95.5%</td><td>94.8%</td><td>90.4%</td><td>77.5%</td><td>62.2%</td></tr></table>
Table 3: The models from Section 1 trained and tested on ImageNet with Gaussian noise with standard deviation \(\sigma\); the column labeled 0 refers to a model trained only on clean images.
<table><tr><td>σ</td><td>0</td><td>0.1</td><td>0.2</td><td>0.4</td><td>0.6</td><td>0.8</td></tr><tr><td>Clean Training Accuracy</td><td>91.5%</td><td>90.8%</td><td>89.9%</td><td>87.7%</td><td>86.1%</td><td>84.6%</td></tr><tr><td>Clean Test Accuracy</td><td>75.9%</td><td>75.5%</td><td>75.2%</td><td>74.2%</td><td>73.3%</td><td>72.4%</td></tr><tr><td>Noisy Training Accuracy</td><td>—</td><td>89.0%</td><td>85.7%</td><td>78.3%</td><td>71.7%</td><td>65.2%</td></tr><tr><td>Noisy Test Accuracy</td><td>—</td><td>73.9%</td><td>70.9%</td><td>65.2%</td><td>59.7%</td><td>54.0%</td></tr></table>

## A TRAINING DETAILS

Models trained on CIFAR-10. We trained the Wide-ResNet-28-10 model (Zagoruyko & Komodakis, 2016) using standard data augmentation of flips, horizontal shifts, and crops, in addition to Gaussian noise independently sampled for each image in every minibatch. The models were trained with the open-source code by Cubuk et al. (2018) for 200 epochs, using the same hyperparameters, which we summarize here: a weight decay of 5e-4, a learning rate of 0.1, and a batch size of 128. The learning rate was decayed by a factor of 0.2 at epochs 60, 120, and 160.

Models trained on ImageNet. The ResNet-50 model (He et al., 2016) was trained with a learning rate of 1.6, a batch size of 4096, and a weight decay of 1e-4. During training, random crops and horizontal flips were used, in addition to the Gaussian noise independently sampled for each image in every minibatch. The models were trained for 90 epochs, with the learning rate decayed by a factor of 0.1 at epochs 30, 60, and 80. The learning rate was linearly increased from 0 to the value of 1.6 over the first 5 epochs.
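A minimal sketch of the per-minibatch Gaussian augmentation described above (framework-agnostic; whether noisy pixels are clipped back to the valid range is our assumption, as the text does not say):

```python
import numpy as np

def augment_minibatch(batch, sigma, rng):
    # Gaussian data augmentation: fresh noise is sampled independently for
    # every image in every minibatch, after the standard flip/shift/crop
    # augmentation and before the forward pass.
    noise = rng.normal(0.0, sigma, size=batch.shape)
    return np.clip(batch + noise, 0.0, 1.0)

# Hypothetical use inside a training loop (sigma = 0.4 was used for the
# noise-trained CIFAR-10 model, sigma up to 0.8 for ImageNet):
# for images, labels in train_loader:
#     images = augment_minibatch(images, sigma=0.4, rng=rng)
#     loss = train_step(model, images, labels)
```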
## B NOISE ATTACK DETAILS

Here we provide more detail on the noise distributions considered in Section 5. The stAdv attack defines a flow field over the pixels of the image and shifts the pixels according to this flow. The field is parameterized by a latent \(Z\). When we measure accuracy against our randomized variant of this attack, we randomly sample \(Z\) from a multivariate Gaussian distribution with standard deviation \(\sigma\). To implement this attack we used the open-sourced code from Xiao et al. (2018).

PCA-100 noise first samples noise from a Gaussian distribution \(\mathcal{N}(0, \sigma)\) and then projects this noise onto the first 100 PCA components of the data. For ImageNet, the input dimension is too large to perform a PCA decomposition on the entire dataset, so we first perform a PCA decomposition on 30x30x1 patches taken from different color channels of the data. To generate the noise, we first sample from a 900-dimensional Gaussian, then project this sample onto the basis spanned by the top 100 PCA components, and finally tile this projection to the full 299x299 dimensions of the input. Each color channel is constructed independently in this fashion.

## C TRAINING AND TESTING ON GAUSSIAN NOISE

In Section 5, we mentioned that it is not trivial to learn the distribution of noisy images simply by augmenting the training data distribution. In Tables 2 and 3 we present more information about the performance of the models we trained and tested on various scales of Gaussian noise.
<--- Page Split --->
<table><tr><td></td><td>Clean</td><td>Pepper<br>p = 0.2</td><td>Gaussian<br>σ = 0.3</td><td>stAdv<br>σ = 1.0</td><td>PCA-100<br>σ = 0.3</td></tr><tr><td>Model</td><td>Accuracy</td><td>Accuracy</td><td>Accuracy</td><td>Accuracy</td><td>Accuracy</td></tr><tr><td>Clean</td><td>99.2%</td><td>81.4%</td><td>96.9%</td><td>89.5%</td><td>63.3%</td></tr><tr><td>Adv</td><td>98.4%</td><td>27.5%</td><td>78.2%</td><td>93.2%</td><td>47.1%</td></tr></table>
Table 4: The performance of ordinarily and adversarially trained MNIST models on various noise distributions.

## D RESULTS ON MNIST

MNIST is a special case when it comes to the relationship between small adversarial perturbations and generalization in noise. Indeed, prior work has already observed that an MNIST model can trivially become robust to small \(l_{\infty}\) perturbations by learning to threshold the input (Schmidt et al., 2018), and has observed that the model from Madry et al. (2017) indeed seems to do this. When we investigated this model in different noise distributions, we found that it generalizes worse than a naturally trained model; the results are shown in Table 4. Given that it is possible for a defense to overfit to a particular \(l_{p}\) metric, future work would be strengthened by demonstrating improved generalization outside the natural data distribution.

## E THE GAUSSIAN ISOPERIMETRIC INEQUALITY

Here we will discuss the Gaussian isoperimetric inequality more thoroughly than we did in the text. We will present some of the geometric intuition behind the theorem, and in the end we will show how the version quoted in the text follows from the form in which the inequality is usually stated.

The historically earliest version of the isoperimetric inequality, and probably the easiest to understand, is about areas of subsets of the plane and has nothing to do with Gaussians at all. It is concerned with the following problem: among all measurable subsets of the plane with area \(A\), which ones have the smallest possible perimeter?
One picture to keep in mind is to imagine that you are required to fence off some region of the plane with area \(A\) and you would like to use as little fence as possible. The isoperimetric inequality says that the sets which are most "efficient" in this sense are balls.

Some care needs to be taken with the definition of the word "perimeter" here: what do we mean by the perimeter of some arbitrary subset of \(\mathbb{R}^{2}\)? The definition that we will use involves the concept of the \(\epsilon\)-boundary measure we discussed in the text. For any set \(E\) and any \(\epsilon > 0\), recall that we defined the \(\epsilon\)-extension of \(E\), written \(E_{\epsilon}\), to be the set of all points which are within \(\epsilon\) of a point in \(E\); writing \(A(E)\) for the area of \(E\), we then define the perimeter of \(E\) to be \[\operatorname {surf}(E):= \liminf_{\epsilon \to 0^{+}}\frac{1}{\epsilon}\left(A(E_{\epsilon}) - A(E)\right).\] A good way to convince yourself that this is reasonable is to notice that, for small \(\epsilon\), \(E_{\epsilon} - E\) looks like a small band around the perimeter of \(E\) with width \(\epsilon\).

The isoperimetric inequality can then be formally expressed as giving a bound on the quantity inside the limit in terms of what it would be for a ball. (This is slightly stronger than just bounding the perimeter, that is, bounding the limit itself, but this stronger version is still true.) That is, for any measurable set \(E \subseteq \mathbb{R}^{2}\), \[\frac{1}{\epsilon}\left(A(E_{\epsilon}) - A(E)\right)\geq 2\sqrt{\pi A(E)} +\epsilon \pi .\] It is a good exercise to check that we have equality here when \(E\) is a ball.

There are many generalizations of the isoperimetric inequality. For example, balls are also the subsets of \(\mathbb{R}^{n}\) which have minimal surface area for a given fixed volume, and the corresponding set on the surface of a sphere is a "spherical cap," the set of points inside a circle drawn on the surface of the sphere. The version we are most concerned with in this paper is the generalization to a Gaussian distribution. Rather than trying to relate the volume of \(E\) to the volume of \(E_{\epsilon}\), the Gaussian
<--- Page Split --->
![](images/14_0.jpg)
<center>Figure 6: The Gaussian isoperimetric inequality relates the amount of probability mass contained in a set \(E\) to the amount contained in its \(\epsilon\)-extension \(E_{\epsilon}\). A sample from the Gaussian is equally likely to land in the pink set on the left or the pink set on the right, but the set on the right has a larger \(\epsilon\)-extension. The Gaussian isoperimetric inequality says that the sets with the smallest possible \(\epsilon\)-extensions are half spaces. </center>
isoperimetric inequality is about the relationship between the probabilities that a random sample from the Gaussian distribution lands in \(E\) or in \(E_{\epsilon}\). Other than this, though, the question we are trying to answer is the same: for a given probability \(p\), among all sets \(E\) for which the probability of landing in \(E\) is \(p\), when is the probability of landing in \(E_{\epsilon}\) as small as possible? The Gaussian isoperimetric inequality says that the sets that do this are half spaces. (See Figure 6.)

Just as we did in the plane, it is convenient to express this as a bound on the probability of landing in \(E_{\epsilon}\) for an arbitrary measurable set \(E\). This can be stated as follows:

Theorem.
Consider the standard normal distribution \(q\) on \(\mathbb{R}^{n}\), and let \(E\) be a measurable subset of \(\mathbb{R}^{n}\). Write \[\Phi (t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t}\exp (-x^{2} / 2)\,dx,\] the cdf of the one-variable standard normal distribution. For a measurable subset \(E\subseteq \mathbb{R}^{n}\), write \(\alpha (E) = \Phi^{- 1}(\mathbb{P}_{x\sim q}[x\in E])\). Then for any \(\epsilon \geq 0\), \[\mathbb{P}_{x\sim q}[x\in E_{\epsilon }]\geq \Phi (\alpha (E) + \epsilon).\]

The version we stated in the text involved \(\epsilon_{q}^{*}(E)\), the median distance from a random sample from \(q\) to the closest point in \(E\). This is the same as the smallest \(\epsilon\) for which \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon }] = \frac{1}{2}\). So, when \(\epsilon = \epsilon_{q}^{*}(E)\), the left-hand side of the Gaussian isoperimetric inequality is \(\frac{1}{2}\), giving us that \(\Phi (\alpha + \epsilon_{q}^{*}(E))\leq \frac{1}{2}\). Since \(\Phi^{- 1}\) is a strictly increasing function, applying it to both sides preserves the direction of this inequality. But \(\Phi^{- 1}(\frac{1}{2}) = 0\), so we in fact have that \(\epsilon_{q}^{*}(E)\leq - \alpha\), which is the statement we wanted.

## F VISUALIZING THE OPTIMAL CURVES

The optimal bound given by the isoperimetric inequality places surprisingly strong constraints on the existence of worst-case \(l_{2}\) perturbations given error rates in Gaussian noise. In Figure 7 we plot the optimal curves for various values of \(\sigma\), visualize images sampled from \(x + \mathcal{N}(0,\sigma^{2}I)\), and visualize images at various \(l_{2}\) distances from the unperturbed clean image. Even for very large noise \((\sigma = 0.6)\), test error needs to be less than \(10^{- 15}\) in order for worst-case perturbations to be larger than 5.0.

In order to visualize worst-case perturbations at varying \(l_{2}\) distances, we visualize the image that minimizes visual similarity according to the SSIM metric (Wang & Bovik, 2009). These images are found by performing gradient descent to minimize the SSIM metric subject to the constraint that \(\|x - x_{adv}\|_{2}< \epsilon\).
<--- Page Split --->
![](images/15_0.jpg)
<center>Random Gaussian Perturbations of the Clean Image </center>
![](images/15_1.jpg)
<center>Figure 7: Top: The optimal curves on ImageNet for different values of \(\sigma\). Middle: Visualizing different coordinates of the optimal curves. First, random samples from \(x + \mathcal{N}(0, \sigma^{2} I)\) for different values of \(\sigma\). Bottom: Images at different \(l_{2}\) distances from the unperturbed clean image. Each image visualized is the image at the given \(l_{2}\) distance which minimizes visual similarity according to the SSIM metric. Note that images at \(l_{2} < 5\) have almost no perceptible change from the clean image despite the fact that SSIM visual similarity is minimized. </center>
<--- Page Split --->

## G MORE CHURCH WINDOW PLOTS

In this section we include many more visualizations of the sorts of church window plots we discussed briefly in Section 4. We will show an ordinarily trained model's predictions on several different slices through the same CIFAR test point, which illustrate different aspects of the story told in this paper. These images are best viewed in color.
![](images/16_0.jpg)
<center>Figure 8: A slice through a clean test point (black, center image), the closest error found using PGD (blue, top image), and a random error found using Gaussian noise (red, bottom image).
For this visualization, and all others in this section involving Gaussian noise, we used noise with \(\sigma = 0.05\) , at which the error rate was about \(1.7\%\) . In all of these images, the black circle indicates the distance at which the typical such Gaussian sample will lie. The plot on the right shows the probability that the model assigned to its chosen class. Green indicates a correct prediction, gray or white is an incorrect prediction, and brighter means more confident. </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 9: A slice through a clean test point (black, center image), the closest error found using PGD (blue, top image), and the average of a large number of errors randomly found using Gaussian noise (red, bottom image). The distance from the clean image to the PGD error was 0.12, and the distance from the clean image to the averaged error was 0.33. The clean image is assigned the correct class with probability 99.9995% and the average and PGD errors are assigned the incorrect class with probabilities 55.3% and 61.4% respectively. However, it is clear from this image that moving even a small amount into the orange region will increase these latter numbers significantly. For example, the probability assigned to the PGD error can be increased to 99% by moving it further from the clean image in the same direction by a distance of 0.07. </center> ![](images/17_1.jpg) <center>Figure 10: A slice through a clean test point (black, center image), a random error found using Gaussian noise (blue, top image), and the average of a large number of errors randomly found using Gaussian noise (red, bottom image). </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 11: A slice through a clean test point (black, center image) and two random errors found using Gaussian noise (blue and red, top and bottom images). Note that both random errors lie very close to the decision boundary, and in this slice the decision boundary does not appear to come close to the clean image. </center> ![](images/18_1.jpg) <center>Figure 12: A slice through three random errors found using Gaussian noise. (Note, in particular, that the black point in this visualization does not correspond to the clean image.) </center> ![](images/18_2.jpg) <center>Figure 13: A completely random slice through the clean image. </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 15: The cdf of the error rates in noise for images in the test set. The blue curve corresponds to a model trained and tested on noise with \(\sigma = 0.1\) , and the green curve is for a model trained and tested at \(\sigma = 0.3\) . For example, the left most point on the blue curve indicates that about \(40\%\) of test images had an error rate of at least \(10^{-3}\) . </center> ![](images/19_1.jpg) <center>Figure 14: Some visualizations of the same phenomenon, but using the "pepper noise" discussed in Section 5 rather than Gaussian noise. In all of these visualizations, we see the slice through the clean image (black, center image), the same PGD error as above (red, bottom image), and a random error found using pepper noise (blue, top image). In the visualization on the left, we used an amount of noise that places the noisy image further from the clean image than in the Gaussian cases we considered above. In the visualization in the center, we selected a noisy image which was assigned to neither the correct class nor the class of the PGD error. 
In the visualization on the right, we selected a noisy image which was assigned to the same class as the PGD error. </center> ## H THE DISTRIBUTION OF ERROR RATES IN NOISE Using some of the models that were trained on noise, we computed, for each image in the CIFAR test set, the probably that a random Gaussian perturbation will be misclassified. A histogram is shown in Figure 15. Note that, even though these models were trained on noise, there are still many errors around most images in the test set. While it would have been possible for the reduced performance in noise to be due to only a few test points, we see clearly that this is not the case. <--- Page Split ---> ## I A COLLECTION OF MODEL ERRORS In this section we first show a collection of iid test errors for the ResNet- 50 model on the ImageNet validation set. We also visualize the severity of the different noise distributions considered in this work, along with model errors found by random sampling in these distributions. ![](images/20_0.jpg) <center>Figure 16: A collection of adversarially chosen model errors. These errors appeared in the ImageNet validation set. Despite the high accuracy of the model there remain plenty of errors in the test set that a human would not make. </center> ![](images/20_1.jpg) <center>Figure 17: A collection of adversarially chosen model errors. These errors appeared in the ImageNet validation set. Despite the high accuracy of the model there remain plenty of errors in the test set that a human would not make. </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 18: Visualizing the severity of PCA noise, along with model errors found in this noise distribution. </center> ![](images/21_1.jpg) <center>Figure 19: Visualizing the severity of Gaussian noise, along with model errors found in this noise distribution. Note the model shown here was trained at noise level \(\sigma = .6\) . </center> ![](images/21_2.jpg) <center>Figure 20: Visualizing the severity of pepper noise. </center> ![](images/21_3.jpg) <center>Figure 21: Visualizing the severity of the randomized stAdv attack. </center> <--- Page Split --->
## ABSTRACT

Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to making small modifications of a correctly handled input. At the same time, less surprisingly, image classifiers lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this work, we show that these are two manifestations of the same underlying phenomenon. We establish this connection in several ways. First, we find that adversarial examples exist at the same distance scales we would expect from a linear model with the same performance on corrupted images. Next, we show that Gaussian data augmentation during training improves robustness to small adversarial perturbations and that adversarial training improves robustness to several types of image corruptions. Finally, we present a model-independent upper bound on the distance from a corrupted image to its nearest error given test performance, and we show that in practice we already come close to achieving the bound, so that improving robustness further for the corrupted image distribution requires significantly reducing test error. All of this suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. This yields a computationally tractable evaluation metric for defenses to consider: test error in noisy image distributions.

## 1 INTRODUCTION

State-of-the-art computer vision models can achieve superhuman performance on many image classification tasks. Despite this, these same models still lack the robustness of the human visual system to various forms of image corruption. For example, they are distinctly subhuman when classifying images distorted with additive Gaussian noise (Dodge & Karam, 2017b); they lack robustness to different types of blur, pixelation, and changes in brightness (Hendrycks & Dietterich, 2018) and to random translations of the input (Azulay & Weiss, 2018); and they even make errors when foreign objects are inserted into the field of view (Rosenfeld et al., 2018). At the same time, they are also sensitive to small, worst-case perturbations of the input, so-called "adversarial examples" (Szegedy et al., 2014). This latter phenomenon has struck many in the machine learning community as surprising and has attracted a great deal of research interest, while the former seems to inspire less surprise and has received considerably less attention.

Our classification models make errors on two different sorts of inputs: those found by randomly sampling from some predetermined distribution, and those found by an adversary deliberately searching for the closest error to a given point. In this work, we ask what, if anything, is the difference between these two types of error. Given that our classifiers make errors in these corrupted image distributions, there must be a closest such error; do we find that this closest error appears at the distance we would expect from the model's performance in noise, or is it in fact "surprisingly" close? The answer to this question has strong implications for the way we approach the task of eliminating these two types of errors.
An assumption underlying most of the work on adversarial examples is that solving this problem requires a different set of methods than the ones being developed to improve model generalization. The adversarial defense literature focuses primarily on improving robustness to small perturbations of the input and rarely reports improved generalization in any distribution.

<--- Page Split --->

We claim that, on the contrary, adversarial examples are found at the same distance scales that one should expect given the performance on noise that we see in practice. We explore the connection between small-perturbation adversarial examples and test error in noise in two different ways. First, in Sections 4 and 5, we provide empirical evidence of a close relationship between test performance in Gaussian noise and adversarial perturbations. We show that the errors we find close to the clean image and the errors we sample under Gaussian noise are part of the same large set, and we show some visualizations that illustrate this relationship. (This analysis builds upon prior work (Fawzi et al., 2016; 2018) which makes smoothness assumptions on the decision boundary to relate these two quantities.) This suggests that training procedures designed to improve adversarial robustness might reduce test error in noise and vice versa. We provide results from experiments which show that this is indeed the case: for every model we examined, either both quantities improved or neither did. In particular, a model trained on Gaussian noise shows significant improvements in adversarial robustness, comparable to (but not quite as strong as) a model trained on adversarial examples. We also found that an adversarially trained model on CIFAR-10 shows improved robustness to random image corruptions.

Finally, in Section 6, we establish a relationship between the error rate of an image classification model in the presence of Gaussian noise and the existence of adversarial examples for noisy versions of test set images. In this setting we can actually prove a rigorous, model-independent bound relating these two quantities that is achieved when the error set is a half space, and we see that the models we tested are already quite close to this optimum. Therefore, for these noisy image distributions, our models are already almost as adversarially robust as they can be given the error rates we see, so the only way to defend against adversarial examples is to reduce test error.

In this work we investigate several different models trained on the MNIST, CIFAR-10, and ImageNet datasets. For MNIST and CIFAR-10 we look at the naturally trained and adversarially trained models which have been open-sourced by Madry et al. (2017). We also trained the same model on CIFAR-10 with Gaussian data augmentation. For ImageNet, we investigate a Wide ResNet-50 trained with Gaussian data augmentation. We were unable to study the effects of adversarial training on ImageNet because no robust open-sourced model exists (we considered the models released in Tramer et al. (2017) but found that they only minimally improve robustness to the white-box PGD adversaries we consider here). Additional training details can be found in Appendix A.

## 2 RELATED WORK

The broader field of adversarial machine learning studies general ways in which an adversary may interact with an ML system, and dates back to 2004 (Dalvi et al., 2004; Biggio & Roli, 2018). Since the work of Szegedy et al.
(2014), a subfield has focused specifically on the phenomenon of small adversarial perturbations of the input, or "adversarial examples." In Szegedy et al. (2014) it was proposed that these adversarial examples occupy a dense, measure-zero subset of image space. However, more recent work has provided evidence that this is not true. For example, Fawzi et al. (2016) and Franceschi et al. (2018) show that, under linearity assumptions on the decision boundary, small adversarial perturbations exist whenever test error in noise is nonzero. Gilmer et al. (2018b) showed for a specific data distribution that there is a fundamental upper bound on adversarial robustness in terms of test error. Mahloujifar et al. (2018) generalized these results to a much broader class of distributions.

Recent work has proven for a synthetic data distribution that adversarially robust generalization requires more data (Schmidt et al., 2018). The distribution they consider when proving this result is a mixture of high-dimensional Gaussians. As we will soon discuss, every set \(E\) of small measure in the high-dimensional Gaussian distribution has large boundary measure. Therefore, at least for the data distribution considered, the main conclusion of this work, "adversarially robust generalization requires more data," is a direct corollary of the statement "generalization requires more data."

## 3 TEST ERROR AND ADVERSARIAL ROBUSTNESS

Understanding the relationship between nearby errors and model generalization requires understanding the geometry of the error set of a statistical classifier, that is, the set of points in the input space

<--- Page Split --->

on which the classifier makes an incorrect prediction. In particular, the assertion that these adversarial examples are a distinct phenomenon from test error is equivalent to stating that the error set is in some sense poorly behaved.

We study two functions of a model's error set \(E\). The first quantity, test error under a given distribution of inputs \(q(x)\), is the probability that a random sample from the distribution \(q\) is in \(E\). We will denote this \(\mathbb{P}_{x\sim q}[x\in E]\); reducing this quantity when \(q\) is the natural data distribution is the goal of supervised learning. While one usually takes \(q\) to be the distribution from which the training set was sampled, we will also consider other distributions over the course of this paper. When \(q\) includes points from outside the natural data distribution, a decision needs to be made about the labels in order to define \(E\). The only such cases we will consider in this paper are noisy perturbations of training or test points, and we will always assume that the noise is at a scale which is small enough not to change the label. This assumption is commonly made in works which study model robustness to random corruptions of the input (Hendrycks & Dietterich, 2018; Dodge & Karam, 2017b). Some example noisy images can be found in Figure 7 in the appendix.

The second quantity is called adversarial robustness. For an input \(x\) and a metric on the input space \(d\), let \(d(x, E)\) denote the distance from \(x\) to the nearest point of \(E\). For any \(\epsilon\), let \(E_{\epsilon}\) denote the set \(\{x: d(x, E) < \epsilon \}\), the set of points within \(\epsilon\) of an error. The adversarial robustness of the model is then \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}]\), the probability that a random sample from \(q\) is within distance \(\epsilon\) of some point in the error set.
Reducing this quantity is the goal of much of the adversarial defense literature. When we refer to "adversarial examples" in this paper, we will always mean these nearby errors.

In geometric terms, we can think of \(\mathbb{P}_{x\sim q}[x\in E]\) as a sort of volume of the error set, while \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}]\) is related to its surface area. More directly, \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}]\) is what we will call the \(\epsilon\)-boundary measure, the volume under \(q\) of the region within \(\epsilon\) of the surface or in the interior. The adversarial example phenomenon is then simply that, for small \(\epsilon\), \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}]\) can be large even when \(\mathbb{P}_{x\sim q}[x\in E]\) is small. In other words, most correctly classified inputs are very close to a misclassified point, even though the model is very accurate.

In high-dimensional spaces this phenomenon is not isolated to the error sets of statistical classifiers. In fact, almost every nonempty set of small volume has large \(\epsilon\)-boundary measure, even sets that seem very well-behaved. As a simple example, consider the measure of the set \(E = \{x\in \mathbb{R}^{n}:\|x\|_{2}< 1\}\) under the Gaussian distribution \(q = \mathcal{N}(0,\sigma^{2}I)\). For \(n = 1000\), \(\sigma = 1.05 / \sqrt{n}\), and \(\epsilon = 0.1\), we have \(\mathbb{P}_{x\sim q}[x\in E]\approx 0.02\) and \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}]\approx 0.98\), so most samples from \(q\) will be close to \(E\) despite the fact that \(E\) has relatively little measure under the Gaussian distribution. If we relied only on our low-dimensional spatial intuition, we might be surprised to find how consistently small adversarial perturbations could be found — \(98\%\) of our test points would have an error at distance 0.1 or less even though only \(2\%\) are misclassified. In high dimensions, it is much easier for most points to be close to some set even if that set itself has a small volume. Contrary to what one might expect from our low-dimensional intuition, this does not require the set in question to be somehow pathological; in our example, it was just a ball. Therefore, when we see that some image classifier has errors in some noise distribution \(q\) (so that \(\mathbb{P}_{x\sim q}[x\in E]\) is appreciably bigger than zero), it is possible that \(E_{\epsilon}\) is much larger even if \(E\) is quite simple, so the existence of small worst-case perturbations should be expected given imperfect robustness to large average-case corruptions. In the sections that follow we will make this precise.

## 4 ERRORS IN NOISE SUGGEST ADVERSARIAL EXAMPLES FOR CLEAN IMAGES

The Linear Case. For linear models, the relationship between errors in Gaussian noise and small perturbations of a clean image is exact. For an image \(x\), let \(d(x)\) be the distance from \(x\) to the decision boundary, and let \(\sigma (x,\mu)\) be the noise level \(\sigma\) at which the error rate among samples from \(\mathcal{N}(x, \sigma^{2}I)\) equals some fixed value \(\mu\). (As we mentioned in the introduction, we assume that \(\sigma\) is small enough that adding this noise does not change the "correct" label.) Then we have \(d(x) = -\sigma (x,\mu)\Phi^{- 1}(\mu)\), where \[\Phi (t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t}\exp (-x^{2} / 2)\,dx\] is the cdf of the univariate standard normal distribution.
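The numbers in the ball example and in the linear-case identity above are easy to reproduce. The following is a minimal sketch, assuming Python with scipy; the variable names are ours and the snippet is illustrative rather than part of our experimental code. It uses the fact that \(\|x\|^{2}/\sigma^{2}\) follows a chi-squared distribution with \(n\) degrees of freedom, so no sampling is needed.

```python
import numpy as np
from scipy.stats import chi2, norm

# The ball example from Section 3: E = {x : ||x||_2 < 1} under q = N(0, sigma^2 I).
n = 1000
sigma = 1.05 / np.sqrt(n)
eps = 0.1
p_E = chi2.cdf((1.0 / sigma) ** 2, df=n)              # P[x in E]      ~ 0.02
p_E_eps = chi2.cdf(((1.0 + eps) / sigma) ** 2, df=n)  # P[x in E_eps]  ~ 0.98
# (For x outside the ball, d(x, E) = ||x|| - 1, so E_eps = {x : ||x|| < 1 + eps}.)

# The linear-model identity d(x) = -sigma(x, mu) * Phi^{-1}(mu):
mu = 0.01      # the fixed error rate in noise used below
sigma_x = 0.1  # a hypothetical value of sigma(x, mu) for some test point
d_x = -sigma_x * norm.ppf(mu)  # distance to the decision boundary, ~0.233
print(p_E, p_E_eps, d_x)
```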
<--- Page Split ---> ![](images/3_0.jpg) <center>Figure 1: Comparing the distance to decision boundary with the \(\sigma\) for which the error rate in Gaussian noise is \(1\%\) . Each point represents 50 images from the test set, and the median values for each coordinate are shown. (The PGD attack was run with \(\epsilon = 1\) , so the distances to the decision boundary reported here are cut off at 1.) We also see histograms of the \(x\) coordinates. (A misclassified point is assigned \(\sigma = 0\) .) </center> Note that this equality depends only on the error rate \(\mu\) and the standard deviation \(\sigma\) of a single component, and not directly on the dimension. This might seem at odds with the emphasis on high- dimensional geometry in Section 3. The dimension does appear if we consider the norm of a typical sample from \(\mathcal{N}(0, \sigma^2 I)\) , which is \(\sigma \sqrt{n}\) . As the dimension increases, so does the ratio between the distance to a noisy image and the distance to the decision boundary. The decision boundary of a neural network is, of course, not linear. However, by computing the ratio between \(d(x)\) and \(\sigma (x, \mu)\) for neural networks and comparing it to what it would be for a linear model, we can investigate the question posed in the introduction: do we see adversarial examples at the distances we do because of pathologies in the shape of the error set, or do we find them at about the distances we would expect given the error rates we see in noise? We ran experiments on the error sets of several neural image classifiers and found evidence that is much more consistent with the second of these two possibilities. This relationship was also explored in Fawzi et al. (2016; 2018); here we additionally measure how data augmentation affects this relationship. We examined this relationship for neural networks when \(\mu = 0.01\) . For each test point, we compared \(\sigma (x, \mu)\) to an estimate of \(d(x)\) . It is not actually possible to compute \(d(x)\) precisely for the error set of a neural network. In fact, finding the distance to the nearest error is NP- hard (Katz et al., 2017). Instead, the best we can do is to search for an error using a method like PGD (Madry et al., 2017) and report the nearest error we can find. Figure 1 shows the results for several CIFAR- 10 and ImageNet models, including ordinary trained models, models trained on noise with \(\sigma = 0.4\) , and an adversarially trained CIFAR- 10 model. We also included a line representing how these quantities would be related for a linear model. We can see that none of the models we examined have nearby errors at a scale much smaller than we would expect from a linear model. Indeed, while the adversarially trained model does deviate from the linear case to a greater extent than the others, it does so in the direction of greater distances to the decision boundary. Moreover, we can see from the histograms that both of the interventions that increase \(d(x)\) also increase \(\sigma (x, \mu)\) . So, to explain the distances to the errors we can find using PGD, it is not necessary to rely on any great complexity in the shape of the error set; a linear model with the same error rates in noise would have errors just as close. <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 2: Two-dimensional slices of image space through different triples of points together with the classes assigned by a trained model. 
The black circle in both images has radius 31.4, corresponding to noise with \(\sigma = 31.4 / \sqrt{n} = 0.08\) . </center> Left: An image from the test set (black), a random misclassified Gaussian perturbation at standard deviation 0.08 (blue), and an error found using PGD (red). The estimated measure of the cyan region ("miniature poodle") in the Gaussian distribution is about \(0.1\%\) . The small diamond-shaped region in the center of the image is the \(l_{\infty}\) ball of radius 8/255. Right: A slice at a larger scale with the same black point, together with an error from the clean set (blue) and an adversarially constructed error (red) which are both assigned to the same class ("elephant"). Visualizing the Decision Boundary. In Figure 2 we drew some pictures of two- dimensional slices of image space through several different triples of points. (Similar visualizations have previously appeared in Fawzi et al. (2018), and are called "church window plots.") We see some common themes. In the figure on the left, we see that an error found in Gaussian noise lies in the same connected component of the error set as an error found using PGD, and that at this scale that component visually resembles a half space. This figure also illustrates the relationship between test error and adversarial robustness. To measure adversarial robustness is to ask whether or not there are any errors in the \(l_{\infty}\) ball — the small diamond- shaped region in the center of the image — and to measure test error in noise is to measure the volume of the error set in the defined noise distribution. At least in this slice, nothing distinguishes the PGD error from any other point in the error set apart from its proximity to the center point. The figure on the right shows a different slice through the same test point but at a larger scale. This slice includes an ordinary test error along with an adversarial perturbation of the center image constructed with the goal of maintaining visual similarity while having a large \(l_{2}\) distance. The two errors are both classified (incorrectly) by the model as "elephant." This adversarial error is actually farther from the center than the test error, but they still clearly belong to the same connected component. This suggests that defending against worst- case content- preserving perturbations (Gilmer et al., 2018a) requires removing all errors at a scale comparable to the distance between unrelated pairs of images. Many more church window plots can be found in Appendix G. ## 5 COMPARING ADVERSARIAL TRAINING TO TRAINING ON NOISE For a linear model, improving generalization in the presence of noise is equivalent to increasing the distance to the decision boundary. The results from the previous section suggest that a similar relationship should hold for other statistical classifiers, including neural networks. That is, augmenting the training data distribution with noisy images ought to increase the distance to the decision boundary, and augmenting the training distribution with small- perturbation adversarial examples should improve performance in noise. Here we present evidence that this is the case. We analyzed the performance of the models described in Section 1 on four different noise distributions: two types of Gaussian noise, pepper noise (Hendrycks & Dietterich, 2018), and a randomized variant of the stAdv adversarial attack introduced in Xiao et al. (2018). 
We used both ordinary, spherical Gaussian noise and what we call "PCA noise," which is Gaussian noise supported only on the subspace spanned by the first 100 principal components of the training set. Pepper noise randomly assigns channels of the image to 1 with some fixed probability. Details of the stAdv attack can be found in Appendix B, but it is visually similar to Gaussian blurring, where \(\sigma\) controls the severity of the blurring. Example images that have undergone each of the noise transformations we used can be found in Appendix I. Each model was also tested for \(l_{p}\) robustness with a variety of norms and \(\epsilon\)'s using the same PGD attack as in Section 4. For CIFAR-10, standard Gaussian data augmentation yields comparable (but slightly worse) results to adversarial training on all considered metrics.

<--- Page Split --->

<table><tr><td>Dataset</td><td colspan="4">CIFAR-10</td><td colspan="3">ImageNet</td></tr><tr><td>Training</td><td>Vanilla</td><td>Noise<br>σ=0.1</td><td>Noise<br>σ=0.4</td><td>Adv</td><td>Vanilla</td><td>Noise<br>σ=0.4</td><td>Noise<br>σ=0.8</td></tr><tr><td>Noise Type</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Clean</td><td>95.0%</td><td>93.5%</td><td>84.0%</td><td>87.3%</td><td>76.0%</td><td>74.4%</td><td>72.6%</td></tr><tr><td>PCA100, σ=0.2</td><td>93.2%</td><td>92.3%</td><td>83.6%</td><td>86.5%</td><td>45.5%</td><td>56.5%</td><td>59.7%</td></tr><tr><td>PCA100, σ=0.4</td><td>82.6%</td><td>83.1%</td><td>81.0%</td><td>80.6%</td><td>13.5%</td><td>17.7%</td><td>19.7%</td></tr><tr><td>Pepper, p=0.1</td><td>20.2%</td><td>53.3%</td><td>81.2%</td><td>38.4%</td><td>31.3%</td><td>70.0%</td><td>69.1%</td></tr><tr><td>Pepper, p=0.3</td><td>12.3%</td><td>18.9%</td><td>58.0%</td><td>21.1%</td><td>5.4%</td><td>56.0%</td><td>61.5%</td></tr><tr><td>Gaussian, σ=0.1</td><td>29.1%</td><td>89.0%</td><td>85.1%</td><td>77.8%</td><td>60.7%</td><td>73.3%</td><td>71.7%</td></tr><tr><td>Gaussian, σ=0.2</td><td>13.5%</td><td>38.8%</td><td>83.5%</td><td>42.1%</td><td>27.9%</td><td>70.5%</td><td>69.3%</td></tr><tr><td>stAdv, σ=0.5</td><td>52.3%</td><td>84.4%</td><td>77.9%</td><td>81.7%</td><td>57.3%</td><td>67.3%</td><td>69.0%</td></tr><tr><td>stAdv, σ=2.0</td><td>17.4%</td><td>30.6%</td><td>52.1%</td><td>27.0%</td><td>11.4%</td><td>27.2%</td><td>31.3%</td></tr><tr><td>lp robustness</td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>l2, ε=0.5</td><td>0.3%</td><td>39.2%</td><td>54.5%</td><td>58.3%</td><td>7.9%</td><td>43.8%</td><td>47.7%</td></tr><tr><td>l2, ε=1.0</td><td>0.0%</td><td>9.5%</td><td>25.1%</td><td>29.7%</td><td>0.5%</td><td>16.8%</td><td>22.5%</td></tr><tr><td>l∞, ε=1/255</td><td>26.2%</td><td>84.4%</td><td>76.6%</td><td>83.5%</td><td>0.8%</td><td>20.1%</td><td>25.0%</td></tr><tr><td>l∞, ε=4/255</td><td>0.4%</td><td>39.8%</td><td>49.6%</td><td>68.3%</td><td>0.0%</td><td>0.1%</td><td>0.1%</td></tr><tr><td>l∞, ε=8/255</td><td>0.0%</td><td>10.3%</td><td>20.0%</td><td>45.4%</td><td>0.0%</td><td>0.0%</td><td>0.0%</td></tr></table>

Table 1: The performance of the models we considered under various noise distributions, together with our measurements of those models' robustness to small \(l_{p}\) perturbations. For all the robustness tests we used PGD with 100 steps and a step size of \(\epsilon /25\). The adversarially trained CIFAR-10 model is the open-sourced model from Madry et al. (2017).
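The two simplest corruption models above fit in a few lines of code. Below is a minimal sketch, assuming images are stored as float arrays scaled to \([0, 1]\); the function names and the clipping convention are ours, not details of our training pipeline.

```python
import numpy as np

def gaussian_augment(batch, sigma, rng=np.random):
    """Additive Gaussian noise, sampled independently for every image."""
    noisy = batch + rng.normal(scale=sigma, size=batch.shape)
    return np.clip(noisy, 0.0, 1.0)

def pepper_augment(batch, p, rng=np.random):
    """Pepper noise: each channel value is set to 1 with probability p."""
    mask = rng.uniform(size=batch.shape) < p
    return np.where(mask, 1.0, batch)
```

When training on noise, fresh noise is drawn for each image in every minibatch (see Appendix A), so the model never sees the same corrupted image twice.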
For ImageNet, we found that Gaussian data augmentation improves both robustness to small \(l_{2}\) perturbations and generalization in all of the noise distributions we considered; the results are shown in Table 1. We found that performing data augmentation with heavy Gaussian noise (\(\sigma = 0.4\) for CIFAR-10 and \(\sigma = 0.8\) for ImageNet) worked best. The adversarially trained CIFAR-10 models were trained in the \(l_{\infty}\) metric, and they performed especially well on worst-case perturbations in this metric. Prior work has observed that Gaussian data augmentation helps small-perturbation robustness on MNIST (Kannan et al., 2018), but to our knowledge we are the first to measure this on CIFAR-10 and ImageNet.

Neither augmentation method shows much improved generalization in PCA noise. We hypothesize that adversarially trained models learn to project away high-frequency information in the input; this would do little to improve performance in PCA noise, which is supported on the low-frequency subspace of the data distribution. Further work would be required to establish this.

We also considered the MNIST adversarially trained model from Madry et al. (2017) and found it to be a special case where, although robustness to small perturbations increased, generalization in noise did not improve. This is because this model violates the linearity assumption discussed in Section 4. This overfitting to the \(l_{\infty}\) metric has been observed in prior work (Sharma & Chen, 2017). More details can be found in Appendix D. Although no \(l_{p}\)-robust open-sourced ImageNet model exists, recent work has found that the adversarially trained models on Tiny ImageNet from Kannan et al. (2018) generalize very well on a large suite of common image corruptions (Hendrycks & Dietterich, 2018).

Failed Adversarial Defenses Do Not Improve Generalization in Noise. We performed a similar analysis on seven previously published adversarial defense strategies. These methods have already been shown to result in gradient masking, which causes standard optimization procedures to fail to find errors, rather than actually improving small-perturbation robustness (Athalye et al., 2018). We find that these methods also show no improved generalization in Gaussian noise. The results are shown in Figure 3.

<--- Page Split --->

![](images/6_0.jpg)

<center>Figure 3: The performance in Gaussian noise of several previously published defenses for ImageNet, along with a model trained on Gaussian noise at \(\sigma = 0.4\) for comparison. For each point we ran ten trials; the error bars show one standard deviation. All of these defenses are now known not to improve adversarial robustness (Athalye et al., 2018). The defense strategies include bitdepth reduction (Guo et al., 2017), JPEG compression (Guo et al., 2017; Dziugaite et al., 2016; Liu et al., 2018; Aydemir et al., 2018; Das et al., 2018; 2017), Pixel Deflection (Prakash et al., 2018), total variance minimization (Guo et al., 2017), representation-guided denoising (Liao et al., 2018), and random resizing and random padding of the input image (Xie et al., 2017). </center>

Given how easy it is for a method to show improved robustness to standard optimization procedures without changing the decision boundary in any meaningful way, we strongly recommend that future defense efforts evaluate on out-of-distribution inputs such as the noise distributions we consider here.
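To make the recommended evaluation concrete, the sketch below estimates test error in Gaussian noise by Monte Carlo. Here `model` is assumed to map a batch of images to predicted labels, and the clipping to \([0, 1]\) is our assumption; nothing in this snippet depends on model internals or gradients.

```python
import numpy as np

def error_rate_in_noise(model, images, labels, sigma,
                        samples_per_image=100, rng=np.random):
    """Monte Carlo estimate of the error rate under N(x0, sigma^2 I),
    averaged over a collection of test images."""
    errors, total = 0, 0
    for x0, y in zip(images, labels):
        noise = rng.normal(scale=sigma, size=(samples_per_image,) + x0.shape)
        batch = np.clip(x0[None] + noise, 0.0, 1.0)
        errors += int((model(batch) != y).sum())
        total += samples_per_image
    return errors / total
```

Because this estimate involves no inner optimization, it cannot be fooled by masked gradients, unlike an evaluation built on gradient-based attacks.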
The current standard practice of evaluating solely on gradient-based attack algorithms is making progress more difficult to measure.

Obtaining Zero Test Error in Noise is Nontrivial. It is important to note that applying Gaussian data augmentation does not reduce error rates in Gaussian noise to zero. For example, we performed Gaussian data augmentation on CIFAR-10 at \(\sigma = 0.15\) and obtained \(99.9\%\) training accuracy but only \(77.5\%\) test accuracy in the same noise distribution. (For comparison, the naturally trained model obtains \(95\%\) clean test accuracy.) Previous work (Dodge & Karam, 2017b) has also observed that obtaining perfect generalization in large Gaussian noise is nontrivial. This mirrors Schmidt et al. (2018), which found that small-perturbation robustness did not generalize to the test set. This is perhaps not surprising given that error rates on the clean test set are also nonzero. Although the model is in some sense "superhuman" with respect to clean test accuracy, it still makes many mistakes on the clean test set that a human would never make. We collected some examples in Appendix I. More detailed results on training and testing in noise can be found in Appendices C and H.

## 6 ERRORS IN NOISE IMPLY ADVERSARIAL EXAMPLES FOR NOISY IMAGES

The Gaussian Isoperimetric Inequality. Let \(x\) be a correctly classified image and consider the distribution \(q\) of Gaussian perturbations of \(x\) with some fixed variance \(\sigma^2 I\). For this distribution, there is a precise sense in which small adversarial perturbations exist only because test error is nonzero. That is, given the error rates we actually observe on noisy images, most noisy images must be close to the error set. This result holds completely independently of any assumptions about the model and follows from a fundamental geometric property of the high-dimensional Gaussian distribution, which we will now make precise.

For an image \(x\) and the corresponding noisy image distribution \(q\), let \(\epsilon_{q}^{*}(E)\) be the median distance from one of these noisy images to the nearest error. (In other words, it is the \(\epsilon\) for which \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon}] = \frac{1}{2}\).) As before, let \(\mathbb{P}_{x\sim q}[x\in E]\) be the probability that a random Gaussian perturbation of \(x\) lies in \(E\).

<--- Page Split --->

![](images/7_0.jpg)

<center>Figure 4: The adversarial example phenomenon occurs for noisy images as well as clean ones. Starting with a noisy image that is correctly classified, one can apply carefully crafted imperceptible noise to it which causes the model to output an incorrect answer. This occurs even though the error rate among random Gaussian perturbations of this image is small (less than \(0.1\%\) for the ImageNet panda shown above). In fact, we prove that the presence of errors in Gaussian noise logically implies that small adversarial perturbations exist around noisy images. The only way to "defend" against such adversarial perturbations is to reduce the error rate in Gaussian noise. </center>

It is possible to deduce a bound relating these two quantities from the Gaussian isoperimetric inequality (Borell, 1975). The form we will use is:

Theorem (Gaussian Isoperimetric Inequality). Let \(q = \mathcal{N}(0,\sigma^{2}I)\) be the Gaussian distribution on \(\mathbb{R}^{n}\) with variance \(\sigma^{2}I\), and let \(\mu = \mathbb{P}_{x\sim q}[x\in E]\).
Write \(\Phi (t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t}\exp (-x^{2} / 2)\,dx\), the cdf of the univariate standard normal distribution. If \(\mu \geq \frac{1}{2}\), then \(\epsilon_{q}^{*}(E) = 0\). Otherwise, \(\epsilon_{q}^{*}(E)\leq - \sigma \Phi^{- 1}(\mu)\), with equality when \(E\) is a half space.

In particular, for any machine learning model for which the error rate in the distribution \(q\) is at least \(\mu\), the median distance to the nearest error is at most \(- \sigma \Phi^{- 1}(\mu)\). (Note that \(\Phi^{- 1}(\mu)\) is negative when \(\mu < \frac{1}{2}\).) Because each coordinate of a multivariate normal is a univariate normal, \(- \Phi^{- 1}(\mu)\) is the distance to a half space for which the error rate is \(\mu\) when \(\sigma = 1\). (We have the same indirect dependence on dimension here as we saw in Section 4: the distance to a typical sample from the Gaussian is \(\sigma \sqrt{n}\).) In Appendix E we will give the more common statement of the Gaussian isoperimetric inequality along with a proof of the version presented here.

In geometric terms, we can say that a half space is the set \(E\) of a fixed volume that minimizes the surface area under the Gaussian measure, similar to how a circle is the set of fixed area that minimizes the perimeter. So among models with some fixed test error \(\mathbb{P}_{x\sim q}[x\in E]\), the most robust on this distribution are the ones whose error set is a half space.

Comparing Neural Networks to the Isoperimetric Bound. We evaluated these quantities for several models and many images from the CIFAR-10 and ImageNet test sets. Just like for clean images, we found that most noisy images are both correctly classified and very close to a visually similar image which is not. (See Figure 4.) As we mentioned in Section 4, it is not actually possible to compute \(\epsilon_{q}^{*}\) precisely for the error set of a neural network, so we again report an estimate. For each test image, we took 1,000 samples from the corresponding Gaussian and estimated \(\epsilon_{q}^{*}\) using PGD with 200 steps on each sample and reported the median. We find that for the five models we considered on CIFAR-10 and ImageNet, the relationship between our estimate of \(\epsilon_{q}^{*}(E)\) and \(\mathbb{P}_{x\sim q}[x\in E]\) is already close to optimal. This is visualized in Figure 5. Note that in both cases, adversarial training does improve robustness to small perturbations, but the gains are primarily because error rates in Gaussian noise were dramatically improved, and less because the surface area of the error set was decreased. In particular, many test points do not appear on these graphs because error rates in noise were so low that we did not find any errors among the 100,000 samples we used. For example, for the naturally trained CIFAR model, about \(1\%\) of the points lie off the left edge of the plot, compared to about \(59\%\) for the adversarially trained model and \(70\%\) for the model trained on noise. This shows that adversarial training on small perturbations improved generalization to large random perturbations, as the isoperimetric inequality says it must.

<--- Page Split --->

![](images/8_0.jpg)

<center>Figure 5: These plots give two ways to visualize the relationship between the error rate in noise and the distance from noisy points to the decision boundary (found using PGD). Each point on each plot represents one image from the test set.
On the left, we compare the error rate of the model on Gaussian perturbations at \(\sigma = 0.1\) to the distance from the median noisy point to its nearest error. On the right, we compare the \(\sigma\) at which the error rate is 0.01 to this same median distance. (The plots on the right are therefore similar to the plots in Figure 1.) The thick black line at the top of each plot is the upper bound provided by the Gaussian isoperimetric inequality. We include data from a model trained on clean images, an adversarially trained model, and a model trained on Gaussian noise \((\sigma = 0.4)\). As mentioned in Section 1, we were unable to run this experiment on an adversarially robust ImageNet model. </center>

Not all models or functions will be this close to optimal. As a simple example, if we took one of the CIFAR models shown in Figure 5 and modified it so that the model outputs an error whenever each coordinate of the input is an integer multiple of \(10^{-6}\), the resulting model would have an error within \(\frac{10^{-6}}{2}\sqrt{\dim(\mathrm{CIFAR})}\approx 2.8\times 10^{-5}\) of every point. In this case, adversarial examples would be a distinct phenomenon from test error, since \(\epsilon_{q}^{*}(E)\) would be far from optimal.

The contrast between these two settings is important for adversarial defense design. If adversarial examples arose from a badly behaved decision boundary (as in the latter case), then it would make sense to design defenses which attempt to smooth out the decision boundary in some way. However, because we observe that image models are already close to the optimal bound on robustness for a fixed error rate in noise, future defense design should attempt to improve generalization in noise. Currently there is a considerable subset of the adversarial defense literature which develops methods that would remove any small "pockets" of errors but which do not improve model generalization. One example is Xie et al. (2017), which proposes randomly resizing the input to the network as a defense strategy. Unfortunately, this defense, like many others, has been shown to be ineffective against stronger adversaries (Carlini & Wagner, 2017a;b; Athalye et al., 2018).

## 7 CONCLUSION

We proved a fundamental relationship between generalization in noisy image distributions and the existence of small adversarial perturbations. By appealing to the Gaussian isoperimetric inequality, we formalized the notion of what it means for a decision boundary to be badly behaved. We showed that, for noisy images, there is very little room to improve robustness without also decreasing the volume of the error set, and we provided evidence that small perturbations of clean images can be explained in a similar way. These results show that small-perturbation adversarial robustness is closely related to generalization in the presence of noise and that future defense efforts can measure progress by measuring test error in different noise distributions.

<--- Page Split --->

Indeed, several such noise distributions have already been proposed, and other researchers have developed methods which improve generalization in these distributions (Hendrycks & Dietterich, 2018; Dodge & Karam, 2017b;a; Vasiljevic et al., 2016; Zheng et al., 2016).
Our work suggests that adversarial defense and improving generalization in noise involve attacking the same set of errors in two different ways — the first community tries to remove the errors on the boundary of the error set while the second community tries to reduce the volume of the error set. The isoperimetric inequality connects these two perspectives, and suggests that improvements in adversarial robustness should result in improved generalization in noise and vice versa. Adversarial training on small perturbations on CIFAR- 10 also improved generalization in noise, and training on noise improved robustness to small perturbations. In the introduction we referred to a question from Szegedy et al. (2014) about why we find errors so close to our test points while the test error itself is so low. We can now suggest an answer: despite what our low- dimensional visual intuition may lead us to believe, these errors are not in fact unnaturally close given the error rates we observe in noise. There is a sense, then, in which we simply haven’t reduced the test error enough to expect to have removed most nearby errors. While we focused on the Gaussian distribution, similar conclusions can be made about other distributions. In general, in high dimensions, the \(\epsilon\) - boundary measure of a typical set is large even when its volume is small, and this observation does not depend on anything specific about the Gaussian distribution. The Gaussian distribution is a special case in that we can easily prove that all sets will have large \(\epsilon\) - boundary measure. Mahloujifar et al. (2018) proved a similar theorem for a larger class of distributions. For other data distributions not every set has large \(\epsilon\) - boundary measure, but under some additional assumptions it still holds that most sets do. An investigation of this relationship on the MNIST distribution can be found in Gilmer et al. (2018b, Appendix G). We believe it would be beneficial for the adversarial defense literature to start reporting generalization in noisy image distributions, such as the common corruption benchmark introduced in Hendrycks & Dietterich (2018), rather than the current practice of only reporting empirical estimates of adversarial robustness. There are several reasons for this recommendation. 1. Measuring test error in noise is significantly easier than measuring adversarial robustness — computing adversarial robustness perfectly requires solving an NP-hard problem for every point in the test set (Katz et al., 2017). Since Szegedy et al. (2014), hundreds of adversarial defense papers have been published. To our knowledge, only one (Madry et al., 2017) has reported robustness numbers which were confirmed by a third party. We believe the difficulty of measuring robustness under the usual definition has contributed to this unproductive situation. 2. Measuring test error in noise would also allow us to determine whether or not these methods improve robustness in a trivial way, such as how the robust MNIST model learned to threshold the input, or whether they have actually succeeded in improving generalization outside the natural data distribution. 3. All of the failed defense strategies we examined failed to improve generalization in noise. For this reason, we should be highly skeptical of defense strategies that only claim improved \(l_{p}\) -robustness but do not demonstrate robustness in more general settings. 4. 
Finally, if the goal is improving the security of our models in adversarial settings, errors in the presence of noise are already indicative that our models are not secure. Until our models are perfectly robust in the presence of average-case corruptions, they will not be robust in worst-case settings. The usefulness of \(l_{p}\)-robustness in realistic threat models is limited when attackers are not constrained to making small modifications.

The interest in measuring \(l_{p}\) robustness arose from a sense of surprise that errors could be found so close to correctly classified points. But from the perspective described in this paper, the phenomenon is less surprising. Statistical classifiers make a large number of errors outside the data on which they were trained, and small adversarial perturbations are simply the nearest ones.

## A TRAINING DETAILS

Models trained on CIFAR-10. We trained the Wide-ResNet-28-10 model (Zagoruyko & Komodakis, 2016) using standard data augmentation of flips, horizontal shifts, and crops, in addition to Gaussian noise independently sampled for each image in every minibatch. The models were trained with the open-source code by Cubuk et al. (2018) for 200 epochs, using the same hyperparameters, which we summarize here: a weight decay of 5e-4, a learning rate of 0.1, and a batch size of 128. The learning rate was decayed by a factor of 0.2 at epochs 60, 120, and 160.

Models trained on ImageNet. The ResNet-50 model (He et al., 2016) was trained with a learning rate of 1.6, a batch size of 4096, and a weight decay of 1e-4. During training, random crops and horizontal flips were used, in addition to the Gaussian noise independently sampled for each image in every minibatch. The models were trained for 90 epochs, and the learning rate was decayed by a factor of 0.1 at epochs 30, 60, and 80. The learning rate was linearly increased from 0 to the value of 1.6 over the first 5 epochs.

## B NOISE ATTACK DETAILS

Here we provide more detail on the noise distributions considered in Section 5. The stAdv attack defines a flow field over the pixels of the image and shifts the pixels according to this flow. The field is parameterized by a latent \(Z\). When we measure accuracy against our randomized variant of this attack, we randomly sample \(Z\) from a multivariate Gaussian distribution with standard deviation \(\sigma\). To implement this attack we used the open-sourced code from Xiao et al. (2018).

PCA-100 noise first samples noise from a Gaussian distribution \(\mathcal{N}(0,\sigma^{2}I)\) and then projects this noise onto the first 100 PCA components of the data. For ImageNet, the input dimension is too large to perform a PCA decomposition on the entire dataset, so we first perform a PCA decomposition on 30x30x1 patches taken from different color channels of the data. To generate the noise, we first sample from a 900-dimensional Gaussian, then project this sample into the basis spanned by the top 100 PCA components, and finally tile this projection to the full 299x299 dimension of the input. Each color channel is constructed independently in this fashion.

## C TRAINING AND TESTING ON GAUSSIAN NOISE

In Section 5, we mentioned that it is not trivial to learn the distribution of noisy images simply by augmenting the training data distribution. In Tables 2 and 3 we present more information about the performance of the models we trained and tested on various scales of Gaussian noise.
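To make the PCA-100 construction from Appendix B concrete, here is a minimal sketch for the CIFAR-sized case. The helper name is ours, and we use the fact that projecting isotropic Gaussian noise onto a subspace is equivalent to sampling the subspace coefficients directly; the patch-based ImageNet variant follows the same pattern.

```python
import numpy as np

def make_pca100_sampler(train_images, sigma, rng=np.random):
    """PCA-100 noise: Gaussian noise supported on the subspace spanned by
    the top 100 principal components of the flattened training set."""
    flat = train_images.reshape(len(train_images), -1).astype(np.float64)
    flat -= flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    basis = vt[:100]                           # rows are principal directions

    def sample(image_shape):
        z = rng.normal(scale=sigma, size=100)  # coefficients in the subspace
        return (z @ basis).reshape(image_shape)

    return sample
```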
<--- Page Split --->

<table><tr><td></td><td>Clean</td><td>Pepper<br>p = 0.2</td><td>Gaussian<br>σ = 0.3</td><td>stAdv<br>σ = 1.0</td><td>PCA-100<br>σ = 0.3</td></tr><tr><td>Model</td><td>Accuracy</td><td>Accuracy</td><td>Accuracy</td><td>Accuracy</td><td>Accuracy</td></tr><tr><td>Clean</td><td>99.2%</td><td>81.4%</td><td>96.9%</td><td>89.5%</td><td>63.3%</td></tr><tr><td>Adv</td><td>98.4%</td><td>27.5%</td><td>78.2%</td><td>93.2%</td><td>47.1%</td></tr></table>

Table 4: The performance of ordinarily and adversarially trained MNIST models on various noise distributions.

## D RESULTS ON MNIST

MNIST is a special case when it comes to the relationship between small adversarial perturbations and generalization in noise. Indeed, prior work has already observed that an MNIST model can trivially become robust to small \(l_{\infty}\) perturbations by learning to threshold the input (Schmidt et al., 2018), and has observed that the model from Madry et al. (2017) indeed seems to do this. When we investigated this model in different noise distributions, we found that it generalizes worse than a naturally trained model; the results are shown in Table 4. Given that it is possible for a defense to overfit to a particular \(l_{p}\) metric, future work would be strengthened by demonstrating improved generalization outside the natural data distribution.

## E THE GAUSSIAN ISOPERIMETRIC INEQUALITY

Here we will discuss the Gaussian isoperimetric inequality more thoroughly than we did in the text. We will present some of the geometric intuition behind the theorem, and in the end we will show how the version quoted in the text follows from the form in which the inequality is usually stated.

The historically earliest version of the isoperimetric inequality, and probably the easiest to understand, is about areas of subsets of the plane and has nothing to do with Gaussians at all. It is concerned with the following problem: among all measurable subsets of the plane with area \(A\), which ones have the smallest possible perimeter? One picture to keep in mind is to imagine that you are required to fence off some region of the plane with area \(A\) and you would like to use as little fence as possible. The isoperimetric inequality says that the sets which are most "efficient" in this sense are balls.

Some care needs to be taken with the definition of the word "perimeter" here — what do we mean by the perimeter of some arbitrary subset of \(\mathbb{R}^{2}\)? The definition that we will use involves the concept of the \(\epsilon\)-boundary measure we discussed in the text. For any set \(E\) and any \(\epsilon > 0\), recall that we defined the \(\epsilon\)-extension of \(E\), written \(E_{\epsilon}\), to be the set of all points which are within \(\epsilon\) of a point in \(E\); writing \(A(E)\) for the area of \(E\), we then define the perimeter of \(E\) to be \[\operatorname{surf}(E):= \liminf_{\epsilon \to 0}\frac{1}{\epsilon}\left(A(E_{\epsilon}) - A(E)\right).\] A good way to convince yourself that this is reasonable is to notice that, for small \(\epsilon\), \(E_{\epsilon} - E\) looks like a small band around the perimeter of \(E\) with width \(\epsilon\). The isoperimetric inequality can then be formally expressed as giving a bound on the quantity inside the limit in terms of what it would be for a ball. (This is slightly stronger than just bounding the perimeter, that is, bounding the limit itself, but this stronger version is still true.)
That is, for any measurable set \(E \subseteq \mathbb{R}^{2}\), \[\frac{1}{\epsilon} (A(E_{\epsilon}) - A(E))\geq 2\sqrt{\pi A(E)} +\epsilon \pi .\] It is a good exercise to check that we have equality here when \(E\) is a ball.

There are many generalizations of the isoperimetric inequality. For example, balls are also the subsets of \(\mathbb{R}^{n}\) which have minimal surface area for a given fixed volume, and the corresponding set on the surface of a sphere is a "spherical cap," the set of points inside a circle drawn on the surface of the sphere. The version we are most concerned with in this paper is the generalization to a Gaussian distribution. Rather than trying to relate the volume of \(E\) to the volume of \(E_{\epsilon}\), the Gaussian isoperimetric inequality is about the relationship between the probabilities that a random sample from the Gaussian distribution lands in \(E\) or in \(E_{\epsilon}\). Other than this, though, the question we are trying to answer is the same: for a given probability \(p\), among all sets \(E\) for which the probability of landing in \(E\) is \(p\), when is the probability of landing in \(E_{\epsilon}\) as small as possible? The Gaussian isoperimetric inequality says that the sets that do this are half spaces. (See Figure 6.)

<--- Page Split --->

![](images/14_0.jpg)

<center>Figure 6: The Gaussian isoperimetric inequality relates the amount of probability mass contained in a set \(E\) to the amount contained in its \(\epsilon\)-extension \(E_{\epsilon}\). A sample from the Gaussian is equally likely to land in the pink set on the left or the pink set on the right, but the set on the right has a larger \(\epsilon\)-extension. The Gaussian isoperimetric inequality says that the sets with the smallest possible \(\epsilon\)-extensions are half spaces. </center>

Just as we did in the plane, it is convenient to express this as a bound on the probability of landing in \(E_{\epsilon}\) for an arbitrary measurable set \(E\). This can be stated as follows:

Theorem. Consider the standard normal distribution \(q\) on \(\mathbb{R}^{n}\). Write \[\Phi (t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t}\exp (-x^{2} / 2)\,d x,\] the cdf of the one-variable standard normal distribution. For a measurable subset \(E\subseteq \mathbb{R}^{n}\), write \(\alpha (E) = \Phi^{- 1}(\mathbb{P}_{x\sim q}[x\in E])\). Then, for any \(\epsilon \geq 0\), \[\mathbb{P}_{x\sim q}[x\in E_{\epsilon }]\geq \Phi (\alpha (E) + \epsilon).\]

The version we stated in the text involved \(\epsilon_{q}^{*}(E)\), the median distance from a random sample from \(q\) to the closest point in \(E\). This is the same as the smallest \(\epsilon\) for which \(\mathbb{P}_{x\sim q}[x\in E_{\epsilon }] = \frac{1}{2}\). So, when \(\epsilon = \epsilon_{q}^{*}(E)\), the left-hand side of the Gaussian isoperimetric inequality is \(\frac{1}{2}\), giving us that \(\Phi (\alpha (E) + \epsilon_{q}^{*}(E))\leq \frac{1}{2}\). Since \(\Phi^{- 1}\) is a strictly increasing function, applying it to both sides preserves the direction of this inequality. But \(\Phi^{- 1}(\frac{1}{2}) = 0\), so we in fact have that \(\epsilon_{q}^{*}(E)\leq - \alpha (E)\), which is the statement we wanted.

## F VISUALIZING THE OPTIMAL CURVES

The optimal bound according to the isoperimetric inequality gives surprisingly strong bounds in terms of the existence of worst-case \(l_{2}\) perturbations and error rates in Gaussian noise.
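Concretely, the optimal curves plotted in Figure 7 below are just the map \(\mu \mapsto -\sigma \Phi^{-1}(\mu)\) from the theorem above. A minimal sketch, assuming scipy (the function name is ours):

```python
import numpy as np
from scipy.stats import norm

def optimal_eps(mu, sigma):
    """Isoperimetric ceiling on the median distance to the nearest error:
    eps*_q(E) <= -sigma * Phi^{-1}(mu), with equality for a half space."""
    mu = np.asarray(mu, dtype=float)
    return np.where(mu >= 0.5, 0.0, -sigma * norm.ppf(mu))

# At sigma = 0.6, even an error rate of 1e-15 in noise forces the median
# distance to the nearest error to be at most about 4.8:
print(optimal_eps(1e-15, 0.6))  # ~4.77
```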
In Figure 7 we plot the optimal curves for various values of \(\sigma\), visualize images sampled from \(x + N(0, \sigma I)\), and visualize images at various \(l_{2}\) distances from the unperturbed clean image. Even for very large noise \((\sigma = 0.6)\), test error needs to be less than \(10^{-15}\) in order for worst-case perturbations to be larger than 5.0. In order to visualize worst-case perturbations at varying \(l_{2}\) distances, we visualize an image that minimizes similarity according to the SSIM metric (Wang & Bovik, 2009). These images are found by performing gradient descent to minimize the SSIM metric subject to the constraint that \(|x - x_{adv}|_{2} < \epsilon\). <--- Page Split ---> ![](images/15_0.jpg) <center>Random Gaussian Perturbations of the Clean Image </center> ![](images/15_1.jpg) <center>Figure 7: Top: The optimal curves on ImageNet for different values of \(\sigma\). Middle: Visualizing different coordinates of the optimal curves. First, random samples from \(x + N(0, \sigma I)\) for different values of \(\sigma\). Bottom: Images at different \(l_{2}\) distances from the unperturbed clean image. Each image visualized is the image at the given \(l_{2}\) distance which minimizes visual similarity according to the SSIM metric. Note that images at \(l_{2} < 5\) have almost no perceptible change from the clean image despite the fact that SSIM visual similarity is minimized. </center> <--- Page Split ---> ## G CHURCH WINDOW PLOTS In this section we include many more visualizations of the sorts of church window plots we discussed briefly in Section 4. We will show an ordinarily trained model's predictions on several different slices through the same CIFAR test point, which illustrate different aspects of the story told in this paper. These images are best viewed in color. ![](images/16_0.jpg) <center>Figure 8: A slice through a clean test point (black, center image), the closest error found using PGD (blue, top image), and a random error found using Gaussian noise (red, bottom image). For this visualization, and all others in this section involving Gaussian noise, we used noise with \(\sigma = 0.05\), at which the error rate was about \(1.7\%\). In all of these images, the black circle indicates the distance at which the typical such Gaussian sample will lie. The plot on the right shows the probability that the model assigned to its chosen class. Green indicates a correct prediction, gray or white is an incorrect prediction, and brighter means more confident. </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 9: A slice through a clean test point (black, center image), the closest error found using PGD (blue, top image), and the average of a large number of errors randomly found using Gaussian noise (red, bottom image). The distance from the clean image to the PGD error was 0.12, and the distance from the clean image to the averaged error was 0.33. The clean image is assigned the correct class with probability 99.9995%, and the average and PGD errors are assigned the incorrect class with probabilities 55.3% and 61.4% respectively. However, it is clear from this image that moving even a small amount into the orange region will increase these latter numbers significantly. For example, the probability assigned to the PGD error can be increased to 99% by moving it further from the clean image in the same direction by a distance of 0.07.
</center> ![](images/17_1.jpg) <center>Figure 10: A slice through a clean test point (black, center image), a random error found using Gaussian noise (blue, top image), and the average of a large number of errors randomly found using Gaussian noise (red, bottom image). </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 11: A slice through a clean test point (black, center image) and two random errors found using Gaussian noise (blue and red, top and bottom images). Note that both random errors lie very close to the decision boundary, and in this slice the decision boundary does not appear to come close to the clean image. </center> ![](images/18_1.jpg) <center>Figure 12: A slice through three random errors found using Gaussian noise. (Note, in particular, that the black point in this visualization does not correspond to the clean image.) </center> ![](images/18_2.jpg) <center>Figure 13: A completely random slice through the clean image. </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 15: The cdf of the error rates in noise for images in the test set. The blue curve corresponds to a model trained and tested on noise with \(\sigma = 0.1\), and the green curve is for a model trained and tested at \(\sigma = 0.3\). For example, the leftmost point on the blue curve indicates that about \(40\%\) of test images had an error rate of at least \(10^{-3}\). </center> ![](images/19_1.jpg) <center>Figure 14: Some visualizations of the same phenomenon, but using the "pepper noise" discussed in Section 5 rather than Gaussian noise. In all of these visualizations, we see the slice through the clean image (black, center image), the same PGD error as above (red, bottom image), and a random error found using pepper noise (blue, top image). In the visualization on the left, we used an amount of noise that places the noisy image further from the clean image than in the Gaussian cases we considered above. In the visualization in the center, we selected a noisy image which was assigned to neither the correct class nor the class of the PGD error. In the visualization on the right, we selected a noisy image which was assigned to the same class as the PGD error. </center> ## H THE DISTRIBUTION OF ERROR RATES IN NOISE Using some of the models that were trained on noise, we computed, for each image in the CIFAR test set, the probability that a random Gaussian perturbation will be misclassified. This distribution is shown in Figure 15. Note that, even though these models were trained on noise, there are still many errors around most images in the test set. While it would have been possible for the reduced performance in noise to be due to only a few test points, we see clearly that this is not the case. <--- Page Split ---> ## I A COLLECTION OF MODEL ERRORS In this section we first show a collection of iid test errors for the ResNet-50 model on the ImageNet validation set. We also visualize the severity of the different noise distributions considered in this work, along with model errors found by random sampling in these distributions. ![](images/20_0.jpg) <center>Figure 16: A collection of adversarially chosen model errors. These errors appeared in the ImageNet validation set. Despite the high accuracy of the model, there remain plenty of errors in the test set that a human would not make. </center> ![](images/20_1.jpg) <center>Figure 17: A collection of adversarially chosen model errors. These errors appeared in the ImageNet validation set. 
Despite the high accuracy of the model, there remain plenty of errors in the test set that a human would not make. </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 18: Visualizing the severity of PCA noise, along with model errors found in this noise distribution. </center> ![](images/21_1.jpg) <center>Figure 19: Visualizing the severity of Gaussian noise, along with model errors found in this noise distribution. Note that the model shown here was trained at noise level \(\sigma = 0.6\). </center> ![](images/21_2.jpg) <center>Figure 20: Visualizing the severity of pepper noise. </center> ![](images/21_3.jpg) <center>Figure 21: Visualizing the severity of the randomized stAdv attack. </center> <--- Page Split --->
reject
Reject
4.333333
ICLR_2019_paper_0620
iclr
2,019
# SELF-SUPERVISED GENERALISATION WITH META AUXILIARY LEARNING Anonymous authors Paper under double-blind review ## ABSTRACT Auxiliary learning has been shown to improve the generalisation performance of a principal task. But typically, this requires manually-defined auxiliary tasks based on domain knowledge. In this paper, we consider that it may be possible to automatically learn these auxiliary tasks to best suit the principal task, towards optimum auxiliary tasks without any human knowledge. We propose a novel method, Meta Auxiliary Learning (MAXL), which we design for the task of image classification, where the auxiliary task is hierarchical sub-class image classification. The role of the meta learner is to determine sub-class target labels to train a multi-task evaluator, such that these labels improve the generalisation performance on the principal task. Experiments on three different CIFAR datasets show that MAXL outperforms baseline auxiliary learning methods, and is competitive even with a method which uses human-defined sub-class hierarchies. MAXL is self-supervised and general, and therefore offers a promising new direction towards automated generalisation. ## 1 INTRODUCTION Auxiliary learning is a method to improve the generalisation of a task. It works by training on additional auxiliary tasks simultaneously with the principal task. Extra data may be available for those auxiliary tasks, but not the principal task. If the auxiliary tasks and the principal task share some common reasoning, then the prediction model is encouraged to learn additional relevant features which otherwise would not be learned from single-task learning. The broader support of these features then assists with generalisation of the principal task. We now rethink this generalisation by considering that not all auxiliary tasks are created equal. In supervised auxiliary learning (Liebel & Körner, 2018; Toshniwal et al., 2017), auxiliary tasks can be carefully chosen to complement the principal task, but at the expense of a dependency on labelled data. Unsupervised auxiliary learning (Flynn et al., 2016; Zhou et al., 2017; Zhang et al., 2018; Jaderberg et al., 2017) alleviates this, but at the expense of a limited set of auxiliary tasks which may not be well aligned with the principal task. By combining the merits of both supervised and unsupervised auxiliary learning, the ideal auxiliary learning framework is one with the flexibility to automatically determine the optimum auxiliary tasks, but without the requirement of any manually-labelled data. In this paper, we propose to achieve such a framework with a simple and general meta-learning algorithm which we call Meta Auxiliary Learning (MAXL). Given a principal task, the goal of MAXL is to discover the auxiliary tasks which, when trained alongside the principal task, give the greatest generalisation performance of the principal task on a meta dataset. In our work, we focus on the problem of image classification, where an auxiliary task is required to assign a sub-class label to an image. As such, data is classified both at a coarse level as the principal task, and at a fine level as the auxiliary task. The meta learner's role is then to determine the target labels for this sub-class labelling, in such a way that the learned features induced by learning these additional, more complex auxiliary tasks generate the best generalisation performance for the principal task. 
As well as our method being able to automatically learn the optimum auxiliary tasks, we achieve this in an unsupervised manner, giving the potential to scale well to datasets without manually-labelled auxiliary tasks, such as the class hierarchy used in our experiments. And even when such a hierarchy is available, our experiments show that MAXL is at least as competitive, despite <--- Page Split ---> this hierarchy being learned in an unsupervised manner. In our experiments, we define the auxiliary tasks as sub-class labelling, with MAXL learning to generate target sub-class labels, but MAXL is general and in future work this could be relaxed to learn the auxiliary tasks themselves. The ability to learn these tasks in a purely unsupervised and scalable manner opens up an exciting new way of thinking about how we can achieve generalisation in an automated manner. ![](images/1_0.jpg) <center>Figure 1: Illustration of our proposed MAXL framework. The multi-task evaluator takes an input image and is trained to predict both the principal class (e.g. Dog) and the auxiliary class (e.g. Border Collie). The principal class has a ground-truth label, but the label for the auxiliary class is determined by the meta generator. The meta generator is trained by outputting auxiliary class labels which, when used to train the multi-task evaluator, improve its prediction performance on the principal task. </center> ## 2 RELATED WORK This work brings together ideas from a number of related areas of machine learning. Multi-task & Transfer Learning The aim of multi-task learning (MTL) is to achieve shared representations by simultaneously training a set of related learning tasks. In this case, the learned knowledge shared across domains is encoded into the feature representations, to improve the performance of each individual task, since knowledge distilled from related tasks is interdependent. The success of deep neural networks has led to some recent methods advancing multi-task architecture design, such as applying a linear combination of task-specific features (Misra et al., 2016; Doersch & Zisserman, 2017; Kokkinos, 2017). Liu et al. (2018) applied soft-attention modules as feature selectors, allowing learning of both task-shared and task-specific features in a self-supervised, end-to-end manner. Transfer learning is another common approach to improve generalisation, by incorporating knowledge learned from one or more related domains. Pre-training a model on a large-scale dataset such as ImageNet (Deng et al., 2009) has become standard practice in many vision-based applications. The transferability of different convolutional layers in CNNs has also been investigated in Yosinski et al. (2014). Auxiliary Learning Whilst in multi-task learning the goal is high test accuracy across all tasks, auxiliary learning differs in that high test accuracy is only required for a single principal task, and the role of the auxiliary tasks is to assist in generalisation of this principal task. Toshniwal et al. (2017) applied auxiliary supervision with phoneme recognition at intermediate low-level representations of deep networks to improve the performance of conversational speech recognition. Liebel & Körner (2018) chose auxiliary tasks which can be obtained with low effort, such as global descriptions of a scene, to boost the performance of single-scene depth estimation and semantic segmentation. 
By carefully choosing a pair of learning tasks, we may also perform auxiliary learning without ground-truth labels, in an unsupervised manner. Jaderberg et al. (2017) introduced a method for improving learning agents in Atari games, by building unsupervised auxiliary tasks to predict the onset of immediate rewards from a short historical context. Flynn et al. (2016); Zhou et al. (2017) proposed <--- Page Split ---> image synthesis networks to perform unsupervised monocular depth estimation by predicting the relative pose of multiple cameras. Different from these works, which require prior knowledge to manually define suitable auxiliary tasks, our proposed method requires no additional task knowledge, since our meta learner generates useful auxiliary knowledge in a purely unsupervised fashion. The most similar work to ours is Zhang et al. (2018), in which meta learning was used for auxiliary data selection. However, this still requires manually-labelled data from which these selections are made, whilst our method is able to generate auxiliary data from scratch. Meta Learning Meta learning (or learning to learn) aims to design a higher-level learning system which itself is trained using the experiences of a lower-level learning system, in an attempt to improve this lower-level system. Early works in meta learning explored automatically learning update rules for neural models (Bengio et al., 1990; 1992; Schmidhuber, 1992). Recent approaches have focused on learning optimisers for deep networks based on LSTMs (Ravi & Larochelle, 2016) or synthetic gradients (Andrychowicz et al., 2016; Jaderberg et al., 2016). Meta learning has also been studied for finding optimal hyper-parameters (Li et al., 2017) and a good initialisation for few-shot learning (Finn et al., 2017). Santoro et al. (2016) also investigated few-shot learning via an external memory module. Vinyals et al. (2016); Snell et al. (2017) realised few-shot learning in the instance space via a differentiable nearest-neighbour approach. Our method also operates in the instance space, but induces auxiliary knowledge as an implicit regularisation to improve generalisation of the principal task. ## 3 META AUXILIARY LEARNING In this section, we introduce our method for automatically generating optimum auxiliary tasks, which we call Meta AuXiliary Learning (MAXL). ### 3.1 PROBLEM SETUP The goal of meta auxiliary learning is to train a meta generator that can generate higher-complexity auxiliary tasks, to improve performance of the principal task. To accomplish this, we use two networks: a multi-task evaluator, which trains on the principal and auxiliary tasks and evaluates the performance of the auxiliary tasks on a meta set, and a meta generator, which generates these auxiliary tasks. For simplicity, we consider image classification tasks in this section, where the auxiliary task is sub-class labelling and the meta generator determines target sub-class labels, but the approach can be considered general for any type of task. We denote the multi-task evaluator as a function \(f_{\theta_{1}}(x)\) that takes an input \(x\) with network parameters \(\theta_{1}\), and the meta generator as a function \(g_{\theta_{2}}(x)\) that takes the same input \(x\) with network parameters \(\theta_{2}\). 
For a dataset with input \(x\) and ground-truth label \(y\) for the principal task, we split it into three subsets: training \((x_{\mathrm{train}}, y_{\mathrm{train}})\), meta-training \((x_{\mathrm{meta}}, y_{\mathrm{meta}})\), and test \((x_{\mathrm{test}}, y_{\mathrm{test}})\). Training data is used for updating \(\theta_{1}\), meta-training data is used for updating \(\theta_{2}\), and test data is used for overall evaluation. In the multi-task evaluator, we apply a hard parameter sharing approach (Ruder, 2017) in which we predict the principal and auxiliary tasks using the shared set of features \(\theta_{1}\) in the multi-task network. At the end of the last feature layer \(f_{\theta_{1}}(x)\), we then apply further task-specific layers to output the corresponding prediction for each task. We denote the predicted principal labels by \(f_{\theta_{1}}^{\mathrm{pri}}(x)\) and the predicted auxiliary labels by \(f_{\theta_{1}}^{\mathrm{aux}}(x)\). In the meta generator, we pre-define a hierarchical structure \(\psi\) which determines the number of sub-classes for each class in the principal task. At the end of the last feature layer \(g_{\theta_{2}}(x)\), this hierarchy, together with the ground-truth label \(y\) for the principal task, is used to generate the target auxiliary labels, denoted by \(g_{\theta_{2}}^{\mathrm{gen}}(x, y, \psi)\). We allow for soft assignment labelling rather than enforcing one-hot encoding, which enables greater flexibility to learn optimum auxiliary tasks. The meta generator uses a masked SoftMax to ensure that each output node represents a sub-class label for only one class in the principal task, as described further in Section 3.3. A visualisation of our proposed MAXL approach is shown in Figure 2. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: (a) Illustration of the two networks which make up our meta auxiliary learning algorithm. (b) Illustration of vanilla SoftMax and Mask SoftMax with 3 principal classes. Vanilla SoftMax outputs over all 5 auxiliary classes, whereas Mask SoftMax outputs over a hierarchical structure \(\psi = [2,2,1]\) to constrain the prediction space. </center> ### 3.2 MODEL OBJECTIVES The multi-task evaluator is trained in a tightly-coupled manner with the meta generator: the meta generator determines target labels for the multi-task evaluator, which in turn determines the suitability of those labels. Given target labels as determined by the meta generator, the multi-task evaluator is trained to predict these labels, alongside the ground-truth labels for the principal task. For both the principal and auxiliary classification tasks, we apply the focal loss (Lin et al., 2017) with a focusing parameter \(\gamma = 2\), defined as: \[\mathcal{L}(\hat{y},y) = -y(1 - \hat{y})^{\gamma}\log (\hat{y}), \quad (1)\] where \(\hat{y}\) is the predicted label and \(y\) is the ground-truth label. The focal loss helps to focus on incorrectly predicted labels, which we found improved performance during our experimental evaluation compared with the regular cross-entropy log loss. 
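As a concrete reference point, a minimal PyTorch sketch of Eq. (1) for soft (or one-hot) targets might look as follows; the function name, the assumption that inputs are post-SoftMax probabilities, and the reduction over classes and batch are our own choices, not from the paper.

```python
import torch

def focal_loss(pred_probs, target_probs, gamma=2.0, eps=1e-8):
    """Focal loss of Eq. (1), applied over class probabilities.

    pred_probs:   (batch, num_classes) predicted probabilities (post-SoftMax).
    target_probs: (batch, num_classes) target labels; may be soft assignments.
    """
    # -y * (1 - y_hat)^gamma * log(y_hat), summed over classes, averaged over batch.
    loss = -target_probs * (1 - pred_probs) ** gamma * torch.log(pred_probs + eps)
    return loss.sum(dim=1).mean()
```

With \(\gamma = 0\) this reduces to the regular cross-entropy log loss, which makes the down-weighting of well-classified examples easy to see.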
To update parameters \(\theta_{1}\) in the multi-task evaluator, we define the multi-task objective as follows: \[\arg \min_{\theta_{1}}\left(\mathcal{L}(f_{\theta_{1}}^{\mathrm{pri}}(x_{\mathrm{train}}^{(i)}),y_{\mathrm{train}}^{(i)}) + \mathcal{L}(f_{\theta_{1}}^{\mathrm{aux}}(x_{\mathrm{train}}^{(i)}),g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{train}}^{(i)},y_{\mathrm{train}}^{(i)},\psi))\right), \quad (2)\] where \((i)\) represents the \(i^{th}\) batch from the training data. The meta generator is then trained by encouraging target labels for the auxiliary task to be chosen such that, if the multi-task evaluator were to be trained on these labels, the performance on the principal task would be maximised. This requires evaluation on a separate dataset, the meta-training set, to ensure that the target auxiliary labels encourage generalisation beyond the data supplied to the multi-task evaluator. To update parameters \(\theta_{2}\) in the meta generator, we define the meta objective as follows: \[\arg \min_{\theta_{2}}\mathcal{L}(f_{\theta_{1}^{+}}^{\mathrm{pri}}(x_{\mathrm{meta}}^{(i)}),y_{\mathrm{meta}}^{(i)}). \quad (3)\] Here \(\theta_{1}^{+}\) represents the weights of the multi-task network were it to be trained for one gradient update on the meta-training batch, using the auxiliary labels produced by the meta generator: \[\theta_{1}^{+} = \theta_{1} - \alpha \nabla_{\theta_{1}}\left(\mathcal{L}(f_{\theta_{1}}^{\mathrm{pri}}(x_{\mathrm{meta}}^{(i)}),y_{\mathrm{meta}}^{(i)}) + \mathcal{L}(f_{\theta_{1}}^{\mathrm{aux}}(x_{\mathrm{meta}}^{(i)}),g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{meta}}^{(i)},y_{\mathrm{meta}}^{(i)},\psi))\right), \quad (4)\] <--- Page Split ---> where \(\alpha\) is the learning rate. The trick in this meta objective is that we take the derivative of a derivative (involving a Hessian matrix) to update \(\theta_{2}\), by using a retained computational graph of \(\theta_{1}^{+}\) in order to compute derivatives with respect to \(\theta_{2}\). This second-derivative trick in meta learning was also proposed in Finn et al. (2017) and Zhang et al. (2018). However, we found that the generated auxiliary labels can easily collapse (i.e. degenerate by simply learning a similar level of complexity as the principal task), which leaves parameters \(\theta_{2}\) in a local minimum without producing any extra useful knowledge. Thus, to encourage the network to learn more complex and informative auxiliary tasks, we further apply an entropy loss \(\mathcal{H}(g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{meta}}^{(i)},y_{\mathrm{meta}}^{(i)},\psi))\) as a regularisation term in the meta objective. A detailed explanation of the entropy loss and the collapsing label problem is given in Section 3.4. 
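To make the second-derivative trick concrete, here is a minimal PyTorch sketch of one meta-training step under several assumptions of our own: the evaluator is a two-headed module returning (principal, auxiliary) probability outputs, the generator returns the soft labels \(g_{\theta_{2}}^{\mathrm{gen}}(x, y, \psi)\), `focal_loss` is the sketch above, `entropy_reg` implements the entropy term (Eq. 5, Section 3.4), and `torch.func.functional_call` (PyTorch 2.x) is used for the functional forward pass. This is an illustration of Eqs. (3) and (4), not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def entropy_reg(y_hat, eps=1e-8):
    # Negative entropy of the batch-averaged auxiliary label distribution;
    # minimising it pushes the generated labels to use the full prediction space.
    y_bar = y_hat.mean(dim=0)
    return (y_bar * torch.log(y_bar + eps)).sum()

def maxl_meta_step(evaluator, generator, x_meta, y_meta, psi,
                   alpha, meta_optimizer, lam=0.2):
    """One update of the generator parameters theta_2 (Eqs. 3 and 4)."""
    pri_pred, aux_pred = evaluator(x_meta)        # assumed two-headed forward
    y_onehot = F.one_hot(y_meta, pri_pred.shape[1]).float()
    aux_labels = generator(x_meta, y_meta, psi)   # soft sub-class targets

    # Inner (evaluator) loss on the meta batch, as inside Eq. (4).
    inner_loss = focal_loss(pri_pred, y_onehot) + focal_loss(aux_pred, aux_labels)

    # Hypothetical one-step update of theta_1; create_graph=True retains the
    # graph, so the meta loss below can differentiate through this update.
    names, params = zip(*evaluator.named_parameters())
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    theta1_plus = {n: p - alpha * g for n, p, g in zip(names, params, grads)}

    # Principal loss of the updated evaluator (Eq. 3) plus the entropy term.
    pri_plus, _ = functional_call(evaluator, theta1_plus, (x_meta,))
    meta_loss = focal_loss(pri_plus, y_onehot) + lam * entropy_reg(aux_labels)

    meta_optimizer.zero_grad()
    meta_loss.backward()    # second-order gradients flow back into theta_2
    meta_optimizer.step()
```

The key design point is that `theta1_plus` is never written into the evaluator; it exists only as a differentiable function of \(\theta_{2}\), so `meta_loss.backward()` reaches \(\theta_{2}\) both through the generated labels and through the retained inner-update graph.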
Finally, the entire MAXL algorithm is defined as follows:

# Algorithm 1: The MAXL algorithm
Dataset: \(D = \{(x_{\mathrm{train}},y_{\mathrm{train}}),(x_{\mathrm{meta}},y_{\mathrm{meta}})\}\)
Initialise: network parameters \(\theta_{1},\theta_{2}\); hierarchical structure \(\psi\)
Initialise: hyper-parameters (learning rates) \(\alpha ,\beta\); hyper-parameter (task weighting) \(\lambda\)
for each training iteration \(i\) do
  # fetch one batch of training and meta data
  \(\{(x_{\mathrm{train}}^{(i)},y_{\mathrm{train}}^{(i)}),(x_{\mathrm{meta}}^{(i)},y_{\mathrm{meta}}^{(i)})\} \in \{(x_{\mathrm{train}},y_{\mathrm{train}}),(x_{\mathrm{meta}},y_{\mathrm{meta}})\}\)
  # training step
  Update: \(\theta_{1}\leftarrow \theta_{1} - \alpha \nabla_{\theta_{1}}\left(\mathcal{L}(f_{\theta_{1}}^{\mathrm{pri}}(x_{\mathrm{train}}^{(i)}),y_{\mathrm{train}}^{(i)}) + \mathcal{L}(f_{\theta_{1}}^{\mathrm{aux}}(x_{\mathrm{train}}^{(i)}),g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{train}}^{(i)},y_{\mathrm{train}}^{(i)},\psi))\right)\)
  # meta-training step
  Compute: \(\theta_{1}^{+} = \theta_{1} - \alpha \nabla_{\theta_{1}}\left(\mathcal{L}(f_{\theta_{1}}^{\mathrm{pri}}(x_{\mathrm{meta}}^{(i)}),y_{\mathrm{meta}}^{(i)}) + \mathcal{L}(f_{\theta_{1}}^{\mathrm{aux}}(x_{\mathrm{meta}}^{(i)}),g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{meta}}^{(i)},y_{\mathrm{meta}}^{(i)},\psi))\right)\)
  Update: \(\theta_{2}\leftarrow \theta_{2} - \beta \nabla_{\theta_{2}}\left(\mathcal{L}(f_{\theta_{1}^{+}}^{\mathrm{pri}}(x_{\mathrm{meta}}^{(i)}),y_{\mathrm{meta}}^{(i)}) + \lambda \mathcal{H}(g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{meta}}^{(i)},y_{\mathrm{meta}}^{(i)},\psi))\right)\)
end

### 3.3 MASK SOFTMAX FOR HIERARCHICAL PREDICTIONS In the prediction layer of the meta generator, we designed a modified SoftMax function to predict target auxiliary labels which conform to a pre-defined hierarchy \(\psi\). As shown in Figure 2 (upper right), the original SoftMax function does not constrain sub-class labelling to lie within this hierarchy. Our Mask SoftMax structure resolves this issue by applying a binary mask to the original SoftMax function. The overall hierarchical structure \(\psi\) determines the number of sub-classes \(\psi[i]\) in each principal class \(i\). As such, the total prediction space for auxiliary labels is \(\sum_{i}\psi[i]\). This hierarchy, together with the ground-truth principal class label \(y\) of the current image, creates the mask via a binary function \(M = \mathcal{B}(y,\psi)\). Using the principal ground-truth label \(y\), the corresponding range of sub-classes \(\psi[y]\) is selected, and a binary mask \(M\) of size \(\sum_{i}\psi[i]\) is created as the multi one-hot encoding \(\mathbb{1}_{\sum_{i<y}\psi[i] : \sum_{i<y+1}\psi[i]}\) (where \(\mathbb{1}_{a:b}\) denotes a multi one-hot encoding in which indexes from \(a\) to \(b\) are encoded as 1). Using the example in Figure 2, consider the principal task to have 3 classes with ground-truth labels \(y = 0,1,2\), and hierarchical structure \(\psi = [2,2,1]\). In this case, the auxiliary prediction space is equal to 5 and the corresponding binary masks are \(M = [1,1,0,0,0],[0,0,1,1,0],[0,0,0,0,1]\) respectively. 
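To make the mask construction concrete, here is a minimal sketch of \(\mathcal{B}(y, \psi)\) as we read the definition above; the function name and batched convention are our own choices.

```python
import torch
from itertools import accumulate

def build_mask(y, psi):
    """B(y, psi): per-example binary mask over the sum(psi) auxiliary outputs.

    y:   (batch,) tensor of principal ground-truth labels.
    psi: list where psi[i] is the number of sub-classes of principal class i.
    """
    offsets = [0] + list(accumulate(psi))   # start index of each class's block
    mask = torch.zeros(len(y), sum(psi))
    for b, label in enumerate(y.tolist()):
        mask[b, offsets[label]:offsets[label + 1]] = 1.0
    return mask

# With psi = [2, 2, 1], build_mask(torch.tensor([0, 1, 2]), [2, 2, 1])
# reproduces the three masks listed above:
# [[1,1,0,0,0], [0,0,1,1,0], [0,0,0,0,1]]
```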
Finally, we apply the binary mask \(M\) with an element-wise multiplication on the original SoftMax function for the final auxiliary task predictions: \[\mathrm{SoftMax}\colon p(\hat{y}_{i}) = \frac{\exp \hat{y}_{i}}{\sum_{j}\exp \hat{y}_{j}},\qquad \mathrm{MaskSoftMax}\colon p(\hat{y}_{i}) = \frac{\exp (M\odot \hat{y})_{i}}{\sum_{j}\exp (M\odot \hat{y})_{j}},\quad M = \mathcal{B}(y,\psi),\] where \(p(\hat{y}_{i})\) represents the probability of the predicted auxiliary label \(\hat{y}\) over class \(i\), and \(\odot\) represents element-wise multiplication. <--- Page Split ---> ### 3.4 THE COLLAPSING CLASS PROBLEM As previously discussed, we predict each auxiliary label within a hierarchical structure \(\psi\). However, the number of sub-classes defined in \(\psi[i]\) is the maximum auxiliary label prediction space, with no guarantee that all \(\psi[i]\) classes will be predicted. This may result in some auxiliary labels defined in \(\psi[i]\) being overlooked, with the output of the meta generator collapsing into a smaller sub-class space. In experiments, we found that this phenomenon is particularly apparent when we either have a large learning rate for training the meta generator, or a large sub-class prediction space \(\psi\). To avoid the collapsing class problem, we introduced an additional regularisation loss, which we call the entropy loss \(\mathcal{H}(\hat{y}^{(i)})\). This encourages the meta generator to utilise the full prediction space, by encouraging a large prediction entropy across this space. Assuming we have a well-balanced dataset, the entropy loss calculates the KL divergence between the predicted auxiliary label space \(\hat{y}^{(i)}\) and a uniform distribution \(\mathcal{U}\), for each \(i^{th}\) batch. This is equivalent to calculating the entropy of the predicted label space, and is defined as: \[\mathcal{H}(\hat{y}^{(i)}) = \sum_{k = 1}^{K}\overline{y}_{k}\log \overline{y}_{k},\quad \overline{y}_{k} = \frac{1}{N}\sum_{n = 1}^{N}\hat{y}_{k}^{(n)}, \quad (5)\] where \(K\) is the number of auxiliary labels, \(N\) is the training batch size, and \(\hat{y}^{(n)}\) ranges over the predicted label vectors in the \(i^{th}\) batch. The entropy loss is essential to achieve human-level performance, as shown in our experiments. Higher entropy in the auxiliary target labels results in a more complex auxiliary task. This avoids local minima during training, such as assigning a single label to all examples of a principal class. ## 4 EXPERIMENTS In this section, we present experimental results to evaluate MAXL with respect to several baselines and datasets on image classification tasks. ### 4.1 EXPERIMENTAL SETUP Datasets We evaluated on three different datasets: CIFAR100, CIFAR10, and CIFAR10.1v6 (Recht et al., 2018). CIFAR100 consists of 100 principal classes, whilst CIFAR10 and CIFAR10.1v6 consist of 10 principal classes and share the same training dataset, but have two different test datasets. To assess generalisation across different task complexities, we tested a range of different combinations of the numbers of principal and auxiliary classes. For CIFAR100, we expanded the dataset's provided 2-level hierarchy (20 and 100 classes) into a 4-level hierarchy (additional 3 and 10 classes), by manually assigning examples for these new hierarchy levels (see Appendix A). Based on the new hierarchy, we then tested on all 6 possible combinations of principal and auxiliary class numbers. 
Note that for MAXL, the hierarchy was used only to define the structure of \(\psi\) and the principal task labels, to ensure a fair comparison with a method using human-defined auxiliary tasks, but the auxiliary task labelling within that structure was learned by MAXL itself. CIFAR10 and CIFAR10.1v6 do not have an associated manually-defined hierarchy, and so we defined a range of hierarchical structures \(\psi[i] = 2, 5, 10, 20, 50, 100, \forall i\). Baselines We compared MAXL to a number of baselines. Single Task trains only with the principal class label. Random Assignment trains with auxiliary classes, and randomly assigns the auxiliary class labels. Prototypical Net is a clustering method based on Snell et al. (2017), where prototypes for auxiliary classes are defined by embedding examples from meta-training data, which has human-defined auxiliary classes, using a pre-trained ImageNet network. Unsupervised, differentiable, nearest-neighbour clustering is then used to produce the final auxiliary class labelling for the remaining training data. The key difference to MAXL is that, whilst both methods are unsupervised, the auxiliary class labelling with MAXL actually evaluates the generalisation performance of this labelling on the principal task, whilst the Prototypical Net method does not. Finally, Human trains with auxiliary classes, using the human-defined hierarchy. Note that due to the need for a manually-defined hierarchy, Prototypical Net and Human were only evaluated on CIFAR100. For all baselines, we use the same network architecture and training procedure as MAXL's multi-task <--- Page Split ---> evaluator. For the meta-training for MAXL and Prototypical Net, we split each training dataset and used \(10\%\) for meta-training the auxiliary labelling, and \(90\%\) for training the multi-task evaluator. For all other baselines, we used the full training set for training the multi-task evaluator. Training Both the multi-task evaluator and the meta generator use VGG-16 as their core (Simonyan & Zisserman, 2014), together with batch normalisation. For all experiments, we used a learning rate of 0.01 for the multi-task evaluator. For MAXL's meta generator, we found that a smaller learning rate of \(10^{-5}\) was necessary to help prevent the class collapsing problem. For all training, we halve the learning rate after every 50 epochs, and train for a total of 200 epochs, using vanilla stochastic gradient descent. For the meta generator, we apply an \(L_{1}\) norm weight decay of \(5 \cdot 10^{-4}\), with no regularisation on the multi-task evaluator. We chose the weighting of the entropy regularisation loss term to be 0.2 based on empirical performance. ### 4.2 TEST PERFORMANCE We now evaluate the performance of MAXL compared to these baselines, on all three datasets. Results for CIFAR100 are presented in Figure 3, and results for CIFAR10 and CIFAR10.1v6 are presented in Appendix B. ![](images/6_0.jpg) <center>Figure 3: Learning curves for the CIFAR100 test dataset, comparing MAXL with baseline methods. We provide results in all 6 different combinations of principal and auxiliary class numbers. </center> For CIFAR100, we observe that MAXL performs similarly to when human knowledge is used in 4 out of the 6 hierarchical structures, and performs worse in 2 out of the 6. For all other baselines, MAXL performs at least as well, and in the majority of cases outperforms them by a significant margin. 
We therefore see that MAXL is able to learn auxiliary tasks effectively by tightly coupling the auxiliary task generation and the principal task training, in a manner superior to when these auxiliary tasks are assigned independently, such as with random assignment or with the prototypical network. With the performance of MAXL approaching that of a system using human-defined auxiliary tasks, we see strong evidence that MAXL is able to learn to generalise effectively in an unsupervised manner. ### 4.3 EFFECT OF AUXILIARY TASK COMPLEXITY We now evaluate how the complexity of the auxiliary tasks affects the performance of the principal task. In Figure 4 (a), we present results from CIFAR10 and CIFAR10.1v6 showing the performance increase over single-task learning, when there are 10 principal classes but a range of auxiliary class numbers \((\psi[i] = 2, 5, 10, 20, 50, 100, \forall i)\). For each data point, the performance is calculated by <--- Page Split ---> averaging the test accuracy over the last 5 epochs, after a total of 200 epochs. Experiments were performed both with and without the entropy loss term to show the benefit of this regularisation. We observe an interesting trend in which test performance rises as the number of auxiliary classes increases, but then begins to fall. This suggests that for a given complexity of principal task, there is an optimum complexity of the auxiliary tasks. One explanation may be that, as the auxiliary tasks increase in complexity, the learned features favour these auxiliary tasks rather than the principal task, encouraging further generalisation beyond the features learned only for the principal task. But if the auxiliary tasks are too complex, then these features begin to overfit to the auxiliary tasks, and the overlap between the reasoning required for the principal and auxiliary tasks begins to decrease. ![](images/7_0.jpg) <center>Figure 4: Performance improvement in percentages when training with MAXL compared with single-task learning, with 10 principal classes and a range of auxiliary classes. </center> ### 4.4 VISUALISATIONS OF GENERATED KNOWLEDGE In Figure 5, we visualise 2D embeddings of examples from the CIFAR100 test dataset, for two different task complexities. These were computed using t-SNE (Maaten & Hinton, 2008) on the final feature layer of the multi-task evaluator, and compared across three methods: our MAXL method, our baseline using the human-defined hierarchy, and our baseline using single-task learning. ![](images/7_1.jpg) <center>Figure 5: t-SNE visualisation of the learned final layer of the multi-task evaluator network, trained with two combinations of principal and auxiliary class numbers from CIFAR100. Colours represent the principal classes. </center> This visualisation shows the separability of principal classes after being trained with the multi-task evaluator. We see that both MAXL and Human show better separation of the principal classes than Single-Task, owing to the generalisation effect of the auxiliary task learning. The distinction between the separability of the MAXL and Human visualisations is not as clear, despite their very similar performance for these two task complexities in Figure 3. But given that MAXL uses the same hierarchical structure as Human, we see from the visualisation that these two methods are clearly learning different representations. We also show examples of images assigned to the same auxiliary class by MAXL's multi-task evaluator. 
Figure 6 shows example images with the highest prediction probabilities for three random auxiliary classes from CIFAR100, using the combination of 20 principal classes and 5 auxiliary classes per principal class, which showed the best performance of MAXL in Figure 3. In addition, we also applied MAXL to MNIST, in which 3 auxiliary classes were used for each of the 10 principal classes. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 6: Visualisation of 5 test examples with the highest prediction probability, for each of 3 randomly selected auxiliary classes, for a number of different principal classes. We present the visualisation for CIFAR100 (top) when trained with 20 principal classes and 5 auxiliary classes per principal class, and for MNIST (bottom) when trained with 10 principal classes and 3 auxiliary classes per principal class. </center> To our initial surprise, the generated auxiliary labels visualised in both datasets show no clear human-understandable knowledge. In particular, there are no obvious similarities within each auxiliary class, whether in terms of shape, colour, style, structure or semantic meaning. However, this makes more sense when we re-consider the task of the meta generator, which is to assign auxiliary labels which assist the principal task. Rather than grouping images in terms of semantic or visual similarity, the meta generator would therefore be more effective if it were to group images in terms of a shared aspect of reasoning on which the multi-task evaluator currently struggles. If the multi-task evaluator is then able to improve its ability to determine the auxiliary class of an image in such a cluster, then the learned features will help in overcoming this challenging aspect of reasoning. It therefore makes sense that the examples within an auxiliary class do not share semantic or visual similarity, but instead share a more complex underlying property. Further, we discovered that the generated auxiliary knowledge is not deterministic, since the top predicted candidates are different when we re-train the network from scratch. We therefore speculate that using a human-defined hierarchy is just one out of a potentially infinite number of local optima, and on each run of training the meta generator produces another of these local optima. ## 5 CONCLUSION & FUTURE WORK In this paper, we have presented and evaluated Meta Auxiliary Learning (MAXL). MAXL learns to generate optimum auxiliary tasks which, when trained alongside a principal task in a multi-task setup, maximise the generalisation of the principal task across a validation dataset. Rather than employing domain knowledge and human-defined auxiliary tasks as is typically required, MAXL is self-supervised and, combined with its general nature, has the potential to automate the process of generalisation to new levels. <--- Page Split ---> Our evaluations on three image datasets have shown the performance of MAXL in an image classification setup, where the auxiliary task is to predict sub-class, hierarchical labels for an image. We have shown that MAXL significantly outperforms other auxiliary learning baselines, and even when human-defined knowledge is used to manually construct the auxiliary tasks, MAXL performs similarly in the majority of experiments. Despite this impressive performance from a self-supervised method, questioning why auxiliary tasks generated by MAXL do not outperform those constructed by a human opens up exciting future research in this direction. 
Perhaps human-defined auxiliary tasks are themselves optimal and cannot be surpassed. However, we believe this not to be the case, since such tasks are typically chosen due to the availability of labelled data for these tasks, and not necessarily for their optimality when combined with the principal task. Alternatively, perhaps the power of the human knowledge is not in the domain-specific labels, but in higher-level reasoning about how auxiliary tasks should be structured. In our experiments, training MAXL using the same structure as a human-defined hierarchy, but learning its own auxiliary labels, typically led to similar performance as when the human-defined labels were used. The general nature of MAXL also opens up questions about how self-supervised auxiliary learning may be used to learn generic auxiliary tasks beyond sub-class labelling. During our experiments, we also ran preliminary experiments on predicting arbitrary vectors as the auxiliary task, but results so far have been inconclusive. However, the ability of MAXL to potentially learn flexible auxiliary tasks which can automatically be tuned for the principal task now offers an exciting direction towards automated generalisation across a wide range of more complex tasks. ## REFERENCES Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, pp. 3981-3989, 2016. Samy Bengio, Yoshua Bengio, Jocelyn Cloutier, and Jan Gecsei. On the optimization of a synaptic learning rule. In Preprints Conf. Optimality in Artificial and Biological Neural Networks, pp. 6-8. Univ. of Texas, 1992. Yoshua Bengio, Samy Bengio, and Jocelyn Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255. IEEE, 2009. Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pp. 1126-1135, 2017. John Flynn, Ivan Neulander, James Philbin, and Noah Snavely. DeepStereo: Learning to predict new views from the world's imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5515-5524, 2016. Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, David Silver, and Koray Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. arXiv preprint arXiv:1608.05343, 2016. Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. International Conference on Learning Representations, 2017. Iasonas Kokkinos. UberNet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. <--- Page Split ---> Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. 
Meta-SGD: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835, 2017. Lukas Liebel and Marco Körner. Auxiliary tasks in multi-task learning. arXiv preprint arXiv:1805.06334, 2018. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. arXiv preprint arXiv:1708.02002, 2017. Shikun Liu, Edward Johns, and Andrew J Davison. End-to-end multi-task learning with attention. arXiv preprint arXiv:1803.10704, 2018. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008. Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch networks for multi-task learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3994-4003, 2016. Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016. Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do CIFAR-10 classifiers generalize to CIFAR-10? arXiv preprint arXiv:1806.00451, 2018. Sebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016. Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4077-4087, 2017. Shubham Toshniwal, Hao Tang, Liang Lu, and Karen Livescu. Multitask learning with low-level auxiliary tasks for encoder-decoder based speech recognition. arXiv preprint arXiv:1704.01631, 2017. Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630-3638, 2016. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320-3328, 2014. Yabin Zhang, Hui Tang, and Kui Jia. Fine-grained visual categorization using meta-learning optimization with sample selection of auxiliary data. arXiv preprint arXiv:1807.10916, 2018. Tinghui Zhou, Matthew Brown, Noah Snavely, and David G Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017. <--- Page Split ---> ## A 4-LEVEL CIFAR100 DATASET Table 1: Building a 4-level hierarchy for the image classification task based on the CIFAR100 dataset. Originally, a 20-class and a 100-class hierarchy were provided, and we manually introduced additional 3-class and 10-class layers. 
<table>
<tr><td>3 Class</td><td>10 Class</td><td>20 Class</td><td>100 Class</td></tr>
<tr><td rowspan="10">animals</td><td rowspan="3">large animals</td><td>reptiles</td><td>crocodile, dinosaur, lizard, snake, turtle</td></tr>
<tr><td>large carnivores</td><td>bear, leopard, lion, tiger, wolf</td></tr>
<tr><td>large omnivores and herbivores</td><td>camel, cattle, chimpanzee, elephant, kangaroo</td></tr>
<tr><td rowspan="2">medium animals</td><td>aquatic mammals</td><td>beaver, dolphin, otter, seal, whale</td></tr>
<tr><td>medium-sized mammals</td><td>fox, porcupine, possum, raccoon, skunk</td></tr>
<tr><td rowspan="2">small animals</td><td>small mammals</td><td>hamster, mouse, rabbit, shrew, squirrel</td></tr>
<tr><td>fish</td><td>aquarium fish, flatfish, ray, shark, trout</td></tr>
<tr><td rowspan="2">invertebrates</td><td>insects</td><td>bee, beetle, butterfly, caterpillar, cockroach</td></tr>
<tr><td>non-insect invertebrates</td><td>crab, lobster, snail, spider, worm</td></tr>
<tr><td>people</td><td>people</td><td>baby, boy, girl, man, woman</td></tr>
<tr><td rowspan="3">vegetations</td><td rowspan="3">vegetations</td><td>flowers</td><td>orchids, poppies, roses, sunflowers, tulips</td></tr>
<tr><td>fruit and vegetables</td><td>apples, mushrooms, oranges, pears, peppers</td></tr>
<tr><td>trees</td><td>maple, oak, palm, pine, willow</td></tr>
<tr><td rowspan="7">objects and scenes</td><td rowspan="3">household objects</td><td>food containers</td><td>bottles, bowls, cans, cups, plates</td></tr>
<tr><td>household electrical devices</td><td>clock, keyboard, lamp, telephone, television</td></tr>
<tr><td>household furniture</td><td>bed, chair, couch, table, wardrobe</td></tr>
<tr><td>construction</td><td>large man-made outdoor things</td><td>bridge, castle, house, road, skyscraper</td></tr>
<tr><td>natural scenes</td><td>large natural outdoor scenes</td><td>cloud, forest, mountain, plain, sea</td></tr>
<tr><td rowspan="2">vehicles</td><td>vehicles 1</td><td>bicycle, bus, motorcycle, pickup truck, train</td></tr>
<tr><td>vehicles 2</td><td>lawn-mower, rocket, streetcar, tank, tractor</td></tr>
</table> <--- Page Split ---> ## B RESULTS ON CIFAR10 AND CIFAR10.1V6 ![](images/12_0.jpg) <center>Figure 7: Testing performance on the CIFAR10 (bottom) and CIFAR10.1v6 (top) datasets, across 6 different numbers of auxiliary classes. </center> <--- Page Split --->
## ABSTRACT Auxiliary learning has been shown to improve the generalisation performance of a principal task. But typically, this requires manually- defined auxiliary tasks based on domain knowledge. In this paper, we consider that it may be possible to automatically learn these auxiliary tasks to best suit the principal task, towards optimum auxiliary tasks without any human knowledge. We propose a novel method, Meta Auxiliary Learning (MAXL), which we design for the task of image classification, where the auxiliary task is hierarchical sub- class image classification. The role of the meta learner is to determine sub- class target labels to train a multi- task evaluator, such that these labels improve the generalisation performance on the principal task. Experiments on three different CIFAR datasets show that MAXL outperforms baseline auxiliary learning methods, and is competitive even with a method which uses human- defined sub- class hierarchies. MAXL is self- supervised and general, and therefore offers a promising new direction towards automated generalisation. ## 1 INTRODUCTION Auxiliary learning is a method to improve the generalisation of a task. It works by training on additional auxiliary tasks simultaneously with the principal task. Extra data may be available for those auxiliary tasks, but not the principal task. If the auxiliary tasks and the principal task share some common reasoning, then the prediction model is encouraged to learn additional relevant features which otherwise would not be learned from single- task learning. The broader support of these features then assists with generalisation of the principal task. We now rethink this generalisation by considering that not all auxiliary tasks are created equal. In supervised auxiliary learning (Liebel & Körner, 2018; Toshniwal et al., 2017), auxiliary tasks can be carefully chosen to complement the principal task, but at the expense of a dependency on labelled data. Unsupervised auxiliary learning (Flynn et al., 2016; Zhou et al., 2017; Zhang et al., 2018; Jaderberg et al., 2017) alleviates this, but at the expense of a limited set of auxiliary tasks which may not be well aligned with the principal task. By combining the merits of both supervised and unsupervised auxiliary learning, the ideal auxiliary learning framework is one with the flexibility to automatically determine the optimum auxiliary tasks, but without the requirement of any manually- labelled data. In this paper, we propose to achieve such a framework with a simple and general meta- learning algorithm which we call Meta Auxiliary Learning (MAXL). Given a principal task, the goal of MAXL is to discover the auxiliary tasks which, when trained alongside the principal task, give the greatest generalisation performance of the principal task on a meta dataset. In our work, we focus on the problem of image classification, where an auxiliary task is required to assign a sub- class label to an image. As such, data is classified both at a coarse level as the principal task, and at a fine level as the auxiliary task. The meta learner's role is then to determine the target labels for this sub- class labelling, in such a way that the learned features induced by learning these additional, more complex auxiliary tasks generate the best generalisation performance for the principal task. 
As well as our method being able to automatically learn the optimum auxiliary tasks, we achieve this in an unsupervised manner, giving potential to scale well beyond any datasets without manually- labelled auxiliary tasks, such as a class hierarchy as in our experiments. And even when such a hierarchy is available, in our experiments we show that MAXL is at least as competitive despite <--- Page Split ---> this hierarchy being learned in an unsupervised manner. In our experiments, we define the auxiliary tasks as sub- class labelling with MAXL learning to generate target sub- class labels, but MAXL is general and in future work this could be relaxed to actually learn the auxiliary tasks themselves. The ability to learn these tasks in a purely unsupervised and scalable manner opens up an exciting new way of thinking about how we can achieve generalisation in an automated manner. ![](images/1_0.jpg) <center>Figure 1: Illustration of our proposed MAXL framework. The Multi-task evaluator takes an input image and is trained to predict both the principal class (e.g. Dog), and the auxiliary class (e.g. Border Collie). The principal class has a ground-truth label, but the label for the auxiliary class is determined by the meta generator. The meta generator is trained by outputting auxiliary class labels which, when used to train the multi-task evaluator, improve its prediction performance on the principal task. </center> ## 2 RELATED WORK This work brings ideas together from a number of related areas of machine learning. Multi- task & Transfer Learning The aim of multi- task learning (MTL) is to achieve shared representations by simultaneously training a set of related learning tasks. In this case, the learned knowledge used to share across domains is encoded into the feature representations, to improve performance of each individual task, since knowledge distilled from related tasks are interdependent. The success of deep neural networks has led to some recent methods advancing the multi- task architecture design, such as applying a linear combination of task- specific features (Misra et al., 2016; Doersch & Zisserman, 2017; Kokkinos, 2017). Liu et al. (2018) applied soft- attention modules as feature selectors, allowing learning of both task- shared and task- specific features in a self- supervised, end- to- end manner. Transfer learning is another common approach to improve generalisation, by incorporating knowledge learned from one or more related domains. Pre- training a model with a large- scale dataset such as ImageNet (Deng et al., 2009) has become standard practice in many vision- based applications. The transferability of different convolutional layers in CNNs has also been investigated in Yosinski et al. (2014). Auxiliary Learning Whilst in multi- task learning the goal is high test accuracy across all tasks, auxiliary learning differs in that high test accuracy is only required for a single principal task, and the role of the auxiliary tasks is to assist in generalisation of this principal task. Toshniwal et al. (2017) applied auxiliary supervision with phoneme recognition at intermediate low- level representations of deep networks to improve the performance of conversational speech recognition. Liebel & Körner (2018) chose auxiliary tasks which can be obtained with low effort, such as global descriptions of a scene, to boost the performance for single scene depth estimation and semantic segmentation. 
By carefully choosing a pair of learning tasks, we may also perform auxiliary learning without ground truth labels, in an unsupervised manner. Jaderberg et al. (2017) introduced a method for improving the learning agents in Atari games, by building unsupervised auxiliary tasks to predict the onset of immediate rewards from a short historical context. Flynn et al. (2016); Zhou et al. (2017) proposed <--- Page Split ---> image synthesis networks to perform unsupervised monocular depth estimation by predicting the relative pose of multiple cameras. Different from these works which require prior knowledge to manually define suitable auxiliary tasks, our proposed method requires no additional task knowledge, since our meta learner generates useful auxiliary knowledge in a purely unsupervised fashion. The most similar work to ours is Zhang et al. (2018), in which meta learning was used in auxiliary data selection. However, this still requires manually- labelled data from which these selections are made, whilst our method is able to generate auxiliary data from scratch. Meta Learning Meta learning (or learning to learn) aims to design a higher- level learning system which itself is trained using the experiences of a lower- level learning system, in an attempt to improve this lower- level system. Early works in meta learning explored automatically learning update rules for neural models (Bengio et al., 1990; 1992; Schmidhuber, 1992). Recent approaches have focused on learning optimisers for deep networks based on LSTMs (Ravi & Larochelle, 2016) or synthetic gradients (Andrychowicz et al., 2016; Jaderberg et al., 2016). Meta learning has also been studied for finding optimal hyper- parameters (Li et al., 2017) and a good initialisation for few- shot learning (Finn et al., 2017). (Santoro et al., 2016) also investigated few shot learning via an external memory module. Vinyals et al. (2016); Snell et al. (2017) realised few shot learning in the instance space via a differentiable nearest- neighbour approach. Our method also performs in the instance space, but induces auxiliary knowledge as an implicit regularisation to improve generalisation of the principal task. ## 3 META AUXILIARY LEARNING In this section, we introduce our method for automatically generating optimum auxiliary tasks, which we call Meta AuXiliary Learning (MAXL). ### 3.1 PROBLEM SETUP The goal of meta auxiliary learning is to train a meta generator that can generate higher complexity auxiliary tasks, to improve performance of the principal task. To accomplish this, we use two networks: a multi- task evaluator which trains on the principal and auxiliary tasks, and evaluates the performance of the auxiliary tasks on a meta set, and a meta generator which generates these auxiliary tasks. For simplicity, we consider image classification tasks in this section, where the auxiliary task is sub- class labelling, and the meta generator determines target sub- class labels, but the approach can be considered general for any type of task. We denote the multi- task evaluator as a function \(f_{\theta_{1}}(x)\) that takes an input \(x\) with network parameters \(\theta_{1}\) , and the meta generator as a function \(g_{\theta_{2}}(x)\) that takes the same input \(x\) with network parameters \(\theta_{2}\) . 
For a dataset with input \(x\) and ground-truth label \(y\) for the principal task, we split the data into three subsets: training \((x_{\mathrm{train}}, y_{\mathrm{train}})\), meta-training \((x_{\mathrm{meta}}, y_{\mathrm{meta}})\), and test \((x_{\mathrm{test}}, y_{\mathrm{test}})\). Training data is used for updating \(\theta_{1}\), meta-training data is used for updating \(\theta_{2}\), and test data is used for overall evaluation. In the multi-task evaluator, we apply a hard parameter sharing approach (Ruder, 2017), in which we predict the principal and auxiliary tasks using a shared set of features \(\theta_{1}\) in the multi-task network. At the end of the last feature layer \(f_{\theta_{1}}(x)\), we apply further task-specific layers to output the corresponding prediction for each task. We denote the predicted principal labels by \(f_{\theta_{1}}^{\mathrm{pri}}(x)\) and the predicted auxiliary labels by \(f_{\theta_{1}}^{\mathrm{aux}}(x)\). In the meta generator, we pre-define a hierarchical structure \(\psi\) which determines the number of sub-classes for each class in the principal task. At the end of the last feature layer \(g_{\theta_{2}}(x)\), this hierarchy, together with the ground-truth label \(y\) for the principal task, is used to generate the target auxiliary labels, denoted by \(g_{\theta_{2}}^{\mathrm{gen}}(x, y, \psi)\). We allow soft-assignment labelling rather than enforcing a one-hot encoding, which gives greater flexibility to learn optimum auxiliary tasks. The meta generator uses a masked SoftMax to ensure that each output node represents a sub-class label for only one class in the principal task, as described further in Section 3.3. A visualisation of our proposed MAXL approach is shown in Figure 2. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: (a) Illustration of the two networks which make up our meta auxiliary learning algorithm. (b) Illustration of vanilla SoftMax and Mask SoftMax with 3 principal classes. Vanilla SoftMax outputs over all 5 auxiliary classes, whereas Mask SoftMax outputs over a hierarchical structure \(\psi = [2,2,1]\) to constrain the prediction space. </center>

### 3.2 MODEL OBJECTIVES

The multi-task evaluator is trained in a tightly-coupled manner with the meta generator: the meta generator determines target labels for the multi-task evaluator, which in turn determines the suitability of those labels. Given target labels determined by the meta generator, the multi-task evaluator is trained to predict these labels, alongside the ground-truth labels for the principal task. For both the principal and auxiliary classification tasks, we apply the focal loss (Lin et al., 2017) with a focusing parameter \(\gamma = 2\), defined as: \[\mathcal{L}(\hat{y},y) = -y(1 - \hat{y})^{\gamma}\log (\hat{y}), \quad (1)\] where \(\hat{y}\) is the predicted label and \(y\) is the ground-truth label. The focal loss focuses training on incorrectly predicted labels, which we found improved performance in our experimental evaluation compared with the regular cross-entropy log loss.
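As a concrete reference, a sketch of Eq. 1 for (possibly soft) class-probability targets might look as follows; the function name and the `eps` clamp are our own additions.

```python
import torch


def focal_loss(pred_probs, target_probs, gamma=2.0, eps=1e-12):
    """Eq. (1), summed over classes and averaged over the batch.
    pred_probs, target_probs: (N, C) tensors of class probabilities;
    targets may be soft, e.g. the generated auxiliary labels."""
    loss = -target_probs * (1.0 - pred_probs) ** gamma * torch.log(pred_probs + eps)
    return loss.sum(dim=1).mean()
```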
To update parameters \(\theta_{1}\) in the multi-task evaluator, we define the multi-task objective as follows: \[\arg \min_{\theta_{1}}\left(\mathcal{L}(f_{\theta_{1}}^{\mathrm{pri}}(x_{\mathrm{train}}^{(i)}),y_{\mathrm{train}}^{(i)}) + \mathcal{L}(f_{\theta_{1}}^{\mathrm{aux}}(x_{\mathrm{train}}^{(i)}),g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{train}}^{(i)},y_{\mathrm{train}}^{(i)},\psi))\right), \quad (2)\] where \((i)\) denotes the \(i^{th}\) batch from the training data. The meta generator is then trained by encouraging the target labels for the auxiliary task to be chosen such that, if the multi-task evaluator were trained on these labels, performance on the principal task would be maximised. Training the meta generator therefore requires evaluation on a separate dataset, the meta-training set, to ensure that the target auxiliary labels encourage generalisation beyond the data supplied to the multi-task evaluator. To update parameters \(\theta_{2}\) in the meta generator, we define the meta objective as follows: \[\arg \min_{\theta_{2}}\mathcal{L}(f_{\theta_{1}^{+}}^{\mathrm{pri}}(x_{\mathrm{meta}}^{(i)}),y_{\mathrm{meta}}^{(i)}). \quad (3)\] Here \(\theta_{1}^{+}\) represents the weights of the multi-task network as they would be after one gradient update using the generated auxiliary labels for the meta-training batch: \[\theta_{1}^{+} = \theta_{1} - \alpha \nabla_{\theta_{1}}\left(\mathcal{L}(f_{\theta_{1}}^{\mathrm{pri}}(x_{\mathrm{meta}}^{(i)}),y_{\mathrm{meta}}^{(i)}) + \mathcal{L}(f_{\theta_{1}}^{\mathrm{aux}}(x_{\mathrm{meta}}^{(i)}),g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{meta}}^{(i)},y_{\mathrm{meta}}^{(i)},\psi))\right), \quad (4)\] <--- Page Split ---> where \(\alpha\) is the learning rate. The trick in this meta objective is that we take a derivative over a derivative (requiring second-order gradients) to update \(\theta_{2}\), by retaining the computational graph of \(\theta_{1}^{+}\) in order to compute derivatives with respect to \(\theta_{2}\). This second-derivative trick in meta learning was also used in Finn et al. (2017) and Zhang et al. (2018). However, we found that the generated auxiliary labels can easily collapse (i.e. degenerate by simply learning a similar level of complexity as the principal task), which leaves parameters \(\theta_{2}\) in a local minimum without producing any extra useful knowledge. Thus, to encourage the network to learn more complex and informative auxiliary tasks, we further apply an entropy loss \(\mathcal{H}(g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{meta}}^{(i)},y_{\mathrm{meta}}^{(i)},\psi))\) as a regularisation term in the meta objective. A detailed explanation of the entropy loss and the collapsing label problem is given in Section 3.4.
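A minimal sketch of this meta-training step, assuming PyTorch >= 2.0 (`torch.func`): all names are ours, `loss_fn` is assumed to wrap a (masked) SoftMax plus the focal loss of Eq. 1 and to accept integer targets for the principal task, and `entropy_fn` is the regulariser of Section 3.4.

```python
import torch
from torch.func import functional_call


def meta_step(evaluator, generator, gen_optim, x_meta, y_meta,
              loss_fn, entropy_fn, alpha=1e-2, lam=0.2):
    """One update of theta_2 (Eqs. 3-4): simulate a gradient step on theta_1,
    then backpropagate the resulting principal-task loss into the generator."""
    params = dict(evaluator.named_parameters())               # theta_1
    y_gen = generator(x_meta, y_meta)                         # soft auxiliary targets
    pri, aux = evaluator(x_meta)
    inner = loss_fn(pri, y_meta) + loss_fn(aux, y_gen)
    grads = torch.autograd.grad(inner, list(params.values()), create_graph=True)
    theta_plus = {k: p - alpha * g                            # theta_1^+ of Eq. (4),
                  for (k, p), g in zip(params.items(), grads)}  # graph retained

    pri_plus, _ = functional_call(evaluator, theta_plus, (x_meta,))
    meta_loss = loss_fn(pri_plus, y_meta) + lam * entropy_fn(y_gen)  # Eq. (3) + reg.
    gen_optim.zero_grad()
    meta_loss.backward()   # second-order: flows through grads into theta_2
    gen_optim.step()       # gen_optim holds only theta_2; gradients accumulated
                           # on theta_1 during this step are simply discarded
```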
Finally, the entire MAXL procedure is given in Algorithm 1.

**Algorithm 1: The MAXL algorithm**

**Dataset:** \(D = \{(x_{\mathrm{train}},y_{\mathrm{train}}),(x_{\mathrm{meta}},y_{\mathrm{meta}})\}\)
**Initialise:** network parameters \(\theta_{1},\theta_{2}\); hierarchical structure \(\psi\)
**Initialise:** hyper-parameters: learning rates \(\alpha ,\beta\); task weighting \(\lambda\)
**for** each training iteration \(i\) **do**
  # fetch one batch of training and meta data
  \((x_{\mathrm{train}}^{(i)},y_{\mathrm{train}}^{(i)}),(x_{\mathrm{meta}}^{(i)},y_{\mathrm{meta}}^{(i)}) \in \{(x_{\mathrm{train}},y_{\mathrm{train}}),(x_{\mathrm{meta}},y_{\mathrm{meta}})\}\)
  # training step
  Update: \(\theta_{1}\leftarrow \theta_{1} - \alpha \nabla_{\theta_{1}}\left(\mathcal{L}(f_{\theta_{1}}^{\mathrm{pri}}(x_{\mathrm{train}}^{(i)}),y_{\mathrm{train}}^{(i)}) + \mathcal{L}(f_{\theta_{1}}^{\mathrm{aux}}(x_{\mathrm{train}}^{(i)}),g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{train}}^{(i)},y_{\mathrm{train}}^{(i)},\psi))\right)\)
  # meta-training step
  Compute: \(\theta_{1}^{+} = \theta_{1} - \alpha \nabla_{\theta_{1}}\left(\mathcal{L}(f_{\theta_{1}}^{\mathrm{pri}}(x_{\mathrm{meta}}^{(i)}),y_{\mathrm{meta}}^{(i)}) + \mathcal{L}(f_{\theta_{1}}^{\mathrm{aux}}(x_{\mathrm{meta}}^{(i)}),g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{meta}}^{(i)},y_{\mathrm{meta}}^{(i)},\psi))\right)\)
  Update: \(\theta_{2}\leftarrow \theta_{2} - \beta \nabla_{\theta_{2}}\left(\mathcal{L}(f_{\theta_{1}^{+}}^{\mathrm{pri}}(x_{\mathrm{meta}}^{(i)}),y_{\mathrm{meta}}^{(i)}) + \lambda \mathcal{H}(g_{\theta_{2}}^{\mathrm{gen}}(x_{\mathrm{meta}}^{(i)},y_{\mathrm{meta}}^{(i)},\psi))\right)\)
**end for**

### 3.3 MASK SOFTMAX FOR HIERARCHICAL PREDICTIONS

In the prediction layer of the meta generator, we designed a modified SoftMax function to predict target auxiliary labels which conform to a pre-defined hierarchy \(\psi\). As shown in Figure 2 (upper right), the original SoftMax function does not constrain sub-class labelling to lie within this hierarchy. Our Mask SoftMax resolves this issue by applying a binary mask to the original SoftMax function. The hierarchical structure \(\psi\) determines the number of sub-classes \(\psi[i]\) in each principal class \(i\), so the total prediction space for auxiliary labels is \(\sum_{i}\psi[i]\). This hierarchy, together with the ground-truth principal class label \(y\) of the current image, creates the mask via a binary function \(M = \mathcal{B}(y,\psi)\). Using the principal ground-truth label \(y\), the corresponding range of sub-classes \(\psi[y]\) is selected, and a binary mask \(M\) of size \(\sum_{i}\psi[i]\) is created as the multi-hot encoding \(\mathbb{1}_{\sum_{i< y}\psi[i]:\sum_{i\leq y}\psi[i]}\), where \(\mathbb{1}_{a:b}\) denotes a multi-hot encoding in which indices from \(a\) to \(b\) are set to 1. Using the example in Figure 2, consider the principal task to have 3 classes with ground-truth labels \(y = 0,1,2\), and hierarchical structure \(\psi = [2,2,1]\). In this case, the auxiliary prediction space has size 5, and the corresponding binary masks are \(M = [1,1,0,0,0],[0,0,1,1,0],[0,0,0,0,1]\), respectively. Finally, we apply the binary mask \(M\) element-wise within the original SoftMax function to obtain the final auxiliary task predictions: \[\mathrm{SoftMax}\colon p(\hat{y}_{i}) = \frac{\exp \hat{y}_{i}}{\sum_{j}\exp \hat{y}_{j}},\qquad \mathrm{MaskSoftMax}\colon p(\hat{y}_{i}) = \frac{M_{i}\exp \hat{y}_{i}}{\sum_{j}M_{j}\exp \hat{y}_{j}},\quad M = \mathcal{B}(y,\psi),\] where \(p(\hat{y}_{i})\) is the predicted probability of auxiliary class \(i\); the mask assigns zero probability to every sub-class that does not belong to the principal class \(y\).
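A possible implementation of the mask construction and Mask SoftMax (names ours): masking the logits with \(-\infty\) before the SoftMax is numerically equivalent to the element-wise product form above, and yields exact zeros outside the selected block.

```python
import itertools
import torch


def masked_softmax(logits, y, psi):
    """Mask SoftMax of Section 3.3.
    logits: (N, sum(psi)) auxiliary logits; y: (N,) principal labels;
    psi[c]: number of sub-classes owned by principal class c."""
    offsets = list(itertools.accumulate([0] + list(psi)))    # block boundaries
    mask = torch.zeros_like(logits)
    for row, c in enumerate(y.tolist()):                     # build M = B(y, psi)
        mask[row, offsets[c]:offsets[c + 1]] = 1.0
    masked = logits.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(masked, dim=1)                      # zero outside the block
```

With \(\psi = [2, 2, 1]\) and \(y = (0, 1, 2)\), the rows of `mask` reproduce exactly the three masks listed above.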
### 3.4 THE COLLAPSING CLASS PROBLEM

As previously discussed, we predict each auxiliary label within a hierarchical structure \(\psi\). However, the number of sub-classes defined in \(\psi[i]\) is only the maximum size of the auxiliary label prediction space, and there is no guarantee that all \(\psi[i]\) classes will be predicted. This may result in some auxiliary labels defined in \(\psi[i]\) being overlooked, with the output of the meta generator collapsing into a smaller sub-class space. In experiments, we found this phenomenon to be particularly apparent when we either use a large learning rate for training the meta generator, or a large sub-class prediction space \(\psi\). To avoid this collapsing class problem, we introduce an additional regularisation loss, which we call the entropy loss \(\mathcal{H}(\hat{y}^{(i)})\). It encourages the meta generator to utilise the full prediction space, by encouraging a large prediction entropy across this space. Assuming a well-balanced dataset, the entropy loss is the KL divergence between the predicted auxiliary label distribution of the \(i^{th}\) batch and a uniform distribution \(\mathcal{U}\), which is equivalent, up to an additive constant, to the negative entropy of the batch-averaged prediction: \[\mathcal{H}(\hat{y}^{(i)}) = \sum_{k = 1}^{K}\overline{y_{k}}\log \overline{y_{k}},\quad \overline{y_{k}} = \frac{1}{N}\sum_{n = 1}^{N}\hat{y}_{k}^{(n)}, \quad (5)\] where \(K\) is the number of auxiliary labels, \(N\) is the training batch size, and \(\hat{y}^{(n)}\) is the prediction for the \(n^{th}\) example of the batch. The entropy loss was essential to matching the performance of human-defined auxiliary tasks, as shown in our experiments. Higher entropy in the auxiliary target labels results in a more complex auxiliary task, which avoids local minima during training, such as assigning a single label to all examples of a principal class.

## 4 EXPERIMENTS

In this section, we present experimental results evaluating MAXL against several baselines and datasets on image classification tasks.

### 4.1 EXPERIMENTAL SETUP

**Datasets** We evaluated on three datasets: CIFAR100, CIFAR10, and CIFAR10.1v6 (Recht et al., 2018). CIFAR100 consists of 100 principal classes, whilst CIFAR10 and CIFAR10.1v6 consist of 10 principal classes and share the same training set, but use two different test sets. To assess generalisation across different task complexities, we tested a range of combinations of principal and auxiliary class numbers. For CIFAR100, we expanded the dataset's provided 2-level hierarchy (20 and 100 classes) into a 4-level hierarchy (adding 3-class and 10-class levels) by manually assigning examples to these new hierarchy levels (see Appendix A). Based on the new hierarchy, we then tested all 6 possible combinations of principal and auxiliary class numbers.
Note that for MAXL, the hierarchy was used only to define the structure of \(\psi\) and the principal task labels, to ensure a fair comparison with a method using human-defined auxiliary tasks; the auxiliary task labelling within that structure was learned by MAXL itself. CIFAR10 and CIFAR10.1v6 do not have an associated manually-defined hierarchy, so we defined a range of hierarchical structures \(\psi [i] = 2, 5, 10, 20, 50, 100, \forall i\).

**Baselines** We compared MAXL to a number of baselines. Single Task trains only with the principal class label. Random Assignment trains with auxiliary classes whose labels are assigned randomly. Prototypical Net is a clustering method based on Snell et al. (2017), where prototypes for auxiliary classes are defined by embedding examples from meta-training data (which has human-defined auxiliary classes) using a pre-trained ImageNet network; unsupervised, differentiable, nearest-neighbour clustering is then used to produce the final auxiliary class labelling for the remaining training data. The key difference to MAXL is that, whilst both methods are unsupervised, the auxiliary class labelling in MAXL directly evaluates the generalisation performance of this labelling on the principal task, whilst the Prototypical Net method does not. Finally, Human trains with auxiliary classes, using the human-defined hierarchy. Note that due to the need for a manually-defined hierarchy, Prototypical Net and Human were evaluated only on CIFAR100. For all baselines, we use the same network architecture and training procedure as MAXL's multi-task <--- Page Split ---> evaluator. For the meta-training of MAXL and Prototypical Net, we split each training dataset and used \(10\%\) for meta-training the auxiliary labelling, and \(90\%\) for training the multi-task evaluator. For all other baselines, we used the full training set for training the multi-task evaluator.

**Training** Both the multi-task evaluator and the meta generator use VGG-16 (Simonyan & Zisserman, 2014) as their core, together with batch normalisation. For all experiments, we used a learning rate of 0.01 for the multi-task evaluator. For MAXL's meta generator, we found that a smaller learning rate of \(10^{-5}\) was necessary to help prevent the class collapsing problem. For all training, we halve the learning rate every 50 epochs and train for a total of 200 epochs, using vanilla stochastic gradient descent. On the meta generator, we apply an \(L_{1}\)-norm weight decay of \(5 \cdot 10^{-4}\), with no regularisation on the multi-task evaluator. We chose the weighting of the entropy regularisation loss term to be 0.2, based on empirical performance.

### 4.2 TEST PERFORMANCE

We now evaluate the performance of MAXL compared to these baselines, on all three datasets. Results for CIFAR100 are presented in Figure 3, and results for CIFAR10 and CIFAR10.1v6 are presented in Appendix B. ![](images/6_0.jpg) <center>Figure 3: Learning curves for the CIFAR100 test dataset, comparing MAXL with baseline methods. We provide results in all 6 different combinations of principal and auxiliary class numbers. </center> For CIFAR100, we observe that MAXL performs similarly to the method using human knowledge in 4 out of the 6 hierarchical structures, and worse in the remaining 2. Against all other baselines, MAXL performs at least as well, and in the majority of cases outperforms them by a significant margin.
We therefore see that MAXL is able to learn auxiliary tasks effectively by tightly coupling the auxiliary task generation and the principal task training, outperforming approaches in which these auxiliary tasks are assigned independently, such as random assignment or the Prototypical Net. With the performance of MAXL approaching that of a system using human-defined auxiliary tasks, we see strong evidence that MAXL is able to learn to generalise effectively in an unsupervised manner.

### 4.3 EFFECT OF AUXILIARY TASK COMPLEXITY

We now evaluate how the complexity of the auxiliary tasks affects the performance of the principal task. In Figure 4 (a), we present results from CIFAR10 and CIFAR10.1v6 showing the performance increase over single-task learning when there are 10 principal classes, but a range of auxiliary class numbers \((\psi [i] = 2, 5, 10, 20, 50, 100, \forall i)\). For each data point, the performance is calculated by <--- Page Split ---> averaging the test accuracy over the last 5 epochs of a 200-epoch run. Experiments were performed both with and without the entropy loss term, to show the benefit of this regularisation.

We observe an interesting trend in which test performance rises as the number of auxiliary classes increases, but then begins to fall. This suggests that, for a given complexity of principal task, there is an optimum complexity of the auxiliary tasks. One explanation may be that as the auxiliary tasks increase in complexity, the learned features favour these auxiliary tasks over the principal task, encouraging generalisation beyond the features learned only for the principal task. But if the auxiliary task is too complex, the features begin to overfit to it, and the overlap between the reasoning required for the principal and auxiliary tasks decreases. ![](images/7_0.jpg) <center>Figure 4: Performance improvement in percentages when training with MAXL compared with single-task learning, with 10 principal classes and a range of auxiliary classes. </center>

### 4.4 VISUALISATIONS OF GENERATED KNOWLEDGE

In Figure 5, we visualise 2D embeddings of examples from the CIFAR100 test dataset for two different task complexities. These were computed by applying t-SNE (Maaten & Hinton, 2008) to the final feature layer of the multi-task evaluator, and compared across three methods: our MAXL method, our baseline using the human-defined hierarchy, and our baseline using single-task learning. ![](images/7_1.jpg) <center>Figure 5: t-SNE visualisation of the learned final layer of the multi-task evaluator network, trained with two combinations of principal and auxiliary class numbers from CIFAR100. Colours represent the principal classes. </center> This visualisation shows the separability of principal classes after training with the multi-task evaluator. We see that both MAXL and Human show better separation of the principal classes than Single-Task, owing to the generalisation effect of auxiliary task learning. The distinction between the separability of the MAXL and Human visualisations is less clear, consistent with their very similar performance for these two task complexities in Figure 3. But given that MAXL uses the same hierarchical structure as Human, the visualisation shows that these two methods are clearly learning different representations. We also show examples of images assigned to the same auxiliary class by MAXL's multi-task evaluator.
Figure 6 shows example images with the highest prediction probabilities for three randomly selected auxiliary classes from CIFAR100, using the combination of 20 principal classes and 5 auxiliary classes per principal class, which gave the best MAXL performance in Figure 3. In addition, we also applied MAXL to MNIST, in which 3 auxiliary classes were used for each of the 10 principal classes. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 6: Visualisation of 5 test examples with the highest prediction probability, for each of 3 randomly selected auxiliary classes, for a number of different principal classes. We present the visualisation for CIFAR100 (top) when trained with 20 principal classes and 5 auxiliary classes per principal class, and for MNIST (bottom) when trained with 10 principal classes and 3 auxiliary classes per principal class. </center> To our initial surprise, the generated auxiliary labels visualised for both datasets show no clearly human-understandable knowledge. In particular, there are no obvious similarities within each auxiliary class, whether in terms of shape, colour, style, structure or semantic meaning. However, this makes more sense when we reconsider the task of the meta generator, which is to assign auxiliary labels which assist the principal task. Rather than grouping images in terms of semantic or visual similarity, the meta generator would be more effective if it were to group images in terms of a shared aspect of reasoning on which the multi-task evaluator currently struggles. If the multi-task evaluator can then improve its ability to determine the auxiliary class of an image in such a cluster, the learned features will help in overcoming this challenging aspect of reasoning. It therefore makes sense that the examples within an auxiliary class do not share semantic or visual similarity, but instead share a more complex underlying property. Further, we discovered that the generated auxiliary knowledge is not deterministic: the top predicted candidates differ when we re-train the network from scratch. We therefore speculate that a human-defined hierarchy is just one of a potentially infinite number of local optima, and that each run of training the meta generator produces another of these local optima.

## 5 CONCLUSION & FUTURE WORK

In this paper, we have presented and evaluated Meta Auxiliary Learning (MAXL). MAXL learns to generate optimum auxiliary tasks which, when trained alongside a principal task in a multi-task setup, maximise the generalisation of the principal task across a validation dataset. Rather than employing domain knowledge and human-defined auxiliary tasks, as is typically required, MAXL is self-supervised and, combined with its general nature, has the potential to automate the process of generalisation to new levels. <--- Page Split ---> Our evaluations on three image datasets have shown the performance of MAXL in an image classification setup, where the auxiliary task is to predict sub-class, hierarchical labels for an image. We have shown that MAXL significantly outperforms other auxiliary learning baselines, and that even when human-defined knowledge is used to manually construct the auxiliary tasks, MAXL performs similarly in the majority of experiments. Given this impressive performance from a self-supervised method, asking why the auxiliary tasks generated by MAXL do not outperform those constructed by a human opens up exciting future research in this direction.
Perhaps human-defined auxiliary tasks are themselves optimal and cannot be surpassed. However, we believe this not to be the case, since such tasks are typically chosen due to the availability of labelled data for them, and not necessarily for their optimality when combined with the principal task. Alternatively, perhaps the power of the human knowledge lies not in the domain-specific labels, but in higher-level reasoning about how auxiliary tasks should be structured. In our experiments, training MAXL with the same structure as a human-defined hierarchy, but learning its own auxiliary labels, typically led to similar performance as when the human-defined labels were used. The general nature of MAXL also opens up questions about how self-supervised auxiliary learning may be used to learn generic auxiliary tasks beyond sub-class labelling. During our experiments, we also ran preliminary experiments on predicting arbitrary vectors as the auxiliary task, but results so far have been inconclusive. However, the ability of MAXL to potentially learn flexible auxiliary tasks which can automatically be tuned for the principal task now offers an exciting direction towards automated generalisation across a wide range of more complex tasks.

## A 4-LEVEL CIFAR100 DATASET

Table 1: Building a 4-level hierarchy for the image classification task based on the CIFAR100 dataset. Originally, a 20-class and 100-class hierarchy was provided, and we manually introduced 3-class and 10-class levels.

<table><tr><td>3 Class</td><td>10 Class</td><td>20 Class</td><td>100 Class</td></tr><tr><td rowspan="10">animals</td><td rowspan="3">large animals</td><td>reptiles</td><td>crocodile, dinosaur, lizard, snake, turtle</td></tr><tr><td>large carnivores</td><td>bear, leopard, lion, tiger, wolf</td></tr><tr><td>large omnivores and herbivores</td><td>camel, cattle, chimpanzee, elephant, kangaroo</td></tr><tr><td rowspan="2">medium animals</td><td>aquatic mammals</td><td>beaver, dolphin, otter, seal, whale</td></tr><tr><td>medium-sized mammals</td><td>fox, porcupine, possum, raccoon, skunk</td></tr><tr><td rowspan="2">small animals</td><td>small mammals</td><td>hamster, mouse, rabbit, shrew, squirrel</td></tr><tr><td>fish</td><td>aquarium fish, flatfish, ray, shark, trout</td></tr><tr><td rowspan="2">invertebrates</td><td>insects</td><td>bee, beetle, butterfly, caterpillar, cockroach</td></tr><tr><td>non-insect invertebrates</td><td>crab, lobster, snail, spider, worm</td></tr><tr><td>people</td><td>people</td><td>baby, boy, girl, man, woman</td></tr><tr><td rowspan="3">vegetations</td><td rowspan="3">vegetations</td><td>flowers</td><td>orchids, poppies, roses, sunflowers, tulips</td></tr><tr><td>fruit and vegetables</td><td>apples, mushrooms, oranges, pears, peppers</td></tr><tr><td>trees</td><td>maple, oak, palm, pine, willow</td></tr><tr><td rowspan="7">objects and scenes</td><td rowspan="3">household objects</td><td>food containers</td><td>bottles, bowls, cans, cups, plates</td></tr><tr><td>household electrical devices</td><td>clock, keyboard, lamp, telephone, television</td></tr><tr><td>household furniture</td><td>bed, chair, couch, table, wardrobe</td></tr><tr><td>construction</td><td>large man-made outdoor things</td><td>bridge, castle, house, road, skyscraper</td></tr><tr><td>natural scenes</td><td>large natural outdoor scenes</td><td>cloud, forest, mountain, plain, sea</td></tr><tr><td rowspan="2">vehicles</td><td>vehicles 1</td><td>bicycle, bus, motorcycle, pickup truck, train</td></tr><tr><td>vehicles 2</td><td>lawn-mower, rocket, streetcar, tank, tractor</td></tr></table> <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 7: Testing performance on CIFAR10 (bottom) and CIFAR10.1v6 (top) datasets, across 6 different numbers of auxiliary classes. </center> <--- Page Split --->
reject
Reject
4.666667
ICLR_2019_paper_0658
iclr
2,019
# MULTI-OBJECTIVE TRAINING OF GENERATIVE ADVERSARIAL NETWORKS WITH MULTIPLE DISCRIMINATORS Anonymous authors Paper under double-blind review ## ABSTRACT Recent literature has demonstrated promising results on the training of Generative Adversarial Networks by employing a set of discriminators, as opposed to the traditional game involving one generator against a single adversary. Those methods perform single-objective optimization on some simple consolidation of the losses, e.g. an average. In this work, we revisit the multiple-discriminator approach by framing the simultaneous minimization of the losses provided by different models as a multi-objective optimization problem. Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets. Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction can be computed efficiently. Our results indicate that hypervolume maximization presents a better trade-off between sample quality, sample diversity, and computational cost than previous methods. ## 1 INTRODUCTION Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) offer a new approach to generative modeling, using game-theoretic training schemes to implicitly learn a given probability density. Prior to the emergence of GAN architectures, realistic generative modeling remained elusive. While offering unparalleled realism, GAN training remains fraught with stability issues. Commonly reported shortcomings of the GAN game are the lack of useful gradients provided by the discriminator, and mode collapse, i.e. lack of diversity in the generator's samples. Considerable research effort has been devoted in recent literature to overcoming training instability within the GAN framework. Some architectures, such as BEGAN (Berthelot et al., 2017), have applied auto-encoders as discriminators and proposed a new loss to help stabilize training. Methods such as TTUR (Heusel et al., 2017), in turn, have attempted to define schedules for updating the generator and discriminator differently. The PacGAN algorithm (Lin et al., 2017) modifies the discriminator's architecture to receive \(m\) concatenated samples as input; these samples are jointly classified as either real or generated, and the authors show that this enforces sample diversity. Modifications to the alternating updates of SGD were introduced in Yadav et al. (2017). In SNGAN (Miyato et al., 2018), the authors introduce spectral normalization on the discriminator, aiming to ensure Lipschitz continuity, which is empirically shown to consistently yield high-quality samples across different sets of hyperparameters. Recent works have proposed tackling GANs' instability issues with multiple discriminators. Neyshabur et al. (2017) propose a GAN variation in which one generator is trained against a set of discriminators, where each discriminator sees a fixed random projection of the inputs. Prior work, including GMAN (Durugkar et al., 2016), has also explored training against multiple discriminators. <--- Page Split ---> In this paper, we build upon Neyshabur et al.'s framework and propose reformulating the average loss minimization to further stabilize GAN training. Specifically, we propose treating the loss signal provided by each discriminator as an independent objective function.
To achieve this, we simultaneously minimize the losses using multi-objective optimization techniques. Namely, we exploit previously introduced methods such as the multiple gradient descent algorithm (MGD) (Désidéri, 2012). However, due to MGD's prohibitively high cost for large neural networks, we propose the use of more efficient alternatives, namely maximization of the hypervolume of the region defined between a fixed, shared upper bound on the losses (which we refer to as the nadir point \(\eta^{*}\)) and each of the component losses. In contrast to the approach of Neyshabur et al. (2017), where the average loss is minimized when training the generator, hypervolume maximization (HV) optimizes a weighted loss in which the generator's training adaptively assigns greater importance to feedback from discriminators against which it performs poorly. Experiments performed on MNIST show that HV presents a good compromise in the computational cost vs. sample quality trade-off, when compared to average loss minimization or GMAN's approach (lower quality and cost) and MGD (higher quality and cost). We also study the sensitivity to the introduced hyperparameters; results indicate that increasing the number of discriminators increases the generator's robustness, along with sample quality and diversity. Experiments on CIFAR-10 indicate that the described method produces higher-quality generator samples in terms of quantitative evaluation. Moreover, image quality and sample diversity are once more shown to consistently improve as we increase the number of discriminators. In summary, our main contributions are the following: 1. We offer a new perspective on multiple-discriminator GAN training by framing it in the context of multi-objective optimization, and draw similarities between previous research on GAN variations and MGD, commonly employed as a general solver for multi-objective optimization. 2. We propose a new method for training multiple-discriminator GANs: hypervolume maximization, which weights the gradient contribution of each discriminator by its loss. The remainder of this document is organized as follows: Section 2 introduces definitions from multi-objective optimization and MGD. In Section 3 we describe relevant prior literature. Hypervolume maximization is detailed in Section 4, with experiments and results presented in Section 5. Conclusions and directions for future work are drawn in Section 6. ## 2 PRELIMINARIES In this section we provide definitions from the multi-objective optimization literature which will be useful in the following sections. Henceforth, boldface notation is used to indicate vector-valued variables. Multi-objective optimization. A multi-objective optimization problem is defined as (Deb, 2001): \[\begin{array}{r}{\min \mathbf{F}(\mathbf{x}) = [f_{1}(\mathbf{x}),f_{2}(\mathbf{x}),\dots,f_{K}(\mathbf{x})]^{T},}\\ {\mathbf{x}\in \Omega ,} \end{array} \quad (1)\] where \(K\) is the number of objectives, \(\Omega\) is the variable space and \(\mathbf{x} = [x_{1},x_{2},\dots,x_{n}]^{T}\in \Omega\) is a decision vector, or possible solution to the problem. \(\mathbf{F}:\Omega \to \mathbb{R}^{K}\) is a set of \(K\) objective functions mapping the \(n\)-dimensional variable space to the \(K\)-dimensional objective space. Pareto-dominance. Let \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) be two decision vectors.
\(\mathbf{x}_{1}\) is said to dominate \(\mathbf{x}_{2}\) (denoted by \(\mathbf{x}_{1}\prec \mathbf{x}_{2}\)) if and only if \(f_{i}(\mathbf{x}_{1})\leq f_{i}(\mathbf{x}_{2})\) for all \(i\in \{1,2,\ldots ,K\}\) and \(f_{j}(\mathbf{x}_{1})< f_{j}(\mathbf{x}_{2})\) for some \(j\in \{1,2,\ldots ,K\}\). If a decision vector \(\mathbf{x}\) is dominated by no other vector in \(\Omega\), \(\mathbf{x}\) is said to be non-dominated. Pareto-optimality. A decision vector \(\mathbf{x}^{*}\in \Omega\) is said to be Pareto-optimal if and only if there is no \(\mathbf{x}\in \Omega\) such that \(\mathbf{x}\prec \mathbf{x}^{*}\), i.e. \(\mathbf{x}^{*}\) is a non-dominated solution. The Pareto-optimal Set (PS) is <--- Page Split ---> defined as the set of all Pareto-optimal solutions \(\mathbf{x} \in \Omega\), i.e., \(PS = \{\mathbf{x} \in \Omega | \mathbf{x}\) is Pareto optimal\}. The set of all objective vectors \(\mathbf{F}(\mathbf{x})\) such that \(\mathbf{x}\) is Pareto-optimal is called the Pareto front (PF), that is \(PF = \{\mathbf{F}(\mathbf{x}) \in \mathbb{R}^{K} | \mathbf{x} \in PS\}\). Pareto-stationarity. Pareto-stationarity is a necessary condition for Pareto-optimality. For \(f_{k}\) differentiable everywhere for all \(k\), \(\mathbf{F}\) is said to be Pareto-stationary at the point \(\mathbf{x}\) if there exists a set of scalars \(\alpha_{k}\), \(k \in \{1, \ldots , K\}\), such that: \[\sum_{k = 1}^{K} \alpha_{k} \nabla f_{k} = \mathbf{0}, \quad \sum_{k = 1}^{K} \alpha_{k} = 1, \quad \alpha_{k} \geq 0 \quad \forall k. \quad (2)\] Multiple Gradient Descent. Multiple gradient descent (Désidéri, 2012; Schäffler et al., 2002; Peitz & Dellnitz, 2018) was proposed for the unconstrained case of multi-objective optimization of \(\mathbf{F}(\mathbf{x})\), assuming convex, continuously differentiable and smooth \(f_{k}(\mathbf{x})\) for all \(k\). MGD finds a common descent direction for all \(f_{k}\) by defining the convex hull of all \(\nabla f_{k}(\mathbf{x})\) and finding the minimum-norm element within it. Consider \(\mathbf{w}^{*}\) given by: \[\mathbf{w}^{*} = \mathrm{argmin}||\mathbf{w}||, \quad \mathbf{w} = \sum_{k = 1}^{K} \alpha_{k} \nabla f_{k}(\mathbf{x}), \quad \mathrm{s.t.} \quad \sum_{k = 1}^{K} \alpha_{k} = 1, \quad \alpha_{k} \geq 0 \quad \forall k. \quad (3)\] \(\mathbf{w}^{*}\) will either be \(\mathbf{0}\), in which case \(\mathbf{x}\) is a Pareto-stationary point, or \(\mathbf{w}^{*} \neq \mathbf{0}\), in which case \(\mathbf{w}^{*}\) is a descent direction for all \(f_{i}(\mathbf{x})\). Similarly to gradient descent, MGD consists of finding the common steepest descent direction \(\mathbf{w}_{t}^{*}\) at each iteration \(t\), and then updating parameters with a learning rate \(\lambda\) according to \(\mathbf{x}_{t + 1} = \mathbf{x}_{t} - \lambda \frac{\mathbf{w}_{t}^{*}}{||\mathbf{w}_{t}^{*}||}\). ## 3 RELATED WORK ### 3.1 TRAINING GANs WITH MULTIPLE DISCRIMINATORS While we would prefer to always have strong gradients from the discriminator during training, the vanilla GAN makes this difficult to ensure, as the discriminator quickly learns to distinguish real and generated samples (Goodfellow, 2016), thereafter providing no meaningful error signal to improve the generator. Durugkar et al. (2016) proposed Generative Multi-Adversarial Networks (GMAN), which consist of training the generator against a softmax-weighted arithmetic average of \(K\) different discriminators, according to Eq. 4.
\[\mathcal{L}_{G} = \sum_{k = 1}^{K} \alpha_{k} \mathcal{L}_{D_{k}}, \quad (4)\] where \(\alpha_{k} = \frac{e^{\beta \mathcal{L}_{D_{k}}}}{\sum_{j = 1}^{K} e^{\beta \mathcal{L}_{D_{j}}}}\), \(\beta \geq 0\), and \(\mathcal{L}_{D_{k}}\) is the loss of discriminator \(k\), defined as \[\mathcal{L}_{D_{k}} = -\mathbb{E}_{\mathbf{x} \sim p_{\mathrm{data}}} \log D_{k}(\mathbf{x}) - \mathbb{E}_{\mathbf{z} \sim p_{z}} \log (1 - D_{k}(G(\mathbf{z}))), \quad (5)\] where \(D_{k}(\mathbf{x})\) and \(G(\mathbf{z})\) are the outputs of the \(k\)-th discriminator and the generator, respectively. The goal of this averaging scheme is to privilege worse-performing discriminators, thus providing more useful gradients to the generator during training. Experiments were performed with \(\beta = 0\) (equal weights), \(\beta \to \infty\) (only the worst discriminator is taken into account), \(\beta = 1\), and \(\beta\) learned by the generator. Models with \(K = \{2, 5\}\) were tested and evaluated using a newly proposed metric and the Inception score (Salimans et al., 2016). However, results showed that the simple average of the discriminators' losses provided the best values for both metrics in most of the considered cases. In contrast to GMAN, Neyshabur et al. (2017) proposed training a GAN with \(K\) discriminators sharing the same architecture. Each discriminator \(D_{k}\) sees a different, randomly projected, lower-dimensional version of the input image. Random projections are defined by a randomly initialized matrix \(W_{k}\), which remains fixed during training. Theoretical results show that the distribution induced by the generator \(G\) converges to the real data distribution \(p_{\mathrm{data}}\), as long as there is a sufficient number of discriminators. Moreover, the discriminative tasks in the projected spaces are harder, i.e. real <--- Page Split ---> and fake samples are more alike, which avoids the early convergence of discriminators that leads to common stability issues in GAN training, such as mode collapse (Goodfellow, 2016). Essentially, the authors trade one hard problem for \(K\) easier subproblems. The losses \(\mathcal{L}_{D_{k}}\) of each discriminator are the same as in Eq. 5. However, the generator loss \(\mathcal{L}_{G}\) is defined as simply the sum of the losses provided by each discriminator, as shown in Eq. 6. This choice of \(\mathcal{L}_{G}\) does not exploit available information such as the performance of the generator with respect to each discriminator. \[\mathcal{L}_{G} = -\sum_{k = 1}^{K}\mathbb{E}_{\mathbf{z}\sim p_{z}}\log D_{k}(G(\mathbf{z})). \quad (6)\] ### 3.2 HYPERVOLUME MAXIMIZATION Consider a set of solutions \(S\) for a multi-objective optimization problem. The hypervolume \(\mathcal{H}\) of \(S\) is defined as (Fleischer, 2003): \(\mathcal{H}(S) = \mu (\cup_{\mathbf{x}\in S}[\mathbf{F}(\mathbf{x}),\pmb{\eta}^{*}])\), where \(\mu\) is the Lebesgue measure and \(\pmb{\eta}^{*}\) is a point dominated by all \(\mathbf{x}\in S\) (i.e. each \(f_{i}(\mathbf{x})\) is upper-bounded by the corresponding component of \(\pmb{\eta}^{*}\)), referred to as the nadir point. \(\mathcal{H}(S)\) can be understood as the size of the space covered by \(\{\mathbf{F}(\mathbf{x})|\mathbf{x}\in S\}\) (Bader & Zitzler, 2011). The hypervolume was originally introduced as a quantitative metric for the coverage and convergence of Pareto-optimal fronts obtained through population-based algorithms (Beume et al., 2007).
Methods based on direct maximization of \(\mathcal{H}\) exhibit favorable convergence even in challenging scenarios, such as the simultaneous minimization of 50 objectives (Bader & Zitzler, 2011). In the context of machine learning, single-solution hypervolume maximization has been applied to neural networks as a surrogate loss for the mean squared error (Miranda & Zuben, 2016): the loss provided by each example in a training batch is treated as a single cost, and the multi-objective approach aims to minimize the costs over all examples. The authors show that this method provides an inexpensive boosting-like training. ## 4 MULTI-OBJECTIVE TRAINING OF GANS WITH MULTIPLE DISCRIMINATORS We introduce a variation of the GAN game in which the generator solves the following multi-objective problem: \[\min \mathcal{L}_{G}(\mathbf{z}) = [l_{1}(\mathbf{z}),l_{2}(\mathbf{z}),\dots,l_{K}(\mathbf{z})]^{T}, \quad (7)\] where each \(l_{k} = - \mathbb{E}_{\mathbf{z}\sim p_{z}}\log D_{k}(G(\mathbf{z}))\), \(k\in \{1,\dots,K\}\), is the loss provided by the \(k\)-th discriminator. Training proceeds as in the usual formulation (Goodfellow et al., 2014), i.e. with alternating updates between the discriminators and the generator. Each discriminator is updated to minimize the loss described in Eq. 5. A natural choice for the generator's updates is the MGD algorithm, described in Section 2. However, computing the direction of steepest descent \(\mathbf{w}^{*}\) before every parameter update, as required by MGD, can be prohibitively expensive for large neural networks. Therefore, we propose an alternative scheme for multi-objective optimization, and argue that both our proposal and previously published methods can all be viewed as computationally more efficient versions of the MGD update rule, without the burden of having to solve a quadratic program, i.e. computing \(\mathbf{w}^{*}\), at every iteration. ### 4.1 HYPERVOLUME MAXIMIZATION FOR TRAINING GANS Fleischer (2003) has shown that maximizing \(\mathcal{H}\) yields Pareto-optimal solutions. Since MGD converges to a set of Pareto-stationary points, i.e. a super-set of the Pareto-optimal solutions, hypervolume maximization yields a sub-set of the solutions obtained using MGD. We exploit this property and define the generator loss as the negative log-hypervolume, as defined in Eq. 8: \[\mathcal{L}_{G} = -\mathcal{V} = -\sum_{k = 1}^{K}\log (\eta -l_{k}), \quad (8)\] <--- Page Split --->
This formulation will naturally assign more importance to higher losses in the final gradient, which is another useful property of hypervolume maximization. Nadir point selection. It is evident from Eq. 9 that the selection of \(\eta\) directly affects the importance assignment of gradients provided by different discriminators. Particularly, as the quantity \(\min_{k}\{\eta - l_{k}\}\) grows, the multi- objective GAN game approaches the one defined by the simple average of \(l_{k}\) . Previous literature has discussed in depth the effects of the selection of \(\eta\) in the case of population- based methods (Auger et al., 2009; 2012). However, those results are not readily applicable for the single- solution case. As will be shown in Section 5, our experiments indicate that the choice of \(\eta\) plays an important role in the final quality of samples. Nevertheless, this effect becomes less relevant as the number of discriminators increases. Nadir point adaptation. Similarly to (Miranda & Zuben, 2016), we propose an adaptive scheme for \(\eta\) such that at iteration \(t\) : \(\eta_{t} = \delta \max_{k}\{l_{k,t}\}\) , where \(\delta >1\) is a user- defined parameter which will be referred to as slack. This enforces \(\min_{k}\{\eta - l_{k}\}\) to be higher when \(\max_{k}\{l_{k,t}\}\) is high and low otherwise, which induces a similar behavior as an average loss when training begins and automatically places more importance on the discriminators in which performance is worse as training progresses. Extra discussion and an illustrative example of the adaptation scheme adopted is presented in Appendix G. Comparison to average loss minimization. The upper bound proven by Neyshabur et al. (2017) assumes that the marginals of the real and generated distributions are identical along all random projections. Average loss minimization does not ensure equally good approximation between the marginals along all directions. In case of a trade- off between discriminators, i.e. if decreasing the loss on a given projection increases the loss with respect to another one, the distribution of losses can be uneven. With HV on the other hand, especially when \(\eta\) is reduced throughout training, overall loss will be kept high as long as there are discriminators with high loss. This objective tends to prefer central regions of a trade- off, in which all discriminators present a roughly equally low loss. ### 4.2 RELATIONSHIP BETWEEN MULTIPLE DISCRIMINATOR GANs AND MGD All methods described previously for the solution of GANs with multiple discriminators, i.e. average loss minimization (Neyshabur et al., 2017), GMAN's weighted average (Durugkar et al., 2016) and hypervolume maximization can be defined as MGD- like two- step algorithms consisting of: Step 1 - consolidating all gradients into a single update direction (compute the set \(\alpha_{1,\ldots ,K}\) ); Step 2 - updating parameters in the direction returned in step 1. Definition of Step 1 for the different methods studied here can be seen in the following: 1. MGD: \(\alpha_{1:K} = \mathrm{argmin}_{\alpha}||\mathbf{w}||\) , s.t. \(\sum_{k = 1}^{K}\alpha_{k} = 1\) , \(\alpha_{k}\geq 0\forall k\in \{1,\dots,K\}\) 2. Average loss minimization (Neyshabur et al., 2017): \(\alpha_{k} = \frac{1}{K}\) 3. GMAN (Durugkar et al., 2016): \(\alpha_{k} = \mathrm{softmax}(l_{1:K})_{k}\) 4. 
Hypervolume maximization: \(\alpha_{k} = \frac{1}{T(\eta - l_{k})}\) , \(T = \sum_{k = 1}^{K}\frac{1}{\eta - l_{k}}\) <--- Page Split ---> ## 5 EXPERIMENTS We performed three sets of experiments aiming to analyze the following aspects: (i) How alternative methods for training GANs with multiple discriminators perform in comparison to MGD; (ii) How alternative methods perform in comparison to each other in terms of sample quality and coverage; and (iii) Whether the behavior induced by HV improves the results with respect to the baseline methods. Firstly, we exploited the relatively low dimensionality of MNIST and used it as testbed for a comparison of MGD with the other approaches, i.e. average loss minimization (AVG), GMAN's weighted average loss, and HV, proposed in this work. Moreover, multiple initializations and slack combinations were evaluated in order to investigate how varying the number of discriminators affects robustness to those factors. Then, experiments were performed with CIFAR- 10 while increasing the number of discriminators. We evaluated HV's performance compared to baseline methods, and the effect in samples quality. We also analyzed the impact on the diversity of generated samples by using the stacked MNIST dataset (Srivastava et al., 2017). Samples of generators trained on stacked MNIST, CIFAR- 10, CelebA, and Cats dataset are shown in the Appendix. In all experiments performed, the same architecture, set of hyperparameters and initialization were used for both AVG, GMAN and our proposed method. The only different aspect is the generator loss. Unless stated otherwise, Adam (Kingma & Ba, 2014) was used to train all the models with learning rate, \(\beta_{1}\) and \(\beta_{2}\) set to 0.0002, 0.5 and 0.999, respectively. Mini- batch size was set to 64. The Frechet Inception Distance (FID) (Heusel et al., 2017) was employed for comparison. Details on FID computation can be found in Appendix A. ### 5.1 MGD COMPARED WITH ALTERNATIVE METHODS We employed MGD in our experiments with MNIST. In order to do so, a quadratic program has to be solved prior to every parameters update. For this, we used the Scipy's implementation of the Serial Least Square Quadratic Program solver. Three and four fully connected layers with LeakyReLU activations were used for the generator and discriminator, respectively. Dropout was also employed in the discriminator and the random projection layer was implemented as a randomly initialized norm- 1 fully connected layer, reducing the vectorized dimensionality of MNIST from 784 to 512. A pretrained LeNet (LeCun et al., 1998) was used for FID computation. Experiments over 100 epochs with 8 discriminators are reported in Fig. 2 and Fig. 3. In Fig. 2, box- plots refer to 30 independent computations of FID over 10000 images sampled from the generator which achieved the minimum FID at train time. FID results are measured at train time over 1000 images and the best values are reported in Fig. 3 along with the necessary time to achieve it. MGD outperforms all tested methods. However, its cost per iteration does not allow its use in more relevant datasets other than MNIST. Hypervolume maximization, on the other hand, performs closest to MGD than the considered baselines, while introducing no relevant extra cost. In Fig. 4, we analyze convergence in the Pareto- stationarity sense by plotting the norm of the update direction for each method, given by \(\| \sum_{k = 1}^{K}\alpha_{k}\nabla l_{k}\|\) . 
All methods converged to similar norms, leading to the conclusion that different Pareto- stationary solutions will perform differently in terms of quality of samples. FID as a function of wall- clock time is shown in Figure 22 (Appendix H). HV sensitivity to initialization and choice of \(\delta\) . Analysis of the sensitivity of the performance with the choice of the slack parameter \(\delta\) and initialization was performed under the following setting: models were trained for 50 epochs on MNIST with hypervolume maximization using 8, 16, 24 discriminators. Three independent runs (different initializations) were executed with each \(\delta = \{1.05, 1.5, 1.75, 2\}\) and number of discriminators, totalizing 36 final models. Fig. 5 reports the box- plots obtained for 5 FID independent computations using 10000 images, for each of the 36 models obtained under the setting previously described. Results clearly indicate that increasing the number of discriminators yields much smaller variation in the FID obtained by the final model. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 2: Box-plots corresponding to 30 independent FID computations with 10000 images. MGD performs consistently better than other methods, followed by hypervolume maximization. Models that achieved minimum FID at train time were used. Red and blue dashed lines are the FIDs of a random generator and real data, respectively. </center> ![](images/6_1.jpg) <center>Figure 3: Time vs. best FID achieved during training for each approach. FID values are computed over 1000 generated images after every epoch. MGD performs relevantly better than others in terms of FID, followed by HV. However, MGD is approximately 7 times slower than HV. HV is well-placed in the time-quality trade-off. </center> ![](images/6_2.jpg) <center>Figure 4: Norm of the update direction over time for each method. While Pareto-stationarity is approximately achieved by all methods, performance varies relevantly in terms of FID. </center> ![](images/6_3.jpg) <center>Figure 5: Independent FID evaluations for models obtained with different runs using distinct slack parameter \(\delta\) . Sensitivity reduces as the number of discriminators increases. </center> ### 5.2 HV AS AN ALTERNATIVE FOR MGD We evaluate the performance of HV compared to baseline methods using the CIFAR- 10 dataset. FID was computed with a pretrained ResNet (He et al., 2016). ResNet was trained on the 10- class classification task of CIFAR- 10 up to approximately \(95\%\) test accuracy. DCGAN (Radford et al., 2015) and WGAN- GP (Gulrajani et al., 2017) were included in the experiments for FID reference. Same architectures as in (Neyshabur et al., 2017) were employed for all multi- discriminators settings. An increasing number of discriminators was used. Inception score as well as FID computed with other models are included in Appendix C. In Fig. 6, we report the box- plots of 15 independent evaluations of FID on 10000 images for the best model obtained with each method across 3 independent runs. Results once more indicate that HV outperforms other methods in terms of quality of the generated samples. Moreover, performance clearly improves as the number of discriminators grows. Fig. 7 shows the FID at train time, i.e. measured with 1000 generated samples after each epoch, for the best models across runs. Models trained against more discriminators clearly converge to smaller values. 
We report the norm of the update direction \(\| \sum_{k = 1}^{K}\alpha_{k}\nabla l_{k}\|\) for each method in Fig. 9, Appendix C. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 6: Box-plots of 15 independent FID computations with 10000 images. Dashed lines are real data (blue) and random generator (red) FIDs. </center> ![](images/7_1.jpg) <center>Figure 7: FID estimated over 1000 generated images at train time. Models trained against more discriminators achieve lower FID. </center> **Cost under the multiple-discriminator setting.** We highlight that even though training with multiple discriminators may be more computationally expensive than conventional approaches, such a framework supports fully parallel training of the discriminators, a feature which is not trivially possible in other GAN settings. In WGAN, for example, the discriminator is serially updated multiple times for each generator update. In Fig. 10 of Appendix C, we provide a comparison of the wall-clock time per iteration for all evaluated methods. Serial implementations of the discriminator updates with 8 and 16 discriminators were faster than WGAN-GP. ### 5.3 EFFECT OF THE NUMBER OF DISCRIMINATORS ON SAMPLE DIVERSITY We repeat the experiments of Srivastava et al. (2017), aiming to analyze how the number of discriminators affects the sample diversity of the corresponding generator when trained using hypervolume maximization. The stacked MNIST dataset is employed, and the results reported in Lin et al. (2017) are used for comparison. HV results for 8, 16, and 24 discriminators were obtained with 10k and 26k generator images, averaged over 10 runs. The number of covered modes along with the KL divergence between the generated mode distribution and the test data are reported in Table 1. Table 1: Number of covered modes and reverse KL divergence for stacked MNIST. <table><tr><td>Test samples</td><td>Model</td><td>Modes (Max 1000)</td><td>KL</td></tr><tr><td rowspan="5">26k</td><td>DCGAN (Radford et al., 2015)</td><td>99.0</td><td>3.400</td></tr><tr><td>ALI (Dumoulin et al., 2016)</td><td>16.0</td><td>5.400</td></tr><tr><td>Unrolled GAN (Metz et al., 2016)</td><td>48.7</td><td>4.320</td></tr><tr><td>VEEGAN (Srivastava et al., 2017)</td><td>150.0</td><td>2.950</td></tr><tr><td>PacGAN2 (Lin et al., 2017)</td><td>1000.0 ± 0.0</td><td>0.000 ± 0.003</td></tr><tr><td rowspan="3">10k</td><td>HV - 8 disc.</td><td>679.2 ± 5.9</td><td>1.139 ± 0.011</td></tr><tr><td>HV - 16 disc.</td><td>998.0 ± 1.8</td><td>0.120 ± 0.004</td></tr><tr><td>HV - 24 disc.</td><td>998.3 ± 1.1</td><td>0.116 ± 0.003</td></tr><tr><td rowspan="3">26k</td><td>HV - 8 disc.</td><td>770.8 ± 6.4</td><td>1.115 ± 0.007</td></tr><tr><td>HV - 16 disc.</td><td>1000.0 ± 0.0</td><td>0.088 ± 0.002</td></tr><tr><td>HV - 24 disc.</td><td>1000.0 ± 0.0</td><td>0.084 ± 0.002</td></tr></table> As in previous experiments, results improve as the number of discriminators increases. All evaluated models using HV outperformed DCGAN, ALI, Unrolled GAN and VEEGAN. Moreover, HV with 16 and 24 discriminators achieved state-of-the-art coverage values. Thus, the increase in model capacity obtained by using more discriminators directly results in improved generator coverage. Training details as well as architecture information are presented in Appendix B.
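For reference, the evaluation behind Table 1 can be sketched as follows (our code, assuming mode ids 0-999 are obtained from a pretrained 3-digit stacked-MNIST classifier, and that the test mode distribution is approximately uniform):

```python
import numpy as np


def coverage_and_reverse_kl(pred_modes, n_modes=1000):
    """pred_modes: array of mode ids (0..n_modes-1) assigned to generated
    samples by a pretrained classifier, one per generated image."""
    counts = np.bincount(pred_modes, minlength=n_modes)
    covered = int((counts > 0).sum())                 # modes hit at least once
    q = counts / counts.sum()                         # generated mode distribution
    p = np.full(n_modes, 1.0 / n_modes)               # test data assumed ~uniform
    nz = q > 0
    kl = float(np.sum(q[nz] * np.log(q[nz] / p[nz]))) # reverse KL(q || p)
    return covered, kl
```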
## 6 CONCLUSION In this work we have shown that employing multiple discriminators is a practical approach that allows us to trade extra capacity, and thereby extra computational cost, for higher quality and diversity of generated samples. Such an approach is complementary to other advances in GAN training and can easily be used together with other methods. We introduced a multi-objective optimization framework for studying multiple-discriminator GANs, and showed strong similarities between previous work and the multiple gradient descent algorithm. The proposed approach was observed to consistently yield higher quality samples in terms of FID. Furthermore, increasing the number of discriminators was shown to increase sample diversity and generator robustness. A deeper analysis of the quantity \(||\sum_{k = 1}^{K}\alpha_{k}\nabla l_{k}||\) is the subject of future investigation. We hypothesize that using it as a penalty term might reduce the necessity of a high number of discriminators. <--- Page Split ---> ## REFERENCES Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017. Anne Auger, Johannes Bader, Dimo Brockhoff, and Eckart Zitzler. Theory of the hypervolume indicator: optimal \(\mu\)-distributions and the choice of the reference point. In Proceedings of the tenth ACM SIGEVO workshop on Foundations of Genetic Algorithms, pp. 87-102. ACM, 2009. Anne Auger, Johannes Bader, Dimo Brockhoff, and Eckart Zitzler. Hypervolume-based multiobjective optimization: Theoretical foundations and practical implications. Theoretical Computer Science, 425:75-103, 2012. Johannes Bader and Eckart Zitzler. HypE: An algorithm for fast hypervolume-based many-objective optimization. Evolutionary Computation, 19(1):45-76, 2011. David Berthelot, Tom Schumm, and Luke Metz. BEGAN: boundary equilibrium generative adversarial networks. CoRR, abs/1703.10717, 2017. URL http://arxiv.org/abs/1703.10717. Nicola Beume, Boris Naujoks, and Michael Emmerich. SMS-EMOA: Multiobjective selection based on dominated hypervolume. European Journal of Operational Research, 181(3):1653-1669, 2007. Kalyanmoy Deb. Multi-objective optimization using evolutionary algorithms, volume 16. John Wiley & Sons, 2001. Jean-Antoine Désidéri. Multiple-gradient descent algorithm (MGDA) for multiobjective optimization. Comptes Rendus Mathematique, 350(5-6):313-318, 2012. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016. Ishan Durugkar, Ian Gemp, and Sridhar Mahadevan. Generative multi-adversarial networks. arXiv preprint arXiv:1611.01673, 2016. Mark Fleischer. The measure of Pareto optima: Applications to multi-objective metaheuristics. In International Conference on Evolutionary Multi-Criterion Optimization, pp. 519-533. Springer, 2003. Maurice Fréchet. Sur la distance de deux lois de probabilité. Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences, 244(6):689-692, 1957. Ian Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville.
Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5769-5779, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6629-6640, 2017. Alexia Jolicoeur-Martineau. The relativistic discriminator: a key element missing from standard GAN. arXiv preprint arXiv:1807.00734, 2018. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. <--- Page Split ---> Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. Zinan Lin, Ashish Khetan, Giulia Fanti, and Sewoong Oh. PacGAN: The power of two samples in generative adversarial networks. arXiv preprint arXiv:1712.04086, 2017. Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 2813-2821. IEEE, 2017. Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. arXiv preprint arXiv:1611.02163, 2016. Conrado Silva Miranda and Fernando José Von Zuben. Single-solution hypervolume maximization and its use for improving generalization of neural networks. CoRR, abs/1602.01164, 2016. URL http://arxiv.org/abs/1602.01164. Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018. Behnam Neyshabur, Srinadh Bhojanapalli, and Ayan Chakrabarti. Stabilizing GAN training with multiple random projections. arXiv preprint arXiv:1705.07831, 2017. Sebastian Peitz and Michael Dellnitz. Gradient-based multiobjective optimization with uncertainties. In NEO 2016, pp. 159-182. Springer, 2018. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pp. 2234-2242, 2016. Stefan Schäffler, Reinhart Schultz, and Klaus Weinzierl. Stochastic method for the solution of unconstrained vector optimization problems. Journal of Optimization Theory and Applications, 114(1):209-222, 2002. Akash Srivastava, Lazar Valkov, Chris Russell, Michael U Gutmann, and Charles Sutton. VEEGAN: Reducing mode collapse in GANs using implicit variational learning. In Advances in Neural Information Processing Systems, pp. 3310-3320, 2017. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, 2016. Abhay Yadav, Sohil Shah, Zheng Xu, David Jacobs, and Tom Goldstein. Stabilizing adversarial nets with prediction methods. arXiv preprint arXiv:1705.07364, 2017. <--- Page Split ---> ## APPENDIX ## A - OBJECTIVE EVALUATION METRIC.
In (Heusel et al., 2017), the authors proposed as a quality metric the squared Fréchet distance (Fréchet, 1957) between Gaussians defined by estimates of the first and second order moments of the outputs obtained from a forward pass through a pretrained classifier, for both real and generated data. They proposed the use of Inception V3 (Szegedy et al., 2016) for computing the data representation and called the metric the Fréchet Inception Distance (FID), which is defined as: \[\mathrm{FID} = ||m_d - m_g||^2 +\mathrm{Tr}(\Sigma_d + \Sigma_g - 2(\Sigma_d\Sigma_g)^{\frac{1}{2}}), \quad (10)\] where \(m_d\), \(\Sigma_d\) and \(m_g\), \(\Sigma_g\) are estimates of the first and second order moments of the representations of the real and generated data, respectively. We employ FID throughout our experiments for comparing the different approaches. However, for each dataset on which FID was computed, the output layer of a classifier pretrained on that particular dataset was used instead of Inception. \(m_d\) and \(\Sigma_d\) were estimated on the complete test partitions, which are not used during training. ## B - EXPERIMENTAL SETUP FOR STACKED MNIST EXPERIMENTS AND GENERATOR'S SAMPLES Architectures of the generator and discriminator are detailed in Tables 2 and 3, respectively. Batch normalization was used in all intermediate convolutional and fully connected layers of both models. We employed RMSprop to train all the models, with the learning rate and \(\alpha\) set to 0.0001 and 0.9, respectively. The mini-batch size was set to 64. The setup in (Lin et al., 2017) is employed, and we build 128000 and 26000 samples for the train and test sets, respectively. Table 2: Generator's architecture. <table><tr><td>Layer</td><td>Outputs</td><td>Kernel size</td><td>Stride</td><td>Activation</td></tr><tr><td>Input: z ~ N(0, I100)</td><td></td><td></td><td></td><td></td></tr><tr><td>Fully connected</td><td>2*2*512</td><td>-</td><td>-</td><td>ReLU</td></tr><tr><td>Transposed convolution</td><td>4*4*256</td><td>4, 4</td><td>2, 2</td><td>ReLU</td></tr><tr><td>Transposed convolution</td><td>8*8*128</td><td>4, 4</td><td>2, 2</td><td>ReLU</td></tr><tr><td>Transposed convolution</td><td>14*14*64</td><td>4, 4</td><td>2, 2</td><td>ReLU</td></tr><tr><td>Transposed convolution</td><td>28*28*3</td><td>4, 4</td><td>2, 2</td><td>Tanh</td></tr></table> Table 3: Discriminator's architecture. <table><tr><td>Layer</td><td>Outputs</td><td>Kernel size</td><td>Stride</td><td>Activation</td></tr><tr><td>Input</td><td>28*28*3</td><td></td><td></td><td></td></tr><tr><td>Projection</td><td>14*14*3</td><td>8, 8</td><td>2, 2</td><td></td></tr><tr><td>Convolution</td><td>7*7*64</td><td>4, 4</td><td>2, 2</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>5*5*128</td><td>4, 4</td><td>2, 2</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>2*2*256</td><td>4, 4</td><td>2, 2</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>1</td><td>4, 4</td><td>2, 2</td><td>Sigmoid</td></tr></table> <--- Page Split ---> ![](images/11_0.jpg) <center>(a) HV - 8 discriminators </center> ![](images/11_1.jpg) <center>(b) HV - 16 discriminators </center> ![](images/11_2.jpg) <center>(c) HV - 24 discriminators </center> Figure 8: Stacked MNIST samples for HV trained with 8, 16, and 24 discriminators. Sample diversity increases greatly when more discriminators are employed.
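The tables above translate into the PyTorch sketch below. Kernel sizes, strides, output shapes, and activations come from Tables 2 and 3; the padding values are not given in the text and were inferred by us so that the feature-map sizes match the "Outputs" column, and the LeakyReLU slope and the exact batch normalization placement follow our reading of Appendix B rather than a stated specification.

```python
import torch.nn as nn

# Generator of Table 2. Paddings are our inference to hit the listed sizes.
generator = nn.Sequential(
    nn.Linear(100, 2 * 2 * 512), nn.BatchNorm1d(2 * 2 * 512), nn.ReLU(),
    nn.Unflatten(1, (512, 2, 2)),
    nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.ReLU(),  # 4x4
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),  # 8x8
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=2), nn.BatchNorm2d(64), nn.ReLU(),    # 14x14
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),                          # 28x28
)

def make_discriminator():
    """Discriminator of Table 3, with a frozen random projection as first layer."""
    proj = nn.Conv2d(3, 3, 8, stride=2, padding=3)  # 28x28 -> 14x14
    proj.requires_grad_(False)                      # projection stays fixed during training
    return nn.Sequential(
        proj,
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),    # 7x7
        nn.Conv2d(64, 128, 4, stride=2, padding=3), nn.BatchNorm2d(128), nn.LeakyReLU(0.2), # 5x5
        nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),# 2x2
        nn.Conv2d(256, 1, 4, stride=2, padding=1), nn.Sigmoid(),                            # 1x1
    )

discriminators = [make_discriminator() for _ in range(24)]  # e.g. K = 24
```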
<--- Page Split ---> ## C - EXTRA RESULTS ON CIFAR-10 ## C.1 - MULTIPLE DISCRIMINATORS ACROSS DIFFERENT INITIALIZATIONS AND OTHER SCORES Table 4 presents the best FID (computed with a pretrained ResNet) achieved by each approach at train time, along with the epoch in which it was achieved, for each of 3 independent runs. Train-time FIDs are computed using 1000 generated images. <table><tr><td>#D</td><td>Method</td><td>Best FID (epoch)</td></tr><tr><td rowspan="2">1</td><td>DCGAN</td><td>7.09 (68), 9.09 (21), 4.22 (101)</td></tr><tr><td>WGAN-GP</td><td>5.09 (117), 5.69 (101), 7.13 (71)</td></tr><tr><td rowspan="3">8</td><td>AVG</td><td>3.35 (105), 4.64 (141), 3.00 (76)</td></tr><tr><td>GMAN</td><td>4.28 (123), 4.24 (129), 3.80 (133)</td></tr><tr><td>HV</td><td>3.87 (102), 4.54 (82), 3.20 (98)</td></tr><tr><td rowspan="3">16</td><td>AVG</td><td>3.16 (96), 2.50 (91), 2.77 (116)</td></tr><tr><td>GMAN</td><td>2.69 (129), 2.36 (144), 2.48 (120)</td></tr><tr><td>HV</td><td>2.56 (85), 2.70 (97), 2.68 (133)</td></tr><tr><td rowspan="3">24</td><td>AVG</td><td>2.10 (94), 2.44 (132), 2.43 (129)</td></tr><tr><td>GMAN</td><td>2.16 (120), 2.02 (98), 2.13 (130)</td></tr><tr><td>HV</td><td>2.05 (83), 1.89 (97), 2.23 (130)</td></tr></table> Table 4: Best FID obtained for each approach on 3 independent runs. FID is computed on 1000 generated images after every epoch. In Fig. 9, we report the norm of the update direction \(\big\|\sum_{k = 1}^{K}\alpha_{k}\nabla l_{k}\big\|\) of the best model obtained for each method. Interestingly, the different methods present similar behavior in terms of convergence in the Pareto-stationarity sense, i.e. the norm upon convergence is lower for models trained against more discriminators, regardless of the employed method. ![](images/12_0.jpg) <center>Figure 9: Norm of the update direction over time for each method. Higher numbers of discriminators yield lower norms upon convergence. </center> We computed extra scores using 10000 images generated by the best model reported in Table 4, i.e. the same models utilized to generate the results shown in Fig. 6. Both the Inception score and FID were computed with the original implementations, while FID-VGG and FID-ResNet were computed using a VGG and a ResNet we pretrained. Results are reported with respect to DCGAN's scores. <table><tr><td></td><td>WGAN-GP</td><td>AVG-8</td><td>AVG-16</td><td>AVG-24</td><td>GMAN-8</td><td>GMAN-16</td><td>GMAN-24</td><td>HV-8</td><td>HV-16</td><td>HV-24</td></tr><tr><td>Inception Score</td><td>1.08</td><td>1.02</td><td>1.26</td><td>1.36</td><td>0.95</td><td>1.32</td><td>1.42</td><td>1.00</td><td>1.30</td><td>1.44</td></tr><tr><td>FID</td><td>0.80</td><td>0.98</td><td>0.76</td><td>0.73</td><td>0.92</td><td>0.79</td><td>0.65</td><td>0.89</td><td>0.77</td><td>0.72</td></tr><tr><td>FID-VGG</td><td>1.29</td><td>0.91</td><td>1.03</td><td>0.85</td><td>0.87</td><td>0.78</td><td>0.73</td><td>0.78</td><td>0.75</td><td>0.64</td></tr><tr><td>FID-ResNet</td><td>1.64</td><td>0.88</td><td>0.90</td><td>0.62</td><td>0.80</td><td>0.72</td><td>0.73</td><td>0.75</td><td>0.73</td><td>0.51</td></tr></table> Table 5: Scores of different methods measured on generated CIFAR-10 samples. DCGAN scores are used as reference values, and reported results are the ratio between a given model's score and DCGAN's. The Inception score is better when high, whereas FIDs are better when low.
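Before turning to cost measurements, a schematic PyTorch sketch of one training iteration under the multiple-discriminator setting may help fix ideas: each discriminator minimizes Eq. 5 independently of the others (the basis of the parallelization argument in C.2 below), after which the generator minimizes the HV loss of Eq. 8 with the adaptive nadir point of Section 4. Model and optimizer objects are placeholders, and each `D` is assumed to embed its own fixed projection as its first layer, as in Table 3.

```python
import torch
import torch.nn.functional as F

def train_step(G, Ds, d_opts, g_opt, x_real, z, delta=1.5):
    """One iteration with K discriminators (Eq. 5) and an HV generator update (Eq. 8)."""
    # --- Discriminator updates: mutually independent, hence parallelizable.
    x_fake = G(z).detach()
    for D, opt in zip(Ds, d_opts):
        opt.zero_grad()
        out_real, out_fake = D(x_real), D(x_fake)
        loss_d = F.binary_cross_entropy(out_real, torch.ones_like(out_real)) + \
                 F.binary_cross_entropy(out_fake, torch.zeros_like(out_fake))
        loss_d.backward()
        opt.step()

    # --- Generator update via hypervolume maximization.
    g_opt.zero_grad()
    x_fake = G(z)
    losses = []
    for D in Ds:
        out = D(x_fake)
        # l_k = -E[log D_k(G(z))] is BCE against the "real" label.
        losses.append(F.binary_cross_entropy(out, torch.ones_like(out)))
    losses = torch.stack(losses)
    eta = delta * losses.max().detach()      # adaptive nadir point, delta > 1
    loss_g = -torch.log(eta - losses).sum()  # Eq. 8: negative log-hypervolume
    loss_g.backward()
    g_opt.step()
```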
<--- Page Split ---> ## C.2 - COMPUTATIONAL COST In Table 6 we present a comparison of the minimum FID-ResNet obtained during training, along with the computational cost in terms of time and space, for different GANs with both 1 and 24 discriminators. The computational cost of training GANs under a multiple-discriminator setting is higher by design, in terms of both FLOPs and memory, compared with single-discriminator settings. However, the additional cost yields a corresponding shift in performance. This effect was consistently observed for three different well-known approaches, namely DCGAN (Radford et al., 2015), Least-squares GAN (LSGAN) (Mao et al., 2017), and HingeGAN (Miyato et al., 2018). The architectures of all single-discriminator models follow DCGAN, described in (Radford et al., 2015). For the 24-discriminator models, we used the architecture described in (Neyshabur et al., 2017), which consists of removing the normalization layers from DCGAN's discriminator and further adding the projection layer, in line with the previous experiments reported for CIFAR-10 upscaled to 64x64. All models were trained with a minibatch size of 64 for 150 epochs. Adam (Kingma & Ba, 2014) was used as the optimizer. The learning rate, \(\beta_{1}\), and \(\beta_{2}\) were equal to 0.0002, 0.5, and 0.999, respectively. <table><tr><td></td><td># Discriminators</td><td>FID-ResNet</td><td>FLOPS (MAC)</td><td>Memory (Mb)</td></tr><tr><td rowspan="2">DCGAN</td><td>1</td><td>4.22</td><td>8e10</td><td>1292</td></tr><tr><td>24</td><td>1.89</td><td>5e11</td><td>5671</td></tr><tr><td rowspan="2">LSGAN</td><td>1</td><td>4.55</td><td>8e10</td><td>1303</td></tr><tr><td>24</td><td>1.91</td><td>5e11</td><td>5682</td></tr><tr><td rowspan="2">HingeGAN</td><td>1</td><td>6.17</td><td>8e10</td><td>1303</td></tr><tr><td>24</td><td>2.25</td><td>5e11</td><td>5682</td></tr></table> Table 6: Comparison between different GANs with 1 and 24 discriminators in terms of the minimum FID-ResNet obtained during training, and FLOPs and memory consumption for a complete training step. Furthermore, the wall-clock time per iteration for different numbers of discriminators is shown in Fig. 10 for experiments on CIFAR-10 with serial updates of the discriminators. Notice that while the increase in cost in terms of FLOPs and memory is unavoidable when a multiple-discriminator setting is employed, the wall-clock time can be made close to that of single-discriminator cases, since training with respect to the different discriminators can be implemented in parallel. On the other hand, the extra time cost introduced by other frameworks such as WGAN-GP or SNGAN cannot be trivially recovered. ![](images/13_0.jpg) <center>Figure 10: Time in seconds per iteration of each method for serial updates of discriminators. The multiple-discriminator approaches considered do not present relevant differences in time per iteration. </center> <--- Page Split ---> ## C.3 - GENERATED SAMPLES In Figs. 11, 12, and 13 we show randomly generated samples with 8, 16, and 24 discriminators for AVG, GMAN, and HV, respectively. ![](images/14_0.jpg) <center>Figure 11: CIFAR-10 samples for AVG trained with 8, 16, and 24 discriminators. </center> ![](images/14_1.jpg) <center>Figure 12: CIFAR-10 samples for GMAN trained with 8, 16, and 24 discriminators. </center> ![](images/14_2.jpg) <center>Figure 13: CIFAR-10 samples for HV trained with 8, 16, and 24 discriminators.
</center> <--- Page Split ---> ## C.4 - RESULTS ON CIFAR-10 32X32 All results reported in previous sections using CIFAR-10 were obtained with an upscaled version of the dataset. Here, we thus run experiments with the dataset in its original resolution, aiming to contextualize our proposed approach with respect to previously introduced methods. To do so, we repeated experiments similar to those reported in Miyato et al. (2018), Table 2, for the model referred to as the standard CNN. The same architecture is employed and the spectral normalization is removed from the discriminators. Moreover, the same projection input is added to each of the discriminators. Results in terms of both FID and Inception score, evaluated on 5000 generated images as in (Miyato et al., 2018) as well as on 10000 images, are reported in Table 7 for our proposed approach and our implementation of (Miyato et al., 2018), along with the FID measured using a ResNet classifier trained in advance. As can be seen, adding the multiple-discriminator setting along with hypervolume maximization yields a relevant shift in performance for the DCGAN-like generator, taking all evaluated metrics to the levels of recently proposed GANs. Table 7: Evaluation of the effect of adding discriminators to a DCGAN-like model trained on CIFAR-10. Results reach the same level as the best reported for the given architecture when the multiple-discriminator setting is added and the normalization layers are removed from the discriminators. <table><tr><td></td><td>FID-ResNet</td><td>FID (5k)</td><td>IS (5k)</td><td>FID (10k)</td><td>IS (10k)</td></tr><tr><td>SNGAN (Miyato et al., 2018)</td><td>-</td><td>25.5</td><td>7.58 ± 0.12</td><td>-</td><td>-</td></tr><tr><td>WGAN-GP (Miyato et al., 2018)</td><td>-</td><td>40.2</td><td>6.68 ± 0.06</td><td>-</td><td>-</td></tr><tr><td>DCGAN (Miyato et al., 2018)</td><td>-</td><td>-</td><td>6.64 ± 0.14</td><td>-</td><td>-</td></tr><tr><td>SNGAN (our implementation)</td><td>1.55</td><td>27.93</td><td>7.11 ± 0.30</td><td>25.29</td><td>7.26 ± 0.12</td></tr><tr><td>DCGAN + 24 Ds and HV</td><td>1.21</td><td>27.74</td><td>7.32 ± 0.26</td><td>24.90</td><td>7.45 ± 0.17</td></tr></table> <--- Page Split ---> ## D - CELEBA DATASET ## D.1 - COMPARING WITH OTHER MULTIPLE-DISCRIMINATOR APPROACHES Here, we present samples obtained by generators trained against 8, 16, and 24 discriminators using AVG, GMAN, and HV on the CelebA dataset rescaled to \(64 \times 64\). Training lasted 100 epochs and samples are shown in Figs. 14, 15, and 16 for AVG, GMAN, and HV, respectively. The same architectures and hyperparameters used for the experiments with CIFAR-10 presented in Section 5 were utilized. ![](images/16_0.jpg) <center>Figure 14: CelebA samples for AVG trained with 8, 16, and 24 discriminators. </center> ![](images/16_1.jpg) <center>Figure 15: CelebA samples for GMAN trained with 8, 16, and 24 discriminators. </center> ![](images/16_2.jpg) <center>Figure 16: CelebA samples for HV trained with 8, 16, and 24 discriminators. </center> <--- Page Split ---> ## D.2 - GENERATING 128x128 IMAGES In this experiment, we verify whether the proposed multiple-discriminator setting is capable of generating higher resolution images. For that, we employed CelebA at a size of 128x128. We used a similar architecture for both the generator and discriminator networks as described in the previous experiments.
A convolutional layer with 2048 feature maps was added to both the generator and discriminator architectures due to the increase in image size. The Adam optimizer with the same set of hyperparameters as for CIFAR-10 and CelebA 64x64 was employed. We trained models with 6, 8, and 10 discriminators for 24 epochs. Samples from each generator are shown in Figure 17. ![](images/17_0.jpg) <center>Figure 17: 128x128 CelebA samples for HV trained during 24 epochs with 6, 8, and 10 discriminators. </center> <--- Page Split ---> ## E - GENERATING 256x256 CATS We show that the proposed multiple-discriminator setting scales to higher resolutions even in the small dataset regime, by reproducing the experiments presented in (Jolicoeur-Martineau, 2018). We used the same architecture for the generator. For the discriminator, we removed batch normalization from all layers and used a stride of 1 at the last convolutional layer, after adding the initial projection step. The Cats dataset \(^3\) was employed; we followed the same pre-processing steps, which, in our case, yielded 1740 training samples with a resolution of 256x256. Our model was trained using 24 discriminators and the Adam optimizer with the same hyperparameters as in the previously described CIFAR-10 and CelebA experiments. In Figure 18 we show the generator's samples after 288 training epochs. One epoch corresponds to updating over 27 minibatches of size 64. ![](images/18_0.jpg) <center>Figure 18: Cats generated using 24 discriminators after 288 training epochs. </center> <--- Page Split ---> ## F - INCREASING NUMBER OF RANDOM PROJECTIONS In this experiment we illustrate and confirm the results introduced in (Neyshabur et al., 2017), showing the effect of using an increasing number of random projections to train a GAN. We trained models using average loss minimization with 1 to 6 discriminators on the CelebA dataset for 15 epochs. Samples from the generator obtained in the last epoch are shown in Fig. 19. Generated samples are closer to real data as the number of random projections (and, consequently, discriminators) increases. ![](images/19_0.jpg) <center>Figure 19: Models trained with AVG during 15 epochs using an increasing number of random projections and discriminators. </center> <--- Page Split ---> ## G - ILLUSTRATION OF INTERACTION BETWEEN HYPERVOLUME AND ADOPTED NADIR POINT ADAPTATION SCHEME Consider a two-objective problem, with \(l_{1}^{t} > 0\) and \(l_{2}^{t} > 0\) corresponding to each of the losses we want to minimize, at iteration \(t\). We present in Figures 20 and 21 an illustrative example of the effect of the adaptation scheme adopted for \(\eta\), as described in Section 4. Figure 20 depicts the initialization state. Since \(l_{1}^{t}\) and \(l_{2}^{t}\) will be high at \(t = 0\), and, following the adaptation rule presented in previous sections, \(\eta^{t} = \delta \max \{l_{1}^{t},l_{2}^{t}\}\), for a slack \(\delta > 1\), the difference \(\eta^{t} - \max \{l_{1}^{t},l_{2}^{t}\}\) will be high. In contrast, after \(T\) updates, as depicted in Figure 21, \(\eta^{t} = \delta \max \{l_{1}^{t},l_{2}^{t}\}\) will be smaller, since the losses are now closer to 0.
If no adaptation is performed and \(\eta\) is kept unchanged throughout training, as represented in red in Figure 21, then \(\eta^{T} - l_{1}^{T} \approx \eta^{T} - l_{2}^{T}\) for a large enough \(T\), which ends up assigning similar weights to the gradients provided by the different losses, defeating the purpose of employing hypervolume maximization rather than optimizing for the average loss. The employed adaptation scheme thus keeps the gradient weighting relevant even when losses become low. Moreover, this effect becomes more aggressive as training progresses, assigning more gradient importance to the higher losses, since \(\eta^{T} - \max \{l_{1}^{T},l_{2}^{T}\} < \eta^{0} - \max \{l_{1}^{0},l_{2}^{0}\}\). ![](images/20_0.jpg) <center>Figure 20: Losses and nadir point at the beginning of training. </center> ![](images/20_1.jpg) <center>Figure 21: Losses and nadir point at \(t = T\), and nadir point at \(t = 0\) (in red). </center> <--- Page Split ---> ## H - WALL-CLOCK TIME FOR REACHING BEST FID DURING TRAINING ON MNIST ![](images/21_0.jpg) <center>Figure 22: Minimum FID during training. The x-axis is in minutes. The blue dot highlights the moment during training when the minimum FID was reached. </center> <--- Page Split --->
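A small numeric sketch of the Appendix G argument, with made-up loss values, shows the effect directly: with the nadir point frozen at its \(t = 0\) value, the Eq. 9 weights collapse towards the uniform (average-loss) ones, whereas the adaptive rule keeps favoring the worst discriminator.

```python
import numpy as np

def hv_weights(losses, eta):
    """Normalized gradient weights from Eq. 9: alpha_k proportional to 1 / (eta - l_k)."""
    w = 1.0 / (eta - np.asarray(losses))
    return w / w.sum()

early, late = [2.0, 1.5], [0.20, 0.15]  # illustrative losses at t = 0 and t = T
delta = 1.5
eta0 = delta * max(early)               # nadir point set at t = 0 (eta0 = 3.0)

print(hv_weights(late, eta0))               # fixed eta:    ~[0.50, 0.50], near-average
print(hv_weights(late, delta * max(late)))  # adaptive eta: ~[0.60, 0.40], worst loss favored
```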
## ABSTRACT Recent literature has demonstrated promising results on the training of Generative Adversarial Networks by employing a set of discriminators, as opposed to the traditional game involving one generator against a single adversary. Those methods perform single-objective optimization on some simple consolidation of the losses, e.g. an average. In this work, we revisit the multiple-discriminator approach by framing the simultaneous minimization of the losses provided by different models as a multi-objective optimization problem. Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets. Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction computation can be done efficiently. Our results indicate that hypervolume maximization presents a better compromise between sample quality, diversity, and computational cost than previous methods. ## 1 INTRODUCTION Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) offer a new approach to generative modeling, using game-theoretic training schemes to implicitly learn a given probability density. Prior to the emergence of GAN architectures, realistic generative modeling remained elusive. While offering unparalleled realism, GAN training remains fraught with stability issues. Commonly reported shortcomings of the GAN game are the lack of useful gradients provided by the discriminator, and mode collapse, i.e. a lack of diversity in the generator's samples. Considerable research effort has been devoted in recent literature to overcoming training instability within the GAN framework. Architectures such as BEGAN (Berthelot et al., 2017) have applied auto-encoders as discriminators and proposed a new loss to help stabilize training. Methods such as TTUR (Heusel et al., 2017), in turn, have attempted to define schedules for updating the generator and discriminator differently. The PacGAN algorithm (Lin et al., 2017) modifies the discriminator's architecture to receive \(m\) concatenated samples as input; these samples are jointly classified as either real or generated, and the authors show that this enforces sample diversity. Modifications to the alternate updates in SGD were introduced in (Yadav et al., 2017). In SNGAN (Miyato et al., 2018), the authors introduce spectral normalization on the discriminator aiming to ensure Lipschitz continuity, which is empirically shown to consistently yield high quality samples across different sets of hyperparameters. Recent works have proposed to tackle GAN instability issues using multiple discriminators. Neyshabur et al. (2017) propose a GAN variation in which one generator is trained against a set of discriminators, where each discriminator sees a fixed random projection of the inputs. Prior work, including GMAN (Durugkar et al., 2016), has also explored training against multiple discriminators. <--- Page Split ---> In this paper, we build upon the framework introduced by Neyshabur et al. and propose reformulating the average loss minimization to further stabilize GAN training. Specifically, we propose treating the loss signal provided by each discriminator as an independent objective function. To achieve this, we simultaneously minimize the losses using multi-objective optimization techniques.
Namely, we exploit methods previously introduced in the literature, such as the multiple gradient descent algorithm (MGD) (Désidéri, 2012). However, due to MGD's prohibitively high cost in the case of large neural networks, we propose the use of more efficient alternatives, such as maximization of the hypervolume of the region defined between a fixed, shared upper bound on those losses, which we will refer to as the nadir point \(\pmb{\eta}^{*}\), and each of the component losses. In contrast to Neyshabur et al. (2017)'s approach, in which the average loss is minimized when training the generator, hypervolume maximization (HV) optimizes a weighted loss, and the generator's training will adaptively assign greater importance to feedback from discriminators against which it performs poorly. Experiments performed on MNIST show that HV presents a good compromise in the computational cost vs. sample quality trade-off when compared to average loss minimization or GMAN's approach (low quality and cost) and MGD (high quality and cost). Also, the sensitivity to the introduced hyperparameters is studied, and the results indicate that increasing the number of discriminators increases the generator's robustness along with sample quality and diversity. Experiments on CIFAR-10 indicate that the described method produces generator samples of higher quality in terms of quantitative evaluation. Moreover, image quality and sample diversity are once more shown to consistently improve as we increase the number of discriminators. In summary, our main contributions are the following: 1. We offer a new perspective on multiple-discriminator GAN training by framing it in the context of multi-objective optimization, and draw similarities between previous research in GAN variations and MGD, commonly employed as a general solver for multi-objective optimization. 2. We propose a new method for training multiple-discriminator GANs: hypervolume maximization, which weighs the gradient contributions of each discriminator by its loss. The remainder of this document is organized as follows: Section 2 introduces definitions on multi-objective optimization and MGD. In Section 3 we describe prior relevant literature. Hypervolume maximization is detailed in Section 4, with experiments and results presented in Section 5. Conclusions and directions for future work are drawn in Section 6. ## 2 PRELIMINARIES In this section we provide definitions from the multi-objective optimization literature which will be useful in the following sections. Henceforth, boldface notation is used to indicate vector-valued variables. Multi-objective optimization. A multi-objective optimization problem is defined as (Deb, 2001): \[\begin{array}{r}{\min \mathbf{F}(\mathbf{x}) = [f_{1}(\mathbf{x}),f_{2}(\mathbf{x}),\dots,f_{K}(\mathbf{x})]^{T},}\\ {\mathbf{x}\in \Omega ,} \end{array} \quad (1)\] where \(K\) is the number of objectives, \(\Omega\) is the variable space, and \(\mathbf{x} = [x_{1},x_{2},\dots,x_{n}]^{T}\in \Omega\) is a decision vector or possible solution to the problem. \(\mathbf{F}:\Omega \to \mathbb{R}^{K}\) is a set of \(K\) objective functions mapping the \(n\)-dimensional variable space to the \(K\)-dimensional objective space. Pareto-dominance. Let \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) be two decision vectors.
\(\mathbf{x}_{1}\) is said to dominate \(\mathbf{x}_{2}\) (denoted by \(\mathbf{x}_{1}\prec \mathbf{x}_{2}\)) if and only if \(f_{i}(\mathbf{x}_{1})\leq f_{i}(\mathbf{x}_{2})\) for all \(i\in \{1,2,\ldots ,K\}\) and \(f_{j}(\mathbf{x}_{1})< f_{j}(\mathbf{x}_{2})\) for some \(j\in \{1,2,\ldots ,K\}\). If a decision vector \(\mathbf{x}\) is dominated by no other vector in \(\Omega\), \(\mathbf{x}\) is said to be non-dominated. Pareto-optimality. A decision vector \(\mathbf{x}^{*}\in \Omega\) is said to be Pareto-optimal if and only if there is no \(\mathbf{x}\in \Omega\) such that \(\mathbf{x}\prec \mathbf{x}^{*}\), i.e. \(\mathbf{x}^{*}\) is a non-dominated solution. The Pareto-optimal set (PS) is <--- Page Split ---> defined as the set of all Pareto-optimal solutions \(\mathbf{x} \in \Omega\), i.e., \(PS = \{\mathbf{x} \in \Omega | \mathbf{x}\) is Pareto-optimal\}. The set of all objective vectors \(\mathbf{F}(\mathbf{x})\) such that \(\mathbf{x}\) is Pareto-optimal is called the Pareto front (PF), that is, \(PF = \{\mathbf{F}(\mathbf{x}) \in \mathbb{R}^{K} | \mathbf{x} \in PS\}\). Pareto-stationarity. Pareto-stationarity is a necessary condition for Pareto-optimality. For \(f_{k}\) differentiable everywhere for all \(k\), \(\mathbf{F}\) is said to be Pareto-stationary at the point \(\mathbf{x}\) if there exists a set of scalars \(\alpha_{k}\), \(k \in \{1, \ldots , K\}\), such that: \[\sum_{k = 1}^{K} \alpha_{k} \nabla f_{k} = \mathbf{0}, \quad \sum_{k = 1}^{K} \alpha_{k} = 1, \quad \alpha_{k} \geq 0 \quad \forall k. \quad (2)\] Multiple Gradient Descent. Multiple gradient descent (Désidéri, 2012; Schäffler et al., 2002; Peitz & Dellnitz, 2018) was proposed for the unconstrained case of multi-objective optimization of \(\mathbf{F}(\mathbf{x})\), assuming convex, continuously differentiable and smooth \(f_{k}(\mathbf{x})\) for all \(k\). MGD finds a common descent direction for all \(f_{k}\) by defining the convex hull of all \(\nabla f_{k}(\mathbf{x})\) and finding the minimum-norm element within it. Consider \(\mathbf{w}^{*}\) given by: \[\mathbf{w}^{*} = \mathrm{argmin}||\mathbf{w}||, \quad \mathbf{w} = \sum_{k = 1}^{K} \alpha_{k} \nabla f_{k}(\mathbf{x}), \quad \mathrm{s.t.} \quad \sum_{k = 1}^{K} \alpha_{k} = 1, \quad \alpha_{k} \geq 0 \quad \forall k. \quad (3)\] \(\mathbf{w}^{*}\) will either be \(\mathbf{0}\), in which case \(\mathbf{x}\) is a Pareto-stationary point, or \(\mathbf{w}^{*} \neq \mathbf{0}\), in which case \(\mathbf{w}^{*}\) is a descent direction for all \(f_{k}(\mathbf{x})\). Similar to gradient descent, MGD consists of finding the common steepest descent direction \(\mathbf{w}_{t}^{*}\) at each iteration \(t\), and then updating parameters with a learning rate \(\lambda\) according to \(\mathbf{x}_{t + 1} = \mathbf{x}_{t} - \lambda \frac{\mathbf{w}_{t}^{*}}{||\mathbf{w}_{t}^{*}||}\). ## 3 RELATED WORK ### 3.1 TRAINING GANs WITH MULTIPLE DISCRIMINATORS While we would prefer to always have strong gradients from the discriminator during training, the vanilla GAN makes this difficult to ensure, as the discriminator quickly learns to distinguish real and generated samples (Goodfellow, 2016), thus providing no meaningful error signal to improve the generator thereafter. Durugkar et al. (2016) proposed Generative Multi-Adversarial Networks (GMAN), which consist of training the generator against a softmax-weighted arithmetic average of \(K\) different discriminators, according to Eq. 4.
\[\mathcal{L}_{G} = \sum_{k = 1}^{K} \alpha_{k} \mathcal{L}_{D_{k}}, \quad (4)\] where \(\alpha_{k} = \frac{e^{\beta \mathcal{L}_{D_{k}}}}{\sum_{j = 1}^{K} e^{\beta \mathcal{L}_{D_{j}}}}\), \(\beta \geq 0\), and \(\mathcal{L}_{D_{k}}\) is the loss of discriminator \(k\), defined as \[\mathcal{L}_{D_{k}} = -\mathbb{E}_{\mathbf{x} \sim p_{\mathrm{data}}} \log D_{k}(\mathbf{x}) - \mathbb{E}_{\mathbf{z} \sim p_{z}} \log (1 - D_{k}(G(\mathbf{z}))), \quad (5)\] where \(D_{k}(\mathbf{x})\) and \(G(\mathbf{z})\) are the outputs of the \(k\)-th discriminator and the generator, respectively. The goal of the proposed averaging scheme is to privilege worse-performing discriminators, thus providing more useful gradients to the generator during training. Experiments were performed with \(\beta = 0\) (equal weights), \(\beta \to \infty\) (only the worst discriminator is taken into account), \(\beta = 1\), and \(\beta\) learned by the generator. Models with \(K = \{2, 5\}\) were tested and evaluated using a proposed metric and the Inception score (Salimans et al., 2016). However, results showed that the simple average of the discriminators' losses provided the best values for both metrics in most of the considered cases. In contrast to GMAN, Neyshabur et al. (2017) proposed training a GAN with \(K\) discriminators using the same architecture. Each discriminator \(D_{k}\) sees a different randomly projected, lower-dimensional version of the input image. Random projections are defined by a randomly initialized matrix \(W_{k}\), which remains fixed during training. Theoretical results show that the distribution induced by the generator \(G\) will converge to the real data distribution \(p_{\mathrm{data}}\), as long as there is a sufficient number of discriminators. Moreover, discriminative tasks in the projected space are harder, i.e. real <--- Page Split ---> and fake samples are more alike, thus avoiding early convergence of the discriminators, which leads to common stability issues in GAN training such as mode collapse (Goodfellow, 2016). Essentially, the authors trade one hard problem for \(K\) easier subproblems. The losses of each discriminator \(\mathcal{L}_{D_{k}}\) are the same as shown in Eq. 5. However, the generator loss \(\mathcal{L}_{G}\) is defined as simply the sum of the losses provided by each discriminator, as shown in Eq. 6. This choice of \(\mathcal{L}_{G}\) does not exploit available information, such as the performance of the generator with respect to each discriminator. \[\mathcal{L}_{G} = -\sum_{k = 1}^{K}\mathbb{E}_{\mathbf{z}\sim p_{z}}\log D_{k}(G(\mathbf{z})). \quad (6)\] ### 3.2 HYPERVOLUME MAXIMIZATION Consider a set of solutions \(S\) for a multi-objective optimization problem. The hypervolume \(\mathcal{H}\) of \(S\) is defined as (Fleischer, 2003): \(\mathcal{H}(S) = \mu (\cup_{\mathbf{x}\in S}[\mathbf{F}(\mathbf{x}),\pmb{\eta}^{*}])\), where \(\mu\) is the Lebesgue measure and \(\pmb{\eta}^{*}\) is a point dominated by all \(\mathbf{x}\in S\) (i.e. each \(f_{i}(\mathbf{x})\) is upper-bounded by the corresponding component of \(\pmb{\eta}^{*}\)), referred to as the nadir point. \(\mathcal{H}(S)\) can be understood as the size of the space covered by \(\{\mathbf{F}(\mathbf{x})|\mathbf{x}\in S\}\) (Bader & Zitzler, 2011). The hypervolume was originally introduced as a quantitative metric for the coverage and convergence of Pareto-optimal fronts obtained through population-based algorithms (Beume et al., 2007).
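In the single-solution case relied upon later (Section 4), the general definition above reduces to the volume of the axis-aligned box spanned between \(\mathbf{F}(\mathbf{x})\) and the nadir point, which is straightforward to compute; the following sketch (with an illustrative \(K = 2\) example) is our own.

```python
import numpy as np

def single_solution_hv(losses, eta):
    """Hypervolume of one solution w.r.t. the nadir point (eta, ..., eta):
    the box between F(x) and the nadir point. Its log is the quantity
    maximized by Eq. 8 below."""
    losses = np.asarray(losses, dtype=float)
    assert np.all(losses < eta), "the nadir point must upper-bound every loss"
    return float(np.prod(eta - losses))

# K = 2 example: H = (1 - 0.3) * (1 - 0.5) = 0.35
print(single_solution_hv([0.3, 0.5], eta=1.0))
```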
Methods based on direct maximization of \(\mathcal{H}\) exhibit favorable convergence even in challenging scenarios, such as the simultaneous minimization of 50 objectives (Bader & Zitzler, 2011). In the context of machine learning, single-solution hypervolume maximization has been applied to neural networks as a surrogate loss for the mean squared error (Miranda & Von Zuben, 2016), i.e. the loss provided by each example in a training batch is treated as a single cost and the multi-objective approach aims to minimize the costs over all examples. The authors show that this method provides an inexpensive boosting-like training. ## 4 MULTI-OBJECTIVE TRAINING OF GANS WITH MULTIPLE DISCRIMINATORS We introduce a variation of the GAN game such that the generator solves the following multi-objective problem: \[\min \mathcal{L}_{G}(\mathbf{z}) = [l_{1}(\mathbf{z}),l_{2}(\mathbf{z}),\dots,l_{K}(\mathbf{z})]^{T}, \quad (7)\] where each \(l_{k} = - \mathbb{E}_{\mathbf{z}\sim p_{z}}\log D_{k}(G(\mathbf{z}))\), \(k\in \{1,\dots,K\}\), is the loss provided by the \(k\)-th discriminator. Training proceeds as in the usual formulation (Goodfellow et al., 2014), i.e. with alternate updates between the discriminators and the generator. Updates of each discriminator are performed to minimize the loss described in Eq. 5. A natural choice for the generator's updates is the MGD algorithm, described in Section 2. However, computing the direction of steepest descent \(\mathbf{w}^{*}\) before every parameter update step, as required by MGD, can be prohibitively expensive for large neural networks. Therefore, we propose an alternative scheme for multi-objective optimization and argue that both our proposal and previously published methods can all be viewed as computationally more efficient versions of the MGD update rule, without the burden of having to solve a quadratic program, i.e. computing \(\mathbf{w}^{*}\), at every iteration. ### 4.1 HYPERVOLUME MAXIMIZATION FOR TRAINING GANS Fleischer (2003) has shown that maximizing \(\mathcal{H}\) yields Pareto-optimal solutions. Since MGD converges to a set of Pareto-stationary points, i.e. a superset of the Pareto-optimal solutions, hypervolume maximization yields a subset of the solutions obtained using MGD. We exploit the above mentioned property and define the generator loss as the negative log-hypervolume, as defined in Eq. 8: \[\mathcal{L}_{G} = -\mathcal{V} = -\sum_{k = 1}^{K}\log (\eta -l_{k}), \quad (8)\] <--- Page Split ---> where the nadir point coordinate \(\eta\) is an upper bound for all \(l_{k}\). In Fig. 1 we provide an illustrative example for the case where \(K = 2\). The highlighted region corresponds to \(e^{\mathcal{V}}\). Since the nadir point \(\pmb{\eta}^{*}\) is fixed, \(\mathcal{V}\) will only be maximized, and consequently \(\mathcal{L}_{G}\) minimized, if each \(l_{k}\) is minimized. ![](images/4_0.jpg) <center>Figure 1: 2D example of the objective space where the generator loss is being optimized. </center> Moreover, by adapting the results shown in (Miranda & Von Zuben, 2016), the gradient of \(\mathcal{L}_{G}\) with respect to any generator parameter \(\theta\) is given by: \[\frac{\partial\mathcal{L}_{G}}{\partial\theta} = \sum_{k = 1}^{K}\frac{1}{\eta - l_{k}}\frac{\partial l_{k}}{\partial\theta}. \quad (9)\] In other words, the gradient can be obtained by computing a weighted sum of the gradients of the losses provided by each discriminator, whose weights are defined as the inverse distances to the corresponding nadir point components.
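Since Eq. 9 is just the derivative of Eq. 8, automatic differentiation reproduces this weighting without any manual bookkeeping. The toy PyTorch sketch below, with two hand-picked losses of a shared parameter, is only meant to verify the correspondence.

```python
import torch

theta = torch.tensor([1.0, 2.0], requires_grad=True)
losses = torch.stack([theta[0] ** 2,                 # toy l_1 = theta_0^2 = 1
                      (theta - 1.0).pow(2).sum()])   # toy l_2 = |theta - 1|^2 = 1
eta = 1.5 * losses.max().detach()                    # adaptive nadir point (eta = 1.5)

loss_g = -torch.log(eta - losses).sum()  # Eq. 8: negative log-hypervolume
loss_g.backward()
# theta.grad = sum_k (1 / (eta - l_k)) * dl_k/dtheta, i.e. Eq. 9:
# here 2 * [2, 0] + 2 * [0, 2] = [4, 4].
print(theta.grad)  # tensor([4., 4.])
```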
As noted, this formulation naturally assigns more importance to higher losses in the final gradient, which is another useful property of hypervolume maximization. Nadir point selection. It is evident from Eq. 9 that the selection of \(\eta\) directly affects the importance assigned to the gradients provided by the different discriminators. In particular, as the quantity \(\min_{k}\{\eta - l_{k}\}\) grows, the multi-objective GAN game approaches the one defined by the simple average of the \(l_{k}\). Previous literature has discussed in depth the effects of the selection of \(\eta\) in the case of population-based methods (Auger et al., 2009; 2012). However, those results are not readily applicable to the single-solution case. As will be shown in Section 5, our experiments indicate that the choice of \(\eta\) plays an important role in the final quality of samples. Nevertheless, this effect becomes less relevant as the number of discriminators increases. Nadir point adaptation. Similarly to (Miranda & Von Zuben, 2016), we propose an adaptive scheme for \(\eta\) such that at iteration \(t\): \(\eta_{t} = \delta \max_{k}\{l_{k,t}\}\), where \(\delta > 1\) is a user-defined parameter which will be referred to as the slack. This enforces \(\min_{k}\{\eta - l_{k}\}\) to be higher when \(\max_{k}\{l_{k,t}\}\) is high and lower otherwise, which induces a behavior similar to that of the average loss when training begins and automatically places more importance on the discriminators against which performance is worse as training progresses. Extra discussion and an illustrative example of the adopted adaptation scheme are presented in Appendix G. Comparison to average loss minimization. The upper bound proven by Neyshabur et al. (2017) assumes that the marginals of the real and generated distributions are identical along all random projections. Average loss minimization does not ensure an equally good approximation between the marginals along all directions. In case of a trade-off between discriminators, i.e. if decreasing the loss on a given projection increases the loss with respect to another one, the distribution of losses can be uneven. With HV, on the other hand, especially when \(\eta\) is reduced throughout training, the overall loss will be kept high as long as there are discriminators with high losses. This objective tends to prefer central regions of a trade-off, in which all discriminators present a roughly equally low loss. ### 4.2 RELATIONSHIP BETWEEN MULTIPLE DISCRIMINATOR GANs AND MGD All methods described previously for the solution of GANs with multiple discriminators, i.e. average loss minimization (Neyshabur et al., 2017), GMAN's weighted average (Durugkar et al., 2016), and hypervolume maximization, can be defined as MGD-like two-step algorithms consisting of: Step 1 - consolidating all gradients into a single update direction (computing the set \(\alpha_{1,\ldots ,K}\)); Step 2 - updating parameters in the direction returned in Step 1. The definition of Step 1 for the different methods studied here is as follows: 1. MGD: \(\alpha_{1:K} = \mathrm{argmin}_{\alpha}||\mathbf{w}||\), s.t. \(\sum_{k = 1}^{K}\alpha_{k} = 1\), \(\alpha_{k}\geq 0\forall k\in \{1,\dots,K\}\) 2. Average loss minimization (Neyshabur et al., 2017): \(\alpha_{k} = \frac{1}{K}\) 3. GMAN (Durugkar et al., 2016): \(\alpha_{k} = \mathrm{softmax}(l_{1:K})_{k}\) 4.
Hypervolume maximization: \(\alpha_{k} = \frac{1}{T(\eta - l_{k})}\), \(T = \sum_{k = 1}^{K}\frac{1}{\eta - l_{k}}\) <--- Page Split ---> ## 5 EXPERIMENTS We performed three sets of experiments aiming to analyze the following aspects: (i) how alternative methods for training GANs with multiple discriminators perform in comparison to MGD; (ii) how alternative methods perform in comparison to each other in terms of sample quality and coverage; and (iii) whether the behavior induced by HV improves results with respect to the baseline methods. Firstly, we exploited the relatively low dimensionality of MNIST and used it as a testbed for comparing MGD with the other approaches, i.e. average loss minimization (AVG), GMAN's weighted average loss, and HV, proposed in this work. Moreover, multiple initialization and slack combinations were evaluated in order to investigate how varying the number of discriminators affects robustness to those factors. Then, experiments were performed on CIFAR-10 while increasing the number of discriminators. We evaluated HV's performance compared to the baseline methods, and the effect on sample quality. We also analyzed the impact on the diversity of generated samples using the stacked MNIST dataset (Srivastava et al., 2017). Samples from generators trained on stacked MNIST, CIFAR-10, CelebA, and the Cats dataset are shown in the Appendix. In all experiments performed, the same architecture, set of hyperparameters, and initialization were used for AVG, GMAN, and our proposed method alike; the only differing aspect is the generator loss. Unless stated otherwise, Adam (Kingma & Ba, 2014) was used to train all the models with learning rate, \(\beta_{1}\) and \(\beta_{2}\) set to 0.0002, 0.5, and 0.999, respectively. The mini-batch size was set to 64. The Fréchet Inception Distance (FID) (Heusel et al., 2017) was employed for comparison. Details on the FID computation can be found in Appendix A. ### 5.1 MGD COMPARED WITH ALTERNATIVE METHODS We employed MGD in our experiments with MNIST. In order to do so, a quadratic program has to be solved prior to every parameter update. For this, we used SciPy's implementation of the Sequential Least Squares Programming (SLSQP) solver. Three and four fully connected layers with LeakyReLU activations were used for the generator and discriminator, respectively. Dropout was also employed in the discriminator, and the random projection layer was implemented as a randomly initialized, norm-1 fully connected layer, reducing the vectorized dimensionality of MNIST from 784 to 512. A pretrained LeNet (LeCun et al., 1998) was used for FID computation. Experiments over 100 epochs with 8 discriminators are reported in Fig. 2 and Fig. 3. In Fig. 2, box-plots refer to 30 independent computations of FID over 10000 images sampled from the generator which achieved the minimum FID at train time. FID results are measured at train time over 1000 images, and the best values are reported in Fig. 3 along with the time necessary to achieve them. MGD outperforms all tested methods. However, its cost per iteration does not allow its use on datasets more relevant than MNIST. Hypervolume maximization, on the other hand, performs closest to MGD among the considered methods, while introducing no relevant extra cost. In Fig. 4, we analyze convergence in the Pareto-stationarity sense by plotting the norm of the update direction for each method, given by \(\| \sum_{k = 1}^{K}\alpha_{k}\nabla l_{k}\|\).
All methods converged to similar norms, leading to the conclusion that different Pareto-stationary solutions will perform differently in terms of sample quality. FID as a function of wall-clock time is shown in Figure 22 (Appendix H). HV sensitivity to initialization and choice of \(\delta\). The sensitivity of performance to the choice of the slack parameter \(\delta\) and to initialization was analyzed under the following setting: models were trained for 50 epochs on MNIST with hypervolume maximization using 8, 16, and 24 discriminators. Three independent runs (different initializations) were executed for each \(\delta = \{1.05, 1.5, 1.75, 2\}\) and number of discriminators, totaling 36 final models. Fig. 5 reports the box-plots obtained for 5 independent FID computations using 10000 images, for each of the 36 models obtained under the setting previously described. Results clearly indicate that increasing the number of discriminators yields much smaller variation in the FID obtained by the final model. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 2: Box-plots corresponding to 30 independent FID computations with 10000 images. MGD performs consistently better than the other methods, followed by hypervolume maximization. Models that achieved minimum FID at train time were used. Red and blue dashed lines are the FIDs of a random generator and real data, respectively. </center> ![](images/6_1.jpg) <center>Figure 3: Time vs. best FID achieved during training for each approach. FID values are computed over 1000 generated images after every epoch. MGD performs markedly better than the others in terms of FID, followed by HV. However, MGD is approximately 7 times slower than HV. HV is well placed in the time-quality trade-off. </center> ![](images/6_2.jpg) <center>Figure 4: Norm of the update direction over time for each method. While Pareto-stationarity is approximately achieved by all methods, performance varies considerably in terms of FID. </center> ![](images/6_3.jpg) <center>Figure 5: Independent FID evaluations for models obtained with different runs using distinct slack parameters \(\delta\). Sensitivity reduces as the number of discriminators increases. </center> ### 5.2 HV AS AN ALTERNATIVE FOR MGD We evaluate the performance of HV compared to the baseline methods on the CIFAR-10 dataset. FID was computed with a pretrained ResNet (He et al., 2016). The ResNet was trained on the 10-class classification task of CIFAR-10 up to approximately \(95\%\) test accuracy. DCGAN (Radford et al., 2015) and WGAN-GP (Gulrajani et al., 2017) were included in the experiments for FID reference. The same architectures as in (Neyshabur et al., 2017) were employed for all multi-discriminator settings. An increasing number of discriminators was used. The Inception score as well as FID computed with other models are included in Appendix C. In Fig. 6, we report the box-plots of 15 independent evaluations of FID on 10000 images for the best model obtained with each method across 3 independent runs. Results once more indicate that HV outperforms the other methods in terms of quality of the generated samples. Moreover, performance clearly improves as the number of discriminators grows. Fig. 7 shows the FID at train time, i.e. measured with 1000 generated samples after each epoch, for the best models across runs. Models trained against more discriminators clearly converge to smaller values.
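Step 1 of MGD, i.e. the quadratic program of Eq. 3 solved before every update in the Section 5.1 experiments, can be written with SciPy's SLSQP solver in a few lines. The sketch below is our own minimal version; `grads` is assumed to be a \(K \times n\) matrix whose rows are the flattened per-discriminator loss gradients.

```python
import numpy as np
from scipy.optimize import minimize

def mgd_direction(grads):
    """Minimum-norm point in the convex hull of the rows of grads (Eq. 3)."""
    K = grads.shape[0]
    gram = grads @ grads.T  # ||sum_k a_k g_k||^2 = a^T Gram a

    res = minimize(lambda a: a @ gram @ a,
                   np.full(K, 1.0 / K),                # start from the average weights
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * K,
                   constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}])
    # w* is (approximately) zero at a Pareto-stationary point (Eq. 2).
    return res.x @ grads
```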
We report the norm of the update direction \(\| \sum_{k = 1}^{K}\alpha_{k}\nabla l_{k}\|\) for each method in Fig. 9, Appendix C. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 6: Box-plots of 15 independent FID computations with 10000 images. Dashed lines are real data (blue) and random generator (red) FIDs. </center> ![](images/7_1.jpg) <center>Figure 7: FID estimated over 1000 generated images at train time. Models trained against more discriminators achieve lower FID. </center> Cost under the multiple discriminator setting. We highlight that even though training with multiple discriminators may be more computationally expensive than conventional approaches, this framework supports fully parallel training of the discriminators, a feature which is not trivially possible in other GAN settings. For example, in WGAN the discriminator is serially updated multiple times for each generator update. In Fig. 10 in Appendix C, we provide a comparison of the wall-clock time per iteration for all evaluated methods. Serial implementations of the discriminator updates with 8 and 16 discriminators were faster than WGAN-GP. ### 5.3 EFFECT OF THE NUMBER OF DISCRIMINATORS ON SAMPLE DIVERSITY We repeat the experiments in (Srivastava et al., 2017), aiming to analyze how the number of discriminators impacts the sample diversity of the corresponding generator when trained using hypervolume maximization. The stacked MNIST dataset is employed and the results reported in (Lin et al., 2017) are used for comparison. HV results for 8, 16, and 24 discriminators were obtained with 10k and 26k generator images, averaged over 10 runs. The number of covered modes along with the KL divergence between the generated mode distribution and the test data are reported in Table 1. Table 1: Number of covered modes and reverse KL divergence for stacked MNIST. <table><tr><td>Test samples</td><td>Model</td><td>Modes (Max 1000)</td><td>KL</td></tr><tr><td rowspan="5">26k</td><td>DCGAN (Radford et al., 2015)</td><td>99.0</td><td>3.400</td></tr><tr><td>ALI (Dumoulin et al., 2016)</td><td>16.0</td><td>5.400</td></tr><tr><td>Unrolled GAN (Metz et al., 2016)</td><td>48.7</td><td>4.320</td></tr><tr><td>VEEGAN (Srivastava et al., 2017)</td><td>150.0</td><td>2.950</td></tr><tr><td>PacGAN2 (Lin et al., 2017)</td><td>1000.0 ± 0.0</td><td>0.000 ± 0.003</td></tr><tr><td rowspan="3">10k</td><td>HV - 8 disc.</td><td>679.2 ± 5.9</td><td>1.139 ± 0.011</td></tr><tr><td>HV - 16 disc.</td><td>998.0 ± 1.8</td><td>0.120 ± 0.004</td></tr><tr><td>HV - 24 disc.</td><td>998.3 ± 1.1</td><td>0.116 ± 0.003</td></tr><tr><td rowspan="3">26k</td><td>HV - 8 disc.</td><td>770.8 ± 6.4</td><td>1.115 ± 0.007</td></tr><tr><td>HV - 16 disc.</td><td>1000.0 ± 0.0</td><td>0.088 ± 0.002</td></tr><tr><td>HV - 24 disc.</td><td>1000.0 ± 0.0</td><td>0.084 ± 0.002</td></tr></table> As in previous experiments, results improved as we increased the number of discriminators. All evaluated models using HV outperformed DCGAN, ALI, Unrolled GAN, and VEEGAN. Moreover, HV with 16 and 24 discriminators achieved state-of-the-art coverage values. Thus, the increase in model capacity from using more discriminators directly resulted in an improvement in the generator's coverage. Training details as well as architecture information are presented in Appendix B.
## 6 CONCLUSION In this work we have shown that employing multiple discriminators is a practical approach that allows us to trade extra capacity, and thereby extra computational cost, for higher quality and diversity of generated samples. Such an approach is complementary to other advances in GAN training and can easily be used together with other methods. We introduced a multi-objective optimization framework for studying multiple-discriminator GANs, and showed strong similarities between previous work and the multiple gradient descent algorithm. The proposed approach was observed to consistently yield higher quality samples in terms of FID. Furthermore, increasing the number of discriminators was shown to increase sample diversity and generator robustness. A deeper analysis of the quantity \(||\sum_{k = 1}^{K}\alpha_{k}\nabla l_{k}||\) is the subject of future investigation. We hypothesize that using it as a penalty term might reduce the necessity of a high number of discriminators. <--- Page Split ---> ## APPENDIX ## A - OBJECTIVE EVALUATION METRIC. In (Heusel et al., 2017), the authors proposed as a quality metric the squared Fréchet distance (Fréchet, 1957) between Gaussians defined by estimates of the first and second order moments of the outputs obtained from a forward pass through a pretrained classifier, for both real and generated data. They proposed the use of Inception V3 (Szegedy et al., 2016) for computing the data representation and called the metric the Fréchet Inception Distance (FID), which is defined as: \[\mathrm{FID} = ||m_d - m_g||^2 +\mathrm{Tr}(\Sigma_d + \Sigma_g - 2(\Sigma_d\Sigma_g)^{\frac{1}{2}}), \quad (10)\] where \(m_d\), \(\Sigma_d\) and \(m_g\), \(\Sigma_g\) are estimates of the first and second order moments of the representations of the real and generated data, respectively. We employ FID throughout our experiments for comparing the different approaches. However, for each dataset on which FID was computed, the output layer of a classifier pretrained on that particular dataset was used instead of Inception. \(m_d\) and \(\Sigma_d\) were estimated on the complete test partitions, which are not used during training. ## B - EXPERIMENTAL SETUP FOR STACKED MNIST EXPERIMENTS AND GENERATOR'S SAMPLES Architectures of the generator and discriminator are detailed in Tables 2 and 3, respectively. Batch normalization was used in all intermediate convolutional and fully connected layers of both models. We employed RMSprop to train all the models, with the learning rate and \(\alpha\) set to 0.0001 and 0.9, respectively. The mini-batch size was set to 64. The setup in (Lin et al., 2017) is employed, and we build 128000 and 26000 samples for the train and test sets, respectively. Table 2: Generator's architecture. <table><tr><td>Layer</td><td>Outputs</td><td>Kernel size</td><td>Stride</td><td>Activation</td></tr><tr><td>Input: z ~ N(0, I100)</td><td></td><td></td><td></td><td></td></tr><tr><td>Fully connected</td><td>2*2*512</td><td>-</td><td>-</td><td>ReLU</td></tr><tr><td>Transposed convolution</td><td>4*4*256</td><td>4, 4</td><td>2, 2</td><td>ReLU</td></tr><tr><td>Transposed convolution</td><td>8*8*128</td><td>4, 4</td><td>2, 2</td><td>ReLU</td></tr><tr><td>Transposed convolution</td><td>14*14*64</td><td>4, 4</td><td>2, 2</td><td>ReLU</td></tr><tr><td>Transposed convolution</td><td>28*28*3</td><td>4, 4</td><td>2, 2</td><td>Tanh</td></tr></table> Table 3: Discriminator's architecture.
<table><tr><td>Layer</td><td>Outputs</td><td>Kernel size</td><td>Stride</td><td>Activation</td></tr><tr><td>Input</td><td>28*28*3</td><td></td><td></td><td></td></tr><tr><td>Projection</td><td>14*14*3</td><td>8, 8</td><td>2, 2</td><td></td></tr><tr><td>Convolution</td><td>7*7*64</td><td>4, 4</td><td>2, 2</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>5*5*128</td><td>4, 4</td><td>2, 2</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>2*2*256</td><td>4, 4</td><td>2, 2</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>1</td><td>4, 4</td><td>2, 2</td><td>Sigmoid</td></tr></table> <--- Page Split ---> ![](images/11_0.jpg) <center>(a) HV - 8 discriminators </center> ![](images/11_1.jpg) <center>(b) HV - 16 discriminators </center> ![](images/11_2.jpg) <center>(c) HV - 24 discriminators </center> Figure 8: Stacked MNIST samples for HV trained with 8, 16, and 24 discriminators. Sample diversity increases greatly when more discriminators are employed. <--- Page Split ---> ## C - EXTRA RESULTS ON CIFAR-10 ## C.1 - MULTIPLE DISCRIMINATORS ACROSS DIFFERENT INITIALIZATIONS AND OTHER SCORES Table 4 presents the best FID (computed with a pretrained ResNet) achieved by each approach at train time, along with the epoch in which it was achieved, for each of 3 independent runs. Train-time FIDs are computed using 1000 generated images. <table><tr><td>#D</td><td>Method</td><td>Best FID (epoch)</td></tr><tr><td rowspan="2">1</td><td>DCGAN</td><td>7.09 (68), 9.09 (21), 4.22 (101)</td></tr><tr><td>WGAN-GP</td><td>5.09 (117), 5.69 (101), 7.13 (71)</td></tr><tr><td rowspan="3">8</td><td>AVG</td><td>3.35 (105), 4.64 (141), 3.00 (76)</td></tr><tr><td>GMAN</td><td>4.28 (123), 4.24 (129), 3.80 (133)</td></tr><tr><td>HV</td><td>3.87 (102), 4.54 (82), 3.20 (98)</td></tr><tr><td rowspan="3">16</td><td>AVG</td><td>3.16 (96), 2.50 (91), 2.77 (116)</td></tr><tr><td>GMAN</td><td>2.69 (129), 2.36 (144), 2.48 (120)</td></tr><tr><td>HV</td><td>2.56 (85), 2.70 (97), 2.68 (133)</td></tr><tr><td rowspan="3">24</td><td>AVG</td><td>2.10 (94), 2.44 (132), 2.43 (129)</td></tr><tr><td>GMAN</td><td>2.16 (120), 2.02 (98), 2.13 (130)</td></tr><tr><td>HV</td><td>2.05 (83), 1.89 (97), 2.23 (130)</td></tr></table> Table 4: Best FID obtained for each approach on 3 independent runs. FID is computed on 1000 generated images after every epoch. In Fig. 9, we report the norm of the update direction \(\|\sum_{k = 1}^{K}\alpha_{k}\nabla l_{k}\|\) of the best model obtained for each method. Interestingly, different methods present similar behavior in terms of convergence in the Pareto-stationarity sense, i.e., the norm upon convergence is lower for models trained against more discriminators, regardless of the employed method. ![](images/12_0.jpg) <center>Figure 9: Norm of the update direction over time for each method. A higher number of discriminators yields a lower norm upon convergence. </center> We computed extra scores using 10000 images generated by the best model reported in Table 4, i.e., the same models utilized to generate the results shown in Fig. 6. Both the Inception score and FID were computed with the original implementations, while FID-VGG and FID-ResNet were computed using a VGG and a ResNet we pretrained. Results are reported relative to DCGAN's scores in Table 5.
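As a reference for how these scores are produced, Eq. 10 in Appendix A can be computed directly from the estimated moments. The following is a minimal sketch, assuming `feats_real` and `feats_gen` are penultimate-layer feature matrices extracted with the chosen pretrained classifier; the function name is ours.

```python
import numpy as np
from scipy import linalg

def fid_from_features(feats_real, feats_gen):
    """Squared Frechet distance between Gaussians fit to two feature sets (Eq. 10).

    feats_real, feats_gen: (N, D) arrays of activations from the pretrained
    classifier used as the feature extractor.
    """
    m_d, m_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_d = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)

    # Matrix square root of the covariance product; small imaginary parts
    # can appear for numerical reasons and are discarded.
    covmean, _ = linalg.sqrtm(cov_d @ cov_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = m_d - m_g
    return float(diff @ diff + np.trace(cov_d + cov_g - 2.0 * covmean))
```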
<table><tr><td></td><td>WGAN-GP</td><td>AVG-8</td><td>AVG-16</td><td>AVG-24</td><td>GMAN-8</td><td>GMAN-16</td><td>GMAN-24</td><td>HV-8</td><td>HV-16</td><td>HV-24</td></tr><tr><td>Inception Score</td><td>1.08</td><td>1.02</td><td>1.26</td><td>1.36</td><td>0.95</td><td>1.32</td><td>1.42</td><td>1.00</td><td>1.30</td><td>1.44</td></tr><tr><td>FID</td><td>0.80</td><td>0.98</td><td>0.76</td><td>0.73</td><td>0.92</td><td>0.79</td><td>0.65</td><td>0.89</td><td>0.77</td><td>0.72</td></tr><tr><td>FID-VGG</td><td>1.29</td><td>0.91</td><td>1.03</td><td>0.85</td><td>0.87</td><td>0.78</td><td>0.73</td><td>0.78</td><td>0.75</td><td>0.64</td></tr><tr><td>FID-ResNet</td><td>1.64</td><td>0.88</td><td>0.90</td><td>0.62</td><td>0.80</td><td>0.72</td><td>0.73</td><td>0.75</td><td>0.73</td><td>0.51</td></tr></table> Table 5: Scores of different methods measured on generated CIFAR-10 samples. DCGAN scores are used as reference values, and the results reported are the ratio between the given model's score and DCGAN's. Higher is better for the Inception score, whereas lower is better for the FIDs. <--- Page Split ---> ## C.2 - COMPUTATIONAL COST In Table 6 we present a comparison of the minimum FID-ResNet obtained during training, along with the computational cost in terms of time and space, for different GANs with both 1 and 24 discriminators. The computational cost of training GANs under a multiple-discriminator setting is higher by design, in terms of both FLOPs and memory, compared with single-discriminator settings. However, the additional cost yields a corresponding improvement in performance. This effect was consistently observed for three different well-known approaches, namely DCGAN (Radford et al., 2015), Least-squares GAN (LSGAN) (Mao et al., 2017), and HingeGAN (Miyato et al., 2018). The architectures of all single-discriminator models follow DCGAN, as described in (Radford et al., 2015). For the 24-discriminator models, we used the architecture described in (Neyshabur et al., 2017), which consists in removing the normalization layers from DCGAN's discriminator and further adding the projection layer, in line with the previous experiments reported for CIFAR-10 upscaled to 64x64. All models were trained with a minibatch size of 64 for 150 epochs. Adam (Kingma & Ba, 2014) was used as the optimizer. The learning rate, \(\beta_{1}\) and \(\beta_{2}\) were set to 0.0002, 0.5 and 0.999, respectively. <table><tr><td></td><td># Discriminators</td><td>FID-ResNet</td><td>FLOPS (MAC)</td><td>Memory (Mb)</td></tr><tr><td rowspan="2">DCGAN</td><td>1</td><td>4.22</td><td>8e10</td><td>1292</td></tr><tr><td>24</td><td>1.89</td><td>5e11</td><td>5671</td></tr><tr><td rowspan="2">LSGAN</td><td>1</td><td>4.55</td><td>8e10</td><td>1303</td></tr><tr><td>24</td><td>1.91</td><td>5e11</td><td>5682</td></tr><tr><td rowspan="2">HingeGAN</td><td>1</td><td>6.17</td><td>8e10</td><td>1303</td></tr><tr><td>24</td><td>2.25</td><td>5e11</td><td>5682</td></tr></table> Table 6: Comparison between different GANs with 1 and 24 discriminators in terms of the minimum FID-ResNet obtained during training, along with FLOPs and memory consumption for a complete train step. Furthermore, the wall-clock time per iteration for different numbers of discriminators is shown in Fig. 10 for experiments on CIFAR-10 with serial updates of discriminators.
Notice that while the increase in cost in terms of FLOPs and memory is unavoidable when the multiple-discriminator setting is employed, the wall-clock time can be kept close to that of the single-discriminator case, since training with respect to the different discriminators can be implemented in parallel. On the other hand, the extra time cost introduced by other frameworks such as WGAN-GP or SNGAN cannot be trivially recovered. ![](images/13_0.jpg) <center>Figure 10: Time in seconds per iteration of each method for serial updates of discriminators. The multiple-discriminator approaches considered do not present relevant differences in time per iteration. </center> <--- Page Split ---> ## C.3 - GENERATED SAMPLES In Figs. 11, 12, and 13 we show randomly generated samples with 8, 16, and 24 discriminators for AVG, GMAN, and HV, respectively. ![](images/14_0.jpg) <center>Figure 11: CIFAR-10 samples for AVG trained with 8, 16, and 24 discriminators. </center> ![](images/14_1.jpg) <center>Figure 12: CIFAR-10 samples for GMAN trained with 8, 16, and 24 discriminators. </center> ![](images/14_2.jpg) <center>Figure 13: CIFAR-10 samples for HV trained with 8, 16, and 24 discriminators. </center> <--- Page Split ---> ## C.4 - RESULTS ON CIFAR-10 32X32 All results reported in previous sections using CIFAR-10 were obtained with an upscaled version of the dataset. Here, we thus run experiments with the dataset in its original resolution, aiming to contextualize our proposed approach with respect to previously introduced methods. To do so, we repeated experiments similar to those reported in Miyato et al. (2018), Table 2, for the model referred to as the standard CNN. The same architecture is employed and the spectral normalization is removed from the discriminators. Moreover, the same projection input is added to each of the discriminators. Results in terms of both FID and Inception score, evaluated on 5000 generated images as in (Miyato et al., 2018) as well as on 10000 images, are reported in Table 7 for our proposed approach and our implementation of (Miyato et al., 2018), along with the FID measured using a ResNet classifier trained in advance. As can be seen, the addition of the multiple-discriminator setting along with hypervolume maximization yields a relevant shift in performance for the DCGAN-like generator, taking all evaluated metrics to the level of recently proposed GANs. Table 7: Evaluation of the effect of adding discriminators to a DCGAN-like model trained on CIFAR-10. Results reach the same level as the best reported for the given architecture when the multiple-discriminator setting is added and the normalization layers are removed from the discriminators.
<table><tr><td></td><td>FID-ResNet</td><td>FID (5k)</td><td>IS (5k)</td><td>FID (10k)</td><td>IS (10k)</td></tr><tr><td>SNGAN (Miyato et al., 2018)</td><td>-</td><td>25.5</td><td>7.58 ± 0.12</td><td>-</td><td>-</td></tr><tr><td>WGAN-GP (Miyato et al., 2018)</td><td>-</td><td>40.2</td><td>6.68 ± 0.06</td><td>-</td><td>-</td></tr><tr><td>DCGAN (Miyato et al., 2018)</td><td>-</td><td>-</td><td>6.64 ± 0.14</td><td>-</td><td>-</td></tr><tr><td>SNGAN (our implementation)</td><td>1.55</td><td>27.93</td><td>7.11 ± 0.30</td><td>25.29</td><td>7.26 ± 0.12</td></tr><tr><td>DCGAN + 24 Ds and HV</td><td>1.21</td><td>27.74</td><td>7.32 ± 0.26</td><td>24.90</td><td>7.45 ± 0.17</td></tr></table> <--- Page Split ---> ## D - CELEBA DATASET ## D.1 - COMPARING WITH OTHER MULTIPLE-DISCRIMINATOR APPROACHES Here, we present samples obtained by generators trained against 8, 16, and 24 discriminators using AVG, GMAN, and HV on the CelebA dataset rescaled to \(64 \times 64\). Training lasted 100 epochs and samples are shown in Figs. 14, 15, and 16 for AVG, GMAN, and HV, respectively. The same architectures and hyperparameters used for the CIFAR-10 experiments presented in Section 5 were utilized. ![](images/16_0.jpg) <center>Figure 14: CelebA samples for AVG trained with 8, 16, and 24 discriminators. </center> ![](images/16_1.jpg) <center>Figure 15: CelebA samples for GMAN trained with 8, 16, and 24 discriminators. </center> ![](images/16_2.jpg) <center>Figure 16: CelebA samples for HV trained with 8, 16, and 24 discriminators. </center> <--- Page Split ---> ## D.2 - GENERATING 128x128 IMAGES In this experiment, we verify whether the proposed multiple-discriminator setting is capable of generating higher-resolution images. For that, we employed CelebA at a size of 128x128. We used a similar architecture for both the generator and discriminator networks as described in the previous experiments. A convolutional layer with 2048 feature maps was added to both the generator and discriminator architectures due to the increase in image size. The Adam optimizer with the same set of hyperparameters as for CIFAR-10 and CelebA 64x64 was employed. We trained models with 6, 8, and 10 discriminators for 24 epochs. Samples from each generator are shown in Figure 17. ![](images/17_0.jpg) <center>Figure 17: 128x128 CelebA samples for HV trained during 24 epochs with 6, 8, and 10 discriminators. </center> <--- Page Split ---> ## E - GENERATING 256x256 CATS We show that the proposed multiple-discriminator setting scales to higher resolutions even in the small-dataset regime, by reproducing the experiments presented in (Jolicoeur-Martineau, 2018). We used the same architecture for the generator. For the discriminators, we removed batch normalization from all layers and used a stride of 1 at the last convolutional layer, after adding the initial projection step. The Cats dataset\(^3\) was employed, and we followed the same pre-processing steps, which, in our case, yielded 1740 training samples at a resolution of 256x256. Our model was trained using 24 discriminators and the Adam optimizer with the same hyperparameters as in the previously described CIFAR-10 and CelebA experiments. In Figure 18 we show the generator's samples after 288 training epochs. One epoch corresponds to updating over 27 minibatches of size 64. ![](images/18_0.jpg) <center>Figure 18: Cats generated using 24 discriminators after 288 training epochs.
</center> <--- Page Split ---> ## F - INCREASING NUMBER OF RANDOM PROJECTIONS In this experiment we illustrate and confirm the results introduced in (Neyshabur et al., 2017), showing the effect of using an increasing number of random projections to train a GAN. We trained models using average loss minimization with 1 to 6 discriminators on the CelebA dataset for 15 epochs. Samples from the generator obtained in the last epoch are shown in Fig. 19. Generated samples get closer to real data as the number of random projections (and, consequently, of discriminators) increases. ![](images/19_0.jpg) <center>Figure 19: Models trained with AVG during 15 epochs using an increasing number of random projections and discriminators. </center> <--- Page Split ---> ## G - ILLUSTRATION OF INTERACTION BETWEEN HYPERVOLUME AND ADOPTED NADIR POINT ADAPTATION SCHEME Consider a two-objective problem, with \(l_{1}^{t} > 0\) and \(l_{2}^{t} > 0\) corresponding to each of the losses we want to minimize at iteration \(t\). We present in Figures 20 and 21 an illustrative example of the effect of the adaptation scheme adopted for \(\eta\), as described in Section 4. Figure 20 depicts the initialization state. Since \(l_{1}^{t}\) and \(l_{2}^{t}\) will be high at \(t = 0\) and, following the adaptation rule presented in previous sections, \(\eta^{t} = \delta \max \{l_{1}^{t},l_{2}^{t}\}\) for a slack \(\delta > 1\), the difference \(\eta^{t} - \max \{l_{1}^{t},l_{2}^{t}\}\) will be high. In contrast, after \(T\) updates, as depicted in Figure 21, \(\eta^{t} = \delta \max \{l_{1}^{t},l_{2}^{t}\}\) will be smaller, since the losses are now closer to 0. If no adaptation were performed and \(\eta\) were kept unchanged throughout training, as represented in red in Figure 21, then \(\eta^{T} - l_{1}^{T} \approx \eta^{T} - l_{2}^{T}\) for a large enough \(T\), which would end up assigning similar weights to the gradients provided by the different losses, defeating the purpose of employing hypervolume maximization rather than optimizing the average loss. The employed adaptation scheme thus keeps the gradient weighting relevant even when the losses become low. Moreover, this effect becomes more pronounced as training progresses, assigning more gradient importance to the higher losses, since \(\eta^{T} - \max \{l_{1}^{T},l_{2}^{T}\} < \eta^{0} - \max \{l_{1}^{0},l_{2}^{0}\}\). ![](images/20_0.jpg) <center>Figure 20: Losses and nadir point at the beginning of training. </center> ![](images/20_1.jpg) <center>Figure 21: Losses and nadir point at \(t = T\), and nadir point at \(t = 0\) (in red). </center> <--- Page Split ---> ## H - WALL-CLOCK TIME FOR REACHING BEST FID DURING TRAINING ON MNIST ![](images/21_0.jpg) <center>Figure 22: Minimum FID during training. The x-axis is in minutes. The blue dot highlights the moment during training when the minimum FID was reached. </center> <--- Page Split --->
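To make the interplay between hypervolume maximization and the nadir-point adaptation of Appendix G concrete, here is a minimal single-step sketch under our reading of the method: the generator objective is \(-\sum_k \log(\eta - l_k)\), so each discriminator loss is weighted by \(1/(\eta - l_k)\) in the gradient, and \(\eta\) is refreshed as \(\delta \max_k l_k\). The exact weighting form is our inference from the text above (Appendix G only states the adaptation rule and its qualitative effect), and all names are ours, so treat this as an illustrative sketch rather than the exact training code.

```python
import torch

def hv_generator_loss(disc_losses, eta):
    """Hypervolume-style generator objective: -sum_k log(eta - l_k).

    Its gradient is sum_k grad(l_k) / (eta - l_k), so losses closer to the
    nadir point eta (i.e., the higher losses) receive larger weights.
    """
    losses = torch.stack(disc_losses)  # (K,) generator losses, one per discriminator
    return -torch.log(eta - losses).sum()

def update_nadir(disc_losses, delta=1.1):
    """Nadir-point adaptation: eta^t = delta * max_k l_k^t, with slack delta > 1
    so that eta stays strictly above every individual loss."""
    with torch.no_grad():
        return delta * torch.stack(disc_losses).max()
```

At each generator step one would first refresh `eta = update_nadir(disc_losses)` and then backpropagate `hv_generator_loss(disc_losses, eta)`; the discriminators themselves can be updated in parallel on their individual losses, as discussed in Appendix C.2.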
reject
Reject
5.666667
ICLR_2019_paper_0676
iclr
2,019
# ON THE EFFECTIVENESS OF TASK GRANULARITY FOR TRANSFER LEARNING ## Anonymous authors Paper under double-blind review ## ABSTRACT We describe a DNN for video classification and captioning, trained end-to-end, with shared features, to solve tasks at different levels of granularity, exploring the link between granularity in a source task and the quality of learned features for transfer learning. To solve the new task in transfer learning, we freeze the trained encoder and fine-tune an MLP on the target domain. We train on the Something-Something dataset with over 220,000 videos, and multiple levels of target granularity, including 50 action groups, 174 fine-grained action categories and captions. Classification and captioning with Something-Something are challenging because of the subtle differences between actions, applied to thousands of different object classes, and the diversity of captions penned by crowd actors. Our model performs better than existing classification baselines for Something-Something, with impressive fine-grained results. It also yields a strong baseline on the new Something-Something captioning task. Experiments reveal that training with more fine-grained tasks tends to produce better features for transfer learning. ## 1 INTRODUCTION Common-sense video understanding entails fine-grained recognition of actions, objects, spatiotemporal relations, and physical interactions, arguably well beyond the capabilities of current techniques. A general framework will need to discriminate myriad variations of actions and interactions. For example, we need to be able to discriminate similar actions that differ in subtle ways, e.g., 'putting a pen beside the cup', 'putting the pen in the cup', or 'pretending to put the pen on the table'. Similarly, one must be able to cope with diverse actors and object classes. Such generality and fine-grained discrimination are key challenges to video understanding. In contrast to current approaches to action recognition and captioning on relatively small corpora, with coarse-grained actions, this paper considers fine-grained classification and captioning tasks on large-scale video corpora. Training is performed on the Something-Something dataset (Goyal et al. (2017)), with 174 fine-grained action categories and thousands of different objects, all under significant variations in lighting, viewpoint, background clutter, and occlusion. We describe a DNN architecture comprising a 2-channel convolutional net and an LSTM recurrent network for video encoding. A common encoding is shared for classification and caption generation. The resulting network performs better than baseline action classification results in Goyal et al. (2017), and it gives impressive results on an extremely challenging fine-grained captioning task. We also demonstrate the quality of learned features through transfer learning from Something-Something features to a kitchen action dataset. The main contributions in the paper include: 1. Exploring the link between label granularity and feature quality: We exploit 3 levels of granularity in sth-sth-v2, namely, action groups, action categories, and captions. Experiments show that more fine-grained labels yield richer features (see Table 3 and Fig. 4). 2. Baselines for captioning on sth-sth-v2 data: We note that the captioning task is new for this dataset; the original dataset did not provide captions. 3.
Captioning as a source task for transfer learning: We show that models trained to jointly perform classification and captioning learn features that transfer better to other tasks (e.g., see Fig. 4). To the best of our knowledge, captioning has, to date, been used only as a target task. Our results suggest that captioning is a powerful source task. 4. 20bn-kitchenware: We introduce a new dataset designed for few-shot video transfer learning. <--- Page Split ---> ## 2 VIDEO DATA Video action classification and captioning have received significant attention for several years, but progress has been somewhat slow, in part because of a lack of large-scale corpora. Using web sources (e.g., YouTube, Vimeo and Flickr) and human annotators, larger datasets have been collected (e.g., Kay et al. (2017); Monfort et al. (2018)), but they lack control over pose variations, motion and other scene properties that might be important for learning fine-grained models. More recently, crowd-sourced/crowd-acted data has emerged. This allows targeted video domains and action classes, and control over subtle differences between fine-grained action classes. The first version of Goyal et al. (2017) has 100,000 videos of human-object interactions, comprising 50 coarse-grained action groups, decomposed further into 174 related action categories. The videos exhibit significant diversity in viewing and lighting, objects and backgrounds, and the ways in which actors perform actions. Baseline performance in Goyal et al. (2017) was a correct action classification rate of \(11.5\%\), and \(36.2\%\) on action groups. Zhou et al. (2017) report \(42.01\%\) classification accuracy on Something-Something-V1 action categories. With our architecture we obtain a validation accuracy of \(38.8\%\). Something-Something-V2 is larger, with 220,847 videos of the same 174 action categories. In addition, each video includes a caption that was authored and uploaded by the crowd actor. These captions incorporate the action class as well as descriptions of the objects involved. That is, the captions mirror the action template, but with the generic placeholder Something replaced by the object(s) chosen by the actor. As an example, a video with template action 'Putting [something] on [something]' might have the caption 'Putting a red cup on a blue plastic box'. In a nutshell, this dataset provides different levels of granularity: 50 coarse-grained action groups, 174 fine-grained action categories, and even more fine-grained labeling via video captions. ## 3 VIDEO CLASSIFICATION AND CAPTIONING TASKS Due to the prevalence of datasets like UCF-101 (Soomro et al. (2012)), Sports-1M (Karpathy et al. (2014)) and, more recently, Kinetics (Kay et al. (2017)), most research on action classification has focused on coarse-grained models. In extreme cases, action classes can be discriminated from a glimpse of the scene, often encoded in isolated frames; e.g., inferring "soccer play" from the presence of a green field. Even when motion is essential to the action, many existing approaches do well by encoding rough statistics over velocities, directions, and motion positions. Little work has been devoted to the task of representing details of object interactions or how configurations change over time. Image and video captioning have received increasing attention since the release of captioning data, notably Microsoft COCO (Chen et al. (2015)) and MSR-VTT (Xu et al. (2016)).
One problem with existing captioning approaches is that many datasets implicitly allow models to "cheat", e.g., by generating phrases that are grammatically and semantically coherent, but only loosely related to the fine-grained visual structure. It has been shown, for example, that a language model trained on unordered, individual words (e.g., object nouns) predicted by a separate NN can compete with a captioning model trained end-to-end on the actual task (e.g., Yao et al. (2015); Heuer et al. (2016)). Similarly, nearest-neighbor methods have been surprisingly effective (Devlin et al. (2015)). Captioning tasks, if designed appropriately, should capture detailed scene properties. Labels with more subtle and fine-grained distinctions would directly expose the ability (or inability) of a network to correctly infer the scene properties encoded in the captions. The captions provided with the Something-Something dataset are designed to be sufficiently fine-grained that attention to details is needed to yield high prediction accuracy. For example, to generate correct captions, networks must not only infer the correct actions, but also learn to recognize different objects and their properties, as well as object interactions. ## 4 APPROACH We use a modified encoder-decoder architecture with an action classifier applied to the video encoding (see Fig. 1). The decoder or the classifier can be switched off, leaving pure classification or captioning models. One can also jointly train the classification and captioning models. We train our two-channel architecture to solve four different tasks with different granularity levels:

- Coarse-grained classification on action groups (CG actions)
- Fine-grained classification on 174 action categories (FG actions)

<--- Page Split ---> ![](images/2_0.jpg) <center>Figure 1: Our model architecture includes a two-channel CNN followed by an LSTM video encoder, an action classifier, and an LSTM decoder for caption generation. </center> ![](images/2_1.jpg) <center>Figure 2: Our encoder includes a two-channel CNN followed by an LSTM for aggregating features. </center>

- Captioning on simplified objects (SO captions)
- Fine-grained captioning with full object placeholders (Captions)

Finally, we investigate the effectiveness of the learned features for transfer learning. ### 4.1 MODIFIED VIDEO ENCODER-DECODER The video encoder receives the input video \(v\) and maps it to an embedded representation \(h\). Conditioned on \(h\), a caption decoder generates the caption \(c\), and a classifier predicts the action category. The encoder processes the video with a two-channel convolutional architecture (Fig. 2). A spatial 2D-CNN and a spatiotemporal 3D-CNN are applied in parallel. The 3D-CNN features are used in lieu of a separate module to compute motion (e.g., optical flow) features. The basic building block of each channel is a \(3 \times 3 \times 3\) (\(3 \times 3\) in the 2D-CNN channel) convolution filter with batch norm and ReLU activation. To aggregate features across time, feature vectors from each channel are combined and fed to a 2-layer bidirectional LSTM. We average these features to get the video encoding, \(h\). The action classifier applies a fully-connected layer to the encoding \(h\), followed by a softmax layer. Training uses a cross-entropy loss over action categories. The caption decoder is a two-layer LSTM. Much like conventional encoder-decoder methods for video captioning (Venugopalan et al. (2014); Donahue et al. (2015); Kaufman et al.
(2016) and Sutskever et al. (2014)), our decoder generates captions using a softmax over vocabulary words, conditioned on previously generated words. The loss is the usual negative log-probability of the word sequence: \[\mathrm{loss}_{\mathrm{captioning}} = -\sum_{i = 0}^{N - 1}\log p(w^{i + 1}\mid w^{\leq i},h;\theta), \quad (1)\] where \(w^{i}\) denotes the \(i^{\mathrm{th}}\) word of the caption, \(h\) is the video encoding, and \(\theta\) denotes the model parameters. To optimize speed and memory usage during training, the length of captions generated by the decoder is fixed at 14 words<sup>1</sup>. As is common for encoder-decoder networks, we train with teacher-forcing (Williams & Zipser (1989)). At test time, the input to the decoder at each time-step is the token generated at the previous time-step (i.e., no teacher forcing). The model is trained end-to-end for classification and captioning with a weighted sum of the classification and captioning losses, i.e., \[\mathrm{loss} = \lambda \cdot \mathrm{loss}_{\mathrm{classification}} + (1 - \lambda)\cdot \mathrm{loss}_{\mathrm{captioning}}. \quad (2)\] With \(\lambda = 1\) or \(\lambda = 0\), we end up with the pure classification and captioning tasks, respectively. For other values of \(\lambda\), they are trained jointly. The encoder is shared by the action classifier and the caption decoder. The experiments below also compare this joint training regime with models for which the encoder is trained on the classification loss or the captioning loss alone. <--- Page Split ---> Table 1: Validation and test accuracy (%) for the pure classification task (\(\lambda = 1\)), with different numbers of 2D-CNN and 3D-CNN features used for video encoding. <table><tr><td>Models</td><td>3D-CNN Channels</td><td>2D-CNN Channels</td><td>Number of Parameters</td><td>Validation Accuracy</td><td>Test Accuracy</td></tr><tr><td>M(256 - 0)</td><td>256</td><td>0</td><td>8.9M</td><td>50.06</td><td>48.84</td></tr><tr><td>M(512 - 0)</td><td>512</td><td>0</td><td>24.1M</td><td>51.96</td><td>51.15</td></tr><tr><td>M(384 - 128)</td><td>384</td><td>128</td><td>16.2M</td><td>51.11</td><td>49.96</td></tr><tr><td>M(256 - 256)</td><td>256</td><td>256</td><td>11.5M</td><td>51.62</td><td>50.44</td></tr><tr><td>M(128 - 384)</td><td>128</td><td>384</td><td>10.0M</td><td>50.82</td><td>49.57</td></tr><tr><td>M(0 - 512)</td><td>0</td><td>512</td><td>5.8M</td><td>39.78</td><td>37.80</td></tr><tr><td>M(0 - 256)</td><td>0</td><td>256</td><td>11.5M</td><td>40.2</td><td>38.83</td></tr></table> Table 2: Comparison of classification accuracy (%) of fine-grained and coarse-grained models, tested on fine-grained actions (using action categories) versus coarse-grained actions (using action groups). <table><tr><td></td><td>Coarse-grained Testing</td><td>Fine-grained Testing</td></tr><tr><td>Coarse-grained Training</td><td>57.6</td><td>41.7</td></tr><tr><td>Fine-grained Training</td><td>62.5</td><td>50.44</td></tr></table> ## 5 EXPERIMENTS We train four two-channel models to solve coarse-grained and fine-grained classification and captioning tasks. In what follows we discuss the different tasks in more detail. ### 5.1 COARSE- AND FINE-GRAINED CLASSIFICATION Something-Something provides coarse-grained categories (action groups), each comprising a set of fine-grained actions. We trained a classification model on coarse-grained action groups, using the M(256-256) architecture, with an accuracy of \(57.6\%\) (see Table 2, top-left).
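For concreteness, the joint objective of Eqs. 1 and 2 above amounts to the following PyTorch-style sketch. The module names are hypothetical, and this is a minimal illustration under the description given in Section 4, not our exact training code:

```python
import torch.nn.functional as F

def joint_loss(encoder, classifier, decoder, video, action, caption, lam=0.1):
    """Weighted sum of classification and captioning losses (Eq. 2).

    video:   batch of input clips.
    action:  (B,) ground-truth action category ids.
    caption: (B, T+1) token ids; teacher forcing feeds the ground-truth
             prefix caption[:, :-1] and predicts caption[:, 1:] (Eq. 1).
    """
    h = encoder(video)                                # (B, D) video encodings
    cls_loss = F.cross_entropy(classifier(h), action)

    logits = decoder(caption[:, :-1], h)              # (B, T, vocab_size)
    cap_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               caption[:, 1:].reshape(-1))

    # lam = 1 or lam = 0 recovers the pure classification or captioning task.
    return lam * cls_loss + (1.0 - lam) * cap_loss
```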
Table 1 reports the performance of our model for fine-grained classification. For the pure classification task (with \(\lambda = 1\)) we consider different numbers of features produced by the 2D (spatial) CNN and the 3D (spatiotemporal) CNN. The total number of features is 512 in all cases. The results show that the model benefits from both 2D and 3D features. The even split M(256-256) provides a good trade-off between performance and model complexity. We therefore use this model below, unless otherwise specified. Transferring between Coarse- and Fine-Grained Classification We can perform coarse-grained classification by mapping predictions from the fine-grained classifier onto the action groups. To this end we sum the probabilities of the fine-grained actions belonging to each action group. Interestingly, the resulting accuracy on coarse-grained action groups increases to \(62.5\%\). This improvement suggests that fine-grained training provides higher-quality features. We also examine to what extent coarse-grained performance accounts for fine-grained accuracy; i.e., how much better fine-grained prediction performs compared to chance when conditioned on coarse-grained predictions. For example, conditioned on a predicted action group, if we select the most frequent action within that action group, fine-grained test accuracy would be \(23.8\%\). One can also fix the coarse-grained model and train a linear classifier on top of its penultimate features. This yields a test accuracy of \(41.7\%\), still \(8.7\%\) below the test performance of the corresponding architecture trained on the fine-grained task, further supporting our contention that training on fine-grained details yields richer features. ### 5.2 CLASSIFICATION BASELINES As baselines, we use ImageNet-pretrained models (Simonyan & Zisserman (2014); He et al. (2015)) on individual frames, to which we add layers. First, we use just the middle frame of the video, with a 2-layer MLP with 1024 hidden units. We also consider a baseline in which we apply this approach to all 48 frames, and then average the frame-by-frame predictions. We further experiment with aggregating information over time using an LSTM layer with 1024 units. We report results in Table 3. There is a marked improvement with the LSTM, confirming that this task benefits from temporal analysis. Our best baseline was achieved with a VGG architecture, and its test accuracy is close to the best architecture reported to date on Something-Something (e.g., Zhou et al. (2017)). <--- Page Split ---> Table 3: Classification results on 174 action categories using VGG16 and ResNet152 as frame encoders, along with different strategies for temporal aggregation. <table><tr><td>Models</td><td>Test Accuracy</td></tr><tr><td>VGG16 + MLP 1024 (single middle frame)</td><td>13.29</td></tr><tr><td>VGG16 + MLP 1024 (averaged over 48 frames)</td><td>17.57</td></tr><tr><td>VGG16 + LSTM 1024</td><td>31.69</td></tr><tr><td>ResNet152 + MLP 1024 (single middle frame)</td><td>13.62</td></tr><tr><td>ResNet152 + MLP 1024 (averaged over 48 frames)</td><td>16.79</td></tr><tr><td>ResNet152 + LSTM 1024 (48 steps)</td><td>28.82</td></tr></table> ### 5.3 CAPTIONING WITH SIMPLIFIED OBJECT PLACEHOLDERS The ground-truth object placeholders in Something-Something video captions (i.e., the object descriptions provided by crowd actors) are not highly constrained.
Crowd actors have the option to type in the objects they used, along with multiple descriptive or quantitative adjectives elaborating the shape, color, material, or number of objects involved. Accordingly, it is not surprising that the distribution over object placeholders is extremely heavy-tailed, with many words occurring rarely. To facilitate training, we therefore replaced all words that occurred 5 times or fewer with [Something]. After removing rare words, we are left with 2880 words comprising around 30,000 distinct object placeholders (i.e., different combinations of nouns, adjectives, etc.). We consider a simplified task in which we modify the ground-truth captions so they only contain one word per placeholder, keeping the last noun and removing all other words from the placeholders. By substituting the pre-processed placeholders into the action category, we obtain a simplified caption. Table 5 shows an example of the process. The result is a reduced vocabulary with 2055 words. On the spectrum of granularity, captioning with simplified objects can be considered a middle ground between fine-grained action classification and captioning with full labels. We train different variations of our two-channel architecture on captions with simplified objects. Table 6 summarizes our results. We observe that the model with an equal number of 2D and 3D channels performs best (albeit by a fairly small margin). The best captioning model also performs best on the classification task. We also evaluate the models using standard captioning metrics: BLEU (Papineni et al. (2002)), ROUGE-L (Lin (2004)) and METEOR (Denkowski & Lavie (2014)). #### 5.3.1 FINE-GRAINED CAPTIONING WITH FULL OBJECT PLACEHOLDERS We also train networks on the full object placeholders. This constitutes the finest level of action granularity. The experimental results are shown in Table 8. They show that, again, the best captioning model also yields the highest corresponding classification accuracy. The Exact-Match accuracy is significantly lower than for the simplified object placeholders, as it has to account for a much wider variety of phrases. The captioning models produce impressive qualitative results, with a high degree of approximate action and object accuracy. Some examples are shown in Figure 3. More examples can be found in the appendix. To the best of our knowledge there are no baselines for the Something-Something captioning task. To quantify the performance of our captioning models, we count the percentage of generated captions that match the ground truth word by word. We refer to this as "Exact-Match Accuracy". This is a challenging metric, as the model is deemed correct only if it generates the entire caption correctly. If we use the action category predicted by model M(256-256), trained for classification, and replace all occurrences of [something] with the most likely object string conditioned on that action class, the Exact-Match accuracy is \(3.15\%\). The same baseline for simplified object placeholders is \(5.69\%\). We also implemented a conventional encoder-decoder model for captioning (Table 4).
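Since Exact-Match Accuracy drives several comparisons below, here is a minimal sketch of the metric as described above (a helper of our own, assuming whitespace-tokenized captions):

```python
def exact_match_accuracy(generated, references):
    """Fraction of generated captions matching the ground truth word for word."""
    assert len(generated) == len(references)
    hits = sum(gen.strip().split() == ref.strip().split()
               for gen, ref in zip(generated, references))
    return hits / len(references)
```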
<--- Page Split ---> Table 4: Captioning baselines using a conventional encoder-decoder architecture. <table><tr><td>Models</td><td>BLEU@4</td><td>ROUGE-L</td><td>METEOR</td><td>Exact-Match Accuracy</td><td>Classification Accuracy</td></tr><tr><td>VGG16+LSTM</td><td>31.83</td><td>52.22</td><td>24.79</td><td>3.13</td><td>31.69</td></tr><tr><td>ResNet152+LSTM</td><td>31.93</td><td>51.76</td><td>24.89</td><td>3.25</td><td>28.82</td></tr></table> <table><tr><td>Video ID</td><td>81955</td></tr><tr><td>Action Group</td><td>Holding [something]</td></tr><tr><td>Action Category</td><td>Holding [something] in front of [something]</td></tr><tr><td>Somethings</td><td>“a blue plastic cap”, “a men’s short sleeve shirt”</td></tr><tr><td>Simplified somethings</td><td>“cap”, “shirt”</td></tr><tr><td>Simplified-object Caption</td><td>Holding cap in front of shirt</td></tr><tr><td>Full Caption</td><td>Holding a blue plastic cap in front of a men’s short sleeve shirt</td></tr></table> Table 5: An example of an annotation file for a Something-Something video. ![](images/5_0.jpg) <center>Figure 3: Captioning examples: </center> Model Outputs: [Piling coins up], [Removing mug, revealing cup behind]. Ground Truths: [Stacking coins], [Removing cup, revealing little cup behind]. Training settings In all our experiments we use a frame rate of \(12\,fps\). During training we randomly pick 48 consecutive frames. For videos with fewer than 48 frames, we replicate the first and last frames to achieve the intended length. We resize the frames to \(128\times 128\), and then use random cropping of size \(96\times 96\). For validation and testing, we use \(96\times 96\) center cropping. We optimize all models using Adam, with an initial learning rate of 0.001. While the captioning task theoretically entails action classification, we found that our two-channel networks optimized on the pure captioning task do not perform as well as models trained jointly on classification and captioning (see Table 7). By coarsely tuning the hyper-parameter \(\lambda\) empirically, we found \(\lambda = 0.1\) to work well and fix it at this value for the captioning experiments below. More specifically, we first train with a pure classification loss, by setting \(\lambda = 1\), and subsequently introduce the captioning loss by gradually decreasing \(\lambda\) to 0.1. ## 6 TRANSFER LEARNING One astonishing property of neural networks is their ability to learn representations that transfer well to other tasks (e.g., Donahue et al. (2013); Sharif Razavian et al. (2014)). A distinguishing feature of the ImageNet task, which likely contributes to its potential for transfer learning, is the dataset size and the variety of fine-grained discriminations required. In what follows we explore transfer learning performance as a function of source task granularity. We introduce 20bn-kitchenware, a few-shot video classification dataset with 390 videos over 13 action categories (see Fig. 4). It contains video clips, roughly 4 seconds long, of a kitchen utensil being manipulated, and was designed to capture fine-grained actions with subtle differences. For each utensil \(X\in \{\mathrm{fork},\mathrm{spoon},\mathrm{knife},\mathrm{tongs}\}\), the target label belongs to one of 3 actions: "Using \(X\)", "Pretending to use \(X\)", or "Trying but failing to use \(X\)". In addition to these 12 action categories, we also include a fall-back class labeled "Doing other things". We further encourage the model to pay attention to visual details by including unused 'negative' objects.
The last row of Fig. 4 shows one example; the target label indicates a manipulation of tongs, but the clip also contains a spoon with an egg in it. Given the limited amount of data available for training, the action granularity, and the presence of negative objects, we hypothesize that only models that have some understanding of physical-world properties will perform well. We will release 20bn-kitchenware upon publication of this paper. <--- Page Split ---> Table 6: Performance of our two-channel models with different sizes of channel features for captioning with simplified objects. For this task we use \(\lambda = 0.1\). The maximum sequence length is 14. <table><tr><td>Models</td><td>BLEU@4</td><td>ROUGE-L</td><td>METEOR</td><td>Exact-Match Accuracy</td><td>Classification Accuracy</td></tr><tr><td>M(256 - 0)</td><td>22.75</td><td>44.54</td><td>22.40</td><td>8.46</td><td>50.64</td></tr><tr><td>M(512 - 0)</td><td>23.28</td><td>45.29</td><td>22.75</td><td>8.47</td><td>50.96</td></tr><tr><td>M(384 - 128)</td><td>23.02</td><td>44.86</td><td>22.58</td><td>8.53</td><td>50.73</td></tr><tr><td>M(256 - 256)</td><td>23.04</td><td>44.89</td><td>22.60</td><td>8.63</td><td>51.38</td></tr><tr><td>M(128 - 384)</td><td>22.76</td><td>44.40</td><td>22.39</td><td>8.33</td><td>50.04</td></tr></table> <table><tr><td>Models</td><td>Classification Accuracy</td><td>Exact-Match Accuracy</td></tr><tr><td>λ = 0</td><td>39.78</td><td>5.96</td></tr><tr><td>λ = 0.1</td><td>51.32</td><td>8.63</td></tr></table> Table 7: Comparison of models trained on the pure captioning task vs. jointly on captioning and classification. Results are shown for captioning with simplified object placeholders. The test classification accuracy for the pure captioning model was obtained by freezing the video encoder and training a linear classifier on top of the penultimate features. ![](images/6_0.jpg) <center>Figure 4: 20bn-kitchenware samples: Using a knife to cut something (left), Trying but failing to pick something up with tongs (right). </center> ### 6.1 PROPOSED BENCHMARK Our transfer learning experiments consider four two-channel models that have been respectively pre-trained on coarse-grained labels (classification on action groups), on fine-grained labels (classification on 174 action categories), on simplified captions (captioning on fine-grained action categories expanded with a single object descriptor) and on full captions (captioning on fine-grained action categories expanded with complete object descriptors). We also include two neural nets pre-trained on other datasets: a VGG16 network pre-trained on ImageNet, and an Inflated-ResNet34 pre-trained on Kinetics<sup>3</sup>. The overall training procedure remains the same for all models. We freeze each pre-trained model and then train a neural net on top of the extracted penultimate features. Independent of the architecture used, we use the pre-trained model to produce 12 feature vectors per second. To achieve this, where necessary we split the input video into smaller clips and apply the pre-trained network to each clip individually<sup>4</sup>. In the simplest case, we pass the obtained features through a logistic regressor and average the predictions over time. We also report results for which we classify the pre-trained features using an MLP with 512 hidden units, as well as a single bidirectional LSTM layer with 128 hidden states. This allows the network to perform some temporal analysis of the target domain.
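The probing protocol just described can be summarized by the following sketch. The names are hypothetical and details such as the exact temporal pooling are our assumptions; only the frozen encoder, the 12 feature vectors per second, and the head sizes come from the description above.

```python
import torch
import torch.nn as nn

def extract_features(encoder, clips):
    """Apply a frozen pre-trained encoder clip by clip, yielding one feature
    vector per clip (12 per second of video)."""
    encoder.eval()
    with torch.no_grad():
        return torch.stack([encoder(clip) for clip in clips], dim=1)  # (B, T, D)

class BiLSTMHead(nn.Module):
    """Bidirectional LSTM probe with 128 hidden states per direction, trained
    on top of the frozen features; a logistic-regression or MLP head would be
    a drop-in replacement for self.rnn."""
    def __init__(self, feat_dim, n_classes, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):               # feats: (B, T, D)
        h, _ = self.rnn(feats)              # (B, T, 2 * hidden)
        return self.out(h.mean(dim=1))      # average over time, then classify
```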
#### 6.1.1 OBSERVATIONS For each pretrained model and classifier, we evaluate 1-shot, 5-shot and 10-shot performance, averaging scores over 10 runs. Fig. 5 shows the average scores with \(95\%\) confidence intervals. The most noticeable findings are: 1. Logistic regression vs. MLP vs. BiLSTM: Using a recurrent network yields better performance. 2. Something-Something features vs. others: Our models pre-trained on Something-Something outperform the other, external models. This is not surprising given the target domain; 20bn-kitchenware samples are, by design, closer to Something-Something samples than to ImageNet or Kinetics ones. It is surprising, however, that VGG16 features perform better on 20bn-kitchenware than Kinetics features. <--- Page Split ---> <table><tr><td>Models</td><td>BLEU@4</td><td>ROUGE-L</td><td>METEOR</td><td>Exact Match Accuracy</td><td>Classification Accuracy</td></tr><tr><td>M(256 - 0)</td><td>16.87</td><td>40.03</td><td>19.13</td><td>3.33</td><td>50.48</td></tr><tr><td>M(512 - 0)</td><td>16.92</td><td>40.54</td><td>19.26</td><td>3.56</td><td>49.81</td></tr><tr><td>M(384 - 128)</td><td>17.99</td><td>41.82</td><td>20.03</td><td>3.80</td><td>50.92</td></tr><tr><td>M(256 - 256)</td><td>17.61</td><td>41.28</td><td>19.69</td><td>3.76</td><td>50.56</td></tr><tr><td>M(128 - 384)</td><td>16.80</td><td>39.98</td><td>19.11</td><td>3.61</td><td>49.24</td></tr></table> Table 8: Performance of captioning models with different sizes of channel features on full object placeholders. For this task we use \(\lambda = 0.1\). The maximum sequence length is 14. ![](images/7_0.jpg) <center>Figure 5: 20bn-kitchenware transfer learning results: averaged scores obtained using a VGG16, an Inflated ResNet34, as well as two-channel models trained on the four aforementioned tasks. We report results using 1 training sample per class, 5 training samples per class, or the full training set. </center> 3. Effect of action granularity: Fig. 5 supports the contention that training on fine-grained tasks yields better features. The best model on this benchmark is the one trained jointly on full captions and action categories. The only exception is the model trained to do pure captioning. ## 7 CONCLUSIONS Pre-training neural networks on large labeled datasets has become a driving force in deep learning applications. Some might argue that it may be considered a serious competitor to unsupervised learning as a means to generate universal features for the visual world. Ever since ImageNet became popular as a generic feature extractor, a hypothesis has been that it is the dataset size, the amount of detail and the variety of labels that drive a network's capability to learn useful features. To the degree that this hypothesis is true, generating visual features capable of transfer learning should involve source tasks that (i) are fine-grained and complex, and (ii) involve video, not still images, because video is a much more fertile domain for defining complex tasks that represent aspects of the physical world. This paper provides further evidence for that hypothesis, showing that task granularity has a strong influence on the quality of the learned features. We also show that captioning, which to the best of our knowledge has been used only as a target task in transfer learning, can be a powerful source task. Our work suggests that one gets substantial leverage by utilizing ever more fine-grained recognition tasks, represented in the form of captions, possibly in combination with question-answering.
Unlike the current trend of engineering neural networks to perform bounding box generation, semantic segmentation, or tracking, the appeal of fine-grained textual labels is that they provide a simple, homogeneous interface. More importantly, they may provide "just enough" localization and tracking capability to solve a wide variety of important tasks, without allocating valuable compute resources to satisfying intermediate goals at an accuracy that these tasks may not actually require. <--- Page Split ---> ## REFERENCES Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. CoRR, 2015. URL http://arxiv.org/abs/1504.00325. Michael Denkowski and Alon Lavie. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation, 2014. Jacob Devlin, Saurabh Gupta, Ross B. Girshick, Margaret Mitchell, and C. Lawrence Zitnick. Exploring nearest neighbor approaches for image captioning. CoRR, 2015. URL http://arxiv.org/abs/1505.04467. Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013. Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2625-2634, 2015. Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, Florian Hoppe, Christian Thurau, Ingo Bax, and Roland Memisevic. The "something something" video database for learning and evaluating visual common sense. In Proceedings of the IEEE International Conference on Computer Vision, ICCV17, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, 2015. URL http://arxiv.org/abs/1512.03385. Hendrik Heuer, Christof Monz, and Arnold WM Smeulders. Generating captions without looking beyond objects. arXiv preprint arXiv:1610.03708, 2016. Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1725-1732, 2014. Dotan Kaufman, Gil Levi, Tal Hassner, and Lior Wolf. Temporal tessellation for video annotation and summarization. arXiv preprint arXiv:1612.06950, 2016. Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The Kinetics human action video dataset, 2017. preprint arXiv:1705.06950. Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out, 2004. Mathew Monfort, Bolei Zhou, Sarah Adel Bargal, Alex Andonian, Tom Yan, Kandan Ramakrishnan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, and Aude Oliva. Moments in Time dataset: one million videos for event understanding, 2018. preprint arXiv:1801.03150. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu.
Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, 2002. Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Batra. Grad-CAM: Why did you say that? Visual explanations from deep networks via gradient-based localization. CoRR, abs/1610.02391, 2016. URL http://arxiv.org/abs/1610.02391. Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806-813, 2014. <--- Page Split ---> Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, 2014. URL http://arxiv.org/abs/1409.1556. Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human action classes from videos in the wild, 2012. preprint arXiv:1212.0402. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. CoRR, 2014. URL http://arxiv.org/abs/1409.3215. Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, and Kate Saenko. Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729, 2014. R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1:270-280, 1989. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. MSR-VTT: A large video description dataset for bridging video and language. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, pp. 5288-5296. IEEE, 2016. Li Yao, Nicolas Ballas, Kyunghyun Cho, John R. Smith, and Yoshua Bengio. Oracle performance for visual captioning. arXiv preprint arXiv:1511.04590, 2015. Bolei Zhou, Alex Andonian, and Antonio Torralba. Temporal relational reasoning in videos. CoRR, 2017. URL http://arxiv.org/abs/1711.08496. <--- Page Split ---> ## APPENDIX In this supplementary document, we provide:

- Qualitative examples for our classification and captioning.
- Visualizations of our classification and captioning models using Grad-CAM.
- The full list of action categories of 20bn-kitchenware.

## QUALITATIVE EXAMPLES OF CLASSIFICATION Here we provide video examples and their ground-truth action categories, along with model predictions for each. We use our M(256-256) model, which is trained with \(\lambda = 0.1\). Interestingly, notice that even when the predicted actions are incorrect, e.g., row 4 in Figure 6, they are, nevertheless, usually quite sensible. ![](images/10_0.jpg) <center>Figure 6: Ground truth and model prediction for classification examples. </center> <--- Page Split ---> ## QUALITATIVE EXAMPLES OF CAPTIONING Below are video examples, accompanied by their ground-truth caption and the caption generated by the model. We use model M(256-256) in this section as well, which is also trained jointly for classification and captioning (with \(\lambda = 0.1\)). ![](images/11_0.jpg) (a) Ground Truth: Touching (without moving) the head of a toy. (b) Model output: Poking a stuffed animal so lightly that it doesn't or almost doesn't move. ![](images/11_1.jpg) (c) Ground Truth: Pushing duster with white coloured pen. (d) Model output: Pushing phone with pen. ![](images/11_2.jpg) (e) Ground Truth: Plugging a charger into a phone.
(f) Model output: Plugging charger into phone. ![](images/11_3.jpg) (g) Ground Truth: Piling bowl up. (h) Model output: Stacking bowls. ![](images/11_4.jpg) (i) Ground Truth: Removing cup, revealing little cup behind. (j) Model output: Removing mug, revealing cup behind. ![](images/11_5.jpg) (k) Ground Truth: Hitting cup with spoon. (l) Model output: Hitting mug with spoon. <--- Page Split ---> ![](images/12_0.jpg) (a) Ground Truth: Stacking 4 coins. (b) Model output: Piling coins up. ![](images/12_1.jpg) (c) Ground Truth: Taking toffee eclairs from jar. (d) Model output: Taking battery out of container. ![](images/12_2.jpg) (e) Ground Truth: Rolling paper towels on a flat surface. (f) Model output: Letting bottle roll along a flat surface. ![](images/12_3.jpg) (g) Ground Truth: Pretending to put nail polish into jar. (h) Model output: Pretending to put bottle into container. ![](images/12_4.jpg) (i) Ground Truth: Letting toy truck roll along a flat surface. (j) Model output: Pushing car from right to left. ![](images/12_5.jpg) (k) Ground Truth: Lifting up one end of flower pot, then letting it drop down. (l) Model output: Lifting up one end of bucket, then letting it drop down. ![](images/12_6.jpg) (m) Ground Truth: Letting roll roll down a slanted surface. (n) Model output: Letting spray can roll down a slanted surface. ![](images/12_7.jpg) (o) Ground Truth: Lifting plate with cutlery on it. (p) Model output: Lifting plate with spoon on it. <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 9: Ground truth captions and model outputs for video examples. </center> <--- Page Split ---> ## VISUALIZATION OF CLASSIFICATION MODEL WITH GRAD-CAM To visualize regularities learned from the data, we extracted temporally-sensitive saliency maps using Grad-CAM (Selvaraju et al. (2016)) for both the classification and captioning tasks. To this end we extended the Grad-CAM implementation to video processing. Figure 10 shows saliency maps of examples from Something-Something obtained with model M(256-0) trained on fine-grained action categories, with \(\lambda = 1\) (i.e., the pure classification task). ![](images/14_0.jpg) <center>Figure 10: Grad-CAM for M(256-0) on video examples predicted correctly during fine-grained action classification. We can see that the model focuses on different parts of different frames in the video in order to make a prediction. </center> ## VISUALIZATION OF CAPTIONING MODEL USING GRAD-CAM To obtain saliency maps during the captioning process, we calculate Grad-CAM once for each token, so that different regions of the video are highlighted for different tokens. Figures 11-13 show saliency maps for the captioning model, jointly trained with \(\lambda = 0.1\). Notice how the attentional focus of the model changes qualitatively as we perform Grad-CAM for different tokens in the target caption. <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 11: Grad-CAM on a video example with ground-truth caption Pretending to pick mouse up. The model focuses on hand motion at the beginning and end of the video for the token "Up". </center> <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 12: Grad-CAM on a video example with ground-truth caption Moving toy closer to toy. We can see that the model focuses on the gap between the toys for the token "Moving". It also looks at both toy objects for the token "Closer".
</center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 13: Grad-CAM on video example with ground truth caption Bottle being deflected from ball during the captioning process. The model focuses on the collision between bottle and ball when using the token "Deflected". </center> <--- Page Split ---> ## 20BN-KITCHENWARE ACTION CATEGORIES Table 9 gives the full list of 20bn-kitchenware action categories. <table><tr><td>Action categories</td></tr><tr><td>Using a fork to pick something up</td></tr><tr><td>Pretending to use a fork to pick something up</td></tr><tr><td>Trying but failing to pick something up with a fork</td></tr><tr><td>Using a spoon to pick something up</td></tr><tr><td>Pretending to use a spoon to pick something up</td></tr><tr><td>Trying but failing to pick something up with a spoon</td></tr><tr><td>Using a knife to cut something</td></tr><tr><td>Pretending to use a knife to cut something</td></tr><tr><td>Trying but failing to cut something with a knife</td></tr><tr><td>Using tongs to pick something up</td></tr><tr><td>Pretending to use tongs to pick something up</td></tr><tr><td>Trying but failing to pick something up with tongs</td></tr><tr><td>Doing other things</td></tr></table> Table 9: The 13 action categories represented in 20bn-kitchenware. <--- Page Split --->
## ABSTRACT We describe a DNN for video classification and captioning, trained end-to-end with shared features to solve tasks at different levels of granularity, exploring the link between the granularity of a source task and the quality of the learned features for transfer learning. To transfer to a new task domain, we freeze the trained encoder and fine-tune an MLP on the target domain. We train on the Something-Something dataset with over 220,000 videos and multiple levels of target granularity, including 50 action groups, 174 fine-grained action categories, and captions. Classification and captioning with Something-Something are challenging because of the subtle differences between actions, applied to thousands of different object classes, and the diversity of captions penned by crowd actors. Our model performs better than existing classification baselines for Something-Something, with impressive fine-grained results, and it yields a strong baseline on the new Something-Something captioning task. Experiments reveal that training with more fine-grained tasks tends to produce better features for transfer learning. ## 1 INTRODUCTION Common-sense video understanding entails fine-grained recognition of actions, objects, spatiotemporal relations, and physical interactions, arguably well beyond the capabilities of current techniques. A general framework will need to discriminate myriad variations of actions and interactions. For example, we need to be able to discriminate similar actions that differ in subtle ways, e.g., 'putting a pen beside the cup', 'putting the pen in the cup', or 'pretending to put the pen on the table'. Similarly, one must be able to cope with diverse actors and object classes. Such generality and fine-grained discrimination are key challenges to video understanding. In contrast to current approaches to action recognition and captioning on relatively small corpora with coarse-grained actions, this paper considers fine-grained classification and captioning tasks on large-scale video corpora. Training is performed on the Something-Something dataset (Goyal et al. (2017)), with 174 fine-grained action categories and thousands of different objects, all under significant variations in lighting, viewpoint, background clutter, and occlusion. We describe a DNN architecture comprising a 2-channel convolutional net and an LSTM recurrent network for video encoding. A common encoding is shared for classification and caption generation. The resulting network performs better than the baseline action classification results in Goyal et al. (2017), and it gives impressive results on an extremely challenging fine-grained captioning task. We also demonstrate the quality of the learned features through transfer learning from Something-Something features to a kitchen action dataset. The main contributions in the paper include:
1. Explore the link between label granularity and feature quality: We exploit 3 levels of granularity in sth-sth-v2, namely, action groups, action categories, and captions. Experiments show that more fine-grained labels yield richer features (see Table 3 and Fig. 4).
2. Baselines for captioning on sth-sth-v2 data: We note that the captioning task is new for this dataset; the original dataset did not provide captions.
3. Captioning as a source task for transfer learning: We show that models trained to jointly perform classification and captioning learn features that transfer better to other tasks (e.g., see Fig. 4). 
To the best of our knowledge, captioning has, to date, been used as a target task. Our results suggest that captioning is a powerful source task.
4. 20bn-kitchenware: We introduce a new dataset designed for video transfer learning. <--- Page Split ---> ## 2 VIDEO DATA Video action classification and captioning have received significant attention for several years, but progress has been somewhat slow, in part because of a lack of large-scale corpora. Using web sources (e.g., YouTube, Vimeo and Flickr) and human annotators, larger datasets have been collected (e.g., Kay et al. (2017); Monfort et al. (2018)), but they lack control over pose variations, motion, and other scene properties that might be important for learning fine-grained models. More recently, crowd-sourced/crowd-acted data has emerged. This allows targeted video domains, action classes, and control over subtle differences between fine-grained action classes. The first version of Goyal et al. (2017) has 100,000 videos of human-object interactions, comprising 50 coarse-grained action groups, decomposed further into 174 related action categories. The videos exhibit significant diversity in viewing and lighting, objects and backgrounds, and the ways in which actors perform actions. Baseline performance in Goyal et al. (2017) was a correct action classification rate of \(11.5\%\), and \(36.2\%\) on action groups. Zhou et al. (2017) report \(42.01\%\) classification accuracy on Something-Something-V1 action categories. With our architecture we obtain validation accuracy of \(38.8\%\). Something-Something-V2 is larger, with 220,847 videos of the same 174 action categories. In addition, each video includes a caption that was authored and uploaded by the crowd actor. These captions incorporate the action class as well as descriptions of the objects involved. That is, the captions mirror the action template, but with the generic placeholder Something replaced by the object(s) chosen by the actor. As an example, a video with template action 'Putting [something] on [something]' might have the caption 'Putting a red cup on a blue plastic box'. In a nutshell, this dataset provides different levels of granularity: 50 coarse-grained action groups, 174 fine-grained action categories, and even more fine-grained labeling via video captions. ## 3 VIDEO CLASSIFICATION AND CAPTIONING TASKS Due to the prevalence of datasets like UCF-101 (Soomro et al. (2012)), sports1M (Karpathy et al. (2014)) and, more recently, Kinetics (Kay et al. (2017)), most research on action classification has focused on models for coarse-grained action classification. In extreme cases, action classes can be discriminated from a glimpse of the scene, often encoded in isolated frames; e.g., inferring "soccer play" from the presence of a green field. Even when motion is essential to the action, many existing approaches do well by encoding rough statistics over velocities, directions, and motion positions. Little work has been devoted to the task of representing details of object interactions or how configurations change over time. Image and video captioning have received increasing attention since the release of captioning data, notably Microsoft COCO (Chen et al. (2015)) and MSR-VTT (Xu et al. (2016)). One problem with existing captioning approaches is that many datasets implicitly allow models to "cheat", e.g., by generating phrases that are grammatically and semantically coherent, but only loosely related to the fine-grained visual structure. 
It has been shown, for example, that a language model trained on unordered, individual words (e.g., object-nouns) predicted by a separate NN can compete with a captioning model trained on the actual task end-to-end (e.g., Yao et al. (2015); Heuer et al. (2016)). Similarly, nearest neighbor methods have been surprisingly effective (Devlin et al. (2015)). Captioning tasks, if designed appropriately, should capture detailed scene properties. Labels with more subtle and fine-grained distinctions would directly expose the ability (or inability) of a network to correctly infer the scene properties encoded in the captions. The captions provided with the Something-Something dataset are designed to be sufficiently fine-grained that attention to detail is needed to yield high prediction accuracy. For example, to generate correct captions, networks must not only infer the correct actions, but must also learn to recognize different objects and their properties, as well as object interactions. ## 4 APPROACH We use a modified encoder-decoder architecture with an action classifier applied to the video encoding (see Fig. 1). The decoder or classifier can be switched off, leaving pure classification or captioning models. One can also jointly train the classification and captioning models. We train our two-channel architecture to solve four tasks at different granularity levels:
- Coarse-grained classification on action groups (CG actions)
- Fine-grained classification on 174 action categories (FG actions)
<--- Page Split ---> ![](images/2_0.jpg) <center>Figure 1: Our model architecture includes a two-channel CNN followed by an LSTM video encoder, an action classifier, and an LSTM decoder for caption generation. </center> ![](images/2_1.jpg) <center>Figure 2: Our encoder includes a two-channel CNN followed by an LSTM for aggregating features. </center>
- Captioning on simplified objects (SO captions)
- Fine-grained captioning with full object placeholders (Captions)
Finally, we investigate the effectiveness of the learned features for transfer learning. ### 4.1 MODIFIED VIDEO ENCODER-DECODER The video encoder receives the input video \(v\) and maps it to an embedded representation \(h\). Conditioned on \(h\), a caption decoder generates the caption \(c\), and a classifier predicts the action category. The encoder processes the video with a two-channel convolutional architecture (Fig. 2). A spatial 2D-CNN and a spatiotemporal 3D-CNN are applied in parallel. The 3D-CNN features are used in lieu of a separate module to compute motion (e.g., optical flow) features. The basic building block of each channel is a \(3 \times 3 \times 3\) (\(3 \times 3\) in the 2D-CNN channel) convolution filter with batchnorm and ReLU activation. To aggregate features across time, the feature vectors from each channel are combined and fed to a 2-layer bidirectional LSTM. We average these features to get the video encoding, \(h\). The action classifier applies a fully-connected layer to the encoding \(h\), followed by a softmax layer. Training uses a cross-entropy loss over action categories. The caption decoder is a two-layer LSTM. Much like conventional encoder-decoder methods for video captioning (Venugopalan et al. (2014); Donahue et al. (2015); Kaufman et al. (2016); Sutskever et al. (2014)), our decoder generates captions using a softmax over vocabulary words, conditioned on previously generated words. 
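Before turning to the training losses, the following is a minimal sketch of the two-channel encoder and classifier head described above, assuming PyTorch; the block depths, strides, and pooling choices are illustrative guesses rather than the exact configuration used in the experiments.

```python
# Minimal sketch of the two-channel video encoder (PyTorch assumed).
# Layer counts, channel sizes, and pooling choices are illustrative,
# not the authors' exact configuration.
import torch
import torch.nn as nn

def block3d(c_in, c_out):
    # 3x3x3 convolution with batchnorm and ReLU, as described in the text;
    # stride 1 along time keeps one feature vector per frame.
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, stride=(1, 2, 2), padding=1),
        nn.BatchNorm3d(c_out), nn.ReLU(inplace=True))

def block2d(c_in, c_out):
    # 3x3 convolution counterpart for the 2D (per-frame) channel.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class TwoChannelEncoder(nn.Module):
    def __init__(self, c3d=256, c2d=256, hidden=512, num_classes=174):
        super().__init__()
        self.cnn3d = nn.Sequential(block3d(3, 64), block3d(64, 128), block3d(128, c3d))
        self.cnn2d = nn.Sequential(block2d(3, 64), block2d(64, 128), block2d(128, c2d))
        # 2-layer bidirectional LSTM aggregates the per-timestep features.
        self.lstm = nn.LSTM(c3d + c2d, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)  # softmax in the loss

    def forward(self, video):  # video: (B, 3, T, H, W)
        b, _, t, h, w = video.shape
        f3 = self.cnn3d(video).mean(dim=(3, 4)).transpose(1, 2)   # (B, T, c3d)
        frames = video.transpose(1, 2).reshape(b * t, 3, h, w)
        f2 = self.cnn2d(frames).mean(dim=(2, 3)).view(b, t, -1)   # (B, T, c2d)
        feats, _ = self.lstm(torch.cat([f3, f2], dim=-1))
        enc = feats.mean(dim=1)            # average over time -> video encoding h
        return enc, self.classifier(enc)   # encoding and action logits
```

Keeping the temporal dimension unstrided in the 3D channel preserves one feature vector per frame, so the bidirectional LSTM receives the per-timestep features that are then averaged into the encoding \(h\).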
The loss is the usual negative log-probability of the word sequence: \[\mathrm{loss}_{\mathrm{captioning}} = -\sum_{i = 0}^{N - 1}\log p(w^{i + 1}|w^{\leq i},h;\theta), \quad (1)\] where \(w^{i}\) denotes the \(i^{\mathrm{th}}\) word of the caption, \(h\) is the video encoding, and \(\theta\) denotes the model parameters. To optimize speed and memory usage during training, the length of captions generated by the decoder is fixed at 14 words<sup>1</sup>. As is common for encoder-decoder networks, we train with teacher-forcing (Williams & Zipser (1989)). At test time, the input to the decoder at each time-step is the token generated at the previous time-step (i.e., no teacher forcing). <--- Page Split ---> Table 1: Validation and test accuracy for the pure classification task (\(\lambda = 1\)), with different numbers of 2D-CNN and 3D-CNN features used for video encoding. <table><tr><td>Models</td><td>3D-CNN Channels</td><td>2D-CNN Channels</td><td>Number of Parameters</td><td>Validation Accuracy</td><td>Test Accuracy</td></tr><tr><td>M(256-0)</td><td>256</td><td>0</td><td>8.9M</td><td>50.06</td><td>48.84</td></tr><tr><td>M(512-0)</td><td>512</td><td>0</td><td>24.1M</td><td>51.96</td><td>51.15</td></tr><tr><td>M(384-128)</td><td>384</td><td>128</td><td>16.2M</td><td>51.11</td><td>49.96</td></tr><tr><td>M(256-256)</td><td>256</td><td>256</td><td>11.5M</td><td>51.62</td><td>50.44</td></tr><tr><td>M(128-384)</td><td>128</td><td>384</td><td>10.0M</td><td>50.82</td><td>49.57</td></tr><tr><td>M(0-512)</td><td>0</td><td>512</td><td>5.8M</td><td>39.78</td><td>37.80</td></tr><tr><td>M(0-256)</td><td>0</td><td>256</td><td>11.5M</td><td>40.2</td><td>38.83</td></tr></table> Table 2: Comparison of classification accuracy of fine-grained and coarse-grained models, tested on fine-grained actions (using action categories) versus coarse-grained actions (using action groups). <table><tr><td></td><td>Coarse-grained Testing</td><td>Fine-grained Testing</td></tr><tr><td>Coarse-grained Training</td><td>57.6</td><td>41.7</td></tr><tr><td>Fine-grained Training</td><td>62.5</td><td>50.44</td></tr></table> The model is trained end-to-end for classification and captioning with a weighted sum of the classification and captioning losses, i.e., \[\mathrm{loss} = \lambda \cdot \mathrm{loss}_{\mathrm{classification}} + (1 - \lambda)\cdot \mathrm{loss}_{\mathrm{captioning}}. \quad (2)\] With \(\lambda = 1\) or \(\lambda = 0\), we end up with the pure classification and captioning tasks respectively. For other values of \(\lambda\), the two are trained jointly. The encoder is shared by the action classifier and the caption decoder. The experiments below also compare this joint training regime with models for which the encoder is trained on the classification loss or the captioning loss alone. ## 5 EXPERIMENTS We train four two-channel models to solve coarse-grained and fine-grained classification and captioning tasks. In what follows we discuss the different tasks in more detail. ### 5.1 COARSE- AND FINE-GRAINED CLASSIFICATION Something-Something provides coarse-grained categories (action groups), each comprising a set of fine-grained actions. We trained a classification model on coarse-grained action groups, using the M(256-256) architecture, with accuracy of \(57.6\%\) (see Table 2 (top-left)). Table 1 reports the performance of our model for fine-grained classification. 
For the pure classification task (with \(\lambda = 1\)) we consider different numbers of features produced by the 2D (spatial) CNN and the 3D (spatiotemporal) CNN. The total number of features is 512 in all cases. The results show that the model benefits from both 2D and 3D features. The even split M(256-256) provides a good trade-off between performance and model complexity. We therefore use this model below, unless otherwise specified. Transferring between Coarse- and Fine-Grained Classification. We can perform coarse-grained classification by mapping predictions from the fine-grained classifier onto the action groups. To this end we sum the probabilities of the fine-grained actions belonging to each action group. Interestingly, the resulting accuracy on coarse-grained action groups increases to \(62.5\%\). This improvement suggests that fine-grained training provides higher quality features. We also examine to what extent coarse-grained performance accounts for fine-grained accuracy; i.e., how much better fine-grained prediction performs compared to chance when conditioned on coarse-grained predictions. For example, conditioned on a predicted action group, if we select the most frequent action within the action group, fine-grained test accuracy would be \(23.8\%\). One can also fix the coarse-grained model and train a linear classifier on top of its penultimate features. This yields test accuracy of \(41.7\%\), still \(8.7\%\) below the test performance of the corresponding architecture trained on the fine-grained task, further supporting our contention that training on fine-grained details yields richer features. ### 5.2 CLASSIFICATION BASELINES As a baseline, we use ImageNet-pretrained models (Simonyan & Zisserman (2014); He et al. (2015)) on individual frames, to which we add layers. First, we use just the middle frame of the video, with a 2-layer MLP with 1024 hidden units. We also consider a baseline in which we apply this approach to all 48 frames, and then average the frame-by-frame predictions. We also experiment with aggregating information over time using an LSTM layer with 1024 units. We report results in Table 3. There is a marked improvement with the LSTM, confirming that this task benefits from temporal analysis. Our best baseline was achieved with a VGG architecture, and its test accuracy is close to the best architecture reported to date on Something-Something (e.g., Zhou et al. (2017)). <--- Page Split ---> Table 3: Classification results on 174 action categories using VGG16 and ResNet152 as frame encoders, along with different strategies for temporal aggregation. <table><tr><td>Models</td><td>Test Accuracy</td></tr><tr><td>VGG16 + MLP 1024 (single middle frame)</td><td>13.29</td></tr><tr><td>VGG16 + MLP 1024 (averaged over 48 frames)</td><td>17.57</td></tr><tr><td>VGG16 + LSTM 1024</td><td>31.69</td></tr><tr><td>ResNet152 + MLP 1024 (single middle frame)</td><td>13.62</td></tr><tr><td>ResNet152 + MLP 1024 (averaged over 48 frames)</td><td>16.79</td></tr><tr><td>ResNet152 + LSTM 1024 (48 steps)</td><td>28.82</td></tr></table> ### 5.3 CAPTIONING WITH SIMPLIFIED OBJECT PLACEHOLDERS The ground truth object placeholders in Something-Something video captions (i.e., the object descriptions provided by crowd actors) are not highly constrained. Crowd actors have the option to type in the objects they used, along with multiple descriptive or quantitative adjectives, elaborating the shape, color, material, or number of objects involved. 
Accordingly, it is not surprising that the distribution over object placeholders is extremely heavy-tailed, with many words occurring rarely. To facilitate training, we therefore replaced all words that occurred 5 times or less by [Something]. After removing rare words, we are left with 2880 words comprising around 30,000 distinct object placeholders (i.e., different combinations of nouns, adjectives, etc.). We consider a simplified task in which we modify the ground truth captions so that they contain only one word per placeholder, keeping the last noun and removing all other words from the placeholder. By substituting the pre-processed placeholders into the action category, we obtain a simplified caption. Table 5 shows an example of the process. The result is a reduced vocabulary of 2055 words. In the spectrum of granularity, captioning with simplified objects can be considered a middle ground between fine-grained action classification and captioning with full labels. We train different variations of our two-channel architecture on captions with simplified objects. Table 6 summarizes our results. We observe that the model with an equal number of 2D and 3D channels performs best (albeit by a fairly small margin). The best captioning model also performs best on the classification task. We also evaluate the models using standard captioning metrics: BLEU (Papineni et al. (2002)), ROUGE-L (Lin (2004)) and METEOR (Denkowski & Lavie (2014)). #### 5.3.1 FINE-GRAINED CAPTIONING WITH FULL OBJECT PLACEHOLDERS We also train networks on the full object placeholders. This constitutes the finest level of action granularity. The experimental results are shown in Table 8. They show that, again, the best captioning model also yields the highest corresponding classification accuracy. The Exact-Match accuracy is significantly lower than for the simplified object placeholders, as it has to account for a much wider variety of phrases. The captioning models produce impressive qualitative results with a high degree of approximate action and object accuracy. Some examples are shown in Figure 3; more examples can be found in the appendix. To the best of our knowledge there are no baselines for the Something-Something captioning task. To quantify the performance of our captioning models, we count the percentage of generated captions that match the ground truth word by word. We refer to this as "Exact-Match Accuracy". This is a challenging metric, as the model is deemed correct only if it generates the entire caption correctly. If we use the action category predicted by model M(256-256), trained for classification, and replace all occurrences of [something] with the most likely object string conditioned on that action class, the Exact-Match accuracy is \(3.15\%\). The same baseline for simplified object placeholders is \(5.69\%\). We also implemented a conventional encoder-decoder model for captioning (Table 4). 
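Since Exact-Match accuracy drives several of the comparisons here, the following is a minimal sketch of the metric in plain Python; whitespace tokenization and lowercasing are our assumptions, as the text does not specify a normalization scheme.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of generated captions matching the ground truth word for word.

    Whitespace tokenization and lowercasing are assumptions; the paper does
    not specify its normalization.
    """
    assert len(predictions) == len(references)
    hits = sum(p.lower().split() == r.lower().split()
               for p, r in zip(predictions, references))
    return hits / len(predictions)

# Example: only the first pair matches exactly.
print(exact_match_accuracy(
    ["plugging charger into phone", "stacking bowls"],
    ["plugging charger into phone", "piling bowl up"]))  # 0.5
```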
<--- Page Split ---> Table 4: Captioning baselines using a conventional encoder-decoder architecture. <table><tr><td>Models</td><td>BLEU@4</td><td>ROUGE-L</td><td>METEOR</td><td>Exact-Match Accuracy</td><td>Classification Accuracy</td></tr><tr><td>VGG16+LSTM</td><td>31.83</td><td>52.22</td><td>24.79</td><td>3.13</td><td>31.69</td></tr><tr><td>ResNet152+LSTM</td><td>31.93</td><td>51.76</td><td>24.89</td><td>3.25</td><td>28.82</td></tr></table> <table><tr><td>Video ID</td><td>81955</td></tr><tr><td>Action Group</td><td>Holding [something]</td></tr><tr><td>Action Category</td><td>Holding [something] in front of [something]</td></tr><tr><td>Somethings</td><td>“a blue plastic cap”, “a men’s short sleeve shirt”</td></tr><tr><td>Simplified somethings</td><td>“cap”, “shirt”</td></tr><tr><td>Simplified-object Caption</td><td>Holding cap in front of shirt</td></tr><tr><td>Full Caption</td><td>Holding a blue plastic cap in front of a men’s short sleeve shirt</td></tr></table> Table 5: An example annotation for a Something-Something video. ![](images/5_0.jpg) <center>Figure 3: Captioning examples: </center> Model Outputs: [Piling coins up], [Removing mug, revealing cup behind]. Ground Truths: [Stacking coins], [Removing cup, revealing little cup behind]. Training settings. In all our experiments we use a frame rate of 12 fps. During training we randomly pick 48 consecutive frames. For videos with fewer than 48 frames, we replicate the first and last frames to achieve the intended length. We resize the frames to \(128\times 128\), and then use random cropping of size \(96\times 96\). For validation and testing, we use \(96\times 96\) center cropping. We optimize all models using Adam, with an initial learning rate of 0.001. While the captioning task theoretically entails action classification, we found that our two-channel networks optimized on the pure captioning task do not perform as well as models trained jointly on classification and captioning (see Table 7). By coarsely tuning the hyper-parameter \(\lambda\) empirically, we found \(\lambda = 0.1\) to work well, and fix it at this value for the captioning experiments below. More specifically, we first train with a pure classification loss, by setting \(\lambda = 1\), and subsequently introduce the captioning loss by gradually decreasing \(\lambda\) to 0.1. ## 6 TRANSFER LEARNING One astonishing property of neural networks is their ability to learn representations that transfer well to other tasks (e.g., Donahue et al. (2013); Sharif Razavian et al. (2014)). A distinguishing feature of the ImageNet task, which likely contributes to its potential for transfer learning, is the dataset size and the variety of fine-grained discriminations required. In what follows we explore transfer learning performance as a function of source task granularity. We introduce 20bn-kitchenware, a few-shot video classification dataset with 390 videos over 13 action categories (see Fig. 4). It contains video clips, roughly 4 seconds long, of a kitchen utensil being manipulated, and was designed to capture fine-grained actions with subtle differences. For each utensil \(X\in \{\mathrm{fork},\mathrm{spoon},\mathrm{knife},\mathrm{tongs}\}\), the target label belongs to one of 3 actions: "Using \(X\)", "Pretending to use \(X\)", or "Trying but failing to use \(X\)". In addition to these 12 action categories, we also include a fall-back class labeled "Doing other things". We further encourage the model to pay attention to visual details by including unused 'negative' objects. 
The last row of Fig. 4 shows one example; the target label indicates a manipulation of tongs, but the clip also contains a spoon with an egg in it. Given the limited amount of data available for training, the action granularity, and the presence of negative objects, we hypothesize that only models that have some understanding of physical world properties will perform well. We will release 20bn-kitchenware upon publication of this paper. <--- Page Split ---> Table 6: Performance of our two-channel models with different sizes of channel features on simplified objects. For this task we use \(\lambda = 0.1\). The maximum sequence length is 14. <table><tr><td>Models</td><td>BLEU@4</td><td>ROUGE-L</td><td>METEOR</td><td>Exact-Match Accuracy</td><td>Classification Accuracy</td></tr><tr><td>M(256-0)</td><td>22.75</td><td>44.54</td><td>22.40</td><td>8.46</td><td>50.64</td></tr><tr><td>M(512-0)</td><td>23.28</td><td>45.29</td><td>22.75</td><td>8.47</td><td>50.96</td></tr><tr><td>M(384-128)</td><td>23.02</td><td>44.86</td><td>22.58</td><td>8.53</td><td>50.73</td></tr><tr><td>M(256-256)</td><td>23.04</td><td>44.89</td><td>22.60</td><td>8.63</td><td>51.38</td></tr><tr><td>M(128-384)</td><td>22.76</td><td>44.40</td><td>22.39</td><td>8.33</td><td>50.04</td></tr></table> <table><tr><td>Models</td><td>Classification Accuracy</td><td>Exact-Match Accuracy</td></tr><tr><td>λ = 0</td><td>39.78</td><td>5.96</td></tr><tr><td>λ = 0.1</td><td>51.32</td><td>8.63</td></tr></table> Table 7: Comparison of models trained on the pure captioning task vs. jointly on captioning and classification. Results are shown for captioning with simplified object placeholders. The test classification accuracy for the pure captioning model was obtained by freezing the video encoder and training a linear regressor on top of the penultimate features. ![](images/6_0.jpg) <center>Figure 4: 20bn-kitchenware samples: Using a knife to cut something (left), Trying but failing to pick something up with tongs (right). </center> ### 6.1 PROPOSED BENCHMARK Our transfer learning experiments consider four two-channel models that have been respectively pre-trained on coarse-grained labels (classification on action groups), on fine-grained labels (classification on 174 action categories), on simplified captions (captioning on fine-grained action categories expanded with a single object descriptor), and on template captions (captioning on fine-grained action categories expanded with full object descriptors). We also include two neural nets pre-trained on other datasets: a VGG16 network pre-trained on ImageNet, and an Inflated-ResNet34 pre-trained on Kinetics<sup>3</sup>. The overall training procedure remains the same for all models. We freeze each pre-trained model and then train a neural net on top of the extracted penultimate features. Independent of the architecture used, we use the pre-trained model to produce 12 feature vectors per second. To achieve this, where necessary we split the input video into smaller clips and apply the pre-trained network on each clip individually<sup>4</sup>. In the simplest case, we pass the obtained features through a logistic regressor and average predictions over time. We also report results for which we classify the pre-trained features using an MLP with 512 hidden units, as well as a single bidirectional LSTM layer with 128 hidden states. This allows the network to perform some temporal analysis on the target domain. A minimal sketch of this protocol is given below. 
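The sketch below illustrates this frozen-feature protocol, assuming PyTorch; `pretrained_encoder`, the clip layout, and the feature dimension are placeholders, since the exact interface depends on which pre-trained model is being evaluated.

```python
# Minimal sketch of the transfer protocol (PyTorch assumed): freeze a
# pre-trained encoder, extract per-clip features, train a small head on top.
# `pretrained_encoder` and the feature size D are placeholders.
import torch
import torch.nn as nn

@torch.no_grad()
def extract_features(pretrained_encoder, clips):
    # clips: (B, N_clips, 3, T, H, W); one feature vector per clip, giving
    # roughly 12 vectors per second of video, as described above.
    pretrained_encoder.eval()
    b, n = clips.shape[:2]
    feats = pretrained_encoder(clips.flatten(0, 1))  # (B * N_clips, D)
    return feats.view(b, n, -1)

class BiLSTMHead(nn.Module):
    # The strongest head in the comparison: a single bidirectional LSTM
    # with 128 hidden units over the frozen feature sequence.
    def __init__(self, feat_dim, num_classes=13, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, feats):               # feats: (B, N_clips, D)
        out, _ = self.lstm(feats)
        return self.fc(out.mean(dim=1))     # logits over the 13 categories

# For the logistic-regression variant, replace BiLSTMHead with a single
# nn.Linear applied per clip, then average the per-clip predictions over time.
```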
#### 6.1.1 OBSERVATIONS For each pretrained model and classifier, we evaluate 1-shot, 5-shot and 10-shot performance, averaging scores over 10 runs. Fig. 5 shows the average scores with \(95\%\) confidence intervals. The most noticeable findings are:
1. Logistic Regression vs MLP vs BiLSTM: Using a recurrent network yields better performance.
2. Something-Something features vs others: Our models pre-trained on Something-Something outperform the external models. This is not surprising given the target domain; 20bn-kitchenware samples are, by design, closer to Something-Something samples than to ImageNet or Kinetics ones. It is surprising, however, that VGG16 features perform better on 20bn-kitchenware than Kinetics features.
<--- Page Split ---> <table><tr><td>Models</td><td>BLEU@4</td><td>ROUGE-L</td><td>METEOR</td><td>Exact-Match Accuracy</td><td>Classification Accuracy</td></tr><tr><td>M(256-0)</td><td>16.87</td><td>40.03</td><td>19.13</td><td>3.33</td><td>50.48</td></tr><tr><td>M(512-0)</td><td>16.92</td><td>40.54</td><td>19.26</td><td>3.56</td><td>49.81</td></tr><tr><td>M(384-128)</td><td>17.99</td><td>41.82</td><td>20.03</td><td>3.80</td><td>50.92</td></tr><tr><td>M(256-256)</td><td>17.61</td><td>41.28</td><td>19.69</td><td>3.76</td><td>50.56</td></tr><tr><td>M(128-384)</td><td>16.80</td><td>39.98</td><td>19.11</td><td>3.61</td><td>49.24</td></tr></table> Table 8: Performance of captioning models with different sizes of channel features on full object placeholders. For this task we use \(\lambda = 0.1\). The maximum sequence length is 14. ![](images/7_0.jpg) <center>Figure 5: 20bn-kitchenware transfer learning results: averaged scores obtained using a VGG16, an Inflated ResNet34, as well as two-channel models trained on the four aforementioned tasks. We report results using 1 training sample per class, 5 training samples per class, or the full training set. </center>
3. Effect of action granularity: Fig. 5 supports the contention that training on fine-grained tasks yields better features. The best model on this benchmark is the one trained jointly on full captions and action categories. The only exception is the model trained on pure captioning.
## 7 CONCLUSIONS Pre-training neural networks on large labeled datasets has become a driving force in deep learning applications. Some might argue that it may be considered a serious competitor to unsupervised learning as a means to generate universal features for the visual world. Ever since ImageNet became popular as a generic feature extractor, one hypothesis has been that it is the dataset size, the amount of detail, and the variety of labels that drive a network's capability to learn useful features. To the degree that this hypothesis is true, generating visual features capable of transfer learning should involve source tasks that (i) are fine-grained and complex, and (ii) involve video rather than still images, because video is a much more fertile domain for defining complex tasks that represent aspects of the physical world. This paper provides further evidence for that hypothesis, showing that task granularity has a strong influence on the quality of the learned features. We also show that captioning, which to the best of our knowledge has been used only as a target task in transfer learning, can be a powerful source task. Our work suggests that one gets substantial leverage by utilizing ever more fine-grained recognition tasks, represented in the form of captions, possibly in combination with question-answering. 
Unlike the current trend of engineering neural networks to perform bounding box generation, semantic segmentation, or tracking, the appeal of fine-grained textual labels is that they provide a simple, homogeneous interface. More importantly, they may provide "just enough" localization and tracking capability to solve a wide variety of important tasks, without allocating valuable compute resources to satisfy intermediate goals at an accuracy that these tasks may not actually require. <--- Page Split ---> ## APPENDIX In this supplementary document, we provide:
- Qualitative examples of our classification and captioning results.
- Visualizations of our classification and captioning models using Grad-CAM.
- The full list of action categories in 20bn-kitchenware.
## QUALITATIVE EXAMPLES OF CLASSIFICATION Here we provide video examples and their ground truth action categories, along with model predictions for each. We use our M(256-256) model, trained with \(\lambda = 0.1\). Interestingly, even when the predicted actions are incorrect, e.g. row 4 in Figure 6, they are nevertheless usually quite sensible. ![](images/10_0.jpg) <center>Figure 6: Ground truth and model prediction for classification examples. </center> <--- Page Split ---> ## QUALITATIVE EXAMPLES OF CAPTIONING Below are examples of videos, accompanied by their ground truth captions and the captions generated by the model. We again use model M(256-256), trained jointly for classification and captioning (with \(\lambda = 0.1\)). ![](images/11_0.jpg) (a) Ground Truth: Touching (without moving) the head of a toy. (b) Model output: Poking a stuffed animal so lightly that it doesn't or almost doesn't move. ![](images/11_1.jpg) (c) Ground Truth: Pushing duster with white coloured pen. (d) Model output: Pushing phone with pen. ![](images/11_2.jpg) (e) Ground Truth: Plugging a charger into a phone. (f) Model output: Plugging charger into phone. ![](images/11_3.jpg) (g) Ground Truth: Piling bowl up. (h) Model output: Stacking bowls. ![](images/11_4.jpg) (i) Ground Truth: Removing cup, revealing little cup behind. (j) Model output: Removing mug, revealing cup behind. ![](images/11_5.jpg) (k) Ground Truth: Hitting cup with spoon. (l) Model output: Hitting mug with spoon. <--- Page Split ---> ![](images/12_0.jpg) (a) Ground Truth: Stacking 4 coins. (b) Model output: Piling coins up. ![](images/12_1.jpg) (c) Ground Truth: Taking toffee eclairs from jar. (d) Model output: Taking battery out of container. ![](images/12_2.jpg) (e) Ground Truth: Rolling paper towels on a flat surface. (f) Model output: Letting bottle roll along a flat surface. ![](images/12_3.jpg) (g) Ground Truth: Pretending to put nail polish into jar. (h) Model output: Pretending to put bottle into container. ![](images/12_4.jpg) (i) Ground Truth: Letting toy truck roll along a flat surface. (j) Model output: Pushing car from right to left. ![](images/12_5.jpg) (k) Ground Truth: Lifting up one end of flower pot, then letting it drop down. (l) Model output: Lifting up one end of bucket, then letting it drop down. ![](images/12_6.jpg) (m) Ground Truth: Letting roll roll down a slanted surface. (n) Model output: Letting spray can roll down a slanted surface. ![](images/12_7.jpg) (o) Ground Truth: Lifting plate with cutlery on it. (p) Model output: Lifting plate with spoon on it. <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 9: Ground truth captions and model outputs for video examples. 
</center> <--- Page Split ---> ## VISUALIZATION OF CLASSIFICATION MODEL WITH GRAD-CAM To visualize regularities learned from data, we extracted temporally-sensitive saliency maps using Grad-CAM (Selvaraju et al. (2016)) for both the classification and captioning tasks. To this end we extended the Grad-CAM implementation for video processing. Figure 10 shows saliency maps of examples from Something-Something obtained with model M(256-0) trained on fine-grained action categories, with \(\lambda = 1\) (i.e., the pure classification task). ![](images/14_0.jpg) <center>Figure 10: Grad-CAM for M(256-0) on video examples predicted correctly during fine-grained action classification. We can see that the model focuses on different parts of different frames in the video in order to make a prediction. </center> ## VISUALIZATION OF CAPTIONING MODEL USING GRAD-CAM To get saliency maps during the captioning process, we calculate Grad-CAM once for each token, for which different regions of the video are highlighted. Figures 11-13 show saliency maps for the captioning model, jointly trained with \(\lambda = 0.1\). Notice how the attentional focus of the model changes qualitatively as we perform Grad-CAM for different tokens in the target caption. <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 11: Grad-CAM on video example with ground truth caption Pretending to pick mouse up. The model focuses on hand motion in the beginning and end of the video for the token "Up". </center> <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 12: Grad-CAM on video example with ground truth caption Moving toy closer to toy. We can see that the model focuses on the gap between the toys when using the "Moving" token. It also looks at both toy objects when using the token "Closer". </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 13: Grad-CAM on video example with ground truth caption Bottle being deflected from ball during the captioning process. The model focuses on the collision between bottle and ball when using the token "Deflected". </center> <--- Page Split ---> ## 20BN-KITCHENWARE ACTION CATEGORIES Table 9 gives the full list of 20bn-kitchenware action categories. <table><tr><td>Action categories</td></tr><tr><td>Using a fork to pick something up</td></tr><tr><td>Pretending to use a fork to pick something up</td></tr><tr><td>Trying but failing to pick something up with a fork</td></tr><tr><td>Using a spoon to pick something up</td></tr><tr><td>Pretending to use a spoon to pick something up</td></tr><tr><td>Trying but failing to pick something up with a spoon</td></tr><tr><td>Using a knife to cut something</td></tr><tr><td>Pretending to use a knife to cut something</td></tr><tr><td>Trying but failing to cut something with a knife</td></tr><tr><td>Using tongs to pick something up</td></tr><tr><td>Pretending to use tongs to pick something up</td></tr><tr><td>Trying but failing to pick something up with tongs</td></tr><tr><td>Doing other things</td></tr></table> Table 9: The 13 action categories represented in 20bn-kitchenware. <--- Page Split --->
reject
Reject
5
ICLR_2019_paper_0683
iclr
2,019
# VARIATIONAL DISCRIMINATOR BOTTLENECK: IMPROVING IMITATION LEARNING, INVERSE RL, AND GANS BY CONSTRAINING INFORMATION FLOW Xue Bin Peng & Angjoo Kanazawa & Sam Toyer & Pieter Abbeel & Sergey Levine University of California, Berkeley {xbpeng, kanazawa, sdt, pabbeel, svlevine}@berkeley.edu ## ABSTRACT Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re- optimized in new settings. Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods. (Video<sup>1</sup>) ## 1 INTRODUCTION Adversarial learning methods provide a promising approach to modeling distributions over high- dimensional data with complex internal correlation structures. These methods generally use a discriminator to supervise the training of a generator in order to produce samples that are indistinguishable from the data. A particular instantiation is generative adversarial networks, which can be used for high- fidelity generation of images (Goodfellow et al., 2014; Karras et al., 2017) and other high- dimensional data (Vondrick et al., 2016; Xie et al., 2018; Donahue et al., 2018). Adversarial methods can also be used to learn reward functions in the framework of inverse reinforcement learning (Finn et al., 2016a; Fu et al., 2017), or to directly imitate demonstrations (Ho & Ermon, 2016). However, they suffer from major optimization challenges, one of which is balancing the performance of the generator and discriminator. A discriminator that achieves very high accuracy can produce relatively uninformative gradients, but a weak discriminator can also hamper the generator's ability to learn. These challenges have led to widespread interest in a variety of stabilization methods for adversarial learning algorithms (Arjovsky et al., 2017; Kodali et al., 2017; Berthelot et al., 2017). In this work, we propose a simple regularization technique for adversarial learning, which constrains the information flow from the inputs to the discriminator using a variational approximation to the information bottleneck. 
By enforcing a constraint on the mutual information between the input observations and the discriminator's internal representation, we can encourage the discriminator to learn a representation that has heavy overlap between the data and the generator's distribution, thereby effectively modulating the discriminator's accuracy and maintaining useful and informative <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Our method is general and can be applied to a broad range of adversarial learning tasks. Left: Motion imitation with adversarial imitation learning. Middle: Image generation. Right: Learning transferable reward functions through adversarial inverse reinforcement learning. </center> gradients for the generator. Our approach to stabilizing adversarial learning can be viewed as an adaptive variant of instance noise (Salimans et al., 2016; Sonderby et al., 2016; Arjovsky & Bottou, 2017). However, we show that the adaptive nature of this method is critical. Constraining the mutual information between the discriminator's internal representation and the input allows the regularizer to directly limit the discriminator's accuracy, which automates the choice of noise magnitude and applies this noise to a compressed representation of the input that is specifically optimized to model the most discerning differences between the generator and data distributions. The main contribution of this work is the variational discriminator bottleneck (VDB), an adaptive stochastic regularization method for adversarial learning that substantially improves performance across a range of different application domains, examples of which are available in Figure 1. Our method can be easily applied to a variety of tasks and architectures. First, we evaluate our method on a suite of challenging imitation tasks, including learning highly acrobatic skills from mocap data with a simulated humanoid character. Our method also enables characters to learn dynamic continuous control skills directly from raw video demonstrations, and drastically improves upon previous work that uses adversarial imitation learning. We further evaluate the effectiveness of the technique for inverse reinforcement learning, which recovers a reward function from demonstrations in order to train future policies. Finally, we apply our framework to image generation using generative adversarial networks, where employing VDB improves the performance in many cases. ## 2 RELATED WORK Recent years have seen an explosion of adversarial learning techniques, spurred by the success of generative adversarial networks (GANs) (Goodfellow et al., 2014). A GAN framework is commonly composed of a discriminator and a generator, where the discriminator's objective is to classify samples as real or fake, while the generator's objective is to produce samples that fool the discriminator. Similar frameworks have also been proposed for inverse reinforcement learning (IRL) (Finn et al., 2016b) and imitation learning (Ho & Ermon, 2016). The training of adversarial models can be extremely unstable, with one of the most prevalent challenges being balancing the interplay between the discriminator and the generator (Berthelot et al., 2017). The discriminator can often overpower the generator, easily differentiating between real and fake samples, thus providing the generator with uninformative gradients for improvement (Che et al., 2016). Alternative loss functions have been proposed to mitigate this problem (Mao et al., 2016; Zhao et al., 2016; Arjovsky et al., 2017). 
Regularizers have been incorporated to improve stability and convergence, such as gradient penalties (Kodali et al., 2017; Gulrajani et al., 2017a; Mescheder et al., 2018), reconstruction loss (Che et al., 2016), and a myriad of other heuristics (Sonderby et al., 2016; Salimans et al., 2016; Arjovsky & Bottou, 2017; Berthelot et al., 2017). Task- specific architectural designs can also substantially improve performance (Radford et al., 2015; Karras et al., 2017). Similarly, our method also aims to regularize the discriminator in order to improve the feedback provided to the generator. But instead of explicit regularization of gradients or architecture- specific constraints, we apply a general information bottleneck to the discriminator, which previous works have shown to encourage networks to ignore irrelevant cues (Achille & Soatto, 2017). We hypothesize that this then allows the generator to focus on improving the most discerning differences between real and fake samples. Adversarial techniques have also been applied to inverse reinforcement learning (Fu et al., 2017), where a reward function is recovered from demonstrations, which can then be used to train policies to reproduce a desired skill. Finn et al. (2016a) showed an equivalence between maximum entropy IRL and GANs. Similar techniques have been developed for adversarial imitation learning (Ho & Ermon, 2016; Merel et al., 2017), where agents learn to imitate demonstrations without explicitly recovering <--- Page Split ---> a reward function. One advantage of adversarial methods is that by leveraging a discriminator in place of a reward function, they can be applied to imitate skills where reward functions can be difficult to engineer. However, the performance of policies trained through adversarial methods still falls short of those produced by manually designed reward functions, when such reward functions are available (Rajeswaran et al., 2017; Peng et al., 2018). We show that our method can significantly improve upon previous works that use adversarial techniques, and produces results of comparable quality to those from state- of- the- art approaches that utilize manually engineered reward functions. Our variational discriminator bottleneck is based on the information bottleneck (Tishby & Zaslavsky, 2015), a technique for regularizing internal representations to minimize the mutual information with the input. Intuitively, a compressed representation can improve generalization by ignoring irrelevant distractors present in the original input. The information bottleneck can be instantiated in practical deep models by leveraging a variational bound and the reparameterization trick, inspired by a similar approach in variational autoencoders (VAE) (Kingma & Welling, 2013). The resulting variational information bottleneck approximates this compression effect in deep networks (Alemi et al., 2016; Achille & Soatto, 2017). A similar bottleneck has also been applied to learn disentangled representations (Higgins et al., 2017). Building on the success of VAEs and GANs, a number of efforts have been made to combine the two. Makhzani et al. (2016) used adversarial discriminators during the training of VAEs to encourage the marginal distribution of the latent encoding to be similar to the prior distribution, similar techniques include Mescheder et al. (2017) and Chen et al. (2018). Conversely, Larsen et al. (2016) modeled the generator of a GAN using a VAE. Zhao et al. 
(2016) used an autoencoder instead of a VAE to model the discriminator, but did not enforce an information bottleneck on the encoding. While instance noise is widely used in modern architectures (Salimans et al., 2016; Sonderby et al., 2016; Arjovsky & Bottou, 2017), we show that explicitly enforcing an information bottleneck leads to improved performance over simply adding noise for a variety of applications. ## 3 PRELIMINARIES In this section, we provide a review of the variational information bottleneck proposed by Alemi et al. (2016) in the context of supervised learning. Our variational discriminator bottleneck is based on the same principle, and can be instantiated in the context of GANs, inverse RL, and imitation learning. Given a dataset \(\{\mathbf{x}_i,\mathbf{y}_i\}\), with features \(\mathbf{x}_i\) and labels \(\mathbf{y}_i\), the standard maximum likelihood estimate \(q(\mathbf{y}_i|\mathbf{x}_i)\) can be determined according to \[\min_{q}\mathbb{E}_{\mathbf{x},\mathbf{y}\sim p(\mathbf{x},\mathbf{y})}\left[-\log q(\mathbf{y}|\mathbf{x})\right]. \quad (1)\] Unfortunately, this estimate is prone to overfitting, and the resulting model can often exploit idiosyncrasies in the data (Krizhevsky et al., 2012; Srivastava et al., 2014). Alemi et al. (2016) proposed regularizing the model using an information bottleneck to encourage the model to focus only on the most discriminative features. The bottleneck can be incorporated by first introducing an encoder \(E(\mathbf{z}|\mathbf{x})\) that maps the features \(\mathbf{x}\) to a latent distribution over \(Z\), and then enforcing an upper bound \(I_{c}\) on the mutual information between the encoding and the original features, \(I(X,Z)\). This results in the following regularized objective \(J(q,E)\): \[\begin{array}{r l} & {J(q,E) = \min_{q,E}\mathbb{E}_{\mathbf{x},\mathbf{y}\sim p(\mathbf{x},\mathbf{y})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log q(\mathbf{y}|\mathbf{z})\right]\right]}\\ & {\qquad \mathrm{s.t.}\quad I(X,Z)\leq I_{c}.} \end{array} \quad (2)\] Note that the model \(q(\mathbf{y}|\mathbf{z})\) now maps samples from the latent distribution \(\mathbf{z}\) to the label \(\mathbf{y}\). The mutual information is defined according to \[I(X,Z) = \int p(\mathbf{x},\mathbf{z})\log \frac{p(\mathbf{x},\mathbf{z})}{p(\mathbf{x})p(\mathbf{z})}\, d\mathbf{x}\, d\mathbf{z} = \int p(\mathbf{x})E(\mathbf{z}|\mathbf{x})\log \frac{E(\mathbf{z}|\mathbf{x})}{p(\mathbf{z})}\, d\mathbf{x}\, d\mathbf{z}, \quad (3)\] where \(p(\mathbf{x})\) is the distribution given by the dataset. Computing the marginal distribution \(p(\mathbf{z}) = \int E(\mathbf{z}|\mathbf{x}) p(\mathbf{x})\, d\mathbf{x}\) can be challenging. Instead, a variational lower bound can be obtained by using an approximation \(r(\mathbf{z})\) of the marginal. Since \(\mathrm{KL}\left[p(\mathbf{z})||r(\mathbf{z})\right]\geq 0\), we have \(\int p(\mathbf{z})\log p(\mathbf{z})\, d\mathbf{z}\geq \int p(\mathbf{z})\log r(\mathbf{z})\, d\mathbf{z}\), so an upper bound on \(I(X,Z)\) can be obtained via the KL divergence, \[I(X,Z)\leq \int p(\mathbf{x})E(\mathbf{z}|\mathbf{x})\log \frac{E(\mathbf{z}|\mathbf{x})}{r(\mathbf{z})}\, d\mathbf{x}\, d\mathbf{z} = \mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right]. \quad (4)\] <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: Left: Overview of the variational discriminator bottleneck. 
The encoder first maps samples \(\mathbf{x}\) to a latent distribution \(E(\mathbf{z}|\mathbf{x})\) . The discriminator is then trained to classify samples \(\mathbf{z}\) from the latent distribution. An information bottleneck \(I(X,Z)\leq I_{c}\) is applied to \(Z\) . Right: Visualization of discriminators trained to differentiate two Gaussians with different KL bounds \(I_{c}\) . </center> This provides an upper bound on the regularized objective \(\tilde{J} (q,E)\geq J(q,E)\) \[\begin{array}{r l} & {\tilde{J} (q,E) = \underset {q,E}{\min}\quad \mathbb{E}_{\mathbf{x},\mathbf{y}\sim p(\mathbf{x},\mathbf{y})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log q(\mathbf{y}|\mathbf{z})\right]\right]}\\ & {\qquad \mathrm{s.t.}\quad \mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right]\leq I_{c}.} \end{array} \quad (5)\] To solve this problem, the constraint can be subsumed into the objective with a coefficient \(\beta\) \[\min_{q,E}\mathbb{E}_{\mathbf{x},\mathbf{y}\sim p(\mathbf{x},\mathbf{y})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log q(\mathbf{y}|\mathbf{z})\right]\right] + \beta \left(\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right] - I_{c}\right). \quad (6)\] Alemi et al. (2016) evaluated the method on supervised learning tasks, and showed that models trained with a VIB can be less prone to overfitting and more robust to adversarial examples. ## 4 VARIATIONAL DISCRIMINATOR BOTTLENECK To outline our method, we first consider a standard GAN framework consisting of a discriminator \(D\) and a generator \(G\) , where the goal of the discriminator is to distinguish between samples from the target distribution \(p^{*}(\mathbf{x})\) and samples from the generator \(G(\mathbf{x})\) , \[\max_{G}\min_{D}\quad \mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[-\log \left(D(\mathbf{x})\right)\right] + \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[-\log \left(1 - D(\mathbf{x})\right)\right].\] We incorporate a variational information bottleneck by introducing an encoder \(E\) into the discriminator that maps a sample \(\mathbf{x}\) to a stochastic encoding \(\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})\) , and then apply a constraint \(I_{c}\) on the mutual information \(I(X,Z)\) between the original features and the encoding. \(D\) is then trained to classify samples drawn from the encoder distribution. A schematic illustration of the framework is available in Figure 2. The regularized objective \(J(D,E)\) for the discriminator is given by \[\begin{array}{r l} & {J(D,E) = \underset {D,E}{\min}\quad \mathbb{E}_{x\sim p^{*}(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(D(\mathbf{z})\right)\right]\right] + \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(1 - D(\mathbf{z})\right)\right]\right]}\\ & {\qquad \mathrm{s.t.}\quad \mathbb{E}_{\mathbf{x}\sim \tilde{p} (\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right]\leq I_{c},} \end{array} \quad (7)\] with \(\tilde{p} = \frac{1}{2} p^{*} + \frac{1}{2} G\) being a mixture of the target distribution and the generator. We refer to this regularizer as the variational discriminator bottleneck (VDB). 
To optimize this objective, we can introduce a Lagrange multiplier \(\beta\) , \[\begin{array}{r l} & {J(D,E) = \underset {D,E}{\min}\underset {\beta \geq 0}{\max}\quad \mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(D(\mathbf{z})\right)\right]\right] + \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(1 - D(\mathbf{z})\right)\right]\right]}\\ & {\qquad \qquad +\beta \left(\mathbb{E}_{\mathbf{x}\sim \tilde{p} (\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right] - I_{c}\right).} \end{array} \quad (8)\] As we will discuss in Section 4.1 and demonstrate in our experiments, enforcing a specific mutual information budget between \(\mathbf{x}\) and \(\mathbf{z}\) is critical for good performance. We therefore adaptively update \(\beta\) via dual gradient descent to enforce a specific constraint \(I_{c}\) on the mutual information, \[\begin{array}{r l} & {D,E\leftarrow \underset {D,E}{\arg \min}\mathcal{L}(D,E,\beta)}\\ & {\beta \leftarrow \max \left(0,\beta +\alpha_{\beta}\left(\mathbb{E}_{\mathbf{x}\sim \tilde{p} (\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right] - I_{c}\right)\right),} \end{array} \quad (9)\] <--- Page Split ---> where \(\mathcal{L}(D,E,\beta)\) is the Lagrangian \[\begin{array}{r l} & {\mathcal{L}(D,E,\beta) = \mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(D(\mathbf{z})\right)\right]\right] + \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(1 - D(\mathbf{z})\right)\right]\right]}\\ & {\qquad +\beta \left(\mathbb{E}_{\mathbf{x}\sim \hat{p}(\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right] - I_{c}\right),} \end{array} \quad (10)\] and \(\alpha_{\beta}\) is the stepsize for the dual variable in dual gradient descent (Boyd & Vandenberghe, 2004). In practice, we perform only one gradient step on \(D\) and \(E\) , followed by an update to \(\beta\) . We refer to a GAN that incorporates a VDB as a variational generative adversarial network (VGAN). In our experiments, the prior \(r(\mathbf{z}) = \mathcal{N}(0,I)\) is modeled with a standard Gaussian. The encoder \(E(\mathbf{z}|\mathbf{x}) = \mathcal{N}(\mu_{E}(\mathbf{x}),\Sigma_{E}(\mathbf{x}))\) models a Gaussian distribution in the latent variables \(Z\) , with mean \(\mu_{E}(\mathbf{x})\) and diagonal covariance matrix \(\Sigma_{E}(\mathbf{x})\) . When computing the KL loss, each batch of data contains an equal number of samples from \(p^{*}(x)\) and \(G(x)\) . We use a simplified objective for the generator, \[\max_{G}\mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[-\log \left(1 - D(\mu_{E}(\mathbf{x}))\right)\right]. \quad (11)\] where the KL penalty is excluded from the generator's objective. Instead of computing the expectation over \(Z\) , we found that approximating the expectation by evaluating \(D\) at the mean \(\mu_{E}(\mathbf{x})\) of the encoder's distribution was sufficient for our tasks. The discriminator is modeled with a single linear unit followed by a sigmoid \(D(\mathbf{z}) = \sigma (\mathbf{w}_{D}^{T}\mathbf{z} + \mathbf{b}_{D})\) , with weights \(\mathbf{w}_{D}\) and bias \(\mathbf{b}_{D}\) . 
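To make the procedure concrete, the following is a minimal sketch of one discriminator update with the dual update on \(\beta\) (Equations 8-9), assuming PyTorch; the multilayer encoder, the choice of \(I_c\), and the step sizes are illustrative, and the closed-form Gaussian KL is the standard expression for \(\mathrm{KL}[\mathcal{N}(\mu, \sigma^2)||\mathcal{N}(0, I)]\).

```python
# Minimal sketch of a VDB discriminator update (PyTorch assumed).
# Encoder width, I_c, and step sizes are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VDBDiscriminator(nn.Module):
    def __init__(self, x_dim, z_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logstd = nn.Linear(256, z_dim)
        self.d = nn.Linear(z_dim, 1)  # single linear unit; sigmoid folded into the loss

    def forward(self, x, sample=True):
        h = self.enc(x)
        mu, logstd = self.mu(h), self.logstd(h)
        # Reparameterized sample z ~ E(z|x); the generator is instead
        # scored at the mean mu_E(x), per Eq. 11.
        z = (mu + torch.randn_like(mu) * logstd.exp()) if sample else mu
        return self.d(z), mu, logstd

def kl_to_unit_gaussian(mu, logstd):
    # Closed-form KL[N(mu, diag(sigma^2)) || N(0, I)], summed over z-dims.
    return 0.5 * (mu.pow(2) + (2 * logstd).exp() - 2 * logstd - 1).sum(-1)

def discriminator_step(D, opt, beta, x_real, x_fake, i_c=0.5, alpha_beta=1e-5):
    # x_fake is assumed detached from the generator graph.
    logits_real, mu_r, ls_r = D(x_real)
    logits_fake, mu_f, ls_f = D(x_fake)
    bce = F.binary_cross_entropy_with_logits
    cls_loss = bce(logits_real, torch.ones_like(logits_real)) + \
               bce(logits_fake, torch.zeros_like(logits_fake))
    # KL averaged over the mixture of real and generated samples (Eq. 7).
    kl = 0.5 * (kl_to_unit_gaussian(mu_r, ls_r).mean() +
                kl_to_unit_gaussian(mu_f, ls_f).mean())
    loss = cls_loss + beta * (kl - i_c)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Dual gradient step on beta (Eq. 9) enforces the bottleneck I_c.
    beta = max(0.0, beta + alpha_beta * (kl.item() - i_c))
    return beta
```

As in Equation 9, each call performs a single gradient step on \(D\) and \(E\) followed by the \(\beta\) update; the generator's objective would then evaluate the discriminator at the encoder mean \(\mu_E(\mathbf{x})\), as in Equation 11.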
### 4.1 DISCUSSION AND ANALYSIS

To interpret the effects of the VDB, we consider the results presented by Arjovsky & Bottou (2017), which show that for two distributions with disjoint support, the optimal discriminator can perfectly classify all samples and its gradients will be zero almost everywhere. Thus, as the discriminator converges to the optimum, the gradients for the generator vanish accordingly. To address this issue, Arjovsky & Bottou (2017) proposed applying continuous noise to the discriminator inputs, thereby ensuring that the distributions have continuous support everywhere. In practice, if the original distributions are sufficiently distant from each other, the added noise will have negligible effects. As shown by Mescheder et al. (2017), the optimal choice for the variance of the noise to ensure convergence can be quite delicate. In our method, by first using a learned encoder to map the inputs to an embedding and then applying an information bottleneck on the embedding, we can dynamically adjust the variance of the noise such that the distributions not only share support in the embedding space, but also have significant overlap. Since the minimum amount of information required for binary classification is 1 bit, by selecting an information constraint \(I_{c}< 1\), the discriminator is prevented from perfectly differentiating between the distributions. To illustrate the effects of the VDB, we consider a simple task of training a discriminator to differentiate between two Gaussian distributions. Figure 2 visualizes the decision boundaries learned with different bounds \(I_{c}\) on the mutual information. Without a VDB, the discriminator learns a sharp decision boundary, resulting in vanishing gradients for much of the space. But as \(I_{c}\) decreases and the bound tightens, the decision boundary is smoothed, providing more informative gradients that can be leveraged by the generator. Taking this analysis further, we can extend Theorem 3.2 from Arjovsky & Bottou (2017) to analyze the VDB, and show that the gradient of the generator will be non-degenerate for a small enough constraint \(I_{c}\), under some additional simplifying assumptions. The result in Arjovsky & Bottou (2017) states that the gradient consists of vectors that point toward samples on the data manifold, multiplied by coefficients that depend on the noise. However, these coefficients may be arbitrarily small if the generated samples are far from real samples, and the noise is not large enough. This can still cause the generator gradient to vanish. In the case of the VDB, the constraint ensures that these coefficients are always bounded below. Due to space constraints, this result is presented in Appendix A.

### 4.2 VAIL: VARIATIONAL ADVERSARIAL IMITATION LEARNING

To extend the VDB to imitation learning, we start with the generative adversarial imitation learning (GAIL) framework (Ho & Ermon, 2016), where the discriminator's objective is to differentiate between the state distribution induced by a target policy \(\pi^{*}(\mathbf{s})\) and the state distribution of the agent's policy \(\pi (\mathbf{s})\), \[\max_{\pi}\min_{D}\mathbb{E}_{\mathbf{s}\sim \pi^{*}(\mathbf{s})}\left[-\log \left(D(\mathbf{s})\right)\right] + \mathbb{E}_{\mathbf{s}\sim \pi (\mathbf{s})}\left[-\log \left(1 - D(\mathbf{s})\right)\right].\]

![](images/5_0.jpg) <center>Figure 3: Simulated humanoid performing various skills. VAIL is able to closely imitate a broad range of skills from mocap data. </center>
The discriminator is trained to maximize the likelihood assigned to states from the target policy, while minimizing the likelihood assigned to states from the agent's policy. The discriminator also serves as the reward function for the agent, which encourages the policy to visit states that, to the discriminator, appear indistinguishable from the demonstrations. Similar to the GAN framework, we can incorporate a VDB into the discriminator, \[\begin{array}{r l} & {J(D,E) = \min_{D,E}\max_{\beta \geq 0}\mathbb{E}_{\mathbf{s}\sim \pi^{*}(\mathbf{s})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{s})}\left[-\log \left(D(\mathbf{z})\right)\right]\right] + \mathbb{E}_{\mathbf{s}\sim \pi (\mathbf{s})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{s})}\left[-\log \left(1 - D(\mathbf{z})\right)\right]\right]}\\ & {\qquad +\beta \left(\mathbb{E}_{\mathbf{s}\sim \hat{\pi}(\mathbf{s})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{s})||r(\mathbf{z})\right]\right] - I_{c}\right),} \end{array} \quad (12)\] where \(\hat{\pi} = \frac{1}{2}\pi^{*} + \frac{1}{2}\pi\) represents a mixture of the target policy and the agent's policy. The reward for \(\pi\) is then specified by the discriminator \(r_{t} = - \log \left(1 - D(\mu_{E}(\mathbf{s}))\right)\). We refer to this method as variational adversarial imitation learning (VAIL).

### 4.3 VAIRL: VARIATIONAL ADVERSARIAL INVERSE REINFORCEMENT LEARNING

The VDB can also be applied to adversarial inverse reinforcement learning (Fu et al., 2017) to yield a new algorithm which we call variational adversarial inverse reinforcement learning (VAIRL). AIRL operates in a similar manner to GAIL, but with a discriminator of the form \[D(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime}) = \frac{\exp\left(f(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\right)}{\exp\left(f(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\right) + \pi(\mathbf{a}|\mathbf{s})}, \quad (13)\] where \(f(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime}) = g(\mathbf{s},\mathbf{a}) + \gamma h(\mathbf{s}^{\prime}) - h(\mathbf{s})\), with \(g\) and \(h\) being learned functions. Under certain restrictions on the environment, Fu et al. show that if \(g(\mathbf{s},\mathbf{a})\) is defined to depend only on the current state \(\mathbf{s}\), the optimal \(g(\mathbf{s})\) recovers the expert's true reward function \(r^{*}(\mathbf{s})\) up to a constant \(g^{*}(\mathbf{s}) = r^{*}(\mathbf{s}) + \mathrm{const}\). In this case, the learned reward can be re-used to train policies in environments with different dynamics, and will yield the same policy as if the policy were trained under the expert's true reward. In contrast, GAIL's discriminator typically cannot be re-optimized in this way (Fu et al., 2017). In VAIRL, we introduce stochastic encoders \(E_{g}(\mathbf{z}_{g}|\mathbf{s})\) and \(E_{h}(\mathbf{z}_{h}|\mathbf{s})\), and modify \(g(\mathbf{z}_{g})\) and \(h(\mathbf{z}_{h})\) to be functions of the encoding. We can reformulate Equation 13 as \[D(\mathbf{s},\mathbf{a},\mathbf{z}) = \frac{\exp\left(f(\mathbf{z}_{g},\mathbf{z}_{h},\mathbf{z}_{h}^{\prime})\right)}{\exp\left(f(\mathbf{z}_{g},\mathbf{z}_{h},\mathbf{z}_{h}^{\prime})\right) + \pi(\mathbf{a}|\mathbf{s})},\] for \(\mathbf{z} = (\mathbf{z}_{g},\mathbf{z}_{h},\mathbf{z}_{h}^{\prime})\) and \(f(\mathbf{z}_{g},\mathbf{z}_{h},\mathbf{z}_{h}^{\prime}) = g(\mathbf{z}_{g}) + \gamma h(\mathbf{z}_{h}^{\prime}) - h(\mathbf{z}_{h})\).
We then obtain a modified objective of the form \[\begin{array}{r l} & {J(D,E) = \min_{D,E}\max_{\beta \geq 0}\quad \mathbb{E}_{\mathbf{s},\mathbf{s}^{\prime}\sim \pi^{*}(\mathbf{s},\mathbf{s}^{\prime})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{s},\mathbf{s}^{\prime})}\left[-\log \left(D(\mathbf{s},\mathbf{a},\mathbf{z})\right)\right]\right]}\\ & {\qquad +\mathbb{E}_{\mathbf{s},\mathbf{s}^{\prime}\sim \pi (\mathbf{s},\mathbf{s}^{\prime})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{s},\mathbf{s}^{\prime})}\left[-\log \left(1 - D(\mathbf{s},\mathbf{a},\mathbf{z})\right)\right]\right]}\\ & {\qquad +\beta \left(\mathbb{E}_{\mathbf{s},\mathbf{s}^{\prime}\sim \hat{\pi}(\mathbf{s},\mathbf{s}^{\prime})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{s},\mathbf{s}^{\prime})||r(\mathbf{z})\right]\right] - I_{c}\right),} \end{array}\] where \(\pi(\mathbf{s},\mathbf{s}^{\prime})\) denotes the joint distribution of successive states from a policy, and \(E(\mathbf{z}|\mathbf{s},\mathbf{s}^{\prime}) = E_{g}(\mathbf{z}_{g}|\mathbf{s})\cdot E_{h}(\mathbf{z}_{h}|\mathbf{s})\cdot E_{h}(\mathbf{z}_{h}^{\prime}|\mathbf{s}^{\prime})\).

![](images/6_0.jpg) <center>Figure 4: Learning curves comparing VAIL to other methods for motion imitation. Performance is measured using the average joint rotation error between the simulated character and the reference motion. Each method is evaluated with 3 random seeds. </center>

Table 1: Average joint rotation error (radians) on humanoid motion imitation tasks. VAIL outperforms the other methods for all skills evaluated, except for policies trained using the manually-designed reward function from Peng et al. (2018). <table><tr><td>Method</td><td>Backflip</td><td>Cartwheel</td><td>Dance</td><td>Run</td><td>Spinkick</td></tr><tr><td>BC</td><td>3.01</td><td>2.88</td><td>2.93</td><td>2.63</td><td>2.88</td></tr><tr><td>Merel et al., 2017</td><td>1.33 ± 0.03</td><td>1.47 ± 0.12</td><td>2.61 ± 0.30</td><td>0.52 ± 0.04</td><td>1.82 ± 0.35</td></tr><tr><td>GAIL</td><td>0.74 ± 0.15</td><td>0.84 ± 0.05</td><td>1.31 ± 0.16</td><td>0.17 ± 0.03</td><td>1.07 ± 0.03</td></tr><tr><td>GAIL - noise</td><td>0.42 ± 0.02</td><td>0.92 ± 0.07</td><td>0.96 ± 0.08</td><td>0.21 ± 0.05</td><td>0.95 ± 0.14</td></tr><tr><td>GAIL - noise z</td><td>0.67 ± 0.12</td><td>0.72 ± 0.04</td><td>1.14 ± 0.08</td><td>0.14 ± 0.03</td><td>0.64 ± 0.09</td></tr><tr><td>GAIL - GP</td><td>0.62 ± 0.09</td><td>0.69 ± 0.05</td><td>0.80 ± 0.32</td><td>0.12 ± 0.02</td><td>0.64 ± 0.04</td></tr><tr><td>VAIL (ours)</td><td>0.36 ± 0.13</td><td>0.40 ± 0.08</td><td>0.40 ± 0.21</td><td>0.13 ± 0.01</td><td>0.34 ± 0.05</td></tr><tr><td>VAIL - GP (ours)</td><td>0.46 ± 0.17</td><td>0.31 ± 0.02</td><td>0.15 ± 0.01</td><td>0.10 ± 0.01</td><td>0.31 ± 0.02</td></tr><tr><td>Peng et al., 2018</td><td>0.26</td><td>0.21</td><td>0.20</td><td>0.14</td><td>0.19</td></tr></table>

## 5 EXPERIMENTS

We evaluate our method on adversarial learning problems in imitation learning, inverse reinforcement learning, and image generation. In the case of imitation learning, we show that the VDB enables agents to learn complex motion skills from a single demonstration, including visual demonstrations provided in the form of video clips. We also show that the VDB improves the performance of inverse RL methods. Inverse RL aims to reconstruct a reward function from a set of demonstrations, which can then be used to perform the task in new environments, in contrast to imitation learning, which aims to recover a policy directly. Our method is also not limited to control tasks, and we demonstrate its effectiveness for unconditional image generation.
### 5.1 VAIL: VARIATIONAL ADVERSARIAL IMITATION LEARNING

The goal of the motion imitation tasks is to train a simulated character to mimic demonstrations provided by mocap clips recorded from human actors. Each mocap clip provides a sequence of target states \(\{\mathbf{s}_{0}^{*},\mathbf{s}_{1}^{*},\dots,\mathbf{s}_{T}^{*}\}\) that the character should track at each timestep. We use a similar experimental setup as Peng et al. (2018), with a 34 degrees-of-freedom humanoid character. We found that the discriminator architecture can greatly affect the performance on complex skills. The particular architecture we employ differs substantially from those used in prior work (Merel et al., 2017), details of which are available in Appendix C. The encoding \(Z\) is 128D and an information constraint of \(I_{c} = 0.5\) is applied for all skills, with a dual stepsize of \(\alpha_{\beta} = 10^{-5}\). All policies are trained using PPO (Schulman et al., 2017). The motions learned by the policies are best seen in the supplementary video. Snapshots of the character's motions are shown in Figure 3. Each skill is learned from a single demonstration. VAIL is able to closely reproduce a variety of skills, including those that involve highly dynamic flips and complex contacts. We compare VAIL to a number of other techniques, including state-only GAIL (Ho & Ermon, 2016), GAIL with instance noise applied to the discriminator inputs (GAIL - noise), GAIL with instance noise applied to the last hidden layer (GAIL - noise z), and GAIL with a gradient penalty applied to the discriminator (GAIL - GP) (Mescheder et al., 2018). Since the VDB helps to prevent vanishing gradients, while GP mitigates exploding gradients, the two techniques can be seen as complementary. Therefore, we also train a model that combines both VAIL and GP (VAIL - GP).

![](images/7_0.jpg) <center>Figure 5: Left: Snapshots of the video demonstration and the simulated character trained with VAIL. The policy learns to run by directly imitating the video. Right: Saliency maps that visualize the magnitude of the discriminator's gradient with respect to all channels of the RGB input images from both the demonstration and the simulation. Pixel values are normalized between \([0,1]\). </center>

![](images/7_1.jpg) <center>Figure 6: Left: Learning curves comparing policies for the video imitation task trained using a pixel-wise loss as the reward, GAIL, and VAIL. Only VAIL successfully learns to run from a video demonstration. Middle: Effect of training with fixed values of \(\beta\) and adaptive \(\beta\) ( \(I_{c} = 0.5\) ). Right: KL loss over the course of training with adaptive \(\beta\). The dual gradient descent update for \(\beta\) effectively enforces the VDB constraint \(I_{c}\). </center>

Implementation details for combining the VDB and GP are available in Appendix B. Learning curves for the various methods are shown in Figure 10 and Table 1 summarizes the performance of the final policies. Performance is measured in terms of the average joint rotation error between the simulated character and the reference motion. We also include a reimplementation of the method described by Merel et al. (2017). For the purpose of our experiments, GAIL denotes policies trained using our particular architecture but without a VDB, and Merel et al. (2017) denotes policies trained using an architecture that closely mirrors those from previous work.
Furthermore, we include comparisons to policies trained using the handcrafted reward from Peng et al. (2018), as well as policies trained via behavioral cloning (BC). Since mocap data does not provide expert actions, we use the policies from Peng et al. (2018) as oracles to provide state-action demonstrations, which are then used to train the BC policies via supervised learning. Each BC policy is trained with 10k samples from the oracle policies, while all other policies are trained from just a single demonstration, the equivalent of approximately 100 samples. VAIL consistently outperforms previous adversarial methods, and VAIL - GP achieves the best performance overall. Simply adding instance noise to the inputs (Salimans et al., 2016) or to the hidden layer without the KL constraint (Sønderby et al., 2016) leads to worse performance, since the network can learn a latent representation that renders the effects of the noise negligible. Though training with the handcrafted reward still outperforms the adversarial methods, VAIL demonstrates comparable performance to the handcrafted reward without manual reward or feature engineering, and produces motions that closely resemble the original demonstrations. The method from Merel et al. (2017) was able to imitate simple skills such as running, but was unable to reproduce more acrobatic skills such as the backflip and spinkick. In the case of running, our implementation produces more natural gaits than the results reported in Merel et al. (2017). Behavioral cloning is unable to reproduce any of the skills, despite being provided with substantially more demonstration data than the other methods.

Video Imitation: While our method achieves substantially better results on motion imitation when compared to prior work, previous methods can still produce reasonable behaviors. However, if the demonstrations are provided in terms of the raw pixels from video clips, instead of mocap data, the imitation task becomes substantially harder. The goal of the agent is therefore to directly imitate the skill depicted in the video. This is also a setting where manually engineering rewards is impractical, since simple losses like pixel distance do not provide a semantically meaningful measure of similarity.

![](images/8_0.jpg) <center>Figure 7: Left: C-Maze and S-Maze. When trained on the training maze on the left, AIRL learns a reward that overfits to the training task, and which cannot be transferred to the mirrored maze on the right. In contrast, VAIRL learns a smoother reward function that enables more-reliable transfer. Right: Performance on flipped test versions of our two training mazes. We report mean return \((\pm\) std. dev.) over five runs, and the mean return for the expert used to generate demonstrations. </center>

Figure 6 compares learning curves of policies trained with VAIL, GAIL, and policies trained using a reward function defined by the average pixel-wise difference between the frame \(M_{t}^{*}\) from the video demonstration and a rendered image \(M_{t}\) of the agent at each timestep \(t\), \(r_{t} = 1 - \frac{1}{3\times 64^{2}}\| M_{t}^{*} - M_{t}\|^{2}\). Each frame is represented by a \(64\times 64\) RGB image. Both GAIL and the pixel-loss are unable to learn the running gait. VAIL is the only method that successfully learns to imitate the skill from the video demonstration.
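For reference, the pixel-wise baseline reward above is straightforward to state in code; the sketch below assumes frames are float arrays in \([0,1]\) of shape \(64\times 64\times 3\), and all names are illustrative.

```python
# Illustrative sketch of the pixel-wise baseline reward
# r_t = 1 - (1 / (3 * 64^2)) * ||M*_t - M_t||^2 for 64x64 RGB frames.
import numpy as np

def pixel_reward(frame_demo: np.ndarray, frame_agent: np.ndarray) -> float:
    assert frame_demo.shape == frame_agent.shape == (64, 64, 3)
    return 1.0 - np.sum((frame_demo - frame_agent) ** 2) / (3 * 64 ** 2)
```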
Snapshots of the video demonstration and the simulated motion are available in Figure 5. To further investigate the effects of the VDB, we visualize the gradient of the discriminator with respect to images from the video demonstration and simulation. Saliency maps for discriminators trained with VAIL and GAIL are available in Figure 5. The VAIL discriminator learns to attend to spatially coherent image patches around the character, while the GAIL discriminator exhibits less structure. The magnitude of the gradients from VAIL also tends to be significantly larger than those from GAIL, which may suggest that VAIL is able to mitigate the problem of vanishing gradients present in GAIL.

Adaptive Constraint: To evaluate the effects of the adaptive \(\beta\) updates, we compare policies trained with different fixed values of \(\beta\) and policies where \(\beta\) is updated adaptively to enforce a desired information constraint \(I_{c} = 0.5\). Figure 6 illustrates the learning curves and the KL loss over the course of training. When \(\beta\) is too small, performance reverts to that achieved by GAIL. Large values of \(\beta\) help to smooth the discriminator landscape and improve learning speed during the early stages of training, but converge to worse final performance. Policies trained using dual gradient descent to adaptively update \(\beta\) consistently achieve the best performance overall.

### 5.2 VAIRL: VARIATIONAL ADVERSARIAL INVERSE REINFORCEMENT LEARNING

Next, we use VAIRL to recover reward functions from demonstrations. Unlike the discriminator learned by VAIL, the reward function recovered by VAIRL can be re-optimized to train new policies from scratch in the same environment. In some cases, it can also be used to transfer similar behavior to different environments. In Figure 7, we show the results of applying VAIRL to the C-maze from Fu et al. (2017), and a more complex S-maze; the simple 2D observation spaces of these tasks make it easy to interpret the recovered reward functions. In both mazes, the expert is trained to navigate from a start position at the bottom of the maze to a fixed target position at the top. We use each method to obtain an imitation policy and to approximate the expert's reward on the original maze. The recovered reward is then used to train a new policy to solve a left-right flipped version of the training maze. On the C-maze, we found that plain AIRL, without a gradient penalty, would sometimes overfit and fail to transfer to the new environment, as evidenced by the reward visualization in Figure 7 (left) and the higher return variance in Figure 7 (right). In contrast, by incorporating a VDB into AIRL, VAIRL learns a substantially smoother reward function that is more suitable for transfer. Furthermore, we found that in the S-maze with two internal walls, AIRL was too unstable to acquire a meaningful reward function, even with the use of a gradient penalty. In contrast, VAIRL was able to learn a reasonable reward in most cases without a gradient penalty, and its performance improved even further with the addition of a gradient penalty.

![](images/9_0.jpg) <center>Figure 8: Comparison of VGAN and other methods on CIFAR-10, with performance evaluated using the Fréchet Inception Distance (FID). </center>

![](images/9_1.jpg) <center>Figure 9: VGAN samples on CIFAR-10, CelebA \(128 \times 128\), and CelebAHQ \(1024 \times 1024\). </center>
To evaluate the effects of the VDB, we observe that the performance of VAIRL drops on both tasks when the KL constraint is disabled \((\beta = 0)\), suggesting that the improvements from the VDB cannot be attributed entirely to the noise introduced by the sampling process for \(\mathbf{z}\). Further details of these experiments and illustrations of the recovered reward functions are available in Appendix D.

### 5.3 VGAN: VARIATIONAL GENERATIVE ADVERSARIAL NETWORKS

Finally, we apply the VDB to image generation with generative adversarial networks, which we refer to as VGAN. Experiments are conducted on the CIFAR-10 (Krizhevsky et al.), CelebA (Liu et al., 2015), and CelebAHQ (Karras et al., 2018) datasets. We compare our approach to recent stabilization techniques: WGAN-GP (Gulrajani et al., 2017b), instance noise (Sønderby et al., 2016; Arjovsky & Bottou, 2017), spectral normalization (SN) (Miyato et al., 2018), and gradient penalty (GP) (Mescheder et al., 2018), as well as the original GAN (Goodfellow et al., 2014) on CIFAR-10. To measure performance, we report the Fréchet Inception Distance (FID) (Heusel et al., 2017), which has been shown to be more consistent with human evaluation. All methods are implemented using the same base model, built on the resnet architecture of Mescheder et al. (2018). Aside from tuning the KL constraint \(I_{c}\) for VGAN, no additional hyperparameter optimization was performed to modify the settings provided by Mescheder et al. (2018). The performance of the various methods on CIFAR-10 is shown in Figure 8. While the vanilla GAN and instance noise are prone to diverging as training progresses, VGAN remains stable. Note that instance noise can be seen as a non-adaptive version of VGAN without constraints on \(I_{c}\). This experiment again highlights that there is a significant improvement from imposing the information bottleneck over simply adding instance noise. Combining both VDB and gradient penalty (VGAN - GP) achieves the best performance overall with an FID of 18.1. We also experimented with combining the VDB with SN, but this combination is prone to diverging. See Figure 9 for samples of images generated with our approach. Please refer to Appendix E for experimental details and more results.

## 6 CONCLUSION

We present the variational discriminator bottleneck, a general regularization technique for adversarial learning. Our experiments show that the VDB is broadly applicable to a variety of domains, and yields significant improvements over previous techniques on a number of challenging tasks. While our experiments have produced promising results for video imitation, the results have been primarily with videos of synthetic scenes. We believe that extending the technique to imitating real-world videos is an exciting direction. Another exciting direction for future work is a more in-depth theoretical analysis of the method, to derive convergence and stability results or conditions.

## ACKNOWLEDGEMENTS

We would like to thank the anonymous reviewers for their helpful feedback, and AWS and NVIDIA for providing computational resources. This research was funded by an NSERC Postgraduate Scholarship, a Berkeley Fellowship for Graduate Study, BAIR, Huawei, and ONR PECASE N000141612723.

## REFERENCES

Alessandro Achille and Stefano Soatto. Information dropout: Learning optimal representations through noisy computation. IEEE Transactions on Pattern Analysis & Machine Intelligence, 40(12):2897-2905, 2017. ISSN 0162-8828.
doi: 10.1109/TPAMI.2017.2784440.

Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. CoRR, abs/1612.00410, 2016. URL http://arxiv.org/abs/1612.00410.

Martín Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. CoRR, abs/1701.04862, 2017. URL http://arxiv.org/abs/1701.04862.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 214-223, International Convention Centre, Sydney, Australia, 06-11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/arjovsky17a.html.

David Berthelot, Tom Schumm, and Luke Metz. BEGAN: boundary equilibrium generative adversarial networks. CoRR, abs/1703.10717, 2017. URL http://arxiv.org/abs/1703.10717.

Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004. ISBN 0521833787.

Bullet. Bullet physics library, 2015. http://bulletphysics.org.

Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative adversarial networks. CoRR, abs/1612.02136, 2016. URL http://arxiv.org/abs/1612.02136.

Liqun Chen, Shuyang Dai, Yunchen Pu, Erjin Zhou, Chunyuan Li, Qinliang Su, Changyou Chen, and Lawrence Carin. Symmetric variational autoencoder and connections to adversarial learning. In Amos Storkey and Fernando Perez-Cruz (eds.), Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, volume 84 of Proceedings of Machine Learning Research, pp. 661-669, Playa Blanca, Lanzarote, Canary Islands, 09-11 Apr 2018. PMLR. URL http://proceedings.mlr.press/v84/chen18b.html.

Chris Donahue, Julian McAuley, and Miller Puckette. Synthesizing audio with generative adversarial networks. CoRR, abs/1802.04208, 2018. URL http://arxiv.org/abs/1802.04208.

Chelsea Finn, Paul F. Christiano, Pieter Abbeel, and Sergey Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. CoRR, abs/1611.03852, 2016a. URL http://arxiv.org/abs/1611.03852.

Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 49-58, 2016b. URL http://jmlr.org/proceedings/papers/v48/finn16.html.

Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. CoRR, abs/1710.11248, 2017. URL http://arxiv.org/abs/1710.11248.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 5767-5777. Curran Associates, Inc., 2017a.
URL http://papers.nips.cc/paper/7159-improved-training-of-wasserstein-gans.pdf.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pp. 5767-5777, 2017b.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017.

Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. \(\beta\)-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017.

Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Neural networks for machine learning, lecture 6a: Overview of mini-batch gradient descent.

Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems 29, pp. 4565-4573. Curran Associates, Inc., 2016.

Daniel Holden, Taku Komura, and Jun Saito. Phase-functioned neural networks for character control. ACM Trans. Graph., 36(4):42:1-42:13, July 2017. ISSN 0730-0301. doi: 10.1145/3072959.3073663. URL http://doi.acm.org/10.1145/3072959.3073663.

Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. CoRR, abs/1710.10196, 2017. URL http://arxiv.org/abs/1710.10196.

Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hk99zCeAb.

Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. CoRR, abs/1312.6114, 2013. URL http://dblp.uni-trier.de/db/journals/corr/corr1312.html#KingmaW13.

Naveen Kodali, Jacob D. Abernethy, James Hays, and Zsolt Kira. How to train your DRAGAN. CoRR, abs/1705.07215, 2017. URL http://arxiv.org/abs/1705.07215.

Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (canadian institute for advanced research). URL http://www.cs.toronto.edu/~kriz/cifar.html.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.

Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In Maria Florina Balcan and Kilian Q. Weinberger (eds.), Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pp. 1558-1566, New York, New York, USA, 20-22 Jun 2016. PMLR. URL http://proceedings.mlr.press/v48/larsen16.html.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730-3738, 2015.

Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are gans created equal? a large-scale study. arXiv preprint arXiv:1711.10337, 2017.
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. In International Conference on Learning Representations, 2016. URL http://arxiv.org/abs/1511.05644.

Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Multi-class generative adversarial networks with the L2 loss function. CoRR, abs/1611.04076, 2016. URL http://arxiv.org/abs/1611.04076.

Josh Merel, Yuval Tassa, Dhruva TB, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, and Nicolas Heess. Learning human behaviors from motion capture by adversarial imitation. CoRR, abs/1707.02201, 2017. URL http://arxiv.org/abs/1707.02201.

Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 3481-3490, Stockholmsmässan, Stockholm, Sweden, 10-15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/mescheder18a.html.

Lars M. Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. CoRR, abs/1701.04722, 2017. URL http://arxiv.org/abs/1701.04722.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1QRgziT-.

Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel van de Panne. Deepmimic: Example-guided deep reinforcement learning of physics-based character skills. ACM Trans. Graph., 37(4):143:1-143:14, July 2018. ISSN 0730-0301. doi: 10.1145/3197517.3201311. URL http://doi.acm.org/10.1145/3197517.3201311.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015. URL http://arxiv.org/abs/1511.06434.

Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, John Schulman, Emanuel Todorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. CoRR, abs/1709.10087, 2017. URL http://arxiv.org/abs/1709.10087.

Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. CoRR, abs/1606.03498, 2016. URL http://arxiv.org/abs/1606.03498.

John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, 2015.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347.

Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. CoRR, abs/1610.04490, 2016. URL http://arxiv.org/abs/1610.04490.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929-1958, January 2014. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=2627435.2670313.

Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. CoRR, abs/1503.02406, 2015. URL http://arxiv.org/abs/1503.02406.
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. CoRR, abs/1609.02612, 2016. URL http://arxiv.org/abs/1609.02612.

You Xie, Erik Franz, Mengyu Chu, and Nils Thuerey. tempogan: A temporally coherent, volumetric gan for super-resolution fluid flow. ACM Transactions on Graphics (TOG), 37(4):95, 2018.

Junbo Jake Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. CoRR, abs/1609.03126, 2016. URL http://arxiv.org/abs/1609.03126.

## SUPPLEMENTARY MATERIAL

## A ANALYSIS AND PROOFS

In this appendix, we show that the gradient of the generator when the discriminator is augmented with the VDB is non-degenerate, under some mild additional assumptions. First, we assume a pointwise constraint of the form \(\mathrm{KL}[E(\mathbf{z}|\mathbf{x})\| r(\mathbf{z})]\leq I_{c}\) for all \(\mathbf{x}\). In reality, we use an average KL constraint, since we found it to be more convenient to optimize, though a pointwise constraint is also possible to enforce by using the largest constraint violation to increment \(\beta\). We could likely also extend the analysis to the average constraint, though we leave this to future work. The main theorem can then be stated as follows:

Theorem A.1. Let \(g(\mathbf{u})\) denote the generator's mapping from a noise vector \(\mathbf{u}\sim p(\mathbf{u})\) to a point in \(X\). Given the generator distribution \(G(\mathbf{x})\) and data distribution \(p^{*}(\mathbf{x})\), a VDB with an encoder \(E(\mathbf{z}|\mathbf{x}) = \mathcal{N}(\mu_{E}(\mathbf{x}),\Sigma)\), and \(\mathrm{KL}[E(\mathbf{z}|\mathbf{x})\| r(\mathbf{z})]\leq I_{c}\), the gradient passed to the generator has the form \[\nabla_{g}\mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}[\log (1 - D^{*}(\mu_{E}(g(\mathbf{u}))))]\] \[\qquad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[a(\mathbf{u})\int E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\nabla_{g}||\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})||^{2}dp^{*}(\mathbf{x})\right.\] \[\qquad \left. - b(\mathbf{u})\int E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\nabla_{g}||\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})||^{2}dG(\mathbf{x})\right]\] where \(D^{*}(\mathbf{z})\) is the optimal discriminator, \(a(\mathbf{u})\) and \(b(\mathbf{u})\) are positive functions, and we always have \(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x}) > C(I_{c})\), where \(C(I_{c})\) is a continuous monotonic function, and \(C(I_{c})\to \delta >0\) as \(I_{c}\to 0\).

Analysis for an encoder with an input-dependent variance \(\Sigma (\mathbf{x})\) is also possible, but more involved. We will further assume below for notational simplicity that \(\Sigma\) is diagonal with diagonal values \(\sigma^{2}\). This assumption is not required, but substantially simplifies the linear algebra. Analogously to Theorem 3.2 from Arjovsky & Bottou (2017), this theorem states that the gradient of the generator points in the direction of points in the data distribution, and away from points in the generator distribution. However, going beyond the theorem in Arjovsky & Bottou (2017), this result states that the coefficients on these vectors, given by \(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\), are always bounded below by a value that approaches a positive constant \(\delta\) as we decrease \(I_{c}\), meaning that the gradient does not vanish.
The proof of the first part of this theorem is essentially identical to the proof presented by Arjovsky & Bottou (2017), but accounting for the fact that the noise is now injected into the latent space of the VDB, rather than being added directly to \(\mathbf{x}\). This result assumes that \(E(\mathbf{z}|\mathbf{x})\) has a learned but input-independent variance \(\Sigma = \sigma^{2}I\), though the proof can be repeated for an input-dependent or non-diagonal \(\Sigma\):

Proof. Overloading \(p^{*}(\mathbf{x})\) and \(G(\mathbf{x})\), let \(p^{*}(\mathbf{z})\) and \(G(\mathbf{z})\) be the distribution of embeddings \(\mathbf{z}\) under the real data and generator respectively. \(p^{*}(\mathbf{z})\) is then given by \[p^{*}(\mathbf{z}) = \mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[E(\mathbf{z}|\mathbf{x})\right] = \int E(\mathbf{z}|\mathbf{x})dp^{*}(\mathbf{x}),\] and similarly for \(G(\mathbf{z})\) \[G(\mathbf{z}) = \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[E(\mathbf{z}|\mathbf{x})\right] = \int E(\mathbf{z}|\mathbf{x})dG(\mathbf{x}).\] From Arjovsky & Bottou (2017), the optimal discriminator between \(p^{*}(\mathbf{z})\) and \(G(\mathbf{z})\) is \[D^{*}(\mathbf{z}) = \frac{p^{*}(\mathbf{z})}{p^{*}(\mathbf{z}) + G(\mathbf{z})}.\]

The gradient passed to the generator then has the form \[\begin{array}{r l} & {\nabla_{g}\mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[\log \left(1 - D^{*}(\mu_{E}(g(\mathbf{u})))\right)\right]}\\ & {\quad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[\nabla_{g}\log \left(G(\mu_{E}(g(\mathbf{u})))\right) - \nabla_{g}\log \left(p^{*}(\mu_{E}(g(\mathbf{u}))) + G(\mu_{E}(g(\mathbf{u})))\right)\right]}\\ & {\quad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[\frac{\nabla_{g}G(\mu_{E}(g(\mathbf{u})))}{G(\mu_{E}(g(\mathbf{u})))} -\frac{\nabla_{g}p^{*}(\mu_{E}(g(\mathbf{u}))) + \nabla_{g}G(\mu_{E}(g(\mathbf{u})))}{p^{*}(\mu_{E}(g(\mathbf{u}))) + G(\mu_{E}(g(\mathbf{u})))}\right]}\\ & {\quad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[\frac{1}{p^{*}(\mu_{E}(g(\mathbf{u}))) + G(\mu_{E}(g(\mathbf{u})))}\nabla_{g}\left[-p^{*}(\mu_{E}(g(\mathbf{u})))\right] - \frac{p^{*}(\mu_{E}(g(\mathbf{u})))}{G(\mu_{E}(g(\mathbf{u})))\left(p^{*}(\mu_{E}(g(\mathbf{u}))) + G(\mu_{E}(g(\mathbf{u})))\right)}\nabla_{g}\left[-G(\mu_{E}(g(\mathbf{u})))\right]\right].} \end{array}\] Let \[a(\mathbf{u}) = \frac{1}{2\sigma^{2}}\frac{1}{p^{*}(\mu_{E}(g(\mathbf{u}))) + G(\mu_{E}(g(\mathbf{u})))},\qquad b(\mathbf{u}) = \frac{1}{2\sigma^{2}}\frac{p^{*}(\mu_{E}(g(\mathbf{u})))}{G(\mu_{E}(g(\mathbf{u})))\left(p^{*}(\mu_{E}(g(\mathbf{u}))) + G(\mu_{E}(g(\mathbf{u})))\right)}.\] We then have \[\begin{array}{r l} & {\nabla_{g}\mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[\log \left(1 - D^{*}(\mu_{E}(g(\mathbf{u})))\right)\right]}\\ & {\quad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[2\sigma^{2}a(\mathbf{u})\nabla_{g}\left[-p^{*}(\mu_{E}(g(\mathbf{u})))\right] - 2\sigma^{2}b(\mathbf{u})\nabla_{g}\left[-G(\mu_{E}(g(\mathbf{u})))\right]\right]}\\ & {\quad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[2\sigma^{2}a(\mathbf{u})\int -\nabla_{g}E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\,dp^{*}(\mathbf{x}) - 2\sigma^{2}b(\mathbf{u})\int -\nabla_{g}E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\,dG(\mathbf{x})\right]}\\ & {\quad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[2\sigma^{2}a(\mathbf{u})\int -\nabla_{g}\frac{1}{Z}\exp \left(-\frac{1}{2\sigma^{2}}||\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})||^{2}\right)dp^{*}(\mathbf{x})\right.}\\ & {\qquad \left. - 2\sigma^{2}b(\mathbf{u})\int -\nabla_{g}\frac{1}{Z}\exp \left(-\frac{1}{2\sigma^{2}}||\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})||^{2}\right)dG(\mathbf{x})\right]}\\ & {\quad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[a(\mathbf{u})\int \frac{1}{Z}\exp \left(-\frac{1}{2\sigma^{2}}||\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})||^{2}\right)\nabla_{g}||\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})||^{2}\,dp^{*}(\mathbf{x})\right.}\\ & {\qquad \left. - b(\mathbf{u})\int \frac{1}{Z}\exp \left(-\frac{1}{2\sigma^{2}}||\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})||^{2}\right)\nabla_{g}||\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})||^{2}\,dG(\mathbf{x})\right]}\\ & {\quad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[a(\mathbf{u})\int E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\nabla_{g}||\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})||^{2}\,dp^{*}(\mathbf{x})\right.}\\ & {\qquad \left. - b(\mathbf{u})\int E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\nabla_{g}||\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})||^{2}\,dG(\mathbf{x})\right].} \end{array}\]

Similar to the result from Arjovsky & Bottou (2017), the gradient of the generator drives the generator's samples in the embedding space \(\mu_{E}(g(\mathbf{u}))\) towards embeddings of the points from the dataset \(\mu_{E}(\mathbf{x})\), weighted by their likelihood \(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\) under the real data. For an arbitrary encoder \(E\), real and fake samples in the embedding may be far apart. As such, the coefficients \(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\) can be arbitrarily small, thereby resulting in vanishing gradients for the generator. The second part of the theorem states that \(C(I_{c})\) is a continuous monotonic function, and \(C(I_{c})\to \delta >0\) as \(I_{c}\to 0\). This is the main result, and relies on the fact that \(\mathrm{KL}[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})]\leq I_{c}\). The intuition behind this result is that, for any two inputs \(\mathbf{x}\) and \(\mathbf{y}\), their encoded distributions \(E(\mathbf{z}|\mathbf{x})\) and \(E(\mathbf{z}|\mathbf{y})\) have means that cannot be more than some distance apart, and that distance shrinks with \(I_{c}\). This allows us to bound \(E(\mu_{E}(\mathbf{y})|\mathbf{x})\) below by \(C(I_{c})\), which ensures that the coefficients on the vectors in the theorem above are always at least as large as \(C(I_{c})\).

Proof. Let \(r(\mathbf{z}) = \mathcal{N}(0, I)\) be the prior distribution and suppose the KL divergences for all \(\mathbf{x}\) in the dataset and all \(g(\mathbf{u})\) generated by the generator are bounded by \(I_{c}\) \[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\leq I_{c},\quad \forall \mathbf{x},\mathbf{x}\sim p^{*}(\mathbf{x})\] \[\mathrm{KL}\left[E(\mathbf{z}|g(\mathbf{u}))||r(\mathbf{z})\right]\leq I_{c},\quad \forall \mathbf{u},\mathbf{u}\sim p(\mathbf{u}).\] From the definition of the KL-divergence we can bound the length of all embedding vectors, \[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right] = \frac{1}{2}\left(K\sigma^{2} + \mu_{E}(\mathbf{x})^{T}\mu_{E}(\mathbf{x}) - K - K\log \sigma^{2}\right)\leq I_{c}\] \[||\mu_{E}(\mathbf{x})||^{2}\leq 2I_{c} - K\sigma^{2} + K + K\log \sigma^{2},\] and similarly for \(\| \mu_{E}(g(\mathbf{u}))\|^{2}\), with \(K\) denoting the dimension of \(Z\).
A lower bound on \(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\), where \(\mathbf{u}\sim p(\mathbf{u})\) and \(\mathbf{x}\sim p^{*}(\mathbf{x})\), can then be determined by \[\log \left(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\right) = -\frac{1}{2\sigma^{2}}\left(\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\right)^{T}\left(\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\right) - \frac{K}{2}\log \sigma^{2} - \frac{K}{2}\log 2\pi .\] Since \(\| \mu_{E}(\mathbf{x})\|^{2}\), \(\| \mu_{E}(g(\mathbf{u}))\|^{2}\leq 2I_{c} - K\sigma^{2} + K + K\log \sigma^{2}\), \[\left\|\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\right\|^{2}\leq 8I_{c} - 4K\sigma^{2} + 4K + 4K\log \sigma^{2},\] and it follows that \[-\frac{1}{2\sigma^{2}}\left(\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\right)^{T}\left(\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\right)\geq -4\sigma^{-2}I_{c} + 2K - 2K\sigma^{-2} + 2K\sigma^{-2}\log \sigma^{-2}.\] The likelihood is therefore bounded below by \[\log \left(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\right)\geq -4\sigma^{-2}I_{c} + 2K - 2K\sigma^{-2} + 2K\sigma^{-2}\log \sigma^{-2} - \frac{K}{2}\log \sigma^{2} - \frac{K}{2}\log 2\pi .\] Since \(-\sigma^{-2} + \sigma^{-2}\log \sigma^{-2}\geq -1\), \[\log \left(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\right)\geq -4\sigma^{-2}I_{c} - \frac{K}{2}\log \sigma^{2} - \frac{K}{2}\log 2\pi . \quad (14)\] From the KL constraint, we can derive a lower bound \(\ell (I_{c})\) and an upper bound \(\mathcal{U}(I_{c})\) on \(\sigma^{2}\). Since \(\mu_{E}(\mathbf{x})^{T}\mu_{E}(\mathbf{x})\geq 0\), \[\frac{1}{2}\left(K\sigma^{2} + \mu_{E}(\mathbf{x})^{T}\mu_{E}(\mathbf{x}) - K - K\log \sigma^{2}\right)\leq I_{c}\] \[\sigma^{2} - 1 - \log \sigma^{2}\leq \frac{2I_{c}}{K}\] \[\log \sigma^{2}\geq -\frac{2I_{c}}{K} -1\] \[\sigma^{2}\geq \exp \left(-\frac{2I_{c}}{K} -1\right) = \ell (I_{c}).\] For the upper bound, since \(\sigma^{2} - \log \sigma^{2} > \frac{1}{2}\sigma^{2}\), \[\sigma^{2} - 1 - \log \sigma^{2}\leq \frac{2I_{c}}{K}\] \[\frac{1}{2}\sigma^{2} - 1< \frac{2I_{c}}{K}\] \[\sigma^{2}< \frac{4I_{c}}{K} +2 = \mathcal{U}(I_{c}).\] Substituting \(\ell (I_{c})\) and \(\mathcal{U}(I_{c})\) into Equation 14, we arrive at the following lower bound \[E(\mu_{E}(g(\mathbf{u}))|\mathbf{x}) > \exp \left(-4I_{c}\exp \left(\frac{2I_{c}}{K} +1\right) - \frac{K}{2}\log \left(\frac{4I_{c}}{K} +2\right) - \frac{K}{2}\log 2\pi\right) = C(I_{c}).\]

## B GRADIENT PENALTY

To combine the VDB with a gradient penalty, we use the reparameterization trick to backprop through the encoder when computing the gradient of the discriminator with respect to the inputs, \[\begin{array}{r l} & {J(D,E) = \underset{D,E}{\min}\ \mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(D(\mathbf{z})\right)\right]\right] + \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(1 - D(\mathbf{z})\right)\right]\right]}\\ & {\qquad +w_{GP}\,\mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[\mathbb{E}_{\epsilon \sim \mathcal{N}(0,I)}\left[\frac{1}{2} ||\nabla_{\mathbf{x}}D(\mu_{E}(\mathbf{x}) + \Sigma_{E}(\mathbf{x})\epsilon)||^{2}\right]\right]}\\ & {\qquad \mathrm{s.t.}\quad \mathbb{E}_{\mathbf{x}\sim \tilde{p}(\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right]\leq I_{c}.} \end{array} \quad (15)\] The coefficient \(w_{GP}\) weights the gradient penalty in the objective; \(w_{GP} = 10\) for image generation, \(w_{GP} = 1\) for motion imitation, and \(w_{GP} = 0.1\) (C-maze) or \(w_{GP} = 0.01\) (S-maze) for the IRL tasks. The gradient penalty is applied only to real samples \(p^{*}(\mathbf{x})\). We have experimented with applying the penalty to both real and fake samples, but found that performance was worse than penalizing only gradients from real samples. This is consistent with the GP implementation from Mescheder et al. (2018).
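As a sketch, the penalty of Equation 15 can be computed by differentiating through the reparameterized encoder; the snippet below reuses the hypothetical `VDBDiscriminator` from the earlier sketch, and penalizing the discriminator logits (rather than the sigmoid output) is an assumption of this illustration.

```python
# Sketch of the gradient penalty in Eq. 15, computed on real samples only by
# differentiating through the reparameterized encoding z = mu_E(x) + sigma_E(x) * eps.
import torch

def gradient_penalty(model, x_real, w_gp=10.0):
    x = x_real.clone().requires_grad_(True)
    z, _, _ = model.encode(x)                  # reparameterized sample of E(z|x)
    d_out = model.d(z).sum()                   # scalar output; grads stay per-sample
    grad = torch.autograd.grad(d_out, x, create_graph=True)[0]
    return w_gp * 0.5 * grad.pow(2).reshape(len(x), -1).sum(dim=1).mean()
```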
## C IMITATION LEARNING

Experimental Setup: The goal of the motion imitation tasks is to train a simulated agent to mimic a demonstration provided in the form of a mocap clip recorded from a human actor. We use a similar experimental setup as Peng et al. (2018), with a 34 degrees-of-freedom humanoid character. The state \(\mathbf{s}\) consists of features that represent the configuration of the character's body (link positions and velocities). We also include a phase variable \(\phi \in [0,1]\) among the state features, which records the character's progress along the motion and helps to synchronize the character with the reference motion, with 0 and 1 denoting the start and end of the motion, respectively. The action \(\mathbf{a}\) sampled from the policy \(\pi (\mathbf{a}|\mathbf{s})\) specifies target poses for PD controllers positioned at each joint. Given a state, the policy specifies a Gaussian distribution over the action space \(\pi (\mathbf{a}|\mathbf{s}) = \mathcal{N}(\mu (\mathbf{s}),\Sigma)\), with a state-dependent mean \(\mu (\mathbf{s})\) and fixed diagonal covariance matrix \(\Sigma\). \(\mu (\mathbf{s})\) is modeled using a 3-layered fully-connected network with 1024 and 512 hidden units, followed by a linear output layer that specifies the mean of the Gaussian. ReLU activations are used for all hidden layers. The value function is modeled with a similar architecture but with a single linear output unit. The policy is queried at \(30\mathrm{Hz}\). Physics simulation is performed at \(1.2\mathrm{kHz}\) using the Bullet physics engine (Bullet, 2015). Given the rewards from the discriminator, PPO (Schulman et al., 2017) is used to train the policy, with a stepsize of \(2.5\times 10^{-6}\) for the policy, a stepsize of 0.01 for the value function, and a stepsize of \(10^{-5}\) for the discriminator. Gradient descent with momentum 0.9 is used for all models. The PPO clipping threshold is set to 0.2. When evaluating the performance of the policies, each episode is simulated for a maximum horizon of 20s. Early termination is triggered whenever the character's torso contacts the ground, leaving the policy with the maximum error of \(\pi\) radians for all remaining timesteps.

Phase-Functioned Discriminator: Unlike the policy and value function, which are modeled with standard fully-connected networks, the discriminator is modeled by a phase-functioned neural network (PFNN) to explicitly model the time-dependency of the reference motion (Holden et al., 2017). While the parameters of a network are generally fixed, the parameters of a PFNN are functions of the phase variable \(\phi\). The parameters \(\theta\) of the network for a given \(\phi\) are determined by a weighted combination of a set of fixed parameters \(\{\theta_{0},\theta_{1},\dots,\theta_{k}\}\), \[\theta = \sum_{i = 0}^{k}w_{i}(\phi)\theta_{i},\] where \(w_{i}(\phi)\) is a phase-dependent weight for \(\theta_{i}\). In our implementation, we use \(k = 5\) sets of parameters and \(w_{i}(\phi)\) is designed to linearly interpolate between two adjacent sets of parameters for each phase \(\phi\), where each set of parameters \(\theta_{i}\) corresponds to a discrete phase value \(\phi_{i}\) spaced uniformly between \([0,1]\). For a given value of \(\phi\), the parameters of the discriminator are determined according to \[\theta = w_{i}(\phi)\theta_{i} + w_{i + 1}(\phi)\theta_{i + 1},\] where \(\theta_{i}\) and \(\theta_{i + 1}\) correspond to the phase values \(\phi_{i} \leq \phi < \phi_{i + 1}\) that form the endpoints of the phase interval that contains \(\phi\).
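The sketch below shows one way to realize this phase-dependent blending, assuming linear interpolation between uniformly spaced anchor parameter sets; whether the paper's \(k = 5\) counts anchors or intervals is ambiguous, so we assume \(k + 1 = 6\) anchors here, and all shapes and names are illustrative.

```python
# Minimal sketch of phase-functioned parameter blending: linearly interpolate
# between the two anchor parameter sets whose phases bracket the query phase.
import torch

def blend_parameters(theta_sets, phi):
    """Return theta = w_i(phi) * theta_i + w_{i+1}(phi) * theta_{i+1} for
    anchors phi_i spaced uniformly on [0, 1]."""
    k = len(theta_sets) - 1                 # number of intervals between anchors
    x = min(max(phi, 0.0), 1.0) * k         # scale phase into anchor-index space
    i = min(int(x), k - 1)                  # left anchor index
    w = x - i                               # interpolation weight w_{i+1}(phi)
    return (1.0 - w) * theta_sets[i] + w * theta_sets[i + 1]

# Example: blend a hypothetical 64x64 weight matrix at phase 0.37 from 6 anchors.
theta_sets = [torch.randn(64, 64) for _ in range(6)]
theta = blend_parameters(theta_sets, 0.37)
```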
![](images/18_0.jpg) <center>Figure 10: Learning curves comparing VAIL to other methods for motion imitation. Performance is measured using the average joint rotation error between the simulated character and the reference motion. Each method is evaluated with 3 random seeds. </center>

![](images/18_1.jpg) <center>Figure 11: Learning curves comparing VAIL with a discriminator modeled by a phase-functioned neural network (PFNN), to modeling the discriminator with a fully-connected network that receives the phase-variable \(\phi\) as part of the input (no PFNN), and a discriminator modeled with a fully-connected network but does not receive \(\phi\) as an input (no phase). </center>

A PFNN is used for all motion imitation experiments, both VAIL and GAIL, except for those that use the approach proposed by Merel et al. (2017), which use standard fully-connected networks for the discriminator. Figure 11 compares the performance of VAIL when the discriminator is modeled with a phase-functioned neural network (with PFNN) to discriminators modeled with standard fully-connected networks. We increased the size of the layers of the fully-connected nets to have a similar number of parameters as a PFNN. We evaluate the performance of fully-connected nets that receive the phase variable \(\phi\) as part of the input (no PFNN), and fully-connected nets that do not receive \(\phi\) as an input. The phase-functioned discriminator leads to significant performance improvements across all tasks evaluated. Policies trained without a phase variable perform worst overall, suggesting that phase information is critical for performance. All methods perform well on simpler skills, such as running, but the additional phase structure introduced by the PFNN proved to be vital for successful imitation of more complex skills, such as the dance and backflip.

Next we compare the accuracy of discriminators trained using different methods. Figure 12 illustrates the accuracy of the discriminators over the course of training.

![](images/19_0.jpg) <center>Figure 12: Left: Accuracy of the discriminator trained using different methods for imitating the dance skill. Middle: Value of the dual variable \(\beta\) over the course of training. Right: KL loss over the course of training. The dual gradient descent update for \(\beta\) effectively enforces the VDB constraint \(I_{c}\). </center>

Discriminators trained via GAIL quickly overpower the policy and learn to accurately differentiate between samples, even when instance noise is applied to the inputs. VAIL without the KL constraint slows the discriminator's progress, but nonetheless reaches near perfect accuracy with a larger number of samples.
Once the KL constraint is enforced, the information bottleneck limits the performance of the discriminator, which converges to approximately \(80\%\) accuracy. Figure 12 also visualizes the value of \(\beta\) over the course of training for the motion imitation tasks, along with the KL loss term in the objective. The dual gradient descent update effectively enforces the VDB constraint \(I_{c}\).

Video Imitation: In the video imitation tasks, we use a simplified 2D biped character in order to avoid issues that may arise due to depth ambiguity from monocular videos. The biped character has a total of 12 degrees-of-freedom, with similar state and action parameters as the humanoid. The video demonstrations are generated by rendering a reference motion into a sequence of video frames, which are then provided to the agent as a demonstration. The goal of the agent is to imitate the motion depicted in the video, without access to the original reference motion; the reference motion is used only to evaluate performance.

## D INVERSE REINFORCEMENT LEARNING

## D.1 EXPERIMENTAL SETUP

Environments We evaluate on two maze tasks, as illustrated in Figure 13. The C-maze is taken from Fu et al. (2017): in this maze, the agent starts at a random point within a small fixed distance of the mean start position. The agent has a continuous, 2D action space which allows it to accelerate in the \(x\) or \(y\) directions, and is able to observe its \(x\) and \(y\) position, but not its velocity. The ground truth reward is \(r_{t} = - d_{t} - 10^{- 3}\| a_{t}\|^{2}\), where \(d_{t}\) is the agent's distance to the goal, and \(a_{t}\) is its action (this action penalty is assumed to be zero in Figure 13). Episodes terminate after 100 steps; for evaluation, we report the undiscounted mean sum of rewards over each episode. The S-maze is a larger variant of the same environment with an extra wall between the agent and its goal. To make the S-maze easier to solve for the expert, we added further reward shaping to encourage the agent to pass through the gaps between walls. We also increased the maximum control forces relative to the C-maze to enable more rapid exploration. Environments will be released along with the rest of our VAIRL implementations.

Hyperparameters Policy networks for all methods were two-layer ReLU MLPs with 32 hidden units per layer. Reward and discriminator networks were similar, but with 32-unit mean and standard deviation layers inserted before the final layer for VDB methods. To generate expert demonstrations, we trained a TRPO (Schulman et al., 2015) agent on the ground truth reward for the training environment for 200 iterations, and saved 107 trajectories from each of the policies corresponding to the five final iterations. TRPO used a batch size of 10,000, a step size of 0.01, and an entropy bonus with a coefficient of 0.1 to increase diversity. After generating demonstrations, we trained the IRL and imitation methods on a training maze for 200 iterations; again, our policy optimizer was TRPO with the same hyperparameters used to generate demonstrations. Between each policy update, we did 100 discriminator updates using Adam with a learning rate of \(5 \times 10^{- 5}\) and a batch size of 32. For the C-maze our VAIRL runs used a target KL of \(I_{c} = 0.5\), while for the more complex S-maze we use a tighter target of \(I_{c} = 0.05\).

![](images/20_0.jpg) <center>Figure 13: Left: The C-maze used for training and its mirror version used for testing. Colour contours show the ground truth reward function that we use to train the expert and evaluate transfer quality, while the red and green dots show the initial and goal positions, respectively. Right: The analogous diagram for the S-maze. </center>
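For concreteness, the ground-truth reward from the environment description above can be sketched as follows; the goal coordinates are a made-up example, not the environment's actual values.

```python
# Illustrative sketch of the maze ground-truth reward r_t = -d_t - 1e-3 * ||a_t||^2.
import numpy as np

GOAL = np.array([0.0, 1.0])  # hypothetical goal coordinates

def ground_truth_reward(pos: np.ndarray, action: np.ndarray) -> float:
    d_t = np.linalg.norm(pos - GOAL)                     # distance to goal
    return -d_t - 1e-3 * float(np.dot(action, action))  # plus action penalty
```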
![](images/20_0.jpg) <center>Figure 13: Left: The C-maze used for training and its mirrored version used for testing. Colour contours show the ground truth reward function that we use to train the expert and evaluate transfer quality, while the red and green dots show the initial and goal positions, respectively. Right: The analogous diagram for the S-maze.</center>

For the test C-maze, we trained new policies against the recovered reward using TRPO with the hyperparameters described above; for the test S-maze, we modified these parameters to use a batch size of 50,000 and a learning rate of 0.001 for 400 iterations.

## D.2 RECOVERED REWARD FUNCTIONS

Figures 14 and 15 show the reward functions recovered by each IRL baseline on the C-maze and S-maze, respectively, along with sample trajectories for policies trained to optimize those rewards. Notice that VAIRL tends to recover smoother reward functions that match the ground truth reward more closely than the baselines. The addition of a gradient penalty enhances this effect for both AIRL and VAIRL. This is especially true in the S-maze, where combining a gradient penalty with a variational discriminator bottleneck leads to a smooth reward that gradually increases as the agent nears its goal position at the top of the maze.

## E IMAGE GENERATION

We provide further experiments on image generation and details of the experimental setup.

## E.1 EXPERIMENTAL SETUP

We use the non-saturating objective of Goodfellow et al. (2014) for all models except WGAN-GP. Following Lucic et al. (2017), we compute the FID on samples of size 10,000. We base our implementation on that of Mescheder et al. (2018), and we do not use batch normalization in either the generator or the discriminator. We use RMSprop (Hinton et al.) and a fixed learning rate for all experiments. For convolutional GANs, the variational discriminator bottleneck is implemented as a 1x1 convolution on the final embedding space that outputs a Gaussian distribution over \(Z\), parametrized with a mean and a diagonal covariance matrix. For all image experiments, we preserve the dimensionality of the latent space. All experiments use the adaptive \(\beta\) update with a dual stepsize of \(\alpha_{\beta} = 10^{- 5}\). We will make our code public.

Similarly to VGAN, instance noise (Sonderby et al., 2016; Arjovsky & Bottou, 2017) is added to the final embedding space of the discriminator right before applying the classifier. Instance noise can be interpreted as a non-adaptive VGAN without an information constraint.

Architecture: For CIFAR-10, we use a resnet-based architecture adapted from Mescheder et al. (2018), detailed in Tables 2, 3, and 4. For CelebA and CelebAHQ, we use the same architecture as in Mescheder et al. (2018).

![](images/21_0.jpg) <center>Figure 14: Visualizations of recovered reward functions transferred to the mirrored C-maze. Also shown are trajectories executed by policies trained to maximize the corresponding reward in the new environment.</center>

![](images/22_0.jpg) <center>Figure 15: Visualizations of recovered reward functions transferred to the mirrored S-maze, analogous to Figure 14.</center>
Table 2: CIFAR-10 Generator <table><tr><td>Layer</td><td>Output size</td><td>Filter</td></tr><tr><td>FC</td><td>256·4·4</td><td>256→256·4·4</td></tr><tr><td>Reshape</td><td>256×4×4</td><td></td></tr><tr><td>Resnet-block</td><td>128×4×4</td><td>256→128</td></tr><tr><td>Upsample</td><td>128×8×8</td><td></td></tr><tr><td>Resnet-block</td><td>64×8×8</td><td>128→64</td></tr><tr><td>Upsample</td><td>64×16×16</td><td></td></tr><tr><td>Resnet-block</td><td>32×16×16</td><td>64→32</td></tr><tr><td>Upsample</td><td>32×32×32</td><td></td></tr><tr><td>Resnet-block</td><td>32×32×32</td><td></td></tr><tr><td>Conv2D</td><td>3×32×32</td><td>32→3</td></tr></table>

Table 3: CIFAR-10 Discriminator <table><tr><td>Layer</td><td>Output size</td><td>Filter</td></tr><tr><td>Conv2D</td><td>32×32×32</td><td>3→32</td></tr><tr><td>Resnet-block</td><td>64×32×32</td><td>32→64</td></tr><tr><td>AvgPool2D</td><td>64×16×16</td><td></td></tr><tr><td>Resnet-block</td><td>128×16×16</td><td>64→128</td></tr><tr><td>AvgPool2D</td><td>128×8×8</td><td></td></tr><tr><td>Resnet-block</td><td>256×8×8</td><td>128→256</td></tr><tr><td>AvgPool2D</td><td>256×4×4</td><td></td></tr><tr><td>FC</td><td>1</td><td>256·4·4→1</td></tr></table>

## E.2 RESULTS

CIFAR-10: We compare our approach with recent stabilization techniques: WGAN-GP (Gulrajani et al., 2017b), instance noise (Sonderby et al., 2016; Arjovsky & Bottou, 2017), spectral normalization (Miyato et al., 2018), and gradient penalty (Mescheder et al., 2018). We report results for networks trained for 750k iterations. We use \(I_{c} = 0.1\), and a coefficient of \(w_{GP} = 10\) for the gradient penalty, which is the same value used by the implementation from Mescheder et al. (2018). See Figure 16 for visual comparisons of randomly generated samples.

CelebA: On the CelebA (Liu et al., 2015) dataset, we generate images of size \(128 \times 128\) with \(I_{c} = 0.2\). On this dataset we do not see a large improvement over the other baselines. This is likely because the architecture has been effectively tuned for this task, reflected by the fact that even the vanilla GAN trains well on this dataset. GAN, GP, and VGAN-GP obtain similar FID scores of 7.64, 7.76, and 7.25, respectively. See Figure 17 for more qualitative results with our approach.

CelebAHQ: VGAN can also be trained on CelebAHQ (Karras et al., 2018) at \(1024 \times 1024\) resolution directly, without progressive growing (Karras et al., 2018). We use \(I_{c} = 0.1\) and train with VGAN-GP. We train on a single Tesla V100, which fits a batch size of 8 in our experiments; previous approaches (Karras et al., 2018; Mescheder et al., 2018) use a larger batch size and train over multiple GPUs. The model was trained for 300k iterations.
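The VDB layers described in the setup above (and detailed in Table 4 below) can be sketched as follows, assuming PyTorch: a 1x1 convolution maps the final \(256\times 4\times 4\) embedding to a mean and diagonal covariance, a latent is sampled via the reparameterization trick, and a final linear layer produces the logit. The sizes follow Table 4, while the log-variance parameterization and the helper names are our own assumptions.

```python
import torch
import torch.nn as nn

class VDBHead(nn.Module):
    """Sketch of the last rows of Table 4: 1x1 Conv2D (256 -> 2*256),
    sampling, then FC (256*4*4 -> 1)."""

    def __init__(self, channels=256, spatial=4):
        super().__init__()
        self.to_gaussian = nn.Conv2d(channels, 2 * channels, kernel_size=1)
        self.classifier = nn.Linear(channels * spatial * spatial, 1)

    def forward(self, h):
        mu, logvar = self.to_gaussian(h).chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterized sample
        # Per-sample KL[E(z|x) || N(0, I)], used for the bottleneck constraint.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).flatten(1).sum(dim=1)
        logit = self.classifier(z.flatten(1))
        return logit, kl

head = VDBHead()
logit, kl = head(torch.randn(8, 256, 4, 4))  # embedding from the resnet trunk
```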
Table 4: CIFAR-10 Discriminator with VDB <table><tr><td>Layer</td><td>Output size</td><td>Filter</td></tr><tr><td>Conv2D</td><td>32×32×32</td><td>3→32</td></tr><tr><td>Resnet-block</td><td>64×32×32</td><td>32→64</td></tr><tr><td>AvgPool2D</td><td>64×16×16</td><td></td></tr><tr><td>Resnet-block</td><td>128×16×16</td><td>64→128</td></tr><tr><td>AvgPool2D</td><td>128×8×8</td><td></td></tr><tr><td>Resnet-block</td><td>256×8×8</td><td>128→256</td></tr><tr><td>AvgPool2D</td><td>256×4×4</td><td></td></tr><tr><td>1×1 Conv2D</td><td>2·256×4×4</td><td>256→2·256</td></tr><tr><td>Sampling</td><td>256×4×4</td><td></td></tr><tr><td>FC</td><td>1</td><td>256·4·4→1</td></tr></table>

![](images/24_0.jpg) <center>Figure 16: Random results on CIFAR-10 (Krizhevsky et al.): GAN (Goodfellow et al., 2014) FID: 63.6, instance noise (Sonderby et al., 2016; Arjovsky & Bottou, 2017) FID: 30.7, spectral normalization (SN) (Miyato et al., 2018) FID: 23.9, gradient penalty (GP) (Mescheder et al., 2018) FID: 22.6, WGAN-GP (Gulrajani et al., 2017b) FID: 19.9, and the proposed VGAN-GP FID: 18.1. The samples produced by VGAN-GP (right) look the most realistic, with objects such as vehicles being discernible.</center>

![](images/25_0.jpg) <center>Figure 17: Random VGAN samples on CelebA \(128 \times 128\) at 300k iterations.</center>

![](images/26_0.jpg) <center>Figure 18: VGAN samples on CelebAHQ (Karras et al., 2018) at \(1024 \times 1024\) resolution at 300k iterations. Models are trained from scratch at full resolution, without the progressive scheme proposed by Karras et al. (2017).</center>
## ABSTRACT

Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that the VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods. (Video)

## 1 INTRODUCTION

Adversarial learning methods provide a promising approach to modeling distributions over high-dimensional data with complex internal correlation structures. These methods generally use a discriminator to supervise the training of a generator in order to produce samples that are indistinguishable from the data. A particular instantiation is generative adversarial networks, which can be used for high-fidelity generation of images (Goodfellow et al., 2014; Karras et al., 2017) and other high-dimensional data (Vondrick et al., 2016; Xie et al., 2018; Donahue et al., 2018). Adversarial methods can also be used to learn reward functions in the framework of inverse reinforcement learning (Finn et al., 2016a; Fu et al., 2017), or to directly imitate demonstrations (Ho & Ermon, 2016). However, they suffer from major optimization challenges, one of which is balancing the performance of the generator and discriminator. A discriminator that achieves very high accuracy can produce relatively uninformative gradients, but a weak discriminator can also hamper the generator's ability to learn. These challenges have led to widespread interest in a variety of stabilization methods for adversarial learning algorithms (Arjovsky et al., 2017; Kodali et al., 2017; Berthelot et al., 2017). In this work, we propose a simple regularization technique for adversarial learning, which constrains the information flow from the inputs to the discriminator using a variational approximation to the information bottleneck.
By enforcing a constraint on the mutual information between the input observations and the discriminator's internal representation, we can encourage the discriminator to learn a representation that has heavy overlap between the data and the generator's distribution, thereby effectively modulating the discriminator's accuracy and maintaining useful and informative gradients for the generator.

![](images/1_0.jpg) <center>Figure 1: Our method is general and can be applied to a broad range of adversarial learning tasks. Left: Motion imitation with adversarial imitation learning. Middle: Image generation. Right: Learning transferable reward functions through adversarial inverse reinforcement learning.</center>

Our approach to stabilizing adversarial learning can be viewed as an adaptive variant of instance noise (Salimans et al., 2016; Sonderby et al., 2016; Arjovsky & Bottou, 2017). However, we show that the adaptive nature of this method is critical. Constraining the mutual information between the discriminator's internal representation and the input allows the regularizer to directly limit the discriminator's accuracy, which automates the choice of noise magnitude and applies this noise to a compressed representation of the input that is specifically optimized to model the most discerning differences between the generator and data distributions.

The main contribution of this work is the variational discriminator bottleneck (VDB), an adaptive stochastic regularization method for adversarial learning that substantially improves performance across a range of different application domains, examples of which are available in Figure 1. Our method can be easily applied to a variety of tasks and architectures. First, we evaluate our method on a suite of challenging imitation tasks, including learning highly acrobatic skills from mocap data with a simulated humanoid character. Our method also enables characters to learn dynamic continuous control skills directly from raw video demonstrations, and drastically improves upon previous work that uses adversarial imitation learning. We further evaluate the effectiveness of the technique for inverse reinforcement learning, which recovers a reward function from demonstrations in order to train future policies. Finally, we apply our framework to image generation using generative adversarial networks, where employing the VDB improves performance in many cases.

## 2 RELATED WORK

Recent years have seen an explosion of adversarial learning techniques, spurred by the success of generative adversarial networks (GANs) (Goodfellow et al., 2014). A GAN framework is commonly composed of a discriminator and a generator, where the discriminator's objective is to classify samples as real or fake, while the generator's objective is to produce samples that fool the discriminator. Similar frameworks have also been proposed for inverse reinforcement learning (IRL) (Finn et al., 2016b) and imitation learning (Ho & Ermon, 2016). The training of adversarial models can be extremely unstable, with one of the most prevalent challenges being balancing the interplay between the discriminator and the generator (Berthelot et al., 2017). The discriminator can often overpower the generator, easily differentiating between real and fake samples, thus providing the generator with uninformative gradients for improvement (Che et al., 2016). Alternative loss functions have been proposed to mitigate this problem (Mao et al., 2016; Zhao et al., 2016; Arjovsky et al., 2017).
Regularizers have been incorporated to improve stability and convergence, such as gradient penalties (Kodali et al., 2017; Gulrajani et al., 2017a; Mescheder et al., 2018), reconstruction loss (Che et al., 2016), and a myriad of other heuristics (Sonderby et al., 2016; Salimans et al., 2016; Arjovsky & Bottou, 2017; Berthelot et al., 2017). Task-specific architectural designs can also substantially improve performance (Radford et al., 2015; Karras et al., 2017). Similarly, our method also aims to regularize the discriminator in order to improve the feedback provided to the generator. But instead of explicit regularization of gradients or architecture-specific constraints, we apply a general information bottleneck to the discriminator, which previous works have shown to encourage networks to ignore irrelevant cues (Achille & Soatto, 2017). We hypothesize that this then allows the generator to focus on improving the most discerning differences between real and fake samples.

Adversarial techniques have also been applied to inverse reinforcement learning (Fu et al., 2017), where a reward function is recovered from demonstrations, which can then be used to train policies to reproduce a desired skill. Finn et al. (2016a) showed an equivalence between maximum entropy IRL and GANs. Similar techniques have been developed for adversarial imitation learning (Ho & Ermon, 2016; Merel et al., 2017), where agents learn to imitate demonstrations without explicitly recovering a reward function. One advantage of adversarial methods is that by leveraging a discriminator in place of a reward function, they can be applied to imitate skills where reward functions can be difficult to engineer. However, the performance of policies trained through adversarial methods still falls short of those produced by manually designed reward functions, when such reward functions are available (Rajeswaran et al., 2017; Peng et al., 2018). We show that our method can significantly improve upon previous works that use adversarial techniques, and produces results of comparable quality to those from state-of-the-art approaches that utilize manually engineered reward functions.

Our variational discriminator bottleneck is based on the information bottleneck (Tishby & Zaslavsky, 2015), a technique for regularizing internal representations to minimize the mutual information with the input. Intuitively, a compressed representation can improve generalization by ignoring irrelevant distractors present in the original input. The information bottleneck can be instantiated in practical deep models by leveraging a variational bound and the reparameterization trick, inspired by a similar approach in variational autoencoders (VAE) (Kingma & Welling, 2013). The resulting variational information bottleneck approximates this compression effect in deep networks (Alemi et al., 2016; Achille & Soatto, 2017). A similar bottleneck has also been applied to learn disentangled representations (Higgins et al., 2017). Building on the success of VAEs and GANs, a number of efforts have been made to combine the two. Makhzani et al. (2016) used adversarial discriminators during the training of VAEs to encourage the marginal distribution of the latent encoding to be similar to the prior distribution; similar techniques include Mescheder et al. (2017) and Chen et al. (2018). Conversely, Larsen et al. (2016) modeled the generator of a GAN using a VAE. Zhao et al.
(2016) used an autoencoder instead of a VAE to model the discriminator, but did not enforce an information bottleneck on the encoding. While instance noise is widely used in modern architectures (Salimans et al., 2016; Sonderby et al., 2016; Arjovsky & Bottou, 2017), we show that explicitly enforcing an information bottleneck leads to improved performance over simply adding noise for a variety of applications.

## 3 PRELIMINARIES

In this section, we provide a review of the variational information bottleneck proposed by Alemi et al. (2016) in the context of supervised learning. Our variational discriminator bottleneck is based on the same principle, and can be instantiated in the context of GANs, inverse RL, and imitation learning. Given a dataset \(\{\mathbf{x}_i,\mathbf{y}_i\}\), with features \(\mathbf{x}_i\) and labels \(\mathbf{y}_i\), the standard maximum likelihood estimate \(q(\mathbf{y}_i|\mathbf{x}_i)\) can be determined according to

\[\min_{q}\mathbb{E}_{\mathbf{x},\mathbf{y}\sim p(\mathbf{x},\mathbf{y})}\left[-\log q(\mathbf{y}|\mathbf{x})\right]. \quad (1)\]

Unfortunately, this estimate is prone to overfitting, and the resulting model can often exploit idiosyncrasies in the data (Krizhevsky et al., 2012; Srivastava et al., 2014). Alemi et al. (2016) proposed regularizing the model using an information bottleneck to encourage the model to focus only on the most discriminative features. The bottleneck can be incorporated by first introducing an encoder \(E(\mathbf{z}|\mathbf{x})\) that maps the features \(\mathbf{x}\) to a latent distribution over \(Z\), and then enforcing an upper bound \(I_{c}\) on the mutual information between the encoding and the original features \(I(X,Z)\). This results in the following regularized objective \(J(q,E)\):

\[\begin{array}{r l} & {J(q,E) = \min_{q,E}\mathbb{E}_{\mathbf{x},\mathbf{y}\sim p(\mathbf{x},\mathbf{y})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log q(\mathbf{y}|\mathbf{z})\right]\right]}\\ & {\qquad \mathrm{s.t.}\quad I(X,Z)\leq I_{c}.} \end{array} \quad (2)\]

Note that the model \(q(\mathbf{y}|\mathbf{z})\) now maps samples from the latent distribution \(\mathbf{z}\) to the label \(\mathbf{y}\). The mutual information is defined according to

\[I(X,Z) = \int p(\mathbf{x},\mathbf{z})\log \frac{p(\mathbf{x},\mathbf{z})}{p(\mathbf{x})p(\mathbf{z})}\, d\mathbf{x}\, d\mathbf{z} = \int p(\mathbf{x})E(\mathbf{z}|\mathbf{x})\log \frac{E(\mathbf{z}|\mathbf{x})}{p(\mathbf{z})}\, d\mathbf{x}\, d\mathbf{z}, \quad (3)\]

where \(p(\mathbf{x})\) is the distribution given by the dataset. Computing the marginal distribution \(p(\mathbf{z}) = \int E(\mathbf{z}|\mathbf{x}) p(\mathbf{x})\, d\mathbf{x}\) can be challenging. Instead, a variational lower bound can be obtained by using an approximation \(r(\mathbf{z})\) of the marginal. Since \(\mathrm{KL}\left[p(\mathbf{z})\,\|\,r(\mathbf{z})\right]\geq 0\) implies \(\int p(\mathbf{z})\log p(\mathbf{z})\, d\mathbf{z}\geq \int p(\mathbf{z})\log r(\mathbf{z})\, d\mathbf{z}\), an upper bound on \(I(X,Z)\) can be obtained via the KL divergence,

\[I(X,Z)\leq \int p(\mathbf{x})E(\mathbf{z}|\mathbf{x})\log \frac{E(\mathbf{z}|\mathbf{x})}{r(\mathbf{z})}\, d\mathbf{x}\, d\mathbf{z} = \mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})\,\|\,r(\mathbf{z})\right]\right]. \quad (4)\]

![](images/3_0.jpg) <center>Figure 2: Left: Overview of the variational discriminator bottleneck.
The encoder first maps samples \(\mathbf{x}\) to a latent distribution \(E(\mathbf{z}|\mathbf{x})\) . The discriminator is then trained to classify samples \(\mathbf{z}\) from the latent distribution. An information bottleneck \(I(X,Z)\leq I_{c}\) is applied to \(Z\) . Right: Visualization of discriminators trained to differentiate two Gaussians with different KL bounds \(I_{c}\) . </center> This provides an upper bound on the regularized objective \(\tilde{J} (q,E)\geq J(q,E)\) \[\begin{array}{r l} & {\tilde{J} (q,E) = \underset {q,E}{\min}\quad \mathbb{E}_{\mathbf{x},\mathbf{y}\sim p(\mathbf{x},\mathbf{y})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log q(\mathbf{y}|\mathbf{z})\right]\right]}\\ & {\qquad \mathrm{s.t.}\quad \mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right]\leq I_{c}.} \end{array} \quad (5)\] To solve this problem, the constraint can be subsumed into the objective with a coefficient \(\beta\) \[\min_{q,E}\mathbb{E}_{\mathbf{x},\mathbf{y}\sim p(\mathbf{x},\mathbf{y})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log q(\mathbf{y}|\mathbf{z})\right]\right] + \beta \left(\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right] - I_{c}\right). \quad (6)\] Alemi et al. (2016) evaluated the method on supervised learning tasks, and showed that models trained with a VIB can be less prone to overfitting and more robust to adversarial examples. ## 4 VARIATIONAL DISCRIMINATOR BOTTLENECK To outline our method, we first consider a standard GAN framework consisting of a discriminator \(D\) and a generator \(G\) , where the goal of the discriminator is to distinguish between samples from the target distribution \(p^{*}(\mathbf{x})\) and samples from the generator \(G(\mathbf{x})\) , \[\max_{G}\min_{D}\quad \mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[-\log \left(D(\mathbf{x})\right)\right] + \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[-\log \left(1 - D(\mathbf{x})\right)\right].\] We incorporate a variational information bottleneck by introducing an encoder \(E\) into the discriminator that maps a sample \(\mathbf{x}\) to a stochastic encoding \(\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})\) , and then apply a constraint \(I_{c}\) on the mutual information \(I(X,Z)\) between the original features and the encoding. \(D\) is then trained to classify samples drawn from the encoder distribution. A schematic illustration of the framework is available in Figure 2. The regularized objective \(J(D,E)\) for the discriminator is given by \[\begin{array}{r l} & {J(D,E) = \underset {D,E}{\min}\quad \mathbb{E}_{x\sim p^{*}(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(D(\mathbf{z})\right)\right]\right] + \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(1 - D(\mathbf{z})\right)\right]\right]}\\ & {\qquad \mathrm{s.t.}\quad \mathbb{E}_{\mathbf{x}\sim \tilde{p} (\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right]\leq I_{c},} \end{array} \quad (7)\] with \(\tilde{p} = \frac{1}{2} p^{*} + \frac{1}{2} G\) being a mixture of the target distribution and the generator. We refer to this regularizer as the variational discriminator bottleneck (VDB). 
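As a concrete instance of the KL term appearing in the bound of Equation 4 and the constraint of Equation 7, for the Gaussian encoder and standard normal prior used later in the paper, the following is a minimal PyTorch sketch; parameterizing the diagonal covariance by a log-variance is our own convention.

```python
import torch

def gaussian_kl_to_standard_normal(mu, logvar):
    """KL[N(mu, diag(exp(logvar))) || N(0, I)] per sample; this is the
    quantity whose expectation is bounded by I_c. Shapes: (batch, dim)."""
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1)

# The expectation over x in Equation 4 is estimated with a minibatch mean.
mu, logvar = torch.randn(16, 128), torch.zeros(16, 128)
avg_kl = gaussian_kl_to_standard_normal(mu, logvar).mean()
```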
To optimize this objective, we can introduce a Lagrange multiplier \(\beta\) , \[\begin{array}{r l} & {J(D,E) = \underset {D,E}{\min}\underset {\beta \geq 0}{\max}\quad \mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(D(\mathbf{z})\right)\right]\right] + \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(1 - D(\mathbf{z})\right)\right]\right]}\\ & {\qquad \qquad +\beta \left(\mathbb{E}_{\mathbf{x}\sim \tilde{p} (\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right] - I_{c}\right).} \end{array} \quad (8)\] As we will discuss in Section 4.1 and demonstrate in our experiments, enforcing a specific mutual information budget between \(\mathbf{x}\) and \(\mathbf{z}\) is critical for good performance. We therefore adaptively update \(\beta\) via dual gradient descent to enforce a specific constraint \(I_{c}\) on the mutual information, \[\begin{array}{r l} & {D,E\leftarrow \underset {D,E}{\arg \min}\mathcal{L}(D,E,\beta)}\\ & {\beta \leftarrow \max \left(0,\beta +\alpha_{\beta}\left(\mathbb{E}_{\mathbf{x}\sim \tilde{p} (\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right] - I_{c}\right)\right),} \end{array} \quad (9)\] <--- Page Split ---> where \(\mathcal{L}(D,E,\beta)\) is the Lagrangian \[\begin{array}{r l} & {\mathcal{L}(D,E,\beta) = \mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(D(\mathbf{z})\right)\right]\right] + \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(1 - D(\mathbf{z})\right)\right]\right]}\\ & {\qquad +\beta \left(\mathbb{E}_{\mathbf{x}\sim \hat{p}(\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right] - I_{c}\right),} \end{array} \quad (10)\] and \(\alpha_{\beta}\) is the stepsize for the dual variable in dual gradient descent (Boyd & Vandenberghe, 2004). In practice, we perform only one gradient step on \(D\) and \(E\) , followed by an update to \(\beta\) . We refer to a GAN that incorporates a VDB as a variational generative adversarial network (VGAN). In our experiments, the prior \(r(\mathbf{z}) = \mathcal{N}(0,I)\) is modeled with a standard Gaussian. The encoder \(E(\mathbf{z}|\mathbf{x}) = \mathcal{N}(\mu_{E}(\mathbf{x}),\Sigma_{E}(\mathbf{x}))\) models a Gaussian distribution in the latent variables \(Z\) , with mean \(\mu_{E}(\mathbf{x})\) and diagonal covariance matrix \(\Sigma_{E}(\mathbf{x})\) . When computing the KL loss, each batch of data contains an equal number of samples from \(p^{*}(x)\) and \(G(x)\) . We use a simplified objective for the generator, \[\max_{G}\mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[-\log \left(1 - D(\mu_{E}(\mathbf{x}))\right)\right]. \quad (11)\] where the KL penalty is excluded from the generator's objective. Instead of computing the expectation over \(Z\) , we found that approximating the expectation by evaluating \(D\) at the mean \(\mu_{E}(\mathbf{x})\) of the encoder's distribution was sufficient for our tasks. The discriminator is modeled with a single linear unit followed by a sigmoid \(D(\mathbf{z}) = \sigma (\mathbf{w}_{D}^{T}\mathbf{z} + \mathbf{b}_{D})\) , with weights \(\mathbf{w}_{D}\) and bias \(\mathbf{b}_{D}\) . 
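A minimal sketch of the dual update from Equation 9, assuming Python; `avg_kl` stands for a minibatch estimate of \(\mathbb{E}_{\mathbf{x}\sim \tilde{p}(\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right]\), and the placeholder values are illustrative.

```python
def beta_step(beta, avg_kl, i_c=0.5, alpha_beta=1e-5):
    """One dual gradient step (Equation 9): beta grows while the average KL
    exceeds I_c, shrinks once the constraint is satisfied, and is clipped at 0."""
    return max(0.0, beta + alpha_beta * (avg_kl - i_c))

# Schematic use inside the training loop: after one gradient step on (D, E)
# minimizing L = classification_loss + beta * (avg_kl - i_c), update beta.
beta = 0.1
avg_kl = 0.72  # placeholder minibatch KL estimate
beta = beta_step(beta, avg_kl)
```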
### 4.1 DISCUSSION AND ANALYSIS

To interpret the effects of the VDB, we consider the results presented by Arjovsky & Bottou (2017), which show that for two distributions with disjoint support, the optimal discriminator can perfectly classify all samples and its gradients will be zero almost everywhere. Thus, as the discriminator converges to the optimum, the gradients for the generator vanish accordingly. To address this issue, Arjovsky & Bottou (2017) proposed applying continuous noise to the discriminator inputs, thereby ensuring that the distributions have continuous support everywhere. In practice, if the original distributions are sufficiently distant from each other, the added noise will have negligible effects. As shown by Mescheder et al. (2017), the optimal choice for the variance of the noise to ensure convergence can be quite delicate. In our method, by first using a learned encoder to map the inputs to an embedding and then applying an information bottleneck on the embedding, we can dynamically adjust the variance of the noise such that the distributions not only share support in the embedding space, but also have significant overlap. Since the minimum amount of information required for binary classification is 1 bit, by selecting an information constraint \(I_{c}< 1\), the discriminator is prevented from perfectly differentiating between the distributions. To illustrate the effects of the VDB, we consider a simple task of training a discriminator to differentiate between two Gaussian distributions. Figure 2 visualizes the decision boundaries learned with different bounds \(I_{c}\) on the mutual information. Without a VDB, the discriminator learns a sharp decision boundary, resulting in vanishing gradients for much of the space. But as \(I_{c}\) decreases and the bound tightens, the decision boundary is smoothed, providing more informative gradients that can be leveraged by the generator.

Taking this analysis further, we can extend Theorem 3.2 from Arjovsky & Bottou (2017) to analyze the VDB, and show that the gradient of the generator will be non-degenerate for a small enough constraint \(I_{c}\), under some additional simplifying assumptions. The result in Arjovsky & Bottou (2017) states that the gradient consists of vectors that point toward samples on the data manifold, multiplied by coefficients that depend on the noise. However, these coefficients may be arbitrarily small if the generated samples are far from real samples and the noise is not large enough. This can still cause the generator's gradient to vanish. In the case of the VDB, the constraint ensures that these coefficients are always bounded below. Due to space constraints, this result is presented in Appendix A.

### 4.2 VAIL: VARIATIONAL ADVERSARIAL IMITATION LEARNING

To extend the VDB to imitation learning, we start with the generative adversarial imitation learning (GAIL) framework (Ho & Ermon, 2016), where the discriminator's objective is to differentiate between the state distribution induced by a target policy \(\pi^{*}(\mathbf{s})\) and the state distribution of the agent's policy \(\pi (\mathbf{s})\),

\[\max_{\pi}\min_{D}\mathbb{E}_{\mathbf{s}\sim \pi^{*}(\mathbf{s})}\left[-\log \left(D(\mathbf{s})\right)\right] + \mathbb{E}_{\mathbf{s}\sim \pi (\mathbf{s})}\left[-\log \left(1 - D(\mathbf{s})\right)\right].\]

![](images/5_0.jpg) <center>Figure 3: Simulated humanoid performing various skills. VAIL is able to closely imitate a broad range of skills from mocap data.</center>
The discriminator is trained to maximize the likelihood assigned to states from the target policy, while minimizing the likelihood assigned to states from the agent's policy. The discriminator also serves as the reward function for the agent, which encourages the policy to visit states that, to the discriminator, appear indistinguishable from the demonstrations. Similar to the GAN framework, we can incorporate a VDB into the discriminator:

\[\begin{array}{r l} & {J(D,E) = \min_{D,E}\max_{\beta \geq 0}\ \mathbb{E}_{\mathbf{s}\sim \pi^{*}(\mathbf{s})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{s})}\left[-\log \left(D(\mathbf{z})\right)\right]\right] + \mathbb{E}_{\mathbf{s}\sim \pi (\mathbf{s})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{s})}\left[-\log \left(1 - D(\mathbf{z})\right)\right]\right]}\\ & {\qquad +\beta \left(\mathbb{E}_{\mathbf{s}\sim \hat{\pi} (\mathbf{s})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{s})\,\|\,r(\mathbf{z})\right]\right] - I_{c}\right),} \end{array} \quad (12)\]

where \(\hat{\pi} = \frac{1}{2}\pi^{*} + \frac{1}{2}\pi\) represents a mixture of the target policy and the agent's policy. The reward for \(\pi\) is then specified by the discriminator, \(r_{t} = - \log \left(1 - D(\mu_{E}(\mathbf{s}))\right)\). We refer to this method as variational adversarial imitation learning (VAIL).
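A minimal sketch of this reward computation, assuming PyTorch; the state dimension, the encoder module, and the clamp for numerical stability are illustrative assumptions, while the single linear unit followed by a sigmoid matches the discriminator described above.

```python
import torch
import torch.nn as nn

state_dim, z_dim = 64, 128  # placeholder state size; Z is 128D per the text
encoder_mean = nn.Linear(state_dim, z_dim)               # mu_E(s)
disc = nn.Sequential(nn.Linear(z_dim, 1), nn.Sigmoid())  # D(z): linear + sigmoid

def vail_reward(state):
    """r_t = -log(1 - D(mu_E(s))), evaluated at the encoder mean."""
    with torch.no_grad():
        d = disc(encoder_mean(state)).clamp(max=1.0 - 1e-6)
    return -torch.log(1.0 - d)

r_t = vail_reward(torch.randn(1, state_dim))
```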
### 4.3 VAIRL: VARIATIONAL ADVERSARIAL INVERSE REINFORCEMENT LEARNING

The VDB can also be applied to adversarial inverse reinforcement learning (Fu et al., 2017) to yield a new algorithm which we call variational adversarial inverse reinforcement learning (VAIRL). AIRL operates in a similar manner to GAIL, but with a discriminator of the form

\[D(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime}) = \frac{\exp\left(f(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\right)}{\exp\left(f(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})\right) + \pi(\mathbf{a}|\mathbf{s})}, \quad (13)\]

where \(f(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime}) = g(\mathbf{s},\mathbf{a}) + \gamma h(\mathbf{s}^{\prime}) - h(\mathbf{s})\), with \(g\) and \(h\) being learned functions. Under certain restrictions on the environment, Fu et al. show that if \(g(\mathbf{s},\mathbf{a})\) is defined to depend only on the current state \(\mathbf{s}\), the optimal \(g(\mathbf{s})\) recovers the expert's true reward function \(r^{*}(\mathbf{s})\) up to a constant, \(g^{*}(\mathbf{s}) = r^{*}(\mathbf{s}) + \mathrm{const}\). In this case, the learned reward can be re-used to train policies in environments with different dynamics, and will yield the same policy as if the policy were trained under the expert's true reward. In contrast, GAIL's discriminator typically cannot be re-optimized in this way (Fu et al., 2017). In VAIRL, we introduce stochastic encoders \(E_{g}(\mathbf{z}_{g}|\mathbf{s})\) and \(E_{h}(\mathbf{z}_{h}|\mathbf{s})\), and modify \(g(\mathbf{z}_{g})\) and \(h(\mathbf{z}_{h})\) to be functions of the encodings. We can then reformulate Equation 13 as

\[D(\mathbf{s},\mathbf{a},\mathbf{z}) = \frac{\exp\left(f(\mathbf{z}_{g},\mathbf{z}_{h},\mathbf{z}_{h}^{\prime})\right)}{\exp\left(f(\mathbf{z}_{g},\mathbf{z}_{h},\mathbf{z}_{h}^{\prime})\right) + \pi(\mathbf{a}|\mathbf{s})},\]

for \(\mathbf{z} = (\mathbf{z}_{g},\mathbf{z}_{h},\mathbf{z}_{h}^{\prime})\) and \(f(\mathbf{z}_{g},\mathbf{z}_{h},\mathbf{z}_{h}^{\prime}) = g(\mathbf{z}_{g}) + \gamma h(\mathbf{z}_{h}^{\prime}) - h(\mathbf{z}_{h})\). We then obtain a modified objective of the form

\[\begin{array}{r l} & {J(D,E) = \min_{D,E}\max_{\beta \geq 0}\ \mathbb{E}_{\mathbf{s},\mathbf{s}^{\prime}\sim \pi^{*}(\mathbf{s},\mathbf{s}^{\prime})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{s},\mathbf{s}^{\prime})}\left[-\log \left(D(\mathbf{s},\mathbf{a},\mathbf{z})\right)\right]\right]}\\ & {\qquad +\mathbb{E}_{\mathbf{s},\mathbf{s}^{\prime}\sim \pi (\mathbf{s},\mathbf{s}^{\prime})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{s},\mathbf{s}^{\prime})}\left[-\log \left(1 - D(\mathbf{s},\mathbf{a},\mathbf{z})\right)\right]\right]}\\ & {\qquad +\beta \left(\mathbb{E}_{\mathbf{s},\mathbf{s}^{\prime}\sim \hat{\pi}(\mathbf{s},\mathbf{s}^{\prime})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{s},\mathbf{s}^{\prime})\,\|\,r(\mathbf{z})\right]\right] - I_{c}\right),} \end{array}\]

where \(\pi (\mathbf{s},\mathbf{s}^{\prime})\) denotes the joint distribution of successive states from a policy, and \(E(\mathbf{z}|\mathbf{s},\mathbf{s}^{\prime}) = E_{g}(\mathbf{z}_{g}|\mathbf{s})\cdot E_{h}(\mathbf{z}_{h}|\mathbf{s})\cdot E_{h}(\mathbf{z}_{h}^{\prime}|\mathbf{s}^{\prime})\).

![](images/6_0.jpg) <center>Figure 4: Learning curves comparing VAIL to other methods for motion imitation. Performance is measured using the average joint rotation error between the simulated character and the reference motion. Each method is evaluated with 3 random seeds.</center>

Table 1: Average joint rotation error (radians) on humanoid motion imitation tasks. VAIL outperforms the other methods for all skills evaluated, except for policies trained using the manually-designed reward function from Peng et al. (2018). <table><tr><td>Method</td><td>Backflip</td><td>Cartwheel</td><td>Dance</td><td>Run</td><td>Spinkick</td></tr><tr><td>BC</td><td>3.01</td><td>2.88</td><td>2.93</td><td>2.63</td><td>2.88</td></tr><tr><td>Merel et al., 2017</td><td>1.33 ± 0.03</td><td>1.47 ± 0.12</td><td>2.61 ± 0.30</td><td>0.52 ± 0.04</td><td>1.82 ± 0.35</td></tr><tr><td>GAIL</td><td>0.74 ± 0.15</td><td>0.84 ± 0.05</td><td>1.31 ± 0.16</td><td>0.17 ± 0.03</td><td>1.07 ± 0.03</td></tr><tr><td>GAIL - noise</td><td>0.42 ± 0.02</td><td>0.92 ± 0.07</td><td>0.96 ± 0.08</td><td>0.21 ± 0.05</td><td>0.95 ± 0.14</td></tr><tr><td>GAIL - noise z</td><td>0.67 ± 0.12</td><td>0.72 ± 0.04</td><td>1.14 ± 0.08</td><td>0.14 ± 0.03</td><td>0.64 ± 0.09</td></tr><tr><td>GAIL - GP</td><td>0.62 ± 0.09</td><td>0.69 ± 0.05</td><td>0.80 ± 0.32</td><td>0.12 ± 0.02</td><td>0.64 ± 0.04</td></tr><tr><td>VAIL (ours)</td><td>0.36 ± 0.13</td><td>0.40 ± 0.08</td><td>0.40 ± 0.21</td><td>0.13 ± 0.01</td><td>0.34 ± 0.05</td></tr><tr><td>VAIL - GP (ours)</td><td>0.46 ± 0.17</td><td>0.31 ± 0.02</td><td>0.15 ± 0.01</td><td>0.10 ± 0.01</td><td>0.31 ± 0.02</td></tr><tr><td>Peng et al., 2018</td><td>0.26</td><td>0.21</td><td>0.20</td><td>0.14</td><td>0.19</td></tr></table>

## 5 EXPERIMENTS

We evaluate our method on adversarial learning problems in imitation learning, inverse reinforcement learning, and image generation. In the case of imitation learning, we show that the VDB enables agents to learn complex motion skills from a single demonstration, including visual demonstrations provided in the form of video clips. We also show that the VDB improves the performance of inverse RL methods. Inverse RL aims to reconstruct a reward function from a set of demonstrations, which can then be used to perform the task in new environments, in contrast to imitation learning, which aims to recover a policy directly.
Our method is also not limited to control tasks, and we demonstrate its effectiveness for unconditional image generation.

### 5.1 VAIL: VARIATIONAL ADVERSARIAL IMITATION LEARNING

The goal of the motion imitation tasks is to train a simulated character to mimic demonstrations provided by mocap clips recorded from human actors. Each mocap clip provides a sequence of target states \(\{\mathbf{s}_0^*,\mathbf{s}_1^*,\dots,\mathbf{s}_T^*\}\) that the character should track at each timestep. We use a similar experimental setup as Peng et al. (2018), with a 34 degrees-of-freedom humanoid character. We found that the discriminator architecture can greatly affect the performance on complex skills. The particular architecture we employ differs substantially from those used in prior work (Merel et al., 2017), details of which are available in Appendix C. The encoding \(Z\) is 128D, and an information constraint of \(I_{c} = 0.5\) is applied for all skills, with a dual stepsize of \(\alpha_{\beta} = 10^{- 5}\). All policies are trained using PPO (Schulman et al., 2017).

The motions learned by the policies are best seen in the supplementary video. Snapshots of the character's motions are shown in Figure 3. Each skill is learned from a single demonstration. VAIL is able to closely reproduce a variety of skills, including those that involve highly dynamic flips and complex contacts. We compare VAIL to a number of other techniques, including state-only GAIL (Ho & Ermon, 2016), GAIL with instance noise applied to the discriminator inputs (GAIL - noise), GAIL with instance noise applied to the last hidden layer (GAIL - noise z), and GAIL with a gradient penalty applied to the discriminator (GAIL - GP) (Mescheder et al., 2018). Since the VDB helps to prevent vanishing gradients, while GP mitigates exploding gradients, the two techniques can be seen as complementary. Therefore, we also train a model that combines both VAIL and GP (VAIL - GP). Implementation details for combining the VDB and GP are available in Appendix B.

![](images/7_0.jpg) <center>Figure 5: Left: Snapshots of the video demonstration and the simulated character trained with VAIL. The policy learns to run by directly imitating the video. Right: Saliency maps that visualize the magnitude of the discriminator's gradient with respect to all channels of the RGB input images from both the demonstration and the simulation. Pixel values are normalized to \([0,1]\).</center>

![](images/7_1.jpg) <center>Figure 6: Left: Learning curves comparing policies for the video imitation task trained using a pixel-wise loss as the reward, GAIL, and VAIL. Only VAIL successfully learns to run from a video demonstration. Middle: Effect of training with fixed values of \(\beta\) and adaptive \(\beta\) (\(I_{c} = 0.5\)). Right: KL loss over the course of training with adaptive \(\beta\). The dual gradient descent update for \(\beta\) effectively enforces the VDB constraint \(I_{c}\).</center>

Learning curves for the various methods are shown in Figure 10, and Table 1 summarizes the performance of the final policies. Performance is measured in terms of the average joint rotation error between the simulated character and the reference motion. We also include a reimplementation of the method described by Merel et al. (2017). For the purpose of our experiments, GAIL denotes policies trained using our particular architecture but without a VDB, and Merel et al. (2017) denotes policies trained using an architecture that closely mirrors those from previous work.
Furthermore, we include comparisons to policies trained using the handcrafted reward from Peng et al. (2018), as well as policies trained via behavioral cloning (BC). Since mocap data does not provide expert actions, we use the policies from Peng et al. (2018) as oracles to provide state-action demonstrations, which are then used to train the BC policies via supervised learning. Each BC policy is trained with 10k samples from the oracle policies, while all other policies are trained from just a single demonstration, the equivalent of approximately 100 samples.

VAIL consistently outperforms previous adversarial methods, and VAIL - GP achieves the best performance overall. Simply adding instance noise to the inputs (Salimans et al., 2016) or to the hidden layer without the KL constraint (Sonderby et al., 2016) leads to worse performance, since the network can learn a latent representation that renders the effects of the noise negligible. Though training with the handcrafted reward still outperforms the adversarial methods, VAIL demonstrates comparable performance to the handcrafted reward without manual reward or feature engineering, and produces motions that closely resemble the original demonstrations. The method from Merel et al. (2017) was able to imitate simple skills such as running, but was unable to reproduce more acrobatic skills such as the backflip and spinkick. In the case of running, our implementation produces more natural gaits than the results reported in Merel et al. (2017). Behavioral cloning is unable to reproduce any of the skills, despite being provided with substantially more demonstration data than the other methods.

Video Imitation: While our method achieves substantially better results on motion imitation when compared to prior work, previous methods can still produce reasonable behaviors. However, if the demonstrations are provided in the form of raw pixels from video clips, instead of mocap data, the imitation task becomes substantially harder. The goal of the agent is therefore to directly imitate the skill depicted in the video. This is also a setting where manually engineering rewards is impractical, since simple losses like pixel distance do not provide a semantically meaningful measure of similarity. Figure 6 compares learning curves of policies trained with VAIL, GAIL, and policies trained using a reward function defined by the average pixel-wise difference between the frame \(M_{t}^{*}\) from the video demonstration and a rendered image \(M_{t}\) of the agent at each timestep \(t\), \(r_{t} = 1 - \frac{1}{3\times 64^{2}}\| M_{t}^{*} - M_{t}\|^{2}\). Each frame is represented by a \(64\times 64\) RGB image. Both GAIL and the pixel-loss reward are unable to learn the running gait. VAIL is the only method that successfully learns to imitate the skill from the video demonstration.
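For reference, the pixel-wise reward baseline above amounts to a few lines of PyTorch; the frame tensors here are hypothetical placeholders.

```python
import torch

def pixel_reward(frame_demo, frame_agent):
    """r_t = 1 - (1 / (3 * 64^2)) * ||M*_t - M_t||^2 for 3x64x64 RGB frames
    with pixel values in [0, 1]."""
    diff = frame_demo - frame_agent
    return 1.0 - diff.pow(2).sum() / (3 * 64 ** 2)

r_t = pixel_reward(torch.rand(3, 64, 64), torch.rand(3, 64, 64))
```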
Snapshots of the video demonstration and the simulated motion are available in Figure 5. To further investigate the effects of the VDB, we visualize the gradient of the discriminator with respect to images from the video demonstration and the simulation. Saliency maps for discriminators trained with VAIL and GAIL are available in Figure 5. The VAIL discriminator learns to attend to spatially coherent image patches around the character, while the GAIL discriminator exhibits less structure. The magnitude of the gradients from VAIL also tends to be significantly larger than those from GAIL, which may suggest that VAIL is able to mitigate the problem of vanishing gradients present in GAIL.

Adaptive Constraint: To evaluate the effects of the adaptive \(\beta\) updates, we compare policies trained with different fixed values of \(\beta\) and policies where \(\beta\) is updated adaptively to enforce a desired information constraint \(I_{c} = 0.5\). Figure 6 illustrates the learning curves and the KL loss over the course of training. When \(\beta\) is too small, performance reverts to that achieved by GAIL. Large values of \(\beta\) help to smooth the discriminator landscape and improve learning speed during the early stages of training, but converge to worse final performance. Policies trained using dual gradient descent to adaptively update \(\beta\) consistently achieve the best performance overall.

### 5.2 VAIRL: VARIATIONAL ADVERSARIAL INVERSE REINFORCEMENT LEARNING

Next, we use VAIRL to recover reward functions from demonstrations. Unlike the discriminator learned by VAIL, the reward function recovered by VAIRL can be re-optimized to train new policies from scratch in the same environment. In some cases, it can also be used to transfer similar behaviour to different environments.

![](images/8_0.jpg) <center>Figure 7: Left: C-Maze and S-Maze. When trained on the training maze on the left, AIRL learns a reward that overfits to the training task, and which cannot be transferred to the mirrored maze on the right. In contrast, VAIRL learns a smoother reward function that enables more-reliable transfer. Right: Performance on flipped test versions of our two training mazes. We report mean return \((\pm\) std. dev.) over five runs, and the mean return for the expert used to generate demonstrations.</center>

In Figure 7, we show the results of applying VAIRL to the C-maze from Fu et al. (2017), and a more complex S-maze; the simple 2D observation spaces of these tasks make it easy to interpret the recovered reward functions. In both mazes, the expert is trained to navigate from a start position at the bottom of the maze to a fixed target position at the top. We use each method to obtain an imitation policy and to approximate the expert's reward on the original maze. The recovered reward is then used to train a new policy to solve a left-right flipped version of the training maze. On the C-maze, we found that plain AIRL, without a gradient penalty, would sometimes overfit and fail to transfer to the new environment, as evidenced by the reward visualization in Figure 7 (left) and the higher return variance in Figure 7 (right). In contrast, by incorporating a VDB into AIRL, VAIRL learns a substantially smoother reward function that is more suitable for transfer. Furthermore, we found that in the S-maze with two internal walls, AIRL was too unstable to acquire a meaningful reward function, even with the use of a gradient penalty. In contrast, VAIRL was able to learn a reasonable reward in most cases without a gradient penalty, and its performance improved even further with the addition of a gradient penalty.
To evaluate the effects of the VDB, we observe that the performance of VAIRL drops on both tasks when the KL constraint is disabled \((\beta = 0)\), suggesting that the improvements from the VDB cannot be attributed entirely to the noise introduced by the sampling process for \(\mathbf{z}\). Further details of these experiments and illustrations of the recovered reward functions are available in Appendix D.

### 5.3 VGAN: VARIATIONAL GENERATIVE ADVERSARIAL NETWORKS

Finally, we apply the VDB to image generation with generative adversarial networks, which we refer to as VGAN. Experiments are conducted on the CIFAR-10 (Krizhevsky et al.), CelebA (Liu et al., 2015), and CelebAHQ (Karras et al., 2018) datasets. We compare our approach to recent stabilization techniques: WGAN-GP (Gulrajani et al., 2017b), instance noise (Sonderby et al., 2016; Arjovsky & Bottou, 2017), spectral normalization (SN) (Miyato et al., 2018), and gradient penalty (GP) (Mescheder et al., 2018), as well as the original GAN (Goodfellow et al., 2014) on CIFAR-10. To measure performance, we report the Fréchet Inception Distance (FID) (Heusel et al., 2017), which has been shown to be more consistent with human evaluation. All methods are implemented using the same base model, built on the resnet architecture of Mescheder et al. (2018). Aside from tuning the KL constraint \(I_{c}\) for VGAN, no additional hyperparameter optimization was performed to modify the settings provided by Mescheder et al. (2018).

![](images/9_0.jpg) <center>Figure 8: Comparison of VGAN and other methods on CIFAR-10, with performance evaluated using the Fréchet Inception Distance (FID).</center>

![](images/9_1.jpg) <center>Figure 9: VGAN samples on CIFAR-10, CelebA \(128 \times 128\), and CelebAHQ \(1024 \times 1024\).</center>

The performance of the various methods on CIFAR-10 is shown in Figure 8. While the vanilla GAN and instance noise are prone to diverging as training progresses, VGAN remains stable. Note that instance noise can be seen as a non-adaptive version of VGAN without constraints on \(I_{c}\). This experiment again highlights that there is a significant improvement from imposing the information bottleneck over simply adding instance noise. Combining both the VDB and the gradient penalty (VGAN - GP) achieves the best performance overall, with an FID of 18.1. We also experimented with combining the VDB with SN, but this combination is prone to diverging. See Figure 9 for samples of images generated with our approach. Please refer to Appendix E for experimental details and more results.

## 6 CONCLUSION

We present the variational discriminator bottleneck, a general regularization technique for adversarial learning. Our experiments show that the VDB is broadly applicable to a variety of domains, and yields significant improvements over previous techniques on a number of challenging tasks. While our experiments have produced promising results for video imitation, the results have been primarily with videos of synthetic scenes. We believe that extending the technique to imitating real-world videos is an exciting direction. Another exciting direction for future work is a more in-depth theoretical analysis of the method, to derive convergence and stability results or conditions.

## ACKNOWLEDGEMENTS

We would like to thank the anonymous reviewers for their helpful feedback, and AWS and NVIDIA for providing computational resources. This research was funded by an NSERC Postgraduate Scholarship, a Berkeley Fellowship for Graduate Study, BAIR, Huawei, and ONR PECASE N000141612723.

## SUPPLEMENTARY MATERIAL

## A ANALYSIS AND PROOFS

In this appendix, we show that the gradient passed to the generator is non-degenerate when the discriminator is augmented with the VDB, under some mild additional assumptions.
First, we assume a pointwise constraint of the form \(\mathrm{KL}[E(\mathbf{z}|\mathbf{x})\,\|\, r(\mathbf{z})]\leq I_{c}\) for all \(\mathbf{x}\). In reality, we use an average KL constraint, since we found it more convenient to optimize, though a pointwise constraint is also possible to enforce by using the largest constraint violation to increment \(\beta\). We could likely also extend the analysis to the average constraint, though we leave this to future work. The main theorem can then be stated as follows:

Theorem A.1. Let \(g(\mathbf{u})\) denote the generator's mapping from a noise vector \(\mathbf{u}\sim p(\mathbf{u})\) to a point in \(X\). Given the generator distribution \(G(\mathbf{x})\) and data distribution \(p^{*}(\mathbf{x})\), a VDB with an encoder \(E(\mathbf{z}|\mathbf{x}) = \mathcal{N}(\mu_{E}(\mathbf{x}),\Sigma)\), and \(\mathrm{KL}[E(\mathbf{z}|\mathbf{x})\,\|\, r(\mathbf{z})]\leq I_{c}\), the gradient passed to the generator has the form

\[\begin{array}{r l} & {\nabla_{g}\mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}[\log (1 - D^{*}(\mu_{E}(g(\mathbf{u}))))]}\\ & {\qquad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[a(\mathbf{u})\int E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\nabla_{g}\|\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\|^{2}\, dp^{*}(\mathbf{x})\right.}\\ & {\qquad \qquad \left. - b(\mathbf{u})\int E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\nabla_{g}\|\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\|^{2}\, dG(\mathbf{x})\right],} \end{array}\]

where \(D^{*}(\mathbf{z})\) is the optimal discriminator, \(a(\mathbf{u})\) and \(b(\mathbf{u})\) are positive functions, and we always have \(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x}) > C(I_{c})\), where \(C(I_{c})\) is a continuous monotonic function, and \(C(I_{c})\to \delta >0\) as \(I_{c}\to 0\).

Analysis for an encoder with an input-dependent variance \(\Sigma (\mathbf{x})\) is also possible, but more involved. We further assume below, for notational simplicity, that \(\Sigma\) is diagonal with diagonal values \(\sigma^{2}\). This assumption is not required, but substantially simplifies the linear algebra. Analogously to Theorem 3.2 from Arjovsky & Bottou (2017), this theorem states that the gradient of the generator points in the direction of points in the data distribution, and away from points in the generator distribution. However, going beyond the theorem in Arjovsky & Bottou (2017), this result states that the coefficients on these vectors, given by \(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\), are always bounded below by a value that approaches a positive constant \(\delta\) as we decrease \(I_{c}\), meaning that the gradient does not vanish.

The proof of the first part of this theorem is essentially identical to the proof presented by Arjovsky & Bottou (2017), but accounting for the fact that the noise is now injected into the latent space of the VDB, rather than being added directly to \(\mathbf{x}\). This result assumes that \(E(\mathbf{z}|\mathbf{x})\) has a learned but input-independent variance \(\Sigma = \sigma^{2}I\), though the proof can be repeated for an input-dependent or non-diagonal \(\Sigma\).

Proof. Overloading \(p^{*}(\mathbf{x})\) and \(G(\mathbf{x})\), let \(p^{*}(\mathbf{z})\) and \(G(\mathbf{z})\) be the distributions of embeddings \(\mathbf{z}\) under the real data and the generator, respectively.
\(p^{*}(\mathbf{z})\) is then given by

\[p^{*}(\mathbf{z}) = \mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[E(\mathbf{z}|\mathbf{x})\right] = \int E(\mathbf{z}|\mathbf{x})\, dp^{*}(\mathbf{x}),\]

and similarly for \(G(\mathbf{z})\),

\[G(\mathbf{z}) = \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[E(\mathbf{z}|\mathbf{x})\right] = \int E(\mathbf{z}|\mathbf{x})\, dG(\mathbf{x}).\]

From Arjovsky & Bottou (2017), the optimal discriminator between \(p^{*}(\mathbf{z})\) and \(G(\mathbf{z})\) is

\[D^{*}(\mathbf{z}) = \frac{p^{*}(\mathbf{z})}{p^{*}(\mathbf{z}) + G(\mathbf{z})}.\]

Writing \(\log \left(1 - D^{*}(\mathbf{z})\right) = \log G(\mathbf{z}) - \log \left(p^{*}(\mathbf{z}) + G(\mathbf{z})\right)\) and abbreviating \(\mathbf{z}_{\mathbf{u}} = \mu_{E}(g(\mathbf{u}))\), the gradient passed to the generator has the form

\[\begin{array}{r l} & {\nabla_{g}\mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[\log \left(1 - D^{*}(\mathbf{z}_{\mathbf{u}})\right)\right]}\\ & {\qquad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[\nabla_{g}\log G(\mathbf{z}_{\mathbf{u}}) - \nabla_{g}\log \left(p^{*}(\mathbf{z}_{\mathbf{u}}) + G(\mathbf{z}_{\mathbf{u}})\right)\right]}\\ & {\qquad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[\frac{\nabla_{g}G(\mathbf{z}_{\mathbf{u}})}{G(\mathbf{z}_{\mathbf{u}})} - \frac{\nabla_{g}p^{*}(\mathbf{z}_{\mathbf{u}}) + \nabla_{g}G(\mathbf{z}_{\mathbf{u}})}{p^{*}(\mathbf{z}_{\mathbf{u}}) + G(\mathbf{z}_{\mathbf{u}})}\right]}\\ & {\qquad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[\frac{1}{p^{*}(\mathbf{z}_{\mathbf{u}}) + G(\mathbf{z}_{\mathbf{u}})}\nabla_{g}\left[-p^{*}(\mathbf{z}_{\mathbf{u}})\right] - \frac{p^{*}(\mathbf{z}_{\mathbf{u}})}{G(\mathbf{z}_{\mathbf{u}})\left(p^{*}(\mathbf{z}_{\mathbf{u}}) + G(\mathbf{z}_{\mathbf{u}})\right)}\nabla_{g}\left[-G(\mathbf{z}_{\mathbf{u}})\right]\right].} \end{array}\]

Let

\[a(\mathbf{u}) = \frac{1}{2\sigma^{2}}\cdot \frac{1}{p^{*}(\mathbf{z}_{\mathbf{u}}) + G(\mathbf{z}_{\mathbf{u}})},\qquad b(\mathbf{u}) = \frac{1}{2\sigma^{2}}\cdot \frac{p^{*}(\mathbf{z}_{\mathbf{u}})}{G(\mathbf{z}_{\mathbf{u}})\left(p^{*}(\mathbf{z}_{\mathbf{u}}) + G(\mathbf{z}_{\mathbf{u}})\right)}.\]

Since \(E(\mathbf{z}|\mathbf{x}) = \frac{1}{Z}\exp \left(-\frac{1}{2\sigma^{2}}\|\mathbf{z} - \mu_{E}(\mathbf{x})\|^{2}\right)\) with normalization constant \(Z\), we have \(-\nabla_{g}E(\mathbf{z}_{\mathbf{u}}|\mathbf{x}) = \frac{1}{2\sigma^{2}}E(\mathbf{z}_{\mathbf{u}}|\mathbf{x})\nabla_{g}\|\mathbf{z}_{\mathbf{u}} - \mu_{E}(\mathbf{x})\|^{2}\), and therefore

\[\begin{array}{r l} & {\nabla_{g}\mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[\log \left(1 - D^{*}(\mathbf{z}_{\mathbf{u}})\right)\right]}\\ & {\qquad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[2\sigma^{2}a(\mathbf{u})\nabla_{g}\left[-p^{*}(\mathbf{z}_{\mathbf{u}})\right] - 2\sigma^{2}b(\mathbf{u})\nabla_{g}\left[-G(\mathbf{z}_{\mathbf{u}})\right]\right]}\\ & {\qquad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[2\sigma^{2}a(\mathbf{u})\int -\nabla_{g}E(\mathbf{z}_{\mathbf{u}}|\mathbf{x})\, dp^{*}(\mathbf{x}) - 2\sigma^{2}b(\mathbf{u})\int -\nabla_{g}E(\mathbf{z}_{\mathbf{u}}|\mathbf{x})\, dG(\mathbf{x})\right]}\\ & {\qquad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[a(\mathbf{u})\int E(\mathbf{z}_{\mathbf{u}}|\mathbf{x})\nabla_{g}\|\mathbf{z}_{\mathbf{u}} - \mu_{E}(\mathbf{x})\|^{2}\, dp^{*}(\mathbf{x})\right.}\\ & {\qquad \qquad \left. - b(\mathbf{u})\int E(\mathbf{z}_{\mathbf{u}}|\mathbf{x})\nabla_{g}\|\mathbf{z}_{\mathbf{u}} - \mu_{E}(\mathbf{x})\|^{2}\, dG(\mathbf{x})\right].} \end{array}\]
- b(\mathbf{u})\int \frac{1}{Z}\exp \left(-\frac{1}{2\sigma^{2}}\|\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\|^{2}\right)\nabla_{g}\|\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\|^{2}\,d G(\mathbf{x})\right]\] \[\qquad = \mathbb{E}_{\mathbf{u}\sim p(\mathbf{u})}\left[a(\mathbf{u})\int E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\nabla_{g}\|\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\|^{2}\,d p^{*}(\mathbf{x})\right.\] \[\qquad \left. - b(\mathbf{u})\int E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\nabla_{g}\|\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\|^{2}\,d G(\mathbf{x})\right].\] Similarly to the result from Arjovsky & Bottou (2017), the gradient of the generator drives the generator's samples in the embedding space \(\mu_{E}(g(\mathbf{u}))\) towards embeddings of the points from the dataset \(\mu_{E}(\mathbf{x})\) , weighted by their likelihood \(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\) under the real data. For an arbitrary encoder \(E\) , real and fake samples in the embedding space may be far apart. As such, the coefficients \(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\) can be arbitrarily small, thereby resulting in vanishing gradients for the generator. The second part of the theorem states that \(C(I_{c})\) is a continuous monotonic function, and \(C(I_{c})\to \delta >0\) as \(I_{c}\to 0\) . This is the main result, and it relies on the fact that \(\mathrm{KL}[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})]\leq I_{c}\) . The intuition behind this result is that, for any two inputs \(\mathbf{x}\) and \(\mathbf{y}\) , their encoded distributions \(E(\mathbf{z}|\mathbf{x})\) and \(E(\mathbf{z}|\mathbf{y})\) have means that cannot be more than some distance apart, and that distance shrinks with \(I_{c}\) . This allows us to bound \(E(\mu_{E}(\mathbf{y})|\mathbf{x})\) below by \(C(I_{c})\) , which ensures that the coefficients on the vectors in the theorem above are always at least as large as \(C(I_{c})\) . <--- Page Split ---> Proof. Let \(r(\mathbf{z}) = \mathcal{N}(0, I)\) be the prior distribution and suppose the KL divergence is bounded by \(I_{c}\) for all \(\mathbf{x}\) in the dataset and all \(g(\mathbf{u})\) produced by the generator: \[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\leq I_{c},\quad \forall\, \mathbf{x}\sim p^{*}(\mathbf{x}),\] \[\mathrm{KL}\left[E(\mathbf{z}|g(\mathbf{u}))||r(\mathbf{z})\right]\leq I_{c},\quad \forall\, \mathbf{u}\sim p(\mathbf{u}).\] From the definition of the KL-divergence we can bound the length of all embedding vectors, \[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right] = \frac{1}{2}\left(K\sigma^{2} + \mu_{E}(\mathbf{x})^{T}\mu_{E}(\mathbf{x}) - K - K\log \sigma^{2}\right)\leq I_{c}\] \[\qquad \|\mu_{E}(\mathbf{x})\|^{2}\leq 2I_{c} - K\sigma^{2} + K + K\log \sigma^{2},\] and similarly for \(\| \mu_{E}(g(\mathbf{u}))\|^{2}\) , with \(K\) denoting the dimension of \(Z\) . 
A lower bound on \(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\) , where \(\mathbf{u}\sim p(\mathbf{u})\) and \(\mathbf{x}\sim p^{*}(\mathbf{x})\) , can then be determined by \[\log \left(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\right) = -\frac{1}{2\sigma^{2}}\left(\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\right)^{T}\left(\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\right) - \frac{K}{2}\log \sigma^{2} - \frac{K}{2}\log 2\pi .\] Since \(\| \mu_{E}(\mathbf{x})\|^{2}, \| \mu_{E}(g(\mathbf{u}))\|^{2}\leq 2I_{c} - K\sigma^{2} + K + K\log \sigma^{2}\) , and using \(\|\mathbf{a} - \mathbf{b}\|^{2}\leq 2\|\mathbf{a}\|^{2} + 2\|\mathbf{b}\|^{2}\) , \[\left\|\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\right\|^{2}\leq 8I_{c} - 4K\sigma^{2} + 4K + 4K\log \sigma^{2},\] and it follows that \[-\frac{1}{2\sigma^{2}}\left\|\mu_{E}(g(\mathbf{u})) - \mu_{E}(\mathbf{x})\right\|^{2}\geq -4\sigma^{-2}I_{c} + 2K - 2K\sigma^{-2} - 2K\sigma^{-2}\log \sigma^{2}.\] The likelihood is therefore bounded below by \[\log \left(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\right)\geq -4\sigma^{-2}I_{c} + 2K - 2K\sigma^{-2} - 2K\sigma^{-2}\log \sigma^{2} - \frac{K}{2}\log \sigma^{2} - \frac{K}{2}\log 2\pi .\] Since \(- \sigma^{- 2} - \sigma^{- 2}\log \sigma^{2}\geq - 1\) , the middle terms are non-negative and \[\log \left(E(\mu_{E}(g(\mathbf{u}))|\mathbf{x})\right)\geq -4\sigma^{-2}I_{c} - \frac{K}{2}\log \sigma^{2} - \frac{K}{2}\log 2\pi . \quad (14)\] From the KL constraint, we can derive a lower bound \(\ell (I_{c})\) and an upper bound \(\mathcal{U}(I_{c})\) on \(\sigma^{2}\) . Dropping the non-negative term \(\mu_{E}(\mathbf{x})^{T}\mu_{E}(\mathbf{x})\) , \[\frac{1}{2}\left(K\sigma^{2} + \mu_{E}(\mathbf{x})^{T}\mu_{E}(\mathbf{x}) - K - K\log \sigma^{2}\right)\leq I_{c}\] \[\sigma^{2} - 1 - \log \sigma^{2}\leq \frac{2I_{c}}{K}\] \[\log \sigma^{2}\geq -\frac{2I_{c}}{K} -1\] \[\sigma^{2}\geq \exp \left(-\frac{2I_{c}}{K} -1\right) = \ell (I_{c}).\] For the upper bound, since \(\sigma^{2} - \log \sigma^{2} > \frac{1}{2}\sigma^{2}\) , \[\sigma^{2} - 1 - \log \sigma^{2}\leq \frac{2I_{c}}{K}\] \[\frac{1}{2}\sigma^{2} - 1< \frac{2I_{c}}{K}\] \[\sigma^{2}< \frac{4I_{c}}{K} +2 = \mathcal{U}(I_{c}).\] Substituting \(\ell (I_{c})\) and \(\mathcal{U}(I_{c})\) into Equation 14, we arrive at the following lower bound: \[E(\mu_{E}(g(\mathbf{u}))|\mathbf{x}) > \exp \left(-4I_{c}\exp \left(\frac{2I_{c}}{K} +1\right) - \frac{K}{2}\log \left(\frac{4I_{c}}{K} +2\right) - \frac{K}{2}\log 2\pi\right) = C(I_{c}).\] <--- Page Split ---> ## B GRADIENT PENALTY To combine the VDB with a gradient penalty, we use the reparameterization trick to backpropagate through the encoder when computing the gradient of the discriminator with respect to the inputs: \[\begin{array}{r l} & {J(D,E) = \underset {D,E}{\min}\ \mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(D(\mathbf{z})\right)\right]\right] + \mathbb{E}_{\mathbf{x}\sim G(\mathbf{x})}\left[\mathbb{E}_{\mathbf{z}\sim E(\mathbf{z}|\mathbf{x})}\left[-\log \left(1 - D(\mathbf{z})\right)\right]\right]}\\ & {\qquad +w_{GP}\,\mathbb{E}_{\mathbf{x}\sim p^{*}(\mathbf{x})}\left[\mathbb{E}_{\epsilon \sim \mathcal{N}(0,I)}\left[\frac{1}{2} \left\|\nabla_{\mathbf{x}}D(\mu_{E}(\mathbf{x}) + \Sigma_{E}(\mathbf{x})\epsilon)\right\|^{2}\right]\right]}\\ & {\qquad \mathrm{s.t.}\quad \mathbb{E}_{\mathbf{x}\sim \tilde{p} (\mathbf{x})}\left[\mathrm{KL}\left[E(\mathbf{z}|\mathbf{x})||r(\mathbf{z})\right]\right]\leq I_{c}.} \end{array} \quad (15)\] The coefficient \(w_{GP}\) weights the gradient penalty in the objective: \(w_{GP} = 10\) for image generation, \(w_{GP} = 1\) for motion imitation, and \(w_{GP} = 0.1\) (C-maze) or \(w_{GP} = 0.01\) (S-maze) for the IRL tasks. 
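As a concrete illustration of equation (15), the following is a minimal PyTorch-style sketch of a single discriminator update with the reparameterized bottleneck, a gradient penalty on real samples, and the dual update of \(\beta\). The module interfaces (`enc` returning the mean and standard deviation of \(E(\mathbf{z}|\mathbf{x})\), `disc` returning a probability) and all names are illustrative assumptions, not the actual implementation.

```python
import torch

def vdb_disc_step(enc, disc, x_real, x_fake, beta, i_c, w_gp, alpha_beta):
    """One VDB discriminator step under Eq. 15 (illustrative sketch).
    Assumes enc(x) -> (mu, std) of E(z|x) and disc(z) -> probability in (0, 1)."""
    x_real.requires_grad_(True)                      # needed for the gradient penalty
    mu_r, std_r = enc(x_real)
    mu_f, std_f = enc(x_fake)
    z_r = mu_r + std_r * torch.randn_like(std_r)     # reparameterization trick
    z_f = mu_f + std_f * torch.randn_like(std_f)

    d_r, d_f = disc(z_r), disc(z_f)
    gan_loss = (-torch.log(d_r) - torch.log(1.0 - d_f)).mean()

    # Gradient penalty on real samples only, taken with respect to the inputs
    # and backpropagated through the reparameterized encoder (Eq. 15).
    grad = torch.autograd.grad(d_r.sum(), x_real, create_graph=True)[0]
    gp = 0.5 * grad.pow(2).flatten(1).sum(dim=1).mean()

    # Average KL to the prior r(z) = N(0, I) over the real/fake mixture.
    kl = 0.25 * ((std_r.pow(2) + mu_r.pow(2) - 1 - 2 * std_r.log()).sum(1).mean()
                 + (std_f.pow(2) + mu_f.pow(2) - 1 - 2 * std_f.log()).sum(1).mean())

    loss = gan_loss + w_gp * gp + beta * (kl - i_c)
    new_beta = max(0.0, beta + alpha_beta * (kl - i_c).item())  # dual ascent on beta
    return loss, new_beta
```

The sketch isolates the bottleneck mechanics; in the actual experiments, the generator update and the optimizer settings reported elsewhere in this appendix would be layered on top.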
The gradient penalty is applied only to real samples \(p^{*}(\mathbf{x})\) . We experimented with applying the penalty to both real and fake samples, but found that performance was worse than penalizing only gradients from real samples. This is consistent with the GP implementation from Mescheder et al. (2018). ## C IMITATION LEARNING Experimental Setup: The goal of the motion imitation tasks is to train a simulated agent to mimic a demonstration provided in the form of a mocap clip recorded from a human actor. We use a similar experimental setup to Peng et al. (2018), with a 34 degrees-of-freedom humanoid character. The state \(\mathbf{s}\) consists of features that represent the configuration of the character's body (link positions and velocities). We also include a phase variable \(\phi \in [0,1]\) among the state features, which records the character's progress along the motion and helps to synchronize the character with the reference motion, with 0 and 1 denoting the start and end of the motion, respectively. The action \(\mathbf{a}\) sampled from the policy \(\pi (\mathbf{a}|\mathbf{s})\) specifies target poses for PD controllers positioned at each joint. Given a state, the policy specifies a Gaussian distribution over the action space \(\pi (\mathbf{a}|\mathbf{s}) = \mathcal{N}(\mu (\mathbf{s}),\Sigma)\) , with a state-dependent mean \(\mu (\mathbf{s})\) and a fixed diagonal covariance matrix \(\Sigma\) . \(\mu (\mathbf{s})\) is modeled using a fully-connected network with two hidden layers of 1024 and 512 units, followed by a linear output layer that specifies the mean of the Gaussian. ReLU activations are used for all hidden layers. The value function is modeled with a similar architecture, but with a single linear output unit. The policy is queried at \(30\,\mathrm{Hz}\) . Physics simulation is performed at \(1.2\,\mathrm{kHz}\) using the Bullet physics engine (Bullet, 2015). Given the rewards from the discriminator, PPO (Schulman et al., 2017) is used to train the policy, with a stepsize of \(2.5\times 10^{- 6}\) for the policy, a stepsize of 0.01 for the value function, and a stepsize of \(10^{- 5}\) for the discriminator. Gradient descent with momentum 0.9 is used for all models. The PPO clipping threshold is set to 0.2. When evaluating the performance of the policies, each episode is simulated for a maximum horizon of 20s. Early termination is triggered whenever the character's torso contacts the ground, in which case the policy is assigned the maximum error of \(\pi\) radians for all remaining timesteps. Phase-Functioned Discriminator: Unlike the policy and value function, which are modeled with standard fully-connected networks, the discriminator is modeled by a phase-functioned neural network (PFNN) to explicitly model the time-dependency of the reference motion (Holden et al., 2017). While the parameters of a standard network are fixed, the parameters of a PFNN are functions of the phase variable \(\phi\) . The parameters \(\theta\) of the network for a given \(\phi\) are determined by a weighted combination of a set of fixed parameters \(\{\theta_{0},\theta_{1},\dots,\theta_{k}\}\) , \[\theta = \sum_{i = 0}^{k}w_{i}(\phi)\theta_{i},\] where \(w_{i}(\phi)\) is a phase-dependent weight for \(\theta_{i}\) . 
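For clarity, below is a minimal sketch of this parameter blending, assuming the linear-interpolation weighting described next, with parameter sets placed at uniformly spaced phases; the names, shapes, and list-of-tensors representation are illustrative rather than the actual implementation.

```python
import torch

def pfnn_params(theta_sets, phi):
    """Blend the parameter sets {theta_0, ..., theta_k}, placed at uniformly
    spaced phases in [0, 1], by linearly interpolating between the two sets
    adjacent to phi. Each theta_i is a list of tensors (one per layer)."""
    n = len(theta_sets) - 1                      # number of phase intervals
    i = min(int(phi * n), n - 1)                 # interval containing phi
    t = phi * n - i                              # position within that interval
    return [(1.0 - t) * lo + t * hi              # w_i = 1 - t, w_{i+1} = t
            for lo, hi in zip(theta_sets[i], theta_sets[i + 1])]

# Example: blend 5 sets for a single 64x32 weight matrix at phi = 0.37.
sets = [[torch.randn(64, 32)] for _ in range(5)]
theta = pfnn_params(sets, 0.37)
```

The blended tensors would then serve as the weights of the discriminator's layers for the phase of the current sample.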
In our implementation, we use \(k = 5\) sets of parameters, and \(w_{i}(\phi)\) is designed to linearly interpolate between the two adjacent sets of parameters for each phase \(\phi\) , where each set of parameters \(\theta_{i}\) corresponds to a discrete phase value \(\phi_{i}\) , spaced uniformly in \([0,1]\) . <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 10: Learning curves comparing VAIL to other methods for motion imitation. Performance is measured using the average joint rotation error between the simulated character and the reference motion. Each method is evaluated with 3 random seeds. </center> ![](images/18_1.jpg) <center>Figure 11: Learning curves comparing VAIL with a discriminator modeled by a phase-functioned neural network (PFNN) to a discriminator modeled with a fully-connected network that receives the phase variable \(\phi\) as part of the input (no PFNN), and a discriminator modeled with a fully-connected network that does not receive \(\phi\) as an input (no phase). </center> For a given value of \(\phi\) , the parameters of the discriminator are determined according to \[\theta = w_{i}(\phi)\theta_{i} + w_{i + 1}(\phi)\theta_{i + 1},\] where \(\theta_{i}\) and \(\theta_{i + 1}\) correspond to the phase values \(\phi_{i} \leq \phi < \phi_{i + 1}\) that form the endpoints of the phase interval containing \(\phi\) . A PFNN is used for all motion imitation experiments, both VAIL and GAIL, except for those that use the approach proposed by Merel et al. (2017), which use standard fully-connected networks for the discriminator. Figure 11 compares the performance of VAIL when the discriminator is modeled with a phase-functioned neural network (with PFNN) to discriminators modeled with standard fully-connected networks. We increased the size of the layers of the fully-connected nets to have a similar number of parameters to a PFNN. We evaluate the performance of fully-connected nets that receive the phase variable \(\phi\) as part of the input (no PFNN), and fully-connected nets that do not receive \(\phi\) as an input. The phase-functioned discriminator leads to significant performance improvements across all tasks evaluated. Policies trained without a phase variable perform worst overall, suggesting that phase information is critical for performance. All methods perform well on simpler skills, such as running, but the additional phase structure introduced by the PFNN proved to be vital for successful imitation of more complex skills, such as the dance and backflip. Next, we compare the accuracy of discriminators trained using different methods. Figure 12 illustrates the accuracy of the discriminators over the course of training. Discriminators trained via GAIL quickly overpower the policy, and learn to accurately differentiate between samples, even when instance noise is applied to the inputs. 
Once the KL constraint is enforced, the information bottleneck constrains the performance of the discriminator, converging to approximately \(80\%\) accuracy. Figure 12 also visualizes the value of \(\beta\) over the course of training for motion imitation tasks, along with the loss of the KL term in the objective. The dual gradient descent update effectively enforces the VDB constraint \(I_{c}\) . Video Imitation: In the video imitation tasks, we use a simplified 2D biped character in order to avoid issues that may arise due to depth ambiguity from monocular videos. The biped character has a total of 12 degrees- of- freedom, with similar state and action parameters as the humanoid. The video demonstrations are generated by rendering a reference motion into a sequence of video frames, which are then provided to the agent as a demonstration. The goal of the agent is to imitate the motion depicted in the video, without access to the original reference motion, and the reference motion is used only to evaluate performance. ## D INVERSE REINFORCEMENT LEARNING ## D.1 EXPERIMENTAL SETUP Environments We evaluate on two maze tasks, as illustrated in Figure 13. The C- maze is taken from Fu et al. (2017): in this maze, the agent starts at a random point within a small fixed distance of the mean start position. The agent has a continuous, 2D action space which allows it to accelerate in the \(x\) or \(y\) directions, and is able to observe its \(x\) and \(y\) position, but not its velocity. The ground truth reward is \(r_{t} = - d_{t} - 10^{- 3}\| a_{t}\|^{2}\) , where \(d_{t}\) is the agent's distance to the goal, and \(a_{t}\) is its action (this action penalty is assumed to be zero in Figure 13). Episodes terminate after 100 steps; for evaluation, we report the undiscounted mean sum of rewards over each episode The S- maze is larger variant of the same environment with an extra wall between the agent and its goal. To make the S- maze easier to solve for the expert, we added further reward shaping to encourage the agent to pass between the gaps between walls. We also increased the maximum control forces relative to the C- maze to enable more rapid exploration. Environments will be released along with the rest of our VAIRL implementations. Hyperparameters Policy networks for all methods were two- layer ReLU MLPs with 32 hidden units per layer. Reward and discriminator networks were similar, but with 32- unit mean and standard deviation layers inserted before the final layer for VDB methods. To generate expert demonstrations, we trained a TRPO (Schulman et al., 2015) agent on the ground truth reward for the training environment for 200 iterations, and saved 107 trajectories from each of the policies corresponding to the five final iterations. TRPO used a batch size of 10,000, a step size of 0.01, and entropy bonus with a coefficient of 0.1 to increase diversity. After generating demonstrations, we trained the IRL and imitation methods on a training maze for 200 iterations; again, our policy optimizer was TRPO with the same hyperparameters used to generate demonstrations. Between each policy update, we did 100 discriminator updates using Adam with a learning rate of \(5 \times 10^{- 5}\) and batch size of 32. For the C- maze our VAIRL runs used a target KL of \(I_{C} = 0.5\) , while for the more complex S- maze we <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 13: Left: The C-maze used for training and its mirror version used for testing. 
Colour contours show the ground truth reward function that we use to train the expert and evaluate transfer quality, while the red and green dots show the initial and goal positions, respectively. Right: The analogous diagram for the S-maze. </center> For the test C-maze, we trained new policies against the recovered reward using TRPO with the hyperparameters described above; for the test S-maze, we modified these parameters to use a batch size of 50,000 and a learning rate of 0.001 for 400 iterations. ## D.2 RECOVERED REWARD FUNCTIONS Figures 14 and 15 show the reward functions recovered by each IRL baseline on the C-maze and S-maze, respectively, along with sample trajectories for policies trained to optimize those rewards. Notice that VAIRL tends to recover smoother reward functions that match the ground truth reward more closely than the baselines. The addition of a gradient penalty enhances this effect for both AIRL and VAIRL. This is especially true in the S-maze, where combining a gradient penalty with a variational discriminator bottleneck leads to a smooth reward that gradually increases as the agent nears its goal position at the top of the maze. ## E IMAGE GENERATION We provide further experiments on image generation and details of the experimental setup. ## E.1 EXPERIMENTAL SETUP We use the non-saturating objective of Goodfellow et al. (2014) for all models except WGAN-GP. Following Lucic et al. (2017), we compute FID on 10,000 samples. We base our implementation on Mescheder et al. (2018), and we do not use batch normalization in either the generator or the discriminator. We use RMSprop (Hinton et al.) and a fixed learning rate for all experiments. For convolutional GANs, the variational discriminator bottleneck is implemented as a 1x1 convolution on the final embedding space that outputs a Gaussian distribution over \(Z\) , parametrized with a mean and a diagonal covariance matrix. For all image experiments, we preserve the dimensionality of the latent space. All experiments use the adaptive \(\beta\) update with a dual stepsize of \(\alpha_{\beta} = 10^{- 5}\) . We will make our code public. Similarly to VGAN, instance noise (Sønderby et al., 2016; Arjovsky & Bottou, 2017) is added to the final embedding space of the discriminator right before applying the classifier. Instance noise can be interpreted as a non-adaptive VGAN without an information constraint. Architecture: For CIFAR-10, we use a resnet-based architecture adapted from Mescheder et al. (2018), detailed in Tables 2, 3, and 4. For CelebA and CelebAHQ, we use the same architecture used in Mescheder et al. (2018). <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 14: Visualizations of recovered reward functions transferred to the mirrored C-maze. Also shown are trajectories executed by policies trained to maximize the corresponding reward in the new environment. </center> <--- Page Split ---> ![](images/22_0.jpg) <center>Figure 15: Visualizations of recovered reward functions transferred to the mirrored S-maze, like Figure 14. 
</center> <--- Page Split ---> Table 2: CIFAR-10 Generator <table><tr><td>Layer</td><td>Output size</td><td>Filter</td></tr><tr><td>FC</td><td>256·4·4</td><td>256→256·4·4</td></tr><tr><td>Reshape</td><td>256×4×4</td><td></td></tr><tr><td>Resnet-block</td><td>128×4×4</td><td>256→128</td></tr><tr><td>Upsample</td><td>128×8×8</td><td></td></tr><tr><td>Resnet-block</td><td>64×8×8</td><td>128→64</td></tr><tr><td>Upsample</td><td>64×16×16</td><td></td></tr><tr><td>Resnet-block</td><td>32×16×16</td><td>64→32</td></tr><tr><td>Upsample</td><td>32×32×32</td><td></td></tr><tr><td>Resnet-block</td><td>32×32×32</td><td></td></tr><tr><td>Conv2D</td><td>3×32×32</td><td>32→3</td></tr></table> Table 3: CIFAR-10 Discriminator <table><tr><td>Layer</td><td>Output size</td><td>Filter</td></tr><tr><td>Conv2D</td><td>32×32×32</td><td>3→32</td></tr><tr><td>Resnet-block</td><td>64×32×32</td><td>32→64</td></tr><tr><td>AvgPool2D</td><td>64×16×16</td><td></td></tr><tr><td>Resnet-block</td><td>128×16×16</td><td>64→128</td></tr><tr><td>AvgPool2D</td><td>128×8×8</td><td></td></tr><tr><td>Resnet-block</td><td>256×8×8</td><td>128→256</td></tr><tr><td>AvgPool2D</td><td>256×4×4</td><td></td></tr><tr><td>FC</td><td>1</td><td>256·4·4→1</td></tr></table> ## E.2 RESULTS CIFAR-10: We compare our approach with recent stabilization techniques: WGAN-GP (Gulrajani et al., 2017b), instance noise (Sønderby et al., 2016; Arjovsky & Bottou, 2017), spectral normalization (Miyato et al., 2018), and gradient penalty (Mescheder et al., 2018). We report results for networks trained for 750k iterations. We use \(I_c = 0.1\) , and a coefficient of \(w_{GP} = 10\) for the gradient penalty, which is the same as the value used by the implementation from Mescheder et al. (2018). See Figure 16 for visual comparisons of randomly generated samples. CelebA: On the CelebA (Liu et al., 2015) dataset, we generate images of size \(128 \times 128\) with \(I_c = 0.2\) . On this dataset, we do not see a large improvement over the other baselines. This is likely because the architecture has been effectively tuned for this task, as reflected by the fact that even the vanilla GAN trains well on this dataset. GAN, GP, and VGAN-GP obtain similar FID scores of 7.64, 7.76, and 7.25, respectively. See Figure 17 for more qualitative results with our approach. CelebAHQ: VGAN can also be trained on CelebAHQ (Karras et al., 2018) at \(1024 \times 1024\) resolution directly, without progressive growing (Karras et al., 2018). We use \(I_c = 0.1\) and train with VGAN-GP. We train on a single Tesla V100, which fits a batch size of 8 in our experiments. Previous approaches (Karras et al., 2018; Mescheder et al., 2018) use a larger batch size and train over multiple GPUs. The model was trained for 300k iterations. 
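Table 4 below lists the discriminator with the VDB. As a complement, the following is a minimal sketch of the 1x1-convolutional bottleneck head described in Section E.1, which maps the final \(256\times 4\times 4\) embedding to a diagonal Gaussian over \(Z\) of the same dimensionality; the module names and exact parameterization are illustrative assumptions, not the actual implementation.

```python
import torch
import torch.nn as nn

class VDBHead(nn.Module):
    """Sketch of the convolutional VDB head: a 1x1 convolution produces the mean
    and log-variance of a diagonal Gaussian over z, preserving the 256x4x4
    dimensionality; the classifier then scores a reparameterized sample of z."""
    def __init__(self, channels=256, spatial=4):
        super().__init__()
        self.to_gauss = nn.Conv2d(channels, 2 * channels, kernel_size=1)
        self.classify = nn.Linear(channels * spatial * spatial, 1)

    def forward(self, features):
        mu, logvar = self.to_gauss(features).chunk(2, dim=1)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)    # sample z ~ E(z|x)
        return self.classify(z.flatten(1)), mu, std
```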
Table 4: CIFAR-10 Discriminator with VDB <table><tr><td>Layer</td><td>Output size</td><td>Filter</td></tr><tr><td>Conv2D</td><td>32×32×32</td><td>3→32</td></tr><tr><td>Resnet-block</td><td>64×32×32</td><td>32→64</td></tr><tr><td>AvgPool2D</td><td>64×16×16</td><td></td></tr><tr><td>Resnet-block</td><td>128×16×16</td><td>64→128</td></tr><tr><td>AvgPool2D</td><td>128×8×8</td><td></td></tr><tr><td>Resnet-block</td><td>256×8×8</td><td>128→256</td></tr><tr><td>AvgPool2D</td><td>256×4×4</td><td></td></tr><tr><td>1×1 Conv2D</td><td>2·256×4×4</td><td>256→2·256</td></tr><tr><td>Sampling</td><td>256×4×4</td><td></td></tr><tr><td>FC</td><td>1</td><td>256·4·4→1</td></tr></table> <--- Page Split ---> ![](images/24_0.jpg) <center>Figure 16: Random results on CIFAR-10 (Krizhevsky et al.): GAN (Goodfellow et al., 2014) FID: 63.6, instance noise (Sønderby et al., 2016; Arjovsky & Bottou, 2017) FID: 30.7, spectral normalization (SN) (Miyato et al., 2018) FID: 23.9, gradient penalty (GP) (Mescheder et al., 2018) FID: 22.6, WGAN-GP (Gulrajani et al., 2017b) FID: 19.9, and the proposed VGAN-GP FID: 18.1. The samples produced by VGAN-GP (right) look the most realistic, with objects such as vehicles discernible. </center> <--- Page Split ---> ![](images/25_0.jpg) <center>Figure 17: Random VGAN samples on CelebA \(128 \times 128\) at 300k iterations. </center> <--- Page Split ---> ![](images/26_0.jpg) <center>Figure 18: VGAN samples on CelebA HQ (Karras et al., 2018) at \(1024 \times 1024\) resolution at 300k iterations. Models are trained from scratch at full resolution, without the progressive scheme proposed by Karras et al. (2017). </center> <--- Page Split --->
accept
Accept (Poster)
8
ICLR_2019_paper_0685
iclr
2,019
# CONTEXT-ADAPTIVE ENTROPY MODEL FOR END-TO-END OPTIMIZED IMAGE COMPRESSION Jooyoung Lee, Seunghyun Cho & Seung- Kwon Beack Broadcasting Media Research Laboratory Electronics and Telecommunications Research Institute Daejeon, Korea {leejyl003, shcho, skbeack}@etri.re.kr ## ABSTRACT We propose a context- adaptive entropy model for use in end- to- end optimized image compression. Our model exploits two types of contexts, bit- consuming contexts and bit- free contexts, distinguished based upon whether additional bit allocation is required. Based on these contexts, we allow the model to more accurately estimate the distribution of each latent representation with a more generalized form of the approximation models, which accordingly leads to enhanced compression performance. Based on the experimental results, the proposed method outperforms the traditional image codecs, such as BPG and JPEG2000, as well as other previous artificial- neural- network (ANN) based approaches, in terms of the peak signal- to- noise ratio (PSNR) and multi- scale structural similarity (MS- SSIM) index. The test code is publicly available at https://github.com/JooyoungLeeETRI/CA_Entropy_Model. ## 1 INTRODUCTION Recently, artificial neural networks (ANNs) have been applied in various areas and have achieved a number of breakthroughs resulting from their superior optimization and representation learning performance. In particular, for various problems that are sufficiently straightforward that they can be solved within a short period of time by hand, a number of ANN- based studies have been conducted and significant progress has been made. With regard to image compression, however, progress has been relatively slow owing to its complicated target problems. A number of works focusing on the quality enhancement of reconstructed images have been proposed. For instance, certain approaches (Dong et al., 2015; Svoboda et al., 2016; Zhang et al., 2017) have been proposed to reduce artifacts caused by image compression, relying on the superior image restoration capability of ANNs. Although it is indisputable that artifact reduction is one of the most promising areas exploiting the advantages of ANNs, such approaches can be viewed as a type of post- processing, rather than image compression itself. Regarding ANN- based image compression, the previous methods can be divided into two types. First, as a consequence of the recent success of generative models, some image compression approaches targeting superior perceptual quality (Agustsson et al., 2018; Santurkar et al., 2018; Rippel & Bourdev, 2017) have been proposed. The basic idea here is that learning the distribution of natural images enables a very high compression level without severe perceptual loss by allowing the generation of image components, such as textures, which do not strongly affect the structure or the perceptual quality of the reconstructed images. Although the generated images are very realistic, the acceptability of the machine- created image components eventually becomes somewhat application- dependent. Meanwhile, a few end- to- end optimized ANN- based approaches (Toderici et al., 2017; Johnston et al., 2018; Ballé et al., 2017; Theis et al., 2017; Ballé et al., 2018), without generative models, have been proposed. In these approaches, unlike traditional codecs comprising separate tools, such as prediction, transform, and quantization, a comprehensive solution covering all such functions has been sought using end- to- end optimization. 
Toderici et al. (2017)'s approach exploits a small number of binary latent representations to contain the compressed information at every step, and each successive step stacks additional latent representations to achieve a progressive improvement in the quality of the reconstructed images. Johnston et al. (2018) improved the compression performance by enhancing the operation of the networks developed by Toderici et al. (2017). <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Comparison of sample test results including the ground truth, our method, Ballé et al. (2018)'s approach, BPG, and JPEG2000. </center> Although Toderici et al. (2017) and Johnston et al. (2018) provided novel frameworks suitable for quality control using a single trained network, the increasing number of iteration steps required to obtain higher image quality can be a burden for certain applications. In contrast to the approaches developed by Toderici et al. (2017) and Johnston et al. (2018), which extract binary representations with as high an entropy as possible, Ballé et al. (2017), Theis et al. (2017), and Ballé et al. (2018) regard the image compression problem as one of retrieving discrete latent representations having as low an entropy as possible. In other words, the target problem of the former methods can be viewed as how to include as much information as possible in a fixed number of representations, whereas the latter is simply how to reduce the expected bit- rate when a sufficient number of representations are given, assuming that a low entropy corresponds to a small number of bits from the entropy coder. To solve the second target problem, Ballé et al. (2017), Theis et al. (2017), and Ballé et al. (2018) adopt their own entropy models to approximate the actual distributions of the discrete latent representations. More specifically, Ballé et al. (2017) and Theis et al. (2017) proposed novel frameworks that exploit such entropy models, and demonstrated their performance by comparing the results with those of conventional codecs such as JPEG2000. Whereas Ballé et al. (2017) and Theis et al. (2017) assume that each representation has a fixed distribution, Ballé et al. (2018) introduced an input- adaptive entropy model that estimates the scale of the distribution for each representation. This idea is based on the characteristic of natural images that the scales of the representations vary together in adjacent areas. They provided test results that outperform all previous ANN- based approaches and come very close to those of BPG (Bellard, 2014), which is based on a subset of HEVC (ISO/IEC 23008- 2, ITU- T H.265), used for image compression. One of the principal elements in end- to- end optimized image compression is the trainable entropy model used for the latent representations. Because the actual distributions of the latent representations are unknown, the entropy models provide the means to estimate the required bits for encoding the latent representations by approximating their distributions. When an input image \(\boldsymbol{x}\) is transformed into a latent representation \(\boldsymbol{y}\) and then uniformly quantized into \(\hat{\boldsymbol{y}}\) , the simple entropy model can be represented by \(p_{\hat{\boldsymbol{y}}}(\hat{\boldsymbol{y}})\) , as described by Ballé et al. (2018). 
When the actual marginal distribution of \(\hat{\boldsymbol{y}}\) is denoted as \(m(\hat{\boldsymbol{y}})\) , the rate estimate, calculated through the cross entropy between \(m\) and the entropy model \(p_{\hat{\boldsymbol{y}}}(\hat{\boldsymbol{y}})\) , can be represented as shown in equation (1), and can be decomposed into the actual entropy of \(\hat{\boldsymbol{y}}\) and the additional bits owing to the mismatch between the actual distribution and its approximation. Therefore, decreasing the rate term \(R\) during the training process lets the entropy model \(p_{\hat{\boldsymbol{y}}}(\hat{\boldsymbol{y}})\) approximate \(m(\hat{\boldsymbol{y}})\) as closely as possible, and lets the other parameters transform \(\boldsymbol{x}\) into \(\boldsymbol{y}\) such that the actual entropy of \(\hat{\boldsymbol{y}}\) becomes small. \[R = \mathbb{E}_{\hat{\boldsymbol{y}}\sim m}\big[-\log_{2}p_{\hat{\boldsymbol{y}}}(\hat{\boldsymbol{y}})\big] = H(m) + D_{KL}(m\|p_{\hat{\boldsymbol{y}}}). \quad (1)\] In terms of the KL- divergence, \(R\) is minimized when \(p_{\hat{\boldsymbol{y}}}(\hat{\boldsymbol{y}})\) perfectly matches the actual distribution \(m(\hat{\boldsymbol{y}})\) . This means that the compression performance of these methods essentially depends on the capacity of the entropy model. To enhance this capacity, we propose a new entropy model <--- Page Split ---> that exploits two types of contexts, bit- consuming and bit- free contexts, distinguished according to whether additional bit allocation is required. Utilizing these two contexts, we allow the model to more accurately estimate the distribution of each latent representation through a more generalized form of the entropy models, and thus more effectively reduce the spatial dependencies among adjacent latent representations. Figure 1 demonstrates a comparison of the compression results of our method with those of previous approaches. The contributions of our work are as follows: - We propose a new context-adaptive entropy model framework that incorporates the two different types of contexts.- We provide test results that outperform the widely used conventional image codec BPG in terms of the PSNR and MS-SSIM.- We discuss directions of improvement for the proposed methods in terms of the model capacity and the level of the contexts. Note that we follow a number of notations given by Ballé et al. (2018) because our approach can be viewed as an extension of their work, in that we exploit the same rate- distortion (R- D) optimization framework. The rest of this paper is organized as follows. In Section 2, we introduce the key approaches of end- to- end optimized image compression and propose the context- adaptive entropy model. Section 3 demonstrates the structure of the encoder and decoder models used, and the experimental setup and results are then given in Section 4. Finally, in Section 5, we discuss the current state of our work and directions for improvement. ## 2 END-TO-END OPTIMIZATION BASED ON CONTEXT-ADAPTIVE ENTROPY MODELS ### 2.1 PREVIOUS ENTROPY MODELS Since they were first proposed by Ballé et al. (2017) and Theis et al. (2017), entropy models, which approximate the distribution of discrete latent representations, have noticeably improved the image compression performance of ANN- based approaches. Ballé et al. (2017) assume non- parametric entropy models for the latent representations, whereas Theis et al. 
(2017) adopted a Gaussian scale mixture model composed of six weighted zero- mean Gaussian models per representation. Although they assume different forms of entropy models, they share a common feature: both concentrate on learning the distributions of the representations without considering input adaptivity. In other words, once the entropy models are trained, the trained model parameters for the representations are fixed for any input at test time. Ballé et al. (2018), in contrast, introduced a novel entropy model that adaptively estimates the scales of the representations based on the input. They assume that the scales of the latent representations from natural images tend to move together within an adjacent area. To reduce this redundancy, they use a small amount of additional information by which the proper scale parameters (standard deviations) of the latent representations are estimated. In addition to the scale estimation, Ballé et al. (2018) also showed that when the prior probability density function (PDF) of each representation in the continuous domain is convolved with a standard uniform density function, it much more closely approximates the prior probability mass function (PMF) of the discrete latent representation, which is uniformly quantized by rounding. For training, uniform noise is added to each latent representation so as to fit the distribution of these noisy representations to the mentioned PMF- approximating functions. Using these approaches, Ballé et al. (2018) achieved state- of- the- art compression performance, close to that of BPG. ### 2.2 SPATIAL DEPENDENCIES OF THE LATENT VARIABLES The latent representations, when transformed through a convolutional neural network, essentially contain spatial dependencies because the same convolutional filters are shared across spatial regions, and natural images have various factors in common within adjacent regions. Ballé et al. (2018) successfully captured these spatial dependencies and enhanced the compression performance by input- adaptively estimating the standard deviations of the latent representations. <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: Examples of latent representations and their normalized versions for the two cases (the first two images show the results in which only the standard deviations are estimated using side information, whereas the last two images show the results in which the mu and standard deviation are estimated using our method). For a clear demonstration, the latent representations having the highest covariance between the spatially adjacent variables are extracted. Left: the latent representations \(\hat{y}\) from the first case. Middle left: the normalized versions of \(\hat{y}\) from the first case, divided by the estimated standard deviation. Middle right: the latent representations \(\hat{y}\) from the second case. Right: the normalized versions of \(\hat{y}\) from the second case, shifted and divided by the estimated mu and standard deviation. </center> Taking a step forward, we generalize the form of the estimated distribution by allowing, in addition to the standard deviation, the estimation of the mean utilizing the given contexts. For instance, assuming that certain representations tend to have similar values within a spatially adjacent area, when all neighboring representations have a value of 10, we can intuitively guess that, for the current representation, the chance of having a value equal or similar to 10 is relatively high. 
This simple estimation will consequently reduce the entropy. Likewise, our method utilizes the given contexts for estimating the mean, as well as the standard deviation, of each latent representation. Note that Toderici et al. (2017), Johnston et al. (2018), and Rippel & Bourdev (2017) also apply context- adaptive entropy coding by estimating the probability of each binary representation. However, these context- adaptive entropy- coding methods can be viewed as separate components, rather than as one end- to- end optimization component, because their probability estimation does not directly contribute to the rate term of the R- D optimization framework. Figure 2 visualizes the latent variables \(\hat{y}\) and their normalized versions for two different approaches, one estimating only the standard deviation parameters and the other estimating both the mu and standard deviation parameters using the two types of mentioned contexts. The visualization shows that the spatial dependency can be removed more effectively when the mu is estimated along with the given contexts. ### 2.3 CONTEXT-ADAPTIVE ENTROPY MODEL The optimization problem described in this paper is similar to that of Ballé et al. (2018), in that the input \(\boldsymbol{x}\) is transformed into \(\boldsymbol{y}\) having a low entropy, and the spatial dependencies of \(\boldsymbol{y}\) are captured into \(\hat{\boldsymbol{z}}\) . Therefore, we also use four fundamental parametric transform functions: an analysis transform \(g_{a}(\boldsymbol {x}; \boldsymbol {\phi}_{g})\) to transform \(\boldsymbol{x}\) into a latent representation \(\boldsymbol{y}\) , a synthesis transform \(g_{s}(\hat{\boldsymbol{y}}; \boldsymbol {\theta}_{g})\) to reconstruct the image \(\hat{\boldsymbol{x}}\) , an analysis transform \(h_{a}(\hat{\boldsymbol{y}}; \boldsymbol {\phi}_{h})\) to capture the spatial redundancies of \(\hat{\boldsymbol{y}}\) into a latent representation \(\boldsymbol{z}\) , and a synthesis transform \(h_{s}(\hat{\boldsymbol{z}}; \boldsymbol {\theta}_{h})\) used to generate the contexts for the model estimation. Note that \(h_{s}\) does not estimate the standard deviations of the representations directly, as in Ballé et al. (2018)'s approach. Instead, in our method, \(h_{s}\) generates the context \(c'\) , one of the two types of contexts for estimating the distribution. These two types of contexts are described in this section. Ballé et al. (2018) analyzed the optimization problem from the viewpoint of the variational autoencoder (Kingma & Welling (2014); Rezende et al. (2014)), and showed that the minimization of the KL- divergence is the same problem as the R- D optimization of image compression. Basically, we follow the same concept; however, for training, we use the discrete representations as the conditions, instead of the noisy representations, and thus the noisy representations are only used as the inputs to the entropy models. Empirically, we found that using discrete representations as the conditions shows better results, as shown in appendix 6.2. <--- Page Split ---> These results might come from removing the mismatches of the conditions between training and testing, thereby enhancing the training capacity by limiting the effect of the uniform noise to helping the approximation of the probability mass functions. We use the gradient overriding method with the identity function, as in Theis et al. (2017), to deal with the discontinuities from the uniform quantization. The resulting objective function used in this paper is given in equation (2). 
The total loss consists of two terms representing the rate and the distortion, and the coefficient \(\lambda\) controls the balance between them during the R- D optimization. Note that \(\lambda\) is not an optimization target, but a manually configured condition that determines the balance between rate and distortion: \[\begin{array}{r l} & {\mathcal{L} = R + \lambda D}\\ & {\mathrm{with}\ R = \mathbb{E}_{\boldsymbol {x}\sim p_{\boldsymbol{x}}}\mathbb{E}_{\tilde{\boldsymbol{y}},\tilde{\boldsymbol{z}}\sim q}\Big[-\log p_{\tilde{\boldsymbol{y}}|\hat{\boldsymbol{z}}}(\tilde{\boldsymbol{y}}\mid \hat{\boldsymbol{z}}) - \log p_{\tilde{\boldsymbol{z}}}(\tilde{\boldsymbol{z}})\Big],}\\ & {\qquad D = \mathbb{E}_{\boldsymbol{x}\sim p_{\boldsymbol{x}}}\Big[-\log p_{\boldsymbol{x}|\hat{\boldsymbol{y}}}(\boldsymbol {x}\mid \hat{\boldsymbol{y}})\Big].} \end{array} \quad (2)\] Here, the noisy representations \(\tilde{\boldsymbol{y}}\) and \(\tilde{\boldsymbol{z}}\) follow standard uniform distributions whose mean values are \(\boldsymbol{y}\) and \(\boldsymbol{z}\) , respectively, where \(\boldsymbol{y}\) and \(\boldsymbol{z}\) are the results of the transforms \(g_{a}\) and \(h_{a}\) , respectively. Note that the input to \(h_{a}\) is \(\hat{\boldsymbol{y}}\) , the uniformly quantized representation of \(\boldsymbol{y}\) , rather than the noisy representation \(\tilde{\boldsymbol{y}}\) . \(Q\) denotes the uniform quantization function, for which we simply use rounding: \[\begin{array}{r c l}{{q(\tilde{\boldsymbol{y}},\tilde{\boldsymbol{z}}\mid\boldsymbol{x},\boldsymbol{\phi}_{g},\boldsymbol{\phi}_{h})}}&{{=}}&{{\prod_{i}\mathcal{U}\big(\tilde{y}_{i}\mid y_{i}-\frac{1}{2},y_{i}+\frac{1}{2}\big)\cdot\prod_{j}\mathcal{U}\big(\tilde{z}_{j}\mid z_{j}-\frac{1}{2},z_{j}+\frac{1}{2}\big)}}\\ {{}}&{{}}&{{\mathrm{with}~\boldsymbol{y}=g_{a}(\boldsymbol{x};\boldsymbol{\phi}_{g}),\hat{\boldsymbol{y}}=Q(\boldsymbol{y}),\boldsymbol{z}=h_{a}(\hat{\boldsymbol{y}};\boldsymbol{\phi}_{h}).}}\end{array} \quad (3)\] The rate term represents the expected bits calculated using the entropy models \(p_{\tilde{\boldsymbol{y}} |\hat{\boldsymbol{z}}}\) and \(p_{\tilde{\boldsymbol{z}}}\) . Note that \(p_{\tilde{\boldsymbol{y}} |\hat{\boldsymbol{z}}}\) and \(p_{\tilde{\boldsymbol{z}}}\) are eventually the approximations of \(p_{\hat{\boldsymbol{y}} |\hat{\boldsymbol{z}}}\) and \(p_{\hat{\boldsymbol{z}}}\) , respectively. Equation (4) represents the entropy model for approximating the required bits for \(\hat{\boldsymbol{y}}\) . The model is based on a Gaussian model that has not only the standard deviation parameter \(\sigma_{i}\) , but also the mu parameter \(\mu_{i}\) . The values of \(\mu_{i}\) and \(\sigma_{i}\) are estimated from the two types of given contexts by the function \(f\) , the distribution estimator, in a deterministic manner. The two types of contexts, bit- consuming and bit- free contexts, for estimating the distribution of a certain representation are denoted as \(c_{i}^{\prime}\) and \(c_{i}^{\prime \prime}\) . \(E^{\prime}\) extracts \(c_{i}^{\prime}\) from \(c^{\prime}\) , the result of the transform \(h_{s}\) . In contrast to \(c_{i}^{\prime}\) , no additional bit allocation is required for \(c_{i}^{\prime \prime}\) . Instead, we simply utilize the known (already entropy- coded or decoded) subset of \(\hat{\boldsymbol{y}}\) , denoted as \(\langle \hat{\boldsymbol{y}}\rangle\) . Here, \(c_{i}^{\prime \prime}\) is extracted from \(\langle \hat{\boldsymbol{y}}\rangle\) by the extractor \(E^{\prime \prime}\) . We assume that the entropy coder and the decoder sequentially process \(\hat{y}_{i}\) in the same specific order, such as with raster scanning, and thus \(\langle \hat{\boldsymbol{y}}\rangle\) given to the encoder and decoder can always be identical when processing the same \(\hat{y}_{i}\) . 
A formal expression of this is as follows: \[\begin{array}{r l} & {p_{\tilde{\boldsymbol{y}} |\hat{\boldsymbol{z}}}(\tilde{\boldsymbol{y}}\mid \hat{\boldsymbol{z}},\boldsymbol {\theta}_{h}) = \prod_{i}\Big(\mathcal{N}\big(\mu_{i},\sigma_{i}^{2}\big)*\mathcal{U}\big(-\frac{1}{2},\frac{1}{2}\big)\Big)(\tilde{y}_{i})}\\ & {\qquad \mathrm{with}\ \mu_{i},\sigma_{i} = f(c_{i}^{\prime},c_{i}^{\prime \prime}),}\\ & {\qquad c_{i}^{\prime} = E^{\prime}(h_{s}(\hat{\boldsymbol{z}};\boldsymbol {\theta}_{h}),i),}\\ & {\qquad c_{i}^{\prime \prime} = E^{\prime \prime}(\langle \hat{\boldsymbol{y}}\rangle ,i),}\\ & {\qquad \hat{\boldsymbol{z}} = Q(\boldsymbol {z}).} \end{array} \quad (4)\] In the case of \(\hat{\boldsymbol{z}}\) , a simple entropy model is used. We assume that the model follows zero- mean Gaussian distributions with trainable \(\sigma\) . Note that \(\hat{\boldsymbol{z}}\) is regarded as side information and contributes a very small amount of the total bit- rate, as described by Ballé et al. (2018), and thus we use this simpler version of the entropy model, rather than a more complex model, for end- to- end optimization over all parameters of the proposed method: \[p_{\tilde{\boldsymbol{z}}}(\tilde{\boldsymbol{z}}) = \prod_{j}\Big(\mathcal{N}\big(0,\sigma_{j}^{2}\big)*\mathcal{U}\big(-\frac{1}{2},\frac{1}{2}\big)\Big)(\tilde{z}_{j}). \quad (5)\] <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 3: Encoder-decoder model of the proposed method. The left block represents the encoder side, whereas the right block represents the decoder side. The small icons between them represent the entropy-coded bitstreams. EC and ED represent entropy coding and entropy decoding, and U | Q represents the addition of uniform noise to \(y\) or a uniform quantization of \(y\) . Noisy representations are used only for training, as inputs to the entropy models, and are illustrated with the dotted lines. </center> Note that the actual entropy coding or decoding processes are not necessarily required for training because the rate term is not the amount of real bits, but an estimate calculated from the entropy models, as mentioned previously. We calculate the distortion term using the mean squared error (MSE) \(^{1}\) , assuming that \(p_{\boldsymbol{x}|\hat{\boldsymbol{y}}}\) follows a Gaussian distribution, as a widely used distortion metric. ## 3 ENCODER-DECODER MODEL This section describes the basic structure of the proposed encoder- decoder model. On the encoder side, an input image is transformed into latent representations, quantized, and then entropy- coded using the trained entropy models. In contrast, the decoder first applies entropy decoding with the same entropy models used by the encoder, and then reconstructs the image from the latent representations, as illustrated in figure 3. It is assumed that all parameters that appear in this section have already been trained. The structure of the encoder- decoder model basically includes \(g_{a}\) and \(g_{s}\) , in charge of the transform of \(\boldsymbol{x}\) into \(\boldsymbol{y}\) and its inverse transform, respectively. The transformed \(\boldsymbol{y}\) is uniformly quantized into \(\hat{\boldsymbol{y}}\) by rounding. Note that, in the case of approaches based on entropy models, unlike traditional codecs, tuning the quantization steps is usually unnecessary because the scales of the representations are optimized together through training. The other components between \(g_{a}\) and \(g_{s}\) carry out the role of entropy coding (or decoding) with the shared entropy models and the underlying context preparation processes. 
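For concreteness, the convolved densities in equations (4) and (5) evaluate to differences of Gaussian CDFs, so the rate term can be computed as sketched below; this is a minimal illustration with assumed tensor names, not the implementation used here.

```python
import torch
from torch.distributions import Normal

def rate_bits(y_tilde, mu, sigma, eps=1e-9):
    """Estimated bits for latents under (N(mu, sigma^2) * U(-1/2, 1/2)):
    the convolved likelihood of each element equals the Gaussian CDF
    difference over a unit-width interval centered at that element."""
    gauss = Normal(mu, sigma)
    pmf = gauss.cdf(y_tilde + 0.5) - gauss.cdf(y_tilde - 0.5)
    return -torch.log2(pmf.clamp_min(eps)).sum()
```

Summing these negative log-likelihoods over all elements, together with the corresponding zero-mean model for \(\tilde{\boldsymbol{z}}\), yields the rate term of equation (2).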
More specifically, the entropy model estimates the distribution of each \(\hat{y}_{i}\) individually, in which \(\mu_{i}\) and \(\sigma_{i}\) are estimated with the two types of given contexts, \(c_{i}^{\prime}\) and \(c_{i}^{\prime \prime}\) . Among these contexts, \(c^{\prime}\) can be viewed as side information, which requires an additional bit allocation. To reduce the required bit- rate for carrying \(c^{\prime}\) , the latent representation \(\boldsymbol{z}\) , transformed from \(\hat{\boldsymbol{y}}\) , is quantized and entropy- coded by its own entropy model, as specified in Section 2.3. On the other hand, \(c_{i}^{\prime \prime}\) is extracted from \(\langle \hat{\boldsymbol{y}} \rangle\) without any additional bit allocation. Note that \(\langle \hat{\boldsymbol{y}} \rangle\) varies as the entropy coding or decoding progresses, but is always identical when processing the same \(\hat{y}_{i}\) in both the encoder and the decoder, as described in Section 2.3. The parameters of \(h_{s}\) and the entropy models are simply shared by both the encoder and the decoder. Note that the inputs to the entropy models during training are the noisy representations, as illustrated with the dotted lines in figure 3, to allow the entropy models to approximate the probability mass functions of the discrete representations. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 4: Implementation of the proposed method. We basically use the convolutional autoencoder structure, and the distribution estimator \(f\) is also implemented using convolutional neural networks. The notation for the convolutional layers follows Ballé et al. (2018): the number of filters \(\times\) filter height \(\times\) filter width / the downscale or upscale factor, where \(\uparrow\) and \(\downarrow\) denote up and downscaling, respectively. For up or downscaling, we use transposed convolutions. For the networks, input images are normalized into a scale between -1 and 1. </center> ## 4 EXPERIMENTS ### 4.1 IMPLEMENTATION We use convolutional neural networks to implement the analysis and synthesis transform functions \(g_{a}\) , \(g_{s}\) , \(h_{a}\) , and \(h_{s}\) . The structures of the implemented networks follow those of Ballé et al. (2018), except that we use an exponentiation operator instead of an absolute operator at the end of \(h_{s}\) . Based on Ballé et al. (2018)'s structure, we added the components to estimate the distribution of each \(\hat{y}_{i}\) , as shown in figure 4. Herein, we represent uniform quantization (rounding) as "Q," entropy coding as "EC," and entropy decoding as "ED." The distribution estimator is denoted as \(f\) , and is also implemented using convolutional layers, which take the channel- wise concatenated \(\boldsymbol{c}_{i}^{\prime}\) and \(\boldsymbol{c}_{i}^{\prime \prime}\) as inputs and provide the estimated \(\mu_{i}\) and \(\sigma_{i}\) as results. Note that the same \(\boldsymbol{c}_{i}^{\prime}\) and \(\boldsymbol{c}_{i}^{\prime \prime}\) are shared for all \(\hat{y}_{i}\) s located at the same spatial position. In other words, we let \(E^{\prime}\) extract all spatially adjacent elements from \(\boldsymbol{c}^{\prime}\) across the channels to retrieve \(\boldsymbol{c}_{i}^{\prime}\) , and likewise let \(E^{\prime \prime}\) extract all adjacent known elements from \(\langle \hat{\boldsymbol{y}}\rangle\) for \(\boldsymbol{c}_{i}^{\prime \prime}\) . This could have the effect of capturing the remaining correlations among the different channels. 
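To make the estimation step concrete, the following is a minimal sketch of the window extraction for \(c_{i}'\) and \(c_{i}''\) and of a convolutional distribution estimator \(f\); the layer sizes, the masking convention, and all names are illustrative assumptions rather than the trained configuration.

```python
import torch
import torch.nn as nn

def extract_contexts(c_prime, y_hat, mask, k, l):
    """Crop the 4x4xM windows around position (k, l): horizontal indices
    k-2..k+1, vertical indices l-3..l. 'mask' zeroes the not-yet-decoded
    elements of y_hat; margins are assumed to be zero-padded in advance."""
    rows, cols = slice(l - 3, l + 1), slice(k - 2, k + 2)
    return c_prime[:, :, rows, cols], (y_hat * mask)[:, :, rows, cols]

class DistributionEstimator(nn.Module):
    """Sketch of f: consumes the channel-wise concatenation of c'_i and c''_i
    and outputs mu_i and sigma_i for all M channels at one spatial position."""
    def __init__(self, M=192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * M, 2 * M, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(2 * M, 2 * M, kernel_size=4))     # 4x4 window -> 1x1

    def forward(self, c_p, c_pp):
        h = self.net(torch.cat([c_p, c_pp], dim=1))     # (B, 2M, 1, 1)
        mu, log_sigma = h.chunk(2, dim=1)
        return mu.flatten(1), log_sigma.flatten(1).exp()  # (B, M) each
```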
In short, when \(M\) is the total number of channels of \(\boldsymbol{y}\) , we let \(f\) estimate all \(M\) distributions of the \(\hat{y}_{i}\) s located at the same spatial position in a single step, thereby reducing the total number of estimations. Furthermore, the parameters of \(f\) are shared across all spatial positions of \(\hat{\boldsymbol{y}}\) , and thus only one trained \(f\) per \(\lambda\) is necessary to process images of any size. In the case of training, however, collecting the results from all spatial positions to calculate the rate term becomes a significant burden, despite the simplifications mentioned above. To reduce this burden, we designate a certain number (32 and 16 for the base model and the hybrid model, respectively) of random spatial points as representatives per training step, from which the rate term is calculated. Note that we let these random points contribute solely to the rate term, whereas the distortion is still calculated over the whole image. Because \(\boldsymbol{y}\) is a three- dimensional array in our implementation, the index \(i\) can be represented by three indexes \(k\) , \(l\) , and \(m\) , representing the horizontal index, the vertical index, and the channel index, respectively. When the current position is given as \((k, l, m)\) , \(E^{\prime}\) extracts \(\boldsymbol{c}^{\prime}_{[k - 2, \ldots , k + 1], [l - 3, \ldots , l], [1, \ldots , M]}\) as \(\boldsymbol{c}_{i}^{\prime}\) , and \(E^{\prime \prime}\) extracts \(\langle \hat{\boldsymbol{y}} \rangle_{[k - 2, \ldots , k + 1], [l - 3, \ldots , l], [1, \ldots , M]}\) as \(\boldsymbol{c}_{i}^{\prime \prime}\) , where \(\langle \hat{\boldsymbol{y}} \rangle\) represents the known area of \(\hat{\boldsymbol{y}}\) . Note that we fill the unknown area of \(\langle \hat{\boldsymbol{y}} \rangle\) with zeros, to keep the dimensions of \(\langle \hat{\boldsymbol{y}} \rangle\) identical to those of \(\hat{\boldsymbol{y}}\) . Consequently, the elements of \(\boldsymbol{c}_{i}^{\prime \prime}\) at the not- yet- processed positions of the window (the current position and the position to its right in the bottom row) are always padded with zeros. To keep the dimensions of the estimation results identical to those of the inputs, the marginal areas of \(\boldsymbol{c}^{\prime}\) and \(\langle \hat{\boldsymbol{y}}\rangle\) are also zero- padded. Note that when training or encoding, \(\boldsymbol{c}_{i}^{\prime \prime}\) can be extracted using simple \(4\times 4\times M\) windows and binary masks, thereby enabling parallel processing, whereas a sequential reconstruction is inevitable for decoding. Another implementation technique used to reduce the implementation cost is combining a lightweight entropy model with the proposed model. The lightweight entropy model assumes that the representations follow a zero- mean Gaussian model with estimated standard deviations, which is very similar to Ballé et al. (2018)'s approach. We utilize this hybrid approach for the top four cases, in bit- rate descending order, of the nine \(\lambda\) configurations, based on the assumption that, for higher- quality compression, the number of sparse representations having very low spatial dependency increases, and thus a direct scale estimation provides sufficient performance for these added representations. For the implementation, we separate the latent representation \(\boldsymbol{y}\) into two parts, \(\boldsymbol{y}_{1}\) and \(\boldsymbol{y}_{2}\) , and apply two different entropy models to them. Note that the parameters of \(g_{a}\) , \(g_{s}\) , \(h_{a}\) , and \(h_{s}\) are shared, and all parameters are still trained together. The detailed structure and experimental settings are described in appendix 6.1. 
The parameters \(N\) and \(M\) are set to 128 and 192, respectively, for the five \(\lambda\) configurations with lower bit- rates, whereas 2- 3 times more parameters, as described in appendix 6.1, are used for the four \(\lambda\) configurations with higher bit- rates. Tensorflow and Python were used to set up the overall network structures, and for the actual entropy coding and decoding using the estimated model parameters, we implemented an arithmetic coder and decoder, for which the source code of the "Reference arithmetic coding" project<sup>2</sup> was used as the base code. ### 4.2 EXPERIMENTAL ENVIRONMENTS We optimized the networks using two different types of distortion terms, one with MSE and the other with MS- SSIM. For each distortion type, the average bits per pixel (BPP) and the distortion, in terms of PSNR and MS- SSIM, over the test set are measured for each of the nine \(\lambda\) configurations. Therefore, a total of 18 networks are trained and evaluated within the experimental environments, as explained below: - For training, we used \(256\times 256\) patches extracted from 32,420 randomly selected YFCC100m (Thomee et al. (2016)) images. We extracted one patch per image, and the extracted regions were randomly chosen. Each batch consists of eight images, and 1M iterations of the training steps were conducted using the Adam optimizer (Kingma & Ba (2015)). We set the initial learning rate to \(5\times 10^{-5}\) , and halved the rate every 50,000 iterations for the last 200,000 iterations. Note that, in the case of the four \(\lambda\) configurations for high bpp, in which the hybrid entropy model is used, 1M iterations of pre-training steps were conducted using a learning rate of \(1\times 10^{-5}\) . Although we previously indicated that the total loss is the sum of \(R\) and \(\lambda D\) for simplicity of explanation, we tuned the balancing parameter \(\lambda\) in a similar way to Theis et al. (2017), as indicated in equation (6). We used \(\lambda\) values ranging from 0.01 to 0.5. \[\mathcal{L} = \frac{\lambda}{W_{y}\cdot H_{y}\cdot 256} R + \frac{1 - \lambda}{1000} D. \quad (6)\] - For the evaluation, we measured the average BPP and the average quality of the reconstructed images, in terms of PSNR and MS-SSIM, over the 24 PNG images of the Kodak PhotoCD image dataset (Kodak, 1993). Note that we represent the MS-SSIM results in the form of decibels, as in Ballé et al. (2018), to increase the discrimination between quality levels. ### 4.3 EXPERIMENTAL RESULTS We compared the test results with those of other previous methods, including traditional codecs such as BPG and JPEG2000, as well as previous ANN- based approaches such as Theis et al. (2017) and Ballé et al. (2018). <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 5: Rate-distortion curves of the proposed method and competitive methods. The top plot represents the PSNR values as a function of bpp, whereas the bottom plot shows the MS-SSIM values in the same manner. Note that the MS-SSIM values are converted into decibels \((-10\log_{10}(1 - \mathrm{MS-SSIM}))\) to differentiate the quality levels, in the same manner as in Ballé et al. (2018). </center>
### 4.3 EXPERIMENTAL RESULTS

We compared the test results with those of previous methods, including traditional codecs such as BPG and JPEG2000, as well as previous ANN-based approaches such as Theis et al. (2017) and Ballé et al. (2018).

![](images/8_0.jpg)

<center>Figure 5: Rate-distortion curves of the proposed method and competitive methods. The top plot represents the PSNR values as a function of bpp, whereas the bottom plot shows the MS-SSIM values in the same manner. Note that the MS-SSIM values are converted into decibels \((-10\log_{10}(1 - \mathrm{MS\text{-}SSIM}))\) to differentiate the quality levels, in the same manner as in Ballé et al. (2018). </center>

Because two different quality metrics are used, the results are presented in two separate plots. As shown in figure 5, our methods outperform all previous methods on both metrics. In particular, our models not only outperform Ballé et al. (2018)'s method, which is regarded as the state-of-the-art ANN-based approach, but also obtain better results than the widely used conventional image codec BPG. More specifically, the compression gains in terms of the BD-rate of PSNR over JPEG2000, Ballé et al. (2018)'s approach (MSE-optimized), and BPG are \(34.08\%\), \(11.97\%\), and \(6.85\%\), respectively. In the case of MS-SSIM, we found wider gaps of \(68.82\%\), \(13.93\%\), and \(49.68\%\), respectively. Note that we achieved significant gains over the traditional codecs in terms of MS-SSIM, although this might be because the dominant target metric of traditional codec development has been PSNR; in other words, the traditional codecs can be viewed as a type of MSE-optimized codec. Even setting aside the case of MS-SSIM, our results can be viewed as concrete evidence that ANN-based image compression can outperform the existing traditional image codecs in terms of compression performance. Supplemental image samples are provided in appendix 6.3.

## 5 DISCUSSION

Building on previous ANN-based image compression approaches utilizing entropy models (Ballé et al., 2017; Theis et al., 2017; Ballé et al., 2018), we extended the entropy model to exploit two different types of contexts. These contexts allow the entropy models to more accurately estimate the distribution of the representations with a more generalized form having both mean and standard deviation parameters. The evaluation results demonstrate the superiority of the proposed method.

The contexts we utilized are divided into two types. One is a bit-free context, comprising the part of the latent variables already known to both the encoder and the decoder, whereas the other is a bit-consuming context, which requires additional bit allocation. Because the former is a commonly used context in a variety of codecs, and the latter was already verified to help compression in Ballé et al. (2018)'s approach, our contribution is not the contexts themselves, but rather a framework of entropy models utilizing these contexts.

Although the experiments showed the best results in the ANN-based image compression domain, various directions remain for further improving the performance. One possible direction is generalizing the distribution models underlying the entropy model. Although we enhanced the performance by generalizing the previous entropy models, and have achieved quite acceptable results, the Gaussian-based entropy models apparently have limited expressive power. If more elaborate models, such as the non-parametric models of Ballé et al. (2018) or Oord et al. (2016), were combined with the context-adaptivity proposed in this paper, they could provide better results by reducing the mismatch between the actual distributions and the approximation models. Another possible direction is raising the level of the contexts. Currently, our methods use only low-level representations within very limited adjacent areas. However, given sufficient network capacity and higher-level contexts, much more accurate estimation could become possible. For instance, if an entropy model understood the structure of human faces, in that they usually have two eyes between which a symmetry exists, it could approximate the distributions more accurately when encoding the second eye of a face by referencing the shape and position of the first.
As is widely known, various generative models (Goodfellow et al., 2014; Radford et al., 2016; Zhao et al., 2017) learn the distribution \(p(x)\) of images within a specific domain, such as human faces or bedrooms. In addition, various in-painting methods (Pathak et al., 2016; Yang et al., 2017; Yeh et al., 2017) learn the conditional distribution \(p(x \mid context)\) when the viewed areas are given as context. Although these methods were not developed for image compression, we expect that such high-level understanding can eventually be utilized. Furthermore, the contexts carried using side information can also be extended to high-level information such as segmentation maps or any other information that helps with compression. Segmentation maps, for instance, may help the entropy models estimate the distribution of a representation discriminatively, according to the segment class the representation belongs to.

Traditional codecs have a long development history, and a vast number of hand-crafted heuristics have accumulated, not only for enhancing compression performance but also for managing computational complexity. Therefore, ANN-based image compression approaches may not yet provide satisfactory solutions when their high complexity is taken into account. However, considering their much shorter history, we believe that ANN-based image compression approaches have far more potential for future extension. Although we remain a long way from completion, we hope the proposed context-adaptive entropy model will provide a useful contribution to this area.

## ACKNOWLEDGMENTS

This work was supported by the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00072, Development of Audio/Video Coding and Light Field Media Fundamental Technologies for Ultra Realistic Teramedia).

## REFERENCES

Eirikur Agustsson, Michael Tschannen, Fabian Mentzer, Radu Timofte, and Luc Van Gool. Generative adversarial networks for extreme learned image compression. arXiv preprint arXiv:1804.02958, 2018. URL http://arxiv.org/abs/1804.02958.

Johannes Ballé, Valero Laparra, and Eero P. Simoncelli. End-to-end optimized image compression. In the 5th Int. Conf. on Learning Representations, 2017. URL http://arxiv.org/abs/1611.01704.

Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. In the 6th Int. Conf. on Learning Representations, 2018. URL http://arxiv.org/abs/1802.01436.

Fabrice Bellard. BPG image format, 2014. URL http://bellard.org/bpg/.

Chao Dong, Yubin Deng, Chen Change Loy, and Xiaoou Tang. Compression artifacts reduction by a deep convolutional network. In IEEE International Conference on Computer Vision (ICCV), 2015. URL http://arxiv.org/abs/1504.06993.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.

ISO/IEC 23008-2, ITU-T H.265. Information technology - high efficiency coding and media delivery in heterogeneous environments - part 2: High efficiency video coding. Standard, ISO/IEC, 2013.
Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, and George Toderici. Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In the 3rd Int. Conf. on Learning Representations, 2015. URL http://arxiv.org/abs/1412.6980.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In the 2nd Int. Conf. on Learning Representations, 2014. URL http://arxiv.org/abs/1312.6114.

Eastman Kodak. Kodak lossless true color image suite (PhotoCD PCD0992), 1993. URL http://r0k.us/graphics/kodak/.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In the 33rd International Conference on Machine Learning, 2016. URL http://arxiv.org/abs/1601.06759.

Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In the 4th Int. Conf. on Learning Representations, 2016. URL http://arxiv.org/abs/1511.06434.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In the 31st Int. Conf. on Machine Learning, 2014. URL http://arxiv.org/abs/1401.4082.

Oren Rippel and Lubomir Bourdev. Real-time adaptive image compression. In International Conference on Machine Learning, 2017. URL http://arxiv.org/abs/1705.05823.

Shibani Santurkar, David M. Budden, and Nir Shavit. Generative compression. In The 33rd Picture Coding Symposium, 2018. URL http://arxiv.org/abs/1703.01467.

Pavel Svoboda, Michal Hradis, David Barina, and Pavel Zemcik. Compression artifacts removal using convolutional neural networks. In WSCG 24th International Conference on Computer Graphics, Visualization and Computer Vision, 2016. URL http://arxiv.org/abs/1605.00366.

Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszár. Lossy image compression with compressive autoencoders. In the 5th Int. Conf. on Learning Representations, 2017. URL http://arxiv.org/abs/1703.00395.

Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. YFCC100M: The new data in multimedia research. Communications of the ACM, 59(2):64-73, 2016.

George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, and Michele Covell. Full resolution image compression with recurrent neural networks. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017. doi: 10.1109/CVPR.2017.577. URL http://arxiv.org/abs/1608.05148.

Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003. doi: 10.1109/ACSSC.2003.1292216.

Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, and Hao Li. High-resolution image inpainting using multi-scale neural patch synthesis. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.

Raymond A. Yeh, Chen Chen, Teck-Yian Lim, Alexander G. Schwing, Mark Hasegawa-Johnson, and Minh N. Do.
Semantic image inpainting with deep generative models. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6882-6890. IEEE Computer Society, 2017.

Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing, 26(7):3142-3155, 2017. doi: 10.1109/TIP.2017.2662206. URL http://arxiv.org/abs/1608.03981.

Junbo Jake Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. In the 5th Int. Conf. on Learning Representations, 2017. URL http://arxiv.org/abs/1609.03126.

![](images/12_0.jpg)

<center>Figure 6: The structure of the hybrid network for higher bit-rate environments. The same notations as in figure 4 are used. The representation \(y\) is divided into two parts and quantized. One of the resulting parts, \(\hat{y}_{1}\), is encoded using the proposed model, whereas the other, \(\hat{y}_{2}\), is encoded using a simpler model in which only the standard deviations are estimated using side information. The detailed structure of the proposed model is illustrated in figure 4. All concatenation and split operations are performed in a channel-wise manner. </center>

## 6 APPENDIX

### 6.1 HYBRID NETWORK FOR HIGHER BIT-RATE COMPRESSIONS

We combined the lightweight entropy model with the context-adaptive entropy model to reduce the implementation costs for the high-bpp configurations. The lightweight model relies on scale (standard deviation) estimation, assuming that the PMF approximations of the quantized representations follow zero-mean Gaussian distributions convolved with a standard uniform distribution. Figure 6 illustrates the structure of this hybrid network. The representation \(y\) is split channel-wise into two parts, \(y_{1}\) and \(y_{2}\), which have \(M_{1}\) and \(M_{2}\) channels, respectively, and is then quantized. Here, \(\hat{y}_{1}\) is entropy-coded using the proposed entropy model, whereas \(\hat{y}_{2}\) is coded with the lightweight entropy model. The standard deviations of \(\hat{y}_{2}\) are estimated using \(h_{a}\) and \(h_{s}\). Unlike the context-adaptive entropy model, which uses the result of \(h_{s}\) (\(c^{\prime}\)) as an input source to the estimator \(f\), the lightweight entropy model retrieves the estimated standard deviations from \(h_{s}\) directly. Note that \(h_{a}\) takes the concatenated \(\hat{y}_{1}\) and \(\hat{y}_{2}\) as input, and \(h_{s}\) generates \(c^{\prime}\) as well as \(\sigma_{2}\) at the same time.
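A minimal NumPy sketch of the channel-wise split \(S\) and concatenation \(\oplus\) used above and in equation (8) follows; the function names and the channel-last layout are our assumptions, and the channel counts correspond to the two highest-rate configurations reported below.

```python
import numpy as np

M1, M2 = 192, 408  # channel split reported for the two highest-rate configurations

def split_s(y):
    """Channel-wise split S(.): y (H x W x (M1+M2)) -> (y1, y2)."""
    return y[..., :M1], y[..., M1:]

def quantize(y):
    """Uniform quantization Q(.) by rounding."""
    return np.round(y)

def encode_hybrid(y):
    """Route y1_hat to the context-adaptive model and y2_hat to the
    lightweight scale-only model; h_a sees their concatenation."""
    y1, y2 = split_s(y)
    y1_hat, y2_hat = quantize(y1), quantize(y2)
    y_hat = np.concatenate([y1_hat, y2_hat], axis=-1)  # input to h_a
    return y1_hat, y2_hat, y_hat
```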
The total loss function again consists of the rate and distortion terms, although the rate is now divided into three parts, one each for \(\tilde{y}_{1}\), \(\tilde{y}_{2}\), and \(\tilde{z}\). The distortion term is the same as before, but note that \(\hat{y}\) is the channel-wise concatenation of \(\hat{y}_{1}\) and \(\hat{y}_{2}\):

\[\begin{aligned} \mathcal{L} &= R + \lambda D\\ \text{with } R &= \mathbb{E}_{x\sim p_{x}}\mathbb{E}_{\tilde{y}_{1},\tilde{y}_{2},\tilde{z}\sim q}\Big[-\log p_{\tilde{y}_{1}|\hat{z}}(\tilde{y}_{1}\mid \hat{z}) - \log p_{\tilde{y}_{2}|\hat{z}}(\tilde{y}_{2}\mid \hat{z}) - \log p_{\tilde{z}}(\tilde{z})\Big],\\ D &= \mathbb{E}_{x\sim p_{x}}\Big[-\log p_{x|\hat{y}}(x\mid \hat{y})\Big] \end{aligned} \quad (7)\]

Here, the noisy representations \(\tilde{y}_{1}\), \(\tilde{y}_{2}\), and \(\tilde{z}\) follow standard uniform distributions whose mean values are \(y_{1}\), \(y_{2}\), and \(z\), respectively. In addition, \(y_{1}\) and \(y_{2}\) are the channel-wise split representations of \(y\), the result of the transform \(g_{a}\), and have \(M_{1}\) and \(M_{2}\) channels, respectively:

\[\begin{aligned} q(\tilde{y}_{1},\tilde{y}_{2},\tilde{z}\mid x,\phi_{g},\phi_{h}) = {} & \prod_{i}\mathcal{U}\big(\tilde{y}_{1i}\mid y_{1i}-\tfrac{1}{2},\,y_{1i}+\tfrac{1}{2}\big)\cdot \\ & \prod_{j}\mathcal{U}\big(\tilde{y}_{2j}\mid y_{2j}-\tfrac{1}{2},\,y_{2j}+\tfrac{1}{2}\big)\cdot \\ & \prod_{k}\mathcal{U}\big(\tilde{z}_{k}\mid z_{k}-\tfrac{1}{2},\,z_{k}+\tfrac{1}{2}\big) \\ & \text{with } y_{1},y_{2}=S(g_{a}(x;\phi_{g})),\ \hat{y}=Q(y_{1})\oplus Q(y_{2}),\ z=h_{a}(\hat{y};\phi_{h}). \end{aligned} \quad (8)\]

The rate term for \(\tilde{y}_{1}\) uses the same model as equation (4). Note that \(\sigma_{2}\) does not contribute here, but does contribute to the model for \(\tilde{y}_{2}\):

\[\begin{aligned} p_{\tilde{y}_{1}|\hat{z}}(\tilde{y}_{1}\mid\hat{z},\theta_{h}) &= \prod_{i}\Big(\mathcal{N}\big(\mu_{1i},\sigma_{1i}^{2}\big)*\mathcal{U}\big(-\tfrac{1}{2},\tfrac{1}{2}\big)\Big)(\tilde{y}_{1i}) \\ \text{with } & \mu_{1i},\sigma_{1i}=f(c_{i}^{\prime},c_{i}^{\prime\prime}), \quad c_{i}^{\prime}=E^{\prime}(c^{\prime},i), \quad c_{i}^{\prime\prime}=E^{\prime\prime}(\langle\hat{y}_{1}\rangle,i), \quad c^{\prime},\sigma_{2}=S(h_{s}(\hat{z};\theta_{h})) \end{aligned} \quad (9)\]

The rate term for \(\tilde{y}_{2}\) is almost the same as in Ballé et al. (2018), except that the noisy representations are used only as inputs to the entropy models during training, and not as the conditions of the models:

\[p_{\tilde{y}_{2}|\hat{z}}(\tilde{y}_{2}\mid \hat{z},\theta_{h}) = \prod_{j}\Big(\mathcal{N}\big(0,\sigma_{2j}^{2}\big)*\mathcal{U}\big(-\tfrac{1}{2},\tfrac{1}{2}\big)\Big)(\tilde{y}_{2j}) \quad (10)\]

The model of \(z\) is the same as in equation (5). For implementation, we used this hybrid structure for the four configurations with the highest bit-rates. We set \(N\), \(M_{1}\), and \(M_{2}\) to 400, 192, and 408, respectively, for the top two configurations, and to 320, 192, and 228, respectively, for the next two configurations.
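Both rate terms reduce to the same PMF evaluation: a Gaussian convolved with a standard uniform density, which is a difference of Gaussian CDFs at half-integer offsets. The sketch below illustrates this standard identity applied to equations (9) and (10); it is our illustration, not the authors' code, and the clipping constant is our choice.

```python
import numpy as np
from scipy.stats import norm

def estimated_bits(y_hat, mu, sigma):
    """Bits under (N(mu, sigma^2) * U(-1/2, 1/2)) evaluated at the discrete
    y_hat, i.e. Phi((y + 1/2 - mu)/sigma) - Phi((y - 1/2 - mu)/sigma)."""
    p = (norm.cdf(y_hat + 0.5, loc=mu, scale=sigma)
         - norm.cdf(y_hat - 0.5, loc=mu, scale=sigma))
    return -np.log2(np.maximum(p, 1e-9))  # clip to keep the bits finite

# Equation (9): mu and sigma come from f(c'_i, c''_i).
# Equation (10): the lightweight zero-mean model only needs sigma_2.
bits_y2 = lambda y2_hat, sigma2: estimated_bits(y2_hat, 0.0, sigma2)
```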
In addition, we measured the average execution time per image spent encoding and decoding the Kodak PhotoCD image dataset (Kodak, 1993), to clarify the benefit of the hybrid model. The test was conducted in a CPU environment (Intel i9-7900X). Note that we ignored the time for actual entropy coding because all models with the same values of \(N\) and \(M\) spend the same amount of time on entropy coding. As shown in figure 7, the hybrid models clearly reduced the execution time. Setting \(N\) and \(M\) to 320 and 420, respectively, we obtained a 46.83% speed gain. With the larger numbers of parameters, \(N\) of 400 and \(M\) of 600, we obtained a 57.28% speed gain.

### 6.2 TEST RESULTS OF THE MODELS TRAINED USING DIFFERENT TYPES OF REPRESENTATIONS

In this section, we provide test results for two models: the proposed model, trained using discrete representations as inputs to the synthesis transforms \(g_{s}\) and \(h_{s}\), and the same model trained using noisy representations, following the training process of Ballé et al. (2018)'s approach. In detail, in the training phase of the proposed model, we used the quantized representations \(\hat{y}\) and \(\hat{z}\) as inputs to the transforms \(g_{s}\) and \(h_{s}\), respectively, to ensure the same conditions in the training and testing phases. For training the compared model, in contrast, the noisy representations \(\tilde{y}\) and \(\tilde{z}\) were used as inputs to the transforms. An additional change in the proposed model is using \(\hat{y}\), instead of \(y\), as the input to \(h_{a}\); note that this has nothing to do with the mismatches between training and testing. We used it to match the input of \(h_{a}\) to the targets of the model estimation via \(f\). As shown in figure 8, the proposed model, trained using discrete representations, was 5.94% better than the model trained using noisy representations, in terms of the BD-rate of PSNR. Compared with Ballé et al. (2018)'s approach, the performance gains of the two models, trained using discrete and noisy representations, were 11.97% and 7.20%, respectively.
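Feeding the discrete \(\hat{y}\) and \(\hat{z}\) into the synthesis transforms during training requires a gradient through the rounding operation; a plausible TensorFlow sketch of the gradient-overriding-with-identity method mentioned in section 2.3 is the straight-through trick below, a common device rather than the authors' implementation.

```python
import tensorflow as tf

def round_straight_through(y):
    """Uniform quantization by rounding in the forward pass, with the
    gradient overridden by the identity in the backward pass."""
    return y + tf.stop_gradient(tf.round(y) - y)
```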
![](images/14_0.jpg)

<center>Figure 7: Comparison of execution times between the base model and the hybrid model </center>

![](images/14_1.jpg)

<center>Figure 8: Evaluation results of the models trained using noisy/discrete representations as inputs to the synthesis transforms </center>

### 6.3 SAMPLES OF THE EXPERIMENTS

In this section, we provide a few more supplemental test results. Figures 9, 10, and 11 show the results of the MSE-optimized version of our method, whereas figures 12 and 13 show the results of the MS-SSIM-optimized version. All figures include the ground-truth image and the reconstructed images for our method, BPG, and Ballé et al. (2018)'s approach, in clockwise order.

![](images/15_0.jpg)

<center>Figure 9: Sample test results. Top left, ground truth; top right, our method (MSE optimized; bpp, 0.2040; PSNR, 32.2063); bottom left, BPG (bpp, 0.2078; PSNR, 32.0406); bottom right, Ballé et al. (2018)'s approach (MSE optimized; bpp, 0.2101; PSNR, 31.7054) </center>

![](images/16_0.jpg)

<center>Figure 10: Sample test results. Top left, ground truth; top right, our method (MSE optimized; bpp, 0.1236; PSNR, 32.4284); bottom left, BPG (bpp, 0.1285; PSNR, 32.0444); bottom right, Ballé et al. (2018)'s approach (MSE optimized; bpp, 0.1229; PSNR, 31.0596) </center>

![](images/17_0.jpg)

<center>Figure 11: Sample test results. Top left, ground truth; top right, our method (MSE optimized; bpp, 0.1501; PSNR, 34.7103); bottom left, BPG (bpp, 0.1477; PSNR, 33.9623); bottom right, Ballé et al. (2018)'s approach (MSE optimized; bpp, 0.1520; PSNR, 34.0465) </center>

![](images/18_0.jpg)

<center>Figure 12: Sample test results. Top left, ground truth; top right, our method (MS-SSIM optimized; bpp, 0.2507; MS-SSIM, 0.9740); bottom left, BPG (bpp, 0.2441; MS-SSIM, 0.9555); bottom right, Ballé et al. (2018)'s approach (MS-SSIM optimized; bpp, 0.2101; MS-SSIM, 0.9705) </center>

![](images/19_0.jpg)

<center>Figure 13: Sample test results. Top left, ground truth; top right, our method (MS-SSIM optimized; bpp, 0.2269; MS-SSIM, 0.9810); bottom left, BPG (bpp, 0.2316; MS-SSIM, 0.9658); bottom right, Ballé et al. (2018)'s approach (MS-SSIM optimized; bpp, 0.2291; MS-SSIM, 0.9786) </center>
## ABSTRACT We propose a context- adaptive entropy model for use in end- to- end optimized image compression. Our model exploits two types of contexts, bit- consuming contexts and bit- free contexts, distinguished based upon whether additional bit allocation is required. Based on these contexts, we allow the model to more accurately estimate the distribution of each latent representation with a more generalized form of the approximation models, which accordingly leads to an enhanced compression performance. Based on the experimental results, the proposed method outperforms the traditional image codecs, such as BPG and JPEG2000, as well as other previous artificial- neural- network (ANN) based approaches, in terms of the peak signal- to- noise ratio (PSNR) and multi- scale structural similarity (MS- SSIM) index. The test code is publicly available at https://github.com/JooyoungLeeETRI/CA_Entropy_Model. ## 1 INTRODUCTION Recently, artificial neural networks (ANNs) have been applied in various areas and have achieved a number of breakthroughs resulting from their superior optimization and representation learning performance. In particular, for various problems that are sufficiently straightforward that they can be solved within a short period of time by hand, a number of ANN- based studies have been conducted and significant progress has been made. With regard to image compression, however, relatively slow progress has been made owing to its complicated target problems. A number of works, focusing on the quality enhancement of reconstructed images, were proposed. For instance, certain approaches (Dong et al., 2015; Svoboda et al., 2016; Zhang et al., 2017) have been proposed to reduce artifacts caused by image compression, relying on the superior image restoration capability of an ANN. Although it is indisputable that artifact reduction is one of the most promising areas exploiting the advantages of ANNs, such approaches can be viewed as a type of post- processing, rather than image compression itself. Regarding ANN- based image compression, the previous methods can be divided into two types. First, as a consequence of the recent success of generative models, some image compression approaches targeting the superior perceptual quality (Agustsson et al., 2018; Santurkar et al., 2018; Rippel & Bourdev, 2017) have been proposed. The basic idea here is that learning the distribution of natural images enables a very high compression level without severe perceptual loss by allowing the generation of image components, such as textures, which do not highly affect the structure or the perceptual quality of the reconstructed images. Although the generated images are very realistic, the acceptability of the machine- created image components eventually becomes somewhat application- dependent. Meanwhile, a few end- to- end optimized ANN- based approaches (Toderici et al., 2017; Johnston et al., 2018; Balle et al., 2017; Theis et al., 2017; Balle et al., 2018), without generative models, have been proposed. In these approaches, unlike traditional codecs comprising separate tools, such as prediction, transform, and quantization, a comprehensive solution covering all functions has been sought after using end- to- end optimization. Toderici et al. 
(2017)'s approach exploits a small number of latent binary representations to contain the compressed information in every step, and each step increasingly stacks the additional latent representations to achieve a progressive improvement in quality of the reconstructed images. Johnston et al. (2018) improved the compression <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Comparison of sample test results including the ground truth, our method, Balle et al. (2018)'s approach, BPG, and JPEG2000. </center> performance by enhancing operation methods of the networks developed by Toderici et al. (2017). Although Toderici et al. (2017); Johnston et al. (2018) provided novel frameworks suitable to quality control using a single trained network, the increasing number of iteration steps to obtain higher image quality can be a burden to certain applications. In contrast to the approaches developed by Toderici et al. (2017) and Johnston et al. (2018), which extract binary representations with as high an entropy as possible, Balle et al. (2017), Theis et al. (2017), and Balle et al. (2018) regard the image compression problem as being how to retrieve discrete latent representations having as low an entropy as possible. In other words, the target problem of the former methods can be viewed as how to include as much information as possible in a fixed number of representations, whereas the latter is simply how to reduce the expected bit- rate when a sufficient number of representations are given, assuming that the low entropy corresponds to small number of bits from the entropy coder. To solve the second target problem, Balle et al. (2017), Theis et al. (2017), and Balle et al. (2018) adopt their own entropy models to approximate the actual distributions of the discrete latent representations. More specifically, Balle et al. (2017) and Theis et al. (2017) proposed novel frameworks that exploit the entropy models, and proved their performance capabilities by comparing the results with those of conventional codecs such as JPEG2000. Whereas Balle et al. (2017) and Theis et al. (2017) assume that each representation has a fixed distribution, Balle et al. (2018) introduced an input- adaptive entropy model that estimates the scale of the distribution for each representation. This idea is based on the characteristics of natural images in which the scales of the representations vary together in adjacent areas. They provided test results that outperform all previous ANN- based approaches, and reach very close to those of BPG (Bellard, 2014), which is known as a subset of HEVC (ISO/IEC 23008- 2, ITU- T H.265), used for image compression. One of the principle elements in end- to- end optimized image compression is the trainable entropy model used for the latent representations. Because the actual distributions of latent representations are unknown, the entropy models provide the means to estimate the required bits for encoding the latent representations by approximating their distributions. When an input image \(\boldsymbol{x}\) is transformed into a latent representation \(\boldsymbol{y}\) and then uniformly quantized into \(\hat{\boldsymbol{y}}\) , the simple entropy model can be represented by \(p_{\hat{\boldsymbol{y}}}(\hat{\boldsymbol{y}})\) , as described by Balle et al. (2018). 
When the actual marginal distribution of \(\hat{\boldsymbol{y}}\) is denoted as \(m(\hat{\boldsymbol{y}})\) , the rate estimation, calculated through cross entropy using the entropy model, \(p_{\hat{\boldsymbol{y}}}(\hat{\boldsymbol{y}})\) , can be represented as shown in equation (1), and can be decomposed into the actual entropy of \(\hat{\boldsymbol{y}}\) and the additional bits owing to a mismatch between the actual distributions and their approximations. Therefore, decreasing the rate term \(R\) during the training process allows the entropy model \(p_{\hat{\boldsymbol{y}}}(\hat{\boldsymbol{y}})\) to approximate \(m(\hat{\boldsymbol{y}})\) as closely as possible, and let the other parameters transform \(x\) into \(y\) properly such that the actual entropy of \(\hat{\boldsymbol{y}}\) becomes small. \[R = \mathbb{E}_{\hat{\boldsymbol{y}}\sim m}\big[-\log_{2}p_{\hat{\boldsymbol{y}}}(\hat{\boldsymbol{y}})\big] = H(m) + D_{KL}(m||p_{\hat{\boldsymbol{y}}}). \quad (1)\] In terms of KL- divergence, \(R\) is minimized when \(p_{\hat{\boldsymbol{y}}}(\hat{\boldsymbol{y}})\) becomes perfectly matched with the actual distribution \(m(\hat{\boldsymbol{y}})\) . This means that the compression performance of the methods essentially depends on the capacity of the entropy model. To enhance the capacity, we propose a new entropy model <--- Page Split ---> that exploits two types of contexts, bit- consuming and bit- free contexts, distinguished according to whether additional bit allocation is required. Utilizing these two contexts, we allow the model to more accurately estimate the distribution of each latent representation through the use of a more generalized form of the entropy models, and thus more effectively reduce the spatial dependencies among the adjacent latent representations. Figure 1 demonstrates a comparison of the compression results of our method to those of other previous approaches. The contributions of our work are as follows: - We propose a new context-adaptive entropy model framework that incorporates the two different types of contexts.- We provide the test results that outperform the widely used conventional image codec BPG in terms of the PSNR and MS-SSIM.- We discuss the directions of improvement in the proposed methods in terms of the model capacity and the level of the contexts. Note that we follow a number of notations given by Balle et al. (2018) because our approach can be viewed as an extension of their work, in that we exploit the same rate- distortion (R- D) optimization framework. The rest of this paper is organized as follows. In Section 2, we introduce the key approaches of end- to- end optimized image compression and propose the context- adaptive entropy model. Section 3 demonstrates the structure of the encoder and decoder models used, and the experimental setup and results are then given in section 4. Finally, in Section 5, we discuss the current state of our work and directions for improvement. ## 2 END-TO-END OPTIMIZATION BASED ON CONTEXT-ADAPTIVE ENTROPY MODELS ### 2.1 PREVIOUS ENTROPY MODELS Since they were first proposed by Balle et al. (2017) and Theis et al. (2017), entropy models, which approximate the distribution of discrete latent representations, have noticeably improved the image compression performance of ANN- based approaches. Balle et al. (2017) assumes the entropy models of the latent representations as non- parametric models, whereas Theis et al. 
(2017) adopted a Gaussian scale mixture model composed of six weighted zero- mean Gaussian models per representation. Although they assume different forms of entropy models, they have a common feature in that both concentrate on learning the distributions of the representations without considering input adaptivity. In other words, once the entropy models are trained, the trained model parameters for the representations are fixed for any input during the test time. Balle et al. (2018), in contrast, introduced a novel entropy model that adaptively estimates the scales of the representations based on input. They assume that the scales of the latent representations from the natural images tend to move together within an adjacent area. To reduce this redundancy, they use a small amount of additional information by which the proper scale parameters (standard deviations) of the latent representations are estimated. In addition to the scale estimation, Balle et al. (2018) have also shown that when the prior probability density function (PDF) for each representation in a continuous domain is convolved with a standard uniform density function, it approximates the prior probability mass function (PMF) of the discrete latent representation, which is uniformly quantized by rounding, much more closely. For training, a uniform noise is added to each latent representation so as to fit the distribution of these noisy representations into the mentioned PMF- approximating functions. Using these approaches, Balle et al. (2018) achieved a state- of- the- art compression performance, close to that of BPG. ### 2.2 SPATIAL DEPENDENCIES OF THE LATENT VARIABLES The latent representations, when transformed through a convolutional neural network, essentially contain spatial dependencies because the same convolutional filters are shared across the spatial regions, and natural images have various factors in common within adjacent regions. Balle et al. (2018) successfully captured these spatial dependencies and enhanced the compression performance by input- adaptively estimating standard deviations of the latent representations. Taking a step forward, we generalize the form of the estimated distribution by allowing, in addition to the standard <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: Examples of latent representations and their normalized versions for the two cases (the first two images show the results in which only the standard deviations are estimated using side information, whereas the last two images show the results in which the mu and standard deviation are estimated using our method). For a clear demonstration, the latent representations having the highest covariance between the spatially adjacent variables are extracted. Left: the latent representations of \(\hat{y}\) from the first case. Middle left: the normalized versions of \(\hat{y}\) from the first case, divided by the estimated standard deviation. Middle right: the latent variable of \(\hat{y}\) from the second case. Right: the normalized versions of \(\hat{y}\) from the second case, shifted and divided by the estimated mu and the standard deviation. </center> deviation, the mean estimation utilizing the contexts. For instance, assuming that certain representations tend to have similar values within a spatially adjacent area, when all neighborhood representations have a value of 10, we can intuitively guess that, for the current representation, the chance of having a value equal or similar to 10 is relatively high. 
This simple estimation will consequently reduce the entropy. Likewise, our method utilizes the given contexts for estimating the mean, as well as the standard deviation, of each latent representation. Note that Toderici et al. (2017), Johnston et al. (2018), and Rippel & Bourdev (2017) also apply context- adaptive entropy coding by estimating the probability of each binary representation. However, these context- adaptive entropy- coding methods can be viewed as separate components, rather than one end- to- end optimization component because their probability estimation does not directly contribute to the rate term of the R- D optimization framework. Figure 2 visualizes the latent variables \(\hat{y}\) and their normalized versions of the two different approaches, one estimating only the standard deviation parameters and the other estimating both the mu and standard deviation parameters with the two types of mentioned contexts. The visualization shows that the spatial dependency can be removed more effectively when the mu is estimated along with the given contexts. ### 2.3 CONTEXT-ADAPTIVE ENTROPY MODEL The optimization problem described in this paper is similar with Ballé et al. (2018), in that the input \(x\) is transformed into \(y\) having a low entropy, and the spatial dependencies of \(y\) are captured into \(\hat{z}\) . Therefore, we also use four fundamental parametric transform functions: an analysis transform \(g_{\alpha}(\boldsymbol {x}; \boldsymbol {\phi}_{g})\) to transform \(x\) into a latent representation \(y\) , a synthesis transform \(g_{s}(\hat{y}; \boldsymbol {\theta}_{g})\) to reconstruct image \(\hat{x}\) , an analysis transform \(h_{\alpha}(\hat{y}; \boldsymbol {\phi}_{h})\) to capture the spatial redundancies of \(\hat{y}\) into a latent representation \(z\) , and a synthesis transform \(h_{s}(\hat{z}; \boldsymbol {\theta}_{h})\) used to generate the contexts for the model estimation. Note that \(h_{s}\) does not estimate the standard deviations of the representations directly as in Ballé et al. (2018)'s approach. In our method, instead, \(h_{s}\) generates the context \(c'\) , one of the two types of contexts for estimating the distribution. These two types of contexts are described in this section. Ballé et al. (2018) analyzed the optimization problem from the viewpoint of the variational autoencoder (Kingma & Welling (2014); Rezende et al. (2014)), and showed that the minimization of the KL- divergence is the same problem as the R- D optimization of image compression. Basically, we follow the same concept; however, for training, we use the discrete representations on the conditions instead of the noisy representations, and thus the noisy representations are only used as the inputs to the entropy models. Empirically, we found that using discrete representations on the conditions show better results, as shown in appendix 6.2. These results might come from removing the mis <--- Page Split ---> matches of the conditions between the training and testing, thereby enhancing the training capacity by limiting the affect of the uniform noise only to help the approximation to the probability mass functions. We use the gradient overriding method with the identity function, as in Theis et al. (2017), to deal with the discontinuities from the uniform quantization. The resulting objective function used in this paper is given in equation (2). 
The total loss consists of two terms representing the rates and distortions, and the coefficient \(\lambda\) controls the balance between the rate and distortion during the R- D optimization. Note that \(\lambda\) is not an optimization target, but a manually configured condition that determines which to focus on between rate and distortion: \[\begin{array}{r l} & {\mathcal{L} = R + \lambda D}\\ & {\mathrm{with} R = \mathbb{E}_{\pmb {x}\sim p_{\pmb{x}}}\mathbb{E}_{\hat{\pmb{y}},\hat{\pmb{z}}\sim q}\Big[-\log p_{\hat{\pmb{y}}|\hat{\pmb{z}}}(\hat{\pmb{y}}\mid \hat{\pmb{z}}) - \log p_{\hat{\pmb{z}}}(\hat{\pmb{z}})\Big],}\\ & {\qquad D = \mathbb{E}_{\pmb{x}\sim p_{\pmb{x}}}\Big[-\log p_{\pmb{x}|\hat{\pmb{y}}}(\pmb {x}\mid \hat{\pmb{y}})\Big]} \end{array} \quad (2)\] Here, the noisy representations of \(\tilde{\pmb{y}}\) and \(\tilde{\pmb{z}}\) follow the standard uniform distribution, the mean values of which are \(\pmb{y}\) and \(\pmb{z}\) , respectively, when \(\pmb{y}\) and \(\pmb{z}\) are the result of the transforms \(g_{a}\) and \(h_{a}\) , respectively. Note that the input to \(h_{a}\) is \(\hat{\pmb{y}}\) , which is a uniformly quantized representation of \(\pmb{y}\) , rather than the noisy representation \(\tilde{\pmb{y}}\) . \(Q\) denotes the uniform quantization function, for which we simply use a rounding function: \[\begin{array}{r c l}{{q(\tilde{\pmb{y}},\tilde{\pmb{z}}\mid\pmb{x},\phi_{g},\phi_{h})}}&{{=}}&{{\prod_{i}\mathcal{U}\big(\tilde{y}_{i}\mid y_{i}-\frac{1}{2},y_{i}+\frac{1}{2}\big)\cdot\prod_{j}\mathcal{U}\big(\tilde{z}_{j}\mid z_{j}-\frac{1}{2},z_{j}+\frac{1}{2}\big)}}\\ {{}}&{{}}&{{\mathrm{with}~y=g_{a}(\pmb{x};\phi_{g}),\hat{\pmb{y}}=Q(\pmb{y}),\pmb{z}=h_{a}(\hat{\pmb{y}};\phi_{h}).}}\end{array} \quad (3)\] The rate term represents the expected bits calculated using the entropy models of \(p_{\tilde{\pmb{y}} |\tilde{\pmb{z}}}\) and \(p_{\tilde{\pmb{z}}}\) . Note that \(p_{\tilde{\pmb{y}} |\tilde{\pmb{z}}}\) and \(p_{\tilde{\pmb{z}}}\) are eventually the approximations of \(p_{\tilde{\pmb{y}} |\tilde{\pmb{z}}}\) and \(p_{\tilde{\pmb{z}}}\) , respectively. Equation (4) represents the entropy model for approximating the required bits for \(\hat{\pmb{y}}\) . The model is based on the Gaussian model, which not only has the standard deviation parameter \(\sigma_{i}\) , but also the mu parameter \(\mu_{i}\) . The values of \(\mu_{i}\) and \(\sigma_{i}\) are estimated from the two types of given contexts based on the function \(f\) , the distribution estimator, in a deterministic manner. The two types of contexts, bit- consuming and bit- free contexts, for estimating the distribution of a certain representation are denoted as \(c_{i}^{\prime}\) and \(c_{i}^{\prime \prime}\) . \(E^{\prime}\) extracts \(c_{i}^{\prime}\) from \(c^{\prime}\) , the result of transform \(h_{s}\) . In contrast to \(c_{i}^{\prime}\) , no additional bit allocation is required for \(c_{i}^{\prime \prime}\) . Instead, we simply utilize the known (already entropy- coded or decoded) subset of \(\hat{\pmb{y}}\) , denoted as \(\langle \hat{\pmb{y}}\rangle\) . Here, \(c_{i}^{\prime \prime}\) is extracted from \(\langle \hat{\pmb{y}}\rangle\) by the extractor \(E^{\prime \prime}\) . We assume that the entropy coder and the decoder sequentially process \(\hat{y}_{i}\) in the same specific order, such as with raster scanning, and thus \(\langle \hat{\pmb{y}}\rangle\) given to the encoder and decoder can always be identical when processing the same \(\hat{y}_{i}\) . 
A formal expression of this is as follows: \[\begin{array}{r l} & {p_{\tilde{\pmb{y}} |\tilde{\pmb{z}}}(\tilde{\pmb{y}}\mid \hat{\pmb{z}},\pmb {\theta}_{h}) = \prod_{i}\Big(\mathcal{N}\big(\mu_{i},\sigma_{i}^{2}\big)*\mathcal{U}\big(-\frac{1}{2},\frac{1}{2}\big)\Big)(\tilde{y}_{i})}\\ & {\qquad \mathrm{with} \mu_{i},\sigma_{i} = f(c_{i}^{\prime},c_{i}^{\prime \prime}),}\\ & {\qquad c_{i}^{\prime} = E^{\prime}(h_{s}(\hat{\pmb{z}};\pmb {\theta}_{h}),i),}\\ & {\qquad c_{i}^{\prime \prime} = E^{\prime \prime}(\langle \hat{\pmb{y}}\rangle ,i),}\\ & {\qquad \hat{\pmb{z}} = Q(\pmb {z})} \end{array} \quad (4)\] In the case of \(\hat{\pmb{z}}\) , a simple entropy model is used. We assumed that the model follows zero- mean Gaussian distributions which have a trainable \(\sigma\) . Note that \(\hat{\pmb{z}}\) is regarded as side information and it contributes a very small amount of the total bit- rate, as described by Balle et al. (2018), and thus we use this simpler version of the entropy model, rather than a more complex model, for end- to- end optimization over all parameters of the proposed method: \[p_{\tilde{\pmb{z}}}(\tilde{\pmb{z}}) = \prod_{j}\Big(\mathcal{N}\big(0,\sigma_{j}^{2}\big)*\mathcal{U}\big(-\frac{1}{2},\frac{1}{2}\big)\Big)(\tilde{z}_{j}) \quad (5)\] <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 3: Encoder-decoder model of the proposed method. The left block represents the encoder side, whereas the right block represents the decoder side. The small icons between them represent the entropy-coded bitstreams. EC and ED represent entropy coding and entropy decoding, and U | Q represents the addition of uniform noise to \(y\) or a uniform quantization of \(y\) . Noisy representations are used only for training as inputs to the entropy models, and are illustrated with the dotted lines. </center> Note that actual entropy coding or decoding processes are not necessarily required for training or encoding because the rate term is not the amount of real bits, but an estimation calculated from the entropy models, as mentioned previously. We calculate the distortion term using the mean squared error (MSE) \(^{1}\) , assuming that \(p_{x|\hat{y}}\) follows a Gaussian distribution as a widely used distortion metric. ## 3 ENCODER-DECODER MODEL This section describes the basic structure of the proposed encoder- decoder model. On the encoder side, an input image is transformed into latent representations, quantized, and then entropy- coded using the trained entropy models. In contrast, the decoder first applies entropy decoding with the same entropy models used for the encoder, and reconstructs the image from the latent representations, as illustrated in figure 3. It is assumed that all parameters that appear in this section were already trained. The structure of the encoder- decoder model basically includes \(g_{\alpha}\) and \(g_{s}\) in charge of the transform of \(x\) into \(y\) and its inverse transform, respectively. The transformed \(y\) is uniformly quantized into \(\hat{y}\) by rounding. Note that, in the case of approaches based on the entropy models, unlike traditional codecs, tuning the quantization steps is usually unnecessary because the scales of the representations are optimized together through training. The other components between \(g_{\alpha}\) and \(g_{s}\) carry out the role of entropy coding (or decoding) with the shared entropy models and underlying context preparation processes. 
More specifically, the entropy model estimates the distribution of each \(\hat{y}_{i}\) individually, in which \(\mu_{i}\) and \(\sigma_{i}\) are estimated with the two types of given contexts, \(c_{i}^{\prime}\) and \(c_{i}^{\prime \prime}\) . Among these contexts, \(c^{\prime}\) can be viewed as side information, which requires an additional bit allocation. To reduce the required bit- rate for carrying \(c^{\prime}\) , the latent representation \(z\) , transformed from \(\hat{y}\) , is quantized and entropy- coded by its own entropy model, as specified in section 2.3. On the other hand, \(c_{i}^{\prime \prime}\) is extracted from \(\langle \hat{y} \rangle\) , without any additional bit allocation. Note that \(\langle \hat{y} \rangle\) varies as the entropy coding or decoding progresses, but is always identical for processing the same \(\hat{y}_{i}\) in both the encoder and decoder, as described in 2.3. The parameters of \(h_{s}\) and the entropy models are simply shared by both the encoder and the decoder. Note that the inputs to the entropy models during training are the noisy representations, as illustrated with the dotted line in figure 3, to allow the entropy model to approximate the probability mass functions of the discrete representations. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 4: Implementation of the proposed method. We basically use the convolutional autoencoder structure, and the distribution estimator \(f\) is also implemented using convolutional neural networks. The notations of the convolutional layer follow Ballé et al. (2018): the number of filters \(\times\) filter height \(\times\) filter width / the downscale or upscale factor, where \(\uparrow\) and \(\downarrow\) denote the up and downscaling, respectively. For up or downscaling, we use the transposed convolution. For the networks, input images are normalized into a scale between -1 and 1. </center> ## 4 EXPERIMENTS ### 4.1 IMPLEMENTATION We use a convolutional neural networks to implement the analysis transform and the synthesis transform functions, \(g_{a}\) , \(g_{s}\) , \(h_{a}\) , and \(h_{s}\) . The structures of the implemented networks follow the same structures of Ballé et al. (2018), except that we use the exponentiation operator instead of an absolute operator at the end of \(h_{s}\) . Based on Ballé et al. (2018)'s structure, we added the components to estimate the distribution of each \(\hat{y}_{i}\) , as shown in figure 4. Herein, we represent a uniform quantization (round) as "Q," entropy coding as "EC," and entropy decoding as "ED." The distribution estimator is denoted as \(f\) , and is also implemented using the convolutional layers which takes channel- wise concatenated \(\boldsymbol{c}_{i}^{\prime}\) and \(\boldsymbol{c}_{i}^{\prime \prime}\) as inputs and provides estimated \(\mu_{i}\) and \(\sigma_{i}\) as results. Note that the same \(\boldsymbol{c}_{i}^{\prime}\) and \(\boldsymbol{c}_{i}^{\prime \prime}\) are shared for all \(\hat{y}_{i}\) s located at the same spatial position. In other words, we let \(E^{\prime}\) extract all spatially adjacent elements from \(\boldsymbol{c}^{\prime}\) across the channels to retrieve \(\boldsymbol{c}_{i}^{\prime}\) and likewise let \(E^{\prime \prime}\) extract all adjacent known elements from \(\langle \hat{y}\rangle\) for \(\boldsymbol{c}_{i}^{\prime \prime}\) . This could have the effect of capturing the remaining correlations among the different channels. 
In short, when \(M\) is the total number of channels of \(y\) , we let \(f\) estimate all \(M\) distributions of \(\hat{y}_{i}\) s, which are located at the same spatial position, using only a single step, thereby allowing the total number of estimations to be reduced. Furthermore, the parameters of \(f\) are shared for all spatial positions of \(\hat{y}\) , and thus only one trained \(f\) per \(\lambda\) is necessary to process any sized images. In the case of training, however, collecting the results from the all spatial positions to calculate the rate term becomes a significant burden, despite the simplifications mentioned above. To reduce this burden, we designate a certain number (32 and 16 for the base model and the hybrid model, respectively) of random spatial points as the representatives per training step, to calculate the rate term easily. Note that we let these random points contribute solely to the rate term, whereas the distortion is still calculated over all of the images. Because \(y\) is a three- dimensional array in our implementation, index \(i\) can be represented as three indexes, \(k\) , \(l\) , and \(m\) , representing the horizontal index, the vertical index, and the channel index, respectively. When the current position is given as \((k, l, m)\) , \(E^{\prime}\) extracts \(\boldsymbol{c}_{[k - 2, \ldots , k + 1], [l - 3, \ldots , l], [1, \ldots , M]}\) as \(\boldsymbol{c}_{i}^{\prime}\) , and \(E^{\prime \prime}\) extracts \(\langle \hat{y} \rangle_{[k - 2, \ldots , k + 1], [l - 3, \ldots , l], [1, \ldots , M]}\) as \(\boldsymbol{c}_{i}^{\prime \prime}\) , when \(\langle \hat{y} \rangle\) represents the known area of \(\hat{y}\) . Note that we filled in the unknown area of \(\langle \hat{y} \rangle\) with zeros, to maintain the dimensions of \(\langle \hat{y} \rangle\) <--- Page Split ---> identical to those of \(\hat{\pmb{y}}\) . Consequently, \(\pmb{c}_{i}^{\prime \prime}[_{[3\dots 4]},_{[4\dots M]}]\) are always padded with zeros. To keep the dimensions of the estimation results to the inputs, the marginal areas of \(\pmb{c}^{\prime}\) and \(\langle \hat{\pmb{y}}\rangle\) are also set to zeros. Note that when training or encoding, \(\pmb{c}_{i}^{\prime \prime}\) can be extracted using simple \(4\times 4\times M\) windows and binary masks, thereby enabling parallel processing, whereas a sequential reconstruction is inevitable for decoding. Another implementation technique used to reduce the implementation cost is combining the lightweight entropy model with the proposed model. The lightweight entropy model assumes that the representations follow a zero- mean Gaussian model with the estimated standard deviations, which is very similar with Ballé et al. (2018)'s approach. We utilize this hybrid approach for the top four cases, in bit- rate descending order, of the nine \(\lambda\) configurations, based on the assumption that for the higher- quality compression, the number of sparse representations having a very low spatial dependency increases, and thus a direct scale estimation provides sufficient performance for these added representations. For implementation, we separate the latent representation \(\pmb{y}\) into two parts, \(\pmb{y}_{1}\) and \(\pmb{y}_{2}\) , and two different entropy models are applied for them. Note that the parameters of \(g_{a}\) , \(g_{s}\) , \(h_{a}\) , and \(h_{s}\) are shared, and all parameters are still trained together. The detailed structure and experimental settings are described in appendix 6.1. 
The number of parameters \(N\) and \(M\) are set to 128 and 192, respectively, for the five \(\lambda\) configurations for lower bit- rates, whereas 2- 3 times more parameters, described in appendix 6.1, are used for the four \(\lambda\) configurations for higher bit- rates. Tensorflow and Python were used to setup the overall network structures, and for the actual entropy coding and decoding using the estimated model parameters, we implemented an arithmetic coder and decoder, for which the source code of the "Reference arithmetic coding" project<sup>2</sup> was used as the base code. ### 4.2 EXPERIMENTAL ENVIRONMENTS We optimized the networks using two different types of distortion terms, one with MSE and the other with MS- SSIM. For each distortion type, the average bits per pixel (BPP) and the distortion, PSNR and MS- SSIM, over the test set are measured for each of the nine \(\lambda\) configurations. Therefore, a total of 18 networks are trained and evaluated within the experimental environments, as explained below: - For training, we used \(256\times 256\) patches extracted from 32,420 randomly selected YFCC100m (Thomee et al. (2016)) images. We extracted one patch per image, and the extracted regions were randomly chosen. Each batch consists of eight images, and 1M iterations of the training steps were conducted, applying the ADAM optimizer (Kingma & Ba (2015)). We set the initial learning rate to \(5\times 10^{-5}\) , and reduced the rate by half every 50,000 iterations for the last 200,000 iterations. Note that, in the case of the four \(\lambda\) configurations for high bpp, in which the hybrid entropy model is used, 1M iterations of pre-training steps were conducted using the learning rate of \(1\times 10^{-5}\) . Although we previously indicated that the total loss is the sum of \(R\) and \(\lambda D\) for a simple explanation, we tuned the balancing parameter \(\lambda\) in a similar way as Theis et al. (2017), as indicated in equation (6). We used the \(\lambda\) parameters ranging from 0.01 to 0.5. \[\mathcal{L} = \frac{\lambda}{W_{y}\cdot H_{y}\cdot 256} R + \frac{1 - \lambda}{1000} D. \quad (6)\] - For the evaluation, we measured the average BPP and average quality of the reconstructed images in terms of the PSNR and MS-SSIM over 24 PNG images of the Kodak PhotoCD image dataset (Kodak, 1993). Note that we represent the MS-SSIM results in the form of decibels, as in Ballé et al. (2018), to increase the discrimination. ### 4.3 EXPERIMENTAL RESULTS We compared the test results with other previous methods, including traditional codecs such as BPG and JPEG2000, as well as previous ANN- based approaches such as Theis et al. (2017) and Ballé <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 5: Rate-distortion curves of the proposed method and competitive methods. The top plot represents the PSNR values as a result of changes in bpp, whereas the bottom plot shows MS-SSIM values in the same manner. Note that MS-SSIM values are converted into decibels \((-10\log_{10}(1 - \mathrm{MS-SSIM}))\) for differentiating the quality levels, in the same manner as in Ballé et al. (2018). </center> et al. (2018). Because two different quality metrics are used, the results are presented with two separate plots. As shown in figure 5, our methods outperform all other previous methods in both metrics. In particular, our models not only outperform Ballé et al. 
(2018)'s method, which is believed to be a state- of- the- art ANN- based approach, but we also obtain better results than the widely used conventional image codec, BPG. More specifically, the compression gains in terms of the BD- rate of PSNR over JPEG2000, Ballé et al. (2018)'s approach (MSE- optimized), and BPG are \(34.08\%\) , \(11.97\%\) , and \(6.85\%\) , respectively. In the case of MS- SSIM, we found wider gaps of \(68.82\%\) , \(13.93\%\) , and \(49.68\%\) , respectively. Note that we achieved significant gains over traditional codecs in terms of MS- SSIM, although this might be because the dominant target metric of the traditional codec developments have been the PSNR. <--- Page Split ---> In other words, they can be viewed as a type of MSE- optimized codec. Even when setting aside the case of MS- SSIM, our results can be viewed as one concrete evidence supporting that ANN- based image compression can outperform the existing traditional image codecs in terms of the compression performance. Supplemental image samples are provided in appendix 6.3. ## 5 DISCUSSION Based on previous ANN- based image compression approaches utilizing entropy models (Balle et al., 2017; Theis et al., 2017; Balle et al., 2018), we extended the entropy model to exploit two different types of contexts. These contexts allow the entropy models to more accurately estimate the distribution of the representations with a more generalized form having both mean and standard deviation parameters. Based on the evaluation results, we showed the superiority of the proposed method. The contexts we utilized are divided into two types. One is a sort of free context, containing the part of the latent variables known to both the encoder and the decoder, whereas the other is the context, which requires additional bit allocation. Because the former is a generally used context in a variety of codecs, and the latter was already verified to help compression using Balle et al. (2018)'s approach, our contributions are not the contexts themselves, but can be viewed as providing a framework of entropy models utilizing these contexts. Although the experiments showed the best results in the ANN- based image compression domain, we still have various studies to conduct to further improve the performance. One possible way is generalizing the distribution models underlying the entropy model. Although we enhanced the performance by generalizing the previous entropy models, and have achieved quite acceptable results, the Gaussian- based entropy models apparently have a limited expression power. If more elaborate models, such as the non- parametric models of Balle et al. (2018) or Oord et al. (2016), are combined with the context- adaptivity proposed in this paper, they would provide better results by reducing the mismatch between the actual distributions and the approximation models. Another possible way is improving the level of the contexts. Currently, our methods only use low- level representations within very limited adjacent areas. However, if the sufficient capacity of the networks and higher- level contexts are given, a much more accurate estimation could be possible. For instance, if an entropy model understands the structures of human faces, in that they usually have two eyes, between which a symmetry exists, the entropy model could approximate the distributions more accurately when encoding the second eye of a human face by referencing the shape and position of the first given eye. 
As is widely known, various generative models (Goodfellow et al., 2014; Radford et al., 2016; Zhao et al., 2017) learn the distribution \(p(x)\) of the images within a specific domain, such as human faces or bedrooms. In addition, various in-painting methods (Pathak et al., 2016; Yang et al., 2017; Yeh et al., 2017) learn the conditional distribution \(p(x \mid context)\) when the viewed areas are given as context. Although these methods have not been developed for image compression, we hope such high-level understanding can be utilized before long. Furthermore, the contexts carried using side information can also be extended to high-level information such as segmentation maps or any other information that helps with compression. Segmentation maps, for instance, may help the entropy models estimate the distribution of a representation discriminatively, according to the segment class the representation belongs to. Traditional codecs have a long development history, and a vast number of hand-crafted heuristics have accumulated over it, not only for enhancing compression performance but also for managing computational complexity. Therefore, ANN-based image compression approaches may not yet provide satisfactory solutions when their high complexity is taken into account. However, considering its much shorter history, we believe that ANN-based image compression has far more potential for future extension. Although we remain a long way from completion, we hope the proposed context-adaptive entropy model will provide a useful contribution to this area. ## ACKNOWLEDGMENTS This work was supported by an Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2017-0-00072, Development of Audio/Video Coding and Light Field Media Fundamental Technologies for Ultra Realistic Teramedia). <--- Page Split ---> ## 6 APPENDIX ### 6.1 HYBRID NETWORK FOR HIGHER BIT-RATE COMPRESSIONS We combined the lightweight entropy model with the context-adaptive entropy model to reduce the implementation costs for high-bpp configurations. The lightweight model exploits only the scale (standard deviation) estimation, assuming that the PMF approximations of the quantized representations follow zero-mean Gaussian distributions convolved with a standard uniform distribution. Figure 6 illustrates the structure of this hybrid network. The representation \(y\) is split channel-wise into two parts, \(y_{1}\) and \(y_{2}\), which have \(M_{1}\) and \(M_{2}\) channels, respectively, and is then quantized. Here, \(\hat{y}_{1}\) is entropy coded using the proposed entropy model, whereas \(\hat{y}_{2}\) is coded with the lightweight entropy model. The standard deviations of \(\hat{y}_{2}\) are estimated using \(h_{a}\) and \(h_{s}\). Unlike the context-adaptive entropy model, which uses the results of \(h_{s}\) (\(c^{\prime}\)) as the input source to the estimator \(f\), the lightweight entropy model retrieves the estimated standard deviations from \(h_{s}\) directly. Note that \(h_{a}\) takes the concatenated \(\hat{y}_{1}\) and \(\hat{y}_{2}\) as input, and that \(h_{s}\) generates \(c^{\prime}\) and \(\sigma_{2}\) at the same time.
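To make this data flow concrete, the following sketch shows one plausible channel-wise split and quantization; the function names, shapes, and the use of TensorFlow are our assumptions for illustration, not the authors' implementation.

```python
import tensorflow as tf

def hybrid_split(y, m1):
    """Split the representation y (channels last) into y1, coded with the
    context-adaptive entropy model, and y2, coded with the lightweight
    scale-only model. m1 channels go to y1; the remaining channels form y2."""
    y1, y2 = y[..., :m1], y[..., m1:]
    # Quantization by rounding; at training time, additive uniform noise or a
    # straight-through estimator would stand in for the rounding.
    y1_hat, y2_hat = tf.round(y1), tf.round(y2)
    # The hyper-encoder h_a takes the channel-wise concatenation as input.
    y_hat = tf.concat([y1_hat, y2_hat], axis=-1)
    return y1_hat, y2_hat, y_hat
```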
The total loss function also consists of the rate and distortion terms, although the rate is divided into three parts, one each for \(\hat{y}_{1}\), \(\hat{y}_{2}\), and \(\tilde{z}\). The distortion term is the same as before, but note that \(\hat{y}\) is the channel-wise concatenated representation of \(\hat{y}_{1}\) and \(\hat{y}_{2}\): \[\begin{array}{r l} & {\mathcal{L} = R + \lambda D}\\ & {\mathrm{with~} R = \mathbb{E}_{x\sim p_{x}}\mathbb{E}_{\hat{y}_{1},\hat{y}_{2},\tilde{z}\sim q}\Big[-\log p_{\hat{y}_{1}|\hat{z}}(\hat{y}_{1}\mid \hat{z}) - \log p_{\hat{y}_{2}|\hat{z}}(\hat{y}_{2}\mid \hat{z}) - \log p_{\tilde{z}}(\tilde{z})\Big],}\\ & {\qquad D = \mathbb{E}_{x\sim p_{x}}\Big[-\log p_{x|\hat{y}}(x\mid \hat{y})\Big]} \end{array} \quad (7)\] Here, the noisy representations \(\tilde{y}_{1}\), \(\tilde{y}_{2}\), and \(\tilde{z}\) follow standard uniform distributions whose mean values are \(y_{1}\), \(y_{2}\), and \(z\), respectively. In addition, \(y_{1}\) and \(y_{2}\) are the channel-wise split representations from \(y\), the result of the transform \(g_{a}\), and have \(M_{1}\) and \(M_{2}\) channels, respectively: <--- Page Split ---> \[\begin{array}{r l}{q(\tilde{y}_{1},\tilde{y}_{2},\tilde{z}\mid x,\phi_{g},\phi_{h})}&{=\prod_{i}\mathcal{U}\big(\tilde{y}_{1i}\mid y_{1i}-\frac{1}{2},y_{1i}+\frac{1}{2}\big)\cdot \prod_{j}\mathcal{U}\big(\tilde{y}_{2j}\mid y_{2j}-\frac{1}{2},y_{2j}+\frac{1}{2}\big)\cdot \prod_{k}\mathcal{U}\big(\tilde{z}_{k}\mid z_{k}-\frac{1}{2},z_{k}+\frac{1}{2}\big)}\\ &{\mathrm{with}~y_{1},y_{2}=S(g_{a}(x;\phi_{g})),~\hat{y}=Q(y_{1})\oplus Q(y_{2}),~z=h_{a}(\hat{y};\phi_{h}).}\end{array} \quad (8)\] The rate term for \(\hat{y}_{1}\) is the same model as that of equation (4). Note that \(\sigma_{2}\) does not contribute here, but does contribute to the model for \(\hat{y}_{2}\): \[\begin{array}{r l}{p_{\tilde{y}_{1}|\hat{z}}(\tilde{y}_{1}\mid\hat{z},\theta_{h})}&{=\prod_{i}\Big(\mathcal{N}\big(\mu_{1i},\sigma_{1i}{}^{2}\big)*\mathcal{U}\big(-\frac{1}{2},\frac{1}{2}\big)\Big)(\tilde{y}_{1i})}\\ &{\mathrm{with}~\mu_{1i},\sigma_{1i}=f(c_{i}^{\prime},c_{i}^{\prime\prime}),~c_{i}^{\prime}=E^{\prime}(c^{\prime},i),~c_{i}^{\prime\prime}=E^{\prime\prime}(\langle\tilde{y}_{1}\rangle,i),~c^{\prime},\sigma_{2}=S(h_{s}(\hat{z};\theta_{h}))}\end{array} \quad (9)\] The rate term for \(\hat{y}_{2}\) is almost the same as that of Ballé et al. (2018), except that the noisy representations are only used as inputs to the entropy models during training, and not as the conditions of the models: \[p_{\tilde{y}_{2}|\hat{z}}(\tilde{y}_{2}\mid \hat{z},\theta_{h}) = \prod_{j}\Big(\mathcal{N}\big(0,\sigma_{2j}{}^{2}\big)*\mathcal{U}\big(-\frac{1}{2},\frac{1}{2}\big)\Big)(\tilde{y}_{2j}) \quad (10)\] The model of \(z\) is the same as in equation (5). For the implementation, we used this hybrid structure for the four configurations with the highest bit-rates. We set \(N\), \(M_{1}\), and \(M_{2}\) to 400, 192, and 408, respectively, for the top two configurations, and to 320, 192, and 228, respectively, for the next two configurations. In addition, we measured the average execution time per image spent encoding and decoding the Kodak PhotoCD image dataset (Kodak, 1993), to clarify the benefit of the hybrid model. The test was conducted in a CPU environment (Intel i9-7900X). Note that we excluded the time for actual entropy coding because all models with the same values of \(N\) and \(M\) spend the same amount of time on entropy coding. As shown in figure 7, the hybrid models clearly reduce the execution time: setting \(N\) and \(M\) to 320 and 420, respectively, we obtained a 46.83% speed gain, and with the larger setting of \(N = 400\) and \(M = 600\), the gain was 57.28%.
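All of the rate terms above evaluate a Gaussian convolved with a standard uniform distribution at the quantized (or noisy) values, which reduces to a difference of Gaussian CDFs over unit-width bins. The following minimal sketch, with names of our own choosing, illustrates this computation.

```python
import tensorflow as tf

def gaussian_uniform_pmf(y_hat, mu, sigma):
    """Probability mass of values under N(mu, sigma^2) convolved with
    U(-1/2, 1/2), i.e. the Gaussian mass on [y_hat - 1/2, y_hat + 1/2],
    as in the rate terms of equations (9) and (10)."""
    def std_normal_cdf(x):
        return 0.5 * (1.0 + tf.math.erf(x / tf.sqrt(2.0)))
    upper = std_normal_cdf((y_hat + 0.5 - mu) / sigma)
    lower = std_normal_cdf((y_hat - 0.5 - mu) / sigma)
    return upper - lower

# The rate in bits is -sum(log2(pmf)); the lightweight model of
# equation (10) corresponds to fixing mu = 0.
```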
### 6.2 TEST RESULTS OF THE MODELS TRAINED USING DIFFERENT TYPES OF REPRESENTATIONS In this section, we provide the test results of two models: the proposed model, trained using discrete representations as inputs to the synthesis transforms \(g_{s}\) and \(h_{s}\), and the same model trained using noisy representations, following the training process of Ballé et al. (2018)'s approach. In detail, in the training phase of the proposed model, we used the quantized representations \(\hat{y}\) and \(\hat{z}\) as inputs to the transforms \(g_{s}\) and \(h_{s}\), respectively, to ensure the same conditions in the training and testing phases. On the other hand, for training the compared model, the noisy representations \(\tilde{y}\) and \(\tilde{z}\) are used as inputs to the transforms. An additional change in the proposed model is the use of \(\hat{y}\), instead of \(y\), as the input to \(h_{a}\); note that this has nothing to do with the mismatch between training and testing, but matches the inputs of \(h_{a}\) to the targets of the model estimation via \(f\). As shown in figure 8, the proposed model, trained using discrete representations, was 5.94% better than the model trained using noisy representations, in terms of the BD-rate of PSNR. Compared with Ballé et al. (2018)'s approach, the performance gains of the two models, trained using discrete and noisy representations, were 11.97% and 7.20%, respectively. <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 7: Comparison results of execution time between the base model and hybrid model </center> ![](images/14_1.jpg) <center>Figure 8: Evaluation results of the models trained using noisy/discrete representations as input to synthesis transforms </center> ### 6.3 SAMPLES OF THE EXPERIMENTS In this section, we provide a few more supplemental test results. Figures 9, 10, and 11 show the results of the MSE-optimized version of our method, whereas figures 12 and 13 show the results of the MS-SSIM-optimized version. All figures include a ground truth image and the reconstructed images for our method, BPG, and Ballé et al. (2018)'s approach, in the clockwise direction. <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 9: Sample test results. Top left, ground truth; top right, our method (MSE optimized; bpp, 0.2040; PSNR, 32.2063); bottom left, BPG (bpp, 0.2078; PSNR, 32.0406); bottom right, Ballé et al. (2018)'s approach (MSE optimized; bpp, 0.2101; PSNR, 31.7054) </center> <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 10: Sample test results. Top left, ground truth; top right, our method (MSE optimized; bpp, 0.1236; PSNR, 32.4284); bottom left, BPG (bpp, 0.1285; PSNR, 32.0444); bottom right, Ballé et al. (2018)'s approach (MSE optimized; bpp, 0.1229; PSNR, 31.0596) </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 11: Sample test results. Top left, ground truth; top right, our method (MSE optimized; bpp, 0.1501; PSNR, 34.7103); bottom left, BPG (bpp, 0.1477; PSNR, 33.9623); bottom right, Ballé et al. (2018)'s approach (MSE optimized; bpp, 0.1520; PSNR, 34.0465) </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 12: Sample test results.
Top left, ground truth; top right, our method (MS-SSIM optimized; bpp, 0.2507; MS-SSIM, 0.9740); bottom left, BPG (bpp, 0.2441; MS-SSIM, 0.9555); bottom right, Ballé et al. (2018)'s approach (MS-SSIM optimized; bpp, 0.2101; MS-SSIM, 0.9705) </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 13: Sample test results. Top left, ground truth; top right, our method (MS-SSIM optimized; bpp, 0.2269; MS-SSIM, 0.9810); bottom left, BPG (bpp, 0.2316; MS-SSIM, 0.9658); bottom right, Ballé et al. (2018)'s approach (MS-SSIM optimized; bpp, 0.2291; MS-SSIM, 0.9786) </center> <--- Page Split --->
accept
Accept (Poster)
6.666667
ICLR_2019_paper_0741
iclr
2,019
# INTERPRETING LAYERED NEURAL NETWORKS VIA HIERARCHICAL MODULAR REPRESENTATION Anonymous authors Paper under double- blind review ## ABSTRACT Interpreting the prediction mechanism of complex models is currently one of the most important tasks in the machine learning field, especially with layered neural networks, which have achieved high predictive performance with various practical data sets. To reveal the global structure of a trained neural network in an interpretable way, a series of clustering methods have been proposed, which decompose the units into clusters according to the similarity of their inference roles. The main problems in these studies were that (1) we have no prior knowledge about the optimal resolution for the decomposition, or the appropriate number of clusters, and (2) there was no method with which to acquire knowledge about whether the outputs of each cluster have a positive or negative correlation with the input and output dimension values. In this paper, to solve these problems, we propose a method for obtaining a hierarchical modular representation of a layered neural network. The application of a hierarchical clustering method to a trained network reveals a tree- structured relationship among hidden layer units, based on their feature vectors defined by their correlation with the input and output dimension values. ## 1 INTRODUCTION To construct a method for interpreting the prediction mechanism of complex statistical models is currently one of the most important tasks in the machine learning field, especially with layered neural networks (or LNNs), which have achieved high predictive performance in various practical tasks. Due to their complex hierarchical structure and the nonlinear parameters that they use to process the input data, we cannot understand the function of a trained LNN as it is, and we need some kind of approximation method to convert the original function of an LNN into a simpler interpretable representation. Recently, various methods have been proposed for interpreting the function of an LNN, and they can be roughly classified into (1) the approximation of an LNN with an interpretable model, and (2) the investigation of the roles of the partial structures constituting an LNN (e.g. units or layers). As for approach (1), various methods have been investigated for approximating an LNN with a linear model (Lundberg & Lee, 2017; Nagamine & Mesgarani, 2017; Ribeiro et al., 2016) or a decision tree (Craven & Shavlik, 1996; Johansson & Niklasson, 2009; Krishnan et al., 1999; Thiagarajan et al., 2016). For image classification tasks in particular, methods for visualizing an LNN function have been extensively studied in terms of which part of an input image affects the prediction result (Ancona et al., 2018; Bach et al., 2015; Shrikumar et al., 2017; Simonyan et al., 2014; Smilkov et al., 2017; Springenberg et al., 2015; Sundararajan et al., 2017). Approach (2) has been studied by several authors who examined the function of a given part of an LNN (Alain & Bengio, 2017; Luo et al., 2016; Raghu et al., 2017; Zahavy et al., 2016). There has also been an approach designed to automatically extract the cluster structure of a trained LNN (Watanabe et al., 2017b; a; 2018a) based on network analysis. 
Although the above studies have made it possible to provide us with an interpretable representation of an LNN function with a fixed resolution (or number of clusters), there is a problem in that we do not know in advance the optimal resolution for interpreting the original network. In the methods described in the previous studies (Watanabe et al., 2017a;b; 2018a;b;c), the unit clustering results may change greatly with the cluster size setting, and there is no criterion for determining the optimal <--- Page Split ---> cluster size. Another problem is that the previous studies could only provide us with information about the magnitude of the relationship between a cluster and each input or output dimension value, and we could not determine whether this relationship was positive or negative. In this paper, we propose a method for extracting a hierarchical modular representation from a trained LNN, which provides us with both hierarchical clustering results with every possible number of clusters and the function of each cluster. Our proposed method mainly consists of three parts: (a) training an LNN for a given data set based on error back propagation, (b) determining the feature vectors of each hidden layer unit based on its correlation with the input and output dimension values, and (c) the hierarchical clustering of the feature vectors. Unlike the clustering methods in the previous studies, the role of each cluster is computed as a centroid of the feature vectors defined by the correlations in step (b), which enables us to know the representative mapping performed by the cluster in terms of both sign and magnitude for each input or output dimension. We show experimentally the effectiveness of our proposed method in interpreting the internal mechanism of a trained LNN, by applying it to two kinds of data sets: the MNIST data set that contains digit image data and a sequential data set of food consumer price indices. Based on the experimental results for the extracted hierarchical cluster structure and the role of each cluster, we discuss how the overall LNN function is structured as a collection of individual units. ## 2 TRAINING A LAYERED NEURAL NETWORK An LNN can be trained to approximate the input-output relationship of an arbitrary data set \((x,y)\) that consists of input data \(x\in \mathbb{R}^{M}\) and output data \(y\in \mathbb{R}^{N}\), by using a function \(f(x,w)\) from \(x\in \mathbb{R}^{M}\) and a parameter \(w\in \mathbb{R}^{L}\) to \(\mathbb{R}^{N}\). An LNN parameter is defined by \(w = \{\omega_{ij}^{d},\theta_{i}^{d}\}\) where \(\omega_{ij}^{d}\) is the connection weight between the \(i\)-th unit in a depth \(d\) layer and the \(j\)-th unit in a depth \(d + 1\) layer, and \(\theta_{i}^{d}\) is the bias of the \(i\)-th unit in the depth \(d\) layer. Here, \(d = 1\) and \(d = d_{0}\), respectively, correspond to the input and output layers. The LNN function \(f(x,w)\) is a set of functions \(\{f_{j}(x,w)\}\) for all output dimensions \(j\), each of which is defined by \(f_{j}(x,w) = \sigma (\sum_{i}\omega_{ij}^{d_{0} - 1} o_{i}^{d_{0} - 1} + \theta_{j}^{d_{0}})\). Here, \(\sigma (x) = 1 / (1 + \exp (- x))\), \(o_{i}^{d}\) is the output value of the \(i\)-th unit in the depth \(d\) layer, and \(o_{i}^{1} = x_{i}\) holds in the input layer. Such output values in each layer are given by \(o_{j}^{d} = \sigma (\sum_{i}\omega_{ij}^{d - 1} o_{i}^{d - 1} + \theta_{j}^{d})\).
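For concreteness, the forward computation just described can be sketched as follows; this is a minimal numpy illustration with variable names of our own choosing, since the paper specifies no implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, biases):
    """Forward pass o_j^{d+1} = sigma(sum_i w_ij^d o_i^d + theta_j^{d+1}).
    weights[d]: array of shape (units in layer d, units in layer d+1);
    biases[d]:  array of shape (units in layer d+1,)."""
    o = x  # o^1 = x in the input layer
    outputs = [o]
    for W, theta in zip(weights, biases):
        o = sigmoid(o @ W + theta)
        outputs.append(o)
    return outputs  # per-layer unit outputs; outputs[-1] is f(x, w)
```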
The purpose of training an LNN is to find an optimal parameter \(w\) to approximate the true input-output relationship with a finite-size training data set \(\{(X_{n},Y_{n})\}_{n = 1}^{n_{1}}\), where \(n_{1}\) is the sample size. The training error \(E(w)\) of an LNN is given by \(E(w) = \frac{1}{n_{1}}\sum_{n = 1}^{n_{1}}\| Y_{n} - f(X_{n},w)\|^{2}\), where \(\| \cdot \|\) is the Euclidean norm of \(\mathbb{R}^{N}\). Since the minimization of the training error \(E(w)\) leads to overfitting to a training data set, we adopt the L1 regularization method (Ishikawa, 1990; Tibshirani, 1996) to delete redundant connection weights and obtain a sparse solution. Here, the objective function to be minimized is given by \(H(w) = \frac{n_{1}}{2} E(w) + \lambda \sum_{d,i,j}|\omega_{ij}^{d}|\), where \(\lambda\) is a hyperparameter used to determine the strength of regularization. The minimization of such a function \(H(w)\) with the stochastic steepest descent method can be executed by an iterative update of the parameters from the output layer to the input layer, which is called error back propagation (Rumelhart et al., 1986; Werbos, 1974). The parameter update is given by \[\Delta \omega_{ij}^{d - 1} = -\eta (\delta_{j}^{d} o_{i}^{d - 1} + \lambda \operatorname {sgn}(\omega_{ij}^{d - 1})), \quad \Delta \theta_{j}^{d} = -\eta \delta_{j}^{d},\] where \(\delta_{j}^{d_{0}} = (o_{j}^{d_{0}} - y_{j})\,(o_{j}^{d_{0}}(1 - o_{j}^{d_{0}}) + \epsilon_{1})\), and \(\delta_{j}^{d} = \sum_{k = 1}^{l_{d + 1}}\delta_{k}^{d + 1}\omega_{jk}^{d}\,(o_{j}^{d}(1 - o_{j}^{d}) + \epsilon_{1})\) for \(d = d_{0} - 1,\dots ,2\). Here, \(y_{j}\) is the \(j\)-th output dimension value of a randomly chosen \(n\)-th sample \((X_{n},Y_{n})\), \(\epsilon_{1}\) is a hyperparameter for the LNN convergence, and \(\eta\) is the step size at training time \(t\), determined such that \(\eta (t)\propto 1 / t\). In the experiments, we adopt \(\epsilon_{1} = 0.001\) and \(\eta = 0.7\times a_{1}n_{1} / (a_{1}n_{1} + 5t)\), where \(a_{1}\) is the mean iteration number for LNN training per dataset. <--- Page Split ---> ## 3 HIERARCHICAL MODULAR REPRESENTATION OF LNNS ### 3.1 DETERMINING FEATURE VECTORS OF HIDDEN LAYER UNITS To apply hierarchical clustering to a trained LNN, we define a feature vector for each hidden layer unit. Let \(v_{k}\) be the feature vector of the \(k\)-th hidden layer unit. Such a feature vector should reflect the role of its corresponding unit in LNN inference. Here, we propose defining the feature vector \(v_{k}\) of the \(k\)-th hidden layer unit based on its correlations with each input and output dimension. In previous studies (Watanabe et al., 2018b;c), methods have been proposed for determining the role of a unit or a unit cluster based on the square root error. However, these methods can only provide us with knowledge about the magnitude of the effect of each input dimension on a unit and the effect of a unit on each output dimension, not information about how a hidden layer unit is affected by each input dimension and how each output dimension is affected by a hidden layer unit. In other words, there is no method that can reveal whether an increase in the input dimension value has a positive or negative effect on the output value of a hidden layer unit, or whether an increase in the output value of a hidden layer unit has a positive or negative effect on the output dimension value.
To obtain such sign information regarding the roles of each hidden layer unit, we use the following definitions based on the correlation. Definition 1 (Effect of \(i\)-th input dimension on \(k\)-th hidden layer unit). We define the effect of the \(i\)-th input dimension on the \(k\)-th hidden layer unit as \(v_{ik}^{\mathrm{in}}\), where \[v_{ik}^{\mathrm{in}} = \frac{E\Big[\Big(X_{i}^{(n)} - E[X_{i}^{(n)}]\Big)\Big(o_{k}^{(n)} - E[o_{k}^{(n)}]\Big)\Big]}{\sqrt{E\Big[\Big(X_{i}^{(n)} - E[X_{i}^{(n)}]\Big)^{2}\Big]E\Big[\Big(o_{k}^{(n)} - E[o_{k}^{(n)}]\Big)^{2}\Big]}}.\] Here, \(E[\cdot ]\) represents the mean for all the data samples, \(X_{i}^{(n)}\) is the \(i\)-th input dimension value of the \(n\)-th data sample, and \(o_{k}^{(n)}\) is the output of the \(k\)-th hidden layer unit for the \(n\)-th input data sample. Definition 2 (Effect of \(k\)-th hidden layer unit on \(j\)-th output dimension). We define the effect of the \(k\)-th hidden layer unit on the \(j\)-th output dimension as \(v_{kj}^{\mathrm{out}}\), where \[v_{kj}^{\mathrm{out}} = \frac{E\Big[\Big(o_{k}^{(n)} - E[o_{k}^{(n)}]\Big)\Big(y_{j}^{(n)} - E[y_{j}^{(n)}]\Big)\Big]}{\sqrt{E\Big[\Big(o_{k}^{(n)} - E[o_{k}^{(n)}]\Big)^{2}\Big]E\Big[\Big(y_{j}^{(n)} - E[y_{j}^{(n)}]\Big)^{2}\Big]}}.\] Here, \(y_{j}^{(n)}\) is the value of the \(j\)-th output layer unit for the \(n\)-th input data sample. We define a feature vector of each hidden layer unit based on the above definitions. Definition 3 (Feature vector of \(k\)-th hidden layer unit). We define the feature vector of the \(k\)-th hidden layer unit as \(v_{k} \equiv [v_{1k}^{\mathrm{in}}, \dots , v_{i_{0}k}^{\mathrm{in}}, v_{k1}^{\mathrm{out}}, \dots , v_{kj_{0}}^{\mathrm{out}}]\). Here, \(i_{0}\) and \(j_{0}\), respectively, represent the dimensions of the input and output data. Alignment of signs of feature vectors based on cosine similarity The feature vectors of Definition 3 represent the roles of the hidden layer units in terms of input-output mapping. When interpreting such roles of hidden layer units, it is natural to regard the roles of any pair of units \((k_{1}, k_{2})\) as being the same iff they satisfy \(v_{k_{1}} = v_{k_{2}}\) or \(v_{k_{1}} = - v_{k_{2}}\). The latter condition corresponds to the case where the \(k_{1}\)-th and \(k_{2}\)-th units have the same correlations with input and output dimensions except that their signs are the opposite, as depicted in Figure 1. To regard the roles of unit pairs that satisfy one of the above conditions as the same, we propose an algorithm for aligning the signs of the feature vectors based on cosine similarity (Algorithm 1). By randomly selecting a feature vector and aligning its sign according to the sum of the cosine similarities with all the other feature vectors, the sum of the cosine similarities of all the pairs of feature vectors increases monotonically. We show experimentally the effect of this sign alignment algorithm in Appendix 2. ### 3.2 HIERARCHICAL CLUSTERING OF UNITS IN A TRAINED LNN Once we have obtained the feature vectors of all the hidden layer units as described in section 3.1, we can extract a hierarchical modular representation of an LNN by applying hierarchical clustering to the feature vectors.
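Concretely, the feature vectors of Definitions 1-3 and the sign alignment of Algorithm 1 (whose pseudocode appears just below) could be computed as in this minimal numpy sketch; the array shapes and names are our illustrative assumptions, not code from the paper.

```python
import numpy as np

def feature_vectors(X, O, Y):
    """Rows are v_k = [v_1k^in, ..., v_i0k^in, v_k1^out, ..., v_kj0^out].
    X: (n, i0) inputs, O: (n, k0) hidden unit outputs, Y: (n, j0) LNN outputs."""
    def corr(A, B):  # column-wise Pearson correlation matrix
        A = (A - A.mean(0)) / A.std(0)
        B = (B - B.mean(0)) / B.std(0)
        return A.T @ B / len(A)
    v_in = corr(X, O)    # v_ik^in,  shape (i0, k0)
    v_out = corr(O, Y)   # v_kj^out, shape (k0, j0)
    return np.concatenate([v_in.T, v_out], axis=1)

def align_signs(V, a0, rng=np.random.default_rng(0)):
    """Algorithm 1: flip v_k whenever its summed cosine similarity
    with the other feature vectors is negative."""
    U = V / np.linalg.norm(V, axis=1, keepdims=True)  # unit-norm copies
    for _ in range(a0):
        k = rng.integers(len(V))
        if U[k] @ U.sum(axis=0) - 1.0 < 0:  # sum_{j != k} cos(v_k, v_j)
            V[k], U[k] = -V[k], -U[k]
    return V
```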
<--- Page Split ---> Algorithm 1 Alignment of signs of feature vectors based on cosine similarity 1: Let \(v_{k}\) and \(a_{0}\) respectively be the feature vector for the \(k\)-th hidden layer unit and the number of iterations. 2: for \(a = 1\) to \(a_{0}\) do 3: Randomly choose the \(k\)-th hidden layer unit according to the uniform distribution. 4: if \(\sum_{j\neq k}\frac{v_{k}\cdot v_{j}}{\sqrt{v_{k}\cdot v_{k}}\sqrt{v_{j}\cdot v_{j}}} < 0\) then 5: \(v_{k}\leftarrow - v_{k}\) 6: end if 7: end for ![](images/3_0.jpg) <center>Figure 1: An example of two hidden layer units with the same function. The corresponding feature vectors are the same, except that their signs are opposite. </center> Among the several existing methods for such hierarchical clustering, including single-link and complete-link, Ward's method (Ward, 1963) has been shown experimentally to be effective in terms of its classification sensitivity, so we employ this method in our experiments. We start with \(k_{0}\) individual hidden layer units, and sequentially combine clusters with the minimum error sum of squares (ESS), which is given by \[ESS\equiv \sum_{m}\Big(\sum_{k:u_{k}\in C_{m}}\| v_{k}\|^{2} - \frac{1}{|C_{m}|}\Big\|\sum_{k:u_{k}\in C_{m}}v_{k}\Big\|^{2}\Big), \quad (1)\] where \(u_{k}\) and \(v_{k}\), respectively, are the \(k\)-th hidden layer unit (\(k = 1,\dots ,k_{0}\)) and its corresponding feature vector, \(C_{m}\) is the unit set assigned to the \(m\)-th cluster, and \(|\cdot |\) represents the cluster size. From Equation (1), the ESS is obtained by first computing, for each cluster, the cluster size \(|C_{m}|\) times the variance of its feature vectors, and then summing these values over all the clusters. When combining a pair of clusters \((C_{m_{1}},C_{m_{2}})\) into one cluster, the ESS increases by \[\Delta ESS = \frac{|C_{m_{1}}||C_{m_{2}}|}{|C_{m_{1}}| + |C_{m_{2}}|}\Big\| \frac{1}{|C_{m_{1}}|}\sum_{k:u_{k}\in C_{m_{1}}}v_{k} - \frac{1}{|C_{m_{2}}|}\sum_{k:u_{k}\in C_{m_{2}}}v_{k}\Big\|^{2}. \quad (2)\] Therefore, in each iteration, we do not have to compute the error sum of squares for all the clusters; instead, we simply compute the error increase \(\Delta ESS\) given by Equation (2) for all the pairs of current clusters \((C_{m_{1}},C_{m_{2}})\), find the optimal pair of clusters that achieves the minimum error increase, and combine them. We describe the whole procedure of Ward's method in Algorithm 2. This procedure of combining a pair of clusters is repeated until all the hidden layer units are assigned to one cluster, and from the clustering result \(\{C_{m}^{(t)}\}\) in each iteration \(t = 1,\dots ,k_{0} - 1\), we can obtain a hierarchical modular representation of an LNN, which connects the two extreme resolutions given by "all units are in a single cluster" and "all clusters consist of a single unit". The role of each extracted cluster can be determined from the centroid of the feature vectors of the units assigned to the cluster, which can be interpreted as a representative input-output mapping of the cluster.
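In practice, the merge criterion of equation (2), iterated as in Algorithm 2, can be reproduced with standard tooling; the following sketch, using scipy under names of our own choosing, is an assumption about tooling rather than the authors' code. Scipy's Ward linkage minimizes the same within-cluster variance objective, so it should produce the same merge hierarchy.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def delta_ess(V1, V2):
    """Equation (2): ESS increase from merging two clusters whose member
    feature vectors are the rows of V1 and V2."""
    n1, n2 = len(V1), len(V2)
    diff = V1.mean(axis=0) - V2.mean(axis=0)
    return n1 * n2 / (n1 + n2) * diff @ diff

# Ward's method over the sign-aligned feature vectors V, e.g.:
# V = align_signs(feature_vectors(X, O, Y), a0=5000)
# Z = linkage(V, method="ward")                    # full merge tree
# labels = fcluster(Z, t=8, criterion="maxclust")  # e.g. cut into 8 clusters
```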
## 4 EXPERIMENTS We apply our proposed method to two kinds of data sets to show its effectiveness in interpreting the mechanism of trained LNNs. The experimental settings are detailed in Appendix 3. In Appendix 1, we provide a qualitative comparison with the previous method (Watanabe et al., 2018c). ### 4.1 EXPERIMENT USING THE MNIST DATA SET First, we applied our proposed method to an LNN trained with the MNIST data set (LeCun et al., 1998) to recognize 10 types of digits from input images. Before the LNN training, we trimmed the top, bottom, left and right margins and then resized the images to \(14 \times 14\) pixels. Figure 2 shows sample images for each class of digits. <--- Page Split ---> Algorithm 2 Ward's hierarchical clustering method (Ward, 1963) 1: Let \(u_{k}\) and \(v_{k}\), respectively, be the \(k\)-th hidden layer unit \((k = 1,\dots ,k_{0})\) and its corresponding feature vector, and let \(\{C_{m}^{(t)}\}\) be the unit set assigned to the \(m\)-th cluster in the \(t\)-th iteration \((m = 1,\dots ,k_{0} - t + 1)\). Initially, we set \(t\gets 1\) and \(C_{m}^{(1)}\gets \{u_{m}\}\). 2: for \(t = 2\) to \(k_{0} - 1\) do 3: \[(C_{m_{1}}^{(t - 1)},C_{m_{2}}^{(t - 1)})\gets \arg \min_{(C_{i}^{(t - 1)},C_{j}^{(t - 1)})}\Delta ESS(C_{i}^{(t - 1)},C_{j}^{(t - 1)}),\mathrm{~where}\] \[\Delta ESS(C,C^{\prime})\equiv \frac{|C||C^{\prime}|}{|C| + |C^{\prime}|}\Big\| \frac{1}{|C|}\sum_{k:u_{k}\in C}v_{k} - \frac{1}{|C^{\prime}|}\sum_{k:u_{k}\in C^{\prime}}v_{k}\Big\|^{2}.\] Here, we assume \(m_{1}< m_{2}\). 4: Update the clusters as follows: \[C_{m}^{(t)}\leftarrow \left\{ \begin{array}{ll}C_{m_{1}}^{(t - 1)}\cup C_{m_{2}}^{(t - 1)} & (m = m_{1})\\ C_{m}^{(t - 1)} & (1\leq m\leq m_{2} - 1,m\neq m_{1})\\ C_{m + 1}^{(t - 1)} & (m_{2}\leq m\leq k_{0} - t + 1) \end{array} \right..\] 5: end for Although our proposed method provides a clustering result for every possible resolution, or number of clusters \(c\), we have only plotted the results for \(c = 4,8,16\), for ease of visibility. Figures 3 and 4, respectively, show the hierarchical cluster structure extracted from the trained LNN and the roles, or representative input-output mappings, of the extracted clusters. From these figures, we can gain knowledge about the LNN structure as follows. - At the coarsest resolution, the main function of the trained LNN is decomposed into Clusters 1, 2, 3 and 4. Cluster 1 captures the input information about black pixels in the shape of a 6 and white pixels in the shape of a 7, and it has a positive and negative correlation with the output dimensions corresponding to "6" and "7", respectively. Cluster 2 correlates negatively with the region in the shape of a 9, and positively with the other areas. It has a positive correlation with the recognition of "2" and "6," and a negative one with "0," "4" and "9." Cluster 3 correlates positively with the black pixels in the left part of an image, and it has a positive correlation with "0," "4" and "6," and a negative correlation with "3" and "7." Cluster 4 captures the 0-shaped region, and it has a larger correlation with the output of "0" compared with the other digits. - Cluster 2 is decomposed into three smaller clusters, 7, 8 and 9. Cluster 7 captures similar input information to Cluster 2, and it also correlates strongly with the lower area of an image. This cluster mainly affects the recognition results for "5" and "6." Cluster 8 uses the input information of the area with the shape of a 9; however, its main recognition target is "2." Cluster 9 correlates positively with the area extending from the upper right to the lower left of an image, and it correlates negatively with the digits "4" and "9." - Cluster 8 consists of two smaller clusters, 17 and 18. Cluster 17 is mainly affected by the upper part and lower right part of an image, and the absolute values of its correlations with the output dimensions are all less than 0.2, while the role of Cluster 18 is almost the same as that of Cluster 8.
### 4.2 EXPERIMENT USING THE CONSUMER PRICE INDEX DATA SET We also applied the proposed method to an LNN trained with a data set of consumer price indices (e Stat, 2018) to predict the consumer price indices of taro, radish and carrot for a given month from the input data of the preceding 36 months. With this data set, we plotted the results for \(c = 3,6,12\), where \(c\) is the number of clusters. Figures 5 and 6, respectively, show the hierarchical cluster structure extracted from the trained LNN and the roles or representative input-output mappings of the extracted clusters. From these figures, we can gain knowledge about the LNN structure as follows. - Clusters 1, 2 and 3 represent the main input-output function of the hidden layer units. Interestingly, all of these clusters have similar correlations with the output dimensions (\(0 < \text{radish} < \text{taro} < \text{carrot}\)). However, these three clusters use different input information: Cluster 1 strongly reflects seasonal information, and its correlation is especially high with the consumer price indices of the <--- Page Split ---> three vegetables one month before and one, two and three years earlier. Cluster 3 also reflects seasonal information; however, the absolute values of the correlations are less than 0.3, and it correlates strongly with the input information of eight, 20 and 32 months before. On the other hand, Cluster 2 does not use such a seasonal effect very much, and it is affected almost equally by the information of all months, except the recent information of radish from nine months before. - Cluster 1 is composed of the smaller Clusters 16 and 17. Cluster 16 is mainly used to predict the consumer price index of taro, and it strongly correlates with the input information for taro from one month before and one, two and three years before. Compared with Cluster 16, Cluster 17 affects the three output dimensions more equally. - Cluster 7 is a part of Cluster 3, and consists of the smaller Clusters 11, 12 and 13. These clusters have mutually different relationships with the output dimension values: Cluster 11 correlates positively with the consumer price indices of taro and carrot, and negatively with that of radish. It mainly uses recent information about carrot (within a year) and the values of taro of five, 17 and 29 months before. Cluster 13 is mainly used to predict the radish output value. It has a positive correlation with the input information for taro, radish and carrot of about six, 18 and 30 months earlier, and it has a negative correlation with the values for one month before and one, two and three years before. The absolute values of the correlations between Cluster 12 and the output dimension values are less than 0.2, so, unlike Clusters 11 and 13, it does not significantly affect the prediction result. ## 5 DISCUSSION Here, we discuss our proposed method for obtaining a hierarchical modular representation from the perspectives of statistical evaluation and visualization. Our proposed method provides us with a series of clustering results for an arbitrary cluster size, and the resulting structure does not change as long as we use the same criterion (e.g. the error sum of squares for Ward's method) for evaluating the similarity of the feature vectors.
However, there is no way to determine which criterion yields the optimal clustering result for representing a trained LNN, since the interpretability of acquired knowledge cannot be formulated mathematically (although there has been an attempt to quantify interpretability for a specific task, especially image recognition (Bau et al., 2017)). This problem makes it impossible to compare different methods for interpreting LNNs quantitatively, as pointed out in previous studies (Lipton, 2016; Doshi-Velez & Kim, 2017). Therefore, providing a statistical evaluation method for both the interpretability and the accuracy of the resulting cluster structure constitutes important future work. Although we can apply our proposed method to an arbitrary network structure, as long as it contains a set of units that output some value for a given input data sample, the visualization of the resulting hierarchical modular representations becomes more difficult with a deeper and larger-scale network structure, since a cluster may contain units in mutually distant layers. Additionally, the number of possible cluster sizes increases with the scale (or the number of units) of a network, so it is necessary to construct a method for automatically selecting a set of representative resolutions, instead of visualizing the entire hierarchical cluster structure. ## 6 CONCLUSION Finding a way to unravel the function of a trained LNN is an important issue in the machine learning field. While LNNs have achieved high prediction accuracy with various data sets, their highly complex and nonlinear parameters have made it difficult to interpret their internal inference mechanism. Recent studies have enabled us to decompose a trained LNN into a simpler cluster structure; however, there was no method for (1) determining the optimal number of clusters, or (2) knowing whether the outputs of each cluster have a positive or negative correlation with the input and output dimension values. In this paper, we proposed a method for extracting a hierarchical modular representation of a trained LNN, which consists of sequential clustering results for every possible number of clusters. By determining the feature vectors of the hidden layer units based on their correlations with the input and output dimension values, the method also enables us to know, with sign information, how each cluster maps its inputs to its outputs. We showed the effectiveness of our proposed method experimentally by applying it to two kinds of practical data sets and by interpreting the resulting cluster structures. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 2: Input image examples of MNIST data set. </center> ![](images/6_1.jpg) <center>Figure 3: Hierarchical clusters of an LNN (MNIST data set). </center> ![](images/6_2.jpg) <center>Figure 4: Representative input-output mappings of extracted clusters. </center> <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 5: Hierarchical clusters of an LNN (food consumer price index data set). </center> ![](images/7_1.jpg) <center>Figure 6: Representative input-output mappings of extracted clusters. </center> <--- Page Split ---> ## REFERENCES G. Alain and Y. Bengio. Understanding intermediate layers using linear classifier probes. In ICLR 2017 Workshop, 2017. M. Ancona, E. Ceolini, A. C. Öztireli, and M. Gross. Towards better understanding of gradient-based attribution methods for deep neural networks. In International Conference on Learning Representations, 2018. S. Bach, A. Binder, G. Montavon, F.
Klauschen, K-R. Müller, and W. Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE, 10: 1-46, 2015. D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. Network dissection: Quantifying interpretability of deep visual representations. In Computer Vision and Pattern Recognition, 2017. M. Craven and J. W. Shavlik. Extracting tree-structured representations of trained networks. In Advances in Neural Information Processing Systems 8, pp. 24-30, 1996. F. Doshi-Velez and B. Kim. Towards a rigorous science of interpretable machine learning. arXiv:1702.08608, 2017. e Stat. Consumer price index of food nationwide from January 1970 to January 2018. https://www.e-stat.go.jp/dbview?sid=0003143513, 2018. M. Ishikawa. A structural connectionist learning algorithm with forgetting. Journal of Japanese Society for Artificial Intelligence, 5:595-603, 1990. U. Johansson and L. Niklasson. Evolving decision trees using oracle guides. In 2009 IEEE Symposium on Computational Intelligence and Data Mining, pp. 238-244, 2009. R. Krishnan, G. Sivakumar, and P. Bhattacharya. Extracting decision trees from trained neural networks. Pattern Recognition, 32(12):1999-2009, 1999. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, volume 86, pp. 2278-2324, 1998. Z. C. Lipton. The mythos of model interpretability. In Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning, 2016. S. M. Lundberg and S. Lee. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30, pp. 4765-4774, 2017. W. Luo, Y. Li, R. Urtasun, and R. Zemel. Understanding the effective receptive field in deep convolutional neural networks. In Advances in Neural Information Processing Systems 29, pp. 4898-4906, 2016. T. Nagamine and N. Mesgarani. Understanding the representation and computation of multilayer perceptrons: A case study in speech recognition. In Proceedings of the 34th International Conference on Machine Learning, pp. 2564-2573, 2017. M. Raghu, J. Gilmer, J. Yosinski, and J. Sohl-Dickstein. SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems 30, pp. 6076-6085, 2017. M. T. Ribeiro, S. Singh, and C. Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016. D. Rumelhart, G. Hinton, and R. Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986. A. Shrikumar, P. Greenside, and A. Kundaje. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning, pp. 3145-3153, 2017. <--- Page Split ---> K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR 2014 Workshop, 2014. D. Smilkov, N. Thorat, B. Kim, F. Viegas, and M. Wattenberg. Smoothgrad: removing noise by adding noise. arXiv:1706.03825, 2017. J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR 2015 Workshop, 2015. M. Sundararajan, A. Taly, and Q. Yan. Axiomatic attribution for deep networks. 
In Proceedings of the 34th International Conference on Machine Learning, pp. 3319-3328, 2017. J. J. Thiagarajan, B. Kailkhura, P. Sattigeri, and K. N. Ramamurthy. Treeview: Peeking into deep neural networks via feature-space partitioning. In NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems, 2016. R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996. J. H. Ward. Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58(301):236-244, 1963. C. Watanabe, K. Hiramatsu, and K. Kashino. Recursive extraction of modular structure from layered neural networks using variational Bayes method. In Proceedings of Discovery Science 2017, Lecture Notes in Computer Science, volume 10558, pp. 207-222, 2017a. C. Watanabe, K. Hiramatsu, and K. Kashino. Modular representation of autoencoder networks. In Proceedings of 2017 IEEE Symposium on Deep Learning, 2017 IEEE Symposium Series on Computational Intelligence, 2017b. C. Watanabe, K. Hiramatsu, and K. Kashino. Modular representation of layered neural networks. Neural Networks, 97:62-73, 2018a. C. Watanabe, K. Hiramatsu, and K. Kashino. Understanding community structure in layered neural networks. arXiv:1804.04778, 2018b. C. Watanabe, K. Hiramatsu, and K. Kashino. Knowledge discovery from layered neural networks based on non-negative task decomposition. arXiv:1805.07137v2, 2018c. P. Werbos. Beyond regression: new tools for prediction and analysis in the behavioral sciences. PhD thesis, Harvard University, 1974. T. Zahavy, N. Ben-Zrihem, and S. Mannor. Graying the black box: Understanding DQNs. In Proceedings of the 33rd International Conference on Machine Learning, pp. 1899-1908, 2016. <--- Page Split ---> ## APPENDIX 1: COMPARISON WITH CLUSTERING METHOD BASED ON NON-NEGATIVE MATRIX FACTORIZATION Here, we show the effectiveness of our proposed method by comparing it with the clustering method based on non-negative matrix factorization (NNMF), which was proposed in a previous study (Watanabe et al., 2018c). We applied this NNMF-based clustering method to the same data sets that we used in the experiments described in section 4. In the previous study (Watanabe et al., 2018c), the feature vectors of the hidden layer units are defined by the magnitude of the effect of each input dimension value on a cluster and the effect of a cluster on each output dimension value, computed from the square root error of the unit output values. By definition, the elements of such feature vectors are all non-negative, which is a necessary condition for applying NNMF to the feature vectors. We applied the NNMF-based clustering method to the trained networks with exactly the same parameters as the networks shown in Figures 3 and 5. With the MNIST data set (LeCun et al., 1998) and the consumer price index data set (e Stat, 2018), respectively, we decomposed the trained networks into 16 and 12 clusters. With both data sets, we set the number of iterations of the NNMF algorithm to 1,000. We ran the NNMF algorithm 10,000 times and used the best result in terms of the approximation error. The initial values of the two low-dimensional matrices were randomly chosen according to the normal distribution \(\mathcal{N}(0.5, 0.5)\). Figures 7, 8, 9, and 10 show the resulting cluster structures and the representative roles of the clusters.
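For reference, the NNMF baseline just described could be reproduced along the following lines; the matrix shapes, the use of scikit-learn, the hard assignment via the dominant component, and the reduced number of random restarts are all our assumptions for illustration, not the original study's code.

```python
import numpy as np
from sklearn.decomposition import NMF

k0, i0, j0, c = 40, 196, 10, 16           # illustrative sizes (MNIST setting)
F = np.abs(np.random.randn(k0, i0 + j0))  # stand-in non-negative features

# Keep the restart with the smallest approximation error (the study reports
# 10,000 restarts; we use 10 here for brevity).
best = min((NMF(n_components=c, init="random", max_iter=1000,
                random_state=s).fit(F) for s in range(10)),
           key=lambda m: m.reconstruction_err_)
W = best.transform(F)            # (k0, c): component weights of each unit
clusters = W.argmax(axis=1)      # assign each unit to its dominant component
```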
Comparing these figures with the results in Figures 3, 4, 5, and 6, we can observe that the previous NNMF-based method could not capture the structures of the input and output dimension values in as much detail as our proposed method, since it does not take the sign information into account. Furthermore, with the NNMF-based method, the number of clusters must be defined in advance, and we cannot observe the hierarchical structure of clusters to find the optimal resolution for interpreting the roles of the partial structures of an LNN. ## APPENDIX 2: EFFECT OF SIGN ALIGNMENT OF FEATURE VECTORS Here, we discuss the effect of the sign alignment of the feature vectors based on cosine similarity (Algorithm 1). Figure 11 shows the effect of the sign alignment of the feature vectors extracted from an LNN trained with the MNIST data set (LeCun et al., 1998). The left and center figures, respectively, show the feature vectors before and after the alignment of the signs. The right figure shows the monotonic increase of the sum of the cosine similarities through the alignment algorithm. Figure 12 shows the dendrograms of the hierarchical clustering results with the original feature vectors of Definition 3 and with the feature vectors after the alignment of the signs. From this figure, we can observe that the height of the dendrogram, which reflects the dissimilarity of all the hidden layer units, is higher with the original feature vectors than with the feature vectors after the sign alignment. In other words, the algorithm successfully aligned the feature vectors so that they became more similar to each other. Figures 13 and 14 show the effect of the sign alignment of the feature vectors extracted from an LNN trained with the food consumer price index data set (e Stat, 2018). These figures show results similar to those of the MNIST data set. ## APPENDIX 3: EXPERIMENTAL SETTINGS Here, we detail the experimental settings. E1 and E2, respectively, represent the settings of the experiments described in sections 4.1 and 4.2. - The training sample size \(n_1\) was: 500 per class (E1), and 270 (E2). - We normalized the input data so that the minimum and maximum values of an element, respectively, were \(-1\) and 1. Similarly, we normalized the output data so that the minimum and maximum values of an element, respectively, were 0.01 and 0.99. - The mean iteration number for LNN training per dataset \(a_1\) was: 100 per class (E1), and 500 (E2).
- We generated the initial connection weights and biases of a layered neural network as follows: \(\omega_{ij}^{d}\stackrel {\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,0.5),\ \theta_{i}^{d}\stackrel {\mathrm{i.i.d.}}{\sim}\mathcal{N}(0, 0.5).\) - The hyperparameter of the L1 regularization \(\lambda\) was: \(1.1 \times 10^{-5}\) (E1), and \(2 \times 10^{-5}\) (E2). - As regards the LNN training with the MNIST data set, we chose the training data with the following deterministic procedure to stabilize the training. Let \(Z_{n}^{(k)} \equiv \{X_{n}^{(k)}, Y_{n}^{(k)}\}\) be the \(n\)-th training data sample in class \(k\). The training data were chosen in the following order: \[Z_{1}^{(1)},\dots ,Z_{1}^{(10)},Z_{2}^{(1)},\dots ,Z_{2}^{(10)},\dots ,Z_{n_{1}}^{(1)},\dots ,Z_{n_{1}}^{(10)},\] \[Z_{1}^{(1)},\dots ,Z_{1}^{(10)},\dots\] - The iteration number for the alignment of the signs of the feature vectors \(a_{0}\) was: 5000 (E1 and E2). - The weight-removal hyperparameter \(\xi\) was: 0.6 (E1), and 0.001 (E2). In Figures 3 and 5, we only draw connections whose absolute weight values are \(\xi\) or more. <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 7: Cluster structure of an LNN acquired by non-negative matrix factorization (MNIST data set). </center> ![](images/12_1.jpg) <center>Figure 8: Representative input-output mappings of extracted clusters. </center> <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 9: Cluster structure of an LNN acquired by non-negative matrix factorization (food consumer price index data set). </center> ![](images/13_1.jpg) <center>Figure 10: Representative input-output mappings of extracted clusters. </center> <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 11: Left: Feature vectors of Definition 3. Each row corresponds to a feature vector for a hidden layer unit. Center: Feature vectors after the alignment of the signs. Right: Sum of the cosine similarities of all the pairs of feature vectors (MNIST data set). </center> ![](images/14_1.jpg) <center>Figure 12: Dendrograms of the hierarchical clustering results with the original feature vectors of Definition 3 (top) and with the feature vectors after the alignment of the signs (bottom). </center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 13: Left: Feature vectors of Definition 3. Each row corresponds to a feature vector for a hidden layer unit. Center: Feature vectors after the alignment of the signs. Right: Sum of the cosine similarities of all the pairs of feature vectors (food consumer price index data set). </center> ![](images/15_1.jpg) <center>Figure 14: Dendrograms of the hierarchical clustering results with the original feature vectors of Definition 3 (top) and with the feature vectors after the alignment of the signs (bottom). </center> <--- Page Split --->
## ABSTRACT Interpreting the prediction mechanism of complex models is currently one of the most important tasks in the machine learning field, especially with layered neural networks, which have achieved high predictive performance with various practical data sets. To reveal the global structure of a trained neural network in an interpretable way, a series of clustering methods have been proposed, which decompose the units into clusters according to the similarity of their inference roles. The main problems in these studies were that (1) we have no prior knowledge about the optimal resolution for the decomposition, or the appropriate number of clusters, and (2) there was no method with which to acquire knowledge about whether the outputs of each cluster have a positive or negative correlation with the input and output dimension values. In this paper, to solve these problems, we propose a method for obtaining a hierarchical modular representation of a layered neural network. The application of a hierarchical clustering method to a trained network reveals a tree- structured relationship among hidden layer units, based on their feature vectors defined by their correlation with the input and output dimension values. ## 1 INTRODUCTION To construct a method for interpreting the prediction mechanism of complex statistical models is currently one of the most important tasks in the machine learning field, especially with layered neural networks (or LNNs), which have achieved high predictive performance in various practical tasks. Due to their complex hierarchical structure and the nonlinear parameters that they use to process the input data, we cannot understand the function of a trained LNN as it is, and we need some kind of approximation method to convert the original function of an LNN into a simpler interpretable representation. Recently, various methods have been proposed for interpreting the function of an LNN, and they can be roughly classified into (1) the approximation of an LNN with an interpretable model, and (2) the investigation of the roles of the partial structures constituting an LNN (e.g. units or layers). As for approach (1), various methods have been investigated for approximating an LNN with a linear model (Lundberg & Lee, 2017; Nagamine & Mesgarani, 2017; Ribeiro et al., 2016) or a decision tree (Craven & Shavlik, 1996; Johansson & Niklasson, 2009; Krishnan et al., 1999; Thiagarajan et al., 2016). For image classification tasks in particular, methods for visualizing an LNN function have been extensively studied in terms of which part of an input image affects the prediction result (Ancona et al., 2018; Bach et al., 2015; Shrikumar et al., 2017; Simonyan et al., 2014; Smilkov et al., 2017; Springenberg et al., 2015; Sundararajan et al., 2017). Approach (2) has been studied by several authors who examined the function of a given part of an LNN (Alain & Bengio, 2017; Luo et al., 2016; Raghu et al., 2017; Zahavy et al., 2016). There has also been an approach designed to automatically extract the cluster structure of a trained LNN (Watanabe et al., 2017b; a; 2018a) based on network analysis. Although the above studies have made it possible to provide us with an interpretable representation of an LNN function with a fixed resolution (or number of clusters), there is a problem in that we do not know in advance the optimal resolution for interpreting the original network. 
In the methods described in the previous studies (Watanabe et al., 2017a;b; 2018a;b;c), the unit clustering results may change greatly with the cluster size setting, and there is no criterion for determining the optimal <--- Page Split ---> cluster size. Another problem is that the previous studies could only provide us with information about the magnitude of the relationship between a cluster and each input or output dimension value, and we could not determine whether this relationship was positive or negative. In this paper, we propose a method for extracting a hierarchical modular representation from a trained LNN, which provides us with both hierarchical clustering results with every possible number of clusters and the function of each cluster. Our proposed method mainly consists of three parts: (a) training an LNN for a given data set based on error back propagation, (b) determining the feature vectors of each hidden layer unit based on its correlation with the input and output dimension values, and (c) the hierarchical clustering of the feature vectors. Unlike the clustering methods in the previous studies, the role of each cluster is computed as a centroid of the feature vectors defined by the correlations in step (b), which enables us to know the representative mapping performed by the cluster in terms of both sign and magnitude for each input or output dimension. We show experimentally the effectiveness of our proposed method in interpreting the internal mechanism of a trained LNN, by applying it to two kinds of data sets: the MNIST data set that contains digit image data and a sequential data set of food consumer price indices. Based on the experimental results for the extracted hierarchical cluster structure and the role of each cluster, we discuss how the overall LNN function is structured as a collection of individual units. ## 2 TRAINING A LAYERED NEURAL NETWORK An LNN can be trained to approximate the input- output relationship of an arbitrary data set \((x,y)\) that consists of input data \(x\in \mathbb{R}^{M}\) and output data \(y\in \mathbb{R}^{N}\) , by using a function \(f(x,w)\) from \(x\in \mathbb{R}^{M}\) and a parameter \(w\in \mathbb{R}^{L}\) to \(\mathbb{R}^{N}\) . An LNN parameter is defined by \(w = \{\omega_{ij}^{d},\theta_{i}^{d}\}\) where \(\omega_{ij}^{d}\) is the connection weight between the \(i\) - th unit in a depth \(d\) layer and the \(j\) - th unit in a depth \(d + 1\) layer, and \(\theta_{i}^{d}\) is the bias of the \(i\) - th unit in the depth \(d\) layer. Here, \(d = 1\) and \(d = d_{0}\) , respectively, correspond to the input and output layers. The LNN function \(f(x,w)\) is a set of functions \(\{f_{j}(x,w)\}\) for all output dimensions \(j\) , each of which is defined by \(f_{j}(x,w) = \sigma (\sum_{i}\omega_{ij}^{d_{0} - 1}\theta_{i}^{d_{0} - 1} + \theta_{j}^{d_{0} - 1})\) . Here, \(\sigma (x) = 1 / (1 + \exp (- x))\) , and \(\theta_{i}^{d}\) is the output value of the \(i\) - th unit in the depth \(d\) layer and \(\sigma_{i}^{1} = x_{i}\) holds in the input layer. Such output values in each layer are given by \(\sigma_{j}^{d} = \sigma (\sum_{i}\omega_{ij}^{d_{0} - 1}\theta_{i}^{d_{0} - 1} + \theta_{j}^{d_{0} - 1})\) . The purpose of training an LNN is to find an optimal parameter \(w\) to approximate the true input- output relationship with a finite size training data set \(\{(X_{n},Y_{n})\}_{n = 1}^{n_{1}}\) , where \(n_{1}\) is the sample size. 
The training error \(E(w)\) of an LNN is given by \(E(w) = \frac{1}{n_{1}}\sum_{n = 1}^{n_{1}}\| Y_{n} - f(X_{n},w)\|^{2}\), where \(\| \cdot \|\) is the Euclidean norm of \(\mathbb{R}^{N}\). Since the minimization of the training error \(E(w)\) leads to overfitting to a training data set, we adopt the L1 regularization method (Ishikawa, 1990; Tibshirani, 1996) to delete redundant connection weights and obtain a sparse solution. Here, the objective function to be minimized is given by \(H(w) = \frac{n_{1}}{2} E(w) + \lambda \sum_{d,i,j}|\omega_{ij}^{d}|\), where \(\lambda\) is a hyperparameter used to determine the strength of regularization. The minimization of such a function \(H(w)\) with the stochastic steepest descent method can be executed by an iterative update of the parameters from the output layer to the input layer, which is called error back propagation (Rumelhart et al., 1986; Werbos, 1974). The parameter update is given by

\[\Delta \omega_{ij}^{d - 1} = -\eta (\delta_{j}^{d}o_{i}^{d - 1} + \lambda \operatorname {sgn}(\omega_{ij}^{d - 1})), \quad \Delta \theta_{j}^{d} = -\eta \delta_{j}^{d},\]

where \(\delta_{j}^{d_{0}} = (o_{j}^{d_{0}} - y_{j})\,(o_{j}^{d_{0}}(1 - o_{j}^{d_{0}}) + \epsilon_{1})\), and \(\delta_{j}^{d} = \sum_{k = 1}^{l_{d + 1}}\delta_{k}^{d + 1}\omega_{jk}^{d}\,(o_{j}^{d}(1 - o_{j}^{d}) + \epsilon_{1})\) for \(d = d_{0} - 1,\dots ,2\). Here, \(y_{j}\) is the \(j\)-th output dimension value of a randomly chosen \(n\)-th sample \((X_{n},Y_{n})\), \(\epsilon_{1}\) is a hyperparameter for the LNN convergence, and \(\eta\) is the step size for training time \(t\), which is determined such that \(\eta (t)\propto 1 / t\). In the experiments, we adopt \(\epsilon_{1} = 0.001\) and \(\eta = 0.7\times a_{1}n_{1} / (a_{1}n_{1} + 5t)\), where \(a_{1}\) is the mean iteration number for LNN training per dataset.
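The update rule above can be sketched as follows; this is a single-sample illustration that reuses the `forward` sketch from the previous section, with `lam`, `eta`, and `eps1` standing in for \(\lambda\), \(\eta\), and \(\epsilon_{1}\) (our variable names and default values, not the paper's):

```python
import numpy as np

def l1_backprop_step(x, y, weights, biases, lam=2e-5, eps1=1e-3, eta=0.1):
    """One stochastic update: Delta w = -eta (delta_j o_i + lam sgn(w)),
    Delta theta = -eta delta_j, with the deltas defined in the text."""
    outputs = forward(x, weights, biases)          # from the earlier sketch
    o_out = outputs[-1]
    delta = (o_out - y) * (o_out * (1.0 - o_out) + eps1)   # output-layer delta
    for d in reversed(range(len(weights))):
        # delta for the layer below, computed with the pre-update weights
        o = outputs[d]
        next_delta = (weights[d] @ delta) * (o * (1.0 - o) + eps1) if d > 0 else None
        weights[d] -= eta * (np.outer(outputs[d], delta) + lam * np.sign(weights[d]))
        biases[d] -= eta * delta
        delta = next_delta
```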
<--- Page Split --->

## 3 HIERARCHICAL MODULAR REPRESENTATION OF LNNS

### 3.1 DETERMINING FEATURE VECTORS OF HIDDEN LAYER UNITS

To apply hierarchical clustering to a trained LNN, we define a feature vector for each hidden layer unit. Let \(v_{k}\) be the feature vector of the \(k\)-th hidden layer unit. Such a feature vector should reflect the role of its corresponding unit in LNN inference. Here, we propose defining the feature vector \(v_{k}\) of the \(k\)-th hidden layer unit based on its correlations with each input and output dimension. In previous studies (Watanabe et al., 2018b;c), methods have been proposed for determining the role of a unit or a unit cluster based on the square root error. However, these methods can only provide us with knowledge about the magnitude of the effect of each input dimension on a unit and the effect of a unit on each output dimension, not information about how a hidden layer unit is affected by each input dimension or how each output dimension is affected by a hidden layer unit. In other words, there has been no method that can reveal whether an increase in the input dimension value has a positive or negative effect on the output value of a hidden layer unit, or whether an increase in the output value of a hidden layer unit has a positive or negative effect on the output dimension value. To obtain such sign information regarding the roles of each hidden layer unit, we use the following definitions based on the correlation.

Definition 1 (Effect of \(i\)-th input dimension on \(k\)-th hidden layer unit). We define the effect of the \(i\)-th input dimension on the \(k\)-th hidden layer unit as \(v_{ik}^{\mathrm{in}}\), where

\[v_{ik}^{\mathrm{in}} = \frac{E\Big[\Big(X_{i}^{(n)} - E[X_{i}^{(n)}]\Big)\Big(o_{k}^{(n)} - E[o_{k}^{(n)}]\Big)\Big]}{\sqrt{E\Big[\Big(X_{i}^{(n)} - E[X_{i}^{(n)}]\Big)^{2}\Big]E\Big[\Big(o_{k}^{(n)} - E[o_{k}^{(n)}]\Big)^{2}\Big]}}.\]

Here, \(E[\cdot ]\) represents the mean over all the data samples, \(X_{i}^{(n)}\) is the \(i\)-th input dimension value of the \(n\)-th data sample, and \(o_{k}^{(n)}\) is the output of the \(k\)-th hidden layer unit for the \(n\)-th input data sample.

Definition 2 (Effect of \(k\)-th hidden layer unit on \(j\)-th output dimension). We define the effect of the \(k\)-th hidden layer unit on the \(j\)-th output dimension as \(v_{kj}^{\mathrm{out}}\), where

\[v_{kj}^{\mathrm{out}} = \frac{E\Big[\Big(o_{k}^{(n)} - E[o_{k}^{(n)}]\Big)\Big(y_{j}^{(n)} - E[y_{j}^{(n)}]\Big)\Big]}{\sqrt{E\Big[\Big(o_{k}^{(n)} - E[o_{k}^{(n)}]\Big)^{2}\Big]E\Big[\Big(y_{j}^{(n)} - E[y_{j}^{(n)}]\Big)^{2}\Big]}}.\]

Here, \(y_{j}^{(n)}\) is the value of the \(j\)-th output layer unit for the \(n\)-th input data sample.

We define a feature vector of each hidden layer unit based on the above definitions.

Definition 3 (Feature vector of \(k\)-th hidden layer unit). We define the feature vector of the \(k\)-th hidden layer unit as \(v_{k} \equiv [v_{1k}^{\mathrm{in}}, \dots , v_{i_{0}k}^{\mathrm{in}}, v_{k1}^{\mathrm{out}}, \dots , v_{kj_{0}}^{\mathrm{out}}]\). Here, \(i_{0}\) and \(j_{0}\), respectively, represent the dimensions of the input and output data.

Alignment of signs of feature vectors based on cosine similarity

The feature vectors of Definition 3 represent the roles of the hidden layer units in terms of input-output mapping. When interpreting such roles of hidden layer units, it is natural to regard the roles of any pair of units \((k_{1}, k_{2})\) as being the same iff they satisfy \(v_{k_{1}} = v_{k_{2}}\) or \(v_{k_{1}} = - v_{k_{2}}\). The latter condition corresponds to the case where the \(k_{1}\)-th and \(k_{2}\)-th units have the same correlations with the input and output dimensions except that their signs are opposite, as depicted in Figure 1. To regard the roles of unit pairs that satisfy one of the above conditions as the same, we propose an algorithm for aligning the signs of the feature vectors based on cosine similarity (Algorithm 1). By randomly selecting a feature vector and aligning its sign according to the sum of the cosine similarities with all the other feature vectors, the sum of the cosine similarities over all pairs of feature vectors increases monotonically. We show experimentally the effect of this sign alignment algorithm in Appendix 2.

<--- Page Split --->

Algorithm 1 Alignment of signs of feature vectors based on cosine similarity
1: Let \(v_{k}\) and \(a_{0}\), respectively, be the feature vector for the \(k\)-th hidden layer unit and the number of iterations.
2: for \(a = 1\) to \(a_{0}\) do
3: Randomly choose the \(k\)-th hidden layer unit according to the uniform distribution.
4: if \(\sum_{j\neq k}\frac{v_{k}\cdot v_{j}}{\sqrt{v_{k}\cdot v_{k}}\sqrt{v_{j}\cdot v_{j}}} < 0\) then
5: \(v_{k}\leftarrow - v_{k}\)
6: end if
7: end for

![](images/3_0.jpg)
<center>Figure 1: An example of two hidden layer units with the same function. The corresponding feature vectors are the same, except that their signs are opposite. </center>
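Definitions 1-3 amount to stacking two correlation matrices. The following is a minimal NumPy sketch, assuming `X` (input data, shape \((n, i_{0})\)), `O` (hidden-unit outputs, shape \((n, k_{0})\)), and `Y` (output-layer values, shape \((n, j_{0})\)) have already been collected over the data samples; the function names are ours:

```python
import numpy as np

def pearson(A, B):
    """Matrix of Pearson correlations between columns of A and columns of B."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    denom = np.sqrt((A ** 2).mean(axis=0))[:, None] * np.sqrt((B ** 2).mean(axis=0))[None, :]
    return (A.T @ B) / A.shape[0] / denom

def feature_vectors(X, O, Y):
    """Row k is v_k = [v_{1k}^in, ..., v_{i0 k}^in, v_{k1}^out, ..., v_{k j0}^out]."""
    v_in = pearson(X, O)    # (i0, k0): effect of input i on unit k (Definition 1)
    v_out = pearson(O, Y)   # (k0, j0): effect of unit k on output j (Definition 2)
    return np.concatenate([v_in.T, v_out], axis=1)   # (k0, i0 + j0), Definition 3
```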
### 3.2 HIERARCHICAL CLUSTERING OF UNITS IN A TRAINED LNN

Once we have obtained the feature vectors of all the hidden layer units as described in section 3.1, we can extract a hierarchical modular representation of an LNN by applying hierarchical clustering to the feature vectors. Among the several existing methods for such hierarchical clustering, including single-link and complete-link, Ward's method (Ward, 1963) has been shown experimentally to be effective in terms of its classification sensitivity, so we employ this method in our experiments. We start with \(k_{0}\) individual hidden layer units, and sequentially combine clusters with the minimum error sum of squares (ESS), which is given by

\[ESS\equiv \sum_{m}\Big(\sum_{k:u_{k}\in C_{m}}\| v_{k}\|^{2} - \frac{1}{|C_{m}|}\Big\|\sum_{k:u_{k}\in C_{m}}v_{k}\Big\|^{2}\Big), \quad (1)\]

where \(u_{k}\) and \(v_{k}\), respectively, are the \(k\)-th hidden layer unit (\(k = 1,\dots ,k_{0}\)) and its corresponding feature vector, \(C_{m}\) is the unit set assigned to the \(m\)-th cluster, and \(|\cdot |\) represents the cluster size. From Equation (1), the ESS is the value given by first computing the cluster size (\(|C_{m}|\)) times the variance of the feature vectors in each cluster, and then taking the sum of these values over all the clusters. When combining a pair of clusters \((C_{m_{1}},C_{m_{2}})\) into one cluster, the ESS increases by

\[\Delta ESS = \frac{|C_{m_{1}}||C_{m_{2}}|}{|C_{m_{1}}| + |C_{m_{2}}|}\Big\| \frac{1}{|C_{m_{1}}|}\sum_{k:u_{k}\in C_{m_{1}}}v_{k} - \frac{1}{|C_{m_{2}}|}\sum_{k:u_{k}\in C_{m_{2}}}v_{k}\Big\|^{2}. \quad (2)\]

Therefore, in each iteration, we do not have to compute the error sum of squares for all the clusters; instead, we simply have to compute the error increase \(\Delta ESS\) given by Equation (2) for all the pairs of current clusters \((C_{m_{1}},C_{m_{2}})\), find the optimal pair of clusters that achieves the minimum error increase, and combine them. We describe the whole procedure of Ward's method in Algorithm 2. This procedure of combining a pair of clusters is repeated until all the hidden layer units are assigned to one cluster, and from the clustering result \(\{C_{m}^{(t)}\}\) in each iteration \(t = 1,\dots ,k_{0} - 1\), we can obtain a hierarchical modular representation of an LNN, which connects the two extreme resolutions given by "all units are in a single cluster" and "all clusters consist of a single unit". The role of each extracted cluster can be determined from the centroid of the feature vectors of the units assigned to the cluster, which can be interpreted as a representative input-output mapping of the cluster.
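The merge rule of Equation (2) can be sketched directly; the following naive \(O(k_{0}^{3})\) implementation is for illustration only (in practice, `scipy.cluster.hierarchy.linkage(V, method='ward')` should build the same tree far more efficiently, up to tie-breaking):

```python
import numpy as np
from itertools import combinations

def ward_hierarchy(V):
    """Return the list of partitions, from k0 singleton clusters down to one
    cluster, merging at each step the pair with minimum Delta ESS of Eq. (2)."""
    clusters = [[k] for k in range(len(V))]
    partitions = [[list(c) for c in clusters]]
    while len(clusters) > 1:
        def d_ess(a, b):
            ca, cb = V[clusters[a]].mean(axis=0), V[clusters[b]].mean(axis=0)
            na, nb = len(clusters[a]), len(clusters[b])
            return na * nb / (na + nb) * np.sum((ca - cb) ** 2)
        i, j = min(combinations(range(len(clusters)), 2), key=lambda p: d_ess(*p))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
        partitions.append([list(c) for c in clusters])
    return partitions
```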
## 4 EXPERIMENTS

We apply our proposed method to two kinds of data sets to show its effectiveness in interpreting the mechanism of trained LNNs. The experimental settings are detailed in Appendix 3. In Appendix 1, we provide a qualitative comparison with the previous method (Watanabe et al., 2018c).

### 4.1 EXPERIMENT USING THE MNIST DATA SET

First, we applied our proposed method to an LNN trained with the MNIST data set (LeCun et al., 1998) to recognize 10 types of digits from input images. Before the LNN training, we trimmed the top, bottom, left and right margins and then resized the images to \(14 \times 14\) pixels. Figure 2 shows sample images for each class of digits.

<--- Page Split --->

Algorithm 2 Ward's hierarchical clustering method (Ward, 1963)
1: Let \(u_{k}\) and \(v_{k}\), respectively, be the \(k\)-th hidden layer unit \((k = 1,\dots ,k_{0})\) and its corresponding feature vector, and let \(\{C_{m}^{(t)}\}\) be the unit set assigned to the \(m\)-th cluster in the \(t\)-th iteration \((m = 1,\dots ,k_{0} - t + 1)\). Initially, we set \(t\gets 1\) and \(C_{m}^{(1)}\gets \{u_{m}\}\).
2: for \(t = 2\) to \(k_{0} - 1\) do
3: Find the pair of clusters with the minimum error increase:
\[(C_{m_{1}}^{(t - 1)},C_{m_{2}}^{(t - 1)})\gets \arg \min_{(C_{i}^{(t - 1)},C_{j}^{(t - 1)})}\Delta ESS(C_{i}^{(t - 1)},C_{j}^{(t - 1)}),\ \mathrm{where}\]
\[\Delta ESS(C,C^{\prime})\equiv \frac{|C||C^{\prime}|}{|C| + |C^{\prime}|}\Big\| \frac{1}{|C|}\sum_{k:u_{k}\in C}v_{k} - \frac{1}{|C^{\prime}|}\sum_{k:u_{k}\in C^{\prime}}v_{k}\Big\|^{2}.\]
Here, we assume \(m_{1}< m_{2}\).
4: Update the clusters as follows:
\[C_{m}^{(t)}\leftarrow \left\{ \begin{array}{ll}C_{m_{1}}^{(t - 1)}\cup C_{m_{2}}^{(t - 1)} & (m = m_{1})\\ C_{m}^{(t - 1)} & (1\leq m\leq m_{2} - 1,\ m\neq m_{1})\\ C_{m + 1}^{(t - 1)} & (m_{2}\leq m\leq k_{0} - t + 1) \end{array} \right..\]
5: end for

Although our proposed method provides us with a clustering result for every possible resolution, or number of clusters \(c\), we have only plotted the results for \(c = 4,8,16\), for ease of visibility. Figures 3 and 4, respectively, show the hierarchical cluster structure extracted from the trained LNN and the roles, or representative input-output mappings, of the extracted clusters. From these figures, we can gain knowledge about the LNN structure as follows.

- At the coarsest resolution, the main function of the trained LNN is decomposed into Clusters 1, 2, 3 and 4. Cluster 1 captures the input information about black pixels in the shape of a 6 and white pixels in the shape of a 7, and it has a positive and a negative correlation with the output dimensions corresponding to "6" and "7", respectively. Cluster 2 correlates negatively with the region in the shape of a 9, and positively with the other areas. It has a positive correlation with the recognition of "2" and "6," and a negative one with "0," "4" and "9." Cluster 3 correlates positively with the black pixels in the left part of an image, and it has a positive correlation with "0," "4" and "6," and a negative correlation with "3" and "7." Cluster 4 captures the 0-shaped region, and it has a larger correlation with the output of "0" than with the other digits.
- Cluster 2 is decomposed into three smaller clusters, 7, 8 and 9. Cluster 7 captures similar input information to Cluster 2, and it also correlates strongly with the lower area of an image. This cluster mainly affects the recognition results for "5" and "6." Cluster 8 uses the input information of the area with the shape of a 9; however, its main recognition target is "2." Cluster 9 correlates positively with the area extending from the upper right to the lower left of an image, and it correlates negatively with the digits "4" and "9."
- Cluster 8 consists of two smaller clusters, 17 and 18. Cluster 17 is mainly affected by the upper part and the lower right part of an image, and the absolute values of its correlations with the output dimensions are all less than 0.2, while the role of Cluster 18 is almost the same as that of Cluster 8.
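The fixed-resolution views shown in Figures 3 and 4 can be reproduced, for instance, by cutting the Ward tree at each desired number of clusters and taking centroids as the cluster roles. A hedged sketch using SciPy follows; the variable `V` is assumed to be the feature-vector matrix from the earlier sketch:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_roles(V, resolutions=(4, 8, 16)):
    """For each resolution c, return the cluster label of every unit and the
    centroid feature vector (representative input-output mapping) per cluster."""
    Z = linkage(V, method='ward')        # full merge tree over unit features
    result = {}
    for c in resolutions:
        labels = fcluster(Z, t=c, criterion='maxclust')   # labels in 1..c
        roles = {m: V[labels == m].mean(axis=0) for m in np.unique(labels)}
        result[c] = (labels, roles)
    return result
```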
### 4.2 EXPERIMENT USING THE CONSUMER PRICE INDEX DATA SET

We also applied the proposed method to an LNN trained with a consumer price index data set (e-Stat, 2018) to predict the consumer price indices of taro, radish and carrot for a month from the input data of the previous 36 months. With this data set, we plotted the results for \(c = 3,6,12\), where \(c\) is the number of clusters. Figures 5 and 6, respectively, show the hierarchical cluster structure extracted from the trained LNN and the roles, or representative input-output mappings, of the extracted clusters. From these figures, we can gain knowledge about the LNN structure as follows.

- Clusters 1, 2 and 3 represent the main input-output function of the hidden layer units. Interestingly, all of these clusters have similar correlations with the output dimensions (\(0 < \text{radish} < \text{taro} < \text{carrot}\)). However, these three clusters use different input information: Cluster 1 strongly reflects seasonal information, and its correlation is especially high with the consumer price indices of the three vegetables one month before and one, two and three years earlier. Cluster 3 also reflects seasonal information; however, the absolute values of the correlations are less than 0.3, and it correlates strongly with the input information of 8, 20 and 32 months before. On the other hand, Cluster 2 does not use such a seasonal effect very much, and it is affected almost equally by the information of all months, except the recent information of radish from nine months before.

<--- Page Split --->

- Cluster 1 is composed of the smaller clusters 16 and 17. Cluster 16 is mainly used to predict the consumer price index of taro, and it strongly correlates with the input information for taro from one month before and one, two and three years before. Compared with Cluster 16, Cluster 17 affects the three output dimensions more equally.
- Cluster 7 is a part of Cluster 3, and consists of the smaller clusters 11, 12 and 13. These clusters have mutually different relationships with the output dimension values: Cluster 11 correlates positively with the consumer price indices of taro and carrot, and negatively with that of radish. It mainly uses recent information about carrot (within a year) and the values of taro of 5, 17 and 29 months before. Cluster 13 is mainly used to predict the radish output value. It has a positive correlation with the input information for taro, radish and carrot of about 6, 18 and 30 months earlier, and it has a negative correlation with the values of one month before and one, two and three years before. The absolute values of the correlations between Cluster 12 and the output dimension values are less than 0.2, so, unlike Clusters 11 and 13, it does not significantly affect the prediction result.

## 5 DISCUSSION

Here, we discuss our proposed method for obtaining a hierarchical modular representation from the perspectives of statistical evaluation and visualization. Our proposed method provides us with a series of clustering results for an arbitrary cluster size, and the resulting structure does not change if we use the same criterion (e.g. the error sum of squares for Ward's method) for evaluating the similarity of the feature vectors.
However, there is no way to determine which criterion yields the optimal clustering result to represent a trained LNN, due to the fact that the interpretability of acquired knowledge cannot be formulated mathematically (although there has been an attempt to quantify interpretability for a specific task, especially image recognition (Bau et al., 2017)). This problem makes it impossible to compare different methods for interpreting LNNs quantitatively, as pointed out in previous studies (Lipton, 2016; Doshi-Velez & Kim, 2017). Therefore, the provision of a statistical evaluation method as regards both the interpretability and the accuracy of the resulting cluster structure constitutes important future work.

Although we can apply our proposed method to an arbitrary network structure, as long as it contains a set of units that output some value for a given input data sample, the visualization of the resulting hierarchical modular representations becomes more difficult with a deeper and larger-scale network structure, since a cluster may contain units in mutually distant layers. Additionally, the number of possible cluster sizes increases with the scale (or the number of units) of a network, so it is necessary to construct a method for automatically selecting a set of representative resolutions, instead of visualizing the entire hierarchical cluster structure.

## 6 CONCLUSION

Finding a way to unravel the function of a trained LNN is an important issue in the machine learning field. While LNNs have achieved high prediction accuracy with various data sets, their highly complex and nonlinear parameters have made it difficult to interpret their internal inference mechanism. Recent studies have enabled us to decompose a trained LNN into a simpler cluster structure; however, there has been no method for (1) determining the optimal number of clusters, or (2) knowing whether the outputs of each cluster have a positive or negative correlation with the input and output dimension values. In this paper, we proposed a method for extracting the hierarchical modular representation of a trained LNN, which consists of sequential clustering results with every possible number of clusters. By determining the feature vectors of the hidden layer units based on their correlations with the input and output dimension values, it also enables us to know what range of input each cluster maps to what range of output. We showed the effectiveness of our proposed method experimentally by applying it to two kinds of practical data sets and by interpreting the resulting cluster structure.

<--- Page Split --->

![](images/6_0.jpg)
<center>Figure 2: Input image examples of the MNIST data set. </center>

![](images/6_1.jpg)
<center>Figure 3: Hierarchical clusters of an LNN (MNIST data set). </center>

![](images/6_2.jpg)
<center>Figure 4: Representative input-output mappings of extracted clusters. </center>

<--- Page Split --->

![](images/7_0.jpg)
<center>Figure 5: Hierarchical clusters of an LNN (food consumer price index data set). </center>

![](images/7_1.jpg)
<center>Figure 6: Representative input-output mappings of extracted clusters. </center>

<--- Page Split --->

## APPENDIX 1: COMPARISON WITH CLUSTERING METHOD BASED ON NON-NEGATIVE MATRIX FACTORIZATION

Here, we show the effectiveness of our proposed method by comparing it with the clustering method based on non-negative matrix factorization (or NNMF), which was proposed in a previous study (Watanabe et al., 2018c).
We applied this NNMF-based clustering method to the same data sets that we used in the experiments described in section 4. In the previous study (Watanabe et al., 2018c), the feature vectors of the hidden layer units are defined by the magnitude of the effect of each input dimension value on a cluster and the effect of a cluster on each output dimension value, computed from the square root error of the unit output values. By definition, the elements of such feature vectors are all non-negative, which is a necessary condition for applying NNMF to the feature vectors.

We applied the NNMF-based clustering method to the trained network with exactly the same parameters as the network shown in Figures 3 and 5. With the MNIST data set (LeCun et al., 1998) and the consumer price index data set (e-Stat, 2018), respectively, we decomposed the trained networks into 16 and 12 clusters. With both data sets, we set the number of iterations of the NNMF algorithm at 1000. We applied the NNMF algorithm 10000 times and used the best result in terms of the approximation error. The initial values of the two low-dimensional matrices were randomly chosen according to the normal distribution \(\mathcal{N}(0.5, 0.5)\).

Figures 7, 8, 9, and 10 show the resulting cluster structures and the representative roles of the clusters. Comparing these figures with the results in Figures 3, 4, 5, and 6, we can observe that the previous NNMF-based method could not capture the structures of the input and output dimension values in as much detail as our proposed method, since it does not take the sign information into account. Furthermore, with the NNMF-based method, we must define the number of clusters in advance, and we cannot observe the hierarchical structure of clusters to find the optimal resolution for interpreting the roles of the partial structures of an LNN.

## APPENDIX 2: EFFECT OF SIGN ALIGNMENT OF FEATURE VECTORS

Here, we discuss the effect of the sign alignment of the feature vectors based on cosine similarity (Algorithm 1). Figure 11 shows the effect of the sign alignment of the feature vectors extracted from an LNN trained with the MNIST data set (LeCun et al., 1998). The left and center figures, respectively, show the feature vectors before and after the alignment of the signs. The right figure shows the monotonic increase of the sum of the cosine similarities through the alignment algorithm.

Figure 12 shows the dendrograms of the hierarchical clustering results with the original feature vectors of Definition 3 and with the feature vectors after the alignment of the signs. From this figure, we can observe that the height of the dendrogram, which reflects the dissimilarity among the hidden layer units, is greater with the original feature vectors than with the feature vectors after the sign alignment. In other words, the algorithm successfully aligned the feature vectors so that they became similar to each other.

Figures 13 and 14 show the effect of the sign alignment of the feature vectors extracted from an LNN trained with the food consumer price index data set (e-Stat, 2018). These figures show similar results to those of the MNIST data set.
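For reference, Algorithm 1 together with the similarity-sum diagnostic plotted in the right panels of Figures 11 and 13 can be sketched as follows (a simple NumPy rendering under our own naming; the Gram matrix is recomputed each iteration for clarity, not efficiency):

```python
import numpy as np

def align_signs(V, a0=5000, seed=0):
    """Algorithm 1: flip the sign of a randomly chosen feature vector whenever
    its summed cosine similarity with all other vectors is negative; also
    record the sum of cosine similarities over all pairs at every step."""
    rng = np.random.default_rng(seed)
    V = V.copy()
    U = V / np.linalg.norm(V, axis=1, keepdims=True)   # row-normalized copy
    history = []
    for _ in range(a0):
        k = rng.integers(len(V))
        if U[k] @ (U.sum(axis=0) - U[k]) < 0:          # sum of cosines with others
            V[k], U[k] = -V[k], -U[k]
        G = U @ U.T
        history.append((G.sum() - len(V)) / 2.0)       # exclude unit self-terms
    return V, history
```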
## APPENDIX 3: EXPERIMENTAL SETTINGS

Here, we detail the experimental settings. E1 and E2, respectively, represent the settings of the experiments described in sections 4.1 and 4.2.

- The training sample size \(n_1\) was: 500 per class (E1), and 270 (E2).
- We normalized the input data so that the minimum and maximum values of an element, respectively, were \(-1\) and 1. Similarly, we normalized the output data so that the minimum and maximum values of an element, respectively, were 0.01 and 0.99.
- The mean iteration number for LNN training per dataset \(a_1\) was: 100 per class (E1), and 500 (E2).

<--- Page Split --->

- We generated the initial connection weights and biases of a layered neural network as follows: \(\omega_{ij}^{d}\stackrel {\mathrm{i.i.d.}}{\sim}\mathcal{N}(0,0.5),\ \theta_{i}^{d}\stackrel {\mathrm{i.i.d.}}{\sim}\mathcal{N}(0, 0.5).\)
- The hyperparameter of the L1 regularization \(\lambda\) was: \(1.1 \times 10^{-5}\) (E1), and \(2 \times 10^{-5}\) (E2).
- As regards the LNN training with the MNIST data set, we chose the training data with the following deterministic procedure to stabilize the training. Let \(Z_{n}^{(k)} \equiv \{X_{n}^{(k)}, Y_{n}^{(k)}\}\) be the \(n\)-th training data sample in class \(k\). The training data were chosen in the following order:
\[Z_{1}^{(1)},\dots ,Z_{1}^{(10)},Z_{2}^{(1)},\dots ,Z_{2}^{(10)},\dots ,Z_{n_{1}}^{(1)},\dots ,Z_{n_{1}}^{(10)},Z_{1}^{(1)},\dots ,Z_{1}^{(10)},\dots\]
- The iteration number for the alignment of the signs of the feature vectors \(a_{0}\) was: 5000 (E1 and E2).
- The weight-removing hyperparameter \(\xi\) was: 0.6 (E1), and 0.001 (E2). In Figures 3 and 5, we only draw connections for which the absolute values of the weights were \(\xi\) or more.

<--- Page Split --->

![](images/12_0.jpg)
<center>Figure 7: Cluster structure of an LNN acquired by non-negative matrix factorization (MNIST data set). </center>

![](images/12_1.jpg)
<center>Figure 8: Representative input-output mappings of extracted clusters. </center>

<--- Page Split --->

![](images/13_0.jpg)
<center>Figure 9: Cluster structure of an LNN acquired by non-negative matrix factorization (food consumer price index data set). </center>

![](images/13_1.jpg)
<center>Figure 10: Representative input-output mappings of extracted clusters. </center>

<--- Page Split --->

![](images/14_0.jpg)
<center>Figure 11: Left: Feature vectors of Definition 3. Each row corresponds to a feature vector for a hidden layer unit. Center: Feature vectors after the alignment of the signs. Right: Sum of the cosine similarities of all the pairs of feature vectors (MNIST data set). </center>

![](images/14_1.jpg)
<center>Figure 12: Dendrograms of the hierarchical clustering results with the original feature vectors of Definition 3 (top) and with the feature vectors after the alignment of the signs (bottom). </center>

<--- Page Split --->

![](images/15_0.jpg)
<center>Figure 13: Left: Feature vectors of Definition 3. Each row corresponds to a feature vector for a hidden layer unit. Center: Feature vectors after the alignment of the signs. Right: Sum of the cosine similarities of all the pairs of feature vectors (food consumer price index data set). </center>

![](images/15_1.jpg)
<center>Figure 14: Dendrograms of the hierarchical clustering results with the original feature vectors of Definition 3 (top) and with the feature vectors after the alignment of the signs (bottom). </center>

<--- Page Split --->
reject
Reject
3.333333
ICLR_2019_paper_0784
iclr
2,019
# ADAPTIVE SAMPLE-SPACE & ADAPTIVE PROBABILITY CODING: A NEURAL-NETWORK BASED APPROACH FOR COMPRESSION

Anonymous authors
Paper under double-blind review

## ABSTRACT

We propose Adaptive Sample-space & Adaptive Probability (ASAP) coding, an efficient neural-network based method for lossy data compression. Our ASAP coding distinguishes itself from the conventional methods based on adaptive arithmetic coding in that it models the probability distribution for the quantization process in such a way that one can conduct back-propagation for the quantization width that determines the support of the distribution. Our ASAP also trains the model with a novel, hyper-parameter free multiplicative loss for the rate-distortion tradeoff. With our ASAP encoder, we are able to compress the image files in the Kodak dataset to as little as one fifth the size of the JPEG-compressed image without compromising their visual quality, and achieve the state-of-the-art result in terms of the MS-SSIM based rate-distortion tradeoff.

## 1 INTRODUCTION

In terse terms, lossy data compression is a task in which one seeks to encode a data file into as short a code as possible without losing the essential information. Extensive research has been conducted in the field of lossy data compression; JPEG, WebP and BPG are well-known lossy image compression codecs. The recent advances in machine learning methods are accelerating the pace of the research in this field. For example, studies like Toderici et al. (2015; 2017); Ballé et al. (2017); Theis et al. (2017); Johnston et al. (2017); Rippel & Bourdev (2017); Mentzer et al. (2018); Agustsson et al. (2018); Nakanishi et al. (2018); Ballé et al. (2018) have incorporated the methods of deep learning into lossy compression, and some of them succeeded in producing results that far surpass the classical, neural-network-free methods.

Almost all lossy codecs to date, including the state-of-the-art codecs, are built on autoencoders. After transforming the input data with the autoencoder, the lossy codecs discretize the latent space using a quantizer function and further convert the transformed data so that it can be stored as a binary code. In order to quantize the latent feature into as short a code as possible, one usually decomposes the feature into pieces and uses the information of the pieces quantized so far (i.e., the context) in order to adaptively choose a discrete probability space for the quantization and the entropy coding of the next piece of information. By construction, the choice of the probability space for the quantization and the entropy coding of the feature vector is a key factor that determines the performance of the compression algorithm.

In this study, we present a novel compression architecture that allows the user to optimize the discrete support of the probability space to be used for quantization and entropy coding (PSQE). Optimization of the support of the PSQE in an NN-based architecture is a daunting task. To the authors' best knowledge, there has been no study to date that enabled this optimization for an NN-based architecture. Our Adaptive Sample-space & Adaptive Probability (ASAP) coding partially achieves this task by assigning to each latent feature a discretized normal distribution with a different mean, variance and quantization width. Our method automatically chooses an appropriate quantization width for each latent feature.
For the compression with small models in particular, we were able to confirm that this adaptive quantization width benefits the performance in terms of the MS-SSIM score of the reconstructed images.

We also present a novel objective function for the training of the quantizer. In general, the problem of minimizing the reconstruction error (distortion) under a constraint on the code-length (or bpp) can be formulated as an expression of the form \(\mathrm{bpp} + \lambda \times \mathrm{distortion}\) with a Lagrange multiplier \(\lambda\). Needless to say, however, the appropriate value of \(\lambda\) depends on how much importance the user assigns to the code-length, which in turn heavily depends on the situation. Training a separate model and conducting a separate hyper-parameter search in every different situation is cumbersome. In order to circumvent this inconvenience, we propose a novel, hyper-parameter free multiplicative loss function that measures the rate-distortion tradeoff. Equipped with the greater freedom in the choice of PSQE, our ASAP model trained with our multiplicative loss is not only able to attain compression quality that is on par with the state-of-the-art methods trained with an extensive hyper-parameter search over \(\lambda\), but is also able to outperform all its predecessors in terms of the tradeoff ability over almost all regions of the bpp constraint. Vice versa, under the same distortion constraint, our codec can compress an image file into a code that is as much as \(2 \sim 5\) times smaller than JPEG's. This is \(2 \sim 2.5\) times smaller than BPG, which is one of the latest compression algorithms used in industry (see Figs. 2 and 15).

<--- Page Split --->

![](images/1_0.jpg)
<center>Figure 1: Reconstructed images with different compression methods. </center>

![](images/1_1.jpg)
<center>Figure 2: Rate-distortion tradeoff curves evaluated for different methods on the Kodak dataset. The horizontal axis represents bits-per-pixel (bpp) and the vertical axis represents the average multiscale structural similarity (MS-SSIM) computed over RGB channels. The right two panels are magnifications of the leftmost panel. Regarding the RD curve of Rippel & Bourdev (2017), we carefully traced the RD curve from the figure in their paper, because the original paper did not provide the exact values at each point. As for the RD curve of Johnston et al. (2017), we used the values provided by the authors via personal communication. </center>

<--- Page Split --->

![](images/2_0.jpg)
<center>Figure 3: Overall architecture of the proposed model. </center>

![](images/2_1.jpg)
<center>Figure 4: Outline of ASAP coding for image compression. </center>

## 2 METHODS

By all means, we intend our algorithm to be applied to various tasks including audio and video compression. For ease of explanation, however, we will assume for now that our problem is the compression of images, and describe the overall flow of our algorithm in the context of image compression. In a nutshell, our method follows the usual routine of transform coding. Throughout, let \(\boldsymbol {x}\in \mathbb{R}^{C_{0}\times H_{0}\times W_{0}}\) be the original image, where \(C_{0},H_{0},W_{0}\) respectively represent the number of channels, the height and the width of the image. The algorithm begins by first applying the transformation function \(F\) to \(\boldsymbol{x}\) in order to obtain a latent feature representation, \(\boldsymbol {z}\in \mathbb{R}^{C\times H\times W}\).
The algorithm then quantizes the signal \(\boldsymbol{z}\) into a finite space, and then converts the quantized signal \(\hat{\boldsymbol{z}}\) into a binary signal \(s\) by entropy coding. For the reconstruction, we apply the synthesizer map \(G\) to the code \(\hat{\boldsymbol{z}}\). That is, \(\hat{\boldsymbol{x}} = G\left(\hat{\boldsymbol{z}}\right)\). Fig. 3 illustrates the overall flow. In general, we would like to obtain a shorter \(s\) with a smaller distortion \(d\left(\boldsymbol {x},\hat{\boldsymbol{x}}\right)\), where \(d\) is the measure of the corruption in the reconstruction (e.g., 1 - MS-SSIM) (Wang et al., 2004). Our goal is therefore to train the networks that optimize the rate-distortion tradeoff. By construction, the loss of information takes place during the transformation from \(\boldsymbol{z}\) to \(\hat{\boldsymbol{z}}\). Our ASAP is the invention for this part of the compression. In Sec. 2.1, we describe the distinguishing features of ASAP coding relative to the conventional coding methods. We then describe in Sec. 2.2 the technicalities for the training of the ASAP models in the context of image compression.

### 2.1 THE FEATURES OF ASAP CODING

In this section, we review the basics of entropy coding. Let us assume that \(\boldsymbol {z}\in [- 1,1]^{C\times H\times W}\). This \(\boldsymbol {z}\) is to be quantized into \(\hat{\boldsymbol{z}}\in \mathbb{R}^{C\times H\times W}\) by the discretization of the continuous space \([- 1,1]^{C\times H\times W}\), so that it can be converted into a binary code via an invertible compression algorithm. For the entropy coding of \(\hat{\boldsymbol{z}}\), we have to prepare a probability distribution for \(\hat{\boldsymbol{z}}\) that represents its occurrence probability. Oftentimes, this occurrence probability is modeled by some form of autoregressive model. In our task of compressing the images, we adopted a model used in Nakanishi et al. (2018). We partitioned \(\hat{\boldsymbol{z}}\) into \(K\) groups \(\{\hat{\boldsymbol{z}}^{(k)}\in \mathbb{R}^{C^{(k)}\times H^{(k)}\times W^{(k)}};k = 1,\dots,K\}\) based on a geometric pattern we illustrate in Fig. 5. Let \(C^{(k)},H^{(k)}\), and \(W^{(k)}\) be the dimensions of the \(k\)-th group. We then estimated the true occurrence probability as \(p^{*}(\hat{\boldsymbol{z}}) = p^{*}(\hat{\boldsymbol{z}}^{(1)})\prod_{k = 2}^{K}p^{*}(\hat{\boldsymbol{z}}^{(k)}|\hat{\boldsymbol{z}}^{(1:k - 1)})\) by recursively approximating \(p^{*}(\hat{z}^{(k)}|\hat{z}^{(1:k - 1)})\) with some probability distribution \(\pi (\hat{z}^{(k)}|\hat{z}^{(1:k - 1)})\) with discrete support. For ease of notation, we occasionally use \(\pi^{(k)}(\hat{z}^{(k)})\) to denote the same distribution. In this model, we assume that each coordinate of \(\hat{z}^{(k)}\) is approximately independent conditional on \(\hat{z}^{(1:k - 1)}\). This way, we can resort to the power of parallel computation and greatly speed up the algorithm. For more details of this procedure, please see Nakanishi et al. (2018). For the task of compressing other types of data, one can choose a different type of partitioning that is natural to the situation.

Our method is novel in that we allow the optimization of the support of \(\pi (\hat{z}^{(k)}|\hat{z}^{(1:k - 1)})\).

<--- Page Split --->

![](images/3_0.jpg)
<center>Figure 5: The grouping scheme for the coordinates of \(\hat{z}\). This is an imitation of the procedure used in Nakanishi et al. (2018). </center>
We begin by modeling \(p^{*}(\hat{z}_{i}^{(k)}|\hat{z}^{(1:k - 1)})\) with a truncated Gaussian distribution \(p(\hat{z}_{i}^{(k)}|\hat{z}^{(1:k - 1)})\) on \([- 1,1]\) with parameters \(\mu_{i}^{(k)},\sigma_{i}^{(k)}\) for each \(i\)-th coordinate of \(\hat{z}^{(k)}\), using the information of \(\hat{z}^{(1:k - 1)}\). We then quantize \([- 1,1]\) into the intervals \(\{I_{i}^{(k)}[j]\}_{j = 1}^{M_{ki}}\) with the centers \(\{A_{i}^{(k)}[j]\}_{j = 1}^{M_{ki}}\), using the quantization width \(q_{i}^{(k)}\), where \(M_{ki}\) is the number of the intervals. The larger the value of \(M_{ki}\), the finer the quantization. We then construct \(\pi (\hat{z}_{i}^{(k)}|\hat{z}^{(1:k - 1)})\) by defining

\[\pi (\hat{z}_{i}^{(k)}|\hat{z}^{(1:k - 1)}): = \sum_{j = 1}^{M_{ki}}p(z_{i}^{(k)}\in I_{i}^{(k)}[j]|\hat{z}^{(1:k - 1)})\,\mathbb{1}_{\hat{z}_{i}^{(k)} = A_{i}^{(k)}[j]}. \quad (1)\]

With our method, we can optimize the objective function with respect to \(\mu_{i}^{(k)},\sigma_{i}^{(k)}\), and \(q_{i}^{(k)}\). The ability to optimize \(q_{i}^{(k)}\) is a distinctive feature of our ASAP coding. With this distinctive feature, our ASAP can compress \(\hat{z}\) into a code of much smaller size under the same distortion constraint. By the rate-distortion argument we mentioned in the introduction, this also means that our ASAP can store the data with much smaller distortion under the same constraint on the code-length.

### 2.2 ASAP CODING FOR IMAGE COMPRESSION

Next, we will explain the technicalities for the optimization of the parameters \(\mu_{i}^{(k)},\sigma_{i}^{(k)}\), and \(q_{i}^{(k)}\) in the context of image compression. We will also explain the multiplicative loss, a novel objective function used to train the model.

#### 2.2.1 ADAPTIVE CONSTRUCTION OF DISCRETIZED SAMPLE SPACE AND PROBABILITY DISTRIBUTION

Fig. 6 illustrates the construction of \(\hat{z}^{(k)}\) and \((A^{(k)},\pi^{(k)})\). From now on, let us write \(\pi_{i}^{(k)}(a):= \pi (\hat{z}_{i}^{(k)} = a|\hat{z}^{(1:k - 1)})\) for \(a\in A_{i}^{(k)}\). In order to recursively construct these probability measures, we trained a set of neural networks that map \(\hat{z}^{(1:k - 1)}\) to the triplets \(\theta_{i}^{(k)}:= (\mu_{i}^{(k)},\sigma_{i}^{(k)},q_{i}^{(k)})\). For the base level of \(k = 1\), we prepared trainable triplets \(\theta_{m}^{\mathrm{init}}:= (\mu_{m}^{\mathrm{init}},\sigma_{m}^{\mathrm{init}},q_{m}^{\mathrm{init}})\) for each \(m\)-th channel, and set \(\theta_{i}^{(1)} = \theta_{m}^{\mathrm{init}}\) for all \(\hat{z}_{i}^{(1)}\) belonging to the channel \(m\) (please see Fig. 14 in the appendix for the details).

<--- Page Split --->

![](images/4_0.jpg)
<center>Figure 6: Detailed architecture of the quantizer. The solid line represents the operation subject to back propagation, and the dotted line represents the operation that is not subject to back propagation. </center>

We define \(A_{i}^{(k)}\) as

\[A_{i}^{(k)} := \left\{a \,\middle |\, a = \mu_{i}^{(k)} + (u + 0.5)q_{i}^{(k)},\ u \in \mathbb{Z},\ a - 0.5q_{i}^{(k)} < 1,\ a + 0.5q_{i}^{(k)} \geq -1\right\} \quad (2)\]

so that \(I_{i}^{(k)}[j] = \left[A_{i}^{(k)}[j] - 0.5q_{i}^{(k)}, A_{i}^{(k)}[j] + 0.5q_{i}^{(k)}\right]\). This indeed amounts to discretizing the interval \([- 1, 1]\) using subintervals of width \(q_{i}^{(k)}\); \(M_{ki}\) is determined naturally from this constraint.
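For a single coordinate, the grid of Eq. (2) can be sketched as follows (a minimal NumPy rendering with scalar `mu` and `q`; the names are ours, not the paper's):

```python
import numpy as np

def sample_space(mu, q):
    """Centers of Eq. (2): a = mu + (u + 0.5) q over integers u, kept whenever
    the width-q bin around a overlaps [-1, 1]; M_ki = len of the result."""
    u = np.arange(np.floor((-1.0 - mu) / q) - 1, np.ceil((1.0 - mu) / q) + 1)
    a = mu + (u + 0.5) * q
    return a[(a - 0.5 * q < 1.0) & (a + 0.5 * q >= -1.0)]
```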
To each subinterval, we assign the probability mass computed with the formula

\[\pi_{i}^{(k)}(a) := \int_{a - 0.5q_{i}^{(k)}}^{a + 0.5q_{i}^{(k)}} p_{i}^{(k)}(x)\, \mathrm{d}x, \quad (3)\]

where \(p_{i}^{(k)}\) is a truncated normal distribution with the parameters \((\mu_{i}^{(k)}, \sigma_{i}^{(k)})\) that is defined over the domain \(\left[\underline{a}_{i}^{(k)} - 0.5q_{i}^{(k)}, \bar{a}_{i}^{(k)} + 0.5q_{i}^{(k)}\right)\), with \(\underline{a}_{i}^{(k)}\) and \(\bar{a}_{i}^{(k)}\) denoting the smallest and largest elements of \(A_{i}^{(k)}\). We are aware that this is not technically the faithful discretization of the interval \([- 1, 1]\); this compromise is for ease of implementation. If \(I_{i}^{(k)}\) perfectly partitions \([- 1, 1]\), the support of \(p_{i}^{(k)}\) will be exactly \([- 1, 1]\). By construction, our \(\pi_{i}^{(k)}\) satisfies the required properties of a probability mass function. By appealing to a basic result of information theory, any signal sampled from the distribution \(\pi_{i}^{(k)}\) can theoretically be encoded with a code-length of

\[\ell (\hat{\boldsymbol{z}}^{(k)}; \pi^{(k)}) := - \sum_{i = 1}^{C^{(k)} H^{(k)} W^{(k)}} \log_{2} \pi^{(k)} (\hat{z}_{i}^{(k)} | \hat{\boldsymbol{z}}^{(1:k - 1)}). \quad (4)\]

In order to map \(z_{i}^{(k)}\) to \(A_{i}^{(k)}\), we simply pick the member of \(A_{i}^{(k)}\) that is closest to \(z_{i}^{(k)}\); this member is the value we assign to \(\hat{z}_{i}^{(k)}\). By construction, the quantization error \(\hat{z}_{i}^{(k)} - z_{i}^{(k)}\) is bounded from above by \(q_{i}^{(k)} / 2\).

## Training of the neural models

In order to make the training of the neural networks possible, we take advantage of the fact that we can always write

\[\hat{z}_{i}^{(k)} = z_{i}^{(k)} - \delta_{i}^{(k)} q_{i}^{(k)} \quad (5)\]

for some \(\delta_{i}^{(k)} \in [- 0.5, 0.5)\). We can therefore interpret \(\delta_{i}^{(k)}\) as noise and regard the equation above as a probabilistic model for \(\hat{z}_{i}^{(k)}\) with a trainable noise parameter \(q_{i}^{(k)}\). Indeed, the demand for smaller distortion (loss of information) would prefer a smaller \(q_{i}^{(k)}\) and hence a smaller variance for the noise. On the contrary, the demand for a shorter code-length would prefer a coarser discretization of the sample space and hence a larger \(q_{i}^{(k)}\). This is indeed the very manifestation of the rate-distortion tradeoff in the training of \(q_{i}^{(k)}\).

<--- Page Split --->

![](images/5_0.jpg)
<center>Figure 7: Rate-distortion tradeoff curves for the ASAP model trained with different objective functions. "Multiplicative loss" denotes the multiplicative rate-distortion tradeoff function. For the additive costs, we ran experiments with a few selected values of \(C\) and varied the choice of \(\lambda\) only, because the computational cost would explode if we were to do the parameter search of \(\lambda\) for a large number of \(C\). We should add, as a cautionary remark, that the results shown here were obtained with a smaller number of iterations (0.1M) than the results in Figure 2 (0.3M). </center>
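Gathering Eqs. (2)-(4) for one coordinate, a hedged per-coordinate sketch looks as follows; in the actual model, \(\mu_{i}^{(k)}, \sigma_{i}^{(k)}, q_{i}^{(k)}\) are produced by the context networks, whereas here they are plain scalar arguments of our own naming:

```python
import numpy as np
from scipy.stats import norm

def quantize_and_bits(z, mu, sigma, q):
    """Grid of Eq. (2), truncated-normal bin masses of Eq. (3), nearest-center
    quantization, and the ideal code length of Eq. (4) for one coordinate."""
    u = np.arange(np.floor((-1.0 - mu) / q) - 1, np.ceil((1.0 - mu) / q) + 1)
    centers = mu + (u + 0.5) * q
    centers = centers[(centers - 0.5 * q < 1.0) & (centers + 0.5 * q >= -1.0)]
    lo, hi = centers[0] - 0.5 * q, centers[-1] + 0.5 * q
    cdf = norm(mu, sigma).cdf
    mass = (cdf(centers + 0.5 * q) - cdf(centers - 0.5 * q)) / (cdf(hi) - cdf(lo))
    j = int(np.argmin(np.abs(centers - z)))       # z_hat is the nearest center
    return centers[j], float(-np.log2(mass[j]))   # (z_hat, bits)
```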
#### 2.2.2 MULTIPLICATIVE RATE-DISTORTION TRADEOFF FUNCTION

In this section, we finally describe the objective function with which we train the ASAP model introduced above. In order to make comparisons, we introduce our objective function along with its variation and the classic counterpart:

\[\mathrm{bpp} + \lambda \times (1 - \mathrm{MS\text{-}SSIM})\quad (\lambda \geq 0), \quad (6)\]
\[\mathrm{bpp}\times (1 - \mathrm{MS\text{-}SSIM}), \quad (7)\]
\[\mathrm{bpp}\times (1 - \mathrm{MS\text{-}SSIM})\times \mathrm{MSE}\quad (\mathrm{proposed~method}), \quad (8)\]

where \(\mathrm{bpp}:= \frac{1}{H_{0}W_{0}}\sum_{k = 1}^{K}\ell (\hat{\boldsymbol{z}}^{(k)};\pi^{(k)})\). An apparent disadvantage of Eq. (6) is that one must search for a different appropriate \(\lambda\) for each different constraint on the compression rate (code-length). In order to do away with the computationally heavy parameter search, we took advantage of our observation that the relation \(\mathrm{bpp}\times (1 - \mathrm{MS\text{-}SSIM}) = \mathrm{const}\) approximately holds in experiments, and first adopted this function as a candidate for the objective function (Eq. (7)). As we show in Fig. 7, our ASAP trained with this objective function performs almost as well as the ASAP trained with Eq. (6) with the optimal \(\lambda\) for all choices of the compression rate. However, as we can also see in Fig. 7, the MS-SSIM value for both Eqs. (6) and (7) plateaus at some value less than 1. We resolved this issue by modifying Eq. (7) to Eq. (8). This solution works because the inclusion of the MSE cost creates still another pressure on the training process. With this modification, we were able to produce good results in the region of large bpp as well.
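In code, the three objectives differ only in how three scalar summaries are combined; a minimal sketch follows (the `bpp`, `ms_ssim`, `mse` arguments are assumed to be differentiable batch averages computed elsewhere):

```python
def additive_loss(bpp, ms_ssim, lam):
    """Classic objective of Eq. (6); lam must be re-tuned per target bit rate."""
    return bpp + lam * (1.0 - ms_ssim)

def multiplicative_loss(bpp, ms_ssim, mse):
    """Proposed objective of Eq. (8): hyper-parameter free; the MSE factor keeps
    a training signal alive once MS-SSIM saturates (cf. Eq. (7))."""
    return bpp * (1.0 - ms_ssim) * mse
```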
## 3 RELATED WORKS

Application of neural networks to the compression of image datasets is popular these days. Most studies choose a (recurrent) autoencoder based on CNNs as the main architecture. By its construction, the coordinates of the output from such an autoencoder will have strong spatial correlation. This tendency should be particularly true for a network trained for the compression of images. Moreover, the local correlative relations among the coordinates can greatly vary across locations and channels, not to mention across types of images. In order to optimize the rate-distortion tradeoff, one must therefore strive to choose an appropriate quantization width for different situations.

<--- Page Split --->

Indeed, this effort must be jointly planned and implemented with the optimization of the parameters of the autoencoder, because the correlations amongst the coordinates of the feature vector are obviously dependent on the behavior of the autoencoder. The work of Ballé et al. (2017) is one of the pioneer studies that trained the PSQE together with the autoencoder. Ballé et al. (2018) further improved their work by introducing an additional latent variable that governs the spatial relation among the quantization probability distributions of the latent spaces.

The optimization of the quantization process is a challenging problem in general, and numerous studies have proposed different approaches to this problem. Toderici et al. (2015; 2017); Johnston et al. (2017); Ballé et al. (2017; 2018) tackled this problem by interpreting the quantization process probabilistically. In these approaches, the use of probabilistic noise can destabilize the training process; this tends to be particularly true for variational inference. The works of Theis et al. (2017); Nakanishi et al. (2018); Mentzer et al. (2018) fall into this category. Mentzer et al. (2018) in particular deploys a set of trainable parameters \(C = \{c_{1},\dots ,c_{L}\}\) from which to sample \(\hat{z}\) with the encoding rule \(\hat{z}_{i} = \arg \min_{j}\| z_{i} - c_{j}\|\). The compression method of Mentzer et al. (2018), however, trains a \(C\) of fixed length (i.e., a fixed number of bins). In our study, on the other hand, the number of bins is a function of the quantization width. Our method therefore optimizes the number of bins together with the quantization width.

Numerous types of objective functions are also used for the training of networks for compression tasks. Most of the previous works, like Ballé et al. (2017); Rippel & Bourdev (2017); Mentzer et al. (2018), use an objective function of type (6). There is also a variety of distortion measures, such as MS-SSIM, \(L_{2}\) loss and \(L_{1}\) loss, as well as their weighted mixtures. Some studies also use GAN losses (Rippel & Bourdev, 2017; Agustsson et al., 2018). Johnston et al. (2017) uses a multiplicative loss of the form \(\mathrm{SSIM} \times L_{1}\). As mentioned in the previous sections, our multiplicative loss is based on the observation that the empirical tradeoff functions resemble a function of the form \(\mathrm{bpp} \times (1 - \mathrm{MS\text{-}SSIM}) = \mathrm{const}\). It shall be emphasized that we were able to produce superior results using a single, hyper-parameter free objective function.

## 4 EXPERIMENT

We evaluated the performance of our method on benchmark image datasets. We trained our proposed model over 0.3M iterations with batch size 25. For the optimization, we used Adam (Kingma & Ba, 2015) with AMSGrad (Reddi et al., 2018). As for the hyper-parameters of Adam, we set \(\alpha = 0.001\), \(\beta_{1} = 0.9\), \(\beta_{2} = 0.999\), and linearly decayed \(\alpha\) from the 0.225M-th iteration so that it reaches 0 at the end of the training. For the update of the parameters, we applied \(L_{2}\) regularization with a weight decay rate of 0.00001, and clipped the magnitude of each coordinate of the gradient at the value of 5. As a similarity measure between two images, we used MS-SSIM (Wang et al., 2004), which is considered by the community to correlate closely with the subjective evaluation of humans. Because MS-SSIM is originally intended for the evaluation of gray-scale images, we computed the MS-SSIM separately for each of the RGB channels and reported their average. For the training, we used the ILSVRC2012 dataset (ImageNet) (Russakovsky et al., 2015). In order to produce the rate-distortion (RD) curve, we trained a model for each choice of \(C\), the number of channels for the latent feature. It shall be remembered that \(C\) affects the code-length via entropy coding. For the performance evaluation, we used the Kodak dataset and the RAISE-1k dataset (Dang-Nguyen et al., 2015).

### 4.1 PERFORMANCE

The Kodak dataset is a widely used dataset for the performance evaluation of compression methods. Fig. 2 compares the RD curves for different compression methods. In terms of MS-SSIM, our method was slightly better than the method trained with the best fixed quantization size, which was found through an extensive grid search. Again in terms of MS-SSIM, our method outperformed the model of Ballé et al. (2018) trained to optimize MS-SSIM ("Ballé for MS-SSIM"), as well as the same model trained to optimize mean squared error (MSE) ("Ballé for MSE"). In terms of PSNR, our method performed better than "Ballé for MS-SSIM", not to mention JPEG.
However, it was not able to perform better than "Ballé for MSE" in terms of PSNR.

<--- Page Split --->

![](images/7_0.jpg)
<center>Figure 8: The heat map of \(\mu\), \(\sigma\), \(q\) produced by a trained ASAP model for a Kodak image. Each stacked pair of images corresponds to the heat maps of a selected pair of channels. Notice that the network is choosing larger values of \(q\) for a channel containing more information. </center>

![](images/7_1.jpg)
<center>Figure 9: Performances of various compression methods, evaluated in terms of PSNR. </center>

<--- Page Split --->

For the visuals of the reconstructed images, please see Figs. 1 and 16–18. Figs. 19–24 illustrate regions of medium-high bpp. Our method outperforms BPG in the medium-high bpp region.

Fig. 8 illustrates the heat map of \((\mu_{i}^{(k)},\sigma_{i}^{(k)},q_{i}^{(k)})\) produced for a Kodak image by a trained ASAP model. The signals from each \(\mathbf{z}^{(k)}\) were re-aligned in a way that respects the original geometrical relations prescribed by the pattern in Fig. 5. A pair of stacked images is a pair of images from two different channels. Fig. 8 features several interesting properties of our ASAP coding. Firstly, as we can readily see in the figure, the 'bottom' channel is capturing more contextual information than the 'top' channel. Note that our ASAP is assigning larger values of \(q\) in general to the bottom channel than to the top channel. We can also see that the algorithm is assigning larger values of \(q\) to the regions within the picture with relatively little information. Lastly, the value of \(\sigma\) assigned by the algorithm to each coordinate is reflective of the algorithm's confidence in the compression of the information at the coordinate. By construction, \(\mathbf{z}^{(1)}\) is the group for which the algorithm has the least confidence, because it is the base level of the recursive inference. The bright points (representing large values) scattered across the heat map in a grid pattern correspond to \(\mathbf{z}^{(1)}\).

The Kodak dataset consists of only 24 images. To rule out the possibility that our model is overfitted to the Kodak dataset by chance, we also evaluated the performance of our models using the RAISE-1k dataset, which consists of 1000 raw images (Fig. 15). We can confirm the superiority of our method over all the current contemporaries on the RAISE-1k dataset as well. It shall be noted that our method outperforms the other methods not only in the low bit rate region, but also in the high bit rate region.

### 4.2 THE EFFECTIVENESS OF THE MULTIPLICATIVE LOSS

We conducted still another comparative experiment in order to evaluate the effectiveness of using the multiplicative loss. Fig. 7 shows the RD curves of our model trained with the three different objective functions we introduced in Sec. 2.2.2 (Eqs. (6)–(8)). We used the same experimental setup as the one we used in Sec. 4.1, except that we stopped the iteration at the 0.1M-th step and initiated the linear decay of \(\alpha\) from the 0.075M-th step. To be fair, we tested the performance of Eq. (6) using multiple good values of \(\lambda\). We can confirm in Fig. 7 that the multiplicative loss outperforms all the other loss functions. We shall also note that the model trained to optimize our multiplicative loss yielded results with higher MS-SSIM than the model trained to directly optimize the MS-SSIM loss. We shall emphasize that our multiplicative loss does not require a hyperparameter search over \(\lambda\) for each bit rate.
### 4.3 ABLATION STUDY WITH SMALL ASAP MODEL

We conducted an ablation study to assess the effect of making the quantization width adaptive for each latent feature. We conducted the compression with a fixed quantization width for all latent features and compared the result with our ASAP coding. For the common quantization width, we conducted a grid search and selected the width that achieved the best MS-SSIM score. Figure 10a compares the result of our algorithm against those of different choices of fixed quantization width. We can see in the plot that our algorithm performs equally well with the result of the optimal fixed quantization width. Coincidentally, there was a choice of fixed quantization width that works equally well with the adaptive quantization width. We also conducted a set of experiments with a smaller version of our model, in which we reduced the number of channels for each convolution layer and replaced each residual block with a single convolution layer. Figure 10b shows the results of the experiments with the small model. As we can see in the figure, the adaptive quantization width performs better than all choices of fixed quantization width we tested. All in all, we were able to confirm through the ablation study that our adaptive quantization width performs equally well or better in comparison to the optimal fixed-width quantization that was found through a grid search.

## 5 CONCLUSION

In this study, we proposed ASAP, a novel lossy compressor based on a neural architecture that allows the training of the quantization width. We also proposed the multiplicative loss, a novel hyper-parameter free objective function that can be used to train the compressor with a better rate-distortion tradeoff.

<--- Page Split --->

![](images/9_0.jpg)
<center>Figure 10: (a) Ablation study for the effect of adaptive quantization width. The adaptive quantization width works equally well with the best fixed quantization width, which was found through an extensive grid search. (b) The same ablation study conducted with a smaller version of the ASAP model (normal convolution layers in place of resblocks, smaller number of channels). The adaptive quantization width works better than the best fixed quantization width. </center>

Our study has shown that there is still room for improvement in the neural network based approaches to compression, which once seemed to have reached their limit. The application of our ASAP is not limited to image compression. The framework of ASAP and the idea of the multiplicative loss shall be applicable to various industrial needs in which the rate-distortion tradeoff is an important factor.

## REFERENCES

Eirikur Agustsson, Michael Tschannen, Fabian Mentzer, Radu Timofte, and Luc Van Gool. Generative adversarial networks for extreme learned image compression. arXiv preprint arXiv:1804.02958, 2018.

Johannes Ballé, Valero Laparra, and Eero P. Simoncelli. End-to-end optimized image compression. In ICLR, 2017.

Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. In ICLR, 2018.

Duc-Tien Dang-Nguyen, Cecilia Pasquini, Valentina Conotter, and Giulia Boato. RAISE: A raw images dataset for digital image forensics. In ACM Multimedia Systems Conference, pp. 219-224, 2015.

Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, and George Toderici.
Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks. arXiv preprint arXiv:1703.10114, 2017. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, and Luc Van Gool. Conditional probability models for deep image compression. In CVPR, 2018. Ken Nakanishi, Shin-ichi Maeda, Takeru Miyato, and Daisuke Okanohara. Neural multi-scale image compression. arXiv preprint arXiv:1805.06386, 2018. Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In ICLR, 2018. Oren Rippel and Lubomir Bourdev. Real-time adaptive image compression. In ICML, pp. 2922–2930, 2017. <--- Page Split ---> Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015. Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszár. Lossy image compression with compressive autoencoders. In ICLR, 2017. George Toderici, Sean M O'Malley, Sung Jin Hwang, Damien Vincent, David Minnen, Shumeet Baluja, Michele Covell, and Rahul Sukthankar. Variable rate image compression with recurrent neural networks. arXiv preprint arXiv:1511.06085, 2015. George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, and Michele Covell. Full resolution image compression with recurrent neural networks. In CVPR, pp. 5435–5443, 2017. Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In Conference Record of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers, volume 2, pp. 1398–1402. IEEE, 2004. <--- Page Split ---> ![](images/11_0.jpg) <center>Figure 11: Detailed architecture of the proposed model. </center> ![](images/11_1.jpg) <center>Figure 12: CNN architecture for the map from \(\hat{\pmb{z}}^{(1:k - 1)}\) to \(\mu^{(k)}\), \(\sigma^{(k)}\), and \(\pmb{q}^{(k)}\). The map designated as "Function" stands for an element-wise non-linear function, which in our case is tanh for \(\mu\) and \(2 \times \mathrm{sigmoid} + \epsilon\) for \(\sigma\) and \(\pmb{q}\), where \(\epsilon\) is a small positive constant. </center> ![](images/11_2.jpg) <center>Figure 13: Detailed architecture of one ResBlock. </center> <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 14: Construction of \((\mu^{(1)},\sigma^{(1)},q^{(1)})\). One set of parameters was created for each channel, and the same parameter set was assigned to all coordinates belonging to the same channel. </center> ## B APPENDIX RESULTS ![](images/12_1.jpg) <center>Figure 15: Rate-distortion tradeoff curves evaluated for different methods on the resized RAISE-1k (Dang-Nguyen et al., 2015) dataset. </center> <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 16: Images reconstructed with various coding methods. </center> <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 17: Images reconstructed with various coding methods (cont'd). </center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 18: Images reconstructed with various coding methods (cont'd). </center> <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 19: Comparison of the visual results of the reconstruction for the middle-high bpp range.
</center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 20: Comparison of the visual results of the reconstruction for the middle-high bpp range (cont'd). </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 21: Comparison of the visual results of the reconstruction for the middle-high bpp range (cont'd). </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 22: Comparison of the visual results of the reconstruction for the middle-high bpp range (cont'd). </center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 23: Comparison of the visual results of the reconstruction for the middle-high bpp range (cont'd). </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 24: Comparison of the visual results of the reconstruction for the middle-high bpp range (cont'd). </center> <--- Page Split --->
## ABSTRACT We propose Adaptive Sample-space & Adaptive Probability (ASAP) coding, an efficient neural-network based method for lossy data compression. Our ASAP coding distinguishes itself from the conventional methods based on adaptive arithmetic coding in that it models the probability distribution for the quantization process in such a way that one can conduct back-propagation for the quantization width that determines the support of the distribution. Our ASAP also trains the model with a novel, hyper-parameter-free multiplicative loss for the rate-distortion tradeoff. With our ASAP encoder, we are able to compress the image files in the Kodak dataset to as little as one fifth the size of the JPEG-compressed images without compromising their visual quality, and achieve the state-of-the-art result in terms of the MS-SSIM based rate-distortion tradeoff. ## 1 INTRODUCTION In terse terms, lossy data compression is a task in which one seeks to encode a data file into as short a code as possible without losing the essential information. Extensive research has been conducted in the field of lossy data compression; JPEG, WebP and BPG are well-known lossy image compression codecs. The recent advances in machine learning methods are accelerating the pace of the research in this field. For example, studies like Toderici et al. (2015; 2017); Ballé et al. (2017); Theis et al. (2017); Johnston et al. (2017); Rippel & Bourdev (2017); Mentzer et al. (2018); Agustsson et al. (2018); Nakanishi et al. (2018); Ballé et al. (2018) have incorporated the methods of deep learning into lossy compression, and some of them succeeded in producing results that far surpass the classical, neural-network-free methods. Almost all lossy codecs to date, including the state-of-the-art codecs, are built on autoencoders. After transforming the input data with the autoencoder, these codecs discretize the latent space using a quantizer function and further convert the transformed data so that it can be stored as a binary code. In order to quantize the latent feature into as short a code as possible, one usually decomposes the feature into pieces and uses the information of the pieces that have been quantized so far (i.e., the context) in order to adaptively choose a discrete probability space for the quantization and the entropy coding of the next piece of information. By construction, the choice of the probability space for the quantization and the entropy coding of the feature vector is a key factor that determines the performance of the compression algorithm. In this study, we present a novel compression architecture that allows the user to optimize the discrete support of the probability space to be used for quantization and entropy coding (PSQE). Optimization of the support of the PSQE in a NN-based architecture is a daunting task. To the authors' best knowledge, there has been no study to date that enabled this optimization for NN-based architectures. Our Adaptive Sample-space & Adaptive Probability (ASAP) coding partially achieves this task by assigning to each latent feature a discretized normal distribution with a different mean, variance and quantization width. Our method automatically chooses an appropriate quantization width for each latent feature. For compression with small models in particular, we were able to confirm that this adaptive quantization width benefits the performance in terms of the MS-SSIM score of the reconstructed images.
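As a quick numerical illustration of the role the quantization width plays (a toy example of ours, not an experiment from this paper), the sketch below quantizes a synthetic latent feature with several fixed widths \(q\) and reports the entropy of the resulting discrete code against the quantization error:

```python
import numpy as np

# Toy illustration: coarser quantization (larger q) shortens the code but
# increases distortion; ASAP makes this width a trainable per-feature quantity.
rng = np.random.default_rng(0)
z = np.clip(rng.normal(0.0, 0.3, size=100_000), -1.0, 1.0)  # synthetic latent feature

for q in (0.05, 0.2, 0.8):
    centers = np.arange(-1.0 + 0.5 * q, 1.0, q)                 # uniform bin centers on [-1, 1]
    idx = np.abs(z[:, None] - centers[None, :]).argmin(axis=1)  # nearest-center rule
    z_hat = centers[idx]
    p = np.bincount(idx, minlength=len(centers)) / len(z)
    bits = -(p[p > 0] * np.log2(p[p > 0])).sum()                # entropy in bits per sample
    print(f"q={q:.2f}  bits/sample={bits:.2f}  MSE={np.mean((z - z_hat) ** 2):.5f}")
```

A smaller \(q\) yields a longer code with smaller distortion and vice versa, which is precisely the tradeoff the adaptive width has to navigate per feature.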
We also present a novel objective function for the training of the quantizer. In general, the problem of minimizing the reconstruction error (distortion) under a constraint on the code-length (or bpp) can be <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Reconstructed images with different compression methods. </center> ![](images/1_1.jpg) <center>Figure 2: Rate-distortion tradeoff curves evaluated for different methods on the Kodak dataset. The horizontal axis represents bits-per-pixel (bpp) and the vertical axis represents the average multiscale structural similarity (MS-SSIM) computed over the RGB channels. The right two panels are magnifications of the leftmost panel. Regarding the RD curve of Rippel & Bourdev (2017), we carefully traced the curve from the figure in their paper, because the original paper did not provide the exact values at each point. As for the RD curve of Johnston et al. (2017), we used the values provided by the authors via personal communication. </center> formulated as an objective of the form \(\mathrm{bpp} + \lambda \times \mathrm{distortion}\) with a Lagrange multiplier \(\lambda\). Needless to say, however, the appropriate value of \(\lambda\) depends on the user-chosen weight of the importance of the code-length, which in turn heavily depends on the situation. Training a separate model and conducting a separate hyper-parameter search for every different situation is cumbersome. In order to circumvent this inconvenience, we propose a novel, hyper-parameter-free multiplicative loss function that measures the rate-distortion tradeoff. Equipped with the greater freedom in the choice of PSQE, our ASAP model trained with our multiplicative loss is not only able to attain compression quality that is on par with the state-of-the-art methods trained with an extensive hyper-parameter search over \(\lambda\), but is also able to outperform all the predecessors in terms of the tradeoff ability for almost all regions of the bpp constraint. Conversely, under the same distortion constraint, our codec can compress an image file into a code that is 2–5 times smaller than that of JPEG. This is 2–2.5 times smaller than that of BPG, which is one of the latest compression algorithms used in industry (see Figs. 2 and 15). <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 3: Overall architecture of the proposed model. </center> ![](images/2_1.jpg) <center>Figure 4: Outline of ASAP coding for image compression. </center> ## 2 METHODS We intend our algorithm to be applicable to various tasks including audio and video compression. For ease of explanation, however, we will assume for now that our problem is image compression, and describe the overall flow of our algorithm in that context. In a nutshell, our method follows the usual routine of transform coding. Throughout, let \(\boldsymbol {x}\in \mathbb{R}^{C_{0}\times H_{0}\times W_{0}}\) be the original image, where \(C_{0},H_{0},W_{0}\) respectively represent the number of channels, the height and the width of the image. The algorithm begins by applying the transformation function \(F\) to \(\boldsymbol{x}\) in order to obtain a latent feature representation, \(\boldsymbol {z}\in \mathbb{R}^{C\times H\times W}\). The algorithm then quantizes the signal \(\boldsymbol{z}\) into a finite space, and converts the quantized signal \(\hat{\boldsymbol{z}}\) into a binary signal \(s\) by entropy coding.
For the reconstruction, we apply the synthesizer map \(G\) to the code \(\hat{\boldsymbol{z}}\). That is, \(\hat{\boldsymbol{x}} = G\left(\hat{\boldsymbol{z}}\right)\). Fig. 3 illustrates the overall flow. In general, we would like to obtain a shorter \(s\) with a smaller distortion \(d\left(\boldsymbol {x},\hat{\boldsymbol{x}}\right)\), where \(d\) is a measure of the corruption in the reconstruction (e.g., 1 - MS-SSIM (Wang et al., 2004)). Our goal is therefore to train the networks so that they optimize the rate-distortion tradeoff. By construction, the loss of information takes place during the transformation from \(\boldsymbol{z}\) to \(\hat{\boldsymbol{z}}\). Our ASAP coding is our contribution to this part of the compression pipeline. In Sec. 2.1, we describe the distinguishing features of ASAP coding relative to the conventional coding methods. We then describe in Sec. 2.2 the technicalities for the training of the ASAP models in the context of image compression. ### 2.1 THE FEATURES OF ASAP CODING In this section, we review the basics of entropy coding. Let us assume that \(\boldsymbol {z}\in [- 1,1]^{C\times H\times W}\). This \(\boldsymbol {z}\) is to be quantized into \(\hat{\boldsymbol{z}}\in \mathbb{R}^{C\times H\times W}\) by the discretization of the continuous space \([- 1,1]^{C\times H\times W}\), so that it can be converted into a binary code via an invertible compression algorithm. For the entropy coding of \(\hat{\boldsymbol{z}}\), we have to prepare a probability distribution for \(\hat{\boldsymbol{z}}\) that represents its probability of occurrence. Oftentimes, this occurrence probability is modeled by some form of autoregressive model. In our task of compressing images, we adopted the model used in Nakanishi et al. (2018). We partitioned \(\hat{\boldsymbol{z}}\) into \(K\) groups \(\{\hat{\boldsymbol{z}}^{(k)}\in \mathbb{R}^{C^{(k)}\times H^{(k)}\times W^{(k)}};k = 1,\dots,K\}\) based on the geometric pattern we illustrate in Fig. 5, where \(C^{(k)},H^{(k)}\), and \(W^{(k)}\) are the dimensions of the \(k\)-th group. We then estimated the true occurrence probability as \(p^{*}(\hat{\boldsymbol{z}}) = p^{*}(\hat{\boldsymbol{z}}^{(1)})\prod_{k = 2}^{K}p^{*}(\hat{\boldsymbol{z}}^{(k)}|\hat{\boldsymbol{z}}^{(1:k - 1)})\) <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 5: The grouping scheme for the coordinates of \(\hat{z}\). This is an imitation of the procedure used in Nakanishi et al. (2018). </center> by recursively approximating \(p^{*}(\hat{z}^{(k)}|\hat{z}^{(1:k - 1)})\) with some probability distribution \(\pi (\hat{z}^{(k)}|\hat{z}^{(1:k - 1)})\) with a discrete support. For ease of notation, we will occasionally use \(\pi^{(k)}(\hat{z}^{(k)})\) to denote the same distribution. In this model, we assume that the coordinates of \(\hat{z}^{(k)}\) are approximately independent conditioned on \(\hat{z}^{(1:k - 1)}\). This way, we can resort to the power of parallel computation and greatly speed up the algorithm. For more details of this procedure, please see Nakanishi et al. (2018). For the task of compressing other types of data, one can choose a different type of partitioning that is natural to the situation. Our method is novel in that we allow the optimization of the support of \(\pi (\hat{z}^{(k)}|\hat{z}^{(1:k - 1)})\).
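Schematically, this grouped factorization turns into a code length as follows. The sketch below is our illustration; `predict_pmf` is a hypothetical stand-in for the conditioning CNN of Fig. 12 that maps the already-decoded groups to a conditional pmf for the next group.

```python
import numpy as np

# Schematic sketch of coding under the factorization
# p(z) = p(z^(1)) * prod_k p(z^(k) | z^(1:k-1)).
def ideal_code_length(z_hat_groups, predict_pmf):
    """z_hat_groups: list of K integer arrays (the quantized symbols per group).
    predict_pmf(context): per-coordinate pmf over symbols for the next group;
    here a plain callable, standing in for the conditioning network."""
    total_bits, context = 0.0, []
    for z_k in z_hat_groups:
        pmf = predict_pmf(context)               # shape: (len(z_k), n_symbols)
        probs = pmf[np.arange(len(z_k)), z_k]    # probability of each realized symbol
        total_bits += -np.log2(probs).sum()      # Shannon code length for this group
        context.append(z_k)                      # decoded group joins the context
    return total_bits

# Tiny demo with a context-free uniform model over 4 symbols:
groups = [np.array([0, 3]), np.array([1, 1, 2])]
uniform_pmf = lambda ctx: np.full((len(groups[len(ctx)]), 4), 0.25)
print(ideal_code_length(groups, uniform_pmf))    # 5 symbols * 2 bits = 10.0
```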
We begin by modeling \(p^{*}(\hat{z}_{i}^{(k)}|\hat{z}^{(1:k - 1)})\) with a truncated Gaussian distribution \(p(\hat{z}_{i}^{(k)}|\hat{z}^{(1:k - 1)})\) on \([- 1,1]\) with parameters \(\mu_{i}^{(k)},\sigma_{i}^{(k)}\) for each \(i\)-th coordinate of \(\hat{z}^{(k)}\), using the information of \(\hat{z}^{(1:k - 1)}\). We then quantize \([- 1,1]\) into the intervals \(\{I_{i}^{(k)}[j]\}_{j = 1}^{M_{k i}}\) with the centers \(\{A_{i}^{(k)}[j]\}_{j = 1}^{M_{k i}}\) using the quantization width \(q_{i}^{(k)}\), where \(M_{k i}\) is the number of the intervals. The larger the value of \(M_{k i}\), the finer the quantization. We then construct \(\pi (\hat{z}_{i}^{(k)}|\hat{z}^{(1:k - 1)})\) by defining \[\pi (\hat{z}_{i}^{(k)}|\hat{z}^{(1:k - 1)}): = \sum_{j = 1}^{M_{k i}}p(z_{i}^{(k)}\in I_{i}^{(k)}[j]|\hat{z}^{(1:k - 1)})\,\mathbb{1}_{\hat{z}_{i}^{(k)} = A_{i}^{(k)}[j]} \quad (1)\] With our method, we can optimize the objective function with respect to \(\mu_{i}^{(k)},\sigma_{i}^{(k)}\), and \(q_{i}^{(k)}\). The ability to optimize \(q_{i}^{(k)}\) is a distinctive feature of our ASAP coding. With this distinctive feature, our ASAP can compress \(\hat{z}\) into a code of much smaller size under the same distortion constraint. By the rate-distortion argument we mentioned in the introduction, this also means that our ASAP can store the data with much smaller distortion under the same constraint on the code-length. ### 2.2 ASAP CODING FOR IMAGE COMPRESSION Next, we will explain the technicalities of the optimization of the parameters \(\mu_{i}^{(k)},\sigma_{i}^{(k)}\), and \(q_{i}^{(k)}\) in the context of image compression. We will also explain the multiplicative loss, a novel objective function for the training of the model. #### 2.2.1 ADAPTIVE CONSTRUCTION OF DISCRETIZED SAMPLE SPACE AND PROBABILITY DISTRIBUTION Fig. 6 illustrates the construction of \(\hat{z}^{(k)}\) and \((A^{(k)},\pi^{(k)})\). From now on, let us write \(\pi_{i}^{(k)}(a):= \pi (\hat{z}_{i}^{(k)} = a|\hat{z}^{(1:k - 1)})\) for \(a\in A_{i}^{(k)}\). In order to recursively construct these probability measures, we trained a set of neural networks that map \(\hat{z}^{(1:k - 1)}\) to the triplets \(\theta_{i}^{(k)}:= (\mu_{i}^{(k)},\sigma_{i}^{(k)},q_{i}^{(k)})\). For the base level of \(k = 1\), we prepared a trainable triplet \(\theta_{m}^{\mathrm{init}}:= (\mu_{m}^{\mathrm{init}},\sigma_{m}^{\mathrm{init}},q_{m}^{\mathrm{init}})\) for each \(m\)-th channel <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 6: Detailed architecture of the quantizer. The solid lines represent operations subject to back-propagation, and the dotted lines represent operations that are not subject to back-propagation. </center> and set \(\theta_{i}^{(1)} = \theta_{m}^{\mathrm{init}}\) for all \(\hat{z}_{i}^{(1)}\) belonging to channel \(m\) (please see Fig. 14 in the appendix for details). We define \(A_{i}^{(k)}\) as \[A_{i}^{(k)} := \left\{a \middle | a = \mu_{i}^{(k)} + (u + 0.5)q_{i}^{(k)}, u \in \mathbb{Z}, a - 0.5q_{i}^{(k)} < 1, a + 0.5q_{i}^{(k)} \geq -1\right\} \quad (2)\] so that \(I_{i}^{(k)}[j] = \left[A_{i}^{(k)}[j] - 0.5q_{i}^{(k)}, A_{i}^{(k)}[j] + 0.5q_{i}^{(k)}\right]\). This indeed amounts to discretizing the interval \([- 1, 1]\) using subintervals of width \(q_{i}^{(k)}\); \(M_{k i}\) is determined naturally from this constraint.
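Eq. (2) admits a direct transcription. The following is a minimal sketch of ours (not the authors' code), assuming \(\mu\) lies in \([-1, 1]\); the nearest-center rule anticipates the mapping described in the next paragraph.

```python
import numpy as np

# A minimal sketch of Eq. (2): the centers A_i^(k) form a grid of spacing q,
# offset by mu, kept only if the induced interval [a - q/2, a + q/2) still
# touches [-1, 1]. mu is assumed to lie in [-1, 1].
def centers(mu, q):
    u = np.arange(-np.ceil(2.0 / q) - 2, np.ceil(2.0 / q) + 2)  # generous range of u
    a = mu + (u + 0.5) * q
    return a[(a - 0.5 * q < 1.0) & (a + 0.5 * q >= -1.0)]       # the constraint of Eq. (2)

def quantize(z, mu, q):
    a = centers(mu, q)
    return a[np.abs(a - z).argmin()]   # nearest-center rule; |error| <= q/2

a = centers(mu=0.1, q=0.5)
print(len(a), a)                       # M_ki and the centers themselves
print(quantize(z=-0.3, mu=0.1, q=0.5))
```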
To each subinterval, we assign the probability mass computed with the formula \[\pi_{i}^{(k)}(a) := \int_{a - 0.5q_{i}^{(k)}}^{a + 0.5q_{i}^{(k)}} p_{i}^{(k)}(x) \mathrm{d}x, \quad (3)\] where \(p_{i}^{(k)}\) is a truncated normal distribution with the parameters \((\mu_{i}^{(k)}, \sigma_{i}^{(k)})\) that is defined over the domain \(\left[\underline{a}_{i}^{(k)} - 0.5q_{i}^{(k)}, \bar{a}_{i}^{(k)} + 0.5q_{i}^{(k)}\right)\), with \(\underline{a}_{i}^{(k)}\) and \(\bar{a}_{i}^{(k)}\) denoting the smallest and the largest elements of \(A_{i}^{(k)}\). We are aware that this is not technically a faithful discretization of the interval \([- 1, 1]\); this compromise is for ease of implementation. If \(I_{i}^{(k)}\) perfectly partitions \([- 1, 1]\), the support of \(p_{i}^{(k)}\) will be exactly \([- 1, 1]\). By construction, our \(\pi_{i}^{(k)}\) satisfies the required properties of a probability mass function. By appealing to a basic result of information theory, any signal sampled from the distribution \(\pi_{i}^{(k)}\) can theoretically be encoded with a code length of \[\ell (\hat{\boldsymbol{z}}^{(k)}; \pi^{(k)}) := - \sum_{i = 1}^{C^{(k)} H^{(k)} W^{(k)}} \log_{2} \pi^{(k)} (\hat{z}_{i}^{(k)} | \hat{\boldsymbol{z}}^{(1:k - 1)}). \quad (4)\] In order to map \(z_{i}^{(k)}\) to \(A_{i}^{(k)}\), we simply pick the member of \(A_{i}^{(k)}\) that is closest to \(z_{i}^{(k)}\); this member is the value we assign to \(\hat{z}_{i}^{(k)}\). By construction, the magnitude of the quantization error \(\hat{z}_{i}^{(k)} - z_{i}^{(k)}\) is bounded from above by \(q_{i}^{(k)} / 2\). ## Training of the neural models In order to make the training of the neural networks possible, we take advantage of the fact that we can always write \[\hat{z}_{i}^{(k)} = z_{i}^{(k)} - \delta_{i}^{(k)} q_{i}^{(k)} \quad (5)\] for some \(\delta_{i}^{(k)} \in [- 0.5, 0.5)\). We can therefore interpret \(\delta_{i}^{(k)}\) as noise and regard the equation above as a probabilistic model for \(\hat{z}_{i}^{(k)}\) with a trainable noise parameter \(q_{i}^{(k)}\). Indeed, the demand <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 7: Rate-distortion tradeoff curves for the ASAP model trained with different objective functions. The label "Multiplicative loss" refers to the multiplicative rate-distortion tradeoff function. For the additive costs, we ran experiments with a few selected values of \(C\) and varied the choice of \(\lambda\) only, because the computational cost would explode if we were to do the parameter search of \(\lambda\) for a large number of \(C\). We should add, as a cautionary remark, that the results shown here were obtained with fewer iterations (0.1M) than the results in Figure 2 (0.3M). </center> for smaller distortion (loss of information) would prefer a smaller \(q_{i}^{(k)}\) and hence a smaller variance for the noise. Conversely, the demand for a shorter code-length would prefer a coarser discretization of the sample space and hence a larger \(q_{i}^{(k)}\). This is indeed the very manifestation of the rate-distortion tradeoff in the training of \(q_{i}^{(k)}\). #### 2.2.2 MULTIPLICATIVE RATE-DISTORTION TRADEOFF FUNCTION In this section, we finally describe the objective function by which we train the ASAP model introduced above.
In order to make comparisons, we introduce our objective function along with its variation and the classic counterpart: \[\begin{array}{r l} & {\mathrm{bpp} + \lambda \times (1 - \mathrm{MS\text{-}SSIM})\quad (\lambda \geq 0)\qquad (6)}\\ & {\mathrm{bpp}\times (1 - \mathrm{MS\text{-}SSIM})\qquad (7)}\\ & {\mathrm{bpp}\times (1 - \mathrm{MS\text{-}SSIM})\times \mathrm{MSE}\quad (\mathrm{proposed~method}),\qquad (8)} \end{array}\] where \(\begin{array}{r}{\mathrm{bpp}:= \frac{1}{H_{0}W_{0}}\sum_{k = 1}^{K}\ell (\hat{\boldsymbol{z}}^{(k)};\pi^{(k)})} \end{array}\). An apparent disadvantage of Eq. (6) is that one must search for a different appropriate \(\lambda\) for each constraint on the compression rate (code-length). In order to do away with this computationally heavy parameter search, we took advantage of our observation that the relation \(\mathrm{bpp}\times (1 - \mathrm{MS\text{-}SSIM}) = \mathrm{const}\) approximately holds in experiments, and first adopted this function as a candidate for the objective function (Eq. (7)). As we show in Fig. 7, our ASAP trained with this objective function performs almost as well as the ASAP trained with Eq. (6) with the optimal \(\lambda\) for every choice of the compression rate. However, as we can also see in Fig. 7, the MS-SSIM value for both Eqs. (6) and (7) plateaus at some value less than 1. We resolved this issue by modifying Eq. (7) into Eq. (8). This solution works because the inclusion of the MSE cost creates yet another pressure on the training process. By this modification, we were able to produce good results in the region of large bpp as well. ## 3 RELATED WORKS The application of neural networks to image compression is popular these days. Most studies choose a (recurrent) autoencoder based on CNNs as the main architecture. By construction, the coordinates of the output from such an autoencoder will have strong spatial correlations. This tendency should be particularly true for networks trained for the compression of images. Moreover, the local correlative relations among the coordinates can vary greatly across locations and channels, not to mention across the types of images. In order to optimize the rate-distortion tradeoff, one must therefore strive to choose an appropriate quantization width for each different situation. <--- Page Split ---> Indeed, this effort must be jointly planned and implemented with the optimization of the parameters of the autoencoder, because the correlations amongst the coordinates of the feature vector obviously depend on the behavior of the autoencoder. The work of Ballé et al. (2017) is one of the pioneering studies that trained the PSQE together with the autoencoder. Ballé et al. (2018) further improved their work by introducing an additional latent variable that governs the spatial relation among the quantization probability distributions of the latent spaces. The optimization of the quantization process is a challenging problem in general, and numerous studies have proposed different approaches to this problem. Toderici et al. (2015; 2017); Johnston et al. (2017); Ballé et al. (2017; 2018) tackled this problem by interpreting the quantization process probabilistically. In these approaches, the use of probabilistic noise can destabilize the training process; this tends to be particularly true for variational inferences. The works of Theis et al. (2017); Nakanishi et al. (2018); Mentzer et al. (2018) fall into this category. Mentzer et al.
(2018) in particular deploys a set of trainable parameters \(C = \{c_{1},\dots ,c_{L}\}\) from which to sample \(\hat{z}\) with the encoding rule \(\hat{z}_{i} = \arg \min_{c_{j}\in C}\| z_{i} - c_{j}\|\). The compression method of Mentzer et al. (2018), however, trains a \(C\) of fixed size (i.e., a fixed number of bins). In our study, on the other hand, the number of bins is a function of the quantization width. Our method therefore optimizes the number of bins together with the quantization width. Numerous types of objective functions have also been used for the training of networks for the compression task. Most of the previous works, like Ballé et al. (2017); Rippel & Bourdev (2017); Mentzer et al. (2018), use an objective function of the type of Eq. (6). There is also a variety of distortion measures, such as MS-SSIM, \(L_{2}\) loss and \(L_{1}\) loss, as well as their weighted mixtures. Some studies also use GAN losses (Rippel & Bourdev, 2017; Agustsson et al., 2018). Johnston et al. (2017) use a multiplicative loss of the form \(\mathrm{SSIM} \times L_{1}\). As mentioned in the previous sections, our multiplicative loss is based on the observation that the empirical tradeoff curves approximately satisfy \(\mathrm{bpp} \times (1 - \mathrm{MS\text{-}SSIM}) = \mathrm{const}\). It shall be emphasized that we were able to produce superior results using a single, hyper-parameter-free objective function. ## 4 EXPERIMENT We evaluated the performance of our method on benchmark image datasets. We trained our proposed model over 0.3M iterations with a batch size of 25. For the optimization, we used Adam (Kingma & Ba, 2015) with AMSGrad (Reddi et al., 2018). As for the hyper-parameters of Adam, we set \(\alpha = 0.001\), \(\beta_{1} = 0.9\), \(\beta_{2} = 0.999\), and linearly decayed \(\alpha\) from the 0.225M-th iteration so that it reaches 0 at the end of the training. For the update of the parameters, we applied \(L_{2}\) regularization with a weight decay rate of 0.00001, and clipped the magnitude of each coordinate of the gradient at 5. As the similarity measure between two images, we used MS-SSIM (Wang et al., 2004), which is considered by the community to correlate closely with the subjective evaluation of humans. Because MS-SSIM is originally intended for the evaluation of grayscale images, we computed the MS-SSIM separately for each of the RGB channels and reported their average. For the training, we used the ILSVRC2012 dataset (ImageNet) (Russakovsky et al., 2015). In order to produce the rate-distortion (RD) curve, we trained a model for each choice of \(C\), the number of channels of the latent feature. Recall that \(C\) affects the code-length via the entropy coding. For the performance evaluation, we used the Kodak dataset and the RAISE-1k dataset (Dang-Nguyen et al., 2015). ### 4.1 PERFORMANCE The Kodak dataset is widely used for the performance evaluation of compression methods. Fig. 2 compares the RD curves of different compression methods. In terms of MS-SSIM, our method was slightly better than the method trained with the best fixed quantization width, which was found through an extensive grid search. Again in terms of MS-SSIM, our method outperformed the model of Ballé et al. (2018) trained to optimize MS-SSIM ("Balle for MS-SSIM"), as well as the same model trained to optimize the mean squared error (MSE) ("Balle for MSE"). In terms of PSNR, our method performed better than "Balle for MS-SSIM", not to mention JPEG.
However, it was not able to perform better than "Balle for MSE" in terms of PSNR. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 8: The heat maps of \(\mu\), \(\sigma\), \(q\) produced by a trained ASAP model for a Kodak image. Each stacked pair of images corresponds to the heat maps of a selected pair of channels. Notice that the network chooses a larger value of \(q\) for a channel containing more information. </center> ![](images/7_1.jpg) <center>Figure 9: Performance of various compression methods, evaluated in terms of PSNR. </center> <--- Page Split ---> For the visuals of the reconstructed images, please see Figs. 1 and 16–18. Figs. 19–24 illustrate regions of medium-high bpp. Our method outperforms BPG on the medium-high bpp region. Fig. 8 illustrates the heat maps of \((\mu_{i}^{(k)},\sigma_{i}^{(k)},q_{i}^{(k)})\) produced for a Kodak image by a trained ASAP model. The signals from each \(\mathbf{z}^{(k)}\) were re-aligned in a way that respects the original geometrical relations prescribed by the pattern in Fig. 5. A pair of stacked images is a pair of images from two different channels. Fig. 8 reveals several interesting properties of our ASAP coding. Firstly, as we can readily see in the figure, the 'bottom' channel captures more contextual information than the 'top' channel, and our ASAP assigns larger values of \(q\) in general to the bottom channel than to the top channel. We can also see that the algorithm assigns larger values of \(q\) to regions within the picture that carry relatively little information. Lastly, the value of \(\sigma\) assigned by the algorithm to each coordinate reflects the algorithm's confidence in the compression of the information at that coordinate. By construction, \(\mathbf{z}^{(1)}\) is the group for which the algorithm has the least confidence, because it is the base level of the recursive inference. The bright points (representing large values) scattered across the heat map in a grid pattern correspond to \(\mathbf{z}^{(1)}\). The Kodak dataset consists of only 24 images. To rule out the possibility that our model is overfitted to the Kodak dataset by chance, we also evaluated the performance of our models on the RAISE-1k dataset, which consists of 1000 raw images (Fig. 15). We can confirm the superiority of our method over all contemporary methods on the RAISE-1k dataset as well. It shall be noted that our method outperforms the other methods not only in the low bit rate region but also in the high bit rate region. ### 4.2 THE EFFECTIVENESS OF THE MULTIPLICATIVE LOSS We conducted another comparative experiment in order to evaluate the effectiveness of the multiplicative loss. Fig. 7 shows the RD curves of our model trained with the three different objective functions introduced in Sec. 2.2.2 (Eqs. (6)–(8)). We used the same experimental setup as in Sec. 4.1, except that we stopped the iteration at the 0.1M-th step and initiated the linear decay of \(\alpha\) from the 0.075M-th step. For a fair comparison, we tested the performance of Eq. (6) using multiple good values of \(\lambda\). We can confirm in Fig. 7 that the multiplicative loss outperforms all the other loss functions. We shall also note that the model trained to optimize our multiplicative loss yielded results with higher MS-SSIM than the model trained to directly optimize the MS-SSIM loss. We shall emphasize that our multiplicative loss does not require a hyperparameter search over \(\lambda\) for each bit rate.
### 4.3 ABLATION STUDY WITH SMALL ASAP MODEL We conducted an ablation study to assess the effect of making the quantization width adaptive for each latent feature. We conducted the compression with a fixed quantization width for all latent features and compared the result with our ASAP coding. For the size of the common quantization width, we conducted a grid search and selected the width that achieved the best MS-SSIM score. Figure 10a compares the result of our algorithm against those of different choices of fixed quantization width. We can see in the plot that our algorithm performs on par with the optimal fixed quantization width; coincidentally, there was a choice of fixed quantization width that works equally well with the adaptive quantization width. We also conducted a set of experiments with a smaller version of our model, in which we reduced the number of channels of each convolution layer and replaced each residual block with a single convolution layer. Figure 10b shows the results of the experiments with the small model. As we can see in the figure, the adaptive quantization width performs better than all choices of fixed quantization width we tested. All in all, we were able to confirm through the ablation study that our adaptive quantization width performs as well as or better than the optimal fixed quantization width found through a grid search. ## 5 CONCLUSION In this study, we proposed ASAP, a novel lossy compressor based on a neural architecture that allows the training of the quantization width. We also proposed the multiplicative loss, a novel hyper <--- Page Split ---> ![](images/9_0.jpg) <center>Figure 10: (a) Ablation study of the effect of adaptive quantization width. The adaptive quantization width works on par with the best fixed quantization width, which was found through an extensive grid search. (b) The same ablation study conducted with a smaller version of the ASAP model (normal convolution layers in place of resblocks, smaller number of channels). The adaptive quantization width works better than the best fixed quantization width. </center> parameter-free objective function that can be used to train the compressor with a better rate-distortion tradeoff. Our study has shown that there is still room for improvement in neural network based approaches to compression, which once seemed to have reached their limit. The application of our ASAP is not limited to image compression. The framework of ASAP and the idea of the multiplicative loss shall be applicable to various industrial needs in which the rate-distortion tradeoff is an important factor. ## B APPENDIX RESULTS ![](images/12_1.jpg) <center>Figure 15: Rate-distortion tradeoff curves evaluated for different methods on the resized RAISE-1k (Dang-Nguyen et al., 2015) dataset. </center> <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 16: Images reconstructed with various coding methods. </center> <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 17: Images reconstructed with various coding methods (cont'd). </center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 18: Images reconstructed with various coding methods (cont'd). </center> <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 19: Comparison of the visual results of the reconstruction for the middle-high bpp range.
</center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 20: Comparison of the visual results of the reconstruction for the middle-high bpp range (cont'd). </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 21: Comparison of the visual results of the reconstruction for the middle-high bpp range (cont'd). </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 22: Comparison of the visual results of the reconstruction for the middle-high bpp range (cont'd). </center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 23: Comparison of the visual results of the reconstruction for the middle-high bpp range (cont'd). </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 24: Comparison of the visual results of the reconstruction for the middle-high bpp range (cont'd). </center> <--- Page Split --->
reject
Reject
5.666667
ICLR_2019_paper_0785
iclr
2,019
# HIGH RESOLUTION AND FAST FACE COMPLETION VIA PROGRESSIVELY ATTENTIVE GANS Anonymous authors Paper under double-blind review ## ABSTRACT Face completion is a challenging task, with the difficulty level increasing significantly with respect to the resolution, the complexity of the "holes" and the controllable attributes of the filled-in fragments. Our system addresses these challenges by learning a fully end-to-end framework that trains generative adversarial networks (GANs) progressively from low resolution to high resolution with conditional vectors encoding controllable attributes. We design a novel coarse-to-fine attentive module network architecture. Our model is encouraged to attend to finer details while the network is growing to a higher resolution, and is thus capable of showing progressive attention to different frequency components in a coarse-to-fine way. We term this module the Frequency-oriented Attentive Module (FAM). Our system can complete faces with large structural and appearance variations using a single feed-forward pass of computation, with a mean inference time of 0.54 seconds for images at \(1024 \times 1024\) resolution. A pilot human study shows our approach outperforms state-of-the-art face completion methods. The code will be released upon publication. ## 1 INTRODUCTION Image completion is a technique to replace target regions, either missing or unwanted, of images with synthetic content so that the completed images look natural, realistic and appealing. Two types of methods have been used: data similarity driven methods and data distribution based generative methods. In the first paradigm, texture synthesis or patch matching is usually used (Efros & Leung, 1999; Kwatra et al., 2003; Criminisi et al., 2003; Wilczkowiak et al., 2005; Komodakis, 2006; Barnes et al., 2009; Darabi et al., 2012; Huang et al., 2014; Wexler et al., 2007). The second paradigm learns the underlying distribution governing the data generation with respect to the context and is able to synthesize novel content. Much progress (Iizuka et al., 2017; Yeh et al., 2017; Li et al., 2017; Yang et al., 2016; Denton et al., 2016; Pathak et al., 2016; Yu et al., 2018; Liu et al., 2018) has been made since the generative adversarial network (GAN) was proposed (Goodfellow et al., 2014). We adopt the data distribution based generative method and focus on human face completion in this paper. Three important issues are addressed. First, previous methods are only able to complete faces at low resolutions (e.g. \(176 \times 216\) (Iizuka et al., 2017) and \(256 \times 256\) (Yu et al., 2018)). Second, most approaches cannot control the attributes of the synthesized content. Previous methods focus on generating random realistic content; however, users may want to complete the missing parts with certain properties (e.g. expressions). Third, most existing approaches (Iizuka et al., 2017; Yeh et al., 2017; Li et al., 2017) require post-processing (e.g. Poisson Blending (Pérez et al., 2003)) or a complex inference process (e.g. thousands of optimization iterations (Yeh et al., 2017) or repeatedly feeding an incomplete image to CNNs at multiple scales (Yang et al., 2016)). To overcome the above limitations, we propose a novel progressively attentive GAN to complete face images at high resolution with multiple controllable attributes in a single forward pass without any post-processing.
We utilize facial landmarks as backbone guidance for face structures and propose a straightforward method of integrating them in our system. The training methodology of growing GANs progressively (Karras et al., 2017) is used to generate high-resolution images end-to-end. To avoid distorting the learned coarse structures when the network is growing to a higher resolution, we design a novel Frequency-oriented Attentive Module (FAM) to encourage the model to attend to finer details (i.e. higher-frequency structures, see Figure 1). A conditional version of our network is designed so that the appearance properties (e.g. male or female) and facial expressions of the synthesized faces can be controlled. Moreover, we design a set of loss functions inducing the network <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Face completion results of our method on CelebA-HQ (Karras et al., 2017). Top: our approach can complete face images at high resolution ( \(1024 \times 1024\) ). Middle and Bottom: the frequency-attention read and write filters for the top-left image. As the resolution increases from \(8 \times 8\) to \(1024 \times 1024\) , the model attends to higher-frequency information. Regions with rich details (e.g. eyes) get more attention, especially at high resolutions. Best viewed in magnification. </center> to blend the synthesized content with the contexts in a realistic way. Our method was compared with state-of-the-art approaches on the high-resolution face dataset CelebA-HQ (Karras et al., 2017). Both the evaluations and a pilot user study showed that our approach completed face images significantly more naturally than existing methods. The main contributions of this paper are: (i) We propose a progressively attentive GAN architecture, which incorporates a novel frequency-oriented attention mechanism, to complete face images with random masks at much higher resolution than existing methods. (ii) A conditional version of our model is designed to control multiple attributes of the synthesized content. (iii) Our framework is able to complete images in a single forward pass, without any post-processing, and is thus fast. ## 2 RELATED WORK There is a large body of image completion literature. Early non-learning based algorithms (Efros & Leung, 1999; Bertalmio et al., 2000; 2003) complete missing content by propagating information from known neighborhoods, based on low-level cues or global statistics (Levin et al., 2003). Texture synthesis and patch matching based approaches (Efros & Leung, 1999; Kwatra et al., 2003; Criminisi et al., 2003; Wilczkowiak et al., 2005; Komodakis, 2006; Barnes et al., 2009; Darabi et al., 2012; Huang et al., 2014; Wexler et al., 2007) find similar structures in the context of the input image or in an external database (Hays & Efros, 2007) and then paste them to fill in the holes. Recent learning based methods have shown the capability of CNNs to complete large missing content. Based on existing GANs, the Context Encoder (CE) (Pathak et al., 2016) encodes the contexts of masked images to latent representations, and then decodes them to natural content images, which are pasted into the original contexts for completion. However, the synthesized content of CE is often blurry and has inconsistent boundaries. Given a trained generative model, Yeh et al. (Yeh et al., 2017) propose a framework to find the most plausible latent representations of contexts to complete masked images.
The Generative Face Completion model (GFC) (Li et al., 2017) and the Global and Local Consistent model (GL) (Iizuka et al., 2017) use both global and local discriminators, combined with post-processing, to complete images more coherently. Built on GL, Yu et al. (Yu et al., 2018) design a contextual attention layer (CTX) to help the model borrow contextual information from distant locations. Liu et al. (Liu et al., 2018) incorporate partial convolutions to handle irregular masks. Unfortunately, these approaches can only complete face images at relatively low resolutions (e.g. \(176 \times 216\) (Iizuka et al., 2017) and \(256 \times 256\) (Yu et al., 2018)). Yang et al. (Yang et al., 2016) combine a global content network and a texture network, and the networks are trained at multiple scales repeatedly to complete high-resolution images ( \(512 \times 512\) ). Like the patch matching based approaches, Yang et al. assume that the missing content always shares some similar textures with the context, which is improbable for the face completion task. <--- Page Split ---> Many methods try to generate high quality images and stabilize the training process. The Laplacian GAN (Denton et al., 2015) uses a cascade of CNNs to synthesize images from coarse to fine by generating high-frequency information at different layers. The Deep Recurrent Attentive Writer (DRAW) architecture (Gregor et al., 2015), which is a spatial attention mechanism, generates images iteratively by learning to read and write parts of images at each time-step with a sequence of variational auto-encoders (VAEs). Unfortunately, these techniques are unable to synthesize high-resolution images (e.g. \(64 \times 64\) (Gregor et al., 2015; Denton et al., 2015)). Karras et al. (Karras et al., 2017) put forward a progressive training methodology (Progressive GAN) to grow GANs from low to high resolution, and are able to generate realistic \(1024 \times 1024\) images. However, since all the parameters remain trainable at the growing stage, learned coarse structures can be altered and distorted, and thus the training process of Progressive GANs is usually unstable. Note that these generative models cannot be applied to the image completion task directly, because they aim at generating natural content which is not necessarily consistent with the image contexts. ## 3 APPROACH ### 3.1 PROBLEM FORMULATION Denote by \(\Lambda\) an image lattice (e.g., \(1024 \times 1024\) pixels). Let \(I_{\Lambda}\) be an RGB image defined on the lattice \(\Lambda\). Denote by \(\Lambda_{t}\) and \(\Lambda_{c}\) the target region to complete and the remaining context region respectively, which form a partition of the lattice. Without loss of generality, we assume \(\Lambda_{t}\) is a single connected region. \(I_{\Lambda_{t}}\) is masked out with the same gray pixel value. Let \(M_{\Lambda}\) be a binary mask image with all pixels in \(M_{\Lambda_{t}}\) being 1 and all pixels in \(M_{\Lambda_{c}}\) being 0. For notational simplicity, we will omit the subscripts \(\Lambda\), \(\Lambda_{t}\) and \(\Lambda_{c}\) when the context is clear. The objective of image completion is to generate a synthesized image \(I^{syn}\) that looks natural, realistic and appealing for an observed image \(I^{obs}\) with the target region \(I_{\Lambda_{t}}^{gt}\) masked out from the ground-truth full image \(I^{gt}\).
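The construction of the observed image and its mask can be sketched in a few lines. The code below is our illustration of this setup; the constant gray value and the shapes are assumptions, and the example mask mirrors the center mask used in the experiments (a square with a side length of half the image size).

```python
import numpy as np

# A minimal sketch of the problem setup of Sec. 3.1: the target region
# Lambda_t is filled with a constant gray value, and the binary mask M is
# 1 inside the hole and 0 in the context.
def make_observation(img, top, left, h, w, gray=0.5):
    """img: float RGB image in [0, 1] with shape (H, W, 3)."""
    mask = np.zeros(img.shape[:2], dtype=np.float32)
    mask[top:top + h, left:left + w] = 1.0                       # M_{Lambda_t} = 1
    obs = img * (1.0 - mask[..., None]) + gray * mask[..., None]  # gray out the hole
    return obs, mask

img = np.random.rand(1024, 1024, 3).astype(np.float32)           # placeholder image
obs, M = make_observation(img, top=256, left=256, h=512, w=512)   # center mask
```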
Furthermore, it is desirable to control the completion according to a set of attributes which are assumed to be independent of each other (e.g., to respect the underlying intrinsic ambiguities due to the loss of information in the target region). Let \(A = (a_{1}, \dots , a_{N})\) be an \(N\)-dim vector with \(a_{i} \in \{0, 1\}\) encoding whether a corresponding attribute appears \((a_{i} = 1)\) or not \((a_{i} = 0)\) (e.g. the "Male" attribute in Figure 5). We define the generator as \[I^{syn} = G(I^{obs}, M, A; \theta_{G}), \quad \text{subject to} \quad I_{\Lambda_{c}}^{syn} \approx I_{\Lambda_{c}}^{obs} \quad (1)\] where \(\theta_{G}\) collects all parameters of the generator, and \(\approx\) represents that the two context regions, \(I_{\Lambda_{c}}^{syn}\) and \(I_{\Lambda_{c}}^{obs}\), need to be kept very similar (both to be elaborated later). ### 3.2 THE PROPOSED METHOD The proposed method is built on progressive GANs with a well-executed combination of the network architecture, an appropriate recipe of loss functions and a novel FAM. Denote by \(G_{r}\) and \(D_{r}\) the generator and discriminator at resolution \(r\) respectively, where \(r \in \{1, \dots , R\}\) is the index of the resolution (e.g., \(r = 1\) represents \(4 \times 4\) and \(r = R = 9\) represents \(1024 \times 1024\)). Accordingly, we have the observed corrupted image, its corresponding binary mask and its ground-truth uncorrupted image (in training only), \(I_{r}^{obs}\), \(M_{r}\) and \(I_{r}^{gt}\), at each resolution. The generator \(G_{r}\) takes as input the observed data \(X_{r}^{G}\) and the attribute vector \(A\), then outputs a completed image \(I_{r}^{syn}\). It consists of two components, \[I_{r}^{syn} = G_{r}(X_{r}^{G}, A; \theta_{G_{r}}) = G_{r}^{compl}(G_{r}^{enc}(X_{r}^{G}; \theta_{G_{r}^{enc}}), A; \theta_{G_{r}^{compl}}), \quad (2)\] where \(G_{r}^{enc}(\cdot)\) encodes the input \(X_{r}^{G}\) into a low-dimensional latent vector. The latent vector is concatenated with the attribute vector. The concatenated vector plays the role of the noise random variable \(z\) in the original GAN. Then, \(G_{r}^{compl}(\cdot)\) transforms the concatenated vector into a sample \(I_{r}^{syn}\) (i.e., the completed image). \(G_{r}^{enc}\) and \(G_{r}^{compl}\) mirror each other in the form of a U-shaped network (Ronneberger et al., 2015; Newell et al., 2016), and \(\theta_{G_{r}} = (\theta_{G_{r}^{enc}}, \theta_{G_{r}^{compl}})\). The discriminator \(D_{r}\) classifies its input \(X_{r}^{D}\) and has two output branches: the fake-versus-real classification branch and the attribute prediction branch. It consists of three components: a shared feature backbone and two head classifiers. We have \[D_{r}(X_{r}^{D}; \theta_{D_{r}}) = \{D_{r}^{cls}(\mathbb{F}_{r}(X_{r}^{D}; \theta_{\mathbb{F}_{r}}); \theta_{D_{r}^{cls}}), D_{r}^{attr}(\mathbb{F}_{r}(X_{r}^{D}; \theta_{\mathbb{F}_{r}}); \theta_{D_{r}^{attr}})\} \quad (3)\] <--- Page Split ---> where the feature backbone \(\mathbb{F}_{r}(X_{r}^{D};\theta_{\mathbb{F}_{r}})\) computes the feature map. On top of the feature map, the first head classifier, \(D_{cls}(\cdot)\), computes a binary classification between real and fake, and the second one, \(D_{attr}(\cdot)\), predicts an attribute vector. All the parameters of the discriminator are collected in \(\theta_{D_{r}} = (\theta_{\mathbb{F}_{r}},\theta_{D_{r}^{cl s}},\theta_{D_{r}^{attr}})\). We note that the discriminator is only needed in training. We will omit the notations for parameters in equations when no confusion is caused.
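A schematic skeleton of the generator of Eq. (2) is sketched below. This is our illustration rather than the paper's architecture: layer sizes, the number of input channels (image + mask + a landmark channel), and the attribute dimension are placeholders, and the skip connections of the U-shaped design are omitted for brevity.

```python
import torch
import torch.nn as nn

# Schematic skeleton of Eq. (2): an encoder maps the observed data to a
# latent vector, the attribute vector A is concatenated, and a mirrored
# decoder produces the completed image. All sizes are placeholders.
class Generator(nn.Module):
    def __init__(self, in_ch=5, latent_dim=256, n_attrs=2, res=64):
        super().__init__()
        self.enc = nn.Sequential(                      # G^enc: (image, mask, landmarks) -> latent
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * (res // 4) ** 2, latent_dim),
        )
        self.compl = nn.Sequential(                    # G^compl: (latent, A) -> image
            nn.Linear(latent_dim + n_attrs, 64 * (res // 4) ** 2),
            nn.Unflatten(1, (64, res // 4, res // 4)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x, a):                           # x: (N, in_ch, res, res), a: (N, n_attrs)
        z = self.enc(x)
        return self.compl(torch.cat([z, a], dim=1))    # A plays the role of the GAN noise z
```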
Progressive Growing of Generators and Discriminators. Following the methodology proposed in (Karras et al., 2017), we start with training \(G_{1}\) and \(D_{1}\). Then, at the growing stage \(r\) ( \(r > 1\) ), \(G_{r} = (G_{r - 1}, G_{r}^{fade - in})\) is first created on top of the previously trained stage \(G_{r - 1}\) with the newly added layers \(G_{r}^{fade - in}\), and both of them are trainable in a fade-in process for a smooth transition. The discriminators grow in the same way. The alteration of \(G_{r - 1}\) while training stage \(G_{r}\) may lead to the instability of the growing process and the loss of learned structures of the underlying image distribution, which motivates the proposed FAM for growing the generators (Section 3.2.1). Inputs for Generators and Discriminators, \(X_{r}^{G}\) and \(X_{r}^{D}\). We have \[X_{r}^{G} = (\hat{I}_{r}^{obs},M_{r},L_{r})\quad \text{and}\quad X_{r}^{D} = (I_{r},L_{r}) \quad (4)\] where \(L_{r}\) represents the facial landmarks. Recent works (Isola et al., 2016; Wang et al., 2017; Zhu et al., 2017; Sangkloy et al., 2017; Xian et al., 2017; Chen & Hays, 2018) have shown the capability of GANs to translate sketches or edges to photo-realistic images. Given a corrupted image, it is better if the model is able to "draw" a sketch of the face first, which provides a backbone guidance for image completion. We utilize the following methods to compute landmarks (Figure 2). - In training, we extract landmarks from the uncorrupted image at \(256 \times 256\) resolution using an off-the-shelf pre-trained Face Alignment Network (FAN) (Bulat & Tzimiropoulos, 2017), which achieved very good results for faces in the wild. - In testing, to predict landmarks from corrupted images, we first train a single-stage face completion network at \(256 \times 256\) resolution, denoted by \(G_{L}\), using the reconstruction loss (Section 3.2.2) only. For a testing image, we use \(G_{L}\) to generate a blurry completed image from which the landmarks are extracted with FAN. ![](images/3_0.jpg) <center>Figure 2: Overview of our method. See text for details. </center> \(L\) is up-sampled or down-sampled to \(L_{r}\) to match the size of the networks at different resolutions. \(I_{r}\) in \(X_{r}^{D}\) represents either the uncorrupted image or the image synthesized by \(G_{r}\). For \(\hat{I}_{r}^{obs}\) in \(X_{r}^{G}\), we have \(\hat{I}_{1}^{obs} = I_{1}^{obs}\), and \(\hat{I}_{r}^{obs}\) is the output of FAM ( \(r > 1\) ), as described in the following section. #### 3.2.1 FAM: THE PROPOSED FREQUENCY-ORIENTED ATTENTIVE MODULE Figure 3 illustrates the proposed FAM. To obtain a smooth transition from low to high resolutions, we design a FAM architecture (i.e. the red components in Figure 3), which is a frequency-oriented attention mechanism integrated along with the resolution change, so that the model learns filters that encourage \(G_{r}^{fade - in}\) (blue components in Figure 3) to read and write information that is important at level \(r\) but has not been handled well at level \(r - 1\). By doing so: - Our model attends to higher-frequency signals as the resolution increases (see Figure 1), thus improving the completion performance. - Our model preserves what has been learned in previous progressive stages, thus improving the stability of progressive GANs. Existing approaches (Gregor et al., 2015; Yu et al., 2018) use spatial attention mechanisms to encourage networks to attend to selected parts of images (e.g.
rectangular regions), while FAM is an attention model in the frequency domain. Different from a regular band-pass filter, the filters generated by FAM are predicted based on the semantics of images, which are enforced by the objective function (Eqn. 10), and thus are also sensitive to locations inferred on-the-fly in a coarse-to-fine manner. For instance, the model pays more attention to eye regions, where rich details aggregate, especially at high resolutions. <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 3: The proposed FAM used in growing GANs progressively. Here, we show the example of increasing resolutions from \(32 \times 32\) to \(64 \times 64\). See text for details. Best viewed in color and magnification. </center> Recall that at the growing stage \(r\) \((r > 1)\) in training, we have \(G_{r} = (G_{r - 1},G_{r}^{fade - in})\). In the vanilla progressive GANs, we would compute the completed image \(I_{r}^{syn} = G_{r}(I_{r}^{obs},M_{r},L_{r},A)\) (simplified from Eqn. 2 for clarity). The previously trained \(G_{r - 1}\) can be changed while optimizing stage \(r\), which may lead to unexpected updates. Our FAM prevents \(G_{r - 1}\) from changing in wrong directions. To that end, we first introduce a read module which utilizes a read filter \(\hat{F}_{read}\) to extract the most valuable information in both \(I_{r}^{obs}\) and \(I_{r - 1}^{obs}\), \[\hat{I}_{r}^{obs},\hat{I}_{r - 1}^{obs} = read(I_{r}^{obs},I_{r - 1}^{obs},\hat{F}_{read}), \quad (5)\] which is implemented by \[\hat{I}_{r}^{obs} = \hat{F}_{read}\odot (1 - M_{r})\odot I_{r}^{obs}, \quad (6)\] \[\hat{I}_{r - 1}^{obs} = (1 - \hat{F}_{read})\odot (1 - M_{r})\odot \mathrm{Upsample}(I_{r - 1}^{obs}), \quad (7)\] where \(\odot\) denotes element-wise multiplication and \(\mathrm{Upsample}(I_{r - 1}^{obs})\) up-samples \(I_{r - 1}^{obs}\) to match the resolution of \(I_{r}^{obs}\). \(\hat{I}_{r - 1}^{obs}\) represents the blurred (i.e. low-frequency) version of \(I_{r}^{obs}\), since high-frequency information is lost when \(I_{r - 1}^{obs}\) is down-sampled from \(I_{r}^{obs}\). The read filter \(\hat{F}_{read}\) is defined by \[\hat{F}_{read} = \beta \cdot F_{read} + \gamma ,\qquad \beta = \left\{ \begin{array}{ll}2\alpha , & \alpha \leq 0.5\\ 2 - 2\alpha , & 0.5< \alpha \leq 1.0, \end{array} \right.\qquad \gamma = \left\{ \begin{array}{ll}0, & \alpha \leq 0.5\\ 2\alpha -1, & 0.5< \alpha \leq 1.0, \end{array} \right. \quad (8)\] where \(F_{read}\) is computed by \(F_{read} = \mathrm{ToFilter}(G_{r - 1}^{fixed}(I_{r - 1}^{obs},M_{r - 1},L_{r - 1}))\) using a trained generator \(G_{r - 1}^{fixed}\) with fixed weights and a small trainable network ToFilter, and \(\alpha\) is a weight increasing linearly from 0 to 1 in proportion to the number of seen images during growing. \(\hat{F}_{read}\) starts as an all-zero filter, is adjusted by the trainable ToFilter at the growing stages, and eventually increases to all ones. As illustrated in Figure 3, given the outputs from the read module, \(\hat{I}_{r}^{obs}\) and \(\hat{I}_{r - 1}^{obs}\), we can generate the corresponding completed images, \(I_{r}^{syn}\) and the up-sampled \(I_{r - 1}^{syn}\) respectively, as well as the write filter \(F_{write}\).
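Before moving on to the write module, the fade-in schedule of Eq. (8) is easy to sanity-check in isolation. The sketch below is our illustration, assuming the learned filter values lie in \([0, 1]\):

```python
import numpy as np

# Sketch of the fade-in schedule of Eq. (8): as alpha grows from 0 to 1 with
# the number of seen images, the effective filter goes from all zeros,
# through the learned filter F (at alpha = 0.5), to all ones.
def effective_filter(F, alpha):
    if alpha <= 0.5:
        beta, gamma = 2.0 * alpha, 0.0
    else:
        beta, gamma = 2.0 - 2.0 * alpha, 2.0 * alpha - 1.0
    return beta * F + gamma

F = np.random.rand(64, 64)  # a learned filter with values in [0, 1] (assumed range)
for alpha in (0.0, 0.5, 1.0):
    out = effective_filter(F, alpha)
    print(alpha, out.min(), out.max())
# alpha=0.0 -> all zeros; alpha=0.5 -> F itself; alpha=1.0 -> all ones.
```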
Then, to generate the final completed image at stage \(r\), we also introduce a write module, \[\begin{array}{r l} & {\hat{I}_{r}^{syn} = write(I_{r}^{syn},I_{r - 1}^{syn},\hat{F}_{write})}\\ & {\qquad = (I_{r}^{syn}\cdot \alpha +I_{r - 1}^{syn}\cdot (1 - \alpha))\odot (1 - M_{r}) + (\hat{F}_{write}\odot I_{r}^{syn} + (1 - \hat{F}_{write})\odot I_{r - 1}^{syn})\odot M_{r},} \end{array} \quad (9)\] where \(\hat{F}_{write} = \beta \cdot F_{write} + \gamma\) and \(F_{write}\) is predicted from the last feature maps. At a low resolution, both the read and write modules rely more on the low-frequency information, but they gradually move to exploit higher-frequency information as the resolution increases, under the guidance of minimizing the objective function (Eqn. 10). \(F_{read}\) and \(F_{write}\) can be discarded when the growing process is done. A testing image only needs to go through one generator that is independent of FAM (blue flow in Figure 3). This is more efficient than the Laplacian GAN (Denton et al., 2015), which requires feeding a testing sample to a cascade of generators and uses multiple discriminators in training. #### 3.2.2 LOSS FUNCTIONS Besides extending the original adversarial loss function, we design three new loss functions to enforce sharp image completion. Adversarial Loss Given an uncorrupted image \(I^{gt}\), its attribute vector \(A\), a mask \(M\), landmarks \(L\), and the corresponding corrupted image \(I^{obs}\), we define the loss by \(l_{adv}(I^{gt},M,L,I^{obs},A|G,D) = \log (1 - D_{cls}(I^{syn},L)) + \log D_{cls}(I^{gt},L)\), where \(I^{syn} = G(I^{obs},M,L,A)\). <--- Page Split ---> ![](images/5_0.jpg) <center>Figure 4: Comparison with the Context Encoder (CE) on high-resolution face completion. As the resolution increases, CE generates more distorted images while our method produces sharper faces with more details. </center> Attribute Loss Similar to the InfoGAN models (Chen et al., 2016; Choi et al., 2017), for the attribute prediction head classifier in the discriminator, we define the attribute loss based on the cross-entropy between the predicted attribute vectors, \(\hat{A}^{gt} = D_{attr}(I^{gt},L)\) and \(\hat{A}^{syn} = D_{attr}(I^{syn},L)\), and the corresponding target \(A\), for both a real uncorrupted image and a synthesized image. We have \(l_{attr}(I^{gt},A,M,I^{obs}|G,D) = CrossEntropy(A,\hat{A}^{gt}) + CrossEntropy(A,\hat{A}^{syn})\). Reconstruction Loss Since our method generates the entire completed face rather than only the target region, we define a weighted reconstruction loss \(l_{rec}\) to preserve both the target region and the context region: \(l_{rec}(I^{gt},M,L,I^{obs},A|G) = \| \kappa \odot M\odot I^{diff}\|_{1} + \| (1 - \kappa)\odot (1 - M)\odot I^{diff}\|_{1}\), where \(I^{diff} = I^{gt} - I^{syn}\) and \(\kappa\) is a trade-off parameter. Feature Loss In addition to the reconstruction loss in terms of pixel values, we also encourage the synthesized image to have a similar feature representation (Johnson et al., 2016) based on a pre-trained deep neural network \(\phi\). Let \(\phi_{j}\) be the activations of the \(j\)-th layer of \(\phi\); the feature loss is defined by \(l_{feat}(I^{gt},M,L,I^{obs},A|\phi ,G) = \| \phi_{j}(I^{gt}) - \phi_{j}(I^{syn})\|_{2}^{2}\). In our experiments, \(\phi_{j}\) is the \(relu2\_ 2\) layer of a 16-layer VGG network (Simonyan & Zisserman, 2014) pre-trained on the ImageNet dataset (Russakovsky et al., 2015).
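A minimal sketch of the reconstruction and feature losses follows (ours, not the authors' code). We assume images are (N, 3, H, W) tensors and the mask M is (N, 1, H, W); taking `features[:9]` of torchvision's VGG-16 to end at relu2_2, the older `pretrained=True` API, and the value of \(\kappa\) are all implementation assumptions, and the norms are taken as means, matching the paper's definitions only up to a constant scaling.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG-16 feature extractor up to relu2_2 (assumed to be features[:9]).
phi = vgg16(pretrained=True).features[:9].eval()
for p in phi.parameters():
    p.requires_grad_(False)

def l_rec(I_gt, I_syn, M, kappa=0.9):
    # Weighted L1: kappa weights the hole, (1 - kappa) the context.
    # kappa=0.9 is an arbitrary placeholder for the paper's trade-off parameter.
    diff = (I_gt - I_syn).abs()
    return (kappa * M * diff).mean() + ((1.0 - kappa) * (1.0 - M) * diff).mean()

def l_feat(I_gt, I_syn):
    # Squared L2 distance between relu2_2 activations (Johnson et al., 2016).
    return F.mse_loss(phi(I_syn), phi(I_gt))
```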
Boundary Loss To make the generator learn to blend the synthesized target region with the original context region seamlessly, we further define a close-up reconstruction loss along the boundary of the mask. Similar to (Yeh et al., 2017), we first create a weighted kernel \(w\) based on the mask image \(M\) . \(w\) is computed by blurring the mask boundary in \(M\) with a mean filter so that pixels closer to the mask boundary are assigned larger weights. The kernel size of the mean filter is seven in our experiments. We have, \(l_{bdy}(I^{gt},M,L,I^{obs},A|G) = \| w\odot (I^{gt} - I^{syn})\|_{1}\) .

Our model is trained end-to-end by combining the expected values of the loss functions defined above in a minimax game: \[\min_{G}\max_{D}\mathcal{L}_{adv}(G,D) + \lambda_{1}\mathcal{L}_{attr}(G,D) + \lambda_{2}\mathcal{L}_{rec}(G) + \lambda_{3}\mathcal{L}_{feat}(G,\phi) + \lambda_{4}\mathcal{L}_{bdy}(G), \quad (10)\] where the \(\lambda_{i}\) are trade-off parameters between the loss terms.

Training without Multiple Controllable Attributes. Since this setting is a special case of the formulation stated above, we can simply remove the components involving attributes, such as the attribute loss, in a straightforward way. The resulting system still enjoys end-to-end learning.

## 4 EXPERIMENTS

Datasets and Experiment Settings We used the CelebA-HQ (Karras et al., 2017) dataset for evaluation. It contains 30,000 aligned face images at \(1024 \times 1024\) resolution. The dataset is split randomly while ensuring there is no identity overlap between the test and training sets: 3,009 images for testing and 26,991 for training. There were two types of masks: center and random. The center mask was a square region in the middle of the image with a side length of half the size of the image. The random masks, generated in a similar way to previous methods (Iizuka et al., 2017; Yu et al., 2018), were rectangular regions with random width-to-height ratios, sizes and locations, covering about \(5\%\) to \(25\%\) of the original images. Hyper-parameters used for training are listed in the supplemental materials.

Quality Comparison with the Context Encoder Our method was compared with the Context Encoder (CE) (Pathak et al., 2016) on high-resolution face completion. Since the original networks <--- Page Split ---> of CE were designed for \(128 \times 128\) images, we used a naive approach to fit it to different resolutions: one, two, and three convolutional layers were added to the encoder, decoder and discriminator for the \(256 \times 256\) , \(512 \times 512\) and \(1024 \times 1024\) networks respectively. The result (Figure 4) shows that, as the resolution increased, our method learned details incrementally and synthesized sharper faces, while CE generated poorer images with more distortions.
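A minimal sketch of the boundary weight and of assembling the generator side of Eqn. (10). The mean-filter size (7) and the weights (\(\lambda_{attr}=2\), \(\lambda_{rec}=500\), \(\lambda_{feat}=8\), \(\lambda_{bdy}=5000\), \(\kappa=0.7\)) come from the appendix; how the blurred mask is mapped to boundary weights is not fully specified, so the peaked mapping below is an assumption.

```python
import torch
import torch.nn.functional as F

def boundary_weight(M: torch.Tensor, ksize: int = 7) -> torch.Tensor:
    """Blur the binary mask (N x 1 x H x W) with a mean filter of size 7, as
    described above. The blurred mask takes fractional values in a band around
    the mask boundary; mapping that band to weights that peak at the boundary
    is our assumption, since the exact mapping is unspecified."""
    blurred = F.avg_pool2d(M, ksize, stride=1, padding=ksize // 2)
    return 4.0 * blurred * (1.0 - blurred)  # ~1 on the boundary, ~0 elsewhere

def generator_loss(I_gt, I_syn, M, d_cls_syn, attr_ce, feat_loss,
                   kappa=0.7, lam_attr=2.0, lam_rec=500.0,
                   lam_feat=8.0, lam_bdy=5000.0):
    """Generator side of Eqn. (10); d_cls_syn, attr_ce and feat_loss are
    assumed to be computed elsewhere (D_cls score on I_syn, attribute
    cross-entropy, and the VGG feature loss)."""
    diff = I_gt - I_syn
    l_rec = (kappa * (M * diff).abs().sum()
             + (1.0 - kappa) * ((1.0 - M) * diff).abs().sum())
    l_bdy = (boundary_weight(M) * diff).abs().sum()
    l_adv = torch.log(1.0 - d_cls_syn + 1e-8).mean()  # G minimizes this term
    return (l_adv + lam_attr * attr_ce + lam_rec * l_rec
            + lam_feat * feat_loss + lam_bdy * l_bdy)
```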
Table 1: The quantitative comparison between our method and state-of-the-art methods <table><tr><td>Method</td><td>Resolution</td><td>L1 (%)</td><td>L2 (%)</td><td>PSNR</td></tr><tr><td>GL (Iizuka et al., 2017)</td><td>128 × 128</td><td>9.34</td><td>1.75</td><td>18.22</td></tr><tr><td>Ours</td><td>128 × 128</td><td>7.80</td><td>1.42</td><td>19.15</td></tr><tr><td>CTX (Yu et al., 2018)</td><td>256 × 256</td><td>8.53</td><td>1.75</td><td>18.41</td></tr><tr><td>Ours</td><td>256 × 256</td><td>7.05</td><td>1.21</td><td>19.97</td></tr></table>

Quantitative Comparison with State-of-the-art Methods As noted in the literature (Yeh et al., 2017; Yu et al., 2018), commonly used reconstruction metrics such as mean \(L1\) and \(L2\) errors and peak signal-to-noise ratio (PSNR) are not good quantitative evaluation metrics for inpainting methods, since image completion aims at filling missing regions with plausible content rather than reconstructing them exactly. As a reference, we show the comparison between our method and state-of-the-art models at their reported resolutions: \(128 \times 128\) for GL with center masks (using the implementation of (Yu et al., 2018)) and \(256 \times 256\) for CTX with random masks (Table 1).

Semantic Completion We first trained a high-resolution \((1024 \times 1024)\) model with center masks (examples shown in Figure 5) to test whether our model is capable of learning high-level semantics and structures of faces and synthesizing large missing regions. The second model was trained with random masks, and is able to handle arbitrary (e.g. irregular hand-drawn) mask shapes. The results (Figure 5) show that our model is able to capture the anatomical structures of faces and generate content that is consistent with the holistic semantics.

Attribute Controller Unlike previous image completion techniques (Iizuka et al., 2017; Yeh et al., 2017; Li et al., 2017; Yang et al., 2016; Pathak et al., 2016; Yu et al., 2018) that generate only random plausible content, our network completes faces with structurally meaningful content whose appearance and expressions are controllable. Existing approaches (Mirza & Osindero, 2014; Choi et al., 2017; Kaneko et al., 2017) can only control facial expressions roughly (e.g. smiling or not smiling). In contrast, our model is able to control subtle expressions. In this experiment, the face appearance was conditioned on a "Male" attribute and we used landmarks from source actors to control the synthesized expressions (Figure 5). This \(512 \times 512\) model was trained from scratch. The example demonstrates the potential application of our method to face reenactment (Li et al., 2012; Garrido et al., 2014; Thies et al., 2016).

Computation Time Once trained, our model completes a face image with a single forward pass, resulting in much higher efficiency. We tested our model on a Titan Xp GPU by processing 3,000 \(1024 \times 1024\) images with \(512 \times 512\) holes. The mean completion time is 0.54 seconds per image. In contrast, existing CNN-based high-resolution inpainting approaches often need much longer to process an image. For instance, it took about 1 minute for the model of Yang et al. (Yang et al., 2016) to complete a \(512 \times 512\) image on a Titan X GPU.
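For reference, the metrics in Table 1 can be computed as follows; a minimal NumPy sketch assuming images scaled to [0, 1], with L1/L2 taken as mean errors in percent over the full image (the exact protocol, full image vs. hole only, is an assumption).

```python
import numpy as np

def completion_metrics(I_gt: np.ndarray, I_syn: np.ndarray):
    """Mean L1/L2 errors (in percent) and PSNR between a ground-truth and a
    completed image, both assumed to be float arrays scaled to [0, 1]."""
    diff = I_gt - I_syn
    l1 = np.abs(diff).mean() * 100.0               # mean L1 error (%)
    l2 = (diff ** 2).mean() * 100.0                # mean L2 error (%)
    mse = (diff ** 2).mean()
    psnr = 10.0 * np.log10(1.0 / max(mse, 1e-12))  # peak value is 1.0
    return l1, l2, psnr
```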
User Study We compared our method with CTX (Yu et al., 2018), the state-of-the-art CNN-based face completion approach capable of completing face images at \(256 \times 256\) resolution, in a pilot user study at \(256 \times 256\) resolution with random masks. 27 subjects (15 male and 12 female participants, aged 22 to 32) volunteered to participate. There were four sessions of pairwise A/B tests. Each time, a user was shown two images and asked to choose the more realistic one. In the first session, the two images were completed from the same image by different methods. In sessions two to four, a real image and a corresponding synthesized image were shown. In the first session, viewing time was unlimited; in sessions two to four, images were displayed for 250ms, 1000ms and 4000ms respectively. The results (Figure 6) show that viewers favored images generated by our method significantly more often. Overall, our approach generated sharper images with more details and fewer distortions.

Limitations Though our method has low inference time, the training time is long due to the progressive growing of the networks. In our experiments, it took about three weeks to train a \(1024 \times 1024\) model on a Titan Xp GPU. Additionally, by zooming in on our results carefully, we find that our high-resolution model fails to learn low-level skin textures, such as furrows and sweat pores. Moreover, the model <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 5: Sample results of our approach. The left two groups are completion results with center and irregular hand-drawn masks at \(1024 \times 1024\) . For each group, from left to right columns: cropped, synthesized and real images. The third group shows the performance of the attribute controller, in which the first and third rows are corrupted images and source actors whose facial landmarks are used to control the expressions of the synthesized faces (rows two and four). The rightmost two columns are conditioned on the "Male" attribute while columns two and three are with "Not Male". The leftmost column depends on the ground-truth landmarks and attributes. </center> ![](images/7_1.jpg) <center>Figure 6: Comparisons on naturalness: ours vs. CTX (Yu et al., 2018). The leftmost bar chart shows the average percentage of cases in which the images generated by our method look more natural than CTX's. The second bar chart shows the percentage of cases in which a synthesized image is considered more realistic than a ground-truth (GT) one with display times of 250ms, 1000ms and 4000ms. The right figure shows samples used in the user study. The first group comes from session one while groups two and three are both from session four (the 4000ms session). The preferred images are marked with red boxes. </center> ![](images/7_2.jpg) <center>Figure 7: Some failure cases of our approach. </center> could generate distorted content when removing large parts (e.g. hats) or synthesize plausible but unnatural faces (Figure 7). Furthermore, for facial expression transfer, our method requires that the head poses of the source and target faces are similar. These issues are left for future work.

## 5 CONCLUSION

We propose a progressive GAN with frequency-oriented attentive modules (FAM) for high-resolution and fast face completion, which learns face structures from coarse to fine guided by the FAM.
By consolidating information across all scales, our model not only outperforms state-of-the-art methods by generating sharper images at low resolutions, but is also able to synthesize faces at higher resolutions than existing techniques. A conditional version of our model allows users to control the properties of the generated images explicitly with attribute vectors and landmarks. Our system is designed in an end-to-end manner: it learns to generate completed faces directly and efficiently. <--- Page Split --->

## REFERENCES

Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B Goldman. Patchmatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph., 28(3):24-1, 2009.

Marcelo Bertalmio, Guillermo Sapiro, Vincent Caselles, and Coloma Ballester. Image inpainting. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pp. 417-424. ACM Press/Addison-Wesley Publishing Co., 2000.

Marcelo Bertalmio, Luminita Vese, Guillermo Sapiro, and Stanley Osher. Simultaneous structure and texture image inpainting. IEEE Transactions on Image Processing, 12(8):882-889, 2003.

Adrian Bulat and Georgios Tzimiropoulos. How far are we from solving the 2d & 3d face alignment problem? (And a dataset of 230,000 3d facial landmarks). In International Conference on Computer Vision, volume 1, pp. 8, 2017.

Wengling Chen and James Hays. Sketchygan: Towards diverse and realistic sketch to image synthesis. arXiv preprint arXiv:1801.02753, 2018.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2172-2180, 2016.

Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. arXiv preprint arXiv:1711.09020, 2017.

Antonio Criminisi, Patrick Perez, and Kentaro Toyama. Object removal by exemplar-based inpainting. In Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, volume 2, pp. II-II. IEEE, 2003.

Soheil Darabi, Eli Shechtman, Connelly Barnes, Dan B Goldman, and Pradeep Sen. Image melding: Combining inconsistent images using patch-based synthesis. ACM Trans. Graph., 31(4):82-1, 2012.

Emily Denton, Sam Gross, and Rob Fergus. Semi-supervised learning with context-conditional generative adversarial networks. arXiv preprint arXiv:1611.06430, 2016.

Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in neural information processing systems, pp. 1486-1494, 2015.

Alexei A Efros and Thomas K Leung. Texture synthesis by non-parametric sampling. In Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on, volume 2, pp. 1033-1038. IEEE, 1999.

Pablo Garrido, Levi Valgaerts, Ole Rehmsen, Thorsten Thormahlen, Patrick Perez, and Christian Theobalt. Automatic face reenactment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4217-4224, 2014.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014.
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

James Hays and Alexei A Efros. Scene completion using millions of photographs. In ACM Transactions on Graphics (TOG), volume 26, pp. 4. ACM, 2007.

Jia-Bin Huang, Sing Bing Kang, Narendra Ahuja, and Johannes Kopf. Image completion using planar structure guidance. ACM Transactions on Graphics (TOG), 33(4):129, 2014.

Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):107, 2017. <--- Page Split -->

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448-456, 2015.

Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2016.

Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694-711. Springer, 2016.

Takuhiro Kaneko, Kaoru Hiramatsu, and Kunio Kashino. Generative attribute controller with conditional filtered generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2017.

Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Nikos Komodakis. Image completion using global optimization. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 1, pp. 442-452. IEEE, 2006.

Vivek Kwatra, Arno Schödl, Irfan Essa, Greg Turk, and Aaron Bobick. Graphcut textures: image and video synthesis using graph cuts. In ACM Transactions on Graphics (ToG), volume 22, pp. 277-286. ACM, 2003.

Anat Levin, Assaf Zomet, and Yair Weiss. Learning how to inpaint from global image statistics. In Proceedings of the IEEE International Conference on Computer Vision, 2003.

Kai Li, Feng Xu, Jue Wang, Qionghai Dai, and Yebin Liu. A data-driven approach for facial expression synthesis in video. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 57-64. IEEE, 2012.

Yijun Li, Sifei Liu, Jimei Yang, and Ming-Hsuan Yang. Generative face completion. arXiv preprint arXiv:1704.05838, 2017.

Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. arXiv preprint arXiv:1804.07723, 2018.

Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision, pp. 483-499. Springer, 2016.

Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2536-2544, 2016.

Patrick Pérez, Michel Gangnet, and Andrew Blake. Poisson image editing. In ACM Transactions on Graphics (TOG), volume 22, pp. 313-318. ACM, 2003.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. Springer, 2015.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015. <--- Page Split --->

Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, and James Hays. Scribbler: Controlling deep image synthesis with sketch and color. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2017.

Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russ Webb. Learning from simulated and unsupervised images through adversarial training. arXiv preprint arXiv:1612.07828, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, pp. 2387-2395. IEEE, 2016.

Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.

Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. arXiv preprint arXiv:1711.11585, 2017.

Yonatan Wexler, Eli Shechtman, and Michal Irani. Space-time completion of video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(3), 2007.

Marta Wilczkowiak, Gabriel J Brostow, Ben Tordoff, and Roberto Cipolla. Hole filling through photomontage. In BMVC, volume 5, pp. 492-501, 2005.

Wenqi Xian, Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, and James Hays. Texturegan: Controlling deep image synthesis with texture patches. arXiv preprint arXiv:1706.02823, 2017.

Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, and Hao Li. High-resolution image inpainting using multi-scale neural patch synthesis. arXiv preprint arXiv:1611.09969, 2016.

Raymond A Yeh, Chen Chen, Teck Yian Lim, Alexander G Schwing, Mark Hasegawa-Johnson, and Minh N Do. Semantic image inpainting with deep generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5485-5493, 2017.

Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Generative image inpainting with contextual attention. arXiv preprint arXiv:1801.07892, 2018.

Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017. <--- Page Split --->

![](images/11_0.jpg) <center>Figure 8: Ablation Study. First row: cropped images; Second row: results from a model trained with only the regular (unweighted) \(\mathcal{L}_{1}\) loss and \(\mathcal{L}_{adv}\). The vanishing gradient problem prevents the networks from making progress when the synthesized content is still blurry.
Third row: the training process is stabilized by adopting the progressive training methodology and a set of designed loss functions. However, the already-learned structures can be distorted while the network is growing (e.g. first column); Fourth row: FAM helps prevent the coarse structures from being altered while encouraging the model to attend to regions with rich details. For instance, the eyes are sharper, more vivid and realistic when the model is trained with FAM; Fifth row: the ground-truth samples. </center>

## A APPENDIX

## A.1 ABLATION STUDY

The encoder-decoder structure has been widely used in image completion networks (Pathak et al., 2016; Iizuka et al., 2017; Li et al., 2017). However, the encoding process is a lossy compression, which makes it difficult to reconstruct the original contextual regions. Additionally, since much contextual information is lost during the encoding process, it is also difficult for the decoder to reconstruct content that perfectly matches the context. U-Net adds skip connections between the mirrored layers of the encoder and decoder, consolidating information from all previous layers rather than depending only on the latent code. Therefore, U-Net can be used to generate a completed image conditioned on the corrupted input. Unfortunately, if U-Net is applied to image completion networks directly, it is much easier for the generator to reconstruct the context than the content, resulting in inconsistent colors and textures along boundaries, which often causes the vanishing gradient problem <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 9: High-resolution face completion results with center masks. All images are resized from \(1024 \times 1024\) . For each group, from left to right: cropped, synthesized, and real images. </center> where both the generator and discriminator stop learning when the synthesized content is still blurry (Figure 8).

We try to solve this problem from two aspects. First, we adopt the method of training GANs progressively. Our model starts at a low resolution, so that it is difficult for the discriminator to capture the inconsistency between content and context regions. When the network grows, since the lower-resolution layers are already trained, the inconsistency between context and content regions is reduced. Second, we design the boundary loss and weighted reconstruction loss, which encourage the network to focus more on synthesizing the boundary and content regions respectively. The feature loss also helps stabilize the training by encouraging the synthesized images to have high-level features similar to real samples. These two improvements boost the performance significantly.

The image quality is further enhanced by incorporating FAM. First, the reader and writer act as band-pass filters that minimize the influence of high-frequency noise on low-level network parameters, and thus avoid distorting the already-learned global structures. Second, FAM is predicted from the facial semantics, so it encourages the model to focus on learning features in regions with richer details. For instance, the eyes synthesized by models trained with FAM are much sharper, more vivid and realistic. This ablation study was run at \(256 \times 256\) to provide an intuitive illustration of the impact of the different components of our method. Since the training of high-resolution models was very time consuming, a more thorough ablation study is left for future work.
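A toy sketch of the copy-and-concatenate skip connection discussed above, with illustrative channel sizes (not the paper's); it shows why context pixels can bypass the lossy bottleneck, which is what makes reconstructing the context so much easier than synthesizing the content.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal encoder-decoder with one copy-and-concatenate skip.
    The decoder sees both the bottleneck features and the raw input,
    so context regions can be reproduced almost for free."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 16, 3, stride=2, padding=1)           # lossy bottleneck
        self.dec = nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1) # back to input size
        self.out = nn.Conv2d(16 + 3, 3, 3, padding=1)                 # after concat skip

    def forward(self, x):
        h = torch.relu(self.enc(x))
        up = torch.relu(self.dec(h))
        return self.out(torch.cat([up, x], dim=1))  # skip directly from the input
```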
## A.2 HIGH-RESOLUTION COMPLETION RESULTS

More high-resolution face completion results with various mask types are demonstrated in Figure 9 and Figure 10. Additionally, Figure 11 and Figure 12 demonstrate two ways to control expressions <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 10: High-resolution face completion results with random and hand-drawn masks. All images are resized from \(1024 \times 1024\) . For each group, from left to right: cropped, synthesized, and real images. </center> and appearances. In the first example (Figure 11), we use a two-dimensional attribute vector ["Smiling", "Male"] without landmarks as inputs. In the second one, an attribute ["Male"] is used with landmarks extracted from source actors (Figure 12). The results show that both methods can control the expressions and appearances explicitly, but more subtle expressions can be controlled with landmarks. Moreover, we present more examples of attention filters during the growing process in Figure 13.

## A.3 TRAINING DETAILS

The progressive training process is illustrated in Figure 14. At a resolution lower than \(1024 \times 1024\) , the input face images, masks, landmarks and real images are all down-sampled with average pooling to fit the given scale. One of the major challenges of generating high-resolution images is the limitation of Graphics Processing Unit (GPU) memory. Most completion networks use Batch Normalization (Ioffe & Szegedy, 2015) to avoid covariate shift. However, with the limited GPU memory, only small batch sizes are supported at high resolutions, resulting in low-quality generated images. We use Instance Normalization (Ulyanov et al., 2016), similar to Zhu et al. (Zhu et al., 2017), and update \(D\) with a history of completed images instead of only the latest generated ones (Shrivastava et al., 2016) to stabilize training. At the growing stage, new layers are added to both \(D\) and \(G\) and are faded in smoothly with the current networks. After the fade-in process, the network is trained on more images for stabilization. At the growing stage, we used 300K and 150K training images per resolution for \([8 \times 8\) to \(256 \times 256]\) and \([512 \times 512, 1024 \times 1024]\) respectively; at the stabilizing stage, we used 600K and 430K images for \(4 \times 4\) and \([8 \times 8\) to \(1024 \times 1024]\) respectively. <--- Page Split --->
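The history-based discriminator update can be sketched as a small replay buffer in the spirit of Shrivastava et al. (2016); the buffer capacity and the 50/50 mixing ratio below are assumptions, not values reported in the paper.

```python
import random
import torch

class FakeImageHistory:
    """Replay buffer for discriminator updates: D sees a mix of freshly
    completed images and images generated at earlier iterations."""

    def __init__(self, capacity: int = 50):
        self.capacity = capacity
        self.buffer = []

    def query(self, fakes: torch.Tensor) -> torch.Tensor:
        out = []
        for img in fakes:
            img = img.detach().unsqueeze(0)
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)   # fill the buffer first
                out.append(img)
            elif random.random() < 0.5:
                i = random.randrange(self.capacity)
                out.append(self.buffer[i])  # return an old image ...
                self.buffer[i] = img        # ... and store the new one
            else:
                out.append(img)
        return torch.cat(out, dim=0)
```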
The leftmost column shows cropped images while the rightmost shows synthesized images. </center>

In the experiments, the reconstruction trade-off parameter was set to \(\kappa = 0.7\) to focus more on the target region. To balance the effects of the different objective functions, we used \(\lambda_{attr} = 2\) , \(\lambda_{rec} = 500\) , \(\lambda_{feat} = 8\) , and \(\lambda_{bdy} = 5000\) . The Adam solver (Kingma & Ba, 2014) was employed with a learning rate of 0.0001.

## A.4 NETWORK STRUCTURE

The generator \(G\) in our model is implemented by a U-shape network architecture consisting of a first component \(G_{enc}\) , transforming the observed image and its mask to a latent vector, and a second component \(G_{compl}\) , transforming the concatenated vector (latent and attribute) to the completed image. There are residual connections between layers in \(G_{enc}\) and their counterparts in \(G_{compl}\) , similar in spirit to the U-Net (Ronneberger et al., 2015) and the Hourglass network (Newell et al., 2016), to consolidate information across multiple scales. Figure 15 illustrates the two structures of a layer in the generator for training without and with attributes respectively, which are adapted from the U-Net and Hourglass network. Every convolutional layer (Conv) is followed by an Instance Normalization (InsNorm) and a LeakyReLU layer, except that the Conv before the latent vector (i.e. the second Conv layer in Table 2) is not followed by an InsNorm. Additionally, there are no InsNorms or LeakyReLUs after the last Convs of both \(D_{cls}\) and \(D_{attr}\) . All Convs used in the residual blocks of the skip connections of our conditional model have a kernel size of three and a stride of one. Since we use Instance Normalization rather than Batch Normalization, the batch size is not an important hyper-parameter. Technically, for faster computation, we use as large a batch size as possible so long as it does not exceed the GPU memory limit. Tables 2-4 show the architecture of the components of the generator \(G\) , while Table 5 shows the components of the discriminator \(D\) . In Table 5, depending on the operation of the skip connection (Skip), the number of filters is either doubled (for a concatenation operation) or remains the same (for an addition operation). <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 14: The progressive training process of our approach. The training of the completion network (or the "generator" \(G\) ) and the discriminator \(D\) starts at low resolution \((4 \times 4)\) . Higher layers are added to both \(G\) and \(D\) progressively to increase the resolution of the synthesized images. The \(r \times r\) cubes in the figure represent convolutional layers that handle resolution \(r\) . For the conditional version, attribute labels \(A^{obs}\) are concatenated to the latent vectors. The discriminator \(D\) splits into two branches in the final layers: \(D_{cls}\) , which classifies whether an input image is real, and \(D_{attr}\) , which predicts attribute vectors. Note that \(X^G\) and \(X^D\) are both sets of inputs as defined in the paper. We use images in this figure as a simplified illustration. </center> ![](images/17_1.jpg) <center>Figure 15: Illustrations of a single layer of our architecture. There are skip connections between mirrored encoder and decoder layers. Left: the structure of the completion network; the skip connection is a copy-and-concatenate operation.
This structure helps preserve the identity information between the synthesized images and real faces, resulting in little deformation. Right: the structure of the conditional completion network; residual connections are added to the encoder, and the skip connections are residual blocks instead of direct concatenation. The attributes of the synthesized content can be manipulated more easily with this structure. Each blue rectangle represents a set of Convolutional, Instance Normalization and Leaky Rectified Linear Unit (LeakyReLU) (Maas et al., 2013) layers. </center> <--- Page Split --->

Table 2: Top: the Encoding component of generator \(G_{enc}\) . Bottom: the latent layer. \(N\) is the length of an attribute vector. The attribute concatenation operation (AttrConcat) is only activated for our conditional model. <table><tr><td>Type</td><td>Kernel</td><td>Stride</td><td>Output Shape</td></tr><tr><td>Input Image</td><td>-</td><td>-</td><td>4 × 1024 × 1024</td></tr><tr><td>Conv</td><td>1 × 1</td><td>1 × 1</td><td>16 × 1024 × 1024</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>32 × 1024 × 1024</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>32 × 1024 × 1024</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>32 × 512 × 512</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 512 × 512</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 512 × 512</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>64 × 256 × 256</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>128 × 256 × 256</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>128 × 256 × 256</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>128 × 128 × 128</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>256 × 128 × 128</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>256 × 128 × 128</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>256 × 64 × 64</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 64 × 64</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 64 × 64</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>512 × 32 × 32</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 32 × 32</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 32 × 32</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>512 × 16 × 16</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 16 × 16</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 16 × 16</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>512 × 8 × 8</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 8 × 8</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 8 × 8</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>512 × 4 × 4</td></tr></table>

<table><tr><td>Type</td><td>Kernel</td><td>Stride</td><td>Output Shape</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 4 × 4</td></tr><tr><td>Conv</td><td>4 × 4</td><td>1 × 1</td><td>512 × 1 × 1</td></tr><tr><td>AttrConcat</td><td>-</td><td>-</td><td>512(+N) × 1 × 1</td></tr><tr><td>Conv</td><td>4 × 4</td><td>1 × 1</td><td>512 × 4 × 4</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 4 × 4</td></tr></table> <--- Page Split --->

Table 3: The completion component of generator \(G_{compl}\) . Depending on the particular operation of the skip connection (Skip), the number of filters is either doubled (for concatenation operations) or remains the same (for addition operations).
In practice, \(G_{compl}\) outputs a feature map that can be used to generate an RGB image (with ToRGB layers) or to predict a read/write filter (with ToFilter layers, see Table 4). <table><tr><td>Type</td><td>Kernel</td><td>Stride</td><td>Output Shape</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>512 × 8 × 8</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>1024 (512) × 8 × 8</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 8 × 8</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 8 × 8</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>512 × 16 × 16</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>1024 (512) × 16 × 16</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 16 × 16</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 16 × 16</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>512 × 32 × 32</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>1024 (512) × 32 × 32</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 32 × 32</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 32 × 32</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>512 × 64 × 64</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>1024 (512) × 64 × 64</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 64 × 64</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 64 × 64</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>512 × 128 × 128</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>256 × 128 × 128</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>512 (256) × 128 × 128</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>256 × 128 × 128</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>256 × 128 × 128</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>256 × 256 × 256</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>128 × 256 × 256</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>256 (128) × 256 × 256</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>128 × 256 × 256</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>128 × 256 × 256</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>128 × 512 × 512</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 512 × 512</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>128 (64) × 512 × 512</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 512 × 512</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 512 × 512</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>64 × 1024 × 1024</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>32 × 1024 × 1024</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>64 (32) × 1024 × 1024</td></tr></table>

Table 4: Left: the ToRGB layers that convert feature maps to RGB images. Right: the ToFilter layers that predict a read/write filter from feature maps. <table><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>32 × 1024 × 1024</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>32 × 1024 × 1024</td></tr><tr><td>Conv</td><td>1 × 1</td><td>1 × 1</td><td>3 × 1024 × 1024</td></tr></table> <table><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 1024 × 1024</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 1024 × 1024</td></tr><tr><td>Conv</td><td>1 × 1</td><td>1 × 1</td><td>1 × 1024 × 1024</td></tr></table> <--- Page Split --->
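Putting the tables together, a hedged PyTorch sketch of the repeated Conv-InsNorm-LeakyReLU unit from A.4 and of the ToRGB/ToFilter heads in Table 4. The LeakyReLU slope (0.2), the "same" padding, the input channel counts, and the sigmoid on the filter output are assumptions.

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Repeated generator unit from A.4: Conv -> InstanceNorm -> LeakyReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

# ToRGB head (Table 4, left); input channel count (32) is read off the
# final rows of Table 3 and is an assumption:
to_rgb = nn.Sequential(
    conv_block(32, 32),
    conv_block(32, 32),
    nn.Conv2d(32, 3, kernel_size=1),  # bare 1x1 Conv, no norm/activation
)

# ToFilter head (Table 4, right); the sigmoid keeping filter values in
# [0, 1] is an assumption:
to_filter = nn.Sequential(
    conv_block(64, 64),
    conv_block(64, 64),
    nn.Conv2d(64, 1, kernel_size=1),
    nn.Sigmoid(),
)
```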
rectangular regions), while FAM is an attention model in the frequency domain. But different from a regular band- pass filter, the filters generated by FAM is predicted based on semantics of images which are enforced by the objective function (Eqn. 10), and thus is also sensitive to locations inferred on- the- fly in a coarse- to- fine manner. For instance, the model pays more attention to eye regions where the rich details aggregate, especially at a high resolution. <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 3: The proposed FAM used in growing GANs progressively. Here, we show the example of increasing resolutions from \(32 \times 32\) to \(64 \times 64\) . See text for details. Best viewed in color and magnification. </center> Recall that at the growing stage \(r\) \((r > 1)\) in training, we have \(G_{r} = (G_{r - 1},G_{r}^{fade - in})\) . In the vanilla progressive GANs, we will compute the completed image \(I_{r}^{syn} = G_{r}(I_{r}^{obs},M_{r},L_{r},A)\) (simplified from Eqn. 2 for clarity). The previously trained \(G_{r - 1}\) can be changed while optimizing the stage \(r\) which may lead to unexpected updating. Our FAM prevents \(G_{r - 1}\) from changing to wrong directions. To that end, we first introduce a read module which utilizes a read filter \(\hat{F}_{read}\) to extract the most valuable information in both \(I_{r}^{obs}\) and \(I_{r - 1}^{obs}\) , \[\hat{I}_{r}^{obs},\hat{I}_{r - 1}^{obs} = read(I_{r}^{obs},I_{r - 1}^{obs},\hat{F}_{read}), \quad (5)\] which is implemented by, \[\begin{array}{r l} & {\hat{I}_{r}^{obs} = \hat{F}_{read}\odot (1 - M_{r})\odot I_{r}^{obs},}\\ & {\hat{I}_{r - 1}^{obs} = D o w n s a m p l e((1 - \hat{F}_{read})\odot (1 - M_{r})\odot \hat{I}_{r - 1}^{obs}),} \end{array} \quad (6)\] where \(\odot\) denotes element- wise multiplication. \(\hat{I}_{r - 1}^{obs}\) is up- sampled from \(I_{r}^{obs}\) to match the resolution of \(I_{r}^{obs}\) . \(\hat{I}_{r - 1}^{obs}\) represents the blurred (i.e. low- frequency) version of \(I_{r}^{obs}\) since high- frequency information is lost when \(I_{r - 1}^{obs}\) is down- sampled from \(I_{r}^{obs}\) . The read filter \(\hat{F}_{read}\) is defined by, \[\hat{F}_{read} = \beta \cdot F_{read} + \gamma ,\qquad \beta :\left\{ \begin{array}{ll}2\alpha , & \gamma :\left\{ \begin{array}{ll}0, & \alpha \leq 0.5\\ 2\alpha -1, & 0.5< \alpha \leq 1.0 \end{array} \right.\\ 2 - 2\alpha , & \gamma :\left\{ \begin{array}{ll}0, & \alpha \leq 0.5\\ 2\alpha -1, & 0.5< \alpha \leq 1.0 \end{array} \right. \end{array} \right. \quad (8)\] where \(F_{read}\) is computed by \(F_{read} = T o F i l t e r(G_{r - 1}^{fixed}(I_{r - 1}^{obs},M_{r - 1},L_{r - 1}))\) using a trained generator \(G_{r - 1}^{fixed}\) with fixed weights and a small trainable network ToFilter. \(\alpha\) is a weight increasing linearly from 0 to 1 proportional to the number of seen images during growing. \(\hat{F}_{read}\) starts as an all- zero filter, is adjusted by a trainable ToFilter at the growing stages and eventually increased to all ones. As illustrated in Figure 3, given the outputs from the read module, \(\hat{I}_{r}^{obs}\) and \(\hat{I}_{r - 1}^{obs}\) , we can generate the corresponding completed images, \(I_{r}^{syn}\) and the up- sampled \(I_{r - 1}^{syn}\) respectively, and the write filter \(F_{write}\) . 
Then, to generate the final completed image at stage \(r\) , we also introduce a write module, \[\begin{array}{r l} & {\hat{I}_{r}^{syn} = write(I_{r}^{syn},I_{r - 1}^{syn},\hat{F}_{write})}\\ & {\qquad = (I_{r}^{syn}\cdot \alpha +I_{r - 1}^{syn}\cdot (1 - \alpha))\odot (1 - M_{r}) + (\hat{F}_{write}\odot I_{r}^{syn} + (1 - \hat{F}_{write})\odot I_{r - 1}^{syn})\odot M_{r},} \end{array} \quad (9)\] where \(\hat{F}_{write} = \beta \cdot F_{write} + \gamma\) and \(F_{write}\) is predicted from the last feature maps. At low resolutions, both the read and write modules exploit mostly low- frequency information, but gradually move to higher- frequency information as the resolution increases, guided by the minimization of the objective function (Eqn. 10). \(F_{read}\) and \(F_{write}\) can be discarded when the growing process is done. A testing image only needs to go through one generator that is independent of FAM (blue flow in Figure 3). This is more efficient than the Laplacian GAN (Denton et al., 2015), which requires feeding a testing sample through a cascade of generators and uses multiple discriminators in training.

#### 3.2.2 LOSS FUNCTIONS

Besides extending the original adversarial loss function, we design three new loss functions to enforce sharp image completion.

Adversarial Loss Given an uncorrupted image \(I^{gt}\) , its attribute vector \(A\) , a mask \(M\) , landmarks \(L\) , and the corresponding corrupted image \(I^{obs}\) , we define the loss by \(l_{adv}(I^{gt},M,L,I^{obs},A|G,D) = \log (1 - D_{cls}(I^{syn},L)) + \log D_{cls}(I^{gt},L)\) , where \(I^{syn} = G(I^{obs},M,L,A)\) .

![](images/5_0.jpg) <center>Figure 4: Comparison with Context Encoder (CE) on high-resolution face completion. As the resolution increases, CE generates more distorted images while our method produces sharper faces with more details. </center>

Attribute Loss Similar to the InfoGAN- style models (Chen et al., 2016; Choi et al., 2017), for the attribute- prediction head classifier in the discriminator, we define the attribute loss as the cross- entropy between the predicted attribute vectors, \(\hat{A}^{gt} = D_{attr}(I^{gt},L)\) and \(\hat{A}^{syn} = D_{attr}(I^{syn},L)\) , and the corresponding target \(A\) , for both the real uncorrupted image and the synthesized image. We have \(l_{attr}(I^{gt},A,M,I^{obs}|G,D) = CrossEntropy(A,\hat{A}^{gt}) + CrossEntropy(A,\hat{A}^{syn})\) .

Reconstruction Loss Since our method generates the entire completed face rather than only the target region, we define a weighted reconstruction loss \(l_{rec}\) to preserve both the target region and the context region, defined as \(l_{rec}(I^{gt},M,L,I^{obs},A|G) = \| \kappa \odot M\odot I^{diff}\|_{1} + \| (1 - \kappa)\odot (1 - M)\odot I^{diff}\|_{1}\) , where \(I^{diff} = I^{gt} - I^{syn}\) and \(\kappa\) is a trade- off parameter.

Feature Loss In addition to the reconstruction loss in terms of pixel values, we also encourage the synthesized image to have a similar feature representation (Johnson et al., 2016) based on a pre- trained deep neural network \(\phi\) . Let \(\phi_{j}\) be the activations of the \(j\) - th layer of \(\phi\) ; the feature loss is defined by \(l_{feat}(I^{gt},M,L,I^{obs},A|\phi ,G) = \| \phi_{j}(I^{gt}) - \phi_{j}(I^{syn})\|_{2}^{2}\) . In our experiments, \(\phi_{j}\) is the \(relu2\_ 2\) layer of a 16- layer VGG network (Simonyan & Zisserman, 2014) pre- trained on the ImageNet dataset (Russakovsky et al., 2015).
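As a companion to the read-module sketch above, a minimal sketch of the write blend in Eqn. 9 follows. The up-sampling of \(I_{r-1}^{syn}\) to resolution \(r\) and all shapes are illustrative assumptions.

```python
import torch

def fam_write(I_r_syn, I_rm1_syn_up, M_r, F_write, alpha):
    """Write module of Eqn. 9 (sketch).

    I_r_syn:      completion from the new layers at resolution r
    I_rm1_syn_up: completion from stage r-1, already up-sampled to r
    M_r:          binary mask at resolution r (1 = hole)
    F_write:      raw write filter predicted from the last feature maps
    alpha:        linear fade-in weight in [0, 1]."""
    # Same schedule as the read filter: F_hat = beta * F_write + gamma.
    if alpha <= 0.5:
        beta, gamma = 2.0 * alpha, 0.0
    else:
        beta, gamma = 2.0 - 2.0 * alpha, 2.0 * alpha - 1.0
    F_hat = beta * F_write + gamma
    # Context region: plain fade-in blend; hole region: frequency-
    # oriented blend through the write filter.
    ctx = (I_r_syn * alpha + I_rm1_syn_up * (1.0 - alpha)) * (1.0 - M_r)
    hole = (F_hat * I_r_syn + (1.0 - F_hat) * I_rm1_syn_up) * M_r
    return ctx + hole
```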
Boundary Loss To make the generator learn to blend the synthesized target region seamlessly with the original context region, we further define a close- up reconstruction loss along the boundary of the mask. Similar to (Yeh et al., 2017), we first create a weight kernel \(w\) based on the mask image \(M\) : \(w\) is computed by blurring the mask boundary in \(M\) with a mean filter, so that pixels closer to the mask boundary are assigned larger weights. The kernel size of the mean filter is seven in our experiments. We have \(l_{bdy}(I^{gt},M,L,I^{obs},A|G) = \| w\odot (I^{gt} - I^{syn})\|_{1}\) .

Our model is trained end- to- end by integrating the expected losses defined above under the minimax game setting: \[\min_{G}\max_{D}\mathcal{L}_{adv}(G,D) + \lambda_{1}\mathcal{L}_{attr}(G,D) + \lambda_{2}\mathcal{L}_{rec}(G) + \lambda_{3}\mathcal{L}_{feat}(G,\phi) + \lambda_{4}\mathcal{L}_{bdy}(G), \quad (10)\] where the \(\lambda_{i}\) 's are trade- off parameters between the different loss terms.

Training without Multiple Controllable Attributes. Since training without attributes is a special case of the proposed formulation stated above, we can simply remove the components involving attributes, such as the attribute loss, in a straightforward way. The resulting system still enjoys end- to- end learning.

## 4 EXPERIMENTS

Datasets and Experiment Settings We used the CelebA- HQ (Karras et al., 2017) dataset for evaluation. It contains 30,000 aligned face images at \(1024 \times 1024\) resolution. The dataset is split randomly while ensuring there is no identity overlap between the test and training sets: 3,009 images for testing and 26,991 for training. There were two types of masks: center and random. The center mask is a square region in the middle of the image with a side length of half the image size. The random masks, generated in a similar way to previous methods (Iizuka et al., 2017; Yu et al., 2018), are rectangular regions with random width- to- height ratios, sizes and locations, covering about \(5\%\) to \(25\%\) of the original images. Hyper- parameters used for training are listed in the supplemental materials.

Quality Comparison with the Context Encoder Our method was compared with the Context Encoder (CE) (Pathak et al., 2016) on high- resolution face completion. Since the original networks of CE were designed for \(128 \times 128\) images, we used a naive approach to fit it to different resolutions: one, two, and three convolutional layers were added to the encoder, decoder and discriminator for the \(256 \times 256\) , \(512 \times 512\) and \(1024 \times 1024\) networks respectively. The result (Figure 4) shows that, as the resolution increased, our method learned details incrementally and synthesized sharper faces, while CE generated poorer images with more distortions.
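The boundary loss above reduces to blurring the mask with a 7x7 mean filter and using the resulting boundary band as per-pixel L1 weights. The sketch below uses one plausible construction of that band (the product of the blurred mask and its complement, which peaks exactly on the boundary); the exact weighting used by the authors may differ, and everything besides the 7x7 mean filter is an assumption.

```python
import torch
import torch.nn.functional as F

def boundary_weights(M, k=7):
    """One plausible weight kernel w (sketch): blur the binary mask
    M (B, 1, H, W) with a k x k mean filter, then keep the transition
    band, which is 0 far from the mask boundary and 1 on it."""
    kernel = torch.ones(1, 1, k, k, device=M.device) / (k * k)
    blurred = F.conv2d(M, kernel, padding=k // 2)
    return 4.0 * blurred * (1.0 - blurred)

def boundary_loss(I_gt, I_syn, M, k=7):
    """l_bdy = || w * (I_gt - I_syn) ||_1 with w from the blurred mask."""
    w = boundary_weights(M, k)
    return (w * (I_gt - I_syn)).abs().sum()
```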
Table 1: Quantitative comparison between our method and state-of-the-art methods <table><tr><td>Method</td><td>Resolution</td><td>L1 (%)</td><td>L2 (%)</td><td>PSNR</td></tr><tr><td>GL (Iizuka et al., 2017)</td><td>128 × 128</td><td>9.34</td><td>1.75</td><td>18.22</td></tr><tr><td>Ours</td><td>128 × 128</td><td>7.8</td><td>1.42</td><td>19.15</td></tr><tr><td>CTX (Yu et al., 2018)</td><td>256 × 256</td><td>8.53</td><td>1.75</td><td>18.41</td></tr><tr><td>Ours</td><td>256 × 256</td><td>7.05</td><td>1.21</td><td>19.97</td></tr></table>

Quantitative Comparison with State- of- the- art Methods As noted in the literature (Yeh et al., 2017; Yu et al., 2018), commonly used reconstruction metrics such as mean \(L1\) and \(L2\) errors and peak signal- to- noise ratio (PSNR) are not good quantitative evaluation metrics for inpainting methods, since image completion aims at filling missing regions with plausible content rather than reconstructing the original content. As a reference, we show the comparison between our method and state- of- the- art models at their reported resolutions: \(128 \times 128\) for GL with center masks (using the implementation of Yu et al. (2018)) and \(256 \times 256\) for CTX with random masks (Table 1).

Semantic Completion We first trained a high- resolution \((1024 \times 1024)\) model with center masks (examples shown in Figure 5) to test whether our model is capable of learning high- level semantics and structures of faces and synthesizing large missing regions. A second model was trained with random masks, and is able to handle arbitrary (e.g. irregular hand- drawn) mask shapes. The results (Figure 5) show that our model captures the anatomical structure of faces and generates content that is consistent with the holistic semantics.

Attribute Controller Unlike previous image completion techniques (Iizuka et al., 2017; Yeh et al., 2017; Li et al., 2017; Yang et al., 2016; Pathak et al., 2016; Yu et al., 2018) that generate only random plausible content, our network completes faces with structurally meaningful content whose appearance and expression are controllable. Existing approaches (Mirza & Osindero, 2014; Choi et al., 2017; Kaneko et al., 2017) can only control facial expressions coarsely (e.g. smiling or not smiling). In contrast, our model is able to control subtle expressions. In this experiment, the face appearance was conditioned on a "Male" attribute and we used landmarks from source actors to control the synthesized expressions (Figure 5). This \(512 \times 512\) model was trained from scratch. The example demonstrates the potential application of our method to face reenactment (Li et al., 2012; Garrido et al., 2014; Thies et al., 2016).

Computation Time Once trained, our model completes a face image with a single forward pass, resulting in high efficiency. We tested our model on a Titan Xp GPU by processing 3,000 \(1024 \times 1024\) images with \(512 \times 512\) holes; the mean completion time is 0.54 seconds per image. In contrast, existing CNN- based high- resolution inpainting approaches often need much longer to process an image. For instance, it took about 1 minute for the model of Yang et al. (2016) to complete a \(512 \times 512\) image on a Titan X GPU.
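For reference, the reconstruction metrics of Table 1 can be computed as below. This is a generic sketch assuming images in \([0, 1]\) and errors reported as percentages; it is not the authors' evaluation script.

```python
import numpy as np

def completion_metrics(gt, pred):
    """Mean L1 (%), mean L2 (%), and PSNR between two images in [0, 1]."""
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    diff = gt - pred
    mse = np.mean(diff ** 2)
    l1 = np.mean(np.abs(diff)) * 100.0   # mean absolute error, percent
    l2 = mse * 100.0                     # mean squared error, percent
    psnr = 10.0 * np.log10(1.0 / mse)    # peak value is 1.0
    return l1, l2, psnr
```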
User Study We compared our method with CTX (Yu et al., 2018), the state- of- the- art CNN- based face completion approach capable of completing face images at \(256 \times 256\) resolution, in a pilot user study at \(256 \times 256\) resolution with random masks. 27 subjects (15 male and 12 female participants, aged 22 to 32) volunteered to participate. There were four sessions of pairwise A/B tests. Each time, a user was shown two images and asked to choose the more realistic one. In the first session, two images completed from the same image by different methods were shown; in sessions two to four, a real image and a corresponding synthesized image were shown. In the first session, viewing time was unlimited; in sessions two to four, images were displayed for 250ms, 1000ms and 4000ms respectively. The results (Figure 6) show that significantly more images generated by our method were favored by the viewers. Overall, our approach generated sharper images with more details and fewer distortions.

Limitations Though our method has low inference time, the training time is long due to the progressive growing of the networks. In our experiment, it took about three weeks to train a \(1024 \times 1024\) model on a Titan Xp GPU. Additionally, by zooming in on our results, we find that our high- resolution model fails to learn low- level skin textures, such as furrows and sweat pores. Moreover, the model could generate distorted content when removing large parts (e.g. hats), or synthesize some plausible but unnatural faces (Figure 7). Furthermore, for facial expression transfer, our method requires that the head poses of the source and target faces be similar. These issues are left for future work.

![](images/7_0.jpg) <center>Figure 5: Sample results of our approach. The left two groups are completion results with center and irregular hand-drawn masks at \(1024 \times 1024\) . For each group, from left to right columns: cropped, synthesized and real images. The third group shows the performance of the attribute controller, in which the first and third rows are corrupted images and source actors whose facial landmarks are used to control the expressions of the synthesized faces (rows two and four). The rightmost two columns are conditioned on the "Male" attribute while columns two and three use "Not Male". The leftmost column uses the ground-truth landmarks and attributes. </center>

![](images/7_1.jpg) <center>Figure 6: Comparison of naturalness: ours vs. CTX (Yu et al., 2018). The leftmost bar chart shows the average percentage of cases in which the images generated by our method look more natural than those of CTX. The second bar chart shows the percentage of cases in which a synthesized image is considered more realistic than a ground-truth (GT) one, with display times of 250ms, 1000ms and 4000ms. The right figure shows samples used in the user study. The first group comes from session one, while groups two and three are both from session four (the 4000ms session). The preferred images are marked with red boxes. </center>

![](images/7_2.jpg) <center>Figure 7: Some failure cases of our approach. </center>

## 5 CONCLUSION

We propose a progressive GAN with frequency- oriented attentive modules (FAM) for high- resolution and fast face completion, which learns face structures from coarse to fine guided by the FAM.
By consolidating information across all scales, our model not only outperforms state- of- the- art methods by generating sharper images at low resolutions, but is also able to synthesize faces at higher resolutions than existing techniques. A conditional version of our model allows users to explicitly control the properties of the generated images with attribute vectors and landmarks. Our system is designed in an end- to- end manner: it learns to generate completed faces directly and efficiently.

## A APPENDIX

## A.1 ABLATION STUDY

The encoder- decoder structure has been widely used in image completion networks (Pathak et al., 2016; Iizuka et al., 2017; Li et al., 2017). However, the encoding process is a lossy compression, which makes it difficult to reconstruct the original contextual regions. Additionally, since much contextual information is lost during encoding, it is also difficult for the decoder to reconstruct content that perfectly matches the context. UNet adds skip connections between the mirrored layers of the encoder and decoder, consolidating information from all previous layers rather than depending only on the latent code. Therefore, UNet can be used to generate a completed image conditioned on the corrupted input. Unfortunately, if UNet is applied to image completion networks directly, it is much easier for the generator to reconstruct the context than the content, resulting in inconsistent colors and textures along the boundaries. This often causes a vanishing- gradient problem where both the generator and discriminator stop learning while the synthesized content is still blurry (Figure 8).

![](images/12_0.jpg) <center>Figure 9: High-resolution face completion results with center masks. All images are resized from \(1024 \times 1024\) . For each group, from left to right: cropped, synthesized, and real images. </center>

We address this problem from two directions. First, we adopt progressive training of GANs. Our model starts at a low resolution, where it is difficult for the discriminator to capture the inconsistency between content and context regions. As the network grows, since the lower resolution is already trained, the inconsistency between context and content regions is reduced. Second, we design the boundary loss and the weighted reconstruction loss, which encourage the network to focus on synthesizing the boundary and content regions respectively. The feature loss also helps stabilize training by encouraging the synthesized images to have high- level features similar to real samples. These two improvements boost performance significantly.

The image quality is further enhanced by incorporating FAM. First, the reader and writer act as band- pass filters that minimize the influence of high- frequency noise on low- level network parameters, thus avoiding distortion of the already- learned global structures. Second, FAM is predicted from the facial semantics, so it encourages the model to focus on learning features in regions with richer details. For instance, the eyes synthesized by models trained with FAM are much sharper, more vivid and realistic. This ablation study was run at \(256 \times 256\) to provide an intuitive illustration of the impact of the different components of our method. Since the training of high- resolution models is very time consuming, a more thorough ablation study is left for future work.
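The weighted reconstruction loss discussed above (Section 3.2.2) is simple to express in code. Below is a minimal sketch with \(\kappa = 0.7\) as reported in the training details; the shapes and the L1 reduction are illustrative assumptions.

```python
import torch

def weighted_reconstruction_loss(I_gt, I_syn, M, kappa=0.7):
    """l_rec = ||kappa * M * diff||_1 + ||(1-kappa) * (1-M) * diff||_1

    M is the binary mask (1 = missing/target region); kappa > 0.5 puts
    more weight on the target region than on the context region."""
    diff = I_gt - I_syn
    target = (kappa * M * diff).abs().sum()
    context = ((1.0 - kappa) * (1.0 - M) * diff).abs().sum()
    return target + context
```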
## A.2 HIGH-RESOLUTION COMPLETION RESULTS

More high- resolution face completion results with various mask types are shown in Figure 9 and Figure 10. Additionally, Figure 11 and Figure 12 present two ways to control expressions and appearances. In the first example (Figure 11), we use a two- dimensional attribute vector ["Smiling", "Male"] without landmarks as input. In the second, the attribute ["Male"] is used together with landmarks extracted from source actors (Figure 12). The results show that both methods can control expressions and appearances explicitly, but more subtle expressions can be controlled with landmarks. Moreover, we present more examples of attention filters during the growing process in Figure 13.

![](images/13_0.jpg) <center>Figure 10: High-resolution face completion results with random and hand-drawn masks. All images are resized from \(1024 \times 1024\) . For each group, from left to right: cropped, synthesized, and real images. </center>

## A.3 TRAINING DETAILS

The progressive training process is illustrated in Figure 14. At resolutions lower than \(1024 \times 1024\) , the input face images, masks, landmarks and real images are all down- sampled with average pooling to fit the given scale. One of the major challenges in generating high- resolution images is the limited Graphics Processing Unit (GPU) memory. Most completion networks use Batch Normalization (Ioffe & Szegedy, 2015) to avoid covariate shift. However, with limited GPU memory, only small batch sizes are possible at high resolutions, resulting in low quality of the generated images. We use Instance Normalization (Ulyanov et al., 2016), similar to Zhu et al. (2017), and update \(D\) with a history of completed images instead of only the latest generated one (Shrivastava et al., 2016) to stabilize training. At the growing stage, new layers are added to both \(D\) and \(G\) and are faded in smoothly with the current networks. After the fade- in process, the network is trained on more images for stabilization. At the growing stage, we used \(300\mathrm{K}\) training images for resolutions \(8 \times 8\) to \(256 \times 256\) and \(150\mathrm{K}\) for \(512 \times 512\) and \(1024 \times 1024\) ; at the stabilizing stage, we used \(600\mathrm{K}\) images for \(4 \times 4\) and \(430\mathrm{K}\) for \(8 \times 8\) to \(1024 \times 1024\) .

![](images/14_0.jpg) <center>Figure 11: Face completion results with attribute controller. In this example, only the attribute vector ("Smiling", "Male") is used to control the properties of the generated images; facial expressions are controlled with the latent variables rather than landmarks. From column two to five, the attributes are: [0, 0], [0, 1], [1, 0], [1, 1], where "1" denotes that an attribute is turned on. </center>

![](images/15_0.jpg) <center>Figure 12: Face completion results with attribute controller. The attribute "Male" is used to control the appearance ("Male" for columns two and three; "Not Male" for columns four and five). Landmarks from source actors (rows one and three) are used to control the expressions of the synthesized images (rows two and four). The leftmost column shows cropped images and faces generated with ground-truth attributes and landmarks. </center>

![](images/16_0.jpg) <center>Figure 13: More examples of the attentive read/write filters while the resolution grows from \(8 \times 8\) to \(1024 \times 1024\) .
The leftmost column shows cropped images while the rightmost shows synthesized images. </center>

In the experiments, the reconstruction trade- off parameter was set to \(\kappa = 0.7\) to focus more on the target region. To balance the effects of the different objective terms, we used \(\lambda_{attr} = 2\) , \(\lambda_{rec} = 500\) , \(\lambda_{feat} = 8\) , and \(\lambda_{bdy} = 5000\) . The Adam solver (Kingma & Ba, 2014) was employed with a learning rate of 0.0001.

## A.4 NETWORK STRUCTURE

The generator \(G\) in our model is implemented with a U- shaped network architecture consisting of a first component \(G_{enc}\) , which transforms the observed image and its mask into a latent vector, and a second component \(G_{compl}\) , which transforms the concatenated vector (latent and attribute) into the completed image. There are residual connections between layers in \(G_{enc}\) and their counterparts in \(G_{compl}\) , similar in spirit to the U- Net (Ronneberger et al., 2015) and the Hourglass network (Newell et al., 2016), to consolidate information across multiple scales. Figure 15 illustrates the two structures of a layer in the generator for training without and with attributes respectively, adapted from the U- Net and Hourglass networks. Every convolutional layer (Conv) is followed by an Instance Normalization (InsNorm) and a LeakyReLU layer, except that the Conv before the latent vector (i.e. the second Conv layer in the bottom part of Table 2) is not followed by an InsNorm. Additionally, there are no InsNorms or LeakyReLUs after the last Convs of either \(D_{cls}\) or \(D_{attr}\) . All Convs used in the residual blocks of the skip connections of our conditional model have a kernel size of three and a stride of one. Since we use Instance Normalization rather than Batch Normalization, the batch size is not an important hyper- parameter; for faster computation, we use as large a batch size as possible so long as it does not exceed the GPU memory limit. Tables 2 to 4 show the architecture of the components of the generator \(G\) , while Table 5 shows the components of the discriminator \(D\) . In Table 5, depending on the operation of the skip connection (Skip), the number of filters is either doubled (for a concatenation operation) or remains the same (for an addition operation).

![](images/17_0.jpg) <center>Figure 14: The progressive training process of our approach. The training of the completion network (the "generator" \(G\) ) and the discriminator \(D\) starts at a low resolution \((4 \times 4)\) . Higher layers are added to both \(G\) and \(D\) progressively to increase the resolution of the synthesized images. The \(r \times r\) cubes in the figure represent convolutional layers that handle resolution \(r\) . For the conditional version, attribute labels \(A^{obs}\) are concatenated to the latent vectors. The discriminator \(D\) splits into two branches in the final layers: \(D_{cls}\) , which classifies whether an input image is real, and \(D_{attr}\) , which predicts attribute vectors. Note that \(X^G\) and \(X^D\) are both sets of inputs as defined in the paper; we use images in this figure as a simplified illustration. </center>

![](images/17_1.jpg) <center>Figure 15: Illustrations of a single layer of our architecture. There are skip connections between mirrored encoder and decoder layers. Left: the structure of the completion network; the skip connection is a copy-and-concatenate operation.
This structure helps preserve the identity information between the synthesized images and the real faces, resulting in little deformation. Right: the structure of the conditional completion network; residual connections are added to the encoder, and the skip connections are residual blocks instead of direct concatenations. The attributes of the synthesized content can be manipulated more easily with this structure. Each blue rectangle represents a set of Convolutional, Instance Normalization and Leaky Rectified Linear Unit (LeakyReLU) (Maas et al., 2013) layers. </center>

Table 2: Top: the Encoding component of generator \(G_{enc}\) . Bottom: the latent layer. \(N\) is the length of an attribute vector. The attribute concatenation operation (AttrConcat) is only activated for our conditional model. <table><tr><td>Type</td><td>Kernel</td><td>Stride</td><td>Output Shape</td></tr><tr><td>Input Image</td><td>-</td><td>-</td><td>4 × 1024 × 1024</td></tr><tr><td>Conv</td><td>1 × 1</td><td>1 × 1</td><td>16 × 1024 × 1024</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>32 × 1024 × 1024</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>32 × 1024 × 1024</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>32 × 512 × 512</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 512 × 512</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 512 × 512</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>64 × 256 × 256</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>128 × 256 × 256</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>128 × 256 × 256</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>128 × 128 × 128</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>256 × 128 × 128</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>256 × 128 × 128</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>256 × 64 × 64</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 64 × 64</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 64 × 64</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>512 × 32 × 32</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 32 × 32</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 32 × 32</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>512 × 16 × 16</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 16 × 16</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 16 × 16</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>512 × 8 × 8</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 8 × 8</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 8 × 8</td></tr><tr><td>Downsample</td><td>-</td><td>-</td><td>512 × 4 × 4</td></tr></table> <table><tr><td>Type</td><td>Kernel</td><td>Stride</td><td>Output Shape</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 4 × 4</td></tr><tr><td>Conv</td><td>4 × 4</td><td>1 × 1</td><td>512 × 1 × 1</td></tr><tr><td>AttrConcat</td><td>optional</td><td>-</td><td>512(+N) × 1 × 1</td></tr><tr><td>Conv</td><td>4 × 4</td><td>1 × 1</td><td>512 × 4 × 4</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 4 × 4</td></tr></table>

Table 3: The completion component of generator \(G_{compl}\) . Depending on the particular operation of the skip connection (Skip), the number of filters is either doubled (for concatenation operations) or remains the same (for addition operations).
In practice, \(G_{compl}\) outputs a feature map that can be used either to generate an RGB image (with the ToRGB layers) or to predict a read/write filter (with the ToFilter layers; see Table 4). <table><tr><td>Type</td><td>Kernel</td><td>Stride</td><td>Output Shape</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>512 × 8 × 8</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>1024 (512) × 8 × 8</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 8 × 8</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 8 × 8</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>512 × 16 × 16</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>1024 (512) × 16 × 16</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 16 × 16</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 16 × 16</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>512 × 32 × 32</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>1024 (512) × 32 × 32</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 32 × 32</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 32 × 32</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>512 × 64 × 64</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>1024 (512) × 64 × 64</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 64 × 64</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>512 × 64 × 64</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>512 × 128 × 128</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>256 × 128 × 128</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>512 (256) × 128 × 128</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>256 × 128 × 128</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>256 × 128 × 128</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>256 × 256 × 256</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>128 × 256 × 256</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>256 (128) × 256 × 256</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>128 × 256 × 256</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>128 × 256 × 256</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>128 × 512 × 512</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 512 × 512</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>128 (64) × 512 × 512</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 512 × 512</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 512 × 512</td></tr><tr><td>Upsample</td><td>-</td><td>-</td><td>64 × 1024 × 1024</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>32 × 1024 × 1024</td></tr><tr><td>Skip</td><td>-</td><td>-</td><td>64 (32) × 1024 × 1024</td></tr></table>

Table 4: Left: the ToRGB layers that convert feature maps to RGB images. Right: the ToFilter layers that predict a read/write filter from feature maps. <table><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>32 × 1024 × 1024</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>32 × 1024 × 1024</td></tr><tr><td>Conv</td><td>1 × 1</td><td>1 × 1</td><td>3 × 1024 × 1024</td></tr></table> <table><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 1024 × 1024</td></tr><tr><td>Conv</td><td>3 × 3</td><td>1 × 1</td><td>64 × 1024 × 1024</td></tr><tr><td>Conv</td><td>1 × 1</td><td>1 × 1</td><td>1 × 1024 × 1024</td></tr></table>
reject
Reject
5
ICLR_2019_paper_0831
iclr
2,019
# OVERCOMING THE DISENTANGLEMENT VS RECONSTRUCTION TRADE-OFF VIA JACOBIAN SUPERVISION Jose Lezama Universidad de la República, Uruguay jlezama@fing.edu.uy ## ABSTRACT A major challenge in learning image representations is the disentangling of the factors of variation underlying the image formation. This is typically achieved with an autoencoder architecture where a subset of the latent variables is constrained to correspond to specific factors, and the rest of them are considered nuisance variables. This approach has an important drawback: as the dimension of the nuisance variables is increased, image reconstruction is improved, but the decoder has the flexibility to ignore the specified factors, thus losing the ability to condition the output on them. In this work, we propose to overcome this trade- off by progressively growing the dimension of the latent code, while constraining the Jacobian of the output image with respect to the disentangled variables to remain the same. As a result, the obtained models are effective at both disentangling and reconstruction. We demonstrate the applicability of this method in both unsupervised and supervised scenarios for learning disentangled representations. In a facial attribute manipulation task, we obtain high quality image generation while smoothly controlling dozens of attributes with a single model. This is an order of magnitude more disentangled factors than state- of- the- art methods, while obtaining visually similar or superior results, and avoiding adversarial training<sup>1</sup>. ## 1 INTRODUCTION A desired characteristic of deep generative models is the ability to output realistic images while controlling one or more of the factors of variation underlying the image formation. Moreover, when each unit in the model's internal image representation is sensitive to each of these factors, the model is said to obtain disentangled representations. Learning such models has been approached in the past by training autoencoders where the latent variables (or a subset of them) are constrained to correspond to given factors of variation, which can be specified (supervised) or learned from the data (unsupervised) (Bengio et al., 2013; Mathieu et al., 2016; Hu et al., 2017; Szabo et al., 2018; Kim & Mnih, 2018). The remaining latent variables are typically considered nuisance variables and are used by the autoencoder to complete the reconstruction of the image. There exists one fundamental problem when learning disentangled representations using autoencoders, sometimes referred to as the "shortcut problem" (Hu et al., 2017; Szabo et al., 2018). If the dimension of the latent code is too large, the decoder ignores the latent variables associated to the specified factors of variation, and achieves the reconstruction by using the capacity available in the nuisance variables. On the other hand, if the dimension of the latent code is small, the decoder is encouraged to use the specified variables, but is also limited in the amount of information it can use for reconstruction, so the reconstructed image is more distorted with respect to the autoencoder's input. Szabo et al. (2018) showed that this trade- off between reconstruction and disentangling can indeed be traversed by varying the dimension of the latent code. However, no principled method exists to choose the optimal latent code dimension. The shortcut problem was also addressed by using additional mechanisms to make sure the decoder output is a function of the specified factors in the latent code. 
One approach, for example, consists in swapping the specified part of the latent code between different samples, and using adversarial training to make sure the output distribution is indeed conditioned on the specified factors (Mathieu et al., 2016; Lample et al., 2017; Szabó et al., 2017; Szabo et al., 2018). However, adversarial training remains a difficult and unstable optimization problem in practice.

Based on these observations, we propose a method for avoiding the shortcut problem that requires no adversarial training and achieves good disentanglement and reconstruction at the same time. Our method consists in first training an autoencoder model, the teacher, where the dimension of the latent code is small, so that the autoencoder is able to effectively disentangle the factors of variation and condition its output on them. These factors can be specified in a supervised manner or learned from the data in an unsupervised way, as we shall demonstrate. After the teacher model is trained, we construct a student model that has a larger latent code dimension for the nuisance variables. For the student, we optimize the reconstruction loss as well as an additional loss function that constrains the variation of the output with respect to the specified latent variables to be the same as the teacher's.

In what follows, we consider autoencoder models \((E,D)\) that receive an image \(\boldsymbol{x}\) as input and produce a reconstruction \(\hat{\boldsymbol{x}}:D(E(\boldsymbol {x})) = \hat{\boldsymbol{x}}\) . We consider that the latent code is always split into a specified factors part \(\boldsymbol {y}\in \mathbb{R}^{k}\) and a nuisance variables part \(\boldsymbol {z}\in \mathbb{R}^{d}\) : \(E(\boldsymbol {x}) = (\boldsymbol {y},\boldsymbol {z}),D(\boldsymbol {y},\boldsymbol {z}) = \hat{\boldsymbol{x}}\) . Consider a teacher autoencoder \((E^{T},D^{T})\) with nuisance variables dimension \(d^{T}\) , and a student autoencoder \((E^{S},D^{S})\) with nuisance variables dimension \(d^{S};d^{S} > d^{T}\) . Because the dimension of the nuisance variables of the student is larger than in the teacher model, we expect a better reconstruction from it (i.e. \(||\boldsymbol {x} - \hat{\boldsymbol{x}}^{S}||< ||\boldsymbol {x} - \hat{\boldsymbol{x}}^{T}||\) , for some norm). At the same time, we want the student model to maintain the same disentangling ability as the teacher, as well as the conditioning of the output on the specified factors. A first- order approximation of this desired goal can be expressed as \[\frac{\partial\hat{x}_{j}^{S}}{\partial y_{i}}\approx \frac{\partial\hat{x}_{j}^{T}}{\partial y_{i}}, \quad (1)\] where \(j\in \{1,\dots ,H\cdot W\cdot C\}\) indexes output pixels, with \(H\) , \(W\) and \(C\) the dimensions of the output image, and \(i\in \{1,\dots ,k\}\) indexes the specified factors of variation.

In this paper we propose a method to impose the first- order constraint in (1), which we term Jacobian supervision. We show two applications of this method. First, we propose an unsupervised algorithm that progressively disentangles the principal factors of variation in a dataset of images. Second, we use the Jacobian supervision to train an autoencoder model for images of faces, in which the factors of variation to be controlled are facial attributes. Our resulting model outperforms the state- of- the- art in terms of both reconstruction quality and facial attribute manipulation ability.
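As a concrete reading of the first-order constraint in (1), the sketch below penalizes the difference between teacher and student Jacobian-vector products along a direction \(v\) in \(y\)-space, using automatic differentiation. The paper's actual training objective (Section 3) approximates this constraint with latent-code swapping rather than explicit Jacobians, so this block is illustrative only; decoder interfaces are assumed.

```python
import torch

def jvp(output, inp, v):
    """Jacobian-vector product J(inp) @ v via the double-backward trick:
    g = J^T u is linear in the dummy u, so differentiating <g, v> with
    respect to u yields J v."""
    u = torch.zeros_like(output, requires_grad=True)
    (g,) = torch.autograd.grad(output, inp, grad_outputs=u,
                               create_graph=True)
    (j,) = torch.autograd.grad(g, u, grad_outputs=v, create_graph=True)
    return j

def first_order_penalty(student_dec, teacher_dec, y, z, v):
    """|| J_S(y, z) v - J_T(y) v ||^2, i.e. the constraint of Eqn. (1)
    probed along direction v. teacher_dec is assumed to take only y,
    as for the first teacher of Section 3 (d = 0)."""
    y_s = y.detach().clone().requires_grad_(True)
    y_t = y.detach().clone().requires_grad_(True)
    x_s = student_dec(y_s, z)        # student output
    x_t = teacher_dec(y_t)           # teacher output
    return (jvp(x_s, y_s, v) - jvp(x_t, y_t, v)).pow(2).sum()
```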
## 2 RELATED WORK

Autoencoders (Hinton & Salakhutdinov, 2006; Bengio et al., 2013; Kingma & Welling, 2014) are trained to reconstruct an input image while learning an internal low- dimensional representation of the input. Ideally, this representation should be disentangled, in the sense that each hidden unit in the latent code should encode one factor of variation in the formation of the input images, and should control this factor in the output images. There exists an extensive literature on learning disentangled representations (Rifai et al., 2012; Bengio, 2013; Cheung et al., 2014; Kingma et al., 2014; Cogswell et al., 2015; Chen et al., 2016; Mathieu et al., 2016; Szabó et al., 2017; Perarnau et al., 2016; Hu et al., 2017; Kim & Mnih, 2018; Burgess et al., 2018).

Disentangled representations have two important applications. One is their use as rich features for downstream tasks such as classification (Rifai et al., 2012; Tran et al., 2017) or semi- supervised learning (Kingma et al., 2014). In the face recognition community, for example, disentanglement is often used to learn viewpoint- or pose- invariant features (Yang et al., 2015; Peng et al., 2017; Tran et al., 2017). A second important application is in a generative setting, where a disentangled representation can be used to control the factors of variation in the generated image (Yan et al., 2016; Perarnau et al., 2016; Higgins et al., 2016; Mathieu et al., 2016; Szabó et al., 2017; Lample et al., 2017). In this work we concentrate on the second one.

In recent years, with the advent of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), a broad family of methods uses adversarial training to learn disentangled representations (Mathieu et al., 2016; Szabó et al., 2017; Lample et al., 2017; Perarnau et al., 2016; Chen et al., 2016). In a generative setting, the adversarial discriminator can be used to assess the quality of a reconstructed image for which the conditioning factors do not exist in the training set (Mathieu et al., 2016; Szabó et al., 2017; Chen et al., 2016). Another alternative, proposed in Fader Networks (Lample et al., 2017), is to apply the adversarial discriminator on the latent code itself, to prevent it from containing any information pertaining to the specified factors of variation. Then, the known factors of variation or attributes are appended to the latent code. This allows directly specifying the amount of variation for each factor, generating visually pleasing attribute manipulations. Despite being trained on binary attribute labels, Fader Networks generalize remarkably well to real- valued attribute conditioning. However, despite recent advances (Arjovsky et al., 2017; Gulrajani et al., 2017), adversarial training remains a non- trivial min- max optimization problem, which in this work we wish to avoid.

Other remarkable disentangling methods that require no adversarial training are: Cheung et al. (2014), where the cross- covariance between parts of the latent representation is minimized, so that the hidden factors of variation can be learned without supervision; and Higgins et al. (2016); Kim & Mnih (2018); Burgess et al. (2018), where a factorized latent representation is learned using the Variational Autoencoder (VAE) framework. In particular, the authors of Burgess et al. (2018) propose to overcome the disentangling versus reconstruction trade- off by progressively allowing a larger divergence between the factorized prior distribution and the latent posterior in a VAE.
Related to the task of varying the factors of image generation is that of domain transfer (Reed et al., 2015; Zhu et al., 2017; Isola et al., 2017; Choi et al., 2017; Donahue et al., 2016; Liu & Tuzel, 2016). Here the challenge is to "translate" an image into a domain for which examples of the original image are unknown and not available during training. For example, in the face generation task, the target domain can represent a change of facial attribute such as wearing eyeglasses or not, gender, age, etc. (Liu & Tuzel, 2016; Perarnau et al., 2016; Yan et al., 2016; Choi et al., 2017).

## 3 UNSUPERVISED PROGRESSIVE LEARNING OF DISENTANGLED REPRESENTATIONS

In this section we detail how the Jacobian supervision motivated in Section 1 can be applied, by way of a practical example. We will use the Jacobian supervision to learn a disentangled image representation, where the main factors of variation are progressively discovered and learned without supervision.

We start with a simple autoencoder model, the teacher \(T\) , identified by its encoder and decoder parts \((E^{T}, D^{T})\) . The output of the encoder (the latent code) is split into two parts. One part corresponds to the factors of variation \(\boldsymbol {y} \in \mathbb{R}^{k}\) and the other part corresponds to the nuisance variables, \(\boldsymbol {z} \in \mathbb{R}^{d}\) . We begin by using \(k = 2\) and \(d = 0\) , meaning that the latent code of the teacher is only 2- dimensional. We consider the information encoded in these two variables as the two principal factors of variation in the dataset. This choice was made merely for visualization purposes (Figure 1). For this example, we trained a 3- layer multi- layer perceptron (MLP) on MNIST digits, using only the L2 reconstruction loss. We used BatchNorm at the end of the encoder, so that the distribution of \(\boldsymbol{y}\) is normalized within a mini- batch. In Figure 1 (a) we show the result of sampling this two- dimensional variable and feeding the samples to the decoder \(D^{T}\) . The resulting digits are blurry, but the hidden variables have learned to encode the digit class.

Next, we create a student autoencoder model \((E^{S}, D^{S})\) , similar to the teacher, but with a larger latent code: \(k = 2\) and \(d = 1\) instead of \(d = 0\) , so that the latent code now has an extra dimension and the reconstruction can be improved. In order to maintain the control of the digit class by the 2D hidden variable \(\boldsymbol{y}\) , we impose that the Jacobian of the student with respect to \(\boldsymbol{y}\) be the same as that of the teacher, as in (1). How to achieve this is described next.

We take two random samples from the training set, \(x_{1}\) and \(x_{2}\) , and feed them to the student autoencoder, producing two sets of latent codes, \((\boldsymbol{y}_{1}^{S}, \boldsymbol{z}_{1}^{S})\) and \((\boldsymbol{y}_{2}^{S}, \boldsymbol{z}_{2}^{S})\) , and two reconstructions, \(\hat{\boldsymbol{x}}_{1}^{S}\) and \(\hat{\boldsymbol{x}}_{2}^{S}\) , respectively. We then swap the parts of the latent code to form \((\boldsymbol{y}_{2}^{S}, \boldsymbol{z}_{1}^{S})\) and \((\boldsymbol{y}_{1}^{S}, \boldsymbol{z}_{2}^{S})\) and feed them to the student decoder to obtain their respective reconstructions \(\hat{\boldsymbol{x}}_{21}^{S}\) and \(\hat{\boldsymbol{x}}_{12}^{S}\) . We also feed the same pair of images to the teacher autoencoder to obtain \(y_{1}^{T}, y_{2}^{T}, \hat{x}_{1}^{T}, \hat{x}_{2}^{T}\) . Note that the teacher encoder in this case does not produce a \(z\) .

![](images/3_0.jpg) <center>Figure 1: Unsupervised learning of disentangled representations on MNIST digits, using Jacobian supervision.
(a) Output of the teacher model \((k = 2,d = 0)\) when varying its two hidden units. (b) Output of the final student model \((k = 2,d = 14)\) while varying the same hidden units. The Jacobian supervision makes the model maintain control of the factors of variation of the teacher, while obtaining significantly better reconstruction. (c) A student model \((k = 2,d = 14)\) trained without Jacobian supervision loses control of the factor of variation discovered by the teacher. (d) Performance curves on the test set as training of the student progresses. The gray vertical bars indicate the moments where the latent code was progressively grown by one hidden unit. </center>

We observe, by a first- order Taylor expansion, that \[D^{T}(y_{2}^{T}) = D^{T}(y_{1}^{T}) + J_{T}(y_{1}^{T})(y_{2}^{T} - y_{1}^{T}) + o^{T}(||y_{2}^{T} - y_{1}^{T}||), \quad (2)\] and \[D^{S}(y_{2}^{S},z_{1}^{S}) = D^{S}(y_{1}^{S},z_{1}^{S}) + J_{S}(y_{1}^{S},z_{1}^{S})\left[(y_{2}^{S},z_{1}^{S}) - (y_{1}^{S},z_{1}^{S})\right] + o^{S}(||(y_{2}^{S},z_{1}^{S}) - (y_{1}^{S},z_{1}^{S})||) \quad (3)\] \[= D^{S}(y_{1}^{S},z_{1}^{S}) + J_{S}(y_{1}^{S},z_{1}^{S})(y_{2}^{S} - y_{1}^{S},\mathbf{0}) + o^{S}(||y_{2}^{S} - y_{1}^{S}||), \quad (4)\] where \(J_{T}\) and \(J_{S}\) are the Jacobians of the teacher and student decoders respectively. Suppose \[y_{1}^{S} = y_{1}^{T}\quad \text{and}\quad y_{2}^{S} = y_{2}^{T}, \quad (5)\] and \[D^{T}(y_{2}^{T}) - D^{T}(y_{1}^{T}) = D^{S}(y_{2}^{S},z_{1}^{S}) - D^{S}(y_{1}^{S},z_{1}^{S}); \quad (6)\] then, by simple arithmetic, \[J_{S}(y_{1},z_{1})\left[(y_{2} - y_{1},\mathbf{0})\right]\approx J_{T}(y_{1})(y_{2} - y_{1}), \quad (7)\] where, since we assume (5) holds, we dropped the superscripts for clarity. What (7) expresses is that the partial derivative of the output with respect to the latent variables \(y\) in the direction of \((y_{2} - y_{1})\) is approximately the same for the student model and the teacher model. To achieve this, the proposed method consists essentially in enforcing the assumptions in (5) and (6) by simple reconstruction losses used during training of the student. Note that one could exhaustively explore partial derivatives in all the canonical directions of the space. In our case, however, by visiting random pairs during training, we impose the constraint in (7) for random directions sampled from the data itself. This allows for more efficient training than exhaustive exploration.

Putting everything together, the loss function for training the student autoencoder with Jacobian supervision is composed of a reconstruction part \(\mathcal{L}_{rec}\) and a Jacobian part \(\mathcal{L}_{jac}\) : \[\mathcal{L}_{rec}(\pmb {x},E^{S},D^{S}):= ||\pmb {x} - D^{S}(E^{S}(\pmb {x}))||_{2}^{2} = ||\pmb {x} - D^{S}(y^{S},z^{S})||_{2}^{2} = ||\pmb {x} - \hat{\pmb{x}}^{S}||_{2}^{2}, \quad (8)\] \[\mathcal{L}_{jac}(\pmb {x},E^{S},D^{S}):= \lambda_{y}||\pmb{y}^{S} - \pmb{y}^{T}||_{2}^{2} + \lambda_{diff}||\left(D^{T}(\pmb{y}_{j}^{T}) - D^{T}(\pmb{y}^{T})\right) - \left(D^{S}(\pmb{y}_{j}^{S},\pmb{z}^{S}) - D^{S}(\pmb{y}^{S},\pmb{z}^{S})\right)||_{2}^{2}, \quad (9)\]

![](images/4_0.jpg) <center>Figure 2: \(3^{rd}\) to \(6^{th}\) principal factors of variation discovered by our unsupervised algorithm. The first two factors of variation are learned by the first teacher model (Figure 1 (a)).
Each time a hidden unit is added to the autoencoder, a new factor of variation is discovered and learned. Each row shows the variation of the newly discovered factor for three different validation samples, while fixing all the other variables. The discovered factors are related to stroke and handwriting style. </center>

where the subscript \(j\) indicates a paired random sample. For the experiments in Figure 1 we used \(\lambda_{y} = 0.25\) and \(\lambda_{diff} = 0.1\) . Table 4 in the appendix presents ablation studies on these hyperparameters. In practice, we found it also helps to add a term penalizing the cross- covariance between \(\mathbf{y}\) and \(\mathbf{z}\) , to further decorrelate the disentangled features (Cheung et al., 2014): \[\mathcal{L}_{xcov}(\mathbf{y},\mathbf{z}):= \sum_{ij}\left[\frac{1}{M}\sum_{m = 1}^{M}\left(z_{i}^{m} - \bar{z}_{i}\right)\left(y_{j}^{m} - \bar{y}_{j}\right)\right]^{2}, \quad (10)\] where \(M\) is the number of samples in the data batch, \(m\) indexes samples, \(i,j\) index feature dimensions, and \(\bar{z}_{i}\) and \(\bar{y}_{j}\) denote means over samples. In our experiments we weight this loss with \(\lambda_{xcov} = 10^{- 3}\) .

Once the student model is trained, it generates a better reconstructed image than the teacher model, thanks to the expanded latent code, while maintaining the conditioning of the output that the teacher had. The extra variable in the student latent code is exploited by the autoencoder to learn the next important factor of variation in the dataset. Examples of factors of variation progressively learned in this way are shown in Figure 2.

To progressively obtain an unsupervised disentangled representation, we proceed as follows. After training of the student with \(k = 2\) , \(d = 1\) is finished, we consider this model as a new teacher (equivalent to \(k = 3\) ) and create a new student model with one more hidden unit (equivalent to \(k = 3\) , \(d = 1\) ). We then repeat the same procedure. Results of repeating this procedure 14 times, using 100 epochs for each stage, are shown in Figure 1. In Figure 1(b), we show that the resulting final model maintains the conditioning of the digit class while obtaining a much better reconstruction. A model trained progressively until reaching the same latent code dimension but without Jacobian supervision, using only the cross- covariance loss for disentangling (Cheung et al., 2014), is shown in Figure 1(c). This model also obtains good reconstruction but loses the conditioning. For this model we also found \(\lambda_{xcov} = 10^{- 3}\) to give the best results.

To quantitatively evaluate the disentangling performance of each model, we look at how the first two latent units ( \(k = 2\) ) control the digit class in each model. We take two images of different digits from the test set, feed them to the encoder, swap their corresponding \(\mathbf{y}\) subvectors and feed the fabricated latent codes to the decoder. We then run a pre- trained MNIST classifier on the generated images to check whether the class was correctly swapped. The quantitative results are shown in Table 1. We observe that the reconstruction- disentanglement trade- off is indeed more advantageous for the student with Jacobian supervision. To complement this section, we present results of the unsupervised progressive learning of disentangled representations on the SVHN dataset (Netzer et al., 2011) in Section A.5 of the Appendix.
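A minimal sketch of the student objective used in this section (reconstruction, the Jacobian loss of Eqn. 9, and the cross-covariance term of Eqn. 10) follows. The encoder/decoder interfaces and the pairing of samples by rolling the batch are illustrative assumptions.

```python
import torch

def xcov_loss(y, z):
    """Cross-covariance penalty of Eqn. 10 between codes y (B, k)
    and z (B, d)."""
    yc = y - y.mean(dim=0, keepdim=True)
    zc = z - z.mean(dim=0, keepdim=True)
    c = zc.t() @ yc / y.shape[0]           # (d, k) cross-covariance
    return (c ** 2).sum()

def student_loss(x, E_s, D_s, E_t, D_t,
                 lam_y=0.25, lam_diff=0.1, lam_xcov=1e-3):
    """Loss for one batch, given a frozen teacher (E_t, D_t) whose
    encoder returns only y (d = 0 for the first teacher)."""
    y_s, z_s = E_s(x)
    y_t = E_t(x)
    rec = (x - D_s(y_s, z_s)).pow(2).mean()
    # Paired samples x_j: roll the batch by one position.
    y_sj, y_tj = y_s.roll(1, dims=0), y_t.roll(1, dims=0)
    diff_t = D_t(y_tj) - D_t(y_t)            # teacher output variation
    diff_s = D_s(y_sj, z_s) - D_s(y_s, z_s)  # student variation, z fixed
    jac = lam_y * (y_s - y_t).pow(2).mean() \
        + lam_diff * (diff_t - diff_s).pow(2).mean()
    return rec + jac + lam_xcov * xcov_loss(y_s, z_s)
```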
## 4 APPLICATION TO FACIAL ATTRIBUTE MODIFICATION

In photographs of human faces, many factors of variation affect the image formation, such as subject identity, pose, illumination and viewpoint, or even more subtle ones such as gender, age and expression. Modern facial manipulation algorithms allow the user to control these factors in the generative process. Our goal here is to obtain a model that has good control of these factors and produces faithful image reconstructions at the same time. We shall do so using the Jacobian supervision introduced in Section 3.

Table 1: Quantitative comparison of the disentanglement and reconstruction performance of the unsupervised method on MNIST digits. <table><tr><td>Model</td><td>d</td><td>successful class swaps</td><td>reconstruction MSE</td></tr><tr><td>Teacher</td><td>0</td><td>94.3%</td><td>0.036</td></tr><tr><td>Student with Jacobian supervision</td><td>14</td><td>61.7%</td><td>0.014</td></tr><tr><td>Student with Jacobian supervision</td><td>18</td><td>52.1%</td><td>0.012</td></tr><tr><td>Student without Jacobian supervision</td><td>14</td><td>32.6%</td><td>0.011</td></tr><tr><td>Random weights</td><td>14</td><td>9.8%</td><td></td></tr></table>

![](images/5_0.jpg) <center>Figure 3: Diagram of the proposed training procedure for facial attribute disentangling. \(E\) and \(D\) always denote the same encoder and decoder module, respectively. Images \(x_{1}\) and \(x_{2}\) are randomly sampled and do not need to share any attribute or class. Their ground-truth attribute labels are \(\bar{y}_{1}\) and \(\bar{y}_{2}\) respectively. The latent code is split into a vector predicting the attributes, \(y\) , and an unspecified part, \(z\) . A shaded \(E\) indicates its weights are frozen, i.e., any loss over the indicated output does not affect its weights. </center>

In this more challenging case, the disentangling is first learned by a teacher autoencoder using available annotations and an original training procedure. After the teacher is trained to correctly disentangle and control said attributes, a student model is trained to improve the visual quality of the reconstruction while maintaining the attribute manipulation ability.

### 4.1 MODEL ARCHITECTURE AND LOSS FUNCTION

We begin by training a teacher model for effective disentangling, at the cost of low- quality reconstruction. Figure 3 shows a diagram of the training architecture for the teacher model. Let \(x \in \mathbb{R}^{H \times W \times 3}\) be an image with annotated ground- truth binary attributes \(\bar{y} \in \{- 1, 1\}^{k}\) , where \(k\) is the number of attributes for which annotations are available. Our goal is to learn the parameters of the encoder \(E^{T} \colon \mathbb{R}^{H \times W \times 3} \to \mathbb{R}^{k + d}\) and the decoder \(D^{T} \colon \mathbb{R}^{k + d} \to \mathbb{R}^{H \times W \times 3}\) such that \(E^{T}(x) = (y, z)\) and \(D^{T}(y, z) = \hat{x} \approx x\) (Figure 3, top). Ideally, \(y \in \mathbb{R}^{k}\) should encode the specified attributes of \(x\) , while \(z \in \mathbb{R}^{d}\) should encode the remaining information necessary for reconstruction.

The training of the teacher is divided into two steps. First, the autoencoder reconstructs the input \(x\) , while at the same time predicting in \(y\) the ground- truth attribute labels \(\bar{y}\) . Second, the attribute part of the latent code, \(y\) , is swapped with that of another training sample (Figure 3, bottom).
The randomly fabricated latent code is fed into the decoder to produce a new image. Typically, this combination of factors and nuisance variables is not represented in the training set, so evaluating the reconstruction is not possible. Instead, we use the same encoder to assess the new image: if the disentangling is achieved, the part of the latent code that is not related to the attributes should be the same for the existing and fabricated images, and the predicted factors should match those of the sample from which they were copied. In what follows, we describe step by step the loss function used for training, which consists of the sum of multiple terms. Note that, contrary to relevant recent methods (Mathieu et al., 2016; Lample et al., 2017; Szabó et al., 2017), the proposed method does not require adversarial training.

Reconstruction loss. The first task of the autoencoder is to reconstruct the input image. The first term of the loss is the L2 reconstruction loss, as in (8).

Prediction loss. In order to encourage \(y\) to encode the original attributes of \(x\) indicated in the ground- truth label \(\bar{y}\) , we add the following penalty based on the hinge loss with margin 1: \[\mathcal{L}_{pred}(\pmb {y},\bar{\pmb{y}}) = \frac{1}{k}\sum_{i = 1}^{k}\max (1 - y_{i}\bar{y}_{i},0), \quad (11)\] where the subscript \(i\) indicates the \(i^{\mathrm{th}}\) attribute. Compared to recent related methods (Perarnau et al., 2016; Lample et al., 2017), the decoder sees the real- valued predicted attributes instead of an inserted vector of binary attribute labels. This allows the decoder to learn naturally from continuous attribute variables, leaving a degree of freedom to encode subtle variations of the attributes.

Cycle- consistency loss. Recall that our goal is to control variations of the attributes in the generated image, with the ability to generalize to combinations of content and attributes that are not present in the training set. Suppose we have two randomly sampled images, \(\boldsymbol{x}_{1}\) and \(\boldsymbol{x}_{2}\) , as in Figure 3. After obtaining \((y_{1},z_{1}) = E(x_{1})\) and \((y_{2},z_{2}) = E(x_{2})\) , we form the new artificial latent code \((y_{2},z_{1})\) . Ideally, using this code, the decoder should produce an image with the attributes of \(x_{2}\) and the content of \(x_{1}\) . Such an image typically does not exist in the training set, so using a reconstruction loss is not possible. Instead, we resort to a cycle- consistency loss (Zhu et al., 2017). We input this image to the same encoder, which produces a new code that we denote \((y_{2}^{\prime},z_{1}^{\prime}) = E^{T}(D^{T}(y_{2},z_{1}))\) . If the decoder correctly generates an image with attributes \(y_{2}\) , and the encoder is good at predicting the input image attributes, then \(y_{2}^{\prime}\) should predict \(y_{2}\) . We again use the hinge loss to enforce this: \[\mathcal{L}_{cyc_{1}} = \frac{1}{k}\sum_{i = 1}^{k}\max (1 - y_{2i}y_{2i}^{\prime},0). \quad (12)\] Here we could have used any random values instead of the sampled \(y_{2}\) . However, we found that sampling predictions from the data eases the task of the decoder, as it is given combinations of attributes that it has already seen. Despite this simplification, the decoder generalizes remarkably well to unseen values of the specified attributes \(y\) during evaluation.
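A minimal sketch of the prediction and attribute cycle-consistency terms (Eqns. 11 and 12) follows; module names and interfaces are illustrative assumptions.

```python
import torch

def hinge_pred_loss(y, y_bar):
    """Eqn. 11: margin-1 hinge between predicted attributes y (B, k)
    and ground-truth labels y_bar in {-1, +1}^k."""
    return torch.clamp(1.0 - y * y_bar, min=0.0).mean()

def cycle_attr_loss(E, D, x1, x2):
    """Eqn. 12: swap attribute codes, decode, re-encode, and check that
    the swapped attributes survive the round trip. E should be frozen
    when back-propagating this term; the analogous check on the
    nuisance code z (Eqn. 13, defined next in the text) is built the
    same way from z1 and its re-encoding."""
    y1, z1 = E(x1)
    y2, _ = E(x2)
    y2p, z1p = E(D(y2, z1))          # re-encode the fabricated image
    return torch.clamp(1.0 - y2 * y2p, min=0.0).mean()
```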
Finally, we add a cycle-consistency check on the unspecified part of the latent code, \(z_{1}\) and \(z_{1}^{\prime}\):

\[\mathcal{L}_{cyc_{2}} = ||z_{1} - z_{1}^{\prime}||_{2}^{2}. \quad (13)\]

Encoder freezing. The training approach we just described presents a major pitfall. The reversed autoencoder could learn to replicate the input code \((y_{2},z_{1})\) by hiding this information inside the generated image in whatever way it finds easiest, without inducing a natural attribute variation. To avoid this issue, a key ingredient of the procedure is to freeze the weights of the encoder when back-propagating \(\mathcal{L}_{cyc_{1}}\) and \(\mathcal{L}_{cyc_{2}}\). This forces the decoder to produce a natural-looking image so that the encoder correctly classifies its attributes.

Global teacher loss. Overall, the global loss used to train the teacher is the sum of the five terms:

\[\mathcal{L}(\theta_{E},\theta_{D}) = \lambda_{1}\mathcal{L}_{rec} + \lambda_{2}\mathcal{L}_{pred} + \lambda_{3}\mathcal{L}_{xcov} + \lambda_{4}\mathcal{L}_{cyc_{1}} + \lambda_{5}\mathcal{L}_{cyc_{2}}, \quad (14)\]

where \(\lambda_{i} \in \mathbb{R}\), \(i = 1,\dots,5\), represent the weights for each term in the sum. Details on how their values are found and how we optimize (14) in practice are described in the next section. Ablation studies showing the contribution of each loss are presented in Section A.3 in the appendix.

Student training. After the teacher is trained, we create a student autoencoder model with a larger dimension for the nuisance variables \(z\) and train it using only reconstruction and Jacobian supervision ((8) and (9)), as detailed in the next section.

### 4.2 IMPLEMENTATION

We implement both teacher and student autoencoders as Convolutional Neural Networks (CNN). Further architecture and implementation details are given in the Appendix. We train and evaluate our method on the standard CelebA dataset (Liu et al., 2015), which contains 200,000 aligned faces of celebrities with 40 annotated attributes.

The unspecified part of the latent code \((z)\) of the teacher autoencoder is implemented as a feature map of 512 channels of size \(2 \times 2\). To encode the attributes part \(y\), we concatenate an additional \(k = 40\)

<--- Page Split --->

channels. At the output of the encoder the values of these 40 channels are averaged, so the actual latent vector has dimensions \(k = 40\) and \(d = 2048\) for \(\mathbf{y}\) and \(\mathbf{z}\), respectively. The decoder uses a symmetrical architecture and, following Lample et al. (2017), the attribute prediction \(\mathbf{y}\) is concatenated as constant channels to every feature map of the decoder.

We perform a grid search to find the values of the weights in (14), by training for 10 epochs and evaluating on a hold-out validation set. The values we used in the experiments in this paper are \(\lambda_{1} = 10^{2}\), \(\lambda_{2} = 10^{-1}\), \(\lambda_{3} = 10^{-1}\), \(\lambda_{4} = 10^{-4}\), \(\lambda_{5} = 10^{-5}\).

#### 4.2.1 TEACHER TRAINING

At the beginning of the training of the teacher, the weights of the cycle-consistency losses \(\lambda_{4}\) and \(\lambda_{5}\) are set to 0, so the autoencoder is only trained for reconstruction (\(\mathcal{L}_{rec}\)), attribute prediction (\(\mathcal{L}_{pred}\)) and linear decorrelation (\(\mathcal{L}_{xcov}\)). After 100 training epochs, we resume training with \(\mathcal{L}_{cyc_{1}}\) and \(\mathcal{L}_{cyc_{2}}\) turned on and train for another 100 epochs.
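Continuing the toy sketch above, the following illustrates how one teacher iteration can combine the five loss terms while freezing the encoder for the cycle-consistency losses; the cross-covariance helper, the optimizer setup, and the `requires_grad_` toggling are assumptions made for illustration, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def xcov_loss(y, z):
    # Squared cross-covariance between y and z over the batch (the L_xcov term).
    yc, zc = y - y.mean(dim=0), z - z.mean(dim=0)
    return ((zc.t() @ yc) / y.shape[0]).pow(2).sum()

# E, D, hinge_attr_loss, x1, x2 and y_bar1 are taken from the previous sketch.
lams = {1: 1e2, 2: 1e-1, 3: 1e-1, 4: 1e-4, 5: 1e-5}     # the weights reported above
opt_all = torch.optim.Adam(list(E.parameters()) + list(D.parameters()), lr=2e-3)
opt_dec = torch.optim.Adam(D.parameters(), lr=2e-3)

# Step 1: reconstruction + prediction + decorrelation, updating both E and D.
y1, z1 = E(x1)
L1 = (lams[1] * F.mse_loss(D(y1, z1), x1)
      + lams[2] * hinge_attr_loss(y1, y_bar1)
      + lams[3] * xcov_loss(y1, z1))
opt_all.zero_grad(); L1.backward(); opt_all.step()

# Step 2: cycle-consistency losses, with the encoder frozen.
for p in E.parameters():
    p.requires_grad_(False)      # no gradients reach E's weights below
y1, z1 = E(x1)                   # recomputed codes are constants w.r.t. E's weights
y2, _ = E(x2)
x_21 = D(y2, z1)                 # the generated image stays differentiable w.r.t. D
y2p, z1p = E(x_21)               # gradients flow *through* the frozen E back into D
L2 = lams[4] * hinge_attr_loss(y2p, y2) + lams[5] * F.mse_loss(z1p, z1)
opt_dec.zero_grad(); L2.backward(); opt_dec.step()
for p in E.parameters():
    p.requires_grad_(True)
```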
At each iteration, we do the parameter updates in two separate steps, as illustrated in the sketch above. We first update for \(\mathcal{L}_{1} = \lambda_{1}\mathcal{L}_{rec} + \lambda_{2}\mathcal{L}_{pred} + \lambda_{3}\mathcal{L}_{xcov}\). Then, freezing the encoder, we do the update (only for the decoder) for \(\mathcal{L}_{2} = \lambda_{4}\mathcal{L}_{cyc_{1}} + \lambda_{5}\mathcal{L}_{cyc_{2}}\).

#### 4.2.2 STUDENT TRAINING

After the teacher autoencoder training is completed, we create the student model by appending new convolutional filters to the output of the encoder and the input of the decoder, so that the effective dimension of the latent code is increased. In this experiment, we first doubled the size of the latent code from \(d = 2048\) to \(d = 4096\) at the \(200^{th}\) epoch, and then from \(d = 4096\) to \(d = 8192\) at the \(400^{th}\) epoch. Note that this is different from the experiment of Section 3, where we grew \(d\) by one unit at a time. We initialize the weights of the student with the weights of the teacher wherever possible. Then, we train the student using the reconstruction loss (8) and the Jacobian loss (9) as defined in Section 3, using \(\lambda_{y} = 1\), \(\lambda_{diff} = 50\), and no prediction or cycle-consistency losses (\(\lambda_{2} = \lambda_{4} = \lambda_{5} = 0\)). The hyperparameters were found by quantitative and qualitative evaluation on a separate validation set.

### 4.3 EXPERIMENTAL RESULTS

From CelebA, we use 162,770 images of size \(256\times 256\) for training and the rest for validation. All the result figures in this paper show images from the validation set and were obtained using the same single model.

For each model, we evaluated quantitatively how well the generated image is conditioned on the specified factors. To do this, for each image in the CelebA test set, we tried to flip each of the disentangled attributes, one at a time (e.g. eyeglasses/no eyeglasses). The flipping is done by setting the latent variable \(y_{i}\) to \(-\alpha \cdot \mathrm{sign}(y_{i})\), with \(\alpha > 0\) a multiplier to exaggerate the attribute, chosen on a separate validation set for each model (\(\alpha = 40\) for all models); a code sketch of this protocol is shown below. To verify that the attribute was successfully flipped in the generated image, we used an external classifier trained to predict each of the attributes. We used the classifier provided by the authors of Fader Networks, which was trained directly on the same training split of the CelebA dataset.

Table 2 and Figure 4 show the quantitative results we obtained. Most notably, at approximately the same reconstruction performance, the student with Jacobian supervision is significantly better at flipping attributes than the student without it. With the Jacobian supervision, the student maintains almost the same disentangling and conditioning ability as the teacher. Note that these numbers could be higher if we carefully chose a different value of \(\alpha\) for each attribute.

To the best of our knowledge, Fader Networks (Lample et al., 2017) constitutes the state of the art in face image generation with continuous control of the facial attributes. For comparison, we trained Fader Networks models using the authors' implementation with \(d = 2048\) and \(d = 8192\) to disentangle the same number of attributes as our model (\(k = 40\)), but the training did not converge (using the same provided optimization hyperparameters). We conjecture that the adversarial discriminator acting on the latent code harms the reconstruction and makes the optimization unstable.
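A minimal sketch of the attribute-flipping protocol referenced above; the function signature and the hypothetical classifier call are illustrative assumptions.

```python
import torch

@torch.no_grad()
def flip_attribute(E, D, x, i, alpha=40.0):
    """Flip attribute i of image batch x by setting y_i to -alpha * sign(y_i)."""
    y, z = E(x)
    y = y.clone()
    y[:, i] = -alpha * torch.sign(y[:, i])
    return D(y, z)

# A pre-trained external attribute classifier (a hypothetical `clf` returning one
# score per attribute) is then run on the output to check whether the flip succeeded:
#   ok = torch.sign(clf(flip_attribute(E, D, x, i))[:, i]) != torch.sign(clf(x)[:, i])
```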
Comparisons with these Fader Networks models are shown in Table 2 and in Figures 7 and 8 in the appendix. We also show in Figure 6 that our multiple-attribute model achieves similar performance to the single-attribute Fader Networks models provided by the authors.

<--- Page Split --->

Table 2: Quantitative comparison of the disentanglement and reconstruction performance of the evaluated models in the facial attribute manipulation task. Disentanglement is measured as the ability to flip specified attributes by varying the corresponding latent unit.

<table><tr><td>Model</td><td>loss function</td><td>d</td><td>successful attribute flips</td><td>reconstruction MSE</td></tr><tr><td>Teacher</td><td>cycle-consistency</td><td>2048</td><td>73.1%</td><td>1.82e-3</td></tr><tr><td>Student</td><td>Jacobian</td><td>8192</td><td>72.2%</td><td>1.08e-3</td></tr><tr><td>Student</td><td>cycle-consistency</td><td>8192</td><td>42.7%</td><td>1.04e-3</td></tr><tr><td rowspan="2">Fader Networks</td><td rowspan="2">adversarial</td><td>2048</td><td>43.1%</td><td>3.08e-3</td></tr><tr><td>8192</td><td>44.2%</td><td>1.83e-3</td></tr><tr><td>Random weights</td><td></td><td>2048</td><td>20.2%</td><td>1.01e-1</td></tr></table>

![](images/8_0.jpg)

<center>Figure 4: Disentanglement versus reconstruction trade-off for the facial attribute manipulation example (top-left is better). The disentangling score measures the ability to flip facial attributes by manipulating the corresponding latent variables.</center>

Finally, Figure 5 shows the result of manipulating 32 attributes for eight different subjects, using the student model with Jacobian supervision. Note that our model is designed to learn all 40 attributes; in practice, however, there are 8 that the model does not learn to manipulate, possibly because they are poorly represented in the dataset (e.g. sideburns, wearing necktie) or too difficult to generate (e.g. wearing hat, wearing earrings).

## 5 CONCLUSION

A natural trade-off between disentanglement and reconstruction exists when learning image representations using autoencoder architectures. In this work, we showed that it is possible to overcome this trade-off by first learning a teacher model that is good at disentangling, and then imposing the Jacobian of this model with respect to the disentangled variables on a student model that is good at reconstruction. The student model then becomes good at both disentangling and reconstruction. We showed two example applications of this idea. The first one was to progressively learn the principal factors of variation in a dataset, in an unsupervised manner. The second was a generative model that is able to manipulate facial attributes in human faces. The resulting model is able to manipulate one order of magnitude more facial attributes than state-of-the-art methods, while obtaining similar or superior visual results, and requiring no adversarial training.

<--- Page Split --->

![](images/9_0.jpg)

<center>Figure 5: Results of attribute manipulation with the student model with Jacobian supervision. All the images were produced with the same model and belong to the test set.</center>

<center>Figure 6: Comparison with single-attribute Fader Networks models (Lample et al., 2017). (a) Original image. (b) Reconstruction of Fader Networks with the provided 'eyeglasses' model. (c) Our teacher model achieves a sharper reconstruction using the same latent code dimension, and is able to effectively manipulate up to 32 attributes, instead of only one.
(d) Result of amplifying age with Fader Networks with the provided aging model. (e) Our result for the same task.</center>

<--- Page Split --->

## ACKNOWLEDGMENTS

This work was supported by CAP-UDELAR Grant BPDN_2018_1. Experiments were partially run on ClusterUY, National Center for Supercomputing, Uruguay.

## REFERENCES

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.

Yoshua Bengio. Deep learning of representations: Looking forward. In International Conference on Statistical Language and Speech Processing, pp. 1-37. Springer, 2013.

Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.

Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in \(\beta\)-VAE. arXiv preprint arXiv:1804.03599, 2018.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2172-2180, 2016.

Brian Cheung, Jesse A Livezey, Arjun K Bansal, and Bruno A Olshausen. Discovering hidden factors of variation in deep networks. arXiv preprint arXiv:1412.6583, 2014.

Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. arXiv preprint arXiv:1711.09020, 2017.

Michael Cogswell, Faruk Ahmed, Ross Girshick, Larry Zitnick, and Dhruv Batra. Reducing overfitting in deep networks by decorrelating representations. arXiv preprint arXiv:1511.06068, 2015.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Ishan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5769-5779, 2017.

Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. Beta-VAE: Learning basic visual concepts with a constrained variational framework. 2016.

Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.

Qiyang Hu, Attila Szabó, Tiziano Portenier, Paolo Favaro, and Matthias Zwicker. Disentangling factors of variation by mixing them. arXiv preprint arXiv:1711.07410, 2017.

Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint, 2017.

Hyunjik Kim and Andriy Mnih. Disentangling by factorising. arXiv preprint arXiv:1802.05983, 2018.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

<--- Page Split --->

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.

Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models.
In Advances in Neural Information Processing Systems, pp. 3581-3589, 2014.

Guillaume Lample, Neil Zeghidour, Nicolas Usunier, Antoine Bordes, Ludovic Denoyer, et al. Fader networks: Manipulating images by sliding attributes. In Advances in Neural Information Processing Systems, pp. 5969-5978, 2017.

Ming-Yu Liu and Oncel Tuzel. Coupled generative adversarial networks. In Advances in Neural Information Processing Systems, pp. 469-477, 2016.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.

Michael F Mathieu, Junbo Jake Zhao, Junbo Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann LeCun. Disentangling factors of variation in deep representation using adversarial training. In Advances in Neural Information Processing Systems, pp. 5040-5048, 2016.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, pp. 5, 2011.

Xi Peng, Xiang Yu, Kihyuk Sohn, Dimitris N Metaxas, and Manmohan Chandraker. Reconstruction-based disentanglement for pose-invariant face recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1623-1632, 2017.

Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and Jose M. Álvarez. Invertible Conditional GANs for image editing. In NIPS Workshop on Adversarial Training, 2016.

Scott E Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep visual analogy-making. In Advances in Neural Information Processing Systems, pp. 1252-1260, 2015.

Salah Rifai, Yoshua Bengio, Aaron Courville, Pascal Vincent, and Mehdi Mirza. Disentangling factors of variation for facial expression recognition. In European Conference on Computer Vision, pp. 808-822. Springer, 2012.

Attila Szabó, Qiyang Hu, Tiziano Portenier, Matthias Zwicker, and Paolo Favaro. Challenges in disentangling independent factors of variation. arXiv preprint arXiv:1711.02245, 2017.

Attila Szabó, Qiyang Hu, Tiziano Portenier, Matthias Zwicker, and Paolo Favaro. Understanding degeneracies and ambiguities in attribute transfer. In The European Conference on Computer Vision (ECCV), September 2018.

Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning GAN for pose-invariant face recognition. In CVPR, volume 3, pp. 7, 2017.

Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. In European Conference on Computer Vision, pp. 776-791. Springer, 2016.

Jimei Yang, Scott E Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3D view synthesis. In Advances in Neural Information Processing Systems, pp. 1099-1107, 2015.

Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision, 2017.

<--- Page Split --->

Table 3: Summary comparison of the characteristics of recent related methods. Our method has advantages over each of them; ours and Fader Networks are the only ones that generate \(256\times 256\) images while continuously varying the generated facial attributes.
<table><tr><td>Method</td><td>end-to-end training</td><td>requires aligned pairs</td><td>requires adversarial training</td><td>face image resolution</td><td>number of attributes per model</td><td>generates continuous attributes</td></tr><tr><td>CoGAN</td><td>yes ✓</td><td>no ✓</td><td>yes ✗</td><td>128x128 ✗</td><td>1 ✗</td><td>no ✗</td></tr><tr><td>IcGAN</td><td>no ✗</td><td>no ✓</td><td>yes ✗</td><td>64x64 ✗</td><td>18 ✓</td><td>no ✗</td></tr><tr><td>Attribute2Image</td><td>no ✗</td><td>no ✓</td><td>no ✓</td><td>64x64 ✗</td><td>1 ✗</td><td>yes ✓</td></tr><tr><td>StarGAN</td><td>yes ✓</td><td>no ✓</td><td>yes ✗</td><td>128x128 ✗</td><td>7 ✗</td><td>no ✗</td></tr><tr><td>Fader Networks</td><td>yes ✓</td><td>no ✓</td><td>yes ✗</td><td>256x256 ✓</td><td>3 ✗</td><td>yes ✓</td></tr><tr><td>This work</td><td>yes ✓</td><td>no ✓</td><td>no ✓</td><td>256x256 ✓</td><td>32 ✓</td><td>yes ✓</td></tr></table>

## A APPENDIX

## A.1 IMPLEMENTATION DETAILS FOR SECTION 3

For the autoencoder utilized for the experiments in Section 3, we used the following architecture. For the encoder:

\[F(768,256)\to ReLU\to F(256,128)\to ReLU\to F(128,64)\to ReLU\to F(64,k + d),\]

where \(F(I,O)\) indicates a fully connected layer with \(I\) inputs and \(O\) outputs. For the first teacher model (\(k = 2\), \(d = 0\)), we also used BatchNorm after the encoder output. The decoder is the exact mirror of the encoder, with a Tanh layer appended at the end. We used Adam (Kingma & Ba, 2014) with a learning rate of \(3e^{-4}\), a batch size of 128 and a weight decay coefficient of \(1e^{-6}\).

## A.2 IMPLEMENTATION DETAILS FOR SECTION 4

Following Lample et al. (2017), we used convolutional blocks of Convolution-BatchNorm-ReLU layers and a geometric reduction in spatial resolution by using stride 2. The convolutional kernels are all of size \(4\times 4\) with padding of 1, and we use Leaky ReLU with slope 0.2. The input to the encoder is a \(256\times 256\) image. Denoting by \(k\) the number of attributes, the encoder architecture can be summarized as:

\[C(16)\to C(32)\to C(64)\to C(128)\to C(256)\to C(512)\to C(512 + k),\]

where \(C(f)\) indicates a convolutional block with \(f\) output channels. The decoder architecture can be summarized as:

\[D(512 + k)\to D(512 + k)\to D(256 + k)\to D(128 + k)\to D(64 + k)\to D(32 + k)\to D(16 + k),\]

where \(D(f)\) in this case indicates a deconvolutional block doing \(\times 2\) upsampling (using transposed convolutions, BatchNorm and ReLU) with \(f\) input channels. We trained all networks using Adam, with a learning rate of 0.002, \(\beta_{1} = 0.5\) and \(\beta_{2} = 0.999\). We use a batch size of 128. Table 3 shows a comparison chart between the proposed and related methods.

## A.2.1 STUDENT MODEL

For the student model, we only need to change the last layer in the encoder from \(C(512 + k)\) to \(C(1024 + k)\) in the first stage and \(C(2048 + k)\) in the second stage. Similarly, the first layer of the decoder was changed from \(D(512 + k)\) to \(D(1024 + k)\) in the first stage and \(D(2048 + k)\) in the second stage.

<--- Page Split --->

## A.3 ABLATION STUDIES

Table 4: Ablation study of the weight of each loss term for the unsupervised example of Section 3, using \(k = 2\) and \(d = 14\) for the student.
<table><tr><td>λy</td><td>λdiff</td><td>λxcov</td><td>successful class swaps</td><td>reconstruction MSE</td></tr><tr><td>0</td><td>0</td><td>1.0e-1</td><td>33.6%</td><td>1.15e-2</td></tr><tr><td>0</td><td>0</td><td>1.0e-2</td><td>32.6%</td><td>1.17e-2</td></tr><tr><td>0</td><td>0</td><td>1.0e-3</td><td>32.3%</td><td>1.12e-2</td></tr><tr><td>0</td><td>0</td><td>1.0e-4</td><td>31.1%</td><td>1.13e-2</td></tr><tr><td>2.5e-1</td><td>1.0e-1</td><td>1.0e-1</td><td>63.7%</td><td>1.55e-2</td></tr><tr><td>2.5e-1</td><td>1.0e-1</td><td>1.0e-2</td><td>65.2%</td><td>1.55e-2</td></tr><tr><td>2.5e-1</td><td>1.0e-1</td><td>1.0e-3</td><td>61.7%</td><td>1.42e-2</td></tr><tr><td>1.0</td><td>1.0</td><td>1.0e-3</td><td>70.5%</td><td>1.86e-2</td></tr><tr><td>1.0e-1</td><td>1.0</td><td>1.0e-3</td><td>59.4%</td><td>1.70e-2</td></tr><tr><td>1.0</td><td>1.0e-1</td><td>1.0e-3</td><td>65.0%</td><td>1.58e-2</td></tr><tr><td>1.0e-1</td><td>1.0e-1</td><td>1.0e-3</td><td>57.9%</td><td>1.38e-2</td></tr><tr><td>1.0e-2</td><td>1.0e-1</td><td>1.0e-3</td><td>52.3%</td><td>1.30e-2</td></tr><tr><td>1.0e-1</td><td>1.0e-2</td><td>1.0e-3</td><td>52.3%</td><td>1.28e-2</td></tr><tr><td>1.0e-2</td><td>1.0e-2</td><td>1.0e-3</td><td>42.5%</td><td>1.17e-2</td></tr></table>

Table 5: Ablation study of the impact of Jacobian and cycle-consistency losses in the training of the student model in the facial attribute manipulation task. The results correspond to training a student with \(d = 4096\) for 50 epochs, where the teacher was trained only with reconstruction and cross-covariance losses, with \(d = 2048\). All models used the same teacher.

<table><tr><td>λ4 (Lcyc1)</td><td>λ5 (Lcyc2)</td><td>λdiff</td><td>λy</td><td>reconstruction MSE</td><td>successful flips</td></tr><tr><td>0</td><td>0</td><td>50</td><td>1</td><td>1.76e-3</td><td>70.3%</td></tr><tr><td>0</td><td>0</td><td>10</td><td>1</td><td>1.67e-3</td><td>64.8%</td></tr><tr><td>0</td><td>0</td><td>1</td><td>10</td><td>1.85e-3</td><td>49.1%</td></tr><tr><td>0</td><td>0</td><td>1</td><td>1</td><td>1.69e-3</td><td>46.8%</td></tr><tr><td>1e-4</td><td>1e-5</td><td>0</td><td>0</td><td>1.78e-3</td><td>43.2%</td></tr><tr><td>1e-4</td><td>1e-5</td><td>50</td><td>1</td><td>1.84e-3</td><td>70.7%</td></tr></table>

Table 6: Ablation study of the weight of the cross-covariance loss in the facial attribute manipulation example. The results correspond to training a teacher model with \(d = 2048\), from scratch and for 50 epochs.

<table><tr><td>λxcov</td><td>reconstruction MSE</td><td>successful flips</td></tr><tr><td>1e-3</td><td>2.39e-3</td><td>51.4%</td></tr><tr><td>1e-2</td><td>2.50e-3</td><td>53.3%</td></tr><tr><td>1e-1</td><td>2.71e-3</td><td>49.9%</td></tr></table>

<--- Page Split --->

![](images/14_0.jpg)

<--- Page Split --->

![](images/15_0.jpg)

<center>Figure 8: Results of smoothly controlling attributes for our model and Fader Networks. In each row, our result is shown on the top and Fader Networks' on the bottom. From top to bottom, the attributes are: Arched eyebrows, Bags under eyes, Big nose, Pale skin, Age.</center>

<--- Page Split --->

## A.5 UNSUPERVISED PROGRESSIVE LEARNING OF DISENTANGLED REPRESENTATIONS ON SVHN DATASET

We applied the procedure described in Section 3 for progressive unsupervised learning of disentangled representations to the Street View House Numbers (SVHN) dataset (Netzer et al., 2011). The SVHN dataset contains 73,257 \(32\times 32\) RGB images for training.
For this experiment, the encoder architecture is:

\[C(64)\to C(128)\to C(256)\to C(k + d)\to BatchNorm.\]

Here, \(C(n)\) represents a convolutional block with \(n\) \(3\times 3\) filters and zero padding, a ReLU activation function and average pooling. The decoder architecture is:

\[D(256)\to D(128)\to D(64)\to D(3).\]

Here, \(D(n)\) represents a \(\times 2\) upconvolution block with \(n\) \(4\times 4\) filters and zero padding, a ReLU activation function and average pooling.

The latent code was started with \(k = 2\) and \(d = 0\) and progressively grown to \(k = 2\), \(d = 16\). Each stage was trained for 25 epochs. We used \(\lambda_{y} = 0.025\), \(\lambda_{diff} = 0.01\). We used Adam with a learning rate of \(3e^{-4}\), a batch size of 128 and a weight decay coefficient of \(1e^{-6}\).

The first teacher model (\(k = 2\), \(d = 0\)) achieves a reconstruction MSE of \(1.94e^{-2}\) and the final student model (\(k = 2\), \(d = 16\)) a reconstruction MSE of \(4.06e^{-3}\).

Figure 9 shows the two principal factors of variation learned by the first teacher model (corresponding to \(k = 2\), \(d = 0\)). Contrary to the MNIST example of Section 3, here the two main factors of variation are not related to the digit class, but to the shading of the digit.

The progressive growth of the latent code is carried out from \(d = 0\) to \(d = 16\). The subsequent factors of variation are related to lighting, contrast and color (see Figure 10). In this case, the unsupervised progressive method discovered factors that appear related to the digit class at the \(9^{th}\) and \(10^{th}\) steps of the progression. Figure 11 shows how the digit class can be controlled by the student with \(d = 16\) by varying these factors. Because of the Jacobian supervision, the student is able to control the digit class while maintaining the style of the digit.

Finally, in Figure 12 we show that the student also maintains control of the two main factors of variation discovered by the first teacher.

![](images/16_0.jpg)

<center>Figure 9: The two principal factors of variation learned on SVHN appear related to the shading of the digit. Left to right: darker to lighter. Top to bottom: light color on the left to light color on the right.</center>

<--- Page Split --->

![](images/17_0.jpg)

<center>Figure 10: Third, fourth and fifth factors of variation automatically discovered on SVHN. Each row corresponds to one factor and each column corresponds to one sample. Each factor is varied while maintaining the rest of the latent units fixed.</center>

![](images/17_1.jpg)

<center>Figure 11: Factors of variation related to the center digit class appear to emerge as the 9th and 10th discovered factors during the unsupervised progressive procedure described in Section 3. Here we show how the student model with Jacobian supervision and \(d = 16\) can be used to manipulate the digit class while approximately maintaining the style of the digit, by varying the latent units corresponding to those factors. The bottom row shows the original images (reconstructed by the autoencoder). All images are from the test set and were not seen during training.</center>

![](images/17_2.jpg)

<center>Figure 12: Result of the student with Jacobian supervision \((d = 16)\) when varying the two factors learned by the teacher (Fig. 9), for four different images (whose reconstruction is shown on the bottom row). The conditioning related to shading is maintained. (Left to right: darker to lighter. Top to bottom: light color on the left to light color on the right.)
All images are from the test set and were not seen during training. </center> <--- Page Split --->
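To summarize the progressive procedure applied in this appendix, below is a minimal PyTorch-style sketch of growing the latent code and of the Jacobian supervision loss of Eq. (9) used to train each student. The head-only growth, the module signatures, and the use of mean squared error in place of squared norms are simplifying assumptions; the sketch is written for a stage where the teacher already carries a nuisance code \(z\), and the loss weights shown are the SVHN values quoted above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grow_head(enc_head: nn.Linear, dec_head: nn.Linear, extra: int = 1):
    """Return new encoder/decoder heads with `extra` additional latent units,
    initialized from the teacher wherever possible (new rows/columns stay random)."""
    new_enc = nn.Linear(enc_head.in_features, enc_head.out_features + extra)
    new_dec = nn.Linear(dec_head.in_features + extra, dec_head.out_features)
    with torch.no_grad():
        new_enc.weight[:enc_head.out_features] = enc_head.weight
        new_enc.bias[:enc_head.out_features] = enc_head.bias
        new_dec.weight[:, :dec_head.in_features] = dec_head.weight
        new_dec.bias.copy_(dec_head.bias)
    return new_enc, new_dec

def jacobian_supervision_loss(E_s, D_s, E_t, D_t, x, x_j,
                              lam_y=0.025, lam_diff=0.01):
    """Student loss of Eq. (9): tie the specified code y to the teacher's, and match
    the teacher's output variation under a swap of y (a first-order Jacobian
    constraint). E(x) -> (y, z), D(y, z) -> x_hat; x_j is a paired random sample."""
    y_s, z_s = E_s(x)
    y_sj, _ = E_s(x_j)
    with torch.no_grad():            # the teacher is fixed during student training
        y_t, z_t = E_t(x)
        y_tj, _ = E_t(x_j)
        teacher_diff = D_t(y_tj, z_t) - D_t(y_t, z_t)
    student_diff = D_s(y_sj, z_s) - D_s(y_s, z_s)
    return lam_y * F.mse_loss(y_s, y_t) + lam_diff * F.mse_loss(student_diff, teacher_diff)
```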
## 4 APPLICATION TO FACIAL ATTRIBUTE MODIFICATION In photographs of human faces, many factors of variation affect the image formation, such as subject identity, pose, illumination, viewpoint, etc., or even more subtle ones such as gender, age, expression. Modern facial manipulation algorithms allow the user to control these factors in the generative process. Our goal here is to obtain a model that has good control of these factors and produces faithful image reconstruction at the same time. We shall do so using the Jacobian supervision introduced <--- Page Split ---> Table 1: Quantitative comparison of the disentanglement and reconstruction performance of the unsupervised method on MNIST digits. <table><tr><td>Model</td><td>d</td><td>successful class swaps</td><td>reconstruction MSE</td></tr><tr><td>Teacher</td><td>0</td><td>94.3%</td><td>0.036</td></tr><tr><td>Student with Jacobian supervision</td><td>14</td><td>61.7%</td><td>0.014</td></tr><tr><td>Student with Jacobian supervision</td><td>18</td><td>52.1%</td><td>0.012</td></tr><tr><td>Student without Jacobian supervision</td><td>14</td><td>32.6%</td><td>0.011</td></tr><tr><td>Random weights</td><td>14</td><td>9.8%</td><td></td></tr></table> ![](images/5_0.jpg) <center>Figure 3: Diagram of the proposed training procedure for facial attributes disentangling. \(E\) and \(D\) always denote the same encoder and decoder module, respectively. Images \(x_{1}\) and \(x_{2}\) are randomly sampled and do not need to share any attribute or class. Their ground truth attribute labels are \(\bar{y}_{1}\) and \(\bar{y}_{2}\) respectively. The latent code is split into a vector predicting the attributes \(y\) and an unspecified part \(z\) . Shaded \(E\) indicates its weights are frozen, i.e., any loss over the indicated output does not affect its weights. </center> in Section 3. In this more challenging case, the disentangling will be first learned by a teacher autoencoder using available annotations and an original training procedure. After a teacher is trained to correctly disentangle and control said attributes, a student model will be trained to improve the visual quality of the reconstruction, while maintaining the attribute manipulation ability. ### 4.1 MODEL ARCHITECTURE AND LOSS FUNCTION We begin by training a teacher model for effective disentangling at the cost of low quality reconstruction. Figure 3 shows a diagram of the training architecture for the teacher model. Let \(x \in \mathbb{R}^{H \times W \times 3}\) be an image with annotated ground truth binary attributes \(\bar{y} \in \{- 1, 1\}^{k}\) , where \(k\) is the number of attributes for which annotations are available. Our goal is to learn the parameters of the encoder \(E^{T} \colon \mathbb{R}^{H \times W \times 3} \to \mathbb{R}^{k + d}\) and the decoder \(D^{T} \colon \mathbb{R}^{k + d} \to \mathbb{R}^{H \times W \times 3}\) such that \(E^{T}(x) = (y, z)\) and \(D^{T}(y, z) = \hat{x} \approx x\) (Figure 3, top). Ideally, \(y \in \mathbb{R}^{k}\) should encode the specified attributes of \(x\) , while \(z \in \mathbb{R}^{d}\) should encode the remaining information necessary for reconstruction. The training of the teacher is divided into two steps. First, the autoencoder reconstructs the input \(x\) , while at the same time predicting in \(y\) the ground truth labels for the attributes \(\bar{y}\) . Second, the attributes part of the latent code \(y\) is swapped with that of another training sample (Figure 3, bottom). 
The randomly fabricated latent code is fed into the decoder to produce a new image. Typically, this combination of factors and nuisance variables is not represented in the training set, so evaluating the reconstruction is not possible. Instead, we use the same encoder to assess the new image: If the disentangling is achieved, the part of the latent code that is not related to the attributes should be the same for the existing and fabricated images, and the predicted factors should match those of the sample from which they were copied. In what follows, we describe step by step the loss function used for training, which consists of the sum of multiple loss terms. Note that, contrary to relevant recent methods (Mathieu et al., 2016; Lample et al., 2017; Szabó et al., 2017), the proposed method does not require adversarial training. Reconstruction loss. The first task of the autoencoder is to reconstruct the input image. The first term of the loss is given by the L2 reconstruction loss, as in (8). <--- Page Split ---> Prediction loss. In order to encourage \(y\) to encode the original attributes of \(x\) indicated in the ground truth label \(\bar{y}\) , we add the following penalty based on the hinge loss with margin 1: \[\mathcal{L}_{pred}(\pmb {y},\bar{\pmb{y}}) = \frac{1}{k}\sum_{i = 1}^{k}\max (1 - y_{i}\bar{y}_{i},0), \quad (11)\] where the subscript \([i]\) indicates the \(i^{\mathrm{th}}\) attribute. Compared to recent related methods (Perarnau et al., 2016; Lample et al., 2017), the decoder sees the real- valued predicted attributes instead of an inserted vector of binary attribute labels. This allows the decoder to naturally learn from continuous attribute variables, leaving a degree of freedom to encode subtle variations of the attributes. Cycle- consistency loss. Recall our goal is to control variations of the attributes in the generated image, with the ability to generalize to combinations of content and attributes that are not present in the training set. Suppose we have two randomly sampled images \(\boldsymbol{x}_{1}\) and \(\boldsymbol{x}_{2}\) as in Figure 3. After obtaining \((y_{1},z_{1}) = E(x_{1})\) and \((y_{2},z_{2}) = E(x_{2})\) , we form the new artificial latent code \((y_{2},z_{1})\) Ideally, using this code, the decoder should produce an image with the attributes of \(x_{2}\) and the content of \(x_{1}\) . Such an image typically does not exist in the training set, so using a reconstruction loss is not possible. Instead, we resort to a cycle- consistency loss (Zhu et al., 2017). We input this image to the same encoder, which will produce a new code that we denote as \((y_{2}^{\prime},z_{1}^{\prime}) = E^{T}(D^{T}(y_{2},z_{1}))\) . If the decoder correctly generates an image with attributes \(y_{2}\) , and the encoder is good at predicting the input image attributes, then \(y_{2}^{\prime}\) should predict \(y_{2}\) . We use again the hinge loss to enforce this: \[\mathcal{L}_{cyc_{1}} = \frac{1}{k}\sum_{i = 1}^{k}\max (1 - y_{2i}y_{2i}^{\prime},0). \quad (12)\] Here we could have used any random values instead of the sampled \(y_{2}\) . However, we found that sampling predictions from the data eases the task of the decoder, as it is given combinations of attributes that it has already seen. Despite this simplification, the decoder shows remarkable generalization to unseen values of the specified attributes \(y\) during evaluation. 
Finally, we add a cycle- consistency check on the unspecified part of the latent code, \(z_{1}\) and \(z_{1}^{\prime}\) : \[\mathcal{L}_{cyc_{2}} = ||z_{1} - z_{1}^{\prime}||_{2}^{2} \quad (13)\] Encoder freezing. The training approach we just described presents a major pitfall. The reversed autoencoder could learn to replicate the input code \((y_{2},z_{1})\) by encoding this information inside a latent image in whatever way it finds easier, that does not induce a natural attribute variation. To avoid this issue, a key ingredient of the procedure is to freeze the weights of the encoder when back- propagating \(\mathcal{L}_{cyc_{1}}\) and \(\mathcal{L}_{cyc_{2}}\) . This forces the decoder to produce a naturally looking image so that the encoder correctly classifies its attributes. Global teacher loss. Overall, the global loss used to train the teacher is the sum of the five terms: \[\mathcal{L}(\theta_{E},\theta_{D}) = \lambda_{1}\mathcal{L}_{rec} + \lambda_{2}\mathcal{L}_{pred} + \lambda_{3}\mathcal{L}_{xcor} + \lambda_{4}\mathcal{L}_{cyc_{1}} + \lambda_{5}\mathcal{L}_{cyc_{2}}, \quad (14)\] where \(\lambda_{i} \in \mathbb{R}, i = 1:5\) represent weights for each term in the sum. Details on how their values are found and how we optimize (14) in practice are described in the next section. Ablation studies showing the contribution of each loss are shown in Section A.3 in the appendix. Student training. After the teacher is trained, we create a student autoencoder model with a larger dimension for the nuisance variables \(z\) and train it using only reconstruction and Jacobian supervision ((8) and (9)), as detailed in the next section. ### 4.2 IMPLEMENTATION We implement both teacher and student autoencoders as Convolutional Neural Networks (CNN). Further architecture and implementation details are detailed in the Appendix. We train and evaluate our method on the standard CelebA dataset (Liu et al., 2015), which contains 200,000 aligned faces of celebrities with 40 annotated attributes. The unspecified part of latent code \((z)\) of the teacher autoencoder is implemented as a feature map of 512 channels of size \(2 \times 2\) . To encode the attributes part \(y\) , we concatenate an additional \(k = 40\) <--- Page Split ---> channels. At the output of the encoder the values of these 40 channels are averaged, so the actual latent vector has \(k = 40\) and \(d = 2048\) , dimensions for \(\mathbf{y}\) and \(\mathbf{z}\) respectively. The decoder uses a symmetrical architecture and, following Lample et al. (2017), the attribute prediction \(\mathbf{y}\) is concatenated as constant channels to every feature map of the decoder. We perform grid search to find the values of the weights in (14) by training for 10 epochs and evaluating on a hold- out validation set. The values we used in the experiments in this paper are \(\lambda_{1} = 10^{2}\) , \(\lambda_{2} = 10^{- 1}\) , \(\lambda_{3} = 10^{- 1}\) , \(\lambda_{4} = 10^{- 4}\) , \(\lambda_{5} = 10^{- 5}\) . #### 4.2.1 TEACHER TRAINING At the beginning of the training of the teacher, the weights of the cycle- consistency losses \(\lambda_{4}\) and \(\lambda_{5}\) are set to 0, so the autoencoder is only trained for reconstruction ( \(\mathcal{L}_{rec}\) ), attribute prediction ( \(\mathcal{L}_{pred}\) ) and linear decorrelation ( \(\mathcal{L}_{con}\) ). After 100 training epochs, we resume the training turning on \(\mathcal{L}_{cyc_{1}}\) and \(\mathcal{L}_{cyc_{2}}\) and training for another 100 epochs. 
At each iteration, we do the parameter updates in two separate steps. We first update for \(\mathcal{L}_{1} = \lambda_{1}\mathcal{L}_{rec} + \lambda_{2}\mathcal{L}_{pred} + \lambda_{3}\mathcal{L}_{xcov}\) . Then, freezing the encoder, we do the update (only for the decoder) for \(\mathcal{L}_{2} = \lambda_{4}\mathcal{L}_{cyc_{1}} + \lambda_{5}\mathcal{L}_{cyc_{2}}\) . #### 4.2.2 STUDENT TRAINING After the teacher autoencoder training is completed, we create the student model by appending new convolutional filters to the output of the encoder and the input of the decoder, so that the effective dimension of the latent code is increased. In this experiment, we first doubled the size of the latent code from \(d = 2048\) to \(d = 4096\) at the \(200^{th}\) epoch and then from \(d = 4096\) to \(d = 8192\) at the \(400^{th}\) epoch. Note that this is different from the experiment of Section 3, where we grew \(d\) by one unit at a time. We initialize the weights of the student with the weights of the teacher wherever possible. Then, we train the student using the reconstruction loss (8) and the Jacobian loss (9) as defined in Section 3, using \(\lambda_{y} = 1\) , \(\lambda_{diff} = 50\) , and no prediction or cycle-consistency losses ( \(\lambda_{2} = \lambda_{4} = \lambda_{5} = 0\) ). The hyperparameters were found by quantitative and qualitative evaluation on a separate validation set. ### 4.3 EXPERIMENTAL RESULTS From CelebA, we use 162,770 images of size \(256\times 256\) for training and the rest for validation. All the result figures in this paper show images from the validation set and were obtained using the same single model. For each model, we evaluated quantitatively how well the generated image is conditioned on the specified factors. To do this, for each image in the CelebA test set, we tried to flip each of the disentangled attributes, one at a time (e.g. eyeglasses/no eyeglasses). The flipping is done by setting the latent variable \(y_{i}\) to \(- \alpha \cdot \mathrm{sign}(y_{i})\) , with \(\alpha > 0\) a multiplier to exaggerate the attribute, found on a separate validation set for each model ( \(\alpha = 40\) for all models). To verify that the attribute was successfully flipped in the generated image, we used an external classifier trained to predict each of the attributes. We used the classifier provided by the authors of Fader Networks, which was trained directly on the same training split of the CelebA dataset. Table 2 and Figure 4 show the quantitative results we obtained. Most notably, at approximately the same reconstruction performance, the student with Jacobian supervision is significantly better at flipping attributes than the student without it. With the Jacobian supervision, the student maintains almost the same disentangling and conditioning ability as the teacher. Note that these numbers could be higher if we carefully chose a different value of \(\alpha\) for each attribute. To the best of our knowledge, Fader Networks (Lample et al., 2017) constitutes the state-of-the-art in face image generation with continuous control of the facial attributes. For comparison, we trained Fader Networks models using the authors' implementation with \(d = 2048\) and \(d = 8192\) to disentangle the same number of attributes as our model ( \(k = 40\) ), but the training did not converge (using the same provided optimization hyperparameters). We conjecture that the adversarial discriminator acting on the latent code harms the reconstruction and makes the optimization unstable.
Comparisons with these models are shown in Table 2 and in Figures 7 and 8 in the appendix. <--- Page Split ---> Table 2: Quantitative comparison of the disentanglement and reconstruction performance of the evaluated models in the facial attribute manipulation task. Disentanglement is measured as the ability to flip specified attributes by varying the corresponding latent unit. <table><tr><td>Model</td><td>loss function</td><td>d</td><td>successful attribute flips</td><td>reconstruction MSE</td></tr><tr><td>Teacher</td><td>cycle-consistency</td><td>2048</td><td>73.1%</td><td>1.82e-3</td></tr><tr><td>Student</td><td>Jacobian</td><td>8192</td><td>72.2%</td><td>1.08e-3</td></tr><tr><td>Student</td><td>cycle-consistency</td><td>8192</td><td>42.7%</td><td>1.04e-3</td></tr><tr><td rowspan="2">Fader Networks</td><td rowspan="2">adversarial</td><td>2048</td><td>43.1%</td><td>3.08e-3</td></tr><tr><td>8192</td><td>44.2%</td><td>1.83e-3</td></tr><tr><td>Random weights</td><td></td><td>2048</td><td>20.2%</td><td>1.01e-1</td></tr></table> ![](images/8_0.jpg) <center>Figure 4: Disentanglement versus reconstruction trade-off for the facial attribute manipulation example (top-left is better). The disentangling score measures the ability to flip facial attributes by manipulating the corresponding latent variables. </center> We also show in Figure 6 that our multiple-attribute model achieves similar performance to the single-attribute Fader Networks models provided by the authors. Finally, Figure 5 shows the result of manipulating 32 attributes for eight different subjects, using the student model with Jacobian supervision. Note that our model is designed to learn the 40 attributes; however, in practice there are 8 attributes that the model does not learn to manipulate, possibly because they are poorly represented in the dataset (e.g. sideburns, wearing necktie) or too difficult to generate (e.g. wearing hat, wearing earrings). ## 5 CONCLUSION A natural trade-off between disentanglement and reconstruction exists when learning image representations using autoencoder architectures. In this work, we showed that it is possible to overcome this trade-off by first learning a teacher model that is good at disentangling and then imposing the Jacobian of this model with respect to the disentangled variables on a student model that is good at reconstruction. The student model then becomes good at both disentangling and reconstruction. We showed two example applications of this idea. The first was to progressively learn the principal factors of variation in a dataset, in an unsupervised manner. The second was a generative model that is able to manipulate facial attributes in human faces. The resulting model is able to manipulate one order of magnitude more facial attributes than state-of-the-art methods, while obtaining similar or superior visual results, and requiring no adversarial training. <--- Page Split ---> ![](images/9_0.jpg) <center>Figure 5: Results of attribute manipulation with the student model with Jacobian supervision. All the images were produced with the same model and belong to the test set. </center> <center>Figure 6: Comparison with single-attribute Fader Networks models (Lample et al., 2017). (a) Original image. (b) Reconstruction of Fader Networks with the provided 'eyeglasses' model. (c) Our teacher model achieves a sharper reconstruction using the same latent code dimension, and is able to effectively manipulate up to 32 attributes, instead of only one.
(d) Result of amplifying age with Fader Networks using the provided aging model. (e) Our result for the same task. </center> <--- Page Split ---> ## ACKNOWLEDGMENTS This work was supported by CAP-UDELAR Grant BPDN_2018_1. Experiments were partially run on ClusterUY, National Center for Supercomputing, Uruguay. ## A APPENDIX ## A.1 IMPLEMENTATION DETAILS FOR SECTION 3 For the autoencoder utilized for experiments in Section 3, we used the following architecture. For the encoder: \[F(768,256)\to ReLU\to F(256,128)\to ReLU\to F(128,64)\to ReLU\to F(64,k + d)\] where \(F(I,O)\) indicates a fully connected layer with \(I\) inputs and \(O\) outputs. For the first teacher model ( \(k = 2\) , \(d = 0\) ), we also used BatchNorm after the encoder output. The decoder is the exact mirror of the encoder, with a Tanh layer appended at the end. We used Adam (Kingma & Ba, 2014) with a learning rate of \(3 \times 10^{-4}\) , a batch size of 128 and a weight decay coefficient of \(10^{-6}\) . ## A.2 IMPLEMENTATION DETAILS FOR SECTION 4 Following Lample et al. (2017), we used convolutional blocks of Convolution-BatchNorm-ReLU layers and a geometric reduction in spatial resolution by using stride 2. The convolutional kernels are all of size \(4\times 4\) with padding of 1, and we use Leaky ReLU with slope 0.2. The input to the encoder is a \(256\times 256\) image. Denoting by \(k\) the number of attributes, the encoder architecture can be summarized as: \[C(16)\to C(32)\to C(64)\to C(128)\to C(256)\to C(512)\to C(512 + k),\] where \(C(f)\) indicates a convolutional block with \(f\) output channels. The decoder architecture can be summarized as: \[D(512 + k)\to D(512 + k)\to D(256 + k)\to D(128 + k)\to D(64 + k)\to D(32 + k)\to D(16 + k),\] where \(D(f)\) in this case indicates a deconvolutional block doing \(\times 2\) upsampling (using transposed convolutions, BatchNorm and ReLU) with \(f\) input channels. We trained all networks using Adam, with a learning rate of 0.002, \(\beta_{1} = 0.5\) and \(\beta_{2} = 0.999\) . We use a batch size of 128. Table 3 shows a comparison chart between the proposed and related methods. ## A.2.1 STUDENT MODEL For the student model, we only need to change the last layer in the encoder from \(C(512 + k)\) to \(C(1024 + k)\) in the first stage and \(C(2048 + k)\) in the second stage. Similarly, the first layer of the decoder was changed from \(D(512 + k)\) to \(D(1024 + k)\) in the first stage and \(D(2048 + k)\) in the second stage. <--- Page Split ---> ## A.3 ABLATION STUDIES Table 4: Ablation study of the weight of each loss term for the unsupervised example of Section 3, using \(k = 2\) and \(d = 14\) for the student.
<table><tr><td>λy</td><td>λdiff</td><td>λxcov</td><td>successful class swaps</td><td>reconstruction MSE</td></tr><tr><td>0</td><td>0</td><td>1.0e-1</td><td>33.6%</td><td>1.15e-2</td></tr><tr><td>0</td><td>0</td><td>1.0e-2</td><td>32.6%</td><td>1.17e-2</td></tr><tr><td>0</td><td>0</td><td>1.0e-3</td><td>32.3%</td><td>1.12e-2</td></tr><tr><td>0</td><td>0</td><td>1.0e-4</td><td>31.1%</td><td>1.13e-2</td></tr><tr><td>2.5e-1</td><td>1.0e-1</td><td>1.0e-1</td><td>63.7%</td><td>1.55e-2</td></tr><tr><td>2.5e-1</td><td>1.0e-1</td><td>1.0e-2</td><td>65.2%</td><td>1.55e-2</td></tr><tr><td>2.5e-1</td><td>1.0e-1</td><td>1.0e-3</td><td>61.7%</td><td>1.42e-2</td></tr><tr><td>1.0</td><td>1.0</td><td>1.0e-3</td><td>70.5%</td><td>1.86e-2</td></tr><tr><td>1.0e-1</td><td>1.0</td><td>1.0e-3</td><td>59.4%</td><td>1.70e-2</td></tr><tr><td>1.0</td><td>1.0e-1</td><td>1.0e-3</td><td>65.0%</td><td>1.58e-2</td></tr><tr><td>1.0e-1</td><td>1.0e-1</td><td>1.0e-3</td><td>57.9%</td><td>1.38e-2</td></tr><tr><td>1.0e-2</td><td>1.0e-1</td><td>1.0e-3</td><td>52.3%</td><td>1.30e-2</td></tr><tr><td>1.0e-1</td><td>1.0e-2</td><td>1.0e-3</td><td>52.3%</td><td>1.28e-2</td></tr><tr><td>1.0e-2</td><td>1.0e-2</td><td>1.0e-3</td><td>42.5%</td><td>1.17e-2</td></tr></table> Table 5: Ablation study of the impact of the Jacobian and cycle-consistency losses in the training of the student model in the facial attribute manipulation task. The results correspond to training a student with \(d = 4096\) for 50 epochs, where the teacher was trained only with reconstruction and cross-covariance losses, with \(d = 2048\). All models used the same teacher. <table><tr><td>λ4(LCyc1)</td><td>λ5(LCyc2)</td><td>λdiff</td><td>λy</td><td>reconstruction MSE</td><td>successful flips</td></tr><tr><td>0</td><td>0</td><td>50</td><td>1</td><td>1.76e-3</td><td>70.3%</td></tr><tr><td>0</td><td>0</td><td>10</td><td>1</td><td>1.67e-3</td><td>64.8%</td></tr><tr><td>0</td><td>0</td><td>1</td><td>10</td><td>1.85e-3</td><td>49.1%</td></tr><tr><td>0</td><td>0</td><td>1</td><td>1</td><td>1.69e-3</td><td>46.8%</td></tr><tr><td>1e-4</td><td>1e-5</td><td>0</td><td>0</td><td>1.78e-3</td><td>43.2%</td></tr><tr><td>1e-4</td><td>1e-5</td><td>50</td><td>1</td><td>1.84e-3</td><td>70.7%</td></tr></table> Table 6: Ablation study of the weight of the cross-covariance loss in the facial attribute manipulation example. The results correspond to training a teacher model with \(d = 2048\) , from scratch and for 50 epochs. <table><tr><td>λxcov</td><td>reconstruction MSE</td><td>successful flips</td></tr><tr><td>1e-3</td><td>2.39e-3</td><td>51.4%</td></tr><tr><td>1e-2</td><td>2.50e-3</td><td>53.3%</td></tr><tr><td>1e-1</td><td>2.71e-3</td><td>49.9%</td></tr></table> <--- Page Split ---> ![](images/14_0.jpg) <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 8: Results of smoothly controlling attributes for our model and Fader Networks. In each row, our result is shown on the top and Fader Networks' on the bottom. From top to bottom, the attributes are: Arched eyebrows, Bags under eyes, Big nose, Pale skin, Age. </center> <--- Page Split ---> ## A.5 UNSUPERVISED PROGRESSIVE LEARNING OF DISENTANGLED REPRESENTATIONS ON SVHN DATASET We applied the procedure described in Section 3 for progressive unsupervised learning of disentangled representations to the Street View House Numbers (SVHN) dataset (Netzer et al., 2011). The SVHN dataset contains 73,257 \(32\times 32\) RGB images for training.
For this experiment, the encoder architecture is: \[C(64)\to C(128)\to C(256)\to C(k + d)\to BatchNorm.\] Here, \(C(n)\) represents a convolutional block with \(n\) \(3\times 3\) filters and zero padding, a ReLU activation function and average pooling. The decoder architecture is: \[D(256)\to D(128)\to D(64)\to D(3).\] Here, \(D(n)\) represents a \(\times 2\) upconvolution block with \(n\) \(4\times 4\) filters and zero padding, a ReLU activation function and average pooling. The latent code was started with \(k = 2\) and \(d = 0\) and progressively grown to \(k = 2\) , \(d = 16\) . Each stage was trained for 25 epochs. We used \(\lambda_{y} = 0.025\) , \(\lambda_{diff} = 0.01\) . We used Adam with a learning rate of \(3 \times 10^{-4}\) , a batch size of 128 and a weight decay coefficient of \(10^{-6}\) . The first teacher model ( \(k = 2\) , \(d = 0\) ) achieves a reconstruction MSE of \(1.94 \times 10^{-2}\) and the final student model ( \(k = 2\) , \(d = 16\) ) a reconstruction MSE of \(4.06 \times 10^{-3}\) . Figure 9 shows the two principal factors of variation learned by the first teacher model (corresponding to \(k = 2\) , \(d = 0\) ). Contrary to the MNIST example of Section 3, here the two main factors of variation are not related to the digit class, but to the shading of the digit. The progressive growth of the latent code is carried out from \(d = 0\) to \(d = 16\) . The following factors of variation are related to lighting, contrast and color (see Figure 10). In this case, the unsupervised progressive method discovered factors that appear related to the digit class at the \(9^{th}\) and \(10^{th}\) steps of the progression. Figure 11 shows how the digit class can be controlled by the student with \(d = 16\) by varying these factors. Because of the Jacobian supervision, the student is able to control the digit class while maintaining the style of the digit. Finally, in Figure 12 we show that the student also maintains control of the two main factors of variation discovered by the first teacher. ![](images/16_0.jpg) <center>Figure 9: The two principal factors of variation learned on SVHN appear related to shading of the digit. Left to right: darker to lighter. Top to bottom: light color on the left to light color on the right. </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 10: Third, fourth and fifth factors of variation automatically discovered on SVHN. Each row corresponds to one factor and each column corresponds to one sample. Each factor is varied while maintaining the rest of the latent units fixed. </center> ![](images/17_1.jpg) <center>Figure 11: Factors of variation related to the center digit class appear to emerge as the 9th and 10th discovered factors during the unsupervised progressive procedure described in Section 3. Here we show how the student model with Jacobian supervision and \(d = 16\) can be used to manipulate the digit class while approximately maintaining the style of the digit, by varying the latent units corresponding to those factors. The bottom row shows the original images (reconstructed by the autoencoder). All images are from the test set and were not seen during training. </center> ![](images/17_2.jpg) <center>Figure 12: Result of the student with Jacobian supervision \((d = 16)\) when varying the two factors learned by the teacher (Fig. 9), for four different images (whose reconstruction is shown on the bottom row). The conditioning related to shading is maintained. (Left to right: darker to lighter. Top to bottom: light color on the left to light color on the right.)
All images are from the test set and were not seen during training. </center> <--- Page Split --->
accept
Accept (Poster)
6.333333
ICLR_2019_paper_0846
iclr
2,019
# RECTIFIED GRADIENT: LAYER-WISE THRESHOLDING FOR SHARP AND COHERENT ATTRIBUTION MAPS Anonymous authors Paper under double-blind review ## ABSTRACT The saliency map, or the gradient of the score function with respect to the input, is the most basic means of interpreting deep neural network decisions. However, saliency maps are often visually noisy. Although several hypotheses were proposed to account for this phenomenon, there is no work that provides a rigorous analysis of noisy saliency maps. This may be a problem as numerous advanced attribution methods were proposed under the assumption that the existing hypotheses are true. In this paper, we identify the cause of noisy saliency maps. Then, we propose Rectified Gradient, a simple method that significantly improves saliency maps by alleviating that cause. Experiments showed the effectiveness of our method and its superiority over other attribution methods. Code and examples for the experiments will be released publicly. ## 1 INTRODUCTION The gradient of the score function with respect to the input, also called the saliency map (Erhan et al., 2009; Baehrens et al., 2010; Simonyan et al., 2014), is the most basic means of interpreting deep neural networks (DNNs). It is also a baseline method for other advanced attribution-based methods. However, our understanding of saliency maps is still poor. Previous studies such as Springenberg et al. (2015) and Selvaraju et al. (2017) have noted that saliency maps tend to be visually noisy. To explain this phenomenon, Sundararajan et al. (2016) and Smilkov et al. (2017) suggested saturation and discontinuous gradients as the causes (see Section 2.1 for further explanation). There were several studies attempting to improve saliency maps by tackling these hypothesized causes (Bach et al., 2015; Montavon et al., 2017; Sundararajan et al., 2016; Shrikumar et al., 2017; Smilkov et al., 2017; Sundararajan et al., 2017). Even though such attribution methods generally produce better visualizations, we find it troubling that the hypotheses regarding noisy saliency maps have not been rigorously verified (see Section 2.2 for more detail on attribution methods). In other words, numerous attribution methods were built upon unproven claims that gradient discontinuity or saturation truly causes saliency maps to be noisy. This situation gives rise to two major problems. First, if the hypotheses regarding noisy saliency maps are incorrect, current and future works based on those hypotheses will also be erroneous. Second, as we do not know precisely why saliency maps are noisy, we have to rely on heuristics and guesswork to develop better attribution methods. In this paper, we address these problems by identifying that saliency maps are noisy because DNNs do not filter out irrelevant features during forward propagation. We then introduce Rectified Gradient, or RectGrad for short, a simple technique that significantly improves the quality of saliency maps by alleviating the cause through layer-wise thresholding during backpropagation. Finally, we demonstrate that RectGrad produces attributions qualitatively superior and quantitatively comparable to other attribution methods. Specifically, we have the following key contributions: - We explain why saliency maps are noisy. Noise occurs in saliency maps when irrelevant features have positive pre-activation values and consequently pass through ReLU activation functions. This causes gradients to be nonzero at unimportant regions.
We perform experiments with networks trained on CIFAR-10 to justify our claims (Section 3). <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Comparison of attribution methods. See Section 5 for details on the visualization. </center> - We introduce Rectified Gradient, a method that removes noise from saliency maps by thresholding irrelevant units at ReLU binary gates during backpropagation (Section 4). We first explain the rationale behind Rectified Gradient (Section 4.1). We then prove that Rectified Gradient generalizes Deconvolution and Guided Backpropagation (Section 4.2). In addition, we discuss two techniques that enhance the visual quality of Rectified Gradient attribution maps (Appendix C). - We first investigate the effect of threshold level on attribution maps produced by Rectified Gradient (Section 5.1). Then, we apply Rectified Gradient to networks trained on CIFAR-10 and ImageNet to demonstrate that it produces qualitatively superior attribution maps (Section 5.2). We also compare Rectified Gradient with other attribution methods using several quantitative metrics (Section 5.3). ## 2 BACKGROUND OVERVIEW Let \(S:\mathbb{R}^{d}\mapsto \mathbb{R}^{|C|}\) be an image classification network, where \(x\in \mathbb{R}^{d}\) is a single image instance and \(C\) is the set of image classes. Then, we can define a score function \(S_{c}:\mathbb{R}^{d}\mapsto \mathbb{R}\) for each class \(c\in C\) and the final class of the image \(x\) is given by \(class(x) = \arg \max_{c\in C}S_{c}(x)\) . A typical score function is constructed by alternately composing affine transformations and nonlinear activation functions. A squashing function such as softmax is applied to the final layer. Since functions comprising \(S_{c}\) are differentiable or piecewise linear, the score function is also piecewise differentiable. Using this fact, Erhan et al. (2009), Baehrens et al. (2010) and Simonyan et al. (2014) proposed the saliency map, or the gradient of \(S_{c}\) with respect to \(x\) , to highlight features within \(x\) that the network associates with the given class. In an ideal case, saliency maps highlight objects of interest. However, previous studies such as Springenberg et al. (2015) and Selvaraju et al. (2017) have pointed out that saliency maps tend to be visually noisy, as verified by Figure 1. Three hypotheses were proposed to account for this phenomenon. We describe them in the next section. ### 2.1 PREVIOUS HYPOTHESES Saliency Maps are Truthful. Smilkov et al. (2017) suggested that noisy saliency maps are faithful descriptions of what the network is doing. That is, pixels scattered seemingly at random are actually crucial to how the network makes a decision. In short, this hypothesis claims that noise is actually informative. Discontinuous Gradients. Smilkov et al. (2017) and Shrikumar et al. (2017) proposed that saliency maps are noisy due to the piecewise linearity of the score function. Specifically, since typical DNNs use ReLU activation functions and max pooling, the derivative of the score function with respect to the input will not be continuously differentiable. Under this hypothesis, noise is caused by meaningless local variations in the gradient. <--- Page Split ---> Saturating Score Function. Shrikumar et al. (2017) and Sundararajan et al. (2017) suggested that important features may have small gradient due to saturation. In other words, the score function can flatten in the proximity of the input and have a small derivative.
This hypothesis explains why informative features may not be highlighted in the saliency map even though they contributed significantly to the decision of the DNN. ### 2.2 PREVIOUS WORKS ON IMPROVING SALIENCY MAPS DNN interpretation methods that assign a signed attribution value to each input feature are collectively called attribution methods. Attributions are usually visualized as a heatmap by arranging them to have the same shape as the input sample. Such heatmaps are called attribution maps. We now describe attribution methods that have been proposed to improve saliency maps. Attribution Methods Addressing Discontinuity. SmoothGrad (Smilkov et al., 2017) attempts to smooth the discontinuous gradient with a Gaussian kernel. Since calculating the local average in a high dimensional space is intractable, the authors proposed a stochastic approximation which takes random samples in a neighborhood of the input \(x\) and then averages their gradients. Attribution Methods Addressing Saturation. Since saliency maps estimate the local importance of each input feature, they are vulnerable to saturation. Therefore, attribution methods such as Gradient \* Input (Shrikumar et al., 2017), Layer-wise Relevance Propagation (LRP) (Bach et al., 2015), DeepLIFT (Shrikumar et al., 2017) and Integrated Gradient (Sundararajan et al., 2017) attempt to alleviate saturation by estimating the global importance of each pixel (Ancona et al., 2018). Ancona et al. (2018) have also shown that several global attribution methods are closely related under certain conditions. Other Attribution Methods. Some attribution methods take a different approach to improving saliency maps. Deconvolution (Zeiler & Fergus, 2014) and Guided Backpropagation (Springenberg et al., 2015) remove negative gradient during backpropagation. Due to this imputation procedure, Deconvolution and Guided Backpropagation yield attribution maps sharper than those of other methods. However, Nie et al. (2018) have recently proven that these methods actually perform partial image recovery, which is unrelated to DNN decisions. ## 3 OUR EXPLANATION FOR NOISY SALIENCY MAPS For brevity, we refer to pixels on the background as background features and pixels on the object as object features. Then, noise in a saliency map corresponds to background gradient, or gradient that highlights background features. We assume the DNN uses ReLU activation functions. Under this condition, nonzero background gradient indicates the presence of at least one positive pre-activation in each network layer corresponding to background features. To verify this, we visualized intermediate layer activations of a convolutional neural network (CNN) trained on CIFAR-10. Figure 2b shows convolutional layer feature maps for an image that produced a noisy saliency map. Since CNN filters act as feature extractors, we expected the CNN to remove most background feature activations through convolutions. However, we found significant amounts of background feature activations in all convolution layers. As the last convolution layer is connected to fully connected layers, the majority of activations in the last convolution layer will have nonzero gradient. Hence, the gradient flowed through background feature activations up to the input. This gradient flow caused background gradient, as shown in Figure 2a. From our perspective, the answer to "why are saliency maps noisy?" is trivial. Saliency maps are noisy because background features pass through ReLU activation functions.
Therefore, rather than asking why saliency maps are noisy, we ask "do activations of background features highlighted by background gradient have nontrivial influence on the decision?" If the answer is yes, noise in saliency maps is informative as suggested by Smilkov et al. (2017), and saliency maps do not need any major improvement. However, if the answer is no, we should find a way to remove background gradient. We investigated this question through two experiments. Feature Map Occlusion. We evaluated the significance of background features by occluding activations at intermediate layers. Then, we analyzed the effect of this perturbation on the final decision. Note that this is different from the Sensitivity metric (Bach et al., 2015; Samek et al., 2017). <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: Feature map visualization for an image with a noisy saliency map. </center> Sensitivity measures the impact of occlusion in the data space (e.g. pixel occlusion) while we measured the impact of occlusion in each feature space. We first created a background mask that covers background features in the images. We then plotted the average class logits as we incrementally occluded intermediate layer activations that fell on the background mask. We carried out occlusion following a random ordering and took the average over 50 trials. Figures 3a and 3b give an example of a background mask and a completely occluded feature map respectively. Figure 3c shows that the final decision did not change throughout the occlusion process for all convolution layers. Moreover, the difference between the top label logit and the next largest logit remained constant. Therefore, background feature activations are irrelevant to the classification task. To further support this claim, we conducted a larger-scale version of this experiment, and we describe the procedure and results in Appendix A.1. Training Dataset Occlusion. Next, we show that gradient can be nonzero for completely uninformative features. We occluded the upper left corner of all images in the training dataset with a \(10 \times 10\) random patch and trained a randomly initialized CNN on the modified dataset. We used the same patch for all images. Since the test accuracy did not change significantly (79.4% to 79.3%), we expected the CNN to have learned to extract important features and ignore irrelevant ones. However, Figure 4 shows that gradient is nonzero for the patch although it is completely irrelevant to the classification task. We can draw three conclusions from these experiments: 1. DNNs do not filter out irrelevant features during forward propagation. 2. DNNs are capable of making correct decisions even if we occlude the majority of background feature activations in intermediate layers. This implies that most background feature activations are irrelevant to the classification task. 3. Since DNNs do not remove irrelevant features through ReLU activation functions, a zero threshold at ReLU binary gates during backpropagation also allows irrelevant information to flow through the gradient. With the conclusions above, we can refute the first of the three previous hypotheses. As for the second hypothesis, we can interpret meaningless local variation in the gradient as a side effect of irrelevant features contaminating the gradient.
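For readers who wish to reproduce the feature map occlusion probe, a minimal sketch is given below. It is illustrative only; `model_tail` (the layers after the probed feature map), `fmap` and `mask` are hypothetical names for the remaining network, an intermediate feature map of shape (C, H, W), and a boolean background mask of shape (H, W).

```python
import torch

@torch.no_grad()
def occlusion_curve(model_tail, fmap, mask, trials=50):
    # incrementally occlude background positions in a random order and
    # record the class logits after each occlusion, averaged over trials
    idx = mask.nonzero()                      # background (row, col) positions
    runs = []
    for _ in range(trials):
        f = fmap.clone()
        curve = []
        for i, j in idx[torch.randperm(len(idx))]:
            f[:, i, j] = 0.0                  # occlude one background position
            curve.append(model_tail(f[None])[0])
        runs.append(torch.stack(curve))       # (num_positions, num_classes)
    return torch.stack(runs).mean(0)          # average class logits over trials
```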
<--- Page Split ---> ![](images/4_0.jpg) <center>(c) Average class logits as background feature activations are incrementally occluded in a random order. The average is taken over 50 trials. Image class is illustrated by a solid line and other classes by dotted lines. </center> ![](images/4_1.jpg) <center>Figure 3: Impact of background feature activation occlusion on the final decision. </center> Figure 4: Saliency maps produced from a CNN trained on occluded images. The upper left corner of all the images in the training dataset is replaced with a \(10 \times 10\) random patch, as shown above. Readers should examine the \(8 \times 8\) patch enclosed by the red square instead of the entire \(10 \times 10\) patch due to the receptive field of filters in the first convolution layer \((3 \times 3)\) . Why the network does not learn to filter out irrelevant features is a matter of optimization, which is out of scope of this paper. However, we believe it is a phenomenon worth investigating. ## 4 RECTIFIED GRADIENT We now introduce our technique to improve saliency maps. As we have shown in Section 3, zero is a poor threshold at ReLU binary gates during backpropagation. This indicates that we need better thresholds at ReLU binary gates in order to remove uninformative gradient from saliency maps. To this end, we propose Rectified Gradient, or RectGrad for short, where the gradient propagates only through units whose importance scores exceed some threshold. The importance score for a unit is calculated by multiplying its activation with the gradient propagated up to the unit. Formally, RectGrad is given as follows: Suppose we have an \(L\) -layer ReLU DNN. Denote input feature \(i\) as \(x_{i}\) , the pre-activation of unit \(i\) in layer \(l\) as \(z_{i}^{(l)}\) , its activation as \(a_{i}^{(l)}\) and the gradient propagated up to \(a_{i}^{(l)}\) as \(R_{i}^{(l + 1)}\) . Let \(\mathbb{I}(\cdot)\) be the indicator function. Then, the relation between \(a_{i}^{(l)}\) and \(z_{i}^{(l)}\) is given by \(a_{i}^{(l)} = ReLU(z_{i}^{(l)}) = \max (z_{i}^{(l)},0)\) when \(l< L\) and \(a_{i}^{(L)} = softmax(z_{i}^{(L)})\) . By the chain rule, backward pass through the ReLU nonlinearity for vanilla gradient is achieved by \(R_{i}^{(l)} = \mathbb{I}(a_{i}^{(l)} > 0)\cdot R_{i}^{(l + 1)}\) . We modify this rule such that \(R_{i}^{(l)} = \mathbb{I}(a_{i}^{(l)}\cdot R_{i}^{(l + 1)} > \tau)\cdot R_{i}^{(l + 1)}\) for some threshold \(\tau\) . Backward pass through affine transformations and pooling operations is carried out in the same manner as backpropagation. Finally, importance scores for input features are calculated by multiplying the gradient propagated up to the input layer \((l = 0)\) with input features: \(x_{i}\cdot R_{i}^{(1)}\) . <--- Page Split ---> Instead of setting \(\tau\) to a constant value, we use the \(q^{\mathrm{th}}\) percentile of importance scores at each layer. This prevents the gradient from entirely dying out during the backward pass. Due to the simplicity of the propagation rule, RectGrad can easily be applied to DNNs in graph computation frameworks such as TensorFlow (Abadi et al., 2016) or PyTorch (Paszke et al., 2017). Listing 1 in Appendix D.1 shows how to implement RectGrad in TensorFlow. In Appendix C we also introduce two techniques, namely the padding trick and the proportional redistribution rule (PRR), that enhance the visual quality of RectGrad attribution maps.
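While the paper's reference implementation is in TensorFlow (Listing 1), the propagation rule can be sketched in PyTorch as a custom autograd function. This is an illustrative sketch, not the released implementation; in particular, computing the percentile per sample is an assumption.

```python
import torch
import torch.nn.functional as F

class RectGradReLU(torch.autograd.Function):
    """ReLU whose backward pass implements the RectGrad rule
    R^(l) = 1[a^(l) * R^(l+1) > tau] * R^(l+1)."""

    @staticmethod
    def forward(ctx, x, q=98.0):
        a = F.relu(x)
        ctx.save_for_backward(a)
        ctx.q = q
        return a

    @staticmethod
    def backward(ctx, grad_out):
        (a,) = ctx.saved_tensors
        score = a * grad_out                           # importance scores
        # tau: q-th percentile of this layer's scores (per sample, an assumption)
        tau = torch.quantile(score.flatten(1), ctx.q / 100.0, dim=1)
        tau = tau.view(-1, *([1] * (score.dim() - 1)))
        return torch.where(score > tau, grad_out, torch.zeros_like(grad_out)), None
```

To compute an attribution map, one would replace each ReLU with `RectGradReLU.apply` during the backward pass and take `x * x.grad` at the input, matching the final step described above.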
### 4.1 RATIONALE BEHIND THE PROPAGATION RULE FOR RECTIFIED GRADIENT This subsection explains the reason we have chosen \(R_{i}^{(l)} = \mathbb{I}(a_{i}^{(l)}\cdot R_{i}^{(l + 1)} > \tau)\cdot R_{i}^{(l + 1)}\) and not \(R_{i}^{(l)} = \mathbb{I}(a_{i}^{(l)} > \tau)\cdot R_{i}^{(l + 1)}\) or \(R_{i}^{(l)} = \mathbb{I}(R_{i}^{(l + 1)} > \tau)\cdot R_{i}^{(l + 1)}\) as the definition of RectGrad. The significance of multiplying a unit's activation with the gradient propagated up to the unit is that it estimates the marginal effect of that unit on the output (Ancona et al., 2018). For instance, consider the following linear model: \(f(a_{1},a_{2},a_{3}) = 2\cdot a_{1} + 1\cdot a_{2} + 3\cdot a_{3}\) . We have \(\partial f / \partial a_{1} = 2\) , \(\partial f / \partial a_{2} = 1\) , and \(\partial f / \partial a_{3} = 3\) . Suppose we are given inputs \(a_{1} = 2\) , \(a_{2} = 3\) , \(a_{3} = 1\) and we apply RectGrad with \(q = 67\) , i.e., we propagate the gradient through the unit with the highest importance score. Clearly \(a_{1}\) has the largest contribution of \(2\cdot 2 = 4\) to the final output, compared to \(1\cdot 3 = 3\) for \(a_{2}\) and \(3\cdot 1 = 3\) for \(a_{3}\) . Only the first rule correctly propagates gradient through the most influential unit \(a_{1}\) while the latter two rules mistakenly choose \(a_{2}\) and \(a_{3}\) respectively. Since the latter two rules fail even for this simple example, it is highly likely that they will not work for DNNs, which are constructed by composing multiple linear layers. On the other hand, the first rule propagates gradient through units with the largest marginal effect in a layer-wise manner. Hence, it makes sense to select the first propagation rule as the definition of RectGrad. Next, we show that RectGrad generalizes Deconvolution and Guided Backpropagation. ### 4.2 RELATION TO DECONVOLUTION AND GUIDED BACKPROPAGATION Claim 1. Deconvolution \* Input is equivalent to Rectified Gradient with the propagation rule \[R_{i}^{(l)} = \mathbb{I}\left[\left(a_{i}^{(l)} + \epsilon\right)\cdot R_{i}^{(l + 1)} > 0\right]\cdot R_{i}^{(l + 1)}\] for some small \(\epsilon > 0\) . Claim 2. Guided Backpropagation \* Input is equivalent to Rectified Gradient when \(\tau = 0\) : \[R_{i}^{(l)} = \mathbb{I}\left(a_{i}^{(l)}\cdot R_{i}^{(l + 1)} > 0\right)\cdot R_{i}^{(l + 1)}.\] The proofs for Claims 1 and 2 are provided in Appendices E.1 and E.2 respectively. These results indicate that RectGrad generalizes Deconvolution and Guided Backpropagation. Figure 1 illustrates the relation between the saliency map, Deconvolution, Guided Backpropagation and RectGrad. However, Nie et al. (2018) have recently proven that Deconvolution and Guided Backpropagation actually perform partial image recovery, which is unrelated to DNN decisions. RectGrad does not suffer from this problem as it does not satisfy the assumptions of the analyses of Nie et al. (2018) for two reasons. First, the threshold criterion is based on the product of activation and gradient which is not Gaussian distributed.\(^{1}\) Second, we set \(\tau\) as the \(q^{\mathrm{th}}\) percentile of importance scores and therefore \(\tau\) will vary layer by layer. We also show in Section 5.2 with adversarial attacks that attributions produced by RectGrad are class sensitive. Therefore, RectGrad inherits the sharp visualizations of Deconvolution and Guided Backpropagation while amending their disadvantages with layer-wise importance score thresholding.
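The linear example of Section 4.1 can be checked numerically with a few lines of code (an illustrative verification, using the same numbers as in the text):

```python
import torch

# the linear model f(a) = 2*a1 + 1*a2 + 3*a3 evaluated at a = (2, 3, 1)
a = torch.tensor([2.0, 3.0, 1.0], requires_grad=True)
w = torch.tensor([2.0, 1.0, 3.0])
f = (w * a).sum()
g = torch.autograd.grad(f, a)[0]   # gradients: [2, 1, 3]

score = a.detach() * g             # importance scores: [4, 3, 3]
print(score.argmax().item())       # 0 -> rule 1 keeps a1 (largest marginal effect)
print(a.detach().argmax().item())  # 1 -> activation-only rule would keep a2
print(g.argmax().item())           # 2 -> gradient-only rule would keep a3
```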
## 5 EXPERIMENTS To evaluate RectGrad, we performed a series of experiments using the Inception V4 network (Szegedy et al., 2017) trained on ImageNet (Russakovsky et al., 2015) and CNNs trained on CIFAR-10 (Krizhevsky & Hinton, 2009). See Appendix F.1 for details on the attribution map visualization method. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 5: Effect of threshold \(\tau\) (columns) on RectGrad for 3 images of the cabbage butterfly class in ImageNet (rows). The second column shows attribution maps with \(\tau = 0\) , which is equivalent to Guided Backpropagation \* Input. For the following columns, \(\tau\) is set to the \(q^{\mathrm{th}}\) percentile of importance scores. The padding trick was used for all attribution maps above. </center> ### 5.1 EFFECT OF THRESHOLD PERCENTILE RectGrad has one hyper-parameter \(\tau\) , which is set to the \(q^{\mathrm{th}}\) percentile of importance scores for each layer. Figure 5 shows the effect of the threshold percentile for several images from ImageNet. While the attribution maps were incomprehensible for \(q = 0\) , the visual quality dramatically improved as we incremented \(q\) up to 20. There was no significant change up to \(q = 80\) . Then the attribution maps became sparser again as we increased \(q\) further. We also observed that regions of high attribution did not change for \(q > 20\) . We speculate that the attributions stay constant between \(q = 20\) and 80 because of zero activations. That is, since we use ReLU activation functions, the majority of activations and consequently importance scores will be zero. Hence, \(\tau \approx 0\) for \(20 \leq q \leq 80\) . This causes RectGrad attribution maps to resemble those produced by Guided Backpropagation \* Input. It indicates that we have to increase \(q\) beyond 80 in order to produce sparser attribution maps that highlight important regions instead of reconstructing input images. ### 5.2 QUALITATIVE COMPARISON WITH BASELINE METHODS We used the saliency map, Gradient \* Input, Guided Backpropagation, SmoothGrad, Integrated Gradient, Epsilon-LRP and DeepLIFT as baseline methods. As for RectGrad, we used the padding trick and \(q = 98\) for all attribution maps. We show attributions both with and without application of the proportional redistribution rule. In this subsection, we compare RectGrad with other attribution methods through three experiments that each focus on a different aspect of qualitative evaluation. We also show that applying simple final thresholding to baseline methods is not enough to replicate the benefits of RectGrad. To demonstrate this, we applied a 95th-percentile final threshold to baseline attribution methods such that RectGrad and baseline attribution maps have similar levels of sparsity. Coherence. Following prior work (Simonyan et al., 2014; Zeiler & Fergus, 2014), we inspected two types of visual coherence. First, the attributions should fall on discriminative features (e.g. the object of interest), not the background. Second, the attributions should highlight similar features for images of the same class. For the first type of visual coherence, Figure 6 shows a side-by-side comparison between our method and baseline methods. It can clearly be seen that RectGrad produced attribution maps more visually coherent and focused than other methods; background noise was nearly nonexistent. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 6: Evaluation of coherence across different classes without and with final thresholding.
</center> ![](images/7_1.jpg) <center>Figure 7: Comparison of attribution maps for images (left column) and their adversarial examples (right column) without and with final thresholding. This figure shows examples where attribution maps produced by RectGrad changed significantly. </center> This phenomenon may be due to noise accumulation. Specifically, irrelevant features may have trivial gradient near the output layer. However, since the gradient is calculated by successive multiplication, the noise can grow exponentially as the gradient is propagated towards the input layer. This can result in confusing attribution maps which assign high attribution to irrelevant regions (e.g. the uniform background in "lighter"), especially for deep networks such as Inception. RectGrad does not suffer from this problem since it thresholds irrelevant features at every layer and hence stops noise accumulation. In this situation, final thresholding cannot replicate RectGrad's ability to remove noise. In Appendix A.2, we corroborate this claim by comparing Saliency map and RectGrad attributions as they are propagated towards the input layer. For the second type of visual coherence, Figure 11 in Appendix A.3 shows attribution maps for a pair of images belonging to the same class. Attribution maps generated by RectGrad consistently emphasized similar parts of the object of interest. On the contrary, Saliency map, Gradient \* Input and Epsilon-LRP emphasized different regions for each image instance. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 8: Comparison of amount of attribution on the occluded patch. The left and right charts compare the amount of attribution inside the occluded patch without and with final thresholding respectively. The numbers in parentheses show the custom threshold levels. </center> Attributions for SmoothGrad, Guided Backpropagation, Integrated Gradient and DeepLIFT were generally coherent across images of the same class. Nevertheless, they also highlighted background features and hence failed to satisfy the first type of visual coherence. This observation also holds for attribution maps with final thresholding. Adversarial Attack. We evaluated class sensitivity following prior work by Nie et al. (2018). Specifically, we compared the attributions for an image and its adversarial example. If the attribution method is class sensitive, attribution maps should change significantly since ReLU activations and consequently the predicted class have changed. On the other hand, if the attribution method merely does image reconstruction, attribution maps will not change much since we add an indistinguishable adversarial perturbation to the image. In this experiment, we used the fast gradient sign method (Goodfellow et al., 2015) with \(\epsilon = 0.01\) to generate adversarial examples. Figure 7 shows large changes in attribution maps produced by RectGrad. We observed that only RectGrad attributions were coherent with the class labels. Figure 12 in Appendix A.3 shows some instances where there was no significant change in attribution maps produced by RectGrad. In those cases, attribution maps for other methods also showed little change. Hence, we can conclude that RectGrad is at least as class sensitive as baseline attribution methods. We observed that this conclusion also holds with final thresholding. It is also possible that adversarial attacks only modified a small number of ReLU activations (i.e. the images were near the decision boundary), causing little change in attribution maps.
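For reference, the fast gradient sign method used above admits a compact sketch (illustrative only; it assumes inputs normalized to [0, 1] and a cross-entropy training loss):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.01):
    # fast gradient sign method (Goodfellow et al., 2015); eps matches the text
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # one signed gradient step, clipped back to the valid input range
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```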
### 5.3 QUANTITATIVE COMPARISON WITH BASELINE METHODS In this section, we quantitatively compare RectGrad with baseline methods using DNNs trained on CIFAR-10. We did not include Epsilon-LRP since it is equivalent to Gradient \* Input for ReLU DNNs (Ancona et al., 2018). We divided baseline attribution methods into local and global methods following the criterion proposed by Ancona et al. (2018). We also repeated the same experiments with final thresholding applied to the baselines to compare them with RectGrad in a similar sparsity setting. Training Dataset Occlusion. Just like the training dataset occlusion experiment in Section 3, we occluded the upper left corner of all images in the CIFAR-10 training dataset with a \(10 \times 10\) random patch and trained a randomly initialized CNN on the modified dataset. We then summed all absolute attribution within the patch and averaged across the test dataset. A reasonable attribution method should assign nearly zero attribution to the patch as it is completely irrelevant to the classification task. Figure 8 compares the amount of average attribution in the patch between attribution methods. We observed that without final thresholding, RectGrad assigned little or no attribution to the random patch. However, all other methods failed to do so. For this test, we found that using a \(q = 95\) final threshold led to only trivially different averages. Hence we used a custom threshold for each baseline method such that they had similar average attribution in the patch to RectGrad. We observed that RectGrad had a smaller standard deviation than baseline methods. This indicates that RectGrad more consistently assigns near-zero attribution to the patch. Therefore, RectGrad has advantages over <--- Page Split ---> baseline methods regardless of whether final thresholding is used or not. Figures for the following quantitative experiment outcomes are in Appendix A.4. Noise Level. We evaluated whether RectGrad really reduces noise through two experiments. For the first test, we created segmentation masks for 10 correctly classified images of each class (total 100 images) and measured how much attribution falls on the background. Specifically, we compared the sum of the absolute value of attribution on the background. For the second test, we measured the average total variation of attribution maps for each attribution method. The average was taken over the test dataset. Figure 13 shows that RectGrad assigned significantly less attribution to the background than baseline methods. Moreover, even with final thresholding, RectGrad outperformed baseline methods. In addition, Figure 14 shows that even though the total variation decreases for baseline methods after final thresholding, RectGrad outperforms them in both cases. The results imply that baselines with final thresholding cannot replicate RectGrad's ability to reduce noise. Sensitivity. We evaluated RectGrad using the Sensitivity metric proposed by Bach et al. (2015) and Samek et al. (2017). Specifically, we measured how the logit for the initial class changed as features were occluded based on the ordering assigned by the attribution method. We split the image into non-overlapping patches of \(2 \times 2\) pixels. Next, we computed attributions and summed all the values within each patch. We sorted the patches in decreasing order based on the aggregate attribution values. We then incrementally replaced the first 100 patches with the per-channel mean computed using the entire training set and measured the change in class logit.
We calculated the average across 500 randomly chosen test set images. An attribution method is better if it has a lower Sensitivity AUC. The results are shown in Figure 15. All attribution methods outperformed the random baseline in which we randomly removed patches. We observed that RectGrad performed better than local attribution methods. In comparison with global attribution methods, RectGrad showed similar performance up to approximately 10 patches (red vertical line) but the performance dropped as more patches were removed. In Appendix B.1, we offer an explanation for this behavior. Figure 16 shows that after final thresholding, RectGrad still outperforms local attribution methods. For global attribution methods, RectGrad now shows similar performance. ROAR and KAR. We evaluated RectGrad using Remove and Retrain (ROAR) and Keep and Retrain (KAR) proposed by Hooker et al. (2018). Specifically, we measured how the performance of the classifier changed as features were occluded based on the ordering assigned by the attribution method. For ROAR, given an attribution method, we replaced a fraction of all CIFAR-10 pixels that were estimated to be most important with a constant value. We then retrained a CNN on the modified dataset and measured the change in test accuracy. For KAR, we replaced a fraction of all CIFAR-10 pixels that were estimated to be least important. We trained 3 CNNs per estimator for each fraction \(\{0.1, 0.3, 0.5, 0.7, 0.9\}\) . We measured test accuracy as the average over these 3 CNNs. An attribution method is better if it has a lower ROAR AUC and a higher KAR AUC. Figure 17 presents ROAR scores. All attribution methods outperformed the random baseline in which we randomly removed pixels. RectGrad showed similar performance to local attribution methods but performed worse than all global attribution methods. Next, Figure 18 shows KAR scores. Interestingly, all baseline attribution methods failed to exceed even the random baseline. Only RectGrad had similar or better performance than the random baseline. In Appendix B.2, we offer an explanation for why RectGrad performed poorly in ROAR. ## 6 CONCLUSIONS The saliency map is the most basic means of interpreting deep neural network decisions. However, it is often visually noisy. Although several hypotheses were proposed to account for this phenomenon, no prior work provides a thorough analysis of noisy saliency maps. Therefore, we first identified that saliency maps are noisy because DNNs do not filter out irrelevant features during forward propagation. We then proposed Rectified Gradient, which significantly improves saliency maps by alleviating this problem through layer-wise thresholding during backpropagation. We showed that Rectified Gradient generalizes Deconvolution and Guided Backpropagation and moreover, overcomes the class-insensitivity problem. We also demonstrated through extensive experiments that Rectified Gradient outperforms previous attribution methods. <--- Page Split ---> ## REFERENCES Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI'16, pp. 265-283, Berkeley, CA, USA, 2016.
USENIX Association. ISBN 978-1-931971-33-1. URL http://dl.acm.org/citation.cfm?id=3026877.3026899. Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for deep neural networks. In International Conference on Learning Representations, 2018. Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7):1-46, 2015. ISSN 19326203. doi: 10.1371/journal.pone.0130140. David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. Journal of Machine Learning Research, 11(Jun):1803-1831, 2010. Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features of a deep network. Technical Report 1341, University of Montreal, 2009. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015. Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, and Been Kim. Evaluating feature importance estimates. In ICML Workshop on Human Interpretability in Machine Learning, 2018. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition, 65:211-222, 2017. Weili Nie, Yang Zhang, and Ankit Patel. A theoretical explanation for perplexing behaviors of backpropagation-based visualizations. In International Conference on Machine Learning, 2018. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Workshop on Autodiff, 2017. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015. Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization of what a deep neural network has learned. IEEE Transactions on Neural Networks and Learning Systems, 28(11):2660-2673, 2017. Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In The IEEE International Conference on Computer Vision, Oct 2017. Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In International Conference on Machine Learning, 2017. <--- Page Split ---> Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In International Conference on Learning Representations Workshop, 2014. Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise. In ICML Workshop on Visualization for Deep Learning, 2017.
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. In International Conference on Learning Representations Workshop, 2015. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Gradients of counterfactuals. arXiv preprint arXiv:1611.02639, 2016. Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International Conference on Machine Learning, 2017. Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In AAAI Conference on Artificial Intelligence, 2017. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818-833. Springer, 2014. <--- Page Split ---> ## A EXPERIMENT RESULTS ## A.1 SUPPLEMENTARY EXPERIMENT FOR FEATURE MAP OCCLUSION ![](images/12_0.jpg) <center>Figure 9: Larger-scale study of the impact of background feature activation occlusion on the final decision. </center> To further support our claim that background feature activations are irrelevant to the classification task, we conducted a larger-scale experiment. We created segmentation masks for 10 correctly classified images of each class (total 100 images) and repeated the feature map occlusion for each image. We then took the average of (class logit) minus (largest logit among the other 9 classes) across all 100 images. Figures 9a and 9b give an example of a background segmentation mask and a completely occluded feature map. Figure 9c shows that the difference is generally positive throughout the occlusion process, that is, the class does not change for most images. From this, we can infer that background features are generally irrelevant to the classification task. <--- Page Split ---> ## A.2 SUPPLEMENTARY EXPERIMENT FOR NOISE ACCUMULATION ![](images/13_0.jpg) <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 10: Saliency map and RectGrad attributions at Inception v4 intermediate layers as they are propagated toward the input layer. We show channel-wise average attributions for hidden layer inputs with respect to the output layer. For each subfigure, the first row shows the input image and the Saliency map and RectGrad attribution maps. The second and third rows show Saliency map and RectGrad attributions at intermediate layers, respectively. An attribution map is closer to the output layer if it is closer to the right. </center> To verify our claims on the noise accumulation phenomenon, we compared Saliency map and RectGrad attributions as they are propagated towards the input layer. As Figure 10 shows, at higher layers, Saliency map attributions for objects of interest are generally larger than or equal to attributions on the background. However, as they are propagated towards the input layer, attributions for objects of interest diminish while background attributions grow. On the other hand, RectGrad removes background attributions from higher layers through importance score based thresholding, stopping noise accumulation in the first place. <--- Page Split --->
This figure shows examples where attribution maps produced by RectGrad did not change significantly. </center>

<--- Page Split --->

## A.4 QUANTITATIVE EXPERIMENTS

![](images/16_0.jpg) <center>Figure 13: Comparison of amount of attribution on the background. The left and right charts compare the amount of attribution outside the mask (on the background) without and with final thresholding respectively. </center>

![](images/16_1.jpg) <center>Figure 14: Comparison of average total variation. The left and right charts compare average total variation without and with final thresholding respectively. </center>

<--- Page Split --->

![](images/17_0.jpg) <center>Figure 15: Comparison of Sensitivity. The left plot compares RectGrad with local attribution methods and the right plot with global attribution methods. We also include the random baseline (patches are randomly removed) for reference. Lower AUC indicates a better attribution method. The red vertical line in the right plot indicates where RectGrad starts to perform worse than baseline global attribution methods (10 patches). We took the average over 500 randomly chosen test set images. </center>

![](images/17_1.jpg) <center>Figure 16: Comparison of Sensitivity after final thresholding. The left plot compares RectGrad with local attribution methods and the right plot with global attribution methods. We also include the random baseline (patches are randomly removed) for reference. Lower AUC indicates a better attribution method. We took the average over 500 randomly chosen test set images. </center>

<--- Page Split --->

![](images/18_0.jpg) <center>Figure 17: Comparison of ROAR. The left plot compares RectGrad with local attribution methods and the right plot with global attribution methods. We also include the random baseline (pixels are randomly removed) for reference. Lower AUC indicates a better attribution method. </center>

![](images/18_1.jpg) <center>Figure 18: Comparison of KAR. The left plot compares RectGrad with local attribution methods and the right plot with global attribution methods. We also include the random baseline (pixels are randomly removed) for reference. Higher AUC indicates a better attribution method. </center>

<--- Page Split --->

## B ADDITIONAL EXPLANATION FOR QUANTITATIVE EXPERIMENTS

## B.1 SENSITIVITY

In comparison with global attribution methods, RectGrad showed similar performance up to approximately 10 patches (red vertical line), but the performance dropped as more patches were removed (Figure 15). We speculate that this happens due to the sparseness of RectGrad attribution maps. Since RectGrad attribution maps are sparser than those of other methods, occluding approximately 10 features is enough to remove the core features highlighted by RectGrad. Attributions for other features will not be as informative since they have trivial values. Figure 19 shows that this is indeed the case. For RectGrad, after occluding the top 10 \(2 \times 2\) patches, only attributions with small values remained. Gradient * Input, on the other hand, still had a significant amount of nontrivial leftover attributions. We also see from Figure 16 that this phenomenon happens for baseline methods with final thresholding as well. This implies that such behavior may be an inevitable consequence of sparseness.

![](images/19_0.jpg) <center>Figure 19: Comparison of attribution methods in sensitivity. The first row shows the image as the top \(N\) \(2 \times 2\) patches are occluded according to RectGrad.
The second and third rows show the positive parts (indicated by \(+\)) of RectGrad and Gradient * Input attribution maps as the top \(N\) \(2 \times 2\) patches are occluded respectively. We did not cap outlying values in this visualization. </center>

<--- Page Split --->

## B.2 ROAR

We believe that the poor performance of RectGrad in ROAR is also due to its sparseness. Since RectGrad produces visually coherent attribution maps, the occluded regions can act as discriminative features. To verify this, we replaced \(10\%\) of all CIFAR-10 pixels that were estimated to be most important with the channel-wise mean. We then trained a CNN on the occluded dataset and visualized RectGrad attribution maps for images whose original and occluded versions were both classified correctly. Figure 20 shows the results. Attribution maps highlighted pixels around the occluded regions and, moreover, similar regions were emphasized in the original image. This corroborates our claim that the occluded regions act as discriminative features. The assumption behind ROAR is that the occluded features do not influence the classification task (Hooker et al., 2018). Since the above observation contradicts this assumption, ROAR may not be suitable for objectively evaluating RectGrad.

![](images/20_0.jpg) <center>Figure 20: RectGrad attribution maps produced from a CNN trained on images occluded according to RectGrad. We show images whose original and occluded versions were both classified correctly. </center>

## C USEFUL TECHNIQUES

Here, we present two useful techniques that can enhance the visual quality of attribution maps produced by RectGrad.

## C.1 PADDING TRICK

Convolution inputs are typically zero padded along the border in order to preserve the spatial dimension of feature maps. This occasionally leads to high activation values along the border if zero is outside the input distribution. Since importance scores are calculated by multiplying activation with gradient, outlying border activations can cause RectGrad to be propagated through the border instead of relevant features. To solve this problem, we masked the border of the gradient to zero before the backward pass through convolutions with padding. One possible concern with the padding trick is that attributions may be faint for features adjacent to the border of the image. However, we did not find this to be a significant problem experimentally. Listing 2 in Appendix D.2 shows how to implement the padding trick in TensorFlow.

## C.2 PROPORTIONAL REDISTRIBUTION RULE (PRR) FOR POOLING LAYERS

Attribution maps produced by RectGrad tend to be rough due to the discrete nature of thresholding. This discontinuity can be compensated for by using the proportional redistribution rule proposed by Montavon et al. (2017) for the backward pass through max-pooling layers. Instead of propagating the gradient through only the most activated unit in the pool, the gradient is redistributed proportionally to unit activations. Since the redistribution operation is continuous, attribution maps generated with the proportional redistribution rule are smoother. Listing 3 in Appendix D.3 shows how to implement the proportional redistribution rule in TensorFlow.
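To make the redistribution concrete, here is a minimal NumPy sketch (ours, not the authors' code; the \(2 \times 2\) pool and the activation values are illustrative assumptions) contrasting the standard max-pooling backward pass with the proportional redistribution rule for a single pooling window:

```python
import numpy as np

pool_in = np.array([[1.0, 3.0],
                    [0.0, 2.0]])  # activations entering one 2x2 max-pooling window
grad_out = 1.0                    # gradient arriving at the pooled output

# Standard max-pool backward pass: the entire gradient is routed to the argmax unit.
max_grad = np.where(pool_in == pool_in.max(), grad_out, 0.0)
# -> [[0., 1.], [0., 0.]]

# Proportional redistribution rule: the gradient is split in proportion
# to the unit activations, which yields smoother attribution maps.
prr_grad = grad_out * pool_in / (pool_in.sum() + 1e-10)
# -> [[1/6, 1/2], [0., 1/3]]
```

Since ReLU activations are nonnegative, the redistribution weights form a convex combination, so the total amount of gradient flowing through the pool is preserved.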
<--- Page Split --->

## D TENSORFLOW CODES

## D.1 IMPLEMENTATION OF RECTIFIED GRADIENT

```python
import tensorflow as tf
from tensorflow.contrib.distributions import percentile

q = 98  # threshold percentile, the single hyperparameter of RectGrad

@tf.RegisterGradient("RectifiedRelu")
def _RectifiedReluGrad(op, grad):

    def threshold(x, q):
        # q-th percentile of the importance scores, taken per example.
        if len(x.shape.as_list()) > 3:
            thresh = percentile(x, q, axis=[1, 2, 3], keep_dims=True)
        else:
            # NOTE: this branch and the two lines below restore text lost
            # in extraction, following the propagation rule in Section 4.
            thresh = percentile(x, q, axis=[1], keep_dims=True)
        return thresh

    # Importance scores: activation multiplied with the incoming gradient.
    activation_grad = op.outputs[0] * grad
    thresh = threshold(activation_grad, q)

    return tf.where(thresh < activation_grad, grad, tf.zeros_like(grad))
```

Listing 1: Implementation of Rectified Gradient in TensorFlow. After registering this function as the gradient for ReLU activation functions, call tf.gradients() and multiply with inputs to generate attributions.

## D.2 IMPLEMENTATION OF THE PADDING TRICK

```python
import tensorflow as tf

@tf.RegisterGradient("RectifiedConv2D")
def _RectifiedConv2DGrad(op, grad):

    if op.get_attr('padding') == b'SAME':
        # Zero out the gradient along the one-pixel border so that padded
        # borders do not dominate the backward pass.
        shape = tf.shape(grad)
        mask = tf.ones([shape[0], shape[1] - 2, shape[2] - 2, shape[3]])
        mask = tf.pad(mask, [[0, 0], [1, 1], [1, 1], [0, 0]])
        grad = grad * mask

    input_grad = tf.nn.conv2d_backprop_input(
        tf.shape(op.inputs[0]), op.inputs[1], grad,
        op.get_attr('strides'), op.get_attr('padding'))
    filter_grad = tf.nn.conv2d_backprop_filter(
        op.inputs[0], tf.shape(op.inputs[1]), grad,
        op.get_attr('strides'), op.get_attr('padding'))

    return input_grad, filter_grad
```

Listing 2: Implementation of the padding trick in TensorFlow. After registering this function as the gradient for convolution operations, call tf.gradients() and multiply with inputs to generate attributions.

<--- Page Split --->

## D.3 IMPLEMENTATION OF THE PROPORTIONAL REDISTRIBUTION RULE

```python
import tensorflow as tf
from tensorflow.python.ops import gen_nn_ops

@tf.RegisterGradient("RectifiedMaxPool")
def _RectifiedMaxPoolGrad(op, grad):
    # Redistribute the gradient proportionally to the unit activations
    # instead of routing it only through the argmax unit.
    z = tf.nn.avg_pool(op.inputs[0], op.get_attr('ksize'),
                       op.get_attr('strides'), op.get_attr('padding')) + 1e-10
    s = grad / z
    c = gen_nn_ops._avg_pool_grad(tf.shape(op.inputs[0]), s,
                                  op.get_attr('ksize'), op.get_attr('strides'),
                                  op.get_attr('padding'))
    return op.inputs[0] * c
```

Listing 3: Implementation of the proportional redistribution rule in TensorFlow. After registering this function as the gradient for max-pooling operations, call tf.gradients() and multiply with inputs to generate attributions.

## E PROOF OF CLAIMS

## E.1 PROOF OF CLAIM 1

Proof. Note that the backward propagation rule for Deconvolution through the ReLU nonlinearity is given by

\[R_{i}^{(l)} = \mathbb{I}\left(R_{i}^{(l + 1)} > 0\right)\cdot R_{i}^{(l + 1)}. \quad (1)\]

Since the DNN uses ReLU activation functions, \(a_{i}^{(l)} + \epsilon > 0\) and therefore

\[\mathbb{I}\left[\left(a_{i}^{(l)} + \epsilon\right)\cdot R_{i}^{(l + 1)} > 0\right] = \mathbb{I}\left(R_{i}^{(l + 1)} > 0\right) \quad (2)\]

for all \(l\) and \(i\). The result follows from Equation 2.

## E.2 PROOF OF CLAIM 2

Proof. Note that the backward propagation rule for Guided Backpropagation through the ReLU nonlinearity is given by

\[R_{i}^{(l)} = \mathbb{I}\left(z_{i}^{(l)} > 0\right)\cdot \mathbb{I}\left(R_{i}^{(l + 1)} > 0\right)\cdot R_{i}^{(l + 1)}. \quad (3)\]

Since the DNN uses ReLU activation functions, \(a_{i}^{(l)} \geq 0\) and therefore

\[\mathbb{I}\left(a_{i}^{(l)}\cdot R_{i}^{(l + 1)} > 0\right) = \mathbb{I}\left(z_{i}^{(l)} > 0\right)\cdot \mathbb{I}\left(R_{i}^{(l + 1)} > 0\right) \quad (4)\]

for all \(l\) and \(i\). The result follows from Equation 4.
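As a usage sketch for the listings in Appendix D above, the registered gradients can be activated with TensorFlow 1.x's gradient_override_map. This is our sketch, not the paper's code: model_fn, images and class_index are hypothetical placeholders, and the network must be built inside the override context for the custom gradients to take effect.

```python
import tensorflow as tf

g = tf.get_default_graph()
with g.gradient_override_map({'Relu': 'RectifiedRelu',
                              'Conv2D': 'RectifiedConv2D',
                              'MaxPool': 'RectifiedMaxPool'}):
    logits = model_fn(images)          # hypothetical model-building function
    score = logits[:, class_index]     # logit of the class to be explained

# RectGrad attributions: rectified gradient multiplied with the input.
attributions = tf.gradients(score, images)[0] * images
```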
<--- Page Split --->

## F EXPERIMENTS SETUP

## F.1 ATTRIBUTION MAP VISUALIZATION

To visualize the attributions, we summed up the attributions along the color channel and then capped low outlying values to the \(0.5^{\mathrm{th}}\) percentile and high outlying values to the \(99.5^{\mathrm{th}}\) percentile for RGB images. We only capped outlying values for grayscale images.

## F.2 CIFAR-10

The CIFAR-10 dataset (Krizhevsky & Hinton, 2009) was pre-processed to normalize the input images into the range \([-1, 1]\). We trained a CNN using ReLU activation functions with Adam for 20 epochs to achieve \(79.4\%\) test accuracy. For the dataset occluded with the random patch, we used the same settings to achieve \(79.3\%\) test accuracy.

<table><tr><td>CIFAR-10 CNN</td></tr><tr><td>Conv 2D (3 × 3, 32 kernels)</td></tr><tr><td>Conv 2D (3 × 3, 32 kernels)</td></tr><tr><td>Max-pooling (2 × 2)</td></tr><tr><td>Dropout (0.25)</td></tr><tr><td>Conv 2D (3 × 3, 64 kernels)</td></tr><tr><td>Conv 2D (3 × 3, 64 kernels)</td></tr><tr><td>Max-pooling (2 × 2)</td></tr><tr><td>Dropout (0.25)</td></tr><tr><td>Dense (256)</td></tr><tr><td>Dropout (0.5)</td></tr><tr><td>Dense (10)</td></tr></table>

## F.3 INCEPTION V4

We used a pre-trained Inception V4 network. The details of this architecture can be found in Szegedy et al. (2017). For the adversarial attack, we used the fast gradient sign method with \(\epsilon = 0.01\).
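For reference, a minimal TensorFlow 1.x sketch of the fast gradient sign method used above (our sketch, not the paper's code; images, labels and logits are assumed to be predefined tensors, and the clipping range assumes inputs normalized to \([-1, 1]\) as in F.2):

```python
import tensorflow as tf

epsilon = 0.01  # perturbation budget used for the adversarial attacks

# Cross-entropy loss of the current prediction.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)

# One-step attack: move every pixel by epsilon in the direction that
# increases the loss, then clip back to the valid input range.
adv_images = images + epsilon * tf.sign(tf.gradients(loss, images)[0])
adv_images = tf.clip_by_value(adv_images, -1.0, 1.0)
```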
## ABSTRACT

The saliency map, or the gradient of the score function with respect to the input, is the most basic means of interpreting deep neural network decisions. However, saliency maps are often visually noisy. Although several hypotheses were proposed to account for this phenomenon, no existing work provides a rigorous analysis of noisy saliency maps. This may be a problem, as numerous advanced attribution methods were proposed under the assumption that the existing hypotheses are true. In this paper, we identify the cause of noisy saliency maps. Then, we propose Rectified Gradient, a simple method that significantly improves saliency maps by alleviating that cause. Experiments showed the effectiveness of our method and its superiority over other attribution methods. Code and examples for the experiments will be released publicly.

## 1 INTRODUCTION

The gradient of the score function with respect to the input, also called the saliency map (Erhan et al., 2009; Baehrens et al., 2010; Simonyan et al., 2014), is the most basic means of interpreting deep neural networks (DNNs). It is also a baseline method for other advanced attribution-based methods. However, our understanding of saliency maps is still poor. Previous studies such as Springenberg et al. (2015) and Selvaraju et al. (2017) have noted that saliency maps tend to be visually noisy. To explain this phenomenon, Sundararajan et al. (2016) and Smilkov et al. (2017) suggested saturation and discontinuous gradients as the causes (see Section 2.1 for further explanation). There were several studies attempting to improve saliency maps by tackling these hypothesized causes (Bach et al., 2015; Montavon et al., 2017; Sundararajan et al., 2016; Shrikumar et al., 2017; Smilkov et al., 2017; Sundararajan et al., 2017). Even though such attribution methods generally produce better visualizations, we find it troubling that the hypotheses regarding noisy saliency maps have not been rigorously verified (see Section 2.2 for more detail on attribution methods). In other words, numerous attribution methods were built upon unproven claims that gradient discontinuity or saturation truly causes saliency maps to be noisy.

This situation gives rise to two major problems. First, if the hypotheses regarding noisy saliency maps are incorrect, current and future work based on those hypotheses will also be erroneous. Second, as we do not know precisely why saliency maps are noisy, we have to rely on heuristics and guesswork to develop better attribution methods. In this paper, we address these problems by identifying that saliency maps are noisy because DNNs do not filter out irrelevant features during forward propagation. We then introduce Rectified Gradient, or RectGrad in short, a simple technique that significantly improves the quality of saliency maps by alleviating the cause through layer-wise thresholding during backpropagation. Finally, we demonstrate that RectGrad produces attributions qualitatively superior and quantitatively comparable to other attribution methods. Specifically, we have the following key contributions:

- We explain why saliency maps are noisy. Noise occurs in saliency maps when irrelevant features have positive pre-activation values and consequently pass through ReLU activation functions. This causes gradients to be nonzero at unimportant regions. We perform experiments with networks trained on CIFAR-10 to justify our claims (Section 3).

<--- Page Split --->

![](images/1_0.jpg) <center>Figure 1: Comparison of attribution methods.
See Section 5 for details on the visualization. </center>

- We introduce Rectified Gradient, a method that removes noise from saliency maps by thresholding irrelevant units at ReLU binary gates during backpropagation (Section 4). We first explain the rationale behind Rectified Gradient (Section 4.1). We then prove that Rectified Gradient generalizes Deconvolution and Guided Backpropagation (Section 4.2). In addition, we discuss two techniques that enhance the visual quality of Rectified Gradient attribution maps (Appendix C).
- We first investigate the effect of threshold level on attribution maps produced by Rectified Gradient (Section 5.1). Then, we apply Rectified Gradient to networks trained on CIFAR-10 and ImageNet to demonstrate that it produces qualitatively superior attribution maps (Section 5.2). We also compare Rectified Gradient with other attribution methods using several quantitative metrics (Section 5.3).

## 2 BACKGROUND OVERVIEW

Let \(S:\mathbb{R}^{d}\mapsto \mathbb{R}^{|C|}\) be an image classification network, where \(x\in \mathbb{R}^{d}\) is a single image instance and \(C\) is the set of image classes. Then, we can define a score function \(S_{c}:\mathbb{R}^{d}\mapsto \mathbb{R}\) for each class \(c\in C\), and the final class of the image \(x\) is given by \(class(x) = \arg \max_{c\in C}S_{c}(x)\). A typical score function is constructed by alternately composing affine transformations and nonlinear activation functions. A squashing function such as softmax is applied to the final layer. Since the functions comprising \(S_{c}\) are differentiable or piecewise linear, the score function is also piecewise differentiable. Using this fact, Erhan et al. (2009), Baehrens et al. (2010) and Simonyan et al. (2014) proposed the saliency map, or the gradient of \(S_{c}\) with respect to \(x\), to highlight features within \(x\) that the network associates with the given class. In an ideal case, saliency maps highlight objects of interest. However, previous studies such as Springenberg et al. (2015) and Selvaraju et al. (2017) have pointed out that saliency maps tend to be visually noisy, as verified by Figure 1. Three hypotheses were proposed to account for this phenomenon. We describe them in the next section.

### 2.1 PREVIOUS HYPOTHESES

Saliency Maps are Truthful. Smilkov et al. (2017) suggested that noisy saliency maps are faithful descriptions of what the network is doing. That is, pixels scattered seemingly at random are actually crucial to how the network makes a decision. In short, this hypothesis claims that noise is actually informative.

Discontinuous Gradients. Smilkov et al. (2017) and Shrikumar et al. (2017) proposed that saliency maps are noisy due to the piecewise linearity of the score function. Specifically, since typical DNNs use ReLU activation functions and max pooling, the derivative of the score function with respect to the input will not be continuously differentiable. Under this hypothesis, noise is caused by meaningless local variations in the gradient.

<--- Page Split --->

Saturating Score Function. Shrikumar et al. (2017) and Sundararajan et al. (2017) suggested that important features may have small gradient due to saturation. In other words, the score function can flatten in the proximity of the input and have a small derivative. This hypothesis explains why informative features may not be highlighted in the saliency map even though they contributed significantly to the decision of the DNN.
### 2.2 PREVIOUS WORK ON IMPROVING SALIENCY MAPS

DNN interpretation methods that assign a signed attribution value to each input feature are collectively called attribution methods. Attributions are usually visualized as a heatmap by arranging them to have the same shape as the input sample. Such heatmaps are called attribution maps. We now describe attribution methods that have been proposed to improve saliency maps.

Attribution Methods Addressing Discontinuity. SmoothGrad (Smilkov et al., 2017) attempts to smooth the discontinuous gradient with a Gaussian kernel. Since calculating the local average in a high dimensional space is intractable, the authors proposed a stochastic approximation which takes random samples in a neighborhood of the input \(x\) and then averages their gradients.

Attribution Methods Addressing Saturation. Since saliency maps estimate the local importance of each input feature, they are vulnerable to saturation. Therefore, attribution methods such as Gradient * Input (Shrikumar et al., 2017), Layer-wise Relevance Propagation (LRP) (Bach et al., 2015), DeepLIFT (Shrikumar et al., 2017) and Integrated Gradient (Sundararajan et al., 2017) attempt to alleviate saturation by estimating the global importance of each pixel (Ancona et al., 2018). Ancona et al. (2018) has also shown that several global attribution methods are closely related under certain conditions.

Other Attribution Methods. Some attribution methods take a different approach to improving saliency maps. Deconvolution (Zeiler & Fergus, 2014) and Guided Backpropagation (Springenberg et al., 2015) remove negative gradient during backpropagation. Due to this imputation procedure, Deconvolution and Guided Backpropagation yield attribution maps sharper than those of other methods. However, Nie et al. (2018) has recently proven that these methods are actually doing partial image recovery, which is unrelated to DNN decisions.

## 3 OUR EXPLANATION FOR NOISY SALIENCY MAPS

For brevity, we refer to pixels on the background as background features and pixels on the object as object features. Then, noise in a saliency map corresponds to background gradient, or gradient that highlights background features. We assume the DNN uses ReLU activation functions. Under this condition, nonzero background gradient indicates the presence of at least one positive pre-activation in each network layer corresponding to background features. To verify this, we visualized intermediate layer activations of a convolutional neural network (CNN) trained on CIFAR-10. Figure 2b shows convolutional layer feature maps for an image that produced a noisy saliency map. Since CNN filters act as feature extractors, we expected the CNN to remove most background feature activations through convolutions. However, we found significant amounts of background feature activations in all convolution layers. As the last convolution layer is connected to fully connected layers, the majority of activations in the last convolution layer will have nonzero gradient. Hence, the gradient flowed through background feature activations up to the input. This gradient flow caused background gradient, as shown in Figure 2a.

From our perspective, the answer to "why are saliency maps noisy?" is trivial. Saliency maps are noisy because background features pass through ReLU activation functions. Therefore, rather than asking why saliency maps are noisy, we ask "do activations of background features highlighted by background gradient have nontrivial influence on the decision?"
If the answer is yes, noise in saliency maps is informative as suggested by Smilkov et al. (2017), and saliency maps do not need any major improvement. However, if the answer is no, we should find a way to remove background gradient. We investigated this question through two experiments.

Feature Map Occlusion. We evaluated the significance of background features by occluding activations at intermediate layers. Then, we analyzed the effect of this perturbation on the final decision. Note that this is different from the Sensitivity metric (Bach et al., 2015; Samek et al., 2017).

<--- Page Split --->

![](images/3_0.jpg) <center>Figure 2: Feature map visualization for an image with a noisy saliency map. </center>

Sensitivity measures the impact of occlusion in the data space (e.g. pixel occlusion) while we measured the impact of occlusion in each feature space. We first created a background mask that covers background features in the images. We then plotted the average class logits as we incrementally occluded intermediate layer activations that fell on the background mask. We carried out occlusion following a random ordering and took the average over 50 trials. Figures 3a and 3b give an example of a background mask and a completely occluded feature map respectively. Figure 3c shows that the final decision did not change throughout the occlusion process for all convolution layers. Moreover, the difference between the top label logit and the next largest logit remained constant. Therefore, background feature activations are irrelevant to the classification task. To further support this claim, we conducted a larger-scale version of this experiment, and we describe the procedure and results in Appendix A.1.

Training Dataset Occlusion. Next, we show that gradient can be nonzero for completely uninformative features. We occluded the upper left corner of all images in the training dataset with a \(10 \times 10\) random patch and trained a randomly initialized CNN on the modified dataset. We used the same patch for all images. Since the test accuracy did not change significantly (79.4% to 79.3%), we expected the CNN to have learned to extract important features and ignore irrelevant ones. However, Figure 4 shows that the gradient is nonzero for the patch although it is completely irrelevant to the classification task.

We can draw three conclusions from these experiments:

1. DNNs do not filter out irrelevant features during forward propagation.
2. DNNs are capable of making correct decisions even if we occlude the majority of background feature activations in intermediate layers. This implies that most background feature activations are irrelevant to the classification task.
3. Since DNNs do not remove irrelevant features through ReLU activation functions, a zero threshold at ReLU binary gates during backpropagation also allows irrelevant information to flow through the gradient.

With the conclusions above, we can refute the first of the three previous hypotheses. As for the second hypothesis, we can interpret meaningless local variation in the gradient as a side effect of irrelevant features contaminating the gradient.

<--- Page Split --->

![](images/4_0.jpg) <center>(c) Average class logits as background feature activations are incrementally occluded in a random order. The average is taken over 50 trials. Image class is illustrated by a solid line and other classes by dotted lines.
</center>

![](images/4_1.jpg) <center>Figure 3: Impact of background feature activation occlusion on the final decision. </center>

Figure 4: Saliency maps produced from a CNN trained on occluded images. The upper left corner of all the images in the training dataset is replaced with a \(10 \times 10\) random patch, as shown above. Readers should examine the \(8 \times 8\) patch enclosed by the red square instead of the entire \(10 \times 10\) patch due to the receptive field of the filters in the first convolution layer \((3 \times 3)\).

Why the network does not learn to filter out irrelevant features is a matter of optimization, which is beyond the scope of this paper. However, we believe it is a phenomenon worth investigating.

## 4 RECTIFIED GRADIENT

We now introduce our technique to improve saliency maps. As we have shown in Section 3, zero is a poor threshold at ReLU binary gates during backpropagation. This indicates that we need better thresholds at ReLU binary gates in order to remove uninformative gradient from saliency maps. To this end, we propose Rectified Gradient, or RectGrad in short, where the gradient propagates only through units whose importance scores exceed some threshold. The importance score for a unit is calculated by multiplying its activation with the gradient propagated up to the unit.

Formally, RectGrad is given as follows: Suppose we have an \(L\)-layer ReLU DNN. Denote input feature \(i\) as \(x_{i}\), the pre-activation of unit \(i\) in layer \(l\) as \(z_{i}^{(l)}\), its activation as \(a_{i}^{(l)}\) and the gradient propagated up to \(a_{i}^{(l)}\) as \(R_{i}^{(l + 1)}\). Let \(\mathbb{I}(\cdot)\) be the indicator function. Then, the relation between \(a_{i}^{(l)}\) and \(z_{i}^{(l)}\) is given by \(a_{i}^{(l)} = ReLU(z_{i}^{(l)}) = \max (z_{i}^{(l)},0)\) when \(l< L\) and \(a_{i}^{(L)} = softmax(z_{i}^{(L)})\). By the chain rule, the backward pass through the ReLU nonlinearity for the vanilla gradient is achieved by \(R_{i}^{(l)} = \mathbb{I}(a_{i}^{(l)} > 0)\cdot R_{i}^{(l + 1)}\). We modify this rule such that \(R_{i}^{(l)} = \mathbb{I}(a_{i}^{(l)}\cdot R_{i}^{(l + 1)} > \tau)\cdot R_{i}^{(l + 1)}\) for some threshold \(\tau\). The backward pass through affine transformations and pooling operations is carried out in the same manner as in ordinary backpropagation. Finally, importance scores for input features are calculated by multiplying the gradient propagated up to the input layer \((l = 0)\) with the input features: \(x_{i}\cdot R_{i}^{(1)}\). Instead of setting \(\tau\) to a constant value, we use the \(q^{\mathrm{th}}\) percentile of importance scores at each layer. This prevents the gradient from entirely dying out during the backward pass.

<--- Page Split --->

Due to the simplicity of the propagation rule, RectGrad can easily be applied to DNNs in graph computation frameworks such as TensorFlow (Abadi et al., 2016) or PyTorch (Paszke et al., 2017). Listing 1 in Appendix D.1 shows how to implement RectGrad in TensorFlow. In Appendix C we also introduce two techniques, namely the padding trick and the proportional redistribution rule (PRR), that enhance the visual quality of RectGrad attribution maps.

### 4.1 RATIONALE BEHIND THE PROPAGATION RULE FOR RECTIFIED GRADIENT

This subsection explains the reason we have chosen \(R_{i}^{(l)} = \mathbb{I}(a_{i}^{(l)}\cdot R_{i}^{(l + 1)} > \tau)\cdot R_{i}^{(l + 1)}\) and not \(R_{i}^{(l)} = \mathbb{I}(a_{i}^{(l)} > \tau)\cdot R_{i}^{(l + 1)}\) or \(R_{i}^{(l)} = \mathbb{I}(R_{i}^{(l + 1)} > \tau)\cdot R_{i}^{(l + 1)}\) as the definition of RectGrad.
The significance of multiplying a unit's activation with the gradient propagated up to the unit is that it estimates the marginal effect of that unit on the output (Ancona et al., 2018). For instance, consider the following linear model: \(f(a_{1},a_{2},a_{3}) = 2\cdot a_{1} + 1\cdot a_{2} + 3\cdot a_{3}\). We have \(\partial f / \partial a_{1} = 2\), \(\partial f / \partial a_{2} = 1\), and \(\partial f / \partial a_{3} = 3\). Suppose we are given inputs \(a_{1} = 2\), \(a_{2} = 3\), \(a_{3} = 1\) and we apply RectGrad with \(q = 67\), i.e., we propagate the gradient through the unit with the highest importance score. Clearly \(a_{1}\) has the largest contribution of \(2\cdot 2 = 4\) to the final output, compared to the contributions \(1\cdot 3 = 3\) of \(a_{2}\) and \(3\cdot 1 = 3\) of \(a_{3}\). Only the first rule correctly propagates gradient through the most influential unit \(a_{1}\), while the latter two rules mistakenly choose \(a_{2}\) and \(a_{3}\) respectively. Since the latter two rules fail even for this simple example, it is highly likely that they will not work for DNNs, which are constructed by composing multiple linear layers. On the other hand, the first rule propagates gradient through units with the largest marginal effect in a layer-wise manner. Hence, it makes sense to select the first propagation rule as the definition of RectGrad. Next, we show that RectGrad generalizes Deconvolution and Guided Backpropagation.

### 4.2 RELATION TO DECONVOLUTION AND GUIDED BACKPROPAGATION

Claim 1. Deconvolution * Input is equivalent to Rectified Gradient with the propagation rule

\[R_{i}^{(l)} = \mathbb{I}\left[\left(a_{i}^{(l)} + \epsilon\right)\cdot R_{i}^{(l + 1)} > 0\right]\cdot R_{i}^{(l + 1)}\]

for some small \(\epsilon > 0\).

Claim 2. Guided Backpropagation * Input is equivalent to Rectified Gradient when \(\tau = 0\):

\[R_{i}^{(l)} = \mathbb{I}\left(a_{i}^{(l)}\cdot R_{i}^{(l + 1)} > 0\right)\cdot R_{i}^{(l + 1)}.\]

The proofs for Claims 1 and 2 are provided in Appendix E.1 and E.2 respectively. These results indicate that RectGrad generalizes Deconvolution and Guided Backpropagation. Figure 1 illustrates the relation between the saliency map, Deconvolution, Guided Backpropagation and RectGrad. However, Nie et al. (2018) has recently proven that Deconvolution and Guided Backpropagation are actually doing partial image recovery, which is unrelated to DNN decisions. RectGrad does not suffer from this problem as it does not satisfy the assumptions of the analyses of Nie et al. (2018), for two reasons. First, the threshold criterion is based on the product of activation and gradient, which is not Gaussian distributed.\(^{1}\) Second, we set \(\tau\) as the \(q^{\mathrm{th}}\) percentile of importance scores and therefore \(\tau\) will vary layer by layer. We also show in Section 5.2 with adversarial attacks that attributions produced by RectGrad are class sensitive. Therefore, RectGrad inherits the sharp visualizations of Deconvolution and Guided Backpropagation while amending their disadvantages with layer-wise importance score thresholding.

## 5 EXPERIMENTS

To evaluate RectGrad, we performed a series of experiments using the Inception V4 network (Szegedy et al., 2017) trained on ImageNet (Russakovsky et al., 2015) and CNNs trained on CIFAR-10 (Krizhevsky & Hinton, 2009). See Appendix F.1 for details on the attribution map visualization method.
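Before turning to the results, a minimal NumPy sketch (ours, not the paper's TensorFlow implementation in Listing 1) of a single backward step of the Section 4 rule may help fix the notation:

```python
import numpy as np

def rectgrad_relu_backward(activation, grad_in, q=98):
    """One RectGrad backward step through a ReLU layer:
    R_i = 1[a_i * R_i > tau] * R_i, with tau the q-th percentile
    of the layer's importance scores."""
    scores = activation * grad_in          # importance scores a * R
    tau = np.percentile(scores, q)         # layer-wise threshold
    return np.where(scores > tau, grad_in, 0.0)
```

Larger values of q make the surviving gradient sparser, matching the behavior shown in Figure 5 below.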
<--- Page Split --->

![](images/6_0.jpg) <center>Figure 5: Effect of threshold \(\tau\) (columns) on RectGrad for 3 images of the cabbage butterfly class in ImageNet (rows). The second column shows attribution maps with \(\tau = 0\), which is equivalent to Guided Backpropagation * Input. For the following columns, \(\tau\) is set to the \(q^{\mathrm{th}}\) percentile of importance scores. The padding trick was used for all attribution maps above. </center>

### 5.1 EFFECT OF THRESHOLD PERCENTILE

RectGrad has one hyper-parameter \(\tau\), which is set to the \(q^{\mathrm{th}}\) percentile of importance scores for each layer. Figure 5 shows the effect of the threshold percentile for several images from ImageNet. While the attribution maps were incomprehensible for \(q = 0\), the visual quality dramatically improved as we incremented \(q\) up to 20. There was no significant change up to \(q = 80\). Then the attribution maps began to become sparser again as we incremented \(q\) further. We also observed that the regions of high attribution did not change for \(q > 20\). We speculate that the attributions stay constant between \(q = 20\) and 80 because of zero activations. That is, since we use ReLU activation functions, the majority of activations, and consequently importance scores, will be zero. Hence, \(\tau \approx 0\) for \(20 \leq q \leq 80\). This causes RectGrad attribution maps to resemble those produced by Guided Backpropagation * Input. It indicates that we have to increase \(q\) beyond 80 in order to produce sparser attribution maps that highlight important regions instead of reconstructing input images.

### 5.2 QUALITATIVE COMPARISON WITH BASELINE METHODS

We used the saliency map, Gradient * Input, Guided Backpropagation, SmoothGrad, Integrated Gradient, Epsilon-LRP and DeepLIFT as baseline methods. As for RectGrad, we used the padding trick and \(q = 98\) for all attribution maps. We show attributions both with and without application of the proportional redistribution rule. In this subsection, we compare RectGrad with other attribution methods through three experiments that each focus on a different aspect of qualitative evaluation. We also show that applying simple final thresholding to baseline methods is not enough to replicate the benefits of RectGrad. To demonstrate this, we applied a 95th percentile final threshold to baseline attribution methods such that RectGrad and baseline attribution maps have similar levels of sparsity.

Coherence. Following prior work (Simonyan et al., 2014; Zeiler & Fergus, 2014), we inspected two types of visual coherence. First, the attributions should fall on discriminative features (e.g. the object of interest), not the background. Second, the attributions should highlight similar features for images of the same class.

For the first type of visual coherence, Figure 6 shows a side-by-side comparison between our method and baseline methods. It can clearly be seen that RectGrad produced attribution maps more visually coherent and focused than other methods; background noise was nearly nonexistent.

<--- Page Split --->

![](images/7_0.jpg) <center>Figure 6: Evaluation of coherence across different classes without and with final thresholding. </center>

![](images/7_1.jpg) <center>Figure 7: Comparison of attribution maps for images (left column) and their adversarial examples (right column) without and with final thresholding. This figure shows examples where attribution maps produced by RectGrad changed significantly. </center>

This phenomenon may be due to noise accumulation.
Specifically, irrelevant features may have trivial gradient near the output layer. However, since the gradient is calculated by successive multiplication, the noise can grow exponentially as the gradient is propagated towards the input layer. This can result in confusing attribution maps which assign high attribution to irrelevant regions (e.g. the uniform background in "lighter"), especially for deep networks such as Inception. RectGrad does not suffer from this problem since it thresholds irrelevant features at every layer and hence stops noise accumulation. In this situation, final thresholding cannot replicate RectGrad's ability to remove noise. In Appendix A.2, we corroborate this claim by comparing Saliency map and RectGrad attributions as they are propagated towards the input layer.

For the second type of visual coherence, Figure 11 in Appendix A.3 shows attribution maps for a pair of images belonging to the same class. Attribution maps generated by RectGrad consistently emphasized similar parts of the object of interest. On the contrary, Saliency map, Gradient * Input and Epsilon-LRP emphasized different regions for each image instance.

<--- Page Split --->

![](images/8_0.jpg) <center>Figure 8: Comparison of amount of attribution on the occluded patch. The left and right charts compare the amount of attribution inside the occluded patch without and with final thresholding respectively. The numbers in parentheses show the custom threshold levels. </center>

Attributions for SmoothGrad, Guided Backpropagation, Integrated Gradient and DeepLIFT were generally coherent across images of the same class. Nevertheless, they also highlighted background features and hence failed to satisfy the first type of visual coherence. This observation also holds for attribution maps with final thresholding.

Adversarial Attack. We evaluated class sensitivity following prior work by Nie et al. (2018). Specifically, we compared the attributions for an image and its adversarial example. If the attribution method is class sensitive, attribution maps should change significantly since the ReLU activations, and consequently the predicted class, have changed. On the other hand, if the attribution method merely does image reconstruction, attribution maps will not change much since we add an indistinguishable adversarial perturbation to the image. In this experiment, we used the fast gradient sign method (Goodfellow et al., 2015) with \(\epsilon = 0.01\) to generate adversarial examples. Figure 7 shows large changes in attribution maps produced by RectGrad. We observed that only RectGrad attributions were coherent with the class labels. Figure 12 in Appendix A.3 shows some instances where there was no significant change in attribution maps produced by RectGrad. In those cases, attribution maps for other methods also showed little change. Hence, we can conclude that RectGrad is at least as class sensitive as baseline attribution methods. We observed that this conclusion also holds with final thresholding. It is also possible that the adversarial attacks only modified a tiny fraction of ReLU activations (i.e. the images were near the decision boundary), causing little change in attribution maps.

### 5.3 QUANTITATIVE COMPARISON WITH BASELINE METHODS

In this section, we quantitatively compare RectGrad with baseline methods using DNNs trained on CIFAR-10. We did not include Epsilon-LRP since it is equivalent to Gradient * Input for ReLU DNNs (Ancona et al., 2018).
We divided baseline attribution methods into local and global methods following the criterion proposed by Ancona et al. (2018). We also repeated the same experiments with final thresholding applied to the baselines to compare them with RectGrad in a similar sparsity setting.

Training Dataset Occlusion. Just like the training dataset occlusion experiment in Section 3, we occluded the upper left corner of all images in the CIFAR-10 training dataset with a \(10 \times 10\) random patch and trained a randomly initialized CNN on the modified dataset. We then summed all absolute attribution within the patch and averaged across the test dataset. A reasonable attribution method should assign nearly zero attribution to the patch, as it is completely irrelevant to the classification task. Figure 8 compares the amount of average attribution in the patch between attribution methods. We observed that without final thresholding, RectGrad assigned little or no attribution to the random patch. However, all other methods failed to do so. For this test, we found that using a \(q = 95\) final threshold led to trivially different averages. Hence we used a custom threshold for each baseline method such that they had similar average attribution in the patch as RectGrad. We observed that RectGrad had a smaller standard deviation than baseline methods. This indicates that RectGrad more consistently assigns near-zero attribution to the patch. Therefore RectGrad has advantages over baseline methods regardless of whether the final threshold is used or not.

<--- Page Split --->

Figures for the following quantitative experiment outcomes are in Appendix A.4.

Noise Level. We evaluated whether RectGrad really reduces noise through two experiments. For the first test, we created segmentation masks for 10 correctly classified images of each class (total 100 images) and measured how much attribution falls on the background. Specifically, we compared the sum of the absolute value of attribution on the background. For the second test, we measured the average total variation of attribution maps for each attribution method. The average was taken over the test dataset. Figure 13 shows that RectGrad assigned significantly less attribution to the background than baseline methods. Moreover, even with final thresholding, RectGrad outperformed baseline methods. In addition, Figure 14 shows that even though the total variation reduces for baseline methods after final thresholding, RectGrad outperforms baseline methods in both cases. The results imply that baselines with final thresholding cannot replicate RectGrad's ability to reduce noise.

Sensitivity. We evaluated RectGrad using the Sensitivity metric proposed by Bach et al. (2015) and Samek et al. (2017). Specifically, we measured how the logit for the initial class changed as features were occluded based on the ordering assigned by the attribution method. We split the image into non-overlapping patches of \(2 \times 2\) pixels. Next, we computed attributions and summed all the values within each patch. We sorted the patches in decreasing order based on the aggregate attribution values. We then incrementally replaced the first 100 patches with the per-channel mean computed using the entire training set and measured the change in the class logit. We calculated the average across 500 randomly chosen test set images. An attribution method is better if it has a lower Sensitivity AUC. The results are shown in Figure 15. All attribution methods outperformed the random baseline in which we randomly removed patches.
We observed that RectGrad performed better than local attribution methods. In comparison with global attribution methods, RectGrad showed similar performance up to approximately 10 patches (red vertical line) but the performance dropped as more patches were removed. In Appendix B.1, we offer an explanation for this behavior. Figure 16 shows that after final thresholding, RectGrad still outperforms local attribution methods. For global attribution methods, RectGrad now shows similar performance.

ROAR and KAR. We evaluated RectGrad using Remove and Retrain (ROAR) and Keep and Retrain (KAR) proposed by Hooker et al. (2018). Specifically, we measured how the performance of the classifier changed as features were occluded based on the ordering assigned by the attribution method. For ROAR, given an attribution method, we replaced a fraction of all CIFAR-10 pixels that were estimated to be most important with a constant value. We then retrained a CNN on the modified dataset and measured the change in test accuracy. For KAR, we replaced a fraction of all CIFAR-10 pixels that were estimated to be least important. We trained 3 CNNs per estimator for each fraction \(\{0.1, 0.3, 0.5, 0.7, 0.9\}\). We measured test accuracy as the average of these 3 CNNs. An attribution method is better if it has a lower ROAR AUC and a higher KAR AUC.

Figure 17 presents ROAR scores. All attribution methods outperformed the random baseline in which we randomly removed pixels. RectGrad showed similar performance to local attribution methods but performed worse than all global attribution methods. Next, Figure 18 shows KAR scores. Interestingly, all baseline attribution methods failed to exceed even the random baseline. Only RectGrad had similar or better performance than the random baseline. In Appendix B.2, we offer an explanation for why RectGrad performed poorly in ROAR.

## 6 CONCLUSIONS

The saliency map is the most basic means of interpreting deep neural network decisions. However, it is often visually noisy. Although several hypotheses were proposed to account for this phenomenon, no existing work provides a thorough analysis of noisy saliency maps. Therefore, we first identified that saliency maps are noisy because DNNs do not filter out irrelevant features during forward propagation. We then proposed Rectified Gradient, which significantly improves saliency maps by alleviating this problem through layer-wise thresholding during backpropagation. We showed that Rectified Gradient generalizes Deconvolution and Guided Backpropagation and, moreover, overcomes the class-insensitivity problem. We also demonstrated through extensive experiments that Rectified Gradient outperforms previous attribution methods.

<--- Page Split --->

## A EXPERIMENT RESULTS

## A.1 SUPPLEMENTARY EXPERIMENT FOR FEATURE MAP OCCLUSION

![](images/12_0.jpg) <center>Figure 9: Larger-scale study of the impact of background feature activation occlusion on the final decision. </center>

To further support our claim that background feature activations are irrelevant to the classification task, we conducted a larger-scale experiment. We created segmentation masks for 10 correctly classified images of each class (total 100 images) and repeated the feature map occlusion for each image. We then took the average of (class logit) – (largest logit among the other 9 classes) across all 100 images. Figures 9a and 9b give an example of a background segmentation mask and a completely occluded feature map.
Figure 9c shows that the difference is generally positive throughout the occlusion process, that is, the class does not change for most images. From this, we can infer that background features are generally irrelevant to the classification task.

<--- Page Split --->

## A.2 SUPPLEMENTARY EXPERIMENT FOR NOISE ACCUMULATION

![](images/13_0.jpg)

<--- Page Split --->

![](images/14_0.jpg) <center>Figure 10: Saliency map and RectGrad attributions at Inception v4 intermediate layers as they are propagated toward the input layer. We show channel-wise average attributions for hidden layer inputs with respect to the output layer. For each subfigure, the first row shows the input image and the Saliency map and RectGrad attribution maps. The second and third rows show Saliency map and RectGrad attributions at intermediate layers, respectively. An attribution map is closer to the output layer if it is closer to the right. </center>

To verify our claims on the noise accumulation phenomenon, we compared Saliency map and RectGrad attributions as they are propagated towards the input layer. As Figure 10 shows, at higher layers, Saliency map attributions for objects of interest are generally larger than or equal to attributions on the background. However, as they are propagated towards the input layer, attributions for objects of interest diminish while background attributions grow. On the other hand, RectGrad removes background attributions from higher layers through importance score based thresholding, stopping noise accumulation in the first place.

<--- Page Split --->

## A.3 QUALITATIVE EXPERIMENTS

![](images/15_0.jpg) <center>Figure 11: Evaluation of coherence within the same class (rows) without and with final thresholding. </center>

![](images/15_1.jpg) <center>Figure 12: Comparison of attribution maps for images (left column) and their adversarial examples (right column) without and with final thresholding. This figure shows examples where attribution maps produced by RectGrad did not change significantly. </center>

<--- Page Split --->

## A.4 QUANTITATIVE EXPERIMENTS

![](images/16_0.jpg) <center>Figure 13: Comparison of amount of attribution on the background. The left and right charts compare the amount of attribution outside the mask (on the background) without and with final thresholding respectively. </center>

![](images/16_1.jpg) <center>Figure 14: Comparison of average total variation. The left and right charts compare average total variation without and with final thresholding respectively. </center>

<--- Page Split --->

![](images/17_0.jpg) <center>Figure 15: Comparison of Sensitivity. The left plot compares RectGrad with local attribution methods and the right plot with global attribution methods. We also include the random baseline (patches are randomly removed) for reference. Lower AUC indicates a better attribution method. The red vertical line in the right plot indicates where RectGrad starts to perform worse than baseline global attribution methods (10 patches). We took the average over 500 randomly chosen test set images. </center>

![](images/17_1.jpg) <center>Figure 16: Comparison of Sensitivity after final thresholding. The left plot compares RectGrad with local attribution methods and the right plot with global attribution methods. We also include the random baseline (patches are randomly removed) for reference. Lower AUC indicates a better attribution method. We took the average over 500 randomly chosen test set images.
</center>

<--- Page Split --->

![](images/18_0.jpg) <center>Figure 17: Comparison of ROAR. The left plot compares RectGrad with local attribution methods and the right plot with global attribution methods. We also include the random baseline (pixels are randomly removed) for reference. Lower AUC indicates a better attribution method. </center>

![](images/18_1.jpg) <center>Figure 18: Comparison of KAR. The left plot compares RectGrad with local attribution methods and the right plot with global attribution methods. We also include the random baseline (pixels are randomly removed) for reference. Higher AUC indicates a better attribution method. </center>

<--- Page Split --->

## B ADDITIONAL EXPLANATION FOR QUANTITATIVE EXPERIMENTS

## B.1 SENSITIVITY

In comparison with global attribution methods, RectGrad showed similar performance up to approximately 10 patches (red vertical line), but the performance dropped as more patches were removed (Figure 15). We speculate that this happens due to the sparseness of RectGrad attribution maps. Since RectGrad attribution maps are sparser than those of other methods, occluding approximately 10 features is enough to remove the core features highlighted by RectGrad. Attributions for other features will not be as informative since they have trivial values. Figure 19 shows that this is indeed the case. For RectGrad, after occluding the top 10 \(2 \times 2\) patches, only attributions with small values remained. Gradient * Input, on the other hand, still had a significant amount of nontrivial leftover attributions. We also see from Figure 16 that this phenomenon happens for baseline methods with final thresholding as well. This implies that such behavior may be an inevitable consequence of sparseness.

![](images/19_0.jpg) <center>Figure 19: Comparison of attribution methods in sensitivity. The first row shows the image as the top \(N\) \(2 \times 2\) patches are occluded according to RectGrad. The second and third rows show the positive parts (indicated by \(+\)) of RectGrad and Gradient * Input attribution maps as the top \(N\) \(2 \times 2\) patches are occluded respectively. We did not cap outlying values in this visualization. </center>

<--- Page Split --->

## B.2 ROAR

We believe that the poor performance of RectGrad in ROAR is also due to its sparseness. Since RectGrad produces visually coherent attribution maps, the occluded regions can act as discriminative features. To verify this, we replaced \(10\%\) of all CIFAR-10 pixels that were estimated to be most important with the channel-wise mean. We then trained a CNN on the occluded dataset and visualized RectGrad attribution maps for images whose original and occluded versions were both classified correctly. Figure 20 shows the results. Attribution maps highlighted pixels around the occluded regions and, moreover, similar regions were emphasized in the original image. This corroborates our claim that the occluded regions act as discriminative features. The assumption behind ROAR is that the occluded features do not influence the classification task (Hooker et al., 2018). Since the above observation contradicts this assumption, ROAR may not be suitable for objectively evaluating RectGrad.

![](images/20_0.jpg) <center>Figure 20: RectGrad attribution maps produced from a CNN trained on images occluded according to RectGrad. We show images whose original and occluded versions were both classified correctly.
</center>

## C USEFUL TECHNIQUES

Here, we present two useful techniques that can enhance the visual quality of attribution maps produced by RectGrad.

## C.1 PADDING TRICK

Convolution inputs are typically zero padded along the border in order to preserve the spatial dimension of feature maps. This occasionally leads to high activation values along the border if zero is outside the input distribution. Since importance scores are calculated by multiplying activation with gradient, outlying border activations can cause RectGrad to be propagated through the border instead of relevant features. To solve this problem, we masked the border of the gradient to zero before the backward pass through convolutions with padding. One possible concern with the padding trick is that attributions may be faint for features adjacent to the border of the image. However, we did not find this to be a significant problem experimentally. Listing 2 in Appendix D.2 shows how to implement the padding trick in TensorFlow.

## C.2 PROPORTIONAL REDISTRIBUTION RULE (PRR) FOR POOLING LAYERS

Attribution maps produced by RectGrad tend to be rough due to the discrete nature of thresholding. This discontinuity can be compensated for by using the proportional redistribution rule proposed by Montavon et al. (2017) for the backward pass through max-pooling layers. Instead of propagating the gradient through only the most activated unit in the pool, the gradient is redistributed proportionally to unit activations. Since the redistribution operation is continuous, attribution maps generated with the proportional redistribution rule are smoother. Listing 3 in Appendix D.3 shows how to implement the proportional redistribution rule in TensorFlow.

<--- Page Split --->

## D TENSORFLOW CODES

## D.1 IMPLEMENTATION OF RECTIFIED GRADIENT

```python
import tensorflow as tf
from tensorflow.contrib.distributions import percentile

q = 98  # threshold percentile, the single hyperparameter of RectGrad

@tf.RegisterGradient("RectifiedRelu")
def _RectifiedReluGrad(op, grad):

    def threshold(x, q):
        # q-th percentile of the importance scores, taken per example.
        if len(x.shape.as_list()) > 3:
            thresh = percentile(x, q, axis=[1, 2, 3], keep_dims=True)
        else:
            # NOTE: this branch and the two lines below restore text lost
            # in extraction, following the propagation rule in Section 4.
            thresh = percentile(x, q, axis=[1], keep_dims=True)
        return thresh

    # Importance scores: activation multiplied with the incoming gradient.
    activation_grad = op.outputs[0] * grad
    thresh = threshold(activation_grad, q)

    return tf.where(thresh < activation_grad, grad, tf.zeros_like(grad))
```

Listing 1: Implementation of Rectified Gradient in TensorFlow. After registering this function as the gradient for ReLU activation functions, call tf.gradients() and multiply with inputs to generate attributions.

## D.2 IMPLEMENTATION OF THE PADDING TRICK

```python
import tensorflow as tf

@tf.RegisterGradient("RectifiedConv2D")
def _RectifiedConv2DGrad(op, grad):

    if op.get_attr('padding') == b'SAME':
        # Zero out the gradient along the one-pixel border so that padded
        # borders do not dominate the backward pass.
        shape = tf.shape(grad)
        mask = tf.ones([shape[0], shape[1] - 2, shape[2] - 2, shape[3]])
        mask = tf.pad(mask, [[0, 0], [1, 1], [1, 1], [0, 0]])
        grad = grad * mask

    input_grad = tf.nn.conv2d_backprop_input(
        tf.shape(op.inputs[0]), op.inputs[1], grad,
        op.get_attr('strides'), op.get_attr('padding'))
    filter_grad = tf.nn.conv2d_backprop_filter(
        op.inputs[0], tf.shape(op.inputs[1]), grad,
        op.get_attr('strides'), op.get_attr('padding'))

    return input_grad, filter_grad
```

Listing 2: Implementation of the padding trick in TensorFlow. After registering this function as the gradient for convolution operations, call tf.gradients() and multiply with inputs to generate attributions.
<--- Page Split --->

## D.3 IMPLEMENTATION OF THE PROPORTIONAL REDISTRIBUTION RULE

```python
import tensorflow as tf
from tensorflow.python.ops import gen_nn_ops

@tf.RegisterGradient("RectifiedMaxPool")
def _RectifiedMaxPoolGrad(op, grad):
    # Redistribute the incoming gradient proportionally to the unit activations in each pool
    z = tf.nn.avg_pool(op.inputs[0], op.get_attr('ksize'),
                       op.get_attr('strides'), op.get_attr('padding')) + 1e-10
    s = grad / z
    c = gen_nn_ops._avg_pool_grad(tf.shape(op.inputs[0]), s, op.get_attr('ksize'),
                                  op.get_attr('strides'), op.get_attr('padding'))
    return op.inputs[0] * c
```

Listing 3: Implementation of the proportional redistribution rule in TensorFlow. After registering this function as the gradient for max-pooling operations, call tf.gradients() and multiply with inputs to generate attributions.

## E PROOF OF CLAIMS

## E.1 PROOF OF CLAIM 1

Proof. Note that the backward propagation rule for Deconvolution through the ReLU nonlinearity is given by \[R_{i}^{(l)} = \mathbb{I}\left(R_{i}^{(l + 1)} > 0\right)\cdot R_{i}^{(l + 1)}. \quad (1)\] Since the DNN uses ReLU activation functions, \(a_{i}^{(l)} + \epsilon > 0\) and therefore \[\mathbb{I}\left[\left(a_{i}^{(l)} + \epsilon\right)\cdot R_{i}^{(l + 1)} > 0\right] = \mathbb{I}\left(R_{i}^{(l + 1)} > 0\right) \quad (2)\] for all \(l\) and \(i\). The result follows from Equation 2.

## E.2 PROOF OF CLAIM 2

Proof. Note that the backward propagation rule for Guided Backpropagation through the ReLU nonlinearity is given by \[R_{i}^{(l)} = \mathbb{I}\left(z_{i}^{(l)} > 0\right)\cdot \mathbb{I}\left(R_{i}^{(l + 1)} > 0\right)\cdot R_{i}^{(l + 1)}. \quad (3)\] Since the DNN uses ReLU activation functions, \(a_{i}^{(l)} \geq 0\) and therefore \[\mathbb{I}\left(a_{i}^{(l)}\cdot R_{i}^{(l + 1)} > 0\right) = \mathbb{I}\left(z_{i}^{(l)} > 0\right)\cdot \mathbb{I}\left(R_{i}^{(l + 1)} > 0\right) \quad (4)\] for all \(l\) and \(i\). The result follows from Equation 4.

<--- Page Split --->

## F EXPERIMENTAL SETUP

## F.1 ATTRIBUTION MAP VISUALIZATION

To visualize the attributions, we summed the attributions along the color channel and then capped low outlying values to the \(0.5^{\mathrm{th}}\) percentile and high outlying values to the \(99.5^{\mathrm{th}}\) percentile for RGB images. For grayscale images, we only capped outlying values.

## F.2 CIFAR-10

The CIFAR-10 dataset (Krizhevsky & Hinton, 2009) was pre-processed to normalize the input images into the range \([-1, 1]\). We trained a CNN using ReLU activation functions with Adam for 20 epochs to achieve \(79.4\%\) test accuracy. For the dataset occluded with the random patch, we used the same settings to achieve \(79.3\%\) test accuracy.

<table><tr><td>CIFAR-10 CNN</td></tr><tr><td>Conv 2D (3 × 3, 32 kernels)</td></tr><tr><td>Conv 2D (3 × 3, 32 kernels)</td></tr><tr><td>Max-pooling (2 × 2)</td></tr><tr><td>Dropout (0.25)</td></tr><tr><td>Conv 2D (3 × 3, 64 kernels)</td></tr><tr><td>Conv 2D (3 × 3, 64 kernels)</td></tr><tr><td>Max-pooling (2 × 2)</td></tr><tr><td>Dropout (0.25)</td></tr><tr><td>Dense (256)</td></tr><tr><td>Dropout (0.5)</td></tr><tr><td>Dense (10)</td></tr></table>

## F.3 INCEPTION V4

We used a pre-trained Inception V4 network. The details of this architecture can be found in Szegedy et al. (2017). For the adversarial attack, we used the fast gradient sign method with \(\epsilon = 0.01\).

<--- Page Split --->
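For reference, here is a minimal sketch of the fast gradient sign method as it could be applied in this setup; the names `logits_fn`, `images`, and `labels` are hypothetical stand-ins, and inputs are assumed to be scaled to \([-1, 1]\) as above.

```python
import tensorflow as tf

# Minimal FGSM sketch (Goodfellow et al., 2014). `logits_fn`, `images`, and
# `labels` are hypothetical stand-ins; inputs are assumed scaled to [-1, 1].
def fgsm(logits_fn, images, labels, epsilon=0.01):
    logits = logits_fn(images)
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    grad = tf.gradients(loss, images)[0]
    # One signed gradient step, clipped back to the valid input range
    return tf.clip_by_value(images + epsilon * tf.sign(grad), -1.0, 1.0)
```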
reject
Reject
4.666667
ICLR_2019_paper_0870
iclr
2,019
# ADEF: AN ITERATIVE ALGORITHM TO CONSTRUCT ADVERSARIAL DEFORMATIONS

Rima Alaifari, Department of Mathematics, ETH Zurich, rima.alaifari@math.ethz.ch
Giovanni S. Alberti, Department of Mathematics, University of Genoa, alberti@dima.unige.it
Tandri Gauksson, Department of Mathematics, ETH Zurich, tandri.gauksson@math.ethz.ch

## ABSTRACT

While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly classified image. In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step. We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception-v3 and ResNet-101.

## 1 INTRODUCTION

Szegedy et al. (2013) first observed that deep neural networks exhibit unstable behavior under small perturbations of the input. For the task of image classification, this means that two visually indistinguishable images may have very different outputs, resulting in one of them being misclassified even if the other one is correctly classified with high confidence. Since then, much research has investigated this issue through the construction of adversarial examples: given a correctly classified image \(x\), we look for an image \(y\) which is visually indistinguishable from \(x\) but is misclassified by the network. Typically, the image \(y\) is constructed as \(y = x + r\), where \(r\) is an adversarial perturbation that is supposed to be small in a suitable sense (normally, with respect to an \(\ell^p\) norm). Several algorithms have been developed to construct adversarial perturbations, see Goodfellow et al. (2014); Moosavi Dezfooli et al. (2016); Kurakin et al. (2017b); Madry et al. (2018); Carlini & Wagner (2017b) and the review paper Akhtar & Mian (2018).

Even though such pathological cases are very unlikely to occur in practice, their existence is relevant since malicious attackers may exploit this drawback to fool classifiers or other automatic systems. Further, adversarial perturbations may be constructed in a black-box setting (i.e., without knowing the architecture of the DNN but only its outputs) (Papernot et al., 2017; Moosavi-Dezfooli et al., 2017) and also in the physical world (Kurakin et al., 2017b; Athalye & Sutskever, 2017; Brown et al., 2017; Sharif et al., 2016). This has motivated the investigation of defenses, i.e., how to make the network invulnerable to such attacks, see Kurakin et al. (2017a); Carlini & Wagner (2017a); Madry et al. (2018); Tramer et al. (2018); Wong & Kolter (2018); Raghunathan et al. (2018); Athalye et al. (2018); Kannan et al. (2018). In most cases, adversarial examples are artificially created and then used to retrain the network, which becomes more stable under these types of perturbations. Most of the work on the construction of adversarial examples and on the design of defense strategies has been conducted in the context of small perturbations \(r\) measured in the \(\ell^{\infty}\) norm. 
However, this is not necessarily a good measure of image similarity: e.g., for two translated images \(x\) and \(y\), the norm of \(x - y\) is not small in general, even though \(x\) and \(y\) will look indistinguishable if the translation is small. Several papers have investigated the construction of adversarial perturbations not designed <--- Page Split ---> for norm proximity (Rozsa et al., 2016; Sharif et al., 2016; Brown et al., 2017; Engstrom et al., 2017; Xiao et al., 2018).

In this work, we build on these ideas and investigate the construction of adversarial deformations. In other words, the misclassified image \(y\) is not constructed as an additive perturbation \(y = x + r\), but as a deformation \(y = x \circ (\mathrm{id} + \tau)\), where \(\tau\) is a vector field defining the transformation. In this case, the similarity is not measured through a norm of \(y - x\), but instead through a norm of \(\tau\), which quantifies the deformation between \(y\) and \(x\).

We develop an efficient algorithm for the construction of adversarial deformations, which we call ADef. It is based on the main ideas of DeepFool (Moosavi Dezfooli et al., 2016), and iteratively constructs the smallest deformation to misclassify the image. We test the procedure on MNIST (LeCun) (with convolutional neural networks) and on ImageNet (Russakovsky et al., 2015) (with Inception-v3 (Szegedy et al., 2016) and ResNet-101 (He et al., 2016)). The results show that ADef can successfully fool the classifiers in the vast majority of cases (around \(99\%\)) by using very small and imperceptible deformations. We also test our adversarial attacks on adversarially trained networks for MNIST. Our implementation of the algorithm can be found at https://gitlab.math.ethz.ch/tandrig/ADef.

The results of this work initially appeared in the master's thesis Gauksson (2017), to which we refer for additional details on the mathematical aspects of this construction. While writing this paper, we came across Xiao et al. (2018), in which a similar problem is considered and solved with a different algorithm. Whereas in Xiao et al. (2018) the authors use a second-order solver to find a deforming vector field, we show how a first-order method can be formulated efficiently and justify a smoothing operation, independent of the optimization step. We report, for the first time, success rates for adversarial attacks with deformations on ImageNet. The topic of deformations has also come up in Jaderberg et al. (2015), in which the authors introduce a class of learnable modules that deform inputs in order to increase the performance of existing DNNs, and Fawzi & Frossard (2015), in which the authors introduce a method to measure the invariance of classifiers to geometric transformations.

## 2 ADVERSARIAL DEFORMATIONS

### 2.1 ADVERSARIAL PERTURBATIONS

Let \(\mathcal{K}\) be a classifier of images consisting of \(P\) pixels into \(L \geq 2\) categories, i.e. a function from the space of images \(X = \mathbb{R}^{cP}\), where \(c = 1\) (for grayscale images) or \(c = 3\) (for color images), into the set of labels \(\mathcal{L} = \{1, \ldots , L\}\). Suppose \(x \in X\) is an image that is correctly classified by \(\mathcal{K}\) and suppose \(y \in X\) is another image that is visually indistinguishable from \(x\) and such that \(\mathcal{K}(y) \neq \mathcal{K}(x)\); then \(y\) is said to be an adversarial example. 
The meaning of imperceptibility varies, but generally, proximity in \(\ell^{p}\)-norm (with \(1 \leq p \leq \infty\)) is considered to be a sufficient substitute. Thus, an adversarial perturbation for an image \(x \in X\) is a vector \(r \in X\) such that \(\mathcal{K}(x + r) \neq \mathcal{K}(x)\) and \(\| r\|_{p}\) is small, where \[\| r\|_{p} = \left(\sum_{j = 1}^{cP}|r_{j}|^{p}\right)^{1 / p}\quad \mathrm{if~}1\leq p< \infty ,\quad \mathrm{and}\quad \| r\|_{\infty} = \max_{j = 1,\ldots ,cP}|r_{j}|. \quad (1)\] Given such a classifier \(\mathcal{K}\) and an image \(x\), an adversary may attempt to find an adversarial example \(y\) by minimizing \(\| x - y\|_{p}\) subject to \(\mathcal{K}(y) \neq \mathcal{K}(x)\), or even subject to \(\mathcal{K}(y) = k\) for some target label \(k \neq \mathcal{K}(x)\). Different methods for finding minimal adversarial perturbations have been proposed, most notably FGSM (Goodfellow et al., 2014) and PGD (Madry et al., 2018) for \(\ell^{\infty}\), and the DeepFool algorithm (Moosavi Dezfooli et al., 2016) for general \(\ell^{p}\)-norms.

### 2.2 DEFORMATIONS

Instead of constructing adversarial perturbations, we intend to fool the classifier by small deformations of correctly classified images. Our procedure is in the spirit of the DeepFool algorithm. Before we explain it, let us first clarify what we mean by a deformation of an image. The discussion is at first more intuitive if we model images as functions \(\xi : [0, 1]^{2} \to \mathbb{R}^{c}\) (with \(c = 1\) or \(c = 3\)) instead <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 1: First row: The original \(28 \times 28\) pixel image from the MNIST database, and the same image translated by \((-2, 1)\), rotated by an angle of \(10^{\circ}\), and deformed w.r.t. an arbitrary smooth vector field \(\tau\). The \(\ell^{\infty}\)-norm of the corresponding perturbation is shown under each deformed image. The pixel values range from 0 (white) to 1 (black), so the deformed images all lie far from the original image in the \(\ell^{\infty}\)-norm. Second row: The vector fields corresponding to the above deformations and their \(T\)-norms (cf. equation (3)). </center> of discrete vectors \(x\) in \(\mathbb{R}^{cP}\). In this setting, perturbing an image \(\xi :[0,1]^{2}\to \mathbb{R}^{c}\) corresponds to adding to it another function \(\rho :[0,1]^{2}\to \mathbb{R}^{c}\) with a small \(L^{p}\)-norm.

While any transformation of an image \(\xi\) can be written as a perturbation \(\xi +\rho\), we shall restrict ourselves to a particular class of transformations. A deformation with respect to a vector field \(\tau :[0,1]^{2}\to \mathbb{R}^{2}\) is a transformation of the form \(\xi \mapsto \xi^{\tau}\), where for any image \(\xi :[0,1]^{2}\to \mathbb{R}^{c}\), the image \(\xi^{\tau}:[0,1]^{2}\to \mathbb{R}^{c}\) is defined by \[\xi^{\tau}(u) = \xi (u + \tau (u))\quad \mathrm{for~all~}u\in [0,1]^{2},\] extending \(\xi\) by zero outside of \([0,1]^{2}\). Deformations capture many natural image transformations. For example, a translation of the image \(\xi\) by a vector \(v\in \mathbb{R}^{2}\) is a deformation with respect to the constant vector field \(\tau = v\). If \(v\) is small, the images \(\xi\) and \(\xi^{v}\) may look similar, but the corresponding perturbation \(\rho = \xi^{v} - \xi\) may be arbitrarily large in the aforementioned \(L^{p}\)-norms. Figure 1 shows three minor deformations, all of which correspond to perturbations with large \(L^{\infty}\)-norms. 
In the discrete setting, deformations are implemented as follows. We consider square images of \(W\times W\) pixels and define the space of images to be \(X = \left(\mathbb{R}^{W\times W}\right)^{c}\). A discrete vector field is a function \(\tau :\{1,\ldots ,W\}^{2}\to \mathbb{R}^{2}\). In what follows we will only consider the set \(T\) of vector fields that do not move points on the grid \(\{1,\ldots ,W\}^{2}\) outside of \([1,W]^{2}\). More precisely, \[T:= \{\tau :\{1,\ldots ,W\}^{2}\to \mathbb{R}^{2}\mid \tau (s,t) + (s,t)\in [1,W]^{2}\mathrm{~for~all~}s,t\in \{1,\ldots ,W\} \} .\] An image \(x\in X\) can be viewed as the collection of values of a function \(\xi :[0,1]^{2}\to \mathbb{R}^{c}\) on a regular grid \(\{1 / (W + 1),\ldots ,W / (W + 1)\}^{2}\subseteq [0,1]^{2}\), i.e. \(x_{s,t} = \xi (s / (W + 1),t / (W + 1))\) for \(s,t = 1,\ldots ,W\). Such a function \(\xi\) can be computed by interpolating from \(x\). Thus, the deformation of an image \(x\) with respect to the discrete vector field \(\tau\) can be defined as the discrete deformed image \(x^{\tau}\) in \(X\) by \[x_{s,t}^{\tau} = \xi \left(\frac{(s,t) + \tau(s,t)}{W + 1}\right),\qquad s,t\in \{1,\ldots ,W\} . \quad (2)\] It is not straightforward to measure the size of a deformation such that it captures the visual difference between the original image \(x\) and its deformed counterpart \(x^{\tau}\). We will use the size of the <--- Page Split ---> corresponding vector field, \(\tau\), in the norm defined by \[\| \tau \|_{T} = \max_{s,t = 1,\ldots ,W}\| \tau (s,t)\|_{2} \quad (3)\] as a proxy. The \(\ell^{p}\)-norms defined in (1), adapted to vector fields, can be used as well. (We remark, however, that none of these norms define a distance between \(x\) and \(x^{\tau}\), since two vector fields \(\tau , \sigma \in T\) with \(\| \tau \|_{T} \neq \| \sigma \|_{T}\) may produce the same deformed image \(x^{\tau} = x^{\sigma}\).)

### 2.3 THE ALGORITHM ADEF

We will now describe our procedure for finding deformations that will lead a classifier to yield an output different from the original label. Let \(F = (F_{1},\ldots ,F_{L}):X\to \mathbb{R}^{L}\) be the underlying model for the classifier \(\mathcal{K}\), such that \[\mathcal{K}(x) = \underset {k = 1,\ldots ,L}{\arg \max}F_{k}(x).\] Let \(x\in X\) be the image of interest and fix \(\xi :[0,1]^{2}\to \mathbb{R}^{c}\) obtained by interpolation from \(x\). Let \(l = \mathcal{K}(x)\) denote the true label of \(x\), let \(k\in \mathcal{L}\) be a target label and set \(f = F_{k} - F_{l}\). We assume that \(x\) does not lie on a decision boundary, so that we have \(f(x)< 0\). We define the function \(g:T\to \mathbb{R},\tau \mapsto f(x^{\tau})\) and note that \(g(0) = f(x^{0}) = f(x)< 0\). Our goal is to find a small vector field \(\tau \in T\) such that \(g(\tau) = f(x^{\tau})\geq 0\). We can use a linear approximation of \(g\) around the zero vector field as a guide: \[g(\tau)\approx g(0) + (\mathrm{D}_{0}g)\tau \quad (4)\] for small enough \(\tau \in T\), where \(\mathrm{D}_{0}g:T\to \mathbb{R}\) is the derivative of \(g\) at \(\tau = 0\). Hence, if \(\tau\) is a vector field such that \[(\mathrm{D}_{0}g)\tau = -g(0) \quad (5)\] and \(\| \tau \|_{T}\) is small, then the classifier \(\mathcal{K}\) has approximately equal confidence for the deformed image \(x^{\tau}\) to have either label \(l\) or \(k\). This is a scalar equation whose unknown \(\tau\) lies in \(T\), and so it has infinitely many solutions. In order to select \(\tau\) with small norm, we solve it in the least-squares sense. 
In view of (2), we have \(\frac{\partial x^{\tau}}{\partial\tau}\big|_{\tau = 0}(s,t) = \frac{1}{W + 1}\nabla \xi \left(\frac{(s,t)}{W + 1}\right)\in \mathbb{R}^{c\times 2}\). Thus, by applying the chain rule to \(g(\tau) = f(x^{\tau})\), we obtain that its derivative at \(\tau = 0\) can, with a slight abuse of notation, be identified with the vector field \[\mathrm{D}_{0}g(s,t) = \frac{1}{W + 1}\big(\nabla f(x)\big)_{s,t}\nabla \xi \left(\frac{(s,t)}{W + 1}\right), \quad (6)\] where \(\left(\nabla f(x)\right)_{s,t}\in \mathbb{R}^{1\times c}\) is the derivative of \(f\) in \(x\) calculated at \((s,t)\). With this, \((\mathrm{D}_{0}g)\tau\) stands for \(\sum_{s,t = 1}^{W}\mathrm{D}_{0}g(s,t)\cdot \tau (s,t)\), and the solution to (5) in the least-squares sense is given by \[\tau = -\frac{f(x)}{\sum_{s,t = 1}^{W}|\mathrm{D}_{0}g(s,t)|^{2}}\mathrm{D}_{0}g. \quad (7)\] Finally, we define the deformed image \(x^{\tau}\in X\) according to (2).

One might like to impose some degree of smoothness on the deforming vector field. In fact, it suffices to search in the range of a smoothing operator \(\mathcal{S}:T\to T\). However, this essentially amounts to applying \(\mathcal{S}\) to the solution from the larger search space \(T\). Let \(\alpha = \mathcal{S}(\mathrm{D}_{0}g) = \phi *(\mathrm{D}_{0}g)\), where \(\mathcal{S}\) denotes the componentwise application of a two-dimensional Gaussian filter \(\phi\) (of any standard deviation). Then the vector field \[\hat{\tau} = -\frac{f(x)}{\sum_{s,t = 1}^{W}|\alpha(s,t)|^{2}}\mathcal{S}\alpha = -\frac{f(x)}{\sum_{s,t = 1}^{W}|\alpha(s,t)|^{2}}\mathcal{S}^{2}(\mathrm{D}_{0}g)\] also satisfies (5), since \(\mathcal{S}\) is self-adjoint. We can hence replace \(\tau\) by \(\hat{\tau}\) to obtain a smooth deformation of the image \(x\).

We iterate the deformation process until the deformed image is misclassified. More explicitly, let \(x^{(0)} = x\) and for \(n\geq 1\) let \(\tau^{(n)}\) be given by (7) for \(x^{(n - 1)}\). Then we can define the iteration as <--- Page Split ---> \(x^{(n)} = x^{(n - 1)}\circ (\mathrm{id} + \tau^{(n)})\). The algorithm terminates and outputs an adversarial example \(y = x^{(n)}\) if \(\mathcal{K}(x^{(n)})\neq l\). The iteration also terminates if \(x^{(n)}\) lies on a decision boundary of \(\mathcal{K}\), in which case we propose to introduce an overshoot factor \(1 + \eta\) on the total deforming vector field. Provided that the number of iterations is moderate, the total vector field can be well approximated by \(\tau^{*} = \tau^{(1)} + \cdots +\tau^{(n)}\) and the process can be altered to output the deformed image \(y = x\circ (\mathrm{id} + (1 + \eta)\tau^{*})\) instead.

The target label \(k\) may be chosen in each iteration to minimize the vector field to obtain a better approximation in the linearization (4). More precisely, for a candidate set of labels \(k_{1},\ldots ,k_{m}\), we compute the corresponding vector fields \(\tau_{1},\ldots ,\tau_{m}\) and select \[k = \underset {j = 1,\ldots ,m}{\arg \min}\| \tau_{j}\|_{T}.\] The candidate set consists of the labels corresponding to the indices of the \(m\) smallest entries of \(F - F_{l}\), in absolute value. 
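To make the core primitive concrete before the boxed summary below, here is a minimal numpy/scipy sketch of the discrete deformation in equation (2); the helper name `deform` and the use of scipy's bilinear `map_coordinates` are our illustration choices rather than the authors' implementation, and indices are 0-based here while the paper's grid is 1-based.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform(x, tau):
    """Deform a (W, W) grayscale image x by a vector field tau of shape (2, W, W),
    sampling x at (s, t) + tau(s, t) as in equation (2)."""
    W = x.shape[0]
    s, t = np.meshgrid(np.arange(W), np.arange(W), indexing='ij')
    coords = np.stack([s + tau[0], t + tau[1]])
    # order=1 gives bilinear interpolation; points mapped outside the grid read as 0
    return map_coordinates(x, coords, order=1, mode='constant', cval=0.0)

# A constant vector field is a translation, as in section 2.2: shift by (-2, 1)
x = np.random.rand(28, 28)
tau = np.broadcast_to(np.array([-2.0, 1.0])[:, None, None], (2, 28, 28))
y = deform(x, tau)
```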
## Algorithm ADef

Input: Classification model \(F\), image \(x\), correct label \(l\), candidate labels \(k_{1},\ldots ,k_{m}\)
Output: Deformed image \(y\)

Initialize \(y\leftarrow x\)
while \(\mathcal{K}(y) = l\) do
  for \(j = 1,\ldots ,m\) do
    \[\alpha_{j}\leftarrow \mathcal{S}\left(\sum_{i = 1}^{n}\left((\nabla F_{k_{j}})_{i} - (\nabla F_{l})_{i}\right)\cdot \nabla y_{i}\right)\]
    \[\tau_{j}\leftarrow -\frac{F_{k_{j}}(y) - F_{l}(y)}{\|\alpha_{j}\|_{l^{2}}^{2}}\mathcal{S}\alpha_{j}\]
  end for
  \(i\leftarrow \arg \min_{j = 1,\ldots ,m}\| \tau_{j}\|_{T}\)
  \(y\leftarrow y\circ (\mathrm{id} + \tau_{i})\)
end while
return \(y\)

By equation (6), provided that \(\nabla f\) is moderate, the deforming vector field takes small values wherever \(\xi\) has a small derivative. This means that the vector field will be concentrated on the edges in the image \(x\) (see e.g. the first row of figure 2). Further, note that the result of a deformation is always a valid image in the sense that it does not violate the pixel value bounds. This is not guaranteed for the perturbations computed with DeepFool.

## 3 EXPERIMENTS

### 3.1 SETUP

We evaluate the performance of ADef by applying the algorithm to classifiers trained on the MNIST (LeCun) and ImageNet (Russakovsky et al., 2015) datasets. Below, we briefly describe the setup of the experiments, and in tables 1 and 2 we summarize their results.

MNIST: We train two convolutional neural networks based on architectures that appear in Madry et al. (2018) and Tramer et al. (2018) respectively. The network MNIST-A consists of two convolutional layers of sizes \(32\times 5\times 5\) and \(64\times 5\times 5\), each followed by \(2\times 2\) max-pooling and a rectifier activation function, a fully connected layer into dimension 1024 with a rectifier activation function, and a final linear layer with output dimension 10. The network MNIST-B consists of two convolutional layers of sizes \(128\times 3\times 3\) and \(64\times 3\times 3\) with a rectifier activation function, a fully connected layer into dimension 128 with a rectifier activation function, and a final linear layer with output dimension 10. During training, the latter convolutional layer and the former fully connected layer of MNIST-B are subject to dropout with drop probabilities \(1 / 4\) and \(1 / 2\). We use ADef to produce adversarial deformations of the images in the test set. The algorithm is configured to pursue any label different from the correct label (all incorrect labels are candidate labels). It performs smoothing by a Gaussian filter of standard deviation \(1 / 2\), uses bilinear interpolation to obtain intermediate pixel intensities, and it overshoots by \(\eta = 2 / 10\) whenever it converges to a decision boundary.

<--- Page Split -->

Table 1: The results of applying ADef to the images in the MNIST test set and the ILSVRC2012 validation set. The accuracy of the Inception and ResNet models is defined as the top-1 accuracy on the center-cropped and resized images. The success rate of ADef is shown as a percentage of the correctly classified inputs. The pixel range is scaled to [0, 1], so the perturbation \(r = y - x\), where \(x\) is the input and \(y\) the output of ADef, has values in \([-1, 1]\). The averages in the three last columns are computed over the set of images on which ADef is successful. Recall the definition of the vector field norm in equation (3).

<table><tr><td>Model</td><td>Accuracy</td><td>ADef success</td><td>Avg. \(\|\tau^{*}\|_{T}\)</td><td>Avg. \(\|r\|_{\infty}\)</td><td>Avg. # iterations</td></tr><tr><td>MNIST-A</td><td>98.99%</td><td>99.85%</td><td>1.1950</td><td>0.7455</td><td>7.002</td></tr><tr><td>MNIST-B</td><td>98.91%</td><td>99.51%</td><td>1.0841</td><td>0.7654</td><td>4.422</td></tr><tr><td>Inception-v3</td><td>77.56%</td><td>98.94%</td><td>0.5984</td><td>0.2039</td><td>4.050</td></tr><tr><td>ResNet-101</td><td>76.97%</td><td>99.78%</td><td>0.5561</td><td>0.1882</td><td>4.176</td></tr></table>

![](images/5_0.jpg) <center>Figure 2: Sample deformations for the Inception-v3 model. The vector fields and perturbations have been amplified for visualization. First row: An image from the ILSVRC2012 validation set, the output of ADef with a Gaussian filter of standard deviation 1, the corresponding vector field and perturbation. The rightmost image is a close-up of the vector field around the nose of the ape. Second row: A larger deformation of the same image, obtained by using a wider Gaussian filter (standard deviation 6) for smoothing. </center>

ImageNet: We apply ADef to pretrained Inception-v3 (Szegedy et al., 2016) and ResNet-101 (He et al., 2016) models to generate adversarial deformations for the images in the ILSVRC2012 validation set. The images are preprocessed by first scaling so that the smaller axis has 299 pixels for the Inception model and 224 pixels for ResNet, and then center-cropping to a square image. The algorithm is set to focus only on the label of second highest probability. It employs a Gaussian filter of standard deviation 1, bilinear interpolation, and an overshoot factor \(\eta = \frac{1}{10}\). We only consider inputs that are correctly classified by the model in question, and, since \(\tau^{*} = \tau^{(1)} + \dots +\tau^{(n)}\) approximates the total deforming vector field, we declare ADef to be successful if its output is misclassified and \(\| \tau^{*}\|_{T} \leq \epsilon\), where we choose \(\epsilon = 3\). Observe that, by (3), a deformation with respect to a vector field \(\tau\) does not displace any pixel further away from its original position than \(\| \tau\|_{T}\). Hence, for high resolution images, the choice \(\epsilon = 3\) indeed produces small deformations if the vector fields are smooth. In appendix A, we illustrate how the success rate of ADef depends on the choice of \(\epsilon\).

When searching for an adversarial example, one usually searches for a perturbation with \(\ell^{\infty}\)-norm smaller than some small number \(\epsilon > 0\). Common choices of \(\epsilon\) range from \(\frac{1}{10}\) to \(\frac{3}{10}\) for MNIST <--- Page Split ---> Table 2: Success rates for PGD and ADef attacks on adversarially trained networks. <table><tr><td>Model</td><td>Adv. training</td><td>Accuracy</td><td>PGD success</td><td>ADef success</td></tr><tr><td rowspan="2">MNIST-A</td><td>PGD</td><td>98.36%</td><td>5.81%</td><td>6.67%</td></tr><tr><td>ADef</td><td>98.95%</td><td>100.00%</td><td>54.16%</td></tr><tr><td rowspan="2">MNIST-B</td><td>PGD</td><td>98.74%</td><td>5.84%</td><td>20.35%</td></tr><tr><td>ADef</td><td>98.79%</td><td>100.00%</td><td>45.07%</td></tr></table> ![](images/6_0.jpg) <center>Figure 3: Targeted ADef against MNIST-A. First row: The original image and deformed images produced by restricting ADef to the target labels 0 to 8. The \(\ell^{\infty}\)-norms of the corresponding perturbations are shown under the deformed images. Second row: The vector fields corresponding to the deformations and their \(T\)-norms. 
</center> classifiers (Goodfellow et al., 2014; Madry et al., 2018; Wong & Kolter, 2018; Tramer et al., 2018; Kannan et al., 2018) and \(\frac{2}{255}\) to \(\frac{16}{255}\) for ImageNet classifiers (Goodfellow et al., 2014; Kurakin et al., 2017a; Tramer et al., 2018; Kannan et al., 2018). Table 1 shows that on average, the perturbations obtained by ADef are quite large compared to those constraints. However, as can be seen in figure 2, the relatively high resolution images of the ImageNet dataset can be deformed into adversarial examples that, while corresponding to large perturbations, are not visibly different from the original images. In appendices B and C, we give more examples of adversarially deformed images.

### 3.2 ADVERSARIAL TRAINING

In addition to training MNIST-A and MNIST-B on the original MNIST data, we train independent copies of the networks using the adversarial training procedure described by Madry et al. (2018). That is, before each step of the training process, the input images are adversarially perturbed using the PGD algorithm. This manner of training provides increased robustness against adversarial perturbations of low \(\ell^{\infty}\)-norm. Moreover, we train networks using ADef instead of PGD as an adversary. In table 2 we show the results of attacking these adversarially trained networks, using ADef on the one hand, and PGD on the other. We use the same configuration for ADef as above, and for PGD we use 40 iterations, step size \(\frac{1}{100}\) and \(\frac{3}{10}\) as the maximum \(\ell^{\infty}\)-norm of the perturbation. Interestingly, using these configurations, the networks trained against PGD attacks are more resistant to adversarial deformations than those trained against ADef.

### 3.3 TARGETED ATTACKS

ADef can also be used for targeted adversarial attacks, by restricting the search to a particular target label instead of the label that yields the smallest deformation. Figure 3 demonstrates the effect of choosing different target labels for a given MNIST image, and figure 4 shows the result of targeting the label of lowest probability for an image from the ImageNet dataset. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 4: Untargeted vs. targeted attack on the ResNet-101 model. An image from the ILSVRC2012 validation set deformed to the labels of second highest (first row) and lowest (second row) probabilities (out of 1,000) for the original image. The vector fields and perturbations have been amplified for visualization. </center>

## 4 CONCLUSION

In this work, we proposed ADef, a new efficient algorithm to construct a new type of adversarial attack on DNN image classifiers. The procedure is iterative and in each iteration takes a gradient descent step to deform the previous iterate in order to push it toward a decision boundary. We demonstrated that ADef can fool state-of-the-art classifiers with almost imperceptible deformations at a high success rate. This suggests that networks are vulnerable to different types of attacks and that simply training the network on a specific class of adversarial examples might not form a sufficient defense strategy. Given this vulnerability of neural networks to deformations, we wish to study in future work how ADef can help in designing possible defense strategies. Furthermore, we also showed initial results on fooling adversarially trained networks. Remarkably, PGD-trained networks on MNIST are more resistant to adversarial deformations than ADef-trained networks. 
However, for this result to be more conclusive, similar tests on ImageNet will have to be conducted. We wish to study this in future work.

## ACKNOWLEDGMENTS

The authors would like to thank Helmut Bölcskei and Thomas Wiatowski for fruitful discussions.

## REFERENCES

N. Akhtar and A. Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410-14430, 2018. doi: 10.1109/ACCESS.2018.2807385.

Anish Athalye and Ilya Sutskever. Synthesizing robust adversarial examples. arXiv preprint arXiv:1707.07397, 2017.

Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.

<--- Page Split --->

Tom B Brown, Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017.

Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3-14. ACM, 2017a.

Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (SP), pp. 39-57. IEEE, 2017b.

Logan Engstrom, Dimitris Tsipras, Ludwig Schmidt, and Aleksander Madry. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations. arXiv preprint arXiv:1712.02779, 2017.

Alhussein Fawzi and Pascal Frossard. Manitest: Are classifiers really invariant? arXiv preprint arXiv:1507.06535, 2015.

Tandri Gauksson. Adversarial perturbations and deformations for convolutional neural networks. https://www.research-collection.ethz.ch/handle/20.500.11850/258550/, 2017.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems 28, pp. 2017-2025. Curran Associates, Inc., 2015. URL http://papers.nips.cc/paper/5854-spatial-transformer-networks.pdf.

Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018.

Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. In International Conference on Learning Representations, 2017a.

Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In International Conference on Learning Representations, 2017b.

Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJzIBfZAb.

S. M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 86-94, July 2017. doi: 10.1109/CVPR.2017.17.

Seyed Mohsen Moosavi Dezfooli, Alhussein Fawzi, and Pascal Frossard. 
DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), number EPFL-CONF-218057, 2016.

Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506-519. ACM, 2017.

Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses against adversarial examples. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Bys4ob-Rb.

<--- Page Split --->

Andras Rozsa, Ethan M Rudd, and Terrance E Boult. Adversarial diversity and hard positive generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 25-32, 2016.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.

Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528-1540. ACM, 2016.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818-2826, 2016.

Florian Tramer, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rKZvSe-RZ.

Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning, 2018.

Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. Spatially transformed adversarial examples. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=HyydRMZC-.

<--- Page Split --->

![](images/10_0.jpg) <center>Figure 5: The (normalized) distribution of \(\| \tau^{*}\|_{T}\) from the MNIST experiments. Deformations that fall to the left of the vertical line at \(\epsilon = 3\) are considered successful. The networks in the first column were trained using the original MNIST data, and the networks in the second and third columns were adversarially trained using ADef and PGD, respectively. </center>

![](images/10_1.jpg) <center>Figure 6: The (normalized) distribution of \(\| \tau^{*}\|_{T}\) from the ImageNet experiments. Deformations that fall to the left of the vertical line at \(\epsilon = 3\) are considered successful. </center>

## A DISTRIBUTION OF VECTOR FIELD NORMS

Figures 5 and 6 show the distribution of the norms of the total deforming vector fields, \(\tau^{*}\), from the experiments in section 3. 
For networks that have not been adversarially trained, most deformations fall well below the threshold of \(\epsilon = 3\). Out of the adversarially trained networks, only MNIST-A trained against PGD is truly robust against ADef. Further, a comparison between the first column of figure 5 and figure 6 indicates that ImageNet is much more vulnerable to adversarial deformations than MNIST, especially considering the much higher resolution of the images in ImageNet. Thus, it would be very interesting to study the performance of ADef with adversarially trained networks for ImageNet, as mentioned in the Conclusion.

<--- Page Split --->

Table 3: The results of applying ADef to the images in the ILSVRC2012 validation set and the Inception model, using different values for the standard deviation \(\sigma\) of the Gaussian filter. As before, we define ADef to be successful if \(\| \tau^{*}\|_{T}\leq 3\).

<table><tr><td>σ</td><td>ADef success</td><td>Avg. \(\|\tau^{*}\|_{T}\)</td><td>Avg. \(\|r\|_{\infty}\)</td><td>Avg. # iterations</td></tr><tr><td>0</td><td>99.12%</td><td>0.5272</td><td>0.1628</td><td>5.247</td></tr><tr><td>1</td><td>98.94%</td><td>0.5984</td><td>0.2039</td><td>4.050</td></tr><tr><td>2</td><td>95.91%</td><td>0.7685</td><td>0.2573</td><td>3.963</td></tr><tr><td>4</td><td>86.66%</td><td>0.9632</td><td>0.3128</td><td>4.379</td></tr><tr><td>8</td><td>67.54%</td><td>1.1684</td><td>0.3687</td><td>5.476</td></tr></table>

## B SMOOTH DEFORMATIONS

The standard deviation of the Gaussian filter used for smoothing in the update step of ADef has a significant impact on the resulting vector field. To explore this aspect of the algorithm, we repeat the experiment from section 3 on the Inception-v3 model, using standard deviations \(\sigma = 0,1,2,4,8\) (where \(\sigma = 0\) stands for no smoothing). The results are shown in table 3, and the effect of varying \(\sigma\) is illustrated in figures 7 and 8. We observe that as \(\sigma\) increases, the adversarial distortion steadily increases both in terms of vector field norm and perturbation norm. Likewise, the success rate of ADef decreases with larger \(\sigma\). However, from figure 8 we see that the constraint \(\| \tau^{*}\|_{T}\leq 3\) on the total vector field may provide a rather conservative measure of the effectiveness of ADef in the case of smooth high dimensional vector fields.

## C ADDITIONAL DEFORMED IMAGES

## C.1 MNIST

Figures 9 and 10 show adversarial deformations for the models MNIST-A and MNIST-B, respectively. The attacks are performed using the same configuration as in the experiments in section 3. Observe that in some cases, features resembling the target class have appeared in the deformed image. For example, the top part of the 4 in the fifth column of figure 10 has been curved slightly to more resemble a 9.

## C.2 IMAGENET

Figures 11 – 15 show additional deformed images resulting from attacking the Inception-v3 model using the same configuration as in the experiments in section 3. Similarly, figures 16 – 20 show deformed images resulting from attacking the ResNet-101 model. However, in order to increase variability in the output labels, we perform a targeted attack, targeting the label of 50th highest probability. <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 7: The effects of increasing the smoothness parameter \(\sigma\) on adversarial deformations for Inception-v3. First and fourth rows: A correctly classified image and deformed versions. 
Second and fifth rows: The corresponding deforming vector fields and their \(T\)-norms. Third and sixth rows: The corresponding perturbations and their \(\ell^{\infty}\)-norms. </center> <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 8: The effects of increasing the smoothness parameter \(\sigma\) on adversarial deformations for Inception-v3. Note that according to the criterion \(\| \tau^{*}\|_{T} \leq 3\), the value \(\sigma = 8\) yields an unsuccessful deformation of the recreational vehicle. </center> <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 9: Adversarial deformations for MNIST-A. First and third rows: Original images from the MNIST test set. Second and fourth rows: The deformed images and the norms of the corresponding deforming vector fields. </center> ![](images/14_1.jpg) <center>Figure 10: Adversarial deformations for MNIST-B. Note that image 9 in row 3 is misclassified, and is then deformed to its correct label. </center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 11: ADef attacks on the Inception-v3 model using the same configuration as in the experiments in section 3. </center> <--- Page Split ---> ![](images/16_0.jpg) <center>Figure 12: ADef attacks on the Inception-v3 model using the same configuration as in the experiments in section 3. </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 13: ADef attacks on the Inception-v3 model using the same configuration as in the experiments in section 3. </center> <--- Page Split ---> ![](images/18_0.jpg) <center>Figure 14: ADef attacks on the Inception-v3 model using the same configuration as in the experiments in section 3. </center> <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 15: ADef attacks on the Inception-v3 model using the same configuration as in the experiments in section 3. </center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 16: ADef attacks on the ResNet-101 model targeting the 50th most likely label. </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 17: ADef attacks on the ResNet-101 model targeting the 50th most likely label. </center> <--- Page Split ---> ![](images/22_0.jpg) <center>Figure 18: ADef attacks on the ResNet-101 model targeting the 50th most likely label. (Panel annotations: \(T\): 1.396, Deformed: viaduct; \(T\): 1.550, Deformed: ski mask.) </center> <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 19: ADef attacks on the ResNet-101 model targeting the 50th most likely label. </center> <--- Page Split ---> ![](images/24_0.jpg) <center>Figure 20: ADef attacks on the ResNet-101 model targeting the 50th most likely label. </center> <--- Page Split --->
## ABSTRACT While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so- called adversarial attacks, which are created by additively perturbing the correctly classified image. In this paper, we propose the ADef algorithm to construct a different kind of adversarial attack created by iteratively applying small deformations to the image, found through a gradient descent step. We demonstrate our results on MNIST with convolutional neural networks and on ImageNet with Inception- v3 and ResNet- 101. ## 1 INTRODUCTION In a first observation in Szegedy et al. (2013) it was found that deep neural networks exhibit unstable behavior to small perturbations in the input. For the task of image classification this means that two visually indistinguishable images may have very different outputs, resulting in one of them being misclassified even if the other one is correctly classified with high confidence. Since then, a lot of research has been done to investigate this issue through the construction of adversarial examples: given a correctly classified image \(x\) , we look for an image \(y\) which is visually indistinguishable from \(x\) but is misclassified by the network. Typically, the image \(y\) is constructed as \(y = x + r\) , where \(r\) is an adversarial perturbation that is supposed to be small in a suitable sense (normally, with respect to an \(\ell^p\) norm). Several algorithms have been developed to construct adversarial perturbations, see Goodfellow et al. (2014); Moosavi Dezfooli et al. (2016); Kurakin et al. (2017b); Madry et al. (2018); Carlini & Wagner (2017b) and the review paper Akhtar & Mian (2018). Even though such pathological cases are very unlikely to occur in practice, their existence is relevant since malicious attackers may exploit this drawback to fool classifiers or other automatic systems. Further, adversarial perturbations may be constructed in a black- box setting (i.e., without knowing the architecture of the DNN but only its outputs) (Papernot et al., 2017; Moosavi- Dezfooli et al., 2017) and also in the physical world (Kurakin et al., 2017b; Athalye & Sutskever, 2017; Brown et al., 2017; Sharif et al., 2016). This has motivated the investigation of defenses, i.e., how to make the network invulnerable to such attacks, see Kurakin et al. (2017a); Carlini & Wagner (2017a); Madry et al. (2018); Tramer et al. (2018); Wong & Kolter (2018); Raghunathan et al. (2018); Athalye et al. (2018); Kannan et al. (2018). In most cases, adversarial examples are artificially created and then used to retrain the network, which becomes more stable under these types of perturbations. Most of the work on the construction of adversarial examples and on the design of defense strategies has been conducted in the context of small perturbations \(r\) measured in the \(\ell^{\infty}\) norm. However, this is not necessarily a good measure of image similarity: e.g., for two translated images \(x\) and \(y\) , the norm of \(x - y\) is not small in general, even though \(x\) and \(y\) will look indistinguishable if the translation is small. Several papers have investigated the construction of adversarial perturbations not designed <--- Page Split ---> for norm proximity (Rozsa et al., 2016; Sharif et al., 2016; Brown et al., 2017; Engstrom et al., 2017; Xiao et al., 2018). 
In this work, we build up on these ideas and investigate the construction of adversarial deformations. In other words, the misclassified image \(y\) is not constructed as an additive perturbation \(y = x + r\) , but as a deformation \(y = x \circ (\mathrm{id} + \tau)\) , where \(\tau\) is a vector field defining the transformation. In this case, the similarity is not measured through a norm of \(y - x\) , but instead through a norm of \(\tau\) , which quantifies the deformation between \(y\) and \(x\) . We develop an efficient algorithm for the construction of adversarial deformations, which we call ADef. It is based on the main ideas of DeepFool (Moosavi Dezfooli et al., 2016), and iteratively constructs the smallest deformation to misclassify the image. We test the procedure on MNIST (LeCun) (with convolutional neural networks) and on ImageNet (Russakovsky et al., 2015) (with Inception- v3 (Szegedy et al., 2016) and ResNet- 101 (He et al., 2016)). The results show that ADef can successfully fool the classifiers in the vast majority of cases (around \(99\%\) ) by using very small and imperceptible deformations. We also test our adversarial attacks on adversarially trained networks for MNIST. Our implementation of the algorithm can be found at https://gitlab.math.ethz.ch/tandrig/ADef. The results of this work have initially appeared in the master's thesis Gauksson (2017), to which we refer for additional details on the mathematical aspects of this construction. While writing this paper, we have come across Xiao et al. (2018), in which a similar problem is considered and solved with a different algorithm. Whereas in Xiao et al. (2018) the authors use a second order solver to find a deforming vector field, we show how a first order method can be formulated efficiently and justify a smoothing operation, independent of the optimization step. We report, for the first time, success rates for adversarial attacks with deformations on ImageNet. The topic of deformations has also come up in Jaderberg et al. (2015), in which the authors introduce a class of learnable modules that deform inputs in order to increase the performance of existing DNNs, and Fawzi & Frossard (2015), in which the authors introduce a method to measure the invariance of classifiers to geometric transformations. ## 2 ADVERSARIAL DEFORMATIONS ### 2.1 ADVERSARIAL PERTURBATIONS Let \(\mathcal{K}\) be a classifier of images consisting of \(P\) pixels into \(L \geq 2\) categories, i.e. a function from the space of images \(X = \mathbb{R}^{cP}\) , where \(c = 1\) (for grayscale images) or \(c = 3\) (for color images), and into the set of labels \(\mathcal{L} = \{1, \ldots , L\}\) . Suppose \(x \in X\) is an image that is correctly classified by \(\mathcal{K}\) and suppose \(y \in X\) is another image that is imperceptible from \(x\) and such that \(\mathcal{K}(y) \neq \mathcal{K}(x)\) , then \(y\) is said to be an adversarial example. The meaning of imperceptibility varies, but generally, proximity in \(\ell^{p}\) - norm (with \(1 \leq p \leq \infty\) ) is considered to be a sufficient substitute. Thus, an adversarial perturbation for an image \(x \in X\) is a vector \(r \in X\) such that \(\mathcal{K}(x + r) \neq \mathcal{K}(x)\) and \(\| r\|_{p}\) is small, where \[\| r\|_{p} = \left(\sum_{j = 1}^{cP}|r_{j}|^{p}\right)^{1 / p}\quad \mathrm{if~}1\leq p< \infty ,\mathrm{and}\quad \| r\|_{\infty} = \max_{j = 1,\ldots ,cP}|r_{j}|. 
\quad (1)\] Given such a classifier \(\mathcal{K}\) and an image \(x\) , an adversary may attempt to find an adversarial example \(y\) by minimizing \(\| x - y\|_{p}\) subject to \(\mathcal{K}(y) \neq \mathcal{K}(x)\) , or even subject to \(\mathcal{K}(y) = k\) for some target label \(k \neq \mathcal{K}(x)\) . Different methods for finding minimal adversarial perturbations have been proposed, most notably FGSM (Goodfellow et al., 2014) and PGD (Madry et al., 2018) for \(\ell^{\infty}\) , and the DeepFool algorithm (Moosavi Dezfooli et al., 2016) for general \(\ell^{p}\) - norms. ### 2.2 DEFORMATIONS Instead of constructing adversarial perturbations, we intend to fool the classifier by small deformations of correctly classified images. Our procedure is in the spirit of the DeepFool algorithm. Before we explain it, let us first clarify what we mean by a deformation of an image. The discussion is at first more intuitive if we model images as functions \(\xi : [0, 1]^{2} \to \mathbb{R}^{c}\) (with \(c = 1\) or \(c = 3\) ) instead <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 1: First row: The original \(28 \times 28\) pixel image from the MNIST database, and the same image translated by \((-2, 1)\) , rotated by an angle of \(10^{\circ}\) , and deformed w.r.t. an arbitrary smooth vector field \(\tau\) . The \(\ell^{\infty}\) -norm of the corresponding perturbation is shown under each deformed image. The pixel values range from 0 (white) to 1 (black), so the deformed images all lie far from the original image in the \(\ell^{\infty}\) -norm. Second row: The vector fields corresponding to the above deformations and their \(T\) -norms (cf. equation (3)). </center> of discrete vectors \(x\) in \(\mathbb{R}^{c P}\) . In this setting, perturbing an image \(\xi :[0,1]^{2}\to \mathbb{R}^{c}\) corresponds to adding to it another function \(\rho :[0,1]^{2}\to \mathbb{R}^{c}\) with a small \(L^{p}\) - norm. While any transformation of an image \(\xi\) can be written as a perturbation \(\xi +\rho\) , we shall restrict ourselves to a particular class of transformations. A deformation with respect to a vector field \(\tau :[0,1]^{2}\to \mathbb{R}^{2}\) is a transformation of the form \(\xi \mapsto \xi^{\tau}\) , where for any image \(\xi :[0,1]^{2}\to \mathbb{R}^{c}\) , the image \(\xi^{\tau}:[0,1]^{2}\to \mathbb{R}^{c}\) is defined by \[\xi^{\tau}(u) = \xi (u + \tau (u))\quad \mathrm{for~all~}u\in [0,1]^{2},\] extending \(\xi\) by zero outside of \([0,1]^{2}\) . Deformations capture many natural image transformations. For example, a translation of the image \(\xi\) by a vector \(v\in \mathbb{R}^{2}\) is a deformation with respect to the constant vector field \(\tau = v\) . If \(v\) is small, the images \(\xi\) and \(\xi^{v}\) may look similar, but the corresponding perturbation \(\rho = \xi^{v} - \xi\) may be arbitrarily large in the aforementioned \(L^{p}\) - norms. Figure 1 shows three minor deformations, all of which yield large \(L^{\infty}\) - norms. In the discrete setting, deformations are implemented as follows. We consider square images of \(W\times W\) pixels and define the space of images to be \(X = \left(\mathbb{R}^{W\times W}\right)^{c}\) . A discrete vector field is a function \(\tau :\{1,\ldots ,W\}^{2}\to \mathbb{R}^{2}\) . In what follows we will only consider the set \(T\) of vector fields that do not move points on the grid \(\{1,\ldots ,W\}^{2}\) outside of \([1,W]^{2}\) . 
More precisely, \[T:= \{\tau :\{1,\ldots ,W\}^{2}\to \mathbb{R}^{2}\mid \tau (s,t) + (s,t)\in [1,W]^{2}\mathrm{~for~all~}s,t\in \{1,\ldots ,W\} \} .\] An image \(x\in X\) can be viewed as the collection of values of a function \(\xi :[0,1]^{2}\to \mathbb{R}^{c}\) on a regular grid \(\{1 / (W + 1),\ldots ,W / (W + 1)\}^{2}\subseteq [0,1]^{2}\) , i.e. \(x_{s,t} = \xi (s / (W + 1),t / (W + 1))\) for \(s,t = 1,\ldots ,W\) . Such a function \(\xi\) can be computed by interpolating from \(x\) . Thus, the deformation of an image \(x\) with respect to the discrete vector field \(\tau\) can be defined as the discrete deformed image \(x^{\tau}\) in \(X\) by \[x_{s,t}^{\tau} = \xi \left(\frac{(s,t) + \tau(s,t)}{W + 1}\right),\qquad s,t\in \{1,\ldots ,W\} . \quad (2)\] It is not straightforward to measure the size of a deformation such that it captures the visual difference between the original image \(x\) and its deformed counterpart \(x^{\tau}\) . We will use the size of the <--- Page Split ---> corresponding vector field, \(\tau\) , in the norm defined by \[\| \tau \|_{T} = \max_{s,t = 1,\ldots ,W}\| \tau (s,t)\|_{2} \quad (3)\] as a proxy. The \(\ell^{p}\) - norms defined in (1), adapted to vector fields, can be used as well. (We remark, however, that none of these norms define a distance between \(x\) and \(x^{\tau}\) , since two vector fields \(\tau , \sigma \in T\) with \(\| \tau \|_{T} \neq \| \sigma \|_{T}\) may produce the same deformed image \(x^{\tau} = x^{\sigma}\) .) ### 2.3 THE ALGORITHM ADEF We will now describe our procedure for finding deformations that will lead a classifier to yield an output different from the original label. Let \(F = (F_{1},\ldots ,F_{L}):X\to \mathbb{R}^{L}\) be the underlying model for the classifier \(\mathcal{K}\) , such that \[\mathcal{K}(x) = \underset {k = 1,\ldots ,L}{\arg \max}F_{k}(x).\] Let \(x\in X\) be the image of interest and fix \(\xi :[0,1]^{2}\to \mathbb{R}^{c}\) obtained by interpolation from \(x\) . Let \(l = \mathcal{K}(x)\) denote the true label of \(x\) , let \(k\in \mathcal{L}\) be a target label and set \(f = F_{k} - F_{l}\) . We assume that \(x\) does not lie on a decision boundary, so that we have \(f(x)< 0\) We define the function \(g:T\to \mathbb{R},\tau \mapsto f(x^{\tau})\) and note that \(g(0) = f(x^{0}) = f(x)< 0\) . Our goal is to find a small vector field \(\tau \in T\) such that \(g(\tau) = f(x^{\tau})\geq 0\) . We can use a linear approximation of \(g\) around the zero vector field as a guide: \[g(\tau)\approx g(0) + (\mathrm{D}_{0}g)\tau \quad (4)\] for small enough \(\tau \in T\) and \(D_{0}g:T\to \mathbb{R}\) the derivative of \(g\) at \(\tau = 0\) . Hence, if \(\tau\) is a vector field such that \[(\mathrm{D}_{0}g)\tau = -g(0) \quad (5)\] and \(\| \tau \|_{T}\) is small, then the classifier \(\mathcal{K}\) has approximately equal confidence for the deformed image \(x^{\tau}\) to have either label \(l\) or \(k\) . This is a scalar equation with unknown in \(T\) , and so has infinitely many solutions. In order to select \(\tau\) with small norm, we solve it in the least- squares sense. In view of (2), we have \(\frac{\partial x^{\tau}}{\partial\tau}\big|_{\tau = 0}(s,t) = \frac{1}{W + 1}\nabla \xi \left(\frac{(s,t)}{W + 1}\right)\in \mathbb{R}^{c\times 2}\) . 
Thus, by applying the chain rule to \(g(\tau) = f(x^{\tau})\), we obtain that its derivative at \(\tau = 0\) can, with a slight abuse of notation, be identified with the vector field \[\mathrm{D}_{0}g(s,t) = \frac{1}{W + 1}\big(\nabla f(x)\big)_{s,t}\nabla \xi \left(\frac{(s,t)}{W + 1}\right), \quad (6)\] where \(\left(\nabla f(x)\right)_{s,t}\in \mathbb{R}^{1\times c}\) is the derivative of \(f\) with respect to the pixel values of \(x\), evaluated at \((s,t)\). With this, \((\mathrm{D}_{0}g)\tau\) stands for \(\sum_{s,t = 1}^{W}\mathrm{D}_{0}g(s,t)\cdot \tau (s,t)\), and the solution to (5) in the least-squares sense is given by \[\tau = -\frac{f(x)}{\sum_{s,t = 1}^{W}|\mathrm{D}_{0}g(s,t)|^{2}}\mathrm{D}_{0}g. \quad (7)\] Finally, we define the deformed image \(x^{\tau}\in X\) according to (2).

One might like to impose some degree of smoothness on the deforming vector field. In fact, it suffices to search in the range of a smoothing operator \(\mathcal{S}:T\to T\). Moreover, this essentially amounts to applying \(\mathcal{S}\) to the solution from the larger search space \(T\). Let \(\alpha = \mathcal{S}(\mathrm{D}_{0}g) = \phi *(\mathrm{D}_{0}g)\), where \(\mathcal{S}\) denotes the componentwise application of a two-dimensional Gaussian filter \(\phi\) (of any standard deviation). Then the vector field \[\hat{\tau} = -\frac{f(x)}{\sum_{s,t = 1}^{W}|\alpha(s,t)|^{2}}\mathcal{S}\alpha = -\frac{f(x)}{\sum_{s,t = 1}^{W}|\alpha(s,t) |^{2}}\mathcal{S}^{2}(\mathrm{D}_{0}g)\] also satisfies (5), since \(\mathcal{S}\) is self-adjoint. We can hence replace \(\tau\) by \(\hat{\tau}\) to obtain a smooth deformation of the image \(x\).

We iterate the deformation process until the deformed image is misclassified. More explicitly, let \(x^{(0)} = x\) and for \(n\geq 1\) let \(\tau^{(n)}\) be given by (7) for \(x^{(n - 1)}\). Then we can define the iteration as

<--- Page Split --->

\(x^{(n)} = x^{(n - 1)}\circ (\mathrm{id} + \tau^{(n)})\). The algorithm terminates and outputs an adversarial example \(y = x^{(n)}\) if \(\mathcal{K}(x^{(n)})\neq l\). The iteration also terminates if \(x^{(n)}\) lies on a decision boundary of \(\mathcal{K}\), in which case we propose to introduce an overshoot factor \(1 + \eta\) on the total deforming vector field. Provided that the number of iterations is moderate, the total vector field can be well approximated by \(\tau^{*} = \tau^{(1)} + \cdots +\tau^{(n)}\), and the process can be altered to output the deformed image \(y = x\circ (\mathrm{id} + (1 + \eta)\tau^{*})\) instead. The target label \(k\) may be chosen in each iteration to minimize the norm of the resulting vector field, thereby obtaining a better approximation in the linearization (4). More precisely, for a candidate set of labels \(k_{1},\ldots ,k_{m}\), we compute the corresponding vector fields \(\tau_{1},\ldots ,\tau_{m}\) and select \[k = \underset {j = 1,\ldots ,m}{\arg \min}\| \tau_{j}\|_{T}.\] The candidate set consists of the labels corresponding to the indices of the \(m\) smallest entries of \(F - F_{l}\), in absolute value.
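To make this concrete, here is a minimal numpy/scipy sketch of a single untargeted ADef step for a single-channel image. The names (`deform`, `adef_step`, `f`, `grad_f`) are ours and not part of the original method's code: `f` is assumed to evaluate \(f = F_k - F_l\) on an image and `grad_f` its derivative with respect to the pixel values. The full procedure, including the candidate label search, is summarized in the pseudocode below.

```python
# Minimal sketch of one ADef update; assumes f(x) -> scalar and
# grad_f(x) -> (W, W) array are provided by the caller.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def deform(x, tau):
    """Apply the discrete deformation (2) to a W x W image via bilinear interpolation."""
    W = x.shape[0]
    s, t = np.meshgrid(np.arange(W), np.arange(W), indexing="ij")
    coords = np.stack([s + tau[..., 0], t + tau[..., 1]])
    return map_coordinates(x, coords, order=1, mode="constant")  # zero outside the grid

def adef_step(x, f, grad_f, sigma=0.5):
    """One least-squares update, equations (6)-(7), with Gaussian smoothing."""
    # Finite differences in pixel units equal (1/(W+1)) * nabla xi, so this
    # product is exactly the vector field D_0 g of equation (6).
    dxi = np.stack(np.gradient(x), axis=-1)
    d0g = grad_f(x)[..., None] * dxi
    smooth = lambda v: np.stack(
        [gaussian_filter(v[..., i], sigma) for i in (0, 1)], axis=-1)
    alpha = smooth(d0g)
    tau = -f(x) / np.sum(alpha ** 2) * smooth(alpha)  # smoothed least-squares solution
    return deform(x, tau), tau
```

An attack then repeats `adef_step`, accumulating the fields \(\tau^{(n)}\) into \(\tau^{*}\), until the predicted label changes or a budget on \(\|\tau^{*}\|_{T}\) is exhausted.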
## Algorithm ADef

Input: Classification model \(F\), image \(x\), correct label \(l\), candidate labels \(k_{1},\ldots ,k_{m}\)
Output: Deformed image \(y\)

Initialize \(y\leftarrow x\)
while \(\mathcal{K}(y) = l\) do
for \(j = 1,\ldots ,m\) do
\[\alpha_{j}\leftarrow \mathcal{S}\left(\sum_{i = 1}^{n}\left((\nabla F_{k_{j}})_{i} - (\nabla F_{l})_{i}\right)\cdot \nabla y_{i}\right)\]
\[\tau_{j}\leftarrow -\frac{F_{k_{j}}(y) - F_{l}(y)}{\|\alpha_{j}\|_{l^{2}}^{2}}\mathcal{S}\alpha_{j}\]
end for
\(i\leftarrow \arg \min_{j = 1,\ldots ,m}\| \tau_{j}\|_{T}\)
\(y\leftarrow y\circ (\mathrm{id} + \tau_{i})\)
end while
return \(y\)

By equation (6), provided that \(\nabla f\) is moderate, the deforming vector field takes small values wherever \(\xi\) has a small derivative. This means that the vector field will be concentrated on the edges in the image \(x\) (see e.g. the first row of figure 2). Further, note that the result of a deformation is always a valid image in the sense that it does not violate the pixel value bounds. This is not guaranteed for the perturbations computed with DeepFool.

## 3 EXPERIMENTS

### 3.1 SETUP

We evaluate the performance of ADef by applying the algorithm to classifiers trained on the MNIST (LeCun) and ImageNet (Russakovsky et al., 2015) datasets. Below, we briefly describe the setup of the experiments, and in tables 1 and 2 we summarize their results.

MNIST: We train two convolutional neural networks based on architectures that appear in Madry et al. (2018) and Tramer et al. (2018), respectively. The network MNIST-A consists of two convolutional layers of sizes \(32\times 5\times 5\) and \(64\times 5\times 5\), each followed by \(2\times 2\) max-pooling and a rectifier activation function, a fully connected layer into dimension 1024 with a rectifier activation function, and a final linear layer with output dimension 10. The network MNIST-B consists of two convolutional layers of sizes \(128\times 3\times 3\) and \(64\times 3\times 3\) with a rectifier activation function, a fully connected layer into dimension 128 with a rectifier activation function, and a final linear layer with output dimension 10. During training, the latter convolutional layer and the former fully connected layer of MNIST-B are subject to dropout with drop probabilities \(1/4\) and \(1/2\). We use ADef to produce adversarial deformations of the images in the test set. The algorithm is configured to pursue any label different from the correct label (all incorrect labels are candidate labels). It performs smoothing by a Gaussian filter of standard deviation \(1/2\), uses bilinear interpolation to obtain intermediate pixel intensities, and it overshoots by \(\eta = 2/10\) whenever it converges to a decision boundary.

<--- Page Split --->

Table 1: The results of applying ADef to the images in the MNIST test set and the ILSVRC2012 validation set. The accuracy of the Inception and ResNet models is defined as the top-1 accuracy on the center-cropped and resized images. The success rate of ADef is shown as a percentage of the correctly classified inputs. The pixel range is scaled to \([0, 1]\), so the perturbation \(r = y - x\), where \(x\) is the input and \(y\) the output of ADef, has values in \([-1, 1]\). The averages in the three last columns are computed over the set of images on which ADef is successful. Recall the definition of the vector field norm in equation (3).

<table><tr><td>Model</td><td>Accuracy</td><td>ADef success</td><td>Avg. \(\|\tau^{*}\|_{T}\)</td><td>Avg. \(\|r\|_{\infty}\)</td><td>Avg. # iterations</td></tr><tr><td>MNIST-A</td><td>98.99%</td><td>99.85%</td><td>1.1950</td><td>0.7455</td><td>7.002</td></tr><tr><td>MNIST-B</td><td>98.91%</td><td>99.51%</td><td>1.0841</td><td>0.7654</td><td>4.422</td></tr><tr><td>Inception-v3</td><td>77.56%</td><td>98.94%</td><td>0.5984</td><td>0.2039</td><td>4.050</td></tr><tr><td>ResNet-101</td><td>76.97%</td><td>99.78%</td><td>0.5561</td><td>0.1882</td><td>4.176</td></tr></table>

![](images/5_0.jpg)

<center>Figure 2: Sample deformations for the Inception-v3 model. The vector fields and perturbations have been amplified for visualization. First row: An image from the ILSVRC2012 validation set, the output of ADef with a Gaussian filter of standard deviation 1, the corresponding vector field and perturbation. The rightmost image is a close-up of the vector field around the nose of the ape. Second row: A larger deformation of the same image, obtained by using a wider Gaussian filter (standard deviation 6) for smoothing. </center>

ImageNet: We apply ADef to pretrained Inception-v3 (Szegedy et al., 2016) and ResNet-101 (He et al., 2016) models to generate adversarial deformations for the images in the ILSVRC2012 validation set. The images are preprocessed by first scaling so that the smaller axis has 299 pixels for the Inception model and 224 pixels for ResNet, and then they are center-cropped to a square image. The algorithm is set to focus only on the label of second highest probability. It employs a Gaussian filter of standard deviation 1, bilinear interpolation, and an overshoot factor \(\eta = \frac{1}{10}\). We only consider inputs that are correctly classified by the model in question, and, since \(\tau^{*} = \tau^{(1)} + \dots +\tau^{(n)}\) approximates the total deforming vector field, we declare ADef to be successful if its output is misclassified and \(\| \tau^{*}\|_{T} \leq \epsilon\), where we choose \(\epsilon = 3\). Observe that, by (3), a deformation with respect to a vector field \(\tau\) does not displace any pixel further away from its original position than \(\| \tau\|_{T}\). Hence, for high resolution images, the choice \(\epsilon = 3\) indeed produces small deformations if the vector fields are smooth. In appendix A, we illustrate how the success rate of ADef depends on the choice of \(\epsilon\).

When searching for an adversarial example, one usually searches for a perturbation with \(\ell^{\infty}\)-norm smaller than some small number \(\epsilon > 0\). Common choices of \(\epsilon\) range from \(\frac{1}{10}\) to \(\frac{3}{10}\) for MNIST

<--- Page Split --->

Table 2: Success rates for PGD and ADef attacks on adversarially trained networks.

<table><tr><td>Model</td><td>Adv. training</td><td>Accuracy</td><td>PGD success</td><td>ADef success</td></tr><tr><td rowspan="2">MNIST-A</td><td>PGD</td><td>98.36%</td><td>5.81%</td><td>6.67%</td></tr><tr><td>ADef</td><td>98.95%</td><td>100.00%</td><td>54.16%</td></tr><tr><td rowspan="2">MNIST-B</td><td>PGD</td><td>98.74%</td><td>5.84%</td><td>20.35%</td></tr><tr><td>ADef</td><td>98.79%</td><td>100.00%</td><td>45.07%</td></tr></table>

![](images/6_0.jpg)

<center>Figure 3: Targeted ADef against MNIST-A. First row: The original image and deformed images produced by restricting ADef to the target labels 0 to 8. The \(\ell^{\infty}\)-norms of the corresponding perturbations are shown under the deformed images. Second row: The vector fields corresponding to the deformations and their \(T\)-norms.
</center>

classifiers (Goodfellow et al., 2014; Madry et al., 2018; Wong & Kolter, 2018; Tramer et al., 2018; Kannan et al., 2018) and \(\frac{2}{255}\) to \(\frac{16}{255}\) for ImageNet classifiers (Goodfellow et al., 2014; Kurakin et al., 2017a; Tramer et al., 2018; Kannan et al., 2018). Table 1 shows that on average, the perturbations obtained by ADef are quite large compared to those constraints. However, as can be seen in figure 2, the relatively high resolution images of the ImageNet dataset can be deformed into adversarial examples that, while corresponding to large perturbations, are not visibly different from the original images. In appendices B and C, we give more examples of adversarially deformed images.

### 3.2 ADVERSARIAL TRAINING

In addition to training MNIST-A and MNIST-B on the original MNIST data, we train independent copies of the networks using the adversarial training procedure described by Madry et al. (2018). That is, before each step of the training process, the input images are adversarially perturbed using the PGD algorithm. This manner of training provides increased robustness against adversarial perturbations of low \(\ell^{\infty}\)-norm. Moreover, we train networks using ADef instead of PGD as an adversary. In table 2 we show the results of attacking these adversarially trained networks, using ADef on the one hand, and PGD on the other. We use the same configuration for ADef as above, and for PGD we use 40 iterations, step size \(\frac{1}{100}\) and \(\frac{3}{10}\) as the maximum \(\ell^{\infty}\)-norm of the perturbation. Interestingly, using these configurations, the networks trained against PGD attacks are more resistant to adversarial deformations than those trained against ADef.

### 3.3 TARGETED ATTACKS

ADef can also be used for targeted adversarial attacks, by restricting the candidate set to a particular target label instead of letting the algorithm pick whichever label yields the smallest deformation. Figure 3 demonstrates the effect of choosing different target labels for a given MNIST image, and figure 4 shows the result of targeting the label of lowest probability for an image from the ImageNet dataset.

<--- Page Split --->

![](images/7_0.jpg)

<center>Figure 4: Untargeted vs. targeted attack on the ResNet-101 model. An image from the ILSVRC2012 validation set deformed to the labels of second highest (first row) and lowest (second row) probabilities (out of 1,000) for the original image. The vector fields and perturbations have been amplified for visualization. </center>

## 4 CONCLUSION

In this work, we proposed ADef, a new efficient algorithm that constructs a new type of adversarial attack for DNN image classifiers. The procedure is iterative, and each iteration takes a gradient descent step to deform the previous iterate in order to push it toward a decision boundary. We demonstrated that with almost imperceptible deformations, state-of-the-art classifiers can be fooled by ADef with a high success rate. This suggests that networks are vulnerable to different types of attacks and that simply training the network on a specific class of adversarial examples might not form a sufficient defense strategy. Given this vulnerability of neural networks to deformations, we wish to study in future work how ADef can help in designing possible defense strategies. Furthermore, we also showed initial results on fooling adversarially trained networks. Remarkably, on MNIST, PGD-trained networks are more resistant to adversarial deformations than ADef-trained networks.
However, for this result to be more conclusive, similar tests on ImageNet will have to be conducted. We wish to study this in future work.

## ACKNOWLEDGMENTS

The authors would like to thank Helmut Bölcskei and Thomas Wiatowski for fruitful discussions.

## A DISTRIBUTION OF VECTOR FIELD NORMS

Figures 5 and 6 show the distribution of the norms of the total deforming vector fields, \(\tau^{*}\), from the experiments in section 3. For networks that have not been adversarially trained, most deformations fall well below the threshold of \(\epsilon = 3\). Out of the adversarially trained networks, only MNIST-A trained against PGD is truly robust against ADef. Further, a comparison between the first column of figure 5 and figure 6 indicates that ImageNet is much more vulnerable to adversarial deformations than MNIST, especially considering the much higher resolution of the images in ImageNet. Thus, it would be very interesting to study the performance of ADef with adversarially trained networks on ImageNet, as mentioned in the Conclusion.

<--- Page Split --->

Table 3: The results of applying ADef to the images in the ILSVRC2012 validation set and the Inception model, using different values for the standard deviation \(\sigma\) of the Gaussian filter. As before, we define ADef to be successful if \(\| \tau^{*}\|_{T}\leq 3\).

<table><tr><td>\(\sigma\)</td><td>ADef success</td><td>Avg. \(\|\tau^{*}\|_{T}\)</td><td>Avg. \(\|r\|_{\infty}\)</td><td>Avg. # iterations</td></tr><tr><td>0</td><td>99.12%</td><td>0.5272</td><td>0.1628</td><td>5.247</td></tr><tr><td>1</td><td>98.94%</td><td>0.5984</td><td>0.2039</td><td>4.050</td></tr><tr><td>2</td><td>95.91%</td><td>0.7685</td><td>0.2573</td><td>3.963</td></tr><tr><td>4</td><td>86.66%</td><td>0.9632</td><td>0.3128</td><td>4.379</td></tr><tr><td>8</td><td>67.54%</td><td>1.1684</td><td>0.3687</td><td>5.476</td></tr></table>

## B SMOOTH DEFORMATIONS

The standard deviation of the Gaussian filter used for smoothing in the update step of ADef has a significant impact on the resulting vector field. To explore this aspect of the algorithm, we repeat the experiment from section 3 on the Inception-v3 model, using standard deviations \(\sigma = 0,1,2,4,8\) (where \(\sigma = 0\) stands for no smoothing). The results are shown in table 3, and the effect of varying \(\sigma\) is illustrated in figures 7 and 8. We observe that as \(\sigma\) increases, the adversarial distortion steadily increases both in terms of vector field norm and perturbation norm. Likewise, the success rate of ADef decreases with larger \(\sigma\). However, from figure 8 we see that the constraint \(\| \tau^{*}\|_{T}\leq 3\) on the total vector field may provide a rather conservative measure of the effectiveness of ADef in the case of smooth high-dimensional vector fields.

## C ADDITIONAL DEFORMED IMAGES

### C.1 MNIST

Figures 9 and 10 show adversarial deformations for the models MNIST-A and MNIST-B, respectively. The attacks are performed using the same configuration as in the experiments in section 3. Observe that in some cases, features resembling the target class have appeared in the deformed image. For example, the top part of the 4 in the fifth column of figure 10 has been curved slightly to resemble a 9 more closely.

### C.2 IMAGENET

Figures 11 – 15 show additional deformed images resulting from attacking the Inception-v3 model using the same configuration as in the experiments in section 3. Similarly, figures 16 – 20 show deformed images resulting from attacking the ResNet-101 model.
However, in order to increase variability in the output labels, we perform a targeted attack, targeting the label of 50th highest probability.

<--- Page Split --->

![](images/12_0.jpg)

<center>Figure 7: The effects of increasing the smoothness parameter \(\sigma\) on adversarial deformations for Inception-v3. First and fourth rows: A correctly classified image and deformed versions. Second and fifth rows: The corresponding deforming vector fields and their \(T\)-norms. Third and sixth rows: The corresponding perturbations and their \(\ell^{\infty}\)-norms. </center>

<--- Page Split --->

![](images/13_0.jpg)

<center>Figure 8: The effects of increasing the smoothness parameter \(\sigma\) on adversarial deformations for Inception-v3. Note that according to the criterion \(\| \tau^{*}\|_{T} \leq 3\), the value \(\sigma = 8\) yields an unsuccessful deformation of the recreational vehicle. </center>

<--- Page Split --->

![](images/14_0.jpg)

<center>Figure 9: Adversarial deformations for MNIST-A. First and third rows: Original images from the MNIST test set. Second and fourth rows: The deformed images and the norms of the corresponding deforming vector fields. </center>

![](images/14_1.jpg)

<center>Figure 10: Adversarial deformations for MNIST-B. Note that image 9 in row 3 is misclassified, and is then deformed to its correct label. </center>

<--- Page Split --->

![](images/15_0.jpg)

<center>Figure 11: ADef attacks on the Inception-v3 model using the same configuration as in the experiments in section 3. </center>

<--- Page Split --->

![](images/16_0.jpg)

<center>Figure 12: ADef attacks on the Inception-v3 model using the same configuration as in the experiments in section 3. </center>

<--- Page Split --->

![](images/17_0.jpg)

<center>Figure 13: ADef attacks on the Inception-v3 model using the same configuration as in the experiments in section 3. </center>

<--- Page Split --->

![](images/18_0.jpg)

<center>Figure 14: ADef attacks on the Inception-v3 model using the same configuration as in the experiments in section 3. </center>

<--- Page Split --->

![](images/19_0.jpg)

<center>Figure 15: ADef attacks on the Inception-v3 model using the same configuration as in the experiments in section 3. </center>

<--- Page Split --->

![](images/20_0.jpg)

<center>Figure 16: ADef attacks on the ResNet-101 model targeting the 50th most likely label. </center>

<--- Page Split --->

![](images/21_0.jpg)

<center>Figure 17: ADef attacks on the ResNet-101 model targeting the 50th most likely label. </center>

<--- Page Split --->

![](images/22_0.jpg)

<center>Figure 18: ADef attacks on the ResNet-101 model targeting the 50th most likely label. The panels include deformations to "viaduct" (\(T\)-norm 1.396) and "ski mask" (\(T\)-norm 1.550). </center>

<--- Page Split --->

![](images/23_0.jpg)

<center>Figure 19: ADef attacks on the ResNet-101 model targeting the 50th most likely label. </center>

<--- Page Split --->

![](images/24_0.jpg)

<center>Figure 20: ADef attacks on the ResNet-101 model targeting the 50th most likely label. </center>

<--- Page Split --->
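As a closing implementation note, the adversarial training protocol of section 3.2 (perturbing each batch before the gradient step) can be sketched as follows. `adef_attack` stands for any attack returning a deformed batch; all names and hyperparameters here are illustrative, not the exact training setup used in the experiments.

```python
# Sketch of adversarial training in the style of Madry et al. (2018), with an
# arbitrary attack (e.g. ADef or PGD) supplied as `adef_attack`.
import torch

def adversarial_train(model, loader, adef_attack, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, labels in loader:
            x_adv = adef_attack(model, x, labels)  # attack the batch before each step
            loss = loss_fn(model(x_adv), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```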
accept
Accept (Poster)
6.666667
ICLR_2019_paper_0968
iclr
2,019
# LATENT CONVOLUTIONAL MODELS

ShahRukh Athar\* Evgeny Burnaev Victor Lempitsky†
Skolkovo Institute of Science and Technology (Skoltech), Russia

## ABSTRACT

We present a new latent model of natural images that can be learned on large-scale datasets. The learning process provides a latent embedding for every image in the training dataset, as well as a deep convolutional network that maps the latent space to the image space. After training, the new model provides a strong and universal image prior for a variety of image restoration tasks such as large-hole inpainting, superresolution, and colorization. To model high-resolution natural images, our approach uses latent spaces of very high dimensionality (one to two orders of magnitude higher than previous latent image models). To tackle this high dimensionality, we use latent spaces with a special manifold structure (convolutional manifolds) parameterized by a ConvNet of a certain architecture. In the experiments, we compare the learned latent models with latent models learned by autoencoders, advanced variants of generative adversarial networks, and a strong baseline system using a simpler parameterization of the latent space. Our model outperforms the competing approaches over a range of restoration tasks.

## 1 INTRODUCTION

Learning good image priors is one of the core problems of computer vision and machine learning. One promising approach to obtaining such priors is to learn a deep latent model, where the set of natural images is parameterized by a certain simple-structured set or probabilistic distribution, whereas the complexity of natural images is tackled by a deep ConvNet (often called a generator or a decoder) that maps from the latent space into the space of images. The best known examples are generative adversarial networks (GANs) (Goodfellow et al., 2014) and autoencoders (Goodfellow et al., 2016).

Given a good deep latent model, virtually any image restoration task can be solved by finding a latent representation that best corresponds to the image evidence (e.g. the known pixels of an occluded image or a low-resolution image). The attractiveness of such an approach lies in the universality of the learned image prior. Indeed, applying the model to a new restoration task can be performed by simply changing the likelihood objective. The same latent model can therefore be reused for multiple tasks, and the learning process need not know the image degradation process in advance. This is in contrast to task-specific approaches that usually train deep feed-forward ConvNets for individual tasks, and which have a limited ability to generalize across tasks (e.g. a feed-forward network trained for denoising cannot perform large-hole inpainting and vice versa).

At the moment, such an approach to image restoration based on latent models is limited to low-resolution images. E.g. (Yeh et al., 2017) showed how a latent model trained with a GAN can be used to perform inpainting of tightly-cropped \(64 \times 64\) face images. Below, we show that such models trained with GANs cannot generalize to higher resolutions (even though GAN-based systems are now able to obtain high-quality samples at high resolutions (Karras et al., 2018)). We argue that it is the limited dimensionality of the latent space in GANs and other existing latent models that precludes them from spanning the space of high-resolution natural images.
To scale up latent modeling to high-resolution images, we consider latent models with tens of thousands of latent dimensions (as compared to a few hundred latent dimensions in existing works). We show that training such latent models is possible using direct optimization (Bojanowski et al., 2018) and that it leads to good image priors that can be used across a broad variety of reconstruction tasks. In previous models, the latent space has a simple structure such as a sphere or a box in a Euclidean space, or a full Euclidean space with a Gaussian prior. Such a choice, however, is not viable in our

<--- Page Split --->

![](images/1_0.jpg)

<center>Figure 1: Restorations using the same Latent Convolutional Model (images 2,4,6) for different image degradations (images 1,3,5). At training time, our approach builds a latent model of non-degraded images, and at test time the restoration process simply finds a latent representation that maximizes the likelihood of the corrupted image and outputs a corresponding non-degraded image as a restoration result. </center>

case, as vectors with tens of thousands of dimensions cannot be easily used as inputs to a generator. Therefore, we consider two alternative parameterizations of a latent space. Firstly, as a baseline, we consider latent spaces parameterized by image stacks (three-dimensional tensors), which allows for "fully-convolutional" generators with a reasonable number of parameters. Our full system uses a more sophisticated parameterization of the latent space, which we call a convolutional manifold, where the elements of the manifold correspond to the parameter vector of a separate ConvNet. Such indirect parameterization of images and image stacks has recently been shown to impose a certain prior (Ulyanov et al., 2018), which is beneficial for the restoration of natural images. In our case, we show that a similar prior can be used with success to parameterize high-dimensional latent spaces.

To sum up, our contributions are as follows. Firstly, we consider the training of deep latent image models with latent dimensionality much higher than in previous works, and demonstrate that the resulting models provide universal (w.r.t. restoration tasks) image priors. Secondly, we suggest and investigate the convolutional parameterization for the latent spaces of such models, and show the benefits of such parameterization. Our experiments are performed on the CelebA (Liu et al., 2015) (128x128 resolution), SUN Bedrooms (Yu et al., 2015) (256x256 resolution), and CelebA-HQ (Karras et al., 2018) (1024x1024 resolution) datasets, and we demonstrate that the latent models, once trained, can be applied to large-hole inpainting, superresolution of very small images, and colorization tasks, outperforming other latent models in our comparisons. To the best of our knowledge, we are the first to demonstrate how "direct" latent modeling of natural images without extra components can be used to solve image restoration problems at these resolutions (Figure 1).

Other related work. Deep latent models follow a long line of works on latent image models that goes back at least to the eigenfaces approach (Sirovich & Kirby, 1987). In terms of restoration, a competing and more popular approach is based on feed-forward networks trained for specific restoration tasks, which have seen rapid progress recently. Our approach does not quite match the quality of e.g. (Iizuka et al., 2017), which is designed and trained specifically for the inpainting task, or the quality of e.g.
(Yu & Porikli, 2016), which is designed and trained specifically for the face superresolution task. Yet the models trained within our approach (like other latent models) are universal, as they can handle degradations unanticipated at training time. Our work is also related to pre-deep-learning ("shallow") methods that learn priors on (potentially-overlapping) image patches using maximum likelihood-type objectives such as (Roth & Black, 2005; Karklin & Lewicki, 2009; Zoran & Weiss, 2011). The use of multiple layers in our method allows capturing much longer correlations. As a result, our method can be used successfully to handle restoration tasks that require exploiting these correlations, such as large-hole inpainting.

## 2 METHOD

Let \(\{\mathbf{x}_1, \mathbf{x}_2, \ldots , \mathbf{x}_N\}\) be a set of training images, which are considered to be samples from the distribution \(X\) of images in the space \(\mathcal{X}\) of images of a certain size that need to be modeled. In latent modeling, we introduce a different space \(\mathcal{Z}\) and a certain distribution \(Z\) in that space that is used to re-parameterize \(\mathcal{X}\). In previous works, \(\mathcal{Z}\) is usually chosen to be a Euclidean space with a few dozen to a few hundred dimensions, while our choice for \(\mathcal{Z}\) is discussed further below.

<--- Page Split --->

![](images/2_0.jpg)

<center>Figure 2: The Latent Convolutional Model incorporates two sequential ConvNets. The smaller ConvNet \(f\) (red) is fitted to each training image and is effectively used to parameterize the latent manifold. The bigger ConvNet \(g\) (magenta) is used as a generator, and its parameters are fitted to all training data. The input \(s\) to the pipeline is fixed to a random noise and not updated during training. </center>

The deep latent modeling of images implies learning the generator network \(g_{\theta}\) with learnable parameters \(\theta\), which usually has a convolutional architecture. The generator network maps from \(\mathcal{Z}\) to \(\mathcal{X}\) and in particular is trained so that \(g_{\theta}(Z) \approx X\). Achieving the latter condition is extremely hard, and there are several approaches that can be used. Thus, generative adversarial networks (GANs) (Goodfellow et al., 2014) train the generator network in parallel with a separate discriminator network that in some variants of GANs serves as an approximate ratio estimator between \(X\) and \(X + g_{\theta}(Z)\) over points in \(\mathcal{X}\). Alternatively, autoencoders (Goodfellow et al., 2016) and their variational counterparts (Kingma & Welling, 2014) train the generator in parallel with the encoder operating in the reverse direction, resulting in a more complex distribution \(Z\). Of these two approaches, only GANs are known to be capable of synthesizing high-resolution images, although such an ability comes with additional tricks and modifications of the learning formulation (Arjovsky et al., 2017; Karras et al., 2018).

In this work, we start with a simpler approach to deep latent modeling (Bojanowski et al., 2018) known as the GLO model. The GLO model optimizes the parameters of the generator network in parallel with the explicit embeddings of the training examples \(\{\mathbf{z}_1, \mathbf{z}_2, \ldots , \mathbf{z}_N\}\), such that \(g_{\theta}(\mathbf{z}_i) \approx \mathbf{x}_i\) by the end of the optimization.
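The following is a minimal PyTorch sketch of this joint optimization. For brevity it uses a plain MSE reconstruction loss rather than the Lap-L1 loss discussed below, and `make_generator` is a placeholder for any decoder architecture; both names and hyperparameters are illustrative.

```python
# Sketch of GLO-style training: generator weights theta and one latent code z_i
# per training image are optimized jointly to minimize reconstruction error.
import torch

def glo_train(images, make_generator, z_dim=256, steps=10000, lr=1.0, batch=16):
    n = images.shape[0]
    g = make_generator(z_dim)                      # decoder g_theta
    z = torch.randn(n, z_dim, requires_grad=True)  # learnable per-image embeddings
    opt = torch.optim.SGD(list(g.parameters()) + [z], lr=lr)
    for _ in range(steps):
        idx = torch.randint(0, n, (batch,))        # mini-batch of image indices
        loss = torch.nn.functional.mse_loss(g(z[idx]), images[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return g, z.detach()
```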
Our approach differs from and expands on (Bojanowski et al., 2018) in three ways: (i) we consider a much higher dimensionality of the latent space, (ii) we use an indirect parameterization of the latent space discussed further below, (iii) we demonstrate the applicability of the resulting model to a variety of image restoration tasks.

Scaling up latent modeling. Relatively low-dimensional latent models of natural images presented in previous works are capable of producing visually-compelling image samples from the distribution (Karras et al., 2018), but are not actually capable of matching or covering a rather high-dimensional distribution \(X\). E.g. in our experiments, none of the GAN models were capable of reconstructing most samples \(\mathbf{x}\) from the hold-out set (or even from the training set; this observation is consistent with (Bojanowski et al., 2018) and also with (Zhu et al., 2016)). Being unable to reconstruct uncorrupted samples clearly suggests that the learned models are not suitable to perform restoration of corrupted samples. On the other hand, autoencoders and the related GLO latent model (Bojanowski et al., 2018) were able to achieve better reconstructions than GANs on the hold-out sets, yet produce distinctly blurry reconstructions (even on the training set), suggesting strong underfitting.

We posit that existing deep latent models are limited by the dimensionality of the latent space that they consider, and aim to scale up this dimensionality significantly. Simply scaling up the latent dimensionality to a few tens of thousands of dimensions is not easily feasible, as e.g. the generator network has to work with such a vector as an input, which would make the first fully-connected layer excessively large with hundreds of millions of parameters<sup>1</sup>.

To achieve a tractable size of the generator, one can consider latent elements \(\mathbf{z}\) to have a three-dimensional tensor structure, i.e. to be stacks of 2D image maps. Such a choice of structure is very

<--- Page Split --->

natural for convolutional architectures, and allows training "fully-convolutional" generators with the first layer being a standard convolutional operation. The downside of this choice, as we shall see, is that it allows limited coordination between distant parts of the images \(\mathbf{x} = g_{\theta}(\mathbf{z})\) produced by the generator. This drawback is avoided when the latent space is parameterized using latent convolutional manifolds as described next.

Latent convolutional manifolds. To impose a more appropriate structure on the latent space, we consider structuring these spaces as convolutional manifolds defined as follows. Let \(\mathbf{s}\) be a stack of maps of the size \(W_{s} \times H_{s} \times C_{s}\) and let \(\{f_{\phi} | \phi \in \Phi \}\) be a set of convolutional networks all sharing the same architecture \(f\) that transforms \(\mathbf{s}\) to different maps of size \(W_{z} \times H_{z} \times C_{z}\). A certain parameter vector \(\phi \in \Phi\) thus defines a certain convolutional network \(f_{\phi}\). Then, let \(\mathbf{z}(\phi) = f_{\phi}(\mathbf{s})\) be an element in the space of \((W_{z} \times H_{z} \times C_{z})\)-dimensional maps. Various choices of \(\phi\) then span a manifold embedded into this space, and we refer to it as the convolutional manifold.
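A minimal PyTorch sketch of such a parameterization follows (the formal definition is given in equation (1) below): a small ConvNet turns a fixed noise stack into latent maps, and all optimization happens over its weights. The layer sizes here are illustrative and do not reproduce the exact architectures of Appendix D.

```python
# Sketch of a convolutional latent manifold: z = f_phi(s) for a fixed noise
# input s; points on the manifold are indexed by the network weights phi.
import torch
import torch.nn as nn

class LatentConvNet(nn.Module):
    def __init__(self, c_in=8, c_out=16, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, width, 3), nn.LeakyReLU(0.2),
            nn.Conv2d(width, width, 3), nn.LeakyReLU(0.2),
            nn.Conv2d(width, c_out, 3),
        )

    def forward(self, s):
        return self.net(s)

    def clamp_weights_(self, b=0.01):
        # Enforce the box constraint phi in [-B, B]^{N_phi} after each update.
        with torch.no_grad():
            for p in self.parameters():
                p.clamp_(-b, b)

s = torch.rand(1, 8, 38, 38)   # fixed uniform noise, never updated
f_phi = LatentConvNet()
z = f_phi(s)                   # latent maps of size 16 x 32 x 32 (no padding)
f_phi.clamp_weights_()         # project weights back into the box after an SGD step
```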
A convolutional manifold \(\mathbf{C}_{f,\mathbf{s}}\) is thus defined by the ConvNet architecture \(f\) as well as by the choice of the input \(\mathbf{s}\) (which in our experiments is always chosen to be filled with uniform random noise). We additionally restrict the elements of the vectors \(\phi\) to lie within the \([- B; B]\) range. Formally, the convolutional manifold is defined as the following set: \[\mathbf{C}_{f,\mathbf{s}} = \{\mathbf{z} | \mathbf{z} = f_{\phi}(\mathbf{s}), \phi \in \Phi \} , \Phi = [- B; B]^{N_{\phi}}, \quad (1)\] where \(\phi\) serves as a natural parameterization and \(N_{\phi}\) is the number of network parameters. Below, we refer to \(f\) as the latent ConvNet, to disambiguate it from the generator \(g\), which also has a convolutional structure.

The idea of the convolutional manifold is inspired by the recent work on deep image priors (Ulyanov et al., 2018). While they effectively use convolutional manifolds to model natural images directly, in our case, we use them to model the latent space of the generator networks, resulting in a fully-fledged learnable latent image model (whereas the model in (Ulyanov et al., 2018) cannot be learned on a dataset of images). The work (Ulyanov et al., 2018) demonstrates that the regularization imposed by the structure of a very high-dimensional convolutional manifold is beneficial when modeling natural images. Our intuition here is that a similar regularization would be beneficial when learning high-dimensional latent spaces. As our experiments below reveal, this intuition holds true.

Learning formulation. Learning the deep latent model (Figure 2) in our framework then amounts to the following optimization task. Given the training examples \(\{\mathbf{x}_{1}, \mathbf{x}_{2}, \ldots , \mathbf{x}_{N}\}\), the architecture \(f\) of the convolutional manifold, and the architecture \(g\) of the generator network, we seek the set of the latent ConvNet parameter vectors \(\{\phi_{1}, \phi_{2}, \ldots , \phi_{N}\}\) and the parameters of the generator network \(\theta\) that minimize the following objective: \[L(\phi_{1}, \phi_{2}, \ldots , \phi_{N}, \theta) = \frac{1}{N} \sum_{i = 1}^{N} \| g_{\theta}(f_{\phi_{i}}(\mathbf{s})) - \mathbf{x}_{i} \| , \quad (2)\] with additional box constraints \(\phi_{i}^{j} \in [- 0.01; 0.01]\) and \(\mathbf{s}\) being a random set of image maps filled with uniform noise. Following (Bojanowski et al., 2018), the norm in (2) is taken to be the Laplacian-L1: \(\| \mathbf{x}_{1} - \mathbf{x}_{2}\|_{\mathrm{Lap - L1}} = \sum_{j} 2^{- 2j} |L^{j}(\mathbf{x}_{1} - \mathbf{x}_{2})|_{1}\), where \(L^{j}\) is the \(j\)th level of the Laplacian image pyramid (Burt & Adelson, 1983). We have also found that adding an extra MSE loss term to the Lap-L1 loss term with a weight of 1.0 speeds up the convergence of the models without affecting the results by much. The optimization (2) is performed using stochastic gradient descent.

As an outcome of the optimization, each training example \(\mathbf{x}_{i}\) gets a representation \(\mathbf{z}_{i} = f_{\phi_{i}}(\mathbf{s})\) on the convolutional manifold \(\mathbf{C}_{f,\mathbf{s}}\). Importantly, the elements of the convolutional manifold then define a set of images in the image space (which is the image of the convolutional manifold under the learned generator): \[\mathbf{I}_{f,\mathbf{s},\theta} = \{\mathbf{x} | \mathbf{x} = g_{\theta}(f_{\phi}(\mathbf{s})), \phi \in \Phi \} .
\quad (3)\]

<--- Page Split --->

![](images/4_0.jpg)

<center>Figure 3: Results (perceptual metrics – lower is better – and user preferences) for the two datasets (CelebA – left, Bedrooms – right) and three tasks (inpainting, super-resolution, colorization). For the colorization task the perceptual metric is inadequate as the grayscale image has the lowest error, but is shown for completeness. </center>

Table 1: MSE loss on the restored images with respect to the ground truth. For inpainting the MSE was calculated just over the inpainted region of the images.

<table><tr><td rowspan="2"></td><td colspan="5">CelebA</td><td colspan="3">LSUN-Bedrooms</td></tr><tr><td>LCM</td><td>GLO</td><td>DIP</td><td>AE</td><td>WGAN</td><td>LCM</td><td>GLO</td><td>DIP</td></tr><tr><td>Inpainting</td><td>0.0034</td><td>0.0038</td><td>0.0091</td><td>0.0065</td><td>0.0344</td><td>0.0065</td><td>0.0085</td><td>0.0063</td></tr><tr><td>Super-res</td><td>0.0061</td><td>0.0063</td><td>0.0052</td><td>0.0083</td><td>0.0446</td><td>0.0071</td><td>0.0069</td><td>0.0057</td></tr><tr><td>Colorization</td><td>0.0071</td><td>0.0069</td><td>0.0136</td><td>0.0194</td><td>0.0373</td><td>0.0066</td><td>0.0075</td><td>0.0696</td></tr></table>

While not all elements of the manifold \(\mathbf{I}_{f,\mathbf{s},\theta}\) will correspond to natural images from the distribution \(X\), we have found that with a few thousand dimensions, the resulting manifolds can cover the support of \(X\) rather well, i.e. each sample from the image distribution can be approximated by an element of \(\mathbf{I}_{f,\mathbf{s},\theta}\) with a low approximation error. This property can be used to perform all kinds of image restoration tasks.

Image restoration using learned latent models. We now describe how the learned latent model can be used to perform the restoration of an unknown image \(\mathbf{x}_0\) from the distribution \(X\), given some evidence \(\mathbf{y}\). Depending on the degradation process, the evidence \(\mathbf{y}\) can be an image \(\mathbf{x}_0\) with masked values (inpainting task), the low-resolution version of \(\mathbf{x}_0\) (superresolution task), the grayscale version of \(\mathbf{x}_0\) (colorization task), the noisy version of \(\mathbf{x}_0\) (denoising task), certain statistics of \(\mathbf{x}_0\) computed e.g. using a deep network (feature inversion task), etc. We further assume that the degradation process is described by the objective \(E(\mathbf{x}|\mathbf{y})\), which can be set to the negative log-likelihood \(E(\mathbf{x}|\mathbf{y}) = -\log p(\mathbf{y}|\mathbf{x})\) of observing \(\mathbf{y}\) as a result of the degradation of \(\mathbf{x}\). E.g. for the inpainting task, one can use \(E(\mathbf{x}|\mathbf{y}) = \| (\mathbf{x} - \mathbf{y})\odot \mathbf{m}\|\), where \(\mathbf{m}\) is the 0-1 mask of known pixels and \(\odot\) denotes element-wise product. For the superresolution task, the restoration objective is naturally defined as \(E(\mathbf{x}|\mathbf{y}) = \| \downarrow (\mathbf{x}) - \mathbf{y}\|\), where \(\downarrow (\cdot)\) is an image downsampling operator (we use Lanczos in the experiments) and \(\mathbf{y}\) is the low-resolution version of the image.
For the colorization task, the objective is defined as \(E(\mathbf{x}|\mathbf{y}) = \| \mathrm{gray}(\mathbf{x}) - \mathbf{y}\|\), where \(\mathrm{gray}(\cdot)\) denotes a projection from RGB to grayscale images (we use a simple averaging of the three color channels in the experiments) and \(\mathbf{y}\) is the grayscale version of the image.

<--- Page Split --->

![](images/5_0.jpg)

<center>Figure 4: Qualitative comparison on CelebA (see the text for discussion). </center>

Using the learned latent model as a prior, the following estimation combining the learned prior and the provided image evidence is performed: \[\hat{\phi} = \arg \min_{\phi}E(g_{\theta}(f_{\phi}(\mathbf{s}))\mid \mathbf{y}),\qquad \hat{\mathbf{x}} = g_{\theta}(f_{\hat{\phi}}(\mathbf{s})). \quad (4)\] In other words, we simply estimate the element of the image manifold (3) that has the highest likelihood. The optimization is performed using stochastic gradient descent over the parameters \(\phi\) on the latent convolutional manifold. For the baseline models, which use a direct parameterization of the latent space, we perform an analogous estimation using optimization in the latent space: \[\hat{\mathbf{z}} = \arg \min_{\mathbf{z}}E(g_{\theta}(\mathbf{z})\mid \mathbf{y}),\qquad \hat{\mathbf{x}} = g_{\theta}(\hat{\mathbf{z}}). \quad (5)\] In the experiments, we compare the performance of our full model and several baseline models over a range of restoration tasks using formulations (4) and (5).

## 3 EXPERIMENTS

Datasets. The experiments were conducted on three datasets. The CelebA dataset was obtained by taking the 150K images from (Liu et al., 2015) (cropped version) and resizing them from \(178 \times 218\) to \(128 \times 128\). Note that unlike most other works, we have performed anisotropic rescaling rather than additional cropping, leading to a version of the dataset with larger background portions and higher variability (corresponding to a harder modeling task). The Bedrooms dataset from LSUN (Yu et al., 2015) is another popular image dataset. We rescale all its images to the \(256 \times 256\) size. Finally, the CelebA-HQ dataset from (Karras et al., 2018) consists of 30K \(1024 \times 1024\) images of faces.

Tasks. We have compared methods for three diverse tasks. For the inpainting task, we have degraded the input images by masking the center part of the image (\(50 \times 50\) for CelebA, \(100 \times 100\) for Bedrooms, \(400 \times 400\) for CelebA-HQ). For the superresolution task, we downsampled the images by a factor of eight. For the colorization task, we have averaged the color channels, obtaining the gray version of the image.

### 3.1 EXPERIMENTS ON CELEBA AND BEDROOMS

We have performed extensive comparisons with other latent models on the two datasets with smaller image size and lower training times (CelebA and Bedrooms). The following latent models were compared:

- Latent Convolutional Models (LCM, ours): Each \(f_{\phi_i}\) has 4 layers (in CelebA), 5 layers (in Bedrooms) or 7 layers (in CelebA-HQ) and takes random uniform noise as input. The generator \(g_\theta\) has an hourglass architecture.
The latent dimensionality of the model was 24k for CelebA and 61k for Bedrooms.

- GLO: The baseline model discussed at the end of Section 2 and inspired by (Bojanowski et al., 2018), where the generator network has the same architecture as in LCM, but the

<--- Page Split --->

convolutional space is parameterized by a set of maps. The latent dimensionality is the same as in LCM (and thus much higher than in (Bojanowski et al., 2018)). We have also tried a variant reproduced exactly from (Bojanowski et al., 2018) with vectorial latent spaces that feed into a fully-connected layer (for dimensionalities ranging from 2048 to 8192 – see Appendix B), but invariably observed underfitting. Generally, we took extra care to find the optimal parameterization that would be most favourable to this baseline.

- DIP: The deep image prior-based restoration (Ulyanov et al., 2018). We use the architecture proposed by the authors in the paper. DIP can be regarded as an extreme version of our approach with the generator network being an identity. DIP fits 1M parameters to each image for inpainting and colorization and 2M parameters for super-resolution.

- GAN: For CelebA we train a WGAN-GP (Gulrajani et al., 2017) with a DCGAN-type generator and a latent space of dimensionality 256. For Bedrooms we use the pretrained Progressive GAN (PGAN) models with a latent space of dimensionality 512 published by the authors of (Karras et al., 2018). During restoration, we do not impose a prior on the norm of \(\mathbf{z}\) since it worsens the underfitting problem of GANs (as demonstrated in Appendix C).

- AE: For CelebA we have also included a standard autoencoder using the Lap-L1 and MSE reconstruction metrics in the comparison (latent dimensionality 1024). We have also tried a variant with a convolutional higher-dimensional latent space, but observed very strong overfitting. The variational variant (latent dimensionality 1024) led to stronger underfitting than the non-variational variant. As the experiments on CelebA clearly showed strong underfitting, we have not included AE in the comparison on the higher-resolution Bedrooms dataset.

For the Bedrooms dataset we restricted training to the first 200K training samples, except for DIP (which does not require training) and GAN (we used the Progressive GAN model trained on all 3M samples). All comparisons were performed on hold-out sets not used for training. Following (Bojanowski et al., 2018), we use plain SGD with a very high learning rate of 1.0 to train LCM and of 10.0 to train the GLO models. The exact architectures are given in Appendix D.

Metrics. We have used quantitative and user study-based assessments of the results. For the quantitative measure, we have chosen the mean squared error (MSE) in pixel space, as well as the mean squared distance of the VGG16 features (Simonyan & Zisserman, 2015) between the original and the reconstructed images. Such perceptual metrics are known to be correlated with human judgement (Johnson et al., 2016; Zhang et al., 2018). We have used the [relu1_2, relu2_2, relu3_3, relu4_3, relu5_3] layers, contributing to the distance metric with equal weights. Generally, we observed that the relative performance of the methods was very similar for the MSE measure, for the individual VGG layers, and for the averaged VGG metrics that we report here. When computing the loss for the inpainting task we only considered the positions corresponding to the masked part.
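A sketch of this perceptual distance is given below. The torchvision feature indices (3, 8, 15, 22, 29) are our assumption for the relu1_2-relu5_3 activations of VGG16, and the inputs are assumed to be normalized (N, 3, H, W) batches.

```python
# Sketch of the averaged VGG16 perceptual distance between two image batches.
import torch
import torchvision

_vgg = torchvision.models.vgg16(pretrained=True).features.eval()
_RELU_IDX = (3, 8, 15, 22, 29)   # relu1_2, relu2_2, relu3_3, relu4_3, relu5_3

def perceptual_distance(x, y):
    dist, hx, hy = 0.0, x, y
    with torch.no_grad():
        for i, layer in enumerate(_vgg):
            hx, hy = layer(hx), layer(hy)     # run both inputs through the layer
            if i in _RELU_IDX:
                dist = dist + torch.mean((hx - hy) ** 2)
            if i == max(_RELU_IDX):
                break                          # deeper layers are not needed
    return dist / len(_RELU_IDX)
```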
Quantitative metrics, however, have limited relevance for tasks with highly multimodal conditional distributions, i.e. where two very different answers can be equally plausible, such as all three tasks that we consider (e.g. there could be very different colorizations of the same bedroom image). In this situation, human judgement of quality is perhaps the best measure of algorithm performance. To obtain such judgements, we have performed a user study, where we have picked 10 random images for each of the two datasets and each of the three tasks. The results of all compared methods alongside the degraded inputs were shown to the participants (100 for CelebA, 38 for Bedrooms). For each example, each subject was asked to pick the best restoration variant (we asked to take into account both realism and fidelity to the input). The results were presented in random order (shuffled independently for each example). We then report the percentage of user choices for each method for a given task on a given dataset, averaged over all subjects and all ten images.

Results. The results of the comparison are summarized in Figure 3 and Table 1, with representative examples shown in Figure 4 and Figure 5. "Traditional" latent models (WGAN/PGAN and AE) performed poorly. In particular, GAN-based models produced results that were both unrealistic and a poor fit to the likelihood. Note that during fitting we have not imposed the Gaussian prior on the latent space of GANs. Adding such a prior did not result in a considerable increase of realism and led to an even poorer fit to the evidence (see Appendix C).

<--- Page Split --->

The DIP model did very well for inpainting and superresolution on the relatively unstructured Bedrooms dataset. It, however, performed very poorly on CelebA due to its inability to learn face structure from data, and on the colorization task due to its inability to learn about natural image colors.

Except for Bedrooms-inpainting, the new models with very large latent spaces produced results that were clearly favoured by the users. LCM performed better than GLO in all six user comparisons, while in terms of the perceptual metric the performance of LCM was also better than that of GLO for the inpainting and superresolution tasks. For the colorization task, the LCM is unequivocally better in terms of user preferences, and worse in terms of the perceptual metric. We note, however, that the perceptual metric is inadequate for the colorization task, as the original grayscale image scores better than the results of all evaluated methods. We therefore only provide the results in this metric for colorization for the sake of completeness (finding a good quantitative measure for the highly-ambiguous colorization task is a well-known unsolved problem). Additional results on the CelebA and Bedrooms datasets are given in Appendices A, F, G.
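For concreteness, restoration with a trained LCM, i.e. the optimization (4) with the inpainting objective, can be sketched as follows. Here `f` is the per-image latent ConvNet, `g` the trained generator (kept frozen), `s` the fixed noise input, and `mask` the 0-1 mask of known pixels; the names and hyperparameters are illustrative.

```python
# Sketch of inpainting via optimization over the latent ConvNet parameters phi.
import torch

def restore_inpainting(g, f, s, y, mask, steps=500, lr=1.0, b=0.01):
    for p in g.parameters():
        p.requires_grad_(False)                   # the learned prior stays fixed
    opt = torch.optim.SGD(f.parameters(), lr=lr)
    for _ in range(steps):
        x = g(f(s))
        loss = torch.mean(((x - y) * mask) ** 2)  # fit only the known pixels
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                     # keep phi inside the box [-b, b]
            for p in f.parameters():
                p.clamp_(-b, b)
    return g(f(s)).detach()
```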
Table 2: Metrics of optimization over the z-space, the convolutional manifold, and the Progressive GAN (Karras et al., 2018) latent space.

<table><tr><td>Optimization Over</td><td>MSE (known pixels)</td><td>MSE (inpainted pixels)</td><td>Perceptual Metric</td></tr><tr><td>Convolutional Net Parameters</td><td>0.00307</td><td>0.00171</td><td>0.02381</td></tr><tr><td>Z-Space</td><td>0.00141</td><td>0.00854</td><td>0.02736</td></tr><tr><td>PGAN Latent Space</td><td>0.00477</td><td>0.00224</td><td>0.02546</td></tr></table>

![](images/8_0.jpg)

<center>Figure 6: A comparison of optimization over the convolutional manifold (column "OptConv"), the z-space (column "OptZ") and the Progressive GAN (Karras et al., 2018) latent space (column "PGAN") on the CelebA-HQ dataset (Karras et al., 2018). </center>

<--- Page Split --->

For CelebA-HQ, we have limited the comparison of the LCM model to the pretrained Progressive GAN model (Karras et al., 2018) published by the authors (this is because proper tuning of the parameters of the other baselines would take too much time). On this dataset, LCM uses a latent space of 135k parameters. Additionally, we use CelebA-HQ to highlight the role of the convolutional manifold structure in the latent space.

Recall that the use of the convolutional manifold parameterization is what distinguishes the LCM approach from the GLO baseline. The advantage of the new parameterization is highlighted by the experiments described above. One may wonder if the convolutional manifold constraint is needed at test time, or if during the restoration process the constraint can be omitted (i.e. if (5) can be used instead of (4) with the generator network \(g\) trained with the constraint). Generally, we observed that the use of the constraint at test time had a minor effect on the CelebA and Bedrooms datasets, but a very pronounced one on the CelebA-HQ dataset (where the training set is much smaller and the resolution is much higher). In Figure 6 and Table 2, we provide a qualitative and quantitative comparison between the Progressive GAN model (Karras et al., 2018), the LCM model, and the same LCM model applied without the convolutional manifold constraint for the task of inpainting. The full LCM model with the convolutional manifold performed markedly better than the other two approaches. Progressive GAN severely underfit even the known pixels. This is despite the fact that the training set of (Karras et al., 2018) included the validation set (since their model was trained on the full CelebA-HQ dataset). Unconstrained LCM overfit the known pixels while providing implausible inpaintings for the unknown ones. The full LCM model obtained a much better balance between fitting the known pixels and inpainting the unknown ones.

## 4 CONCLUSION

The results in this work suggest that high-dimensional latent spaces are necessary to get good image reconstructions on hold-out sets. Further, they show that parametrizing these spaces using ConvNets imposes further structure on them that allows us to produce good image restorations from a wide variety of degradations and at relatively high resolutions. More generally, this method can easily be extended to come up with more interesting parametrizations of the latent space, e.g. by interleaving layers with image-specific and dataset-specific parameters.

The proposed approach has several limitations. First, when trained over very large datasets, the LCM model requires a long time to be trained until convergence.
For instance, training an LCM on 150k samples of CelebA at \(128 \times 128\) resolution takes about 14 GPU-days. Note that the GLO model of the same latent dimensionality takes about 10 GPU-days. On the other hand, the universality of the models means that they only need to be trained once for a certain image type, and can be applied to any degradations after that. The second limitation is that both the LCM and GLO models require storing their latent representations in memory, which for large datasets and large latent spaces may pose a problem. Furthermore, we observe that even with the large latent dimensionalities that we use here, the models are not able to fit the training data perfectly, i.e. they still suffer from underfitting. Our model also assumes that the (log-)likelihood corresponding to the degradation process can be modeled and differentiated. Experiments suggest, however, that such modeling need not be very accurate; e.g. a simple quadratic log-likelihood can be used to restore JPEG-degraded images (Appendix H). Finally, our model requires lengthy optimization in latent space, rather than a feed-forward pass, at test time. The number of iterations, however, can be drastically reduced using degradation-specific or universal feed-forward encoders from the image space to the latent space that may provide a reasonable starting point for the optimization.

## 5 ACKNOWLEDGEMENTS

This work has been supported by the Ministry of Science of the Russian Federation (grant 14.756.31.0001). ShahRukh Athar is partially supported by NSF-CNS-1718014, NSF-IIS-1566248 and a gift from Adobe.

<--- Page Split --->

## REFERENCES

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Proc. ICML, pp. 214-223, 2017.

P. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam. Optimizing the latent space of generative networks. In Proc. ICML, 2018.

P. Burt and E. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31(4):532-540, Apr 1983. ISSN 0090-6778. doi: 10.1109/TCOM.1983.1095851.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proc. NIPS, pp. 2672-2680, 2014.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. In Proc. NIPS, pp. 5767-5777, 2017.

Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. Proc. SIGGRAPH, 36(4):107:1-107:14, 2017.

Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proc. ECCV, 2016.

Yan Karklin and Michael S Lewicki. Emergence of complex cell properties by learning to generalize in natural scenes. Nature, 457(7225):83, 2009.

T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In Proc. ICLR, 2018.

D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In Proc. ICLR, 2014.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proc. ICCV, December 2015.

Stefan Roth and Michael J Black. Fields of experts: A framework for learning image priors. In Proc. CVPR, volume 2, pp. 860-867, 2005.

K. Simonyan and A. Zisserman.
Very deep convolutional networks for large-scale image recognition. In Proc. ICLR, 2015. Lawrence Sirovich and Michael Kirby. Low-dimensional procedure for the characterization of human faces. JOSA A, 4(3):519-524, 1987. D. Ulyanov, A. Vedaldi, and V. Lempitsky. Deep image prior. In Proc. CVPR, 2018. R. A. Yeh, C. Chen, T. Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do. Semantic image inpainting with deep generative models. In Proc. CVPR, pp. 6882-6890, 2017. Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015. Xin Yu and Fatih Porikli. Ultra-resolving face images by discriminative generative networks. In Proc. ECCV, pp. 318-333. Springer, 2016. R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. arXiv e-prints, January 2018. Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proc. ECCV, pp. 597-613, 2016. Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration. In Proc. ICCV, pp. 479-486, 2011. <--- Page Split ---> ![](images/11_0.jpg) <center>Figure 7: Half-image completion task results on the CelebA dataset ( \(128 \times 128\) resolution) </center> ## A OTHER INPAINTING TASKS Further qualitative comparisons are performed on the CelebA dataset. In Figure 7, we show a comparison on the "extreme" task of half-image inpainting. Figure 8 gives a comparison for the task of inpainting where \(95\%\) of pixel values are occluded at random. In both cases, the LCM model achieves the best balance between fitting the known evidence and the inpainting quality of the unknown pixels. ## B VECTORIAL GLO RESULTS As a baseline in the main text, we have used the variant of the GLO model (Bojanowski et al., 2018) in which the latent space is organized as a stack of maps, leading to a "fully-convolutional" generator. The latent dimensionality is chosen to match the LCM model. Here, we provide evidence that using the original GLO implementation with a vectorial latent space, followed by a fully-connected layer, gives worse results. In particular, we have tried different dimensionalities of the latent space (up to 8192, after which we ran out of memory due to the size of the generator). The results for vector-space GLO in comparison with the GLO baseline used in the main text are shown in Figure 9 and Table 3. The vector-based GLO model, despite being trained with latent vectors of relatively high dimensionality, clearly underfits. <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 8: Image inpainting on CelebA with \(95\%\) of randomly chosen pixels missing. </center> Table 3: Training losses of different GLO variants suggesting the underfitting experienced by the vectorial GLO variants. <table><tr><td>Latent Space Type</td><td>Latent Space Dimension</td><td>Reconstruction Loss (train)</td></tr><tr><td>3D Map</td><td>24576</td><td>0.0113</td></tr><tr><td>Vector</td><td>8192</td><td>0.0144</td></tr><tr><td>Vector</td><td>4096</td><td>0.0171</td></tr><tr><td>Vector</td><td>2048</td><td>0.0202</td></tr></table> <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 9: Image inpainting using GLO models with latent spaces of different dimension and structure.
The GLO baseline from the main text achieves the best fit to the known pixels and arguably the best inpaintings of the unknown pixels. </center> <--- Page Split ---> Table 4: Reconstruction train losses with different weight penalties using the WGAN-GP. <table><tr><td>L2-penalty Weight</td><td>Reconstruction Loss</td></tr><tr><td>1</td><td>0.53</td></tr><tr><td>1e-3</td><td>0.3178</td></tr><tr><td>1e-5</td><td>0.2176</td></tr><tr><td>0</td><td>0.1893</td></tr></table> ![](images/14_0.jpg) <center>Figure 10: Image reconstruction using the WGAN-GP with gradually increasing penalties on the norm of the latent representation \(\mathbf{z}\), as justified by the probabilistic model behind GANs. Increasing the weight of this penalty (shown above) leads to worse underfitting without improving the quality of the reconstruction. Therefore, the comparisons in the main text use the variant without this penalty. </center> ## C LATENT SPACE PRIOR FOR GANs Most GAN implementations (including ours) use a Gaussian prior when sampling in the latent space. In principle, such a prior should be imposed during the restoration process (in the form of an additional term penalizing the squared norm of \(\mathbf{z}\) ). We do not, however, impose such a prior in the comparisons in the main text, since it makes the underfitting problem of GANs even worse. In Table 4 we demonstrate that the fitting error for the images from the train set indeed gets worse as the penalty weight is increased. In Figure 10, this effect is demonstrated qualitatively. ## D ARCHITECTURE DETAILS The architecture details for the components of the LCM model are as follows: - Generator Network \(g_{\theta}\) : The generator network \(g_{\theta}\) has an hourglass architecture for all three datasets. In CelebA the map size varies as follows: \(32 \times 32 \rightarrow 4 \times 4 \rightarrow 128 \times 128\) and the generator has a total of 38M parameters. In Bedrooms the map size varies as: \(64 \times 64 \rightarrow 4 \times 4 \rightarrow 256 \times 256\) and the generator has a total of 30M parameters. In CelebA-HQ the map size varies as \(256 \times 256 \rightarrow 32 \times 32 \rightarrow 1024 \times 1024\) and the generator has a total of 40M parameters. All the generator networks contain two skip connections and have batch norm and a LeakyReLU non-linearity after every convolutional layer. <--- Page Split ---> - Latent Network \(f_{\phi_{i}}\) : The latent network used for CelebA consists of 4 convolutional layers with no padding. The latent networks used for Bedrooms and CelebA-HQ consist of 5 and 7 convolutional layers, respectively, with no padding. The code of our implementation is available at the project website. ## E TRAIN/TEST LOSSES For the sake of completeness we provide the losses of the LCM and GLO models on the training and test sets. We additionally provide the loss when the LCM is optimized over the z-space (i.e. the output of \(f_{\phi}\) ) instead of the parameters of \(f_{\phi}\) (the row "LCM Z-Space"). In general, the full LCM model has a higher loss on both the train and test sets, as it is more constrained than the other two methods. The additional constraints, however, allow the LCM model to perform better at image reconstruction tasks. Table 5: Reconstruction loss as measured by the Lap-L1 loss + MSE on the training and test sets.
LCM has a higher reconstruction error because, while LCM and GLO have the same latent space dimensionality, LCM additionally imposes the convolutional manifold constraint on the latent space, which is absent in GLO. <table><tr><td>Model Name</td><td>Train Set</td><td>Test Set</td></tr><tr><td>LCM (Ours)</td><td>0.01306</td><td>0.01345</td></tr><tr><td>GLO</td><td>0.01007</td><td>0.01033</td></tr><tr><td>LCM Z-Space</td><td>0.01205</td><td>0.01235</td></tr></table> <--- Page Split ---> ## F MORE RESULTS ON BEDROOMS In Figure 11, we provide additional inpainting and superresolution results on the Bedrooms dataset for the compared methods. ![](images/16_0.jpg) <center>Figure 11: Additional qualitative comparisons on the Bedrooms dataset (see main text for discussion). </center> <--- Page Split ---> ## G INTERPOLATIONS IN THE LATENT SPACE In this section we show the results of linear interpolations in the latent spaces of the convolutional GLO and the LCM, and compare them to a linear cross-fade performed in the image space. We first find the best-fitting latent parameters (we optimize over \(\phi\) for LCM and over \(z\) for convolutional GLO) for the source and target images and then perform linear interpolation between them. As can be seen in Figure 12, interpolations in the LCM latent space appear smoother and considerably more faithful to the training data distribution than interpolations in the convolutional GLO latent space. ![](images/17_0.jpg) <center>Figure 12: Interpolations in the latent space of the LCM model (top row) and the convolutional GLO model (middle row). For reference, we also provide a linear cross-fade in the image pixel space in the bottom row. In the case of our model, the interpolation is performed between \(\phi_{1}\) and \(\phi_{2}\) , i.e. along the convolutional manifold. Arguably, LCM interpolations are more plausible, with faces rotating smoothly and with more plausible details (e.g. noses). Generally, there are noticeably fewer "double-vision" artefacts. Electronic zoom-in recommended. </center> <--- Page Split ---> ## H JPEG IMAGE RESTORATION In this section we perform JPEG image restoration using a squared-error negative log-likelihood as the loss. As in the case of inpainting, super-resolution, and colorization, we perform the optimization over \(\phi\) , keeping the generator fixed. The results in Figure 13 suggest that LCMs can be used to restore images even when the application-specific likelihood function is unknown or hard to model. ![](images/18_0.jpg) <center>Figure 13: Image restoration from heavy JPEG compression. Left – the input, middle – restored, right – ground truth. Rather than modeling the JPEG degradation with a specific likelihood function, we used a simple quadratic (log-)likelihood potential (corresponding to Gaussian noise corruption). </center> <--- Page Split ---> ## I UNCONDITIONAL IMAGE GENERATION In this section, we show the results of unconditional sampling from the LCM latent space. A random subset of \(m = 30k\) trained latent ConvNet parameter vectors \(\{\phi_{1},\ldots ,\phi_{m}\}\) is first mapped to a 512-dimensional space using PCA. We then fit a GMM with 3 components and a full covariance matrix on these 512-dim vectors and sample from it. Figure 14 shows the results of the sampling procedure. ![](images/19_0.jpg) <center>Figure 14: Unconditional Image Generation. We first project the latent parameters, the \(\phi\) 's, to a lower-dimensional space using PCA and then sample from it.
The details are given in the text. </center> <--- Page Split --->
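The sampling procedure of Appendix I maps naturally onto standard scikit-learn components. Below is a minimal sketch under our own naming conventions; `phis` is assumed to be an array of flattened latent ConvNet parameter vectors, and the final decoding step \(x = g_{\theta}(f_{\phi}(\mathbf{s}))\) is left outside the snippet.

```python
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def fit_phi_sampler(phis, n_components=3, dim=512, seed=0):
    # phis: (m, N_phi) array of flattened latent ConvNet parameter vectors.
    pca = PCA(n_components=dim, random_state=seed).fit(phis)
    gmm = GaussianMixture(n_components=n_components, covariance_type='full',
                          random_state=seed).fit(pca.transform(phis))
    return pca, gmm

def sample_phis(pca, gmm, n):
    z, _ = gmm.sample(n)             # draw in the 512-dim PCA space
    return pca.inverse_transform(z)  # back to latent ConvNet parameters

# Each sampled parameter vector phi is then decoded as x = g_theta(f_phi(s)).
```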
## ABSTRACT We present a new latent model of natural images that can be learned on large-scale datasets. The learning process provides a latent embedding for every image in the training dataset, as well as a deep convolutional network that maps the latent space to the image space. After training, the new model provides a strong and universal image prior for a variety of image restoration tasks such as large-hole inpainting, superresolution, and colorization. To model high-resolution natural images, our approach uses latent spaces of very high dimensionality (one to two orders of magnitude higher than previous latent image models). To tackle this high dimensionality, we use latent spaces with a special manifold structure (convolutional manifolds) parameterized by a ConvNet of a certain architecture. In the experiments, we compare the learned latent models with latent models learned by autoencoders, advanced variants of generative adversarial networks, and a strong baseline system using a simpler parameterization of the latent space. Our model outperforms the competing approaches over a range of restoration tasks. ## 1 INTRODUCTION Learning good image priors is one of the core problems of computer vision and machine learning. One promising approach to obtaining such priors is to learn a deep latent model, where the set of natural images is parameterized by a certain simple-structured set or probabilistic distribution, whereas the complexity of natural images is tackled by a deep ConvNet (often called a generator or a decoder) that maps from the latent space into the space of images. The best known examples are generative adversarial networks (GANs) (Goodfellow et al., 2014) and autoencoders (Goodfellow et al., 2016). Given a good deep latent model, virtually any image restoration task can be solved by finding a latent representation that best corresponds to the image evidence (e.g. the known pixels of an occluded image or a low-resolution image). The attractiveness of such an approach lies in the universality of the learned image prior. Indeed, applying the model to a new restoration task can be performed by simply changing the likelihood objective. The same latent model can therefore be reused for multiple tasks, and the learning process need not know the image degradation process in advance. This is in contrast to task-specific approaches that usually train deep feed-forward ConvNets for individual tasks, and which have a limited ability to generalize across tasks (e.g. a feed-forward network trained for denoising cannot perform large-hole inpainting and vice versa). At the moment, such image restoration approaches based on latent models are limited to low-resolution images. E.g. (Yeh et al., 2017) showed how a latent model trained with a GAN can be used to perform inpainting of tightly-cropped \(64 \times 64\) face images. Below, we show that such models trained with GANs cannot generalize to higher resolutions (even though GAN-based systems are now able to obtain high-quality samples at high resolutions (Karras et al., 2018)). We argue that it is the limited dimensionality of the latent space in GANs and other existing latent models that precludes them from spanning the space of high-resolution natural images. To scale up latent modeling to high-resolution images, we consider latent models with tens of thousands of latent dimensions (as compared to the few hundred latent dimensions in existing works).
We show that training such latent models is possible using direct optimization (Bojanowski et al., 2018) and that it leads to good image priors that can be used across a broad variety of reconstruction tasks. In previous models, the latent space has a simple structure such as a sphere or a box in a Euclidean space, or a full Euclidean space with a Gaussian prior. Such a choice, however, is not viable in our <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Restorations using the same Latent Convolutional Model (images 2,4,6) for different image degradations (images 1,3,5). At training time, our approach builds a latent model of non-degraded images, and at test time the restoration process simply finds a latent representation that maximizes the likelihood of the corrupted image and outputs a corresponding non-degraded image as a restoration result. </center> case, as vectors with tens of thousands of dimensions cannot be easily used as inputs to a generator. Therefore, we consider two alternative parameterizations of the latent space. Firstly, as a baseline, we consider latent spaces parameterized by image stacks (three-dimensional tensors), which allows "fully-convolutional" generators with a reasonable number of parameters. Our full system uses a more sophisticated parameterization of the latent space, which we call a convolutional manifold, where the elements of the manifold correspond to the parameter vectors of a separate ConvNet. Such indirect parameterization of images and image stacks has recently been shown to impose a certain prior (Ulyanov et al., 2018), which is beneficial for the restoration of natural images. In our case, we show that a similar prior can be used with success to parameterize high-dimensional latent spaces. To sum up, our contributions are as follows. Firstly, we consider the training of deep latent image models with latent dimensionality that is much higher than in previous works, and demonstrate that the resulting models provide universal (w.r.t. restoration tasks) image priors. Secondly, we suggest and investigate the convolutional parameterization for the latent spaces of such models, and show the benefits of this parameterization. Our experiments are performed on the CelebA (Liu et al., 2015) (128x128 resolution), SUN Bedrooms (Yu et al., 2015) (256x256 resolution), and CelebA-HQ (Karras et al., 2018) (1024x1024 resolution) datasets, and we demonstrate that the latent models, once trained, can be applied to large-hole inpainting, superresolution of very small images, and colorization tasks, outperforming other latent models in our comparisons. To the best of our knowledge, we are the first to demonstrate how "direct" latent modeling of natural images without extra components can be used to solve image restoration problems at these resolutions (Figure 1). Other related work. Deep latent models follow a long line of works on latent image models that goes back at least to the eigenfaces approach (Sirovich & Kirby, 1987). In terms of restoration, a competing and more popular approach is feed-forward networks trained for specific restoration tasks, which have seen rapid progress recently. Our approach does not quite match the quality of e.g. (Iizuka et al., 2017), which is designed and trained specifically for the inpainting task, or the quality of e.g. (Yu & Porikli, 2016), which is designed and trained specifically for the face superresolution task.
Yet the models trained within our approach (like other latent models) are universal, as they can handle degradations unanticipated at training time. Our work is also related to pre-deep-learning ("shallow") methods that learn priors on (potentially overlapping) image patches using maximum-likelihood-type objectives such as (Roth & Black, 2005; Karklin & Lewicki, 2009; Zoran & Weiss, 2011). The use of multiple layers in our method allows us to capture much longer correlations. As a result, our method can successfully handle restoration tasks that require exploiting these correlations, such as large-hole inpainting. ## 2 METHOD Let \(\{\mathbf{x}_1, \mathbf{x}_2, \ldots , \mathbf{x}_N\}\) be a set of training images, considered to be samples from the distribution \(X\) of images in the space \(\mathcal{X}\) of images of a certain size that need to be modeled. In latent modeling, we introduce a different space \(\mathcal{Z}\) and a certain distribution \(Z\) in that space that is used to re-parameterize \(\mathcal{X}\) . In previous works, \(\mathcal{Z}\) is usually chosen to be a Euclidean space with a few dozen to a few hundred dimensions, while our choice for \(\mathcal{Z}\) is discussed further below. <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 2: The Latent Convolutional Model incorporates two sequential ConvNets. The smaller ConvNet \(f\) (red) is fitted to each training image and is effectively used to parameterize the latent manifold. The bigger ConvNet \(g\) (magenta) is used as a generator, and its parameters are fitted to all training data. The input \(s\) to the pipeline is fixed to random noise and not updated during training. </center> The deep latent modeling of images implies learning the generator network \(g_{\theta}\) with learnable parameters \(\theta\) , which usually has a convolutional architecture. The generator network maps from \(\mathcal{Z}\) to \(\mathcal{X}\) and in particular is trained so that \(g_{\theta}(Z) \approx X\) . Achieving the latter condition is extremely hard, and there are several approaches that can be used. Thus, generative adversarial networks (GANs) (Goodfellow et al., 2014) train the generator network in parallel with a separate discriminator network that in some variants of GANs serves as an approximate ratio estimator between \(X\) and \(X + g_{\theta}(Z)\) over points in \(\mathcal{X}\) . Alternatively, autoencoders (Goodfellow et al., 2016) and their variational counterparts (Kingma & Welling, 2014) train the generator in parallel with an encoder operating in the reverse direction, resulting in a more complex distribution \(Z\) . Of these two approaches, only GANs are known to be capable of synthesizing high-resolution images, although such an ability comes with additional tricks and modifications of the learning formulation (Arjovsky et al., 2017; Karras et al., 2018). In this work, we start with a simpler approach to deep latent modeling (Bojanowski et al., 2018) known as the GLO model. The GLO model optimizes the parameters of the generator network in parallel with the explicit embeddings of the training examples \(\{\mathbf{z}_1, \mathbf{z}_2, \ldots , \mathbf{z}_N\}\) , such that \(g_{\theta}(\mathbf{z}_i) \approx \mathbf{x}_i\) by the end of the optimization.
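To make the GLO-style optimization concrete, the following is a minimal PyTorch sketch of this joint fitting of the generator parameters \(\theta\) and the explicit per-image codes \(\mathbf{z}_i\); it is our own illustration rather than the reference implementation, and it uses a plain MSE loss in place of the Lap-L1 objective introduced below.

```python
import torch
import torch.nn.functional as F

def train_glo(g, images, z_shape, n_epochs=100, lr_theta=1.0, lr_z=10.0):
    # GLO-style fitting: one explicit latent code z_i per training image,
    # optimized jointly with the generator weights so that g(z_i) ~ x_i.
    zs = torch.randn(len(images), *z_shape).requires_grad_(True)
    opt = torch.optim.SGD([{'params': g.parameters(), 'lr': lr_theta},
                           {'params': [zs], 'lr': lr_z}])
    for _ in range(n_epochs):
        for i, x in enumerate(images):  # single-image "batches" for brevity
            opt.zero_grad()
            loss = F.mse_loss(g(zs[i:i + 1]), x.unsqueeze(0))
            loss.backward()
            opt.step()
    return zs  # the learned embeddings z_1, ..., z_N
```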
Our approach differs from and expands on (Bojanowski et al., 2018) in three ways: (i) we consider a much higher dimensionality of the latent space, (ii) we use an indirect parameterization of the latent space discussed further below, (iii) we demonstrate the applicability of the resulting model to a variety of image restoration tasks. Scaling up latent modeling. The relatively low-dimensional latent models of natural images presented in previous works are capable of producing visually compelling image samples from the distribution (Karras et al., 2018), but are not actually capable of matching or covering a rather high-dimensional distribution \(X\) . E.g. in our experiments, none of the GAN models were capable of reconstructing most samples \(\mathbf{x}\) from the hold-out set (or even from the training set; this observation is consistent with (Bojanowski et al., 2018) and also with (Zhu et al., 2016)). Being unable to reconstruct uncorrupted samples clearly suggests that the learned models are not suitable for restoring corrupted samples. On the other hand, autoencoders and the related GLO latent model (Bojanowski et al., 2018) were able to achieve better reconstructions than GANs on the hold-out sets, yet produce distinctly blurry reconstructions (even on the training set), suggesting strong underfitting. We posit that existing deep latent models are limited by the dimensionality of the latent space that they consider, and aim to scale up this dimensionality significantly. Simply scaling up the latent dimensionality to a few tens of thousands of dimensions is not easily feasible, as e.g. the generator network has to work with such a vector as an input, which would make the first fully-connected layer excessively large with hundreds of millions of parameters<sup>1</sup>. To achieve a tractable size of the generator, one can consider latent elements \(\mathbf{z}\) that have a three-dimensional tensor structure, i.e. that are stacks of 2D image maps. Such a choice of structure is very <--- Page Split ---> natural for convolutional architectures, and allows training "fully-convolutional" generators with the first layer being a standard convolutional operation. The downside of this choice, as we shall see, is that it allows only limited coordination between distant parts of the images \(\mathbf{x} = g_{\theta}(\mathbf{z})\) produced by the generator. This drawback is avoided when the latent space is parameterized using latent convolutional manifolds, as described next. Latent convolutional manifolds. To impose a more appropriate structure on the latent space, we structure these spaces as convolutional manifolds, defined as follows. Let \(\mathbf{s}\) be a stack of maps of size \(W_{s} \times H_{s} \times C_{s}\) and let \(\{f_{\phi} \mid \phi \in \Phi \}\) be a set of convolutional networks, all sharing the same architecture \(f\) , that transform \(\mathbf{s}\) into maps of size \(W_{z} \times H_{z} \times C_{z}\) . A certain parameter vector \(\phi \in \Phi\) thus defines a certain convolutional network \(f_{\phi}\) . Then, let \(\mathbf{z}(\phi) = f_{\phi}(\mathbf{s})\) be an element in the space of \((W_{z} \times H_{z} \times C_{z})\) -dimensional maps. Various choices of \(\phi\) then span a manifold embedded in this space, and we refer to it as the convolutional manifold.
A convolutional manifold \(\mathbf{C}_{f,\mathbf{s}}\) is thus defined by the ConvNet architecture \(f\) as well as by the choice of the input \(\mathbf{s}\) (which in our experiments is always chosen to be filled with uniform random noise). Additionally, we restrict the elements of the vectors \(\phi\) to lie within the \([- B; B]\) range. Formally, the convolutional manifold is defined as the following set: \[\mathbf{C}_{f,\mathbf{s}} = \{\mathbf{z} \mid \mathbf{z} = f_{\phi}(\mathbf{s}), \phi \in \Phi \}, \quad \Phi = [- B; B]^{N_{\phi}}, \quad (1)\] where \(\phi\) serves as a natural parameterization and \(N_{\phi}\) is the number of network parameters. Below, we refer to \(f\) as the latent ConvNet, to disambiguate it from the generator \(g\) , which also has a convolutional structure. The idea of the convolutional manifold is inspired by the recent work on deep image priors (Ulyanov et al., 2018). While they effectively use convolutional manifolds to model natural images directly, in our case, we use them to model the latent space of the generator network, resulting in a fully-fledged learnable latent image model (whereas the model in (Ulyanov et al., 2018) cannot be learned on a dataset of images). The work (Ulyanov et al., 2018) demonstrates that the regularization imposed by the structure of a very high-dimensional convolutional manifold is beneficial when modeling natural images. Our intuition here is that a similar regularization would be beneficial when learning high-dimensional latent spaces. As our experiments below reveal, this intuition holds true. Learning formulation. Learning the deep latent model (Figure 2) in our framework then amounts to the following optimization task. Given the training examples \(\{\mathbf{x}_{1}, \mathbf{x}_{2}, \ldots , \mathbf{x}_{N}\}\) , the architecture \(f\) of the convolutional manifold, and the architecture \(g\) of the generator network, we seek the set of latent ConvNet parameter vectors \(\{\phi_{1}, \phi_{2}, \ldots , \phi_{N}\}\) and the parameters of the generator network \(\theta\) that minimize the following objective: \[L(\phi_{1}, \phi_{2}, \ldots , \phi_{N}, \theta) = \frac{1}{N} \sum_{i = 1}^{N} \| g_{\theta}(f_{\phi_{i}}(\mathbf{s})) - \mathbf{x}_{i} \| , \quad (2)\] with additional box constraints \(\phi_{i}^{j} \in [- 0.01; 0.01]\) and \(\mathbf{s}\) being a random set of image maps filled with uniform noise. Following (Bojanowski et al., 2018), the norm in (2) is taken to be the Laplacian-L1: \(\| \mathbf{x}_{1} - \mathbf{x}_{2}\|_{\mathrm{Lap - L1}} = \sum_{j} 2^{- 2j} |L^{j}(\mathbf{x}_{1} - \mathbf{x}_{2})|_{1}\) , where \(L^{j}\) is the \(j\) th level of the Laplacian image pyramid (Burt & Adelson, 1983). We have also found that adding an extra MSE loss term to the Lap-L1 loss term with a weight of 1.0 speeds up convergence of the models without affecting the results much. The optimization (2) is performed using stochastic gradient descent. As an outcome of the optimization, each training example \(\mathbf{x}_{i}\) gets a representation \(\mathbf{z}_{i} = f_{\phi_{i}}(\mathbf{s})\) on the convolutional manifold \(\mathbf{C}_{f,\mathbf{s}}\) . Importantly, the elements of the convolutional manifold then define a set of images in the image space (which is the image of the convolutional manifold under the learned generator): \[\mathbf{I}_{f,\mathbf{s},\theta} = \{\mathbf{x} \mid \mathbf{x} = g_{\theta}(f_{\phi}(\mathbf{s})), \phi \in \Phi \} . \quad (3)\] <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 3: Results (perceptual metrics – lower is better – and user preferences) for the two datasets (CelebA – left, Bedrooms – right) and three tasks (inpainting, super-resolution, colorization). For the colorization task the perceptual metric is inadequate, as the grayscale image has the lowest error, but it is shown for completeness. </center> Table 1: MSE loss on the restored images with respect to the ground truth. For inpainting, the MSE was calculated only over the inpainted region of the images. <table><tr><td rowspan="2"></td><td colspan="4">CelebA</td><td colspan="4">LSUN-Bedrooms</td></tr><tr><td>LCM</td><td>GLO</td><td>DIP</td><td>AE</td><td>WGAN</td><td>LCM</td><td>GLO</td><td>DIP</td></tr><tr><td>Inpainting</td><td>0.0034</td><td>0.0038</td><td>0.0091</td><td>0.0065</td><td>0.0344</td><td>0.0065</td><td>0.0085</td><td>0.0063</td></tr><tr><td>Super-res</td><td>0.0061</td><td>0.0063</td><td>0.0052</td><td>0.0083</td><td>0.0446</td><td>0.0071</td><td>0.0069</td><td>0.0057</td></tr><tr><td>Colorization</td><td>0.0071</td><td>0.0069</td><td>0.0136</td><td>0.0194</td><td>0.0373</td><td>0.0066</td><td>0.0075</td><td>0.0696</td></tr></table> While not all elements of the manifold \(\mathbf{I}_{f,\mathbf{s},\theta}\) will correspond to natural images from the distribution \(X\) , we have found that, with a few thousand dimensions, the resulting manifolds can cover the support of \(X\) rather well. I.e. each sample from the image distribution can be approximated by an element of \(\mathbf{I}_{f,\mathbf{s},\theta}\) with a low approximation error. This property can be used to perform all kinds of image restoration tasks. Image restoration using learned latent models. We now describe how the learned latent model can be used to perform the restoration of an unknown image \(\mathbf{x}_0\) from the distribution \(X\) , given some evidence \(\mathbf{y}\) . Depending on the degradation process, the evidence \(\mathbf{y}\) can be the image \(\mathbf{x}_0\) with masked values (inpainting task), a low-resolution version of \(\mathbf{x}_0\) (superresolution task), a grayscale version of \(\mathbf{x}_0\) (colorization task), a noisy version of \(\mathbf{x}_0\) (denoising task), certain statistics of \(\mathbf{x}_0\) computed e.g. using a deep network (feature inversion task), etc. We further assume that the degradation process is described by the objective \(E(\mathbf{x}|\mathbf{y})\) , which can be set to the negative log-likelihood \(E(\mathbf{x}|\mathbf{y}) = -\log p(\mathbf{y}|\mathbf{x})\) of observing \(\mathbf{y}\) as a result of the degradation of \(\mathbf{x}\) . E.g. for the inpainting task, one can use \(E(\mathbf{x}|\mathbf{y}) = \| (\mathbf{x} - \mathbf{y})\odot \mathbf{m}\|\) , where \(\mathbf{m}\) is the 0-1 mask of known pixels and \(\odot\) denotes the element-wise product. For the superresolution task, the restoration objective is naturally defined as \(E(\mathbf{x}|\mathbf{y}) = \| \downarrow (\mathbf{x}) - \mathbf{y}\|\) , where \(\downarrow (\cdot)\) is an image downsampling operator (we use Lanczos in the experiments) and \(\mathbf{y}\) is the low-resolution version of the image. For the colorization task, the objective is defined as \(E(\mathbf{x}|\mathbf{y}) = \| \mathrm{gray}(\mathbf{x}) - \mathbf{y}\|\) , where \(\mathrm{gray}(\cdot)\) denotes a projection from RGB to grayscale images (we use a simple averaging of the three color channels in the experiments) and \(\mathbf{y}\) is the grayscale version of the image.
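These three restoration energies are straightforward to write down. The sketch below is our own minimal rendering of them; note that bilinear downsampling stands in for the Lanczos operator used in the experiments.

```python
import torch.nn.functional as F

# Inpainting: penalize the mismatch on the known pixels only
# (m is the 0-1 mask of known pixels, broadcast over channels).
def e_inpaint(x, y, m):
    return ((x - y) * m).norm()

# Super-resolution: compare a downsampled x to the low-resolution
# evidence y (bilinear here; the experiments use Lanczos).
def e_superres(x, y, factor=8):
    x_small = F.interpolate(x, scale_factor=1.0 / factor,
                            mode='bilinear', align_corners=False)
    return (x_small - y).norm()

# Colorization: average the RGB channels and compare to the gray evidence y.
def e_colorize(x, y):
    return (x.mean(dim=1, keepdim=True) - y).norm()
```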
<--- Page Split ---> ![](images/5_0.jpg) <center>Figure 4: Qualitative comparison on CelebA (see the text for discussion). </center> Using the learned latent model as a prior, the following estimation combining the learned prior and the provided image evidence is performed: \[\hat{\phi} = \arg \min_{\phi}E(g_{\theta}(f_{\phi}(\mathbf{s}))\mid \mathbf{y}),\qquad \hat{\mathbf{x}} = g_{\theta}(f_{\hat{\phi}}(\mathbf{s})). \quad (4)\] In other words, we simply estimate the element of the image manifold (3) that has the highest likelihood. The optimization is performed using stochastic gradient descent over the parameters \(\phi\) on the latent convolutional manifold. For the baseline models, which use a direct parameterization of the latent space, we perform an analogous estimation using optimization in the latent space: \[\hat{\mathbf{z}} = \arg \min_{\mathbf{z}}E(g_{\theta}(\mathbf{z})\mid \mathbf{y}),\qquad \hat{\mathbf{x}} = g_{\theta}(\hat{\mathbf{z}}). \quad (5)\] In the experiments, we compare the performance of our full model and several baseline models over a range of restoration tasks using formulations (4) and (5). ## 3 EXPERIMENTS Datasets. The experiments were conducted on three datasets. The CelebA dataset was obtained by taking the 150K images from (Liu et al., 2015) (cropped version) and resizing them from \(178 \times 218\) to \(128 \times 128\) . Note that, unlike most other works, we performed anisotropic rescaling rather than additional cropping, leading to a version of the dataset with larger background portions and higher variability (corresponding to a harder modeling task). The Bedrooms dataset from LSUN (Yu et al., 2015) is another popular dataset of images. We rescale all images to the \(256 \times 256\) size. Finally, the CelebA-HQ dataset from (Karras et al., 2018) consists of 30K \(1024 \times 1024\) images of faces. <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 5: Qualitative comparison on SUN Bedrooms for the tasks of inpainting (rows 1-2), superresolution (rows 3-4), colorization (rows 5-6). The LCM method performs better than most methods for the first two tasks. </center> Tasks. We compared the methods on three diverse tasks. For the inpainting task, we degraded the input images by masking the center part of the image ( \(50 \times 50\) for CelebA, \(100 \times 100\) for Bedrooms, \(400 \times 400\) for CelebA-HQ). For the superresolution task, we downsampled the images by a factor of eight. For the colorization task, we averaged the color channels, obtaining a gray version of the image. ### 3.1 EXPERIMENTS ON CELEBA AND BEDROOMS We have performed extensive comparisons with other latent models on the two datasets with smaller image size and lower training times (CelebA and Bedrooms). The following latent models were compared:
- Latent Convolutional Models (LCM, ours): Each \(f_{\phi_i}\) has 4 layers (CelebA), 5 layers (Bedrooms), or 7 layers (CelebA-HQ) and takes random uniform noise as input. The generator \(g_\theta\) has an hourglass architecture.
The latent dimensionality of the model was 24k for CelebA and 61k for Bedrooms.
- GLO: The baseline model discussed at the end of Section 2 and inspired by (Bojanowski et al., 2018), where the generator network has the same architecture as in LCM, but the <--- Page Split ---> latent space is parameterized by a set of maps. The latent dimensionality is the same as in LCM (and thus much higher than in (Bojanowski et al., 2018)). We also tried a variant reproduced exactly from (Bojanowski et al., 2018), with vectorial latent spaces that feed into a fully-connected layer (for dimensionalities ranging from 2048 to 8192 – see Appendix B), but invariably observed underfitting. Generally, we took extra care to find the optimal parameterization that would be most favourable to this baseline.
- DIP: The deep image prior-based restoration (Ulyanov et al., 2018). We use the architecture proposed by the authors in the paper. DIP can be regarded as an extreme version of our approach with the generator network being the identity. DIP fits 1M parameters to each image for inpainting and colorization and 2M parameters for super-resolution.
- GAN: For CelebA we train a WGAN-GP (Gulrajani et al., 2017) with a DCGAN-type generator and a latent space of dimensionality 256. For Bedrooms we use the pretrained Progressive GAN (PGAN) model with a latent space of dimensionality 512, published by the authors of (Karras et al., 2018). During restoration, we do not impose a prior on the norm of \(\mathbf{z}\) , since it worsens the underfitting problem of GANs (as demonstrated in Appendix C).
- AE: For CelebA we also included a standard autoencoder using the Lap-L1 and MSE reconstruction losses (latent dimensionality 1024). We also tried a variant with a convolutional higher-dimensional latent space, but observed very strong overfitting. The variational variant (latent dimensionality 1024) led to stronger underfitting than the non-variational variant. As the experiments on CelebA clearly showed strong underfitting, we did not include the AE in the comparison on the higher-resolution Bedrooms dataset.
For the Bedrooms dataset we restricted training to the first 200K training samples, except for DIP (which does not require training) and GAN (we used the Progressive GAN model trained on all 3M samples). All comparisons were performed on hold-out sets not used for training. Following (Bojanowski et al., 2018), we use plain SGD with a very high learning rate of 1.0 to train the LCM and of 10.0 to train the GLO models. The exact architectures are given in Appendix D. Metrics. We have used quantitative and user-study-based assessment of the results. For the quantitative measure, we have chosen the mean squared error (MSE) in pixel space, as well as the mean squared distance of the VGG16 features (Simonyan & Zisserman, 2015) between the original and the reconstructed images. Such perceptual metrics are known to be correlated with human judgement (Johnson et al., 2016; Zhang et al., 2018). We used the [relu1_2, relu2_2, relu3_3, relu4_3, relu5_3] layers, each contributing to the distance metric with equal weight. Generally, we observed that the relative performance of the methods was very similar for the MSE measure, for the individual VGG layers, and for the averaged VGG metric that we report here. When computing the loss for the inpainting task we only considered the positions corresponding to the masked part.
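The perceptual metric just described can be sketched as follows. The use of `torchvision`'s pretrained VGG16 (with ImageNet-normalized inputs) and the specific feature indices for relu1_2 through relu5_3 are our own assumptions about an otherwise standard construction.

```python
import torch
import torchvision

class VGG16Distance(torch.nn.Module):
    # Mean squared distance between VGG16 activations at relu1_2, relu2_2,
    # relu3_3, relu4_3 and relu5_3 (feature indices 3, 8, 15, 22, 29),
    # averaged with equal weights. Inputs are assumed ImageNet-normalized.
    LAYERS = (3, 8, 15, 22, 29)

    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg16(weights='IMAGENET1K_V1')
        self.features = vgg.features.eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, x1, x2):
        dist = 0.0
        for i, layer in enumerate(self.features):
            x1, x2 = layer(x1), layer(x2)
            if i in self.LAYERS:
                dist = dist + torch.nn.functional.mse_loss(x1, x2)
            if i == self.LAYERS[-1]:
                break
        return dist / len(self.LAYERS)
```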
Quantitative metrics, however, have limited relevance for tasks with highly multimodal conditional distributions, i.e. where two very different answers can be equally plausible; this is the case for all three tasks that we consider (e.g. there can be very different plausible colorizations of the same bedroom image). In this situation, human judgement of quality is perhaps the best measure of algorithm performance. To obtain such judgements, we performed a user study, in which we picked 10 random images for each of the two datasets and each of the three tasks. The results of all compared methods, alongside the degraded inputs, were shown to the participants (100 for CelebA, 38 for Bedrooms). For each example, each subject was asked to pick the best restoration variant (taking into account both realism and fidelity to the input). The results were presented in random order (shuffled independently for each example). We then report the percentage of user choices for each method for a given task on a given dataset, averaged over all subjects and all ten images. Results. The results of the comparison are summarized in Figure 3 and Table 1, with representative examples shown in Figure 4 and Figure 5. "Traditional" latent models (WGAN/PGAN and AE) performed poorly. In particular, GAN-based models produced results that were both unrealistic and a poor fit to the likelihood. Note that during fitting we did not impose the Gaussian prior on the latent space of GANs. Adding such a prior did not result in a considerable increase in realism and led to an even poorer fit to the evidence (see Appendix C). <--- Page Split ---> The DIP model did very well for inpainting and superresolution on the relatively unstructured Bedrooms dataset. It performed very poorly, however, on CelebA, due to its inability to learn face structure from data, and on the colorization task, due to its inability to learn about natural image colors. Except for Bedrooms inpainting, the new models with very large latent spaces produced results that were clearly favoured by the users. LCM performed better than GLO in all six user comparisons, while in terms of the perceptual metric the performance of LCM was also better than GLO for the inpainting and superresolution tasks. For the colorization task, the LCM is unequivocally better in terms of user preferences, and worse in terms of the perceptual metric. We note, however, that the perceptual metric is inadequate for the colorization task, as the original grayscale image scores better than the results of all evaluated methods. We therefore only provide the results in this metric for colorization for the sake of completeness (finding a good quantitative measure for the highly ambiguous colorization task is a well-known unsolved problem). Additional results on the CelebA and Bedrooms datasets are given in Appendices A, F, G.
Table 2: Metrics of optimization over the z-space, the convolutional manifold and the Progressive GAN (Karras et al., 2018) latent space <table><tr><td>Optimization Over</td><td>MSE (known pixels)</td><td>MSE (inpainted pixels)</td><td>Perceptual Metric</td></tr><tr><td>Convolutional Net Parameters</td><td>0.00307</td><td>0.00171</td><td>0.02381</td></tr><tr><td>Z-Space</td><td>0.00141</td><td>0.00854</td><td>0.02736</td></tr><tr><td>PGAN Latent Space</td><td>0.00477</td><td>0.00224</td><td>0.02546</td></tr></table> ![](images/8_0.jpg) <center>Figure 6: A comparison of optimization over the convolutional manifold (column "OptConv"), the z-space (column "OptZ") and the Progressive GAN (Karras et al., 2018) latent space (column "PGAN") on the CelebA-HQ dataset (Karras et al., 2018). </center> <--- Page Split ---> For CelebA-HQ, we limit the comparison of the LCM model to the pretrained Progressive GAN model (Karras et al., 2018) published by the authors, since proper tuning of the parameters of the other baselines would take too much time. On this dataset, LCM uses a latent space of 135k parameters. Additionally, we use CelebA-HQ to highlight the role of the convolutional manifold structure in the latent space. Recall that the use of the convolutional manifold parameterization is what distinguishes the LCM approach from the GLO baseline. The advantage of the new parameterization is highlighted by the experiments described above. One may wonder whether the convolutional manifold constraint is needed at test time, or whether the constraint can be omitted during the restoration process (i.e. whether (5) can be used instead of (4) with the generator network \(g\) trained with the constraint). Generally, we observed that the effect of the constraint at test time was minor on the CelebA and Bedrooms datasets, but very pronounced on the CelebA-HQ dataset (where the training set is much smaller and the resolution is much higher). In Figure 6 and Table 2, we provide a qualitative and quantitative comparison between the Progressive GAN model (Karras et al., 2018), the LCM model, and the same LCM model applied without the convolutional manifold constraint for the task of inpainting. The full LCM model with the convolutional manifold performed markedly better than the other two approaches. Progressive GAN severely underfit even the known pixels, despite the fact that the training set of (Karras et al., 2018) included the validation set (their model was trained on the full CelebA-HQ dataset). The unconstrained LCM overfit the known pixels while producing implausible inpaintings for the unknown ones. The full LCM model obtained a much better balance between fitting the known pixels and inpainting the unknown pixels. ## 4 CONCLUSION The results in this work suggest that high-dimensional latent spaces are necessary to get good image reconstructions on desired hold-out sets. Further, they show that parameterizing these spaces using ConvNets imposes additional structure that allows us to produce good image restorations for a wide variety of degradations and at relatively high resolutions. More generally, this method can easily be extended to more interesting parameterizations of the latent space, e.g. by interleaving the layers with image-specific and dataset-specific parameters. The proposed approach has several limitations. First, when trained on very large datasets, the LCM model takes a long time to converge.
For instance, training an LCM on 150k samples of CelebA at \(128 \times 128\) resolution takes about 14 GPU-days; the GLO model of the same latent dimensionality takes about 10 GPU-days. On the other hand, the universality of the models means that they only need to be trained once for a certain image type, and can be applied to any degradations after that. The second limitation is that both the LCM and GLO models require storing their latent representations in memory, which may pose a problem for large datasets and large latent spaces. Furthermore, we observe that even with the large latent dimensionalities used here, the models are unable to fit the training data perfectly and thus still underfit. Our model also assumes that the (log-)likelihood corresponding to the degradation process can be modeled and differentiated. Experiments suggest, however, that such modeling need not be very accurate; e.g. a simple quadratic log-likelihood can be used to restore JPEG-degraded images (Appendix H). Finally, our model requires lengthy optimization in latent space, rather than a feedforward pass, at test time. The number of iterations, however, can be drastically reduced using degradation-specific or universal feed-forward encoders from image space to the latent space that provide a reasonable starting point for the optimization. ## 5 ACKNOWLEDGEMENTS This work has been supported by the Ministry of Science of the Russian Federation (grant 14.756.31.0001). ShahRukh Athar is partially supported by NSF-CNS-1718014, NSF-IIS-1566248 and a gift from Adobe. <--- Page Split ---> ## A OTHER INPAINTING TASKS Further qualitative comparisons are performed on the CelebA dataset. In Figure 7, we show a comparison on the "extreme" task of half-image inpainting. Figure 8 gives a comparison for the task of inpainting where \(95\%\) of pixel values are occluded at random. In both cases, the LCM model achieves the best balance between fitting the known evidence and the inpainting quality of the unknown pixels. ## B VECTORIAL GLO RESULTS As a baseline in the main text, we have used the variant of the GLO model (Bojanowski et al., 2018) in which the latent space is organized as a stack of maps, leading to a "fully-convolutional" generator. The latent dimensionality is chosen to match the LCM model. Here, we provide evidence that using the original GLO implementation with a vectorial latent space, followed by a fully-connected layer, gives worse results. In particular, we have tried different dimensionalities of the latent space (up to 8192, after which we ran out of memory due to the size of the generator). The results for vector-space GLO in comparison with the GLO baseline used in the main text are shown in Figure 9 and Table 3. The vector-based GLO model, despite being trained with latent vectors of relatively high dimensionality, clearly underfits. <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 8: Image inpainting on CelebA with \(95\%\) of randomly chosen pixels missing. </center> Table 3: Training losses of different GLO variants suggesting the underfitting experienced by the vectorial GLO variants.
<table><tr><td>Latent Space Type</td><td>Latent Space Dimension</td><td>Reconstruction Loss (train)</td></tr><tr><td>3D Map</td><td>24576</td><td>0.0113</td></tr><tr><td>Vector</td><td>8192</td><td>0.0144</td></tr><tr><td>Vector</td><td>4096</td><td>0.0171</td></tr><tr><td>Vector</td><td>2048</td><td>0.0202</td></tr></table> <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 9: Image inpainting using GLO models with latent spaces of different dimension and structure. The GLO baseline from the main text achieves the best fit to the known pixels and arguably the best inpaintings of the unknown pixels. </center> <--- Page Split ---> Table 4: Reconstruction train losses with different weight penalties using the WGAN-GP. <table><tr><td>L2-penalty Weight</td><td>Reconstruction Loss</td></tr><tr><td>1</td><td>0.53</td></tr><tr><td>1e-3</td><td>0.3178</td></tr><tr><td>1e-5</td><td>0.2176</td></tr><tr><td>0</td><td>0.1893</td></tr></table> ![](images/14_0.jpg) <center>Figure 10: Image reconstruction using the WGAN-GP with gradually increasing penalties on the norm of the latent representation \(\mathbf{z}\), as justified by the probabilistic model behind GANs. Increasing the weight of this penalty (shown above) leads to worse underfitting without improving the quality of the reconstruction. Therefore, the comparisons in the main text use the variant without this penalty. </center> ## C LATENT SPACE PRIOR FOR GANs Most GAN implementations (including ours) use a Gaussian prior when sampling in the latent space. In principle, such a prior should be imposed during the restoration process (in the form of an additional term penalizing the squared norm of \(\mathbf{z}\) ). We do not, however, impose such a prior in the comparisons in the main text, since it makes the underfitting problem of GANs even worse. In Table 4 we demonstrate that the fitting error for the images from the train set indeed gets worse as the penalty weight is increased. In Figure 10, this effect is demonstrated qualitatively. ## D ARCHITECTURE DETAILS The architecture details for the components of the LCM model are as follows: - Generator Network \(g_{\theta}\) : The generator network \(g_{\theta}\) has an hourglass architecture for all three datasets. In CelebA the map size varies as follows: \(32 \times 32 \rightarrow 4 \times 4 \rightarrow 128 \times 128\) and the generator has a total of 38M parameters. In Bedrooms the map size varies as: \(64 \times 64 \rightarrow 4 \times 4 \rightarrow 256 \times 256\) and the generator has a total of 30M parameters. In CelebA-HQ the map size varies as \(256 \times 256 \rightarrow 32 \times 32 \rightarrow 1024 \times 1024\) and the generator has a total of 40M parameters. All the generator networks contain two skip connections and have batch norm and a LeakyReLU non-linearity after every convolutional layer. <--- Page Split ---> - Latent Network \(f_{\phi_{i}}\) : The latent network used for CelebA consists of 4 convolutional layers with no padding. The latent networks used for Bedrooms and CelebA-HQ consist of 5 and 7 convolutional layers, respectively, with no padding. The code of our implementation is available at the project website. ## E TRAIN/TEST LOSSES For the sake of completeness we provide the losses of the LCM and GLO models on the training and test sets. We additionally provide the loss when the LCM is optimized over the z-space (i.e. the output of \(f_{\phi}\) ) instead of the parameters of \(f_{\phi}\) (the row "LCM Z-Space").
In general, the full LCM model has a higher loss on both the train and test sets, as it is more constrained than the other two methods. The additional constraints, however, allow the LCM model to perform better at image reconstruction tasks. Table 5: Reconstruction loss as measured by the Lap-L1 loss + MSE on the training and test sets. LCM has a higher reconstruction error because, while LCM and GLO have the same latent space dimensionality, LCM additionally imposes the convolutional manifold constraint on the latent space, which is absent in GLO. <table><tr><td>Model Name</td><td>Train Set</td><td>Test Set</td></tr><tr><td>LCM (Ours)</td><td>0.01306</td><td>0.01345</td></tr><tr><td>GLO</td><td>0.01007</td><td>0.01033</td></tr><tr><td>LCM Z-Space</td><td>0.01205</td><td>0.01235</td></tr></table> <--- Page Split ---> ## F MORE RESULTS ON BEDROOMS In Figure 11, we provide additional inpainting and superresolution results on the Bedrooms dataset for the compared methods. ![](images/16_0.jpg) <center>Figure 11: Additional qualitative comparisons on the Bedrooms dataset (see main text for discussion). </center> <--- Page Split ---> ## G INTERPOLATIONS IN THE LATENT SPACE In this section we show the results of linear interpolations in the latent spaces of the convolutional GLO and the LCM, and compare them to a linear cross-fade performed in the image space. We first find the best-fitting latent parameters (we optimize over \(\phi\) for LCM and over \(z\) for convolutional GLO) for the source and target images and then perform linear interpolation between them. As can be seen in Figure 12, interpolations in the LCM latent space appear smoother and considerably more faithful to the training data distribution than interpolations in the convolutional GLO latent space. ![](images/17_0.jpg) <center>Figure 12: Interpolations in the latent space of the LCM model (top row) and the convolutional GLO model (middle row). For reference, we also provide a linear cross-fade in the image pixel space in the bottom row. In the case of our model, the interpolation is performed between \(\phi_{1}\) and \(\phi_{2}\) , i.e. along the convolutional manifold. Arguably, LCM interpolations are more plausible, with faces rotating smoothly and with more plausible details (e.g. noses). Generally, there are noticeably fewer "double-vision" artefacts. Electronic zoom-in recommended. </center> <--- Page Split ---> ## H JPEG IMAGE RESTORATION In this section we perform JPEG image restoration using a squared-error negative log-likelihood as the loss. As in the case of inpainting, super-resolution, and colorization, we perform the optimization over \(\phi\) , keeping the generator fixed. The results in Figure 13 suggest that LCMs can be used to restore images even when the application-specific likelihood function is unknown or hard to model. ![](images/18_0.jpg) <center>Figure 13: Image restoration from heavy JPEG compression. Left – the input, middle – restored, right – ground truth. Rather than modeling the JPEG degradation with a specific likelihood function, we used a simple quadratic (log-)likelihood potential (corresponding to Gaussian noise corruption). </center> <--- Page Split ---> ## I UNCONDITIONAL IMAGE GENERATION In this section, we show the results of unconditional sampling from the LCM latent space. A random subset of \(m = 30k\) trained latent ConvNet parameter vectors \(\{\phi_{1},\ldots ,\phi_{m}\}\) is first mapped to a 512-dimensional space using PCA.
We then fit a GMM with 3 components and a full covariance matrix on these 512-dim vectors and sample from it. Figure 14 shows the results of the sampling procedure. ![](images/19_0.jpg) <center>Figure 14: Unconditional Image Generation. We first project the latent parameters, the \(\phi\) 's, to a lower-dimensional space using PCA and then sample from it. The details are given in the text. </center> <--- Page Split --->
accept
Accept (Poster)
6.666667
ICLR_2019_paper_0993
iclr
2,019
# DISCRIMINATIVE OUT-OF-DISTRIBUTION DETECTION FOR SEMANTIC SEGMENTATION Anonymous authors Paper under double-blind review ## ABSTRACT Most classification and segmentation datasets assume a closed-world scenario in which predictions are expressed as a distribution over a predetermined set of visual classes. However, such an assumption implies unavoidable and often unnoticeable failures in the presence of out-of-distribution (OOD) input. These failures are bound to happen in most real-life applications, since current visual ontologies are far from comprehensive. We propose to address this issue by discriminative detection of OOD pixels in the input data. Different from recent approaches, we avoid making any decisions by observing only the training dataset of the primary model trained to solve the desired computer vision task. Instead, we train a dedicated OOD model which discriminates the primary training set from a much larger "background" dataset which approximates the variety of the visual world. We perform our experiments on high-resolution natural images in a dense prediction setup. We use several road-driving datasets as our training distribution, while we approximate the background distribution with the ILSVRC dataset. We evaluate our approach on the WildDash test set, which is currently the only public test dataset that includes out-of-distribution images. The obtained results show that the proposed approach succeeds in identifying out-of-distribution pixels while outperforming previous work by a wide margin. ## 1 INTRODUCTION As the scale and complexity of modern software libraries and tools continue to grow, code completion has become an essential feature in modern integrated development environments (IDEs). Development of deep convolutional models has resulted in tremendous advances in visual recognition. Recent semantic segmentation systems surpass \(80\%\) mIoU (Chen et al., 2017) on demanding natural datasets such as Pascal VOC 2012 (Everingham et al., 2015) or Cityscapes (Cordts et al., 2015). Such a performance level suggests a clear application potential in exciting areas such as road safety assessment or autonomous driving. Unfortunately, most existing semantic segmentation datasets assume closed-world evaluation (Scheirer et al., 2013), which means that they require predictions over a predetermined set of visual classes. Closed-world datasets are very useful for promoting research; however, they are poor proxies for real-life operation even in a very restricted scenario such as road driving. In fact, one can easily imagine many real-life driving scenes which give rise to image regions that cannot be recognized by learning on the Cityscapes ontology. Some of those regions may be projected from objects which are foreign to Cityscapes (e.g. road works, water, animals). Others may appear unrelated to Cityscapes due to particular configurations being absent from the training dataset (e.g. pedestrians lying on the ground, crashed cars, fallen trees). Finally, some regions may be poorly classified due to different environmental conditions, acquisition setup, or geographical location (Tsai et al., 2018). The simplest way to approach unrecognizable data is to improve the datasets. For instance, the Vistas dataset (Neuhold et al., 2017) proposes a richer ontology and addresses more factors of variation than Cityscapes. However, training on Vistas requires considerable computational resources, while still being unable to account for the full variety of the recent WildDash dataset (Zendel et al., 2018), as we show in the experiments.
Another way to approach this problem would be to design strategies for knowledge transfer between the training dataset and the test images (Tsai et al., 2018). However, this is unsatisfactory for many real-world applications where the same model should be directly applicable to a variety of environments. <--- Page Split ---> These examples emphasize the need to quantify model prediction uncertainty, especially if we wish to achieve reliable deployment in the real world. Uncertainty can be divided into two categories (Kendall & Gal, 2017). Aleatoric uncertainty is caused by noise and ambiguity inherent in the data and by limitations of the model; it cannot be reduced by supplying additional training data. For example, the quality of segmentation models on distant and small objects depends on the resolution at which inference is performed. On the other hand, epistemic uncertainty arises when the trained model is unable to produce the desired prediction given the particular training dataset. In other words, it occurs when the model receives the kind of data which was not seen during training. Epistemic uncertainty is therefore strongly related to the probability that the model operates on an out-of-distribution sample. Recent work in image-wide out-of-distribution detection (Kendall & Gal, 2017; Hendrycks & Gimpel, 2017; Liang et al., 2018) evaluates the prediction uncertainty by analyzing the model output. We find that these approaches perform poorly in dense prediction tasks due to the prominence of aleatoric uncertainty. This means that total uncertainty can be high even on in-distribution pixels (e.g. on pixels at semantic borders, or on very distant objects). A different approach attempts to detect out-of-distribution samples with GAN discriminators, whereby the GAN generator is used as a proxy for the out-of-distribution class (Lee et al., 2018; Sabokrou et al., 2018). However, these approaches do not scale easily to dense prediction in high-resolution images due to high computational complexity and large memory requirements. Therefore, in this paper we propose to detect out-of-distribution samples at the pixel level with a dedicated "OOD" model which complements the "primary" model trained for a specific vision task. We formulate the OOD model as dense binary classification between the training dataset and a much larger "background" dataset. The proposed formulation requires fewer computational resources than approaches with GAN-generated backgrounds, and is insensitive to the aleatoric uncertainty related to semantic segmentation. ## 2 RELATED WORK Detection of out-of-distribution (OOD) examples (together with the related fields of anomaly and novelty detection) has received a lot of attention in recent literature. Many approaches try to estimate uncertainty by analyzing the entropy of the predictions. The simplest approach is to express the prediction confidence as the probability of the winning class or, equivalently, the maximum softmax (max-softmax) activation (Hendrycks & Gimpel, 2017). The resulting approach achieves useful results in the image classification context, although max-softmax must be recalibrated (Guo et al., 2017) before being interpreted as \(P(\mathrm{inlier}|\mathbf{x})\). This result has been improved upon by the approach known as ODIN (Liang et al., 2018), which proposes to pre-process input images with a well-tempered anti-adversarial perturbation with respect to the winning class and to increase the softmax temperature. 
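As a concrete illustration of these baselines, a dense max-softmax score (with an optional ODIN-style temperature, omitting the input perturbation) can be sketched as follows; the tensor shapes and names are our assumptions rather than code from either paper.

```python
# Sketch of the dense max-softmax OOD criterion: `logits` is assumed to be an
# (N, C, H, W) output of the primary segmentation model. Higher scores mean
# "more likely OOD". T > 1 corresponds to ODIN-style temperature scaling; the
# anti-adversarial input perturbation is omitted for brevity.
import torch
import torch.nn.functional as F

def ood_score_max_softmax(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    probs = F.softmax(logits / T, dim=1)        # per-pixel class distribution
    max_softmax, _ = probs.max(dim=1)           # confidence of the winning class
    return 1.0 - max_softmax                    # (N, H, W) per-pixel OOD score
```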
Another line of research characterizes uncertainty by a separate head of the primary model which learns either prediction uncertainty (Kendall & Gal, 2017; Lakshminarayanan et al., 2017) or confidence (DeVries & Taylor, 2018). The predicted variance (or confidence) is further employed to discount the data loss while at the same time incurring a small penalty proportional to the uncertainty (or inversely proportional to the confidence). This way, the model is encouraged to learn to recognize hard examples if they are present in the training dataset. Unfortunately, such modelling is able to capture only aleatoric uncertainty (Kendall & Gal, 2017), since the data which would allow us to learn epistemic uncertainty is absent by definition. A principled information-theoretic approach for calculating epistemic uncertainty has been proposed by Smith & Gal (2018). They express epistemic uncertainty as the mutual information between the model parameters and the particular prediction. Intuitively, if our knowledge about the parameters increases a lot when the ground truth prediction becomes known, then the corresponding sample is likely to be out of distribution. The sought mutual information is quantified as the difference between the total prediction entropy and the marginalized prediction entropy over the parameter distribution. Both entropies are easily calculated with MC dropout. Unfortunately, our experiments along these lines resulted in poor OOD detection accuracy. This may be caused by MC dropout being an insufficiently accurate approximation of model sampling according to the parameter distribution; however, further work would be required in order to produce a more definitive answer. <--- Page Split ---> Lakshminarayanan et al. (2017) show that uncertainty can be more accurately recovered by replacing MC dropout with an ensemble of several independently trained models. However, this requires many models to reside in GPU memory during evaluation. Explicit ensembles are therefore not suited for systems which aim to perform dense prediction in real time. A principled algorithm for recognizing OOD samples would fit a generative model \(P_{\mathcal{D}}(\mathbf{x}|\theta)\) to the training dataset \(\mathcal{D} = \{\mathbf{x}_{i}\}\). Such a model would learn to evaluate the probability distribution of the training dataset at the given sample, which can be viewed as epistemic uncertainty. Unfortunately, evaluating a probability density function for high-dimensional data is very hard. Instead, Lee et al. (2018); Sabokrou et al. (2018) formulate OOD detection as binary classification where OOD samples are produced by a GAN generator (Goodfellow et al., 2014). Sabokrou et al. (2018) use an autoencoder in place of the generator, which is trained to denoise images as well as to trick the discriminator. During evaluation the autoencoder enhances inliers while distorting the outliers, making the two more separable. Lee et al. (2018) train a GAN to generate images on the borders of the distribution. In both of these approaches the discriminator can be used for OOD detection. Unfortunately, these approaches have only been shown to work on low-dimensional samples and simple image datasets. They do not scale well to pixel-level prediction in a highly complex domain such as traffic scenes, especially if we take into account the memory and time restrictions of real-world applications. While Lee et al. (2018) try to generate outliers, it may be simpler to just use a foreign dataset during training. 
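For completeness, the mutual-information criterion of Smith & Gal (2018) discussed above can be sketched with MC dropout as follows; the model, batch shapes, and the number of samples are illustrative assumptions.

```python
# Sketch of epistemic uncertainty as mutual information, estimated with
# MC dropout: the entropy of the mean prediction minus the mean entropy of
# the individual predictions. `model` is assumed to contain dropout layers
# that stay stochastic in train() mode; `x` is an (N, 3, H, W) batch.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_mutual_information(model, x, n_samples: int = 20):
    model.train()                                        # keep dropout active
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean_p = probs.mean(dim=0)                           # (N, C, H, W)
    total_entropy = -(mean_p * (mean_p + 1e-12).log()).sum(dim=1)
    expected_entropy = -(probs * (probs + 1e-12).log()).sum(dim=2).mean(dim=0)
    return total_entropy - expected_entropy              # (N, H, W), >= 0
```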
Kreso et al. (2018) show that out-of-distribution samples in traffic images can be detected by joint segmentation training on Cityscapes (road driving) and ScanNet (indoor). An out-of-distribution sample is signaled whenever the prediction favours a class which is foreign to the evaluation dataset. For instance, if a bathtub region is detected in an image acquired from an intelligent vehicle, then the pixels of that region are labeled as OOD. ## 3 THE PROPOSED DISCRIMINATIVE OOD DETECTION APPROACH We formulate out-of-distribution pixel detection as dense binary classification into two classes: OOD (out-of-distribution) and ID (in-distribution). As in (Lee et al., 2018; Sabokrou et al., 2018), we take ID training pixels from the semantic segmentation dataset used to train the primary semantic segmentation model (e.g. Vistas). Subsequently, we propose to take the OOD training pixels from a foreign dataset which approximates the entire diversity of the visual world. Since no existing public dataset is able to satisfy our requirements, in this paper we propose to use the ILSVRC dataset (Russakovsky et al., 2015) as the next-best solution for this purpose (cf. Figure 1). We think that this approximation is reasonable since the ILSVRC dataset has been successfully used in many knowledge transfer experiments (Oquab et al., 2014; Garcia-Gasulla et al., 2018). Furthermore, ILSVRC images contain a variety of content around the 1000 official classes. ![](images/2_0.jpg) <center>Figure 1: The proposed discriminative OOD detection approach for semantic segmentation. Training in-distribution (ID) images are taken from a diverse road driving dataset (e.g. Vistas). Training out-of-distribution (OOD) images are taken from ILSVRC, which is much more diverse than the ID images. Our model is composed of a feature extractor pretrained on ImageNet and the OOD detection head. The model transforms colour images \(\mathrm{H}\times \mathrm{W}\) into \(\mathrm{H} / 32\times \mathrm{W} / 32\) logits. The cross-entropy loss is applied to bilinearly upsampled logits at the input resolution. We train only the OOD head, DB4 and the preceding transition layer (light blue), while we freeze the remaining layers (dark blue). </center> <--- Page Split ---> We define the learning algorithm in a way that avoids ILSVRC images hijacking content from the primary training dataset. We observe two distinct problems: i) ILSVRC classes which overlap classes from the primary dataset, e.g. a car in an ILSVRC image labeled as car, and ii) primary dataset classes in the ILSVRC image background, e.g. a person playing a banjo in an ILSVRC image labeled as banjo. Currently, we address both problems by ensuring that the model is trained on in-distribution (ID) pixels as often as on OOD ones. Due to the diversity of the OOD class (ILSVRC), such training results in a bias towards the ID dataset. For example, there are around 10 car-like classes in ILSVRC; therefore, cars occur in only around 1/100 ILSVRC images. On the other hand, almost every image from any road-driving dataset is likely to contain at least one car. Hence, the proposed approach ensures that the model is more likely to classify car pixels as inliers (Cityscapes) than as outliers (ILSVRC). We consider several improvements to this approach in the appendix. The proposed OOD model is discriminative and fully convolutional, and we train it with the dense cross-entropy loss. 
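The proposed model can be summarized with the following sketch, based on our reading of Figure 1 and assuming torchvision's DenseNet-121 layer naming; it is not the authors' exact code.

```python
# Sketch of the discriminative OOD model: a DenseNet-121 feature extractor
# pretrained on ILSVRC, with only transition3, denseblock4 (and the final norm)
# left trainable, followed by a BN-ReLU-Conv head that outputs 2 logit maps.
# The logits are bilinearly upsampled to input resolution for the dense
# cross-entropy loss. Layer names follow torchvision; everything else is ours.
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class DiscriminativeOOD(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = torchvision.models.densenet121(pretrained=True).features
        for name, p in self.features.named_parameters():
            p.requires_grad = name.startswith(("transition3", "denseblock4", "norm5"))
        self.head = nn.Sequential(               # BN-ReLU-Conv -> 2 logit maps
            nn.BatchNorm2d(1024), nn.ReLU(inplace=True),
            nn.Conv2d(1024, 2, kernel_size=3, padding=1))

    def forward(self, x):
        h, w = x.shape[2:]
        logits = self.head(self.features(x))     # (N, 2, H/32, W/32)
        return F.interpolate(logits, (h, w), mode="bilinear", align_corners=False)
```

Training then amounts to applying `nn.CrossEntropyLoss()` between the upsampled logits and a per-pixel {ID, OOD} label map.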
During evaluation, we apply the trained model to test images in order to obtain dense logit maps for the two classes. These logits reflect the epistemic uncertainty of each individual pixel. OOD detection is finally performed by thresholding the probability of the inlier class. The proposed OOD model works well in tandem with ILSVRC because it can be warmed up with parameters pre-trained on the same dataset. This considerably speeds up the training since the optimization starts from a model which is already able to extract useful features, and only needs to learn to distinguish between ILSVRC and the in-distribution dataset. This approach can share the common feature extractor with the primary semantic segmentation model, which results in a smaller OOD-detection overhead than in GAN-based approaches (Lee et al., 2018; Sabokrou et al., 2018). ## 4 EXPERIMENTS Our experiments explore the accuracy of dense OOD detection in realistic natural images from several driving datasets. We train our models on different datasets and discuss the obtained performance on WildDash test and other datasets. The experiments compare our approach to several previous approaches from the literature, which we adapt for dense prediction as explained next. ### 4.1 ADAPTING IMAGE-WIDE APPROACHES FOR DENSE PREDICTION We consider three previous approaches which were originally developed for image-wide OOD detection and adapt them for dense prediction. For the max-softmax approach (Hendrycks & Gimpel, 2017), the task is trivial: it is enough to independently assess the prediction of the primary model in each image pixel. For the ODIN approach (Liang et al., 2018), we first perturb the input image in the direction which increases the probability of the winning class in each individual pixel (according to the primary model). Subsequently, we apply the softmax temperature and consider the max-softmax response as above. The perturbation magnitude \(\epsilon\) and the softmax temperature \(T\) are hyperparameters that need to be validated. Note that the dense ODIN perturbation implies complex pixel interactions which are absent in the image-wide case. Despite hyperparameter tuning, this achieved only a modest improvement over the max-softmax approach, so we do not present it in the tables. For trained confidence (DeVries & Taylor, 2018), we introduce a separate convolutional head to the primary model. The resulting confidence predictions diminish the loss of wrong predictions in the corresponding pixels while incurring a direct loss multiplied by a small constant. ### 4.2 DATASETS Cityscapes is a widely used dataset (Cordts et al., 2015) containing densely annotated images acquired from the driver's perspective during rides through different German cities. It is divided into training (2975 images), validation (500 images) and test (1525 images) subsets. Vistas (Neuhold et al., 2017) is larger and more diverse than Cityscapes, offering much greater variety with respect to locations, time of day, weather, and cameras. There are 18 000 training and 2 000 validation images. The WildDash dataset (Zendel et al., 2018) provides a benchmark for semantic segmentation and instance segmentation. It focuses on providing performance-decreasing images. <--- Page Split ---> These images are challenging due to the conditions and unusual locations in which they were taken, or because they contain various distortions. There are 70 validation and 156 test images. The test set contains 15 images which are marked as negatives. 
All pixels in these images are considered out-of-distribution in the context of semantic segmentation on road-driving datasets. These images contain noise, indoor scenes, and five artificially altered inlier images (see Figure 3). WildDash is compatible with the Cityscapes labeling policy (Cordts et al., 2015), but it also considers performance on negative images. Pixels in negative images are considered to be correctly classified if they are assigned a correct Cityscapes label (e.g. people in an indoor scene) or if they are assigned the void label (which means that they are detected as OOD samples). The official WildDash web site suggests that OOD pixels could be detected by thresholding the max-softmax value. The ILSVRC dataset (Russakovsky et al., 2015) includes a selection of 1000 ImageNet (Deng et al., 2009) classes. It contains 1 281 167 images with image-wide annotations and 544 546 images annotated with the bounding box of the object which defines the class. The training split contains over one million images, while the validation split contains 50 000 images. The Pascal VOC 2007 dataset contains 9 963 training and validation images with image-wide annotations into 20 classes. Pixel-level semantic segmentation ground truth is available for 632 images. ### 4.3 MODEL DETAILS We experiment with three different models: i) the primary model for semantic segmentation, ii) the primary model augmented with trained confidence (DeVries & Taylor, 2018), and iii) the proposed discriminative model. All three models are based on the DenseNet-121 architecture (Huang et al., 2017) (we assume the BC variant throughout the paper). DenseNet-121 contains 120 convolutional layers which can be considered as a feature extractor. These layers are organized into 4 dense blocks (DB_1 to DB_4) with 3 transition layers (T_1 to T_3) in between. In all of our models, the feature extractor is initialized with parameters pretrained on ILSVRC. We build our primary model by concatenating the upsampled output of DB_4 with the output of DB_2. This concatenation is routed to a spatial pyramid pooling (SPP) layer (He et al., 2014) which is followed by a BN-ReLU-Conv block (batch normalization, ReLU activation, 3x3 convolution) which outputs 19 feature maps with logits. The logits are normalized with softmax and then fed to the usual cross-entropy loss. We augment the primary model by introducing a new SPP layer and BN-ReLU-Conv block parallel to the SPP layer and BN-ReLU-Conv block of the primary model. The output of the confidence BN-ReLU-Conv block is concatenated with the segmentation maps. The confidence estimate is obtained by blending the resulting representation with a BN-ReLU-Conv block with sigmoid activation. We prevent the gradients from flowing into the segmentation maps to ensure that the segmentation head is trained only for segmentation and not for estimating confidence. The confidence map is then used to modulate the cross-entropy loss of the segmentation maps, while low confidence is penalized with a hyperparameter \(\lambda\). Our proposed discriminative OOD detector feeds the output of DB_4 to a BN-ReLU-Conv block with 2 feature maps corresponding to the logits for inliers and outliers. The logits are fed to softmax and then to the cross-entropy loss which encourages the model to classify the pixels according to the respective labels. We train only the head, DB_4 and T_3 in order to speed up learning and prevent overfitting, as shown in Figure 1. 
We assume that the initial DB_3 features are expressive enough due to ILSVRC pretraining, and that it suffices to fine-tune only DB_4 for discriminating road-driving scenes from the rest of the visual world. ### 4.4 TRAINING We train the primary model on Cityscapes train at half resolution. The training images are normalized with the Cityscapes train mean and variance. We optimize the model on entire images for 40 epochs without jittering, using the Adam optimizer, a decaying learning rate starting from 4e-4 and a batch size of 5. The learning rate for the pretrained DenseNet layers is four times smaller than the learning rate for the model head. The model achieves 70.1% mIoU on Cityscapes val and 23.6% mIoU on WildDash val. Due to the poor generalization on WildDash, we also trained a model on the combination of Cityscapes train and WildDash val. <--- Page Split ---> This model reaches \(70.2\%\) mIoU on the Cityscapes validation set. We train the augmented primary model (DeVries & Taylor, 2018) in the same way as the primary model. This model achieves \(71.1\%\) mIoU on Cityscapes val and \(25.7\%\) mIoU on WildDash val. We train our discriminative models for OOD detection in a similar way as the primary model. We use three different in-distribution (ID) datasets: Cityscapes, Cityscapes + WildDash val, and Vistas. WildDash val was added to Cityscapes because models trained on Cityscapes are prone to overfitting. In order to show that training on the WildDash validation subset can be avoided, we also show the results of a model instance trained on the Vistas dataset (Neuhold et al., 2017), which is much more diverse than Cityscapes. As explained earlier, we always use ILSVRC as the training dataset for the OOD class. Unfortunately, ILSVRC images are not annotated at the pixel level. We deal with this challenge by simply labeling all ILSVRC pixels as the OOD class, and all pixels from the road-driving datasets as the ID class. In order to account for different resolutions, we resize the ILSVRC images so that the smaller side equals 512 pixels while keeping the image proportions. We form mixed batches (Kreso et al., 2018) by taking random crops of \(512 \times 512\) pixels from both training datasets (ILSVRC, in-distribution) and normalizing them using the ILSVRC mean and variance. Since there is a huge disproportion in size between the ILSVRC dataset and the road-driving datasets, we oversample the road-driving datasets so that the numbers of images become approximately equal. We train the models using the Adam optimizer, a decaying learning rate and a batch size of 30, until the accuracy on WildDash val reaches \(100\%\). Similar accuracies are also observed on ILSVRC val (OOD) and Vistas val (ID) after the training. ### 4.5 EVALUATION We evaluate how well the considered models separate OOD and ID samples on several test datasets. We compare our discriminative OOD model with max-softmax (Hendrycks & Gimpel, 2017), ODIN (Liang et al., 2018), trained confidence (DeVries & Taylor, 2018) and the pretrained runner-up model from the ROB challenge, which was provided by its authors (Kreso et al., 2018). We quantify performance with average precision because it shows how well a method separates the OOD and ID pixels without having to look for an appropriate discrimination threshold. We assume image-wide ID and OOD labeling (further experiments are presented in the appendix). We label all pixels in WildDash test images #0-#140 as ID, and all pixels in WildDash test images #141-#155 as OOD. 
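Under this image-wide labeling, the AP computation can be sketched as follows; the per-pixel scores, the subsampling stride, and all names are our assumptions rather than the exact evaluation code.

```python
# Sketch of the evaluation protocol: every pixel of a negative image counts as
# positive (OOD), every pixel of an in-distribution image as negative, and AP
# is computed over the pooled per-pixel OOD scores. Subsampling by `stride` is
# only a memory-saving assumption.
import numpy as np
from sklearn.metrics import average_precision_score

def pixel_ap(score_maps, is_ood_image, stride=4):
    y_true, y_score = [], []
    for scores, is_ood in zip(score_maps, is_ood_image):
        s = scores[::stride, ::stride].ravel()   # (H, W) map -> pixel scores
        y_score.append(s)
        y_true.append(np.full(s.shape, int(is_ood)))
    return average_precision_score(np.concatenate(y_true), np.concatenate(y_score))
```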
We provide two AP measures. The first one evaluates results on all negative images (#141-#155). The second one ignores the altered valid images (Zendel et al., 2018) (see Figure 3) which, in our view, cannot be clearly distinguished from in-distribution images. Thus, we respect the original setup of the challenge, but also provide the second performance metric for a more complete illustration of the results. Note that we consider all pixels in OOD images as "positive" responses, including the pixels of Cityscapes classes (e.g. a person in a room). Conversely, we consider all pixels in ID images as "negatives", including the pixels which do not belong to any of the road-driving classes (e.g. animals on the road). Such ambiguous pixels are rare and do not compromise our conclusions. ### 4.6 RESULTS ON WILDDASH TEST Table 1 shows the results of OOD detection based on the max-softmax criterion with two instances of our primary semantic segmentation model. The two model instances were trained on i) Cityscapes train and ii) Cityscapes train + WildDash val. The evaluation was performed on WildDash test without the five altered valid images (cf. Figure 3) (left) and on the complete WildDash test (right). Both approaches perform rather poorly, though training on WildDash val doubles the performance. The precision-recall curve for the model trained on Cityscapes and WildDash val can be seen in the first column of Figure 2. The precision is low even for small values of recall, indicating a very poor separation of ID and OOD pixels. <--- Page Split ---> Table 1: Average precision performance for OOD detection by applying max-softmax to the predictions of the primary model trained on two road-driving datasets. <table><tr><td>Training set</td><td>WD test selection</td><td>WD test complete</td></tr><tr><td>City</td><td>10.09</td><td>11.91</td></tr><tr><td>City + WD val</td><td>17.62</td><td>19.29</td></tr></table> ![](images/6_0.jpg) <center>Figure 2: Precision-recall curves of OOD responses on the complete WildDash test. The columns correspond to: i) OOD detection according to max-softmax of the primary model, ii) OOD detection by recognizing foreign classes with the ROB model, iii) discriminative OOD detection trained on Vistas, and iv) discriminative OOD detection trained on Cityscapes and WildDash val. </center> We show the confidence that the corresponding pixel is OOD in the second column of Figures 3, 4 and 5, where red denotes higher confidence. We see that the OOD responses obtained by the max-softmax approach do not correlate with epistemic uncertainty. Instead, they occur on semantic borders, small objects and distant parts of the scene, that is, on details which occur in most training images. ![](images/6_1.jpg) <center>Figure 3: OOD detection in three of the five altered valid scenes from WildDash test (images #141, #148 and #151). These images were ignored in the experiments labeled as "WD test selection". The columns correspond to: i) original image, ii) best result from Table 1, iii) discriminative OOD detection with the ROB model, iv) discriminative OOD detection trained on Vistas, and v) discriminative OOD detection trained on Cityscapes and WildDash val. Red denotes the confidence that the corresponding pixel is OOD, which can be interpreted as epistemic uncertainty. </center> Table 2 shows OOD detection with the augmented primary model and trained confidence. This approach achieves a better mIoU performance than the basic segmentation model trained only on Cityscapes. 
This suggests that training with uncertainty alleviates overfitting. Still, the uncertainty estimation itself proves to be the worst predictor of OOD pixels among all approaches considered in our experiments. This suggests that the confidence head is "confident" by default and must be taught to recognize the pixels which should be uncertain. Since this model performed poorly, we show neither its precision-recall curve nor its OOD segmentation results. Table 3 shows OOD detection results for a model that has seen indoor scenes (OOD) and road-driving scenes (ID) through training on all four datasets from the ROB challenge. We perform OOD detection using max traffic softmax (the maximum softmax value over the ID classes) and sum traffic softmax (the sum of softmax values over the ID classes). <--- Page Split ---> Table 2: Average precision for OOD detection by the augmented primary model featuring a trained confidence head. <table><tr><td rowspan="2">Training set</td><td colspan="2">WD test selection</td><td colspan="2">WD test complete</td></tr><tr><td>max-softmax</td><td>confidence</td><td>max-softmax</td><td>confidence</td></tr><tr><td>City</td><td>10.61</td><td>9.52</td><td>15.62</td><td>11.38</td></tr></table> ![](images/7_0.jpg) <center>Figure 4: OOD detection in in-distribution WildDash test images. We use the same column arrangement as in Figure 3. Row three is interesting because it contains animals which are classified as ID by the model trained on WildDash val. This is likely due to the fact that WildDash val contains images with animals on the road (although not ducks). Notice finally that the model instance trained on WildDash val performs better on distorted images. </center> The former is the OOD detection criterion from the original paper. The latter is the total probability of the ID classes; computing it turns the ROB model into a discriminative OOD detector with ScanNet as the outlier dataset. There is a significant jump in average precision between these models and the approaches trained only on road-driving scenes. Sum traffic softmax works better than max traffic softmax, so we analyze it further. Interestingly, this model recognizes most pixels in the five artificially altered negative WildDash test images as ID (column 3, Figure 3). The model works well on ID images (Figure 4); however, it makes errors in some OOD images. The third column of Figure 5 shows that some pixels, like the ants (row 2), are recognized as ID samples. Interestingly, the model recognizes the pixels of people as ID, even though they are located in an OOD context (row 3 in Figure 5). We also show the precision-recall curve for this model in the second column of Figure 2. The precision is high for low values of recall. Furthermore, the precision remains high along a greater range of recall values when using probabilities for OOD detection. Table 3: Average precision for OOD detection with the classifier trained on all four datasets from the ROB 2018 challenge: Cityscapes trainval, KITTI train, WildDash val and ScanNet train. Detection of inlier classes (sum traffic softmax) works better than using model output uncertainty for the inlier classes (max traffic softmax). 
<table><tr><td>Training set</td><td>OOD approach</td><td>WD test selection</td><td>WD test complete</td></tr><tr><td>ROB 2018</td><td>max traffic softmax</td><td>65.64</td><td>51.29</td></tr><tr><td>ROB 2018</td><td>sum traffic softmax</td><td>69.19</td><td>54.71</td></tr></table> Table 4 shows average precision for the proposed discriminative OOD detectors, which are jointly trained on the ILSVRC dataset and road-driving images from different datasets. We start with the model instance which was trained using only Cityscapes images as ID examples. Interestingly, this instance performs poorly because it classifies all WildDash test images as OOD. This result indicates that the Cityscapes dataset encourages overfitting, since all of its images were acquired with the same camera. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 5: OOD detection in negative WildDash test images. We use the same column arrangement as in Figure 3. The ROB model classifies all non-indoor images as ID, unlike the models that have seen the ILSVRC dataset. </center> The other two instances of the model perform better than all other approaches presented in this paper (including sum traffic softmax on the ROB dataset, which indicates that ILSVRC is a better choice of outlier dataset than ScanNet). The model instance trained on Vistas performs significantly better than the model instance trained only on Cityscapes. Still, we obtain the best results with the instance that has seen the WildDash validation set. This suggests that a model that needs to work in challenging environments also needs to see challenging examples during training. Interestingly, these two instances recognize pixels from the artificially altered images as ID samples, which is evident from the drop in performance between the two columns of Table 4 as well as from columns 4 and 5 of Figure 3. Finally, these models do not recognize ID classes in an OOD context: people sitting in a room are classified as OOD samples, as shown in row 3 of Figure 5. Table 4: Average precision for discriminative OOD detection on the WildDash test dataset. The OOD detection model is jointly trained on ILSVRC (OOD pixels) and road-driving images from different datasets (ID pixels). <table><tr><td>Training set</td><td>WD test selection</td><td>WD test complete</td></tr><tr><td>city,img</td><td>32.11</td><td>24.83</td></tr><tr><td>city,wd,img</td><td>96.24</td><td>71.43</td></tr><tr><td>vistas,img</td><td>89.23</td><td>67.44</td></tr></table> Precision-recall curves for the model instance trained on Vistas and the model instance trained on Cityscapes + WildDash can be seen in Figure 2, in columns 3 and 4 respectively. The curve for the model that has only seen the Vistas dataset slopes downward relatively steadily, while the curve for the model that has seen the WildDash validation set remains constant and high and then drops suddenly. This drop is due to the altered valid images shown in Figure 3. Finally, we show a few difficult cases in Figure 6 to discuss the room for improvement. Rather than visualizing the classification (which has sharp edges), we show the confidence that the pixel is OOD. The first two rows contain images which are clearly inliers; however, our discriminative models suspect that some of their parts are OOD. This is probably caused by the existence of ID classes in ILSVRC images (e.g. caravan, sports car, dome). Our models are not expressive enough to indicate which ILSVRC classes caused these errors. The images in rows 3 and 4 are ID images that contain OOD objects, in this case animals. 
Future models would benefit from better handling of the overlap between ILSVRC and the primary training dataset. Furthermore, the predictions are very coarse due to the image-wide annotations of the training datasets. Finer pixel-level predictions would likely be obtained by training on images that contain both ID and OOD pixels. ### 4.7 RESULTS ON OTHER DATASETS Table 5 shows how well the proposed OOD-detection models generalize to datasets which were not seen during training. Rows 1 and 3 show the difference between using Vistas and Cityscapes as the ID dataset. When using Vistas as ID, almost no OOD pixels are detected in Cityscapes. <--- Page Split ---> On the other hand, when using Cityscapes as ID, most Vistas pixels are classified as OOD. This suggests that Cityscapes poorly represents the variety of traffic scenes. Row 2 shows that almost all Pascal VOC 2007 pixels are classified as OOD. This finding complements the results from Figure 5, and suggests that using ILSVRC as the outlier dataset generalizes well to other outlier datasets. Table 5: Pixel accuracy of discriminative OOD detection on various datasets. PASCAL* denotes PASCAL VOC 2007 trainval without the Cityscapes classes (bicycle, bus, car, motorbike, person, train). <table><tr><td>Training set</td><td>Test set</td><td>OOD incidence</td></tr><tr><td>Vistas, ILSVRC</td><td>Cityscapes test</td><td>0.01%</td></tr><tr><td>Vistas, ILSVRC</td><td>PASCAL*</td><td>99.99%</td></tr><tr><td>Cityscapes, ILSVRC</td><td>Vistas val</td><td>93.76%</td></tr></table> ## 5 CONCLUSION Graceful performance degradation in the presence of unforeseen scenery is a crucial capability for any real-life application of computer vision. Any system for recognizing images in the wild should at least be able to detect such situations in order to avoid disasters and fear of technology. We have considered image-wide OOD detection approaches which can be easily adapted for dense prediction in high-resolution images. These approaches have delivered very low precision in our experiments because they are unable to ignore the contribution of aleatoric uncertainty in the primary model output. We have therefore proposed a novel approach which recognizes outliers as being more similar to some "background" dataset than to the training dataset of the primary model. Our experiments have resulted in a substantial improvement of OOD detection AP performance with respect to all previous approaches which are suitable for dense prediction in high-resolution images. ILSVRC appears to be a reasonable background dataset candidate due to successful OOD detection in negative WildDash images that are (at least nominally) not represented in ILSVRC (white wall, two kinds of noise, anthill closeup, aquarium, etc.). Nevertheless, our study emphasizes the need for more comprehensive background datasets. Future work will address employing these results as a guide for better direction of the annotation effort, as well as further development of approaches for recognizing epistemic uncertainty in images and video. ![](images/9_0.jpg) <center>Figure 6: Examples of OOD pixel detections in positive WildDash test images. 
The columns correspond to: i) original image, ii) discriminative OOD detection trained on Vistas, and iii) discriminative OOD detection trained on Cityscapes and WildDash val. Red denotes the confidence that the corresponding pixel is OOD, which can be interpreted as epistemic uncertainty. </center> <--- Page Split ---> ## REFERENCES Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. CoRR, abs/1706.05587, 2017. Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Scharwächter, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset. In CVPRW, 2015. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. In CVPR, pp. 248-255, 2009. Terrance DeVries and Graham W. Taylor. Learning confidence for out-of-distribution detection in neural networks. CoRR, abs/1802.04865, 2018. Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John M. Winn, and Andrew Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98-136, 2015. Dario Garcia-Gasulla, Ferran Parés, Armand Vilalta, Jonathan Moreno, Eduard Ayguadé, Jesús Labarta, Ulises Cortés, and Toyotaro Suzumura. On the behavior of convolutional nets for feature extraction. J. Artif. Intell. Res., 61:563-592, 2018. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pp. 2672-2680, 2014. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In ICML, pp. 1321-1330, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, pp. 346-361, 2014. URL http://arxiv.org/abs/1406.4729. Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017. Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. In CVPR, 2017. Alex Kendall and Yarin Gal. What uncertainties do we need in bayesian deep learning for computer vision? In NIPS, pp. 5574-5584, 2017. Ivan Kreso, Marin Orsic, Petra Bevandic, and Sinisa Segvic. Robust semantic segmentation with ladder-densenet models. CoRR, abs/1806.03465, 2018. URL http://arxiv.org/abs/1806.03465. Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NIPS, pp. 6402-6413, 2017. Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In ICLR, 2018. Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR, 2018. Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulò, and Peter Kontschieder. The mapillary vistas dataset for semantic understanding of street scenes. In ICCV, pp. 5000-5009, 2017. Maxime Oquab, Léon Bottou, Ivan Laptev, and Josef Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In CVPR, pp. 
1717-1724, 2014. <--- Page Split ---> Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. Mohammad Sabokrou, Mohammad Khalooei, Mahmood Fathy, and Ehsan Adeli. Adversarially learned one-class classifier for novelty detection. In CVPR, pp. 3379-3388, 2018. Walter J. Scheirer, Anderson de Rezende Rocha, Archana Sapkota, and Terrance E. Boult. Toward open set recognition. IEEE Trans. Pattern Anal. Mach. Intell., 35(7):1757-1772, 2013. Lewis Smith and Yarin Gal. Understanding measures of uncertainty for adversarial example detection. In UAI, volume abs/1803.08533, 2018. Yi-Hsuan Tsai, Wei-Chih Hung, Samuel Schulter, Kihyuk Sohn, Ming-Hsuan Yang, and Manmohan Chandraker. Learning to adapt structured output space for semantic segmentation. In CVPR, 2018. Oliver Zendel, Katrin Honauer, Markus Murschitz, Daniel Steininger, and Gustavo Fernandez Dominguez. Wilddash - creating hazard-aware benchmarks. In ECCV, September 2018. <--- Page Split ---> ## APPENDIX A DENSE OOD DETECTION ON ROAD DRIVING IMAGES WITH MIXED CONTENT The WildDash dataset is the only publicly available dataset that provides OOD images. Unfortunately, the WildDash OOD content is annotated only at the image level. This makes WildDash unsuitable for testing the detection of unfamiliar objects in familiar settings. We therefore propose six new datasets for that purpose. We also propose an improved training procedure which allows the proposed discriminative OOD detection model to accurately predict the borders of OOD objects. This procedure is used to train a new instance of the discriminative OOD model which is evaluated in the experiments below. Finally, we present and discuss experiments which compare the AP performance across several OOD models and datasets. ## A.1 TEST DATASETS In order to evaluate how different OOD detection methods perform when OOD pixels are present in ID scenes, we create six new datasets. Three of these datasets include images which contain both ID and OOD pixels. We use these datasets for evaluating various OOD detection approaches. The remaining three datasets are designed for control experiments in which we explore whether the evaluated OOD detection approaches react to pasted ID content. We obtain the first two datasets by pasting Pascal VOC 2007 animals of different sizes onto images from Vistas val. Pascal was chosen because it contains many densely annotated object instances which are out-of-distribution for road-driving scenes. We do not use Vistas train at this point since we wish to preserve it for training the OOD model. The three control datasets are formed by pasting objects across images from road-driving datasets. The sixth dataset contains a selection of Vistas images in which a significant number of pixels are labeled as the class 'ground animal'. This dataset is the closest match to a real-world scenario of encountering an unexpected object while driving; however, the number of its images is rather small (hence the need for the datasets obtained by pasting). A sketch of the mask-based pasting procedure is given below. 
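The following is a minimal sketch of the mask-based compositing behind these datasets; the array conventions and function name are ours, and dataset-specific details such as resizing are described in the subsections that follow.

```python
# Sketch of mask-based pasting: copy only the object pixels, selected by a
# boolean segmentation mask, into a target scene at a given location. `scene`
# and `obj` are HxWx3 uint8 arrays; `mask` is an HxW boolean array matching
# `obj`. The object is assumed to fit inside the scene at (top, left).
import numpy as np

def paste_object(scene, obj, mask, top, left):
    out = scene.copy()
    h, w = mask.shape
    out[top:top + h, left:left + w][mask] = obj[mask]   # object pixels only
    return out
```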
## A.1.1 PASCAL TO VISTAS \(10\%\) We start by locating Pascal images with segmentation ground truth which contain any of the six animal classes: bird, cat, cow, dog, horse and sheep. We select 369 large Pascal objects from their original images using the pixel-level segmentation ground truth. For each selected object we choose a random image from Vistas val, resize the object to cover at least \(10\%\) of the image pixels and then paste the object at a random image location. This results in 369 combined images. Examples of the combined images are shown in column 1 of Figure 7. ## A.1.2 PASCAL TO VISTAS \(1\%\) A possible issue with resizing objects before pasting is that the OOD model may succeed in detecting the pasted objects by recognizing the resizing artifacts instead of the novelty. In order to address this issue, we form another dataset as follows. We iterate over all instances of Pascal objects; for each of them we choose a random image from Vistas val and paste the object without any resizing, but only if it covers at least \(1\%\) of the image pixels. This results in 31 combined images. This dataset is more difficult than the previous one since the OOD patches are much smaller. Examples can be seen in the first column of Figure 8. ## A.1.3 CITY TO CITY We create this dataset by pasting a random object instance from Cityscapes val at a random location in a different random Cityscapes validation image. The only condition is that the object instance covers at least \(0.5\%\) of the Cityscapes image. No preprocessing is performed before the pasting. Performance on this set indicates whether a model detects OOD pixels due to the different imaging conditions in which the patches were acquired. This dataset contains 288 images. Examples can be seen in the first column of Figure 9. <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 7: OOD detection in Vistas images with pasted Pascal objects that take up at least \(10\%\) of the image. The columns correspond to: i) original image, ii) max-softmax of the primary model (cf. Table 1), iii) OOD detection with the ROB model (cf. Table 3), iv) discriminative OOD detection trained on entire images from ILSVRC (OOD) and Vistas train (ID) (cf. Table 4), and v) discriminative OOD detection trained on entire ILSVRC images (OOD) and ILSVRC bounding boxes (OOD) pasted over Vistas images without ground animals (ID). Red denotes the confidence that the corresponding pixel is OOD, which can be interpreted as epistemic uncertainty. Max-softmax of the primary model detects borders. The model trained according to A.2 manages to accurately detect the OOD shape. The ROB model manages to detect the position of the pasted patch, while the discriminative model trained only on whole OOD images does not detect any of the pasted patches. </center> ![](images/13_1.jpg) <center>Figure 8: OOD detection in Vistas images with pasted Pascal objects that take at least \(1\%\) of the image. We use the same column arrangement and colors as in Figure 7. The model trained as described in A.2 is able to detect even the relatively small objects pasted into a similar background. The ROB model fails to detect the location of the pasted patch. </center> ## A.1.4 VISTAS TO CITY We create this dataset by pasting a random object instance from Vistas val into a random image from Cityscapes val. The pasted instance has to cover at least \(0.5\%\) of the Cityscapes image. No preprocessing is performed before the pasting. 
Performance on this set indicates whether the model detects the different camera characteristics of the patch rather than real OOD pixels. This dataset contains 1543 images. Some examples are shown in Figure 10. <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 9: OOD detection in Cityscapes images with pasted Cityscapes instances that take at least \(0.5\%\) of the image. We use the same column arrangement and colors as in Figure 7. None of the models accurately detect the pasted patches. The fourth model seems to react to the borders of the pasted content (row 2). </center> ![](images/14_1.jpg) <center>Figure 10: OOD detection in Cityscapes images with pasted Vistas instances that take at least \(0.5\%\) of the image. We use the same column arrangement and colors as in Figure 7. The fourth model experiences trouble with atypical Cityscapes images (row 1) and detects the borders of the pasted patches. </center> ## A.1.5 SELF TO SELF We create this dataset by pasting a randomly selected object instance from a Vistas image to a random location in the same image. The object instance has to cover at least \(0.5\%\) of the Vistas image. No preprocessing is performed before the pasting. Performance on this set indicates whether the model detects objects at unusual locations in the scene. This set contains 1873 images. Some examples can be seen in Figure 11. ## A.1.6 VISTAS ANIMALS This dataset is a subset of Vistas training and validation images which contain instances labeled 'ground animal' that cover at least \(0.7\%\) of the image. This set is the closest to the real-world scenario of encountering unknown objects in ID road-driving scenes. Unlike in the images with pasted Pascal animals, OOD detection cannot succeed merely by recognizing pasting artifacts or different imaging conditions. This set contains 8 images. Three of those are shown in the first column of Figure 12. ## A.2 TRAINING DATASET In order to cope with images containing both ID and OOD pixels, we make the following changes to the training dataset. First, we remove all Vistas images which contain instances of the class 'ground animal' from the training split, regardless of the instance size. We denote the Vistas dataset without animals as "vistas-a". Then, we select the 544 546 ILSVRC images in which bounding box annotation is available. Each of the selected ILSVRC images is used only once during training, either as i) a single image or ii) a combined image obtained by pasting the resized bounding box to a random location of a random Vistas train image. In the former case, the bounding box is labeled as OOD while the rest of the ILSVRC image is ignored. In the latter case, the bounding box is resized to cover \(5\%\) of the Vistas image pixels; the resulting ILSVRC pixels are labeled as OOD and the rest are labeled as ID (a sketch of this labeling scheme is given below). <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 11: OOD detection in Vistas images that contain objects copied and pasted from the same image. We use the same column arrangement and colors as in Figure 7. None of the models detect the pasted patches. </center> ![](images/15_1.jpg) <center>Figure 12: OOD detection in Vistas images that contain the class 'ground animal'. We use the same column arrangement and colors as in Figure 7. Only the fourth model manages to accurately segment an animal in row 1, and it reacts to animals in the other two images. The ROB model detects some parts as OOD; however, those regions do not correspond to animal locations. </center> 
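The labeling scheme referenced above can be sketched as follows; the label ids, the resize routine and all names are illustrative assumptions, and the final 512-pixel resize and random crop described next are omitted.

```python
# Sketch of A.2's combined-image construction: an ILSVRC bounding-box crop is
# resized to cover ~5% of a Vistas image and pasted at a random location; the
# pasted pixels are labeled OOD and the remaining pixels ID. The crop is
# assumed to fit inside the target image after resizing.
import numpy as np

ID, OOD = 0, 1

def bbox_paste_example(vistas_img, ilsvrc_img, box, resize):
    x0, y0, x1, y1 = box
    crop = ilsvrc_img[y0:y1, x0:x1]
    target_area = 0.05 * vistas_img.shape[0] * vistas_img.shape[1]
    scale = (target_area / (crop.shape[0] * crop.shape[1])) ** 0.5
    crop = resize(crop, scale)                  # box now covers ~5% of pixels
    h, w = crop.shape[:2]
    top = np.random.randint(0, vistas_img.shape[0] - h + 1)
    left = np.random.randint(0, vistas_img.shape[1] - w + 1)
    img = vistas_img.copy()
    img[top:top + h, left:left + w] = crop
    labels = np.full(img.shape[:2], ID, dtype=np.uint8)
    labels[top:top + h, left:left + w] = OOD
    return img, labels
```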
In both cases the resulting image is resized so that the shorter dimension equals 512 pixels and randomly cropped to a resolution of \(512 \times 512\). ## A.3 MODEL AND TRAINING We use the same fully convolutional discriminative OOD model as described in Section 4.3. The model is trained according to the procedure described in Section 4.4, except that the training dataset is composed as described in A.2. <--- Page Split ---> ## A.4 EXPERIMENTAL RESULTS Tables 6 and 7 show the average precision performance of all OOD models described in the paper, together with the new instance of the discriminative model which is trained on the training dataset described in Section A.2 of this appendix. The models are evaluated on all six test datasets presented in Section A.1. We compare the achieved performance with the respective results on the WildDash test dataset, which we copy from Section 4. Table 6: Average precision for discriminative OOD detection on the test datasets with images that have both ID and OOD pixels. Labels stand for max-softmax of the primary model (ms), max-softmax of the model with trained confidence (ms-conf), the primary model trained for the ROB challenge (ROB), and the discriminative model (discrim). We show results for the following datasets: WildDash (wd), Cityscapes (city), the four ROB 2018 challenge datasets (ROB), the full ImageNet (img), the subset of ImageNet with annotated bounding boxes (img.bb), the full Vistas dataset (vistas) and the Vistas dataset with images containing ground animals removed (vistas-a). <table><tr><td>Model</td><td>Training set</td><td>PascalVistas10</td><td>PascalVistas1</td><td>VistasAnimals</td><td>WildDash selection</td></tr><tr><td>ms</td><td>city</td><td>28.81</td><td>9.91</td><td>6.05</td><td>10.09</td></tr><tr><td>ms</td><td>city, wd</td><td>34.27</td><td>8.8</td><td>6.79</td><td>17.62</td></tr><tr><td>ms-conf</td><td>city</td><td>26.09</td><td>8.04</td><td>5.94</td><td>10.61</td></tr><tr><td>ROB</td><td>ROB</td><td>25.65</td><td>4.55</td><td>2.96</td><td>69.19</td></tr><tr><td>discrim</td><td>city, img</td><td>34.07</td><td>3.19</td><td>2.28</td><td>32.11</td></tr><tr><td>discrim</td><td>city, wd, img</td><td>24.46</td><td>3.19</td><td>4.59</td><td>96.24</td></tr><tr><td>discrim</td><td>vistas, img</td><td>13.14</td><td>2.39</td><td>2.4</td><td>89.23</td></tr><tr><td>discrim</td><td>vistas-a, img.bb</td><td>87.87</td><td>78.58</td><td>25.61</td><td>68.59</td></tr></table> Figures 7, 8, 9, 10, 11 and 12 show the responses of OOD detection for various models. Red denotes the confidence that the corresponding pixel is OOD, which can also be interpreted as epistemic uncertainty. The columns in these figures correspond to: i) original image, ii) max-softmax of the primary model (cf. Table 1), iii) OOD detection with the ROB model (cf. Table 3), iv) discriminative OOD detection trained on entire images from ILSVRC (OOD) and Vistas train (ID) (cf. Table 4), and v) discriminative OOD detection trained on entire ILSVRC images (OOD) and ILSVRC bounding boxes (OOD) pasted over Vistas images without ground animals (ID), as described in Section A.2. These results once again show that the max-softmax approach predicts high uncertainty on object borders. Both the ROB model and the discriminative model trained on entire images fail to detect OOD patches in many images (Figures 7, 8 and 12). The poor performance of the ROB model is expected since its training datasets do not include animals. 
The poor performance of the discriminative model trained on entire images is also understandable since none of its training images had a border between ID and OOD regions. The discriminative model which we train according to Section A.2 delivers the best overall performance. It is able to detect OOD patches even on very small pasted objects (cf. Figure 8) and genuine animals in Vistas images (cf. Figure 12). We note that this model occasionally detects borders of ID patches (row 2 in Figure 9 and row 2 in Figure 10), which suggests that the results on PascalToVistas may be a little too optimistic. We also note that this model sometimes misclassifies parts of Cityscapes images. The set of genuine Vistas images with ground animals (VistasAnimals) is the most difficult dataset for all models; however, the discriminative model trained according to Section A.2 clearly achieves the best performance (cf. Figure 12, row 1). Table 7 shows the average precision for pasted content detection on the three control datasets. The AP on the control datasets indicates whether the model is able to distinguish between the ID image and the pasted ID region. High AP on these datasets means that the corresponding OOD model detects differences in imaging conditions or unexpected object locations between the ID image and the pasted ID patch. High performance on the control datasets would indicate that success on the PascalToVistas datasets is the result of detecting the process of pasting instead of the novelty of the Pascal classes. The score of the best discriminative model indeed indicates that part of its success on the PascalToVistas datasets comes from recognizing pasting interventions. <--- Page Split ---> Table 7: AP for detection of pasted content on the three control datasets. Labels stand for max-softmax of the primary model (ms), max-softmax of the model with trained confidence (ms-conf), the primary model trained for the ROB challenge (ROB), and the discriminative model (discrim). We show results for the following datasets: WildDash (wd), Cityscapes (city), the four ROB 2018 challenge datasets (ROB), the full ImageNet (img), the subset of ImageNet with annotated bounding boxes (img_bb), the full Vistas dataset (vistas) and the Vistas dataset with images containing ground animals removed (vistas-a). <table><tr><td>Model</td><td>Training set</td><td>CityCity</td><td>VistasCity</td><td>Self</td></tr><tr><td>ms</td><td>city</td><td>3.96</td><td>7.42</td><td>5.74</td></tr><tr><td>ms</td><td>city, wd</td><td>3.49</td><td>6.81</td><td>7.42</td></tr><tr><td>ms-conf</td><td>city</td><td>3.61</td><td>6.69</td><td>5.34</td></tr><tr><td>ROB</td><td>ROB</td><td>4.64</td><td>13.37</td><td>5.95</td></tr><tr><td>discrim</td><td>city, img</td><td>2.15</td><td>45.48</td><td>2.92</td></tr><tr><td>discrim</td><td>city, wd, img</td><td>2.68</td><td>42.82</td><td>3.21</td></tr><tr><td>discrim</td><td>vistas, img</td><td>2.39</td><td>9.14</td><td>3.56</td></tr><tr><td>discrim</td><td>vistas-a, img_bb</td><td>7.62</td><td>34.12</td><td>19.74</td></tr></table> ## A.5 CONCLUSION Experiments show that training on ILSVRC bounding boxes pasted onto Vistas images delivers fair open-set dense-prediction performance. In particular, our model succeeds in detecting animals in road-driving images although it was not specifically trained for that task, while outperforming all previous approaches by a wide margin. 
We believe that these results strengthen the conclusions from the main article and provide useful insights for future work in estimating epistemic uncertainty in images. <--- Page Split ---> ## APPENDIX B DENSE OOD DETECTION ON UCSD PED2 DATASET We further explore the performance of the proposed dense OOD-detection approach on the UCSD Ped2 dataset, which contains video sequences of pedestrian scenes. The test subset contains 12 video clips in which each image is annotated with pixel-level binary masks which identify regions with anomalies. The train subset contains no anomalies. It is similar to the VistasAnimals dataset (Subsection A.1.6) in that it contains images that are a true mix of anomalous and non-anomalous regions. However, the problem of OOD detection is not the same as UCSD anomaly detection, since UCSD anomalies are connected to motion. For example, a person walking next to a bike is not an anomaly. On the other hand, a person riding a bike and a person walking on the grass are both considered anomalous. Still, it is interesting to see how the discriminative OOD detector behaves on a dataset quite different from the previously described road-driving datasets. Furthermore, since there is overlap between UCSD anomalies and OOD pixels, AP can be computed and used to get a sense of the behavior of the detector. ## B.1 UCSD PED2 DATASET The UCSD dataset contains grayscale video of pedestrian walkways taken with a stationary camera. We focus on video clips from the Ped2 subset, which contains 16 training and 12 testing video clips. All these clips have been acquired from the same viewpoint and with the same camera, so that the pedestrian movement is parallel to the image plane. The Ped2 test subset contains anomalies which are not present in Ped2 train; these anomalies correspond to bikers, skaters and small carts. Examples of images from the UCSD Ped2 dataset are shown in the first column of Figure 13. ![](images/18_0.jpg) <center>Figure 13: OOD detection in the UCSD Ped2 dataset. The first column contains original images, the second ground truth annotations, and the third the OOD discriminator output. Red denotes the confidence that the corresponding pixel is OOD. The model easily detects the cart as OOD. It detects the wheels of the bicycle as OOD, but bike riders are detected as ID. The skater is also detected as ID. </center> ## B.2 TRAINING DATASET The training dataset is built as described in A.2, except that we use images from the UCSD Ped2 training dataset in place of the Vistas images. <--- Page Split ---> ## B.3 MODEL AND TRAINING We use the same setup for training on UCSD as described in A.3, but we also randomly convert half of the training images to grayscale. ## B.4 EXPERIMENTAL RESULTS Figure 13 shows the response of the discriminative model trained on the UCSD Ped2 dataset. Table 8 shows the results of the discriminative model on the complete UCSD Ped2 test dataset, as well as on three of its test sequences: 1, 4, and 8. Sequence 1 contains a cyclist and a relatively large number of pedestrians, sequence 4 contains a small cart, while sequence 8 contains a cyclist and a skater but fewer pedestrians. The AP is highest on sequence 4, where the motion anomaly occurs on an object which is clearly OOD. We report lower AP on sequences with cyclists and skaters. There are two causes for this deterioration. Firstly, our model is unable to detect skaters as OOD due to their similarity with pedestrians. 
Secondly, cyclists (together with their bikes) are labeled as anomalies in the ground truth annotations, while bikes are not necessarily labeled as anomalies if they are not being ridden. Our model recognizes the majority of bike pixels as OOD (ridden or not), while the riders themselves are recognized as ID, again due to their similarity with pedestrians. This discrepancy considerably decreases our AP.

Table 8: AP for OOD detection on the whole UCSD Ped2 test dataset, as well as on sequences 1, 4 and 8 from that dataset, denoted as S1, S4 and S8 respectively.

<table><tr><td>Model</td><td>Training set</td><td>UCSD Ped2</td><td>UCSD Ped2 S1</td><td>UCSD Ped2 S4</td><td>UCSD Ped2 S8</td></tr><tr><td>discrim</td><td>UCSD, img_bb</td><td>48.49</td><td>37.08</td><td>83.60</td><td>40.24</td></tr></table>

## B.5 CONCLUSION

Experiments show that unexpected objects can be detected in pedestrian walkways by training on ILSVRC bounding boxes pasted over UCSD images.
## ABSTRACT

Most classification and segmentation datasets assume a closed-world scenario in which predictions are expressed as a distribution over a predetermined set of visual classes. However, such an assumption implies unavoidable and often unnoticeable failures in the presence of out-of-distribution (OOD) input. These failures are bound to happen in most real-life applications since current visual ontologies are far from being comprehensive. We propose to address this issue by discriminative detection of OOD pixels in input data. Different from recent approaches, we avoid making any decisions based only on the training dataset of the primary model trained to solve the desired computer vision task. Instead, we train a dedicated OOD model which discriminates the primary training set from a much larger "background" dataset which approximates the variety of the visual world. We perform our experiments on high resolution natural images in a dense prediction setup. We use several road-driving datasets as our training distribution, while we approximate the background distribution with the ILSVRC dataset. We evaluate our approach on WildDash test, which is currently the only public test dataset that includes out-of-distribution images. The obtained results show that the proposed approach succeeds in identifying out-of-distribution pixels while outperforming previous work by a wide margin.

## 1 INTRODUCTION

Development of deep convolutional models has resulted in tremendous advances in visual recognition. Recent semantic segmentation systems surpass \(80\%\) mIoU (Chen et al., 2017) on demanding natural datasets such as Pascal VOC 2012 (Everingham et al., 2015) or Cityscapes (Cordts et al., 2015). Such a performance level suggests a clear application potential in exciting areas such as road safety assessment or autonomous driving. Unfortunately, most existing semantic segmentation datasets assume closed-world evaluation (Scheirer et al., 2013), which means that they require predictions over a predetermined set of visual classes. Closed-world datasets are very useful for promoting research; however, they are poor proxies for real-life operation even in a very restricted scenario such as road driving. In fact, one can easily imagine many real-life driving scenes which give rise to image regions that cannot be recognized by learning on the Cityscapes ontology. Some of those regions may be projected from objects which are foreign to Cityscapes (e.g. road works, water, animals). Others may appear unrelated to Cityscapes due to particular configurations being absent from the training dataset (e.g. pedestrians lying on the ground, crashed cars, fallen trees). Finally, some regions may be poorly classified due to different environmental conditions, acquisition setup, or geographical location (Tsai et al., 2018). The simplest way to approach unrecognizable data is to improve the datasets. For instance, the Vistas dataset (Neuhold et al., 2017) proposes a richer ontology and addresses more factors of variation than Cityscapes. However, training on Vistas requires considerable computational resources while still being unable to account for the full variety of the recent WildDash dataset (Zendel et al., 2018), as we show in experiments. Another way to approach this problem would be to design strategies for knowledge transfer between the training dataset and the test images (Tsai et al., 2018).
However, this is unsatisfactory for many real-world applications where the same model should be directly applicable to a variety of environments.

<--- Page Split --->

These examples emphasize the need to quantify model prediction uncertainty, especially if we wish to achieve reliable deployment in the real world. Uncertainty can be divided into two categories (Kendall & Gal, 2017). Aleatoric uncertainty is caused by limitations of the model which cannot be reduced by supplying additional training data. For example, the quality of segmentation models on distant and small objects depends on the resolution at which inference is performed. On the other hand, epistemic uncertainty arises when the trained model is unable to produce the desired prediction given the particular training dataset. In other words, it occurs when the model receives the kind of data which was not seen during training. Epistemic uncertainty is therefore strongly related to the probability that the model operates on an out-of-distribution sample. Recent work in image-wide out-of-distribution detection (Kendall & Gal, 2017; Hendrycks & Gimpel, 2017; Liang et al., 2018) evaluates the prediction uncertainty by analyzing the model output. We find that these approaches perform poorly in dense prediction tasks due to the prominence of aleatoric uncertainty. This means that total uncertainty can be high even on in-distribution pixels (e.g. on pixels at semantic borders, or on very distant objects). A different approach attempts to detect out-of-distribution samples with GAN discriminators, whereby the GAN generator is used as a proxy for the out-of-distribution class (Lee et al., 2018; Sabokrou et al., 2018). However, these approaches do not scale easily to dense prediction in high resolution images due to high computational complexity and large memory requirements. Therefore, in this paper we propose to detect out-of-distribution samples at the pixel level by a dedicated "OOD" model which complements the "primary" model trained for a specific vision task. We formulate the OOD model as dense binary classification between the training dataset and a much larger "background" dataset. The proposed formulation requires less computational resources than approaches with GAN-generated backgrounds, and is insensitive to the aleatoric uncertainty related to semantic segmentation.

## 2 RELATED WORK

Detection of out-of-distribution (OOD) examples (together with the related fields of anomaly and novelty detection) has received a lot of attention in recent literature. Many approaches try to estimate uncertainty by analyzing the entropy of the predictions. The simplest approach is to express the prediction confidence as the probability of the winning class or, equivalently, the maximum softmax (max-softmax) activation (Hendrycks & Gimpel, 2017). The resulting approach achieves useful results in the image classification context, although max-softmax must be recalibrated (Guo et al., 2017) before being interpreted as \(P(\mathrm{inlier}|\mathbf{x})\). This result has been improved upon by the approach known as ODIN (Liang et al., 2018), which proposes to pre-process input images with a well-tempered anti-adversarial perturbation with respect to the winning class and to increase the softmax temperature. Another line of research characterizes uncertainty by a separate head of the primary model which learns either prediction uncertainty (Kendall & Gal, 2017; Lakshminarayanan et al., 2017) or confidence (DeVries & Taylor, 2018).
The predicted variance (or confidence) is further employed to discount the data loss while at the same time incurring a small penalty proportional to the uncertainty (or inversely proportional to the confidence). This way, the model is encouraged to learn to recognize hard examples if they are present in the training dataset. Unfortunately, such modelling is able to detect only aleatoric uncertainty (Kendall & Gal, 2017), since the data which would allow us to learn epistemic uncertainty is absent by definition. A principled information-theoretic approach for calculating epistemic uncertainty has been proposed by Smith & Gal (2018). They express epistemic uncertainty as the mutual information between the model parameters and the particular prediction. Intuitively, if our knowledge about the parameters increases a lot when the ground truth prediction becomes known, then the corresponding sample is likely to be out of distribution. The sought mutual information is quantified as the difference between the total prediction entropy and the marginalized prediction entropy over the parameter distribution. Both entropies are easily calculated with MC dropout. Unfortunately, our experiments along these lines resulted in poor OOD detection accuracy. This may be caused by MC dropout being an insufficiently accurate approximation of model sampling according to the parameter distribution; however, further work would be required to produce a more definitive answer.

<--- Page Split --->

Lakshminarayanan et al. (2017) show that uncertainty can be more accurately recovered by replacing MC dropout with an ensemble of several independently trained models. However, for this to be done, many models would need to be kept in GPU memory during evaluation. Explicit ensembles are therefore not suited for systems which aspire to perform dense prediction in real time. A principled algorithm for recognizing OOD samples would fit a generative model \(P_{\mathcal{D}}(\mathbf{x}|\theta)\) to the training dataset \(\mathcal{D} = \{\mathbf{x}_{i}\}\). Such a model would learn to evaluate the probability distribution of the training dataset at the given sample, which can be viewed as epistemic uncertainty. Unfortunately, evaluating the probability density function of high-dimensional data is very hard. Instead, Lee et al. (2018); Sabokrou et al. (2018) formulate OOD detection as binary classification where OOD samples are produced by a GAN generator (Goodfellow et al., 2014). Sabokrou et al. (2018) use an autoencoder in place of the generator, which is trained to denoise images as well as to trick the discriminator. During evaluation the autoencoder enhances inliers while distorting the outliers, making the two more separable. Lee et al. (2018) train a GAN to generate images on the borders of the distribution. In both of these approaches the discriminator can be used for OOD detection. Unfortunately, these approaches have only been shown to work on low dimensional samples and simple image datasets. They do not scale well to pixel-level prediction in a highly complex domain such as traffic scenes, especially if we take into account the memory and time restrictions of real-world applications. While Lee et al. (2018) try to generate outliers, it may be simpler to just use a foreign dataset during training. Kreso et al. (2018) show that out-of-distribution samples in traffic images can be detected by joint segmentation training on Cityscapes (road driving) and ScanNet (indoor).
An out-of-distribution sample is signaled whenever the prediction favours a class which is foreign to the evaluation dataset. For instance, if a bathtub region is detected in an image acquired from an intelligent vehicle, then the pixels of that region are labeled as OOD.

## 3 THE PROPOSED DISCRIMINATIVE OOD DETECTION APPROACH

We formulate out-of-distribution pixel detection as dense binary classification into two classes: OOD (out-of-distribution) and ID (in-distribution). As in (Lee et al., 2018; Sabokrou et al., 2018), we take ID training pixels from the semantic segmentation dataset used to train the primary semantic segmentation model (e.g. Vistas). Subsequently, we propose to take the OOD training pixels from a foreign dataset which approximates the entire diversity of the visual world. Since none of the existing public datasets satisfies our requirements, in this paper we propose to use the ILSVRC dataset (Russakovsky et al., 2015) as the next-best solution for this purpose (cf. Figure 1). We think that this approximation is reasonable since the ILSVRC dataset has been successfully used in many knowledge transfer experiments (Oquab et al., 2014; Garcia-Gasulla et al., 2018). Furthermore, ILSVRC images contain a variety of content around the 1000 official classes.

![](images/2_0.jpg)
<center>Figure 1: The proposed discriminative OOD detection approach for semantic segmentation. Training in-distribution (ID) images are taken from a diverse road driving dataset (e.g. VISTAS). Training out-of-distribution (OOD) images are taken from ILSVRC, which is much more diverse than the ID images. Our model is composed of a feature extractor pretrained on ImageNet and the OOD detection head. The model transforms colour images \(\mathrm{H}\times \mathrm{W}\) into \(\mathrm{H} / 32\times \mathrm{W} / 32\) logits. The cross-entropy loss is applied to bilinearly upsampled logits at input resolution. We train only the OOD head, DB4 and the preceding transition layer (light blue), while we freeze the remaining layers (dark blue).</center>

<--- Page Split --->

We define the learning algorithm in a way that avoids ILSVRC images hijacking content from the primary training dataset. We observe two distinct problems: i) ILSVRC classes which overlap classes from the primary dataset, e.g. a car in an ILSVRC image labeled as car, and ii) primary dataset classes in the ILSVRC image background, e.g. a person playing a banjo in an ILSVRC image labeled as banjo. Currently, we address both problems by ensuring that the model is trained on in-distribution (ID) pixels as often as on OOD ones. Due to the diversity of the OOD class (ILSVRC), such training results in a bias towards the ID dataset. For example, there are around 10 car-like classes in ILSVRC; therefore, cars occur in only around 1/100 ILSVRC images. On the other hand, almost every image from any road driving dataset is likely to contain at least one car. Hence, the proposed approach ensures that the model is more likely to classify car pixels as inliers (Cityscapes) rather than as outliers (ILSVRC). We consider several improvements to this approach in the appendix. The proposed OOD model is discriminative and fully convolutional, and we train it with the dense cross-entropy loss. During evaluation, we apply the trained model to test images in order to obtain dense logit maps for the two classes. These logits reflect the epistemic uncertainty in each individual pixel. OOD detection is finally performed by thresholding the probability of the inlier class.
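As a minimal sketch of this inference step (our illustration, not the authors' released code), the snippet below upsamples the two-class logits and thresholds the inlier probability. The handle `ood_model`, the channel ordering (channel 0 = OOD, channel 1 = ID) and the threshold `tau` are assumptions.

```python
import torch
import torch.nn.functional as F

def detect_ood_pixels(ood_model, image, tau=0.5):
    """Dense OOD detection by thresholding the inlier probability.

    image: float tensor of shape (1, 3, H, W).
    Returns a boolean mask of shape (H, W); True marks OOD pixels.
    """
    with torch.no_grad():
        logits = ood_model(image)                 # (1, 2, H/32, W/32) logit maps
        logits = F.interpolate(logits, size=image.shape[-2:],
                               mode="bilinear", align_corners=False)
        probs = torch.softmax(logits, dim=1)      # assumed: channel 0 = OOD, 1 = ID
        p_inlier = probs[:, 1]                    # probability of the inlier class
    return (p_inlier < tau).squeeze(0)            # low inlier probability => OOD
```

In practice the threshold `tau` would be validated on held-out data; the experiments below sidestep this choice by reporting average precision over all thresholds.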
The proposed OOD model works well in tandem with ILSVRC because it can be warmed up with parameters pre-trained on the same dataset. This considerably speeds up the training since the optimization starts from a model which is already able to extract useful features, and only needs to learn to distinguish between ILSVRC and the in-distribution dataset. This approach can share the common feature extractor with the primary semantic segmentation model, which results in a smaller OOD-detection overhead than in GAN-based approaches (Lee et al., 2018; Sabokrou et al., 2018).

## 4 EXPERIMENTS

Our experiments explore the accuracy of dense OOD detection in realistic natural images from several driving datasets. We train our models on different datasets and discuss the obtained performance on WildDash test and other datasets. The experiments compare our approach to several previous approaches from the literature, which we adapt for dense prediction as explained next.

### 4.1 ADAPTING IMAGE-WIDE APPROACHES FOR DENSE PREDICTION

We consider three previous approaches which were originally developed for image-wide OOD detection and adapt them for dense prediction. For the max-softmax approach (Hendrycks & Gimpel, 2017), the task is trivial: it is enough to independently assess the prediction of the primary model in each image pixel. For the ODIN approach (Liang et al., 2018), we first perturb the input image in the direction which increases the probability of the winning class in each individual pixel (according to the primary model). Subsequently, we apply the softmax temperature and consider the max-softmax response as above. The perturbation magnitude \(\epsilon\) and the softmax temperature \(T\) are hyperparameters that need to be validated. Note that dense ODIN perturbation implies complex pixel interactions which are absent in the image-wide case. Despite hyperparameter tuning, this achieved only a modest improvement over the max-softmax approach, so we do not present it in the tables. For trained confidence (DeVries & Taylor, 2018), we introduce a separate convolutional head to the primary model. The resulting confidence predictions diminish the loss of a wrong prediction in the corresponding pixel while incurring a direct loss multiplied by a small constant.

### 4.2 DATASETS

Cityscapes is a widely used dataset (Cordts et al., 2015) containing densely annotated images from the driver perspective acquired during rides through different German cities. It is divided into training (2975 images), validation (500 images) and test (1525 images) subsets. Vistas (Neuhold et al., 2017) is larger and more diverse than Cityscapes, covering many more locations, times of day, weather conditions, and cameras. There are 18 000 train and 2 000 validation images. The WildDash dataset (Zendel et al., 2018) provides a benchmark for semantic segmentation and instance segmentation. It focuses on providing performance-decreasing images.

<--- Page Split --->

These images are challenging due to the conditions and unusual locations in which they were taken, or because they contain various distortions. There are 70 validation and 156 test images. The test set contains 15 images which are marked as negatives. All pixels in these images are considered out-of-distribution in the context of semantic segmentation on road-driving datasets. These images contain noise, indoor images, and five artificially altered inlier images (see Figure 3).
WildDash is compatible with the Cityscapes labeling policy (Cordts et al., 2015), but it also considers performance on negative images. Pixels in negative images are considered to be correctly classified if they are assigned a correct Cityscapes label (e.g. people in an indoor scene) or if they are assigned the void label (which means that they are detected as OOD samples). The official WildDash website suggests that OOD pixels could be detected by thresholding the max-softmax value. The ILSVRC dataset (Russakovsky et al., 2015) includes a selection of 1000 ImageNet (Deng et al., 2009) classes. It contains 1 281 167 images with image-wide annotations and 544 546 images annotated with the bounding box of the object which defines the class. The training split contains over one million images, while the validation split contains 50 000 images. The Pascal VOC 2007 dataset contains 9 963 training and validation images with image-wide annotations into 20 classes. Pixel-level semantic segmentation ground truth is available for 632 images.

### 4.3 MODEL DETAILS

We experiment with three different models: i) the primary model for semantic segmentation, ii) the primary model augmented with confidence (DeVries & Taylor, 2018), and iii) the proposed discriminative model. All three models are based on the DenseNet-121 architecture (Huang et al., 2017) (we assume the BC variant throughout the paper). DenseNet-121 contains 120 convolutional layers which can be considered as a feature extractor. These layers are organized into 4 dense blocks (DB_1 to DB_4) with 3 transition layers (T_1 to T_3) in between. In all of our models, the feature extractor is initialized with parameters pretrained on ILSVRC. We build our primary model by concatenating the upsampled output of DB_4 with the output of DB_2. This concatenation is routed to the spatial pyramid pooling (SPP) layer (He et al., 2014), which is followed by a BN-ReLU-Conv block (batch normalization, ReLU activation, 3x3 convolution) that outputs 19 feature maps with logits. The logits are normalized with softmax and then fed to the usual cross-entropy loss. We augment the primary model by introducing a new SPP layer and BN-ReLU-Conv block parallel to the SPP layer and BN-ReLU-Conv block of the primary model. The output of the confidence BN-ReLU-Conv block is concatenated with the segmentation maps. The confidence estimate is obtained by blending the resulting representation with a BN-ReLU-Conv block with sigmoid activation. We prevent the gradients from flowing into the segmentation maps to ensure that the segmentation head is trained only for segmentation and not for estimating confidence. The confidence map is then used to modulate the cross-entropy loss of the segmentation maps, while low confidence is penalized via a hyper-parameter \(\lambda\). Our proposed discriminative OOD detector feeds the output of DB_4 to a BN-ReLU-Conv block with 2 feature maps corresponding to the logits for inliers and outliers. The logits are fed to softmax and then to the cross-entropy loss which encourages the model to classify the pixels according to the respective labels. We train only the head, DB_4 and T_3 in order to speed up learning and prevent overfitting, as shown in Figure 1. We assume that the initial DB_3 features are expressive enough due to ILSVRC pretraining, and that it is enough to fine-tune only DB_4 for discriminating road-driving scenes from the rest of the visual world.
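To make this architecture concrete, the following PyTorch-style sketch assembles the discriminative OOD head on top of a DenseNet-121 feature extractor and freezes everything except T_3, DB_4 and the head, mirroring Figure 1. This is our illustration rather than the authors' code; the torchvision backbone and the decision to keep the final normalization layer trainable are assumptions.

```python
import torch.nn as nn
import torchvision

class DiscriminativeOODDetector(nn.Module):
    """Sketch of the two-class OOD detector: DenseNet-121 features
    followed by a BN-ReLU-Conv head producing inlier/outlier logits."""

    def __init__(self):
        super().__init__()
        self.features = torchvision.models.densenet121(pretrained=True).features
        # Freeze everything except transition3 (T_3), denseblock4 (DB_4)
        # and the final norm layer; only these and the head are trained.
        for name, param in self.features.named_parameters():
            if not name.startswith(("transition3", "denseblock4", "norm5")):
                param.requires_grad = False
        self.head = nn.Sequential(
            nn.BatchNorm2d(1024), nn.ReLU(inplace=True),
            nn.Conv2d(1024, 2, kernel_size=3, padding=1))  # 2 maps: OOD vs ID

    def forward(self, x):
        f = self.features(x)   # (N, 1024, H/32, W/32) for DenseNet-121
        return self.head(f)    # dense two-class logits at 1/32 resolution
```

As in Figure 1, the logits would be bilinearly upsampled to input resolution before applying the cross-entropy loss.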
### 4.4 TRAINING

We train the primary model on Cityscapes train at half resolution. The training images are normalized with the Cityscapes train mean and variance. We optimize the model on entire images for 40 epochs without jittering using the ADAM optimizer, a decaying learning rate starting from 4e-4, and a batch size of 5. The learning rate for the pretrained DenseNet layers is four times smaller than the learning rate for the model head. The model achieves 70.1% mIoU on Cityscapes val and 23.6% mIoU on WildDash val.

<--- Page Split --->

Due to poor generalization on WildDash, we also trained a model on the combination of Cityscapes train and WildDash val. This model reaches \(70.2\%\) mIoU on the Cityscapes validation set. We train the augmented primary model (DeVries & Taylor, 2018) in the same way as the primary model. This model achieves \(71.1\%\) mIoU on Cityscapes val and \(25.7\%\) mIoU on WildDash val. We train our discriminative models for OOD detection in a similar way to the primary model. We use three different in-distribution (ID) datasets: Cityscapes, Cityscapes + WildDash val, and Vistas. WildDash val was added to Cityscapes because models trained on Cityscapes are prone to overfitting. In order to show that training on the WildDash validation subset can be avoided, we also show the results of a model instance trained on the Vistas dataset (Neuhold et al., 2017), which is much more diverse than Cityscapes. As explained earlier, we always use ILSVRC as the training dataset for the OOD class. Unfortunately, ILSVRC images are not annotated at the pixel level. We deal with this challenge by simply labeling all ILSVRC pixels as the OOD class, and all pixels from the road-driving datasets as the ID class. In order to account for different resolutions, we resize the ILSVRC images so that the smaller side equals 512 pixels while keeping the image proportions. We form mixed batches (Kreso et al., 2018) by taking random crops of \(512 \times 512\) pixels from both training datasets (ILSVRC, in-distribution) and normalizing them using the ILSVRC mean and variance. Since there is a huge disproportion in size between the ILSVRC dataset and the road-driving datasets, we oversample the road-driving datasets so that the numbers of images become approximately equal. We train the models using the ADAM optimizer, a decaying learning rate, and a batch size of 30, until the accuracy on WildDash val reaches \(100\%\). Similar accuracies are also observed on ILSVRC val (OOD) and Vistas val (ID) after the training.

### 4.5 EVALUATION

We evaluate how well the considered models separate OOD and ID samples on several test datasets. We compare our discriminative OOD model with max-softmax (Hendrycks & Gimpel, 2017), ODIN (Liang et al., 2018), trained confidence (DeVries & Taylor, 2018) and the pretrained runner-up model from the ROB Challenge which was provided by its authors (Kreso et al., 2018). We quantify performance with average precision because it shows how well the method separates the OOD and ID pixels without having to look for appropriate discrimination thresholds. We assume image-wide ID and OOD labeling (further experiments are presented in the appendix). We label all pixels in WildDash test images #0-#140 as ID, and all pixels in WildDash test images #141-#155 as OOD. We provide two AP measures. The first one evaluates results on all negative images (#141-#155).
The second one ignores the altered valid images (Zendel et al., 2018) (see Figure 3) which, in our view, cannot be clearly distinguished from in-distribution images. Thus, we respect the original setup of the challenge, but also provide the second performance metric for a more complete illustration of the results. Note that we consider all pixels in OOD images as "positive" responses, including the pixels of Cityscapes classes (e.g. a person in a room). Conversely, we consider all pixels in ID images as "negatives", including the pixels which do not belong to any of the road driving classes (e.g. animals on the road). Such ambiguous pixels are rare and do not compromise our conclusions.

### 4.6 RESULTS ON WILDDASH TEST

Table 1 shows the results of OOD detection based on the max-softmax criterion with two instances of our primary semantic segmentation model. The two model instances were trained on i) Cityscapes train and ii) Cityscapes train + WildDash val. The evaluation was performed on WildDash test without the five altered valid images (cf. Figure 3) (left column) and on the complete WildDash test (right column). Both approaches perform rather poorly, though training on WildDash val doubles the performance. The precision-recall curve for the model trained on Cityscapes and WildDash val can be seen in the first column of Figure 2. The precision is low even for small values of recall, indicating a very poor separation of ID and OOD pixels.

<--- Page Split --->

Table 1: Average precision performance for OOD detection by applying max-softmax to predictions of the primary model trained on two road-driving datasets.

<table><tr><td>Training set</td><td>WD test selection</td><td>WD test complete</td></tr><tr><td>City</td><td>10.09</td><td>11.91</td></tr><tr><td>City + WD val</td><td>17.62</td><td>19.29</td></tr></table>

![](images/6_0.jpg)
<center>Figure 2: Precision-recall curves of OOD responses on the complete WildDash test. The columns correspond to: i) OOD detection according to max-softmax of the primary model, ii) OOD detection by recognizing foreign classes with the ROB model, iii) discriminative OOD detection trained on Vistas, and iv) discriminative OOD detection trained on Cityscapes and WildDash val.</center>

We show the confidence that the corresponding pixel is OOD in the second column of Figures 3, 4 and 5, where red denotes higher confidence. We see that the OOD responses obtained by the max-softmax approach do not correlate with epistemic uncertainty. Instead, they occur on semantic borders, small objects and distant parts of the scene, that is, on details which occur in most training images.

![](images/6_1.jpg)
<center>Figure 3: OOD detection in three of the five altered valid scenes from WildDash test (images #141, #148 and #151). These images were ignored in experiments labeled as "WD test selected". The columns correspond to: i) original image, ii) best result from Table 1, iii) discriminative OOD detection with the ROB model, iv) discriminative OOD detection trained on Vistas, and v) discriminative OOD detection trained on Cityscapes and WildDash val. Red denotes the confidence that the corresponding pixel is OOD, which can be interpreted as epistemic uncertainty.</center>

Table 2 shows OOD detection with the augmented primary model and trained confidence. This approach achieves a better mIoU performance than the basic segmentation model trained only on Cityscapes. This suggests that training with uncertainty alleviates overfitting.
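For concreteness, a per-pixel sketch of such a confidence-modulated loss is given below. It follows our reading of DeVries & Taylor (2018) adapted to dense prediction; the interpolation towards the ground truth and the names `conf` and `lam` are assumptions rather than the exact formulation used in the experiments.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_loss(seg_logits, conf, target, lam=0.1):
    """Per-pixel cross-entropy modulated by a trained confidence map.

    seg_logits: (N, C, H, W) segmentation logits.
    conf:       (N, 1, H, W) sigmoid confidences in (0, 1).
    target:     (N, H, W) integer class labels.
    lam:        weight of the confidence penalty (hyper-parameter lambda).
    """
    probs = torch.softmax(seg_logits, dim=1)
    one_hot = F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float()
    # Interpolate the prediction towards the ground truth by (1 - confidence),
    # so low confidence diminishes the loss of a wrong prediction.
    mixed = conf * probs + (1.0 - conf) * one_hot
    task_loss = -(one_hot * torch.log(mixed.clamp_min(1e-8))).sum(dim=1).mean()
    conf_loss = -torch.log(conf.clamp_min(1e-8)).mean()  # penalize low confidence
    return task_loss + lam * conf_loss
```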
Still, the uncertainty estimation itself proves to be the worst predictor of OOD pixels among all approaches considered in our experiments. This suggests that the confidence head is "confident" by default and must be taught to recognize the pixels which should be uncertain. Since this model performed poorly, we show neither its precision-recall curve nor its OOD segmentation results.

Table 3 shows OOD detection results for a model that has seen indoor scenes (OOD) and road-driving scenes (ID) by training on all four datasets from the ROB challenge. We perform OOD detection using two criteria: max traffic softmax (the maximum softmax value over ID classes) and sum traffic softmax (the sum of softmax values over ID classes).

<--- Page Split --->

Table 2: Average precision for OOD detection by the augmented primary model featuring a trained confidence head.

<table><tr><td rowspan="2">Training set</td><td colspan="2">WD test selection</td><td colspan="2">WD test complete</td></tr><tr><td>max-softmax</td><td>confidence</td><td>max-softmax</td><td>confidence</td></tr><tr><td>City</td><td>10.61</td><td>9.52</td><td>15.62</td><td>11.38</td></tr></table>

![](images/7_0.jpg)
<center>Figure 4: OOD detection in in-distribution WildDash test images. We use the same column arrangement as in Figure 3. Row three is interesting because it contains animals which are classified as ID by the model trained on WildDash val. This is likely due to the fact that WildDash val contains images with animals on the road (although not ducks). Notice finally that the model instance trained on WildDash val performs better on distorted images.</center>

The former is the OOD detection criterion from the original paper. The latter is the total probability of the ID classes; calculating it turns the ROB model into a discriminative OOD detector with ScanNet as the outlier dataset. There is a significant jump in average precision between these models and the approaches trained only on road-driving scenes. Sum traffic softmax works better than max traffic softmax, so we analyze it further. Interestingly, this model recognizes most pixels in the five artificially altered negative WildDash test images as ID (column 3, Figure 3). The model works well on ID images (Figure 4); however, it makes errors in some OOD images. The third column of Figure 5 shows that some pixels, like the ants (row 2), are recognized as ID samples. Interestingly, the model recognizes the pixels of people as ID even though they are located in an OOD context (row 3 in Figure 5). We also show the precision-recall curve for this model in the second column of Figure 2. The precision is high for low values of recall. Furthermore, the precision remains high along a greater range of recall values when using probabilities for OOD detection.

Table 3: Average precision for OOD detection with the classifier trained on all four datasets from the ROB 2018 challenge: Cityscapes trainval, KITTI train, WildDash val and ScanNet train. Detection of inlier classes (sum traffic softmax) works better than using model output uncertainty for inlier classes (max traffic softmax).
<table><tr><td>Training set</td><td>OOD approach</td><td>WD test selection</td><td>WD test complete</td></tr><tr><td>ROB 2018</td><td>max traffic softmax</td><td>65.64</td><td>51.29</td></tr><tr><td>ROB 2018</td><td>sum traffic softmax</td><td>69.19</td><td>54.71</td></tr></table>

Table 4 shows average precision for the proposed discriminative OOD detectors which are jointly trained on the ILSVRC dataset and road-driving images from different datasets. We start with the model instance which was trained using only Cityscapes images as ID examples. Interestingly, this instance performs poorly because it classifies all WildDash test images as OOD. This result indicates that the Cityscapes dataset encourages overfitting due to all images being acquired with the same camera.

<--- Page Split --->

![](images/8_0.jpg)
<center>Figure 5: OOD detection in negative WildDash test images. We use the same column arrangement as in Figure 3. The ROB model classifies all non-indoor images as ID, different from the models that have seen the ILSVRC dataset.</center>

The other two instances of the model perform better than all other approaches presented in this paper (including sum traffic softmax on the ROB dataset, indicating that ILSVRC is a better choice of outlier dataset than ScanNet). The model instance trained on Vistas performs significantly better than the model instance trained only on Cityscapes. Still, we obtain the best results with the instance that has seen the WildDash validation set. This suggests that a model that needs to work in challenging environments also needs to see challenging examples during training. Interestingly, these two instances recognize pixels from the artificially altered images as ID samples, which is evident from the drop of performance between the two columns in Table 4 as well as from columns 4 and 5 in Figure 3. Finally, these models do not recognize ID classes in an OOD context: people sitting in a room are classified as OOD samples, as shown in row 3 of Figure 5.

Table 4: Average precision for discriminative OOD detection on the WildDash test dataset. The OOD detection model is jointly trained on ILSVRC (OOD pixels) and road-driving images from different datasets (ID pixels).

<table><tr><td>Training set</td><td>WD test selection</td><td>WD test complete</td></tr><tr><td>city,img</td><td>32.11</td><td>24.83</td></tr><tr><td>city,wd,img</td><td>96.24</td><td>71.43</td></tr><tr><td>vistas,img</td><td>89.23</td><td>67.44</td></tr></table>

Precision-recall curves for the model instance trained on Vistas and the model instance trained on Cityscapes + WildDash can be seen in Figure 2, in columns 3 and 4 respectively. The curve for the model that has only seen the Vistas dataset slopes relatively constantly, while the curve for the model that has seen the WildDash validation set remains constant and high and then drops suddenly. This drop is due to the altered valid images shown in Figure 3. Finally, we show a few difficult cases in Figure 6 to discuss the space for improvement. Rather than visualizing the classification (which has sharp edges), we show the confidence that the pixel is OOD. The first two rows contain images which are clearly inliers; however, our discriminative models suspect that some of their parts are OOD. This is probably caused by the existence of ID classes in ILSVRC images (e.g. caravan, sports car, dome). Our models are not expressive enough to indicate which ILSVRC classes caused these errors. Images in rows 3 and 4 are ID images that contain OOD objects, in this case animals.
Future models would benefit from better handling of the overlap between ILSVRC and the primary training dataset. Furthermore, the predictions are very coarse due to the image-wide annotations of the training datasets. Finer pixel-level predictions would likely be obtained by training on images that contain both ID and OOD pixels.

### 4.7 RESULTS ON OTHER DATASETS

Table 5 shows how well the proposed OOD-detection models generalize to datasets which were not seen during training. Rows 1 and 3 show the difference between using Vistas and Cityscapes as the ID dataset. When using Vistas as ID, almost no OOD pixels are detected in Cityscapes.

<--- Page Split --->

On the other hand, when using Cityscapes as ID, most Vistas pixels are classified as OOD. This suggests that Cityscapes poorly represents the variety of traffic scenes. Row 2 shows that almost all Pascal VOC 2007 pixels are classified as OOD. This finding complements the results from Figure 5, and suggests that using ILSVRC as the outlier dataset generalizes well to other outlier datasets.

Table 5: Incidence of pixels detected as OOD on various datasets. PASCAL* denotes PASCAL VOC 2007 trainval without Cityscapes classes (bicycle, bus, car, motorbike, person, train).

<table><tr><td>Training set</td><td>Test set</td><td>OOD incidence</td></tr><tr><td>Vistas, ILSVRC</td><td>Cityscapes test</td><td>0.01%</td></tr><tr><td>Vistas, ILSVRC</td><td>PASCAL*</td><td>99.99%</td></tr><tr><td>Cityscapes, ILSVRC</td><td>Vistas val</td><td>93.76%</td></tr></table>

## 5 CONCLUSION

Graceful performance degradation in the presence of unforeseen scenery is a crucial capability for any real-life application of computer vision. Any system for recognizing images in the wild should at least be able to detect such situations in order to avoid disasters and fear of technology. We have considered image-wide OOD detection approaches which can be easily adapted for dense prediction in high resolution images. These approaches have delivered very low precision in our experiments because they are unable to ignore the contribution of aleatoric uncertainty in the primary model output. We have therefore proposed a novel approach which recognizes outliers as being more similar to some "background" dataset than to the training dataset of the primary model. Our experiments have resulted in a substantial improvement of OOD detection AP performance with respect to all previous approaches which are suitable for dense prediction in high resolution images. ILSVRC appears to be a reasonable background dataset candidate due to successful OOD detection in negative WildDash images that are (at least nominally) not represented in ILSVRC (white wall, two kinds of noise, anthill closeup, aquarium, etc.). Nevertheless, our study emphasizes the need for more comprehensive background datasets. Future work will address employing these results as a guide for better direction of the annotation effort, as well as further development of approaches for recognizing epistemic uncertainty in images and video.

![](images/9_0.jpg)
<center>Figure 6: Examples of OOD pixel detections in positive WildDash test images.
The columns correspond to: i) original image, ii) discriminative OOD detection trained on Vistas, and iii) discriminative OOD detection trained on Cityscapes and WildDash val. Red denotes the confidence that the corresponding pixel is OOD, which can be interpreted as epistemic uncertainty.</center>

<--- Page Split --->

## APPENDIX A DENSE OOD DETECTION ON ROAD DRIVING IMAGES WITH MIXED CONTENT

The WildDash dataset is the only publicly available dataset that provides OOD images. Unfortunately, the WildDash OOD content is annotated only at the image level. This makes WildDash unsuitable for testing the detection of unfamiliar objects in familiar settings. We therefore propose six new datasets for that purpose. We also propose an improved training procedure which allows the proposed discriminative OOD detection model to accurately predict the borders of OOD objects. This procedure is used to train a new instance of the discriminative OOD model which we evaluate in the experiments below. Finally, we present and discuss experiments which compare the AP performance across several OOD models and datasets.

## A.1 TEST DATASETS

In order to be able to evaluate how different OOD detection methods perform when OOD pixels are present in ID scenes, we create six new datasets. Three of these datasets include images which contain both ID and OOD pixels. We shall use these datasets for evaluating various OOD detection approaches. The remaining three datasets are designed for control experiments in which we shall explore whether the evaluated OOD detection approaches react to pasted ID content. We obtain the first two datasets by pasting Pascal VOC 2007 animals of different sizes into images from Vistas val. Pascal was chosen because it contains many densely annotated object instances which are out-of-distribution for road-driving scenes. We do not use Vistas train at this point since we wish to preserve it for training the OOD detector. The three control datasets are formed by pasting objects across images from road driving datasets. The final dataset contains a selection of Vistas images in which a significant number of pixels are labeled as the class 'ground animal'. The last dataset is the closest match to a real-world scenario of encountering an unexpected object while driving; however, the number of its images is rather small (hence the need for datasets obtained by pasting).

## A.1.1 PASCAL TO VISTAS 10%

We start by locating Pascal images with segmentation ground truth which contain any of the six animal classes: bird, cat, cow, dog, horse and sheep. We select 369 large Pascal objects from their original images using the pixel-level segmentation ground truth. For each selected object we choose a random image from Vistas val, resize the object to cover at least \(10\%\) of the image pixels and then paste the object at a random image location. This results in 369 combined images. Examples of the combined images are shown in column 1 of Figure 7.

## A.1.2 PASCAL TO VISTAS 1%

A possible issue with resizing objects before pasting might be that the OOD model may succeed in detecting the pasted objects by recognizing the resizing artifacts instead of the novelty. In order to address this issue, we form another dataset as follows. We iterate over all instances of Pascal objects; for each instance we choose a random image from Vistas val and paste the object without any resizing, but only if it takes up at least \(1\%\) of the image pixels. This results in 31 combined images. A rough sketch of the pasting step follows.
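The helper below pastes a segmented object into a scene image and returns the combined image together with the corresponding OOD mask. This is our illustration of the pasting step, not the authors' tooling; the function name, the PIL-based implementation and the assumption that masks are 0/255 grayscale images are ours.

```python
import random
import numpy as np
from PIL import Image

def paste_object(scene, obj_rgb, obj_mask, min_area=0.10):
    """Paste a segmented object into a scene image (rough sketch).

    scene:    PIL.Image, e.g. a Vistas validation image.
    obj_rgb:  PIL.Image with the object crop.
    obj_mask: PIL.Image (mode 'L', values 0/255), non-zero on the object.
    Returns the combined image and a binary OOD mask of the same size.
    """
    W, H = scene.size
    # Enlarge the object so it covers at least min_area of the scene pixels
    # (for the 1% variant the object would be pasted without resizing).
    target = min_area * W * H
    area = np.count_nonzero(np.array(obj_mask))
    s = max(1.0, (target / max(area, 1)) ** 0.5)
    size = (int(obj_rgb.width * s), int(obj_rgb.height * s))
    obj_rgb, obj_mask = obj_rgb.resize(size), obj_mask.resize(size)
    # Choose a random location where the object fits entirely.
    x = random.randint(0, max(W - obj_rgb.width, 0))
    y = random.randint(0, max(H - obj_rgb.height, 0))
    combined = scene.copy()
    combined.paste(obj_rgb, (x, y), obj_mask)   # mask acts as the alpha channel
    ood = Image.new("L", (W, H), 0)
    ood.paste(obj_mask, (x, y))                 # pasted pixels are labeled OOD
    return combined, ood
```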
This dataset is more difficult than the previous one since the OOD patches are much smaller. Examples can be seen in the first column of Figure 8.

## A.1.3 CITY TO CITY

We create this dataset by pasting a random object instance from Cityscapes val at a random location of a different random Cityscapes validation image. The only condition is that the object instance takes up at least \(0.5\%\) of the Cityscapes image. No preprocessing is performed before the pasting. Performance on this set indicates whether a model detects OOD pixels due to the different imaging conditions in which the patches were acquired. This dataset contains 288 images. Examples can be seen in the first column of Figure 9.

<--- Page Split --->

![](images/13_0.jpg)
<center>Figure 7: OOD detection in Vistas images with pasted Pascal objects that take up at least \(10\%\) of the image. The columns correspond to: i) original image, ii) max-softmax of the primary model (cf. Table 1), iii) OOD detection with the ROB model (cf. Table 3), iv) discriminative OOD detection trained on entire images from ILSVRC (OOD) and Vistas train (ID) (cf. Table 4), and v) discriminative OOD detection trained on entire ILSVRC images (OOD), and ILSVRC bounding boxes (OOD) pasted over Vistas images without ground animals (ID). Red denotes the confidence that the corresponding pixel is OOD, which can be interpreted as epistemic uncertainty. Max-softmax of the primary model detects borders. The model trained according to A.2 manages to accurately detect the OOD shape. The ROB model manages to detect the position of the pasted patch, while the discriminative model trained only on whole OOD images does not detect any of the pasted patches.</center>

![](images/13_1.jpg)
<center>Figure 8: OOD detection in Vistas images with pasted Pascal objects that take up at least \(1\%\) of the image. We use the same column arrangement and colors as in Figure 7. The model trained as described in A.2 is able to detect even the relatively small objects pasted into a similar background. The ROB model fails to detect the location of the pasted patch.</center>

## A.1.4 VISTAS TO CITY

We create this dataset by pasting a random object instance from Vistas val into a random image from Cityscapes val. The pasted instance has to take up at least \(0.5\%\) of the Cityscapes image. No preprocessing is performed before the pasting. Performance on this set indicates whether the model detects the different camera characteristics of the patch rather than real OOD pixels. This dataset contains 1543 images. Some examples are shown in Figure 10.

<--- Page Split --->

![](images/14_0.jpg)
<center>Figure 9: OOD detection in Cityscapes images with pasted Cityscapes instances that take up at least \(0.5\%\) of the image. We use the same column arrangement and colors as in Figure 7. None of the models accurately detect the pasted patches. The fourth model seems to react to the borders of pasted content (row 2).</center>

![](images/14_1.jpg)
<center>Figure 10: OOD detection in Cityscapes images with pasted Vistas instances that take up at least \(0.5\%\) of the image. We use the same column arrangement and colors as in Figure 7. The fourth model experiences trouble with atypical Cityscapes images (row 1) and detects the borders of the pasted patches.</center>

## A.1.5 SELF TO SELF

We create this dataset by pasting a randomly selected object instance from a Vistas image to a random location in the same image. The object instance has to take up at least \(0.5\%\) of the Vistas image.
No preprocessing is performed before the pasting. Performance on this set indicates whether the model is able to detect objects at unusual locations in the scene. This set contains 1873 images. Some examples can be seen in Figure 11.

## A.1.6 VISTAS ANIMALS

This dataset is a subset of Vistas training and validation images which contain instances labeled 'ground animal' that take up at least \(0.7\%\) of the image. This set is closest to the real-world scenario of encountering unknown objects in ID road driving scenes. Unlike in images with pasted Pascal animals, OOD detection is unable to succeed by recognizing pasting artifacts or different imaging conditions. This set contains 8 images. Three of those are shown in the first column of Figure 12.

## A.2 TRAINING DATASET

In order to be able to cope with images containing both ID and OOD pixels, we make the following changes to the training dataset. First, we remove all Vistas images which contain instances of the class 'ground animal' from the training split, regardless of the instance size. We denote the Vistas dataset without animals as "vistas-a". Then, we select the 544 546 ILSVRC images in which bounding box annotation is available. Each of the selected ILSVRC images is used only once during training, either as i) a single image or ii) a combined image obtained by pasting the resized bounding box at a random location of a random Vistas train image. In the former case, the bounding box is labeled as OOD while the rest of the ILSVRC image is ignored. In the latter case, the bounding box is resized to cover \(5\%\) of the Vistas image pixels, the resulting ILSVRC pixels are labeled as OOD and the rest is labeled as ID. In both cases the resulting image is resized so that the shorter dimension equals 512 pixels and randomly cropped to the resolution of \(512 \times 512\).

<--- Page Split --->

![](images/15_0.jpg)
<center>Figure 11: OOD detection in Vistas images that contain objects copied and pasted from the same image. We use the same column arrangement and colors as in Figure 7. None of the models detect the pasted patches.</center>

![](images/15_1.jpg)
<center>Figure 12: OOD detection in Vistas images that contain the class 'ground animal'. We use the same column arrangement and colors as in Figure 7. Only the fourth model manages to accurately segment an animal in row 1, and it reacts to animals in the other two images. The ROB model detects some parts as OOD; however, those regions do not correspond to animal locations.</center>

## A.3 MODEL AND TRAINING

We use the same fully convolutional discriminative OOD model as described in Section 4.3. The model is trained according to the procedure described in Section 4.4, except that the training dataset is composed as described in A.2.

<--- Page Split --->

## A.4 EXPERIMENTAL RESULTS

Tables 6 and 7 show the average precision performance for all OOD models described in the paper, together with the new instance of the discriminative model which is trained on the training dataset described in Section A.2 of this appendix. The models are evaluated on all six test datasets presented in Section A.1. We compare the achieved performance with the respective results on the WildDash test dataset, which we copy from Section 4.

Table 6: Average precision for discriminative OOD detection on the test datasets with images that have both ID and OOD pixels.
Labels stand for max-softmax of the primary model (ms), max-softmax of the model with trained confidence (ms-conf), the primary model trained for the ROB challenge (ROB), and the discriminative model (discrim). We show results for the following training sets: WildDash (wd), Cityscapes (city), the four ROB 2018 challenge datasets (ROB), full ImageNet (img), the subset of ImageNet with annotated bounding boxes (img_bb), the full Vistas dataset (vistas) and the Vistas dataset with images containing ground animals removed (vistas-a).

<table><tr><td>Model</td><td>Training set</td><td>PascalVistas10</td><td>PascalVistas1</td><td>VistasAnimals</td><td>Wilddash selection</td></tr><tr><td>ms</td><td>city</td><td>28.81</td><td>9.91</td><td>6.05</td><td>10.09</td></tr><tr><td>ms</td><td>city, wd</td><td>34.27</td><td>8.8</td><td>6.79</td><td>17.62</td></tr><tr><td>ms-conf</td><td>city</td><td>26.09</td><td>8.04</td><td>5.94</td><td>10.61</td></tr><tr><td>ROB</td><td>ROB</td><td>25.65</td><td>4.55</td><td>2.96</td><td>69.19</td></tr><tr><td>discrim</td><td>city, img</td><td>34.07</td><td>3.19</td><td>2.28</td><td>32.11</td></tr><tr><td>discrim</td><td>city, wd, img</td><td>24.46</td><td>3.19</td><td>4.59</td><td>96.24</td></tr><tr><td>discrim</td><td>vistas, img</td><td>13.14</td><td>2.39</td><td>2.4</td><td>89.23</td></tr><tr><td>discrim</td><td>vistas-a, img_bb</td><td>87.87</td><td>78.58</td><td>25.61</td><td>68.59</td></tr></table>

Figures 7, 8, 9, 10, 11 and 12 show the responses of OOD detection for various models. Red denotes the confidence that the corresponding pixel is OOD, which can also be interpreted as epistemic uncertainty. The columns in these figures correspond to: i) original image, ii) max-softmax of the primary model (cf. Table 1), iii) OOD detection with the ROB model (cf. Table 3), iv) discriminative OOD detection trained on entire images from ILSVRC (OOD) and Vistas train (ID) (cf. Table 4), and v) discriminative OOD detection trained on entire ILSVRC images (OOD), and ILSVRC bounding boxes (OOD) pasted over Vistas images without ground animals (ID), as described in Section A.2. These results once again show that the max-softmax approach predicts high uncertainty on object borders. Both the ROB model and the discriminative model trained on entire images fail to detect OOD patches in many images (Figures 7, 8 and 12). Poor performance of the ROB model is expected since its training datasets do not include animals. Poor performance of the discriminative model trained on entire images is also understandable since none of its training images had a border between ID and OOD regions. The discriminative model which we train according to Section A.2 delivers the best overall performance. It is able to detect OOD patches even on very small pasted objects (cf. Figure 8) and genuine animals in Vistas images (cf. Figure 12). We note that this model occasionally detects borders of ID patches (row 2 in Figure 9 and row 2 in Figure 10), which suggests that the results on PascalToVistas may be a little too optimistic. We also note that this model sometimes misclassifies parts of Cityscapes images. Genuine Vistas images with ground animals (VistasAnimals) constitute the most difficult dataset for all models; however, the discriminative model trained according to Section A.2 clearly achieves the best performance (cf. Figure 12, row 1). Table 7 shows average precision for pasted content detection on the three control datasets.
The AP on control datasets indicates whether the model is able to distinguish between the ID image and the pasted ID region. High AP on these datasets means that the corresponding OOD model detects differences in imaging conditions or unexpected object locations between the ID image and the pasted ID patch. High performance on control datasets would indicate that success on the PascalToVistas datasets is the result of detecting the process of pasting instead of the novelty of the Pascal classes. The score of the best discriminative model indeed indicates that part of its success on the PascalToVistas datasets comes from recognizing pasting interventions.

<--- Page Split --->

Table 7: AP for detection of pasted content in the three control datasets. Labels stand for max-softmax of the primary model (ms), max-softmax of the model with trained confidence (ms-conf), the primary model trained for the ROB challenge (ROB), and the discriminative model (discrim). We show results for the following training sets: WildDash (wd), Cityscapes (city), the four ROB 2018 challenge datasets (ROB), full ImageNet (img), the subset of ImageNet with annotated bounding boxes (img_bb), the full Vistas dataset (vistas) and the Vistas dataset with images containing ground animals removed (vistas-a).

<table><tr><td>Model</td><td>Training set</td><td>CityCity</td><td>VistasCity</td><td>Self</td></tr><tr><td>ms</td><td>city</td><td>3.96</td><td>7.42</td><td>5.74</td></tr><tr><td>ms</td><td>city, wd</td><td>3.49</td><td>6.81</td><td>7.42</td></tr><tr><td>ms-conf</td><td>city</td><td>3.61</td><td>6.69</td><td>5.34</td></tr><tr><td>ROB</td><td>ROB</td><td>4.64</td><td>13.37</td><td>5.95</td></tr><tr><td>discrim</td><td>city, img</td><td>2.15</td><td>45.48</td><td>2.92</td></tr><tr><td>discrim</td><td>city, wd, img</td><td>2.68</td><td>42.82</td><td>3.21</td></tr><tr><td>discrim</td><td>vistas, img</td><td>2.39</td><td>9.14</td><td>3.56</td></tr><tr><td>discrim</td><td>vistas-a, img_bb</td><td>7.62</td><td>34.12</td><td>19.74</td></tr></table>

## A.5 CONCLUSION

Experiments show that training on ILSVRC bounding boxes pasted over Vistas images delivers fair open-set dense-prediction performance. In particular, our model succeeds in detecting animals in road-driving images although it was not specifically trained for that task, while outperforming all previous approaches by a wide margin. We believe that these results strengthen the conclusions from the main article and provide useful insights for future work in estimating epistemic uncertainty in images.

<--- Page Split --->

## APPENDIX B DENSE OOD DETECTION ON UCSD PED2 DATASET

We further explore the performance of the proposed dense OOD-detection approach on the UCSD Ped2 dataset, which contains video sequences of pedestrian scenes. The test subset contains 12 video clips in which each image is annotated with pixel-level binary masks which identify regions with anomalies. The train subset contains no anomalies. It is similar to the VistasAnimals dataset (Section A.1.6) in that it contains images that are a true mix of anomalous and non-anomalous regions. However, the problem of OOD detection is not the same as UCSD anomaly detection, since UCSD anomalies are connected to motion. For example, a person walking next to a bike is not an anomaly. On the other hand, a person riding a bike and a person walking on the grass are both considered anomalous.
Still, it is interesting to see how the discriminative OOD detector behaves on a dataset quite different from the previously described road-driving datasets. Furthermore, since there is overlap between UCSD anomalies and OOD detection, AP can be obtained and used to get a sense of the behavior of the detector.

## B.1 UCSD PED2 DATASET

The UCSD dataset contains grayscale video of pedestrian walkways taken with a stationary camera. We focus on video clips from the Ped2 subset, which contains 16 training and 12 testing video clips. All these clips have been acquired from the same viewpoint and by the same camera, so that the pedestrian movement is parallel to the image plane. The Ped2 test subset contains anomalies which are not present in Ped2 train; these anomalies correspond to bikers, skaters and small carts. Examples of images from the UCSD Ped2 dataset are shown in the first column of Figure 13.

![](images/18_0.jpg)
<center>Figure 13: OOD detection in the UCSD Ped2 dataset. The first column contains original images, the second ground truth annotations, and the third OOD discriminator output. Red denotes the confidence that the corresponding pixel is OOD. The model easily detects the cart as OOD. It detects the wheels of the bicycle as OOD, but bike riders are detected as ID. The skater is also detected as ID.</center>

## B.2 TRAINING DATASET

The training dataset is built as described in Section A.2, except that we use images from the UCSD Ped2 training dataset in place of Vistas images.

<--- Page Split --->

## B.3 MODEL AND TRAINING

We use the same setup for training on UCSD as described in Section A.3, but we also randomly convert half of the training images to grayscale.

## B.4 EXPERIMENTAL RESULTS

Figure 13 shows the response of the discriminative model trained on the UCSD Ped2 dataset. Table 8 shows the result of the discriminative model on the complete UCSD Ped2 test dataset, as well as on three of its test sequences: 1, 4, and 8. Sequence 1 contains a cyclist and a relatively large number of pedestrians, sequence 4 contains a small cart, while sequence 8 contains a cyclist and a skater but fewer pedestrians. The AP is highest on sequence 4, where the motion anomaly occurs on an object which is clearly OOD. We report lower AP on sequences with cyclists and skaters. There are two causes for this deterioration. Firstly, our model is unable to detect skaters as OOD due to their similarity with pedestrians. Secondly, cyclists (together with their bikes) are labeled as anomalies in the ground truth annotations, while bikes are not necessarily labeled as anomalies if they are not being ridden. Our model recognizes the majority of bike pixels as OOD (ridden or not), while the riders themselves are recognized as ID, again due to their similarity with pedestrians. This discrepancy considerably decreases our AP.

Table 8: AP for OOD detection on the whole UCSD Ped2 test dataset, as well as on sequences 1, 4 and 8 from that dataset, denoted as S1, S4 and S8 respectively.

<table><tr><td>Model</td><td>Training set</td><td>UCSD Ped2</td><td>UCSD Ped2 S1</td><td>UCSD Ped2 S4</td><td>UCSD Ped2 S8</td></tr><tr><td>discrim</td><td>UCSD, img_bb</td><td>48.49</td><td>37.08</td><td>83.60</td><td>40.24</td></tr></table>

## B.5 CONCLUSION

Experiments show that unexpected objects can be detected in pedestrian walkways by training on ILSVRC bounding boxes pasted over UCSD images.
This further supports the conclusion that ILSVRC is a good choice as an outlier distribution for a wide variety of training datasets.
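For reference, the training-set construction used throughout this appendix (B.2 pastes ILSVRC bounding-box crops over inlier images and labels pasted pixels as OOD; B.3 additionally converts half of the training images to grayscale) can be sketched as follows. The array shapes, the helper name, and the ordering of the grayscale step are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def make_training_pair(inlier_img, outlier_crop, rng, gray_prob=0.5):
    """Sketch of the pasted-negative construction: an ILSVRC bounding-box
    crop is pasted at a random location of an inlier image and the pasted
    pixels are labeled OOD (1). Assumes HxWx3 arrays with the crop strictly
    smaller than the image; the 0.5 grayscale probability follows B.3."""
    img = inlier_img.copy()
    if rng.random() < gray_prob:                  # random grayscale conversion
        gray = img.mean(axis=2, keepdims=True)
        img = np.repeat(gray, 3, axis=2).astype(inlier_img.dtype)
    h, w = outlier_crop.shape[:2]
    y = int(rng.integers(0, img.shape[0] - h))
    x = int(rng.integers(0, img.shape[1] - w))
    img[y:y + h, x:x + w] = outlier_crop          # paste the OOD patch
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 1                    # 1 = pasted (OOD) pixel
    return img, mask

rng = np.random.default_rng(0)
image, label = make_training_pair(
    np.zeros((240, 360, 3), dtype=np.uint8),      # placeholder inlier frame
    np.full((60, 80, 3), 255, dtype=np.uint8),    # placeholder ILSVRC crop
    rng)
```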
reject
Reject
4.666667
ICLR_2019_paper_1083
iclr
2,019
## POINT CLOUD GAN Anonymous authors Paper under double-blind review ## ABSTRACT Generative Adversarial Networks (GAN) can achieve promising performance on learning complex data distributions on different types of data. In this paper, we first show that a straightforward extension of an existing GAN algorithm is not applicable to point clouds, because the constraint required for discriminators is undefined for set data. We propose a two-fold modification to a GAN algorithm to be able to generate point clouds (PC-GAN). First, we combine ideas from hierarchical Bayesian modeling and implicit generative models by learning a hierarchical and interpretable sampling process. A key component of our method is that we train a posterior inference network for the hidden variables. Second, PC-GAN defines a generic framework that can incorporate many existing GAN algorithms. We further propose a sandwiching objective, which results in a tighter Wasserstein distance estimate than the commonly used dual form in WGAN. We validate our claims on the ModelNet40 benchmark dataset and observe that PC-GAN trained by the sandwiching objective achieves better results on test data than existing methods. We also conduct studies on several tasks, including generalization on unseen point clouds, latent space interpolation, classification, and image to point cloud transformation, to demonstrate the versatility of the proposed PC-GAN algorithm. ## 1 INTRODUCTION A fundamental problem in machine learning is, given a data set, to learn a generative model that can efficiently generate arbitrarily many new sample points from the domain of the underlying distribution (Bishop, 2006). Deep generative models use deep neural networks as a tool for learning complex data distributions (Kingma & Welling, 2013; Oord et al., 2016; Goodfellow et al., 2014). In particular, Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) have drawn attention because of their success in many applications. Compelling results have been demonstrated on different types of data, including text, images, and videos (Lamb et al., 2016; Karras et al., 2017; Vondrick et al., 2016). Their wide range of applicability has also been shown in many important problems, including data augmentation (Salimans et al., 2016), image style transformation (Zhu et al., 2017), image captioning (Dai et al., 2017), and art creation (Kang, 2017). Recently, capturing 3D information has been garnering attention. There are many different data types for 3D information, such as CAD, 3D meshes, and point clouds. 3D point clouds are becoming popular since they store more information than 2D images and sensors capable of collecting point clouds have become more accessible, including Lidar on self-driving cars, Kinect for Xbox, and the face identification sensors on phones. Compared to other formats, point clouds can be easily represented as a set of points, which has several advantages, such as permutation invariance of the set members. Algorithms that can effectively learn from this type of data are an emerging field (Qi et al., 2017a,b; Zaheer et al., 2017; Kalogerakis et al., 2017; Fan et al., 2017). However, compared to supervised learning, unsupervised generative models for 3D data are still underexplored (Achlioptas et al., 2017; Oliva et al., 2018). Extending existing GAN frameworks to point clouds, or more generally set data, is not straightforward. In this paper, we begin by formally defining the problem and discussing its difficulty (Section 2).
Circumventing the challenges, we propose a deep generative adversarial network (PC-GAN) with a hierarchical sampling and inference network for point clouds. The proposed architecture learns a stochastic procedure which can generate new point clouds and draw samples from the generated point clouds without explicitly modeling the underlying density function (Section 3). The proposed PC-GAN is a generic algorithm which can incorporate many existing GAN variants. By utilizing the properties of point clouds, we further propose a sandwiching objective that considers both upper and lower bounds of the Wasserstein distance estimate, which can lead to a tighter approximation (Section 3.2). Evaluation on ModelNet40 shows the excellent generalization capability of PC-GAN. We first demonstrate that we can sample from the learned model to generate new point clouds and that the latent representations learned by the inference network provide meaningful interpolations between point clouds. We then show conditional generation results on unseen classes of objects, which demonstrates the superior generalization ability of PC-GAN. Lastly, we also provide several interesting studies, such as classification and point cloud generation from images (Section 5). ## 2 PROBLEM DEFINITION AND DIFFICULTY A point cloud for an object \(\theta\) is a set of \(n\) low dimensional vectors \(X = \{x_{1},\dots,x_{n}\}\) with \(x_{i}\in \mathbb{R}^{d}\), where \(d\) is usually 3 and \(n\) can be infinite. \(M\) different objects can be described as a collection of point clouds \(X^{(1)},\ldots ,X^{(M)}\). A generative model for sets should be able to: (1) sample entirely new sets according to \(p(X)\), and (2) sample arbitrarily many more points from the distribution of a given set, i.e. \(x\sim p(x|X)\). Based on the de Finetti theorem, we can factor the probability with some suitably defined \(\theta\), such as an object representation of the point cloud, as \(p(X) = \int_{\theta}\prod_{i = 1}^{n}p(x_{i}|\theta)p(\theta)d\theta\). In this view, the factoring can be understood as follows: given an object \(\theta\), the points \(x_{i}\) in the point cloud can be considered as i.i.d. samples from \(p(x|\theta)\), an unknown latent distribution representing object \(\theta\). The joint likelihood can be expressed as: \[p(X,\theta) = \underbrace{p(\theta)}_{\mathrm{object}}\prod_{i = 1}^{n}p(x_{i}|\theta) \quad (1)\] ![](images/1_0.jpg) <center>Figure 1: Natural extension of GAN to handle set data does not work. </center> One approach is to model the distribution of the collection of point clouds together, i.e., \(\{\{x_{i}^{(1)}\}_{i = 1}^{n},\ldots ,\{x_{i}^{(m)}\}_{i = 1}^{n}\}\). In this setting, a naive application of a traditional GAN is possible by treating each point cloud as a finite dimensional vector, fixing the number and order of the points (reducing the problem to instances in \(\mathbb{R}^{n\times 3}\)), with a DeepSets (Zaheer et al., 2017) classifier as the discriminator to distinguish real sets from fake sets. However, this approach does not work in practice because the integral probability metric (IPM) guarantees behind the traditional GAN no longer hold (e.g., in the case of Arjovsky et al. (2017), 1-Lipschitz functions over sets are not well-defined). The probabilistic divergence approximated by a DeepSets classifier might therefore be ill-defined. Counterexamples that break the IPM guarantees can easily be found, as we show next.
Counter Example Consider a simple GAN (Goodfellow et al., 2014) with a DeepSets classifier as the discriminator. In order to generate coherent sets of variable size, we consider a generator \(G\) having two noise sources: \(u\) and \(z_{i}\). To generate a set, \(u\) is sampled once and \(z_{i}\) is sampled for \(i = 1,2,\ldots ,n\) to produce \(n\) points in the generated set. Intuitively, fixing the first noise source \(u\) selects a set and ensures the points generated by repeated sampling of \(z_{i}\) are coherent and belong to the same set. The setup is depicted in Figure 1. In this setup, the GAN minimax problem would be: \[\min_{G}\max_{D}\ \mathbb{E}_{\theta \sim p(\theta),\, x_{i}\sim p(x|\theta)}\left[\log D(\{x_{i}\})\right] + \mathbb{E}_{u\sim p(u),\, z_{i}\sim p(z)}\left[\log \left(1 - D(\{G(u,z_{i})\})\right)\right] \quad (2)\] Now consider the case when there exists an 'oracle' mapping \(T\) which maps each sample set deterministically to the object it originated from, i.e. \(\exists T:T(\{x_{i}\}) = \theta\). A valid example is when different \(\theta\) lead to conditional distributions \(p(x|\theta)\) with non-overlapping supports. Let \(D = D^{\prime}\circ T\) and let \(G\) ignore \(z\); then the optimization task becomes: \[\begin{array}{rl} & \min_{G}\max_{D^{\prime}}\ \mathbb{E}_{\theta \sim p(\theta),\, x_{i}\sim p(x|\theta)}\left[\log D^{\prime}(T(\{x_{i}\}))\right] + \mathbb{E}_{u\sim p(u),\, z_{i}\sim p(z)}\left[\log \left(1 - D^{\prime}(T(\{G(u,z_{i})\}))\right)\right]\\ \Rightarrow & \min_{G}\max_{D^{\prime}}\ \mathbb{E}_{\theta \sim p(\theta)}\left[\log D^{\prime}(\theta)\right] + \mathbb{E}_{u\sim p(u)}\left[\log \left(1 - D^{\prime}(T(\{G(u)\}))\right)\right] \end{array} \quad (3)\] Thus, we can achieve the lower bound \(-\log (4)\) by only matching the \(p(\theta)\) component, while the conditional \(p(x|\theta)\) is allowed to remain arbitrary. So simply using a DeepSets classifier without any constraints in a simple GAN to handle sets does not lead to a valid generative model. ## 3 PROPOSED METHOD As described in Section 2, directly learning point cloud generation under a GAN formulation is difficult. However, given \(\theta\), learning \(p(x|\theta)\) is the simpler task of learning a 3-dimensional distribution. Given two point clouds, one popular heuristic distance between them is the Chamfer distance (Achlioptas et al., 2017). On the other hand, if we treat each point cloud as a 3-dimensional distribution, we can adopt a broader class of probabilistic divergences for comparing them. Instead of learning explicit densities (Jian & Vemuri, 2005; Strom et al., 2010; Eckart et al., 2015), we are interested in implicit generative models with a GAN-like objective (Goodfellow et al., 2014), which has ![](images/2_0.jpg) <center>Figure 2: Overview of PC-GAN. </center> been demonstrated to learn complicated distributions. Formally, given \(\theta\), we train a generator \(G_{x}(z,\theta)\) such that \(x = G_{x}(z,\theta)\), where \(z\sim p(z)\).
The generator \(G_{x}(z,\theta)\) is trained so that its distribution \(\mathbb{G}\) matches \(p(x|\theta)\), denoted as \(\mathbb{P}\), by optimizing a probabilistic divergence \(D(\mathbb{P}\| \mathbb{G})\) between the two. The full objective can be written as \(\mathbb{E}_{\theta \sim p(\theta)}\left[\min_{G_{x}}D(\mathbb{P}\| \mathbb{G})\right]\). Inference Although GANs have been extended to learn conditional distributions (Mirza & Osindero, 2014; Isola et al., 2017), they require conditioning variables to be observed, such as a one-hot label or a given image. Our \(\theta\), instead, is an unobserved latent variable for modeling different objects, which we need to infer during training. The proposed algorithm has to learn the inference network \(Q(X)\approx \theta\) concurrently with learning \(p(x|\theta)\). Since \(X\) is a set of points, we can adopt Qi et al. (2017a); Zaheer et al. (2017) for modeling \(Q\). We provide more discussion on this topic in Appendix A.1. Hierarchical Sampling After training \(G_{x}\) and \(Q\), we use the trained \(Q\) to collect the inferred \(Q(X)\) and train a generator \(G_{\theta}\) with \(G_{\theta}(u)\sim p(\theta)\) for the higher level of hierarchical sampling. Here \(u\sim p(u)\) is another noise source independent of \(z\). In addition to layer-wise training, joint training could further boost performance. The full generative process for sampling one point cloud can be represented as \(\{x_{i}\}_{i = 1}^{n} = \{G(z_{i},u)\}_{i = 1}^{n} = \{G_{x}(z_{i},G_{\theta}(u)) \}_{i = 1}^{n}\), where \(z_{1},\ldots ,z_{n}\sim p(z)\) and \(u\sim p(u)\). An overview of the proposed algorithm for point cloud generation (PC-GAN) is shown in Figure 2. ### 3.1 DIFFERENT DIVERGENCES FOR MATCHING POINT CLOUDS To train the generator \(G_{x}\) using a GAN-like objective for point clouds, we need a discriminator \(f(\cdot)\) to distinguish generated samples from true samples conditioned on \(\theta\). Combined with the inference network \(Q(X)\) discussed above, the objective with IPM-based GANs can be written as \[\mathbb{E}_{\theta \sim p(\theta)}\left[\min_{G_{x},Q}\max_{f\in \Omega_{f}}\mathbb{E}_{x\sim p(X|\theta)}\left[f(x)\right] - \mathbb{E}_{z\sim p(z),X\sim p(X|\theta)}\left[f(G_{x}(z,Q(X)))\right]\right], \quad (4)\] where \(\Omega_{f}\) is the constraint for different probabilistic distances, such as the 1-Lipschitz ball (Arjovsky et al., 2017), the \(L^{2}\) ball (Mroueh & Sercu, 2017) or the Sobolev ball (Mroueh et al., 2017). ### 3.2 TIGHTER SOLUTIONS VIA SANDWICHING In our setting, each point \(x_{i}\) in the point cloud can be considered to correspond to a single image when we train GANs over images. An example is illustrated in Figure 3, where samples from MMD-GAN (Li et al., 2017a) trained on CelebA consist of both good and bad faces. For images, quality evaluation primarily focuses on the coherence of individual images, and the few bad ones are usually left out. For point clouds, in contrast, we need many sampled points together to obtain the representation of an object, and the presence of outlier points degrades the quality of the object. Thus, when training a generative model for point clouds, we need to ensure a much lower distance \(D(\mathbb{P}\| \mathbb{G})\) between the true distribution \(\mathbb{P}\) and the generator distribution \(\mathbb{G}\) than would be needed for images.
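To make the two-level sampling process described above concrete, the following is a minimal PyTorch sketch of the generative path \(\{x_i\} = \{G_x(z_i, G_\theta(u))\}\). All layer sizes and architectures here are illustrative placeholders, not the configurations used in the experiments (see Appendix F).

```python
import torch
import torch.nn as nn

class HierarchicalSampler(nn.Module):
    """Two-level PC-GAN sampling: u ~ p(u) picks an object code
    theta = G_theta(u); each point is x_i = G_x(z_i, theta), z_i ~ p(z)."""
    def __init__(self, u_dim=64, z_dim=16, theta_dim=128, hidden=256):
        super().__init__()
        # Top level: object generator G_theta.
        self.g_theta = nn.Sequential(
            nn.Linear(u_dim, hidden), nn.ReLU(), nn.Linear(hidden, theta_dim))
        # Point level: G_x maps (z_i, theta) to a single 3D point.
        self.g_x = nn.Sequential(
            nn.Linear(z_dim + theta_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.u_dim, self.z_dim = u_dim, z_dim

    def forward(self, n_points):
        u = torch.randn(1, self.u_dim)           # one draw of u per object
        theta = self.g_theta(u)                  # theta = G_theta(u)
        z = torch.randn(n_points, self.z_dim)    # fresh z_i for every point
        theta = theta.expand(n_points, -1)       # same theta for all points
        return self.g_x(torch.cat([z, theta], dim=1))  # (n_points, 3)

sampler = HierarchicalSampler()
cloud = sampler(2048)   # arbitrarily many i.i.d. points from one object
```

Because \(G_x\) emits one 3-dimensional point per noise draw, the model size is independent of the number of points, matching the discussion in Sections 4 and 5.1.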
We begin by noting that the popular Wasserstein GAN (Arjovsky et al., 2017) aims to optimize \(G\) by \(\min w(\mathbb{P},\mathbb{G})\), where \(w(\mathbb{P},\mathbb{G})\) is the Wasserstein distance between the true distribution \(\mathbb{P}\) and the generated distribution \(\mathbb{G}\) of \(G\). Many GAN works (e.g. Arjovsky et al. (2017)) approximate \(w(\mathbb{P},\mathbb{G})\) in its dual form (a maximization problem), such as (4), by neural networks. The resulting estimate \(W_{L}(\mathbb{P},\mathbb{G})\) is a lower bound of the true Wasserstein distance, as neural networks can only recover a subset of the 1-Lipschitz functions (Arora et al., 2017) required in the dual form. ![](images/3_0.jpg) <center>Figure 3: Connection between good/bad points and faces generated from a GAN. </center> However, finding a lower bound \(W_{L}(\mathbb{P},\mathbb{G})\) for \(w(\mathbb{P},\mathbb{G})\) may not be an ideal surrogate for solving the minimization problem \(\min w(\mathbb{P},\mathbb{G})\). In the optimal transport literature, the Wasserstein distance is usually estimated by an approximate matching cost, \(W_{U}(\mathbb{P},\mathbb{G})\), which gives us an upper bound of the true Wasserstein distance. We propose to combine, in general, a lower bound estimate \(W_{L}\) and an upper bound estimate \(W_{U}\) by sandwiching the solution between the two, i.e. we solve the following minimization problem: \[\min_{G} W_{U}(\mathbb{P},\mathbb{G}) \qquad \text{s.t.} \quad W_{U}(\mathbb{P},\mathbb{G}) - W_{L}(\mathbb{P},\mathbb{G}) < \lambda \quad (5)\] The problem can be simplified and solved using the method of Lagrange multipliers as follows: \[\min_{G} W_{s}(\mathbb{P},\mathbb{G}) := (1 - s)W_{U}(\mathbb{P},\mathbb{G}) + sW_{L}(\mathbb{P},\mathbb{G}) \quad (6)\] By solving the new sandwiched problem (6), we obtain, under certain conditions, a better estimate of the Wasserstein distance, as shown in the following lemma: Lemma 1. Suppose we have two approximators to the Wasserstein distance: an upper bound \(W_{U}\) and a lower bound \(W_{L}\), such that \(\forall \mathbb{P},\mathbb{G}:(1 + \epsilon_{1})w(\mathbb{P},\mathbb{G})\leq W_{U}(\mathbb{P},\mathbb{G})\leq (1 + \epsilon_{2})w(\mathbb{P},\mathbb{G})\) and \(\forall \mathbb{P},\mathbb{G}:(1 - \epsilon_{2})w(\mathbb{P},\mathbb{G})\leq W_{L}(\mathbb{P},\mathbb{G})\leq (1 - \epsilon_{1})w(\mathbb{P},\mathbb{G})\) respectively, for some \(\epsilon_{2} > \epsilon_{1} > 0\) and \(\epsilon_{1} > \epsilon_{2} / 3\). Then, using the sandwiched estimator \(W_{s}\) from (6), we can achieve a tighter estimate of the Wasserstein distance than using either estimator alone, i.e. \[\exists s:|W_{s}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|< \min \{|W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|,|W_{L}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|\} \quad (7)\] #### 3.2.1 UPPER AND LOWER BOUND IMPLEMENTATION For \(W_{L}\), we can adopt many GAN variants (Arjovsky et al., 2017; Gulrajani et al., 2017; Mroueh & Sercu, 2017).
For \(W_{U}\), we use Bertsekas (1985), which gives a fast \(\epsilon\)-approximation of the Wasserstein distance estimate in primal form without solving a non-trivial linear program. We remark that estimating the Wasserstein distance \(w(\mathbb{P},\mathbb{G})\) with finite samples via its primal form is only favorable for low dimensional data, such as point clouds. The error of the empirical estimate in the primal form is \(O(1 / n^{1 / d})\) (Weed & Bach, 2017). When the dimension \(d\) is large (e.g. images), we cannot accurately estimate \(w(\mathbb{P},\mathbb{G})\) in the primal form, nor its upper bound, with a small minibatch. For a detailed discussion of finding the upper and lower bounds, please refer to Appendix A.2 and A.3. ## 4 RELATED WORKS The Generative Adversarial Network (Goodfellow et al., 2014) aims to learn a generator that can sample data following the data distribution. Compelling results on learning complex data distributions with GANs have been shown on images (Karras et al., 2017), speech (Lamb et al., 2016), text (Yu et al., 2016; Hjelm et al., 2017), video (Vondrick et al., 2016) and 3D voxels (Wu et al., 2016). However, GAN algorithms for 3D point clouds are still underexplored (Achlioptas et al., 2017). Many alternative objectives for training GANs have been studied. Most of them are dual forms of \(f\)-divergences (Goodfellow et al., 2014; Mao et al., 2017; Nowozin et al., 2016), integral probability metrics (IPMs) (Zhao et al., 2016; Li et al., 2017a; Arjovsky et al., 2017; Gulrajani et al., 2017) or IPM extensions (Mroueh & Sercu, 2017; Mroueh et al., 2017). Genevay et al. (2018) learn the generative model via an approximated primal form of the Wasserstein distance (Cuturi, 2013). Instead of training a generative model on the data space directly, one popular approach is to combine it with an autoencoder (AE), which is called an adversarial autoencoder (AAE) (Makhzani et al., 2015). AAE constrains the encoded data to follow a normal distribution via a GAN loss; it is similar to the VAE (Kingma & Welling, 2013) with the KL-divergence on the latent space replaced by a GAN loss. Tolstikhin et al. (2017) provide a theoretical explanation for AAE by connecting it with the primal form of the Wasserstein distance. Another variant of AAE trains a second generative model to learn the distribution of the encoded data instead of enforcing it to be similar to a known distribution (Engel et al., 2017; Kim et al., 2017). Achlioptas et al. (2017) explore an AAE variant for point clouds. They use a specially-designed encoder network (Qi et al., 2017a) to learn a compressed representation for point clouds before training a GAN on the latent space. However, their decoder is restricted to be an MLP which generates a fixed number \(m\) of points, where \(m\) has to be pre-defined. That is, the output dimension of their decoder is fixed to be \(3m\) for 3D point clouds, while the output of the proposed \(G_{x}\) is only 3 dimensional and \(G_{x}\) can generate arbitrarily many points by sampling different random noise \(z\) as input. Yang et al. (2018); Groueix et al. (2018b) propose decoders similar to \(G_{x}\) with fixed grids to break the aforementioned limitation of Achlioptas et al. (2017), but they use the heuristic Chamfer distance without any theoretical guarantee and do not exploit generative models for point clouds. The proposed PC-GAN can also be interpreted as an encoder-decoder formulation. However, the underlying interpretation is different.
We start from the de Finetti theorem to learn both \(p(X|\theta)\) and \(p(\theta)\) with an inference network interpretation of \(Q\), while Achlioptas et al. (2017) focus on learning \(p(\theta)\) without modeling \(p(X|\theta)\). Lastly, GANs for learning conditional distributions (conditional GANs) have been studied on images with single conditioning (Mirza & Osindero, 2014; Pathak et al., 2016; Isola et al., 2017; Chang et al., 2017) or multiple conditionings (Wang & Gupta, 2016). The point cloud case is still underexplored. Also, most of these works assume the conditioning is given (e.g. labels or base images) without learning the inference during training. Training GANs with inference is studied by Donahue et al. (2016); Dumoulin et al. (2016); Li et al. (2017b); however, their goal is to infer the random noise \(z\) of the generator and match the semantic latent variable to \(z\). Li et al. (2018) is a parallel work aiming to learn a GAN and unseen latent variables simultaneously, but they only study image and video datasets. ## 5 EXPERIMENTS In this section we demonstrate the point cloud generation capabilities of PC-GAN. As discussed in Section 4, we refer to Achlioptas et al. (2017) as AAE, as it can be treated as an AAE extension to point clouds, and we use the implementation provided by the authors for experiments. The sandwiching objective \(W_{s}\) for PC-GAN combines \(W_{L}\) and \(W_{U}\) with the mixture 1:20, without tuning, for all experiments. \(W_{L}\) is a GAN loss combining Arjovsky et al. (2017) and Mroueh & Sercu (2017) (technical details are in Appendix A.3) and we adopt Bertsekas (1985) for \(W_{U}\). We parametrize \(Q\) in PC-GAN by DeepSets (Zaheer et al., 2017). A review of DeepSets is in Appendix E. Other detailed configurations of each experiment can be found in Appendix F. ### 5.1 SYNTHETIC DATASETS We generate 2D circle point clouds. The centers of the circles follow a mixture of Gaussians \(\mathcal{N}\left(\left\{\pm 16\right\} \times \left\{\pm 16\right\} ,16I\right)\) with equal mixture weights. The radii of the circles are drawn from a uniform distribution \(Unif(1.6,6.4)\). One sampled circle is shown in Figure 4a. For AAE, the output size of the decoder is \(500 \times 2\) for 500 points, and the output size of the encoder (latent code) is 20. The total number of parameters is \(24K\). For PC-GAN, the inference network output size is 15. The total number of parameters of PC-GAN is only \(12K\). We evaluated the conditional distributions on 10,000 testing circles. We measured the empirical distributions of the centers and radii of the generated circles conditioned on the testing data, as shown in Figure 4. From Figure 4, both AAE and PC-GAN can successfully recover the center distribution, but AAE does not learn the radius distribution well even with a larger latent code ![](images/4_0.jpg) <center>Figure 4: (a) (top) the true center distribution and (bottom) one example of a circle point cloud. (b-d) are the reconstructed center and radius distributions. </center> Table 1: Quantitative results of different models trained on different subsets of ModelNet40 and evaluated on the corresponding test set. ModelNet10 is a subset containing 10 classes of objects, while ModelNet40 is the full training set. AAE is trained using the code from Achlioptas et al. (2017).
The PC-GAN variants are trained via the upper bound \(W_{U}\), the lower bound \(W_{L}\) and the sandwiching loss \(W_{s}\). <table><tr><td rowspan="2">Data</td><td colspan="3">Distance to Face (D2F↓)</td><td colspan="3">Coverage (↑)</td></tr><tr><td>PC-GAN(W2)</td><td>AAE</td><td>PC-GAN(W1)</td><td>PC-GAN(W2)</td><td>PC-GAN(W1)</td><td>PC-GAN(W2)</td></tr><tr><td>Aeroplanes</td><td>1.89E+01</td><td>1.99E+01</td><td>1.53E+01</td><td>2.49E+01</td><td>1.95E+01</td><td>2.99E+02</td></tr><tr><td>Benches</td><td>1.09E+01</td><td>1.41E+01</td><td>1.05E+01</td><td>2.46E+01</td><td>4.44E+01</td><td>2.35E+01</td></tr><tr><td>Cars</td><td>4.39E+01</td><td>6.23E+01</td><td>4.25E+01</td><td>6.68E+01</td><td>2.35E+01</td><td>1.78E+01</td></tr><tr><td>Chairs</td><td>1.01E+01</td><td>1.08E+01</td><td>1.06E+01</td><td>1.08E+01</td><td>3.90E+01</td><td>1.82E+01</td></tr><tr><td>Cups</td><td>1.44E+03</td><td>1.79E+03</td><td>1.28E+03</td><td>3.01E+03</td><td>6.31E+01</td><td>3.31E+01</td></tr><tr><td>Guitars</td><td>2.16E+02</td><td>1.92E+02</td><td>1.97E+02</td><td>1.81E+02</td><td>2.25E+01</td><td>7.98E+02</td></tr><tr><td>Lamps</td><td>1.47E+03</td><td>1.60E+03</td><td>1.64E+03</td><td>2.77E+03</td><td>3.89E+01</td><td>2.32E+01</td></tr><tr><td>Laptops</td><td>2.43E+00</td><td>3.73E+00</td><td>2.65E+00</td><td>2.58E+00</td><td>4.31E+01</td><td>2.56E+01</td></tr><tr><td>Sofa</td><td>1.71E+01</td><td>1.64E+01</td><td>1.45E+01</td><td>2.76E+01</td><td>3.65E+01</td><td>1.62E+01</td></tr><tr><td>Tables</td><td>2.79E+00</td><td>2.96E+00</td><td>2.44E+00</td><td>3.69E+00</td><td>3.82E+01</td><td>2.59E+01</td></tr><tr><td>ModelNet10</td><td>5.77E+00</td><td>6.89E+00</td><td>6.03E+00</td><td>9.19E+00</td><td>3.47E+01</td><td>1.90E+01</td></tr><tr><td>ModelNet40</td><td>4.84E+01</td><td>5.86E+01</td><td>5.24E+01</td><td>7.96E+01</td><td>3.80E+01</td><td>1.85E+01</td></tr></table> (20) and more parameters \((24K)\). The gap in memory usage could be larger if we configured AAE to generate more points, while the model size required for PC-GAN is independent of the number of points. The reason is that the MLP decoder adopted by Achlioptas et al. (2017) wastes parameters on nearby points. Using a much larger model (more parameters) could boost performance. However, it would still be restricted to generating a fixed number of points for each object, as discussed in Section 4. ### 5.2 STUDY ON MODELNET40 We consider the ModelNet40 (Wu et al., 2015) benchmark, which contains 40 classes of objects. There are 9,843 training and 2,468 testing instances. We follow Achlioptas et al. (2017) to consider two settings. One is training on a single class of objects. The other is training on all 9,843 objects in the training set. Achlioptas et al. (2017) set the latent code size of AAE to 128 and 256 for these two settings, with total numbers of parameters of \(15M\) and \(15.2M\), respectively. Similarly, we set the output dimension of \(Q\) in PC-GAN to 128 and 256 for single-class and all-classes. The total numbers of parameters are \(1M\) and \(3M\), respectively. Metrics for Quantitative Comparison Firstly, we are interested in whether the learned \(G_{x}\) and \(Q\) can model the distribution of unseen test data. For each test point cloud, we infer the latent variable \(Q(X)\), then use \(G_{x}\) to generate points. We then compare the distribution of the input point cloud with that of the conditionally generated point cloud. Many finite-sample estimators of \(f\)-divergences and IPMs could be used for evaluation.
However, those estimators with finite samples are either biased or have high variance (Peyré et al., 2017; Wang et al., 2009; Póczos et al., 2012; Weed & Bach, 2017). Moreover, such estimators cannot make use of infinitely many samples even when these are accessible. For ModelNet40, the meshes of each object are available. In many statistically guaranteed distance estimates, the adopted statistics are commonly based on distances between nearest neighbors (Wang et al., 2009; Póczos et al., 2012). Therefore, we propose to measure performance with the following criteria. Given a point cloud \(\{x_{i}\}_{i = 1}^{n}\) and a mesh, which is a collection of faces \(\{F_{j}\}_{j = 1}^{m}\), we measure the distance to face (D2F) as \[D2F\left(\{x_{i}\}_{i = 1}^{n},\{F_{j}\}_{j = 1}^{m}\right) = \frac{1}{n}\sum_{i = 1}^{n}\min_{j}\mathcal{D}(x_{i},F_{j}),\] ![](images/5_0.jpg) <center>Figure 5: Sample mesh of ModelNet40 </center> where \(\mathcal{D}(x_{i},F_{j})\) is the Euclidean distance from \(x_{i}\) to the face \(F_{j}\). This distance is similar to the Chamfer distance, which is commonly used for measuring images and point clouds (Achlioptas et al., 2017; Fan et al., 2017), with infinitely many samples from the true distributions (meshes). Nevertheless, an algorithm can achieve low or zero D2F while focusing on only a small portion of the point cloud (mode collapse). Therefore, we are also interested in whether the generated points recover enough of the support of the distribution. We compute the coverage ratio as follows. For each point, we find its nearest face and treat that face as covered. We then compute the ratio of faces of the mesh that are covered. A sampled mesh is shown in Figure 5, where the detailed parts have more faces (the meshes are non-uniform). Thus, it is difficult to get high coverage for AAE or PC-GAN trained with a limited number of sampled points. The coverage ratio, on the other hand, serves as an indicator of how much detail the model recovers. ![](images/6_0.jpg) <center>Figure 6: Example reconstruction (conditional generation) on test objects. PC-GAN with sandwiching \((W_{s})\) is better at capturing fine details like the wheels of the aeroplane or proper chair legs. </center> The results are reported in Table 1. We compare four different algorithms: AAE, and PC-GAN with three objectives, namely the upper bound \(W_{U}\) (the \(\epsilon\)-approximated Wasserstein distance), the lower bound \(W_{L}\) (a GAN with \(L^{2}\) ball constraints and weight clipping), and the sandwiching loss \(W_{s}\) as discussed in Section 3.2. The study with \(W_{U}\) and \(W_{L}\) also serves as an ablation test for \(W_{s}\). Comparison between Upper bound, Lower bound and Sandwiching Since \(W_{U}\) directly optimizes the distance between training and generated point clouds, \(W_{U}\) usually results in smaller D2F than \(W_{L}\) in Table 1. On the other hand, although \(W_{L}\) only recovers a lower bound estimate of the Wasserstein distance, its discriminator is known to focus on learning the support of the distribution (Bengio, 2018), which results in better coverage (support) than \(W_{U}\). Theoretically, the proposed sandwiching \(W_{s}\) results in a tighter Wasserstein distance estimate than \(W_{U}\) and \(W_{L}\) (Lemma 1). Based on the above discussion, it can also be understood as balancing D2F and coverage by combining \(W_{U}\) and \(W_{L}\) to reach a desirable middle ground.
Empirically, we even observe that \(W_{s}\) results in better coverage than \(W_{L}\), and competitive D2F with \(W_{U}\). An intuitive explanation is that some of the discriminative work is offloaded to the \(W_{U}\) objective, so the discriminator can focus more on learning the support of the distribution. We argue that this difference is crucial for capturing object details. Some reconstructed point clouds of testing data are shown in Figure 6. In the aeroplane examples, \(W_{U}\) fails to capture the aeroplane tires, and \(W_{s}\) produces better tires than \(W_{L}\). In the chair example, \(W_{s}\) recovers better legs than \(W_{U}\) and a better seat cushion than \(W_{L}\). Lastly, we highlight that \(W_{s}\) outperforms the others more significantly when the training data is larger (ModelNet10 and ModelNet40) in Table 1. Comparison between PC-GAN and AAE In most cases, PC-GAN with \(W_{s}\) has lower D2F in Table 1 with fewer parameters, as mentioned above. Similar to the argument in Section 5.1, although AAE uses larger networks, its decoder wastes parameters on nearby points. AAE only outperforms PC-GAN \((W_{s})\) on Guitar and Sofa in terms of D2F, since the variety of these two classes is low; it is easier for the MLP to learn the shared template (basis) of the point clouds. On the other hand, due to the limitation of the fixed number of output points and the Chamfer distance objective, AAE has worse coverage than PC-GAN. This is supported by Figure 6, where AAE also fails to recover the aeroplane tires. Hierarchical Sampling In Section 3, we proposed a hierarchical sampling process for point clouds. In the first hierarchy, the generator \(G_{\theta}\) samples an object \((\theta = G_{\theta}(u), u \sim p(u))\), while the second generator \(G_{x}\) samples points based on \(\theta\) to form the point cloud. Randomly sampled results, without any data given as input, are shown in Figure 7. More results can be found in Appendix C. The point clouds are all smooth, structured and almost symmetric. This shows that PC-GAN captures inherent symmetries and patterns in all the randomly sampled objects, even if the overall object is not perfectly formed. It highlights that a point-wise generation scheme encourages learning the basic building blocks of objects. ![](images/7_0.jpg) <center>Figure 7: Randomly sampled objects and corresponding point clouds from the hierarchical sampling. Even if there are some defects, the objects are smooth, symmetric and structured. </center> Interpolation of Learned Manifold We study whether interpolation between two objects in the latent space results in smooth changes. We interpolate the inferred representations of two objects from \(Q\), and use the generator \(G_{x}\) to sample points based on the interpolation. The inter-class result is shown in Figure 8. More studies on interpolation between rotations can be found in Appendix D.1. ![](images/7_1.jpg) <center>Figure 8: Interpolating between latent representations \(Q(X)\) of a table and a chair point cloud. </center> Generalization on Unseen Classes Above, we studied the reconstruction of unseen testing objects, where PC-GAN still saw point clouds from the same classes during training. Here we study a more challenging task. We train PC-GAN on the first 30 classes (in alphabetic order) and test on the remaining 10 fully unseen classes. Some reconstructed (conditionally generated) point clouds are shown in Figure 9. More (larger) results can be found in Appendix C.
For objects from the unseen classes, the conditionally generated point clouds still recover the main shape and a reasonable geometric structure, which confirms the advantage of the proposed PC-GAN: by enforcing a point-wise transformation, the model is forced to learn the underlying geometric structure and the shared building blocks, instead of naively copying the input from the conditioning. The resulting D2F and coverage are 57.4 and 0.36, which are only slightly worse than the 48.4 and 0.38 obtained by training on all 40 classes in Table 1 (ModelNet40); this also supports the claim of the good generalization ability of PC-GAN. ![](images/7_2.jpg) <center>Figure 9: The reconstructed objects from classes unseen even in training. In each plot, the LHS is the true data while the RHS is PC-GAN. PC-GAN generalizes well as it can match patterns and symmetries from classes seen in the past to new unseen classes. </center> More Studies We also conduct other studies to make the experiments complete, including interpolation between different rotations, classification, and image to point cloud transformation. Due to space limits, all of these results can be found in Appendix D. ## 6 CONCLUSION In this paper, we first showed that a straightforward extension of an existing GAN algorithm is not applicable to point clouds. We then proposed a GAN modification (PC-GAN) that is capable of learning to generate point clouds by using ideas both from hierarchical Bayesian modeling and from implicit generative models. We further proposed a sandwiching objective which results in a tighter Wasserstein distance estimate theoretically and better performance empirically. In contrast to some existing methods (Achlioptas et al., 2017), PC-GAN can generate arbitrarily many i.i.d. points, as needed, to form a point cloud without pre-specifying their number. Quantitatively, PC-GAN achieves competitive or better results using smaller networks than existing methods. We also demonstrated that PC-GAN can capture delicate details of point clouds and generalize well even on unseen data. Our method learns "point-wise" transformations which encourage the model to learn the building components of the objects, instead of just naively copying the whole object. We also demonstrated other interesting results, including point cloud interpolation and image to point cloud transformation. Although we only focused on 3D applications in this paper, our framework can be naturally generalized to higher dimensions. In the future we would like to explore higher dimensional applications, where each 3D point can have other attributes, such as RGB colors and 3D velocity vectors. ## REFERENCES Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. Learning representations and generative models for 3d point clouds. arXiv preprint arXiv:1707.02392, 2017. Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. ICML, 2017. Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (gans). arXiv preprint arXiv:1703.00573, 2017. Yoshua Bengio. Gans and unsupervised representation learning, 2018. Dimitri P Bertsekas. A distributed asynchronous relaxation algorithm for the assignment problem. In Decision and Control, 1985. Christopher M Bishop. Pattern Recognition and Machine Learning. Springer-Verlag New York, 2006. JH Rick Chang, Chun-Liang Li, Barnabas Poczos, BVK Vijaya Kumar, and Aswin C Sankaranarayanan. One network to solve them all—solving linear inverse problems using deep projection models.
arXiv preprint, 2017. Ding-Yun Chen, Xiao-Pei Tian, Yu-Te Shen, and Ming Ouhyoung. On visual similarity based 3d model retrieval. In Computer graphics forum, 2003. Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In ECCV, 2016. Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In NIPS, 2013. Bo Dai, Sanja Fidler, Raquel Urtasun, and Dahua Lin. Towards diverse and natural image descriptions via a conditional gan. arXiv preprint arXiv:1703.06029, 2017. Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016. Ben Eckart, Kihwan Kim, Alejandro Troccoli, Alonzo Kelly, and Jan Kautz. Mlmd: Maximum likelihood mixture decoupling for fast and accurate point cloud registration. In 3DV, 2015. Jesse Engel, Matthew Hoffman, and Adam Roberts. Latent constraints: Learning to generate conditionally from unconditional generative models. arXiv preprint arXiv:1711.05772, 2017. Haoqiang Fan, Hao Su, and Leonidas J Guibas. A point set generation network for 3d object reconstruction from a single image. In CVPR, 2017. Aude Genevay, Gabriel Peyré, and Marco Cuturi. Learning generative models with sinkhorn divergences. In AISTATS, 2018. Rohit Girdhar, David F Fouhey, Mikel Rodriguez, and Abhinav Gupta. Learning a predictable and generative vector representation for objects. In European Conference on Computer Vision, 2016. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014. Thibault Groueix, Matthew Fisher, Vladimir G Kim, Bryan C Russell, and Mathieu Aubry. Atlasnet: A papier-mâché approach to learning 3d surface generation. arXiv preprint arXiv:1802.05384, 2018a. Thibault Groueix, Matthew Fisher, Vladimir G Kim, Bryan C Russell, and Mathieu Aubry. A papier-mâché approach to learning 3d surface generation. In CVPR, 2018b. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans. In NIPS, 2017. Christian Häne, Shubham Tulsiani, and Jitendra Malik. Hierarchical surface prediction for 3d object reconstruction. In 3D Vision (3DV), 2017 International Conference on, 2017. R. Devon Hjelm, Athul Paul Jacob, Tong Che, Kyunghyun Cho, and Yoshua Bengio. Boundary-seeking generative adversarial networks. arXiv:1702.08431, 2017. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint, 2017. Bing Jian and Baba C Vemuri. A robust algorithm for point set registration using mixture of gaussians. In ICCV, 2005. Evangelos Kalogerakis, Melinos Averkiou, Subhransu Maji, and Siddhartha Chaudhuri. 3d shape segmentation with projective convolutional networks. CVPR, 2, 2017. Eunsu Kang. FACE Exhibition, 2017. Judith Rae Solomon Gallery, Youngstown, OH. http://art.ysu.edu/2017/09/06/face-by-eunsu-kang-and-collaborators/. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
Michael Kazhdan, Thomas Funkhouser, and Szymon Rusinkiewicz. Rotation invariant spherical harmonic representation of 3d shape descriptors. In Symposium on geometry processing, 2003. Yoon Kim, Kelly Zhang, Alexander M Rush, Yann LeCun, et al. Adversarially regularized autoencoders for generating discrete structures. arXiv preprint arXiv:1706.04223, 2017. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. Alex M Lamb, Anirudh Goyal ALIAS PARTH GOYAL, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. Professor forcing: A new algorithm for training recurrent networks. In NIPS, 2016. Chongxuan Li, Max Welling, Jun Zhu, and Bo Zhang. Graphical generative adversarial networks. arXiv preprint arXiv:1804.03429, 2018. Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabas Póczos. Mmd gan: Towards deeper understanding of moment matching network. In NIPS, 2017a. Chunyuan Li, Hao Liu, Changyou Chen, Yuchen Pu, Liqun Chen, Ricardo Henao, and Lawrence Carin. Alice: Towards understanding adversarial learning for joint distribution matching. In NIPS, 2017b. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015. Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, and Zhen Wang. Least squares generative adversarial networks. In ICCV, 2017. Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. Youssef Mroueh and Tom Sercu. Fisher gan. In NIPS, 2017. Youssef Mroueh, Chun-Liang Li, Tom Sercu, Anant Raj, and Yu Cheng. Sobolev gan. arXiv preprint arXiv:1711.04894, 2017. Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-gan: Training generative neural samplers using variational divergence minimization. In NIPS, 2016. Junier B Oliva, Avinava Dubey, Barnabas Póczos, Jeff Schneider, and Eric P Xing. Transformation autoregressive networks. arXiv preprint arXiv:1801.09819, 2018. Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016. Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016. Gabriel Peyré, Marco Cuturi, et al. Computational optimal transport. Technical report, 2017. Barnabás Póczos, Liang Xiong, and Jeff Schneider. Nonparametric divergence estimation with applications to machine learning on distributions. arXiv preprint arXiv:1202.3758, 2012. Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. CVPR, 2017a. Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In NIPS, 2017b. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016. Hang Shao, Abhishek Kumar, and P Thomas Fletcher. The riemannian geometry of deep generative models. arXiv preprint arXiv:1711.08014, 2017. Abhishek Sharma, Oliver Grau, and Mario Fritz. Vconv-dae: Deep volumetric shape learning without object labels. In European Conference on Computer Vision, 2016. Johannes Strom, Andrew Richardson, and Edwin Olson. Graph-based segmentation for colored 3d laser point clouds. In IROS, 2010. Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller.
Multi-view convolutional neural networks for 3d shape recognition. In ICCV, 2015. Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein auto-encoders. arXiv preprint arXiv:1711.01558, 2017. Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In NIPS, pp. 613-621, 2016. Qing Wang, Sanjeev R Kulkarni, and Sergio Verdu. Divergence estimation for multidimensional densities via \(k\)-nearest-neighbor distances. IEEE Transactions on Information Theory, 2009. Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016. Jonathan Weed and Francis Bach. Sharp asymptotic and finite-sample rates of convergence of empirical measures in wasserstein distance. arXiv preprint arXiv:1707.00087, 2017. Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In NIPS, 2016. Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In CVPR, 2015. Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian. Foldingnet: Point cloud auto-encoder via deep grid deformation. In CVPR, volume 3, 2018. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. Seqgan: Sequence generative adversarial nets with policy gradient. CoRR, abs/1609.05473, 2016. Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep sets. In NIPS, 2017. Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017. ## A DETAILS OF THE PROPOSED METHOD ## A.1 NEURAL NETWORK REALIZATION OF INFERENCE NETWORK Our solution comprises a generator \(G_{x}(z,\psi)\) which takes in a noise source \(z\in \mathbb{R}^{d_{1}}\) and a descriptor \(\psi \in \mathbb{R}^{d_{2}}\) encoding information about the distribution of \(\theta\). For a given \(\theta_{0}\), the descriptor \(\psi\) would encode information about the distribution \(\delta (\theta - \theta_{0})\), and samples generated as \(x = G_{x}(z,\psi)\) would follow the distribution \(p(x|\theta_{0})\). More generally, \(\psi\) can be used to encode more complicated distributions over \(\theta\) as well. In particular, it could be used to encode the posterior \(p(\theta |X)\) for a given sample set \(X\), such that \(x = G_{x}(z,\psi)\) follows the posterior predictive distribution: \[p(x|X) = \int p(x|\theta)p(\theta |X)d\theta .\] A major hurdle on this path is that \(X\) is a set of points, which can vary in size and in the permutation of its elements. This makes the design of \(Q\) complicated, as traditional neural networks cannot handle such inputs, and is possibly the reason for the absence of such a framework in the literature, despite it being a natural solution to the important problem of generative modeling of point clouds. However, we can overcome this challenge: we propose to construct the inference network using the permutation equivariant layers from Deep Sets (Zaheer et al., 2017). This allows it to handle a variable number of input points in arbitrary order, while yielding a consistent descriptor \(\psi\).
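As a concrete illustration of such an inference network, here is a minimal sketch built from the permutation equivariant layer \(f(x_i) = \sigma(x_i + \gamma\, \mathrm{maxpool}(X))\) reviewed in Appendix E, with \(\sigma\) a linear layer followed by a SoftPlus activation as in Appendix F. The layer widths, the initialization of the learnable scalar \(\gamma\), and the final max-pooling head are illustrative assumptions (Appendix F uses mean-pooling layers).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PermEqui(nn.Module):
    """Permutation equivariant layer f(x_i) = sigma(x_i + gamma * maxpool(X)),
    with sigma = SoftPlus(Linear(.)); maxpool may be swapped for mean pooling."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.gamma = nn.Parameter(torch.tensor(-1.0))  # assumed initialization
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                      # x: (batch, n_points, in_dim)
        pooled = x.max(dim=1, keepdim=True).values
        return F.softplus(self.lin(x + self.gamma * pooled))

class InferenceNetwork(nn.Module):
    """Q(X): a set of 3D points -> descriptor psi, invariant to the order
    and the number of input points."""
    def __init__(self, hidden=64, psi_dim=128):
        super().__init__()
        self.layers = nn.Sequential(
            PermEqui(3, hidden), PermEqui(hidden, hidden), PermEqui(hidden, hidden))
        self.head = nn.Linear(hidden, psi_dim)

    def forward(self, points):                 # points: (batch, n_points, 3)
        h = self.layers(points)
        return self.head(h.max(dim=1).values)  # pool over the set -> invariant
```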
After training \(G_{x}\) and the inference network \(Q\), we use the trained \(Q\) to collect the inferred \(Q(X)\) and train a generator \(G_{\theta}\) with \(G_{\theta}(u)\sim p(\theta)\) for the higher level of hierarchical sampling, where \(u\) is another noise source independent of \(z\). In addition to layer-wise training, joint training may further boost performance. The full generative process for sampling one point cloud can be represented as \(\{x_{i}\}_{i = 1}^{n} = \{G_{x}(z_{i},G_{\theta}(u)) \}_{i = 1}^{n}\), where \(z_{1},\ldots ,z_{n}\sim p(z)\) and \(u\sim p(u)\). We call the proposed GAN framework for learning to generate point clouds PC-GAN, as shown in Figure 2. The conditional distribution matching with a learned inference network in PC-GAN can also be interpreted as an encoder-decoder formulation (Kingma & Welling, 2013). The difference between it and point cloud autoencoders (Achlioptas et al., 2017; Yang et al., 2018) is discussed in Section 4. ## A.2 UPPER BOUND IMPLEMENTATION The primal form of the Wasserstein distance is defined as \[w(\mathbb{P},\mathbb{G}) = \inf_{\gamma \in \Gamma (\mathbb{P},\mathbb{G})}\int \| x - y\|_{1}d\gamma (x,y),\] where \(\gamma\) is a coupling of \(\mathbb{P}\) and \(\mathbb{G}\). The Wasserstein distance is also known as the optimal transport (OT) or earth mover's distance (EMD). As the name suggests, when \(w(\mathbb{P},\mathbb{G})\) is estimated with a finite number of samples \(X = x_{1},\ldots ,x_{n}\) and \(Y = y_{1},\ldots ,y_{n}\), we find the one-to-one matching between \(X\) and \(Y\) such that the total pairwise distance is minimal. The resulting minimal total (average) pairwise distance is \(w(X,Y)\). In practice, finding the exact matching efficiently is non-trivial and still an open research problem (Peyré et al., 2017). Instead, we consider the approximation provided by Bertsekas (1985). It is an iterative algorithm where each iteration operates like an auction whereby unassigned points \(x\in X\) bid simultaneously for their closest points \(y\in Y\), thereby raising their prices. Once all bids are in, points are awarded to the highest bidder. The crux of the algorithm lies in designing a non-greedy bidding strategy. One can see that, by construction, the algorithm is embarrassingly parallelizable, which is favorable for GPU implementation. One can show that the algorithm terminates with a valid matching and that the resulting matching cost \(W_{U}(X,Y)\) is an \(\epsilon\)-approximation of \(w(X,Y)\). Thus, the estimate can serve as an upper bound, i.e. \[w(X,Y)\leq W_{U}(X,Y)\leq (1 + \epsilon)w(X,Y), \quad (8)\] We remark that estimating the Wasserstein distance \(w(\mathbb{P},\mathbb{G})\) with finite samples via the primal form is only favorable for low dimensional data, such as point clouds. The error between \(w(\mathbb{P},\mathbb{G})\) and \(w(X,Y)\) is \(O(1 / n^{1 / d})\), where \(d\) is the data dimension (Weed & Bach, 2017). Therefore, for high dimensional data, such as images, we cannot accurately estimate the Wasserstein distance in the primal form, nor its upper bound, with a small minibatch. Finding a modified primal form with low sample complexity is also an open research problem (Cuturi, 2013; Genevay et al., 2018), and combining such forms into the proposed sandwiching objective for high dimensional data is left for future work.
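A minimal single-scale sketch of this auction procedure is given below, assuming equal-size minibatches and an \(L^1\) ground cost; a practical implementation would add \(\epsilon\)-scaling and run on GPU. Since any feasible matching cost upper-bounds the optimal one, the returned value can serve as \(W_U\) on a minibatch and be combined with a critic-based \(W_L\) as in (6), e.g. `(1 - s) * w_upper + s * w_lower`.

```python
import numpy as np

def auction_match_cost(cost, eps=1e-3):
    """Approximate minimum-cost perfect matching between two equal-size point
    sets via a basic (single-scale) auction, in the spirit of Bertsekas (1985).
    The total matched cost is within n*eps of optimal and never below it, so
    it upper-bounds the empirical Wasserstein distance w(X, Y)."""
    n = cost.shape[0]
    benefit = -cost                      # the auction maximizes benefit
    price = np.zeros(n)
    owner = -np.ones(n, dtype=int)       # owner[j]: person holding object j
    assigned = -np.ones(n, dtype=int)    # assigned[i]: object held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = benefit[i] - price      # net value of every object for i
        j = int(np.argmax(values))
        best = values[j]
        values[j] = -np.inf
        second = values.max() if n > 1 else best  # second-best net value
        price[j] += best - second + eps  # the bid raises object j's price
        if owner[j] >= 0:                # evict and re-queue previous owner
            assigned[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j] = i
        assigned[i] = j
    return cost[np.arange(n), assigned].mean()

# Example: W_U between two minibatches of 3D points under an L1 ground cost.
X, Y = np.random.randn(128, 3), np.random.randn(128, 3)
pairwise_l1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)
w_upper = auction_match_cost(pairwise_l1)
```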
## A.3 LOWER BOUND IMPLEMENTATION The dual form of the Wasserstein distance is defined as \[w(\mathbb{P},\mathbb{G}) = \sup_{f\in \mathcal{L}_{1}}\mathbb{E}_{x\sim \mathbb{P}}f(x) - \mathbb{E}_{x\sim \mathbb{G}}f(x), \quad (9)\] where \(\mathcal{L}_{k}\) is the set of \(k\)-Lipschitz functions, i.e. functions whose Lipschitz constant is no larger than \(k\). In practice, deep neural networks parameterized by \(\phi\) with constraints \(f_{\phi} \in \Omega_{\phi}\) (Arjovsky et al., 2017) result in the distance approximation \[W_{L}(\mathbb{P},\mathbb{G}) = \max_{f_{\phi}\in \Omega_{\phi}}\mathbb{E}_{x\sim \mathbb{P}}f_{\phi}(x) - \mathbb{E}_{x\sim \mathbb{G}}f_{\phi}(x). \quad (10)\] If there exists \(k\) such that \(\Omega_{\phi} \subseteq \mathcal{L}_{k}\), then \(W_{L}(\mathbb{P},\mathbb{G}) / k \leq w(\mathbb{P},\mathbb{G})\) for all \(\mathbb{P},\mathbb{G}\), i.e. we obtain a lower bound. To enforce \(\Omega_{\phi} \subseteq \mathcal{L}_{k}\), Arjovsky et al. (2017) propose a weight clipping constraint \(\Omega_{c}\), which constrains every weight to be in \([- c,c]\) and guarantees that \(\Omega_{c} \subseteq \mathcal{L}_{k}\) for some \(k\). However, choosing the clipping range \(c\) is non-trivial in practice. Small ranges limit the capacity of the networks, while large ranges result in numerical issues during training. On the other hand, in addition to weight clipping, several constraints (regularizations) have been proposed with better empirical performance, such as the gradient penalty (Gulrajani et al., 2017) and the \(L^{2}\) ball (Mroueh & Sercu, 2017). However, there is no guarantee that the resulting functions are still Lipschitz or that the resulting distances are lower bounds of the Wasserstein distance. To take advantage of those regularizations while keeping the Lipschitz guarantee, we propose a simple variation that combines them with weight clipping, which always ensures Lipschitz functions. Lemma 2. There exists \(k > 0\) such that \[\max_{f\in \Omega_{c}\cap \Omega_{\phi}}\mathbb{E}_{x\sim \mathbb{P}}[f_{\phi}(x)] - \mathbb{E}_{x\sim \mathbb{G}}[f_{\phi}(x)]\leq \frac{1}{k} w(\mathbb{P},\mathbb{G}) \quad (11)\] Note that, if \(c \to \infty\), then \(\Omega_{c} \cap \Omega_{\phi} = \Omega_{\phi}\). Therefore, from Lemma 2, for any regularization of the discriminator (Gulrajani et al., 2017; Mroueh & Sercu, 2017; Mroueh et al., 2017), we can always combine it with a weight clipping constraint \(\Omega_{c}\) to ensure a valid lower bound estimate of the Wasserstein distance, while enjoying the advantage that training is numerically stable with a large \(c\) compared with the original weight-clipping WGAN (Arjovsky et al., 2017). In practice, we found that combining the \(L^{2}\) ball constraint with weight clipping leads to satisfactory performance. We also studied the popular WGAN-GP (Gulrajani et al., 2017) with weight clipping to ensure the Lipschitz continuity of the discriminator, but we found the \(L^{2}\) ball with weight clipping faster and more numerically stable to train. ## B TECHNICAL PROOF Lemma 1. Suppose we have two approximators to the Wasserstein distance: an upper bound \(W_{U}\) and a lower bound \(W_{L}\), such that \(\forall \mathbb{P},\mathbb{G}:(1 + \epsilon_{1})w(\mathbb{P},\mathbb{G})\leq W_{U}(\mathbb{P},\mathbb{G})\leq (1 + \epsilon_{2})w(\mathbb{P},\mathbb{G})\) and \(\forall \mathbb{P},\mathbb{G}:(1 - \epsilon_{2})w(\mathbb{P},\mathbb{G})\leq W_{L}(\mathbb{P},\mathbb{G})\leq (1 - \epsilon_{1})w(\mathbb{P},\mathbb{G})\) respectively, for some \(\epsilon_{2} > \epsilon_{1} > 0\) and \(\epsilon_{1} > \epsilon_{2} / 3\). Then, using the sandwiched estimator \(W_{s}\) from (6), we can achieve a tighter estimate of the Wasserstein distance than using either estimator alone, i.e.
\[\exists s:|W_{s}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|< \min \{|W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|,|W_{L}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|\} \quad (12)\] Proof. We prove the claim by showing that the LHS can be made smaller than \(\epsilon_{1}w(\mathbb{P},\mathbb{G})\), which is a lower bound for the RHS. By the assumptions, \(\epsilon_{1}w(\mathbb{P},\mathbb{G})\leq W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})\leq \epsilon_{2}w(\mathbb{P},\mathbb{G})\) and \(\epsilon_{1}w(\mathbb{P},\mathbb{G})\leq w(\mathbb{P},\mathbb{G}) - W_{L}(\mathbb{P},\mathbb{G})\leq \epsilon_{2}w(\mathbb{P},\mathbb{G})\). Hence \[\begin{aligned} |W_{s}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})| &= |(1 - s)W_{U}(\mathbb{P},\mathbb{G}) + sW_{L}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|\\ &= |(1 - s)\left(W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})\right) - s\left(w(\mathbb{P},\mathbb{G}) - W_{L}(\mathbb{P},\mathbb{G})\right)|\\ &\leq \max \{(1 - s)\left(W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})\right),\ s\left(w(\mathbb{P},\mathbb{G}) - W_{L}(\mathbb{P},\mathbb{G})\right)\}\\ &\qquad -\min \{(1 - s)\left(W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})\right),\ s\left(w(\mathbb{P},\mathbb{G}) - W_{L}(\mathbb{P},\mathbb{G})\right)\}\\ &\leq \left(\max \{1 - s,s\} \epsilon_{2} - \min \{1 - s,s\} \epsilon_{1}\right)w(\mathbb{P},\mathbb{G}). \end{aligned} \quad (13)\] Restricting to \(s < 0.5\), this becomes \[|W_{s}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|\leq \left((1 - s)\epsilon_{2} - s\epsilon_{1}\right)w(\mathbb{P},\mathbb{G}). \quad (14)\] Now if we choose \(\frac{\epsilon_{2} - \epsilon_{1}}{\epsilon_{2} + \epsilon_{1}} < s < 0.5\), which is possible since \(\epsilon_{1} > \epsilon_{2} / 3\), then \(|W_{s}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})| < \epsilon_{1}w(\mathbb{P},\mathbb{G})\) as desired. Lemma 2. There exists \(k > 0\) such that \[\max_{f\in \Omega_{c}\cap \Omega_{\phi}}\mathbb{E}_{x\sim \mathbb{P}}[f_{\phi}(x)] - \mathbb{E}_{x\sim \mathbb{G}}[f_{\phi}(x)]\leq \frac{1}{k} w(\mathbb{P},\mathbb{G}) \quad (15)\] Proof. Since there exists \(k\) such that \(\max_{f\in \Omega_{c}}\mathbb{E}_{x\sim \mathbb{P}}[f_{\phi}(x)] - \mathbb{E}_{x\sim \mathbb{G}}[f_{\phi}(x)]\leq \frac{1}{k} w(\mathbb{P},\mathbb{G})\), it is clear that \[\max_{f\in \Omega_{c}\cap \Omega_{\phi}}\mathbb{E}_{x\sim \mathbb{P}}[f_{\phi}(x)] - \mathbb{E}_{x\sim \mathbb{G}}[f_{\phi}(x)]\leq \max_{f\in \Omega_{c}}\mathbb{E}_{x\sim \mathbb{P}}[f_{\phi}(x)] - \mathbb{E}_{x\sim \mathbb{G}}[f_{\phi}(x)]\leq \frac{1}{k} w(\mathbb{P},\mathbb{G}). \quad (16)\] ## C LARGER RESULTS Larger versions of the hierarchical sampling results discussed in Section 5.2 can be found in Figure 10. The reconstruction results on unseen classes are shown in Figure 11. ## D ADDITIONAL STUDY ## D.1 INTERPOLATION BETWEEN ROTATIONS It is also popular to show intra-class interpolation. In addition to simple intra-class interpolations, where the objects are almost aligned, we present a study on interpolations between rotations. During training, we only rotate the data by 8 possible angles for augmentation; here we show that the model generalizes to other, unseen rotations, as shown in Figure 12. However, if we linearly interpolate the latent code, the resulting change is scattered and not smooth, as shown in Figure 12. Instead of using linear interpolation, we train a 2-layer MLP with the hidden layer size limited to 16, where the input is the angle and the output is the corresponding latent representation of the rotated object. We then generate the codes for rotated planes with this trained MLP. This suggests that although the transformation path of rotations in the latent space is not linear, it follows a smooth trajectory. It may also suggest that the geodesic path of the learned manifold is not necessarily linear between rotations.
Finding the geodesic path with a principal method (Shao et al., 2017) and Understanding the geometry of the manifold for point cloud worth more deeper study as future work. ## D.2 CLASSIFICATION RESULTS We evaluate the quality of the representation acquired from the learned inference network \(Q\) . We train the inference network \(Q\) and the generator \(G_{x}\) on the training split of ModelNet40 with data <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 10: Randomly sampled objects and corresponding point cloud from the hierarchical sampling. Even if there are some defects, the objects are smooth, symmetric and structured. It suggests PC-GAN captures inherent patterns and learns basic building blocks of objects. </center> augmentation as mentioned above for learning generative models without label information. We then extract the latent representation \(Q(X)\) for each point clouds and train linear SVM on the that with its label. We apply the same setting to a linear classifier on the latent code of Achlioptas et al. (2017). We only sample 1000 as input for our inference network \(Q\) . Benefited by the Deep Sets architecture for the inference network, which is invariant to number of points. Therefore, we are allowed to sample different number of points as input to the trained inference network for evaluation. Because of the randomness of sampling points for extracting latent representation, we repeat the experiments 20 times and report the average accuracy and standard deviation on the testing split in Table 2. By using 1000 points, we are already better than Achlioptas et al. (2017) with 2048 points, and competitive with the supervised learning algorithm Deep Sets. We also follow the same protocol as Achlioptas et al. (2017); Wu et al. (2016) that we train on ShapeNet55 and test the accuracy on ModelNet40. Compared with existing unsupervised learning algorithms, PC- GAN has the best performance as shown in Table 3. Table 2: Classification accuracy results. <table><tr><td>Method</td><td># points</td><td>Accuracy</td></tr><tr><td>PC-GAN</td><td>1000</td><td>87.5 ± .6%</td></tr><tr><td>PC-GAN</td><td>2048</td><td>87.8 ± .2%</td></tr><tr><td>AAE (Achlioptas et al., 2017)</td><td>2048</td><td>85.5 ± .3%</td></tr><tr><td>Deep Sets (Zaheer et al., 2017)</td><td>1000</td><td>87 ± 1%</td></tr><tr><td>Deep Sets (Zaheer et al., 2017)</td><td>5000</td><td>90 ± .3%</td></tr></table> We note that Yang et al. (2018) using additional geometry features by appending pre- calculated features with 3- dimensional coordinate as input or using more advanced grouping structure to achieve better performance. Those techniques are all applicable to PC- GAN and leave it for future works by leveraging geometry information into the proposed PC- GAN framework. <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 11: The reconstructed objects from unseen categories. In each plot, LHS is true data while RHS is PC-GAN. PC-GAN generalizes well as it can match patterns and symmetries from categories seen in the past to new unseen categories. </center> ![](images/15_1.jpg) <center>Figure 12: Interpolating between rotation of an aeroplane, using our latent space representation. </center> ## D.3 IMAGES TO POINT CLOUD Here we demonstrate a potential extension of the proposed PC- GAN for images to point cloud applications. 
After training \(Q\) as described in 3 and Appendix A.1, instead of learning \(G_{\theta}\) for hierarchical sampling, we train a regressor \(R\) , where the input is the different views of the point cloud \(X\) , and the output is \(Q(X)\) . In this proof of concept experiment, we use the 12 view data and the Res18 architecture in Su et al. (2015), while we change the output size to be 256. Some example results on reconstructing testing data is shown in Figure 13. A straightforward extension is using end- to- end training instead of two- staged approached adopted here. Also, after aligning objects and take representative view along with traditional ICP techniques, we can also do single view to point cloud transformation as Choy et al. (2016); Fan et al. (2017); Hane et al. (2017); Groueix et al. (2018a), which is not the main focus of this paper and we leave it for future work. <--- Page Split ---> Table 3: Classification accuracy results (Trained on ShapeNet55). <table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>SPH (Kazhdan et al., 2003)</td><td>68.2%</td></tr><tr><td>T-L Network (Girdhar et al., 2016)</td><td>74.4%</td></tr><tr><td>LFD (Chen et al., 2003)</td><td>75.5%</td></tr><tr><td>VConv-DAE (Sharma et al., 2016)</td><td>75.5%</td></tr><tr><td>3D GAN (Wu et al., 2016)</td><td>83.3%</td></tr><tr><td>AAE (Achlioptas et al., 2017)</td><td>84.5%</td></tr><tr><td>PC-GAN</td><td>86.9%</td></tr></table> ![](images/16_0.jpg) <center>Figure 13: Image to Point Cloud </center> ## E DEEP SETS (PERMUTATION EQUIVARIANCE LAYERS) We briefly review the notion of Permutation Equivariance Layers proposed by Zaheer et al. (2017) as a background required for this paper. For more details, please refer to Zaheer et al. (2017). Zaheer et al. (2017) propose a generic framework of deep learning for set data. The building block which can be stacked to be deep neural networks is called Permutation Equivariance Layer. One Permutation Equivariance Layer example is defined as \[f(x_{i}) = \sigma (x_{i} + \gamma \mathrm{maxpool}(X)),\] where \(\sigma\) can be any functions (e.g. parametrized by neural networks) and \(X = x_{1},\ldots ,x_{n}\) is an input set. Also, the \(\mathrm{max}\) pooling operation can be replaced with mean pooling. We note that PointNetQi et al. (2017a) is a special case of using Permutation Equivariance Layer by properly defining \(\sigma (\cdot)\) . In our experiments, we follow Zaheer et al. (2017) to set \(\sigma\) to be a linear layer with output size \(h\) followed by any nonlinear activation function. ## F EXPERIMENT SETTINGS ## F.1 SYNTHETIC DATA The batch size is fixed to be 64. We sampled 10,000 samples for training and testing. For the inference network, we stack 3 mean Permutation Equivariance Layer (Zaheer et al., 2017), where the hidden layer size (the output of the first two layers) is 30 and the final output size is 15. The activation function are used SoftPlus. For the generator is a 5 layer MLP, where the hidden layer size is set to be 30. The discriminator is 4 layer MLP with hidden layer size to be 30. For Achlioptas et al. (2017), we change their implementation by replacing the number of filters for encoder to be [30, 30, 30, 30, 15], while the hidden layer width for decoder is 10 or 20 except for the output layer. The decoder is increased from 3 to 4 layers to have more capacity. ## F.2 MODELNET40 We follow Zaheer et al. (2017) to do pre- processing. 
For each object, we sampled 10,000 points from the mesh representation and normalize it to have zero mean (for each axis) and unit (global) variance. During the training, we augment the data by uniformly rotating \(0,\pi /8,\ldots ,7\pi /8\) rad on the \(x - y\) plane. The random noise \(z_{2}\) of PC- GAN is fixed to be 10 dimensional for all experiments. For \(Q\) of single class model, we stack 3 max Permutation Equivariance Layer with output size to be 128 for every layer. On the top of the stack, we have a 2 layer MLP with the same width and the output. The generator \(G_{x}\) is a 4 layer MLP where the hidden layer size is 128 and output size is 3. <--- Page Split ---> The discriminator is 4 layer MLP with hidden layer size to be 128. The random source \(u\) and \(z\) are set to be 64 and 10 dimensional and sampled from standard normal distributions. For training whole ModelNet40 training set, we increase the width to be 256. The generator \(G_{x}\) is a 5 layer MLP where the hidden layer size is 256 and output size is 3. The discriminator is 5 layer MLP with hidden layer size to be 256. For hirarchical sampling, the top generator \(G_{\theta}\) and discriminator are all 5- layer MLP with hidden layer size to be 256. For AAE, we follow every setting used in Achlioptas et al. (2017), where the latent code size is 128 and 256 for single class model and whole ModelNet40 models. <--- Page Split --->
## ABSTRACT

Generative Adversarial Networks (GANs) have achieved promising performance in learning complex data distributions for different types of data. In this paper, we first show that a straightforward extension of an existing GAN algorithm is not applicable to point clouds, because the constraint required for discriminators is undefined for set data. We propose a twofold modification to the GAN algorithm to make it able to generate point clouds (PC-GAN). First, we combine ideas from hierarchical Bayesian modeling and implicit generative models by learning a hierarchical and interpretable sampling process. A key component of our method is that we train a posterior inference network for the hidden variables. Second, PC-GAN defines a generic framework that can incorporate many existing GAN algorithms. We further propose a sandwiching objective, which results in a tighter Wasserstein distance estimate than the commonly used dual form in WGAN. We validate our claims on the ModelNet40 benchmark dataset and observe that PC-GAN trained with the sandwiching objective achieves better results on test data than existing methods. We also conduct studies on several tasks, including generalization to unseen point clouds, latent space interpolation, classification, and image to point cloud transformation, to demonstrate the versatility of the proposed PC-GAN algorithm.

## 1 INTRODUCTION

A fundamental problem in machine learning is, given a data set, to learn a generative model that can efficiently generate arbitrarily many new sample points from the domain of the underlying distribution (Bishop, 2006). Deep generative models use deep neural networks as a tool for learning complex data distributions (Kingma & Welling, 2013; Oord et al., 2016; Goodfellow et al., 2014). In particular, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have drawn attention because of their success in many applications. Compelling results have been demonstrated on different types of data, including text, images, and videos (Lamb et al., 2016; Karras et al., 2017; Vondrick et al., 2016). Their wide range of applicability has also been shown in many important problems, including data augmentation (Salimans et al., 2016), image style transformation (Zhu et al., 2017), image captioning (Dai et al., 2017), and art creation (Kang, 2017).

Recently, capturing 3D information has been garnering attention. There are many different data types for 3D information, such as CAD models, 3D meshes, and point clouds. 3D point clouds are becoming popular since they store more information than 2D images, and sensors capable of collecting point clouds, including Lidar on self-driving cars, Kinect for Xbox, and the face identification sensors on phones, have become more accessible. Compared to other formats, point clouds can be easily represented as a set of points, which brings several advantages, such as permutation invariance of the set members. Algorithms that can effectively learn from this type of data are an emerging field (Qi et al., 2017a,b; Zaheer et al., 2017; Kalogerakis et al., 2017; Fan et al., 2017). However, compared to supervised learning, unsupervised generative models for 3D data are still underexplored (Achlioptas et al., 2017; Oliva et al., 2018).

Extending existing GAN frameworks to point clouds, or more generally to set data, is not straightforward. In this paper, we begin by formally defining the problem and discussing its difficulty (Section 2).
Circumventing these challenges, we propose a deep generative adversarial network (PC-GAN) with hierarchical sampling and an inference network for point clouds. The proposed architecture learns a stochastic procedure which can generate new point clouds, and draw samples from the generated point clouds, without explicitly modeling the underlying density function (Section 3). The proposed PC-GAN is a generic algorithm which can incorporate many existing GAN variants. By utilizing the properties of point clouds, we further propose a sandwiching objective that considers both upper and lower bound estimates of the Wasserstein distance, which leads to a tighter approximation (Section 3.1). Evaluation on ModelNet40 shows the excellent generalization capability of PC-GAN. We first demonstrate that we can sample from the learned model to generate new point clouds, and that the latent representations learned by the inference network provide meaningful interpolations between point clouds. We then show conditional generation results on unseen classes of objects, which demonstrates the strong generalization ability of PC-GAN. Lastly, we also provide several interesting studies, such as classification and point cloud generation from images (Section 5).

## 2 PROBLEM DEFINITION AND DIFFICULTY

A point cloud for an object \(\theta\) is a set of \(n\) low-dimensional vectors \(X = \{x_{1},\dots,x_{n}\}\) with \(x_{i}\in \mathbb{R}^{d}\), where \(d\) is usually 3 and \(n\) can be infinite. \(M\) different objects can be described as a collection of point clouds \(X^{(1)},\ldots ,X^{(M)}\). A generative model for sets should be able to: (1) sample entirely new sets according to \(p(X)\), and (2) sample arbitrarily many more points from the distribution of a given set, i.e., \(x\sim p(x|X)\). Based on the De Finetti theorem, we can factor the probability with some suitably defined \(\theta\), such as an object representation of the point cloud, as \(p(X) = \int_{\theta}\prod_{i = 1}^{n}p(x_{i}|\theta)p(\theta)d\theta\). In this view, the factoring can be understood as follows: given an object \(\theta\), the points \(x_{i}\) in the point cloud can be considered as i.i.d. samples from \(p(x|\theta)\), an unknown latent distribution representing object \(\theta\). The joint likelihood can be expressed as:

\[p(X,\theta) = \underbrace{p(\theta)}_{\mathrm{object}}\prod_{i = 1}^{n}p(x_{i}|\theta) \quad (1)\]

![](images/1_0.jpg)
<center>Figure 1: Natural extension of GAN to handle set data does not work. </center>

One approach is to model the distribution of the point cloud set jointly, i.e., \(\{\{x_{i}^{(1)}\}_{i = 1}^{n},\ldots ,\{x_{i}^{(m)}\}_{i = 1}^{n}\}\). In this setting, a naive application of the traditional GAN is possible by treating each point cloud as a finite-dimensional vector, fixing the number and order of the points (reducing the problem to instances in \(\mathbb{R}^{n\times 3}\)), with a DeepSets (Zaheer et al., 2017) classifier as the discriminator to distinguish real sets from fake sets. However, this approach does not work in practice, because the integral probability metric (IPM) guarantees behind the traditional GAN no longer hold; e.g., in the case of Arjovsky et al. (2017), 1-Lipschitz functions over sets are not well-defined. The probabilistic divergence approximated by a DeepSets classifier might therefore be ill-defined. Counter examples that break the IPM guarantees can easily be found, as we show next.
Counter Example. Consider a simple GAN (Goodfellow et al., 2014) with a DeepSets classifier as the discriminator. In order to generate coherent sets of variable size, we consider a generator \(G\) having two noise sources: \(u\) and \(z_{i}\). To generate a set, \(u\) is sampled once and \(z_{i}\) is sampled for \(i = 1,2,\ldots ,n\) to produce \(n\) points in the generated set. Intuitively, fixing the first noise source \(u\) selects a set and ensures that the points generated by repeated sampling of \(z_{i}\) are coherent and belong to the same set. The setup is depicted in Figure 1, and a sketch of such a generator is given below. In this setup, the GAN minimax problem would be:

\[\min_{G}\max_{D}\ \mathbb{E}_{\substack{\theta \sim p(\theta)\\ x_{i}\sim p(x|\theta)}}\left[\log D(\{x_{i}\})\right] + \mathbb{E}_{\substack{u\sim p(u)\\ z_{i}\sim p(z)}}\left[\log \left(1 - D(\{G(u,z_{i})\})\right)\right] \quad (2)\]

Now consider the case when there exists an 'oracle' mapping \(T\) which deterministically maps each set of samples to the object it originated from, i.e., \(\exists T:T(\{x_{i}\}) = \theta\). A valid example is when different \(\theta\) lead to conditional distributions \(p(x|\theta)\) with non-overlapping supports. Let \(D = D^{\prime}\circ T\) and let \(G\) ignore \(z\); then the optimization task becomes:

\[\begin{array}{rl} & \min_{G}\max_{D^{\prime}}\ \mathbb{E}_{\substack{\theta \sim p(\theta)\\ x_{i}\sim p(x|\theta)}}\left[\log D^{\prime}(T(\{x_{i}\}))\right] + \mathbb{E}_{\substack{u\sim p(u)\\ z_{i}\sim p(z)}}\left[\log \left(1 - D^{\prime}(T(\{G(u,z_{i})\}))\right)\right]\\ \Rightarrow & \min_{G}\max_{D^{\prime}}\ \mathbb{E}_{\theta \sim p(\theta)}\left[\log D^{\prime}(\theta)\right] + \mathbb{E}_{u\sim p(u)}\left[\log \left(1 - D^{\prime}(T(\{G(u)\}))\right)\right] \end{array} \quad (3)\]

Thus, we can achieve the lower bound \(-\log (4)\) by only matching the \(p(\theta)\) component, while the conditional \(p(x|\theta)\) is allowed to remain arbitrary. So simply using a DeepSets classifier without any constraints in a simple GAN to handle sets does not lead to a valid generative model.
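The following is a minimal sketch of the two-noise-source set generator in the counter-example setup above; the MLP architecture and all dimensions are illustrative assumptions, not the networks used in our experiments.

```python
import torch
import torch.nn as nn

class SetGenerator(nn.Module):
    """Two-noise-source generator: u is drawn once per set and z_i once per point."""
    def __init__(self, u_dim=64, z_dim=10, hidden=128, out_dim=3):
        super().__init__()
        self.u_dim, self.z_dim = u_dim, z_dim
        self.mlp = nn.Sequential(
            nn.Linear(u_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, n_points):
        u = torch.randn(1, self.u_dim).expand(n_points, -1)  # shared: selects the set
        z = torch.randn(n_points, self.z_dim)                # fresh noise per point
        return self.mlp(torch.cat([u, z], dim=1))            # (n_points, out_dim) set

points = SetGenerator()(500)  # one coherent set of 500 3D points
```

The counter example shows that training such a generator against an unconstrained DeepSets discriminator can succeed while leaving \(p(x|\theta)\) arbitrary.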
## 3 PROPOSED METHOD

As described in Section 2, directly learning point cloud generation under the GAN formulation is difficult. However, given \(\theta\), learning \(p(x|\theta)\) is the simpler task of learning a 3-dimensional distribution. Given two point clouds, one popular heuristic distance between them is the Chamfer distance (Achlioptas et al., 2017). On the other hand, if we treat each point cloud as a 3-dimensional distribution, we can adopt a broader class of probabilistic divergences for comparing them. Instead of learning explicit densities (Jian & Vemuri, 2005; Strom et al., 2010; Eckart et al., 2015), we are interested in implicit generative models with a GAN-like objective (Goodfellow et al., 2014), which have been demonstrated to learn complicated distributions.

![](images/2_0.jpg)
<center>Figure 2: Overview of PC-GAN. </center>

Formally, given a \(\theta\), we train a generator \(G_{x}(z,\theta)\) such that \(x = G_{x}(z,\theta)\), where \(z\sim p(z)\). Denoting the distribution of \(G_{x}(z,\theta)\) by \(\mathbb{G}\) and \(p(x|\theta)\) by \(\mathbb{P}\), we train \(G_{x}\) by optimizing a probabilistic divergence \(D(\mathbb{P}\| \mathbb{G})\) between the two. The full objective can be written as \(\mathbb{E}_{\theta \sim p(\theta)}\left[\min_{G_{x}}D(\mathbb{P}\| \mathbb{G})\right]\).

Inference. Although GANs have been extended to learn conditional distributions (Mirza & Osindero, 2014; Isola et al., 2017), they require the conditioning variables to be observed, such as a one-hot label or a given image. Our \(\theta\), instead, is an unobserved latent variable modeling the different objects, which we need to infer during training. The proposed algorithm therefore has to learn the inference network \(Q(X)\approx \theta\) concurrently with learning \(p(x|\theta)\). Since \(X\) is a set of points, we can adopt Qi et al. (2017a); Zaheer et al. (2017) for modeling \(Q\). We provide more discussion of this topic in Appendix A.1.

Hierarchical Sampling. After training \(G_{x}\) and \(Q\), we use the trained \(Q\) to collect the inferred codes \(Q(X)\) and train the generator \(G_{\theta}(u)\sim p(\theta)\) for the higher level of hierarchical sampling. Here \(u\sim p(u)\) is another noise source independent of \(z\). In addition to layer-wise training, joint training could further boost performance. The full generative process for sampling one point cloud can be represented as \(\{x_{i}\}_{i = 1}^{n} = \{G(z_{i},u)\}_{i = 1}^{n} = \{G_{x}(z_{i},G_{\theta}(u)) \}_{i = 1}^{n}\), where \(z_{1},\ldots ,z_{n}\sim p(z)\) and \(u\sim p(u)\). An overview of the proposed algorithm for point cloud generation (PC-GAN) is shown in Figure 2.

### 3.1 DIFFERENT DIVERGENCES FOR MATCHING POINT CLOUDS

To train the generator \(G_{x}\) using a GAN-like objective for point clouds, we need a discriminator \(f(\cdot)\) to distinguish generated samples from true samples conditioned on \(\theta\). Combined with the inference network \(Q(X)\) discussed above, the objective with IPM-based GANs can be written as

\[\mathbb{E}_{\theta \sim p(\theta)}\left[\min_{G_{x},Q}\max_{f\in \Omega_{f}}\mathbb{E}_{x\sim p(X|\theta)}\left[f(x)\right] - \mathbb{E}_{z\sim p(z),X\sim p(X|\theta)}\left[f(G_{x}(z,Q(X)))\right]\right], \quad (4)\]

where \(\Omega_{f}\) is the constraint for different probabilistic distances, such as 1-Lipschitz functions (Arjovsky et al., 2017), the \(L^{2}\) ball (Mroueh & Sercu, 2017) or the Sobolev ball (Mroueh et al., 2017).

### 3.2 TIGHTER SOLUTIONS VIA SANDWICHING

In our setting, each point \(x_{i}\) in a point cloud plays the role that a single image plays when we train GANs over images. An example is illustrated in Figure 3, where the samples from an MMD-GAN (Li et al., 2017a) trained on CelebA consist of both good and bad faces. For images, when quality is evaluated, the evaluation primarily focuses on the coherence of individual images, and the few bad ones are usually left out. For point clouds, in contrast, we need many sampled points together to get the representation of an object, and the presence of outlier points degrades the quality of the object. Thus, when training a generative model for point clouds, we need to ensure a much lower distance \(D(\mathbb{P}\| \mathbb{G})\) between the true distribution \(\mathbb{P}\) and the generator distribution \(\mathbb{G}\) than would be needed in the case of images.
We begin by noting that the popular Wasserstein GAN (Arjovsky et al., 2017) aims to optimize \(G\) by \(\min w(\mathbb{P},\mathbb{G})\), where \(w(\mathbb{P},\mathbb{G})\) is the Wasserstein distance between the true distribution \(\mathbb{P}\) and the generated distribution \(\mathbb{G}\) of \(G\). Many GAN works (e.g., Arjovsky et al. (2017)) approximate \(w(\mathbb{P},\mathbb{G})\) in the dual form (a maximization problem), such as (4), by neural networks. The resulting estimate \(W_{L}(\mathbb{P},\mathbb{G})\) is a lower bound of the true Wasserstein distance, as neural networks can only recover a subset of the 1-Lipschitz functions (Arora et al., 2017) required in the dual form.

![](images/3_0.jpg)
<center>Figure 3: Connection between good/bad points and faces generated from a GAN. </center>

However, a lower bound \(W_{L}(\mathbb{P},\mathbb{G})\) of \(w(\mathbb{P},\mathbb{G})\) may not be an ideal surrogate for solving the minimization problem \(\min w(\mathbb{P},\mathbb{G})\). In the optimal transport literature, the Wasserstein distance is instead usually estimated by an approximate matching cost, \(W_{U}(\mathbb{P},\mathbb{G})\), which gives an upper bound of the true Wasserstein distance.

We propose to combine, in general, a lower bound \(W_{L}\) and an upper bound estimate \(W_{U}\) by sandwiching the solution between the two, i.e., we solve the following minimization problem:

\[\min_{G} W_{U}(\mathbb{P},\mathbb{G}) \qquad \text{s.t.} \quad W_{U}(\mathbb{P},\mathbb{G}) - W_{L}(\mathbb{P},\mathbb{G}) < \lambda \quad (5)\]

The problem can be simplified and solved using the method of Lagrange multipliers as follows:

\[\min_{G} W_{s}(\mathbb{P},\mathbb{G}) := (1 - s)W_{U}(\mathbb{P},\mathbb{G}) + sW_{L}(\mathbb{P},\mathbb{G}) \quad (6)\]

By solving the new sandwiched problem (6), we show in the following lemma that under certain conditions we obtain a better estimate of the Wasserstein distance:

Lemma 1. Suppose we have two approximators to the Wasserstein distance: an upper bound \(W_{U}\) and a lower bound \(W_{L}\), such that \(\forall \mathbb{P},\mathbb{G}:(1 + \epsilon_{1})w(\mathbb{P},\mathbb{G})\leq W_{U}(\mathbb{P},\mathbb{G})\leq (1 + \epsilon_{2})w(\mathbb{P},\mathbb{G})\) and \(\forall \mathbb{P},\mathbb{G}:(1 - \epsilon_{2})w(\mathbb{P},\mathbb{G})\leq W_{L}(\mathbb{P},\mathbb{G})\leq (1 - \epsilon_{1})w(\mathbb{P},\mathbb{G})\) respectively, for some \(\epsilon_{2} > \epsilon_{1} > 0\) with \(\epsilon_{1} > \epsilon_{2} / 3\). Then, using the sandwiched estimator \(W_{s}\) from (6), we can achieve a tighter estimate of the Wasserstein distance than with either estimator alone, i.e.

\[\exists s:|W_{s}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|< \min \{|W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|,|W_{L}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|\} \quad (7)\]
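As a small numerical sanity check of Lemma 1, the snippet below uses illustrative constants (not values from the paper): \(w = 1\), \(\epsilon_1 = 0.2\), \(\epsilon_2 = 0.4\) (so \(\epsilon_1 > \epsilon_2/3\) holds), and an \(s\) inside the interval identified in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
w, eps1, eps2 = 1.0, 0.2, 0.4
s = 0.4  # any s in ((eps2 - eps1) / (eps2 + eps1), 0.5) = (1/3, 0.5) works

for _ in range(10_000):
    W_U = w * (1 + rng.uniform(eps1, eps2))  # upper estimate within its assumed band
    W_L = w * (1 - rng.uniform(eps1, eps2))  # lower estimate within its assumed band
    W_s = (1 - s) * W_U + s * W_L            # sandwiched estimator, eq. (6)
    # eq. (7): the sandwiched estimate is strictly closer to w than either bound
    assert abs(W_s - w) < min(abs(W_U - w), abs(W_L - w))
```

Here the worst-case error of \(W_s\) is \(((1-s)\epsilon_2 - s\epsilon_1)w = 0.16\), strictly below the \(\epsilon_1 w = 0.2\) error floor of either individual estimator.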
For \(W_{U}\) , we use Bertsekas (1985), which results in a fast \(\epsilon\) approximation of the Wasserstein distance estimate in primal form without solving non- trivial linear programming. We remark estimating Wasserstein distance \(w(\mathbb{P},\mathbb{G})\) with finite samples via its primal is only favorable to low dimensional data, such as point clouds. The error of empirical estimate in primal is \(O(1 / n^{1 / d})\) (Weed & Bach, 2017). When the dimension \(d\) is large (e.g. images), we cannot accurately estimate \(w(\mathbb{P},\mathbb{G})\) in primal as well as its upper bound with a small minibatch. For detailed discussion of finding lower and upper bound, please refer to Appendix A.2 and A.3. ## 4 RELATED WORKS Generative Adversarial Network (Goodfellow et al., 2014) aims to learn a generator that can sample data followed by the data distribution. Compelling results on learning complex data distributions with GAN have been shown on images (Karras et al., 2017), speech (Lamb et al., 2016), text (Yu et al., 2016; Hjelm et al., 2017), vedio (Vondrick et al., 2016) and 3D voxels (Wu et al., 2016). However, the GAN algorithm on 3D point cloud is still under explored (Achlioptas et al., 2017). Many alternative objectives for training GANs have been studied. Most of them are the dual form of \(f\) - divergence (Goodfellow et al., 2014; Mao et al., 2017; Nowozin et al., 2016), integral probability metrics (IPMs) (Zhao et al., 2016; Li et al., 2017a; Arjovsky et al., 2017; Gulrajani et al., 2017) or IPM extensions (Mroueh & Sercu, 2017; Mroueh et al., 2017). Genevay et al. (2018) learn the generative model by the approximated primal form of Wasserstein distance (Cuturi, 2013). Instead of training a generative model on the data space directly, one popular approach is combining with autoencoder (AE), which is called adversarial autoencoder (AAE) (Makhzani et al., 2015). AAE constrain the encoded data to follow normal distribution via GAN loss, which is similar to VAE (Kingma & Welling, 2013) by replacing the KL- divergence on latent space via any GAN loss. <--- Page Split ---> Tolstikhin et al. (2017) provide a theoretical explanation for AAE by connecting it with the primal form of Wasserstein distance. The other variant of AAE is training the other generative model to learn the distribution of the encoded data instead of enforcing it to be similar to a known distribution (Engel et al., 2017; Kim et al., 2017). Achlioptas et al. (2017) explore a AAE variant for point cloud. They use a specially- designed encoder network (Qi et al., 2017a) for learning a compressed representation for point clouds before training GAN on the latent space. However, their decoder is restricted to be a MLP which generates \(m\) fixed number of points, where \(m\) has to be pre- defined. That is, the output of their decoder is fixed to be \(3m\) for 3D point clouds, while the output of the proposed \(G_{x}\) is only 3 dimensional and \(G_{x}\) can generate arbitrarily many points by sampling different random noise \(z\) as input. Yang et al. (2018); Groueix et al. (2018b) propose similar decoders to \(G_{x}\) with fixed grids to break the limitation of Achlioptas et al. (2017) aforementioned, but they use heuristic Chamfer distance without any theoretical guarantee and do not exploit generative models for point clouds. The proposed PC- GAN can also be interpreted as an encoder- decoder formulation. However, the underlying interpretation is different. 
We start from De- Finetti theorem to learn both \(p(X|\theta)\) and \(p(\theta)\) with inference network interpretation of \(Q\) , while Achlioptas et al. (2017) focus on learning \(p(\theta)\) without modeling \(p(X|\theta)\) . Lastly, GAN for learning conditional distribution (conditional GAN) has been studied in images with single conditioning (Mirza & Osindero, 2014; Pathak et al., 2016; Isola et al., 2017; Chang et al., 2017) or multiple conditioning (Wang & Gupta, 2016). The case on point cloud is still under explored. Also, most of the works assume the conditioning is given (e.g. labels and base images) without learning the inference during the training. Training GAN with inference is studied by Donahue et al. (2016); Dumoulin et al. (2016); Li et al. (2017b); however, their goal is to infer the random noise \(z\) of generators and match the semantic latent variable to be similar to \(z\) . Li et al. (2018) is a parallel work aiming to learn GAN and unseen latent variable simultaneously, but they only study image and video datasets. ## 5 EXPERIMENTS In this section we demonstrate the point cloud generation capabilities of PC- GAN. As discussed in Section 4, we refer Achlioptas et al. (2017) as AAE as it could be treated as an AAE extension to point clouds and we use the implementation provided by the authors for experiments. The sandwitching objective \(W_{s}\) for PC- GAN combines \(W_{L}\) and \(W_{U}\) with the mixture 1:20 without tuning for all experiment. \(W_{L}\) is a GAN loss by combining Arjovsky et al. (2017) and Mroueh & Sercu (2017) (technical details are in Appendix A.3) and we adopt (Bertsekas, 1985) for \(W_{U}\) . We parametrize \(Q\) in PC- GAN by DeepSets (Zaheer et al., 2017). The review of DeepSets is in Appendix E. Other detailed configurations of each experiment can be found in Appendix F. ### 5.1 SYNTHETIC DATASETS We generate 2D circle point clouds. The center of circles follows a mixture of Gaussians \(\mathcal{N}\left(\left\{\pm 16\right\} \times \left\{\pm 16\right\} ,16I\right)\) with equal mixture weights. The radius of the circles was drawn from a uniform distribution \(Unif(1.6,6.4)\) . One sampled circle is shown in Figure 4a. For AAE, the output size of the decoder is \(500 \times 2\) for 500 points, and the output size of the encoder (latent code) is 20. The total number of parameters are \(24K\) . For PC- GAN, the inference network output size is 15. The total number of parameters of PC- GAN is only \(12K\) . We evaluated the conditional distributions on the 10,000 testing circles. We measured the empirical distributions of the centers and the radius of the generated circles conditioning on the testing data as shown in Figure 4. From Figure 4, both AAE and PC- GAN can successfully recover the center distribution, but AAE does not learn the radius distribution well even with larger latent code ![](images/4_0.jpg) <center>Figure 4: (a) (top) the true center distribution and (bottom) one example of a circle point cloud. (b-d) are the reconstructed center and radius distributions. </center> <--- Page Split ---> Table 1: Quantitative results of different models trained on different subsets of ModelNet40 and evaluated on the corresponding test set. ModelNet10 is a subset containing 10 classes of objects, while ModelNet40 is a full training set. AAE is trained using the code from Achlioptas et al. (2017). 
Table 1: Quantitative results of different models trained on different subsets of ModelNet40 and evaluated on the corresponding test set. ModelNet10 is a subset containing 10 classes of objects, while ModelNet40 is the full training set. AAE is trained using the code from Achlioptas et al. (2017). The PC-GAN variants are trained via the upper bound \(W_{U}\), the lower bound \(W_{L}\) and the sandwiching loss \(W_{s}\).

<table><tr><td rowspan="2">Data</td><td colspan="3">Distance to Face (D2F↓)</td><td colspan="3">Coverage (↑)</td></tr><tr><td>PC-GAN(W2)</td><td>AAE</td><td>PC-GAN(W1)</td><td>PC-GAN(W2)</td><td>PC-GAN(W1)</td><td>PC-GAN(W2)</td></tr><tr><td>Aeroplanes</td><td>1.89E+01</td><td>1.99E+01</td><td>1.53E+01</td><td>2.49E+01</td><td>1.95E+01</td><td>2.99E+02</td></tr><tr><td>Benches</td><td>1.09E+01</td><td>1.41E+01</td><td>1.05E+01</td><td>2.46E+01</td><td>4.44E+01</td><td>2.35E+01</td></tr><tr><td>Cars</td><td>4.39E+01</td><td>6.23E+01</td><td>4.25E+01</td><td>6.68E+01</td><td>2.35E+01</td><td>1.78E+01</td></tr><tr><td>Chairs</td><td>1.01E+01</td><td>1.08E+01</td><td>1.06E+01</td><td>1.08E+01</td><td>3.90E+01</td><td>1.82E+01</td></tr><tr><td>Cups</td><td>1.44E+03</td><td>1.79E+03</td><td>1.28E+03</td><td>3.01E+03</td><td>6.31E+01</td><td>3.31E+01</td></tr><tr><td>Guitars</td><td>2.16E+02</td><td>1.92E+02</td><td>1.97E+02</td><td>1.81E+02</td><td>2.25E+01</td><td>7.98E+02</td></tr><tr><td>Lamps</td><td>1.47E+03</td><td>1.60E+03</td><td>1.64E+03</td><td>2.77E+03</td><td>3.89E+01</td><td>2.32E+01</td></tr><tr><td>Laptops</td><td>2.43E+00</td><td>3.73E+00</td><td>2.65E+00</td><td>2.58E+00</td><td>4.31E+01</td><td>2.56E+01</td></tr><tr><td>Sofa</td><td>1.71E+01</td><td>1.64E+01</td><td>1.45E+01</td><td>2.76E+01</td><td>3.65E+01</td><td>1.62E+01</td></tr><tr><td>Tables</td><td>2.79E+00</td><td>2.96E+00</td><td>2.44E+00</td><td>3.69E+00</td><td>3.82E+01</td><td>2.59E+01</td></tr><tr><td>ModelNet10</td><td>5.77E+00</td><td>6.89E+00</td><td>6.03E+00</td><td>9.19E+00</td><td>3.47E+01</td><td>1.90E+01</td></tr><tr><td>ModelNet40</td><td>4.84E+01</td><td>5.86E+01</td><td>5.24E+01</td><td>7.96E+01</td><td>3.80E+01</td><td>1.85E+01</td></tr></table>

The gap in memory usage would be even larger if we configured AAE to generate more points, while the model size required for PC-GAN is independent of the number of points. The reason is that the MLP decoder adopted by Achlioptas et al. (2017) wastes parameters on nearby points. Using a much larger model (more parameters) could boost performance; however, it would still be restricted to generating a fixed number of points for each object, as discussed in Section 4.

### 5.2 STUDY ON MODELNET40

We consider the ModelNet40 (Wu et al., 2015) benchmark, which contains 40 classes of objects, with 9,843 training and 2,468 testing instances. We follow Achlioptas et al. (2017) and consider two settings: training on a single class of objects, and training on all 9,843 objects in the training set. Achlioptas et al. (2017) set the latent code size of AAE to 128 and 256 for these two settings, with total parameter counts of \(15M\) and \(15.2M\), respectively. Similarly, we set the output dimension of \(Q\) in PC-GAN to 128 and 256 for single-class and all-classes; the total parameter counts are \(1M\) and \(3M\), respectively.

Metrics for Quantitative Comparison. Firstly, we are interested in whether the learned \(G_{x}\) and \(Q\) can model the distribution of unseen test data. For each test point cloud, we infer the latent variable \(Q(X)\) and then use \(G_{x}\) to generate points. We then compare the distribution of the input point cloud with that of the conditionally generated point cloud. Many finite-sample estimators of \(f\)-divergences and IPMs could be used for this evaluation.
However, those finite-sample estimators are either biased or have high variance (Peyré et al., 2017; Wang et al., 2009; Póczos et al., 2012; Weed & Bach, 2017), and they cannot exploit infinitely many samples even when such samples are accessible. For ModelNet40, the meshes of the objects are available, and in many statistically guaranteed distance estimates, the adopted statistics are based on distances between nearest neighbors (Wang et al., 2009; Póczos et al., 2012). Therefore, we propose to measure performance with the following criteria. Given a point cloud \(\{x_{i}\}_{i = 1}^{n}\) and a mesh, which is a collection of faces \(\{F_{j}\}_{j = 1}^{m}\), we measure the distance to face (D2F) as

\[D2F\left(\{x_{i}\}_{i = 1}^{n},\{F_{j}\}_{j = 1}^{m}\right) = \frac{1}{n}\sum_{i = 1}^{n}\min_{j}\mathcal{D}(x_{i},F_{j}),\]

![](images/5_0.jpg)
<center>Figure 5: Sample mesh of ModelNet40 </center>

where \(\mathcal{D}(x_{i},F_{j})\) is the Euclidean distance from \(x_{i}\) to the face \(F_{j}\). This distance is similar to the Chamfer distance, which is commonly used for comparing images and point clouds (Achlioptas et al., 2017; Fan et al., 2017), computed here against infinitely many samples from the true distribution (the mesh). Nevertheless, an algorithm can achieve low or even zero D2F while covering only a small portion of the point cloud (mode collapse). Therefore, we are also interested in whether the generated points recover enough of the support of the distribution, and we compute the Coverage ratio as follows. For each generated point, we find its nearest face and treat that face as covered; we then compute the ratio of the faces of the mesh that are covered. A sample mesh is shown in Figure 5; the detailed parts have more faces (the faces are non-uniform), so it is difficult to get high coverage for AAE or PC-GAN trained with a limited number of sampled points. The coverage ratio nevertheless serves as an indicator of how much detail the model recovers. A sketch of both metrics follows.
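The snippet below is a simplified sketch of the two metrics; for brevity it approximates the point-to-face distance by the distance to the face centroid, whereas the D2F above uses the exact Euclidean point-to-face distance.

```python
import numpy as np

def d2f_and_coverage(points, face_centroids):
    """Approximate D2F and coverage for a (n, 3) point cloud against (m, 3) faces."""
    # Pairwise distances between every point and every face centroid: (n, m).
    d = np.linalg.norm(points[:, None, :] - face_centroids[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)                              # nearest face per point
    d2f = d[np.arange(len(points)), nearest].mean()         # mean nearest distance
    coverage = len(np.unique(nearest)) / len(face_centroids)  # fraction of faces hit
    return d2f, coverage
```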
![](images/6_0.jpg)
<center>Figure 6: Example reconstruction (conditional generation) on test objects. PC-GAN with sandwiching \((W_{s})\) is better at capturing fine details such as the wheels of an aeroplane or proper chair legs. </center>

The results are reported in Table 1. We compare four algorithms: AAE, and PC-GAN with three objectives, namely the upper bound \(W_{U}\) (\(\epsilon\)-approximated Wasserstein distance), the lower bound \(W_{L}\) (a GAN with \(L^{2}\) ball constraints and weight clipping), and the sandwiching loss \(W_{s}\) discussed in Section 3.2. The study with \(W_{U}\) and \(W_{L}\) also serves as an ablation test of \(W_{s}\).

Comparison between Upper Bound, Lower Bound and Sandwiching. Since \(W_{U}\) directly optimizes the distance between training and generated point clouds, \(W_{U}\) usually results in smaller D2F than \(W_{L}\) in Table 1. On the other hand, although \(W_{L}\) only recovers a lower bound estimate of the Wasserstein distance, its discriminator is known to focus on learning the support of the distribution (Bengio, 2018), which results in better coverage (support) than \(W_{U}\). Theoretically, the proposed sandwiching \(W_{s}\) results in a tighter Wasserstein distance estimate than \(W_{U}\) and \(W_{L}\) (Lemma 1). Based on the above discussion, it can also be understood as balancing D2F and coverage by combining \(W_{U}\) and \(W_{L}\) to reach a desirable middle ground. Empirically, we even observe that \(W_{s}\) results in better coverage than \(W_{L}\), with D2F competitive with \(W_{U}\). An intuitive explanation is that part of the discriminative work is offloaded to the \(W_{U}\) objective, so the discriminator can focus more on learning the support of the distribution. We argue that this difference is crucial for capturing object details. Some reconstructed point clouds of testing data are shown in Figure 6. For the aeroplane examples, \(W_{U}\) fails to capture the aeroplane tires, and \(W_{s}\) produces better tires than \(W_{L}\). For the chair example, \(W_{s}\) recovers better legs than \(W_{U}\) and a better seat cushion than \(W_{L}\). Lastly, we highlight that \(W_{s}\) outperforms the others more significantly when the training data is larger (ModelNet10 and ModelNet40) in Table 1.

Comparison between PC-GAN and AAE. In most cases, PC-GAN with \(W_{s}\) has lower D2F in Table 1, with fewer parameters as mentioned above. Similar to the argument in Section 5.1, although AAE uses larger networks, its decoder wastes parameters on nearby points. AAE only outperforms PC-GAN \((W_{s})\) on Guitar and Sofa in terms of D2F, since the variety of these two classes is low; it is easier for an MLP to learn the shared template (basis) of the point clouds. On the other hand, due to the limitation of the fixed number of output points and the Chamfer distance objective, AAE has worse coverage than PC-GAN. This is supported by Figure 6, where AAE also fails to recover the aeroplane tires.

Hierarchical Sampling. In Section 3, we proposed a hierarchical sampling process for point clouds. In the first hierarchy, the generator \(G_{\theta}\) samples an object \((\theta = G_{\theta}(u), u \sim p(u))\), while the second generator \(G_{x}\) samples points based on \(\theta\) to form the point cloud; a sketch of this process follows. Randomly sampled results, without any data given as input, are shown in Figure 7, and more results can be found in Appendix C. The point clouds are all smooth, structured and almost symmetric. This shows PC-GAN captures inherent symmetries and patterns in all the randomly sampled objects, even if the overall object is not perfectly formed, and highlights that the point-wise generation scheme encourages learning basic building blocks of objects.
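The following is a minimal sketch of the two-level sampling just described; `G_theta` and `G_x` stand for the trained top-level and point-level generators (assumed to be loaded), the noise sizes follow Appendix F (64-dimensional \(u\), 10-dimensional \(z\)), and conditioning by concatenation is one plausible mechanism, not necessarily the exact wiring of our networks.

```python
import torch

def sample_point_cloud(G_theta, G_x, n_points=2048):
    """Hierarchical sampling: draw one object code, then n_points i.i.d. points."""
    u = torch.randn(1, 64)
    theta = G_theta(u)                        # sample an object representation
    z = torch.randn(n_points, 10)             # fresh per-point noise
    theta = theta.expand(n_points, -1)        # share theta across all points
    return G_x(torch.cat([z, theta], dim=1))  # (n_points, 3) point cloud
```

Because every point is an i.i.d. draw given \(\theta\), `n_points` can be chosen freely at sampling time, unlike decoders with a fixed output size.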
![](images/7_0.jpg)
<center>Figure 7: Randomly sampled objects and corresponding point clouds from the hierarchical sampling. Even if there are some defects, the objects are smooth, symmetric and structured. </center>

Interpolation of Learned Manifold. We study whether interpolation between two objects in the latent space results in smooth changes. We interpolate the representations of two objects inferred by \(Q\) and use the generator \(G_{x}\) to sample points based on the interpolation. The inter-class result is shown in Figure 8. More studies on interpolation between rotations can be found in Appendix D.1.

![](images/7_1.jpg)
<center>Figure 8: Interpolating between latent representations \(Q(X)\) of a table and a chair point cloud. </center>

Generalization on Unseen Classes. Above, we studied the reconstruction of unseen testing objects, where PC-GAN still saw point clouds from the same class during training. Here we study a more challenging task: we train PC-GAN on the first 30 classes (in alphabetic order) and test on the remaining, fully unseen, 10 classes. Some reconstructed (conditionally generated) point clouds are shown in Figure 9; more (larger) results can be found in Appendix C. For objects from the unseen classes, the conditionally generated point clouds still recover the main shape and reasonable geometric structure, which confirms the advantage of the proposed PC-GAN: by enforcing a point-wise transformation, the model is forced to learn the underlying geometric structure and the shared building blocks, instead of naively copying the input from the conditioning. The resulting D2F and coverage are 57.4 and 0.36, only slightly worse than the 48.4 and 0.38 obtained by training on all 40 classes in Table 1 (ModelNet40), which also supports the claim of the good generalization ability of PC-GAN.

![](images/7_2.jpg)
<center>Figure 9: The reconstructed objects from unseen classes (not seen even in training). In each plot, the LHS is true data while the RHS is PC-GAN. PC-GAN generalizes well, as it can match patterns and symmetries from classes seen in the past to new unseen classes. </center>

More Studies. We also conduct other studies to make the experiments complete, including interpolation between different rotations, classification, and image to point cloud transformation. Due to space limits, these results can be found in Appendix D.

## 6 CONCLUSION

In this paper, we first showed that a straightforward extension of existing GAN algorithms is not applicable to point clouds. We then proposed a GAN modification (PC-GAN) that is capable of learning to generate point clouds, using ideas from both hierarchical Bayesian modeling and implicit generative models. We further proposed a sandwiching objective, which results in a tighter Wasserstein distance estimate theoretically and better performance empirically. In contrast to some existing methods (Achlioptas et al., 2017), PC-GAN can generate arbitrarily many i.i.d. points to form a point cloud without pre-specification. Quantitatively, PC-GAN achieves competitive or better results using smaller networks than existing methods. We also demonstrated that PC-GAN can capture delicate details of point clouds and generalize well even on unseen data. Our method learns "point-wise" transformations which encourage the model to learn the building components of the objects, instead of naively copying the whole object. We also demonstrated other interesting results, including point cloud interpolation and image to point cloud transformation. Although we only focused on 3D applications in this paper, our framework can be naturally generalized to higher dimensions. In the future we would like to explore higher-dimensional applications, where each 3D point can have other attributes, such as RGB colors and 3D velocity vectors.

## A DETAILS OF THE PROPOSED METHOD

## A.1 NEURAL NETWORK REALIZATION OF INFERENCE NETWORK

Our solution comprises a generator \(G_{x}(z,\psi)\) which takes in a noise source \(z\in \mathbb{R}^{d_{1}}\) and a descriptor \(\psi \in \mathbb{R}^{d_{2}}\) encoding information about the distribution of \(\theta\). For a given \(\theta_{0}\), the descriptor \(\psi\) would encode information about the distribution \(\delta (\theta - \theta_{0})\), and samples generated as \(x = G_{x}(z,\psi)\) would follow the distribution \(p(x|\theta_{0})\). More generally, \(\psi\) can be used to encode more complicated distributions over \(\theta\) as well.
In particular, it could be used to encode the posterior \(p(\theta |X)\) for a given sample set \(X\), such that \(x = G_{x}(z,\psi)\) follows the posterior predictive distribution:

\[p(x|X) = \int p(x|\theta)p(\theta |X)d\theta .\]

A major hurdle in taking this path is that \(X\) is a set of points, which can vary in size and in the permutation of its elements. This makes the design of \(Q\) complicated, as traditional neural networks cannot handle such inputs, and is possibly the reason for the absence of such a framework in the literature, despite it being a natural solution to the important problem of generative modeling of point clouds. We overcome this challenge by constructing the inference network from the permutation equivariant layers of Deep Sets (Zaheer et al., 2017). This allows it to handle a variable number of input points in arbitrary order while yielding a consistent descriptor \(\psi\).

After training \(G_{x}\) and the inference network \(Q\), we use the trained \(Q\) to collect the inferred codes \(Q(X)\) and train the generator \(G_{\theta}(u)\sim p(\theta)\) for the higher level of hierarchical sampling, where \(u\) is another noise source independent of \(z\). In addition to layer-wise training, joint training may further boost performance. The full generative process for sampling one point cloud can be represented as \(\{x_{i}\}_{i = 1}^{n} = \{G_{x}(z_{i},G_{\theta}(u)) \}_{i = 1}^{n}\), where \(z_{1},\ldots ,z_{n}\sim p(z)\) and \(u\sim p(u)\). We call the proposed GAN framework for learning to generate point clouds PC-GAN, as shown in Figure 2. The conditional distribution matching with a learned inference network in PC-GAN can also be interpreted as an encoder-decoder formulation (Kingma & Welling, 2013); the difference between it and point cloud autoencoders (Achlioptas et al., 2017; Yang et al., 2018) is discussed in Section 4.

## A.2 UPPER IMPLEMENTATION

The primal form of the Wasserstein distance is defined as

\[w(\mathbb{P},\mathbb{G}) = \inf_{\gamma \in \Gamma (\mathbb{P},\mathbb{G})}\int \| x - y\|_{1}d\gamma (x,y),\]

where \(\Gamma (\mathbb{P},\mathbb{G})\) is the set of couplings of \(\mathbb{P}\) and \(\mathbb{G}\). The Wasserstein distance is also known as the optimal transport (OT) or earth mover's distance (EMD). As the name suggests, when \(w(\mathbb{P},\mathbb{G})\) is estimated with a finite number of samples \(X = x_{1},\ldots ,x_{n}\) and \(Y = y_{1},\ldots ,y_{n}\), we find the one-to-one matching between \(X\) and \(Y\) such that the total pairwise distance is minimized; the resulting minimal total (average) pairwise distance is \(w(X,Y)\). In practice, finding the exact matching efficiently is non-trivial and still an open research problem (Peyré et al., 2017). Instead, we consider the approximation provided by Bertsekas (1985). It is an iterative algorithm in which each iteration operates like an auction: unassigned points \(x\in X\) bid simultaneously for their closest points \(y\in Y\), thereby raising the prices of those points. Once all bids are in, points are awarded to the highest bidder. The crux of the algorithm lies in designing a non-greedy bidding strategy. By construction, the algorithm is embarrassingly parallelizable, which is favorable for GPU implementation. One can show that the algorithm terminates with a valid matching and that the resulting matching cost \(W_{U}(X,Y)\) is an \(\epsilon\)-approximation of \(w(X,Y)\). Thus, the estimate serves as an upper bound:

\[w(X,Y)\leq W_{U}(X,Y)\leq (1 + \epsilon)w(X,Y). \quad (8)\]

We remark that estimating the Wasserstein distance \(w(\mathbb{P},\mathbb{G})\) with finite samples via the primal form is only favorable for low-dimensional data, such as point clouds. The error between \(w(\mathbb{P},\mathbb{G})\) and \(w(X,Y)\) is \(O(1 / n^{1 / d})\), where \(d\) is the data dimension (Weed & Bach, 2017). Therefore, for high-dimensional data, such as images, we cannot accurately estimate the Wasserstein distance in the primal, nor its upper bound, with a small minibatch. Finding a modified primal form with low sample complexity is also an open research problem (Cuturi, 2013; Genevay et al., 2018), and combining those into the proposed sandwiching objective for high-dimensional data is left for future work.
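The following is a compact serial sketch of an auction-style assignment for the \(\epsilon\)-approximate matching cost \(W_{U}\); the full (and parallelizable) algorithm, including the bidding refinements and \(\epsilon\)-scaling, is given by Bertsekas (1985). It assumes equally sized point sets.

```python
import numpy as np

def auction_match(cost, eps=1e-2):
    """Auction-style assignment on an (n, n) cost matrix; returns owner[j] = i."""
    n = cost.shape[0]
    price = np.zeros(n)
    owner = np.full(n, -1)
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        value = -(cost[i] + price)        # bidder i's value for each item j
        j = int(np.argmax(value))
        best = value[j]
        value[j] = -np.inf
        second = value.max()
        price[j] += best - second + eps   # raise the price by the bid increment
        if owner[j] >= 0:
            unassigned.append(int(owner[j]))  # the previous owner is displaced
        owner[j] = i
    return owner

def W_U(X, Y, eps=1e-2):
    """Approximate matching cost between two (n, d) point sets under the L1 ground metric."""
    cost = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)
    owner = auction_match(cost, eps)
    return cost[owner, np.arange(len(Y))].mean()
```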
## A.3 LOWER IMPLEMENTATION

The dual form of the Wasserstein distance is defined as

\[w(\mathbb{P},\mathbb{G}) = \sup_{f\in \mathcal{L}_{1}}\mathbb{E}_{x\sim \mathbb{P}}f(x) - \mathbb{E}_{x\sim \mathbb{G}}f(x), \quad (9)\]

where \(\mathcal{L}_{k}\) is the set of \(k\)-Lipschitz functions, i.e., functions whose Lipschitz constant is no larger than \(k\). In practice, approximating the supremum with deep neural networks parameterized by \(\phi\) under a constraint \(f_{\phi} \in \Omega_{\phi}\) (Arjovsky et al., 2017) results in the distance approximation

\[W_{L}(\mathbb{P},\mathbb{G}) = \max_{f_{\phi}\in \Omega_{\phi}}\mathbb{E}_{x\sim \mathbb{P}}f_{\phi}(x) - \mathbb{E}_{x\sim \mathbb{G}}f_{\phi}(x). \quad (10)\]

If there exists \(k\) such that \(\Omega_{\phi} \subseteq \mathcal{L}_{k}\), then \(W_{L}(\mathbb{P},\mathbb{G}) / k \leq w(\mathbb{P},\mathbb{G})\) for all \(\mathbb{P},\mathbb{G}\), i.e., \(W_{L}\) yields a lower bound. To enforce \(\Omega_{\phi} \subseteq \mathcal{L}_{k}\), Arjovsky et al. (2017) propose a weight clipping constraint \(\Omega_{c}\), which constrains every weight to lie in \([-c,c]\) and guarantees that \(\Omega_{c} \subseteq \mathcal{L}_{k}\) for some \(k\). However, choosing the clipping range \(c\) is non-trivial in practice: small ranges limit the capacity of the network, while large ranges cause numerical issues during training. On the other hand, several constraints (regularizers) beyond weight clipping have been proposed with better empirical performance, such as the gradient penalty (Gulrajani et al., 2017) and the \(L^{2}\) ball (Mroueh & Sercu, 2017). However, there is no guarantee that the resulting functions are still Lipschitz, or that the resulting distances are lower bounds of the Wasserstein distance. To take advantage of these regularizers while keeping the Lipschitz guarantee, we propose a simple variation that combines them with weight clipping, which always ensures Lipschitz functions.

Lemma 2. There exists \(k > 0\) such that

\[\max_{f\in \Omega_{c}\cap \Omega_{\phi}}\mathbb{E}_{x\sim \mathbb{P}}[f_{\phi}(x)] - \mathbb{E}_{x\sim \mathbb{G}}[f_{\phi}(x)]\leq \frac{1}{k} w(\mathbb{P},\mathbb{G}) \quad (11)\]

Note that, if \(c \to \infty\), then \(\Omega_{c} \cap \Omega_{\phi} = \Omega_{\phi}\). Therefore, by Lemma 2, for any regularization of the discriminator (Gulrajani et al., 2017; Mroueh & Sercu, 2017; Mroueh et al., 2017), we can always combine it with a weight clipping constraint \(\Omega_{c}\) to ensure a valid lower bound estimate of the Wasserstein distance, while enjoying the advantage that training is numerically stable for large \(c\), compared with the original weight-clipping WGAN (Arjovsky et al., 2017). In practice, we found that combining the \(L^{2}\) ball constraint with weight clipping leads to satisfactory performance.
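The following is a minimal sketch of one critic update combining a regularizer with weight clipping, assuming a PyTorch critic; the quadratic output penalty here is only a simplified stand-in for the \(L^{2}\) ball constraint of Mroueh & Sercu (2017), and the architecture and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
c = 0.5      # clipping range; large values remain tolerable thanks to the penalty
rho = 1e-3   # penalty weight (assumed hyperparameter)

def critic_step(real, fake):
    """One maximization step of W_L with an L2-ball-style penalty plus clipping."""
    f_real, f_fake = critic(real), critic(fake)
    loss = -(f_real.mean() - f_fake.mean())  # negate: optimizers minimize
    # Simplified surrogate penalty on the critic's second moment.
    loss = loss + rho * 0.5 * (f_real.pow(2).mean() + f_fake.pow(2).mean())
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Weight clipping keeps the critic in Omega_c, so it stays Lipschitz (Lemma 2).
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```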
We also studied the popular WGAN-GP (Gulrajani et al., 2017) with weight clipping to ensure Lipschitz continuity of the discriminator, but we found the \(L^{2}\) ball with weight clipping faster and more numerically stable to train.

## B TECHNICAL PROOF

Lemma 1. Suppose we have two approximators to the Wasserstein distance: an upper bound \(W_{U}\) and a lower bound \(W_{L}\), such that \(\forall \mathbb{P},\mathbb{G}:(1 + \epsilon_{1})w(\mathbb{P},\mathbb{G})\leq W_{U}(\mathbb{P},\mathbb{G})\leq (1 + \epsilon_{2})w(\mathbb{P},\mathbb{G})\) and \(\forall \mathbb{P},\mathbb{G}:(1 - \epsilon_{2})w(\mathbb{P},\mathbb{G})\leq W_{L}(\mathbb{P},\mathbb{G})\leq (1 - \epsilon_{1})w(\mathbb{P},\mathbb{G})\) respectively, for some \(\epsilon_{2} > \epsilon_{1} > 0\) with \(\epsilon_{1} > \epsilon_{2} / 3\). Then, using the sandwiched estimator \(W_{s}\) from (6), we can achieve a tighter estimate of the Wasserstein distance than with either estimator alone, i.e.

\[\exists s:|W_{s}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|< \min \{|W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|,|W_{L}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|\} \quad (12)\]

Proof. We prove the claim by showing that the LHS is at most \(\epsilon_{1}w(\mathbb{P},\mathbb{G})\), which is a lower bound for the RHS.

\[\begin{array}{r l} & {|W_{s}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|}\\ & {\quad = |(1 - s)W_{U}(\mathbb{P},\mathbb{G}) + sW_{L}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|}\\ & {\quad = |(1 - s)(W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})) - s(w(\mathbb{P},\mathbb{G}) - W_{L}(\mathbb{P},\mathbb{G}))|}\\ & {\quad \leq \max \{(1 - s)\underbrace{(W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G}))}_{\leq \epsilon_{2}w(\mathbb{P},\mathbb{G})},\ s\underbrace{(w(\mathbb{P},\mathbb{G}) - W_{L}(\mathbb{P},\mathbb{G}))}_{\leq \epsilon_{2}w(\mathbb{P},\mathbb{G})}\}}\\ & {\quad \quad -\min \{(1 - s)\underbrace{(W_{U}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G}))}_{\geq \epsilon_{1}w(\mathbb{P},\mathbb{G})},\ s\underbrace{(w(\mathbb{P},\mathbb{G}) - W_{L}(\mathbb{P},\mathbb{G}))}_{\geq \epsilon_{1}w(\mathbb{P},\mathbb{G})}\}}\\ & {\quad \leq \big(\max \{(1 - s),s\} \epsilon_{2} - \min \{(1 - s),s\} \epsilon_{1}\big)w(\mathbb{P},\mathbb{G})} \end{array} \quad (13)\]

where we used \(|a - b| \leq \max \{a,b\} - \min \{a,b\}\) for \(a,b \geq 0\). Without loss of generality we can assume \(s < 0.5\), which brings us to

\[|W_{s}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})|\leq \big((1 - s)\epsilon_{2} - s\epsilon_{1}\big)w(\mathbb{P},\mathbb{G}). \quad (14)\]

Now if we choose \(\frac{\epsilon_{2} - \epsilon_{1}}{\epsilon_{2} + \epsilon_{1}} < s < 0.5\), which is a non-empty interval since \(\epsilon_{1} > \epsilon_{2} / 3\), then \(|W_{s}(\mathbb{P},\mathbb{G}) - w(\mathbb{P},\mathbb{G})| < \epsilon_{1}w(\mathbb{P},\mathbb{G})\) as desired.

Lemma 2. There exists \(k > 0\) such that

\[\max_{f\in \Omega_{c}\cap \Omega_{\phi}}\mathbb{E}_{x\sim \mathbb{P}}[f_{\phi}(x)] - \mathbb{E}_{x\sim \mathbb{G}}[f_{\phi}(x)]\leq \frac{1}{k} w(\mathbb{P},\mathbb{G}) \quad (15)\]

Proof. Since weight-clipped networks are Lipschitz with a constant depending on \(c\), there exists \(k\) such that \(\max_{f\in \Omega_{c}}\mathbb{E}_{x\sim \mathbb{P}}[f_{\phi}(x)] - \mathbb{E}_{x\sim \mathbb{G}}[f_{\phi}(x)]\leq \frac{1}{k} w(\mathbb{P},\mathbb{G})\), and it is then clear that

\[\max_{f\in \Omega_{c}\cap \Omega_{\phi}}\mathbb{E}_{x\sim \mathbb{P}}[f_{\phi}(x)] - \mathbb{E}_{x\sim \mathbb{G}}[f_{\phi}(x)]\leq \max_{f\in \Omega_{c}}\mathbb{E}_{x\sim \mathbb{P}}[f_{\phi}(x)] - \mathbb{E}_{x\sim \mathbb{G}}[f_{\phi}(x)]\leq \frac{1}{k} w(\mathbb{P},\mathbb{G}). \quad (16)\]

## C LARGER RESULTS

Larger and more extensive hierarchical sampling results discussed in Section 5.2 can be found in Figure 10. The reconstruction results on unseen classes are shown in Figure 11.

## D ADDITIONAL STUDY

## D.1 INTERPOLATION BETWEEN ROTATIONS

It is also popular to show intra-class interpolation. In addition to simple intra-class interpolations, where the objects are almost aligned, we present an interesting study on interpolations between rotations. During training, we only rotate the data by 8 possible angles for augmentation; here we show that the model generalizes to other unseen rotations, as shown in Figure 12. However, if we linearly interpolate the latent code, the resulting change is scattered and not smooth, as shown in Figure 12. Instead of using linear interpolation, we train a 2-layer MLP with a hidden layer size limited to 16, whose input is the angle and whose output is the corresponding latent representation of the rotated object. We then generate the codes for rotated planes with this trained MLP. This suggests that although the transformation path of rotation in the latent space is not linear, it follows a smooth trajectory. It may also suggest that the geodesic path on the learned manifold is not necessarily linear between rotations. Finding the geodesic path with a principled method (Shao et al., 2017) and understanding the geometry of the manifold of point clouds are worth deeper study as future work.
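A minimal sketch of the angle-to-latent regressor above follows; the 16-unit hidden layer matches the description, while the activation, optimizer, and latent size are assumptions.

```python
import torch
import torch.nn as nn

latent_dim = 128  # latent size used for single-class models (Appendix F)
angle_to_code = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, latent_dim))
opt = torch.optim.Adam(angle_to_code.parameters(), lr=1e-3)

def fit(angles, codes, steps=1000):
    """Regress latent codes Q(rotate(X, angle)) from the rotation angle."""
    for _ in range(steps):
        pred = angle_to_code(angles)          # angles: (N, 1) tensor in radians
        loss = (pred - codes).pow(2).mean()   # codes: (N, latent_dim) inferred by Q
        opt.zero_grad(); loss.backward(); opt.step()

# Unseen rotations are then decoded as G_x(z, angle_to_code(new_angle)).
```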
## D.2 CLASSIFICATION RESULTS

We evaluate the quality of the representation acquired from the learned inference network \(Q\). We train the inference network \(Q\) and the generator \(G_{x}\) on the training split of ModelNet40 with the data augmentation mentioned above, learning the generative model without label information. We then extract the latent representation \(Q(X)\) for each point cloud and train a linear SVM on these representations with their labels; a sketch of this protocol is given at the end of this subsection. We apply the same setting to a linear classifier on the latent codes of Achlioptas et al. (2017). We only sample 1000 points as input for our inference network \(Q\). Thanks to the Deep Sets architecture, the inference network is invariant to the number of points, so we can feed different numbers of points to the trained inference network for evaluation. Because of the randomness of sampling points for extracting the latent representation, we repeat the experiments 20 times and report the average accuracy and standard deviation on the testing split in Table 2. Using 1000 points, we are already better than Achlioptas et al. (2017) with 2048 points, and competitive with the supervised learning algorithm Deep Sets. We also follow the same protocol as Achlioptas et al. (2017); Wu et al. (2016): we train on ShapeNet55 and test the accuracy on ModelNet40. Compared with existing unsupervised learning algorithms, PC-GAN has the best performance, as shown in Table 3.

![](images/14_0.jpg)
<center>Figure 10: Randomly sampled objects and corresponding point clouds from the hierarchical sampling. Even if there are some defects, the objects are smooth, symmetric and structured. It suggests PC-GAN captures inherent patterns and learns basic building blocks of objects. </center>

Table 2: Classification accuracy results.

<table><tr><td>Method</td><td># points</td><td>Accuracy</td></tr><tr><td>PC-GAN</td><td>1000</td><td>87.5 ± .6%</td></tr><tr><td>PC-GAN</td><td>2048</td><td>87.8 ± .2%</td></tr><tr><td>AAE (Achlioptas et al., 2017)</td><td>2048</td><td>85.5 ± .3%</td></tr><tr><td>Deep Sets (Zaheer et al., 2017)</td><td>1000</td><td>87 ± 1%</td></tr><tr><td>Deep Sets (Zaheer et al., 2017)</td><td>5000</td><td>90 ± .3%</td></tr></table>

We note that Yang et al. (2018) use additional geometry features, either appending pre-calculated features to the 3-dimensional coordinates as input or using a more advanced grouping structure, to achieve better performance. Those techniques are all applicable to PC-GAN; we leave leveraging such geometry information in the proposed PC-GAN framework for future work.
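The following is a minimal sketch of the evaluation protocol above; `Q`, which maps an `(n_points, 3)` array to a latent vector, stands in for the trained inference network and is assumed to be provided, as is the data loading.

```python
import numpy as np
from sklearn.svm import LinearSVC

def evaluate(Q, train_clouds, train_labels, test_clouds, test_labels,
             n_points=1000, seed=0):
    """Linear SVM accuracy on latent codes extracted from subsampled point clouds."""
    rng = np.random.default_rng(seed)
    def codes(clouds):
        # Randomly subsample n_points per cloud, then extract Q(X).
        return np.stack([Q(c[rng.choice(len(c), n_points, replace=False)])
                         for c in clouds])
    clf = LinearSVC().fit(codes(train_clouds), train_labels)
    return clf.score(codes(test_clouds), test_labels)

# Repeating evaluate() over several seeds yields the mean/std reported in Table 2.
```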
(2018) use additional geometry features, appending pre-calculated features to the 3-dimensional coordinates as input, or use a more advanced grouping structure, to achieve better performance. Those techniques are all applicable to PC-GAN; we leave leveraging such geometry information in the proposed PC-GAN framework for future work. <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 11: The reconstructed objects from unseen categories. In each plot, the LHS is true data while the RHS is PC-GAN. PC-GAN generalizes well, as it can match patterns and symmetries from categories seen in the past to new unseen categories. </center> ![](images/15_1.jpg) <center>Figure 12: Interpolating between rotations of an aeroplane, using our latent space representation. </center> ## D.3 IMAGES TO POINT CLOUD Here we demonstrate a potential extension of the proposed PC-GAN to image-to-point-cloud applications. After training \(Q\) as described in Section 3 and Appendix A.1, instead of learning \(G_{\theta}\) for hierarchical sampling, we train a regressor \(R\) , where the input is the different views of the point cloud \(X\) and the output is \(Q(X)\) . In this proof-of-concept experiment, we use the 12-view data and the ResNet18 architecture of Su et al. (2015), changing the output size to 256. Example results on reconstructing testing data are shown in Figure 13. A straightforward extension is end-to-end training instead of the two-stage approach adopted here. Also, after aligning objects and taking a representative view, along with traditional ICP techniques, we could perform single-view to point cloud transformation as in Choy et al. (2016); Fan et al. (2017); Hane et al. (2017); Groueix et al. (2018a); this is not the main focus of this paper and we leave it for future work. <--- Page Split ---> Table 3: Classification accuracy results (Trained on ShapeNet55). <table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>SPH (Kazhdan et al., 2003)</td><td>68.2%</td></tr><tr><td>T-L Network (Girdhar et al., 2016)</td><td>74.4%</td></tr><tr><td>LFD (Chen et al., 2003)</td><td>75.5%</td></tr><tr><td>VConv-DAE (Sharma et al., 2016)</td><td>75.5%</td></tr><tr><td>3D GAN (Wu et al., 2016)</td><td>83.3%</td></tr><tr><td>AAE (Achlioptas et al., 2017)</td><td>84.5%</td></tr><tr><td>PC-GAN</td><td>86.9%</td></tr></table> ![](images/16_0.jpg) <center>Figure 13: Image to Point Cloud </center> ## E DEEP SETS (PERMUTATION EQUIVARIANCE LAYERS) We briefly review the notion of Permutation Equivariance Layers proposed by Zaheer et al. (2017) as background for this paper. For more details, please refer to Zaheer et al. (2017). Zaheer et al. (2017) propose a generic framework of deep learning for set data. The building block, which can be stacked into deep neural networks, is called the Permutation Equivariance Layer. One example of a Permutation Equivariance Layer is defined as \[f(x_{i}) = \sigma (x_{i} + \gamma \mathrm{maxpool}(X)),\] where \(\sigma\) can be any function (e.g. parametrized by a neural network) and \(X = \{x_{1},\ldots ,x_{n}\}\) is an input set. Also, the max pooling operation can be replaced with mean pooling. We note that PointNet (Qi et al., 2017a) is a special case of the Permutation Equivariance Layer under a suitable choice of \(\sigma (\cdot)\) . In our experiments, we follow Zaheer et al. (2017) and set \(\sigma\) to be a linear layer with output size \(h\) followed by a nonlinear activation function. ## F EXPERIMENT SETTINGS ## F.1 SYNTHETIC DATA The batch size is fixed to be 64.
We sampled 10,000 samples for training and testing. For the inference network, we stack 3 mean Permutation Equivariance Layers (Zaheer et al., 2017), where the hidden layer size (the output size of the first two layers) is 30 and the final output size is 15. The activation function is SoftPlus. The generator is a 5-layer MLP, where the hidden layer size is set to 30. The discriminator is a 4-layer MLP with hidden layer size 30. For Achlioptas et al. (2017), we modify their implementation by setting the number of encoder filters to [30, 30, 30, 30, 15], while the hidden layer width of the decoder is 10 or 20, except for the output layer. The decoder is increased from 3 to 4 layers to provide more capacity. ## F.2 MODELNET40 We follow Zaheer et al. (2017) for pre-processing. For each object, we sample 10,000 points from the mesh representation and normalize them to have zero mean (for each axis) and unit (global) variance. During training, we augment the data by uniformly rotating by \(0,\pi /8,\ldots ,7\pi /8\) rad in the \(x\)-\(y\) plane. The random noise \(z_{2}\) of PC-GAN is fixed to be 10-dimensional for all experiments. For \(Q\) in the single-class model, we stack 3 max Permutation Equivariance Layers with output size 128 for every layer. On top of the stack, we have a 2-layer MLP with the same width for the output (a minimal sketch of this architecture appears at the end of this section). The generator \(G_{x}\) is a 4-layer MLP, where the hidden layer size is 128 and the output size is 3. <--- Page Split ---> The discriminator is a 4-layer MLP with hidden layer size 128. The random sources \(u\) and \(z\) are set to be 64- and 10-dimensional, respectively, and are sampled from standard normal distributions. For training on the whole ModelNet40 training set, we increase the width to 256. The generator \(G_{x}\) is a 5-layer MLP, where the hidden layer size is 256 and the output size is 3. The discriminator is a 5-layer MLP with hidden layer size 256. For hierarchical sampling, the top generator \(G_{\theta}\) and discriminator are both 5-layer MLPs with hidden layer size 256. For AAE, we follow every setting used in Achlioptas et al. (2017), where the latent code size is 128 and 256 for the single-class and whole-ModelNet40 models, respectively. <--- Page Split --->
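As referenced in Appendix F.2, the following is a minimal PyTorch sketch of a max Permutation Equivariance Layer stack for the inference network \(Q\); the class names, the placement of the set-level pooling, and the SoftPlus activations are our own illustrative assumptions layered on the settings above, not the authors' actual code.

```python
import torch
import torch.nn as nn

class PermEqui(nn.Module):
    """One Permutation Equivariance Layer: f(x_i) = sigma(x_i + gamma * pool(X)),
    with sigma a linear layer followed by a nonlinearity (Appendix E)."""
    def __init__(self, d_in, d_out, pool="max"):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1))
        self.linear = nn.Linear(d_in, d_out)
        self.act = nn.Softplus()
        self.pool = pool

    def forward(self, x):  # x: [batch, n_points, d_in]
        if self.pool == "max":
            pooled = x.max(dim=1, keepdim=True).values
        else:
            pooled = x.mean(dim=1, keepdim=True)
        return self.act(self.linear(x + self.gamma * pooled))

class InferenceNetQ(nn.Module):
    """Sketch of Q for the single-class model: three max layers of width 128,
    pooled to a permutation-invariant code, then a 2-layer MLP head. Where
    exactly the set-level pooling occurs is our assumption."""
    def __init__(self, d_in=3, width=128):
        super().__init__()
        self.equivariant = nn.Sequential(
            PermEqui(d_in, width), PermEqui(width, width), PermEqui(width, width))
        self.head = nn.Sequential(
            nn.Linear(width, width), nn.Softplus(), nn.Linear(width, width))

    def forward(self, x):  # x: [batch, n_points, 3]
        h = self.equivariant(x).max(dim=1).values  # invariant set summary
        return self.head(h)
```

Because the layers only mix each point with a pooled summary of the whole set, the stack is equivariant to point order, which is what allows evaluation with a different number of input points than used in training.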
reject
Reject
5.333333
ICLR_2019_paper_1123
iclr
2,019
# LEARNING PROTEIN STRUCTURE WITH A DIFFERENTIABLE SIMULATOR John Ingraham \(^{1,2}\) , Adam Riesselman \(^{1}\) , Chris Sander \(^{1,2,3}\) , Debora Marks \(^{1,3}\) \(^{1}\) Harvard Medical School \(^{2}\) Dana- Farber Cancer Institute \(^{3}\) Broad Institute of Harvard and MIT ## ABSTRACT The Boltzmann distribution is a natural model for many systems, from brains to materials and biomolecules, but is often of limited utility for fitting data because Monte Carlo algorithms are unable to simulate it in available time. This gap between the expressive capabilities and sampling practicalities of energy- based models is exemplified by the protein folding problem, since energy landscapes underlie contemporary knowledge of protein biophysics but computer simulations are challenged to fold all but the smallest proteins from first principles. In this work we aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data. We compose a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end- to- end- differentiable model of atomic protein structure given amino acid sequence information. We introduce techniques for stabilizing backpropagation under long roll- outs and demonstrate the model's capacity to make multimodal predictions and to, in some cases, generalize to unobserved protein fold types when trained on a large corpus of protein structures. ## 1 INTRODUCTION Many natural systems, such as cells in a tissue or atoms in a protein, organize into complex structures from simple underlying interactions. Explaining and predicting how macroscopic structures such as these arise from simple interactions is a major goal of science and, increasingly, machine learning. The Boltzmann distribution is a foundational model for relating local interactions to system behavior, but can be difficult to fit to data. Given an energy function \(U_{\theta}[x]\) , the probability of a system configuration \(x\) scales exponentially with energy as \[p_{\theta}(x) = \frac{1}{Z}\exp \left(-U_{\theta}[x]\right), \quad (1)\] where the (typically intractable) constant \(Z\) normalizes the distribution. Importantly, simple energy functions \(U_{\theta}[x]\) consisting of weak, local interactions can collectively encode complex system behaviors, such as the structures of materials and molecules or, when endowed with latent variables, the statistics of images, sound, and text (Ackley et al., 1985; Salakhutdinov & Larochelle, 2010). Unfortunately, learning model parameters \(\hat{\theta}\) and generating samples \(x \sim p_{\theta}(x)\) of the Boltzmann distribution is difficult in practice, as these procedures depend on expensive Monte Carlo simulations that may struggle to mix effectively. These difficulties have driven a shift towards generative models that are easier to learn and sample from, such as directed latent variable models and autoregressive models (Goodfellow et al., 2016). The protein folding problem provides a prime example of both the power of energy- based models at describing complex relationships in data as well as the challenge of generating samples from them. Decades of research in biochemistry and biophysics support an energy landscape theory of <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: An unrolled simulator as a model for protein structure. 
NEMO combines a neural energy function for coarse protein structure, a stochastic simulator based on Langevin dynamics with learned (amortized) initialization, and an atomic imputation network to build atomic coordinate output from sequence information. It is trained end-to-end by backpropagating through the unrolled folding simulation. </center> protein folding (Dill et al., 2017), in which the folds that natural protein sequences adopt are those that minimize free energy. Without the availability of external information such as coevolutionary information (Marks et al., 2012) or homologous structures (Martí-Renom et al., 2000) to constrain the energy function, however, contemporary simulations are challenged to generate globally favorable low-energy structures in available time. How can we get the representational benefits of energy-based models with the sampling efficiency of directed models? Here we explore a potential solution of directly training an unrolled simulator of an energy function as a model for data. By directly training the sampling process, we eschew the question 'when has the simulator converged' and instead demand that it produce a useful answer in a fixed amount of time. Leveraging this idea, we construct an end-to-end differentiable model of protein structure that is trained by backpropagation through folding (Figure 1). NEMO (Neural energy modeling and optimization) can learn at scale to generate 3D protein structures consisting of hundreds of points directly from sequence information. Our main contributions are:

- Neural energy simulator model for protein structure that composes a deep energy function, unrolled Langevin dynamics, and an atomic imputation network for an end-to-end differentiable model of protein structure given sequence information
- Efficient sampling algorithm that is based on a transform integrator for efficient sampling in transformed coordinate systems
- Stabilization techniques for long roll-outs of simulators that can exhibit chaotic dynamics and, in turn, exploding gradients during backpropagation
- Systematic analysis of combinatorial generalization with a new dataset of protein sequence and structure

### 1.1 RELATED WORK

Protein modeling Our model builds on a long history of coarse-grained modeling of protein structure (Kolinski et al., 1998; Kmiecik et al., 2016). Recently, multiple groups have demonstrated how to learn full force fields using likelihood-based approaches (Jumper et al., 2018; Krupa et al., 2017), similar to our maximum likelihood loss (but without backpropagation through folding for fast sampling). While this work was in progress, two groups reported neural models of protein structure (AlQuraishi, 2018; Anand & Huang, 2018), where the former focused on modeling structure in terms of backbone angles and the latter in terms of residue-residue distances. We show how an energy function provides a natural framework to integrate both kinds of constraints, which in turn is important for achieving sample-efficient structural generalization. Learning to infer or sample Structured prediction includes a long history of casting predictions in terms of energy minimization (LeCun et al., 2006). Recently, others have built hybrid neural networks that use differentiable optimization as a building block in neural architectures (Wang et al.,
(A) The energy function is formulated as a Markov Random Field with structure-based features and sequence-based weights computed by neural networks (Figure 6). (B) To rapidly sample low-energy configurations, the Langevin dynamics simulator leverages both (i) an internal coordinate parameterization, which is more effective for global rearrangements, and (ii) a Cartesian parameterization, which is more effective for localized structural refinement. (C) The base features of the structure network are rotationally and translationally invariant internal coordinates (not shown), pairwise distances, and pairwise orientations. </center> 2016; Amos & Kolter, 2017; Belanger & McCallum, 2016). Structured Prediction Energy Networks (SPENs) with unrolled optimization (Belanger et al., 2017) are a highly similar approach to ours, differing in terms of the use of optimization rather than sampling. Additional methodologically related work includes approaches to learn energy functions and samplers simultaneously (Kim & Bengio, 2016; Wang & Liu, 2017; Dai et al., 2017; Song et al., 2017; Chen et al., 2018a), to learn efficient MCMC operators (Song et al., 2017; Levy et al., 2018), to build expressive approximating distributions with unrolled Monte Carlo simulations (Salimans et al., 2015; Titsias, 2017), and to learn the parameters of simulators with implicitly defined likelihoods<sup>1</sup> (Mohamed & Lakshminarayanan, 2016; Tran et al., 2017). ## 2 MODEL Overview NEMO is an end- to- end differentiable model of protein structure \(X\) conditioned on sequence information \(s\) consisting of three components (Figure 1): (i) a neural energy function \(U_{\theta}[x; s]\) for coarse grained structure \(x\) given sequence, (ii) an unrolled simulator that generates approximate samples from \(U\) via internal coordinate Langevin dynamics (§ 2.3), and (iii) an imputation network that generates an atomic model \(X\) from the final coarse- grained sample \(x^{(T)}\) (§ 2.4). All components are trained simultaneously via backpropagation through the unrolled process. ### 2.1 REPRESENTATION Proteins Proteins are linear polymers (sequences) of amino acids that fold into defined 3D structures. The 20 natural amino acids have a common monomer structure \([- (\mathrm{N - H}) - (\mathrm{C - R}) - (\mathrm{C = O}) - ]\) with variable side- chain \(\mathbb{R}\) groups that can differ in properties such as hydrophobicity, charge, and ability to form hydrogen bonds. When placed in solvent (such as water or a lipid membrane), interactions between the side- chains, backbone, and solvent drive proteins into particular 3D configurations ('folds'), which are the basis for understanding protein properties such as biochemical activity, ligand binding, and interactions with drugs. Coordinate representations We predict protein structure \(X\) in terms of 5 positions per amino acid: the four heavy atoms of the backbone (N, \(\mathrm{C}_{\alpha}\) , and carbonyl \(\mathrm{C} = \mathrm{O}\) ) and the center of mass of <--- Page Split ---> the side chain \(\mathbb{R}\) group. While it is well- established that the locations of \(C_{\alpha}\) carbons are sufficient to reconstruct a full atomic structure (Kmiecik et al., 2016), we include these additional positions for evaluating backbone hydrogen bonding (secondary structure) and coarse side- chain placement. 
Internally, the differentiable simulator generates an initial coarse-grained structure (1-position-per-amino-acid) with the loss function targeted to the midpoint of the \(C_{\alpha}\) carbon and the side chain center of mass. Sequence conditioning We consider two modes for conditioning our model on sequence information: (1) 1-seq, in which \(s\) is an \(L \times 20\) matrix containing a one-hot encoding of the amino acid sequence, and (2) Profile, in which \(s\) is an \(L \times 40\) matrix encoding both the amino acid sequence and a profile of evolutionarily related sequences (§ B.7). Internal coordinates In contrast to Cartesian coordinates \(x\) , which parameterize structure in terms of absolute positions of points \(\boldsymbol{x}_{i} \in \mathbb{R}^{3}\) , internal coordinates \(z\) parameterize structure in terms of relative distances and angles between points. We adopt a standard convention for internal coordinates of chains (Parsons et al., 2005) where each point \(\boldsymbol{x}_{i}\) is placed in a spherical coordinate system defined by the three preceding points \(\boldsymbol{x}_{i - 1}, \boldsymbol{x}_{i - 2}, \boldsymbol{x}_{i - 3}\) in terms of a radius (bond length\(^{2}\)) \(b_{i} \in (0, \infty)\) , a polar angle (bond angle) \(a_{i} \in [0, \pi)\) , and an azimuthal angle (dihedral angle) \(d_{i} \in [0, 2 \pi)\) (Figure 2B). We define \(\boldsymbol{z}_{i} = \{\hat{b}_{i}, \hat{a}_{i}, d_{i}\}\) , where \(\hat{b}_{i}, \hat{a}_{i}\) are unconstrained parameterizations of \(b_{i}\) and \(a_{i}\) (§ A.1). The transformation \(\boldsymbol{x} = \mathcal{F}(\boldsymbol{z})\) from internal coordinates to Cartesian is then defined by the recurrence \[\boldsymbol{x}_{i} = \boldsymbol{x}_{i - 1} + b_{i}\left[\hat{\boldsymbol{u}}_{i - 1}\;\;\hat{\boldsymbol{n}}_{i - 1}\times \hat{\boldsymbol{u}}_{i - 1}\;\;\hat{\boldsymbol{n}}_{i - 1}\right]\left[ \begin{array}{c}\cos (\pi -a_{i})\\ \sin (\pi -a_{i})\cos (d_{i})\\ \sin (\pi -a_{i})\sin (d_{i}) \end{array} \right],\] where \(\hat{\boldsymbol{u}}_{i} = \frac{\boldsymbol{x}_{i} - \boldsymbol{x}_{i - 1}}{\| \boldsymbol{x}_{i} - \boldsymbol{x}_{i - 1}\|}\) is a unit vector from \(\boldsymbol{x}_{i - 1}\) to \(\boldsymbol{x}_{i}\) and \(\hat{\boldsymbol{n}}_{i} = \frac{\hat{\boldsymbol{u}}_{i - 1}\times\hat{\boldsymbol{u}}_{i}}{\| \hat{\boldsymbol{u}}_{i - 1}\times\hat{\boldsymbol{u}}_{i}\|}\) is a unit vector normal to each bond plane. The inverse transformation \(\boldsymbol{z} = \mathcal{F}^{- 1}(\boldsymbol{x})\) is simpler to compute, as it only involves local (and fully parallelizable) calculations of distances and angles (§ A.1). ### 2.2 NEURAL ENERGY FUNCTION Deep Markov Random Field We model the distribution of a structure \(\boldsymbol{x}\) conditioned on a sequence \(s\) with the Boltzmann distribution, \(p_{\boldsymbol{\theta}}(\boldsymbol {x}|\boldsymbol {s}) = \frac{1}{Z}\exp \left(- U_{\boldsymbol{\theta}}[\boldsymbol {x};\boldsymbol {s}]\right)\) , where \(U_{\boldsymbol{\theta}}[\boldsymbol {x};\boldsymbol {s}]\) is a sequence-conditioned energy function parameterized by a neural network.
Our approach is compatible with any differentiable energy function \(U[\boldsymbol {x};\boldsymbol {s}]\) , though we focus on a decomposition \[U_{\boldsymbol{\theta}}[\boldsymbol {x};\boldsymbol {s}] = \sum_{i}l_{i}(\boldsymbol {s};\boldsymbol {\theta})f_{i}(\boldsymbol {x};\boldsymbol {\theta}), \quad (2)\] which is a Markov Random Field with coefficients \(\{l_{i}(\boldsymbol {s};\boldsymbol {\theta})\}_{i = 1}^{M}\) computed by a sequence network and structural features \(\{f_{i}(\boldsymbol {x};\boldsymbol {\theta})\}_{i = 1}^{M}\) computed by a structure network (Figure 2A). This decomposition facilitates (i) increased interpretability, as the (learned) structural features are independent of sequence, and (ii) increased computational efficiency, as the sequence-based coefficients can be computed once and reused throughout a simulation. Sequence network The sequence network takes as input one-dimensional sequence information \(s\) and outputs: (1) Energetic coefficients, a set of 1- and 2-dimensional sequence features \(\{l_{i}(\boldsymbol {s};\boldsymbol {\theta})\}_{i = 1}^{M}\) , (2) Simulator initial state \(\boldsymbol{z}^{(0)}\) , (3) Simulator hyperparameters (the preconditioning matrix \(C\) ), and (4) Predicted secondary structure (Figure 6). It is parameterized by a combination of 1D, 2D, and graph convolutions (Gilmer et al., 2017) (§ A). Structure network The structure network takes as input a coarse-grained structure \(\boldsymbol{x}\) and outputs a set of 1D and 2D structural features \(\{f_{i}(\boldsymbol {x};\boldsymbol {\theta})\}_{i = 1}^{M}\) (Figure 6). We design the energy function to be invariant to rigid body motions (rotations and translations in \(\mathrm{SE}(3)\) ) by leveraging a set of invariant base features (Figure 2C), which are: <--- Page Split ---> 1. Internal coordinates \(z\) : All internal coordinates except 6 are invariant to rotation and translation\(^{3}\) and we mask these in the energy loss. 2. Distances \(D_{i j} = \| x_{i} - x_{j}\|\) between all pairs of points. We further process these by 4 radial basis functions with (learned) Gaussian kernels. 3. Orientation vectors \(\hat{\pmb{v}}_{i j}\) , which are unit vectors encoding the relative position of point \(\boldsymbol{x}_{j}\) in a local coordinate system of \(\boldsymbol{x}_{i}\) with base vectors \(\frac{\hat{\pmb{u}}_{i} - \hat{\pmb{u}}_{i + 1}}{\| \hat{\pmb{u}}_{i} - \hat{\pmb{u}}_{i + 1}\|}\) , \(\hat{\pmb{n}}_{i + 1}\) , and the cross product thereof. ### 2.3 EFFICIENT SIMULATOR Langevin dynamics The Langevin dynamics is a stochastic differential equation that asymptotically samples from the Boltzmann distribution (Equation 1). It is typically simulated by a first-order discretization as \[\pmb{x}^{(t + \epsilon)}\leftarrow \pmb{x}^{(t)} - \frac{\epsilon}{2}\nabla_{\pmb{x}}U^{(t)} + \sqrt{\epsilon}\mathbf{p},\qquad \mathbf{p}\sim \mathcal{N}(0,I). \quad (3)\] Internal coordinate dynamics The efficiency with which Langevin dynamics explores conformational space is highly dependent on the geometry (and thus parameterization) of the energy landscape \(U(\boldsymbol {x})\) . While Cartesian dynamics are efficient at local structural rearrangement, internal coordinate dynamics much more efficiently sample global, coherent changes to the topology of the fold (Figure 2B).
We interleave the Cartesian Langevin dynamics with preconditioned Internal Coordinate dynamics, \[\pmb{z}^{(t + \epsilon)}\leftarrow \pmb{z}^{(t)} - \frac{\epsilon C}{2}\nabla_{\pmb{z}}U^{(t)} + \sqrt{\epsilon C}\mathbf{p},\qquad \mathbf{p}\sim \mathcal{N}(0,I), \quad (4)\] where \(C\) is a preconditioning matrix that sets the relative scaling of changes to each degree of freedom. For all simulations we unroll \(T = 250\) time steps, each of which comprises one Cartesian step followed by one internal coordinate step (Equation 9, § A.3). Transform integrator Simulating internal coordinate dynamics is often computationally intensive, as it requires rebuilding the Cartesian geometry \(\boldsymbol{x}\) from internal coordinates \(\boldsymbol{z}\) with \(\mathcal{F}(\boldsymbol {z})\) (Parsons et al., 2005), which is an intrinsically sequential process. Here we bypass the need for recomputing coordinate transformations at every step by instead computing on-the-fly transformation integration (Figure 3). The idea is to directly apply coordinate updates in one coordinate system to another by numerically integrating the Jacobian. This can be favorable when the Jacobian has a simple structure, such as in our case where it requires only distributed cross products. ### 2.4 ATOMIC IMPUTATION Local reference frame reconstruction The imputation network builds an atomic model \(\pmb{X}\) from the final coarse coordinates \(\boldsymbol{x}^{(T)}\) . Each atomic coordinate \(\mathbf{X}_{i,j}\) of atom type \(j\) at position \(i\) is placed in a local reference frame as \[\mathbf{X}_{i,j} = \pmb{x}_{i} + e_{i,j}(\pmb {z};\theta)\left[\hat{\pmb{u}}_{i}\;\;\hat{\pmb{n}}_{i + 1}\;\;\hat{\pmb{n}}_{i + 1}\times \hat{\pmb{u}}_{i}\right]\mathbf{r}_{i,j}(\pmb {z};\theta),\] where \(e_{i,j}(\boldsymbol {z};\theta)\) and \(\mathbf{r}_{i,j}(\boldsymbol {z};\theta)\) are computed by a 1D convolutional neural network (Figure 6). ## 3 TRAINING We train and evaluate the model on a set of \(\sim 67,000\) protein structures (domains) that are hierarchically and temporally split. The model is trained by gradient descent using a composite loss that combines terms from likelihood-based and empirical-risk minimization-based training. <--- Page Split ---> Algorithm 1: Direct integrator <table><tr><td>Input:</td><td>State \(z^{(0)}\), energy \(U(x)\), step \(\epsilon\), time \(T\), scale \(C\)</td></tr><tr><td>Output:</td><td>Trajectory \(x^{(0)},\ldots ,x^{(T)}\)</td></tr><tr><td colspan="2">Initialize \(x^{(0)}\leftarrow \mathcal{F}(z^{(0)})\);</td></tr><tr><td colspan="2">while \(t &lt; T\) do</td></tr><tr><td></td><td>Compute forces \(f_{z} = -\frac{\partial x}{\partial z}^{T}\nabla_{x}U\);</td></tr><tr><td></td><td>Sample \(\Delta z\sim \mathcal{N}\left(\frac{\epsilon}{2} Cf_{z},\epsilon C\right)\);</td></tr><tr><td></td><td>\(z^{(t + \epsilon)}\leftarrow z^{(t)} + \Delta z\);</td></tr><tr><td></td><td>\(x^{(t + \epsilon)}\leftarrow \mathcal{F}(z^{(t + \epsilon)})\);</td></tr><tr><td></td><td>\(t\leftarrow t + \epsilon\);</td></tr><tr><td colspan="2">end</td></tr></table> Algorithm 2: Transform integrator <table><tr><td>Input:</td><td>State \(z^{(0)}\), energy \(U(x)\), step \(\epsilon\), time \(T\), scale \(C\)</td></tr><tr><td>Output:</td><td>Trajectory \(x^{(0)},\ldots ,x^{(T)}\)</td></tr><tr><td colspan="2">Initialize \(x^{(0)}\leftarrow \mathcal{F}(z^{(0)})\);</td></tr><tr><td colspan="2">while \(t &lt; T\) do</td></tr><tr><td></td><td>Compute forces \(f_{z} = -\frac{\partial x}{\partial z}^{T}\nabla_{x}U\);</td></tr><tr><td></td><td>Sample \(\Delta z\sim \mathcal{N}\left(\frac{\epsilon}{2} Cf_{z},\epsilon C\right)\);</td></tr><tr><td></td><td>\(\tilde{x}\leftarrow x^{(t)} + \frac{\partial x^{(t)}}{\partial z}\Delta z^{(t)}\);</td></tr><tr><td></td><td>\(x^{(t + \epsilon)}\leftarrow x^{(t)} + \frac{1}{2}\left(\frac{\partial x^{(t)}}{\partial z} + \frac{\partial \tilde{x}}{\partial z}\right)\Delta z^{(t)}\);</td></tr><tr><td></td><td>\(t\leftarrow t + \epsilon\);</td></tr><tr><td colspan="2">end</td></tr></table> Figure 3: A transform integrator simulates Langevin dynamics in a more favorable coordinate system (e.g.
internal coordinates \(\mathbf{z}\) ) directly in terms of the untransformed state variables (e.g. Cartesian \(\mathbf{x}\) ). This exchanges the cost of an inner-loop transformation step (e.g. geometry construction \(\mathcal{F}(\mathbf{z})\) ) for an extra Jacobian evaluation, which is fully parallelizable on modern hardware (e.g. GPUs). ### 3.1 DATA Structural stratification There are several scales of generalization in protein structure prediction, which range from predicting the structure of a sequence that differs from the training set at a few positions to predicting a 3D fold topology that is absent from the training set. To test these various levels of generalization systematically across many different protein families, we built a dataset on top of the CATH hierarchical classification of protein folds (Orengo et al., 1997). CATH hierarchically organizes proteins from the Protein Data Bank (Berman et al., 2000) into domains (individual folds) that are classified at the levels of Class, Architecture, Topology, and Homologous superfamily (from general to specific). We collected protein domains from CATH releases 4.1 and 4.2 up to length 200 and hierarchically and temporally split this set (§ B.1) into training ( \(\sim 35\mathrm{k}\) folds), validation ( \(\sim 21\mathrm{k}\) folds), and test sets ( \(\sim 10\mathrm{k}\) folds). Test subsets The final test set is subdivided into four subsets: C, A, T, and H, based on the level of maximal similarity between a given test domain and domains in the training set. For example, domains in the C or A sets may share class and potentially architecture classifications with the training set but will not share topology (i.e. fold). ### 3.2 LOSS Likelihood The gradient of the data-averaged log likelihood of the Boltzmann distribution is \[\frac{\partial}{\partial\theta_{i}}\mathbb{E}_{\mathbf{x}\sim \mathrm{Data}}[\log p(\mathbf{x}|\mathbf{s},\theta)] = \mathbb{E}_{\mathbf{x}\sim p(\mathbf{x}|\theta)}\left[\frac{\partial}{\partial\theta_{i}} U_{\theta}(\mathbf{x};\mathbf{s})\right] - \mathbb{E}_{\mathbf{x}\sim \mathrm{Data}}\left[\frac{\partial}{\partial\theta_{i}} U_{\theta}(\mathbf{x};\mathbf{s})\right], \quad (5)\] which, when ascended, will minimize the average energy of samples from the data relative to samples from the model. In an automatic differentiation setting, we implement a Monte Carlo estimator for (the negative of) this gradient by adding the energy gap, \[\mathcal{L}_{\mathrm{ML}} = U_{\theta}(\bot (\mathbf{x}^{\mathrm{(D)}});\mathbf{s}) - U_{\theta}(\bot (\mathbf{x}^{\mathrm{(M)}});\mathbf{s}), \quad (6)\] to the loss, where \(\bot\) is an identity operator that sets the gradient to zero\(^{4}\). Empirical Risk In addition to the likelihood loss, which backpropagates through the energy function but not the whole simulation, we developed an empirical risk loss composing several measures of protein model quality. It takes the form \[\mathcal{L}_{\mathrm{ER}} = \mathcal{L}_{\mathrm{Distances}} + \mathcal{L}_{\mathrm{Angles}} + \mathcal{L}_{\mathrm{H-bonds}} + \mathcal{L}_{\mathrm{TM-score}} + \mathcal{L}_{\mathrm{Init}} + \mathcal{L}_{\mathrm{Trajectory}} \quad (7)\] <--- Page Split ---> ![](images/6_0.jpg) <center>Figure 4: Model generalizes and outperforms end-to-end baseline for unseen fold topologies.
Colors indicate varying difficulty levels of protein domains in the test set, with the C (cyan) and A (magenta) subsets corresponding to test-set domains with topologies (folds) and superfamilies that were not represented in the training set. (Left) As the model exhibits higher confidence (reduced structural diversity), it becomes more accurate. (Center) The model occasionally achieves TM scores greater than 0.5 even for difficult C and A level generalization tasks. (Right) NEMO outperforms a strong RNN baseline for difficult generalization problems. All results for NEMO and RNN baselines are conditioned on profiles. </center> Table 1: Test set performance across different levels of generalization <table><tr><td>Model</td><td># params</td><td>Total</td><td>C</td><td>A</td><td>T</td><td>H</td></tr><tr><td>NEMO (ours, profile)</td><td>21.3m</td><td>0.366</td><td>0.274</td><td>0.361</td><td>0.331</td><td>0.431</td></tr><tr><td>NEMO (ours, sequence-only)</td><td>19.1m</td><td>0.248</td><td>0.198</td><td>0.245</td><td>0.254</td><td>0.263</td></tr><tr><td>RNN baseline model (profile)</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>2x100</td><td>5.9m</td><td>0.293</td><td>0.213</td><td>0.230</td><td>0.247</td><td>0.388</td></tr><tr><td>2x300 (avg. of 3)</td><td>8.8m</td><td>0.335</td><td>0.229</td><td>0.282</td><td>0.278</td><td>0.446</td></tr><tr><td>2x500</td><td>13.7m</td><td>0.347</td><td>0.222</td><td>0.272</td><td>0.286</td><td>0.477</td></tr><tr><td>2x700</td><td>21.4m</td><td>0.309</td><td>0.223</td><td>0.259</td><td>0.261</td><td>0.403</td></tr><tr><td>Number of structures</td><td></td><td>10381</td><td>1537</td><td>1705</td><td>3198</td><td>3941</td></tr></table> schematized in Figure 6. Our combined loss sums all of the terms \(\mathcal{L} = \mathcal{L}_{\mathrm{ER}} + \mathcal{L}_{\mathrm{ML}}\) without weighting. ### 3.3 STABILIZING BACKPROPAGATION THROUGH TIME We found that the long roll-outs of our simulator were prone to chaotic dynamics and exploding gradients, as seen in other work (Maclaurin et al., 2015; Parmas et al., 2018). Unfortunately, when chaotic dynamics do occur, it is typical for all gradients to explode (across learning steps) and standard techniques such as gradient clipping (Pascanu et al., 2013) are unable to rescue learning (§ B.5). To stabilize training, we developed two complementary techniques that regularize against chaotic simulator dynamics while still facilitating learning when they arise. They are

- Lyapunov regularization We regularize the simulator time-step function (rather than the energy function) to be approximately 1-Lipschitz. (If exactly satisfied, this eliminates the possibility of chaotic dynamics.)
- Damped backpropagation through time We exponentially decay gradient accumulation on the backwards pass of automatic differentiation by multiplying each backwards iteration by a damping factor \(\gamma\) . We adaptively tune \(\gamma\) to cancel the scale of the exploding gradients. This can be thought of as a continuous relaxation of, and a quantitatively tunable alternative to, truncated backpropagation through time (see the sketch after this list).
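To make the damping concrete, here is a minimal PyTorch sketch of damped backpropagation through time; the `DampGrad` and `damped_rollout` names, the fixed damping factor, and the toy step function are our own illustrations, whereas the paper adaptively tunes \(\gamma\).

```python
import torch

class DampGrad(torch.autograd.Function):
    """Identity in the forward pass; scales the incoming gradient by a
    damping factor gamma on the backward pass."""
    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return ctx.gamma * grad_output, None

def damped_rollout(x, step_fn, n_steps, gamma=0.98):
    """Unroll a simulator; gradients flowing back through t steps pick up a
    factor gamma**t, a continuous relaxation of truncated backpropagation."""
    for _ in range(n_steps):
        x = DampGrad.apply(step_fn(x), gamma)
    return x

# Toy usage: damping shrinks long-range credit assignment through the chain.
x0 = torch.randn(8, requires_grad=True)
xT = damped_rollout(x0, lambda x: torch.tanh(3.0 * x), n_steps=250)
xT.sum().backward()
```

Because the damping op is the identity in the forward pass, the simulated trajectory itself is unchanged; only the backward accumulation is attenuated.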
<--- Page Split ---> ![](images/7_0.jpg) <center>Figure 5: Examples of fold generalization at topology and architecture level. These predicted structures show a range of prediction accuracy for structural generalization (C and A) tasks, with the TM-score comparing the top ranked 3D-Jury pick against the target. The largest clusters are the three most-populated clusters derived from 100 models per domain with a within-cluster cutoff of \(\mathrm{TM} > 0.5\) . CATH IDs: 2oy8A03; 5c3uA02; 2y6xA00; 3cimB00; 4ykA00; 2f09A00; 3i5qA02; 2ayxA01. </center> ## 4 RESULTS ### 4.1 GENERALIZATION ACROSS CATH For each of the 10,381 protein structures in our test set, we sampled 100 models from NEMO, clustered them by structural similarity, and selected a representative structure by a standard consensus algorithm (Ginalski et al., 2003); a minimal sketch of this selection appears at the end of this subsection. For evaluation of performance we focus on the TM-Score (Zhang & Skolnick, 2005), a measure of structural similarity between 0 and 1 for which \(\mathrm{TM} > 0.5\) is typically considered an approximate reconstruction of a fold. Calibrated uncertainty We find that, when the model is confident (i.e. the number of distinct structural clusters is low, \(\sim 1\) - \(3\) ), it is also accurate, with some predictions having average \(\mathrm{TM} > 0.5\) (Figure 4, left). Unsurprisingly, the confidence of the model tends to go with the difficulty of generalization, with the most confident predictions from the H test set and the least confident from C. Structural generalization However, even when sequence identity is low and generalization difficulty is high (Figure 4, center), the model is still able to make some accurate predictions of 3D structures. Figure 5 illustrates some of these successful predictions at the C and A levels, specifically 4ykA00, 5c3uA02 and the beta sheet formation in 2oy8A03. We observe that the predictive distribution is multimodal, with non-trivial differences between the clusters representing alternate packings of the chain. In some of the models there is an uneven distribution of uncertainty along the chain, which sometimes corresponds to loosely packed regions of the protein. Comparison to an end-to-end baseline We constructed a baseline model that is a non-iterative replica of NEMO, which replaces the coarse-grained simulator module (and energy function) with a two-layer bidirectional LSTM that directly predicts coarse internal coordinates \(z^{(0)}\) (followed by transformation to Cartesian coordinates with \(\mathcal{F}\) ). We trained this baseline across a range of hyperparameter values and found that for difficult C, A, and T tasks, NEMO generalized more effectively than the RNNs (Table 1). For the best-performing 2x300 architecture, we trained two additional replicates and report the averaged performance in Figure 4 (right). Additionally, we report the results of a sequence-only NEMO model in Table 1. Paralleling secondary structure prediction (Rost & Sander, 1993; McGuffin et al., 2000), we find that the availability of evolutionary information has a significant impact on prediction quality.
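As an aside on the consensus step above, here is a minimal sketch of a 3D-Jury-style pick (Ginalski et al., 2003): among the sampled models, select the one with the highest total pairwise similarity. The function name and the assumption of a precomputed pairwise TM-score matrix are ours, and this is a simplification of the full 3D-Jury protocol.

```python
import numpy as np

def consensus_pick(pairwise_tm: np.ndarray) -> int:
    """pairwise_tm[i, j] is the TM-score between sampled models i and j.
    Returns the index of the model most similar to all the others."""
    tm = pairwise_tm.copy()
    np.fill_diagonal(tm, 0.0)  # ignore self-similarity
    return int(tm.sum(axis=1).argmax())

# Toy usage with 100 sampled models per domain:
rng = np.random.default_rng(0)
scores = rng.uniform(0.2, 0.9, size=(100, 100))
scores = (scores + scores.T) / 2  # TM-scores treated as symmetric here
print(consensus_pick(scores))
```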
### 4.2 ADVANTAGES AND DISADVANTAGES This work presents a novel approach for protein structure prediction that combines the inductive bias of simulators with the speed of directed models. A major advantage of the approach is that model sampling (inference) times can be considerably faster than conventional approaches to protein structure prediction (Table 4). There are two major disadvantages. First, the computational cost of training and sampling is higher than that of angle-predicting RNNs (Figure 10) such as our baseline or AlQuraishi (2018). Consequently, those methods have been scaled to larger datasets than ours (in protein length and diversity) which are more relevant to protein structure prediction tasks. Second, the instability of backpropagating through long simulations is unavoidable and only partially remedied by our approaches of Lipschitz regularization and gradient damping. These approaches can also lead to slower learning and less expressive energy functions. Methods for efficient (i.e. subquadratic) \(N\) -body simulations and for more principled stabilization of deep networks may be relevant to addressing both of these challenges in the future. ## 5 CONCLUSION We described a model for protein structure given sequence information that combines a coarse-grained neural energy function and an unrolled simulation into an end-to-end differentiable model. To realize this idea at the scale of real proteins, we introduced an efficient simulator for Langevin dynamics in transformed coordinate systems and stabilization techniques for backpropagating through long simulator roll-outs. We find that the model is able to predict the structures of protein molecules with hundreds of atoms while capturing structural uncertainty, and that the model can structurally generalize to distant fold classifications more effectively than a strong baseline. ## ACKNOWLEDGEMENTS We thank members of the Marks lab for useful discussions and feedback. Parts of this work were performed on the Orchestra compute cluster at Harvard Medical School. ## REFERENCES David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann machines. Cognitive science, 9(1):147-169, 1985. Mohammed AlQuraishi. End-to-end differentiable learning of protein structure. bioRxiv, pp. 265231, 2018. Brandon Amos and J Zico Kolter. Optnet: Differentiable optimization as a layer in neural networks. In International Conference on Machine Learning, pp. 136-145, 2017. Namrata Anand and Possu Huang. Generative modeling for protein structures. In Advances in Neural Information Processing Systems, pp. 7505-7516, 2018. R Apweiler, A Bairoch, CH Wu, WC Barker, B Boeckmann, S Ferro, E Gasteiger, H Huang, R Lopez, M Magrane, et al. Uniprot: the universal protein knowledgebase. Nucleic acids research, 32(Database issue): D115-9, 2004. David Belanger and Andrew McCallum. Structured prediction energy networks. In International Conference on Machine Learning, pp. 983-992, 2016. David Belanger, Bishan Yang, and Andrew McCallum. End-to-end learning for structured prediction energy networks. In International Conference on Machine Learning, pp. 429-439, 2017. Helen M Berman, John Westbrook, Zukang Feng, Gary Gilliland, Talapady N Bhat, Helge Weissig, Ilya N Shindyalov, and Philip E Bourne. The protein data bank. Nucleic acids research, 28(1):235-242, 2000. Changyou Chen, Chunyuan Li, Liquan Chen, Wenlin Wang, Yunchen Pu, and Lawrence Carin Duke. Continuous-time flows for efficient inference and density estimation. In International Conference on Machine Learning, pp. 823-832, 2018a. Minmin Chen, Jeffrey Pennington, and Samuel Schoenholz. Dynamical isometry and a mean field theory of rnns: Gating enables signal propagation in recurrent neural networks. In International Conference on Machine Learning, pp. 872-881, 2018b. <--- Page Split ---> Zihang Dai, Amjad Almahairi, Philip Bachman, Eduard Hovy, and Aaron Courville. Calibrating energy-based generative adversarial networks. In International Conference on Learning Representations, 2017. Ken Dill, Robert L Jernigan, and Ivet Bahar. Protein Actions: Principles and Modeling. Garland Science, 2017. Sean R Eddy.
Accelerated profile hmm searches. PLoS computational biology, 7(10):e1002195, 2011. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017. Krzysztof Ginalski, Arne Elofsson, Daniel Fischer, and Leszek Rychlewski. 3d- jury: a simple approach to improve protein structure predictions. Bioinformatics, 19(8):1015- 1018, 2003. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org. Mikael Henaff, Arthur Szlam, and Yann LeCun. Recurrent orthogonal networks and long- memory tasks. In International Conference on Machine Learning, pp. 2034- 2042, 2016. Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch- normalized models. In Advances in Neural Information Processing Systems, pp. 1945- 1953, 2017. John M Jumper, Nabil F Faruk, Karl F Freed, and Tobin R Sosnick. Trajectory- based training enables protein simulations with accurate folding and boltzmann ensembles in cpu- hours. PLoS computational biology, 14(12):e1006578, 2018. Wolfgang Kabsch and Christian Sander. Dictionary of protein secondary structure: pattern recognition of hydrogen- bonded and geometrical features. Biopolymers: Original Research on Biomolecules, 22(12): 2577- 2637, 1983. Taesup Kim and Yoshua Bengio. Deep directed generative models with energy- based probability estimation. In International Conference on Learning Representations, 2016. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Sebastian Kmiecik, Dominik Gront, Michal Kolinski, Lukasz Wieteska, Aleksandra Elzbieta Dawid, and Andrzej Kolinski. Coarse- grained protein models and their applications. Chemical Reviews, 116(14):7898- 7936, 2016. Andrzej Kolinski, Lukasz Jaroszewski, Piotr Rotkiewicz, and Jeffrey Skolnick. An efficient monte carlo model of protein chains. modeling the short- range correlations between side group centers of mass. The Journal of Physical Chemistry B, 102(23):4628- 4637, 1998. Pawel Krupa, Anna Halabis, Wioletta Zmudzinska, Stanislaw Oldziej, Harold A Scheraga, and Adam Liwo. Maximum likelihood calibration of the unires force field for simulation of protein structure and dynamics. Journal of chemical information and modeling, 57(9):2364- 2377, 2017. Yann LeCun, Sumit Chopra, Raia Hadsell, M Ranzato, and F Huang. A tutorial on energy- based learning. Predicting structured data, 1(0), 2006. Daniel Levy, Matthew D Hoffman, and Jascha Sohl- Dickstein. Generalizing hamiltonian monte carlo with neural networks. In International Conference on Learning Representations, 2018. Xiaoyu Lu, Valerio Perrone, Leonard Hasenclever, Yee Whye Teh, and Sebastian Vollmer. Relativistic monte carlo. In Artificial Intelligence and Statistics, pp. 1236- 1245, 2017. Dougal Maclaurin, David Duvenaud, and Ryan Adams. Gradient- based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pp. 2113- 2122, 2015. Debora S Marks, Thomas A Hopf, and Chris Sander. Protein structure prediction from sequence variation. Nature biotechnology, 30(11):1072, 2012. Marc A Marti- Renom, Ashley C Stuart, Andras Fiser, Roberto Sanchez, Francisco Melo, and Andrej Sali. Comparative protein structure modeling of genes and genomes. Annual review of biophysics and biomolecular structure, 29(1):291- 325, 2000. Liam J McGuffin, Kevin Bryson, and David T Jones. 
The psipred protein structure prediction server. Bioinformatics, 16(4):404- 405, 2000. <--- Page Split ---> Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016. Christine A Orengo, AD Michie, S Jones, David T Jones, MB Swindells, and Janet M Thornton. Cath- a hierarchic classification of protein domain structures. Structure, 5(8):1093- 1109, 1997. Paavo Parmas, Carl Edward Rasmussen, Jan Peters, and Kenji Doya. Pipps: Flexible model- based policy search robust to the curse of chaos. In International Conference on Machine Learning, pp. 4062- 4071, 2018. Jerod Parsons, J Bradley Holmes, J Maurice Rojas, Jerry Tsai, and Charlie EM Strauss. Practical conversion from torsion space to cartesian space for in silico protein synthesis. Journal of computational chemistry, 26 (10):1063- 1068, 2005. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pp. 1310- 1318, 2013. Michael Remmert, Andreas Biegert, Andreas Hauser, and Johannes Söding. Hhblits: lightning- fast iterative protein sequence searching by hmm- hmm alignment. Nature methods, 9(2):173, 2012. Burkhard Rost and Chris Sander. Prediction of protein secondary structure at better than \(70\%\) accuracy. Journal of molecular biology, 232(2):584- 599, 1993. Ruslan Salakhutdinov and Hugo Larochelle. Efficient learning of deep boltzmann machines. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 693- 700, 2010. Tim Salimans, Diederik Kingma, and Max Welling. Markov chain monte carlo and variational inference: Bridging the gap. In International Conference on Machine Learning, pp. 1218- 1226, 2015. Naomi Siew, Arne Elofsson, Leszek Rychlewski, and Daniel Fischer. Maxsub: an automated measure for the assessment of protein structure prediction quality. Bioinformatics, 16(9):776- 785, 2000. Jiaming Song, Shengjia Zhao, and Stefano Ermon. A- nice- mc: Adversarial training for mcmc. In Advances in Neural Information Processing Systems, pp. 5140- 5150, 2017. Steven H Strogatz. Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering. CRC Press, 2018. Baris E Suzek, Yuqi Wang, Hongzhan Huang, Peter B McGarvey, Cathy H Wu, and UniProt Consortium. Uniref clusters: a comprehensive and scalable alternative for improving sequence similarity searches. Bioinformatics, 31(6):926- 932, 2014. Michalis K Titsias. Learning model reparametrizations: Implicit variational inference by fitting mcmc distributions. arXiv preprint arXiv:1708.01529, 2017. Dustin Tran, Rajesh Ranganath, and David Blei. Hierarchical implicit models and likelihood- free variational inference. In Advances in Neural Information Processing Systems, pp. 5523- 5533, 2017. Dilin Wang and Qiang Liu. Learning to draw samples with amortized stein variational gradient descent. In Uncertainty in Artificial Intelligence, 2017. Shenlong Wang, Sanja Fidler, and Raquel Urtasun. Proximal deep structured models. In Advances in Neural Information Processing Systems, pp. 865- 873, 2016. Yang Zhang and Jeffrey Skolnick. Tm- align: a protein structure alignment algorithm based on the tm- score. Nucleic acids research, 33(7):2302- 2309, 2005. <--- Page Split ---> ![](images/11_0.jpg) <center>Figure 6: Model schematic. 
The model generates an atomic structure \(X\) (top right) from sequence information \(s\) (top left) via 3 steps: First, a sequence network takes in the sequence information \(s\) , processes it with a combination of 1D, 2D, and graph convolutions (MPNN, bottom left), and outputs energy function weights \(l\) as well as simulator hyperparameters (top center). Second, the simulator iteratively modifies the structure via Langevin dynamics based on the gradient of the energy landscape (Forces, bottom center). Third, the imputation network constructs predicted atomic coordinates \(X\) from the final simulator time step \(\boldsymbol{x}^{(T)}\) . During training, the true atomic coordinates \(X^{(\mathrm{Data})}\) , predicted atomic coordinates \(X\) , simulator trajectory \(\boldsymbol{x}^{(1)}\) , ..., \(\boldsymbol{x}^{(T)}\) , and secondary structure predictions \(SS^{(\mathrm{Model})}\) feed into a composite loss function (Loss, bottom right), which is then optimized via backpropagation. </center> ## APPENDICES ## A MODEL ## A.1 COORDINATE SYSTEMS Inverse transformation The inverse transformation \(\mathbf{z} = \mathcal{F}^{- 1}(\mathbf{x})\) involves fully local computations of bond lengths and angles: \[b_{i} = ||\pmb{x}_{i} - \pmb{x}_{i - 1}||,\quad a_{i} = \arccos \left(-\hat{\pmb{u}}_{i}\cdot \hat{\pmb{u}}_{i - 1}\right),\quad d_{i} = \mathrm{sign}\left(\hat{\pmb{u}}_{i - 2}\cdot \hat{\pmb{n}}_{i}\right)\arccos \left(\hat{\pmb{n}}_{i - 1}\cdot \hat{\pmb{n}}_{i}\right).\] Jacobian The Jacobian \(\frac{\partial x}{\partial z}\) defines the infinitesimal response of the Cartesian coordinates \(\mathbf{x}\) to perturbations of the internal coordinates \(\mathbf{z}\) . It will be important both for converting Cartesian forces into angular torques and bond forces and for the development of our transform integrator. It is <--- Page Split --->
Increasing or decreasing the bond length \(b_{i}\) extends or retracts all downstream coordinates along the bonds axis, moving a bond angle \(a_{i}\) drives circular motion of all downstream coordinates around the bond normal vector \(\hat{\pmb{n}}_{i}\) centered at \(\pmb{x}_{i - 1}\) , and moving a dihedral angle \(d_{i}\) drives circular motion of downstream coordinate \(\pmb{x}_{j}\) around bond vector \(\hat{\pmb{u}}_{i - 1}\) centered at \(\pmb{x}_{i - 1}\) . Unconstrained representations Bond lengths and angles are subject to the constraints \(b_{i} > 0\) and \(0< a_{i}< \pi\) . We enforce these constraints by representing these degrees of freedom in terms of fully unconstrained variables \(\hat{b}_{i}\) and \(\tilde{a}_{i}\) via the transformations \(b_{i} = \log \left(1 + e^{\hat{b}_{i}}\right)\) and \(a_{i} = \frac{\pi}{1 + e^{- \hat{a}_{i}}}\) . All references to the internal coordinates \(\pmb{z}\) and Jacobians \(\frac{\partial\pmb{x}}{\partial\pmb{z}}\) will refer to the use of fully unconstrained representations (Table 2). ## A.2 ENERGY FUNCTION Figure 6 provides an overall schematic of the model, including the components of the energy function. CNN primitives All convolutional neural network primitives in the model schematic (Figure 6) follow a common structure consisting of stacks of residual blocks. Each residual block includes <--- Page Split ---> Table 2: Coordinate systems and representations for protein structure. <table><tr><td>Variable</td><td>Notation</td><td>Shape</td></tr><tr><td>Sequence</td><td>s</td><td>[L, 20]</td></tr><tr><td>Cartesian coordinates (coarse)</td><td>x</td><td>[3L,1]</td></tr><tr><td>Internal coordinates</td><td>z</td><td>[3L,1]</td></tr><tr><td>Cartesian coordinates (atomic)</td><td>X</td><td>[3L,A]</td></tr><tr><td>Cartesian coordinates for position i</td><td>xi</td><td>[3,1]</td></tr><tr><td>Internal coordinate for position i</td><td>zi = [bi ãi d̃i]T</td><td>[3,1]</td></tr><tr><td>Unit vector from xi-1 to xi</td><td>ũi</td><td>[3,1]</td></tr><tr><td>Unit vector normal to bond plane at xi-1</td><td>ñi</td><td>[3,1]</td></tr><tr><td>Bond length ||xi - xi-1||</td><td>bi</td><td>[1]</td></tr><tr><td>Bond angle ∠(ũi, - ũi-1)</td><td>ai</td><td>[1]</td></tr><tr><td>Dihedral angle ∠(ñi, ñi-1)</td><td>di</td><td>[1]</td></tr><tr><td>Unconstrained bond length</td><td>bi</td><td>[1]</td></tr><tr><td>Unconstrained bond angle</td><td>ai</td><td>[1]</td></tr><tr><td>Jacobian matrix</td><td>∂x/∂z</td><td>[3L,3L]</td></tr></table> consists of a layer of channel mixing (1x1 convolution), a variable- sized convolution layer, and a second layer of channel mixing. We use dropout with \(p = 0.9\) and Batch Renormalization (Ioffe, 2017) on all convolutional layers. Batch Renormalization rather than Normalization was necessary rather owing to the large variation in sizes of the structures of the proteins and resulting large variation in mini- batch statistics. ## A.3 INTERNAL COORDINATE DYNAMICS WITH A TRANSFORM INTEGRATOR Why sampling vs. optimization Deterministic methods for optimizing the energy \(U(\pmb {x}; \pmb {s})\) such as gradient descent or quasi- Newton methods can effectively seek local minima of the energy surface, but are challenged to optimize globally and completely ignore the contribution of the widths of energy minima (entropy) to their probability. 
We prefer sampling to optimization for three reasons: (i) noise in sampling algorithms can facilitate faster global conformational exploration by overcoming local minima and saddle points, (ii) sampling generates populations of states that respect the width (entropy) of wells in \(U\) and can be used for uncertainty quantification, and (iii) sampling allows training with an approximate Maximum Likelihood objective (Equation 5). Langevin Dynamics The Langevin dynamics are a stochastic dynamics that sample from the canonical ensemble. They are defined as a continuous- time stochastic differential equation, and are simulated in discrete time with the first order discretization \[\pmb{x}^{(t + \epsilon)}\leftarrow \pmb{x}^{(t)} - \frac{\epsilon}{2}\nabla_{\pmb{x}}U^{(t)} + \sqrt{\epsilon}\mathbf{p},\qquad \mathbf{p}\sim \mathcal{N}(0,I). \quad (8)\] Each time step of \(\epsilon\) involves a descent step down the energy gradient plus a perturbation of Gaussian noise. Importantly, as time tends toward to infinity, the time- distribution of the Langevin dynamics converges to the canonical ensemble. Our goal is to design a dynamics that converge to an approximate sample in a very short period of time. Table 3: Model architecture. Input number of channels \(q = 20\) for sequence-only and \(q = 40\) for profiles. <table><tr><td>Location</td><td>Type</td><td>Channels</td><td># Blocks</td><td>Width</td><td>Dilation</td><td>Stride</td></tr><tr><td>Pre-MPNN</td><td>1D</td><td>128</td><td>12</td><td>3</td><td>[1, 2, 4, 8] × 3</td><td>1</td></tr><tr><td>MPNN</td><td>1D</td><td>128</td><td>4</td><td>3</td><td>[1, 2, 4, 8]</td><td>1</td></tr><tr><td>MPNN</td><td>2D</td><td>50</td><td>1</td><td>7</td><td>1</td><td>1</td></tr><tr><td>Post-MPNN</td><td>1D</td><td>q+256</td><td>12</td><td>3</td><td>[1, 2, 4, 8] × 3</td><td>1</td></tr><tr><td>Post-MPNN*</td><td>2D</td><td>100</td><td>1</td><td>9</td><td>1</td><td>1</td></tr><tr><td>Imputation</td><td>1D</td><td>q+256</td><td>12</td><td>3</td><td>[1, 2, 4, 8] × 3</td><td>1</td></tr></table> <--- Page Split ---> Coordinate systems and preconditioning The efficiency with which Langevin dynamics explore conformational space is highly dependent on the geometry of the energy landscape \(U(\pmb {x})\) , which in turn depends on how the system is parameterized. Molecular energy functions in Cartesian coordinates tend to exhibit strong correlations between variables that result from the requirement that underlying molecular geometries satisfy highly stereotyped bond lengths and angles. As a result, simulations of naive Cartesian Langevin dynamics require a small time step to satisfy these constraints and tend to be dominated by high- frequency, localized vibrations of the chain. The large, global motions that are essential to protein folding can require thousands to millions of times steps to manifest. A well- known solution to the complex dependencies of Cartesian coordinates is to carry out optimization and simulation in internal coordinates, which directly parameterize molecular geometries in terms of the bond lengths and angles (Parsons et al., 2005). Internal coordinate parameterizations possess the advantages that (i) bond length and angle constraints are easy to satisfy and (ii) small changes to a single angle can drive large, coherent rearrangements of the chain (Figure 2B). 
For example, simply replacing \(\mathbf{x}\) 's with \(\mathbf{z}\) 's in Equation 8 yields the dynamics \[\mathbf{z}^{(t + \epsilon)}\leftarrow \mathbf{z}^{(t)} - \frac{\epsilon}{2}\nabla_{\mathbf{z}}U^{(t)} + \sqrt{\epsilon}\mathbf{p},\qquad \mathbf{p}\sim \mathcal{N}(0,I).\] The advantages and disadvantages of the two coordinate systems are complementary: Cartesian dynamics efficiently sample local structural rearrangements and inefficiently sample global chain motions, while internal coordinate dynamics efficiently sample global, correlated motions of the chain but are challenged to make precise local rearrangements. The time dynamics of these alternative parameterizations need not be kinetically realistic to converge to the correct distribution over conformational space. Different coordinate systems warp the local geometry of the energy landscape and will in turn rescale and redirect which global vibrational and local vibrations dominate the dynamics. This relative rescaling can be further optimized by applying a global linear transformation to the energy landscape with a preconditioning 'inverse mass' matrix \(C\) , giving the update \[\mathbf{z}^{(t + \epsilon)}\leftarrow \mathbf{z}^{(t)} - \frac{\epsilon C}{2}\nabla_{\mathbf{z}}U^{(t)} + \sqrt{\epsilon C}\mathbf{p},\qquad \mathbf{p}\sim \mathcal{N}(0,I). \quad (9)\] Transform integrator The need to rebuild Cartesian geometry \(\mathbf{x}\) from internal coordinates \(\mathbf{z}\) with \(\mathcal{F}(\mathbf{z})\) at every time step is one of the major costs of conformational sampling codes based on Internal coordinates (Parsons et al., 2005) because it is intrinsically sequential. Here we show how it is possible to bypass the need for geometry reconstruction at every step by instead computing on- the- fly geometry modification. Imagine following a change to the internal coordinates \(\Delta \mathbf{z}^{(t)}\) along a straight path from \(\mathbf{z}^{(t)}\) to \(\mathbf{z}^{(t + \epsilon)}\) and tracking the corresponding nonlinear path of the Cartesian coordinates from \(\mathbf{x}^{(t)}\) to \(\mathbf{x}^{(t + \epsilon)}\) If this path is indexed by \(u\in (t,t + \epsilon)\) , then the dynamics of \(\mathbf{x}\) with respect to \(u\) are given by \(\frac{\partial\mathbf{x}}{\partial u} = \frac{\partial\mathbf{x}}{\partial z}\frac{\partial z}{\partial u} = \frac{\partial\mathbf{x}}{\partial z}\pm \Delta \mathbf{z}\) . Integrating the dynamics of \(\mathbf{x}\) gives \[\begin{aligned} \mathbf{x}^{(t + \epsilon)} & = \mathcal{F}\left(\mathbf{z}^{(t)} + \Delta \mathbf{z}^{(t)}\right) \\ & = \mathbf{x}^{(t)} + \int_{t}^{t + \epsilon}\frac{1}{\epsilon}\frac{\partial\mathbf{x}}{\partial z}^{(u)}\Delta \mathbf{z}^{(t)}du. \end{aligned}\] This illustrates that it is possible to convert coordinate changes in one coordinate system (e.g. Internal Coordinates) to coordinate changes in another (e.g. Cartesian) by integrating an autonomous system of ODEs with dynamics governed by the Jacobian. 
Since \(\epsilon\) is small, we integrate this system with a single step of Heun's method (improved Euler), where we first use an Euler approximation to predict \(\mathbf{x}^{(t + \epsilon)}\) as \[\tilde{\mathbf{x}}^{(t + \epsilon)}\approx \mathbf{x}^{(t)} + \frac{\partial\mathbf{x}^{(t)}}{\partial\mathbf{z}}\Delta \mathbf{z}^{(t)},\] and then substitute the Jacobian evaluated at the predicted state \(\tilde{\mathbf{x}}^{(t + \epsilon)}\) to form the trapezoidal approximation \[\mathbf{x}^{(t + \epsilon)}\approx \mathbf{x}^{(t)} + \frac{1}{2}\left(\frac{\partial\mathbf{x}^{(t)}}{\partial\mathbf{z}} +\frac{\partial\tilde{\mathbf{x}}^{(t + \epsilon)}}{\partial\mathbf{z}}\right)\Delta \mathbf{z}^{(t)}.\]

![](images/15_0.jpg)
<center>Figure 8: Accounting for second-order errors is essential for internal coordinate dynamics. (Top) Discarding the corrector step rapidly accumulates errors due to the curvilinear motions of internal coordinate dynamics. (Bottom) Heun integration with a corrector step accounts for curvature in curvilinear motion.</center>

The comparison of this algorithm with naive integration is given in Figure 8. The corrector step is important for eliminating the large second-order errors that arise in curvilinear motions caused by angle changes (Figure 2B and Figure 8). In principle, higher-order numerical integration methods or more time steps could increase accuracy at the cost of more evaluations of the Jacobian, but we found that second-order effects were the most relevant on our timescales.

Mixed integrator Cartesian dynamics favor local structural rearrangements, such as the transition from a helical to an extended conformation, while internal coordinate dynamics favor global motions such as changes to the overall fold topology. Since both kinds of structural rearrangements are important to the folding process, we form a hybrid integrator (Algorithm 3) by taking one step with each integrator per force evaluation.

Translational and rotational detrending Both Cartesian and internal coordinates are overparameterized with \(3L\) degrees of freedom, since only \(3L - 6\) degrees of freedom are necessary to encode a centered and un-oriented structure. As a consequence, a significant fraction of the per time-step changes \(\Delta x\) can be explained by rigid translational and rotational motions of the entire structure. We isolate and remove these components of motion by treating the system \(\{x_{1},\ldots ,x_{L}\}\) as a set of particles with unit mass, and computing effective structural translational and rotational velocities by summing point-wise momenta. The translational component of motion is simply the average displacement across positions \(\Delta x_{i}^{\mathrm{trans}} = \langle \Delta x_{i}\rangle\). For rotational motion around the center of mass, it is convenient to define the non-translational motion as \(\Delta \bar{x}_{i} = \Delta x_{i} - \Delta x_{i}^{\mathrm{trans}}\) and the centered Cartesian coordinates as \(\bar{x}_{i} = x_{i} - \langle x_{i}\rangle\). The point-wise angular momentum is then \(l_{i} = \bar{x}_{i}\times \Delta \bar{x}_{i}\), and we define a total angular velocity of the structure \(\omega\) by summing these and dividing by the moment of inertia as \(\omega = (\sum_{i}l_{i}) / (\sum_{i}\| \bar{x}_{i}\|_{2}^{2})\).
We convert the angular velocity \(\omega\) into Cartesian displacements with an unrolled Heun integration as \(\Delta x_{i}^{\mathrm{Rot}} = \frac{1}{2}\omega \times (\bar{x}_{i} + \omega \times \bar{x}_{i})\), which leaves the isolated structural motions as \(\Delta x_{i}^{\mathrm{Struct}} = \Delta x_{i} - \Delta x_{i}^{\mathrm{trans}} - \Delta x_{i}^{\mathrm{Rot}}\).

Algorithm 3: Mixed Integrator
Input: Initial state \(z^{(0)}\), energy \(U(x)\), time steps \(\epsilon_{x},\epsilon_{z}\), total time \(T\), preconditioners \(\mathbf{C}_{x},\mathbf{C}_{z}\)
Output: Trajectory \(x^{(0)},\ldots ,x^{(T)}\)
Initialize \(x^{(0)}\leftarrow \mathcal{F}(z^{(0)})\);
while \(t < T\) do
  \(f_{x}\leftarrow \nabla_{x}U\);
  \(\Delta x^{(\mathrm{Cart})}\leftarrow \mathrm{CartesianStep}(x^{(t)},f_{x},\epsilon_{x},\mathbf{C}_{x})\);
  \(\Delta x^{(\mathrm{Int})}\leftarrow \mathrm{ClippedInternalStep}(x + \Delta x^{(\mathrm{Cart})},f_{x},\epsilon_{z},\mathbf{C}_{z})\);
  \(x\leftarrow x + \mathrm{Detrend}(\Delta x^{(\mathrm{Cart})} + \Delta x^{(\mathrm{Int})})\);
  \(t\leftarrow t + \epsilon\);
end

Speed clipping We found it helpful to stabilize the model by enforcing a speed limit on overall structural motions for the internal coordinate steps. This prevents small changes to the energy function during learning from causing extreme dynamics that in turn produce a non-informative learning signal. To accomplish this, we translationally and rotationally detrend the update of the predictor step \(\Delta x\) and compute a hypothetical time step \(\hat{\epsilon}_{z}\) that would limit the fastest motion to 2 Angstroms per iteration. We then compute modified predictor and corrector steps subject to this new, potentially slower, time step. While this breaks the asymptotics of Langevin dynamics, (i) it is unlikely on our timescales that we achieve stationarity and (ii) it can be avoided by regularizing the dynamics away from situations where clipping is necessary. In the future, considering non-Gaussian perturbations with kinetic energies similar to Relativistic Monte Carlo (Lu et al., 2017) might accomplish a similar goal in a more principled manner. The final integrator combining these ideas is presented in Figure 3.

## B APPENDIX B: TRAINING

## B.1 DATA

For a training and validation set, we downloaded all protein domains of length \(L\leq 200\) from Classes \(\alpha\), \(\beta\), and \(\alpha /\beta\) in CATH release 4.1 (2015), and then hierarchically purged a randomly selected set of A, T, and H categories. This created three validation sets of increasing levels of difficulty: H, which contains domains with superfamilies that are excluded from train (but whose fold topologies may be present); T, which contains fold topologies that were excluded from train (fold generalization); and A, which contains secondary structure architectures that were excluded from train. For a test set, we downloaded all folds that were new to CATH release 4.2 (2017), which (owing to the propensity of structural biology to produce new structures of previously solved folds) provided 10,381 test domains. We further stratified this test set into C, A, T, and H categories based on their nearest CATH classification in the training set. We also analyzed test set stratifications based on nearest neighbors in both training and validation in Figure 12.
We note that the validation set was not explicitly used to tune hyperparameters due to the large cost of training (2 months on 2 M40 GPUs), but we did keep track of validation statistics during training.

## B.2 SGD

We optimized all models for 200,000 iterations with Adam (Kingma & Ba, 2014).

## B.3 LOSS

We optimize the model using a composite loss containing several terms, which are detailed as follows.

Distance loss We score distances in the model with a contact-focused distance loss \[\sum_{i< j}w_{i j}\left|D_{i j}^{(\mathrm{Model})} - D_{i j}^{(\mathrm{Data})}\right|,\] where the contact-focusing weights are \[w_{i j} = \frac{\sigma\left(\alpha(D_{0} - \min (D_{i j}^{(\mathrm{Model})},D_{i j}^{(\mathrm{Data})}))\right)}{\sum_{k< l}\sigma\left(\alpha(D_{0} - \min (D_{k l}^{(\mathrm{Model})},D_{k l}^{(\mathrm{Data})}))\right)}\] and \(\sigma (u) = \frac{1}{1 + \exp(- u)}\) is the sigmoid function.

Angle loss We use the loss \[\mathcal{L}_{\mathrm{angles}} = \sum_{i}||\mathcal{H}(\mathbf{z}_{i}^{(T)}) - \mathcal{H}(\mathbf{z}_{i}^{(\mathrm{Data})})||,\] where \(\mathcal{H}(\mathbf{z}_{i}) = [\cos (a_{i}),\ \sin (a_{i})\cos (d_{i}),\ \sin (a_{i})\sin (d_{i})]^{T}\) are unit-length feature vectors that map the angles \(\{a_{i},d_{i}\}\) to the unit sphere. Other angular losses, such as the negative log probability of a von Mises-Fisher distribution, are based on the inner product of the feature vectors \(\mathcal{H}(\mathbf{z}_{a})\cdot \mathcal{H}(\mathbf{z}_{b})\) rather than the Euclidean distance \(||\mathcal{H}(\mathbf{z}_{a}) - \mathcal{H}(\mathbf{z}_{b})||\) between them. It is worth noting that these two quantities are directly related by \(||\mathcal{H}(\mathbf{z}_{a}) - \mathcal{H}(\mathbf{z}_{b})|| = \sqrt{2(1 - \mathcal{H}(\mathbf{z}_{a})\cdot\mathcal{H}(\mathbf{z}_{b}))}\). Taking \(\mathbf{z}_{a}\) as fixed and \(\mathbf{z}_{b}\) as the argument, the Euclidean loss has a cusp at \(\mathbf{z}_{a}\) whereas the von Mises-Fisher loss is smooth around \(\mathbf{z}_{a}\). This is analogous to the difference between \(L^{1}\) and \(L^{2}\) losses, where the cusped \(L^{1}\) loss favors median behavior while the smooth \(L^{2}\) loss favors average behavior.

Trajectory loss In a further analogy to reinforcement learning, damped backpropagation through time necessitates an intermediate loss function that can criticize transient states of the simulator. We compute this by featurizing the per time-step coordinates as the product \(D_{i j}\hat{\mathbf{v}}_{i j}\) (Figure 2C) and applying the same contact-weighted averaging as the distance loss.

Template Modelling (TM) Score The TM-score (Zhang & Skolnick, 2005), \[\frac{1}{L}\sum_{i}\frac{1}{1 + \left(\frac{D_{i}}{D_{0}}\right)^{2}},\] is a measure of superposition quality between two protein structures on \([0,1]\) that was presented as an approximately length-independent alternative to RMSD. The TM-score is the best attainable value of the preceding quantity over all possible superpositions of the two structures, where \(D_{i} = ||\mathbf{x}_{i}^{(\mathrm{Model})} - \mathbf{x}_{i}^{(\mathrm{Data})}||\). This requires iterative optimization, which we implemented with a sign gradient descent with 100 iterations to optimally superimpose the model and target structure. We backpropagate through this unrolled optimization process as well as that of the simulator.

Hydrogen bond loss We determine intra-backbone hydrogen bonds using the electrostatic model of DSSP (Kabsch & Sander, 1983).
First, we place virtual hydrogens at 1 Angstrom along the negative angle bisector of the \(C_{i - 1} - N_{i} - C\alpha_{i}\) bond angle. Second, we compute a putative energy \(U_{ij}^{\mathrm{h\text{-}bond}}\) (in kcal/mol) for each potential hydrogen bond from an amide donor at \(i\) to a carbonyl acceptor at \(j\) as \[U_{i j}^{\mathrm{h\text{-}bond}}(\mathbf{X}) = \left(\frac{q_{N}q_{O}}{D_{N O}} +\frac{q_{H}q_{C}}{D_{H C}} +\frac{q_{H}q_{O}}{D_{H O}} +\frac{q_{N}q_{C}}{D_{N C}}\right)332 = 0.084\left(\frac{1}{D_{N O}} +\frac{1}{D_{H C}} -\frac{1}{D_{H O}} -\frac{1}{D_{N C}}\right)332,\] where \(D_{a b} = ||\mathbf{X}_{i,a} - \mathbf{X}_{j,b}||\) is the Euclidean distance between atom \(a\) of residue \(i\) and atom \(b\) of residue \(j\). We then make hard assignments of hydrogen bonds for the data with \[y_{i j}^{\mathrm{data}} = \mathbf{1}\left(U_{i j}^{\mathrm{h\text{-}bond}}(\mathbf{X}^{(\mathrm{data})})< -0.5\right).\]

We 'predict' the probabilities of hydrogen bonds of the data given the model via logistic regression of soft model assignments as \[y_{ij}^{\mathrm{model}} = \sigma \left(a\sigma \left(b\left(-U_{ij}^{\mathrm{h\text{-}bond}}(\mathbf{X}^{(\mathrm{model})}) + 0.5\right)\right) + c\right),\] where \(a,b,c\) are learned parameters with softplus parameterizations enforcing \(a,b > 0\) and \(\sigma (u) = 1 / (1 + \exp (- u))\) is the sigmoid function. The final hydrogen bond loss is the cross-entropy between these predictions and the data, \[\mathcal{L}_{\mathrm{h\text{-}bond}} = -\sum_{|i - j| > 2}\left[y_{ij}^{\mathrm{data}}\log y_{ij}^{\mathrm{model}} + (1 - y_{ij}^{\mathrm{data}})\log \left(1 - y_{ij}^{\mathrm{model}}\right)\right].\]

Secondary Structure Prediction We output standard 8-class predictions of secondary structure and score them with a cross-entropy loss.

## B.4 STABILIZING BACKPROPAGATION THROUGH TIME

The combination of energy function, simulator, and refinement network can build an atomic-level model of protein structure from sequence, and our goal is to optimize (meta-learn) this entire procedure by gradient descent. Before going into specifics of the loss function, however, we will discuss challenges and solutions for computing gradients of unrolled simulations in the face of chaos.

## B.5 CHAOS AND EXPLODING GRADIENTS

Gradient-based learning of iterative computational procedures such as Recurrent Neural Networks (RNNs) is well known to be subject to the problems of exploding and vanishing gradients (Pascanu et al., 2013). Informally, these occur when the sensitivities of model outputs to inputs become either extremely large or extremely small and the gradient is no longer an informative signal for optimization. We find that backpropagation through unrolled simulations such as those presented here is no exception to this rule. Often we observed that a model would productively learn for tens of thousands of iterations, only to suddenly and catastrophically exhibit diverging gradients from which the optimizer could not recover, even when the observed simulation dynamics exhibited no obvious qualitative changes in behavior and the standard solutions of gradient clipping (Pascanu et al., 2013) were in effect. Similar phenomena have been observed previously in the context of meta-learning (Maclaurin et al., 2015) and are explored in detail in a concurrent work (Parmas et al., 2018). In Figure 9, we furnish a minimal example that illustrates how chaos can lead to irrevocable loss of learning.
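The flavor of this example can be reproduced in a few lines. The double-well potential below is an assumed stand-in (the exact well behind Figure 9 is not specified here), but plain gradient descent on it exhibits the same period-doubling route to chaos:

```python
import numpy as np

# Assumed stand-in potential: a double well U(x) = (x**2 - 1)**2 / 4.
def U_prime(x):
    return x ** 3 - x

def U_double_prime(x):
    return 3 * x ** 2 - 1

alpha = 1.75            # step size in the (apparently) chaotic regime
x, sens = 0.5, 1.0      # sens accumulates dx_t / dx_0 by the chain rule
for _ in range(1000):
    sens *= 1.0 - alpha * U_double_prime(x)  # per-step Jacobian factor
    x -= alpha * U_prime(x)                  # gradient descent step

# x stays confined near the wells while |sens| may reach astronomically
# large values (cf. the ~1e200 sensitivities quoted in the text).
print(x, sens)
```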
We see that even for a simple particle in a well, some choices of system parameters (such as too large a time step) can lead to chaotic dynamics, which are synonymous with explosive gradients. This example is hardly contrived, and is in fact a simple model of the distance potentials between coordinates in our simulations. Moreover, it is important to note that chaos may not be easy to diagnose: for learning rates \(\alpha \in [1.7,1.8]\) the position of the particle \(x\) remains more or less confined in the well while the sensitivities diverge to \(10^{200}\). It seems unlikely that meta-learning would be able to recover after descending into chaos.

The view per time step Exploding gradients and chaotic dynamics involve the same mechanism: a multiplicative accumulation of sensitivities. In dynamical systems this is frequently phrased as 'exponentially diverging sensitivity to initial conditions'. Intuitively, this can be understood by examining how the Jacobian of an entire trajectory decomposes into a product of Jacobians as \[\frac{\partial x^{(T)}}{\partial x^{(0)}} = \frac{\partial x^{(T)}}{\partial x^{(T - 1)}}\frac{\partial x^{(T - 1)}}{\partial x^{(T - 2)}}\dots \frac{\partial x^{(1)}}{\partial x^{(0)}}. \quad (10)\] When the norms of the per time-step Jacobians \(\frac{\partial x^{(t)}}{\partial x^{(t - 1)}}\) are typically larger than 1, the sensitivity \(\| \frac{\partial x^{(T)}}{\partial x^{(0)}}\|\) will grow exponentially with \(T\). Ideally, we would keep these norms well-behaved, which is the rationale behind recent work on the stabilization of RNNs (Henaff et al., 2016; Chen et al., 2018b). Next we offer a general-purpose regularizer to approximately enforce this goal for any differentiable computational iteration with continuous state.

![](images/19_0.jpg)
<center>Figure 9: Chaos impedes meta-learning for gradient descent in a well. (a) Gradient descent of a particle in a well with initial condition \(x^{(0)}\) and step size \(\alpha\). (b) Orbit diagrams visualize long-term dynamics from iterations 1000 to 2000 of the position \(x\) (top) and the gradient \(\frac{d x^{(t)}}{d x^{(0)}}\) (bottom). When the step size \(\alpha\) is small, these dynamics converge to a periodic orbit over \(2^{k}\) values where \(0 \leq k < \infty\). After some critical step size, the dynamics undergo a period-doubling bifurcation (Strogatz, 2018), become chaotic, and the gradients regularly diverge to huge numbers.</center>

Approximate Lipschitz conditions One condition that guarantees that a deterministic map \(F: \mathbb{R}^{N} \to \mathbb{R}^{N}\), \(x_{t} = F(x_{t - 1}, \theta)\), cannot exhibit exponential sensitivity to initial conditions is the condition of being non-expansive (also known as 1-Lipschitz or metric). That is, for any two input points \(x_{a}, x_{b} \in \mathbb{R}^{N}\), iterating the map cannot increase the distance between them, i.e. \(|F(x_{a}, \theta) - F(x_{b}, \theta)| \leq |x_{a} - x_{b}|\). Reapplying the map to the bound immediately implies \[|F^{(t)}({\pmb x},\theta) - F^{(t)}({\pmb x} + \Delta {\pmb x},\theta)|\leq |\Delta {\pmb x}| \quad (11)\] for any number of iterations \(t\). Thus, two initially close trajectories iterated through a non-expansive mapping must remain at least that close for arbitrary time. We approximately enforce non-expansivity by performing an online sensitivity analysis within simulations.
At randomly selected time steps, the current time step \(x^{(t)}\) is rolled back to the preceding state and re-executed with a small Gaussian perturbation to the state, \(\delta \sim \mathcal{N}(0, 10^{- 4} I)\). We regularize the sensitivity by adding \[\mathcal{L}_{\mathrm{Lyapunov}} = \max \left(0, \log \frac{|F(x^{(t)}) - F(x^{(t)} + \delta)|}{|\delta|}\right) \quad (12)\] to the loss. Interestingly, the stochastic nature of this approximate regularizer is likely a good thing: a truly non-expansive map is quite limited in what it can model. However, being 'almost' non-expansive seems to be incredibly helpful for learning.

Damped Backpropagation through Time The approximate Lipschitz conditions (or Lyapunov regularization) encourage but do not guarantee stable backpropagation. When chaotic phase transitions or other instabilities occur, we need a fall-back plan to be able to continue learning. At the same time, we would like gradient descent to proceed in the usual manner when simulator dynamics are stable. To this end we introduce a damping factor to backpropagation that can adaptively combat exponentially diverging gradients with exponential discounting (Algorithm 4).

Algorithm 4: Damped Backpropagation Through Time
Input: Initial state \(\boldsymbol{x}^{(0)}\), time-stepping function \(F(\boldsymbol{x},\boldsymbol{s},\boldsymbol{\theta})\), external inputs \(s_{1},\ldots ,s_{T}\), parameters \(\theta\), loss function \(\mathcal{L}(\boldsymbol{x}_{1},\ldots ,\boldsymbol{x}_{T})\), damping factor \(0 < \gamma < 1\)
Output: Exponentially damped gradient \(\nabla_{\theta}\mathcal{L}\)
Initialize \(\boldsymbol{x}^{(0)}\leftarrow \mathcal{F}(\boldsymbol{z}^{(0)})\);
for \(t\leftarrow 1,\ldots ,T\) do
  Compute time step \(\tilde{\boldsymbol{x}}_{t}\leftarrow F(\boldsymbol{x}_{t - 1},\boldsymbol{s}_{t},\theta)\);
  Decay the gradient \(\boldsymbol{x}_{t}\leftarrow \gamma \tilde{\boldsymbol{x}}_{t} + (1 - \gamma)\perp (\tilde{\boldsymbol{x}}_{t})\);
end
Compute loss \(\mathcal{L}(\boldsymbol{x}_{1},\ldots ,\boldsymbol{x}_{T})\);
Compute gradient \(\nabla_{\theta}\mathcal{L}\leftarrow \mathrm{Autodiff}(\mathcal{L},\theta)\);
where \(\perp (\cdot)\) is the stop_gradient function.

Damped backpropagation can be seen as a continuous alternative to the standard approach of Truncated Backpropagation through Time. Rather than setting the gradient to 0 after some fixed interval of time steps, we decay it on the backwards pass of reverse-mode differentiation by a factor of \(\gamma\). This is mildly evocative of the notion of discounted future rewards in reinforcement learning.
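The decay step of Algorithm 4 can be expressed as a convex combination with a stop-gradient, which leaves the forward value unchanged while scaling the backward signal by \(\gamma\). The sketch below reflects this reading of the algorithm (the original decay line is garbled in the source); the toy linear dynamics are illustrative only:

```python
import jax

def damp(x, gamma):
    """Decay step of Algorithm 4: gamma * x + (1 - gamma) * x leaves the
    forward value of x unchanged, but because the second term passes
    through stop_gradient, the gradient flowing back through this node
    is scaled by gamma."""
    return gamma * x + (1.0 - gamma) * jax.lax.stop_gradient(x)

def rollout(x0, theta, gamma=0.5, T=50):
    # Toy linear time step F(x) = theta * x, damped at every iteration.
    x = x0
    for _ in range(T):
        x = damp(theta * x, gamma)
    return x

# Undamped, d(rollout)/d(x0) would be theta**50 ~ 1e15; damping rescales
# it by gamma**50, here to (gamma * theta)**50 == 1.0.
g = jax.grad(rollout)(1.0, 2.0)
```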
During backpropagation this causes a biased estimate of Jacobians that favors short-term sensitivities (or rewards), as \[\frac{\partial\hat{\boldsymbol{x}}^{(t)}}{\partial\boldsymbol{x}^{(t - k)}} = \left(\gamma \frac{\partial\boldsymbol{x}^{(t)}}{\partial\boldsymbol{x}^{(t - 1)}}\right)\left(\gamma \frac{\partial\boldsymbol{x}^{(t - 1)}}{\partial\boldsymbol{x}^{(t - 2)}}\right)\cdots \left(\gamma \frac{\partial\boldsymbol{x}^{(t - k + 1)}}{\partial\boldsymbol{x}^{(t - k)}}\right) = \gamma^{k}\frac{\partial\boldsymbol{x}^{(t)}}{\partial\boldsymbol{x}^{(t - k)}}. \quad (13)\]

## B.6 MULTIPLE SEQUENCE ALIGNMENT GENERATION

We use multiple sequence alignments of evolutionarily related sequences for both (i) profile construction (§ B.7) and (ii) data augmentation (§ B.8). For every domain in the dataset, we extracted the sequence from the PDB and then used jackhmmer (Eddy, 2011) to iteratively search the UniRef90 database (Suzek et al., 2014) (release 4/2016) with 5 iterations and a length-normalized bitscore threshold of 0.3. We then removed sequences with over \(50\%\) gaps relative to the query sequence and redundancy-reduced the alignment with hhfilter (Remmert et al., 2012) such that all sequences are at least a normalized Hamming distance of 0.8 away from one another.

## B.7 PROFILE GENERATION

We briefly describe how we construct evolutionary profiles, or position-specific scoring matrices (PSSMs), for each protein domain. Let \(\mathcal{S} = \{\boldsymbol{S}^{(1)},\ldots ,\boldsymbol{S}^{(L)}\}\) be the set of \(L\) columns of a multiple sequence alignment over \(M\) sequences, where each column \(\boldsymbol{S}^{(i)}\) is an \(M\times q\) matrix that one-hot encodes the sequence data at position \(i\) (for an alphabet of size \(q\)). The regularized empirical frequency of letter \(a\) at site \(i\) is then \[f_{a}^{(i)} = \frac{\alpha + \sum_{j}\boldsymbol{S}_{j a}^{(i)}}{\alpha + M},\] where \(\alpha\) is a pseudocount that we set to 10. We compute our PSSM features for letter \(a\) at site \(i\) as \[w_{a}^{(i)} = \sigma \left(\log \frac{f_{a}^{(i)}}{B_{a}}\right),\] where \(\sigma (u) = \frac{1}{1 + \exp(- u)}\) is the logistic sigmoid and \(B_{a}\) is the average frequency of amino acid \(a\) in UniProt (Apweiler et al., 2004).

Table 4: Qualitative timings. Results on CATH dataset and 2 M40 GPUs.

<table><tr><td>Method</td><td>Generation time</td><td>Training time</td></tr><tr><td>RNN baseline†</td><td>milliseconds</td><td>~ 1 week</td></tr><tr><td>NEMO†</td><td>seconds</td><td>~ 2 months</td></tr><tr><td>Coevolution-based methods</td><td>minutes to hours</td><td>Coupled to generation</td></tr><tr><td>Physical simulations</td><td>days to weeks</td><td>N/A</td></tr></table>

## B.8 EVOLUTIONARY DATA AUGMENTATION

To reduce our reliance on alignments and the generation of profiles for inference on new sequences, while still leveraging evolutionary sequence data, we augmented our training set by dynamically spiking diverse, related sequences into the model during training. Given a set of \(M\) sequences in the alignment, we sample a sequence \(t\) based on its normalized Hamming distance \(d_{t}\) with probability \[p_{t} = \frac{\exp(\lambda_{\mathrm{EDA}} d_{t})}{\sum_{s = 1}^{M}\exp(\lambda_{\mathrm{EDA}} d_{s})},\] where \(\lambda_{\mathrm{EDA}}\) is a scaling parameter that we set to 5. When the alternate sequence contains gaps, we construct a chimeric sequence that substitutes those sites with the query.
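A minimal sketch of this augmentation step follows, assuming sequences are encoded as equal-length arrays of single-character tokens with '-' for gaps; counting gap positions as mismatches in the distance is a simplifying assumption:

```python
import numpy as np

def sample_augmented_sequence(query, alignment, lam=5.0, rng=None):
    """Evolutionary data augmentation (§ B.8): sample one aligned homolog
    with probability proportional to exp(lam * d_t), where d_t is its
    normalized Hamming distance to the query, then build a chimera that
    falls back to the query at gap positions."""
    rng = rng or np.random.default_rng()
    d = np.array([np.mean(seq != query) for seq in alignment])
    p = np.exp(lam * d)
    p /= p.sum()
    seq = alignment[rng.choice(len(alignment), p=p)]
    return np.where(seq == "-", query, seq)
```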
This strategy increased the number of available sequence-structure pairs by several orders of magnitude, and we used it for both profile- and 1-seq-based training.

## C APPENDIX C: RESULTS

## C.1 STRUCTURE GENERATION AND PROCESSING

For each sequence from the CATH release 4.2 dataset, 100 structures were generated from both the profile and sequence-only models, while a single structure was generated from the RNN baseline models. The reported TM-scores were calculated using Maxcluster (Siew et al., 2000). A single representative structure was chosen from the ensemble of 100 structures using 3D-Jury (Ginalski et al., 2003). A pairwise distance matrix of TM-scores was calculated for all of the 100 structures in the ensemble. Clusters were determined by agglomerative hierarchical clustering with complete linkage, using a TM-score threshold of 0.5 to determine cluster membership.

![](images/21_0.jpg)
<center>Figure 10: Sampling speed. Per-protein sampling times for various batch sizes across NEMO and one of the RNN baselines on a single Tesla M40 GPU with 12GB memory and 20 cores. For all results in the main paper, 100 models were sampled per protein followed by consensus clustering with 3D-Jury, adding an additional factor of \(10^{2}\) cost between NEMO and the RNN.</center>

![](images/22_0.jpg)
<center>Figure 11: Predictive performance of structures generated by the sequence-only model. (Left) Structures in the test set are hierarchically organized by CATH classification; groups further up the tree require broader generalization. (Center-left) Ensembles of models with increasing certainty tend to have a better average TM-score. (Center-right) TM-score of 3D-Jury-selected models versus distance from the training data. (Right) Comparing the energy-based model with and without profiles; profile information greatly improves protein model accuracy as judged by TM-score.</center>

![](images/22_1.jpg)
<center>Figure 12: Generalization results upon re-stratification. Profile-based model.</center>

![](images/23_0.jpg)
<center>Figure 13: RNN baseline performance for different hyperparameters. Predictive performance of the two-layer bidirectional LSTM baseline models across a range of hidden-unit dimensions compared to the energy model.</center>
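For reference, the ensemble clustering of § C.1 can be sketched with standard tooling. Treating \(1 - \mathrm{TM}\) as the pairwise distance, so that the stated TM-score membership threshold of 0.5 becomes a distance cutoff, is our reading rather than a quoted implementation detail:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_ensemble(tm, cutoff=0.5):
    """Complete-linkage agglomerative clustering of a sampled ensemble
    (§ C.1), given a symmetric matrix `tm` of pairwise TM-scores between
    the 100 sampled structures."""
    dist = 1.0 - tm
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist), method="complete")
    return fcluster(Z, t=1.0 - cutoff, criterion="distance")
```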
## ABSTRACT

The Boltzmann distribution is a natural model for many systems, from brains to materials and biomolecules, but is often of limited utility for fitting data because Monte Carlo algorithms are unable to simulate it in available time. This gap between the expressive capabilities and sampling practicalities of energy-based models is exemplified by the protein folding problem, since energy landscapes underlie contemporary knowledge of protein biophysics but computer simulations are challenged to fold all but the smallest proteins from first principles. In this work we aim to bridge the gap between the expressive capacity of energy functions and the practical capabilities of their simulators by using an unrolled Monte Carlo simulation as a model for data. We compose a neural energy function with a novel and efficient simulator based on Langevin dynamics to build an end-to-end differentiable model of atomic protein structure given amino acid sequence information. We introduce techniques for stabilizing backpropagation under long roll-outs and demonstrate the model's capacity to make multimodal predictions and to, in some cases, generalize to unobserved protein fold types when trained on a large corpus of protein structures.

## 1 INTRODUCTION

Many natural systems, such as cells in a tissue or atoms in a protein, organize into complex structures from simple underlying interactions. Explaining and predicting how macroscopic structures such as these arise from simple interactions is a major goal of science and, increasingly, machine learning. The Boltzmann distribution is a foundational model for relating local interactions to system behavior, but can be difficult to fit to data. Given an energy function \(U_{\theta}[x]\), the probability of a system configuration \(x\) scales exponentially with energy as \[p_{\theta}(x) = \frac{1}{Z}\exp \left(-U_{\theta}[x]\right), \quad (1)\] where the (typically intractable) constant \(Z\) normalizes the distribution. Importantly, simple energy functions \(U_{\theta}[x]\) consisting of weak, local interactions can collectively encode complex system behaviors, such as the structures of materials and molecules or, when endowed with latent variables, the statistics of images, sound, and text (Ackley et al., 1985; Salakhutdinov & Larochelle, 2010).

Unfortunately, learning model parameters \(\hat{\theta}\) and generating samples \(x \sim p_{\theta}(x)\) of the Boltzmann distribution are difficult in practice, as these procedures depend on expensive Monte Carlo simulations that may struggle to mix effectively. These difficulties have driven a shift towards generative models that are easier to learn and sample from, such as directed latent variable models and autoregressive models (Goodfellow et al., 2016).

![](images/1_0.jpg)
<center>Figure 1: An unrolled simulator as a model for protein structure. NEMO combines a neural energy function for coarse protein structure, a stochastic simulator based on Langevin dynamics with learned (amortized) initialization, and an atomic imputation network to build atomic coordinate output from sequence information. It is trained end-to-end by backpropagating through the unrolled folding simulation.</center>

The protein folding problem provides a prime example of both the power of energy-based models at describing complex relationships in data and the challenge of generating samples from them. Decades of research in biochemistry and biophysics support an energy landscape theory of
protein folding (Dill et al., 2017), in which the folds that natural protein sequences adopt are those that minimize free energy. Without the availability of external information such as coevolutionary information (Marks et al., 2012) or homologous structures (Martí-Renom et al., 2000) to constrain the energy function, however, contemporary simulations are challenged to generate globally favorable low-energy structures in available time.

How can we get the representational benefits of energy-based models with the sampling efficiency of directed models? Here we explore a potential solution of directly training an unrolled simulator of an energy function as a model for data. By directly training the sampling process, we eschew the question 'when has the simulator converged' and instead demand that it produce a useful answer in a fixed amount of time. Leveraging this idea, we construct an end-to-end differentiable model of protein structure that is trained by backpropagation through folding (Figure 1). NEMO (Neural energy modeling and optimization) can learn at scale to generate 3D protein structures consisting of hundreds of points directly from sequence information. Our main contributions are:

- Neural energy simulator model for protein structure that composes a deep energy function, unrolled Langevin dynamics, and an atomic imputation network for an end-to-end differentiable model of protein structure given sequence information
- Efficient sampling algorithm that is based on a transform integrator for efficient sampling in transformed coordinate systems
- Stabilization techniques for long roll-outs of simulators that can exhibit chaotic dynamics and, in turn, exploding gradients during backpropagation
- Systematic analysis of combinatorial generalization with a new dataset of protein sequence and structure

### 1.1 RELATED WORK

Protein modeling Our model builds on a long history of coarse-grained modeling of protein structure (Kolinski et al., 1998; Kmiecik et al., 2016). Recently, multiple groups have demonstrated how to learn full force fields using likelihood-based approaches (Jumper et al., 2018; Krupa et al., 2017), similar to our maximum likelihood loss (but without backpropagation through folding for fast sampling). While this work was in progress, two groups reported neural models of protein structure (AlQuraishi, 2018; Anand & Huang, 2018), where the former focused on modeling structure in terms of backbone angles and the latter in terms of residue-residue distances. We show how an energy function provides a natural framework to integrate both kinds of constraints, which in turn is important for achieving sample-efficient structural generalization.

Learning to infer or sample Structured prediction includes a long history of casting predictions in terms of energy minimization (LeCun et al., 2006).

![](images/2_0.jpg)
<center>Figure 2: A neural energy function models coarse-grained structure and is sampled by internal coordinate dynamics. (A) The energy function is formulated as a Markov Random Field with structure-based features and sequence-based weights computed by neural networks (Figure 6).
(B) To rapidly sample low-energy configurations, the Langevin dynamics simulator leverages both (i) an internal coordinate parameterization, which is more effective for global rearrangements, and (ii) a Cartesian parameterization, which is more effective for localized structural refinement. (C) The base features of the structure network are rotationally and translationally invariant internal coordinates (not shown), pairwise distances, and pairwise orientations.</center>

Recently, others have built hybrid neural networks that use differentiable optimization as a building block in neural architectures (Wang et al., 2016; Amos & Kolter, 2017; Belanger & McCallum, 2016). Structured Prediction Energy Networks (SPENs) with unrolled optimization (Belanger et al., 2017) are a highly similar approach to ours, differing in terms of the use of optimization rather than sampling. Additional methodologically related work includes approaches to learn energy functions and samplers simultaneously (Kim & Bengio, 2016; Wang & Liu, 2017; Dai et al., 2017; Song et al., 2017; Chen et al., 2018a), to learn efficient MCMC operators (Song et al., 2017; Levy et al., 2018), to build expressive approximating distributions with unrolled Monte Carlo simulations (Salimans et al., 2015; Titsias, 2017), and to learn the parameters of simulators with implicitly defined likelihoods (Mohamed & Lakshminarayanan, 2016; Tran et al., 2017).

## 2 MODEL

Overview NEMO is an end-to-end differentiable model of protein structure \(X\) conditioned on sequence information \(s\), consisting of three components (Figure 1): (i) a neural energy function \(U_{\theta}[x; s]\) for coarse-grained structure \(x\) given sequence, (ii) an unrolled simulator that generates approximate samples from \(U\) via internal coordinate Langevin dynamics (§ 2.3), and (iii) an imputation network that generates an atomic model \(X\) from the final coarse-grained sample \(x^{(T)}\) (§ 2.4). All components are trained simultaneously via backpropagation through the unrolled process.

### 2.1 REPRESENTATION

Proteins Proteins are linear polymers (sequences) of amino acids that fold into defined 3D structures. The 20 natural amino acids have a common monomer structure \([- (\mathrm{N - H}) - (\mathrm{C - R}) - (\mathrm{C = O}) - ]\) with variable side-chain \(\mathrm{R}\) groups that can differ in properties such as hydrophobicity, charge, and ability to form hydrogen bonds. When placed in solvent (such as water or a lipid membrane), interactions between the side-chains, backbone, and solvent drive proteins into particular 3D configurations ('folds'), which are the basis for understanding protein properties such as biochemical activity, ligand binding, and interactions with drugs.

Coordinate representations We predict protein structure \(X\) in terms of 5 positions per amino acid: the four heavy atoms of the backbone (N, \(\mathrm{C}_{\alpha}\), and carbonyl \(\mathrm{C} = \mathrm{O}\)) and the center of mass of the side chain \(\mathrm{R}\) group. While it is well-established that the locations of \(C_{\alpha}\) carbons are sufficient to reconstruct a full atomic structure (Kmiecik et al., 2016), we include these additional positions for evaluating backbone hydrogen bonding (secondary structure) and coarse side-chain placement. Internally, the differentiable simulator generates an initial coarse-grained structure (one position per amino acid) with the loss function targeted to the midpoint of the \(C_{\alpha}\) carbon and the side chain center of mass.
Sequence conditioning We consider two modes for conditioning our model on sequence information: (1) 1-seq, in which \(s\) is an \(L \times 20\) matrix containing a one-hot encoding of the amino acid sequence, and (2) Profile, in which \(s\) is an \(L \times 40\) matrix encoding both the amino acid sequence and a profile of evolutionarily related sequences (§ B.7).

Internal coordinates In contrast to Cartesian coordinates \(x\), which parameterize structure in terms of absolute positions of points \(\boldsymbol{x}_{i} \in \mathbb{R}^{3}\), internal coordinates \(z\) parameterize structure in terms of relative distances and angles between points. We adopt a standard convention for internal coordinates of chains (Parsons et al., 2005) where each point \(\boldsymbol{x}_{i}\) is placed in a spherical coordinate system defined by the three preceding points \(\boldsymbol{x}_{i - 1}, \boldsymbol{x}_{i - 2}, \boldsymbol{x}_{i - 3}\) in terms of a radius (bond length) \(b_{i} \in (0, \infty)\), a polar angle (bond angle) \(a_{i} \in [0, \pi)\), and an azimuthal angle (dihedral angle) \(d_{i} \in [0, 2 \pi)\) (Figure 2B). We define \(\boldsymbol{z}_{i} = \{\hat{b}_{i}, \hat{a}_{i}, d_{i}\}\), where \(\hat{b}_{i}, \hat{a}_{i}\) are unconstrained parameterizations of \(b_{i}\) and \(a_{i}\) (§ A.1). The transformation \(\boldsymbol{x} = \mathcal{F}(\boldsymbol{z})\) from internal coordinates to Cartesian coordinates is then defined by the recurrence \[\boldsymbol{x}_{i} = \boldsymbol{x}_{i - 1} + b_{i}\left[\hat{\boldsymbol{u}}_{i - 1}\;\; \hat{\boldsymbol{n}}_{i - 1}\times \hat{\boldsymbol{u}}_{i - 1}\;\; \hat{\boldsymbol{n}}_{i - 1} \right]\left[ \begin{array}{c}\cos (\pi -a_{i})\\ \sin (\pi -a_{i})\cos (d_{i})\\ \sin (\pi -a_{i})\sin (d_{i}) \end{array} \right],\] where \(\hat{\boldsymbol{u}}_{i} = \frac{\boldsymbol{x}_{i} - \boldsymbol{x}_{i - 1}}{\| \boldsymbol{x}_{i} - \boldsymbol{x}_{i - 1}\|}\) is a unit vector from \(\boldsymbol{x}_{i - 1}\) to \(\boldsymbol{x}_{i}\) and \(\hat{\boldsymbol{n}}_{i} = \frac{\hat{\boldsymbol{u}}_{i - 1}\times\hat{\boldsymbol{u}}_{i}}{\| \hat{\boldsymbol{u}}_{i - 1}\times\hat{\boldsymbol{u}}_{i}\|}\) is a unit vector normal to each bond plane. The inverse transformation \(\boldsymbol{z} = \mathcal{F}^{- 1}(\boldsymbol{x})\) is simpler to compute, as it only involves local (and fully parallelizable) calculations of distances and angles (§ A.1).

### 2.2 NEURAL ENERGY FUNCTION

Deep Markov Random Field We model the distribution of a structure \(\boldsymbol{x}\) conditioned on a sequence \(s\) with the Boltzmann distribution, \(p_{\boldsymbol{\theta}}(\boldsymbol{x}|\boldsymbol{s}) = \frac{1}{Z}\exp \left(- U_{\boldsymbol{\theta}}[\boldsymbol{x};\boldsymbol{s}]\right)\), where \(U_{\boldsymbol{\theta}}[\boldsymbol{x};\boldsymbol{s}]\) is a sequence-conditioned energy function parameterized by a neural network. Our approach is compatible with any differentiable energy function \(U[\boldsymbol{x};\boldsymbol{s}]\), though we focus on the decomposition \[U_{\boldsymbol{\theta}}[\boldsymbol{x};\boldsymbol{s}] = \sum_{i}l_{i}(\boldsymbol{s};\boldsymbol{\theta})f_{i}(\boldsymbol{x};\boldsymbol{\theta}), \quad (2)\] which is a Markov Random Field with coefficients \(\{l_{i}(\boldsymbol{s};\boldsymbol{\theta})\}_{i = 1}^{M}\) computed by a sequence network and structural features \(\{f_{i}(\boldsymbol{x};\boldsymbol{\theta})\}_{i = 1}^{M}\) computed by a structure network (Figure 2A).
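As a small illustration of Equation 2, the energy reduces to an inner product; `l` and `f` below stand in for the outputs of the sequence and structure networks, which are not specified here:

```python
import jax.numpy as jnp

def energy(l, f):
    """Equation 2: the energy is an inner product between sequence-derived
    coefficients l_i(s; theta) and structure-derived features f_i(x; theta).
    Both are length-M vectors; l depends only on s, so it can be computed
    once per sequence and reused at every simulator step, while f is
    re-evaluated as x moves."""
    return jnp.dot(l, f)
```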
This decomposition facilitates (i) increased interpretability, as the (learned) structural features are independent of sequence, and (ii) increased computational efficiency, as the sequence-based coefficients can be computed once and reused throughout a simulation.

Sequence network The sequence network takes as input one-dimensional sequence information \(s\) and outputs: (1) energetic coefficients, a set of 1- and 2-dimensional sequence features \(\{l_{i}(\boldsymbol{s};\boldsymbol{\theta})\}_{i = 1}^{M}\); (2) the simulator initial state \(\boldsymbol{z}^{(0)}\); (3) the simulator preconditioning matrix \(C\); and (4) predicted secondary structure (Figure 6). It is parameterized by a combination of 1D, 2D, and graph convolutions (Gilmer et al., 2017) (§ A).

Structure network The structure network takes as input a coarse-grained structure \(\boldsymbol{x}\) and outputs a set of 1D and 2D structural features \(\{f_{i}(\boldsymbol{x};\boldsymbol{\theta})\}_{i = 1}^{M}\) (Figure 6). We design the energy function to be invariant to rigid body motions (rotations and translations in \(\mathrm{SE}(3)\)) by leveraging a set of invariant base features (Figure 2C), which are:

1. Internal coordinates \(z\). All internal coordinates except 6 are invariant to rotation and translation, and we mask these in the energy loss.
2. Distances \(D_{i j} = \| x_{i} - x_{j}\|\) between all pairs of points. We further process these by 4 radial basis functions with (learned) Gaussian kernels.
3. Orientation vectors \(\hat{\pmb{v}}_{i j}\), which are unit vectors encoding the relative position of point \(\boldsymbol{x}_{j}\) in a local coordinate system of \(\boldsymbol{x}_{i}\) with base vectors \(\frac{\hat{\pmb{u}}_{i} - \hat{\pmb{u}}_{i + 1}}{\| \hat{\pmb{u}}_{i} - \hat{\pmb{u}}_{i + 1}\|}\), \(\hat{\pmb{n}}_{i + 1}\), and the cross product thereof.

### 2.3 EFFICIENT SIMULATOR

Langevin dynamics The Langevin dynamics is a stochastic differential equation that asymptotically samples from the Boltzmann distribution (Equation 1). It is typically simulated by a first-order discretization as \[\pmb{x}^{(t + \epsilon)}\leftarrow \pmb{x}^{(t)} - \frac{\epsilon}{2}\nabla_{\pmb{x}}U^{(t)} + \sqrt{\epsilon}\mathbf{p},\qquad \mathbf{p}\sim \mathcal{N}(0,I). \quad (3)\]

Internal coordinate dynamics The efficiency with which Langevin dynamics explores conformational space is highly dependent on the geometry (and thus parameterization) of the energy landscape \(U(\boldsymbol{x})\). While Cartesian dynamics are efficient at local structural rearrangement, internal coordinate dynamics much more efficiently sample global, coherent changes to the topology of the fold (Figure 2B). We interleave the Cartesian Langevin dynamics with preconditioned internal coordinate dynamics, \[\pmb{z}^{(t + \epsilon)}\leftarrow \pmb{z}^{(t)} - \frac{\epsilon C}{2}\nabla_{\pmb{z}}U^{(t)} + \sqrt{\epsilon C}\mathbf{p},\qquad \mathbf{p}\sim \mathcal{N}(0,I), \quad (4)\] where \(C\) is a preconditioning matrix that sets the relative scaling of changes to each degree of freedom. For all simulations we unroll \(T = 250\) time steps, each of which is comprised of one Cartesian step followed by one internal coordinate step (Equation 9, § A.3).
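A schematic sketch of the interleaved updates (Equations 3 and 4) is shown below; `F`, `Finv`, the energy `U`, and the diagonal preconditioner `C` are hypothetical stand-ins for the paper's components:

```python
import jax
import jax.numpy as jnp

def simulate(z0, U, F, Finv, C, key, eps_x=1e-4, eps_z=1e-4, T=250):
    """Schematic of the unrolled simulator: each of T iterations takes one
    Cartesian Langevin step (Equation 3) followed by one preconditioned
    internal coordinate step (Equation 4)."""
    x = F(z0)
    for _ in range(T):
        key, kx, kz = jax.random.split(key, 3)
        # Cartesian step (Equation 3)
        x = (x - 0.5 * eps_x * jax.grad(U)(x)
             + jnp.sqrt(eps_x) * jax.random.normal(kx, x.shape))
        # Internal coordinate step (Equation 4). This naive version
        # rebuilds geometry with F every iteration, which the transform
        # integrator (Figure 3) is designed to avoid.
        z = Finv(x)
        g = jax.grad(lambda zz: U(F(zz)))(z)
        z = (z - 0.5 * eps_z * C * g
             + jnp.sqrt(eps_z * C) * jax.random.normal(kz, z.shape))
        x = F(z)
    return x
```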
Transform integrator Simulating internal coordinate dynamics is often computationally intensive, as it requires rebuilding Cartesian geometry \(\boldsymbol{x}\) from internal coordinates \(\boldsymbol{z}\) with \(\mathcal{F}(\boldsymbol{z})\) (Parsons et al., 2005), which is an intrinsically sequential process. Here we bypass the need for recomputing coordinate transformations at every step by instead computing on-the-fly transformation integration (Figure 3). The idea is to directly apply coordinate updates in one coordinate system to another by numerically integrating the Jacobian. This can be favorable when the Jacobian has a simple structure, such as in our case where it requires only distributed cross products.

### 2.4 ATOMIC IMPUTATION

Local reference frame reconstruction The imputation network builds an atomic model \(\pmb{X}\) from the final coarse coordinates \(\boldsymbol{x}^{(T)}\). Each atomic coordinate \(\mathbf{X}_{i,j}\) of atom type \(j\) at position \(i\) is placed in a local reference frame as \[\mathbf{X}_{i,j} = \pmb{x}_{i} + e_{i,j}(\pmb{z};\theta)\left[\hat{\pmb{u}}_{i}\;\; \hat{\pmb{n}}_{i + 1}\;\; \hat{\pmb{n}}_{i + 1}\times \hat{\pmb{u}}_{i}\right]\mathbf{r}_{i,j}(\pmb{z};\theta),\] where \(e_{i,j}(\boldsymbol{z};\theta)\) and \(\mathbf{r}_{i,j}(\boldsymbol{z};\theta)\) are computed by a 1D convolutional neural network (Figure 6).

## 3 TRAINING

We train and evaluate the model on a set of \(\sim 67,000\) protein structures (domains) that are hierarchically and temporally split. The model is trained by gradient descent using a composite loss that combines terms from likelihood-based and empirical-risk-minimization-based training.

Algorithm 1: Direct integrator
Input: State \(z^{(0)}\), energy \(U(x)\), step \(\epsilon\), time \(T\), scale \(C\)
Output: Trajectory \(x^{(0)},\ldots,x^{(T)}\)
Initialize \(x^{(0)} \leftarrow \mathcal{F}(z^{(0)})\);
while \(t < T\) do
  Compute forces \(f_{z} \leftarrow -\frac{\partial x}{\partial z}^{T}\nabla_{x}U\);
  Sample \(\Delta z \sim \mathcal{N}(\frac{\epsilon}{2} C f_{z},\ \epsilon C)\);
  \(z^{(t+\epsilon)} \leftarrow z^{(t)} + \Delta z\);
  \(x^{(t+\epsilon)} \leftarrow \mathcal{F}(z^{(t+\epsilon)})\);
  \(t \leftarrow t + \epsilon\);
end

Algorithm 2: Transform integrator
Input: State \(z^{(0)}\), energy \(U(x)\), step \(\epsilon\), time \(T\), scale \(C\)
Output: Trajectory \(x^{(0)},\ldots,x^{(T)}\)
Initialize \(x^{(0)} \leftarrow \mathcal{F}(z^{(0)})\);
while \(t < T\) do
  Compute forces \(f_{z} \leftarrow -\frac{\partial x}{\partial z}^{T}\nabla_{x}U\);
  Sample \(\Delta z \sim \mathcal{N}(\frac{\epsilon}{2} C f_{z},\ \epsilon C)\);
  \(\tilde{x} \leftarrow x^{(t)} + \frac{\partial x}{\partial z}^{(t)}\Delta z^{(t)}\);
  \(x^{(t+\epsilon)} \leftarrow x^{(t)} + \frac{1}{2}\left(\frac{\partial x}{\partial z}^{(t)} + \frac{\partial x}{\partial z}\big|_{\tilde{x}}\right)\Delta z^{(t)}\);
  \(t \leftarrow t + \epsilon\);
end

Figure 3: A transform integrator simulates Langevin dynamics in a more favorable coordinate system (e.g. internal coordinates \(\mathbf{z}\)) directly in terms of the untransformed state variables (e.g. Cartesian \(\mathbf{x}\)). This exchanges the cost of an inner-loop transformation step (e.g. geometry construction \(\mathcal{F}(\mathbf{z})\)) for an extra Jacobian evaluation, which is fully parallelizable on modern hardware (e.g. GPUs).
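A sketch of one predictor-corrector step of Algorithm 2 follows, using the closed-form Jacobian columns of § A.1 so that no call to \(\mathcal{F}\) is needed inside the loop. The `geom` helper (returning unit bond vectors and normals from \(x\)) is assumed, and the \(O(L^2)\) loop favors clarity over speed:

```python
import numpy as np

def jacobian_vp(x, db, da, dd, u_hat, n_hat):
    """(dx/dz) dz evaluated directly from Cartesian geometry via the
    closed-form Jacobian columns of § A.1: a bond length change pushes all
    downstream atoms along u_hat[i]; bond and dihedral angle changes
    rotate them about n_hat[i] and u_hat[i-1], centered at x[i-1]. x is
    [L, 3]; u_hat / n_hat are assumed index-aligned with x (boundary
    entries padded)."""
    dx = np.zeros_like(x)
    for i in range(1, len(x)):
        downstream = x[i:] - x[i - 1]
        dx[i:] += db[i] * u_hat[i]
        dx[i:] += da[i] * np.cross(n_hat[i], downstream)
        dx[i:] += dd[i] * np.cross(u_hat[i - 1], downstream)
    return dx

def heun_transform_step(x, dz, geom):
    """One step of Algorithm 2: apply an internal coordinate update
    dz = (db, da, dd) directly in Cartesian space with an Euler predictor
    and a trapezoidal (Heun) corrector."""
    jvp = jacobian_vp(x, *dz, *geom(x))                 # Jacobian at x
    x_pred = x + jvp                                    # predictor
    jvp_pred = jacobian_vp(x_pred, *dz, *geom(x_pred))  # Jacobian at x~
    return x + 0.5 * (jvp + jvp_pred)                   # corrector
```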
### 3.1 DATA

Structural stratification There are several scales of generalization in protein structure prediction, which range from predicting the structure of a sequence that differs from the training set at a few positions to predicting a 3D fold topology that is absent from the training set. To test these various levels of generalization systematically across many different protein families, we built a dataset on top of the CATH hierarchical classification of protein folds (Orengo et al., 1997). CATH hierarchically organizes proteins from the Protein Data Bank (Berman et al., 2000) into domains (individual folds) that are classified at the levels of Class, Architecture, Topology, and Homologous superfamily (from general to specific). We collected protein domains from CATH releases 4.1 and 4.2 up to length 200 and hierarchically and temporally split this set (§ B.1) into training (\(\sim 35\mathrm{k}\) folds), validation (\(\sim 21\mathrm{k}\) folds), and test sets (\(\sim 10\mathrm{k}\) folds).

Test subsets The final test set is subdivided into four subsets: C, A, T, and H, based on the level of maximal similarity between a given test domain and domains in the training set. For example, domains in the C or A sets may share class and potentially architecture classifications with train but will not share topology (i.e. fold).

### 3.2 LOSS

Likelihood The gradient of the data-averaged log likelihood of the Boltzmann distribution is \[\frac{\partial}{\partial\theta_{i}}\mathbb{E}_{\mathbf{x}\sim \mathrm{Data}}[\log p(\mathbf{x}|\mathbf{s},\theta)] = \mathbb{E}_{\mathbf{x}\sim p(\mathbf{x}|\theta)}\left[\frac{\partial}{\partial\theta_{i}} U_{\theta}(\mathbf{x};\mathbf{s})\right] - \mathbb{E}_{\mathbf{x}\sim \mathrm{Data}}\left[\frac{\partial}{\partial\theta_{i}} U_{\theta}(\mathbf{x};\mathbf{s})\right], \quad (5)\] which, when ascended, will minimize the average energy of samples from the data relative to samples from the model. In an automatic differentiation setting, we implement a Monte Carlo estimator for (the negative of) this gradient by adding the energy gap, \[\mathcal{L}_{\mathrm{ML}} = U_{\theta}(\bot (\mathbf{x}^{\mathrm{(D)}});\mathbf{s}) - U_{\theta}(\bot (\mathbf{x}^{\mathrm{(M)}});\mathbf{s}), \quad (6)\] to the loss, where \(\bot\) is an identity operator that sets the gradient to zero.

Empirical Risk In addition to the likelihood loss, which backpropagates through the energy function but not the whole simulation, we developed an empirical risk loss composing several measures of protein model quality. It takes the form \[\mathcal{L}_{\mathrm{ER}} = \mathcal{L}_{\mathrm{Distances}} + \mathcal{L}_{\mathrm{Angles}} + \mathcal{L}_{\mathrm{H-bonds}} + \mathcal{L}_{\mathrm{TM-score}} + \mathcal{L}_{\mathrm{Init}} + \mathcal{L}_{\mathrm{Trajectory}}, \quad (7)\] with the terms schematized in Figure 6.
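The estimator in Equation 6 is straightforward to express with a stop-gradient; in this sketch, `U(params, x, s)` is a hypothetical signature for the energy network:

```python
import jax

def ml_loss(params, U, x_data, x_model, s):
    """Monte Carlo estimator of Equation 6. stop_gradient plays the role
    of the identity operator in the text: it blocks differentiation
    through the coordinates, so grad(ml_loss, argnums=0) lowers the energy
    of data samples relative to model samples by adjusting only the
    energy parameters."""
    x_d = jax.lax.stop_gradient(x_data)
    x_m = jax.lax.stop_gradient(x_model)
    return U(params, x_d, s) - U(params, x_m, s)
```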
![](images/6_0.jpg)
<center>Figure 4: Model generalizes and outperforms end-to-end baseline for unseen fold topologies. Colors indicate varying difficulty levels of protein domains in the test set, with the C (cyan) and A (magenta) subsets corresponding to test-set domains with topologies (folds) and superfamilies that were not represented in the training set. (Left) As the model exhibits higher confidence (reduced structural diversity), it becomes more accurate. (Center) The model occasionally achieves TM-scores greater than 0.5 even for difficult C and A level generalization tasks. (Right) NEMO outperforms a strong RNN baseline for difficult generalization problems. All results for NEMO and RNN baselines are conditioned on profiles.</center>

Table 1: Test set performance across different levels of generalization

<table><tr><td>Model</td><td># params</td><td>Total</td><td>C</td><td>A</td><td>T</td><td>H</td></tr><tr><td>NEMO (ours, profile)</td><td>21.3m</td><td>0.366</td><td>0.274</td><td>0.361</td><td>0.331</td><td>0.431</td></tr><tr><td>NEMO (ours, sequence-only)</td><td>19.1m</td><td>0.248</td><td>0.198</td><td>0.245</td><td>0.254</td><td>0.263</td></tr><tr><td>RNN baseline model (profile)</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>2x100</td><td>5.9m</td><td>0.293</td><td>0.213</td><td>0.230</td><td>0.247</td><td>0.388</td></tr><tr><td>2x300 (avg. of 3)</td><td>8.8m</td><td>0.335</td><td>0.229</td><td>0.282</td><td>0.278</td><td>0.446</td></tr><tr><td>2x500</td><td>13.7m</td><td>0.347</td><td>0.222</td><td>0.272</td><td>0.286</td><td>0.477</td></tr><tr><td>2x700</td><td>21.4m</td><td>0.309</td><td>0.223</td><td>0.259</td><td>0.261</td><td>0.403</td></tr><tr><td>Number of structures</td><td></td><td>10381</td><td>1537</td><td>1705</td><td>3198</td><td>3941</td></tr></table>

Our combined loss sums all of the terms \(\mathcal{L} = \mathcal{L}_{\mathrm{ER}} + \mathcal{L}_{\mathrm{ML}}\) without weighting.

### 3.3 STABILIZING BACKPROPAGATION THROUGH TIME

We found that the long roll-outs of our simulator were prone to chaotic dynamics and exploding gradients, as seen in other work (Maclaurin et al., 2015; Parmas et al., 2018). Unfortunately, when chaotic dynamics do occur, it is typical for all gradients to explode (across learning steps), and standard techniques such as gradient clipping (Pascanu et al., 2013) are unable to rescue learning (§ B.5). To stabilize training, we developed two complementary techniques that regularize against chaotic simulator dynamics while still facilitating learning when they arise. They are:

- Lyapunov regularization We regularize the simulator time-step function (rather than the energy function) to be approximately 1-Lipschitz. (If exactly satisfied, this eliminates the possibility of chaotic dynamics.)
- Damped backpropagation through time We exponentially decay gradient accumulation on the backwards pass of automatic differentiation by multiplying each backwards iteration by a damping factor \(\gamma\). We adaptively tune \(\gamma\) to cancel the scale of the exploding gradients. This can be thought of as a continuous relaxation of, and a quantitatively tunable alternative to, truncated backpropagation through time.

![](images/7_0.jpg)
<center>Figure 5: Examples of fold generalization at topology and architecture level. These predicted structures show a range of prediction accuracy for structural generalization (C and A) tasks, with the TM-score comparing the top-ranked 3D-Jury pick against the target. The largest clusters are the three most-populated clusters derived from 100 models per domain with a within-cluster cutoff of \(\mathrm{TM} > 0.5\). CATH IDs: 2oy8A03; 5c3uA02; 2y6xA00; 3cimB00; 4ykA00; 2f09A00; 3i5qA02; 2ayxA01.</center>

## 4 RESULTS

### 4.1 GENERALIZATION ACROSS CATH

For each of the 10,381 protein structures in our test set, we sampled 100 models from NEMO, clustered them by structural similarity, and selected a representative structure by a standard consensus algorithm (Ginalski et al., 2003).
For evaluation of performance we focus on the TM-score (Zhang & Skolnick, 2005), a measure of structural similarity between 0 and 1 for which \(\mathrm{TM} > 0.5\) is typically considered an approximate reconstruction of a fold.

Calibrated uncertainty We find that, when the model is confident (i.e. the number of distinct structural clusters is low, \(\sim 1 - 3\)), it is also accurate, with some predictions having average \(\mathrm{TM} > 0.5\) (Figure 4, left). Unsurprisingly, the confidence of the model tends to go with the difficulty of generalization, with the most confident predictions from the H test set and the least confident from C.

Structural generalization However, even when sequence identity is low and generalization difficulty is high (Figure 4, center), the model is still able to make some accurate predictions of 3D structures. Figure 5 illustrates some of these successful predictions at C and A levels, specifically 4ykA00, 5c3uA02, and beta sheet formation in 2oy8A03. We observe that the predictive distribution is multimodal, with non-trivial differences between the clusters representing alternate packings of the chain. In some of the models there is an uneven distribution of uncertainty along the chain, which sometimes corresponds to loosely packed regions of the protein.

Comparison to an end-to-end baseline We constructed a baseline model that is a non-iterative replica of NEMO, replacing the coarse-grained simulator module (and energy function) with a two-layer bidirectional LSTM that directly predicts coarse internal coordinates \(z^{(0)}\) (followed by transformation to Cartesian coordinates with \(\mathcal{F}\)). We trained this baseline across a range of hyperparameter values and found that for difficult C, A, and T tasks, NEMO generalized more effectively than the RNNs (Table 1). For the best-performing 2x300 architecture, we trained two additional replicates and report the averaged performance in Figure 4 (right). Additionally, we report the results of a sequence-only NEMO model in Table 1. Paralleling secondary structure prediction (Rost & Sander, 1993; McGuffin et al., 2000), we find that the availability of evolutionary information has a significant impact on prediction quality.

### 4.2 ADVANTAGES AND DISADVANTAGES

This work presents a novel approach for protein structure prediction that combines the inductive bias of simulators with the speed of directed models. A major advantage of the approach is that model sampling (inference) times can be considerably faster than conventional approaches to protein structure prediction (Table 4). There are two major disadvantages. First, the computational cost of training and sampling is higher than that of angle-predicting RNNs (Figure 10) such as our baseline or AlQuraishi (2018). Consequently, those methods have been scaled to larger datasets than ours (in protein length and diversity), which are more relevant to protein structure prediction tasks. Second, the instability of backpropagating through long simulations is unavoidable and only partially remedied by our approaches of Lipschitz regularization and gradient damping. These approaches can also lead to slower learning and less expressive energy functions. Methods for efficient (i.e. subquadratic) \(N\)-body simulations and for more principled stabilization of deep networks may be relevant to addressing both of these challenges in the future.
## 5 CONCLUSION We described a model for protein structure given sequence information that combines a coarse-grained neural energy function and an unrolled simulation into an end-to-end differentiable model. To realize this idea at the scale of real proteins, we introduced an efficient simulator for Langevin dynamics in transformed coordinate systems and stabilization techniques for backpropagating through long simulator roll-outs. We find that the model is able to predict the structures of protein molecules with hundreds of atoms while capturing structural uncertainty, and that the model can structurally generalize to distant fold classifications more effectively than a strong baseline. ## ACKNOWLEDGEMENTS We thank members of the Marks lab for useful discussions and feedback. Parts of this work were performed on the Orchestra compute cluster at Harvard Medical School. ## APPENDICES ## A MODEL ## A.1 COORDINATE SYSTEMS Inverse transformation The inverse transformation \(\mathbf{z} = \mathcal{F}^{- 1}(\mathbf{x})\) involves fully local computations of bond lengths and angles. \[b_{i} = \|\pmb{x}_{i} - \pmb{x}_{i - 1}\|,\quad a_{i} = \arccos \left(-\hat{\pmb{u}}_{i}\cdot \hat{\pmb{u}}_{i - 1}\right),\quad d_{i} = \mathrm{sign}\left(\hat{\pmb{u}}_{i - 2}\cdot \hat{\pmb{n}}_{i}\right)\arccos \left(\hat{\pmb{n}}_{i - 1}\cdot \hat{\pmb{n}}_{i}\right).\] Jacobian The Jacobian \(\frac{\partial \mathbf{x}}{\partial \mathbf{z}}\) defines the infinitesimal response of the Cartesian coordinates \(\mathbf{x}\) to perturbations of the internal coordinates \(\mathbf{z}\). It will be important both for converting Cartesian forces into angular torques and bond forces and for the development of our transform integrator. It is <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 7: Component architectures. (Left) The energy function is the inner product of sequence-based weights and structure-based features. A combination of low- and high-level features capture multi-scale constraints on structure. (Center) The structure network is a lightweight convolutional network operating on both 1D (backbone) and 2D (interaction) features. (Right) Convolutional neural network modules used for sequence processing are composed of residual blocks that interleave spatial convolutions with 1x1 convolutions. </center> defined element-wise as \[\frac{\partial\pmb{x}_{j}}{\partial b_{i}} = \begin{cases}\hat{\pmb{u}}_{i} & i\leq j\\ 0 & i > j\end{cases},\qquad \frac{\partial\pmb{x}_{j}}{\partial a_{i}} = \begin{cases}\hat{\pmb{n}}_{i}\times (\pmb{x}_{j} - \pmb{x}_{i - 1}) & i\leq j\\ 0 & i > j\end{cases},\qquad \frac{\partial\pmb{x}_{j}}{\partial d_{i}} = \begin{cases}\hat{\pmb{u}}_{i - 1}\times (\pmb{x}_{j} - \pmb{x}_{i - 1}) & i\leq j\\ 0 & i > j\end{cases}.\] The Jacobian has a simple form that can be understood by imagining the protein backbone as a robot arm that is planted at \(\pmb{x}_{0}\) (Figure 2B). Increasing or decreasing the bond length \(b_{i}\) extends or retracts all downstream coordinates along the bond's axis, moving a bond angle \(a_{i}\) drives circular motion of all downstream coordinates around the bond normal vector \(\hat{\pmb{n}}_{i}\) centered at \(\pmb{x}_{i - 1}\), and moving a dihedral angle \(d_{i}\) drives circular motion of downstream coordinates \(\pmb{x}_{j}\) around the bond vector \(\hat{\pmb{u}}_{i - 1}\) centered at \(\pmb{x}_{i - 1}\).
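To make the inverse transformation concrete, the following is a minimal NumPy sketch that recovers bond lengths, bond angles, and dihedrals from coarse backbone coordinates using the formulas above. The function name, the `[L, 3]` coordinate layout, and the construction of \(\hat{n}_i\) as the normalized cross product of consecutive bond vectors are our assumptions, not the paper's code.

```python
import numpy as np

def inverse_transform(x):
    """Sketch of z = F^{-1}(x): recover bond lengths b, bond angles a, and
    dihedrals d from Cartesian backbone coordinates x of shape [L, 3].
    Indexing conventions are illustrative."""
    u = x[1:] - x[:-1]                                 # bond vectors x_i - x_{i-1}
    b = np.linalg.norm(u, axis=1)                      # bond lengths b_i
    u_hat = u / b[:, None]                             # unit bond vectors
    # bond angles a_i = arccos(-u_hat_i . u_hat_{i-1})
    a = np.arccos(np.clip(-(u_hat[1:] * u_hat[:-1]).sum(1), -1.0, 1.0))
    # assumed bond-plane normals n_i ~ u_hat_{i-1} x u_hat_i, normalized
    n = np.cross(u_hat[:-1], u_hat[1:])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # dihedrals d_i = sign(u_hat_{i-2} . n_i) * arccos(n_{i-1} . n_i)
    cos_d = np.clip((n[1:] * n[:-1]).sum(1), -1.0, 1.0)
    sign = np.sign((u_hat[:-2] * n[1:]).sum(1))
    d = sign * np.arccos(cos_d)
    return b, a, d
```

Note that, as the formulas indicate, every output depends only on a few neighboring positions, which is what makes the inverse transformation fully local and parallelizable.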
Unconstrained representations Bond lengths and angles are subject to the constraints \(b_{i} > 0\) and \(0< a_{i}< \pi\). We enforce these constraints by representing these degrees of freedom in terms of fully unconstrained variables \(\tilde{b}_{i}\) and \(\tilde{a}_{i}\) via the transformations \(b_{i} = \log \left(1 + e^{\tilde{b}_{i}}\right)\) and \(a_{i} = \frac{\pi}{1 + e^{- \tilde{a}_{i}}}\). All references to the internal coordinates \(\pmb{z}\) and Jacobians \(\frac{\partial\pmb{x}}{\partial\pmb{z}}\) will refer to the use of fully unconstrained representations (Table 2). ## A.2 ENERGY FUNCTION Figure 6 provides an overall schematic of the model, including the components of the energy function. CNN primitives All convolutional neural network primitives in the model schematic (Figure 6) follow a common structure consisting of stacks of residual blocks. Each residual block <--- Page Split ---> Table 2: Coordinate systems and representations for protein structure. <table><tr><td>Variable</td><td>Notation</td><td>Shape</td></tr><tr><td>Sequence</td><td>s</td><td>[L, 20]</td></tr><tr><td>Cartesian coordinates (coarse)</td><td>x</td><td>[3L, 1]</td></tr><tr><td>Internal coordinates</td><td>z</td><td>[3L, 1]</td></tr><tr><td>Cartesian coordinates (atomic)</td><td>X</td><td>[3L, A]</td></tr><tr><td>Cartesian coordinates for position i</td><td>x_i</td><td>[3, 1]</td></tr><tr><td>Internal coordinates for position i</td><td>z_i = [b̃_i ã_i d_i]^T</td><td>[3, 1]</td></tr><tr><td>Unit vector from x_{i-1} to x_i</td><td>û_i</td><td>[3, 1]</td></tr><tr><td>Unit vector normal to bond plane at x_{i-1}</td><td>n̂_i</td><td>[3, 1]</td></tr><tr><td>Bond length ||x_i - x_{i-1}||</td><td>b_i</td><td>[1]</td></tr><tr><td>Bond angle ∠(û_i, -û_{i-1})</td><td>a_i</td><td>[1]</td></tr><tr><td>Dihedral angle ∠(n̂_i, n̂_{i-1})</td><td>d_i</td><td>[1]</td></tr><tr><td>Unconstrained bond length</td><td>b̃_i</td><td>[1]</td></tr><tr><td>Unconstrained bond angle</td><td>ã_i</td><td>[1]</td></tr><tr><td>Jacobian matrix</td><td>∂x/∂z</td><td>[3L, 3L]</td></tr></table> consists of a layer of channel mixing (1x1 convolution), a variable-sized convolution layer, and a second layer of channel mixing. We use dropout with \(p = 0.9\) and Batch Renormalization (Ioffe, 2017) on all convolutional layers. Batch Renormalization rather than Batch Normalization was necessary owing to the large variation in the sizes of the protein structures and the resulting large variation in mini-batch statistics. ## A.3 INTERNAL COORDINATE DYNAMICS WITH A TRANSFORM INTEGRATOR Why sampling vs. optimization Deterministic methods for optimizing the energy \(U(\pmb {x}; \pmb {s})\) such as gradient descent or quasi-Newton methods can effectively seek local minima of the energy surface, but are challenged to optimize globally and completely ignore the contribution of the widths of energy minima (entropy) to their probability. We prefer sampling to optimization for three reasons: (i) noise in sampling algorithms can facilitate faster global conformational exploration by overcoming local minima and saddle points, (ii) sampling generates populations of states that respect the width (entropy) of wells in \(U\) and can be used for uncertainty quantification, and (iii) sampling allows training with an approximate Maximum Likelihood objective (Equation 5). Langevin Dynamics The Langevin dynamics are a stochastic dynamics that sample from the canonical ensemble.
They are defined as a continuous-time stochastic differential equation, and are simulated in discrete time with the first-order discretization \[\pmb{x}^{(t + \epsilon)}\leftarrow \pmb{x}^{(t)} - \frac{\epsilon}{2}\nabla_{\pmb{x}}U^{(t)} + \sqrt{\epsilon}\mathbf{p},\qquad \mathbf{p}\sim \mathcal{N}(0,I). \quad (8)\] Each time step of length \(\epsilon\) involves a descent step down the energy gradient plus a perturbation of Gaussian noise. Importantly, as time tends toward infinity, the time-distribution of the Langevin dynamics converges to the canonical ensemble. Our goal is to design a dynamics that converges to an approximate sample in a very short period of time. Table 3: Model architecture. Input number of channels \(q = 20\) for sequence-only and \(q = 40\) for profiles. <table><tr><td>Location</td><td>Type</td><td>Channels</td><td># Blocks</td><td>Width</td><td>Dilation</td><td>Stride</td></tr><tr><td>Pre-MPNN</td><td>1D</td><td>128</td><td>12</td><td>3</td><td>[1, 2, 4, 8] × 3</td><td>1</td></tr><tr><td>MPNN</td><td>1D</td><td>128</td><td>4</td><td>3</td><td>[1, 2, 4, 8]</td><td>1</td></tr><tr><td>MPNN</td><td>2D</td><td>50</td><td>1</td><td>7</td><td>1</td><td>1</td></tr><tr><td>Post-MPNN</td><td>1D</td><td>q+256</td><td>12</td><td>3</td><td>[1, 2, 4, 8] × 3</td><td>1</td></tr><tr><td>Post-MPNN*</td><td>2D</td><td>100</td><td>1</td><td>9</td><td>1</td><td>1</td></tr><tr><td>Imputation</td><td>1D</td><td>q+256</td><td>12</td><td>3</td><td>[1, 2, 4, 8] × 3</td><td>1</td></tr></table> <--- Page Split ---> Coordinate systems and preconditioning The efficiency with which Langevin dynamics explore conformational space is highly dependent on the geometry of the energy landscape \(U(\pmb {x})\), which in turn depends on how the system is parameterized. Molecular energy functions in Cartesian coordinates tend to exhibit strong correlations between variables that result from the requirement that underlying molecular geometries satisfy highly stereotyped bond lengths and angles. As a result, simulations of naive Cartesian Langevin dynamics require a small time step to satisfy these constraints and tend to be dominated by high-frequency, localized vibrations of the chain. The large, global motions that are essential to protein folding can require thousands to millions of time steps to manifest. A well-known solution to the complex dependencies of Cartesian coordinates is to carry out optimization and simulation in internal coordinates, which directly parameterize molecular geometries in terms of the bond lengths and angles (Parsons et al., 2005). Internal coordinate parameterizations possess the advantages that (i) bond length and angle constraints are easy to satisfy and (ii) small changes to a single angle can drive large, coherent rearrangements of the chain (Figure 2B). For example, simply replacing \(\mathbf{x}\)'s with \(\mathbf{z}\)'s in Equation 8 yields the dynamics \[\mathbf{z}^{(t + \epsilon)}\leftarrow \mathbf{z}^{(t)} - \frac{\epsilon}{2}\nabla_{\mathbf{z}}U^{(t)} + \sqrt{\epsilon}\mathbf{p},\qquad \mathbf{p}\sim \mathcal{N}(0,I).\] The advantages and disadvantages of the two coordinate systems are complementary: Cartesian dynamics efficiently sample local structural rearrangements and inefficiently sample global chain motions, while internal coordinate dynamics efficiently sample global, correlated motions of the chain but are challenged to make precise local rearrangements.
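The discretizations above (and the preconditioned variant in Equation 9 below) amount to a one-line update per time step. The following is a minimal NumPy sketch; the gradient oracle `grad_U` and the optional dense preconditioner `C` are hypothetical names, and a real simulator would exploit structure rather than dense linear algebra.

```python
import numpy as np

def langevin_step(z, grad_U, eps, C=None):
    """One discretized Langevin update: a half-step down the energy
    gradient plus Gaussian noise, optionally preconditioned by an
    'inverse mass' matrix C (cf. Equations 8 and 9)."""
    p = np.random.randn(*z.shape)
    if C is None:
        return z - 0.5 * eps * grad_U(z) + np.sqrt(eps) * p
    # sqrt(eps * C) via Cholesky so the injected noise has covariance eps*C
    L = np.linalg.cholesky(eps * C)
    return z - 0.5 * eps * C @ grad_U(z) + L @ p
```

The same function serves for Cartesian or internal coordinates; only the argument and the gradient oracle change, which is exactly the substitution described above.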
The time dynamics of these alternative parameterizations need not be kinetically realistic to converge to the correct distribution over conformational space. Different coordinate systems warp the local geometry of the energy landscape and will in turn rescale and redirect which global and local vibrations dominate the dynamics. This relative rescaling can be further optimized by applying a global linear transformation to the energy landscape with a preconditioning 'inverse mass' matrix \(C\), giving the update \[\mathbf{z}^{(t + \epsilon)}\leftarrow \mathbf{z}^{(t)} - \frac{\epsilon C}{2}\nabla_{\mathbf{z}}U^{(t)} + \sqrt{\epsilon C}\mathbf{p},\qquad \mathbf{p}\sim \mathcal{N}(0,I). \quad (9)\] Transform integrator The need to rebuild Cartesian geometry \(\mathbf{x}\) from internal coordinates \(\mathbf{z}\) with \(\mathcal{F}(\mathbf{z})\) at every time step is one of the major costs of conformational sampling codes based on internal coordinates (Parsons et al., 2005), because it is intrinsically sequential. Here we show how it is possible to bypass the need for geometry reconstruction at every step by instead computing on-the-fly geometry modifications. Imagine following a change to the internal coordinates \(\Delta \mathbf{z}^{(t)}\) along a straight path from \(\mathbf{z}^{(t)}\) to \(\mathbf{z}^{(t + \epsilon)}\) and tracking the corresponding nonlinear path of the Cartesian coordinates from \(\mathbf{x}^{(t)}\) to \(\mathbf{x}^{(t + \epsilon)}\). If this path is indexed by \(u\in (t,t + \epsilon)\), then the dynamics of \(\mathbf{x}\) with respect to \(u\) are given by \(\frac{\partial\mathbf{x}}{\partial u} = \frac{\partial\mathbf{x}}{\partial \mathbf{z}}\frac{\partial \mathbf{z}}{\partial u} = \frac{1}{\epsilon}\frac{\partial\mathbf{x}}{\partial \mathbf{z}}\Delta \mathbf{z}\). Integrating the dynamics of \(\mathbf{x}\) gives \[\begin{aligned} \mathbf{x}^{(t + \epsilon)} & = \mathcal{F}\left(\mathbf{z}^{(t)} + \Delta \mathbf{z}^{(t)}\right) \\ & = \mathbf{x}^{(t)} + \int_{t}^{t + \epsilon}\frac{1}{\epsilon}\frac{\partial\mathbf{x}}{\partial \mathbf{z}}^{(u)}\Delta \mathbf{z}^{(t)}du. \end{aligned}\] This illustrates that it is possible to convert coordinate changes in one coordinate system (e.g. internal coordinates) to coordinate changes in another (e.g. Cartesian) by integrating an autonomous system of ODEs with dynamics governed by the Jacobian. Since \(\epsilon\) is small, we integrate this system with a single step of Heun's method (improved Euler), where we first substitute an Euler approximation to predict \(\mathbf{x}^{(t + \epsilon)}\) as \[\tilde{\mathbf{x}}^{(t + \epsilon)}\approx \mathbf{x}^{(t)} + \frac{\partial\mathbf{x}^{(t)}}{\partial\mathbf{z}}\Delta \mathbf{z}^{(t)},\] and then substitute the Jacobian evaluated at the predicted state \(\tilde{\mathbf{x}}^{(t + \epsilon)}\) to form the trapezoidal approximation \[\mathbf{x}^{(t + \epsilon)}\approx \mathbf{x}^{(t)} + \frac{1}{2}\left(\frac{\partial\mathbf{x}^{(t)}}{\partial\mathbf{z}} +\frac{\partial\tilde{\mathbf{x}}^{(t + \epsilon)}}{\partial\mathbf{z}}\right)\Delta \mathbf{z}^{(t)}.\] <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 8: Accounting for second-order errors is essential for internal coordinate dynamics. (Top) Discarding the corrector step rapidly accumulates errors due to the curvilinear motions of internal coordinate dynamics. (Bottom) Heun integration with a corrector step accounts for curvature in curvilinear motion. </center> The comparison of this algorithm with naive integration is given in Figure 8.
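The predictor-corrector step itself is only a few lines. The following is a minimal sketch, assuming a hypothetical `jacobian(x)` callable that materializes \(\partial \mathbf{x}/\partial \mathbf{z}\) as a dense matrix; an efficient implementation would instead apply the Jacobian-vector product directly, using its robot-arm structure.

```python
import numpy as np

def transform_step(x, dz, jacobian):
    """Heun (predictor-corrector) step of the transform integrator: convert
    an internal-coordinate change dz into a Cartesian change without
    rebuilding the geometry with F."""
    J0 = jacobian(x)
    x_pred = x + J0 @ dz                 # Euler predictor
    J1 = jacobian(x_pred)                # Jacobian at the predicted state
    return x + 0.5 * (J0 + J1) @ dz      # trapezoidal corrector
```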
The corrector step is important for eliminating the large second-order errors that arise in curvilinear motions caused by angle changes (Figure 2B and Figure 8). In principle, higher-order numerical integration methods or more time steps could increase accuracy at the cost of more evaluations of the Jacobian, but we found that second-order effects seemed to be the most relevant on our timescales. Mixed integrator Cartesian dynamics favor local structural rearrangements, such as transitioning from a helical to an extended conformation, while internal coordinate dynamics favor global motions such as a change of the overall fold topology. Since both kinds of structural rearrangements are important to the folding process, we form a hybrid integrator (Algorithm 3) by taking one step with each integrator per force evaluation. Translational and rotational detrending Both Cartesian and internal coordinates are overparameterized with \(3L\) degrees of freedom, since only \(3L - 6\) degrees of freedom are necessary to encode a centered and un-oriented structure. As a consequence, a significant fraction of the per-time-step changes \(\Delta x\) can be explained by rigid translational and rotational motions of the entire structure. We isolate and remove these components of motion by treating the system \(\{x_{1},\ldots ,x_{L}\}\) as a set of particles with unit mass, and computing effective structural translational and rotational velocities by summing point-wise momenta. The translational component of motion is simply the average displacement across positions, \(\Delta x_{i}^{\mathrm{trans}} = \langle \Delta x_{i}\rangle\). For rotational motion around the center of mass, it is convenient to define the non-translational motion as \(\Delta \bar{x}_{i} = \Delta x_{i} - \Delta x_{i}^{\mathrm{trans}}\) and the centered Cartesian coordinates as \(\bar{x}_{i} = x_{i} - \langle x_{i}\rangle\). The point-wise angular momentum is then \(l_{i} = \bar{x}_{i}\times \Delta \bar{x}_{i}\), and we define a total angular velocity of the structure \(\omega\) by summing these and dividing by the moment of inertia as \(\omega = (\sum_{i}l_{i}) / (\sum_{i}\| \bar{x}_{i}\|_{2}^{2})\). We convert the angular velocity \(\omega\) into Cartesian displacements with an unrolled Heun integration as \(\Delta x_{i}^{\mathrm{Rot}} = \frac{1}{2}\omega \times (\bar{x}_{i} + \omega \times \bar{x}_{i})\), which leaves the isolated structural motions as \(\Delta x_{i}^{\mathrm{Struct}} = \Delta x_{i} - \Delta x_{i}^{\mathrm{trans}} - \Delta x_{i}^{\mathrm{Rot}}\). <--- Page Split ---> Algorithm 3: Mixed Integrator Input: Initial state \(z^{(0)}\), energy \(U(x)\), time steps \(\epsilon_{x},\epsilon_{z}\), total time \(T\), preconditioners \(\mathbf{C}_{x},\mathbf{C}_{z}\) Output: Trajectory \(x^{(0)},\ldots ,x^{(T)}\) Initialize \(x^{(0)}\leftarrow \mathcal{F}(z^{(0)})\). while \(t< T\) do \(f_{x}\leftarrow \nabla_{x}U\); \(\Delta x^{(Cart)}\leftarrow \mathrm{CartesianStep}(x^{(t)},f_{x},\epsilon_{x},\mathbf{C}_{x})\); \(\Delta x^{(Int)}\leftarrow \mathrm{ClippedInternalStep}(x + \Delta x^{(Cart)},f_{x},\epsilon_{z},\mathbf{C}_{z})\); \(x\leftarrow x + \mathrm{Detrend}(\Delta x^{(Cart)} + \Delta x^{(Int)})\); \(t\leftarrow t + \epsilon\); end Speed clipping We found it helpful to stabilize the model by enforcing a speed limit on overall structural motions for the internal coordinate steps.
This prevents small changes to the energy function during learning from causing extreme dynamics that in turn produce a non-informative learning signal. To accomplish this, we translationally and rotationally detrend the update of the predictor step \(\Delta x\) and compute a hypothetical time step \(\hat{\epsilon}_{z}\) that would limit the fastest motion to 2 Angstroms per iteration. We then compute modified predictor and corrector steps subject to this new, potentially slower, time step. While this breaks the asymptotics of Langevin dynamics, (i) it is unlikely on our timescales that we achieve stationarity and (ii) it can be avoided by regularizing the dynamics away from situations where clipping is necessary. In the future, considering non-Gaussian perturbations with kinetic energies similar to Relativistic Monte Carlo (Lu et al., 2017) might accomplish a similar goal in a more principled manner. The final integrator combining these ideas is presented in Figure 3. ## B APPENDIX B: TRAINING ## B.1 DATA For a training and validation set, we downloaded all protein domains of length \(L\leq 200\) from Classes \(\alpha\), \(\beta\), and \(\alpha /\beta\) in CATH release 4.1 (2015), and then hierarchically purged a randomly selected set of A, T, and H categories. This created three validation sets of increasing levels of difficulty: H, which contains domains with superfamilies that are excluded from train (but whose fold topologies may be present), T, which contains fold topologies that were excluded from train (fold generalization), and A, which contains secondary structure architectures that were excluded from train. For a test set, we downloaded all folds that were new to CATH release 4.2 (2017), which (due to a propensity of structural biology to make new structures of previously solved folds) provided 10,381 test domains. We further stratified this test set into C, A, T, and H categories based on their nearest CATH classification in the training set. We also analyzed test set stratifications based on nearest neighbors in both training and validation in Figure 12. We note that the validation set was not explicitly used to tune hyperparameters due to the large cost of training (2 months on 2 M40 GPUs), but we did keep track of validation statistics during training. ## B.2 SGD We optimized all models for 200,000 iterations with Adam (Kingma & Ba, 2014). ## B.3 LOSS We optimize the model using a composite loss containing several terms, which are detailed as follows. <--- Page Split ---> Distance loss We score distances in the model with a contact-focused distance loss \[\sum_{i< j}w_{i j}\left|D_{i j}^{(\mathrm{Model})} - D_{i j}^{(\mathrm{Data})}\right|,\] where the contact-focusing weights are \[w_{i j} = \frac{\sigma\left(\alpha(D_{0} - \min (D_{i j}^{(\mathrm{Model})},D_{i j}^{(\mathrm{Data})}))\right)}{\sum_{k< l}\sigma\left(\alpha(D_{0} - \min (D_{k l}^{(\mathrm{Model})},D_{k l}^{(\mathrm{Data})}))\right)}\] and \(\sigma (u) = \frac{1}{1 + \exp(- u)}\) is the sigmoid function. Angle loss We use the loss \[\mathcal{L}_{\mathrm{angles}} = \sum_{i}||\mathcal{H}(\mathbf{z}_{i}^{(T)}) - \mathcal{H}(\mathbf{z}_{i}^{(\mathrm{Data})})||,\] where \(\mathcal{H}(\mathbf{z}_{i}) = [\cos a_{i},\ \sin a_{i}\cos d_{i},\ \sin a_{i}\sin d_{i}]^{T}\) are unit-length feature vectors that map the angles \(\{a_{i},d_{i}\}\) to the unit sphere.
Other angular losses, such as the negative log probability of a von Mises-Fisher distribution, are based on the inner product of the feature vectors \(\mathcal{H}(\mathbf{z}_{a})\cdot \mathcal{H}(\mathbf{z}_{b})\) rather than the Euclidean distance \(||\mathcal{H}(\mathbf{z}_{a}) - \mathcal{H}(\mathbf{z}_{b})||\) between them. It is worth noting that these two quantities are directly related by \(||\mathcal{H}(\mathbf{z}_{a}) - \mathcal{H}(\mathbf{z}_{b})|| = \sqrt{2(1 - \mathcal{H}(\mathbf{z}_{a})\cdot\mathcal{H}(\mathbf{z}_{b}))}\). Taking \(\mathbf{z}_{a}\) as fixed and \(\mathbf{z}_{b}\) as the argument, the Euclidean loss has a cusp at \(\mathbf{z}_{a}\) whereas the von Mises-Fisher loss is smooth around \(\mathbf{z}_{a}\). This is analogous to the difference between \(L^{1}\) and \(L^{2}\) losses, where the cusped \(L^{1}\) loss favors median behavior while the smooth \(L^{2}\) loss favors average behavior. Trajectory loss In a further analogy to reinforcement learning, damped backpropagation through time necessitates an intermediate loss function that can criticize transient states of the simulator. We compute this by featurizing the per-time-step coordinates as the product \(D_{i j}\hat{\mathbf{v}}_{i j}\) (Figure 2C) and doing the same contact-weighted averaging as the distance loss. Template Modelling (TM) Score The TM-score (Zhang & Skolnick, 2005), \[\sum_{i}\frac{1}{1 + \left(\frac{D_{i}}{D_{0}}\right)^{2}},\] is a measure of superposition quality between two protein structures on \([0,1]\) that was presented as an approximately length-independent alternative to RMSD. The TM-score is the best attainable value of the preceding quantity over all possible superpositions of the two structures, where \(D_{i} = ||\mathbf{x}_{i}^{(\mathrm{Model})} - \mathbf{x}_{i}^{(\mathrm{Data})}||\). This requires iterative optimization, which we implemented with a sign gradient descent with 100 iterations to optimally superimpose the model and target structure. We backpropagate through this unrolled optimization process as well as that of the simulator. Hydrogen bond loss We determine intra-backbone hydrogen bonds using the electrostatic model of DSSP (Kabsch & Sander, 1983). First, we place virtual hydrogens at 1 Angstrom along the negative angle bisector of the \(C_{i - 1} - N_{i} - C\alpha_{i}\) bond angle. Second, we compute a putative energy \(U_{ij}^{\mathrm{h\text{-}bond}}\) (in kcal/mol) for each potential hydrogen bond from an amide donor at \(i\) to a carbonyl acceptor at \(j\) as \[U_{i j}^{\mathrm{h\text{-}bond}}(\mathbf{X}) = 332\left(\frac{q_{N}q_{O}}{D_{N O}} +\frac{q_{H}q_{C}}{D_{H C}} +\frac{q_{H}q_{O}}{D_{H O}} +\frac{q_{N}q_{C}}{D_{N C}}\right) = 0.084\cdot 332\left(\frac{1}{D_{N O}} +\frac{1}{D_{H C}} -\frac{1}{D_{H O}} -\frac{1}{D_{N C}}\right),\] where \(D_{a b} = ||\mathbf{X}_{i,a} - \mathbf{X}_{j,b}||\) is the Euclidean distance between atom \(a\) of residue \(i\) and atom \(b\) of residue \(j\).
We then make hard assignments of hydrogen bonds for the data with \[y_{i j}^{\mathrm{data}} = \mathbf{1}\left(U_{i j}^{\mathrm{h\text{-}bond}}(\mathbf{X}^{(\mathrm{data})})< -0.5\right).\] <--- Page Split ---> We 'predict' the probabilities of hydrogen bonds of the data given the model via logistic regression of soft model assignments as \[y_{ij}^{\mathrm{model}} = \sigma \left(a\sigma \left(b\left(-U_{ij}^{\mathrm{h\text{-}bond}}(\mathbf{X}^{(\mathrm{model})}) + 0.5\right)\right) + c\right),\] where \(a,b,c\) are learned parameters, with softplus parameterizations enforcing \(a,b > 0\), and \(\sigma (u) = 1 / (1 + \exp (- u))\) is the sigmoid function. The final hydrogen bond loss is the cross-entropy between these predictions and the data, \[\mathcal{L}_{\mathrm{h\text{-}bond}} = -\sum_{|i - j| > 2}\left[y_{ij}^{\mathrm{data}}\log y_{ij}^{\mathrm{model}} + (1 - y_{ij}^{\mathrm{data}})\log \left(1 - y_{ij}^{\mathrm{model}}\right)\right].\] Secondary Structure Prediction We output standard 8-class predictions of secondary structure and score them with a cross-entropy loss. ## B.4 STABILIZING BACKPROPAGATION THROUGH TIME The combination of energy function, simulator, and refinement network can build an atomic-level model of protein structure from sequence, and our goal is to optimize (meta-learn) this entire procedure by gradient descent. Before going into specifics of the loss function, however, we will discuss challenges and solutions for computing gradients of unrolled simulations in the face of chaos. ## B.5 CHAOS AND EXPLODING GRADIENTS Gradient-based learning of iterative computational procedures such as Recurrent Neural Networks (RNNs) is well known to be subject to the problems of exploding and vanishing gradients (Pascanu et al., 2013). Informally, these occur when the sensitivities of model outputs to inputs become either extremely large or extremely small and the gradient is no longer an informative signal for optimization. We find that backpropagation through unrolled simulations such as those presented here is no exception to this rule. Often we observed that a model would productively learn for tens of thousands of iterations, only to suddenly and catastrophically exhibit diverging gradients from which the optimizer could not recover, even when the observed simulation dynamics exhibited no obvious qualitative changes in behavior and the standard solutions of gradient clipping (Pascanu et al., 2013) were in effect. Similar phenomena have been observed previously in the context of meta-learning (Maclaurin et al., 2015) and are explored in detail in a concurrent work (Parmas et al., 2018). In Figure 9, we furnish a minimal example that illustrates how chaos can lead to irrevocable loss of learning. We see that even for a simple particle-in-a-well, some choices of system parameters (such as too large a time step) can lead to chaotic dynamics, which are synonymous with explosive gradients. This example is hardly contrived, and is in fact a simple model of the distance potentials between coordinates in our simulations. Moreover, it is important to note that chaos may not be easy to diagnose: for learning rates \(\alpha \in [1.7,1.8]\) the position of the particle \(x\) remains more or less confined in the well while the sensitivities diverge to \(10^{200}\). It seems unlikely that meta-learning would be able to recover after descending into chaos.
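The phenomenon is easy to reproduce. The following toy sketch iterates gradient descent on a quartic double well, a hypothetical stand-in for the figure's well, and accumulates the log sensitivity \(\log_{10}|dx^{(T)}/dx^{(0)}|\) via the product of per-step Jacobians (cf. Equation 10 below): a small step size contracts toward a minimum, while a larger one keeps the iterate bounded yet drives the sensitivity to astronomically large values.

```python
import numpy as np

def descend_with_sensitivity(x0, alpha, steps=200):
    """Gradient descent x <- x - alpha * U'(x) on the double well
    U(x) = (x^2 - 1)^2 / 4, i.e. U'(x) = x^3 - x, accumulating
    log10 |dx_T/dx_0| by the chain rule over per-step Jacobians."""
    x, log_s = x0, 0.0
    for _ in range(steps):
        jac = 1.0 - alpha * (3.0 * x ** 2 - 1.0)   # d x_{t+1} / d x_t
        log_s += np.log10(abs(jac) + 1e-300)       # avoid log10(0)
        x = x - alpha * (x ** 3 - x)               # the descent step
    return x, log_s

# alpha=0.3 converges to a minimum (log sensitivity very negative);
# alpha=2.0 stays bounded in [-sqrt(2), sqrt(2)] but is chaotic, so the
# sensitivity grows roughly like 3^t and gradient clipping cannot help.
for alpha in (0.3, 2.0):
    x, log_s = descend_with_sensitivity(x0=0.5, alpha=alpha)
    print(f"alpha={alpha}: x={x:+.3f}, log10|dx_T/dx_0|={log_s:+.1f}")
```

For \(\alpha = 2\) the map \(x \mapsto 3x - 2x^3\) is conjugate to angle tripling on \([-\sqrt{2},\sqrt{2}]\), so the divergence of the accumulated Jacobian is exact, not a numerical artifact.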
The view per time step Exploding gradients and chaotic dynamics involve the same mechanism: a multiplicative accumulation of sensitivities. In dynamical systems this is frequently phrased as 'exponentially diverging sensitivity to initial conditions'. Intuitively, this can be understood by examining how the Jacobian of an entire trajectory decomposes into a product of Jacobians as \[\frac{\partial x^{(T)}}{\partial x^{(0)}} = \frac{\partial x^{(T)}}{\partial x^{(T - 1)}}\frac{\partial x^{(T - 1)}}{\partial x^{(T - 2)}}\dots \frac{\partial x^{(1)}}{\partial x^{(0)}}. \quad (10)\] When the norms of the per-time-step Jacobians \(\frac{\partial x^{(t)}}{\partial x^{(t - 1)}}\) are typically larger than 1, the sensitivity \(\| \frac{\partial x^{(T)}}{\partial x^{(0)}}\|\) will grow exponentially with \(T\). Ideally, we would keep these norms well-behaved, which is the rationale behind recent work on the stabilization of RNNs (Henaff et al., 2016; Chen et al., 2018b). Next we will offer a general-purpose regularizer to approximately enforce this goal for any differentiable computational iteration with continuous state. <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 9: Chaos impedes meta-learning for gradient descent in a well. (a) Gradient descent of a particle in a well with initial condition \(x^{(0)}\) and step size \(\alpha\). (b) Orbit diagrams visualize long-term dynamics from iterations 1000 to 2000 of the position \(x\) (top) and the gradient \(\frac{d x^{(t)}}{d x^{(0)}}\) (bottom). When the step size \(\alpha\) is small, these dynamics converge to a periodic orbit over \(2^{k}\) values where \(0 \leq k < \infty\). After some critical step size, the dynamics undergo a period-doubling bifurcation (Strogatz, 2018), become chaotic, and the gradients regularly diverge to huge numbers. </center> Approximate Lipschitz conditions One condition that guarantees that a deterministic map \(F: \mathbb{R}^{N} \to \mathbb{R}^{N}\), \(x_{t} = F(x_{t - 1}, \theta)\), cannot exhibit exponential sensitivity to initial conditions is the condition of being non-expansive (also known as 1-Lipschitz or metric). That is, for any two input points \(x_{a}, x_{b} \in \mathbb{R}^{N}\), iterating the map cannot increase the distance between them, as \(|F(x_{a}, \theta) - F(x_{b}, \theta)| \leq |x_{a} - x_{b}|\). Reapplying the map to the bound immediately implies \[|F^{(t)}({\pmb x},\theta) - F^{(t)}({\pmb x} + \Delta {\pmb x},\theta)|\leq |\Delta {\pmb x}| \quad (11)\] for any number of iterations \(t\). Thus, two initially close trajectories iterated through a non-expansive mapping must remain at least that close for arbitrary time. We approximately enforce non-expansivity by performing an online sensitivity analysis within simulations. At randomly selected time steps, the current time step \(x^{(t)}\) is rolled back to the preceding state and re-executed with a small Gaussian perturbation to the state, \(\delta \sim \mathcal{N}(0, 10^{- 4} I)\). We regularize the sensitivity by adding \[\mathcal{L}_{\mathrm{Lyapunov}} = \max \left(0, \log \frac{|F(x^{(t)}) - F(x^{(t)} + \delta)|}{|\delta|}\right) \quad (12)\] to the loss. Interestingly, the stochastic nature of this approximate regularizer is likely a good thing; a truly non-expansive map is quite limited in what it can model. However, being 'almost' non-expansive seems to be incredibly helpful for learning.
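A minimal PyTorch sketch of the regularizer in Equation 12 follows; the simulator step `F`, the perturbation scale, and the use of a single probe per call are our assumptions.

```python
import torch

def lyapunov_penalty(F, x_t, sigma=1e-2):
    """Stochastic non-expansiveness penalty (cf. Equation 12): re-execute
    one simulator step F from a perturbed copy of the state and penalize
    the log expansion of the perturbation, clamped at zero so contraction
    is never rewarded. sigma=1e-2 matches delta ~ N(0, 1e-4 I)."""
    delta = sigma * torch.randn_like(x_t)
    expansion = torch.norm(F(x_t + delta) - F(x_t)) / torch.norm(delta)
    return torch.clamp(torch.log(expansion), min=0.0)
```

Because the penalty is differentiable through both evaluations of `F`, it can simply be added to the training loss at the randomly selected time steps.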
Damped Backpropagation through Time The approximate Lipschitz conditions (or Lyapunov regularization) encourage but do not guarantee stable backpropagation. When chaotic phase transitions or other instabilities occur, we need a fall-back plan to be able to continue learning. At the same time, we would like gradient descent to proceed in the usual manner when simulator dynamics <--- Page Split ---> Algorithm 4: Damped Backpropagation Through Time Input: Initial state \(\boldsymbol{x}^{(0)}\), time-stepping function \(F(\boldsymbol {x},\boldsymbol {s},\boldsymbol {\theta})\), external inputs \(s_{1},\ldots ,s_{T}\), parameters \(\theta\), loss function \(\mathcal{L}(\boldsymbol {x}_{1},\ldots ,\boldsymbol {x}_{T})\), damping factor \(0< \gamma < 1\) Output: Exponentially damped gradient \(\nabla_{\theta}\mathcal{L}\) Initialize \(\boldsymbol{x}^{(0)}\leftarrow \mathcal{F}(\boldsymbol{z}^{(0)})\). for \(t\leftarrow 2,\ldots ,T\) do Compute time step \(\tilde{\boldsymbol{x}}_{t}\leftarrow F(\boldsymbol{x}_{t - 1},\boldsymbol{s}_{t},\theta)\). Decay the gradient \(\boldsymbol{x}_{t}\leftarrow \gamma \tilde{\boldsymbol{x}}_{t} + (1 - \gamma)\perp(\tilde{\boldsymbol{x}}_{t})\), where \(\perp (\cdot)\) is the stop_gradient function. end Compute loss \(\mathcal{L}(\boldsymbol{x}_{1},\ldots ,\boldsymbol{x}_{T})\). Compute gradient \(\nabla_{\theta}\mathcal{L}\leftarrow \mathrm{Autodiff}(\mathcal{L},\theta)\). are stable. To this end we introduce a damping factor to backpropagation that can adaptively combat exponentially diverging gradients with exponential discounting (Algorithm 4). Damped backpropagation can be seen as a continuous alternative to the standard approach of Truncated Backpropagation Through Time. Rather than setting the gradient to 0 after some fixed interval of time steps, we decay it on the backwards pass of reverse-mode differentiation by a factor of \(\gamma\). This is mildly evocative of the notion of discounted future rewards in reinforcement learning. During backpropagation this causes a biased estimate of Jacobians that favors short-term sensitivities (or rewards) as \[\frac{\partial\hat{\boldsymbol{x}}^{(t)}}{\partial\boldsymbol{x}^{(t - k)}} = \left(\gamma \frac{\partial\boldsymbol{x}^{(t)}}{\partial\boldsymbol{x}^{(t - 1)}}\right)\left(\gamma \frac{\partial\boldsymbol{x}^{(t - 1)}}{\partial\boldsymbol{x}^{(t - 2)}}\right)\cdots \left(\gamma \frac{\partial\boldsymbol{x}^{(t - k + 1)}}{\partial\boldsymbol{x}^{(t - k)}}\right) = \gamma^{k}\frac{\partial\boldsymbol{x}^{(t)}}{\partial\boldsymbol{x}^{(t - k)}}. \quad (13)\] ## B.6 MULTIPLE SEQUENCE ALIGNMENT GENERATION We use multiple sequence alignments of evolutionarily related sequences for both (i) profile construction (§ B.7) and (ii) data augmentation (§ B.8). For every domain in the dataset, we extracted the sequence from the PDB and then used jackhmmer (Eddy, 2011) to iteratively search the Uniprot90 database (Suzek et al., 2014) (release 4/2016) with 5 iterations and a length-normalized bitscore threshold of 0.3.
We then removed sequences with over \(50\%\) gaps relative to the query sequence and then redundancy-reduced the alignment with hhfilter (Remmert et al., 2012) such that all sequences are at least a normalized Hamming distance of 0.8 away from one another. ## B.7 PROFILE GENERATION We briefly describe how we construct evolutionary profiles, or position-specific scoring matrices (PSSMs), for each protein domain. Let \(\mathcal{S} = \{\boldsymbol{S}^{(1)},\ldots ,\boldsymbol{S}^{(L)}\}\) be the set of \(L\) columns of a multiple sequence alignment over \(M\) sequences, where each column \(\boldsymbol{S}^{(i)}\) is an \(M\times q\) matrix that one-hot encodes the sequence data at position \(i\) (for an alphabet of size \(q\)). The regularized empirical frequency of letter \(a\) at site \(i\) is then \[f_{a}^{(i)} = \frac{\alpha + \sum_{j}\boldsymbol{S}_{j a}^{(i)}}{\alpha + M},\] where \(\alpha\) is a pseudocount that we set to 10. We compute our PSSM features for letter \(a\) at site \(i\) as \[w_{a}^{(i)} = \sigma \left(\log \frac{f_{a}^{(i)}}{B_{a}}\right),\] where \(\sigma (u) = \frac{1}{1 + \exp(- u)}\) is the logistic sigmoid and \(B_{a}\) is the average frequency of amino acid \(a\) in UniProt (Apweiler et al., 2004). <--- Page Split ---> Table 4: Qualitative timings. Results on the CATH dataset and 2 M40 GPUs. <table><tr><td>Method</td><td>Generation time</td><td>Training time</td></tr><tr><td>RNN baseline†</td><td>milliseconds</td><td>~ 1 week</td></tr><tr><td>NEMO†</td><td>seconds</td><td>~ 2 months</td></tr><tr><td>Coevolution-based methods</td><td>minutes to hours</td><td>Coupled to generation</td></tr><tr><td>Physical simulations</td><td>days to weeks</td><td>N/A</td></tr></table> ## B.8 EVOLUTIONARY DATA AUGMENTATION To reduce our reliance on alignments and the generation of profiles for inference of new sequences while still leveraging evolutionary sequence data, we augmented our training set by dynamically spiking diverse, related sequences into the model during training. Given a set of \(M\) sequences in the alignment, we sample a sequence \(t\) based on its normalized Hamming distance \(d_{t}\) with probability \[p_{t} = \frac{e^{\lambda_{\mathrm{EDA}} d_{t}}}{\sum_{s = 1}^{M}e^{\lambda_{\mathrm{EDA}} d_{s}}},\] where \(\lambda_{\mathrm{EDA}}\) is a scaling parameter that we set to 5. When the alternate sequence contains gaps, we construct a chimeric sequence that substitutes those sites with the query. This strategy increased the number of available sequence-structure pairs by several orders of magnitude, and we used it for both profile and 1-seq based training. ## C APPENDIX C: RESULTS ## C.1 STRUCTURE GENERATION AND PROCESSING For each sequence from the CATH release 4.2 dataset, 100 structures were generated from both the profile and sequence-only models, while a single structure was generated from the RNN baseline models. The reported TM-scores were calculated using Maxcluster (Siew et al., 2000). A single representative structure was chosen from the ensemble of 100 structures using 3D-Jury (Ginalski et al., 2003). A pairwise distance matrix of TM-scores was calculated for all of the 100 structures in the ensemble. Clusters were determined by agglomerative hierarchical clustering with complete linkage, using a TM-score threshold of 0.5 to determine cluster membership. ![](images/21_0.jpg) <center>Figure 10: Sampling speed.
Per-protein sampling times for various batch sizes across NEMO and one of the RNN baselines on a single Tesla M40 GPU with 12GB memory and 20 cores. For all results in the main paper, 100 models were sampled per protein followed by consensus clustering with 3D-Jury, adding an additional factor of \(10^{2}\) cost between NEMO and the RNN. </center> <--- Page Split ---> ![](images/22_0.jpg) <center>Figure 11: Predictive performance of structures generated by the sequence-only model. (left) Structures in the test set are hierarchically organized by CATH classification; groups further up the tree require broader generalization. (center-left) Ensembles of models with increasing certainty tend to have a better average TM-score. (center-right) TM-score of 3D-Jury-selected models versus distance from the training data. (right) Comparing the energy-based model with and without profiles. Profile information greatly improves protein model accuracy as judged by TM-score. </center> ![](images/22_1.jpg) <center>Figure 12: Generalization results upon re-stratification. Profile-based model. </center> <--- Page Split ---> ![](images/23_0.jpg) <center>Figure 13: RNN baseline performance for different hyperparameters. Predictive performance of the two-layer bidirectional LSTM baseline models across a range of hidden unit dimensions compared to the energy model. </center> <--- Page Split --->
accept
Accept (Oral)
6.5
ICLR_2019_paper_1141
iclr
2,019
# VARIATIONAL DOMAIN ADAPTATION Anonymous authors Paper under double-blind review ## ABSTRACT This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference. Unlike existing methods for domain transfer through deep generative models, such as StarGAN (Choi et al., 2017) and UFDN (Liu et al., 2018), variational domain adaptation has three advantages. First, samples from the target domain are not required. Instead, the framework requires one known source as a prior \(p(x)\) and binary discriminators, \(p(\mathcal{D}_i|x)\), discriminating the target domain \(\mathcal{D}_i\) from the others. Consequently, the framework regards a target as a posterior that can be explicitly formulated through Bayesian inference, \(p(x|\mathcal{D}_i)\propto p(\mathcal{D}_i|x)p(x)\), as exhibited by our further proposed model, the dual variational autoencoder (DualVAE). Second, the framework is scalable to large-scale domains. Just as a VAE encodes a sample \(x\) as a mode in a latent space, \(\mu (x)\in \mathcal{Z}\), DualVAE encodes a domain \(\mathcal{D}_i\) as a mode in the dual latent space, \(\mu^{*}(\mathcal{D}_i)\in \mathcal{Z}^{*}\), named a domain embedding. It reformulates the posterior with a natural pairing \(\langle \cdot ,\cdot \rangle :\mathcal{Z}\times \mathcal{Z}^{*}\to \mathbb{R}\), which can be extended to uncountably infinite domains, such as continuous domains, as well as to interpolation between domains. Third, DualVAE converges quickly without sophisticated automatic/manual hyperparameter search, in comparison to GANs, as it requires only one additional parameter beyond the VAE. Through numerical experiments, we demonstrate these three benefits on a multi-domain image generation task on CelebA with up to 60 domains, and exhibit that DualVAE achieves state-of-the-art performance, outperforming StarGAN and UFDN. ## 1 INTRODUCTION "...we hold that all the loveliness of this world comes by communion in Ideal-Form. All shapelessness whose kind admits of pattern and form, as long as it remains outside of Reason and Idea, is ugly from that very isolation from the Divine-Thought." — Plato (427–347 BC) Agents that interact in various environments have to handle multiple observation distributions. Domain adaptation (Bengio, 2012) is a methodology employed to exploit deep generative models, such as adversarial learning (Goodfellow et al., 2014) and variational inference (Kingma & Welling, 2013), that can handle distributions that vary with environments and other agents. Multi-task learning and domain transfer are examples of how the domain adaptation methodology is used. We focus on domain transfer, which involves transferring a distribution between domains. For instance, pix2pix (Isola et al., 2017) outputs a sample from the target domain that corresponds to the input sample from the source domain. This can be achieved by learning the pair relation of samples from the source and target domains. CycleGAN (Zhu et al., 2017a) transfers samples between two domains using samples obtained from both domains. Similarly, UNIT (Liu et al., 2017), DiscoGAN (Kim et al., 2017), and DTN (Taigman et al., 2016) have been proposed in previous studies. However, the aforementioned methods require samples obtained from the target domains, and because of this requirement, they cannot be applied to domains for which direct sampling is expensive or often impossible.
For example, a desired, continuous, high-dimensional action in the environment, an intrinsic reward (e.g., preference and curiosity), and the policy of interacting agents other than oneself cannot be sampled from the inside; they can only discriminate a proposed input. Even for ourselves, the concept of beauty or interest in our consciousness is subjective, complex, and difficult to sample from the inside, although it is easy to discriminate from the outside. <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: The key concept of variational domain adaptation. a) Given a proposal drawn from the prior, the discriminator discriminates the target domain from the others. Each domain is a posterior of the prior \(\mathcal{N}(z|0,1)\); further, the distribution in the latent space is a normal distribution by the conjugate likelihood. b) Domain transfer is represented by a mean shift in the latent space. c) Domain embedding: after training, all the domains can be represented solely by the vectors \(\mu_{i}\). </center> In this study, we propose variational domain adaptation, which is a framework for targets that pose challenges with respect to direct sampling. One solution is multi-domain semi-supervision, which converts the problem to semi-supervised learning, thereby making it possible to perform variational inference. In this supervision, a source domain is regarded as a prior \(p(x)\) and a target domain is considered to be a posterior \(p(x|\mathcal{D}_{i})\) by referring to the label given by a supervised discriminator \(p(\mathcal{D}_{i}|x)\) that distinguishes the target domain from the others. Our model imitates the behavior of the discriminator and models the target domain using a simple consequence of Bayes' theorem, \(p_{\theta}(x|\mathcal{D}_{i})\propto p_{\theta}(\mathcal{D}_{i}|x)p_{\theta}(x)\). The end-to-end learning framework also makes it possible to learn a good prior \(p_{\theta}(x)\) with respect to all the domains. After training is completed, the posterior \(p_{\theta}(x|\mathcal{D}_{i})\) succeeds in deceiving the discriminator \(p(\mathcal{D}_{i}|x)\). This concept is similar to rejection sampling in Monte Carlo methods. Variational domain adaptation is the first important contribution of this study. The second contribution of this study is the model of the dual variational autoencoder (DualVAE), which is a simple extension of the conditional VAE (Kingma et al., 2014), employed to demonstrate our concept of multi-domain semi-supervision. DualVAE learns multiple domains in one network by maximizing the variational lower bound of the total negative KL-divergence between the target domains and the model. DualVAE uses a VAE to model the prior \(p(x)\) and an abstract representation for the discriminator \(p(\mathcal{D}_{i}|x)\). The major feature of DualVAE is domain embedding, meaning that all the posteriors are modeled as normal distributions \(\mathcal{N}(z|\mu_{i},\sigma^{2})\) in the same latent space \(\mathcal{Z}\) using the conjugate distribution of the prior. Here, \(\mu_{i}\) is the domain embedding that represents the domain \(\mathcal{D}_{i}\). This enables us to sample from \(p_{\theta}(x|\mathcal{D}_{i})\).
Our major finding is that the discriminator of DualVAE is a simple inner product between the two means, the domain embedding and the VAE encoder output: \[\log \frac{p_{\theta}(\mathcal{D}_{i}|x)}{p_{\theta}(\mathcal{D}_{i})} = \log \int \frac{\mathcal{N}(z|\mu_{i},\sigma^{2})\mathcal{N}(z|\mu_{\phi}(x),\sigma^{2})}{\mathcal{N}(z|0,I)} dz = \frac{\mu_{i}^{\mathrm{T}}\mu_{\phi}(x)}{\sigma^{2}},\] which acts as a natural pairing between the sample and the domain. The probabilistic end-to-end model learns multiple domains in a single network, making it possible to benefit from transfer across domains and to learn, from sparse feedback, data that no single domain can observe. Domain embedding is a powerful tool and allows us to use VAEs instead of GANs. The third contribution of this study is that DualVAE was validated on a recommendation task using CelebA (Liu et al., 2015). In the experiment, using CelebA and face image data rated by 60 users, images were generated based on the predicted user evaluations, including an ideal image judged good by multiple users. We demonstrated that an image could be modified to improve its evaluation through interpolation, and the generated images were evaluated using the domain inception score (DIS), the score of a model that has learned the preference of each user. We exhibit the beauty inside each evaluator simply by sampling \(p_{\theta}(x|\mathcal{D}_{i})\). The DIS of DualVAE is higher than that of a single-domain model, and the dataset and code are available online. <--- Page Split ---> ## 2 RELATED WORK The existing literature on domain transfer is based on the assumption that samples are obtained from the target domain. For example, pix2pix (Isola et al., 2017) can output samples from the target domain that correspond to input samples from the source domain by learning the pair relation between samples of the source and target domains. CycleGAN (Zhu et al., 2017a), unlike pix2pix, does not require sample pairs from both domains. Similarly, UNIT (Liu et al., 2017), DiscoGAN (Kim et al., 2017), and DTN (Taigman et al., 2016) also do not require sample pairs. Furthermore, because there are few cases in which samples from the source and target domains form one-to-one pairs in the real world, research has been extended to the conversion of one-to-many relationships, as in BicycleGAN (Zhu et al., 2017b) and MUNIT (Huang et al., 2018). Several studies have modeled multiple distributions in a semi-supervised manner. StarGAN (Choi et al., 2017), UFDN (Liu et al., 2018), and RegCGAN (Mao & Li, 2018) are extensions of the aforementioned models and are frameworks that can convert source domain samples into samples for various target domains with a single network structure. However, these methods suffer from difficult hyperparameter tuning, which arises from the characteristics of adversarial learning. DualVAE is a simple extension of a conditional VAE to a multi-domain situation. Conditional VAEs utilize VAEs for semi-supervised learning. Although the model is quite simple, it is powerful and scalable, making it possible to learn multiple distributions with domain embedding. In fact, we demonstrate that DualVAE quickly converges for more than 30 domains without sophisticated hyperparameter tuning.
In the experiment conducted in this study, \(\mathbb{E}_{\omega}\left[J(\theta |\omega)\right]\) was evaluated instead of \(J(\theta |\hat{\omega})\) to demonstrate that our method requires less hyperparameter tuning. ## 3 METHOD ### 3.1 PROBLEM DEFINITION With regard to \(n\) domains \(\mathcal{D}_{1},\ldots ,\mathcal{D}_{n}\) and a sample \(x\) on an observation space \(\mathcal{X}\), the objective of unsupervised domain adaptation is to minimize the KL-divergence between the target distribution and the model, \(D_{\mathrm{KL}}\left(p^{(i)}(x)\| p^{(i)}(x,\theta)\right)\), over all the domains \(\mathcal{D}_{i}\). From the perspective of optimizing \(\theta\), minimizing the KL divergence is equivalent to maximizing the cross-entropy. As \(p^{(i)}(x,\theta) = p^{(i)}(x|\theta)p(\theta)\), unsupervised domain adaptation can be formulated as a maximization problem for the weighted average of the cross-entropy over the domains: \[\mathrm{Maximize}_{\theta}:J(\theta) = \frac{1}{n}\sum_{i = 1}^{n}\gamma_{i}\mathbb{E}_{x\sim p^{(i)}}\left[\log p_{\theta}^{(i)}(x)\right] + \gamma \log p(\theta), \quad (1)\] where \(p_{\theta}^{(i)}(x) = p^{(i)}(x|\theta)\), \(\gamma_{i}\in [0,1]\) is the importance of each domain \(\mathcal{D}_{i}\), and \(\gamma = \sum_{i = 1}^{n}\gamma_{i} / n\). If \(\gamma_{i} = 1\) for all \(i\), the objective function is simply the mean, and if \(\gamma_{i} = 0\) for certain \(i\), the domain \(\mathcal{D}_{i}\) is ignored. The difficulty arises from the fact that it is not possible to sample \(x\) directly from \(p^{(i)}\); only labels for a given \(x\) can be sampled from the likelihood \(p(\mathcal{D}_{i}|x)\). This challenge was the motivation for considering multi-domain semi-supervision. ### 3.2 MULTI-DOMAIN SEMI-SUPERVISION Multi-domain semi-supervision assumes a prior \(p(x)\) and models each domain as a posterior \(p^{(i)} = p(x|\mathcal{D}_{i})\). Via Bayesian inference, we reformulate the cross-entropy \(\mathbb{E}_{x\sim p^{(i)}}\left[\log p_{\theta}(x|\mathcal{D}_{i})\right]\) in Eq. (1) as follows: \[\begin{aligned} \mathbb{E}_{x\sim p^{(i)}}\left[\log p_{\theta}(x|\mathcal{D}_{i})\right] &= \int p(x|\mathcal{D}_{i})\log p_{\theta}(x|\mathcal{D}_{i})\,dx = \int \frac{p(\mathcal{D}_{i}|x)p(x)}{p(\mathcal{D}_{i})}\log \frac{p_{\theta}(\mathcal{D}_{i}|x)p_{\theta}(x)}{p_{\theta}(\mathcal{D}_{i})}\,dx \\ &= \mathbb{E}_{x\sim p}\left[\frac{p(\mathcal{D}_{i}|x)}{p(\mathcal{D}_{i})}\log \frac{p_{\theta}(\mathcal{D}_{i}|x)}{p_{\theta}(\mathcal{D}_{i})}\right] + \mathbb{E}_{x\sim p}\left[\frac{p(\mathcal{D}_{i}|x)}{p(\mathcal{D}_{i})}\log p_{\theta}(x)\right] \\ &= \mathbb{E}_{x\sim p}\left[f(\mathcal{D}_{i}|x)\log f_{\theta}(\mathcal{D}_{i}|x)\right] + \mathbb{E}_{x\sim p}\left[f(\mathcal{D}_{i}|x)\log p_{\theta}(x)\right], \end{aligned} \quad (2)\] <--- Page Split ---> where \(f(\mathcal{D}_{i}|x) = p(\mathcal{D}_{i}|x) / p(\mathcal{D}_{i})\) and \(f_{\theta}(\mathcal{D}_{i}|x) = p_{\theta}(\mathcal{D}_{i}|x) / p_{\theta}(\mathcal{D}_{i})\).
By letting \(\gamma_{i} = p(\mathcal{D}_{i})\), the objective is identical to: \[J(\theta) = \underbrace{\mathbb{E}_{x\sim p,i\sim[n]}\left[p(\mathcal{D}_{i}|x)\log f_{\theta}(\mathcal{D}_{i}|x)\right]}_{\mathrm{discriminator}} + \underbrace{\mathbb{E}_{x\sim p}\left[\log p_{\theta}(x)\right]}_{\mathrm{prior}} + \underbrace{\gamma\log p(\theta)}_{\mathrm{regularizer}}, \quad (3)\] where \([n]\) is the uniform distribution over \(\{1,\ldots ,n\}\) and \(f(\bar{\mathcal{D}} |x) = \mathbb{E}_{i\sim [n]}\left[f(\mathcal{D}_{i}|x)\right]\). The first term is the likelihood from the discriminator, the second term is the prior learned by a generative model such as a VAE, and the last term is the regularizer. Because the expression is intractable, we use Monte Carlo sampling to estimate it. During the estimation, we initially sample \(x_{1},\ldots ,x_{m}\) from the prior \(p(x)\) and subsequently obtain binary labels \(y_{ij}\in \{0,1\}\) from each discriminator, \(y_{ij}\sim p(\mathcal{D}_{i}|x_{j})\). Since the number of labels required from the supervisors is \(nm\), we consider the situation of sparse labels, \(k\ll nm\), in which some discriminators provide only part of the labels. In this situation, the missing values are zero-padded: \(y_{ij} = 0\). \[J(\theta)\approx \frac{1}{n}\sum_{i = 1}^{n}\sum_{j = 1}^{m}y_{ij}\log f_{\theta}(y_{ij}|x_{j}) + \frac{1}{m}\sum_{j = 1}^{m}\log p_{\theta}(x_{j}) + \bar{y}\log p(\theta), \quad (4)\] where \(\approx\) indicates Monte Carlo estimation and \(\bar{y} = \sum_{i = 1}^{n}\sum_{j = 1}^{m}y_{ij} / k\). In the limit of \(m\to \infty\), the right side of the equation is identical to the left side. ### 3.3 DUAL VARIATIONAL AUTOENCODER (DUALVAE) We extended the VAE for multi-domain transfer to demonstrate our concept of multi-domain semi-supervision. Our proposed model, the dual variational autoencoder (DualVAE), models each domain \(p_{i}(x)\) as a posterior distribution \(p(x|\mathcal{D}_{i})\), similar to a conditional VAE. Fig. 2 depicts the graphical models of the VAE and DualVAE. The major feature of DualVAE is domain embedding, where all the domains and the prior share the same latent space \(\mathcal{Z}\). For the prior distribution, \(p(z) = \mathcal{N}(z|0,I)\) and \(p(z|\mathcal{D}_{i}) = \mathcal{N}(z|\mu_{i},\sigma^{2}I)\), where \(\mu_{i}\in \mathcal{Z}\) is an embedding and \(I\) is the identity matrix in \(\mathcal{Z}\). In the following, we write \(\sigma^{2}I = \sigma^{2}\) without loss of generality. The domain \(\mathcal{D}_{i}\) is characterized only by its embedding \(\mu_{i}\). Here, \(\mu_{0}\) is the embedding of the prior, which can be assumed to be \(\mu_{0} = 0\). Training DualVAE is virtually equivalent to simultaneously training \((n + 1)\) VAEs that share parameters, including the prior. Using the conjugate distribution for the prior \(p(z)\), each posterior distribution is also a normal distribution; therefore, all the posteriors are VAEs. The components of the model are given as follows: #### 3.3.1 VAE: THE PRIOR \(p_{\theta}(x)\) A VAE (Kingma & Welling, 2013) is used to model the prior \(p(x)\); it is a deep generative model that employs an autoencoder to model the hidden variable as a random variable.
The benefit of a VAE is that it can be used to model each distribution as a normal distribution in \(\mathcal{Z}\), achieved by maximizing the variational lower bound of \(\log p(x)\) as follows: \[\log p_{\theta}(x)\geq \mathcal{L}_{\theta}(x) = \mathbb{E}_{z\sim q_{\phi}(\cdot |x)}\left[\log p_{w}(x|z)\right] - D_{\mathrm{KL}}\left(q_{\phi}(z|x)\| p(z)\right), \quad (5)\] where \(\phi ,w\in \theta\) are the parameters of the encoder and the decoder, respectively. The objective is to learn a pair of the encoder \(q_{\phi}(z|x)\) and the decoder \(p_{w}(x|z)\) to maximize \(\mathcal{L}(x)\). Here, \(z\) is governed by the prior \(p(z) = \mathcal{N}(z|0,I)\). The lower bound \(\mathcal{L}_{\theta}(x)\) is composed of the reconstruction error and a penalty term, the KL divergence between the model and the prior \(p(z)\). The gradient of the reconstruction term can be calculated using the Monte Carlo method, and because the penalty term is the KL divergence between two normal distributions, it can be calculated analytically. <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 2: Left: Graphical models of the probabilistic models of VAE and DualVAE. The gray and white circles indicate the observed variables and latent variables, respectively. Symbols without circles indicate constants. Arrows between the symbols indicate probabilistic dependency (e.g., X generates Y). A rectangle with suffixes indicates a block, which comprises multiple elements. Right: The network structure of DualVAE. The label is structured as the inner product of the latent \(z_{\theta}\) and the domain embedding \(z_{i}\). </center> #### 3.3.2 DISCRIMINATOR \(f_{\theta}(\mathcal{D}_{i}|x)\) Using the definition and Bayes' theorem, \(\log f_{\theta}(\mathcal{D}_{i}|x)\) can be written as follows: \[\log f_{\theta}(\mathcal{D}_{i}|x) = \log \int \frac{p_{\theta}(\mathcal{D}_{i}|z)p_{\theta}(z|x)}{p_{\theta}(\mathcal{D}_{i})} dz = \log \int \frac{p_{\theta}(z|\mathcal{D}_{i})p_{\theta}(z|x)}{p_{\theta}(z)} dz = \log \int \frac{\mathcal{N}(z|\mu_{i},\sigma^{2})\mathcal{N}(z|\mu_{\phi}(x),\sigma^{2})}{\mathcal{N}(z|0,I)} dz = \frac{\mu_{i}^{\mathrm{T}}\mu_{\phi}(x)}{\sigma^{2}}. \quad (6)\] The equation above indicates that \(\log f_{\theta}(\mathcal{D}_{i}|x)\) can be written simply as the inner product between \(\mu_{i}\) and \(\mu_{\phi}(x)\), and the objective can be written as follows: \[\mathbb{E}_{i\sim [n]}\left[y_{i}\log f_{\theta}(\mathcal{D}_{i}|x)\right] = \frac{\mathbf{y}^{\mathrm{T}}U\mu_{\phi}(x)}{n\sigma^{2}} = \alpha \mu_{U}^{*}(\mathbf{y})\mu_{\phi}(x), \quad (7)\] where \(U = (\mu_{1},\ldots ,\mu_{n})^{\mathrm{T}}\), \(\mu_{U}^{*}(\mathbf{y}) = \mathbf{y}^{\mathrm{T}}U / n\), and \(\alpha = \sigma^{- 2}\). Interestingly, this requires only one additional parameter, \(U\), apart from the hyperparameter \(\alpha\). \(U\) is named the domain embedding matrix, representing the set of domain prototypes. Domain embedding makes it possible to extend our method to infinitely many domains, such as a continuous domain. In fact, \(\mu_{U}^{*}(\mathbf{y})\in \mathcal{Z}^{*}\) represents a prototype of the mixed domains indicated by \(\mathbf{y}\) in a domain latent space \(\mathcal{Z}^{*}\), a dual space of \(\mathcal{Z}\). Note that \(\dim \mathcal{Z} = \dim \mathcal{Z}^{*}\).
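A minimal PyTorch sketch of Equations 6 and 7 follows; tensor names and shapes are our assumptions, not the paper's code.

```python
import torch

def discriminator_logits(mu_x, U, sigma2=1.0):
    """Eq. (6): log f_theta(D_i | x) = mu_i^T mu_phi(x) / sigma^2.
    mu_x: encoder means mu_phi(x), shape [batch, dim];
    U: domain embedding matrix, shape [n, dim]."""
    return mu_x @ U.t() / sigma2        # [batch, n]: one score per domain

def mixed_domain_prototype(y, U):
    """Eq. (7): the prototype mu_U*(y) of the mixed domain indicated by the
    label vector y is a label-weighted mean of embedding rows, y^T U / n."""
    return y @ U / U.shape[0]           # [batch, dim] in the dual space Z*
```

The discriminator term of the final objective (Equation 9 below) is then just \(\alpha\) times the inner product of `mu_x` and `mixed_domain_prototype(y, U)`, which is what makes the whole discriminator a single linear layer on top of the encoder.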
#### 3.3.3 REGULARIZER \(p(\theta)\)

The overall parameter of DualVAE is \(\theta = (w,\phi ,U)\), where \(\phi\) is the encoder's parameter, \(w\) is the decoder's parameter, and \(U\) is the domain embedding matrix. While a typical VAE does not assume any prior distribution over \(w\) and \(\phi\), \(p(U)\) is set to an exponential distribution with an additional hyperparameter \(\beta \in (0,\infty)\) to obtain a sparse representation: \(p(U)\propto \exp (-\beta \| U\|_{1} / \gamma)\), where \(\| \cdot \|_{1}\) is the 1-norm. Thus, \[\log p(\theta) = \log p(U) = -\frac{\beta}{\gamma}\| U\|_{1} - m\dim \mathcal{Z}\log 2 + \log \beta -\log \gamma . \quad (8)\] As all terms except the first are independent of \(\theta\), we hereafter ignore them as constants.

#### 3.3.4 THE FINAL FORM

Putting together the prior, the discriminator, and the regularizer, the variational lower bound of the point-wise objective of DualVAE, \(J(\theta |x,\mathbf{y})\), can be written in a surprisingly simple form: \[J(\theta |x,\mathbf{y})\geq \mathcal{L}_{\theta}(x) + \alpha \langle \mu_{\phi}(x),\mu_{U}^{*}(\mathbf{y})\rangle -\beta \| U\|_{1}, \quad (9)\] where \(\langle u,v\rangle = v^{\mathrm{T}}u\). Consequently, a DualVAE maximizes the duality pairing \(\langle \cdot ,\cdot \rangle :\mathcal{Z}\times \mathcal{Z}^{*}\to \mathbb{R}\) between the sample latent space \(\mathcal{Z} = \mathcal{Z}_{\phi}(\mathcal{X})\) and the domain latent space \(\mathcal{Z}^{*} = \mathcal{Z}_{U}^{*}(\mathcal{Y})\), where \(\mathcal{Y} = \{0,1\}^{n}\). Note that the objective requires only two hyperparameters in addition to the VAE. If \(\alpha ,\beta \to 0\), it is equivalent to a single VAE. Intuitively, \(1 / \alpha\) and \(1 / \beta\) control the variance and bias of the domain embeddings, respectively. The training algorithm of the DualVAE is shown in Algorithm 1.

Algorithm 1 Variational domain adaptation through DualVAE
Require: observations \((x_{j})_{j = 1}^{m}\), batch size \(M\), VAE/embedding optimizers \(g\), \(g_{e}\), hyperparameters \(\alpha ,\beta\), and the label matrix \(Y = (y_{j})_{j = 1}^{m}\)
Initialize the encoder, decoder, and domain embedding parameters \(\phi ,w,U\)
repeat
 Randomly select a batch \((x_{j})_{j\in B}\) of size \(M\)
 Sample \(z_{j}\sim q_{\phi}(z|x_{j})\;\forall j\in B\)
 \(\phi ,w\gets g(\nabla_{\phi ,w}\sum_{j\in B}[\log p_{w}(x_{j}|z_{j}) - D_{\mathrm{KL}}(q_{\phi}(z|x_{j})\| p(z))])\)
 \(\phi ,U\gets g_{e}(\nabla_{\phi ,U}\sum_{j\in B}[\alpha (Y_{:,j} - Uz_{j})^{2} + \beta \| U\|_{1}])\)
until convergence of the parameters \(\theta = (\phi ,w,U)\)
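The following is a minimal PyTorch sketch of one iteration of Algorithm 1, assuming small MLP encoder/decoder networks on flattened inputs, a Gaussian reconstruction likelihood, and the squared-error surrogate used in the embedding update; the layer sizes and toy data are hypothetical and do not correspond to the architectures of Table 3.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_domains, x_dim, z_dim, batch = 5, 64, 8, 16
alpha, beta = 1.0, 1e-3

enc = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(), nn.Linear(32, 2 * z_dim))
dec = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))
U = nn.Parameter(torch.randn(n_domains, z_dim) * 0.1)  # domain embeddings

g = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
g_e = torch.optim.Adam(list(enc.parameters()) + [U], lr=1e-3)

x = torch.randn(batch, x_dim)                        # toy batch from the prior
y = torch.randint(0, 2, (batch, n_domains)).float()  # toy binary labels

# --- VAE step: maximize the ELBO of Eq. (5), i.e., minimize its negation ---
mu, logvar = enc(x).chunk(2, dim=1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
rec_loss = ((dec(z) - x) ** 2).sum(dim=1).mean()       # Gaussian likelihood
kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
g.zero_grad()
(rec_loss + kl).backward()
g.step()

# --- Embedding step: fit the labels with the inner-product discriminator ---
mu = enc(x).chunk(2, dim=1)[0]             # re-encode after the VAE update
scores = mu @ U.t()                        # (batch, n_domains), as in Eq. (6)
emb_loss = alpha * ((y - scores) ** 2).sum() + beta * U.abs().sum()
g_e.zero_grad()
emb_loss.backward()
g_e.step()

print(f"rec={rec_loss.item():.3f} kl={kl.item():.3f} emb={emb_loss.item():.3f}")
```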
## 4 EXPERIMENT

Through an original numerical experiment in domain adaptation, we confirmed that the DualVAE learns multiple distributions, both qualitatively and quantitatively. As with the existing methods, domain adaptation was evaluated via image-generation tasks. First, we performed a facial image recommendation task: a content-based recommendation task that generates images matching the preferences of users. Second, we performed the standard domain transfer task with 40 domains in CelebA (Liu et al., 2015) and showed that DualVAE outperformed two state-of-the-art methods based on GANs and VAEs.

The objective of the first task was to generate images preferred by a specific user. We set the input space \(\mathcal{X}\) as raw images, the prior \(p(x)\) as faces, and the domain \(\mathcal{D}_{i}\) as a user. We used the CelebA and SCUT-FBP5500 datasets as samples from the prior. The objective of the task was to generate samples from \(p_{\theta}(x|\mathcal{D}_{i})\), exhibiting the images preferred by a user. As the labels \(y_{i}\sim p(\mathcal{D}_{i}|x)\), we used the existing SCUT-FBP5500 dataset, with 5,500 faces rated by 60 users, for content-based recommendation. The purpose of the second task was to transfer samples from \(p(x)\) into samples from \(p_{\theta}(x|\mathcal{D}_{i})\). We set the prior \(p(x)\) as face images and the posterior \(p_{\theta}(x|\mathcal{D}_{i})\) as face images with certain attributes of CelebA. As the labels \(y_{i}\sim p(\mathcal{D}_{i}|x)\), we used the attributes of CelebA.

The results revealed that the DualVAE successfully learned the target distribution \(p_{\theta}(x|\mathcal{D}_{i})\), both quantitatively and qualitatively. Quantitatively, we confirmed that the discriminator learned the distribution by evaluating the negative log-likelihood loss, \(-\log p_{\theta}(\mathcal{D}_{i}|x)\). We evaluated the samples using the domain inception score (DIS), a score for evaluating the transformation of images into multiple target domains; notably, the DIS of the DualVAE was higher than that of several other models. Qualitatively, we demonstrated that an image can be transferred so as to improve its evaluation by interpolation. We further exhibited several beautiful facial images that the users were conscious of by decoding each domain embedding \(\mu_{i}\), which can be considered the projection of the ideal inside each user. In addition, the 40 domain-transferred images produced from CelebA by the proposed method were better than those produced by the other models.

### 4.1 DATASET

CelebA CelebA (Liu et al., 2015) comprises approximately 200,000 face images of celebrities with 40 attributes.

SCUT-FBP5500 SCUT-FBP5500 (Liang et al., 2018) comprises 5,500 face images with a 5-point-scale evaluation of beauty preference by 60 people. The face images are categorized as Asian male, Asian female, Caucasian male, and Caucasian female, with 2,000, 2,000, 750, and 750 images, respectively.

### 4.2 RESULT

The quantitative results of the experiment are obtained by evaluating the images generated by several models using the Domain Inception Score (DIS). Although the Inception Score (Salimans et al., 2016) measures generated images, it only captures their diversity and does not evaluate the domain transfer of the images. We therefore propose the DIS, a score for evaluating the transformation of images into multiple target domains. The DIS is a scalar computed from the output of an Inception-v3 (Szegedy et al., 2016) pretrained to output the domain label, and it is the sum of two elements: whether the domain transfer of the original image has been successful (transfer score), and whether the features other than the transferred domain are retained (reconstruction score). A more detailed explanation of the DIS is provided in the appendix.

Comparison of DualVAE and a single-domain VAE A DualVAE can transform an image of the source domain into images of multiple target domains with one model. A simpler alternative is to transfer the source-domain image to the multiple target domains by training multiple models, one per target domain; we call each of these models a Single Domain VAE (SD-VAE).
Since an SD-VAE converts images of one source domain to images of one target domain, as many models as target domains are required; thus, 60 models had to be trained. We show that the DualVAE performance is equal to or higher than that of the SD-VAE in terms of the DIS: for the output images of the two models, the one with the higher DIS is considered capable of outputting more ideal images. We calculated the DIS of 200 test images transferred by the two models. The DIS of the DualVAE was -0.0185, whereas that of the SD-VAE was -0.0282; thus, the DIS of the DualVAE was about 0.01 higher than that of the SD-VAE.

Comparison of DualVAE and several models The DualVAE was compared with several models capable of performing image-to-image translation for multiple domains with a single model. In this experiment, only the CelebA dataset was used, with its attributes serving as the domains, and the input images were resized to \(128\times 128\). For each model, the dimension of the latent variable and the learning rate were randomly changed, the DIS was calculated several times, and the average and standard deviation were computed. The DualVAE obtained a higher DIS than the other models.

Table 1: Average DIS for DualVAE and three domain adaptation baselines under random hyperparameter search, demonstrating that DualVAE outperforms the other models on the DIS without requiring careful hyperparameter search. Typical generated images are shown in the Appendix.

<table><tr><td>Method</td><td>5 domains</td><td>10 domains</td><td>20 domains</td><td>40 domains</td></tr><tr><td>CVAE (Kingma et al., 2014)</td><td>-0.055±0.011</td><td>-0.108±0.017</td><td>-0.112±0.007</td><td>-0.152±0.006</td></tr><tr><td>UFDN (Liu et al., 2018)</td><td>0.251±0.011</td><td>0.160±0.013</td><td>0.075±0.008</td><td>-0.002±0.003</td></tr><tr><td>StarGAN (Choi et al., 2017)</td><td>0.239±0.261</td><td>-0.094±0.346</td><td>0.068±0.188</td><td>0.050±0.032</td></tr><tr><td>DualVAE</td><td>0.278±0.026</td><td>0.180±0.011</td><td>0.163±0.025</td><td>0.140±0.020</td></tr></table>

### 4.3 VISUALIZATION OF DOMAIN TRANSFER

We transferred images by interpolating between the original and the target-domain images. We calculated the following vector \(\mathbf{w}_i\): \[\mathbf{w}_i = \mathbf{z} + \lambda \pmb{\mu}_i. \quad (10)\] Here, \(\mathbf{w}_i\) was constrained to have the same norm as \(\mathbf{z}\) to retain as much of the original features as possible. By changing \(\lambda\) and decoding \(\mathbf{w}_i\), five images spanning unideal to ideal reconstructions were produced for each of three sample users \((i = 14, 18, \text{and } 32)\), interpolating toward the ideal image \(\mathbf{x}_i\) as shown in Figure 3.
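A minimal sketch of this interpolation follows, assuming a trained encoder/decoder pair and a learned embedding matrix are available; the `U`, `encode`, and `decode` objects below are hypothetical placeholders for them.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 63                                 # latent dimension used in the paper

# Hypothetical stand-ins for the trained model.
U = rng.normal(size=(60, d))           # learned domain (user) embeddings
encode = lambda x: rng.normal(size=d)  # placeholder for mu_phi(x)
decode = lambda w: w                   # placeholder for the decoder

x = np.zeros(784)                      # some original image
z = encode(x)

def transfer(z, mu_i, lam):
    """Eq. (10): w_i = z + lam * mu_i, rescaled so that ||w_i|| = ||z||."""
    w = z + lam * mu_i
    return w * (np.linalg.norm(z) / np.linalg.norm(w))

# Five steps from unideal (negative lam) to ideal (positive lam) for user 14.
images = [decode(transfer(z, U[14], lam)) for lam in (-2, -1, 0, 1, 2)]
print([round(np.linalg.norm(transfer(z, U[14], lam)), 3) for lam in (-2, 0, 2)])
```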
In addition, we visualize the transferred images for all 40 attributes by the proposed method and the other models in subsection C.1. Although StarGAN and UFDN retained the characteristics of the original image considerably, it was qualitatively evident that their domain transfer was not good, especially when the number of domains was as large as 40 attributes.

![](images/7_0.jpg)
<center>Figure 3: Images obtained from our model by decoding \(\mathbf{w}_{i}\) \((i = 14, 18,\) and \(32)\) while changing the value of \(\lambda\). The reconstructed images are in the center. </center>

![](images/7_1.jpg)
<center>Figure 4: Scatter plot of DVAE, UFDN, StarGAN, and CVAE as several parameters are changed. Different colors denote different models. All 40-domain transferred images are in subsection C.1. </center>

## 5 CONCLUSION

Variational domain adaptation, a unified framework for learning multiple distributions in a single network, is proposed in this study. Our framework uses one known source as a prior \(p(x)\) and binary discriminators \(p(\mathcal{D}_{i}|x)\) that discriminate each target domain \(\mathcal{D}_{i}\) from the others; this is in contrast with the existing frameworks, in which samples undergo domain transfer through deep generative models. Consequently, our framework regards each target as a posterior characterized through Bayesian inference, \(p(x|\mathcal{D}_{i})\propto p(\mathcal{D}_{i}|x)p(x)\), as exhibited by the proposed DualVAE. The major feature of the DualVAE is domain embedding, a powerful tool that encodes all the domains and the samples from the prior as normal distributions in the same latent space, learned by a unified network through variational inference. In the experiment, we applied our framework and model to a multi-domain image-generation task. CelebA and face-image evaluations obtained from 60 users were used, and the results revealed that the DualVAE outperformed StarGAN and UFDN. Several directions should be considered for future research. First, we intend to extend DualVAEs to learning in complex domains, such as high-resolution images, with models such as Glow (Kingma & Dhariwal, 2018). Second, we will perform experiments that consider wider domains with respect to beauty. We expect that our proposed method will contribute to society in a number of ways and will help to deal with the paradigm of multiple contexts: multimodal, multi-task, and multi-agent.

## REFERENCES

Yoshua Bengio. Deep learning of representations for unsupervised and transfer learning. In Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pp. 17-36, 2012.

Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. arXiv preprint arXiv:1711.09020, 2017.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. arXiv preprint arXiv:1804.04732, 2018.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004, 2017.

Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, and Jiwon Kim. Learning to discover cross-domain relations with generative adversarial networks. arXiv preprint arXiv:1703.05192, 2017.

Diederik P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039, 2018.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp.
3581-3589, 2014.

Lingyu Liang, Luojun Lin, Lianwen Jin, Duorui Xie, and Mengru Li. SCUT-FBP5500: A diverse benchmark dataset for multi-paradigm facial beauty prediction. arXiv preprint arXiv:1801.06345, 2018.

Alexander Liu, Yen-Chen Liu, Yu-Ying Yeh, and Yu-Chiang Frank Wang. A unified feature disentangler for multi-domain image translation and manipulation. arXiv preprint arXiv:1809.01361, 2018.

Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, pp. 700-708, 2017.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), December 2015.

Xudong Mao and Qing Li. Unpaired multi-domain image generation via regularized conditional GANs. arXiv preprint arXiv:1805.02456, 2018.

Leland McInnes and John Healy. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

Andriy Mnih and Ruslan R Salakhutdinov. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems 20, pp. 1257-1264. Curran Associates, Inc., 2008.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29, pp. 2234-2242. Curran Associates, Inc., 2016.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.

Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. arXiv preprint arXiv:1611.02200, 2016.

Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017a.

Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems 30, pp. 465-476. Curran Associates, Inc., 2017b.

## A LATENT SPACE

We visualized the latent space \(\mathcal{Z}\) of the VAE and the DualVAE. The VAE differs from the DualVAE in that no evaluation regression is conducted during training. For each model, we obtain 5,500 latent vectors of 63 dimensions by encoding the 5,500 images of SCUT-FBP5500. We then produced a scatter plot after using UMAP (McInnes & Healy, 2018) to reduce the number of dimensions to two. The average score is indicated by colors ranging from red to blue. As can be observed from the UMAP of the DualVAE in Figure 5, the gradient of the score is learned, and it aligns with the user vector (domain embedding vector).

![](images/9_0.jpg)
<center>Figure 5: Latent visualization of VAE (left) and DualVAE (right) demonstrates that DualVAE learns a good prior to model the domains. The heat map indicates the mean score of all the users.
</center>

## B DOMAIN INCEPTION SCORE (DIS)

Although the Inception Score (Salimans et al., 2016) measures generated images, it only captures their diversity and does not evaluate the domain transfer of the images. We therefore propose the DIS, a score for evaluating the transformation of images into multiple target domains. The DIS is a scalar, evaluated as the sum of two elements: whether the domain transfer of the original image has been successful (transfer score), and whether the features other than the transferred domain are retained (reconstruction score).

We calculated the DIS using Algorithm 2. First, we assumed that there are N domains and that we know which domain each image belongs to. We fine-tuned Inception-v3 (Szegedy et al., 2016) using the images X as inputs and the domains as outputs; to enable the model to classify images into domains, we replaced its last layer with a new layer having N outputs. Second, we transferred the test images into the N domains using Equation 10 and fed the transferred images into the Inception-v3 pretrained above. Through this process, we obtained an N × N matrix for every original image, because one image was transferred into N domains and each transferred image was mapped to an N-dimensional vector. We then mapped the original image to an N-dimensional vector using Inception-v3 and subtracted this vector from each row of the above N × N matrix. We call the resulting matrix M. The key points are that (1) the diagonal elements of M should be large, because we transferred the original image into the diagonal domains, and (2) the off-diagonal elements of M should be small, because the transferred images should preserve the original features as much as possible. In a later subsection, we directly visualize these two elements and evaluate the models.

Algorithm 2 Domain Inception Score (DIS)
Require: observation \(x \in \mathcal{X}\), Inception-v3 \(f\), domain transfer model \(m\)
\(x^{\prime} \leftarrow m(x)\)
\(\mathbf{M} \leftarrow f(x^{\prime}) - f(x)\)
\(\mathrm{ts} \leftarrow \mathrm{average}(\mathrm{diag}(\mathbf{M}))\)
\(\mathrm{rs} \leftarrow - \mathrm{average}(\mathrm{abs}(\mathrm{notdiag}(\mathbf{M})))\)
\(\mathrm{DIS} \leftarrow \mathrm{ts} + \mathrm{rs}\)

In the algorithm, abs denotes taking the absolute value, diag denotes taking the diagonal elements of the matrix, notdiag denotes taking the non-diagonal elements, and average denotes taking the mean of multiple values.
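A minimal NumPy sketch of Algorithm 2 follows, assuming the N transferred images have already been passed through the fine-tuned Inception-v3; `f_x_transferred` and `f_x` are hypothetical stand-ins for those network outputs.

```python
import numpy as np

def domain_inception_score(f_x_transferred, f_x):
    """Algorithm 2. f_x_transferred: (N, N) Inception-v3 outputs for the
    image transferred into each of the N domains; f_x: (N,) output for
    the original image."""
    M = f_x_transferred - f_x             # subtract f(x) from every row
    ts = np.mean(np.diag(M))              # transfer score: diagonal large
    off_diag = M[~np.eye(len(M), dtype=bool)]
    rs = -np.mean(np.abs(off_diag))       # reconstruction score: off-diag small
    return ts + rs

rng = np.random.default_rng(3)
N = 40
f_x = rng.random(N)
# A toy "good" transfer: diagonal boosted, off-diagonal nearly unchanged.
f_x_transferred = f_x + 0.5 * np.eye(N) + 0.01 * rng.normal(size=(N, N))
print(domain_inception_score(f_x_transferred, f_x))
```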
## C ADAPTATION OVER MANY DOMAINS

This section provides further results for Table 1, the experiment on domain adaptation over the 40 domains made from CelebA. In this setting, we use the attributes in CelebA as domains, a setting adopted by several studies of domain adaptation (Choi et al., 2017). The results show that DualVAE learns the 40 domains in only one network, which indicates that DualVAE is an easy way to learn more than 10 domains. Next, we show several experimental results obtained when changing the parameters of the models. Because StarGAN uses a GAN, learning is not robust to the learning rate and is therefore not conducted well; moreover, CelebA has 40 domains, which may be too many for StarGAN, and this can also be considered one of the reasons that learning fails. Because reconstruction is conducted well, the rs of Algorithm 2 becomes larger than that of DualVAE. On the other hand, domain transfer is not conducted properly, so the ts of Algorithm 2 becomes extremely small compared to that of DualVAE. Therefore, as we can see from Table 1, the DIS becomes a very small value.

## C.1 CELEBA

![](images/10_0.jpg)
<center>(a) DualVAE </center>

![](images/11_0.jpg)
<center>Figure 6: Comparing domain transfer by several methods. (a) Although the image is blurry compared to StarGAN and the original features change significantly, domain transfer is still conducted properly. (b) Although the characteristics of the original image are well preserved, neither domain transfer nor reconstruction is conducted. (c) Although the characteristics of the original image are well preserved, domain transfer is not conducted well. (d) The characteristics are kept, and only a small amount of domain transfer is achieved. </center>

## C.2 MNIST (10 DOMAINS)

Next, we conducted domain transfer experiments using the MNIST dataset. In this experiment, we demonstrated that it is possible to transfer an image to another label (domain) without compromising the style of the original image. We also plotted the relation to the DIS when the labels are sparse. Moreover, we show in subsection I.1 that it is possible to transfer to another domain step by step.

![](images/12_0.jpg)
<center>Figure 7: Scatter plot of the missing ratio of MNIST's labels and the DIS of the DualVAE. The variable s is the missing ratio. The original image is shown at the top right of the figure. The labels of the original images are transformed to zero, one, and two. The vertical axis is the ts of Algorithm 2, and the horizontal axis is the rs of Algorithm 2. The DIS grows toward the upper right corner. </center>

![](images/12_1.jpg)
<center>Figure 8: Domain transfer while varying \(\lambda\). (a) Good example: domain transfer to a different label is successful while keeping the characteristics of the reconstructed image. (b) Bad example: although the characteristics of the reconstructed images are kept, domain transfer to different labels is insufficient. (c) Bad example: although domain transfer to a different label is successful, the characteristics of the reconstructed images are slightly lost. (d) Bad example: domain transfer to different labels is not successful. </center>

## D DOMAIN EMBEDDINGS

By reducing the dimensions of the 60 domain embedding vectors from 63 to 2 using UMAP (McInnes & Healy, 2018), the domain embedding vectors were visualized in a scatter plot. Furthermore, \(\mathbf{x}_{i}\) was visualized by decoding samples from each domain distribution.

![](images/13_0.jpg)
<center>Figure 9: Scatter plot of the domain embedding vectors, and several decoded images of samples from each domain. Six \(\mathbf{z}_{i}\) from the target domain distribution and their outputs \(\mathbf{x}_{i}\) were decoded. Furthermore, \(\mathbf{z}_{0}\) from the source domain distribution and its output \(\mathbf{x}_{0}\) were also decoded. </center>

## E DOMAIN MIXING

In this section, we show that it is possible to conduct arithmetic operations among domains. For example, suppose we have learned the embedding vector of a charming-image domain for each individual person. We can then output a charming image for the group of people as a whole, without further learning, simply by taking the average of the domain embedding vectors. Denoting the community preference by \(f_{I}\) and the personal evaluation model by \(f_{i}(= \mu_{i}^{\mathrm{T}}\mathbf{z}(x))\), \[f_{I}(x) = \frac{1}{|I|}\sum_{i\in I}f_{i}(x) = \frac{1}{|I|}\sum_{i\in I}\mu_{i}^{\mathrm{T}}\mathbf{z}(x) = \bar{\mu}^{\mathrm{T}}\mathbf{z}(x), \quad (11)\] where \(\bar{\mu} = (1 / |I|)\sum_{i\in I}\mu_{i}\) is the average of the domain embedding vectors. Here, \(i\) is the index denoting the domain (person), \(I\) is the set of domains, and \(\mathbf{z}(x)\) is the latent vector of the image \(x\). As shown in Equation 11, since the evaluations are linear in the domain embedding vectors, taking the inner product of the average \(\bar{\mu}\) of these vectors and the latent vector \(\mathbf{z}\) yields the average of the personal evaluations (the evaluation of the community). Therefore, by substituting \(\bar{\mu}\) for \(\mu_{i}\) in Equation 10, we can reconstruct face images with a high degree of community evaluation.
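A short sketch of Eq. (11) follows: averaging the per-user domain embeddings gives a community scorer. As before, the embedding matrix `U` and latent vector `z` are hypothetical stand-ins for the trained model's outputs.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 63
U = rng.normal(size=(60, d))          # hypothetical learned user embeddings
z = rng.normal(size=d)                # latent vector z(x) of some image x

community = [3, 14, 18, 32]           # index set I of users in the community
mu_bar = U[community].mean(axis=0)    # average of the domain embeddings

# Eq. (11): the community preference equals the average personal preference.
f_I = mu_bar @ z
assert np.isclose(f_I, np.mean([U[i] @ z for i in community]))
print(f_I)
```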
We reconstructed images with higher (and lower) evaluation using 10 face images of both genders; each image further to the right enjoys a higher evaluation. We can see that the facial contours gradually deepen, the beard disappears, the eyes become bigger, and the outline becomes sharper (Figure 10).

![](images/14_0.jpg)
<center>Figure 10: Images decoded to move closer to (or further from) the average of the domain embedding vectors. The reconstructed original image is in the middle. </center>

## F CONNECTION TO THE HISTORY OF MATRIX FACTORIZATION

This section shows that the proposed method, DualVAE, is a natural generalization of probabilistic matrix factorization (PMF) (Mnih & Salakhutdinov, 2008), proposed ten years ago.

## F.1 PROBABILISTIC MATRIX FACTORIZATION (PMF)

PMF is used in several application areas, mainly collaborative filtering, a typical class of recommendation algorithms. PMF learns a user matrix \(U\in \mathbb{R}^{K\times N}\) and an item matrix \(V\in \mathbb{R}^{K\times J}\) that restore the evaluation matrix. Here, \(r_{ij}\) is the evaluation of item \(j\) by user \(i\), and the evaluation matrix is denoted by \(R\in \mathbb{R}^{N\times J}\). The column vectors of the user matrix \(U\) and the item matrix \(V\) are denoted by \(\mathbf{u}_{i}\) and \(\mathbf{v}_{j}\), respectively; \(K\) is the dimension of these vectors, \(N\) is the number of users, and \(J\) is the number of items. \(I_{ij}\) is the indicator function that takes the value 1 when the evaluation \(r_{ij}\) exists and 0 otherwise. The log-likelihood of PMF is \[\log p(R|U,V,\sigma^{2}) = \sum_{i}\sum_{j}I_{ij}\log \mathcal{N}(r_{ij}|\mathbf{u}_{i}^{\mathrm{T}}\mathbf{v}_{j},\sigma^{2}). \quad (12)\] The objective is to find the \(\mathbf{u}_{i}\) and \(\mathbf{v}_{j}\) that maximize the above.

Relationship to DualVAE DualVAE is an end-to-end coupling of a VAE and PMF; we can see DualVAE as PMF extended to a generative model. \(\mathbf{u}_{i}\) in Equation 12 corresponds to the domain embedding vector in DualVAE, \(\mathbf{v}_{j}\) corresponds to the latent vector, and \(r_{ij}\) corresponds to the likelihood that item \(j\) belongs to domain \(i\).
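To illustrate Eq. (12), a small NumPy sketch of the masked PMF log-likelihood follows; the rating matrix and observation mask are toy data.

```python
import numpy as np

rng = np.random.default_rng(5)
K, N, J = 8, 60, 100                    # latent dim, users, items

U = rng.normal(size=(K, N))             # user matrix
V = rng.normal(size=(K, J))             # item matrix
R = rng.normal(size=(N, J))             # toy evaluation matrix
I_mask = rng.random((N, J)) < 0.1       # I_ij: which ratings exist
sigma2 = 1.0

# Eq. (12): sum of Gaussian log-densities over the observed entries only.
pred = U.T @ V                          # (N, J) matrix of u_i^T v_j
log_density = -0.5 * ((R - pred) ** 2 / sigma2 + np.log(2 * np.pi * sigma2))
log_lik = np.sum(I_mask * log_density)
print(log_lik)
```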
## F.2 EXPERIMENTAL ANALYSIS

## F.2.1 EFFECT OF END-TO-END COUPLING

We experimentally show that the DualVAE outperforms a non-end-to-end coupling. We compared two models: one trained end-to-end to regress the evaluation of an image by taking the inner product of the VAE's hidden representation and the domain embedding (DualVAE), and the other, which first learns the hidden representation of a VAE and then learns to regress the evaluation by the same inner product (VAE-PMF). We used the SCUT-FBP5500 dataset (Figure 17), split into 5,000 training images with 60 evaluators and 500 test images with 60 evaluators. We quantitatively compared the two models in terms of the root mean square error (RMSE) of the model prediction and the reconstruction error on the test images. The results show that DualVAE achieved a much smaller RMSE; moreover, although DualVAE constrains its hidden representation to regress the evaluations, its reconstruction error was almost the same as that of VAE-PMF. This suggests that DualVAE can generate images as clear as those of a vanilla VAE.

Table 2: Comparison of DualVAE with VAE + PMF.

<table><tr><td>Method</td><td>RMSE</td><td>Reconstruction loss</td></tr><tr><td>VAE + PMF</td><td>0.423</td><td>8.19 × 10⁴</td></tr><tr><td>DualVAE</td><td>0.356</td><td>8.20 × 10⁴</td></tr></table>

![](images/15_0.jpg)
<center>Figure 11: RMSE and reconstruction loss. DualVAE is far superior to VAE + PMF in prediction accuracy, and there is almost no difference in reconstruction error between them. </center>

## F.2.2 ROBUSTNESS TO SPARSITY

In addition to generalization capability, another benefit inherited from PMF is robustness to sparsity, as PMF is robust to a matrix with many missing values. We experimentally demonstrate that DualVAE is likewise robust to sparse labels. We calculate the rs and ts of Algorithm 2 on 160 CelebA test images and plot the figure below while changing the missing ratio of CelebA's domain labels and the \(\lambda\) in Equation 10.

![](images/15_1.jpg)
<center>Figure 12: Scatter plot of the missing ratio of CelebA's labels and the DIS of DualVAE. The variable s is the missing ratio. The original image is shown at the top left of the figure. The attributes of the original images are transformed to blond hair, eyeglasses, and mustache. The vertical axis is the ts of Algorithm 2, and the horizontal axis is the rs of Algorithm 2. The DIS grows toward the upper right corner. </center>

![](images/16_0.jpg)
<center>(a) \(s = 0\). The characteristics are kept and domain transfer succeeds. </center>

![](images/16_1.jpg)
<center>(b) \(s = 0.9\). Although the sparseness of the labels is high, domain transfer was still conducted rather well. </center>

![](images/16_2.jpg)
<center>(c) \(s = 0.99\) (bad example). Image quality is poor, and domain transfer is not conducted properly. </center>

Figure 13: Sparsity analysis under various sparsity levels.

From Figure 13, the characteristics are kept in the upper-right plots while domain transfer is conducted at the same time. Moreover, the method is robust to the sparseness of the domain labels, and the DIS does not drop even when 90% of the labels are missing. On the other hand, we show that StarGAN is not as robust as DualVAE with respect to sparseness: when 90% of the domain labels are missing, StarGAN cannot learn at all and generates identical images.

![](images/17_0.jpg)
<center>Figure 14: Scatter plot of the missing ratio of CelebA's labels and the DIS of StarGAN. The variable s is the missing ratio. The vertical axis is the ts of Algorithm 2, and the horizontal axis is the rs of Algorithm 2. The DIS grows toward the upper right corner. </center>

![](images/17_1.jpg)
(a) \(s = 0\). The characteristics are kept, and only a small amount of domain transfer is achieved.

![](images/18_0.jpg)
<center>(b) \(s = 0.9\).
Identical images are generated, and domain transfer is not properly conducted. </center>

## G FURTHER EXPERIMENTS WITH VARIOUS VALUES OF THE HYPERPARAMETER \(\alpha\)

We conducted a comparison experiment against the existing methods while changing \(\alpha (= \sigma^{- 2})\) in Equation 9, with the number of domains set to 40. As the results below show, the performance of DualVAE is robust to \(\alpha\).

![](images/18_1.jpg)
<center>Figure 16: Plot of DVAE, UFDN, and StarGAN as \(\alpha\) is changed. The performance of DVAE is robust to \(\alpha\), and DVAE outperforms the existing methods based on the DIS. </center>

## H MODEL DETAILS

This section describes the three models used in the domain adaptation tasks over three types of domains: environment, attribute, and class.

Environment First, we describe the experimental setting for domain transfer to the ideal image of each individual. We assumed that the beauty criterion for evaluating the facial images depends on the gender of the person in the target image; therefore, we added gender information to the images by applying CGAN (Mirza & Osindero, 2014) to the VAE. We normalized the scores to \([-1,1]\) to accelerate learning. We then considered the specific model structure of DualVAE. Both the input and output images were RGB images, \(x \in \mathbb{R}^{256 \times 256 \times 3}\). We used convolutional networks for the encoder, with stride 2 for the convolutions and no pooling. Convolution, batch normalization (Ioffe & Szegedy, 2015), and LeakyReLU were repeated four times and subsequently connected to fully connected layers; after further batch normalization and LeakyReLU layers, a 63-dimensional latent variable was obtained. The decoder has a completely symmetric shape, with deconvolution layers instead of convolution layers. For the gender attribute, we set 0 as female and 1 as male: an image \(x \in \mathbb{R}^{256 \times 256 \times 1}\) consisting of 0s or 1s was added to the input of the encoder, and a scalar of 0 or 1 for gender was appended to the latent variable input to the decoder. The detailed structure is Structure A (Preference) in Table 3. We optimized the DualVAE on SCUT-FBP5500; because CelebA has no face-evaluation data, we used it only to optimize the VAE, alternating learning between the two datasets. Example images of SCUT-FBP5500 (Liang et al., 2018) are shown in Figure 17, from which we can see that the evaluation values depend on each rater.

Attribute Next, in the comparative experiment with several models, domain transfer was performed with only the CelebA data and 40, 20, 10, or 5 domains. We experimented with several parameters of the models; in particular, the dimension of the latent variable and the learning rate were randomly selected. Both the input and output images were RGB images, \(x \in \mathbb{R}^{128 \times 128 \times 3}\). The detailed structure is Structure B (Attribute) in Table 3.

Class Finally, we describe the experimental setting of domain transfer on the MNIST dataset; the results are reported in subsection C.2. Both the input and output images were grayscale images, \(x \in \mathbb{R}^{28 \times 28 \times 1}\). The detailed structure is Structure C (Class) in Table 3.

Table 3: The model structures of DualVAE used in our experiments. Conv stands for convolution, Deconv for deconvolution, and FC for fully connected; the numbers in each parenthesis are the input and output channels.
All convolution kernels are \(4\times 4\) with stride 2. Batch2d denotes batch normalization in two dimensions, Batch1d denotes batch normalization in one dimension, and LReLU stands for LeakyReLU. A code sketch of the Class structure is given at the end of Appendix I.

<table><tr><td>Preference</td><td>Attribute</td><td>Class</td></tr><tr><td>Conv(3+1, 32)</td><td>Conv(3, 64)</td><td>Conv(1, 64)</td></tr><tr><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td></tr><tr><td>Conv(32, 64)</td><td>Conv(64, 128)</td><td>Conv(64, 128, 4, 4)</td></tr><tr><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td></tr><tr><td>Conv(64, 128)</td><td>Conv(128, 256)</td><td>FC(128*7*7, 1024)</td></tr><tr><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td><td>Batch1d, LReLU</td></tr><tr><td>Conv(128, 256)</td><td>FC(256*16*16, 1024)</td><td>FC(1000, 100*2)</td></tr><tr><td>Batch2d, LReLU</td><td>Batch1d, LReLU</td><td>FC(100, 1024)+FC(100, 10)</td></tr><tr><td>FC(256*16*16, 1024)</td><td>FC(1024, dim*2)</td><td>Batch1d, LReLU</td></tr><tr><td>Batch1d, LReLU</td><td>FC(dim, 1024)+FC(dim, 40)</td><td>FC(1024, 128*7*7)</td></tr><tr><td>FC(1024, 63*2)</td><td>Batch1d, LReLU</td><td>LReLU</td></tr><tr><td>FC(63+1, 1024)+FC(63+1, 60)</td><td>FC(1024, 256*16*16)</td><td>Deconv(128, 64)</td></tr><tr><td>Batch1d, LReLU</td><td>LReLU</td><td>Batch2d, LReLU</td></tr><tr><td>FC(1024, 256*16*16)</td><td>Deconv(256, 128)</td><td>Deconv(64, 1)</td></tr><tr><td>LReLU</td><td>Batch2d, LReLU</td><td>Sigmoid</td></tr><tr><td>Deconv(256, 128)</td><td>Deconv(128, 64)</td><td></td></tr><tr><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td><td></td></tr><tr><td>Deconv(128, 64)</td><td>Deconv(64, 3)</td><td></td></tr><tr><td>Batch2d, LReLU</td><td>Sigmoid</td><td></td></tr><tr><td>Deconv(64, 32)</td><td></td><td></td></tr><tr><td>Batch2d, LReLU</td><td></td><td></td></tr><tr><td>Deconv(32, 3)</td><td></td><td></td></tr><tr><td>Sigmoid</td><td></td><td></td></tr></table>

![](images/20_0.jpg)
<center>Figure 17: (a) Distribution of the respective scores within [1, 5] rated by 60 people. (b) The images of SCUT-FBP5500. </center>

## I FURTHER EXAMPLES OF DOMAIN TRANSFER THROUGH DUALVAE

The results below show domain adaptation performed by DualVAE on randomly sampled images from two datasets: MNIST and CelebA.

## I.1 MNIST (10 DOMAINS)

![](images/21_0.jpg)
<center>Figure 18: DualVAE stably transfers samples across 10 domains while domain-irrelevant features (e.g., style) are kept. </center>

## I.2 CELEBA (40 DOMAINS)

![](images/22_0.jpg)

![](images/23_0.jpg)

![](images/24_0.jpg)
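As a concrete reading of the Class column of Table 3, the following PyTorch sketch assembles the MNIST encoder/decoder. Where the table is ambiguous (e.g., the FC(1000, 100*2) row, which we read as FC(1024, 100*2), and the padding, which the table does not specify), our choices are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MnistDualVAE(nn.Module):
    """Sketch of Table 3's Class structure: 28x28x1 input, 100-dim latent."""
    def __init__(self, z_dim=100, n_domains=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1),    # 28 -> 14
            nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 14 -> 7
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 7 * 7, 1024),
            nn.BatchNorm1d(1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, z_dim * 2),                  # mean and log-variance
        )
        # The FC(100, 10) branch: domain embeddings acting as in Eq. (6).
        self.U = nn.Linear(z_dim, n_domains, bias=False)
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, 1024),
            nn.BatchNorm1d(1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 128 * 7 * 7), nn.LeakyReLU(0.2),
            nn.Unflatten(1, (128, 7, 7)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 7 -> 14
            nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),    # 14 -> 28
            nn.Sigmoid(),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.decoder(z), self.U(mu), mu, logvar

model = MnistDualVAE()
x = torch.randn(4, 1, 28, 28)
recon, labels, mu, logvar = model(x)
print(recon.shape, labels.shape)   # torch.Size([4, 1, 28, 28]) (4, 10)
```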
By letting \(\gamma_{i} = p(\mathcal{D}_{i})\) , the objective is identical to: \[J(\theta) = \underbrace{\mathbb{E}_{x\sim p,i\sim[n]}\left[p(\mathcal{D}_{i}|x)\log f_{\theta}(\mathcal{D}_{i}|x)\right]}_{\mathrm{discriminator}} + \underbrace{\mathbb{E}_{x\sim p}\left[\log p_{\theta}(x)\right]}_{\mathrm{prior}} + \underbrace{\gamma\log p(\theta)}_{\mathrm{regularizer}}, \quad (3)\] where \([n]\) is a uniform distribution over \(\{1,\ldots ,n\}\) and \(f(\bar{\mathcal{D}} |x) = \mathbb{E}_{i\sim [n]}\left[f(\mathcal{D}_{i}|x)\right]\) . The first term is the likelihood from the discriminator; the second term is the prior learned by a generative model, including VAE; and the last term is the regularizer. Because the equation is intractable, we use Monte Carlo sampling to estimate the function. During the estimation, we initially sample \(x_{1},\ldots ,x_{m}\) from the prior \(p(x)\) and subsequently obtain the binary labels \(y_{ij}\in \{0,1\}\) from each discriminator \(y_{ij}\sim p(\mathcal{D}_{i}|x_{j})\) . Since the number of labels from supervises is \(n m\) , the situation that the sparse labels: \(k< < m n\) is considered. Further, some discriminators only provide parts of the labels. In the situation, the missing values are 0- padded: \(y_{ij} = 0\) . \[J(\theta)\approx \frac{1}{n}\sum_{i = 1}^{n}\sum_{j = 1}^{m}y_{ij}\log f_{\theta}(y_{ij}|x_{j}) + \frac{1}{m}\sum_{j = 1}^{m}\log p_{\theta}(x_{j}) + \bar{y}\log p(\theta), \quad (4)\] where \(\approx\) indicates Monte Carlo estimation and \(\bar{y} = \sum_{i = 1}^{n}\sum_{j = 1}^{m}y_{ij} / k\) . In the limit of \(n\to \infty\) , the right side of the equation is identical to the left side. ### 3.3 DUAL VARIATIONAL AUTOENCODER (DUALVAE) We extended the VAE for multi- domain transfer to demonstrate our concept of multi- domain semisupervision. Our proposed model, dual variational autoencoder (DualVAE), models each domain \(p_{i}(x)\) as a posterior distribution \(p(x|\mathcal{D}_{i})\) that is similar to that observed in a conditional VAE. Fig. 2 depicts the VAE and DualVAE graphical models. The major feature of DualVAE is domain embedding, where all the domains and the prior share the same latent space \(\mathcal{Z}\) . For the prior distribution, \(p(z) = \mathcal{N}(z|0,I)\) and \(p(z|\mathcal{D}_{i}) = \mathcal{N}(z|\mu_{i},\sigma^{2}I)\) , where \(\mu_{i}\in \mathcal{Z}\) is an embedding and \(I\) is a unit matrix in \(\mathcal{Z}\) . In the following, we denote \(\sigma^{2}I = \sigma^{2}\) without loss of generality. The domain \(\mathcal{D}_{i}\) is characterized only by its embedding \(\mu_{i}\) . Here, \(\mu_{0}\) is the embedding of the prior that can be assumed to be \(\mu_{0} = 0\) . Training DualVAE is virtually equivalent to simultaneously training \((n + 1)\) VAEs which share a parameter, including the prior. Using conjecture distribution for the prior \(p(z)\) , the posterior distribution is observed to be a normal distribution. Therefore, all the posteriors are VAEs. The joint distribution can be given as follows: #### 3.3.1 VAE: THE PRIOR \(p_{\theta}(x)\) A VAE (Kingma & Welling, 2013) is used to model the prior \(p(x)\) , a deep generative model that employs an autoencoder to model the hidden variable as random variable. 
The benefit of a VAE is that it can be used to model each distribution as a normal distribution in \(\mathcal{Z}\) , achieved by maximizing the variational lower bound of \(\log p(x)\) as follows: \[\log p_{\theta}(x)\geq \mathcal{L}_{\theta}(x) = \mathbb{E}_{z\sim q_{\phi}(\cdot |x)}\left[\log p_{w}(x|z)\right] - D_{\mathrm{KL}}\left(q_{\phi}(z|x)\| p(z)\right), \quad (5)\] where \(\phi ,w\in \theta\) is a parameter of the encoder and the decoder, respectively. The objective is to learn a pair of the encoder \(p_{w}(x|z)\) and the decoder \(q_{\phi}(z|x)\) to maximize \(\mathcal{L}(x)\) . \(z\) acts as a prior \(p(z) = \mathcal{N}(z|0,I)\) . The lower bound \(\mathcal{L}_{\theta}(x)\) is derived using the reconstruction error and penalty term as the KL divergence between the model and the prior \(p(z)\) . Further, the gradient of the reconstruction term can be calculated using the Monte Carlo method, and because the construction term is the KL divergence between two normal distributions, it can be analytically calculated. <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 2: Left: Graphical models of the probabilistic models of VAE and DualVAE. The gray and white circles indicate the observed variables and latent variables, respectively. Symbols without circles indicate the constants. Arrows between the symbols indicate probabilistic dependency (e.g., X generates Y). A rectangle with suffixes indicates a block, which comprises multiple elements. Right: The network structure of DualVAE. The label is structured as the inner product of latent \(z_{\theta}\) and domain embedding \(z_{i}\) . </center> #### 3.3.2 DISCRIMINATOR \(f_{\theta}(\mathcal{D}_{i}|x)\) Using the definition and the Bayesian theorem, \(\log f_{\theta}(\mathcal{D}_{i}|x)\) can be written as follows: \[\begin{array}{r l r} & {} & {\log f_{\theta}(\mathcal{D}_{i}|x) = \log \int \frac{p_{\theta}(\mathcal{D}_{i}|z)p_{\theta}(z|x)}{p_{\theta}(\mathcal{D}_{i})} d z = \log \int \frac{p_{\theta}(z|\mathcal{D}_{i})p_{\theta}(z|x)}{p_{\theta}(z)} d z}\\ & {} & {= \log \int \frac{\mathcal{N}(z|\mu_{i},\sigma^{2})\mathcal{N}(z|\mu_{\phi}(x),\sigma^{2})}{\mathcal{N}(z|0,I)} d z = \frac{\mu_{i}^{\mathrm{T}}\mu_{\phi}(x)}{\sigma^{2}}.} \end{array} \quad (6)\] The equation above indicates \(\log f_{\theta}(\mathcal{D}_{i}|x)\) can be written simply as the inner product between \(\mu_{i}\) and \(\mu_{\phi}(x)\) , and the objective can be written as follows: \[\mathbb{E}_{i\sim [n]}\left[y_{i}\log f_{\theta}(\mathcal{D}_{i}|x)\right] = \frac{\mathbf{y}^{\mathrm{T}}U\mu_{\phi}(x)}{n\sigma^{2}} = \alpha \mu_{U}^{*}(\mathbf{y})\mu_{\phi}(x), \quad (7)\] where \(U = (\mu_{1},\ldots ,\mu_{n})^{\mathrm{T}}\) , \(\mu_{U}^{*} = \mathbf{y}^{\mathrm{T}}U / n\) and \(\alpha = \sigma^{- 2}\) . Interestingly, it only requires one additional parameter \(U\) except a hyperparameter \(\alpha\) . \(U\) is named as a domain embedding matrix, representing the set of the domain prototypes. Domain embedding makes it possible to extend our method to infinite domains such as a continuous domain. In fact, \(\mu_{U}^{*}(\mathbf{y})\in \mathcal{Z}^{*}\) represents a prototype of mixed domains indicated by \(\mathbf{y}\) in a domain latent space \(\mathcal{Z}^{*}\) , a dual space of \(\mathcal{Z}\) . Note that \(\dim \mathcal{Z} = \dim \mathcal{Z}^{*}\) . 
#### 3.3.3 REGULARIZER \(p(\theta)\)

The overall parameters of the DualVAE are \(\theta = (w,\phi ,U)\) , where \(w\) is the decoder's parameter, \(\phi\) is the encoder's parameter, and \(U\) is the domain embedding matrix. While a typical VAE does not assume any distribution over \(w\) and \(\phi\) , \(p(U)\) is set as an exponential distribution with an additional hyperparameter \(\beta \in (0,\infty)\) to obtain a sparse representation: \(p(U)\propto \exp (-\beta \| U\|_{1} / \gamma)\) , where \(\| \cdot \|_{1}\) is the 1-norm. Thus, \[\log p(\theta) = \log p(U) = -\frac{\beta}{\gamma}\| U\|_{1} - m\dim \mathcal{Z}\log 2 + \log \beta -\log \gamma . \quad (8)\] As all terms except the first are independent of \(\theta\) , we hereafter ignore them as constants.

#### 3.3.4 THE FINAL FORM

By putting together the prior, the discriminator, and the regularizer, the variational lower bound of the point-wise objective of the DualVAE, \(J(\theta |x,\mathbf{y})\) , can be written in a surprisingly simple form: \[J(\theta |x,\mathbf{y})\geq \mathcal{L}_{\theta}(x) + \alpha \langle \mu_{\phi}(x),\mu_{U}(\mathbf{y})\rangle -\beta \| U\|_{1}, \quad (9)\] where \(\langle u,v\rangle = v^{\mathrm{T}}u\) . Consequently, a DualVAE maximizes a duality pairing \(\langle \cdot ,\cdot \rangle :\mathcal{Z}\times \mathcal{Z}^{*}\to \mathbb{R}\) between the sample latent space \(\mathcal{Z} = \mathcal{Z}_{\phi}(\mathcal{X})\) and the domain latent space \(\mathcal{Z}^{*} = \mathcal{Z}_{U}^{*}(\mathcal{Y})\) , where <--- Page Split ---> \(\mathcal{Y} = \{0,1\}^{n}\) . Note that the objective requires only two additional hyperparameters beyond those of the VAE. If \(\alpha ,\beta \to 0\) , it is equivalent to a single VAE. Intuitively, \(1 / \alpha\) and \(1 / \beta\) control the variance and the bias of the domain embeddings, respectively. The training algorithm of the DualVAE is shown in Algorithm 1.

Algorithm 1 Variational domain adaptation through DualVAE

Require: observations \((x_{j})_{j = 1}^{n}\) , batch size \(M\) , VAE/encoder optimisers \(g\) , \(g_{e}\) , hyperparameters \(\alpha ,\beta\) , and the label matrix \(Y = (y_{j})_{j = 1}^{n}\) .
Initialize the encoder, decoder, and domain embedding parameters \(\phi ,w,U\)
repeat
Randomly select a batch \((x_{j})_{j\in B}\) of size \(M\)
Sample \(z_{j}\sim q_{\phi}(z|x_{j})\ \forall j\in B\)
\(\phi ,w\gets g(\nabla_{\phi ,w}\sum_{j\in B}[\log p_{w}(x_{j}|z_{j}) - D_{\mathrm{KL}}(q_{\phi}(z|x_{j})\| p(z))])\)
\(\phi ,U\gets g_{e}(\nabla_{\phi ,U}\sum_{j\in B}[\alpha (Y_{:,j} - U^{\mathrm{T}}z_{j})^{2} + \beta \| U\|_{1}])\)
until convergence of parameters \(\theta = (\phi ,w,U)\)
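To make the procedure concrete, the following condensed PyTorch sketch performs one iteration of Algorithm 1. The `encoder`, `decoder`, and optimizer objects and all shapes are illustrative assumptions: `encoder(x)` is assumed to return the mean and log-variance of \(q_{\phi}(z|x)\), `opt_vae` is assumed to cover \((\phi, w)\), and `opt_emb` to cover \((\phi, U)\).

```python
import torch
import torch.nn.functional as F

def dualvae_step(x, y, encoder, decoder, U, opt_vae, opt_emb, alpha=1.0, beta=1e-3):
    """One training iteration in the spirit of Algorithm 1.

    x: (B, ...) batch; y: (n, B) binary domain labels; U: (n, d) embedding matrix."""
    # VAE update: maximize the lower bound of Eq. (5).
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized z ~ q(z|x)
    recon = F.mse_loss(decoder(z), x, reduction="sum")    # Gaussian reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    opt_vae.zero_grad()
    (recon + kl).backward()
    opt_vae.step()

    # Embedding update: fit the labels with the duality pairing U z, L1-penalize U.
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    emb_loss = alpha * ((y - U @ z.t()) ** 2).sum() + beta * U.abs().sum()
    opt_emb.zero_grad()
    emb_loss.backward()
    opt_emb.step()
```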
## 4 EXPERIMENT

In an original numerical experiment on domain adaptation, we confirmed that the DualVAE learns multiple distributions both qualitatively and quantitatively. As with existing methods, domain adaptation was evaluated via image-generation tasks. First, we performed a facial image recommendation task, a content-based recommendation task for modeling the preferences of users. Second, we performed a standard domain transfer task with 40 domains in CelebA (Liu et al., 2015) and showed that the DualVAE outperformed two state-of-the-art methods based on GAN and VAE.

The objective of the first task was to generate images preferred by a specific user. We set the input space \(\mathcal{X}\) as raw images, the prior \(p(x)\) as faces, and the domain \(\mathcal{D}_{i}\) as a user. We used the CelebA and SCUT-FBP5500 datasets as samples from the prior. The objective of the task was to generate samples from \(p_{\theta}(x|\mathcal{D}_{i})\) , exhibiting the images preferred by a user. For the content-based recommendation, we used labels \(y_{i}\sim p(\mathcal{D}_{i}|x)\) from the existing SCUT-FBP5500 dataset, with 5,500 faces and 60 users. The purpose of the second task was to transfer samples from \(p(x)\) into samples from \(p_{\theta}(x|\mathcal{D}_{i})\) . We set the prior \(p(x)\) as face images and the posterior \(p_{\theta}(x|\mathcal{D}_{i})\) as face images with certain attributes of CelebA. We used the attributes of CelebA as labels \(y_{i}\sim p(\mathcal{D}_{i}|x)\) .

The results revealed that the DualVAE successfully learned the target distribution \(p_{\theta}(x|\mathcal{D}_{i})\) both quantitatively and qualitatively. Quantitatively, we confirmed that the discriminator learned the distribution by evaluating the negative log-likelihood loss \(-\log p_{\theta}(\mathcal{D}_{i}|x)\) . We evaluated the samples using the domain inception score (DIS), a score for evaluating the transformation of images into multiple target domains. Notably, the DIS of the DualVAE was higher than that of several other models. Qualitatively, we demonstrated that an image can be transferred so as to improve its evaluation by interpolating the image. We further exhibited several beautiful facial images that the users were conscious of by decoding each domain embedding \(\mu_{i}\) , which can be considered a projection of the ideal inside each user. In addition, the 40 domain-transferred images produced by the proposed method on CelebA were better than the images produced by other models.

### 4.1 DATASET

CelebA CelebA (Liu et al., 2015) comprises approximately 200,000 images of faces of celebrities with 40 attributes.

SCUT-FBP5500 SCUT-FBP5500 (Liang et al., 2018) comprises 5,500 face images, each rated on a 5-point scale by 60 people in terms of beauty preference. The face images can be categorized as Asian male, Asian female, Caucasian male, and Caucasian female, with 2,000, 2,000, 750, and 750 images, respectively. <--- Page Split --->

### 4.2 RESULT

The quantitative results of the experiment are obtained by evaluating the images generated by several models using the Domain Inception Score (DIS). Although the Inception Score (Salimans et al., 2016) is a score for measuring generated images, it can only measure the diversity of the images and is not designed to evaluate domain transfer. Therefore, we propose the DIS, a score for evaluating the transformation of images into multiple target domains. The DIS is a scalar value computed from the output of an Inceptionv3 (Szegedy et al., 2016) pretrained to output the domain label, and it is evaluated as the sum of two elements. The first is whether the domain transfer of the original image has been successful (transfer score), and the second is whether the features other than the transferred domain are retained (reconstruction score). A more detailed explanation of the DIS is provided in the appendix.

Comparison of a DualVAE and a single-domain VAE A DualVAE can transform an image of the source domain into images of multiple target domains with one model. However, as a simpler method, it is also possible to transfer an image of the source domain to images of the multiple target domains by creating multiple models. We call each of these models a Single Domain VAE (SD-VAE).
Since an SD-VAE converts an image of one source domain to an image of one target domain, as many models as target domains are required; thus, 60 models required training. We demonstrated that the DualVAE performance was equal to or higher than that of the SD-VAE using the DIS. With respect to the output images of these two models, the one with a higher DIS value was considered capable of outputting more ideal images. We calculated the DIS of 200 test images transferred by these two models. The DIS of the DualVAE was -0.0185, whereas that of the SD-VAE was -0.0282. Thus, the DIS of the DualVAE was roughly 0.01 higher than that of the SD-VAE.

Comparison of DualVAE and several models The DualVAE was compared with several models capable of performing image-to-image translation for multiple domains using a single model. In this experiment, only the CelebA dataset was used, with its attributes serving as the domains. Also, the input images were resized to \(128\times 128\) . For each model, the dimension of the latent variable and the learning rate were randomly changed, the DIS was calculated several times, and the average and the standard deviation were obtained. The DualVAE obtained a higher DIS than the other models.

Table 1: Average DISs for DualVAE and three domain adaptation baselines under random hyperparameter search, which demonstrates that DualVAE outperforms the other models on the DIS without requiring hyperparameter search. Typical generated images are shown in the Appendix.

<table><tr><td>Method</td><td>5 domains</td><td>10 domains</td><td>20 domains</td><td>40 domains</td></tr><tr><td>CVAE(Kingma et al., 2014)</td><td>-0.055±0.011</td><td>-0.108±0.017</td><td>-0.112±0.007</td><td>-0.152±0.006</td></tr><tr><td>UFDN (Liu et al., 2018)</td><td>0.251±0.011</td><td>0.160±0.013</td><td>0.075±0.008</td><td>-0.002±0.003</td></tr><tr><td>StarGAN (Choi et al., 2017)</td><td>0.239±0.261</td><td>-0.094±0.346</td><td>0.068±0.188</td><td>0.050±0.032</td></tr><tr><td>DualVAE</td><td>0.278±0.026</td><td>0.180±0.011</td><td>0.163±0.025</td><td>0.140±0.020</td></tr></table>

### 4.3 VISUALIZATION OF DOMAIN TRANSFER

We transferred images by interpolating between the original and the target domain images. We calculated the following vector \(\mathbf{w}_i\) : \[\mathbf{w}_i = \mathbf{z} + \lambda \pmb{\mu}_i. \quad (10)\] Here, \(\mathbf{w}_i\) was constrained to have the same norm as \(\mathbf{z}\) to retain as much of the original features as possible. By changing \(\lambda\) and decoding \(\mathbf{w}_i\) , five images were generated, ranging from unideal to ideal reconstructions, for each of three sample users \((i = 14, 18, \text{and } 32)\) , and interpolation was performed to approach the ideal image \(\mathbf{x}_i\) in Figure 3. In addition, we visualize the transferred images of the 40 attributes produced by the proposed method and other models in Figure 4. Although StarGAN and UFDN <--- Page Split ---> retained the characteristics of the original image considerably well, domain transfer was qualitatively poor, especially when the number of domains was as large as 40 attributes. ![](images/7_0.jpg) <center>Figure 3: Images obtained from our model by decoding \(\mathbf{w}_{i}(i = 14,18\) , and 32) while changing the value of \(\lambda\) . The reconstructed images are shown in the center. </center> ![](images/7_1.jpg) <center>Figure 4: Scatter plot of DVAE, UFDN, StarGAN and CVAE when we change several parameters.
Different colors denote different models. All 40 domain-transferred images are in subsection C.1. </center>

## 5 CONCLUSION

Variational domain adaptation, a unified framework for learning multiple distributions in a single network, is proposed in this study. Our framework uses one known source as a prior \(p(x)\) and a binary discriminator \(p(\mathcal{D}_{i}|x)\) discriminating the target domain \(\mathcal{D}_{i}\) from the others; this is in contrast with existing frameworks in which samples undergo domain transfer through deep generative models. Consequently, our framework regards the target as a posterior characterized through Bayesian inference, \(p(x|\mathcal{D}_{i})\propto p(\mathcal{D}_{i}|x)p(x)\) . This was exhibited by the proposed DualVAE. The major feature of the DualVAE is domain embedding, a powerful tool that encodes all the domains and the samples obtained from the prior into normal distributions in the same latent space, learned by a unified network through variational inference. In the experiment, we applied our framework and model to a multi-domain image generation task. CelebA and face image data rated by 60 users were used, and the results revealed that the DualVAE outperformed StarGAN and UFDN. Several directions should be considered for future research. First, we intend to extend DualVAEs to learning in complex domains, such as high-resolution images, with models such as Glow (Kingma & Dhariwal, 2018). Second, we will perform experiments considering wider domains with respect to beauty. We expect that our proposed method will contribute to society in a number of ways and will help to deal with the paradigm of multiple contexts: multimodal, multi-task, and multi-agent. <--- Page Split --->

## A LATENT SPACE

We visualized the latent space \(\mathcal{Z}\) of the VAE and the DualVAE. The VAE differs from the DualVAE in that evaluation regression is not conducted during training. For each model, we obtain 5,500 latent vectors of 63 dimensions by encoding the 5,500 images from SCUT-FBP5500. We obtained a scatter plot after using UMAP (McInnes & Healy, 2018) to reduce the number of dimensions to two. The average score is indicated by colors ranging from red to blue. As can be observed from the UMAP of the DualVAE, the gradient of the score is learned, and it represents the user vector (domain embedding vector) in Figure 5. ![](images/9_0.jpg) <center>Figure 5: Latent visualization of VAE (left) and DualVAE (right) demonstrates that DualVAE learns a good prior to model the domains. The heat map indicates the mean score over all the users. </center>

## B DOMAIN INCEPTION SCORE (DIS)

Although the Inception Score (Salimans et al., 2016) is a score for measuring generated images, it can only measure the diversity of the images and is not designed to evaluate domain transfer. Therefore, we propose the DIS, a score for evaluating the transformation of images into multiple target domains. The DIS is a scalar value, and it is evaluated as the sum of two elements. The first is whether the domain transfer of the original image has been successful (transfer score), and the second is whether the features other than the transferred domain are retained (reconstruction score). We calculate the DIS using Algorithm 2. First, we assume that there are N domains and that we know which domain each image belongs to.
We fine-tuned Inceptionv3 (Szegedy et al., 2016) using images X as inputs and domains as outputs. To enable the model to classify the images into the domains, we replaced the last layer of the model with a new layer that has N outputs. Second, we transferred test <--- Page Split ---> images into the N domains using Equation 10 and fed the transferred images into the Inceptionv3 pretrained above. Through this process we obtained an \(N \times N\) matrix for every original image, because one image was transferred into N domains and each domain image was mapped to an N-dimensional vector. We then mapped the original image into an N-dimensional vector using Inceptionv3 and subtracted this vector from each row of the above \(N \times N\) matrix. We name this matrix M. The key points are (1) that the diagonal elements of M should be large, because we transferred the original image into the diagonal domains, and (2) that the off-diagonal elements of M should be small, because the transferred images should preserve the original features as much as possible. In a later subsection, we directly visualize these two elements and evaluate the models.

Algorithm 2 Domain Inception Score (DIS)

Require: observation \(x \in \mathcal{X}\) , Inceptionv3 \(f\) , domain transfer model \(m\) .
\(x^{\prime} \leftarrow m(x)\)
\(\mathbf{M} \leftarrow f(x^{\prime}) - f(x)\)
\(\mathrm{ts} \leftarrow \mathrm{average}(\mathrm{diag}(\mathbf{M}))\)
\(\mathrm{rs} \leftarrow - \mathrm{average}(\mathrm{abs}(\mathrm{notdiag}(\mathbf{M})))\)
\(\mathrm{DIS} \leftarrow \mathrm{ts} + \mathrm{rs}\)

In the algorithm, abs denotes taking the absolute value, diag denotes taking the diagonal elements of the matrix, notdiag denotes taking the non-diagonal elements, and average denotes taking the mean of multiple values.
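Under our reading of Algorithm 2, the score for a single original image can be computed from the fine-tuned Inceptionv3 outputs as in the numpy sketch below; the argument layout, one \(N \times N\) prediction matrix per image, is an assumption based on the description above.

```python
import numpy as np

def domain_inception_score(f_transferred, f_original):
    """Algorithm 2 for one image.

    f_transferred: (N, N) array; row i is the Inceptionv3 output for the image
                   transferred into domain i.
    f_original:    (N,) array; the Inceptionv3 output for the untouched image."""
    M = f_transferred - f_original                 # subtract f(x) from every row
    off_diag = M[~np.eye(M.shape[0], dtype=bool)]
    ts = np.diag(M).mean()                         # transfer score: diagonal should be large
    rs = -np.abs(off_diag).mean()                  # reconstruction score: off-diagonal small
    return ts + rs
```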
## C ADAPTATION OVER MANY DOMAINS

This section gives further results for Table 1, the experiment on domain adaptation over the 40 domains made from CelebA. In this experimental setting, we use the attributes in CelebA as domains, a setting used by several studies on domain adaptation (Choi et al., 2017). The results show that the DualVAE learns all 40 domains in one network, which indicates that the DualVAE is a practical way to learn over 10 domains.

Next, we show several experimental results obtained when we change the parameters of the models. Because StarGAN uses a GAN, it is not robust to the learning rate, and thus learning is not conducted well. Moreover, CelebA has 40 domains, which may be too many for StarGAN; this can also be considered one of the reasons that learning is not conducted well. Because reconstruction is conducted well, rs in Algorithm 2 becomes larger than that of the DualVAE. On the other hand, domain transfer is not conducted properly, so ts in Algorithm 2 becomes extremely small compared to that of the DualVAE. Therefore, as we can see from Table 1, the DIS becomes a very small value.

## C.1 CELEBA

![](images/10_0.jpg) <center>(a) DualVAE </center> <--- Page Split ---> ![](images/11_0.jpg) <center>Figure 6: Comparing domain transfer by several methods. (a) Although the image is blurry compared to StarGAN and the original features change significantly, domain transfer is conducted properly. (b) Although the characteristics of the original image are well preserved, domain transfer and reconstruction are not conducted. (c) Although the characteristics of the original image are well preserved, domain transfer is not conducted well. (d) The characteristics are kept, and a small amount of domain transfer is achieved. </center> <--- Page Split --->

## C.2 MNIST (10 DOMAINS)

Next, we conduct domain transfer experiments using the MNIST dataset. In this experiment, we demonstrate that it is possible to transfer an image to another label (domain) while not compromising the style of the original image. We also plot the relation with the DIS when labels are sparse. Moreover, we show in subsection I.1 that it is possible to transfer to another domain step by step. ![](images/12_0.jpg) <center>Figure 7: Scatter plot of the missing ratio of MNIST's labels and the DIS of the DualVAE. The variable s is the missing ratio. The original image is shown in the top right of the figure. The labels of the original images are transformed to zero, one, and two. The vertical axis is ts of Algorithm 2, and the horizontal axis is rs of Algorithm 2. The DIS grows toward the upper right corner. </center> ![](images/12_1.jpg) <center>Figure 8: Domain transfer by varying \(\lambda\) . (a) Good example: domain transfer to a different label is successful while keeping the characteristics of the reconstructed image. (b) Bad example: although the characteristics of the reconstructed images are kept, domain transfer to different labels is insufficient. (c) Bad example: although domain transfer to a different label is successful, the characteristics of the reconstructed images are slightly lost. (d) Bad example: domain transfer to different labels is not successful. </center> <--- Page Split --->

## D DOMAIN EMBEDDINGS

By reducing the dimensions of the 60 domain embedding vectors from 63 to 2 using UMAP (McInnes & Healy, 2018), the domain embedding vectors were visualized in a scatter plot. Furthermore, \(\mathbf{x}_{i}\) was visualized by decoding samples from the domain distribution. ![](images/13_0.jpg) <center>Figure 9: Scatter plot of the domain embedding vectors, and several decoded images of the samples from each domain. Six \(\mathbf{z}_{i}\) from the target domain distributions and their outputs \(\mathbf{x}_{i}\) were decoded. Furthermore, \(\mathbf{z}_{0}\) from the source domain data distribution and its output \(\mathbf{x}_{0}\) were also decoded. </center>

## E DOMAIN MIXING

In this section, we show that it is possible to conduct arithmetic operations among domains. For example, suppose we learned the embedding vector of a charming-image domain for each individual person. We can output a charming image for the group of people as a whole, without additional learning, simply by taking the average of the domain embedding vectors. Denoting the community preference by \(f_{I}\) and the personal evaluation model by \(f_{i}(= \mu_{i}^{\mathrm{T}}\mathbf{z}(x))\) , \[f_{I}(x) = \frac{1}{|I|}\sum_{i\in I}f_{i}(x) = \frac{1}{|I|}\sum_{i\in I}\mu_{i}^{\mathrm{T}}\mathbf{z}(x) = \bar{\mu}^{\mathrm{T}}\mathbf{z}(x), \quad (11)\] where \(\bar{\mu} = (1 / |I|)\sum_{i\in I}\mu_{i}\) is the average of the domain embedding vectors. Moreover, \(i\) is the index denoting the domain (person), \(I\) is the set of domains, and \(\mathbf{z}(\mathbf{x})\) is the latent vector of image \(\mathbf{x}\) . As shown in Equation 11, since the domain embedding vectors enter linearly, by taking the inner product of the average of these vectors \(\bar{\mu}\) and the latent vector \(\mathbf{z}\) , the average of the personal evaluations (the evaluation of the community) can be obtained. Therefore, by substituting \(\bar{\mu}\) for \(\mu_{i}\) in Equation 10, we can reconstruct face images with a high degree of community evaluation.
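A minimal numpy sketch of this mixing is given below, with Equation 10 applied using the averaged embedding; the norm constraint follows Section 4.3, and the function names are ours.

```python
import numpy as np

def community_score(z_x, U, members):
    """Eq. (11): f_I(x) = mean_i mu_i^T z(x) = mu_bar^T z(x)."""
    mu_bar = U[members].mean(axis=0)   # average of the member embeddings
    return mu_bar @ z_x

def community_transfer(z_x, U, members, lam=1.0):
    """Eq. (10) with mu_bar substituted for mu_i, keeping the norm of z."""
    mu_bar = U[members].mean(axis=0)
    w = z_x + lam * mu_bar
    return w * (np.linalg.norm(z_x) / np.linalg.norm(w))
```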
We reconstructed images toward higher (and lower) evaluation using 10 face images from both genders. Each image receives a higher evaluation toward the right. We can see that, gradually, the hollows of the face deepen, the beard disappears, the eyes become bigger, and the outline becomes sharper (Figure 10). <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 10: Images decoded to get closer to (or further from) the average of the domain embedding vectors. The middle is the reconstructed original image. </center>

## F CONNECTION TO THE HISTORY OF MATRIX FACTORIZATION

This section shows that the proposed method, DualVAE, is a natural generalization of probabilistic matrix factorization (PMF) (Mnih & Salakhutdinov, 2008), proposed ten years ago.

## F.1 PROBABILISTIC MATRIX FACTORIZATION (PMF)

PMF is used in several application areas, mainly collaborative filtering, a typical class of recommendation algorithms. PMF learns a user matrix \(U\in \mathbb{R}^{K\times N}\) and an item matrix \(V\in \mathbb{R}^{K\times J}\) that can restore the evaluation matrix. Here, \(r_{ij}\) is the evaluation value of item \(j\) by user \(i\) , and the evaluation matrix is denoted as \(R\in \mathbb{R}^{I\times J}\) . Moreover, the column vectors of the user matrix \(U\) and the item matrix \(V\) are denoted as \(\mathbf{u}_{i},\mathbf{v}_{j}\) , respectively. \(K\) is the dimension of these vectors, \(N\) is the number of users, and \(J\) is the number of items. \(I_{ij}\) is the indicator function that takes the value 1 when evaluation \(r_{ij}\) exists and 0 otherwise. The log likelihood of PMF is \[\log p(R|U,V,\sigma^{2}) = \sum_{i}\sum_{j}I_{ij}\log \mathcal{N}(r_{ij}|\mathbf{u}_{i}^{\mathrm{T}}\mathbf{v}_{j},\sigma^{2}). \quad (12)\] Our objective is to find the \(\mathbf{u}_{i}\) , \(\mathbf{v}_{j}\) that maximize the above.

Relationship to DualVAE DualVAE is an end-to-end coupling of a VAE and PMF. We can see DualVAE as PMF extended to a generative model: \(\mathbf{u}_{i}\) in Equation 12 corresponds to the domain embedding vector in DVAE, \(\mathbf{v}_{j}\) corresponds to the latent vector in DVAE, and \(r_{ij}\) corresponds to the likelihood that item \(j\) belongs to domain \(i\) .

## F.2 EXPERIMENTAL ANALYSIS

## F.2.1 EFFECT OF END-TO-END COUPLING

We experimentally show that the DualVAE outperforms a non-end-to-end coupling. We compared two models. One is the model trained end-to-end to regress the evaluation of an image by calculating the inner product of the hidden representation of the VAE and the domain embedding (DVAE). The other is a model that first learns the hidden representation of a VAE and then learns to regress the evaluation by the inner product as above (VAE-PMF). We used the SCUT-FBP5500 dataset (Figure 17) and split it into 5,000 training images with 60 evaluators and 500 test images with 60 evaluators. We quantitatively compared these two models in terms of the root mean square error (RMSE) of model predictions and the reconstruction error of test images. The results suggest that the DualVAE achieved a much smaller <--- Page Split ---> RMSE. Moreover, although the DualVAE constrains its hidden representation to regress the evaluation, the reconstruction error was almost the same as that of VAE-PMF. This suggests that the DualVAE can generate images as clear as those of a vanilla VAE.
Table 2: Comparison of the DualVAE with VAE-PMF

<table><tr><td>Method</td><td>RMSE</td><td>Reconstruction loss</td></tr><tr><td>VAE + PMF</td><td>0.423</td><td>8.19 × 10<sup>4</sup></td></tr><tr><td>DualVAE</td><td>0.356</td><td>8.20 × 10<sup>4</sup></td></tr></table>

![](images/15_0.jpg) <center>Figure 11: RMSE and reconstruction loss. The DualVAE is far superior to the VAE in prediction accuracy, and there is almost no difference in reconstruction error between them. </center>

## F.2.2 ROBUSTNESS TO SPARSITY

In addition to generalization capability, another benefit inherited from PMF is robustness to sparsity, as PMF is robust to a matrix with many missing values. We experimentally demonstrate that the DualVAE is also robust to sparse labels. We calculate the rs and ts of Algorithm 2 on 160 CelebA test images and plot the figure below as we change the missing ratio of CelebA's domain labels and the \(\lambda\) in Equation 10. ![](images/15_1.jpg) <center>Figure 12: Scatter plot of the missing ratio of CelebA's labels and the DIS of the DualVAE. The variable s is the missing ratio. The original image is shown at the top left of the figure. The attributes of the original images are transformed to blond hair, eyeglasses, and mustache. The vertical axis is ts of Algorithm 2, and the horizontal axis is rs of Algorithm 2. The DIS grows toward the upper right corner. </center> <--- Page Split ---> ![](images/16_0.jpg) <center>(a) \(s = 0\) . The characteristics are kept while domain transfer is achieved. </center> ![](images/16_1.jpg) <center>(b) \(s = 0.9\) . Although the sparseness of the labels is high, domain transfer is still conducted rather well. </center> ![](images/16_2.jpg) <center>(c) \(s = 0.99\) (bad example). Image quality is poor, and domain transfer is not conducted properly. </center> Figure 13: Sparsity analysis across various sparsity levels. <--- Page Split ---> From Figure 13, for the upper-right plots, the original characteristics are kept while domain transfer is conducted at the same time. Moreover, the method is robust to the sparseness of domain labels, and the DIS does not drop even when 90% of the labels are missing. On the other hand, we show that StarGAN is not as robust as the DualVAE with respect to sparseness. When 90% of the domain labels are missing, StarGAN cannot learn at all and generates identical images. ![](images/17_0.jpg) <center>Figure 14: Scatter plot of the missing ratio of CelebA's labels and the DIS of StarGAN. The variable s is the missing ratio. The vertical axis is ts of Algorithm 2, and the horizontal axis is rs of Algorithm 2. The DIS grows toward the upper right corner. </center> ![](images/17_1.jpg) (a) \(s = 0\) . The characteristics are kept, and a small amount of domain transfer is achieved. <--- Page Split ---> ![](images/18_0.jpg) <center>(b) \(s = 0.9\) . All generated images are identical, and domain transfer is not properly conducted. </center>

## G FURTHER EXPERIMENTS WITH VARIOUS HYPERPARAMETERS \((\alpha)\)

We conducted a comparison experiment with the existing methods when changing \(\alpha (= \sigma^{- 2})\) in Equation 9. Here, the number of domains was set to 40. As the results below show, the performance of the DualVAE is robust to \(\alpha\) . ![](images/18_1.jpg) <center>Figure 16: Plot of DVAE, UFDN, and StarGAN when we change \(\alpha\) . The performance of DVAE is robust to \(\alpha\) , and DVAE outperforms the existing methods based on the DIS.
</center>

## H MODEL DETAILS

This section describes the three models used in the domain adaptation tasks over three types of domains: environment, attribute, and class.

Environment First, we describe the experimental setting for domain transfer to the ideal image of each individual. We assumed that the beauty criterion required for evaluating the facial images <--- Page Split ---> depends on the gender of the person in the target image. Therefore, we added gender information to the images. For this purpose, we applied the conditioning of CGAN (Mirza & Osindero, 2014) to the VAE. We normalized the scores to \([-1,1]\) to accelerate learning. Subsequently, we considered the specific model structure of the DualVAE. Both the input and output images were RGB images, \(x \in \mathbb{R}^{256 \times 256 \times 3}\) . We used convolutional networks for the encoder, with stride 2 for the convolutions and no pooling. Convolution, batch normalization (Ioffe & Szegedy, 2015), and LeakyReLU were repeated four times and subsequently connected to fully connected layers. Further, after batch normalization and LeakyReLU layers, a 63-dimensional latent variable was obtained. The decoder had a completely symmetric shape, with deconvolution layers instead of convolution layers. Furthermore, as the gender attribute, we set 0 for female and 1 for male. We added an image \(x \in \mathbb{R}^{256 \times 256 \times 1}\) comprising 0 or 1 entries as an input to the encoder, and a scalar of 0 or 1 for gender to the latent variable, which was the input to the decoder. The detailed structure is Structure A in Table 3. We optimized the DualVAE on SCUT-FBP5500. Because there are no face evaluation data in CelebA, we used it only to optimize the VAE. Learning alternated between these two datasets. We show example images of SCUT-FBP5500 (Liang et al., 2018); from Figure 17, we can see that the evaluation values depend on each person.

Attribute Next, for the comparative experiment with several models, domain transfer was performed with only the CelebA data and domain counts of 40, 20, 10, and 5. We experimented with several parameters of the models. In particular, the dimension of the latent variable and the learning rates were randomly selected. Both the input and output images were RGB images, \(x \in \mathbb{R}^{128 \times 128 \times 3}\) . The detailed structure is Structure B in Table 3.

Class Finally, we describe the experimental setting of domain transfer on the MNIST dataset. The experimental results are given in subsection C.2. Both the input and output images were grayscale images, \(x \in \mathbb{R}^{28 \times 28 \times 1}\) . The detailed structure is Structure C in Table 3.
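As an illustration, the following PyTorch sketch assembles the Class (MNIST) column of Table 3 below under our reading of it; we read the FC(1000, 100*2) entry as FC(1024, 100*2), i.e., a 100-dimensional latent, interpret the parallel FC(100, 10) as the domain head for the 10 MNIST domains, and the padding values are our assumptions.

```python
import torch
import torch.nn as nn

class MNISTDualVAE(nn.Module):
    """Sketch of the 'Class' column of Table 3 (28x28 gray images, 10 domains)."""

    def __init__(self, d=100, n_domains=10):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(),
            nn.Flatten(),
            nn.Linear(128 * 7 * 7, 1024), nn.BatchNorm1d(1024), nn.LeakyReLU(),
            nn.Linear(1024, d * 2),                          # mean and log-variance
        )
        self.domain_head = nn.Linear(d, n_domains, bias=False)  # duality pairing U z
        self.dec_fc = nn.Sequential(
            nn.Linear(d, 1024), nn.BatchNorm1d(1024), nn.LeakyReLU(),
            nn.Linear(1024, 128 * 7 * 7), nn.LeakyReLU(),
        )
        self.dec_conv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.LeakyReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        scores = self.domain_head(z)                          # 10 domain logits
        h = self.dec_fc(z).view(-1, 128, 7, 7)
        return self.dec_conv(h), scores, mu, logvar
```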
Table 3: The model structures of DualVAE used in our experiments. Conv stands for convolution, Deconv for deconvolution, and FC for fully connected; the numbers in each parenthesis are the input and output channels. The kernel sizes of the convolutions are all \(4\times 4\) , and their strides are all two. Batch2d and Batch1d denote batch normalization in two and one dimensions, respectively, and LReLU stands for LeakyReLU.

<table><tr><td>Preference</td><td>Attribute</td><td>Class</td></tr><tr><td>Conv(3+1, 32)</td><td>Conv(3, 64)</td><td>Conv(1, 64)</td></tr><tr><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td></tr><tr><td>Conv(32, 64)</td><td>Conv(64, 128)</td><td>Conv(64, 128, 4, 4)</td></tr><tr><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td></tr><tr><td>Conv(64, 128)</td><td>Conv(128, 256)</td><td>FC(128*7*7, 1024)</td></tr><tr><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td><td>Batch1d, LReLU</td></tr><tr><td>Conv(128, 256)</td><td>FC(256*16*16, 1024)</td><td>FC(1000, 100*2)</td></tr><tr><td>Batch2d, LReLU</td><td>Batch1d, LReLU</td><td>FC(100, 1024)+FC(100, 10)</td></tr><tr><td>FC(256*16*16, 1024)</td><td>FC(1024, dim*2)</td><td>Batch1d, LReLU</td></tr><tr><td>Batch1d, LReLU</td><td>FC(dim, 1024)+FC(dim, 40)</td><td>FC(1024, 128*7*7)</td></tr><tr><td>FC(1024, 63*2)</td><td>Batch1d, LReLU</td><td>LReLU</td></tr><tr><td>FC(63+1, 1024)+FC(63+1, 60)</td><td>FC(1024, 256*16*16)</td><td>Deconv(128, 64)</td></tr><tr><td>Batch1d, LReLU</td><td>LReLU</td><td>Batch2d, LReLU</td></tr><tr><td>FC(1024, 256*16*16)</td><td>Deconv(256, 128)</td><td>Deconv(64, 1)</td></tr><tr><td>LReLU</td><td>Batch2d, LReLU</td><td>Sigmoid</td></tr><tr><td>Deconv(256, 128)</td><td>Deconv(128, 64)</td><td></td></tr><tr><td>Batch2d, LReLU</td><td>Batch2d, LReLU</td><td></td></tr><tr><td>Deconv(128, 64)</td><td>Deconv(64, 3)</td><td></td></tr><tr><td>Batch2d, LReLU</td><td>Sigmoid</td><td></td></tr><tr><td>Deconv(64, 32)</td><td></td><td></td></tr><tr><td>Batch2d, LReLU</td><td></td><td></td></tr><tr><td>Deconv(32, 3)</td><td></td><td></td></tr><tr><td>Sigmoid</td><td></td><td></td></tr></table>

<--- Page Split ---> ![](images/20_0.jpg) <center>Figure 17: (a) Distribution of the respective scores within [1, 5] rated by 60 people. (b) The images of SCUT-FBP5500. </center> <--- Page Split --->

## I FURTHER EXAMPLES OF DOMAIN TRANSFER THROUGH DUALVAE

The results below show domain adaptation performed by the DualVAE on randomly sampled images from two datasets: MNIST and CelebA.

## I.1 MNIST (10 DOMAINS)

![](images/21_0.jpg) <center>Figure 18: DualVAE stably transfers samples across 10 domains while domain-irrelevant features (e.g., style) are kept. </center> <--- Page Split --->

## I.2 CELEBA (40 DOMAINS)

![](images/22_0.jpg) <--- Page Split ---> ![](images/23_0.jpg) <--- Page Split ---> ![](images/24_0.jpg) <--- Page Split --->
reject
Reject
4.333333
ICLR_2019_paper_1197
iclr
2,019
# UNDERSTANDING OPPORTUNITIES FOR EFFICIENCY IN SINGLE-IMAGE SUPER RESOLUTION NETWORKS

Anonymous authors Paper under double-blind review

## ABSTRACT

A successful application of convolutional architectures is to increase the resolution of single low-resolution images, an image restoration task called super-resolution (SR). Naturally, SR is of value to resource-constrained devices such as mobile phones, electronic photo frames, and televisions, where it can enhance image quality. However, SR demands perhaps the most extreme amounts of memory and compute operations of any mainstream vision task known today, preventing SR from being deployed on the devices that require it. In this paper, we perform an early systematic study of system resource efficiency for SR, within the context of a variety of architectural and low-precision approaches originally developed for discriminative neural networks. We present a rich set of insights, representative SR architectures, and efficiency trade-offs; for example, the prioritization of ways to compress models to reach a specific memory and computation target, and techniques to compact SR models so that they are suitable for DSPs and FPGAs. In doing so, we achieve performance better than or comparable to previous models in the existing literature, highlighting the practicality of using existing efficiency techniques in SR tasks. Collectively, we believe these results provide the foundation for further research into the little-explored area of resource efficiency for SR.

## 1 INTRODUCTION

Rapid progress has been made in the development of convolutional networks (Dong et al., 2015) that are capable of taking a low-resolution image and producing an image with a significant increase in resolution. This image restoration task is referred to as super-resolution (SR) and has many potential applications in devices with limited memory and compute capacity. The fundamental problem, however, is that the state-of-the-art networks (Lim et al., 2017; Zhang et al., 2018; Zhang et al., 2018) consist of thousands of layers and are some of the most resource-intensive networks currently known. Furthermore, due to the spatial dimensions of the feature maps needed to maintain or up-scale the input, the number of operations is counted in the billions, as opposed to millions in models for discriminative tasks. As a result, there is a need for a general systematic approach to improving the efficiency of SR models.

The challenge of the system resource requirements of deep learning models for tasks other than SR has been carefully studied in previous works (Zhang et al., 2017b; Howard et al., 2017; Ma et al., 2018; Sandler et al., 2018), achieving massive gains in size and compute with little to no loss in performance. These reductions are achieved with a wide variety of methods grounded primarily in architecture-level changes and in the use of low-precision and quantized model parameters. However, how these efficiency methods behave when applied to SR has not yet been studied in significant depth, with very few results published in the literature. Extrapolating from prior results for other tasks is problematic, given that existing studies predominantly target discriminative tasks with substantially different architectures and operations. Due to the up-sampling structure of SR models, these efficiency methods may therefore have stronger side effects on image distortion.
In this paper, we detail a systematic study that seeks to bridge current understanding in SR and known approaches for scaling down the consumption of system resources by deep models. By <--- Page Split ---> examining the impact on image distortion quality when applying various efficiency techniques, we provide the following new insights:

- The effectiveness of low-rank tensor decomposition and other convolution approximations, which are comparably successful in discriminative tasks, can vary considerably in SR. (See section 4.1.)
- Unlike image discriminative networks, SR networks suffer a worse trade-off between efficiency and performance as more layers are compressed. (See section 4.2.)
- Compression techniques developed for other tasks are practical in SR, as our best models are better than or comparable to the existing literature. For instance, our best model achieves significantly better performance with 6x less compute than MemNet (Tai et al., 2017b) and VDSR (Kim et al., 2015b). Additionally, it performs better than and is 4.1x-5.8x smaller than SRMDNF (Zhang et al., 2017a). (See section 4.3.)
- Quantization techniques that are successful in image discriminative tasks are equally successful in SR. (See section 5.)

## 2 RELATED WORK

We focus on using neural networks for SR, as they have been shown to achieve superior performance over earlier traditional approaches (Timofte et al., 2013; Kim & Kwon, 2010; Chang et al., 2004). An SR image can be evaluated either using standard image distortion metrics, such as PSNR, SSIM (Wang et al., 2004), and IFC (Sheikh et al., 2005), or using perception metrics, such as Ma et al. (2016), NIQE (Mittal et al., 2013), and BRISQUE (Mittal et al., 2012). Blau & Michaeli (2017) provided theoretical support for the trade-off between image distortion and perception.

Distortion SR: In the distortion line of work, models favour pixel-to-pixel comparisons and are usually trained on either the L1 or L2 (MSE) loss. These models are known to produce more visually pleasing outcomes on structural images (Blau et al., 2018) than perceptual SR models. Dong et al. (2015) first proposed using convolutional networks for SR, leading to a surge in the use of neural networks for SR. These networks differ in their building blocks for feature extraction and up-sampling. For instance, Dong et al. (2016) proposed a faster convolutional network by taking the down-sampled low-resolution image as an input. Other variations include using more layers (Kim et al., 2015b), recursive layers (Kim et al., 2015a; Tai et al., 2017a), memory blocks (Tai et al., 2017b; Ahn et al., 2018), DenseNet (Huang et al., 2016) blocks (Tong et al., 2017), residual (He et al., 2015) blocks (Ledig et al., 2016; Lim et al., 2017; Kim & Lee, 2018), and multiple image degradations (Zhang et al., 2017a). Additionally, more recent models use attention (Bahdanau et al., 2014) mechanisms (Liu et al., 2018; Zhang et al., 2018), back-projection (Haris et al., 2018; Navarrete Michelini et al., 2018), and other non-conventional non-linear layers (Choi & Kim, 2017; Gu et al., 2018).

Perceptual SR: Perceptual SR models, on the other hand, are better at reconstructing unstructured details with high perceptual quality (Blau et al., 2018).
These models usually adopt popular image distortion models and train them using a variety of loss functions, such as the perceptual loss (Johnson et al., 2016), contextual loss (Mechrez et al., 2018b), adversarial loss (Goodfellow et al., 2014), and Gram loss (Gatys et al., 2015). For instance, Choi et al. (2018) adopted EUSR (Kim & Lee, 2018), while Ledig et al. (2016), Wang et al. (2018), and Mechrez et al. (2018a) adopted SRResNet (Ledig et al., 2016), making slight architecture changes and replacing the objective. Although these perceptual models are able to generate more visually pleasing results on certain images, they do not seem to work well as inputs for image classification (Jaffe et al., 2017).

Efficient SR: As models in both tracks are resource-intensive, the recent PIRM 2018 challenge on mobile SR (Ignatov et al., 2018) presented a range of high-efficiency models designed to run faster and perform better than SRCNN (Dong et al., 2015). These models are complementary to our work and can follow our best practices to achieve further efficiency gains. A work closely related to ours is that of Ahn et al. (2018), who systematically investigated the impact of using grouped convolutions. Due to the massive design space caused by the variability in training and evaluating these models, we focus on the trade-offs between performance, measured by image distortion metrics, and efficiency, and leave the rest as future work. <--- Page Split --->

## 3 SYSTEMATIC STUDY OF LOW-RESOURCE SUPER RESOLUTION NETWORKS

The key step in our work is to build understanding towards resource-efficient architectures for super-resolution. While there is ample understanding of how these efficiency-saving techniques work in classification problems, there is a lack of experimental studies and systematic approaches to understand their practicality in super-resolution. To our knowledge, this is the first systematic study of a wide range of efficiency methods for super-resolution.

We measure performance using PSNR and SSIM (Wang et al., 2004) and measure memory and compute efficiency using the number of parameters and the number of multiply-add operations (Mult-Adds), both of which dictate which platforms these models can run on. However, these metrics alone do not reflect the trade-off between performance and efficiency. Therefore, we introduce two new metrics that measure the number of Giga Mult-Adds saved and the number of parameters saved for every 0.01 dB of PSNR lost on the test sets: Set5 (Bevilacqua et al., 2012), Set14 (Yang et al., 2010), B100 (Martin et al., 2001), and Urban100 (Huang et al., 2015). These metrics are calculated by taking the difference between the compressed model and the uncompressed model. All Mult-Adds are calculated for upscaling to a 720p image.

We decided to use RCAN (Zhang et al., 2018) as our baseline model, as it was the state of the art with the best performance on the image distortion metrics at the time of writing. We take its simplest building block, build a shallower network, and use that as a basis for exploring a variety of techniques.

Implementation Details: We train our models in section 4 and section 5.1 in the same manner as EDSR (Lim et al., 2017). In particular, we use \(48 \times 48\) RGB patches of LR images from the DIV2K dataset (Timofte et al., 2017).
We augment the training data with random horizontal flips and 90-degree rotations and pre-process it by subtracting the mean RGB value of the DIV2K dataset. Our models are trained using the ADAM (Kingma & Ba, 2014) optimizer with hyper-parameters \(\beta_{1} = 0.9\) , \(\beta_{2} = 0.999\) , and \(\epsilon = 10^{- 8}\) . The mini-batch size is 16, the learning rate begins at \(1e - 4\) and is halved at 200 epochs, and each model is trained for 300 epochs using the L1 loss. We train x2 models from scratch and use them as pre-trained models for the x3 and x4 models for faster convergence. Lastly, for ternary quantization in section 5.2, we further train the model with quantization enabled in each forward pass for 40 epochs, starting at a learning rate of \(5e - 5\) , and then fix the quantized ternary weights and train for another 10 epochs at a learning rate of \(2.5e - 5\) .

## 4 EFFICIENT NETWORK ARCHITECTURES FOR SUPER RESOLUTION

We begin our evaluation by conducting a series of experiments: (i) we explore the effects of applying different resource-efficient architectures to our baseline model (section 4.1), (ii) we take the two best techniques and present trade-off solutions when applying them to different parts of our baseline model (section 4.2), and (iii) lastly, we compare our best results with previous SR architectures (section 4.3).

### 4.1 EFFECTS OF VARIOUS RESOURCE-EFFICIENT TECHNIQUES

Motivation: Resource-efficient architectures use various low-rank tensor decompositions and other convolutional approximation techniques, which are task-agnostic rather than designed for any particular task, to build fast and accurate image discriminative models. We first develop an initial understanding of the trade-off solutions by replacing and modifying the 3x3 convolution layers in the baseline model.

Approach: We explore the use of known techniques such as the bottleneck design, separable/grouped convolutions, and channel shuffling. We take the feature extraction units from resource-efficient architectures and remove all batch normalization layers, as they were previously shown to reduce performance and increase GPU memory usage (Lim et al., 2017). For our first set of experiments, we replace all 3x3 convolution layers in the residual groups of our baseline model.

bl: Our baseline model from RCAN (Zhang et al., 2018). We reduce the number of residual groups (RG) from 10 to 2 and the number of residual channel attention blocks (RCAB) in each RG from 20 to 5. We use a feature map size of 64. Making the network shallower and smaller in parameters allows us to clearly understand each architectural change, as opposed to a deep network, which may introduce other effects and interplay. <--- Page Split --->
blm1: In order to further improve efficiency of the 3x3 grouped convolution, we can maximise the group size, forming a convolution that is known as depthwise convolution. Following this idea, we adopt the MobileNet v1 (Howard et al., 2017) unit which uses depthwise separable convolutions, each consist of a 3x3 depthwise convolution followed by a 1x1 convolution, also known as a pointwise convolution. bleff(r): We can further approximate the 3x3 depthwise convolution by using a 1x3 and a 3x1 depthwise convolution, a technique that is used in EffNet (Freeman et al., 2018). We adopt the unit from EffNet by removing the pooling layers. bls1(r, g): We group both 3x3 and 1x1 convolutions and added channel shuffling in order to improve the information flow among channels. In order to test the effects of channel shuffling, we adopt the ShuffleNet v1 (Zhang et al., 2017b) unit. blclc(g1, g2): Channel shuffle is also used in Clnet (Zhang, 2017) to further improve efficiency of blm1. In order to maximise efficiency from our adoption of ClNet units, we follow the group size guidelines recommended by the authors for both the group sizes of the 3x3 (g1) and 1x1 (g2) grouped convolution. bls2: Apart from using grouped convolutions, Ma et al. (2018) proposed splitting the flow into two, which is termed as channel splitting, and performing convolution on only half of the input channels in each unit at each pass. Channel shuffle is then used to enable information flow between both branches. blm2(e): Inverted residuals can be used to enable skip connections directly on the bottleneck layers. Therefore, we adopt the MobileNet v2 (Sandler et al., 2018) unit in our experiments Table 1: Quantitative results of applying resource-efficient techniques. bold/italics indicates best/second-best trade-off. 
Table 1: Quantitative results of applying resource-efficient techniques. Bold/italics indicates the best/second-best trade-off.

<table><tr><td>Scale</td><td>Model</td><td>Params (K)</td><td>Mult-Adds (G)</td><td>Set5 PSNR/SSIM</td><td>Set14 PSNR/SSIM</td><td>B100 PSNR/SSIM</td><td>Urban100 PSNR/SSIM</td><td>Mult-Adds Saved (G)</td><td>Params Saved (K)</td></tr><tr><td rowspan="11">2×</td><td>bl</td><td>1006</td><td>231.2</td><td>37.6/0.9600</td><td>33.3/0.9159</td><td>32.0/0.8982</td><td>31.7/0.9248</td><td>-</td><td>-</td></tr><tr><td>blrn(r=2)</td><td>464</td><td>106.5</td><td>37.6/0.9591</td><td>33.1/0.9137</td><td>31.9/0.8961</td><td>31.0/0.9173</td><td>0.9819</td><td>4.27</td></tr><tr><td>blm1</td><td>265</td><td>60.7</td><td>37.4/0.9586</td><td>33.0/0.9122</td><td>31.7/0.8948</td><td>30.6/0.9128</td><td>0.7894</td><td>3.43</td></tr><tr><td>blrxn(r=2, g=4)</td><td>305</td><td>69.9</td><td>37.4/0.9585</td><td>33.0/0.9123</td><td>31.8/0.8948</td><td>30.7/0.9131</td><td>0.7755</td><td>3.37</td></tr><tr><td>blrn(r=4)</td><td>258</td><td>59</td><td>37.4/0.9583</td><td>32.9/0.9120</td><td>31.7/0.8943</td><td>30.6/0.9123</td><td>0.7653</td><td>3.32</td></tr><tr><td>blclc(g1=32, g2=2)</td><td>232</td><td>52.9</td><td>37.4/0.9581</td><td>32.9/0.9118</td><td>31.7/0.8939</td><td>30.5/0.9108</td><td>0.7278</td><td>3.16</td></tr><tr><td>bls2</td><td>211</td><td>48.4</td><td>37.7/0.9580</td><td>32.9/0.9117</td><td>31.7/0.8936</td><td>30.4/0.9102</td><td>0.7254</td><td>3.15</td></tr><tr><td>bleff(r=2)</td><td>257</td><td>58.7</td><td>37.3/0.9580</td><td>32.9/0.9114</td><td>31.7/0.8936</td><td>30.4/0.9108</td><td>0.6458</td><td>2.82</td></tr><tr><td>blm2(e=2)</td><td>561</td><td>128.9</td><td>37.4/0.9586</td><td>33.0/0.9130</td><td>31.8/0.8949</td><td>30.7/0.9138</td><td>0.5219</td><td>2.27</td></tr><tr><td>bls1(r=2, g=4)</td><td>188</td><td>42.9</td><td>37.6/0.9568</td><td>32.7/0.9096</td><td>31.5/0.8912</td><td>29.9/0.9031</td><td>0.4968</td><td>2.16</td></tr><tr><td>blm2(e=3)</td><td>763</td><td>175.4</td><td>37.5/0.9589</td><td>33.1/0.9133</td><td>31.8/0.8954</td><td>30.9/0.9155</td><td>0.3509</td><td>1.53</td></tr></table>

Results: Our results in Table 1 show that techniques that yield a better trade-off between memory and performance also yield a better trade-off between compute and performance. Overall, the use of bottlenecks alone (blrn) results in the best trade-offs, followed by the use of separable/grouped convolutions. Reducing the number of features to accommodate inverted bottlenecks (blm2) severely impacts performance, and thus we omit those results from the table. We speculate that doing so leaves insufficient features at the up-sampling layer to fully capture the up-sampled image representation. Thus, we use the same number of feature maps as our bottleneck. Although the use of inverted <--- Page Split --->
In our work, we show that the sole use of low rank tensor decomposition (bottleneck architectures) provide the best trade- offs, followed by the use of separable/grouped convolutions and the use of both channel splitting and shuffling. ### 4.2 EFFECTS OF ARCHITECTURAL LAYERS BETWEEN THE INPUT AND OUTPUT LAYER Motivation: Bhattacharya & Lane (2016) and Kim et al. (2015c) have shown that it is possible in image classification to maintain a similar or slight drop in performance by decomposing tensors of known models. However, our models suffer a significant drop in performance. (Table 1). Therefore, in order to further understand the extent of their applicability in SR, we apply the top two best techniques, which are bottleneck reduction (blm) and depthwise separable convolutions (blm1), on various different parts of our baseline model. Approach: Our preliminary experiments with applying some of these techniques on the first and last convolution layer led to worse trade- offs. Therefore, we apply our techniques between them. We replace the sub- pixel convolution upsampling layer to the enhanced upscaling module (EUM) as proposed by Kim & Lee (2018) to allow the use of skip connections. Using EUM leads to an increase in performance at a slight cost of both memory and compute. Thus, in order to maintain the memory cost, we use recursion, forming the enhanced recursive upscaling module (ERUM) shown in figure 1. The number of ERUMs is the same as the scaling factor and each ERUM recurses twice or thrice for x2, x4 or x3 scales respectively. Experiments that use ERUMs for up- sampling are indicated with a postfix - e. We calculate our trade- off metrics based on our baseline model with ERUM as its up- sampling layer instead bl- e. We modify all 3x3 convolution layers as such: \(rb\) : Changes are made in residual blocks/modules. \(rg\) : Changes are made in residual groups, therefore including those in \(rb\) . (Experiments in section 4.1 are done in this setting.) \(rg + u\) : Changes are made in both \(rg\) and the up- sampling layers (ERUMs). ![](images/4_0.jpg) <center>Figure 1: Our proposed ERUM for image upscaling. </center> Results: Our results in Table 2 reinforce our findings in section 4.1 that the adoption of bottleneck reduction alone leads to the best trade- offs, followed by the use of group convolutions. Therefore, we recommend taking gradual steps to compress the model. For instance, we suggest gradually changing convolutions to use bottleneck reduction, avoiding the up- sampling, first, and last convolutions until a budget is reached. If further compression is needed, we suggest changes to the up- sampling layer or the use of group convolutions. ### 4.3 COMPARISONS WITH PREVIOUS SR MODELS We take our derived best models based on different budgets from our first two experiments (See section 4.1 & 4.2) and compare them with the existing literature, which is shown in Table 3. For fair comparisons, we omit models that are way bigger by several magnitudes as their performances are much better. Likewise, we exclude models that are way smaller as their performance are much <--- Page Split ---> Table 2: Quantitative results of applying techniques on different parts of the model. We took the best three derived models given three different budgets and compared them with previous models in Table 3 bold/italics indicates best/second-best trade-off. 
<table><tr><td>Scale</td><td>Model [Changes]</td><td>Params (K)</td><td>Mult-Adds (G)</td><td>Set5 PSNR/SSIM</td><td>Set14 PSNR/SSIM</td><td>B100 PSNR/SSIM</td><td>Urban100 PSNR/SSIM</td><td>Mult-Adds Saved (G)</td><td>Params Saved (K)</td></tr><tr><td rowspan="7">2×</td><td>bl-e</td><td>1006</td><td>265.3</td><td>37.92/0.9602</td><td>33.43/0.9162</td><td>32.09/0.8988</td><td>31.83/0.9256</td><td></td><td></td></tr><tr><td>blm-e(r=2)[rb]</td><td>535</td><td>156.8</td><td>37.75/0.9596</td><td>33.30/0.9153</td><td>32.00/0.8973</td><td>31.48/0.9218</td><td>1.4662</td><td>6.3649</td></tr><tr><td>blm1-e[rb]</td><td>363</td><td>117</td><td>37.65/0.9592</td><td>33.19/0.9143</td><td>31.92/0.8964</td><td>31.13/0.9181</td><td>1.0746</td><td>4.6594</td></tr><tr><td>blm1-e[rg]</td><td>265</td><td>94.7</td><td>37.59/0.9590</td><td>33.12/0.9132</td><td>31.87/0.8959</td><td>30.91/0.9185</td><td>0.9584</td><td>4.1629</td></tr><tr><td>blm-e(r=2)[rg]</td><td>464</td><td>140.5</td><td>37.64/0.9592</td><td>33.20/0.9142</td><td>31.95/0.8967</td><td>31.10/0.9185</td><td>0.9455</td><td>4.1061</td></tr><tr><td>blm-e(r=2)[rg+u]</td><td>370</td><td>97.1</td><td>37.56/0.9589</td><td>33.10/0.9131</td><td>31.86/0.8954</td><td>30.99/0.9164</td><td>0.9557</td><td>3.6136</td></tr><tr><td>blm1-e[rg+u]</td><td>137</td><td>35.4</td><td>37.35/0.9580</td><td>32.94/0.9117</td><td>31.72/0.8939</td><td>30.45/0.9103</td><td>0.8181</td><td>3.0925</td></tr><tr><td rowspan="7">3×</td><td>bl-e</td><td>1080</td><td>156.3</td><td>34.35/0.9269</td><td>30.29/0.8415</td><td>29.06/0.8046</td><td>28.13/0.8521</td><td></td><td></td></tr><tr><td>blm-e(r=2)[rb]</td><td>609</td><td>108.1</td><td>34.19/0.9257</td><td>30.18/0.8394</td><td>29.00/0.8026</td><td>27.89/0.8469</td><td>0.8456</td><td>8.2632</td></tr><tr><td>blm1-e[rb]</td><td>437</td><td>90.5</td><td>34.09/0.9257</td><td>30.13/0.8379</td><td>29.58/0.8015</td><td>27.69/0.8419</td><td>0.6784</td><td>6.6289</td></tr><tr><td>blm-e(r=2)[rg+u]</td><td>397</td><td>57.6</td><td>33.97/0.9236</td><td>30.04/0.8362</td><td>28.89/0.7997</td><td>27.50/0.8376</td><td>0.6902</td><td>4.7762</td></tr><tr><td>blm-e(r=2)[rg]</td><td>538</td><td>100.9</td><td>34.13/0.9251</td><td>30.14/0.8380</td><td>28.97/0.8016</td><td>27.72/0.8421</td><td>0.6368</td><td>6.2299</td></tr><tr><td>blm1-e[rg]</td><td>339</td><td>80.6</td><td>34.08/0.9244</td><td>30.03/0.8356</td><td>28.92/0.8005</td><td>27.55/0.8386</td><td>0.6056</td><td>5.928</td></tr><tr><td>blm1-e[rg+u]</td><td>146</td><td>21.4</td><td>33.73/0.9222</td><td>29.83/0.8320</td><td>28.76/0.7970</td><td>27.10/0.8323</td><td>0.5644</td><td>3.9079</td></tr><tr><td rowspan="7">4×</td><td>bl-e</td><td>1154</td><td>135.5</td><td>32.08/0.8942</td><td>28.58/0.7815</td><td>27.56/0.7360</td><td>26.16/0.7872</td><td></td><td></td></tr><tr><td>blm-e(r=2)[rb]</td><td>683</td><td>108.5</td><td>32.10/0.8938</td><td>28.51/0.7795</td><td>27.51/0.7340</td><td>25.95/0.7808</td><td>0.8774</td><td>15.1935</td></tr><tr><td>blm1-e[rb]</td><td>511</td><td>98.4</td><td>31.98/0.8921</td><td>28.45/0.7778</td><td>27.47/0.7328</td><td>25.80/0.7754</td><td>0.5456</td><td>9.4559</td></tr><tr><td>blm-e(r=2)[rg+u]</td><td>424</td><td>50</td><td>31.84/0.8898</td><td>28.34/0.7750</td><td>27.38/0.7295</td><td>25.52/0.7673</td><td>0.6577</td><td>5.6154</td></tr><tr><td>blm1-e[rg]</td><td>413</td><td>92.8</td><td>32.02/0.8921</td><td>28.44/0.7765</td><td>27.44/0.7310</td><td>25.67/0.7715</td><td>0.5272</td><td>9.1481</td></tr><tr><td>blm-e(r=2)[rg]</td><td>612</td><td>104.3</td><td>32.02/0.8925</td><td>28.47/0.7779</td><td>27.48/0.7323</td><td>25.79/0.7755</td><td>0.5032</td><td>8.7419</td></tr><tr><td>blm1-e[rg+u]</td><td>156</td><td>18.7</td><td>31.47/0.8847</td><td>28.12/0.7697</td><td>27.26/0.7252</td><td>25.16/0.7538</td><td>0.4928</td><td>4.211</td></tr></table>
rowspan="5">4×</td><td>blm-e(r=2)[rb]</td><td>683</td><td>108.5</td><td>32.10/0.8938</td><td>28.51/0.7795</td><td>27.51/0.7340</td><td>25.95/0.7808</td><td>0.8774</td><td>15.1935</td></tr><tr><td>blm-l-e[rb]</td><td>511</td><td>98.4</td><td>31.98/0.8921</td><td>28.45/0.7778</td><td>27.47/0.7328</td><td>25.80/0.7754</td><td>0.5456</td><td>9.4559</td></tr><tr><td>blm-e(r=2)[rg+u]</td><td>424</td><td>50</td><td>31.84/0.8898</td><td>28.34/0.7750</td><td>27.38/0.7295</td><td>25.52/0.7673</td><td>0.6577</td><td>5.6154</td></tr><tr><td>blm-l-e[rg]</td><td>413</td><td>92.8</td><td>32.02/0.8921</td><td>28.44/0.7765</td><td>27.44/0.7310</td><td>25.67/0.7715</td><td>0.5272</td><td>9.1481</td></tr><tr><td>blm-e(r=2)[rg]</td><td>612</td><td>104.3</td><td>32.02/0.8925</td><td>28.47/0.7779</td><td>27.48/0.7323</td><td>25.79/0.7755</td><td>0.5032</td><td>8.7419</td></tr><tr><td>blm-l-e[rg+u]</td><td>156</td><td>18.7</td><td>31.47/0.8847</td><td>28.12/0.7697</td><td>27.26/0.7252</td><td>25.16/0.7538</td><td>0.4928</td><td>4.211</td></tr></table> worse. Regardless, our techniques can be applied to any model for further trade- offs between performance and efficiency. Although our main objective is not to beat previous models but to understand and recommend techniques that can be applied to any existing model, we manage to derive models that are better or comparable to other models in the literature. For instance, in terms of size and evaluation metric, our best model (blm- e[rb]) outperforms all models that have a count of 1,500K parameters and below. By comparing compute and evaluation, our best model performs better and has roughly x6 less operations than MemNet (Tai et al., 2017b). It is also comparable with the CARN model in the number of operations, trading a slightly worse performance with a 2.5x size reduction. Overall, our best model is better than earlier models such as VDSR (Kim et al., 2015b) and later models such as SRMDNF (Zhang et al., 2017a) for 3x and 4x scales. Our second and third best models also outperform earlier models in performance with huge savings in the number of operations for 3x and 4x scales. Our results show that these techniques which are designed for image discriminative tasks can be effective in SR. Visual comparisons for some of these models can be found in the appendix. ## 5 QUANTIZATION AND LOW-PRECISION UNDER SUPER RESOLUTION In our next set of experiments, we examine the viability of quantization and the use of extreme low- precision (ternary/binary) as mechanisms to reduce system resource for SR. ### 5.1 INTEGER QUANTIZATION Motivation: With the success of low precision on neural networks on classification problems, we aim to show initial understanding of applying 8- bits integer quantization on our baseline model as described in section 4.1. Moving from 32- bits to 8- bits will result in a 4x reduction in memory and allow support for low- power embedded devices. Approach: We train the model in full precision and apply the quantization scheme in Tensorflow- Lite for integer- only arithmetic (Jacob et al., 2017) and retrain for an additional 5 epochs with the a learning rate of \(5e - 5\) . Results: Our results show that applying quantization lead to a slight evaluation loss in 2x scaling and a slight improvement in 4x scaling. Our results are similar to that of classification Jacob et al. (2017). Furthermore the results show that deep neural networks are robust to noise and perturbations caused by quantization. 
Table 3: We extend the table provided by Ahn et al. (2018) and compare our best three models (in order). For fair comparisons, we do not include models that are much bigger or much smaller than our derived models.

<table><tr><td>Scale</td><td>Model</td><td>Params (K)</td><td>Mult-Adds (G)</td><td>Set5 PSNR/SSIM</td><td>Set14 PSNR/SSIM</td><td>B100 PSNR/SSIM</td><td>Urban100 PSNR/SSIM</td></tr><tr><td rowspan="14">2×</td><td>VDSR (Kim et al., 2015b)</td><td>665</td><td>612.6</td><td>37.53/0.9587</td><td>33.03/0.9124</td><td>31.90/0.8960</td><td>30.76/0.9140</td></tr><tr><td>DRCN (Kim et al., 2015a)</td><td>1,774</td><td>9,788.7</td><td>37.63/0.9588</td><td>33.04/0.9118</td><td>31.85/0.8942</td><td>30.75/0.9133</td></tr><tr><td>LapSRN (Lai et al., 2017)</td><td>813</td><td>29.9</td><td>37.52/0.9590</td><td>33.08/0.9130</td><td>31.80/0.8950</td><td>30.41/0.9100</td></tr><tr><td>DRRN (Tai et al., 2017a)</td><td>297</td><td>6,796.9</td><td>37.74/0.9591</td><td>33.23/0.9136</td><td>32.05/0.8973</td><td>31.23/0.9188</td></tr><tr><td>BTSRN (Fan et al., 2017)</td><td>410</td><td>207.7</td><td>37.75/-</td><td>33.20/-</td><td>32.05/-</td><td>31.63/-</td></tr><tr><td>MemNet (Tai et al., 2017b)</td><td>677</td><td>623.9</td><td>37.78/0.9597</td><td>33.28/0.9142</td><td>32.08/0.8978</td><td>31.31/0.9195</td></tr><tr><td>SelNet (Choi &amp; Kim, 2017)</td><td>974</td><td>225.7</td><td>37.89/0.9598</td><td>33.61/0.9160</td><td>32.08/0.8984</td><td>31.33/0.9204</td></tr><tr><td>SRMDNF (Zhang et al., 2017a)</td><td>2,218</td><td>513.6</td><td>37.79/0.9601</td><td>33.32/0.9190</td><td>32.05/0.8985</td><td>31.33/0.9204</td></tr><tr><td>D-DBPN (Haris et al., 2018)</td><td>1,261</td><td>158.9</td><td>38.09/0.9600</td><td>33.85/0.9190</td><td>32.27/0.9000</td><td>32.55/0.9324</td></tr><tr><td>CARN (Ahn et al., 2018)</td><td>1,592</td><td>222.8</td><td>37.76/0.9590</td><td>33.52/0.9166</td><td>32.09/0.8978</td><td>31.92/0.9256</td></tr><tr><td>CARN-M (Ahn et al., 2018)</td><td>412</td><td>91.2</td><td>37.53/0.9583</td><td>33.26/0.9141</td><td>31.92/0.8960</td><td>31.23/0.9193</td></tr><tr><td>blm-e(r=2)[rb](ours)</td><td>535</td><td>156.8</td><td>37.75/0.9596</td><td>33.30/0.9153</td><td>32.00/0.8973</td><td>31.48/0.9218</td></tr><tr><td>blm1-e[rb](ours)</td><td>363</td><td>117</td><td>37.65/0.9592</td><td>33.19/0.9143</td><td>31.92/0.8964</td><td>31.13/0.9181</td></tr><tr><td>blm1-e[rg](ours)</td><td>265</td><td>94.7</td><td>37.59/0.9590</td><td>33.12/0.9132</td><td>31.87/0.8959</td><td>30.91/0.9158</td></tr><tr><td rowspan="12">3×</td><td>VDSR (Kim et al., 2015b)</td><td>665</td><td>612.6</td><td>33.65/0.9213</td><td>29.77/0.8314</td><td>28.82/0.7976</td><td>27.14/0.8279</td></tr><tr><td>DRCN (Kim et al., 2015a)</td><td>1,774</td><td>9,788.7</td><td>33.82/0.9226</td><td>29.76/0.8311</td><td>28.80/0.7963</td><td>27.15/0.8276</td></tr><tr><td>DRRN (Tai et al., 2017a)</td><td>297</td><td>6,796.9</td><td>34.03/0.9244</td><td>29.96/0.8349</td><td>28.95/0.8004</td><td>27.53/0.8378</td></tr><tr><td>BTSRN (Fan et al., 2017)</td><td>410</td><td>176.2</td><td>34.03/-</td><td>29.90/-</td><td>28.97/-</td><td>27.75/-</td></tr><tr><td>MemNet (Tai et al., 2017b)</td><td>677</td><td>623.9</td><td>34.09/0.9248</td><td>30.00/0.8350</td><td>28.96/0.8001</td><td>27.56/0.8376</td></tr><tr><td>SelNet (Choi &amp; Kim, 2017)</td><td>1,159</td><td>120.0</td><td>34.27/0.9257</td><td>30.30/0.8399</td><td>28.97/0.8025</td><td>27.50/0.8376</td></tr><tr><td>SRMDNF (Zhang et al., 2017a)</td><td>2,956</td><td>305.5</td><td>34.12/0.9254</td><td>30.04/0.8382</td><td>28.97/0.8025</td><td>27.57/0.8398</td></tr><tr><td>CARN (Ahn et al., 2018)</td><td>1,592</td><td>118.8</td><td>34.29/0.9255</td><td>30.29/0.8407</td><td>29.06/0.8034</td><td>28.06/0.8493</td></tr><tr><td>CARN-M (Ahn et al., 2018)</td><td>412</td><td>46.1</td><td>33.99/0.9236</td><td>30.08/0.8367</td><td>28.91/0.8000</td><td>27.55/0.8385</td></tr><tr><td>blm-e(r=2)[rb](ours)</td><td>609</td><td>108.1</td><td>34.19/0.9257</td><td>30.18/0.8394</td><td>29.00/0.8026</td><td>27.89/0.8469</td></tr><tr><td>blm1-e[rb](ours)</td><td>437</td><td>90.5</td><td>34.09/0.9249</td><td>30.13/0.8379</td><td>28.95/0.8015</td><td>27.69/0.8419</td></tr><tr><td>blm-e(r=2)[rg+u](ours)</td><td>397</td><td>57.6</td><td>33.97/0.9236</td><td>30.04/0.8362</td><td>28.89/0.7997</td><td>27.50/0.8376</td></tr><tr><td rowspan="15">4×</td><td>VDSR (Kim et al., 2015b)</td><td>665</td><td>612.6</td><td>31.35/0.8838</td><td>28.01/0.7674</td><td>27.29/0.7251</td><td>25.18/0.7524</td></tr><tr><td>DRCN (Kim et al., 2015a)</td><td>1,774</td><td>9,788.7</td><td>31.53/0.8854</td><td>28.02/0.7670</td><td>27.23/0.7233</td><td>25.14/0.7510</td></tr><tr><td>LapSRN (Lai et al., 2017)</td><td>813</td><td>149.4</td><td>31.54/0.8850</td><td>28.19/0.7720</td><td>27.32/0.7280</td><td>25.21/0.7560</td></tr><tr><td>DRRN (Tai et al., 2017a)</td><td>297</td><td>6,796.9</td><td>31.68/0.8888</td><td>28.21/0.7720</td><td>27.38/0.7284</td><td>25.44/0.7638</td></tr><tr><td>BTSRN (Fan et al., 2017)</td><td>410</td><td>165.2</td><td>31.85/-</td><td>28.20/-</td><td>27.47/-</td><td>25.74/-</td></tr><tr><td>MemNet (Tai et al., 2017b)</td><td>677</td><td>623.9</td><td>31.74/0.8893</td><td>28.26/0.7723</td><td>27.40/0.7281</td><td>25.50/0.7630</td></tr><tr><td>SelNet (Choi &amp; Kim, 2017)</td><td>1,417</td><td>83.1</td><td>32.00/0.8931</td><td>28.49/0.7783</td><td>27.44/0.7325</td><td>26.05/0.7819</td></tr><tr><td>SRDenseNet (Tong et al., 2017)</td><td>2,015</td><td>389.9</td><td>32.02/0.8934</td><td>28.50/0.7782</td><td>27.53/0.7337</td><td>26.05/0.7819</td></tr><tr><td>SRMDNF (Zhang et al., 2017a)</td><td>3,988</td><td>232.7</td><td>31.96/0.8925</td><td>28.35/0.7787</td><td>27.49/0.7337</td><td>25.68/0.7731</td></tr><tr><td>D-DBPN (Haris et al., 2018)</td><td>2,207</td><td>79.7</td><td>32.47/0.8980</td><td>28.82/0.7860</td><td>27.72/0.7400</td><td>26.38/0.7946</td></tr><tr><td>CARN (Ahn et al., 2018)</td><td>1,592</td><td>90.9</td><td>32.13/0.8937</td><td>28.60/0.7806</td><td>27.58/0.7349</td><td>26.07/0.7837</td></tr><tr><td>CARN-M (Ahn et al., 2018)</td><td>412</td><td>32.5</td><td>31.92/0.8903</td><td>28.42/0.7762</td><td>27.44/0.7304</td><td>25.62/0.7694</td></tr><tr><td>blm-e(r=2)[rb](ours)</td><td>683</td><td>108.3</td><td>32.10/0.8938</td><td>28.51/0.7795</td><td>27.51/0.7340</td><td>25.95/0.7808</td></tr><tr><td>blm1-e[rb](ours)</td><td>511</td><td>98.4</td><td>31.98/0.8921</td><td>28.45/0.7778</td><td>27.47/0.7328</td><td>25.80/0.7754</td></tr><tr><td>blm-e(r=2)[rg+u](ours)</td><td>424</td><td>50</td><td>31.84/0.8898</td><td>28.34/0.7750</td><td>27.38/0.7295</td><td>25.52/0.7673</td></tr></table>
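The Params and Mult-Adds columns in these tables follow the convention of section 3: operations are counted for upscaling to a 720p output. A sketch of how such numbers can be computed for a PyTorch model is given below; the helper is our own, counts only `Conv2d` layers (which dominate SR models), and assumes a 3-channel LR input.

```python
import torch
import torch.nn as nn

def count_params_and_mult_adds(model: nn.Module, scale: int):
    """Return (params in K, Mult-Adds in G) for producing a 1280x720 HR
    image, so the LR input is (720 // scale, 1280 // scale)."""
    mult_adds = 0

    def hook(module, inputs, output):
        nonlocal mult_adds
        k_h, k_w = module.kernel_size
        c_in = module.in_channels // module.groups
        # One multiply-add per weight per output position; output.numel()
        # already includes the output channels and spatial positions.
        mult_adds += k_h * k_w * c_in * output.numel()

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.Conv2d)]
    with torch.no_grad():
        model(torch.zeros(1, 3, 720 // scale, 1280 // scale))
    for h in handles:
        h.remove()
    params = sum(p.numel() for p in model.parameters())
    return params / 1e3, mult_adds / 1e9
```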
<table><tr><td rowspan="2">Scale</td><td rowspan="2">Model</td><td rowspan="2">Params (K)</td><td rowspan="2">Mult-Adds (G)</td><td colspan="2">Set5</td><td colspan="2">Set14</td><td colspan="2">B100</td><td colspan="2">Urban100</td></tr><tr><td>PSNR/SSIM</td><td>PSNR/SSIM</td><td>PSNR/SSIM</td><td>PSNR/SSSIM</td><td>PSNR/SSIM</td><td>PSNR/SSIM</td><td>PSNR/ssIM</td></tr><tr><td rowspan="2">2×</td><td>bl</td><td>961</td><td>231.2</td><td>37.82/0.9599</td><td>33.36/0.9160</td><td>32.05/0.8981</td><td>31.67/0.9237</td><td></td><td></td><td></td></tr><tr><td>bl_q</td><td></td><td></td><td>37.68/0.9582</td><td>33.34/0.9146</td><td>32.01/0.8966</td><td>31.64/0.9226</td><td></td><td></td><td></td></tr><tr><td rowspan="2">3×</td><td>bl</td><td>1191</td><td>122.4</td><td>34.20/0.9257</td><td>30.15/0.8392</td><td>29.01/0.8028</td><td>27.91/0.8467</td><td></td><td></td><td></td></tr><tr><td>bl_q</td><td></td><td></td><td>34.17/0.9259</td><td>30.13/0.8393</td><td>29.00/0.8030</td><td>27.92/0.8474</td><td></td><td></td><td></td></tr><tr><td rowspan="2">4×</td><td>bl</td><td>1154</td><td>93</td><td>32.01/0.8927</td><td>28.41/0.7793</td><td>27.49/0.7337</td><td>25.87/0.7792</td><td></td><td></td><td></td></tr><tr><td>bl_q</td><td></td><td></td><td>31.99/0.8922</td><td>28.43/0.7794</td><td>27.51/0.7337</td><td>25.88/0.7790</td><td></td><td></td><td></td></tr></table> ## 5.2 TERNARY PRECISION Motivation: The success of using binarized (Courbariaux & Bengio, 2016; Rastegari et al., 2016; Lin et al., 2017) and ternarized neural networks (Li & Liu, 2016; Tschannen et al., 2017) to approximate the full- precision convolutions in image discriminative tasks motivates us to experiment the effectiveness of these techniques in SR. Approach: We adapt the baseline SR architecture used in prior experiments in section 4.1 but modify it structurally by replacing every convolution layer with sum- product convolution layers <--- Page Split ---> proposed in StrassenNets (Tschannen et al., 2017). These sum-product convolution layers represent a sum-product network (SPN) that is used to approximate a matrix multiplication. Specifically, each convolution layer is replaced with a convolution layer that outputs \(r\) feature maps, followed by a element-wise multiplication and a transpose convolution layer. As both the convolution layers hold ternary weights, the number of multiply operations required is determined by the number of element-wise multiplication which is controlled by \(r\) . Besides outlining the trade-off of tuning \(r\) , we aggressively use group convolutions. Results: Similar to section 5.1, the results in Table 5 are similar to that of image discriminative tasks. Specifically, the higher the width of the hidden layer of the SPN, \(r\) , the better the performance at a cost of additional multiplications and additions. When \(r = 6c.out\) , we achieve an evaluation score that is close to the uncompressed model for 2x scales and suffer a slight drop for 3x and 4x scales. Any further attempts to increase \(r\) do not improve evaluation metric. As proposed by Tschannen et al. (2017), we use group convolutions to reduce the number of additions. We take a step further and experiment with a wide range of groups as well. We found that the reduced number of additions do not justify the evaluation drop; the use of a lower \(r\) is better than the use of groups. Additionally, since multipliers are more costly and take up more area on chip than adders, we suggest lowering \(r\) instead of using grouped convolutions. 
Table 5: Quantitative results of applying ternary-weighted SP convolutions. We omit results with group convolutions as they lead to worse trade-offs. c.out refers to the number of output channels in each SP convolution layer; negative values in the Add column indicate an increase in additions.

<table><tr><td>Scale</td><td>Model</td><td>r</td><td>Reduction in Mult (%)</td><td>Reduction in Add (%)</td><td>Set5 PSNR/SSIM</td><td>Set14 PSNR/SSIM</td><td>B100 PSNR/SSIM</td><td>Urban100 PSNR/SSIM</td></tr><tr><td rowspan="5">2×</td><td rowspan="5">ST-bl</td><td>-</td><td>-</td><td>-</td><td>37.86/0.9600</td><td>33.39/0.9159</td><td>32.06/0.8982</td><td>31.74/0.9248</td></tr><tr><td>c.out</td><td>99.82</td><td>-15.96</td><td>37.59/0.9588</td><td>33.14/0.9138</td><td>31.88/0.8959</td><td>31.02/0.9171</td></tr><tr><td>2c.out</td><td>99.64</td><td>-132.1</td><td>37.73/0.9595</td><td>33.30/0.9152</td><td>31.98/0.8973</td><td>31.37/0.9210</td></tr><tr><td>4c.out</td><td>99.28</td><td>-364.38</td><td>37.81/0.9598</td><td>33.31/0.9151</td><td>32.03/0.8978</td><td>31.59/0.9229</td></tr><tr><td>6c.out</td><td>98.92</td><td>-596.65</td><td>37.85/0.9600</td><td>33.41/0.9162</td><td>32.06/0.8981</td><td>31.68/0.9240</td></tr><tr><td rowspan="5">3×</td><td rowspan="5">ST-bl</td><td>-</td><td>-</td><td>-</td><td>34.24/0.9260</td><td>30.26/0.8405</td><td>29.03/0.8033</td><td>27.96/0.8479</td></tr><tr><td>c.out</td><td>99.81</td><td>-35.58</td><td>33.79/0.9215</td><td>29.92/0.8340</td><td>28.82/0.7983</td><td>27.25/0.8318</td></tr><tr><td>2c.out</td><td>99.64</td><td>-171.33</td><td>33.99/0.9236</td><td>30.07/0.8365</td><td>28.91/0.8005</td><td>27.54/0.8389</td></tr><tr><td>4c.out</td><td>99.28</td><td>-442.83</td><td>34.16/0.9250</td><td>30.11/0.8380</td><td>28.97/0.8021</td><td>27.73/0.8435</td></tr><tr><td>6c.out</td><td>98.92</td><td>-714.33</td><td>34.17/0.9253</td><td>30.15/0.8383</td><td>28.99/0.8023</td><td>27.83/0.8454</td></tr><tr><td rowspan="5">4×</td><td rowspan="5">ST-bl</td><td>-</td><td>-</td><td>-</td><td>32.06/0.8930</td><td>28.49/0.7787</td><td>27.50/0.7337</td><td>25.87/0.7788</td></tr><tr><td>c.out</td><td>99.81</td><td>-26.04</td><td>31.46/0.8829</td><td>28.15/0.7709</td><td>27.27/0.7265</td><td>25.24/0.7566</td></tr><tr><td>2c.out</td><td>99.64</td><td>-152.24</td><td>31.72/0.8871</td><td>28.31/0.7743</td><td>27.37/0.7296</td><td>25.49/0.7662</td></tr><tr><td>4c.out</td><td>99.28</td><td>-404.65</td><td>31.90/0.8901</td><td>28.40/0.7770</td><td>27.45/0.7319</td><td>25.67/0.7723</td></tr><tr><td>6c.out</td><td>98.93</td><td>-657.06</td><td>31.95/0.8907</td><td>28.43/0.7777</td><td>27.46/0.7324</td><td>25.77/0.7752</td></tr></table>

## 6 BEST PRACTICES FOR EFFICIENT SUPER-RESOLUTION

Through an extensive set of experiments, we show that some of the efficiency techniques that are successful in image discriminative tasks can be successfully applied to SR. Although these techniques are comparable in the former tasks, we highlight their varying effectiveness in SR and derive a list of best practices for constructing or reducing any model designed to reduce image distortion (a code sketch of the recommended first step follows this list):

- The sole use of low rank tensor decomposition (bottleneck design) results in the best trade-offs between performance and efficiency. If further compression of memory and/or compute is needed, separable/grouped convolution is recommended.
- If efficiency on conventional hardware is the topmost priority, we recommend reducing the number of layers or adopting the use of both channel splitting and shuffling (Ma et al., 2018).
- The fewer resource-efficient architecture changes applied, the better the trade-off. Therefore, we recommend a mixture of plain convolutions and resource-efficient units unless further compression is needed.
- Avoid architecture changes on the first and last convolution layers.
- We strongly recommend using any form of quantization if the hardware supports it.
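A minimal sketch of the first recommendation: walk the network and swap plain 3x3 convolutions for bottleneck blocks until a swap budget is exhausted, leaving the first, last, and up-sampling convolutions untouched. The module names in `skip` are assumptions about how the model is organized, and the bottleneck block is only shape-compatible, not behavior-preserving, so retraining is required.

```python
import torch.nn as nn

class BottleneckConv(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 expand with a skip (the blrn(r) design)."""
    def __init__(self, channels: int, r: int = 2):
        super().__init__()
        mid = channels // r
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1),
            nn.Conv2d(mid, mid, 3, padding=1),
            nn.Conv2d(mid, channels, 1),
        )

    def forward(self, x):
        return x + self.body(x)

def compress_gradually(model: nn.Module, max_swaps: int,
                       skip=("head", "tail", "upsample")) -> int:
    """Swap square-channel 3x3 convs for bottlenecks, depth-first, until
    the budget runs out; `skip` names the modules to leave alone."""
    budget = [max_swaps]

    def visit(module: nn.Module):
        for name, child in module.named_children():
            if name in skip:
                continue
            if (budget[0] > 0 and isinstance(child, nn.Conv2d)
                    and child.kernel_size == (3, 3)
                    and child.in_channels == child.out_channels):
                setattr(module, name, BottleneckConv(child.in_channels))
                budget[0] -= 1
            else:
                visit(child)

    visit(model)
    return max_swaps - budget[0]  # number of convolutions replaced
```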
## REFERENCES

Namhyuk Ahn, Byungkon Kang, and Kyung-Ah Sohn. Fast, accurate, and lightweight super-resolution with cascading residual network. CoRR, abs/1803.08664, 2018. URL http://arxiv.org/abs/1803.08664.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.

Marco Bevilacqua, Aline Roumy, Christine Guillemot, and Marie-Line Alberi-Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In Proceedings of the British Machine Vision Conference, pp. 135.1-135.10. BMVA Press, 2012. ISBN 1-901725-46-4. doi: http://dx.doi.org/10.5244/C.26.135.

Sourav Bhattacharya and Nicholas D. Lane. Sparsification and separation of deep learning layers for constrained resource inference on wearables. In SenSys, 2016.

Y. Blau, R. Mechrez, R. Timofte, T. Michaeli, and L. Zelnik-Manor. 2018 PIRM challenge on perceptual image super-resolution. ArXiv e-prints, September 2018.

Yochai Blau and Tomer Michaeli. The perception-distortion tradeoff. CoRR, abs/1711.06077, 2017.

Hong Chang, Dit-Yan Yeung, and Yimin Xiong. Super-resolution through neighbor embedding. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 1:I-I, 2004.

J.-H. Choi, J.-H. Kim, M. Cheon, and J.-S. Lee. Deep learning-based image super-resolution considering quantitative and perceptual quality. ArXiv e-prints, September 2018.

Jae-Seok Choi and Munchurl Kim. A deep convolutional neural network with selection units for super-resolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.

Matthieu Courbariaux and Yoshua Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR, abs/1602.02830, 2016.

Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. CoRR, abs/1501.00092, 2015. URL http://arxiv.org/abs/1501.00092.

Chao Dong, Chen Change Loy, and Xiaoou Tang. Accelerating the super-resolution convolutional neural network. CoRR, abs/1608.00367, 2016. URL http://arxiv.org/abs/1608.00367.

Yuchen Fan, Honghui Shi, Jiahui Yu, Ding Liu, Wei Han, Haichao Yu, Zhangyang Wang, Xinchao Wang, and Thomas S. Huang. Balanced two-stage residual networks for image super-resolution. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1157-1164, 2017.

Ido Freeman, Lutz Roese-Koerner, and Anton Kummert. EffNet: An efficient structure for convolutional neural networks. CoRR, abs/1801.06434, 2018. URL http://arxiv.org/abs/1801.06434.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. CoRR, abs/1505.07376, 2015.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.

S. Gu, R. Timofte, and L. Van Gool. Multi-bin trainable linear unit for fast image restoration networks. ArXiv e-prints, July 2018.

Muhammad Haris, Greg Shakhnarovich, and Norimichi Ukita. Deep back-projection networks for super-resolution. CoRR, abs/1803.02735, 2018. URL http://arxiv.org/abs/1803.02735.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.

Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017. URL http://arxiv.org/abs/1704.04861.

Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. CoRR, abs/1608.06993, 2016.

J. Huang, A. Singh, and N. Ahuja. Single image super-resolution from transformed self-exemplars. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5197-5206, June 2015. doi: 10.1109/CVPR.2015.7299156.

A. Ignatov, R. Timofte, T. Van Vu, T. M. Luu, T. X Pham, C. Van Nguyen, Y. Kim, J.-S. Choi, M. Kim, J. Huang, J. Ran, C. Xing, X. Zhou, P. Zhu, M. Geng, Y. Li, E. Agustsson, S. Gu, L. Van Gool, E. de Stoutz, N. Kobyshev, K. Nie, Y. Zhao, G. Li, T. Tong, Q. Gao, L. Hanwen, P. Navarrete Michelini, Z. Dan, H. Fengshuo, Z. Hui, X. Wang, L. Deng, R. Meng, J. Qin, Y. Shi, W. Wen, L. Lin, R. Feng, S. Wu, C. Dong, Y. Qiao, S. Vasu, N. Thekke Madam, P. Kandula, A. N. Rajagopalan, J. Liu, and C. Jung. PIRM challenge on perceptual image enhancement on smartphones: Report. ArXiv e-prints, October 2018.

Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. CoRR, abs/1712.05877, 2017. URL http://arxiv.org/abs/1712.05877.

Luke Jaffe, Shiv Sundram, and Christian Martinez-Nieves. Super-resolution to improve classification accuracy of low-resolution images. Technical report, Tech. Rep. 19, Stanford University, 2017.

Justin Johnson, Alexandre Alahi, and Fei-Fei Li. Perceptual losses for real-time style transfer and super-resolution. CoRR, abs/1603.08155, 2016.

Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Deeply-recursive convolutional network for image super-resolution. CoRR, abs/1511.04491, 2015a.

Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. CoRR, abs/1511.04587, 2015b. URL http://arxiv.org/abs/1511.04587.

Jun-Hyuk Kim and Jong-Seok Lee. Deep residual network with enhanced upscaling module for super-resolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018.

Kwang In Kim and Younghee Kwon. Single-image super-resolution using sparse regression and natural image prior. IEEE Trans. Pattern Anal. Mach. Intell., 32(6):1127-1133, 2010.
Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile applications. CoRR, abs/1511.06530, 2015c.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012. URL http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.

Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. CoRR, abs/1704.03915, 2017. URL http://arxiv.org/abs/1704.03915.

Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. CoRR, abs/1609.04802, 2016.

Fengfu Li and Bin Liu. Ternary weight networks. CoRR, abs/1605.04711, 2016.

Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. CoRR, abs/1707.02921, 2017. URL http://arxiv.org/abs/1707.02921.

Xiaofan Lin, Cong Zhao, and Wei Pan. Towards accurate binary convolutional neural network. CoRR, abs/1711.11294, 2017.

Yuan Liu, Yuancheng Wang, Nan Li, Xu Cheng, Yifeng Zhang, Yongming Huang, and Guojun Lu. An attention-based approach for single image super resolution. CoRR, abs/1807.06779, 2018.

Chao Ma, Chih-Yuan Yang, Xiaokang Yang, and Ming-Hsuan Yang. Learning a no-reference quality metric for single-image super-resolution. CoRR, abs/1612.05890, 2016.

N. Ma, X. Zhang, H.-T. Zheng, and J. Sun. ShuffleNet V2: Practical guidelines for efficient CNN architecture design. ArXiv e-prints, July 2018.

D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings Eighth IEEE International Conference on Computer Vision (ICCV 2001), volume 2, pp. 416-423, July 2001. doi: 10.1109/ICCV.2001.937655.

Roey Mechrez, Itamar Talmi, Firas Shama, and Lihi Zelnik-Manor. Learning to maintain natural image statistics. CoRR, abs/1803.04626, 2018a.

Roey Mechrez, Itamar Talmi, and Lihi Zelnik-Manor. The contextual loss for image transformation with non-aligned data. CoRR, abs/1803.02077, 2018b.

Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-reference image quality assessment in the spatial domain. Trans. Img. Proc., 21(12):4695-4708, December 2012. ISSN 1057-7149. doi: 10.1109/TIP.2012.2214050. URL https://doi.org/10.1109/TIP.2012.2214050.

Anish Mittal, Rajiv Soundararajan, and Alan C. Bovik. Making a completely blind image quality analyzer. IEEE Signal Processing Letters, 20:209-212, 2013.

P. Navarrete Michelini, H. Liu, and D. Zhu. Multigrid backprojection super-resolution and deep filter visualization. ArXiv e-prints, September 2018.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: Imagenet classification using binary convolutional neural networks. CoRR, abs/1603.05279, 2016.

Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation. CoRR, abs/1801.04381, 2018. URL http://arxiv.org/abs/1801.04381.
H. R. Sheikh, A. C. Bovik, and G. de Veciana. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Transactions on Image Processing, 14:2117-2128, December 2005. doi: 10.1109/TIP.2005.859389.

Ying Tai, Jian Yang, and Xiaoming Liu. Image super-resolution via deep recursive residual network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017a.

Ying Tai, Jian Yang, Xiaoming Liu, and Chunyan Xu. MemNet: A persistent memory network for image restoration. CoRR, abs/1708.02209, 2017b.

R. Timofte, E. Agustsson, L. V. Gool, M. Yang, L. Zhang, B. Lim, S. Son, H. Kim, S. Nah, K. M. Lee, X. Wang, Y. Tian, K. Yu, Y. Zhang, S. Wu, C. Dong, L. Lin, Y. Qiao, C. C. Loy, W. Bae, J. Yoo, Y. Han, J. C. Ye, J. Choi, M. Kim, Y. Fan, J. Yu, W. Han, D. Liu, H. Yu, Z. Wang, H. Shi, X. Wang, T. S. Huang, Y. Chen, K. Zhang, W. Zuo, Z. Tang, L. Luo, S. Li, M. Fu, L. Cao, W. Heng, G. Bui, T. Le, Y. Duan, D. Tao, R. Wang, X. Lin, J. Pang, J. Xu, Y. Zhao, X. Xu, J. Pan, D. Sun, Y. Zhang, X. Song, Y. Dai, X. Qin, X. Huynh, T. Guo, H. S. Mousavi, T. H. Vu, V. Monga, C. Cruz, K. Egiazarian, V. Katkovnik, R. Mehta, A. K. Jain, A. Agarwalla, C. V. S. Praveen, R. Zhou, H. Wen, C. Zhu, Z. Xia, Z. Wang, and Q. Guo. NTIRE 2017 challenge on single image super-resolution: Methods and results. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1110-1121, July 2017. doi: 10.1109/CVPRW.2017.149.

Radu Timofte, Vincent De Smet, and Luc J. Van Gool. Anchored neighborhood regression for fast example-based super-resolution. In ICCV, pp. 1920-1927. IEEE Computer Society, 2013.

Tong Tong, Gen Li, Xiejie Liu, and Qinquan Gao. Image super-resolution using dense skip connections. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 4809-4817, 2017.

Michael Tschannen, Aran Khanna, and Anima Anandkumar. StrassenNets: Deep learning with a multiplication budget. CoRR, abs/1712.03942, 2017. URL http://arxiv.org/abs/1712.03942.

X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, C. Change Loy, Y. Qiao, and X. Tang. ESRGAN: Enhanced super-resolution generative adversarial networks. ArXiv e-prints, September 2018.

Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. Trans. Img. Proc., 13(4):600-612, April 2004. ISSN 1057-7149. doi: 10.1109/TIP.2003.819861. URL http://dx.doi.org/10.1109/TIP.2003.819861.

Saining Xie, Ross B. Girshick, Piotr Dollar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. CoRR, abs/1611.05431, 2016. URL http://arxiv.org/abs/1611.05431.

Jianchao Yang, John Wright, Thomas S. Huang, and Yi Ma. Image super-resolution via sparse representation. Trans. Img. Proc., 19(11):2861-2873, November 2010. ISSN 1057-7149. doi: 10.1109/TIP.2010.2050625. URL http://dx.doi.org/10.1109/TIP.2010.2050625.

Dong-Qing Zhang. clcNet: Improving the efficiency of convolutional neural network using channel local convolutions. CoRR, abs/1712.06145, 2017. URL http://arxiv.org/abs/1712.06145.

Kai Zhang, Wangmeng Zuo, and Lei Zhang. Learning a single convolutional super-resolution network for multiple degradations. CoRR, abs/1712.06116, 2017a. URL http://arxiv.org/abs/1712.06116.
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. CoRR, abs/1707.01083, 2017b. URL http://arxiv.org/abs/1707.01083.

Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu. Image super-resolution using very deep residual channel attention networks. ArXiv e-prints, July 2018.

Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super-resolution. CoRR, abs/1802.08797, 2018. URL http://arxiv.org/abs/1802.08797.

## A VISUAL COMPARISON ON X4 SCALE BENCHMARKS

![](images/12_0.jpg)
<center>Figure 2: Visual comparisons with state-of-the-art models on Set5 and Set14. VDSR and LapSRN are comparable to our models with regard to model size and/or number of operations, while RCAN is x22.8-x30.5 larger and has x8.4-x9.3 more operations. </center>

![](images/13_0.jpg)
<center>Figure 3: Visual comparisons with state-of-the-art models on B100. </center>

![](images/14_0.jpg)
<center>Figure 4: Visual comparisons with state-of-the-art models on Urban100. </center>

![](images/15_0.jpg)
<center>Figure 5: More \(\times 4\) scale visual comparisons on Urban100. </center>
## ABSTRACT

A successful application of convolutional architectures is to increase the resolution of single low-resolution images, an image restoration task called super-resolution (SR). Naturally, SR is of value to resource-constrained devices like mobile phones, electronic photograph frames, and televisions to enhance image quality. However, SR demands perhaps the most extreme amounts of memory and compute operations of any mainstream vision task known today, preventing SR from being deployed to the devices that need it. In this paper, we perform an early systematic study of system resource efficiency for SR, within the context of a variety of architectural and low-precision approaches originally developed for discriminative neural networks. We present a rich set of insights, representative SR architectures, and efficiency trade-offs: for example, how to prioritize compression techniques to reach a specific memory and computation target, and how to compact SR models so that they are suitable for DSPs and FPGAs. In doing so, we derive models with performance better than or comparable to previous models in the existing literature, highlighting the practicality of using existing efficiency techniques in SR tasks. Collectively, we believe these results provide a foundation for further research into the little explored area of resource efficiency for SR.

## 1 INTRODUCTION

Rapid progress has been made in the development of convolutional networks (Dong et al., 2015) that are capable of taking a low-resolution image and producing an image with a significant increase in resolution. This image restoration task is referred to as super-resolution (SR) and has many potential applications in devices with limited memory and compute capacity. The fundamental problem, however, is that the state-of-the-art networks (Lim et al., 2017; Zhang et al., 2018; Zhang et al., 2018) consist of thousands of layers and are some of the most resource intensive networks currently known. Furthermore, due to the spatial dimensions of the feature maps needed to maintain or up-scale the input, the number of operations is counted in the billions, as opposed to millions in models for discriminative tasks. As a result, there is a need for a general systematic approach to improve the efficiency of SR models.

The system resource requirements of deep learning models for tasks other than SR have been carefully studied in previous works (Zhang et al., 2017b; Howard et al., 2017; Ma et al., 2018; Sandler et al., 2018), achieving massive gains in size and compute with little to no loss in performance. These reductions are achieved with a wide variety of methods, grounded primarily in architecture-level changes and in the use of low-precision and quantized model parameters. However, how these efficiency methods behave when applied within SR has not yet been studied in significant depth, with very few results published in the literature. Extrapolating from prior results for other tasks is problematic given that existing studies predominantly target discriminative tasks with substantially different architectures and operations. Due to the up-sampling structure of SR models, these efficiency methods may therefore produce potentially stronger side-effects on image distortion.
In this paper, we detail a systematic study that seeks to bridge current understanding in SR and known approaches for scaling down the consumption of system resources by deep models. By examining the impact on image distortion quality when applying various efficiency techniques, we provide the following new insights:

- The effectiveness of low rank tensor decomposition and other convolution approximations, which are comparable and successful in discriminative tasks, can vary considerably in SR. (See section 4.1.)
- Unlike image discriminative networks, SR networks suffer from a worse trade-off between efficiency and performance as more layers are compressed. (See section 4.2.)
- Compression techniques from other tasks are practical for SR: our best models are better than or comparable to the existing literature. For instance, our best model achieves significantly better performance and 6x less compute than MemNet (Tai et al., 2017b) and VDSR (Kim et al., 2015b). Additionally, it also performs better and is 4.1x-5.8x smaller than SRMDNF (Zhang et al., 2017a). (See section 4.3.)
- Successful quantization techniques used in image discriminative tasks are equally successful in SR. (See section 5.)

## 2 RELATED WORK

We focus on using neural networks for SR, as they have been shown to achieve superior performance over earlier traditional approaches (Timofte et al., 2013; Kim & Kwon, 2010; Chang et al., 2004). An SR image can either be evaluated using standard image distortion metrics, such as PSNR, SSIM (Wang et al., 2004), and IFC (Sheikh et al., 2005), or using perception metrics, such as Ma et al. (2016), NIQE (Mittal et al., 2013), and BRISQUE (Mittal et al., 2012). Blau & Michaeli (2017) provided theoretical backing for the trade-off between image distortion and perception.

Distortion SR: In the distortion line of work, models favour pixel-to-pixel comparisons and are usually trained on either the L1 or L2 (MSE) loss. These models have been known to produce more visually pleasing outcomes on structural images than perceptual SR models (Blau et al., 2018). Dong et al. (2015) first proposed using convolutional networks for SR, leading to a surge in using neural networks for SR. These networks differ in their building blocks for feature extraction and up-sampling. For instance, Dong et al. (2016) proposed a faster convolutional network by taking the down-sampled low-resolution image as an input. Other variations include using more layers (Kim et al., 2015b), recursive layers (Kim et al., 2015a; Tai et al., 2017a), memory blocks (Tai et al., 2017b; Ahn et al., 2018), DenseNet (Huang et al., 2016) blocks (Tong et al., 2017), residual (He et al., 2015) blocks (Ledig et al., 2016; Lim et al., 2017; Kim & Lee, 2018), and multiple-image degradations (Zhang et al., 2017a). Additionally, more recent models use attention (Bahdanau et al., 2014) mechanisms (Liu et al., 2018; Zhang et al., 2018), back-projection (Haris et al., 2018; Navarrete Michelini et al., 2018), and other non-conventional non-linear layers (Choi & Kim, 2017; Gu et al., 2018).

Perceptual SR: Perceptual SR models, on the other hand, are better at reconstructing unstructured details with high perceptual quality (Blau et al., 2018).
These models usually adopt popular models for image distortion and train them using a variety of different loss functions, such as the perceptual loss (Johnson et al., 2016), contextual loss (Mechrez et al., 2018b), adversarial loss (Goodfellow et al., 2014), and the Gram loss (Gatys et al., 2015). For instance, Choi et al. (2018) adopted EUSR (Kim & Lee, 2018), while Ledig et al. (2016), Wang et al. (2018), and Mechrez et al. (2018a) adopted SRResNet (Ledig et al., 2016), making slight architecture changes and replacing the objective. Although these perceptual models are able to generate more visually pleasing results on certain images, they do not seem to work well as inputs for image classification (Jaffe et al., 2017).

Efficient SR: As models in both tracks are resource-intensive, the recent PIRM 2018 challenge for mobile (Ignatov et al., 2018) presented a range of high-efficiency models designed to run faster and perform better than SRCNN (Dong et al., 2015). These models are complementary to our work and can follow our best practices to achieve greater efficiency gains. A work closely related to ours is that of Ahn et al. (2018), who systematically investigate the impact of using grouped convolutions. Due to the massive design space caused by the variability of training and evaluating these models, we focus on the trade-offs between performance, measured by image distortion metrics, and efficiency, and leave the rest as future work.

## 3 SYSTEMATIC STUDY OF LOW-RESOURCE SUPER RESOLUTION NETWORKS

The key step in our work is to build understanding toward resource-efficient architectures for super-resolution. While there is ample understanding of how these efficiency techniques work in classification problems, there is a lack of experimental studies and systematic approaches for understanding their practicality in super-resolution. To our knowledge, this is the first systematic study of a wide range of efficiency methods for super-resolution.

We measure performance using PSNR and SSIM (Wang et al., 2004) and measure memory and compute efficiency using the number of parameters and the number of multiply-add operations (Mult-Adds), both of which dictate which platforms these models can run on. However, these metrics alone do not reflect the trade-off between performance and efficiency. Therefore, we introduce two new metrics that measure the number of Giga Mult-Adds saved and the number of parameters saved for every 0.01 dB of PSNR lost across the test sets: Set5 (Bevilacqua et al., 2012), Set14 (Yang et al., 2010), B100 (Martin et al., 2001), and Urban100 (Huang et al., 2015). These metrics are calculated by taking the difference between the compressed model and the uncompressed model. All Mult-Adds are calculated by upscaling to a 720p image.
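To make the trade-off metrics concrete, the helper below reproduces the Mult-Add Saved and Params Saved columns of Table 2, under our reading that the PSNR loss is summed over the four test sets; the dictionary-based interface is hypothetical, but the arithmetic recovers the published numbers.

```python
def trade_off_metrics(base: dict, compressed: dict):
    """Mult-Adds (G) and parameters (K) saved per 0.01 dB of PSNR lost,
    where the PSNR loss is summed over Set5/Set14/B100/Urban100.

    Each argument: {'params_k': ..., 'mult_adds_g': ..., 'psnr': [4 floats]}.
    """
    psnr_loss_db = sum(b - c for b, c in zip(base["psnr"], compressed["psnr"]))
    units_of_001db = psnr_loss_db / 0.01
    return (
        (base["mult_adds_g"] - compressed["mult_adds_g"]) / units_of_001db,
        (base["params_k"] - compressed["params_k"]) / units_of_001db,
    )

# Worked example with the 2x rows of Table 2 (bl-e vs blm-e(r=2)[rb]):
base = {"params_k": 1006, "mult_adds_g": 265.3,
        "psnr": [37.92, 33.43, 32.09, 31.83]}
comp = {"params_k": 535, "mult_adds_g": 156.8,
        "psnr": [37.75, 33.30, 32.00, 31.48]}
print(trade_off_metrics(base, comp))  # -> (1.4662..., 6.3649...)
```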
We decide to use RCAN (Zhang et al., 2018) as our baseline model, as it is the state-of-the-art with the best performance on the image distortion metrics at the time of writing. We take its simplest building block, build a shallower network, and use that as a basis for exploring a variety of techniques.

Implementation Details: We train our models in sections 4 and 5.1 in the same manner as EDSR (Lim et al., 2017). In particular, we use \(48 \times 48\) RGB patches of LR images from the DIV2K dataset (Timofte et al., 2017). We augment the training data with random horizontal flips and 90 degree rotations and pre-process it by subtracting the mean RGB value of the DIV2K dataset. Our model is trained using the ADAM optimizer (Kingma & Ba, 2014) with hyper-parameters \(\beta_{1} = 0.9\), \(\beta_{2} = 0.999\), and \(\epsilon = 10^{-8}\). The mini-batch size is 16, the learning rate begins at 1e-4 and is halved at 200 epochs, and the model is trained for 300 epochs using the L1 loss. We train x2 models from scratch and use them as pre-trained models for the x3 and x4 models for faster convergence. Lastly, for the ternary quantization in section 5.2, we further train the model with quantization enabled in each forward pass for 40 epochs, starting at a learning rate of 5e-5, and then fix the quantized ternary weights and train for another 10 epochs at a learning rate of 2.5e-5.

## 4 EFFICIENT NETWORK ARCHITECTURES FOR SUPER RESOLUTION

We begin our evaluation by conducting a series of experiments: (i) we explore the effects of applying different resource-efficient architectures to our baseline model (section 4.1), (ii) we take the two best techniques and present trade-off solutions while applying them to different parts of our baseline model (section 4.2), and (iii) lastly, we compare our best results with previous SR architectures (section 4.3).

### 4.1 EFFECTS OF VARIOUS RESOURCE-EFFICIENT TECHNIQUES

Motivation: Resource-efficient architectures use low rank tensor decomposition and other convolutional approximation techniques, which are task-agnostic, to build fast and accurate image discriminative models. We first develop an initial understanding of the trade-offs by replacing and modifying 3x3 convolution layer blocks in the baseline model.

Approach: We explore the use of known techniques such as the bottleneck design, separable/grouped convolutions, and channel shuffling. We take the feature extraction unit from each resource-efficient architecture and remove all batch normalisation layers, as they were previously shown to reduce performance and increase GPU memory usage (Lim et al., 2017). For our first set of experiments, we replace all 3x3 convolution layers in the residual groups of our baseline model; code sketches of representative units follow the descriptions below.

bl: Our baseline model from RCAN (Zhang et al., 2018). We reduce the number of residual groups (RG) from 10 to 2 and the number of residual channel attention blocks (RCAB) in each RG from 20 to 5. We use a feature map size of 64. Making the network shallower and smaller in parameters allows us to cleanly attribute the effect of each architectural change, as opposed to a deep network where other effects and interplay may obscure it.

blrn(r): We adopt the residual bottleneck design from ResNet (He et al., 2015) with a reduction factor of \(r\). Specifically, a 1x1 convolution is used to compress information among channels by the reduction factor, resulting in a cheaper 3x3 convolution. Another 1x1 convolution then recovers the dimension of the output channels, and a skip connection passes on information that may have been lost.

blrxn(r, g): We replace the 3x3 convolution in blrn with a 3x3 grouped convolution of group size \(g\), forming a block similar to that of ResNeXt (Xie et al., 2016). Computation cost is further reduced by the use of grouped convolutions (Krizhevsky et al., 2012).
blm1: In order to further improve the efficiency of the 3x3 grouped convolution, we can maximise the group size, forming what is known as a depthwise convolution. Following this idea, we adopt the MobileNet v1 (Howard et al., 2017) unit, which uses depthwise separable convolutions, each consisting of a 3x3 depthwise convolution followed by a 1x1 convolution, also known as a pointwise convolution.

bleff(r): We can further approximate the 3x3 depthwise convolution by using a 1x3 and a 3x1 depthwise convolution, a technique used in EffNet (Freeman et al., 2018). We adopt the unit from EffNet, removing the pooling layers.

bls1(r, g): We group both the 3x3 and 1x1 convolutions and add channel shuffling in order to improve the information flow among channels. To test the effects of channel shuffling, we adopt the ShuffleNet v1 (Zhang et al., 2017b) unit.

blclc(g1, g2): Channel shuffle is also used in clcNet (Zhang, 2017) to further improve the efficiency of blm1. To maximise the efficiency of our adopted clcNet units, we follow the group size guidelines recommended by the authors for the group sizes of both the 3x3 (g1) and 1x1 (g2) grouped convolutions.

bls2: Apart from using grouped convolutions, Ma et al. (2018) proposed splitting the flow in two, termed channel splitting, and performing convolution on only half of the input channels in each unit at each pass. Channel shuffle is then used to enable information flow between both branches.

blm2(e): Inverted residuals can be used to enable skip connections directly on the bottleneck layers. Therefore, we adopt the MobileNet v2 (Sandler et al., 2018) unit in our experiments.
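The sketches below illustrate three of the units above: the blrn bottleneck, the blm1 depthwise separable unit, and the blrxn grouped bottleneck. Channel counts and activation placement are our assumptions; the paper fixes only the overall designs, and the residual skip of the surrounding block is kept outside these units.

```python
import torch.nn as nn

def blrn_unit(c: int, r: int = 2) -> nn.Sequential:
    """blrn(r): 1x1 reduce by r, cheap 3x3, 1x1 expand; the enclosing
    residual block's skip connection recovers lost information."""
    mid = c // r
    return nn.Sequential(
        nn.Conv2d(c, mid, 1),
        nn.Conv2d(mid, mid, 3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid, c, 1),
    )

def blrxn_unit(c: int, r: int = 2, g: int = 4) -> nn.Sequential:
    """blrxn(r, g): like blrn but with the 3x3 convolution grouped."""
    mid = c // r
    return nn.Sequential(
        nn.Conv2d(c, mid, 1),
        nn.Conv2d(mid, mid, 3, padding=1, groups=g),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid, c, 1),
    )

def blm1_unit(c: int) -> nn.Sequential:
    """blm1: depthwise separable convolution, i.e. a 3x3 depthwise
    convolution (groups == channels) followed by a 1x1 pointwise one."""
    return nn.Sequential(
        nn.Conv2d(c, c, 3, padding=1, groups=c),  # depthwise
        nn.Conv2d(c, c, 1),                       # pointwise
    )
```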
Table 1: Quantitative results of applying resource-efficient techniques. bold/italics indicates best/second-best trade-off.

<table><tr><td>Scale</td><td>Model</td><td>Params (K)</td><td>Mult-Adds (G)</td><td>Set5 PSNR/SSIM</td><td>Set14 PSNR/SSIM</td><td>B100 PSNR/SSIM</td><td>Urban100 PSNR/SSIM</td><td>Mult-Add Saved (G)</td><td>Params Saved (K)</td></tr><tr><td rowspan="11">2×</td><td>bl</td><td>1006</td><td>231.2</td><td>37.6/0.9600</td><td>33.3/0.9159</td><td>32.0/0.8982</td><td>31.7/0.9248</td><td>-</td><td>-</td></tr><tr><td>blrn(r=2)</td><td>464</td><td>106.5</td><td>37.6/0.9591</td><td>33.1/0.9137</td><td>31.9/0.8961</td><td>31.0/0.9173</td><td>0.9819</td><td>4.27</td></tr><tr><td>blm1</td><td>265</td><td>60.7</td><td>37.4/0.9586</td><td>33.0/0.9122</td><td>31.7/0.8948</td><td>30.6/0.9128</td><td>0.7894</td><td>3.43</td></tr><tr><td>blrxn(r=2, g=4)</td><td>305</td><td>69.9</td><td>37.4/0.9585</td><td>33.0/0.9123</td><td>31.8/0.8948</td><td>30.7/0.9131</td><td>0.7755</td><td>3.37</td></tr><tr><td>blrn(r=4)</td><td>258</td><td>59</td><td>37.4/0.9583</td><td>32.9/0.9120</td><td>31.7/0.8943</td><td>30.6/0.9123</td><td>0.7653</td><td>3.32</td></tr><tr><td>blclc(g1=32, g2=2)</td><td>232</td><td>52.9</td><td>37.4/0.9581</td><td>32.9/0.9118</td><td>31.7/0.8939</td><td>30.5/0.9108</td><td>0.7278</td><td>3.16</td></tr><tr><td>bls2</td><td>211</td><td>48.4</td><td>37.7/0.9580</td><td>32.9/0.9117</td><td>31.7/0.8936</td><td>30.4/0.9102</td><td>0.7254</td><td>3.15</td></tr><tr><td>bleff(r=2)</td><td>257</td><td>58.7</td><td>37.3/0.9580</td><td>32.9/0.9114</td><td>31.7/0.8936</td><td>30.4/0.9108</td><td>0.6458</td><td>2.82</td></tr><tr><td>blm2(e=2)</td><td>561</td><td>128.9</td><td>37.4/0.9586</td><td>33.0/0.9130</td><td>31.8/0.8949</td><td>30.7/0.9138</td><td>0.5219</td><td>2.27</td></tr><tr><td>bls1(r=2, g=4)</td><td>188</td><td>42.9</td><td>37.6/0.9568</td><td>32.7/0.9096</td><td>31.5/0.8912</td><td>29.9/0.9031</td><td>0.4968</td><td>2.16</td></tr><tr><td>blm2(e=3)</td><td>763</td><td>175.4</td><td>37.5/0.9589</td><td>33.1/0.9133</td><td>31.8/0.8954</td><td>30.9/0.9155</td><td>0.3509</td><td>1.53</td></tr></table>

Results: Our results in Table 1 show that techniques with a better trade-off between memory and performance also have a better trade-off between compute and performance. Overall, the use of bottlenecks alone (blrn) results in the best trade-offs, followed by the use of separable/grouped convolutions. Reducing the number of features to accommodate inverted bottlenecks (blm2) severely impacts performance, and thus we omit those results from the table. We speculate that doing so leaves insufficient features at the up-sampling layer to fully capture the up-sampled image representation. Thus, we use the same number of feature maps as our bottleneck. Although the use of inverted residuals in our experiments seems worse off, it may perform better on models that use a larger feature size or multiple smaller up-sampling layers. Lastly, the use of 1x1 grouped convolution or channel splitting with channel shuffling (sketched below) further reduces the evaluation metric. Although doing so can drastically reduce size, the trade-off does not seem to justify its advantages. Therefore, we recommend using bottlenecks for building resource-efficient SR architectures. If the budget for memory and efficiency is tight, we recommend the use of depthwise separable convolutions instead. In image discriminative tasks, the proposed architecture changes are comparable in terms of efficiency and accuracy trade-offs.
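For completeness, here is a sketch of the bls2 unit discussed above (channel splitting plus shuffling, after ShuffleNet v2, Ma et al., 2018); the exact composition of the convolved branch is our assumption.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels so information flows between groups/branches."""
    n, c, h, w = x.shape
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2).reshape(n, c, h, w))

class SplitShuffleUnit(nn.Module):
    """bls2-style unit: convolve only half of the channels each pass,
    then shuffle so the two branches mix on the next unit."""
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1),
            nn.Conv2d(half, half, 3, padding=1, groups=half),  # depthwise
            nn.Conv2d(half, half, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        left, right = x.chunk(2, dim=1)
        out = torch.cat([left, self.branch(right)], dim=1)
        return channel_shuffle(out, groups=2)
```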
In our work, we show that the sole use of low rank tensor decomposition (bottleneck architectures) provide the best trade- offs, followed by the use of separable/grouped convolutions and the use of both channel splitting and shuffling. ### 4.2 EFFECTS OF ARCHITECTURAL LAYERS BETWEEN THE INPUT AND OUTPUT LAYER Motivation: Bhattacharya & Lane (2016) and Kim et al. (2015c) have shown that it is possible in image classification to maintain a similar or slight drop in performance by decomposing tensors of known models. However, our models suffer a significant drop in performance. (Table 1). Therefore, in order to further understand the extent of their applicability in SR, we apply the top two best techniques, which are bottleneck reduction (blm) and depthwise separable convolutions (blm1), on various different parts of our baseline model. Approach: Our preliminary experiments with applying some of these techniques on the first and last convolution layer led to worse trade- offs. Therefore, we apply our techniques between them. We replace the sub- pixel convolution upsampling layer to the enhanced upscaling module (EUM) as proposed by Kim & Lee (2018) to allow the use of skip connections. Using EUM leads to an increase in performance at a slight cost of both memory and compute. Thus, in order to maintain the memory cost, we use recursion, forming the enhanced recursive upscaling module (ERUM) shown in figure 1. The number of ERUMs is the same as the scaling factor and each ERUM recurses twice or thrice for x2, x4 or x3 scales respectively. Experiments that use ERUMs for up- sampling are indicated with a postfix - e. We calculate our trade- off metrics based on our baseline model with ERUM as its up- sampling layer instead bl- e. We modify all 3x3 convolution layers as such: \(rb\) : Changes are made in residual blocks/modules. \(rg\) : Changes are made in residual groups, therefore including those in \(rb\) . (Experiments in section 4.1 are done in this setting.) \(rg + u\) : Changes are made in both \(rg\) and the up- sampling layers (ERUMs). ![](images/4_0.jpg) <center>Figure 1: Our proposed ERUM for image upscaling. </center> Results: Our results in Table 2 reinforce our findings in section 4.1 that the adoption of bottleneck reduction alone leads to the best trade- offs, followed by the use of group convolutions. Therefore, we recommend taking gradual steps to compress the model. For instance, we suggest gradually changing convolutions to use bottleneck reduction, avoiding the up- sampling, first, and last convolutions until a budget is reached. If further compression is needed, we suggest changes to the up- sampling layer or the use of group convolutions. ### 4.3 COMPARISONS WITH PREVIOUS SR MODELS We take our derived best models based on different budgets from our first two experiments (See section 4.1 & 4.2) and compare them with the existing literature, which is shown in Table 3. For fair comparisons, we omit models that are way bigger by several magnitudes as their performances are much better. Likewise, we exclude models that are way smaller as their performance are much <--- Page Split ---> Table 2: Quantitative results of applying techniques on different parts of the model. We took the best three derived models given three different budgets and compared them with previous models in Table 3 bold/italics indicates best/second-best trade-off. 
### 4.3 COMPARISONS WITH PREVIOUS SR MODELS

We take our derived best models based on different budgets from our first two experiments (see sections 4.1 & 4.2) and compare them with the existing literature, as shown in Table 3. For fair comparisons, we omit models that are larger by several orders of magnitude, as their performance is much better. Likewise, we exclude models that are far smaller, as their performance is much worse. Regardless, our techniques can be applied to any model for further trade-offs between performance and efficiency.

<--- Page Split --->

Table 2: Quantitative results of applying techniques on different parts of the model. We took the best three derived models given three different budgets and compared them with previous models in Table 3; bold/italics indicates the best/second-best trade-off.

<table>
<tr><td>Scale</td><td>Model [Changes]</td><td>Params (K)</td><td>Mult-Adds (G)</td><td>Set5 PSNR/SSIM</td><td>Set14 PSNR/SSIM</td><td>B100 PSNR/SSIM</td><td>Urban100 PSNR/SSIM</td><td>Mult-Add Saved (G)</td><td>Params Saved (K)</td></tr>
<tr><td rowspan="7">2×</td><td>bl-e</td><td>1006</td><td>265.3</td><td>37.92/0.9602</td><td>33.43/0.9162</td><td>32.09/0.8988</td><td>31.83/0.9256</td><td></td><td></td></tr>
<tr><td>blm-e(r=2)[rb]</td><td>535</td><td>156.8</td><td>37.75/0.9596</td><td>33.30/0.9153</td><td>32.00/0.8973</td><td>31.48/0.9218</td><td>1.4662</td><td>6.3649</td></tr>
<tr><td>blm1-e[rb]</td><td>363</td><td>117</td><td>37.65/0.9592</td><td>33.19/0.9143</td><td>31.92/0.8964</td><td>31.13/0.9181</td><td>1.0746</td><td>4.6594</td></tr>
<tr><td>blm1-e[rg]</td><td>265</td><td>94.7</td><td>37.59/0.9590</td><td>33.12/0.9132</td><td>31.87/0.8959</td><td>30.91/0.9185</td><td>0.9584</td><td>4.1629</td></tr>
<tr><td>blm-e(r=2)[rg]</td><td>464</td><td>140.5</td><td>37.64/0.9592</td><td>33.20/0.9142</td><td>31.95/0.8967</td><td>31.10/0.9185</td><td>0.9455</td><td>4.1061</td></tr>
<tr><td>blm-e(r=2)[rg+u]</td><td>370</td><td>97.1</td><td>37.56/0.9589</td><td>33.10/0.9131</td><td>31.86/0.8954</td><td>30.99/0.9164</td><td>0.9557</td><td>3.6136</td></tr>
<tr><td>blm1-e[rg+u]</td><td>137</td><td>35.4</td><td>37.35/0.9580</td><td>32.94/0.9117</td><td>31.72/0.8939</td><td>30.45/0.9103</td><td>0.8181</td><td>3.0925</td></tr>
<tr><td rowspan="7">3×</td><td>bl-e</td><td>1080</td><td>156.3</td><td>34.35/0.9269</td><td>30.29/0.8415</td><td>29.06/0.8046</td><td>28.13/0.8521</td><td></td><td></td></tr>
<tr><td>blm-e(r=2)[rb]</td><td>609</td><td>108.1</td><td>34.19/0.9257</td><td>30.18/0.8394</td><td>29.00/0.8026</td><td>27.89/0.8469</td><td>0.8456</td><td>8.2632</td></tr>
<tr><td>blm1-e[rb]</td><td>437</td><td>90.5</td><td>34.09/0.9257</td><td>30.13/0.8379</td><td>28.95/0.8015</td><td>27.69/0.8419</td><td>0.6784</td><td>6.6289</td></tr>
<tr><td>blm-e(r=2)[rg+u]</td><td>397</td><td>57.6</td><td>33.97/0.9236</td><td>30.04/0.8362</td><td>28.89/0.7997</td><td>27.50/0.8376</td><td>0.6902</td><td>4.7762</td></tr>
<tr><td>blm-e(r=2)[rg]</td><td>538</td><td>100.9</td><td>34.13/0.9251</td><td>30.14/0.8380</td><td>28.97/0.8016</td><td>27.72/0.8421</td><td>0.6368</td><td>6.2299</td></tr>
<tr><td>blm1-e[rg]</td><td>339</td><td>80.6</td><td>34.08/0.9244</td><td>30.03/0.8356</td><td>28.92/0.8005</td><td>27.55/0.8386</td><td>0.6056</td><td>5.928</td></tr>
<tr><td>blm1-e[rg+u]</td><td>146</td><td>21.4</td><td>33.73/0.9222</td><td>29.83/0.8320</td><td>28.76/0.7970</td><td>27.10/0.8323</td><td>0.5644</td><td>3.9079</td></tr>
<tr><td rowspan="7">4×</td><td>bl-e</td><td>1154</td><td>135.5</td><td>32.08/0.8942</td><td>28.58/0.7815</td><td>27.56/0.7360</td><td>26.16/0.7872</td><td></td><td></td></tr>
<tr><td>blm-e(r=2)[rb]</td><td>683</td><td>108.5</td><td>32.10/0.8938</td><td>28.51/0.7795</td><td>27.51/0.7340</td><td>25.95/0.7808</td><td>0.8774</td><td>15.1935</td></tr>
<tr><td>blm1-e[rb]</td><td>511</td><td>98.4</td><td>31.98/0.8921</td><td>28.45/0.7778</td><td>27.47/0.7328</td><td>25.80/0.7754</td><td>0.5456</td><td>9.4559</td></tr>
<tr><td>blm-e(r=2)[rg+u]</td><td>424</td><td>50</td><td>31.84/0.8898</td><td>28.34/0.7750</td><td>27.38/0.7295</td><td>25.52/0.7673</td><td>0.6577</td><td>5.6154</td></tr>
<tr><td>blm1-e[rg]</td><td>413</td><td>92.8</td><td>32.02/0.8921</td><td>28.44/0.7765</td><td>27.44/0.7310</td><td>25.67/0.7715</td><td>0.5272</td><td>9.1481</td></tr>
<tr><td>blm-e(r=2)[rg]</td><td>612</td><td>104.3</td><td>32.02/0.8925</td><td>28.47/0.7779</td><td>27.48/0.7323</td><td>25.79/0.7755</td><td>0.5032</td><td>8.7419</td></tr>
<tr><td>blm1-e[rg+u]</td><td>156</td><td>18.7</td><td>31.47/0.8847</td><td>28.12/0.7697</td><td>27.26/0.7252</td><td>25.16/0.7538</td><td>0.4928</td><td>4.211</td></tr>
</table>

Although our main objective is not to beat previous models but to understand and recommend techniques that can be applied to any existing model, we manage to derive models that are better than or comparable to other models in the literature. For instance, in terms of size and evaluation metric, our best model (blm-e[rb]) outperforms all models with 1,500K parameters or fewer. Comparing compute and evaluation, our best model performs better than MemNet (Tai et al., 2017b) while using roughly 6x fewer operations. It is also comparable with the CARN model in the number of operations, trading slightly worse performance for a 2.5x size reduction. Overall, our best model is better than earlier models such as VDSR (Kim et al., 2015b) and later models such as SRMDNF (Zhang et al., 2017a) for the 3x and 4x scales. Our second- and third-best models also outperform earlier models in performance with large savings in the number of operations for the 3x and 4x scales. Our results show that these techniques, which were designed for image discriminative tasks, can be effective in SR. Visual comparisons for some of these models can be found in the appendix.

## 5 QUANTIZATION AND LOW-PRECISION UNDER SUPER RESOLUTION

In our next set of experiments, we examine the viability of quantization and the use of extreme low precision (ternary/binary) as mechanisms to reduce system resources for SR.

### 5.1 INTEGER QUANTIZATION

Motivation: With the success of low precision in neural networks on classification problems, we aim to provide an initial understanding of applying 8-bit integer quantization to our baseline model as described in section 4.1. Moving from 32 bits to 8 bits results in a 4x reduction in memory and allows support for low-power embedded devices.

Approach: We train the model in full precision, apply the quantization scheme in TensorFlow Lite for integer-only arithmetic (Jacob et al., 2017), and retrain for an additional 5 epochs with a learning rate of \(5e-5\).

Results: Our results (Table 4) show that applying quantization leads to a slight evaluation loss at the 2x scale and a slight improvement at the 4x scale. Our results are similar to those reported for classification by Jacob et al. (2017). Furthermore, the results show that deep neural networks are robust to the noise and perturbations caused by quantization. Therefore, we strongly recommend quantization, especially on hardware that can further utilise its benefits. A sketch of the underlying affine quantization scheme follows.
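As a reference point, the affine (asymmetric) 8-bit scheme of Jacob et al. (2017) maps a real tensor to integers via a scale and a zero point. The following is a minimal NumPy sketch of that arithmetic only; it does not reproduce the TensorFlow Lite toolchain or the retraining step described above.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Affine quantization: x ~= scale * (q - zero_point), q in [0, 2^b - 1]."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = min(x.min(), 0.0), max(x.max(), 0.0)  # range must cover 0
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))        # integer that represents real 0
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(3, 3, 64, 64).astype(np.float32)    # e.g., a 3x3 conv kernel
q, s, z = quantize_affine(w)
print(np.abs(w - dequantize_affine(q, s, z)).max())      # worst-case quantization error
```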
<--- Page Split --->

Table 3: We extend the table provided by Ahn et al. (2018) and compare our best three models (in order). For fair comparisons, we do not include models that are much bigger or much smaller than our derived models.

<table>
<tr><td>Scale</td><td>Model</td><td>Params (K)</td><td>Mult-Adds (G)</td><td>Set5</td><td>Set14</td><td>B100</td><td>Urban100</td></tr>
<tr><td rowspan="14">2×</td><td>VDSR (Kim et al., 2015b)</td><td>665</td><td>612.6</td><td>37.53/0.9587</td><td>33.03/0.9124</td><td>31.90/0.8960</td><td>30.76/0.9140</td></tr>
<tr><td>DRCN (Kim et al., 2015a)</td><td>1,774</td><td>9,788.7</td><td>37.63/0.9588</td><td>33.04/0.9118</td><td>31.85/0.8942</td><td>30.75/0.9133</td></tr>
<tr><td>LapSRN (Lai et al., 2017)</td><td>813</td><td>29.9</td><td>37.52/0.9590</td><td>33.08/0.9130</td><td>31.80/0.8950</td><td>30.41/0.9100</td></tr>
<tr><td>DRRN (Tai et al., 2017a)</td><td>297</td><td>6,796.9</td><td>37.74/0.9591</td><td>33.23/0.9136</td><td>32.05/0.8973</td><td>31.23/0.9188</td></tr>
<tr><td>BTSRN (Fan et al., 2017)</td><td>410</td><td>207.7</td><td>37.75/-</td><td>33.20/-</td><td>32.05/-</td><td>31.63/-</td></tr>
<tr><td>MemNet (Tai et al., 2017b)</td><td>677</td><td>623.9</td><td>37.78/0.9597</td><td>33.28/0.9142</td><td>32.08/0.8978</td><td>31.31/0.9195</td></tr>
<tr><td>SelNet (Choi &amp; Kim, 2017)</td><td>974</td><td>225.7</td><td>37.89/0.9598</td><td>33.61/0.9160</td><td>32.08/0.8984</td><td>31.33/0.9204</td></tr>
<tr><td>SRMDNF (Zhang et al., 2017a)</td><td>2,218</td><td>513.6</td><td>37.79/0.9601</td><td>33.32/0.9190</td><td>32.05/0.8985</td><td>31.33/0.9204</td></tr>
<tr><td>D-DBPN (Haris et al., 2018)</td><td>1,261</td><td>158.9</td><td>38.09/0.9600</td><td>33.85/0.9190</td><td>32.27/0.9000</td><td>32.55/0.9324</td></tr>
<tr><td>CARN (Ahn et al., 2018)</td><td>1,592</td><td>222.8</td><td>37.76/0.9590</td><td>33.52/0.9166</td><td>32.09/0.8978</td><td>31.92/0.9256</td></tr>
<tr><td>CARN-M (Ahn et al., 2018)</td><td>412</td><td>91.2</td><td>37.53/0.9583</td><td>33.26/0.9141</td><td>31.92/0.8960</td><td>31.23/0.9193</td></tr>
<tr><td>blm-e(r=2)[rb] (ours)</td><td>535</td><td>156.8</td><td>37.75/0.9596</td><td>33.30/0.9153</td><td>32.00/0.8973</td><td>31.48/0.9218</td></tr>
<tr><td>blm1-e[rb] (ours)</td><td>363</td><td>117</td><td>37.65/0.9592</td><td>33.19/0.9143</td><td>31.92/0.8964</td><td>31.13/0.9181</td></tr>
<tr><td>blm1-e[rg] (ours)</td><td>265</td><td>94.7</td><td>37.59/0.9590</td><td>33.12/0.9132</td><td>31.87/0.8959</td><td>30.91/0.9158</td></tr>
<tr><td rowspan="12">3×</td><td>VDSR (Kim et al., 2015b)</td><td>665</td><td>612.6</td><td>33.65/0.9213</td><td>29.77/0.8314</td><td>28.82/0.7976</td><td>27.14/0.8279</td></tr>
<tr><td>DRCN (Kim et al., 2015a)</td><td>1,774</td><td>9,788.7</td><td>33.82/0.9226</td><td>29.76/0.8311</td><td>28.80/0.7963</td><td>27.15/0.8276</td></tr>
<tr><td>DRRN (Tai et al., 2017a)</td><td>297</td><td>6,796.9</td><td>34.03/0.9244</td><td>29.96/0.8349</td><td>28.95/0.8004</td><td>27.53/0.8378</td></tr>
<tr><td>BTSRN (Fan et al., 2017)</td><td>410</td><td>176.2</td><td>34.03/-</td><td>29.90/-</td><td>28.97/-</td><td>27.75/-</td></tr>
<tr><td>MemNet (Tai et al., 2017b)</td><td>677</td><td>623.9</td><td>34.09/0.9248</td><td>30.00/0.8350</td><td>28.96/0.8001</td><td>27.56/0.8376</td></tr>
<tr><td>SelNet (Choi &amp; Kim, 2017)</td><td>1,159</td><td>120.0</td><td>34.27/0.9257</td><td>30.30/0.8399</td><td>28.97/0.8025</td><td>27.50/0.8376</td></tr>
<tr><td>SRMDNF (Zhang et al., 2017a)</td><td>2,956</td><td>305.5</td><td>34.12/0.9254</td><td>30.04/0.8382</td><td>28.97/0.8025</td><td>27.57/0.8398</td></tr>
<tr><td>CARN (Ahn et al., 2018)</td><td>1,592</td><td>118.8</td><td>34.29/0.9255</td><td>30.29/0.8407</td><td>29.06/0.8034</td><td>28.06/0.8493</td></tr>
<tr><td>CARN-M (Ahn et al., 2018)</td><td>412</td><td>46.1</td><td>33.99/0.9236</td><td>30.08/0.8367</td><td>28.91/0.8000</td><td>27.55/0.8385</td></tr>
<tr><td>blm-e(r=2)[rb] (ours)</td><td>609</td><td>108.1</td><td>34.19/0.9257</td><td>30.18/0.8394</td><td>29.00/0.8026</td><td>27.89/0.8469</td></tr>
<tr><td>blm1-e[rb] (ours)</td><td>437</td><td>90.5</td><td>34.09/0.9249</td><td>30.13/0.8379</td><td>28.95/0.8015</td><td>27.69/0.8419</td></tr>
<tr><td>blm-e(r=2)[rg+u] (ours)</td><td>397</td><td>57.6</td><td>33.97/0.9236</td><td>30.04/0.8362</td><td>28.89/0.7997</td><td>27.50/0.8376</td></tr>
<tr><td rowspan="15">4×</td><td>VDSR (Kim et al., 2015b)</td><td>665</td><td>612.6</td><td>31.35/0.8838</td><td>28.01/0.7674</td><td>27.29/0.7251</td><td>25.18/0.7524</td></tr>
<tr><td>DRCN (Kim et al., 2015a)</td><td>1,774</td><td>9,788.7</td><td>31.53/0.8854</td><td>28.02/0.7670</td><td>27.23/0.7233</td><td>25.14/0.7510</td></tr>
<tr><td>LapSRN (Lai et al., 2017)</td><td>813</td><td>149.4</td><td>31.54/0.8850</td><td>28.19/0.7720</td><td>27.32/0.7280</td><td>25.21/0.7560</td></tr>
<tr><td>DRRN (Tai et al., 2017a)</td><td>297</td><td>6,796.9</td><td>31.68/0.8888</td><td>28.21/0.7720</td><td>27.38/0.7284</td><td>25.44/0.7638</td></tr>
<tr><td>BTSRN (Fan et al., 2017)</td><td>410</td><td>165.2</td><td>31.85/-</td><td>28.20/-</td><td>27.47/-</td><td>25.74/-</td></tr>
<tr><td>MemNet (Tai et al., 2017b)</td><td>677</td><td>623.9</td><td>31.74/0.8893</td><td>28.26/0.7723</td><td>27.40/0.7281</td><td>25.50/0.7630</td></tr>
<tr><td>SelNet (Choi &amp; Kim, 2017)</td><td>1,417</td><td>83.1</td><td>32.00/0.8931</td><td>28.49/0.7783</td><td>27.44/0.7325</td><td>26.05/0.7819</td></tr>
<tr><td>SRDenseNet (Tong et al., 2017)</td><td>2,015</td><td>389.9</td><td>32.02/0.8934</td><td>28.50/0.7782</td><td>27.53/0.7337</td><td>26.05/0.7819</td></tr>
<tr><td>SRMDNF (Zhang et al., 2017a)</td><td>3,988</td><td>232.7</td><td>31.96/0.8925</td><td>28.35/0.7787</td><td>27.49/0.7337</td><td>25.68/0.7731</td></tr>
<tr><td>D-DBPN (Haris et al., 2018)</td><td>2,207</td><td>79.7</td><td>32.47/0.8980</td><td>28.82/0.7860</td><td>27.72/0.7400</td><td>26.38/0.7946</td></tr>
<tr><td>CARN (Ahn et al., 2018)</td><td>1,592</td><td>90.9</td><td>32.13/0.8937</td><td>28.60/0.7806</td><td>27.58/0.7349</td><td>26.07/0.7837</td></tr>
<tr><td>CARN-M (Ahn et al., 2018)</td><td>412</td><td>32.5</td><td>31.92/0.8903</td><td>28.42/0.7762</td><td>27.44/0.7304</td><td>25.62/0.7694</td></tr>
<tr><td>blm-e(r=2)[rb] (ours)</td><td>683</td><td>108.3</td><td>32.10/0.8938</td><td>28.51/0.7795</td><td>27.51/0.7340</td><td>25.95/0.7808</td></tr>
<tr><td>blm1-e[rb] (ours)</td><td>511</td><td>98.4</td><td>31.98/0.8921</td><td>28.45/0.7778</td><td>27.47/0.7328</td><td>25.80/0.7754</td></tr>
<tr><td>blm-e(r=2)[rg+u] (ours)</td><td>424</td><td>50</td><td>31.84/0.8898</td><td>28.34/0.7750</td><td>27.38/0.7295</td><td>25.52/0.7673</td></tr>
</table>
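For readers reproducing the Params and Mult-Adds columns, a small helper is sketched below. We assume, as in Ahn et al. (2018), that Mult-Adds are counted for a fixed high-resolution output (e.g., 1280x720); the helper and this convention are illustrative, not the exact accounting used for the tables.

```python
def conv2d_cost(c_in, c_out, k, out_h, out_w, groups=1):
    """Parameter count and multiply-adds of one conv layer (weights + biases)."""
    params = (k * k * c_in // groups) * c_out + c_out
    mult_adds = (k * k * c_in // groups) * c_out * out_h * out_w
    return params, mult_adds

# Example: a 3x3, 64->64 convolution producing a 1280x720 feature map
p, m = conv2d_cost(64, 64, 3, 720, 1280)
print(f"{p / 1e3:.1f}K params, {m / 1e9:.1f}G mult-adds")  # ~36.9K, ~34.0G
```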
Table 4: Quantitative results of applying 8-bit integer quantization in TF-Lite.

<table>
<tr><td>Scale</td><td>Model</td><td>Params (K)</td><td>Mult-Adds (G)</td><td>Set5 PSNR/SSIM</td><td>Set14 PSNR/SSIM</td><td>B100 PSNR/SSIM</td><td>Urban100 PSNR/SSIM</td></tr>
<tr><td rowspan="2">2×</td><td>bl</td><td>961</td><td>231.2</td><td>37.82/0.9599</td><td>33.36/0.9160</td><td>32.05/0.8981</td><td>31.67/0.9237</td></tr>
<tr><td>bl_q</td><td></td><td></td><td>37.68/0.9582</td><td>33.34/0.9146</td><td>32.01/0.8966</td><td>31.64/0.9226</td></tr>
<tr><td rowspan="2">3×</td><td>bl</td><td>1191</td><td>122.4</td><td>34.20/0.9257</td><td>30.15/0.8392</td><td>29.01/0.8028</td><td>27.91/0.8467</td></tr>
<tr><td>bl_q</td><td></td><td></td><td>34.17/0.9259</td><td>30.13/0.8393</td><td>29.00/0.8030</td><td>27.92/0.8474</td></tr>
<tr><td rowspan="2">4×</td><td>bl</td><td>1154</td><td>93</td><td>32.01/0.8927</td><td>28.41/0.7793</td><td>27.49/0.7337</td><td>25.87/0.7792</td></tr>
<tr><td>bl_q</td><td></td><td></td><td>31.99/0.8922</td><td>28.43/0.7794</td><td>27.51/0.7337</td><td>25.88/0.7790</td></tr>
</table>

### 5.2 TERNARY PRECISION

Motivation: The success of using binarized (Courbariaux & Bengio, 2016; Rastegari et al., 2016; Lin et al., 2017) and ternarized neural networks (Li & Liu, 2016; Tschannen et al., 2017) to approximate full-precision convolutions in image discriminative tasks motivates us to investigate the effectiveness of these techniques in SR.

Approach: We adopt the baseline SR architecture used in the prior experiments in section 4.1 but modify it structurally by replacing every convolution layer with the sum-product convolution layers proposed in StrassenNets (Tschannen et al., 2017).

<--- Page Split --->

These sum-product convolution layers represent a sum-product network (SPN) that is used to approximate a matrix multiplication. Specifically, each convolution layer is replaced with a convolution layer that outputs \(r\) feature maps, followed by an element-wise multiplication and a transpose convolution layer. As both convolution layers hold ternary weights, the number of multiply operations required is determined by the number of element-wise multiplications, which is controlled by \(r\). Besides outlining the trade-off of tuning \(r\), we aggressively use group convolutions.

Results: As in section 5.1, the results in Table 5 mirror those reported for image discriminative tasks. Specifically, the wider the hidden layer of the SPN (i.e., the larger \(r\)), the better the performance, at the cost of additional multiplications and additions. When \(r = 6c.out\), we achieve an evaluation score that is close to the uncompressed model for the 2x scale and suffer a slight drop for the 3x and 4x scales. Further attempts to increase \(r\) do not improve the evaluation metrics. As proposed by Tschannen et al. (2017), we use group convolutions to reduce the number of additions. We take a step further and experiment with a wide range of groups as well. We find that the reduced number of additions does not justify the evaluation drop; using a lower \(r\) is better than using groups. Additionally, since multipliers are more costly and take up more chip area than adders, we suggest lowering \(r\) instead of using grouped convolutions. A sketch of such a ternary sum-product layer is given below.
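The following is a minimal PyTorch sketch of a StrassenNets-style sum-product convolution as described above. The ternarization threshold (the 0.7-times-mean heuristic from Li & Liu, 2016), the straight-through gradient estimator, and the full-precision gain vector standing in for the precomputed ternary combination of the weights are our simplifications; StrassenNets' exact training procedure differs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ternarize(w):
    """Ternarize to {-1, 0, +1}; ternary forward pass, identity backward pass."""
    delta = 0.7 * w.abs().mean()  # TWN-style threshold (our assumption)
    w_t = torch.where(w.abs() > delta, torch.sign(w), torch.zeros_like(w))
    return w + (w_t - w).detach()

class SumProductConv(nn.Module):
    """Replacement for a k x k conv: ternary conv (c_in -> r), then r genuine
    multiplications per spatial position, then a ternary 1x1 transpose conv
    (r -> c_out). The hyperparameter r sets the multiply budget."""
    def __init__(self, c_in, c_out, r, k=3):
        super().__init__()
        self.k = k
        self.w_b = nn.Parameter(torch.randn(r, c_in, k, k) * 0.1)
        self.g = nn.Parameter(torch.ones(1, r, 1, 1))      # full-precision gains
        self.w_c = nn.Parameter(torch.randn(r, c_out, 1, 1) * 0.1)

    def forward(self, x):
        h = F.conv2d(x, ternarize(self.w_b), padding=self.k // 2)  # additions only
        h = self.g * h                                             # the r multiplies
        return F.conv_transpose2d(h, ternarize(self.w_c))          # additions only
```

With kernel size 1 and stride 1, the transpose convolution is equivalent to an ordinary 1x1 convolution; we keep the transpose form to mirror the description above.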
Table 5: Quantitative results of applying ternary-weighted SP convolutions. We omit the use of group convolutions as it leads to worse results. c.out refers to the number of output channels in each SP convolution layer.

<table>
<tr><td>Scale</td><td>Model</td><td>r</td><td>Reduction in Mult (%)</td><td>Reduction in Add (%)</td><td>Set5 PSNR/SSIM</td><td>Set14 PSNR/SSIM</td><td>B100 PSNR/SSIM</td><td>Urban100 PSNR/SSIM</td></tr>
<tr><td rowspan="5">2×</td><td rowspan="5">bl</td><td>-</td><td>-</td><td>-</td><td>37.86/0.9600</td><td>33.39/0.9159</td><td>32.06/0.8982</td><td>31.74/0.9248</td></tr>
<tr><td>c.out</td><td>99.82</td><td>-15.96</td><td>37.59/0.9588</td><td>33.14/0.9138</td><td>31.88/0.8959</td><td>31.02/0.9171</td></tr>
<tr><td>2c.out</td><td>99.64</td><td>-132.1</td><td>37.73/0.9595</td><td>33.30/0.9152</td><td>31.98/0.8973</td><td>31.37/0.9210</td></tr>
<tr><td>4c.out</td><td>99.28</td><td>-364.38</td><td>37.81/0.9598</td><td>33.31/0.9151</td><td>32.03/0.8978</td><td>31.59/0.9229</td></tr>
<tr><td>6c.out</td><td>98.92</td><td>-596.65</td><td>37.85/0.9600</td><td>33.41/0.9162</td><td>32.06/0.8981</td><td>31.68/0.9240</td></tr>
<tr><td rowspan="5">3×</td><td rowspan="5">ST-bl</td><td>-</td><td>-</td><td>-</td><td>34.24/0.9260</td><td>30.26/0.8405</td><td>29.03/0.8033</td><td>27.96/0.8479</td></tr>
<tr><td>c.out</td><td>99.81</td><td>-35.58</td><td>33.79/0.9215</td><td>29.92/0.8340</td><td>28.82/0.7983</td><td>27.25/0.8318</td></tr>
<tr><td>2c.out</td><td>99.64</td><td>-171.33</td><td>33.99/0.9236</td><td>30.07/0.8365</td><td>28.91/0.8005</td><td>27.54/0.8389</td></tr>
<tr><td>4c.out</td><td>99.28</td><td>-442.83</td><td>34.16/0.9250</td><td>30.11/0.8380</td><td>28.97/0.8021</td><td>27.73/0.8435</td></tr>
<tr><td>6c.out</td><td>98.92</td><td>-714.33</td><td>34.17/0.9253</td><td>30.15/0.8383</td><td>28.99/0.8023</td><td>27.83/0.8454</td></tr>
<tr><td rowspan="5">4×</td><td rowspan="5">ST-bl</td><td>-</td><td>-</td><td>-</td><td>32.06/0.8930</td><td>28.49/0.7787</td><td>27.50/0.7337</td><td>25.87/0.7788</td></tr>
<tr><td>c.out</td><td>99.81</td><td>-26.04</td><td>31.46/0.8829</td><td>28.15/0.7709</td><td>27.27/0.7265</td><td>25.24/0.7566</td></tr>
<tr><td>2c.out</td><td>99.64</td><td>-152.24</td><td>31.72/0.8871</td><td>28.31/0.7743</td><td>27.37/0.7296</td><td>25.49/0.7662</td></tr>
<tr><td>4c.out</td><td>99.28</td><td>-404.65</td><td>31.90/0.8901</td><td>28.40/0.7770</td><td>27.45/0.7319</td><td>25.67/0.7723</td></tr>
<tr><td>6c.out</td><td>98.93</td><td>-657.06</td><td>31.95/0.8907</td><td>28.43/0.7777</td><td>27.46/0.7324</td><td>25.77/0.7752</td></tr>
</table>

## 6 BEST PRACTICES FOR EFFICIENT SUPER-RESOLUTION

Through an extensive set of experiments, we show that some of the efficiency techniques that are successful in image discriminative tasks can be successfully applied to SR. Although these techniques perform comparably in the former tasks, we highlight their varying effectiveness in SR and derive a list of best practices for constructing or reducing any model designed to reduce image distortion:

- The sole use of low-rank tensor decomposition (bottleneck design) results in the best trade-offs between performance and efficiency. If further compression of memory and/or compute is needed, separable/grouped convolution is recommended.
- If efficiency on conventional hardware is the topmost priority, we recommend reducing the number of layers or adopting the use of both channel splitting and shuffling (Ma et al., 2018).
- The fewer resource-efficient architecture changes applied, the better the trade-off. Therefore, we recommend a mixture of convolution and resource-efficient units unless further compression is needed.
- Avoid architecture changes on the first and last convolution layers.
- We strongly recommend using any form of quantization if the hardware supports it.

<--- Page Split --->

## A VISUAL COMPARISON ON X4 SCALE BENCHMARKS

![](images/12_0.jpg)
<center>Figure 2: Visual comparisons with state-of-the-art models on Set5 and Set14. VDSR and LapSRN are comparable to our models with regards to model size and/or number of operations, while RCAN is x22.8-x30.5 larger and has x8.4-x9.3 more operations. </center>

<--- Page Split --->

![](images/13_0.jpg)
<center>Figure 3: Visual comparisons with state-of-the-art models on B100. </center>

<--- Page Split --->

![](images/14_0.jpg)
<center>Figure 4: Visual comparisons with state-of-the-art models on Urban100. </center>

<--- Page Split --->

![](images/15_0.jpg)
<center>Figure 5: More \(\times 4\) scale visual comparisons on Urban100. </center>

<--- Page Split --->
reject
Reject
4
ICLR_2019_paper_1241
iclr
2,019
# BOUNCE AND LEARN: MODELING SCENE DYNAMICS WITH REAL-WORLD BOUNCES

Senthil Purushwalkam\* & Abhinav Gupta
Robotics Institute, Carnegie Mellon University
{spurushw, abhinavg}@cs.cmu.edu

Danny Kaufman & Bryan Russell
Adobe Research
{kaufman,brussell}@adobe.com

## ABSTRACT

We introduce an approach to model surface properties governing bounces in everyday scenes. Our model learns end-to-end, starting from sensor inputs, to predict post-bounce trajectories and infer two underlying physical properties that govern bouncing: restitution and effective collision normals. Our model, Bounce and Learn, comprises two modules: a Physics Inference Module (PIM) and a Visual Inference Module (VIM). VIM learns to infer physical parameters for locations in a scene given a single still image, while PIM learns to model physical interactions for the prediction task given physical parameters and observed pre-collision 3D trajectories. To achieve our results, we introduce the Bounce Dataset comprising 5K RGB-D videos of bouncing trajectories of a foam ball used to probe surfaces of varying shapes and materials in everyday scenes, including homes and offices. Our proposed model learns from our collected dataset of real-world bounces and is bootstrapped with additional information from simple physics simulations. We show on our newly collected dataset that our model outperforms baselines, including trajectory fitting with Newtonian physics, in predicting post-bounce trajectories and inferring physical properties of a scene.

## 1 INTRODUCTION

Consider the scenario depicted in Figure 1. Here, a ball has been tossed into an everyday scene and is about to make contact with a sofa. What will happen next? In this paper, we seek a system that learns to predict the future after an object makes contact with, or bounces off, everyday surfaces, such as sofas, beds, and walls. The ability of a system to make such predictions will enable applications in augmented reality and robotics, such as compositing a dynamic virtual object into a video or allowing an agent to react to real-world bounces in everyday environments.

We begin by observing that humans exploit both visual recognition and direct physical interactions to estimate the physical properties of objects in the world around them. By learning from a large number of physical interactions in the real world, we develop an approximate visual mapping for physical properties (e.g., sofas are soft, tables are hard, etc.). However, two surfaces that look the same may produce vastly different outcomes when objects are dropped upon them (e.g., query for "happy ball and sad ball" on YouTube). Without observing these interactions, we would have no way of knowing that the surfaces are made of materials with differing physical properties.

Motivated by this observation, in this paper, we investigate the application of physical interactions to probe surfaces in real-world environments, infer physical properties, and leverage the interactions as supervision to learn an appearance-based estimator. Leveraging the regularity of spherical collisions, we adopt a simple ball as our probe. Our goal is to use captured probe collision trajectories to predict post-bounce trajectories and estimate surface-varying coefficients of restitution (COR) and effective collision normals over complex, everyday objects.

Rigid-body physics has often been employed to model collision events (Bhat et al., 2002; Brubaker et al., 2009; Kyriazis et al., 2011; Monszpart et al., 2016).
<--- Page Split --->

![](images/1_0.jpg)
<center>Figure 1: Goal. (a) We seek to predict what will happen after a ball bounces off an everyday surface. (b) We introduce a large Bounce Dataset of videos depicting real-world bounces in a variety of scenes. We show (hand-crafted) estimates of the coefficient of restitution in the top-left corners (higher values indicate hard surfaces). (c) Output of our Bounce and Learn model that predicts the trajectory of the object after the bounce. (d) Observed ground-truth post-bounce trajectory. </center>

However, real-world objects deform under collision and so violate rigid-body assumptions. Collision normals and COR model a complex process, which is challenging and expensive to observe and simulate, especially for softer materials (Belytschko et al., 2013; Marsden & Hughes, 2012). While there are many fast soft-body simulators (e.g., FlexSim (Nordgren, 2003)), none are physically accurate (Chen et al., 2017). Furthermore, these simulators do not allow estimation of the parameters of observed real-world scenes. Moreover, current computer vision techniques do not accurately capture high-speed colliding trajectories in the wild. Inaccurate trajectory capture means we are far from certain of our trajectories: we have point clouds that are not a good fit to any exact trajectory curve but instead could be explained by any number of nearby trajectories. This means that inferring the underlying collision normals and CORs cannot be done by simply inverting a deterministic Newtonian physics model. Indeed, as we show in Section 4, fitting a single trajectory curve and learning with Newtonian physics leads to poor results. These results are explained in part by noting that collision dynamics, and the codes that simulate them, are particularly sensitive to variations and uncertainties.

To address these challenges, we seek to directly learn collision-response models of deformable surfaces from observed real-world interactions, bootstrapped by only a set of simple, inexpensive rigid-body simulation examples. We propose Bounce and Learn, a model that learns end-to-end to predict post-bounce trajectories and infer effective physical parameters starting from sensor inputs. Our model comprises a Physics Inference Module (PIM) and a Visual Inference Module (VIM). VIM learns to infer physical parameters for locations in a scene given a single still image, while PIM learns to model the physical interaction for the prediction task given physical parameters and an observed pre-collision 3D trajectory. We show that our model can account for non-rigid surfaces that deform during collision and, compared to inverting a parametric physics model using hand-designed features, better handles uncertainty in the captured trajectories due to end-to-end learning. Moreover, our model can be trained in batch mode over a training set or can learn to adapt in an online fashion by incorporating multiple observed bounces in a given scene. Our effort is a step towards a learnable, efficient real-world physics simulator trained by observing real-world interactions.

To train our model, we introduce a large-scale Bounce Dataset of 5K bouncing trajectories of a probe against different surfaces of varying shape and material in everyday scenes, including homes and offices.
For our study, we use a spherical ball made of foam as our probe, as it provides a rich range of interactions with everyday objects and its symmetry allows us to better track and model the physical properties of the complex objects it collides with. As collision events are transient (we observe contact occurring over \(1/50^{\mathrm{th}}\) of a second), we collected our dataset using a high-framerate stereo camera. Our dataset is the largest of its kind and goes well beyond simpler setups involving a handful of interacting objects (Bhat et al., 2002; Brubaker et al., 2009; Kyriazis et al., 2011; Monszpart et al., 2016). Note that prior datasets involving human interaction in sports (Bettadapura et al., 2016) require high-level reasoning without much diversity in collision surfaces.

Contributions. Our work demonstrates that an agent can learn to predict physical properties of surfaces in daily scenes and is the first to explore this across a large variety of different real-world surfaces, such as sofas, beds, and tables. Our contributions are twofold: (1) we propose a model that is trained end-to-end both for predicting post-bounce trajectories given an observed, noisy 3D point cloud of a pre-bounce trajectory in a scene and for inferring physical properties (COR and collision normal) given a single still image; and (2) we build a large-scale dataset of real-world bounces in a variety of everyday scenes. We evaluate our model on our collected dataset and show that it outperforms baselines, including trajectory fitting with Newtonian physics, in predicting post-bounce trajectories and inferring physical properties of a scene.

<--- Page Split --->

## 2 RELATED WORK

Our goal is related to work that captures and models physical interactions of objects in scenes. While prior work addresses various aspects of our overall goal, none covers all the aspects we seek to capture.

Simulation-only approaches. There have been a number of simulation-only approaches to learning or modeling object interactions and ("intuitive") physics. Examples include learning predictive models for a set of synthetic objects, like billiards (Fragkiadaki et al., 2016), general N-body interaction problems (Battaglia et al., 2016; Chang et al., 2017; Ehrhardt et al., 2017a;b; Watters et al., 2017), and learning bounce maps (Wang et al., 2017). However, most of these approaches operate over simple scenes consisting of a single object or parametric objects. Graphics-based, richer 3D environments like AI2-THOR (Zhu et al., 2017a;b) have mainly been explored for navigation and planning tasks.

Visual prediction in toy worlds. There are approaches that predict physical interactions in real-world imagery by incorporating a simulator during inference (Wu et al., 2015) or by learning from videos depicting real scenes (Wu et al., 2016) or simulated sequences (Lerer et al., 2016). However, these approaches model simple objects, such as blocks, balls, and ramps, and again make simplifying assumptions regarding COR, mass, and friction. In our approach, by contrast, we exploit physical interactions to estimate the physical properties of everyday objects in real-world scenes.

Visual prediction in real-world scenes. There are approaches that seek to estimate geometric and physical properties in everyday scenes using visual appearance.
Examples include estimating the geometric layout of a scene (e.g., depth and surface normals from RGB-D data) (Bansal et al., 2016; Eigen & Fergus, 2015; Wang et al., 2015); material, texture, and reflectances (Bell et al., 2013); and qualitative densities of objects (Gupta et al., 2010). Instead of estimating physical properties, some approaches make visual predictions by leveraging hand-aligned Newtonian scenarios in images/videos (Mottaghi et al., 2016a), using simulated physical models fitted to RGB-D data (Mottaghi et al., 2016b), or learning to synthesize future video frames (Chao et al., 2017; Vondrick & Torralba, 2017; Walker et al., 2016; Xue et al., 2016). A recent approach predicts the sounds of everyday objects from simulations of 3D object models (Zhang et al., 2017).

Real-world interaction and capture. Our work is inspired by approaches that model physical properties in the real world via interaction. Examples include parametric models of a human to model forces or affordances (Brubaker & Fleet, 2008; Brubaker et al., 2009; 2010; Zhu et al., 2015; 2016), grasping objects (Mann et al., 1997), multi-view video sequences of a ball on a ramp (Kyriazis et al., 2011), video sequences of known objects in free flight (Bhat et al., 2002), and video sequences depicting collision events between pairs of known real-world objects (Monszpart et al., 2016). We seek to generalize beyond pre-specified objects to unknown, everyday objects with complex geometries and physical properties in real-world scenes. More closely related to our work are approaches that repeatedly interact with real-world environments, such as hitting objects to learn audio-visual representations (Owens et al., 2016), crashing a drone to learn navigation (Gandhi et al., 2017), and repeated pokes and grasps of a robotic arm to learn visual representations for object manipulation (Agrawal et al., 2016; Levine et al., 2016; Pinto & Gupta, 2016; Pinto et al., 2016). Our goal is to scale learning and reasoning of dynamics starting with interactions via object bounces in everyday scenes.

## 3 BOUNCE AND LEARN MODEL AND BOUNCE DATASET

This section introduces our Bounce and Learn model and Bounce Dataset. Please see Appendix A for more details on the underlying physics governing bounces. Our overall model is shown in Figure 2 (left) and consists of a Physics Inference Module (PIM) and a Visual Inference Module (VIM). Having separate physics and visual modules allows for pre-training PIM using simulation data and joint training using real-world data. We describe each module in the following subsections.

<--- Page Split --->

![](images/3_0.jpg)
<center>Figure 2: System overview. Our model (left) consists of a Physics Inference Module (top-right) and a Visual Inference Module (bottom-right). See text for more details. </center>

### 3.1 PHYSICS INFERENCE MODULE (PIM)

PIM is shown in Figure 2 (top-right). Given a ball's incoming 3D trajectory and the physical parameters of the bounce surface, the goal of PIM is to predict the outgoing 3D trajectory of the ball after bouncing off the surface. We assume the ball's trajectory is a sequence of point clouds given by the stereo camera. Let \(\mathcal{T}_{i}\) and \(\mathcal{T}_{o}\) be the pre- and post-bounce point cloud trajectories, respectively, and \(\rho\) be the physical parameters of the probed collision surface: the effective collision normal and coefficient of restitution (COR).
For a non-rigid collision, the input \(\rho\) represents the values of effective physical parameters that capture the aggregated effect of the impact process. We seek to learn the mapping \(\mathcal{T}_{o} = \mathcal{F}(\mathcal{T}_{i},\rho)\). One challenge is how to represent the trajectory. While we could try to fit a static, reconstructed 3D model of the ball to the trajectory and track the centers, we found that this required manual parameter tuning. Moreover, such an approach is not easily extendable to other objects, particularly if the object undergoes deformation. We instead seek to represent the trajectory directly from sensor inputs by jointly learning an embedding in addition to predicting the post-bounce trajectory. Let \(t_{i}\) and \(t_{o}\) be the encoded versions of the trajectories with embedding functions \(\mathcal{E}_{i}\) and \(\mathcal{E}_{o}\), e.g., \(t_{i} = \mathcal{E}_{i}(\mathcal{T}_{i})\). Since PIM uses embedded trajectories, we can write the mapping \(\mathcal{F}\) as the composition

\[\mathcal{T}_{o} = \mathcal{F}(\mathcal{T}_{i},\rho) = \mathcal{E}_{o}^{-1}\left(f(\mathcal{E}_{i}(\mathcal{T}_{i}),\rho)\right), \quad (1)\]

where the core physics engine \(f\) maps encoded pre-bounce trajectories \(t_{i}\) and \(\rho\) to predicted encoded post-bounce trajectories \(t_{p} = f\left(t_{i},\rho\right)\), and \(\mathcal{E}_{o}^{-1}\) decodes \(t_{p}\) to predict the final \(\mathcal{T}_{o}\).

Core physics engine \(f\). We seek an architecture that is flexible and has the ability to model complex interactions, such as deformation. We use two FC layers to encode the physical parameters \(\rho\) as a vector (experimentally observed to converge faster than using \(\rho\) directly), which is concatenated with the input encoded trajectory \(t_{i}\) and followed by two FC layers to estimate the encoded predicted trajectory. See Appendix D for details. We observe in our experiments that the core physics engine \(f\), when trained on real-world data, can model interactions more complex than rigid-body collisions.

Trajectory encoder \(\mathcal{E}\). We encode trajectories via an architecture inspired by PointNet (Qi et al., 2016). Our encoder takes as input a \(T\times N\times 3\) array containing a lexicographically sorted list of 3D points, where \(T\) is the number of time steps in the trajectory and \(N\) is the number of 3D points at each time step. A 128-dimensional vector is generated for each time step; these are concatenated and processed by two fully connected (FC) layers, followed by \(L_{2}\) normalization, resulting in a 64-dimensional vector. See Appendix C for more details.

Trajectory decoder \(\mathcal{E}^{-1}\). While an encoded trajectory could be decoded by a deconvolution network predicting an output point-cloud trajectory, learning a point-cloud decoder in our setting is a non-trivial task (Fan et al., 2017); we leave this to future work. Instead, we use a non-parametric decoder. Specifically, we build a database of 10K simulated post-collision trajectories \(\mathcal{T}_{o}\) and their encodings \(t_{o} = \mathcal{E}_{o}(\mathcal{T}_{o})\). We use the predicted encoded trajectory \(t_{p}\) as a query and find the nearest \(t_{o}\) to estimate \(\mathcal{T}_{o}\). A sketch of the module's data flow is given below.

<--- Page Split --->
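The following is a minimal PyTorch sketch of the PIM data flow: a PointNet-style per-time-step encoder and the core physics engine \(f\). The 64- and 128-dimensional sizes follow the text above; all other details (activation choices, the width of the parameter embedding, etc.) are our assumptions, not the authors' exact architecture (see Appendices C and D of the paper).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryEncoder(nn.Module):
    """PointNet-style encoder: (B, T, N, 3) point clouds -> 64-d unit vectors."""
    def __init__(self, T=10):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
        self.head = nn.Sequential(nn.Linear(T * 128, 256), nn.ReLU(), nn.Linear(256, 64))

    def forward(self, pts):                         # pts: (B, T, N, 3)
        f_t = self.point_mlp(pts).max(dim=2).values  # max-pool over points: (B, T, 128)
        t = self.head(f_t.flatten(1))                # concatenate time steps: (B, 64)
        return F.normalize(t, dim=1)                 # L2-normalized embedding

class CorePhysicsEngine(nn.Module):
    """f(t_i, rho): embed rho (COR + unit normal, 4-d) with two FC layers,
    concatenate with t_i, and predict the encoded post-bounce trajectory t_p."""
    def __init__(self):
        super().__init__()
        self.rho_mlp = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 32))
        self.pred = nn.Sequential(nn.Linear(64 + 32, 128), nn.ReLU(), nn.Linear(128, 64))

    def forward(self, t_i, rho):
        t_p = self.pred(torch.cat([t_i, self.rho_mlp(rho)], dim=1))
        return F.normalize(t_p, dim=1)
```

At inference, \(t_p\) would be decoded non-parametrically via a nearest-neighbor lookup (cosine distance) over the database of 10K simulated post-collision encodings.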
Pre-training PIM. For pre-training, we need triplets \((\mathcal{T}_{i}, \mathcal{T}_{o}, \rho)\), which are easily and abundantly available from simulators (cf. Section 3.3) that generate data under rigid-body assumptions using physical parameters \(\rho\). For the training loss, we minimize the distance between the encodings of the post-collision trajectory \(t_{o}\) and the predicted post-collision trajectory \(t_{p}\). We also use negative encoded trajectories \(\{n_{j}\}\) to make the loss contrastive and prevent collapse of the encoding. We found that additional supervision for the trajectory encoders improves the performance of the model. We achieve this with a reconstruction network \(\mathcal{P}(t_{i}, t_{o})\), which explicitly ensures that the physical parameter vector \(\rho\) can be recovered given the ground-truth encoded pre- and post-bounce trajectories \(t_{i}\) and \(t_{o}\). We use a two-layer neural network for \(\mathcal{P}\); note that this part of the model is not shown in Figure 2 since it is used only in training, not inference. We optimize triplet and squared-\(L_{2}\) losses for each triplet \((t_{p}, t_{o}, \{n_{j}\})\) corresponding to \(t_{i}\),

\[\mathcal{L}_{\mathrm{PIM}} = \max \left(d(t_{p}, t_{o}) - d(t_{p}, n_{j}) + m, 0\right) + \| \rho - \mathcal{P}(t_{i}, t_{o})\|_{2}^{2}, \quad (2)\]

where \(d\) is the cosine distance and \(m\) is a scalar margin (a sketch of this loss is given at the end of this subsection).

Inference. PIM can potentially be used for two tasks: (a) predicting post-bounce trajectories given physical parameters \(\rho\) and a pre-bounce trajectory \(\mathcal{T}_{i}\); and (b) estimating physical parameters \(\rho\) given a pre-bounce trajectory \(\mathcal{T}_{i}\) and a post-bounce trajectory \(\mathcal{T}_{o}\). The first form of prediction, which is also used in our experiments, is straightforward: we estimate the encoded predicted trajectory \(t_{p}\) and then use the non-parametric decoding described above. For the second task, a grid search or an optimization-based strategy could be used to search the range of possible physical parameters \(\rho\) such that the encoded predicted trajectory \(t_{p}\) is closest to the encoding of the post-collision trajectory \(t_{o}\).
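Returning to the pre-training objective, a minimal sketch of Equation (2) follows, assuming the modules sketched above. The margin value, the use of one sampled negative per anchor, and the hypothetical two-layer `ParamReconstructor` for \(\mathcal{P}\) are our illustrative choices.

```python
import torch
import torch.nn as nn

def cosine_dist(a, b):
    return 1.0 - (a * b).sum(dim=1)  # embeddings are already L2-normalized

def pim_loss(t_p, t_o, t_neg, rho, P, t_i, margin=0.2):
    """Triplet loss on encoded trajectories + squared-L2 parameter reconstruction."""
    triplet = torch.clamp(cosine_dist(t_p, t_o) - cosine_dist(t_p, t_neg) + margin,
                          min=0.0)
    recon = (rho - P(t_i, t_o)).pow(2).sum(dim=1)
    return (triplet + recon).mean()

class ParamReconstructor(nn.Module):
    """P(t_i, t_o): recovers rho from the ground-truth trajectory encodings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, t_i, t_o):
        return self.net(torch.cat([t_i, t_o], dim=1))
```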
### 3.2 VISUAL INFERENCE MODULE (VIM)

Predicting the outcome of a bounce in a real-world scene requires knowledge of the physical parameters \(\rho\) at the location of the bounce. While PIM allows us to model physical interactions, it does not provide a means to incorporate knowledge of the visual scene. We would like to integrate a module that can reason over visual data. To this end, we propose a Visual Inference Module (VIM) designed to infer the physical parameters of a scene from visual input. We demonstrate experimentally that VIM can generalize to novel scenes, both for inferring physical properties and for predicting post-bounce trajectories when used in conjunction with a pretrained PIM. Moreover, in scenarios where multiple interactions with a scene are possible, we show that VIM and PIM can be used jointly to update predictions about the scene by observing the multiple bounce events. This scenario is important since visual cues provide only limited evidence about the underlying physical properties of surfaces; e.g., two sofas that look the same visually could have different physical properties.

The proposed VIM, shown in Figure 2 (bottom-right), is a convolutional neural network (CNN) that takes as input the image of a scene and outputs the physical parameters for each location in the scene. We represent this by the function \(\mathcal{V}\), which is an AlexNet architecture (Krizhevsky et al., 2012) up to the \(5^{\mathrm{th}}\) convolution layer, followed by \(3 \times 3\) and \(1 \times 1\) convolution layers. Each location in the output map \(\mathcal{V}(\mathcal{I})\) for an image \(\mathcal{I}\) contains a four-dimensional vector corresponding to the coefficient of restitution and the collision normal (normalized to unit length).

Training. Training VIM alone is not directly possible, since ground truth for the output physical properties cannot be easily collected. These physical properties are also closely tied to the assumed physics model. Therefore, we use PIM to train VIM. For each bounce trajectory, given the impact location \((x, y)\) in the image, the physical properties can be estimated using VIM by indexing \(\mathcal{V}(\mathcal{I})\) to extract the corresponding output feature. We refer to this as \(\rho_{x, y}\). PIM can use the estimated \(\rho_{x, y}\) along with the encoding of the pre-collision trajectory \(t_{i}\) to predict the encoding of the post-collision trajectory. Our loss is the sum of the cosine distance between the predicted and ground-truth post-collision trajectory encodings \(t_{o}\) and the squared-\(L_{2}\) distance of \(\rho_{x, y}\) from the parameters estimated with \(\mathcal{P}\) (described in Section 3.1), which helps constrain the outputs to a plausible range. We also add a regularization term to ensure spatial smoothness:

\[\mathcal{L}_{\mathrm{Joint}} = d(t_{o}, f(t_{i}, \rho_{x, y})) + \| \rho_{x, y} - \mathcal{P}(t_{i}, t_{o})\|_{2}^{2} + \sum_{x, y} \sum_{i \in \{0, 1\}} \sum_{j \in \{0, 1\}} \| \rho_{x, y} - \rho_{x + i, y + j}\|_{2}^{2}, \quad (3)\]

where \(f\) is the core physics engine and \(t_{i}, t_{o}\) are the encoded pre- and post-bounce trajectories, respectively (described in Section 3.1). The objective can be optimized by SGD to update the parameters of VIM and PIM (the parameters of \(\mathcal{P}\) are not updated). Note that the parameters of PIM could also be held fixed during this optimization, but we observe that training jointly performs significantly better, demonstrating improvement over the simple rigid-body physics-based pretraining. We present this ablative study in Appendix F.

<--- Page Split --->
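A minimal sketch of one training step for Equation (3) follows, reusing the modules sketched earlier. The nearest-neighbor approximation of the smoothness term, the integer indexing of \(\mathcal{V}(\mathcal{I})\), and the unit weighting of the three terms are our assumptions.

```python
import torch

def vim_joint_loss(V_out, x, y, t_i, t_o, f, P):
    """V_out: (B, 4, H, W) map of [COR, nx, ny, nz]; (x, y): per-sample impact pixels."""
    b = torch.arange(V_out.size(0))
    rho_xy = V_out[b, :, y, x]                          # index the parameter map: (B, 4)
    pred = 1.0 - (f(t_i, rho_xy) * t_o).sum(dim=1)       # cosine distance d(t_o, f(t_i, rho))
    # P is a fixed target here: its parameters are not updated during this step
    anchor = (rho_xy - P(t_i, t_o).detach()).pow(2).sum(dim=1)
    # first-order neighbor differences, approximating the smoothness sum in Eq. (3)
    smooth = (V_out[..., 1:, :] - V_out[..., :-1, :]).pow(2).mean() \
           + (V_out[..., :, 1:] - V_out[..., :, :-1]).pow(2).mean()
    return pred.mean() + anchor.mean() + smooth
```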
Online learning and inference (results in Appendix J). While inference of physical properties requires physical interactions, visual cues can also be leveraged to generalize these inferences across a scene. For example, a bounce of a ball on a wall can inform our inference of the physical properties of the rest of the wall. Therefore, we explore an online learning framework in which our estimates of the physical parameters in a scene are updated online upon observing bounces. For every bounce trajectory \((\mathcal{T}_{i}, \mathcal{T}_{o})\) observed at scene location \((x, y)\), we use VIM to estimate the physical parameters \(\rho_{x, y}\). VIM is then updated until convergence using Equation (3). For each novel scene, we can train incrementally by interacting with the scene and updating the previously learned model. Such a setting holds significance in robotics, where an agent can actively interact with an environment to make inferences about physical properties. We observe in our results that we achieve better estimates of the physical properties with an increasing number of interactions.

### 3.3 BOUNCE DATASET

To explore whether active physical interactions can be used to infer the physical properties of objects, we collect a large-scale dataset of a probe ball bouncing off of complex, everyday surfaces. In addition, we augment our collected real-world Bounce Dataset with a dataset of simple simulated bounces. Please see the supplemental material for more details on our collected dataset.

Real-world Bounce Dataset. The dataset consists of 5,172 stereo videos of bounces off surfaces in office and home environments. Each individual bounce video depicts the probe following a pre-collision trajectory, its impact with a surface, and its post-collision trajectory. On average, each video contains 172 frames containing the ball. As shown in Figures 1 and 5, the environments contain diverse surfaces with varying material properties. Each sample in the dataset consists of the RGB frames, depth maps, point clouds for each frame, and estimated surface normal maps. Since we are interested in the trajectory of the ball, we first perform background subtraction (Stauffer & Grimson, 1999) on the frames to localize the ball, followed by RANSAC fitting (Fischler & Bolles, 1981) of a sphere to the point cloud corresponding to the foreground mask. This helps reject outlier foreground points and collect a point cloud approximately corresponding to the ball in each frame. Note that in each frame the point cloud depicts only one viewpoint of the ball.

Simulation data. We bootstrap learning by augmenting our captured real-world data with simulation data. We simulate a set of sphere-to-plane collisions with the PyBullet physics engine (Coumans & Bai, 2016-2017). To match our captured frame rate, we set the simulation time step to 0.01 seconds. We initialize sphere locations and linear velocities randomly and set angular velocities and friction coefficients to zero. Collision surfaces are oriented randomly and COR values are sampled uniformly in the feasible range [0, 1]. Each simulation returns the pre- and post-bounce trajectories of the sphere for the sampled orientation and COR. To make the synthetic data consistent with our captured dataset, we create point clouds per simulation by picking a viewpoint and sampling only the visible points on the sphere at each time step. A minimal simulation sketch is given below.
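A minimal PyBullet sketch of this bounce generation follows. The ball radius, mass, gravity setting, and velocity ranges are illustrative, and for brevity we drop the ball onto a flat plane rather than a randomly oriented surface. Note that Bullet combines the restitution of both bodies, so the plane's restitution is set to 1 to make the ball's value the effective COR.

```python
import numpy as np
import pybullet as p

p.connect(p.DIRECT)                              # headless physics server
p.setGravity(0, 0, -9.8)
p.setTimeStep(0.01)                              # matches the captured frame rate
p.setPhysicsEngineParameter(restitutionVelocityThreshold=0.0)

plane = p.createMultiBody(0, p.createCollisionShape(p.GEOM_PLANE))
ball = p.createMultiBody(0.05, p.createCollisionShape(p.GEOM_SPHERE, radius=0.1),
                         basePosition=[0, 0, 1.0])

cor = np.random.uniform(0, 1)                    # sample COR in the feasible range
p.changeDynamics(plane, -1, restitution=1.0, lateralFriction=0.0)
p.changeDynamics(ball, -1, restitution=cor, lateralFriction=0.0)
p.resetBaseVelocity(ball, linearVelocity=np.random.uniform(-2, 2, 3).tolist(),
                    angularVelocity=[0, 0, 0])   # zero spin, as in the text

traj = []
for _ in range(100):                             # one second of simulation
    p.stepSimulation()
    traj.append(p.getBasePositionAndOrientation(ball)[0])
```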
## 4 EVALUATION

### 4.1 VISUAL FORWARD PREDICTION

For the forward-prediction task, we split our dataset into training, validation, and test sets containing 4503, 196, and 473 trajectories, respectively. There are no common scenes across these sets. We found that including a 3:1 mix of synthetic-to-real data in the mini-batches achieves the best balance between prediction accuracy and interpretable physical parameter inference. Qualitative prediction results are shown in Figure 3. Notice that we effectively predict the post-bounce trajectory when the ball bounces off a variety of different surfaces. To quantitatively evaluate the model's predictions, we train on the training and validation sets and test on the test set. We report the \(L_{2}\) distance in world coordinates between the predicted ball's center post-bounce and the observed ball's center extracted from the depth data at time step 0.1 seconds post-bounce (Table 1, column "Dist").

<--- Page Split --->

Table 1: Forward prediction, collision normal estimation, and COR estimation (test set). We evaluate our model and compare to baselines on the tasks of forward prediction, collision normal estimation, and COR estimation. We report the median distance in centimeters to observed post-bounce trajectories for each experimental setting.

<table>
<tr><td>Models</td><td>Dist</td><td>Dist Est Normal</td><td>Dist Est COR</td><td>Dist Est Normal + COR</td><td>% Normals within 30° of Est. Normal</td><td>COR Median Absolute Error</td></tr>
<tr><td>1. Parabola encoding</td><td>26.3 ± 0.5</td><td>34.1 ± 0.7</td><td>26.4 ± 0.9</td><td>33.5 ± 1.6</td><td>17.52 ± 2.22</td><td>0.179 ± 0.006</td></tr>
<tr><td>2. Center encoding</td><td>23.1 ± 0.9</td><td>21.2 ± 0.3</td><td>23.2 ± 0.7</td><td>21.2 ± 0.2</td><td>23.41 ± 2.02</td><td>0.178 ± 0.013</td></tr>
<tr><td>3. Pretrain. VIM + IN</td><td>40.1 ± 6.0</td><td>34.1 ± 5.4</td><td>41.0 ± 6.3</td><td>34.4 ± 5.5</td><td>-</td><td>-</td></tr>
<tr><td>4. Ours</td><td>21.3 ± 0.9</td><td>21.2 ± 0.8</td><td>21.4 ± 0.7</td><td>20.6 ± 0.6</td><td>24.08 ± 3.82</td><td>0.168 ± 0.018</td></tr>
<tr><td>5. Ours + Est. normals</td><td>22.7 ± 1.0</td><td>18.7 ± 0.9</td><td>22.7 ± 0.7</td><td>18.4 ± 0.9</td><td>50.14 ± 1.26</td><td>0.159 ± 0.016</td></tr>
</table>

![](images/6_0.jpg)
<center>Figure 3: Predicted post-bounce trajectories in novel scenes. (left) Input pre-bounce trajectories. (center) Observed post-bounce trajectories. (right) Our predicted post-bounce trajectories. Additional results in the Appendix. </center>

Baselines. The point cloud trajectory encoders in PIM are a generic solution to encoding raw trajectory data; such a design allows applying the same model to various probe objects. An alternative model that provides less flexibility replaces the PointNet encoder with a "center encoding" model, which involves extracting the center-of-mass 3D points obtained by background subtraction and taking the mean point of the sphere point cloud at each frame, for 10 frames before and after the collision. We encode the 30-D vector of \((x,y,z)\) coordinates using a two-layer neural network and train PIM with this encoding instead of the PointNet model. As a second baseline (parabola encoding), we fit a parabola to the set of centers and use that as the representation of the trajectory; we use the parameters of a least-squares-fitted parabola as input to a two-layer neural network that outputs the encoded representation of the trajectory (a sketch of this fit follows after this subsection). We also experimented with Interaction Networks (IN) (Battaglia et al., 2016) as an alternative to the PIM. We use the output physical parameters of the VIM from our best model to predict post-bounce trajectories using IN (Pretrain. VIM + IN). More details about this model and a direct comparison to PIM are provided in Appendix L. Note that the IN model is not trained jointly with the VIM.

Discussion and model ablations. Notice that our Bounce and Learn model outperforms the presented baselines (Table 1, top), likely due to the errors in hand-crafted estimates of the ball center in noisy stereo depth data. We report ablations of different training strategies for our model in Appendix F. We also quantify the effect of the spatial regularizer in Appendix G.

Leveraging sensor-estimated collision normals and COR. We study how our model can benefit if the collision normal or COR is known. As collision normals are loosely tied to the surface normals, we train our best model (row 4) with the sensor-estimated normals by adding an additional cosine-distance loss between the VIM-predicted collision normals and the sensor normals (row 5). Furthermore, we evaluate the different models when the sensor-estimated normals are available at test time (column "Dist Est Normal"). Notice how most of the models benefit from this information.
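For reference, the following sketches the first stage of the parabola-encoding baseline: a least-squares fit of per-frame ball centers to a quadratic-in-time trajectory. The two-layer network that consumes these coefficients is omitted, and the frame times and centers are illustrative stand-ins.

```python
import numpy as np

def fit_parabola(times, centers):
    """Least-squares fit c(t) = a + b*t + c*t^2 per axis; returns a (3, 3) matrix."""
    A = np.stack([np.ones_like(times), times, times ** 2], axis=1)  # (T, 3) design matrix
    coeffs, *_ = np.linalg.lstsq(A, centers, rcond=None)            # centers: (T, 3)
    return coeffs                                                   # rows: a, b, c per axis

t = np.arange(10) * 0.01                   # 10 pre-bounce frames
centers = np.random.randn(10, 3)           # stand-in for the mean sphere points
print(fit_parabola(t, centers).ravel())    # 9-d parabola representation
```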
<--- Page Split --->

![](images/7_0.jpg)
<center>Figure 4: Inferred COR and collision normals. Given a single still image (first col.), we show the inferred COR (second col., warmer colors indicate higher COR values) and predicted collision normals (third col., colors indicate normal direction) from VIM, and the stereo-camera surface normals (last col.). </center>

We also investigate whether knowing a sensor-estimated COR at test time is beneficial. The COR can be estimated from pre- and post-collision velocities under the assumption of a rigid-body model. We estimate these velocities from sensor data of the ball in the frames 0.05 seconds before and after impact. Note that these are noisy estimates due to the complexity of the collisions in our dataset and errors in depth estimation from the stereo camera. We evaluate prediction accuracy when the models have access to the sensor-estimated COR during testing (column "Dist Est COR"). We observe that, for the best model, the sensor-estimated COR combined with sensor normals (column "Dist Est Normal+COR") improves accuracy over sensor-estimated COR or normals alone.

### 4.2 INFERRING PHYSICAL PROPERTIES

We show qualitative results of inferred COR values and collision normals from VIM in Figure 4 (more in Appendix I). We also compare our collision normal estimates to the surface normals estimated from the stereo camera. Notice that the soft seats have lower COR values while the walls have higher values, and the normals align with the major scene surfaces. In Table 1, we evaluate the percentage of VIM's collision normal predictions that align within \(30^{\circ}\) of the sensor-estimated surface normal, the standard evaluation criterion for the NYUv2 surface normal estimation task (Silberman et al., 2012). We observe that training the trajectory encoder (row 4) leads to a model that is not constrained to predict interpretable effective physical parameters (since the input domain of \(\mathcal{P}\) changes). Adding sensor-estimated normals during training improves normal prediction accuracy while maintaining good forward-prediction accuracy (row 5). Finally, we evaluate the inferred COR from VIM. As noted in Section 4.1, the estimated COR from the stereo camera is noisy but is the best available ground truth. For our evaluation, we compare our inferred COR to the estimated COR from the stereo camera using the median absolute error in Table 1 (last column). Notice how our best model (row 4) outperforms the baseline and ablation models.

## 5 DISCUSSION

We have introduced a new large-scale dataset of real-world bounces and have demonstrated the ability to predict post-bounce trajectories and infer physical properties of bounces for a variety of everyday surfaces via our Bounce and Learn model. The collection of our Bounce Dataset facilitates studying physical properties not addressed by our model as well as future applications. Example properties include friction, object spin, surface deformation, and stochastic surfaces (e.g., corners, pebbled surfaces). Detecting collisions robustly and performing rollouts of predictions are interesting directions towards practical applications. Future applications also include the ability to transfer the recovered physical properties to 3D shapes and to simulate 3D scenes with real-world or cartoon physics.

## 6 ACKNOWLEDGEMENTS

This research is partly sponsored by ONR MURI N000141612007 and the ARO under Grant Number W911NF-18-1-0019. Abhinav Gupta was supported in part by the Okawa Foundation.

<--- Page Split --->

## REFERENCES

Pulkit Agrawal, Ashvin Nair, Pieter Abbeel, Jitendra Malik, and Sergey Levine.
Learning to poke by poking: Experiential learning of intuitive physics. In Advances in Neural Information Processing Systems (NIPS), 2016.

Aayush Bansal, Bryan C. Russell, and Abhinav Gupta. Marr revisited: 2D-3D alignment via surface normal prediction. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, and Koray Kavukcuoglu. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems (NIPS), 2016.

Sean Bell, Paul Upchurch, Noah Snavely, and Kavita Bala. OpenSurfaces: A richly annotated catalog of surface appearance. ACM Transactions on Graphics (SIGGRAPH 2013), 2013.

Ted Belytschko, Wing Kam Liu, Brian Moran, and Khalil Elkhodary. Nonlinear Finite Elements for Continua and Structures. John Wiley & Sons, November 2013.

Vinay Bettadapura, Caroline Pantofaru, and Irfan Essa. Leveraging contextual cues for generating basketball highlights. In Proceedings of ACM International Conference on Multimedia (ACM-MM). ACM, October 2016.

K. Bhat, S. Seitz, J. Popović, and P. Khosla. Computing the physical parameters of rigid-body motion from video. In Proceedings of European Conference on Computer Vision (ECCV), 2002.

M. Brubaker, L. Sigal, and D. Fleet. Estimating contact dynamics. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2009.

Marcus A. Brubaker and David J. Fleet. The kneed walker for human pose tracking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.

Marcus A. Brubaker, David J. Fleet, and Aaron Hertzmann. Physics-based person tracking using the anthropomorphic walker. International Journal of Computer Vision, 87(140), 2010.

Michael B. Chang, Tomer Ullman, Antonio Torralba, and Joshua B. Tenenbaum. A compositional object-based approach to learning physical dynamics. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.

Yu-Wei Chao, Jimei Yang, Brian Price, Scott Cohen, and Jia Deng. Forecasting human dynamics from static images. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

A. Chatterjee and A. L. Ruina. A New Algebraic Rigid-Body Collision Law Based on Impulse Space Considerations. Journal of Applied Mechanics, 65(4):939-951, 1998. doi: 10.1115/1.2791938.

Desai Chen, David I. Levin, Wojciech Matusik, and Danny M. Kaufman. Dynamics-aware numerical coarsening for fabrication design. ACM Trans. Graph., 34(4), 2017. doi: 10.1145/3072959.3073669.

Erwin Coumans and Yunfei Bai. pybullet, a Python module for physics simulation for games, robotics and machine learning. http://pybullet.org/, 2016-2017.

Sébastien Ehrhardt, Aron Monszpart, Niloy J. Mitra, and Andrea Vedaldi. Learning a physical long-term predictor. CoRR, abs/1703.00247, 2017a. URL http://arxiv.org/abs/1703.00247.

Sébastien Ehrhardt, Aron Monszpart, Andrea Vedaldi, and Niloy J. Mitra. Learning to represent mechanics via long-term extrapolation and interpolation. CoRR, abs/1706.02179, 2017b. URL http://arxiv.org/abs/1706.02179.

David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2015.

<--- Page Split --->

Haoqiang Fan, Hao Su, and Leonidas Guibas. A point set generation network for 3D object reconstruction from a single image.
In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Martin A. Fischler and Robert C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381-395, 1981.

Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning predictive visual models of physics for playing billiards. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.

Dhiraj Gandhi, Lerrel Pinto, and Abhinav Gupta. Learning to fly by crashing. In Proceedings of the International Conference On Intelligent Robots and Systems (IROS), 2017.

Abhinav Gupta, Alexei A. Efros, and Martial Hebert. Blocks world revisited: Image understanding using qualitative geometry and mechanics. In ECCV, 2010.

Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012. URL http://books.nips.cc/papers/files/nips25/NIPS2012_0534.pdf.

N. Kyriazis, I. Oikonomidis, and A. Argyros. Binding vision to physics based simulation: The case study of a bouncing ball. In Proceedings of the British Machine Vision Conference (BMVC), 2011.

Adam Lerer, Sam Gross, and Rob Fergus. Learning physical intuition of block towers by example. In Proceedings of the 33rd International Conference on Machine Learning (ICML), pp. 430-438. JMLR.org, 2016. URL http://dl.acm.org/citation.cfm?id=3045390.3045437.

Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research (JMLR), 2016.

Richard Mann, Allan Jepson, and Jeffrey Siskind. The computational perception of scene dynamics. In CVIU, 1997.

J.E. Marsden and T.J.R. Hughes. Mathematical Foundations of Elasticity. Dover Civil and Mechanical Engineering. 2012.

Aron Monszpart, Nils Thuerey, and Niloy J. Mitra. SMASH: Physics-guided reconstruction of collisions from videos. ACM Transactions on Graphics (SIGGRAPH Asia), 2016.

Roozbeh Mottaghi, Hessam Bagherinezhad, Mohammad Rastegari, and Ali Farhadi. Newtonian image understanding: Unfolding the dynamics of objects in static images. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016a.

Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, and Ali Farhadi. "What happens if..." Learning to predict the effect of forces in images. In Proceedings of European Conference on Computer Vision (ECCV), 2016b.

William B. Nordgren. Flexible simulation (Flexsim) software: Flexsim simulation environment. In Proceedings of the 35th conference on Winter simulation: driving innovation, 2003.

Andrew Owens, Phillip Isola, Josh McDermott, Antonio Torralba, Edward Adelson, and William Freeman. Visually indicated sounds. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Lerrel Pinto and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours. In Proceedings of the International Conference On Robotics and Automation (ICRA), 2016.

Lerrel Pinto, Dhiraj Gandhi, Yuanfeng Han, Yong-Lae Park, and Abhinav Gupta. The curious robot: Learning visual representations via physical interactions. In Proceedings of European Conference on Computer Vision (ECCV), 2016.

<--- Page Split --->

Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas.
Pointnet: Deep learning on point sets for 3d classification and segmentation. arXiv preprint arXiv:1612.00593, 2016. Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In European Conference on Computer Vision, pp. 746- 760. Springer, 2012. Chris Stauffer and W Eric L Grimson. Adaptive background mixture models for real- time tracking. In Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on., volume 2, pp. 246- 252. IEEE, 1999. David E Stewart. Dynamics with Inequalities: Impacts and Hard Constraints. Society for Industrial and Applied Mathematics, 2011. Dan Stoianovici and Yildirim Hurmuzlu. A critical study of the applicability of rigid- body collision theory. Journal of Applied Mechanics, 63(2):307- 316, 1996. Carl Vondrick and Antonio Torralba. Generating the future with adversarial transformers. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. Jacob Walker, Carl Doersch, Abhinav Gupta, and Martial Hebert. An uncertain future: Forecasting from variational autoencoders. In Proceedings of European Conference on Computer Vision (ECCV), 2016. Jui- Hsien Wang, Rajsekhar Setaluri, Dinesh K. Pai, and Doug L. James. Bounce maps: An improved restitution model for real- time rigid- body impact. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2017), 36(4), July 2017. doi: https://doi.org/10.1145/3072959.3073634. Xiaolong Wang, David Fouhey, and Abhinav Gupta. Designing deep networks for surface normal estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 539- 547, 2015. Nicholas Watters, Andrea Tacchetti, Theophane Weber, Razvan Pascanu, Peter Battaglia, and Daniel Zoran. Visual interaction networks. CoRR, abs/1706.01433, 2017. URL http://arxiv.org/abs/1706.01433. Jiajun Wu, Ilker Yildirim, Joseph J. Lim, William T. Freeman, and Joshua B. Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In Advances in Neural Information Processing Systems (NIPS), 2015. Jiajun Wu, Joseph J. Lim, Hongyi Zhang, Joshua B. Tenenbaum, and William T. Freeman. Physics 101: Learning physical object properties from unlabeled videos. In Proceedings of the British Machine Vision Conference (BMVC), 2016. Tianfan Xue, Jiajun Wu, Katherine L Bouman, and William T Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In Advances in Neural Information Processing Systems (NIPS), 2016. Zhoutong Zhang, Jiajun Wu, Qiujia Li, Zhengjia Huang, James Traer, Josh H. McDermott, Joshua B. Tenenbaum, and William T. Freeman. Generative modeling of audible shapes for object perception. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2017. Yixin Zhu, Yibiao Zhao, and Song- Chun Zhu. Understanding tools: Task- oriented object modeling, learning and recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. Yixin Zhu, Chenfanfu Jiang, Yibiao Zhao, Demetri Terzopoulos, and Song- Chun Zhu. Inferring forces and learning human utilities from videos. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei- Fei, Abhinav Gupta, Roozbeh Mottaghi, and Ali Farhadi. Visual semantic planning using deep successor representations. 
In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2017a.

Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In Proceedings of the International Conference On Robotics and Automation (ICRA), 2017b.

## APPENDIX A PRELIMINARIES

We review some of the fundamentals and challenges in modeling restitution during collision. Physics has long adopted simple algebraic collision laws that map a pre-collision velocity \(v^{-}\), along with a collision normal \(n\), to a post-collision velocity \(v^{+}\) (Chatterjee & Ruina, 1998). Among these, perhaps the most widely applied collision law is the coefficient of restitution, \(\mathrm{COR} = -\frac{\langle n, v^{+}\rangle}{\langle n, v^{-}\rangle} \in [0, 1]\). Here \(\mathrm{COR} = 0\) gives a purely inelastic impact (no rebound), while \(\mathrm{COR} = 1\) gives a fully elastic impact that dissipates no energy. Note that the notion of a collision normal idealizes the impact process with a well-defined normal direction. As soft objects collide (e.g., with a leather sofa), surface normals vary throughout the process; the collision normal \(n\) encodes the averaged effect of the range of normal directions experienced throughout an impact.

However, when it comes to physically modeling the real world, it is important to note that: (a) there is no single, physically correct COR for a material; and (b) there is no valid geometric surface normal that encodes the collision-normal behavior for a pair of colliding objects (Stewart, 2011). At best, a COR value is valid only locally, for a unique orientation between a pair of colliding geometries (Stoianovici & Hurmuzlu, 1996). For example, two identical rods dropped onto the ground in vertical and horizontal orientations will rebound drastically differently. If, however, a sphere with uniform material is one of the objects in a collision pair, symmetry ensures that the COR effectively varies only across the surface of the second, more complex collision surface. We leverage this in our capture setup by adopting a spherical probe object to extract a map of varying COR across complex collision surfaces in everyday environments. We seek COR values and collision normals that effectively encode the post-impact response.

While we could hypothetically attempt to identify materials and use material fact sheets as look-up tables for physical properties like COR, such information would be of limited value in any practical setting. Recall that recorded COR values are generally valid only for a limited range of geometries: essentially just for perfectly rigid, sphere-to-half-plane collisions. The real world violates these assumptions at almost every turn. COR values and collision normals vary with every contact configuration and location on the object (Wang et al., 2017). There is never a single COR value characterizing collision responses against real-world objects, and hence there is no database (and thus no ground truth) for materials and their COR.
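To make the definition concrete, the following is a minimal sketch (Python with NumPy; the function name and example numbers are illustrative, not taken from our pipeline) of how a COR value follows from measured pre- and post-collision velocities and an assumed collision normal:

```python
import numpy as np

def coefficient_of_restitution(v_pre, v_post, n):
    """Compute COR from pre-/post-collision velocities and a collision normal.

    v_pre, v_post: 3D velocity vectors just before/after impact (m/s).
    n: collision normal pointing away from the surface.
    """
    n = n / np.linalg.norm(n)          # ensure the normal is unit length
    approach = np.dot(n, v_pre)        # normal velocity component before impact (< 0)
    separation = np.dot(n, v_post)     # normal velocity component after impact (> 0)
    return -separation / approach      # lies in [0, 1] for passive impacts

# A ball falling at 4 m/s that rebounds at 2 m/s off a horizontal floor:
cor = coefficient_of_restitution(
    v_pre=np.array([0.0, 0.0, -4.0]),
    v_post=np.array([0.0, 0.0, 2.0]),
    n=np.array([0.0, 0.0, 1.0]))
print(cor)  # 0.5
```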
## APPENDIX B BOUNCE DATASET

![](images/12_0.jpg)
<center>Figure 5: Our collected Bounce Dataset of 5K real-world bounces. Our dataset spans a variety of real-world everyday scenes. In each image we mark the location of the observed bounce and show the estimated coefficient of restitution at the top-left. Larger values correspond to harder surfaces such as floors and countertops.</center>

We investigate whether we can use active physical interactions to infer the physical properties of objects. We explore this with a probe ball bounced off of complex, everyday surfaces to infer the physical properties of these surfaces. Each individual bounce consists of the probe following a pre-collision trajectory, its impact with a surface, and its post-collision trajectory. In addition, we augment our collected real-world bounce dataset with a dataset of simple simulated bounces.

Probe object. As our focus is on probing the physical and geometric properties of complex surfaces in a scene, we use a simple spherical foam ball (radius \(\sim 7\) cm) as the probe object for all captured bounce sessions. A ball's post-collision trajectory is largely determined by the properties of the collision surface and the velocity with which the ball hits that surface. The symmetry of the ball eliminates variation due to the geometry of the impact location on the probe, and thus its effect on the resulting bounce.

Capture setup. Our analysis of bounces requires knowledge of both the pre-bounce and post-bounce trajectories of the probe object in 3D. Hence, we use the Stereolabs ZED stereo camera to capture videos at 100 fps and VGA resolution. We observed that the impact of the probe with a surface often lasts around 1/50th of a second; the high frame rate of our camera ensures that frames depicting the bounce are captured. For each bounce video, we keep the camera stationary to facilitate accurate reconstruction of the trajectory.

The dataset consists of 5172 stereo videos of bounces off surfaces in office and home environments. On average, each video contains 172 frames containing the ball. As shown in Figure 5, these environments cover diverse surfaces with varying material properties. Each sample in the dataset consists of the RGB frames, depth maps, point clouds for each frame, and estimated surface normal maps. Since we are interested in the trajectory of the ball, we first perform background subtraction (Stauffer & Grimson, 1999) on the frames to localize the ball, followed by RANSAC fitting (Fischler & Bolles, 1981) of a sphere to the point cloud corresponding to the foreground mask. This helps us reject outlier foreground points and collect a point cloud corresponding to the ball in each frame. Note that in each frame only one viewpoint of the ball is visible; hence, the point cloud for each frame contains a partial sphere.

Simulation data. We bootstrap our learning by augmenting our captured real-world data with very simple simulation data. We simulate a set of sphere-to-plane collisions with the PyBullet Physics Engine (Coumans & Bai, 2016–2017). To match our capture frame rate, we set the simulation time step to 0.01 seconds. We initialize sphere locations and linear velocities randomly, while angular velocities and friction coefficients are set to zero. Each collision surface is likewise given a random normal orientation and a COR value sampled uniformly from the feasible range [0, 1]. A minimal sketch of this simulation setup is shown below.
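The following is a rough sketch of one such simulated bounce, assuming PyBullet's Python API; the mass, sampling ranges, and the axis-aligned plane are illustrative simplifications of our actual data-generation code, which also randomizes the surface orientation:

```python
import numpy as np
import pybullet as p

p.connect(p.DIRECT)                       # headless physics server
p.setGravity(0, 0, -9.8)
p.setTimeStep(0.01)                       # matches the 100 fps capture rate

# Collision surface with a randomly sampled COR in [0, 1].
cor = float(np.random.uniform(0.0, 1.0))
plane = p.createCollisionShape(p.GEOM_PLANE)
surface = p.createMultiBody(baseMass=0, baseCollisionShapeIndex=plane)
p.changeDynamics(surface, -1, restitution=cor, lateralFriction=0.0)

# Spherical probe (~7 cm radius) with random initial position and velocity.
sphere = p.createCollisionShape(p.GEOM_SPHERE, radius=0.07)
ball = p.createMultiBody(baseMass=0.1, baseCollisionShapeIndex=sphere,
                         basePosition=[0, 0, float(np.random.uniform(0.5, 2.0))])
# Bullet combines the restitution of both bodies, so the ball's is held at 1.
p.changeDynamics(ball, -1, restitution=1.0, lateralFriction=0.0)
p.resetBaseVelocity(ball, linearVelocity=np.random.uniform(-2, 2, 3).tolist())

# Step the simulation and record the trajectory of the ball's center.
trajectory = []
for _ in range(100):                      # one second of simulated time
    p.stepSimulation()
    position, _ = p.getBasePositionAndOrientation(ball)
    trajectory.append(position)
```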
Each simulation returns the pre-bounce and post-bounce trajectories of the sphere for the sampled COR and normal. Finally, to make this synthetic data consistent with our captured dataset, we create point clouds per simulation by picking a viewpoint and sampling only the visible points on the sphere at each time step.

## APPENDIX C TRAJECTORY ENCODERS

The Physics Inference Module described in Section 3.2 of the main text takes as input an encoded representation of a trajectory. Due to the significant overlap of our architecture with PointNet (Qi et al., 2016), we provide only a brief description in the main text. In this section, we describe the encoder model in more detail to facilitate replication of our results. Note that code for this model will be made publicly available with the final version of the paper.

Each trajectory (pre-bounce or post-bounce) in our dataset consists of point clouds of the ball at \(T\) time steps. At each time step, we first sample \(N\) points from the point cloud and sort them lexicographically by their \(x\), \(y\), and \(z\) coordinates (in that order of priority). We observed that random and uniform sampling achieve similar results. This sampling gives us a \(T \times N \times 3\) array of points for each trajectory. The lexicographic sorting provides a partial spatial relationship between consecutive points in the array.

For each time step, the corresponding \(N \times 3\) array is processed by a sequence of convolution and ReLU layers as shown in Figure 6, giving a single \(1 \times 128\) vector per time step. These per-time-step vectors are concatenated into a \(1 \times 128T\) vector, which is processed by two fully connected layers followed by \(L_{2}\) normalization, giving a 256-dimensional vector.

![](images/14_0.jpg)
<center>Figure 6: Our proposed encoder architecture takes as input a sequence of length \(T\) of point cloud data, each containing \(N\) points, and outputs a single vector.</center>

We observed that using 64 dimensions as the encoding size leads to the best results. In our implementation, we set \(T = 10\) and \(N = 500\).

## APPENDIX D CORE PHYSICS ENGINE

The Core Physics Engine described in Section 3.2 is the component of the PIM that performs the prediction task. Given input physical parameters \(\rho\) and an encoded pre-bounce trajectory \(t_{i}\), the Core Physics Engine predicts the encoded post-bounce trajectory \(t_{o}\). It first encodes the input physical parameters into a latent representation \(v_{\rho}\) (a \(1 \times 32\) vector in our implementation) using two fully connected layers. We observed that increasing the dimensionality of \(v_{\rho}\) beyond \(1 \times 32\) does not improve results, while sizes below \(1 \times 32\) degrade performance. The encoded physical parameters \(v_{\rho}\) are then concatenated with the encoded pre-bounce trajectory \(t_{i}\), and this concatenated vector is used to predict the post-bounce trajectory encoding with two further fully connected layers. This pipeline, along with the sizes of the intermediate representations, is shown in Figure 7.

![](images/14_1.jpg)
<center>Figure 7: Our proposed Core Physics Engine takes as input the encoded pre-bounce trajectory \(t_{i}\) along with the physical parameters of the collision surface \(\rho\) to predict the encoded post-bounce trajectory \(t_{p}\).</center>
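As a concrete reference, here is a minimal PyTorch sketch of this module; the layer widths stated above are used where given, while the remaining hidden size is an illustrative assumption:

```python
import torch
import torch.nn as nn

class CorePhysicsEngine(nn.Module):
    """Predicts the encoded post-bounce trajectory from (t_i, rho)."""

    def __init__(self, traj_dim=64, param_dim=4, param_latent=32):
        super().__init__()
        # Two FC layers encode rho = (COR, collision normal) into a 1x32 vector.
        self.param_encoder = nn.Sequential(
            nn.Linear(param_dim, param_latent), nn.ReLU(),
            nn.Linear(param_latent, param_latent), nn.ReLU())
        # Two FC layers map [t_i ; v_rho] to the predicted encoding t_p.
        # The hidden width of 128 is an assumption, not stated in the text.
        self.predictor = nn.Sequential(
            nn.Linear(traj_dim + param_latent, 128), nn.ReLU(),
            nn.Linear(128, traj_dim))

    def forward(self, t_i, rho):
        v_rho = self.param_encoder(rho)
        t_p = self.predictor(torch.cat([t_i, v_rho], dim=-1))
        # Encodings are compared with cosine distance, so L2-normalize.
        return nn.functional.normalize(t_p, dim=-1)
```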
## APPENDIX E TRAINING DETAILS

## E.1 PRETRAINING THE PIM

The Physics Inference Module (PIM) is pretrained using simulation data as described in Section 3.2. To optimize the objective function given in Equation 2 of the main text, we use the Adam optimizer. We update the parameters of the PIM using a batch size of 32, an initial learning rate of 0.01, and a weight decay of 0.0005. The learning rate is dropped by a factor of 10 after every 32000 iterations, and training runs for a total of 96000 iterations.

## E.2 TRAINING VIM+PIM

The Visual Inference Module (VIM) is trained jointly with the PIM using the captured data as described in Section 3.3. As for the PIM, the objective presented in Equation 3 of the main text is optimized using the Adam optimizer. We update the parameters of the PIM and VIM using a batch size of 32, an initial learning rate of 0.001, and a weight decay of 0.0005. The learning rate is dropped by a factor of 10 after every 8000 iterations. We observe that training converges at 24000 iterations.
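A minimal PyTorch sketch of the E.1 pretraining schedule is given below (the E.2 schedule differs only in its learning rate and step size); the module and batch here are stand-ins for the actual PIM and data loader:

```python
import torch

# Placeholder module standing in for the PIM (encoder + core physics engine).
pim = torch.nn.Linear(64, 64)

optimizer = torch.optim.Adam(pim.parameters(), lr=0.01, weight_decay=0.0005)
# Drop the learning rate by a factor of 10 every 32000 iterations.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=32000, gamma=0.1)

for iteration in range(96000):
    batch = torch.randn(32, 64)         # stand-in for a batch of 32 triplets
    loss = pim(batch).pow(2).mean()     # stand-in for the Eq. 2 loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```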
## APPENDIX F ABLATIVE STUDY: TRAINING VARIOUS COMPONENTS OF THE BOUNCE AND LEARN PIPELINE

We conducted an ablative study on training the various components of the Bounce and Learn pipeline; the results are presented in Table 2. First, we consider pretraining PIM (core physics engine and trajectory encoder) with only synthetic data and holding it fixed while training VIM on real data (row 1). In other words, PIM sees only synthetic data during training and is upper-bounded by the rigid-body Newtonian physics simulation. Second, we consider pretraining PIM on synthetic data and fine-tuning all of it during VIM training on real data (row 2). Third, we consider pretraining PIM on synthetic data and fine-tuning only the core physics engine (holding the trajectory encoder fixed) during VIM training on real data (row 3).

Table 2: Bounce and Learn ablative evaluation (val set). We evaluate our models in different training settings on the tasks of forward prediction, collision normal estimation, and COR estimation. We report the median distance in centimeters to observed post-bounce trajectories for each experimental setting. Please see the text for details.

<table><tr><td>Training Setting</td><td>Dist</td><td>Dist Est Normal</td><td>Dist Est COR</td><td>Dist Est Normal + COR</td><td>% Normals within 30° of Est Normal</td><td>COR Median Absolute Error</td></tr><tr><td>1. Fix core and traj. enc. (synth only)</td><td>37.2 ± 1.5</td><td>29.6 ± 0.4</td><td>38.3 ± 0.4</td><td>29.4 ± 0.1</td><td>28.72 ± 4.87</td><td>0.193 ± 0.020</td></tr><tr><td>2. Train core and traj. enc.</td><td>20.4 ± 0.9</td><td>31.7 ± 2.8</td><td>20.4 ± 2.5</td><td>31.4 ± 2.0</td><td>13.01 ± 1.26</td><td>0.134 ± 0.008</td></tr><tr><td>3. (Ours) Train core, Fix traj. enc.</td><td>19.9 ± 1.8</td><td>18.1 ± 0.8</td><td>20.5 ± 1.0</td><td>17.8 ± 0.7</td><td>34.98 ± 3.43</td><td>0.123 ± 0.009</td></tr></table>

We observe that fine-tuning PIM on real data results in better prediction accuracy than training PIM on synthetic data alone. We also observe that our best model requires fine-tuning the core physics engine while keeping the trajectory encoding fixed. This suggests that the core physics engine learns effective physical parameters beyond those used in rigid-body simulation pretraining. We further analyze this hypothesis in Appendix H.

## APPENDIX G ABLATIVE STUDY OF SPATIAL REGULARIZATION

Here we analyze the effect of the spatial regularizer term presented in Equation 3. We evaluate forward prediction, normal estimation, and COR estimation errors for our proposed model with and without spatial regularization ("Reg."). These results are presented in Table 3.

<table><tr><td>Models</td><td>Dist</td><td>Dist Est Normal</td><td>Dist Est COR</td><td>Dist Est Normal + COR</td><td>% Normals within 30° of Est Normal</td><td>COR Median Absolute Error</td></tr><tr><td>1. (Ours) Point Cloud-based</td><td>20.5 ± 1.1</td><td>18.5 ± 0.6</td><td>20.3 ± 1.3</td><td>18.2 ± 0.5</td><td>34.14 ± 2.87</td><td>0.127 ± 0.008</td></tr><tr><td>2. (Ours) Point Cloud-based + Spatial Reg.</td><td>19.9 ± 1.8</td><td>18.1 ± 0.8</td><td>20.5 ± 1.0</td><td>17.8 ± 0.7</td><td>34.98 ± 3.43</td><td>0.123 ± 0.009</td></tr></table>

Table 3: Spatial Regularization Ablation (val set). We present an ablative study of the effect of spatial regularization on the Bounce and Learn model. We report the median distance in centimeters to observed post-bounce trajectories for each experimental setting. We also evaluate the predicted normals and CORs.

## APPENDIX H ANALYZING THE EFFECT OF TRAINING PIM ON REAL DATA

In the results presented in Section 4 of the main text, we observe that jointly training the core physics engine leads to significant improvements in performance on both tasks: forward prediction and physical parameter estimation. Here we analyze this further by examining the forward prediction errors for trajectories within different ranges of sensor-estimated COR (calculated using the hand-crafted approach described in Section 4.1). We present these results in Figure 8.

![](images/16_0.jpg)
<center>Figure 8: Forward prediction errors for trajectories belonging to different ranges of estimated COR. An interesting observation is that the improvement in performance in lower COR ranges is significantly higher.</center>

In general, lower COR values point to collisions that are more non-rigid, while higher-COR collisions are closer to rigid-body collisions. While there are exceptions to this intuition (for example, trampolines are high-COR and non-rigid), such examples are rare in the Bounce Dataset. We observe from the results in Figure 8 that the improvement in performance is significantly higher in the lower COR ranges than in the higher ones. This supports the hypothesis that jointly training the PIM leads to better modeling of non-rigid collisions than the pretrained, rigid-body-simulation-based PIM. Furthermore, we see that training the PIM also improves forward prediction results across all ranges of COR. It is also worth noting that the predictions are generally more sensitive to variations in the higher COR region, since small variations in velocities can lead to larger distance errors. This explains the increasing error of the "Only Core Trained" model as the sensor-estimated COR increases.

## APPENDIX I ADDITIONAL QUALITATIVE RESULTS

We provide additional visualizations of the predictions from the VIM in Figure 9. We also provide visualizations of the trajectories predicted by the Bounce and Learn model in Figure 10. Row 1 of Figure 10 shows a case where the restitution is predicted to be slightly too high. Row 3 of Figure 10 shows a case where the output appears physically plausible but does not match the true trajectory, since it fails to account for the spin of the ball.

![](images/17_0.jpg)
<center>Figure 9: Inferred COR and collision normals.
Given a single still image (first col.), we show inferred COR (second col., warmer colors indicate higher COR values), predicted collision normals (third col., colors indicate normal direction) from VIM and stereo camera surface normals (last col.).</center>

![](images/17_1.jpg)
<center>Figure 10: Predicted post-bounce trajectories in novel scenes. (left) Input pre-bounce trajectories. (center) Observed post-bounce trajectories. (right) Our predicted post-bounce trajectories.</center>

## APPENDIX J ONLINE INFERENCE FROM BOUNCES AND VISUAL CUES

In the main text, we demonstrate the efficacy of the VIM+PIM framework for inferring physical properties and predicting bounce outcomes in novel environments. However, in numerous robotics applications, an agent can usually interact with a scene to infer its physical properties, and visual cues can be leveraged to generalize these inferences across the scene. For example, a bounce of a ball on a wall can inform our inference of the physical properties of the rest of the wall. Therefore, we explore an online learning framework in which our estimates of the physical parameters in a scene are updated online upon observing bounces.

First, we pretrain the PIM as described in Section 3.2. For every bounce trajectory \((\mathcal{T}_{i},\mathcal{T}_{o})\) observed at scene location \((x,y)\), we use the VIM to estimate the physical parameters \(\rho_{x,y}\). The VIM is then updated until convergence using the same objective as VIM training from Equation 3, whose data terms are

\[\mathcal{L} = d(t_{o},f(t_{i},\rho_{x,y})) + \|\rho_{x,y} - \mathcal{P}(t_{i},t_{o})\|_{2}^{2}.\]

The loss is optimized incrementally using all bounces observed so far. Therefore, for each scene, we can train incrementally by interacting with the scene and updating the previously learned model. The PIM is kept fixed during the online learning process; fixing the PIM makes the optimization easier, since we usually have access to only a limited number of bounce trajectories (agent interactions) in novel environments. A sketch of one online update pass is given below.

We observe in our results that we achieve better estimates of the physical properties with an increasing number of interactions. In Figure 11, we visualize the intermediate predictions from the VIM after observing an increasing number of bounces. Evidently, the estimates of collision normals and coefficient of restitution for the image improve with the number of bounces. For each scene, we leave out 10 bounce trajectories for evaluation and use the rest as the online interactions. We consider 10 random shuffles of the bounce trajectories, creating different sets for evaluation and online interactions, to compute mean performance. Figure 11 shows the quantitative improvement with an increasing number of bounces according to the metrics described in Section 4.2. We present the final predictions from two other scenes in Figure 12. Observe that the predictions accurately capture the lower restitution of softer objects (pillow, seat of a chair). Similarly, they capture the "hardness" of the edges of chairs and tables. Next, we demonstrate the prediction of trajectories. Qualitative examples of trajectory predictions in the online setting are shown in Figure 13. We observe that the combination of VIM and PIM can successfully predict the outcomes of most bounce events; we also present some failure cases in Figure 13.
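In code, one online update pass amounts to the following sketch (PyTorch-style; `vim`, `core_engine`, and `param_recovery` stand for \(\mathcal{V}\), the frozen \(f\), and \(\mathcal{P}\); in practice this pass is repeated until convergence):

```python
import torch
import torch.nn.functional as F

def online_update(vim, core_engine, param_recovery, image, bounces, optimizer):
    """Refit VIM on all bounces observed so far in this scene; PIM stays frozen."""
    for t_i, t_o, x, y in bounces:         # encoded trajectories + impact pixel
        rho_xy = vim(image)[0, :, y, x]    # 4-D (COR, normal) at the bounce location
        t_p = core_engine(t_i, rho_xy)     # frozen core physics engine
        loss = (1 - F.cosine_similarity(t_p, t_o, dim=-1)) \
             + ((rho_xy - param_recovery(t_i, t_o)) ** 2).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```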
![](images/19_0.jpg)
<center>Figure 11: We learn to estimate physical parameters of surfaces in a scene by observing bounces at different locations (row 1). Our predictions of collision normals (row 2) and coefficient of restitution (row 3) improve with the number of bounces. The quantitative evaluation (row 4) provides strong evidence for the efficacy of our online learning approach. (Best viewed electronically)</center>

![](images/20_0.jpg)
<center>Figure 12: The final predicted COR maps show the ability of our approach to differentiate soft (pillows, movable lamp) and rigid objects (edges of chairs and tables). Furthermore, the quantitative evaluation of the estimated collision normals and COR shows a strong improving trend with an increasing number of bounces. (Best viewed electronically)</center>

![](images/21_0.jpg)
<center>Figure 13: Online-learning-based predicted post-bounce trajectories. (left) Input pre-bounce trajectories. (center) Observed post-bounce trajectories. (right) Our predicted post-bounce trajectories. We correctly predict the trajectory in different scenes. The bottom two rows are example failures.</center>

## APPENDIX K EVALUATION IN THE ABSENCE OF IMPACT LOCATION ANNOTATIONS

<table><tr><td>Models</td><td>Dist</td><td>Dist Est Normal</td><td>Dist Est COR</td><td>Dist Est Normal + COR</td><td>% Normals within 30° of Est Normal</td><td>COR Median Absolute Error</td></tr><tr><td>Annotated collision pts</td><td>21.3 ± 0.9</td><td>21.2 ± 0.8</td><td>21.4 ± 0.7</td><td>20.6 ± 0.6</td><td>24.08 ± 3.82</td><td>0.168 ± 0.018</td></tr><tr><td>Estimated collision pts</td><td>21.8 ± 1.7</td><td>21.5 ± 0.1</td><td>22.0 ± 1.8</td><td>21.4 ± 0.5</td><td>24.26 ± 1.99</td><td>0.171 ± 0.023</td></tr></table>

Table 4: Bounce and Learn with estimated collision points (test set). We evaluate our model on the tasks of forward prediction, collision normal estimation, and restitution estimation in the absence of impact location annotations in the scene image.

In the experiments presented in Section 4, we use human annotations of the impact location in the image. These annotations are used to index the output of the Visual Inference Module (VIM), as explained in Equation 3. In this section, we conduct an experiment that relaxes the need for this human annotation. The point cloud of the ball in the frame of collision provides information about its location in the image; we leverage this information to estimate the point of collision. More specifically, we project the mean point of the point cloud into the image using the camera parameters, and this projection serves as our estimate of the point of collision (see the sketch at the end of this appendix). Note that this is not an exact estimate, since the point of collision can occur at the edge of the ball, which would not coincide with the projected center. However, since the output of VIM is coarse, we hypothesize that minor errors in the impact location do not affect the coarse index. In Table 4, we present results for our model trained and tested using this estimate of the collision location. We observe very minimal difference in performance compared to using the human annotations.
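The projection step is the standard pinhole mapping; a small sketch follows, assuming known intrinsics \(f_x, f_y, c_x, c_y\) from the stereo camera (the function name is illustrative):

```python
import numpy as np

def estimate_collision_pixel(ball_points, fx, fy, cx, cy):
    """Project the mean of the ball's point cloud (camera frame) to the image."""
    X, Y, Z = ball_points.mean(axis=0)   # ball_points: (N, 3) at the collision frame
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return int(round(u)), int(round(v))
```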
## APPENDIX L COMPARING PIM TO INTERACTION NETWORKS (BATTAGLIA ET AL., 2016)

<table><tr><td>Models</td><td>Dist</td></tr><tr><td>Center-based PIM</td><td>10.87 ± 0.32</td></tr><tr><td>IN - velocity</td><td>49.00 ± 6.34</td></tr><tr><td>IN - location history</td><td>20.10 ± 1.50</td></tr></table>

Table 5: Comparison to Interaction Networks. We evaluate our Center-based PIM model and Interaction Networks (Battaglia et al., 2016) on 10000 simulated trajectories. We report the median distance in centimeters to the ground-truth post-bounce location at \(t = 0.1\) s. Please see the text for details.

Interaction Networks (INs) (Battaglia et al., 2016) model the interactions between objects in complex systems by performing inference over the abstract properties that govern them. INs have been shown to be effective for reasoning about n-body interactions, colliding rigid balls, and the interaction of discretized strings with rigid objects. The Physics Inference Module (PIM) proposed in this paper addresses a more specific problem: the collision between two non-rigid objects. The PIM also provides the additional benefit of performing inference over sensor inputs in the form of point clouds.

We perform a quantitative comparison of the PIM and IN models. Since the IN is not designed to handle point cloud data, we use the Center-based PIM baseline presented in Section 4 for a fair comparison. The IN model maintains a state vector for each object at each time step. We follow the choices in Battaglia et al. (2016) to design the state vector. In our scenario, there are two objects: the ball and the collision surface. We represent each with a 7-dimensional state vector. The surface is represented as a static object with inverse mass 0, located at (0, 0, 0), and with velocity (0, 0, 0). The ball is represented as an object with constant inverse mass, with its location and velocity determined by the sample. The relation attribute is represented by the coefficient of restitution and the collision normal. Since our training and test simulation trajectories have added noise imitating sensor data, the estimates of the initial velocity are also noisy. Therefore, we formulate an alternative location-history-based state vector containing the location of the object over the last 3 time steps:

\[o = [m, x_{t-2}, y_{t-2}, z_{t-2}, x_{t-1}, y_{t-1}, z_{t-1}, x_{t}, y_{t}, z_{t}]\]

We test these models on 10000 simulated trajectories and present the results in Table 5. We observe that the IN model exhibits a higher error at \(t = 0.1\) s post-collision. We believe this difference is due to the recurrent nature of the IN, which leads to an accumulation of errors at every time step. Since the PIM instead predicts by choosing the best simulated post-collision trajectory, it does not suffer from this issue. However, incorporating INs into our proposed framework could allow modeling of n-body interactions; we leave this to future work.
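For concreteness, a minimal sketch of assembling this history-based state vector from a trajectory of estimated ball centers (NumPy; names are illustrative, and the leading mass entry follows the equation above):

```python
import numpy as np

def history_state(centers, t, mass=1.0):
    """Build the 10-D history-based IN state vector at time step t."""
    # Locations at t-2, t-1, and t, flattened after the mass entry.
    return np.concatenate([[mass], centers[t - 2], centers[t - 1], centers[t]])

centers = np.random.randn(10, 3)  # placeholder trajectory of 3D ball centers
o = history_state(centers, t=5)
assert o.shape == (10,)
```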
## ABSTRACT

We introduce an approach to model the surface properties governing bounces in everyday scenes. Our model learns end-to-end, starting from sensor inputs, to predict post-bounce trajectories and infer two underlying physical properties that govern bouncing: restitution and effective collision normals. Our model, Bounce and Learn, comprises two modules: a Physics Inference Module (PIM) and a Visual Inference Module (VIM). VIM learns to infer physical parameters for locations in a scene given a single still image, while PIM learns to model physical interactions for the prediction task given physical parameters and observed pre-collision 3D trajectories. To achieve our results, we introduce the Bounce Dataset, comprising 5K RGB-D videos of bouncing trajectories of a foam ball used to probe surfaces of varying shapes and materials in everyday scenes, including homes and offices. Our proposed model learns from our collected dataset of real-world bounces and is bootstrapped with additional information from simple physics simulations. We show on our newly collected dataset that our model outperforms baselines, including trajectory fitting with Newtonian physics, in predicting post-bounce trajectories and inferring physical properties of a scene.

## 1 INTRODUCTION

Consider the scenario depicted in Figure 1. Here, a ball has been tossed into an everyday scene and is about to make contact with a sofa. What will happen next? In this paper, we seek a system that learns to predict the future after an object makes contact with, or bounces off, everyday surfaces such as sofas, beds, and walls. The ability to make such predictions enables applications in augmented reality and robotics, such as compositing a dynamic virtual object into a video or allowing an agent to react to real-world bounces in everyday environments.

We begin by observing that humans exploit both visual recognition and direct physical interactions to estimate the physical properties of objects in the world around them. By learning from a large number of physical interactions in the real world, we develop an approximate visual mapping for physical properties (e.g., sofas are soft, tables are hard). However, two surfaces that look the same may produce vastly different outcomes when objects are dropped upon them (e.g., query for "happy ball and sad ball" on YouTube). Without observing these interactions, we would have no way of knowing that the surfaces are made of materials with differing physical properties.

Motivated by this observation, we investigate the use of physical interactions to probe surfaces in real-world environments, infer physical properties, and leverage the interactions as supervision to learn an appearance-based estimator. Exploiting the regularity of spherical collisions, we adopt a simple ball as our probe. Our goal is to use captured probe collision trajectories to predict post-bounce trajectories and estimate surface-varying coefficients of restitution (COR) and effective collision normals over complex, everyday objects.

Rigid-body physics has often been employed to model collision events (Bhat et al., 2002; Brubaker et al., 2009; Kyriazis et al., 2011; Monszpart et al., 2016). However, real-world objects deform under collision, and so violate rigid-body assumptions.

![](images/1_0.jpg)
<center>Figure 1: Goal. (a) We seek to predict what will happen after a ball bounces off an everyday surface. (b) We introduce a large Bounce Dataset of videos depicting real-world bounces in a variety of scenes. We show (hand-crafted) estimates of the coefficient of restitution in the top-left corners (higher values indicate hard surfaces). (c) Output of our Bounce and Learn model that predicts the trajectory of the object after the bounce. (d) Observed ground-truth post-bounce trajectory.</center>
Collision normals and COR model a complex process, which is challenging and expensive to observe and simulate, especially for softer materials (Belytschko et al., 2013; Marsden & Hughes, 2012). While there are many fast soft-body simulators (e.g., FlexSim (Nordgren, 2003)), none are physically accurate (Chen et al., 2017). Furthermore, these simulators do not allow estimation of the parameters of observed real-world scenes. Moreover, current computer vision techniques do not accurately capture high-speed colliding trajectories in the wild. Inaccurate trajectory capture means we are far from certain of our trajectories: we have point clouds that are not a good fit to any exact trajectory curve but instead could be explained by any number of nearby trajectories. This means that inferring the underlying collision normals and CORs cannot be done by simply inverting a deterministic Newtonian physics model. Indeed, as we show in Section 4, fitting a single trajectory curve and learning with Newtonian physics leads to poor results. These results are explained in part by noting that collision dynamics, and the codes that simulate them, are particularly sensitive to variations and uncertainties.

To address these challenges, we seek to directly learn collision-response models of deformable surfaces from observed real-world interactions, bootstrapped by only a set of simple, inexpensive rigid-body simulation examples. We propose Bounce and Learn, a model that learns end-to-end to predict post-bounce trajectories and infer effective physical parameters starting from sensor inputs. Our model comprises a Physics Inference Module (PIM) and a Visual Inference Module (VIM). VIM learns to infer physical parameters for locations in a scene given a single still image, while PIM learns to model the physical interaction for the prediction task given physical parameters and an observed pre-collision 3D trajectory. We show that our model can account for non-rigid surfaces that deform during collision and, compared to inverting a parametric physics model using hand-designed features, better handles uncertainty in the captured trajectories thanks to end-to-end learning. Moreover, our model can be trained in batch mode over a training set or can learn to adapt in an online fashion by incorporating multiple observed bounces in a given scene. Our effort is a step towards a learnable, efficient real-world physics simulator trained by observing real-world interactions.

To train our model, we introduce a large-scale Bounce Dataset of 5K bouncing trajectories of a probe against surfaces of varying shape and material in everyday scenes, including homes and offices. For our study, we use a spherical foam ball as the probe, as it provides a rich range of interactions with everyday objects and its symmetry allows us to better track and model the physical properties of the complex objects it collides with.
As collision events are transient (we observe contact occurring over \(1/50^{\mathrm{th}}\) of a second), we have collected our dataset using a high-framerate stereo camera. Our dataset is the largest of its kind and goes well beyond simpler setups involving a handful of interacting objects (Bhat et al., 2002; Brubaker et al., 2009; Kyriazis et al., 2011; Monszpart et al., 2016). Note that prior datasets involving human interaction in sports (Bettadapura et al., 2016) require high-level reasoning without much diversity in collision surfaces.

Contributions. Our work demonstrates that an agent can learn to predict physical properties of surfaces in daily scenes and is the first to explore this across a large variety of real-world surfaces, such as sofas, beds, and tables. Our contributions are twofold: (1) we propose a model that is trained end-to-end both for predicting post-bounce trajectories given an observed, noisy 3D point cloud of a pre-bounce trajectory in a scene, and for inferring physical properties (COR and collision normal) given a single still image; and (2) we build a large-scale dataset of real-world bounces in a variety of everyday scenes. We evaluate our model on our collected dataset and show that it outperforms baselines, including trajectory fitting with Newtonian physics, in predicting post-bounce trajectories and inferring physical properties of a scene.

## 2 RELATED WORK

Our goal is related to work that captures and models physical interactions of objects in scenes. While prior work addresses various aspects of our overall goal, none covers all the aspects we seek to capture.

Simulation-only approaches. There have been a number of simulation-only approaches to learning or modeling object interactions and ("intuitive") physics. Examples include learning predictive models for a set of synthetic objects, like billiards (Fragkiadaki et al., 2016), general N-body interaction problems (Battaglia et al., 2016; Chang et al., 2017; Ehrhardt et al., 2017a;b; Watters et al., 2017), and learning bounce maps (Wang et al., 2017). However, most of these approaches operate over simple scenes consisting of a single object or parametric objects. Graphics-based richer 3D environments like AI2-THOR (Zhu et al., 2017a;b) have mainly been explored for navigation and planning tasks.

Visual prediction in toy worlds. Some approaches predict physical interactions in real-world imagery by incorporating a simulator during inference (Wu et al., 2015) or by learning from videos depicting real scenes (Wu et al., 2016) or simulated sequences (Lerer et al., 2016). However, these approaches model simple objects, such as blocks, balls, and ramps, and again make simplifying assumptions regarding COR, mass, and friction. In contrast, our approach exploits physical interactions to estimate the physical properties of everyday objects in real-world scenes.

Visual prediction in real-world scenes. Other approaches seek to estimate geometric and physical properties in everyday scenes from visual appearance. Examples include estimating the geometric layout of a scene (e.g., depth and surface normals from RGB-D data) (Bansal et al., 2016; Eigen & Fergus, 2015; Wang et al., 2015), material, texture, and reflectances (Bell et al., 2013), and qualitative densities of objects (Gupta et al., 2010).
Instead of estimating physical properties, some approaches make visual predictions by aligning Newtonian scenarios to images/videos by hand (Mottaghi et al., 2016a), using simulated physical models fitted to RGB-D data (Mottaghi et al., 2016b), or learning to synthesize future video frames (Chao et al., 2017; Vondrick & Torralba, 2017; Walker et al., 2016; Xue et al., 2016). A recent approach predicts the sounds of everyday objects from simulations of 3D object models (Zhang et al., 2017).

Real-world interaction and capture. Our work is inspired by models of physical properties in the real world built via interaction. Examples include parametric models of a human to model forces or affordances (Brubaker & Fleet, 2008; Brubaker et al., 2009; 2010; Zhu et al., 2015; 2016), grasping objects (Mann et al., 1997), multi-view video sequences of a ball on a ramp (Kyriazis et al., 2011), video sequences of known objects in free flight (Bhat et al., 2002), and video sequences depicting collision events between pairs of known real-world objects (Monszpart et al., 2016). We seek to generalize beyond pre-specified objects to unknown, everyday objects with complex geometries and physical properties in real-world scenes. More closely related to our work are approaches that repeatedly interact with real-world environments, such as hitting objects to learn audio-visual representations (Owens et al., 2016), crashing a drone to learn navigation (Gandhi et al., 2017), and repeated pokes and grasps of a robotic arm to learn visual representations for object manipulation (Agrawal et al., 2016; Levine et al., 2016; Pinto & Gupta, 2016; Pinto et al., 2016). Our goal is to scale learning and reasoning about dynamics starting with interactions via object bounces in everyday scenes.

## 3 BOUNCE AND LEARN MODEL AND BOUNCE DATASET

This section introduces our Bounce and Learn model and Bounce Dataset. Please see Appendix A for more details on the underlying physics governing bounces. Our overall model is shown in Figure 2 (left) and consists of a Physics Inference Module (PIM) and a Visual Inference Module (VIM). Having separate physics and visual modules allows for pre-training PIM using simulation data and joint training using real-world data. We describe each module in the following subsections.

![](images/3_0.jpg)
<center>Figure 2: System overview. Our model (left) consists of a Physics Inference Module (top-right) and a Visual Inference Module (bottom-right). See text for more details.</center>

### 3.1 PHYSICS INFERENCE MODULE (PIM)

PIM is shown in Figure 2 (top-right). Given a ball's incoming 3D trajectory and the physical parameters of the bounce surface, the goal of PIM is to predict the outgoing 3D trajectory of the ball after it bounces off the surface. We assume the ball's trajectory is a sequence of point clouds given by the stereo camera. Let \(\mathcal{T}_{i}\) and \(\mathcal{T}_{o}\) be the pre- and post-bounce point cloud trajectories, respectively, and let \(\rho\) be the physical parameters of the probed collision surface: the effective collision normal and coefficient of restitution (COR). For a non-rigid collision, the input \(\rho\) represents the values of effective physical parameters that produce the aggregated effect of the impact process. We seek to learn the mapping \(\mathcal{T}_{o} = \mathcal{F}(\mathcal{T}_{i},\rho)\). One challenge is how to represent the trajectory.
While we could try to fit a static, reconstructed 3D model of the ball to the trajectory and track the centers, we found that this required manual parameter tuning. Moreover, such an approach does not easily extend to other objects, particularly if the object undergoes deformation. We instead represent the trajectory directly from sensor inputs by jointly learning an embedding in addition to predicting the post-bounce trajectory. Let \(t_{i}\) and \(t_{o}\) be the encoded versions of the trajectories with embedding functions \(\mathcal{E}_{i}\) and \(\mathcal{E}_{o}\), e.g., \(t_{i} = \mathcal{E}_{i}(\mathcal{T}_{i})\). Since PIM uses embedded trajectories, we can write the mapping \(\mathcal{F}\) as the composition

\[\mathcal{T}_{o} = \mathcal{F}(\mathcal{T}_{i},\rho) = \mathcal{E}_{o}^{-1}\left(f(\mathcal{E}_{i}(\mathcal{T}_{i}),\rho)\right), \quad (1)\]

where the core physics engine \(f\) maps encoded pre-bounce trajectories \(t_{i}\) and \(\rho\) to predicted encoded post-bounce trajectories \(t_{p} = f(t_{i},\rho)\), and \(\mathcal{E}_{o}^{-1}\) decodes \(t_{p}\) to predict the final \(\mathcal{T}_{o}\).

Core physics engine \(f\). We seek an architecture that is flexible and can model complex interactions, such as deformation. We use two FC layers to encode the physical parameters \(\rho\) as a vector (experimentally observed to converge faster than using \(\rho\) directly), which is concatenated with the input encoded trajectory \(t_{i}\) and followed by two FC layers to estimate the encoded predicted trajectory. See Appendix D for details. We observe in our experiments that the core physics engine \(f\), when trained on real-world data, can model interactions more complex than rigid-body collisions.

Trajectory encoder \(\mathcal{E}\). We encode trajectories via an architecture inspired by PointNet (Qi et al., 2016). Our encoder takes as input a \(T\times N\times 3\) array containing a lexicographically sorted list of 3D points, where \(T\) is the number of time steps in the trajectory and \(N\) is the number of 3D points at each time step. A 128-dimensional vector is generated for each time step; these are concatenated and processed by two fully connected (FC) layers, followed by \(L_{2}\) normalization, resulting in a 64-dimensional vector. See Appendix C for more details.

Trajectory decoder \(\mathcal{E}^{-1}\). While an encoded trajectory could be decoded by a deconvolution network predicting an output point-cloud trajectory, learning a point-cloud decoder in our setting is a non-trivial task (Fan et al., 2017); we leave this to future work. Instead, we use a non-parametric decoder. Specifically, we build a database of 10K simulated post-collision trajectories \(\mathcal{T}_{o}\) and their encodings \(t_{o} = \mathcal{E}_{o}(\mathcal{T}_{o})\). We use the predicted encoded trajectory \(t_{p}\) as a query and find the nearest \(t_{o}\) to estimate \(\mathcal{T}_{o}\).
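A minimal sketch of this non-parametric decoding (NumPy; the database arrays are assumed precomputed, and the dot product doubles as cosine similarity because encodings are \(L_2\)-normalized):

```python
import numpy as np

def decode_trajectory(t_p, database_t_o, database_T_o):
    """Return the simulated post-bounce trajectory whose encoding is nearest t_p."""
    # database_t_o: (10000, 64) encodings; database_T_o: the matching trajectories.
    similarities = database_t_o @ t_p     # (10000, 64) @ (64,) -> (10000,)
    return database_T_o[int(np.argmax(similarities))]
```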
We found that additional supervision for the trajectory encoders improves the performance of the model. We achieve this by a reconstruction network \(\mathcal{P}(t_{i}, t_{o})\) which explicitly ensures that the physical parameter vector \(\rho\) can be recovered given the ground truth encoded pre- and post- bounce trajectories \(t_{i}\) and \(t_{o}\) . We use a 2- layered neural network for \(\mathcal{P}\) ; note that this part of the model is not shown in Figure 2 since it is only used in training and not inference. We optimize triplet and squared- \(L_{2}\) losses for each triplet \((t_{p}, t_{o}, \{n_{j}\})\) corresponding to \(t_{i}\) , \[\mathcal{L}_{\mathrm{PIM}} = \max \left(d(t_{p}, t_{o}) - d(t_{p}, n_{j}) + m, 0\right) + \| \rho - \mathcal{P}(t_{i}, t_{o})\|_{2}^{2}, \quad (2)\] where \(d\) is cosine distance and \(m\) is a scalar parameter for the margin. Inference. The PIM could be potentially used for two tasks: (a) predicting post- bounce trajectories given physical parameters \(\rho\) and pre- bounce trajectory \(\mathcal{T}_{i}\) ; (b) estimating physical parameters \(\rho\) given pre- bounce trajectory \(\mathcal{T}_{i}\) and post- bounce trajectory \(\mathcal{T}_{o}\) . The first form of prediction, which is also used in our experiments, is straightforward: we estimate the encoded predicted trajectory \(t_{p}\) and then use the non- parametric decoding described above. For the second task, a grid search or optimization based strategy could be used to search the range of possible physical parameters \(\rho\) such that the encoded predicted trajectory \(t_{p}\) is closest to the encoding of post- collision trajectory \(t_{o}\) . ### 3.2 VISUAL INFERENCE MODULE (VIM) Predicting the outcome of a bounce in a real- world scene requires knowledge of the physical parameters \(\rho\) at the location of the bounce. While PIM allows us to model physical interactions, it does not provide a means to account for knowledge of the visual scene. We would like to integrate a module that can reason over visual data. To this end, we propose a Visual Inference Module (VIM) that is designed to infer the physical parameters of a scene from the visual input. We demonstrate experimentally that VIM can generalize to novel scenes for inferring physical properties and also predicting post- bounce trajectories when used in conjunction with a pretrained PIM. Moreover, in scenarios where multiple interactions with a scene is possible, we show that VIM and PIM can be jointly used to update predictions about the scene by observing the multiple- bounce events. This scenario is important since visual cues only provide limited evidence about the underlying physical properties of surfaces, e.g., two sofas that look the same visually could have different physical properties. The proposed VIM, shown in Figure 2 (bottom- right), is a convolutional neural network (CNN) that takes as input the image of a scene and outputs the physical parameters for each location in the scene. We represent this by the function \(\mathcal{V}\) , which is an AlexNet architecture (Krizhevsky et al., 2012) up to the \(5^{\mathrm{th}}\) convolution layer, followed by \(3 \times 3\) and \(1 \times 1\) convolution layers. Each location in the output map \(\mathcal{V}(\mathcal{I})\) for an image \(\mathcal{I}\) contains a four- dimensional vector corresponding to the coefficient of restitution and collision normal (normalized to unit length). Training. 
Training. Training VIM alone is not directly possible, since ground truth for the output physical properties cannot be easily collected. These physical properties are also closely tied to the assumed physics model. Therefore, we use PIM to train VIM. For each bounce trajectory, given the impact location \((x, y)\) in the image, the physical properties can be estimated using VIM by indexing \(\mathcal{V}(\mathcal{I})\) to extract the corresponding output feature. We refer to this as \(\rho_{x, y}\). PIM can use the estimated \(\rho_{x, y}\), along with the encoding of the pre-collision trajectory \(t_{i}\), to predict the encoding of the post-collision trajectory. Our loss is the sum of the cosine distance between the predicted and ground-truth post-collision trajectory encodings \(t_{o}\), and the squared-\(L_{2}\) distance of \(\rho_{x, y}\) from the parameters estimated with \(\mathcal{P}\) (described in Section 3.1), which helps constrain the outputs to a plausible range. We also add a regularization term to encourage spatial smoothness:

\[\mathcal{L}_{\mathrm{Joint}} = d(t_{o}, f(t_{i}, \rho_{x, y})) + \| \rho_{x, y} - \mathcal{P}(t_{i}, t_{o})\|_{2}^{2} + \sum_{x, y} \sum_{i \in \{0, 1\}} \sum_{j \in \{0, 1\}} \| \rho_{x, y} - \rho_{x + i, y + j}\|_{2}^{2}, \quad (3)\]
Each individual bounce video depicts the probe following a pre- collision trajectory, its impact with a surface, and its post- collision trajectory. On average each video contains 172 frames containing the ball. As shown in Figs. 1 and 5, the environments contain diverse surfaces with varying material properties. Each sample in the dataset consists of the RGB frames, depth maps, point clouds for each frame, and estimated surface normal maps. Since we are interested in the trajectory of the ball, we first perform background subtraction (Stauffer & Grimson, 1999) on the frames to localize the ball followed by RANSAC fitting (Fischler & Bolles, 1981) of a sphere on the point cloud corresponding to the foreground mask. This helps reject any outlier foreground points and collect a point cloud approximately corresponding to the ball in each frame. Note that in each frame the point cloud only depicts one viewpoint of the ball. Simulation data. We bootstrap learning by augmenting our captured real- world data with simulation data. We simulate a set of sphere- to- plane collisions with the PyBullet Physics Engine (Coumans & Bai, 2016–2017). To match our captured frame rate, we set simulation time steps at 0.01 seconds. We initialize sphere locations and linear velocities randomly and set angular velocities and friction coefficients to zero. Collision surfaces are oriented randomly and COR values are sampled uniformly in the feasible range [0,1]. Each simulation returns pre- and post- bounce trajectories of the sphere for the sampled orientation and COR. To make the synthetic data consistent with our captured dataset, we create point clouds per simulation by picking a viewpoint and sampling only visible points on the sphere at each time step. ## 4 EVALUATION ### 4.1 VISUAL FORWARD PREDICTION For the forward- prediction task, we split our dataset into training, validation, and test sets containing 4503, 196, and 473 trajectories, respectively. There are no common scenes across these sets. We found including a mix of 3:1 synthetic- to- real data in the mini- batches achieves the best balance between prediction accuracy and interpretable physical parameter inference. Qualitative prediction results are shown in Figure 3. Notice that we effectively predict the post- bounce trajectory when the ball bounces off a variety of different surfaces. To quantitatively evaluate the model's predictions, we train on the training and validation sets and test on the test set. We report the \(L_{2}\) distance in world <--- Page Split ---> Table 1: Forward prediction, collision normal estimation and COR estimation (test set). We evaluate our model and compare to baselines on the task of forward prediction, collision normal and COR estimation. We report median distance in centimeters to observed post-bounce trajectories for each experimental setting. <table><tr><td>Models</td><td>Dist</td><td>Dist Est Normal</td><td>Dist Est COR</td><td>Dist Est Normal + COR</td><td>% Normals with 30° of Est Normal</td><td>COR Median Absolute Error</td></tr><tr><td>1. Parabola encoding</td><td>26.3 ± 0.5</td><td>34.1 ± 0.7</td><td>26.4 ± 0.9</td><td>33.5 ± 1.6</td><td>17.52 ± 2.22</td><td>0.179 ± 0.006</td></tr><tr><td>2. Center encoding</td><td>23.1 ± 0.9</td><td>21.2 ± 0.3</td><td>23.2 ± 0.7</td><td>21.2 ± 0.2</td><td>23.41 ± 2.02</td><td>0.178 ± 0.013</td></tr><tr><td>3. Pretrain. VIM + IN</td><td>40.1 ± 6.0</td><td>34.1 ± 5.4</td><td>41.0 ± 6.3</td><td>34.4 ± 5.5</td><td>-</td><td>-</td></tr><tr><td>4. 
Ours</td><td>21.3 ± 0.9</td><td>21.2 ± 0.8</td><td>21.4 ± 0.7</td><td>20.6 ± 0.6</td><td>24.08 ± 3.82</td><td>0.168 ± 0.018</td></tr><tr><td>5. Ours + Est. normals</td><td>22.7 ± 1.0</td><td>18.7 ± 0.9</td><td>22.7 ± 0.7</td><td>18.4 ± 0.9</td><td>50.14 ± 1.26</td><td>0.159 ± 0.016</td></tr></table> ![](images/6_0.jpg) <center>Figure 3: Predicted post-bounce trajectories in novel scenes. (left) Input pre-bounce trajectories. (center) Observed post-bounce trajectories. (right) Our predicted post-bounce trajectories. Additional results in Appendix. </center> coordinates between the predicted ball's center post- bounce and the observed ball's center extracted from the depth data at time step 0.1 seconds post- bounce (Table 1 column "Dist"). Baselines. The point cloud trajectory encoders in PIM are a generic solution to encoding raw trajectory data. Such a model allows applying the same model to various probe objects. An alternative model that provides less flexibility is replacing the PointNet encoder with a "center encoding" model, which involves extracting the center- of- mass 3D points obtained by background subtraction and extracting the mean point of the sphere point cloud at each frame for 10 frames before and after the collision. We encode the 30- D vector of \((x,y,z)\) coordinates using a two- layer neural network and train PIM with this encoding instead of the PointNet model. As a second baseline (parabola encoding), we fit a parabola to the set of centers and use that as the representation for the trajectory. We use the parameters of a least squares- fitted parabola as input to a two- layer neural network that outputs the encoded representation of the trajectory. We also experimented with Interaction Networks (IN) (Battaglia et al., 2016) as an alternative for the PIM. We use the output physical parameters of the VIM from our best model to predict post- bounce trajectories using IN (Pretrain. VIM + IN). More details about this model and direct comparison to PIM are provided in Appendix L. Note that the IN model is not trained jointly with the VIM. Discussion and model ablations. Notice that our Bounce and Learn model out- performs the presented baselines (Table 1, top), likely due to the errors in hand- crafted estimates of the ball center in noisy stereo depth data. We report ablations of different training strategies for our model in Appendix F. We also quantify the effect of the spatial regularizer in Appendix G. Leveraging sensor- estimated collision normals and COR. We study how our model can benefit if the collision normal or COR is known. As collision normals are loosely tied to the surface normals, we train our best model (row 4) with the sensor- estimated normals by adding an additional cosine- distance loss between the VIM- predicted collision normals and the sensor normals (row 5). Furthermore, we evaluate the different models when the sensor- estimated normals are available at test time (col. "Dist Est Normal"). Notice how most of the models benefit from this information. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 4: Inferred COR and collision normals. Given a single still image (first col.), we show inferred COR (second col., warmer colors indicate higher COR values), predicted collision normals (third col., colors indicate normal direction) from VIM and stereo camera surface normals (last col.). </center> We also investigate whether knowing a sensor- estimated COR at test time is beneficial. 
The COR can be estimated from pre- and post- collision velocities under the assumption of a rigid- body model. We estimate these velocities from sensor data of the ball in the frames 0.05 seconds before and after impact. Note that these are noisy estimates due to the complexity of the collisions in our dataset and errors of depth estimation from the stereo camera. We evaluate prediction accuracy when the models have access to the sensor- estimated COR during testing (col. "Dist Est COR"). We observe that for the best model, sensor- estimated COR combined with sensor normals (col. "Dist Est Normal+COR") improves accuracy over sensor- estimated COR or normals alone. ### 4.2 INFERRING PHYSICAL PROPERTIES We show qualitative results of inferred COR values and collision normals from VIM in Figure 4 (more in Appendix I). We also compare our collision normal estimates to the surface normals estimated from the stereo camera. Notice that the soft seats have lower COR values while the walls have higher values, and the normals align with the major scene surfaces. In Table 1, we evaluate the percentage of VIM's collision normal predictions which align within \(30^{\circ}\) of the sensor- estimated surface normal - the standard evaluation criterion for the NYUv2 surface normal estimation task (Silberman et al., 2012). We observe that training the trajectory encoder (row 4) leads to a model that is not constrained to predict interpretable effective physical parameters (since the input domain of \(\mathcal{P}\) changes). Adding sensor- estimated normals during training improves normal prediction accuracy while maintaining good forward- prediction accuracy (row 5). Finally, we evaluate the inferred COR from VIM. As noted in Section 4.1, the estimated COR from the stereo camera is noisy, but is the best available ground- truth. For our evaluation, we compare our inferred COR to the estimated COR from the stereo camera using median absolute error in Table 1 (last column). Notice how our best model (row 4) out- performs the baseline and ablation models. ## 5 DISCUSSION We have introduced a new large- scale dataset of real- world bounces and have demonstrated the ability to predict post- bounce trajectories and infer physical properties of the bounces for a variety of everyday surfaces via our Bounce and Learn model. The collection of our Bounce Dataset facilitates studying physical properties not addressed by our model and future applications. Example properties include friction, object spin, surface deformation, and stochastic surfaces (e.g., corners, pebbled surface). Detection of collisions robustly and performing rollouts of predictions can be interesting directions towards practical applications. Future applications also include the ability to transfer the recovered physical properties to 3D shapes and simulating 3D scenes with real- world or cartoon physics. ## 6 ACKNOWLEDGEMENTS This research is partly sponsored by ONR MURI N000141612007 and the ARO under Grant Number W911NF- 18- 1- 0019. Abhinav Gupta was supported in part by Okawa Foundation. <--- Page Split ---> ## APPENDIX A PRELIMINARIES We review some of the fundamentals and challenges in modeling restitution during collision. Physics has long adopted simple algebraic collision laws that map a pre- collision velocity \(v^{- }\) , along with a collision normal \(n\) , to a post- collision velocity \(v^{+}\) (Chatterjee & Ruina, 1998). 
Among these, perhaps the most widely applied collision law is the coefficient of restitution, \(\mathrm{COR} = \frac{\langle n, v^{+}\rangle}{\langle n, v^{-}\rangle} \in [0, 1]\). Here \(\mathrm{COR} = 0\) gives a purely inelastic impact (no rebound) and \(\mathrm{COR} = 1\) a fully elastic impact that dissipates no energy. Note that the notion of a collision normal idealizes the impact process with a well-defined normal direction. As soft objects collide (e.g., a leather sofa), surface normals vary throughout the process; the collision normal \(n\) encodes the averaged effect of the range of normal directions experienced throughout an impact.

However, when it comes to physically modeling the real world, it is important to note that (a) there is no single, physically correct COR for a material, and (b) there is no valid geometric surface normal that encodes the collision-normal behavior for a pair of colliding objects (Stewart, 2011). At best, a COR value is valid only locally, for a unique orientation between a pair of colliding geometries (Stoianovici & Hurmuzlu, 1996). For example, two identical rods dropped with vertical and horizontal orientations to the ground will give drastically different rebounds. If, however, a sphere with uniform material is one of the objects in a collision pair, symmetry ensures that COR will effectively vary only across the surface of the second, more complex collision surface. We leverage this in our capture setup by adopting a spherical probe object to extract a map of varying COR across complex collision surfaces in everyday environments. We seek COR values and collision normals that effectively encode the post-impact response.

While we could hypothetically attempt to identify materials and use material fact sheets as look-up tables for physical properties like COR, such information would be of limited value in any practical setting. Recall that recorded COR values are generally valid only for a limited range of geometries – essentially just for perfectly rigid, sphere-to-half-plane collisions. The real world violates these assumptions at almost every turn. COR values and collision normals vary with every contact configuration and location on an object (Wang et al., 2017). There is never a single COR value characterizing collision responses against real-world objects, and hence there is no database (and thus no ground truth) of materials and their COR.

## APPENDIX B BOUNCE DATASET

![](images/12_0.jpg)
<center>Figure 5: Our collected Bounce Dataset of 5K real-world bounces. Our dataset spans a variety of real-world everyday scenes. In each image we mark the location of the observed bounce and show the estimated coefficient of restitution at the top-left. Larger values correspond to harder surfaces such as floors and countertops.</center>

We investigate whether we can use active physical interactions to infer the physical properties of objects. We explore this with a probe ball bounced off of complex, everyday surfaces to infer the <--- Page Split ---> physical properties of these surfaces. Each individual bounce consists of the probe following a pre-collision trajectory, its impact with a surface, and its post-collision trajectory.
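Given such a pair of trajectories, the algebraic collision law above yields a simple sensor-based COR estimate. A minimal Python sketch (our own illustrative helper, not code from the paper; it uses the magnitudes of the normal velocity components to sidestep sign conventions):

```python
import numpy as np

def estimate_cor(v_pre, v_post, n, eps=1e-8):
    """Estimate the coefficient of restitution from the normal components
    of the velocities just before and just after impact.

    v_pre, v_post: (3,) velocities before/after impact (m/s).
    n: (3,) collision normal (need not be unit length).
    """
    n = n / (np.linalg.norm(n) + eps)           # normalize the collision normal
    vn_pre = np.dot(n, v_pre)                   # normal speed into the surface
    vn_post = np.dot(n, v_post)                 # normal speed out of the surface
    cor = abs(vn_post) / (abs(vn_pre) + eps)    # ratio of normal speeds
    return float(np.clip(cor, 0.0, 1.0))        # clamp to the feasible range

# Example: a ball hits a surface with normal +z at 2 m/s and rebounds at 1.2 m/s.
print(estimate_cor(np.array([0.5, 0.0, -2.0]),
                   np.array([0.5, 0.0, 1.2]),
                   np.array([0.0, 0.0, 1.0])))  # -> 0.6
```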
In addition, we augment our collected real- world bounce dataset with a dataset of simple simulated bounces. Probe object. As our focus is on probing the physical and geometric properties of complex surfaces in a scene, we apply a simple, spherical foam ball (radius \(\sim 7 \mathrm{cm}\) .) as our probe object for all our captured bounce sessions. A ball's post- collision trajectory is largely determined by the properties of the collision surface and the velocity with which the ball hits this surface. Symmetry of the ball eliminates variations in the geometry of the impact location on the probe and so its effect upon the resulting bounce. Capture setup. Our analysis of bounces requires knowledge of both the pre- bounce and post- bounce trajectories of the probe object in 3D. Hence, we use the Stereolabs ZED stereo camera to capture videos at 100fps VGA resolution. We observed that the impact of the probe with a surface often takes around 1/50th of a second. The high framerate of our camera ensures that frames depicting bounces are captured. For each bounce video, we ensure that the camera is stationary to facilitate accurate reconstruction of the trajectory. The dataset consists of 5172 stereo videos of bounces with surfaces in office and home environments. On average each video contains 172 frames containing the ball. As shown in Figure 5, these environments capture diverse surfaces with varying material properties. Each sample in the dataset consists of the RGB frames, depth maps, point clouds for each frame, and estimated surface normal maps. Since we are interested in the trajectory of the ball, we first perform background subtraction Stauffer & Grimson (1999) on the frames to localize the ball followed by RANSAC fitting Fischler & Bolles (1981) of a sphere on the point cloud corresponding to the foreground mask. This helps us reject any outlier foreground points and collect a point cloud corresponding to the ball in each frame. Note that in each frame only one viewpoint of the ball is visible. Hence, the point cloud for each frame contains a partial sphere. Simulation data. We bootstrap our learning by augmenting our captured real- world data with very simple simulation data. We simulate a set of sphere- to- plane collisions with the PyBullet Physics Engine Coumans & Bai (2016- 2017). To match our capture frame rate we set simulation time steps at 0.01 seconds. We initialize sphere locations and linear velocities randomly while angular velocities and friction coefficients are set to zero. Collision surfaces are likewise given a random normal orientation and COR value in the feasible range [0,1], sampling from a uniform distribution over the range of allowed values. Each simulation returns pre- bounce and post- bounce trajectories of the sphere for the sampled COR and normal. Finally, to make this synthetic data consistent with our captured dataset we create point clouds per simulation by picking a viewpoint and sampling only visible points on the sphere at each time step. ## APPENDIX C TRAJECTORY ENCODERS The Physics Inference Module described in Section 3.2 of the main text takes as input an encoded representation of a trajectory. Due to the significant overlap of our architecture with PointNet Qi et al. (2016), we only provide a brief description in the main text. In this section, we provide a more detailed description of the encoder model to facilitate replication of our results. Note that code for this model will be made publicly available with the final version of the paper. 
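Ahead of that release, the architecture described next can be condensed into a short PyTorch sketch. This is an illustrative reconstruction: the 1×1-convolution point MLP and the max-pooling step are our assumptions, since the exact layer configuration is only given in Figure 6.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryEncoder(nn.Module):
    """Encode a trajectory of T point clouds (N points each) into one vector.
    Layer sizes follow the text where stated; the rest are assumptions."""

    def __init__(self, T=10, enc_dim=64):
        super().__init__()
        # Shared per-point MLP, implemented as 1x1 convolutions (PointNet-style).
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        # Two fully connected layers over the concatenated per-timestep features.
        self.fc = nn.Sequential(
            nn.Linear(128 * T, 256), nn.ReLU(),
            nn.Linear(256, enc_dim),
        )

    def forward(self, x):
        # x: (B, T, N, 3) -- N lexicographically sorted points per timestep.
        B, T, N, _ = x.shape
        x = x.view(B * T, N, 3).transpose(1, 2)     # (B*T, 3, N)
        feat = self.point_mlp(x).max(dim=2).values  # (B*T, 128), pool over points
        feat = feat.view(B, T * 128)                # concatenate timesteps
        return F.normalize(self.fc(feat), dim=1)    # L2-normalized encoding

enc = TrajectoryEncoder()
t_i = enc(torch.randn(2, 10, 500, 3))  # e.g., a batch of 2 pre-bounce trajectories
```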
Each trajectory (pre-bounce or post-bounce) in our dataset consists of point clouds of the ball at \(T\) timesteps. At each timestep, we first sample \(N\) points from the point cloud and lexicographically sort them according to their \(x\), \(y\) and \(z\) coordinates (in that order of priority). We observed that random and uniform sampling achieve similar results. This sampling gives us a \(T \times N \times 3\) array of points for each trajectory. The lexicographic sorting provides a partial spatial relationship between consecutive points in the array.

For each timestep, the corresponding \(N \times 3\) array is processed by a sequence of convolution and ReLU layers as shown in Figure 6. This gives us a single \(1 \times 128\)-dimensional vector for each timestep. These per-timestep vectors are concatenated to give us a \(1 \times 128T\)-dimensional vector, which is processed by two fully connected layers followed by \(L_2\) normalization, giving us a 256-dimensional vector.

<--- Page Split --->

![](images/14_0.jpg)
<center>Figure 6: Our proposed encoder architecture takes as input a sequence of length \(T\) of point cloud data, each containing \(N\) points, and outputs a single vector.</center>

We observed that using 64 dimensions as the encoding size leads to the best results. In our implementation, we set \(T = 10\) and \(N = 500\).

## APPENDIX D CORE PHYSICS ENGINE

The Core Physics Engine described in Section 3.2 is the component of the PIM that performs the prediction task. Given input physical parameters \(\rho\) and an encoded pre-bounce trajectory \(t_{i}\), the Core Physics Engine predicts the encoded post-bounce trajectory \(t_{o}\). This is done by first encoding the input physical parameters into a suitable latent representation \(v_{\rho}\) (a \(1 \times 32\) vector in our implementation) using two fully connected layers. We observed that increasing the dimensionality of the intermediate representation \(v_{\rho}\) beyond \(1 \times 32\) does not improve results, while sizes below \(1 \times 32\) lead to a drop in performance. The encoded physical parameters \(v_{\rho}\) are then concatenated with the encoded pre-bounce trajectory \(t_{i}\). This concatenated vector is used to predict the post-bounce trajectory encoding \(t_{o}\) using two fully connected layers. This pipeline, along with the sizes of the intermediate representations, is shown in Figure 7.

![](images/14_1.jpg)
<center>Figure 7: Our proposed Core Physics Engine takes as input the encoded pre-bounce trajectory \(t_{i}\) along with the physical parameters of the collision surface \(\rho\) to predict the encoded post-bounce trajectory \(t_{o}\).</center>

<--- Page Split --->

## APPENDIX E TRAINING DETAILS

## E.1 PRETRAINING THE PIM

The Physics Inference Module (PIM) is pretrained using simulation data as described in Section 3.2. To optimize the objective function given in Equation 2 of the main text, we use the Adam optimizer. We update the parameters of the PIM using a batch size of 32, an initial learning rate of 0.01 and a weight decay of 0.0005. The learning rate is dropped by a factor of 10 after every 32000 iterations. The training is run for a total of 96000 iterations.

## E.2 TRAINING VIM+PIM

The Visual Inference Module (VIM) is trained jointly with the PIM using the captured data as described in Section 3.3. As for the PIM, the objective presented in Equation 3 of the main text is optimized using the Adam optimizer. We update the parameters of the PIM and VIM using a batch size of 32, an initial learning rate of 0.001 and a weight decay of 0.0005.
The learning rate is dropped by a factor of 10 after every 8000 iterations. We observe that the training converges after 24000 iterations.

## APPENDIX F ABLATIVE STUDY: TRAINING VARIOUS COMPONENTS OF THE BOUNCE AND LEARN PIPELINE

We conducted an ablative study on training the various components of the Bounce and Learn pipeline. These results are presented in Table 2. First, we consider pre-training PIM (core physics engine and trajectory encoder) with only synthetic data and holding it fixed while training VIM on real data (row 1). In other words, PIM only sees synthetic data during training and is upper-bounded by the rigid-body Newtonian physics simulation. Next, we consider pre-training PIM on synthetic data and fine-tuning it during VIM training on real data (row 2). Finally, we consider pre-training PIM on synthetic data and fine-tuning only the core physics engine (holding the trajectory encoder fixed) during VIM training on real data (row 3).

Table 2: Bounce and Learn ablative evaluation (val set). We evaluate our models in different training settings on the tasks of forward prediction, collision normal and COR estimation. We report the median distance in centimeters to observed post-bounce trajectories for each experimental setting. Please see the text for details.

<table><tr><td>Training Setting</td><td>Dist</td><td>Dist Est Normal</td><td>Dist Est COR</td><td>Dist Est Normal + COR</td><td>% Normals within 30° of Est Normal</td><td>COR Median Absolute Error</td></tr><tr><td>1. Fix core and traj. enc. (synth only)</td><td>37.2±1.5</td><td>29.6±0.4</td><td>38.3±0.4</td><td>29.4±0.1</td><td>28.72±4.87</td><td>0.193±0.020</td></tr><tr><td>2. Train core and traj. enc.</td><td>20.4±0.9</td><td>31.7±2.8</td><td>20.4±2.5</td><td>31.4±2.0</td><td>13.01±1.26</td><td>0.134±0.008</td></tr><tr><td>3. (Ours) Train core, Fix traj. enc.</td><td>19.9±1.8</td><td>18.1±0.8</td><td>20.5±1.0</td><td>17.8±0.7</td><td>34.98±3.43</td><td>0.123±0.009</td></tr></table>

We observe that fine-tuning PIM on real data results in better prediction accuracy than training PIM on synthetic data alone. We also observe that our best model requires fine-tuning the core physics engine while keeping the trajectory encoder fixed. This suggests that the core physics engine learns effective physical parameters beyond the parameters used in rigid-body simulation pretraining. We analyze this hypothesis further in Appendix H.

## APPENDIX G ABLATIVE STUDY OF SPATIAL REGULARIZATION

Here we analyze the effect of the spatial regularization term presented in Equation 3. We evaluate forward prediction, normal estimation and COR estimation errors for our proposed model with and without spatial regularization ("Reg."). These results are presented in Table 3.

<--- Page Split --->

<table><tr><td>Models</td><td>Dist</td><td>Dist Est Normal</td><td>Dist Est COR</td><td>Dist Est Normal + COR</td><td>% Normals within 30° of Est Normal</td><td>COR Median Absolute Error</td></tr><tr><td>1. (Ours) Point Cloud-based</td><td>20.5±1.1</td><td>18.5±0.6</td><td>20.3±1.3</td><td>18.2±0.5</td><td>34.14±2.87</td><td>0.127±0.008</td></tr><tr><td>2. (Ours) Point Cloud-based + Spatial Reg.</td><td>19.9±1.8</td><td>18.1±0.8</td><td>20.5±1.0</td><td>17.8±0.7</td><td>34.98±3.43</td><td>0.123±0.009</td></tr></table>

Table 3: Spatial Regularization Ablation (val set). We present an ablative study of the effect of spatial regularization on the Bounce and Learn model. We report the median distance in centimeters to observed post-bounce trajectories for each experimental setting.
We also evaluate the predicted normals and CORs.

## APPENDIX H ANALYZING THE EFFECT OF TRAINING PIM ON REAL DATA

In the results presented in Section 4 of the main text, we observe that jointly training the core physics engine leads to significant improvements in performance on both tasks – forward prediction and physical parameter estimation. Here we analyze this further by looking at the forward prediction errors for trajectories within different ranges of sensor-estimated COR (calculated using the hand-crafted approach described in Sec 4.1). We present these results in Figure 8.

![](images/16_0.jpg)
<center>Figure 8: Forward prediction errors for trajectories belonging to different ranges of estimated COR. An interesting observation is that the improvement in performance is significantly higher in the lower COR ranges.</center>

In general, lower COR values point to collisions that are more non-rigid, while higher-COR collisions are closer to rigid-body collisions. While there are exceptions to this intuition (for example, trampolines are high-COR and non-rigid), such examples are rare in the Bounce dataset. We observe from the results in Figure 8 that the improvement in performance is significantly higher in the lower COR ranges than in the higher ones. This supports the hypothesis that jointly training the PIM leads to better modeling of non-rigid collisions than the pretrained, rigid-body-simulation-based PIM. Furthermore, we see that training the PIM also improves forward prediction results in all ranges of COR. It is also worth noting that the predictions are generally more sensitive to variations in the higher COR region, since small variations in velocities can lead to larger distance errors. This explains the increasing error of the "Only Core Trained" model as the sensor-estimated COR increases.

<--- Page Split --->

## APPENDIX I ADDITIONAL QUALITATIVE RESULTS

We provide additional visualizations of the predictions from the VIM in Figure 9. We also provide interesting visualizations of the trajectories predicted by the Bounce and Learn model in Figure 10. Row 1 of Figure 10 shows a case where the restitution is predicted to be slightly too high. Row 3 of Figure 10 shows a case where the output seems physically plausible but does not match the true trajectory, since it fails to account for the spin of the ball.

![](images/17_0.jpg)
<center>Figure 9: Inferred COR and collision normals. Given a single still image (first col.), we show inferred COR (second col., warmer colors indicate higher COR values), predicted collision normals (third col., colors indicate normal direction) from VIM and stereo camera surface normals (last col.).</center>

![](images/17_1.jpg)
<center>Figure 10: Predicted post-bounce trajectories in novel scenes. (left) Input pre-bounce trajectories. (center) Observed post-bounce trajectories. (right) Our predicted post-bounce trajectories.</center>

<--- Page Split --->

## APPENDIX J ONLINE INFERENCE FROM BOUNCES AND VISUAL CUES

In the main text, we demonstrate the efficacy of the VIM+PIM framework for inferring physical properties and predicting bounce outcomes in novel environments. However, in numerous robotics applications, an agent can usually interact with a scene to infer its physical properties. During such interactions, visual cues can also be leveraged to generalize these inferences across a scene. For example, a bounce of a ball on a wall can inform our inference of the physical properties of the rest of the wall.
Therefore, we explore an online learning framework, where our estimates of the physical parameters in a scene are updated online upon observing bounces. First, we pretrain the PIM as described in Section 3.2. For every bounce trajectory \((\mathcal{T}_{i},\mathcal{T}_{o})\) observed at scene location \((x,y)\) , we use the VIM to estimate the physical parameters \(\rho_{x,y}\) . The VIM is then updated until convergence using the same objective function as the VIM training from Equation 3 (also presented here). \[\mathcal{L} = d(t_{o},f(t_{i},\rho_{x,y})) + ||\rho_{x,y} - \mathcal{P}(t_{i},t_{o})||_{2}^{2}\] The loss is optimized incrementally using all observed bounces so far. Therefore, for each scene, we can train incrementally by interacting with the scene and updating the previously learned model. The PIM is then kept fixed during the online learning process. Fixing the PIM makes the optimization easier since we usually have access to limited number of bounce trajectories in novel environments (interactions of the agent). We observe in our results that we achieve better estimates of the physical properties with an increasing number of interactions. In Figure 11, we visualize the intermediate predictions from the VIM after observing an increasing number of bounces. Evidently, the estimates of collision normals and coefficient of restitution for the image improves with the number of bounces. For each scene, we leave out 10 bounce trajectories for evaluation and use the rest as the online interactions. We consider 10 random shuffles of the bounce trajectories, creating different sets for evaluation and online interactions, to compute mean performance. Figure 11 shows the quantitative improvement with increasing number of bounces according to the metrics described in Section 5.2. We present the final predictions from two other scenes in Figure 12. Observe that the predictions accurately capture the lower restitution of softer objects (pillow, seat of chair). Similarly, it also captures the "hardness" of the edge of the chair and tables. Next, we demonstrate the prediction of trajectories. Qualitative examples of trajectory predictions in the online setting are shown in Figure 13. We observe that the combination of VIM and PIM can successfully predict the outcomes of most bounce events. We also present some failure cases in Figure 13. <--- Page Split ---> ![](images/19_0.jpg) <center>Figure 11: We learn to estimate physical parameters of surfaces in a scene by observing bounces at different locations (row 1). Our predictions of collision normals (row 2) and coefficient of restitution (row 3) improve with number of bounces. The quantitative evaluation (row 4) provides strong evidence for the efficacy of our online learning approach.(Best viewed electronically) </center> <--- Page Split ---> ![](images/20_0.jpg) <center>Figure 12: The final predicted COR maps shows the ability of our approach to differentiate soft (pillows, moveable lamp) and rigid objects (edges of chairs and tables). Furthermore, the quantitative evaluation of the estimated collision normals and COR shows a strong improving trend with increasing number of bounces.(Best viewed electronically) </center> <--- Page Split ---> ![](images/21_0.jpg) <center>Figure 13: Online Learning based Predicted post-bounce trajectories. (left) Input pre-bounce trajectories. (center) Observed post-bounce trajectories. (right) Our predicted post-bounce trajectories. We correctly predict the trajectory in different scenes. 
The bottom two rows are example failures.</center>

<--- Page Split --->

## APPENDIX K EVALUATION IN THE ABSENCE OF IMPACT LOCATION ANNOTATIONS

<table><tr><td>Models</td><td>Dist</td><td>Dist Est Normal</td><td>Dist Est COR</td><td>Dist Est Normal + COR</td><td>% Normals within 30° of Est Normal</td><td>COR Median Absolute Error</td></tr><tr><td>Annotated collision pts</td><td>21.3 ± 0.9</td><td>21.2 ± 0.8</td><td>21.4 ± 0.7</td><td>20.6 ± 0.6</td><td>24.08 ± 3.82</td><td>0.168 ± 0.018</td></tr><tr><td>Estimated collision pts</td><td>21.8 ± 1.7</td><td>21.5 ± 0.1</td><td>22.0 ± 1.8</td><td>21.4 ± 0.5</td><td>24.26 ± 1.99</td><td>0.171 ± 0.023</td></tr></table>

Table 4: Bounce and Learn with estimated collision points (test set). We evaluate our model on the tasks of forward prediction, collision normal and restitution estimation in the absence of impact location annotations in the scene image.

In the experiments presented in Section 4, we use human annotations of the impact location in the image. These annotations are used to index the output of the Visual Inference Module (VIM), as explained in Equation 3. In this section, we conduct an experiment that relaxes the need for this human annotation. The point cloud of the ball in the frame of collision provides information about its location in the image. We leverage this information to estimate the point of collision in the image. More specifically, we project the mean point of the point cloud onto the image using the camera parameters. This serves as an estimate of the point of collision. Note that this is not an accurate estimate, since the point of collision can occur at the edges of the ball, which would not coincide with the projected center. However, since the output of VIM is coarse, we hypothesize that minor errors in the impact location would not affect the coarse index. In Table 4, we present results for our model trained and tested using this estimate of the collision location. We observe a very minimal difference in performance compared to using the human annotations.

## APPENDIX L COMPARING PIM TO INTERACTION NETWORKS (BATTAGLIA ET AL., 2016)

<table><tr><td>Models</td><td>Dist</td></tr><tr><td>Center-based PIM</td><td>10.87±(0.32)</td></tr><tr><td>IN - velocity</td><td>49.00±(6.34)</td></tr><tr><td>IN - location history</td><td>20.10±(1.50)</td></tr></table>

Table 5: Comparison to Interaction Networks. We evaluate our Center-based PIM model and Interaction Networks (Battaglia et al., 2016) on 10000 simulated trajectories. We report the median distance in centimeters to the ground truth post-bounce location at \(\mathrm{t = 0.1s}\). Please see the text for details.

Interaction Networks (IN) (Battaglia et al., 2016) model the interactions between objects in complex systems by performing inferences about the abstract properties that govern them. INs have been shown to be effective for reasoning about n-body interactions, colliding rigid balls and the interaction of discretized strings with rigid objects. The Physics Inference Module (PIM) proposed in this paper aims to address a more specific problem – collisions between two non-rigid objects. However, the PIM also provides the additional benefit of performing inference over sensor inputs in the form of point clouds. We perform a quantitative comparison of the PIM and IN models. Since the IN is not designed to deal with point cloud data, we use the Center-based PIM baseline presented in Section 4 for a fair comparison. The IN model maintains a state vector for each object at each timestep.
We follow the choices in (Battaglia et al., 2016) to design the state vector. In our scenario, we have two objects - the ball and the collision surface. We represent both using a 7- dimensional state vector. The surface is represented as a static object with inverse mass 0, located at (0, 0, 0) and with velocity (0, 0, 0). The ball is represented as an object with constant inverse- mass and with the location and velocity determined by the sample. The relation attribute is represented by the coefficient of restitution and the collision normal. Since our training and test simulation trajectories have added noise to imitate <--- Page Split ---> sensor data, the estimates of initial velocity are also noisy. Therefore, we formulate another location history based state vector containing location of the object in the last 3 time steps. More concretely, the history- based state vector is represented as: \[o = [m,x_{t - 2},y_{t - 2},z_{t - 2},x_{t - 1},y_{t - 1},z_{t - 1},x_{t},y_{t},z_{t}]\] We test these models on 10000 simulated trajectories and present the results in Table 5. We observe that the IN model demonstrates a higher error at \(\mathrm{t = 0.1s}\) post- collision. We believe that this difference in error is due to the recurrent nature of IN, leading to accumulation of errors at every time step. On the other hand, since the PIM predicts by choosing the best simulated post- collision trajectory, it does not suffer from this issue. However, incorporating IN in our proposed framework could allow modelling of n- body interactions and could be addressed in future work. <--- Page Split --->
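For concreteness, the two history-based state vectors described above could be assembled as in the following sketch. This is a hypothetical construction: that \(m\) denotes the inverse mass in \(o\) is our reading of the text.

```python
import numpy as np

def history_state(inv_mass, locations):
    """History-based IN state vector: [m, x_{t-2}, y_{t-2}, z_{t-2}, ...,
    x_t, y_t, z_t], i.e., the object's locations at the last 3 time steps."""
    assert len(locations) == 3
    return np.concatenate([[inv_mass]] + [np.asarray(p) for p in locations])

# Static collision surface: inverse mass 0, fixed at the origin.
surface = history_state(0.0, [np.zeros(3)] * 3)      # shape (10,)
# Ball: constant inverse mass, locations read off the simulated trajectory.
ball = history_state(1.0, [[0.0, 0.0, 0.30],
                           [0.0, 0.0, 0.25],
                           [0.0, 0.0, 0.21]])
```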
accept
Accept (Poster)
7
ICLR_2019_paper_1246
iclr
2,019
# CLASSIFIER-AGNOSTIC SALIENCY MAP EXTRACTION Anonymous authors Paper under double- blind review ## ABSTRACT Extracting saliency maps, which indicate parts of the image important to classification, requires many tricks to achieve satisfactory performance when using classifier- dependent methods. Instead, we propose classifier- agnostic saliency map extraction, which finds all parts of the image that any classifier could use, not just one given in advance. We observe that the proposed approach extracts higher quality saliency maps and outperforms existing weakly- supervised localization techniques, setting the new state of the art result on the ImageNet dataset. ## 1 INTRODUCTION The success of deep convolutional networks for large- scale object recognition Krizhevsky et al. (2012); Simonyan & Zisserman (2014); Szegedy et al. (2015); He et al. (2016) has spurred interest in utilizing them to automatically detect and localize objects in natural images. Pioneering this direction, Simonyan et al. (2013) and Springenberg et al. (2014) demonstrated that the gradient of the class- specific score of a given classifier could be used for extracting a saliency map of an image. Such classifier- dependent saliency maps can be utilized to analyze the inner workings of a specific network. However, as only the part of the image that is used by a given model is highlighted, these methods are not identifying all "evidence" in a given image. They also tend to be noisy, covering many irrelevant pixels and missing many relevant ones. Therefore, much of the recent work has focused on introducing regularization techniques of correcting such classifier- dependent saliency maps. For instance, Selvaraju et al. (2017) propose averaging multiple saliency maps created for perturbed images to obtain a smooth saliency map. We argue, however, that applying tricks and tweaks on top of methods that were designed to analyze inner workings of a given classifier is not a principled way to get saliency maps that focus on all useful evidence. In this work, we aim to find saliency maps indicating pixels which aid classification, i.e. we want to find pixels in the input image such that if they were masked, it would confuse an unknown classifier. Assuming we were given a classifier, a naive approximate solution would be to train a generative model to output a mask (a saliency map) confusing that classifier. That can be achieved using a simple GAN- like approach (Goodfellow et al., 2014) where the classifier acts as a fixed discriminator. Unfortunately, as we prove experimentally, this solution suffers from the same issues as prior approaches. We argue that the strong dependence on a given classifier lies at the center of the problem. To tackle this directly we propose to train a saliency mapping that is not strongly coupled with any specific classifier. Our approach, a class- agnostic saliency map extraction, can be formulated as a practical algorithm that realizes our goal. Our focus on classifier- agnostic saliency maps is not our objective per se, it is a remedy that resolves the core problem. The proposed approach results in a neural network based saliency mapping that only depends on an input image. We qualitatively find that it extracts higher quality saliency maps compared to classifier- dependent methods, as can be seen in Fig. 2. 
Extracted saliency maps show all the evidence without resorting to symptom-masking tricks: difficult-to-tune regularization penalties (such as total variation), exotic activation functions, complex training procedures, or image preprocessing (such as superpixels). We also evaluate our method quantitatively by using the extracted saliency maps for object localization. We observe that the proposed approach outperforms the existing weakly-supervised techniques, setting the new state-of-the-art result on the ImageNet dataset, and closely approaches the localization performance of a strongly supervised model. Furthermore, we experimentally validate that the proposed approach works reasonably well even for classes unseen during training. <--- Page Split ---> Our method has many potential applications in which being classifier-agnostic is of primary importance. For instance, in medical image analysis, we are interested not only in class prediction but also in indicating which parts of the image are important to classification. Importantly, however, it is critical to indicate all parts of the image that can influence a diagnosis, not just the ones used by a specific classifier.

## 2 CLASSIFIER-AGNOSTIC SALIENCY MAP EXTRACTION

In this paper, we formulate extracting a salient region of an input image as learning a mapping \(m:\mathbb{R}^{W\times H\times 3}\to [0,1]^{W\times H}\) over an input image \(x\in \mathbb{R}^{W\times H\times 3}\). Such a mapping should retain \((= 1)\) any pixel of the input image that aids classification, while it should mask \((= 0)\) any other pixel.

### 2.1 CLASSIFIER-DEPENDENT SALIENCY MAP EXTRACTION

Earlier work has largely focused on a setting in which a classifier \(f\) is given (Fong & Vedaldi, 2017; Dabkowski & Gal, 2017). These approaches can be implemented as solving the following maximization problem: \[m = \arg \max_{m^{\prime}}S(m^{\prime},f), \quad (1)\] where \(S\) is a score function corresponding to a classification loss. That is, \[S(m,f) = \frac{1}{N}\sum_{n = 1}^{N}\left[l(f((1 - m(x_{n}))\odot x_{n}),y_{n}) + R(m(x_{n}))\right], \quad (2)\] where \(\odot\) denotes elementwise multiplication (masking), \(R(m)\) is a regularization term and \(l\) is a classification loss, such as cross-entropy. We are given a training set \(D = \{(x_{1},y_{1}),\ldots ,(x_{N},y_{N})\}\). This optimization procedure can be interpreted as finding a mapping \(m\) that maximally confuses a given classifier \(f\). We refer to it as classifier-dependent saliency map extraction.

A mapping \(m\) obtained with a classifier \(f\) may differ from a mapping \(m^{\prime}\) found using \(f^{\prime}\), even if both classifiers are equally good with respect to the classification loss on both original and masked images, i.e., \(L(0,f) = L(0,f^{\prime})\) and \(L(m,f) = L(m^{\prime},f^{\prime})\), where \[L(m,f) = \frac{1}{N}\sum_{n = 1}^{N}l(f((1 - m(x_{n}))\odot x_{n}),y_{n}). \quad (3)\] This property violates our definition of the mapping \(m\) above, which states that any pixel that helps classification should be indicated by the mask (a saliency map) with 1. The reason this is possible is that two equally good but distinct classifiers may use different subsets of input pixels to perform classification.

An example. This behaviour can be intuitively explained with a simple example, illustrating an extreme special case.
Let us consider a data set in which all instances consist of two identical copies of images concatenated together, that is, for all \(x_{n}\) , \(x_{n} = [x_{n}^{\prime};x_{n}^{\prime}]\) , where \(x_{n}^{\prime}\in \mathbb{R}^{W / 2\times H\times 3}\) . For such a data set, there exist at least two classifiers, \(f\) and \(f^{\prime}\) , with the same classification loss. The classifier \(f\) uses only the left half of the image, while \(f^{\prime}\) uses the other half. Each of the corresponding mappings, \(m\) and \(m^{\prime}\) , would then indicate a region of interest only on the corresponding half of the image. When the input image does not consist of two concatenated copies of the same image, it is unlikely that two equally good classifiers will use disjoint sets of input pixels. Our example is to show an extreme case when it is possible. ### 2.2 CLASSIFIER-AGNOSTIC SALIENCY MAP EXTRACTION In order to address the issue of saliency mapping's dependence on a single classifier, we propose to alter the objective function in Eq. (1) to consider not only a single fixed classifier but all possible classifiers weighted by their posterior probabilities. That is, \[m = \arg \max_{m^{\prime}}\mathbb{E}_{f}\left[S(m^{\prime},f)\right], \quad (4)\] <--- Page Split ---> where the posterior probability, \(p(f|D,m^{\prime})\) , is defined to be proportional to the exponentiated classification loss \(L\) , i.e., \(p(f|D,m^{\prime})\propto p(f)\exp (- L(m^{\prime},f))\) . Solving this optimization problem is equivalent to searching over the space of all possible classifiers, and finding a mapping \(m\) that works with all of them. As we parameterize \(f\) as a convolutional network (with parameters denoted as \(\theta_{f}\) ), the space of all possible classifiers is isomorphic to the space of its parameters. The proposed approach considers all the classifiers and we call it a classifier- agnostic saliency map extraction. In the case of the simple example above, where each image contains two copies of a smaller image, both \(f\) and \(f^{\prime}\) , which respectively look at one and the other half of an image, the posterior probabilities of these two classifiers would be the same<sup>1</sup>. Solving Eq. (4) implies that a mapping \(m\) must minimize the loss \(S\) for both of these classifiers. ### 2.3 ALGORITHM The optimization problem in Eq. (4) is, unfortunately, generally intractable. This arises from the intractable expectation over the posterior distribution. Furthermore, the expectation is inside the optimization loop for the mapping \(m\) , making it even harder to solve. Thus, we approximately solve this problem by simultaneously estimating the mapping \(m\) and the expected objective. First, we sample one \(f^{(k)}\) with the posterior probability \(p(f|D,m^{(k - 1)})\) by taking a single step of stochastic gradient descent (SGD) on the classification loss with respect to \(\theta_{f}\) (classifier \(f\) parameters) with a small step size: Algorithm 1: Classifier- agnostic saliency map extraction input : an initial classifier \(f^{(0)}\) , an initial mapping \(m^{(0)}\) , dataset \(D\) , number of iterations \(K\) output : the final mapping \(m^{(K)}\) Initialize a sample set \(F^{(0)} = \left\{f^{(0)}\right\}\) . 
for \(k\gets 1\) to \(K\) do:
    \(\theta_{f^{(k)}}\leftarrow \theta_{f^{(k - 1)}} - \eta_{f}\nabla_{\theta_{f}}L(m^{(k - 1)},f^{(k - 1)})\)
    \(F^{(k)}\leftarrow F^{(k - 1)}\cup \left\{f^{(k)}\right\}\)
    \(f^{\prime}\leftarrow \mathrm{Sample}(F^{(k)})\)
    \(\theta_{m^{(k)}}\leftarrow \theta_{m^{(k - 1)}} + \eta_{m}\nabla_{\theta_{m}}S(m^{(k - 1)},f^{\prime})\)
    \(F^{(k)}\leftarrow \mathrm{Thin}(F^{(k)})\)

\[\theta_{f^{(k)}}\leftarrow \theta_{f^{(k - 1)}} - \eta_{f}\nabla_{\theta_{f}}L(m^{(k - 1)},f^{(k - 1)}). \quad (5)\]

This is motivated by earlier work (Welling & Teh, 2011; Mandt et al., 2017), which showed that SGD performs approximate Bayesian posterior inference. We have up to \(k + 1\) samples<sup>2</sup> from the posterior distribution, \(F^{(k)} = \left\{f^{(0)},\ldots ,f^{(k)}\right\}\). We sample<sup>3</sup> \(f^{\prime}\in F^{(k)}\) to get a single-sample estimate of the objective in Eq. (4) by computing \(S(m^{(k - 1)},f^{\prime})\). Then, we use it to obtain an updated \(\theta_{m}\) (the parameters of mapping \(m\)) by

\[\theta_{m^{(k)}}\leftarrow \theta_{m^{(k - 1)}} + \eta_{m}\nabla_{\theta_{m}}S(m^{(k - 1)},f^{\prime}). \quad (6)\]

We alternate between these two steps until \(m^{(k)}\) converges (cf. Alg. 1). Note that our algorithm resembles the training procedure of GANs (Goodfellow et al., 2014), where the mapping \(m\) takes the role of a generator and the classifier \(f\) can be understood as a discriminator. Fan et al. (2017) also applied adversarial training in order to achieve better saliency maps. The relation to that work is discussed in detail in Section 6.

Score function The score function \(S(m,f)\) estimates the quality of the saliency map extracted by \(m\) given a data set and a classifier \(f\). The score function must be designed to balance precision and recall. Precision refers to the fraction of relevant pixels among those marked by \(m\) as relevant, while recall is the fraction of pixels correctly marked by \(m\) as relevant among all the relevant pixels. In order to balance the two, the score function often consists of two terms. The first term aims to ensure that all relevant pixels are included (high recall). As in Eq. (2), a popular choice has been the classification loss based on an input image \(x\) masked out by \(1 - m(x)\). In our preliminary experiments, however, we noticed that this approach leads to masks with adversarial artifacts. We hence propose to use the entropy \(\mathcal{H}(f((1 - m(x))\odot x))\) instead. This makes the generated masks cover all salient pixels in the input, avoiding masks that merely sway the class prediction to a different but semantically close class – for example, from one dog species to another. The second term excludes a trivial solution: \(m\) simply outputting an all-ones saliency map, which would achieve maximal recall with very low precision. To exclude it, we must introduce a regularization term \(R(m)\). Some popular choices include total variation (Rudin et al., 1992) and the \(L^{1}\) norm. For simplicity, we use only the latter. In summary, we use the following score function for classifier-agnostic saliency map extraction: \[S(m,f) = \frac{1}{N}\sum_{n = 1}^{N}\left[\mathcal{H}\big(f((1 - m(x_{n}))\odot x_{n})\big) - \lambda_{R}\| m(x_{n})\|_{1}\right], \quad (7)\] where \(\lambda_{R} > 0\) is a regularization coefficient.
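To make the alternating updates in Eqs. (5)–(7) concrete, here is a minimal PyTorch sketch of one iteration. It is an illustrative reconstruction, not the released code: uniform sampling from the classifier set and all hyperparameter values are our assumptions, and the thinning step is omitted.

```python
import torch
import torch.nn.functional as F

def casm_step(mapper, classifiers, x, y, opt_f, opt_m, lambda_r=1e-3):
    """One alternating update of Alg. 1 (illustrative sketch).

    mapper: network m(x) -> per-pixel mask in [0, 1], shape (B, 1, H, W).
    classifiers: the current sample set F^(k) of classifier networks.
    opt_f: optimizer over the parameters of classifiers[-1].
    opt_m: optimizer over the parameters of mapper only.
    """
    f_last = classifiers[-1]

    # Eq. (5): one SGD step on the classification loss of masked-out images.
    mask = mapper(x)
    loss_f = F.cross_entropy(f_last((1 - mask.detach()) * x), y)
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

    # Single-sample estimate of Eq. (4): draw one classifier from the set
    # (uniform sampling is one possible choice of Sample).
    f_prime = classifiers[torch.randint(len(classifiers), (1,)).item()]

    # Eq. (7): entropy of predictions on masked-out images, minus an L1
    # penalty on the mask (written here as a mean-scaled L1 for readability).
    mask = mapper(x)
    probs = F.softmax(f_prime((1 - mask) * x), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    score = entropy - lambda_r * mask.abs().mean()

    # Eq. (6): gradient ascent on the score w.r.t. the mapping parameters.
    opt_m.zero_grad(); (-score).backward(); opt_m.step()
    return loss_f.item(), score.item()
```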
Thinning As the algorithm collects a set of classifiers, \(f^{(k)}\) 's, from the posterior distribution, we need a strategy to keep a small subset of them. An obvious approach would be to keep all classifiers but this does not scale well with the number of iterations. We propose and empirically evaluate a few strategies. The first three of them assume a fixed size of \(F^{(k)}\) . Namely, keeping the first classifier only, denoted by \(\mathbf{F}(F^{(k)} = \{f^{(0)}\})\) , the last only, denoted by \(\mathbf{L}(F^{(k)} = \{f^{(k)}\})\) and the first and last only, denoted by \(\mathbf{FL}(F^{(k)} = \{f^{(0)},f^{(k)}\})\) . As an alternative, we also considered a growing set of classifiers where we only keep one every 1000 iterations (denoted by L1000) but whenever \(|F^{(k)}| = 30\) , we randomly remove one from the set. Analogously, we experimented with L100. Classification loss Although we described our approach using the classification loss computed only on masked images, as in Eq. (3), it is not necessary to define the classification loss exactly in this way. In the preliminary experiments, we noticed that the following alternative formulation, inspired by adversarial training (Szegedy et al., 2013), works better: \[L(m,f) = \frac{1}{2N}\sum_{n = 1}^{N}\left[l(f((1 - m(x_{n}))\odot x_{n}),y_{n}) + l(f(x_{n}),y_{n})\right]. \quad (8)\] We thus use the loss as defined above in the experiments. We conjecture that it is advantageous over the original one in Eq. (3), as the additional term prevents the degradation of the classifier's performance on the original, unmasked images while the first term encourages the classifier to collect new pieces of evidence from the images that are masked. ## 3 EXPERIMENTAL SETTINGS Dataset: ImageNet Our models were trained on the official ImageNet training set with ground truth class labels Deng et al. (2009). We evaluate them on the validation set. Depending on the experiment, we use ground truth class or localization labels. Reproducibility We made our code publicly available at MASKED. ### 3.1 ARCHITECTURES Classifier \(f\) and mapping \(m\) We use ResNet- 50 (He et al., 2016) as a classifier \(f\) in our experiments. We follow an encoder- decoder architecture for constructing a mapping \(m\) . The encoder is implemented also as a ResNet- 50 so its weights can be shared with the classifier or it can be separate. We experimentally find that sharing is beneficial. The decoder is a deep deconvolutional network that ultimately outputs the mask of an input image. The overall architecture is shown in Fig. 1. Details of the architecture and training procedure are in the appendix. Regularization coefficient \(\lambda_{R}\) As noticed by Fan et al. (2017), it is not trivial to find an optimal regularization coefficient \(\lambda_{R}\) . They proposed an adaptive strategy which gets rid of the manual selection of \(\lambda_{R}\) . We, however, find it undesirable due to the lack of control on the average size of the saliency map. Instead, we propose to control the average number of relevant pixels by manually setting \(\lambda_{R}\) , while applying the regularization term \(R(m(x))\) only when there is a disagreement between \(f(x)\) and \(f((1 - m(x))\odot x)\) . We then set \(\lambda_{R}\) for each experiment such that approximately \(50\%\) of pixels in each image are indicated as relevant by a mapping \(m\) . 
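A short sketch of this gated penalty, reflecting our reading of the description above (function and variable names are illustrative):

```python
import torch

def gated_l1_penalty(f, x, mask, lambda_r):
    """Apply the L1 mask penalty only on images where the classifier's
    prediction changes after masking, i.e., f(x) != f((1 - m(x)) * x)."""
    with torch.no_grad():
        pred_clean = f(x).argmax(dim=1)
        pred_masked = f((1 - mask) * x).argmax(dim=1)
        disagree = (pred_clean != pred_masked).float()    # (B,) gate per image
    per_image_l1 = mask.abs().flatten(1).mean(dim=1)      # (B,) mean mask size
    return lambda_r * (disagree * per_image_l1).mean()
```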
In the preliminary experiments, we further noticed that this approach avoids the problematic behavior when an image contains small objects, observed earlier by Fong & Vedaldi (2017).

<--- Page Split --->

We also noticed that the training of the mapping \(m\) is more stable and effective when we use only images on which the classifier is not trivially confused, i.e., it predicts the correct class for the original image.

### 3.2 EVALUATION

In our experiments we use only the single architecture explained in subsection 3.1. We use the abbreviation CASM (classifier-agnostic saliency mapping) to denote the final model obtained using the proposed method. Our baseline model (Baseline) has the same architecture and is trained with a fixed classifier (classifier-dependent saliency mapping), realized by following thinning strategy \(\mathbf{F}\).

![](images/4_0.jpg)
<center>Figure 1: The overall architecture. The mapping \(m\) consists of an encoder and a decoder and is shown at the top with gray background. The additional forward pass (when the classifier acts on the masked-out image) is needed during training only.</center>

Following previous work (Cao et al., 2015; Fong & Vedaldi, 2017; Zhang et al., 2016), we discretize our mask by computing
\[b_{ij}(x) = \begin{cases}1, & \text{if } m_{ij}(x)\geq \alpha \overline{m}(x)\\ 0, & \text{otherwise,}\end{cases}\]
where \(\overline{m}(x)\) is the average mask intensity and \(\alpha\) is a hyperparameter. We simply set \(\alpha\) to 1; hence the average pixel intensity is the same for the input mask \(m(x)\) and the discretized binary mask \(b(x)\). To focus on the most dominant object, we take the largest connected component of the binary mask to obtain the binary connected mask.

Visualization We visualize the learned mapping \(m\) by inspecting the saliency map of each image in three different ways. First, we visualize the masked-in image \(b(x)\odot x\), which ideally leaves only the relevant pixels visible. Second, we visualize the masked-out image \((1 - b(x))\odot x\), which highlights pixels irrelevant to classification. Third, we visualize the inpainted masked-out image using an inpainting algorithm (Telea, 2004). This allows us to inspect whether the object that should be masked out can be easily reconstructed from nearby pixels.

Classification by multiple classifiers In order to verify our claim that the proposed approach results in a classifier-agnostic saliency mapping, we evaluate a set of classifiers<sup>4</sup> on the validation sets of masked-in images, masked-out images and inpainted masked-out images. If our claim is correct, we expect the inpainted masked-out images created by our method to break these classifiers, while the masked-in images should suffer minimal performance degradation.

Object localization As the saliency map can be used to find the most dominant object in an image, we can evaluate our approach on the task of weakly supervised localization. To do so, we use the ILSVRC'14 localization task. We compute the bounding box of an object as the tightest box that covers the binary connected mask. We use three metrics to quantify the quality of localization. First, we use the official metric (OM) from the ImageNet localization challenge, which considers the localization successful if at least one ground truth bounding box has IoU with the predicted bounding box higher than 0.5 and the class prediction is correct.
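The mask-to-box step shared by these localization metrics can be sketched as follows (an illustrative helper under our own naming; the paper's exact implementation may differ):

```python
import numpy as np
from scipy.ndimage import label

def mask_to_bbox(m, alpha=1.0):
    """Binarize a saliency map at alpha times its mean intensity, keep the
    largest connected component, and return its tightest bounding box."""
    b = (m >= alpha * m.mean()).astype(np.int32)
    comps, n = label(b)                      # 4-connected components
    if n == 0:
        return None
    sizes = np.bincount(comps.ravel())[1:]   # skip background label 0
    ys, xs = np.where(comps == 1 + sizes.argmax())
    return xs.min(), ys.min(), xs.max(), ys.max()  # (x1, y1, x2, y2)
```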
Since OM is dependent on the classifier, from which we have sought to make our mapping independent, we also use another widely used metric, called localization error (LE), which depends only on the bounding box prediction (Cao et al., 2015; Fong & Vedaldi, 2017). Lastly, we evaluate the original saliency map, in which each mask pixel is a continuous value between 0 and 1, by the continuous F1 score. Precision \(P\) and recall \(R\) are defined as follows:
\[P = \frac{\sum_{(i,j)\in B^{*}(x)}m_{ij}(x)}{\sum_{i,j}m_{ij}(x)} \quad \text{and} \quad R = \frac{\sum_{(i,j)\in B^{*}(x)}m_{ij}(x)}{|B^{*}(x)|},\]

<--- Page Split --->

![](images/5_0.jpg)
<center>Figure 2: The original images are in the first row. In the following rows, masked-in images, masked-out images and inpainted masked-out images are shown, respectively. Note that the proposed approach (a-b) removes all relevant pixels, and hence the inpainted images show the background only. Seven randomly selected consecutive images from the validation set are presented here. Please see the appendix for extra visualizations.</center>

where \(B^{*}(x)\) is the ground truth bounding box. We compute F1 scores against all the ground truth bounding boxes for each image and report the highest one as the final score.

## 4 RESULTS AND ANALYSIS

Visualization and statistics We randomly select seven consecutive images from the validation set and input them to two instances of CASM (each using a different thinning strategy – L or L100) and to Baseline. We visualize the original (clean), masked-in, masked-out and inpainted masked-out images in Fig. 2. The proposed approach produces clearly better saliency maps, while the classifier-dependent approach (Baseline) produces so-called adversarial masks (Dabkowski & Gal, 2017).

We further compute statistics of the saliency maps generated by CASM and Baseline over the validation set. The masks extracted by CASM exhibit lower total variation (\(2.5 \times 10^{3}\) vs. \(7.0 \times 10^{3}\)), indicating that CASM produces more regular masks, despite the lack of explicit TV regularization. The entropy of mask pixel intensities is much smaller for CASM (0.05 vs. 0.21), indicating that the mask intensities are closer to either 0 or 1 on average. Furthermore, the standard deviation of the masked-out volume is larger with CASM (0.19 vs. 0.14), indicating that CASM is capable of producing saliency maps of varying sizes depending on the input images.

![](images/5_1.jpg)
<center>Figure 3: The classification accuracy of the ImageNet-trained convolutional networks on masked-in images (left), masked-out images (center) and inpainted masked-out images (right). Orange and blue dots correspond to ResNet-50 models and all the other types of convolutional networks, respectively. We observe that the inpainted masked-out images obtained using Baseline are easier to classify than those using CASM, because Baseline fails to mask out all the relevant pixels, unlike CASM. On the right panel, most of the classifiers evaluated on images with random masks achieve accuracy higher than 40% and are not shown. We add jitter along the x-axis to make the dots visibly distinct from one another.</center>

<--- Page Split --->

Table 1: Localization evaluation using OM and LE scores. We report the better of the accuracies reported in the original papers or by Fong & Vedaldi (2017).
<table><tr><td>Model</td><td>OM↓</td><td>LE↓</td></tr><tr><td>Our:</td><td></td><td></td></tr><tr><td>Baseline</td><td>62.7</td><td>53.5</td></tr><tr><td>CASM</td><td>48.6</td><td>36.1</td></tr><tr><td>Fan et al. (2017)</td><td>54.5</td><td>43.5</td></tr><tr><td>Weakly supervised:</td><td></td><td></td></tr><tr><td>Zeiler &amp;amp; Fergus (2014)</td><td>-</td><td>48.6</td></tr><tr><td>Zhou et al. (2016)</td><td>56.4</td><td>48.1</td></tr><tr><td>Selvaraju et al. (2017)</td><td>-</td><td>47.5</td></tr><tr><td>Fong &amp;amp; Vedaldi (2017)</td><td>-</td><td>43.1</td></tr><tr><td>Mahendran &amp;amp; Vedaldi (2016)</td><td>-</td><td>42.0</td></tr><tr><td>Simonyan et al. (2013)</td><td>-</td><td>41.7</td></tr><tr><td>Cao et al. (2015)</td><td>-</td><td>38.8</td></tr><tr><td>Zhang et al. (2016)</td><td>-</td><td>38.7</td></tr><tr><td>Dabkowski &amp;amp; Gal (2017)</td><td>-</td><td>36.7</td></tr><tr><td>Supervised:</td><td></td><td></td></tr><tr><td>Simonyan &amp;amp; Zisserman (2014)</td><td>-</td><td>34.3</td></tr></table> Table 2: Ablation study. S refers to the choice of a score function (E: entropy, C: classification loss), Shr to whether the encoder and classifier are shared (Y: yes, N: no) and Thin to the thinning strategies. <table><tr><td></td><td>S</td><td>Shr</td><td>Thin</td><td>OM↓</td><td>LE↓</td><td>F1↑</td></tr><tr><td>(a)</td><td>E</td><td>Y</td><td>F</td><td>62.7</td><td>53.5</td><td>49.0</td></tr><tr><td>(b)</td><td>E</td><td>Y</td><td>L</td><td>49.0</td><td>36.5</td><td>61.7</td></tr><tr><td>(c)</td><td>E</td><td>Y</td><td>FL</td><td>52.7</td><td>41.3</td><td>57.2</td></tr><tr><td>(d)</td><td>E</td><td>Y</td><td>L1000</td><td>48.7</td><td>36.2</td><td>61.6</td></tr><tr><td>(e)</td><td>E</td><td>Y</td><td>L100</td><td>48.6</td><td>36.1</td><td>61.4</td></tr><tr><td>(f)</td><td>C</td><td>Y</td><td>F</td><td>80.8</td><td>75.9</td><td>42.5</td></tr><tr><td>(g)</td><td>C</td><td>Y</td><td>L</td><td>49.5</td><td>37.0</td><td>62.2</td></tr><tr><td>(h)</td><td>C</td><td>Y</td><td>L100</td><td>49.7</td><td>37.3</td><td>62.1</td></tr><tr><td>(i)</td><td>E</td><td>N</td><td>F</td><td>-</td><td>55.5</td><td>-</td></tr><tr><td>(j)</td><td>E</td><td>N</td><td>L</td><td>-</td><td>47.2</td><td>-</td></tr><tr><td>(k)</td><td>E</td><td>N</td><td>L100</td><td>-</td><td>46.8</td><td>-</td></tr></table> Classification As shown on the left panel of Figure 3, the entire set of classifiers suffers less from the masked- in images produced by CASM than those by Baseline. We, however, notice that most of the classifiers fail to classify the masked- out images produced by Baseline, which we conjecture is due to the adversarial nature of the saliency maps produced by Baseline approach. This is confirmed by the right panel which shows that simple inpainting of the masked- out images dramatically increases the accuracy when the saliency maps were produced by Baseline. The inpainted masked- out images by CASM, on the other hand, do not benefit from inpainting, because it truly does not maintain any useful evidence for classification. Localization We report the localization performance of CASM, Baseline and prior works in Table 1 using two different metrics. Most of the existing approaches, except for Fan et al. (2017), assume the knowledge of the target class, unlike our work. CASM performs better than all prior approaches including the classifier- dependent Baseline. The difference is statistically significant. 
For ten separate training runs with random initialization the worst scores 36.3 and the best 36.0 with the average of 36.1. The fully supervised approach is the only approach that outperforms CASM. Thinning strategies In Table 2 (a- e), we compare the five thinning strategies described earlier, where F is equivalent to the Baseline. According to LE and OM metrics, the strategies L100 and L1000 perform better than the others, closely followed by L. These three strategies also perform the best in term of F1. Sharing the encoder and classifier As clear from Fig. 1, it is not necessary to share the parameters of the encoder and classifier. Our experiments, however, reveal that it is always beneficial to share them as shown in Table 2. Score function Unlike Fan et al. (2017), we use separate score functions for training the classifier and the saliency mapping. We empirically observe in Table 2 that the proposed use of entropy as a score function results in a better mapping in term of OM and LE. The gap, however, narrows as we use better thinning strategies. On the other hand, the classification loss is better for F1 as it makes CASM focus on the dominant object only. Because we take the highest score for each ground truth bounding box, concentrating on the dominant object yields higher scores. ## 5 UNSEEN CLASSES Since the proposed approach does not require knowing the class of the object to be localized, we can use it with images that contain objects of classes that were not seen during training neither by the classifier \(f^{(k)}\) nor the mapping \(m\) . We explicitly test this capability by training five different CASMs on five subsets of the original training set of ImageNet. <--- Page Split ---> Table 3: Localization errors (LE in \(\%\) , \(\downarrow\) ) of the models trained on a subset of classes. Each row corresponds to the training subset of classes and each column to the test subset of classes. Error rates on the previously unseen classes are marked with gray shade. <table><tr><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>All</td></tr><tr><td>F</td><td>46.5</td><td>46.4</td><td>48.1</td><td>45.0</td><td>45.7</td><td>41.3</td><td>44.9</td></tr><tr><td>E, F</td><td>39.5</td><td>41.2</td><td>43.1</td><td>40.3</td><td>39.5</td><td>38.7</td><td>40.0</td></tr><tr><td>D, E, F</td><td>37.9</td><td>39.3</td><td>40.0</td><td>38.0</td><td>38.0</td><td>37.4</td><td>38.1</td></tr><tr><td>C, D, E, F</td><td>38.2</td><td>38.5</td><td>39.9</td><td>37.9</td><td>37.9</td><td>37.8</td><td>38.1</td></tr><tr><td>B, C, D, E, F</td><td>36.7</td><td>36.8</td><td>39.9</td><td>37.4</td><td>37.0</td><td>37.0</td><td>37.4</td></tr><tr><td>-</td><td>35.6</td><td>36.1</td><td>39.0</td><td>37.0</td><td>36.6</td><td>36.7</td><td>36.9</td></tr></table> We first divide the 1000 classes into five disjoint subsets (denoted as A, B, C, D, E and F) of sizes 50, 50, 100, 300, 300 and 200, respectively. We train our models (in all stages) on \(95\%\) images (classes in B, C, D, E and F), \(90\%\) images (classes in C, D, E and F), \(80\%\) images (classes in D, E and F), \(50\%\) images (classes in E and F) and finally on \(20\%\) of images only (classes in F only). Then, we test each saliency mapping on all the six subsets of classes independently. We use the thinning strategy L for computational efficiency in each case. All models generalize well and the difference between their accuracy on seen or unseen classes is negligible (with exemption of the model trained on \(20\%\) of classes). 
The general performance is a little poorer which can be explained by the smaller training set. In Table 3, we see that the proposed approach works well even for localizing objects from previously unseen classes. The gap in the localization error between the seen and unseen classes grows as the training set shrinks. However, with a reasonably sized training set, the difference between the seen and unseen classes is small. This is an encouraging sign for the proposed model as a class- agnostic saliency map. ## 6 RELATED WORK The adversarial localization network Fan et al. (2017) is perhaps the most closely related to our work. Similarly to ours, they simultaneously train the classifier and the saliency mapping which does not require the object's class at test time. There are four major differences between that work and ours. First, we use the entropy as a score function for training the mapping, whereas they used the classification loss. This results in obtaining better saliency maps as we have shown earlier. Second, we make the training procedure faster thanks to tying the weights of the encoder and the classifier, which also results in a much better performance. Third, we do not let the classifier shift to the distribution of masked- out images by continuing training it on both clean and masked- out images. Finally, their mapping relies on superpixels to build more contiguous masks which may miss small details due to inaccurate segmentation and makes the entire procedure more complex. Our approach solely works on raw pixels without requiring any extra tricks. Dabkowski & Gal (2017) also train a separate neural network dedicated to predicting saliency maps. However, their approach is a classifier- dependent method and, as such, a lot of effort is devoted to preventing generating adversarial masks. Furthermore, the authors use a complex training objective with multiple hyperparameters which also has to be tuned carefully. On a final note, their model needs a ground truth class label which limits its use in practice. ## 7 CONCLUSIONS In this paper, we proposed a new framework for classifier- agnostic saliency map extraction which aims at finding a saliency mapping that works for all possible classifiers weighted by their posterior probabilities. We designed a practical algorithm that amounts to simultaneously training a classifier and a saliency mapping using stochastic gradient descent. We qualitatively observed that the proposed approach extracts saliency maps that cover all the relevant pixels in an image and that the masked- out images cannot be easily recovered by inpainting, unlike for classifier- dependent approaches. We further observed that the proposed saliency map extraction procedure outperforms all existing weakly supervised approaches to object localization and can also be used on images containing objects from previously unseen classes, paving a way toward class- agnostic saliency map extraction. <--- Page Split ---> ## REFERENCES Chunshui Cao, Xianming Liu, Yi Yang, Yinan Yu, Jiang Wang, Zilei Wang, Yongzhen Huang, Liang Wang, Chang Huang, Wei Xu, et al. Look and think twice: Capturing top- down visual attention with feedback convolutional neural networks. In International Conference on Computer Vision, 2015. Piotr Dabkowski and Yarin Gal. Real time image saliency for black box classifiers. In Neural Information Processing Systems, 2017. Jia Deng, Wei Dong, Richard Socher, Li- Jia Li, Kai Li, and Li Fei- Fei. Imagenet: A large- scale hierarchical image database. 
In Computer Vision and Pattern Recognition, 2009. Lijie Fan, Shengjia Zhao, and Stefano Ermon. Adversarial localization network. In Learning with limited labeled data: weak supervision and beyond, NIPS Workshop, 2017. Ruth C Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. arXiv preprint arXiv:1704.03296, 2017. Ian Goodfellow, Jean Pouget- Abadie, Mehdi Mirza, Bing Xu, David Warde- Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Neural Information Processing Systems, 2014. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition, 2016. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Neural Information Processing Systems, 2012. Aravindh Mahendran and Andrea Vedaldi. Salient deconvolutional networks. In European Conference on Computer Vision. Springer, 2016. Stephan Mandt, Matthew D Hoffman, and David M Blei. Stochastic gradient descent as approximate bayesian inference. The Journal of Machine Learning Research, 2017. Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60, 1992. Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad- cam: Visual explanations from deep networks via gradient- based localization. In ICCV, pp. 618- 626, 2017. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large- scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013. Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Computer Vision and Pattern Recognition, 2015. Alexandru Telea. An image inpainting technique based on the fast marching method. Journal of graphics tools, 2004. <--- Page Split ---> Max Welling and Yee W Teh. Bayesian learning via stochastic gradient langevin dynamics. In International Conference on Machine Learning, 2011. Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, 2014. Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. Top- down neural attention by excitation backprop. In European Conference on Computer Vision, 2016. Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Computer Vision and Pattern Recognition, 2016. 
<--- Page Split ---> ## APPENDIX ## ARCHITECTURE AND TRAINING PROCEDURE Classifier \(f\) and mapping \(m\) As mentioned before, we use ResNet- 50 (He et al., 2016) as a classifier \(f\) in our experiments. We follow the encoder- decoder architecture for constructing a mapping \(m\) . The encoder is implemented also as a ResNet- 50 so its weights can be shared with the classifier or it can be separate. We experimentally find that sharing is beneficial. The decoder is a deep deconvolutional network that ultimately outputs the mask of an input image. The input to the decoder consists of all hidden layers of the encoder which are directly followed by a downscaling operation. We upsample them to be of the same size and concatenate them into a single feature map \(H\) . This upsampling operation is implemented by first applying \(1 \times 1\) convolution with 64 filters, followed by batch normalization, ReLU non- linearity and then rescaling to \(56 \times 56\) pixels (using bilinear interpolation). Finally, a single \(3 \times 3\) convolutional filter followed by sigmoid activation is applied on \(H\) and the output is upscaled to a \(224 \times 224\) pixel- sized mask using proximal interpolation. The overall architecture is shown in Fig. 1. Training procedure We initialize the classifier \(f^{(0)}\) by training it on the entire training set. We find this pretraining strategy facilitates learning, particularly in the early stage. In practice we use the pretrained ResNet- 50 from torchvision model zoo. We use vanilla SGD with a small learning rate of 0.001 (with momentum coefficient set to 0.9 and weight- decay coefficient set to \(10^{- 4}\) ) to continue training the classifier with the mixed classification loss as in Eq. (8). To train the mapping \(m\) we use Adam (Kingma & Ba, 2014) with the learning rate \(l_{0} = 0.001\) (with weight- decay coefficient set to \(10^{- 4}\) ) and all the other hyperparameters set to default values. We fix the number of training epochs to 70 (each epoch covers only a random \(20\%\) of the training set). ## RESIZING We noticed that the details of the resizing policy preceding the evaluation procedures OM and LE vary between different works. The one thing they have in common is that the resized image is always \(224 \times 224\) pixels. The two main approaches are the following. - The image in the original size is resized such that the smaller edge of the resulting image is 224 pixels long. Then, the central \(224 \times 224\) crop is taken. The original aspect ratio of the objects in the image is preserved. Unfortunately, this method has a flaw – it may be impossible to obtain \(\text{IOU} > 0.5\) between predicted localization box and the ground truth box when than a half of the bounding box is not seen by the model. - The image in the original size is resized directly to \(224 \times 224\) pixels. The advantage of this method is that the image is not cropped and it is always possible to obtain \(\text{IOU} > 0.5\) between predicted localization box and the ground truth box. However, the original aspect ratio is distorted. The difference in LE scores for different resizing strategies should not be large. For CASM it is \(0.6\%\) . In this paper, for CASM, we report results for the first method. ## VISUALIZATIONS In the remained of the appendix we replicate the content of Fig. 2 for sixteen randomly chosen classes. That is, in each figure we visualize saliency maps obtained for seven consecutive images from the validation set. 
The original images are in the first row. In the following rows masked- in images, masked- out images and inpainted masked- out images are shown. As before, we used two instances of CASM (each using a different thinning strategy – L or L100) and Baseline. <--- Page Split ---> ![](images/11_0.jpg) <center>(a) CASM (L100) </center> ![](images/11_1.jpg) <center>(b) CASM (L) </center> ![](images/11_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/12_0.jpg) <center>(a) CASM (L100) </center> ![](images/12_1.jpg) <center>(b) CASM (L) </center> ![](images/12_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/13_0.jpg) <center>(a) CASM (L100) </center> ![](images/13_1.jpg) <center>(b) CASM (L) </center> ![](images/13_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/14_0.jpg) <center>(a) CASM (L100) </center> ![](images/14_1.jpg) <center>(b) CASM (L) </center> ![](images/14_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/15_0.jpg) <center>(a) CASM (L100) </center> ![](images/15_1.jpg) <center>(b) CASM (L) </center> ![](images/15_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/16_0.jpg) <center>(a) CASM (L100) </center> ![](images/16_1.jpg) <center>(b) CASM (L) </center> ![](images/16_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/17_0.jpg) <center>(a) CASM (L100) </center> ![](images/17_1.jpg) <center>(b) CASM (L) </center> ![](images/17_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/18_0.jpg) <center>(a) CASM (L100) </center> ![](images/18_1.jpg) <center>(b) CASM (L) </center> ![](images/18_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/19_0.jpg) <center>(a) CASM (L100) </center> ![](images/19_1.jpg) <center>(b) CASM (L) </center> ![](images/19_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/20_0.jpg) <center>(a) CASM (L100) </center> ![](images/20_1.jpg) <center>(b) CASM (L) </center> ![](images/20_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/21_0.jpg) <center>(a) CASM (L100) </center> ![](images/21_1.jpg) <center>(b) CASM (L) </center> ![](images/21_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/22_0.jpg) <center>(a) CASM (L100) </center> ![](images/22_1.jpg) <center>(b) CASM (L) </center> ![](images/22_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/23_0.jpg) <center>(a) CASM (L100) </center> ![](images/23_1.jpg) <center>(b) CASM (L) </center> ![](images/23_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/24_0.jpg) <center>(a) CASM (L100) </center> ![](images/24_1.jpg) <center>(b) CASM (L) </center> ![](images/24_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/25_0.jpg) <center>(a) CASM (L100) </center> ![](images/25_1.jpg) <center>(b) CASM (L) </center> ![](images/25_2.jpg) <center>(c) Baseline </center> <--- Page Split ---> ![](images/26_0.jpg) <center>(a) CASM (L100) </center> ![](images/26_1.jpg) <center>(b) CASM (L) </center> ![](images/26_2.jpg) <center>(c) Baseline </center> <--- Page Split --->
## ABSTRACT

Extracting saliency maps, which indicate parts of the image important to classification, requires many tricks to achieve satisfactory performance when using classifier-dependent methods. Instead, we propose classifier-agnostic saliency map extraction, which finds all parts of the image that any classifier could use, not just the one given in advance. We observe that the proposed approach extracts higher-quality saliency maps and outperforms existing weakly supervised localization techniques, setting a new state-of-the-art result on the ImageNet dataset.

## 1 INTRODUCTION

The success of deep convolutional networks for large-scale object recognition (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016) has spurred interest in utilizing them to automatically detect and localize objects in natural images. Pioneering this direction, Simonyan et al. (2013) and Springenberg et al. (2014) demonstrated that the gradient of the class-specific score of a given classifier can be used to extract a saliency map of an image. Such classifier-dependent saliency maps can be used to analyze the inner workings of a specific network. However, as they highlight only the part of the image used by the given model, these methods do not identify all the "evidence" in an image. They also tend to be noisy, covering many irrelevant pixels and missing many relevant ones. Therefore, much recent work has focused on introducing regularization techniques for correcting such classifier-dependent saliency maps. For instance, Selvaraju et al. (2017) propose averaging multiple saliency maps created for perturbed images to obtain a smooth saliency map. We argue, however, that applying tricks and tweaks on top of methods designed to analyze the inner workings of a given classifier is not a principled way to obtain saliency maps that capture all useful evidence.

In this work, we aim to find saliency maps indicating pixels that aid classification, i.e., we want to find pixels in the input image such that, if they were masked, an unknown classifier would be confused. Given a fixed classifier, a naive approximate solution would be to train a generative model to output a mask (a saliency map) that confuses this classifier. That can be achieved with a simple GAN-like approach (Goodfellow et al., 2014) in which the classifier acts as a fixed discriminator. Unfortunately, as we show experimentally, this solution suffers from the same issues as prior approaches. We argue that the strong dependence on a given classifier lies at the center of the problem. To tackle it directly, we propose to train a saliency mapping that is not strongly coupled with any specific classifier. Our approach, classifier-agnostic saliency map extraction, can be formulated as a practical algorithm that realizes this goal. The focus on classifier-agnostic saliency maps is not an objective per se; rather, it is a remedy that resolves the core problem.

The proposed approach results in a neural-network-based saliency mapping that depends only on an input image. We qualitatively find that it extracts higher-quality saliency maps than classifier-dependent methods, as can be seen in Fig. 2. The extracted saliency maps show all the evidence without resorting to symptom-masking techniques: difficult-to-tune regularization penalties (such as total variation), exotic activation functions, complex training procedures, image preprocessing tricks (such as superpixels), and so on.
We also evaluate our method quantitatively by using the extracted saliency maps for object localization. We observe that the proposed approach outperforms existing weakly supervised techniques, setting a new state-of-the-art result on the ImageNet dataset, and closely approaches the localization performance of a strongly supervised model. Furthermore, we experimentally validate that the proposed approach works reasonably well even for classes unseen during training.

Our method has many potential applications in which being classifier-agnostic is of primary importance. Consider, for instance, medical image analysis, where we are interested not only in class prediction but also in indicating which part of the image is important to classification. There, it is critical to indicate all parts of the image that can influence a diagnosis, not just the ones used by one specific classifier.

## 2 CLASSIFIER-AGNOSTIC SALIENCY MAP EXTRACTION

In this paper, we cast the problem of extracting a salient region of an input image \(x\in \mathbb{R}^{W\times H\times 3}\) as one of learning a mapping \(m:\mathbb{R}^{W\times H\times 3}\to [0,1]^{W\times H}\). Such a mapping should retain \((= 1)\) any pixel of the input image that aids classification and mask \((= 0)\) every other pixel.

### 2.1 CLASSIFIER-DEPENDENT SALIENCY MAP EXTRACTION

Earlier work has largely focused on a setting in which a classifier \(f\) is given (Fong & Vedaldi, 2017; Dabkowski & Gal, 2017). These approaches can be implemented as solving the following maximization problem:

\[m = \arg \max_{m^{\prime}}S(m^{\prime},f), \quad (1)\]

where \(S\) is a score function corresponding to a classification loss. That is,

\[S(m,f) = \frac{1}{N}\sum_{n = 1}^{N}\left[l(f((1 - m(x_{n}))\odot x_{n}),y_{n}) + R(m(x_{n}))\right], \quad (2)\]

where \(\odot\) denotes elementwise multiplication (masking), \(R(m)\) is a regularization term, \(l\) is a classification loss such as cross-entropy, and \(D = \{(x_{1},y_{1}),\ldots ,(x_{N},y_{N})\}\) is a given training set. This optimization procedure can be interpreted as finding a mapping \(m\) that maximally confuses the given classifier \(f\). We refer to it as classifier-dependent saliency map extraction.

A mapping \(m\) obtained with a classifier \(f\) may differ from a mapping \(m^{\prime}\) found using \(f^{\prime}\), even if both classifiers are equally good with respect to the classification loss on both original and masked images, i.e., \(L(0,f) = L(0,f^{\prime})\) and \(L(m,f) = L(m^{\prime},f^{\prime})\), where

\[L(m,f) = \frac{1}{N}\sum_{n = 1}^{N}l(f((1 - m(x_{n}))\odot x_{n}),y_{n}). \quad (3)\]

This property conflicts with our definition of the mapping \(m\) above, which requires that every pixel that helps classification be marked with 1 in the mask (the saliency map). This situation can arise because two equally good but distinct classifiers may use different subsets of input pixels to perform classification.
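To make Eqs. (2) and (3) concrete, below is a minimal PyTorch-style sketch of the classifier-dependent score for a single mini-batch. All names (`mapping`, `classifier`, `lambda_r`) are illustrative rather than taken from any released code, and we instantiate \(R\) as a negated \(L^{1}\) penalty, anticipating the concrete choice made in Eq. (7) below.

```python
import torch
import torch.nn.functional as F

def classifier_dependent_score(mapping, classifier, x, y, lambda_r=1.0):
    """Mini-batch estimate of S(m, f) from Eq. (2).

    x: images of shape (B, 3, H, W); y: integer labels of shape (B,).
    mapping(x) returns a mask of shape (B, 1, H, W) with values in [0, 1].
    """
    mask = mapping(x)                       # m(x)
    logits = classifier((1.0 - mask) * x)   # f((1 - m(x)) . x)
    loss = F.cross_entropy(logits, y)       # l(., y): high when f is confused
    reg = -lambda_r * mask.abs().mean()     # R(m(x)): penalizes large masks
    return loss + reg                       # maximized with respect to m
```

Maximizing this score with respect to the mapping's parameters, while holding the classifier fixed, recovers the classifier-dependent baseline evaluated in Section 4.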
An example. This behavior can be explained intuitively with a simple example illustrating an extreme special case. Consider a data set in which every instance consists of two identical copies of an image concatenated together, that is, \(x_{n} = [x_{n}^{\prime};x_{n}^{\prime}]\) for all \(n\), where \(x_{n}^{\prime}\in \mathbb{R}^{W / 2\times H\times 3}\). For such a data set, there exist at least two classifiers, \(f\) and \(f^{\prime}\), with the same classification loss: the classifier \(f\) uses only the left half of the image, while \(f^{\prime}\) uses only the right half. Each of the corresponding mappings, \(m\) and \(m^{\prime}\), would then indicate a region of interest only on the corresponding half of the image. When an input image does not consist of two concatenated copies of the same image, it is unlikely that two equally good classifiers will use disjoint sets of input pixels; our example merely shows an extreme case in which this is possible.

### 2.2 CLASSIFIER-AGNOSTIC SALIENCY MAP EXTRACTION

In order to address the saliency mapping's dependence on a single classifier, we alter the objective function in Eq. (1) to consider not just a single fixed classifier but all possible classifiers, weighted by their posterior probabilities. That is,

\[m = \arg \max_{m^{\prime}}\mathbb{E}_{f}\left[S(m^{\prime},f)\right], \quad (4)\]

where the posterior probability \(p(f|D,m^{\prime})\) is defined to be proportional to the exponentiated negative classification loss, i.e., \(p(f|D,m^{\prime})\propto p(f)\exp (- L(m^{\prime},f))\). Solving this optimization problem is equivalent to searching over the space of all possible classifiers and finding a mapping \(m\) that works with all of them. As we parameterize \(f\) as a convolutional network (with parameters denoted by \(\theta_{f}\)), the space of all possible classifiers is isomorphic to the space of its parameters. Since the proposed approach considers all classifiers, we call it classifier-agnostic saliency map extraction.

In the simple example above, where each image contains two copies of a smaller image, the two classifiers \(f\) and \(f^{\prime}\), which respectively look at one and the other half of an image, have the same posterior probability. Solving Eq. (4) then implies that a mapping \(m\) must maximize the score \(S\) for both of these classifiers.

### 2.3 ALGORITHM

The optimization problem in Eq. (4) is, unfortunately, generally intractable. This arises from the intractable expectation over the posterior distribution. Furthermore, the expectation sits inside the optimization loop for the mapping \(m\), making the problem even harder. We therefore solve it approximately by simultaneously estimating the mapping \(m\) and the expected objective. First, we sample one \(f^{(k)}\) from the posterior \(p(f|D,m^{(k - 1)})\) by taking a single step of stochastic gradient descent (SGD) on the classification loss with respect to \(\theta_{f}\) (the classifier parameters) with a small step size:

\[\theta_{f^{(k)}}\leftarrow \theta_{f^{(k - 1)}} - \eta_{f}\nabla_{\theta_{f}}L(m^{(k - 1)},f^{(k - 1)}). \quad (5)\]
This is motivated by earlier work (Welling & Teh, 2011; Mandt et al., 2017) showing that SGD performs approximate Bayesian posterior inference. After \(k\) steps we have up to \(k + 1\) samples from the posterior distribution, \(F^{(k)} = \left\{f^{(0)},\ldots ,f^{(k)}\right\}\). We sample \(f^{\prime}\in F^{(k)}\) to obtain a single-sample estimate of the expectation in Eq. (4) by computing \(S(m^{(k - 1)},f^{\prime})\). We then use it to update \(\theta_{m}\) (the mapping parameters):

\[\theta_{m^{(k)}}\leftarrow \theta_{m^{(k - 1)}} + \eta_{m}\nabla_{\theta_{m}}S(m^{(k - 1)},f^{\prime}). \quad (6)\]

We alternate between these two steps until \(m^{(k)}\) converges. The full procedure is given in Alg. 1.

Algorithm 1: Classifier-agnostic saliency map extraction
input: an initial classifier \(f^{(0)}\), an initial mapping \(m^{(0)}\), dataset \(D\), number of iterations \(K\)
output: the final mapping \(m^{(K)}\)
Initialize a sample set \(F^{(0)} = \{f^{(0)}\}\).
for \(k\gets 1\) to \(K\) do
  \(\theta_{f^{(k)}}\leftarrow \theta_{f^{(k - 1)}} - \eta_{f}\nabla_{\theta_{f}}L(m^{(k - 1)},f^{(k - 1)})\)
  \(F^{(k)}\leftarrow F^{(k - 1)}\cup \{f^{(k)}\}\)
  \(f^{\prime}\leftarrow \mathrm{Sample}(F^{(k)})\)
  \(\theta_{m^{(k)}}\leftarrow \theta_{m^{(k - 1)}} + \eta_{m}\nabla_{\theta_{m}}S(m^{(k - 1)},f^{\prime})\)
  \(F^{(k)}\leftarrow \mathrm{Thin}(F^{(k)})\)
end

Note that our algorithm resembles the training procedure of GANs (Goodfellow et al., 2014), where the mapping \(m\) takes the role of the generator and the classifier \(f\) can be understood as the discriminator. Fan et al. (2017) also applied adversarial training to obtain better saliency maps; the relation to that work is discussed in detail in Section 6.
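The following is a minimal PyTorch sketch of Alg. 1 under some concrete assumptions of ours: the sample set \(F^{(k)}\) is realized as a pool of classifier `state_dict` snapshots with L100-style thinning (defined below), and `score_fn` and `loss_fn` stand for the score and classification loss, e.g., Eq. (7) and Eq. (8) introduced in the remainder of this section. None of the identifiers come from any released code.

```python
import copy
import random
import torch

def train_casm(mapping, classifier, loader, score_fn, loss_fn,
               eta_f=1e-3, eta_m=1e-3, snapshot_every=100, pool_size=30):
    """Sketch of Alg. 1: alternating updates of the classifier (Eq. 5)
    and the mapping (Eq. 6), with a snapshot pool realizing F^(k)."""
    opt_f = torch.optim.SGD(classifier.parameters(), lr=eta_f, momentum=0.9)
    opt_m = torch.optim.Adam(mapping.parameters(), lr=eta_m)
    pool = [copy.deepcopy(classifier.state_dict())]       # F^(0) = {f^(0)}

    for k, (x, y) in enumerate(loader, start=1):
        # Classifier step (Eq. 5): one SGD step on L(m^(k-1), f^(k-1)).
        mask = mapping(x).detach()                        # no grad to m here
        loss = loss_fn(classifier((1 - mask) * x), classifier(x), y)
        opt_f.zero_grad(); loss.backward(); opt_f.step()

        # Grow and thin the sample set F^(k): keep a snapshot every
        # `snapshot_every` steps, evict a random one beyond `pool_size`.
        if k % snapshot_every == 0:
            pool.append(copy.deepcopy(classifier.state_dict()))
            if len(pool) > pool_size:
                pool.pop(random.randrange(len(pool)))

        # Mapping step (Eq. 6): ascend S(m^(k-1), f') for a sampled f'.
        f_prime = copy.deepcopy(classifier)
        f_prime.load_state_dict(random.choice(pool))
        mask = mapping(x)
        score = score_fn(mask, f_prime((1 - mask) * x), f_prime(x))
        opt_m.zero_grad(); (-score).backward(); opt_m.step()
    return mapping
```

In the actual experiments the classifier step uses the mixed loss of Eq. (8) and the mapping step the entropy-based score of Eq. (7); possible implementations of both are sketched after those equations.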
Score function. The score function \(S(m,f)\) estimates the quality of the saliency map extracted by \(m\) given a data set and a classifier \(f\). It must be designed to balance precision and recall. Precision refers to the fraction of relevant pixels among those marked by \(m\) as relevant, while recall is the fraction of pixels correctly marked by \(m\) as relevant among all the relevant pixels. To balance the two, the score function typically consists of two terms.

The first term aims to ensure that all relevant pixels are included (high recall). As in Eq. (2), a popular choice has been the classification loss on an input image \(x\) masked out by \(1 - m(x)\). In our preliminary experiments, however, we noticed that this approach leads to masks with adversarial artifacts. We hence propose to use the entropy \(\mathcal{H}(f((1 - m(x))\odot x))\) instead. This makes the generated masks cover all salient pixels in the input, avoiding masks that merely sway the class prediction to a different but semantically close class, for example from one dog species to another.

The second term excludes the trivial solution in which \(m\) outputs an all-ones saliency map, which would achieve maximal recall with very low precision. To prevent it, we must introduce a regularization term \(R(m)\). Popular choices include total variation (Rudin et al., 1992) and the \(L^{1}\) norm; for simplicity, we use only the latter. In summary, we use the following score function for classifier-agnostic saliency map extraction:

\[S(m,f) = \frac{1}{N}\sum_{n = 1}^{N}\left[\mathcal{H}\big(f((1 - m(x_{n}))\odot x_{n})\big) - \lambda_{R}\| m(x_{n})\|_{1}\right], \quad (7)\]

where \(\lambda_{R} > 0\) is a regularization coefficient.

Thinning. As the algorithm collects a set of classifiers \(f^{(k)}\) from the posterior distribution, we need a strategy for keeping only a small subset of them. An obvious approach would be to keep all classifiers, but this does not scale with the number of iterations. We propose and empirically evaluate a few strategies. The first three assume a fixed size of \(F^{(k)}\): keeping the first classifier only, denoted by \(\mathbf{F}\) \((F^{(k)} = \{f^{(0)}\})\); the last only, denoted by \(\mathbf{L}\) \((F^{(k)} = \{f^{(k)}\})\); and the first and last only, denoted by \(\mathbf{FL}\) \((F^{(k)} = \{f^{(0)},f^{(k)}\})\). As an alternative, we also consider a growing set of classifiers in which we keep one snapshot every 1000 iterations (denoted by L1000) but, whenever \(|F^{(k)}| = 30\), randomly remove one from the set. Analogously, we experiment with L100.

Classification loss. Although we described our approach using the classification loss computed only on masked images, as in Eq. (3), the loss need not be defined exactly this way. In preliminary experiments we noticed that the following alternative formulation, inspired by adversarial training (Szegedy et al., 2013), works better:

\[L(m,f) = \frac{1}{2N}\sum_{n = 1}^{N}\left[l(f((1 - m(x_{n}))\odot x_{n}),y_{n}) + l(f(x_{n}),y_{n})\right]. \quad (8)\]

We thus use the loss defined above in the experiments. We conjecture that it is advantageous over the original loss in Eq. (3) because the additional term prevents degradation of the classifier's performance on the original, unmasked images, while the first term encourages the classifier to collect new pieces of evidence from the masked images.

## 3 EXPERIMENTAL SETTINGS

Dataset: ImageNet. Our models were trained on the official ImageNet training set with ground-truth class labels (Deng et al., 2009). We evaluate them on the validation set. Depending on the experiment, we use ground-truth class or localization labels.

Reproducibility. We made our code publicly available at MASKED.

### 3.1 ARCHITECTURES

Classifier \(f\) and mapping \(m\). We use ResNet-50 (He et al., 2016) as the classifier \(f\) in our experiments. We follow an encoder-decoder architecture for constructing the mapping \(m\). The encoder is also implemented as a ResNet-50, so its weights can either be shared with the classifier or kept separate; we find experimentally that sharing is beneficial. The decoder is a deep deconvolutional network that ultimately outputs the mask of an input image. The overall architecture is shown in Fig. 1. Details of the architecture and the training procedure are given in the appendix.

Regularization coefficient \(\lambda_{R}\). As noticed by Fan et al. (2017), it is not trivial to find an optimal regularization coefficient \(\lambda_{R}\). They proposed an adaptive strategy that removes the need to select \(\lambda_{R}\) manually. We, however, find this undesirable due to the lack of control over the average size of the saliency map. Instead, we control the average number of relevant pixels by setting \(\lambda_{R}\) manually, while applying the regularization term \(R(m(x))\) only when there is a disagreement between \(f(x)\) and \(f((1 - m(x))\odot x)\). We then set \(\lambda_{R}\) for each experiment such that approximately \(50\%\) of the pixels in each image are marked as relevant by the mapping \(m\). In preliminary experiments we further noticed that this approach avoids the problematic behavior on images containing small objects observed earlier by Fong & Vedaldi (2017). We also noticed that the training of the mapping \(m\) is more stable and effective when we use only images on which the classifier is not trivially confused, i.e., it predicts the correct class for the original image.
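A minimal sketch of these two ingredients follows, under our own conventions (batch-mean estimates, an \(L^{1}\) term normalized by the number of pixels, and a placeholder value for \(\lambda_{R}\), which the paper tunes per experiment). `entropy_score` implements Eq. (7) with the disagreement gating described above and can serve as the `score_fn` of the earlier training-loop sketch; `mixed_loss` implements Eq. (8).

```python
import torch
import torch.nn.functional as F

def entropy_score(mask, logits_masked_out, logits_clean, lambda_r=0.1):
    """Eq. (7) with gating: the L1 penalty on the mask is applied only on
    images where masking changed the classifier's decision."""
    p = F.softmax(logits_masked_out, dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1)   # H(f((1-m) . x))
    disagree = (logits_masked_out.argmax(dim=1)
                != logits_clean.argmax(dim=1)).float()
    l1 = mask.flatten(1).mean(dim=1)                      # ||m(x)||_1 / (WH)
    return (entropy - lambda_r * disagree * l1).mean()

def mixed_loss(logits_masked_out, logits_clean, y):
    """Eq. (8): average the classification loss on masked-out and clean
    images, so the classifier keeps performing well on unmasked inputs."""
    return 0.5 * (F.cross_entropy(logits_masked_out, y)
                  + F.cross_entropy(logits_clean, y))
```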
### 3.2 EVALUATION

In our experiments we use only the single architecture explained in subsection 3.1. We use the abbreviation CASM (classifier-agnostic saliency mapping) to denote the final model obtained with the proposed method. Our baseline model (Baseline) has the same architecture and is trained with a fixed classifier (classifier-dependent saliency mapping), realized by following thinning strategy \(\mathbf{F}\).

![](images/4_0.jpg)
<center>Figure 1: The overall architecture. The mapping \(m\) consists of an encoder and a decoder and is shown at the top with a gray background. The additional forward pass (in which the classifier acts on the masked-out image) is needed during training only. </center>

Following previous work (Cao et al., 2015; Fong & Vedaldi, 2017; Zhang et al., 2016), we discretize our mask by computing

\[b_{i j}(x) = \left\{ \begin{array}{l l}{1,} & {\text{if } m_{i j}(x)\geq \alpha \overline{m} (x),}\\ {0,} & {\text{otherwise},} \end{array} \right.\]

where \(\overline{m} (x)\) is the average mask intensity and \(\alpha\) is a hyperparameter. We simply set \(\alpha\) to 1, so the average pixel intensity is the same for the input mask \(m(x)\) and the discretized binary mask \(b(x)\). To focus on the most dominant object, we take the largest connected component of the binary mask, obtaining the binary connected mask.
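A small sketch of this post-processing, assuming a NumPy mask and SciPy's connected-component labeling; the tightest bounding box around the resulting component is the one used for localization in the evaluation below.

```python
import numpy as np
from scipy import ndimage

def binary_connected_mask(mask, alpha=1.0):
    """Threshold the mask at alpha times its mean intensity and keep only
    the largest connected component. mask: 2-D array with values in [0, 1]."""
    b = (mask >= alpha * mask.mean()).astype(np.uint8)   # b_ij(x)
    labels, n = ndimage.label(b)
    if n == 0:
        return b
    sizes = ndimage.sum(b, labels, index=range(1, n + 1))
    return (labels == 1 + int(np.argmax(sizes))).astype(np.uint8)

def tightest_box(b):
    """Tightest bounding box (r0, c0, r1, c1) covering the binary mask."""
    rows, cols = np.nonzero(b)
    return rows.min(), cols.min(), rows.max(), cols.max()
```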
Visualization. We visualize the learned mapping \(m\) by inspecting the saliency map of each image in three ways. First, we visualize the masked-in image \(b(x)\odot x\), which ideally leaves only the relevant pixels visible. Second, we visualize the masked-out image \((1 - b(x))\odot x\), which ideally contains only pixels irrelevant to classification. Third, we visualize the masked-out image after inpainting with the algorithm of Telea (2004). This allows us to inspect whether the masked-out object can be easily reconstructed from nearby pixels.

Classification by multiple classifiers. In order to verify our claim that the proposed approach yields a classifier-agnostic saliency mapping, we evaluate a set of classifiers on the validation sets of masked-in images, masked-out images, and inpainted masked-out images. If our claim is correct, we expect the inpainted masked-out images created by our method to break these classifiers, while the masked-in images should cause minimal performance degradation.

Object localization. As the saliency map can be used to find the most dominant object in an image, we also evaluate our approach on the task of weakly supervised localization, using the ILSVRC'14 localization task. We compute the bounding box of an object as the tightest box that covers the binary connected mask. We use three metrics to quantify the quality of localization. First, we use the official metric (OM) from the ImageNet localization challenge, which considers a localization successful if at least one ground-truth bounding box has an IOU higher than 0.5 with the predicted bounding box and the class prediction is correct. Since OM depends on the classifier, from which we have sought to make our mapping independent, we also use another widely used metric, the localization error (LE), which depends only on the bounding box prediction (Cao et al., 2015; Fong & Vedaldi, 2017). Lastly, we evaluate the original saliency map, in which each mask pixel is a continuous value between 0 and 1, by the continuous F1 score, with precision \(P\) and recall \(R\) defined as

\[P = \frac{\sum_{(i,j)\in B^{*}(x)}m_{i j}(x)}{\sum_{i j}m_{i j}(x)} \quad \text{and} \quad R = \frac{\sum_{(i,j)\in B^{*}(x)}m_{i j}(x)}{|B^{*}(x)|},\]

where \(B^{*}(x)\) is a ground-truth bounding box. We compute the F1 score against every ground-truth bounding box of an image and report the highest of these as the image's final score.

![](images/5_0.jpg)
<center>Figure 2: The original images are in the first row. The following rows show masked-in images, masked-out images and inpainted masked-out images, respectively. Note that the proposed approach (a-b) removes all relevant pixels, so the inpainted images show only the background. Seven randomly selected consecutive images from the validation set are presented here; see the appendix for additional visualizations. </center>
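The continuous F1 computation is sketched below, under the assumption that ground-truth boxes are given as inclusive (row, column) extents; the box format and helper name are ours.

```python
import numpy as np

def continuous_f1(mask, boxes):
    """Highest continuous F1 of a saliency map over ground-truth boxes.

    mask: 2-D array with values in [0, 1];
    boxes: iterable of (r0, c0, r1, c1) inclusive ground-truth boxes."""
    total = mask.sum()
    best = 0.0
    for r0, c0, r1, c1 in boxes:
        inside = mask[r0:r1 + 1, c0:c1 + 1].sum()            # mass in B*(x)
        p = inside / max(total, 1e-8)                        # precision P
        r = inside / float((r1 - r0 + 1) * (c1 - c0 + 1))    # recall R
        if p + r > 0:
            best = max(best, 2 * p * r / (p + r))
    return best
```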
## 4 RESULTS AND ANALYSIS

Visualization and statistics. We randomly select seven consecutive images from the validation set and feed them to two instances of CASM (each using a different thinning strategy, L or L100) and to Baseline. We visualize the original (clean), masked-in, masked-out, and inpainted masked-out images in Fig. 2. The proposed approach produces clearly better saliency maps, while the classifier-dependent approach (Baseline) produces so-called adversarial masks (Dabkowski & Gal, 2017).

We further compute several statistics of the saliency maps generated by CASM and Baseline over the validation set. The masks extracted by CASM exhibit lower total variation (\(2.5 \times 10^{3}\) vs. \(7.0 \times 10^{3}\)), indicating that CASM produces more regular masks despite the lack of explicit TV regularization. The entropy of the mask pixel intensities is much smaller for CASM (0.05 vs. 0.21), indicating that the intensities are closer to either 0 or 1 on average. Furthermore, the standard deviation of the masked-out volume is larger for CASM (0.19 vs. 0.14), indicating that CASM is capable of producing saliency maps whose sizes vary with the input images.

![](images/5_1.jpg)
<center>Figure 3: The classification accuracy of ImageNet-trained convolutional networks on masked-in images (left), masked-out images (center) and inpainted masked-out images (right). Orange and blue dots correspond to ResNet-50 models and to all other types of convolutional networks, respectively. We observe that the inpainted masked-out images obtained using Baseline are easier to classify than those using CASM, because Baseline, unlike CASM, fails to mask out all the relevant pixels. On the right panel, most of the classifiers evaluated on images with random masks achieve accuracy higher than 40% and are not shown. We add jitter along the x-axis to make the dots visually distinct. </center>

Table 1: Localization evaluation using OM and LE scores. For prior work we report the better of the accuracies reported in the original paper and by Fong & Vedaldi (2017).

<table><tr><td>Model</td><td>OM↓</td><td>LE↓</td></tr><tr><td>Ours:</td><td></td><td></td></tr><tr><td>Baseline</td><td>62.7</td><td>53.5</td></tr><tr><td>CASM</td><td>48.6</td><td>36.1</td></tr><tr><td>Fan et al. (2017)</td><td>54.5</td><td>43.5</td></tr><tr><td>Weakly supervised:</td><td></td><td></td></tr><tr><td>Zeiler & Fergus (2014)</td><td>-</td><td>48.6</td></tr><tr><td>Zhou et al. (2016)</td><td>56.4</td><td>48.1</td></tr><tr><td>Selvaraju et al. (2017)</td><td>-</td><td>47.5</td></tr><tr><td>Fong & Vedaldi (2017)</td><td>-</td><td>43.1</td></tr><tr><td>Mahendran & Vedaldi (2016)</td><td>-</td><td>42.0</td></tr><tr><td>Simonyan et al. (2013)</td><td>-</td><td>41.7</td></tr><tr><td>Cao et al. (2015)</td><td>-</td><td>38.8</td></tr><tr><td>Zhang et al. (2016)</td><td>-</td><td>38.7</td></tr><tr><td>Dabkowski & Gal (2017)</td><td>-</td><td>36.7</td></tr><tr><td>Supervised:</td><td></td><td></td></tr><tr><td>Simonyan & Zisserman (2014)</td><td>-</td><td>34.3</td></tr></table>

Table 2: Ablation study. S refers to the choice of score function (E: entropy, C: classification loss), Shr to whether the encoder and classifier are shared (Y: yes, N: no), and Thin to the thinning strategy.

<table><tr><td></td><td>S</td><td>Shr</td><td>Thin</td><td>OM↓</td><td>LE↓</td><td>F1↑</td></tr><tr><td>(a)</td><td>E</td><td>Y</td><td>F</td><td>62.7</td><td>53.5</td><td>49.0</td></tr><tr><td>(b)</td><td>E</td><td>Y</td><td>L</td><td>49.0</td><td>36.5</td><td>61.7</td></tr><tr><td>(c)</td><td>E</td><td>Y</td><td>FL</td><td>52.7</td><td>41.3</td><td>57.2</td></tr><tr><td>(d)</td><td>E</td><td>Y</td><td>L1000</td><td>48.7</td><td>36.2</td><td>61.6</td></tr><tr><td>(e)</td><td>E</td><td>Y</td><td>L100</td><td>48.6</td><td>36.1</td><td>61.4</td></tr><tr><td>(f)</td><td>C</td><td>Y</td><td>F</td><td>80.8</td><td>75.9</td><td>42.5</td></tr><tr><td>(g)</td><td>C</td><td>Y</td><td>L</td><td>49.5</td><td>37.0</td><td>62.2</td></tr><tr><td>(h)</td><td>C</td><td>Y</td><td>L100</td><td>49.7</td><td>37.3</td><td>62.1</td></tr><tr><td>(i)</td><td>E</td><td>N</td><td>F</td><td>-</td><td>55.5</td><td>-</td></tr><tr><td>(j)</td><td>E</td><td>N</td><td>L</td><td>-</td><td>47.2</td><td>-</td></tr><tr><td>(k)</td><td>E</td><td>N</td><td>L100</td><td>-</td><td>46.8</td><td>-</td></tr></table>

Classification. As shown in the left panel of Figure 3, the whole set of classifiers degrades less on the masked-in images produced by CASM than on those produced by Baseline. We notice, however, that most of the classifiers fail to classify the masked-out images produced by Baseline, which we conjecture is due to the adversarial nature of the saliency maps produced by the Baseline approach. This is confirmed by the right panel, which shows that simply inpainting the masked-out images dramatically increases accuracy when the saliency maps were produced by Baseline. The inpainted masked-out images from CASM, on the other hand, do not benefit from inpainting, because they truly retain no useful evidence for classification.

Localization. We report the localization performance of CASM, Baseline, and prior works in Table 1 using two different metrics. Unlike our work, most of the existing approaches, except for Fan et al. (2017), assume knowledge of the target class. CASM performs better than all prior approaches, including the classifier-dependent Baseline, and the difference is statistically significant: across ten separate training runs with random initialization, the worst LE score is 36.3 and the best 36.0, with an average of 36.1. The fully supervised approach is the only one that outperforms CASM.

Thinning strategies. In Table 2 (a-e) we compare the five thinning strategies described earlier, where F is equivalent to Baseline. According to the LE and OM metrics, the strategies L100 and L1000 perform better than the others, closely followed by L. These three strategies also perform best in terms of F1.

Sharing the encoder and classifier. As is clear from Fig. 1, it is not necessary to share the parameters of the encoder and the classifier. Our experiments, however, reveal that sharing them is always beneficial, as shown in Table 2.

Score function. Unlike Fan et al. (2017), we use separate score functions for training the classifier and the saliency mapping.
We empirically observe in Table 2 that the proposed use of entropy as the score function results in a better mapping in terms of OM and LE. The gap, however, narrows as better thinning strategies are used. On the other hand, the classification loss is better in terms of F1, as it makes CASM focus on the dominant object only: because we take the highest score over the ground-truth bounding boxes, concentrating on the dominant object yields higher scores.

## 5 UNSEEN CLASSES

Since the proposed approach does not require knowing the class of the object to be localized, we can apply it to images containing objects of classes that were seen during training by neither the classifier \(f^{(k)}\) nor the mapping \(m\). We explicitly test this capability by training five different CASMs on five subsets of the original ImageNet training set.

Table 3: Localization errors (LE in \(\%\), \(\downarrow\)) of the models trained on subsets of classes. Each row corresponds to a training subset of classes and each column to a test subset of classes. Error rates on previously unseen classes are marked with gray shading.

<table><tr><td></td><td>A</td><td>B</td><td>C</td><td>D</td><td>E</td><td>F</td><td>All</td></tr><tr><td>F</td><td>46.5</td><td>46.4</td><td>48.1</td><td>45.0</td><td>45.7</td><td>41.3</td><td>44.9</td></tr><tr><td>E, F</td><td>39.5</td><td>41.2</td><td>43.1</td><td>40.3</td><td>39.5</td><td>38.7</td><td>40.0</td></tr><tr><td>D, E, F</td><td>37.9</td><td>39.3</td><td>40.0</td><td>38.0</td><td>38.0</td><td>37.4</td><td>38.1</td></tr><tr><td>C, D, E, F</td><td>38.2</td><td>38.5</td><td>39.9</td><td>37.9</td><td>37.9</td><td>37.8</td><td>38.1</td></tr><tr><td>B, C, D, E, F</td><td>36.7</td><td>36.8</td><td>39.9</td><td>37.4</td><td>37.0</td><td>37.0</td><td>37.4</td></tr><tr><td>-</td><td>35.6</td><td>36.1</td><td>39.0</td><td>37.0</td><td>36.6</td><td>36.7</td><td>36.9</td></tr></table>

We first divide the 1000 classes into six disjoint subsets (denoted A, B, C, D, E and F) of sizes 50, 50, 100, 300, 300 and 200, respectively. We train our models (in all stages) on \(95\%\) of the images (classes in B, C, D, E and F), \(90\%\) of the images (classes in C, D, E and F), \(80\%\) of the images (classes in D, E and F), \(50\%\) of the images (classes in E and F), and finally on only \(20\%\) of the images (classes in F only). We then test each saliency mapping on all six subsets of classes independently, using the thinning strategy L in each case for computational efficiency. All models generalize well, and the difference between their accuracy on seen and unseen classes is negligible (with the exception of the model trained on only \(20\%\) of the classes, whose overall performance is a little poorer, which can be explained by the smaller training set). In Table 3 we see that the proposed approach works well even for localizing objects from previously unseen classes. The gap in localization error between seen and unseen classes grows as the training set shrinks; with a reasonably sized training set, however, the difference is small. This is an encouraging sign for the proposed model as a class-agnostic saliency mapping.

## 6 RELATED WORK

The adversarial localization network of Fan et al. (2017) is perhaps the work most closely related to ours. Similarly to us, they simultaneously train the classifier and the saliency mapping, which therefore does not require the object's class at test time. There are four major differences between that work and ours.
First, we use the entropy as the score function for training the mapping, whereas they used the classification loss; as shown earlier, this yields better saliency maps. Second, we make the training procedure faster by tying the weights of the encoder and the classifier, which also results in much better performance. Third, we do not let the classifier drift toward the distribution of masked-out images, because we continue training it on both clean and masked-out images. Finally, their mapping relies on superpixels to build more contiguous masks, which may miss small details due to inaccurate segmentation and makes the entire procedure more complex; our approach works solely on raw pixels, without requiring any extra tricks.

Dabkowski & Gal (2017) also train a separate neural network dedicated to predicting saliency maps. However, their approach is classifier-dependent and, as such, much effort is devoted to preventing the generation of adversarial masks. Furthermore, they use a complex training objective with multiple hyperparameters that have to be tuned carefully. Finally, their model needs a ground-truth class label, which limits its use in practice.

## 7 CONCLUSIONS

In this paper, we proposed a new framework for classifier-agnostic saliency map extraction, which aims at finding a saliency mapping that works for all possible classifiers weighted by their posterior probabilities. We designed a practical algorithm that amounts to simultaneously training a classifier and a saliency mapping using stochastic gradient descent. We qualitatively observed that the proposed approach extracts saliency maps that cover all the relevant pixels in an image and that the masked-out images cannot be easily recovered by inpainting, unlike for classifier-dependent approaches. We further observed that the proposed saliency map extraction procedure outperforms all existing weakly supervised approaches to object localization and can also be used on images containing objects from previously unseen classes, paving the way toward class-agnostic saliency map extraction.

## REFERENCES

Chunshui Cao, Xianming Liu, Yi Yang, Yinan Yu, Jiang Wang, Zilei Wang, Yongzhen Huang, Liang Wang, Chang Huang, Wei Xu, et al. Look and think twice: Capturing top-down visual attention with feedback convolutional neural networks. In International Conference on Computer Vision, 2015.

Piotr Dabkowski and Yarin Gal. Real time image saliency for black box classifiers. In Neural Information Processing Systems, 2017.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009.

Lijie Fan, Shengjia Zhao, and Stefano Ermon. Adversarial localization network. In Learning with Limited Labeled Data: Weak Supervision and Beyond, NIPS Workshop, 2017.

Ruth C Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. arXiv preprint arXiv:1704.03296, 2017.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Neural Information Processing Systems, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition, 2016.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Neural Information Processing Systems, 2012.

Aravindh Mahendran and Andrea Vedaldi. Salient deconvolutional networks. In European Conference on Computer Vision, 2016.

Stephan Mandt, Matthew D Hoffman, and David M Blei. Stochastic gradient descent as approximate Bayesian inference. The Journal of Machine Learning Research, 2017.

Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60, 1992.

Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In International Conference on Computer Vision, pp. 618-626, 2017.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Computer Vision and Pattern Recognition, 2015.

Alexandru Telea. An image inpainting technique based on the fast marching method. Journal of Graphics Tools, 2004.

Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In International Conference on Machine Learning, 2011.

Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, 2014.

Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. Top-down neural attention by excitation backprop. In European Conference on Computer Vision, 2016.

Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Computer Vision and Pattern Recognition, 2016.

## APPENDIX

## ARCHITECTURE AND TRAINING PROCEDURE

Classifier \(f\) and mapping \(m\). As mentioned before, we use ResNet-50 (He et al., 2016) as the classifier \(f\) in our experiments. We follow the encoder-decoder architecture for constructing the mapping \(m\). The encoder is also implemented as a ResNet-50, so its weights can either be shared with the classifier or kept separate; we find experimentally that sharing is beneficial. The decoder is a deep deconvolutional network that ultimately outputs the mask of an input image. The input to the decoder consists of all the hidden layers of the encoder that are directly followed by a downscaling operation. We upsample these feature maps to a common size and concatenate them into a single feature map \(H\): each is first processed by a \(1 \times 1\) convolution with 64 filters, followed by batch normalization and a ReLU non-linearity, and then rescaled to \(56 \times 56\) pixels using bilinear interpolation. Finally, a single \(3 \times 3\) convolutional filter followed by a sigmoid activation is applied to \(H\), and the output is upscaled to a \(224 \times 224\)-pixel mask using proximal interpolation. The overall architecture is shown in Fig. 1.
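A PyTorch sketch of this decoder follows, under two assumptions of ours: we take the four post-downscaling ResNet-50 feature maps with channel counts (256, 512, 1024, 2048), and we read "proximal interpolation" as nearest-neighbour upsampling; neither detail is fully specified in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskDecoder(nn.Module):
    """Decoder sketch: reduce each encoder feature map with a 1x1 conv + BN +
    ReLU, rescale to 56x56, concatenate, then apply a 3x3 conv + sigmoid."""
    def __init__(self, channels=(256, 512, 1024, 2048)):
        super().__init__()
        self.reduce = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, 64, kernel_size=1),
                          nn.BatchNorm2d(64),
                          nn.ReLU(inplace=True))
            for c in channels)
        self.head = nn.Conv2d(64 * len(channels), 1, kernel_size=3, padding=1)

    def forward(self, features):
        # features: list of encoder feature maps, ordered as in `channels`.
        h = torch.cat(
            [F.interpolate(r(f), size=(56, 56), mode='bilinear',
                           align_corners=False)
             for r, f in zip(self.reduce, features)], dim=1)
        mask = torch.sigmoid(self.head(h))       # single 3x3 conv + sigmoid
        # Upscale to the 224x224 input resolution; we assume "proximal
        # interpolation" refers to nearest-neighbour upsampling.
        return F.interpolate(mask, size=(224, 224), mode='nearest')
```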
Training procedure. We initialize the classifier \(f^{(0)}\) by training it on the entire training set; in practice we use the pretrained ResNet-50 from the torchvision model zoo. We find that this pretraining strategy facilitates learning, particularly in the early stages. We use vanilla SGD with a small learning rate of 0.001 (momentum coefficient 0.9, weight-decay coefficient \(10^{- 4}\)) to continue training the classifier with the mixed classification loss of Eq. (8). To train the mapping \(m\) we use Adam (Kingma & Ba, 2014) with learning rate \(l_{0} = 0.001\), weight-decay coefficient \(10^{- 4}\), and all other hyperparameters set to their default values. We fix the number of training epochs to 70, where each epoch covers a random \(20\%\) of the training set.
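In code, this optimization setup could look as follows; the hyperparameter values come from the text, while the exact calls and the reuse of the `MaskDecoder` sketch above are our assumptions (the shared ResNet-50 encoder half of the mapping is omitted for brevity).

```python
import torch
import torchvision

classifier = torchvision.models.resnet50(pretrained=True)  # f^(0)
mapping = MaskDecoder()  # decoder half of the mapping m (encoder omitted)

# Classifier: vanilla SGD with momentum, continued on the mixed loss (Eq. 8).
opt_f = torch.optim.SGD(classifier.parameters(), lr=0.001,
                        momentum=0.9, weight_decay=1e-4)
# Mapping: Adam with default betas and a small weight decay.
opt_m = torch.optim.Adam(mapping.parameters(), lr=0.001, weight_decay=1e-4)
```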
## VISUALIZATIONS

In the remainder of the appendix we replicate the content of Fig. 2 for sixteen randomly chosen classes. That is, in each figure we visualize saliency maps obtained for seven consecutive images from the validation set. The original images are in the first row. In the following rows, masked-in images, masked-out images and inpainted masked-out images are shown. As before, we used two instances of CASM (each using a different thinning strategy – L or L100) and Baseline.

<--- Page Split --->
[Sixteen pages of figures follow, one per randomly chosen class; each page shows panels (a) CASM (L100), (b) CASM (L), and (c) Baseline.]
reject
Reject
4.333333
ICLR_2019_paper_1302
iclr
2019
# PIX2SCENE: LEARNING IMPLICIT 3D REPRESENTATIONS FROM A SINGLE IMAGE

Anonymous authors Paper under double-blind review

## ABSTRACT

We aim to model 3D properties of scenes from single 2D images. Learning 3D scenes from 2D images is a long-standing problem in computer vision with applications in related fields such as simulation and robotics. We propose pix2scene, a deep generative approach that represents the 3D scene in a learnt latent variable decoded into a viewpoint-dependent representation that can be rendered. Our method learns the depth of the scene and leverages a local smoothness assumption to extract the orientation of visible scene points. We achieve this using an encoder-decoder adversarial learning mechanism and a novel differentiable renderer to train the 3D model in an end-to-end fashion, using only images. We showcase the generative ability of our model qualitatively on the ShapeNet dataset (Chang et al., 2015). We also demonstrate that our model can predict the structure of scenes from various, previously unseen viewpoints. Finally, we evaluate the effectiveness of the learned 3D scene representation in supporting 3D spatial reasoning.

## 1 INTRODUCTION

Understanding the 3-dimensional (3D) world from its 2-dimensional (2D) projections is a fundamental problem in computer vision with a broad range of applications in robotics, simulation and design. Given that the majority of natural scene data is available exclusively in the form of 2D images, the ability to directly infer knowledge about 3D structure from these images would be of great utility in scene understanding. Inferring the 3D structure from multiple images of a scene has been pursued extensively, for example in stereo or structure-from-motion tasks (Hartley & Zisserman, 2004). Since most available natural image data informative about the real world comes with only a single view of a given scene, it is perhaps more important to explore the development of models which can infer 3D structural properties from a single image. On the other hand, single-image 3D recovery is an extremely challenging and heavily under-constrained task. The system has to rely on prior knowledge and 2D visual cues such as textures, shadows or occlusions in order to provide hints about the 3D structure of the scene. Practically, building a machine learning model that learns to infer 3D structure from images requires either a strong inductive bias or supervision. While some have used 3D ground truth as explicit supervision (Wu et al., 2016; 2015), in most cases of interest such supervision will not be available. Consequently, our long-term goal is to infer the 3D structure of realistic scenes from single images. In this paper we take a step in this direction via a method for unsupervised learning of 3D structure, directly from a single 2D image of each scene. Our method is based on the adversarial learning framework (Goodfellow et al., 2014) and exploits a uniquely suitable 3D representation (i.e., surfels (Pfister et al., 2000)) and a differentiable renderer. Most 3D reconstruction methods rely on representing 3D objects explicitly using either voxels (Rezende et al., 2016; Yan et al., 2016) or meshes (Kanazawa et al., 2018; Wang et al., 2018). Explicit representations store all the rendering-relevant information of a given 3D space and are easily transferable, i.e., they can be loaded into any 3D modeling software and viewed from any angle.
However, approaches using explicit representations typically scale very poorly (\(O(n^{3})\)) or require a sparse/discrete representation, which can be challenging for deep learning methods. As a result, these representations have only been applied to the reconstruction of single objects. As an alternative, we propose to learn an implicit 3D representation which produces only the 3D geometry that is directly relevant for a particular viewpoint. Our viewpoint-specific 3D geometry is captured <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: Implicit vs explicit representations. Explicit voxel and mesh representations are viewpoint-independent and constitute the complete scene. Our implicit surfel-based representation is viewpoint-dependent and adapts the resolution to the viewpoint. The full scene is contained in a high-dimensional latent variable; only when the scene is to be rendered is the latent variable serialized into surfels for a specific view. </center> using camera-facing surfels (Pfister et al., 2000), which are surface elements defined by their position, orientation and material properties. Given an image we can infer its implicit 3D representation and then recreate novel surfel representations of the underlying scene from unobserved viewpoints. In general, we note that in a 3D scene only a small fraction of the entities are perceivable from the camera. As the camera moves and occluded regions become visible, our method generates surfels for those newly unoccluded regions. Another advantage of this approach is that a minimal number of primitives (surfels) is required to obtain a high-resolution image as the camera moves closer to a part of the scene. Moreover, this representation fits well with image-based convolutional architectures. Our model, Pix2Scene, is a deep generative approach for modelling the 3D structure of a scene directly from images. The model is unsupervised in the sense that it does not require 3D ground truth or any other kind of image annotations. We base our model on the Adversarially Learned Inference (ALI) approach (Dumoulin et al., 2016). ALI extends the GAN (Goodfellow et al., 2014) framework by learning to infer the latent representation of a given image. In pix2scene the learned latent space embeds the 3D information of the underlying scene. The latent representation is mapped via a decoder network to a view-dependent 3D surface and then projected to image space by a differentiable renderer. The resulting image is then evaluated by an adversarial critic. While our long-term goal is to be able to infer the 3D structure of a real-world photograph, in this paper we experiment exclusively with synthetically constructed scenes and adopt several simplifying assumptions. In particular, we assume that the world is piece-wise smooth and that for each input image the illumination, view and object materials are known. This work makes the following main contributions: (1) we propose a novel unsupervised method for 3D understanding from a single image; (2) we propose a new implicit 3D representation based on view-space surfels; (3) we propose a surfel-based differentiable 3D renderer that can be used as a layer of a neural network; and (4) we propose 3D-IQTT, a new 3D understanding evaluation benchmark. This task evaluates the model's ability to perform mental rotation, which requires a comprehensive understanding of the underlying 3D structure. We also estimate the camera pose as part of the learnt latent variable for this particular task.
## 2 RELATED WORK

The generation and reconstruction of 3D objects from images has been studied extensively in the computer vision and graphics communities (Saxena et al., 2009; Chaudhuri et al., 2011; Kalogerakis et al., 2012; Chang et al., 2015; Rezende et al., 2016; Soltani et al., 2017). Our work bears some conceptual similarity to Kulkarni et al. (2015), which casts the 3D reconstruction problem as a more traditional inverse graphics task. Using Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014), they learn a representation of objects that disentangles factors of variation from images (i.e., object pose and configuration) and use the approach for specific transformations such as out-of-axis rotation. However, unlike their approach, ours is fully unsupervised and we implicitly generate <--- Page Split ---> the 3D structure of scenes from single images. Our mechanism learns a latent representation of the underlying scene, which can later be used to render from different views and lighting conditions. Similar to ours, Rezende et al. (2016) infer the 3D configuration at their output. They adopt a probabilistic inference framework to build a generative model for 3D by combining a standard projection mechanism with gradient estimation methods. In particular, their approach requires multiple runs with mechanisms such as REINFORCE (Williams, 1992) in order to infer gradients from the projection layer. In addition, their use of mesh and voxel representations could become an obstacle to scaling their method to more complex scenes. Our approach is not susceptible to the restrictions imposed by meshes or other scaling issues and has the potential to adapt to arbitrary scene configurations.

## 3 METHOD

### 3.1 IMPLICIT 3D REPRESENTATION AND SURFELS

Explicitly representing 3D structure presents different challenges for generative models (Kobbelt & Botsch, 2004). Representing entire objects using voxels scales poorly given the \(O(n^{3})\) complexity. The vast majority of the generated voxels are not relevant for most viewpoints, such as the voxels that are entirely inside objects. A common workaround is to use a sparse representation such as meshes. However, these too come with their own drawbacks, such as the need to discretise complex objects. This makes mesh representations difficult to generate using neural networks; current mesh-based methods mainly rely on deforming a pre-existing mesh. Our implicit approach, on the other hand, represents the 3D scene in a high-dimensional latent variable. In our framework, this latent variable (or vector) is decoded using a generator network into a viewpoint-dependent representation of surface elements — similar to surfels (Pfister et al., 2000) — that constitute the visible part of the scene. This representation is very compact: given a renderer's point of view, we represent only the part of the 3D surface needed by the renderer. Also, as the camera moves closer to a part of the scene, our generator allocates more surfels to represent that part of the scene, thereby increasing the resolution. Figure 1 illustrates these different representations. For descriptive purposes, surfels are shown as squares, but in general they do not have any shape. Formally, a surfel is represented as a tuple \((P,N,\rho)\), where \(P = (p_{x},p_{y},p_{z})\) is its 3D position, \(N = (n_{x},n_{y},n_{z})\) is the surface normal vector, and \(\rho = (k_{r},k_{g},k_{b})\) is the reflectance of the surface material.
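As an illustration, the surfel tuple and the depth-only parameterization described next can be sketched as follows. The names `Surfels` and `backproject` are ours, and the per-pixel unit ray directions are assumed to be precomputed from the camera intrinsics.

```python
from dataclasses import dataclass
import torch

@dataclass
class Surfels:
    """A batch of surfels, one per image pixel, in camera coordinates."""
    P: torch.Tensor    # (H*W, 3) positions (p_x, p_y, p_z)
    N: torch.Tensor    # (H*W, 3) unit surface normals (n_x, n_y, n_z)
    rho: torch.Tensor  # (H*W, 3) material reflectance (k_r, k_g, k_b)

def backproject(p_z: torch.Tensor, rays: torch.Tensor) -> torch.Tensor:
    """Recover full 3D positions from the predicted per-pixel distance p_z,
    taken along the unit camera ray through each pixel center."""
    return p_z.unsqueeze(-1) * rays  # (H*W, 1) * (H*W, 3) -> (H*W, 3)
```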
Since we are interested in modelling structural properties of the scenes, i.e., geometry and depth, we assume that objects in the scene have a uniform material. We represent the surfels in the camera coordinate system. This significantly reduces the number of surfels by considering only the ones that will get projected onto a pixel in the rendered image. Moreover, this allows us to reduce the position parameters to only \(p_{z}\), the distance along a ray going through the surfel to the center of its pixel.

### 3.2 DIFFERENTIABLE 3D RENDERER

As the critic operates only in image space, we need to project the generated 3D representations back to 2D space using a renderer. In our setting, each stage of the rendering pipeline must be differentiable to allow us to take advantage of gradient-based optimization and backpropagate the critic's error signal to the surfel representation. The rendering process can be partitioned into two stages. During forward propagation, the first stage finds the mapping between the surfels and the pixels, and the second stage computes the color of each pixel. During back-propagation, the first stage directs the gradients only to the surfels that get projected onto the image, and the second stage is differentiable as long as the shading operations are differentiable. The first stage of the rendering involves finding the mapping between the surfels and the pixels. This requires performing the expensive operation of ray-object intersection (see Figure 2a). Our model requires a fast rendering engine as it will be used in every learning iteration. Conventional ray tracing algorithms are optimized for generating multiple views of the same scene, whereas in our setting we render only one image from each scene during learning. Moreover, ray tracing algorithms require representing the full scene, which is very inefficient as we only represent the part visible to the camera. To resolve these issues, our generator proposes one surfel for each pixel in the camera's <--- Page Split ---> ![](images/3_0.jpg) <center>Figure 2: Differentiable 3D renderer. (a) A surfel is defined by its position \(P\) , normal \(N\) , and reflectance \(\rho\) . Each surfel maps to an image pixel \(P_{im}\) . (b) The surfel's color depends on its reflectance \(\rho\) and the angles \(\theta\) between each light \(I\) and the surfel's normal \(N\) . </center> coordinate system. Our PyTorch implementation of the differentiable renderer can render a \(128 \times 128\) surfel-based scene in under 1.4 ms on a mobile NVIDIA GTX 1060 GPU. The color of a surfel depends on the material reflectance, its position and orientation, and the ambient and point light source colors (see Figure 2b). Given a surface point \(P_{i}\), the color of its corresponding pixel \(I_{rc}\) is given by the shading equation: \[I_{rc} = \rho_{i}\Big(L_{a} + \sum_{j}\frac{1}{k_{l}\|d_{ij}\| + k_{q}\|d_{ij}\|^{2}} L_{j}\max \left(0,N_{i}^{T}d_{ij} / \|d_{ij}\|\right)\Big), \quad (1)\] where \(\rho_{i}\) is the surface reflectance, \(L_{a}\) is the ambient light's color, \(L_{j}\) is the \(j^{\mathrm{th}}\) positional light source's color, \(d_{ij} = L_{j}^{\mathrm{pos}} - P_{i}\) is the direction vector from the scene point to the point light source, and \(k_{l}\) and \(k_{q}\) are the linear and quadratic attenuation terms respectively. Equation 1 is an approximation of the rendering equation proposed in Kajiya (1986).
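A minimal PyTorch sketch of the shading equation (Eq. 1) is given below; tensor shapes, function names and the attenuation constants are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def shade(P, N, rho, lights_pos, lights_col, ambient, k_l=0.1, k_q=0.01):
    """Diffuse shading of M surfels following Eq. (1).
    P, N, rho: (M, 3); lights_pos, lights_col: (J, 3); ambient: (3,).
    k_l, k_q are the linear/quadratic attenuation terms (values assumed)."""
    color = ambient.expand_as(rho).clone()          # start from L_a
    for L_pos, L_col in zip(lights_pos, lights_col):
        d = L_pos - P                               # surfel -> light, (M, 3)
        dist = d.norm(dim=1, keepdim=True)          # ||d_ij||, (M, 1)
        atten = 1.0 / (k_l * dist + k_q * dist ** 2)
        cos = torch.clamp((N * d).sum(1, keepdim=True) / dist, min=0.0)
        color = color + atten * cos * L_col         # accumulate each light
    return rho * color                              # (M, 3) pixel colors
```

All operations are differentiable in the surfel parameters, so the critic's error signal can flow back through the rendered pixels to the depth and reflectance.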
### 3.3 PIX2SCENE MODEL

The adversarial training paradigm allows the generator network to capture the underlying target distribution by competing with an adversarial critic network. Pix2scene employs bi-directional adversarial training to model the distribution of surfels from 2D images alone.

#### 3.3.1 BI-DIRECTIONAL ADVERSARIAL TRAINING

ALI (Dumoulin et al., 2016), or Bi-GAN (Donahue et al., 2016), extends the GAN (Goodfellow et al., 2014) framework by including the learning of an inference mechanism. Specifically, in addition to the decoder network \(G_{x}\), ALI provides an encoder \(G_{z}\) which maps data points \(\boldsymbol{x}\) to latent representations \(z\). In these bi-directional models, the critic \(D\) discriminates in both the data space (\(\boldsymbol{x}\) versus \(G_{x}(z)\)) and the latent space (\(z\) versus \(G_{z}(x)\)), maximizing the adversarial value function over two joint distributions. The final min-max objective can be written as: \[\min_{G}\max_{D}\mathcal{L}_{ALI}(G,D):= \mathbb{E}_{q(\boldsymbol {x})}[\log (D(\boldsymbol {x},G_{z}(\boldsymbol {x})))] + \mathbb{E}_{p(\boldsymbol {z})}[\log (1 - D(G_{x}(\boldsymbol {z}),\boldsymbol {z}))], \quad (2)\] where \(q(\boldsymbol {x})\) and \(p(\boldsymbol {z})\) denote the encoder and decoder marginal distributions.

#### 3.3.2 MODELLING DEPTH AND CONSTRAINED NORMAL ESTIMATION

Based on the ALI formulation, as depicted in Figure 3, our model has an encoder network which captures the distribution over the latent space given an image data point \(x\). The decoder network maps a fixed latent distribution \(p(z)\) (a standard normal distribution in our case) to the 3D surfel representation. Next, the surfel representations are rendered into a 2D image using our differentiable renderer. The resulting image is then given as input to the critic to distinguish from the real image data. Note that the input to the critic comes from the joint space of data with its corresponding latent code, as in ALI. <--- Page Split ---> ![](images/4_0.jpg) <center>Figure 3: Pix2scene model. Pix2scene generates realistic 3D views of scenes by training on 2D images alone. Its decoder generates the surfels' depth \(p_{z}\) from a noise vector \(\boldsymbol{z}\) conditioned on the camera pose. The surfels' normals are estimated from the predicted depth. The surfels are then rendered into a 2D image and, together with image samples from the target distribution, fed to the critic. </center> A straightforward way to model the decoder network would be to learn a conditional distribution producing both the surfel's depth \((p_{z})\) and normal \((N)\). But this could lead to inconsistencies between the local shape and the surface normal: for instance, the decoder could fake an RGB image of a 3D shape simply by changing the normals while keeping the depth fixed. To avoid this issue, we exploit the fact that real-world surfaces are locally planar, and that surfaces visible to the camera have normals constrained to lie in the half-space of normal directions visible from the camera's view point. Considering the camera to be looking along the \(-z\) direction, the estimated normal has the constraint \(n_{z} > 0\). Therefore, the local surface normal is estimated by solving the following problem for every surfel, \[\begin{array}{r}{\| N^{T}\nabla P\| = 0}\\ {\mathrm{subject~to},\| N\| = 1\mathrm{~and~}n_{z} > 0,} \end{array} \quad (3)\] where the spatial gradient \(\nabla P\) is computed using 8 neighbour points, and \(P\) is the position of the surfels in the camera coordinate system, obtained by back-projecting the generated depth along rays. This approach enforces consistency between the predicted depth field and the computed normals. If the depth is incorrect, the normal estimator outputs an incorrect set of normals, resulting in an RGB image inconsistent with the data distribution, which would in turn get penalized by the critic. The decoder and encoder networks are thus incentivized to predict realistic depths.
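In practice, such a constrained estimate can be approximated in closed form by crossing finite-difference tangents of the back-projected positions and flipping the sign so that \(n_z > 0\). The sketch below uses simple 2-neighbour differences rather than the 8-neighbour gradient of the paper, so it is an illustration of the idea, not the exact procedure.

```python
import torch
import torch.nn.functional as F

def estimate_normals(P):
    """Estimate per-pixel unit normals from back-projected positions P of
    shape (H, W, 3). The cross product of two finite-difference tangents is
    orthogonal to the local surface; the sign is flipped to enforce n_z > 0
    (camera looking along -z)."""
    du = P[:, 1:, :] - P[:, :-1, :]    # horizontal tangent, (H, W-1, 3)
    dv = P[1:, :, :] - P[:-1, :, :]    # vertical tangent, (H-1, W, 3)
    n = torch.cross(du[:-1, :, :], dv[:, :-1, :], dim=-1)  # (H-1, W-1, 3)
    n = F.normalize(n, dim=-1)         # ||N|| = 1
    flip = n[..., 2:3] < 0             # camera-facing constraint
    return torch.where(flip, -n, n)
```

Because the normals are a deterministic, differentiable function of the depth, an incorrect depth field necessarily produces incorrect shading, which the critic can penalize.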
#### 3.3.3 MODEL TRAINING

The Wasserstein-GAN (Arjovsky et al., 2017) formalism provides stable training dynamics using the 1-Wasserstein distance between the distributions. We adopt the gradient penalty setup proposed in Gulrajani et al. (2017) for more robust training; however, we modify the formulation to take the bidirectional training into account. The architectures of our networks and the training hyperparameters are explained in detail in appendix A. Briefly, we use Conditional Normalization (Dumoulin et al., 2016; Perez et al., 2017) for conditioning on the view point (or camera pose) in the encoder, decoder and discriminator networks. The view point is a three-dimensional vector representing the positional coordinates of the camera. In our training, the affine parameters of the Batch-Normalization layers (Ioffe & Szegedy, 2015) are replaced by learned representations based on the view point. The final objective includes a bi-directional reconstruction loss as formulated in Equation 4, which enforces the reconstructions from the model to stay close to the corresponding inputs; such a reconstruction error has been empirically shown to improve reconstructions in ALI-type models (Li et al., 2017). \[\mathcal{L}_{recon} = \mathbb{E}_{q(\pmb {x})}[\|\pmb {x} - rend(G_{x}(G_{z}(\pmb {x})))\|_{2}] + \mathbb{E}_{p(\pmb {z})}[\|\pmb {z} - G_{z}(rend(G_{x}(\pmb {z})))\|_{2}] \quad (4)\] where the function \(rend(\cdot)\) denotes the rendered image on the decoder side.
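A sketch of Eq. (4) in PyTorch, assuming batched tensors and treating the encoder \(G_z\), decoder \(G_x\) and renderer \(rend\) as callables (the batch averaging is our assumption):

```python
import torch

def recon_loss(x, z, G_x, G_z, rend):
    """Bidirectional reconstruction loss of Eq. (4): an image-space term
    (x -> z -> x) and a latent-space term (z -> x -> z), both in L2."""
    x_rec = rend(G_x(G_z(x)))            # reconstruct images
    z_rec = G_z(rend(G_x(z)))            # reconstruct latent codes
    return ((x - x_rec).flatten(1).norm(dim=1).mean()
            + (z - z_rec).flatten(1).norm(dim=1).mean())
```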
<--- Page Split ---> ![](images/5_0.jpg) <center>Figure 4: Scene reconstruction. Left: Input images of rotated objects in a room with their ground-truth depth and normal maps. Right: pix2scene reconstructions with their depth and normal maps. </center> Table 1: Scene reconstruction results. Hausdorff metric on 3D surfels and MSE on the depth maps. <table><tr><td rowspan="2"></td><td colspan="2">Box scenes</td><td>Shape scenes</td></tr><tr><td>rand Tr.</td><td>rand Rot.</td><td>rand Rot.</td></tr><tr><td>Hausdorff-F</td><td>0.087</td><td>0.102</td><td>0.125</td></tr><tr><td>Hausdorff-R</td><td>0.093</td><td>0.183</td><td>0.191</td></tr><tr><td>MSE-depth</td><td>0.032</td><td>0.022</td><td>0.038</td></tr></table>

## 4 EXPERIMENTS

### 4.1 EXPERIMENTAL SETUP

Tasks. Our model is capable of both reconstructing 3D scenes and generating new ones. We evaluate the 3D understanding capability of the model on 3D-IQTT, a spatial-reasoning-based semi-supervised classification task. The goal of the 3D-IQTT is to quantify the ability of our model to perform 3D spatial reasoning using large amounts of unlabeled training data and a considerably smaller set of labelled examples.

Evaluation. In order to evaluate the 3D reconstruction ability of the model we use the Hausdorff distance (Taha & Hanbury, 2015) and MSE. The Hausdorff distance measures the correspondence of the model's 3D reconstruction with the input for a given camera pose. We measure the correctness of the recovered depth using the standard MSE with respect to ground-truth depth maps. We evaluate the 3D generation ability qualitatively. Finally, the evaluation metric for the 3D-IQTT is the percentage of correctly answered questions.

Datasets. We have created multiple scene datasets ranging from simple to complex. The scenes consist of a room containing one or more objects placed at random positions and orientations. Each 3D scene is rendered into a single \(128 \times 128 \times 3\) image taken from a camera sampled uniformly at random on the positive octant of a sphere containing the room. Technically, the probability of seeing the same configuration of a scene from two different views is near zero. Box scenes are created with a simple box 3D shape (as depicted in Figure 14). Shape scenes are created with basic 3D shapes (i.e., box, sphere, cone, torus, teapot, etc.). ShapeNet scenes are composed of 6 objects from the ShapeNet dataset (Chang et al., 2015) (i.e., bowls, bottles, mugs, cans, caps and bags). For the 3D-IQTT task we generated a test where each IQ question instance consists of a reference image of a Tetris-like shape, as well as 3 other images, one of which is a randomly rotated version of the reference (see Figure 10 for an example). The training set is formed by 200k questions where only a few are labelled with the correct answer (i.e., either \(5\%\) (10k) or \(0.5\%\) (1k) of the total training data). The validation and test sets each contain 100k labelled questions. More details on the experimental setup and evaluation can be found in appendix B.

<--- Page Split ---> ![](images/6_0.jpg) <center>Figure 5: View point reconstruction. Given a scene (first column), we rotate the camera around to visualize the model's understanding of 3D shape. As shown, the model correctly infers the unobserved geometry of the objects, demonstrating true 3D understanding of the scene. Videos of these reconstructions can be seen at https://bit.ly/2zADuqG. </center> <table><tr><td rowspan="2"></td><td colspan="4">Shape scenes</td><td colspan="4">Multiple-shape scenes</td></tr><tr><td>5°</td><td>35°</td><td>55°</td><td>80°</td><td>5°</td><td>35°</td><td>55°</td><td>80°</td></tr><tr><td>Hausdorff-F</td><td>0.110</td><td>0.143</td><td>0.140</td><td>0.161</td><td>0.256</td><td>0.301</td><td>0.282</td><td>0.272</td></tr><tr><td>Hausdorff-R</td><td>0.156</td><td>0.191</td><td>0.189</td><td>0.202</td><td>0.308</td><td>0.355</td><td>0.329</td><td>0.316</td></tr><tr><td>MSE-depth</td><td>0.012</td><td>0.021</td><td>0.022</td><td>0.027</td><td>0.070</td><td>0.091</td><td>0.088</td><td>0.083</td></tr></table> Table 2: View point reconstruction. Quantitative evaluation of implicit 3D reconstruction for unseen views by extrapolating the view angle from \(0^{\circ}\) (original) to \(80^{\circ}\).

### 4.2 IMPLICIT 3D SCENE RECONSTRUCTION

Figure 4 shows the input Shape scenes data and its corresponding reconstructions, along with the recovered depth and normal maps.
The depth map is encoded in such a way that the darkest points are closer to the camera. The normal map colors correspond to the cardinal directions (red/green/blue for the \(x / y / z\) axes respectively). Table 1 shows a quantitative evaluation of the forward and reverse Hausdorff distances on three different datasets. The table also reports the mean squared error of the generated depth map with respect to the input depth map. Figure 6 shows the reconstructions from the model on the more challenging multiple-shape scenes, where the number of objects as well as their shapes vary. Figure 16 in the appendix showcases more qualitative evaluations. To show that our model can reconstruct unobserved views, we infer the latent code \(z\) of an image \(x\) and then decode and render different views while rotating the camera around the scene. Table 2 shows the Hausdorff distance and MSE of reconstructing a scene from different unobserved view angles. As the view angle increases from \(0^{\circ}\) (original) to \(80^{\circ}\) for shape scenes, the reconstruction error and MSE tend to increase. However, for the multiple-shape scenes setup the trend is not as clear because of the complexity of the scene and the inter-object occlusions. Figure 5 qualitatively shows how pix2scene correctly and consistently infers the extents of the scene not in view, demonstrating true 3D understanding of the scene. ![](images/6_1.jpg) <center>Figure 6: Multiple-shape scenes reconstruction. Implicit 3D reconstruction of scenes composed by multiple ShapeNet objects. </center> <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 7: Unconditional scene generation. Generated samples from pix2scene model trained on ShapeNet scenes. Left: shaded images; Right: depth maps </center> ![](images/7_1.jpg) <center>Figure 8: Conditional scene generation. Class conditioned generated samples for ShapeNet dataset. </center>

### 4.3 IMPLICIT 3D SCENE GENERATION

We trained pix2scene on scenes composed of ShapeNet objects. Figure 7 shows qualitative results on unconditional generation. This shows how our model is able to generate correct 3D interpretations of the world. We also trained our model conditionally by giving the class label of the ShapeNet object to the decoder and critic networks (Mirza & Osindero, 2014). Figure 8 shows the results of conditioning the generator on different target classes. In order to explore the manifold of the learned representations, we select two images \(x_{1}\) and \(x_{2}\) from the held-out data, then linearly interpolate between their encodings \(z_{1}\) and \(z_{2}\) and decode the intermediary points into their corresponding images. Figure 9 shows this for two different settings. In each case, our representations capture the major geometrical aspects of the scene. ![](images/7_2.jpg) <center>Figure 9: Manifold exploration. Exploration of the learned manifold of 3D representations. Generated interpolations (middle columns) between two images \(x_{1}\) and \(x_{2}\) (first and last columns). </center>
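The manifold exploration amounts to a linear walk in latent space. A minimal sketch, treating \(G_z\), \(G_x\) and \(rend\) as callables as before:

```python
import torch

def interpolate_latents(x1, x2, G_z, G_x, rend, steps=8):
    """Linearly interpolate between the encodings of two images and
    decode/render the intermediate points (manifold exploration)."""
    z1, z2 = G_z(x1), G_z(x2)
    frames = []
    for a in torch.linspace(0.0, 1.0, steps):
        z = (1 - a) * z1 + a * z2
        frames.append(rend(G_x(z)))
    return frames
```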
### 4.4 3D-IQ TEST TASK

We have designed a quantitative evaluation for 3D reconstruction which we refer to as the 3D-IQ test task (3D-IQTT). In their landmark work, Shepard & Metzler (1971) introduced the mental rotation task into the toolkit of cognitive assessment. The authors presented human subjects with reference images and answer images, and the subjects had to quickly decide whether the answer was a 3D-rotated version or a mirrored version of the reference. The speed and accuracy with which people can solve this mental rotation task has since become a staple of IQ tests like the Woodcock-Johnson test (Woodcock et al., 2001). We took these as inspiration when designing a quantitative evaluation: we use the same kind of 3D objects, but instead of confronting our model with pairs of images and only two possible answers, we include several distractor answers and the subject (human or computer) has to pick, out of 3 candidates, the correct answer that is a 3D-rotated version of the reference object. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 10: Sample questions from the 3D-IQ test task. For this "mental rotation" task, a set of reference images and 3 possible answers are presented. The goal is to find the rotated version of the reference 3D model. To solve this task, the human or the model has to infer the 3D shape of the reference from the 2D image and compare that to the inferred 3D shapes of the answers. The correct answers to these two examples are in the footnote. </center> ![](images/8_1.jpg) <center>Figure 11: 3D IQ test task. Pix2scene reconstructions of the 3D-IQTT shapes. </center> To verify that our approach is able to learn accurate embeddings of these shapes, we first assessed the reconstruction of these shapes qualitatively, as shown in Figure 11.

#### 4.4.1 SEMI-SUPERVISED CLASSIFICATION ON THE 3D-IQ TEST TASK

For training pix2scene in a semi-supervised setting, we use the labelled data in addition to the unlabeled data. The training with the unlabeled samples differs from the approach described for the previous experiments, as we do not assume knowledge of the camera position. Thus, part of the latent vector \(\mathbf{z}\) encodes the actual 3D object (denoted \(\mathbf{z}_{scene}\)) and the remainder estimates the camera pose (denoted \(\mathbf{z}_{view}\)). For the supervised training two additional loss terms are added: (a) a loss that enforces the object component (\(\mathbf{z}_{scene}\)) to be the same for both the reference object and the correct answer, and (b) a loss that maximizes the distance between the object components of the reference and the distractors. Losses (a) and (b) are contained in Equation 5, where \(d_{i}\) denotes the distractors, \(\mathbf{x}_{ref}\) is the reference and \(\mathbf{x}_{ans}\) the correct answer. \[\mathcal{L}_{\theta}(\pmb{x}_{ref},\pmb{x}_{d_{1}},\pmb{x}_{d_{2}},\pmb{x}_{ans})=\frac{1}{2}D_{\theta}(\pmb{x}_{ref},\pmb{x}_{ans})-\frac{1}{2}\sum_{i=1}^{2}D_{\theta}(\pmb{x}_{ref},\pmb{x}_{d_{i}}), \quad (5)\] where \(D_{\theta}(\pmb{x}_{1},\pmb{x}_{2}) = \|\pmb{z}_{scene}^{1}-\pmb{z}_{scene}^{2}\|_{2}^{2}\) and \(\pmb{z}^{i} = Encoder_{\theta}(\pmb{x}_{i})\). The full procedure is detailed in Algorithm 1.
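A sketch of the supervised loss of Eq. (5) and of the test-time answering rule, assuming the \(z_{scene}\) components have already been extracted from the encoder outputs (names and batching are illustrative):

```python
import torch

def iqtt_loss(z_ref, z_ans, z_d1, z_d2):
    """Semi-supervised 3D-IQTT loss of Eq. (5): pull the scene code of the
    correct answer toward the reference, push the distractors away."""
    def dist(a, b):
        return (a - b).flatten(1).norm(dim=1) ** 2   # squared L2, (B,)
    return (0.5 * dist(z_ref, z_ans)
            - 0.5 * (dist(z_ref, z_d1) + dist(z_ref, z_d2))).mean()

def answer(z_ref, z_candidates):
    """At test time, pick the candidate whose scene code is closest to the
    reference in L2 distance."""
    d = torch.stack([(z_ref - z).flatten(1).norm(dim=1)
                     for z in z_candidates])          # (3, B)
    return d.argmin(dim=0)                            # index of the answer
```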
During training we also minimize the mutual information between \(\mathbf{z}_{scene}\) and \(\mathbf{z}_{view}\) to explicitly disentangle the latent code and ensure that its dimensions carry distinct sources of information. This is implemented via MINE (Belghazi et al., 2018). The strategy of MINE is to parameterize a variational formulation of the mutual information in terms of a neural network: \[I_{\Theta}(z_{s},z_{v}) = \sup_{\theta \in \Theta}\mathbb{E}_{\mathbb{P}_{z_{s}z_{v}}}[T_{\theta}] - \log (\mathbb{E}_{\mathbb{P}_{z_{s}}\otimes \mathbb{P}_{z_{v}}}[e^{T_{\theta}}]). \quad (6)\] This objective is optimized in an adversarial paradigm where \(T\), the statistics network, plays the role of the critic and is fed with samples from the joint and marginal distributions. We add this loss to our pix2scene objective to minimize the mutual information estimate in both unsupervised and supervised training iterations.

<--- Page Split --->

# Algorithm 1 Semisupervised classification
1: while iter \(< max\_iter\) do
2: \(D\leftarrow \mathrm{MiniBatch}()\)
3: \(z\sim E(\pmb{x}_{ref});\forall (\pmb{x}_{ref},\pmb{x}_{d_1},\pmb{x}_{d_2},\pmb{x}_{ans})\in D\)
4: \(L\leftarrow \mathcal{L}_{ALI} + \mathcal{L}_{recon} + I_{\Theta}(z_{scene},z_{view})\)
5: if supervised-training-interval(iter) then
6: \(L\leftarrow L + \mathcal{L}_{\theta}(\pmb{x}_{ref},\pmb{x}_{d_1},\pmb{x}_{d_2},\pmb{x}_{ans})\)
7: end if
8: optimize networks with \(L\)
9: end while

<table><tr><td>Labeled Samples</td><td>CNN</td><td>Siamese CNN</td><td>Human Evaluation</td><td>Pix2Scene (Ours)</td></tr><tr><td>0 (Unsupervised)</td><td>0.3385</td><td>0.3698</td><td>0.7329 ± 0.1488</td><td>0.4372 ± 0.0301</td></tr><tr><td>200</td><td>0.3350</td><td>0.3610</td><td>-</td><td>0.4691 ± 0.0259</td></tr><tr><td>1,000</td><td>0.3392</td><td>0.3701</td><td>-</td><td>0.5567 ± 0.0095</td></tr><tr><td>10,000</td><td>0.3649</td><td>0.3752</td><td>-</td><td>0.5983 ± 0.0021</td></tr></table> Table 3: 3D-IQTT quantitative results. The test accuracy on the 3D-IQ test task shows that the CNN baselines struggle to solve this task, whereas pix2scene is able to understand the underlying 3D structure of the images and solve it. The results also show that, although our model performs better than the baselines, it still lags behind the human level.

Once the model is trained, we answer 3D-IQTT questions by inferring the latent 3D representation of each of the 4 images and selecting the answer closest to the reference image as measured by L2 distance. We compared our model to two different baselines. The first is composed of 4 ResNet-50 modules (He et al., 2016) with shared weights, followed by 3 fully-connected layers; we trained this CNN only on the labeled samples. Our second baseline has a similar architecture, but the fully-connected layers are removed. Instead of the supervised loss provided in the form of correct answers, it is trained with a contrastive loss (Koch et al., 2015) which reduces the feature distance between references and correct answers and maximizes the feature distance between references and incorrect answers. A more detailed description of the networks and the contrastive loss function can be found in appendix D. Table 3 shows the 3D-IQTT results for our method and the baselines. The baselines were not able to interpret the underlying 3D structure of the data and their results are only slightly better than a random guess. The poor performance of the Siamese CNN might be in part because the contrastive loss rewards similarities in pixel space and has no notion of 3D similarity. However, pix2scene achieved significantly better accuracy by leveraging the learned 3D knowledge of objects.

## 5 CONCLUSIONS

In this paper we proposed a generative approach to learn 3D structural properties from single images in an unsupervised and implicit fashion. Our model receives an image of a scene with uniform material as input, estimates the depth of the scene points and then reconstructs the input scene.
We also provided quantitative evidence supporting our argument by introducing a novel IQ task in a semi-supervised setup. We hope that this evaluation metric will be used as a standard benchmark to measure the 3D understanding capability of models across different 3D representations. The main drawback of our current model is that it requires knowledge of the lighting and material properties. Future work will focus on tackling the more ambitious setting of learning complex materials and textures along with modelling the lighting properties of the scene.

<--- Page Split --->

## REFERENCES

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. International Conference on Machine Learning (ICML), 2017.

Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Devon Hjelm, and Aaron Courville. Mutual information neural estimation. In International Conference on Machine Learning, 2018.

Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. arXiv, 2015.

Siddhartha Chaudhuri, Evangelos Kalogerakis, Leonidas Guibas, and Vladlen Koltun. Probabilistic reasoning for assembly-based 3d modeling. In ACM SIGGRAPH, 2011.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems (NIPS), 2014.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. Advances in Neural Information Processing Systems (NIPS), 2017.

Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, pp. 1735-1742. IEEE, 2006.

R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448-456, 2015.

James T. Kajiya. The rendering equation. In Proceedings of the 13th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '86, 1986.

Evangelos Kalogerakis, Siddhartha Chaudhuri, Daphne Koller, and Vladlen Koltun. A probabilistic model for component-based shape synthesis. ACM Transactions on Graphics, 31(4):55:1-55:11, 2012.

Angjoo Kanazawa, Shubham Tulsiani, Alexei A. Efros, and Jitendra Malik. Learning category-specific mesh reconstruction from image collections. In ECCV, 2018.

Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. International Conference on Learning Representations (ICLR), 2014.

Leif Kobbelt and Mario Botsch. A survey of point-based techniques in computer graphics.
Computers & Graphics, 28(6):801-814, 2004.

Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2, 2015.

Tejas D Kulkarni, Will Whitney, Pushmeet Kohli, and Joshua B Tenenbaum. Deep Convolutional Inverse Graphics Network. Advances in Neural Information Processing Systems (NIPS), 2015.

<--- Page Split --->

Chunyuan Li, Hao Liu, Changyou Chen, Yuchen Pu, Liqun Chen, Ricardo Henao, and Lawrence Carin. Alice: Towards understanding adversarial learning for joint distribution matching. In Advances in Neural Information Processing Systems, pp. 5501-5509, 2017.

Tomas Mikolov, Anoop Deoras, Stefan Kombrink, Lukas Burget, and Jan Cernocky. Empirical evaluation and combination of advanced language modeling techniques. In INTERSPEECH, 2011.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv, 2014.

Chengjie Niu, Jun Li, and Kai Xu. Im2struct: Recovering 3d shape structure from a single RGB image. CVPR, 2018.

Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. arXiv preprint arXiv:1709.07871, 2017.

Hanspeter Pfister, Matthias Zwicker, Jeroen van Baar, and Markus Gross. Surfels: Surface elements as rendering primitives. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '00, 2000.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. International Conference on Learning Representations (ICLR), 2015.

Danilo J Rezende, S M Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised Learning of 3D Structure from Images. Advances in Neural Information Processing Systems (NIPS), 2016.

Ashutosh Saxena, Min Sun, and Andrew Y. Ng. Make3d: Learning 3d scene structure from a single still image. IEEE Trans. Pattern Anal. Mach. Intell., 31(5), May 2009.

Roger N Shepard and Jacqueline Metzler. Mental rotation of three-dimensional objects. Science, 171(3972):701-703, 1971.

Amir Arsalan Soltani, Haibin Huang, Jiajun Wu, Tejas D Kulkarni, and Joshua B Tenenbaum. Synthesizing 3d shapes via modeling multi-view depth maps and silhouettes with deep generative networks. Computer Vision and Pattern Recognition (CVPR), 2017.

Abdel Aziz Taha and Allan Hanbury. An efficient algorithm for calculating the exact hausdorff distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.

Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images. In ECCV, 2018.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229-256, 1992.

Richard Woodcock, Nancy Mather, and Kevin McGrew. Woodcock-Johnson III: Tests of Cognitive Skills. Riverside Pub, 2001.

Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, and Joshua B. Tenenbaum. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling. Advances in Neural Information Processing Systems (NIPS), 2016.

Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. Computer Vision and Pattern Recognition (CVPR), 2015.

Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee.
Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. Advances in Neural Information Processing Systems (NIPS), 2016.

<--- Page Split --->

## A ARCHITECTURE

Pix2scene is composed of an encoder network (see Table 4), a decoder network (see Table 5), and a critic network (see Table 6). Specifically, the decoder architecture is similar to the generator in DCGAN (Radford et al., 2015) but with LeakyReLU (Mikolov et al., 2011) as the activation function and batch normalization (Ioffe & Szegedy, 2015). We also adjusted its depth and width to accommodate the higher-resolution images. In order to condition on the camera position in addition to the \(z\) variable, we use conditional normalization in the alternate layers of the decoder. We train our model for 60K iterations with a batch size of 6 on images of resolution \(128\times 128\times 3\).

Table 4: Pix2scene encoder architecture <table><tr><td>Layer</td><td>Output size</td><td>Kernel size</td><td>Stride</td><td>BatchNorm</td><td>Activation</td></tr><tr><td>Input [x, c]</td><td>128 × 128 × 3</td><td></td><td></td><td></td><td></td></tr><tr><td>Convolution</td><td>64 × 64 × 85</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>32 × 32 × 170</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>16 × 16 × 340</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>8 × 8 × 680</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>4 × 4 × 1360</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>1 × 1 × 1</td><td>4 × 4</td><td>1</td><td>No</td><td></td></tr></table>

Table 5: Pix2scene decoder architecture. <table><tr><td>Layer</td><td>Output size</td><td>Kernel size</td><td>Stride</td><td>BatchNorm</td><td>Activation</td></tr><tr><td>Input [z, c]</td><td>131 × 1</td><td></td><td></td><td></td><td></td></tr><tr><td>Convolution</td><td>4 × 4 × 1344</td><td>4 × 4</td><td>1</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>8 × 8 × 627</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>16 × 16 × 336</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>32 × 32 × 168</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>64 × 64 × 84</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>128 × 128 × nCh</td><td>4 × 4</td><td>2</td><td>Yes</td><td></td></tr></table>

<table><tr><td>Layer</td><td>Output size</td><td>Kernel size</td><td>Stride</td><td>BatchNorm</td><td>Activation</td></tr><tr><td>Input [x, c]</td><td>128 × 128 × 6</td><td></td><td></td><td></td><td></td></tr><tr><td>Convolution</td><td>64 × 64 × 85</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>32 × 32 × 170</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>16 × 16 × 340</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>8 × 8 × 680</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution + [z]</td><td>4 × 4 × 1360</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>1 × 1 × 1</td><td>4 × 4</td><td>1</td><td>No</td><td></td></tr></table> Table 6: Pix2scene critic architecture.
The conditional version takes the image, the latent code \(z\), and the camera position \(c\).

## B MATERIAL, LIGHTS, AND CAMERA PROPERTIES

Material. In our experiments, we use diffuse materials with uniform reflectance. The reflectance values are chosen arbitrarily and we use the same material properties on both the input and the generator side.

Camera. The camera is specified by its position, viewing direction and a vector indicating the orientation of the camera. The camera positions were sampled uniformly at random on a sphere for the 3D-IQTT task, and on a spherical patch contained in the positive octant for the rest of the experiments. The viewing direction was updated based on the camera position and the center of mass of the objects, so that the camera was always looking at a fixed point in the scene as its position changed. The focal length ranged between 18 mm and 25 mm in all the experiments, and the field of view was fixed to 24 mm. The camera properties were also shared between the input and the generator side. However, in the 3D-IQTT task we relax the assumption that the camera pose is known and instead estimate the view as part of the learnt latent representation.

<--- Page Split ---> ![](images/13_0.jpg) <center>Figure 12: Random lights configuration. </center>

Lights. For the light sources, we experimented with single and multiple point-light sources, with the light colors chosen randomly. The light positions are sampled uniformly on a sphere for the 3D-IQTT task, and uniformly on a spherical patch covering the positive octant for the other scenes. The same light colors and positions are used both for rendering the input and the generated images. The lights act as physical spot lights with the radiant energy attenuating quadratically with distance. As an ablation study, we relaxed the assumption of having perfect knowledge of the lights by using lights with random positions and random colors. Those experiments show that the light information is not needed by our model to learn the 3D structure of the data. However, as we use random lights on the generator side, the shading of the reconstructions is in a different color than in the input, as shown in Figure 12.

## C EVALUATION OF 3D RECONSTRUCTIONS

For evaluating 3D reconstructions, we use the Hausdorff distance (Taha & Hanbury, 2015) as a measure of similarity between two shapes, as in Niu et al. (2018). Given two point sets \(A\) and \(B\), the Hausdorff distance is \(\max \left\{D_{H}^{+}(A,B),\, D_{H}^{+}(B,A)\right\}\), where \(D_{H}^{+}\) is the asymmetric (directed) Hausdorff distance between two point sets: \(D_{H}^{+}(A,B) = \max_{a\in A} D(a,B)\), i.e., the largest Euclidean distance \(D(\cdot)\) from a point in \(A\) to the set \(B\), and similarly for the reverse case \(D_{H}^{+}(B,A)\).
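For reference, the symmetric Hausdorff distance defined above can be computed from the two directed distances, e.g. with SciPy:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets A, B of shape (n, 3):
    the max of the two directed (asymmetric) distances."""
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])
```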
## D ARCHITECTURE FOR 3D-IQTT EVALUATIONS

The pix2scene architecture remains similar to that of the previous sections but with higher capacity in the decoder and critic, as this task is more challenging and complex. The most important difference is that for these experiments we do not condition the networks on the camera pose, to be fair to the baselines. In addition to the three networks, we have a statistics network (see Table 7) that estimates and minimizes the mutual information between the two sets of dimensions in the latent code using MINE (Belghazi et al., 2018). Out of the 128 dimensions of \(z\), we use the first 118 dimensions to represent scene-based information and the rest to encode view-based information. The architecture of the baseline networks is shown in Figure 13. The contrastive loss used for training these baselines is given in Equation 7.

Table 7: Pix2scene statistics network architecture. <table><tr><td>Layer</td><td>Output size</td><td>Kernel size</td><td>Stride</td><td>BatchNorm</td><td>Activation</td></tr><tr><td>Input [z[: 118], z[118 :]]</td><td>1 × 1 × 128</td><td></td><td></td><td></td><td></td></tr><tr><td>Convolution</td><td>1 × 1 × 256</td><td>1 × 1</td><td>1</td><td>No</td><td>ELU</td></tr><tr><td>Convolution</td><td>1 × 1 × 512</td><td>1 × 1</td><td>1</td><td>No</td><td>ELU</td></tr><tr><td>Convolution</td><td>1 × 1 × 1</td><td>1 × 1</td><td>2</td><td>No</td><td>None</td></tr></table>

<--- Page Split ---> ![](images/14_0.jpg) <center>Figure 13: 3D-IQTT baseline architecture. The ResNet-50 modules all share the same weights and were slightly modified to support our image size. "FC" stands for fully-connected layer and the hidden node sizes are 2048, 512, and 256 respectively. The output of the network is encoded as a one-hot vector. </center>

The contrastive loss from Equation 7 is applied to the 2048 features that are generated by each ResNet block. \(x_{1}\) and \(x_{2}\) are the input images, \(y\) is either 0 (if the inputs are supposed to be the same) or 1 (if the images are supposed to be different), \(G_{\theta}\) is each ResNet block, parameterized by \(\theta\), and \(m\) is the margin, which we set to 2.0. The loss function is from Hadsell et al. (2006) but used slightly differently. \[\begin{array}{c}{\mathcal{L}_{\theta}(x_{1},x_{2},y)=(1-y)\,\frac{1}{2}\,(D_{\theta}(x_{1},x_{2}))^{2}+y\,\frac{1}{2}\,(\max(0,m-D_{\theta}(x_{1},x_{2})))^{2}}\\ {D_{\theta}(x_{1},x_{2})=\|G_{\theta}(x_{1})-G_{\theta}(x_{2})\|_{2}}\end{array} \quad (7)\]

## E MORE SCENE RECONSTRUCTIONS

Figure 14 shows 3D reconstructions of scenes formed by boxes in a room. In Figure 15 our model is asked to reconstruct the scenes in the first column and then render different views of the same scene; in this case we show the normal maps of those views. Figure 16 shows the recovered shading, depth and normal images from reconstructions of complex scenes such as bedrooms and the bunny. ![](images/14_1.jpg) <center>Figure 14: Scene reconstruction. (a) Input images of rotated cubes in a room. (b) pix2scene reconstructions with (c) the associated depth maps. </center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 15: Normal views reconstruction. For each row, the first column is the input image and the other columns are the extrapolated normal maps of that image from different views. </center> ![](images/15_1.jpg) <center>Figure 16: Reconstruction of complex scenes. Reconstruction of bedroom scenes and the bunny. </center> <--- Page Split --->
## ABSTRACT We aim to model 3D properties of the scenes from single 2D images. Learning 3D scenes from 2D images is a long- standing problem in computer vision with applications in related fields such as simulation and robotics. We propose pix2scene, a deep generative- based approach that represents the 3D scene in a learnt latent variable decoded into a viewpoint- dependent representation that can be rendered. Our method learns the depth of the scene and leverages a local smoothness assumption to extract the orientation of visible scene points. We achieve this using an encoder- decoder adversarial learning mechanism and a novel differentiable renderer to train the 3D model in an end- to- end fashion, using only images. We showcase the generative ability of our model qualitatively on the ShapeNet dataset (Chang et al., 2015). We also demonstrate that our model can predict the structure of scenes from various, previously unseen view points. Finally, we evaluate the effectiveness of the learned 3D scene representation in supporting 3D spatial reasoning. ## 1 INTRODUCTION Understanding the 3- dimensional (3D) world from its 2- dimensional (2D) projections is a fundamental problem in computer vision with a broad range of application in robotics, simulation and design. Given that the majority natural scene data is available exclusively in the form of 2D images, the ability to directly infer knowledge about 3D structure from these images would be of great utility in scene understanding. Inferring the 3D structure from multiple images of a scene has been pursued extensively, such as in stereo or structure from motion tasks (Hartley & Zisserman, 2004). Since most available natural image data informative about the real world comes with only a single view of a given scene, it is perhaps more important to explore the development of models which can infer the 3D structural properties from a single image. On the other hand, single image 3D recovery is an extremely challenging and heavily under constrained task. The system has to rely on prior knowledge and 2D visual cues such as textures, shadows or occlusions in order to provide hints to the 3D structure of the scene. Practically, building a machine learning model that learns to infer 3D structure from images requires either a strong inductive bias or supervision. While some have used the 3D ground truth as explicit supervision (Wu et al., 2016; 2015), in most cases of interest, such supervision will not be available. Consequently, our long term goal is to infer the 3D structure of realistic scenes from single images. In this paper we take a step towards this direction via a method of unsupervised learning of the 3D structure, directly from a single 2D image of each scene. Our method based on the adversarial learning framework (Goodfellow et al., 2014) and exploits a uniquely suitable 3D representation (i.e., surfels (Pfister et al., 2000)) and a differentiable renderer. Most 3D reconstruction methods rely on representing 3D objects explicitly using either voxels (Rezende et al., 2016; Yan et al., 2016) or meshes (Kanazawa et al., 2018; Wang et al., 2018). Explicit representations store all the rendering- relevant information from a given 3D space and are easily transferable, i.e., they can be loaded with any 3D modeling software and viewed from any angle. However, approaches using explicit representations typically scale very poorly ( \(O(n^{3})\) or require a sparse/discrete representation which can be challenging for deep learning methods. 
As a result, these representations have only been applied to the reconstruction of single objects. As an alternative, we propose to learn an implicit 3D representation which produces only the 3D geometry that is directly relevant for a particular viewpoint. Our viewpoint-specific 3D geometry is captured

![](images/1_0.jpg)
<center>Figure 1: Implicit vs explicit representations. Explicit voxel and mesh representations are viewpoint-independent and constitute the complete scene. Our implicit surfel-based representation is viewpoint-dependent and adapts its resolution to the viewpoint. The full scene is contained in a high-dimensional latent variable, and only when the scene is to be rendered is the latent variable serialized to surfels for a specific view. </center>

using camera-facing surfels (Pfister et al., 2000), which are surface elements defined by their position, orientation and material properties. Given an image we can infer its implicit 3D representation and then recreate novel surfel representations of the underlying scene from unobserved viewpoints. In general, we note that in a 3D scene only a small fraction of the entities are perceivable from the camera. As the camera moves and occluded regions become visible, our method generates surfels for those newly unoccluded regions. Another advantage of this approach is that only a minimal number of primitives (surfels) is required to obtain a high-resolution image as the camera moves closer to a part of the scene. Moreover, this representation fits well with image-based convolutional architectures.

Our model, pix2scene, is a deep generative approach for modelling the 3D structure of a scene directly from images. The model is unsupervised in the sense that it does not require 3D ground truth or any other kind of image annotations. We base our model on the Adversarially Learned Inference (ALI) approach (Dumoulin et al., 2016). ALI extends the GAN (Goodfellow et al., 2014) framework by learning to infer the latent representation of a given image. In pix2scene the learned latent space embeds the 3D information of the underlying scene. The latent representation is mapped via a decoder network to a view-dependent 3D surface and then projected to image space by a differentiable renderer. The resulting image is then evaluated by an adversarial critic.

While our long-term goal is to be able to infer the 3D structure of a real-world photograph, in this paper we experiment exclusively with synthetically constructed scenes and adopt several simplifying assumptions. In particular, we assume that the world is piece-wise smooth and that for each input image the illumination, view and object materials are known.

This work has the following main contributions: (1) we propose a novel unsupervised method for 3D understanding from a single image; (2) we propose a new implicit 3D representation based on view-space surfels; (3) we propose a surfel-based differentiable 3D renderer that can be used as a layer of a neural network; and (4) we propose 3D-IQTT, a new 3D understanding evaluation benchmark. This task evaluates a model's ability to perform mental rotation, which requires a comprehensive understanding of the underlying 3D structure. We also estimate the camera pose as part of the learnt latent variable for this particular task.
## 2 RELATED WORK

The generation and reconstruction of 3D objects from images has been studied extensively in the computer vision and graphics communities (Saxena et al., 2009; Chaudhuri et al., 2011; Kalogerakis et al., 2012; Chang et al., 2015; Rezende et al., 2016; Soltani et al., 2017). Our work bears some conceptual similarities to Kulkarni et al. (2015), which casts the 3D reconstruction problem as a more traditional inverse graphics task. By using Variational Auto-Encoders (VAEs) (Kingma & Welling, 2014), they learn a representation of objects that disentangles factors of variation from images (i.e., object pose and configuration) and use the approach for specific transformations such as out-of-axis rotation. However, unlike their approach, ours is fully unsupervised and we implicitly generate the 3D structure of scenes from single images. Our mechanism learns a latent representation for the underlying scene, which can later be used to render from different views and lighting conditions.

Similar to ours, Rezende et al. (2016) infer a 3D configuration at their output. They adopt a probabilistic inference framework to build a generative model for 3D by combining a standard projection mechanism with gradient estimation methods. In particular, their approach requires multiple runs with mechanisms such as REINFORCE (Williams, 1992) in order to infer gradients through the projection layer. In addition, their use of mesh and voxel representations could become an obstacle to scaling their method to more complex scenes. Our approach is not susceptible to the restrictions imposed by meshes or other scaling issues and has the potential to adapt to arbitrary scene configurations.

## 3 METHOD

### 3.1 IMPLICIT 3D REPRESENTATION AND SURFELS

Explicitly representing 3D structure presents different challenges for generative models (Kobbelt & Botsch, 2004). Representing entire objects using voxels scales poorly given its \(O(n^{3})\) complexity. The vast majority of the generated voxels are not relevant to most viewpoints, such as the voxels that are entirely inside objects. A common workaround is to use a sparse representation such as meshes. However, these too come with their own drawbacks, such as the need to discretise complex objects, which makes mesh representations difficult to generate using neural networks. Current mesh-based methods mainly rely on deforming a pre-existing mesh.

Our implicit approach, in contrast, represents the 3D scene in a high-dimensional latent variable. In our framework, this latent variable (or vector) is decoded using a generator network into a viewpoint-dependent representation of surface elements, similar to surfels (Pfister et al., 2000), that constitute the visible part of the scene. This representation is very compact: given a renderer's point of view, we can represent only the part of the 3D surface needed by the renderer. Also, as the camera moves closer to a part of the scene, our generator will allocate more surfels to represent that part of the scene, thereby increasing the resolution. Figure 1 illustrates these different representations. For descriptive purposes, surfels are shown as squares, but in general they do not have any particular shape. Formally, each surfel is represented as a tuple \((P,N,\rho)\), where \(P = (p_{x},p_{y},p_{z})\) is its 3D position, \(N = (n_{x},n_{y},n_{z})\) is the surface normal vector, and \(\rho = (k_{r},k_{g},k_{b})\) is the reflectance of the surface material.
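To make this representation concrete, the following is a minimal sketch of a per-pixel surfel buffer as it could be laid out in PyTorch. The class name, the default reflectance value and the `from_depth` helper are illustrative assumptions on our part, not the paper's released code; the sketch only mirrors the tuple \((P,N,\rho)\) defined above and the one-surfel-per-pixel convention described in Section 3.2.

```python
import torch

class SurfelBuffer:
    """A minimal view-dependent surfel buffer: one surfel per image pixel.

    For an H x W image, each surfel stores a position P, a normal N, and
    a reflectance rho, matching the tuple (P, N, rho) above.
    """

    def __init__(self, height: int, width: int):
        # 3D position (p_x, p_y, p_z) of each surfel in camera coordinates.
        self.positions = torch.zeros(height, width, 3)
        # Unit surface normal (n_x, n_y, n_z) per surfel.
        self.normals = torch.zeros(height, width, 3)
        # Per-surfel material reflectance (k_r, k_g, k_b); uniform here.
        self.reflectance = torch.full((height, width, 3), 0.5)

    @classmethod
    def from_depth(cls, depth: torch.Tensor, rays: torch.Tensor):
        """Build surfel positions by pushing each pixel's unit view ray out
        to the predicted per-pixel distance p_z (see Section 3.1).

        depth: (H, W) distances along the rays; rays: (H, W, 3) unit rays.
        """
        h, w = depth.shape
        buf = cls(h, w)
        buf.positions = rays * depth.unsqueeze(-1)  # (H, W, 3)
        return buf
```

With this layout, the decoder only ever has to produce an \(H \times W\) depth map, and the surfel positions follow from the per-pixel camera rays.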
Since we are interested in modelling structural properties of scenes, i.e., geometry and depth, we assume that objects in the scene have a uniform material. We represent the surfels in the camera coordinate system. This significantly reduces the number of surfels, by considering only the ones that get projected onto a pixel in the rendered image. Moreover, it allows us to reduce the position parameters to only \(p_{z}\), the distance along the ray that passes through the surfel and the center of its pixel.

### 3.2 DIFFERENTIABLE 3D RENDERER

As the critic operates only on image space, we need to project the generated 3D representations back to 2D space using a renderer. In our setting, each stage of the rendering pipeline must be differentiable to allow us to take advantage of gradient-based optimization and backpropagate the critic's error signal to the surfel representation. The rendering process can be partitioned into two stages. During forward-propagation, the first stage finds the mapping between the surfels and the pixels, and the second stage computes the color of each pixel. During back-propagation, the first stage directs the gradients only to the surfels that get projected onto the image, and the second stage is differentiable as long as the shading operations are differentiable.

The first stage of the rendering involves finding the mapping between the surfels and the pixels. This normally requires performing the expensive operation of ray-object intersection (see Figure 2a). Our model requires a fast rendering engine, as it will be used in every learning iteration. Conventional ray tracing algorithms are optimized for generating multiple views of the same scene; in our setting, however, we render only one image of each scene during learning. Moreover, ray tracing algorithms require representing the full scene, which is very inefficient since we only represent the part visible to the camera. To resolve these issues, our generator proposes one surfel for each pixel in the camera's coordinate system. Our PyTorch implementation of the differentiable renderer can render a \(128 \times 128\) surfel-based scene in under 1.4 ms on a mobile NVIDIA GTX 1060 GPU.

![](images/3_0.jpg)
<center>Figure 2: Differentiable 3D renderer. (a) A surfel is defined by its position \(P\) , normal \(N\) , and reflectance \(\rho\) . Each surfel maps to an image pixel \(P_{im}\) . (b) The surfel's color depends on its reflectance \(\rho\) and the angles \(\theta\) between each light \(I\) and the surfel's normal \(N\) . </center>

The color of a surfel depends on the material reflectance, its position and orientation, and the ambient and point light source colors (see Figure 2b). Given a surface point \(P_{i}\), the color of its corresponding pixel \(I_{rc}\) is given by the shading equation:

\[I_{rc} = \rho_{i}\Big(L_{a} + \sum_{j}\frac{1}{k_{l}\|d_{i j}\| + k_{q}\|d_{i j}\|^{2}} L_{j}\max \left(0,N_{i}^{T}d_{i j} / \|d_{i j}\|\right)\Big), \quad (1)\]

where \(\rho_{i}\) is the surface reflectance, \(L_{a}\) is the ambient light's color, \(L_{j}\) is the \(j^{\mathrm{th}}\) positional light source's color, \(d_{i j} = L_{j}^{\mathrm{pos}} - P_{i}\) is the direction vector from the scene point to the point light source, and \(k_{l}\), \(k_{q}\) are the linear and quadratic attenuation terms respectively. Equation 1 is an approximation of the rendering equation proposed in Kajiya (1986).
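As a rough illustration of Equation 1, the sketch below evaluates the shading for a batch of surfels in PyTorch with a single point light. The tensor shapes, the attenuation constants and the parameter names are illustrative assumptions rather than the paper's released renderer.

```python
import torch

def shade_surfels(positions, normals, reflectance,
                  ambient, light_pos, light_color,
                  k_l=0.1, k_q=0.01):
    """Differentiable shading of Equation 1 for one point light.

    positions:   (M, 3) surfel positions P_i in camera coordinates
    normals:     (M, 3) unit surfel normals N_i
    reflectance: (M, 3) per-surfel reflectance rho_i
    ambient:     (3,)   ambient light color L_a
    light_pos:   (3,)   point light position L^pos
    light_color: (3,)   point light color L_j
    Returns (M, 3) RGB pixel colors I_rc.
    """
    d = light_pos - positions                         # d_ij, (M, 3)
    dist = d.norm(dim=-1, keepdim=True)               # ||d_ij||, (M, 1)
    attenuation = 1.0 / (k_l * dist + k_q * dist ** 2)
    # Lambertian term max(0, N_i^T d_ij / ||d_ij||).
    cos_theta = (normals * (d / dist)).sum(-1, keepdim=True).clamp(min=0.0)
    return reflectance * (ambient + attenuation * light_color * cos_theta)
```

Summing the attenuated term over several lights recovers the full sum over \(j\) in Equation 1; every operation here is differentiable, so the critic's gradient can flow back to the surfel depths and normals.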
### 3.3 PIX2SCENE MODEL

The adversarial training paradigm allows the generator network to capture the underlying target distribution by competing with an adversarial critic network. Pix2scene employs bi-directional adversarial training to model the distribution of surfels from just 2D images.

#### 3.3.1 BI-DIRECTIONAL ADVERSARIAL TRAINING

ALI (Dumoulin et al., 2016), or BiGAN (Donahue et al., 2016), extends the GAN (Goodfellow et al., 2014) framework by including the learning of an inference mechanism. Specifically, in addition to the decoder network \(G_{x}\), ALI provides an encoder \(G_{z}\) which maps data points \(\boldsymbol{x}\) to latent representations \(z\). In these bi-directional models, the critic \(D\) discriminates in both the data space (\(\boldsymbol{x}\) versus \(G_{x}(z)\)) and the latent space (\(z\) versus \(G_{z}(x)\)), maximizing the adversarial value function over the two joint distributions. The final min-max objective can be written as:

\[\min_{G}\max_{D}\mathcal{L}_{ALI}(G,D):= \mathbb{E}_{q(\boldsymbol {x})}[\log (D(\boldsymbol {x},G_{z}(\boldsymbol {x})))] + \mathbb{E}_{p(\boldsymbol {z})}[\log (1 - D(G_{x}(\boldsymbol {z}),\boldsymbol {z}))], \quad (2)\]

where \(q(\boldsymbol {x})\) and \(p(\boldsymbol {z})\) denote the encoder and decoder marginal distributions.
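To ground Equation 2, the sketch below evaluates the bi-directional value function on one minibatch in PyTorch. The function name and the assumption that the critic returns raw logits are our own illustrative choices, not the paper's released code.

```python
import torch.nn.functional as F

def ali_value(D, G_x, G_z, x_real, z_prior):
    """Two-sided value function of Equation 2 on one minibatch.

    D scores joint pairs: (x, G_z(x)) from the encoder side and
    (G_x(z), z) from the decoder side. With D returning raw logits,
    log D = logsigmoid(logit) and log(1 - D) = logsigmoid(-logit).
    """
    joint_enc = D(x_real, G_z(x_real))    # encoder joint (x, z-hat)
    joint_dec = D(G_x(z_prior), z_prior)  # decoder joint (x-hat, z)
    return F.logsigmoid(joint_enc).mean() + F.logsigmoid(-joint_dec).mean()
```

In training, the critic ascends this value while the encoder and decoder descend it, typically with separate optimizers for the two sides of the min-max game.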
#### 3.3.2 MODELLING DEPTH AND CONSTRAINED NORMAL ESTIMATION

Based on the ALI formulation, as depicted in Figure 3, our model has an encoder network which captures the distribution over the latent space given an image data point \(x\). The decoder network maps a fixed latent distribution \(p(z)\) (a standard normal distribution in our case) to the 3D surfel representation. Next, the surfel representation is rendered into a 2D image using our differentiable renderer. The resulting image is then given as input to the critic, which must distinguish it from the real image data. Note that the input to the critic comes from the joint space of data with its corresponding latent code, as in ALI.

![](images/4_0.jpg)
<center>Figure 3: Pix2scene model. Pix2scene generates realistic 3D views of scenes by training on 2D images alone. Its decoder generates the surfel depths \(p_{z}\) from a noise vector \(\boldsymbol{z}\) conditioned on the camera pose. The surfel normals are estimated from the predicted depths. The surfels are then rendered into a 2D image and, together with image samples from the target distribution, fed to the critic. </center>

A straightforward way to model the decoder network would be to learn a conditional distribution that produces both the surfel's depth \((p_{z})\) and normal \((N)\). But this could lead to inconsistencies between the local shape and the surface normal. For instance, the decoder could fake an RGB image of a 3D shape simply by changing the normals while keeping the depth fixed. To avoid this issue, we exploit the fact that real-world surfaces are locally planar, and that surfaces visible to the camera have normals constrained to lie in the half-space of normal directions visible from the camera's view point. Considering the camera to be looking along the \(- z\) direction, the estimated normal has the constraint \(n_{z} > 0\). Therefore, the local surface normal is estimated by solving the following problem for every surfel,

\[\begin{array}{r}{\| N^{T}\nabla P\| = 0}\\ {\mathrm{subject~to~}\| N\| = 1\mathrm{~and~}n_{z} > 0,} \end{array} \quad (3)\]

where the spatial gradient \(\nabla P\) is computed using 8 neighbouring points, and \(P\) is the position of the surfels in the camera coordinate system, obtained by back-projecting the generated depth along rays. This approach enforces consistency between the predicted depth field and the computed normals. If the depth is incorrect, the normal estimator outputs an incorrect set of normals, resulting in an RGB image inconsistent with the data distribution, which would in turn be penalized by the critic. The decoder and the encoder networks are thus incentivized to predict realistic depths.
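This constrained estimation can be approximated in closed form from the back-projected point map. The sketch below uses two central differences and a cross product, a simplification of the 8-neighbour gradient described in the text, so it should be read as one possible realization of Equation 3 under our own assumptions, not the paper's exact routine.

```python
import torch
import torch.nn.functional as F

def estimate_normals(points):
    """Estimate constrained surface normals from back-projected points.

    points: (H, W, 3) surfel positions P in camera coordinates.
    The cross product of the two local tangents is orthogonal to the
    spatial gradient of P; normalizing gives ||N|| = 1, and the sign
    flip enforces n_z > 0, as required by Equation 3.
    """
    p = points.permute(2, 0, 1).unsqueeze(0)        # (1, 3, H, W)
    dx = p[..., :, 2:] - p[..., :, :-2]             # horizontal tangent
    dy = p[..., 2:, :] - p[..., :-2, :]             # vertical tangent
    dx = F.pad(dx, (1, 1, 0, 0))                    # restore width
    dy = F.pad(dy, (0, 0, 1, 1))                    # restore height
    n = torch.cross(dx.squeeze(0), dy.squeeze(0), dim=0)  # (3, H, W)
    n = F.normalize(n, dim=0)                       # unit normals
    n = torch.where(n[2:3] < 0, -n, n)              # enforce n_z > 0
    return n
```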
#### 3.3.3 MODEL TRAINING

The Wasserstein-GAN (Arjovsky et al., 2017) formalism provides stable training dynamics using the 1-Wasserstein distance between the distributions. We adopt the gradient penalty setup proposed in Gulrajani et al. (2017) for more robust training; however, we modify the formulation to take the bidirectional training into account. The architectures of our networks and the training hyperparameters are explained in detail in Appendix A. Briefly, we use Conditional Normalization (Dumoulin et al., 2016; Perez et al., 2017) to condition on the view point (or camera pose) in the encoder, decoder and discriminator networks. The view point is a three-dimensional vector representing the positional coordinates of the camera. In our training, the affine parameters of the Batch-Normalization layers (Ioffe & Szegedy, 2015) are replaced by learned representations based on the view point.

The final objective includes a bi-directional reconstruction loss as formulated in Equation 4, which in turn enforces the reconstructions from the model to stay close to the corresponding inputs. We use a reconstruction error in the objective function for the encoder and decoder networks as it has been empirically shown to improve reconstructions in ALI-type models (Li et al., 2017):

\[\mathcal{L}_{recon} = \mathbb{E}_{q(\pmb {x})}\big[\| \pmb {x} - rend(G_{x}(G_{z}(\pmb {x})))\|_{2}\big] + \mathbb{E}_{p(\pmb {z})}\big[\| \pmb {z} - G_{z}(rend(G_{x}(\pmb {z})))\|_{2}\big], \quad (4)\]

where the function \(rend(\cdot)\) denotes the rendered image on the decoder side.

![](images/5_0.jpg)
<center>Figure 4: Scene reconstruction. Left: Input images of rotated objects in a room with their ground-truth depth and normal maps. Right: pix2scene reconstructions with their depth and normal maps. </center>

Table 1: Scene reconstruction results. Hausdorff metric on 3D surfels and MSE on the depth maps.

<table><tr><td rowspan="2"></td><td colspan="2">Box scenes</td><td>Shape scenes</td></tr><tr><td>rand Tr.</td><td>rand Rot.</td><td>rand Rot.</td></tr><tr><td>Hausdorff-F</td><td>0.087</td><td>0.102</td><td>0.125</td></tr><tr><td>Hausdorff-R</td><td>0.093</td><td>0.183</td><td>0.191</td></tr><tr><td>MSE-depth</td><td>0.032</td><td>0.022</td><td>0.038</td></tr></table>

## 4 EXPERIMENTS

### 4.1 EXPERIMENTAL SETUP

Tasks. Our model is capable of both reconstructing 3D scenes and generating new ones. We evaluate the 3D understanding capability of the model on 3D-IQTT, a spatial-reasoning-based semi-supervised classification task. The goal of the 3D-IQTT is to quantify the ability of our model to perform 3D spatial reasoning by using large amounts of unlabeled training data and a considerably smaller set of labelled examples.

Evaluation. In order to evaluate the 3D reconstruction ability of the model we use the Hausdorff distance (Taha & Hanbury, 2015) and MSE. The Hausdorff distance measures the correspondence between the model's 3D reconstruction and the input for a given camera pose. We measure the correctness of the recovered depth using the standard MSE with respect to ground truth depth maps. We evaluate 3D generation qualitatively. Finally, the evaluation metric for the 3D-IQTT is the percentage of correctly answered questions.

Datasets. We have created multiple scene datasets ranging from simple to complex in nature. These scenes are composed of a room containing one or more objects placed at random positions and orientations. Each 3D scene is rendered into a single \(128 \times 128 \times 3\) image taken from a camera position sampled uniformly at random on the positive octant of a sphere containing the room; in practice, the probability of seeing the same configuration of a scene from two different views is near zero. Box scenes are created with a simple box 3D shape (as depicted in Figure 14). Shape scenes are created with basic 3D shapes (i.e., box, sphere, cone, torus, teapot, etc.). ShapeNet scenes are composed of 6 objects from the ShapeNet dataset (Chang et al., 2015) (i.e., bowls, bottles, mugs, cans, caps and bags). For the 3D-IQTT task we generated a test where each IQ question instance consists of a reference image of a Tetris-like shape, as well as 3 other images, one of which is a randomly rotated version of the reference (see Figure 10 for an example). The training set is formed by 200k questions, of which only a few are labelled with the correct answer (i.e., either \(5\%\) (10k) or \(0.5\%\) (1k) of the total training data). The validation and test sets each contain 100k labelled questions. More details on the experimental setup and evaluation can be found in Appendix B.

![](images/6_0.jpg)
<center>Figure 5: View point reconstruction. Given a scene (first column), we rotate the camera around to visualize the model's understanding of 3D shape. As shown, the model correctly infers the unobserved geometry of the objects, demonstrating true 3D understanding of the scene. Videos of these reconstructions can be seen at https://bit.ly/2zADuqG. </center>

<table><tr><td rowspan="2"></td><td colspan="4">Shape scenes</td><td colspan="4">Multiple-shape scenes</td></tr><tr><td>5°</td><td>35°</td><td>55°</td><td>80°</td><td>5°</td><td>35°</td><td>55°</td><td>80°</td></tr><tr><td>Hausdorff-F</td><td>0.110</td><td>0.143</td><td>0.140</td><td>0.161</td><td>0.256</td><td>0.301</td><td>0.282</td><td>0.272</td></tr><tr><td>Hausdorff-R</td><td>0.156</td><td>0.191</td><td>0.189</td><td>0.202</td><td>0.308</td><td>0.355</td><td>0.329</td><td>0.316</td></tr><tr><td>MSE-depth</td><td>0.012</td><td>0.021</td><td>0.022</td><td>0.027</td><td>0.070</td><td>0.091</td><td>0.088</td><td>0.083</td></tr></table>

Table 2: View point reconstruction. Quantitative evaluation of implicit 3D reconstruction for unseen views, extrapolating the view angle from \(0^{\circ}\) (original) to \(80^{\circ}\).

### 4.2 IMPLICIT 3D SCENE RECONSTRUCTION

Figure 4 shows the input Shape scenes data and the corresponding reconstructions, along with the recovered depth and normal maps.
The depth map is encoded such that darker points are closer to the camera. The normal map colors correspond to the cardinal directions (red/green/blue for the \(x / y / z\) axes respectively). Table 1 shows a quantitative evaluation of the forward and reverse Hausdorff distances on three different datasets. The table also reports the mean squared error of the generated depth map with respect to the input depth map. Figure 6 shows reconstructions from the model on the more challenging multiple-shape scenes, where the number of objects as well as their shapes vary. Figure 16 in the appendix showcases more qualitative evaluations.

To showcase that our model can reconstruct unobserved views, we infer the latent code \(z\) of an image \(x\) and then decode and render different views while rotating the camera around the scene. Table 2 shows the Hausdorff distance and MSE when reconstructing a scene from different unobserved view angles. As the view angle increases from \(0^{\circ}\) (original) to \(80^{\circ}\) for shape scenes, the reconstruction error and MSE tend to increase. However, for the multiple-shape scenes setup the trend is not as clear, because of the complexity of the scene and the inter-object occlusions. Figure 5 qualitatively shows how pix2scene correctly infers the extents of the scene not in view in a consistent manner, demonstrating true 3D understanding of the scene.

![](images/6_1.jpg)
<center>Figure 6: Multiple-shape scenes reconstruction. Implicit 3D reconstruction of scenes composed of multiple ShapeNet objects. </center>

![](images/7_0.jpg)
<center>Figure 7: Unconditional scene generation. Generated samples from the pix2scene model trained on ShapeNet scenes. Left: shaded images; Right: depth maps. </center>

![](images/7_1.jpg)
<center>Figure 8: Conditional scene generation. Class-conditioned generated samples for the ShapeNet dataset. </center>

### 4.3 IMPLICIT 3D SCENE GENERATION

We trained pix2scene on scenes composed of ShapeNet objects. Figure 7 shows qualitative results for unconditional generation, demonstrating that our model is able to generate correct 3D interpretations of the world. We also trained our model conditionally by giving the class label of the ShapeNet object to the decoder and critic networks (Mirza & Osindero, 2014). Figure 8 shows the results of conditioning the generator on different target classes.

In order to explore the manifold of the learned representations, we select two images \(x_{1}\) and \(x_{2}\) from the held-out data, then linearly interpolate between their encodings \(z_{1}\) and \(z_{2}\) and decode the intermediary points into their corresponding images. Figure 9 shows this for two different settings. In each case, our representations capture the major geometrical aspects of the scene.

### 4.4 3D-IQ TEST TASK

We have designed a quantitative evaluation for 3D reconstruction which we refer to as the 3D-IQ test task (3D-IQTT). In their landmark work, Shepard & Metzler (1971) introduced the mental rotation task into the toolkit of cognitive assessment. The authors presented human subjects with reference images and answer images, and the subjects had to quickly decide if the answer was either a 3D-rotated version or a mirrored version of the reference. The speed and accuracy with which people can solve this mental rotation task has since become a staple of IQ tests like the Woodcock-Johnson test (Woodcock et al., 2001).
We took these as inspiration when designing a quantitative evaluation: we use the same kind of 3D objects, but instead of confronting our model with pairs of images and only two possible answers, we include several distractor answers, and the subject (human or computer) has to pick, out of 3 candidates, the correct answer that is a 3D-rotated version of the reference object. To verify that our approach is able to learn accurate embeddings of these shapes, we first assessed the reconstruction of these shapes qualitatively, as shown in Figure 11.

![](images/7_2.jpg)
<center>Figure 9: Manifold exploration. Exploration of the learned manifold of 3D representations. Generated interpolations (middle columns) between two images \(x_{1}\) and \(x_{2}\) (first and last columns). </center>

![](images/8_0.jpg)
<center>Figure 10: Sample questions from the 3D-IQ test task. For this "mental rotation" task, a reference image and 3 possible answers are presented. The goal is to find the rotated version of the reference 3D model. To solve this task, the human or the model has to infer the 3D shape of the reference from the 2D image and compare it to the inferred 3D shapes of the answers. The correct answers to these two examples are in the footnote. </center>

![](images/8_1.jpg)
<center>Figure 11: 3D IQ test task. Pix2scene reconstructions of the 3D-IQTT shapes. </center>

#### 4.4.1 SEMI-SUPERVISED CLASSIFICATION ON THE 3D-IQ TEST TASK

For training pix2scene in a semi-supervised setting, we used the labelled data in addition to the unlabeled data. The training with the unlabeled samples differs from the approach described for the previous experiments, as we do not assume knowledge of the camera position. Thus, part of the latent vector \(\mathbf{z}\) encodes the actual 3D object (denoted as \(\mathbf{z}_{scene}\)) and the remainder estimates the camera pose (denoted as \(\mathbf{z}_{view}\)). For the supervised training, two additional loss terms are added: (a) a loss that enforces the object component (\(\mathbf{z}_{scene}\)) to be the same for both the reference object and the correct answer, and (b) a loss that maximizes the distance between the object components of the reference and the distractors. Losses (a) and (b) are contained in Equation 5, where \(d_{i}\) denotes the distractors, \(\mathbf{x}_{ref}\) is the reference and \(\mathbf{x}_{ans}\) the correct answer. The full training procedure is detailed in Algorithm 1.

\[\mathcal{L}_{\theta}(\pmb{x}_{ref},\pmb{x}_{d_{1}},\pmb{x}_{d_{2}},\pmb{x}_{ans})=\frac{1}{2}D_{\theta}(\pmb{x}_{ref},\pmb{x}_{ans})-\frac{1}{2}\sum_{i=1}^{2}D_{\theta}(\pmb{x}_{ref},\pmb{x}_{d_{i}}), \quad (5)\]

where \(D_{\theta}(\pmb{x}_{1},\pmb{x}_{2})=\|\pmb{z}_{scene}^{1}-\pmb{z}_{scene}^{2}\|_{2}^{2}\) and \(\pmb{z}^{i}=\mathrm{Encoder}_{\theta}(\pmb{x}_{i})\).

During training we also minimize the mutual information between \(\mathbf{z}_{scene}\) and \(\mathbf{z}_{view}\) to explicitly disentangle them and to ensure that the learnt latent code keeps these distinct sources of information in distinct dimensions. This is implemented via MINE (Belghazi et al., 2018). The strategy of MINE is to parameterize a variational formulation of the mutual information in terms of a neural network:

\[I_{\Theta}(z_{s},z_{v}) = \sup_{\theta \in \Theta}\mathbb{E}_{\mathbb{P}_{z_{s}z_{v}}}[T_{\theta}] - \log (\mathbb{E}_{\mathbb{P}_{z_{s}}\otimes \mathbb{P}_{z_{v}}}[e^{T_{\theta}}]). \quad (6)\]

This objective is optimized in an adversarial paradigm where \(T\), the statistics network, plays the role of the critic and is fed with samples from the joint and marginal distributions. We add this loss to our pix2scene objective to minimize the mutual information estimate in both unsupervised and supervised training iterations.
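A minimal PyTorch sketch of computing the estimator in Equation 6 on a minibatch is shown below. The hidden sizes and the permutation trick for sampling from the product of marginals are standard MINE practice rather than details taken from the paper, so this should be read as an illustration under our assumptions (the actual statistics network used is the one in Table 7).

```python
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """T_theta of Equation 6: maps a (z_scene, z_view) pair to a scalar."""
    def __init__(self, dim_scene=118, dim_view=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_scene + dim_view, 256), nn.ELU(),
            nn.Linear(256, 1))

    def forward(self, z_scene, z_view):
        return self.net(torch.cat([z_scene, z_view], dim=1))

def mine_estimate(T, z_scene, z_view):
    """Mutual-information lower bound of Equation 6 on one minibatch.

    Joint samples pair each z_scene with its own z_view; the product of
    marginals is sampled by pairing it with a permuted z_view.
    """
    joint = T(z_scene, z_view).mean()
    shuffled = z_view[torch.randperm(z_view.size(0))]
    scores = T(z_scene, shuffled).squeeze(1)               # (B,)
    marginal = torch.logsumexp(scores, dim=0) - math.log(scores.numel())
    return joint - marginal
```

The statistics network is trained to maximize this estimate, while the encoder is trained with the estimate added to its loss, pushing the mutual information between the two latent chunks down.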
# Algorithm 1 Semi-supervised classification

1: while iter \(< max\_iter\) do
2: \(D\leftarrow \mathrm{MiniBatch}()\)
3: \(z\sim E(\pmb{x}_{ref});\forall (\pmb{x}_{ref},\pmb{x}_{d_1},\pmb{x}_{d_2},\pmb{x}_{ans})\in D\)
4: \(L\leftarrow \mathcal{L}_{ALI} + \mathcal{L}_{recon} + I_{\Theta}(z_{scene},z_{view})\)
5: if supervised-training-interval(iter) then
6: \(L\leftarrow L + \mathcal{L}_{\theta}(\pmb{x}_{ref},\pmb{x}_{d_1},\pmb{x}_{d_2},\pmb{x}_{ans})\)
7: end if
8: optimize networks with \(L\)
9: end while

<table><tr><td>Labeled Samples</td><td>CNN</td><td>Siamese CNN</td><td>Human Evaluation</td><td>Pix2Scene (Ours)</td></tr><tr><td>0 (Unsupervised)</td><td>0.3385</td><td>0.3698</td><td>0.7329 ± 0.1488</td><td>0.4372 ± 0.0301</td></tr><tr><td>200</td><td>0.3350</td><td>0.3610</td><td>-</td><td>0.4691 ± 0.0259</td></tr><tr><td>1,000</td><td>0.3392</td><td>0.3701</td><td>-</td><td>0.5567 ± 0.0095</td></tr><tr><td>10,000</td><td>0.3649</td><td>0.3752</td><td>-</td><td>0.5983 ± 0.0021</td></tr></table>

Table 3: 3D-IQTT quantitative results. The test accuracies on the 3D-IQ test task show that the CNN baselines struggle to solve this task, while pix2scene is able to understand the underlying 3D structure of the images and solve it. The results also show that, although our model performs better than the baselines, it still lags behind human performance.

Once the model is trained, we answer 3D-IQTT questions by inferring the latent 3D representation of each of the 4 images and selecting the answer closest to the reference image as measured by L2 distance.

We compared our model to two different baselines. The first is composed of 4 ResNet-50 modules (He et al., 2016) with shared weights, followed by 3 fully-connected layers; we trained this CNN only on the labeled samples. Our second baseline has a similar architecture, but with the fully-connected layers removed. Instead of the supervised loss provided in the form of correct answers, it is trained with the contrastive loss (Koch et al., 2015). This loss reduces the feature distance between the references and correct answers and maximizes the feature distance between the references and incorrect answers. A more detailed description of the networks and the contrastive loss function can be found in Appendix D.

Table 3 shows the 3D-IQTT results for our method and the baselines. The baselines were not able to interpret the underlying 3D structure of the data, and their results are only slightly better than random guessing. The poor performance of the Siamese CNN might be in part because the contrastive loss rewards similarities in pixel space and has no notion of 3D similarity. Pix2scene, in contrast, achieved significantly better accuracy by leveraging the learned 3D knowledge of objects.

## 5 CONCLUSIONS

In this paper we proposed a generative approach to learn 3D structural properties from single images in an unsupervised and implicit fashion. Our model receives an image of a scene with uniform materials as input, estimates the depth of the scene points and then reconstructs the input scene.
We also provided quantitative evidence supporting our approach by introducing a novel IQ task in a semi-supervised setup. We hope that this evaluation will be used as a standard benchmark to measure the 3D understanding capability of models across different 3D representations. The main drawback of our current model is that it requires knowledge of the lighting and material properties. Future work will focus on tackling the more ambitious setting of learning complex materials and textures, along with modelling the lighting properties of the scene.

## A ARCHITECTURE

Pix2scene is composed of an encoder network (see Table 4), a decoder network (see Table 5), and a critic network (see Table 6). Specifically, the decoder architecture is similar to the generator in DCGAN (Radford et al., 2015) but with LeakyReLU (Mikolov et al., 2011) as the activation function and batch-normalization (Ioffe & Szegedy, 2015). Also, we adjusted its depth and width to accommodate the high-resolution images. In order to condition the camera position on the \(z\) variable, we use conditional normalization in alternate layers of the decoder. We train our model for 60k iterations with a batch size of 6 on images of resolution \(128\times 128\times 3\).

Table 4: Pix2scene encoder architecture.

<table><tr><td>Layer</td><td>Output size</td><td>Kernel size</td><td>Stride</td><td>BatchNorm</td><td>Activation</td></tr><tr><td>Input [x, c]</td><td>128 × 128 × 3</td><td></td><td></td><td></td><td></td></tr><tr><td>Convolution</td><td>64 × 64 × 85</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>32 × 32 × 170</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>16 × 16 × 340</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>8 × 8 × 680</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>4 × 4 × 1360</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>1 × 1 × 1</td><td>4 × 4</td><td>1</td><td>No</td><td></td></tr></table>

Table 5: Pix2scene decoder architecture.
<table><tr><td>Layer</td><td>Output size</td><td>Kernel size</td><td>Stride</td><td>BatchNorm</td><td>Activation</td></tr><tr><td>Input [z, c]</td><td>131 × 1</td><td></td><td></td><td></td><td></td></tr><tr><td>Convolution</td><td>4 × 4 × 1344</td><td>4 × 4</td><td>1</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>8 × 8 × 627</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>16 × 16 × 336</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>32 × 32 × 168</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>64 × 64 × 84</td><td>4 × 4</td><td>2</td><td>Yes</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>128 × 128 × nCh</td><td>4 × 4</td><td>2</td><td>Yes</td><td></td></tr></table>

<table><tr><td>Layer</td><td>Output size</td><td>Kernel size</td><td>Stride</td><td>BatchNorm</td><td>Activation</td></tr><tr><td>Input [x, c]</td><td>128 × 128 × 6</td><td></td><td></td><td></td><td></td></tr><tr><td>Convolution</td><td>64 × 64 × 85</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>32 × 32 × 170</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>16 × 16 × 340</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>8 × 8 × 680</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution + [z]</td><td>4 × 4 × 1360</td><td>4 × 4</td><td>2</td><td>No</td><td>LeakyReLU</td></tr><tr><td>Convolution</td><td>1 × 1 × 1</td><td>4 × 4</td><td>1</td><td>No</td><td></td></tr></table>

Table 6: Pix2scene critic architecture. The conditional version takes the image, the latent code \(z\) and the camera position \(c\).

## B MATERIAL, LIGHTS, AND CAMERA PROPERTIES

Material. In our experiments, we use diffuse materials with uniform reflectance. The reflectance values are chosen arbitrarily and we use the same material properties on both the input and the generator side.

Camera. The camera is specified by its position, its viewing direction and a vector indicating its orientation. The camera positions were sampled uniformly at random on a sphere for the 3D-IQTT task, and on a spherical patch contained in the positive octant for the rest of the experiments. The viewing direction was updated based on the camera position and the center of mass of the objects, so that the camera was always looking at a fixed point in the scene as its position changed. The focal length ranged between 18 mm and 25 mm in all the experiments, and the field of view was fixed to 24 mm. The camera properties were also shared between the input and the generator side. However, in the 3D-IQTT task we relax the assumption that we know the camera pose, and instead estimate the view as part of the learnt latent representation.

Lights. For the light sources, we experimented with single and multiple point-light sources, with the light colors chosen randomly. The light positions are sampled uniformly on a sphere for the 3D-IQTT tasks, and uniformly on a spherical patch covering the positive octant for the other scenes. The same light colors and positions are used both for rendering the input and the generated images. The lights act as physical spot lights, with the radiant energy attenuating quadratically with distance.

![](images/13_0.jpg)
<center>Figure 12: Random lights configuration. </center>
As an ablation study, we relaxed the assumption of having perfect knowledge of the lights by using lights with random positions and random colors. These experiments show that the light information is not needed by our model to learn the 3D structure of the data. However, as we use random lights on the generator side, the shading of the reconstructions has a different color than the input, as shown in Figure 12.

## C EVALUATION OF 3D RECONSTRUCTIONS

For evaluating 3D reconstructions, we use the Hausdorff distance (Taha & Hanbury, 2015) as a measure of similarity between two shapes, as in Niu et al. (2018). Given two point sets \(A\) and \(B\), the Hausdorff distance is \(\max \left\{D_{H}^{+}(A,B),D_{H}^{+}(B,A)\right\}\), where \(D_{H}^{+}\) is an asymmetric (directed) Hausdorff distance between two point sets: \(D_{H}^{+}(A,B) = \max_{a\in A} D(a,B)\), the largest Euclidean distance \(D(\cdot)\) from a point in \(A\) to the set \(B\), with the analogous definition for the reverse direction \(D_{H}^{+}(B,A)\).
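Concretely, the symmetric Hausdorff distance just described can be computed as follows; the use of `torch.cdist` and the \((N, 3)\) point-set layout are our own assumptions for the sketch.

```python
import torch

def hausdorff(a, b):
    """Symmetric Hausdorff distance max{D_H+(A,B), D_H+(B,A)}.

    a: (N, 3) and b: (M, 3) point sets (e.g., reconstructed and
    ground-truth surfel positions).
    """
    d = torch.cdist(a, b)                  # (N, M) pairwise Euclidean distances
    forward = d.min(dim=1).values.max()    # D_H+(A, B): worst nearest neighbour
    reverse = d.min(dim=0).values.max()    # D_H+(B, A)
    return torch.max(forward, reverse)
```

The forward and reverse terms of this function correspond to the Hausdorff-F and Hausdorff-R rows reported in Tables 1 and 2.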
## D ARCHITECTURE FOR 3D IQTT EVALUATIONS

The pix2scene architecture remains similar to that of the previous sections, but with higher capacity in the decoder and critic, as this task is more challenging and complex. The more important difference is that in these experiments we do not condition the networks on the camera pose, to be fair to the baselines. In addition to the three networks, we have a statistics network (see Table 7) that estimates and minimizes the mutual information between the two sets of dimensions in the latent code using MINE (Belghazi et al., 2018). Out of the 128 dimensions of \(z\), we use the first 118 dimensions to represent scene-based information and the rest to encode view-based information. The architecture of the baseline networks is shown in Figure 13. The contrastive loss used for training these baselines is given in Equation 7.

Table 7: Pix2scene statistics network architecture.

<table><tr><td>Layer</td><td>Output size</td><td>Kernel size</td><td>Stride</td><td>BatchNorm</td><td>Activation</td></tr><tr><td>Input [z[: 118], z[118 :]]</td><td>1 × 1 × 128</td><td></td><td></td><td></td><td></td></tr><tr><td>Convolution</td><td>1 × 1 × 256</td><td>1 × 1</td><td>1</td><td>No</td><td>ELU</td></tr><tr><td>Convolution</td><td>1 × 1 × 512</td><td>1 × 1</td><td>1</td><td>No</td><td>ELU</td></tr><tr><td>Convolution</td><td>1 × 1 × 1</td><td>1 × 1</td><td>2</td><td>No</td><td>None</td></tr></table>

![](images/14_0.jpg)
<center>Figure 13: 3D-IQTT baseline architecture. The ResNet-50 modules all share the same weights and were slightly modified to support our image size. "FC" stands for fully-connected layer and the hidden node sizes are 2048, 512, and 256 respectively. The output of the network is encoded as a one-hot vector. </center>

The contrastive loss from Equation 7 is applied to the 2048 features that are generated by each ResNet block. \(x_{1}\) and \(x_{2}\) are the input images, \(y\) is either 0 (if the inputs are supposed to be the same) or 1 (if the images are supposed to be different), \(G_{\theta}\) is each ResNet block, parameterized by \(\theta\), and \(m\) is the margin, which we set to 2.0. The loss function is from Hadsell et al. (2006) but used slightly differently.

\[\begin{array}{c}{\mathcal{L}_{\theta}(x_{1},x_{2},y)=(1-y)\frac{1}{2}\left(D_{\theta}(x_{1},x_{2})\right)^{2}+y\,\frac{1}{2}\left(\max(0,m-D_{\theta}(x_{1},x_{2}))\right)^{2}}\\ {D_{\theta}(x_{1},x_{2})=\|G_{\theta}(x_{1})-G_{\theta}(x_{2})\|_{2}}\end{array} \quad (7)\]

## E MORE SCENE RECONSTRUCTIONS

Figure 14 shows 3D reconstructions of scenes formed by boxes in a room. In Figure 15 our model is asked to reconstruct the scenes in the first column and then render different views of the same scene; in this case we show the normal maps of those views. Figure 16 shows the recovered shading, depth and normal images from reconstructions of complex scenes such as bedrooms and the bunny.

![](images/14_1.jpg)
<center>Figure 14: Scene reconstruction. (a) Input images of rotated cubes in a room. (b) pix2scene reconstructions with (c) the associated depth maps. </center>

![](images/15_0.jpg)
<center>Figure 15: Normal views reconstruction. For each row, the first column is the input image and the other columns are the extrapolated normal maps of that image from different views. </center>

![](images/15_1.jpg)
<center>Figure 16: Reconstruction of complex scenes. Reconstruction of bedroom scenes and the bunny. </center>
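For completeness, a minimal sketch of the contrastive loss in Equation 7 of Appendix D is given below; the batched layout, the feature size of 2048 and the mean reduction over the batch are our assumptions rather than details stated in the text.

```python
import torch

def contrastive_loss(g1, g2, y, margin=2.0):
    """Contrastive loss of Equation 7 (after Hadsell et al., 2006).

    g1, g2: (B, 2048) ResNet features G_theta(x1), G_theta(x2)
    y:      (B,) 0 if the pair should match, 1 otherwise
    """
    d = torch.norm(g1 - g2, p=2, dim=1)                    # D_theta(x1, x2)
    same = (1 - y) * 0.5 * d ** 2                          # pull matching pairs together
    diff = y * 0.5 * torch.clamp(margin - d, min=0) ** 2   # push others past the margin
    return (same + diff).mean()
```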
reject
Reject
5.666667
ICLR_2019_paper_1325
iclr
2,019
# LARGE SCALE GAN TRAINING FOR HIGH FIDELITY NATURAL IMAGE SYNTHESIS

Andrew Brock† Heriot-Watt University ajb5@hw.ac.uk
Jeff Donahue† DeepMind jeffdonahue@google.com
Karen Simonyan† DeepMind simonyan@google.com

## ABSTRACT

Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at \(128 \times 128\) resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Fréchet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.65.

## 1 INTRODUCTION

![](images/0_0.jpg)
<center>Figure 1: Class-conditional samples generated by our model. </center>

The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks (GANs, Goodfellow et al. (2014)) at the forefront of efforts to generate high-fidelity, diverse images with models learned directly from data. GAN training is dynamic, and sensitive to nearly every aspect of its setup (from optimization parameters to model architecture), but a torrent of research has yielded empirical and theoretical insights enabling stable training in a variety of settings. Despite this progress, the current state of the art in conditional ImageNet modeling (Zhang et al., 2018) achieves an Inception Score (Salimans et al., 2016) of 52.5, compared to 233 for real data.

In this work, we set out to close the gap in fidelity and variety between images generated by GANs and real-world images from the ImageNet dataset. We make the following three contributions towards this goal:

- We demonstrate that GANs benefit dramatically from scaling, and train models with two to four times as many parameters and eight times the batch size compared to prior art. We introduce two simple, general architectural changes that improve scalability, and modify a regularization scheme to improve conditioning, demonstrably boosting performance.

- As a side effect of our modifications, our models become amenable to the "truncation trick," a simple sampling technique that allows explicit, fine-grained control of the trade-off between sample variety and fidelity.

- We discover instabilities specific to large scale GANs, and characterize them empirically. Leveraging insights from this analysis, we demonstrate that a combination of novel and existing techniques can reduce these instabilities, but complete training stability can only be achieved at a dramatic cost to performance.

Our modifications substantially improve class-conditional GANs. When trained on ImageNet at \(128 \times 128\) resolution, our models (BigGANs) improve the state-of-the-art Inception Score (IS) and Fréchet Inception Distance (FID) from 52.52 and 18.65 to 166.5 and 7.4 respectively.
We also successfully train BigGANs on ImageNet at \(256 \times 256\) and \(512 \times 512\) resolution, achieving an IS and FID of 232.5 and 8.1 at \(256 \times 256\), and an IS and FID of 241.5 and 11.5 at \(512 \times 512\). Finally, we train our models on an even larger dataset – JFT-300M – and demonstrate that our design choices transfer well from ImageNet. Code and weights for our pretrained generators are publicly available\(^{1}\).

## 2 BACKGROUND

A Generative Adversarial Network (GAN) involves Generator (\(\mathbf{G}\)) and Discriminator (\(\mathbf{D}\)) networks whose purpose, respectively, is to map random noise to samples and to discriminate real from generated samples. Formally, the GAN objective, in its original form (Goodfellow et al., 2014), involves finding a Nash equilibrium to the following two-player min-max problem:

\[\min_{G}\max_{D}\mathbb{E}_{x\sim q_{\mathrm{data}}(\mathbf{x})}[\log D(\mathbf{x})] + \mathbb{E}_{\mathbf{z}\sim p(\mathbf{z})}[\log (1 - D(G(\mathbf{z})))], \quad (1)\]

where \(\mathbf{z} \in \mathbb{R}^{d_{z}}\) is a latent variable drawn from a distribution \(p(\mathbf{z})\) such as \(\mathcal{N}(0, I)\) or \(\mathcal{U}[- 1, 1]\). When applied to images, \(\mathbf{G}\) and \(\mathbf{D}\) are usually convolutional neural networks (Radford et al., 2016). Without auxiliary stabilization techniques, this training procedure is notoriously brittle, requiring finely-tuned hyperparameters and architectural choices to work at all.

Much recent research has accordingly focused on modifications to the vanilla GAN procedure to impart stability, drawing on a growing body of empirical and theoretical insights (Nowozin et al., 2016; Sonderby et al., 2017; Fedus et al., 2018). One line of work is focused on changing the objective function (Arjovsky et al., 2017; Mao et al., 2016; Lim & Ye, 2017; Bellemare et al., 2017; Salimans et al., 2018) to encourage convergence. Another line is focused on constraining \(\mathbf{D}\) through gradient penalties (Gulrajani et al., 2017; Kodali et al., 2017; Mescheder et al., 2018) or normalization (Miyato et al., 2018), both to counteract the use of unbounded loss functions and to ensure \(\mathbf{D}\) provides gradients everywhere to \(\mathbf{G}\).

Of particular relevance to our work is Spectral Normalization (Miyato et al., 2018), which enforces Lipschitz continuity on \(\mathbf{D}\) by normalizing its parameters with running estimates of their first singular values, inducing backwards dynamics that adaptively regularize the top singular direction. Relatedly, Odena et al. (2018) analyze the condition number of the Jacobian of \(\mathbf{G}\) and find that performance is dependent on \(\mathbf{G}\)'s conditioning. Zhang et al. (2018) find that employing Spectral Normalization in \(\mathbf{G}\) improves stability, allowing for fewer \(\mathbf{D}\) steps per iteration. We extend these analyses to gain further insight into the pathology of GAN training.

Other works focus on the choice of architecture, such as SA-GAN (Zhang et al., 2018), which adds the self-attention block from (Wang et al., 2018) to improve the ability of both \(\mathbf{G}\) and \(\mathbf{D}\) to model global structure. ProGAN (Karras et al., 2018) trains high-resolution GANs in the single-class setting by training a single model across a sequence of increasing resolutions.

In conditional GANs (Mirza & Osindero, 2014), class information can be fed into the model in various ways.
In (Odena et al., 2017) it is provided to \(\mathbf{G}\) by concatenating a 1-hot class vector to the noise vector, and the objective is modified to encourage conditional samples to maximize the corresponding class probability predicted by an auxiliary classifier. de Vries et al. (2017) and Dumoulin et al. (2017) modify the way class conditioning is passed to \(\mathbf{G}\) by supplying it with class-conditional gains and biases in BatchNorm (Ioffe & Szegedy, 2015) layers. In Miyato & Koyama (2018), \(\mathbf{D}\) is conditioned by using the cosine similarity between its features and a set of learned class embeddings as additional evidence for distinguishing real and generated samples, effectively encouraging generation of samples whose features match a learned class prototype.

Table 1: Fréchet Inception Distance (FID, lower is better) and Inception Score (IS, higher is better) for ablations of our proposed modifications. Batch is the batch size, Param is the total number of parameters, Ch. is the channel multiplier representing the number of units in each layer, Shared is using shared embeddings, Skip-\(z\) is using skip connections from the latent to multiple layers, Ortho. is Orthogonal Regularization, and Itr indicates whether the setting is stable to \(10^{6}\) iterations, or else the iteration at which it collapses. Other than rows 1-4, results are computed across 8 random initializations.

<table><tr><td>Batch</td><td>Ch.</td><td>Param (M)</td><td>Shared</td><td>Skip-z</td><td>Ortho.</td><td>Itr × 10³</td><td>FID</td><td>IS</td></tr><tr><td>256</td><td>64</td><td>81.5</td><td colspan="3">SA-GAN Baseline</td><td>1000</td><td>18.65</td><td>52.52</td></tr><tr><td>512</td><td>64</td><td>81.5</td><td>X</td><td>X</td><td>X</td><td>1000</td><td>15.30</td><td>58.77(±1.18)</td></tr><tr><td>1024</td><td>64</td><td>81.5</td><td>X</td><td>X</td><td>X</td><td>1000</td><td>14.88</td><td>63.03(±1.42)</td></tr><tr><td>2048</td><td>64</td><td>81.5</td><td>X</td><td>X</td><td>X</td><td>732</td><td>12.39</td><td>76.85(±3.83)</td></tr><tr><td>2048</td><td>96</td><td>173.5</td><td>X</td><td>X</td><td>X</td><td>295(±18)</td><td>9.54(±0.62)</td><td>92.98(±4.27)</td></tr><tr><td>2048</td><td>96</td><td>160.6</td><td>✓</td><td>X</td><td>X</td><td>185(±11)</td><td>9.18(±0.13)</td><td>94.94(±1.32)</td></tr><tr><td>2048</td><td>96</td><td>158.3</td><td>✓</td><td>✓</td><td>X</td><td>152(±7)</td><td>8.73(±0.45)</td><td>98.76(±2.84)</td></tr><tr><td>2048</td><td>96</td><td>158.3</td><td>✓</td><td>✓</td><td>✓</td><td>165(±13)</td><td>8.51(±0.32)</td><td>99.31(±2.10)</td></tr><tr><td>2048</td><td>64</td><td>71.3</td><td>✓</td><td>✓</td><td>✓</td><td>371(±7)</td><td>10.48(±0.10)</td><td>86.90(±0.61)</td></tr></table>

Objectively evaluating implicit generative models is difficult (Theis et al., 2015). A variety of works have proposed heuristics for measuring the sample quality of models without tractable likelihoods (Salimans et al., 2016; Heusel et al., 2017; Binkowski et al., 2018; Wu et al., 2017). Of these, the Inception Score (IS, Salimans et al. (2016)) and Fréchet Inception Distance (FID, Heusel et al. (2017)) have become popular despite their notable flaws (Barratt & Sharma, 2018). We employ them as approximate measures of sample quality, and to enable comparison against previous work.

## 3 SCALING UP GANs

In this section, we explore methods for scaling up GAN training to reap the performance benefits of larger models and larger batches.
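As a reference point for the experiments that follow, here is a minimal PyTorch sketch of the two-player objective in Equation 1; the function names, batch handling and the use of `logsigmoid` for numerical stability are our own illustrative choices, not the paper's training code.

```python
import torch
import torch.nn.functional as F

def d_loss(D, G, x_real, z):
    """Discriminator ascends E[log D(x)] + E[log(1 - D(G(z)))] (Equation 1).

    D returns raw logits, so log D(x) = logsigmoid(logit) and
    log(1 - D(x)) = logsigmoid(-logit), which is numerically stable.
    Minimizing the negation performs the ascent step.
    """
    fake = G(z).detach()  # do not backprop into G on D's step
    return -(F.logsigmoid(D(x_real)).mean() + F.logsigmoid(-D(fake)).mean())

def g_loss(D, G, z):
    """Generator descends E[log(1 - D(G(z)))], the minimax form of Equation 1."""
    return F.logsigmoid(-D(G(z))).mean()
```

The baseline described next uses the hinge version of this objective rather than the logistic form; swapping the two `logsigmoid` terms for hinge terms recovers it.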
As a baseline, we employ the SA-GAN architecture of Zhang et al. (2018), which uses the hinge loss (Lim & Ye, 2017; Tran et al., 2017) GAN objective. We provide class information to \(\mathbf{G}\) with class-conditional BatchNorm (Dumoulin et al., 2017; de Vries et al., 2017) and to \(\mathbf{D}\) with projection (Miyato & Koyama, 2018). The optimization settings follow Zhang et al. (2018) (notably employing Spectral Norm in \(\mathbf{G}\)) with the modification that we halve the learning rates and take two \(\mathbf{D}\) steps per \(\mathbf{G}\) step. For evaluation, we employ moving averages of \(\mathbf{G}\)'s weights following Karras et al. (2018); Mescheder et al. (2018); Yazıcı et al. (2018), with a decay of 0.9999. We use Orthogonal Initialization (Saxe et al., 2014), whereas previous works used \(\mathcal{N}(0,0.02I)\) (Radford et al., 2016) or Xavier initialization (Glorot & Bengio, 2010). Each model is trained on 128 to 512 cores of a Google TPUv3 Pod (Google, 2018), and computes BatchNorm statistics in \(\mathbf{G}\) across all devices, rather than per-device as is typical. We find progressive growing (Karras et al., 2018) unnecessary even for our \(512 \times 512\) models. Additional details are in Appendix C.

We begin by increasing the batch size for the baseline model, and immediately find tremendous benefits in doing so. Rows 1-4 of Table 1 show that simply increasing the batch size by a factor of 8 improves the state-of-the-art IS by 46%. We conjecture that this is a result of each batch covering more modes, providing better gradients for both networks. One notable side effect of this scaling is that our models reach better final performance in fewer iterations, but become unstable and undergo complete training collapse. We discuss the causes and ramifications of this in Section 4. For these experiments, we report scores from checkpoints saved just before collapse.

We then increase the width (number of channels) in each layer by 50%, approximately doubling the number of parameters in both models. This leads to a further IS improvement of 21%, which we posit is due to the increased capacity of the model relative to the complexity of the dataset. Doubling the depth did not initially lead to improvement – we addressed this later in the BigGAN-deep model, which uses a different residual block structure.

![](images/3_0.jpg)
<center>Figure 2: (a) The effects of increasing truncation. From left to right, the threshold is set to 2, 1, 0.5, 0.04. (b) Saturation artifacts from applying truncation to a poorly conditioned model. </center>

We note that the class embeddings \(c\) used for the conditional BatchNorm layers in \(\mathbf{G}\) contain a large number of weights. Instead of having a separate layer for each embedding (Miyato et al., 2018; Zhang et al., 2018), we opt to use a shared embedding, which is linearly projected to each layer's gains and biases (Perez et al., 2018). This reduces computation and memory costs, and improves training speed (in the number of iterations required to reach a given performance) by \(37\%\). Next, we add direct skip connections (skip-\(z\)) from the noise vector \(z\) to multiple layers of \(\mathbf{G}\) rather than just the initial layer. The intuition behind this design is to allow \(\mathbf{G}\) to use the latent space to directly influence features at different resolutions and levels of hierarchy.
In BigGAN, this is accomplished by splitting \(z\) into one chunk per resolution, and concatenating each chunk to the conditional vector \(c\) which gets projected to the BatchNorm gains and biases. In BigGAN-deep, we use an even simpler design, concatenating the entire \(z\) with the conditional vector without splitting it into chunks. Previous works (Goodfellow et al., 2014; Denton et al., 2015) have considered variants of this concept; our implementation is a minor modification of this design. Skip-\(z\) provides a modest performance improvement of around \(4\%\), and improves training speed by a further \(18\%\).

### 3.1 TRADING OFF VARIETY AND FIDELITY WITH THE TRUNCATION TRICK

Unlike models which need to backpropagate through their latents, GANs can employ an arbitrary prior \(p(z)\), yet the vast majority of previous works have chosen to draw \(z\) from either \(\mathcal{N}(0, I)\) or \(\mathcal{U}[- 1, 1]\). We question the optimality of this choice and explore alternatives in Appendix E.

Remarkably, our best results come from using a different latent distribution for sampling than was used in training. Taking a model trained with \(z \sim \mathcal{N}(0, I)\) and sampling \(z\) from a truncated normal (where values which fall outside a range are resampled to fall inside that range) immediately provides a boost to IS and FID. We call this the Truncation Trick: truncating a \(z\) vector by resampling the values with magnitude above a chosen threshold leads to an improvement in individual sample quality at the cost of a reduction in overall sample variety. Figure 2(a) demonstrates this: as the threshold is reduced, and elements of \(z\) are truncated towards zero (the mode of the latent distribution), individual samples approach the mode of \(\mathbf{G}\)'s output distribution. Related observations about this trade-off were made in (Marchesi, 2016; Pieters & Wiering, 2014).

This technique allows fine-grained, post-hoc selection of the trade-off between sample quality and variety for a given \(\mathbf{G}\). Notably, we can compute FID and IS for a range of thresholds, obtaining the variety-fidelity curve reminiscent of the precision-recall curve (Figure 17). As IS does not penalize lack of variety in class-conditional models, reducing the truncation threshold leads to a direct increase in IS (analogous to precision). FID penalizes lack of variety (analogous to recall) but also rewards precision, so we initially see a moderate improvement in FID, but as truncation approaches zero and variety diminishes, the FID sharply worsens.
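A minimal sketch of this resampling procedure is given below; the rejection loop is one straightforward way to implement "values which fall outside a range are resampled," and is our own illustrative implementation rather than the released sampler.

```python
import torch

def truncated_noise(batch_size, dim, threshold):
    """Sample z ~ N(0, I) and resample any component whose magnitude
    exceeds `threshold`, as in the Truncation Trick."""
    z = torch.randn(batch_size, dim)
    mask = z.abs() > threshold
    while mask.any():
        z[mask] = torch.randn(int(mask.sum()))  # redraw offending components
        mask = z.abs() > threshold
    return z
```

Feeding `truncated_noise(n, d, t)` to a trained generator with a small `t` yields high-fidelity, low-variety samples; large `t` recovers the training distribution.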
This regularization is known to often be too limiting (Miyato et al., 2018), so we explore several variants designed to relax the constraint while still imparting the desired smoothness to our models. The version we find to work best removes the diagonal terms from the regularization, and aims to minimize the pairwise cosine similarity between filters but does not constrain their norm:

\[R_{\beta}(W) = \beta \| W^{\top}W\odot (\mathbf{1} - I)\|_{\mathrm{F}}^{2}, \quad (3)\]

where \(\mathbf{1}\) denotes a matrix with all elements set to 1. We sweep \(\beta\) values and select \(10^{-4}\), finding this small added penalty sufficient to improve the likelihood that our models will be amenable to truncation. Across runs in Table 1, we observe that without Orthogonal Regularization, only \(16\%\) of models are amenable to truncation, compared to \(60\%\) when trained with Orthogonal Regularization.

### 3.2 SUMMARY

We find that current GAN techniques are sufficient to enable scaling to large models and distributed, large-batch training. We find that we can dramatically improve the state of the art and train models up to \(512 \times 512\) resolution without need for explicit multiscale methods like Karras et al. (2018). Despite these improvements, our models undergo training collapse, necessitating early stopping in practice. In the next two sections we investigate why settings which were stable in previous works become unstable when applied at scale.

## 4 ANALYSIS

![](images/4_0.jpg)
<center>Figure 3: A typical plot of the first singular value \(\sigma_{0}\) in the layers of \(\mathbf{G}\) (a) and \(\mathbf{D}\) (b) before Spectral Normalization. Most layers in \(\mathbf{G}\) have well-behaved spectra, but without constraints a small subset grow throughout training and explode at collapse. \(\mathbf{D}\)'s spectra are noisier but otherwise better-behaved. Colors from red to violet indicate increasing depth. </center>

### 4.1 CHARACTERIZING INSTABILITY: THE GENERATOR

Much previous work has investigated GAN stability from a variety of analytical angles and on toy problems, but the instabilities we observe occur for settings which are stable at small scale, necessitating direct analysis at large scale. We monitor a range of weight, gradient, and loss statistics during training, in search of a metric which might presage the onset of training collapse, similar to Odena et al. (2018). We found the top three singular values \(\sigma_{0}, \sigma_{1}, \sigma_{2}\) of each weight matrix to be the most informative. They can be efficiently computed using the Arnoldi iteration method (Golub & Van der Vorst, 2000), which extends the power iteration method used in Miyato et al. (2018) to the estimation of additional singular vectors and values.
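For reference, a minimal sketch of the underlying power iteration for \(\sigma_{0}\) (illustrative NumPy; the Arnoldi extension to additional singular values is omitted):

```python
import numpy as np

def top_singular_value(W, iters=50, seed=0):
    """Estimate sigma_0 of W by alternating power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = W @ v                 # approximate left singular vector
        u /= np.linalg.norm(u)
        v = W.T @ u               # approximate right singular vector
        v /= np.linalg.norm(v)
    return float(u @ W @ v)       # sigma_0 ~= u^T W v

W = np.random.default_rng(1).standard_normal((128, 64))
print(top_singular_value(W), np.linalg.svd(W, compute_uv=False)[0])
```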
A clear pattern emerges, as can be seen in Figure 3(a) and Appendix F: most \(\mathbf{G}\) layers have well-behaved spectral norms, but some layers (typically the first layer in \(\mathbf{G}\), which is over-complete and not convolutional) are ill-behaved, with spectral norms that grow throughout training and explode at collapse. To ascertain if this pathology is a cause of collapse or merely a symptom, we study the effects of imposing additional conditioning on \(\mathbf{G}\) to explicitly counteract spectral explosion. First, we directly regularize the top singular value \(\sigma_{0}\) of each weight, either towards a fixed value \(\sigma_{reg}\) or towards some ratio \(r\) of the second singular value, \(r \cdot sg(\sigma_{1})\) (with \(sg\) the stop-gradient operation, to prevent the regularization from increasing \(\sigma_{1}\)). Alternatively, we employ a partial singular value decomposition to instead clamp \(\sigma_{0}\). Given a weight \(W\), its first singular vectors \(u_{0}\) and \(v_{0}\), and \(\sigma_{clamp}\) the value to which \(\sigma_{0}\) will be clamped, our weights become:

\[W = W - \max (0,\sigma_{0} - \sigma_{clamp})v_{0}u_{0}^{\top}, \quad (4)\]

where \(\sigma_{clamp}\) is set to either \(\sigma_{reg}\) or \(r \cdot sg(\sigma_{1})\). We observe that both with and without Spectral Normalization these techniques have the effect of preventing the gradual increase and explosion of either \(\sigma_{0}\) or \(\frac{\sigma_{0}}{\sigma_{1}}\), but even though in some cases they mildly improve performance, no combination prevents training collapse. This evidence suggests that while conditioning \(\mathbf{G}\) might improve stability, it is insufficient to ensure stability. We accordingly turn our attention to \(\mathbf{D}\).

### 4.2 CHARACTERIZING INSTABILITY: THE DISCRIMINATOR

As with \(\mathbf{G}\), we analyze the spectra of \(\mathbf{D}\)'s weights to gain insight into its behavior, then seek to stabilize training by imposing additional constraints. Figure 3(b) displays a typical plot of \(\sigma_{0}\) for \(\mathbf{D}\) (with further plots in Appendix F). Unlike \(\mathbf{G}\), we see that the spectra are noisy, \(\frac{\sigma_{0}}{\sigma_{1}}\) is well-behaved, and the singular values grow throughout training but only jump at collapse, instead of exploding.

The spikes in \(\mathbf{D}\)'s spectra might suggest that it periodically receives very large gradients, but we observe that the Frobenius norms are smooth (Appendix F), suggesting that this effect is primarily concentrated on the top few singular directions. We posit that this noise is a result of optimization through the adversarial training process, where \(\mathbf{G}\) periodically produces batches which strongly perturb \(\mathbf{D}\). If this spectral noise is causally related to instability, a natural counter is to employ gradient penalties, which explicitly regularize changes in \(\mathbf{D}\)'s Jacobian. We explore the \(R_{1}\) zero-centered gradient penalty from Mescheder et al. (2018):

\[R_{1}:= \frac{\gamma}{2}\mathbb{E}_{p_{D}(x)}\left[\| \nabla D(x)\|_{F}^{2}\right]. \quad (5)\]

With the default suggested \(\gamma\) strength of 10, training becomes stable and improves the smoothness and boundedness of spectra in both \(\mathbf{G}\) and \(\mathbf{D}\), but performance severely degrades, resulting in a \(45\%\) reduction in IS. Reducing the penalty partially alleviates this degradation, but results in increasingly ill-behaved spectra; even with the penalty strength reduced to 1 (the lowest strength for which sudden collapse does not occur) the IS is reduced by \(20\%\). Repeating this experiment with various strengths of Orthogonal Regularization, DropOut (Srivastava et al., 2014), and L2 (see Appendix I for details) reveals similar behaviors for these regularization strategies: with high enough penalties on \(\mathbf{D}\), training stability can be achieved, but at a substantial cost to performance.

We also observe that \(\mathbf{D}\)'s loss approaches zero during training, but undergoes a sharp upward jump at collapse (Appendix F). One possible explanation for this behavior is that \(\mathbf{D}\) is overfitting to the training set, memorizing training examples rather than learning some meaningful boundary between real and generated images.
As a simple test for \(\mathbf{D}\)'s memorization (related to Gulrajani et al. (2017)), we evaluate uncollapsed discriminators on the ImageNet training and validation sets, and measure what percentage of samples are classified as real or generated. While the training accuracy is consistently above \(98\%\), the validation accuracy falls in the range of \(50-55\%\), no better than random guessing (regardless of regularization strategy). This confirms that \(\mathbf{D}\) is indeed memorizing the training set; we deem this in line with \(\mathbf{D}\)'s role, which is not explicitly to generalize, but to distill the training data and provide a useful learning signal for \(\mathbf{G}\). Additional experiments and discussion are provided in Appendix G.

### 4.3 SUMMARY

We find that stability does not come solely from \(\mathbf{G}\) or \(\mathbf{D}\), but from their interaction through the adversarial training process. While the symptoms of their poor conditioning can be used to track and identify instability, ensuring reasonable conditioning proves necessary for training but insufficient to prevent eventual training collapse. It is possible to enforce stability by strongly constraining \(\mathbf{D}\), but doing so incurs a dramatic cost in performance. With current techniques, better final performance can be achieved by relaxing this conditioning and allowing collapse to occur at the later stages of training, by which time a model is sufficiently trained to achieve good results.

## 5 EXPERIMENTS

![](images/6_0.jpg)
<center>Figure 4: Samples from our BigGAN model with truncation threshold 0.5 (a-c) and an example of class leakage in a partially trained model (d). </center>

### 5.1 EVALUATION ON IMAGENET

We evaluate our models on ImageNet ILSVRC 2012 (Russakovsky et al., 2015) at \(128 \times 128\), \(256 \times 256\), and \(512 \times 512\) resolutions, employing the settings from Table 1, row 8. The samples generated by our models are presented in Figure 4, with additional samples in Appendix A and online. We report IS and FID in Table 2.

Table 2: Evaluation of models at different resolutions. We report scores without truncation (Column 3), scores at the best FID (Column 4), scores at the IS of validation data (Column 5), and scores at the max IS (Column 6). Standard deviations are computed over at least three random initializations.

<table><tr><td>Model</td><td>Res.</td><td>FID/IS</td><td>(min FID) / IS</td><td>FID / (valid IS)</td><td>FID / (max IS)</td></tr><tr><td>SN-GAN</td><td>128</td><td>27.62/36.80</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>SA-GAN</td><td>128</td><td>18.65/52.52</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>BigGAN</td><td>128</td><td>8.7 ± .6/98.8 ± 3</td><td>7.7 ± .2/126.5 ± 0</td><td>9.6 ± .4/166.3 ± 1</td><td>25 ± 2/206 ± 2</td></tr><tr><td>BigGAN</td><td>256</td><td>8.7 ± .1/142.3 ± 2</td><td>7.7 ± .1/178.0 ± 5</td><td>9.3 ± .3/233.1 ± 1</td><td>25 ± 5/291 ± 4</td></tr><tr><td>BigGAN</td><td>512</td><td>8.1/144.2</td><td>7.6/170.3</td><td>11.8/241.4</td><td>27.0/275</td></tr><tr><td>BigGAN-deep</td><td>128</td><td>5.7 ± .3/124.5 ± 2</td><td>6.3 ± .3/148.1 ± 4</td><td>7.4 ± .6/166.5 ± 1</td><td>25 ± 2/253 ± 11</td></tr><tr><td>BigGAN-deep</td><td>256</td><td>6.9 ± .2/171.4 ± 2</td><td>7.0 ± .1/202.6 ± 2</td><td>8.1 ± .1/232.5 ± 2</td><td>27 ± 8/317 ± 6</td></tr><tr><td>BigGAN-deep</td><td>512</td><td>7.5/152.8</td><td>7.7/181.4</td><td>11.5/241.5</td><td>39.7/298</td></tr></table>
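Every entry in Table 2 other than Column 3 depends on a truncation threshold; the truncated sampling itself is only a few lines. The following NumPy sketch (resampling out-of-range values, as described in Section 3.1) is illustrative:

```python
import numpy as np

def truncated_normal(shape, threshold, seed=0):
    """Sample z ~ N(0, I), resampling entries with |z| > threshold."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(shape)
    while True:
        mask = np.abs(z) > threshold
        if not mask.any():
            return z
        z[mask] = rng.standard_normal(mask.sum())

z = truncated_normal((16, 128), threshold=0.5)
assert np.abs(z).max() <= 0.5
```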
As our models are able to trade sample variety for quality, it is unclear how best to compare against prior art; we accordingly report values at three settings, with complete curves in Appendix D. First, we report the FID/IS values at the truncation setting which attains the best FID. Second, we report the FID at the truncation setting for which our model's IS is the same as that attained by the real validation data, reasoning that this is a passable measure of maximum sample variety achieved while still achieving a good level of "objectness." Third, we report FID at the maximum IS achieved by each model, to demonstrate how much variety must be traded off to maximize quality. In all three cases, our models outperform the previous state-of-the-art IS and FID scores achieved by Miyato et al. (2018) and Zhang et al. (2018).

In addition to the BigGAN model introduced in the first version of the paper and used in the majority of experiments (unless otherwise stated), we also present a 4x deeper model (BigGAN-deep) which uses a different configuration of residual blocks. As can be seen from Table 2, BigGAN-deep substantially outperforms BigGAN across all resolutions and metrics. This confirms that our findings extend to other architectures, and that increased depth leads to improvement in sample quality. Both BigGAN and BigGAN-deep architectures are described in Appendix B.

Our observation that \(\mathbf{D}\) overfits to the training set, coupled with our model's sample quality, raises the obvious question of whether or not \(\mathbf{G}\) simply memorizes training points. To test this, we perform class-wise nearest neighbors analysis in pixel space and the feature space of pre-trained classifier networks (Appendix A). In addition, we present both interpolations between samples and class-wise interpolations (where \(z\) is held constant) in Figures 8 and 9. Our model convincingly interpolates between disparate samples, and the nearest neighbors for its samples are visually distinct, suggesting that our model does not simply memorize training data.

We note that some failure modes of our partially-trained models are distinct from those previously observed.
Most previous failures involve local artifacts (Odena et al., 2016), images consisting of texture blobs instead of objects (Salimans et al., 2016), or the canonical mode collapse. We observe class leakage, where images from one class contain properties of another, as exemplified by Figure 4(d). We also find that many classes on ImageNet are more difficult than others for our model; our model is more successful at generating dogs (which make up a large portion of the dataset, and are mostly distinguished by their texture) than crowds (which comprise a small portion of the dataset and have more large-scale structure). Further discussion is available in Appendix A.

### 5.2 ADDITIONAL EVALUATION ON JFT-300M

To confirm that our design choices are effective for even larger and more complex and diverse datasets, we also present results of our system on a subset of JFT-300M (Sun et al., 2017). The full JFT-300M dataset contains 300M real-world images labeled with 18K categories. Since the category distribution is heavily long-tailed, we subsample the dataset to keep only images with the 8.5K most common labels. The resulting dataset contains 292M images – two orders of magnitude larger than ImageNet. For images with multiple labels, we sample a single label randomly and independently whenever an image is sampled.
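As a small illustration of this per-draw label selection, each time an image is sampled, one of its labels is chosen uniformly at random (a pure NumPy sketch; the label lists and names are invented placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative stand-in: each image carries a variable-length label list.
multi_labels = [[3, 17], [42], [7, 1024, 9]]

def sample_label(labels):
    """Pick one label uniformly at random, independently per draw."""
    return labels[rng.integers(len(labels))]

for labels in multi_labels:
    print(sample_label(labels))
```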
To compute IS and FID for the GANs trained on this dataset, we use an Inception v2 classifier (Szegedy et al., 2016) trained on this dataset. Quantitative results are presented in Table 3. All models are trained with batch size 2048. We compare an ablated version of our model – comparable to SA-GAN (Zhang et al., 2018) but with the larger batch size – against a "full" BigGAN model that makes use of all of the techniques applied to obtain the best results on ImageNet (shared embedding, skip-\(z\), and orthogonal regularization). Our results show that these techniques substantially improve performance even in the setting of this much larger dataset at the same model capacity (64 base channels). We further show that for a dataset of this scale, we see significant additional improvements from expanding the capacity of our models to 128 base channels, while for ImageNet GANs that additional capacity was not beneficial.

Table 3: BigGAN results on JFT-300M at \(256 \times 256\) resolution. The FID and IS columns report these scores given by the JFT-300M-trained Inception v2 classifier with noise distributed as \(z \sim \mathcal{N}(0, I)\) (non-truncated). The (min FID) / IS and FID / (max IS) columns report scores at the best FID and IS from a sweep across truncated noise distributions ranging from \(\sigma = 0\) to \(\sigma = 2\). Images from the JFT-300M validation set have an IS of 50.88 and FID of 1.94.

<table><tr><td>Ch.</td><td>Param (M)</td><td>Shared</td><td>Skip-z</td><td>Ortho.</td><td>FID</td><td>IS</td><td>(min FID) / IS</td><td>FID / (max IS)</td></tr><tr><td>64</td><td>317.1</td><td>✘</td><td>✘</td><td>✘</td><td>48.38</td><td>23.27</td><td>48.6 / 23.1</td><td>49.1 / 23.9</td></tr><tr><td>64</td><td>99.4</td><td>✓</td><td>✓</td><td>✓</td><td>23.48</td><td>24.78</td><td>22.4 / 21.0</td><td>60.9 / 35.8</td></tr><tr><td>96</td><td>207.9</td><td>✓</td><td>✓</td><td>✓</td><td>18.84</td><td>27.86</td><td>17.1 / 23.3</td><td>51.6 / 38.1</td></tr><tr><td>128</td><td>355.7</td><td>✓</td><td>✓</td><td>✓</td><td>13.75</td><td>30.61</td><td>13.0 / 28.0</td><td>46.2 / 47.8</td></tr></table>

In Figure 19 (Appendix D), we present truncation plots for models trained on this dataset. Unlike for ImageNet, where truncation limits of \(\sigma \approx 0\) tend to produce the highest fidelity scores, IS is typically maximized for our JFT-300M models when the truncation value \(\sigma\) ranges from 0.5 to 1. We suspect that this is at least partially due to the intra-class variability of JFT-300M labels, as well as the relative complexity of the image distribution, which includes images with multiple objects at a variety of scales. Interestingly, unlike models trained on ImageNet, where training tends to collapse without heavy regularization (Section 4), the models trained on JFT-300M remain stable over many hundreds of thousands of iterations. This suggests that moving beyond ImageNet to larger datasets may partially alleviate GAN stability issues. The improvement over the baseline GAN model that we achieve on this dataset without changes to the underlying models or training and regularization techniques (beyond expanded capacity) demonstrates that our findings extend from ImageNet to datasets with scale and complexity thus far unprecedented for generative models of images.

## 6 CONCLUSION

We have demonstrated that Generative Adversarial Networks trained to model natural images of multiple categories benefit dramatically from scaling up, in terms of both the fidelity and the variety of the generated samples. As a result, our models set a new level of performance among ImageNet GAN models, improving on the state of the art by a large margin. We have also presented an analysis of the training behavior of large scale GANs, characterized their stability in terms of the singular values of their weights, and discussed the interplay between stability and performance.

## ACKNOWLEDGMENTS

We would like to thank Kai Arulkumaran, Matthias Bauer, Peter Buchlovsky, Jeffrey Defauw, Sander Dieleman, Ian Goodfellow, Ariel Gordon, Karol Gregor, Dominik Grewe, Chris Jones, Jacob Menick, Augustus Odena, Suman Ravuri, Ali Razavi, Mihaela Rosca, and Jeff Stanway.

## REFERENCES

Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: A system for large-scale machine learning. In OSDI, 2016.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In ICML, 2017.

Shane Barratt and Rishi Sharma. A note on the Inception Score. arXiv preprint arXiv:1801.01973, 2018.

Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, and Rémi Munos. The Cramer distance as a solution to biased Wasserstein gradients. arXiv preprint arXiv:1705.10743, 2017.

Mikolaj Binkowski, Dougal J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. In ICLR, 2018.

Andrew Brock, Theodore Lim, J.M. Ritchie, and Nick Weston. Neural photo editing with introspective adversarial networks. In ICLR, 2017.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016.

Harm de Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron Courville. Modulating early visual processing by language. In NIPS, 2017.

Emily Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.

Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. In ICLR, 2017.

William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M. Dai, Shakir Mohamed, and Ian Goodfellow. Many paths to equilibrium: GANs do not need to decrease a divergence at every step. In ICLR, 2018.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010.

Gene Golub and Henk Van der Vorst. Eigenvalue computation in the 20th century. Journal of Computational and Applied Mathematics, 123:35-65, 2000.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.

Google. Cloud TPUs. https://cloud.google.com/tpu/, 2018.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. In NIPS, 2017.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NIPS, 2017.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.

Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In ICLR, 2018.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014.

Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. On convergence and stability of GANs. arXiv preprint arXiv:1705.07215, 2017.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Jae Hyun Lim and Jong Chul Ye. Geometric GAN. arXiv preprint arXiv:1705.02894, 2017.

Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Least squares generative adversarial networks. arXiv preprint arXiv:1611.04076, 2016.

Marco Marchesi. Megapixel size image creation using generative adversarial networks. arXiv preprint arXiv:1706.00082, 2016.

Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In ICML, 2018.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. In ICLR, 2018.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In NIPS, 2016.

Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016.

Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary classifier GANs. In ICML, 2017.

Augustus Odena, Jacob Buckman, Catherine Olsson, Tom B. Brown, Christopher Olah, Colin Raffel, and Ian Goodfellow. Is generator conditioning causally related to GAN performance? In ICML, 2018.

Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron Courville. FiLM: Visual reasoning with a general conditioning layer. In AAAI, 2018.

Mathijs Pieters and Marco Wiering. Comparing generative adversarial network techniques for image creation and modification. arXiv preprint arXiv:1803.09093, 2018.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, and Michael Bernstein. ImageNet large scale visual recognition challenge. IJCV, 115:211-252, 2015.

Tim Salimans and Diederik Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In NIPS, 2016.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.

Tim Salimans, Han Zhang, Alec Radford, and Dimitris Metaxas. Improving GANs using optimal transport. In ICLR, 2018.

Andrew Saxe, James McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In ICLR, 2014.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.

Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Huszár. Amortised MAP inference for image super-resolution. In ICLR, 2017.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15:1929-1958, 2014.

Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, 2017.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.

Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.

Dustin Tran, Rajesh Ranganath, and David M. Blei. Hierarchical implicit models and likelihood-free variational inference. In NIPS, 2017.

Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018.

Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, and Roger B. Grosse. On the quantitative analysis of decoder-based generative models. In ICLR, 2017.

Yasin Yazıcı, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, and Vijay Chandrasekhar. The unusual effectiveness of averaging in GAN training. arXiv preprint arXiv:1806.04498, 2018.

Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318, 2018.

## APPENDIX A ADDITIONAL SAMPLES, INTERPOLATIONS, AND NEAREST NEIGHBORS FROM IMAGENET MODELS

![](images/11_0.jpg)
<center>Figure 5: Samples generated by our BigGAN model at \(256 \times 256\) resolution. </center>

![](images/11_1.jpg)
<center>Figure 6: Samples generated by our BigGAN model at \(512 \times 512\) resolution. </center>

![](images/12_0.jpg)
<center>Figure 7: Comparing easy classes (a) with difficult classes (b) at \(512 \times 512\). Classes such as dogs which are largely textural, and common in the dataset, are far easier to model than classes involving unaligned human faces or crowds. Such classes are more dynamic and structured, and often have details to which human observers are more sensitive. The difficulty of modeling global structure is further exacerbated when producing high-resolution images, even with non-local blocks. </center>

![](images/12_1.jpg)
<center>Figure 8: Interpolations between \(z, c\) pairs. </center>

![](images/13_0.jpg)
<center>Figure 9: Interpolations between \(c\) with \(z\) held constant. Pose semantics are frequently maintained between endpoints (particularly in the final row). Row 2 demonstrates that grayscale is encoded in the joint \(z, c\) space, rather than in \(z\). </center>

![](images/13_1.jpg)
<center>Figure 10: Nearest neighbors in VGG-16-fc7 (Simonyan & Zisserman, 2015) feature space. The generated image is in the top left. </center>

![](images/14_0.jpg)
<center>Figure 11: Nearest neighbors in ResNet-50-avgpool (He et al., 2016) feature space. The generated image is in the top left. </center>

![](images/14_1.jpg)
<center>Figure 12: Nearest neighbors in pixel space. The generated image is in the top left.
</center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 13: Nearest neighbors in VGG-16-fc7 (Simonyan & Zisserman, 2015) feature space. The generated image is in the top left. </center> ![](images/15_1.jpg) <center>Figure 14: Nearest neighbors in ResNet-50-avgpool (He et al., 2016) feature space. The generated image is in the top left. </center> <--- Page Split ---> ## APPENDIX B ARCHITECTURAL DETAILS In the BigGAN model (Figure 15), we use the ResNet (He et al., 2016) GAN architecture of (Zhang et al., 2018), which is identical to that used by (Miyato et al., 2018), but with the channel pattern in \(\mathbf{D}\) modified so that the number of filters in the first convolutional layer of each block is equal to the number of output filters (rather than the number of input filters, as in Miyato et al. (2018); Gulrajani et al. (2017)). We use a single shared class embedding in \(\mathbf{G}\) , and skip connections for the latent vector \(z\) (skip- \(z\) ). In particular, we employ hierarchical latent spaces, so that the latent vector \(z\) is split along its channel dimension into chunks of equal size (20- D in our case), and each chunk is concatenated to the shared class embedding and passed to a corresponding residual block as a conditioning vector. The conditioning of each block is linearly projected to produce per- sample gains and biases for the BatchNorm layers of the block. The bias projections are zero- centered, while the gain projections are centered at 1. Since the number of residual blocks depends on the image resolution, the full dimensionality of \(z\) is 120 for \(128 \times 128\) , 140 for \(256 \times 256\) , and 160 for \(512 \times 512\) images. The BigGAN- deep model (Figure 16) differs from BigGAN in several aspects. It uses a simpler variant of skip- \(z\) conditioning: instead of first splitting \(z\) into chunks, we concatenate the entire \(z\) with the class embedding, and pass the resulting vector to each residual block through skip connections. BigGAN- deep is based on residual blocks with bottlenecks (He et al., 2016), which incorporate two additional \(1 \times 1\) convolutions: the first reduces the number of channels by a factor of 4 before the more expensive \(3 \times 3\) convolutions; the second produces the required number of output channels. While BigGAN relies on \(1 \times 1\) convolutions in the skip connections whenever the number of channels needs to change, in BigGAN- deep we use a different strategy aimed at preserving identity throughout the skip connections. In \(\mathbf{G}\) , where the number of channels needs to be reduced, we simply retain the first group of channels and drop the rest to produce the required number of channels. In \(\mathbf{D}\) , where the number of channels should be increased, we pass the input channels unperturbed, and concatenate them with the remaining channels produced by a \(1 \times 1\) convolution. As far as the network configuration is concerned, the discriminator is an exact reflection of the generator. There are two blocks at each resolution (BigGAN uses one), and as a result BigGAN- deep is four times deeper than BigGAN. Despite their increased depth, the BigGAN- deep models have significantly fewer parameters mainly due to the bottleneck structure of their residual blocks. For example, the \(128 \times 128\) BigGAN- deep \(\mathbf{G}\) and \(\mathbf{D}\) have 50.4M and 34.6M parameters respectively, while the corresponding original BigGAN models have 70.4M and 88.0M parameters. 
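A minimal NumPy sketch of this identity-preserving skip strategy; the NHWC layout and the dense per-pixel stand-in for the \(1 \times 1\) convolution are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def g_skip_reduce(x, out_ch):
    """G-side skip when channels shrink: keep the first out_ch
    channels of the input and drop the rest (NHWC layout)."""
    return x[..., :out_ch]

def d_skip_increase(x, conv1x1):
    """D-side skip when channels grow: pass the input through
    unperturbed and concatenate extra channels produced by a
    1x1 convolution (modeled here as a per-pixel matmul)."""
    extra = x @ conv1x1
    return np.concatenate([x, extra], axis=-1)

x = rng.standard_normal((2, 8, 8, 64))
print(g_skip_reduce(x, 32).shape)         # (2, 8, 8, 32)
W = rng.standard_normal((64, 64)) * 0.02  # 1x1 kernel: 64 extra channels
print(d_skip_increase(x, W).shape)        # (2, 8, 8, 128)
```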
All BigGAN- deep models use attention at \(64 \times 64\) resolution, channel width multiplier \(ch = 128\) , and \(z \in \mathbb{R}^{128}\) . ![](images/16_0.jpg) <center>Figure 15: (a) A typical architectural layout for BigGAN's \(\mathbf{G}\) ; details are in the following tables. (b) A Residual Block (ResBlock up) in BigGAN's \(\mathbf{G}\) . (c) A Residual Block (ResBlock down) in BigGAN's \(\mathbf{D}\) . </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 16: (a) A typical architectural layout for BigGAN-deep’s G; details are in the following tables. (b) A Residual Block (ResBlock up) in BigGAN-deep’s G. (c) A Residual Block (ResBlock down) in BigGAN-deep’s D. A ResBlock (without up or down) in BigGAN-deep does not include the Upsample or Average Pooling layers, and has identity skip connections. </center> <--- Page Split ---> Table 4: BigGAN architecture for \(128\times 128\) images. \(c h\) represents the channel width multiplier in each network from Table 1. <table><tr><td>z ∈ R120 ∼ N(0, I) <br>Embed(y) ∈ R128</td></tr><tr><td>Linear (20 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R128×128×3</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ReLU, Global sum pooling</td></tr><tr><td>Embed(y)·h + (linear → 1)</td></tr><tr><td>(b) Discriminator</td></tr></table> Table 5: BigGAN architecture for \(256\times 256\) images. Relative to the \(128\times 128\) architecture, we add an additional ResBlock in each network at \(16\times 16\) resolution, and move the non-local block in \(\mathbf{G}\) to \(128\times 128\) resolution. Memory constraints prevent us from moving the non-local block in \(\mathbf{D}\) <table><tr><td>z ∈ R140 ∼ N(0, I) <br>Embed(y) ∈ R128</td></tr><tr><td>Linear (20 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>Non-Local Block (128 × 128)</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R256×256×3</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ReLU, Global sum pooling</td></tr><tr><td>Embed(y)·h + (linear → 1)</td></tr><tr><td>(b) Discriminator</td></tr></table> <--- Page Split ---> Table 6: BigGAN architecture for \(512\times 512\) images. Relative to the \(256\times 256\) architecture, we add an additional ResBlock at the \(512\times 512\) resolution. 
Memory constraints force us to move the non-local block in both networks back to \(64\times 64\) resolution as in the \(128\times 128\) pixel setting. <table><tr><td>z ∈ R160 ∼ N(0, I) <br>Embed(y) ∈ R128</td></tr><tr><td>Linear (20 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>ResBlock up ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R512×512×3</td></tr><tr><td>ResBlock down ch → ch</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ReLU, Global sum pooling</td></tr><tr><td>Embed(y)·h + (linear → 1)</td></tr><tr><td>(b) Discriminator</td></tr></table> Table 7: BigGAN-deep architecture for \(128\times 128\) images. <table><tr><td>z ∈ R128 ∼ N(0, I) <br>Embed(y) ∈ R128</td></tr><tr><td>Linear (128 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R128×128×3</td></tr><tr><td>3 × 3 Conv 3 → ch</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td><td></td></tr><tr><td>ReLU, Global sum pooling</td><td></td></tr><tr><td>Embed(y)·h + (linear → 1)</td><td></td></tr><tr><td>(b) Discriminator</td><td></td></tr></table> <--- Page Split ---> Table 8: BigGAN-deep architecture for \(256\times 256\) images. 
<table><tr><td>z ∈ R128 ∼ N(0, I) <br>Embed(y) ∈ R128</td></tr><tr><td>Linear (128 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ReBlock up 16ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td><td></td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R256×256×3</td></tr><tr><td>3 × 3 Conv 3 → ch</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td><td></td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td><td></td></tr><tr><td>ReLU, Global sum pooling</td></tr><tr><td>Embed(y)·h + (linear → 1)</td></tr><tr><td>(b) Discriminator</td></tr></table> <--- Page Split ---> Table 9: BigGAN-deep architecture for \(512\times 512\) images. <table><tr><td>z ∈ R128 ∼ N(0, I)</td></tr><tr><td>Embed(y) ∈ R128</td></tr><tr><td>Linear (128 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ReSBlock up 16ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td><td></td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>ResBlock ch → ch</td></tr><tr><td>ResBlock up ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R512×512×3</td></tr><tr><td>3 × 3 Conv 3 → ch</td></tr><tr><td>ResBlock down ch → ch</td></tr><tr><td>ResBlock ch → ch</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td><td></td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td><td></td></tr><tr><td>ReLU, Global sum pooling</td></tr><tr><td>Embed(y)h + (linear → 1)</td></tr><tr><td>(b) Discriminator</td></tr></table> <--- Page Split ---> ## APPENDIX C EXPERIMENTAL DETAILS Our basic setup follows SA- GAN (Zhang et al., 2018), and is implemented in TensorFlow (Abadi et al., 2016). We employ the architectures detailed in Appendix B, with non- local blocks inserted at a single stage in each network. 
Both \(\mathbf{G}\) and \(\mathbf{D}\) networks are initialized with Orthogonal Initialization (Saxe et al., 2014). We use Adam optimizer (Kingma & Ba, 2014) with \(\beta_{1} = 0\) and \(\beta_{2} = 0.999\) and a constant learning rate. For BigGAN models at all resolutions, we use \(2\cdot 10^{- 4}\) in \(\mathbf{D}\) and \(5\cdot 10^{- 5}\) in \(\mathbf{G}\) . For BigGAN- deep, we use the learning rate of \(2\cdot 10^{- 4}\) in \(\mathbf{D}\) and \(5\cdot 10^{- 5}\) in \(\mathbf{G}\) for \(128\times 128\) models, and \(2.5\cdot 10^{- 5}\) in both \(\mathbf{D}\) and \(\mathbf{G}\) for \(256\times 256\) and \(512\times 512\) models. We experimented with the number of \(\mathbf{D}\) steps per \(\mathbf{G}\) step (varying it from 1 to 6) and found that two \(\mathbf{D}\) steps per \(\mathbf{G}\) step gave the best results. We use an exponential moving average of the weights of \(\mathbf{G}\) at sampling time, with a decay rate set to 0.9999. We employ cross- replica BatchNorm (Ioffe & Szegedy, 2015) in \(\mathbf{G}\) , where batch statistics are aggregated across all devices, rather than a single device as in standard implementations. Spectral Normalization (Miyato et al., 2018) is used in both \(\mathbf{G}\) and \(\mathbf{D}\) , following SA- GAN (Zhang et al., 2018). We train on a Google TPU v3 Pod, with the number of cores proportional to the resolution: 128 for \(128\times 128\) , 256 for \(256\times 256\) , and 512 for \(512\times 512\) . Training takes between 24 and 48 hours for most models. We increase \(\epsilon\) from the default \(10^{- 8}\) to \(10^{- 4}\) in BatchNorm and Spectral Norm to mollify low- precision numerical issues. We preprocess data by cropping along the long edge and rescaling to a given resolution with area resampling. ## C.1 BATCHNORM STATISTICS AND SAMPLING The default behavior with batch normalized classifier networks is to use a running average of the activation moments at test time. Previous works (Radford et al., 2016) have instead used batch statistics when sampling images. While this is not technically an invalid way to sample, it means that results are dependent on the test batch size (and how many devices it is split across), and further complicates reproducibility. We find that this detail is extremely important, with changes in test batch size producing drastic changes in performance. This is further exacerbated when one uses exponential moving averages of \(\mathbf{G}\) 's weights for sampling, as the BatchNorm running averages are computed with non- averaged weights and are poor estimates of the activation statistics for the averaged weights. To counteract both these issues, we employ "standing statistics," where we compute activation statistics at sampling time by running the \(\mathbf{G}\) through multiple forward passes (typically 100) each with different batches of random noise, and storing means and variances aggregated across all forward passes. Analogous to using running statistics, this results in \(\mathbf{G}\) 's outputs becoming invariant to batch size and the number of devices, even when producing a single sample. ## C.2 CIFAR-10 We run our networks on CIFAR- 10 (Krizhevsky & Hinton, 2009) using the settings from Table 1, row 8, and achieve an IS of 9.22 and an FID of 14.73 without truncation. ## C.3 INCEPTION SCORES OF IMAGENET IMAGES We compute the IS for both the training and validation sets of ImageNet. At \(128\times 128\) the training data has an IS of 233, and the validation data has an IS of 166. 
At \(256\times 256\) the training data has an IS of 377, and the validation data has an IS of 234. At \(512\times 512\) the training data has an IS of 348, and the validation data has an IS of 241. The discrepancy between training and validation scores is due to the Inception classifier having been trained on the training data, resulting in high-confidence outputs that are preferred by the Inception Score.

## APPENDIX D IS VS. FID TRUNCATION CURVES

![](images/23_0.jpg)
<center>Figure 17: IS vs. FID at \(128 \times 128\). Scores are averaged across three random seeds. </center>

![](images/23_1.jpg)
<center>Figure 18: IS vs. FID at 256 and 512 pixels. Scores are averaged across three random seeds for 256. </center>

![](images/24_0.jpg)
<center>Figure 19: JFT-300M IS vs. FID at \(256 \times 256\). We show truncation values from \(\sigma = 0\) to \(\sigma = 2\) (top) and from \(\sigma = 0.5\) to \(\sigma = 1.5\) (bottom). Each curve corresponds to a row in Table 3. The curve labeled baseline corresponds to the first row (with orthogonal regularization and other techniques disabled), while the rest correspond to rows 2-4 – the same architecture at different capacities (Ch.). </center>

## APPENDIX E CHOOSING LATENT SPACES

While most previous work has employed \(\mathcal{N}(0,I)\) or \(\mathcal{U}[-1,1]\) as the prior for \(z\) (the noise input to \(\mathbf{G}\)), we are free to choose any latent distribution from which we can sample. We explore the choice of latents by considering an array of possible designs, described below. For each latent, we provide the intuition behind its design and briefly describe how it performs when used as a drop-in replacement for \(z\sim \mathcal{N}(0,I)\) in an SA-GAN baseline. As the Truncation Trick proved more beneficial than switching to any of these latents, we do not perform a full ablation study, and employ \(z\sim \mathcal{N}(0,I)\) for our main results to take full advantage of truncation. The two latents which we find to work best without truncation are Bernoulli \(\{0,1\}\) and Censored Normal \(\max(\mathcal{N}(0,I),0)\), both of which improve speed of training and lightly improve final performance, but are less amenable to truncation.

We also ablate the choice of latent space dimensionality (which by default is \(z\in \mathbb{R}^{128}\)), finding that we are able to successfully train with latent dimensions as low as \(z\in \mathbb{R}^{8}\), and that with \(z\in \mathbb{R}^{32}\) we see a minimal drop in performance. While this is substantially smaller than many previous works, direct comparison to single-class networks (such as those in Karras et al. (2018), which employ a \(z\in \mathbb{R}^{512}\) latent space on a highly constrained dataset with 30,000 images) is improper, as our networks have additional class information provided as input.

## LATENTS

\(\mathcal{N}(0,I)\). A standard choice of the latent space which we use in the main experiments.

\(\mathcal{U}[-1,1]\). Another standard choice; we find that it performs similarly to \(\mathcal{N}(0,I)\).

Bernoulli \(\{0,1\}\). A discrete latent might reflect our prior that underlying factors of variation in natural images are not continuous, but discrete (one feature is present, another is not). This latent outperforms \(\mathcal{N}(0,I)\) (in terms of IS) by \(8\%\) and requires \(60\%\) fewer iterations.

\(\max(\mathcal{N}(0,I),0)\), also called Censored Normal.
This latent is designed to introduce sparsity in the latent space (reflecting our prior that certain latent features are sometimes present and sometimes not), but also allow those latents to vary continuously, expressing different degrees of intensity for latents which are active. This latent outperforms \(\mathcal{N}(0,I)\) (in terms of IS) by \(15 - 20\%\) and tends to require fewer iterations. Bernoulli \(\{- 1,1\}\) . This latent is designed to be discrete, but not sparse (as the network can learn to activate in response to negative inputs). This latent performs near- identically to \(\mathcal{N}(0,I)\) . Independent Categorical in \(\{- 1,0,1\}\) , with equal probability. This distribution is chosen to be discrete and have sparsity, but also to allow latents to take on both positive and negative values. This latent performs near- identically to \(\mathcal{N}(0,I)\) . \(\mathcal{N}(0,I)\) multiplied by Bernoulli \(\{0,1\}\) . This distribution is chosen to have continuous latent factors which are also sparse (with a peak at zero), similar to Censored Normal but not constrained to be positive. This latent performs near- identically to \(\mathcal{N}(0,I)\) . Concatenating \(\mathcal{N}(0,I)\) and Bernoulli \(\{0,1\}\) , each taking half of the latent dimensions. This is inspired by Chen et al. (2016), and is chosen to allow some factors of variation to be discrete, while others are continuous. This latent outperforms \(\mathcal{N}(0,I)\) by around \(5\%\) . Variance annealing: we sample from \(\mathcal{N}(0,\sigma I)\) , where \(\sigma\) is allowed to vary over training. We compared a variety of piecewise schedules and found that starting with \(\sigma = 2\) and annealing towards \(\sigma = 1\) over the course of training mildly improved performance. The space of possible variance schedules is large, and we did not explore it in depth - we suspect that a more principled or better- tuned schedule could more strongly impact performance. Per- sample variable variance: \(\mathcal{N}(0,\sigma_{i}I)\) , where \(\sigma_{i}\sim \mathcal{U}[\sigma_{l},\sigma_{h}]\) independently for each sample \(i\) in a batch, and \((\sigma_{l},\sigma_{h})\) are hyperparameters. This distribution was chosen to try and improve amenability to the Truncation Trick by feeding the network noise samples with non- constant variance. This did not appear to affect performance, but we did not explore it in depth. One might also consider scheduling \((\sigma_{l},\sigma_{h})\) , similar to variance annealing. <--- Page Split ---> ![](images/26_0.jpg) <center>Figure 20: Training statistics for a typical model without special modifications. Collapse occurs after 200000 iterations. </center> <--- Page Split ---> ![](images/27_0.jpg) <center>Figure 21: G training statistics with \(\sigma_{0}\) in G regularized towards 1. Collapse occurs after 125000 iterations. </center> ![](images/27_1.jpg) <center>Figure 22: D training statistics with \(\sigma_{0}\) in G regularized towards 1. Collapse occurs after 125000 iterations. </center> <--- Page Split ---> ![](images/28_0.jpg) <center>Figure 23: G training statistics with an R1 Gradient Penalty of strength 10 on D. This model does not collapse, but only reaches a maximum IS of 55. </center> ![](images/28_1.jpg) <center>Figure 24: D training statistics with an R1 Gradient Penalty of strength 10 on D. This model does not collapse, but only reaches a maximum IS of 55. 
</center> <--- Page Split ---> ![](images/29_0.jpg) <center>Figure 25: G training statistics with Dropout (keep probability 0.8) applied to the last feature layer of D. This model does not collapse, but only reaches a maximum IS of 70. </center> ![](images/29_1.jpg) <center>Figure 26: D training statistics with Dropout (keep probability 0.8) applied to the last feature layer of D. This model does not collapse, but only reaches a maximum IS of 70. </center> <--- Page Split ---> ![](images/30_0.jpg) <center>Figure 27: Additional training statistics for a typical model without special modifications. Collapse occurs after 200000 iterations. </center> ![](images/30_1.jpg) <center>Figure 28: Additional training statistics with an R1 Gradient Penalty of strength 10 on D. This model does not collapse, but only reaches a maximum IS of 55. </center> <--- Page Split ---> ## APPENDIX G ADDITIONAL DISCUSSION: STABILITY AND COLLAPSE In this section, we present and discuss additional investigations into the stability of our models, expanding upon the discussion in Section 4. ## G.1 INTERVENING BEFORE COLLAPSE The symptoms of collapse are sharp and sudden, with sample quality dropping from its peak to its lowest value over the course of a few hundred iterations. We can detect this collapse when the singular values in \(\mathbf{G}\) explode, but while the (unnormalized) singular values grow throughout training, there is no consistent threshold at which collapse occurs. This raises the question of whether it is possible to prevent or delay collapse by taking a model checkpoint several thousand iterations before collapse, and continuing training with some hyperparameters modified (e.g., the learning rate). We conducted a range of intervention experiments wherein we took checkpoints of a collapsed model ten or twenty thousand iterations before collapse, changed some aspect of the training setup, then observed whether collapse occurred, when it occurred relative to the original collapse, and the final performance attained at collapse. We found that increasing the learning rates (relative to their initial values) in either \(\mathbf{G}\) or \(\mathbf{D}\) , or both \(\mathbf{G}\) and \(\mathbf{D}\) , led to immediate collapse. This occurred even when doubling the learning rates from \(2 \cdot 10^{- 4}\) in \(\mathbf{D}\) and \(5 \cdot 10^{- 5}\) in \(\mathbf{G}\) , to \(4 \cdot 10^{- 4}\) in \(\mathbf{D}\) and \(1 \cdot 10^{- 4}\) in \(\mathbf{G}\) , a setting which is not normally unstable when used as the initial learning rates. We also tried changing the momentum terms (Adam's \(\beta_{1}\) and \(\beta_{2}\) ), or resetting the momentum vectors to zero, but this tended to either make no difference or, when increasing the momentum, cause immediate collapse. We found that decreasing the learning rate in \(\mathbf{G}\) , but keeping the learning rate in \(\mathbf{D}\) unchanged could delay collapse (in some cases by over one hundred thousand iterations), but also crippled training—once the learning rate in \(\mathbf{G}\) was decayed, performance either stayed constant or slowly decayed. Conversely, reducing the learning rate in \(\mathbf{D}\) while keeping \(\mathbf{G}\) 's learning rate led to immediate collapse. We hypothesize that this is because of the need for \(\mathbf{D}\) to remain optimal throughout training—if its learning rate is reduced, it can no longer "keep up" with \(\mathbf{G}\) , and training collapses. 
With this in mind, we also tried increasing the number of \(\mathbf{D}\) steps per \(\mathbf{G}\) step, but this either had no effect, or delayed collapse at the cost of crippling training (similar to decaying \(\mathbf{G}\) 's learning rate). To further illuminate these dynamics, we construct two additional intervention experiments, one where we freeze \(\mathbf{G}\) before collapse (by ceasing all parameter updates) and observe whether \(\mathbf{D}\) remains stable, and the reverse, where we freeze \(\mathbf{D}\) before collapse and observe whether \(\mathbf{G}\) remains stable. We find that when \(\mathbf{G}\) is frozen, \(\mathbf{D}\) remains stable, and slowly reduces both components of its loss towards zero. However, when \(\mathbf{D}\) is frozen, \(\mathbf{G}\) immediately and dramatically collapses, maxing out \(\mathbf{D}\) 's loss to values upwards of 300, compared to the normal range of 0 to 3. This leads to two conclusions: first, as has been noted in previous works (Miyato et al., 2018; Gulrajani et al., 2017; Zhang et al., 2018), \(\mathbf{D}\) must remain optimal with respect to \(\mathbf{G}\) both for stability and to provide useful gradient information. The consequence of \(\mathbf{G}\) being allowed to win the game is a complete breakdown of the training process, regardless of \(\mathbf{G}\) 's conditioning or optimization settings. Second, favoring \(\mathbf{D}\) over \(\mathbf{G}\) (either by training it with a larger learning rate, or for more steps) is insufficient to ensure stability even if \(\mathbf{D}\) is well- conditioned. This suggests either that in practice, an optimal \(\mathbf{D}\) is necessary but insufficient for training stability, or that some aspect of the system results in \(\mathbf{D}\) not being trained towards optimality. With the latter possibility in mind, we take a closer look at the noise in \(\mathbf{D}\) 's spectra in the following section. <--- Page Split ---> ![](images/32_0.jpg) <center>Figure 29: A closeup of \(\mathbf{D}\) 's spectra at a noise spike. </center> If some element of \(\mathbf{D}\) 's training process results in undesirable dynamics, it follows that the behavior of \(\mathbf{D}\) 's spectra may hold clues as to what that element is. The top three singular values of \(\mathbf{D}\) differ from \(\mathbf{G}\) 's in that they have a large noise component, tend to grow throughout training but only show a small response to collapse, and the ratio of the first two singular values tends to be centered around one, suggesting that the spectra of \(\mathbf{D}\) have a slow decay. When viewed up close (Figure 29), the noise spikes resemble an impulse response: at each spike, the spectra jump upwards, then slowly decrease, with some oscillation. One possible explanation is that this behavior is a consequence of \(\mathbf{D}\) memorizing the training data, as suggested by experiments in Section 4.2. As it approaches perfect memorization, it receives less and less signal from real data, as both the original GAN loss and the hinge loss provide zero gradients when \(\mathbf{D}\) outputs a confident and correct prediction for a given example. If the gradient signal from real data attenuates to zero, this can result in \(\mathbf{D}\) eventually becoming biased due to exclusively received gradients that encourage its outputs to be negative. 
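To make this attenuation mechanism concrete, below is a minimal NumPy sketch of the hinge discriminator loss with an adjustable margin (the margin parameter anticipates the variant discussed next; the scores are illustrative): confidently correct examples fall outside the margin and contribute zero loss, and therefore zero gradient.

```python
import numpy as np

def d_hinge_loss(real_scores, fake_scores, margin=1.0):
    """Discriminator hinge loss. Examples with real_scores > margin or
    fake_scores < -margin lie outside the margin and contribute
    nothing to the loss (and hence nothing to the gradient)."""
    loss_real = np.maximum(0.0, margin - real_scores)
    loss_fake = np.maximum(0.0, margin + fake_scores)
    return loss_real.mean() + loss_fake.mean()

real = np.array([2.5, 3.1, 0.4])   # first two are past the margin
fake = np.array([-2.0, -0.2])
print(d_hinge_loss(real, fake))    # only the 0.4 and -0.2 scores contribute
```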
With the latter possibility in mind, we take a closer look at the noise in \(\mathbf{D}\)'s spectra in the following section.

<--- Page Split --->

![](images/32_0.jpg)

<center>Figure 29: A closeup of \(\mathbf{D}\)'s spectra at a noise spike. </center>

If some element of \(\mathbf{D}\)'s training process results in undesirable dynamics, it follows that the behavior of \(\mathbf{D}\)'s spectra may hold clues as to what that element is. The top three singular values of \(\mathbf{D}\) differ from \(\mathbf{G}\)'s in that they have a large noise component, tend to grow throughout training but show only a small response to collapse, and the ratio of the first two singular values tends to be centered around one, suggesting that the spectra of \(\mathbf{D}\) have a slow decay. When viewed up close (Figure 29), the noise spikes resemble an impulse response: at each spike, the spectra jump upwards, then slowly decrease, with some oscillation.

One possible explanation is that this behavior is a consequence of \(\mathbf{D}\) memorizing the training data, as suggested by experiments in Section 4.2. As it approaches perfect memorization, it receives less and less signal from real data, as both the original GAN loss and the hinge loss provide zero gradients when \(\mathbf{D}\) outputs a confident and correct prediction for a given example. If the gradient signal from real data attenuates to zero, \(\mathbf{D}\) can eventually become biased because it exclusively receives gradients that encourage its outputs to be negative. If this bias passes a certain threshold, \(\mathbf{D}\) will eventually misclassify a large number of real examples and receive a large gradient encouraging positive outputs, resulting in the observed impulse responses.

This argument suggests several fixes. First, one might consider an unbounded loss (such as the Wasserstein loss (Arjovsky et al., 2017)) which would not suffer this gradient attenuation. We found that even with gradient penalties and brief re-tuning of optimizer hyperparameters, our models did not stably train for more than a few thousand iterations with this loss. We instead explored changing the margin of the hinge loss as a partial compromise: for a given model and minibatch of data, increasing the margin will result in more examples falling within the margin, and thus contributing to the loss.\(^3\) Training with a smaller margin (by a factor of 2) measurably reduces performance, but training with a larger margin (by up to a factor of 3) does not prevent collapse or reduce the noise in \(\mathbf{D}\)'s spectra. Increasing the margin beyond 3 results in unstable training similar to using the Wasserstein loss. Finally, the memorization argument might suggest that using a smaller \(\mathbf{D}\) or using dropout in \(\mathbf{D}\) would improve training by reducing its capacity to memorize, but in practice this degrades training.

<--- Page Split --->

## APPENDIX H NEGATIVE RESULTS

We explored a range of novel and existing techniques which ended up degrading or otherwise not affecting performance in our setting. We report them here; our evaluations for this section are not as thorough as those for the main architectural choices. Our intention in reporting these results is to save time for future work, and to give a more complete picture of our attempts to improve performance or stability. We note, however, that these results must be understood to be specific to the particular setup we used. A pitfall of reporting negative results is that one might report that a particular technique doesn't work, when the reality is that this technique did not have the desired effect when applied in a particular way to a particular problem. Drawing overly general conclusions might close off potentially fruitful avenues of research.

- We found that doubling the depth (by inserting an additional Residual block after every up- or down-sampling block) hampered performance.
- We experimented with sharing class embeddings between both G and D (as opposed to just within G). This is accomplished by replacing D's class embedding with a projection from G's embeddings, as is done in G's BatchNorm layers. In our initial experiments this seemed to help and accelerate training, but we found this trick scaled poorly and was sensitive to optimization hyperparameters, particularly the choice of number of D steps per G step.
- We tried replacing BatchNorm in G with WeightNorm (Salimans & Kingma, 2016), but this crippled training. We also tried removing BatchNorm and only having Spectral Normalization, but this also crippled training.
- We tried adding BatchNorm to D (both class-conditional and unconditional) in addition to Spectral Normalization, but this crippled training.
- We tried varying the choice of location of the attention block in G and D (and inserting multiple attention blocks at different resolutions) but found that at \(128 \times 128\) there was no noticeable benefit to doing so, and compute and memory costs increased substantially.
We found a benefit to moving the attention block up one stage when moving to \(256 \times 256\), which is in line with our expectations given the increased resolution.
- We tried using filter sizes of 5 or 7 instead of 3 in either G or D or both. We found that having a filter size of 5 in G only provided a small improvement over the baseline but came at an unjustifiable compute cost. All other settings degraded performance.
- We tried varying the dilation for convolutional filters in both G and D at \(128 \times 128\), but found that even a small amount of dilation in either network degraded performance.
- We tried bilinear upsampling in G in place of nearest-neighbors upsampling, but this degraded performance.
- In some of our models, we observed class-conditional mode collapse, where the model would only output one or two samples for a subset of classes but was still able to generate samples for all other classes. We noticed that the collapsed classes had embeddings which had become very large relative to the other embeddings, and attempted to ameliorate this issue by applying weight decay to the shared embedding only. We found that small amounts of weight decay \((10^{-6})\) instead degraded performance, and that only even smaller values \((10^{-8})\) did not degrade performance, but these values were also too small to prevent the class vectors from exploding. Higher-resolution models appear to be more resilient to this problem, and none of our final models appear to suffer from this type of collapse.
- We experimented with using MLPs instead of linear projections from G's class embeddings to its BatchNorm gains and biases, but did not find any benefit to doing so. We also experimented with Spectrally Normalizing these MLPs, and with providing these (and the linear projections) with a bias at their output, but did not notice any benefit.
- We tried gradient norm clipping (both the global variant typically used in recurrent networks, and a local version where the clipping value is determined on a per-parameter basis) but found this did not alleviate instability.

<--- Page Split --->

## APPENDIX I HYPERPARAMETERS

We performed various hyperparameter sweeps in this work:

We swept the Cartesian product of the learning rates for each network through \([10^{-5}, 5\cdot 10^{-5}, 10^{-4}, 2\cdot 10^{-4}, 4\cdot 10^{-4}, 8\cdot 10^{-4}, 10^{-3}]\), and initially found that the SA-GAN settings (\(\mathbf{G}\)'s learning rate \(10^{-4}\), \(\mathbf{D}\)'s learning rate \(4\cdot 10^{-4}\)) were optimal at lower batch sizes; we did not repeat this sweep at higher batch sizes but did try halving and doubling the learning rate, arriving at the halved settings used for our experiments.

We swept the R1 gradient penalty strength through \([10^{-3}, 10^{-2}, 10^{-1}, 0.5, 1, 2, 3, 5, 10]\). We find that the strength of the penalty correlates negatively with performance, but that settings above 0.5 impart training stability.

We swept the keep probabilities for DropOut in the final layer of \(\mathbf{D}\) through \([0.5, 0.6, 0.7, 0.8, 0.9, 0.95]\). We find that DropOut has a similar stabilizing effect to R1 but also degrades performance.

We swept \(\mathbf{D}\)'s Adam \(\beta_{1}\) parameter through \([0.1, 0.2, 0.3, 0.4, 0.5]\) and found it to have a light regularization effect similar to DropOut, but not to significantly improve results. Higher \(\beta_{1}\) terms in either network crippled training.
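To make the scale of the learning-rate sweep above concrete, a minimal sketch of how the Cartesian product can be enumerated; train_and_score is a hypothetical stand-in for a full training run:

```python
from itertools import product

lrs = [1e-5, 5e-5, 1e-4, 2e-4, 4e-4, 8e-4, 1e-3]

results = {}
for lr_G, lr_D in product(lrs, lrs):  # 49 (lr_G, lr_D) combinations
    # train_and_score: hypothetical stand-in for a full training run,
    # returning e.g. the best IS reached before collapse.
    results[(lr_G, lr_D)] = train_and_score(lr_G=lr_G, lr_D=lr_D)

best_lr_G, best_lr_D = max(results, key=results.get)
```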
We swept the strength of the modified Orthogonal Regularization penalty in \(\mathbf{G}\) through \([10^{-5}, 5\cdot 10^{-5}, 10^{-4}, 5\cdot 10^{-4}, 10^{-3}, 10^{-2}]\), and selected \(10^{-4}\).

<--- Page Split --->
## ABSTRACT Despite recent progress in generative image modeling, successfully generating high- resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade- off between sample fidelity and variety by reducing the variance of the Generator's input. Our modifications lead to models which set the new state of the art in class- conditional image synthesis. When trained on ImageNet at \(128 \times 128\) resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.5 and Fréchet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.65. ## 1 INTRODUCTION ![](images/0_0.jpg) <center>Figure 1: Class-conditional samples generated by our model. </center> The state of generative image modeling has advanced dramatically in recent years, with Generative Adversarial Networks (GANs, Goodfellow et al. (2014)) at the forefront of efforts to generate high- fidelity, diverse images with models learned directly from data. GAN training is dynamic, and sensitive to nearly every aspect of its setup (from optimization parameters to model architecture), but a torrent of research has yielded empirical and theoretical insights enabling stable training in a variety of settings. Despite this progress, the current state of the art in conditional ImageNet modeling (Zhang et al., 2018) achieves an Inception Score (Salimans et al., 2016) of 52.5, compared to 233 for real data. In this work, we set out to close the gap in fidelity and variety between images generated by GANs and real- world images from the ImageNet dataset. We make the following three contributions towards this goal: - We demonstrate that GANs benefit dramatically from scaling, and train models with two to four times as many parameters and eight times the batch size compared to prior art. We introduce two simple, general architectural changes that improve scalability, and modify a regularization scheme to improve conditioning, demonstrably boosting performance. <--- Page Split ---> - As a side effect of our modifications, our models become amenable to the "truncation trick," a simple sampling technique that allows explicit, fine-grained control of the trade-off between sample variety and fidelity. - We discover instabilities specific to large scale GANs, and characterize them empirically. Leveraging insights from this analysis, we demonstrate that a combination of novel and existing techniques can reduce these instabilities, but complete training stability can only be achieved at a dramatic cost to performance. Our modifications substantially improve class- conditional GANs. When trained on ImageNet at \(128 \times 128\) resolution, our models (BigGANs) improve the state- of- the- art Inception Score (IS) and Fréchet Inception Distance (FID) from 52.52 and 18.65 to 166.5 and 7.4 respectively. We also successfully train BigGANs on ImageNet at \(256 \times 256\) and \(512 \times 512\) resolution, and achieve IS and FID of 232.5 and 8.1 at \(256 \times 256\) and IS and FID of 241.5 and 11.5 at \(512 \times 512\) . Finally, we train our models on an even larger dataset – JFT- 300M – and demonstrate that our design choices transfer well from ImageNet. 
Code and weights for our pretrained generators are publicly available \(^{1}\) . ## 2 BACKGROUND A Generative Adversarial Network (GAN) involves Generator \((\mathbf{G})\) and Discriminator \((\mathbf{D})\) networks whose purpose, respectively, is to map random noise to samples and discriminate real and generated samples. Formally, the GAN objective, in its original form (Goodfellow et al., 2014) involves finding a Nash equilibrium to the following two player min- max problem: \[\min_{G}\max_{D}\mathbb{E}_{x\sim q_{\mathrm{data}}(\mathbf{x})}[\log D(\mathbf{x})] + \mathbb{E}_{\mathbf{z}\sim p(\mathbf{z})}[\log (1 - D(G(\mathbf{z})))], \quad (1)\] where \(\mathbf{z} \in \mathbb{R}^{d_{z}}\) is a latent variable drawn from distribution \(p(\mathbf{z})\) such as \(\mathcal{N}(0, I)\) or \(\mathcal{U}[- 1, 1]\) . When applied to images, \(\mathbf{G}\) and \(\mathbf{D}\) are usually convolutional neural networks (Radford et al., 2016). Without auxiliary stabilization techniques, this training procedure is notoriously brittle, requiring finely- tuned hyperparameters and architectural choices to work at all. Much recent research has accordingly focused on modifications to the vanilla GAN procedure to impart stability, drawing on a growing body of empirical and theoretical insights (Nowozin et al., 2016; Sonderby et al., 2017; Fedus et al., 2018). One line of work is focused on changing the objective function (Arjovsky et al., 2017; Mao et al., 2016; Lim & Ye, 2017; Bellemare et al., 2017; Salimans et al., 2018) to encourage convergence. Another line is focused on constraining \(\mathbf{D}\) through gradient penalties (Gulrajani et al., 2017; Kodali et al., 2017; Mescheder et al., 2018) or normalization (Miyato et al., 2018), both to counteract the use of unbounded loss functions and ensure \(\mathbf{D}\) provides gradients everywhere to \(\mathbf{G}\) . Of particular relevance to our work is Spectral Normalization (Miyato et al., 2018), which enforces Lipschitz continuity on \(\mathbf{D}\) by normalizing its parameters with running estimates of their first singular values, inducing backwards dynamics that adaptively regularize the top singular direction. Relatedly Odena et al. (2018) analyze the condition number of the Jacobian of \(\mathbf{G}\) and find that performance is dependent on \(\mathbf{G}\) 's conditioning. Zhang et al. (2018) find that employing Spectral Normalization in \(\mathbf{G}\) improves stability, allowing for fewer \(\mathbf{D}\) steps per iteration. We extend on these analyses to gain further insight into the pathology of GAN training. Other works focus on the choice of architecture, such as SA- GAN (Zhang et al., 2018) which adds the self- attention block from (Wang et al., 2018) to improve the ability of both \(\mathbf{G}\) and \(\mathbf{D}\) to model global structure. ProGAN (Karras et al., 2018) trains high- resolution GANs in the single- class setting by training a single model across a sequence of increasing resolutions. In conditional GANs (Mirza & Osindero, 2014) class information can be fed into the model in various ways. In (Odena et al., 2017) it is provided to \(\mathbf{G}\) by concatenating a 1- hot class vector to the noise vector, and the objective is modified to encourage conditional samples to maximize the corresponding class probability predicted by an auxiliary classifier. de Vries et al. 
(2017) and

<--- Page Split --->

Table 1: Fréchet Inception Distance (FID, lower is better) and Inception Score (IS, higher is better) for ablations of our proposed modifications. Batch is batch size, Param is the total number of parameters, \(Ch\) is the channel multiplier representing the number of units in each layer, Shared is using shared embeddings, Skip-\(z\) is using skip connections from the latent to multiple layers, Ortho. is Orthogonal Regularization, and Itr indicates whether the setting is stable to \(10^{6}\) iterations, or collapses at the given iteration. Other than rows 1-4, results are computed across 8 random initializations.

<table><tr><td>Batch</td><td>Ch.</td><td>Param (M)</td><td>Shared</td><td>Skip-z</td><td>Ortho.</td><td>Itr × 10³</td><td>FID</td><td>IS</td></tr><tr><td>256</td><td>64</td><td>81.5</td><td colspan="3">SA-GAN Baseline</td><td>1000</td><td>18.65</td><td>52.52</td></tr><tr><td>512</td><td>64</td><td>81.5</td><td>X</td><td>X</td><td>X</td><td>1000</td><td>15.30</td><td>58.77(±1.18)</td></tr><tr><td>1024</td><td>64</td><td>81.5</td><td>X</td><td>X</td><td>X</td><td>1000</td><td>14.88</td><td>63.03(±1.42)</td></tr><tr><td>2048</td><td>64</td><td>81.5</td><td>X</td><td>X</td><td>X</td><td>732</td><td>12.39</td><td>76.85(±3.83)</td></tr><tr><td>2048</td><td>96</td><td>173.5</td><td>X</td><td>X</td><td>X</td><td>295(±18)</td><td>9.54(±0.62)</td><td>92.98(±4.27)</td></tr><tr><td>2048</td><td>96</td><td>160.6</td><td>✓</td><td>X</td><td>X</td><td>185(±11)</td><td>9.18(±0.13)</td><td>94.94(±1.32)</td></tr><tr><td>2048</td><td>96</td><td>158.3</td><td>✓</td><td>✓</td><td>X</td><td>152(±7)</td><td>8.73(±0.45)</td><td>98.76(±2.84)</td></tr><tr><td>2048</td><td>96</td><td>158.3</td><td>✓</td><td>✓</td><td>✓</td><td>165(±13)</td><td>8.51(±0.32)</td><td>99.31(±2.10)</td></tr><tr><td>2048</td><td>64</td><td>71.3</td><td>✓</td><td>✓</td><td>✓</td><td>371(±7)</td><td>10.48(±0.10)</td><td>86.90(±0.61)</td></tr></table>

Dumoulin et al. (2017) modify the way class conditioning is passed to \(\mathbf{G}\) by supplying it with class-conditional gains and biases in BatchNorm (Ioffe & Szegedy, 2015) layers. In Miyato & Koyama (2018), \(\mathbf{D}\) is conditioned by using the cosine similarity between its features and a set of learned class embeddings as additional evidence for distinguishing real and generated samples, effectively encouraging generation of samples whose features match a learned class prototype.

Objectively evaluating implicit generative models is difficult (Theis et al., 2015). A variety of works have proposed heuristics for measuring the sample quality of models without tractable likelihoods (Salimans et al., 2016; Heusel et al., 2017; Binkowski et al., 2018; Wu et al., 2017). Of these, the Inception Score (IS, Salimans et al. (2016)) and Fréchet Inception Distance (FID, Heusel et al. (2017)) have become popular despite their notable flaws (Barratt & Sharma, 2018). We employ them as approximate measures of sample quality, and to enable comparison against previous work.

## 3 SCALING UP GANs

In this section, we explore methods for scaling up GAN training to reap the performance benefits of larger models and larger batches. As a baseline, we employ the SA-GAN architecture of Zhang et al. (2018), which uses the hinge loss (Lim & Ye, 2017; Tran et al., 2017) GAN objective.
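For reference, the hinge objective can be written in a few lines. A minimal PyTorch sketch, where d_real and d_fake are \(\mathbf{D}\)'s scalar outputs on real and generated batches; the margin argument generalizes the standard setting of 1 (Appendix G discusses varying it):

```python
import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake, margin=1.0):
    # Real scores are pushed above +margin and fake scores below -margin;
    # samples already beyond the margin contribute zero gradient, which is
    # the attenuation effect discussed in Appendix G.
    return F.relu(margin - d_real).mean() + F.relu(margin + d_fake).mean()

def g_hinge_loss(d_fake):
    # G simply maximizes D's score on generated samples.
    return -d_fake.mean()
```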
We provide class information to \(\mathbf{G}\) with class-conditional BatchNorm (Dumoulin et al., 2017; de Vries et al., 2017) and to \(\mathbf{D}\) with projection (Miyato & Koyama, 2018). The optimization settings follow Zhang et al. (2018) (notably employing Spectral Norm in \(\mathbf{G}\)) with the modification that we halve the learning rates and take two \(\mathbf{D}\) steps per \(\mathbf{G}\) step. For evaluation, we employ moving averages of \(\mathbf{G}\)'s weights following Karras et al. (2018); Mescheder et al. (2018); Yazıcı et al. (2018), with a decay of 0.9999. We use Orthogonal Initialization (Saxe et al., 2014), whereas previous works used \(\mathcal{N}(0,0.02I)\) (Radford et al., 2016) or Xavier initialization (Glorot & Bengio, 2010). Each model is trained on 128 to 512 cores of a Google TPUv3 Pod (Google, 2018), and computes BatchNorm statistics in \(\mathbf{G}\) across all devices, rather than per-device as is typical. We find progressive growing (Karras et al., 2018) unnecessary even for our \(512 \times 512\) models. Additional details are in Appendix C.

We begin by increasing the batch size for the baseline model, and immediately find tremendous benefits in doing so. Rows 1-4 of Table 1 show that simply increasing the batch size by a factor of 8 improves the state-of-the-art IS by 46%. We conjecture that this is a result of each batch covering more modes, providing better gradients for both networks. One notable side effect of this scaling is that our models reach better final performance in fewer iterations, but become unstable and undergo complete training collapse. We discuss the causes and ramifications of this in Section 4. For these experiments, we report scores from checkpoints saved just before collapse.

We then increase the width (number of channels) in each layer by 50%, approximately doubling the number of parameters in both models. This leads to a further IS improvement of 21%, which we posit is due to the increased capacity of the model relative to the complexity of the dataset. Doubling

<--- Page Split --->

![](images/3_0.jpg)

<center>Figure 2: (a) The effects of increasing truncation. From left to right, the threshold is set to 2, 1, 0.5, 0.04. (b) Saturation artifacts from applying truncation to a poorly conditioned model. </center>

the depth did not initially lead to improvement – we addressed this later in the BigGAN-deep model, which uses a different residual block structure.

We note that class embeddings \(c\) used for the conditional BatchNorm layers in \(\mathbf{G}\) contain a large number of weights. Instead of having a separate layer for each embedding (Miyato et al., 2018; Zhang et al., 2018), we opt to use a shared embedding, which is linearly projected to each layer's gains and biases (Perez et al., 2018). This reduces computation and memory costs, and improves training speed (in number of iterations required to reach a given performance) by \(37\%\). Next, we add direct skip connections (skip-\(z\)) from the noise vector \(z\) to multiple layers of \(\mathbf{G}\) rather than just the initial layer. The intuition behind this design is to allow \(\mathbf{G}\) to use the latent space to directly influence features at different resolutions and levels of hierarchy. In BigGAN, this is accomplished by splitting \(z\) into one chunk per resolution, and concatenating each chunk to the conditional vector \(c\) which gets projected to the BatchNorm gains and biases.
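A minimal sketch of this conditioning path (shared class embedding, one 20-D chunk of \(z\) per stage, and linear projections to the BatchNorm gain and bias). The dimensions are illustrative and the module is a simplification, not the exact BigGAN layer:

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """BatchNorm whose gain and bias are linear projections of a conditioning
    vector (here: one chunk of z concatenated with a shared class embedding)."""
    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.to_gain = nn.Linear(cond_dim, num_features)
        self.to_bias = nn.Linear(cond_dim, num_features)

    def forward(self, h, cond):
        # Gains are centered at 1 and biases at 0 (see Appendix B).
        gain = (1 + self.to_gain(cond)).unsqueeze(-1).unsqueeze(-1)
        bias = self.to_bias(cond).unsqueeze(-1).unsqueeze(-1)
        return self.bn(h) * gain + bias

# Shared embedding plus skip-z: split z into 20-D chunks, one per stage.
n_classes, emb_dim, z_dim = 1000, 128, 120   # 120-D z as in the 128x128 model
embed = nn.Embedding(n_classes, emb_dim)      # one embedding shared by all layers
z = torch.randn(8, z_dim)
y = torch.randint(0, n_classes, (8,))
conds = [torch.cat([chunk, embed(y)], dim=1)  # conditioning vector per stage
         for chunk in torch.split(z, 20, dim=1)]
cbn = ConditionalBatchNorm2d(num_features=256, cond_dim=20 + emb_dim)
h = cbn(torch.randn(8, 256, 32, 32), conds[0])
```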
In BigGAN-deep, we use an even simpler design, concatenating the entire \(z\) with the conditional vector without splitting it into chunks. Previous works (Goodfellow et al., 2014; Denton et al., 2015) have considered variants of this concept; our implementation is a minor modification of this design. Skip-\(z\) provides a modest performance improvement of around \(4\%\), and improves training speed by a further \(18\%\).

### 3.1 TRADING OFF VARIETY AND FIDELITY WITH THE TRUNCATION TRICK

Unlike models which need to backpropagate through their latents, GANs can employ an arbitrary prior \(p(z)\), yet the vast majority of previous works have chosen to draw \(z\) from either \(\mathcal{N}(0, I)\) or \(\mathcal{U}[- 1, 1]\). We question the optimality of this choice and explore alternatives in Appendix E.

Remarkably, our best results come from using a different latent distribution for sampling than was used in training. Taking a model trained with \(z \sim \mathcal{N}(0, I)\) and sampling \(z\) from a truncated normal (where values which fall outside a range are resampled to fall inside that range) immediately provides a boost to IS and FID. We call this the Truncation Trick: truncating a \(z\) vector by resampling the values with magnitude above a chosen threshold leads to improvement in individual sample quality at the cost of reduction in overall sample variety. Figure 2(a) demonstrates this: as the threshold is reduced, and elements of \(z\) are truncated towards zero (the mode of the latent distribution), individual samples approach the mode of \(\mathbf{G}\)'s output distribution. Related observations about this trade-off were made in (Marchesi, 2016; Pieters & Wiering, 2014).

This technique allows fine-grained, post-hoc selection of the trade-off between sample quality and variety for a given \(\mathbf{G}\). Notably, we can compute FID and IS for a range of thresholds, obtaining the variety-fidelity curve reminiscent of the precision-recall curve (Figure 17). As IS does not penalize lack of variety in class-conditional models, reducing the truncation threshold leads to a direct increase in IS (analogous to precision). FID penalizes lack of variety (analogous to recall) but also rewards precision, so we initially see a moderate improvement in FID, but as truncation approaches zero and variety diminishes, the FID sharply worsens.

The distribution shift caused by sampling with different latents than those seen in training is problematic for many models. Some of our larger models are not amenable to truncation, producing saturation artifacts (Figure 2(b)) when fed truncated noise. To counteract this, we seek to enforce amenability to truncation by conditioning \(\mathbf{G}\) to be smooth, so that the full space of \(z\) will map to good output samples. For this, we turn to Orthogonal Regularization (Brock et al., 2017), which directly enforces the orthogonality condition:

<--- Page Split --->

\[R_{\beta}(W) = \beta \| W^{\top}W - I\|_{\mathrm{F}}^{2}, \quad (2)\]

where \(W\) is a weight matrix and \(\beta\) a hyperparameter. This regularization is known to often be too limiting (Miyato et al., 2018), so we explore several variants designed to relax the constraint while still imparting the desired smoothness to our models.
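Both Eq. (2) and the relaxed variant described next amount to a penalty on the Gram matrix of each weight. A minimal PyTorch sketch, added to \(\mathbf{G}\)'s loss for every weight matrix:

```python
import torch

def orthogonal_regularization(W, beta=1e-4, remove_diagonal=True):
    # Flatten conv weights so that each row is one filter; gram[i, j] is then
    # the inner product of filters i and j (the paper's W^T W with filters
    # as columns).
    W = W.reshape(W.size(0), -1)
    gram = W @ W.t()
    eye = torch.eye(gram.size(0), device=W.device)
    if remove_diagonal:
        # Relaxed variant (Eq. 3): penalize only the off-diagonal terms,
        # leaving the filter norms unconstrained.
        penalty = gram * (1 - eye)
    else:
        # Original penalty (Eq. 2): push the Gram matrix towards identity.
        penalty = gram - eye
    return beta * penalty.pow(2).sum()
```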
The version we find to work best removes the diagonal terms from the regularization, and aims to minimize the pairwise cosine similarity between filters but does not constrain their norm:

\[R_{\beta}(W) = \beta \| W^{\top}W\odot (\mathbf{1} - I)\|_{\mathrm{F}}^{2}, \quad (3)\]

where \(\mathbf{1}\) denotes a matrix with all elements set to 1. We sweep \(\beta\) values and select \(10^{- 4}\), finding this small added penalty sufficient to improve the likelihood that our models will be amenable to truncation. Across runs in Table 1, we observe that without Orthogonal Regularization, only \(16\%\) of models are amenable to truncation, compared to \(60\%\) when trained with Orthogonal Regularization.

### 3.2 SUMMARY

We find that current GAN techniques are sufficient to enable scaling to large models and distributed, large-batch training. We find that we can dramatically improve the state of the art and train models up to \(512 \times 512\) resolution without need for explicit multiscale methods like Karras et al. (2018). Despite these improvements, our models undergo training collapse, necessitating early stopping in practice. In the next two sections we investigate why settings which were stable in previous works become unstable when applied at scale.

## 4 ANALYSIS

![](images/4_0.jpg)

<center>Figure 3: A typical plot of the first singular value \(\sigma_{0}\) in the layers of \(\mathbf{G}\) (a) and \(\mathbf{D}\) (b) before Spectral Normalization. Most layers in \(\mathbf{G}\) have well-behaved spectra, but without constraints a small subset grow throughout training and explode at collapse. \(\mathbf{D}\)'s spectra are noisier but otherwise better-behaved. Colors from red to violet indicate increasing depth. </center>

### 4.1 CHARACTERIZING INSTABILITY: THE GENERATOR

Much previous work has investigated GAN stability from a variety of analytical angles and on toy problems, but the instabilities we observe occur for settings which are stable at small scale, necessitating direct analysis at large scale. We monitor a range of weight, gradient, and loss statistics during training, in search of a metric which might presage the onset of training collapse, similar to (Odena et al., 2018). We found the top three singular values \(\sigma_{0}, \sigma_{1}, \sigma_{2}\) of each weight matrix to be the most informative. They can be efficiently computed using the Arnoldi iteration method (Golub & Van der Vorst, 2000), which extends the power iteration method, used in Miyato et al. (2018), to estimation of additional singular vectors and values. A clear pattern emerges, as can be seen in Figure 3(a) and Appendix F: most \(\mathbf{G}\) layers have well-behaved spectral norms, but some layers

<--- Page Split --->

(typically the first layer in \(\mathbf{G}\), which is over-complete and not convolutional) are ill-behaved, with spectral norms that grow throughout training and explode at collapse. To ascertain if this pathology is a cause of collapse or merely a symptom, we study the effects of imposing additional conditioning on \(\mathbf{G}\) to explicitly counteract spectral explosion. First, we directly regularize the top singular values \(\sigma_{0}\) of each weight, either towards a fixed value \(\sigma_{reg}\) or towards some ratio \(r\) of the second singular value, \(r \cdot sg(\sigma_{1})\) (with \(sg\) the stop-gradient operation to prevent the regularization from increasing \(\sigma_{1}\)). Alternatively, we employ a partial singular value decomposition to instead clamp \(\sigma_{0}\). Given a weight \(W\), its first singular vectors \(u_{0}\) and \(v_{0}\), and \(\sigma_{clamp}\) the value to which \(\sigma_{0}\) will be clamped, our weights become:

\[W = W - \max (0,\sigma_{0} - \sigma_{clamp})v_{0}u_{0}^{\top}, \quad (4)\]

where \(\sigma_{clamp}\) is set to either \(\sigma_{reg}\) or \(r \cdot sg(\sigma_{1})\).
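A minimal sketch of the monitoring and clamping, using exact SVD for clarity and the standard SVD convention (the paper estimates the top three values more cheaply with Arnoldi iteration); sigma_clamp plays the role of \(\sigma_{reg}\) or \(r \cdot sg(\sigma_{1})\):

```python
import torch

def top_singular_values(W, k=3):
    # sigma_0..sigma_{k-1} of the flattened weight, tracked during training.
    return torch.linalg.svdvals(W.reshape(W.size(0), -1))[:k]

@torch.no_grad()
def clamp_top_singular_value(W, sigma_clamp):
    # Rank-1 correction in the spirit of Eq. (4): subtract the excess of the
    # first singular value above sigma_clamp.
    W2 = W.reshape(W.size(0), -1)
    U, S, Vh = torch.linalg.svd(W2, full_matrices=False)
    excess = (S[0] - sigma_clamp).clamp(min=0.0)
    return (W2 - excess * torch.outer(U[:, 0], Vh[0])).reshape(W.shape)
```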
We observe that both with and without Spectral Normalization these techniques have the effect of preventing the gradual increase and explosion of either \(\sigma_{0}\) or \(\frac{\sigma_{0}}{\sigma_{1}}\), but even though in some cases they mildly improve performance, no combination prevents training collapse. This evidence suggests that while conditioning \(\mathbf{G}\) might improve stability, it is insufficient to ensure stability. We accordingly turn our attention to \(\mathbf{D}\).

### 4.2 CHARACTERIZING INSTABILITY: THE DISCRIMINATOR

As with \(\mathbf{G}\), we analyze the spectra of \(\mathbf{D}\)'s weights to gain insight into its behavior, then seek to stabilize training by imposing additional constraints. Figure 3(b) displays a typical plot of \(\sigma_{0}\) for \(\mathbf{D}\) (with further plots in Appendix F). Unlike \(\mathbf{G}\), we see that the spectra are noisy, \(\frac{\sigma_{0}}{\sigma_{1}}\) is well-behaved, and the singular values grow throughout training but only jump at collapse, instead of exploding. The spikes in \(\mathbf{D}\)'s spectra might suggest that it periodically receives very large gradients, but we observe that the Frobenius norms are smooth (Appendix F), suggesting that this effect is primarily concentrated on the top few singular directions. We posit that this noise is a result of optimization through the adversarial training process, where \(\mathbf{G}\) periodically produces batches which strongly perturb \(\mathbf{D}\). If this spectral noise is causally related to instability, a natural counter is to employ gradient penalties, which explicitly regularize changes in \(\mathbf{D}\)'s Jacobian. We explore the \(R_{1}\) zero-centered gradient penalty from Mescheder et al. (2018):

\[R_{1}:= \frac{\gamma}{2}\mathbb{E}_{p_{D}(x)}\left[\| \nabla D(x)\|_{F}^{2}\right]. \quad (5)\]

With the default suggested \(\gamma\) strength of 10, training becomes stable and improves the smoothness and boundedness of spectra in both \(\mathbf{G}\) and \(\mathbf{D}\), but performance severely degrades, resulting in a \(45\%\) reduction in IS. Reducing the penalty partially alleviates this degradation, but results in increasingly ill-behaved spectra; even with the penalty strength reduced to 1 (the lowest strength for which sudden collapse does not occur) the IS is reduced by \(20\%\). Repeating this experiment with various strengths of Orthogonal Regularization, DropOut (Srivastava et al., 2014), and L2 (see Appendix I for details) reveals similar behaviors for these regularization strategies: with high enough penalties on \(\mathbf{D}\), training stability can be achieved, but at a substantial cost to performance.

We also observe that \(\mathbf{D}\)'s loss approaches zero during training, but undergoes a sharp upward jump at collapse (Appendix F). One possible explanation for this behavior is that \(\mathbf{D}\) is overfitting to the training set, memorizing training examples rather than learning some meaningful boundary between real and generated images. As a simple test for \(\mathbf{D}\)'s memorization (related to Gulrajani et al.
(2017)), we evaluate uncollapsed discriminators on the ImageNet training and validation sets, and measure what percentage of samples are classified as real or generated. While the training accuracy is consistently above \(98\%\) , the validation accuracy falls in the range of \(50 - 55\%\) , no better than random guessing (regardless of regularization strategy). This confirms that \(\mathbf{D}\) is indeed memorizing the training set; we deem this in line with \(\mathbf{D}\) 's role, which is not explicitly to generalize, but to distill the training data and provide a useful learning signal for \(\mathbf{G}\) . Additional experiments and discussion are provided in Appendix G. ### 4.3 SUMMARY We find that stability does not come solely from \(\mathbf{G}\) or \(\mathbf{D}\) , but from their interaction through the adversarial training process. While the symptoms of their poor conditioning can be used to track and <--- Page Split ---> Table 2: Evaluation of models at different resolutions. We report scores without truncation (Column 3), scores at the best FID (Column 4), scores at the IS of validation data (Column 5), and scores at the max IS (Column 6). Standard deviations are computed over at least three random initializations. <table><tr><td>Model</td><td>Res.</td><td>FID/IS</td><td>(min FID) / IS</td><td>FID / (valid IS)</td><td>FID / (max IS)</td></tr><tr><td>SN-GAN</td><td>128</td><td>27.62/36.80</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>SA-GAN</td><td>128</td><td>18.65/52.52</td><td>N/A</td><td>N/A</td><td>N/A</td></tr><tr><td>BigGAN</td><td>128</td><td>8.7 ± .6/98.8 ± 3</td><td>7.7 ± .2/126.5 ± 0</td><td>9.6 ± .4/166.3 ± 1</td><td>25 ± 2/206 ± 2</td></tr><tr><td>BigGAN</td><td>256</td><td>8.7 ± .1/142.3 ± 2</td><td>7.7 ± .1/178.0 ± 5</td><td>9.3 ± .3/233.1 ± 1</td><td>25 ± 5/291 ± 4</td></tr><tr><td>BigGAN</td><td>512</td><td>8.1/144.2</td><td>7.6/170.3</td><td>11.8/241.4</td><td>27.0/275</td></tr><tr><td>BigGAN-deep</td><td>128</td><td>5.7 ± .3/124.5 ± 2</td><td>6.3 ± .3/148.1 ± 4</td><td>7.4 ± .6/166.5 ± 1</td><td>25 ± 2/253 ± 11</td></tr><tr><td>BigGAN-deep</td><td>256</td><td>6.9 ± .2/171.4 ± 2</td><td>7.0 ± .1/202.6 ± 2</td><td>8.1 ± .1/232.5 ± 2</td><td>27 ± 8/317 ± 6</td></tr><tr><td>BigGAN-deep</td><td>512</td><td>7.5/152.8</td><td>7.7/181.4</td><td>11.5/241.5</td><td>39.7/298</td></tr></table> identify instability, ensuring reasonable conditioning proves necessary for training but insufficient to prevent eventual training collapse. It is possible to enforce stability by strongly constraining \(\mathbf{D}\) , but doing so incurs a dramatic cost in performance. With current techniques, better final performance can be achieved by relaxing this conditioning and allowing collapse to occur at the later stages of training, by which time a model is sufficiently trained to achieve good results. ## 5 EXPERIMENTS ![](images/6_0.jpg) <center>Figure 4: Samples from our BigGAN model with truncation threshold 0.5 (a-c) and an example of class leakage in a partially trained model (d). </center> ### 5.1 EVALUATION ON IMAGENET We evaluate our models on ImageNet ILSVRC 2012 (Russakovsky et al., 2015) at \(128 \times 128\) , \(256 \times 256\) , and \(512 \times 512\) resolutions, employing the settings from Table 1, row 8. The samples generated by our models are presented in Figure 4, with additional samples in Appendix A, and online 2. We report IS and FID in Table 2. 
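The truncation operating points discussed next come from resampling the latent at sampling time. A minimal sketch:

```python
import torch

def truncated_normal(shape, threshold):
    # Resample entries whose magnitude exceeds the threshold until all
    # values lie inside [-threshold, threshold].
    z = torch.randn(shape)
    mask = z.abs() > threshold
    while mask.any():
        z[mask] = torch.randn(int(mask.sum().item()))
        mask = z.abs() > threshold
    return z

# Sweeping the threshold traces out the variety-fidelity curves from which
# the "best FID", "valid IS", and "max IS" operating points are read off.
z = truncated_normal((8, 120), threshold=0.5)
```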
As our models are able to trade sample variety for quality, it is unclear how best to compare against prior art; we accordingly report values at three settings, with complete curves in Appendix D. First, we report the FID/IS values at the truncation setting which attains the best FID. Second, we report the FID at the truncation setting for which our model's IS is the same as that attained by the real validation data, reasoning that this is a passable measure of maximum sample variety achieved while still achieving a good level of "objectness." Third, we report FID at the maximum IS achieved by each model, to demonstrate how much variety must be traded off to maximize quality. In all three cases, our models outperform the previous state- of- the- art IS and FID scores achieved by Miyato et al. (2018) and Zhang et al. (2018). In addition to the BigGAN model introduced in the first version of the paper and used in the majority of experiments (unless otherwise stated), we also present a \(4\mathrm{x}\) deeper model (BigGAN- deep) which uses a different configuration of residual blocks. As can be seen from Table 2, BigGAN- deep substantially outperforms BigGAN across all resolutions and metrics. This confirms that our findings <--- Page Split ---> <table><tr><td>Ch.</td><td>Param (M)</td><td>Shared</td><td>Skip-z</td><td>Ortho.</td><td>FID</td><td>IS</td><td>(min FID) / IS</td><td>FID / (max IS)</td></tr><tr><td>64</td><td>317.1</td><td>✘</td><td>✘</td><td>✘</td><td>48.38</td><td>23.27</td><td>48.6 / 23.1</td><td>49.1 / 23.9</td></tr><tr><td>64</td><td>99.4</td><td>✓</td><td>✓</td><td>✓</td><td>23.48</td><td>24.78</td><td>22.4 / 21.0</td><td>60.9 / 35.8</td></tr><tr><td>96</td><td>207.9</td><td>✓</td><td>✓</td><td>✓</td><td>18.84</td><td>27.86</td><td>17.1 / 23.3</td><td>51.6 / 38.1</td></tr><tr><td>128</td><td>355.7</td><td>✓</td><td>✓</td><td>✓</td><td>13.75</td><td>30.61</td><td>13.0 / 28.0</td><td>46.2 / 47.8</td></tr></table> Table 3: BigGAN results on JFT- 300M at \(256 \times 256\) resolution. The FID and IS columns report these scores given by the JFT- 300M- trained Inception v2 classifier with noise distributed as \(z \sim \mathcal{N}(0, I)\) (non- truncated). The (min FID) / IS and FID / (max IS) columns report scores at the best FID and IS from a sweep across truncated noise distributions ranging from \(\sigma = 0\) to \(\sigma = 2\) . Images from the JFT- 300M validation set have an IS of 50.88 and FID of 1.94. extend to other architectures, and that increased depth leads to improvement in sample quality. Both BigGAN and BigGAN- deep architectures are described in Appendix B. Our observation that \(\mathbf{D}\) overfits to the training set, coupled with our model's sample quality, raises the obvious question of whether or not \(\mathbf{G}\) simply memorizes training points. To test this, we perform class- wise nearest neighbors analysis in pixel space and the feature space of pre- trained classifier networks (Appendix A). In addition, we present both interpolations between samples and class- wise interpolations (where \(z\) is held constant) in Figures 8 and 9. Our model convincingly interpolates between disparate samples, and the nearest neighbors for its samples are visually distinct, suggesting that our model does not simply memorize training data. We note that some failure modes of our partially- trained models are distinct from those previously observed. 
Most previous failures involve local artifacts (Odena et al., 2016), images consisting of texture blobs instead of objects (Salimans et al., 2016), or the canonical mode collapse. We observe class leakage, where images from one class contain properties of another, as exemplified by Figure 4(d). We also find that many classes on ImageNet are more difficult than others for our model; our model is more successful at generating dogs (which make up a large portion of the dataset, and are mostly distinguished by their texture) than crowds (which comprise a small portion of the dataset and have more large-scale structure). Further discussion is available in Appendix A.

### 5.2 ADDITIONAL EVALUATION ON JFT-300M

To confirm that our design choices are effective for even larger, more complex, and more diverse datasets, we also present results of our system on a subset of JFT-300M (Sun et al., 2017). The full JFT-300M dataset contains 300M real-world images labeled with 18K categories. Since the category distribution is heavily long-tailed, we subsample the dataset to keep only images with the 8.5K most common labels. The resulting dataset contains 292M images – two orders of magnitude larger than ImageNet. For images with multiple labels, we sample a single label randomly and independently whenever an image is sampled. To compute IS and FID for the GANs trained on this dataset, we use an Inception v2 classifier (Szegedy et al., 2016) trained on this dataset. Quantitative results are presented in Table 3. All models are trained with batch size 2048. We compare an ablated version of our model – comparable to SA-GAN (Zhang et al., 2018) but with the larger batch size – against a "full" BigGAN model that makes use of all of the techniques applied to obtain the best results on ImageNet (shared embedding, skip-\(z\), and orthogonal regularization). Our results show that these techniques substantially improve performance even in the setting of this much larger dataset at the same model capacity (64 base channels). We further show that for a dataset of this scale, we see significant additional improvements from expanding the capacity of our models to 128 base channels, while for ImageNet GANs that additional capacity was not beneficial.

In Figure 19 (Appendix D), we present truncation plots for models trained on this dataset. Unlike for ImageNet, where truncation limits of \(\sigma \approx 0\) tend to produce the highest fidelity scores, IS is typically maximized for our JFT-300M models when the truncation value \(\sigma\) ranges from 0.5 to 1. We suspect that this is at least partially due to the intra-class variability of JFT-300M labels, as well as the relative complexity of the image distribution, which includes images with multiple objects at a variety of scales. Interestingly, unlike models trained on ImageNet, where training tends to collapse without heavy regularization (Section 4), the models trained on JFT-300M remain stable over many

<--- Page Split --->

hundreds of thousands of iterations. This suggests that moving beyond ImageNet to larger datasets may partially alleviate GAN stability issues. The improvement over the baseline GAN model that we achieve on this dataset without changes to the underlying models or training and regularization techniques (beyond expanded capacity) demonstrates that our findings extend from ImageNet to datasets with scale and complexity thus far unprecedented for generative models of images.
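For concreteness, the label handling described above can be sketched as follows; label_counts is a hypothetical frequency table, as the actual JFT-300M pipeline is not public:

```python
import random

# label_counts: hypothetical mapping {label: frequency} over JFT-300M.
keep = set(sorted(label_counts, key=label_counts.get, reverse=True)[:8500])

def sample_example(image, labels):
    labels = [l for l in labels if l in keep]
    # One label is drawn randomly and independently at every draw, so a
    # multi-labeled image can appear under different labels across draws.
    return image, random.choice(labels)
```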
## 6 CONCLUSION We have demonstrated that Generative Adversarial Networks trained to model natural images of multiple categories highly benefit from scaling up, both in terms of fidelity and variety of the generated samples. As a result, our models set a new level of performance among ImageNet GAN models, improving on the state of the art by a large margin. We have also presented an analysis of the training behavior of large scale GANs, characterized their stability in terms of the singular values of their weights, and discussed the interplay between stability and performance. ## ACKNOWLEDGMENTS We would like to thank Kai Arulkumaran, Matthias Bauer, Peter Buchlovsky, Jeffrey Defauw, Sander Dieleman, Ian Goodfellow, Ariel Gordon, Karol Gregor, Dominik Grewe, Chris Jones, Jacob Menick, Augustus Odena, Suman Ravuri, Ali Razavi, Mihaela Rosca, and Jeff Stanway. ## APPENDIX A ADDITIONAL SAMPLES, INTERPOLATIONS, AND NEAREST NEIGHBORS FROM IMAGENET MODELS ![](images/11_0.jpg) <center>Figure 5: Samples generated by our BigGAN model at \(256 \times 256\) resolution. </center> ![](images/11_1.jpg) <center>Figure 6: Samples generated by our BigGAN model at \(512 \times 512\) resolution. </center> <--- Page Split ---> ![](images/12_0.jpg) <center>Figure 7: Comparing easy classes (a) with difficult classes (b) at \(512 \times 512\) . Classes such as dogs which are largely textural, and common in the dataset, are far easier to model than classes involving unaligned human faces or crowds. Such classes are more dynamic and structured, and often have details to which human observers are more sensitive. The difficulty of modeling global structure is further exacerbated when producing high-resolution images, even with non-local blocks. </center> ![](images/12_1.jpg) <center>Figure 8: Interpolations between \(z, c\) pairs. </center> <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 9: Interpolations between \(c\) with \(z\) held constant. Pose semantics are frequently maintained between endpoints (particularly in the final row). Row 2 demonstrates that grayscale is encoded in the joint \(z, c\) space, rather than in \(z\) . </center> ![](images/13_1.jpg) <center>Figure 10: Nearest neighbors in VGG-16-fc7 (Simonyan & Zisserman, 2015) feature space. The generated image is in the top left. </center> <--- Page Split ---> ![](images/14_0.jpg) <center>Figure 11: Nearest neighbors in ResNet-50-avgpool (He et al., 2016) feature space. The generated image is in the top left. </center> ![](images/14_1.jpg) <center>Figure 12: Nearest neighbors in pixel space. The generated image is in the top left. </center> <--- Page Split ---> ![](images/15_0.jpg) <center>Figure 13: Nearest neighbors in VGG-16-fc7 (Simonyan & Zisserman, 2015) feature space. The generated image is in the top left. </center> ![](images/15_1.jpg) <center>Figure 14: Nearest neighbors in ResNet-50-avgpool (He et al., 2016) feature space. The generated image is in the top left. </center> <--- Page Split ---> ## APPENDIX B ARCHITECTURAL DETAILS In the BigGAN model (Figure 15), we use the ResNet (He et al., 2016) GAN architecture of (Zhang et al., 2018), which is identical to that used by (Miyato et al., 2018), but with the channel pattern in \(\mathbf{D}\) modified so that the number of filters in the first convolutional layer of each block is equal to the number of output filters (rather than the number of input filters, as in Miyato et al. (2018); Gulrajani et al. (2017)). 
We use a single shared class embedding in \(\mathbf{G}\) , and skip connections for the latent vector \(z\) (skip- \(z\) ). In particular, we employ hierarchical latent spaces, so that the latent vector \(z\) is split along its channel dimension into chunks of equal size (20- D in our case), and each chunk is concatenated to the shared class embedding and passed to a corresponding residual block as a conditioning vector. The conditioning of each block is linearly projected to produce per- sample gains and biases for the BatchNorm layers of the block. The bias projections are zero- centered, while the gain projections are centered at 1. Since the number of residual blocks depends on the image resolution, the full dimensionality of \(z\) is 120 for \(128 \times 128\) , 140 for \(256 \times 256\) , and 160 for \(512 \times 512\) images. The BigGAN- deep model (Figure 16) differs from BigGAN in several aspects. It uses a simpler variant of skip- \(z\) conditioning: instead of first splitting \(z\) into chunks, we concatenate the entire \(z\) with the class embedding, and pass the resulting vector to each residual block through skip connections. BigGAN- deep is based on residual blocks with bottlenecks (He et al., 2016), which incorporate two additional \(1 \times 1\) convolutions: the first reduces the number of channels by a factor of 4 before the more expensive \(3 \times 3\) convolutions; the second produces the required number of output channels. While BigGAN relies on \(1 \times 1\) convolutions in the skip connections whenever the number of channels needs to change, in BigGAN- deep we use a different strategy aimed at preserving identity throughout the skip connections. In \(\mathbf{G}\) , where the number of channels needs to be reduced, we simply retain the first group of channels and drop the rest to produce the required number of channels. In \(\mathbf{D}\) , where the number of channels should be increased, we pass the input channels unperturbed, and concatenate them with the remaining channels produced by a \(1 \times 1\) convolution. As far as the network configuration is concerned, the discriminator is an exact reflection of the generator. There are two blocks at each resolution (BigGAN uses one), and as a result BigGAN- deep is four times deeper than BigGAN. Despite their increased depth, the BigGAN- deep models have significantly fewer parameters mainly due to the bottleneck structure of their residual blocks. For example, the \(128 \times 128\) BigGAN- deep \(\mathbf{G}\) and \(\mathbf{D}\) have 50.4M and 34.6M parameters respectively, while the corresponding original BigGAN models have 70.4M and 88.0M parameters. All BigGAN- deep models use attention at \(64 \times 64\) resolution, channel width multiplier \(ch = 128\) , and \(z \in \mathbb{R}^{128}\) . ![](images/16_0.jpg) <center>Figure 15: (a) A typical architectural layout for BigGAN's \(\mathbf{G}\) ; details are in the following tables. (b) A Residual Block (ResBlock up) in BigGAN's \(\mathbf{G}\) . (c) A Residual Block (ResBlock down) in BigGAN's \(\mathbf{D}\) . </center> <--- Page Split ---> ![](images/17_0.jpg) <center>Figure 16: (a) A typical architectural layout for BigGAN-deep’s G; details are in the following tables. (b) A Residual Block (ResBlock up) in BigGAN-deep’s G. (c) A Residual Block (ResBlock down) in BigGAN-deep’s D. A ResBlock (without up or down) in BigGAN-deep does not include the Upsample or Average Pooling layers, and has identity skip connections. 
</center> <--- Page Split ---> Table 4: BigGAN architecture for \(128\times 128\) images. \(c h\) represents the channel width multiplier in each network from Table 1. <table><tr><td>z ∈ R120 ∼ N(0, I) <br>Embed(y) ∈ R128</td></tr><tr><td>Linear (20 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R128×128×3</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ReLU, Global sum pooling</td></tr><tr><td>Embed(y)·h + (linear → 1)</td></tr><tr><td>(b) Discriminator</td></tr></table> Table 5: BigGAN architecture for \(256\times 256\) images. Relative to the \(128\times 128\) architecture, we add an additional ResBlock in each network at \(16\times 16\) resolution, and move the non-local block in \(\mathbf{G}\) to \(128\times 128\) resolution. Memory constraints prevent us from moving the non-local block in \(\mathbf{D}\) <table><tr><td>z ∈ R140 ∼ N(0, I) <br>Embed(y) ∈ R128</td></tr><tr><td>Linear (20 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>Non-Local Block (128 × 128)</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R256×256×3</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ReLU, Global sum pooling</td></tr><tr><td>Embed(y)·h + (linear → 1)</td></tr><tr><td>(b) Discriminator</td></tr></table> <--- Page Split ---> Table 6: BigGAN architecture for \(512\times 512\) images. Relative to the \(256\times 256\) architecture, we add an additional ResBlock at the \(512\times 512\) resolution. Memory constraints force us to move the non-local block in both networks back to \(64\times 64\) resolution as in the \(128\times 128\) pixel setting. 
<table><tr><td>z ∈ R160 ∼ N(0, I) <br>Embed(y) ∈ R128</td></tr><tr><td>Linear (20 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>ResBlock up ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R512×512×3</td></tr><tr><td>ResBlock down ch → ch</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ReLU, Global sum pooling</td></tr><tr><td>Embed(y)·h + (linear → 1)</td></tr><tr><td>(b) Discriminator</td></tr></table> Table 7: BigGAN-deep architecture for \(128\times 128\) images. <table><tr><td>z ∈ R128 ∼ N(0, I) <br>Embed(y) ∈ R128</td></tr><tr><td>Linear (128 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R128×128×3</td></tr><tr><td>3 × 3 Conv 3 → ch</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td><td></td></tr><tr><td>ReLU, Global sum pooling</td><td></td></tr><tr><td>Embed(y)·h + (linear → 1)</td><td></td></tr><tr><td>(b) Discriminator</td><td></td></tr></table> <--- Page Split ---> Table 8: BigGAN-deep architecture for \(256\times 256\) images. 
<table><tr><td>z ∈ R128 ∼ N(0, I) <br>Embed(y) ∈ R128</td></tr><tr><td>Linear (128 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ReBlock up 16ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td><td></td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R256×256×3</td></tr><tr><td>3 × 3 Conv 3 → ch</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td><td></td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td><td></td></tr><tr><td>ReLU, Global sum pooling</td></tr><tr><td>Embed(y)·h + (linear → 1)</td></tr><tr><td>(b) Discriminator</td></tr></table> <--- Page Split ---> Table 9: BigGAN-deep architecture for \(512\times 512\) images. <table><tr><td>z ∈ R128 ∼ N(0, I)</td></tr><tr><td>Embed(y) ∈ R128</td></tr><tr><td>Linear (128 + 128) → 4 × 4 × 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock up 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ReSBlock up 16ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock up 8ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td><td></td></tr><tr><td>ResBlock up 8ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>ResBlock up 4ch → 2ch</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>ResBlock up 2ch → ch</td></tr><tr><td>ResBlock ch → ch</td></tr><tr><td>ResBlock up ch → ch</td></tr><tr><td>BN, ReLU, 3 × 3 Conv ch → 3</td></tr><tr><td>Tanh</td></tr><tr><td>(a) Generator</td></tr></table> <table><tr><td>RGB image x ∈ R512×512×3</td></tr><tr><td>3 × 3 Conv 3 → ch</td></tr><tr><td>ResBlock down ch → ch</td></tr><tr><td>ResBlock ch → ch</td></tr><tr><td>ResBlock down ch → 2ch</td></tr><tr><td>ResBlock 2ch → 2ch</td></tr><tr><td>ResBlock down 2ch → 4ch</td></tr><tr><td>ResBlock 4ch → 4ch</td></tr><tr><td>Non-Local Block (64 × 64)</td></tr><tr><td>ResBlock down 4ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td></tr><tr><td>ResBlock down 8ch → 8ch</td></tr><tr><td>ResBlock 8ch → 8ch</td><td></td></tr><tr><td>ResBlock down 8ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td></tr><tr><td>ResBlock down 16ch → 16ch</td></tr><tr><td>ResBlock 16ch → 16ch</td><td></td></tr><tr><td>ReLU, Global sum pooling</td></tr><tr><td>Embed(y)h + (linear → 1)</td></tr><tr><td>(b) Discriminator</td></tr></table> <--- Page Split ---> ## APPENDIX C EXPERIMENTAL DETAILS Our basic setup follows SA- GAN (Zhang et al., 2018), and is implemented in TensorFlow (Abadi et al., 2016). We employ the architectures detailed in Appendix B, with non- local blocks inserted at a single stage in each network. 
Both \(\mathbf{G}\) and \(\mathbf{D}\) networks are initialized with Orthogonal Initialization (Saxe et al., 2014). We use the Adam optimizer (Kingma & Ba, 2014) with \(\beta_{1} = 0\) and \(\beta_{2} = 0.999\) and a constant learning rate. For BigGAN models at all resolutions, we use \(2\cdot 10^{-4}\) in \(\mathbf{D}\) and \(5\cdot 10^{-5}\) in \(\mathbf{G}\). For BigGAN-deep, we use a learning rate of \(2\cdot 10^{-4}\) in \(\mathbf{D}\) and \(5\cdot 10^{-5}\) in \(\mathbf{G}\) for \(128\times 128\) models, and \(2.5\cdot 10^{-5}\) in both \(\mathbf{D}\) and \(\mathbf{G}\) for \(256\times 256\) and \(512\times 512\) models. We experimented with the number of \(\mathbf{D}\) steps per \(\mathbf{G}\) step (varying it from 1 to 6) and found that two \(\mathbf{D}\) steps per \(\mathbf{G}\) step gave the best results. We use an exponential moving average of the weights of \(\mathbf{G}\) at sampling time, with a decay rate set to 0.9999. We employ cross-replica BatchNorm (Ioffe & Szegedy, 2015) in \(\mathbf{G}\), where batch statistics are aggregated across all devices, rather than a single device as in standard implementations. Spectral Normalization (Miyato et al., 2018) is used in both \(\mathbf{G}\) and \(\mathbf{D}\), following SA-GAN (Zhang et al., 2018). We train on a Google TPU v3 Pod, with the number of cores proportional to the resolution: 128 for \(128\times 128\), 256 for \(256\times 256\), and 512 for \(512\times 512\). Training takes between 24 and 48 hours for most models. We increase \(\epsilon\) from the default \(10^{-8}\) to \(10^{-4}\) in BatchNorm and Spectral Norm to mollify low-precision numerical issues. We preprocess data by cropping along the long edge and rescaling to a given resolution with area resampling.

## C.1 BATCHNORM STATISTICS AND SAMPLING

The default behavior with batch normalized classifier networks is to use a running average of the activation moments at test time. Previous works (Radford et al., 2016) have instead used batch statistics when sampling images. While this is not technically an invalid way to sample, it means that results are dependent on the test batch size (and on how many devices it is split across), and further complicates reproducibility. We find that this detail is extremely important, with changes in test batch size producing drastic changes in performance. This is further exacerbated when one uses exponential moving averages of \(\mathbf{G}\)'s weights for sampling, as the BatchNorm running averages are computed with non-averaged weights and are poor estimates of the activation statistics for the averaged weights. To counteract both these issues, we employ "standing statistics," where we compute activation statistics at sampling time by running \(\mathbf{G}\) through multiple forward passes (typically 100), each with a different batch of random noise, and storing the means and variances aggregated across all forward passes. Analogous to using running statistics, this results in \(\mathbf{G}\)'s outputs becoming invariant to batch size and the number of devices, even when producing a single sample.
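As a concrete illustration of standing statistics, here is a hedged PyTorch sketch (the paper's implementation is in TensorFlow; we assume a class-conditional generator `g(z, y)` with ordinary `BatchNorm2d` layers, whereas the real model uses cross-replica, class-conditional BatchNorm):

```python
import torch

@torch.no_grad()
def compute_standing_stats(g, dim_z=128, n_classes=1000,
                           n_passes=100, batch_size=64):
    """Accumulate BatchNorm moments over many noise batches, then freeze them."""
    for m in g.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            m.reset_running_stats()
            m.momentum = None     # None => cumulative (equal-weight) average
    g.train()                     # use and accumulate batch statistics
    for _ in range(n_passes):
        z = torch.randn(batch_size, dim_z)
        y = torch.randint(n_classes, (batch_size,))
        g(z, y)
    g.eval()                      # sampling now uses the standing statistics
```

After this, the generator's outputs no longer depend on the sampling batch size, matching the behavior described above.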
## C.2 CIFAR-10

We run our networks on CIFAR-10 (Krizhevsky & Hinton, 2009) using the settings from Table 1, row 8, and achieve an IS of 9.22 and an FID of 14.73 without truncation.

## C.3 INCEPTION SCORES OF IMAGENET IMAGES

We compute the IS for both the training and validation sets of ImageNet. At \(128\times 128\) the training data has an IS of 233, and the validation data has an IS of 166. At \(256\times 256\) the training data has an IS of 377, and the validation data has an IS of 234. At \(512\times 512\) the training data has an IS of 348, and the validation data has an IS of 241. The discrepancy between training and validation scores is due to the Inception classifier having been trained on the training data, resulting in high-confidence outputs that are preferred by the Inception Score.

<--- Page Split --->

![](images/23_0.jpg) <center>Figure 17: IS vs. FID at \(128 \times 128\). Scores are averaged across three random seeds.</center>

![](images/23_1.jpg) <center>Figure 18: IS vs. FID at 256 and 512 pixels. Scores are averaged across three random seeds for 256.</center>

<--- Page Split --->

![](images/24_0.jpg) <center>Figure 19: JFT-300M IS vs. FID at \(256 \times 256\). We show truncation values from \(\sigma = 0\) to \(\sigma = 2\) (top) and from \(\sigma = 0.5\) to \(\sigma = 1.5\) (bottom). Each curve corresponds to a row in Table 3. The curve labeled baseline corresponds to the first row (with orthogonal regularization and other techniques disabled), while the rest correspond to rows 2-4 – the same architecture at different capacities (Ch).</center>

<--- Page Split --->

## APPENDIX E CHOOSING LATENT SPACES

While most previous work has employed \(\mathcal{N}(0,I)\) or \(\mathcal{U}[-1,1]\) as the prior for \(z\) (the noise input to \(\mathbf{G}\)), we are free to choose any latent distribution from which we can sample. We explore the choice of latents by considering an array of possible designs, described below. For each latent, we provide the intuition behind its design and briefly describe how it performs when used as a drop-in replacement for \(z\sim \mathcal{N}(0,I)\) in an SA-GAN baseline. As the Truncation Trick proved more beneficial than switching to any of these latents, we do not perform a full ablation study, and employ \(z\sim \mathcal{N}(0,I)\) for our main results to take full advantage of truncation. The two latents which we find to work best without truncation are Bernoulli \(\{0,1\}\) and Censored Normal \(\max(\mathcal{N}(0,I),0)\), both of which improve speed of training and slightly improve final performance, but are less amenable to truncation. We also ablate the choice of latent space dimensionality (which by default is \(z\in \mathbb{R}^{128}\)), finding that we are able to successfully train with latent dimensions as low as \(z\in \mathbb{R}^{8}\), and that with \(z\in \mathbb{R}^{32}\) we see a minimal drop in performance. While this is substantially smaller than in many previous works, direct comparison to single-class networks (such as those in Karras et al. (2018), which employ a \(z\in \mathbb{R}^{512}\) latent space on a highly constrained dataset with 30,000 images) is improper, as our networks have additional class information provided as input.

## LATENTS

\(\mathcal{N}(0,I)\). A standard choice of the latent space, which we use in the main experiments.

\(\mathcal{U}[-1,1]\). Another standard choice; we find that it performs similarly to \(\mathcal{N}(0,I)\).

Bernoulli \(\{0,1\}\). A discrete latent might reflect our prior that underlying factors of variation in natural images are not continuous, but discrete (one feature is present, another is not). This latent outperforms \(\mathcal{N}(0,I)\) (in terms of IS) by \(8\%\) and requires \(60\%\) fewer iterations.

\(\max(\mathcal{N}(0,I),0)\), also called Censored Normal.
This latent is designed to introduce sparsity in the latent space (reflecting our prior that certain latent features are sometimes present and sometimes not), but also to allow those latents to vary continuously, expressing different degrees of intensity for latents which are active. This latent outperforms \(\mathcal{N}(0,I)\) (in terms of IS) by \(15-20\%\) and tends to require fewer iterations.

Bernoulli \(\{-1,1\}\). This latent is designed to be discrete, but not sparse (as the network can learn to activate in response to negative inputs). This latent performs near-identically to \(\mathcal{N}(0,I)\).

Independent Categorical in \(\{-1,0,1\}\), with equal probability. This distribution is chosen to be discrete and have sparsity, but also to allow latents to take on both positive and negative values. This latent performs near-identically to \(\mathcal{N}(0,I)\).

\(\mathcal{N}(0,I)\) multiplied by Bernoulli \(\{0,1\}\). This distribution is chosen to have continuous latent factors which are also sparse (with a peak at zero), similar to Censored Normal but not constrained to be positive. This latent performs near-identically to \(\mathcal{N}(0,I)\).

Concatenating \(\mathcal{N}(0,I)\) and Bernoulli \(\{0,1\}\), each taking half of the latent dimensions. This is inspired by Chen et al. (2016), and is chosen to allow some factors of variation to be discrete, while others are continuous. This latent outperforms \(\mathcal{N}(0,I)\) by around \(5\%\).

Variance annealing: we sample from \(\mathcal{N}(0,\sigma I)\), where \(\sigma\) is allowed to vary over training. We compared a variety of piecewise schedules and found that starting with \(\sigma = 2\) and annealing towards \(\sigma = 1\) over the course of training mildly improved performance. The space of possible variance schedules is large, and we did not explore it in depth; we suspect that a more principled or better-tuned schedule could more strongly impact performance.

Per-sample variable variance: \(\mathcal{N}(0,\sigma_{i}I)\), where \(\sigma_{i}\sim \mathcal{U}[\sigma_{l},\sigma_{h}]\) independently for each sample \(i\) in a batch, and \((\sigma_{l},\sigma_{h})\) are hyperparameters. This distribution was chosen to try to improve amenability to the Truncation Trick by feeding the network noise samples with non-constant variance. This did not appear to affect performance, but we did not explore it in depth. One might also consider scheduling \((\sigma_{l},\sigma_{h})\), similar to variance annealing.

<--- Page Split --->

![](images/26_0.jpg) <center>Figure 20: Training statistics for a typical model without special modifications. Collapse occurs after 200000 iterations.</center>

<--- Page Split --->

![](images/27_0.jpg) <center>Figure 21: G training statistics with \(\sigma_{0}\) in G regularized towards 1. Collapse occurs after 125000 iterations.</center>

![](images/27_1.jpg) <center>Figure 22: D training statistics with \(\sigma_{0}\) in G regularized towards 1. Collapse occurs after 125000 iterations.</center>

<--- Page Split --->

![](images/28_0.jpg) <center>Figure 23: G training statistics with an R1 Gradient Penalty of strength 10 on D. This model does not collapse, but only reaches a maximum IS of 55.</center>

![](images/28_1.jpg) <center>Figure 24: D training statistics with an R1 Gradient Penalty of strength 10 on D. This model does not collapse, but only reaches a maximum IS of 55.
</center>

<--- Page Split --->

![](images/29_0.jpg) <center>Figure 25: G training statistics with Dropout (keep probability 0.8) applied to the last feature layer of D. This model does not collapse, but only reaches a maximum IS of 70.</center>

![](images/29_1.jpg) <center>Figure 26: D training statistics with Dropout (keep probability 0.8) applied to the last feature layer of D. This model does not collapse, but only reaches a maximum IS of 70.</center>

<--- Page Split --->

![](images/30_0.jpg) <center>Figure 27: Additional training statistics for a typical model without special modifications. Collapse occurs after 200000 iterations.</center>

![](images/30_1.jpg) <center>Figure 28: Additional training statistics with an R1 Gradient Penalty of strength 10 on D. This model does not collapse, but only reaches a maximum IS of 55.</center>

<--- Page Split --->

## APPENDIX G ADDITIONAL DISCUSSION: STABILITY AND COLLAPSE

In this section, we present and discuss additional investigations into the stability of our models, expanding upon the discussion in Section 4.

## G.1 INTERVENING BEFORE COLLAPSE

The symptoms of collapse are sharp and sudden, with sample quality dropping from its peak to its lowest value over the course of a few hundred iterations. We can detect this collapse when the singular values in \(\mathbf{G}\) explode, but while the (unnormalized) singular values grow throughout training, there is no consistent threshold at which collapse occurs. This raises the question of whether it is possible to prevent or delay collapse by taking a model checkpoint several thousand iterations before collapse, and continuing training with some hyperparameters modified (e.g., the learning rate). We conducted a range of intervention experiments wherein we took checkpoints of a collapsed model ten or twenty thousand iterations before collapse, changed some aspect of the training setup, then observed whether collapse occurred, when it occurred relative to the original collapse, and the final performance attained at collapse. We found that increasing the learning rates (relative to their initial values) in either \(\mathbf{G}\) or \(\mathbf{D}\), or in both \(\mathbf{G}\) and \(\mathbf{D}\), led to immediate collapse. This occurred even when doubling the learning rates from \(2 \cdot 10^{-4}\) in \(\mathbf{D}\) and \(5 \cdot 10^{-5}\) in \(\mathbf{G}\) to \(4 \cdot 10^{-4}\) in \(\mathbf{D}\) and \(1 \cdot 10^{-4}\) in \(\mathbf{G}\), a setting which is not normally unstable when used as the initial learning rates. We also tried changing the momentum terms (Adam's \(\beta_{1}\) and \(\beta_{2}\)), or resetting the momentum vectors to zero, but this tended to either make no difference or, when increasing the momentum, cause immediate collapse. We found that decreasing the learning rate in \(\mathbf{G}\) while keeping the learning rate in \(\mathbf{D}\) unchanged could delay collapse (in some cases by over one hundred thousand iterations), but also crippled training—once the learning rate in \(\mathbf{G}\) was decayed, performance either stayed constant or slowly decayed. Conversely, reducing the learning rate in \(\mathbf{D}\) while keeping \(\mathbf{G}\)'s learning rate unchanged led to immediate collapse. We hypothesize that this is because of the need for \(\mathbf{D}\) to remain optimal throughout training—if its learning rate is reduced, it can no longer "keep up" with \(\mathbf{G}\), and training collapses.
With this in mind, we also tried increasing the number of \(\mathbf{D}\) steps per \(\mathbf{G}\) step, but this either had no effect or delayed collapse at the cost of crippling training (similar to decaying \(\mathbf{G}\)'s learning rate). To further illuminate these dynamics, we construct two additional intervention experiments: one where we freeze \(\mathbf{G}\) before collapse (by ceasing all parameter updates) and observe whether \(\mathbf{D}\) remains stable, and the reverse, where we freeze \(\mathbf{D}\) before collapse and observe whether \(\mathbf{G}\) remains stable. We find that when \(\mathbf{G}\) is frozen, \(\mathbf{D}\) remains stable and slowly reduces both components of its loss towards zero. However, when \(\mathbf{D}\) is frozen, \(\mathbf{G}\) immediately and dramatically collapses, maxing out \(\mathbf{D}\)'s loss to values upwards of 300, compared to the normal range of 0 to 3. This leads to two conclusions: first, as has been noted in previous works (Miyato et al., 2018; Gulrajani et al., 2017; Zhang et al., 2018), \(\mathbf{D}\) must remain optimal with respect to \(\mathbf{G}\) both for stability and to provide useful gradient information. The consequence of \(\mathbf{G}\) being allowed to win the game is a complete breakdown of the training process, regardless of \(\mathbf{G}\)'s conditioning or optimization settings. Second, favoring \(\mathbf{D}\) over \(\mathbf{G}\) (either by training it with a larger learning rate, or for more steps) is insufficient to ensure stability even if \(\mathbf{D}\) is well-conditioned. This suggests either that in practice an optimal \(\mathbf{D}\) is necessary but insufficient for training stability, or that some aspect of the system results in \(\mathbf{D}\) not being trained towards optimality. With the latter possibility in mind, we take a closer look at the noise in \(\mathbf{D}\)'s spectra in the following section.

<--- Page Split --->

![](images/32_0.jpg) <center>Figure 29: A closeup of \(\mathbf{D}\)'s spectra at a noise spike.</center>
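The singular values plotted in these figures can be tracked cheaply during training. As a point of reference, here is a minimal NumPy sketch (our illustration, not the paper's code) of the standard power-iteration estimate of a weight matrix's top singular value \(\sigma_{0}\); the next singular values can be obtained by deflation or a thin SVD.

```python
import numpy as np

def top_singular_value(w, n_iters=20, eps=1e-12):
    """Estimate sigma_0 of a weight tensor via power iteration."""
    w = np.asarray(w)
    w = w.reshape(w.shape[0], -1)      # flatten conv kernels to a 2D matrix
    u = np.random.randn(w.shape[0])
    for _ in range(n_iters):
        v = w.T @ u
        v /= np.linalg.norm(v) + eps   # right singular vector estimate
        u = w @ v
        u /= np.linalg.norm(u) + eps   # left singular vector estimate
    return float(u @ w @ v)            # Rayleigh-quotient-style estimate
```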
If some element of \(\mathbf{D}\)'s training process results in undesirable dynamics, it follows that the behavior of \(\mathbf{D}\)'s spectra may hold clues as to what that element is. The top three singular values of \(\mathbf{D}\) differ from \(\mathbf{G}\)'s in that they have a large noise component, tend to grow throughout training but only show a small response to collapse, and the ratio of the first two singular values tends to be centered around one, suggesting that the spectra of \(\mathbf{D}\) have a slow decay. When viewed up close (Figure 29), the noise spikes resemble an impulse response: at each spike, the spectra jump upwards, then slowly decrease, with some oscillation. One possible explanation is that this behavior is a consequence of \(\mathbf{D}\) memorizing the training data, as suggested by experiments in Section 4.2. As it approaches perfect memorization, it receives less and less signal from real data, as both the original GAN loss and the hinge loss provide zero gradients when \(\mathbf{D}\) outputs a confident and correct prediction for a given example. If the gradient signal from real data attenuates to zero, this can result in \(\mathbf{D}\) eventually becoming biased due to exclusively receiving gradients that encourage its outputs to be negative. If this bias passes a certain threshold, \(\mathbf{D}\) will eventually misclassify a large number of real examples and receive a large gradient encouraging positive outputs, resulting in the observed impulse responses.

This argument suggests several fixes. First, one might consider an unbounded loss (such as the Wasserstein loss (Arjovsky et al., 2017)) which would not suffer this gradient attenuation. We found that even with gradient penalties and brief re-tuning of optimizer hyperparameters, our models did not stably train for more than a few thousand iterations with this loss. We instead explored changing the margin of the hinge loss as a partial compromise: for a given model and minibatch of data, increasing the margin will result in more examples falling within the margin, and thus contributing to the loss.\(^3\) Training with a smaller margin (by a factor of 2) measurably reduces performance, but training with a larger margin (by up to a factor of 3) does not prevent collapse or reduce the noise in \(\mathbf{D}\)'s spectra. Increasing the margin beyond 3 results in unstable training similar to using the Wasserstein loss. Finally, the memorization argument might suggest that using a smaller \(\mathbf{D}\) or using dropout in \(\mathbf{D}\) would improve training by reducing its capacity to memorize, but in practice this degrades training.

<--- Page Split --->

## APPENDIX H NEGATIVE RESULTS

We explored a range of novel and existing techniques which ended up degrading or otherwise not affecting performance in our setting. We report them here; our evaluations for this section are not as thorough as those for the main architectural choices. Our intention in reporting these results is to save time for future work, and to give a more complete picture of our attempts to improve performance or stability. We note, however, that these results must be understood to be specific to the particular setup we used. A pitfall of reporting negative results is that one might report that a particular technique doesn't work, when the reality is that this technique did not have the desired effect when applied in a particular way to a particular problem. Drawing overly general conclusions might close off potentially fruitful avenues of research.

- We found that doubling the depth (by inserting an additional Residual block after every up- or down-sampling block) hampered performance.
- We experimented with sharing class embeddings between both G and D (as opposed to just within G). This is accomplished by replacing D's class embedding with a projection from G's embeddings, as is done in G's BatchNorm layers. In our initial experiments this seemed to help and accelerate training, but we found this trick scaled poorly and was sensitive to optimization hyperparameters, particularly the choice of number of D steps per G step.
- We tried replacing BatchNorm in G with WeightNorm (Salimans & Kingma, 2016), but this crippled training. We also tried removing BatchNorm and only having Spectral Normalization, but this also crippled training.
- We tried adding BatchNorm to D (both class-conditional and unconditional) in addition to Spectral Normalization, but this crippled training.
- We tried varying the choice of location of the attention block in G and D (and inserting multiple attention blocks at different resolutions) but found that at \(128 \times 128\) there was no noticeable benefit to doing so, and compute and memory costs increased substantially.
We found a benefit to moving the attention block up one stage when moving to \(256 \times 256\), which is in line with our expectations given the increased resolution.
- We tried using filter sizes of 5 or 7 instead of 3 in either G or D or both. We found that a filter size of 5 in G provided a small improvement over the baseline but came at an unjustifiable compute cost. All other settings degraded performance.
- We tried varying the dilation for convolutional filters in both G and D at \(128 \times 128\), but found that even a small amount of dilation in either network degraded performance.
- We tried bilinear upsampling in G in place of nearest-neighbors upsampling, but this degraded performance.
- In some of our models, we observed class-conditional mode collapse, where the model would only output one or two samples for a subset of classes but was still able to generate samples for all other classes. We noticed that the collapsed classes had embeddings which had become very large relative to the other embeddings, and attempted to ameliorate this issue by applying weight decay to the shared embedding only. We found that small amounts of weight decay \((10^{-6})\) degraded performance, and that only even smaller values \((10^{-8})\) did not degrade performance; these values, however, were also too small to prevent the class vectors from exploding. Higher-resolution models appear to be more resilient to this problem, and none of our final models appear to suffer from this type of collapse.
- We experimented with using MLPs instead of linear projections from G's class embeddings to its BatchNorm gains and biases, but did not find any benefit to doing so. We also experimented with Spectrally Normalizing these MLPs, and with providing these (and the linear projections) with a bias at their output, but did not notice any benefit.
- We tried gradient norm clipping (both the global variant typically used in recurrent networks, and a local version where the clipping value is determined on a per-parameter basis) but found this did not alleviate instability.

<--- Page Split --->

## APPENDIX I HYPERPARAMETERS

We performed various hyperparameter sweeps in this work:

We swept the Cartesian product of the learning rates for each network through \([10^{-5}, 5\cdot 10^{-5}, 10^{-4}, 2\cdot 10^{-4}, 4\cdot 10^{-4}, 8\cdot 10^{-4}, 10^{-3}]\), and initially found that the SA-GAN settings (\(\mathbf{G}\)'s learning rate \(10^{-4}\), \(\mathbf{D}\)'s learning rate \(4\cdot 10^{-4}\)) were optimal at lower batch sizes; we did not repeat this sweep at higher batch sizes, but did try halving and doubling the learning rate, arriving at the halved settings used for our experiments.

We swept the R1 gradient penalty strength through \([10^{-3}, 10^{-2}, 10^{-1}, 0.5, 1, 2, 3, 5, 10]\). We find that the strength of the penalty correlates negatively with performance, but that settings above 0.5 impart training stability.

We swept the keep probabilities for Dropout in the final layer of \(\mathbf{D}\) through \([0.5, 0.6, 0.7, 0.8, 0.9, 0.95]\). We find that Dropout has a similar stabilizing effect to R1 but also degrades performance.

We swept \(\mathbf{D}\)'s Adam \(\beta_{1}\) parameter through \([0.1, 0.2, 0.3, 0.4, 0.5]\) and found it to have a light regularization effect similar to Dropout, but not to significantly improve results. Higher \(\beta_{1}\) terms in either network crippled training.
We swept the strength of the modified Orthogonal Regularization penalty in \(\mathbf{G}\) through \([10^{-5}, 5\cdot 10^{-5}, 10^{-4}, 5\cdot 10^{-4}, 10^{-3}, 10^{-2}]\), and selected \(10^{-4}\).

<--- Page Split --->
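For reference, the modified Orthogonal Regularization whose strength is swept above is defined in the main text of the paper as \(R_{\beta}(W) = \beta \lVert W^{\top}W \odot (\mathbf{1} - I)\rVert_{F}^{2}\), penalizing only off-diagonal Gram entries so that filter norms remain unconstrained. A minimal NumPy sketch, under our own convention that each flattened filter is a row (this is an illustration, not the released code):

```python
import numpy as np

def ortho_penalty(w, beta=1e-4):
    """Off-diagonal orthogonal penalty; beta=1e-4 is the value selected above."""
    w = np.asarray(w)
    w = w.reshape(w.shape[0], -1)                    # one row per filter
    gram = w @ w.T                                   # filter-by-filter Gram matrix
    off_diag = gram * (1.0 - np.eye(gram.shape[0]))  # zero out the diagonal
    return beta * np.sum(off_diag ** 2)
```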
accept
Accept (Oral)
8
ICLR_2019_paper_1345
iclr
2,019
# LEARNING TO CONTROL SELF-ASSEMBLING MORPHOLOGIES: A STUDY OF GENERALIZATION VIA MODULARITY

Anonymous authors. Paper under double-blind review.

## ABSTRACT

Much of contemporary sensorimotor learning assumes that one is already given a complex agent (e.g., a robotic arm) and the goal is to learn to control it. In contrast, this paper investigates a modular co-evolution strategy: a collection of primitive agents learns to self-assemble into increasingly complex collectives in order to solve control tasks. Each primitive agent consists of a limb and a neural controller. Limbs may choose to link up to form collectives, with linking being treated as a dynamic action. When two limbs link, a joint is added between them, actuated by the 'parent' limb's controller. This forms a new 'single' agent, which may further link with other agents. In this way, complex morphologies can emerge, controlled by a policy whose architecture is in explicit correspondence with the morphology. In experiments, we demonstrate that agents with these modular and dynamic topologies generalize better to test-time environments compared to static and monolithic baselines. Project videos are available at https://doubleblindICLR19.github.io/self-assembly/.

## 1 INTRODUCTION

Only a tiny fraction of the Earth's biomass is composed of higher-level organisms capable of complex sensorimotor actions of the kind popular in contemporary robotics research (navigation, pick and place, etc.). A large portion is primitive single-celled organisms, such as bacteria (Bar-On et al., 2018). Possibly the single most pivotal event in the history of evolution was the point when single-celled organisms switched from always competing with each other for resources to sometimes cooperating, first by forming colonies, and later by merging into multicellular organisms (Alberts et al., 1994). These modular self-assemblies were successful because they retained the high adaptability of single-celled organisms while making vastly more complex behaviours possible. Like many researchers before us (Murata & Kurokawa, 2007; Sims, 1994; Tu & Terzopoulos, 1994; Yim et al., 2000; 2007), we are inspired by the biology of multicellular evolution as a model for emergent complexity in artificial agents. Unlike most previous work, however, we are primarily focused on modularity as a way of improving generalization to novel environmental conditions.

In this paper, we present a study of modular self-assemblies of primitive agents ("limbs") which can link up to solve a shared task. The limbs have the option to bind together by adding a joint that connects their morphologies (Figure 1a), and when they do so, they pass messages and share rewards. Each limb comes with a simple neural net that controls the torque applied to its joints. Linking and unlinking are treated as dynamic actions, so that the limb assembly can change shape during a single episode of the simulation. This setup has previously been explored in robotics as "self-reconfiguring modular robots" (Stoy et al., 2010). However, unlike prior work on such robots, where the control policies are hand-defined, we show how to learn the policies and study the generalization properties that emerge. To make this problem computationally tractable, we do not allow the limb assemblies to form cycles in morphology. Limbs pass messages to their neighbors in this graph in order to coordinate behavior.
All limbs share a common policy function, parametrized by a neural network, which takes the messages from adjacent limbs as input and outputs a torque to rotate the limb, in addition to the linking/unlinking action. We call the aggregate neural network a Dynamic Graph Network (DGN)

<--- Page Split --->

![](images/1_0.jpg) <center>Figure 1: We study the modular co-evolution of control and morphology where a collection of primitive agents self-assemble to form complex collectives to perform given tasks. (a) Each primitive agent is a limb containing a cylindrical body and a configurable motor. These limbs can connect with each other using the attached motor as a joint. (b) We illustrate our dynamic agents in four environments / tasks: standing up, locomotion, manipulation (pushing), and sumo wrestling. See project videos at https://doubleblindICLR19.github.io/self-assembly/.</center>

since it is a graph neural network (Scarselli et al., 2009) that can dynamically change topology as a function of its own outputs. We test our limb assemblies on four tasks: standing, locomotion, pushing, and wrestling, shown in Figure 1b. We find that DGNs enable a single modular policy to control multiple possible morphologies, even those unseen during training. For example, a 6-limb policy, trained to build a 6-limb tower, can be applied at test time to 12 limbs, and results in a 12-limb tower. Not only are the policies robust to changes in the number of limbs, they also generalize well to novel test-time environmental conditions, such as added wind or new landscapes. These results together demonstrate that our modular and dynamic self-assembling agents have advantages in generalizing to new environments and tasks.

Our main contributions are:

- Training primitive agents that self-assemble into complex morphologies to jointly solve control tasks.
- Formulating morphological search as a reinforcement learning problem, where linking and unlinking are treated as actions.
- Representing the policy via a graph whose topology matches the agent's physical structure.
- Demonstrating that these self-assembling agents both train and generalize better than fixed-morphology baselines.

## 2 ENVIRONMENT AND AGENTS

Investigating the co-evolution of control (i.e., software) and morphology (i.e., hardware) is not supported within the standard benchmark environments typically used for sensorimotor control, requiring us to create our own. We opted for a minimalist design for our agents, the environment, and the reward structure, which is crucial to ensuring that the emergence of limb assemblies with complex morphologies is not forced, but happens naturally.

Environment Structure: Our environment contains an arena where a collection of primitive agent limbs can self-assemble to perform control tasks. This arena is a ground surface equipped with

<--- Page Split --->

gravity and friction. The arena can be procedurally changed to generate a variety of novel terrains by changing the height of each tile on the ground (see Figure 1b). To evaluate the generalization properties of our agents, we generate a series of novel terrains. These include a bumpy terrain, generated by randomizing the height of nearby tiles; a stairs terrain, by incrementally increasing the height of each row of tiles; a hurdles terrain, by changing the height of rows of tiles; a gaps terrain, by removing alternate rows of tiles; etc. Some variations also include putting the arena 'under water', which amounts to increased drag (i.e., buoyancy). A minimal sketch of this kind of tile-height generation follows.
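This is our own illustration of the procedural generation described above (the function and parameter names are hypothetical; the actual environments are built in Unity):

```python
import numpy as np

def make_terrain(kind, n_rows=32, n_cols=32, rng=None):
    """Assign a height to every ground tile; each kind mirrors a variant above."""
    rng = rng if rng is not None else np.random.default_rng()
    h = np.zeros((n_rows, n_cols))
    if kind == "bumpy":                       # randomize nearby tile heights
        h = rng.uniform(0.0, 0.3, size=h.shape)
    elif kind == "stairs":                    # incrementally raise each row
        h += np.linspace(0.0, 2.0, n_rows)[:, None]
    elif kind == "hurdles":                   # raise periodic rows of tiles
        h[::4, :] = 0.5
    elif kind == "gaps":                      # remove alternate rows of tiles
        h[1::2, :] = -np.inf                  # -inf marks a removed tile here
    return h
```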
We start our environment with a set of six primitive limb agents on the ground, which can assemble to form collectives to perform complex tasks.

Agent Structure: All our primitive limb agents share the same simple structure: a cylindrical body with a configurable motor on one end; the other end is free. The free end of a limb can link up with the motor end of another limb, and the motor then acts as a joint between the two limbs with three degrees of rotation. Hence, one can refer to the motor end of the cylindrical limb as its parent-end and to the free end as its child-end. Multiple limbs can attach their child-ends to the parent-end of another limb, as shown in Figure 1(a), to allow complex graph morphologies to emerge. The limb at the parent-end controls the torques of the joint. The unlinking action can be easily implemented by detaching two limbs, but the linking action has to deal with the ambiguity of which limb to connect to (if at all). To resolve these modeling issues, we implement the linking action by attaching the closest limb within a small radius around the parent-node. If no other limb is present within the threshold range, the linking action has no effect.

The primitive limb agents are dropped in an environment to jointly solve a given control task. One key component of the self-assembling agent setup that makes it different from typical multi-agent scenarios (Wooldridge, 2009) is that if some agents assemble to form a collective, the resulting morphology becomes a new single agent, and all limbs within the morphology maximize a joint reward function. The output action space of each primitive agent contains the continuous torque values to be applied to the motor connected to the agent, denoted by \(\{\tau_{\alpha}, \tau_{\beta}, \tau_{\gamma}\}\) for the three degrees of rotation. In addition to the torque controls, each limb can decide to attach another limb at its parent-end, or to unlink its child-end if it is already connected to another limb. The linking and unlinking decisions are binary. This complementary role assignment of child and parent ends, i.e., the parent-end can only link and the child-end can only unlink, makes it possible to decentralize the control across limbs in a self-assembly.

In our self-assembling setup, each agent limb only has access to its local sensory information and does not know about the other limbs. The sensory input of each agent includes its own dynamics, i.e., the location of the limb in 3D Euclidean coordinates, its velocity, angular rotation, and angular velocity. Each end of the limb also has a trinary touch sensor to detect whether the end of the cylinder is touching 1) the floor, 2) another limb, or 3) nothing. Additionally, we also provide our limbs with a very simple point depth sensor that captures the surface height on a \(9 \times 9\) grid around the projection of the limb's center onto the surface.

One essential requirement to operationalize this setup is an efficient simulator that allows simultaneous simulation of several of these primitive limb agents. We implement our environments in the Unity ML framework (Juliani et al., 2018), which is one of the dominant platforms for designing realistic games. For computational reasons, we do not allow the emergence of cycles in the self-assembling agents, by not allowing limbs to link up with limbs already attached within the same morphology. However, our setup is trivially extensible to general graphs.
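The linking mechanics above can be summarized with a small tree data structure. The sketch below is our own simplification (names are hypothetical): a parent-end may accept multiple children, a child-end attaches to at most one parent, and links that would create a cycle are rejected, as are links outside the attachment radius (omitted here).

```python
class Limb:
    """One primitive agent: a node in the (acyclic) morphology graph."""
    def __init__(self, idx):
        self.idx = idx
        self.parent = None        # limb our child-end is attached to
        self.children = []        # limbs attached at our parent-end

    def root(self):
        node = self
        while node.parent is not None:
            node = node.parent
        return node

    def link(self, other):
        """sigma_link: attach `other`'s free child-end at our parent-end."""
        if other.parent is None and other.root() is not self.root():
            other.parent = self
            self.children.append(other)

    def unlink(self):
        """sigma_unlink: detach our child-end from the current parent."""
        if self.parent is not None:
            self.parent.children.remove(self)
            self.parent = None
```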
## 3 LEARNING TO CONTROL SELF-ASSEMBLING MORPHOLOGIES

Consider a set of primitive limbs indexed by \(i \in \{1, 2, \ldots, n\}\), which are dropped in the environment arena \(\mathcal{E}\) to perform a given continuous control task. If needed, these limbs can assemble to form complex collectives in order to improve their performance on the task. The task is represented by a reward function \(r_{t}\), and the goal of the limbs is to maximize the discounted sum of rewards over time \(t\). If some limbs assemble to form a collective, the resulting morphology effectively becomes a single agent with a joint network that maximizes the joint reward of the connected limbs. Further, the reward of an assembled morphology is a function of the whole morphology, not of the individual agent limbs. For instance, in the task of learning to stand up, the reward is the height of the individual limbs if they are separate, but the height of the whole morphology if those limbs have assembled into a collective. We now discuss our proposed formulation for learning to control these self-assembling agents.

<--- Page Split --->

![](images/3_0.jpg) <center>Figure 2: High-level visualization of our method. A set of primitive 'limbs' learn to self-assemble into morphologies where each limb is represented by a neural network linked via a graph of physical edges. The inset on the right shows the message-passing diagram for each node. Project videos at https://doubleblindICLR19.github.io/self-assembly/.</center>

### 3.1 CO-EVOLUTION: LINKING/UNLINKING AS AN ACTION

To learn a modular controller policy that can generalize to novel setups, our agents must learn the controller jointly as the morphology evolves over time. The limbs should simultaneously decide which torques to apply to their respective motors, while taking into account the connected morphology. Our hypothesis is that if a controller policy could learn in a modular fashion over iterations of increasingly sophisticated morphologies (see Figure 3b), it could learn to be robust and generalizable to diverse situations. So, how can we optimize control and morphology under a common end-to-end framework?

We propose to treat the decisions of linking and unlinking as additional actions of our primitive limb agents. The total action space \(a_{t}\) at each iteration \(t\) can be denoted as \(\{\tau_{\alpha},\tau_{\beta},\tau_{\gamma},\sigma_{link},\sigma_{unlink}\}\), where \(\tau_{*}\) denote the raw continuous torque values to be applied at the motor, and \(\sigma_{*}\) denote the binary actions of whether to connect another limb at the parent-end or to disconnect the child-end from an already attached limb. This simple view of morphological evolution allows us to use ideas from learning-driven control, in particular, reinforcement learning (Sutton & Barto, 1998).

### 3.2 MODULARITY: SELF-ASSEMBLING AGENT AS A GRAPH OF LIMBS

Integration of control and morphology in a common framework is only the first step. The key question is how to model this controller policy such that it is modular and reuses information across generations of morphologies. Let \(a_{t}^{i}\) be the action space and \(s_{t}^{i}\) the local sensory input space of agent \(i\). One naive approach to maximizing the reward is to simply combine the states of all limbs into a joint input space and output all the actions jointly using a single network. Formally, the policy is simply \(\bar{a}_{t} = [a_{t}^{0},a_{t}^{1},\ldots,a_{t}^{n}] = \Pi (s_{t}^{0},s_{t}^{1},\ldots ,s_{t}^{n})\).
This interprets the self-assemblies as a single monolithic agent, ignoring the graphical structure. This is the standard approach for solving many control problems, e.g., Mujoco environments like humanoid (Brockman et al., 2016), where the policy \(\Pi\) is trained to maximize the sum of discounted rewards using reinforcement learning.

In this work, we instead represent the policy of the agent via a graph neural network (Scarselli et al., 2009) in such a way that it explicitly corresponds to the morphology of the agent. Consider the collection of primitive agent limbs as a graph \(G\), where each node corresponds to a primitive limb agent \(i\). Two limbs being physically connected by a joint corresponds to an edge in the graph. At a joint, the limb which connects itself via its parent-end acts as the parent-node of the corresponding edge, and the limbs which connect to that joint via their child-ends are child-nodes. The parent-node (i.e., the agent with the parent-end) controls the torque of the edge (i.e., the joint motor), as described in Section 2.

<--- Page Split --->

### 3.3 DYNAMIC GRAPH NETWORKS (DGN)

Each primitive limb node \(i\) has a policy controller of its own, represented by a neural network \(\pi_{\theta}^{i}\), and receives a corresponding reward \(r_{t}^{i}\) at each time step \(t\). We represent the policy of the self-assembled agent by the aggregated neural network that is connected in the same graphical manner as the physical morphology. The edge connectivity of the graph is represented in the overall graph policy by passing messages that flow from each limb network to the other limbs physically connected to it via a joint. The parameters \(\theta\) are shared across the primitive limb agents, allowing the overall policy of the graph to be modular with respect to each node. However, recall that the agent morphologies are dynamic, i.e., the connectivity of the limbs changes based on policy outputs. This changes the edge connectivity of the corresponding graph network at every timestep, depending on the actions predicted by each limb controller network in the previous timestep. Hence, we call this aggregate neural net a Dynamic Graph Network (DGN), since it is a graph neural network that can dynamically change topology as a function of its own outputs in the previous iteration.

DGN Optimization: A typical rollout of our self-assembling agents during a training episode contains a sequence of torques \(\tau_{t}^{i}\) and linking actions \(\sigma_{t}^{i}\) for each limb at each timestep \(t\). The policy parameters \(\theta\) are optimized to jointly maximize the reward of each limb network:

\[\max_{\theta}\sum_{i = 1}^{n}\mathbb{E}_{a^{i}\sim \pi_{\theta}^{i}}\Big[\sum_{t}r_{t}^{i}\Big] \quad (1)\]

We optimize this objective via reinforcement learning, in particular the policy gradient method PPO (Schulman et al., 2017).

DGN Connectivity: The topology is captured in the DGN by passing messages through the edges between individual network nodes. These messages allow each node to take into account its context relative to other nodes, and convey information about the neighbouring policy network nodes in the graph. Since the parameters of these limb networks are shared across nodes, these messages can be seen as context information that informs the policy of its role in the corresponding connected component of the graph.
The aggregated flow through the whole graph can be encapsulated by passing these contextual messages in topological order (no cycles). One can either do a top-down pass, beginning from the root node (i.e., the node with no parents) to the leaf nodes, or do a bottom-up pass, from leaves to root. This idea is inspired by classical work on Bayesian graph networks, where message passing is used for belief propagation (Jordan, 2003). When the graph contains cycles, this idea can be extended by performing message passing iteratively through the cycle until convergence, similar to loopy belief propagation in Bayesian graphs (Murphy et al., 1999). We now discuss these message-passing strategies:

(a) Top-down message passing: Instead of defining \(\pi_{\theta}^{i}\) to be just a function of state, \(\pi_{\theta}^{i}:s_{t}^{i}\to a_{t}^{i}\), we also pass each limb's policy network the information about its parent node. Formally, one can redefine \(\pi_{\theta}^{i}\) as \(\pi_{\theta}^{i}:[s_{t}^{i},m_{t}^{p_{i}}]\to a_{t}^{i}\), where \(p_{i}\) is the parent of node \(i\). However, this also implies that each network node should pass context information as messages to its children networks for them to take as input. So, we need to define \(m_{t}^{i}\), the output message of each node \(i\), which is passed as the input context message to all its children; we simply append this to the output of \(\pi_{\theta}^{i}\). Thus, we finally define \(\pi_{\theta}^{i}:[s_{t}^{i},m_{t}^{p_{i}}]\to [a_{t}^{i},m_{t}^{i}]\). If \(i\) has no parent (i.e., it is a root), a vector of zeros is passed in \(m_{t}^{p_{i}}\). This is computed recursively until the messages reach the leaf nodes.

(b) Bottom-up message passing: In this strategy, messages are passed from the leaf nodes to the root, i.e., each agent gets information from its children, but not from its parent. Similar to top-down, we redefine \(\pi_{\theta}^{i}\) as \(\pi_{\theta}^{i}:[s_{t}^{i},m_{t}^{C_{i}}]\to [a_{t}^{i},m_{t}^{i}]\), where \(m_{t}^{i}\) is the output message of the policy that goes into the parent limb, and \(m_{t}^{C_{i}}\) is the aggregated input message from all the children nodes, i.e., \(m_{t}^{C_{i}} = \sum_{c\in C_{i}}m_{t}^{c}\). If \(i\) has no children (i.e., it is a leaf), a vector of zeros is passed in \(m_{t}^{C_{i}}\). Messages are passed recursively until the root node.

(c) Bottom-up then top-down message passing: In this strategy, we pass messages both ways: bottom-up, then top-down. In the absence of cycles in the graph, a one-way pass (either top-down or bottom-up) is sufficient to capture the aggregated information, similar to Bayesian trees (Jordan, 2003). Even though both-way message passing is redundant, we still explore it as an alternative, since it might help learning when the agent grows too complex. This is implemented by dividing the policy into two
In the top- down pass, messages from the parent are used, in addition with the agent's own message, to output its action: \(\pi_{\theta_{2}}^{i}:[m_{t}^{i},m_{t}^{p_{i}}]\to [a_{t}^{i},\hat{m}_{t}^{i}]\) where \(\hat{m}_{t}^{i}\) are the messages passed to the children nodes. (d) No message passing: Note that for some environments or tasks, the context from the other nodes might not be a necessary requirement for effective control. In such scenarios, passing messages might create an extra-overhead for training a DGN. Importantly, even with no messages being passed, the DGN framework still allows for coordination between limbs. This is because the control and morphology are still learned jointly in a modular manner through the course of an episode i.e. the morphology and control in each timestep t depends explicitly on the physical morphology and the torques at previous timestep t 1. To implement the no message passing variant of DGN, we simply zero-out the messages \(m_{t}^{p_{i}},m_{t}^{i}\) at each timestep \(t\) . This is similar to a typical cooperative multi-agent setup (Wooldridge, 2009) where each limb makes its own decisions in response to the previous actions of the other agents. However, our setup differs in that our agents may physically join up, rather than just coordinate behavior. ## 4 IMPLEMENTATION DETAILS AND BASELINES Implementation Details: We use PPO (Schulman et al., 2017) as the underlying reinforcement learning method to optimize Equation 1. Limb policies are represented by fully- connected neural network and trained with a learning rate of \(3e - 4\) , discount factor of 0.995 and entropy coefficient of 0.01. Each episode is 5000 steps long at training and 1200 steps long at testing. Across all the tasks, the number of limbs at training is kept fixed to 6. Limbs start each episode disconnected and located just above the ground plane at random locations, as shown in Figure 3b. During generalization to novel scenarios, we experiment with changing the number of limbs to 12 or 3 to test the same policy without any further finetuning. All of our tasks require the agent to output continuous raw torque control values. Baselines We compare the role of the above four message passing strategies in DGN across a variety of tasks. Different strategies may work well in different scenarios. We further compare how well these dynamic morphologies perform in comparison to a learned monolithic policy for both dynamic and fixed morphologies. In particular, we compare to a (a) Monolithic Policy, Dynamic Graph: in this baseline, our agents are still dynamic and self- assemble to perform the task, however, their controller is represented by a single monolithic policy that takes as input the combined state of all agents and outputs actions for each of them. (b) Monolithic Policy, Fixed Graph: For each task, a hand- designed morphology is constructed from the limbs and trained using a single monolithic policy that takes as input the combined state of all agents and outputs the actions for all agents. The agents are not able to combine or separate This can be compared to a standard robotics setup in which a morphology is predefined and then a policy is learned to control it. Note that one cannot generalize Monolithic Policy baselines to scenarios where the number of limbs vary as it would change the action and state space of the policy. 
For the Fixed Graph baseline, we chose the fixed morphology to be a straight-line chain of 6 limbs (i.e., a linear morphology) in all the experiments, including the tasks of standing up and locomotion. This linear chain may be optimal for standing as tall as possible, but it is not necessarily optimal for learning to stand; the same holds for locomotion. Further, note that the best-performing DGN variants also converge to a linear-chain morphology (shown in Figure 3b and in the video results on the project website) to achieve the best reward in the standing-up task. Moreover, one can confirm that the locomotion task is also solvable with a linear morphology, because one of the DGN ablation methods converged to a linear morphology while doing well at locomotion (see video).

## 5 EXPERIMENTS: EMERGENT MORPHOLOGIES AND GENERALIZATION

We test the co-evolution of morphology and control across four tasks where self-assembling agents learn to: (a) stand up, (b) perform locomotion, (c) perform manipulation, and (d) fight in a sumo wrestling environment. There are two primary objectives of our investigation. The first is to determine

<--- Page Split --->

![](images/6_0.jpg) <center>Figure 3: Training of self-assembling agents: (a) The training performance of different methods for joint training of control and morphology for the task of learning to stand up. The generalization performance of these policies across new scenarios is shown in Table 1. (b) The gradual co-evolution of the controller as well as the morphology of self-assembling agents over the course of training.</center>

<table><tr><td rowspan="2">Environment</td><td colspan="4">DGN</td><td colspan="2">Monolithic Policy</td></tr><tr><td>(up then down)</td><td>(top-down)</td><td>(bottom-up)</td><td>(no msgs)</td><td>(dynamic graph)</td><td>(fixed graph)</td></tr><tr><td colspan="7">Training Environment</td></tr><tr><td>Standing Up</td><td>15253</td><td>13486</td><td>17518</td><td>12470</td><td>4104</td><td>5351</td></tr><tr><td colspan="7">Zero-Shot Generalization</td></tr><tr><td>More (2x) Limbs</td><td>15006 (98%)</td><td>14429 (107%)</td><td>19796 (113%)</td><td>14084 (113%)</td><td>-</td><td>-</td></tr><tr><td>Fewer (.5x) Limbs</td><td>11730 (77%)</td><td>9842 (73%)</td><td>10839 (62%)</td><td>9070 (73%)</td><td>-</td><td>-</td></tr><tr><td>Water + 2x Limbs</td><td>16642 (109%)</td><td>14192 (105%)</td><td>16871 (96%)</td><td>13360 (107%)</td><td>-</td><td>-</td></tr><tr><td>Winds</td><td>14654 (96%)</td><td>12116 (90%)</td><td>16803 (96%)</td><td>12560 (101%)</td><td>3923 (96%)</td><td>4531 (85%)</td></tr><tr><td>Strong Winds</td><td>14727 (97%)</td><td>13416 (99%)</td><td>15853 (90%)</td><td>12257 (98%)</td><td>3937 (96%)</td><td>4961 (93%)</td></tr></table>

Table 1: Testing generalization for the standing-up task. We show a quantitative evaluation of the generalization ability of the learned policies. For each of the methods, we first pick the best-performing model from the training run and then evaluate it on each of the novel scenarios without any further finetuning, i.e., in a zero-shot manner. We report first the score attained by the self-assembling agent and then, in parentheses, the percentage of training performance retained upon transfer. Higher numbers are better.

if such a modular co-evolution results in the emergence of complex self-assembling agents. The second is to evaluate if the emerged modular controller generalizes to novel scenarios.
### 5.1 TASK: STANDING UP

In this task, each agent's reward is proportional to the highest vertical point in its combined morphology, i.e., the limb assemblies should try to maximize their \(Y\)-axis height. Limbs have an incentive to self-assemble, since the potential reward scales with the number of agents in the body, provided the agent can learn the controller for it. The learning process begins with six limbs falling randomly on the ground, as shown in Figure 3b. Initially, each agent learns independently of the others, but over the course of training the limbs learn to self-assemble into a complex agent. Figure 3a compares different methods in terms of their performance on the task of standing as high as possible. We found that our DGN policy variants perform significantly better than the monolithic policies for the standing-up task. In particular, the bottom-up and up-then-down message-passing strategies attain the highest reward. To verify the implementation of our monolithic policy with fixed morphology, we show its ablation with a varying number of limbs in Section A.1 of the supplementary.

However, the key question is whether the learned policy generalizes to novel scenarios. We investigate this by testing the learned policies without any further finetuning, i.e., zero-shot generalization, in novel scenarios: doubling the number of limbs, halving the number of limbs, increasing drag (i.e., 'under water') and the number of limbs at the same time, and adding random push-n-pulls (i.e., 'wind') of varying strength. As the results in Table 1 show, DGN achieves performance similar to that on the training environment, despite never having seen these scenarios before. Interestingly, the DGN variants seem to generalize better than the fixed-graph policies (last column). Monolithic

<--- Page Split --->

![](images/7_0.jpg) <center>Figure 4: Training self-assembling agents: We show the performance of different methods for joint training of control and morphology for three tasks: standing up in the presence of wind and random push-n-pulls (left), locomotion on bumpy terrain (center), and manipulation (pushing) of two objects (right). These policies generalize to novel scenarios, as shown in the respective tables.</center>

<table><tr><td rowspan="2">Environment</td><td colspan="4">DGN</td><td colspan="2">Monolithic Policy</td></tr><tr><td>(up then down)</td><td>(top-down)</td><td>(bottom-up)</td><td>(no msgs)</td><td>(dynamic graph)</td><td>(fixed graph)</td></tr><tr><td colspan="7">Training Environment</td></tr><tr><td>Standing Up in Wind</td><td>16339</td><td>18423</td><td>-</td><td>17237</td><td>4176</td><td>4500</td></tr><tr><td colspan="7">Zero-Shot Generalization</td></tr><tr><td>(S)trong Winds</td><td>15649 (96%)</td><td>17384 (94%)</td><td>-</td><td>-</td><td>4010 (96%)</td><td>4507 (100%)</td></tr><tr><td>2x Limbs + (S)Winds</td><td>16250 (99%)</td><td>15351 (83%)</td><td>-</td><td>15728 (91%)</td><td>-</td><td>-</td></tr><tr><td>Water + 2x(L) + (S)Winds</td><td>17254 (106%)</td><td>17068 (93%)</td><td>-</td><td>16592 (96%)</td><td>-</td><td>-</td></tr></table>

Table 2: Testing generalization for the standing-up task in the presence of random push-n-pulls (i.e., 'wind'). The best-performing model from training is evaluated on each of the novel scenarios without any further finetuning.
The score attained by the self-assembling agent is reported first and then, in parentheses, the percentage of training performance retained upon transfer. The bottom-up DGN failed due to an experimental error and will be reported in the final version of the paper.

policy baselines cannot be generalized to more or fewer limbs due to their fixed action and state spaces. A better understanding of these results may be obtained by looking at the dynamically combining morphologies in the project video.

### 5.2 TASK: STANDING UP IN THE PRESENCE OF RANDOM PUSH-N-PULLS (WIND)

The task here is the same as the previous one of learning to stand up. However, unlike in the previous subsection, here we also train in the presence of random push-n-pulls (i.e., 'wind'), with the hope of making the learned morphologies even more robust. The training performance in Figure 4a shows the superior performance of DGN with respect to the baselines. The generalization results in Table 2 show that the DGN both-ways message-passing variant is the most robust. This may be because, in the presence of distractors, communication in both directions can be helpful, since a random force on a single limb affects all the other attached limbs.

### 5.3 LOCOMOTION TASK

The reward function in this environment is defined as the distance covered by the agent along an axis; in particular, the limbs are rewarded in proportion to their velocity along the \(X\)-axis. The training environment is a bumpy terrain (shown in Figure 1(b)), and the training performance is shown in Figure 4b. Our DGN variants significantly outperform the monolithic baselines (see the supplementary, Section A.1, for an ablation). Interestingly, the DGN variant with no message passing performs the best. Upon in-depth investigation, we found that it is possible to do well on this locomotion task with a large variety of morphologies, unlike the task of standing up, where a tower is strongly preferable. Here, any morphology with sufficient height and forward velocity is able to make competitive progress in locomotion (see videos), reducing message passing to an unnecessary overhead. As discussed in Section 3.3, no message passing merely implies the absence of context to the limbs; the DGN aggregated policy is still modular and jointly learned with the morphology over the episode.

<--- Page Split --->

Table 3: Testing generalization for the locomotion task. The best-performing model from training is evaluated on each of the novel scenarios without any further finetuning. The score attained by the self-assembling agent is reported first and then, in parentheses, the percentage of training performance retained upon transfer.
<table><tr><td rowspan="2">Environment</td><td colspan="4">DGN</td><td colspan="2">Monolithic Policy</td></tr><tr><td>(up then down)</td><td>(top-down)</td><td>(bottom-up)</td><td>(no msgs)</td><td>(dynamic graph)</td><td>(fixed graph)</td></tr><tr><td>Training Environment</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Locomotion</td><td>3.91</td><td>6.87</td><td>8.71</td><td>9.0</td><td>0.96</td><td>2.96</td></tr><tr><td>Zero-Shot Generalization</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>More (2x) Limbs</td><td>4.01 (103%)</td><td>4.29 (63%)</td><td>5.47 (63%)</td><td>9.19 (102%)</td><td>-</td><td>-</td></tr><tr><td>Fewer (.5x) Limbs</td><td>3.52 (90%)</td><td>4.49 (65%)</td><td>6.64 (76%)</td><td>8.2 (91%)</td><td>-</td><td>-</td></tr><tr><td>Water + 2x Limbs</td><td>2.64 (68%)</td><td>3.54 (52%)</td><td>6.57 (75%)</td><td>7.2 (80%)</td><td>-</td><td>-</td></tr><tr><td>Hurdles</td><td>1.84 (47%)</td><td>3.66 (53%)</td><td>6.39 (73%)</td><td>5.56 (62%)</td><td>-0.77 (-79%)</td><td>-3.12 (-104%)</td></tr><tr><td>Gaps in Terrain</td><td>1.84 (47%)</td><td>2.8 (41%)</td><td>3.25 (37%)</td><td>4.17 (46%)</td><td>-0.32 (-33%)</td><td>2.09 (71%)</td></tr><tr><td>Bi-modal Bumps</td><td>2.97 (76%)</td><td>4.55 (66%)</td><td>6.62 (76%)</td><td>6.15 (68%)</td><td>-0.56 (-57%)</td><td>-0.44 (-14%)</td></tr><tr><td>Stairs</td><td>1.0 (26%)</td><td>4.25 (62%)</td><td>6.6 (76%)</td><td>8.59 (95%)</td><td>-8.8 (-912%)</td><td>-3.65 (-122%)</td></tr><tr><td>Inside Valley</td><td>4.37 (112%)</td><td>6.55 (95%)</td><td>5.29 (61%)</td><td>6.21 (69%)</td><td>0.47 (48%)</td><td>-1.35 (-45%)</td></tr></table> Table 4: Testing generalization for the manipulation task. The score attained by the self-assembling agent is reported first, followed in parentheses by the percentage of training performance retained. <table><tr><td rowspan="2">Environment</td><td colspan="4">DGN</td><td colspan="2">Monolithic Policy</td></tr><tr><td>(up then down)</td><td>(top-down)</td><td>(bottom-up)</td><td>(no msgs)</td><td>(dynamic graph)</td><td>(fixed graph)</td></tr><tr><td>Training Environment</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Manipulation</td><td>-7985</td><td>-7861</td><td>-8482</td><td>-9603</td><td>-8773</td><td>-7725</td></tr><tr><td>Zero-Shot Generalization</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>More (2x) Limbs</td><td>-14319 (-179%)</td><td>-14894 (-189%)</td><td>-9969 (-118%)</td><td>-10879 (-112%)</td><td>-</td><td>-</td></tr><tr><td>Water + 2x Limbs</td><td>-10724 (-134%)</td><td>-13278 (-169%)</td><td>-12368 (-146%)</td><td>-10362 (-108%)</td><td>-</td><td>-</td></tr></table> We evaluate the learned policy without any further finetuning on several scenarios: more limbs, fewer limbs, more limbs under water, a terrain with hurdles of a certain height, a terrain with gaps between platforms, a bumpy terrain with a bi-modal distribution of bump heights, stairs, and an environment with a valley surrounded by walls on both sides. These environments are procedurally generated as discussed in Section 2. Across these novel environments, the modular policies learned by DGN tend to generalize better than the monolithic agent policies, as indicated in Table 3. ### 5.4 TASK: MANIPULATION OF TWO OBJECTS The agents are dropped inside a room containing two objects, and the goal is to decrease the distance between the objects, as shown in Figure 1(b).
The reward for the agents is the negative distance between the objects, encouraging the behavior of pushing the blocks together. The training plots are shown in Figure 4c and the generalization results in Table 4. This is a very hard task due to sparse rewards: agents only receive reward once they move a block. The learned policies do not yet work well in this environment and only learn to move the blocks slightly (see video). We believe this task requires more reward engineering than the raw distance alone, and we will update with improved results in the final version. ### 5.5 TASK: SUMO WRESTLING BETWEEN TWO TEAMS In this task, we divide the limbs into two teams of 6 limbs each and drop them into an arena to fight. Each team is rewarded if any opponent limb falls out of the arena. The agents are trained via competitive self-play (Bansal et al., 2017; Tesauro, 1995). This is in contrast to the previous "single-team" tasks for self-assembling agents, i.e., standing, locomotion, and manipulation. We present it as an additional result demonstrating the wider applicability of the method. However, it is non-trivial to measure performance in self-play: the game is zero-sum, so rewards do not increase over time. Instead, we refer the reader to the qualitative results in the video. The policies learned by the self-assembling agents demonstrate some interesting behaviors, but there is a lot of room for improvement in future research. We will release these environments upon acceptance. ## 6 RELATED WORK Morphogenesis and self-reconfiguring modular robots: The idea of modular and self-assembling agents goes back at least to Von Neumann's Theory of Self-Reproducing Automata (Von Neumann et al., 1966). In robotics, such systems have been termed "self-reconfiguring modular robots" (Murata & Kurokawa, 2007; Stoy et al., 2010). There has been a lot of work in the modular robotics community on designing real hardware modules that can be docked with each other to form complex robotic morphologies (Daudelin et al., 2018; Gilpin et al., 2008; Romanishin et al., 2013; Wright et al., 2007; Yim et al., 2000). Our main contribution is to approach this problem from a learning perspective, in particular deep RL, and to study the resulting generalization properties. A variety of alternative approaches have been proposed to optimize agent morphologies, including genetic algorithms that search over a generative grammar (Sims, 1994) and direct optimization over morphology parameters with RL (Schaff et al., 2018). One key difference between these approaches and our own is that we achieve morphogenesis via dynamic actions (linking), which agents take during their lifetimes, whereas past approaches treat morphology as an optimization target to be updated between generations or episodes. Since the physical morphology also defines the connectivity of the policy net, our algorithm can also be viewed as performing a kind of neural architecture search (Zoph & Le, 2016) in physical agents. Graph neural networks: Encoding graphical structures into neural networks has been used in a large number of applications, including quantum chemistry (Gilmer et al., 2017), semi-supervised classification (Kipf & Welling, 2016), and representation learning (Yang et al., 2018). The works most similar to ours involve learning control policies.
For example, NerveNet (Wang et al., 2018) represents individual limbs and joints as nodes in a graph and demonstrates multi-limb generalization, just as our system does. However, the morphologies on which NerveNet operates are not learned jointly with the policy; they are hand-defined to be compositional in nature. Others (Battaglia et al., 2018; Huang et al., 2018) have shown that graph neural networks can also be applied to inference models as well as to planning. Many of these past works implement some variant of Graph Neural Networks (Scarselli et al., 2009), which operate on general graphs. Our method leverages the constraint that the morphologies can always be represented as a rooted tree in order to simplify the message passing. ## 7 DISCUSSION Modeling intelligent agents as modular, self-assembling morphologies has long been a very appealing idea. Efforts to create practical systems that evolve artificial agents go back at least two decades to the beautiful work of Karl Sims (Sims, 1994). In this paper, we revisit these ideas using the contemporary machinery of deep networks and reinforcement learning. Examining the problem in the context of machine learning, rather than optimization, we are particularly interested in modularity as a key to generalization, in terms of improving adaptability and robustness to novel environmental conditions. Poor generalization is the Achilles heel of modern robotics research, and our hope is that this could be a promising direction for addressing this key issue. We demonstrated a number of promising experimental results, suggesting that modularity does indeed improve generalization in simulated agents. While these are just initial steps, we believe the proposed research direction is promising and that its exploration will be fruitful to the research community. To encourage follow-up work, we will release all code, models, and environments online once the paper is published. ## REFERENCES Bruce Alberts, Dennis Bray, Julian Lewis, Martin Raff, Keith Roberts, and James D Watson. Molecular Biology of the Cell. Garland Publishing, New York, 1994.
Trapit Bansal, Jakub Pachocki, Szymon Sidor, Ilya Sutskever, and Igor Mordatch. Emergent complexity via multi-agent competition. CoRR, 2017.
Yinon M Bar-On, Rob Phillips, and Ron Milo. The biomass distribution on earth. Proceedings of the National Academy of Sciences, 2018.
Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
Jonathan Daudelin, Gangyuan Jing, Tarik Tosun, Mark Yim, Hadas Kress-Gazit, and Mark Campbell. An integrated system for perception-driven autonomy with modular robots. Science Robotics, 2018.
Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
Kyle Gilpin, Keith Kotay, Daniela Rus, and Iuliu Vasilescu. Miche: Modular shape formation by self-disassembly. IJRR, 2008.
De-An Huang, Suraj Nair, Danfei Xu, Yuke Zhu, Animesh Garg, Li Fei-Fei, Silvio Savarese, and Juan Carlos Niebles.
Neural task graphs: Generalizing to unseen tasks from a single video demonstration. arXiv preprint arXiv:1807.03480, 2018.
Michael I Jordan. An introduction to probabilistic graphical models, 2003.
Arthur Juliani, Vincent-Pierre Berges, Esh Vckay, Yuan Gao, Hunter Henry, Marwan Mattar, and Danny Lange. Unity: A general platform for intelligent agents. arXiv preprint arXiv:1809.02627, 2018.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
Satoshi Murata and Haruhisa Kurokawa. Self-reconfigurable robots. IEEE Robotics & Automation Magazine, 2007.
Kevin P Murphy, Yair Weiss, and Michael I Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, 1999.
John W Romanishin, Kyle Gilpin, and Daniela Rus. M-blocks: Momentum-driven, magnetic modular robots. In IROS, 2013.
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 2009.
Charles B. Schaff, David Yunis, Ayan Chakrabarti, and Matthew R. Walter. Jointly learning to construct and control agents using deep reinforcement learning. CoRR, abs/1801.01432, 2018. URL http://arxiv.org/abs/1801.01432.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Karl Sims. Evolving virtual creatures. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, 1994.
Kasper Stoy, David Brandt, and David J Christensen. Self-Reconfigurable Robots: An Introduction. MIT Press, Cambridge, 2010.
Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, 1998.
Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 1995.
Xiaoyuan Tu and Demetri Terzopoulos. Artificial fishes: Physics, locomotion, perception, behavior. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, 1994.
John Von Neumann, Arthur W Burks, et al. Theory of Self-Reproducing Automata. University of Illinois Press, 1966.
Tingwu Wang, Renjie Liao, Jimmy Ba, and Sanja Fidler. NerveNet: Learning structured policy with graph neural networks. In ICLR, 2018.
Michael Wooldridge. An Introduction to MultiAgent Systems. John Wiley & Sons, 2009.
Cornell Wright, Aaron Johnson, Aaron Peck, Zachary McCord, Allison Naaktgeboren, Philip Gianfortoni, Manuel Gonzalez-Rivero, Ross Hatton, and Howie Choset. Design of a modular snake robot. In IROS, 2007.
Zhilin Yang, Bhuwan Dhingra, Kaiming He, William W Cohen, Ruslan Salakhutdinov, Yann LeCun, et al. GLoMo: Unsupervisedly learned relational graphs as transferable representations. arXiv preprint arXiv:1806.05662, 2018.
Mark Yim, David G Duff, and Kimon D Roufas. PolyBot: A modular reconfigurable robot. In ICRA, 2000.
Mark Yim, Wei-Min Shen, Behnam Salemi, Daniela Rus, Mark Moll, Hod Lipson, Eric Klavins, and Gregory S Chirikjian. Modular self-reconfigurable robot systems [grand challenges of robotics]. IEEE Robotics & Automation Magazine, 2007.
Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
## A SUPPLEMENTARY MATERIAL ## A.1 PERFORMANCE OF FIXED-GRAPH BASELINE VS. NUMBER OF LIMBS To verify that the training of the Monolithic Policy w/ Fixed Graph baseline works at all, we ran it on the standing-up and locomotion tasks across a varying number of limbs. Figure 5 shows that the baseline performs well with fewer limbs, which suggests that the failure in the 6-limb case is indeed due to the morphology graph being fixed, and not due to the implementation of this baseline. ![](images/12_0.jpg) <center>Figure 5: The performance of the Monolithic Policy w/ Fixed Graph baseline as the number of limbs varies in two tasks: standing up (left) and locomotion (right). The monolithic baseline works well with few limbs (1-3) but fails with 6 limbs during training. </center> ## A.2 GENERALIZATION OF LEARNED POLICIES AT DIFFERENT TRAINING INTERVALS In this section, we show the generalization plots corresponding to Tables 1, 2, 3, and 4. To plot generalization, we take trained models from different training intervals and evaluate them on new environments without any finetuning, in a zero-shot manner. ![](images/12_1.jpg) <center>Figure 6: Generalization for the task of Standing Up: performance of different methods across novel scenarios without any finetuning. </center> ![](images/13_0.jpg) <center>Figure 7: Generalization for the task of Standing Up w/ Wind: performance of different methods across novel scenarios without any finetuning. </center> ![](images/13_1.jpg) <center>Figure 8: Generalization for the task of Locomotion: performance of different methods across novel scenarios without any finetuning. </center> ![](images/13_2.jpg) <center>Figure 9: Generalization for the task of Manipulation: performance of different methods across novel scenarios without any finetuning. </center>
## ABSTRACT Much of contemporary sensorimotor learning assumes that one is already given a complex agent (e.g., a robotic arm) and that the goal is to learn to control it. In contrast, this paper investigates a modular co-evolution strategy: a collection of primitive agents learns to self-assemble into increasingly complex collectives in order to solve control tasks. Each primitive agent consists of a limb and a neural controller. Limbs may choose to link up to form collectives, with linking treated as a dynamic action. When two limbs link, a joint is added between them, actuated by the 'parent' limb's controller. This forms a new 'single' agent, which may further link with other agents. In this way, complex morphologies can emerge, controlled by a policy whose architecture is in explicit correspondence with the morphology. In experiments, we demonstrate that agents with these modular and dynamic topologies generalize better to test-time environments than static and monolithic baselines. Project videos are available at https://doubleblindICLR19.github.io/self-assembly/. ## 1 INTRODUCTION Only a tiny fraction of the Earth's biomass is composed of higher-level organisms capable of the complex sensorimotor actions popular in contemporary robotics research (navigation, pick and place, etc.). A large portion consists of primitive single-celled organisms, such as bacteria (Bar-On et al., 2018). Possibly the single most pivotal event in the history of evolution was the point when single-celled organisms switched from always competing with each other for resources to sometimes cooperating, first by forming colonies, and later by merging into multicellular organisms (Alberts et al., 1994). These modular self-assemblies were successful because they retained the high adaptability of single-celled organisms while making it possible for vastly more complex behaviours to emerge. Like many researchers before us (Murata & Kurokawa, 2007; Sims, 1994; Tu & Terzopoulos, 1994; Yim et al., 2000; 2007), we are inspired by the biology of multicellular evolution as a model for emergent complexity in artificial agents. Unlike most previous work, however, we are primarily focused on modularity as a way of improving generalization to novel environmental conditions. In this paper, we present a study of modular self-assemblies of primitive agents: "limbs" that can link up to solve a shared task. The limbs have the option to bind together by adding a joint that connects their morphologies (Figure 1a), and when they do so, they pass messages and share rewards. Each limb comes with a simple neural net that controls the torque applied to its joints. Linking and unlinking are treated as dynamic actions, so that the limb assembly can change shape within a single episode of the simulation. This setup has previously been explored in robotics as "self-reconfiguring modular robots" (Stoy et al., 2010). However, unlike prior work on such robots, where the control policies are hand-defined, we show how to learn the policies and study the generalization properties that emerge. To make the problem computationally tractable, we do not allow the limb assemblies to form cycles in morphology. Limbs pass messages to their neighbors in this graph in order to coordinate behavior. All limbs share a common policy function, parametrized by a neural network, which takes the messages from adjacent limbs as input and outputs a torque to rotate the limb, in addition to the linking/unlinking action.
We call the aggregate neural network a Dynamic Graph Network (DGN), since it is a graph neural network (Scarselli et al., 2009) that can dynamically change topology as a function of its own outputs. ![](images/1_0.jpg) <center>Figure 1: We study the modular co-evolution of control and morphology, where a collection of primitive agents self-assemble into complex collectives to perform given tasks. (a) Each primitive agent is a limb containing a cylindrical body and a configurable motor. These limbs can connect with each other using the attached motor as a joint. (b) We illustrate our dynamic agents in four environments / tasks: standing up, locomotion, manipulation (pushing), and sumo wrestling. See project videos at https://doubleblindICLR19.github.io/self-assembly/. </center> We test our limb assemblies on four tasks: standing, locomotion, pushing, and wrestling, shown in Figure 1b. We find that DGNs enable a single modular policy to control multiple possible morphologies, even those unseen during training. For example, a 6-limb policy, trained to build a 6-limb tower, can be applied at test time to 12 limbs, and results in a 12-limb tower. Not only are the policies robust to changes in the number of limbs, they also generalize well to novel test-time environmental conditions, such as added wind or new landscapes. Together, these results demonstrate that our modular and dynamic self-assembling agents have advantages for generalization to new environments and tasks. Our main contributions are:
- Training primitive agents that self-assemble into complex morphologies to jointly solve control tasks.
- Formulating morphological search as a reinforcement learning problem, where linking and unlinking are treated as actions.
- Representing the policy via a graph whose topology matches the agent's physical structure.
- Demonstrating that these self-assembling agents both train and generalize better than fixed-morphology baselines.
## 2 ENVIRONMENT AND AGENTS Investigating the co-evolution of control (i.e., software) and morphology (i.e., hardware) is not supported within the standard benchmark environments typically used for sensorimotor control, requiring us to create our own. We opted for a minimalist design for our agents, the environment, and the reward structure, which is crucial to ensuring that the emergence of limb assemblies with complex morphologies is not forced, but happens naturally. Environment Structure: Our environment contains an arena where a collection of primitive agent limbs can self-assemble to perform control tasks. This arena is a ground surface equipped with gravity and friction. The arena can be procedurally changed to generate a variety of novel terrains by changing the height of each tile on the ground (see Figure 1b). To evaluate the generalization properties of our agents, we generate a series of novel terrains. These include a bumpy terrain, generated by randomizing the heights of nearby tiles; a stairs terrain, by incrementally increasing the height of each row of tiles; a hurdles terrain, by raising the height of selected rows of tiles; a gaps terrain, by removing alternate rows of tiles; etc. (a minimal sketch of such generators is given below). Some variations also include putting the arena 'under water', which essentially amounts to increased drag (i.e., buoyancy). We start our environment with a set of six primitive limb agents on the ground, which can assemble into collectives to perform complex tasks.
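To make the terrain descriptions above concrete, here is a minimal sketch of how such height-field terrains might be generated. It is our own illustration rather than code from the paper; the grid sizes, heights, and function names are all assumptions.

```python
import numpy as np

def bumpy_terrain(rows=32, cols=32, max_height=0.5, seed=0):
    """Randomize the height of each tile to create bumps."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, max_height, size=(rows, cols))

def stairs_terrain(rows=32, cols=32, step_height=0.2):
    """Incrementally increase the height of each row of tiles."""
    return np.repeat(np.arange(rows)[:, None] * step_height, cols, axis=1)

def hurdles_terrain(rows=32, cols=32, hurdle_height=0.4, period=4):
    """Raise every `period`-th row of tiles to form hurdles."""
    heights = np.zeros((rows, cols))
    heights[::period, :] = hurdle_height
    return heights

def gaps_terrain(rows=32, cols=32, gap_depth=-2.0):
    """Sink alternate rows of tiles to create gaps between platforms."""
    heights = np.zeros((rows, cols))
    heights[1::2, :] = gap_depth
    return heights
```

Each generator returns a height per ground tile; in the actual environments these values would be handed to the simulator, and the 'under water' variant would instead modify the drag rather than the height field.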
Agent Structure: All our primitive limb agents share the same simple structure: a cylindrical body with a configurable motor on one end; the other end of the cylinder is free. The free end of a limb can link up with the motor end of another limb, after which the motor acts as a joint between the two limbs with three degrees of rotation. Hence, one can refer to the motor end of the cylindrical limb as its parent-end and the free end as its child-end. Multiple limbs can attach their child-ends to the parent-end of another limb, as shown in Figure 1(a), allowing complex graph morphologies to emerge. The limb at the parent-end controls the torques of the joint. The unlinking action can be implemented simply by detaching two limbs, but the linking action has to resolve the ambiguity of which limb to connect to (if any). We resolve this by implementing the linking action as attaching the closest limb within a small radius around the parent-node. If no other limb is present within the threshold range, the linking action has no effect. The primitive limb agents are dropped into an environment to jointly solve a given control task. One key component of the self-assembling agent setup that distinguishes it from typical multi-agent scenarios (Wooldridge, 2009) is that when agents assemble into a collective, the resulting morphology becomes a new single agent, and all limbs within the morphology maximize a joint reward function. The output action space of each primitive agent contains the continuous torque values applied to the motor connected to the agent, denoted \(\{\tau_{\alpha}, \tau_{\beta}, \tau_{\gamma}\}\) for the three degrees of rotation. In addition to the torque controls, each limb can decide to attach another limb at its parent-end, or to unlink its child-end if it is already connected to another limb. The linking and unlinking decisions are binary. This complementary role assignment of child and parent ends, i.e., the parent can only link and the child can only unlink, makes it possible to decentralize the control across limbs in a self-assembly. In our self-assembling setup, each limb only has access to its local sensory information and does not know about the other limbs. The sensory input of each agent includes its own dynamics, i.e., the location of the limb in 3-D Euclidean coordinates, its velocity, angular rotation, and angular velocity. Each end of the limb also has a trinary touch sensor that detects whether the end of the cylinder is touching 1) the floor, 2) another limb, or 3) nothing. Additionally, we provide each limb with a very simple point-depth sensor that captures the surface height on a \(9 \times 9\) grid around the projection of the limb's center onto the surface. (The sketch below summarizes this per-limb interface.) One essential requirement for operationalizing this setup is an efficient simulator that allows simultaneous simulation of many of these primitive limb agents. We implement our environments in the Unity ML (Juliani et al., 2018) framework, one of the dominant platforms for designing realistic games. For computational reasons, we do not allow cycles to emerge in the self-assembling agents: limbs may not link up with limbs already attached within the same morphology. However, our setup is trivially extensible to general graphs.
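As a concrete summary of the interface just described, the sketch below spells out one plausible per-limb observation and action structure. This is our own rendering, not the paper's code; the field names and array shapes are assumptions consistent with the text.

```python
from dataclasses import dataclass
import numpy as np

# Trinary touch-sensor encoding for each end of the limb.
TOUCH_FLOOR, TOUCH_LIMB, TOUCH_NOTHING = 0, 1, 2

@dataclass
class LimbObservation:
    position: np.ndarray          # 3-D location of the limb
    velocity: np.ndarray          # linear velocity, shape (3,)
    rotation: np.ndarray          # angular rotation
    angular_velocity: np.ndarray  # angular velocity
    touch_parent_end: int         # touch sensor at the motor (parent) end
    touch_child_end: int          # touch sensor at the free (child) end
    depth_map: np.ndarray         # (9, 9) grid of surface heights under the limb

@dataclass
class LimbAction:
    torques: np.ndarray           # (tau_alpha, tau_beta, tau_gamma)
    link: bool                    # sigma_link: attach the closest limb at the parent-end
    unlink: bool                  # sigma_unlink: detach the child-end, if attached
```

Everything in the observation is local to the limb, which is what allows the same shared policy to be applied to any number of limbs.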
## 3 LEARNING TO CONTROL SELF-ASSEMBLING MORPHOLOGIES Consider a set of primitive limbs indexed by \(i \in \{1, 2, \ldots, n\}\), which are dropped into the environment arena \(\mathcal{E}\) to perform a given continuous control task. If needed, these limbs can assemble into complex collectives in order to improve their performance on the task. The task is represented by a reward function \(r_{t}\), and the goal of the limbs is to maximize the discounted sum of rewards over time \(t\). If some limbs assemble into a collective, the resulting morphology effectively becomes a single agent with a joint network maximizing the joint reward of the connected limbs. Furthermore, the reward of an assembled morphology is a function of the whole morphology, not of the individual limbs. For instance, in the task of learning to stand up, the reward is the height of each individual limb when the limbs are separate, but the height of the whole morphology once the limbs have assembled into a collective. We now discuss our proposed formulation for learning to control these self-assembling agents. ![](images/3_0.jpg) <center>Figure 2: High-level visualization of our method. A set of primitive 'limbs' learns to self-assemble into morphologies where each limb is represented by a neural network linked via a graph of physical edges. The inset on the right shows the message-passing diagram for each node. Project videos at https://doubleblindICLR19.github.io/self-assembly/. </center> ### 3.1 CO-EVOLUTION: LINKING/UNLINKING AS AN ACTION To learn a modular controller policy that generalizes to novel setups, our agents must learn the controller jointly as the morphology evolves over time. The limbs should simultaneously decide which torques to apply to their respective motors while taking the connected morphology into account. Our hypothesis is that if a controller policy can learn in a modular fashion over iterations of increasingly sophisticated morphologies (see Figure 3b), it can become robust and generalizable to diverse situations. So, how can we optimize control and morphology under a common end-to-end framework? We propose to treat the decisions of linking and unlinking as additional actions of our primitive limb agents. The total action space \(a_{t}\) at each timestep \(t\) is \(\{\tau_{\alpha},\tau_{\beta},\tau_{\gamma},\sigma_{link},\sigma_{unlink}\}\), where \(\tau_{*}\) denotes the raw continuous torque values applied at the motor, and \(\sigma_{*}\) denotes the binary actions of connecting another limb at the parent-end or disconnecting the child-end from an already attached limb. This simple view of morphological evolution allows us to use ideas from learning-driven control, in particular reinforcement learning (Sutton & Barto, 1998). ### 3.2 MODULARITY: SELF-ASSEMBLING AGENT AS A GRAPH OF LIMBS Integrating control and morphology in a common framework is only the first step. The key question is how to model the controller policy so that it is modular and reuses information across generations of morphologies. Let \(a_{t}^{i}\) be the action space and \(s_{t}^{i}\) the local sensory input space of agent \(i\). One naive approach to maximizing the reward is to simply combine the states of all limbs into a joint input space and output all the actions jointly with a single network. Formally, the policy is simply \(\bar{a}_{t} = [a_{t}^{1}, a_{t}^{2}, \ldots, a_{t}^{n}] = \Pi(s_{t}^{1}, s_{t}^{2}, \ldots, s_{t}^{n})\).
This interprets the self-assemblies as a single monolithic agent, ignoring the graphical structure. It is the current approach to many control problems, e.g., MuJoCo environments such as the humanoid (Brockman et al., 2016), where the policy \(\Pi\) is trained to maximize the sum of discounted rewards using reinforcement learning. In this work, we instead represent the policy of the agent via a graph neural network (Scarselli et al., 2009) that explicitly corresponds to the morphology of the agent. Consider the collection of primitive limbs as a graph \(G\), where each node corresponds to a primitive limb agent \(i\). Two limbs being physically connected by a joint is analogous to an edge in the graph. At a joint, the limb that connects via its parent-end acts as the parent-node of the corresponding edge, and the limbs that connect to that joint via their child-ends are child-nodes. The parent-node (i.e., the agent with the parent-end) controls the torque of the edge (i.e., the joint motor), as described in Section 2. ### 3.3 DYNAMIC GRAPH NETWORKS (DGN) Each primitive limb node \(i\) has a policy controller of its own, represented by a neural network \(\pi_{\theta}^{i}\), and receives a corresponding reward \(r_{t}^{i}\) at each timestep \(t\). We represent the policy of the self-assembled agent by the aggregate neural network connected in the same graphical manner as the physical morphology. The edge connectivity of the graph is represented in the overall graph policy by messages that flow from each limb network to the other limb networks physically connected to it via a joint. The parameters \(\theta\) are shared across the primitive limb agents, making the overall policy of the graph modular with respect to each node. However, recall that the agent morphologies are dynamic, i.e., the connectivity of the limbs changes based on the policy outputs. This changes the edge connectivity of the corresponding graph network at every timestep, depending on the actions predicted by each limb controller network in the previous timestep. Hence, we call this aggregate neural net a Dynamic Graph Network (DGN): a graph neural network that can dynamically change topology as a function of its own outputs at the previous iteration. DGN Optimization: A typical rollout of our self-assembling agents during a training episode contains a sequence of torques \(\tau_{t}^{i}\) and linking actions \(\sigma_{t}^{i}\) for each limb at each timestep \(t\). The policy parameters \(\theta\) are optimized to jointly maximize the reward of every limb network: \[\max_{\theta}\sum_{i = 1}^{n}\mathbb{E}_{a^{i}\sim \pi_{\theta}^{i}}\Big[\sum_{t}r_{t}^{i}\Big] \quad (1)\] We optimize this objective via reinforcement learning, in particular the policy gradient method PPO (Schulman et al., 2017). DGN Connectivity: The topology is captured in the DGN by passing messages through the edges between individual network nodes. These messages allow each node to take its context relative to other nodes into account, and are meant to convey information about the neighbouring policy network nodes in the graph. Since the parameters of the limb networks are shared across nodes, these messages can be seen as context information that informs the policy of its role within the corresponding connected component of the graph. (A minimal sketch of the rooted-tree bookkeeping behind this graph follows below.)
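To make the rooted-tree bookkeeping of Section 3.2 concrete, here is a minimal sketch of how limbs, joints, and the linking/unlinking actions might be represented. This is our own illustration, not the paper's code; the attachment radius value and the helper names are assumptions (the paper only specifies "a small radius").

```python
import numpy as np

ATTACH_RADIUS = 0.5  # assumed value; the paper only says "a small radius"

class Limb:
    """One node of the morphology graph G (a forest of rooted trees)."""
    def __init__(self, idx, position):
        self.idx = idx
        self.position = np.asarray(position, dtype=float)
        self.parent = None    # limb whose parent-end our child-end is attached to
        self.children = []    # limbs attached at our motor (parent) end

    def root(self):
        node = self
        while node.parent is not None:
            node = node.parent
        return node

    def try_link(self, all_limbs):
        """sigma_link: attach the closest eligible limb at our parent-end."""
        candidates = [
            l for l in all_limbs
            if l is not self
            and l.parent is None                 # its child-end must be free
            and l.root() is not self.root()      # forbid cycles within a morphology
            and np.linalg.norm(l.position - self.position) < ATTACH_RADIUS
        ]
        if candidates:                           # no effect if nothing is in range
            child = min(candidates,
                        key=lambda l: np.linalg.norm(l.position - self.position))
            child.parent = self
            self.children.append(child)

    def try_unlink(self):
        """sigma_unlink: detach our child-end from its joint, if attached."""
        if self.parent is not None:
            self.parent.children.remove(self)
            self.parent = None
```

In the full system these structural updates happen inside the physics simulator; the point of the sketch is only that cycle-free linking keeps every morphology a rooted tree, which is what licenses the simple message-passing recursions below.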
The aggregated flow through the whole graph can be encapsulated by passing these contextual messages in topological order (there are no cycles). One can either do a top-down pass, beginning from the root node (i.e., the node with no parent) to the leaf nodes, or a bottom-up pass, from the leaves to the root. This idea is inspired by classical work on Bayesian networks, where message passing is used for belief propagation (Jordan, 2003). If the graph contained cycles, the idea could be extended by performing message passing iteratively until convergence, similar to loopy belief propagation in Bayesian graphs (Murphy et al., 1999). We now discuss these message-passing strategies: (a) Top-down message passing: Instead of defining \(\pi_{\theta}^{i}\) as a function of the state alone, \(\pi_{\theta}^{i}:s_{t}^{i}\to a_{t}^{i}\), we also pass each limb's policy network information about its parent node. Formally, we redefine \(\pi_{\theta}^{i}\) as \(\pi_{\theta}^{i}:[s_{t}^{i},m_{t}^{p_{i}}]\to a_{t}^{i}\), where \(p_{i}\) is the parent of node \(i\). This also implies that each network node must pass context information as messages to its children networks for them to take as input. So we need to define \(m_{t}^{i}\), the output message of each node \(i\), which is passed as the input context message to all its children; we simply append it to the output of \(\pi_{\theta}^{i}\). Thus, we finally define \(\pi_{\theta}^{i}:[s_{t}^{i},m_{t}^{p_{i}}]\to [a_{t}^{i},m_{t}^{i}]\). If \(i\) has no parent (i.e., it is a root), a vector of zeros is passed as \(m_{t}^{p_{i}}\). This is computed recursively until the messages reach the leaf nodes. (b) Bottom-up message passing: In this strategy, messages are passed from the leaf nodes to the root, i.e., each agent receives information from its children but not from its parent. Analogously to the top-down case, we redefine \(\pi_{\theta}^{i}\) as \(\pi_{\theta}^{i}:[s_{t}^{i},m_{t}^{C_{i}}]\to [a_{t}^{i},m_{t}^{i}]\), where \(m_{t}^{i}\) is the output message of the policy that goes to the parent limb and \(m_{t}^{C_{i}}\) is the aggregate of the input messages from all children nodes, i.e., \(m_{t}^{C_{i}} = \sum_{c\in C_{i}}m_{t}^{c}\). If \(i\) has no children (i.e., it is a leaf), a vector of zeros is passed as \(m_{t}^{C_{i}}\). Messages are passed recursively up to the root node. (c) Bottom-up then top-down message passing: In this strategy, we pass messages both ways: bottom-up, then top-down. In the absence of cycles in the graph, a one-way pass (either top-down or bottom-up) is sufficient to capture the aggregated information, similar to Bayesian trees (Jordan, 2003). Even though both-way message passing is redundant, we still explore it as an alternative, since it might help learning when the agent grows complex. This is implemented by dividing the policy into two
parts, each responsible for one direction of message passing, i.e., the parameters are \(\theta = [\theta_{1},\theta_{2}]\). First, the bottom-up pass is formulated as \(\pi_{\theta_{1}}^{i}:[s_{t}^{i},m_{t}^{C_{i}}]\to m_{t}^{i}\), where the sensory input \(s_{t}^{i}\) and the input messages \(m_{t}^{C_{i}}\) are used to generate the outgoing message to the parent node. In the top-down pass, messages from the parent are used, together with the agent's own message, to output its action: \(\pi_{\theta_{2}}^{i}:[m_{t}^{i},m_{t}^{p_{i}}]\to [a_{t}^{i},\hat{m}_{t}^{i}]\), where \(\hat{m}_{t}^{i}\) are the messages passed to the children nodes. (d) No message passing: Note that for some environments or tasks, context from the other nodes may not be necessary for effective control. In such scenarios, passing messages might create extra overhead when training a DGN. Importantly, even with no messages passed, the DGN framework still allows coordination between limbs, because the control and morphology are still learned jointly, in a modular manner, over the course of an episode: the morphology and control at each timestep \(t\) depend explicitly on the physical morphology and torques at timestep \(t-1\). To implement the no-message-passing variant of DGN, we simply zero out the messages \(m_{t}^{p_{i}}, m_{t}^{i}\) at each timestep \(t\). This is similar to a typical cooperative multi-agent setup (Wooldridge, 2009), where each limb makes its own decisions in response to the previous actions of the other agents. However, our setup differs in that our agents may physically join up, rather than merely coordinate behavior. (A minimal sketch of these message-passing recursions is given at the end of this section.) ## 4 IMPLEMENTATION DETAILS AND BASELINES Implementation Details: We use PPO (Schulman et al., 2017) as the underlying reinforcement learning method to optimize Equation 1. Limb policies are represented by fully-connected neural networks and trained with a learning rate of \(3e-4\), a discount factor of 0.995, and an entropy coefficient of 0.01. Each episode is 5000 steps long at training time and 1200 steps long at test time. Across all tasks, the number of limbs at training time is fixed to 6. Limbs start each episode disconnected, located just above the ground plane at random locations, as shown in Figure 3b. For generalization to novel scenarios, we change the number of limbs to 12 or 3 and test the same policy without any further finetuning. All of our tasks require the agent to output continuous raw torque control values. Baselines: We compare the roles of the above four message-passing strategies in DGN across a variety of tasks; different strategies may work well in different scenarios. We further compare how well these dynamic morphologies perform against a learned monolithic policy for both dynamic and fixed morphologies. In particular, we compare to: (a) Monolithic Policy, Dynamic Graph: in this baseline, our agents are still dynamic and self-assemble to perform the task, but their controller is a single monolithic policy that takes the combined state of all agents as input and outputs actions for each of them. (b) Monolithic Policy, Fixed Graph: for each task, a hand-designed morphology is constructed from the limbs and trained with a single monolithic policy that takes the combined state of all agents as input and outputs the actions for all agents. The agents are not able to combine or separate. This can be compared to a standard robotics setup in which a morphology is predefined and a policy is then learned to control it. Note that the Monolithic Policy baselines cannot be generalized to scenarios where the number of limbs varies, as that would change the action and state space of the policy.
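The four message-passing strategies of Section 3.3 reduce to two recursions over the limb tree. The sketch below is our own rendering under stated assumptions: `policy`, `policy_up`, and `policy_down` stand in for the shared limb networks (\(\pi_\theta\), \(\pi_{\theta_1}\), \(\pi_{\theta_2}\)), the message width is arbitrary, and each limb object is assumed to expose `state` and `children` fields as in the earlier sketches.

```python
import numpy as np

MSG_DIM = 32  # assumed message width; not specified in the paper

def top_down(limb, policy, parent_msg=None):
    """(a) pi_theta: [s_t, m_t^parent] -> [a_t, m_t]; the root receives zeros."""
    if parent_msg is None:
        parent_msg = np.zeros(MSG_DIM)
    limb.action, msg = policy(limb.state, parent_msg)
    for child in limb.children:
        top_down(child, policy, msg)

def bottom_up(limb, policy):
    """(b) pi_theta: [s_t, sum of child messages] -> [a_t, m_t]; leaves get zeros."""
    child_msgs = [bottom_up(c, policy) for c in limb.children]
    agg = np.sum(child_msgs, axis=0) if child_msgs else np.zeros(MSG_DIM)
    limb.action, msg = policy(limb.state, agg)
    return msg

def both_ways(root, policy_up, policy_down):
    """(c) theta = [theta_1, theta_2]: a full bottom-up pass, then top-down."""
    def up(limb):
        child_msgs = [up(c) for c in limb.children]
        agg = np.sum(child_msgs, axis=0) if child_msgs else np.zeros(MSG_DIM)
        limb.msg = policy_up(limb.state, agg)                 # m_t^i from theta_1
        return limb.msg
    def down(limb, parent_msg):
        limb.action, msg_hat = policy_down(limb.msg, parent_msg)  # theta_2
        for c in limb.children:
            down(c, msg_hat)
    up(root)
    down(root, np.zeros(MSG_DIM))
```

The no-message variant (d) corresponds to replacing every message above with the zero vector. Whichever variant is used, each limb still contributes its own (state, action, reward) trajectory to a single PPO learner that updates the shared parameters \(\theta\) of Equation 1.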
For the Fixed Graph baseline, we chose the fixed morphology to be a straight-line chain of 6 limbs (i.e., a linear morphology) in all experiments, including the standing-up and locomotion tasks. This linear chain may be optimal for standing as tall as possible, but it is not necessarily optimal for learning to stand; the same holds for locomotion. Further, note that the best-performing DGN variants also converge to a linear-chain morphology (shown in Figure 3b and in the video results on the project website) to achieve the best reward on the standing-up task. Moreover, one can confirm that the locomotion task is also solvable with a linear morphology, because one of the DGN ablation methods converged to a linear morphology while doing well at locomotion (see video). ## 5 EXPERIMENTS: EMERGENT MORPHOLOGIES AND GENERALIZATION ![](images/6_0.jpg) <center>Figure 3: Training of self-assembling agents: (a) The training performance of different methods for joint training of control and morphology on the task of learning to stand up. The generalization performance of these policies across new scenarios is shown in Table 1. (b) The gradual co-evolution of the controller as well as the morphology of self-assembling agents over the course of training. </center> <table><tr><td>Environment</td><td colspan="4">DGN</td><td colspan="2">Monolithic Policy</td></tr><tr><td></td><td>(up then down)</td><td>(top-down)</td><td>(bottom-up)</td><td>(no msgs)</td><td>(dynamic graph)</td><td>(fixed graph)</td></tr><tr><td colspan="7">Training Environment</td></tr><tr><td>Standing Up</td><td>15253</td><td>13486</td><td>17518</td><td>12470</td><td>4104</td><td>5351</td></tr><tr><td colspan="7">Zero-Shot Generalization</td></tr><tr><td>More (2x) Limbs</td><td>15006 (98%)</td><td>14429 (107%)</td><td>19796 (113%)</td><td>14084 (113%)</td><td>-</td><td>-</td></tr><tr><td>Fewer (.5x) Limbs</td><td>11730 (77%)</td><td>9842 (73%)</td><td>10839 (62%)</td><td>9070 (73%)</td><td>-</td><td>-</td></tr><tr><td>Water + 2x Limbs</td><td>16642 (109%)</td><td>14192 (105%)</td><td>16871 (96%)</td><td>13360 (107%)</td><td>-</td><td>-</td></tr><tr><td>Winds</td><td>14654 (96%)</td><td>12116 (90%)</td><td>16803 (96%)</td><td>12560 (101%)</td><td>3923 (96%)</td><td>4531 (85%)</td></tr><tr><td>Strong Winds</td><td>14727 (97%)</td><td>13416 (99%)</td><td>15853 (90%)</td><td>12257 (98%)</td><td>3937 (96%)</td><td>4961 (93%)</td></tr></table> Table 1: Testing generalization for the standing up task. We show a quantitative evaluation of the generalization ability of the learned policies. For each method, we first pick the best performing model from the training run and then evaluate it on each of the novel scenarios without any further finetuning, i.e., in a zero-shot manner. We report first the score attained by the self-assembling agent and then, in parentheses, the percentage of training performance retained upon transfer. Higher numbers are better. We test the co-evolution of morphology and control across four tasks where self-assembling agents learn to: (a) stand up, (b) perform locomotion, (c) perform manipulation, and (d) fight in a sumo wrestling environment. There are two primary objectives of our investigation. The first is to determine whether such modular co-evolution results in the emergence of complex self-assembling agents. The second is to evaluate whether the emerged modular controller generalizes to novel scenarios.
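The dense rewards used in the single-team tasks (standing, locomotion, and manipulation, detailed in Sections 5.1-5.4) can be summarized in a few lines. The sketch below is our paraphrase of the reward definitions given in those sections, with assumed argument names; it is not the environments' actual code.

```python
import numpy as np

def standing_reward(limb_heights):
    """Standing up: proportional to the highest point of the combined morphology."""
    return max(limb_heights)

def locomotion_reward(limb_x_velocity):
    """Locomotion: proportional to the limb's velocity along the X-axis."""
    return limb_x_velocity

def manipulation_reward(object_a_pos, object_b_pos):
    """Manipulation: negative distance between the two objects being pushed."""
    diff = np.asarray(object_a_pos) - np.asarray(object_b_pos)
    return -float(np.linalg.norm(diff))
```

For separate limbs these rewards are computed per limb; once limbs assemble, the reward is computed over the whole morphology and shared by all connected limbs, as described in Section 3.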
### 5.1 TASK: STANDING UP In this task, each agent's reward is proportional to the highest vertical point in its combined morphology, i.e., the limb assemblies should try to maximize their \(Y\) - axis height. Limbs have an incentive to self- assemble since the potential reward scales with the number of agents in the body, given that the agent can learn the controller for it. The learning process begins by six- limbs falling on the ground randomly, as shown in Figure 3b. In the beginning, each agent learns independently of others but these limbs learn to self- assemble to form a complex agent after training. Figure 3a compares different methods in terms of their performance on the task of standing as high as possible. We found that our DGN policy variants perform significantly better than the monolithic policies for the standing up task. In particular, the bottom- up and up- then- down message passing strategies attain the highest reward. To verify the implementation of our monolithic policy with fixed morphology, we show its ablation with varying number of limbs in Section A.1 in the supplementary. However, the key question is whether the learned policy generalizes to novel scenarios. We investigate it by testing the learned policies without any further finetuning, i.e. zero- shot generalization, in novel scenarios: adding two times the number of limbs, reducing the number of limbs by half, increasing drag (i.e., 'under water') and number of limbs at the same time, and adding varying strength of random pushes- n- pulls (i.e., 'wind'). As the results in Table 1 show, DGN achieves similar performance as it did on the training environment, despite never having seen these scenarios before. Interestingly, the DGN variants seem to generalize better than the fixed- graph policies (last column). Monolithic <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 4: Training self-assembling agents: We show the performance of different methods for joint training of control and morphology for three tasks: standing up in the presence of wind and random push-n-pulls (left), locomotion in bumpy terrain (center) and manipulation (pushing) of two objects (right). These policies generalize to novel scenarios as shown in respective tables. </center> <table><tr><td>Environment</td><td colspan="3">DGN</td><td colspan="3">Monolithic Policy</td></tr><tr><td></td><td>(up then down)</td><td>(top-down)</td><td>(bottom-up)</td><td>(no msgs)</td><td>(dynamic graph)</td><td>(fixed graph)</td></tr><tr><td>Training Environment</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Standing Up in Wind</td><td>16339</td><td>18423</td><td>-</td><td>17237</td><td>4176</td><td>4500</td></tr><tr><td>Zero-Shot Generalization</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>(S)trong Winds</td><td>15649 (96%)</td><td>17384 (94%)</td><td>-</td><td>-</td><td>4010 (96%)</td><td>4507 (100%)</td></tr><tr><td>2x Limbs + (S)Winds</td><td>16250 (99%)</td><td>15351 (83%)</td><td>-</td><td>15728 (91%)</td><td>-</td><td>-</td></tr><tr><td>Water + 2x(L) + (S)Winds</td><td>17254 (106%)</td><td>17068 (93%)</td><td>-</td><td>16592 (96%)</td><td>-</td><td>-</td></tr></table> Table 2: Testing generalization for the standing up task in the presence of random push- n- pulls (i.e. 'wind'). The best performing model from the training is evaluated on each of the novel scenarios without any further finetuning. 
The score attained by the self- assembling agent is reported first and then, in parenthesis, the percentage of training performance retained upon transfer. The bottom- up DGN failed due to some experimental error and will be reported in the final version of paper. policy baselines cannot be generalized to more or fewer limbs due to the fixed action and state space. A better understanding of these results may be obtained by looking at the dynamically combining morphologies in the project video. ### 5.2 TASK: STANDING UP IN THE PRESENCE OF RANDOM PUSH-N-PULLS (WIND) The task in this case is same as the previous one of learning to stand up. However, unlike in the previous subsection, here we also trained in the presence of random push- n- pulls (i.e., 'wind') with hope of making the learned morphologies even more robust. The training performance in Figure 4a show the superior performance of DGN with respect to the baselines. The generalization results, in Table 2, show that the DGN both- ways messaging passing variant is the most robust. This may be because in the presence of distractors, communication both ways can be helpful since a random force on a single limb affects all other attached limbs. ### 5.3 LOCOMOTION TASK The reward function in this environment is defined as the distance covered by the agent along an axis, in particular, the limbs are rewarded is proportional to their velocity along the \(X\) - axis. The training environment is a bumpy terrain (shown in Figure 1(b)) and the training performance is shown in Figure 4b. Our DGN variants significantly outperform the monolithic baselines (see supplementary, Section A.1, for ablation). Interestingly, DGN variant with no message passing performs the best. Upon in- depth investigation, we found that it is possible to do well on this locomotion task with a large variety of morphologies, unlike the task of standing up where a tower is strongly preferable. Here, any morphology with sufficient height and forward velocity is able to make competitive progress in locomotion (see videos), and thus reducing message- passing to an unnecessary overhead. As discussed in Section 3.3, no message passing merely implies the absence of context to the limbs, but the DGN aggregated policy is still modular and jointly learned with the morphology over the episode. <--- Page Split ---> Table 3: Testing generalization for the locomotion task. The best performing model from the training is evaluated on each of the novel scenarios without any further finetuning. The score attained by the self-assembling agent is reported first and then, in parenthesis, the percentage of training performance retained upon transfer. 
<table><tr><td rowspan="2">Environment</td><td colspan="4">DGN</td><td colspan="2">Monolithic Policy</td></tr><tr><td>(up then down)</td><td>(top-down)</td><td>(bottom-up)</td><td>(no msgs)</td><td>(dynamic graph)</td><td>(fixed graph)</td></tr><tr><td>Training Environment</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Locomotion</td><td>3.91</td><td>6.87</td><td>8.71</td><td>9.0</td><td>0.96</td><td>2.96</td></tr><tr><td>Zero-Shot Generalization</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>More (2x) Limbs</td><td>4.01 (103%)</td><td>4.29 (63%)</td><td>5.47 (63%)</td><td>9.19 (102%)</td><td>-</td><td>-</td></tr><tr><td>Fewer (.5x) Limbs</td><td>3.52 (90%)</td><td>4.49 (65%)</td><td>6.64 (76%)</td><td>8.2 (91%)</td><td>-</td><td>-</td></tr><tr><td>Water + 2x Limbs</td><td>2.64 (68%)</td><td>3.54 (52%)</td><td>6.57 (75%)</td><td>7.2 (80%)</td><td>-</td><td>-</td></tr><tr><td>Hurdles</td><td>1.84 (47%)</td><td>3.66 (53%)</td><td>6.39 (73%)</td><td>5.56 (62%)</td><td>-0.77 (-79%)</td><td>-3.12 (-104%)</td></tr><tr><td>Gaps in Terrain</td><td>1.84 (47%)</td><td>2.8 (41%)</td><td>3.25 (37%)</td><td>4.17 (46%)</td><td>-0.32 (-33%)</td><td>2.09 (71%)</td></tr><tr><td>Bi-modal Bumps</td><td>2.97 (76%)</td><td>4.55 (66%)</td><td>6.62 (76%)</td><td>6.15 (68%)</td><td>-0.56 (-57%)</td><td>-0.44 (-14%)</td></tr><tr><td>Stairs</td><td>1.0 (26%)</td><td>4.25 (62%)</td><td>6.6 (76%)</td><td>8.59 (95%)</td><td>-8.8 (-912%)</td><td>-3.65 (-122%)</td></tr><tr><td>Inside Valley</td><td>4.37 (112%)</td><td>6.55 (95%)</td><td>5.29 (61%)</td><td>6.21 (69%)</td><td>0.47 (48%)</td><td>-1.35 (-45%)</td></tr></table> Table 4: Testing generalization for the manipulation task. The score attained by the self-assembling agent is reported first and then, in parenthesis, the percentage of training performance retained. <table><tr><td rowspan="2">Environment</td><td colspan="4">DGN</td><td colspan="2">Monolithic Policy</td></tr><tr><td>(up then down)</td><td>(top-down)</td><td>(bottom-up)</td><td>(no msgs)</td><td>(dynamic graph)</td><td>(fixed graph)</td></tr><tr><td>Training Environment</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Manipulation</td><td>-7985</td><td>-7861</td><td>-8482</td><td>-9603</td><td>-8773</td><td>-7725</td></tr><tr><td>Zero-Shot Generalization</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>More (2x) Limbs</td><td>-14319 (-179%)</td><td>-14894 (-189%)</td><td>-9969 (-118%)</td><td>-10879 (-112%)</td><td>-</td><td>-</td></tr><tr><td>Water + 2x Limbs</td><td>-10724 (-134%)</td><td>-13278 (-169%)</td><td>-12368 (-146%)</td><td>-10362 (-108%)</td><td>-</td><td>-</td></tr></table> We evaluate the learned policy without any further finetuning on several scenarios: more limbs, fewer limbs, more limbs under water, a terrain with hurdles of a certain height, a terrain with gaps between platforms, a bumpy terrain with a bi- modal distribution of bump heights, stairs, and an environment with a valley surrounded by walls on both sides. These environments are procedurally generated as discussed in Section 2. Across these novel environments, the modular policies learned by DGN tend to generalize better than the monolithic agent policies, as indicated in Table 3. ### 5.4 TASK: MANIPULATION OF TWO OBJECTS The agents are dropped inside a room containing two objects and the goal is to decrease the distance between the objects, as shown in Figure 1(b). 
The reward for the agents is the negative distance between the objects, so as to encourage the behavior of pushing the blocks together. The training plots are shown in Figure 4c and the generalization results are shown in Table 4. This is a very hard task due to the sparse reward problem as agents only get reward if they move the block. Interestingly, the learned policies do not work well enough in this environment, and only learn to slightly move the blocks (see video). We believe this task requires more reward engineering than just the distance, and we will update the improved results in the final version. ### 5.5 TASK: SUMO WRESTLING BETWEEN TWO TEAMS In this task, we divide the limbs into two teams of 6 limbs each and drop them into an arena to fight. Each team gets rewarded if any opponent limb falls out of the arena. The agents are trained via competitive self- play (Bansal et al., 2017; Tesauro, 1995). This is in contrast to the previous "single- team" tasks for self- assembling agents, i.e., standing, locomotion and manipulation. We present it as an additional result demonstrating the wider applicability of the method. However, it is non- trivial to measure the performance in self- play as the game is zero- sum, and rewards therefore do not increase over time. Instead, we refer the readers to the qualitative results in the video. The <--- Page Split ---> policies learned by the self- assembling agents demonstrate some interesting behaviors, but there is a lot of room for improvement in future research. We will release these environments upon acceptance. ## 6 RELATED WORK Morphogenesis and self- reconfiguring modular robots The idea of modular and self- assembling agents goes back at least to Von Neumman's Theory of Self- Reproducing Automata (Von Neumann et al., 1966). In robotics, such systems have been termed "self- reconfiguring modular robots" (Murata & Kurokawa, 2007; Stoy et al., 2010). There has been a lot of work in the modular robotics community in designing real hardware robotic modules that can be docked with each other to form complex robotic morphologies (Daudelin et al., 2018; Gilpin et al., 2008; Romanishin et al., 2013; Wright et al., 2007; Yim et al., 2000). Our main contribution is to approach this problem from a learning perspective, in particular deep RL, and study the resulting generalization properties. A variety of alternative approaches have also been proposed to optimize agent morphologies, including genetic algorithms that search over a generative grammar (Sims, 1994), as well as directly optimizing over morphology parameters with RL (Schaff et al., 2018). One key difference between these approaches and our own is that we achieve morphogenesis via dynamic actions (linking), which agents take during their lifetimes, whereas the past approaches treat morphology as an optimization target to be updated between generations or episodes. Since the physical morphology also defines the connectivity of the policy net, our proposed algorithm can also be viewed as performing a kind of neural architecture search (Zoph & Le, 2016) in physical agents. Graph neural networks Encoding graphical structures into neural networks has been used for a large number of applications, including quantum chemistry (Gilmer et al., 2017), semi- supervised classification (Kipf & Welling, 2016), and representation learning (Yang et al., 2018). The works most similar to ours involve learning control policies. 
For example, Nervenet (Wang et al., 2018) represents individual limbs and joints as nodes in a graph and demonstrates multi- limb generalization, just like our system does. However, the morphologies on which Nervenet operates are not learned jointly with the policy. hand- defined to be compositional in nature. Others (Battaglia et al., 2018; Huang et al., 2018) have shown that graph neural networks can also be applied to inference models as well as to planning. Many of these past works implement some variant of Graph Neural Networks (Scarselli et al., 2009) which operate on general graphs. Our method leverages the constraint that the morphologies can always be represented as a rooted tree in order to simplify the message passing. ## 7 DISCUSSION Modeling intelligent agents as modular, self- assembling morphologies has long been a very appealing idea. The efforts to create practical systems to evolve artificial agents goes back at least two decades to the beautiful work of Karl Sims (Sims, 1994). In this paper, we are revisiting these ideas using the contemporary machinery of deep networks and reinforcement learning. Examining the problem in the context of machine learning, rather than optimization, we are particularly interested in modularity as a key to generalization, in terms of improving adaptability and robustness to novel environmental conditions. Poor generalization is the Achilles heel of modern robotics research, and the hope is that this could be a promising direction in addressing this key issue. We demonstrated a number of promising experimental results, suggesting that modularity does indeed improve generalization in simulated agents. While these are just the initial steps, we believe that the proposed research direction is promising and its exploration will be fruitful to the research community. To encourage follow- up work, we will release all code, models, and environments online once the paper is published. ## A SUPPLEMENTARY MATERIAL ## A.1 PERFORMANCE OF FIXED-GRAPH BASELINE VS. NUMBER OF LIMBS To verify whether the training of Monolithic Policy w/ Fixed Graph is working, we ran it on standing up and locomotion tasks across varying number of limbs. We show in Figure 5 that the baseline performs well with less number of limbs which suggests that the reason for failure in 6- limbs case is indeed the morphology graph being fixed, and not the implementation of this baseline. ![](images/12_0.jpg) <center>Figure 5: The performance of Monolithic Policy w/ Fixed Graph baseline as the number of limbs varies in the two tasks: standing up (left) and locomotion (right). This shows that the monolithic baseline works well with less (1-3 limbs), but fails with 6 limbs during training. </center> ## A.2 GENERALIZATION OF LEARNED POLICIES AT DIFFERENT TRAINING INTERVALS In this section, we show the generalization plots corresponding to the Tables 1, 2, 3, 4. To plot generalization, we pick the trained model from different training intervals and plot them across new environments without finetuning at all, in a zero- shot manner. ![](images/12_1.jpg) <center>Figure 6: Generalization for the task of Standing Up: Performance of different methods across novel scenarios without any finetuning. </center> <--- Page Split ---> ![](images/13_0.jpg) <center>Figure 7: Generalization for the task of Standing Up w/ Wind: Performance of different methods across novel scenarios without any finetuning. 
</center> ![](images/13_1.jpg) <center>Figure 8: Generalization for the task of Locomotion: Performance of different methods across novel scenarios without any finetuning. </center> ![](images/13_2.jpg) <center>Figure 9: Generalization for the task of Manipulation: Performance of different methods across novel scenarios without any finetuning. </center> <--- Page Split --->
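As referenced in Section 5.4, the manipulation reward is simply the negative distance between the two blocks. Below is a minimal sketch of that shaping term, assuming 3D block positions are read from the simulator state; the function and variable names are ours for illustration, not the environment's API.

```python
import numpy as np

def manipulation_reward(block_a_pos, block_b_pos):
    """Negative Euclidean distance between the two blocks: maximal (zero)
    when the blocks touch, encouraging agents to push them together."""
    a = np.asarray(block_a_pos, dtype=np.float64)
    b = np.asarray(block_b_pos, dtype=np.float64)
    return -float(np.linalg.norm(a - b))
```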
reject
Reject
5
ICLR_2019_paper_1359
iclr
2,019
# UNSUPERVISED DISENTANGLING STRUCTURE AND APPEARANCE

Anonymous authors Paper under double-blind review

## ABSTRACT

It is challenging to disentangle an object into two orthogonal spaces of structure and appearance, since each can influence the visual observation in a different and unpredictable way, and it is rare to have access to large amounts of data to help separate the influences. In this paper, we present a novel framework to learn this disentangled representation in a completely unsupervised manner. We address this problem in a two-branch Variational Autoencoder framework. For the structure branch, we project the latent factor into a soft structured point tensor and constrain it with losses derived from prior knowledge. This encourages the branch to distill geometry information. The other branch learns the complementary appearance information. The two branches form an effective framework that can disentangle an object's structure-appearance representation without any human annotation. We evaluate our approach on four image datasets, on which we demonstrate superior disentanglement and visual analogy quality on both synthesized and real-world data. We are able to generate photo-realistic images at \(256 \times 256\) resolution that are clearly disentangled in structure and appearance.

## 1 INTRODUCTION

Structure and appearance are the two most inherent attributes that characterize an object visually. Computer vision researchers have devoted decades of effort to understanding object structure and extracting features that are invariant to geometry change (Huang et al., 2007; Thewlis et al., 2017; Rocco et al., 2018). Learning such disentangled deep representations for visual objects is an important topic in deep learning. ![](images/0_0.jpg) <center>Figure 1: Walking in the disentangled representation space: Here we show an example learned by our algorithm. Our approach effectively disentangles the structure and appearance space. The three cat faces in the bounding box are from real data, while the others are interpolated through our learned representations. </center> <--- Page Split ---> The main objective of our work is to disentangle an object's appearance and structure in an unsupervised manner. Achieving this goal is non-trivial for three reasons: 1) Without supervision, we can hardly guarantee the separation of different representations in the latent space. 2) Although some methods like InfoGAN (Chen et al., 2016) are capable of learning several groups of independent attributes from objects, the attributes from these unsupervised frameworks are hard to interpret, since we cannot pinpoint which portion of the disentangled representation relates to the structure and which to the appearance. 3) Learning structure from a set of natural real-world images is difficult. To overcome the aforementioned challenges, we propose a novel two-branch Variational Autoencoder (VAE) framework, in which the structure branch aims to discover meaningful structural points to represent the object geometry, while the appearance branch learns the complementary appearance representation. The settings of these two branches are asymmetric. For the structure branch, we add a layer-wise softmax operator to the last layer. This can be seen as a projection of a latent structure to a soft structured point tensor space.
Specifically designed prior losses are used to constrain the structured point tensors so that the discovered points have high repeatability across images yet are distributed uniformly to cover different parts of the object. To encourage the framework to learn a disentangled yet complementary representation of appearance and structure, we introduce a Kullback-Leibler (KL) divergence loss and a skip-connection design to the framework. Extensive experiments demonstrate the effectiveness of the proposed method in manipulating the structure and appearance of natural images, e.g., cat faces in Figure 1, outperforming state-of-the-art algorithms (Chen et al., 2016; Higgins et al., 2017; Jakab et al., 2018). We also conduct several experiments on MNIST-Color, 3D synthesized data, and real photos.

## 2 METHODOLOGY

In the absence of annotation on structure, we rely on prior knowledge of how object landmarks should be distributed to constrain the learning and disentanglement of structural information. Our experiments show that this is possible given appropriate prior losses and learning architecture. We first formulate our loss function with a consideration of the prior. Specifically, we follow the VAE framework and assume that 1) the two latent variables \(z\) and \(y\), which represent the appearance and structure, are generated from some prior distributions, and 2) \(x\) follows the conditional distribution \(p(x|y,z)\). We start with a Bayesian formulation and maximize the log-likelihood over all observed samples \(x \in X\):

\[\begin{array}{r l} & {\log p(x) = \log p(y) + \log p(x|y) - \log p(y|x)}\\ & {\qquad \geq \log p(y) + \log \int p(x,z|y)\mathrm{d}z}\\ & {\qquad \geq \log p(y) + \mathbb{E}_{q}\log \frac{p(x,z|y)}{q(z|x,y)}}\\ & {\qquad = \log p(y) + \mathbb{E}_{q}\log \frac{p(x|y,z)p(z|y)}{q(z|x,y)}}. \end{array} \quad (1)\]

Equation 1 relies on a deterministic mapping \(e(\cdot ;\omega)\) from \(x\) to \(y\), where we assume \(y\) follows a Gaussian distribution \(\mathcal{N}(e(x;\omega),\Sigma)\). The term \(-\log p(y|x)\) is non-negative. In the second line of the equation, we start to consider the factor \(z\). As in the VAE, we address the intractable integral by introducing an approximate posterior \(q(z|x,y;\phi)\) and estimating the integral with the evidence lower bound (ELBO). By splitting \(p(x|y,z)\) out of the second term of the last expression, we obtain our final loss:

\[\mathcal{L}(x,\theta ,\phi ,\omega) = -\log p_{\omega}(y) - \mathbb{E}_{q_{\phi}(z|x,y)}\log p_{\theta}(x|y,z) + \mathrm{KL}(q_{\phi}(z|x,y)\,\|\,p_{\theta}(z|y)). \quad (2)\]

The first term is the prior on \(y\). The second term describes the conditional distribution of \(x\) given all representations; ideally, if the decoder could perfectly reconstruct \(x\), this term would be a delta function over \(x\). The third term is the Kullback-Leibler divergence between the approximate posterior and the prior. In the rest of this paper we refer to these three terms as the prior loss \(L_{\mathrm{prior}}\), the reconstruction loss \(L_{\mathrm{recon}}\), and the KL loss \(L_{KL}\). <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 2: Architecture: Our framework follows an auto-encoder design. It contains two branches: 1) the structure branch forces the representation into a Gaussian spatial probability distribution with an hourglass network \(e_{\omega}\); 2) the appearance branch \(E_{\phi}\) learns an appearance representation complementary to the structure. </center>
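Before detailing each term, the overall objective of Eq. 2 can be summarized as a short training sketch. The PyTorch-style code below is a minimal illustration under our reading of the paper, not the authors' implementation: the modules `hourglass`, `appearance_encoder`, `prior_encoder`, and `decoder` stand in for \(e_{\omega}\), \(E_{\phi}\), \(E_{\theta}\), and \(D_{\theta}\), and the KL term uses the identity-covariance Gaussian simplification introduced later in Section 2.3.

```python
import torch

def total_loss(x, hourglass, appearance_encoder, prior_encoder, decoder,
               prior_loss_fn, recon_loss_fn):
    """Sketch of Eq. 2: L = L_prior + L_recon + L_KL (module names are illustrative)."""
    y = hourglass(x)                    # soft structured point tensor (structure)
    mu_q = appearance_encoder(x, y)     # mean of q(z|x, y)
    mu_p, skips = prior_encoder(y)      # mean of p(z|y) plus skip-connection features
    z = mu_q                            # only the mean is estimated (Section 2.3)
    x_hat = decoder(y, z, skips)        # reconstruction through D_theta
    l_prior = prior_loss_fn(y)          # Eq. 3 (Section 2.1)
    l_recon = recon_loss_fn(x, x_hat)   # pixel + perceptual terms (Section 2.2)
    # KL between two Gaussians with identity covariance reduces to
    # half the squared Euclidean distance between their means.
    l_kl = 0.5 * (mu_q - mu_p).pow(2).sum(dim=1).mean()
    return l_prior + l_recon + l_kl
```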
### 2.1 PRIOR LOSS

Inspired by Zhang et al. (2018) and Jakab et al. (2018), we formulate our structure representation \(y\) as a soft latent structured point tensor. A re-projection operator is applied here to force \(y\) to lie on a Gaussian spatial probability distribution space. Following the notation of Newell et al. (2016), we denote the direct outputs of the hourglass network \(e_{\omega}\) as landmark heatmaps \(h\), each channel of which represents the spatial location of a structural point. Instead of using the max activation of each heatmap as the landmark coordinate, we take a weighted average of all activations across each heatmap. We then re-project the landmark coordinates to spatial features of the same size as the heatmaps using a fixed Gaussian-like function centered at the predicted coordinates with a fixed standard deviation. As a result, we obtain a new tensor \(y\) with a prior on the structure representation. Similar to the difficulty described in Zhang et al. (2018), we find that training the structure branch from a generic random initialization tends to locate all structural points around the mean location at the center of the image. This can lead to a local minimum from which the optimizer might not escape. As such, we introduce a Separation Loss to encourage each heatmap to sufficiently cover the object of interest. This is achieved by the first part of Eq. 3, where we encourage each pair of \(i^{th}\) and \(j^{th}\) heatmaps to have different activations. \(\sigma\) can be regarded as a normalization factor here. Another prior constraint is that we wish the structural points to behave like landmarks that encode geometric structure information. To achieve this goal, we add a Concentration Loss to encourage the variance of the activations \(h\) to be small, so that each heatmap concentrates at a single location. This corresponds to the second term in Eq. 3.

\[L_{prior} = \sum_{i\neq j}\exp (-\frac{||h_{i} - h_{j}||^{2}}{2\sigma^{2}}) + \mathrm{Var}(h) \quad (3)\]

It is noteworthy that some recent works have considered priors on latent factors. Dupont (2018) proposed a Joint-\(\beta\)-VAE that places different prior distributions over several latent factors so as to disentangle continuous and discrete factors from data. Our work differs in that we investigate a different prior to disentangle visual structure and appearance.
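The soft-argmax re-projection and the two prior terms can be sketched directly in PyTorch. The code below is our reading of Section 2.1 and Eq. 3 rather than the authors' code: the tensor shapes, the normalized coordinate grid, the default \(\sigma\) values, and the exact definition of Var(h) (spatial variance of each heatmap around its soft-argmax coordinate) are all assumptions.

```python
import torch
import torch.nn.functional as F

def soft_argmax(h):
    """Heatmaps h: (B, K, H, W) -> normalized landmark coordinates (B, K, 2)."""
    B, K, H, W = h.shape
    p = F.softmax(h.view(B, K, -1), dim=-1).view(B, K, H, W)  # spatial distribution
    ys = torch.linspace(0, 1, H, device=h.device)
    xs = torch.linspace(0, 1, W, device=h.device)
    mu_y = (p.sum(dim=3) * ys).sum(dim=2)   # expected row coordinate
    mu_x = (p.sum(dim=2) * xs).sum(dim=2)   # expected column coordinate
    return torch.stack([mu_x, mu_y], dim=-1), p

def reproject(coords, H, W, sigma=4.0 / 256):
    """Render each coordinate as a fixed Gaussian map -> tensor y of shape (B, K, H, W)."""
    B, K, _ = coords.shape
    ys = torch.linspace(0, 1, H, device=coords.device).view(1, 1, H, 1)
    xs = torch.linspace(0, 1, W, device=coords.device).view(1, 1, 1, W)
    dy = ys - coords[..., 1].view(B, K, 1, 1)
    dx = xs - coords[..., 0].view(B, K, 1, 1)
    return torch.exp(-(dx ** 2 + dy ** 2) / (2 * sigma ** 2))

def prior_loss(h, sigma_sep=1.0):
    """Eq. 3: pairwise separation term plus per-heatmap concentration term."""
    B, K, H, W = h.shape
    flat = h.view(B, K, -1)
    # Separation: exp(-||h_i - h_j||^2 / 2 sigma^2) summed over all pairs i != j.
    d2 = torch.cdist(flat, flat).pow(2)            # (B, K, K) squared distances
    sep = torch.exp(-d2 / (2 * sigma_sep ** 2))
    sep = (sep.sum(dim=(1, 2)) - K).mean()         # drop the i == j diagonal
    # Concentration: spatial variance of each heatmap around its mean coordinate.
    coords, p = soft_argmax(h)
    gy = torch.linspace(0, 1, H, device=h.device).view(1, 1, H, 1)
    gx = torch.linspace(0, 1, W, device=h.device).view(1, 1, 1, W)
    var = (p * ((gx - coords[..., 0].view(B, K, 1, 1)) ** 2
                + (gy - coords[..., 1].view(B, K, 1, 1)) ** 2)).sum(dim=(2, 3))
    return sep + var.mean()
```

In this reading, `soft_argmax` and `reproject` together produce the tensor \(y\), while `prior_loss` operates on the raw heatmaps \(h\).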
### 2.2 RECONSTRUCTION LOSS

For the second term, we optimize the reconstruction loss of the whole model, which will be denoted as the generator \(G\) in the following. We assume that the decoder \(D_{\theta}\) is able to reconstruct the original input \(x\) from the latent representations \(y\) and \(z\), i.e., \(\hat{x} = G(y,z)\). Consequently, we can design the reconstruction loss as \(L_{\mathrm{recon}} = \| x - \hat{x} \|_{1}\). <--- Page Split ---> However, minimizing an \(L_{1} / L_{2}\) loss at the pixel level alone does not model perceptual quality well and makes the prediction look blurry and implausible. This phenomenon has been well observed in the super-resolution literature (Bruna et al., 2016; Sajjadi et al., 2017). We consequently define the reconstruction loss as \(L_{\mathrm{recon}} = \| x - \hat{x}\|_{1} + \sum_{i}\lambda_{i}\| \psi_{i}(x) - \psi_{i}(\hat{x})\|_{1}\), where \(\psi_{i}\) is the feature obtained from the \(i\)-th layer of a VGG-19 model (Simonyan & Zisserman, 2014) pre-trained on ImageNet. It is also possible to add an adversarial loss to further improve the perceptual reconstruction quality. Since the goal of this work is disentanglement rather than reconstruction, we only adopt the \(L_{\mathrm{recon}}\) described above.
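A minimal sketch of this combined pixel and perceptual loss is given below, assuming torchvision's VGG-19. The particular layer indices and weights \(\lambda_i\) are illustrative choices on our part, not the ones used in the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ReconLoss(nn.Module):
    """L_recon = ||x - x_hat||_1 + sum_i lambda_i * ||psi_i(x) - psi_i(x_hat)||_1."""

    def __init__(self, layer_ids=(3, 8, 17, 26), weights=(1.0, 1.0, 1.0, 1.0)):
        super().__init__()
        vgg = models.vgg19(pretrained=True).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)        # frozen feature extractor
        self.vgg = vgg
        self.layer_ids = set(layer_ids)    # ReLU layers to tap (assumed choice)
        self.weights = list(weights)

    def features(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

    def forward(self, x, x_hat):
        loss = (x - x_hat).abs().mean()    # pixel-level L1 term
        for w, fx, fy in zip(self.weights, self.features(x), self.features(x_hat)):
            loss = loss + w * (fx - fy).abs().mean()
        return loss
```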
### 2.3 KL LOSS

We model \(q(z|x,y)\) as a parametric Gaussian distribution whose parameters are estimated by the encoder network \(E_{\phi}\). Therefore, the appearance code \(z\) can be sampled from \(q(z|x,y)\). Meanwhile, the prior \(p(z|y)\) can be estimated by the encoder network \(E_{\theta}\). By using the reparametrization trick (Kingma & Welling, 2014), these networks can be trained end-to-end. In this work, only the mean is estimated, for the stability of learning. By modeling the two distributions as Gaussians with identity covariances, the KL loss is simply equal to the Euclidean distance between their means. Thus, \(z\) is regularized by minimizing the KL divergence between \(q(z|x,y)\) and \(p(z|y)\). Notice that with only the prior and reconstruction losses, the framework only ensures that \(z\) is derived from \(x\) and that the decoder \(D_{\theta}\) recovers as much information about \(x\) as possible; there is no guarantee that \(z\) will learn a representation complementary to \(y\). Towards this end, we design the network to concatenate the structure representation encoded by \(E_{\theta}\) with the inferred appearance code \(z\). The concatenated representation is then decoded together by \(D_{\theta}\). Moreover, skip-connections between \(E_{\theta}\) and \(D_{\theta}\) are used to pass multi-level structure information to the decoder. Since enough structure information can be obtained from the prior, any information about structure encoded in \(z\) incurs a penalty on the likelihood \(p(x|y,z)\) without capturing any new (i.e., appearance) information. This network design together with the KL loss constrains \(z\) to encode appearance information that is complementary to the structure prior.

### 2.4 IMPLEMENTATION DETAIL

Each input image \(x\) is cropped and resized to \(256 \times 256\) resolution. A one-stack hourglass network (Newell et al., 2016) is used as the geometry extractor \(e_{\omega}\) to project the input image to the heatmap \(y \in \mathbb{R}^{256 \times 256 \times 30}\), in which each channel represents one point-centered 2D Gaussian map (with \(\sigma = 4\)). \(y\) is drawn as a single-channel map for visualization in Fig. 2. The same network (with stride-2 convolutions for downsampling) is used for both \(E_{\theta}\) and \(E_{\phi}\) to obtain the appearance representation \(z\) and the embedded structure representation as two 128-dimensional vectors. A symmetric deconvolution network with skip connections is used as the decoder \(D_{\theta}\) to obtain the reconstructed result \(\hat{x}\). All of the networks are jointly trained from scratch end-to-end. We detail the architectures and hyperparameters used for our experiments in appendix A.

## 3 RELATED WORK

Unsupervised Feature Disentangle: Several pioneering works focus on unsupervised disentangled representation learning. Following the proposal of GANs (Goodfellow et al., 2014), Chen et al. (2016) propose InfoGAN to learn a mapping from a group of latent variables to the data in an unsupervised manner. Many similar methods were proposed to achieve more stable results (Higgins et al., 2017; Kumar et al., 2018). However, the factors learned by these works are hard to interpret, and the meaning of each learned factor is uncontrollable. Some follow-up works focus on dividing latent factors into different sets to enforce better disentangling. Mathieu et al. (2016) assign one code to the specified factors of variation associated with the labels, and leave the remainder as unspecified variability. Similar to Mathieu et al. (2016), Hu et al. (2018) propose to obtain disentanglement of feature chunks by leveraging autoencoders, with the supervision of some same/different class pairs. Dupont (2018) divides the latent variables into discrete and continuous ones and places them under different prior distributions. In our work, we give one branch of the representation a more complicated prior, forcing it to represent only the pose information of the object. <--- Page Split ---> Supervised Pose Synthesis: Recently, the boom of GAN research has improved the capacity of pose-guided image generation. Ma et al. (2017) first try to synthesize pose images with U-Net-like networks. Several works soon followed this appealing topic and obtained better results on human pose or face generation. A work close to ours, from Esser et al. (2018), applies a conditional U-Net for shape-guided image generation. Nevertheless, existing works rely on massive annotated data: they need to treat the pose of an object as input or require a strong pre-trained pose estimator.

Unsupervised Structure Learning: Learning the structure of objects without supervision is one of the essential topics in computer vision. Early works focus on keypoint detection and on learning strong descriptors to match (Thewlis et al., 2017; Rocco et al., 2018). Two recent concurrent works, from Jakab et al. (2018) and Zhang et al. (2018), show the possibility of end-to-end learning of structure in an autoencoder formulation. Our work can be seen as extending their work to learn the complementary appearance representation as well (in other words, in the loss of Eq. 1, they only consider the first two terms and ignore the factor \(z\)).

## 4 EXPERIMENTS

### 4.1 EXPERIMENTAL PROTOCOL

Datasets: We evaluate our method on four datasets that cover both synthesized and real-world data: 1) MNIST-Color: we extend MNIST by either colorizing the digit (MNIST-CD) or the background (MNIST-CB) with a randomly chosen color, following Gonzalez-Garcia et al. (2018). We use the standard split of training (50k) and testing (10k) sets. 2) 3D Chair: Aubry et al. (2014) offer rendered images of 1393 CAD chair models. We take 1343 chairs for training and the remaining 50 chairs for testing. For each chair, 12 rendered images with different views are selected randomly. 3) Cat & Dog Face: we collect 6k (5k for training and 1k for testing) images of cats and dogs from the YFCC100M (Kalkowski et al., 2015) and Stanford Dogs (Khosla et al., 2011) datasets respectively. All images are center-cropped around the face and scaled to the same size. 4) CelebA: this dataset supplies plenty of celebrity faces with different attributes. The training and testing sizes are 160K and 20K respectively. ![](images/4_0.jpg) <center>Figure 3: Conditional generation results: (a) Walking in the appearance space with fixed structure. (b) Walking in the structure space with fixed appearance. (c) A visualization of the disentangled space by linear interpolation: the structure changes smoothly along each row and the appearance along each column. </center>

Evaluation Metric: Few existing evaluation metrics and benchmarks can be utilized to evaluate the performance of disentanglement. Here we propose two forms of evaluation to study the behavior of the proposed framework: 1) Qualitative: we provide four kinds of qualitative results to show as many usages of the disentangled space as possible, i.e., conditional sampling, interpolation, retrieval, and visual analogy. 2) Quantitative: we apply several metrics that are widely employed in image generation: (a) structure consistency: the content similarity metric (Li et al., 2017) and the mean error of landmarks (Bulat & Tzimiropoulos, 2017); (b) appearance consistency: the style similarity metric (Johnson et al., 2016); (c) disentangled ability: retrieval Recall@K (Sangkloy et al., 2016); (d) reconstruction and generation quality: SSIM (Wang et al., 2004) and Inception Score (Salimans et al., 2016). <--- Page Split --->
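As a concrete example of the retrieval-based disentanglement measure above, the sketch below computes Recall@K for nearest-neighbor retrieval in a representation space (e.g., the structure codes with digit labels). The protocol follows the description in the text; the variable names and the cosine-similarity choice are ours.

```python
import torch
import torch.nn.functional as F

def recall_at_k(codes, labels, k=1):
    """Recall@K: a query scores 1 if any of its k nearest neighbors shares its label.

    codes: (N, D) float tensor of representations; labels: (N,) long tensor.
    """
    codes = F.normalize(codes, dim=1)
    sim = codes @ codes.t()                    # cosine similarity matrix (N, N)
    sim.fill_diagonal_(float("-inf"))          # exclude the query itself
    nn_idx = sim.topk(k, dim=1).indices        # (N, k) nearest-neighbor indices
    hits = (labels[nn_idx] == labels.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()
```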
### 4.2 RESULTS ON SYNTHESIZED DATASETS

Diverse Generation. We first demonstrate the diversity of the conditional generation results on MNIST-Color with the successfully disentangled structure and appearance in Fig. 3. It can be observed that, given an image as the structure condition, the same digit with different appearances can be generated by randomly sampling the appearance condition images. Conversely, given an image as the appearance condition, different digits with the same color can be generated by sampling different structure condition images. Note that the model has no prior knowledge of the digit in the image, as no label is provided; it effectively learns the disentanglement spontaneously.

Interpolation. In Fig. 3, the linear interpolation results show reasonable coverage of the manifold. From left to right, the color changes smoothly from blue to red in the interpolated appearance latent space while maintaining the digit information. Analogously, the color stays stable while one digit transforms smoothly into the other from top to bottom.

Retrieval. To demonstrate the disentangled ability of the representation learned by the model, we perform nearest-neighbor retrieval experiments following Mathieu et al. (2016) on MNIST-Color. With the structure and appearance representations, both semantic and visual retrieval can be performed respectively. The qualitative results are shown in appendix A. Quantitatively, we use the common retrieval metric Recall@K as in (Sangkloy et al., 2016; Pang et al., 2017), where for a particular query digit, Recall@K is 1 if the corresponding digit is within the top-K retrieved results and 0 otherwise. We report the most challenging Recall@1, averaged over all queries on the test set, in Table 2. It can be observed that the structure representation shows the best performance and clearly outperforms the image pixels and the appearance representation. Beyond demonstrating disentanglement, this result shows that the structure representation learned by our model is useful for visual retrieval.

Visual Analogy. The task of visual analogy is to transform a particular attribute of a given reference image to a query one (Reed et al., 2015). We show the visual analogy results on MNIST-Color and 3D Chair in Fig. 4. Note that even for detailed components (e.g., the wheels and legs of a 3D chair) the structure can be maintained successfully, which was a rather challenging task for previous unsupervised works (Chen et al., 2016; Higgins et al., 2017). ![](images/5_0.jpg) <center>Figure 4: Visual analogy results on synthesized datasets: (a) MNIST-CD. (b) MNIST-CB. (c) 3D Chair. Taking the structure representation of a query image and the appearance representation of the reference one, our model outputs an image that maintains the geometric shape of the query image while capturing the appearance of the reference image. </center>
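The visual analogy operation itself is just a swap of the two codes followed by decoding, and the interpolation of Fig. 3 is a linear blend of codes. Below is a hedged sketch reusing the hypothetical modules from the earlier objective sketch; the function names are ours.

```python
import torch

@torch.no_grad()
def visual_analogy(x_query, x_ref, hourglass, appearance_encoder, prior_encoder, decoder):
    """Decode with the query's structure and the reference's appearance."""
    y = hourglass(x_query)                            # structure from the query image
    z = appearance_encoder(x_ref, hourglass(x_ref))   # appearance from the reference
    _, skips = prior_encoder(y)
    return decoder(y, z, skips)

@torch.no_grad()
def interpolate_appearance(x_query, x_a, x_b, alpha,
                           hourglass, appearance_encoder, prior_encoder, decoder):
    """Walk the appearance space with fixed structure (cf. Fig. 3)."""
    y = hourglass(x_query)
    z_a = appearance_encoder(x_a, hourglass(x_a))
    z_b = appearance_encoder(x_b, hourglass(x_b))
    z = (1 - alpha) * z_a + alpha * z_b               # linear interpolation of codes
    _, skips = prior_encoder(y)
    return decoder(y, z, skips)
```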
### 4.3 RESULTS ON REAL-LIFE DATASETS

We have so far only discussed results on the synthesized benchmarks. In this section, we demonstrate the scalable performance of our model on several real-life datasets, i.e., Cat Face, Dog Face, and CelebA. To the best of our knowledge, no previous work on unsupervised disentanglement successfully extends to photo-realistic generation at \(256 \times 256\) resolution. <--- Page Split ---> 

Table 1: Structure and appearance consistency evaluation on the Cat and CelebA datasets (lower is better). 

<table><tr><td>Method</td><td colspan="3">Cat</td><td colspan="3">CelebA</td></tr><tr><td></td><td>Style (×e-5)</td><td>Content (×e-5)</td><td>Landmark (%)</td><td>Style (×e-5)</td><td>Content (×e-5)</td><td>Landmark (%)</td></tr><tr><td>Random</td><td>7.700</td><td>1.881</td><td>0.051</td><td>5.858</td><td>1.693</td><td>0.293</td></tr><tr><td>Ours</td><td>5.208</td><td>1.759</td><td>0.030</td><td>3.886</td><td>1.529</td><td>0.162</td></tr></table>

Owing to the structural prior, which accurately captures the structural information of images, our model can transform appearance information while faithfully maintaining geometric shapes. Qualitative evaluation is performed by visually examining the perceptual quality of the generated images. In Fig. 7, the swapping results along with the learned geometry heatmaps \(y\) are illustrated on the Cat dataset. It can be seen that the geometry information (i.e., expression, head pose, facial action) and the appearance information (i.e., hair texture) can be swapped arbitrarily. The learned geometry heatmaps can be shown as a map with several 2D Gaussian points, which successfully encode the geometric cues of an image through the locations of its points and supply an effective prior for the VAE network. More visual analogy results on the real-life Stanford Dogs and CelebA datasets are illustrated in Fig. 5. We observe that the model is able to generalize to various real-life images with large variations, such as mouth-opening, eye-closing, tongue-sticking, and exclusive appearance. For quantitative measurement, there is no standard metric for the quality of visual analogy results on real-life datasets, since ground-truth targets are absent. We instead propose to evaluate the structure and appearance consistency of the analogy predictions. We use the content similarity metric to evaluate the structure consistency between a condition input \(x_{s}\) and the images it guides (e.g., each column of images in Fig. 7). We use the style similarity metric to evaluate the appearance consistency between a condition input \(x_{a}\) and the images it guides (e.g., each row of images in Fig. 7). These two metrics are widely used in image generation applications as training objectives for maintaining content and texture information (Li et al., 2017; Johnson et al., 2016). As the content similarity metric is less sensitive to small variations between images, we also propose to use the mean error of landmarks detected by a landmark detection network, pre-trained on manually annotated data, to evaluate the structure consistency. Since the public cat facial landmark annotations are too sparse to evaluate structure consistency (e.g., 9 points (Zhang et al., 2008)), we manually annotated 10k cat faces with 18 points to train a landmark detection network for evaluation purposes. For the evaluation on CelebA, a state-of-the-art model (Bulat & Tzimiropoulos, 2017) with 68 landmarks is used. The results on the testing sets of the two real-life datasets are reported in Table 1. For each test image, 1k other images from the testing set are used as the structure or appearance references for generation, and the mean value over them is reported. In the random baseline setting, the mean value for a test image is instead computed by sampling randomly among the generated images guided by each image. The superior structure and appearance consistency of the images generated by our method can be clearly observed. ![](images/6_0.jpg) <center>Figure 5: Visual analogy results on real-life datasets: (a) Stanford Dogs. (b) CelebA. The geometry (e.g., identity, head pose, and expression) of the query image is faithfully maintained while the appearance (e.g., hair color, beard, and illumination) of the reference image is precisely transformed. As concrete examples, the dog output in the third column is still tongue-sticking while the hair color is changed, and in the last column of CelebA, even the fine-grained eye make-up is successfully transformed to the query image. </center> <--- Page Split --->
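The content and style similarity metrics used in Table 1 can be sketched in the usual VGG-feature form (Johnson et al., 2016): content compares feature maps directly, style compares their Gram matrices. The feature extractor here is assumed to be the `ReconLoss.features` helper sketched earlier; the averaging scheme is an illustrative choice.

```python
import torch

def gram(f):
    """Gram matrix of a feature map f: (B, C, H, W) -> (B, C, C)."""
    B, C, H, W = f.shape
    f = f.view(B, C, H * W)
    return f @ f.transpose(1, 2) / (C * H * W)

def content_similarity(feats_a, feats_b):
    """Mean squared distance between feature maps (structure consistency)."""
    return sum((fa - fb).pow(2).mean()
               for fa, fb in zip(feats_a, feats_b)) / len(feats_a)

def style_similarity(feats_a, feats_b):
    """Mean squared distance between Gram matrices (appearance consistency)."""
    return sum((gram(fa) - gram(fb)).pow(2).mean()
               for fa, fb in zip(feats_a, feats_b)) / len(feats_a)
```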
### 4.4 COMPARISON TO OTHER METHODS

Since hardly any prior work shares exactly the same setting as ours, as discussed in the related work, we compare perceptual quality with the four most related unsupervised representation learning methods in Fig. 6: three disentangled factor learning methods, i.e., VAE (Kingma & Welling, 2014), \(\beta\)-VAE (Higgins et al., 2017), and InfoGAN (Chen et al., 2016), and one unsupervised structure learning method (Jakab et al., 2018). It can be observed that all three disentangled factor learning methods can automatically discover and learn to disentangle the azimuth factor on the 3D Chair dataset. However, the geometric shape is maintained much better by our approach than by all the other methods, owing to the informative prior supplied by our structure branch. We randomly sample several query-reference pairs from the testing set to compare with the results reported in the paper of Jakab et al. (2018). The results of the unsupervised structure learning method have severe artifacts and look more blurred compared with ours. Moreover, the identity of the query face is hardly kept in the results of Jakab et al. (2018). ![](images/7_0.jpg) <center>Figure 6: Comparison to other methods. Qualitative results of the disentangling performance of VAE, \(\beta\)-VAE, InfoGAN, and Jakab et al. (2018). We demonstrate the disentanglement of the azimuth factor on the 3D Chair dataset. Visual analogy results are demonstrated for the face dataset. </center>

### 4.5 ABLATION STUDY

It is worth studying the effect of each individual component of our method on the quality of the generated images. Structural Similarity (SSIM) and Inception Score (IS) are utilized to evaluate the reconstruction quality and the analogy quality. As reported in Table 2, without the KL loss the network has no incentive to learn a shape-invariant appearance representation, and almost all of the metrics degrade dramatically.
Table 2: Recall@1 (%) of retrieval with different representations on MNIST-Color (top), and ablation study of the KL loss on the Cat dataset (bottom).

<table><tr><td>Method</td><td>Color-Digit</td><td>Color-Back</td></tr><tr><td>Pixel</td><td>31.65</td><td>39.52</td></tr><tr><td>Appearance</td><td>10.25</td><td>15.32</td></tr><tr><td>Structure</td><td>99.96</td><td>99.92</td></tr></table>

<table><tr><td>Method</td><td>Style (×e-5)</td><td>Content (×e-5)</td><td>Landmark (%)</td><td>SSIM mean</td><td>Inception Score mean</td></tr><tr><td>Real Data</td><td></td><td></td><td></td><td>1.000</td><td>0.000</td></tr><tr><td>Without KL Loss</td><td>6.556</td><td>1.813</td><td>0.036</td><td>0.406</td><td>0.103</td></tr><tr><td>Ours</td><td>5.208</td><td>1.759</td><td>0.030</td><td>0.449</td><td>0.121</td></tr></table>

## 5 CONCLUSION

We extend the VAE framework to disentangle an object's representation into structure and appearance. Our framework is able to mine structure from a category of objects and simultaneously learn a structure-invariant appearance representation, without any annotation. Our work may also reveal several potential topics for future research: 1) Instead of relying on supervision, using a strong prior to restrict the latent variables appears to be an effective tool for disentangling. 2) In this work we only experiment on near-rigid objects like chairs and faces; learning on deformable objects is still an open problem. 3) The structure-invariant appearance representation may have potential for recognition tasks. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 7: A grid of structure & appearance swapping visualization. The top row and left-most column are randomly selected from the test set. In each column, the structure of the generated images is consistent with the top one. In each row, the appearance of the generated images is consistent with the left-most one. </center>

## REFERENCES

Mathieu Aubry, Daniel Maturana, Alexei A. Efros, Bryan C. Russell, and Josef Sivic. Seeing 3d chairs: Exemplar part-based 2d-3d alignment using a large dataset of CAD models. In CVPR, 2014. 
Joan Bruna, Pablo Sprechmann, and Yann LeCun. Super-resolution with deep convolutional sufficient statistics. In ICLR, 2016. 
Adrian Bulat and Georgios Tzimiropoulos. How far are we from solving the 2d & 3d face alignment problem? In ICCV, 2017. 
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS, 2016. <--- Page Split ---> 
Emilien Dupont. Learning disentangled joint continuous and discrete representations. In NIPS, 2018. 
Patrick Esser, Ekaterina Sutter, and Björn Ommer. A variational u-net for conditional appearance and shape generation. In CVPR, 2018. 
Abel Gonzalez-Garcia, Joost van de Weijer, and Yoshua Bengio. Image-to-image translation for cross-domain disentanglement. In NIPS, 2018. 
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014. 
Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017. 
Qiyang Hu, Attila Szabó, Tiziano Portenier, Paolo Favaro, and Matthias Zwicker. Disentangling factors of variation by mixing them. In CVPR, 2018. 
Gary B. Huang, Vidit Jain, and Erik G. Learned-Miller. Unsupervised joint alignment of complex images. In ICCV, 2007.
Tomas Jakab, Ankush Gupta, Hakan Bilen, and Andrea Vedaldi. Conditional image generation for learning the structure of visual objects. In NIPS, 2018. 
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016. 
Sebastian Kalkowski, Christian Schulze, Andreas Dengel, and Damian Borth. Real-time analysis and visualization of the yfcc100m dataset. In MM Workshop, 2015. 
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In CVPR Workshop, 2011. 
Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014. 
Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. In ICLR, 2018. 
Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. Universal style transfer via feature transforms. In NIPS, 2017. 
Liqian Ma, Xu Jia, Qianru Sun, Bernt Schiele, Tinne Tuytelaars, and Luc Van Gool. Pose guided person image generation. In NIPS, 2017. 
Michael F. Mathieu, Junbo Jake Zhao, Junbo Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann LeCun. Disentangling factors of variation in deep representation using adversarial training. In NIPS, 2016. 
Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016. 
Kaiyue Pang, Yi-Zhe Song, Tony Xiang, and Timothy M. Hospedales. Cross-domain generative learning for fine-grained sketch-based image retrieval. In BMVC, 2017. 
Scott E. Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep visual analogy-making. In NIPS, 2015. 
Ignacio Rocco, Relja Arandjelovic, and Josef Sivic. End-to-end weakly-supervised semantic alignment. In CVPR, 2018. 
Mehdi S. M. Sajjadi, Bernhard Schölkopf, and Michael Hirsch. Enhancenet: Single image super-resolution through automated texture synthesis. In ICCV, 2017. <--- Page Split ---> 
Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016. 
Patsorn Sangkloy, Nathan Burnell, Cusuh Ham, and James Hays. The sketchy database: learning to retrieve badly drawn bunnies. ACM Transactions on Graphics, 2016. 
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 
James Thewlis, Hakan Bilen, and Andrea Vedaldi. Unsupervised learning of object frames by dense equivariant image labelling. In NIPS, 2017. 
Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Processing, 13(4):600-612, 2004. 
Weiwei Zhang, Jian Sun, and Xiaoou Tang. Cat head detection - how to effectively exploit shape and texture features. In ECCV, 2008. 
Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, and Honglak Lee. Unsupervised discovery of object landmarks as structural representations. In CVPR, 2018.

## A APPENDIX

## A.1 DETAILS OF ARCHITECTURE

We use Adam with parameters \(\beta_{1} = 0.5\) and \(\beta_{2} = 0.999\) to optimize the network with a minibatch size of 8 for 160 epochs on all datasets. The initial learning rate is set to 0.0001 and is then decreased linearly to 0 during training. The network architecture used for our experiments is given in Table 3. We use the following abbreviations for ease of presentation: N=Neurons, K=Kernel size, S=Stride size. The transposed convolutional layer is denoted by DCONV.
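The optimizer settings above translate into a few lines of PyTorch. The sketch below assumes a combined parameter list `params` and implements the linear decay with `LambdaLR`, stepped once per epoch; this is one of several equivalent ways to realize the stated schedule, and assumes the decay begins immediately.

```python
import torch

def make_optimizer(params, epochs=160, lr=1e-4):
    """Adam with beta1=0.5, beta2=0.999; lr decayed linearly from 1e-4 to 0."""
    opt = torch.optim.Adam(params, lr=lr, betas=(0.5, 0.999))
    sched = torch.optim.lr_scheduler.LambdaLR(
        opt, lr_lambda=lambda epoch: 1.0 - epoch / epochs)
    return opt, sched
```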
Table 3: Network architecture of the encoder and decoder.

<table>
<tr><td></td><td>Layer</td><td>Module</td></tr>
<tr><td rowspan="9">Encoder (Eφ, Eθ)</td><td>1</td><td>CONV-(N64,K4,S2)</td></tr>
<tr><td>2</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>3</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>4</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>5</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>6</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>7</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>8</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>μ</td><td>CONV-(N128,K1,S1)</td></tr>
<tr><td rowspan="9">Decoder (Dθ)</td><td>1</td><td>CONV-(N128,K1,S1)</td></tr>
<tr><td>2</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>3</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>4</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>5</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>6</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>7</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr>
<tr><td>8</td><td>ReLU, DCONV-(N64,K4,S2), InstanceNorm</td></tr>
<tr><td>9</td><td>ReLU, DCONV-(N3,K4,S2), Tanh</td></tr>
</table>

## A.2 QUALITATIVE RESULTS

The qualitative retrieval results on MNIST-Color are illustrated in Fig. 8. With the structure and appearance representations, both semantic and visual retrieval can be performed respectively. Moreover, the interpolation results on 3D Chair, with the same arrangement as MNIST-Color, are shown in Fig. 9. <--- Page Split ---> ![](images/11_0.jpg) <center>Figure 8: Four randomly chosen query images and their corresponding 5 nearest neighbors, retrieved with image pixels, the structure code, and the appearance code respectively. </center> ![](images/11_1.jpg) <center>Figure 9: Interpolation results on 3D Chair. </center> <--- Page Split --->
## ABSTRACT It is challenging to disentangle an object into two orthogonal spaces of structure and appearance since each can influence the visual observation in a different and unpredictable way. It is rare for one to have access to a large number of data to help separate the influences. In this paper, we present a novel framework to learn this disentangled representation in a completely unsupervised manner. We address this problem in a two- branch Variational Autoencoder framework. For the structure branch, we project the latent factor into a soft structured point tensor and constrain it with losses derived from prior knowledge. This encourages the branch to distill geometry information. Another branch learns the complementary appearance information. The two branches form an effective framework that can disentangle object's structure- appearance representation without any human annotation. We evaluate our approach on four image datasets, on which we demonstrate the superior disentanglement and visual analogy quality both in synthesized and real- world data. We are able to generate photo- realistic images with \(256 \times 256\) resolution that are clearly disentangled in structure and appearance. ## 1 INTRODUCTION Structure and appearance are the two most inherent attributes that characterize an object visually. Computer vision researchers have devoted decades of efforts to understand object structure and extract features that are invariant to geometry change (Huang et al., 2007; Thewlis et al., 2017; Rocco et al., 2018). Learning such disentangled deep representation for visual objects is an important topic in deep learning. ![](images/0_0.jpg) <center>Figure 1: Walking in the disentangled representation space: Here we show an example learned by our algorithm. Our approach effectively disentangles the structure and appearance space. Three cat faces in the bounding box are from real data while others are interpolated through our learned representations. </center> <--- Page Split ---> The main objective of our work is to disentangle object's appearance and structure in an unsupervised manner. Achieving this goal is non- trivial due to three reasons: 1) Without supervision, we can hardly guarantee the separation of different representations in the latent space. 2) Although some methods like InfoGAN (Chen et al., 2016) are capable of learning several groups of independent attributes from objects, attributes from these unsupervised frameworks are uninterpretable since we cannot pinpoint which portion of the disentangled representation is related to the structure and which to the appearance. 3) Learning structure from a set of natural real- world images is difficult. To overcome the aforementioned challenges, we propose a novel two- branch Variational Autoencoder (VAE) framework, of which the structure branch aims to discover meaningful structural points to represent the object geometry, while the other appearance branch learns the complementary appearance representation. The settings of these two branches are asymmetric. For the structure branch, we add a layer- wise softmax operator to the last layer. This can be seen as a projection of a latent structure to a soft structured point tensor space. Specifically designed prior losses are used to constrain the structured point tensors so that the discovered points have high repeatability across images yet distributed uniformly to cover different parts of the object. 
To encourage the framework to learn a disentangled yet complementary representation of both appearance and structure, we introduce a Kullback- Leibler (KL) divergence loss and skip- connections design to the framework. Extensive experiments demonstrate the effectiveness of the proposed method in manipulating the structure and appearance of natural images, e.g., cat faces in Figure 1, outperform state- of- the- art algorithms (Chen et al., 2016; Higgins et al., 2017; Jakab et al., 2018). We also conduct several experiments on MNIST- Color, 3D synthesized data and real photos. ## 2 METHODOLOGY In the absence of annotation on structure, we rely on prior knowledge on how object landmarks should distribute to constrain the learning and disentanglement of structural information. Our experiments show that this is possible given appropriate prior losses and learning architecture. We first formulate our loss function with a consideration on prior. Specifically, we follow the VAE framework and assume 1) the two latent variables \(z\) and \(y\) , which represent the appearance and structure, are generated from some prior distributions. 2) \(x\) follows the conditional distribution \(p(x|y,z)\) . We start with a Bayesian formulation and maximize the log- likelihood over all observed samples \(x \in X\) . \[\begin{array}{r l} & {\log p(x) = \log p(y) + \log p(x|y) - \log p(y|x)}\\ & {\qquad \geq \log p(y) + \log \int p(x,z|y)\mathrm{d}z}\\ & {\qquad \geq \log p(y) + \mathbb{E}_{q}\log \frac{p(x,z|y)}{q(z|x,y)}}\\ & {\qquad = \log p(y) + \mathbb{E}_{q}\log \frac{p(x|y,z)p(z|y)}{q(z|x,y)}}. \end{array} \quad (1)\] Equation 1 learns a deterministic mapping \(e(\cdot ;\theta)\) from \(x\) to \(y\) , which we assume \(y\) is following a Gaussian distribution over \(\mathcal{N}(e(x;\omega),\Sigma)\) . Term \(-\log p(y|x)\) is non- negative. In the second line of the equation, we start to consider the factor \(z\) . Similar to VAE, we address the issue of intractable integral by introducing an approximate posterior \(q(y,z|x;\phi)\) to estimate the integral using evidence lower bound (ELBO). By splitting the \(p(x|y,z)\) from the second term of the last expression, we obtain our final loss as, \[\mathcal{L}(x,\theta ,\phi ,\omega) = -\log p_{\omega}(y) - \mathbb{E}_{q_{\phi (z|x,y)}}\log p_{\theta}(x|y,z) + \mathrm{KL}(q_{\phi (z|x,y)}(z|x,y)||p_{\theta}(z|y)). \quad (2)\] The first term is the prior on \(y\) . The second term describes the conditional distribution of \(x\) given all representation. Ideally, if the decoder can perfectly reconstruct the \(x\) , the second term would be a delta function over \(x\) . The third term represents the Kullback- Leibler divergence between approximate. In the rest of this paper we name these three terms respectively as prior loss \(L_{\mathrm{prior}}\) reconstruction loss \(L_{\mathrm{recon}}\) and KL loss \(L_{KL}\) <--- Page Split ---> ![](images/2_0.jpg) <center>Figure 2: Architecture: Our framework follows an auto-encoder framework. It contains two branches: 1) the structure branch forces the representation into a Gaussian spatial probability distribution with an hourglass network \(e_{\omega}\) . 2) the appearance branch \(E_{\phi}\) learns a complementary appearance representation to the structure. </center> ### 2.1 PRIOR LOSS Inspired by Zhang et al. (2018) and Jakab et al. (2018), we formulate our structure representation \(y\) as a soft latent structured point tensor. 
A re- projecting operator is applied here to force \(y\) to lie on a Gaussian spatial probability distribution space. Following the notations from Newell et al. (2016), we denote the direct outputs of the hourglass network \(e_{\omega}\) as landmark heatmaps \(h\) , and each channel of which represents the spatial location of a structural point. Instead of using max activations across each heatmap as landmark coordinates, we weighted average all activations across each heatmap. We then re- project landmark coordinates to spatial features with the same size as heatmaps by a fixed Gaussian- like function centered at predicted coordinates with a fixed standard deviation. As a result, we obtain a new tensor \(y\) with prior on structure representation. Similar to the difficulty described in Zhang et al. (2018), we find that training the structure branch with general random initialization tend to locate all structural points around the mean location at the center of the image. This could lead to a local minimum from which optimizer might not escape. As such, we introduce a Separation Loss to encourage each heatmap to sufficiently cover the object of interest. This is achieved by the first part in Eq. 3, where we encourage each pair of \(i^{th}\) and \(j^{th}\) heatmaps to share different activations. \(\sigma\) can be regarded as a normalization factor here. Another prior constraint is that we wish the structural point to behave like landmarks to encode geometry structure information. To achieve this goal, we add a Concentration Loss to encourage the variance of activations \(h\) to be small so that it could concentrate at a single location. This corresponds to the second term in Eq. 3. \[L_{prior} = \sum_{i\neq j}\exp (-\frac{||h_{i} - h_{j}||^{2}}{2\sigma^{2}}) + \mathrm{Var}(h) \quad (3)\] It is noteworthy that some recent works have considered the prior of latent factor. Dupont (2018) proposed a Joint- \(\beta\) - VAE by adding different prior distribution over several latent factors so as to disentangle continuous and discrete factors from data. Our work differs in that we investigates a different prior to disentangle visual structure and appearance. ### 2.2 RECONSTRUCTION LOSS For the second term we optimize the reconstruction loss of whole model, which will be denoted as generator \(G\) in the following context. We assume that the decoder \(D_{\theta}\) is able to reconstruct original input \(x\) from latent representation \(y\) and \(z\) , which is \(\hat{x} = G(y,z)\) . Consequently, we can design the reconstruction loss as \(L_{\mathrm{recon}} = \| x - \hat{x} \|_{1}\) . <--- Page Split ---> However, minimizing \(L_{1} / L_{2}\) loss at pixel- level only does not model the perceptual quality well and makes the prediction look blurry and implausible. This phenomenon has been well- observed in the literature of super- resolution (Bruna et al., 2016; Sajjadi et al., 2017). We consequently define the reconstruction loss as \(L_{\mathrm{recon}} = \| x - \hat{x}\|_{1} + \sum_{i}\lambda_{i}\| \psi_{i}(x) - \psi_{i}(\hat{x})\|_{1}\) , where \(\psi_{i}\) is the feature obtained from \(l\) - th layer of a VGG- 19 model (Simonyan & Zisserman, 2014) pre- trained on ImageNet. It is also possible to add adversarial loss to further improve the perceptual reconstruction quality. Since the goal of this work is disentanglement rather than reconstruction, we only adopt the \(L_{\mathrm{recon}}\) described above. 
### 2.3 KL LOSS We model \(q(z|x,y)\) as a parametric Gaussian distribution which can be estimated by the encoder network \(E_{\phi}\) . Therefore, the appearance code \(z\) can be sampled from \(q(z|x,y)\) . Meanwhile, the prior \(p(z|y)\) can be estimated by the encoder network \(E_{\theta}\) . By using the reparametrization trick (Kingma & Welling, 2014), these networks can be trained end- to- end. In this work, only mean is estimated for the stability of learning. By modeling the two distributions as Gaussian with identity covariances, the KL Loss is simply equal to the Euclidean distance between their means. Thus, \(z\) is regularized by minimizing the KL divergence between \(q(z|x,y)\) and \(p(z|y)\) . Notice that with only prior and reconstruction loss. The framework only makes sure \(z\) is from \(x\) and the Decoder \(D_{\theta}\) will recover as much information of \(x\) as possible. There is no guarantee that \(z\) will learn a complementary of \(y\) . Towards this end, we design the network as concatenating the encoded structure representation by \(E_{\theta}\) with the inferred appearance code \(z\) . Then, the concatenated representation is decoded together by \(D_{\theta}\) . Moreover, skip- connections between \(E_{\theta}\) and \(D_{\theta}\) are also used to pass multi- level structure information to the decoder. Since enough structure information can be obtained from prior, any information about structure encoded in \(z\) incurs a penalty of the likelihood \(p(x|y,z)\) with no new information (i.e. appearance information) is captured. This design of network and the KL Loss result in a constraint to guide \(z\) to encode more information about appearance which is complementary to the structure prior. ### 2.4 IMPLEMENTATION DETAIL Each of the input images \(x\) is cropped and resized to \(256 \times 256\) resolution. A one- stack hourglass network (Newell et al., 2016) is used as a geometry extractor \(e_{\omega}\) to project input image to the heatmap \(y \in \mathbb{R}^{256 \times 256 \times 30}\) , in which each channel represents one point- centered 2D- Gaussian map (with \(\sigma = 4\) ). \(y\) is drawn in a single- channel map for visualization in Fig. 2. Same network (with stride- 2 convolution for downsample) is use for both \(E_{\theta}\) and \(E_{\phi}\) to obtain appearance representation \(z\) and the embedded structure representation as two 128- dimension vectors. A symmetrical deconvolution network with skip connection is used as the decoder \(D_{\theta}\) to get the reconstructed result \(\hat{x}\) . All of the networks are jointly trained from scratch end- to- end. We detail the architectures and hyperparameters used for our experiments in appendix A. ## 3 RELATED WORK Unsupervised Feature Disentangle: Several pioneer works focus on unsupervised disentangled representation learning. Following the propose of GANs (Goodfellow et al., 2014), Chen et al. (2016) purpose InfoGAN to learn a mapping from a group of latent variables to the data in an unsupervised manner. Many similar methods were purposed to achieve a more stable result (Higgins et al., 2017; Kumar et al., 2018). However, these works suffer to interpret, and the meaning of each learned factor is uncontrollable. There are some following works focusing on dividing latent factors into different sets to enforce better disentangling. Mathieu et al. (2016) assign one code to the specified factors of variation associated with the labels, and left the remaining as unspecified variability. 
Similar to Mathieu et al. (2016), Hu et al. (2018) then proposes to obtain disentanglement of feature chunks by leveraging Autoencoders, with the supervision of some same/different class pairs. Dupont (2018) divides latent variable into discrete and continuous one, and distribute them in different prior distribution. In our work, we give one branch of representation are more complicated prior, to force it to represent only the pose information for the object. <--- Page Split ---> Supervised Pose Synthesis: Recently the booming of GANs research improves the capacity of pose- guided image generation. Ma et al. (2017) firstly try to synthesize pose images with U- Net- like networks. Several works soon follow this appealing topic and obtain better results on human pose or face generation. A close work to us from Esser et al. (2018) applied a conditional U- Net for shape- guided image generation. Nevertheless, existing works rely on massive annotated data, they need to treat pose of a object as input, or a strong pre- trained pose estimator. Unsupervised Structure Learning: Unsupervised learning structure from objects is one of the essential topics in computer vision. The rudimentary works focus on keypoints detection and learning a strong descriptor to match (Thewlis et al., 2017; Rocco et al., 2018). Recent two concurrent works, from Jakab et al. (2018) and Zhang et al. (2018), show the possibility of end- to- end learning of structure in Autoencoder formulations. Our work can be seen as extending their work to learn the complementary appearance representation as well (in other words, in the loss Eq. 1, they only consider the first two terms, and ignore the factor from \(z\) ). ## 4 EXPERIMENTS ### 4.1 EXPERIMENTAL PROTOCOL Datasets: We evaluate our method on four datasets that cover both synthesized and real world data: 1). MNIST- Color: we extend MNIST by either colorizing the digit (MNIST- CD) or the background (MNIST- CB) with a randomly chosen color following Gonzalez- Garcia et al. (2018). We use the standard split of training (50k) and testing (10k) set. 2). 3D Chair: Aubry et al. (2014) offers rendered images of 1393 CAD chair models. We take 1343 chairs for training and the left 50 chairs for testing. For each chair, 12 rendered images with different views are selected randomly. 3). Cat & Dog Face, we collect 6k (5k for training and 1k for testing) images of cat and dog from YFCC100M (Kalkowski et al., 2015) and Standford Dog (Khosla et al., 2011) datasets respectively. All images are center cropped around the face and scaled to the same size. 4). CelebA: it supplies plenty of celebrity faces with different attributes. The training and testing sizes are 160K and 20K respectively. ![](images/4_0.jpg) <center>Figure 3: Conditional generation results:(a) Walking in the appearance space with fixed structure. (b) Walking in the structure space with fixed appearance. (c) A visualization of the disentangled space by linear interpolation. The Structure is smoothly changed in row-wise and the appearance is changed by each column. </center> Evaluation Metric: Less existed evaluation metric and benchmark can be utilized to evaluate the performance of disentanglement. Here we propose two forms of evaluation to study the behavior of the proposed framework: 1). Qualitative: we provide four kinds of qualitative results to show as many usages of the disentangled space as possible, i.e. conditional sampling, interpolation, retrieval, and visual analogy. 2). 
Quantitative: we apply several metrics that are widely employed in image generation (a) Structure consistency: content similarity metric (Li et al., 2017) and mean- error of landmarks (Bulat & Tzimiropoulos, 2017). (b) Appearance consistency: style similarity metric (Johnson et al., 2016) (c). Disentangled ability: retrieval recall@K (Sangkloy et al., 2016). (d). Reconstruction and generation quality: SSIM (Wang et al., 2004) and Inception Score (Salimans et al., 2016). <--- Page Split ---> ### 4.2 RESULTS ON SYNTHESIZED DATASETS Diverse Generation. We first demonstrate the diversity of conditional generation results on MNIST- Color with the successfully disentangled structure and appearance in Fig. 3. It can be observed that, given an image as a structure condition, same digit information with different appearance can be generated by sampling the appearance condition images randomly. While given an image as appearance condition, different digits with the same color can be generated by sampling different structural conditional images. Note that the model has no prior knowledge of the digit in the image as no label is provided, it effectively learns the disentanglement spontaneously. Interpolation. In Fig. 3, the linear interpolation results show reasonable coverage of the manifold. From left to right, the color is changed smoothly from blue to red with interpolated appearance latent space while maintaining the digit information. Analogously, the color stays stable while one digit transforms into the other smoothly from top to down. Retrieval. To demonstrate the disentangled ability of the representation learned by the model, we perform nearest neighbor retrieval experiments following Mathieu et al. (2016) on MNIST- Color. With structure and appearance representation used, both semantic and visual retrieval can be performed respectively. The Qualitative results are shown in appendix A. Quantitatively. We use a commonly used retrieval metric Recall@K as in (Sangkley et al., 2016; Pang et al., 2017), where for a particular query digit, Recall@K is 1 if the corresponding digit is within the top- K retrieved results and O otherwise. We report the most challenging Recall@1 by averaging over all queries on the test set in Table 2. It can be observed that the structure representation shows the best performance and clearly outperforms image pixel and appearance representation. In addition to the disentangled ability. This result shows that the structure representation learned by our model is useful for visual retrieval. Visual Analogy. The task of visual analogy is that the particular attribute of a given reference image can be transformed to a query one (Reed et al., 2015). We show the visual analogy results on MNIST- Color and 3D Chair in Fig. 4. Note that even for the detail component (e.g. wheel and leg of 3D chair) the structure can be maintained successfully, which is a rather challenging task in previous unsupervised works (Chen et al., 2016; Higgins et al., 2017). ![](images/5_0.jpg) <center>Figure 4: Visual analogy results on synthesized datasets: (a) MNIST-CD. (b) MNIST-CB. (c) 3D Chair. Taking the structure representation of a query image and the appearance representation of the reference one, our model can output an image which maintains the geometric shape of query image while capturing the appearance of the reference image. </center> ### 4.3 RESULTS ON REAL-LIFE DATASETS We have so far only discussed results on the synthesized benchmarks. 
In this section, we demonstrate that our model scales to several real-life datasets, i.e., Cat Face, Dog Face, and CelebA. To the best of our knowledge, no prior work on unsupervised disentanglement has successfully extended to photo-realistic generation at \(256 \times 256\) resolution. <--- Page Split ---> Table 1: Structure and appearance consistency evaluation on the Cat and CelebA datasets (lower is better). <table><tr><td>Method</td><td colspan="3">Cat</td><td colspan="3">CelebA</td></tr><tr><td></td><td>Style (×e-5)</td><td>Content (×e-5)</td><td>Landmark (%)</td><td>Style (×e-5)</td><td>Content (×e-5)</td><td>Landmark (%)</td></tr><tr><td>Random</td><td>7.700</td><td>1.881</td><td>0.051</td><td>5.858</td><td>1.693</td><td>0.293</td></tr><tr><td>Ours</td><td>5.208</td><td>1.759</td><td>0.030</td><td>3.886</td><td>1.529</td><td>0.162</td></tr></table> Owing to the structural prior, which accurately captures the structural information of images, our model can transfer appearance information while faithfully maintaining geometric shape. Qualitative evaluation is performed by visually examining the perceptual quality of the generated images. In Fig. 7, the swapping results, along with the learned geometry heatmaps \(y\), are illustrated on the Cat dataset. It can be seen that the geometry information (i.e., expression, head pose, facial action) and the appearance information (i.e., hair texture) can be swapped arbitrarily. The learned geometry heatmaps take the form of maps with several 2D Gaussian points, which successfully encode the geometric cues of an image through the locations of their points and supply an effective prior for the VAE network. More visual analogy results on the real-life Stanford Dog and CelebA datasets are illustrated in Fig. 5. We observe that the model is able to generalize to various real-life images with large variations, such as mouth opening, eye closing, tongue sticking, and distinctive appearance. For quantitative measurement, there is no standard metric for the quality of visual analogy results on real-life datasets, since ground-truth targets are absent. We instead propose to evaluate the structure and appearance consistency of the analogy predictions. We use the content similarity metric to evaluate structure consistency between a condition input \(x_{s}\) and the images it guides (e.g., each column of images in Fig. 7). We use the style similarity metric to evaluate appearance consistency between a condition input \(x_{a}\) and the images it guides (e.g., each row of images in Fig. 7). These two metrics are widely used in image generation applications as training objectives to maintain content and texture information (Li et al., 2017; Johnson et al., 2016). As the content similarity metric is less sensitive to small variations of images, we also use the mean error of landmarks detected by a landmark detection network, pre-trained on manually annotated data, to evaluate structure consistency. Since the public cat facial landmark annotations are too sparse to evaluate structure consistency (e.g., 9 points (Zhang et al., 2008)), we manually annotated 10k cat faces with 18 points to train a landmark detection network for evaluation purposes. For the evaluation on CelebA, a state-of-the-art model (Bulat & Tzimiropoulos, 2017) with 68 landmarks is used. The results on the test sets of the two real-life datasets are reported in Table 1.
For each test image, the other 1k images in the test set are used as structure or appearance references for generation, and the mean value is computed over them. In the random baseline, for each test image, the mean value is computed by randomly pairing the generated images with guidance images. The superior structure and appearance consistency of the images generated by our method is clearly observed. ![](images/6_0.jpg) <center>Figure 5: Visual analogy results on real-life datasets: (a) Stanford Dog. (b) CelebA. The geometry (e.g., identity, head pose, and expression) of the query image is faithfully maintained, while the appearance (e.g., the color of hair, beard, and illumination) of the reference image is precisely transferred. As concrete examples, the output of the dog in the third column is still tongue-sticking while the hair color is changed, and in the last column of CelebA, even the fine-grained eye make-up is successfully transferred to the query image. </center> <--- Page Split --->

### 4.4 COMPARISON TO OTHER METHODS

Since hardly any prior work shares exactly the same setting as ours, as discussed in the related work, we compare perceptual quality with the four most related unsupervised representation learning methods in Fig. 6, including three disentangled factor learning methods, i.e., VAE (Kingma & Welling, 2014), \(\beta\)-VAE (Higgins et al., 2017), and InfoGAN (Chen et al., 2016), and one unsupervised structure learning method (Jakab et al., 2018). All three disentangled factor learning methods can automatically discover and learn to disentangle the factor of azimuth on the 3D Chair dataset. However, the geometric shape is maintained much better by our approach than by all the other methods, owing to the informative prior supplied by our structure branch. We randomly sample several query-reference pairs from the test set to compare with the results reported in the paper of Jakab et al. (2018). The results of the unsupervised structure learning method have severe artifacts and look more blurred than ours. Moreover, the identity of the query face is hardly preserved in the results of Jakab et al. (2018). ![](images/7_0.jpg) <center>Figure 6: Comparison to other methods. Qualitative results of the disentangling performance of VAE, \(\beta\)-VAE, InfoGAN, and Jakab et al. (2018). We demonstrate the disentanglement of the factor of azimuth on the 3D Chair dataset. Visual analogy results are demonstrated for the face dataset. </center>

### 4.5 ABLATION STUDY

It is worth studying the effect of each individual component of our method on the quality of the generated images. Structural Similarity (SSIM) and Inception Score (IS) are used to evaluate the reconstruction quality and the analogy quality. As reported in Table 2, without the KL loss the network has no incentive to learn a shape-invariant appearance representation, and almost all metrics degrade dramatically.
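As a concrete illustration of the style and content similarity metrics reported in Tables 1 and 2, the following minimal PyTorch-style sketch computes them from deep features, following the perceptual-loss formulations of Johnson et al. (2016) and Li et al. (2017); `feat_a` and `feat_b` stand in for activations of a fixed pre-trained network layer (e.g., from VGG), and the exact layer choice is an assumption here, not part of our protocol:

```python
import torch

def gram_matrix(feat):
    # feat: (B, C, H, W) activations from a fixed pre-trained network layer
    b, c, h, w = feat.size()
    f = feat.view(b, c, h * w)
    # The normalized Gram matrix summarizes texture/appearance statistics
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_similarity(feat_a, feat_b):
    # Appearance consistency (lower is better): distance between Gram matrices
    return torch.mean((gram_matrix(feat_a) - gram_matrix(feat_b)) ** 2)

def content_similarity(feat_a, feat_b):
    # Structure consistency (lower is better): distance between raw feature maps
    return torch.mean((feat_a - feat_b) ** 2)
```

In the evaluation above, `feat_a` would come from a condition input and `feat_b` from a generated image it guides; averaging over all guidance/generation pairs yields the table entries.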
<table><tr><td>Method</td><td>Color-Digit</td><td>Color-Back</td></tr><tr><td>Pixel</td><td>31.65</td><td>39.52</td></tr><tr><td>Appearance</td><td>10.25</td><td>15.32</td></tr><tr><td>Structure</td><td>99.96</td><td>99.92</td></tr></table> <table><tr><td>Method</td><td>Style (×e-6)</td><td>Content (×e-6)</td><td>Landmark (%)</td><td>SSIM mean</td><td>Inception Score mean</td></tr><tr><td>Real Data</td><td></td><td></td><td></td><td>1.000</td><td>0.000</td></tr><tr><td>Without KL Loss</td><td>6.556</td><td>1.813</td><td>0.036</td><td>0.406</td><td>0.103</td></tr><tr><td>Ours</td><td>5.208</td><td>1.759</td><td>0.030</td><td>0.449</td><td>0.121</td></tr></table>

## 5 CONCLUSION

We extend the VAE model to disentangle an object's representation into structure and appearance. Our framework is able to mine structure from a class of objects and simultaneously learn a structure-invariant appearance representation, without any annotation. Our work also suggests several potential topics for future research: 1) instead of relying on supervision, using strong priors to restrict the latent variables appears to be an effective tool for disentanglement; 2) in this work we only experiment on near-rigid objects such as chairs and faces, and learning on deformable objects remains an open problem; 3) the structure-invariant appearance representation may have potential for recognition tasks. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 7: A grid of structure & appearance swapping visualizations. The top row and left-most column are randomly selected from the test set. In each column, the structure of the generated images is consistent with the top one. In each row, the appearance of the generated images is consistent with the left-most one. </center>

## A APPENDIX

## A.1 DETAILS OF ARCHITECTURE

We use Adam with parameters \(\beta_{1} = 0.5\) and \(\beta_{2} = 0.999\) to optimize the network with a minibatch size of 8 for 160 epochs on all datasets. The initial learning rate is set to 0.0001 and then decreased linearly to 0 during training. The network architecture used for our experiments is given in Table 3. We use the following abbreviations for ease of presentation: N=Neurons, K=Kernel size, S=Stride size. The transposed convolutional layer is denoted by DCONV. Table 3: Network architecture of encoder and decoder.
<table><tr><td></td><td>Layer</td><td>Module</td></tr><tr><td rowspan="9">Encoder (Eφ, Eθ)</td><td>1</td><td>CONV-(N64,K4,S2)</td></tr><tr><td>2</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>3</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>4</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>5</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>6</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>7</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>8</td><td>LeakyReLU, CONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>μ</td><td>CONV-(N128,K1,S1)</td></tr><tr><td rowspan="9">Decoder (Dθ)</td><td>1</td><td>CONV-(N128,K1,S1)</td></tr><tr><td>2</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>3</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>4</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>5</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>6</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>7</td><td>ReLU, DCONV-(N128,K4,S2), InstanceNorm</td></tr><tr><td>8</td><td>ReLU, DCONV-(N64,K4,S2), InstanceNorm</td></tr><tr><td>9</td><td>ReLU, DCONV-(N3,K4,S2), Tanh</td></tr></table>

## A.2 QUALITATIVE RESULTS

The qualitative retrieval results on MNIST-Color are illustrated in Fig. 8. With the structure and appearance representations, semantic and visual retrieval can be performed, respectively. Moreover, the interpolation results on 3D Chair, with the same arrangement as for MNIST-Color, are shown in Fig. 9. <--- Page Split ---> ![](images/11_0.jpg) <center>Figure 8: Four randomly chosen query images and their corresponding 5 nearest neighbors, retrieved with image pixels, the structure code, and the appearance code, respectively. </center> ![](images/11_1.jpg) <center>Figure 9: Interpolation results on 3D Chair. </center> <--- Page Split --->
reject
Reject
4.666667
ICLR_2020_paper_0006
iclr
2,020
# DIVERSE TRAJECTORY FORECASTING WITH DETERMINANTAL POINT PROCESSES Ye Yuan, Kris M. Kitani Robotics Institute Carnegie Mellon University {yyuan2,kkitani}@cs.cmu.edu

## ABSTRACT

The ability to forecast a set of likely yet diverse possible future behaviors of an agent (e.g., future trajectories of a pedestrian) is essential for safety-critical perception systems (e.g., autonomous vehicles). In particular, a set of possible future behaviors generated by the system must be diverse to account for all possible outcomes in order to take necessary safety precautions. It is not sufficient to maintain a set of the most likely future outcomes, because the set may contain only perturbations of a single dominating outcome (major mode). While generative models such as variational autoencoders (VAEs) have been shown to be a powerful tool for learning a distribution over future trajectories, randomly drawn samples from the learned implicit likelihood model may not be diverse – the likelihood model is derived from the training data distribution, and the samples will concentrate around the major mode of the data. In this work, we propose to learn a diversity sampling function (DSF) that generates a diverse yet likely set of future trajectories. The DSF maps forecasting context features to a set of latent codes which can be decoded by a generative model (e.g., a VAE) into a set of diverse trajectory samples. Concretely, the process of identifying the diverse set of samples is posed as DSF parameter estimation. To learn the parameters of the DSF, the diversity of the trajectory samples is evaluated by a diversity loss based on a determinantal point process (DPP). Gradient descent is performed over the DSF parameters, which in turn moves the latent codes of the sample set to find an optimal set of diverse yet likely trajectories. Our method is a novel application of DPPs to optimizing a set of items (forecasted trajectories) in continuous space. We demonstrate the diversity of the trajectories produced by our approach on both low-dimensional 2D trajectory data and high-dimensional human motion data. (Video<sup>1</sup>)

## 1 INTRODUCTION

Forecasting the future trajectories of vehicles and humans has many useful applications in autonomous driving, virtual reality, and assistive living. What makes trajectory forecasting challenging is that the future is uncertain and multi-modal – vehicles can choose different routes and people can perform different future actions. In many safety-critical applications, it is important to consider a diverse set of possible future trajectories, even those that are less likely, so that necessary preemptive actions can be taken. For example, an autonomous vehicle should understand that a neighboring car can merge into its lane even though the car is most likely to keep driving straight. To address this requirement, we need to take a generative approach to trajectory forecasting that can fully characterize the multi-modal distribution of future trajectories. Variational autoencoders (VAEs) are generative models well suited to capturing all modes of a data distribution. However, random samples from a learned VAE model with Gaussian latent codes are not guaranteed to be diverse, for two reasons. First, the sampling procedure is stochastic, and the VAE samples can fail to cover some minor modes even with a large number of samples.
Second, since VAE sampling is based on the implicit likelihood function encoded in the training data, if most of the training data is centered around a specific mode while other modes have less data (Fig. 1 (a)), the VAE samples will reflect this bias and concentrate around the major mode (Fig. 1 (b)). To tackle this problem, we propose to learn a diversity sampling function (DSF) that can reliably generate a diverse set of trajectory samples (Fig. 1 (c)). <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: A toy trajectory forecasting example. (a) The three modes (pink, blue, purple) of the future trajectory distribution are shown in both the trajectory space and the latent space of a learned VAE model. The data distribution is imbalanced, where the blue mode has most data and covers most of the latent space. (b) Random samples from the VAE only cover the major (blue) mode. (c) Our proposed DSF generates a diverse set of future trajectories covering all three modes. </center> The proposed DSF is a deterministic parameterized function that maps forecasting context features (e.g., past trajectories) to a set of latent codes. The latent codes are decoded by the VAE decoder into a set of future trajectory samples, denoted as the DSF samples. In order to optimize the DSF, we formulate a diversity loss based on a determinantal point process (DPP) (Macchi, 1975) to evaluate the diversity of the DSF samples. The DPP defines the probability of choosing a random subset from the set of trajectory samples. It models the negative correlations between samples: the inclusion of a sample reduces the probability of including a similar sample. This makes the DPP an ideal tool for modeling the diversity within a set. In particular, we use the expected cardinality of the DPP as the diversity measure, which is defined as the expected size of a random subset drawn from the set of trajectory samples according to the DPP. Intuitively, since the DPP inhibits selection of similar samples, if the set of trajectory samples is more diverse, the random subset is more likely to select more samples from the set. The expected cardinality of the DPP is easy to compute and differentiable, which allows us to use it as the objective to optimize the DSF to enable diverse trajectory sampling. Our contributions are as follows: (1) We propose a new forecasting approach that learns a diversity sampling function to produce a diverse set of future trajectories; (2) We propose a novel application of DPPs to optimize a set of items (trajectories) in continuous space with a DPP- based diversity measure; (3) Experiments on synthetic data and human motion validate that our method can reliably generate a more diverse set of future trajectories compared to state- of- the- art generative models. ## 2 RELATED WORK Trajectory Forecasting has recently received significant attention from the vision community. A large portion of previous work focuses on forecasting 2D future trajectories for pedestrians (Kitani et al., 2012; Ma et al., 2017; Ballan et al., 2016; Xie et al., 2013) or vehicles (Jain et al., 2016a). Some approaches use deterministic trajectory modeling and only forecast one future trajectory (Alahi et al., 2016; Yagi et al., 2018; Robicquet et al., 2016). As there are often multiple plausible future trajectories, several approaches have tried to forecast distributions over trajectories (Lee et al., 2017; Galceran et al., 2015; Gupta et al., 2018). Recently, Rhinehart et al. 
(2018; 2019) propose a generative model that can accurately forecast multi-modal trajectories for vehicles. Soo Park et al. (2016) also use egocentric videos to predict the future trajectories of the camera wearer. Some work has investigated forecasting higher-dimensional trajectories, such as the 3D full-body pose sequences of human motion. Most existing work takes a deterministic approach and forecasts only one possible future motion from past 3D poses (Fragkiadaki et al., 2015; Butepage et al., 2017; Li et al., 2017; Jain et al., 2016b), static images (Chao et al., 2017; Kanazawa et al., 2018), or egocentric videos (Yuan and Kitani, 2019). Alternatively, some probabilistic approaches (Habibie et al., 2017; Yan et al., 2018) use conditional variational autoencoders (cVAEs) to generate multiple future motions. In contrast to previous work, our approach can generate a diverse set of future motions with a limited number of samples. <--- Page Split ---> Diverse Solutions have been sought after in a number of problems in computer vision and machine learning. One branch of methods aiming for diversity stems from the M-best MAP problem (Nilsson, 1998; Seroussi and Golmard, 1994), including diverse M-best solutions (Batra et al., 2012) and multiple choice learning (Guzman-Rivera et al., 2012; Lee et al., 2016). Alternatively, previous work has used submodular function maximization to select a diverse subset of garments from fashion images (Hsiao and Grauman, 2018). Determinantal point processes (DPPs) (Macchi, 1975; Kulesza et al., 2012) are efficient probabilistic models that can measure both the diversity and the quality of items in a subset, which makes them a natural choice for the diverse subset selection problem. DPPs have been applied to document and video summarization (Kulesza and Taskar, 2011; Gong et al., 2014), recommendation systems (Gillenwater et al., 2014), object detection (Azadi et al., 2017), and grasp clustering (Huang et al., 2015). Elfeki et al. (2018) have also used DPPs to mitigate mode collapse in generative adversarial networks (GANs). The work most related to ours is Gillenwater et al. (2014), which also uses the cardinality of DPPs as a proxy for user engagement. However, there are two important differences between our approach and theirs. First, the context is different, as they use the cardinality for a subset selection problem, while we apply the cardinality as the objective of a continuous optimization problem in the setting of generative models. Second, their main motivation for using the cardinality is that it aligns better with user engagement semantics, while our motivation is that using the cardinality as a diversity loss for deep neural networks is more stable due to its tolerance of similar trajectories, which are often produced by deep neural networks during stochastic gradient descent.

## 3 BACKGROUND

### 3.1 VARIATIONAL AUTOENCODERS

The aim of multi-modal trajectory forecasting is to learn a generative model over future trajectories. Variational autoencoders (VAEs) are a popular choice of generative model for trajectory forecasting (Lee et al., 2017; Walker et al., 2016) because they can effectively capture all possible future trajectories by explicitly mapping each data point to a latent code.
VAEs model the joint distribution \(p_{\theta}(\mathbf{x},\mathbf{z}) = p(\mathbf{z})p_{\theta}(\mathbf{x}|\mathbf{z})\) of each data sample \(\mathbf{x}\) (e.g., a future trajectory) and its corresponding latent code \(\mathbf{z}\), where \(p(\mathbf{z})\) denotes some prior distribution (e.g., a Gaussian) and \(p_{\theta}(\mathbf{x}|\mathbf{z})\) denotes the conditional likelihood model. To calculate the marginal likelihood \(p_{\theta}(\mathbf{x}) = p_{\theta}(\mathbf{x},\mathbf{z}) / p_{\theta}(\mathbf{z}|\mathbf{x})\), one needs to compute the posterior distribution \(p_{\theta}(\mathbf{z}|\mathbf{x})\), which is typically intractable. To tackle this, VAEs use variational inference (Jordan et al., 1999), which introduces an approximate posterior \(q_{\phi}(\mathbf{z}|\mathbf{x})\) and decomposes the marginal log-likelihood as \[\log p_{\theta}(\mathbf{x}) = \mathrm{KL}\left(q_{\phi}(\mathbf{z}|\mathbf{x})\| p_{\theta}(\mathbf{z}|\mathbf{x})\right) + \mathcal{L}(\mathbf{x};\theta ,\phi), \quad (1)\] where \(\mathcal{L}(\mathbf{x};\theta ,\phi)\) is the evidence lower bound (ELBO) defined as \[\mathcal{L}(\mathbf{x};\theta ,\phi) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] - \mathrm{KL}\left(q_{\phi}(\mathbf{z}|\mathbf{x})\| p(\mathbf{z})\right). \quad (2)\] During training, VAEs jointly optimize the recognition model (encoder) \(q_{\phi}(\mathbf{z}|\mathbf{x})\) and the likelihood model (decoder) \(p_{\theta}(\mathbf{x}|\mathbf{z})\) to maximize the ELBO. In the context of multi-modal trajectory forecasting, one can generate future trajectories from \(p_{\theta}(\mathbf{x})\) by drawing a latent code \(\mathbf{z}\) from the prior \(p(\mathbf{z})\) and decoding \(\mathbf{z}\) with the decoder \(p_{\theta}(\mathbf{x}|\mathbf{z})\) to produce a corresponding future trajectory \(\mathbf{x}\).

### 3.2 DETERMINANTAL POINT PROCESSES

Our core technical innovation is a method to learn a diversity sampling function (DSF) that can generate a diverse set of future trajectories. To achieve this, we must equip ourselves with a tool to evaluate the diversity of a set of trajectories. To this end, we make use of determinantal point processes (DPPs) to model the diversity within a set. DPPs promote diversity within a set because, if the set is sampled according to a DPP, the inclusion of one item makes the inclusion of a similar item less likely. Formally, given a set of items (e.g., data points) \(\mathcal{Y} = \{\mathbf{x}_{1},\ldots ,\mathbf{x}_{N}\}\), a point process \(\mathcal{P}\) on the ground set \(\mathcal{Y}\) is a probability measure on \(2^{\mathcal{Y}}\), i.e., the set of all subsets of \(\mathcal{Y}\). \(\mathcal{P}\) is called a determinantal point process if a random subset \(\mathbf{Y}\) drawn according to \(\mathcal{P}\) has \[\mathcal{P}_{\mathbf{L}}(\mathbf{Y} = Y) = \frac{\operatorname*{det}\left(\mathbf{L}_{Y}\right)}{\sum_{Y'\subseteq\mathcal{Y}}\operatorname*{det}\left(\mathbf{L}_{Y'}\right)} = \frac{\operatorname*{det}\left(\mathbf{L}_{Y}\right)}{\operatorname*{det}\left(\mathbf{L} + \mathbf{I}\right)}, \quad (3)\] <--- Page Split --->
The DPP kernel \(\mathbf{L}\) is typically constructed by a similarity matrix \(\mathbf{S}\) , where \(\mathbf{S}_{i,j}\) defines the similarity between two items \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) . If we use the inner product as the similarity measure, \(\mathbf{L}\) can be written in the form of a Gram matrix \(\mathbf{L} = \mathbf{S} = \mathbf{X}^{T}\mathbf{X}\) where \(\mathbf{X}\) is the stacked feature matrix of \(\mathcal{Y}\) . As a property of the Gram matrix, \(\operatorname *{det}\left(\mathbf{L}_{\mathbf{Y}}\right)\) equals the squared volume spanned by vectors \(\mathbf{x}_{i}\in Y\) . With this geometric interpretation in mind, one can observe that diverse sets are more probable because their features are more orthogonal, thus spanning a larger volume. In addition to set diversity encoded in the similarity matrix \(\mathbf{S}\) , it is also convenient to introduce a quality vector \(\mathbf{r} = [r_{1},\ldots ,r_{N}]\) to weigh each item according to some unary metric. For example, the quality weight might be derived from the likelihood of an item. To capture both diversity and quality of a subset, the DPP kernel \(\mathbf{L}\) is often decomposed in the more general form: \[\mathbf{L} = \mathrm{Diag}(\mathbf{r})\cdot \mathbf{S}\cdot \mathrm{Diag}(\mathbf{r}). \quad (4)\] With this decomposition, we can see that both the quality vector \(\mathbf{r}\) and similarity matrix \(\mathbf{S}\) contribute to the DPP probability of a subset \(Y\) : \[\mathcal{P}_{L}(\mathbf{Y} = Y)\propto \operatorname *{det}\left(\mathbf{L}_{Y}\right) = \left(\prod_{\mathbf{x}_{i}\in Y}r_{i}^{2}\right)\operatorname *{det}\left(\mathbf{S}_{Y}\right). \quad (5)\] Due to its ability to capture the global diversity and quality of a set of items, we choose DPPs as the probabilistic approach to evaluate and optimize the diversity of the future trajectories drawn by our proposed diversity sampling function. ## 4 APPROACH Safety- critical applications often require that the system can maintain a diverse set of outcomes covering all modes of a predictive distribution and not just the most likely one. To address this requirement, we propose to learn a diversity sampling function (DSF) to draw deterministic trajectory samples by generating a set of latent codes in the latent space of a conditional variational autoencoder (cVAE) and decoding them into trajectories using the cVAE decoder. The DSF trajectory samples are evaluated with a DPP- based diversity loss, which in turn optimizes the parameters of the DSF for more diverse trajectory samples. Formally, the future trajectory \(\mathbf{x}\in \mathbb{R}^{T\times D}\) is a random variable denoting a \(D\) dimensional feature over a future time horizon \(\hat{T}\) (e.g., a vehicle trajectory or a sequence of humanoid poses). The context \(\psi = \{\mathbf{h},\mathbf{f}\}\) provides the information to infer the future trajectory \(\mathbf{x}\) , and it contains the past trajectory \(\mathbf{h}\in \mathbb{R}^{H\times D}\) of last \(H\) time steps and optionally other side information \(\mathbf{f}\) , such as an obstacle map. In the following, we first describe how we learn the future trajectory model \(p_{\theta}(\mathbf{x}|\psi)\) with a cVAE. Then, we introduce the DSF and the DPP- based diversity loss used to optimize the DSF. 
### 4.1 LEARNING A CVAE FOR FUTURE TRAJECTORIES

In order to generate a diverse set of future trajectory samples, we need to learn a generative trajectory forecasting model \(p_{\theta}(\mathbf{x}|\psi)\) that can cover all modes of the data distribution. Here we use cVAEs (other suitable generative models could also be used), which explicitly map data \(\mathbf{x}\) with the encoder \(q_{\phi}(\mathbf{z}|\mathbf{x},\psi)\) to its corresponding latent code \(\mathbf{z}\) and reconstruct the data from the latent code using the decoder \(p_{\theta}(\mathbf{x}|\mathbf{z},\psi)\). By maintaining this one-to-one mapping between the data and the latent code, cVAEs attempt to capture all modes of the data. As discussed in Sec. 3.1, cVAEs jointly optimize the encoder and decoder to maximize the variational lower bound: \[\mathcal{L}(\mathbf{x},\psi ;\theta ,\phi) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\psi)}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z},\psi)\right] - \mathrm{KL}\left(q_{\phi}(\mathbf{z}|\mathbf{x},\psi)\| p(\mathbf{z})\right). \quad (6)\] We use multivariate Gaussians for the prior, encoder, and decoder: \(p(\mathbf{z}) = \mathcal{N}(\mathbf{z};\mathbf{0},\mathbf{I})\), \(q_{\phi}(\mathbf{z}|\mathbf{x},\psi) = \mathcal{N}(\mathbf{z};\mu ,\sigma^{2}\mathbf{I})\), and \(p_{\theta}(\mathbf{x}|\mathbf{z},\psi) = \mathcal{N}(\mathbf{x};\tilde{\mathbf{x}},\alpha \mathbf{I})\). Both the encoder and decoder are implemented as neural networks. The encoder network \(f_{\phi}\) outputs the parameters of the posterior distribution: \((\mu ,\sigma) = f_{\phi}(\mathbf{x},\psi)\). The decoder network \(g_{\theta}\) outputs the reconstructed future trajectory \(\tilde{\mathbf{x}}\): <--- Page Split ---> \(\tilde{\mathbf{x}} = g_{\theta}(\mathbf{z},\boldsymbol {\psi})\). Detailed network architectures are given in Appendix B.1. Based on the Gaussian parameterization of the cVAE, the objective in Eq. 6 can be rewritten as \[\mathcal{L}_{cvae}(\mathbf{x},\boldsymbol {\psi};\boldsymbol {\theta},\boldsymbol {\phi}) = -\frac{1}{V}\sum_{v = 1}^{V}\| \tilde{\mathbf{x}}_{v} - \mathbf{x}\|^{2} + \beta \cdot \frac{1}{D_{z}}\sum_{j = 1}^{D_{z}}\left(1 + 2\log \sigma_{j} - \mu_{j}^{2} - \sigma_{j}^{2}\right), \quad (7)\] where we take \(V\) samples from the posterior \(q_{\phi}(\mathbf{z}|\mathbf{x},\boldsymbol {\psi})\), \(D_{z}\) is the number of latent dimensions, and \(\beta = 1 / \alpha\) is a weighting factor. The training procedure for the cVAE is detailed in Alg. 2 (Appendix A). Once the cVAE model is trained, sampling from the learned future trajectory model \(p_{\theta}(\mathbf{x}|\boldsymbol {\psi})\) is efficient: we can sample a latent code \(\mathbf{z}\) according to the prior \(p(\mathbf{z})\) and use the decoder \(p_{\theta}(\mathbf{x}|\mathbf{z},\boldsymbol {\psi})\) to decode it into a future trajectory \(\mathbf{x}\).
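For concreteness, the training objective of Eq. (7) can be sketched in PyTorch as follows (a minimal illustration; `f_phi` and `g_theta` stand in for the encoder and decoder architectures of Appendix B.1, and constant factors in the KL term are absorbed into \(\beta\)):

```python
import torch

def cvae_loss(x, psi, f_phi, g_theta, beta=1.0, V=1):
    """Negative ELBO of Eq. (7): reconstruction MSE plus a beta-weighted KL term."""
    mu, log_sigma = f_phi(x, psi)                   # posterior parameters
    recon = 0.0
    for _ in range(V):                              # V posterior samples
        z = mu + log_sigma.exp() * torch.randn_like(mu)   # reparameterization
        recon = recon + ((g_theta(z, psi) - x) ** 2).mean()
    recon = recon / V
    # KL(N(mu, sigma^2 I) || N(0, I)), averaged over the D_z latent dimensions
    kl = -0.5 * (1 + 2 * log_sigma - mu ** 2 - (2 * log_sigma).exp()).mean()
    return recon + beta * kl
```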
Algorithm 1 Training the diversity sampling function (DSF) \(\mathcal{S}_{\gamma}(\boldsymbol {\psi})\)

1: Input: Training data \(\{\mathbf{x}^{(i)},\boldsymbol{\psi}^{(i)}\}_{i = 1}^{M}\), cVAE decoder network \(g_{\theta}(\mathbf{z},\boldsymbol {\psi})\)
2: Output: DSF \(\mathcal{S}_{\gamma}(\boldsymbol {\psi})\)
3: Initialize \(\gamma\) randomly
4: while not converged do
5: for each \(\boldsymbol{\psi}^{(i)}\) do
6: Generate latent codes \(\mathcal{Z} = \{\mathbf{z}_{1},\ldots ,\mathbf{z}_{N}\}\) with the DSF \(\mathcal{S}_{\gamma}(\boldsymbol {\psi})\)
7: Generate the trajectory ground set \(\mathcal{Y} = \{\mathbf{x}_{1},\ldots ,\mathbf{x}_{N}\}\) with the decoder \(g_{\theta}(\mathbf{z},\boldsymbol {\psi})\)
8: Compute the similarity matrix \(\mathbf{S}\) and quality vector \(\mathbf{r}\) with Eqs. 8 and 9
9: Compute the DPP kernel \(\mathbf{L}(\gamma) = \mathrm{Diag}(\mathbf{r})\cdot \mathbf{S}\cdot \mathrm{Diag}(\mathbf{r})\)
10: Calculate the diversity loss \(\mathcal{L}_{diverse}\)
11: Update \(\gamma\) with the gradient \(\nabla \mathcal{L}_{diverse}\)
12: end for
13: end while

### 4.2 DIVERSITY SAMPLING FUNCTION (DSF)

As mentioned before, randomly sampling from the learned cVAE model according to the implicit likelihood function \(p_{\theta}(\mathbf{x}|\boldsymbol {\psi})\), i.e., sampling latent codes from the prior \(p(\mathbf{z})\), does not guarantee that the trajectory samples are diverse: major modes (those having more data) with higher likelihood will produce most of the samples, while minor modes with lower likelihood will have almost no samples. This prompts us to devise a new sampling strategy that can reliably generate a diverse set of samples covering both major and minor modes. We propose to learn a diversity sampling function (DSF) \(\mathcal{S}_{\gamma}(\boldsymbol {\psi})\) that maps the context \(\boldsymbol{\psi}\) to a set of latent codes \(\mathcal{Z} = \{\mathbf{z}_{1},\ldots ,\mathbf{z}_{N}\}\). The DSF is implemented as a \(\gamma\)-parameterized neural network which takes \(\boldsymbol{\psi}\) as input and outputs a vector of length \(N\cdot D_{z}\) (see Appendix B.1 for network details). The latent codes \(\mathcal{Z}\) are decoded into a diverse set of future trajectories \(\mathcal{Y} = \{\mathbf{x}_{1},\ldots ,\mathbf{x}_{N}\}\), which we denote the DSF trajectory samples. We note that \(N\) is the sampling budget. To solve for the parameters of the DSF, we propose a diversity loss based on a DPP defined over \(\mathcal{Y}\). In this section, we first describe how the DPP kernel \(\mathbf{L}\) is defined, which involves the construction of the similarity matrix \(\mathbf{S}\) and the quality vector \(\mathbf{r}\). We then discuss how we use the DPP kernel \(\mathbf{L}\) to formulate a diversity loss for optimizing the parameters of the DSF. Recall that the DPP kernel is defined as \(\mathbf{L} = \mathrm{Diag}(\mathbf{r})\cdot \mathbf{S}\cdot \mathrm{Diag}(\mathbf{r})\), where \(\mathbf{r}\) defines the quality of each trajectory and \(\mathbf{S}\) measures the similarity between two trajectories. The DPP kernel \(\mathbf{L}(\gamma)\) is a function of \(\gamma\), as it is defined over the ground set \(\mathcal{Y}\) output by the DSF \(\mathcal{S}_{\gamma}(\boldsymbol {\psi})\).
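As a concrete (hypothetical) instantiation of the DSF, a small MLP can map the context feature to all \(N\) latent codes in one shot; the layer sizes below are illustrative assumptions, not the architecture of Appendix B.1:

```python
import torch
import torch.nn as nn

class DSF(nn.Module):
    """Maps a context feature psi to N latent codes of dimension D_z."""
    def __init__(self, psi_dim, n_samples, d_z, hidden=256):
        super().__init__()
        self.n_samples, self.d_z = n_samples, d_z
        self.net = nn.Sequential(
            nn.Linear(psi_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_samples * d_z),   # one vector of length N * D_z
        )

    def forward(self, psi):
        # (B, psi_dim) -> (B, N, D_z); each slice along the middle axis is one z_i
        return self.net(psi).view(-1, self.n_samples, self.d_z)
```

Each of the \(N\) codes is then passed through the frozen cVAE decoder \(g_{\theta}\) to obtain the trajectory ground set of Alg. 1.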
Similarity. We measure the similarity \(\mathbf{S}_{ij}\) between two trajectories \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) as \[\mathbf{S}_{ij} = \exp \left(-k\cdot d_{\mathbf{x}}^{2}(\mathbf{x}_{i},\mathbf{x}_{j})\right), \quad (8)\] where \(d_{\mathbf{x}}\) is the Euclidean distance and \(k\) is a scaling factor. This similarity design ensures that \(0\leq \mathbf{S}_{ij}\leq 1\) and \(\mathbf{S}_{ii} = 1\). It also makes \(\mathbf{S}\) positive definite, since the Gaussian kernel we use is a positive definite kernel. Quality. It may be tempting to use \(p(\mathbf{x}|\boldsymbol {\psi})\) to define the quality of each trajectory sample. However, this likelihood-based measure will clearly favor major modes that have higher probabilities, making it less likely to generate samples from minor modes. This motivates us to design a quality metric that <--- Page Split ---> treats all modes equally. To this end, unlike the similarity metric, which is defined in the trajectory space, the quality of each sample is measured in the latent space and is defined as \[r_{i} = \left\{ \begin{array}{ll}\omega , & \mathrm{if}\ \| \mathbf{z}_{i}\| \leq R\\ \omega \exp \left(-\mathbf{z}_{i}^{T}\mathbf{z}_{i} + R^{2}\right), & \mathrm{otherwise.} \end{array} \right. \quad (9)\] Geometrically, let \(R\) be the radius of a sphere \(\Phi\) containing most samples from the Gaussian prior \(p(\mathbf{z})\). We treat samples inside \(\Phi\) equally and only penalize samples outside \(\Phi\). In this way, samples from major modes are not preferred over those from minor modes as long as they are inside \(\Phi\), while samples far away from the data manifold are heavily penalized, as they are outside \(\Phi\). The radius \(R\) is determined by requiring that \(\rho\) percent of the Gaussian samples lie within it, and we set \(\rho = 90\). To compute \(R\), we use the percentage point function of the chi-squared distribution, which models the distribution of the sum of squares of independent standard normal variables. The base quality \(\omega\) is a hyperparameter which we set to 1 during training in our experiments. At test time, we can use a larger \(\omega\) to encourage the DPP to select more items from the ground set \(\mathcal{Y}\). The hyperparameter \(\rho\) (or \(R\)) allows for a trade-off between diversity and quality. When \(R\) is small, the quality metric reduces to a pure likelihood-based metric (proportional to the latent likelihood), which will prefer samples with high likelihood and result in a less diverse sample set. When \(R\) is large, most samples will have the same quality, and the resulting samples will be highly diverse but less likely. In practice, the choice of \(R\) should be application dependent; for example, autonomous vehicles would need to consider more diverse scenarios, including less likely ones, to ensure robustness. We note that after the diverse samples are obtained, it is possible to reassign the quality score of each sample based on its likelihood, to allow users to prioritize more likely samples.
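A minimal NumPy/SciPy sketch of Eqs. (8) and (9) follows (an illustration of ours; rows of `X` are assumed to be flattened trajectories and rows of `Z` the corresponding latent codes):

```python
import numpy as np
from scipy.stats import chi2

def similarity_matrix(X, k=1.0):
    """Eq. (8): S_ij = exp(-k * ||x_i - x_j||^2) over flattened trajectories."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-k * d2)

def quality_vector(Z, rho=0.90, omega=1.0):
    """Eq. (9): flat quality omega inside the sphere of radius R, decayed outside."""
    # R^2 chosen so that rho of the mass of N(0, I) lies inside the sphere
    R2 = chi2.ppf(rho, df=Z.shape[-1])
    sq_norm = np.sum(Z ** 2, axis=-1)
    return np.where(sq_norm <= R2, omega, omega * np.exp(-sq_norm + R2))
```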
Diversity Loss. To optimize the DSF \(\mathcal{S}_{\gamma}(\psi)\), we need to define a diversity loss that measures the diversity of the trajectory ground set \(\mathcal{Y}\) generated by \(\mathcal{S}_{\gamma}(\psi)\). An obvious choice for the diversity loss would be the negative log-likelihood \(- \log \mathcal{P}_{\mathbf{L}(\gamma)}(\mathbf{Y} = \mathcal{Y}) = - \log \operatorname *{det}(\mathbf{L}(\gamma)) + \log \operatorname *{det}(\mathbf{L}(\gamma) + \mathbf{I})\). However, there is a problem with directly using the DPP log-likelihood. The log-likelihood heavily penalizes repeated items: if two trajectories inside \(\mathcal{Y}\) are very similar, their corresponding rows in \(\mathbf{L}\) will be almost identical, making \(\operatorname *{det}(\mathbf{L}(\gamma)) = \lambda_{1}\lambda_{2}\ldots \lambda_{N}\approx 0\) (\(\lambda_{n}\) is the \(n\)-th eigenvalue). In practice, if the number of modes in the trajectory distribution \(p(\mathbf{x}|\psi)\) is smaller than \(|\mathcal{Y}|\), \(\mathcal{Y}\) will always contain similar trajectories, thus making \(\operatorname *{det}(\mathbf{L}(\gamma))\) always close to zero. In such cases, optimizing the negative log-likelihood causes numerical issues, which we observed in our early experiments. Instead, the expected cardinality of the DPP is a better measure for the diversity of \(\mathcal{Y}\); it is defined as \(\mathbb{E}_{\mathbf{Y}\sim \mathcal{P}_{\mathbf{L}(\gamma)}}[|\mathbf{Y}|]\). Intuitively, since the DPP discourages selection of similar items, if \(\mathcal{Y}\) is more diverse, a random subset \(\mathbf{Y}\) drawn according to the DPP is more likely to select more items from \(\mathcal{Y}\), thus having larger cardinality. The expected cardinality can be computed as (Eqs. 15 and 34 in Kulesza et al. (2012)): \[\mathbb{E}[|\mathbf{Y}|] = \sum_{n = 1}^{N}\frac{\lambda_{n}}{\lambda_{n} + 1} = \operatorname {tr}\left(\mathbf{I} - (\mathbf{L}(\gamma) + \mathbf{I})^{-1}\right). \quad (10)\] The main advantage of the expected cardinality is that it is well defined even when the ground set \(\mathcal{Y}\) has duplicated items, since it does not require all eigenvalues of \(\mathbf{L}\) to be non-zero as the log-likelihood does. Thus, our diversity loss is defined as \(\mathcal{L}_{diverse}(\gamma) = - \operatorname {tr}\left(\mathbf{I} - (\mathbf{L}(\gamma) + \mathbf{I})^{-1}\right)\). The training procedure for \(\mathcal{S}_{\gamma}(\psi)\) is outlined in Alg. 1. Inference. At test time, given the current context \(\psi\), we use the learned DSF \(\mathcal{S}_{\gamma}(\psi)\) to generate the future trajectory ground set \(\mathcal{Y}\). In some cases, \(\mathcal{Y}\) may still contain trajectories that are similar to others. In order to obtain a diverse set of trajectories without repetition, we perform MAP inference for the DPP to find the most diverse subset \(Y^{*} = \arg \max_{Y\subseteq \mathcal{Y}}\mathcal{P}_{\mathbf{L}(\gamma)}(Y)\). A useful property of DPPs is that the log-probability function is submodular (Gillenwater et al., 2012). Even though submodular maximization is NP-hard, we use a greedy algorithm (Nemhauser et al., 1978), a popular optimization procedure that works well in practice. As outlined in Alg. 3, the output set \(Y_{f}\) is initialized as \(\emptyset\), and at each iteration, the trajectory which maximizes the log-probability \[\mathbf{x}^{*} = \arg \max_{\mathbf{x}\in \mathcal{Y}\backslash Y_{f}}\log \operatorname *{det}\left(\mathbf{L}_{Y_{f}\cup \{\mathbf{x}\}}\right) \quad (11)\] is added to \(Y_{f}\), until the marginal gain becomes negative or \(Y_{f} = \mathcal{Y}\).
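The diversity objective of Eq. (10) and the greedy MAP inference of Eq. (11) can be sketched as follows (a minimal NumPy illustration of ours; during training, the trace expression would instead be computed with a differentiable framework so that gradients can flow into \(\gamma\)):

```python
import numpy as np

def expected_cardinality(L):
    """Eq. (10): E[|Y|] = tr(I - (L + I)^{-1}); its negative is the diversity loss."""
    N = L.shape[0]
    return np.trace(np.eye(N) - np.linalg.inv(L + np.eye(N)))

def greedy_dpp_map(L):
    """Greedy MAP inference (Eq. 11): repeatedly add the item with the largest
    log det(L_{Y_f + {x}}), stopping when the marginal gain turns negative."""
    selected, remaining = [], list(range(L.shape[0]))
    best_logdet = -np.inf
    while remaining:
        gains = []
        for i in remaining:
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gains.append(logdet if sign > 0 else -np.inf)
        j = int(np.argmax(gains))
        if gains[j] <= best_logdet:   # marginal gain no longer positive
            break
        best_logdet = gains[j]
        selected.append(remaining.pop(j))
    return selected
```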
<--- Page Split --->

## 5 EXPERIMENTS

The primary focus of our experiments is to answer the following questions: (1) Are trajectory samples generated with our diversity sampling function more diverse than samples from the cVAE and other baselines? (2) How does our method perform on both balanced and imbalanced data? (3) Is our method general enough to perform well on both low-dimensional and high-dimensional tasks? Metrics. A problem with trajectory forecasting evaluation is that in real data each context \(\psi^{(i)}\) usually has only one future trajectory \(\mathbf{x}^{(i)}\), which means we only have one sample from a multi-modal distribution. Consider a scenario of three data examples \(\{\mathbf{x}^{(i)},\psi^{(i)}\}_{i = 1}^{3}\), as shown in Fig. 2 (red, purple, blue). The contexts (past trajectories) of the three examples are instances of the same trajectory, but they are slightly different due to noise. As these three contexts have the same semantic meaning, they should share the future trajectories; e.g., the purple and blue future trajectories are also valid for the red context. If we evaluate each example \((\mathbf{x}^{(i)},\psi^{(i)})\) only with its own future trajectory \(\mathbf{x}^{(i)}\), a method can achieve high scores by only forecasting the mode corresponding to \(\mathbf{x}^{(i)}\) and dropping other modes. This is undesirable because we want a good method to capture all modes of the future trajectory distribution, not just a single mode. To allow for multi-modal evaluation, we propose collecting multiple future trajectories for each example by clustering examples with similar contexts. Specifically, we augment each data example \((\mathbf{x}^{(i)},\psi^{(i)})\) with a future trajectory set \(\mathcal{X}^{(i)} = \{\mathbf{x}^{(j)} : \|\psi^{(j)} - \psi^{(i)}\| \leq \epsilon ,\ j = 1,\ldots ,M\}\), and metrics are calculated based on \(\mathcal{X}^{(i)}\) instead of \(\mathbf{x}^{(i)}\); i.e., we compute metrics for each \(\mathbf{x}\in \mathcal{X}^{(i)}\) and average the results. ![](images/6_0.jpg) <center>Figure 2: In real data, contexts (past trajectories) are seldom the same due to noise. </center> We use the following metrics for evaluation: (1) Average Displacement Error (ADE): average mean square error (MSE) over all time steps between the ground truth future trajectory \(\mathbf{x}\) and the closest sample \(\tilde{\mathbf{x}}\) in the forecasted set of trajectories \(Y_{f}\). (2) Final Displacement Error (FDE): MSE between the final ground truth position \(\mathbf{x}^{T}\) and the closest sample's final position \(\tilde{\mathbf{x}}^{T}\).
(3) Average Self Distance (ASD): average \(L2\) distance over all time steps between a forecasted sample \(\tilde{\mathbf{x}}_{i}\) and its closest neighbor \(\tilde{\mathbf{x}}_{j}\) in \(Y_{f}\). (4) Final Self Distance (FSD): \(L2\) distance between the final position of a sample \(\tilde{\mathbf{x}}_{i}^{T}\) and its closest neighbor's final position \(\tilde{\mathbf{x}}_{j}^{T}\). The ADE and FDE are common metrics used in prior work on trajectory forecasting (Alahi et al., 2016; Lee et al., 2017; Rhinehart et al., 2018; Gupta et al., 2018). However, these two metrics do not penalize repeated samples. To address this, we introduce two new metrics, ASD and FSD, to evaluate the similarity between samples in the set of forecasted trajectories; larger ASD and FSD mean the forecasted trajectories are more non-repetitive and diverse (a code sketch of these four metrics is given in Sec. 5.2). Baselines. We compare our Diversity Sampling Function (DSF) with the following baselines: (1) cVAE: a method that follows the original sampling scheme of the cVAE by sampling latent codes from a Gaussian prior \(p(\mathbf{z})\). (2) MCL: an approach that uses multiple choice learning (Lee et al., 2016) to optimize the sampler \(S_{\gamma}(\psi)\) with the loss \(\mathcal{L}_{\mathrm{mcl}} = \min_{\tilde{\mathbf{x}}\in \mathcal{Y}}\| \tilde{\mathbf{x}} -\mathbf{x}\|^{2}\), where \(\mathbf{x}\) is the ground truth future trajectory. (3) R2P2: a method proposed by Rhinehart et al. (2018) that uses a reparametrized pushforward policy to improve the modeling of multi-modal distributions for vehicle trajectories. (4) cGAN: generative adversarial networks (Goodfellow et al., 2014) conditioned on the forecasting context. We implement all baselines using similar networks and perform a hyperparameter search for each method for fair comparison. For methods whose samples are stochastic, we use 10 random seeds and report the average results for all metrics.

### 5.1 SYNTHETIC 2D TRAJECTORY DATA

We first use synthetic data to evaluate our method's performance on low-dimensional tasks. We design a virtual 2D traffic scene where a vehicle comes to a crossroad and can choose three different future routes: forward, left, and right. We consider two types of synthetic data: (1) Balanced data, where the probabilities of the vehicle choosing each of the three routes are the same; (2) Imbalanced data, where the probabilities of the vehicle going forward, left, and right are 0.8, 0.1, and 0.1, respectively. <--- Page Split ---> ![](images/7_0.jpg) <center>Figure 3: Qualitative results on synthetic data for both balanced and imbalanced data distributions when \(N = 10\). Blue represents the past trajectory and red represents forecasted future trajectories.
</center> Table 1: Quantitative results on synthetic data (numbers scaled by 10) when \(N = 10\). <table><tr><td rowspan="2">Method</td><td colspan="4">Balanced Data</td><td colspan="4">Imbalanced Data</td></tr><tr><td>ADE ↓</td><td>FDE ↓</td><td>ASD ↑</td><td>FSD ↑</td><td>ADE ↓</td><td>FDE ↓</td><td>ASD ↑</td><td>FSD ↑</td></tr><tr><td>DSF (Ours)</td><td>0.182</td><td>0.344</td><td>0.147</td><td>0.340</td><td>0.198</td><td>0.371</td><td>0.207</td><td>0.470</td></tr><tr><td>cVAE</td><td>0.262</td><td>0.518</td><td>0.022</td><td>0.050</td><td>0.332</td><td>0.662</td><td>0.021</td><td>0.050</td></tr><tr><td>MCL</td><td>0.276</td><td>0.548</td><td>0.012</td><td>0.030</td><td>0.457</td><td>0.938</td><td>0.005</td><td>0.010</td></tr><tr><td>R2P2</td><td>0.211</td><td>0.361</td><td>0.047</td><td>0.080</td><td>0.393</td><td>0.776</td><td>0.019</td><td>0.030</td></tr><tr><td>cGAN</td><td>0.808</td><td>1.619</td><td>0.018</td><td>0.010</td><td>1.784</td><td>3.744</td><td>0.006</td><td>0.001</td></tr></table> We synthesize trajectory data by simulating the vehicle's behavior and adding Gaussian noise to the vehicle velocities. Each data example \((\mathbf{x}^{(i)},\psi^{(i)})\) contains a future trajectory of 3 steps and a past trajectory of 2 steps. We also add an obstacle map around the current position to the context \(\psi^{(i)}\). In total, we have around 1100 training examples and 1000 test examples. Please refer to Appendix B for more implementation details. Table 1 summarizes the quantitative results for both balanced and imbalanced data when the sampling budget \(N\) is 10. We can see that our method DSF outperforms the baselines in all metrics in both test settings. As shown in Fig. 3, our method generates more diverse trajectories and is less affected by the imbalanced data distribution. The trajectory samples of our method are also less repetitive, a feature afforded by our DPP formulation. Fig. 4 shows how the ADE changes as a function of the sampling budget \(N\).

### 5.2 DIVERSE HUMAN MOTION FORECASTING

To further evaluate our method's performance on more complex and high-dimensional tasks, we apply it to forecast future human motions (pose sequences). We use motion capture to obtain 10 motion sequences covering different types of motions, such as walking, turning, jogging, bending, and crouching. Each sequence is about 1 minute long, and each pose consists of 59 joint angles. We use the past 3 poses (0.1s) to forecast the next 30 poses (1s). There are around 9400 training examples and 2000 test examples, where we use different sequences for training and testing. More implementation details can be found in Appendix B. Table 2: Quantitative results for human motion forecasting when \(N = 10\). <table><tr><td>Method</td><td>ADE ↓</td><td>FDE ↓</td><td>ASD ↑</td><td>FSD ↑</td></tr><tr><td>DSF (Ours)</td><td>0.259</td><td>0.421</td><td>0.115</td><td>0.282</td></tr><tr><td>cVAE</td><td>0.332</td><td>0.642</td><td>0.034</td><td>0.098</td></tr><tr><td>MCL</td><td>0.344</td><td>0.674</td><td>0.036</td><td>0.122</td></tr><tr><td>cGAN</td><td>0.652</td><td>1.296</td><td>0.001</td><td>0.003</td></tr></table>
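Before turning to the results, here is a minimal NumPy sketch of the four evaluation metrics defined in Sec. 5 (an illustration of ours; the exact distance conventions, e.g., MSE versus \(L2\), are assumptions and may differ slightly from the actual implementation):

```python
import numpy as np

def ade(gt, samples):
    """Average Displacement Error: mean per-step error of the closest sample.
    gt: (T, D) ground truth; samples: (N, T, D) forecasted set."""
    errs = np.linalg.norm(samples - gt[None], axis=-1).mean(axis=-1)   # (N,)
    return errs.min()

def fde(gt, samples):
    """Final Displacement Error: final-step error of the closest sample."""
    return np.linalg.norm(samples[:, -1] - gt[-1], axis=-1).min()

def asd(samples):
    """Average Self Distance: mean distance from each sample to its nearest neighbor."""
    n = len(samples)
    d = np.linalg.norm(samples[:, None] - samples[None], axis=-1).mean(axis=-1)  # (N, N)
    d[np.arange(n), np.arange(n)] = np.inf      # exclude self-matches
    return d.min(axis=1).mean()

def fsd(samples):
    """Final Self Distance: ASD computed on final positions only."""
    n = len(samples)
    final = samples[:, -1]                      # (N, D)
    d = np.linalg.norm(final[:, None] - final[None], axis=-1)
    d[np.arange(n), np.arange(n)] = np.inf
    return d.min(axis=1).mean()
```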
We present quantitative results in Table 2, and we can see that our approach outperforms the other methods in all metrics. As the dynamics model used in R2P2 (Rhinehart et al., 2018) does not generalize well to high-dimensional human motion, we found the model fails to converge, and we do not compare with it in this experiment. Fig. 4 shows that our method achieves a large improvement when the sampling budget is large (\(N = 50\)). We also present qualitative results in Fig. 5, where we show the starting pose and the final pose of all 10 forecasted motion samples for each method. We can clearly see that our method generates more diverse future human motions than the baselines. Please refer to Appendix C and our video for additional qualitative results. <--- Page Split ---> ![](images/8_0.jpg) <center>Figure 4: ADE vs. \(N\) for synthetic data and human motion forecasting. cGAN is not shown in this plot, as it is much worse than the other methods due to mode collapse. </center> ![](images/8_1.jpg) <center>Figure 5: Qualitative results for human motion forecasting when \(N = 10\). The left shows the starting pose, and the right shows, for each method, the final pose of all 10 forecasted motion samples. </center>

### 5.3 ADDITIONAL EXPERIMENTS WITH DIVERSITY-BASED BASELINES

In this section, we perform additional experiments on a large human motion dataset (3.6 million frames), Human3.6M (Ionescu et al., 2013), to evaluate the generalization ability of our approach. We predict 2 seconds of future motion based on 0.5 seconds of observed motion. Please refer to Appendix B.3 for implementation details. We also use a new selection of baselines, including several variants of our method (DSF) and the cVAE, to validate several design choices of our method, including the choice of the expected cardinality over the negative log-likelihood (NLL) of the DPP as the diversity loss. Specifically, we use the following new baselines: (1) DSF-NLL: a variant of DSF that uses the NLL as the diversity loss instead of the expected cardinality. (2) DSF-COS: a DSF variant that uses cosine similarity to build the similarity matrix \(\mathbf{S}\) for the DPP kernel \(\mathbf{L}\). (3) cVAE-LDPP: a variant of the cVAE that samples 100 latent codes and performs DPP MAP inference on the latent codes to obtain a diverse subset of latent codes, which are then decoded into trajectory samples. We present quantitative results in Table 3 for sample budgets \(N = 10\) and \(N = 50\). The baseline DSF-COS is able to achieve very high diversity (ASD and FSD), but its samples are overly diverse and of poor quality, as indicated by the large ADE and FDE.
<table><tr><td rowspan="2">Method</td><td colspan="4">N=10</td><td colspan="4">N=50</td></tr><tr><td>ADE↓</td><td>FDE↓</td><td>ASD↑</td><td>FSD↑</td><td>ADE↓</td><td>FDE↓</td><td>ASD↑</td><td>FSD↑</td></tr><tr><td>DSF (Ours)</td><td>0.340</td><td>0.521</td><td>0.381</td><td>0.621</td><td>0.236</td><td>0.306</td><td>0.313</td><td>0.415</td></tr><tr><td>DSF-NLL</td><td>0.335</td><td>0.514</td><td>0.343</td><td>0.496</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>DSF-COS</td><td>2.588</td><td>1.584</td><td>5.093</td><td>5.718</td><td>0.978</td><td>0.891</td><td>2.007</td><td>1.968</td></tr><tr><td>cVAE</td><td>0.363</td><td>0.549</td><td>0.235</td><td>0.360</td><td>0.276</td><td>0.369</td><td>0.160</td><td>0.220</td></tr><tr><td>cVAE-LDPP</td><td>0.373</td><td>0.554</td><td>0.280</td><td>0.426</td><td>0.277</td><td>0.365</td><td>0.176</td><td>0.240</td></tr></table> Table 3: Quantitative results on Human3.6M (Ionescu et al., 2013) for \(N = 10\) and \(N = 50\). X means the method is unable to learn a model due to numerical instability. <--- Page Split ---> Compared with DSF-NLL, our method achieves better diversity (ASD and FSD) and similar ADE and FDE when the number of samples is small (\(N = 10\)). For a larger number of samples (\(N = 50\)), the NLL becomes unstable even with a large \(\epsilon\) (1e-3) added to the diagonal. This behavior of the NLL, i.e., stable for small \(N\) but unstable for large \(N\), matches our intuition that the NLL becomes unstable when samples become similar (as discussed in Sec. 4.2), because when there are more samples, it is easier to produce similar samples during the SGD updates of the DSF network. The baseline cVAE-LDPP also performs worse than DSF in all metrics, even though it is able to outperform the cVAE. We believe the reason is that diversity in the sample space may not be well reflected in the latent space, due to the non-linear mapping from latent codes to samples induced by deep neural networks.

## 6 CONCLUSION

We proposed a novel forecasting approach that uses a DSF to optimize over the sample space of a generative model. Our method learns the DSF with a DPP-based diversity measure to generate a diverse set of trajectories. The diversity measure is a novel application of DPPs to optimizing a set of items in continuous space. Experiments have shown that our approach can generate more diverse vehicle trajectories and human motions compared to state-of-the-art baseline forecasting approaches. Acknowledgment. This project was sponsored in part by JST CREST (JPMJCR14E1), NSF NRI (1637927) and IARPA (D17PC00340).

## REFERENCES

A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese. Social lstm: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961-971, 2016. S. Azadi, J. Feng, and T. Darrell. Learning detection with diverse proposals. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7149-7157, 2017. L. Ballan, F. Castaldo, A. Alahi, F. Palmieri, and S. Savarese. Knowledge transfer for scene-specific motion prediction. In European Conference on Computer Vision, pages 697-713. Springer, 2016. D. Batra, P. Yadollahpour, A. Guzman-Rivera, and G. Shakhnarovich. Diverse m-best solutions in markov random fields. In European Conference on Computer Vision, pages 1-16. Springer, 2012. J. Butepage, M. J. Black, D. Kragic, and H. Kjellstrom. Deep representation learning for human motion prediction and classification.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6158-6166, 2017. Y.-W. Chao, J. Yang, B. Price, S. Cohen, and J. Deng. Forecasting human dynamics from static images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 548-556, 2017. M. Elfeki, C. Couprie, M. Riviere, and M. Elhoseiny. Gdpp: Learning diverse generations using determinantal point process. arXiv preprint arXiv:1812.00068, 2018. K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik. Recurrent network models for human dynamics. In Proceedings of the IEEE International Conference on Computer Vision, pages 4346-4354, 2015. E. Galceran, A. G. Cunningham, R. M. Eustice, and E. Olson. Multiplicity decision-making for autonomous driving via changepoint-based behavior prediction. In Robotics: Science and Systems, volume 1, 2015. J. Gillenwater, A. Kulesza, and B. Taskar. Near-optimal map inference for determinantal point processes. In Advances in Neural Information Processing Systems, pages 2735-2743, 2012. J. A. Gillenwater, A. Kulesza, E. Fox, and B. Taskar. Expectation-maximization for learning determinantal point processes. In Advances in Neural Information Processing Systems, pages 3149-3157, 2014. B. Gong, W.-L. Chao, K. Grauman, and F. Sha. Diverse sequential subset selection for supervised video summarization. In Advances in Neural Information Processing Systems, pages 2069-2077, 2014. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672-2680, 2014. A. Gupta, J. Johnson, L. Fei-Fei, S. Savarese, and A. Alahi. Social gan: Socially acceptable trajectories with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2255-2264, 2018. A. Guzman-Rivera, D. Batra, and P. Kohli. Multiple choice learning: Learning to produce multiple structured outputs. In Advances in Neural Information Processing Systems, pages 1799-1807, 2012. I. Habibie, D. Holden, J. Schwarz, J. Yearsley, and T. Komura. A recurrent variational autoencoder for human motion synthesis. BMVC17, 2017. W.-L. Hsiao and K. Grauman. Creating capsule wardrobes from fashion images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7161-7170, 2018. <--- Page Split ---> D.- A. Huang, M. Ma, W.- C. Ma, and K. M. Kitani. How do we use our hands? discovering a diverse set of common grasps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 666- 675, 2015. C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3. 6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE transactions on pattern analysis and machine intelligence, 36(7):1325- 1339, 2013. A. Jain, A. Singh, H. S. Koppula, S. Soh, and A. Saxena. Recurrent neural networks for driver activity anticipation via sensory- fusion architecture. In Robotics and Automation (ICRA), 2016 IEEE International Conference on, pages 3118- 3125. IEEE, 2016a. A. Jain, A. R. Zamir, S. Savarese, and A. Saxena. Structural- rnn: Deep learning on spatio- temporal graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5308- 5317, 2016b. M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183- 233, 1999. A. 
Kanazawa, J. Zhang, P. Felsen, and J. Malik. Learning 3d human dynamics from video. arXiv preprint arXiv:1812.01601, 2018. D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. K. M. Kitani, B. D. Ziebart, J. A. Bagnell, and M. Hebert. Activity forecasting. In European Conference on Computer Vision, pages 201- 214. Springer, 2012. A. Kulesza and B. Taskar. k- dpps: Fixed- size determinantal point processes. In Proceedings of the 28th International Conference on Machine Learning (ICML- 11), pages 1193- 1200, 2011. A. Kulesza, B. Taskar, et al. Determinantal point processes for machine learning. Foundations and Trends® in Machine Learning, 5(2- 3):123- 286, 2012. N. Lee, W. Choi, P. Vernaza, C. B. Choy, P. H. Torr, and M. Chandraker. Desire: Distant future prediction in dynamic scenes with interacting agents. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 336- 345, 2017. S. Lee, S. P. S. Prakash, M. Cogswell, V. Ranjan, D. Crandall, and D. Batra. Stochastic multiple choice learning for training diverse deep ensembles. In Advances in Neural Information Processing Systems, pages 2119- 2127, 2016. Z. Li, Y. Zhou, S. Xiao, C. He, Z. Huang, and H. Li. Auto- conditioned recurrent networks for extended complex human motion synthesis. arXiv preprint arXiv:1707.05363, 2017. W.- C. Ma, D.- A. Huang, N. Lee, and K. M. Kitani. Forecasting interactive dynamics of pedestrians with fictitious play. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 774- 782, 2017. O. Macchi. The coincidence approach to stochastic point processes. Advances in Applied Probability, 7(1): 83- 122, 1975. J. Martinez, R. Hossain, J. Romero, and J. J. Little. A simple yet effective baseline for 3d human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 2640- 2649, 2017. G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical programming, 14(1):265- 294, 1978. D. Nilsson. An efficient algorithm for finding the m most probable configurations in probabilistic expert systems. Statistics and computing, 8(2):159- 173, 1998. G. Pavlakos, X. Zhou, K. G. Derpanis, and K. Daniilidis. Coarse- to- fine volumetric prediction for single- image 3d human pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7025- 7034, 2017. D. Pavllo, C. Feichtenhofer, D. Grangier, and M. Auli. 3d human pose estimation in video with temporal convolutions and semi- supervised training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7753- 7762, 2019. N. Rhinehart, K. M. Kitani, and P. Vernaza. R2p2: A reparameterized pushforward policy for diverse, precise generative path forecasting. In Proceedings of the European Conference on Computer Vision (ECCV), pages 772- 788, 2018. N. Rhinehart, R. McAllister, K. Kitani, and S. Levine. Precog: Prediction conditioned on goals in visual multi- agent settings. arXiv preprint arXiv:1905.01296, 2019. A. Robicquet, A. Sadeghian, A. Alahi, and S. Savarese. Learning social etiquette: Human trajectory understanding in crowded scenes. In European conference on computer vision, pages 549- 565. Springer, 2016. B. Seroussi and J.- L. Golmard. An algorithm directly finding the k most probable configurations in bayesian networks. International Journal of Approximate Reasoning, 11(3):205- 233, 1994. H. 
Soo Park, J.-J. Hwang, Y. Niu, and J. Shi. Egocentric future localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4697-4705, 2016.

J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using variational autoencoders. In European Conference on Computer Vision, pages 835-851. Springer, 2016.

<--- Page Split --->

D. Xie, S. Todorovic, and S.-C. Zhu. Inferring "dark matter" and "dark energy" from videos. 2013 IEEE International Conference on Computer Vision, pages 2224-2231, 2013.

T. Yagi, K. Mangalam, R. Yonetani, and Y. Sato. Future person localization in first-person videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

X. Yan, A. Rastogi, R. Villegas, K. Sunkavalli, E. Shechtman, S. Hadap, E. Yumer, and H. Lee. Mt-vae: Learning motion transformations to generate multimodal human dynamics. In Proceedings of the European Conference on Computer Vision (ECCV), pages 265-281, 2018.

Y. Yuan and K. Kitani. Ego-pose estimation and forecasting as real-time pd control. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 10082-10092, 2019.

## A ALGORITHMS

## Algorithm 2 Training the cVAE

1: Input: Training data \(\{\mathbf{x}^{(i)},\psi^{(i)}\}_{i = 1}^{M}\)
2: Output: cVAE encoder network \(f_{\phi}(\mathbf{x},\psi)\) and decoder network \(g_{\theta}(\mathbf{z},\psi)\)
3: Initialize \(\phi\) and \(\theta\) randomly
4: while not converged do
5: for each \((\mathbf{x}^{(i)},\psi^{(i)})\) do
6: Compute parameters \((\mu ,\sigma)\) of the posterior distribution \(q_{\phi}(\mathbf{z}|\mathbf{x},\psi)\) using \(f_{\phi}(\mathbf{x},\psi)\)
7: Sample \(V\) Gaussian noises \(\{\epsilon_{1},\ldots ,\epsilon_{V}\}\) from \(\mathcal{N}(\mathbf{0},\mathbf{I})\)
8: Transform noises to latent samples from \(q_{\phi}(\mathbf{z}|\mathbf{x},\psi)\): \(\mathbf{z}_{v} = \mu +\sigma \odot \epsilon_{v}\)
9: Decode latent samples into reconstructed trajectories \(\{\tilde{\mathbf{x}}_{1},\ldots ,\tilde{\mathbf{x}}_{V}\}\) using \(g_{\theta}(\mathbf{z},\psi)\)
10: Calculate the cVAE loss \(\mathcal{L}_{cvae}\) according to Eq. 6
11: Update \(\phi\) and \(\theta\) with \(\nabla_{\phi}\mathcal{L}_{cvae}\) and \(\nabla_{\theta}\mathcal{L}_{cvae}\)
12: end for
13: end while
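To make steps 6-10 concrete, below is a minimal PyTorch sketch (ours, for illustration only) of the per-example loss of Algorithm 2. It assumes an encoder `f_enc(x, psi) -> (mu, sigma)` with a positive `sigma` and a decoder `g_dec(z, psi) -> x_tilde`, matching the interfaces described in Appendix B.1, and uses the negated Gaussian form of Eq. 7 since optimizers minimize.

```python
import torch

def cvae_loss(f_enc, g_dec, x, psi, beta=0.1, V=1):
    """Negative of the Eq. 7 objective for one (x, psi) pair (Algorithm 2, steps 6-10)."""
    mu, sigma = f_enc(x, psi)                  # step 6: posterior parameters (sigma > 0 assumed)
    recon = 0.0
    for _ in range(V):                         # steps 7-9: reparameterized posterior samples
        eps = torch.randn_like(sigma)
        z = mu + sigma * eps
        x_tilde = g_dec(z, psi)
        recon = recon + ((x_tilde - x) ** 2).sum()
    recon = recon / V
    # KL term with the 1/D_z scaling used in Eq. 7 (constants absorbed into beta)
    kl = -(1 + 2 * torch.log(sigma) - mu ** 2 - sigma ** 2).mean()
    return recon + beta * kl
```

Step 11 is then the usual `loss.backward()` followed by an optimizer update of both networks.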
Algorithm 3 Inference with the DSF \(\mathcal{S}_{\gamma}(\psi)\)

1: Input: Context \(\psi\), DSF \(\mathcal{S}_{\gamma}(\psi)\), cVAE decoder network \(g_{\theta}(\mathbf{z},\psi)\)
2: Output: Forecasted trajectory set \(Y_{f}\)
3: Generate latent codes \(\mathcal{Z} = \{\mathbf{z}_{1},\ldots ,\mathbf{z}_{N}\}\) with the DSF \(\mathcal{S}_{\gamma}(\psi)\)
4: Generate the trajectory ground set \(\mathcal{Y} = \{\mathbf{x}_{1},\ldots ,\mathbf{x}_{N}\}\) with the decoder \(g_{\theta}(\mathbf{z},\psi)\)
5: Compute the DPP kernel \(\mathbf{L} = \mathrm{Diag}(\mathbf{r})\cdot \mathbf{S}\cdot \mathrm{Diag}(\mathbf{r})\)
6: \(Y_{f}\leftarrow \emptyset ,U\leftarrow \mathcal{Y}\)
7: while \(U\) is not empty do
8: \(\mathbf{x}^{*}\leftarrow \arg \max_{\mathbf{x}\in U}\log \operatorname*{det}\left(\mathbf{L}_{Y_{f}\cup \{\mathbf{x}\}}\right)\)
9: if \(\log \operatorname*{det}\left(\mathbf{L}_{Y_{f}\cup \{\mathbf{x}^{*}\}}\right) - \log \operatorname*{det}\left(\mathbf{L}_{Y_{f}}\right)< 0\) then
10: break
11: end if
12: \(Y_{f}\leftarrow Y_{f}\cup \{\mathbf{x}^{*}\}\)
13: \(U\leftarrow U\backslash \{\mathbf{x}^{*}\}\)
14: end while
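Steps 6-14 translate directly into a few lines of NumPy; the sketch below (names ours, not the authors' released code) performs the greedy submodular maximization over the kernel \(\mathbf{L}\) computed in step 5 and stops as soon as the marginal log-determinant gain turns negative.

```python
import numpy as np

def greedy_dpp_map(L):
    """Greedy MAP inference for a DPP with kernel L (Algorithm 3, steps 6-14).
    Returns the indices of the selected subset Y_f."""
    N = L.shape[0]
    selected, remaining = [], list(range(N))
    cur_logdet = 0.0                          # log det of the empty submatrix is 0
    while remaining:
        gains = []
        for i in remaining:                   # step 8: best item to add
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gains.append(logdet if sign > 0 else -np.inf)
        best = int(np.argmax(gains))
        if gains[best] - cur_logdet < 0:      # step 9: marginal gain turned negative
            break
        cur_logdet = gains[best]
        selected.append(remaining.pop(best))
    return selected
```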
<--- Page Split --->

## B IMPLEMENTATION DETAILS

![](images/12_0.jpg) <center>Figure 6: Network architectures for synthetic data and human motion. Top: for synthetic data, we use a CNN to process the obstacle map \(\mathbf{f}\) and directly flatten trajectories \(\mathbf{x}\) and \(\mathbf{h}\) into vectors. The reconstructed trajectory \(\tilde{\mathbf{x}}\) is decoded with an MLP. Bottom: for human motion, we use Bi-LSTMs to extract temporal features for \(\mathbf{x}\) and \(\mathbf{h}\) and decode the reconstructed trajectory \(\tilde{\mathbf{x}}\) with a forward LSTM. </center>

## B.1 NETWORK ARCHITECTURES

Synthetic data. Fig. 6 (Top) shows the network architecture for synthetic data. The number of latent dimensions is 2. By default, we use ReLU activations for all networks. The future trajectory \(\mathbf{x} \in \mathbb{R}^{3 \times 2}\) consists of 3 future positions of the vehicle. The context \(\psi\) contains past trajectories \(\mathbf{h} \in \mathbb{R}^{2 \times 2}\) of 2 time steps and an obstacle map \(\mathbf{f} \in \{0, 1\}^{28 \times 28}\) spanning a \(4 \times 4\) area around the current position of the vehicle (the road width is 2). For the encoder, we use a convolutional neural network (CNN) with three 32-channel convolutional layers to process \(\mathbf{f}\). The first two layers have kernel size 4 and stride 2, while the last layer has kernel size 6 and stride 1. The obtained CNN features are concatenated with the flattened \(\mathbf{x}\) and \(\mathbf{h}\) into a unified feature, which is fed into a multilayer perceptron (MLP). The MLP has one 128-dim hidden layer and two heads outputting the mean \(\mu\) and variance \(\sigma\) of the latent distribution. For the decoder, we concatenate the CNN feature from \(\mathbf{f}\) with the latent code \(\mathbf{z} \in \mathbb{R}^{2}\) and the flattened \(\mathbf{h}\) into a unified feature. The feature is passed through an MLP with one 128-dim hidden layer which outputs the reconstructed future trajectory \(\tilde{\mathbf{x}} \in \mathbb{R}^{3 \times 2}\). For the diversity sampler function (DSF), we concatenate the CNN feature from \(\mathbf{f}\) with the flattened \(\mathbf{h}\) and pass it through an MLP with one 128-dim hidden layer to obtain a set of latent codes \(\{\mathbf{z}_1, \ldots , \mathbf{z}_N\}\), which are represented by a vector of length \(2N\).

Human motion. Fig. 6 (Bottom) shows the network architecture for human motion. The number of latent dimensions is 8. The future trajectory \(\mathbf{x} \in \mathbb{R}^{30 \times 59}\) consists of future poses of 30 time steps (1s). The context \(\psi\) contains past poses \(\mathbf{h} \in \mathbb{R}^{3 \times 59}\) of 3 time steps (0.1s). Each pose consists of 59 joint angles. For the encoder, we use two 128-dim bidirectional LSTMs (Bi-LSTMs) and mean pooling to obtain the temporal features for \(\mathbf{x}\) and \(\mathbf{h}\). We then concatenate the temporal features into a unified feature and feed it into an MLP with two hidden layers (300, 200) and two heads to obtain the mean \(\mu\) and variance \(\sigma\) of the latent distribution. For the decoder, we reuse the Bi-LSTM of the encoder for the context \(\mathbf{h}\) and a 128-dim forward LSTM to decode the future trajectory \(\tilde{\mathbf{x}}\). At each time step \(t\), the forward LSTM takes as input the previous pose \(\tilde{\mathbf{x}}^{t - 1}\) (\(\mathbf{h}^{H}\) for \(t = 0\)), the latent code \(\mathbf{z} \in \mathbb{R}^{8}\) and the temporal features from \(\mathbf{h}\), and outputs a 128-dim feature. The feature is then passed through an MLP with two hidden layers \((300,200)\) to generate the reconstructed pose \(\tilde{\mathbf{x}}^{t}\). For the DSF, we use a different 128-dim Bi-LSTM to obtain the temporal feature for \(\mathbf{h}\), which is fed into an MLP with a 128-dim hidden layer to produce a set of latent codes \(\{\mathbf{z}_{1},\ldots ,\mathbf{z}_{N}\}\), which are represented by a vector of length \(8N\).
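Both DSF variants share the same output structure: a context feature (CNN features for synthetic data, Bi-LSTM features for human motion) passes through one 128-dim hidden layer and is emitted as a single vector of length \(N \cdot D_z\) (\(2N\) or \(8N\) above). A minimal PyTorch sketch of this head, with illustrative names of our own choosing, is:

```python
import torch
import torch.nn as nn

class DSFHead(nn.Module):
    """Maps a context feature to N latent codes, emitted as one vector
    of length N * d_z and reshaped to a set of codes (as in B.1)."""
    def __init__(self, d_ctx, n_samples, d_z):
        super().__init__()
        self.n_samples, self.d_z = n_samples, d_z
        self.mlp = nn.Sequential(
            nn.Linear(d_ctx, 128), nn.ReLU(),   # one 128-dim hidden layer
            nn.Linear(128, n_samples * d_z),
        )

    def forward(self, ctx):
        # ctx: (B, d_ctx) context feature from the CNN or Bi-LSTM
        z = self.mlp(ctx)                        # (B, N * d_z)
        return z.view(-1, self.n_samples, self.d_z)  # (B, N, d_z)
```

Each row of the returned tensor is one latent code \(\mathbf{z}_i\), which the frozen cVAE decoder \(g_{\theta}(\mathbf{z}_i,\psi)\) turns into one trajectory of the ground set \(\mathcal{Y}\).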
## B.2 TRAINING AND EVALUATION

When training the cVAE model using Eq. 7, we take \(V = 1\) sample from the posterior \(q_{\phi}(\mathbf{z}|\mathbf{x},\psi)\). The weighting factor \(\beta\) for the KL term is set to 0.1 for synthetic data and 1e-4 for human motion. We use Adam (Kingma and Ba, 2014) to jointly optimize the encoder and decoder. The learning rate is set to 1e-4 and we use a mini-batch size of 32 for synthetic data. We optimize the model for 500 epochs for synthetic data and 100 epochs for human motion. When training the DSF, the scale factor \(k\) for the similarity matrix \(\mathbf{S}\) is set to 1 for synthetic data and 1e-2 for human motion. For both synthetic data and human motion, we use Adam with learning rate 1e-4 to optimize the DSF for 20 epochs. Recall that in the metrics section (Sec. 5.1), we need the grouping threshold \(\epsilon\) to build the ground truth future trajectory set \(\mathcal{X}^{(i)} = \{\mathbf{x}^{(j)}: \|\psi^{(j)} - \psi^{(i)}\| \leq \epsilon ,\ j = 1, \ldots , M\}\). For synthetic data, \(\epsilon\) is set to 0.1 and we only use past trajectories \(\mathbf{h}\) to compute the distance between contexts. For human motion, \(\epsilon\) is set to 0.5.

## B.3 IMPLEMENTATION DETAILS FOR EXPERIMENTS ON HUMAN3.6M

Following previous work (Martinez et al., 2017; Pavlakos et al., 2017; Pavllo et al., 2019), we convert the motion sequences in the dataset into sequences of 3D joint positions, and adopt a 17-joint skeleton. We train on five subjects (S1, S5, S6, S7, S8), and test on two subjects (S9 and S11). We use the same network architectures (Fig. 6 (Bottom)) in this experiment as those used in the human motion forecasting experiment above. The number of latent dimensions is 128. When training the cVAE model, the weighting factor \(\beta\) is set to 0.1. We sample 5000 training examples every epoch and optimize the cVAE for 500 epochs using Adam and a learning rate of 1e-4. We set the batch size to 64 for the optimization. The scale factor \(k\) for the similarity matrix \(\mathbf{S}\) of the DPP kernel is set to 5. When learning the DSF, we use a batch size of 64, sample 1000 training examples every epoch, and optimize the DSF for 20 epochs using Adam and a learning rate of 1e-3. When computing the metrics, we set the grouping threshold \(\epsilon\) to 0.1.

<--- Page Split --->

## C ADDITIONAL VISUALIZATION

We also show additional qualitative results for human motion forecasting in Fig. 7. The quality and diversity of the forecasted motions are best seen in our video².

![](images/14_0.jpg) <center>Figure 7: Additional visualization for human motion forecasting. The left shows the starting pose, and on the right we show for each method the final pose of 10 forecasted motion samples. </center>

<--- Page Split --->
## ABSTRACT The ability to forecast a set of likely yet diverse possible future behaviors of an agent (e.g., future trajectories of a pedestrian) is essential for safety- critical perception systems (e.g., autonomous vehicles). In particular, a set of possible future behaviors generated by the system must be diverse to account for all possible outcomes in order to take necessary safety precautions. It is not sufficient to maintain a set of the most likely future outcomes because the set may only contain perturbations of a dominating single outcome (major mode). While generative models such as variational autoencoders (VAEs) have been shown to be a powerful tool for learning a distribution over future trajectories, randomly drawn samples from the learned implicit likelihood model may not be diverse – the likelihood model is derived from the training data distribution and the samples will concentrate around the major mode of the data. In this work, we propose to learn a diversity sampling function (DSF) that generates a diverse yet likely set of future trajectories. The DSF maps forecasting context features to a set of latent codes which can be decoded by a generative model (e.g., VAE) into a set of diverse trajectory samples. Concretely, the process of identifying the diverse set of samples is posed as DSF parameter estimation. To learn the parameters of the DSF, the diversity of the trajectory samples is evaluated by a diversity loss based on a determinantal point process (DPP). Gradient descent is performed over the DSF parameters, which in turn moves the latent codes of the sample set to find an optimal set of diverse yet likely trajectories. Our method is a novel application of DPPs to optimize a set of items (forecasted trajectories) in continuous space. We demonstrate the diversity of the trajectories produced by our approach on both low- dimensional 2D trajectory data and high- dimensional human motion data. (Video<sup>1</sup>) ## 1 INTRODUCTION Forecasting future trajectories of vehicles and human has many useful applications in autonomous driving, virtual reality and assistive living. What makes trajectory forecasting challenging is that the future is uncertain and multi- modal – vehicles can choose different routes and people can perform different future actions. In many safety- critical applications, it is important to consider a diverse set of possible future trajectories, even those that are less likely, so that necessary preemptive actions can be taken. For example, an autonomous vehicle should understand that a neighboring car can merge into its lane even though the car is most likely to keep driving straight. To address this requirement, we need to take a generative approach to trajectory forecasting that can fully characterize the multimodal distribution of future trajectories. To capture all modes of a data distribution, variational autoencoders (VAEs) are well- suited generative models. However, random samples from a learned VAE model with Gaussian latent codes are not guaranteed to be diverse for two reasons. First, the sampling procedure is stochastic and the VAE samples can fail to cover some minor modes even with a large number of samples. Second, since VAE sampling is based on the implicit likelihood function encoded in the training data, if most of the training data is centered around a specific mode while other modes have less data (Fig. 1 (a)), the VAE samples will reflect this bias and concentrate around the major mode (Fig. 1 (b)). 
To tackle this problem, we propose to learn a diversity sampling function (DSF) that can reliably generate a diverse set of trajectory samples (Fig. 1 (c)). <--- Page Split ---> ![](images/1_0.jpg) <center>Figure 1: A toy trajectory forecasting example. (a) The three modes (pink, blue, purple) of the future trajectory distribution are shown in both the trajectory space and the latent space of a learned VAE model. The data distribution is imbalanced, where the blue mode has most data and covers most of the latent space. (b) Random samples from the VAE only cover the major (blue) mode. (c) Our proposed DSF generates a diverse set of future trajectories covering all three modes. </center> The proposed DSF is a deterministic parameterized function that maps forecasting context features (e.g., past trajectories) to a set of latent codes. The latent codes are decoded by the VAE decoder into a set of future trajectory samples, denoted as the DSF samples. In order to optimize the DSF, we formulate a diversity loss based on a determinantal point process (DPP) (Macchi, 1975) to evaluate the diversity of the DSF samples. The DPP defines the probability of choosing a random subset from the set of trajectory samples. It models the negative correlations between samples: the inclusion of a sample reduces the probability of including a similar sample. This makes the DPP an ideal tool for modeling the diversity within a set. In particular, we use the expected cardinality of the DPP as the diversity measure, which is defined as the expected size of a random subset drawn from the set of trajectory samples according to the DPP. Intuitively, since the DPP inhibits selection of similar samples, if the set of trajectory samples is more diverse, the random subset is more likely to select more samples from the set. The expected cardinality of the DPP is easy to compute and differentiable, which allows us to use it as the objective to optimize the DSF to enable diverse trajectory sampling. Our contributions are as follows: (1) We propose a new forecasting approach that learns a diversity sampling function to produce a diverse set of future trajectories; (2) We propose a novel application of DPPs to optimize a set of items (trajectories) in continuous space with a DPP- based diversity measure; (3) Experiments on synthetic data and human motion validate that our method can reliably generate a more diverse set of future trajectories compared to state- of- the- art generative models. ## 2 RELATED WORK Trajectory Forecasting has recently received significant attention from the vision community. A large portion of previous work focuses on forecasting 2D future trajectories for pedestrians (Kitani et al., 2012; Ma et al., 2017; Ballan et al., 2016; Xie et al., 2013) or vehicles (Jain et al., 2016a). Some approaches use deterministic trajectory modeling and only forecast one future trajectory (Alahi et al., 2016; Yagi et al., 2018; Robicquet et al., 2016). As there are often multiple plausible future trajectories, several approaches have tried to forecast distributions over trajectories (Lee et al., 2017; Galceran et al., 2015; Gupta et al., 2018). Recently, Rhinehart et al. (2018; 2019) propose a generative model that can accurately forecast multi- modal trajectories for vehicles. Soo Park et al. (2016) also use egocentric videos to predict the future trajectories of the camera wearer. Some work has investigated forecasting higher dimensional trajectories such as the 3D full- body pose sequence of human motions. 
Most existing work takes a deterministic approach and forecasts only one possible future motion from past 3D poses (Fragkiadaki et al., 2015; Butepage et al., 2017; Li et al., 2017; Jain et al., 2016b), static images (Chao et al., 2017; Kanazawa et al., 2018) or egocentric videos (Yuan and Kitani, 2019). Differently, some probabilistic approaches (Habibie et al., 2017; Yan et al., 2018) use conditional variational autoencoders (cVAEs) to generate multiple future motions. In contrast to previous work, our approach can generate a diverse set of future motions with a limited number of samples.

<--- Page Split --->

Diverse Solutions have been sought after in a number of problems in computer vision and machine learning. A branch of these methods aiming for diversity stems from the M-best MAP problem (Nilsson, 1998; Seroussi and Golmard, 1994), including diverse M-best solutions (Batra et al., 2012) and multiple choice learning (Guzman-Rivera et al., 2012; Lee et al., 2016). Alternatively, previous work has used submodular function maximization to select a diverse subset of garments from fashion images (Hsiao and Grauman, 2018). Determinantal point processes (DPPs) (Macchi, 1975; Kulesza et al., 2012) are efficient probabilistic models that can measure both the diversity and quality of items in a subset, which makes them a natural choice for the diverse subset selection problem. DPPs have been applied to document and video summarization (Kulesza and Taskar, 2011; Gong et al., 2014), recommendation systems (Gillenwater et al., 2014), object detection (Azadi et al., 2017), and grasp clustering (Huang et al., 2015). Elfeki et al. (2018) have also used DPPs to mitigate mode collapse in generative adversarial networks (GANs). The work most closely related to ours is Gillenwater et al. (2014), which also uses the cardinality of DPPs as a proxy for user engagement. However, there are two important differences between our approach and theirs. First, the context is different, as they use the cardinality for a subset selection problem while we apply the cardinality as an objective of a continuous optimization problem in the setting of generative models. Second, their main motivation for using the cardinality is that it aligns better with user engagement semantics, while our motivation is that using the cardinality as a diversity loss for deep neural networks is more stable due to its tolerance of similar trajectories, which are often produced by deep neural networks during stochastic gradient descent.

## 3 BACKGROUND

### 3.1 VARIATIONAL AUTOENCODERS

The aim of multi-modal trajectory forecasting is to learn a generative model over future trajectories. Variational autoencoders (VAEs) are a popular choice of generative models for trajectory forecasting (Lee et al., 2017; Walker et al., 2016) because they can effectively capture all possible future trajectories by explicitly mapping each data point to a latent code. VAEs model the joint distribution \(p_{\theta}(\mathbf{x},\mathbf{z}) = p(\mathbf{z})p_{\theta}(\mathbf{x}|\mathbf{z})\) of each data sample \(\mathbf{x}\) (e.g., a future trajectory) and its corresponding latent code \(\mathbf{z}\), where \(p(\mathbf{z})\) denotes some prior distribution (e.g., Gaussians) and \(p_{\theta}(\mathbf{x}|\mathbf{z})\) denotes the conditional likelihood model.
To calculate the marginal likelihood \(p_{\theta}(\mathbf{x}) = p_{\theta}(\mathbf{x},\mathbf{z}) / p_{\theta}(\mathbf{z}|\mathbf{x})\) , one needs to compute the posterior distribution \(p_{\theta}(\mathbf{z}|\mathbf{x})\) which is typically intractable. To tackle this, VAEs use variational inference (Jordan et al., 1999) which introduces an approximate posterior \(q_{\phi}(\mathbf{z}|\mathbf{x})\) and decomposes the marginal log- likelihood as \[\log p_{\theta}(\mathbf{x}) = \mathrm{KL}\left(q_{\phi}(\mathbf{z}|\mathbf{x})\| p_{\theta}(\mathbf{z}|\mathbf{x})\right) + \mathcal{L}(\mathbf{x};\theta ,\phi), \quad (1)\] where \(\mathcal{L}(\mathbf{x};\theta ,\phi)\) is the evidence lower bound (ELBO) defined as \[\mathcal{L}(\mathbf{x};\theta ,\phi) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z})\right] - \mathrm{KL}\left(q_{\phi}(\mathbf{z}|\mathbf{x})\| p(\mathbf{z})\right). \quad (2)\] During training, VAEs jointly optimize the recognition model (encoder) \(q_{\phi}(\mathbf{z}|\mathbf{x})\) and the likelihood model (decoder) \(p_{\theta}(\mathbf{x}|\mathbf{z})\) to maximize the ELBO. In the context of multi- modal trajectory forecasting, one can generate future trajectories from \(p(\mathbf{x})\) by drawing a latent code \(\mathbf{z}\) from the prior \(p(\mathbf{z})\) and decoding \(\mathbf{z}\) with the decoder \(p_{\theta}(\mathbf{x}|\mathbf{z})\) to produce a corresponding future trajectory \(\mathbf{x}\) . ### 3.2 DETERMINANTAL POINT PROCESSES Our core technical innovation is a method to learn a diversity sampling function (DSF) that can generate a diverse set of future trajectories. To achieve this, we must equip ourselves with a tool to evaluate the diversity of a set of trajectories. To this end, we make use of determinantal point processes (DPPs) to model the diversity within a set. DPPs promote diversity within a set because the inclusion of one item makes the inclusion of a similar item less likely if the set is sampled according to a DPP. Formally, given a set of items (e.g., data points) \(\mathcal{V} = \{\mathbf{x}_{1},\ldots ,\mathbf{x}_{N}\}\) , a point process \(\mathcal{P}\) on the ground set \(\mathcal{V}\) is a probability measure on \(2^{\mathcal{V}}\) , i.e., the set of all subsets of \(\mathcal{V}\) . \(\mathcal{P}\) is called a determinantal point process if a random subset \(\mathbf{Y}\) drawn according to \(\mathcal{P}\) has \[\mathcal{P}_{\mathbf{L}}(\mathbf{Y} = Y) = \frac{\operatorname*{det}\left(\mathbf{L}_{\mathbf{Y}}\right)}{\sum_{Y\subseteq\mathcal{Y}}\operatorname*{det}\left(\mathbf{L}_{\mathbf{Y}}\right)} = \frac{\operatorname*{det}\left(\mathbf{L}_{\mathbf{Y}}\right)}{\operatorname*{det}\left(\mathbf{L} + \mathbf{I}\right)}, \quad (3)\] <--- Page Split ---> where \(Y\subseteq \mathcal{Y}\) , \(\mathbf{I}\) is the identity matrix, \(\mathbf{L}\in \mathbb{R}^{N\times N}\) is the DPP kernel, a symmetric positive semidefinite matrix, and \(\mathbf{L}_{\mathbf{Y}}\in \mathbb{R}^{|Y|\times |Y|}\) is a submatrix of \(\mathbf{L}\) indexed by elements of \(Y\) . The DPP kernel \(\mathbf{L}\) is typically constructed by a similarity matrix \(\mathbf{S}\) , where \(\mathbf{S}_{i,j}\) defines the similarity between two items \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) . If we use the inner product as the similarity measure, \(\mathbf{L}\) can be written in the form of a Gram matrix \(\mathbf{L} = \mathbf{S} = \mathbf{X}^{T}\mathbf{X}\) where \(\mathbf{X}\) is the stacked feature matrix of \(\mathcal{Y}\) . 
As a property of the Gram matrix, \(\operatorname *{det}\left(\mathbf{L}_{\mathbf{Y}}\right)\) equals the squared volume spanned by vectors \(\mathbf{x}_{i}\in Y\) . With this geometric interpretation in mind, one can observe that diverse sets are more probable because their features are more orthogonal, thus spanning a larger volume. In addition to set diversity encoded in the similarity matrix \(\mathbf{S}\) , it is also convenient to introduce a quality vector \(\mathbf{r} = [r_{1},\ldots ,r_{N}]\) to weigh each item according to some unary metric. For example, the quality weight might be derived from the likelihood of an item. To capture both diversity and quality of a subset, the DPP kernel \(\mathbf{L}\) is often decomposed in the more general form: \[\mathbf{L} = \mathrm{Diag}(\mathbf{r})\cdot \mathbf{S}\cdot \mathrm{Diag}(\mathbf{r}). \quad (4)\] With this decomposition, we can see that both the quality vector \(\mathbf{r}\) and similarity matrix \(\mathbf{S}\) contribute to the DPP probability of a subset \(Y\) : \[\mathcal{P}_{L}(\mathbf{Y} = Y)\propto \operatorname *{det}\left(\mathbf{L}_{Y}\right) = \left(\prod_{\mathbf{x}_{i}\in Y}r_{i}^{2}\right)\operatorname *{det}\left(\mathbf{S}_{Y}\right). \quad (5)\] Due to its ability to capture the global diversity and quality of a set of items, we choose DPPs as the probabilistic approach to evaluate and optimize the diversity of the future trajectories drawn by our proposed diversity sampling function. ## 4 APPROACH Safety- critical applications often require that the system can maintain a diverse set of outcomes covering all modes of a predictive distribution and not just the most likely one. To address this requirement, we propose to learn a diversity sampling function (DSF) to draw deterministic trajectory samples by generating a set of latent codes in the latent space of a conditional variational autoencoder (cVAE) and decoding them into trajectories using the cVAE decoder. The DSF trajectory samples are evaluated with a DPP- based diversity loss, which in turn optimizes the parameters of the DSF for more diverse trajectory samples. Formally, the future trajectory \(\mathbf{x}\in \mathbb{R}^{T\times D}\) is a random variable denoting a \(D\) dimensional feature over a future time horizon \(\hat{T}\) (e.g., a vehicle trajectory or a sequence of humanoid poses). The context \(\psi = \{\mathbf{h},\mathbf{f}\}\) provides the information to infer the future trajectory \(\mathbf{x}\) , and it contains the past trajectory \(\mathbf{h}\in \mathbb{R}^{H\times D}\) of last \(H\) time steps and optionally other side information \(\mathbf{f}\) , such as an obstacle map. In the following, we first describe how we learn the future trajectory model \(p_{\theta}(\mathbf{x}|\psi)\) with a cVAE. Then, we introduce the DSF and the DPP- based diversity loss used to optimize the DSF. ### 4.1 LEARNING A CVAE FOR FUTURE TRAJECTORIES In order to generate a diverse set of future trajectory samples, we need to learn a generative trajectory forecasting model \(p_{\theta}(\mathbf{x}|\psi)\) that can cover all modes of the data distribution. Here we use cVAEs (other proper generative models can also be used), which explicitly map data \(\mathbf{x}\) with the encoder \(q_{\phi}(\mathbf{z}|\mathbf{x},\psi)\) to its corresponding latent code \(\mathbf{z}\) and reconstruct the data from the latent code using the decoder \(p_{\theta}(\mathbf{x}|\mathbf{z},\psi)\) . 
By maintaining this one- on- one mapping between the data and the latent code, cVAEs attempt to capture all modes of the data. As discussed in Sec. 3.1, cVAEs jointly optimize the encoder and decoder to maximize the variational lower bound: \[\mathcal{L}(\mathbf{x},\psi ;\theta ,\phi) = \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\psi)}\left[\log p_{\theta}(\mathbf{x}|\mathbf{z},\psi)\right] - \mathrm{KL}\left(q_{\phi}(\mathbf{z}|\mathbf{x},\psi)\| p(\mathbf{z})\right). \quad (6)\] We use multivariate Gaussians for the prior, encoder and decoder: \(p(\mathbf{z}) = \mathcal{N}(\mathbf{z};\mathbf{0},\mathbf{I})\) , \(q_{\phi}(\mathbf{z}|\mathbf{x},\psi) = \mathcal{N}(\mathbf{z};\mu ,\sigma^{2}\mathbf{I})\) , and \(p_{\theta}(\mathbf{x}|\mathbf{z},\psi) = \mathcal{N}(\mathbf{x};\tilde{\mathbf{x}},\alpha \mathbf{I})\) . Both the encoder and decoder are implemented as neural networks. The encoder network \(f_{\phi}\) outputs the parameters of the posterior distribution: \((\mu ,\sigma) = f_{\phi}(\mathbf{x},\psi)\) . The decoder network \(g_{\theta}\) outputs the reconstructed future trajectory \(\tilde{\mathbf{x}}\) : <--- Page Split ---> \(\tilde{\mathbf{x}} = g_{\theta}(\mathbf{z},\boldsymbol {\psi})\) . Detailed network architectures are given in Appendix B.1. Based on the Gaussian parameterization of the cVAE, the objective in Eq. 6 can be rewritten as \[\mathcal{L}_{c v a e}(\mathbf{x},\boldsymbol {\psi};\boldsymbol {\theta},\boldsymbol {\phi}) = -\frac{1}{V}\sum_{\boldsymbol{v} = 1}^{V}\| \tilde{\mathbf{x}}_{\boldsymbol{v}} - \mathbf{x}\|^{2} + \beta \cdot \frac{1}{D_{z}}\sum_{j = 1}^{D_{z}}\left(1 + 2\log \sigma_{j} - \mu_{j}^{2} - \sigma_{j}^{2}\right), \quad (7)\] where we take \(V\) samples from the posterior \(q_{\phi}(\mathbf{z}|\mathbf{x},\boldsymbol {\psi})\) , \(D_{z}\) is the number of latent dimensions, and \(\beta = 1 / \alpha\) is a weighting factor. The training procedure for the cVAE is detailed in Alg. 2 (Appendix A). Once the cVAE model is trained, sampling from the learned future trajectory model \(p_{\theta}(\mathbf{x}|\boldsymbol {\psi})\) is efficient: we can sample a latent code \(\mathbf{z}\) according to the prior \(p(\mathbf{z})\) and use the decoder \(p_{\theta}(\mathbf{x}|\mathbf{z},\boldsymbol {\psi})\) to decode it into a future trajectory \(\mathbf{x}\) . Algorithm 1 Training the diversity sampling function (DSF) \(\mathcal{S}_{\gamma}(\boldsymbol {\psi})\) 1: Input: Training data \(\{\mathbf{x}^{(i)},\boldsymbol{\psi}^{(i)}\}_{i = 1}^{M}\) cVAE decoder network \(g_{\theta}(\mathbf{z},\boldsymbol {\psi})\) 2: Output: DSF \(\mathcal{S}_{\gamma}(\boldsymbol {\psi})\) 3: Initialize \(\gamma\) randomly 4: while not converged do 5: for each \(\boldsymbol{\psi}^{(i)}\) do 6: Generate latent codes \(\mathcal{Z} = \{\mathbf{z}_{1},\ldots ,\mathbf{z}_{N}\}\) with the DSF \(\mathcal{S}_{\gamma}(\boldsymbol {\psi})\) 7: Generate the trajectory ground set \(\mathcal{Y} = \{\mathbf{x}_{1},\ldots ,\mathbf{x}_{N}\}\) with the decoder \(g_{\theta}(\mathbf{z},\boldsymbol {\psi})\) 8: Compute the similarity matrix \(\mathbf{S}\) and quality vector \(\mathbf{r}\) with Eq. 
8 and 9 9: Compute the DPP kernel \(\mathbf{L}(\gamma) = \mathrm{Diag}(\mathbf{r})\cdot \mathbf{S}\cdot \mathrm{Diag}(\mathbf{r})\) 10: Calculate the diversity loss \(\mathcal{L}_{d i v e r s e}\) 11: Update \(\gamma\) with the gradient \(\nabla \mathcal{L}_{d i v e r s e}\) 12: end for 13: end while ### 4.2 DIVERSITY SAMPLING FUNCTION (DSF) As mentioned before, randomly sampling from the learned cVAE model according to the implicit likelihood function \(p_{\theta}(\mathbf{x}|\boldsymbol {\psi})\) , i.e., sampling latent codes from the prior \(p(\mathbf{z})\) , does not guarantee that the trajectory samples are diverse: major modes (those having more data) with higher likelihood will produce most of the samples while minor modes with lower likelihood will have almost no sample. This prompts us to devise a new sampling strategy that can reliably generate a diverse set of samples covering both major and minor modes. We propose to learn a diversity sampling function (DSF) \(\mathcal{S}_{\gamma}(\boldsymbol {\psi})\) that maps context \(\boldsymbol{\psi}\) to a set of latent codes \(\mathcal{Z} = \{\mathbf{z}_{1},\ldots ,\mathbf{z}_{N}\}\) . The DSF is implemented as a \(\gamma\) - parameterized neural network which takes \(\boldsymbol{\psi}\) as input and outputs a vector of length \(N\cdot D_{z}\) (see Appendix B.1 for network details). The latent codes \(\mathcal{Z}\) are decoded into a diverse set of future trajectories \(\mathcal{Y} = \{\mathbf{x}_{1},\ldots ,\mathbf{x}_{N}\}\) , which are denoted as the DSF trajectory samples. We note that \(N\) is the sampling budget. To solve for the parameters of the DSF, we propose a diversity loss based on a DPP defined over \(\mathcal{Y}\) . In this section, we first describe how the DPP kernel \(\mathbf{L}\) is defined, which involves the construction of the similarity matrix \(\mathbf{S}\) and quality vector \(\mathbf{r}\) . We then discuss how we use the DPP kernel \(\mathbf{L}\) to formulate a diversity loss to optimize the parameters of the DSF. Recall that the DPP kernel is defined as \(\mathbf{L} = \mathrm{Diag}(\mathbf{r})\cdot \mathbf{S}\cdot \mathrm{Diag}(\mathbf{r})\) , where \(\mathbf{r}\) defines the quality of each trajectory and \(\mathbf{S}\) measures the similarity between two trajectories. The DPP kernel \(\mathbf{L}(\gamma)\) is a function of \(\gamma\) as it is defined over the ground set \(\mathcal{Y}\) output by the DSF \(\mathcal{S}_{\gamma}(\boldsymbol {\psi})\) . Similarity. We measure the similarity \(\mathbf{S}_{i j}\) between two trajectories \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) as \[\mathbf{S}_{i j} = \exp \left(-k\cdot d_{\mathbf{x}}^{2}(\mathbf{x}_{i},\mathbf{x}_{j})\right), \quad (8)\] where \(d_{\mathbf{x}}\) is the Euclidean distance and \(k\) is a scaling factor. This similarity design ensures that \(0\leq \mathbf{S}_{i j}\leq 1\) and \(\mathbf{S}_{i i} = 1\) . It also makes \(\mathbf{S}\) positive definite since the Gaussian kernel we use is a positive definite kernel. Quality. It may be tempting to use \(p(\mathbf{x}|\boldsymbol {\psi})\) to define the quality of each trajectory sample. However, this likelihood- based measure will clearly favor major modes that have higher probabilities, making it less likely to generate samples from minor modes. This motivates us to design a quality metric that <--- Page Split ---> treats all modes equally. 
To this end, unlike the similarity metric which is defined in the trajectory space, the quality of each sample is measured in the latent space and is defined as \[r_{i} = \left\{ \begin{array}{ll}\omega , & \mathrm{if}\| \mathbf{z}_{i}\| \leq R\\ \omega \exp \left(-\mathbf{z}_{i}^{T}\mathbf{z}_{i} + R^{2}\right), & \mathrm{otherwise} \end{array} \right. \quad (9)\] Geometrically, let \(R\) be the radius of a sphere \(\Phi\) containing most samples from the Gaussian prior \(p(\mathbf{z})\) . We treat samples inside \(\Phi\) equally and only penalize samples outside \(\Phi\) . In this way, samples from major modes are not preferred over those from minor modes as long as they are inside \(\Phi\) , while samples far away from the data manifold are heavily penalized as they are outside \(\Phi\) . The radius \(R\) is determined by where \(\rho\) percent of the Gaussian samples lie within, and we set \(\rho = 90\) . To compute \(R\) , we use the percentage point function of the chi- squared distribution which models the distribution over the sum of squares of independent standard normal variables. The base quality \(\omega\) is a hyperparameter which we set to 1 during training in our experiments. At test time, we can use a larger \(\omega\) to encourage the DPP to select more items from the ground set \(\mathcal{V}\) . The hyperparameter \(\rho\) (or \(R\) ) allows for the trade- off between diversity and quality. When \(R\) is small, the quality metric is reduced to a pure likelihood- based metric (proportional to the latent likelihood), which will prefer samples with high likelihood and result in a less diverse sample set. When \(R\) is large, most samples will have the same quality, and the resulting samples will be highly diverse but less likely. In practice, the choice of \(R\) should be application dependent, as one could imagine autonomous vehicles would need to consider more diverse scenarios including those less likely ones to ensure robustness. We note that after the diverse samples are obtained, it is possible to reassign the quality score for each sample based on its likelihood to allow users to prioritize more likely samples. Diversity Loss. To optimize the DSF \(\mathcal{S}_{\gamma}(\psi)\) , we need to define a diversity loss that measures the diversity of the trajectory ground set \(\mathcal{V}\) generated by \(\mathcal{S}_{\gamma}(\psi)\) . An obvious choice for the diversity loss would be the negative log likelihood \(- \log \mathcal{P}_{\mathbf{L}(\gamma)}(\mathbf{Y} = \mathcal{V}) = - \log \operatorname *{det}(\mathbf{L}(\gamma)) + \log \operatorname *{det}(\mathbf{L}(\gamma) + \mathbf{I})\) . However, there is a problem with directly using the DPP log likelihood. The log likelihood heavily penalizes repeated items: if two trajectories inside \(\mathcal{V}\) are very similar, their corresponding rows in \(\mathbf{L}\) will be almost identical, making \(\operatorname *{det}(\mathbf{L}(\gamma)) = \lambda_{1}\lambda_{2}\ldots \lambda_{N}\approx 0\) ( \(\lambda_{n}\) is the \(n\) - th eigenvalue). In practice, if the number of modes in the trajectory distribution \(p(\mathbf{x}|\psi)\) is smaller than \(|\mathcal{V}|\) , \(\mathcal{V}\) will always have similar trajectories, thus making \(\operatorname *{det}(\mathbf{L}(\gamma))\) always close to zero. In such cases, optimizing the negative log likelihood causes numerical issues, which is observed in our early experiments. 
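To make the kernel construction and this failure mode concrete, the following NumPy sketch (ours, for illustration; function and variable names are not from the paper's code) builds \(\mathbf{L}\) via Eq. 4 from the similarity of Eq. 8 and the quality of Eq. 9, taking the squared radius \(R^2\) from the chi-squared percentage point function as described above. It then shows how \(\log \operatorname{det}(\mathbf{L})\) collapses once two samples nearly coincide, whereas the expected-cardinality measure adopted next (Eq. 10) stays finite.

```python
import numpy as np
from scipy.stats import chi2
from scipy.spatial.distance import cdist

def dpp_kernel(X, Z, k=1.0, omega=1.0, rho=0.90):
    """L = Diag(r) . S . Diag(r) (Eq. 4), with S from Eq. 8 and r from Eq. 9.
    X: (N, d_x) flattened trajectory samples; Z: (N, d_z) their latent codes."""
    S = np.exp(-k * cdist(X, X, "sqeuclidean"))              # Eq. 8
    R2 = chi2.ppf(rho, df=Z.shape[1])                        # squared radius of the sphere Phi
    sq = (Z ** 2).sum(axis=1)
    r = np.where(sq <= R2, omega, omega * np.exp(-sq + R2))  # Eq. 9
    return r[:, None] * S * r[None, :]

# Two nearly identical samples drive det(L) toward zero, so the DPP
# log-likelihood diverges, while the expected cardinality stays finite.
X = np.array([[0.0, 0.0], [0.0, 1e-6], [1.0, 1.0]])  # first two samples nearly coincide
Z = np.zeros((3, 2))                                  # all codes inside Phi, so r_i = omega
L = dpp_kernel(X, Z)
print(np.linalg.slogdet(L))   # log det(L) is large and negative; -> -inf as the pair merges
print(np.trace(np.eye(3) - np.linalg.inv(L + np.eye(3))))  # expected cardinality: finite
```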
Instead, the expected cardinality of the DPP is a better measure for the diversity of \(\mathcal{V}\) , which is defined as \(\mathbb{E}_{\mathbf{Y}\sim \mathcal{P}_{\mathbf{L}(\gamma)}}[\| \mathbf{Y}\| ]\) . Intuitively, since the DPP discourages selection of similar items, if \(\mathcal{V}\) is more diverse, a random subset \(\mathbf{Y}\) drawn according to the DPP is more likely to select more items from \(\mathcal{V}\) , thus having larger cardinality. The expected cardinality can be computed as (Eq. 15 and 34 in Kulesza et al. (2012)): \[\mathbb{E}[\| \mathbf{Y}\| ] = \sum_{n = 1}^{N}\frac{\lambda_{n}}{\lambda_{n} + 1} = \operatorname {tr}\left(\mathbf{I} - (\mathbf{L}(\gamma) + \mathbf{I})^{-1}\right). \quad (10)\] The main advantage of the expected cardinality is that it is well defined even when the ground set \(\mathcal{V}\) has duplicated items, since it does not require all eigenvalues of \(\mathbf{L}\) to be non- zero as the log likelihood does. Thus, our diversity loss is defined as \(\mathcal{L}_{diverse}(\gamma) = - \operatorname {tr}\left(\mathbf{I} - (\mathbf{L}(\gamma) + \mathbf{I})^{-1}\right)\) . The training procedure for \(\mathcal{S}_{\gamma}(\psi)\) is outlined in Alg. 1. Inference. At test time, given current context \(\psi\) , we use the learned DSF \(\mathcal{S}_{\gamma}(\psi)\) to generate the future trajectory ground set \(\mathcal{V}\) . In some cases, \(\mathcal{V}\) may still contain some trajectories that are similar to others. In order to obtain a diverse set of trajectories without repetition, we aim to perform MAP inference for the DPP to find the most diverse subset \(Y^{*} = \arg \max_{Y\in \mathcal{V}}\mathcal{P}_{\mathbf{L}(\gamma)}(Y)\) . A useful property of DPPs is that the log- probability function is submodular (Gillenwater et al., 2012). Even though submodular maximization is NP- hard, we use a greedy algorithm (Nemhauser et al., 1978) which is a popular optimization procedure that works well in practice. As outlined in Alg. 3, the output set \(Y_{f}\) is initialized as \(\emptyset\) , and at each iteration, the trajectory which maximizes the log probability \[\mathbf{x}^{*} = \arg \max_{\mathbf{x}\in \mathcal{V}\backslash Y_{f}}\log \operatorname *{det}\left(\mathbf{L}_{Y_{f}\cup \{\mathbf{x}\}}\right) \quad (11)\] is added to \(Y_{f}\) , until the marginal gain becomes negative or \(Y_{f} = \mathcal{V}\) . <--- Page Split ---> ## 5 EXPERIMENTS The primary focus of our experiments is to answer the following questions: (1) Are trajectory samples generated with our diversity sampling function more diverse than samples from the cVAE and other baselines? (2) How does our method perform on both balanced and imbalanced data? (3) Is our method general enough to perform well on both low- dimensional and high- dimensional tasks? Metrics. A problem with trajectory forecasting evaluation is that in real data each context \(\psi^{(i)}\) usually only has one future trajectory \(\mathbf{x}^{(i)}\) , which means we only have one sample from a multi- modal distribution. Let us consider a scenario of three data examples \(\{\mathbf{x}^{(i)},\psi^{(i)}\}_{i = 1}^{3}\) as shown in Fig. 2 (red, purple, blue). The contexts (past trajectories) of the three examples are instances of the same trajectory but they are slightly different due to noise. As these three contexts have the same semantic meaning, they should share the future trajectories, e.g., the purple and blue future trajectories are also valid for the red context. 
If we evaluate each example \((\mathbf{x}^{(i)},\psi^{(i)})\) only with its own future trajectory \(\mathbf{x}^{(i)}\), a method can achieve high scores by only forecasting the mode corresponding to \(\mathbf{x}^{(i)}\) and dropping other modes. This is undesirable because we want a good method to capture all modes of the future trajectory distribution, not just a single mode. To allow for multi-modal evaluation, we propose collecting multiple future trajectories for each example by clustering examples with similar contexts. Specifically, we augment each data example \((\mathbf{x}^{(i)},\psi^{(i)})\) with a future trajectory set \(\mathcal{X}^{(i)} = \{\mathbf{x}^{(j)}: \|\psi^{(j)} - \psi^{(i)}\|\leq \epsilon ,\ j = 1,\ldots ,M\}\) and metrics are calculated based on \(\mathcal{X}^{(i)}\) instead of \(\mathbf{x}^{(i)}\), i.e., we compute metrics for each \(\mathbf{x}\in \mathcal{X}^{(i)}\) and average the results.

![](images/6_0.jpg) <center>Figure 2: In real data, contexts (past trajectories) are seldom the same due to noise. </center>

We use the following metrics for evaluation: (1) Average Displacement Error (ADE): average mean square error (MSE) over all time steps between the ground truth future trajectory \(\mathbf{x}\) and the closest sample \(\tilde{\mathbf{x}}\) in the forecasted set of trajectories \(Y_{f}\). (2) Final Displacement Error (FDE): MSE between the final ground truth position \(\mathbf{x}^{T}\) and the closest sample's final position \(\tilde{\mathbf{x}}^{T}\). (3) Average Self Distance (ASD): average \(L2\) distance over all time steps between a forecasted sample \(\tilde{\mathbf{x}}_{i}\) and its closest neighbor \(\tilde{\mathbf{x}}_{j}\) in \(Y_{f}\). (4) Final Self Distance (FSD): \(L2\) distance between the final position of a sample \(\tilde{\mathbf{x}}_{i}^{T}\) and its closest neighbor's final position \(\tilde{\mathbf{x}}_{j}^{T}\). The ADE and FDE are common metrics used in prior work on trajectory forecasting (Alahi et al., 2016; Lee et al., 2017; Rhinehart et al., 2018; Gupta et al., 2018). However, these two metrics do not penalize repeated samples. To address this, we introduce two new metrics, ASD and FSD, to evaluate the similarity between samples in the set of forecasted trajectories. Larger ASD and FSD mean the forecasted trajectories are more non-repetitive and diverse.
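As a reference for these four metrics, the NumPy sketch below (ours) scores a forecast set against one ground-truth future; we use per-step L2 displacements throughout, so substituting squared errors for the MSE-based ADE/FDE definitions above is a one-line change, and the averaging over the collected set \(\mathcal{X}^{(i)}\) is a loop around these calls.

```python
import numpy as np

def ade_fde(samples, gt):
    """ADE/FDE: distance from the ground truth to its closest forecast.
    samples: (N, T, D) forecast set Y_f; gt: (T, D) one ground-truth future."""
    disp = np.linalg.norm(samples - gt, axis=-1)   # (N, T) per-step displacement
    return disp.mean(axis=-1).min(), disp[:, -1].min()

def asd_fsd(samples):
    """ASD/FSD: each forecast's distance to its closest neighbor in Y_f,
    averaged over the set; larger values indicate a more diverse set."""
    diff = np.linalg.norm(samples[:, None] - samples[None], axis=-1)  # (N, N, T)
    avg, fin = diff.mean(-1), diff[..., -1]
    np.fill_diagonal(avg, np.inf)                  # exclude self-distances
    np.fill_diagonal(fin, np.inf)
    return avg.min(axis=1).mean(), fin.min(axis=1).mean()
```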
Baselines. We compare our Diversity Sampler Function (DSF) with the following baselines: (1) cVAE: a method that follows the original sampling scheme of the cVAE by sampling latent codes from a Gaussian prior \(p(\mathbf{z})\). (2) MCL: an approach that uses multiple choice learning (Lee et al., 2016) to optimize the sampler \(S_{\gamma}(\psi)\) with the loss \(\mathcal{L}_{\mathrm{mcl}} = \min_{\tilde{\mathbf{x}}\in \mathcal{Y}}\| \tilde{\mathbf{x}} -\mathbf{x}\|^{2}\), where \(\mathbf{x}\) is the ground truth future trajectory. (3) R2P2: a method proposed in Rhinehart et al. (2018) that uses a reparametrized pushforward policy to improve modeling of multi-modal distributions for vehicle trajectories. (4) cGAN: generative adversarial networks (Goodfellow et al., 2014) conditioned on the forecasting context. We implement all baselines using similar networks and perform a hyperparameter search for each method for fair comparisons. For methods whose samples are stochastic, we use 10 random seeds and report the average results for all metrics.

### 5.1 SYNTHETIC 2D TRAJECTORY DATA

We first use synthetic data to evaluate our method's performance on low-dimensional tasks. We design a virtual 2D traffic scene where a vehicle comes to a crossroad and can choose three different future routes: forward, left, and right. We consider two types of synthetic data: (1) Balanced data, which means the probabilities of the vehicle choosing one of the three routes are the same; (2) Imbalanced data, where the probabilities of the vehicle going forward, left and right are 0.8, 0.1, 0.1, respectively. We synthesize trajectory data by simulating the vehicle's behavior and adding Gaussian noise to vehicle velocities. Each data example \((\mathbf{x}^{(i)},\psi^{(i)})\) contains future trajectories of 3 steps and past trajectories of 2 steps. We also add an obstacle map around the current position to the context \(\psi^{(i)}\). In total, we have around 1100 training examples and 1000 test examples. Please refer to Appendix B for more implementation details.

<--- Page Split --->

![](images/7_0.jpg) <center>Figure 3: Qualitative results on synthetic data for both balanced and imbalanced data distribution when \(N = 10\). Blue represents the past trajectory and red represents forecasted future trajectories. </center>

Table 1: Quantitative results on synthetic data (numbers scaled by 10) when \(N = 10\)

<table><tr><td rowspan="2">Method</td><td colspan="4">Balanced Data</td><td colspan="4">Imbalanced Data</td></tr><tr><td>ADE ↓</td><td>FDE ↓</td><td>ASD ↑</td><td>FSD ↑</td><td>ADE ↓</td><td>FDE ↓</td><td>ASD ↑</td><td>FSD ↑</td></tr><tr><td>DSF (Ours)</td><td>0.182</td><td>0.344</td><td>0.147</td><td>0.340</td><td>0.198</td><td>0.371</td><td>0.207</td><td>0.470</td></tr><tr><td>cVAE</td><td>0.262</td><td>0.518</td><td>0.022</td><td>0.050</td><td>0.332</td><td>0.662</td><td>0.021</td><td>0.050</td></tr><tr><td>MCL</td><td>0.276</td><td>0.548</td><td>0.012</td><td>0.030</td><td>0.457</td><td>0.938</td><td>0.005</td><td>0.010</td></tr><tr><td>R2P2</td><td>0.211</td><td>0.361</td><td>0.047</td><td>0.080</td><td>0.393</td><td>0.776</td><td>0.019</td><td>0.030</td></tr><tr><td>cGAN</td><td>0.808</td><td>1.619</td><td>0.018</td><td>0.010</td><td>1.784</td><td>3.744</td><td>0.006</td><td>0.001</td></tr></table>

Table 1 summarizes the quantitative results for both balanced and imbalanced data when the sampling budget \(N\) is 10. We can see that our method DSF outperforms the baselines in all metrics in both test settings. As shown in Fig. 3, our method generates more diverse trajectories and is less affected by the imbalanced data distribution. The trajectory samples of our method are also less repetitive, a feature afforded by our DPP formulation. Fig. 4 shows how ADE changes as a function of the sampling budget \(N\).
### 5.2 DIVERSE HUMAN MOTION FORECASTING

To further evaluate our method's performance on more complex and high-dimensional tasks, we apply our method to forecast future human motions (pose sequences). We use motion capture to obtain 10 motion sequences including different types of motions such as walking, turning, jogging, bending, and crouching. Each sequence is about 1 minute long and each pose consists of 59 joint angles. We use past 3 poses (0.1s) to forecast the next 30 poses (1s). There are around 9400 training examples and 2000 test examples, where we use different sequences for training and testing. More implementation details can be found in Appendix B.

Table 2: Quantitative results for human motion forecasting when \(N = 10\)

<table><tr><td>Method</td><td>ADE ↓</td><td>FDE ↓</td><td>ASD ↑</td><td>FSD ↑</td></tr><tr><td>DSF (Ours)</td><td>0.259</td><td>0.421</td><td>0.115</td><td>0.282</td></tr><tr><td>cVAE</td><td>0.332</td><td>0.642</td><td>0.034</td><td>0.098</td></tr><tr><td>MCL</td><td>0.344</td><td>0.674</td><td>0.036</td><td>0.122</td></tr><tr><td>cGAN</td><td>0.652</td><td>1.296</td><td>0.001</td><td>0.003</td></tr></table>

We present quantitative results in Table 2, which show that our approach outperforms the other methods on all metrics. As the dynamics model used in R2P2 (Rhinehart et al., 2018) does not generalize well to high-dimensional human motion, we find that the model fails to converge, and we do not compare with it in this experiment. Fig. 4 shows that our method achieves a large improvement when the sampling budget is large (\(N = 50\)). We also present qualitative results in Fig. 5, where we show the starting pose and the final pose of all 10 forecasted motion samples for each method.

<--- Page Split --->

![](images/8_0.jpg) <center>Figure 4: ADE vs. \(N\) for synthetic data and human motion forecasting. cGAN is not shown in this plot as it is much worse than other methods due to mode collapse. </center>

![](images/8_1.jpg) <center>Figure 5: Qualitative results for human motion forecasting when \(N = 10\). The left shows the starting pose, and the right shows for each method the final pose of all 10 forecasted motion samples. </center>

We can clearly see that our method generates more diverse future human motions than the baselines. Please refer to Appendix C and our video for additional qualitative results.

### 5.3 ADDITIONAL EXPERIMENTS WITH DIVERSITY-BASED BASELINES

In this section, we perform additional experiments on a large human motion dataset (3.6 million frames), Human3.6M (Ionescu et al., 2013), to evaluate the generalization ability of our approach. We predict future motion of 2 seconds based on observed motion of 0.5 seconds. Please refer to Appendix B.3 for implementation details. We also use a new selection of baselines, including several variants of our method (DSF) and the cVAE, to validate key design choices, in particular the choice of the expected cardinality over the negative log likelihood (NLL) of the DPP as the diversity loss.
Specifically, we use the following new baselines: (1) DSF-NLL: a variant of DSF that uses NLL as the diversity loss instead of the expected cardinality. (2) DSF-COS: a DSF variant that uses cosine similarity to build the similarity matrix \(\mathbf{S}\) for the DPP kernel \(\mathbf{L}\). (3) cVAE-LDPP: a variant of the cVAE that samples 100 latent codes and performs DPP MAP inference on the latent codes to obtain a diverse set of latent codes, which are then decoded into trajectory samples.

<table><tr><td rowspan="2">Method</td><td colspan="4">N=10</td><td colspan="4">N=50</td></tr><tr><td>ADE↓</td><td>FDE↓</td><td>ASD↑</td><td>FSD↑</td><td>ADE↓</td><td>FDE↓</td><td>ASD↑</td><td>FSD↑</td></tr><tr><td>DSF (Ours)</td><td>0.340</td><td>0.521</td><td>0.381</td><td>0.621</td><td>0.236</td><td>0.306</td><td>0.313</td><td>0.415</td></tr><tr><td>DSF-NLL</td><td>0.335</td><td>0.514</td><td>0.343</td><td>0.496</td><td>X</td><td>X</td><td>X</td><td>X</td></tr><tr><td>DSF-COS</td><td>2.588</td><td>1.584</td><td>5.093</td><td>5.718</td><td>0.978</td><td>0.891</td><td>2.007</td><td>1.968</td></tr><tr><td>cVAE</td><td>0.363</td><td>0.549</td><td>0.235</td><td>0.360</td><td>0.276</td><td>0.369</td><td>0.160</td><td>0.220</td></tr><tr><td>cVAE-LDPP</td><td>0.373</td><td>0.554</td><td>0.280</td><td>0.426</td><td>0.277</td><td>0.365</td><td>0.176</td><td>0.240</td></tr></table>

Table 3: Quantitative results on Human3.6M (Ionescu et al., 2013) for \(N = 10\) and \(N = 50\). X means the method is unable to learn a model due to numerical instability.

We present quantitative results in Table 3 for sampling budgets \(N\) of 10 and 50. The baseline DSF-COS achieves very high diversity (ASD and FSD), but its samples are overly diverse and of poor quality, as indicated by the large ADE and FDE. Compared with DSF-NLL, our method achieves better diversity (ASD and FSD) and similar ADE and FDE when the number of samples is small (\(N = 10\)). For a larger number of samples (\(N = 50\)), NLL becomes unstable even with a large \(\epsilon\) (1e-3) added to the diagonal. This behavior of NLL, i.e., stable for small \(N\) but unstable for large \(N\), matches our intuition that NLL becomes unstable when samples become similar (as discussed in Sec. 4.2): with more samples, it is easier for similar samples to arise during the SGD updates of the DSF network. The baseline cVAE-LDPP also performs worse than DSF on all metrics, even though it outperforms the cVAE. We believe the reason is that diversity in sample space may not be well reflected in the latent space, due to the non-linear mapping from latent codes to samples induced by deep neural networks.

## 6 CONCLUSION

We proposed a novel forecasting approach that uses a DSF to optimize over the sample space of a generative model. Our method learns the DSF with a DPP-based diversity measure to generate a diverse set of trajectories. The diversity measure is a novel application of DPPs to optimize a set of items in continuous space. Experiments have shown that our approach generates more diverse vehicle trajectories and human motions than state-of-the-art baseline forecasting approaches.

Acknowledgment. This project was sponsored in part by JST CREST (JPMJCR14E1), NSF NRI (1637927) and IARPA (D17PC00340).
## A ALGORITHMS

## Algorithm 2 Training the cVAE

1: Input: Training data \(\{\mathbf{x}^{(i)},\psi^{(i)}\}_{i = 1}^{M}\)
2: Output: cVAE encoder network \(f_{\phi}(\mathbf{x},\psi)\) and decoder network \(g_{\theta}(\mathbf{z},\psi)\)
3: Initialize \(\phi\) and \(\theta\) randomly
4: while not converged do
5: for each \((\mathbf{x}^{(i)},\psi^{(i)})\) do
6: Compute parameters \((\mu ,\sigma)\) of the posterior distribution \(q_{\phi}(\mathbf{z}|\mathbf{x},\psi)\) using \(f_{\phi}(\mathbf{x},\psi)\)
7: Sample \(V\) Gaussian noises \(\{\epsilon_{1},\ldots ,\epsilon_{V}\}\) from \(\mathcal{N}(\mathbf{0},\mathbf{I})\)
8: Transform the noises into latent samples from \(q_{\phi}(\mathbf{z}|\mathbf{x},\psi)\): \(\mathbf{z}_{v} = \mu +\sigma \odot \epsilon_{v}\)
9: Decode the latent samples into reconstructed trajectories \(\{\tilde{\mathbf{x}}_{1},\ldots ,\tilde{\mathbf{x}}_{V}\}\) using \(g_{\theta}(\mathbf{z},\psi)\)
10: Calculate the cVAE loss \(\mathcal{L}_{\mathrm{cvae}}\) according to Eq. 6
11: Update \(\phi\) and \(\theta\) with \(\nabla_{\phi}\mathcal{L}_{\mathrm{cvae}}\) and \(\nabla_{\theta}\mathcal{L}_{\mathrm{cvae}}\)
12: end for
13: end while

## Algorithm 3 Inference with the DSF \(\mathcal{S}_{\gamma}(\psi)\)

1: Input: Context \(\psi\), DSF \(\mathcal{S}_{\gamma}(\psi)\), cVAE decoder network \(g_{\theta}(\mathbf{z},\psi)\)
2: Output: Forecasted trajectory set \(Y_{f}\)
3: Generate latent codes \(\mathcal{Z} = \{\mathbf{z}_{1},\ldots ,\mathbf{z}_{N}\}\) with the DSF \(\mathcal{S}_{\gamma}(\psi)\)
4: Generate the trajectory ground set \(\mathcal{Y} = \{\mathbf{x}_{1},\ldots ,\mathbf{x}_{N}\}\) with the decoder \(g_{\theta}(\mathbf{z},\psi)\)
5: Compute the DPP kernel \(\mathbf{L} = \mathrm{Diag}(\mathbf{r})\cdot \mathbf{S}\cdot \mathrm{Diag}(\mathbf{r})\)
6: \(Y_{f}\leftarrow \emptyset,\ U\leftarrow \mathcal{Y}\)
7: while \(U\) is not empty do
8: \(\mathbf{x}^{*}\leftarrow \arg \max_{\mathbf{x}\in U}\log \det \left(\mathbf{L}_{Y_{f}\cup \{\mathbf{x}\}}\right)\)
9: if \(\log \det \left(\mathbf{L}_{Y_{f}\cup \{\mathbf{x}^{*}\}}\right) - \log \det \left(\mathbf{L}_{Y_{f}}\right)< 0\) then
10: break
11: end if
12: \(Y_{f}\leftarrow Y_{f}\cup \{\mathbf{x}^{*}\}\)
13: \(U\leftarrow U\backslash \{\mathbf{x}^{*}\}\)
14: end while

## B IMPLEMENTATION DETAILS

![](images/12_0.jpg)
<center>Figure 6: Network architectures for synthetic data and human motion. Top: for synthetic data, we use a CNN to process the obstacle map \(\mathbf{f}\) and directly flatten the trajectories \(\mathbf{x}\) and \(\mathbf{h}\) into vectors. The reconstructed trajectory \(\tilde{\mathbf{x}}\) is decoded with an MLP. Bottom: for human motion, we use Bi-LSTMs to extract temporal features for \(\mathbf{x}\) and \(\mathbf{h}\) and decode the reconstructed trajectory \(\tilde{\mathbf{x}}\) with a forward LSTM.</center>

## B.1 NETWORK ARCHITECTURES

Synthetic data. Fig. 6 (Top) shows the network architecture for synthetic data. The number of latent dimensions is 2. By default, we use ReLU activations for all networks. The future trajectory \(\mathbf{x} \in \mathbb{R}^{3 \times 2}\) consists of 3 future positions of the vehicle. The context \(\psi\) contains a past trajectory \(\mathbf{h} \in \mathbb{R}^{2 \times 2}\) of 2 time steps and an obstacle map \(\mathbf{f} \in \{0, 1\}^{28 \times 28}\) spanning a \(4 \times 4\) area around the current position of the vehicle (the road width is 2).
For the encoder, we use a convolutional neural network (CNN) with three 32-channel convolutional layers to process \(\mathbf{f}\). The first two layers have kernel size 4 and stride 2, while the last layer has kernel size 6 and stride 1. The resulting CNN features are concatenated with the flattened \(\mathbf{x}\) and \(\mathbf{h}\) into a unified feature, which is fed into a multilayer perceptron (MLP). The MLP has one 128-dim hidden layer and two heads outputting the mean \(\mu\) and variance \(\sigma\) of the latent distribution. For the decoder, we concatenate the CNN feature from \(\mathbf{f}\) with the latent code \(\mathbf{z} \in \mathbb{R}^{2}\) and the flattened \(\mathbf{h}\) into a unified feature. The feature is passed through an MLP with one 128-dim hidden layer, which outputs the reconstructed future trajectory \(\tilde{\mathbf{x}} \in \mathbb{R}^{3 \times 2}\). For the diversity sampler function (DSF), we concatenate the CNN feature from \(\mathbf{f}\) with the flattened \(\mathbf{h}\) and pass the result through an MLP with one 128-dim hidden layer to obtain a set of latent codes \(\{\mathbf{z}_1, \ldots , \mathbf{z}_N\}\), represented by a vector of length \(2N\).

Human motion. Fig. 6 (Bottom) shows the network architecture for human motion. The number of latent dimensions is 8. The future trajectory \(\mathbf{x} \in \mathbb{R}^{30 \times 59}\) consists of future poses of 30 time steps (1s). The context \(\psi\) contains past poses \(\mathbf{h} \in \mathbb{R}^{3 \times 59}\) of 3 time steps (0.1s). Each pose consists of 59 joint angles. For the encoder, we use two 128-dim bidirectional LSTMs (Bi-LSTMs) and mean pooling to obtain the temporal features for \(\mathbf{x}\) and \(\mathbf{h}\). We then concatenate the temporal features into a unified feature and feed it into an MLP with two hidden layers (300, 200) and two heads to obtain the mean \(\mu\) and variance \(\sigma\) of the latent distribution. For the decoder, we reuse the Bi-LSTM of the encoder for the context \(\mathbf{h}\) and use a 128-dim forward LSTM to decode the future trajectory \(\tilde{\mathbf{x}}\). At each time step \(t\), the forward LSTM takes as input the previous pose \(\tilde{\mathbf{x}}^{t - 1}\) (\(\mathbf{h}^{H}\) for \(t = 0\)), the latent code \(\mathbf{z} \in \mathbb{R}^{8}\), and the temporal features from \(\mathbf{h}\), and outputs a 128-dim feature. This feature is then passed through an MLP with two hidden layers (300, 200) to generate the reconstructed pose \(\tilde{\mathbf{x}}^{t}\). For the DSF, we use a separate 128-dim Bi-LSTM to obtain the temporal feature for \(\mathbf{h}\), which is fed into an MLP with a 128-dim hidden layer to produce a set of latent codes \(\{\mathbf{z}_{1},\ldots ,\mathbf{z}_{N}\}\), represented by a vector of length \(8N\).

## B.2 TRAINING AND EVALUATION

When training the cVAE model using Eq. 7, we take \(V = 1\) sample from the posterior \(q_{\phi}(\mathbf{z}|\mathbf{x},\boldsymbol{\psi})\). The weighting factor \(\beta\) for the KL term is set to 0.1 for synthetic data and 1e-4 for human motion. We use Adam (Kingma and Ba, 2014) to jointly optimize the encoder and decoder. The learning rate is set to 1e-4, and we use a mini-batch size of 32 for synthetic data. We optimize the model for 500 epochs for synthetic data and 100 epochs for human motion. When training the DSF, the scale factor \(k\) for the similarity matrix \(\mathbf{S}\) is set to 1 for synthetic data and 1e-2 for human motion; a sketch of how this kernel enters the greedy MAP inference of Algorithm 3 is given below.
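The following minimal NumPy sketch builds the kernel \(\mathbf{L} = \mathrm{Diag}(\mathbf{r})\cdot\mathbf{S}\cdot\mathrm{Diag}(\mathbf{r})\) and runs the greedy selection loop of Algorithm 3. The Gaussian similarity \(S_{ij} = \exp(-k\,d_{ij}^{2})\) and the function names are our assumptions for illustration; the quality scores \(\mathbf{r}\) are taken as given.

```python
# Sketch of Algorithm 3: build the DPP kernel from quality r and similarity S,
# then greedily add samples while the log-det gain remains non-negative.
# The Gaussian similarity with scale factor k is an assumption for illustration.
import numpy as np

def greedy_dpp_map(X, r, k=1.0):
    """X: (N, D) flattened trajectory samples; r: (N,) quality scores."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    S = np.exp(-k * d2)
    L = np.diag(r) @ S @ np.diag(r)              # L = Diag(r) S Diag(r)

    def logdet(idx):
        sign, val = np.linalg.slogdet(L[np.ix_(idx, idx)])
        return val if sign > 0 else -np.inf

    Yf, U = [], list(range(len(X)))
    while U:
        gains = [logdet(Yf + [i]) for i in U]    # line 8: argmax over remaining set
        best = int(np.argmax(gains))
        if gains[best] - (logdet(Yf) if Yf else 0.0) < 0:
            break                                # line 9: stop when marginal gain < 0
        Yf.append(U.pop(best))
    return Yf                                    # indices of the selected trajectories
```

The selected indices pick out the forecasted trajectory set \(Y_{f}\) from the decoded ground set, which is why the returned set can contain fewer than \(N\) trajectories when additional samples no longer increase the determinant.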
For both synthetic data and human motion, we use Adam with learning rate 1e-4 to optimize the DSF for 20 epochs. Recall that in the metrics section (Sec. 5.1) we need the grouping threshold \(\epsilon\) to build the ground-truth future trajectory set \(\mathcal{X}^{(i)} = \{\mathbf{x}^{(j)} \mid \|\psi^{(j)} - \psi^{(i)}\| \leq \epsilon,\ j = 1, \ldots, M\}\). For synthetic data, \(\epsilon\) is set to 0.1 and we only use the past trajectories \(\mathbf{h}\) to compute the distance between contexts. For human motion, \(\epsilon\) is set to 0.5.

## B.3 IMPLEMENTATION DETAILS FOR EXPERIMENTS ON HUMAN3.6M

Following previous work (Martinez et al., 2017; Pavlakos et al., 2017; Pavllo et al., 2019), we convert the motion sequences in the dataset into sequences of 3D joint positions and adopt a 17-joint skeleton. We train on five subjects (S1, S5, S6, S7, S8) and test on two subjects (S9 and S11). We use the same network architectures (Fig. 6 (Bottom)) as in the human motion forecasting experiment above. The number of latent dimensions is 128. When training the cVAE model, the weighting factor \(\beta\) is set to 0.1. We sample 5000 training examples every epoch and optimize the cVAE for 500 epochs using Adam with a learning rate of 1e-4 and a batch size of 64. The scale factor \(k\) for the similarity matrix \(\mathbf{S}\) of the DPP kernel is set to 5. When learning the DSF, we use a batch size of 64, sample 1000 training examples every epoch, and optimize the DSF for 20 epochs using Adam with a learning rate of 1e-3. When computing the metrics, we set the grouping threshold \(\epsilon\) to 0.1.

## C ADDITIONAL VISUALIZATION

We also show additional qualitative results for human motion forecasting in Fig. 7. The quality and diversity of the forecasted motions are best seen in our video.

![](images/14_0.jpg)
<center>Figure 7: Additional visualization for human motion forecasting. The left shows the starting pose, and on the right we show for each method the final pose of 10 forecasted motion samples.</center>
accept
Accept (Poster)
6.666667